1. DIGITAL CAMERA SENSORS A digital camera uses a sensor array of millions of tiny pixels to produce the final image. When you press your camera's shutter button and the exposure begins, each of these pixels has a "photosite" which is uncovered to collect and store photons in a cavity. Once the exposure finishes, the camera closes each of these photosites and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various intensity levels, whose precision is determined by bit depth (0 - 255 for an 8-bit image).
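As a rough illustration of this quantization step, here is a sketch in Python; the 10,000-photon "full well" capacity is an invented number for illustration, not a property of any real sensor:

```python
# Hypothetical sketch: quantizing a photon count into one of 2**bit_depth
# discrete intensity levels. The full_well capacity here is made up.

def quantize(photon_count, full_well=10000, bit_depth=8):
    """Map a photon count to one of 2**bit_depth intensity levels."""
    levels = 2 ** bit_depth - 1                       # 255 for an 8-bit image
    fraction = min(photon_count, full_well) / full_well
    return round(fraction * levels)

print(quantize(0))        # 0   (empty cavity -> black)
print(quantize(5000))     # 128 (half full -> mid gray)
print(quantize(12000))    # 255 (overfilled cavity clips to white)
```

Note how any count above the cavity's capacity clips to the maximum level, which is the digital analogue of a blown highlight.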

Each cavity is unable to distinguish how much of each color has fallen in, so the above illustration would only be able to create grayscale images. To capture color images, each cavity has to have a filter placed over it which only allows penetration of a particular color of light. Virtually all current digital cameras can only capture one of the three primary colors in each cavity, and so they discard roughly 2/3 of the incoming light. As a result, the camera has to approximate the other two primary colors in order to have information about all three colors at every pixel. The most common type of color filter array is called a "Bayer array," shown below.
Color Filter Array

A Bayer array consists of alternating rows of red-green and green-blue filters. Notice how the Bayer array contains twice as many green as red or blue sensors. Each primary color does not receive an equal fraction of the total area because the human eye is more sensitive to green light than both red and blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be accomplished if each color were treated equally. This also explains why noise in the green channel is much less than for the other two primary colors (see "Understanding Image Noise" for an example).
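The alternating filter layout described above can be sketched as a function of pixel coordinates; the RGGB ordering below is one common convention, assumed here purely for illustration:

```python
# A minimal sketch of a Bayer color filter array: alternating rows of
# red-green and green-blue filters (RGGB layout assumed).

def bayer_filter(row, col):
    """Return which primary color the filter at (row, col) passes."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'   # red-green row
    else:
        return 'G' if col % 2 == 0 else 'B'   # green-blue row

# Print a 4x4 patch; green appears twice as often as red or blue.
for r in range(4):
    print(' '.join(bayer_filter(r, c) for c in range(4)))
```

Counting the letters in the printed patch confirms the 2:1:1 green-to-red-to-blue ratio mentioned above.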

Original Scene (shown at 200%)

What Your Camera Sees (through a Bayer array)

Note: Not all digital cameras use a Bayer array, however this is by far the most common setup. The Foveon sensor used in Sigma's SD9 and SD10 captures all three colors at each pixel location. Sony cameras capture four colors in a similar array: red, green, blue and emerald green.

Bayer "demosaicing" is the process of translating this Bayer array of primary colors into a final image which contains full color information at each pixel. How is this possible if the camera is unable to directly measure full color? One way of understanding this is to instead think of each 2x2 array of red, green and blue as a single full color cavity.


This would work fine; however, most cameras take additional steps to extract even more image information from this color array. If the camera treated all of the colors in each 2x2 array as having landed in the same place, then it would only be able to achieve half the resolution in both the horizontal and vertical directions. On the other hand, if a camera computed the color using several overlapping 2x2 arrays, then it could achieve a higher resolution than would be possible with a single set of 2x2 arrays. The following combination of overlapping 2x2 arrays could be used to extract more image information.
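The overlapping 2x2 idea can be sketched as follows. This is a deliberately simplified toy, assuming an RGGB layout; real demosaicing algorithms use more sophisticated interpolation:

```python
# Simplified demosaicing sketch: each overlapping 2x2 window of an RGGB
# mosaic yields one full-color pixel (the two greens are averaged).

def demosaic_2x2(mosaic):
    """mosaic: 2D list of raw values laid out in RGGB order."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for r in range(h - 1):            # overlapping windows give (h-1) x (w-1)
        row = []                      # output; one edge row/column is dropped
        for c in range(w - 1):
            block = [mosaic[r][c], mosaic[r][c + 1],
                     mosaic[r + 1][c], mosaic[r + 1][c + 1]]
            # Which value is R/G/B depends on the window's position (parity).
            if r % 2 == 0 and c % 2 == 0:      # R G / G B
                red, green, blue = block[0], (block[1] + block[2]) / 2, block[3]
            elif r % 2 == 0:                   # G R / B G
                red, green, blue = block[1], (block[0] + block[3]) / 2, block[2]
            elif c % 2 == 0:                   # G B / R G
                red, green, blue = block[2], (block[0] + block[3]) / 2, block[1]
            else:                              # B G / G R
                red, green, blue = block[3], (block[1] + block[2]) / 2, block[0]
            row.append((red, green, blue))
        out.append(row)
    return out

print(demosaic_2x2([[10, 20], [30, 40]]))   # [[(10, 25.0, 40)]]
```

Because the windows overlap, a sensor with N x M cavities yields nearly N x M full-color pixels rather than (N/2) x (M/2).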

Note how we did not calculate image information at the very edges of the array, since we assumed the image continued on in each direction. If these were actually the edges of the cavity array, then calculations here would be less accurate, since there are no longer pixels on all sides. This is no problem, since information at the very edges of an image can easily be cropped out for cameras with millions of pixels. Other demosaicing algorithms exist which can extract slightly more resolution, produce images which are less noisy, or adapt to best approximate the image at each location.

Images with small-scale detail near the resolution limit of the digital sensor can sometimes trick the demosaicing algorithm, producing an unrealistic looking result. The most common artifact is moiré (pronounced "more-ay"), which may appear as repeating patterns, color artifacts or pixels arranged in an unrealistic maze-like pattern:

Two separate photos are shown above—each at a different magnification. Note the appearance of moiré in all four bottom squares, in addition to the third square of the first photo (subtle). Both maze-like and color artifacts can be seen in the third square of the downsized version. These artifacts depend on both the type of texture and software used to develop the digital camera's RAW file.

You might wonder why the first diagram in this tutorial did not place each cavity directly next to each other. Real-world camera sensors do not actually have photosites which cover the entire surface of the sensor. In fact, they often cover just half the total area in order to accommodate other electronics. Each cavity is shown with little peaks between them to direct the photons to one cavity or the other. Digital cameras contain "microlenses" above each photosite to enhance their light-gathering ability. These lenses are analogous to funnels which direct photons into the photosite that would have otherwise gone unused.

Well-designed microlenses can improve the photon signal at each photosite, and subsequently create images which have less noise for the same exposure time. Camera manufacturers have been able to use improvements in microlens design to reduce or maintain noise in the latest high-resolution cameras, despite having smaller photosites due to squeezing more megapixels into the same sensor area.

2. CAMERA EXPOSURE A photograph's exposure determines how light or dark an image will appear when it's been captured by your camera. Believe it or not, this is determined by just three camera settings: aperture, ISO and shutter speed (the "exposure triangle"). Mastering their use is an essential part of developing an intuition for photography.


Achieving the correct exposure is a lot like collecting rain in a bucket. While the rate of rainfall is uncontrollable, three factors remain under your control: the bucket's width, the duration you leave it in the rain, and the quantity of rain you want to collect. You just need to ensure you don't collect too little ("underexposed"), but that you also don't collect too much ("overexposed"). The key is that there are many different combinations of width, time and quantity that will achieve this. For example, for the same quantity of water, you can get away with less time in the rain if you pick a bucket that's really wide. Alternatively, for the same duration left in the rain, a really narrow bucket can be used as long as you plan on getting by with less water. In photography, the exposure settings of aperture, shutter speed and ISO speed are analogous to the width, time and quantity discussed above. Furthermore, just as the rate of rainfall was beyond your control above, so too is natural light for a photographer.


Each setting controls exposure differently:

Aperture: controls the area over which light can enter your camera
Shutter speed: controls the duration of the exposure
ISO speed: controls the sensitivity of your camera's sensor to a given amount of light

One can therefore use many combinations of the above three settings to achieve the same exposure. The key, however, is knowing which trade-offs to make, since each setting also influences other image properties. For example, aperture affects depth of field, shutter speed affects motion blur and ISO speed affects image noise. The next few sections will describe how each setting is specified, what it looks like, and how a given camera exposure mode affects their combination.
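The interchangeability of these settings can be checked with a little arithmetic: total light reaching the sensor scales with aperture area (proportional to one over the f-number squared), exposure time, and ISO. A minimal sketch, with an arbitrary zero point for the stop scale:

```python
import math

# Sketch of the "exposure triangle" trade-off: two setting combinations are
# equivalent when area * time * sensitivity matches. The zero point of the
# stop scale below is arbitrary; only differences in stops are meaningful.

def exposure_stops(f_number, shutter_s, iso):
    """Relative exposure in stops (arbitrary zero point)."""
    return math.log2((1 / f_number ** 2) * shutter_s * iso)

a = exposure_stops(f_number=2.8, shutter_s=1/100, iso=100)
b = exposure_stops(f_number=5.6, shutter_s=1/25, iso=100)   # 2 stops less area,
print(abs(a - b) < 1e-9)                                     # 2 stops more time
```

Closing the aperture from f/2.8 to f/5.6 loses two stops of light, and slowing the shutter from 1/100 to 1/25 second gains exactly two stops back, so both combinations produce the same exposure.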

A camera's shutter determines when the camera sensor will be open or closed to incoming light from the camera lens. The shutter speed specifically refers to how long this light is permitted to enter the camera. "Shutter speed" and "exposure time" refer to the same concept, where a faster shutter speed means a shorter exposure time. By the Numbers. Shutter speed's influence on exposure is perhaps the simplest of the three camera settings: it correlates exactly 1:1 with the amount of light entering the camera. For example, when the exposure time doubles, the amount of light entering the camera doubles. It's also the setting that has the widest range of possibilities:
Shutter Speed: Typical Examples
1 - 30+ seconds: Specialty night and low-light photos on a tripod
2 - 1/2 second: To add a silky look to flowing water; landscape photos on a tripod for enhanced depth of field
1/2 to 1/30 second: To add motion blur to the background of a moving subject; carefully taken hand-held photos with stabilization
1/50 - 1/100 second: Typical hand-held photos without substantial zoom
1/250 - 1/500 second: To freeze everyday sports/action subject movement; hand-held photos with substantial zoom (telephoto lens)
1/1000 - 1/4000 second: To freeze extremely fast, up-close subject motion

How it Appears. Shutter speed is a powerful tool for freezing or exaggerating the appearance of motion:

Slow Shutter Speed

Fast Shutter Speed

With waterfalls and other creative shots, motion blur is sometimes desirable, but for most other shots this is avoided. Therefore all one usually cares about with shutter speed is whether it results in a sharp photo -- either by freezing movement or because the shot can be taken hand-held without camera shake. How do you know which shutter speed will provide a sharp hand-held shot? With digital cameras, the best way to find out is to just experiment and look at the results on your camera's rear LCD screen (at full zoom). If a properly focused photo comes out blurred, then you'll usually need to either increase the shutter speed, keep your hands steadier or use a camera tripod.

A camera's aperture setting controls the area over which light can pass through your camera lens. It is specified in terms of an f-stop value, which can at times be counterintuitive, because the area of the opening increases as the f-stop decreases. In photographer slang, when someone says they are "stopping down" or "opening up" their lens, they are referring to increasing and decreasing the f-stop value, respectively.

By the Numbers. Every time the f-stop value halves, the light-collecting area quadruples. There's a formula for this, but most photographers just memorize the f-stop numbers that correspond to each doubling/halving of light:

Aperture Setting: f/22, f/16, f/11, f/8.0, f/5.6, f/4.0, f/2.8, f/2.0, f/1.4
Relative Light: 1X, 2X, 4X, 8X, 16X, 32X, 64X, 128X, 256X
Example Shutter Speed: 16 seconds, 8 seconds, 4 seconds, 2 seconds, 1 second, 1/2 second, 1/4 second, 1/8 second, 1/15 second

The above aperture and shutter speed combinations all result in the same exposure.

Note: Shutter speed values are not always possible in increments of exactly double or half another shutter speed, but they're always close enough that the difference is negligible.

The above f-stop numbers are all standard options in any camera, although most also allow finer adjustments, such as f/3.2 and f/6.3. The range of values may also vary from camera to camera (or lens to lens). For example, a compact camera might have an available range of f/2.8 to f/8.0, whereas a digital SLR camera might have a range of f/1.4 to f/32 with a portrait lens. A narrow aperture range usually isn't a big problem, but a greater range does provide for more creative flexibility.

Technical Note: With many lenses, their light-gathering ability is also affected by their transmission efficiency, although this is almost always much less of a factor than aperture. It's also beyond the photographer's control. Differences in transmission efficiency are typically more pronounced with extreme zoom ranges. For example, Canon's 24-105 mm f/4L IS lens gathers perhaps ~10-40% less light at f/4 than Canon's similar 24-70 mm f/2.8L lens at f/4 (depending on the focal length).

How it Appears. A camera's aperture setting is what determines a photo's depth of field (the range of distance over which objects appear in sharp focus). Lower f-stop values correlate with a shallower depth of field:

Wide Aperture: f/2.0, low f-stop number, shallow depth of field
Narrow Aperture: f/16, large f-stop number, large depth of field
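The doubling pattern in the f-stop sequence follows from area scaling as one over the f-number squared, with each full stop multiplying the f-number by the square root of 2 (the marked values like f/11 and f/22 are rounded). A small sketch reproducing the relative light progression, expressed against f/22:

```python
import math

# Marked f-stops are rounded versions of exact powers of sqrt(2); each full
# stop doubles the light-gathering area (area scales as 1/N^2).

f_numbers = [22, 16, 11, 8.0, 5.6, 4.0, 2.8, 2.0, 1.4]
relative_light = []
for n in f_numbers:
    stops_from_f22 = round(2 * math.log2(22 / n))   # whole stops brighter than f/22
    relative_light.append(2 ** stops_from_f22)
    print(f"f/{n:<4} {relative_light[-1]:4d}X")
```

The printed column runs 1X, 2X, 4X, ... up to 256X at f/1.4, matching the table of standard stops.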

ISO SPEED
The ISO speed determines how sensitive the camera is to incoming light. Similar to shutter speed, it also correlates 1:1 with how much the exposure increases or decreases. However, unlike aperture and shutter speed, a lower ISO speed is almost always desirable, since higher ISO speeds dramatically increase image noise. As a result, ISO speed is usually only increased from its minimum value if the desired aperture and shutter speed aren't otherwise obtainable.

Low ISO Speed (low image noise) / High ISO Speed (high image noise)
note: image noise is also known as "film grain" in traditional film photography

Common ISO speeds include 100, 200, 400 and 800, although many cameras also permit lower or higher values. With compact cameras, an ISO speed in the range of 50-200 generally produces acceptably low image noise, whereas with digital SLR cameras, a range of 50-800 (or higher) is often acceptable.

CAMERA EXPOSURE MODES
Most digital cameras have one of the following standardized exposure modes: Auto, Program (P), Aperture Priority (Av), Shutter Priority (Tv), Manual (M) and Bulb (B) mode. Av, Tv, and M are often called "creative modes" or "auto exposure (AE) modes." Each of these modes influences how aperture, ISO and shutter speed are chosen for a given exposure. Some modes attempt to pick all three values for you, whereas others let you specify one setting and the camera picks the other two (if possible). The following chart describes how each mode pertains to exposure:

Exposure Mode / How It Works
Auto: Camera automatically selects all exposure settings.
Program (P): Camera automatically selects aperture & shutter speed; you can choose a corresponding ISO speed & exposure compensation. With some cameras, P can also act as a hybrid of the Av & Tv modes.
Aperture Priority (Av or A): You specify the aperture & ISO; the camera's metering determines the corresponding shutter speed.
Shutter Priority (Tv or S): You specify the shutter speed & ISO; the camera's metering determines the corresponding aperture.
Manual (M): You specify the aperture, ISO and shutter speed, regardless of whether these values lead to a correct exposure.
Bulb (B): You specify the aperture and ISO; the shutter speed is determined by a remote release switch, or by the duration until you press the shutter button a second time. Useful for exposures longer than 30 seconds.
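As a toy model of how a priority mode fills in the remaining setting, here is a sketch of aperture priority; the target exposure constant is invented for illustration and is not a real metering value:

```python
# Toy model of aperture-priority (Av) mode: you fix aperture and ISO, and
# the camera solves for the shutter time that hits the metered target
# exposure. The default target below is an arbitrary made-up constant.

def av_mode(f_number, iso, target=1/7.84):
    """Return the shutter time (seconds) giving the target exposure."""
    # exposure = (1 / f_number**2) * shutter * iso  ->  solve for shutter
    return target * f_number ** 2 / iso

print(av_mode(f_number=2.8, iso=100))   # 0.01 -> 1/100 second
print(av_mode(f_number=5.6, iso=100))   # 0.04 -> 1/25 second (2 stops slower)
```

Stopping down by two stops makes the camera pick a shutter time four times longer, which is exactly the trade-off described in the bucket analogy earlier.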

In addition, the camera may also have several pre-set modes; the most common include landscape, portrait, sports and night mode. The symbols used for each mode vary slightly from camera to camera, but will likely appear similar to those below:

Exposure Mode / How It Works
Portrait: Camera tries to pick the lowest f-stop value possible for a given exposure. This ensures the shallowest possible depth of field.
Landscape: Camera tries to pick a high f-stop to ensure a large depth of field. Compact cameras also often set their focus distance to distant objects or infinity.
Sports/Action: Camera tries to achieve as fast a shutter speed as possible for a given exposure, ideally 1/250 seconds or faster. In addition to using a low f-stop, the fast shutter speed is usually achieved by increasing the ISO speed more than would otherwise be acceptable in portrait mode.
Night/Low-light: Camera permits shutter speeds which are longer than ordinarily allowed for hand-held shots, and increases the ISO speed to near its maximum available value. However, for some cameras this setting means that a flash is used for the foreground, and a long shutter speed and high ISO are used to expose the background. Check your camera's instruction manual for any unique characteristics.

However, keep in mind that most of the above settings rely on the camera's metering system in order to know what's a proper exposure. For tricky subject matter, metering can often be fooled, so it's a good idea to also be aware of when it might go awry, and what you can do to compensate for such exposure errors (see section on exposure compensation within the camera metering tutorial).

Finally, some of the above modes may also control camera settings which are unrelated to exposure, although this varies from camera to camera. Such additional settings might include the autofocus points, metering mode and autofocus modes, amongst others.

3. CAMERA METERING & EXPOSURE Knowing how your digital camera meters light is critical for achieving consistent and accurate exposures. Metering is the brains behind how your camera determines the shutter speed and aperture, based on lighting conditions and ISO speed. Metering options often include partial, evaluative zone or matrix, center-weighted and spot metering. Each of these has subject lighting conditions for which it excels, and for which it fails. Understanding these can improve one's photographic intuition for how a camera measures light.

Recommended background reading: camera exposure: aperture, ISO & shutter speed

BACKGROUND: INCIDENT vs. REFLECTED LIGHT
All in-camera light meters have a fundamental flaw: they can only measure reflected light. This means the best they can do is guess how much light is actually hitting the subject. If all objects reflected the same percentage of incident light, this would work just fine; however, real-world subjects vary greatly in their reflectance. For this reason, in-camera metering is standardized based on the luminance of light which would be reflected from an object appearing as middle gray. If the camera is aimed directly at any object lighter or darker than middle gray, the camera's light meter will incorrectly calculate under or over-exposure. A hand-held light meter would calculate the same exposure for any object under the same incident lighting.

What constitutes middle gray? In the printing industry it is standardized as the ink density which reflects 18% of incident light, however cameras seldom adhere to this. This topic deserves a discussion of its own, but for the purposes of this tutorial simply know that each camera has a default somewhere in the middle gray tones (~10-18% reflectance). Metering off of a subject which reflects more or less light than this may cause your camera's metering algorithm to go awry, either through under or over-exposure.

Approximations* of 18% Luminance: 18% Gray, 18% Red Tone, 18% Green Tone, 18% Blue Tone
*Most accurate when using a PC display which closely mimics the sRGB color space, and when you have calibrated your monitor accordingly. Monitors transmit as opposed to reflect light, so this is also a fundamental limitation.

An in-camera light meter can work surprisingly well if object reflectance is sufficiently diverse throughout the photo. In other words, if there is an even spread varying from dark to light objects, then the average reflectance will remain roughly middle gray. Unfortunately, some scenes may have a significant imbalance in subject reflectivity, such as a photo of a white dove in the snow, or of a black dog sitting on a pile of charcoal. For such cases the camera may try to create an image with a histogram whose primary peak is in the midtones, even though it should have instead produced this peak in the highlights or shadows (see high and low-key histograms).

METERING OPTIONS
In order to accurately expose a greater range of subject lighting and reflectance combinations, most cameras feature several metering options. Each option works by assigning a weighting to different light regions; those with a higher weighting are considered more reliable, and thus contribute more to the final exposure calculation.

Center-Weighted / Partial Metering / Spot Metering
The whitest regions are those which contribute most towards the exposure calculation, whereas black areas are ignored. Partial and spot areas are roughly 13.5% and 3.8% of the picture area, respectively, which correspond to settings on the Canon EOS 1D Mark II. Each of the above metering diagrams may also be located off-center, depending on the metering options and autofocus point used.

More sophisticated algorithms may go beyond just a regional map and include: evaluative, zone and matrix metering. Each generally works by dividing the image up into numerous sub-sections, where each section is then considered in terms of its relative location, light intensity or color. These are usually the default when your camera is set to auto exposure.

WHEN TO USE PARTIAL & SPOT METERING
Partial and spot metering give the photographer far more control over the exposure than any of the other settings, but this also means that they are more difficult to use, at least initially. They are useful when there is a relatively small object within your scene which you either need to be perfectly exposed, or know that it will provide the closest match to middle gray.

One of the most common applications of partial metering is a portrait of someone who is backlit. Metering off of their face can help avoid making the subject look like an under-exposed silhouette against the bright background. On the other hand, care should be taken, as the shade of a person's skin may lead to inaccurate exposure if it is far from neutral gray reflectance, though probably not as inaccurate as what would have been caused by the backlighting.

Spot metering is used less often because its metering area is very small and thus quite specific. This can be an advantage when you are unsure of your subject's reflectance and have a specially designed gray card (or other small object) to meter off of. Spot and partial metering are also quite useful for performing creative exposures, and when the ambient lighting is unusual. In the examples to the left and right below, one could meter off of the diffusely lit foreground tiles, or off of the directly lit stone below the opening to the sky. The location of the autofocus point and orientation of the camera (portrait vs. landscape) may also contribute to the calculation.
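The weighting idea behind these metering options can be sketched as a weighted average of scene luminance; the 3x3 scene and weight maps below are invented for illustration:

```python
# Sketch of how metering options weight the scene: a weighted average of
# luminance, with weights concentrated where the mode "trusts" the light.
# The 3x3 scene and weight maps are made-up illustrative values.

def metered_luminance(scene, weights):
    total_w = sum(sum(row) for row in weights)
    s = sum(scene[r][c] * weights[r][c]
            for r in range(len(scene)) for c in range(len(scene[0])))
    return s / total_w

scene = [[200, 200, 200],    # bright background (e.g. a backlit portrait)
         [200,  50, 200],    # dark subject in the center
         [200, 200, 200]]

average = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
spot    = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # only the center counts

print(metered_luminance(scene, average))  # ~183: background dominates
print(metered_luminance(scene, spot))     # 50: exposes for the subject
```

With the uniform weighting the bright background dominates and the subject would come out as a silhouette, while the spot weighting exposes for the subject itself, mirroring the backlit portrait example above.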

NOTES ON CENTER-WEIGHTED METERING
At one time center-weighted metering was a very common default setting in cameras because it coped well with a bright sky above a darker landscape. Nowadays, it has more or less been surpassed in flexibility by evaluative and matrix metering, and in specificity by partial and spot metering. On the other hand, the results produced by center-weighted metering are very predictable, whereas matrix and evaluative metering modes have complicated algorithms which are harder to predict. For this reason some prefer to use it as the default metering mode.

EXPOSURE COMPENSATION
Any of the above metering modes can use a feature called exposure compensation (EC). The metering calculation still works as normal, except the final settings are then compensated by the EC value. A setting of zero means no compensation will be applied (default). Most cameras allow up to 2 stops of exposure compensation; each stop of exposure compensation provides either a doubling or halving of light compared to what the metering mode would have done otherwise. This allows for manual corrections if you observe a metering mode to be consistently under or over-exposing.

Exposure compensation is ideal for correcting in-camera metering errors caused by the subject's reflectivity. No matter what metering mode is used, an in-camera light meter will always mistakenly under-expose a subject such as a white dove in a snowstorm (see incident vs. reflected light). Photographs in the snow will always require around +1 exposure compensation, whereas a low-key image may require negative compensation.

When shooting in RAW mode under tricky lighting, sometimes it is useful to set a slight negative exposure compensation (0.3-0.5). This decreases the chance of clipped highlights, yet still allows one to increase the exposure afterwards. Alternatively, a positive exposure compensation can be used to improve the signal to noise ratio in situations where the highlights are far from clipping.

4. UNDERSTANDING CAMERA LENSES Understanding camera lenses can help add more creative control to digital photography. Choosing the right lens for the task can become a complex trade-off between cost, size, weight, lens speed and image quality. This tutorial aims to improve understanding by providing an introductory overview of concepts relating to image quality, focal length, perspective, prime vs. zoom lenses and aperture or f-number.

LENS ELEMENTS & IMAGE QUALITY
All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Each of these elements aims to direct the path of light rays such that they recreate the image as accurately as possible on the digital sensor. The goal is to minimize aberrations, while still utilizing the fewest and least expensive elements.

Optical aberrations occur when points of the image do not translate back onto single points after passing through the lens, causing image blurring, reduced contrast or misalignment of colors (chromatic aberration). Lenses may also suffer from uneven, radially decreasing image brightness (vignetting) or distortion. Try moving your mouse over each of the options below to see how these can impact image quality for extreme cases:

Loss of Contrast / Chromatic Aberration / Vignetting / Blurring / Distortion / Original Image

Any of the above problems is present to some degree with any lens. In the rest of this tutorial, when a lens is referred to as having lower optical quality than another lens, this is manifested as some combination of the above artifacts. Some of these lens artifacts may not be as objectionable as others, depending on the subject matter.

Note: For a much more quantitative and technical discussion of the above topic, please see the tutorial on camera lens quality: MTF, resolution & contrast.

INFLUENCE OF LENS FOCAL LENGTH
The focal length of a lens determines its angle of view, and thus also how much the subject will be magnified for a given photographic position. Wide angle lenses have small focal lengths, while telephoto lenses have larger corresponding focal lengths.

Note: The location where light rays cross is not necessarily equal to the focal length, as depicted, but is instead roughly proportional to this distance. Therefore longer focal lengths still result in narrower angles of view, as shown above.

Required Focal Length Calculator
Subject Distance:
Subject Size:
Camera Type:
Approximate Required Focal Length:
Note: Calculator assumes that the camera is oriented such that the maximum subject dimension given by "subject size" is in the camera's longest dimension. Calculator is not intended for use in extreme macro photography, but does take into account small changes in the angle of view due to focusing distance.
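A calculator of this kind is essentially a similar-triangles estimate: subject size over subject distance roughly equals sensor size over focal length. A sketch, assuming the 36 mm long edge of a 35 mm frame (and ignoring the focusing-distance refinement mentioned above):

```python
# Rough "required focal length" estimate by similar triangles:
#   subject_size / distance  ~=  sensor_size / focal_length
# The 36 mm value is the long edge of a 35 mm frame; all inputs are examples.

def required_focal_length_mm(subject_distance_m, subject_size_m,
                             sensor_long_edge_mm=36.0):
    return sensor_long_edge_mm * subject_distance_m / subject_size_m

# Fill the frame with a 2 m tall subject from 10 m away (35 mm camera):
print(round(required_focal_length_mm(10, 2)))   # 180 (mm)
```

For a camera with a smaller sensor, the sensor dimension (and hence the required focal length) shrinks proportionally, which is the same idea as the crop-factor conversion discussed later.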

Many will say that focal length also determines the perspective of an image, but strictly speaking, perspective only changes with one's location relative to their subject. If one tries to achieve the same subjects filling the frame with both a wide angle and telephoto lens, then perspective does indeed change, because one is forced to move closer or further from their subject. For these scenarios only, the wide angle lens exaggerates or stretches perspective, whereas the telephoto lens compresses or flattens perspective.

Move your mouse over the above image to view an exaggerated perspective due to a wider angle lens. Note how the subjects within the frame remain nearly identical, therefore requiring a closer position for the wider angle lens. The relative sizes of objects change such that the distant doorway becomes smaller relative to the nearby lamps.

Perspective control can be a powerful compositional tool in photography, and often determines one's choice in focal length (when one can photograph from any position). Many use telephoto lenses in distant landscapes to compress perspective, for example.

The following table provides an overview of what focal lengths are required to be considered a wide angle or telephoto lens, in addition to their typical uses:

Lens Focal Length* / Terminology / Typical Photography
Less than 21 mm: Extreme Wide Angle; Architecture
21-35 mm: Wide Angle; Landscape
35-70 mm: Normal; Street & Documentary
70-135 mm: Medium Telephoto; Portraiture
135-300+ mm: Telephoto; Sports, Bird & Wildlife

*Note: Lens focal lengths are for 35 mm equivalent cameras. If you have a compact or digital SLR camera, then you likely have a different sensor size. To adjust the above numbers for your camera, please use the focal length converter in the tutorial on digital camera sensor sizes. Please note that focal lengths listed are just rough ranges, and actual uses may vary considerably.

Other factors may also be influenced by lens focal length. Telephoto lenses are more susceptible to camera shake, since small hand movements become magnified within the image, similar to the shakiness experienced while trying to look through binoculars with a large zoom. Wide angle lenses are generally more resistant to flare, partially because the designers assume that the sun is more likely to be within the frame for a wider angle of view. A final consideration is that medium and telephoto lenses generally yield better optical quality for similar price ranges.

FOCAL LENGTH & HANDHELD PHOTOS
The focal length of a lens may also have a significant impact on how easy it is to achieve a sharp handheld photograph. Longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands. This is primarily because slight rotational vibrations are magnified greatly with distance. Think of this as if one were trying to hold a laser pointer steady: when shining this pointer at a nearby object, its bright spot ordinarily jumps around less than for objects further away, whereas if only up and down or side to side vibrations were present, the laser's bright spot would not change with distance.
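This growing sensitivity to hand shake motivates the common "one over focal length" rule of thumb for handheld shutter speeds. A sketch, where the 1.6 crop factor is just an example value for adapting the rule to a smaller sensor:

```python
# The "one over focal length" handheld rule: exposure time should be at
# most 1 / (35 mm equivalent focal length) seconds. The 1.6 crop factor
# below is only an example value; it varies by sensor size.

def max_handheld_shutter_s(focal_length_mm, crop_factor=1.0):
    """Slowest shutter time likely to give a sharp handheld shot."""
    return 1.0 / (focal_length_mm * crop_factor)

print(max_handheld_shutter_s(200))                    # 0.005 -> 1/200 s on 35 mm
print(max_handheld_shutter_s(200, crop_factor=1.6))   # 1/320 s on a cropped sensor
```

This is only rough guidance; steadier hands, image stabilization, or a tripod can all permit longer times.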

A common rule of thumb for estimating how fast the exposure needs to be for a given focal length is the one over focal length rule. This states that for a 35 mm camera, the exposure time needs to be at least as fast as one over the focal length in seconds. In other words, when using a 200 mm focal length on a 35 mm camera, the exposure time needs to be at least 1/200 seconds, otherwise blurring may be hard to avoid. For users of digital cameras with cropped sensors, one needs to convert into a 35 mm equivalent focal length. Keep in mind that this rule is just for rough guidance; some may be able to hand hold a shot for much longer or shorter times than this rule estimates.

ZOOM LENSES vs. PRIME LENSES

A zoom lens is one where the photographer can vary the focal length within a pre-defined range, whereas this cannot be changed with a "prime" or fixed focal length lens. The primary advantage of a zoom lens is that it is easier to achieve a variety of compositions or perspectives (since lens changes are not necessary). This advantage is often critical for dynamic subject matter, such as in photojournalism and children's photography.

Keep in mind that using a zoom lens does not necessarily mean that one no longer has to change their position; zooms just increase flexibility. In the example below, the original position is shown along with two alternatives using a zoom lens. If a prime lens were used, then a change of composition would not have been possible without cropping the image (if a tighter composition were desirable). Similar to the example in the previous section, the change of perspective was achieved by zooming out and getting closer to the subject. Alternatively, one could have zoomed in and gotten further from the subject to achieve the opposite perspective effect.

Two Options Available with a Zoom Lens: Change of Composition | Change of Perspective

Why would one intentionally restrict their options by using a prime lens? Prime lenses existed long before zoom lenses were available, and they still offer many advantages over their more modern counterparts. When zoom lenses first arrived on the market, one often had to be willing to sacrifice a significant amount of optical quality. However, more modern high-end zoom lenses generally do not produce noticeably lower image quality, unless scrutinized by the trained eye (or in a very large print). Note: the above comparison is qualitative.

The primary advantages of prime lenses are in cost, weight and speed. An inexpensive prime lens can generally provide as good (or better) image quality as a high-end zoom lens. Additionally, if only a small fraction of the focal length range is necessary for a zoom lens, then a prime lens with a similar focal length will be significantly smaller and lighter. Finally, the best prime lenses almost always offer better light-gathering ability (larger maximum aperture) than the fastest zoom lenses, which is often critical for low-light sports/theater photography, and when a shallow depth of field is necessary.

For compact digital cameras, lenses listed with a 3X, 4X, etc. zoom designation refer to the ratio between the longest and shortest focal lengths. Therefore, a larger zoom designation does not necessarily mean that the image can be magnified any more (since that zoom may just have a wider angle of view when fully zoomed out). Additionally, digital zoom is not the same as optical zoom, as the former only enlarges the image through interpolation. Read the fine print to ensure you are not misled.

INFLUENCE OF LENS APERTURE OR F-NUMBER

The aperture range of a lens refers to the amount that the lens can open up or close down to let in more or less light, respectively. Apertures are listed in terms of f-numbers, which quantitatively describe relative light-gathering area (depicted below). Note: the aperture opening (iris) is rarely a perfect circle, due to the presence of 5-8 blade-like lens diaphragms.

Note that larger aperture openings are defined to have lower f-numbers (often very confusing). These two terms are often mistakenly interchanged; the rest of this tutorial refers to lenses in terms of their aperture size. Lenses with larger apertures are also described as being "faster," because for a given ISO speed, the shutter speed can be made faster for the same exposure. Additionally, a smaller aperture means that objects can be in focus over a wider range of distance, a concept also termed the depth of field.
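The one over focal length rule discussed above can be sketched in a few lines. This is a rough guide only, and the function name is purely illustrative:

```python
def slowest_handheld_shutter(focal_length_mm, crop_factor=1.0):
    """Rule-of-thumb slowest safe handheld shutter speed, in seconds.

    The rule applies to 35 mm equivalent focal lengths, so the actual
    focal length is first multiplied by the sensor's crop factor.
    """
    return 1.0 / (focal_length_mm * crop_factor)

# 200 mm on a 35 mm camera needs an exposure of at least 1/200 s:
print(slowest_handheld_shutter(200))       # → 0.005
# The same lens on a 1.6X cropped sensor needs about 1/320 s:
print(slowest_handheld_shutter(200, 1.6))  # → 0.003125
```

As the text notes, individual photographers may be able to hand hold shots for considerably longer or shorter times than this estimate.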

The maximum aperture is perhaps the most important lens aperture specification, and is often listed on the box along with the focal length(s). An f-number of X may also be displayed as 1:X (instead of f/X), as shown below for the Canon 70-200 f/2.8 lens (whose box is also shown above and lists f/2.8).

Portrait and indoor sports/theater photography often require lenses with very large maximum apertures, in order to be capable of faster shutter speeds or narrower depths of field, respectively. The narrow depth of field in a portrait helps isolate the subject from their background. For digital SLR cameras, lenses with larger maximum apertures provide significantly brighter viewfinder images, which may be critical for night and low-light photography. These also often give faster and more accurate auto-focusing in low light. Manual focusing is also easier, because the image in the viewfinder has a narrower depth of field (thus making it more visible when objects come into or out of focus).

Typical Maximum Apertures | Relative Light-Gathering Ability | Typical Lens Types
f/1.0 | 32X | Fastest available prime lenses (for consumer use)
f/1.4 | 16X | Fast prime lenses
f/2.0 | 8X | Fast prime lenses
f/2.8 | 4X | Fastest zoom lenses (for constant aperture)
f/4.0 | 2X | Light weight zoom lenses or extreme telephoto primes
f/5.6 | 1X | Light weight zoom lenses or extreme telephoto primes

Minimum apertures for lenses are generally nowhere near as important as maximum apertures. This is primarily because minimum apertures are rarely used, both due to photo blurring from lens diffraction and because they may require prohibitively long exposure times. For cases where extreme depth of field is desired, smaller minimum aperture (larger maximum f-number) lenses allow for a wider depth of field.

Lenses with a greater range of aperture settings provide greater artistic flexibility, in terms of both exposure options and depth of field.

Corresponding Impact on Other Properties:
f-# | Higher | Lower
Light-Gathering Area (Aperture Size) | Smaller | Larger
Required Shutter Speed | Slower | Faster
Depth of Field | Wider | Narrower

When one is considering purchasing a lens, specifications ordinarily list the maximum (and maybe minimum) available apertures.
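Since light-gathering area scales with the square of the aperture diameter, the relative amounts in the table above follow directly from the f-numbers. A small illustrative sketch (note that marked f-numbers are rounded, e.g. "f/5.6" is really f/2^2.5 ≈ f/5.66, which is why exact ratios computed from marked values come out slightly off the nominal 2X-per-stop figures):

```python
import math

def light_ratio(f_number_a, f_number_b):
    """How many times more light f_number_a gathers than f_number_b.

    For a fixed focal length, aperture area is proportional to
    1 / f-number squared, so the ratio is (Nb / Na) ** 2.
    """
    return (f_number_b / f_number_a) ** 2

def stops_between(f_number_a, f_number_b):
    """Difference in f-stops (each stop is a 2X change in light)."""
    return math.log2(light_ratio(f_number_a, f_number_b))

# f/2.0 gathers 4X the light of f/4.0, i.e. it is 2 stops faster:
print(light_ratio(2.0, 4.0), stops_between(2.0, 4.0))  # → 4.0 2.0
```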

Other considerations include cost, size and weight. Lenses with larger maximum apertures are typically much heavier, larger and more expensive. Size/weight may be critical for wildlife, hiking and travel photography, because all of these often utilize heavier lenses or require carrying equipment for extended periods of time.

Finally, some zoom lenses on digital SLR and compact digital cameras often list a range of maximum aperture, because this may depend on how far one has zoomed in or out. A range of f/2.0-3.0 would mean that the maximum available aperture gradually changes from f/2.0 (fully zoomed out) to f/3.0 (at full zoom). These aperture ranges therefore refer only to the range of maximum aperture, not the overall range. The primary benefit of having a zoom lens with a constant maximum aperture is that exposure settings are more predictable, regardless of focal length.

Also note that just because the maximum aperture of a lens may not be used, this does not necessarily mean that a large maximum aperture is unnecessary. Lenses typically have fewer aberrations when they perform the exposure stopped down one or two f-stops from their maximum aperture (such as using a setting of f/4.0 on a lens with a maximum aperture of f/2.0). This *may* therefore mean that if one wanted the best quality f/2.8 photograph, a f/1.4 lens may yield higher quality than a lens with a maximum aperture of f/2.8.

5. CAMERA LENS FILTERS

Camera lens filters still have many uses in digital photography, and should be an important part of any photographer's camera bag. These can include polarizing filters to reduce glare and improve saturation, or simple UV/haze filters to provide extra protection for the front of your lens. This article aims to familiarize one with these and other filter options that cannot be reproduced using digital editing techniques. Common problems/disadvantages and filter sizes are discussed towards the end.

OVERVIEW: LENS FILTER TYPES

The most commonly used filters for digital photography include polarizing (linear/circular), UV/haze, neutral density, graduated neutral density and warming/cooling or color filters. Example uses for each are listed below:

Filter Type | Primary Use | Common Subject Matter
Linear & Circular Polarizers | Reduce Glare, Improve Saturation | Sky / Water / Foliage in Landscape Photography
Neutral Density (ND) | Extend Exposure Time | Waterfalls, Rivers under bright light
Graduated Neutral Density (GND) | Control Strong Light Gradients, Reduce Vignetting | Dramatically Lit Landscapes
UV / Haze | Improve Clarity with Film, Provide Lens Protection | Any
Warming / Cooling | Change White Balance | Landscapes, Underwater, Special Lighting

LINEAR & CIRCULAR POLARIZING FILTERS

Polarizing filters (aka "polarizers") are perhaps the most important of any filter for landscape photography. They work by reducing the amount of reflected light that passes to your camera's sensor. Similar to polarizing sunglasses, polarizers will reduce glare and reflections off of water and other surfaces, will make skies appear a deeper blue, and will reduce the contrast between land and sky.

Select: No Polarizer | Polarizer at Max

(two separate handheld photos taken seconds apart)

Note how the sky becomes a much darker blue, and how the foliage/rocks acquire slightly more color saturation. The intensity of the polarizing effect can be varied by slowly rotating your polarizing filter, although no more than 180° of rotation is needed, since beyond this the possible intensities repeat. Use your camera's viewfinder (or rear LCD screen) to view the effect as you rotate the polarizing filter.

The polarizing effect may also increase or decrease substantially depending on the direction your camera is pointed and the position of the sun in the sky. The effect is strongest when your camera is aimed in a direction which is perpendicular to the direction of the sun's incoming light. This means that if the sun is directly overhead, the polarizing effect will be greatest near the horizon in all directions.

However, polarizing filters should be used with caution because they may adversely affect the photo. Polarizers dramatically reduce the amount of light reaching the camera's sensor, often by 2-3 f-stops (1/4 to 1/8 the amount of light). This means that the risk of a blurred handheld image goes up dramatically, and may make some action shots prohibitive. Additionally, using a polarizer on a wide angle lens can produce an uneven or unrealistic looking sky which visibly darkens. In the example to the left, the sky could be considered unusually uneven and too dark at the top.

Linear vs. Circular Polarizing Filters: The circular polarizing variety is designed so that the camera's metering and autofocus systems can still function. Linear polarizers are much less expensive, but cannot be used with cameras that have through-the-lens (TTL) metering and autofocus, meaning nearly all digital SLR cameras. One could of course forego metering and autofocus, but that is rarely desirable.

NEUTRAL DENSITY FILTERS

Neutral density (ND) filters uniformly reduce the amount of light reaching the camera's sensor. This is useful when a sufficiently long exposure time is not otherwise attainable within a given range of possible apertures (at the lowest ISO setting).
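The light loss from a polarizer or ND filter translates directly into longer exposure times, at a factor of two per f-stop (for the same aperture and ISO speed). A quick sketch of that compensation, with an illustrative function name:

```python
def compensated_exposure(base_exposure_s, stops_lost):
    """Exposure time needed after a filter removes some light.

    Each f-stop of light lost doubles the required exposure time,
    assuming the aperture and ISO speed stay fixed.
    """
    return base_exposure_s * 2 ** stops_lost

# A 1/200 s handheld exposure becomes 1/50 s behind a 2-stop polarizer:
print(compensated_exposure(1 / 200, 2))  # → 0.02
```

This is why the text warns that the risk of a blurred handheld image goes up dramatically: a 2-3 stop loss multiplies the required exposure time by 4-8X.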

Situations where ND filters are particularly useful include:
• Smoothing water movement in waterfalls, rivers, oceans, etc.
• Achieving a shallower depth of field in very bright light
• Reducing diffraction (which reduces sharpness) by enabling a larger aperture
• Making moving objects less apparent or not visible (such as people or cars)
• Introducing blur to convey motion with moving subjects

(photo with a smoothed water effect from a long exposure)

Extreme light reduction can enable very long exposures even during broad daylight. However, only use ND filters when absolutely necessary, because they effectively discard light which could otherwise be used to enable a shorter shutter speed (to freeze action), a smaller aperture (for depth of field) or a lower ISO setting (to reduce image noise). Additionally, some ND filters can add a very slight color cast to the image.

Understanding how much light a given ND filter blocks can sometimes be difficult, since manufacturers list this in many different forms:

Amount of Light Reduction
f-stops | 1 | 2 | 3 | 4 | 5 | 6
Fraction | 1/2 | 1/4 | 1/8 | 1/16 | 1/32 | 1/64
Lee, B+W and Cokin | 0.3 ND | 0.6 ND | 0.9 ND | 1.2 ND | 1.5 ND | 1.8 ND
Hoya | ND2 / ND2X | ND4 / ND4X | ND8 / ND8X | ND16 / ND16X | ND32 / ND32X | ND64 / ND64X
Tiffen, Leica | 1X | 4X | 8X | 16X | 32X | 64X

Generally no more than a few f-stops are needed for most waterfall scenarios, so most photographers just keep one or two different ND filter amounts on hand.

GRADUATED NEUTRAL DENSITY FILTERS

Graduated neutral density (GND) filters restrict the amount of light across an image in a smooth geometric pattern. These are sometimes also called "split filters." Scenes which are ideally suited for GND filters are those with simple lighting geometries, such as the linear blend from dark to light encountered commonly in landscape photography (below).
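As a sketch of the conversions in the ND table above (the helper name is just for illustration), a filter's strength in f-stops maps directly onto the other notations:

```python
def nd_notations(stops):
    """Equivalent ways of writing an ND filter's strength.

    Returns (optical density, fraction of light passed, NDx factor),
    using the usual 0.3-density-per-stop marking convention.
    """
    density = round(0.3 * stops, 1)   # e.g. 0.9 ND (Lee, B+W, Cokin style)
    fraction = 1 / 2 ** stops         # e.g. 1/8 of the light passed
    factor = 2 ** stops               # e.g. ND8 or ND8X (Hoya style)
    return density, fraction, factor

# A 3-stop filter: density 0.9, passes 1/8 of the light, sold as ND8:
print(nd_notations(3))  # → (0.9, 0.125, 8)
```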

(GND Filter | Final Result)

Prior to digital cameras, GND filters were absolutely essential for capturing dramatically-lit landscapes. With digital cameras one can instead often take two separate exposures and blend these using a linear gradient in Photoshop. On the other hand, this technique is not possible for fast moving subject matter or changing light (unless it is a single exposure developed twice from the RAW file format, but this increases image noise). Many also prefer using a GND filter in order to see how the final image will look immediately through the viewfinder or rear LCD.

GND filters come in many varieties. The first important setting is how quickly the filter blends from light to dark, which is usually termed "soft edge" or "hard edge" for gradual and more abrupt blends, respectively. These are chosen based on how quickly the light changes across the scene; a sharp division between dark land and bright sky would necessitate a harder edge GND filter, for example. Alternatively, the blend can instead be radial, to either add or remove light fall-off at the lens's edges (vignetting).

Soft Edge GND | Hard Edge GND | Radial Blend
(note: in the above diagrams, white = clear, which passes 100% of the light)

The soft edge is generally more flexible and more forgiving of misplacement. On the other hand, a soft edge may produce excessive darkening or brightening near where the blend occurs if the scene's light transitions faster than the filter. Placing the blend should be performed very carefully and usually requires a tripod.

A problem with the soft and hard edge terminology is that it is not standardized from one brand to another. One company's "soft edge" can sometimes be nearly as abrupt a blend as another company's so-called "hard edge." It is therefore best to take these on a case by case basis and actually look at the filter itself to judge the blend type. Most manufacturers will show an example of the blend on their own websites.

The second important setting is the differential between how much light is let in at one side of the blend versus the other (the top versus the bottom in the examples directly above). This differential is expressed using the same terminology as used for ND filters in the previous section. A "0.6 ND grad" therefore refers to a graduated neutral density filter which lets in 2 f-stops less light (1/4th) at one side of the blend versus the other. Similarly, a 0.9 ND grad lets in 3 f-stops less light (1/8th) at one side. Most landscape photos need no more than a 1-3 f-stop blend.

One should also be aware that vertical objects extending across the blend may appear unrealistically dark.

Choose: Final Photo | Location of GND Blend

Note how the rock columns become nearly black at their top compared to below the blend. Unfortunately, this effect is often unavoidable when using GND filters.

HAZE & UV FILTERS

Nowadays UV filters are primarily used to protect the front element of a camera lens, since they are clear and do not noticeably affect the image. With film cameras, UV filters reduce haze and improve contrast by minimizing the amount of ultraviolet (UV) light that reaches the film. The problem with UV light is that it is not visible to the human eye, but is often uniformly distributed on a hazy day; UV therefore adversely affects the camera's exposure by reducing contrast. However, digital camera sensors are nowhere near as sensitive to UV light as film, and therefore UV filtration is no longer necessary for digital cameras.

(77 mm UV filter)

For digital cameras, it is often debated whether the advantage of a UV filter (protection) outweighs the potential reduction in image quality. UV filters have the potential to decrease image quality by increasing lens flare, adding a slight color tint or reducing contrast. Multicoated UV filters can dramatically reduce the chance of flare, and high quality UV filters will not introduce any visible color cast. Keeping your filter very clean minimizes any reduction in image quality (although even invisible micro abrasions will affect sharpness/contrast).

For very expensive SLR lenses, the increased protection is often the determining factor, since it is much easier to replace a filter than to replace or repair a lens. On the other hand, for less expensive SLR lenses or compact digital cameras, protection is much less of a factor; the choice therefore becomes more a matter of personal preference. Another consideration is that UV filters may increase the resale value of the lens by keeping the front lens element in mint condition. In that sense, a UV filter could even be deemed to increase image quality (relative to an unfiltered lens), since it can be routinely replaced whenever it is perceived to adversely affect the image.

COOL & WARM FILTERS

Cooling or warming filters change the white balance of light reaching the camera's sensor. This can be used to either correct an unrealistic color cast, or to instead add one, such as adding warmth to a cloudy day to make it appear more like it was taken during sunset. These filters have become much less important with digital cameras, since most automatically adjust for white balance, and since this can be adjusted afterwards when taking photos with the RAW file format.

On the other hand, some situations may still necessitate color filters, such as situations with unusual lighting (above example) or underwater photography. The above image's orange color cast is from monochromatic sodium streetlamps; with this type of light source, there may be such an overwhelming amount of monochromatic light that virtually no amount of white balance correction can restore full color, or at least not without introducing huge amounts of image noise in some color channels. A cooling filter or special streetlight filter could be used to restore color based on other light sources.

PROBLEMS WITH LENS FILTERS

(visible filter vignetting)

Filters should only be used when necessary, because they can also adversely affect the image. Since they effectively introduce an additional piece of glass between your camera's sensor and the subject, they have the potential to reduce image quality. This usually comes in the form of either a slight color tint, a reduction in local or overall image contrast, or ghosting and increased lens flare caused by light inadvertently reflecting off the inside of the filter.

Filters may also introduce physical vignetting (light fall-off or blackening at the edges of the image) if their opaque edge gets in the way of light entering the lens (right example). The example shown was created by stacking a polarizing filter on top of a UV filter while also using a wide angle lens, causing the edges of the outermost filter to get in the way of the image. Stacking filters therefore has the potential to make all of the above problems much worse.

NOTES ON CHOOSING A FILTER SIZE FOR A CAMERA LENS

Lens filters generally come in two varieties: screw-on and front filters. Front filters are more flexible because they can be used on virtually any lens diameter; however, they may also be more cumbersome to use, since they may need to be held in front of the lens. On the other hand, filter holder kits are available that can improve this process. Screw-on filters can provide an air-tight seal when needed for protection, and cannot accidentally move relative to the lens during composure. The main disadvantage is that a given screw-on filter will only work with a specific lens size.

The size of a screw-on filter is expressed in terms of its diameter, which corresponds to the diameter usually listed on the top or front of your camera lens. This diameter is listed in millimeters and usually ranges from about 46 to 82 mm for digital SLR cameras. Step-up or step-down adapters can enable a given filter size to be used on a lens with a smaller or larger diameter, respectively. However, step-down filter adapters may introduce substantial vignetting (since the filter may block light at the edges of the lens), whereas step-up adapters mean that your filter is much larger (and potentially more cumbersome) than is required.

The height of the filter edges may also be important. Ultra-thin and other special filters are designed so that they can be used on wide angle lenses without vignetting. On the other hand, these may also be much more expensive and often do not have threads on the outside to accept another filter (or sometimes even the lens cap).

6. TUTORIALS: DEPTH OF FIELD

Depth of field is the range of distance within the subject that is acceptably sharp. The depth of field varies depending on camera type, aperture and focusing distance, although print size and viewing distance can also influence our perception of it. This section is designed to give a better intuitive and technical understanding for photography, and provides a depth of field calculator to show how it varies with your camera settings.

The depth of field does not abruptly change from sharp to unsharp, but instead occurs as a gradual transition. In fact, everything immediately in front of or in back of the focusing distance begins to lose sharpness, even if this is not perceived by our eyes or by the resolution of the camera.

CIRCLE OF CONFUSION

Since there is no critical point of transition, a more rigorous term called the "circle of confusion" is used to define how much a point needs to be blurred in order to be perceived as unsharp. When the circle of confusion becomes perceptible to our eyes, this region is said to be outside the depth of field and thus no longer "acceptably sharp." The circle of confusion above has been exaggerated for clarity; in reality this would be only a tiny fraction of the camera sensor's area.

When does the circle of confusion become perceptible to our eyes? An acceptably sharp circle of confusion is loosely defined as one which would go unnoticed when enlarged to a standard 8x10 inch print and observed from a standard viewing distance of about 1 foot. At this viewing distance and print size, camera manufacturers assume a circle of confusion is negligible if no larger than 0.01 inches (when enlarged). Camera manufacturers therefore use the 0.01 inch standard when providing lens depth of field markers (shown below for f/22 on a 50 mm lens). In reality, a person with 20-20 vision or better can distinguish features 1/3 this size or smaller, and so the circle of confusion has to be even smaller than this to achieve acceptable sharpness throughout.

A different maximum circle of confusion also applies for each print size and viewing distance combination. In the earlier example of blurred dots, the circle of confusion is actually smaller than the resolution of your screen for the two dots on either side of the focal point, and so these are considered within the depth of field. Alternatively, the depth of field can be based on when the circle of confusion becomes larger than the size of your digital camera's pixels.

Note that depth of field only sets a maximum value for the circle of confusion, and does not describe what happens to regions once they become out of focus. These regions are also called "bokeh," from Japanese (pronounced bo-ké). Two images with identical depth of field may have significantly different bokeh, as this depends on the shape of the lens diaphragm. In reality, the circle of confusion is usually not actually a circle, but is only approximated as such when it is very small. When it becomes large, most lenses will render it as a polygonal shape with 5-8 sides.

CONTROLLING DEPTH OF FIELD

Although print size and viewing distance are important factors which influence how large the circle of confusion appears to our eyes, aperture and focal distance are the two main factors that determine how big the circle of confusion will be on your camera's sensor. Larger apertures (smaller f-stop number) and closer focal distances produce a shallower depth of field. The following depth of field test was taken with the same focus distance and a 200 mm lens (320 mm field of view on a 35 mm camera), but with various apertures: f/8.0 | f/5.6 | f/2.8

CLARIFICATION: FOCAL LENGTH AND DEPTH OF FIELD

Note that I did not mention focal length as influencing depth of field. Even though telephoto lenses appear to create a much shallower depth of field, this is mainly because they are often used to make the subject appear bigger when one is unable to get closer. If the subject occupies the same fraction of the image (constant magnification) for both a telephoto and a wide angle lens, the total depth of field is virtually* constant with focal length! This would of course require you to either get much closer with a wide angle lens or much further with a telephoto lens, as demonstrated in the following chart:

Focal Length (mm) | 10 | 20 | 50 | 100 | 200 | 400
Focus Distance (m) | 0.5 | 1.0 | 2.5 | 5.0 | 10 | 20
Depth of Field (m) | 0.482 | 0.421 | 0.406 | 0.404 | 0.404 | 0.404

Note: Depth of field calculations are at f/4.0 on a Canon EOS 30D (1.6X crop factor), using a circle of confusion of 0.0206 mm.

Note how there is indeed a subtle change for the smallest focal lengths. This is a real effect, but is negligible compared to both aperture and focus distance.

*Note: We describe depth of field as being virtually constant because there are limiting cases where this does not hold true. Near the hyperfocal distance, wide angle lenses may provide a greater DoF than telephoto lenses; the increase in DoF arises because the wide angle lens has a greater rear DoF, and can thus more easily attain critical sharpness at infinity for any given focal distance. For focal distances resulting in high magnification, on the other hand, the traditional DoF calculation becomes inaccurate due to another factor: pupil magnification. This actually acts to offset the DoF advantage for most wide angle lenses, and increase it for telephoto and macro lenses.

On the other hand, when standing in the same place and focusing on a subject at the same distance, a longer focal length lens will have a shallower depth of field (even though the pictures will show something entirely different). This is more representative of everyday use, but is an effect due to higher magnification, not focal length. Longer focal lengths also appear to have a shallow depth of field because they flatten perspective; this renders a background much larger relative to the foreground, even if no more detail is resolved. Depth of field also appears shallower for SLR cameras than for compact digital cameras, because SLR cameras require a longer focal length to achieve the same field of view.

Even though the total depth of field is virtually constant, the fraction of the depth of field which is in front of and behind the focus distance does change with focal length, as demonstrated below:

Distribution of the Depth of Field
Focal Length (mm) | 10 | 20 | 50 | 100 | 200 | 400
Rear | 70.2 % | 60.1 % | 54.0 % | 52.0 % | 51.0 % | 50.5 %
Front | 29.8 % | 39.9 % | 46.0 % | 48.0 % | 49.0 % | 49.5 %

This exposes a limitation of the traditional DoF concept: it only accounts for the total DoF and not its distribution around the focal plane, even though both may contribute to the perception of sharpness. A wide angle lens provides a more gradually fading DoF behind the focal plane than in front, which is important for traditional landscape photographs.

CALCULATING DEPTH OF FIELD

In order to calculate the depth of field, one needs to first decide on an appropriate value for the maximum allowable circle of confusion. This is based on both the camera type (sensor or film size), and on the viewing distance / print size combination. Depth of field calculations ordinarily assume that a feature size of 0.01 inches is required for acceptable sharpness (as discussed earlier); however, people with 20-20 vision can see features 1/3 this size. The depth of field calculator below assumes this standard of eyesight. If you use the 0.01 inch standard, understand that the edge of the depth of field may not appear acceptably sharp to the most discerning eyes.

Top of Form
Depth of Field Calculator
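A minimal sketch of what such a calculator computes, using the standard thin-lens hyperfocal-distance formulas and assuming a pupil magnification of one (the function name and example numbers are illustrative, not taken from the calculator itself):

```python
def depth_of_field_mm(focal_mm, f_number, focus_dist_mm, coc_mm):
    """Near and far limits of acceptable sharpness, in mm.

    coc_mm is the maximum allowable circle of confusion on the sensor.
    Uses the standard hyperfocal-distance approximation.
    """
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = (focus_dist_mm * (hyperfocal - focal_mm)
            / (hyperfocal + focus_dist_mm - 2 * focal_mm))
    if focus_dist_mm >= hyperfocal:
        far = float("inf")  # acceptably sharp through to infinity
    else:
        far = (focus_dist_mm * (hyperfocal - focal_mm)
               / (hyperfocal - focus_dist_mm))
    return near, far

# 50 mm lens at f/4.0, focused at 5 m, with a 0.03 mm circle of
# confusion (a common full-frame 35 mm assumption):
near, far = depth_of_field_mm(50, 4.0, 5000, 0.03)
print(round(near), round(far))  # roughly 4040 mm and 6558 mm
```

Note how the rear portion of the depth of field (about 1.56 m behind the subject here) exceeds the front portion (about 0.96 m), consistent with the front/rear distribution discussed above.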

For macro photography (high magnification), the depth of field is actually influenced by another factor: pupil magnification. This is equal to one for lenses which are internally symmetric, although for wide angle and telephoto lenses this is greater or less than one, respectively. A greater depth of field is achieved (than would be ordinarily calculated) for a pupil magnification less than one, whereas the pupil magnification does not change the calculation when it is equal to one. The problem is that the pupil magnification is usually not provided by lens manufacturers, and one can only roughly estimate it visually.

Camera Type | Selected aperture | Actual lens focal length (mm) | Focus distance (to subject) (meters)
Closest distance of acceptable sharpness | Furthest distance of acceptable sharpness | Total Depth of Field
Note: CF = "crop factor" (commonly referred to as the focal length multiplier)
Bottom of Form

DEPTH OF FOCUS & APERTURE VISUALIZATION

Another implication of the circle of confusion is the concept of depth of focus (also called the "focus spread"). It differs from depth of field in that it describes the distance over which light is focused at the camera's sensor, as opposed to how much of the subject is in focus. This is important because it sets tolerances on how flat/level the camera's film or digital sensor has to be in order to capture proper focus in all regions of the image.

(Diagram depicting depth of focus versus camera aperture. The purple lines represent the extreme angles at which light could potentially enter the aperture; the purple shaded portion represents all other possible angles.)

The key concept is this: when an object is in focus, light rays originating from that point converge at a point on the camera's sensor. If the light rays hit the sensor at slightly different locations (arriving at a disc instead of a point), then this object will be rendered as out of focus, and increasingly so depending on how far apart the light rays are. The diagram can also be used to illustrate depth of field, but in that case it's the lens elements that move instead of the sensor.

OTHER NOTES

Why not just use the smallest aperture (largest number) to achieve the best possible depth of field? Other than the fact that this may require prohibitively long shutter speeds without a camera tripod, too small of an aperture softens the image by creating a larger circle of confusion (or "Airy disk") due to an effect called diffraction, even within the plane of focus. Diffraction quickly becomes more of a limiting factor than depth of field as the aperture gets smaller. Despite their extreme depth of field, this is also why "pinhole cameras" have limited resolution.

7. UNDERSTANDING CAMERA AUTOFOCUS

A camera's autofocus system intelligently adjusts the camera lens to obtain focus on the subject, and can mean the difference between a sharp photo and a missed opportunity. Despite a seemingly simple goal—sharpness at the focus point—the inner workings of how a camera focuses are unfortunately not as straightforward. This tutorial aims to improve your photos by introducing how autofocus works—thereby enabling you to both make the most of its assets and avoid its shortcomings.

Note: Autofocus (AF) works either by using contrast sensors within the camera (passive AF) or by sending out a signal to illuminate or estimate distance to the subject (active AF). Passive AF can be performed using either the contrast detection or phase detection methods, but both rely on contrast for achieving accurate autofocus; they will therefore be treated as being qualitatively similar for the purposes of this AF tutorial. Unless otherwise stated, this tutorial will assume passive autofocus. We will also discuss the AF assist beam method of active autofocus towards the end.

CONCEPT: AUTOFOCUS SENSORS

A camera's autofocus sensor(s) are the real engine behind achieving accurate focus, and are laid out in various arrays across your image's field of view. Each sensor measures relative focus by assessing changes in contrast at its respective point in the image—where maximal contrast is assumed to correspond to maximal sharpness.

[Interactive diagram: change the focus amount (blurred / partial / sharp) to see the corresponding sensor histogram, shown at 400%. Please visit the tutorial on image histograms for a background on image contrast.]

The process of autofocusing generally works as follows:

(1) An autofocus processor (AFP) makes a small change in the focusing distance.
(2) The AFP reads the AF sensor to assess whether and by how much focus has improved.
(3) Using the information from (2), the AFP sets the lens to a new focusing distance.
(4) The AFP may iteratively repeat steps 2-3 until satisfactory focus has been achieved.

This entire process is usually completed within a fraction of a second. For difficult subjects, the camera may fail to achieve satisfactory focus and will give up on repeating the above sequence, resulting in failed autofocus. This is the dreaded "focus hunting" scenario where the camera focuses back and forth repeatedly without achieving focus lock. This does not, however, mean that focus is not possible for the chosen subject. Whether and why autofocus may fail is primarily determined by factors in the next section.

Note: the above diagram illustrates the contrast detection method of AF; phase detection is another method, but this still relies on contrast for accurate autofocus. Further, many compact digital cameras use the image sensor itself as a contrast sensor (using a method called contrast detection AF), and do not necessarily have multiple discrete autofocus sensors (which are more common using the phase detection method of AF).

FACTORS AFFECTING AUTOFOCUS PERFORMANCE

The photographic subject can have an enormous impact on how well your camera autofocuses—and often even more so than any variation between camera models, lenses or focus settings. The three most important factors influencing autofocus are the light level, subject contrast and camera or subject motion. Note that these factors are not independent; in other words, one may be able to achieve autofocus even for a dimly lit subject if that same subject also has extreme contrast, or vice versa. This has an important implication for your choice of autofocus point: selecting a focus point which corresponds to a sharp edge or pronounced texture can achieve better autofocus, assuming all other factors remain equal.

An example illustrating the quality of different focus points is shown to the left; move your mouse over the image to highlight areas of good and poor performance. In this example we were fortunate that the location where autofocus performs best also corresponds to the subject location. The next example is more problematic because autofocus performs best on the background, not the subject; move your mouse over this image to see the advantages and disadvantages of each focus location.
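The four-step AF sequence above is essentially a hill-climbing search on the sensor's contrast reading. The toy sketch below is illustrative only (everything here is a made-up stand-in, not a real camera API): `contrast_at` fakes an AF sensor whose reading peaks when focused on a hypothetical subject 2 m away.

```python
def contrast_at(distance_m, subject_m=2.0):
    """Toy stand-in for an AF sensor reading: contrast is highest when
    the focusing distance matches the (hypothetical) subject at 2 m."""
    return 1.0 / (1.0 + (distance_m - subject_m) ** 2)

def autofocus(start_m=0.5, step_m=0.25, max_iters=50):
    d = start_m
    best = contrast_at(d)
    for _ in range(max_iters):          # (4) repeat until satisfied
        trial = d + step_m              # (1) small change in focus distance
        reading = contrast_at(trial)    # (2) read the AF sensor
        if reading > best:
            d, best = trial, reading    # (3) move to the better distance
        else:
            step_m = -step_m / 2        # overshot the peak: reverse and refine
            if abs(step_m) < 1e-3:      # refinement exhausted ("focus lock")
                break
    return d
```

Note how the step reverses and halves once a trial overshoots the contrast peak; a low-contrast subject flattens that peak and makes the improvement test unreliable, which is the code-level analogue of focus hunting.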

In the photo to the right, focusing on the subject's exterior highlight would perhaps be the best approach, with the caveat that this highlight would change sides and intensity rapidly depending on the location of the moving light sources. Alternatively, if one's camera had difficulty focusing on the exterior highlight, a lower contrast (but stationary and reasonably well lit) focus point would be the subject's foot, or leaves on the ground at the same distance as the subject. If one instead focused on the fast-moving light sources behind the subject, one would risk an out-of-focus subject when the depth of field is shallow (as would be the case for a low-light action shot like this one). What makes the above choices difficult, however, is that these decisions often have to be either anticipated or made within a fraction of a second. Additional specific techniques for autofocusing on still and moving subjects will be discussed in their respective sections towards the end of this tutorial.

NUMBER & TYPE OF AUTOFOCUS POINTS

The robustness and flexibility of autofocus is primarily a result of the number, position and type of autofocus points made available by a given camera model. High-end SLR cameras can have 45 or more autofocus points, whereas other cameras can have as few as one central AF point. Two example layouts of autofocus sensors are shown below:

[Sensor layout diagrams, selectable by maximum aperture — Max f/#: f/2.8, f/4.0, f/5.6, f/8.0]

[High-End SLR | Entry to Midrange SLR. Cameras used for left and right examples are the Canon 1D MkII and Canon 50D/500D, respectively. For these cameras autofocus is not possible for apertures smaller than f/8.0 and f/5.6, respectively.]

Two types of autofocus sensors are shown:
+ cross-type sensors (two-dimensional contrast detection, higher accuracy)
| vertical line sensors (one-dimensional contrast detection, lower accuracy)

Note: The "vertical line sensor" is only called this because it detects contrast along a vertical line; ironically, this type of sensor is therefore best at detecting horizontal lines.

For SLR cameras, the number and accuracy of autofocus points can also change depending on the maximum aperture of the lens being used, as illustrated above. This is an important consideration when choosing a camera lens: even if you do not plan on using a lens at its maximum aperture, this aperture may still help the camera achieve better focus accuracy. Further, since the central AF sensor is almost always the most accurate, for off-center subjects it is often best to first use this sensor to achieve a focus lock (before recomposing the frame).

Multiple AF points can work together for improved reliability, or can work in isolation for improved specificity, depending on your chosen camera setting. Some cameras also have an "auto depth of field" feature for group photos which ensures that a cluster of focus points are all within an acceptable level of focus.

AF MODE: CONTINUOUS & AI SERVO vs. ONE SHOT

The most widely supported camera focus mode is one-shot focusing, which is best for still subjects. One shot focusing requires a focus lock before the photograph can be taken. The one shot mode is susceptible to focus errors for fast moving subjects since it cannot anticipate subject motion, in addition to potentially also making it difficult to visualize these moving subjects in the viewfinder.

Many cameras also support an autofocus mode which continually adjusts the focus distance for moving subjects. Canon cameras refer to this as "AI Servo" focusing, whereas Nikon cameras refer to this as "continuous" focusing. It works by predicting where the subject will be slightly in the future, based on estimates of the subject velocity from previous focus distances. The camera then focuses at this predicted distance in advance to account for the shutter lag (the delay between pressing the shutter button and the start of the exposure). This greatly increases the probability of correct focus for moving subjects.

Example maximum tracking speeds are shown for various Canon cameras below:

[Tracking speed chart. Values are for ideal contrast and lighting, and use the Canon 300mm f/2.8 IS L lens.]
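The prediction step behind AI servo / continuous focusing can be illustrated with a minimal sketch. This is only a linear extrapolation under assumed numbers (the function name, units and values are hypothetical; real AF processors track subject velocity far more robustly):

```python
def predict_focus_distance(d_prev_m, d_curr_m, dt_s, shutter_lag_s):
    """Extrapolate the subject distance at the moment the exposure
    actually starts, using the velocity implied by the two most
    recent focus-distance readings."""
    velocity = (d_curr_m - d_prev_m) / dt_s       # m/s toward or away from camera
    return d_curr_m + velocity * shutter_lag_s    # focus here in advance

# Subject approached from 10 m to 9 m over 0.5 s; with a 0.1 s shutter
# lag the camera should pre-focus at 8.8 m rather than at 9 m.
print(predict_focus_distance(10.0, 9.0, 0.5, 0.1))
```

A subject moving erratically breaks the constant-velocity assumption above, which is one reason actual maximum tracking speeds depend on how erratically the subject moves.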

IN PRACTICE: PORTRAITS & OTHER STILL PHOTOS

Still photos are best taken using the one-shot autofocus mode, which ensures that a focus lock has been achieved before the exposure begins. The usual focus point requirements of contrast and strong lighting still apply, although one needs to ensure there is very little subject motion.

For portraits, the eye is the best focus point—both because this is a standard and because it has good contrast. Accurate focus is especially important for portraits because these typically have a shallow depth of field. Although the central autofocus sensor is usually most sensitive, the most accurate focusing is achieved using the off-center focus points for off-center subjects. If one were to instead use the central AF point to achieve a focus lock (prior to recomposing for an off-center subject), the focus distance will always be behind the actual subject distance—and this error increases for closer subjects.

Since the most common type of AF sensor is the vertical line sensor, it may also be worth considering whether your focus point contains primarily vertical or horizontal contrast. In low-light conditions, one may be able to achieve a focus lock not otherwise possible by rotating the camera 90° during autofocus.

In the example to the left, the stairs are comprised primarily of horizontal lines. If one were to focus near the back of the foreground stairs (so as to maximize apparent depth of field using the hyperfocal distance), one might be able to avoid a failed autofocus by orienting their camera first in landscape mode during autofocus. Afterwards one could rotate the camera back to portrait orientation during the exposure, if so desired. For further reading on this topic please visit the tutorials on depth of field and the hyperfocal distance.

IN PRACTICE: ACTION PHOTOS

Autofocus will almost always perform best with action photos when using the AI servo or continuous modes. Focusing performance can be improved dramatically by ensuring that the lens does not have to search over a large range of focus distances. Perhaps the most universally supported way of achieving this is to pre-focus your camera at a distance near where you anticipate the moving subject to pass through. In the biker example to the right, one could pre-focus near the side of the road since one would expect the biker to pass by at near that distance. Some SLR lenses also have a minimum focus distance switch; setting this to the greatest distance possible (assuming the subject will never be closer) can also improve performance.

Actual maximum tracking speeds also depend on how erratic the subject is moving, the subject contrast and lighting, the type of lens and the number of autofocus sensors being used to track the subject; the plot of maximum tracking speeds should also provide a rule of thumb estimate for other cameras as well. Be warned that in continuous autofocus mode shots can still be taken even if the focus lock has not yet been achieved. Also be warned that using focus tracking can dramatically reduce the battery life of your camera, so use only when necessary.

AUTOFOCUS ASSIST BEAM

Many cameras come equipped with an AF assist beam, which is a method of active autofocus that uses a visible or infrared beam to help the autofocus sensors detect the subject. This can be very helpful in situations where your subject is not adequately lit or has insufficient contrast for autofocus, although the AF assist beam also comes with the disadvantage of much slower autofocus. Most compact cameras use a built-in infrared light source for the AF assist, whereas digital SLR cameras often use either a built-in or external camera flash to illuminate the subject. When using a flash for the AF assist, the AF assist beam may have trouble achieving focus lock if the subject moves appreciably between flash firings; use of the AF assist beam is therefore only recommended for still subjects.

Note that the emphasis in this tutorial has been on *how* to focus—not necessarily *where* to focus.

8. CAMERA TRIPODS

A camera tripod can make a huge difference in the sharpness and overall quality of photos. It enables photos to be taken with less light or a greater depth of field, in addition to enabling several specialty techniques. This tutorial is all about how to choose and make the most of your camera tripod.

WHEN TO USE A TRIPOD

A camera tripod's function is pretty straightforward: it holds the camera in a precise position. This gives you a sharp picture when it might have otherwise appeared blurred due to camera shake.

[Photo with a smoothed water effect from a long exposure (only possible with a tripod).]

But how can you tell when you should and shouldn't be using a tripod? When will a hand-held photo become blurred? A common rule of thumb for estimating how fast the exposure needs to be is the one over focal length rule. This states that for a 35 mm camera, the exposure time needs to be at least as fast as one over the focal length in seconds. In other words, when using a 100 mm focal length on a 35 mm camera, the exposure time needs to be at most 1/100 seconds long -- otherwise blurring may be hard to avoid. For digital cameras with cropped sensors, one needs to convert to a 35 mm equivalent focal length.

The reason this rule depends on focal length is because zooming in on your subject also ends up magnifying camera movement. This is analogous to trying to aim a laser pointer at a position on a distant wall: the further this wall is, the more your laser pointer is likely to jump above and below this position due to an unsteady hand.

[Simulation of what happens when you try to aim a laser pointer at a point on a distant wall.]

The larger absolute movements on the further wall are similar to what happens with camera shake when you are using longer focal lengths (when you are more zoomed in).

Keep in mind that this rule is just for rough guidance. The exact shutter speed where camera shake affects your images will depend on (i) how steady you hold the camera, (ii) the sharpness of your lens, (iii) the resolution of your camera and (iv) the distance to your subject. In other words: if in doubt, always use a tripod.

Finally, camera lenses with image stabilization (IS) or vibration reduction (VR) may enable you to take hand-held photographs at anywhere from two to eight times longer shutter speeds than you'd otherwise be able to hold steady. However, IS and VR do not always help when the subject is moving -- but then again, neither do tripods.

OTHER REASONS TO USE A TRIPOD

Just because you can hold the camera steady enough to take a sharp photo using a given shutter speed, this doesn't necessarily mean that you should not use a tripod. You might be able to choose a more optimal combination of aperture, ISO and shutter speed. For example, you could use a smaller aperture in order to achieve more depth of field, or a lower ISO in order to reduce image noise. However, both require a longer shutter speed, which may mean the photo is no longer able to be taken hand-held. In addition, several specialty techniques may also require the use of a tripod:

• Taking a series of photos at different angles to produce a digital panorama.
• Taking a series of time lapse photographs to produce an animation.
• Taking a series of photos at different exposures for a high dynamic range (HDR) photo.
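The one over focal length rule described earlier is simple arithmetic, so it can be written as a small helper. This is a sketch only (the function name and `crop_factor` parameter are my own), assuming the crop factor is applied as a focal length multiplier as described above:

```python
def max_handheld_shutter_s(focal_length_mm, crop_factor=1.0):
    """Longest hand-held exposure (in seconds) suggested by the one over
    focal length rule; crop_factor converts to a 35 mm equivalent focal
    length for digital cameras with cropped sensors."""
    return 1.0 / (focal_length_mm * crop_factor)

print(max_handheld_shutter_s(100))       # 100 mm on a 35 mm camera: 0.01 s (1/100 s)
print(max_handheld_shutter_s(50, 1.6))   # 50 mm on a 1.6X crop body: 0.0125 s (1/80 s)
```

Remember that this is only rough guidance; steadiness, lens sharpness, camera resolution and subject distance all shift the real threshold.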

Other tripod-requiring situations include taking a series of photos to produce a composite image (such as selectively including people in a crowd, or combining portions lit by daylight with those at dusk), whenever you need to have your camera in the right composition well in advance of the shot (such as during a sporting event), and whenever you want to precisely control your composition.

CHOOSING A TRIPOD: TOP CONSIDERATIONS

Even though a tripod performs a pretty basic function, choosing the best tripod often involves many competing factors. Finding the best tripod requires identifying the optimal combination of trade-offs for your type of photography. The top considerations are usually its sturdiness, weight and ease of use:

Tripod Sturdiness/Stability. This is probably why you purchased a tripod in the first place: to keep your camera steady. Important factors which can influence sturdiness include: (i) the number of tripod leg sections, (ii) the material and thickness of the legs, and (iii) the length of the legs and whether a center column is needed to reach eye level. Ultimately though, the only way to gauge the sturdiness of a tripod is to try it out. Tap or apply weight to the top to see if it vibrates or sways, and take a few test photos.

Tripod Weight. This can determine whether you take the tripod along with you on a hike, or even on a shorter stroll through town. Having a lighter tripod can therefore mean that you'll end up using it a lot more, or will at least make using it more enjoyable since it won't cause a lot of fatigue when you have to carry it around. However, tripod weight and sturdiness are often closely related: a sturdier tripod usually weighs more, or vice versa. Just make sure that you're not sacrificing too much sturdiness in exchange for portability.

Tripod Ease of Use. What's the point of having a tripod if it stays in the closet because you find it too cumbersome, or if you miss the shot because it takes you too long to set it up? A tripod should therefore be quick and easy to position. Ease of use depends on the type of tripod head (discussed later), and how one positions the leg sections.

CHOOSING A TRIPOD: OTHER CONSIDERATIONS

Other more minor considerations when choosing a tripod include:

Number of Tripod Leg Sections. Each leg of a tripod can typically be extended using anywhere from two to four concentric leg sections.

[Example of a tripod with multiple leg sections extended.]

In general, more leg sections reduce stability. Having more leg sections can also mean that it takes longer to position or fully extend your tripod, but can also reduce the size of the tripod when it's fully contracted and in your camera bag.

[lever/clip lock vs. twist lock]

Tripod leg sections are usually extended or contracted using a locking mechanism, either with lever/clip locks or twist locks. Lever/clip locks tend to be much quicker to use, although some types can be tough to grip when wearing gloves. Twist locks are usually a little more compact and streamlined though, since they do not have external clips/latches. Twist locks also sometimes require two hands if you want to extend or contract each leg section independently.

Maximum Tripod Height. Make sure that the tripod's max height specification does not include having to extend its center column, because this can make the camera much less steady. This is especially important if you're quite tall, since you could end up having to crouch. Further, the higher you extend your tripod (even without the center column), the less stable it will end up being. On the other hand, tripods that do not extend as high may weigh a little less, but these also may not be as versatile as a result. Also keep in mind that you may not wish to take your photos at eye level most of the time, since this can make for an ordinary looking perspective.

[Center column of tripod extends to increase maximum height (at the expense of stability).]

Contracted Tripod Height. Tripods with more leg sections are generally more compact when fully contracted. This is primarily important for photographers who need to fit their tripod in a backpack, suitcase or other enclosed space. However, often times more compact tripods either don't extend as far or aren't as sturdy.

Minimum Tripod Height. This is primarily important for photographers who take a lot of macro photographs of subjects on the ground, or who like to use extreme vantage points in their photographs.

TRIPOD HEADS: PAN-TILT vs. BALL HEADS

Although many tripods already come with a tripod head, you might want to consider purchasing something that better suits your shooting style. The two most common types of tripod heads are pan-tilt and ball heads:

[Pan-Tilt Head | Ball Head]

Pan-Tilt Heads are great because you can independently control each of the camera's two axes of rotation: left-right (yaw) and up/down (pitch). However, for moving subjects this can also be a disadvantage, since you will need to adjust at least two camera settings before you can fully recompose the shot.

Ball Heads are great because you can quickly point the camera freely in nearly any direction before locking it into position. They are typically also a little more compact than equivalent pan-tilt heads. However, the advantage of free motion can also be a disadvantage, since it may cause your composition to no longer be level when you unlock the camera's position -- even if all you wanted to change was its left/right angle. To address this, some ball heads come with a "rotation only" ability for just this type of situation. This can be very useful once you've already gone to great care in leveling the tripod, but need to shift the composition slightly. Ball heads can also be more susceptible to damage, since small scratches or dryness of the rotation ball can cause it to grind or move in uneven jumps.

[bubble level]

A tripod head with a built-in bubble/spirit level can also be an extremely helpful feature -- especially when your tripod legs aren't equally extended on uneven ground.

Be on the lookout for whether your tripod head creeps/slips under heavy weight. This is surprisingly common, but is also a big problem for long exposures with heavier SLR cameras. Try attaching a big lens to your SLR camera and facing it horizontal on your tripod. Take an exposure of around 30 seconds if possible, and see whether your image has slight motion blur when viewed at 100%. Pay particular attention to creeping when the tripod head is rotated so that it holds your camera in portrait orientation.

The strength to weight ratio is another important tripod head consideration. This describes the rated load weight of the tripod head (how much equipment it can hold without creeping/slipping), compared to the weight of the tripod head itself. Higher ratios will make for a much lighter overall tripod/head combination.

Finally, regardless of which type of tripod head you choose, getting one that has a quick release mechanism can make a big difference in how quickly you can attach or remove your camera. A quick release mechanism lets your camera attach to the tripod head using a latch, instead of requiring the camera to be screwed or unscrewed.

TRIPOD LENS COLLARS

[Tripod lens collar on a 70-200 mm lens.]

A lens collar is most commonly used with large telephoto lenses, and is an attachment that fits around the lens somewhere near its base or midsection. The tripod head then directly attaches to the lens collar itself (instead of the camera body). This causes the camera plus lens to rest on the tripod at a location which is much closer to their center of mass. Much less rotational stress (aka torque) is therefore placed on the tripod head, which greatly increases the amount of weight that your tripod head can sustain without creeping or slipping. A lens collar can also make a huge difference in how susceptible the tripod and head are to vibrations. In other words: if your lens came with a collar, use it! Otherwise you might consider purchasing one if there's a size available that fits your lens.

ALUMINUM vs. CARBON FIBER TRIPODS

The two most common types of tripod material are aluminum and carbon fiber. Aluminum tripods are generally much cheaper than carbon fiber models, but they are often also a lot heavier for an equivalent amount of stability, and can be uncomfortably cold to handle with your bare hands in the winter. Carbon fiber tripods are generally also better at dampening vibrations. However, the best tripod material for damping vibrations is good old-fashioned wood -- it's just too heavy and impractical for typical use.

TABLETOP & MINI TRIPODS

A tabletop or mini tripod is usually used with compact cameras, since this type of tripod can often be quite portable and even carried in one's pocket. However, this portability often comes at the expense of versatility. A tabletop/mini tripod can only really change your camera's up/down (pitch) and left/right (yaw) orientation; shifting your camera to higher or lower heights is not a possibility. This means that finding the best surface to place the tripod is more important than usual with a tabletop/mini tripod, because you also need this surface to be at a level which gives you the desired vantage height.

On the other hand, a tabletop/mini tripod can be one way of forcing you to try a different perspective with your photos. Photos taken at eye level often appear ordinary, since that's the perspective that we're most used to seeing; photos taken at above or below this height are therefore often perceived as more interesting.

TRIPOD TIPS FOR SHARP PHOTOS

How you use your tripod can be just as important as the type of tripod you're using. Below is a list of top tips for achieving the sharpest possible photos with your tripod:

• Hang a camera bag from the center column for added stability, especially in the wind. However, make sure that this camera bag does not swing appreciably, or this could be counter-productive.
• Use the center column only after all of the tripod's leg sections have been fully extended, and only when absolutely necessary. The center column wobbles much more easily than the tripod's base.
• Remove the center column to shave off some weight.
• Extend your tripod only to the minimum height required for a given photograph.
• Extend only the thickest leg sections necessary in order to reach a given tripod height.
• Spread the legs to their widest standard position whenever possible.
• Set your tripod up on a sturdy surface, such as rock or concrete versus dirt, sand or grass. For indoor use, tile or hardwood floor is much better than a rug or carpet.
• Use add-on spikes at the ends of the tripod legs if you have no other choice but to set up your tripod on carpet or grass.
• Shield the tripod's base from the wind whenever possible.

CAMERA MONOPODS

[Monopod used to track a moving subject.]

A monopod is a tripod with only one leg. These are most commonly used to hold up heavy cameras and lenses, such as large telephoto lenses for sports and wildlife. Alternatively, monopods can increase hand-holdability for situations where just a little bit longer shutter speed is needed, but carrying a full tripod might be too cumbersome. A monopod can also make it much easier to photograph a moving subject in a way that creates a blurred background, but yet still keeps the moving subject reasonably sharp (example on left). This technique works by rotating the monopod along its axis -- causing the camera to pan in only one direction.

9. UNDERSTANDING CAMERA LENS FLARE

Lens flare is created when non-image forming light enters the lens and subsequently hits the camera's film or digital sensor. This often appears as a characteristic polygonal shape, with sides which depend on the shape of the lens diaphragm. It can lower the overall contrast of a photograph significantly and is often an undesired artifact, however some types of flare may actually enhance the artistic meaning of a photo. Understanding lens flare can help you use it -- or avoid it -- in a way which best suits how you wish to portray the final image.

WHAT IT LOOKS LIKE

The above image exhibits tell-tale signs of flare in the upper right caused by a bright sun just outside the image frame. These take the form of polygonal bright regions (usually 5-8 sides), in addition to bright streaks and an overall reduction in contrast (see below). The polygonal shapes vary in size and can actually become so large that they occupy a significant fraction of the image. Look for flare near very bright objects, although its effects can also be seen far away from the actual source (or even throughout the image).

Flare can take many forms, and this may include just one or all of the polygonal shapes, bright streaks, or overall washed out look (veiling flare) shown above.

BACKGROUND: HOW IT HAPPENS

All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Lens flare is caused by non-image light which does not pass (refract) directly along its intended path, but instead reflects internally on lens elements any number of times (back and forth) before finally reaching the film or digital sensor. Note: the aperture above is shown as being behind several lens elements.

Lens elements often contain some type of anti-reflective coating which aims to minimize flare, however no multi-element lens eliminates it entirely; lenses without effective coatings can flare up quite significantly under even soft lighting. Light sources will still reflect a small fraction of their light, and this reflected light becomes visible as flare in regions where it becomes comparable in intensity to the refracted light (created by the actual image). Flare which appears as polygonal shapes is caused by light which reflects off the inside edges of the lens aperture (diaphragm). Although flare is technically caused by internal reflections, this often requires very intense light sources in order to become significant (relative to refracted light). Flare-inducing light sources may include the sun, artificial lighting and even a full moon.

Even if the photo itself contains no intense light sources, stray light may still enter the lens if it hits the front element. Ordinarily light which is outside the angle of view does not contribute to the final image, but if this light reflects it may travel an unintended path and reach the film/sensor. In the visual example with flowers, shown above, the sun was not actually in the frame itself, but yet it still caused significant lens flare.

REDUCING FLARE WITH LENS HOODS

A good lens hood can nearly eliminate flare caused by stray light from outside the angle of view. Although using a lens hood may appear to be a simple solution, in reality most lens hoods do not extend far enough to block all stray light. This is particularly problematic when using 35 mm lenses on a digital SLR camera with a "crop factor," because these lens hoods were made for the greater angle of view. In addition, hoods for zoom lenses can only be designed to block all stray light at the widest focal length.

Ensure that your hood has a completely non-reflective inner surface, such as felt, and that there are no regions which have rubbed off. Petal lens hoods often protect better than non-petal (round) types. This is because petal-style hoods take into account the aspect ratio of the camera's film or digital sensor, and so the angle of view is greater in one direction than the other.

Unfortunately. instead of the one it comes with. the larger the lens hood the better-. there are some easy but less convenient workarounds. A more expensive solution used by many pros is using adjustable bellows. MINIMIZING FLARE THROUGH COMPOSITION Flare is thus ultimately under the control of the photographer. Zoom lenses therefore have more internal surfaces from which light can reflect. Modern high-end lenses typically contain better anti-reflective coatings. and can be pressed to simulate the streaks and polygonal flare shapes. fixed focal length (or prime) lenses are less susceptible to lens flare than zoom lenses. and so this may not be representative of how the flare will appear after the exposure. One common example is to use the EW-83DII hood with Canon's 17-40 f/4L lens. The image on the left shows a cropped region within a photo where a tree trunk partially obstructed a street light during a long exposure. as this flare artifact also depends on the length of the exposure (more on this later). Although this provides better protection. and can thus flare up quite significantly under even soft lighting. certain compositions can be very effective at minimizing flare. Another solution to using 35 mm lenses and hoods on a digital SLR with a crop factor is to purchase an alternative lens hood. mainly because the manufacturer knows that these will likely have the sun within or near the angle of view. The best solutions are those where both artistic intent and technical quality coexist. The EW-83DII hood works with both 1. photographing from a position where that source is obstructed can also reduce flare. it is still only adequate for the widest angle of view for a zoom lens. Even if the problematic light source is not located within the image. VISUALIZING FLARE WITH THE DEPTH OF FIELD PREVIEW The appearance and position of lens flare changes depending on the aperture setting of the photo. 
The depth of field preview button is usually found at the base of the lens mount. but beware that this will also darken the viewfinder image significantly. INFLUENCE OF LENS TYPE In general. Other than having an inadequate lens hood at all focal lengths.3X (surprisingly) crop factors as it was designed to cover the angle of view for a 24 mm lens on a full-frame 35 mm camera. Look for one which was designed for a lens with a narrower angle of view (assuming this still fits the hood mount on the lens). One effective technique is to place objects within your image such that they partially or completely obstruct any flare-inducing light sources. Placing a hand or piece of paper exterior to the side of the lens which is nearest the flare-inducing light source can mimic the effect of a proper lens hood. closely following the angle of view.If the lens hood is inadequate. Even changing the angle of the lens slightly can still at least change the appearance and position of the flare. based on where the lens is pointed and what is included within the frame.6X and 1. On the other hand. it is sometimes hard to gauge when this makeshift hood will accidentally become part of the picture. This button is still inadequate for simulating how "washed out" the final image will appear. The depth of field preview button can be used to simulate what the flare will look like for other apertures. Real-world lens hoods cannot protect against stray light completely since the "perfect" lens hood would have to extend all the way out to the furthest object. This is just a lens hood which adjusts to precisely match the field of view for a given focal length. Some older lenses made by Leica and Hasselblad do not contain any special coatings. Despite all of these measures. Although photographers never like to compromise their artistic flexibility for technical reasons. there is no perfect solution. Wide angle lenses are often designed to be more flare-resistant to bright light sources. 
The viewfinder image in a SLR camera represents how the scene appears only when the aperture is wide open (to create the brightest image). although this is usually either too limiting to the composition or not possible. Care should still be taken that this hood does not block any of the actual image light. more complicated zoom lenses often have to contain more lens elements. . The best approach is to of course shoot with the problematic light source to your back.at least when only considering its light-blocking ability.

OTHER NOTES

Lens filters, as with lens elements, need to have a good anti-reflective coating in order to reduce flare. Inexpensive UV, polarizing, and neutral density filters can all increase flare by introducing additional surfaces which light can reflect from. If flare was unavoidable and it produced a washed out image (due to veiling flare), the levels tool and local contrast enhancement can both help regain the appearance of contrast.

10. HYPERFOCAL DISTANCE

Focusing your camera at the hyperfocal distance ensures maximum sharpness from half this distance all the way to infinity. The hyperfocal distance is particularly useful in landscape photography, and understanding it will help you maximize sharpness throughout your image by making the most of your depth of field -- thereby producing a more detailed final print. Knowing it for a given focal length and aperture can be tricky; this section explains how hyperfocal distance is calculated, clears up a few misconceptions, and provides a hyperfocal chart calculator.

Front Focus | Back Focus | Front-Center Focus

Note how only the right image has words which are (barely) legible at all distances. Somewhere between the nearest and furthest subject distance lies a focal point which maximizes average sharpness throughout, although this is rarely halfway in between. The hyperfocal distance uses a similar concept, except its bounds are from infinity to half the focus distance (and the amount of softness shown above would be unacceptable).

WHERE IT'S LOCATED

Where is this optimal focusing distance? The hyperfocal distance is defined as the focus distance which places the maximum allowable circle of confusion at infinity. If one were to focus any closer than this--if even by the slightest amount--then at some distance beyond the focal plane there would be an object which is no longer within the depth of field. Alternatively, it is also true that if one focuses at a very distant object on the horizon (~infinity), then the closest distance which is still within the depth of field will also be the hyperfocal distance. I do not recommend using this distance "as is," but instead suggest using it as a reference point. To calculate its location precisely, use the hyperfocal chart at the bottom of this page.

The problem with the hyperfocal distance is that objects in the far background (treated as ~infinity) are on the extreme outer edge of the depth of field. These objects therefore barely meet what is defined to be "acceptably sharp." This seriously compromises detail, considering that most people can see features 1/3 the size of those used by most lens manufacturers for their circle of confusion (see "Understanding Depth of Field"). Sharpness at infinity is particularly important for those landscape images that are very background-heavy. Sharpness can be a useful tool for adding emphasis, but blind use of the hyperfocal distance can neglect regions of a photo which may require more sharpness than others. A finely detailed foreground may demand more sharpness than a hazy background (left). Alternatively, a naturally soft foreground can often afford to sacrifice some sharpness in favor of the background. Finally, some images work best with a very shallow depth of field (such as portraits), since this can separate foreground objects from an otherwise busy background.

When taking a hand-held photograph, one often has to choose where to allocate the most sharpness (due to aperture and shutter speed limitations). These situations call for quick judgment, and the hyperfocal distance is not always the best option.

What if your scene does not extend all the way to the horizon, or excludes the near foreground? Although the hyperfocal distance no longer applies, there is still an optimal focus distance between the foreground and background.

Many use a rule of thumb which states that you should focus roughly 1/3 of the way into your scene in order to achieve maximum sharpness throughout. I encourage you to ignore such advice since this distance is rarely optimal; the position actually varies with subject distance, aperture and focal length. The fraction of the depth of field which is in front of the focal plane approaches 1/2 for the closest focus distances, and decreases all the way to zero by the time the focus distance reaches the hyperfocal distance. The "1/3 rule of thumb" is correct at just one distance in between these two, but nowhere else. To calculate the location of optimal focus precisely, please use the depth of field calculator. Ensure that both the nearest and furthest distances of acceptable sharpness enclose your scene.
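The shifting front/rear split described above can be made concrete with a short calculation. The sketch below is only an illustration, using the simplified depth of field limits H·s/(H+s) and H·s/(H−s) (valid when the focus distance is much larger than the focal length); under those assumptions the front fraction reduces to (H − s)/(2H):

```python
def front_dof_fraction(focus_distance, hyperfocal_distance):
    """Fraction of the total depth of field lying in front of the
    focal plane; simplifies to (H - s) / (2H) under the simplified
    DoF limits. ~1/2 for very close focus, 0 at the hyperfocal distance."""
    s, H = focus_distance, hyperfocal_distance
    return (H - s) / (2 * H)

front_dof_fraction(0.1, 10.0)       # ~0.495: close focus, nearly half in front
front_dof_fraction(10.0, 10.0)      # 0.0: focused at the hyperfocal distance
front_dof_fraction(10.0 / 3, 10.0)  # ~0.333: the "1/3 rule" only holds here
```

Note that the fraction hits exactly 1/3 only when the focus distance is one third of the hyperfocal distance, which is why the rule of thumb fails everywhere else.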

The hyperfocal distance is best implemented when the subject matter extends far into the distance, and if no particular region requires more sharpness than another. Even so, I also suggest either using a more rigorous requirement for "acceptably sharp," or focusing slightly further such that you allocate more sharpness to the background. Manually focus using the distance markers on your lens, or by viewing the distance listed on the LCD screen of your compact digital camera (if present). You can calculate "acceptably sharp" such that any softness is not perceptible by someone with 20/20 vision, given your expected print size and viewing distance. Just use the hyperfocal chart at the bottom of the page, but instead modify the eyesight parameter from its default value. This will require using a much larger aperture number and/or focusing further away in order to keep the far edge of the depth of field at infinity. Using too large of an aperture number can be counterproductive since this begins to soften your image due to an effect called "diffraction." This softening is irrespective of an object's location relative to the depth of field, so the maximum sharpness at the focal plane can drop significantly. For 35 mm and other similar SLR cameras, this will become significant beyond about f/16. For compact digital cameras, there is usually no worry since these are often limited to a maximum of f/8.0 or less.
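For reference, the basic calculation behind such a hyperfocal chart can be sketched in a few lines. This is only an illustration, using the standard hyperfocal formula and assuming a 0.03 mm circle of confusion (a common default for 35 mm cameras; the chart below instead derives this value from print size, viewing distance and eyesight):

```python
def hyperfocal_distance_mm(focal_length_mm, f_stop, coc_mm=0.03):
    """Standard hyperfocal formula: H = f^2 / (N * c) + f."""
    return focal_length_mm ** 2 / (f_stop * coc_mm) + focal_length_mm

# A 24 mm lens at f/11: focus at ~1.77 m to keep everything
# from roughly half that distance out to infinity acceptably sharp.
h_mm = hyperfocal_distance_mm(24, 11)
```

Tightening the circle of confusion (for sharper-eyed viewers or larger prints) pushes the hyperfocal distance further away, which is exactly why the text above suggests focusing slightly beyond the chart's value.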

Want to learn more? Discuss this and other articles in our digital photography forums.

Hyperfocal Chart Calculator (interactive): inputs are Maximum Print Dimension, Viewing Distance, Eyesight and Camera Type.

Note: CF = "crop factor" (commonly referred to as the focal length multiplier)









11. MACRO CAMERA LENSES

A macro lens literally opens up a whole new world of photographic subject matter. It can even cause one to think differently about everyday objects. However, despite these exciting possibilities, macro photography is also often a highly meticulous and technical endeavor. Since fine detail is often a key component, macro photos demand excellent image sharpness, which in turn requires careful photographic technique. Concepts such as magnification, sensor size, depth of field and diffraction all take on new importance. This advanced tutorial provides a technical overview of how these concepts interrelate.

Photo courtesy of Piotr Naskrecki, author of "The Smaller Majority."

Magnification describes the size an object will appear on your camera's sensor, compared to its size in real-life. For example, if the image on your camera's sensor is 25% as large as the actual object, then the magnification is said to be 1:4 or 0.25X. In other words, the more magnification you have, the smaller an object can be and still fill the image frame.

Photograph at 0.25X Magnification (subject is further)

Photograph at 1.0X Magnification (subject is closer)

Diagram only intended as a qualitative illustration; horizontal distances not shown to scale. Magnification is controlled by just two lens properties: the focal length and the focusing distance. The closer one can focus, the more magnification a given lens will be able to achieve -- which makes sense because closer objects appear to become larger. Similarly, a longer focal length (more zoom) achieves greater magnification, even if the minimum focusing distance remains the same.
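As a rough illustration of how these two properties interact, the thin-lens sketch below estimates magnification from the focal length and the sensor-to-subject focusing distance. This is an idealized model (it ignores lens thickness, internal focusing and pupil asymmetry), not the exact behavior of any real lens:

```python
import math

def magnification(focal_length_mm, focus_distance_mm):
    """Thin-lens estimate: with 1/f = 1/s_o + 1/s_i and the
    sensor-to-subject distance d = s_o + s_i, the subject-to-lens
    distance s_o solves x^2 - d*x + f*d = 0, and m = s_i / s_o."""
    f, d = focal_length_mm, focus_distance_mm
    disc = d * d - 4 * f * d
    if disc < 0:
        raise ValueError("focus distance below the thin-lens minimum of 4f")
    s_o = (d + math.sqrt(disc)) / 2  # subject-to-lens distance
    s_i = d - s_o                    # lens-to-sensor distance
    return s_i / s_o

magnification(50, 200)   # 1.0: 1:1 occurs at d = 4f
magnification(100, 500)  # ~0.38
```

Consistent with the text, shortening the focusing distance or lengthening the focal length both raise the result.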

Magnification Calculator (interactive): inputs are Focusing Distance* and Lens Focal Length**.
*Measured as the distance between camera sensor and subject. **If using a full frame lens on a cropped sensor, you will need to use a focal length multiplier. Otherwise just use the actual lens focal length (without multipliers).

True macro lenses are able to capture an object on the camera's sensor at the same size as the actual object (termed a 1:1 or 1.0X macro). Strictly speaking, a lens is categorized as a "macro lens" only if it can achieve this 1:1 magnification. However, "macro" is often used loosely to also include close-up photography, which applies to magnifications of about 1:10 or greater. We'll use this loose definition of macro for the rest of the tutorial...
Note: Lens manufacturers inconsistently define the focusing distance; some use the sensor to subject distance, while others measure from the lens's front or center. If a maximum magnification value is available or measurable, this will provide more accurate results than the above calculator.

However, despite its usefulness, magnification says nothing about what photographers often care about most: what is the smallest object that can fill the frame? Unfortunately, this depends on the camera's sensor size -- of which there's a wide diversity these days.

Full Size Object (24 mm diameter)

Compact Camera at 0.25X Full Frame SLR Camera at 0.25X

All illustrations above are shown to scale. Compact camera example uses a 1/1.7" sensor size (7.6 x 5.7 mm). A US quarter was chosen because it has roughly the same height as a full frame 35 mm sensor. In the above example, even though the quarter is magnified to the same 0.25X size at each camera's sensor, the compact camera's smaller sensor is able to fill the frame with the image. Everything else being equal, a smaller sensor is therefore capable of photographing smaller subjects.
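This relationship is simple to quantify: divide the sensor's dimension by the magnification. A small sketch, using the sensor sizes quoted above:

```python
def smallest_subject_mm(sensor_short_side_mm, magnification):
    """Smallest subject that exactly fills the frame's shortest
    dimension at a given magnification."""
    return sensor_short_side_mm / magnification

smallest_subject_mm(24.0, 0.25)  # full frame (24 mm short side): 96 mm
smallest_subject_mm(5.7, 0.25)   # 1/1.7" compact (5.7 mm short side): 22.8 mm
```

At the same 0.25X, the compact camera can fill its frame with a subject roughly a quarter the size, matching the quarter-coin illustration.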
Smallest-Subject Calculator (interactive): inputs are Sensor Size and magnification; output is the smallest subject which can fill the image*.

*as measured along the photo's shortest dimension

In order for a camera lens to focus progressively closer, the lens apparatus has to move further from the camera's sensor (called "extension"). For low magnifications, the extension is tiny, so the lens is always at the expected distance of roughly one focal length away from the sensor. However, once one approaches 0.25-0.5X or greater magnifications, the lens becomes so far from the sensor that it actually behaves as if it had a longer focal length. At 1:1 magnification, the lens moves all the way out to twice the focal length from the camera's sensor:

Choose a Magnification: 1:2 (0.5X) | 1:1 (1.0X)
Note: Diagram assumes that the lens is symmetric (pupil magnification = 1).

The most important consequence is that the lens's effective f-stop increases*. This has all the usual characteristics, including an increase in the depth of field, a longer exposure time, a greater susceptibility to diffraction, etc. The only reason "effective" is even used is because many cameras still show the uncompensated f-stop setting (as it would appear at low magnification); in all other respects though, the f-stop really has changed. One can estimate the effective f-stop as follows:

Effective F-Stop = F-Stop x (1 + Magnification)

An aperture of f/2.8 therefore becomes more like f/5.6 at 1:1, and f/8 more like f/16. A rule of thumb is that at 1:1 the effective f-stop becomes about 2 stops greater than the value set using your camera. For example, if you are shooting at 0.5X magnification, then the effective f-stop for a lens set to f/4 will be somewhere between f/5.6 and f/6.3. Among other things, this will mean that you'll need a 2-3X longer exposure time, which might make the difference between being able to take a hand-held shot and needing to use a tripod. In practice, this rarely requires additional action by the photographer, since the camera's metering system automatically compensates for the drop in light when it calculates the exposure settings:

Reduced Light from 2X Magnification | After 8X Longer Exposure Time
Photo courtesy of Piotr Naskrecki.

*Technical Notes: The reason that the f-stop changes is because it actually depends on the lens's focal length. An f-stop is defined as the ratio of the focal length to the aperture diameter; a 100 mm lens with an aperture diameter of 28 mm, for example, will have an f-stop value of roughly f/3.6. In the case of a macro lens, the f-stop increases because the effective focal length increases -- not because of any change in the aperture itself (which remains at the same diameter regardless of magnification).

The above formula works best for normal lenses (near 50 mm focal length). Using it for macro lenses with much longer focal lengths, such as 105 mm or 180 mm, will tend to slightly underestimate the effective lens f-stop. For those interested in more accurate results, you will need to use the formula below along with knowing the pupil magnification of your lens:

Effective F-Stop = F-Stop x (1 + Magnification / Pupil Magnification)

Canon's 180 mm f/3.5L macro lens, for example, has a pupil magnification of 0.5 at 1:1, resulting in a 50% larger f-stop than if one were to have used the simpler formula. However, using the pupil magnification formula probably isn't practical for most situations. The biggest problem is that pupil magnification changes depending on focusing distance, which introduces yet another formula. It's also rarely published by camera lens manufacturers.
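Both versions of this estimate are easy to compute. A small sketch, following the article's own example numbers:

```python
def effective_f_stop(f_stop, magnification, pupil_magnification=1.0):
    """Effective F-Stop = F-Stop x (1 + Magnification / Pupil Magnification).
    With the default pupil magnification of 1 (a symmetric lens), this
    reduces to the simpler F-Stop x (1 + Magnification) estimate."""
    return f_stop * (1 + magnification / pupil_magnification)

effective_f_stop(2.8, 1.0)         # f/2.8 at 1:1 -> f/5.6 (two stops)
effective_f_stop(3.5, 1.0, 0.5)    # 180 mm example at 1:1 -> f/10.5
```

The second call shows the 50% penalty: the symmetric-lens estimate would give f/7, while a pupil magnification of 0.5 gives f/10.5.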

Other consequences of the effective aperture include autofocus ability and viewfinder brightness. Most SLR cameras lose the ability to autofocus when the effective f-stop becomes greater than f/5.6. As a result, lenses with minimum f-stop values of greater than f/2.8 will lose the ability to autofocus when at 1:1 magnification. In addition, the viewfinder may become unreasonably dark when at high magnification. To see what this would look like, one can always set their camera to f/5.6 or f/8 and press the "depth of field preview" button.

MACRO DEPTH OF FIELD

The more one magnifies a subject, the shallower the depth of field becomes. With macro and close-up photography, this can become razor thin -- often just millimeters:

Example of a close-up photograph with a very shallow depth of field.
Photo courtesy of Piotr Naskrecki.

Macro photos therefore usually require high f-stop settings to achieve adequate depth of field. Alternatively, one can make the most of what little depth of field they have by aligning their subject matter with the plane of sharpest focus. Regardless, it's often helpful to know how much depth of field one has available to work with:

Macro Depth of Field Calculator (interactive): inputs are Magnification, Sensor Size and Selected Lens Aperture; output is the Depth of Field.
Note: Depth of field defined based on what would appear sharp in an 8x10 in print viewed from a distance of one foot, using the standard circle of confusion for 35 mm cameras of 0.032 mm. For magnifications above 1X, output is in units of µm (aka microns, or 1/1000 of a mm).

Note that depth of field is independent of focal length, as long as the lenses are at the same f-stop; a 100 mm lens at 0.5X therefore has the same depth of field as a 65 mm lens at 0.5X. Also, unlike with low magnification photography, the depth of field remains symmetric about the focusing distance (front and rear depth of field are equal).

Technical Notes: Contrary to first impressions, there's no inherent depth of field advantage for smaller camera sensors. While it's true that a smaller sensor will have a greater depth of field at the same f-stop, this isn't a fair comparison, because the larger sensor can get away with a higher f-stop before diffraction limits resolution at a given print size. When both sensor sizes produce prints with the same diffraction-limited resolution, both will have the same depth of field. The only inherent advantage is that the smaller sensor requires a much shorter exposure time in order to achieve a given depth of field.

MACRO DIFFRACTION LIMIT

Diffraction is an optical effect which limits the resolution of your photographs -- regardless of how many megapixels your camera may have (see the diffraction in photography tutorial). Images are more susceptible to diffraction as the f-stop increases, and at high f-stop settings, diffraction becomes so pronounced that it begins to limit image resolution (the "diffraction limit"). After that, any subsequent f-stop increase only acts to further decrease resolution. This is especially true with macro lenses, because at high magnification the effective f-stop is actually what determines the diffraction limit -- not necessarily the one set by your camera. This is accounted for below:

Macro Diffraction Limit Calculator (interactive): inputs are Magnification, Sensor Size and Resolution (Megapixels); output is the Diffraction Limited F-Stop.

Keep in mind that the onset of diffraction is gradual, so apertures slightly larger or smaller than the above diffraction limit will not all of a sudden look better or worse. However, the above is only a theoretical limit; actual results will also depend on the characteristics of your specific lens. In addition, the above calculator is for viewing the image at 100% on-screen; small or large print sizes may mean that the diffraction-limited f-stop is actually greater or less than the one suggested above.

Don't be afraid to push the f-stop beyond the diffraction limit. With macro photography one is nearly always willing to trade some diffraction-induced softening for greater depth of field. With digital SLR cameras in general, aperture settings of f/11-f/16 provide a good trade-off between depth of field and sharpness, but f/22+ is sometimes necessary for extra (but softer) depth of field. Ultimately though, the best way to identify the optimal trade-off is to experiment -- using your particular lens and subject matter.

WORKING DISTANCE & FOCAL LENGTH

The working distance of a macro lens describes the distance between the front of your lens and the subject. This is different from the closest focusing distance, which is instead (usually) measured from the camera's sensor to the subject. At a given magnification, the working distance generally increases with focal length. For example, Canon's 100 mm f/2.8 macro lens has a working distance of just ~150 mm (6") at 1:1 magnification, whereas Canon's 180 mm f/3.5L macro lens has a more comfortable working distance of ~300 mm (12") at the same magnification.

Photo courtesy of Piotr Naskrecki

The working distance is a useful indicator of how much your subject is likely to be disturbed. While a close working distance may be fine for photographs of flowers and other stationary objects, it can disturb insects and other small creatures (such as causing a bee to fly off of a flower). A longer working distance can often make the difference between being able to photograph a subject and scaring them away. Close working distances also have the potential to block ambient light and create a shadow on your subject. Furthermore, a subject in grass or other foliage may make closer working distances unrealistic or impractical. This is often the most important consideration when choosing between macro lenses of different focal lengths.

Finally, another consideration is that shorter focal lengths often provide a more three-dimensional and immersive photograph, whereas the greater effective focal length of a longer macro lens will tend to flatten perspective. Using the shortest focal length available will help offset this effect and provide a greater sense of depth.
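To make the diffraction-limit discussion above concrete, here is one way such an estimate can be computed. This is only a sketch under stated assumptions (green light at 550 nm, and a limit where the Airy disk diameter reaches about two pixel widths); the article's calculator doesn't publish its exact criteria:

```python
import math

def diffraction_limited_f_stop(sensor_width_mm, sensor_height_mm,
                               megapixels, magnification,
                               wavelength_mm=550e-6):
    """Rough estimate: find the *effective* f-stop at which the Airy
    disk diameter (~2.44 * wavelength * N) spans two pixel widths,
    then convert to the camera-set f-stop via N_set = N_eff / (1 + m)."""
    pixels_wide = math.sqrt(megapixels * 1e6 * sensor_width_mm / sensor_height_mm)
    pixel_pitch_mm = sensor_width_mm / pixels_wide
    n_effective = 2 * pixel_pitch_mm / (2.44 * wavelength_mm)
    return n_effective / (1 + magnification)

# 12 MP full frame sensor: roughly f/13 at low magnification,
# dropping to roughly f/6 at 1:1 because of the effective f-stop.
diffraction_limited_f_stop(36, 24, 12, 0.0)
diffraction_limited_f_stop(36, 24, 12, 1.0)
```

This also illustrates why higher magnification halves the usable f-stop range at 1:1.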

CLOSE-UP IMAGE QUALITY

Higher subject magnification also magnifies imperfections from your camera lens. These include chromatic aberrations (magenta or blue halos along high contrast edges, particularly near the corners of the image), image distortion and blurring. All of these are often most apparent when using a non-macro lens at high magnification. The example below was taken at 0.3X magnification using a compact camera at its closest focusing distance. Since this is a standard non-macro lens, image quality clearly suffers:

Close-up at 0.3X using a Compact Camera (crops shown at 100% zoom)

Above images are depicted even after aggressive capture sharpening has been applied. While the central crop (in blue) isn't as sharp as one would hope, note how the chromatic aberrations and image softness are more pronounced further from the center of the image (red crop). A true macro lens, by contrast, achieves optimal image quality near its minimum focusing distance, and chromatic aberration is far less apparent.

12. USING WIDE ANGLE LENSES

A wide angle lens can be a powerful tool for exaggerating depth and relative size in a photo. However, it's also one of the most difficult types of lenses to learn how to use. This page dispels some common misconceptions, and discusses techniques for taking full advantage of the unique characteristics of a wide angle lens.

16mm ultra-wide angle lens - sunset near Death Valley, California, USA

OVERVIEW

A lens is generally considered to be "wide angle" when its focal length is less than around 35 mm (on a full frame camera; see camera lenses: focal length & aperture). This translates into an angle of view which is greater than about 55° across your photo's widest dimension. The definition of ultra-wide is a little fuzzier, but most agree that this realm begins with focal lengths somewhere around 20-24 mm and less. On a compact camera, wide angle is often when you've fully zoomed out; however, ultra-wide is usually never available without a special lens adapter. Regardless, the key concept is this: the shorter the focal length, the more you will tend to notice the unique effects of a wide angle lens.
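The ~55° figure quoted above follows directly from the standard rectilinear angle-of-view formula, AoV = 2·arctan(d / 2f). A quick sketch:

```python
import math

def angle_of_view_deg(focal_length_mm, sensor_dimension_mm=36.0):
    """Angle of view across one sensor dimension for a rectilinear lens:
    AoV = 2 * arctan(d / (2f)). Default d = 36 mm, the widest
    dimension of a full frame sensor."""
    return math.degrees(2 * math.atan(sensor_dimension_mm / (2 * focal_length_mm)))

angle_of_view_deg(35)  # ~54 degrees: right at the wide angle threshold
angle_of_view_deg(16)  # ~97 degrees: well into ultra-wide territory
```

For cropped sensors, substituting the sensor's actual width shows why the same focal length yields a narrower angle of view.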

What makes a wide angle lens unique? A common misconception is that wide-angle lenses are primarily used when you cannot step far enough away from your subject, but yet still want to capture all of this subject in a single camera frame. Unfortunately, if one were to only use it this way they'd really be missing out. In fact, wide angle lenses are often used for just the opposite: when you want to get closer to a subject! So, let's take a closer look at just what makes a wide angle lens unique:

• Its image encompasses a wide angle of view
• It generally has a close minimum focusing distance

Although the above characteristics might seem pretty basic, they result in a surprising range of possibilities. The rest of this page focuses on techniques for how to best use these traits for maximal impact in wide angle photography.

The above diagrams depict the maximum angles that light rays can take when hitting your camera's sensor. The location where light rays cross is not necessarily equal to the focal length, but is instead roughly proportional to this distance. The angle of view therefore still increases similarly.

WIDE ANGLE PERSPECTIVE

Obviously, a wide angle lens is special because it has a wide angle of view -- but what does this actually do? A wide angle of view means that both the relative size and distance are exaggerated when comparing near and far objects. This causes nearby objects to appear gigantic, and far away objects to appear unusually tiny and distant. The reason for this is the angle of view:

Wide Angle Lens (objects are very different sizes) | Telephoto Lens (objects are similar in size)

Even though the two cylinders above are the same distance apart when photographed with each lens, their relative sizes are very different when one fills the frame with the closest cylinder. With a wider angle of view, further objects comprise a much lower fraction of the total angle of view.

A misconception is that a wide angle lens affects perspective, but strictly speaking, this isn't true. Perspective is only influenced by where you are located when you take a photograph. However, in practical use, wide-angle lenses often cause you to move much closer to your subject -- which does affect perspective.

Exaggerated 3 inch Flowers in Cambridge, UK. Uses a 16mm ultra-wide angle focal length.

This exaggeration of relative size can be used to add emphasis and detail to foreground objects, while still capturing expansive backgrounds. If you plan on using this effect to full impact, you'll want to get as close as possible to the nearest subject in the scene. In the extreme wide angle example to the left, the nearest flowers are nearly touching the front of the lens, which greatly exaggerates their size. In real life, these flowers are only a few inches wide!

Just take extra care with the composition though; extremely close objects can move a lot inside the image due to camera movements of even a fraction of an inch. It can therefore become quite difficult to frame subjects the way you want. Further, because far away objects become quite small, sometimes it's a good idea to include some foreground elements to anchor the composition. Otherwise a landscape shot (taken at eye level) can appear overly busy and lack that extra something that's needed to draw the eye into the photo. Regardless, don't be afraid to get much closer! This is where wide angle really shines.

Finally, one needs to take extra caution when photographing people with a wide angle lens. Their nose, head or other features can become greatly out of proportion if you are too close to them when taking the photo. In the example to the right, note how the person's head has become abnormally large relative to their body.

Disproportionate body parts caused by a wide angle lens.

This can be a useful tool for adding drama or extra character to a candid shot, but certainly isn't how most people would want to be depicted in a standard portrait. This proportionality is in part why narrower focal lengths are much more common for traditional portrait photography.

CONVERGING VERTICALS

Whenever a wide angle lens is pointed above or below the horizon, it will cause otherwise parallel vertical lines to appear as if they are converging. Any lens does this -- even telephoto lenses -- it's just that a wider expanse of converging lines is visible with a wide angle lens. However, with a wide angle lens, even small changes in composition will alter the location of the vanishing point by a large amount -- resulting in a big difference in how sharply lines seem to converge.

Move your mouse over the image below to see a simulation of what happens when you point your camera above or below the horizon:

Camera Aimed Above the Horizon | Camera Aimed Below the Horizon

In each case, the vanishing point is the direction that you are pointing your camera. In the above example, the vanishing point didn't change by a whole lot as a fraction of the total image -- but this had a huge impact on the building. In this case, the building appears to be either falling in on or away from the viewer. Although converging vertical lines are generally avoided in architectural photography for the above reasons, one can also sometimes use these to their advantage:

left: Wide angle shot of trees on Vancouver Island, Canada; right: King's College Chapel, Cambridge, UK.

In the above trees example, a wide angle lens was used to capture the towering trees in a way that makes them appear to be enveloping the viewer. Similarly, the architectural photo to the right was taken close to the door in order to exaggerate the apparent height of the chapel. On the other hand, this also gives the unwanted appearance that the building is about to fall over backwards.

The only ways to reduce converging verticals are to either (i) aim your camera closer to the horizon, even if this means that you'll capture a lot of ground in addition to the subject (which you crop out later); (ii) get much further from your subject and use a lens with a longer focal length; (iii) use Photoshop or other software to distort the photo so that vertical lines diverge less; or (iv) use a tilt/shift lens to control perspective. Unfortunately all of these options have their drawbacks, whether it be resolution in the case of (i) and (iii), convenience/perspective in the case of (ii), technical knowledge and a slight reduction in image quality in the case of (iii), or cost in the case of (iv).

INTERIORS & ENCLOSED SPACES

A wide angle lens can be an absolute requirement in enclosed spaces, simply because one cannot move far enough away from the subject to get all of it in the photo (using a normal lens). A common example is photography of interior rooms or other indoor architecture. In the examples below, you could not move more than a few feet in any direction -- and yet the photos do not give any appearance of being cramped.

left: 16mm focal length - Antelope Canyon, Arizona, USA; right: Spiral staircase in New Court, St John's College, Cambridge.

In both examples above, a big reason the compositions work is that the lines look as if they are coming from all directions and converging in the middle of the image -- even though they are actually all parallel to one another. This kind of photography is also perhaps the easiest way to make the most of a wide angle lens -- in part because it forces you to be close to the subject.

POLARIZING FILTERS

Using a polarizing camera lens filter should almost always be avoided with a wide angle lens. A key trait of a polarizer is that its influence varies depending on the angle of the subject relative to the sun. When you face your camera 90° from where the sun is coming from, you will maximize its effect; whenever you face your camera directly away from or into the sun, you will minimize it. With an ultra-wide angle lens, one edge of your image frame might be nearly facing the sun, whereas the opposing edge might be facing 90° away from the sun. This means that you will be able to see the changing influence of your polarizer across a single photo, which is usually undesirable.

Coral Reef National Park, Utah, USA.

In the example above, the blue sky clearly changes in saturation and brightness as you move across the image from left to right.

A wide angle lens is also much more susceptible to lens flare, in part because the sun is much more likely to enter into the composition. It can also be difficult to effectively shield the sides of the lens from stray light using a lens hood, since this hood cannot also block any of the image-forming light across the wide angle of coverage.

MANAGING LIGHT ACROSS A WIDE ANGLE

A common hurdle with wide angle lenses is strong variation in the intensity of light across an image. For example, in landscape photography the foreground foliage is often much less intensely lit than the sky or a distant mountain. Using an ordinary exposure, uneven light can make some parts of the image over-exposed, while also leaving other parts underexposed -- even though our eye would have adjusted to this changing brightness as we looked in different directions. This often results in an over-exposed sky and/or an under-exposed foreground. One therefore needs to take extra care when determining the desired exposure, and most photographers use what is called a graduated neutral density (GND) filter to overcome this uneven lighting.

GND filter example - lighthouse in Nora, Sardinia.

In the example to the left, the GND filter partially obstructed some of the light from the bright sky, while also gradually letting in more and more light for positions progressively lower in the photo. At the bottom of the photo, the GND filter let in the full amount of light. Move your mouse over the image above to see what it would have looked like without a GND filter. Also take a look at the tutorials on camera lens filters and high dynamic range (HDR) for additional examples.

Note that nowhere on this page is it mentioned that a wide angle lens has a greater depth of field. Unfortunately, this is another common misconception. If you are magnifying your subject by the same amount (meaning that they fill the image frame by the same proportion), then a wide angle lens will give the same* depth of field as a telephoto lens.

*Technical Note: for situations of extreme magnification, the depth of field may differ by a small amount. However, this is an extreme case and is not relevant for the uses discussed on this page. See the tutorial on depth of field for a more detailed discussion of this topic.

The reason that wide angle lenses get the reputation of improving depth of field is not because of any inherent property of the lens itself; it's because of how they're most often used. People rarely get close enough to their subject to have it fill the same amount of the frame with a wide angle lens as they do with lenses that have narrower angles of view.
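The focal-length independence can be sketched with a standard approximate depth of field formula, DoF ≈ 2·N·c·(m+1)/m², where N is the f-number, c the circle of confusion and m the magnification (the specific values below are illustrative; the approximation holds for moderate magnifications):

```python
# Approximate depth of field: note there is no focal length term at all.
# A wide angle and a telephoto lens that frame the subject identically
# (same magnification m) therefore give essentially the same depth of field.
def depth_of_field_mm(f_number, magnification, coc_mm=0.03):
    m = magnification
    return 2 * f_number * coc_mm * (m + 1) / m**2

# Same framing (m = 0.05) at f/8 -- identical result for any focal length:
print(round(depth_of_field_mm(8, 0.05), 1))  # 201.6 mm
```

The only way focal length enters the picture is indirectly, through how far away you must stand to achieve a given magnification.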

While there are no steadfast rules, you can often use your wide angle lens most effectively if you use the following four guidelines as starting points:
(1) Subject Distance. Get much closer to the foreground and physically immerse yourself amongst your subject.

A wide angle lens exaggerates the relative sizes of near and far subjects. To emphasize this effect it's important to get very close to your subject. Wide angle lenses also typically have much closer minimum focusing distances, and enable your viewer to see a lot more in tight spaces.
(2) Organization. Carefully place near and far objects to achieve clear compositions.

Wide angle shots often encompass a vast set of subject matter, so it's easy for the viewer to get lost in the confusion. Experiment with different techniques of organizing your subject matter. Many photographers try to organize their subject matter into clear layers, and/or to include foreground objects which might guide the eye into and across the image. Other times it's a simple near-far composition with a close-up subject and a seemingly equidistant background.
(3) Perspective. Point your camera at the horizon to avoid converging verticals; otherwise be acutely aware of how these will impact your subject.

Even slight changes in where you point your camera can have a huge impact on whether otherwise parallel vertical lines will appear to converge. Pay careful attention to architecture, trees and other geometric objects.
(4) Distortion. Be aware of how edge and barrel distortion may impact your subject.

The two most prevalent forms of distortion are barrel and edge distortion. Barrel distortion causes otherwise straight lines to appear bulged if they don't pass through the center of the image. Edge distortion causes objects at the extreme edges of the frame to appear stretched in a direction leading away from the center of the image.
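Barrel distortion is often described with a simple radial model: a point at radius r from the image center maps to roughly r·(1 + k·r²). This is only an illustrative sketch (the coefficient k below is made up; real lenses are characterized individually and often need higher-order terms):

```python
# Simple radial distortion model: points at normalized radius r from the image
# center (0, 0) are scaled by (1 + k * r^2). A positive k models barrel
# distortion -- straight lines that miss the center appear to bulge outward.
def distort(x, y, k=0.2):
    r_squared = x * x + y * y
    scale = 1 + k * r_squared
    return (x * scale, y * scale)

print(distort(0.0, 0.0))  # (0.0, 0.0) -- the center is unaffected
print(distort(0.5, 0.5))  # points further from the center are pushed out more
```

Because the displacement grows with radius, lines through the center stay straight while off-center lines bow -- exactly the bulging effect described above.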

13. USING TELEPHOTO LENSES
You've probably heard that telephoto lenses are for enlarging distant subjects, but they're also a powerful artistic tool for affecting the look of your subject. They can normalize the size and distance difference between near and far objects, and can make the depth of field appear more shallow. Telephoto lenses are therefore useful not only for wildlife photography, but also for landscape photography. Read on to learn techniques for utilizing the unique characteristics of a telephoto lens . . .

300 mm telephoto lens - two cheetahs lying behind a log

A lens is generally considered to be "medium telephoto" when its focal length is greater than around 70 mm (on a full frame; see camera lenses: focal length & aperture). However, many don't consider a lens a "full telephoto" lens until its focal length becomes greater than around 135 mm. This translates into an angle of view which is less than about 15° across your photo's widest dimension. On a compact camera with a 3-4X or greater zoom lens, telephoto is simply when you've fully zoomed in. However, some compact cameras might require a special adapter in order to achieve full telephoto. Regardless, the key concept is this: the longer the focal length, the more you will tend to notice the unique effects of a telephoto lens.
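The focal length figures above can be tied to angle of view with the standard thin-lens approximation, AOV = 2·atan(d / 2f), where d is the sensor dimension (36 mm across the widest dimension of a full frame sensor):

```python
import math

# Angle of view across the widest dimension of a full frame (36 mm) sensor.
def angle_of_view_deg(focal_mm, sensor_mm=36.0):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

print(round(angle_of_view_deg(135), 1))  # ~15.2 deg -- the "full telephoto" threshold
print(round(angle_of_view_deg(70), 1))   # ~28.8 deg -- "medium telephoto"
```

This also shows why the effects scale continuously: doubling the focal length roughly halves the angle of view.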

The above diagrams depict the maximum angles at which light rays can hit your camera's sensor. The location where light rays cross is not necessarily equal to the focal length, but is instead roughly proportional to this distance; the angle of view therefore still changes with focal length in a similar way. Why use a telephoto lens? A common misconception is that telephoto lenses are just for capturing distant objects. While this is a legitimate use, there's a whole array of other possibilities, and oftentimes distant objects are better photographed by simply getting a little closer. Yes, this isn't practical with a lion, but a pet or a person will likely appear better when they aren't photographed from afar. Why? The distance from your subject actually changes your photo's perspective, even if your subject is still captured at the same size in your camera frame. Confused? More on this in the next section...

A telephoto lens is special because it has a narrow angle of view -- but what does this actually do? A narrow angle of view means that both the relative size and distance are normalized when comparing near and far objects. This causes nearby objects to appear similar in size compared to far away objects -- even if the closer object would actually appear larger in person. The reason for this is the angle of view:

Wide Angle Lens (objects are very different sizes)

Telephoto Lens (objects are similar in size)

Even though the two cylinders above are the same distance apart, their relative sizes are very different when one uses either a wide angle lens or a telephoto lens to fill the frame with the closest cylinder. With a narrow angle of view, further objects comprise a much greater fraction of the total angle of view. A misconception is that a telephoto lens affects perspective, but strictly speaking, this isn't true. Perspective is only influenced by where you are located when you take a photograph. However, in practical use, the very fact that you're using a telephoto lens may mean that you're far from your subject -- which does affect perspective.
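The normalization effect falls straight out of the geometry: an object's apparent (angular) size is roughly proportional to its size divided by its distance, so the near:far size ratio equals the ratio of the two distances. The distances below are made-up illustrative numbers:

```python
# Two identical objects, `separation_m` apart along the line of sight.
# The near one appears larger by the ratio of their distances from the camera.
def apparent_size_ratio(near_m, separation_m):
    """How many times larger the near object looks than an identical far one."""
    return (near_m + separation_m) / near_m

# Two identical cylinders 3 m apart:
print(apparent_size_ratio(1, 3))   # shot from 1 m (wide angle): near looks 4.0x larger
print(apparent_size_ratio(30, 3))  # shot from 30 m (telephoto): only 1.1x larger
```

Standing far back (as a telephoto lens encourages) drives the ratio toward 1, which is exactly the "normalized" look described above.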

Objects appear in proper proportion to one another. Uses a 135 mm telephoto focal length. This normalization of relative size can be used to give a proper sense of scale. For full impact, you'll want to get as far as possible from the nearest subject in the scene (and zoom in if necessary). In the telephoto example to the left, the people in the foreground appear quite small compared to the background building. On the other hand, if a normal focal length lens were used, and one were closer to the foreground people, then they would appear much larger relative to the size of the building. However, normalizing the relative size too much can make the scene appear static, flat and uninteresting, since our eyes generally expect closer objects to be a little larger. Taking a photo of someone or something from very far away should therefore be done only when necessary. In addition to relative size, a telephoto lens can also make the distance between objects appear compressed. This can be beneficial when you're trying to emphasize the number of objects, or to enhance the appearance of congestion:

Exaggerated Crowd Density

Exaggerated Flower Density

left: 135 mm focal length - congestion of punters on the River Cam - Cambridge, UK. right: telephoto shot of flowers in Trinity College, Cambridge, UK.

In the example to the left, the boats all appear to be right next to each other -- even though they appeared much further from each other in person. On the right, the flowers and trees appear stacked on top of one another, when in reality this image spans around 100 meters.


320 mm detail shot of a parrot Perhaps the most common use for a telephoto lens is to bring otherwise small and distant subjects closer, such as wildlife. This can enable a vantage on subjects not otherwise possible in real life. One should therefore pay careful attention to far off detail and texture.

telephoto sunset Furthermore, even if you were able to get a little closer to the subject, this may adversely impact the photograph because being closer might alter the subject's behavior. This is especially true when trying to capture candid photographs of people; believe it or not, people usually act differently when they're aware that someone is taking their photograph. Finally, consider this: since a telephoto lens encompasses a much narrower angle of view, you as the photographer can be much more selective with what you choose to contain within your camera frame. You might choose to capture just the region of the sky right around the sunset (left), just the surfer on their wave, or just a tight region around someone's interesting facial expression. This added selectivity can make for very simple and focused compositions.


130 mm telephoto shot using layered subject matter. Photo taken on Mt. Baldy, California. Standard photography teaching will often tell you that "a wide angle lens is for landscapes" and "a telephoto lens is for wildlife." Nonsense! Very powerful and effective compositions can still be made with the "inappropriate" type of lens. However, such claims aren't completely unfounded. Telephoto lenses compress the sense of depth, whereas wide angle lenses exaggerate the sense of depth. Since spaciousness is an important quality in many landscapes, the rationale is that wide angle lenses are therefore better suited.

However, telephoto landscapes just require different techniques. If you want to improve the sense of depth, a common telephoto technique is to compose the scene so that it's comprised of layered subject matter at distinctly different distances. For example, the closest layer could be a foreground set of trees, the subsequent layers could be successively more distant hillsides, and the furthest layer could be the sky, ocean, or some combination of the two.

165 mm telephoto shot using layered subject matter - Mt. Hamilton, California

320 mm focal length

In the above examples, the separate layers of trees, clouds and background mountainside also give the first example more depth; the image would have seemed much less three-dimensional without the foreground layer of trees on the hill. A telephoto lens can also enhance how photography in fog, haze or mist affects an image, since these lenses make distant objects appear closer.

TELEPHOTO LENSES & DEPTH OF FIELD

Note that I've been careful to say that telephoto lenses only decrease depth of field for a given subject distance. A telephoto lens itself does not have less depth of field. Unfortunately, this is a common misconception. If you are magnifying your subject by the same amount (meaning that they fill the image frame by the same proportion), then a telephoto lens will give the same* depth of field as other lenses.

*Technical Note: for situations of extreme magnification, the depth of field may differ by a small amount. However, this is an extreme case and is not relevant for the uses discussed on this page. See the tutorial on depth of field for a more detailed discussion of this topic.

The reason that telephoto lenses get the reputation of decreasing depth of field is not because of any inherent property of the lens itself. It's because of how they're most often used. People usually magnify their subject matter a lot more with telephoto lenses than with lenses that have wider angles of view; people generally don't get further from their subject, so this subject ends up filling more of the frame. It's this higher magnification that causes the shallower depth of field.

POINT OF FOCUS

For a given subject distance, a telephoto lens captures the scene with a much shallower depth of field than do other lenses. It's therefore critical that you achieve pinpoint accuracy with your chosen point of focus.

shallow depth of field telephoto shot of a cat amongst leaves

In the above example, the foreground fence was less than a foot from the cat's face -- yet it appears extremely out of focus due to the shallow depth of field. Even a misfocus of an inch could therefore have caused the cat's eyes to become blurred, which would have ruined the intent of the photograph. Fortunately, telephoto lenses are rarely subject to the "focus and recompose" errors caused by shorter focal lengths -- primarily because one is often much further from their subject. This means that you can use your central autofocus point to achieve a focus lock, and then recompose your frame without worry of changing the distance at which objects are in sharpest focus (see the tutorial on camera autofocus for more on this topic).

distracting out of focus background

However, a telephoto lens does enlarge out of focus regions (called "bokeh"), since it enlarges the background relative to the foreground. This may give the appearance of a shallower depth of field. Out of focus distant objects are also made much larger, which enlarges their blur. One should therefore pay close attention to how a background will look and be positioned when it's out of focus. For example, poorly-positioned out of focus highlights may prove distracting for a foreground subject (such as in the parrot example), and/or all other seemingly equidistant background objects.

MINIMIZING CAMERA SHAKE

A telephoto lens may have a significant impact on how easy it is to achieve a sharp handheld photograph. Longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands. Think of this as if one were trying to hold a laser pointer steady: when shining this pointer at a nearby object its bright spot ordinarily jumps around less than for objects further away.

Simulation of what happens when you try to aim a laser pointer at a point on a distant wall.

Similarly, the larger absolute movements on the further wall are similar to what happens with camera shake when you are using a telephoto lens (since objects become more magnified). Minimizing camera shake requires either shooting using a faster shutter speed or holding your camera steadier. To achieve a faster shutter speed you will need to use a larger aperture (such as going from f/8.0 to f/2.8) and/or increase the ISO speed. However, both of these options have drawbacks, since a larger aperture decreases depth of field, and a higher ISO speed increases image noise. To hold your camera steadier, you can (i) use your other hand to stabilize the lens, (ii) try taking the photo while crouching, or (iii) lean your body or lens against another solid object. However, using a camera tripod or monopod is the only truly consistent way to reduce camera shake.

14. TAKING PHOTOS IN FOG, MIST OR HAZE

Photography in fog, mist or haze can give a wonderfully moody and atmospheric feel to your subjects. However, it's also very easy to end up with photos that look washed-out and flat. This techniques article uses examples to illustrate how to make the most out of photos in these unique shooting environments.

The trick is knowing how to make use of these unique assets -- without also having them detract from your subject. In this techniques article, we'll primarily talk about fog, but the photographic concepts apply similarly to mist or haze.

Clare Bridge in the fog at night (version 1) - Cambridge, UK.

OVERVIEW

Fog usually forms in the mid to late evening, and often lasts until early the next morning. It is also much more likely to form near the surface of water that is slightly warmer than the surrounding air.

Photographing in the fog is very different from the more familiar photography in clear weather. Scenes are no longer necessarily clear and defined, and they are often deprived of contrast and color saturation:

Examples of photos which appear washed-out and de-saturated due to the fog. Both photos are from St John's College, Cambridge, UK.

In essence, fog is a natural soft box: it scatters light sources so that their light originates from a much broader area. Compared to a street lamp or light from the sun on a clear day, this dramatically reduces contrast:

A Lamp or the Sun on a Clear Day (High Contrast) | Light in the Fog, Haze or Mist (Low Contrast)

Scenes in the fog are also much more dimly lit -- often requiring longer exposure times than would otherwise be necessary. In addition, fog makes the air much more reflective to light, which often tricks your camera's light meter into thinking that it needs to decrease the exposure. Just as with photographs in the snow, fog therefore usually requires dialing in some positive exposure compensation.

In exchange for all of these potential disadvantages, fog can be a powerful and valuable tool for emphasizing the depth, lighting, and shape of your subjects. As you will see later, these traits can even make scenes feel mysterious and uniquely moody -- an often elusive, but well sought after prize for photographers.

EMPHASIZING DEPTH

Mathematical Bridge in Queens' College, Cambridge, UK.

As objects become progressively further from your camera, not only do they become smaller, but they also lose contrast -- and sometimes quite dramatically. In the example to the left, there are at least four layers of trees which cascade back towards the distant bridge. Notice how both color saturation and contrast drop dramatically with each successively distant tree layer. The furthest layer, near the bridge, is reduced to nothing more than a silhouette, whereas the closest layer has near full color and contrast.

Southwest coast of Sardinia in haze.

This can be both a blessing and a curse, because while it exaggerates the difference between near and far objects, it also makes distant objects difficult to photograph in isolation. Although there are no steadfast rules with photographing in the fog, it's often helpful to have at least some of your subject close to the camera. This way a portion of your image can contain high contrast and color, while also hinting at what everything else would look like otherwise. This also serves to add some tonal diversity to the scene.

EMPHASIZING LIGHT

View of King's College Bridge from Queens' College, Cambridge, UK. Spires above entrance to King's College, Cambridge, during BBC lighting of King's Chapel for the boy's choir.

Water droplets in the fog or mist make light scatter a lot more than it would otherwise. This greatly softens light, but also makes light streaks visible from concentrated or directional light sources. The classic example is the photo in a forest with early morning light: when the photo is taken in the direction of this light, rays of light streak down from the trees and scatter off of this heavy morning air.

The trick to making light rays stand out is to carefully plan your vantage point. Light rays will be most apparent if you are located close to (but not at) where you can see the light source directly. This "off-angle" perspective ensures that the scattered light will both be bright and well-separated from the darker looking air. In the example to the right, light streaks are clearly visible from an open window and near the bridge; however, when the camera was moved just a few feet backwards, the streaks from the window were no longer visible. On the other hand, if the fog is very dense or the light source is extremely concentrated, then the light rays will be clearly visible no matter what vantage point you have.

In the example to the left, where a large tree partially obstructs an orange lamp, the scattered light was much brighter relative to the sky because it was taken after sunset. Additionally, the second example above was taken in air that was otherwise not visibly foggy, but the light sources were extremely intense and concentrated.
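The falloff of contrast with distance can be given a rough numeric form using the Beer-Lambert law: contrast is attenuated approximately exponentially with the distance light travels through the fog. The extinction coefficient below is an arbitrary illustrative value; real fog varies enormously in density:

```python
import math

# Beer-Lambert sketch: fraction of original contrast surviving after light
# travels `distance_m` meters through fog with the given extinction coefficient.
def remaining_contrast(distance_m, extinction_per_m=0.05):
    return math.exp(-extinction_per_m * distance_m)

for d in (10, 50, 150):
    print(d, round(remaining_contrast(d), 2))
# 10  0.61  -> near-full contrast and color
# 50  0.08  -> washed out
# 150 0.0   -> reduced to a silhouette
```

The exponential shape is why each successively distant tree layer fades so much faster than the last, and why distant subjects end up as silhouettes.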

SHAPES & SILHOUETTES

Swan at night on the River Cam, Cambridge.

Fog can emphasize the shape of subjects because it downplays their internal texture and contrast. Often, the subject can even be reduced to nothing more than a simple silhouette. In the example to the right, the bright fog background contrasts prominently with the relatively darker swan. Furthermore, the swan's outline has been greatly exaggerated because the low-lying fog has washed out nearly all remains of the wall behind the swan.

Just make sure to expose based on the fog -- and not the subject -- if you want this subject to appear as a dark silhouette. Alternatively, you could dial in a negative exposure compensation to make sure that objects do not turn out too bright. You will of course also need to pay careful attention to the relative position of objects in your scene; otherwise one object's outline or border may overlap with another object.

Rear gate entrance to Trinity College, Cambridge, UK.

In the photo to the left, each tree silhouette behind this gate is visible in layers, because the branches become progressively fainter the further they are in the distance. The closest object -- a cast iron gate -- stands out much more than it would otherwise against this tangled backdrop of tree limbs.

PHOTOGRAPHING FROM WITHOUT

You've perhaps heard of the saying: "it's difficult to photograph a forest from within." This is because it can be hard to get a sense of scale by photographing just a cluster of trees -- you have to go outside the forest so you can see its boundaries, and not have individual trees hamper this perspective. The very same technique can often be very helpful with fog or haze.

left: Mt Rainier breaking through the clouds - Washington, USA. right: sunset above the haze on Mt Wilson - Los Angeles, California, USA.

This way you can capture the unique atmospheric effects of fog or haze, but without also incurring its contrast-reducing disadvantages (at least for objects outside the fog/haze). In the case of fog, from a distance it's really nothing more than low-lying clouds.

TIMING THE FOG FOR MAXIMAL IMPACT

Just as with weather and clouds, timing when to take a photo in the fog can also make a big difference with how the light appears. Fog can dramatically change the appearance of a subject depending on where it is located, and how dense it is in that location; it can move in clumps and vary in thickness with time. However, these differences are sometimes difficult to spot if they happen slowly, since our eyes adjust to the changing contrast. Try moving your mouse over the labels below to see how the scene changed over just 6 minutes:

First Photograph | +2 minutes | +6 minutes

Another important consideration is the apparent texture of fog. Even if you time the photograph for when you feel there's the most interesting distribution of fog, this fog may not retain its texture if the exposure time is not short enough. Move your mouse over the image below to see how the exposure time affects the appearance of mist above the water:

Shorter Exposure (1 second) | Longer Exposure (30 seconds)

Clare Bridge in low-lying fog at night (version 2) - Cambridge, UK

Note that the above image is the very same bridge that was shown as the first image in this article. In general, the shutter speed needs to be a second or less in order to prevent the fog's texture from smoothing out. On the other hand, you might be able to get away with longer exposures when the fog is moving more slowly, or when your subject is not magnified by as much. Although the shorter exposure does a much better job of freezing the fog's motion, it also has a substantial impact on the amount of image noise when viewed at 100%. Sometimes freezing the motion of fog therefore isn't an option if you want to avoid noise, since (i) fog is most likely to occur in the late evening through to the early morning (when light is low) and (ii) fog greatly reduces the amount of light reaching your camera after reflecting off the subject.

BEWARE OF CONDENSATION

If water droplets are condensing out of the air, then you can be assured that these same droplets are also likely to condense on the surface of your lens or inside your camera. This can be a common problem with fog photography. If your camera is at a similar temperature to the air, then you may not notice any condensation at all. However, expect substantial condensation if you previously had your camera indoors and it is warmer outside.

Fortunately, there's an easy way to minimize condensation caused by going from indoors to outdoors. Before taking your camera and lens into a warmer humid environment, place all items within a plastic bag and ensure it is sealed airtight. You can then take these sealed bags outdoors, but you have to wait until everything within the bags has reached the same temperature as outdoors before you open the bags. For large camera lenses with many elements, this can take 30 minutes or more if the indoor-outdoor temperature difference is big.
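Condensation risk can be estimated from the dew point: droplets form on any surface (such as a chilled lens) that is colder than the dew point of the surrounding air. Below is a rough sketch using the Magnus approximation; the coefficients are the commonly used constants, but the scenario numbers are made up for illustration:

```python
import math

# Magnus approximation for the dew point, with the commonly used
# coefficients b = 17.62 and c = 243.12 (temperatures in Celsius).
def dew_point_c(temp_c, rel_humidity_pct, b=17.62, c=243.12):
    gamma = math.log(rel_humidity_pct / 100) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# Camera chilled to 18 C indoors, taken into 25 C air at 80% humidity:
air_dew_point = dew_point_c(25, 80)
print(round(air_dew_point, 1))  # ~21.3 C
print(18 < air_dew_point)       # True -> expect a fogged-up lens
```

The sealed-bag trick works because the lens warms up to the outdoor temperature before it is ever exposed to the humid air.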

However, sometimes a little condensation is unavoidable. Just make sure to bring a lens cloth with you for repeatedly wiping the front of your lens.

15. COMMON OBSTACLES IN NIGHT PHOTOGRAPHY

Night photography has the ability to take a scene and cast it in an unusual light -- much like the "golden hour" surrounding sunrise and sunset can add an element of mood and uniqueness to a sunlit scene. Due to lack of familiarity, and since night photos are often highly technical, many photographers simply put their camera away and "call it a day" after sunset. This section aims to familiarize the photographer with obstacles they might encounter at night, and discusses how to surmount many of them.

BACKGROUND

Night photography is subject to the same set of constraints as daylight photography -- namely aperture, shutter speed and light sensitivity -- although these are all often pushed to their extremes. Just as how sports and landscape photography push the camera's limits for shutter speed and aperture, respectively, night photography often demands technical extremes in both (see below). For this reason, the abundance and diversity of night photography has been closely tied to the advance of photographic technology.

Early film photographers shied away from capturing night scenes because these required prohibitively long exposures to maintain adequate depth of field, or produced unacceptable amounts of image noise. Furthermore, a problem with film called "reciprocity failure" means that progressively more light has to reach the film as the exposure time increases -- leading to diminishing returns compared to shorter exposures. Finally, even if a proper exposure had been achieved, the photographer would then have to wait for the film to be developed to assess whether it had been captured to their liking -- a degree of uncertainty which is often prohibitive after one has stayed up late and spent minutes to hours exposing each photo.

TRADE-OFFS IN DIGITAL NIGHT PHOTOGRAPHY

Fortunately, times have changed since the early days of night photography. Modern digital cameras are no longer limited by reciprocity failure and provide instant feedback -- greatly increasing the enjoyment and lowering the risk of investing the time to take photographs at odd hours. Even with all these advances, however, digital night photography is still not without its technical limitations. Photos are unavoidably limited by the trade-off between depth of field, exposure time and image noise. The diagram below illustrates all available combinations of these for a typical night photo under a full moon, with constant exposure:

Note the trade-off incurred by moving in the direction of any of the four scenarios above. Most static nightscape photos have to choose between scenarios 2, 3 and 4. Each scenario often has a technique which can minimize its trade-off; these include image averaging, stacking and multiple focal planes (to be added). Also note how even the minimum possible exposure time above is one second -- making a sturdy camera tripod essential for any photos at night.
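Film's reciprocity failure can be given a rough numeric form with the Schwarzschild model: the effective exposure grows as intensity × t^p with p < 1, so the actual exposure time must exceed the metered one. The exponent p ≈ 0.8 below is an assumed, film-dependent value -- real films publish their own correction tables:

```python
# Schwarzschild sketch of reciprocity failure: to record as much as a metered
# exposure of `metered_s` seconds would suggest, film needs metered_s**(1/p).
def corrected_exposure_s(metered_s, p=0.8):
    return metered_s ** (1 / p)

print(round(corrected_exposure_s(60)))  # a metered 1 min needs roughly 167 s on film
print(corrected_exposure_s(1))          # short exposures are essentially unaffected
```

This is the "diminishing returns" mentioned above: the longer the metered exposure, the larger the extra correction -- a penalty digital sensors simply do not have.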

The diagram above also does not consider additional constraints: decreased resolution due to diffraction, and increased susceptibility to fixed pattern noise with longer exposures. Fixed pattern noise is the only disadvantage to progressively longer exposures in digital photography (other than also possibly being impractical), much like the trade-off of reciprocity failure in film.

IMPORTANCE OF MOONLIGHT

Just as how daylight photographers pay attention to the position and angle of the sun, night photographers should also pay careful attention to the moon. A full moon can be a savior for reducing the required exposure time and allowing for extended depth of field, while a moonless night greatly increases star visibility. A low-lying moon can create long shadows on cross-lit objects, whereas an overhead moon creates harsher, downward shadows. An additional variable is that the moon can have varying degrees of intensity, depending where it is during its 29.5 day cycle of waxing and waning. As a result, the intensity of the moon can be chosen at a time which provides the ideal balance between artificial light (streetlamps) and moonlight.

Gauging exposure times during a full moon can be tricky: use f/2.0 and 30 seconds at ISO100 as a starting point (if the subject is diffuse and directly lit), then adjust towards scenarios 1-4 accordingly if OK.

Another factor rarely noticed during daylight is movement of the light source (sun or moon). The long exposure time required for moonlight photography often means that the moon may have moved significantly over the course of the exposure. Moon movement softens harsh shadows, however too much movement can create seemingly flat light.

Choose Exposure Time: 1 minute | 4 minutes
Crop of Tree Shadows on Path: Photograph Under a Full Moon

Note how the 1 minute exposure above clearly shows high contrast and shadows from even the smaller branches, whereas the 4 minute exposure is at lower contrast and only shows the larger branches. The choice of exposure time can also vary by much more than a factor of four -- greatly exaggerating the above effect.

Shots which include the moon in the frame are also susceptible to moon movement: a rule of thumb is that the moon appears to move its own diameter roughly every 2 minutes, and it can quickly appear elongated if this exposure time is approached. Furthermore, moon movement and star trails (see below) can both limit the maximum exposure time.

VIEWFINDER BRIGHTNESS

Properly composing your photograph in the viewfinder can be problematic when there is little available light. Even if you intend to expose using a small aperture, a lens with a large maximum aperture can greatly increase viewfinder brightness during composure. To see the effect of different apertures, manually choose an aperture by pressing the "depth of field preview" button (usually located on the camera at the base of the lens).

ensure that you give ample time for your eyes to fully adjust to the decrease in light-. Larger format sensors also produce a brighter viewfinder image (such as full frame 35 mm. using a large aperture and higher sensitivity (ISO 200-400) can enhance the brightness of each streak. night scenes rarely have enough light or contrast to perform auto focus. Close to North Star —> Far From North Star Normal focal lengths (28-50 mm) usually have minimal star movement if exposures are no longer than about 15-30 seconds.therefore one cannot afford to waste mispositioning the depth of field (see hyperfocal distance). nor enough viewfinder brightness to manually focus. . however sometimes these streaks detract from the artistic message if stillness and tranquility is the desired look. This effect can create a dizzying look. Mirror lock-up can drastically increase sharpness for exposure times comparable to the settling time of the mirror (~1/30 to 2 seconds).6X or smaller crop factors) .especially after standing in stronger light or using a flashlight. the stabilizing time can increase significantly (~8 seconds).5-1. Using a longer focal length and photographing stars far from the north star increase the distance stars will move across the image. Finally. mirror shake is negligible for exposures much longer than this.The way a SLR camera redirects light from the lens to your eye can also affect brightness. When forced to use wobbly tripods (never desired) or long focal lengths. On the other hand. Cameras with a pentaprism (as opposed to pentamirror) ensure that little light is lost before it hits your eye. It works by separating the mirror flip and aperture opening into two steps. APPEARANCE OF STAR TRAILS Even modestly long exposures can begin to reveal the rotation of stars in the sky. any vibrations induced by the mirror have time to settle down before the exposure begins. 
INFLUENCE OF MIRROR LOCK-UP Mirror lock-up (MLU) is a feature available in some SLR cameras which aims to minimize camera shake induced by mirror-slap (which produces the characteristic snapping sound of SLR cameras). FOCUSING AND DEPTH OF FIELD Proper focusing is critical at night because small apertures are often impractical-. If star trails are desired. compared to 1. This way. To further complicate focusing. however these often increase the cost of the camera significantly. therefore MLU is not critical for most night photography.
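The hyperfocal distance mentioned above follows directly from the focal length, the f-number, and a chosen circle of confusion. A minimal sketch (the function name and the 0.03 mm default are my own choices, not from this article):

```python
def hyperfocal_mm(focal_mm, f_number, coc_mm=0.03):
    """Hyperfocal distance in mm: focus here, and everything from half
    this distance out to infinity appears acceptably sharp.

    coc_mm is the circle of confusion; 0.03 mm is a common value for
    full frame 35 mm (smaller values suit 1.5-1.6X crop sensors).
    """
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

# A 28 mm lens at f/11 on full frame:
h = hyperfocal_mm(28, 11)
print(round(h / 1000, 1), "m")  # prints roughly 2.4 m
```

Focusing a wide-angle lens at this distance is often the most practical night-time strategy, since it needs no autofocus at all.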

16. and shooting in RAW mode. I generally recommend always fully exposing the image as if it were a daytime photo.while still maintaining minimal image noise because more light was collected at the digital sensor. The central focus point is more accurate/sensitive in many cameras. Usually one can first meter using a larger aperture (so that the metered exposure time is under 30 seconds). METERING AT NIGHT Unfortunately. What is a proper exposure at night? Unlike during daytime where the basis is (roughly) a 18% gray card.both vertically and horizontally: Rule of Thirds Composition Region Divided Into Thirds . Alternatively. In the photo to the left. autofocus. then stop down as necessary and multiply the exposure time accordingly. and then removed before the exposure begins. otherwise these will have significant blown highlights. focused on. bring a small flashlight since this can be set on the subject. and so it is best to use this (instead of the outer focus points)-. OVERVIEW The rule of thirds states than an image is most pleasing when its subjects or regions are composed along imaginary lines which divide the image into thirds -. One can try focusing on any point light sources which are at similar distance to the subject of interest.COMPOSITION: RULE OF THIRDS The rule of thirds is a powerful compositional technique for making photos more interesting and dynamic. One could "under-expose" to maintain the dark look of night. just aim your camera at the moon. This way the exposure can always be decreased afterwards-. If all else fails. This article uses examples to demonstrate why the rule works.Fortunately there are several solutions to this focusing dilemma. a good starting point is to meter off of a diffuse object which is directly lit by one of the light sources. one can always resort to manual focus using distance markings on the lens (and an appropriate hyperfocal distance). when it's ok to break the rule. If you wish to autofocus at infinity. 
or could alternatively have the histogram fill the entire tonal range like a daytime shot. then recompose accordingly. there is not really a consistent. the camera needs to be set to "bulb mode" and an external timer/release device should be used (below). be sure to bracket each image. commonly agreed upon way to expose night photos. If all these approaches are impractical. most in-camera light meters become inaccurate or max out at about 30 seconds. For exposure times longer than ~30 seconds. Alternatively.even if using it requires having to recompose afterwards. Metering these can be tricky if the camera's auto-metering fails. one could achieve guaranteed autofocus by using the bright light at the bottom as the focal target. It's also perhaps one of the most well known. and how to make the most of it to improve your photography. Night scenes which contain artificial light sources should almost always have low-key histograms. or zero in on the correct exposure by using guess and check with the rear LCD screen. one may need to carry an external light meter to achieve the most accurate metering.

It is actually quite amazing that a rule so seemingly mathematical can be applied to something as varied and subjective as a photograph. But it works, and surprisingly well. The rule of thirds is all about creating the right aesthetic trade-offs. It often creates a sense of balance, without making the image appear too static, and a sense of complexity, without making the image look too busy.

RULE OF THIRDS EXAMPLES OK, perhaps you can see its usefulness by now, but the previous example was simple and highly geometric. How does the rule of thirds fare with more abstract subjects? See if you can spot the lines in the photo below:

Original Photo | Show Rule of Thirds

Note how the tallest rock formation (a tufa) aligns with the rightmost third of the image, and how the horizon aligns with the topmost third. The darker foreground tufa also aligns with both the bottommost and leftmost thirds of the photo. Even in an apparently abstract photo, there can still be a reasonable amount of order and organization. Does this mean that you need to worry about perfectly aligning everything with the thirds of an image? Not necessarily; it's just a rough guideline. What's usually most important is that your main subject or region isn't always in the direct middle of the photograph. For landscapes, this usually means having the horizon align with the upper or lower third of the image. For subjects, this usually means photographing them to either side of the photo.

Off-Center Subjects Can Give a Sense of Direction

In the examples above, the bird is off-center to give the impression that it can take off to the right at any moment. Similarly, the biker was placed more or less along the leftmost third since he was traveling to the right. Off-center composition is a powerful way to convey or imply motion, and to give subjects a sense of direction.

IMPROVE EXISTING PHOTOS BY CROPPING Thus far we've looked at examples that have satisfied the rule, but what if they hadn't? Wouldn't they have still appeared just fine? Perhaps, but usually not. The next set of examples shows situations where cropping to enforce the rule yields a clear improvement. It is often quite amazing how you can resurrect an old photo and give it new life with something as simple as cropping it. This can make landscape compositions much more dynamic.
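Cropping a centered horizon up to the upper third is simple arithmetic: remove just enough sky that the horizon sits one third of the way down the new frame. A sketch (the function name is mine):

```python
def crop_top_for_upper_third(height, horizon_row):
    """Rows to crop from the top of the frame so the horizon lands on
    the upper-third line, keeping the full bottom of the frame.

    Solves (horizon_row - t) == (height - t) / 3 for t.
    """
    t = (3 * horizon_row - height) / 2
    if t < 0:
        raise ValueError("horizon is already above the upper third")
    return t

# A 900-pixel-tall frame with the horizon dead center (row 450):
print(crop_top_for_upper_third(900, 450))  # -> 225.0
```

Cropping 225 rows leaves a 675-pixel frame with the horizon exactly 225 rows from the top, i.e. on the upper third.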

Uncropped Original (horizon in direct middle) | Cropped Version (horizon now along upper third of image)

In the example above, part of the empty sky was cropped off so that the horizon aligned with the upper third of the image, adding emphasis to the foreground and mountains and effectively creating an off-center composition.

BREAKING THE RULE OF THIRDS By now, the free-spirited and creative artist that you are is probably feeling a bit cramped by the seeming rigidity of this rule. It's time to unleash that inner rebel: all rules are bound to be broken sooner or later, and this one's no exception. A central tenet of the rule of thirds is that it's not ideal to place a subject in the center of a photograph. But what if you wanted to emphasize the subject's symmetry? The example to the left (an example of beneficial symmetry) does just that. Similarly, you might want to knock things out of balance; you might want to make your subject look more confronting, for example. There's many other situations where it might be better to ignore the rule of thirds than to use it, as long as it is for a good cause.

LIMITATIONS But what if there's simply nothing in the image to apply the rule of thirds to? Although rare, this might be the case for extremely abstract compositions. In the example to the right, there's not even a single line or subject that can be aligned with the thirds of the image. Perhaps the C-shaped region of light can be grouped into an upper, middle and lower thirds region, but that's probably pushing it. However, the image is on average brighter to the left compared to its right, effectively creating an off-center composition. Alternatively, the "spirit of the rule" may still apply: giving the photo a sense of balance without making the subject appear too static and unchanging.

Regardless, it's important to ask yourself: what is special about this subject, and what do I want to emphasize? What mood do I want to convey? If the rule of thirds helps you achieve any of these goals, then use it. If not, then don't let it get in the way of your composition.

17. CONVERTING A COLOR PHOTO INTO BLACK & WHITE

Converting a digital color photo into black and white goes beyond simply desaturating the colors. A conversion which does not take into account an image's color and subject of interest can dilute the artistic message, and may create an image which appears washed out or lacks tonal range. Black and white conversion may therefore require interpretive decisions. This section provides a background on using color filters, and outlines several different black and white conversion techniques, comparing each in terms of its flexibility and ease of use.

BACKGROUND: COLOR FILTERS FOR B&W FILM Contrary to what one might initially assume, traditional black and white photographers actually have to be quite attentive to the type and distribution of color in their subject. Color filters are often used in front of the lens to selectively block some colors while passing others (similar to how color filters are used for each pixel in a digital camera's Bayer array). Filters are named after the hue of the color which they pass, not the color they block. They can block all but a primary color such as red, green or blue, or can partially block any weighted combination of the primary colors (such as orange or yellow). Careful selection of these filters allows the photographer to decide which colors will produce the brightest or darkest tones.

CONTROLLING TEXTURE AND CONTRAST Just as with color photography, black and white photography can use color to make a subject stand out, but only if the appropriate color filters have been chosen. Consider the example below, where the original color image makes the red parrot stand out against the near colorless background. To give the parrot similar contrast with the background in black and white, a color filter should be chosen which translates bright red into a tone which is significantly different from the middle gray background. Move your mouse over the options below to view some of the possibilities.

Original Color Photo | Red Filter | Green Filter | Red-Green Combination

Note how the red and green filters make the parrot much brighter and darker than the background, respectively, whereas an intermediate combination of the two makes the parrot blend in more. Also note how the green and red-green filters enhance texture in the feathers, and that the red filter eliminates tonal separation between the feathers and the white skin.

So which color filter is best? This depends on the goal of the image. Although the red filter above decreases contrast in the feathers, it would do the opposite in a cyan-blue sky. In general, one can increase contrast in a given region by choosing a filter color which is complementary to that region's color; in other words, we want to choose a filter whose color is on the opposite side of the color wheel (right) to the image's color. If we wished to maximize cloud contrast in a cyan-blue sky, then a reddish-yellow filter would achieve this goal. Of course, images rarely contain just one color. The image below contains regions of red rocks, green foliage and blue sea; move your mouse over each filter and note its influence on each region:
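The parrot example can also be mimicked numerically by treating a filter as a per-channel transmission and averaging whatever light gets through. The transmission values below are idealized assumptions for illustration, not measurements of real filters:

```python
import numpy as np

# Idealized (R, G, B) transmission for a few filters; real filters
# pass broader, overlapping bands than these 0/1 values suggest.
FILTERS = {
    "red":    (1.0, 0.0, 0.0),
    "green":  (0.0, 1.0, 0.0),
    "yellow": (1.0, 1.0, 0.0),   # "half red, half green, zero blue"
}

def bw_through_filter(rgb, name):
    """B&W rendering of an RGB patch shot through a filter:
    attenuate each channel, then average the light that remains."""
    t = np.array(FILTERS[name])
    return (rgb * t).sum(axis=-1) / t.sum()

# A red parrot (220, 30, 30) against a middle-gray background:
parrot = np.array([220.0, 30.0, 30.0])
background = np.array([128.0, 128.0, 128.0])
for name in FILTERS:
    print(name, bw_through_filter(parrot, name),
          bw_through_filter(background, name))
```

As in the interactive example: the red filter renders the parrot brighter than the background, the green filter renders it darker, and the intermediate yellow filter makes the two nearly identical tones.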

Original | Red Filter | Green Filter | Blue Filter

Notice the contrast changes both between and within regions of red, green and blue above. One can visualize other possibilities, since all color filters would produce some superposition of the three images above (yellow would be half red, half green and zero blue). Pure red or primarily red color filters often work best for landscapes, as this increases texture in regions containing water, sky and foliage. On the other hand, color filters can also make contrast appear greater than what we would perceive with our eyes, or can darken/brighten some regions excessively.

DIGITAL COLOR INTO BLACK & WHITE Converting a digital color photo into black and white utilizes the same principles as with color filters in film photography, except the filters instead apply to each of the three RGB color channels in a digital image (see bit depth). Whether you specify it or not, all conversion techniques have to use some weighted combination of each color channel to produce a grayscale brightness. Some techniques assume a combination for you, although the more powerful ones give you full control. Each makes its own trade-offs between power and ease of use, and so you may find some techniques are best suited only to certain tasks. Each image may therefore require its own combination of red, green and blue filtering in order to achieve the desired amount of contrast and tonal range.

CHANNEL MIXER The channel mixer tool allows the user to control how much each of the three color channels (red, green and blue) contributes to the final grayscale brightness. It is undoubtedly one of the most powerful black and white conversion methods, however it may take some time to master since there are many parameters which require simultaneous adjustment. Open this tool by clicking on Image > Adjustments > Channel Mixer in Adobe Photoshop; GIMP and many other image editing programs also offer this tool, however its menu location may vary. Be sure to first click on the lower left tick box entitled "Monochrome" for black and white conversion. It is often best to get a feel for the distribution of each color channel by first setting each channel to 100% individually, then adjusting the red, green and blue sliders to produce an image to your liking. The sum of the red, green and blue percentages needs to equal 100% in order to maintain roughly constant brightness, although overall brightness can also be adjusted by using the "Constant" slider at the bottom; some colors can even have negative percentages. If the aim is to mimic the luminosity perceived by the human eye, set red=30%, green=59% and blue=11%.
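In code, a monochrome channel mixer is simply a weighted sum of the three channels. A sketch using numpy (assuming 8-bit RGB input; the function name is mine):

```python
import numpy as np

def channel_mix(rgb, r=0.30, g=0.59, b=0.11):
    """Monochrome channel mixer: weighted sum of the RGB channels.

    Weights summing to 1.0 (100%) keep overall brightness roughly
    constant; the default 30/59/11 split mimics the eye's
    luminosity response.
    """
    weights = np.array([r, g, b])
    gray = rgb.astype(float) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)

# A pure-red pixel maps to ~30% gray, a pure-green pixel to ~59% gray:
img = np.array([[[255, 0, 0], [0, 255, 0]]], dtype=np.uint8)
print(channel_mix(img))
```

Changing the weights reproduces the effect of different color filters; for example, r=1.0, g=0.0, b=0.0 is a pure red filter.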

HUE - SATURATION ADJUSTMENT LAYER This technique is particularly elegant because it allows you to apply any of the entire spectrum of color filters just by dragging the hue slider. This lets one quickly assess which of the many combinations of color filters works best, without necessarily having one in mind when starting. It takes a little longer to set up than the channel mixer, but is actually faster to use once in place. Open the image in Photoshop and create two separate "Hue/Saturation Adjustment Layers" by following the menus: Layers > New Adjustment Layer > Hue/Saturation. Each window will be named "Hue/Saturation 1" or "2," however I have given these custom names for this tutorial. On the top adjustment layer ("Saturation"), set the blending mode to "Color" and set the saturation to its minimum of "-100," shown below. On the bottom adjustment layer, change the "Hue" slider to apply any of the entire spectrum of color filters; this is the main control for adjusting the look from this technique. The saturation slider can also be adjusted in this layer, but this time it fine-tunes the magnitude of the filter effect for a given hue. Once all adjustments have been made, merge/flatten the layers to make these final. An alternative technique which may be a bit easier is to only add one Hue/Saturation adjustment layer and change the hue of the image itself. On the other hand, this does not allow one to go back and change the color filter hue if it is no longer in Photoshop's undo history (at least not without unnecessarily destroying bit depth).

LIGHTNESS CHANNEL IN LAB MODE Using the lightness channel in lab mode is quick and easy because it converts based on the luminance value from each pixel's RGB combination. First convert the image into the LAB color space by clicking on Image > Mode > Lab Color in Photoshop. The channel window can then be accessed by clicking on Window > Channels (if not already open). View the "Lightness" channel by clicking on it (as shown to the left) in the channel window, then delete both the "a" and "b" channels to leave only the lightness channel ("a" and "b" refer to the red-green and blue-yellow shift, respectively). Note that the lightness channel may subsequently require significant levels adjustments, as it may not utilize the entire tonal range of the histogram. This is because it requires all three color channels to reach their maximum for clipping, as opposed to just one of the three channels for an RGB histogram. Please see "Understanding Histograms: Luminance and Color" for further reading on this topic.

DESATURATE COLORS Desaturating the colors in an image is the simplest type of conversion, but often produces inadequate results. This is because it does not allow for control over how the primary colors combine to produce a given grayscale brightness. Despite this, it is probably the most commonly used way of converting into black and white. In Photoshop, this is accomplished by going from Image > Adjustments > Desaturate.

OTHER CONSIDERATIONS Ordinarily the best results are achieved when the image has the correct white balance. Removal of color casts means that the colors will be more pure, and so the results of any color filter will be more pronounced. Higher color saturations also mean that each color filter will have a more pronounced effect; on the other hand, any black and white conversion which utilizes a significant boost in color saturation may begin to show artifacts, such as increased noise, clipping or loss of texture detail. Recall that the noise levels in each color channel can be quite different, with the blue and green channels having the most and least noise, respectively; try to use as little of the blue channel as possible to avoid excess noise. Shoot in RAW mode if possible, as 16-bit (per channel) images allow for the smoothest grayscale tones and greatest flexibility when using color filters. This also gives the ability to fine-tune the white balance based on the desired black and white look.
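The weakness of plain desaturation can be seen numerically. Assuming desaturation behaves like HSL lightness, (max + min) / 2 per pixel (an assumption about the implementation, not something this article states), every fully saturated primary maps to the same gray, while a luminosity weighting keeps them distinct:

```python
import numpy as np

def desaturate(rgb):
    """Naive desaturation modeled as HSL lightness: (max + min) / 2.
    Note there is no per-channel control, unlike the channel mixer."""
    f = rgb.astype(float)
    return (f.max(axis=-1) + f.min(axis=-1)) / 2

def luminosity(rgb):
    """Perceptual grayscale using the standard 30/59/11 weighting."""
    return rgb.astype(float) @ np.array([0.30, 0.59, 0.11])

# Pure red, green and blue all desaturate to the same gray, even
# though the eye sees green as far brighter than blue:
prim = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255]])
print(desaturate(prim))   # three identical values
print(luminosity(prim))   # three distinct tones
```

This is exactly why a desaturated conversion can look flat: distinct colors of similar lightness collapse into one gray.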

Levels and curves can be used in conjunction with black and white conversion to provide further control over tones and contrast. Care should also be taken when using these, because even slight color clipping in any of the individual color channels can become quite apparent in black and white (depending on which channel(s) is/are used for conversion). Keep in mind, though, that some contrast adjustments can only be made by choosing an appropriate color filter, since this adjusts relative contrast within and between color regions. There are also a number of third party plug-ins for Photoshop which help automate the process of conversion, and which provide additional features such as sepia conversion or adding film grain.

18. LOCAL CONTRAST ENHANCEMENT Local contrast enhancement attempts to increase the appearance of large-scale light-dark transitions, similar to how sharpening with an "unsharp mask" increases the appearance of small-scale edges. Good local contrast gives an image its "pop" and creates a three-dimensional effect, mimicking the look naturally created by high-end camera lenses. Local contrast enhancement is also useful for minimizing the effect of haze, lens flare, or the dull look created by taking a photograph through a dirty window.

VISUALIZING LOCAL CONTRAST

High Resolution | High Local Contrast | Both Qualities

When viewed at a distance, note how the large-scale features are much more pronounced for the image with high local contrast, despite the lack of resolution. Both resolution and local contrast are essential to create a detailed, three-dimensional final image.

CONCEPT The trick with local contrast enhancement is that it increases "local" contrast in smaller regions, while at the same time preventing an increase in "global" contrast, thereby protecting large-scale shadow and highlight detail. It achieves this feat by making some pixels in the histogram cross over each other, which is not possible when enhancing contrast using levels or curves. Local contrast enhancement works similarly to sharpening with an unsharp mask, however the mask is instead created using an image with a greater blur distance. This creates a local contrast mask which maps larger-scale transitions than the small-scale edges which are mapped when sharpening an image.

Step 1: Detect Transitions and Create Mask (Original → Local Contrast Mask). Step 2: Increase Contrast at Transitions (Original → Higher Contrast).

Local Contrast Mask = Blurred Copy of Original; Original + Local Contrast Mask = Final Image

Note: The "mask overlay" is when image information from the layer above the local contrast mask passes through and replaces the layer below in a way which is proportional to the brightness in that region of the mask. The upper image does not contribute to the final result in regions where the mask is black, while it completely replaces the layer below in regions where the local contrast mask is white. The difference between the original and final image is often subtle, but should show a noticeable increase in clarity; in order to fully see this effect, one needs to examine the images up close. Move your mouse on and off of "local contrast enhancement" and then "high contrast" in order to see their influence on the tones within the image below:

Original | Local Contrast Enhancement | High Contrast
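In code terms, the blur-then-add-back procedure amounts to the following sketch (a simple box blur stands in for the gaussian mask an unsharp mask would use; names and defaults are my own):

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur: the heavily blurred copy that serves as
    the local contrast mask."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(float)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, out)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, out)
    return out

def local_contrast(img, radius=50, amount=0.15):
    """Add back a small fraction of (original - blurred), boosting
    large-scale light-dark transitions; radius ~30-100 px and
    amount ~5-20% echo the settings discussed in this section."""
    blurred = box_blur(img, radius)
    return np.clip(img + amount * (img - blurred), 0, 255)

# A step edge between a dark region (50) and a bright region (200):
step = np.concatenate([np.full((20, 10), 50.0),
                       np.full((20, 10), 200.0)], axis=1)
enhanced = local_contrast(step, radius=3, amount=0.2)
```

Pixels just on the bright side of the edge overshoot above 200, those just on the dark side undershoot below 50, and regions far from any transition are left untouched — local contrast up, global contrast unchanged.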

Note how local contrast enhancement creates more contrast near the transition between the rocks and dirt, but preserves texture in large-scale light and dark regions. Pay special attention to the dirt in between the rocks, and how this region becomes very dark in the high contrast image. The effect above is quite strong to aid in visualization; local contrast enhancement is usually less pronounced.

IN PRACTICE Fortunately, performing local contrast enhancement in Photoshop and other image editing programs is quick and easy. It is identical to sharpening with an unsharp mask, except the "radius" is much larger and the "percentage" is much lower. The unsharp mask can be accessed in Adobe Photoshop by clicking on the following drop-down menus: Filter > Sharpen > Unsharp Mask.

Radius controls the amount to blur the original for creating the mask, shown by "blurred copy" in the illustration above. This affects the size of the transitions you wish to enhance, so a smaller radius enhances smaller-scale detail; it is typically 30-100 pixels. Much more so than with sharpening, the radius setting is strongly influenced by your image size and the scale of the light-dark transitions you wish to enhance. High resolution images, or those where light-dark transitions are large, require a larger radius value, while very low resolution images may require a radius even less than 30 pixels to achieve the effect.

Amount is usually listed as a percentage, and controls the magnitude of each overshoot. This can also be thought of as how much contrast is added at the transitions; it is typically 5-20%.

Threshold sets the minimum brightness change that will be sharpened. It is typically set to 0, but could be set to a non-zero value to only enhance contrast at the most prominent edges; this is rarely used in local contrast enhancement.

COMPLICATIONS Local contrast enhancement, as with sharpening, can also create unwanted color changes if performed on all three color channels. You can eliminate these unwanted effects by either performing local contrast enhancement on the lightness channel of the LAB color space, or in a separate layer (while still in an RGB working space) blended using "luminosity" in the layers window. Local contrast enhancement can increase color saturation significantly, and can also clip highlights in regions which are both very bright and adjacent to a darker region. For this reason, it should be performed before adjusting levels (if levels are used to bring tones to the extreme highlights within the image histogram); this allows for a "buffer zone" when local contrast enhancement extends the lightest and darkest tones toward full white or black. Care should also be taken because this technique can detract from the "smoothness" of tones within your image, thereby changing its mood. Portrait photography is one area where one should be particularly cautious with this technique.

19. NOISE REDUCTION BY IMAGE AVERAGING Image noise can compromise the level of detail in your digital or film photos, and so reducing this noise can greatly enhance your final image or print. The problem is that most techniques to reduce or remove noise always end up softening the image as well. Some softening may be acceptable for images consisting primarily of smooth water or skies, but foliage in landscapes can suffer with even conservative attempts to reduce noise. This section compares a couple of common methods for noise reduction, and also introduces an alternative technique: averaging multiple exposures to reduce noise. Image averaging is common in high-end astrophotography, but is arguably underutilized for other types of low-light and night photography. Averaging has the power to reduce noise without compromising detail, because it actually increases the signal to noise ratio (SNR) of your image. An added bonus is that averaging may also increase the bit depth of your image, beyond what would be possible with a single image. Averaging can also be especially useful for those wishing to mimic the smoothness of ISO 100, but whose camera only goes down to ISO 200 (such as most Nikon digital SLR's).

CONCEPT Image averaging works on the assumption that the noise in your image is truly random. This way, random fluctuations above and below actual image data will gradually even out as one averages more and more images. If you were to take two shots of a smooth gray patch, using the same camera settings and under identical conditions (temperature, lighting, etc.), then you would obtain images similar to those shown on the left.

Visually. Two averaged images usually produce noise comparable to an ISO setting which is half as sensitive. the maximum deviation is greatly reduced. and so on. 100% Crop of Regions on the Left Original 2 Images 4 Images . If we were to take the pixel value at each location along this line. magnitude of noise fluctuation drops by the square root of the number of images averaged. NOISE & DETAIL COMPARISON The next example illustrates the effectiveness of image averaging in a real-world example. The following photo was taken at ISO 1600 on the Canon EOS 300D Digital Rebel. In general. The dashed horizontal line represents the average. this has the affect of making the patch to the left appear smoother. Note how each of the red and blue lines uniquely fluctuates above and below the dashed line. then the luminance variation would be reduced as follows: Even though the average of the two still fluctuates above and below the mean. so you need to average 4 images in order to cut the magnitude in half. so two averaged images taken at ISO 400 are comparable to one image taken at ISO 200. and average it with value for the pixel in the same location for the other image. or what this plot look like if there were zero noise. and suffers from excessive noise. respectively.The above plot represents luminance fluctuations along thin blue and red strips of pixels in the top and bottom images.

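The square-root rule is easy to verify with a quick simulation of a smooth gray patch (the noise level of 10 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((200, 200), 128.0)   # the noiseless "smooth gray patch"

def shoot(n):
    """Average n simulated frames, each with gaussian noise of std 10."""
    frames = truth + rng.normal(0.0, 10.0, size=(n,) + truth.shape)
    return frames.mean(axis=0)

# Residual noise falls as 1/sqrt(n): roughly 10, 7.1 and 5.0
for n in (1, 2, 4):
    print(n, round(shoot(n).std(), 1))
```

Four frames halve the noise, matching the rule above; note this only works because the simulated noise is truly random from frame to frame.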
but leaves larger fluctuations behind and eliminates pixel-level detail.5 uses default settings and "auto fine-tune" Neat Image is the best of all for reducing noise in the smooth sky.Note how averaging both reduces noise and brings out the detail for each region. Neat Image is your best option for situations where you cannot use image averaging (hand held shots). and so this is used as the benchmark in the following comparison: Original 2 Images 4 Images Neat Image Median Filter Noise reduction with Neat Image Pro Plus 4. but sharpening cannot recover lost information. one could use a combination of the two: image averaging to increase the SNR as much as possible. Noise reduction programs such as Neat Image are the best available arsenal against noise. Overall. This is effective at removing very fine noise. It calculates each pixel value by taking the median value of all adjacent pixels. but it sacrifices some fine detail in the tree branch and vertical mortar/grout lines in the brickwork. Sharpening could be used to enhance the remaining detail and greatly improve the overall appearance of sharpness. Ideally. then Neat Image to reduce any remaining noise: Averaging: 4 Images Neat Image + Averaging Original Neat Image . The median filter is a primitive technique and is in most versions of Photoshop.

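The median filter's double-edged behavior — removing isolated specks while also wiping out pixel-level detail — can be sketched with a bare-bones 3x3 version (Photoshop's filter additionally offers a configurable radius):

```python
import numpy as np

def median3(img):
    """3x3 median filter; border pixels are left unfiltered for brevity."""
    out = img.copy().astype(float)
    stack = [img[r:img.shape[0] - 2 + r, c:img.shape[1] - 2 + c]
             for r in range(3) for c in range(3)]
    out[1:-1, 1:-1] = np.median(np.stack(stack), axis=0)
    return out

salt = np.full((11, 11), 100.0)
salt[5, 5] = 255            # a lone "hot pixel"
line = np.full((11, 11), 100.0)
line[:, 5] = 255            # genuine 1-pixel-wide detail, e.g. a mortar line
print(median3(salt)[5, 5], median3(line)[5, 5])  # both become 100.0
```

The hot pixel is removed, but the legitimate one-pixel-wide line is erased just as thoroughly — the loss of fine detail described above.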
. and finally the top layer to 25%. 20. Extra care should be taken with technique and averaging can only be used for photos taken on a very sturdy camera tripod. many key limitations still remain. consider the following: taking two shots at ISO 800 and 30 seconds to produce the rough equivalent (both in brightness and noise levels) of a single 60 second exposure at ISO 400. The following is an overview of several digital techniques that were on this website in the beginning.5 uses default settings and "auto fine-tune" Note how neat image plus averaging is now able to both retain the vertical detail in the bricks and maintain a smooth.. As an example. However. and have them blend together such that each layer contributes equally. unlike other shots." you may be limited to 15-30 second exposures. Disadvantages of the averaging technique include increased storage requirements (multiple image files for one photo) and possibly longer exposure times. You could then take several short shots in between passers-by. This means that to properly average four images.NEW PHOTOGRAPHY WITH DIGITAL Digital cameras have opened up amazing new photography possibilities. faster moving areas while still retaining low noise in high detail. . RECOMMENDATIONS When should one perform image averaging. despite all of this progress. Overcoming these often requires interpretive decisions both before and after the exposure. One must first load all images into Photoshop which are to be averaged. Camera equipment has made great strides in being able to mimic our visual perception in a single photograph. Image averaging does not work on images which suffer from banding noise or fixed pattern noise. the averaging can begin. Photographers have to be aware of these and other shortcomings in order to emphasize the elements of a scene as they see them. One should instead set the bottom (or background) layer's opacity to 100%. An example of this is a starry night with foliage in the foreground. 
Each technique has links to more detailed advice if you want to learn more. but cannot take a long enough exposure because pedestrians often pass through the shot. the layer on top of that to 50%. The idea is to stack each image in a separate layer. for example. one should not set each layer's opacity to 25%. one might be taking a photo in a public place and want low noise.Noise reduction with Neat Image Pro Plus 4. The key to averaging in Photoshop is to remember that each layer's opacity determines how much the layer behind it is "let through. Many other combinations are possible. For such cases. and then copy and past each image on top of each other so that they are all within the same project window. Our eye can discern a far greater range of light to dark (dynamic range). To reduce shadow noise (even in low ISO shots) where you wish to later bring out shadow detail through post-processing. For situations where you cannot guarantee interruption-free exposures beyond a given time. It now serves as a motivator to delve into the various techniques available in the digital world." and the same goes for each image underneath.. each layer's percent opacity is calculated by: All markings in red have been added for clarity. is able to realize a broader range of colors (color gamut). Once this is done.. Note how the bright white "hot pixel" in the lower left of both the top and bottom images does not diminish with averaging. If for some reason one layer receives more weighting than another. slower moving areas. and then the next layer to 33%. low noise look. Averaging. To selectively freeze motion in low detail. AVERAGING IMAGES IN PHOTOSHOP USING LAYERS Performing image averaging in Adobe Photoshop is relatively quick using layers. the blending of images will not be as effective. they will not actually be visible in Photoshop. This is illustrated below: For averaging any number of images. as opposed to just taking a longer exposure at a lower ISO speed? 
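The opacity scheme above (100%, 50%, 33%, 25%, ...) can be checked numerically: the k-th layer from the bottom gets opacity 1/k, and the bottom-up composite then weights every image equally (function names are mine):

```python
def layer_opacities(n):
    """Opacity for each layer, bottom to top: 1, 1/2, 1/3, ..., 1/n."""
    return [1.0 / k for k in range(1, n + 1)]

def flatten(values, opacities):
    """Composite bottom-up: each layer mixes over everything below it."""
    result = values[0]
    for v, op in zip(values[1:], opacities[1:]):
        result = op * v + (1 - op) * result
    return result

# Four "images" with pixel values 0, 40, 80, 120 should average to 60:
print(flatten([0, 40, 80, 120], layer_opacities(4)))
```

The top layer at 25% lets 75% of the stack below through, which in turn was already an equal-weighted blend — so every image ends up contributing exactly one quarter.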
The following list of situations may all prove useful: • • • • • To avoid excessive fixed-pattern noise from long exposures For cameras which do not have a "bulb mode. and can assess what is white in a given scene (white balance) far better than any photographic equipment. requires zero camera movement *between* exposures in addition to during the exposure.
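As a footnote to the averaging procedure above: the 100% / 50% / 33% / 25% sequence generalizes so that the k-th layer from the bottom gets an opacity of 100/k percent. The short sketch below (plain Python, written for this tutorial rather than taken from any Photoshop scripting API) verifies that stacking layers with those opacities weights every exposure equally:

```python
def averaging_opacities(n):
    """Opacity (in percent) for each of n layers, bottom to top:
    the k-th layer from the bottom gets 100/k percent."""
    return [100.0 / k for k in range(1, n + 1)]

def effective_weights(opacities):
    """Weight each layer contributes to the final composite under
    bottom-to-top 'normal' blending: a layer's contribution is its
    own opacity, attenuated by (1 - opacity) of every layer above."""
    weights = []
    for k, op in enumerate(opacities):
        w = op / 100.0
        for op_above in opacities[k + 1:]:
            w *= 1.0 - op_above / 100.0
        weights.append(w)
    return weights

ops = averaging_opacities(4)   # [100.0, 50.0, 33.33..., 25.0]
ws = effective_weights(ops)    # every layer ends up weighted 1/4
```

The same holds for any number of exposures: with n layers, every weight comes out to 1/n, which is exactly the equal contribution the averaging technique needs.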

When we view a scene, we have the luxury of being able to look around and change what we are analyzing with our eyes. Each technique below can evoke a heightened emotional response in the viewer, by emphasizing not only what one wishes for them to see, but also how one would like them to see it. It is the implications arising from this that are discussed in the three sections below:

Depth of Field
Dynamic Range
Field of View

EXTENDED DEPTH OF FIELD
Our eyes can choose to have any particular object in perfect focus, whereas a lens has to choose a specific focal point and what photographers call a "depth of field," or the distance around the focal plane which still appears to be in sharp focus. This difference presents the photographer with an important interpretive choice: does one wish to portray the scene in a way that draws attention to one aspect by making only that aspect in focus (such as would occur during a fleeting glance), or does one instead wish to portray all elements in the scene as in focus (such as would occur by taking a sweeping look throughout)?

If you were to stand in front of the above scene and take a quick glance, either the first or the second image would be closer to what you would see. On the other hand, if you were to fully absorb the scene, analysing both the stone carvings in the foreground as well as the bridge and trees in the background, then your view would be represented more realistically by portraying details for both regions, such as the final image on the right.

Until recently, traditional photography was especially restricted with this choice, because there is always a trade-off between the length of the exposure, the depth of field, and the image noise (or film grain) for a given photo. Where artistic flexibility is required, one could use a technique which utilises multiple exposures to create a single photo that is composed of several focal points. This is similar to how our eyes may glance at both near and distant objects in a far-reaching scene. This technique allows a photographer to decouple themselves from the traditional trade-off between depth of field, noise or film grain, and length of exposure. The end result is a print that has both less noise and more image sharpness throughout.

HIGHER DYNAMIC RANGE
As we look around a scene, the irises within our eyes can adjust to changing conditions as we focus on regions of varying brightness, both extending the dynamic range where we can discern detail and improving the local contrast. This is apparent when we stand near a window in a dark room on a sunny day and see not only detail which is indoors and around the window (such as the frame or the pattern on the curtains), but also that which is outside and under the intense lighting (such as the blades of grass in the yard or the clouds in the sky). Cameras, on the other hand, cannot always capture such scenes where the brightness varies drastically, at least not with the same contrast as we see it.

Traditional landscape photography has practiced a technique to overcome this limitation by using a camera lens filter which lets in more light in the darker regions and less light in the brighter regions.

This works remarkably well; however, it is limited to photos with a simple distribution of light intensity, since a filter has to exist which approximates the light distribution, such as the filter shown below. This usually limits the photographer to photos consisting of a bright sky which gradually transitions into a darker foreground.
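A digital alternative to such graduated filters is to take several exposures and blend them per pixel. The sketch below is a deliberately simple illustration; the Gaussian "well-exposedness" weighting and the 0-1 pixel values are assumptions made for this example, not a description of any particular HDR tool:

```python
import math

def blend_exposures(dark, bright, midpoint=0.5, sigma=0.2):
    """Blend two exposures of the same scene (pixel values in 0..1).
    Each pixel is weighted by how close it is to mid-gray, so the
    better-exposed version of each region dominates the result."""
    def weight(v):
        return math.exp(-((v - midpoint) ** 2) / (2 * sigma ** 2))

    out = []
    for d, b in zip(dark, bright):
        wd, wb = weight(d), weight(b)
        out.append((wd * d + wb * b) / (wd + wb))
    return out

# the dark exposure preserves highlight detail; the bright one preserves shadows
dark   = [0.05, 0.40, 0.55]
bright = [0.45, 0.90, 0.98]
result = blend_exposures(dark, bright)
```

In each region, the output leans toward whichever exposure recorded that region closest to mid-gray, which is the same intent as exposing "once for each region where the light intensity changes beyond the capabilities of their equipment."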

Other scenes, such as those which contain alternating electrically-lit and moonlit objects, contain far more complex lighting geometries than can be captured using traditional photographic techniques. To increase the dynamic range captured in a photo, one can expose the photo several times: once for each region where the light intensity changes beyond the capabilities of their equipment. By exposing the photo several times for each intensity region, one then has the ability to combine these images for any arbitrary lighting geometry, thus diversifying the types of scenes one can reproduce in photographic print. For much more on this topic, also take a look at the tutorial on using the high dynamic range (HDR) feature of Photoshop.

EXTREME FIELD OF VIEW
By looking around a scene, we are able to encompass a broader field of view than may be possible with a given lens. To mimic this behaviour, one could point the camera in several adjacent directions for each exposure. These could then be combined digitally in a way that accounts for lens distortion and perspective, producing a single, seamless image. This technique is referred to as photo stitching or a digital panorama; it is adapted from a similar technique used in astrophotography, and can also be used to enhance image detail.

In the example below, a lens with a relatively narrow field of view (just 17° horizontally, or 80mm on a 35mm camera) was used to create a final image that contains both more detail and a wider field of view than would be possible with a single exposure. As you can see by comparing the before and after images below, creating a single image from a mosaic of images is more complicated than just aligning these images; this process also has to take into account perspective. Note how the rooftop appears curved in the upper image, whereas the rooftop is straight in the final print.

Individual Photos / Seamlessly Stitched

The final result is a perspective that would have required a lens which horizontally encompasses a 71° field of view. An added bonus is that the final image contains over 6X the detail and local contrast than what would have been captured with a single photograph (if one also happened to have such a lens). You can view the digital photography tutorials for a more detailed and technical description of many of these photographic concepts.

GUIDE TO IMAGE SHARPENING
Image sharpening is a powerful tool for emphasizing texture and drawing viewer focus. It's also required of any digital photo at some point, whether you're aware it's been applied or not: digital camera sensors and lenses, for example, always blur an image to some degree, and this requires correction. However, not all sharpening techniques are created equal. When performed too aggressively, unsightly sharpening artifacts may appear. On the other hand, when done correctly, sharpening can often improve apparent image quality even more so than upgrading to a high-end camera lens.

Sharp cacti at the Huntington Gardens - Pasadena, California

HOW IT WORKS
Most image sharpening software tools work by applying something called an "unsharp mask," which, despite its name, actually acts to sharpen an image. Although this tool is thoroughly covered in the unsharp mask tutorial, in a nutshell it works by exaggerating the brightness difference along edges within an image:

Photo of the letter "T"
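Numerically, that amounts to adding back an amplified difference between the image and a blurred copy of itself. The toy 1-D sketch below (a box blur standing in for the usual Gaussian; illustrative only, not any editor's exact implementation) shows the characteristic under- and overshoot forming on either side of an edge:

```python
def box_blur(signal, radius=1):
    """Simple 1-D box blur, standing in for the Gaussian blur
    that real unsharp-mask implementations use."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.0, radius=1):
    """Sharpen by exaggerating the difference from a blurred copy:
    result = original + amount * (original - blurred)."""
    blurred = box_blur(signal, radius)
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

edge = [10, 10, 10, 200, 200, 200]   # a hard edge, like the "T"
sharp = unsharp_mask(edge, amount=1.0)
# just below the edge the result dips under 10 (a dark halo), and
# just above it the result overshoots 200 (a light halo)
```

Those dips and overshoots are precisely the "sharpening halos": invisible at modest amounts, objectionable when sharpening is overdone.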

Original / Sharpened

Note that while the sharpening process isn't able to reconstruct the ideal image above, it is able to create the appearance of a more pronounced edge (see sharpness: acutance & resolution). The key to effective sharpening is walking the delicate balance between making edges appear sufficiently pronounced, while also minimizing visible under- and overshoots (called "sharpening halos").

Soft Original / Mild Sharpening / Over Sharpening (Visible Halos)
note: all images shown at 200% zoom to improve visibility

SETTINGS
Fortunately, most of the sharpening settings within image-editing software are reasonably standardized. One can usually adjust at least three settings:

Radius: Controls the size of the edges you wish to enhance, where a smaller radius enhances smaller-scale detail. You'll usually want a radius setting that is comparable to the size of the smallest detail within your image.

Amount: Controls the overall strength of the sharpening effect, and is usually listed as a percentage. A good starting point is often a value of 100%.

Threshold (Masking): Controls the minimum brightness change that will be sharpened. This can be used to sharpen more pronounced edges, while leaving more subtle edges untouched. It's especially useful to avoid sharpening noise.

Detail (if avail.): Controls the relative sharpening of fine versus coarse detail (within a given radius value). Higher values emphasize fine detail, in addition to affecting the overall strength of sharpening, so you will likely need to adjust this setting in conjunction with the amount/percent setting.
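The threshold/masking setting can also be shown numerically. In the toy 1-D example below (again illustrative, not any editor's exact implementation), differences smaller than the threshold are left alone, so low-contrast noise survives untouched while real edges are enhanced:

```python
def box_blur(signal, radius=1):
    """Simple 1-D box blur used to find the local difference."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_with_threshold(signal, amount=1.0, radius=1, threshold=0):
    """Unsharp mask with a threshold: pixels whose difference from
    the blurred copy is below the threshold are left untouched,
    which protects low-contrast areas such as noise or skin."""
    blurred = box_blur(signal, radius)
    out = []
    for s, b in zip(signal, blurred):
        diff = s - b
        out.append(s + amount * diff if abs(diff) >= threshold else s)
    return out

noisy_flat = [100, 102, 99, 101, 100]   # low-contrast noise
edge = [10, 10, 200, 200]               # a genuine edge

sharp_noise = unsharp_with_threshold(noisy_flat, threshold=10)
sharp_edge = unsharp_with_threshold(edge, threshold=10)
# with threshold=10 the noisy patch passes through unchanged,
# while the edge still receives its full sharpening
```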

It's generally advisable to first optimize the radius setting, then to adjust the amount, and then finally to fine-tune the results by adjusting the threshold/masking setting (and potentially other settings such as "detail"). Optimal results may require a few iterations.

SHARPENING WORKFLOW
Most photographers now agree that sharpening is most effective and flexible when it's applied more than once during image editing. Each stage of the sharpening process can be categorized as follows:

Capture Sharpening —> Creative Sharpening —> Output Sharpening

Capture Sharpening: Accounts for your image's source device, along with any detail & noise characteristics.
Creative Sharpening: Uniquely accounts for your image's content & artistic intent.
Output Sharpening: Accounts for final output medium, after all editing & resizing.

(1) Capture sharpening aims to address any blurring caused by your image's source, while also taking image noise and detail into consideration. With digital cameras, such blurring is caused by the camera sensor's anti-aliasing filter and demosaicing process, in addition to your camera's lens. It also ensures the image will respond well to subsequent rounds of sharpening.

(2) Creative sharpening is usually applied selectively, based on artistic intent and/or image content. For example, you might not want to apply additional sharpening to a smooth sky or a person's skin, but you may want to crank up the sharpness in foliage or a person's eye lashes. Its use may vary wildly from photo to photo, so creative sharpening is really a "catch all" category. It's also the least used stage, since it can also be the most time-consuming.

(3) Output sharpening uses settings customized for a particular output device, and is applied at the very end of the image editing workflow. This may include special considerations based on the size, type and viewing distance of a print, but it can also be used to offset any softening caused by resizing an image for the web or e-mail.

Overall, the above sharpening workflow has the convenience of being able to save edited images at a near-final stage. When printing or sharing one of these images, all that is needed is a quick top-off pass of sharpening for the output device. On the other hand, if all sharpening were applied in a single step, then all image editing would have to be re-done every time you wished to share/print the photo using a different output device.

Note: the above capture, creative and output sharpening terminology was formally introduced in Real World Image Sharpening by Bruce Fraser & Jeff Schewe.

STAGE 1: CAPTURE SHARPENING
Capture sharpening is usually applied during the RAW development process. This can either occur automatically in your camera, when it saves the image as a JPEG, or it can occur manually using RAW software on your computer (such as Adobe Camera RAW (ACR), Lightroom or any other RAW software that may have come with your camera).

Automatic Capture Sharpening. Although most cameras automatically apply capture sharpening for JPEG photos, the amount will depend on your camera model and any custom settings you may have applied. Also be aware that the preset shooting modes will influence the amount of capture sharpening; for example, images taken in landscape mode are usually much sharper than those taken in portrait mode.

Manual Capture Sharpening requires weighing the advantages of enhancing detail against the disadvantages of amplifying the appearance of image noise. Overall, optimal capture sharpening requires shooting using the RAW file format and applying the sharpening manually on your computer (see below). First, to enhance detail, sharpen using a radius value that is comparable to the size of the smallest details. Regardless, capture sharpening is required for virtually all digital images. For example, the two images below have vastly different levels of fine detail, so their sharpening strategies will also need to differ:

Coarse (Low Frequency) Detail: Sharpening Radius: 0.8 pixels
Fine (High Frequency) Detail: Sharpening Radius: 0.4 pixels
Note: The sharpening radii described above are applied to the full resolution images (and not to the downsized images shown above).

Generally, well-focused images will require a sharpening radius of 1.0 or less, while slightly out-of-focus images may require a sharpening radius of 1.0 or greater. Regardless, capture sharpening rarely needs a radius greater than 2.0 pixels. Shooting technique and/or the quality of your camera lens can also impact the necessary sharpening radius.

Soft Original / Radius Too Small / Radius Just Right (1.2 pixels) / Radius Too Large (2.0 pixels)

When trying to identify an optimum sharpening radius, make sure to view a representative region within your image that contains the focal point and/or fine detail, and view it at 100% on-screen. Keep an eye on regions with high contrast edges, since these are also more susceptible to visible halo artifacts. Don't fret over trying to get the radius "accurate" within 0.1 pixels; there's an element of subjectivity to this process, and such small differences wouldn't be distinguishable in a print.

When noise is pronounced, capture sharpening isn't always able to be applied as aggressively and uniformly as desired. One often has to sacrifice sharpening some of the really subtle detail in exchange for not amplifying noise in otherwise smooth regions of the image. Using high values of the threshold or masking settings helps ensure that sharpening is only applied to pronounced edges:

Without sharpening threshold/masking: / With sharpening threshold/masking:

Original Image / Sharpening Mask
Move your mouse over the above images to see the unsharpened original.

The value used for the "masking" setting above was 25. Note how the masking/threshold setting was chosen so that only the edges of the cactus leaves are sharpened (corresponding to the white portions of the sharpening mask above). Such a mask was chosen because it doesn't worsen the appearance of image noise within the otherwise textureless areas of the image. Also note how image noise is more pronounced within the darker regions.

If image noise is particularly problematic, such as with darker tones and/or high ISO speeds, noise reduction should always be performed before sharpening, since sharpening will make noise removal less effective. One may therefore need to postpone sharpening during RAW development until noise reduction has been applied, or use a third-party noise reduction plug-in. At the time of this writing, common plug-ins include Neat Image, Noise Ninja, Grain Surgery & Noiseware.

STAGE 2: CREATIVE SHARPENING
While creative sharpening can be thought of as just about any sharpening which is performed between capture and output sharpening, its most common use is to selectively sharpen regions of a photograph. This can be done to avoid amplifying image noise within smooth areas of a photo, or to draw viewer attention to specific subjects. For example, with portraits one may want to sharpen an eye lash without also roughening the texture of skin, or with landscapes, to sharpen the foliage without also roughening the sky.

The key to performing such selective sharpening is the creation of a mask, which is just a way of specifying where and by how much the creative sharpening should be applied. Unlike with the output sharpening example, this mask may need to be manually created. An example of using a mask for creative sharpening is shown below:

Sharpening Mask: Image Used for Creative Sharpening / Selective Sharpening Using a Mask
Move your mouse over the image to see the effect. The top layer has creative sharpening applied; the mask ensures this is only applied to the white regions.

To apply selective sharpening using a mask:

1. Sharpen Duplicate. Make a duplicate of your image (with capture sharpening and all other editing applied), then apply creative sharpening to the entire image. This sharpening can be very aggressive, since you can always fine-tune it later. You can also change the blending mode of this layer to "Luminosity" to reduce color artifacts.

2. Create Mask. In Photoshop, use the menus Layer > New > Layer, or the Shift+Ctrl+N keys.

3. Paint Mask. Select the layer mask (by left-clicking on it). Paint regions of the image with white and/or black when you want creative sharpening to remain visible or hidden in the final image, respectively. Shades of gray will act partially.

4. Fine-Tune. Reduce the opacity of the top layer if you want to lessen the influence of creative sharpening.

Sometimes this type of creative sharpening can even be applied along with RAW development by using an adjustment brush in ACR or Lightroom. Another way of achieving the same results is to use a brush, such as a history, "sharpen more" or blurring brush, amongst others. This can often be simpler than dealing with layers and masks.

Alternatively, sometimes the best technique for selectively sharpening a subject is to just blur everything else, although one could argue that this technique falls into a different category altogether (even though it still uses the unsharp mask tool). The relative sharpness difference will increase, making the subject appear much sharper, while also avoiding over-sharpening. It can also lessen the impact of a distracting background. Move your mouse over the top left image to see this technique applied to the previous example.

(selective blurring applied to background)

Overall, the options for creative sharpening are virtually limitless. Some photographers also apply local contrast enhancement (aka "clarity" in Photoshop) during this stage.

STAGE 3: OUTPUT SHARPENING FOR A PRINT
After capture and creative sharpening, an image should look nice and sharp on-screen. However, this usually isn't enough to produce a sharp print. The image may have also been softened due to digital photo enlargement. Output sharpening therefore often requires a big leap of faith, since it's nearly impossible to judge whether an image is appropriately sharpened for a given print just by viewing it on your computer screen. In fact, effective output sharpening often makes an on-screen image look harsh or brittle:

Original Image / Output Sharpening Applied for On-Screen Display / Output Sharpening Applied for a 300 PPI Glossy Print
Photograph of the Duomo at dusk - Florence, Italy (f/11, …0 sec at 150 mm and ISO 200)

Output sharpening therefore relies on rule-of-thumb estimates for the amount/radius based on the (i) size and viewing distance of the print, (ii) resolution of the print (in DPI/PPI), (iii) type of printer and (iv) type of paper. In general, a larger viewing distance demands a larger output sharpening radius. The key is to have this radius small enough that it is near the limit of what our eyes can resolve (at the expected viewing distance), but also large enough that it visibly improves sharpness.

Such estimates are often built into RAW development or image editing software, but these usually assume that the image has already had capture sharpening applied (i.e. it looks sharp when viewed on-screen). Alternatively, one can also estimate the radius manually using the calculator below:

Output Sharpening Radius Estimator
Typical Viewing Distance* (or length of print's diagonal)
Print Resolution (PPI**)
Estimated Sharpening Radius

*It's generally a good estimate to assume that people will be viewing a print at a distance which is roughly equal to the distance along the print's diagonal.
**PPI = pixels per inch. DPI is often used interchangeably with PPI, although strictly speaking, the two terms can have different meanings; see the tutorial on "Digital Camera Pixels."

The above radius estimates should only be taken as a rough guideline. A good starting point is always the default amount/percent value used by your image editing software. However, the necessary amount of sharpening will still likely depend on the image content, type of paper, printer type and the look you want to achieve. For example, matte/canvas papers often require more aggressive sharpening than glossy paper. Regardless, for mission-critical prints the best solution is often just trial and error. To save costs, you can always print a cropped sample instead of the full print.
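For the curious, radius estimates of this kind can be roughly reproduced from first principles. The sketch below assumes the eye resolves about one arc-minute of angle; that assumption and the resulting formula are illustrative only, not the exact method behind any particular calculator:

```python
import math

def output_sharpen_radius(viewing_distance_in, ppi, eye_arcmin=1.0):
    """Rough output-sharpening radius in pixels: the number of print
    pixels spanned by the smallest angle the eye can resolve
    (assumed ~1 arc-minute) at the given viewing distance and
    print resolution."""
    angle_rad = math.radians(eye_arcmin / 60.0)
    resolvable_inches = viewing_distance_in * math.tan(angle_rad)
    return resolvable_inches * ppi

# e.g. a 300 PPI print viewed from about 20 inches
r = output_sharpen_radius(20, 300)
```

Under these assumptions a 300 PPI print viewed from 20 inches suggests a radius near 1.7 pixels, and doubling the viewing distance doubles the suggested radius, consistent with the guideline that larger viewing distances demand larger radii.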

STAGE 3: OUTPUT SHARPENING FOR THE WEB & EMAIL
Even if an image already looks sharp when viewed on-screen, resizing it to less than 50% of its original size often removes any existing sharpening halos. One usually needs to apply output sharpening to offset this effect:

Original Image / Softer Downsized Image / Downsized Image (after output sharpening)
Move your mouse over the buttons on the right to see the effect of output sharpening.

For downsized images, an unsharp mask radius of 0.2-0.3 and an amount of 200-400% works almost universally well. With such a small radius value, one doesn't have to worry about halo artifacts, although new problems such as aliasing/pixelation and moiré may become apparent if the amount/percent is set too high. For more on image downsizing, see the tutorial on image resizing for the web and e-mail.

ADDITIONAL SHARPENING ADVICE
• Sharpening is irreversible, so also save unsharpened originals whenever possible.
• RAW & TIFF files respond much better to sharpening than JPEG files, since the former preserve more detail. Further, sharpening may amplify JPEG compression artifacts.
• Images will often appear sharper if you also remove chromatic aberrations during RAW development. This option can be found under the "lens corrections" menu in Adobe Camera RAW, although most recent photo editing software offers a similar feature.
• Some camera lenses do not blur objects equally in all directions (see tutorial on camera lens quality: astigmatisms). This type of blur tends to increase further from the center of the image, and may be in a direction which is either (i) away from the image's center or (ii) perpendicular to that direction. This can be extremely difficult to remove, and usually requires creative sharpening.
• Blurring due to subject motion or some types of camera shake may require advanced techniques such as deconvolution or Photoshop's "smart sharpen" tool.
• The light sharpening halos are often more objectionable than the dark ones; advanced sharpening techniques sometimes get away with more aggressive sharpening by reducing the prominence of the former. Grossly over-sharpened images can sometimes be partially recovered in Photoshop by (i) duplicating the layer, (ii) applying a gaussian blur of 0.2-0.5 pixels to this layer 2-5 times, (iii) setting the blending mode of this top layer to "darken" and (iv) potentially decreasing the layer's opacity to reduce the effect.
• Don't get too caught up with scrutinizing all the fine detail. Better photos (and more fun) can usually be achieved if this time is spent elsewhere.

RECOMMENDED READING
If you're thirsting for additional examples, along with a more thorough technical treatment of the above topics, a great book is Real World Image Sharpening (2nd Edition) by Bruce Fraser & Jeff Schewe.

VIEWING ADVICE
Computers use varying types of display devices, ranging from small monitors to the latest large flat panels. Despite all of these displays, the internet has yet to set a standard for representing color and tone such that images on one display look the same as those viewed on another. As a result, this website has been designed to minimize these unavoidable viewing errors by approximating how the average display device would render these images. In addition to these measures, you can help ensure more accurate viewing by taking the extra step to verify that your display has been properly calibrated. The following image has been designed so that, when viewed at a distance, the central square should appear to have the same shade as the gray background.

. Having well-calibrated mid-tones is often the highest-priority goal. if you have been using this display for a while. it assumes that tossing your old monitor and buying a new one is not an option. This tutorial covers basic calibration for the casual photographer. In either case. and then adjust the brightness setting until the central square blends in completely. in addition to using calibration and profiling devices for high-precision results.MONITOR CALIBRATION FOR PHOTOGRAPHY Knowing how to calibrate your monitor is critical for any photographer who wants accurate and predictable photographic prints. then all the time spent on image editing and post-processing could actually be counter-productive.when viewed out of focus or at a distance. This method doesn't require a color profile for your monitor. — > — > Calibrat ed Monitor Digital Image File Color Profile ADJUSTING BRIGHTNESS & CONTRAST The easiest (but least accurate) way to calibrate your display is to simply adjust its brightness and contrast settings. The images below are designed to help you pick optimal brightness/contrast settings. 24. If your monitor is not correctly reproducing shades and colors. so it's ideal for casual use. closing one eye. make sure that your display has first been given at least 10-15 minutes to warm up.© 2004-2010 Sean McHugh If this is not the case. then your screen will display images much lighter or darker than at which they are intended to be viewed. If your display does not pass either of these tests. then you will have to choose which of the two is most important. do not despair! The eye has a remarkable ability to adjust to viewing conditions. set your display to maximum contrast (if possible). also be aware that many LCD screens will display these galleries with more contrast than intended. These pages are optimally formatted on screens which are displaying for 1024x768 and higher resolutions. 
Such a monitor should depict the central square as being the same shade as the solid outer portion -. images within this gallery will probably look just fine compared to what you are used to seeing. To correct for this. Although the above calibration step will help. For more on this topic. Ideal viewing conditions can be obtained by using a display which has been hardware calibrated using a measurement device. A well-calibrated monitor should be able to pass both tests. but if it cannot. Some pop-up blockers may need to be disabled in order to see larger views of or to visit the purchase screen for the individual images. This is best verified by taking several steps back from your display. or for when you're not at your own computer and need to make some quick adjustments. and partially opening the other so that you are seeing the screen slightly out of focus. Hardware calibrated displays will not only pass the above test. please see this website's tutorial on monitor calibration. respectively. The leftmost and rightmost squares should also appear darker and lighter than the solid gray. (1) Mid-Tones. the tones in the final print may look different than when viewed on your display. Furthermore. but should also show all eight gradations in each of the dark and light rectangles below with a neutral tone. Be aware that for such cases.

(2) Highlight & Shadow Detail. If you've followed the previous calibration, your mid-tones will now be reproduced roughly at the shade intended. However, it may also mean that the shadows and highlights will appear too bright or dark, or vice versa. You should be able to distinguish the 8 shades in each of the two images below:

Shadow Detail / Highlight Detail

When brightness is too high, solid black will appear gray, but when it's too low, shadow clipping will make several of the darker 8 shades appear the same. The two adjacent shaded bands at each outer edge of this page should be just barely distinguishable. You will likely not need to have your display at its maximum brightness if the room isn't too bright, if the display isn't back-lit (such as in front of a window) and if the display isn't too old. Note: increasing the brightness of your display too much can shorten its usable life span.

Alternatively, if maximal shadow and highlight detail are more important than mid-tone lightness, you can ignore the mid-tone image. In that case, if you are using an LCD monitor, first set your display to its default contrast (this will likely be either 100% or 50%), then adjust the brightness until the central square blends in. If you are using a CRT monitor (the larger "old-fashioned" type), then instead set it to maximum contrast, and first use brightness to control shadow detail and then use contrast to control highlight detail (in that order). Otherwise you've likely reached the limit of what brightness/contrast adjustments alone can achieve.

Note: the above calibration assumes that your monitor is set to gamma 2.2. For both CRT & LCD displays, make sure that these are set to gamma 2.2 if available (most current displays come with this as the native setting).

However, the above examples are just crude adjustments that only address small portions of the tonal range, and do not fix colors at all. Having well-calibrated mid-tones is often the highest-priority goal. There are somewhat more accurate methods out there for visual calibration, but ultimately, achieving truly accurate results requires systematic and objective measurements using a calibration device.

OVERVIEW: CALIBRATION & PROFILING
The colors and shades that a monitor reproduces vary with the monitor's type, brand, settings and even age. Unfortunately, unlike in the digital world, all numbers aren't created equal when it comes to monitors. A digital green value may therefore appear darker, lighter or with a different saturation than this color was intended to be seen:

Digital Value of Green: 200 / 150 / 100 / 50 —> Monitor "X" <- Color Mismatch -> Standardized Color

This will also have a noticeable impact on your prints, so it's something that should be addressed.
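Since gamma 2.2 keeps coming up, here is what that number means in code. The pure power-law curve below is a simplification (real sRGB displays use a slightly different piecewise curve), used only to show how linear light maps to 8-bit values:

```python
def encode_gamma(linear, gamma=2.2):
    """Convert linear light (0..1) to the gamma-encoded 8-bit value
    that a display calibrated to the given gamma expects."""
    return round(255 * linear ** (1.0 / gamma))

def decode_gamma(value, gamma=2.2):
    """Convert an 8-bit gamma-encoded value back to linear light."""
    return (value / 255.0) ** gamma

# photographic mid-gray (18% reflectance) under a 2.2 power law
mid_gray = encode_gamma(0.18)
```

An 18% mid-gray lands near digital value 117 rather than the 46 a purely linear mapping would give, which is why gamma-encoded files devote so many more of their 256 levels to the shadows and mid-tones.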

Ideally, you would get your monitor to simply translate the digital values in a file into a standardized set of colors. Unfortunately this isn't always possible, so the process of monitor calibration actually requires two steps: (1) calibration and (2) profiling.

(1) Calibration is the process of getting your monitor into a desirable and well-defined state. This usually involves changing various physical parameters on your monitor, such as the brightness from before, in addition to creating what is called a Look-Up Table (LUT). The LUT takes an input value, such as green=50 in the above example, and then says "on Monitor 'X,' I know that it reproduces green=50 darker than the standard, but if I convert the 50 into a 78 before sending it to the monitor, then the color will come out how a green=50 was intended to be seen." An LUT therefore translates digital values in a file into new values which effectively compensate for that particular monitor's characteristics:

Digital Value of Green → LUT → Compensated Digital Value
200 → 200
150 → 122
100 → 113
50 → 78

Colors Match → Monitor "X" | Standardized Color

Note: for the purposes of this example, "standardized color" is just one example of a desirable state that is well-defined in terms of universal parameters.

(2) Profiling is the process of characterizing your monitor's calibrated state using a color profile. These characteristics include the range of colors your monitor is capable of displaying (the "color space"), in addition to the spacing of intermediate shades within this range ("gamma"). Other properties may also be included, such as the white point and luminance.

Profiling is important because different devices cannot necessarily reproduce the same range of colors and shades (a "gamut mismatch"). A perfect translation from one device's colors into another's therefore isn't always possible:

Gamut Mismatch → Wide Color Gamut | Narrow Color Gamut

In the above example, Standard "A" has a greater range of greens than Standard "B," so the colors in the original image get squeezed from a wide range of intensities to a narrow range. Color profiles enable color-managed software to make intelligent compromises when making these imperfect conversions:

Original Image (Standard "A") → Color-Managed Software → Converted Image (Standard "B")

For details on how color spaces are converted, also see the tutorial on "color space conversion."
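To make the LUT idea concrete, here is a minimal sketch (not any particular calibration package's code): the sparse green-channel compensation points from the table above, for a hypothetical "Monitor X", are linearly interpolated into a full 256-entry table that remaps every value before it is sent to the display.

```python
# Hypothetical measured (input -> compensated output) pairs, taken from
# the green-channel example above; the endpoints are assumed anchors.
measured = {0: 0, 50: 78, 100: 113, 150: 122, 200: 200, 255: 255}

def build_lut(points):
    """Linearly interpolate sparse calibration points into a 256-entry LUT."""
    xs = sorted(points)
    lut = []
    for v in range(256):
        lo = max(x for x in xs if x <= v)   # nearest measured point below
        hi = min(x for x in xs if x >= v)   # nearest measured point above
        if lo == hi:
            lut.append(points[lo])
        else:
            t = (v - lo) / (hi - lo)
            lut.append(round(points[lo] + t * (points[hi] - points[lo])))
    return lut

lut = build_lut(measured)
print(lut[50])   # 78: a file value of green=50 is sent to the monitor as 78
print(lut[150])  # 122
```

A real LUT is built from many more measurements (and often smoother curve fits), but the principle is the same: every file value is silently replaced by the value that makes this particular monitor show the intended shade.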

MONITOR CALIBRATION DEVICES

calibration device in use

A monitor calibration device is what performs the task of both calibration and profiling. It is usually something that looks like a computer mouse, but it instead fastens to the front of your monitor. Special software then controls the monitor so that it displays a broad range of colors and shades underneath the calibration device, which are each sequentially measured and recorded. Common calibration devices include the X-Rite Eye-One Display, ColorVision Spyder, ColorEyes Display and ColorMunki Photo, amongst others.

Before initiating a calibration, first make sure to give your monitor at least 10-15 minutes to warm up. This ensures that its brightness and color balance have reached a steady and reproducible state.

Just before the calibration starts, your calibration software will ask you to specify several parameters that it will calibrate to (the "target settings"). These may include the white point, gamma and luminance (we'll get to these in the next section). During the calibration process, you may also be instructed to change various display settings, including brightness and contrast (and the RGB values if you have a CRT).

The result will be a matrix of color values and their corresponding measurements. Sophisticated software algorithms then attempt to create a LUT which both (i) reproduces neutral, accurate and properly-spaced shades of gray and (ii) reproduces accurate color hue and saturation across the gamut. If neither is perfectly achievable (they never are), then the software tries to prioritize so that inaccuracies only correspond to tonal and color differences that our eyes are not good at perceiving.

CALIBRATION SETTINGS

Here's a brief description and recommendation for each of the target calibration settings:

White Point. This setting controls the relative warmth or coolness of the display's lightest tone, as specified by its "color temperature." Higher color temperatures appear cooler, whereas lower temperatures appear warmer (yes, this is at first counter-intuitive). See the tutorial on white balance for additional background reading on this topic.

Warmer Color Temperature | Your Monitor's Native Color Temperature | Cooler Color Temperature

Even though the above shades appear slightly warmer and cooler, that's primarily because they're being compared side by side. If each were on its own, and were the brightest shade your display could show, your eye would adjust and you would likely call each of them "white"; no warm or cool hue will be apparent unless it is being directly compared.

With CRT monitors, the standard recommendation is to set your display to around 6500K (aka D65), which is a little cooler than daylight. However, with LCD monitors it's become a bit more complicated. While many LCDs have a color temperature option, the back light for these displays always has a native color temperature, and any deviation from this native value will end up reducing your display's color gamut. It's therefore generally recommended to leave your LCD at its default color temperature unless you have a good reason to set it otherwise.

Gamma. This setting controls the rate at which shades appear to increase from black to white (for each successive digital value), but does not change the black and white points. This makes a given image appear darker or brighter for higher and lower gamma values, respectively. It also strongly influences an image's apparent contrast:

Gamma 1.0 | Gamma 1.8 | Gamma 2.2 | Gamma 4.0
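The gamma setting can be sketched numerically. Assuming the simple power-law model (displayed brightness proportional to the pixel value raised to the gamma), the same mid-gray comes out progressively darker as gamma increases:

```python
# Sketch of the power-law display gamma model: output = input ** gamma,
# with 8-bit values normalized to the 0..1 range first.
def apply_gamma(value_8bit, gamma):
    normalized = value_8bit / 255.0
    return round((normalized ** gamma) * 255)

mid = 128
for g in (1.0, 1.8, 2.2, 4.0):
    # higher gamma -> darker midtones; black (0) and white (255) never move
    print(f"gamma {g}: {mid} -> {apply_gamma(mid, g)}")
```

Note how the endpoints are unaffected (0 stays 0 and 255 stays 255), which is exactly why gamma changes apparent contrast without moving the black and white points.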

A display gamma of 2.2 has become a standard for image editing and viewing, so it's generally recommended to use this setting. It also correlates best with how we perceive brightness variations, and is usually close to your display's native setting. Older Mac computers at one time used gamma values of 1.8, but they now also use gamma 2.2.

Note: The above images assume that your display is currently set to gamma 2.2.

Luminance. This setting controls the amount of light emitted from your display. Unlike with the white point and gamma settings, the optimal luminance setting is heavily influenced by the brightness of your working environment. Most people set the luminance to anywhere from 100-150 cd/m2, with brighter working environments potentially requiring values that exceed this range. The maximum attainable luminance will depend on your monitor type and age, so this may ultimately limit how bright your working environment can be. However, higher luminance will shorten the usable life span of your monitor, so it's always better to instead move your monitor to somewhere darker if you can. Use the lowest setting in the 100-150 range where you can still see all 8 shades in the above image.

CALIBRATION: LOOK-UP TABLE

The Look-Up Table (LUT) is either controlled by your video card or by your monitor itself, so it will be used regardless of whether your software program is color managed (unlike with the color profile). The LUT is usually loaded immediately after booting up into your operating system, and is used identically regardless of what your monitor is displaying.

Whenever the red, green and blue values are equal, an accurate monitor should display this as a neutral gray. However, you'd be surprised how often this isn't the case (see below). The job of the LUT* is to maintain neutral gray tones with the correct gamma.

R,G,B Input Values → Monitor "X" → Neutral Gray? (Mismatch)
200,200,200
159,159,159
100,100,100
50,50,50

A sample LUT that corrects Monitor "X" is shown below. It effectively applies a separate tonal curve to each of your monitor's three color channels:

No Adjustment → Look-Up Table (LUT)

*Note: the example shown above is for a simpler 8-bit 1D LUT, as is most commonly used with CRT monitors. There are also more complicated 3D LUTs, which do not treat each color independently, but the basic concept remains the same.

Without the above LUT, your video card sends an input color value of 159 (from a digital file) directly to your monitor as an output value of 159 (no matter what the color is). With the LUT, the video card looks up each red, green and blue value using the tonal curves. An input value of R,G,B = 159,159,159 gets sent to your monitor as an output value of 145,155,162 (which is now perceived as neutral gray). Also, note how greater color corrections correspond to color curves which diverge more from a straight diagonal.

There are often several LUTs along the imaging chain, not just with the video card. The other LUT that is most relevant to monitor calibration is your monitor's internal LUT (as discussed later). If your monitor supports modifying its own LUT (few do), this will usually achieve more accurate calibrations than using your video card's LUT. However, unless the calibration software is designed for your particular monitor, it will likely end up using your video card's LUT instead.

PROFILING: COLOR PROFILE

The color profile specifies the target settings from your calibration, such as gamma, white point and luminance, in addition to measurements from the calibration, such as the maximum red, green and blue intensities that your display can emit. These properties collectively define the color space of your monitor. A copy of the LUT is also included, but this is not used directly since it's already been implemented by your monitor or video card.

A color profile is used to convert images so that they can be displayed using the unique characteristics of your monitor. Unlike with the LUT, you will need to view images using color-managed software in order to use a color profile. This won't be a problem if you're running the latest PC or Mac operating systems though, since they're both color-managed. Otherwise Photoshop or any other mainstream image editing or RAW development software will work just fine.

Whenever a digital image is opened that contains an embedded color profile, your software can compare this profile to the profile that was created for your monitor. If the monitor has the same range of colors specified in the digital image, then values from the file will be directly converted by the LUT into the correct values on your monitor. However, if the color spaces differ (as is usually the case), then your software will perform a more sophisticated conversion. This process is called color space conversion.

TESTING YOUR MONITOR CALIBRATION

Do not assume that just because you've performed a color calibration, your monitor will now reproduce accurate color without complication. It's important to also verify the quality of this calibration. If you end up noticing that your color calibration device was unable to repair some inaccuracies, at least you can be aware of these in the back of your mind if you perform any image editing that influences color.

The quickest and easiest way to diagnose the quality of a color calibration is to view a large grayscale gradient in an image viewing program that supports color management. Suboptimal monitor calibration may render this gradation with subtle vertical bands of color, or occasional discrete jumps in tone. Move your mouse over the image below to see what a poor quality monitor calibration might look like:

Example of a smooth grayscale gradation for diagnosing the quality of a monitor calibration.

Such a gradation is easiest to diagnose when viewed at your display's maximum size, and when alternating between having the monitor's color profile turned on and off. In Photoshop, this is achieved by using "Proof Colors" set to "Monitor RGB"; CTRL+Y toggles the monitor profile on and off. When "Monitor RGB" is turned on, this means that the monitor's color profile is not being used.

If color banding is visible, then this might mean that your monitor needs re-calibration. It's generally recommended to perform this once every month or so, depending on how important color accuracy is to your work. Alternatively, your monitor's native color reproduction might be so far from optimal that the color profile represents an extreme correction. This could be due to the monitor calibration settings you're using, but could also be caused by the age of the monitor. In the latter case, a color profile is likely still a vast improvement over no color profile, but it comes with compromises.

LIMITATIONS OF MONITOR CALIBRATION

Unfortunately, there are limits to how accurately you can calibrate your display. With a digital display, the more you have to change your monitor from its native state, the more you will decrease the number of colors/shades that it can display. Fortunately, the bit depth of your monitor's internal LUT can influence how well it is calibrated, since a monitor with a higher bit depth LUT is able to draw upon a larger palette of colors:

No Adjustment (4 output shades) | Low Bit Depth LUT (2 output shades) | High Bit Depth LUT (4 output shades)
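The per-channel correction can be sketched as follows. The three single-point "curves" below are hypothetical, chosen only to reproduce the neutral-gray example above; a real LUT defines an output for all 256 inputs on each channel.

```python
# Sketch: a video-card LUT applies a separate tonal curve to each of the
# red, green and blue channels. These toy curves correct only the one
# value from the example; unlisted inputs pass through unchanged.
red_curve   = {159: 145}
green_curve = {159: 155}
blue_curve  = {159: 162}

def correct(rgb):
    """Remap one pixel through the per-channel curves."""
    r, g, b = rgb
    return (red_curve.get(r, r), green_curve.get(g, g), blue_curve.get(b, b))

print(correct((159, 159, 159)))  # (145, 155, 162): displayed as neutral gray
```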

In the low bit depth example, the brightest (4) and darkest (1) shades are forced to merge with white (5) and black (0), respectively, since the LUT has to round to the nearest output value available. On the other hand, the high bit depth LUT can use additional intermediate values. This greatly reduces the likelihood of color banding and image posterization, even when the display is old and deviates substantially from its original colors.

Note: A higher bit depth internal LUT does not mean that a monitor can actually display more colors at the same time, since the number of input values remains the same. This is why a higher bit depth LUT in your video card will not on its own achieve more accurate calibrations.

The vast majority of displays have an 8-bit LUT, although some have 6-bit or 10+ bit LUTs. Avoid LCD monitors that are marketed to the gaming community, because these sometimes sacrifice the bit depth of their LUT (or other aspects) in exchange for higher refresh rates, which are of no importance to viewing still images. If you have a new, accurate display with an 8-bit LUT, then you'll likely get good calibrations; the LUT bit depth is just something to be aware of as your monitor ages.

DIGITAL CAMERA SENSOR SIZES

This article aims to address the question: how does your digital camera's sensor size influence different types of photography? Your choice of sensor size is analogous to choosing between 35 mm, medium format and large format film cameras, with a few notable differences unique to digital technology. Much confusion often arises on this topic because there are both so many different size options, and so many trade-offs relating to depth of field, image noise, diffraction, cost and size/weight. Background reading on this topic can be found in the tutorial on digital camera sensors.

I have written this article after conducting my own research to decide whether the new Canon EOS 5D is really an upgrade from the 20D for the purposes of my photography.

OVERVIEW OF SENSOR SIZES

Sensor sizes currently have many possibilities, depending on their use, price point and desired portability. The relative size for many of these is shown below:

Canon's 1Ds/1DsMkII/5D and the Kodak DCS 14n are the most common full frame sensors. Canon cameras such as the 300D/350D/10D/20D all have a 1.6X crop factor, whereas Nikon cameras such as the D70(s)/D100 have a 1.5X crop factor. Camera phones and other compact cameras use sensor sizes in the range of ~1/4" to 2/3". Olympus, Fuji and Kodak all teamed up to create a standard 4/3 system, which has a 2X crop factor compared to 35 mm film. Medium format and larger sensors exist, however these are far less common and currently prohibitively expensive; these will thus not be addressed here specifically. The above chart excludes the 1.3X crop factor, which is used in Canon's 1D series cameras.

CROP FACTOR & FOCAL LENGTH MULTIPLIER

The crop factor is the sensor's diagonal size compared to a full-frame 35 mm sensor. It is called this because when using a 35 mm lens, such a sensor effectively crops out this much of the image at its exterior (due to its limited size).
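The diagonal-ratio definition of crop factor is easy to sketch. The sensor dimensions below are approximate (roughly 22.2 x 14.8 mm for Canon's 1.6X APS-C format), used only for illustration:

```python
# Sketch: crop factor = full-frame diagonal / sensor diagonal.
import math

def diagonal(width_mm, height_mm):
    return math.hypot(width_mm, height_mm)

FULL_FRAME_DIAGONAL = diagonal(36.0, 24.0)   # ~43.3 mm for 35 mm film

def crop_factor(width_mm, height_mm):
    return FULL_FRAME_DIAGONAL / diagonal(width_mm, height_mm)

# Approximate Canon APS-C dimensions give the familiar ~1.6X factor:
print(round(crop_factor(22.2, 14.8), 1))  # 1.6
```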

35 mm Full Frame Angle of View

One might initially think that throwing away image information is never ideal; however, it does have its advantages. Nearly all lenses are sharpest at their centers, while quality degrades progressively toward the edges. This means that a cropped sensor effectively discards the lowest quality portions of the image, which is quite useful when using low quality lenses (as these typically have the worst edge quality).

Uncropped Photograph | Center Crop | Corner Crop

On the other hand, smaller sensors also enlarge the center region of the lens more, so its resolution limit is likely to be more apparent for lower quality lenses. Ideally, one would use nearly all image light transmitted from the lens, and this lens would be of high enough quality that its change in sharpness would be negligible towards its edges. Additionally, since a cropped sensor is forced to use a wider angle lens to produce the same angle of view as a larger sensor, this can degrade quality: the optical performance of wide angle lenses is rarely as good as that of longer focal lengths. Using a full-format lens on a cropped sensor also means that one is carrying a much larger lens than is necessary, a factor particularly relevant to those carrying their camera for extended periods of time (see section below). See the tutorial on camera lens quality for more on this.

Similarly, the focal length multiplier relates the focal length of a lens used on a smaller format to a 35 mm lens producing an equivalent angle of view, and is equal to the crop factor. This means that a 50 mm lens used on a sensor with a 1.6X crop factor would produce the same field of view as a 1.6 x 50 = 80 mm lens on a 35 mm full frame sensor.

Focal Length Multiplier Calculator (interactive)
Sensor Type: …
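The focal length multiplier calculation itself is a single multiplication, mirroring the calculator above (the function name is my own):

```python
# Sketch of the "Focal Length Multiplier Calculator":
# 35 mm equivalent focal length = actual focal length x crop factor.
def equivalent_focal_length(actual_mm, crop_factor):
    return actual_mm * crop_factor

print(equivalent_focal_length(50, 1.6))  # 80.0 mm, as in the example above
```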

Actual Lens Focal Length: … mm → Focal Length Multiplier | 35 mm Equivalent Focal Length (interactive calculator)

Be warned that both of these terms can be somewhat misleading. The lens focal length does not change just because a lens is used on a different sized sensor, just its angle of view. A 50 mm lens is always a 50 mm lens, regardless of the sensor type. Similarly, "crop factor" may not be appropriate to describe very small sensors, because the image is not necessarily cropped out (when using lenses designed for that sensor).

LENS SIZE AND WEIGHT CONSIDERATIONS

Smaller sensors require lighter lenses (for equivalent angle of view, zoom range, build quality and aperture range). This difference may be critical for wildlife, hiking and travel photography, because all of these often utilize heavier lenses or require carrying equipment for extended periods of time. The chart below illustrates this trend for a selection of Canon telephoto lenses typical in sport and wildlife photography:

An implication of this is that if one requires the subject to occupy the same fraction of the image on a 35 mm camera as when using a 200 mm f/2.8 lens on a camera with a 1.5X crop factor (requiring a 300 mm f/2.8 lens), one would have to carry 3.5X as much weight! This also ignores the size difference between the two, which may be important if one does not want to draw attention in public. At the same time, heavier lenses typically cost much more.

For SLR cameras, larger sensor sizes also result in larger and clearer viewfinder images, which can be especially helpful when manual focusing. However, these will also be heavier and cost more because they require a larger prism/pentamirror to transmit the light from the lens into the viewfinder and towards your eye.

DEPTH OF FIELD REQUIREMENTS

As sensor size increases, the depth of field will decrease for a given aperture (when filling the frame with a subject of the same size and distance). This is because larger sensors require one to get closer to their subject, or to use a longer focal length, in order to fill the frame with that subject. This means that one has to use progressively smaller aperture sizes in order to maintain the same depth of field on larger sensors. The following calculator predicts the required aperture and focal length in order to achieve the same depth of field (while maintaining perspective).

Depth of Field Equivalents (interactive calculator)
Sensor #1: selected aperture
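The equivalence behind such a calculator can be sketched with the usual approximation: moving to a sensor that is k times larger (k being the ratio of the two crop factors), both the focal length and the f-number scale by k to preserve perspective and depth of field. The helper below is hypothetical, not the actual calculator's code:

```python
# Sketch of the "Depth of Field Equivalents" calculation, assuming the
# standard equivalence approximation (scale focal length and f-number by
# the ratio of crop factors when changing sensor size).
def dof_equivalents(focal_mm, f_number, crop_from, crop_to):
    k = crop_from / crop_to   # e.g. 1.6X crop -> full frame gives k = 1.6
    return focal_mm * k, f_number * k

focal, aperture = dof_equivalents(10, 11, crop_from=1.6, crop_to=1.0)
print(focal, aperture)  # 16.0 mm at f/17.6 (roughly f/18)
```

This reproduces the worked example further below: 10 mm at f/11 on a 1.6X body corresponds to about 16 mm at roughly f/18 on full frame.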

Actual lens focal length | Sensor #2: ?? mm | Required Focal Length (for same perspective) | Required Aperture (interactive calculator, continued)

As an example calculation, if one wanted to reproduce the same perspective and depth of field on a full frame sensor as that attained using a 10 mm lens at f/11 on a camera with a 1.6X crop factor, one would need to use a 16 mm lens and an aperture of roughly f/18. Alternatively, if one used a 50 mm f/1.4 lens on a full frame sensor, this would produce a depth of field so shallow that it would require an aperture of 0.9 on a camera with a 1.6X crop factor: not possible with consumer lenses!

Note that the above calculator assumes that you have a lens on the new sensor (#2) which can reproduce the same angle of view as on the original sensor (#1). If you instead use the same lens, then the aperture requirements remain the same (but you will have to get closer to your subject). This option, however, also changes perspective.

A shallower depth of field may be desirable for portraits because it improves background blur, whereas a larger depth of field is desirable for landscape photography. This is why compact cameras struggle to produce significant background blur in portraits, while large format cameras struggle to produce adequate depth of field in landscapes.

INFLUENCE OF DIFFRACTION

Larger sensor sizes can use smaller apertures before the diffraction airy disk becomes larger than the circle of confusion (determined by print size and sharpness criteria). This is primarily because larger sensors do not have to be enlarged as much in order to achieve the same print size. As an example: one could theoretically use a digital sensor as large as 8x10 inches, and so its image would not need to be enlarged at all for an 8x10 inch print, whereas a 35 mm sensor would require significant enlargement.

Use the following calculator to estimate when diffraction begins to reduce sharpness. Note that this only shows when diffraction will be visible when viewed onscreen at 100%; whether this will be apparent in the final print also depends on viewing distance and print size. To calculate this as well, please visit: diffraction limits and photography.

Diffraction Limited Aperture Estimator (interactive calculator)
Sensor Size | Resolution (Megapixels) → Diffraction Limited Aperture
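A rough sketch of such an estimator is shown below. The "Airy disk spans about two pixels" criterion and the green-light wavelength (550 nm) are my assumptions for illustration; real estimators tie the threshold to a print-size-dependent circle of confusion, so exact numbers will differ.

```python
# Sketch: estimate the aperture at which diffraction starts to limit
# onscreen sharpness, assuming it becomes visible once the Airy disk
# diameter (2.44 * wavelength * f-number) exceeds ~2 pixel widths.
import math

def pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels):
    pixels_wide = math.sqrt(megapixels * 1e6 * sensor_w_mm / sensor_h_mm)
    return sensor_w_mm * 1000.0 / pixels_wide

def diffraction_limited_aperture(sensor_w_mm, sensor_h_mm, megapixels,
                                 wavelength_um=0.55):
    limit_um = 2.0 * pixel_pitch_um(sensor_w_mm, sensor_h_mm, megapixels)
    return limit_um / (2.44 * wavelength_um)

# Roughly Canon 20D-sized sensor (~22.5 x 15.0 mm, 8.2 megapixels):
print(round(diffraction_limited_aperture(22.5, 15.0, 8.2), 1))
```

Under these assumptions the 20D-style sensor lands near f/10, which is consistent with the observation below that f/11 is usable on a 20D given the gradual onset of diffraction.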

Keep in mind that the onset of diffraction is gradual, so apertures slightly larger or smaller than the above diffraction limit will not all of a sudden look better or worse. Furthermore, the above is only a theoretical limit; actual results will also depend on lens characteristics. On a Canon 20D, for example, one can often use f/11 without noticeable changes in focal plane sharpness, but above this it becomes quite apparent.

The following diagrams show the size of the airy disk (theoretical maximum resolving ability) for two apertures against a grid representing pixel size:

Pixel Density Limits Resolution (Shallow DOF Requirement) | Airy Disk Limits Resolution (Deep DOF Requirement)

An important implication of the above results is that the diffraction-limited pixel size increases for larger sensors (if the depth of field requirements remain the same). This pixel size refers to when the airy disk size becomes the limiting factor in total resolution, not the pixel density. Further, the diffraction-limited depth of field is constant for all sensor sizes. This factor may be critical when deciding on a new camera for your intended use, because more pixels may not necessarily provide more resolution (for your depth of field requirements). In fact, more pixels could even harm image quality by increasing noise and reducing dynamic range (next section).

PIXEL SIZE: NOISE LEVELS & DYNAMIC RANGE

Larger sensors generally also have larger pixels (although this is not always the case), which give them the potential to produce lower image noise and have a higher dynamic range. Dynamic range describes the range of tones which a sensor can capture below the point when a pixel becomes completely white, but yet above the point when texture is indiscernible from background noise (near black). Since larger pixels have a greater volume, and thus a greater range of photon capacity, these generally have a higher dynamic range.

Note: cavities shown without color filters present

Further, larger pixels receive a greater flux of photons over a given exposure time (at the same aperture), so their light signal is much stronger. For a given amount of background noise, this produces a higher signal to noise ratio, and thus a smoother looking photo.

Larger Pixels | Smaller Pixels
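The signal-to-noise argument can be sketched with a toy photon shot-noise model (ignoring read noise and the other sources discussed below): a pixel collecting N photons has random noise of about sqrt(N), so SNR grows as the square root of the light collected.

```python
# Toy shot-noise model: SNR = signal / noise = N / sqrt(N) = sqrt(N).
# Quadrupling a pixel's light-gathering area quadruples N, doubling SNR.
import math

def shot_noise_snr(photons):
    return photons / math.sqrt(photons)

small_pixel = shot_noise_snr(10_000)
large_pixel = shot_noise_snr(40_000)   # hypothetical pixel with 4x the area
print(small_pixel, large_pixel)        # 100.0 200.0
```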

Further. In general though. which allow one to increase (or decrease) the apparent depth of field using the tilt feature. and that each lens is of comparable quality. Another aspect to consider is that even if two sensors have the same apparent noise when viewed at 100%. Assuming these factors (chips per wafer and yield) are most important.(Often Larger Sensor) (Often Smaller Sensor) This is not always the case however. This is not to say though that certain sized sensors will always be prohibitively expensive. larger sensor sizes do not necessarily have a resolution advantage. because the amount of background noise also depends on sensor manufacturing process and how efficiently the camera extracts tonal information from each pixel (without introducing additional noise). costs increase proportional to the square of sensor area (a sensor 2X as big costs 4X as much).allowing one to change the angle of the focal plane and therefore increase the apparent depth of field. so you are effectively paying more per unit "sensor real estate" as you move to larger sizes. the sensor with the higher pixel count will produce a cleaner looking final print. therefore this noise has a higher frequency and thus appears finer grained. COST OF PRODUCING DIGITAL SENSORS The cost of a digital sensor rises dramatically as its area increases. CONCLUSIONS: OVERALL IMAGE DETAIL & COMPETING FACTORS Depth of field is much shallower for larger format sensors. Furthermore. On the other hand. if one were to use the smallest aperture before diffraction became significant. fast ultra-wide angle lenses (f/2. therefore fewer chips per wafer result in a much higher cost per chip. Real-world manufacturing has a more complicated size versus cost relationship. This is because the noise gets enlarged less for the higher pixel count sensor (for a given print size). 
Silicon Wafer (divided into small sensors) Silicon Wafer (divided into large sensors) One can understand this by looking at how manufacturers make their digital sensors. the tilt lens feature is far more common in larger format cameras-. which might also be a consideration if these help your style of photography. their price may eventually drop. Tilt/shift lenses can also use shift to control perspective and reduce (or eliminate) converging vertical lines caused by aiming the camera above or below the horizon (useful in architectural photography). but the relative cost of a larger sensor is likely to remain significantly more expensive (per unit area) when compared to some smaller size. Furthermore. OTHER CONSIDERATIONS Some lenses are only available for certain sensor sizes (or may not work as intended otherwise).even though the diffraction limited aperture will be different. which may be a deciding factor if needed in sports or photojournalism. . So which option has the potential to produce the most detailed photo? Larger sensors (and correspondingly higher pixel counts) undoubtedly produce more detail if you can afford to sacrifice depth of field. the above trend holds true. This means that a sensor with twice the area will cost more than twice as much. One notable type is tilt/shift lenses. Technical Notes: This result assumes that your pixel size is comparable to the size of the diffraction limited airy disk for each sensor in question. all sensor sizes would produce the same depth of field-. the diffraction-limited depth of field is the same for all sensor sizes. therefore the percentage of usable sensors goes down with increasing sensor area (yield per wafer). which may contain thousands of individual chips. but this gives you an idea of skyrocketing costs. the chance of an irreparable defect (too many hot pixels or otherwise) ending up in a given sensor increases with sensor area. if you wish to maintain the same depth of field. In other words. 
Each sensor is cut from a larger sheet of silicon material called a wafer. Each wafer is extremely expensive (thousands of dollars). Furthermore.8 or larger) are currently only available for 35 mm and larger sensors. however one could also use a smaller aperture before reaching the diffraction limit (for your chosen print size and sharpness criteria).

Another important result is that if depth of field is the limiting factor, the required exposure time increases with sensor size for the same sensitivity. On the other hand, exposure times may not necessarily increase as much as one might initially assume, because larger sensors generally have lower noise and can thus afford to use a higher sensitivity ISO setting while maintaining similar perceived noise; a larger sensor can therefore still achieve a comparable depth of field to a smaller sensor by using a higher ISO speed and smaller aperture (or when using a tripod). This factor is probably most relevant to macro and nightscape photography, as these both may require a large depth of field and a reasonably short exposure time. Note that even if photos can be taken handheld in a smaller format, those same photos may not necessarily be taken handheld in the larger format.

Perceived noise levels (at a given print size) generally decrease with larger digital camera sensors (regardless of pixel size). No matter what the pixel size, larger sensors unavoidably have more light-gathering area. Theoretically, a larger sensor with smaller pixels will still have lower apparent noise (for a given print size) than a smaller sensor with larger pixels (and a resulting much lower total pixel count). This is because noise in the higher resolution camera gets enlarged less, even if it may look noisier at 100% on your computer screen. Alternatively, one could conceivably average adjacent pixels in the higher pixel count sensor (thereby reducing random noise) while still achieving the resolution of the lower pixel count sensor. This is why images downsized for the web and small prints look so noise-free.

Technical Notes: This all assumes that differences in microlens effectiveness and pixel spacing are negligible for different sensor sizes. If pixel spacing has to remain constant (due to read-out and other circuitry on the chip), then higher pixel densities will result in less light-gathering area unless the microlenses can compensate for this loss. Additionally, this ignores the impact of fixed pattern or dark current noise, which may vary significantly depending on camera model and read-out circuitry.

Overall: larger sensors generally provide more control and greater artistic flexibility, but at the cost of requiring larger lenses and more expensive equipment. This flexibility allows one to create a shallower depth of field than possible with a smaller sensor (if desired). At the very least, hopefully it will cause you to think twice about what's important when purchasing your next digital camera or lens.

LENS QUALITY: MTF, RESOLUTION & CONTRAST

Lens quality is more important now than ever, due to the ever-increasing number of megapixels found in today's digital cameras. Frequently, the resolution of your digital photos is actually limited by the camera's lens, and not by the resolution of the camera itself. However, deciphering MTF charts and comparing the resolution of different lenses can be a science unto itself. This tutorial gives an overview of the fundamental concepts and terms used for assessing lens quality.

Everyone is likely to be familiar with the concept of image resolution, but unfortunately, too much emphasis is often placed on this single metric. Resolution only describes how much detail a lens is capable of capturing, and not necessarily the quality of the detail that is captured. Other factors therefore often contribute much more to our perception of the quality and sharpness of a digital image.

To understand this, let's take a look at what happens to an image when it passes through a camera lens and is recorded at the camera's sensor. To make things simple, we'll use images composed of alternating black and white lines ("line pairs"). Beyond the resolution of your lens, these lines are of course no longer distinguishable:

High Resolution Line Pairs → Lens → Unresolved Line Pairs
Example of line pairs which are smaller than the resolution of a camera lens.

However, something that's probably less well understood is what happens to other, thicker lines. Even though they're still resolved, these progressively deteriorate in both contrast and edge clarity (see sharpness: resolution and acutance) as they become finer:

MTF: MODULATION TRANSFER FUNCTION

A modulation transfer function (MTF) quantifies how well a subject's regional brightness variations are preserved when they pass through a camera lens:

[Diagram: Progressively Finer Lines -> Lens -> Progressively Less Contrast & Edge Definition]

For two lenses with the same resolution, the apparent quality of the image will therefore be mostly determined by how well each lens preserves contrast as these lines become progressively narrower. However, in order to make a fair comparison between lenses, we need to establish a way to quantify this loss in image quality.

Line pairs are often described in terms of their frequency: the number of lines which fit within a given unit length. This frequency is therefore usually expressed as "LP/mm" -- the number of line pairs (LP) that are concentrated into a millimeter (mm). Alternatively, this frequency is sometimes instead expressed in terms of line widths (LW), where two LW's equal one LP.

The example below illustrates an MTF curve for a perfect* lens:

[Graph: MTF versus Increasing Line Pair Frequency, ending at the Maximum Resolution (Diffraction Limit). Note: The spacing between black and white lines has been exaggerated to improve visibility. The MTF curve assumes a circular aperture; other aperture shapes will produce slightly different results.]

An MTF of 1.0 represents perfect contrast preservation, whereas values less than this mean that more and more contrast is being lost -- until an MTF of 0, where line pairs can no longer be distinguished at all.

*A perfect lens is one that is limited in resolution and contrast only by diffraction. This resolution limit is an unavoidable barrier with any lens; it depends only on the camera lens aperture and is unrelated to the number of megapixels. See the tutorial on diffraction in photography for a background on this topic.

The blue line above represents the MTF curve for a perfect "diffraction limited" lens. No real-world lens is limited only by diffraction, although high-end camera lenses can get much closer to this limit than lower quality lenses. The figure below compares a perfect lens to two real-world examples:

[Graph: MTF versus Increasing Line Pair Frequency for a Very High Quality Camera Lens (close to the diffraction limit) and a Low Quality Camera Lens (far from the diffraction limit). Comparison between an ideal diffraction-limited lens (blue line) and real-world camera lenses. The line pair illustration below the graph does not apply to the perfect lens.]
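Since the diffraction limit depends only on the aperture, it can be computed directly. The sketch below (my own illustration, not from the article) uses the standard cutoff-frequency formula for a circular aperture, roughly 1/(wavelength x f-number):

```python
def diffraction_cutoff_lpmm(f_number, wavelength_mm=550e-6):
    """Cutoff spatial frequency (LP/mm) of an ideal, diffraction-limited
    lens with a circular aperture: frequencies above this reach MTF = 0.
    550 nm (green light) is a common reference wavelength."""
    return 1.0 / (wavelength_mm * f_number)

# An ideal lens at f/8 resolves at most ~227 LP/mm in green light;
# stopping down to f/22 lowers the ceiling to ~83 LP/mm.
print(round(diffraction_cutoff_lpmm(8)))   # -> 227
print(round(diffraction_cutoff_lpmm(22)))  # -> 83
```

Real lenses fall below this ceiling at every frequency; the formula only gives the limit that the diffraction-limited curve approaches.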

However, the above MTF versus frequency chart is not normally how lenses are compared; knowing just the (i) maximum resolution and (ii) MTF at perhaps two different line frequencies is usually more than enough information. The highest line frequency that a lens can reproduce without losing more than 50% of the MTF ("MTF-50") is an important number, because it correlates well with our perception of sharpness. A high-end lens with an MTF-50 of 50 LP/mm will appear far sharper than a lower quality lens with an MTF-50 of 20 LP/mm, for example (presuming that these are used on the same camera and at the same aperture; more on this later).

What often matters more is knowing how the MTF changes depending on the distance from the center of your image, for a fixed line frequency (usually 10-30 LP/mm). The MTF is usually measured along a line leading out from the center of the image and into a far corner. These lines can either be parallel to the direction leading away from the center (sagittal) or perpendicular to this direction (meridional); we'll discuss why the sagittal and meridional lines diverge later. The example below shows how these lines might be measured and shown on an MTF chart for a full frame 35mm camera:

[Diagram: Meridional (Circular) Line Pairs and Sagittal (Radial) Line Pairs, plotted against Distance From Center [mm]]

On the vertical axis we have the MTF value from before, with 1.0 representing perfect reproduction of line pairs, and 0 representing line pairs that are no longer distinguished from each other. On the horizontal axis we have the distance from the center of the image, with 21.6 mm being the far corner on a 35 mm camera. For a 1.6X cropped sensor, you can ignore everything beyond 13.5 mm. Further, anything beyond about 18 mm with a full frame sensor will only be visible in the extreme corners of the photo:

[Diagram: Full Frame 35 mm Sensor vs. 1.6X Cropped Sensor. Note: For a 1.5X sensor, the far corner is at 14.2 mm, and the far edge is at 11.9 mm.]

Detail at the center of an image will virtually always have the highest MTF, and positions further from the center will often have progressively lower MTF values. This is why the corners are virtually always the softest and lowest quality portion of your photos.

HOW TO READ AN MTF CHART

Now we can finally put all of the above concepts into practice by comparing the properties of a zoom lens with a prime lens:

[MTF charts: Canon 16-35mm f/2.8L II Zoom Lens vs. Canon 35mm f/1.4L Prime Lens (zoom set at 35mm), each plotting MTF against Distance from Image Center [mm]]
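The far-edge and far-corner distances quoted for each sensor follow from simple geometry on the sensor dimensions. A minimal sketch (the function name is my own, not from the article):

```python
import math

def mtf_chart_range(width_mm, height_mm):
    """Distances (mm) from the image center to the far edge and far corner;
    the far corner is the right-hand end of an MTF chart's horizontal axis."""
    far_edge = width_mm / 2
    far_corner = math.hypot(width_mm, height_mm) / 2
    return far_edge, far_corner

edge, corner = mtf_chart_range(36, 24)      # full frame, 36 x 24 mm
print(round(edge), round(corner, 1))        # -> 18 21.6
edge, corner = mtf_chart_range(22.5, 15.0)  # Canon 1.6X cropped sensor
print(round(edge, 2), round(corner, 1))     # -> 11.25 13.5
```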

All of the different looking lines in the above MTF charts can at first be overwhelming. A big hurdle with understanding how to read an MTF chart is learning what each line refers to. Each line represents a separate MTF under different conditions; for example, one line might represent MTF values when the lens is at an aperture of f/4.0, while another might represent MTF values at f/8.0. Each line has three different styles: thickness, color and type. Here's a breakdown of what each of these represents:

Line Thickness: Bold -> 10 LP/mm (small-scale contrast); Thin -> 30 LP/mm (resolution or fine detail)
Line Color: Blue -> Aperture at f/8.0; Black -> Aperture wide open
Line Type: Solid -> Sagittal (radial) line pairs; Dashed -> Meridional (concentric) line pairs

Since a given line can have any combination of thickness, color and type, the above MTF chart has a total of eight different types of lines. The key is to look at them individually. For example, a curve that is bold, blue and dashed would describe the MTF of meridional 10 LP/mm line pairs at an aperture of f/8.0.

Bold vs. Thin Lines. Bold lines describe the amount of "pop" or small-scale contrast, similar to what happens when performing local contrast enhancement, whereas thin lines describe finer details or resolution. Bold lines are often a priority, since high values can mean that your images will have a more three dimensional look.

Blue vs. Black Lines. Blue lines are more useful for comparisons, because they are always at the same aperture: f/8.0. Black lines unfortunately aren't a fair apples to apples comparison, since a wide open aperture is different for each of the above lenses (f/2.8 on the zoom vs f/1.4 on the prime). The MTF of black lines will almost always be a worst case scenario (unless you use unusually small apertures). Blue lines are most relevant for landscape photography, or other situations where you need to maximize depth of field and sharpness. Black lines are most relevant when you are using your lens in low light, need to freeze rapid movement, or need a shallow depth of field.

In the above example, both lenses have similar contrast at f/8.0, although the prime lens is a little better here. The advantage of the prime lens is even more pronounced towards the outer regions of the image, for both high and low frequency details (30 and 10 LP/mm). The zoom lens barely loses any contrast when used wide open compared to at f/8.0, whereas the prime lens loses quite a bit when going from f/8.0 to f/1.4 -- but this is probably because f/1.4-f/8.0 is a much bigger change than f/2.8-f/8.0, and it is the main reason why the black lines appear so much worse for the prime lens. Given that the prime lens has such a handicap, it does quite admirably -- especially at 10 LP/mm in the center, and at 30 LP/mm toward the edges of the image. It's therefore highly likely that the prime lens will outperform the zoom lens when they're both at f/2.8, but we cannot say for sure based only on the above charts.
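The eight line types follow mechanically from the three two-way styles. As a quick sanity check, this hypothetical decoder (the dictionaries are mine, mirroring the legend described above) enumerates every combination:

```python
from itertools import product

THICKNESS = {'bold': '10 LP/mm (small-scale contrast)',
             'thin': '30 LP/mm (fine detail / resolution)'}
COLOR = {'blue': 'aperture f/8.0', 'black': 'aperture wide open'}
STYLE = {'solid': 'sagittal (radial) line pairs',
         'dashed': 'meridional (concentric) line pairs'}

def describe(thickness, color, style):
    """Translate one curve's visual style into the measurement conditions."""
    return f"{THICKNESS[thickness]}, {COLOR[color]}, {STYLE[style]}"

combos = list(product(THICKNESS, COLOR, STYLE))
print(len(combos))  # -> 8 distinct line types
print(describe('bold', 'blue', 'dashed'))
# -> 10 LP/mm (small-scale contrast), aperture f/8.0, meridional (concentric) line pairs
```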

ASTIGMATISM: SAGITTAL vs. MERIDIONAL LINES

Dashed vs. Solid Lines. At this point you're probably wondering: why show the MTF for both sagittal ("S") and meridional ("M") line pairs? Wouldn't these be the same? Yes, at the image's direct center they're always identical. However, things become more interesting progressively further from the center. Whenever the dashed and solid lines begin to diverge, the amount of blur is not equal in all directions. This quality-reducing artifact is called an "astigmatism," as illustrated below:

[Figure: Original Image; Astigmatism: MTF in S > M; Astigmatism: MTF in M > S; No Astigmatism: MTF in M = S. S = sagittal lines, M = meridional lines.]

Note: Technically, the S above will have a slightly better MTF because it is closer to the center of the image; for the purposes of this example we're assuming that M and S are at similar positions.

When the MTF in S is greater than in M, which is a common occurrence, objects are blurred primarily along lines radiating out from the center of the image. Conversely, objects are blurred in the opposite (circular) direction when the MTF in M is greater than in S, which is much less common. Many of you reading this tutorial right now might even be using eye glasses that correct for an astigmatism.

What does astigmatism mean for your photos? Probably the biggest implication, other than the unique appearance, is that standard sharpening tools may not work as intended. These tools assume that blur is equal in all directions, so you might end up over-sharpening some edges while leaving other edges still looking blurry. Astigmatism can also be problematic with photos containing stars or other point light sources, since it makes the asymmetric blur more apparent; the white dots appear to streak outward from the center of the image, almost as if they had motion blur.

In the MTF charts for the Canon zoom versus prime lens from before, both lenses begin to exhibit pronounced astigmatism at the very edges of the image. However, with the prime lens something interesting happens: the type of astigmatism reverses when comparing the lens at f/1.4 versus at f/8.0. At f/1.4 the prime lens primarily blurs in a circular direction, whereas at f/8.0 it primarily blurs in the radial direction.

Technical Note: With wide angle lenses, as the angle of view becomes wider, subjects near the periphery become progressively more stretched/distorted in directions leading away from the center of the image, partly because these lenses try to preserve a rectilinear image projection. A wide angle lens with significant barrel distortion can therefore achieve a better MTF, since objects at the periphery are stretched much less than they would be otherwise. However, this is usually an unacceptable trade-off with architectural photography.

MTF & APERTURE: FINDING THE "SWEET SPOT" OF A LENS

The MTF of a lens generally increases for successively narrower apertures, reaches a maximum for intermediate apertures, and finally declines again for very narrow apertures. The figure below shows the MTF-50 for various apertures on a high-quality lens:

[Graph: MTF-50 versus aperture for a high-quality lens]

The aperture corresponding to the maximum MTF is the so-called "sweet spot" of a lens, since images will generally have the best sharpness and contrast at this setting. On a full frame or cropped sensor camera, this sweet spot is usually somewhere between f/8.0 and f/16. The location of this sweet spot is also independent of the number of megapixels in your camera.

Technical Notes:
• At large apertures, resolution and contrast are generally limited by light aberrations. An aberration is when imperfect lens design causes a point light source in the image not to converge onto a point on your camera's sensor. Large apertures are where high quality lenses really stand out, because the materials and engineering of the lens are much more important.
• At small apertures, resolution and contrast are generally limited by diffraction. Unlike aberrations, diffraction is a fundamental physical limit caused by the scattering of light, and is not necessarily any fault of the lens design. High and low quality lenses are therefore very similar when used at small apertures (such as f/16-32 on a full frame or cropped sensor). In fact, a perfect lens would not even have a "sweet spot"; the optimal aperture would just be wide open.
• One should not conclude that the optimal aperture setting is completely independent of what is being photographed. The sweet spot at the center of the image may not correspond with where the edges and corners of the image look their best. Further, this all assumes that your subject is in perfect focus; objects outside the depth of field will likely still improve in sharpness at f-stops narrower than the so-called sweet spot, and depending on the lens, this often requires going to an even narrower aperture.
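The aberration-versus-diffraction trade-off behind a lens's sweet spot can be seen in a deliberately simplified toy model (entirely my own, not a real lens formula): assume aberration-limited resolution improves roughly in proportion to the f-number, the diffraction limit falls as 1/(wavelength x f-number), and the lens delivers whichever is lower:

```python
def mtf50_toy(f_number, aberration_gain=12.0, wavelength_mm=550e-6):
    """Toy MTF-50 model: aberration-limited wide open, diffraction-limited
    when stopped down. Both constants are illustrative assumptions."""
    aberration_limited = aberration_gain * f_number        # better when stopped down
    diffraction_limited = 1.0 / (wavelength_mm * f_number) # worse when stopped down
    return min(aberration_limited, diffraction_limited)

apertures = [2.8, 4, 5.6, 8, 11, 16, 22]
sweet_spot = max(apertures, key=mtf50_toy)
print(sweet_spot)  # -> 11 (the peak of this toy curve, between f/8 and f/16)
```

Changing `aberration_gain` shifts where the two limits cross, which is why better-corrected lenses have their sweet spot at wider apertures.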

COMPARING DIFFERENT CAMERAS & LENS BRANDS

A big problem with the MTF concept is that it's not standardized. For example, MTF charts by Canon and Nikon cannot be directly compared, because Canon uses theoretical calculations while Nikon uses measurements. Further, even if one performed their own MTF tests, they'd still run into problems. A typical self-run MTF chart actually depicts the net total MTF of your camera's optical system -- and not the MTF of the lens alone. This net MTF represents the combined result from the lens, camera sensor and RAW conversion, in addition to any sharpening or other post-processing. MTF measurements will therefore vary depending on which camera is used for the measurement, or the type of software used in the RAW conversion. It's therefore only practical to compare MTF charts that were measured using identical methodologies.

Cropped vs. Full Frame Sensors. One needs to be extra careful when comparing MTF charts amongst cameras with different sensor sizes. An MTF curve at 30 LP/mm on a full frame camera is not equivalent to a 30 LP/mm MTF curve on a 1.6X cropped sensor, because the cropped sensor gets enlarged more when being made into the same size print. The cropped sensor would instead need to show a curve at 48 LP/mm for a fair comparison. One would suspect that part of the reason manufacturers keep showing MTF charts at 10 and 30 LP/mm for DX, EF-S and other cropped sensor lenses is that this makes their MTF charts look better. See the tutorial on digital camera sensor sizes for more on how these affect image quality.

The diversity of sensor sizes is why some have started listing the line frequency in terms of the picture or image height (LP/PH or LP/IH), as opposed to using an absolute unit like a millimeter. A line frequency of 1000 LP/PH, for example, has the same appearance at a given print size -- regardless of the size of the camera's sensor.

MTF CHART LIMITATIONS

While MTF charts are an extremely powerful tool for describing the quality of a lens, they still have many limitations. In fact, an MTF chart says nothing about:
• Color quality and chromatic aberrations
• Image distortion
• Vignetting (light fall-off toward the edges of an image)
• Susceptibility to camera lens flare

Furthermore, comparing different MTF charts can be quite difficult, and in some cases even impossible. It can often be hard to discern whether an image will look better with one lens than another based on an MTF chart, because there are usually many competing factors: contrast, resolution, astigmatism, distortion, etc. A lens is rarely superior in all of these aspects at the same time.

Most importantly, other factors such as the condition of your equipment and your camera technique can often have much more of an impact on the quality of your photos than small differences in the MTF. Some of these quality-reducing factors might include:
• Focusing accuracy
• Camera shake
• Dust on your camera's digital sensor
• Micro abrasions, moisture, fingerprints or other coatings on your lens

Finally, even though MTF charts are amazingly sophisticated and descriptive tools -- with lots of good science to back them up -- ultimately nothing beats simply visually inspecting an image on-screen or in a print. After all, pictures are made to look at, so that's all that really matters at the end of the day. If you cannot tell the difference between shots with different lenses used in similar situations, then any MTF discrepancies probably don't matter.
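The sensor-size bookkeeping above is just multiplication by the crop factor, and LP/PH is LP/mm scaled by the sensor height. A small sketch (the function names are mine):

```python
def fair_comparison_lpmm(full_frame_lpmm, crop_factor):
    """Frequency a cropped sensor's MTF curve must be shown at to compare
    fairly with a full-frame curve (the crop is enlarged more in print)."""
    return full_frame_lpmm * crop_factor

def lpmm_to_lp_per_picture_height(lpmm, sensor_height_mm):
    """Convert LP/mm into line pairs per picture height (LP/PH),
    a unit that is independent of sensor size at a given print size."""
    return lpmm * sensor_height_mm

print(fair_comparison_lpmm(30, 1.6))          # -> 48.0
print(lpmm_to_lp_per_picture_height(30, 24))  # -> 720 LP/PH on a 24 mm-tall full frame
```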

HDR: HIGH DYNAMIC RANGE PHOTOGRAPHY

High dynamic range (HDR) images enable photographers to record a greater range of tonal detail than a given camera could capture in a single photo. This opens up a whole new set of lighting possibilities which one might have previously avoided, for purely technical reasons. The new "merge to HDR" feature of Photoshop allows the photographer to combine a series of bracketed exposures into a single image which encompasses the tonal detail of the entire series. There is no free lunch, however; trying to broaden the tonal range will inevitably come at the expense of decreased contrast in some tones. Learning to use the merge to HDR feature in Photoshop can help you make the most of your dynamic range under tricky lighting, while still balancing this trade-off with contrast.

MOTIVATION: THE DYNAMIC RANGE DILEMMA

As digital sensors attain progressively higher resolutions, and thereby successively smaller pixel sizes, the one quality of an image which does not benefit is its dynamic range. This is particularly apparent in compact cameras with resolutions near 8 megapixels, as these are more susceptible than ever to blown highlights or noisy shadow detail. Further, some scenes simply contain a greater brightness range than can be captured by current digital cameras -- of any type.

The "bright side" is that nearly any camera can actually capture a vast dynamic range -- just not in a single photo. By varying the shutter speed alone, most digital cameras can change how much light they let in by a factor of 50,000 or more. High dynamic range imaging attempts to utilize this characteristic by creating images composed of multiple exposures, which can far surpass the dynamic range of a single exposure.

WHEN TO USE HDR IMAGES

I would suggest only using HDR images when the scene's brightness distribution can no longer be easily blended using a graduated neutral density (GND) filter. This is because GND filters extend dynamic range while still maintaining local contrast. Scenes which are ideally suited for GND filters are those with simple lighting geometries, such as the linear blend from dark to light encountered commonly in landscape photography (corresponding to the relatively dark land transitioning into bright sky).
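The "factor of 50,000" figure is easy to sanity-check in photographic stops, since each stop is a doubling of light. A quick sketch:

```python
import math

def light_ratio_to_stops(ratio):
    """Convert a light ratio into photographic stops (1 stop = 2x the light)."""
    return math.log2(ratio)

# A shutter range of 30 s down to 1/8000 s spans 30 * 8000 = 240,000x:
print(round(light_ratio_to_stops(30 * 8000), 1))  # -> 17.9 stops
# The article's more conservative "factor of 50,000 or more":
print(round(light_ratio_to_stops(50_000), 1))     # -> 15.6 stops
```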

[Figure: GND Filter -> Final Result]

In contrast, a scene whose brightness distribution is no longer easily blended using a GND filter is the doorway scene shown below.

[Figure: Brightness Distribution; Underexposure; Overexposure]

We note that the above scene contains roughly three tonal regions with abrupt transitions at their edges -- therefore requiring a custom-made GND filter. If we were to look at this scene in person, we would be able to discern detail both inside and outside the doorway, because our eyes would adjust to changing brightness. The goal of HDR use in this article is to better approximate what we would see with our own eyes, through the use of a technique called tonal mapping.

INNER WORKINGS OF AN HDR FILE

Photoshop creates an HDR file by using the EXIF information from each of your bracketed images to determine their shutter speed, aperture and ISO settings. It then uses this information to assess how much light came from each image region. Since this light may vary greatly in its intensity, Photoshop creates the HDR file using 32 bits to describe each color channel (as opposed to the usual 16 or 8 bits, as discussed in the tutorial on "Understanding Bit Depth"). The important distinction is that these extra bits are used differently than the extra bits in 16-bit images, which instead just define tones more precisely (see tutorials on the "RAW File Format" and "Posterization"). We refer to the usual 8 and 16-bit files as being low dynamic range (LDR) images.

The 32-bit HDR file format describes a greater dynamic range by using its bits to specify floating point numbers, also referred to as exponential notation. A floating point number is composed of a decimal number between 1 and 10 multiplied by some power of 10, such as 5.467x10^3, as opposed to the usual 0-255 (for 8-bit) or 0-65535 (for 16-bit) integer color specifications. This way, an image file can specify a brightness of 4,300,000,000 simply as 4.3x10^9, a value which would be too large even with 32-bit integers.

We see that the floating point notation certainly looks neater and more concise, but how does this help a computer? Why not just keep adding more bits to specify successively larger numbers, and therefore a larger dynamic range? Recall that for ordinary LDR files, far more bits are used to distinguish lighter tones than darker tones (from the tutorial on gamma correction, tonal levels and exposure -- to be added). As a result, as more bits are added, an exponentially greater fraction of these bits are used to specify color more precisely, instead of extending dynamic range. The real benefit is that HDR files use these extra bits to create a relatively open-ended brightness scale, which can adjust to fit the needs of your image.
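The equal-relative-precision property of floating point encoding is easy to demonstrate. The sketch below (my own illustration) inspects the spacing between adjacent 32-bit floats -- effectively the brightness quantization step of a 32-bit HDR channel -- at two very different brightnesses:

```python
import struct

def float32_spacing(value):
    """Gap between `value` (rounded to a 32-bit float) and the next
    representable 32-bit float above it."""
    v32 = struct.unpack('f', struct.pack('f', value))[0]
    bits = struct.unpack('I', struct.pack('f', v32))[0]
    next_up = struct.unpack('f', struct.pack('I', bits + 1))[0]
    return next_up - v32

# Absolute spacing grows with magnitude...
print(float32_spacing(1.0))    # -> ~1.19e-07
print(float32_spacing(4.3e9))  # -> 512.0
# ...but relative precision stays constant (~1 part in 8 million),
# so deep shadows and bright highlights are encoded equally finely:
print(float32_spacing(1.0) / 1.0)      # -> ~1.19e-07
print(float32_spacing(4.3e9) / 4.3e9)  # -> ~1.19e-07
```

Integer formats behave the opposite way: their step size is fixed, so relative precision is excellent in the highlights and poor in the shadows.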

[Figure: Representation of How Bits Are Allocated for Increasing Brightness. Note: the above representation is qualitative, and depends on other factors such as screen bit depth, monitor gamma, etc.]

The more closely spaced bits for brighter values result from the fact that ordinary 8 and 16-bit JPEG files are gamma-encoded, which can actually help increase dynamic range for low-bit files by allowing for greater bit efficiency. However, gamma-encoding just becomes more and more inefficient as the bit depth increases. HDR files get around this LDR dilemma of diminishing returns by using floating point numbers which are proportional to the actual brightness values of the subject matter (gamma equals one, or linear). This ensures that bits are equally spaced throughout the dynamic range, and not just concentrated in the brighter tones -- creating a more even tonal distribution. Further, the use of floating point numbers ensures that all tones are recorded with the same relative precision, since numbers such as 2.576x10^3 and 8.924x10^9 each have the same number of significant figures (four), even though the second number is more than a million times larger.

All of these extra bits provided by the HDR format are great, and effectively allow for a nearly infinite brightness range to be described. The problem is that your computer display (or the final photographic print) can only show a fixed brightness scale. This tutorial therefore focuses on how to create and convert HDR files into an ordinary 8 or 16-bit image which can be displayed on a monitor, or will look great as a photographic print. This process is commonly referred to as tonal mapping.

IN-FIELD PREPARATION

Since creating an HDR image requires capturing a series of identically-positioned exposures, a sturdy tripod is essential. Photoshop has a feature which attempts to align the images when the camera may have moved between shots, however best results are achieved when this is not relied upon.

Make sure to take at least three exposures, although five or more is recommended for optimum accuracy. More exposures allow the HDR algorithm to better approximate how your camera translates light into digital values (a.k.a. the digital sensor's response curve).

[Figure: Reference; -1 Stop; -2 Stops; -3 Stops]

It is essential that the darkest of these exposures includes no blown highlights in areas where you want to capture detail. The brightest exposure should show the darkest regions of the image with enough brightness that they are relatively noise-free and clearly visible. Each exposure should be separated by one to two stops, and these are ideally set by varying the shutter speed (as opposed to aperture or ISO speed). Recall that each "stop" refers to a doubling (+1 stop) or halving (-1 stop) of the light captured from an exposure.

We also note another disadvantage of HDR images: they require relatively static subject matter, due to the necessity of several separate exposures. Our previous ocean sunset example would therefore not be well-suited for the HDR technique, as the waves would have moved significantly between each exposure. The doorway example is best-suited with several intermediate exposures, in addition to the two shown previously.

Note: Just as using high bit depth images does not necessarily mean your image contains more color, a high dynamic range file does not guarantee greater dynamic range unless this is also present in the actual subject matter.
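The bracketing arithmetic above can be sketched as follows (a hypothetical helper, not a Photoshop or camera feature); each +1 stop doubles the shutter time:

```python
def hdr_bracket(base_shutter_s, stop_spacing=2, n_exposures=4):
    """Shutter speeds for an HDR bracket, darkest exposure first.
    Each step multiplies the exposure time by 2**stop_spacing."""
    return [base_shutter_s * 2 ** (stop_spacing * i) for i in range(n_exposures)]

# Four exposures spaced 2 stops apart, starting from 1/250 s:
for t in hdr_bracket(1 / 250):
    print(f"{t:.3f} s")   # 0.004 s, 0.016 s, 0.064 s, 0.256 s
```

Varying only the shutter speed like this keeps depth of field and noise characteristics consistent across the bracket, which is why aperture and ISO are left fixed.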

CREATING A 32-BIT HDR FILE IN PHOTOSHOP

Here we use Adobe Photoshop to convert the sequence of exposures into a single 32-bit HDR image. Open the HDR tool (File>Automate>Merge to HDR...), and load all photographs in the exposure sequence; for this example it would be the four images shown in the previous section. If your images were not taken on a stable tripod, this step may require checking "Attempt to Automatically Align Source Images" (which greatly increases processing time). After pressing OK, you will soon see a "Computing Camera Response Curves" message.

Once your computer has stopped processing, it will show a window with the combined histogram. Photoshop has estimated the white point, but this value often clips the highlights. You may wish to move the white point slider to the rightmost edge of the histogram peaks in order to see all highlight detail. This value is for preview purposes only, and will require setting more precisely later. After pressing OK, this leaves you with a 32-bit HDR image, which can now be saved if required. Note how the image may still appear quite dark; only once it has been converted into a 16 or 8-bit image (using tonal mapping) will it begin to look more like the desired result.

At this stage, very few image processing functions can be applied to a 32-bit HDR file, so it is of little use other than for archival purposes. One function which is available is exposure adjustment (Image>Adjustments>Exposure). You may wish to try increasing the exposure to see any hidden shadow detail, or decreasing the exposure to see any hidden highlight detail.

USING HDR TONAL MAPPING IN PHOTOSHOP

Here we use Adobe Photoshop to convert the 32-bit HDR image into a 16 or 8-bit LDR file using tonal mapping. This requires interpretive decisions about the type of tonal mapping, depending on the subject matter and brightness distribution within the photograph.

Convert into a regular 16-bit image (Image>Mode>16 Bits/Channel...) and you will see the HDR Conversion tool. The tonal mapping method can be chosen from one of four options, described below.

Exposure and Gamma. This method lets you manually adjust the exposure and gamma, which serve as the equivalent of brightness and contrast adjustment, respectively.

Highlight Compression. This method has no options and applies a custom tonal curve, which greatly reduces highlight contrast in order to brighten and restore contrast in the rest of the image.

Equalize Histogram. This method attempts to redistribute the HDR histogram into the contrast range of a normal 16 or 8-bit image, using a custom tonal curve which spreads out histogram peaks so that the histogram becomes more homogeneous. It generally works best for image histograms which have several relatively narrow peaks with no pixels in between.

Local Adaptation. This is the most flexible method, and probably the one which is of most use to photographers. Unlike the other three methods, it changes how much it brightens or darkens regions on a per-pixel basis (similar to local contrast enhancement). This has the effect of tricking the eye into thinking that the image has more contrast, which is often critical in contrast-deprived HDR images. This method also allows changing the tonal curve to better suit the image.

Before using any of the above methods, one may first wish to set the black and white points on the image histogram sliders (see "Using Levels in Photoshop" for a background on this concept). Click on the double arrow next to "Toning Curve and Histogram" to show the image histogram and sliders.

The remainder of this tutorial focuses on settings related to the "local adaptation" method, as this is likely the most-used, and provides the greatest degree of flexibility.

CONCEPT: TONAL HIERARCHY & IMAGE CONTRAST

In contrast to the other three conversion methods, the local adaptation method does not necessarily retain the overall hierarchy of tones. It translates pixel intensities not just with a single tonal curve, but also based on the surrounding pixel values. This means that, unlike using a tonal curve, tones on the histogram are not just stretched and compressed; they may instead cross positions. Visually, this would mean that some part of the subject matter which was initially darker than some other part could later acquire the same brightness, or become lighter than that other part -- if even by a small amount.

A clear example where global tonal hierarchy is not violated is the example used on the page about using a GND filter to extend dynamic range (although this is not how local adaptation works). In that example, the sunlit building and sky were the brightest objects, and they stayed that way in the final image. In the composite below, however, the large-scale hierarchy is violated: the final image renders the distant ocean as being darker, even though the foreground sea foam and rock reflections are actually darker than the distant ocean surface.

[Figure: Underexposed Photo; Overexposed Photo; Final Composite that Violates Large-Scale Tonal Hierarchy]

The key concept here is that over larger image regions our eyes adjust to changing brightness (such as when looking up at a bright sky), while over smaller distances they do not. Mimicking this characteristic of vision can be thought of as a goal of the local adaptation method -- particularly for brightness distributions which are more complex than the simple vertical blend in the ocean sunset above.

We refer to contrast over larger image distances as global contrast, whereas contrast changes over smaller image distances are termed local contrast. The local adaptation method attempts to maintain local contrast while decreasing global contrast (similar to what was performed with the ocean sunset example).

An example of a more complex brightness distribution is shown below for three statue images:

[Figure: Original Image; Low Global Contrast / High Local Contrast; High Global Contrast / Low Local Contrast]

The above example illustrates visually how local and global contrast impact an image. Note how the large-scale (global) patches of light and dark are exaggerated in the case of high global contrast. Conversely, in the case of low global contrast, the front of the statue's face is virtually the same brightness as its side, yet it is still shown with sufficient contrast to give it a three-dimensional appearance.

The original image looks fine, since all tonal regions are clearly visible. Now imagine that we started with the middle image, which would be an ideal candidate for HDR conversion. Tonal mapping using local adaptation would likely produce an image similar to the far right image (although perhaps not as exaggerated), since it retains local contrast while still decreasing global contrast (thereby retaining texture in the darkest and lightest regions).
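The global-versus-local split can be sketched numerically. The toy operator below (entirely my own illustration; it is not Photoshop's actual algorithm) blurs a 1-D luminance scanline to isolate a "global" layer, compresses only that layer, and adds the untouched "local detail" residual back:

```python
import numpy as np

def tone_map_local(luminance, radius=8, compression=0.5):
    """Toy local-adaptation operator: compress global contrast while
    preserving local detail. `radius` sets the local/global boundary."""
    # Simple box blur acts as the local average (the "radius" setting).
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(luminance, radius, mode='edge')
    global_layer = np.convolve(padded, kernel, mode='valid')
    local_detail = luminance - global_layer
    # Compress the global layer around its mean; keep local detail untouched.
    mean = global_layer.mean()
    return mean + compression * (global_layer - mean) + local_detail

# A scanline with a hard bright/dark transition plus fine texture:
x = np.linspace(0, 1, 64)
scanline = np.where(x < 0.5, 0.2, 0.9) + 0.05 * np.sin(40 * x)
mapped = tone_map_local(scanline)
print(scanline.max() - scanline.min())  # overall range ~0.8
print(mapped.max() - mapped.min())      # smaller overall range after mapping
```

The hard bright/dark step shrinks while the fine sinusoidal texture keeps its amplitude -- exactly the "retain local contrast, decrease global contrast" behavior described above.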

whereas too low of a radius can make the image appear washed out. Additionally. This technique is identical to that described in the Photoshop curves tutorial. the sunlit building and sky are the brightest objects.particularly in the field of digital photography. and they stayed that way in our final image. Want to learn more? Discuss this article in our HDR photo techniques forum. Existing tools are therefore likely to improve significantly. touch-up with local contrast enhancement may also yield a more pleasing result. As with all new tools. Subtle use of levels and saturation can drastically improve problem areas in the image. As a result. there is not currently. Good HDR conversions therefore require significant work and experimentation in order to achieve realistic and pleasing final images. Further. HDR should only be used when necessary. no photo within my gallery used the HDR technique. it may unnecessarily darken naturally white textures and brighten darker ones. Photoshop CS2 Tool Final Result Using Local Adaptation Method HDR images which have been converted into 8 or 16-bit often require touching up in order to improve their color accuracy. For any given image. This curve is shown for our doorway example below. but in most other instances this should be avoided. If used properly. Only when necessary. images almost always require adjustments to the tonal curve. be careful not to overdo their use. In general. I can certainly see when the photo would be unattainable without HDR. an automated single-step process which converts all HDR images into those which look pleasing on screen. Note: To clarify email queries. . Use care when violating the image�s original tonal hierarchy. Photoshop always uses the most exposed image to represent a given tone—thereby collecting more light in the shadow detail (but without overexposing). these do not induce halo artifacts while still maintaining local contrast. yielding the final result. or in a print. 
RECOMMENDATIONS Keep in mind that HDR images are extremely new-. While re-investigating the conversion settings is recommended as the first corrective step. since their ideal combination varies depending on image content. Furthermore. however. whereas the opposite occurs for a decrease in contrast. Ever noticed how digital images always have more noise in the shadows than in brighter tones? This is because the image's signal to noise ratio is higher where the image has collected more of a light signal. Overdoing editing during HDR conversion easily can cause the image to lose its sense of realism. incorrectly converted or problematic HDR images may appear washed out after conversion.HDR CONVERSION USING LOCAL ADAPTATION The distance which distinguishes between local and global contrast is set using the radius value. In addition to the radius and threshold values. it is recommended to adjust each of these to see their effect. In our doorway example. In some situations. A high threshold improves local contrast. TIP: USING HDR TO REDUCE SHADOW NOISE Even if your scene does not require more dynamic range. where small and gradual changes in the curve's slope are nearly always ideal. Radius and threshold are similar to the settings for an unsharp mask used for local contrast enhancement. The main problem with the local adaptation method is that it cannot distinguish between incident and reflected light. I prefer to use linear and radial graduated neutral density filters to control drastically varying light. these have been a standard by landscape photographers for nearly a century. Be aware of this when choosing the radius and threshold settings so that this effect can be minimized. You can take advantage of this by combining a properly exposed image with one which has been overexposed. and may never be. but also risks inducing halo artifacts. Changes in saturation may sometimes be desirable when brightening shadows. 
regions which have increased in contrast (a large slope in the tonal curve) will exhibit an increase in color saturation. best results can always be achieved by having good lighting to begin with. do not expect deep shadows to become nearly as light as a bright sky. your final photo may still improve from a side benefit: decreased shadow noise.
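The radius/threshold behavior described above can be sketched in code. The following is a minimal illustration of unsharp-mask-style local contrast enhancement, not Photoshop's actual algorithm: the "radius" is the blur scale that separates local from global contrast, and the "threshold" suppresses changes in near-uniform regions. The function name and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_enhance(image, radius=25.0, threshold=0.05, amount=0.5):
    """Sketch of local contrast enhancement (values assumed in 0..1).

    Differences from a large-radius blur (the local detail) are amplified,
    but only where they exceed the threshold, to avoid amplifying noise in
    flat regions. Too large a radius risks halos; too small washes out.
    """
    blurred = gaussian_filter(image, sigma=radius)  # radius: local vs. global scale
    detail = image - blurred                        # local contrast component
    mask = np.abs(detail) > threshold               # ignore near-uniform regions
    return np.clip(image + amount * detail * mask, 0.0, 1.0)
```

Running this on a soft edge darkens the dark side and brightens the bright side near the edge (increasing local contrast) while leaving uniform regions untouched.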

28. TILT SHIFT LENSES: PERSPECTIVE CONTROL

Tilt shift lenses enable photographers to transcend the normal restrictions of depth of field and perspective. Many of the optical tricks these lenses permit could not otherwise be reproduced digitally, making them a must for certain landscape, architectural and product photography. The first part of this tutorial addresses the shift feature, and focuses on its use in digital SLR cameras for perspective control and panoramas. The second part focuses on using tilt shift lenses to control depth of field.

OVERVIEW: TILT SHIFT MOVEMENTS

Shift movements enable the photographer to shift the location of the lens's imaging circle relative to the digital camera sensor. This means that the lens's center of perspective no longer corresponds to the image's center of perspective, and produces an effect similar to only using a crop from the side of a correspondingly wider angle lens.

Tilt movements enable the photographer to tilt the plane of sharpest focus so that it no longer lies perpendicular to the lens axis. This produces a wedge-shaped depth of field whose width increases further from the camera. The tilt effect therefore does not necessarily increase depth of field; it just allows the photographer to customize its location to better suit their subject matter.

CONCEPT: LENS IMAGING CIRCLE

The image captured at your camera's digital sensor is in fact just a central rectangular crop of the circular image being captured by your lens (the "imaging circle"). With most lenses this circle is designed to extend just beyond what is needed by the sensor. Shift lenses, by contrast, actually project a much larger imaging circle than is ordinarily required, thereby allowing the photographer to "shift" this imaging circle to selectively capture a given rectangular portion.

[Interactive figure: Apply Shift (Left/Right), comparing an ordinary camera lens with a lens capable of shift movements; comparison shown for 11 mm shift movements on a 35 mm SLR camera.] Note that actual image circles would be larger relative to the sensor for cameras with a crop factor (see the tutorial on digital camera sensor sizes for more on this topic).
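The geometry of the imaging circle can be made concrete with a little arithmetic. The sketch below computes the minimum imaging circle diameter required to cover a shifted sensor, and the ordinary-lens focal length with a comparable angle of view; the function names are hypothetical, and the calculation assumes a thin lens and a standard 36 x 24 mm full frame sensor.

```python
import math

def circle_diameter(width, height, shift=0.0):
    """Minimum imaging circle diameter (mm) covering a sensor of the given
    dimensions when shifted `shift` mm along its width."""
    return 2 * math.hypot(width / 2 + shift, height / 2)

def equivalent_focal(f, width, height, shift):
    """Ordinary-lens focal length whose standard imaging circle spans the same
    diagonal angle as this lens's larger (shiftable) circle; a thin-lens sketch."""
    return f * circle_diameter(width, height) / circle_diameter(width, height, shift)

# Full frame sensor: the usual circle is ~43.3 mm across, but covering an
# 11 mm shift requires a ~62.8 mm circle.  A 24 mm tilt shift lens therefore
# projects a circle comparable to an ordinary ~16 mm lens: 24 * 43.3 / 62.8.
d_unshifted = circle_diameter(36, 24)      # ~43.3 mm
d_shifted = circle_diameter(36, 24, 11)    # ~62.8 mm
```

This is consistent with the statement later in the article that a 24 mm tilt shift lens is optically similar to an ordinary 16 mm lens.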

Shift movements have two primary uses: they enable photographers to change perspective or to expand the angle of view (using multiple images). Techniques for each are discussed in subsequent sections.

The shift ability comes with an additional advantage: even when unshifted, these lenses will typically have better image quality at the edges of the frame, similar to using full frame 35 mm lenses on cameras with a crop factor. This means less softness and vignetting. On the other hand, a lens capable of shift movements will need to be much larger and heavier than a comparable regular lens, assuming the same focal length and maximum aperture. Further, due to its similarly sized imaging circle, a 24 mm tilt shift lens is likely to be optically similar to an ordinary 16 mm lens, and since wider angle lenses generally have poorer optical quality, this 24 mm tilt shift lens is therefore likely to be surpassed in optical quality by an ordinary 24 mm lens. Extreme shift movements will also expose regions of the imaging circle with lower image quality, but this may not be any worse than what is always visible with an ordinary camera lens.

SHIFT MOVEMENTS FOR PERSPECTIVE CONTROL

Shift movements are typically used for perspective control, to straighten converging vertical lines in architectural photography. Converging verticals arise whenever the camera lens (i.e. the center of the imaging circle) is aimed away from the horizon. When the camera is aimed directly at the horizon (the vanishing point below), vertical lines which are parallel in person remain parallel in print. The trick with a shifted lens is that it can capture an image which lies primarily above or below the horizon, even though the center of the imaging circle still lies on or near the horizon:

[Example: Ordinary Lens versus Lens Shifted for Perspective Control.]

The shifted lens gives the architecture much more of a sense of prominence and makes it appear more towering, as it does in person. This can be a very useful effect for situations where one cannot get sufficiently far from a building to give it this perspective (such as would be the case when photographing buildings from the side of a narrow street). Note that in the above example the vanishing point of perspective was not placed directly on the horizon, and therefore vertical lines are not perfectly parallel (although much more so than with the ordinary lens). Often a slight bit of convergence is desirable, since perfectly parallel vertical lines can sometimes look overdone and unrealistic.

Technical Note: it is often asked whether digital perspective control achieves similar quality results as a shifted lens. A similar perspective effect could be achieved using an ordinary lens and digital techniques. One way would be to use a wider angle lens and then only make a print of a cropped portion of this, although this would sacrifice a substantial portion of the camera's megapixels. A second way would be to stretch the image from the ordinary lens above using Photoshop's perspective control (so that it is shaped like an upside down trapezoid). The second method would retain more resolution, but would yield an image whose horizontal resolution progressively decreases toward the top. Although the above digital techniques clearly sacrifice resolution, the question is whether this is necessarily any worse than the softening caused by using the edge of the imaging circle for an optically poor tilt shift lens. In my experience, the shifted lens is visibly better when using Canon's 45 mm and 90 mm tilt shift lenses. Canon's 24 mm tilt shift lens is a closer call, but if chromatic aberrations are properly removed I still find that the shifted lens is a little better. Either way, the shifted lens generally yields the best quality.

SHIFT MOVEMENTS FOR SEAMLESS PANORAMAS

One can create digital panoramas by using a sequence of shifted photographs. This technique has the advantage of not moving the optical center of the camera lens, which means that one can avoid having to use a panoramic head to prevent parallax error with foreground subject matter. Another potential benefit is that the final composite photo will retain the rectilinear perspective of the original lens. The above example would be more useful for creating a panorama, since the medium telephoto camera lens created a flat perspective.

The Canon and Nikon lenses can shift up to 11 mm and 11.5 mm, respectively; this describes how far the lens can physically move relative to the camera sensor (in each direction). Several common shift scenarios have been included below to give a better feel for what 11 mm of shift actually means for photos. Since each lens can rotate on its axis, this shift could be applied in two directions:

[Diagram: Panorama Using Horizontal Shift Movements in Landscape Orientation. Full Frame 35 mm Sensor: Area Increase 60%, Aspect Ratio 2.42:1. Sensor with 1.6X Crop Factor: Area Increase 100%, Aspect Ratio 3:1.]

[Diagram: Wide Angle Using Horizontal Shift Movements in Portrait Orientation. Full Frame 35 mm Sensor: Area Increase 90%, Aspect Ratio 1.28:1. Sensor with 1.6X Crop Factor: Area Increase 150%, Aspect Ratio 1.66:1.]

Note: all diagrams are shown to scale for 11 mm of shift, with area increases rounded to the nearest 5%. Note how cropped sensors have more to gain from shifting than full frame sensors. For panoramas, one can achieve dramatically wide aspect ratios of 2:1 and 3:1 for full frame and cropped sensors, respectively, with substantially more resolution. Shift can also be used in other directions than just up-down or left-right. Many more combinations of camera orientation, shift direction and sensor size can be explored using the calculator in the next section. The example below illustrates all combinations of shift in 30° increments for a 35 mm full frame sensor in landscape orientation:
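The aspect ratios and area increases quoted above follow directly from the sensor dimensions. The sketch below reproduces them for a shift applied in both directions along the sensor width; the function name is hypothetical, and the crop-factor sensor is assumed to be 22.5 x 15 mm.

```python
def shifted_panorama(width, height, shift):
    """Aspect ratio and fractional area increase when combining two frames
    shifted `shift` mm in each direction along the sensor width."""
    new_width = width + 2 * shift
    aspect = new_width / height
    area_increase = 2 * shift * height / (width * height)
    return aspect, area_increase

# Full frame (36 x 24 mm), 11 mm shift, landscape: aspect ~2.42:1, ~60% more area
# 1.6X crop (22.5 x 15 mm), 11 mm shift, landscape: aspect ~3:1, ~100% more area
full_frame = shifted_panorama(36, 24, 11)
cropped = shifted_panorama(22.5, 15, 11)
```

Rounded to the nearest 5%, these match the 60%/100% figures in the diagrams above, and show why cropped sensors gain relatively more from the same physical shift.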

Move your mouse over the image to see frame outlines for each shift combination. The megapixels of the above image are increased by 3X compared to a single photo; with a 1.6X crop factor this would be 5X.

Alternatively, one could use photo stitching software on a series of shifted photographs to create a perspective control panorama. Such a panorama would require the lens to be shifted either up or down, and to remain in that position for each camera angle comprising the panorama. This way, the stitching process is more straightforward, since each photograph does not have to be corrected for perspective and lens distortion, and lens vignetting will not be uneven between images. Photoshop or another basic image editing program could therefore be used to layer the images and align them manually.

Make sure to use manual or fixed exposure, since vignetting can cause the camera to expose the shifted photos more than the unshifted photo, even if the photos are exposed using a small aperture. This occurs because the camera's through the lens (TTL) metering is based on measurements with the lens wide open (smallest f-number), not the aperture used for exposure.

TILT SHIFT LENS CALCULATOR

The tilt shift calculator below computes the angle of view encompassed by shift movements in up-down or left-right directions, along with other relevant values. The diagram within the calculator (on the right) adjusts dynamically to illustrate your values; this way, you should be able to better visualize how these settings will impact the final image. The calculator is more intended to give a better sense for the numerics of shift movements than to necessarily be used in the field.

[Calculator form: Tilt Shift Lens Calculator: Creating Panoramas. Inputs: Camera Sensor Size; Focal Length of T/S Lens (45 mm); Camera Orientation; Shift Amount (11 mm); Shift Direction. Outputs: Angle of View (Horizontal x Vertical); Focal Length if Single Photograph; Megapixel Increase; diagram of the unshifted sensor area and the area including camera shifts, at a scale of 2.7 pixels per mm.]
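The two main outputs of such a calculator are simple trigonometry. The sketch below computes the angle of view for one sensor dimension and the "focal length if single photograph"; the function names are hypothetical, and the published calculator may define its quantities slightly differently.

```python
import math

def angle_of_view(dim, f):
    """Angle of view (degrees) spanned by one sensor dimension `dim` (mm)
    for a lens of focal length `f` (mm), thin-lens approximation."""
    return 2 * math.degrees(math.atan(dim / (2 * f)))

def focal_if_single_photo(f, dim, shift):
    """Focal length at which a single unshifted photo would span the same
    angle as the sensor dimension extended by +/- `shift` mm."""
    return f * dim / (dim + 2 * shift)

# 45 mm lens, 36 mm sensor width, 11 mm shift in both directions:
# the shifted span acts like a single photo from a ~28 mm lens.
f_equiv = focal_if_single_photo(45, 36, 11)  # ~27.9 mm
```

This matches the note below the calculator that the imaging circle of a 45 mm tilt shift lens covers an angle of view comparable to an ordinary 28 mm wide angle lens.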

Notes: CF = crop factor of the digital sensor (see the tutorial on digital camera sensor sizes for more on this topic). The calculator is not intended for use in macro photography and assumes negligible distortion. The output for "focal length if single photograph" is intended to give a feel for what focal length would be required, using an unshifted photo, in order to encompass the entire shifted angle of view. From this we can see that the imaging circle of a 45 mm tilt shift lens actually covers an angle of view comparable to an ordinary 28 mm wide angle lens.

AVAILABLE NIKON & CANON TILT SHIFT LENSES

Canon has four and Nikon has three mainstream tilt shift lens models available:

Canon Tilt Shift Lenses: Canon 17 mm TS-E f/4L; Canon 24 mm TS-E f/3.5L II; Canon 45 mm TS-E f/2.8; Canon 90 mm TS-E f/2.8.
Nikon Tilt Shift Lenses: PC-E Nikkor 24 mm F3.5D ED; PC-E Nikkor 45 mm F2.8D ED; PC-E Nikkor 85 mm F2.8D ED.

Calculations and diagrams above have been designed to represent the range of tilt and shift movements relevant for these lenses on the 35 mm and cropped camera formats. Note that this part of the tutorial only discusses the shift feature of a tilt shift lens; for part 2 visit: Tilt Shift Lenses: Using Tilt to Control Depth of Field. Alternatively, for an overview of ordinary camera lenses, please visit: Understanding Camera Lenses: Focal Length & Aperture.

29. TILT SHIFT LENSES: DEPTH OF FIELD

Tilt shift lenses enable photographers to transcend the normal restrictions of depth of field and perspective. Many of the optical tricks these lenses permit could not otherwise be reproduced digitally, making them a must for certain landscape, architectural and product photography. The first part of this tutorial focused on using tilt shift lenses to control perspective and create panoramas. This part of the tutorial addresses the tilt feature, and focuses on its use in digital SLR cameras for controlling depth of field.

OVERVIEW: TILT SHIFT MOVEMENTS

Shift movements enable the photographer to shift the location of the lens's imaging circle relative to the digital camera sensor. This means that the lens's center of perspective no longer corresponds to the image's center of perspective, and produces an effect similar to only using a crop from the side of a correspondingly wider angle lens.

Tilt movements enable the photographer to tilt the plane of sharpest focus so that it no longer lies perpendicular to the lens axis. This produces a wedge-shaped depth of field whose width increases further from the camera. The tilt effect therefore does not necessarily increase depth of field; it just allows the photographer to customize its location to better suit their subject matter.

CONCEPT: SCHEIMPFLUG PRINCIPLE & HINGE RULE

The Scheimpflug principle states that the sensor plane, lens plane and plane of sharpest focus must all intersect along a line. In the diagram below, this intersection is actually a point, since the line is perpendicular to the screen. When the Scheimpflug principle is combined with the "Hinge" or "Pivot" rule, these collectively define the location of the plane of sharpest focus as follows:

[Interactive diagram: Apply Lens Tilt (0.0°, 0.5°, 1.0°, 2.0°, 3.0°, 4.0°, 5.0°), showing the sensor plane, lens plane and plane of sharpest focus.]
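The hinge rule has a compact numerical form: the plane of sharpest focus pivots about a line located a distance J = f / sin(tilt) from the lens, in a plane through the lens parallel to the sensor. The sketch below evaluates this for the Canon 45 mm TS-E lens mentioned in the diagram caption; the function name is hypothetical.

```python
import math

def hinge_distance_J(focal_mm, tilt_deg):
    """Hinge rule: distance J (mm) from the lens to the pivot line about
    which the plane of sharpest focus rotates as focus is changed."""
    return focal_mm / math.sin(math.radians(tilt_deg))

# Canon 45 mm TS-E: even small tilts bring the hinge line close to the lens.
#  1 degree -> J ~ 2.58 m
#  2 degrees -> J ~ 1.29 m
#  4 degrees -> J ~ 0.65 m
#  8 degrees -> J ~ 0.32 m
j_at_max_tilt = hinge_distance_J(45, 8)
```

This is why even a small lens tilt angle can produce a correspondingly large tilt in the plane of sharpest focus.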

[Diagram legend: sensor plane, lens plane, and plane of sharpest focus, with tilt adjustable from 0.0° to 8.0°. The purple line represents a plane parallel to the lens plane and separated from it by the lens focal length. Figure based on actual calculations using Canon's 45 mm TS-E lens; vertical scale compressed 2X.]

Try experimenting with different values of tilt to get a feel for how this influences the plane of sharpest focus. Notice that even a small lens tilt angle can produce a correspondingly large tilt in the plane of sharpest focus. The focusing distance can also change the plane of sharpest focus along with tilt, and will be discussed later in this tutorial. Also note that, for the sake of brevity, the rest of this tutorial will use "plane of sharpest focus" and "focus plane" synonymously.

TILT MOVEMENTS TO REPOSITION DEPTH OF FIELD

Depth of field for many scenes is often insufficient using standard equipment, even with small lens apertures. The problem is that one could use even smaller apertures to increase depth of field, but not without also increasing softness at the camera's focus plane due to diffraction. Tilt movements can sometimes avoid this technical limitation by making more efficient use of the depth of field, depending on the subject matter.

The example below demonstrates the effect of tilt movements on a scene whose subject matter traverses both up/down and front/back directions. Each image is taken using a wide aperture of f/2.8 with the Canon 45 mm TS-E lens on a full 35 mm frame sensor, with the camera lens aimed downward approximately 30° towards the rug.

[Example images: Zero Tilt; 3° Downward Tilt (rug DoF increased, lens DoF decreased); 8° Upward Tilt (apparent DoF decreased). Mouseover to view at f/16. DoF = depth of field. All images taken at f/2.8 to make the depth of field more noticeable at this small image size; the center image brightens at f/16 due to reduced vignetting.]

On the left we see the typical depth of field produced by an ordinary lens. In order to get both the front and rear rug edges sharp in the left image, we would have needed to use a very small aperture. The central image, however, is able to achieve this even with the same aperture; note, though, how the vertical depth of field has decreased and caused the top of the front lens to be blurred.

Tilt can also be used to reduce apparent depth of field, as demonstrated by the 8° upward tilt image. This is because the focus plane is at an angle in between the rug and lens. Note how both the rug and vertical depth of field appear to have decreased, and how the field of view has moved downward due to the tilt. This can be particularly useful for portraits when a wide aperture is insufficient. Another possibility would be to use a downward tilt, such that only the tops of the two lenses are in sharp focus (right image). This type of placement is common for many types of flower shots, since these have a geometry similar to this rug/lens example.

[Example images: Downward Tilt, with Only the Lens Tops in Focus; mouseover to view at f/5.6.]

Instead of the usual rectangular region for an ordinary lens, the depth of field for a tilt shift lens actually occupies a wedge which broadens away from the camera:

[Diagram: Large Aperture and Small Aperture for an Ordinary Camera Lens versus a Tilt Shift Lens. Blue intensity qualitatively represents the degree of image sharpness at a given distance.]

This means that placement of the depth of field is more critical near foreground subject matter. Note how using a small aperture with a tilt shift lens can become very important with vertical subject matter, and increasingly so if this subject matter is in the foreground. However, actual depth of field can be unequally distributed to either side of the focus plane, which should be taken into account.

Deciding where to optimally place the focus plane can become a tricky game of geometry, particularly if the subject traverses both front/back and up/down directions. For landscapes and architecture, the goal is usually to achieve maximal sharpness throughout. In the rug/lens example this would require placing the focus plane slightly above and parallel to the rug with a small aperture. Another possibility would be to place the depth of field both above and parallel to the rug, or with a more horizontal focus plane.

Traditional view cameras (i.e. old-fashioned looking cameras with flexible bellows) can use virtually any amount of lens tilt. However, the Nikon and Canon tilt shift lenses are limited to 8.5 and 8 degrees of tilt, respectively. This means that achieving optimal sharpness throughout is often a compromise between the best possible location for the focus plane and the constraints caused by a narrow range of tilt, since the ideal placement may not always be achievable with just 8 degrees of tilt. This can sometimes occur when one requires a horizontal focus plane, or when one wishes to focus on only part of a vertical object. The example below demonstrates an alternative placement:

[Diagram: Optimal Depth of Field versus Best Available Depth of Field.]

The key is to optimally place not only the focus plane, but also its wedge-shaped depth of field. In the diagrams above, the optimal (horizontal) placement requires a wide range of tilt angles; when horizontal placement is not possible, one should instead choose the focus distance which achieves the best available depth of field placement. Note how in the right image the focus plane crosses the floor, which ensures that depth of field is most efficiently distributed across the floor and the two subjects. In this example the crossover distance is positioned just before the hyperfocal distance of the corresponding untilted lens. For other subject distributions, proper placement depends on the relative importance of subject matter and the artistic intent of the photograph. In other words, even if one cannot tilt enough to place the focus plane at the best possible location, one can usually still use some tilt and be better off than what would have been achievable with an ordinary lens.

A more sophisticated possibility is to use a combination of tilt and shift, thereby rotating the focus plane even further than possible using lens tilt alone. This could have been accomplished by first aiming the camera itself slightly towards the ground. One could then use shift to change the field of view, thereby maintaining a similar composition as the original unshifted camera angle, but with a different perspective.

FOCUSING A TILT SHIFT LENS

Mentally visualizing how tilting a lens will correspond to changes in the depth of field can be quite difficult, even for the most experienced of photographers. Knowing where to best place the focus plane is only half the battle; actually putting it there can be a different matter entirely. The reason focusing can become so difficult is because the focusing distance and the amount of tilt do not independently control the focus plane's location. In other words, changing the focusing distance changes the angle of the focus plane in addition to changing its distance. Focusing can therefore become an iterative process of alternately adjusting the focusing distance and lens tilt until the photo looks best.

In practice, tilt shift lenses are usually focused using trial and error techniques through the viewfinder. This works by following a systematic procedure of alternating between setting the focusing distance and tilt, with the aim of having the focus plane converge onto the desired location. The following procedure is intended for situations where the subject lies primarily along a horizontal plane, or some other plane which is rotated relative to the camera's sensor:

Focusing Procedure for a Tilt Shift Lens
(1) Compose: set the lens to zero degrees of tilt and frame the photograph.
(2) Identify: identify the critical nearest and furthest subjects along the subject plane.
(3) Focus: focus at a distance which maximizes near and far subject sharpness in the viewfinder (if the far subject is at infinity, this distance will be at or near the hyperfocal distance). Once an approximate distance is identified, rock the focus ring back and forth slightly to get a better estimate of this distance.
(4) Tilt: very slowly apply progressively more lens tilt towards the subject plane until near and far subject sharpness is maximized in the viewfinder. Once an approximate tilt angle is identified, slightly rotate the tilt knob back and forth to get a better estimate of this angle.
(5) Refine: repeat steps (3) and (4) with smaller changes than before to identify whether this improves both near and far subject sharpness; if there is no further improvement, then the focusing procedure is complete.

For more on step (3) above, see the tutorials on depth of field and the hyperfocal distance. One should generally put more weight on having the furthest subject sharp. The only exception is when there is vertical subject matter in the foreground which fills a significant fraction of the image, in which case zero tilt is usually best.

Perhaps the easiest scenarios are those which demand more tilt than the lens supports. In these cases one can just use maximal tilt in the chosen direction; no tilt/focusing iterations are required. For landscapes, since there is minimal vertical subject matter, visual procedures work fine, although shift movements are likely to be helpful.

Overall, the above procedure aims to give robust results across a wide range of scenarios; for more exact focusing under specific conditions, refer to the calculators/charts later in this tutorial. Since accurate focusing requires consistent and careful attention to detail, using a tripod is almost always a must. Even though tilt shift lenses do not work with a camera's autofocus, your camera can still be used to notify you when you have achieved successful manual focus: select a focus point which is on your subject, and use the focus lock lights in the viewfinder to confirm when your tilt or focusing has successfully brought this subject into focus. Using the camera's focus points and focus lock confirmation can be of great help, but ultimately nothing beats having a better intuition for how the process works. One is encouraged to first experiment heavily with their tilt shift lens, in order to get a better feel for using tilt movements; with practice, this becomes much easier.

TOOLS TO MAKE TILT SHIFT FOCUSING EASIER

Trial and error techniques can be problematic due to the limited size of viewfinders used with 35 mm or cropped digital camera sensors. This can make it very hard to discern changes in sharpness, particularly in low light, with tilt shift lenses having a maximum aperture of f/3.5, or with a wide angle of view. In such cases the eye effectively becomes a part of the optical system: one's eye can get tricked and try to make objects appear in focus even though these objects are not necessarily in focus in the viewfinder. Fortunately, several tools are available which may make this process easier.

A special texturized manual focusing screen can ensure the eye has a clear reference to compare with out of focus objects. A magnified viewfinder can also help, such as Canon's "angle finder C" or one of many third party viewfinder magnifiers; many of these are at a right angle to the viewfinder, which can make for more convenient focusing when the camera is near the ground. Alternatively, if your camera supports real-time viewing using your camera's LCD (Live View), this can be of great help. One can also take a series of test photographs and then zoom in afterwards to verify sharpness at critical points.

FOCUSING TECHNIQUE FOR LANDSCAPE PHOTOGRAPHY

Tilt movements for landscape photography often require a focus plane which lies along a sweeping, near horizontal subject plane. In these situations it is very important to place the focus plane accurately in the foreground. The vertical distance "J" is easy to set because it is only determined by the lens tilt, not the focusing distance. Once the desired value of J has been determined and the corresponding tilt set, one can then independently use the focusing distance to set the angle of the focus plane. Setting the lens's focus ring to further distances will simultaneously increase the angle of the focus plane and the angular depth of field, as demonstrated in the next section.

TILT SHIFT LENS DEPTH OF FIELD CALCULATOR

The calculator below uses the lens focal length, lens tilt and untilted focusing distance to locate the plane of sharpest focus and depth of field:

[Calculator form: Tilt Shift Lens Depth of Field Calculator. Inputs: Camera Sensor Size; Aperture; Focal Length of Tilt/Shift Lens (45 mm); Tilt Amount (8 degrees); Untilted Focus Distance (1 meter). Aperture and sensor size are only necessary if you also wish to estimate depth of field.]
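The core of such a calculator can be sketched from the hinge rule plus one common approximation (after Merklinger): the plane of sharpest focus passes through the hinge line at depth J below the lens, and approximately through the point at the untilted focus distance along the line of sight. The function names are hypothetical, and the published calculator may define its angles differently; this is only a rough sketch.

```python
import math

def hinge_J(f_mm, tilt_deg):
    """Hinge rule: vertical distance (mm) from the lens to the pivot line."""
    return f_mm / math.sin(math.radians(tilt_deg))

def focus_plane_angle(f_mm, tilt_deg, focus_dist_m):
    """Approximate angle of the plane of sharpest focus (degrees), measured
    from the vertical, assuming the plane passes through the hinge line and
    through the untilted focus distance along the line of sight."""
    J = hinge_J(f_mm, tilt_deg) / 1000.0  # convert mm to metres
    return math.degrees(math.atan(focus_dist_m / J))

# Calculator defaults above: 45 mm lens, 8 degrees tilt, focused at 1 m.
# J ~ 0.32 m, so the focus plane tips to roughly 72 degrees from vertical.
angle = focus_plane_angle(45, 8, 1.0)
```

Increasing the focus distance here increases the plane's angle while J stays fixed, which matches the behavior described in the landscape focusing technique above.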

[Calculator outputs: Angle of Plane of Sharpest Focus (Focus Plane); Vertical Distance of Focus Plane from Lens (J); Angle of Near Plane of Acceptable Sharpness; Angle of Far Plane of Acceptable Sharpness; Total Angular Depth of Field.]

The calculator is a work in progress, but should give adequate estimates for most scenarios. It assumes the thin lens approximation, with the greatest error for focusing distances near infinity or close up. "Untilted Focus Distance" is (approximately) the distance labeled on your lens's focus ring. The circle of confusion is the standard size also used in the regular depth of field calculator. In the diagram, the light gray line lies along the center of the photograph.

Note how small changes in tilt lead to large changes in the focus plane angle, and that tilt is correspondingly less influential as the tilt angle increases. Also observe how the untilted focusing distance can have a significant impact on the focus plane angle. Similar to ordinary depth of field, the total angular depth of field (near minus far angles of acceptable sharpness) decreases for closer focusing distances.

USING SHIFT TO FURTHER ROTATE THE FOCUS PLANE

The next calculator is useful for situations where tilt and shift are used together to achieve an even greater rotation in the focus plane. With an ordinary camera lens, the angle of the focus plane changes when the camera is rotated, since the focus plane is always perpendicular to the lens's line of sight. With a tilt shift lens this is no different. However, the key is that with a tilt shift lens we can rotate the camera slightly and then use a shift movement to ensure the same field of view (same composition). This would achieve a rotation in the focus plane similar to the top left and right images above, but with the same field of view.

[Diagram: Ordinary Unrotated Lens; Rotated Ordinary Lens (rotates plane of focus, different field of view); Rotated Lens With Shift (rotates plane of focus, maintains field of view). Blue intensity qualitatively represents the degree of image sharpness at a given distance.]

The calculator below demonstrates how much one would have to rotate their camera in order to offset a given lens shift, which is also equal to the rotation in the focus plane.

[Calculator form: Using Shift to Rotate the Focus Plane. Inputs: Focal Length of Tilt/Shift Lens (45 mm); Shift Amount (11 mm).]
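The shift-offset rotation has a simple closed form under the thin lens approximation: shifting the lens by s mm displaces the line of sight by an angle of atan(s / f), so that is the camera rotation needed to offset the shift, and hence the extra rotation gained by the focus plane. This is a sketch of that relationship (the function name is hypothetical):

```python
import math

def rotation_to_offset_shift(f_mm, shift_mm):
    """Camera rotation (degrees) needed to offset a lens shift of `shift_mm`
    while keeping the same field of view; equal to the resulting rotation in
    the focus plane (thin lens approximation)."""
    return math.degrees(math.atan(shift_mm / f_mm))

# 11 mm of shift on a 45 mm lens -> ~13.7 degrees of rotation,
# while the same shift on a 24 mm lens -> ~24.6 degrees.
r_45 = rotation_to_offset_shift(45, 11)
r_24 = rotation_to_offset_shift(24, 11)
```

This makes concrete the note below: the same millimetres of shift rotate the focus plane much more for shorter focal lengths.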

[Calculator output: Amount of Focus Plane Rotation. The rotation in the focus plane is relative to its location for a camera/lens with the same field of view, but no shift.]

Note how shift can rotate the plane of sharpest focus much more for shorter focal lengths. This is because, in absolute units, a given mm of shift corresponds to a greater rotation in the field of view. On the other hand, this also means that the perspective will be more strongly influenced for shorter focal lengths, which may be an important consideration.

Be aware that using shift to rotate the focus plane may require that your tilt shift lens be modified so that it can tilt and shift in the same direction, which is usually not the case by manufacturer default. The lens can be sent to the manufacturer for this modification, or it can be performed yourself with a small screwdriver: one needs to remove the four small screws at the base of the lens, rotate the base 90°, and then screw them back into the base.

Note that this part of the tutorial primarily discusses the tilt feature; for shift movements visit part 1: Tilt Shift Lenses: Using Shift to Control Perspective or Create Panoramas.

30. PHOTO STITCHING DIGITAL PANORAMAS

Digital photo stitching for mosaics and panoramas enables the photographer to create photos with higher resolution and/or a wider angle of view than their digital camera or lenses would ordinarily allow, creating more detailed final prints and potentially more dramatic, all-encompassing panoramic perspectives. This tutorial aims to provide a background on how this process works, including terminology and alternative approaches, along with discussing common obstacles that one may encounter along the way, irrespective of panorama software type.

OVERVIEW: SEEING THE BIG PICTURE

Stitching a photo can require a complex sequence of steps, and achieving a seamless result is more complicated than just aligning photographs: it also involves correcting for perspective and lens distortion, identifying pixel-perfect matches between subject matter, and properly blending each photo at its seam. This procedure can be simplified into several closely related groups of steps, which can then each be addressed in separate stages. Later sections of this tutorial go into each stage in greater detail.

STAGE 1: physically setting up the camera, configuring it to capture all photos identically, and then taking the sequence of photos, where all are taken from virtually the same point of perspective. The end result is a set of images which encompasses the entire field of view.

STAGE 2: the first stage of using photo stitching software, which involves choosing the order and precise positioning which mutually aligns all photos. This may occur automatically, or may require manually selecting pairs of control points which should ideally overlay exactly in the final image. This stage may also require input of camera and lens settings so that the panorama software can estimate each photo's angle of view, or the type of panoramic stitch, which may change depending on subject matter.

STAGE 3: defining the perspective using references such as the horizon, straight lines or a vanishing point. For stitched photos that encompass a wide angle of view, one may also need to consider the type of panoramic projection; the projection type influences whether and how straight lines become curved in the final stitched image.

STAGE 4: shifting, rotating and distorting each of the photos such that the average distance between all sets of control points is minimized, and the chosen perspective (based on the vanishing point) is still maintained. This stage requires digital image interpolation to be performed on each photo, and is often the most computationally intensive of all the stages.

STAGE 5: reducing or eliminating the visibility of any seam between photos by gradually blending one photo into another. This may involve custom placement of the seams to avoid moving objects (such as people), and may sometimes be combined with the previous stage of moving and distorting each image.

STAGE 6: cropping the panorama so that it adheres to a given rectangular (or other) image dimension. This stage is optional, and may also involve any necessary touch-up or post-processing steps for the panorama, including levels, curves, color refinements and sharpening.

The above stages can be summarized as:

Stage 1: Equipment setup and acquisition of photographs
Stage 2: Selection of desired photo alignment and input of camera and lens specifications
Stage 3: Selection of perspective and projection type
Stage 4: Computer shifts, rotates and distorts photos to conform with requirements of stages 2 and 3
Stage 5: Manual or automatic blending of seams
Stage 6: Cropping, touch-up and post-processing

Note how stages 2-6 are all conducted on the computer using a panorama software package. These stages will show that panoramas are not always straightforward, and require many interpretive decisions to be made along the way. The rest of this tutorial takes an in-depth look at stage 1, with details on stages 2-6 being presented in the second part of the tutorial.

The resulting panorama is 20 megapixels, even though the camera used for this was under 4 megapixels. This provides a much greater level of detail—something ordinarily only attainable with much more expensive equipment—by using a compact, handheld and inexpensive travel camera.

BACKGROUND: PARALLAX ERROR & USING A PANORAMIC HEAD
The size and cost of panoramic equipment can vary drastically, depending on the intended use. Being able to identify when you need additional equipment can save time and money. Here we identify two typical stitching scenarios, based on required equipment:

Scenario #1: Handheld or tripod-mounted photographs with no close foreground subject matter. PANORAMIC HEAD NOT REQUIRED
Scenario #2: Tripod-mounted photographs with foreground subject matter in multiple frames. REQUIRES A PANORAMIC HEAD

Panoramas require that the camera rotates about the optical center of its lens, thereby maintaining the same point of perspective for all photographs. If the camera does not rotate about its optical center, its images may become impossible to align perfectly; these misalignments are called parallax error, and result from the change in perspective that occurs because the camera was not rotated about its optical center. With scenario 1, small movements deviating from the lens's optical center have a negligible impact on the final image—allowing these photos to be taken handheld. Scenario 2 is far more sensitive to parallax error due to its foreground subject matter, and a panoramic head is a special device that ensures your camera and lens rotate about their optical center.

Note: The optical center of a lens is often referred to as its nodal point, although this term is not strictly correct. A more accurate term is the entrance pupil, but even this refers to a small area and not an individual point. The location which we refer to is therefore the point at the center of the entrance pupil, which may also be called the "no parallax point" or "perspective point."

To see why foreground subject matter is so important, we can look at what happens for two adjacent, overlapping photos which comprise a panorama. The two pink pillars below represent the background and foreground subjects. The photo angle shown on the left (below) is the initial position before camera rotation, whereas the angle shown on the right is after camera rotation. Move your mouse over the three buttons below to see the effect of each scenario:

Incorrect Rotation: Scenario #1 | Scenario #2
Correct Rotation: Scenario #2 with Panoramic Head
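The scale of parallax error can be estimated with basic trigonometry. The sketch below is my own illustration (not from the tutorial; the function name and example numbers are assumptions): rotating the camera about a pivot offset from the entrance pupil moves the pupil sideways, which shifts nearby subjects relative to the distant background.

```python
import math

def parallax_error_deg(pivot_offset_mm, rotation_deg, subject_distance_mm):
    """Approximate angular shift (degrees) of a foreground subject relative to
    a background at infinity, when the camera rotates by rotation_deg about a
    pivot located pivot_offset_mm away from the lens's entrance pupil."""
    # Sideways displacement of the entrance pupil caused by the rotation.
    sideways_mm = pivot_offset_mm * math.sin(math.radians(rotation_deg))
    # Angle by which the foreground subject appears to move against the background.
    return math.degrees(math.atan2(sideways_mm, subject_distance_mm))
```

With a 100 mm pivot offset and a 30 degree rotation, a subject 1 m away shifts by almost 3 degrees against the background (clearly visible in the stitch), while a subject 20 m away shifts by well under a quarter of a degree, which matches the two scenarios above.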

Note: incorrect rotation assumes that the camera is rotated about the front of the lens; correct rotation assumes rotation about the optical center.

SCENARIO #1: The problem with the second image (right) is that each photo in the panorama will no longer see the same image perspective. Although some degree of misalignment may occur from this, the problem is far less pronounced than when there are close foreground objects. With care, parallax error can be made undetectable in handheld panoramas which do not have foreground subject matter. The trick is to hold the camera directly above one of your feet, then rotate your body about the ball of that foot while keeping the camera at the same height and distance from your body.

SCENARIO #2: Here we see that the degree of misalignment is much greater when foreground objects are present in more than one photo of the panorama. This is apparent because, for the image on the right, the light rays from the two pillars no longer coincide. This makes it absolutely essential that the camera is rotated precisely about its optical center, and usually necessitates the use of a special panoramic head (as shown in the final scenario).

SCENARIO #2, PANORAMIC HEAD: Here we see that the perspective is maintained because the lens is correctly rotated about its optical center: the light rays from both pillars still coincide, and the rear column remains behind the front column.

Panoramic photos of building interiors almost always require a panoramic head, while skyline vistas usually never do. Multi-row or spherical panoramas may also require a tripod-mounted panoramic head that keeps the lens at the center of rotation for up and down rotations.

STAGE 1: DIGITAL CAMERA SETUP & PANORAMA ACQUISITION
Taking a digital panorama involves systematically rotating your camera in increments to encompass the desired field of view. The size of each rotation increment and the total number of images depend on the angle of view of each photo (which is determined by the focal length of the camera lens being used) and on the amount of overlap between photos. The image below is composed of two rows of four photographs: the camera first scanned from left to right across the top row, then down to the second row, and back across the bottom half of the image from right to left.
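As a rough planning aid, the number of photos needed for a given sweep can be estimated from the lens's angle of view and the chosen overlap. This is my own sketch, not part of the tutorial; the full-frame sensor width and 25% default overlap are assumptions.

```python
import math

def horizontal_aov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view of a rectilinear lens (full-frame width assumed)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

def shots_needed(total_angle_deg, focal_length_mm, overlap=0.25,
                 sensor_width_mm=36.0):
    """Photos required to span total_angle_deg, each overlapping its
    neighbor by the given fraction (10-30% is the guideline below)."""
    aov = horizontal_aov_deg(focal_length_mm, sensor_width_mm)
    if aov >= total_angle_deg:
        return 1
    advance = aov * (1 - overlap)  # fresh angle added by each extra photo
    return 1 + math.ceil((total_angle_deg - aov) / advance)
```

For example, a 50 mm lens (about a 40 degree horizontal angle of view on a full-frame camera) needs roughly a dozen photos for a full 360 degree sweep at 25% overlap.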

Single Row Panorama | Multi-Row Panorama or Stitched Mosaic

Ensure that each photograph has roughly 10-30% overlap with all other adjacent photos. The percent overlap certainly does not have to be exact; however, too high of an overlap could mean that you have to take far more photos for a given angle of view, while too little of an overlap may provide too short a region over which to blend or redirect the placement of seams.

Other than minimizing parallax error, the key to creating a seamless panorama is to ensure that each of the panorama's photos is taken using identical settings. Any change in exposure, focus, lighting or white balance between shots creates a surprising mismatch.

Panoramas can encompass a very wide angle of view (up to full 360 degree panoramic views), and may therefore encompass a drastic range of illumination across all photo angles. This may pose problems when choosing the exposure settings, because exposing directly into or away from the sun may make the rest of the panorama too dark or light. Often the most intermediate exposure is obtained by aiming the camera in a direction perpendicular to one's shadow, using an automatic exposure setting (as if this were a single photo), and then manually using that setting for all photographs. Depending on artistic intent, one may instead wish to expose based on the brightest regions in order to preserve highlight detail.

If you are using a digital SLR camera, it is highly recommended that all photos be taken in manual exposure mode using the RAW file format. This way white balance can be customized identically for all shots, even after the images have been taken.

If using a compact digital camera, many of these include a panoramic preset mode, where they lock in the white balance and exposure based on the first photograph (or at least based on the first time the shutter button is held down half-way). Unless the panoramic preset modes use manual exposure settings, the exposure can be locked in by holding the shutter button down halfway at the intermediate angle, then taking the photos in any particular order (while ensuring that the shutter button remains pressed halfway before taking the first photo). These preset modes also often show the previous image on-screen along with the current composition, which can be very helpful with handheld panoramas because it assists in making sure each photo is level and overlaps sufficiently with the previous photo.

Note that if your panorama contains regions which are moving, such as water or people, it is best to try to isolate that movement within a single camera angle. This way you do not run into problems where someone appears in the panorama twice, or where photo misalignment occurs because a moving object sits on the seam between two stitched images. In the image to the left, the colorful Swiss guards were marching side to side, but the lower third of the image was contained within a single photo.

Other considerations when taking the photos include total resolution and depth of field. One can increase the number of megapixels in a stitched photo dramatically by comprising it of progressively more images. However, in order to achieve the same depth of field, one has to use progressively smaller lens aperture settings (larger f-numbers) as the number of stitched images increases (for the same angle of view), since magnification has increased. This may make achieving certain resolutions nearly impossible with some subject matter, either because of the resulting exposure time required, or because the small aperture induces significant photo blurring due to diffraction.

The following calculator demonstrates how the number of megapixels and camera lens settings change when attempting to make a stitched photo mosaic out of a scene which could have otherwise been captured in a single photograph. This could also be used to quickly assess what focal length is needed to encompass a given scene. Required input values are in the dark gray boxes and results are shown within dark blue boxes.

[Calculator: inputs are the selected aperture and actual lens focal length (mm) for a single photo which encompasses the entire scene, plus the mosaic size (number of photos) and percent overlap. Outputs are the required focal length (for the same overall angle of view), the megapixels (relative to the single photo), and, if needing to maintain the same depth of field, the required lens aperture and exposure time (relative to the single photo). Note: the calculator assumes that photographs are all taken in the same orientation, and that photos are of low magnification.]

Another consideration is whether to stitch a single row panorama in landscape or in portrait orientation. Using portrait orientation can achieve nearly 2.25X the number of megapixels for the same subject matter (for cameras with a 3:2 aspect ratio digital sensor, at the same 20% overlap). The disadvantage is that portrait orientation requires a longer focal length, and thus a smaller aperture to achieve the same depth of field (since magnification has increased).
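The 2.25X figure follows from the sensor's aspect ratio: filling the frame vertically with the sensor's long side requires a focal length roughly 1.5X longer, which scales linear resolution by 1.5X in both directions. A one-line sketch of this reasoning (my own illustration, assuming a 3:2 sensor and ignoring the small change in photo count from overlap):

```python
def portrait_megapixel_gain(aspect_long=3.0, aspect_short=2.0):
    """Approximate megapixel multiplier for shooting a single-row panorama in
    portrait rather than landscape orientation, covering the same final scene."""
    linear_gain = aspect_long / aspect_short  # longer focal length required
    return linear_gain ** 2                   # gain applies in both directions
```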

Here we see that even small photo mosaics can quickly require impractical lens apertures and exposure times in order to maintain the same depth of field.

Note too that the overlap between photos may reduce the final resolution significantly relative to the sum of megapixels in all the individual photos. The calculator below estimates the total megapixels of a stitched photo mosaic as a percentage of all its individual photos.

[Photo Stitching Efficiency Calculator: inputs are the mosaic size (number of photos) and the percent overlap; the output is the efficiency (final megapixels / megapixels of input images).]

Even modest overlaps reduce this efficiency noticeably, implying that photo stitching is definitely not an efficient way to store image data on a memory card.

Be wary of attempting panoramas of scenes with rapidly changing light, such as when clouds are moving across the sky and selectively illuminating a landscape, as strong changes in sky lightness may appear. Such scenes can still be stitched; just ensure that any moving patches of light (or dark) are contained within individual photos, and not on the seams.

Additionally, with large panoramas it can become very easy to drift upwards or downwards as you rotate across the sky, requiring that an unacceptable amount of the final panorama be cropped out (as shown below). Try to ensure that each of your photographs rotates across the sky in a systematic, grid-like direction.

Use of a polarizing filter should be avoided for extremely wide angle panoramas. Recall that polarizing filters darken the sky most when facing at a 90 degree angle to the direction of the sun, and least when facing directly into or away from the path of sunlight. A strong, unnatural sky gradient can be observed in the photo of an arch to the right. Polarizing filters may also make the edges of each photograph much more difficult to stitch without showing visible seams.
This means that any panorama which spans 180 degrees of the sky may see regions where the polarizer both darkens fully and not at all.

Hopefully this makes it clear that digital panoramas and stitched photo mosaics are more difficult to technically master than single photographs.
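The efficiency calculator described above can be reproduced in a few lines. This is a sketch under the assumption of a regular grid of photos with uniform overlap between adjacent frames only:

```python
def stitch_efficiency(rows, cols, overlap=0.2):
    """Final stitched megapixels as a fraction of the total megapixels
    captured, when adjacent photos share `overlap` of their width/height."""
    # Each extra column adds only (1 - overlap) of a frame width; same for rows.
    kept_w = (cols - (cols - 1) * overlap) / cols
    kept_h = (rows - (rows - 1) * overlap) / rows
    return kept_w * kept_h
```

For example, a 2x4 mosaic at 20% overlap keeps only about three quarters of the captured pixels, and at 30% overlap only about two thirds.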

The above result can be prevented by carefully placing the horizon at a pre-defined position in each photograph (such as halfway down the photo, or one third of the way down).

For further reading on this topic, please continue to: Part 2: Using Photo Stitching Software

31. USING PHOTO STITCHING SOFTWARE
Digital photo stitching software is the workhorse of the panorama-making process. This is part 2 of the tutorial, which assumes all individual photos have already been properly captured (stage 1 below is complete); for stage 1 and an overview of the whole stitching process please visit part 1 of this tutorial on digital panoramas.

Stage 1: Equipment setup and acquisition of photographs
Stage 2: Selection of desired photo alignment and input of camera and lens specifications
Stage 3: Selection of perspective and projection type
Stage 4: Computer shifts, rotates and distorts photos to conform with requirements of stages 2 and 3
Stage 5: Manual or automatic blending of seams
Stage 6: Cropping, touch-up and post-processing

TYPES OF STITCHING SOFTWARE
In order to begin processing our series of photos, we need to select an appropriate software program. The biggest difference between options is in how they choose to address the tradeoff between automation and flexibility: they can range from providing fully automatic one-click stitching to a more time-consuming manual process. Generally speaking, fully customized stitching software will always achieve better quality than automated packages, but this may also result in being overly technical or time consuming.

This tutorial aims to improve understanding of most software stitching concepts by keeping the discussion as generic as possible; however, actual software features may refer to a program called PTAssembler (a front-end for PanoTools or PTMender). A similarly-equipped software for the Mac is called PTMac. At the time of this article, PTAssembler incorporates a fully-automated one-click stitching option, in addition to providing for nearly all possible custom stitching options available in other programs. Other notable programs include those that come packaged with the camera (such as Canon PhotoStitch or Arc Soft Panorama Maker), free software such as Hugin Panorama Photo Stitcher, and popular commercial packages such as Autostitch, Panorama Factory and PanaVue, among others.

STAGE 2: CONTROL POINTS & PHOTO ALIGNMENT
Panorama stitching software uses pairs of control points to specify regions of two camera photos that refer to the same point in space. Pairs of control points may be manually selected by visual inspection, or these may be generated automatically using sophisticated matching algorithms (such as Autopano for PTAssembler). With most photographs, best results can only be achieved with manual control point selection (which is often the most time-consuming stage of the software stitching process).

The example above shows a selection of four pairs of control points for two photos within a panorama. The best control points are those which are based upon highly rigid objects with sharp edges or fine detail, and which are spaced evenly and broadly across each overlap region (with 3-5+ points for each overlap). This means that basing control points on tree limbs, clouds or water is ill-advised except when absolutely necessary.

PTAssembler has a feature called "automatically micro-position control points," which works by using your selection as an initial guess, then looking at all adjacent pixels within a specified distance (such as 5 pixels) to see if these are a better match. This effectively combines the advantages of manual control point selection with those of automated algorithms.

Another consideration is how far away from the camera each control point is physically located. For panoramas taken without a panoramic head, parallax error may become large in foreground objects, therefore more accurate results can be achieved by only basing control points on distant objects. Any parallax error in the near foreground may not be visible if these foreground elements are not contained within the overlap between photos.

The example below demonstrates a situation where the only detailed, rigid portion of each image is the silhouette of land at the very bottom—thereby making it difficult to space the control points evenly across each photo's overlap region. It is for this reason recommended to always capture some land (or other rigid objects) in the overlap region between all pairs of photographs; otherwise control point selection may prove difficult and inaccurate (such as for panoramas containing all sky or water). When stitching difficult cloud scenes such as that shown above, automated control point selection may prove more accurate.

STAGE 3: VANISHING POINT PERSPECTIVE
Most photo stitching software gives the ability to specify where the reference or vanishing point of perspective is located, along with the type of image projection. The vanishing point is usually where one would be directly facing if they were standing within the panoramic scene; this point is also clearly apparent by following lines into the distance which are parallel to one's line of sight. Incorrect placement of the vanishing point causes lines lying in the planes perpendicular to the viewer's line of sight to converge (even though these would otherwise appear as being parallel). For architectural stitches, careful choice of this vanishing point can help avoid converging vertical lines (which would otherwise run parallel), or a curved horizon, such as in the example below (120° crop from the rectilinear projection).
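The "micro-position" refinement described in stage 2 can be sketched as a brute-force patch search. This is my own simplified illustration using sum-of-squared-differences matching, not PTAssembler's actual (unpublished) algorithm; real matchers also work to subpixel accuracy and tolerate rotation.

```python
import numpy as np

def refine_control_point(img_a, img_b, pt_a, pt_b, patch=5, search=5):
    """Refine a hand-picked control point in img_b: compare the patch around
    pt_a in img_a against every candidate within `search` pixels of pt_b,
    keeping the position with the lowest sum of squared differences."""
    ya, xa = pt_a
    ref = img_a[ya - patch:ya + patch + 1, xa - patch:xa + patch + 1].astype(float)
    yb0, xb0 = pt_b
    best_ssd, best_pt = float("inf"), pt_b
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yb, xb = yb0 + dy, xb0 + dx
            cand = img_b[yb - patch:yb + patch + 1,
                         xb - patch:xb + patch + 1].astype(float)
            if cand.shape != ref.shape:  # candidate patch falls off the image
                continue
            ssd = ((ref - cand) ** 2).sum()
            if ssd < best_ssd:
                best_ssd, best_pt = ssd, (yb, xb)
    return best_pt
```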

Move your mouse over the image to see how it would appear if the vanishing point were placed too low.

The vanishing point is also critical in very wide angle, cylindrical projection panoramas (such as the 360 degree image shown below), which may exhibit different looking distortion if it is misplaced, resulting in a curved horizon. This effect can also be observed by using a wide angle lens in an architectural photo and pointing the camera significantly above or below the horizon—thereby giving the impression of buildings which are leaning.

If the vanishing point were placed too high, the horizon curvature would be in the opposite direction. Even if the vanishing point is placed at the correct height, the horizon may be rendered as having an S-curve if the imaginary horizon does not align with the actual horizon (in the individual photo). Panorama stitching software also often gives the option to tilt the imaginary horizon; this can be very useful when the photo containing the vanishing point was not taken perfectly level.

Sometimes it may be difficult to locate the actual horizon, due to the presence of hills, mountains, trees or other obstructions. For such difficult scenarios the location of the horizon could be inferred by placing it at a height which minimizes any curvature. If the panorama itself were taken level, then the straightest horizon would be the one that yields a stitched image whose vertical dimension is the shortest (and this is a technique sometimes employed by stitching software).

STAGE 4: OPTIMIZING PHOTO POSITIONS
Once the control points, vanishing point perspective and image projection have all been chosen, the photo stitching software can then begin to distort and align each image to create the final stitched photograph. It works by systematically searching through combinations of yaw, pitch and roll in order to minimize the aggregate error between all pairs of control points. This is often the most computationally intensive step in the process.

Yaw | Pitch | Roll

Note that the above photos are slightly distorted; this is to emphasize that when the stitching software positions each image it adjusts for perspective, and that the amount of perspective distortion depends on that image's location relative to the vanishing point.

The key quality metric to be aware of is the average distance between control points. If this distance is large relative to the print size, then seams may be visible regardless of how well these are blended. The first thing to check is whether any control points were mistakenly placed, and whether they follow the other guidelines listed in stage 2. If the average distance is still too large then this may be caused by improperly captured images, including parallax error from camera movement or from not using a panoramic head.
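A flat, two-dimensional stand-in for this optimization is the closed-form rigid alignment below, which finds the rotation and translation minimizing the mean squared distance between paired control points. This is my own sketch of the idea; real stitchers optimize yaw, pitch and roll on the sphere, plus lens parameters, numerically rather than in closed form.

```python
import numpy as np

def align_points(src, dst):
    """Closed-form 2-D rigid alignment (Kabsch algorithm): returns rotation R
    and translation t minimizing the mean squared distance ||R @ s + t - d||
    over paired control points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)  # cross-covariance of centered point sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                 # optimal rotation
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs                # optimal translation
    return R, t
```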

STAGE 5: MANUALLY REDIRECTING & BLENDING SEAMS
Ideally one would want to place the photo seams along unimportant or natural break points within the scene. If the stitching software supports layered output, one can perform this manually using a mask in Photoshop:

Without Blend | Manual Blend | Mask from Manual Blend

Note how the above manual blend evens the skies and avoids visible jumps along geometrically prominent architectural lines, including the crescent of pillars, foreground row of statues and distant white building.

Small-scale features (such as foliage or fine grass) have a high spatial resolution, whereas larger scale features (such as a clear sky gradient) are said to have low spatial resolutions. Make sure to blend the mask over large distances for smooth textures, such as the sky region above. On the other hand, blending over large distances can blur the image if there is any misalignment between photos. For fine detail, it is therefore best to blend over short distances using seams which avoid any easily noticeable discontinuities (view the "mask from manual blend" above to see how the sky and buildings were blended).

Manually blending seams can become extremely time consuming. Fortunately some stitching software has an automated feature which can perform this, as described in the next section, and which can often rectify even poorly captured panoramas or mosaics.

STAGE 5: AUTOMATICALLY REDIRECTING & BLENDING SEAMS
One of the best ways to blend seams in a stitched photograph is by using a technique called "multi-resolution splines". It works by breaking each image up into several components, similar to how an RGB photo can be separated into individual red, green and blue channels, except that in this case each component represents a different scale of image texture; it then recombines these to re-create a normal looking photograph.

The multi-resolution spline effectively blends each texture size separately: the lower resolution components are blended over a larger distance, whereas the higher resolution components are blended over shorter distances. This addresses the common problem of visible jumps across the seams corresponding to smooth areas, or blurriness along the seams corresponding to fine detail.

Show: Large-Scale Textures | Small-Scale Textures → Original Image in Black & White | Processed Image

In the example below, we demonstrate a seemingly impossible blend between an apple and an orange—objects which contain different large-scale color and small-scale texture.
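The multi-resolution spline just described can be sketched in a few lines of NumPy. This simplified version (my own illustration, not Enblend's or Smartblend's implementation) keeps every band at full resolution rather than downsampling into a true pyramid, but shows the core idea: each band-pass detail layer is combined using a progressively more blurred copy of the mask, so fine detail blends over short distances and coarse tones over long ones.

```python
import numpy as np

def blur(img):
    """Separable 1-2-1 blur with edge replication (a cheap Gaussian stand-in)."""
    p = np.pad(img, ((1, 1), (0, 0)), mode="edge")
    img = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
    p = np.pad(img, ((0, 0), (1, 1)), mode="edge")
    return 0.25 * p[:, :-2] + 0.5 * p[:, 1:-1] + 0.25 * p[:, 2:]

def multires_blend(a, b, mask, levels=4):
    """Blend image a over image b: each band-pass (detail) layer is combined
    with a progressively smoother copy of the mask, so high frequencies blend
    over short distances and low frequencies over long ones."""
    ga, gb, gm = a.astype(float), b.astype(float), mask.astype(float)
    out = np.zeros_like(ga)
    for _ in range(levels):
        la, lb = ga - blur(ga), gb - blur(gb)  # band-pass layers at this scale
        out += gm * la + (1.0 - gm) * lb       # blend detail with current mask
        ga, gb, gm = blur(ga), blur(gb), blur(gm)
    return out + gm * ga + (1.0 - gm) * gb     # add the low-frequency residual
```

Note that with a constant mask the bands telescope back to an ordinary weighted average, so the function reconstructs either input exactly when the mask is all ones or all zeros.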

Show: Apple | Orange → Blended: Feathered (Normal) | Multi-Resolution Spline
Individual Images → Blended Image

Of course this "apples and oranges" blend would likely never be performed intentionally in a stitched photograph, but it does help to demonstrate the true power of the technique. Move your mouse over this image to see how well the multi-resolution spline performs.

The above example demonstrates its use in a real-world panorama (move your mouse over the image to see the final blended result). Note the highly uneven sky brightness at the seams, which was primarily caused by pronounced vignetting (light fall-off at the edges of the frame caused by the optics).

Smartblend and Enblend are two add-on tools that can perform the multi-resolution spline in PTAssembler and other photo stitching software. Smartblend has the added advantage of being able to intelligently place seams based on image content.

STAGE 6: FINISHING TOUCHES
Here one may wish to crop their irregularly shaped stitch to fit a standard rectangular aspect ratio or frame size. The assembled panorama may then be treated as any ordinary single image photograph in terms of post-processing, which could include Photoshop levels or curves. Most importantly, the image will need an unsharp mask or other sharpening technique applied, since the perspective distortion (using image interpolation) and blending will introduce significant softening.

For background reading on this topic, please refer to: Part 1: Photo Stitching Digital Panoramas or the tutorial on Understanding Image Projections
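The unsharp mask mentioned in stage 6 can be sketched generically (my own illustration, not any particular editor's implementation): a blurred copy is subtracted from the original to isolate high-frequency detail, and a fraction of that detail is added back.

```python
import numpy as np

def unsharp_mask(img, amount=1.0):
    """Add back `amount` of the high-frequency detail (original minus a
    blurred copy) to counteract softening from interpolation and blending."""
    img = img.astype(float)
    # Cheap separable 1-2-1 blur with edge replication.
    p = np.pad(img, ((1, 1), (0, 0)), mode="edge")
    soft = 0.25 * p[:-2] + 0.5 * p[1:-1] + 0.25 * p[2:]
    p = np.pad(soft, ((0, 0), (1, 1)), mode="edge")
    soft = 0.25 * p[:, :-2] + 0.5 * p[:, 1:-1] + 0.25 * p[:, 2:]
    return img + amount * (img - soft)
```

The sharpened result overshoots on both sides of an edge, which is what restores the apparent crispness; flat regions are left untouched.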

32. PANORAMIC IMAGE PROJECTIONS
An image projection occurs whenever a flat image is mapped onto a curved surface, or vice versa, and is particularly common in panoramic photography. A projection is performed when a cartographer maps a spherical globe of the earth onto a flat piece of paper, for example. Since the entire field of view around us can be thought of as the surface of a sphere (for all viewing angles), a similar spherical to 2-D projection is required for photographs which are to be printed.

Narrow Angle of View (grid remains nearly square) | Wider Angle of View (grid is highly distorted)

For small viewing angles, it is relatively easy to distort this into an image on a flat piece of paper since the viewing arc is relatively flat. As the viewing angle increases, the viewing arc becomes more curved, and thus the difference between panorama projection types becomes more pronounced. Some distortion is inevitable when trying to map a spherical image onto a flat surface, therefore each projection type only tries to minimize one type of distortion at the expense of others.

Many of the projection types discussed in this tutorial are selectable as an output format in several panoramic software packages; PTAssembler allows selection of all those which are listed. When to use each projection depends largely on the subject matter and application. Here we focus on a few which are most commonly encountered in digital photography, as these are the ones which are most widely used when photo stitching digital panoramas.

IMAGE PROJECTION TYPES IN PHOTOGRAPHY
Grid representing sphere of vision (if standing at center) | Flattened Sphere: Equirectangular (100% Coverage)
Choose a Projection Type: Rectilinear | Mercator | Sinusoidal | Cylindrical | Fisheye | Stereographic

If all the above image projection types seem a bit daunting, try to first just read and understand the distinction between rectilinear and cylindrical (shown in bold).

Equirectangular image projections map the latitude and longitude coordinates of a spherical globe directly onto the horizontal and vertical coordinates of a grid, where this grid is roughly twice as wide as it is tall. Horizontal stretching therefore increases towards the poles, with the north and south poles being stretched across the entire upper and lower edges of the flattened grid. Equirectangular projections can show the entire vertical and horizontal angle of view, up to 360 degrees.

Cylindrical image projections are similar to equirectangular, except they also vertically stretch objects as they get closer to the north and south poles, with infinite vertical stretching occurring at the poles (therefore no horizontal line is shown at the top and bottom of this flattened grid). It is for this reason that cylindrical projections are not suitable for images with a very large vertical angle of view. Cylindrical projections are also the standard type rendered by traditional panoramic film cameras with a swing lens.

Rectilinear image projections have the primary advantage that they map all straight lines in three-dimensional space to straight lines on the flattened two-dimensional grid. This projection type is what most ordinary wide angle lenses aim to produce, so this is perhaps the projection with which we are most familiar. Its primary disadvantage is that it can greatly exaggerate perspective as the angle of view increases, leading to objects appearing skewed at the edges of the frame. It is for this reason that rectilinear projections are generally not recommended for angles of view much greater than 120 degrees. Cylindrical projections, by contrast, maintain more accurate relative sizes of objects than rectilinear projections; however, this is done at the expense of rendering lines parallel to the viewer's line of sight as curved (even though these would otherwise appear straight).

Mercator image projections are most closely related to the cylindrical and equirectangular projection types; mercator represents a compromise between these two types, providing for less vertical stretching and a greater usable vertical angle of view than cylindrical, but with more line curvature. This projection is perhaps the most recognizable from its use in flat maps of the earth. Here we also note that an alternative form of this projection (the transverse mercator) may be used for very tall vertical panoramas.

Sinusoidal image projections aim to maintain equal areas throughout all grid sections. If flattening the globe of the earth, one can imagine that this projection could be rolled back up again to form a sphere with the same area and shape as the original. The equal area characteristic is useful because, if recording a spherical image in 2-D, it maintains the same horizontal and vertical resolution throughout the image. This projection is similar to the fisheye and stereographic types, except that it maintains perfectly horizontal latitude lines from the original sphere.

Fisheye image projections aim to create a flattened grid where the distance from the center of this grid is roughly proportional to the actual viewing angle, yielding an image which would look similar to the reflection off of a metallic sphere. This would be characterized by (otherwise straight) lines becoming progressively more curved the further they get from the center of the image grid. Fisheye projections are generally not used as an output format for panoramic photography, but may instead represent the input images when the camera lens being used for photo stitching is a fisheye lens. A camera with a fisheye lens is extremely useful when creating panoramas that encompass the entire sphere of vision, since these often require stitching just a few input photographs. Fisheye projections are also limited to vertical and horizontal angles of view of 180 degrees or less, yielding an image which fits within a circle.

Stereographic image projections are very similar to fisheye projections, except that they maintain a better sense of perspective by progressively stretching objects away from the point of perspective. This perspective-exaggerating characteristic is somewhat similar to that yielded by the rectilinear projection, though certainly less pronounced.
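The flattened grids above correspond to simple mapping formulas. The sketch below (my own illustration of the underlying geometry, with longitude and latitude in radians and scale factors omitted; it is not any stitching package's exact implementation) covers the three mappings most relevant to stitching:

```python
import math

def project(lon, lat, kind):
    """Map a viewing direction (longitude, latitude in radians, relative to
    the center of perspective) to flat x, y image coordinates."""
    if kind == "equirectangular":  # lat/long copied straight onto the grid
        return lon, lat
    if kind == "cylindrical":      # vertical stretching grows toward the poles
        return lon, math.tan(lat)
    if kind == "rectilinear":      # gnomonic: keeps straight lines straight
        return math.tan(lon), math.tan(lat) / math.cos(lon)
    raise ValueError("unknown projection: " + kind)
```

Note how the cylindrical y coordinate diverges as the latitude approaches ±90° (which is why no horizontal line appears at the top and bottom of its flattened grid), while the rectilinear x coordinate diverges as the longitude approaches ±90° (which is why rectilinear output becomes unusable much beyond about 120° in total).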
EXAMPLES: WIDE HORIZONTAL FIELD OF VIEW How do the above image projections actually influence a panoramic photograph? The following series of photographs are used to visualize the difference between two projection types most often encountered in photo stitching software: rectilinear and cylindrical projections. Its primary disadvantage is that it can greatly exaggerate perspective as the angle of view increases. except that it maintains perfectly horizontal latitude lines from the original sphere. Sinusoidal image projections aim to maintain equal areas throughout all grid sections. Fisheye projections are also limited to vertical and horizontal angles of view of 180 degrees or less. It is for this reason that rectilinear projections are generally not recommended for angles of view much greater than 120 degrees. Stereographic image projections are very similar to fisheye projections. but with more line curvature. This would be characterized by (otherwise straight) lines becoming progressively more curved the further they get from the center of the image grid. These are designed to show only distortion differences for a wide horizontal angle of view. one can imagine that this projection could be rolled back up again to form a sphere with the same area and shape as the original. These are generally not used as an output format for panoramic photography. Fisheye image projections aim to create a flattened grid where the distance from the center of this grid is roughly proportional to actual viewing angle. leading to objects appearing skewed at the edges of the frame. Here we also note that an alternative form of this projection (the transverse mercator) may be used for very tall vertical panoramas. Rectilinear image projections have the primary advantage that they map all straight lines in three-dimensional space to straight lines on the flattened two-dimensional grid. since these often require stitching just a few input photographs.

Note the extreme distortion near the edges of the angle of view, in addition to the dramatic loss in resolution due to image stretching: objects toward the edge of the angle of view (far left and right) are significantly enlarged compared to those at the center (tower with doorway at base).

The next image demonstrates how the highly distorted image above would appear if it were cropped to contain just a 120 degree horizontal angle of view. Here we see that this cropped rectilinear projection yields a very suitable look, since all straight architectural lines are rendered straight in the stitched photograph.

The next example demonstrates how the stitched photographs would appear using a cylindrical projection. Cylindrical projections have the advantage of producing stitched photographs with relatively even resolution throughout, and also require minimal cropping of empty space. On the other hand, this is done at the expense of maintaining the relative size of objects throughout the angle of view. Additionally, the difference between cylindrical and equirectangular is negligible for photographs which do not have extreme vertical angles of view (such as the example below).
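The edge stretching described above can be made concrete with a little trigonometry. The sketch below (function names are my own, not from the tutorial) compares where an idealized rectilinear and cylindrical projection place a light ray arriving at a given horizontal angle; rectilinear uses the tangent of the angle, so the image stretches without bound as the angle of view approaches 180 degrees, while cylindrical spacing stays uniform:

```python
import math

def rectilinear_x(theta_deg, focal=1.0):
    """Horizontal image coordinate of a ray at angle theta (rectilinear)."""
    return focal * math.tan(math.radians(theta_deg))

def cylindrical_x(theta_deg, focal=1.0):
    """Horizontal image coordinate of a ray at angle theta (cylindrical)."""
    return focal * math.radians(theta_deg)

# Compare how much the outermost 10 degrees is stretched relative to the
# central 10 degrees, for a 120-degree horizontal angle of view (+/- 60).
for name, proj in (("rectilinear", rectilinear_x), ("cylindrical", cylindrical_x)):
    central = proj(5) - proj(-5)   # width given to the central 10 degrees
    edge = proj(60) - proj(50)     # width given to the outermost 10 degrees
    print(f"{name}: edge/central stretch = {edge / central:.2f}")
    # rectilinear: edge/central stretch = 3.09
    # cylindrical: edge/central stretch = 1.00
```

This is why the uncropped rectilinear stitch above enlarges objects at the far left and right, while the cylindrical stitch keeps resolution relatively even.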

EXAMPLES: TALL VERTICAL FIELD OF VIEW

The following examples illustrate the difference between projection types for a vertical panorama (with a very large vertical field of view). This gives a chance to visualize the difference between the equirectangular, cylindrical and mercator projections, even though these would have appeared the same in the previous example (with a wide horizontal angle of view).

Cylindrical | Mercator | Equirectangular

Note: The point of perspective for this panorama was set as the base of the tower; the effective vertical angle of view therefore looks as if there were a 140 degree field of view in total (if the perspective point were at the halfway height).

This large vertical angle of view allows us to clearly see how each of these image projections differs in its degree of vertical stretching/compression. The three projections above aim to maintain nearly straight vertical lines. The difference between rectilinear and cylindrical is barely noticeable for this narrow horizontal angle of view, so the rectilinear projection was not included. The equirectangular projection compresses vertical perspective so greatly that one arguably loses the sense of extreme height that this tower gives in person. For this reason, equirectangular is only recommended when absolutely necessary (such as in stitched photographs with both an extreme vertical and horizontal field of view).

Transverse Mercator

In contrast, the transverse mercator projection to the right sacrifices some curvature for a (subjectively) more realistic perspective. This projection type is often used for panoramas with extreme vertical angles of view. Also note how this projection closely mimics the look of each of the individual source photographs.

PANORAMIC FIELD OF VIEW CALCULATORS

The following calculator can be used to estimate your camera's vertical and horizontal angles of view for different lens focal lengths, which can help in assessing which projection type would be most suitable.

[Interactive calculator: Panoramic Field of View Calculator — inputs: lens focal length (mm), horizontal size in photo(s), vertical size in photo(s), camera orientation, percent overlap (default 20%), camera type; output: field of view, horizontal x vertical.]
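The calculators in this section can be approximated in a few lines under the same assumptions the tutorial states (a perfect rectilinear lens, focus at infinity, perspective at the center of the angle). The function names and the full-frame sensor dimensions in the example are my own illustrative choices, not the calculator's internals:

```python
import math

def angle_of_view(focal_mm, sensor_mm):
    """Angle of view (degrees) along one sensor dimension, assuming an
    ideal rectilinear lens focused at infinity."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

def photos_for_360(focal_mm, sensor_mm, overlap=0.20):
    """Photos needed to cover a 360-degree horizontal sweep, with each
    frame overlapping its neighbor by the given fraction."""
    fov = angle_of_view(focal_mm, sensor_mm)
    return math.ceil(360 / (fov * (1 - overlap)))

# Full-frame (36 x 24 mm) sensor, 50 mm lens, landscape orientation:
h = angle_of_view(50, 36)
v = angle_of_view(50, 24)
print(f"{h:.1f} x {v:.1f} degrees")   # prints 39.6 x 27.0 degrees
print(photos_for_360(50, 36))         # 12 photos at 20% overlap
```

For a cropped sensor, divide the 35 mm sensor dimensions by the crop factor before calling these functions.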

The above results are only approximate, since the angle of view is actually also influenced (to a lesser degree) by the focusing distance; the calculators are not intended for use in extreme macro photography. The field of view estimate assumes that the lens performs a perfect rectilinear image projection; lenses with large barrel or pincushion distortion may yield slightly different results. Fields of view also assume that the point of perspective is located at the center of this angle.

Note: CF = crop factor, which describes the relative width of the camera sensor compared to a 35 mm camera. For background reading, please visit the tutorial on digital camera sensor sizes.

The next calculator estimates how many photos are required to encompass a 360 degree horizontal field of view, given the input settings of: focal length, camera orientation, photo overlap and digital camera sensor size.

[Interactive calculator: 360° Panorama Calculator — inputs: lens focal length (mm), camera orientation, percent overlap (default 20%), camera type; output: required number of horizontal photos.]

For a summary of when to consider each projection type, please refer to the table below:

Field of View Recommendations
Projection Type    Horizontal    Vertical     Straight Horizontal Lines?   Straight Vertical Lines?
Rectilinear        <120°         <120°        YES                          YES
Cylindrical        ~120-360°     <120°        NO                           YES
Mercator           ~120-360°     <150°        NO                           YES
Equirectangular    ~120-360°     120-180°     NO                           YES
Fisheye            <180°         <180°        NO                           NO

Note: All straight line considerations exclude the centermost horizontal and vertical lines.

For background reading on creating digital panoramas, please also refer to: Part 1: Photo Stitching Digital Panoramas; Part 2: Using Photo Stitching Software.

33. OVERVIEW OF COLOR MANAGEMENT

"Color management" is a process where the color characteristics of every device in the imaging chain are known precisely and utilized to better predict and control color reproduction. For digital photography, this imaging chain usually starts with the camera and concludes with the final print, and may include a display device in between.

Many other imaging chains exist, but in general, any device which attempts to reproduce color from another device can benefit from color management.

CONCEPT: THE NEED FOR REFERENCE COLORS

Color reproduction has a fundamental problem: different color numbers do not necessarily produce the same color on all devices. We use an example of spiciness to convey both why this creates a problem and how it is overcome.

Let's say that you're at a restaurant with a friend and are about to order a spicy dish. Although you enjoy spiciness, your threshold for it is limited, and so you also wish to specify a pleasurable amount. The dilemma is this: a "mild" degree of spiciness may represent one level of spice in Thailand, and a completely different level in England. Restaurants could standardize this by establishing that one pepper equals "mild," two equals "medium," and so on (assuming that all peppers are the same); however, this would not be universal. Spice varies not just with the number of peppers included in the dish, but also depends on how sensitive the taster is to each pepper. "Mild" would have a different meaning for you and your friend.

To solve your spiciness dilemma, you could undergo a one-time taste test where you eat a series of dishes, with each containing slightly more peppers (shown above). You could then create a personalized table to carry with you at restaurants which specifies that 3 peppers equals "mild," 5 equals "medium," and so on.

Computers color manage using a similar principle. As a photographer, it is often critical that others see your work how it is intended to be seen. Color management cannot guarantee identical color reproduction, as this is rarely possible, but it can at least give you more control over any changes which may occur. This way, when a computer tries to communicate colors with another device, it does not merely send numbers, but also specifies how those numbers are intended to appear. Color management requires a personalized table, or "color profile," for every device, which associates each number with a measured color. The table below is an example similar to the personalized spiciness table you and your friend created, comparing input numbers with an output color:

Input Number (Green)
Device 1    →    Device 2
200         →    150
100         →    50

(Each row lists the input numbers that produce the same measured output color on the two devices.)

Color-managed software can then take this profile into account and adjust the numbers sent to the device accordingly.
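The personalized-table idea translates almost literally into code. The sketch below (profile values entirely made up for illustration) gives each device a "profile" mapping its input numbers to a shared reference value, and converts by going device 1 → reference → device 2, just as the spiciness table converts peppers to perceived spice:

```python
# Toy one-channel "color management": each profile maps device numbers to
# a device-independent reference value, and conversion goes through that
# shared reference. All profile values here are invented.
device1_profile = {0: 0.0, 100: 0.25, 200: 0.62, 255: 1.0}  # number -> reference
device2_profile = {0: 0.0,  50: 0.25, 150: 0.62, 255: 1.0}

def to_reference(profile, number):
    """Look up the measured reference color for a device number."""
    return profile[number]

def from_reference(profile, ref_value):
    """Find the device number whose measured color is closest to ref_value."""
    return min(profile, key=lambda n: abs(profile[n] - ref_value))

def convert(number, src_profile, dst_profile):
    return from_reference(dst_profile, to_reference(src_profile, number))

print(convert(200, device1_profile, device2_profile))  # 150
print(convert(100, device1_profile, device2_profile))  # 50
```

Real profiles cover all three color channels and interpolate between many more measured points, but the principle is the same.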

Real-world color profiles include all three colors, and are often more sophisticated than in the above table.

COLOR MANAGEMENT OVERVIEW

The International Color Consortium (ICC) was established in 1993 to create an open, standardized color management system which is now used in most computers. This system involves three key concepts: color profiles, color spaces, and translation between color spaces.

A color space relates numbers to actual colors and contains all realizable color combinations. The color profile keeps track of what colors are produced for a particular device's RGB or CMYK numbers, and maps these colors as a subset of the "profile connection space" (PCS). In order for these profiles to be useful, they have to be presented in a standardized way which can be read by all programs. The PCS is a color space which is independent of any device's particular color reproduction methods; it is usually the set of all visible colors defined by the Commission Internationale de l'éclairage (CIE) and used by the ICC, and so it serves as a universal translator. The following diagram shows these concepts for conversion between two typical devices: a monitor and a printer.

Input Device → Profile Connection Space → Output Device
RGB Profile (RGB Space) | CMYK Profile (CMYK Space)

Each step in the above chain specifies the available colors, and thereby defines a color space. The thin trapezoidal region drawn within the PCS is what is called a "working space." The working space is used in image editing programs (such as Adobe Photoshop) and defines the set of colors available to work with when performing any image editing.

A color management module (CMM) performs all calculations needed to translate from one space into another, and is the workhorse of color management. When trying to reproduce color on another device, if one device has a larger gamut of colors than the other device can produce, some of those colors will be outside the other's color space. These "out-of-gamut colors" occur with nearly every conversion and are called a "gamut mismatch." A gamut mismatch requires the CMM to make key approximations that are specified by a "rendering intent." The rendering intent is often specified manually, and includes several options for how to deal with out-of-gamut colors.

This all may seem a bit confusing at first, so for a more in-depth explanation, please visit: Color Management, Part 2: Color Spaces; Part 3: Color Space Conversion.

34. COLOR MANAGEMENT: COLOR SPACES

A color space relates numbers to actual colors, and is a three-dimensional object which contains all realizable color combinations.
When trying to reproduce color on another device, color spaces can show whether you will be able to retain shadow/highlight detail and color saturation, and by how much either will be compromised.

TYPES

Color spaces can be either dependent on or independent of a given device. Device-dependent spaces express color relative to some other color space, and can tell you valuable information by describing the subset of colors which can be shown with a monitor or printer, or can be captured with a camera or scanner. Device-independent color spaces express color in absolute terms.

VISUALIZING COLOR SPACES

Each dimension in a "color space" represents some aspect of color, such as lightness, saturation or hue, depending on the type of space. The two diagrams below show the outer surface of a sample color space from two different viewing angles; its surface includes the most extreme colors of the space. The vertical dimension represents luminosity, whereas the two horizontal dimensions represent the red-green and yellow-blue shifts, respectively.

Sample Color Space (Same Space Rotated 180°)

The above color space is intended to help you qualitatively understand and visualize a color space, however it would not be very useful for real-world color management. This is because a color space almost always needs to be compared to another space. In order to visualize this, color spaces are often represented by two-dimensional regions. These are more useful for everyday purposes since they allow you to quickly see the entire boundary of a given cross-section. Unless specified otherwise, two-dimensional diagrams usually show the cross-section containing all colors which are at 50% luminance (a horizontal slice at the vertical midpoint of the color space shown above), which is what occupies the midtones of an image histogram.

2D Color Space Comparison (Colors at 50% Luminance)

The following diagram shows three example color spaces: sRGB, Wide Gamut RGB, and a device-independent reference space. sRGB and Wide Gamut RGB are two working spaces sometimes used for image editing.

What can we infer from a 2D color space comparison? Both the black and white outlines show the subset of colors which are reproducible by each color space. For this particular diagram, we see that the "Wide Gamut RGB" color space contains more extreme reds, purples, and greens, whereas the "sRGB" color space contains slightly more blues. Devices with a large color space, or "wide gamut," can realize more extreme colors, whereas the opposite is true for a device with a narrow gamut color space. Keep in mind that this analysis only applies for colors at 50% luminance. If we were interested in the color gamut for the shadows or highlights, we could look at a similar 2D cross-section of the color space at roughly 25% and 75% luminance.

REFERENCE SPACES

What is the device-independent reference space shown above? Nearly all color management software today uses a device-independent space defined by the Commission Internationale de l'éclairage (CIE) in 1931. This space aims to describe all colors visible to the human eye, based upon the average response from a set of people with no vision problems (termed a "standard colorimetric observer"). Nearly all devices are subsets of the visible colors specified by the CIE (including your display device), so the reference space almost always contains more colors than can be shown on a computer display. Any representation of this space on a monitor should therefore be taken as qualitative and highly inaccurate: colors shown in the reference color space are only for qualitative visualization, as these depend on how your display device renders color.

The CIE space of visible color is expressed in several common forms: CIE xyz (1931), CIE L*a*b*, and CIE L u'v' (1976). Each contains the same colors, however they differ in how they distribute color onto a two-dimensional space:

CIE xy | CIE a*b* | CIE u'v'
(All color spaces shown are 2D cross-sections at 50% Luminance)

CIE xyz is based on a direct graph of the original X, Y and Z tristimulus functions created in 1931. The problem with this representation is that it allocates too much area to the greens. CIE L*a*b* transforms the CIE colors so that they extend equally on two axes, conveniently filling a square; furthermore, each axis in L*a*b* color space represents an easily recognizable property of color, such as the red-green and blue-yellow shifts used in the 3D visualization above. Finally, CIE L u'v' was created to correct for the distortion of CIE xyz by distributing colors roughly proportional to their perceived color difference.

WORKING SPACES

A working space is used in image editing programs (such as Adobe Photoshop), and defines the set of colors available to work with when performing any image editing. Two of the most commonly used working spaces in digital photography are Adobe RGB 1998 and sRGB IEC61966-2.1.

Why not use a working space with the widest gamut possible? It is generally best to use a color space which contains all colors which your final output device can render (usually the printer), but no more. Using a color space with an excessively wide gamut can increase the susceptibility of your image to posterization. This is because the bit depth is stretched over a greater area of colors, and so fewer bits are available to encode a given color gradation. For an in-depth comparison of these color spaces, please see sRGB vs. Adobe RGB 1998.

For further reading, please visit: Color Management, Part 1; Color Management: Color Space Conversion (Part 3).

35. COLOR MANAGEMENT: COLOR SPACE CONVERSION

Color space conversion is what happens when the color management module (CMM) translates color from one device's space to another. Conversion may require approximations in order to preserve the image's most important color qualities. Knowing how these approximations work can help you control how the photo may change, hopefully maintaining the intended look or mood.

Input Device → Profile Connection Space → Output Device
RGB Profile (RGB Space) | CMYK Profile (CMYK Space)
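As a concrete example of translating device values into a device-independent space, the sketch below converts an sRGB value into CIE XYZ using the standard sRGB transfer function and the published sRGB-to-XYZ (D65) matrix; a real CMM performs this kind of calculation, in both directions, for every pixel:

```python
# Convert an sRGB pixel (components in 0..1) into CIE XYZ, the kind of
# device-independent reference space that conversions pass through.

def srgb_to_linear(c):
    """Undo the sRGB gamma encoding."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(r, g, b):
    """Linearize, then apply the standard sRGB-to-XYZ (D65) matrix."""
    r, g, b = (srgb_to_linear(v) for v in (r, g, b))
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # Y is luminance
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    return x, y, z

# Pure white maps to the D65 white point, with luminance Y = 1:
x, y, z = srgb_to_xyz(1.0, 1.0, 1.0)
print(round(y, 4))  # 1.0
```

Converting to another device's space then means applying that device's inverse transform to the XYZ values, with a rendering intent deciding what to do when the result falls outside the destination gamut.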

BACKGROUND: GAMUT MISMATCH & RENDERING INTENT

The translation stage attempts to create a best match between devices, even when seemingly incompatible. If the original device has a larger color gamut than the final device, some of those colors will be outside the final device's color space. These "out-of-gamut colors" occur with nearly every conversion and are called a gamut mismatch.

RGB Color Space → CMYK Color Space (Destination Space)

Each time a gamut mismatch occurs, the CMM uses the rendering intent to decide what qualities of the image it should prioritize. Common rendering intents include: absolute colorimetric, relative colorimetric, perceptual, and saturation. Each of these types maintains one property of color at the expense of others, and each places a different priority on how it renders colors within the gamut mismatch region (described below).

PERCEPTUAL & RELATIVE COLORIMETRIC INTENT

Perceptual and relative colorimetric rendering are probably the most useful conversion types for digital photography. Relative colorimetric maintains a near exact relationship between in-gamut colors, even if this clips out-of-gamut colors. In contrast, perceptual rendering tries to also preserve some relationship between out-of-gamut colors, even if this results in inaccuracies for in-gamut colors. The following example demonstrates an extreme case for an image within a 1-D black-magenta color space:

Original Image: A = Wide Gamut Space, B = Narrow Gamut Space (Destination Space)
[Figure: the image converted from A to B using Relative Colorimetric and Perceptual intents]
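The behavior in this example can be mimicked with a one-dimensional sketch (gamut limits made up for illustration), showing how relative colorimetric clips while perceptual compresses:

```python
# Toy 1-D conversion between a wide source gamut (magenta intensity 0..1)
# and a narrow destination gamut that only reaches 0.8.

def relative_colorimetric(values, dst_max=0.8):
    """In-gamut values pass through unchanged; out-of-gamut values clip."""
    return [min(v, dst_max) for v in values]

def perceptual(values, src_max=1.0, dst_max=0.8):
    """Every value is rescaled, preserving relative differences."""
    return [round(v * dst_max / src_max, 2) for v in values]

image = [0.2, 0.5, 0.8, 0.9, 1.0]
print(relative_colorimetric(image))  # [0.2, 0.5, 0.8, 0.8, 0.8] - top clipped flat
print(perceptual(image))             # [0.16, 0.4, 0.64, 0.72, 0.8] - compressed
```

Note how the last three values become indistinguishable under relative colorimetric (clipped gradation), while perceptual keeps them distinct at the cost of shifting every in-gamut value.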

Note how perceptual maintains smooth color gradations throughout by compressing the entire tonal range, whereas relative colorimetric clips out-of-gamut colors (at the center of the magenta globules and in the darkness between them). Even though perceptual rendering compresses the entire gamut, note how it remaps the central tones more precisely than those at the edges of the gamut. For 2D and 3D color spaces, relative colorimetric maps out-of-gamut colors to the closest reproducible hue in the destination space.

Another distinction is that perceptual does not destroy any color information; it just redistributes it. Relative colorimetric, on the other hand, does destroy color information. This means that conversion using relative colorimetric intent is irreversible, while perceptual can be reversed. This is not to say that converting from space A to B and then back to A again using perceptual will reproduce the original; this would require careful use of tone curves to reverse the color compression caused by the conversion. The exact conversion also depends on what CMM is used: Adobe ACE, Microsoft ICM and Apple ColorSync are some of the most common.

ABSOLUTE COLORIMETRIC INTENT

Absolute colorimetric is similar to relative colorimetric in that it preserves in-gamut colors and clips those out of gamut, but the two differ in how each handles the white point. The white point is the location of the purest and lightest white in a color space (also see the discussion of color temperature). If one were to draw a line between the white and black points, this would pass through the most neutral colors.

3D Color Space | 2D Cross-Section (Two Spaces at 50% Luminance)

The location of this line often changes between color spaces. To illustrate this, the example below shows two theoretical spaces that have identical gamuts, but different white points:

(+ = White Point)
Color Space #1 | Color Space #2 | Absolute Colorimetric | Relative Colorimetric

Absolute colorimetric preserves the white point, as shown by the "+" on the top right, while relative colorimetric actually displaces the colors so that the old white point aligns with the new one (while still retaining the colors' relative positions). In other words, relative colorimetric skews the colors within gamut so that the white point of one space aligns with that of the other, while absolute colorimetric preserves colors exactly (without regard to the changing white point).

The exact preservation of colors may sound appealing, however relative colorimetric adjusts the white point for a reason. This color shift results because the white point of a color space usually needs to align with that of the light source or paper tint used. If one were printing to a color space for paper with a bluish tint, relative colorimetric would compensate colors to account for the fact that the whitest and lightest point has a tint of blue; absolute colorimetric would ignore this tint change. Without this adjustment, absolute colorimetric results in unsightly image color shifts.

SATURATION INTENT

Saturation rendering intent tries to preserve saturated colors, and is most useful when trying to retain color purity in computer graphics when converting into a larger color space. If the original RGB device contained pure (fully saturated) colors, then saturation intent ensures that those colors will remain saturated in the new color space, even if this causes the colors to become relatively more extreme.

Pie chart with fully saturated cyan, blue, magenta and red: [figure]

Saturation intent is not desirable for photos because it does not attempt to maintain color realism. Maintaining color saturation may come at the expense of changes in hue and lightness, which is usually an unacceptable trade-off for photo reproduction; it is thus rarely of interest to photographers. Another use for saturation intent is to avoid visible dithering when printing computer graphics on inkjet printers. Some dithering may be unavoidable, as inkjet printers never have an ink to match every color; this is often acceptable for computer graphics such as pie charts. However, saturation intent can minimize those cases where dithering is sparse because the color is very close to being pure.

Visible dithering due to lack of fully saturated colors: [figure]

PAY ATTENTION TO IMAGE CONTENT

One must take the range of image colors present into account: just because an image is defined by a large color space does not mean that it actually utilizes all of those extreme colors. If the destination color space fully encompasses the image's colors (despite being smaller than the original space), then relative colorimetric will yield a more accurate result.
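This content-aware advice can be automated in principle: if every color the image actually contains already fits inside the destination gamut, relative colorimetric loses nothing, while out-of-gamut content may favor perceptual. The sketch below uses a made-up one-dimensional gamut, with names of my own choosing:

```python
# Choose a rendering intent from the image's actual colors (toy 1-D model).

def choose_intent(image_values, dst_max):
    """Relative colorimetric if nothing would be clipped, else perceptual."""
    if max(image_values) <= dst_max:
        return "relative colorimetric"   # all colors fit: exact reproduction
    return "perceptual"                  # out-of-gamut colors are present

print(choose_intent([0.2, 0.5, 0.7], dst_max=0.8))  # relative colorimetric
print(choose_intent([0.2, 0.5, 0.9], dst_max=0.8))  # perceptual
```

Real images require this check per channel across a 3-D gamut, and the perceptual-vs-colorimetric choice also depends on taste, but the principle is the same.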

Example Image: [figure]

The above image barely utilizes the gamut of your computer display device, which is actually typical of many photographic images. If one were to convert the above image into a destination space which had less saturated reds and greens, this would not place any image colors outside the destination space. For such cases, relative colorimetric would yield more accurate results. This is because perceptual intent compresses the entire color gamut, regardless of whether those colors are actually utilized.

SHADOW & HIGHLIGHT DETAIL IN 3D COLOR SPACES

Real-world photographs utilize three-dimensional color spaces, even though up until now we have been primarily analyzing spaces in one and two dimensions. The most important consequence of rendering intent on 3D color spaces is how it affects shadow and highlight detail. Most prints cannot produce the range of light to dark that we may see on our computer display, so this aspect is of particular importance when making a print of a digital photograph.

The conversion difference between perceptual and relative colorimetric is similar to what was demonstrated earlier with the magenta image. The main difference is that now the compression or clipping occurs in the vertical dimension, for shadow and highlight colors. If the destination space can no longer reproduce subtle dark tones and highlights, this detail may be clipped when using relative/absolute colorimetric intent. Perceptual intent compresses these dark and light tones to fit within the new space, however it does this at the cost of reducing overall contrast (relative to what would have been produced with colorimetric intent). Using the "black point compensation" setting can help avoid shadow clipping, even with absolute and relative colorimetric intents. This is available in the conversion properties of nearly all software which supports color management (such as Adobe Photoshop).
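A rough sketch of what black point compensation does, on a toy luminance scale (the black/white points below are illustrative, not taken from any real profile): instead of clipping shadows darker than the destination's deepest black, the source tonal range is linearly remapped onto the destination range.

```python
# Toy luminance scale: 0.0 = deepest black, 1.0 = white.
SRC_BLACK, SRC_WHITE = 0.0, 1.0    # display can show a deep black
DST_BLACK, DST_WHITE = 0.1, 1.0    # print's deepest black is lighter

def clip_to_destination(lum):
    """Colorimetric intent without compensation: dark detail flattens."""
    return max(lum, DST_BLACK)

def black_point_compensation(lum):
    """Remap [SRC_BLACK, SRC_WHITE] onto [DST_BLACK, DST_WHITE]."""
    scale = (DST_WHITE - DST_BLACK) / (SRC_WHITE - SRC_BLACK)
    return DST_BLACK + (lum - SRC_BLACK) * scale

shadows = [0.0, 0.05, 0.1]
print([clip_to_destination(l) for l in shadows])                  # [0.1, 0.1, 0.1]
print([round(black_point_compensation(l), 3) for l in shadows])   # [0.1, 0.145, 0.19]
```

Without compensation the three shadow tones merge into one; with it, they remain distinct at the cost of slightly lightening the whole shadow region.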

RECOMMENDATIONS

So which is the best rendering intent for digital photography? In general, perceptual and relative colorimetric are best suited for photography because they aim to preserve the same visual appearance as the original. Images with intense colors (such as bright sunsets or well-lit floral arrangements) will preserve more of their color gradation in extreme colors using perceptual intent; however, this may come at the expense of compressing or dulling more moderate colors. Images with more subtle tones (such as some portraits) often stand to benefit more from the increased accuracy of relative colorimetric (assuming no colors are placed within the gamut mismatch region). Perceptual intent is overall the safest bet for general and batch use, unless you know specifics about each image.

For related reading, please visit: Color Management, Part 1; Color Management: Color Spaces (Part 2).

36. sRGB vs. ADOBE RGB 1998

Adobe RGB 1998 and sRGB IEC61966-2.1 (sRGB) are two of the most common working spaces used in digital photography. The decision about when to use each depends on image content and the intended purpose. This section aims to clear up some of the confusion associated with sRGB and Adobe RGB 1998, and to provide guidance on when to use each working space.

BACKGROUND

sRGB is an RGB color space proposed by HP and Microsoft because it approximates the color gamut of the most common computer display devices. Since sRGB serves as a "best guess" for how another person's monitor produces color, it has become the standard color space for displaying images on the internet. sRGB's color gamut encompasses just 35% of the visible colors specified by the CIE (see the section on color spaces). Although sRGB results in one of the narrowest gamuts of any working space, sRGB's gamut is still considered broad enough for most color applications.

Adobe RGB 1998 was designed (by Adobe Systems, Inc.) to encompass most of the colors achievable on CMYK printers, but by using only RGB primary colors on a device such as your computer display. The Adobe RGB 1998 working space encompasses roughly 50% of the visible colors specified by the CIE, improving upon sRGB's gamut primarily in the cyan-greens.

GAMUT COMPARISON

The following color gamut comparison aims to give you a better qualitative understanding of where the gamut of Adobe RGB 1998 extends beyond sRGB for shadow (~25%), midtone (~50%), and highlight (~75%) colors.

sRGB IEC61966-2.1 | Adobe RGB 1998
25% Luminance | 50% Luminance | 75% Luminance
(Comparison uses CIE L*a*b* reference space; colors are only qualitative, to aid in visualization.)

The 50% luminance diagram is often used to compare these two working spaces, however the shadow and highlight diagrams also deserve attention. Note how Adobe RGB 1998 extends into richer cyans and greens than does sRGB, for all tonal levels. Adobe RGB 1998 does not extend as far beyond sRGB in the shadows, however it still shows advantages in the dark greens (often encountered with dark foliage). Adobe RGB 1998 extends its advantage in the cyan-greens for the highlights, but now also has advantages with intense magentas, oranges, and yellows: colors which can add to the drama of a bright sunset.

IN PRINT

All of these extra colors in Adobe RGB 1998 are great to have for viewing on a computer monitor, but can we actually reproduce them in a print? It would be a shame to edit using these extra colors, only to later retract their intensity due to printer limitations. The following diagrams compare sRGB and Adobe RGB 1998 with two common printers: a Fuji Frontier (390) and a high-end inkjet printer with 8 inks (Canon iP9900 on Photo Paper Pro). A Fuji Frontier printer is what large companies such as Walmart use for making their prints.

why not just use it in every situation? Another factor to consider is how each working space influences the distribution of your image's bit depth. then we would be wasting bits by allocating them to encode colors outside the small gamut: For a limited bit depth which encodes all colors within the large gamut: Large Gamut Small Gamut .sRGB IEC61966-2. whereas smaller gamuts concentrate these bits within a narrow region. Consider the following green "color spaces" on a line: Large Gamut Small Gamut If our image contained only shades of green in the small gamut color space. Color spaces with larger gamuts "stretch" the bits over a broader region of colors. The printer should also be considered when choosing a color space. midtones. whereas the high-end inkjet printer exceeds sRGB for colors in shadows. colors are only qualitative to aid in visualization. and highlights. We see a big difference in how each printer uses the additional colors provided by Adobe RGB 1998: The Fuji Frontier only uses a small patch of yellow in the highlights. This color profile can help you achieve similar conclusions to those visible in the above analysis. Most mid-range printer companies provide a downloadable color profile for their printer. The high-end inkjet even exceeds the gamut of Adobe RGB 1998 for cyan-green midtones and yellow highlights. INFLUENCE ON BIT DEPTH DISTRIBUTION Since the Adobe RGB 1998 working space clearly provides more colors to work with. as this can have a big influence on whether the extra colors are utilized.1 Adobe RGB 1998 25% Luminance 50% Luminance 75% Luminance High-End Inkjet Select Printer Type: Fuji Frontier Comparison uses CIE L*a*b* reference space.

A similar concentration of bit depth occurs with sRGB versus Adobe RGB 1998, except in three dimensions. Adobe RGB 1998 occupies roughly 40% more volume than sRGB, so you are only utilizing about 70% of your bit depth if the extra colors of Adobe RGB 1998 are unnecessary (assuming evenly spaced bits). On the other hand, you may have plenty of "spare" bits if you are using a 16-bit image, and so any reduction due to your choice of working space might be negligible.

OTHER NOTES

It is apparent that Adobe RGB 1998 has a larger gamut than sRGB, but by how much? Adobe RGB is often depicted as having a superior gamut in greens, however this can be misleading and results mainly from the use of the CIE xyz reference space. Consider the following comparison:

[Diagram: sRGB IEC61966-2.1 vs. Adobe RGB 1998 plotted in CIE xy (exaggerates the difference in greens) and in CIE u'v' (closer to the eye's perceived difference).]

When the two are compared using the CIE u'v' reference space, the advantage in greens becomes less apparent, and not quite as dramatic as demonstrated above; the diagram on the right now shows Adobe RGB 1998 having similar advantages in both the cyans and greens, better representing the relative advantage we might perceive with our eyes. Care should be taken to also consider the influence of the reference space when drawing conclusions from any color space comparison diagram.

SUMMARY

My advice is to know which colors your image uses, and whether these can benefit from the additional colors afforded by Adobe RGB 1998. Ask yourself: do you really need the richer cyan-green midtones, orange-magenta highlights, or green shadows? Will these colors also be visible in the final print? Will these differences even be noticeable? If you've answered "no" to any of these questions, then you would be better served using sRGB. sRGB will make the most of your bit depth, because it allocates more bits to encoding the colors present in your image. In addition, sRGB can simplify your workflow, since this color space is also used for displaying images on the internet. On the other hand, even if you may not always use the extra colors of Adobe RGB 1998, you never want to eliminate them as a possibility for those images which require them. What if you desire a speedy workflow and do not wish to decide on your working space case by case? My advice is to use Adobe RGB 1998 if you normally work with 16-bit images, and sRGB if you normally work with 8-bit images.

37. TUTORIALS: PHOTOSHOP LEVELS

Levels is a tool in Photoshop and other image editing programs which can move and stretch the brightness levels of an image histogram. It has the power to adjust brightness, contrast, and tonal range by specifying the locations of complete black, complete white, and the midtones in a histogram. Since every photo's histogram is unique, there is no single way to adjust the levels for all your photos. A proper understanding of how to adjust the levels of an image histogram will help you better represent tones in the final image.

HOW IT WORKS

The levels tool can move and stretch the brightness levels in a histogram using three main components: a black point, a white point, and a midtone slider. The positions of the black and white point sliders redefine the histogram's "Input Levels" so that they are mapped to the "Output Levels" (by default black (0) and white (255), respectively), whereas the midtone slider redefines the location of middle gray (128). Each slider is shown below as it appears in Photoshop's levels tool, with added blue labels for clarity:

[Figure: Photoshop levels dialog with the black point, midtone, and white point sliders labeled.]

All examples below will use the levels tool on an RGB histogram, although levels can also be performed on other types of histograms. Levels can be performed on an individual color channel by changing the selection within the "Channel" box at the top.

ADJUSTING THE BLACK AND WHITE POINT LEVELS

When considering adjusting the black and white point levels of your histogram, ask yourself: is there any region in the image which should be completely black or white, and does the image histogram show this? Most images look best when they utilize the full range from dark to light which can be displayed on your screen or in a print; images which do not extend to fill the entire tonal range often look washed out and can lack impact. This means that it is often best to perform levels such that the histogram extends all the way from black (0) to white (255). The image below was taken in direct sunlight and includes both bright clouds and dark stone shadows, an example where at least some regions should be portrayed as nearly white or black. This histogram can be extended to fill the entire tonal range by adjusting the levels sliders as shown:

[Figure: histogram before levels (lower contrast) and after levels (higher contrast).]
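The slider arithmetic can be sketched as follows. This is an illustrative model of my own, not Photoshop's actual code: tones between the input black and white points are stretched to the output range, and tones outside the input range are clipped.

```python
import numpy as np

# Illustrative sketch of the levels remapping (not Photoshop's actual code).
def apply_levels(img, black_in=0, white_in=255, black_out=0, white_out=255):
    norm = np.clip((img.astype(float) - black_in) / (white_in - black_in), 0, 1)
    # Raising black_out or lowering white_out compresses the histogram
    # instead, which is how "Output Levels" reduce contrast.
    return np.round(black_out + norm * (white_out - black_out)).astype(np.uint8)

tones = np.array([10, 60, 128, 200, 245], dtype=np.uint8)
print(apply_levels(tones, black_in=10, white_in=245).tolist())
# [0, 54, 128, 206, 255]: the histogram now spans full black to full white
```

Moving the black point to 10 and the white point to 245 stretches the tones that were present out to the full 0-255 range, which is exactly the "before/after levels" change pictured above.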

One should also be cautious when moving the black and white point sliders to the edges of the histogram, as these can easily clip the shadows and highlights. A histogram may contain highlights or shadows that are shown with a height of just one pixel, and these are easily clipped, such as in the example shown below:

[Figure: histogram before levels (no pixel at full brightness) and after levels (stronger highlights).]

Holding down the "ALT" key while dragging the black or white point slider is a trick which can help avoid shadow or highlight clipping. If the image is fully black while dragging a black or white point slider, then no clipping has occurred. When the slider is dragged over where there are counts on the histogram, while simultaneously holding down ALT, the regions of the image which have become clipped are highlighted as shown above. If I had dragged the highlight slider to a point further left (a level of 180, versus 235 above), the image would have appeared as follows:

[Figure: image with the clipped regions highlighted.]

This can be quite useful, because knowing where clipping will occur can help one assess whether it will actually be detrimental to the artistic intent of the image. Keep in mind, though, that clipping shown while dragging a slider on an RGB histogram does not necessarily mean that a region has become completely white, only that at least one of the red, green, or blue color channels has reached its maximum of 255.

On the other hand, be wary of developing a habit of simply pushing the black and white point sliders to the edges of the histogram without also paying attention to the content of your image. Images taken in fog, haze, or very soft light often never have fully black or white regions; this is also often the case with low-key images (see the histograms tutorial). Adjusting levels for such images can ruin the mood and make your image less representative of the actual scene, by making it appear as though the lighting was harsher than it actually was.

ADJUSTING THE MIDTONE LEVEL

Moving the midtone slider compresses or stretches the tones to the left or right of the slider, depending on which direction it is moved. Movement to the left stretches the histogram to the slider's right and compresses the histogram to its left (thereby brightening the image by stretching out the shadows and compressing the highlights), whereas movement to the right performs the opposite. Therefore, the midtone slider's main use is to brighten or darken the midtones within an image.

When else should one use the midtone slider? Consider the following scenario: your image should contain full black and white, and even though the histogram extends to full black, it does not extend to white. If you move the white point slider so that it reaches the edge of the histogram, you end up making the image much brighter and overexposed. Using the midtone slider in conjunction with the white point slider can help you maintain the brightness in the rest of your image, while still stretching the highlights to white:

[Figure: histogram before levels (sky not at full brightness) and after levels (stronger highlights, similar overall brightness).]

Note how the sky became more pronounced, even though the overall brightness of the image remained similar. If the midtone slider had not been used, the image on the right would have appeared very overexposed. The same method could be used to darken the shadows while maintaining the midtones, except the midtone slider would instead be moved to the left.

Note: Even though the midtone slider initially sits at level 128, it is instead shown as 1.00. The midtone "Input Level" number actually represents the gamma adjustment, which can be thought of as a relative measure of the number of levels on the slider's left versus those on its right: values greater than one mean there are more levels to the slider's right, whereas values less than one mean there are more levels to its left. Expressing it this way avoids confusion when the black and white points change; the midtone slider remains at 1.00 even when the other sliders have been moved.

ADJUSTING LEVELS WITH THE DROPPER TOOLS

The histogram levels can also be adjusted using the dropper tools, shown below in red:

[Figure: levels dialog with the three dropper tools circled in red.]

One can use the droppers on the far left and right to set the black and white points by clicking on locations within the image that should be either black or white, respectively. This is often not as precise as using the sliders, because one does not necessarily know whether clicking on a given point will clip the histogram. The black and white point droppers are therefore more useful for computer-generated graphics than for photos.
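The gamma adjustment behind the midtone slider can be sketched as a power law on normalized tones. This is a common model and an assumption of mine; Photoshop's exact internal mapping is not documented here.

```python
import numpy as np

# Hedged sketch of the midtone (gamma) adjustment as a power law.
# Gamma above 1.0 brightens the midtones, below 1.0 darkens them,
# and the black and white endpoints stay fixed.
def apply_gamma(img, gamma):
    norm = img.astype(float) / 255.0
    return np.round((norm ** (1.0 / gamma)) * 255.0).astype(np.uint8)

mid = np.array([0, 128, 255], dtype=np.uint8)
print(apply_gamma(mid, 2.0).tolist())  # [0, 181, 255]: middle gray is lifted
```

Note how only the midtones move: black and white remain anchored, which matches the slider's behavior of redistributing levels between the two fixed endpoints.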

Unlike the black and white point droppers, the middle dropper tool does not perform the same function as the midtone slider. The middle dropper actually sets the "gray point," which is a section of the image that should be colorless. This is useful when there is a colorless reference object within your scene: one can click on it with the dropper tool, removing color casts by setting the white balance. On the other hand, it is usually better to perform a white balance on a RAW file, since this reduces the risk of posterization.

OTHER USES FOR THE LEVELS TOOL

The levels tool can be used on any type of image histogram in addition to the RGB histograms shown above, including luminance and color histograms. Performing levels on a luminance histogram can be useful to increase contrast without also influencing color saturation, whereas levels on a color histogram can change the color balance for images which suffer from unrealistic color casts (such as those with an incorrect white balance). Levels can also be used to decrease the contrast in an image by modifying the "Output Levels" instead of the "Input Levels." This can be a useful step before performing techniques such as local contrast enhancement, since it avoids clipping: such techniques may darken the darkest regions or brighten the brightest regions, and the reduced output levels leave room for these darker and brighter black and white points.

PRECAUTIONS

• Minimize use of the levels tool, as anything which stretches the image histogram increases the possibility of posterization.
• Performing levels on a luminance histogram can easily clip an individual color channel.
• Performing levels on an individual color histogram or channel can adversely affect the color balance, so color channel levels should only be performed when necessary, or when intentional color shifts are desired.

38. TUTORIALS: PHOTOSHOP CURVES

The Photoshop curves tool is perhaps the most powerful and flexible image transformation, yet it may also be one of the most intimidating. Since photographers effectively paint with light, curves is central to their practice because it affects light's two primary influences: tones and contrast. Tonal curves are also what give different film types their unique character, so understanding how they work allows one to mimic any film without ever having to retake the photograph.

HOW IT WORKS

Similar to Photoshop levels, the curves tool can take input tones and selectively stretch or compress them. Unlike levels, however, which has only black, white, and midpoint control, a tonal curve is controlled using any number of anchor points (small squares below, up to a total of 16). The result of a given curve can be visualized by following a test input tone up to the curve, then over to its resulting output tone; a diagonal line through the center therefore leaves tones unchanged. Recall from the image histogram tutorial that compressed tones receive less contrast, whereas stretched tones receive more. If you follow two spaced input tones, note that their separation becomes stretched where the slope of the curve increases, and compressed where the slope decreases (compared to the original diagonal line). Move your mouse over the curve types below to see how these changes affect this exaggerated example:

[Interactive figure. Choose: High Contrast or Low Contrast; Show Tonal Labels? YES / NO. Note: the curves and histograms shown above are applied to and shown for luminosity (not RGB).]

The curves shown above are two of the most common: the "S-curve" and the "inverted S-curve." An S-curve adds contrast to the midtones at the expense of the shadows and highlights, whereas the inverted S-curve does the opposite. Note how these change the histogram and, most importantly, how they influence the image: reflection detail on the side and underside of the boat becomes clearer with the inverted S-curve, while water texture becomes more washed out (and the opposite for the S-curve).

MOTIVATION: DYNAMIC RANGE & FILM CURVES

Why redistribute contrast if this is always a trade-off? Since actual scenes contain a greater lightness range (dynamic range) than we can reproduce on paper, one always has to compress the tonal range to reproduce a scene in a print. Midtone contrast is perceptually more important, so the shadows and highlights usually end up bearing the bulk of this tonal compression. Curves allow us to better utilize a limited dynamic range. Most films and photo papers therefore use something similar to an S-curve to maintain midtone contrast; each film's unique character is primarily defined by its tonal curve. Move your mouse over the image (right) to see how an S-curve can help maintain contrast in the midtones, and note its similarity to an actual film curve (below).
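An S-curve can be sketched as a lookup table built from anchor points. The anchor values below are illustrative choices of mine, and `np.interp` gives a piecewise-linear curve, whereas Photoshop fits a smooth spline through its anchors.

```python
import numpy as np

# Sketch of an S-curve as a 256-entry lookup table. Slope > 1 in the
# midtones adds contrast there; the shadow and highlight segments drop
# below a slope of 1 and lose contrast in exchange.
anchors_in  = np.array([0, 64, 128, 192, 255])
anchors_out = np.array([0, 48, 128, 208, 255])   # a gentle S-curve

lut = np.interp(np.arange(256), anchors_in, anchors_out)

def apply_curve(img, lut):
    return np.round(lut[img]).astype(np.uint8)

tones = np.array([32, 128, 224], dtype=np.uint8)
print(apply_curve(tones, lut).tolist())
# [24, 128, 232]: shadows darken, highlights brighten, middle gray is fixed
```

Because a lookup table is monotonic whenever the curve never slopes downward, this construction also preserves the tonal hierarchy discussed below.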

(Shown for Kodak Supra II Paper)

IN PRACTICE: OVERVIEW

The key concept with curves is that you can never add contrast in one tonal region without also decreasing it in another: the curves tool only redistributes contrast. All photographs therefore have a "contrast budget," and you must decide how to spend it, whether by spreading contrast evenly (a straight diagonal line) or by unequal allocation (varying slope). Furthermore, curves always preserve the tonal hierarchy (unless uncommon curves with negative slope are used). This means that if a certain tone was brighter than another before the conversion, it will still be brighter afterwards, just not necessarily by the same amount.

A tricky aspect of curves is that even minor movement of an anchor point can result in major changes in the final image. Moderate adjustments which produce smooth curves therefore usually work best; abrupt changes in slope can easily induce posterization by stretching tones in regions with gradual tonal variation. Three anchor points (one each for the shadows, midtones, and highlights) are generally all that is needed, in addition to the black and white points. If you need extra fine-tuning ability, try enlarging the size of the curves window. Pay close attention to the image histogram when making adjustments; I prefer to open the histogram window (Window > Histogram) to see live changes as I drag each anchor point.

In summary: tonal curves are applied to every image in one form or another, whether by our eyes, the film emulsion, the digital camera, the display device, or in post-processing. Our eyes/brain actually apply a tonal curve of their own to achieve maximum visual sensitivity over the greatest lightness range, while our camera (ideally) estimates the relative number of photons hitting each pixel; the camera therefore has to apply its own tonal curve to the RAW file to maintain accuracy. On top of this, each type of digital sensor has its own tonal response curve, and even PC/Mac computers apply a different tonal curve when displaying images (gamma).

UTILIZING EMPTY TONAL RANGE

The exception to the contrast trade-off is when you have unused tonal range, either at the histogram's edges or as gaps in between tonal peaks. If you want to increase contrast in a certain tonal peak, use the histogram to ensure that the region of greater slope falls on top of this peak. If the gaps are at the histogram's edges, this unused tonal range can be utilized with the black and white anchor points (as with the levels tool).

[BEFORE / AFTER example images.]

If the gaps occur in between tonal peaks, then a unique ability of curves is that it can decrease contrast in these unused tones, thereby freeing up contrast to be spent on tones which are actually present in the image. The next example uses a curve to close the tonal gap between the sky and the darker foliage. Note how this produces an overall smoother-toned image, and that the midtones and highlights remain more or less unchanged on the histogram. Move your mouse over the image to see the difference.

[BEFORE / AFTER example images.]

TRANSITION OF CLIPPED HIGHLIGHTS

Digital photos may abruptly clip their highlights once the brightness level reaches its maximum (255 for 8-bit images). This can create an unrealistic look, and often a smoother transition to white is preferred.

(The above results were achieved with a custom color profile curve.) Note the transition at the sun's border: the highlight transition can be made more gradual by decreasing the curve's slope at the far upper right corner.

LIGHTNESS CHANNEL & ADJUSTMENT LAYERS

Performing curves on just the lightness/luminosity channel, either in LAB mode or as an adjustment layer, can help reduce changes in hue and color saturation. In general, curves with a large slope in the midtones will increase color saturation, whereas a small slope will decrease it. Note how color saturation is greatly decreased and increased for the inverted S-curve and the regular S-curve, respectively:

[Figure: inverted S-curve and S-curve saturation comparison.]

Changes in saturation may be desirable when brightening shadows, but in most other instances they should be avoided. Adjustment layers (Layer > New Adjustment Layer > Curves...) can be set so that curves apply only to the luminosity channel by choosing a different blending mode (right). Adjustment layers have another benefit: you can continually fine-tune the curve without changing the actual image levels each time, thereby reducing posterization. They can also make your curves adjustment more subtle, which is accomplished by reducing the layer's opacity appropriately (circled in red above). This is particularly useful because small changes in anchor points sometimes yield too much of a change in the image.

USING CURVES TO CORRECT COLOR BALANCE

Although all curves thus far have been applied to RGB values or luminosity, curves can also be used on individual color channels as a powerful way of correcting color casts in specific tonal areas. Let's say your image has a bluish color cast in the shadows, while both the midtones and highlights appear balanced. Changing the white balance or adjusting the overall color to fix the shadows would inadvertently harm the other tones. Move your mouse over each of the images below to see what would have happened if this curve had been applied to the RGB channel.

[BEFORE / AFTER example images.]

The above example selectively decreases the amount of blue in the shadows to fix the bluish color cast. Make sure to place anchor points along the diagonal for all tonal regions which you do not wish to change. Alternatively, overall color casts can be fixed using the "Snap Neutral Midtones" setting under the options button. This works best for images whose average midtone color is roughly neutral; photos with an overabundance of one color (such as one taken within a forest) should use other methods, such as setting the white balance in RAW or with the levels tool. If you do not require precise color adjustments, the curves tool is probably overkill; in such cases a color balance correction would be much easier ("Image > Adjustments > Color Balance..." in Photoshop).

NOTES ON IMAGE CONTRAST

This tutorial has discussed contrast as if it were always desirable, however harsh or overcast light can result in too much or too little contrast. Contrast can emphasize texture or enhance subject-background separation, however this depends on subject matter, atmosphere, and artistic intent. There may also be cases where one would deliberately not use the entire tonal range; these include images taken in fog, haze, or very soft light, which often never have fully black or white regions.
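The shadow-cast fix can be sketched as a curve applied to the blue channel alone. The anchor values are illustrative choices of mine, and the piecewise-linear curve stands in for Photoshop's smooth spline; the anchors at 128 and above sit on the diagonal so midtones and highlights are untouched.

```python
import numpy as np

# Sketch of a per-channel curve: pull down only the blue channel in the
# shadows, anchoring the midtones and highlights on the diagonal.
blue_in  = np.array([0, 64, 128, 192, 255])
blue_out = np.array([0, 44, 128, 192, 255])      # less blue in the shadows
blue_lut = np.round(np.interp(np.arange(256), blue_in, blue_out)).astype(np.uint8)

def fix_blue_shadows(rgb):
    out = rgb.copy()
    out[..., 2] = blue_lut[rgb[..., 2]]          # remap the blue channel only
    return out

shadow_pixel  = np.array([[[30, 30, 60]]], dtype=np.uint8)    # bluish shadow
midtone_pixel = np.array([[[128, 128, 128]]], dtype=np.uint8) # neutral gray
print(fix_blue_shadows(shadow_pixel)[0, 0].tolist())   # [30, 30, 41]
print(fix_blue_shadows(midtone_pixel)[0, 0].tolist())  # [128, 128, 128]
```

The bluish shadow pixel loses blue while the neutral midtone passes through unchanged, which is exactly the selective behavior the anchor points on the diagonal are meant to guarantee.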

PRECAUTIONS

• Minimize use of the curves tool, as anything which stretches the image histogram increases the possibility of posterization.
• Always perform curves on 16-bit images when possible.
• Extreme adjustments in the RGB channel should be avoided; for such cases, perform curves using the lightness channel in an adjustment layer or in LAB mode to avoid significant changes in hue and saturation.

39. SHARPENING: UNSHARP MASK

An "unsharp mask" is actually used to sharpen an image, contrary to what its name might lead you to believe. Sharpening can help you emphasize texture and detail, and is critical when post-processing most digital images. Unsharp masks are probably the most common type of sharpening, and can be performed with nearly any image editing software (such as Photoshop). An unsharp mask cannot create additional detail, but it can greatly enhance the appearance of detail by increasing small-scale acutance.

CONCEPT

The sharpening process works by utilizing a slightly blurred version of the original image. This is subtracted from the original to detect the presence of edges, creating the unsharp mask (effectively a high-pass filter). Contrast is then selectively increased along these edges using this mask, leaving behind a sharper final image.

Step 1, detect edges and create the mask: Original - Blurred Copy = Unsharp Mask
Step 2, increase contrast at the edges: Original + Unsharp Mask = Sharpened Final Image
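The two steps above can be sketched in one dimension. This is a minimal sketch of my own: a 3-tap moving average stands in for the blur, and "amount" scales the mask before it is added back.

```python
import numpy as np

# Minimal 1-D unsharp mask: blur, subtract to form the mask (a high-pass
# filter), then add the scaled mask back to the original.
def unsharp_1d(signal, amount=1.0):
    padded = np.pad(signal.astype(float), 1, mode="edge")
    blurred = (padded[:-2] + padded[1:-1] + padded[2:]) / 3.0
    mask = signal - blurred              # step 1: the unsharp mask
    sharpened = signal + amount * mask   # step 2: add contrast at the edges
    return np.clip(np.round(sharpened), 0, 255).astype(np.uint8)

edge = np.array([50, 50, 50, 200, 200, 200], dtype=np.uint8)
print(unsharp_1d(edge).tolist())
# [50, 50, 0, 250, 200, 200]: an undershoot on the dark side of the edge
# and an overshoot on the light side; flat regions are left untouched
```

The flat regions pass through unchanged because their blurred copy equals the original, so the mask is zero there; only the edge is exaggerated.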

Note: The "mask overlay" shows how image information from the layer above the unsharp mask passes through and replaces the layer below in proportion to the brightness in that region of the mask. The upper image does not contribute to the final result in regions where the mask is black, while it completely replaces the layer below in regions where the unsharp mask is white.

Note: Unsharp masks are not new to photography. They were traditionally performed with film by utilizing a softer, slightly out-of-focus image (which would act as the unsharp mask). The positive of the unsharp mask was then sandwiched with the negative of the original image and made into a print. This was used more to enhance local contrast than small-scale detail.

If the resolution in the above image is not increasing, then why is the final text so much sharper? We can better see how this works if we magnify and examine the edge of one of these letters:

[Figure: Original vs. Sharpened edge profile.]

Note how the unsharp mask does not transform the edge of the letter into an ideal "step," but instead exaggerates the light and dark sides of the transition. An unsharp mask improves sharpness by increasing acutance, although resolution remains the same (see sharpness: resolution and acutance).

BIOLOGICAL MOTIVATION

Why are these light and dark over/undershoots so effective at increasing sharpness? It turns out that an unsharp mask is utilizing a trick performed by our own human visual system. The human eye sees what are called "Mach bands" at the edges of sharp transitions, named after their discovery by physicist Ernst Mach in the 1860's. Move your mouse on and off of the following image to see the Mach band effect:

[Figure: stepped gradient. Alternating with a smooth gradient enhances the Mach band effect.]

Note how the brightness within each step of the gradient does not appear constant: the right side of each step appears lighter, whereas the left side appears darker, very similar to the behavior of an unsharp mask. These bands enhance our ability to discern detail at an edge. Move your mouse over the plot below to see what is happening:

[Figure: plot of actual vs. perceived brightness across the gradient.]

IN PRACTICE

Fortunately, sharpening with an unsharp mask in Photoshop and other image editing programs is quick and easy. It can be accessed in Adobe Photoshop via the drop-down menus: Filter > Sharpen > Unsharp Mask. Using the unsharp mask requires understanding its three settings: "Amount," "Radius," and "Threshold."

Amount is usually listed as a percentage, and controls the magnitude of each overshoot. This can also be thought of as how much contrast is added at the edges.

Radius controls the amount of blur applied to the original when creating the mask, shown by "blurred copy" in the illustration above. This affects the size of the edges you wish to enhance, so a smaller radius enhances smaller-scale detail.

Threshold sets the minimum brightness change that will be sharpened, which is equivalent to clipping off the darkest non-black pixel levels in the unsharp mask. The threshold setting can be used to sharpen pronounced edges while leaving more subtle edges untouched. This is especially useful to avoid amplifying noise, or to sharpen an eyelash without also roughening the texture of skin.

COMPLICATIONS

Unsharp masks are wonderful at sharpening images, however too much sharpening can introduce "halo artifacts." These are visible as light/dark outlines or halos near edges, and become a problem when the light and dark over- and undershoots are so large that they are clearly visible at the intended viewing distance.

[Figure: Soft Original / Mild Sharpening / Over Sharpening (Visible Halos).]

Remedies: The appearance of halos can be greatly reduced by using a smaller radius value for the unsharp mask. Alternatively, one could employ one of the more advanced sharpening techniques (coming soon).

Another complication of the unsharp mask is that it can introduce subtle color shifts. Normal unsharp masks increase the over- and undershoots of the RGB pixel values equally, as opposed to only increasing the over- and undershoots of luminance. In situations where very fine color texture exists, this can selectively increase some colors while decreasing others. Consider the following example:

[Figure: Soft Original / Normal RGB Sharpening (Visible Cyan Outline) / Luminance Sharpening.]

When red is subtracted from the neutral gray background at the edges (middle image), this produces cyan color shifts where the overshoot occurs (see subtractive color mixing). If the unsharp mask is performed only on the luminance channel (right image), then the overshoot is light red and the undershoot (barely visible) becomes dark red, avoiding the color shift.

Remedies: Color shifts can be avoided entirely by performing the unsharp mask within the "lightness" channel in LAB mode. A better technique, which avoids converting between color spaces and minimizes posterization, is to: 1) create a duplicate layer, 2) sharpen this layer as normal using the unsharp mask, and 3) blend the sharpened layer with the original using "luminosity" mode in the layers window.
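The luminosity-blend remedy can be sketched as follows: sharpen the RGB image however you like, then transfer only the luminance change onto the original colors. The Rec. 601 luma weights below are a stand-in assumption of mine; Photoshop's "luminosity" blending mode is not publicly specified.

```python
import numpy as np

# Sketch of luminosity-only sharpening: keep the sharpened image's
# brightness change, but restore the original color relationships.
LUMA = np.array([0.299, 0.587, 0.114])   # assumed Rec. 601 luma weights

def luminosity_blend(original, sharpened):
    orig = original.astype(float)
    luma_orig  = (orig * LUMA).sum(axis=-1, keepdims=True)
    luma_sharp = (sharpened.astype(float) * LUMA).sum(axis=-1, keepdims=True)
    # shift every channel by the same luminance difference, so the
    # channel-to-channel differences of the original are preserved
    return np.clip(np.round(orig + (luma_sharp - luma_orig)), 0, 255).astype(np.uint8)

pixel    = np.array([[[120, 60, 60]]], dtype=np.uint8)  # reddish pixel
brighter = np.array([[[150, 60, 60]]], dtype=np.uint8)  # RGB sharpening raised red only
print(luminosity_blend(pixel, brighter)[0, 0].tolist())
# [129, 69, 69]: all channels rise together instead of red alone overshooting
```

Because every channel receives the same offset, the overshoot brightens the pixel without tinting it, which is the behavior described for luminance sharpening above.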

REAL-WORLD EXAMPLE

Move your mouse over "unsharp mask" and "sharpened" to see how the sharpened image compares with the softer original. The difference can often be quite striking.

[Figure: Original / Unsharp Mask / Sharpened. The unsharp mask is brightened slightly to increase visibility.]

RECOMMENDED READING

• For a more practical discussion, also see this website's Guide to Image Sharpening.
• Learn another use for an unsharp mask with "local contrast enhancement."
• Alternatively, learn about their use in large format film photography at: http://www.largeformatphotography.info/unsharp/

40. IMAGE RESIZING FOR THE WEB & EMAIL

Resizing images for the web and email is perhaps the most common way to share digital photos. Particularly for web presentation, being able to retain artifact-free sharpness in a downsized image is critical, yet this may prove problematic. This tutorial compares different approaches to resizing an image for web and email, and makes recommendations based on their results.

BACKGROUND: MOIRÉ ARTIFACTS

Moiré (pronounced "more-ay") is another type of aliasing artifact. Unlike in photo enlargement, where jagged edges are the problem, downsizing produces the opposite aliasing artifact: moiré. It shows up in images with fine textures which are near the resolution limit. These textures surpass the resolution limit when downsized, so the image may only selectively record them, in a repeating pattern:

[Figure: Image Downsized to 50%.]

[Figure: Downsized Image Shown at 200%.]

Note how this pattern has no physical meaning in the picture: these lines do not correlate with the direction of the roof shingles, so the pattern is not real and must be an artifact of the interpolator. Images with fine geometric patterns are at the highest risk; these include roof tiles, distant brick and woodwork, wire mesh fences, and others. The prevalence of moiré largely depends on the type of interpolator used, although some images are much more susceptible than others. Interpolation algorithms which preserve the best sharpness are more susceptible to moiré, whereas those which avoid moiré typically produce a softer result. This is unfortunately an unavoidable trade-off in resizing.

RESIZE-INDUCED SOFTENING

In addition to moiré artifacts, a resized image can also become significantly less sharp.

[Figure: Original Image / Softer Resized Image.]

One of the best ways to combat this is to apply a follow-up unsharp mask after resizing an image, even if the original had already been sharpened. Move your mouse over the image above to see how this can regain lost sharpness.

INTERPOLATION PERFORMANCE COMPARED

Consider that when an image is downsized to 50% of its original size, it is impossible to show detail which previously had a resolution of just a single pixel; if any such detail is shown, it is an artifact of the interpolator.

[Figure: Original Image / Image Averages to Gray.]

Using this concept, a test was designed to assess both the maximum resolution and the degree of moiré each interpolator produces upon downsizing. It amplifies these artifacts for a typical scenario: resizing a digital camera image to a more manageable web and email resolution of 25% of its original size.

The test image (below) was designed so that the resolution of the stripes progressively increases away from the center of the image. When the image gets downsized, all stripes beyond a certain distance from the center should no longer be resolvable. Interpolators which show detail all the way up to the edge of this resolution limit (dashed red box shown below) preserve maximum detail, whereas interpolators which show detail outside this limit are adding patterns to the image which are not actually there (moiré).

1. Nearest Neighbor  2. Bilinear  3. Bicubic **  4. Sinc  5. Lanczos  6. Bicubic, 1px pre-blur  7. #6 w/ sharpening  8. Genuine Fractals

Show Red Box?  YES | NO        Test Image*

*Test image shown has been modified for viewing; the actual image is 800x800 pixels and the stripes extend to the maximum resolution at that size.
**Bicubic is from the default setting used in Adobe Photoshop CS & CS2.
Test chart conceived in a BBC paper and first implemented at www.worldserver.com/turk/opensource/; all diagrams and custom code above were performed in Matlab.

Sinc and lanczos algorithms produce the best results: they are able to resolve detail all the way to the theoretical maximum (red box), while still maintaining the fewest artifacts beyond it. Photoshop bicubic comes in second, as it has visible moiré patterns far outside the box; furthermore, note how bicubic also does not show as much detail and contrast just inside the red box. Genuine Fractals 4.0 was included for comparison, although it does poorly at downsizing (not its intended use). This highlights a key divide: some interpolation algorithms are much better at increasing than at decreasing image size, and vice versa. Options #6 & 7 are variants of the bicubic downsize, and are discussed below.

Technical Note: interpolation algorithms vary depending on the software used, even if the algorithm has the same name. Sinc interpolation, for example, has variations which take into account anywhere from 256 to 1024+ adjacent known pixels. Furthermore, software may also vary in how much weighting it gives to close vs. far known pixels in its calculations, and this may or may not be explicitly stated -- which is often the case with "bicubic."

PRE-BLUR TO MINIMIZE MOIRÉ ARTIFACTS

One approach which can improve results in problem images is to apply a little blur to the image *before* you downsize it. This allows you to eliminate any detail smaller than what you know is impossible to capture at a lower resolution. If you do not have a problem with moiré artifacts, then there is no need to pre-blur.
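The pre-blur workflow can be sketched in a few lines. The snippet below is an illustration only, using plain NumPy rather than the original Matlab/Photoshop tools, with a simple box blur standing in for a Gaussian blur and block averaging standing in for bicubic; the radius and factor values are examples, not prescriptions.

```python
# Hedged sketch of "blur slightly, then downsize" using plain NumPy.
# A naive box blur stands in for Photoshop's Gaussian blur; block
# averaging stands in for the interpolator. Illustration only.
import numpy as np

def box_blur(img, radius=1):
    """Naive box blur: average each pixel with neighbors within `radius`."""
    h, w = img.shape
    padded = np.pad(img, radius, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy: radius + dy + h,
                          radius + dx: radius + dx + w]
    return out / (2 * radius + 1) ** 2

def downsize(img, factor=4):
    """Downsize by an integer factor using block averaging."""
    h, w = img.shape
    cropped = img[: h - h % factor, : w - w % factor]
    return cropped.reshape(h // factor, factor,
                           w // factor, factor).mean(axis=(1, 3))

# Pre-blurring first suppresses detail too fine to survive the downsize:
img = np.random.default_rng(0).random((64, 64)) * 255
result = downsize(box_blur(img, radius=1), factor=4)
print(result.shape)  # (16, 16)
```

The key point is the order of operations: the blur runs at full resolution, before the interpolator ever sees the image.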

Since the above image was downsized to 1/4 its original size, any repeating patterns smaller than 4 pixels cannot be resolved. Pre-blurring prepares the image for the interpolator in a way which minimizes aliasing artifacts. The pre-blurred Photoshop image above (#6) eliminates most of the moiré (found in #3); however, additional sharpening (performed in #7) is required to regain sharpness for detail just inside the red box. A radius as high as 2 pixels (for a total diameter of 4 pixels) could have been used in #6, however 1 pixel is all that was needed to virtually eliminate artifacts outside the box. Too high of a pre-blur can lead to softening in the final image. After pre-blur and sharpening, Photoshop bicubic performs close to the more sophisticated sinc and lanczos algorithms.

The main disadvantage to this approach is that the required radius of blur depends on how much you wish to downsize your image -- therefore you have to use this technique on a case-by-case basis.

RECOMMENDATIONS

All of this analysis is directed at explaining what happens when things go wrong -- and what actions you can take to fix it. Many photos do not have detail which is susceptible to moiré; if resizing is artifact-free, you may not need to change a thing. When things do go wrong, however, this can help explain why.

The ideal solution is to use a sinc or lanczos algorithm to avoid moiré artifacts in the downsized image, then follow up with a very small radius (0.2-0.3) unsharp mask to correct for any interpolation-induced softening. On the other hand, the sinc algorithm is not widely supported, and software which uses it is often not as user-friendly. An alternative approach would be to use bicubic: pre-blur problematic images and then sharpen after downsizing. This works well, but my preference is to use the standard bicubic for downsizing -- leaving greater flexibility to sharpen afterwards as the image requires. Photographic workflows can become complicated enough as is.

PHOTOSHOP BICUBIC SHARPER vs. BICUBIC SMOOTHER

Adobe Photoshop versions CS (8.0) and higher actually have three options for bicubic interpolation: bicubic smoother, bicubic (intermediate default), and bicubic sharper.

Show Bicubic Type:  Original Image  |  Smoother / Sharper

All variations provide similar results to #3 in the interpolation comparison, but with varying degrees of sharpness. Many recommend using the smoother variation for upsizing and the sharper variation for downsizing, but this is simply a matter of preference. If your image has moiré, the sharper setting will amplify it and the smoother setting will reduce it (relative to the default). Many find the built-in sharpening in the sharper variation to be a little too strong and coarse for most images, so my preference is the standard bicubic regardless of the resize direction. Just be particularly cautious when the image contains fine textures, as the sharper algorithm is the most prone to moiré artifacts.

41. DIGITAL PHOTO ENLARGEMENT

Digital photo enlargement to several times its original 300 PPI size, while still retaining sharp detail, is perhaps the ultimate goal of many interpolation algorithms. Despite this common aim, enlargement results can vary significantly depending on the resize software, sharpening and interpolation algorithm implemented.

Original    Visible pixels without interpolation

BACKGROUND

The problem arises because, unlike film, digital cameras store their detail in a discrete unit: the pixel. Any attempt to magnify an image also enlarges these pixels -- unless some type of image interpolation is performed. Move your mouse over the image to the right to see how even standard interpolation can improve the blocky, pixelated appearance. The best optimization is to start with the highest quality image possible; ensuring this means using proper technique, a high resolution camera, a low noise setting and a good RAW file converter. Once all of this has been attempted, know that there is no magic solution: optimizing digital photo enlargement can only help you make the most of this image. Before proceeding with this tutorial, for further reading please visit: Digital Image Interpolation, Part 1.

OVERVIEW OF NON-ADAPTIVE INTERPOLATION

Recall that all non-adaptive interpolation algorithms always face a trade-off between three artifacts: aliasing, blurring and edge halos. Note that you can ensure that you do not induce any anti-aliasing in computer graphics if you use the nearest neighbor algorithm.

Original Computer Graphic    Zero Anti-Aliasing

A small sampling of the most common algorithms is included below. The following diagram and interactive visual comparison demonstrate where each algorithm lies in this three-way tug of war. Move your mouse over the options below to see how each interpolator performs for this enlargement:

1. Nearest Neighbor  2. Bilinear  3. Bicubic Smoother  4. Bicubic *  5. Bicubic Sharper

6. Lanczos 7. Bilinear w/ blur Type Selected: Test Image

*default interpolation algorithm for Adobe Photoshop CS and CS2

The qualitative diagram to the right roughly demonstrates the trade-offs of each type. Nearest neighbor is the most aliased, and along with bilinear these are the only two that have no halo artifacts-- just a different balance of aliasing and blur. You will see that edge sharpness gradually increases from 3-5, but at the expense of both increased aliasing and edge halos. Lanczos is very similar to Photoshop bicubic and bicubic sharper, except perhaps a bit more aliased. All show some degree of aliasing, however one could always eliminate aliasing entirely by blurring the image in Photoshop (#7).

Lanczos and bicubic are some of the most common, perhaps because they are very mild in their choice of all three artifacts (as evidenced by being towards the middle of the triangle above). Nearest neighbor and bilinear are not computationally intensive, and can thus be used for things like web zooming or handheld devices.
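The reason nearest neighbor and bilinear are so cheap is visible in the arithmetic: each bilinear output pixel needs only its four nearest known pixels and a handful of multiplications. The sketch below is an illustration of that core step, not any particular library's implementation (real resizers also handle edges, color channels and scaling factors more carefully).

```python
# Minimal sketch of bilinear interpolation: each sample is a weighted
# average of the four surrounding known pixels. Illustration only.
import numpy as np

def bilinear_sample(img, y, x):
    """Sample a 2D grayscale array at fractional coordinates (y, x)."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1 = min(y0 + 1, img.shape[0] - 1)
    x1 = min(x0 + 1, img.shape[1] - 1)
    fy, fx = y - y0, x - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

img = np.array([[0.0, 100.0],
                [100.0, 200.0]])
print(bilinear_sample(img, 0.5, 0.5))  # 100.0 -- the average of all four
```

Bicubic and lanczos follow the same pattern but pull in 16 or more known pixels per sample, which is why they cost more and why they can overshoot at edges (halos).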

Recall that adaptive (edge-detecting) algorithms do not treat all pixels equally, but instead adapt depending on nearby image content. This flexibility gives much sharper images with fewer artifacts (than would be possible with a nonadaptive method). Unfortunately, these often require more processing time and are usually more expensive. Even the most basic non-adaptive methods do quite well at preserving smooth tonal gradations, but they all begin to show their limitations when they try to interpolate near a sharp edge.

1. Nearest Neighbor 2. Bicubic * 3. Genuine Fractals 4. PhotoZoom (default) 5. PhotoZoom (graphic)

6. PhotoZoom (text)

7. SmartEdge **

Type Selected:

Test Image

*default interpolation algorithm for Adobe Photoshop CS and CS2  **still in research phase, not available to the public

Genuine Fractals is perhaps the most common iterative (or fractal) enlargement software. It tries to encode a photo similarly to a vector graphics file -- allowing for nearly lossless resizing ability (at least in theory). Interestingly, its original aim was not enlargement at all, but efficient image compression. Times have changed since storage space is now far more plentiful, and so, fortunately, has its application. Shortcut PhotoZoom Pro (formerly S-Spline Pro) is another common enlargement program. It takes into account many surrounding pixels when interpolating each pixel, and attempts to re-create a smooth edge that passes through all known pixels. It does this with a spline algorithm -- a technique also used by car manufacturers when designing a new smooth-flowing body for their cars. PhotoZoom has several settings -- each geared towards a different type of image.

Note how PhotoZoom produces superior results in the computer graphic above, as it is able to create a sharp and smooth-flowing edge for all the curves in the flag. Genuine fractals adds small-scale texture which was not present in the original, and its results for this example are arguably not much better than those of bicubic. It is also worth noting though that Genuine Fractals does the best job at preserving the tip of the flag, whereas PhotoZoom sometimes breaks it up into pieces. The only interpolator which maintains both smooth sharp edges and the flag's tip is SmartEdge.

The above comparisons demonstrate enlargement of theoretical examples, however real-world images are seldom this simple. These also have to deal with color patterns, image noise, fine textures and edges that are not as easily identifiable. The following example includes regions of fine detail, sharp edges and a smooth background:

Sharpened versions:  Nearest Neighbor  |  Bicubic Smoother  |  PhotoZoom (Default)  |  Genuine Fractals  |  SmartEdge

All but nearest neighbor (which simply enlarges the pixels) do a remarkable job considering the relatively small size of the original. Pay particular attention to problem areas; for aliasing these are the top of the nose, tips of ears, whiskers and purple belt buckle. As expected, all perform nearly identically at rendering the softer background. Even though genuine fractals struggled with the computer graphic, it more than holds its own with this real-world photo. It creates the narrowest whiskers, which are even thinner than in the original image (relative to other features). It also renders the cat's fur with sharp edges while still avoiding halo artifacts at the cat's exterior. On the other hand, some may consider its pattern of fur texture undesirable, so there is also a subjective element to the decision. Overall I would say it produces the best results. PhotoZoom Pro and bicubic are quite similar, except PhotoZoom has fewer visible edge halos and a little less aliasing. SmartEdge also does exceptionally well, however this is still in the research phase and not available. It is the only algorithm which does well for both the computer graphic and the real-world photo.

Attention has been focused on the type of interpolation, however the sharpening technique can have at least as much of an impact.

Apply your sharpening after enlarging the photo to the final size, not the other way around. Otherwise previously imperceptible sharpening halos may become clearly visible. This effect is the same as if one were to apply an unsharp mask with a larger than ideal radius. Move your mouse over the image to the left (a crop of the enlargement shown before) to see what it would have looked like if sharpening were applied before enlargement. Notice the increase in halo size near the cat's whiskers and exterior. Also be aware that many interpolation algorithms have some sharpening built into them (such as Photoshop's "bicubic sharper"). A little sharpening is often unavoidable because the Bayer interpolation itself may also introduce sharpening.

If your camera does not support the RAW file format (and you therefore have to use JPEG images), be sure to disable in-camera sharpening or decrease it to a minimum. Save these JPEG files at the highest quality compression setting, otherwise previously undetectable JPEG artifacts will be magnified significantly upon enlargement and subsequent sharpening. Since an enlarged photo can become significantly blurred compared to the original, resized images often stand to benefit more from advanced sharpening techniques. These include deconvolution, fine-tuning the light/dark over/undershoots, multi-radius unsharp masks and Photoshop CS2's new feature: smart sharpen.

The expected viewing distance of your print may change the requirements for a given depth of field and circle of confusion. Furthermore, an enlarged photo for use as a poster will require a larger sharpening radius than one intended for display on a website. The following estimator should be used as no more than a rough guide; the ideal radius also depends on other factors such as image content and interpolation quality.

Sharpening Radius Estimator Viewing Distance

Print Resolution


Estimated Sharpening Radius

*PPI = pixels per inch; see tutorial on "Digital Camera Pixels"

A typical display device has a pixel density of around 70-100 PPI, depending on resolution setting and display dimensions. A standard value of 72 PPI gives a sharpening radius of 0.3 pixels using the above calculator-- corresponding to the common radius used for displaying images on the web. Alternatively, a print resolution of 300 PPI (standard for photographic prints) gives a sharpening radius of ~1.2 pixels (also typical).
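The estimator's exact formula is not given on the page, but the two data points in the text (72 PPI giving ~0.3 px, 300 PPI giving ~1.2 px) are consistent with a radius that scales linearly with output resolution. The hypothetical helper below assumes that linear relationship (about 0.004 px of radius per PPI); treat it as a rough reconstruction, not the calculator's actual internals.

```python
# Hypothetical reconstruction of the sharpening radius estimator,
# assuming radius grows linearly with output PPI (calibrated to the
# two values quoted in the text: 72 PPI -> ~0.3 px, 300 PPI -> 1.2 px).
def sharpening_radius(ppi):
    """Rough unsharp-mask radius in pixels for a given output PPI."""
    return 0.004 * ppi

print(round(sharpening_radius(72), 2))   # ~0.29 px, as used for web display
print(round(sharpening_radius(300), 2))  # 1.2 px, typical for prints
```

As the text notes, the ideal radius also depends on image content and interpolation quality, so any such formula is only a starting point.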

The resolution of a large roadside billboard image need not be anywhere near as high as that of a closely viewed fine art print. The following estimator lists the minimum PPI and maximum print dimension which can be used before the eye begins to see individual pixels (without interpolation).

Photo Enlargement Calculator Viewing Distance


Camera Aspect Ratio


Camera Resolution


Minimum PPI Maximum Print Dimension

You can certainly make prints much larger-- just beware that this marks the point where you need to start being extra cautious. Any print enlarged beyond the above size will become highly dependent on the quality of interpolation and sharpening.
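Although the calculator's internals are not published here, the usual way such a minimum PPI is derived is from visual acuity: a typical eye resolves detail of roughly one arcminute (1/60 degree). The sketch below applies that standard assumption; both the acuity figure and the helper functions are illustrative, not the page's actual formula.

```python
# Sketch of a minimum-PPI estimate from the standard ~1 arcminute
# visual acuity assumption. Illustrative reconstruction only.
import math

def minimum_ppi(viewing_distance_inches, acuity_arcmin=1.0):
    """Smallest print resolution before individual pixels become visible."""
    pixel_size = viewing_distance_inches * math.tan(
        math.radians(acuity_arcmin / 60.0))
    return 1.0 / pixel_size

def max_print_width(pixels_wide, viewing_distance_inches):
    """Largest print width (inches) before pixels show, w/o interpolation."""
    return pixels_wide / minimum_ppi(viewing_distance_inches)

print(round(minimum_ppi(10)))  # ~344 PPI at a close 10 inch viewing distance
print(round(max_print_width(3000, 20), 1))
```

Doubling the viewing distance halves the required PPI, which is why a billboard can get away with a tiny fraction of the resolution of a fine art print.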

Both the size and type of texture within a photo may influence how well that image can be enlarged. For landscapes, the eye often expects to see detail all the way down near their resolving limit, whereas smooth surfaces and geometric objects may be less demanding. Some regions may even enlarge better than others; hair in portraits usually needs to be fully resolved, although smooth skin is often much less demanding.

Many professional printers have the ability to use a small image and perform the photo enlargement themselves (hardware interpolation), as opposed to requiring a photo that has already been enlarged on a computer (software interpolation). Many of these printers claim better enlargement quality than is possible with the bicubic algorithm, so which is a better option? Performing the enlargement in software beforehand allows for greater flexibility-- allowing one to cater the interpolation and sharpening to the needs of the image. On the other hand, enlarging the file yourself means that the file sizes will be MUCH larger, which may be of special importance if you need to upload images to the online print company in a hurry.

42. DEPTH OF FIELD CALCULATOR

A depth of field calculator is a useful photographic tool for assessing what camera settings are required to achieve a desired level of sharpness. This calculator is more flexible than the one in the depth of field tutorial because it adjusts for parameters such as viewing distance, print size and eyesight -- thereby providing more control over what is "acceptably sharp" (the maximum tolerable circle of confusion). In order to calculate the depth of field, one needs to first decide on an appropriate value for the maximum circle of confusion (CoC). Most calculators assume that for an 8x10 inch print viewed at 25 cm (~1 ft), features smaller than 0.01 inches are not required to achieve acceptable sharpness. This scenario is often not an adequate description of acceptable sharpness, and so the calculator below accounts for other viewing scenarios (although it defaults to the standard settings).

Depth of Field Calculator Maximum Print Dimension Viewing Distance Eyesight Camera Type

" This should serve only as a rough guideline to conditions where detail can no longer be resolved by our eyes. and so the depth of field increases (max. the depth of field calculator can then be used to enhance those carefully planned landscape. Consult your camera's manual or manufacturer website if unsure what to enter for this parameter. Actual lens focal length refers to the focal length in mm listed for your lens. however they also require longer focal lengths to achieve the same field of view. NOT the "35 mm equivalent focal length" sometimes used. On the other hand. our eyes become less able to perceive fine detail in the print. then it is likely to be incorrect. Changing the eyesight parameter therefore has a significant influence on the depth of field." IN PRACTICE Care should be taken not to let all of these numbers get in the way of taking your photo.Selected Aperture Actual Lens Focal Length Focus Distance (to subject) mm meters Closest distance of acceptable sharpness Furthest distance of acceptable sharpness Hyperfocal distance Total Depth of Field Note: CF = "crop factor" (commonly referred to as the focal length multiplier) Bottom of Form USING THE CALCULATOR As the viewing distance increases. even if you can detect the circle of confusion with your eyes. please see "Understanding the Hyperfocal Distance. but be sure not to multiply the value listed on your lens by a crop factor (or focal length multiplier). The camera type determines the size of your film or digital sensor. People with 20/20 vision can perceive details which are roughly 1/3 the size of those used by lens manufacturers (~0. A photo intended for close viewing at a large print size (such as in a gallery) will likely have a far more restrictive set of constraints than a similar image intended for display as a postcard or on a roadside billboard. the image may still be perceived as "acceptably sharp. This can only be achieved by getting out there and experimenting with your camera. 
Most compact digital cameras have a zoom lens that varies on the order of 6 or 7 mm to about 30 mm (often listed on the front of your camera on the side of the lens). Conversely. This is useful when deciding where to focus such that you maximize the sharpness within your scene. If you have already taken your photo. Hyperfocal distance is the focus distance where everything from half the hyperfocal distance to infinity is within the depth of field. but instead suggest that you get a visual feel for how aperture and focal length affect your image. although I do not recommend using this value "as is" since sharpness is often more critical at infinity than in front of the focus distance. nearly all digital cameras also record the actual lens focal length in the EXIF data for the image file. .01 in features for a 8x10 in print viewed at 1 ft) to set the standard for lens markings. our eyes can perceive finer detail as the print size increases. and so the depth of field decreases. CoC increases). I do not recommend calculating the depth of field for every image. and thus how much the original image needs to be enlarged to achieve a given print size. SLR cameras are more straightforward as most of these use standard 35 mm lenses and clearly state the focal length. Once you have done this. For more on this topic. Larger sensors can get away with larger circles of confusion because these images do not have to be enlarged as much. macro of lowlight images where the range of sharpness is critical. If you are using a focal length outside this range for a compact digital camera.
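A calculator like this one is typically built on the standard thin-lens depth of field formulas. The page's exact implementation is not shown, so the sketch below should be read as a textbook approximation rather than a reproduction of it; the f/8 example values are illustrative.

```python
# Standard thin-lens depth of field formulas (approximation; not
# necessarily identical to the page's calculator). All units are mm;
# c is the maximum circle of confusion (~0.03 mm is the common value
# for full-frame 35 mm).
def hyperfocal(f, N, c=0.03):
    """Hyperfocal distance in mm for focal length f and f-number N."""
    return f * f / (N * c) + f

def dof_limits(f, N, s, c=0.03):
    """(near, far) limits of acceptable sharpness for focus distance s.
    far is infinite when s is at or beyond the hyperfocal distance."""
    H = hyperfocal(f, N, c)
    near = s * (H - f) / (H + s - 2 * f)
    far = float("inf") if s >= H else s * (H - f) / (H - s)
    return near, far

# Example: 50 mm lens at f/8, focused at 5 m (5000 mm):
near, far = dof_limits(50.0, 8.0, 5000.0)
print(round(near), round(far))  # roughly 3.4 m to 9.5 m
```

Note how the near limit sits closer to the focus distance than the far limit: roughly one third of the total depth of field lies in front of the subject at moderate distances.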

43. ARCHIVAL DIGITAL PHOTO BACKUP

Backing up your photos so that they last for 100 years is no longer as simple as having an archival print made and stored in a safe frame. Modern digital images and scans require an intimate understanding of topics such as file format, data degradation, media type and ever-changing storage technologies. This tutorial summarizes the best strategies in three stages -- what to store, how and where to back up, and what to do once everything's archived -- so that you can be confident your photos will stand the test of time.

Old Photo, circa 1890    New Photograph, circa 2008

Unfortunately, the photo on the right will not necessarily last as long as the one on the left did. However, if the necessary precautions are taken, not only will the photo on the right be preserved, but it also won't be subject to the gradual fading and deterioration of the photo from 1890.

ARCHIVAL FILE FORMATS FOR PHOTO STORAGE

Here's a topic that keeps many photographers up at night: how can you be truly sure that the photos you are saving will be readable on computers 10, 50 or 100 years down the road, with vastly different technology? Will Canon, Nikon, Sony or another camera manufacturer's proprietary RAW format still have full software support, and will the images be reproduced exactly as before when loaded? The chosen file type is therefore an important first consideration when backing up archives of your photos. The table below compares the most common file formats:

Archival File Format      Size      Quality   Software Compatibility
JPEG                      Smallest  Lowest    Excellent
TIFF (8-bit)              Medium    Medium    Excellent
TIFF (16-bit)             Largest   High      Excellent
RAW (CR2, NEF, etc)       Large     Highest   Good now, questionable years later
DNG                       Large     Highest   Moderate now, excellent years later (in theory)

If you already have a lot of photos taken in JPEG, then the choice of what format to store them in is easy: leave them as JPEG files. JPEG has become a near standard for images on the internet, so JPEG files are by far the most likely to be widely supported many years down the road.

TIFF files are a close second to JPEG when it comes to compatibility, but are much higher quality because they do not use JPEG's lossy image compression. For many, TIFF achieves an optimal balance. However, TIFF files either preserve much less about the original photo (if the bit depth is 8-bit), or are even larger than RAW files despite preserving a little less of the original image (if the bit depth is 16-bit).

RAW files are certainly the best when it comes to preserving what was originally captured, and for future photos it's highly advised that you shoot in RAW if your camera supports it. However, nearly every camera has a slightly different RAW file, and companies go in and out of existence (remember the once dominant Kodak?), so it's highly unlikely that general software 10-20 years later will be able to open every one of these file types correctly. RAW file backup therefore leaves two options: (i) to convert the files to some other format, or (ii) to backup the RAW files in their native format until some later date, when you start to notice compatibility issues and a suitable replacement format exists.

Many feel that a suitable format already exists: the Digital Negative (DNG) file format, which was created by Adobe to address many of the problems associated with longer term archival storage. DNG aims to combine the compatibility advantages of the TIFF and JPEG file formats with the quality and efficiency advantages of your camera's original RAW files, while still being smaller than 16-bit TIFF files. It is an open standard and royalty free, so you can be sure the files can be more easily and universally opened in the future. With the exception of Adobe software, however, support is still not as universal as one would like it to be for a format that aims to be archival (although this is rapidly changing). Further, even DNG is not future-proof: DNG itself has version numbers, and DNG is helpless if sensor technologies change dramatically, as discussed later.

this is one area where changing technology can mean you'd like to rework certain images using the latest software and techniques. It can often be difficult to tell which longevity category your media purchase falls under. how can we be sure that these files will later be accessible on our chosen backup device or media? Remember 5. Consumer models are less common. Over time they can gradually demagnetize. and library files (for Aperture). using and storing adjustment layers is also a great way to avoid multiple intermediate files for each edit. storage technologies have been increasing in capacity exponentially. are quite fast. tapes and hard drives eventually become demagnetized. CHOOSING PHOTO BACKUP MEDIA Even if we use a compatible file format. In Photoshop. and (ii) they do not require an internal motor. Fortunately. all data degrades over time. the amount of work required to transfer these will not necessarily increase each time you need to do so. but do not actually apply them as an additional saved TIFF file. All of these processes are inevitable. Further. Probably their biggest drawback is inconsistency. some tapes are much more vulnerable to humidity. which is something that DNG does not address. even though it would be easily visible as a colored streak in a print. The chemicals in a DVD disk gradually decompose. CD. Every few years it's a good idea to convert file types that are in danger of becoming obsolete. Overall. Pay attention to the type of dyes used (blue. PRESERVING IMAGE INTEGRITY No matter what the backup media. The best way to conserve storage space is to save file types that preserve the editing steps. and thus have no risk of not spinning up for access (unlike hard drives).and one would expect that 10 or more of these could easily be combined into just one of the next storage technology. External hard drives. etc). and so on. catalog files (for Lightroom). and permit the backed up data to be immediately accessible and modifiable. 
This is a little unsettling considering that most people have hundreds if not many thousands of photographs. Fortunately. Unfortunately. silver. and today is only really used for large corporate backups. Hard drives can store a tremendous amount of information in a small area. it can be expensive to recover). and errors can occur each time you copy your images from one location to another. This means that even if you continue to accrue photos. and they haven't quite kept pace with the storage density progress that's been made with hard drives. such as XMP sidecar files (for Camera RAW). water damage and other external factors than are external hard drives or other removable media. Another concern would be if the eSATA. The following is a real-world example of what it can look like when a photograph becomes corrupted: Noticing the above flaw requires zooming in 100% and inspecting specific regions of the photograph. . Their biggest advantages are that (i) they are very inexpensive for high volume backups. to online accelerated aging tests. RAW conversion software often has the ability to store the settings you used to convert the RAW file. PSD or other files can quickly become extremely large and unmanageable. and flash memory can lose its charge. while once the "go to" method of archiving data. They have the advantage of being reasonably inexpensive and broadly compatible. but the biggest concern is that they may not spin up because their internal motor has failed (although no data is lost. or other removable media has been the primary method of consumer backup for quite some time. many of the formats used for storing edited photos are also subject to future compatibility issues. There's often a dramatic difference in longevity between one brand and another. identifying each and every corrupt image would clearly be unrealistic. while others claim a lifetime of 50-100 years. Unfortunately. is becoming increasingly marginalized. 
some removable media lasts only 5-10 years. Just make sure that you also have an archived version of the unaltered original photo. and to reports of issues with a particular model/batch. so your old 10 photo DVD's can be combined into just one Blu-Ray or a fraction of an external hard drive -.just in case a file can only be loaded on one of these older computer setups. while a newcomer on the backup scene. Multiple 16-bit TIFF.Another consideration is how to store various edited versions of your files. Tape backup. have made great progress since they've dropped in price tremendously over the past several years. the only fail-proof solution is to keep your data up-to-date. gold. amongst others. the only future-proof solution is to migrate your data over to the latest technology every 3-5 years.25 inch floppy disks? In fact. USB or firewire connector becomes obsolete. DVD. the US Federal Government is so concerned about this topic that they house and maintain computers at various stages of advancement -. Blu-Ray. Do not assume that all writable media is created equal.

Further, image corruptions become replicated in each subsequent backup copy, and would go unnoticed until a print were made many years later. Storing checksum or other data verification files is the only way to systematically spot these problems before they permanently alter your photo archives. That is the only reason the photo in the above example (and others) was identified before it became a problem.

The following chart outlines some of the most common techniques for preventing, verifying and repairing corrupt photographs:

Type                        Primary Use     How It Works
RAID 1, 5 or 10             Prevention      An array of disk drives with fault protection in case one of your drives fails.
SFV or MD5 Checksum Files   Verification    Verify that a file or copy is identical to its original.
Parity or Recovery Files    Repair          A storage technique that employs parity to repair minor damage without a full duplicate of the original.

PREVENTION
A RAID 1, 5 or 10 is an array of disk drives with fault protection in case one of your drives fails. RAID comes in many varieties: RAID 1 is effectively two disks containing identical data at all times; RAID 5 is three or more disks where one drive contains parity data, and these can continue to operate even if a drive fails; RAID 10 requires four drives, and is similar to RAID 1 except it improves performance by simultaneously reading/writing to multiple drives. RAID 0 should not be used with critical data since it increases the failure rate in exchange for better performance. Not having RAID means that there's no protection against losing intermediate edited files on your computer, but these are usually much less important than the unaltered originals. RAID arrays can also substantially increase costs, since they require additional disk drives and a RAID controller. If you routinely work with very important photographs, the best protection is achieved by using RAID while editing on your computer, and storing MD5/SFV checksum and parity files along with your archived photographs.

VERIFICATION
Checksum files verify that a file or copy is identical to its original. They are effectively digital fingerprints, which are created based on every 0 and 1 in a digital file. When even one bit of the file changes, it's almost guaranteed that the fingerprint won't match. However, that's all they do: inform you when there was an error.

Technical Notes: A checksum is a digital fingerprint that verifies the integrity/identity of a file. There are other checksum file types available, but SFV and MD5 are currently the most widely supported. SFV stands for "Simple File Verification," and contains a list of checksums corresponding to a list of files. CRC checksums (as used by SFV) are much quicker to calculate than their equivalent MD5 checksums, but MD5 checksums are also more sensitive to file changes. MD5 checksums were created to not just verify the integrity of a file, but to also verify its authenticity (that no person had intentionally modified a file). There are far too many programs that can read or create SFV and MD5 files to list here; a quick search engine query will yield several free options.

REPAIR
Parity files can be used to repair minor damage without requiring a full duplicate of the original. They store carefully chosen redundant information about a file; if part of the original becomes corrupt, the parity file can be used along with the surviving portions of the corrupt file to re-create the original data without losing any information. However, parity files take up increasingly more space if you want to recover files which are more badly damaged.

Technical Notes: A full explanation of how parity data works is beyond the scope of this article.

WHERE TO STORE YOUR PHOTO ARCHIVES
The best location to store your archival photo backups is in a cool, dry place with a reasonably constant environment and minimal need for movement. If there's a chance of humidity, be sure to seal the media in a plastic bag prior to storage. However, unforeseen accidents such as theft and fire can occur, so any fail-proof backup strategy should make use of multiple backup locations. This could mean having a duplicate archive in a safety deposit box, at a friend or family's household, or at some remote online server. Depending on the size and quantity of your photos, and if your internet connection is fast, backups can even be transferred regularly and systematically via FTP. Some even treat online photo sharing sites as backup locations, although this is not an option for true digital negatives such as RAW files, since they cannot be displayed as is.

Regardless, it's important to keep your data "fresh" by copying it to some other media after 5-10 years, even if the file format or media isn't in danger of becoming obsolete.

A simpler solution would be to store two backup copies immediately after the photo is captured; if you ever identify a corrupt file, then the other backup copy can be used as a replacement. This way you do not need to worry about complicated RAID or parity files, but you will still need to store SFV or MD5 checksum files along with each archived photo.

Try and stick to a regular backup schedule with an easy to follow naming convention. After all, if you cannot find a photo once it's been archived then it's as good as lost.
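To make the checksum idea concrete, here is a minimal Python sketch of generating and verifying MD5 and SFV-style CRC-32 fingerprints. The function names are illustrative (not from any particular checksum tool); real SFV/MD5 utilities additionally manage manifest files listing many photos at once.

```python
import hashlib
import zlib

def md5_fingerprint(path, chunk_size=65536):
    """MD5 digest computed from every 0 and 1 in the file;
    flipping even a single bit changes the result."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def crc32_fingerprint(path, chunk_size=65536):
    """CRC-32 value of the kind stored in SFV files: quicker to
    compute than MD5, but a weaker fingerprint."""
    value = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.crc32(chunk, value)
    return f"{value & 0xFFFFFFFF:08X}"

def verify_archive(path, expected_md5):
    """True if the archived file still matches its recorded fingerprint."""
    return md5_fingerprint(path) == expected_md5
```

Recording the fingerprint at backup time and re-running the verification before each media refresh is enough to catch silent corruption before it replicates into the next backup generation.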

44.MINIMIZING RISK OF ACCIDENTAL DELETION
OK, so we've now gone to great lengths to ensure that (1) the file format will be readable, (2) the backup media will be loadable and (3) the accuracy of each photo will be preserved identically. However, what's preventing someone from mistakenly deleting or overwriting some of your photo archive? Of course, clearly labeling your media is a must, but it might also be a good idea to make the archived photos read only, and to password-protect the photos folder and/or media. Adding a password is a double edged sword, because it means there's always the possibility of forgetting the password. If this is a concern, then simply use a password of "password", since the purpose is to add another barrier to inadvertent deletion as opposed to preventing unauthorized access.

SUMMARY OF ARCHIVAL PHOTO BACKUP OPTIONS
Photographers can be loosely grouped into one of two categories:

(1) Casual Photographer: generally takes snapshots to record events and other get togethers; not overly concerned with making large prints, but wants their collections to be preserved for later generations. Often uses a compact digital camera or camera phone, and photos are rarely post-processed. Usually takes JPEG photographs to simplify image sharing and printing.

Backup Strategy: Casual photographers should try and save their JPEG files using the highest quality "superfine" (or similar) setting to minimize image compression artifacts. Each batch of photos should be backed up in two copies on removable media, ideally with SFV or MD5 checksum files to identify if any image later becomes corrupt. Archived photos should be transferred over to new media every 5 years to keep the storage technology up to date, and to prevent corrupt images by keeping the data fresh.

(2) Discerning Photographer: takes a variety of photos, possibly including very memorable events or work to be made into large prints. Maximal preservation of each scene is of high importance, because any added trouble in post-processing is worth it in exchange for knowing they can make the most out of any print if they happen to capture a special moment. Often uses a digital SLR camera, or a compact digital camera when weight/space are important. Usually takes RAW photographs.

Backup Strategy: Discerning photographers should always save their photos using their camera's RAW file format; otherwise unaltered photos should be backed up immediately after capture, and all images should be stored along with SFV or MD5 checksum files and parity information, just in case a repair is needed. RAW files should either be converted to the DNG format prior to archiving or saved in their native format. Archived RAW or DNG files should be converted to some other format every 3-5 years to maintain software compatibility, and each of these backups should be on new media using the latest storage technology to keep their data fresh. Each batch of photo backups should be written to at least two media, and when possible each set of backups should be stored in a different physical building. Any photo editing should ideally occur on a computer with duplicate hard drives in RAID 1. To save storage space, edited versions of photos should be stored as processing steps (such as in XMP catalog files) as opposed to separate TIFF files.

45.TUTORIALS: COLOR PERCEPTION
Color can only exist when three components are present: a viewer, an object, and light. Although pure white light is perceived as colorless, it actually contains all colors in the visible spectrum. When white light hits an object, some colors are selectively blocked and others reflected; only the reflected colors contribute to the viewer's perception of color.

HUMAN COLOR PERCEPTION: OUR EYES & VISION
The human eye senses this spectrum using a combination of rod and cone cells for vision. Rod cells are better for low-light vision, but can only sense the intensity of light, whereas cone cells can also discern color, although they function best in bright light. Three types of cone cells exist in your eye, each being more sensitive to either short (S), medium (M), or long (L) wavelength light. The set of signals possible at all three cone cells describes the range of colors we can see with our eyes. The example below illustrates the relative sensitivity of each type of cone cell for the entire visible spectrum from ~400 nm to 700 nm.

Select View: Cone Cells Luminosity

Raw data courtesy of the Colour and Vision Research Laboratories (CVRL), UCL.

Note how each type of cell does not just sense one color, but instead has varying degrees of sensitivity across a broad range of wavelengths. Move your mouse over "luminosity" to see which colors contribute the most towards our perception of brightness. Also note how human color perception is most sensitive to light in the yellow-green region of the spectrum; this is utilized by the bayer array in modern digital cameras.

ADDITIVE & SUBTRACTIVE COLOR MIXING
Virtually all our visible colors can be produced by utilizing some combination of the three primary colors, either by additive or subtractive processes. Additive processes create color by adding light to a dark background, whereas subtractive processes use pigments or dyes to selectively block white light. A proper understanding of each of these processes creates the basis for understanding color reproduction.

Additive          Subtractive

The colors in the three outer circles are termed primary colors, and are different in each of the above diagrams. Devices which use these primary colors can produce the maximum range of color. Monitors release light to produce additive colors, whereas printers use pigments or dyes to absorb light and create subtractive colors. This is why nearly all monitors use a combination of red, green and blue (RGB) pixels, whereas most color printers use at least cyan, magenta and yellow (CMY) inks. Many printers also include black ink in addition to cyan, magenta and yellow (CMYK), because CMY alone cannot produce deep enough shadows.

Additive Color Mixing (RGB Color)
Red + Green          —> Yellow
Green + Blue         —> Cyan
Blue + Red           —> Magenta
Red + Green + Blue   —> White

Subtractive Color Mixing (CMYK Color)
Cyan + Magenta            —> Blue
Magenta + Yellow          —> Red
Yellow + Cyan             —> Green
Cyan + Magenta + Yellow   —> Black

Subtractive processes are more susceptible to changes in ambient light, because this light is what becomes selectively blocked to produce all their colors. This is why printed color processes require a specific type of ambient lighting in order to accurately depict colors.
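The relationship between the two sets of primaries can be expressed numerically. Below is a minimal sketch using idealized 0-255 arithmetic; the function names are illustrative, and real printer color conversion relies on measured device profiles rather than this simple complement rule.

```python
def rgb_to_cmy(r, g, b):
    """Each subtractive primary is the complement of an additive one:
    an ink's job is to absorb its complementary light."""
    return (255 - r, 255 - g, 255 - b)

def cmy_to_cmyk(c, m, y):
    """Pull the shared gray component out into black ink (K), since
    CMY alone cannot produce deep enough shadows."""
    k = min(c, m, y)
    if k == 255:                       # pure black: use only the K ink
        return (0, 0, 0, 255)
    scale = 255 / (255 - k)
    return (round((c - k) * scale),
            round((m - k) * scale),
            round((y - k) * scale), k)
```

For example, pure red maps to full magenta plus full yellow ink, mirroring the subtractive mixing table above.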

COLOR PROPERTIES: HUE & SATURATION
Color has two unique components that set it apart from achromatic light: hue and saturation. Visually describing a color based on each of these terms can be highly subjective; however, each can be more objectively illustrated by inspecting the light's color spectrum.

Naturally occurring colors are not just light at one wavelength, but actually contain a whole range of wavelengths. A color's "hue" describes which wavelength appears to be most dominant. The object whose spectrum is shown below would likely be perceived as bluish, even though it contains wavelengths throughout the spectrum. Although this spectrum's maximum happens to occur in the same region as the object's hue, it is not a requirement. If this object instead had separate and pronounced peaks in just the red and green regions, then its hue would instead be yellow (see the additive color mixing table).

A color's saturation is a measure of its purity. A highly saturated color will contain a very narrow set of wavelengths and appear much more pronounced than a similar, but less saturated color. The following example illustrates the spectrum for both a highly saturated and less saturated shade of blue.

Select Saturation Level: Low High

BASICS OF DIGITAL CAMERA PIXELS
The continuous advance of digital camera technology can be quite confusing because new terms are constantly being introduced. This tutorial aims to clear up some of this digital pixel confusion-- particularly for those who are either considering or have just purchased their first digital camera. Concepts such as sensor size, megapixels, dithering and print size are discussed.

THE PIXEL: A FUNDAMENTAL UNIT FOR ALL DIGITAL IMAGES
Every digital image consists of a fundamental small-scale descriptor: THE PIXEL, invented by combining the words "PICture ELement." Just as pointillist artwork uses a series of paint blotches, millions of pixels can combine to create a detailed and seemingly continuous image.

Move mouse over each to select: Pointillism (Paint Blotches) Pixels

Each pixel contains a series of numbers which describe its color or intensity. The precision to which a pixel can specify color is called its bit or color depth. The more pixels your image contains, the more detail it has the ability to describe. Note how I wrote "has the ability to"; just because an image has more pixels does not necessarily mean that these are fully utilized. This concept is important and will be discussed more later.

Since a pixel is just a logical unit of information, it is useless for describing real-world prints-- unless you also specify their size. The terms pixels per inch (PPI) and dots per inch (DPI) were both introduced to relate this theoretical pixel unit to real-world visual resolution. These terms are often inaccurately interchanged (particularly with inkjet printers)-- misleading the user about a device's maximum print resolution. "Pixels per inch" is the more straightforward of the two terms. It describes just that: how many pixels an image contains per inch of distance in the horizontal and vertical directions. "Dots per inch" may seem deceptively simple at first. The complication arises because a device may require multiple dots in order to create a single pixel; therefore a given number of dots per inch does not always lead to the same resolution. Using multiple dots to create each pixel is a process called "dithering."

A device with a limited number of ink colors can play a trick on the eye by arranging these into small patterns-- thereby creating the perception of a different color if each "sub-pixel" is small enough. The example above uses 128 pixel colors, however the dithered version creates a nearly identical looking blend of colors (when viewed in its original size) using only 24 colors. There is one critical difference: each color dot in the dithered image has to be much smaller than the individual pixel. As a result, images almost always require more DPI than PPI in order to achieve the same level of detail. PPI is also far more universal because it does not require knowledge of the device to understand how detailed the print will be. The standard for prints done in a photo lab is about 300 PPI, however inkjet printers require several times this number of DPI (depending on the number of ink colors) for photographic quality. It also depends on the application; magazine and newspaper prints can get away with much less than 300 PPI. The more you try to enlarge a given image, the lower its PPI will become (assuming the same number of pixels).
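The idea of dithering can be made concrete with a minimal sketch of ordered dithering down to a two-"ink" device (black dots on white paper). The 2x2 threshold matrix used here is the classic Bayer dithering matrix, which is unrelated to the Bayer color filter array; real printers use larger matrices or error-diffusion methods.

```python
# 2x2 ordered-dither threshold matrix.
BAYER_2X2 = [[0, 2],
             [3, 1]]

def dither_to_black_and_white(gray, width, height):
    """Approximate a grayscale image (values 0-255) using only two
    output levels, arranging the dots so that, viewed from afar,
    their average matches the original tone."""
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            # Each position in the repeating pattern gets its own threshold.
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * 255
            row.append(255 if gray[y][x] > threshold else 0)
        out.append(row)
    return out
```

A flat mid-gray region comes out as a checkerboard of half-on, half-off dots, which is exactly why each dot must be much smaller than a pixel for the trick to be invisible.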

A "megapixel" is simply a unit of a million pixels. If you require a certain resolution of detail (PPI), then there is a maximum print size you can achieve for a given number of megapixels. The following chart gives the maximum 200 and 300 PPI print sizes for several common camera megapixels.
# of Megapixels   Maximum 3:2 Print Size at 300 PPI   at 200 PPI
2                 5.8" x 3.8"                         8.7" x 5.8"
3                 7.1" x 4.7"                         10.6" x 7.1"
4                 8.2" x 5.4"                         12.2" x 8.2"
5                 9.1" x 6.1"                         13.7" x 9.1"
6                 10.0" x 6.7"                        15.0" x 10.0"
8                 11.5" x 7.7"                        17.3" x 11.5"
12                14.1" x 9.4"                        21.2" x 14.1"
16                16.3" x 10.9"                       24.5" x 16.3"
22                19.1" x 12.8"                       28.7" x 19.1"

Note how a 2 megapixel camera cannot even make a standard 4x6 inch print at 300 PPI, while a whopping 16 megapixels are required for a 16x10 inch photo at that resolution. This may be discouraging, but do not despair! Many will be happy with the sharpness provided by 200 PPI, and an even lower PPI may suffice if the viewing distance is large (see "Digital Photo Enlargement"). Many wall posters assume that you will not be inspecting them from 6 inches away, and so these are often printed at less than 200 PPI.
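The chart values can be reproduced with a little arithmetic. A minimal sketch, assuming square pixels and an exact aspect ratio:

```python
import math

def max_print_size(megapixels, ppi, aspect=(3, 2)):
    """Largest print, in inches, supported by a pixel count at a
    target print resolution (PPI)."""
    long_side, short_side = aspect
    pixels = megapixels * 1_000_000
    # Find the scale k such that (long_side*k) * (short_side*k) = pixels.
    k = math.sqrt(pixels / (long_side * short_side))
    return (long_side * k / ppi, short_side * k / ppi)
```

For example, `max_print_size(2, 300)` gives roughly 5.8" x 3.8", matching the first row of the chart; halving the PPI to 200 enlarges the same pixels to about 8.7" x 5.8".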

The print size calculations above assumed that the camera's aspect ratio, or ratio of longest to shortest dimension, is the standard 3:2 used for 35 mm cameras. In fact, most compact cameras, monitors and TV screens have a 4:3 aspect ratio, while most digital SLR cameras are 3:2. Many other types exist though: some high end film equipment even uses a 1:1 square image, and DVD movies use an elongated 16:9 ratio.

This means that if your camera uses a 4:3 aspect ratio, but you need a 4 x 6 inch (3:2) print, then 11% of your megapixels will be wasted when the image is cropped to fit. This should be considered if your camera has a different aspect ratio than the desired print dimensions.
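The 11% figure follows directly from the two ratios; a short sketch of the calculation:

```python
def crop_waste_fraction(sensor_aspect, print_aspect):
    """Fraction of pixels discarded when cropping an image of one
    aspect ratio down to fit another."""
    sw, sh = sensor_aspect
    pw, ph = print_aspect
    s = sw / sh
    p = pw / ph
    # The less elongated shape must lose pixels along its longer axis.
    return 1 - min(s, p) / max(s, p)
```

Here `crop_waste_fraction((4, 3), (3, 2))` evaluates to 1 - (4/3)/(3/2), about 0.11, i.e. the 11% quoted above.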

Pixels themselves can also have their own aspect ratio, although this is less common. Certain video standards and earlier Nikon cameras have pixels with skewed dimensions.

Even if two cameras have the same number of pixels, it does not necessarily mean that the sizes of their pixels are also equal. The main distinguishing factor between a more expensive digital SLR and a compact camera is that the former has a much greater digital sensor area. This means that if both an SLR and a compact camera have the same number of pixels, the size of each pixel in the SLR camera will be much larger.
Why should one care about how big the pixels are? A larger pixel has more light-gathering area, which means the light signal is stronger over a given interval of time.

Compact Camera Sensor

This usually results in an improved signal to noise ratio (SNR), which creates a smoother and more detailed image. Furthermore, the dynamic range of the images (range of light to dark which the camera can capture without becoming either black or clipping highlights) also increases with larger pixels. This is because each pixel well can contain more photons before it fills up and becomes completely white.
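The pixel-size advantage can be illustrated with the statistics of photon arrival. This is a simplified model (my own naming) that considers only photon shot noise and ignores read noise and quantum efficiency:

```python
import math

def photon_shot_noise_snr(photon_flux, pixel_area, exposure_time):
    """Photon arrival is Poisson-distributed, so noise = sqrt(signal)
    and SNR grows as the square root of the photons collected."""
    photons = photon_flux * pixel_area * exposure_time
    return math.sqrt(photons)
```

Quadrupling the light-gathering area of a pixel collects four times the photons but only doubles the noise, so the signal to noise ratio doubles.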

SLR Camera Sensor

The diagram below illustrates the relative size of several standard sensor sizes on the market today. Most digital SLRs have either a 1.5X or 1.6X crop factor (compared to 35 mm film), although some high-end models actually have a digital sensor with the same area as 35 mm film. Sensor size labels given in inches do not reflect the actual diagonal size, but instead the approximate diameter of the "imaging circle" (which is not fully utilized). Nevertheless, this number is listed in the specifications of most compact cameras.

Why not just use the largest sensor possible? The main disadvantage of a larger sensor is that it is much more expensive, so bigger is not always beneficial.

Other factors are beyond the scope of this tutorial, however more can be read on the following points: larger sensors require smaller apertures in order to achieve the same depth of field, however they are also less susceptible to diffraction at a given aperture.

Does all this mean it is bad to squeeze more pixels into the same sensor area? Doing so will usually produce more noise, but only when the image is viewed at 100% on your computer monitor. In an actual print, the higher megapixel model's noise will be much more finely spaced-- even though it appears noisier on screen (see "Image Noise: Frequency and Magnitude"). This advantage usually offsets any increase in noise when moving to a higher megapixel model (with a few exceptions).

46.TUTORIALS: BIT DEPTH Bit depth quantifies how many unique colors are available in an image's color palette in terms of the number of 0's and 1's, or "bits," which are used to specify each color. This does not mean that the image necessarily uses all of these colors, but that it can instead specify colors with that level of precision. For a grayscale image, the bit depth quantifies how many unique shades are available. Images with higher bit depths can encode more shades or colors since there are more combinations of 0's and 1's available.

Every color pixel in a digital image is created through some combination of the three primary colors: red, green, and blue. Each primary color is often referred to as a "color channel" and can have any range of intensity values specified by its bit depth. The bit depth for each primary color is termed the "bits per channel." The "bits per pixel" (bpp) refers to the sum of the bits in all three color channels and represents the total colors available at each pixel. Confusion arises frequently with color images because it may be unclear whether a posted number refers to the bits per pixel or bits per channel. Using "bpp" as a suffix helps distinguish these two terms.

Most color images from digital cameras have 8-bits per channel, and so each channel can use a total of eight 0's and 1's. This allows for 2^8 or 256 different combinations-- translating into 256 different intensity values for each primary color. When all three primary colors are combined at each pixel, this allows for as many as 2^(8x3) or 16,777,216 different colors, or "true color." This is referred to as 24 bits per pixel since each pixel is composed of three 8-bit color channels. The number of colors available for any X-bit image is just 2^X if X refers to the bits per pixel, and 2^(3X) if X refers to the bits per channel.
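The arithmetic above can be sketched directly:

```python
def colors_available(bits, per="pixel"):
    """Number of distinct colors for a given bit depth, which may be
    quoted per pixel (bpp) or per channel."""
    bits_per_pixel = bits if per == "pixel" else 3 * bits
    return 2 ** bits_per_pixel
```

So an 8-bit grayscale-style palette offers 256 values, while 8 bits per channel (24 bpp) yields the 16,777,216 colors of "true color."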

The following table illustrates different image types in terms of bits (bit depth), total colors available, and common names.
Bits Per Pixel   Number of Colors Available   Common Name(s)
1                2                            Monochrome
2                4                            CGA
4                16                           EGA
8                256                          VGA
16               65,536                       XGA, High Color
24               16,777,216                   SVGA, True Color
32               16,777,216 + Transparency
48               281 Trillion

By moving your mouse over any of the labels below, the image will be re-displayed using the chosen amount of colors. The difference between 24 bpp and 16 bpp is subtle, but will be clearly visible if you have your display set to true color or higher (24 or 32 bpp).
24 bpp 16 bpp 8 bpp

• The human eye can only discern about 10 million different colors, so saving an image in any more than 24 bpp is excessive if the only intended purpose is for viewing. On the other hand, images with more than 24 bpp are still quite useful since they hold up better under post-processing (see "Posterization Tutorial").
• Color gradations in images with less than 8-bits per color channel can be clearly seen in the image histogram.
• The available bit depth settings depend on the file type. Standard JPEG and TIFF files can only use 8-bits and 16-bits per channel, respectively.

47.IMAGE TYPES: JPEG & TIFF FILES Knowing which image type to use ensures you can make the most of your digital photographs. Some image types are best for getting an optimal balance of quality and file size when storing your photos, while other image types enable you to more easily recover from a bad photograph. Countless image formats exist and new ones are always being added; in this section we will focus on two of the three formats most relevant to digital photography: JPEG and TIFF. The RAW file format is covered in a separate tutorial.

An important concept which distinguishes many image types is whether they are compressed. Compressed files are significantly smaller than their uncompressed counterparts, and fall into two general categories: "lossy" and "lossless." Lossless compression ensures that all image information is preserved, even if the file size is a bit larger as a result. Lossy compression, by contrast, can create file sizes that are significantly smaller, but achieves this by selectively discarding image data. The resulting compressed file is therefore no longer identical to the original. Visible differences between these compressed files and their original are termed "compression artifacts."

JPEG stands for "Joint Photographic Experts Group" and, as its name suggests, was specifically developed for storing photographic images. It has also become a standard format for storing images in digital cameras and displaying photographic images on internet web pages. JPEG files are significantly smaller than those saved as TIFF, however this comes at a cost since JPEG employs lossy compression. A great thing about JPEG files is their flexibility: the JPEG file format is really a toolkit of options whose settings can be altered to fit the needs of each image. JPEG files achieve a smaller file size by compressing the image in a way that retains the detail which matters most, while discarding details deemed to be less visually impactful. JPEG does this by taking advantage of the fact that the human eye notices slight differences in brightness more than slight differences in color. The amount of compression achieved is therefore highly dependent on the image content; images with high noise levels or lots of detail will not be as easily compressed, whereas images with smooth skies and little texture will compress very well.
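JPEG's exploitation of the brightness-versus-color sensitivity starts by separating the two. A sketch of the standard ITU-R BT.601 RGB-to-YCbCr conversion that JPEG typically uses; the full codec then subsamples the Cb/Cr channels and quantizes frequency coefficients, which is beyond this sketch:

```python
def rgb_to_ycbcr(r, g, b):
    """Split a color into luma (Y, brightness) and two chroma channels
    (Cb, Cr). Y is kept at full resolution, while Cb and Cr can be
    stored at lower resolution because the eye is less sensitive to
    small color differences."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return (y, cb, cr)
```

Neutral tones (equal R, G and B) land at the chroma midpoint of 128, so all their information sits in the luma channel, which JPEG preserves most carefully.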

Image with Fine Detail (Less Effective JPEG Compression)
Image without Fine Detail (More Effective JPEG Compression)

It is also helpful to get a visual intuition for how varying degrees of compression impact the quality of your image. At 100%, you will barely notice any difference between the compressed and uncompressed image below. As the compression quality decreases, the JPEG algorithm is forced to sacrifice the quality of more and more visually prominent textures in order to continue decreasing the file size. Notice how the JPEG algorithm prioritizes prominent high-contrast edges at the expense of more subtle textures. For images with substantial fine detail, the JPEG algorithm will also produce larger and larger files at the same compression level.

Choose Compression Quality: 100% 80% 60% 30% 10%
ORIGINAL IMAGE   COMPRESSED IMAGE

USEFUL TIPS
• Only save an image using lossy compression once all other image editing has been completed, since many image manipulations can amplify compression artifacts.
• Avoid compressing a file multiple times, if at all possible, since compression artifacts may accumulate and progressively degrade the image.
• Ensure that image noise levels are as low as possible, since this will produce dramatically smaller JPEG files.

TIFF FILE FORMAT
TIFF stands for "Tagged Image File Format" and is a standard in the printing and publishing industry. TIFF files are significantly larger than their JPEG counterparts, and can be either uncompressed or compressed using lossless compression. Unlike JPEG, TIFF files can have a bit depth of either 16-bits per channel or 8-bits per channel, and multiple layered images can be stored in a single TIFF file. TIFF files are an excellent option for archiving intermediate files which you may edit later, since the format introduces no compression artifacts. Many cameras have an option to create images as TIFF files, but these can consume excessive space compared to the same JPEG file. If your camera supports the RAW file format, this is a superior alternative, since RAW files are significantly smaller and can retain even more information about your image.

48.RAW FILE FORMAT
The RAW file format is digital photography's equivalent of a negative in film photography: it contains untouched, "raw" pixel information straight from the digital camera's sensor. The RAW file format has yet to undergo demosaicing, and so it contains just one red, green, or blue value at each pixel location. Digital cameras normally "develop" this RAW file by converting it into a full color JPEG or TIFF image file, and then store the converted file in your memory card. Digital cameras have to make several interpretive decisions when they develop a RAW file, and so the RAW file format offers you more control over how the final JPEG or TIFF image is generated. This section aims to illustrate the technical advantages of RAW files, and makes suggestions about when to use the RAW file format.

OVERVIEW
A RAW file is developed into a final JPEG or TIFF image in several steps, each of which may contain several irreversible image adjustments. One key advantage of RAW is that it allows the photographer to postpone applying these adjustments-- giving more flexibility to later apply them in a way which best suits each image. The following diagram illustrates the sequence of adjustments:

Demosaicing —> White Balance —> Tone Curves —> Contrast —> Color Saturation —> Sharpening —> Conversion to 8-bit —> JPEG Compression

Demosaicing and white balance involve interpreting and converting the bayer array into an image with all three colors at each pixel, and occur in the same step. The bayer array is what makes the first image appear more pixelated than the other two, and gives the image a greenish tint.

In order for the numbers recorded within a digital camera to be shown as we perceive them, tone curves need to be applied. Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly: twice the light intensity produces twice the response in the camera sensor. This is why the first and second images above look so much darker than the third. Color saturation and contrast may also be adjusted, depending on the setting within your camera. The image is then sharpened to offset the softening caused by demosaicing, which is visible in the second image. The high bit depth RAW image is then converted into 8-bits per channel, and compressed into a JPEG based on the compression setting within your camera. Up until this step, RAW image information most likely resided within the digital camera's memory buffer.

There are several advantages to performing any of the above RAW conversion steps afterwards on a personal computer, as opposed to within a digital camera. The next sections describe how using RAW files can enhance these RAW conversion steps.

DEMOSAICING
Demosaicing is a very processor-intensive step, and so the best demosaicing algorithms require more processing power than is practical within today's digital cameras. Most digital cameras therefore take quality-compromising shortcuts to convert a RAW file into a TIFF or JPEG in a reasonable amount of time. Performing the demosaicing step on a personal computer allows for the best algorithms, since a PC has many times more processing power than a typical digital camera. Better algorithms can squeeze a little more out of your camera sensor by producing more resolution, less noise, better small-scale color accuracy and reduced moiré. Note the resolution advantage shown below:

JPEG (in-camera)   RAW   Ideal
Images from actual camera tests with a Canon EOS 20D using an ISO 12233 resolution test chart. The differential between RAW and JPEG resolution may vary with camera model and conversion software.

The in-camera JPEG image is not able to resolve lines as closely spaced as those in the RAW image. Even so, a RAW file cannot achieve the ideal lines shown, because the process of demosaicing always introduces some softening to the image. Only sensors which capture all three colors at each pixel location could achieve the ideal image shown at the bottom (such as Foveon-type sensors).
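The tone-curve step described in the overview can be sketched with a simple gamma curve. A gamma of 1/2.2 is a common approximation only; actual in-camera tone curves are more elaborate and vary by manufacturer.

```python
def apply_tone_curve(linear, gamma=1 / 2.2):
    """Map the sensor's linear 0-255 response onto perceptually spaced
    values: after the curve, quadrupling the captured light looks
    roughly like a doubling in brightness, matching our eyes."""
    return round(255 * (linear / 255) ** gamma)
```

This is why untouched linear data looks so dark: a quarter of full intensity sits at value 64 in linear terms, but maps to roughly the middle of the output range once the curve is applied.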

FLEXIBLE WHITE BALANCE
White balance is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. RAW files give you the ability to set the white balance of a photo *after* the picture has been taken, without unnecessarily destroying bits. Color casts within JPEG images can often be removed in post-processing, but at the cost of bit depth and color gamut. This is because the white balance has effectively been set twice: once in RAW conversion and then again in post-processing.

HIGH BIT DEPTH
Digital cameras actually record each color channel with more precision than the 8-bits (256 levels) per channel used for JPEG images (see "Understanding Bit Depth"). Most current cameras capture each color with 12-bits of precision (2^12 = 4096 levels) per color channel, providing several times more levels than could be achieved by using an in-camera JPEG. Higher bit depth decreases the susceptibility to posterization, and increases your flexibility when choosing a color space and in post-processing.

DYNAMIC RANGE & EXPOSURE COMPENSATION
The RAW file format usually provides considerably more "dynamic range" than a JPEG file, depending on how the camera creates its JPEG. Dynamic range refers to the range of light to dark which can be captured by a camera before becoming completely white or black. Since the raw color data has not been converted into logarithmic values using curves (see overview section above), the exposure of a RAW file can be adjusted slightly after the photo has been taken. Exposure compensation can correct for metering errors, or can help bring out lost shadow or highlight detail. The following example was taken directly into the setting sun, and shows the same RAW file with -1 stop, 0 (no change), and +1 stop exposure compensation, respectively.

Move your mouse over each to see how exposure compensation affects the image:
Apply Exposure Compensation: -1.0 none +1.0

Note: +1 or -1 stop refers to a doubling or halving of the light used for an exposure. A stop can also be listed in terms of eV, and so +1 stop is equivalent to +1 eV.

Note the broad range of shadow and highlight detail across the three images. Similar results could not be achieved by merely brightening or darkening a JPEG file, both in dynamic range and in the smoothness of tones. A graduated neutral density filter (see tutorial on camera lens filters) could then be used to better utilize this broad dynamic range.

ENHANCED SHARPENING
Since a RAW file is untouched, sharpening has not been applied within the camera. Much like demosaicing, better sharpening algorithms are often far more processor intensive. Sharpening performed on a personal computer can thus create fewer halo artifacts for an equivalent amount of sharpening (see "Sharpening Using an Unsharp Mask" for examples of sharpening artifacts). Since sharpening is usually the last post-processing step, and since it cannot be undone, having a pre-sharpened JPEG is not optimal. Furthermore, sharpness depends on the intended viewing distance of your image, and the RAW file format provides more control over what type and how much sharpening is applied (given your purpose).

LOSSLESS COMPRESSION
The RAW file format uses lossless compression, and so it does not suffer from the compression artifacts visible with "lossy" JPEG compression. RAW files contain more information and achieve better compression than TIFF, but without the compression artifacts of JPEG.
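The lossless-versus-lossy distinction is easy to demonstrate: with lossless compression, decompressing returns the exact original bytes. A sketch using zlib as a stand-in for the camera-specific lossless schemes that RAW files actually use:

```python
import zlib

def lossless_roundtrip(data):
    """Compress and decompress without loss: the restored bytes are
    identical to the original, unlike lossy JPEG compression, which
    permanently discards some image data to shrink the file."""
    compressed = zlib.compress(data, level=9)
    restored = zlib.decompress(compressed)
    return compressed, restored
```

Smooth, repetitive data (such as an even sky) compresses well even losslessly, while the guarantee of bit-identical restoration is what makes the format safe for archival "digital negatives."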

storage space and ease of use.

DISADVANTAGES

• RAW files are much larger than similar JPEG files, and so fewer photos can fit within the same memory card.
• RAW files are more time consuming since they may require manually applying each conversion step.
• RAW files often take longer to be written to a memory card since they are larger, therefore most digital cameras may not achieve the same frame rate as with JPEG.
• RAW files cannot be given to others immediately since they require specific software to load them, therefore it may be necessary to first convert them into JPEG.
• RAW files require a more powerful computer with more temporary memory (RAM).

OTHER CONSIDERATIONS

One problem with the RAW file format is that it is not very standardized. Each camera has its own proprietary RAW file format, and so one program may not be able to read all formats. Fortunately, Adobe has announced a digital negative (DNG) specification which aims to standardize the RAW file format. In addition, any camera which has the ability to save RAW files should come with its own software to read them.

Good RAW conversion software can perform batch processes and often automates all conversion steps except those which you choose to modify. This can mitigate or even eliminate the ease of use advantage of JPEG files.

Many newer cameras can save both RAW and JPEG images simultaneously. This provides you with an immediate final image, but retains the RAW "negative" just in case more flexibility is desired later.

Note: Kodak and Nikon employ a slightly lossy RAW compression algorithm, although any artifacts are much lower than would be perceived with a similar JPEG image. The efficiency of RAW compression also varies with digital camera manufacturer. (Compression: lossless vs. lossy; image shown at 200%; lossy JPEG compression at 60% in Adobe Photoshop.)

SUMMARY

So which is better: RAW or JPEG? There is no single answer, as this depends on the type of photography you are doing. RAW files give the photographer far more control, but with this comes the trade-off of speed, storage space and ease of use. The RAW trade-off is sometimes not worth it for sports and press photographers, although landscape and most fine art photographers often choose RAW in order to maximize the image quality potential of their digital camera. In most cases, RAW files will provide the best solution due to their technical advantages and the decreasing cost of large memory cards.

49. CAMERA HISTOGRAMS: TONES & CONTRAST

Understanding image histograms is probably the single most important concept to become familiar with when working with pictures from a digital camera. A histogram can tell you whether or not your image has been properly exposed, whether the lighting is harsh or flat, and what adjustments will work best. It will not only improve your skills on the computer, but as a photographer as well.

Each pixel in an image has a color which has been produced by some combination of the primary colors red, green, and blue (RGB). Each of these colors can have a brightness value ranging from 0 to 255 for a digital image with a bit depth of 8 bits. An RGB histogram results when the computer scans through each of these RGB brightness values and counts how many are at each level from 0 through 255. Other types of histograms exist, although all will have the same basic layout as the histogram example shown below.
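The counting process behind an RGB histogram can be sketched in a few lines. This is a minimal illustration, not any camera's actual firmware; the function name `rgb_histogram` is my own, and it assumes an 8-bit H x W x 3 image where all three channel values are pooled into one set of 256 counts:

```python
import numpy as np

def rgb_histogram(image):
    """Count how many R, G and B brightness values fall at each level 0-255.

    `image` is an H x W x 3 array of 8-bit values; all three channels are
    pooled together, as in a combined RGB histogram.
    """
    return np.bincount(image.reshape(-1), minlength=256)

# A tiny 1x2 image: one pure black pixel and one pure white pixel.
img = np.array([[[0, 0, 0], [255, 255, 255]]], dtype=np.uint8)
hist = rgb_histogram(img)
print(hist[0], hist[255])  # 3 3 (three channel values at each extreme)
```

Note that the two pixels contribute six channel values in total, which is why each extreme level receives a count of three rather than one.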

TONES

The region where most of the brightness values are present is called the "tonal range." Tonal range can vary drastically from image to image, so developing an intuition for how numbers map to actual brightness values is often critical, both before and after the photo has been taken. There is no one "ideal histogram" which all images should try to mimic; histograms should merely be representative of the tonal range in the scene and what the photographer wishes to convey.

The above image is an example which contains a very broad tonal range, with markers to illustrate where regions in the scene map to brightness levels on the histogram. This coastal scene contains very few midtones, but does have plentiful shadow and highlight regions in the lower left and upper right of the image, respectively. This translates into a histogram which has a high pixel count on both the far left and right-hand sides.

Lighting is often not as extreme as the last example. Conditions of ordinary and even lighting, when combined with a properly exposed subject, will usually produce a histogram which peaks in the centre, gradually tapering off into the shadows and highlights. With the exception of the direct sunlight reflecting off the top of the building and off some windows, the boat scene to the right is quite evenly lit. Most cameras will have no trouble automatically reproducing an image which has a histogram similar to the one shown below.

HIGH AND LOW KEY IMAGES

Although most cameras will produce midtone-centric histograms when in an automatic exposure mode, the distribution of peaks within a histogram also depends on the tonal range of the subject matter. Images where most of the tones occur in the shadows are called "low key," whereas with "high key" images most of the tones are in the highlights. Before the photo has been taken, it is useful to assess whether or not your subject matter qualifies as high or low key.

Since cameras measure reflected as opposed to incident light, they are unable to assess the absolute brightness of their subject. As a result, many cameras contain sophisticated algorithms which try to circumvent this limitation and estimate how bright an image should be. These estimates frequently result in an image whose average brightness is placed in the midtones. This is usually acceptable, however high and low key scenes frequently require the photographer to manually adjust the exposure, relative to what the camera would do automatically. A good rule of thumb is that you will need to manually adjust the exposure whenever you want the average brightness in your image to appear brighter or darker than the midtones.

The following set of images would have resulted if I had used my camera's auto exposure setting. Note how the average pixel count is brought closer to the midtones.

Most digital cameras are better at reproducing low key scenes, since they prevent any region from becoming so bright that it turns into solid white. Detail can never be recovered when a region becomes so overexposed that it turns solid white. When this occurs the highlights are said to be "clipped" or "blown." The histogram is a good tool for knowing whether clipping has occurred, since you can readily see when the highlights are pushed to the edge of the chart. Some clipping is usually ok in regions such as specular reflections on water or metal, when the sun is included in the frame, or when other bright sources of light are present. Ultimately, the amount of clipping present is up to the photographer and what they wish to convey.

High key scenes, on the other hand, often produce images which are significantly underexposed. Fortunately, underexposure is usually more forgiving than overexposure (although this compromises your signal to noise ratio), regardless of how dark the rest of the image might become as a result.

CONTRAST

A histogram can also describe the amount of contrast. Contrast is a measure of the difference in brightness between light and dark areas in a scene. Broad histograms reflect a scene with significant contrast, whereas narrow histograms reflect less contrast and may appear flat or dull. This can be caused by any combination of subject matter and lighting conditions. Photos taken in the fog will have low contrast, while those taken under strong daylight will have higher contrast.

Contrast can have a significant visual impact on an image by emphasizing texture, as shown in the image above. The high contrast water has deeper shadows and more pronounced highlights, creating texture which "pops" out at the viewer.

Contrast can also vary for different regions within the same image due to both subject matter and lighting. We can partition the previous image of a boat into three separate regions, each with its own distinct histogram.

The upper region contains the most contrast of all three because the image is created from light which does not first reflect off the surface of water. This produces deeper shadows underneath the boat and its ledges, and stronger highlights in the upward-facing and directly exposed areas. The middle and bottom regions are produced entirely from diffuse, reflected light and thus have lower contrast, similar to if one were taking photographs in the fog. Conditions in the bottom region create more pronounced highlights, but it still lacks the deep shadows of the top region. The bottom region has more contrast than the middle, despite the smooth and monotonic blue sky, because it contains a combination of shade and more intense sunlight. The sum of the histograms in all three regions creates the overall histogram shown before.

For additional information on histograms, visit part 2 of this tutorial: "Understanding Camera Histograms: Luminance & Color"

50. CAMERA HISTOGRAMS: LUMINANCE & COLOR

This section is designed to help you develop a better understanding of how luminance and color both vary within an image, and how this translates into the relevant histogram. Although RGB histograms are the most commonly used, other types of histogram are more useful for specific purposes. The image below is shown alongside several of the other histogram types which you are likely to encounter.

Move your mouse over the labels at the bottom to toggle which type of color histogram is displayed. When you change to one of the color histograms, a different image will be shown. This new image is a grayscale representation of how that color's intensity is distributed throughout the image. Pay particular attention to how each color changes the brightness distribution within the image, and how the colors within each region influence this brightness.

LUMINANCE HISTOGRAMS

Luminance histograms are more accurate than RGB histograms at describing the perceived brightness distribution or "luminosity" within an image. Luminance takes into account the fact that the human eye is more sensitive to green light than red or blue light. Luminance correctly predicts that the stepped gradient shown (from darkest to lightest) gradually increases in lightness, whereas a simple addition of each RGB value would give the same intensity at each rectangle.

How is a luminance histogram produced? First, each pixel is converted so that it represents a luminosity based on a weighted average of the three colors at that pixel. This weighting assumes that green represents 59% of the perceived luminosity, while the red and blue channels account for just 30% and 11%, respectively. Move your mouse over "convert to luminosity" below the example image to see what this calculation looks like when performed for each pixel. Once all pixels have been converted into luminosity, a luminance histogram is produced by counting how many pixels are at each luminance, identical to how a histogram is produced for a single color.

View the above example again for each color and you will see that the green intensity levels within the image are most representative of the brightness distribution for the full color image. This is also reflected by the fact that the luminance histogram matches the green histogram more closely than any other color.

An important difference to take away from the above calculation is that while luminance histograms keep track of the location of each color pixel, RGB histograms discard this information. An RGB histogram produces three independent histograms and then adds them together, irrespective of whether or not each color came from the same pixel. To illustrate this point we will use an image which the two types of histograms interpret quite differently.
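The weighted average above can be made concrete with a short sketch. This is an illustration using the 30/59/11 weights from the text (function name `luminance_histogram` is my own, and real converters may round or gamma-correct differently):

```python
import numpy as np

# Perceptual weights from the text: red 30%, green 59%, blue 11%.
WEIGHTS = np.array([0.30, 0.59, 0.11])

def luminance_histogram(image):
    """Convert each pixel to a weighted luminosity, then count per level."""
    luma = np.rint(image.astype(float) @ WEIGHTS).astype(int)
    return np.bincount(luma.reshape(-1), minlength=256)

# Pure green vs. pure blue: identical RGB totals, very different luminosity.
green = np.array([[[0, 255, 0]]], dtype=np.uint8)
blue = np.array([[[0, 0, 255]]], dtype=np.uint8)
print(int(np.argmax(luminance_histogram(green))))  # 150 (0.59 * 255)
print(int(np.argmax(luminance_histogram(blue))))   # 28  (0.11 * 255)
```

An RGB histogram would place both of these pixels at the same level (255); the luminance histogram separates them, which is exactly the difference the text describes.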

The above image contains many patches of pure color. At the interior of each color patch the intensity reaches a maximum of 255, so all patches have significant color clipping, and only in that color. Even though this image contains no pure white pixels, the RGB histogram shows strong clipping: so much that if this were a photograph, the image would appear significantly overexposed. This is because the RGB histogram does not take into account the fact that all three colors never clip in the same place.

The luminance histogram tells an entirely different story by showing no pixels anywhere near full brightness. It also shows three distinct peaks, one for each color that has become significantly clipped. Since this image contains primarily blue, then red, then least of all green, the relative heights clearly show which color belongs where. Also note that the relative horizontal position of each peak is in accordance with the percentages used in the weighted average for calculating luminance: 59%, 30%, and 11%.

So which one is better? If we cared about color clipping, then the RGB histogram clearly warns us, while the luminance histogram provides no red flags. On the other hand, the luminance histogram accurately tells us that no pixel is anywhere near full black or white. Each has its own use, and the two should be used as a collective tool. As a rule of thumb, the more intense and pure the colors are in your image, the more a luminance and an RGB histogram will differ. Since most digital cameras show only an RGB histogram, just be aware of its shortcomings. Pay careful attention when your subject contains strong shades of blue, since you will rarely be able to see blue channel clipping with luminance histograms.

COLOR HISTOGRAMS

Whereas RGB and luminance histograms use all three color channels, a color histogram describes the brightness distribution for any of these colors individually. This can be more helpful when trying to assess whether or not individual colors have been clipped. (View Channel: RED / GREEN / BLUE / ALL / LUMINOSITY; View Histogram: RGB / LUMINOSITY.)
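Checking each channel separately for clipping is simple to express in code. The following is a hypothetical helper (the name `clipped_fraction` is my own), which also demonstrates the text's point that a fully clipped single channel can still have an unremarkable luminosity:

```python
import numpy as np

def clipped_fraction(image, level=255):
    """Fraction of pixels at full intensity in each channel, separately."""
    return {name: float((image[..., i] == level).mean())
            for i, name in enumerate("RGB")}

# A patch of pure red: the red channel is fully clipped, yet its
# luminosity (0.30 * 255 = 77) is nowhere near full brightness.
patch = np.zeros((4, 4, 3), dtype=np.uint8)
patch[..., 0] = 255
print(clipped_fraction(patch))  # {'R': 1.0, 'G': 0.0, 'B': 0.0}
```

A combined RGB histogram would simply report a large spike at 255 here, without saying which channel caused it.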

The petals of the red flowers caught direct sunlight, so their red color became clipped, even though the rest of the image remained within the histogram. Regions where individual color channels are clipped lose all texture caused by that particular color. However, these clipped regions may still retain some luminance texture if the other two colors have not also been clipped. Individual color clipping is often not as objectionable as when all three colors clip, although this all depends upon what you wish to convey.

The strength and purity of colors within this image cause the RGB and luminance histograms to differ significantly. Move your mouse over the labels above to compare the luminance and RGB histograms, to view the image in terms of only a single color channel, or to view the image luminance. Notice how the intensity distribution for each color channel varies drastically in regions of nearly pure color. RGB histograms can show if an individual color channel clips, however they do not tell you whether this is due to an individual color or all three. Color histograms amplify this effect and clearly show the type of clipping.

For additional information on histograms, visit part 1 of this tutorial: "Understanding Camera Histograms: Tones and Contrast"

51. DIGITAL CAMERA IMAGE NOISE

"Image noise" is the digital equivalent of film grain for analogue cameras. Alternatively, one can think of it as analogous to the subtle background hiss you may hear from your audio system at full volume. For digital images, this noise appears as random speckles on an otherwise smooth surface and can significantly degrade image quality. Although noise often detracts from an image, it is sometimes desirable since it can add an old-fashioned, grainy look which is reminiscent of early film. Some noise can also increase the apparent sharpness of an image. Noise increases with the sensitivity setting in the camera, length of the exposure and temperature, and even varies amongst different camera models.

CONCEPT

Some degree of noise is always present in any electronic device that transmits or receives a "signal." For televisions this signal is the broadcast data transmitted over cable or received at the antenna; for digital cameras, the signal is the light which hits the camera sensor. Even though noise is unavoidable, it can become so small relative to the signal that it appears to be nonexistent. The signal to noise ratio (SNR) is a useful and universal way of comparing the relative amounts of signal and noise for any electronic system: high ratios will have very little visible noise, whereas the opposite is true for low ratios.

The sequence of images below shows a camera producing a very noisy picture of the word "signal" against a smooth background. The resulting image is shown along with an enlarged 3-D representation depicting the signal above the background noise. (Original Image / Camera Image / Colorful 3-D representation of the camera's image.) The image above has a sufficiently high SNR to clearly separate the image information from background noise. A low SNR would produce an image where the "signal" and noise are more comparable, and thus harder to discern from one another.
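The SNR comparison can be made quantitative. Below is a minimal sketch (the function `snr_db` is my own; the decibel convention of 20·log10 of the RMS ratio is standard for amplitude signals) showing that a less noisy capture of the same scene yields a higher SNR:

```python
import numpy as np

def snr_db(signal, noisy):
    """Signal-to-noise ratio in decibels: 20 * log10(rms signal / rms noise)."""
    noise = noisy - signal
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    return 20 * np.log10(rms(signal) / rms(noise))

rng = np.random.default_rng(0)
signal = np.full(10_000, 100.0)                       # a smooth grey "scene"
slightly_noisy = signal + rng.normal(0, 2, signal.shape)
very_noisy = signal + rng.normal(0, 20, signal.shape)
print(snr_db(signal, slightly_noisy) > snr_db(signal, very_noisy))  # True
```

With noise ten times stronger, the SNR drops by about 20 dB, which is why the noisier patch looks so much worse even though the underlying "scene" is identical.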

Camera Image

TERMINOLOGY

A camera's "ISO setting" or "ISO speed" is a standard which describes its absolute sensitivity to light. ISO settings are usually listed as factors of 2, such as ISO 50, ISO 100 and ISO 200, and can have a wide range of values. Higher numbers represent greater sensitivity, and the ratio of two ISO numbers represents their relative sensitivity, meaning a photo at ISO 200 will take half as long to reach the same level of exposure as one taken at ISO 100 (all other settings being equal). ISO speed is analogous to ASA speed for different films, however a single digital camera can capture images at several different ISO speeds. This is accomplished by amplifying the image signal in the camera, however this also amplifies noise, and so higher ISO speeds will produce progressively more noise.

TYPES OF NOISE

Digital cameras produce three common types of noise: random noise, "fixed pattern" noise, and banding noise. The three qualitative examples below show pronounced and isolating cases for each type of noise against an ordinarily smooth grey background.

Fixed Pattern Noise (Long Exposure, Low ISO Speed) / Random Noise (Short Exposure, High ISO Speed) / Banding Noise (Susceptible Camera, Brightened Shadows)

Random noise is characterized by intensity and color fluctuations above and below the actual image intensity. There will always be some random noise at any exposure length, and it is most influenced by ISO speed. The pattern of random noise changes even if the exposure settings are identical.

Fixed pattern noise includes what are called "hot pixels," which are defined as such when a pixel's intensity far surpasses that of the ambient random noise fluctuations. Fixed pattern noise generally appears in very long exposures and is exacerbated by higher temperatures. It is unique in that it will show almost the same distribution of hot pixels if taken under the same conditions (temperature, length of exposure, ISO speed).

Banding noise is highly camera-dependent, and is noise which is introduced by the camera when it reads data from the digital sensor. It is most visible at high ISO speeds and in the shadows, or when an image has been excessively brightened. Banding noise can also increase for certain white balances, depending on camera model.

Although fixed pattern noise appears more objectionable, it is usually easier to remove since it is repeatable: a camera's internal electronics just has to know the pattern, and it can subtract this noise away to reveal the true image. Fixed pattern noise is much less of a problem than random noise in the latest generation of digital cameras, however even the slightest amount can be more distracting than random noise.

The less objectionable random noise is usually much more difficult to remove without degrading the image. Computers have a difficult time discerning random noise from fine texture patterns such as those occurring in dirt or foliage, so if you remove the random noise you often end up removing these textures as well. Programs such as Neat Image and Noise Ninja can be remarkably good at reducing noise while still retaining actual image information. Please also see my section on image averaging for another technique to reduce noise.
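The ISO/exposure relationship above is a simple reciprocal one, and can be sketched as a hypothetical helper (the function name is my own):

```python
def exposure_time(base_time_s, base_iso, new_iso):
    """Shutter time needed at a new ISO for the same exposure,
    all other settings being equal (a hypothetical helper)."""
    return base_time_s * base_iso / new_iso

# ISO 200 needs half the exposure time of ISO 100: 1/100 s becomes 1/200 s.
half = exposure_time(1 / 100, 100, 200)
```

Doubling the ISO again (to 400) would halve the time once more, which is why ISO steps are conventionally listed as factors of 2.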

Understanding the noise characteristics of a digital camera will help you know how noise will influence your photographs. The following sections discuss the tonal variation of noise, the separate effects of chroma and luminance noise, and the frequency and magnitude of image noise. Examples of noise variation based on ISO and color channel are also shown for three different digital cameras.

CHARACTERISTICS

Noise not only changes depending on exposure setting and camera model, but it can also vary within an individual image. For digital cameras, darker regions will contain more noise than the brighter regions; with film the inverse is true. Brighter regions have a stronger signal due to more light, resulting in a higher overall SNR. This means that images which are underexposed will have more visible noise, even if you brighten them up to a more natural level afterwards. On the other hand, overexposed images will have less noise and can actually be advantageous, assuming that you can darken them later and that no region has become solid white where there should be texture (see "Understanding Histograms, Part 1").

(Each region shown at 100% zoom: 1, 2, 3, 4.) Note how noise becomes less pronounced as the tones become brighter.

Noise is also composed of two elements: fluctuations in color and luminance. Color or "chroma" noise is usually more unnatural in appearance, and can render images unusable if not kept under control. The example below shows noise on what was originally a neutral grey patch, along with the separate effects of chroma and luminance noise. (Image Noise / Chroma Noise / Luminance Noise.) The relative amount of chroma and luminance noise can vary significantly from one camera model to another. Noise reduction software can be used to selectively reduce both chroma and luminance noise, however complete elimination of luminance noise can result in unnatural or "plasticky" looking images.

Noise fluctuations can also vary in both their magnitude and spatial frequency, although spatial frequency is often a neglected characteristic. The term "fine-grained" was used frequently with film to describe noise whose fluctuations occur over short distances, which is the same as having a high spatial frequency. The example below shows how the spatial frequency can change the appearance of noise.

(Low Frequency Noise, coarser texture, vs. High Frequency Noise, finer texture; the standard deviation of each patch is nearly the same.)

Even though noise's spatial frequency is often under-emphasized, it has a very prominent effect. Upon visual inspection, the patch on the right actually appears to be much less noisy than the patch on the left, yet if the two patches above were compared based solely on the magnitude of their fluctuations (as is done in most camera reviews), the patch on the right would seem to have higher noise. This is due entirely to the spatial frequency of noise in each patch.

The magnitude of noise is usually described based on a statistical measure called the "standard deviation," which quantifies the typical variation a pixel will have from its "true" value. The next example shows two patches which have different standard deviations, but the same spatial frequency.

(Low Magnitude Noise, smoother texture, vs. High Magnitude Noise, rougher texture, with a substantially larger standard deviation.)

Note how the patch on the left appears much smoother than the patch on the right. As noise levels increase, high magnitude noise can overpower fine textures such as fabric or foliage, and can be more difficult to remove without over-softening the image.

This concept can also be understood by looking at the histogram for each patch (shown here as RGB histograms for low and high noise magnitude, although the same comparison can also be made for the luminosity and individual color histograms). If each of the above patches had zero noise, all pixels would be in a single line located at the mean. As noise levels increase, so does the width of this histogram. For more information on types of histograms, please see: "Understanding Histograms: Luminosity and Color."

EXAMPLES

It is helpful to experiment with your camera so you can get a feel for how much noise is produced at a given ISO setting. The examples below show the noise characteristics for three different cameras against an otherwise smooth grey patch, at ISO 100, ISO 200 and ISO 400.
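The standard deviation measure is easy to verify with simulated patches. Below is a small sketch (the noise magnitudes 5 and 20 are illustrative values of my own choosing, not the figures from the original example):

```python
import numpy as np

rng = np.random.default_rng(42)
true_level = 128.0  # the "true" value of a smooth grey patch

# Two simulated patches with the same mean but different noise magnitudes.
smooth = true_level + rng.normal(0, 5, size=(200, 200))
rough = true_level + rng.normal(0, 20, size=(200, 200))

# The standard deviation recovers the typical variation from the true value.
print(round(float(smooth.std())), round(float(rough.std())))  # ≈ 5 and ≈ 20
```

Both patches have the same mean (the grey level itself), so the standard deviation, not the mean, is what separates a smooth patch from a rough one.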

Canon EOS 20D (Pixel Area: 40 µm², released in 2004) / Canon PowerShot A80 (Pixel Area: 9.3 µm², released in 2003) / Epson PhotoPC 800 (Pixel Area: 15 µm², released in 1999). (Best JPEG quality, daylight white balance and default sharpening. Show Channel: RED / GREEN / BLUE / ALL.)

Note the differences due to camera model, color channel and ISO speed. Move your mouse over the buttons to see that each individual channel has quite a different amount of noise. The blue and green channels will usually have the highest and lowest noise, respectively, in digital cameras with Bayer arrays (see "Understanding Digital Sensors"). You can also see that increasing ISO speed always produces higher noise for a given camera, however noise variation between cameras is more complex.

The greater the area of a pixel in the camera sensor, the more light gathering ability it will have, thus producing a stronger signal. As a result, cameras with physically larger pixels will generally appear less noisy, since the signal is larger relative to the noise. This is why cameras with more megapixels packed into the same sized camera sensor will not necessarily produce a better looking image. On the other hand, a stronger signal does not necessarily lead to lower noise, since it is the relative amounts of signal and noise that determine how noisy an image will appear. Even though the Epson PhotoPC 800 has much larger pixels than the Canon PowerShot A80, it has visibly more noise, especially at ISO 400. This is because the much older Epson camera had much higher internal noise levels caused by less sophisticated electronics. Also note how the Epson develops patches of color which are much more objectionable than noise caused only by brightness fluctuations.

Part 1 of this tutorial can be found at: "Image Noise: Concept and Types"

52. TUTORIALS: SHARPNESS

Sharpness describes the clarity of detail in a photo, and can be a valuable creative tool for emphasizing texture. Proper photographic and post-processing technique can go a long way towards improving sharpness, although sharpness is ultimately limited by your camera equipment, image magnification and viewing distance. Two fundamental factors contribute to the perceived sharpness of an image: resolution and acutance.

Acutance describes how quickly image information transitions at an edge, and so high acutance results in sharp transitions and detail with clearly defined borders. Resolution describes the camera's ability to distinguish between closely spaced elements of detail, such as the two sets of lines shown above. (Acutance / Resolution: High / Low.)

For digital cameras, resolution is limited by your digital sensor, whereas acutance depends on both the quality of your lens and the type of post-processing. Acutance is the only aspect of sharpness which is still under your control after the shot has been taken, so acutance is what is enhanced when you digitally sharpen an image (see Sharpening Using an "Unsharp Mask").

COMPARISON

Photos require both high acutance and high resolution to be perceived as critically sharp. The following example is designed to give you a feel for how each influences your image: (Acutance: High, Resolution: Low / Acutance: Low, Resolution: High / Acutance: High, Resolution: High.)

PROPERTIES OF SHARPNESS

Sharpness also depends on other factors which influence our perception of resolution and acutance. Image noise (or film grain) is usually detrimental to an image, however small amounts can actually increase the appearance of sharpness. Consider the following example:
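The acutance-boosting idea behind unsharp masking can be sketched in a few lines. This is a minimal illustration of the principle only (a 3x3 box blur in place of the Gaussian blur, radius and threshold parameters that real unsharp mask tools use; the function name is my own):

```python
import numpy as np

def unsharp_mask(image, amount=1.0):
    """Increase acutance by adding back the difference from a blurred copy.

    A minimal sketch: real unsharp mask tools use a Gaussian blur plus
    radius and threshold parameters rather than a fixed 3x3 box blur.
    """
    h, w = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    blurred = sum(padded[i:i + h, j:j + w]
                  for i in range(3) for j in range(3)) / 9.0
    return np.clip(image + amount * (image - blurred), 0, 255)

# A hard edge between tones 50 and 200 gains contrast at the transition.
edge = np.full((4, 8), 50.0)
edge[:, 4:] = 200.0
sharp = unsharp_mask(edge)
print(sharp[0, 3], sharp[0, 4])  # 0.0 250.0 (darker dip, brighter overshoot)
```

The dark side of the edge dips darker and the bright side overshoots brighter, which is exactly the faster transition, higher acutance, that the eye reads as sharpness; note that no new resolution is created.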

Low Noise, Soft / High Noise, Sharp

Although neither image has been sharpened, the image to the left appears softer and less detailed. Image noise can be both very fine and have a very high acutance, tricking the eye into thinking sharp detail is present. Keep this property in mind when sharpening your image, as the optimal type of sharpening may not necessarily be what looks best on your screen.

Sharpness also depends on viewing distance. Images which are designed to be viewed from further away, such as posters or billboards, may have much lower resolution than fine art prints in a gallery, yet both may be perceived as sharp because of your viewing distance.

Sharpness is also significantly affected by your camera technique. Even small amounts of camera shake can dramatically reduce the sharpness of an image. Proper shutter speeds, use of a sturdy camera tripod and mirror lock-up can all significantly impact the sharpness of your prints.

53. TUTORIALS: WHITE BALANCE

White balance (WB) is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Proper camera white balance has to take into account the "color temperature" of a light source, which refers to the relative warmth or coolness of white light. Our eyes are very good at judging what is white under different light sources, however digital cameras often have great difficulty with auto white balance (AWB). An incorrect WB can create unsightly blue, orange, or even green color casts, which are unrealistic and particularly damaging to portraits. Performing WB in traditional film photography requires attaching a different cast-removing filter for each lighting condition, whereas with digital this is no longer required. Understanding digital white balance can help you avoid color casts created by your camera's AWB, thereby improving your photos under a wider range of lighting conditions. (Incorrect White Balance / Correct White Balance.)

BACKGROUND: COLOR TEMPERATURE

Color temperature describes the spectrum of light which is radiated from a "blackbody" with that surface temperature. A blackbody is an object which absorbs all incident light, neither reflecting it nor allowing it to pass through. A rough analogue of blackbody radiation in our day to day experience might be in heating a metal or stone: these are said to become "red hot" when they attain one temperature, and then "white hot" for even higher temperatures. Similarly, blackbodies at different temperatures also have varying color temperatures of "white light." Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum:

Relative intensity has been normalized for each temperature (in kelvins). Note how 5000 K produces roughly neutral light, whereas 3000 K and 9000 K produce light spectrums which shift to contain more orange and blue wavelengths, respectively. As the color temperature rises, the color distribution becomes cooler. This may not seem intuitive, but results from the fact that shorter wavelengths contain light of higher energy.

Why is color temperature a useful description of light for photographers, if they never deal with true blackbodies? Fortunately, light sources such as daylight and tungsten bulbs closely mimic the distribution of light created by blackbodies, although others such as fluorescent and most commercial lighting depart from blackbodies significantly. Since photographers never use the term color temperature to refer to a true blackbody light source, the term is implied to be a "correlated color temperature" with a similarly colored blackbody. The following table is a rule-of-thumb guide to the correlated color temperature of some common light sources:

Color Temperature    Light Source
1000-2000 K          Candlelight
2500-3500 K          Tungsten Bulb (household variety)
3000-4000 K          Sunrise/Sunset (clear sky)
4000-5000 K          Fluorescent Lamps
5000-5500 K          Electronic Flash
5000-6500 K          Daylight with Clear Sky (sun overhead)
6500-8000 K          Moderately Overcast Sky
9000-10000 K         Shade or Heavily Overcast Sky

IN PRACTICE: JPEG & TIFF FILES

Fortunately, most digital cameras contain a variety of preset white balances, so you do not have to deal with color temperature and green-magenta shift during the critical shot. Commonly used symbols for each of these are listed to the left: Auto White Balance, Custom, Kelvin, Tungsten, Fluorescent, Daylight, Flash, Cloudy, Shade.

Since some light sources do not resemble blackbody radiators, white balance uses a second variable in addition to color temperature: the green-magenta shift. Adjusting the green-magenta shift is often unnecessary under ordinary daylight, however fluorescent and other artificial lighting may require significant green-magenta adjustments to the WB.

The first three white balances allow for a range of color temperatures. Auto white balance is available in all digital cameras and uses a best guess algorithm within a limited range, usually between 3000/4000 K and 7000 K. Custom white balance allows you to take a picture of a known gray reference under the same lighting, and then set that as the white balance for future photos. With "Kelvin" you can set the color temperature over a broad range.

The remaining six white balances are listed in order of increasing color temperature, however many compact cameras do not include a shade white balance. Some cameras also include a "Fluorescent H" setting, which is designed to work with newer daylight-calibrated fluorescents.

The description and symbol for the above white balances are just rough estimates for the actual lighting they work best under. For example, cloudy could be used in place of daylight depending on the time of day, elevation, or degree of haziness. If your image appears too cool on your LCD screen preview (regardless of the setting), you can quickly increase the color temperature by selecting a symbol further down on the list above. If the image is still too cool (or warm if going the other direction), you can resort to manually entering a temperature in the Kelvin setting.

If all else fails and the image still does not have the correct WB after inspecting it on a computer afterwards, you can adjust the color balance to remove additional color casts. Alternatively, one could click on a colorless reference (see section on neutral references) with the "set gray point" dropper while using the "levels" tool in Photoshop. Either of these methods should be avoided, since they can severely reduce the bit depth of your image.

IN PRACTICE: THE RAW FILE FORMAT

By far the best white balance solution is to photograph using the RAW file format (if your camera supports it), as RAW files allow you to set the WB *after* the photo has been taken. RAW files also allow one to set the WB based on a broader range of color temperatures and green-magenta shifts.

Performing a white balance with a RAW file is quick and easy. You can either adjust the temperature and green-magenta sliders until color casts are removed, or you can simply click on a neutral reference within the image (see next section). Even if only one of your photos contains a neutral reference, you can click on it and then use the resulting WB settings for the remainder of your photos (assuming the same lighting).

CUSTOM WHITE BALANCE: CHOOSING A NEUTRAL REFERENCE

A neutral reference is often used for color-critical projects, or for situations where one anticipates auto white balance will encounter problems. Neutral references can either be parts of your scene (if you're lucky), or can be a portable item which you carry with you. Below is an example of a fortunate reference in an otherwise bluish twilight scene.

On the other hand, portable references can be expensive and specifically designed for photography, or may include less expensive household items. An ideal gray reference is one which reflects all colors in the spectrum equally, and can consistently do so under a broad range of color temperatures. An example of a pre-made gray reference is shown below:

Common household neutral references include the underside of a lid to a coffee or Pringles container. These are both inexpensive and reasonably accurate, although custom-made photographic references are the best (such as the cards shown above). Custom-made devices can be used to measure either the incident or reflected color temperature of the illuminant. Most neutral references measure reflected light, whereas a device such as a white balance meter or an "ExpoDisc" can measure incident light (and can theoretically be more accurate).

In general, pre-made portable references are almost always more accurate, since one can easily be tricked into thinking an object is neutral when it is not. Care should be taken when using a neutral reference with high image noise, since clicking on a seemingly gray region may actually select a colorful pixel caused by color noise:
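To make the "click on a neutral reference" idea concrete, here is a minimal Python sketch of the arithmetic involved; the function names and sample values are illustrative, not taken from any real RAW converter. Clicking a gray reference yields per-channel gains that map it back to neutral, and for noisy regions the reference can be a small pixel-neighborhood average (such as a 3x3 or 5x5 sample) rather than a single pixel.

```python
def wb_gains_from_gray(r, g, b):
    """Per-channel multipliers that would map a supposedly neutral (r, g, b)
    sample back to gray, normalized so the green channel is unchanged."""
    if min(r, g, b) <= 0:
        raise ValueError("neutral reference must have positive channel values")
    return (g / r, 1.0, g / b)

def apply_gains(pixel, gains):
    """Scale one RGB pixel by the white balance gains, clipping to 8-bit."""
    return tuple(min(255, round(c * k)) for c, k in zip(pixel, gains))

def region_average(image, x, y, size=3):
    """Average RGB over a size x size window centered on (x, y), so that a
    noisy gray region (rather than one noisy pixel) serves as the reference."""
    half = size // 2
    window = [image[j][i]
              for j in range(y - half, y + half + 1)
              for i in range(x - half, x + half + 1)]
    n = len(window)
    return tuple(sum(px[c] for px in window) / n for c in range(3))

# A bluish color cast: the gray card reads (180, 200, 230) instead of neutral.
gains = wb_gains_from_gray(180, 200, 230)
print(apply_gains((180, 200, 230), gains))  # the reference itself becomes (200, 200, 200)
```

Real converters perform this on linear sensor data before any tonal curve is applied, so this only conveys the gist of the operation.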

[Low Noise (Smooth Colorless Gray) | High Noise (Patches of Color)]

If your software supports it, the best solution for white balancing with noisy images is to use the average of pixels within a noisy gray region as your reference. This can be either a 3x3 or 5x5 pixel average if using Adobe Photoshop.

NOTES ON AUTO WHITE BALANCE

Certain subjects create problems for a digital camera's auto white balance, even under normal daylight conditions. One example is if the image already has an overabundance of warmth or coolness due to unique subject matter. The image below illustrates a situation where the subject is predominantly red, and so the camera mistakes this for a color cast induced by a warm light source. The camera then tries to compensate so that the average color of the image is closer to neutral, but in doing so it unknowingly creates a bluish color cast on the stones. Some digital cameras are more susceptible to this than others.

[Automatic White Balance | Custom White Balance]
(Custom white balance uses an 18% gray card as a neutral reference.)

A digital camera's auto white balance is often more effective when the photo contains at least one white or bright colorless element. Without the white boat in the image below, the camera's auto white balance mistakenly created an image with a slightly warmer color temperature. Of course, do not try to change your composition to include a colorless object, but just be aware that its absence may cause problems with the auto white balance.

IN MIXED LIGHTING

Multiple illuminants with different color temperatures can further complicate performing a white balance.

Under mixed lighting, auto white balance usually calculates an average color temperature for the entire scene, and then uses this as the white balance. This approach is usually acceptable, however auto white balance tends to exaggerate the difference in color temperature for each light source, as compared with what we perceive with our eyes.

Exaggerated differences in color temperature are often most apparent with mixed indoor and natural lighting. Note how the building to the left is quite warm, whereas the sky is somewhat cool. This is because the white balance was set based on the moonlight, bringing out the warm color temperature of the artificial lighting below. White balancing based on the natural light often yields a more realistic photograph. Choose "stone" as the white balance reference and see how the sky becomes unrealistically blue.

[White balance reference: Moon | Stone]

Some lighting situations may not even have a truly "correct" white balance, and will depend upon where color accuracy is most important. Critical images may even require a different white balance for each lighting region. On the other hand, some may prefer to leave the color temperatures as is.

54. IMAGE POSTERIZATION

Posterization occurs when an image's apparent bit depth has been decreased so much that it has a visual impact. The term posterization is used because it can influence your photo similar to how the colors may look in a mass-produced poster, where the print process uses a limited number of color inks. This effect ranges from subtle to quite pronounced, although one's tolerance for posterization may vary.

Visually inspecting an image is a good way to detect posterization, however the best objective tool is the histogram. Although RGB histograms will show extreme cases, the individual color histograms are your most sensitive means of diagnosis. The two RGB histograms below demonstrate an extreme case, where a previously narrow histogram has been stretched to almost three times its original width. Note the tell-tale sign of posterization on the right: vertical spikes which look similar to the teeth of a comb.

Why does it look like this? Recall that each channel in an 8-bit image can only have discrete color intensities from 0 to 255 (see "Understanding Bit Depth"). A stretched histogram is forced to spread these discrete levels over a broader range than exists in the original image. This creates gaps where there is no longer any intensity information left in the image. As an example, if we were to take a color histogram which ranged from 120 to 130 and then stretched it from 100 to 150 (5x its original width), then there would be peaks at every increment of 5 (100, 105, 110, etc.) and no pixels in between. Visually, this would force colors to "jump" or form steps in what would otherwise be smooth color gradations.

Any process which "stretches" the histogram has the potential to cause posterization. Stretching can be caused by techniques such as levels and curves in Photoshop, or by converting an image from one color space into another as part of color management. The best way to ward off posterization is to keep any histogram manipulation to a minimum. Keep in mind though that all digital images have discrete color levels; it is only when these levels sufficiently disperse that our eye is able to perceive them.
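The 120-130 to 100-150 example can be verified with a few lines of Python. This is a sketch of what a levels-style adjustment does to the pixel values, not an actual Photoshop implementation:

```python
def stretch_levels(values, old_min, old_max, new_min, new_max):
    """Linearly remap integer pixel levels from [old_min, old_max] to
    [new_min, new_max], rounding back to integers as an 8-bit editor must."""
    scale = (new_max - new_min) / (old_max - old_min)
    return [round(new_min + (v - old_min) * scale) for v in values]

# A narrow histogram spanning levels 120-130, stretched 5x to span 100-150:
original = list(range(120, 131))
stretched = stretch_levels(original, 120, 130, 100, 150)
print(stretched)   # [100, 105, 110, 115, 120, 125, 130, 135, 140, 145, 150]
```

Every output level is a multiple of 5; the four levels between each pair of peaks hold no pixels at all, which is exactly the comb-tooth pattern seen in a stretched histogram.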

Posterization occurs more easily in regions of gradual color transitions, such as smooth skies. These regions require more color levels to describe them, and so any decrease in levels can have a visual impact on the image.

USEFUL TIPS

• Using images with 16-bits per channel can greatly reduce the risk of posterization, since this provides up to 256 times as many color levels as 8-bit. Realistically, you can expect anywhere from 4-16 times as many levels if your image originated from a digital camera, since most capture at 10 to 12-bits per channel in RAW mode, regardless of whether or not you saved it as a 16-bit file.
• Even if your original image was 8-bits per channel, performing all editing in 16-bit mode will nearly eliminate posterization caused by rounding errors.
• Adjustment layers in Photoshop will decrease the likelihood of unnecessarily performing the same image manipulations more than once.
• Working in color spaces with broad gamuts can increase the likelihood of posterization, because they require more bit depth to produce the same color gradient.

55. DYNAMIC RANGE IN DIGITAL PHOTOGRAPHY

Dynamic range in photography describes the ratio between the maximum and minimum measurable light intensities (white and black, respectively). In the real world, one never encounters true white or black, only varying degrees of light source intensity and subject reflectivity. The concept of dynamic range therefore becomes more complicated, and depends on whether you are describing a capture device (such as a camera or scanner), a display device (such as a print or computer display), or the subject itself.

Just as with color management, each device within the imaging chain has its own dynamic range. In prints and computer displays, nothing can become brighter than paper white or a maximum intensity pixel. In fact, another device not shown above is our eyes, which also have their own dynamic range. Translating image information between devices may therefore affect how that image is reproduced. The concept of dynamic range is therefore useful for relative comparisons between the actual scene, your camera, and the image on your screen or in the final print.

INFLUENCE OF LIGHT: ILLUMINANCE & REFLECTIVITY

Light intensity can be described in terms of incident and reflected light; both contribute to the dynamic range of a scene (see tutorial on "camera metering and exposure"). Here we use the term illuminance to specify only incident light. Both illuminance and luminance are typically measured in candelas per square meter (cd/m2). Approximate values for commonly encountered light sources are shown below. Here we see the vast variation possible for incident light, since the above diagram is scaled to powers of ten. If a scene were unevenly illuminated by both direct and obstructed sunlight, this alone can greatly increase a scene's dynamic range (as apparent from the canyon sunset example with a partially lit cliff face).

[Strong Reflections | Uneven Incident Light]

Scenes with high variation in reflectivity, such as those containing black objects in addition to strong reflections, may actually have a greater dynamic range than scenes with large incident light variation. Photography under either scenario can easily exceed the dynamic range of your camera, particularly if the exposure is not spot on. Accurate measurement of light intensity, or luminance, is therefore critical when assessing dynamic range.

DIGITAL CAMERAS

Although the meaning of dynamic range for a real-world scene is simply the ratio between the lightest and darkest regions (contrast ratio), its definition becomes more complicated when describing measurement devices such as digital cameras and scanners. Recall from the tutorial on digital camera sensors that light is measured at each pixel in a cavity or well (photosite). Photosites can be thought of as buckets which hold photons as if they were water. Therefore, if the bucket becomes too full, it will overflow. A photosite which overflows is said to have become saturated, and is therefore unable to discern between additional incoming photons, thereby defining the camera's white level. Each photosite's size, in addition to how its contents are measured, determines a digital camera's dynamic range.

[Black Level (Limited by Noise) | White Level (Saturated Photosite) | Darker White Level (Low Capacity Photosite)]

For an ideal camera, its contrast ratio would be just the number of photons it could contain within each photosite, divided by the darkest measurable light intensity (one photon). If each photosite held 1000 photons, then the contrast ratio would be 1000:1. Since larger photosites can contain a greater range of photons, dynamic range is generally higher for digital SLR cameras compared to compact cameras (due to larger pixel sizes).

Note: In some digital cameras, there is an extended low ISO setting which produces less noise, but also decreases dynamic range. An example of this is many of the Canon cameras, which have an ISO-50 speed below the ordinary ISO-100. This is because the setting in effect overexposes the image by a full f-stop, thereby increasing the light signal, but then later truncates the highlights.
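The photon-bucket arithmetic can be written down directly. In this sketch the photon counts are illustrative, not measurements from any particular sensor: dynamic range in f-stops is just the base-2 logarithm of the ratio between the full-well capacity and the darkest distinguishable signal.

```python
import math

def dynamic_range_stops(full_well, noise_floor=1):
    """Dynamic range in f-stops: log2 of the ratio between the brightest
    measurable signal (photosite saturation) and the darkest one."""
    return math.log2(full_well / noise_floor)

# Ideal photosite holding 1000 photons, darkest signal a single photon:
print(round(dynamic_range_stops(1000), 1))      # 10.0  (contrast ratio 1000:1)
# The same well with a hypothetical noise floor of 8 photons:
print(round(dynamic_range_stops(1000, 8), 1))   # 7.0
```

The second call anticipates the next section: once the darkest measurable signal is set by noise rather than by a single photon, the usable range shrinks accordingly.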

In reality, consumer cameras cannot count individual photons. The dynamic range of a digital camera can therefore be described as the ratio of the maximum light intensity measurable (at pixel saturation) to the minimum light intensity measurable (above read-out noise). The darkest measurable intensity is limited by how accurately each photosite can be measured, and is therefore limited in darkness by image noise; we call this the black level. Overall, dynamic range generally increases for lower ISO speeds and for cameras with less measurement noise.

Note: Even if a photosite could count individual photons, it would still be limited by photon noise. Photon noise is created by the statistical variation in the arrival of photons, and therefore represents a theoretical minimum for noise. Total noise represents the sum of photon noise and read-out noise.

SCANNERS

Scanners are subject to the same saturation-versus-noise criterion as digital cameras, except dynamic range is instead described in terms of density (D). This is useful because it is conceptually similar to how pigments create tones in printed media.

[Low Reflectance (High Density) | High Reflectance (Low Density)]
[High Pigment Density (Darker Tone) | Low Pigment Density (Lighter Tone)]

The overall dynamic range in terms of density is the maximum pigment density (Dmax) minus the minimum pigment density (Dmin). Unlike powers of 2 for f-stops, density is measured using powers of 10 (just as the Richter scale for earthquakes). A density of 3.0 therefore represents a contrast ratio of 1000:1 (since 10^3.0 = 1000).

[Dynamic Range of Original | Dynamic Range of Scanner]

Instead of listing total density (D), scanner manufacturers typically list just the Dmax value, since Dmax minus Dmin is approximately equal to Dmax. This is because, unlike with digital cameras, a scanner has full control over its light source, ensuring that minimal photosite saturation occurs.

For high pigment density, the same noise constraints apply to scanners as to digital cameras (since both use an array of photosites for measurement). Therefore the measurable Dmax is also determined by the noise present during read-out of the light signal, and dynamic range is limited by the darkest tone where texture can no longer be discerned.

COMPARISON

Dynamic range varies so greatly that it is commonly measured on a logarithmic scale, similar to how vastly different earthquake intensities are all measured on the same Richter scale. Below we show the maximum measurable (or reproducible) dynamic range for several devices, in terms of any preferred measure (f-stops, density and contrast ratio).

The most commonly used unit for measuring dynamic range in digital cameras is the f-stop, which describes total light range by powers of 2. A contrast ratio of 1024:1 could therefore also be described as having a dynamic range of 10 f-stops (since 2^10 = 1024). Depending on the application, each unit f-stop may also be described as a "zone" or "eV."
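The three measures describe the same quantity and converting between them is one logarithm each. A small sketch of the conversions used in this section:

```python
import math

def contrast_to_fstops(ratio):
    """f-stops describe the light range in powers of 2."""
    return math.log2(ratio)

def contrast_to_density(ratio):
    """Density (D) describes the same range in powers of 10."""
    return math.log10(ratio)

print(contrast_to_fstops(1024))             # 10.0, since 2**10 = 1024
print(round(contrast_to_density(1000), 6))  # 3.0,  since 10**3 = 1000
```

So a scanner's Dmax of 3.0 and a camera's 10 f-stops describe nearly the same contrast ratio (1000:1 versus 1024:1).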

Note the huge discrepancy between the reproducible dynamic range of prints and that measurable by scanners and digital cameras. Care should be taken when interpreting the above numbers: values shown are a theoretical maximum, assuming noise is not limiting, and are rough approximations only. Actual values depend on the age of the device, model generation, price range, etc.

Additionally, real-world dynamic range is a strong function of ambient light for prints and display devices. Prints not viewed under adequate light may not give their full dynamic range, while display devices require near complete darkness to realize their full potential, especially for plasma displays. For this reason, attention should be paid to both contrast ratio and luminosity: high contrast ratios (without a correspondingly higher luminosity) can be completely negated by even ambient candle light. Be warned that contrast ratios for display devices are often greatly exaggerated, as there is no manufacturer standard for listing these. Contrast ratios in excess of 500:1 are often only the result of a very dark black point, instead of a brighter white point.

For a comparison with real-world dynamic range in a scene, these vary from approximately 3 f-stops for a cloudy day with nearly even reflectivity, to 12+ f-stops for a sunny day with highly uneven reflectivity.

THE HUMAN EYE

The human eye can actually perceive a greater dynamic range than is ordinarily possible with a camera. For situations of extreme low-light star viewing (where our eyes have adjusted to use rod cells for night vision), our eyes can see over a range of nearly 24 f-stops. The problem with these numbers is that our eyes are extremely adaptable: our eye's sensitivity and dynamic range actually change depending on brightness and contrast.

Therefore, for accurate comparisons with a single photo (at constant aperture, shutter and ISO), we can only consider the instantaneous dynamic range (where our pupil opening is unchanged). This would be similar to looking at one region within a scene, letting our eyes adjust, and not looking anywhere else. For this scenario there is much disagreement; most estimate anywhere from 10-14 f-stops. If we were to consider situations where our pupil opens and closes for varying light, our eyes approach even higher dynamic ranges (see tutorial on "Color Perception of the Human Eye").

BIT DEPTH & MEASURING DYNAMIC RANGE

Even if one's digital camera could capture a vast dynamic range, the precision at which light measurements are translated into digital values may limit usable dynamic range. The workhorse which translates these continuous measurements into discrete numerical values is called the analog to digital (A/D) converter. The accuracy of an A/D converter can be described in terms of bits of precision, similar to bit depth in digital images, although care should be taken that these concepts are not used interchangeably. The A/D converter is what creates values for the digital camera's RAW file format.

Bit Precision of A/D Converter    f-stops    Density    Contrast Ratio
8                                 8          2.4        256:1
10                                10         3.0        1024:1
12                                12         3.6        4096:1
14                                14         4.2        16384:1
16                                16         4.8        65536:1

Note: The above values are for A/D converter precision only, and should not be used to interpret results for 8 and 16-bit image files. Furthermore, this applies only to linear A/D converters; a non-linear A/D converter's bit precision does not necessarily correlate with dynamic range.
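The table follows directly from the linearity assumption; a short sketch that regenerates it (valid for linear A/D converters only, per the note above):

```python
import math

# For a linear A/D converter, n bits of precision can encode at most a
# 2**n : 1 contrast ratio, which is n f-stops or log10(2**n) density units.
for bits in (8, 10, 12, 14, 16):
    ratio = 2 ** bits
    density = math.log10(ratio)
    print(f"{bits:2d} bits  {bits:2d} f-stops  {density:.1f} D  {ratio}:1")
```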

Most digital cameras use a 10 to 14-bit A/D converter, and so their theoretical maximum dynamic range is 10-14 stops. However, this high bit depth mainly helps minimize image posterization, since total dynamic range is usually limited by noise levels. Similar to how a high bit depth image does not necessarily contain more colors, a digital camera with a high precision A/D converter cannot necessarily record a greater dynamic range. In practice, the dynamic range of a digital camera does not even approach the A/D converter's theoretical maximum; 5-9 stops is generally all one can expect from the camera.

INFLUENCE OF IMAGE TYPE & TONAL CURVE

Can digital image files actually record the full dynamic range of high-end devices? There seems to be much confusion on the internet about the relevance of image bit depth to recordable dynamic range. We first need to distinguish between recordable dynamic range and displayable dynamic range. Even an ordinary 8-bit JPEG image file can conceivably record an infinite dynamic range, assuming that the right tonal curve is applied during RAW conversion (see tutorial on curves, to be added), and that the A/D converter has the required bit precision.

The problem lies in the usability of this dynamic range: if too few bits are spread over too great a tonal range, then this can lead to image posterization. Displayable dynamic range depends on the gamma correction or tonal curve implied by the image file, or used by the video card and display device. Assuming that each A/D converter number is proportional to actual image brightness (meaning twice the pixel value represents twice the brightness), 10-bits of tonal precision translates into a possible brightness range of 0-1023 (since 2^10 = 1024 levels); in other words, 10-bits of precision can only encode a contrast ratio of 1024:1. Using a gamma of 2.2 (standard for PCs), on the other hand, it would be theoretically possible to encode a dynamic range of nearly 18 f-stops (see tutorial on gamma correction, under motivation: dynamic range). Again though, this would suffer from severe posterization. The only current standard solution for encoding a nearly infinite dynamic range (with no visible posterization) is to use high dynamic range (HDR) image files in Photoshop (or another supporting program).

56. DIGITAL IMAGE INTERPOLATION

Image interpolation occurs in all digital photos at some stage, whether this be in Bayer demosaicing or in photo enlargement. It occurs anytime you resize or remap (distort) your image from one pixel grid to another. Image resizing is necessary when you need to increase or decrease the total number of pixels, whereas remapping can occur under a wider variety of scenarios: correcting for lens distortion, changing perspective, and rotating an image.

[Original Image | After Interpolation]

Even if the same image resize or remap is performed, the results can vary significantly depending on the interpolation algorithm. Interpolation is only an approximation, therefore an image will always lose some quality each time it is performed. This tutorial aims to provide a better understanding of how the results may vary, helping you to minimize any interpolation-induced losses in image quality.

CONCEPT

Interpolation works by using known data to estimate values at unknown points. For example: if you wanted to know the temperature at noon, but only measured it at 11AM and 1PM, you could estimate its value by performing a linear interpolation. If you had an additional measurement at 11:30AM, you could see that the bulk of the temperature rise occurred before noon, and could use this additional data point to perform a quadratic interpolation:
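The temperature example can be written out explicitly. A sketch with made-up readings (20 °C at 11AM, 23 °C at 11:30AM, 25 °C at 1PM, with hours as decimal numbers):

```python
def linear_interp(x, p0, p1):
    """Straight line through two (x, y) samples."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def quadratic_interp(x, p0, p1, p2):
    """Lagrange form of the unique parabola through three (x, y) samples."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# Noon estimate from the 11AM and 1PM readings alone:
print(linear_interp(12, (11, 20), (13, 25)))                           # 22.5
# Adding the 11:30AM reading shows most of the rise happened before noon:
print(round(quadratic_interp(12, (11, 20), (11.5, 23), (13, 25)), 2))  # 24.83
```

With the extra sample, the estimate shifts from the midpoint (22.5) toward the 1PM value, exactly as the text describes.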

The more temperature measurements you have which are close to noon, the more sophisticated (and hopefully more accurate) your interpolation algorithm can be.

IMAGE RESIZE EXAMPLE

Image interpolation works in two directions, and tries to achieve a best approximation of a pixel's color and intensity based on the values at surrounding pixels. The following example illustrates how resizing / enlargement works:

[2D Interpolation: Original | Before | After | No Interpolation]

Unlike air temperature fluctuations and the ideal gradient above, pixel values can change far more abruptly from one location to the next. As with the temperature example, the more you know about the surrounding pixels, the better your interpolation will become. Therefore results quickly deteriorate the more you stretch an image, and interpolation can never add detail to your image which is not already present.

IMAGE ROTATION EXAMPLE

Interpolation also occurs each time you rotate or distort an image. The previous example was misleading because it is one which interpolators are particularly good at. This next example shows how image detail can be lost quite rapidly:

[Image Degrades: Original | 45° Rotation | 90° Rotation (Lossless) | 2 X 45° Rotations | 6 X 15° Rotations]

The 90° rotation is lossless because no pixel ever has to be repositioned onto the border between two pixels (and therefore divided). Note how most of the detail is lost in just the first rotation, although the image continues to deteriorate with successive rotations. One should therefore avoid rotating your photos when possible; if an unleveled photo requires it, rotate no more than once.

The above results use what is called a "bicubic" algorithm, and show significant deterioration. Note the overall decrease in contrast evident by color becoming less intense, and how dark haloes are created around the light blue. The above results could be improved significantly, depending on the interpolation algorithm and subject matter.

TYPES OF INTERPOLATION ALGORITHMS

Common interpolation algorithms can be grouped into two categories: adaptive and non-adaptive. Adaptive methods change depending on what they are interpolating (sharp edges vs. smooth texture), whereas non-adaptive methods treat all pixels equally.

Non-adaptive algorithms include: nearest neighbor, bilinear, bicubic, spline, sinc, lanczos and others. Depending on their complexity, these use anywhere from 0 to 256 (or more) adjacent pixels when interpolating. The more adjacent pixels they include, the more accurate they can become, but this comes at the expense of much longer processing time. These algorithms can be used to both distort and resize a photo.

Adaptive algorithms include many proprietary algorithms in licensed software such as: Qimage, PhotoZoom Pro, Genuine Fractals and others. Many of these apply a different version of their algorithm (on a pixel-by-pixel basis) when they detect the presence of an edge, aiming to minimize unsightly interpolation artifacts in regions where they are most apparent. These algorithms are primarily designed to maximize artifact-free detail in enlarged photos, so some cannot be used to distort or rotate an image.

NEAREST NEIGHBOR INTERPOLATION

Nearest neighbor is the most basic and requires the least processing time of all the interpolation algorithms, because it only considers one pixel: the closest one to the interpolated point. This has the effect of simply making each pixel bigger.

BILINEAR INTERPOLATION

Bilinear interpolation considers the closest 2x2 neighborhood of known pixel values surrounding the unknown pixel. It then takes a weighted average of these 4 pixels to arrive at its final interpolated value. This results in much smoother looking images than nearest neighbor. The diagram to the left is for a case when all known pixel distances are equal, so the interpolated value is simply their sum divided by four.

BICUBIC INTERPOLATION

Bicubic goes one step beyond bilinear by considering the closest 4x4 neighborhood of known pixels, for a total of 16 pixels. Since these are at various distances from the unknown pixel, closer pixels are given a higher weighting in the calculation. Bicubic produces noticeably sharper images than the previous two methods, and is perhaps the ideal combination of processing time and output quality. For this reason it is a standard in many image editing programs (including Adobe Photoshop), printer drivers and in-camera interpolation.

HIGHER ORDER INTERPOLATION: SPLINE & SINC

There are many other interpolators which take more surrounding pixels into consideration, and are thus also much more computationally intensive. These algorithms include spline and sinc, and they retain the most image information after an interpolation. They are therefore extremely useful when the image requires multiple rotations / distortions in separate steps. However, for single-step enlargements or rotations, these higher-order algorithms provide diminishing visual improvement as processing time is increased.

INTERPOLATION ARTIFACTS TO WATCH OUT FOR

All non-adaptive interpolators attempt to find an optimal balance between three undesirable artifacts: edge halos, blurring and aliasing.

[Original | Aliasing | Blurring | Edge Halo]

Even the most advanced non-adaptive interpolators always have to increase or decrease one of the above artifacts at the expense of the other two; therefore at least one will be visible. Also note how the edge halo is similar to the artifact produced by over sharpening with an unsharp mask, and improves the appearance of sharpness by increasing acutance.
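As a concrete sketch of the 2x2 weighted average described above (illustrative code, not any editor's actual implementation):

```python
def bilinear(grid, x, y):
    """Bilinear interpolation at fractional coordinates (x, y) on a 2D grid
    of known values, for points interior to the grid: a weighted average of
    the closest 2x2 neighborhood, weights falling off linearly with distance."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    bottom = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy

grid = [[10, 30],
        [50, 70]]
# Equidistant from all four known pixels: simply their sum divided by four.
print(bilinear(grid, 0.5, 0.5))   # 40.0
```

Bicubic follows the same pattern but fits cubic weights over a 4x4 neighborhood, which is why it costs more processing time per pixel.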

Adaptive interpolators may or may not produce the above artifacts, however they can also induce non-image textures or strange pixels at small scales:

[Original Image with Small-Scale Textures | Crop Enlarged 220%]

On the other hand, some of these "artifacts" from adaptive interpolators may also be seen as benefits. Since the eye expects to see detail down to the smallest scales in fine-textured areas such as foliage, these patterns have been argued to trick the eye from a distance (for some subject matter).

ANTI-ALIASING

Anti-aliasing is a process which attempts to minimize the appearance of aliased or jagged diagonal edges, termed "jaggies." These give text or images a rough digital appearance:

[Crop shown at 300%, with jagged edges]

Anti-aliasing removes these jaggies and gives the appearance of smoother edges and higher resolution. It works by taking into account how much an ideal edge overlaps adjacent pixels. The aliased edge simply rounds up or down with no intermediate value, whereas the anti-aliased edge gives a value proportional to how much of the edge was within each pixel:

[Ideal Edge on Low Resolution Grid: Aliased | Anti-Aliased]

A major obstacle when enlarging an image is preventing the interpolator from inducing or exacerbating aliasing. Many adaptive interpolators detect the presence of edges and adjust to minimize aliasing while still retaining edge sharpness. Since an anti-aliased edge contains information about that edge's location at higher resolutions, it is also conceivable that a powerful adaptive (edge-detecting) interpolator could at least partially reconstruct this edge when enlarging.

NOTE ON OPTICAL vs. DIGITAL ZOOM

Many compact digital cameras can perform both an optical and a digital zoom. A camera performs an optical zoom by moving the zoom lens so that it increases the magnification of light before it even reaches the digital sensor. In contrast, a digital zoom degrades quality by simply interpolating the image after it has been acquired at the sensor.

[10X Optical Zoom | 10X Digital Zoom]

Even though the photo with digital zoom contains the same number of pixels, the detail is clearly far less than with optical zoom. Digital zoom should be almost entirely avoided, unless it helps to visualize a distant object on your camera's LCD preview screen. Alternatively, if you regularly shoot in JPEG and plan on cropping and enlarging the photo afterwards, digital zoom at least has the benefit of performing the interpolation before any compression artifacts set in. If you find you are needing digital zoom too frequently, purchase a teleconverter add-on, or better yet: a lens with a longer focal length.

For further reading, please visit more specific tutorials on: Digital Photo Enlargement; Image Resizing for the Web and Email.

57. CAMERA FLASH: APPEARANCE

Using a camera flash can both broaden the scope and enhance the appearance of your photographic subjects. However, flash is also one of the most confusing and misused of all photographic tools. In fact, the best flash photo is often the one where you cannot even tell a flash was used. This tutorial aims to overcome all the technical terminology in order to focus on the real essence of flash photography: how to control your light and subsequently achieve the desired exposure.

[Camera flashes firing in a stadium: beautiful, but a good example of misuse]

Before proceeding, it's advisable to first read the tutorials on camera metering and camera exposure, covering how aperture, ISO and shutter speed control exposure.

FLASH LIGHTING INTRO

Using a flash is fundamentally different from taking a normal camera exposure, because your subject is being lit by two light sources: your flash, which you have some control over, and the ambient light, which is likely beyond your control. A flash photograph can vary the appearance of a subject by controlling the intensity, position and distribution of light coming from the flash. With ordinary ambient light photos, one can only affect the appearance of a subject by changing exposure and depth of field. While this fact may seem simple and obvious, its consequences are probably not.

Unlike with ambient light photography, one cannot see how their camera flash will affect the scene prior to taking the photograph, since a flash emits within milliseconds or less. Further, a flash is so quick that even after the shot it's nearly impossible to tell what it looked like without checking your camera.

It's therefore critical to develop a good intuition for how the position and distribution of your camera's flash influences the appearance of your subject. These qualitative aspects will be the focus of the first part of this tutorial; the second part will concentrate on camera settings for achieving the desired flash exposure.

High Contrast vs. Low Contrast

An important concept in flash photography is the following: for a given subject, the distribution of the light source determines how much contrast this subject will have. Contrast describes the brightness difference between the lightest and darkest portions of a subject. When light is more localized (left), one face of the sphere receives intense direct light, while the opposing side is nearly black because it only receives what little light had bounced off the walls. When light is more distributed (right), shadows and highlights appear softer and less intense because this light is hitting the sphere from a wider angle. Photographers often describe light which scatters substantially or originates from a large area as being "soft light," and more concentrated and directional light as being "hard light."

What does this all mean in practice? Generally, photographs of people will appear more appealing if they are captured using less contrast, which is why portraits are usually taken with a flash that first bounces off a large umbrella. Contrast tends to over-exaggerate facial features due to deep shadows being cast across the face. Similarly, if the sphere in the above example had texture, then its texture would have been greatly emphasized in high contrast lighting. For a photo of a person, this would be analogous to giving skin a rougher and often less desirable texture.

As with anything though, too much can be a bad thing. Light which is overly diffuse can cause the subject to look flat and two-dimensional. Landscape photographers understand this well, as it's the flat look created by light which is emitted evenly across the sky on an overcast day. However, overly diffuse light is rarely a problem with flash photography.

LIGHT DISTRIBUTION: BOUNCED FLASH & DIFFUSERS

The big problem is that a camera flash is by its very nature a localized light source. A good flash photographer therefore knows how to make their flash appear as if it had originated from a much larger and more evenly distributed light source. Two ways to achieve this are by using either a flash diffuser or a bounced flash.

A flash diffuser is usually just a simple piece of translucent plastic which fastens over your flash, acting to scatter outgoing light. For outdoor photos this will make very little difference, but for photographs taken indoors this will soften the lighting on your subject, since some of the scattered light from your flash will first bounce off of other objects before hitting your subject. However, be aware that using a flash diffuser can greatly increase the necessary flash intensity.

bounced flash is diffuse but loses intensity

While it may at first sound counterintuitive, aiming your flash *away* from your subject can actually enhance their appearance, by bouncing it off a nearby wall, ceiling or floor. This causes the incident light from your flash to originate from a greater area, just as with a flash diffuser. However, bouncing a flash greatly reduces its intensity, so you will need to have a much stronger flash in order to achieve the same exposure. Additionally, bouncing a flash is often unrealistic for outdoor photographs of people, since they are no longer in a contained environment.

LIGHT POSITION: ON-CAMERA & OFF-CAMERA FLASH

The position of the light source relative to the viewer also affects the appearance of your subject. Whereas the localization of light affects contrast, light source position affects the visibility of a subject's shadows and highlights:

Head-On Lighting: Subject Appears Flat — Off-Angle Lighting: Subject is More Three-Dimensional

The subject with head-on lighting (left) looks less three-dimensional than the subject shown using off-angle lighting (right), which is exactly the difference one sees when using an on-camera versus off-camera flash, respectively. With on-camera flash, the side of the subject which receives all the light is also the side of the subject the camera sees, resulting in shadows that are barely visible. In real-world photographs, subjects generally look best when the light source is neither head-on (as with on-camera flash), nor directly overhead (as is often the case with indoor lighting). Overall, using an on-camera flash can often give a "deer in the headlights" appearance to subjects, such as in the example of the well-known subject to the left.

example of non-ideal on-camera flash

However, it's usually unrealistic to expect that one can have a flash located off of the camera, unless one is in a studio or has a sophisticated setup, as may be the case for a big event like a wedding. The best and easiest way to achieve the look of an off-camera flash using an on-camera flash is to bounce the flash off of an object, such as a wall or ceiling, as discussed previously.

Another option is to use a flash bracket, which increases the distance between the flash unit and the front of your camera. A noticeable improvement is reducing red-eye, because light from the flash no longer bounces straight back to the camera (see red-eye section later). Flash brackets create substantial off-angle lighting for close-range photos, but appear increasingly similar to an on-camera flash the further they are from your subject. A flash bracket's biggest disadvantage is that they can be quite large, since they need to extend far above or to the side of your camera body in order to achieve their effect.

MULTIPLE LIGHT SOURCES: FILL FLASH

fill flash reduces harsh shadows from strong sunlight

The term "fill flash" is used to describe a flash that contributes less to the exposure than does ambient light. Fill flash gets its name because it is effectively "filling in" the shadows of your subject, while not appreciably changing the overall exposure. A fill flash therefore effectively plays the role of a secondary light source.

A common misconception is that a flash is only used for situations where it's dark. Contrary to this belief, fill flash is most useful under bright ambient lighting, such as in afternoon sunlight on a clear day (example to the right). It can dramatically improve the appearance of people being photographed in otherwise harsh outdoor lighting, such as when your subject is back-lit, or when the lighting has too much contrast.

When there is plenty of ambient light, most cameras do not fire a flash in automatic mode unless the scene is rather dimly lit; in order to use a fill flash, you will need to force your flash to fire. However, compact and SLR cameras will then default to using their flash as a fill flash when it's activated. Just pay close attention to the charge on your camera's battery, since flash can deplete it much more rapidly than normal. The second half of this tutorial will go into more detail about how to achieve the right amount of fill flash.

FLASH & RED-EYE REDUCTION

A big problem with camera flashes is unnatural red eyes in subjects, caused by a flash which glares back from the subject's pupil. The red color is due to the high density of blood vessels directly behind the pupil at the back of the eye. Red-eye can be most distracting when the subject is looking directly into the camera lens, or when their pupils are fully dilated due to dim ambient light. It is also much more prominent when the flash is very localized and directional ("hard light").

example of red-eye caused by flash

Some camera flashes have a red-eye reduction mode, which sends a series of smaller flashes before the exposure so that the subject's pupils are contracted during the actual flash. This does not eliminate red-eye entirely (since the smaller pupils still reflect some light), but it makes red-eye much less prominent since the pupil area is greatly reduced. An alternative method for red-eye reduction would be to just take the photo where it is brighter, or to increase the amount of ambient light — both will naturally contract the pupil.

Another technique is to use digital red-eye removal, which works by using image editing software to select the red pupils and change their hue to match the person's natural eye color. However, this technique should only be used as a last resort, since it does not address the underlying cause of red-eye, and is difficult to perform so that the eye looks natural in a detailed print. Subjects can easily end up not having any pupils at all, or can have portions of their eye that are colored like a blue iris but still have the texture of a pupil.

The only ways to eliminate red-eye entirely are (i) to have the subject look away from the camera, (ii) to use a flash bracket, an off-camera flash or a bounced flash, or (iii) to avoid using a flash in the first place.

EXTERNAL FLASH UNITS

External flash units are usually much more powerful than flash units which are built into your camera. Often only an external flash unit has enough power to bounce off a distant wall or ceiling and still adequately illuminate the subject. Even though an in-camera flash has enough intensity for direct light on nearby people, this type of light can be quite harsh. Furthermore, external flashes are also a little further from your lens's line of sight, which can reduce red-eye and slightly improve light quality. An added benefit is that external flash units are usually easier to modify with diffusers, reflectors, brackets, color filters and other add-ons.

FLASH WHITE BALANCE

flash vs ambient white balance

Most flash units emit light which has a color temperature of about 5000K, which is comparable to daylight (see tutorial on white balance). Ambient light will therefore have a color tint if it differs substantially from 5000K, since most cameras automatically set their white balance to match the flash (if it's used). The tint is most apparent with artificial lighting. Flash white balance issues can also result from a flash which bounces off a colored surface, such as a wall which is painted orange or green. However, bouncing off a colored surface will not necessarily change the white balance of your flash if ambient light bounces off that surface as well. Alternatively, the flash's white balance can be intentionally modified to achieve a given effect. Some flash diffusers have a subtle warming effect, for example, in order to better match indoor incandescent lighting, or to give the appearance of light from a sunset.

Please continue onto the second half of this tutorial: Camera Flash, Part 2: Flash Ratios & Exposure.

58. CAMERA FLASH: EXPOSURE

The first part of the camera flash tutorial focused on the qualitative aspects of using a camera's flash to influence a subject's appearance; this second part focuses on what camera settings to use in order to achieve the desired flash exposure. These settings matter most when balanced flash ratios (1:4 to 4:1) make light from both flash and ambient sources clearly distinguishable.

FLASH EXPOSURE OVERVIEW

Recall that using a flash is fundamentally different from taking a normal camera exposure because your subject is being lit by two light sources: your flash, which you have some control over, and the ambient light, which is likely beyond your control. In this part of the tutorial we'll focus on the other two consequences of this fact, as they pertain to flash exposure:

• A flash photograph actually consists of two separate exposures: one for ambient light and the other for flash. A flash pulse is usually very brief compared to the exposure time, which means that the amount of flash captured by your camera is independent of your shutter speed. Aperture and ISO speed, however, still affect flash and ambient light equally.
• The key is knowing how to achieve the desired mix between light from your flash and light from ambient sources — while also having the right amount of total light (from all sources) to achieve a properly exposed image.

Diagram Illustrating the Flash Exposure Sequence
(Illustration shown roughly to scale for a 1/200th second exposure with a 4:1 flash ratio; flash shown for first curtain sync.)

Newer SLR cameras also fire a pre-flash in order to estimate how bright the actual flash needs to be; this occurs in the split second between when you hold the shutter button and when the shutter opens. A pre-flash is not emitted with much older flash units.

CONCEPT: FLASH RATIO

The "flash ratio" is an important way to describe the mix between ambient light and light from your flash. In this tutorial, the flash ratio* is used to describe the ratio between light from the flash and ambient light. At one extreme of this ratio is ordinary ambient light photography (left), and at the other extreme is photography using mostly light from the flash (right). Realistically though, there's always some amount of ambient light, so an infinite flash ratio is just a theoretical limit.

Flash Ratio: N/A or 0 — Only Ambient Light — settings: no flash
Flash Ratio: 1:8 to 1:2 — Fill Flash — longest exposure, weakest flash
Flash Ratio: 1:1 — Balanced Flash — shorter exposure, weaker flash
Flash Ratio: 2:1 to 8:1 — Strong Flash — shortest exposure, strongest flash

*Technical Note: Sometimes the flash ratio is instead described in terms of the ratio between total light and light from the flash. In that case, a 2:1, 3:1 and 5:1 ratio would be equivalent to a 1:1, 1:2 and 1:4 ratio in the table above, respectively. Unfortunately both conventions are often used.

Since the shutter speed doesn't affect the amount of light captured from your flash (but does affect ambient light), you can use this fact to control the flash ratio. For a given amount of ambient light, the mix of flash and ambient light is therefore adjusted using only two camera settings: (i) the length of the exposure and (ii) the flash intensity.

It's important to also note that not all flash ratios are necessarily attainable with a given flash unit or ambient light intensity. For example, using a subtle 1:8 fill flash might be impractical if there's very little ambient light and your lens doesn't have a large maximum aperture (or if you are unable to use a high ISO speed, or to capture the photo using a tripod). On the other hand, if ambient light is extremely intense, or if your flash is far from your subject, it's unlikely that the internal flash of a compact camera could achieve flash ratios approaching 10:1.

For this reason, most photographers will likely want to use their flash as a fill flash, since this is the simplest type of flash photography; flash ratios less than 1:2 can often achieve excellent results using a flash that is built into the camera. Flash ratios of 1:2 or greater are where the topics in the first half of this tutorial become most important — including the flash position and its apparent light area — since the flash can appear quite harsh unless carefully controlled.
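Why the shutter speed controls the flash ratio can be sketched numerically. This is an illustrative toy in arbitrary light units (not a metering algorithm): the flash pulse ends long before most shutters close, so its contribution is fixed, while the ambient contribution scales with exposure time.

```python
# Toy sketch: flash exposure is independent of shutter speed, while
# ambient exposure scales linearly with it -- so lengthening the
# exposure lowers the flash ratio. All quantities are arbitrary units.

def exposures(ambient_rate, flash_light, shutter_s):
    ambient = ambient_rate * shutter_s   # grows with exposure time
    flash = flash_light                  # brief pulse, time-independent
    return flash, ambient

for shutter in (1 / 200, 1 / 100, 1 / 50):
    f, a = exposures(ambient_rate=400, flash_light=2.0, shutter_s=shutter)
    print(f"shutter {shutter:.4f}s -> flash {f}, ambient {a}, ratio {f / a:g}:1")
```

Doubling the exposure time here halves the flash ratio (1:1 at 1/200 s becomes 1:2 at 1/100 s), which is exactly the lever described above.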

FLASH EXPOSURE MODES

One of the most difficult tasks in flash photography is understanding how different camera and flash metering modes will affect an overall exposure. Some modes assume you only want a fill flash, while others virtually ignore ambient light and assume that your camera's flash will be the dominant source of illumination. In all modes, cameras use their flash as either the primary light source or as a fill flash; the key is knowing when and why your camera uses its flash in each of these ways. A table summarizing the most common camera modes is listed below:

Camera Mode — Flash Ratio
Auto ( ) — 1:1 or greater if dim; otherwise flash doesn't fire
Program (P) — 1:1 or greater if dim; fill flash if bright
Aperture Priority (Av) — fill flash
Shutter Priority (Tv) — fill flash if bright; otherwise greater than 1:1
Manual (M) — whatever flash ratio is necessary

In Auto mode ( ), the flash turns on only if the shutter speed would otherwise drop below what is deemed as being hand-holdable — usually about 1/60 of a second. The flash ratio then increases progressively as light hitting the subject gets dimmer.

Program (P) mode is similar to Auto, except one can also force a flash to be used in situations where the subject is well-lit, in which case the flash will act as a fill flash. Most cameras intelligently decrease their fill flash as ambient light increases (called "auto fill reduction" in Canon models). The fill flash ratio may therefore be anywhere from 1:1 (in dim light) to 1:4 (in bright light). For situations where the shutter speed is longer than 1/60 of a second, flash in Program mode acts just as it did in Auto mode, but the shutter speed remains at 1/60 of a second.

Aperture Priority (Av) and Shutter Priority (Tv) modes have even different behavior. In Av mode, the flash ratio never increases beyond about 1:1, and exposures are as long as necessary (aka "slow sync"), unlike with Auto and P modes. In both Av and Tv modes, just as with Program mode, one usually has to force their flash to "on," which results in the camera using the flash as a fill flash. In Tv mode, however, the flash ratio may also increase if the necessary f-stop is smaller than available with your lens.

In Manual (M) mode, the camera exposes ambient light based on how you set the aperture, shutter speed and ISO. The flash exposure is then calculated based on whatever remaining light is necessary to illuminate the subject. Manual mode therefore enables a much broader range of flash ratios than the other modes.

In all modes, the relevant setting in your viewfinder will blink if a flash exposure is not possible using that setting. This might include requiring an aperture that is outside the range available with your lens, or a shutter speed that is faster than what your camera/flash system supports (the "X-sync speed" — usually 1/200 to 1/500 second).

FLASH EXPOSURE COMPENSATION — FEC

The key to changing the flash ratio is using the right combination of flash exposure compensation (FEC) and ordinary exposure compensation (EC). FEC works much like regular EC: it tells the camera to take whatever flash intensity it was going to use, and to override that by the FEC setting. The big difference is that while EC affects the exposures for both flash and ambient light, FEC only affects flash intensity.

Both EC and FEC are specified in terms of stops of light. Each positive or negative stop refers to a doubling or halving of light, respectively. Therefore a +1 EC or FEC value means a doubling of light, whereas a -2 value means there's a quarter as much light.

The problem is that it is complicated to adjust both EC and FEC to change the flash ratio without also changing the overall exposure. The following table summarizes how to change the flash ratio if it had originally been 1:1:
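The stops arithmetic above is simple to express directly. A minimal sketch (each stop of EC or FEC is a power of two on the amount of light):

```python
# Stops arithmetic: each positive or negative stop doubles or halves
# the light, so an EC/FEC setting maps to a multiplier of 2**stops.

def light_factor(stops):
    """Multiplier on the amount of light for an EC or FEC setting."""
    return 2.0 ** stops

print(light_factor(+1))   # 2.0  -> doubling
print(light_factor(-2))   # 0.25 -> a quarter as much light
```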

Flash Ratio:  1:8 | 1:4 | 1:2 | 1:1 | 2:1 | 4:1 | 8:1
FEC Setting:  -3 | -2 | -1 | 0 | +1 | +2 | +3
EC Setting:   +2/3 to +1 | +2/3 | +1/3 to +1/2 | 0 | -1/2 to -2/3 | -1 1/3 | -2 to -2 1/3

The above table shows how to change the flash ratio by adjusting FEC and EC, assuming a default 1:1 flash ratio. EC settings are listed as a range because they can only be set in 1/3 to 1/2 stop increments, so use the nearest value available.

How to increase the flash ratio: dial in a positive flash exposure compensation, while simultaneously entering a negative exposure compensation. Assuming a default 1:1 flash ratio, achieving a 2:1 flash ratio requires an FEC value of +1 and a corresponding EC value of -1/2 to -2/3.

How to decrease the flash ratio: dial in a negative flash exposure compensation, while simultaneously entering a positive exposure compensation (but not exceeding +1). Assuming a default 1:1 flash ratio, achieving a 1:2 flash ratio requires an FEC value of -1 and a corresponding EC value of about +1/3 to +1/2.

Note that the FEC value is straightforward: it's just equal to the number of stops you intend to increase or decrease the flash ratio by. However, the EC setting is far from straightforward: it depends not only on how much you want to change the flash ratio by, but also on the original flash ratio — and it's rarely an integer.

As an example of why EC is much more complicated than FEC, let's walk through what happens when you change the flash ratio from 1:1 to 2:1 in the above example. You will first want to dial in +1 FEC, since that's the easiest part. However, if only FEC is increased +1, then the amount of light from flash doubles while light from ambient remains the same — thereby increasing the overall exposure. We therefore need to dial in a negative EC to compensate for this. But how much EC? Since the original flash ratio was 1:1, the total amount of light using +1 FEC is now 150% of what it was before. We therefore need to use an EC value which reduces the total amount of light by a factor of 2/3 (150% × 2/3 = 100%). Since each negative EC stop halves the amount of light, we know this EC value has to be between 0 and -1, but the exact value isn't something we can readily calculate in our head: it's equal to log2(2/3), which comes out to about -0.58.

Fortunately, the flash ratio calculator (below) solves this problem for us. While it's not something one would necessarily use in the field, hopefully it can help you develop a better intuition for roughly what EC values are needed in different situations.

Flash Ratio Calculator: enter the Original Flash Ratio and New Flash Ratio to obtain the FEC Setting and EC Setting.
(note: EC can only be set in 1/3 to 1/2 stop increments, so use the nearest value available)

Finally, it's important to note that FEC is not always used to change the flash ratio. It can also be used to override errors by your camera's flash metering system; how and why this might happen is discussed in the next section.

TTL FLASH METERING

Most current SLR flash systems employ some form of through-the-lens (TTL) metering. Digital TTL flash metering works by bouncing one or more tiny pre-flash pulses off the subject immediately before the exposure begins, which are then used to estimate what flash intensity is needed during the actual exposure.
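The flash ratio calculator's math can be sketched in a few lines. This is an illustrative reconstruction (not the tutorial's actual calculator code), assuming flash and ambient light add linearly, that FEC scales only the flash by 2**FEC, and that EC then rescales the total exposure back to 100%:

```python
import math

def flash_ratio_settings(old_ratio, new_ratio):
    """Ratios are flash:ambient as numbers (1.0 for 1:1, 2.0 for 2:1,
    0.5 for 1:2). Returns (FEC, EC) in stops."""
    fec = math.log2(new_ratio / old_ratio)       # stops of flash change
    flash_frac = old_ratio / (old_ratio + 1)     # flash share of total
    ambient_frac = 1 - flash_frac
    total_after_fec = ambient_frac + flash_frac * 2 ** fec
    ec = -math.log2(total_after_fec)             # restore the exposure
    return fec, ec

fec, ec = flash_ratio_settings(1.0, 2.0)
print(f"FEC {fec:+.2f}, EC {ec:+.2f}")   # FEC +1.00, EC -0.58
```

For 1:1 to 2:1 this reproduces the walkthrough above: +1 FEC raises the total light to 150%, and EC = log2(2/3) ≈ -0.58 brings it back to 100%.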

Just after the exposure begins, the flash unit starts emitting its flash pulse. Your camera then measures how much of this flash has reflected back in real-time, and quenches (stops) the flash once the necessary amount of light has been emitted. Depending on the camera mode, the flash will be quenched once it either balances ambient light (fill flash) or adds whatever light is necessary to expose the subject (greater than 1:1 flash ratio).

Since a flash exposure is actually two sequential exposures, a lot can go wrong: both (1) ambient light metering and (2) flash metering have to be correct. If your TTL flash metering system emits an incorrect amount of flash, not only will your overall exposure be off, but the flash ratio will be off as well — thereby affecting the appearance of your subject. We'll therefore deal with each source of metering error separately.

(1) Ambient Light Metering is the first to occur. It's quite important since it controls the overall exposure, determines the combination of aperture, ISO and shutter speed, and is what the subsequent flash metering will be based on. Recall that in-camera metering goes awry primarily because it can only measure reflected and not incident light (see tutorial on camera metering & exposure). Similarly, situations with high or low-key lighting can also throw off your camera's metering (see digital camera histograms). Note: if you suspect your camera's ambient light metering will be incorrect, then dialing in a positive or negative exposure compensation (EC) will, somewhat ironically, fix ambient light metering and improve flash metering at the same time.

(2) Flash Metering is based on the results from both the pre-flash and from ambient light metering. The biggest causes for flash error are the distance to your subject, the distribution of ambient light, and your subject's reflective properties.

Flash Illumination vs. Distance
light fall-off is so rapid that objects 2x as far receive 1/4 the amount of flash

The subject distance is important because it strongly influences how much flash will hit and bounce back from this subject. Even with a proper flash exposure, if your subject (or other objects) traverse a large distance to and from the camera, expect regions of these objects which are closer to the camera to appear much brighter than regions which are further.

example of complex, uneven ambient light

Complex lighting situations can also be problematic. If ambient light illuminates your subject differently than the background or other objects, such as in the example above, the flash might mistakenly try to balance light hitting the overall scene (or some other object), as opposed to light which only hits your subject.

Incident vs. Reflected Light
Reflective Subject

If your subject is light and reflective, then your camera will mistakenly assume that this apparent brightness is caused by lots of incident light, as opposed to its high reflectance. Since your camera overestimates the amount of ambient light, it therefore ends up under-exposing the subject. Similarly, a dark and unreflective subject often results in an over-exposure. White wedding dresses and black tuxedos are perfect examples of highly reflective and unreflective subjects that can throw off your camera's exposure — even though weddings are often where flash photography and accurate exposures are most important.
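The distance fall-off follows the inverse-square law, and can be sketched as (a toy illustration in relative units):

```python
# Inverse-square fall-off: flash illumination drops with the square of
# subject distance, so an object twice as far receives 1/4 the light.

def relative_flash(distance, reference=1.0):
    """Illumination relative to a subject at `reference` distance."""
    return (reference / distance) ** 2

print(relative_flash(1.0))  # 1.0
print(relative_flash(2.0))  # 0.25 -> 2x as far, 1/4 the flash
```

This is why a group portrait spanning a large depth is hard to light with a single direct flash: the front row can easily receive several times the illumination of the back row.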

Additionally, since flash metering occurs after your camera meters for ambient light, it is important not to use the auto exposure (AE) lock setting when using the focus and recompose technique. If available, one should instead use flash exposure lock (FEL).

The particular reflective properties of objects in your photo can also throw off flash metering. This might include flash glare and other hard reflections from mirrors, metal, marble, glass or other similar objects. These objects may also create additional unintended sources of hard light, which can cast additional shadows on your subject.

There are also subtleties with how different manufacturers' metering systems work. For Canon EOS digital, you will likely have either E-TTL or E-TTL II; for Nikon digital it will be D-TTL or i-TTL. Many of these flash metering algorithms are complicated and proprietary, and differences often only arise in situations with uneven ambient lighting. Therefore the best approach is to experiment with a new flash system before using it for critical photos, so you can get a better feel for when metering errors might occur.

FIRST & SECOND CURTAIN SYNC

First and second curtain sync are flash exposure settings that affect how a subject's motion blur is perceived. Since a flash pulse is usually much shorter than the exposure time, a flash photo of a moving object is comprised of both a blurred portion, caused by the slower ambient light exposure, and a sharper portion, caused by the much faster flash pulse. Each of these is effectively overlaid to create the final flash photograph. First and second curtain sync control whether the blurred portion trails behind or in front of the subject's flash image, respectively, by synchronizing the flash pulse with the beginning ("first curtain") or end ("second curtain") of the exposure.

With first curtain sync, most of the ambient light is captured after the flash pulse — causing the blurred portion to streak in front of the sharper flash image. This can give moving objects the appearance of traveling in the opposite direction of their actual motion. In the example below, the swan has motion streaks which make it appear as though it is rapidly swimming backwards, and the snow appears to be "falling" upwards:

Example of First Curtain Sync: Appearance of Moving Objects

For the above reasons, first curtain sync is usually undesirable for subjects in motion — unless the exposure time is kept short enough that no streaks are visible. On the other hand, second curtain sync can be very useful for exaggerating subject motion, because the light streaks appear behind the moving subject — and increasingly so for longer exposure times.

However, most cameras do not use second curtain sync by default, because it can make timing the shot more difficult. Second curtain sync introduces much more of a delay between when you press the shutter button and when the flash fires: one therefore needs to anticipate where the subject will be at the end of the exposure, as opposed to when the shutter button is pressed. This can be very tricky to time correctly for exposures of a second or more, or for really fast-moving subjects.

59. DIFFRACTION & PHOTOGRAPHY

Diffraction is an optical effect which can limit the total resolution of your photography — no matter how many megapixels your camera may have. Ordinarily light travels in straight lines through uniform air; however, it begins to disperse or "diffract" when squeezed through a small hole (such as your camera's aperture). This effect is normally negligible, but increases for very small apertures. Since photographers pursuing better sharpness use smaller apertures to achieve a greater depth of field, at some aperture the softening effects of diffraction offset any gain in sharpness due to better depth of field. When this occurs your camera optics are said to have become diffraction limited. Knowing this limit can help you to avoid any subsequent softening, and the unnecessarily long exposure time or high ISO speed required for such a small aperture.

BACKGROUND

Parallel light rays which pass through a small aperture begin to diverge and interfere with one another. This becomes more significant as the size of the aperture decreases relative to the wavelength of light passing through, but occurs to some extent for any size of aperture or concentrated light source.

Large Aperture vs. Small Aperture

Since the divergent rays now travel different distances, some move out of phase and begin to interfere with each other — adding in some places and partially or completely canceling out in others. This interference produces a diffraction pattern with peak light intensities where the amplitudes of the light waves add, and less light where they cancel out. If one were to measure the intensity of light reaching each position on a line, the data would appear as bands similar to those shown below.

Airy Disk: 3-D Visualization and Spatial Position

For an ideal circular aperture, the 2-D diffraction pattern is called an "airy disk," after its discoverer George Airy. The width of the airy disk is used to define the theoretical maximum resolution for an optical system (defined as the diameter of the first dark circle). When the diameter of the airy disk's central peak becomes large relative to the pixel size in the camera (or maximum tolerable circle of confusion), it begins to have a visual impact on the image. Alternatively, if two airy disks become any closer than half their width, they are also no longer resolvable (Rayleigh criterion).

Barely Resolved — No Longer Resolved
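For a circular aperture, the airy disk's first dark circle has a diameter of approximately 2.44 x wavelength x f-number at the sensor. A minimal sketch of this relationship (the 510 nm wavelength matches the mid-spectrum assumption used in the pixel-size discussion):

```python
# Airy disk diameter (first dark circle) for a circular aperture:
# d = 2.44 * wavelength * N, with wavelength in microns and N the
# f-number. 510 nm approximates the middle of the visible spectrum.

def airy_disk_diameter_um(f_number, wavelength_nm=510):
    """Diameter of the airy disk's first dark circle, in microns."""
    return 2.44 * (wavelength_nm / 1000.0) * f_number

for n in (8, 11, 16, 22):
    print(f"f/{n}: {airy_disk_diameter_um(n):.1f} um")
```

Note that the diameter scales linearly with the f-number: stopping down from f/8 to f/16 doubles the airy disk, which is why diffraction softening appears so quickly at small apertures.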

Diffraction thus sets a fundamental resolution limit that is independent of the number of megapixels, or the size of the film format. It depends only on the aperture's f-stop (or f-number) setting on your lens, and on the wavelength of light being imaged. One can think of the airy disk as the smallest theoretical "pixel" of detail in photography. Even if two peaks can still be resolved, small apertures can also decrease small-scale contrast significantly due to partial overlap of the secondary ring and other ripples around the central disk (see example photo).

VISUAL EXAMPLE: APERTURE VS. PIXEL SIZE

The size of the airy disk by itself is only useful in the context of depth of field and pixel size. The following interactive table shows the airy disk within a grid which is representative of the pixel size for several camera models (move your mouse over each to change the grid).

Select aperture: f/2.0, f/2.8, f/4.0, f/5.6, f/8.0, f/11, f/16, f/22, f/32

Camera Type                  Pixel Area
Canon EOS 1D                 136.6 µm²
Canon EOS 1Ds                77.6 µm²
Canon EOS 1DMkII / 5D        67.1 µm²
Nikon D70                    61.1 µm²
Canon EOS 10D                54.9 µm²
Canon EOS 1DsMkII            52.2 µm²
Canon EOS 20D / 350D         41.0 µm²
Nikon D2X                    30.6 µm²
Canon PowerShot G6           5.46 µm²

As a result of the sensor's anti-aliasing filter (and the Rayleigh criterion above), the airy disk can have a diameter approaching about 2 pixels before diffraction begins to have a visual impact (assuming an otherwise perfect lens, when viewed at 100% onscreen). As two examples, the Canon EOS 20D begins to show diffraction at around f/11, whereas the Canon PowerShot G6 (compact camera) begins to show its effects at only about f/4.0. On the other hand, the Canon G6 does not require apertures as small as the 20D in order to achieve the same depth of field (for a given angle of view), due to its much smaller total sensor size (more on this later).

Recall that a digital sensor utilizing a bayer array only captures one primary color at each pixel location, and then interpolates these colors to produce the final full color image. Another complication is that bayer arrays allocate twice the fraction of pixels to green as to red or blue light. This means that as the diffraction limit is approached, the first signs will be a loss of resolution in green and in pixel-level luminance.

Since the size of the airy disk also depends on the wavelength of light, each of the three primary colors will reach its diffraction limit at a different aperture, and they all contribute to the detail seen in the final image. Blue light requires the smallest apertures (largest f-stop number) before its resolution is reduced by diffraction. The calculations above assume light in the middle of the visible spectrum (~510 nm); typical digital SLR cameras can capture light with a wavelength of anywhere from 450 to 680 nm, so at best the airy disk would have a diameter of 80% the size shown above (for pure blue light).

Technical Notes:
• The actual pixels in a camera's digital sensor do not occupy 100% of the sensor area, but instead have gaps in between. This calculation assumes that the microlenses are effective enough that this can be ignored.
• Camera manufacturers leave some pixels unused around the edge of the sensor, and the calculation for pixel area assumes that the pixels extend all the way to the edge of each sensor. Since not all manufacturers provide info on the number of used vs. unused pixels, only used pixels were considered when calculating the fraction of total sensor area. The pixel sizes above are thus slightly larger than is actually the case (by no more than 5% in the worst case scenario).
• Nikon digital SLR cameras have pixels which are slightly rectangular, therefore resolution loss from diffraction may be greater in one direction. This effect should be visually negligible, and only noticeable with very precise measurement software.
• The above chart approximates the aperture as being circular, but in reality apertures are polygonal with 5-8 sides (a common approximation).
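The "2 pixel" rule of thumb can be checked numerically. The sketch below is my own illustration, not part of the original interactive table: it derives a pixel pitch from a listed pixel area (assuming square pixels) and estimates the f-number at which the airy disk, approximated as 2.44 × wavelength × f-number, reaches twice that pitch.

```python
import math

# f-number at which the airy disk diameter (2.44 * wavelength * N)
# grows to twice the pixel pitch -- the rough visibility threshold
# described above for a 100% onscreen view.
def diffraction_visible_fstop(pixel_area_um2, wavelength_nm=510.0):
    pixel_pitch_um = math.sqrt(pixel_area_um2)   # assumes square pixels
    wavelength_um = wavelength_nm / 1000.0
    return 2.0 * pixel_pitch_um / (2.44 * wavelength_um)

# Pixel areas (in square microns) taken from the table above:
for camera, area in [("Canon EOS 20D", 41.0), ("Canon PowerShot G6", 5.46)]:
    print(f"{camera}: visible near f/{diffraction_visible_fstop(area):.1f}")
```

This lands near f/10 for the 20D and f/3.8 for the G6, consistent with the roughly f/11 and f/4.0 onsets quoted above.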
WHAT IT LOOKS LIKE

The above calculations and diagrams are quite useful for getting a feel for the concept of diffraction, however only real-world photography can show its visual impact. The following series of images were taken on the Canon EOS 20D, which begins to be diffraction limited at about f/11 (as shown above). Move your mouse over each f-number and notice the differences in the texture of the fabric.

Select Aperture: f/8.0, f/11, f/16, f/22

Note how most of the lines in the fabric are still resolved at f/11, but they are shown with slightly lower small-scale contrast or acutance (particularly where the fabric lines are very close). This is because the airy disks are only partially overlapping, similar to the effect on adjacent rows of alternating black and white airy disks (as shown on the right: "No Overlap of Airy Disks" vs. "Partial Overlap of Airy Disks"). By f/22, almost all fine lines have been smoothed out because the airy disks are larger than this detail.

CALCULATING THE DIFFRACTION LIMIT

The form below calculates the size of the airy disk and assesses whether the system has become diffraction limited. Sections in dark grey are optional, and have the ability to define a custom circle of confusion (CoC).

Diffraction Limit Calculator
Inputs: Maximum Print Dimension; Viewing Distance; Eyesight; Resolution (Megapixels); Camera Type; Selected Aperture; Set Circle of Confusion = Twice Pixel Size?
Results: Pixel Size (µm); Maximum Circle of Confusion (µm); Diameter of Airy Disk (µm); Diffraction Limited?
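The calculator's decision rule can be sketched in a few lines. This is a simplified stand-in, not the form's actual implementation (the conversion of print size, viewing distance and eyesight into a custom CoC is omitted): it flags the system as diffraction limited when the airy disk diameter exceeds the CoC, which defaults to twice the pixel size.

```python
import math

def diffraction_limited(f_number, pixel_area_um2,
                        coc_um=None, wavelength_nm=510.0):
    """Return (airy_diameter_um, coc_um, limited).

    If no custom circle of confusion (CoC) is supplied, it defaults to
    twice the pixel size, mirroring the "Set CoC = Twice Pixel Size"
    option in the form above.
    """
    pixel_size_um = math.sqrt(pixel_area_um2)        # assumes square pixels
    if coc_um is None:
        coc_um = 2.0 * pixel_size_um
    airy_um = 2.44 * (wavelength_nm / 1000.0) * f_number
    return airy_um, coc_um, airy_um > coc_um

# Canon EOS 20D (41.0 square-micron pixels, per the table above):
# not limited at f/8, but limited by f/16.
print(diffraction_limited(8, 41.0))
print(diffraction_limited(16, 41.0))
```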

Note: CF = "crop factor" (commonly referred to as the focal length multiplier). The calculator assumes square pixels, with a 4:3 aspect ratio for compact digital cameras and 3:2 for SLR. This calculator decides that the system has become diffraction limited when the diameter of the airy disk exceeds that of the CoC. The "Set CoC = Twice Pixel Size" checkbox is intended to give you an indication of when diffraction will become visible when viewing your digital image at 100% on a computer screen. Understand that the "twice pixel size" limit is arbitrary, and that there is actually a gradual transition between when diffraction is and is not visible at 100% view. Real-world results will also depend on the lens being used. For a further explanation of each input setting, please see its use in the flexible depth of field calculator.

Technical Note: Since the physical size of the lens aperture is larger for telephoto lenses (f/22 is a larger aperture at 200 mm than at 50 mm), why doesn't the size of the airy disk vary with focal length? This is because the distance to the focal plane also increases with focal length, and so the airy disk diverges more over this greater distance. As a result, the two effects of physical aperture size and focal length cancel out. Therefore the size of the airy disk only depends on the f-stop, which describes both focal length and aperture size. Small differences between lenses remain, but this is mostly due to the different design of, and distance between, the focal plane and "entrance pupil." The term used to universally describe the lens opening is the "numerical aperture" (inverse of twice the f-stop).

NOTES ON REAL-WORLD USE IN PHOTOGRAPHY

Even when a camera system is near or just past its diffraction limit, other factors such as focus accuracy, motion blur and imperfect lenses are likely to be more significant. Softening due to diffraction only becomes a limiting factor for total sharpness when using a sturdy tripod, mirror lock-up and a very high quality lens, and so this only applies for the sharpest lenses. These calculations only show when diffraction becomes significant, not necessarily the location of optimum sharpness (although both often coincide). Most lenses are also quite soft when used wide open (at the largest aperture available), and so there is an optimal aperture in between the largest and smallest settings, usually located right at or near the diffraction limit. There is some variation between lenses though: depending on the lens, the optimum sharpness may even be below the diffraction limit. See "camera lens quality: MTF, resolution & contrast" for more on this.

This should not lead you to think that "larger apertures are better," even though very small apertures create a soft image. Some diffraction is often ok if you are willing to sacrifice some sharpness at the focal plane, in exchange for a little better sharpness at the extremities of the depth of field. Alternatively, very small apertures may be required to achieve a long exposure where needed, such as to create motion blur in flowing water for waterfall photography.

Are smaller pixels somehow worse? Not necessarily. Just because the diffraction limit has been reached with large pixels does not mean the final photo will be any worse than if there were instead smaller pixels and the limit was surpassed; both scenarios still have the same total resolution (although one will produce a larger file). Even though the resolution is the same, the camera with the smaller pixels will render the photo with fewer artifacts (such as color moiré and aliasing). Smaller pixels also provide the flexibility of having better resolution with larger apertures, in situations where the depth of field can be more shallow. When other factors such as noise and depth of field are considered, the answer as to which is better becomes more complicated.
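The cancellation described in the technical note above can be verified numerically. A minimal sketch under a thin-lens, far-field approximation (the function is my own illustration, not from this tutorial):

```python
# Why the airy disk size is independent of focal length: the physical
# aperture D = focal_length / N grows with focal length, but the
# diffraction angle shrinks as 1/D, so the two effects cancel at the
# focal plane (thin-lens, far-field approximation).
def airy_radius_at_focal_plane_um(focal_length_mm, f_number, wavelength_nm=510.0):
    aperture_mm = focal_length_mm / f_number                  # physical opening
    angle_rad = 1.22 * (wavelength_nm * 1e-6) / aperture_mm   # first dark ring
    return angle_rad * focal_length_mm * 1000.0               # radius in microns

print(airy_radius_at_focal_plane_um(50, 22))    # ~13.7 microns
print(airy_radius_at_focal_plane_um(200, 22))   # same, despite 4x focal length
```

The radius depends only on 1.22 × wavelength × N, so any focal length at f/22 gives the same answer.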
