The RAW file format is digital photography's equivalent of a negative in film photography: it contains untouched, "raw" pixel information straight from the digital camera's sensor. A RAW file has yet to undergo demosaicing, and so it contains just one red, green, or blue value at each pixel location. Digital cameras normally "develop" this RAW file by converting it into a full-color JPEG or TIFF image file, and then store the converted file on your memory card. Digital cameras have to make several interpretive decisions when they develop a RAW file, and so the RAW file format offers you more control over how the final JPEG or TIFF image is generated. This section aims to illustrate the technical advantages of RAW files, and makes suggestions about when to use the RAW file format.
A RAW file is developed into a final JPEG or TIFF image in several steps, each of which may contain several irreversible image adjustments. One key advantage of RAW is that it allows the photographer to postpone applying these adjustments — giving more flexibility to the photographer to later apply these themselves, in a way which best suits each image. The following diagram illustrates the sequence of adjustments:
Demosaicing → White Balance → Tone Curves → Contrast → Color Saturation → Sharpening → Conversion to 8-bit → JPEG Compression
Demosaicing and white balance involve interpreting and converting the Bayer array into an image with all three colors at each pixel, and occur in the same step. The Bayer array is what makes the first image appear more pixelated than the other two, and gives the image a greenish tint. Our eyes perceive differences in lightness logarithmically, and so when light intensity quadruples we only perceive this as roughly a doubling in the amount of light. A digital camera, on the other hand, records differences in lightness linearly: twice the light intensity produces twice the response in the camera sensor. This is why the first and second images above look so much darker than the third. In order for the numbers recorded within a digital camera to be shown as we perceive them, tone curves need to be applied (see the tutorial on gamma correction for more on this topic). Color saturation and contrast may also be adjusted, depending on the setting within your camera. The image is then sharpened to offset the softening caused by demosaicing, which is visible in the second image. The high bit depth RAW image is then converted into 8 bits per channel, and compressed into a JPEG based on the compression setting within your camera. Up until this step, RAW image information most likely resided within the digital camera's memory buffer.
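As a rough sketch of the tone curve step described above, the snippet below maps linear 12-bit sensor values to 8-bit display values using a simple power-law (gamma) curve. The gamma of 2.2 and the bit depths are illustrative assumptions, not what any particular camera uses.

```python
# Sketch: why linear sensor data looks dark, and how a gamma tone
# curve maps it to perceptually even steps (values are illustrative).

def apply_tone_curve(linear, gamma=2.2, max_in=4095, max_out=255):
    """Map a linear 12-bit sensor value to an 8-bit display value."""
    normalized = linear / max_in             # 0.0 .. 1.0
    corrected = normalized ** (1.0 / gamma)  # perceptual (gamma) encoding
    return round(corrected * max_out)

# Half the maximum light intensity does NOT map to half the display value:
mid = apply_tone_curve(4095 // 2)
full = apply_tone_curve(4095)
print(mid, full)  # mid lands well above 127: half the light is far
                  # brighter than half the output scale
```

This is why the un-curved linear images look dark: half the light maps to well above half brightness once the curve is applied, matching how our eyes respond.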
There are several advantages to performing any of the above RAW conversion steps afterwards on a personal computer, as opposed to within a digital camera. The next sections describe how using RAW files can enhance these RAW conversion steps.
Demosaicing is a very processor-intensive step, and so the best demosaicing algorithms require more processing power than is practical within today's digital cameras. Most digital cameras therefore take quality-compromising shortcuts to convert a RAW file into a TIFF or JPEG in a reasonable amount of time. Performing the demosaicing step on a personal computer allows for the best algorithms since a PC has many times more processing power than a typical digital camera. Better algorithms can squeeze a little more out of your camera sensor by producing more resolution, less noise, better small-scale color accuracy and reduced moiré. Note the resolution advantage shown below:
Images from actual camera tests with a Canon EOS 20D using an ISO 12233 resolution test chart. The difference between RAW and JPEG resolution may vary with camera model and conversion software.
The in-camera JPEG image is not able to resolve lines as closely spaced as those in the RAW image. Even so, a RAW file cannot achieve the ideal lines shown, because the process of demosaicing always introduces some softening to the image. Only sensors which capture all three colors at each pixel location could achieve the ideal image shown at the bottom (such as Foveon-type sensors).
FLEXIBLE WHITE BALANCE
White balance is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Color casts within JPEG images can often be removed in post-processing, but at the cost of bit depth and color gamut. This is because the white balance has effectively been set twice: once in RAW conversion and then again in post-processing. RAW files give you the ability to set the white balance of a photo *after* the picture has been taken, without unnecessarily destroying bits.
HIGH BIT DEPTH
Digital cameras actually record each color channel with more precision than the 8 bits (256 levels) per channel used for JPEG images (see "Understanding Bit Depth"). Most current cameras capture each color with 12 bits of precision (2^12 = 4096 levels) per color channel, providing several times more levels than could be achieved by using an in-camera JPEG. Higher bit depth decreases the susceptibility to posterization, and increases your flexibility when choosing a color space and in post-processing.
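To make the posterization point concrete, here is a small sketch (illustrative only) counting how many distinct levels survive when a smooth gradient is quantized at 8 versus 12 bits per channel:

```python
# A smooth gradient quantized at two bit depths: the 12-bit version
# retains many more distinct levels, which is what resists posterization.

def distinct_levels(bits, samples=100_000):
    """Count distinct quantized values a dense gradient produces."""
    max_val = 2 ** bits - 1
    return len({round(i / (samples - 1) * max_val) for i in range(samples)})

print(distinct_levels(8))   # 256 available levels per channel
print(distinct_levels(12))  # 4096 available levels per channel
```

Sixteen times as many levels means much finer steps between adjacent tones, so smooth skies and gradients are far less likely to band after editing.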
DYNAMIC RANGE & EXPOSURE COMPENSATION
The RAW file format usually provides considerably more "dynamic range" than a JPEG file, depending on how the camera creates its JPEG. Dynamic range refers to the range of light to dark which can be captured by a camera before becoming completely white or black, respectively. Since the raw color data has not been converted into logarithmic values using curves (see overview section above), the exposure of a RAW file can be adjusted slightly — after the photo has been taken. Exposure compensation can correct for metering errors, or can help bring out lost shadow or highlight detail. The following example was taken directly into the setting sun, and shows the same RAW file with -1 stop, 0 (no change), and +1 stop exposure compensation. Move your mouse over each to see how exposure compensation affects the image:
Note: +1 or -1 stop refers to a doubling or halving of the light used for an exposure, respectively. A stop can also be listed in terms of EV (exposure value), and so +1 stop is equivalent to +1 EV.
Note the broad range of shadow and highlight detail across the three images. Similar results could not be achieved by merely brightening or darkening a JPEG file — both in dynamic range and in the smoothness of tones. A graduated neutral density filter could then be used to better utilize this broad dynamic range.
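Because raw data is still linear, exposure compensation before the tone curve is conceptually just a multiplication by 2^EV. A hedged sketch follows; the 12-bit ceiling and the sample values are assumptions for illustration:

```python
# Exposure compensation on linear raw data is a simple multiply:
# +1 EV doubles every value, -1 EV halves it, clipping at the
# sensor's maximum (assumed 12-bit here).

def exposure_compensate(values, ev, max_val=4095):
    factor = 2.0 ** ev
    return [min(max_val, round(v * factor)) for v in values]

shadows = [100, 200, 400]
print(exposure_compensate(shadows, +1))  # [200, 400, 800]
print(exposure_compensate([3000], +1))   # [4095] -- highlight clips
```

Note the second call: values pushed past the sensor maximum simply clip, which is why exposure compensation in raw can lift shadows but cannot recover truly blown highlights.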
Since a RAW file is untouched, sharpening has not been applied within the camera. Much like demosaicing, better sharpening algorithms are often far more processor intensive. Sharpening performed on a personal computer can thus create fewer halo artifacts for an equivalent amount of sharpening (see "Sharpening Using an Unsharp Mask" for examples of sharpening artifacts). Since sharpness depends on the intended viewing distance of your image, the RAW file format also provides more control over what type and how much sharpening is applied (given your purpose). Sharpening is usually the last post-processing step since it cannot be undone, so having a presharpened JPEG is not optimal.
The RAW file format uses lossless compression, and so it does not suffer from the compression artifacts visible with "lossy" JPEG compression. RAW files contain more information and achieve better compression than TIFF, but without the compression artifacts of JPEG.
Note: Kodak and Nikon employ a slightly lossy RAW compression algorithm, although any artifacts are much lower than would be perceived with a similar JPEG image. The efficiency of RAW compression also varies with digital camera manufacturer. Right image shown at 200%; lossy JPEG compression at 60% in Adobe Photoshop "Save for Web" mode.
RAW files are much larger than similar JPEG files, and so fewer photos can fit within the same memory card. RAW files are more time-consuming since they may require manually applying each conversion step. Because they are larger, RAW files take longer to write to a memory card, and so most digital cameras cannot achieve the same frame rate as with JPEG. RAW files cannot be given to others immediately since they require specific software to load them, so it may be necessary to first convert them into JPEG. Finally, RAW files require a more powerful computer with more temporary memory (RAM).
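The storage disadvantage is easy to put in rough numbers. The sketch below uses illustrative per-shot sizes (a hypothetical 13.6 MB raw file versus 6 MB and 3 MB JPEGs) and an assumed 8 GB card; your camera's actual file sizes will differ:

```python
# Rough storage arithmetic: shots per card at assumed file sizes.
CARD_GB = 8  # hypothetical card capacity

for fmt, mb_per_shot in (("raw", 13.6), ("JPEG 1:4", 6.0), ("JPEG 1:8", 3.0)):
    shots = int(CARD_GB * 1024 / mb_per_shot)
    print(f"{fmt}: about {shots} shots")
```

Even against conservatively compressed JPEGs, raw roughly halves the number of shots per card.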
One problem with the RAW file format is that it is not very standardized. Each camera maker has its own proprietary RAW file format, and so one program may not be able to read all formats. Fortunately, Adobe has announced a Digital Negative (DNG) specification which aims to standardize the RAW file format. In addition, any camera which has the ability to save RAW files should come with its own software to read them.
Good RAW conversion software can perform batch processes and often automates all conversion steps except those which you choose to modify. This can mitigate or even eliminate the ease of use advantage of JPEG files.
Many newer cameras can save both RAW and JPEG images simultaneously. This provides you with an immediate final image, but retains the RAW "negative" just in case more flexibility is desired later.
So which is better: RAW or JPEG? There is no single answer, as this depends on the type of photography you are doing. In most cases, RAW files will provide the best solution due to their technical advantages and the decreasing cost of large memory cards. RAW files give the photographer far more control, but with this comes the trade-off of speed, storage space and ease of use. The RAW trade-off is sometimes not worth it for sports and press photographers, although landscape and most fine art photographers often choose RAW in order to maximize the image quality potential of their digital camera.
Raw Image Format: Pros and Cons
What does it bring, and is it worth the trouble?
The first decision you'll have to make before using your camera is what format you want to save your image files in. There are two choices here: raw (proprietary for every camera maker) and RGB (the JPEG or TIFF standards).

Raw image files

Raw image files are often referred to as "RAW", which is an obvious misunderstanding: this is not an acronym but a common word, not normally capitalized (and that's what some people should have learned at school if they were paying attention). Raw files contain the recording of the signal as picked off individual red-, green-, and blue-filtered photosites of the sensor, usually without any in-camera processing. Consider them a digital version of an undeveloped film: the final image will depend on how you develop it and how you make the prints.

Depending on the camera model, the raw data may be compressed or not. It can also be packed, with individual photosite information not aligned with byte boundaries; for example, the uncompressed (older) Olympus Raw Format (ORF) stores information from two 12-bit photosites packed in three bytes. Both these factors (often confused) will affect the raw image file size. On average, expect a raw image to require from one to two megabytes of storage per megapixel. For the Olympus E-500, an 8 MP image will be written as a 13.6 MB file (an extra 1.6 MB of overhead is used to store additional information). Newer Olympus cameras (and many others, including all Canon SLRs) use raw compression, with the compression always being lossless: an 8 MP image file from the Canon 350D (Digital Rebel XT) takes just 8 MB, a 40% savings, while a compressed 12 MP ORF file from the Olympus E-30 usually fits below 14 MB. (The analog-to-digital conversion uses 12 bits per photosite in all these cases.)

Every manufacturer uses a different raw image format, which may also vary from one camera to another, therefore you usually need a specialized raw converter application (or plugin) to translate the photosite information into RGB pixels. This translation, also known as raw development, involves two main steps:

Conversion proper (or demosaicing), which translates signal values from individual R, G, and B photosites into combined RGB values of image pixels. Obviously, two out of three components of every pixel have to be obtained by interpolation from its neighbors; this is how four megapixels worth of information becomes a twelve-megapixel image (see another article on that subject).

Image adjustment: sharpening, tonal curve, white balance. In most raw development programs (embedded in firmware, or stand-alone) this is done while the image still has 16 bits per color (even if the original signal had less, like 12 or 14 bits per photosite), before conversion to the eight-bit RGB.

Other common image formats

Before we go deeper into our discussion, let us have a quick look at the two most popular image formats. Both are RGB (this means they do have pixels with colors assigned), with the main practical difference being compression. In our further discussion we will be approaching TIFF and JPEG as two forms of RGB, with the differences due to compression of rather secondary importance.

TIFF (Tagged Image Format): In its simplest version, this format contains uncompressed RGB information on all image pixels, with eight bits per color of image depth (the format allows for 16 bits/color, but most manufacturers do not use that option). Being uncompressed, TIFF files do not suffer from compression artifacts (or any other forms of data loss), but they are fairly large: 3 megabytes per megapixel, plus overhead. For the Olympus E-500, a full-size TIFF file is about 24.6 MB (the 350D does not support this format). The TIFF format is rarely, if ever, used in recent cameras. In our film metaphor, one might compare TIFF files to a developed negative.

JPEG (Joint Photographic Expert Group): Like in TIFF, in this format the image is already translated into RGB: the sensor signal is converted before the file is written, using the current camera settings, but the data is also compressed. This is a "lossy" compression, i.e. some information is lost in the process (or some artifacts introduced), but at lower compression ratios (say, 1:4 or below) these effects are negligible, really not worth worrying about, and good enough for a great majority of applications. Thanks to the compression, JPEG files are much smaller than TIFF or ORF ones: an 8 MP JPEG image will use about 6 MB of storage at 1:4, or 3 MB at 1:8. We could compare JPEG files to developed negatives (maybe from a minilab, not a custom shop); comparisons to a Polaroid print are not really good, as JPEG images submit themselves to adjustment much better than a copy of a print would.

Raw or RGB?

Why not use the raw format all the time, if it contains the full, unaltered information of the captured image? Well, nothing comes quite free; here are the pros and cons:

1. [PRO] Raw files contain the full, unaltered information as taken off the sensor, while in RGB (TIFF or JPEG) files this information is already converted, for better or worse. Some corrections have already been applied to the image, and while new ones can be applied on top of them, this is not the same as working with the original information. True, most of these corrections can also be applied to RGB images, but usually within a smaller range and/or with not as good results. (Think of that as color-correcting a not-so-well developed negative at the printing stage.)

2. [PRO] Some of the camera settings are applied only to the raw image development process, not to the picture-taking itself. These are: color balance, gradation, contrast, saturation, sharpening. Therefore you may adjust these parameters as needed during the raw development, and tweak them to your liking. (Actually, many people who use the raw format "just in case" limit themselves during the off-camera development to using just the parameters as set during the shooting, as these settings are remembered in the raw files. The bottom line in this approach is that you are getting the same RGB images as those converted in-camera; you may see some improvement only if the in-camera conversion is deficient.)

3. [PRO] If and when a better version of the conversion software becomes available, you may "develop" your raw files again. Performing that conversion on a PC, you can use more powerful and/or more up-to-date software, possibly better than the camera firmware; if you are unhappy with the default conversion, you may change the parameters and get things done your way.

4. [PRO] Starting from the raw file involves one fewer (lossy) compression process: less image degradation. (This can be alleviated by using a low in-camera JPEG compression.)

5. [CON] With raw files you need to do the conversion before you can edit, print, or even view your images: an extra step, an extra hassle.

6. [CON] This extra tweaking is not always necessary. An experienced photographer will usually have the exposure, white balance, and contrast set so that an out-of-camera JPEG will be just fine; in such cases going through the raw stage means extra hassle, time, and storage space.

7. [CON] Raw images take much more storage space than JPEG ones: even at the conservative 1:4 compression the difference is by a factor of two. It is easier to run out of storage using the raw format.

8. [CON] A related issue: the internal memory buffer in your camera (in most models at least) stores RGB images after conversion from raw. This means that in sequential shooting you will get longer sequences using JPEGs.

9. [CON] If, a few years down the road, your camera manufacturer (or a third party) no longer offers a raw converter working with your current operating system, you may kiss your (undeveloped) digital negatives goodbye.

Contrary to what many believe, using the raw format usually does not offer better protection from over- or under-exposure. The RGB conversion process uses the whole tonal range recorded in the raw image, and if your highlights are blown out, the corresponding photosites are oversaturated regardless of how you save your images: before or after the RGB conversion. On the other hand, the extra bits in the raw information may allow you to make a slightly better use of the available detail in shadows or highlights by adjusting the brightness translation curves: with more bits there is less error accumulation in multiple processing stages. Still, the detail has to be there to start with, and the photographer has to understand how the tonal curves are used, which may mean losing some of the range. An example of a picture with blown highlights not helped by "exposure compensation" in raw development can be found here.

An educated choice

Using the raw format without understanding why, and without taking advantage of it during postprocessing, misses the point. Watching some discussion forums I've seen many photographers who do it just because they've heard it is better; then they spend their time converting the images to RGB with use of the settings as dialed in when the picture was shot, hardly worth the hassle. Ironically, the raw format offers most of its advantages to those who need them least: it is the less fluent photographers who may profit from the extra postprocessing flexibility offered by the raw format, but they are also least likely to get that step right.

So, what is my advice? Sorry to disappoint you: I do not have one. You will have to make your own decision, carefully considering the pros and cons listed above, also taking into account your image postprocessing skills. While I believe that 95% of photographers will be perfectly happy storing their images as low-compression (1:4 or better) JPEGs, you may belong to the remaining 5%. Some cameras offer an option to write both raw and JPEG versions of the same image; while I rarely resort to that, I know people who do.

As for myself, here are my recommendations, for better or worse. While I do postprocess all my images, I find that even when starting from RGB I still have enough room left for the adjustments I need. Most of the time I convert my pictures to RGB in-camera, saving them as 1:2.7 or 1:4 JPEGs. I keep the in-camera sharpening below the default level: once your converted image is oversharpened, there is no way back, and the effects can be quite ugly. Only in rare cases do I switch to the raw format; this usually happens in trickier WB situations. Now the only thing you have to do is make up your own mind.

Web references

This article was intended to be just a quick introduction to the subject; you may want to dig a bit deeper. My Highlight Recovery from Raw Files: a case study, with examples of how much you can actually fix in overexposed pictures in raw format. Raw file pros and cons are discussed by Ken Rockwell in his usual, no-nonsense way from a photographer's (as opposed to pixel-peeper's) viewpoint; his conclusions, as it turns out, are very similar to mine, and quite entertaining. Let me just offer you one quote: "I'm consistently amused by innocent hobbyists who go through the aggravation of shooting raw files just to get what they think is marginally better technical quality [...]" See also another (2009) article by Ken: Film: The Real Raw.
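As an aside on the packed 12-bit storage mentioned in this article (two photosites stored in three bytes), the sketch below unpacks such data. The bit layout here is an assumption for illustration; real formats, ORF included, may order the bits differently.

```python
# Unpacking two 12-bit samples from three bytes. ASSUMED layout:
# the first sample is byte 0 plus the high nibble of byte 1; the
# second is the low nibble of byte 1 plus byte 2.

def unpack_12bit_pairs(data: bytes):
    values = []
    for i in range(0, len(data) - 2, 3):
        b0, b1, b2 = data[i], data[i + 1], data[i + 2]
        values.append((b0 << 4) | (b1 >> 4))          # first 12-bit sample
        values.append(((b1 & 0x0F) << 8) | b2)        # second 12-bit sample
    return values

packed = bytes([0xAB, 0xCD, 0xEF])  # two samples: 0xABC and 0xDEF
print([hex(v) for v in unpack_12bit_pairs(packed)])  # ['0xabc', '0xdef']
```

This also shows where the "1.5 bytes per photosite" arithmetic comes from: 8 million photosites at 12 bits each is 12 MB before any overhead.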
Bob Atkins published an informative article on this subject at photo.net; the page also contains a discussion thread offering more insight into the matter. A tutorial on raw files can be found at Sean McHugh's Cambridge in Colour photography site.

DIGITAL CAMERA SENSORS

A digital camera uses a sensor array of millions of tiny pixels to produce the final image. When you press your camera's shutter button and the exposure begins, each of these pixels has a "photosite" which is uncovered to collect and store photons in a cavity. Once the exposure finishes, the camera closes each of these photosites, and then tries to assess how many photons fell into each. The relative quantity of photons in each cavity is then sorted into various intensity levels, whose precision is determined by bit depth (0 - 255 for an 8-bit image).

Each cavity is unable to distinguish how much of each color has fallen in, so the above illustration would only be able to create grayscale images. To capture color images, each cavity has to have a filter placed over it which only allows penetration of a particular color of light. Virtually all current digital cameras can only capture one of the three primary colors in each cavity, and so they discard roughly 2/3 of the incoming light. As a result, the camera has to approximate the other two primary colors in order to have information about all three colors at every pixel.

COLOR FILTER ARRAY

The most common type of color filter array is called a "Bayer array," shown below. A Bayer array consists of alternating rows of red-green and green-blue filters. Notice how the Bayer array contains twice as many green as red or blue sensors. Each primary color does not receive an equal fraction of the total area because the human eye is more sensitive to green light than to both red and blue light. Redundancy with green pixels produces an image which appears less noisy and has finer detail than could be accomplished if each color were treated equally. This also explains why noise in the green channel is much less than for the other two primary colors (see "Understanding Image Noise" for an example).

Original Scene (shown at 200%) / What Your Camera Sees (through a Bayer array)

Note: Not all digital cameras use a Bayer array; however, this is by far the most common setup. The Foveon sensor used in Sigma's SD9 and SD10 captures all three colors at each pixel location. Sony cameras capture four colors in a similar array: red, green, blue and emerald green.

BAYER DEMOSAICING

Bayer "demosaicing" is the process of translating this Bayer array of primary colors into a final image which contains full color information at each pixel. How is this possible if the camera is unable to directly measure full color? One way of understanding this is to instead think of each 2x2 array of red, green, green and blue as a single full-color cavity.

This would work fine; however, most cameras take additional steps to extract even more image information from this color array. If the camera treated all of the colors in each 2x2 array as having landed in the same place, then it would only be able to achieve half the resolution in both the horizontal and vertical directions. On the other hand, if a camera computed the color using several overlapping 2x2 arrays, then it could achieve a higher resolution than would be possible with a single set of 2x2 arrays. The following combination of overlapping 2x2 arrays could be used to extract more image information.

Note how we did not calculate image information at the very edges of the array, since we assumed the image continued on in each direction. If these were actually the edges of the cavity array, then calculations here would be less accurate, since there are no longer pixels on all sides. This is no problem, since information at the very edges of an image can easily be cropped out for cameras with millions of pixels. Other demosaicing algorithms exist which can extract slightly more resolution, produce images which are less noisy, or adapt to best approximate the image at each location.

DEMOSAICING ARTIFACTS

Images with small-scale detail near the resolution limit of the digital sensor can sometimes trick the demosaicing algorithm, producing an unrealistic looking result. The most common artifact is moiré (pronounced "more-ay"), which may appear as repeating patterns, color artifacts or pixels arranged in an unrealistic maze-like pattern:
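Two small sketches tied to the discussion above; the layout and sample numbers are illustrative. The first generates the Bayer layout and counts filter sites, confirming the two-to-one ratio of green; the second collapses one RGGB block into a single full-color pixel, the simplest form of the 2x2 grouping described (real demosaicing uses overlapping neighborhoods, as the text notes).

```python
# Rows alternate red-green and green-blue, as the Bayer array does.
def bayer_color(row, col):
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[bayer_color(r, c)] += 1
print(counts)  # green sites outnumber red and blue two to one

def rggb_block_to_rgb(block):
    # block is a 2x2 mosaic [[R, G], [G, B]]; average the two greens.
    (r, g1), (g2, b) = block
    return (r, (g1 + g2) / 2, b)

print(rggb_block_to_rgb([[200, 120], [130, 90]]))  # (200, 125.0, 90)
```

The half-resolution penalty mentioned above is visible here: four mosaic samples collapse into one output pixel unless overlapping blocks are used.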
Second Photo at 65% of Above Size

Two separate photos are shown above, each at a different magnification. Note the appearance of moiré in all four bottom squares, in addition to the third square of the first photo (subtle). Both maze-like and color artifacts can be seen in the third square of the downsized version. These artifacts depend on both the type of texture and the software used to develop the digital camera's RAW file.

MICROLENS ARRAYS

You might wonder why the first diagram in this tutorial did not place each cavity directly next to each other, and why each cavity is shown with little peaks between them to direct the photons to one cavity or the other. Real-world camera sensors do not actually have photosites which cover the entire surface of the sensor; in fact, they often cover just half the total area in order to accommodate other electronics. Digital cameras contain "microlenses" above each photosite to enhance their light-gathering ability. These lenses are analogous to funnels which direct photons into the photosite where the photons would have otherwise been unused.

Well-designed microlenses can improve the photon signal at each photosite, and subsequently create images which have less noise for the same exposure time. Camera manufacturers have been able to use improvements in microlens design to reduce or maintain noise in the latest high-resolution cameras, despite having smaller photosites due to squeezing more megapixels into the same sensor area.

For further reading on digital camera sensors, please visit: Digital Camera Sensor Sizes: How Do These Influence Photography?

Raw Basics
Raw files are just the raw sensor data. It isn't a picture until it is processed further: you still have to do the processing in your computer to make an image (JPG or otherwise) that you actually can see. Raw files are just like raw olives: you need to cook or otherwise process them before you can use them. Most fancy digital cameras allow you to save the raw data instead of the actual JPG picture, and some cameras have a handy raw + JPG mode which saves both the raw data and the JPG picture.

Since raw data is entirely unique to each camera, raw isn't even a format, silly. Raw is proprietary to camera make and model, and even camera firmware version, and different even for different firmware revisions for the same camera, even though the different files have the same suffix like .CRW or .NEF. I've heard that the latest Nikon software can't even read the NEFs from older cameras and that you need to load older software to read them.

Raw files themselves don't go bad. It's not the file that goes bad, it's the potential ability of future software to read it. What goes bad is that in 10 or 20 years, whatever software we're running on whatever sort of computer we'll be using may not be able to open a long-forgotten 20-year-old proprietary file. Can you find a computer today to open word processing files from 10 or 20 years ago in Lotus Notes or PFS Write or Brother Style Writer? I can't; that's why I converted my files from these programs to the universal .TXT format back when I could. Do you trust Canon, Nikon and Adobe to support 10 or 20 year old cameras? How about 30 or 40 year old cameras? If you do, go ahead and leave your raw files as raw. Without solid manufacturer support you won't be able to use your raw files again.

Just like raw eggs, the raw files may go bad if left unprocessed, unless you process them into something like an egg-albumen print or a JPG. They go bad fast if left in the raw state and can keep forever once processed to something like olive oil or JPGs. Horror of horrors. I convert all my raw files to JPGs or TIFFs for archiving. JPGs are universal.

The JPG processing in the camera can be better than what you may be able to do later in software from raw. Cameras do this processing in hardware much faster than your computer can do it in software. In the September 2004 issue of "Outdoor Photographer" magazine, page 25, Rob Shepard says "...the high quality JPEG images looked far superior to the raw files when both were opened directly."
For many hobbyists tweaking is part of the fun and I don't want to spoil that. If you're the sort of person who likes to twiddle and redo, then by all means raw is for you; everyone's needs vary. I take a lot of heat from tweakers because I, like other photographers, prefer to make my adjustments in-camera and use the JPGs directly. Please just don't take it personally that I prefer to get my shots right the first time instead of having to tweak them later, like refrying beans. I get the look I need with JPGs and prefer to spend my time making more photos; if I need to correct a goof I just do it from the JPGs. Others prefer to spend even more time later twiddling in raw, but that's not for me.

Cameras create their JPGs from the 12-bit or more raw data as it comes off the sensor. Your contrast, white balance, sharpening and everything are applied to the raw data in-camera, and only afterwards is the file compressed and stored as a JPG. You'll see no additional artifacts since that's all done before the JPG conversion. You can't really change exposure after a raw file is shot, although the software that opens this data gives one the option to rescale the data and give the impression of changing exposure. You can get this same synthetic lightening from JPGs, too, although only raw allows some ability to correct overexposure.

Using raw files obviously takes a lot more time and patience, since you could have had all that processing done right in the camera for free. The raw data, since it includes everything, also takes up a whole lot more space and takes more time to move around. It's sort of like either having a complete car that runs (JPG), or a science project in a million pieces that still needs assembly before you can drive it (raw). In most cases you only want to go through this trouble if for some reason you're unsure of what settings to use.

Q: What is a RAW image file?

A: All RAW files consist of two things:

Meta Data: a record of the state of the camera or recording device at the moment the image was taken. This includes items such as Date, Time, Shutter Speed, ISO, and a host of other items.

Image Data: which, in most cases, is the unmodified data exactly as it is output from the A-to-D converters (normally 12-bit linear). Each 12-bit piece of data is a record for either a Red, Green, or Blue site of the Bayer pattern. Thus a 6 MP camera records 1.5M Red, 3M Green, and 1.5M Blue pixels.

Q: What does a RAW converter do?
Q: What does a RAW converter do?
A: Basically a raw converter (including the generation of an in-camera JPEG) does the following: read the Bayer pattern data (or raw file); scale the brightness values of each photosite/color based on the white balance values; demosaic (convert) the Bayer pattern into RGB data (see more below); load a device (camera) color profile and an output color profile and create a conversion matrix, so that when each RGB pixel is put through the matrix the RGB values are adjusted for more accurate color; post-process the data (deal with highlights, noise reduction, sharpness, saturation, etc.); and save an output file (TIFF, JPEG, ECW, or dump to a program such as PS).

Q: What sets RAW converters apart?
A: Three things set RAW converters apart when it comes to the actual conversion of the RAW data: the ability to extract an RGB image from the Bayer pattern; how the converter deals with (clips or reconstructs) "blown highlights"; and how (really "which") color profiles are used for output.

Q: What is the big deal about Bayer pattern conversion? What makes it so tough?
A: Full color images usually consist of a picture element (pixel) that contains at least three components, most typically Red, Green and Blue. Raw images based on Bayer patterns have only one of the three color components at each pixel location, so the challenge has to do with reconstructing the missing two components at each location, when your input starts out with only 33% of the data of your output. Put simply, there are going to be inaccuracies. The trick is to select the tradeoffs that result in the fewest inaccuracies; the other trick is to do so in a way that has a low computational cost. Developers face an uphill battle and many tradeoffs in the quest to create the best results. There is no perfect solution.
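The white-balance scaling step listed in the converter pipeline above can be sketched in a few lines. The gains and the 12-bit clipping ceiling here are illustrative assumptions, not values from any particular camera, but the sketch also shows why highlight clipping ends up at a different level for each color channel:

```python
def apply_white_balance(mosaic, gains):
    """Scale each Bayer photosite by its colour's white-balance gain.

    `mosaic` is an RGGB Bayer pattern (list of lists of sensor values);
    `gains` maps "R"/"G"/"B" to multipliers. Real gains come from the raw
    file's metadata; these are hypothetical. Values are clipped to the
    12-bit maximum, which is why each colour clips at a different level.
    """
    MAX_12BIT = 4095
    out = []
    for y, row in enumerate(mosaic):
        new_row = []
        for x, value in enumerate(row):
            if y % 2 == 0:
                colour = "R" if x % 2 == 0 else "G"
            else:
                colour = "G" if x % 2 == 0 else "B"
            new_row.append(min(value * gains[colour], MAX_12BIT))
        out.append(new_row)
    return out
```

With a red gain of 2.0, a red photosite recorded at 3000 scales to 6000 and clips at 4095, while a green photosite at 1000 is untouched; this per-channel clipping is exactly the blue/pink-overtone problem discussed in the highlight-recovery question later on.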
Q: What is the best Bayer pattern conversion algorithm?
A: Simply put, there isn't one! Currently the industry favorite is the AHD routine and its derivatives; however, the author's ACC* routine does a better job in many cases. Every image has qualities that one routine does better than another: ACC may be better in many images, AHD better in others, VNG in yet others. Noise, amount of detail and/or saturated color all can have a bearing on which routine is best for a given image, and it seems that every month a new version of an older routine is tweaked to get better results. (See "What makes it so tough?" above.)
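As a concrete reference point for the routines compared above, here is the simplest possible approach, bilinear interpolation, which fills in each missing color by averaging the nearest photosites of that color. This is a sketch for illustration, not any converter's actual code, and it exhibits exactly the edge artifacts (zippering, false color) that the better routines exist to avoid:

```python
def demosaic_bilinear(mosaic):
    """Minimal bilinear demosaic of an RGGB Bayer mosaic (list of lists).

    Each photosite records only one colour; the two missing colours at each
    location are estimated by averaging the nearby sites of that colour.
    Returns an (R, G, B) tuple per pixel.
    """
    h, w = len(mosaic), len(mosaic[0])

    def colour_at(y, x):
        # Which colour an RGGB Bayer site records.
        if y % 2 == 0:
            return "R" if x % 2 == 0 else "G"
        return "G" if x % 2 == 0 else "B"

    def avg(y, x, colour):
        # Average all sites of the wanted colour in the 3x3 neighbourhood.
        vals = []
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and colour_at(ny, nx) == colour:
                    vals.append(mosaic[ny][nx])
        return sum(vals) / len(vals)

    return [[tuple(avg(y, x, c) for c in ("R", "G", "B")) for x in range(w)]
            for y in range(h)]
```

On a uniform scene the output is exact; it is high-frequency detail, where neighboring sites disagree, that separates the AHD/VNG/ACC class of routines from simple averaging.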
Q: Why are camera manufacturers so reluctant to publish specifications for their raw file formats?
A: The short answer is "I don't know". The long answer is fairly easy to guess at. First, control! Camera manufacturers are often praised for the unique image qualities of their cameras; Konica-Minolta, for example, was often praised in the past for their dSLRs' color rendition and skin tones. The truth is that the same raw file processed through two different converters will give two very different image and tonal qualities, and when it comes to getting these things out of an MRW file (the KM raw file format), there is little the actual camera has to do with it. So let's say you convert a raw file using the XYZ raw converter, the colors look horrible, the skin tones make everyone look like dead people, and you post the images on the internet anyway. Who gets the bad rap? The camera manufacturer! Second, money! The camera manufacturers would much rather have you buy their raw conversion software. Fortunately, many people actually enjoy decoding the various raw data formats and publishing the results.

Q: In some of my photos I see this really wild looking color pattern "effect". What is it?
A: Probably you are seeing something known as Moiré. This is caused most often when the subject has more detail than the resolution of the camera (or image) can record. Due to the nature of Bayer patterns, this results in lower-frequency harmonics that appear as "waves" in your image, and this is often also artificially made worse by many Bayer pattern demosaicing routines.

Q: When I look very closely at some of my photos, there are red and blue fringes in areas of high contrast. Why?
A: This can be caused by two things. The first is a kind of lens distortion called Chromatic Aberration. This is when the Red, Green and Blue components of light do not come together in exactly the same place. It is usually more prominent in the corners of photographs and usually is the result of the design of the camera lens; APO and more expensive lenses with special lens elements (such as SD and ED elements) are far less prone to this kind of problem. The second cause is the Bayer pattern routines used to create the output image, either in the camera (in the case of in-camera JPEG files) or in the raw file conversion program. Most Bayer pattern demosaicing routines do not correctly render high contrast areas so as to deal with this kind of color fringing. It is possible, but most developers have not yet discovered the secret.
Q: Why do some edge details in my images have what almost looks like a zipper pattern?
A: This is the result of incorrectly interpreting the color changes that result from the alternating Red/Green or Blue/Green pixels of a Bayer pattern. Many routines simply do not deal with these image areas correctly; in this case the cause is often related to the same thing that causes color fringing. One common way of dealing with this is the use of strong edge detection routines.

Q: I have noticed some artifacts that look like mazes or some crazy Greek wall trim. Why are these in my image?
A: This is a result of Bayer pattern demosaicing routines trying to make sense of high frequency information. Many, if not most, routines have some kind of edge/detail direction sensing. Sometimes the data have more than one mathematically valid solution for edge direction and the routine chooses the "wrong" one; when these routines hit a section of an image that tricks them into "thinking" the direction of some detail is other than the "real" direction, these maze artifacts can result.

Q: I have heard that very often there are 1 to 3 extra photographic stops of highlight detail present in RAW files compared to the JPEG image produced by the camera. Why is this extra detail "thrown away" or not used in the JPEG output?
A: One reason for throwing away or clipping this data is that our cameras can capture more levels of brightness than our computer displays and/or output devices can reproduce. In order to display all of the brightness/highlight detail available, the image must be reduced in contrast, and this reduction is often great enough that the resulting images look dim, dull, and lifeless. Another reason has to do with white balance scaling: this scaling causes the clipping of highlight data to happen at different levels for each color. Many programs that try to recover this data either only recover up to the lowest clipped value, or just leave the values clipped, resulting in blue or pink overtones in the clipped areas. Some raw conversion programs allow you to recover and use some or all of this data; a few actually reconstruct the clipped data using reasonable assumptions and data from the unclipped channels, and these programs tend to have the most natural looking results. Even then, the extra recovered data must be "compressed" in brightness in order to be displayed on a monitor so it doesn't look flat, dim, or washed out; if done well, the results can be as natural looking as highlight areas in comparable film images. Note the following images (100% crops).

Q: Why don't camera manufacturers improve the Auto White Balance routines in their cameras so pictures taken in tungsten light look as good as those taken in daylight?
A: They could, but then the Auto White Balance routines would run amuck and make candlelight and flames look white as well. Bad things would also happen to pictures with a lot of red or blue (such as a golden sunset or a macro shot of a pink rose). Basically, there would be far more complaints caused by this than there are now for less-than-perfect whites in tungsten light. Really, we should be happy Auto White Balance is somewhat limited.

Q: Why is the RAW converter I use so slow?
A: A good raw converter must make many calculations to produce the best "full resolution" results. Usually the best results require the converter to look at a pixel and its surrounding data many, many times, and this is computationally expensive. However, if a reduced output size (1/2 the linear dimensions, or 1/4 the megapixels) is all that is required, as in the case of most images required for websites or emails, then very simple routines which do not require much computing overhead may (and should) be used.

Q: What is all the fuss over Color Profiles and Color Management?
A: TWO things are needed to get accurate color images from a raw data set. One is correct white balance data (stored in the metadata or determined manually); the other is TWO color profiles, the camera (device) profile and an output profile. Believe it or not, most of the nice comments about how well a camera reproduces colors and skin tones can be attributed to the device color profile, NOT the camera. Note the following images (50% crops), especially the skin tones and the yellows: the top image used a proper color profile conversion/correction, while the second image is "straight" from the camera with no color profile correction. Some may like the lower image, but the top image is more accurate.

CAMERA EXPOSURE

A photograph's exposure determines how light or dark an image will appear when it's been captured by your camera. Believe it or not, this is determined by just three camera settings: aperture, ISO and shutter speed (the "exposure triangle"). Mastering their use is an essential part of developing an intuition for photography.
UNDERSTANDING EXPOSURE

Achieving the correct exposure is a lot like collecting rain in a bucket. While the rate of rainfall is uncontrollable, three factors remain under your control: the bucket's width, the duration you leave it in the rain, and the quantity of rain you want to collect. You just need to ensure you don't collect too little ("underexposed"), but that you also don't collect too much ("overexposed"). The key is that there are many different combinations of width, time and quantity that will achieve this. For example, for the same quantity of water, you can get away with less time in the rain if you pick a bucket that's really wide. Alternatively, for the same duration left in the rain, a really narrow bucket can be used as long as you plan on getting by with less water.

In photography, the exposure settings of aperture, shutter speed and ISO speed are analogous to the width, time and quantity discussed above. Furthermore, just as the rate of rainfall was beyond your control above, so too is natural light for a photographer.

EXPOSURE TRIANGLE: APERTURE, ISO & SHUTTER SPEED

Each setting controls exposure differently:
Aperture: controls the area over which light can enter your camera
Shutter speed: controls the duration of the exposure
ISO speed: controls the sensitivity of your camera's sensor to a given amount of light

One can therefore use many combinations of the above three settings to achieve the same exposure. The key, however, is knowing which trade-offs to make, since each setting also influences other image properties. For example, aperture affects depth of field, shutter speed affects motion blur and ISO speed affects image noise. The next few sections will describe how each setting is specified, what it looks like, and how a given camera exposure mode affects their combination.
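The claim that many combinations of the three settings yield the same exposure can be made concrete with the standard photographic exposure value (EV) formula, here referenced to ISO 100. Equal EV means equal image brightness:

```python
from math import log2

def exposure_value(f_number, shutter_s, iso):
    """Standard exposure value, referenced to ISO 100.

    EV = log2(N^2 / t) - log2(ISO / 100). Opening the aperture one stop
    can be traded for halving the exposure time or halving the ISO.
    """
    return log2(f_number ** 2 / shutter_s) - log2(iso / 100.0)

# Three equivalent exposures (equal EV):
a = exposure_value(4.0, 1 / 100, 100)   # f/4, 1/100 s, ISO 100
b = exposure_value(8.0, 1 / 25, 100)    # f/8, 1/25 s,  ISO 100 (narrower, longer)
c = exposure_value(4.0, 1 / 50, 50)     # f/4, 1/50 s,  ISO 50  (slower ISO, longer)
```

Which of the equivalent combinations to pick is exactly the trade-off described above: the f/8 version gains depth of field but risks motion blur, and the ISO 50 version gains lower noise at the same cost.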
SHUTTER SPEED

A camera's shutter determines when the camera sensor will be open or closed to incoming light from the camera lens. The shutter speed specifically refers to how long this light is permitted to enter the camera. "Shutter speed" and "exposure time" refer to the same concept, where a faster shutter speed means a shorter exposure time.

By the Numbers. Shutter speed's influence on exposure is perhaps the simplest of the three camera settings: it correlates exactly 1:1 with the amount of light entering the camera. For example, when the exposure time doubles, the amount of light entering the camera doubles. It's also the setting that has the widest range of possibilities:

Shutter Speed            Typical Examples
1 - 30+ seconds          Specialty night and low-light photos on a tripod
2 - 1/2 second           To add a silky look to flowing water;
                         landscape photos on a tripod for enhanced depth of field
1/2 to 1/30 second       To add motion blur to the background of a moving subject;
                         carefully taken hand-held photos with stabilization
1/50 - 1/100 second      Typical hand-held photos without substantial zoom
1/250 - 1/500 second     To freeze everyday sports/action subject movement;
                         hand-held photos with substantial zoom (telephoto lens)
1/1000 - 1/4000 second   To freeze extremely fast, up-close subject motion

Note: Shutter speed values are not always possible in increments of exactly double or half another shutter speed, but they're always close enough that the difference is negligible.

How it Appears. Shutter speed is a powerful tool for freezing or exaggerating the appearance of motion (compare a slow shutter speed with a fast shutter speed). With waterfalls and other creative shots, motion blur is sometimes desirable, but for most other shots this is avoided. Therefore all one usually cares about with shutter speed is whether it results in a sharp photo, either by freezing movement or because the shot can be taken hand-held without camera shake. How do you know which shutter speed will provide a sharp hand-held shot? With digital cameras, the best way to find out is to just experiment and look at the results on your camera's rear LCD screen (at full zoom). If a properly focused photo comes out blurred, then you'll usually need to either increase the shutter speed, keep your hands steadier, or use a camera tripod. For more on this topic, see the tutorial on Using Camera Shutter Speed Creatively.

APERTURE SETTING

A camera's aperture setting controls the area over which light can pass through your camera lens. It is specified in terms of an f-stop value, which can at times be counterintuitive, because the area of the opening increases as the f-stop decreases. In photographer slang, when someone says they are "stopping down" or "opening up" their lens, they are referring to increasing and decreasing the f-stop value, respectively.

By the Numbers. Every time the f-stop value halves, the light-collecting area quadruples. There's a formula for this, but most photographers just memorize the f-stop numbers that correspond to each doubling/halving of light:

Aperture Setting         f/22   f/16   f/11   f/8.0   f/5.6   f/4.0   f/2.8   f/2.0   f/1.4
Relative Light           1X     2X     4X     8X      16X     32X     64X     128X    256X
Example Shutter Speed    16 s   8 s    4 s    2 s     1 s     1/2 s   1/4 s   1/8 s   1/15 s

The above aperture and shutter speed combinations all result in the same exposure. The above f-stop numbers are all standard options in any camera, although most also allow finer adjustments, such as f/3.2 and f/6.3. The range of values may also vary from camera to camera (or lens to lens). For example, a compact camera might have an available range of f/2.8 to f/8.0, whereas a digital SLR camera might have a range of f/1.4 to f/32 with a portrait lens. A narrow aperture range usually isn't a big problem, but a greater range does provide for more creative flexibility.

Technical Note: With many lenses, their light-gathering ability is also affected by their transmission efficiency, although this is almost always much less of a factor than aperture. It's also beyond the photographer's control. Differences in transmission efficiency are typically more pronounced with extreme zoom ranges. For example, Canon's 24-105 mm f/4L IS lens gathers perhaps ~10-40% less light at f/4 than Canon's similar 24-70 mm f/2.8L lens at f/4 (depending on the focal length).

How it Appears. A camera's aperture setting is what determines a photo's depth of field (the range of distance over which objects appear in sharp focus). Lower f-stop values correlate with a shallower depth of field:

Wide Aperture: f/2.0, low f-stop number, shallow depth of field
Narrow Aperture: f/16, high f-stop number, large depth of field

ISO SPEED

The ISO speed determines how sensitive the camera is to incoming light. Similar to shutter speed, it also correlates 1:1 with how much the exposure increases or decreases. However, unlike aperture and shutter speed, a lower ISO speed is almost always desirable, since higher ISO speeds dramatically increase image noise. As a result, ISO speed is usually only increased from its minimum value if the desired aperture and shutter speed aren't otherwise obtainable.

Low ISO Speed (low image noise) vs. High ISO Speed (high image noise)
note: image noise is also known as "film grain" in traditional film photography
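The doubling pattern in the aperture table can be checked directly: light-gathering area scales with the inverse square of the f-number. Note that the tabled ratios are exact powers of two while the nominal f-numbers are rounded, so the full span from f/22 to f/1.4 computes to about 247X rather than the nominal 256X:

```python
from math import log2

def light_ratio(wide_f, narrow_f):
    """How many times more light the wider aperture admits.

    The aperture's light-collecting area is proportional to the square of
    its diameter, i.e. inversely proportional to the f-number squared.
    """
    return (narrow_f / wide_f) ** 2

halving = light_ratio(2.8, 5.6)    # halving the f-number quadruples the light
stops = 2 * log2(22 / 1.4)         # f/22 -> f/1.4 measured in exact stops (~7.95)
```

The ~7.95 stops rounds to the 8 full stops separating the ends of the table, which is why photographers can safely memorize the nominal sequence instead of the formula.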
Common ISO speeds include 100, 200, 400 and 800, although many cameras also permit lower or higher values. With compact cameras, an ISO speed in the range of 50-200 generally produces acceptably low image noise, whereas with digital SLR cameras, a range of 50-800 (or higher) is often acceptable.

CAMERA EXPOSURE MODES

Most digital cameras have one of the following standardized exposure modes: Auto, Program (P), Aperture Priority (Av or A), Shutter Priority (Tv or S), Manual (M) and Bulb (B). Av, Tv, P and M are often called "creative modes" or "auto exposure (AE) modes." Each of these modes influences how aperture, ISO and shutter speed are chosen for a given exposure. Some modes attempt to pick all three values for you, whereas others let you specify one setting and the camera picks the other two (if possible). The following charts describe how each mode pertains to exposure:

Exposure Mode            How It Works
Auto                     Camera automatically selects all exposure settings.
Program (P)              Camera automatically selects aperture & shutter speed; you can
                         choose a corresponding ISO speed & exposure compensation. With
                         some cameras, P can also act as a hybrid of the Av & Tv modes.
Aperture Priority (Av)   You specify the aperture & ISO; the camera's metering determines
                         the corresponding shutter speed.
Shutter Priority (Tv)    You specify the shutter speed & ISO; the camera's metering
                         determines the corresponding aperture.
Manual (M)               You specify the aperture, ISO and shutter speed, regardless of
                         whether these values lead to a correct exposure.
Bulb (B)                 Useful for exposures longer than 30 seconds. You specify the
                         aperture and ISO; the shutter speed is determined by a remote
                         release switch, or by the duration until you press the shutter
                         button a second time.

In addition, the camera may also have several pre-set modes; the most common include landscape, portrait, sports and night mode. The symbols used for each mode vary slightly from camera to camera, but will likely appear similar to those below:

Exposure Mode            How It Works
Landscape                Camera tries to pick a high f-stop to ensure a large depth of
                         field. Compact cameras also often set their focus distance to
                         distant objects or infinity.
Portrait                 Camera tries to pick the lowest f-stop value possible for a given
                         exposure. This ensures the shallowest possible depth of field.
Sports/Action            Camera tries to achieve as fast a shutter speed as possible for a
                         given exposure, ideally 1/250 seconds or faster. In addition to
                         using a low f-stop, the fast shutter speed is usually achieved by
                         increasing the ISO speed more than would otherwise be acceptable
                         in portrait mode.
Night/Low-light          Camera permits shutter speeds which are longer than ordinarily
                         allowed for hand-held shots, and increases the ISO speed to near
                         its maximum available value. For some cameras this setting means
                         that a flash is used for the foreground, and a long shutter speed
                         and high ISO are used to expose the background.

Some of the above modes may also control camera settings which are unrelated to exposure, although this varies from camera to camera. Such additional settings might include the autofocus points, metering mode and autofocus modes. Check your camera's instruction manual for any unique characteristics. Finally, keep in mind that most of the above settings rely on the camera's metering system in order to know what's a proper exposure. Metering can often be fooled, so it's a good idea to also be aware of when it might go awry, and what you can do to compensate for such exposure errors (see the section on exposure compensation within the camera metering tutorial).

CAMERA METERING & EXPOSURE

Knowing how your digital camera meters light is critical for achieving consistent and accurate exposures. Metering is the brains behind how your camera determines the shutter speed and aperture, based on lighting conditions and ISO speed. Metering options often include partial, evaluative zone or matrix, center-weighted and spot metering. Each of these has subject lighting conditions for which it excels, and for which it fails. Understanding these can improve one's photographic intuition for how a camera measures light.

Recommended background reading: camera exposure: aperture, ISO & shutter speed

BACKGROUND: INCIDENT vs. REFLECTED LIGHT

All in-camera light meters have a fundamental flaw: they can only measure reflected light. This means the best they can do is guess how much light is actually hitting the subject. If all objects reflected the same percentage of incident light, this would work just fine; however, real-world subjects vary greatly in their reflectance. For this reason, in-camera metering is standardized based on the luminance of light which would be reflected from an object appearing as middle gray.
If the camera is aimed directly at any object lighter or darker than middle gray, the camera's light meter will incorrectly calculate under- or over-exposure, respectively. A hand-held light meter, by contrast, would calculate the same exposure for any object under the same incident lighting.

What constitutes middle gray? In the printing industry it is standardized as the ink density which reflects 18% of incident light; however, cameras seldom adhere to this. This topic deserves a discussion of its own, but for the purposes of this tutorial simply know that each camera has a default somewhere in the middle gray tones (~10-18% reflectance). Metering off of a subject which reflects more or less light than this may cause your camera's metering algorithm to go awry, either through under- or over-exposure.

18% Gray Tone   18% Red Tone   18% Green Tone   18% Blue Tone

The above patches depict approximations of 18% luminance. This will appear most accurate when using a PC display which closely mimics the sRGB color space, and when you have calibrated your monitor accordingly. Monitors emit as opposed to reflect light, so this is also a fundamental limitation.

An in-camera light meter can work surprisingly well if object reflectance is sufficiently diverse throughout the photo. In other words, if there is an even spread varying from dark to light objects, then the average reflectance will remain roughly middle gray. Unfortunately, some scenes may have a significant imbalance in subject reflectivity, such as a photo of a white dove in the snow, or of a black dog sitting on a pile of charcoal. For such cases the camera may try to create an image with a histogram whose primary peak is in the midtones, even though it should have instead produced this peak in the highlights or shadows (see high and low-key histograms).
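Why does 18% reflectance, and not 50%, correspond to "middle" gray? Because our eyes perceive lightness logarithmically while the sensor records it linearly, and the tone curve applied during development bridges the two. Using the standard sRGB transfer curve, an 18% linear value encodes to roughly 46% of the output range, i.e. close to the middle of an 8-bit tonal scale:

```python
def srgb_encode(linear):
    """The standard sRGB transfer curve (piecewise, roughly gamma 2.2).

    Maps linear light (0.0-1.0) to the perceptually spaced encoded value
    actually stored in an 8-bit image file.
    """
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

mid_gray = srgb_encode(0.18)   # ~0.46, i.e. pixel value ~118 of 255
```

This is the same reason the undeveloped linear images in the demosaicing discussion look so dark: without the tone curve, 18% of full scale really is displayed as a dark tone.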
METERING OPTIONS

In order to accurately expose a greater range of subject lighting and reflectance combinations, most cameras feature several metering options. Each option works by assigning a relative weighting to different light regions; those with a higher weighting are considered more reliable, and thus contribute more to the final exposure calculation.

Center-Weighted   Partial Metering   Spot Metering

The whitest regions are those which contribute most towards the exposure calculation, whereas black areas are ignored. Partial and spot areas are roughly 13.5% and 3.8% of the picture area, respectively, which correspond to settings on the Canon EOS 1D Mark II. Each of the above metering diagrams may also be located off-center, depending on the metering options and autofocus point used.
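A metering option of the kind diagrammed above is, at its heart, a weighted average of the scene's luminance. A minimal sketch (the 3x3 scene and the weight maps are made-up values for illustration):

```python
def metered_luminance(luminance, weights):
    """Weighted-average meter reading over a scene.

    `luminance` and `weights` are same-sized 2D lists. Higher-weight
    ("whiter") regions dominate the reading; zero-weight ("black")
    regions are ignored entirely.
    """
    total = weighted = 0.0
    for lum_row, w_row in zip(luminance, weights):
        for lum, w in zip(lum_row, w_row):
            weighted += lum * w
            total += w
    return weighted / total

# A dim subject (10) in front of a bright background (90):
scene   = [[90, 90, 90],
           [90, 10, 90],
           [90, 90, 90]]
spot    = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]   # meter only the subject
average = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]   # whole-frame average
```

The spot map reads the subject alone (10), while the whole-frame average is dragged up to about 81 by the bright background, exactly the backlit-portrait failure discussed in the next section.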
More sophisticated algorithms may go beyond just a regional map and include evaluative, zone and matrix metering. These are usually the default when your camera is set to auto exposure. Each generally works by dividing the image up into numerous sub-sections, where each section is then considered in terms of its relative location, light intensity or color. The location of the autofocus point and orientation of the camera (portrait vs. landscape) may also contribute to the calculation.

WHEN TO USE PARTIAL & SPOT METERING

Partial and spot metering give the photographer far more control over the exposure than any of the other settings, but this also means that they are more difficult to use, at least initially. They are useful when there is a relatively small object within your scene which you either need to be perfectly exposed, or know will provide the closest match to middle gray.

One of the most common applications of partial metering is a portrait of someone who is backlit. Metering off of their face can help avoid making the subject look like an under-exposed silhouette against the bright background. On the other hand, care should be taken, as the shade of a person's skin may lead to inaccurate exposure if it is far from neutral gray reflectance, though probably not as inaccurate as what would have been caused by the backlighting.

Spot metering is used less often because its metering area is very small and thus quite specific. This can be an advantage when you are unsure of your subject's reflectance and have a specially designed gray card (or other small object) to meter off of.
Spot and partial metering are also quite useful for performing creative exposures, and when the ambient lighting is unusual. In the examples to the left and right below, one could meter off of the diffusely lit foreground tiles, or off of the directly lit stone below the sky opening.

NOTES ON CENTER-WEIGHTED METERING

At one time center-weighted metering was a very common default setting in cameras, because it coped well with a bright sky above a darker landscape. Nowadays, it has more or less been surpassed in flexibility by evaluative and matrix metering, and in specificity by partial and spot metering. On the other hand, the results produced by center-weighted metering are very predictable, whereas matrix and evaluative metering modes have complicated algorithms which are harder to predict. For this reason some prefer to use it as the default metering mode.

EXPOSURE COMPENSATION
Any of the above metering modes can use a feature called exposure compensation (EC). The metering calculation still works as normal, except the final settings are then compensated by the EC value. This allows for manual corrections if you observe a metering mode to be consistently under- or over-exposing. Most cameras allow up to 2 stops of exposure compensation; each stop provides either a doubling or halving of light compared to what the metering mode would have done otherwise. A setting of zero means no compensation will be applied (the default).

Exposure compensation is ideal for correcting in-camera metering errors caused by the subject's reflectivity. No matter what metering mode is used, an in-camera light meter will always mistakenly under-expose a subject such as a white dove in a snowstorm (see incident vs. reflected light). Photographs in the snow will always require around +1 exposure compensation, whereas a low-key image may require negative compensation.

When shooting in RAW mode under tricky lighting, sometimes it is useful to set a slight negative exposure compensation (0.3-0.5). This decreases the chance of clipped highlights, yet still allows one to increase the exposure afterwards. Alternatively, a positive exposure compensation can be used to improve the signal to noise ratio in situations where the highlights are far from clipping.

TUTORIALS: DEPTH OF FIELD

Depth of field refers to the range of distance that appears acceptably sharp. It varies depending on camera type, aperture and focusing distance, although print size and viewing distance can also influence our perception of depth of field. This tutorial is designed to give a better intuitive and technical understanding for photography, and provides a depth of field calculator to show how it varies with your camera settings.
The depth of field does not abruptly change from sharp to unsharp, but instead occurs as a gradual transition. In fact, everything immediately in front of or in back of the focusing distance begins to lose sharpness, even if this is not perceived by our eyes or by the resolution of the camera.

CIRCLE OF CONFUSION

Since there is no critical point of transition, a more rigorous term called the "circle of confusion" is used to define how much a point needs to be blurred in order to be perceived as unsharp. When the circle of confusion becomes perceptible to our eyes, this region is said to be outside the depth of field and thus no longer "acceptably sharp." The circle of confusion above has been exaggerated for clarity; in reality this would be only a tiny fraction of the camera sensor's area.

When does the circle of confusion become perceptible to our eyes? An acceptably sharp circle of confusion is loosely defined as one which would go unnoticed when enlarged to a standard 8x10 inch print, and observed from a standard viewing distance of about 1 foot. At this viewing distance and print size, camera manufacturers assume a circle of confusion is negligible if no larger than 0.01 inches (when enlarged). In reality, a person with 20-20 vision or better can distinguish features 1/3 this size or smaller, and so the circle of confusion has to be even smaller than this to achieve acceptable sharpness throughout. Alternatively, the depth of field can be based on when the circle of confusion becomes larger than the size of your digital camera's pixels; in the earlier example of blurred dots, the circle of confusion is actually smaller than the resolution of your screen for the two dots on either side of the focal point, and so these are considered within the depth of field. A different maximum circle of confusion also applies for each print size and viewing distance combination, but camera manufacturers use the 0.01 inch standard when providing lens depth of field markers (shown below for f/22 on a 50mm lens).

Note that depth of field only sets a maximum value for the circle of confusion, and does not describe what happens to regions once they become out of focus. These regions are also called "bokeh," from Japanese (pronounced bo-ké). Two images with identical depth of field may have significantly different bokeh, as this depends on the shape of the lens diaphragm. In reality, the circle of confusion is usually not actually a circle, but is only approximated as such when it is very small; when it becomes large, most lenses will render it as a polygonal shape with 5-8 sides.

CONTROLLING DEPTH OF FIELD

Although print size and viewing distance influence how large the circle of confusion appears to our eyes, aperture and focal distance are the two main factors that determine how big the circle of confusion will be on your camera's sensor. Larger apertures (smaller f-stop number) and closer focusing distances produce a shallower depth of field. The following test maintains the same focus distance, but changes the aperture setting: f/8.0, f/5.6, f/2.8.
f/5.6, f/2.8.
note: images taken with a 200 mm lens (320 mm field of view on a 35 mm camera)

CLARIFICATION: FOCAL LENGTH AND DEPTH OF FIELD

Note that I did not mention focal length as influencing depth of field. Even though telephoto lenses appear to create a much shallower depth of field, this is mainly because they are often used to magnify the subject when one is unable to get closer. If the subject occupies the same fraction of the image (constant magnification) for both a telephoto and a wide angle lens, the total depth of field is virtually* constant with focal length! This would of course require you to either get much closer with a wide angle lens or much further with a telephoto lens, as demonstrated in the following chart:

Focal Length (mm)     10      20      50      100     200     400
Focus Distance (m)    0.5     1.0     2.5     5.0     10      20
Depth of Field (m)    0.482   0.421   0.406   0.404   0.404   0.404
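This near-constancy can be checked numerically with the standard thin-lens depth of field formulas. The following is a sketch (my own, not the site's calculator), assuming the f/4.0 aperture and 0.0206 mm maximum circle of confusion quoted in the surrounding notes:

```python
# Verify the chart: constant-magnification focus distances give a nearly
# constant total depth of field across focal lengths. Units: millimetres.

def total_dof_mm(f, n, s, c=0.0206):
    h = f * f / (n * c) + f               # hyperfocal distance
    near = s * (h - f) / (h + s - 2 * f)  # near limit of acceptable sharpness
    far = s * (h - f) / (h - s)           # far limit (valid only for s < h)
    return far - near

for f, s in [(10, 500), (100, 5000), (400, 20000)]:
    # prints ~0.482, 0.404 and 0.404 m — matching the chart above
    print(f, "mm:", round(total_dof_mm(f, 4.0, s) / 1000, 3), "m")
```

The slight extra depth of field at the shortest focal length is the "subtle change for the smallest focal lengths" noted with the chart.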
Note how there is indeed a subtle change for the smallest focal lengths. This is a real effect, but is negligible compared to both aperture and focus distance.

Note: Depth of field calculations are at f/4.0 on a Canon EOS 30D (1.6X crop factor), using a circle of confusion of 0.0206 mm.

Longer focal lengths may also appear to have a shallower depth of field because they enlarge the background relative to the foreground (due to their narrower angle of view). This can make an out of focus background look even more out of focus because its blur has become enlarged. However, this is another concept entirely, since depth of field only describes the sharp region of a photo — not the blurred regions.

On the other hand, when standing in the same place and focusing on a subject at the same distance, a longer focal length lens will have a shallower depth of field (even though the pictures will show something entirely different). This is more representative of everyday use, but is an effect due to higher magnification, not focal length. Depth of field also appears shallower for SLR cameras than for compact digital cameras, because SLR cameras require a longer focal length to achieve the same field of view (see the tutorial on digital camera sensor sizes for more on this topic).

*Technical Note: We describe depth of field as being virtually constant because there are limiting cases where this does not hold true. For focal distances resulting in high magnification, or very near the hyperfocal distance, wide angle lenses may provide a greater DoF than telephoto lenses. In the case of high magnification, the traditional DoF calculation becomes inaccurate due to another factor: pupil magnification. This reduces the DoF advantage for most wide angle lenses, and increases it for telephoto and macro lenses. At the other limiting case, near the hyperfocal distance, the increase in DoF arises because the wide angle lens has a greater rear DoF, and can thus more easily attain critical sharpness at infinity.

Even though the total depth of field is virtually constant, the fraction of the depth of field which is in front of and behind the focus distance does change with focal length, as demonstrated below:

Distribution of the Depth of Field
Focal Length (mm)    10       20       50       100      200      400
Rear                 70.2 %   60.1 %   54.0 %   52.0 %   51.0 %   50.5 %
Front                29.8 %   39.9 %   46.0 %   48.0 %   49.0 %   49.5 %

This exposes a limitation of the traditional DoF concept: it only accounts for the total DoF and not its distribution around the focal plane, even though both may contribute to the perception of sharpness. A wide angle lens provides a more gradually fading DoF behind the focal plane than in front, which is important for traditional landscape photographs.

CALCULATING DEPTH OF FIELD

In order to calculate the depth of field, one needs to first decide on an appropriate value for the maximum allowable circle of confusion. This is based on both the camera type (sensor or film size), and on the viewing distance / print size combination. Needless to say, knowing what this will be ahead of time often isn't straightforward. Try out the depth of field calculator tool to help you find this for your specific situation.
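One common shortcut for choosing that maximum allowable circle of confusion is to divide the sensor diagonal by a fixed factor — the "d/1500" convention. This is an assumption on my part (the tutorial does not say which convention its calculator uses, and its own 0.0206 mm example for a 1.6X crop body implies a somewhat smaller divisor):

```python
import math

# The "d/1500" convention: max circle of confusion = sensor diagonal / 1500.
# Sensor dimensions below are illustrative, not quoted from the tutorial.

def coc_from_sensor_mm(width_mm, height_mm, divisor=1500.0):
    return math.hypot(width_mm, height_mm) / divisor

print(round(coc_from_sensor_mm(36.0, 24.0), 3))   # full frame -> ~0.029 mm
print(round(coc_from_sensor_mm(22.2, 14.8), 3))   # 1.6X crop  -> ~0.018 mm
```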
For macro photography (high magnification), the depth of field is actually influenced by another factor: pupil magnification. This is equal to one for lenses which are internally symmetric, although for wide angle and telephoto lenses this is greater or less than one, respectively. A greater depth of field is achieved (than would be ordinarily calculated) for a pupil magnification less than one, whereas the pupil magnification does not change the calculation when it is equal to one. The problem is that the pupil magnification is usually not provided by lens manufacturers, and one can only roughly estimate it visually.

DEPTH OF FOCUS & APERTURE VISUALIZATION

Another implication of the circle of confusion is the concept of depth of focus (also called the "focus spread"). It differs from depth of field in that it describes the distance over which light is focused at the camera's sensor, as opposed to the subject:

Diagram depicting depth of focus versus camera aperture. The purple lines represent the extreme angles at which light could potentially enter the aperture; the purple shaded-in portion represents all other possible angles. The diagram can also be used to illustrate depth of field, but in that case it's the lens elements that move instead of the sensor.

The key concept is this: when an object is in focus, light rays originating from that point converge at a point on the camera's sensor. If the light rays hit the sensor at slightly different locations (arriving at a disc instead of a point), then this object will be rendered as out of focus — and increasingly so depending on how far apart the light rays are.

OTHER NOTES

Why not just use the smallest aperture (largest number) to achieve the best possible depth of field? Other than the fact that this may require prohibitively long shutter speeds without a camera tripod, too small of an aperture softens the image by creating a larger circle of confusion (or "Airy disk") due to an effect called diffraction — even within the plane of focus. Diffraction quickly becomes more of a limiting factor than depth of field as the aperture gets smaller. Despite their extreme depth of field, this is also why "pinhole cameras" have limited resolution.

OTHER WEBSITES & FURTHER READING

Norman Koren provides another perspective on depth of field, including many equations for calculating depth of field and the circle of confusion. The Luminous Landscape compares the depth of field for several focal lengths — providing visual proof that depth of field does not change much with the focal length.

UNDERSTANDING CAMERA LENSES

Understanding camera lenses can help add more creative control to digital photography. Choosing the right lens for the task can become a complex trade-off between cost, size, weight, lens speed and image quality. This tutorial aims to improve understanding by providing an introductory overview of concepts relating to image quality, focal length, perspective, prime vs. zoom lenses and aperture or f-number.

LENS ELEMENTS & IMAGE QUALITY

All but the simplest cameras contain lenses which are actually comprised of several "lens elements." Each of these elements directs the path of light rays to recreate the image as accurately as possible on the digital sensor. The goal is to minimize aberrations, while still utilizing the fewest and least expensive elements.

Optical aberrations occur when points in the image do not translate back onto single points after passing through the lens — causing image blurring, reduced contrast or misalignment of colors (chromatic aberration). Lenses may also suffer from uneven, radially decreasing image brightness (vignetting) or distortion. Move your mouse over each of the options below to see how these can impact image quality in extreme cases:

Original Image | Loss of Contrast | Blurring
Chromatic Aberration | Vignetting | Distortion | Original

Any of the above problems is present to some degree with any lens. In the rest of this tutorial, when a lens is referred to as having lower optical quality than another lens, this is manifested as some combination of the above artifacts. Some of these lens artifacts may not be as objectionable as others, depending on the subject matter.

Note: For a more quantitative and technical discussion of the above topic, please see the tutorial on camera lens quality: MTF, resolution & contrast.

INFLUENCE OF LENS FOCAL LENGTH

The focal length of a lens determines its angle of view, and thus also how much the subject will be magnified for a given photographic position. Wide angle lenses have short focal lengths, while telephoto lenses have longer corresponding focal lengths.

Note: The location where light rays cross is not necessarily equal to the focal length, as shown above, but is instead roughly proportional to this distance.

Required Focal Length Calculator
Subject Distance: ___ meters
Subject Size: ___ meters
Camera Type: Digital SLR with CF of 1.6X
Required Focal Length: ___

Note: Calculator assumes that the camera is oriented such that the maximum subject dimension given by "subject size" is in the camera's longest dimension. Calculator not intended for use in extreme macro photography.

Many will say that focal length also determines the perspective of an image, but strictly speaking, perspective only changes with one's location relative to their subject. If one tries to fill the frame with the same subjects using both a wide angle and telephoto lens, then perspective does indeed change, because one is forced to move closer or further from their subject. For these scenarios only, the wide angle lens exaggerates or stretches perspective, whereas the telephoto lens compresses or flattens perspective.

Move your mouse over the above image to view an exaggerated perspective due to a wider angle lens. Note how the subjects within the frame remain nearly identical — therefore requiring a closer position for the wider angle lens. The relative sizes of objects change such that the distant doorway becomes smaller relative to the nearby lamps.

Perspective control can be a powerful compositional tool in photography, and often determines one's choice in focal length (when one can photograph from any position). Many use telephoto lenses in distant landscapes to compress perspective, for example; a wide angle lens could be used to achieve the opposite perspective effect.

The following table provides an overview of what focal lengths are required to be considered a wide angle or telephoto lens, in addition to their typical uses. Please note that focal lengths listed are just rough ranges, and actual uses may vary considerably.

Lens Focal Length*   Terminology          Typical Photography
Less than 21 mm      Extreme Wide Angle   Architecture
21-35 mm             Wide Angle           Landscape
35-70 mm             Normal               Street & Documentary
70-135 mm            Medium Telephoto     Portraiture
135-300+ mm          Telephoto            Sports, Bird & Wildlife

*Note: Lens focal lengths are for 35 mm equivalent cameras. If you have a compact or digital SLR camera, then you likely have a different sensor size. To adjust the above numbers for your camera, please use the focal length converter in the tutorial on digital camera sensor sizes.

Other factors may also be influenced by lens focal length. Wide angle lenses are generally more resistant to flare, in part because the designers assume that the sun is more likely to be within the frame. A final consideration is that medium and telephoto lenses generally yield better optical quality for similar price ranges.

FOCAL LENGTH & HANDHELD PHOTOS

The focal length of a lens may also have a significant impact on how easy it is to achieve a sharp handheld photograph. Longer focal lengths require shorter exposure times to minimize blurring caused by shaky hands. Think of this as if one were trying to hold a laser pointer steady: when shining this pointer at a nearby object, its bright spot ordinarily jumps around less than for objects further away. This is primarily because slight rotational vibrations are magnified greatly with distance, whereas if only up and down or side to side vibrations were present, the laser's bright spot would not change with distance. Telephoto lenses are more susceptible to camera shake since small hand movements become magnified, similar to the shakiness experienced while trying to look through binoculars.
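This rotational-shake scaling is what motivates the common "one over focal length" shutter speed guideline discussed next. A quick sketch of that guideline (the crop-factor adjustment is a standard extension of the rule, and the rule itself is only rough guidance):

```python
# "One over focal length" rule: the slowest handheld shutter speed, in
# seconds, is roughly 1 / (35 mm equivalent focal length).

def slowest_safe_shutter_s(focal_mm, crop_factor=1.0):
    return 1.0 / (focal_mm * crop_factor)

print(round(slowest_safe_shutter_s(200), 6))        # 0.005 -> 1/200 s at 200 mm
print(round(slowest_safe_shutter_s(200, 1.6), 6))   # 1/320 s on a 1.6X crop body
```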
A common rule of thumb for estimating how fast the exposure needs to be for a given focal length is the one over focal length rule. This states that for a 35 mm camera, the exposure time needs to be at least as fast as one over the focal length in seconds. In other words, when using a 200 mm focal length on a 35 mm camera, the exposure time needs to be at least 1/200 seconds — otherwise blurring may be hard to avoid. See the tutorial on reducing camera shake with handheld photos for more on this topic.

Keep in mind that this rule is just for rough guidance; some may be able to hand hold a shot for much longer or shorter times. For users of digital cameras with cropped sensors, one needs to convert into a 35 mm equivalent focal length.

ZOOM LENSES vs. PRIME LENSES

A zoom lens is one where the photographer can vary the focal length within a pre-defined range, whereas this cannot be changed with a "prime" or fixed focal length lens. The primary advantage of a zoom lens is that it is easier to achieve a variety of compositions or perspectives (since lens changes are not necessary). This advantage is often critical for dynamic subject matter, such as in photojournalism and children's photography.

Keep in mind that using a zoom lens does not necessarily mean that one no longer has to change their position; zooms just increase flexibility. In the example below, the original position is shown along with two alternatives using a zoom lens. If a prime lens were used, then a change of composition would not have been possible without cropping the image (if a tighter composition were desirable). Similar to the example in the previous section, the change of perspective was achieved by zooming out and getting closer to the subject. Alternatively, one could have zoomed in and moved further from the subject.

Two Options Available with a Zoom Lens: Change of Composition | Change of Perspective

Why would one intentionally restrict their options by using a prime lens? Prime lenses existed long before zoom lenses were available, and still offer many advantages over their more modern counterparts. When zoom lenses first arrived on the market, one often had to be willing to sacrifice a significant amount of optical quality. However, more recent high-end zoom lenses generally do not produce noticeably lower image quality, unless scrutinized by the trained eye (or in a very large print).

The primary advantages of prime lenses are in cost, weight and speed. An inexpensive prime lens can generally provide as good (or better) image quality as a high-end zoom lens. Additionally, if only a small fraction of the focal length range is necessary for a zoom lens, then a prime lens with a similar focal length will be significantly smaller and lighter.
Additionally, the best prime lenses almost always offer better light-gathering ability (larger maximum aperture) than the fastest zoom lenses — often critical for low-light sports/theater photography, and when a shallow depth of field is necessary.

For compact digital cameras, lenses listed with a 3X, 4X, etc. zoom designation refer to the ratio between the longest and shortest focal lengths. Therefore, a larger zoom designation does not necessarily mean that the image can be magnified any more (since that zoom may just have a wider angle of view when fully zoomed out). Additionally, digital zoom is not the same as optical zoom, as the former only enlarges the image through interpolation. Read the fine-print to ensure you are not misled.

INFLUENCE OF LENS APERTURE OR F-NUMBER

The aperture range of a lens refers to the amount that the lens can open up or close down to let in more or less light. Apertures are listed in terms of f-numbers, which quantitatively describe relative light-gathering area (depicted below). Note that larger aperture openings are defined to have lower f-numbers (often very confusing). These two terms are often mistakenly interchanged; the rest of this tutorial refers to lenses in terms of their aperture size.

Note: Aperture opening (iris) is rarely a perfect circle, due to the presence of 5-8 blade-like lens diaphragms.

Lenses with larger apertures are also described as being "faster," because for a given ISO speed, the shutter speed can be made faster for the same exposure. Additionally, a smaller aperture means that objects can be in focus over a wider range of distance, a concept also termed the depth of field.

Corresponding Impact on Other Properties:

f-#                                    Higher    Lower
Light-Gathering Area (Aperture Size)   Smaller   Larger
Required Shutter Speed                 Slower    Faster
Depth of Field                         Wider     Narrower

When one is considering purchasing a lens, specifications ordinarily list the maximum (and maybe minimum) available apertures. Lenses with a greater range of aperture settings provide greater artistic flexibility, in terms of both exposure options and depth of field. The maximum aperture is perhaps the most important lens aperture specification, which is often listed on the box along with focal length(s), as shown below for the Canon 70-200 f/2.8 lens (whose box is also shown above and lists f/2.8). An f-number of X may also be displayed as 1:X (instead of f/X).

Portrait and indoor sports/theater photography often require lenses with very large maximum apertures, in order to be capable of a narrower depth of field or a faster shutter speed, respectively. The narrow depth of field in a portrait helps isolate the subject from their background. For digital SLR cameras, lenses with larger maximum apertures provide significantly brighter viewfinder images — possibly critical for night and low-light photography. These also often give faster and more accurate auto-focusing in low-light. Manual focusing is also easier, because the image in the viewfinder has a narrower depth of field (thus making it more visible when objects come into or out of focus).

Typical Maximum Aperture   Relative Light-Gathering Ability   Typical Lens Types
f/1.0                      32X        Fastest Available Prime Lenses (for Consumer Use)
f/1.4                      16X        Fast Prime Lenses
f/2.0                      8X
f/2.8                      4X         Fastest Zoom Lenses (for Constant Aperture)
f/4.0                      2X
f/5.6                      1X         Light Weight Zoom Lenses or Extreme Telephoto Primes

Minimum apertures for lenses are generally nowhere near as important as maximum apertures. This is primarily because the minimum apertures are rarely used due to photo blurring from lens diffraction, and because these may require prohibitively long exposure times. For cases where extreme depth of field is desired, smaller minimum aperture (larger maximum f-number) lenses allow for a wider depth of field.
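The "relative light-gathering ability" column follows directly from the f-number definition: the aperture's light-gathering area scales as 1/N², so the ratio between two apertures is (N_slow/N_fast)². A sketch of my own, consistent with the table:

```python
import math

def light_ratio(n_fast, n_slow):
    """How many times more light the faster (lower) f-number gathers."""
    return (n_slow / n_fast) ** 2

print(round(light_ratio(1.4, 5.6)))             # 16, as in the table (f/1.4 vs f/5.6)
print(round(math.log2(light_ratio(2.0, 4.0))))  # 2 full stops between f/2.0 and f/4.0
```

This is also why standard f-stops step by a factor of roughly the square root of two: each step halves the light-gathering area.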
Finally, lenses typically have fewer aberrations when they perform the exposure stopped down one or two f-stops from their maximum aperture (such as using a setting of f/4.0 on a lens with a maximum aperture of f/2.0). This *may* therefore mean that if one wanted the best quality f/2.8 photograph, a f/2.0 or f/1.4 lens may yield higher quality than a lens with a maximum aperture of f/2.8. Also note that just because the maximum aperture of a lens may not be used, this does not necessarily mean that this lens is not necessary.

Additionally, some zoom lenses on digital SLR and compact digital cameras often list a range of maximum aperture, because this may depend on how far one has zoomed in or out. These aperture ranges therefore refer only to the range of maximum aperture, not overall range. A range of f/2.0-3.0 would mean that the maximum available aperture gradually changes from f/2.0 (fully zoomed out) to f/3.0 (at full zoom). The primary benefit of having a zoom lens with a constant maximum aperture is that exposure settings are more predictable, regardless of focal length.

Other considerations include cost, size and weight. Lenses with larger maximum apertures are typically much heavier, larger and more expensive. Size/weight may be critical for wildlife, hiking and travel photography, because all of these often utilize heavier lenses or require carrying equipment for extended periods of time.

FURTHER READING

For more on camera lenses, also visit the following tutorials:
Using Wide Angle Lenses
Using Telephoto Lenses
Macro Lenses: Magnification, Depth of Field & Effective F-Stop

TUTORIALS: WHITE BALANCE

White balance (WB) is the process of removing unrealistic color casts, so that objects which appear white in person are rendered white in your photo. Proper camera white balance has to take into account the "color temperature" of a light source, which refers to the relative warmth or coolness of white light. Our eyes are very good at judging what is white under different light sources, but digital cameras often have great difficulty with auto white balance (AWB) — and can create unsightly blue, orange, or even green color casts. Understanding digital white balance can help you avoid these color casts, thereby improving your photos under a wider range of lighting conditions.
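The "warmth or coolness" of a source has a physical basis in blackbody radiation, introduced below. As a rough illustration (this is standard physics, not something the tutorial derives), Wien's displacement law locates the wavelength where a blackbody's emission peaks:

```python
# Wien's displacement law: lambda_peak = b / T, with b ~ 2.898e6 nm*K.
WIEN_B_NM_K = 2.898e6

def peak_wavelength_nm(temp_k):
    return WIEN_B_NM_K / temp_k

print(round(peak_wavelength_nm(3000)))  # 966 nm: emission skewed to red/IR ("warm")
print(round(peak_wavelength_nm(9000)))  # 322 nm: skewed toward blue/UV ("cool")
```

Lower color temperatures peak at longer (redder) wavelengths, which is why low-Kelvin sources look warm and high-Kelvin sources look cool.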
Color Cast → Daylight White Balance

BACKGROUND: COLOR TEMPERATURE

Color temperature describes the spectrum of light which is radiated from a "blackbody" with that surface temperature. A blackbody is an object which absorbs all incident light — neither reflecting it nor allowing it to pass through. A rough analogue of blackbody radiation in our day to day experience might be in heating a metal or stone: these are said to become "red hot" when they attain one temperature, and then "white hot" for even higher temperatures. Similarly, blackbodies at different temperatures also have varying color temperatures of "white light." Despite its name, light which may appear white does not necessarily contain an even distribution of colors across the visible spectrum:

Relative intensity has been normalized for each temperature (in Kelvins). Note how 5000 K produces roughly neutral light, whereas 3000 K and 9000 K produce light spectrums which shift to contain more orange and blue wavelengths, respectively. As the color temperature rises, the color distribution becomes cooler. This may not seem intuitive, but results from the fact that shorter wavelengths contain light of higher energy.

Why is color temperature a useful description of light for photographers, if they never deal with true blackbodies? Fortunately, light sources such as daylight and tungsten bulbs closely mimic the distribution of light created by blackbodies, although others such as fluorescent and most commercial lighting depart from blackbodies significantly. Since photographers never use the term color temperature to refer to a true blackbody light source, the term is implied to be a "correlated color temperature" with a similarly colored blackbody. The following table is a rule-of-thumb guide to the correlated color temperature of some common light sources:

Color Temperature   Light Source
1000-2000 K         Candlelight
2500-3500 K         Tungsten Bulb (household variety)
3000-4000 K         Sunrise/Sunset (clear sky)
4000-5000 K         Fluorescent Lamps
5000-5500 K         Electronic Flash
5000-6500 K         Daylight with Clear Sky (sun overhead)
6500-8000 K         Moderately Overcast Sky
9000-10000 K        Shade or Heavily Overcast Sky

IN PRACTICE: JPEG & TIFF FILES

Since some light sources do not resemble blackbody radiators, white balance uses a second variable in addition to color temperature: the green-magenta shift. Adjusting the green-magenta shift is often unnecessary under ordinary daylight; however, fluorescent and other artificial lighting may require significant green-magenta adjustments to the WB.

Auto White Balance
Custom
Kelvin
Tungsten
Fluorescent
Daylight
Flash
Cloudy
Shade

Fortunately, most digital cameras contain a variety of preset white balances, so you do not have to deal with color temperature and green-magenta shift during the critical shot. Commonly used symbols for each of these are listed to the left.

The first three white balances allow for a range of color temperatures. Auto white balance is available in all digital cameras and uses a best guess algorithm within a limited range — usually between 3000/4000 K and 7000 K. Custom white balance allows you to take a picture of a known gray reference under the same lighting, and then set that as the white balance for future photos. With "Kelvin" you can set the color temperature over a broad range.

The remaining six white balances are listed in order of increasing color temperature; however, many compact cameras do not include a shade white balance. Some cameras also include a "Fluorescent H" setting, which is designed to work in newer daylight-calibrated fluorescents. The description and symbol for the above white balances are just rough estimates for the actual lighting they work best under. In fact, cloudy could be used in place of daylight depending on the time of day, elevation, or degree of haziness.

In general, if your image appears too cool on your LCD screen preview (regardless of the setting), you can quickly increase the color temperature by selecting a symbol further down on the list above. If the image is still too cool (or warm if going the other direction), you can resort to manually entering a temperature in the Kelvin setting.

If all else fails and the image still does not have the correct WB after inspecting it on a computer afterwards, you can adjust the color balance to remove additional color casts. Alternatively, one could click on a colorless reference (see section on neutral references) with the "set gray point" dropper while using the "levels" tool in Photoshop. Either of these methods should be avoided, since they can severely reduce the bit depth of your image.

IN PRACTICE: THE RAW FILE FORMAT

By far the best white balance solution is to photograph using the RAW file format (if your camera supports it), as RAW files allow you to set the WB *after* the photo has been taken. RAW files also allow one to set the WB based on a broader range of color temperature and green-magenta shifts.

Performing a white balance with a RAW file is quick and easy. You can either adjust the temperature and green-magenta sliders until color casts are removed, or you can simply click on a neutral reference within the image (see next section). Even if only one of your photos contains a neutral reference, you can click on it and then use the resulting WB settings for the remainder of your photos (assuming the same lighting).
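"Clicking on a neutral reference" amounts to scaling the color channels so that the sampled reference comes out gray. A simplified sketch of my own (real raw converters work on linear sensor data and are considerably more sophisticated):

```python
# Gray-point white balance: compute per-channel gains from a reference
# pixel that should be neutral, then apply them to the image's pixels.

def white_balance_gains(ref_rgb):
    r, g, b = ref_rgb
    target = (r + g + b) / 3.0            # gray level the reference should become
    return (target / r, target / g, target / b)

def apply_wb(pixel, gains):
    return tuple(round(min(255.0, v * k), 1) for v, k in zip(pixel, gains))

gains = white_balance_gains((180.0, 200.0, 220.0))   # bluish "gray" reference
print(apply_wb((180.0, 200.0, 220.0), gains))        # (200.0, 200.0, 200.0)
```

The same gains are then applied to every pixel, removing the cast from the whole image at once.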
CUSTOM WHITE BALANCE: CHOOSING A NEUTRAL REFERENCE

A neutral reference is often used for color-critical projects, or for situations where one anticipates auto white balance will encounter problems. Neutral references can either be parts of your scene (if you're lucky), or can be a portable item which you carry with you. Below is an example of a fortunate reference in an otherwise bluish twilight scene.

Portable references can be expensive and specifically designed for photography, or may include less expensive household items. An ideal gray reference is one which reflects all colors in the spectrum equally, and can consistently do so under a broad range of color temperatures. An example of a pre-made gray reference is shown below:

Common household neutral references are the underside of a lid to a coffee or Pringles container. These are both inexpensive and reasonably accurate, although custom-made photographic references are the best (such as the cards shown above). On the other hand, pre-made portable references are almost always more accurate, since one can easily be tricked into thinking an object is neutral when it is not.

Custom-made devices can be used to measure either the incident or reflected color temperature of the illuminant. Most neutral references measure reflected light, whereas a device such as a white balance meter or an "ExpoDisc" can measure incident light (and can theoretically be more accurate).

Care should be taken when using a neutral reference with high image noise, since clicking on a seemingly gray region may actually select a colorful pixel caused by color noise:

Low Noise (Smooth Colorless Gray) | High Noise (Patches of Color)

Some digital cameras are more susceptible to this than others. If your software supports it, the best solution for white balancing with noisy images is to use the average of pixels within a noisy gray region as your reference. This can be either a 3x3 or 5x5 pixel average if using Adobe Photoshop.

NOTES ON AUTO WHITE BALANCE

Certain subjects create problems for a digital camera's auto white balance — even under normal daylight conditions. One example is if the image already has an overabundance of warmth or coolness due to unique subject matter. The image below illustrates a situation where the subject is predominantly red, and so the camera mistakes this for a color cast induced by a warm light source. The camera then tries to compensate for this so that the average color of the image is closer to neutral, but in doing so it unknowingly creates a bluish color cast on the stones.

Automatic White Balance | Custom White Balance
(Custom white balance uses an 18% gray card as a neutral reference.)

A digital camera's auto white balance is often more effective when the photo contains at least one white or bright colorless element. Of course, do not try to change your composition to include a colorless object, but just be aware that its absence may cause problems with the auto white balance. Without the white boat in the image below, the camera's auto white balance mistakenly created an image with a slightly warmer color temperature, as compared with what we perceive with our eyes.

IN MIXED LIGHTING

Multiple illuminants with different color temperatures can further complicate performing a white balance. Some lighting situations may not even have a truly "correct" white balance, and will depend upon where color accuracy is most important.

Reference: Moon | Stone

Under mixed lighting, auto white balance usually calculates an average color temperature for the entire scene, and then uses this as the white balance. This approach is usually acceptable; however, auto white balance tends to exaggerate the difference in color temperature for each light source.
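The scene-averaging behavior described above — and its failure on strongly colored subjects — can be sketched as a minimal "gray world" algorithm. This is one common AWB heuristic (my own simplification; actual in-camera algorithms are proprietary and more elaborate):

```python
# "Gray world" auto white balance: assume the scene averages to gray,
# and compute per-channel gains that force that average to be neutral.

def gray_world_gains(pixels):
    n = len(pixels)
    avg = [sum(p[c] for p in pixels) / n for c in range(3)]  # per-channel mean
    mean = sum(avg) / 3.0
    return tuple(mean / a for a in avg)

# Mostly-red scene: the heuristic decides the light was "warm" and boosts
# blue, creating the bluish cast on neutral areas that the text describes.
scene = [(200.0, 60.0, 50.0)] * 9 + [(128.0, 128.0, 128.0)]
r_gain, g_gain, b_gain = gray_world_gains(scene)
print(b_gain > 1.0 > r_gain)  # True: red suppressed, blue amplified
```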
Note how the building to the left is quite warm, whereas the sky is somewhat cool. This is because the white balance was set based on the moonlight — bringing out the warm color temperature of the artificial lighting below. White balancing based on the natural light often yields a more realistic photograph. Choose "stone" as the white balance reference and see how the sky becomes unrealistically blue.

Exaggerated differences in color temperature are often most apparent with mixed indoor and natural lighting. Critical images may even require a different white balance for each lighting region. On the other hand, some may prefer to leave the color temperatures as is.

UNDERSTANDING CAMERA AUTOFOCUS

A camera's autofocus system intelligently adjusts the camera lens to obtain focus on the subject, and can mean the difference between a sharp photo and a missed opportunity. Despite a seemingly simple goal — sharpness at the focus point — the inner workings of how a camera focuses are unfortunately not as straightforward. This tutorial aims to improve your photos by introducing how autofocus works — thereby enabling you to both make the most of its assets and avoid its shortcomings.

Note: Autofocus (AF) works either by using contrast sensors within the camera (passive AF) or by emitting a signal to illuminate or estimate distance to the subject (active AF). Passive AF can be performed using either the contrast detection or phase detection methods, but both rely on contrast for achieving accurate autofocus; they will therefore be treated as being qualitatively similar for the purposes of this AF tutorial. Unless otherwise stated, this tutorial will assume passive autofocus. We will also discuss the AF assist beam method of active autofocus towards the end.

CONCEPT: AUTOFOCUS SENSORS

A camera's autofocus sensor(s) are the real engine behind achieving accurate focus, and are laid out in various arrays across your image's field of view. Each sensor measures relative focus by assessing changes in contrast at its respective point in the image — where maximal contrast is assumed to correspond to maximal sharpness.

[Interactive figure — Change Focus: Blurred | Partial | Sharp, with the sensor histogram shown at 400%]
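The assumption that maximal contrast corresponds to maximal sharpness can be made concrete with a toy score. The function below is illustrative only (real AF sensors and their processing differ); it uses gradient energy, which rewards a steep edge over a gradual one even when the total brightness range is identical.

```python
# Toy focus score for one AF sensor: sum of squared differences between
# adjacent brightness values in the sensor's patch (gradient energy).
# A sharp edge concentrates its brightness change into one big step, so it
# scores higher than the same edge spread across several small steps.

def contrast_score(patch):
    """patch: 1-D list of brightness values seen by one AF sensor."""
    return sum((a - b) ** 2 for a, b in zip(patch, patch[1:]))
```

For example, a blurred edge `[80, 90, 100, 110, 120]` and a sharp edge `[80, 80, 80, 120, 120]` cover the same 40-level range, but the sharp edge scores higher because its single 40-level step dominates the squared sum.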
Please visit the tutorial on image histograms for a background on image contrast.

The process of autofocusing generally works as follows:

(1) An autofocus processor (AFP) makes a small change in the focusing distance.
(2) The AFP reads the AF sensor to assess whether and by how much focus has improved.
(3) Using the information from (2), the AFP sets the lens to a new focusing distance.
(4) The AFP may iteratively repeat steps 2-3 until satisfactory focus has been achieved.

This entire process is usually completed within a fraction of a second. For difficult subjects, however, the camera may fail to achieve satisfactory focus and will give up on repeating the above sequence, resulting in failed autofocus. This is the dreaded "focus hunting" scenario where the camera focuses back and forth repeatedly without achieving focus lock. This does not, however, mean that focus is not possible for the chosen subject. Whether and why autofocus may fail is primarily determined by the factors in the next section.

Note: the sequence above illustrates the contrast detection method of AF; phase detection is another method, but it too relies on contrast for accurate autofocus. Further, many compact digital cameras use the image sensor itself as a contrast sensor (using a method called contrast detection AF), and do not necessarily have multiple discrete autofocus sensors (which are more common with the phase detection method of AF).

FACTORS AFFECTING AUTOFOCUS PERFORMANCE

The photographic subject can have an enormous impact on how well your camera autofocuses — and often even more so than any variation between camera models, lenses or focus settings. The three most important factors influencing autofocus are the light level, subject contrast, and camera or subject motion.
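Returning to steps (1)-(4) above: the AFP's search can be sketched as a simple hill climb on a contrast score. This is a deliberately simplified model (the `read_sensor` callback and the step-halving schedule are illustrative assumptions, not a real AFP's logic), but it shows both convergence and why a flat, low-contrast score surface leads to hunting.

```python
# Minimal sketch of the AFP loop from steps (1)-(4): nudge the focusing
# distance in both directions, keep whichever nudge improves the contrast
# score, and refine the step size until no further improvement is found.

def autofocus(read_sensor, start, step=1.0, max_iters=50):
    """read_sensor(distance) -> contrast score; returns focusing distance."""
    distance = start
    score = read_sensor(distance)
    for _ in range(max_iters):
        best_d, best_s = distance, score
        # (1) make a small change in the focusing distance (both directions)
        for d in (distance + step, distance - step):
            s = read_sensor(d)           # (2) did focus improve, by how much?
            if s > best_s:
                best_d, best_s = d, s
        if best_d == distance:
            if step < 0.01:
                break                    # satisfactory focus: stop iterating
            step /= 2                    # refine the search instead
        else:
            distance, score = best_d, best_s   # (3) set a new focus distance
    return distance                      # (4) after repeating steps 2-3
```

With a subject whose contrast peaks at 3.0 m (modeled here as `-(d - 3.0)**2`), the loop walks the lens from 0 m to the peak within a handful of iterations.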
Note that these factors are not independent; in other words, one may be able to achieve autofocus even for a dimly lit subject if that same subject also has extreme contrast, or vice versa. This has an important implication for your choice of autofocus point: selecting a focus point which corresponds to a sharp edge or pronounced texture can achieve better autofocus, assuming all other factors remain equal.

An example illustrating the quality of different focus points is shown to the left; move your mouse over this image to see the advantages and disadvantages of each focus location. In the example to the left we were fortunate that the location where autofocus performs best also corresponds to the subject location. The next example is more problematic because autofocus performs best on the background, not the subject. Move your mouse over the image below to highlight areas of good and poor performance.
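The advice above — prefer a focus point on a sharp edge or pronounced texture — amounts to scoring each candidate point's patch and taking the highest-contrast one. The helper below is hypothetical (patch data and names are invented for illustration), reusing a gradient-energy score as the contrast measure.

```python
# Illustrative focus-point selection: given brightness patches for several
# candidate AF points, pick the one with the highest contrast score.

def pick_focus_point(patches):
    """patches: dict mapping point name -> 1-D list of brightness values."""
    def score(patch):
        # Gradient energy: large for sharp edges and texture, ~0 for flat areas.
        return sum((a - b) ** 2 for a, b in zip(patch, patch[1:]))
    return max(patches, key=lambda name: score(patches[name]))
```

A nearly uniform patch (such as open sky) loses to any textured patch, matching the intuition that autofocus struggles on featureless areas.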
In the photo to the right, focusing on the subject's exterior highlight would perhaps be the best approach, with the caveat that this highlight would change sides and intensity rapidly depending on the location of the moving light sources. If one's camera had difficulty focusing on the exterior highlight, a lower contrast (but stationary and reasonably well lit) focus point would be the subject's foot, or leaves on the ground at the same distance as the subject. Alternatively, if one focused on the fast-moving light sources behind the subject, one would risk an out-of-focus subject when the depth of field is shallow (as would be the case for a low-light action shot like this one). What makes the above choices difficult, however, is that these decisions often have to be either anticipated or made within a fraction of a second. Additional specific techniques for autofocusing on still and moving subjects will be discussed in their respective sections towards the end of this tutorial.

NUMBER & TYPE OF AUTOFOCUS POINTS

The robustness and flexibility of autofocus is primarily a result of the number, position and type of autofocus points made available by a given camera model. High-end SLR cameras can have 45 or more autofocus points, whereas other cameras can have as few as one central AF point. Two example layouts of autofocus sensors are shown below:

[Diagram: two AF point layouts, with sensor availability shown against maximum lens aperture (Max f/#: f/2.8, f/4.0, f/5.6, f/8.0)]
High-End SLR | Entry to Midrange SLR — cameras used for the left and right examples are the Canon 1D MkII and Canon 20D, respectively.

Two types of autofocus sensors are shown:

+ cross-type sensors (two-dimensional contrast detection, higher accuracy)
| vertical line sensors (one-dimensional contrast detection, lower accuracy)

Note: the "vertical line sensor" is only called this because it detects contrast along a vertical line; this type of sensor is therefore best at detecting horizontal lines.

Multiple AF points can work together for improved reliability, or can work in isolation for improved specificity, depending on your chosen camera setting. Some cameras also have an "auto depth of field" feature for group photos which ensures that a cluster of focus points are all within an acceptable level of focus.

For SLR cameras, the number and accuracy of autofocus points can also change depending on the maximum aperture of the lens being used, as illustrated above; for these cameras autofocus is not possible for apertures smaller than f/8.0. This is an important consideration when choosing a camera lens: even if you do not plan on using a lens at its maximum aperture, this aperture may still help the camera achieve better focus accuracy. Further, since the central AF sensor is almost always the most accurate, for off-center subjects it is often best to first use this sensor to achieve a focus lock (before recomposing the frame).

AF MODE: CONTINUOUS & AI SERVO vs. ONE SHOT

The most widely supported camera focus mode is one-shot focusing, which is best for still subjects. One shot focusing requires a focus lock before the photograph can be taken. The one shot mode is therefore susceptible to focus errors for fast moving subjects since it cannot anticipate subject motion, in addition to potentially also making it difficult to visualize these moving subjects in the viewfinder.

Many cameras also support an autofocus mode which continually adjusts the focus distance for moving subjects. Canon cameras refer to this as "AI Servo" focusing, whereas Nikon cameras refer to it as "continuous" focusing. It works by predicting where the subject will be slightly in the future, based on estimates of the subject velocity from previous focus distances. The camera then focuses at this predicted distance in advance to account for the shutter lag (the delay between pressing the shutter button and the start of the exposure). This greatly increases the probability of correct focus for moving subjects. Example maximum tracking speeds are shown for various Canon cameras below:
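The predictive idea behind AI servo / continuous focusing can be sketched in a few lines: estimate subject speed from successive focus distances, then focus where the subject will be once the shutter actually opens. The function and its numbers are illustrative assumptions, not any manufacturer's algorithm.

```python
# Sketch of predictive focus tracking: estimate velocity from the last two
# measured subject distances, then extrapolate across the shutter lag so the
# lens is focused where the subject will be, not where it was.

def predict_focus_distance(d_prev, d_curr, dt, shutter_lag):
    """d_prev, d_curr: subject distances (m) measured dt seconds apart."""
    velocity = (d_curr - d_prev) / dt          # positive = receding subject
    return d_curr + velocity * shutter_lag     # focus here at exposure time
```

For instance, a subject that closes from 12 m to 10 m in 0.1 s is approaching at 20 m/s; with a 50 ms shutter lag the camera should focus at about 9 m rather than 10 m. Real implementations must also cope with erratic motion, which is one reason tracking performance degrades for unpredictable subjects.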
[Plot: maximum subject tracking speeds for several Canon cameras.] Values are for ideal contrast and lighting, and use the Canon 300mm f/2.8 IS L lens. The above plot should also provide a rule of thumb estimate for other cameras as well. Actual maximum tracking speeds also depend on how erratically the subject is moving, the subject contrast and lighting, the type of lens and the number of autofocus sensors being used to track the subject. Also be warned that using focus tracking can dramatically reduce the battery life of your camera, so use it only when necessary.

AUTOFOCUS ASSIST BEAM

Many cameras come equipped with an AF assist beam, which is a method of active autofocus that uses a visible or infrared beam to help the autofocus sensors detect the subject. This can be very helpful in situations where your subject is not adequately lit or has insufficient contrast for autofocus, although the AF assist beam also comes with the disadvantage of much slower autofocus. Most compact cameras use a built-in infrared light source for the AF assist, whereas digital SLR cameras often use either a built-in or external camera flash to illuminate the subject. When using a flash for the AF assist, the beam may have trouble achieving a focus lock if the subject moves appreciably between flash firings; use of the AF assist beam is therefore only recommended for still subjects.

IN PRACTICE: ACTION PHOTOS

Autofocus will almost always perform best with action photos when using the AI servo or continuous modes. Focusing performance can be improved dramatically by ensuring that the lens does not have to search over a large range of focus distances.
Perhaps the most universally supported way of achieving this is to pre-focus your camera at a distance near where you anticipate the moving subject to pass through. In the biker example to the right, one could pre-focus near the side of the road, since one would expect the biker to pass by at near that distance. Some SLR lenses also have a minimum focus distance switch; setting this to the greatest distance possible (assuming the subject will never be closer) can also improve performance. Be warned, however, that in continuous autofocus mode shots can still be taken even if the focus lock has not yet been achieved.

IN PRACTICE: PORTRAITS & OTHER STILL PHOTOS

Still photos are best taken using the one-shot autofocus mode, which ensures that a focus lock has been achieved before the exposure begins. The usual focus point requirements of contrast and strong lighting still apply, although one needs to ensure there is very little subject motion. Accurate focus is especially important for portraits because these typically have a shallow depth of field.

For portraits, the eye is the best focus point — both because this is a standard and because it has good contrast. Although the central autofocus sensor is usually most sensitive, the most accurate focusing is achieved using the off-center focus points for off-center subjects. If one were to instead use the central AF point to achieve a focus lock (prior to recomposing for an off-center subject), the focus distance will always be behind the actual subject distance — and this error increases for closer subjects.

In low-light conditions, it may also be worth considering whether your focus point contains primarily vertical or horizontal contrast. Since the most common type of AF sensor is the vertical line sensor, one may be able to achieve a focus lock not otherwise possible by rotating the camera 90° during autofocus. In the example to the left, the stairs are comprised primarily of horizontal lines. If one were to focus near the back of the foreground stairs (to maximize apparent depth of field using the hyperfocal distance), one could avoid a failed autofocus by first orienting the camera in landscape mode during autofocus. Afterwards one could rotate the camera back to portrait orientation during the exposure, if so desired.

Note that the emphasis in this tutorial has been on *how* to focus — not necessarily *where* to focus. For further reading on this topic please visit the tutorials on depth of field and the hyperfocal distance.
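The focus-and-recompose error discussed above follows from simple geometry: after locking focus at distance d and rotating the camera by an angle θ, the focal plane still sits at d along the new optical axis, while the subject now lies at only d·cos θ. The sketch below quantifies this under simplifying assumptions (thin lens, flat focal plane); it is a rough model, not a lens-design calculation.

```python
# Rough geometry of the focus-and-recompose error: lock focus on the subject
# with the central point, then rotate the camera by `angle_degrees` to
# recompose. The focal plane remains at `subject_distance` along the new axis,
# but the subject now sits at subject_distance * cos(angle), so the plane of
# sharp focus lands behind the subject by the returned amount.

import math

def recompose_error(subject_distance, angle_degrees):
    """Distance (same units as subject_distance) the focal plane falls behind."""
    theta = math.radians(angle_degrees)
    return subject_distance * (1 - math.cos(theta))
```

At 1 m with a 20° recompose, the focal plane lands roughly 6 cm behind the eye. Although this absolute error grows with subject distance, depth of field shrinks much faster as the subject gets closer, which is why the error matters most for close portraits.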