
Digital Photography

Understanding the Basics

Let's say you want to take a picture and e-mail it to a friend. To do this, you need the image to be
represented in the language that computers recognize -- bits and bytes. Essentially, a digital image is just a
long string of 1s and 0s that represent all the tiny colored dots -- or pixels -- that collectively make up the
image.
If you want to get a picture into this form, you have two options:

• You can take a photograph using a conventional film camera, process the film chemically, print it onto
photographic paper and then use a digital scanner to sample the print (record the pattern of light as a
series of pixel values).
• You can directly sample the original light that bounces off your subject, immediately breaking that
light pattern down into a series of pixel values -- in other words, you can use a digital camera.

At its most basic level, this is all there is to a digital camera. Just like a conventional camera, it has a series of
lenses that focus light to create an image of a scene. But instead of focusing this light onto a piece of film, it
focuses it onto a semiconductor device that records light electronically. A computer then breaks this electronic
information down into digital data. All the fun and interesting features of digital cameras come as a direct result of
this process.
The key difference between a digital camera and a film-based camera is that the digital camera has no
film. Instead, it has a sensor that converts light into electrical charges.
The image sensor employed by most digital cameras is a charge coupled device (CCD). Some low-end
cameras use complementary metal oxide semiconductor (CMOS) technology. While CMOS sensors will
almost certainly improve and become more popular in the future, they probably won't replace CCD sensors in
higher-end digital cameras. Throughout the rest of this article, we will mostly focus on CCD. For the purpose
of understanding how a digital camera works, you can think of them as nearly identical devices. Most of what
you learn will also apply to CMOS cameras.
The CCD is a collection of tiny light-sensitive diodes, which convert photons (light) into electrons
(electrical charge). These diodes are called photosites. In a nutshell, each photosite is sensitive to light -- the
brighter the light that hits a single photosite, the greater the electrical charge that will accumulate at that site.
One of the drivers behind the falling prices of digital cameras has been the introduction of CMOS image
sensors. CMOS sensors are much less expensive to manufacture than CCD sensors. Both CCD and CMOS
image sensors start at the same point -- they have to convert light into electrons at
the photosites. If you've read the article How Solar Cells Work, you already understand one of the pieces of
technology used to perform the conversion. A simplified way to think about the sensor used in a digital
camera (or camcorder) is to think of it as having a 2-D array of thousands or millions of tiny solar cells, each
of which transforms the light from one small portion of the image into electrons. Both CCD and CMOS
devices perform this task using a variety of technologies.
The next step is to read the value (accumulated charge) of each cell in the image. In a CCD device, the charge
is actually transported across the chip and read at one corner of the array. An analog-to-digital converter turns
each pixel's value into a digital value. In most CMOS devices, there are several transistors at each pixel that
amplify and move the charge using more traditional wires. The CMOS approach is more flexible because each
pixel can be read individually. CCDs use a special manufacturing process to create the ability to transport
charge across the chip without distortion. This process leads to very high-quality sensors in terms of fidelity
and light sensitivity. CMOS chips, on the other hand, use completely standard manufacturing processes to
create the chip -- the same processes used to make most microprocessors. Because of the manufacturing
differences, there are several noticeable differences between CCD and CMOS sensors.

• CCD sensors, as mentioned above, create high-quality, low-noise images. CMOS sensors,
traditionally, are more susceptible to noise.
• Because each pixel on a CMOS sensor has several transistors located next to it, the light sensitivity of
a CMOS chip is lower. Many of the photons hitting the chip hit the transistors instead of the
photodiode.
• CMOS sensors traditionally consume little power. Implementing a sensor in CMOS yields a low-
power sensor. CCDs, on the other hand, use a process that
consumes lots of power. CCDs consume as much as 100
times more power than an equivalent CMOS sensor.
• CMOS chips can be fabricated on just about any standard
silicon production line, so they tend to be extremely
inexpensive compared to CCD sensors.
• CCD sensors have been mass produced for a longer period
of time, so they are more mature. They tend to have higher
quality pixels, and more of them.

Based on these differences, you can see that CCDs tend to be used in
cameras that focus on high-quality images with lots of pixels and
excellent light sensitivity. CMOS sensors usually have lower quality,
lower resolution and lower sensitivity. However, CMOS cameras are
less expensive and have great battery life.
CMOS image sensor
Resolution

The amount of detail that the camera can capture is called the resolution, and it is measured in pixels. The more
pixels your camera has, the more detail it can capture. The more detail you have, the more you can blow up a
picture before it becomes "grainy" and starts to look out-of-focus.

Some typical resolutions that you find in digital cameras today include:

• 256x256 pixels - You find this resolution on very cheap cameras. This resolution is so low that the
picture quality is almost always unacceptable. This is 65,000 total pixels.
• 640x480 pixels - This is the low end on most "real" cameras. This resolution is great if you plan to e-
mail most of your pictures to friends or post them on a Web site. This is 307,000 total pixels.
• 1216x912 pixels - If you are planning to print your images, this is a good resolution. This is a
"megapixel" image size -- 1,109,000 total pixels.
• 1600x1200 pixels - This is "high resolution." Images taken with this resolution can be printed in
larger sizes, such as 8x10 inches, with good results. This is almost 2 million total pixels. You can find
cameras today with up to 10.2 million pixels.

You may or may not need lots of resolution, depending on what you want to do with your pictures. If you
are planning to do nothing more than display images on a Web page or send them in e-mail, then using
640x480 resolution has several advantages:

• Your camera's memory will hold more images at this low resolution than at higher resolutions.

• It will take less time to move the images from the camera to your computer.
• The images will take up less space on your computer.

On the other hand, if your goal is to print large images, you definitely want to take high-resolution shots
and need a camera with lots of pixels.

What picture resolution will give the best quality prints from my inkjet printer?

There are many different technologies used in inkjet printers. In general, printer manufacturers will
advertise the printer resolution in dots per inch (dpi). However, all dots are not created equal. One printer may
place more drops of ink (black, cyan, magenta or yellow) per dot than another.
For instance, printers made by Hewlett Packard that use PhotoREt III technology can layer a combination of
up to 29 drops of ink per dot, yielding about 3,500 possible colors per dot. This may sound like a lot, but most
cameras can capture 16.8 million colors per pixel. So these printers cannot replicate the exact color of a pixel
with a single dot. Instead, they must create a grouping of dots that when viewed from a distance blend
together to form the color of a single pixel. The rule of thumb is that you divide your printer's color resolution
by about four to get the actual maximum picture quality of your printer. So for a 1200 dpi printer, a resolution
of 300 pixels per inch would be just about the best quality that printer is capable of. This means that with a
1200x900 pixel image, you could print a 4-inch by 3-inch print. In practice, though, lower resolutions than
this usually provide adequate quality. To make a reasonable print that comes close to the quality of a
traditionally developed photograph, you need about 150 to 200 pixels per inch of print size.
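To make the arithmetic concrete, here is a small Python sketch of the rule of thumb just described. The function name and the 4:1 quality divisor are illustrative choices, not a standard formula:

```python
def max_print_size(width_px, height_px, printer_dpi=1200, quality_divisor=4):
    """Estimate the largest 'best quality' print, in inches, using the rule
    of thumb above: divide the printer's dpi by about 4, since several dots
    must blend together to render one pixel's color."""
    ppi = printer_dpi / quality_divisor      # e.g. 1200 dpi -> 300 pixels per inch
    return width_px / ppi, height_px / ppi

# A 1200x900 image on a 1200 dpi printer -> (4.0, 3.0) inches, as in the text.
print(max_print_size(1200, 900))
```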

Kodak recommends:

Print Size     Megapixels    Image Resolution
Wallet         0.3           640x480 pixels
4x5 inches     0.4           768x512 pixels
5x7 inches     0.8           1152x768 pixels
8x10 inches    1.6           1536x1024 pixels

Capturing Color

Unfortunately, each photosite is colorblind. It only keeps track of the total intensity of the light that strikes
its surface. In order to get a full color image, most sensors use filtering to look at the light in its three primary
colors. Once all three colors have been recorded, they can be added together to create the full spectrum of
colors that you've grown accustomed to seeing on computer monitors and color printers.

How the three basic colors combine to form other colors
There are several ways of recording the three colors in a digital camera. The highest quality cameras use
three separate sensors, each with a different filter over it. Light is directed to the different sensors by placing a
beam splitter in the camera. Think of the light entering the camera as water flowing through a pipe. Using a
beam splitter would be like dividing an identical amount of water into three different pipes. Each sensor gets
an identical look at the image; but because of the filters, each sensor only responds to one of the primary
colors.
The advantage of this method is that the camera records each of the three colors at each pixel location.
Unfortunately, cameras that use this method tend to be bulky and expensive.
A second method is to rotate a series of red, blue and green filters in front of a single sensor. The sensor
records three separate images in rapid succession. This method also provides information on all three colors at
each pixel location; but since the three images aren't taken at precisely the same moment, both the camera and
the target of the photo must remain stationary for all three readings. This isn't practical for candid
photography or handheld cameras.
A more economical and practical way to record the three primary colors from a single image is to
permanently place a filter over each individual photosite. By breaking up the sensor into a variety of red, blue
and green pixels, it is possible to get enough information in the general vicinity of each sensor to make very
accurate guesses about the true color at that location. This process of looking at the other pixels in the
neighborhood of a sensor and making an educated guess is called interpolation.
The most common pattern of filters is the Bayer filter pattern. This pattern alternates a row of red and
green filters with a row of blue and green filters. The pixels are not evenly divided -- there are as many green
pixels as there are blue and red combined. This is because the human eye is not equally sensitive to all three
colors. It's necessary to include more information from the green pixels in order to create an image that the
eye will perceive as a "true color."
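If you want a feel for how interpolation works, here is a minimal Python sketch that recovers the green channel from a Bayer mosaic by averaging neighbors. Real cameras use far more sophisticated demosaicing; the RGGB layout and the simple bilinear averaging here are assumptions made for illustration:

```python
import numpy as np

def interpolate_green(raw):
    """Toy bilinear demosaicing of the green channel from a Bayer mosaic.

    `raw` is a 2-D array of photosite readings. Assuming an RGGB layout,
    green samples sit where (row + col) is odd; every red or blue site
    estimates its green value by averaging its four green neighbors.
    (Border pixels are skipped to keep the sketch short.)
    """
    h, w = raw.shape
    green = np.zeros((h, w), dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if (y + x) % 2 == 1:                  # a green photosite: keep it
                green[y, x] = raw[y, x]
            else:                                 # red or blue site: interpolate
                green[y, x] = (raw[y - 1, x] + raw[y + 1, x] +
                               raw[y, x - 1] + raw[y, x + 1]) / 4.0
    return green
```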
There are other ways of handling color in a digital camera. Some single-sensor cameras use alternatives to
the Bayer filter pattern. A company called Foveon has developed a sensor that captures all three colors by
embedding red, green and blue photodetectors in silicon. This X3 technology works because red, green and
blue light each penetrate silicon to a different depth. There is even a method that uses two sensors. Some of
the more advanced cameras don't add up the different values of red, green and blue, but instead subtract
values using the typesetting colors cyan, yellow, green and magenta. However, most consumer cameras on the
market today use a single sensor with alternating rows of green/red and green/blue filters.

Foveon X3 Technology

Until now, you haven't been getting the picture. At least not the complete picture. That's because
revolutionary Foveon X3 technology features the first and only image sensors that capture red, green and blue
light at each and every
pixel location. All other image sensors record just one color per pixel location —that's why Foveon's direct
image sensors deliver increased sharpness, better color detail and resistance to unpredictable color artifacts.
From point-and-shoot digital cameras to high-end professional equipment, Foveon X3 technology offers a
wealth of benefits to consumers and manufacturers alike. At the same time, it paves the way for other
innovations, such as new kinds of cameras that record both video and still images without compromising the
image quality.
The revolutionary design of Foveon X3 direct image sensors features three layers of pixel sensors. The
layers are embedded in silicon to take advantage of the fact that red, green, and blue light penetrate silicon to
different depths—forming the world's first direct image sensor.
To capture the color that other image sensors miss, Foveon X3® direct image sensors use three layers of
pixel sensors embedded in silicon. The layers are positioned to take advantage of the fact that silicon absorbs
different wavelengths of light to different depths, so one layer records red, another layer records green, and
the other layer records blue. This means that for every pixel location on Foveon X3 direct image sensors,
there's actually a stack of three pixel sensors, forming the first and only direct image sensors.

Until now, all other image sensors have featured just one layer of pixel sensors, with just one pixel sensor
per pixel location. To capture color, pixel sensors are organized in a grid, or mosaic, resembling a three-color
checkerboard. Each pixel is covered with a filter and records just one color—red, green, or blue. That
approach has inherent drawbacks, no matter how many pixels a mosaic-based image sensor might contain.
Since mosaic-based image sensors capture only one-third of the color, complex processing is required to
interpolate the color they miss. Interpolation leads to color artifacts and a loss of image detail. Blur filters are
used to reduce color artifacts, but at the expense of sharpness and resolution.
With its revolutionary process for capturing light, Foveon X3 technology never needs to compromise on
quality, so you get sharper pictures, truer colors, and fewer artifacts. And cameras equipped with Foveon X3
technology do not have to rely on processing power to fill in missing colors, reducing hardware requirements,
simplifying designs and minimizing lag time between one shot and the next. Dollar for dollar, pixel for pixel,
nothing compares to Foveon X3 technology.
Variable Pixel Size in X3 Technology

Foveon X3® direct image sensors not only lead to better pictures, but better cameras too, as a result of
their powerful full-color variable pixel size (VPS) capability. VPS opens the door to an entirely new breed of
camera, one that can switch seamlessly between still photography and digital video, without sacrificing the
quality of either.
The VPS capability allows signals from adjacent pixels to be combined into groups and read as one larger
pixel. For example, a 2300 x 1500 image sensor contains more than 3.4 million pixel locations. But if the VPS
capability were used to group those pixel locations into 4x4 blocks, the image sensor would appear to have
575 x 375 pixel locations, each of them 16 times larger than the originals. The size and configuration of a
pixel group are variable—2x2, 4x4, 1x2, etc.—and are controlled through sophisticated circuitry integrated
into Foveon X3 direct image sensors. Because Foveon X3 image sensors capture full color at every pixel
location, pixels that are grouped together form full-color "super pixels." No other image sensor can do this.
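A rough Python sketch of this grouping idea, assuming a plain 2-D array of photosite readings (this illustrates pixel binning in general, not Foveon's actual circuitry):

```python
import numpy as np

def bin_pixels(sensor, block=4):
    """Group photosite readings into block x block 'super pixels' by summing
    them -- an illustration of binning, not Foveon's proprietary hardware."""
    h, w = sensor.shape
    h, w = h - h % block, w - w % block            # trim to whole blocks
    trimmed = sensor[:h, :w]
    return trimmed.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

# A 2300x1500 sensor binned 4x4 reads out as 575x375 super pixels.
readout = bin_pixels(np.random.rand(1500, 2300), block=4)
print(readout.shape)   # (375, 575)
```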

The grouping of pixel locations increases the signal-to-noise ratio, allowing the camera to take full-color
pictures in low-light conditions with reduced noise. Using the VPS capability to increase pixel size and reduce
the resolution also allows the image sensor to run at higher frame rates, accelerating the speed at which
images can be captured.
This makes it possible to shoot high-quality digital video, enabling the development of the first cameras
with true dual-mode functionality. Without Foveon X3 technology, cameras attempting to accommodate both
still and video functions must sacrifice performance in one mode to do the other well. And since the sizing of
pixels can be done in an instant, a Foveon X3 direct image sensor can capture a high-resolution still photo in
the midst of recording video—yet another first in digital photography.

Better Quality

The unique ability of Foveon X3 direct image sensors to capture all the light at every pixel location results
in more than truer color -- it also translates into images of unprecedented sharpness and clarity. All colors,
especially green, carry luminance information that the human visual system uses to discern and define image
detail. Recognizing the importance of green light, manufacturers of mosaic image sensors dedicate 50% of
pixel locations to capturing green light, with the remaining 50% evenly divided between red and blue. Yet
they still capture only half as much green as Foveon X3 direct image sensors, which capture 100% of every
color for sharper, clearer images.
In many cases, the difference in sharpness and detail is compounded by the use of blur filters in mosaic-
based digital cameras. The blur filters are intended to minimize luminance and color artifacts. The artifacts are
unpredictable byproducts of the complex processing required to interpolate the information mosaic image
sensors miss. However, blur filters reduce artifacts at the expense of resolution and sharpness.
These trade-offs are unnecessary with Foveon X3 direct image sensors. There's no need to rely on
interpolation to reconstruct missing information, because all the information is captured by the revolutionary
stacked pixel design of Foveon X3 technology.

The Effects of Filters

Cameras using mosaic image sensors are forced to compromise between image quality and sharpness.
Images directly sampled with mosaic sensors have better resolution than those taken using blur filters, but
suffer from interpolation artifacts. Blur filters will alleviate the artifacts, but cause a reduction in overall
resolution and image detail.

Mosaic without blur filter: visible artifacts. Mosaic with blur filter: overall image softening. Foveon X3 direct
image sensor: no blur filter required.

New technology automatically corrects lighting problems in digital photographs.

X3 Fill Light, a new software feature, dramatically improves the image quality of digital images affected by
challenging lighting conditions. X3 Fill Light simulates the photographic method of adding extra light to
shadow regions, while preserving highlight detail. It is a powerful yet automatic method for "dodging and
burning" an image, where each pixel is optimally adjusted in relation to surrounding pixels. The X3 Fill Light
feature is included in software designed to process the X3F files generated by cameras which use Foveon
direct image sensors for capture. The X3 Fill Light feature is simple to apply: moving the slider in the positive
direction from the default setting of 0.0 increases the effect, as illustrated in the example.
As the X3 Fill Light slider is increased, the relationships among the regions of an image that contain
shadows, midtones, and highlights are altered. By increasing the amount of X3 Fill Light, the brightness and
contrast of the shadow regions are increased to add visibility to areas that have been underexposed.
Simultaneously, the contrast in highlight regions is increased and the brightness is adjusted to avoid
oversaturation.
Examples of images where the use of X3 Fill Light is desirable are those taken in mixed lighting
conditions including shadow and direct sunlight, indoor-outdoor scenes (such as through a doorway or
window), back-lit subjects, or dramatic sky scenes. The end results are natural looking images that map from a
wide-dynamic-range scene into a narrower dynamic range that can be properly reproduced on a print.
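As a rough illustration only (Foveon's actual algorithm is proprietary and adjusts each pixel relative to its neighbors, while this sketch applies a single global curve), a fill-light-style adjustment might look like this in Python:

```python
import numpy as np

def fill_light(img, amount=0.5):
    """Rough fill-light-style tone adjustment (NOT Foveon's actual method).
    `img` holds values in 0..1; `amount` acts like the slider, with 0.0
    leaving the image unchanged."""
    img = np.clip(img, 0.0, 1.0)
    lifted = img ** (1.0 / (1.0 + amount))   # gamma-style curve boosts dark values
    weight = 1.0 - img                       # shadows get the full lift,
    return img + weight * (lifted - img)     # highlights are barely touched
```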

Image Comparison

Foveon X3® technology visibly improves image quality, as these comparisons demonstrate. In this case,
an image taken with a mosaic sensor is compared to an image taken with Foveon X3 technology.

Mosaic capture vs. Foveon X3

Clarity
Mosaic vs. Foveon X3

As you can see, the camera equipped with Foveon X3 technology takes sharper pictures. That's because it
captures twice as much green as mosaic image sensors, and the green wavelengths of light are critical in
defining image detail.
Color Detail
Mosaic vs. Foveon X3

These pictures demonstrate how Foveon X3 technology improves color detail. The difference is that
Foveon X3 direct image sensors measure full color at each and every pixel location, while mosaic sensors
capture 50% of the green and just 25% of the red and blue.

Artifacts
Mosaic vs. Foveon X3

As shown here, Foveon X3 technology offers resistance to unpredictable artifacts. A mosaic image sensor
is more vulnerable to artifacts, largely because it must rely on complex processing to interpolate the colors it
missed. No amount of processing power can completely take the guesswork out of color interpolation.

Digitizing Information

The light is converted to electrical charge; but the electrical charges that build up in the CCD are not
digital signals that are ready to be used by your computer. In order to digitize the information, the signal must
be passed through an analog-to-digital converter (ADC). Interpolation is handled by a microprocessor after
the data has been digitized. Think of each photosite as a bucket or a well, and think of the photons of light as
raindrops. As the raindrops fall into the bucket, water accumulates (in reality, electrical charge accumulates).
Some buckets have more water and some buckets have less water, representing brighter and darker sections of
the image. Sticking to the analogy, the ADC measures the depth of the water, which is considered analog
information. Then it converts that information to binary form.
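Sticking with the bucket analogy, a toy version of what an 8-bit ADC does might look like this in Python (the function and the "full well" parameter are illustrative, not a real camera API):

```python
def adc_8bit(charge, full_well=1.0):
    """Toy analog-to-digital conversion: map an accumulated charge (the
    'depth of water in the bucket') to an 8-bit value between 0 and 255.
    `full_well` is the charge at which the bucket overflows (saturates)."""
    level = max(0.0, min(charge / full_well, 1.0))   # a full bucket clips at 1.0
    return round(level * 255)

print(adc_8bit(0.5))    # half-full bucket -> 128
print(adc_8bit(2.0))    # overflowing bucket still reads 255
```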

Is the number of photosites the same as the number of pixels?

If you read digital camera claims carefully, you'll notice that the number of pixels and the maximum
resolution numbers don't quite compute. For example, a camera claims to be a 2.1-megapixel camera and it is
capable of producing images with a resolution of 1600x1200. Let's do the math: a 1600x1200 image contains
1,920,000 pixels. But "2.1 megapixel" means there ought to be at least 2,100,000 pixels. This isn't an error
from rounding off, and it isn't binary mathematical trickery. There is a real discrepancy between these two
numbers. If a camera says it has 2.1 megapixels, then there really are approximately 2,100,000 photosites on
the CCD. What happens is that some of the photosites are not being used for imaging. Remember that the
CCD is an analog device. It's necessary to provide some circuitry to the photosites so that the ADC can
measure the amount of charge. This circuitry is masked in black so that it doesn't respond to light and distort
the image.

How big are the sensors?

The current generation of digital sensors is smaller than film. Typical film emulsions that are exposed in
a film-based camera measure 24mm x 36mm. If you look at the specifications of a typical 1.3-megapixel
camera, you'll find that it has a CCD sensor that measures 4.4mm x 6.6mm. As you'll see in a later section, a
smaller sensor means smaller lenses.

Output, Storage and Compression

Most digital cameras on the market today have an LCD screen, which means that you can view your
picture right away. This is one of the great advantages of a digital camera: You get immediate feedback on
what you capture. Once the image leaves the CCD sensor (by way of the ADC and a microprocessor), it is
ready to be viewed on the LCD.
Of course, that's not the end of the story. Viewing the image on your camera would lose its charm if that's
all you could do. You want to be able to load the picture into your computer or send it directly to a printer.
There are several ways to store images in a camera and then transfer them to a computer.

Creating Fun Photos

With the image-editing software that often comes with your camera, you can do lots of neat things. You can:

• crop the picture to capture just the part you want
• add text to the picture
• make the picture brighter or darker
• change the contrast and sharpness
• apply filters to the picture to make it look blurry, painted, embossed, etc.
• resize pictures
• rotate pictures
• cut stuff out of one picture and put it into another
• "stitch" together many pictures to create one large panoramic/360-degree picture
• create a 3-D picture that you can rotate and zoom in on and out of

Storage

Early generations of digital cameras had fixed storage inside the camera. You needed to connect the
camera directly to a computer by cables to transfer the images. Although most of today's cameras are capable
of connecting to a serial, parallel, SCSI, and/or USB port, they usually provide you with some sort of
removable storage device.

There are a number of storage systems currently used in digital cameras:

• Built-in memory - Some extremely inexpensive cameras have built-in Flash memory.
• SmartMedia cards - SmartMedia cards are small Flash memory modules.
• CompactFlash - CompactFlash cards are another form of Flash memory, similar to but slightly larger
than SmartMedia cards.
• Memory Stick - Memory Stick is a proprietary form of Flash memory used by Sony.
• Floppy disk - Some cameras store images directly onto floppy disks.
• Hard disk - Some higher-end cameras use small built-in hard disks, or PCMCIA hard-disk cards, for
image storage.

• Writeable CD and DVD - Some of the newest cameras are using writeable CD and DVD drives to
store images.

In order to transfer the files from a Flash memory device to your computer without using cables, you will
need to have a drive or reader for your computer. These devices behave much like floppy drives and are
inexpensive to buy.
Think of all these storage devices as reusable digital film. When you fill one up, either transfer the data or put
another one into the camera. The different types of Flash memory devices are not interchangeable. Each
camera manufacturer has decided on one device or another. Each of the Flash memory devices also needs
some sort of caddy or card reader in order to transfer the data.

What is the image capacity of each type of storage?

Right now, there are two main types of storage media in use today. Some cameras use 1.44-MB floppy
disks, and some use various forms of Flash memory that have capacities ranging from several megabytes to 1
gigabyte. There are several other formats, but for now we'll discuss these two.
The main difference between storage media is their capacity: The capacity of a floppy disk is fixed, and
the capacity of Flash memory devices is increasing all the time. This is fortunate because picture size is also
increasing constantly, as higher resolution cameras become available.
The two main file formats used by digital cameras are TIFF and JPEG. TIFF is an uncompressed format
and JPEG is a compressed format. Most cameras use the JPEG file format for storing pictures, and they
sometimes offer quality settings (such as medium or high). The following chart will give you an idea of the
file sizes you might expect with different picture sizes.

Image Size     TIFF (uncompressed)     JPEG (high quality)     JPEG (medium quality)
640x480        1.0 MB                  300 KB                  90 KB
800x600        1.5 MB                  500 KB                  130 KB
1024x768       2.5 MB                  800 KB                  200 KB
1600x1200      6.0 MB                  1.7 MB                  420 KB
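You can approximate the chart yourself with a little arithmetic: an uncompressed 24-bit image needs 3 bytes per pixel, and the JPEG ratio in this sketch is just an assumption chosen to land near the chart's "high quality" column:

```python
def image_sizes(width, height, jpeg_ratio=3.5):
    """Back-of-the-envelope file sizes: 3 bytes per pixel uncompressed; the
    ~3.5:1 JPEG ratio is an assumption matching the high-quality column."""
    raw_bytes = width * height * 3
    return raw_bytes, raw_bytes / jpeg_ratio

raw, jpeg = image_sizes(1600, 1200)
print(f"{raw / 1e6:.1f} MB raw, {jpeg / 1e6:.1f} MB JPEG")   # 5.8 MB raw, 1.6 MB JPEG
```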

One thing that becomes apparent is that a 1.44-MB disk cannot hold very many pictures. In fact, at some
image sizes you can't even fit one picture on the disk. But the floppy disk does have its uses. For Internet
publishing and e-mailing pictures to friends, you almost never need a picture bigger than 640x480, and you
will almost always save it in JPEG form. In this case, you might be able to fit 16 or so pictures on each disk.
If you are trying to store the biggest, highest quality images you can, then you will want the highest
capacity medium. A 128-MB Flash memory card, for instance, could store more than 1,400 small compressed
images or 21 of the uncompressed 1600x1200 images. You would probably never use the whole 128 MB if
you were just taking small pictures, but if you were taking the big pictures this would be the only way to go.
The large capacity might also come in handy if you were going on a long trip and wanted to be able to take
lots of pictures.

Compression

It takes a lot of memory to store a picture with over 1.2 million pixels. Almost all digital cameras use some
sort of data compression to make the files smaller. There are two features of digital images that make
compression possible. One is repetition. The other is irrelevancy.
You can imagine that throughout a given photo, certain patterns develop in the colors. For example, if a blue
sky takes up 30 percent of the photograph, you can be certain that some shades of blue are going to be
repeated over and over again. When compression routines take advantage of patterns that repeat, there is no
loss of information and the image can be reconstructed exactly as it was recorded. Unfortunately, this doesn't
reduce files any more than 50 percent, and sometimes it doesn't even come close to that level.
Irrelevancy is a trickier issue. A digital camera records more information than is easily detected by the human
eye. Some compression routines take advantage of this fact to throw away some of the more meaningless data.
If you need smaller files, you need to be willing to throw away more data. Most cameras offer several
different levels of compression, although they may not call it that. More likely they will offer you different
levels of resolution. This is the same thing. Lower resolution means more compression.

Batteries

Digital cameras, especially those that use a CCD sensor and an LCD display, tend to use lots of power --
which means they eat batteries. Rechargeable batteries help to lower the cost of using the digital camera, but
rechargeable batteries are sometimes expensive. Here are some things to consider:
• Does the camera use standard-size rechargeable batteries (e.g., AA), or does it use special rechargeable
batteries made by the manufacturer? If it uses the special ones, check to see what the price of another
battery pack is.
• If the camera takes AA batteries, can you use normal alkaline batteries in a pinch?
• Are the rechargeable batteries removable, or are they permanently mounted in the camera? If they are
not removable, it means that once the batteries go dead you can't use the camera again until you can
get to a recharger and power supply. This can be a major pain in the neck if you want to take a lot of
pictures at once.

Aperture and Shutter Speed

It is important to control the amount of light that reaches the sensor. Thinking back to the water bucket
analogy, if too much light hits the sensor, the bucket will fill up and won't be able to hold any more. If this
happens, information about the intensity of the light is being lost. Even though one photosite may be exposed
to a higher intensity light than another, if both buckets are full, the camera will not register a difference
between them.
The word camera comes from the term camera
obscura. Camera means room (or chamber) and
obscura means dark. In other words, a camera is a dark room. This dark room keeps out all unwanted light. At
the click of a button, it allows a controlled amount of light to enter through an opening and focuses the light
onto a sensor (either film or digital). In this section, you will learn how the aperture and shutter work together
to control the amount of light that enters the camera.

Aperture

The aperture is the size of the opening in the camera. It's located behind the lens. On a bright sunny day,
the light reflected off your subject may be very intense, and it doesn't take very much of it to create a good
picture. In this situation, you want a small aperture. But on a cloudy day, or in twilight, the light is not so
intense and the camera will need more light to create an image. In order to allow more light, the aperture must
be enlarged.
Your eye works the same way. When you are in the dark, the iris of your eye dilates your pupil (that is, it
makes it very large). When you go out into bright sunlight, your iris contracts and it makes your pupil very
small. If you can find a willing partner and a small flashlight, this is easy to demonstrate (if you do this, please
use a small flashlight, like the ones they use in a doctor's office). Look at your partner's eyes, then shine the
flashlight in and watch the pupils contract. Move the flashlight away, and the pupils will dilate.
Shutter Speed
Traditionally, the shutter speed is the amount of time that light is allowed to pass through the aperture.
Think of a mechanical shutter as a window shade. It is placed across the back of the aperture to block out the
light. Then, for a fixed amount of time, it opens and closes. The amount of time it is open is the shutter speed.
One way of getting more light into the camera is to decrease the shutter speed -- in other words, leave the
shutter open for a longer period of time.
Film-based cameras must have a mechanical shutter. Once you
expose film to light, it can't be wiped clean to start again. Therefore, it
must be protected from unwanted light. But the sensor in a digital camera
can be reset electronically and used over and over again. This is called a
digital shutter. Some digital cameras employ a combination of electrical
and mechanical shutters.

Exposing the Sensor

These two aspects of a camera, aperture and shutter speed, work together to capture the proper amount of
light needed to make a good image. In photographic terms, they set the exposure of the sensor. Most digital cameras automatically set
aperture and shutter speed for optimal exposure, which gives them the appeal of a point-and-shoot camera.
Some digital cameras also offer the ability to adjust the aperture settings by using menu options on the LCD
panel. More advanced hobbyists and professionals like to have control over the aperture and shutter speed
selections because it gives them more creative control over the final image. As you climb into the upper levels
of consumer cameras and the realm of professional cameras, you will be rewarded with controls that have the
look, feel and functions common to film-based cameras.

Lens and Focal Length

A camera lens collects the available light and focuses it on the sensor. Most digital cameras use automatic
focusing techniques, which you can learn more about in the article How Autofocus Cameras Work.
The important difference between the lens of a digital camera and the lens of a 35mm camera is the focal
length. The focal length is the distance between the lens and the surface of the sensor. You learned in the
section on technical details that the surface of a film sensor is much larger than the surface of a CCD sensor.
In fact, a typical 1.3-megapixel digital sensor is approximately one-sixth of the linear dimensions of film. In
order to project the image onto a smaller sensor, it is necessary to shorten the focal length by the same
proportion.
Focal length is also the critical information in determining how much magnification you get when you
look through your camera. In 35mm cameras, a 50mm lens gives a natural view of the subject. As you
increase the focal length, you get greater magnification, and objects appear to get closer. As you decrease the
focal length, things appear to get farther away, but you can capture a wider field of view in the camera.

You will find four different types of lenses on digital cameras:


• Fixed-focus, fixed-zoom lenses - These are the kinds of lenses you find on disposable and inexpensive
film cameras -- inexpensive and great for snapshots, but fairly limited.
• Optical-zoom lenses with automatic focus - Similar to the lens on a video camcorder, you have "wide"
and "telephoto" options and automatic focus. The camera may or may not let you switch to manual
focus.
• Digital-zoom lenses - With digital zoom, the camera takes pixels from the center of the image sensor
and "interpolates" them to make a full-size image. Depending on the resolution of the image and the
sensor, this approach may create a grainy or fuzzy image. It turns out that you can manually do the
same thing a digital zoom is doing -- simply snap a picture and then cut out the center of the image
using your image processing software.
• Replaceable lens systems - If you are familiar with high-end 35mm cameras, then you are familiar
with the concept of replaceable lenses. High-end digital cameras can use this same system, and in fact
can use lenses from 35mm cameras in some cases.

Since many photographers that use film-based cameras are familiar with the focal lengths that project an
image onto 35mm film, digital cameras advertise their focal lengths with "35mm equivalents." This is
extremely helpful information to have.
In the chart below, you can compare the actual focal lengths of a typical 1.3-megapixel camera and its
equivalent in a 35mm camera.
Focal Length    35mm Equivalent    View                                                 Typical Uses
5.4 mm          35 mm              Things look smaller and farther away.                Wide-angle shots, landscapes, large buildings, groups of people
7.7 mm          50 mm              Things look about the same as what your eye sees.    "Normal" shots of people and objects
16.2 mm         105 mm             Things are magnified and appear closer.              Telephoto shots, close-ups
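The chart implies a constant multiplier (about 6.5x) between this camera's focal lengths and their 35mm equivalents; a tiny Python helper makes that explicit. The crop factor is inferred from the chart, not a published specification:

```python
def equivalent_35mm(focal_mm, crop_factor=6.5):
    """Convert this camera's real focal length to its 35mm equivalent.
    The ~6.5x crop factor is inferred from the chart above (5.4 mm -> 35 mm),
    reflecting a sensor roughly one-sixth the linear size of 35mm film."""
    return focal_mm * crop_factor

for f in (5.4, 7.7, 16.2):
    print(f"{f} mm -> {round(equivalent_35mm(f))} mm")   # 35, 50, 105
```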

Optical Zoom vs. Digital Zoom

In general terms, a zoom lens is any lens that has an adjustable focal length. Zoom doesn't always mean a
close-up. As you can see in the chart above, the "normal" view of the world for this particular camera is 7.7
mm. You can zoom out for a wide-angle view of the world, or you can zoom in for a closer view of the world.
Digital cameras may have an optical zoom, a digital zoom, or both.
An optical zoom actually changes the focal length of your lens. As a result, the image is magnified by the
lens (sometimes called the optics, hence "optical" zoom). With greater magnification, the light is spread
across the entire CCD sensor and all of the pixels can be used. You can think of an optical zoom as a true
zoom that will improve the quality of your pictures.
A digital zoom is a computer trick that magnifies a portion of the information that hits the sensor. Let's
say you are shooting a picture with a 2X digital zoom. The camera will use only the pixels in the central
portion of the CCD sensor -- half the frame's width and half its height, so a quarter of the total pixels -- and
ignore all the others. Then it will use interpolation techniques to add detail to the photo. Although it may look
like you are shooting a picture with twice the magnification, you can get the same results by shooting the
photo without a zoom and blowing up the picture using your computer software.
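Here is a minimal Python sketch of what a 2X digital zoom amounts to, using crude pixel repetition in place of the camera's interpolation (a NumPy array stands in for the sensor image):

```python
import numpy as np

def digital_zoom_2x(image):
    """Sketch of a 2X digital zoom: crop the central half of the frame in
    each dimension, then blow it back up by repeating pixels (a crude
    stand-in for the camera's interpolation)."""
    h, w = image.shape[:2]
    crop = image[h // 4 : h - h // 4, w // 4 : w - w // 4]
    return crop.repeat(2, axis=0).repeat(2, axis=1)   # nearest-neighbor upscale

frame = np.random.rand(480, 640)
print(digital_zoom_2x(frame).shape)   # (480, 640) -- same size, quarter the data
```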

Macro

If you plan to take close-up images, look for a camera that has a macro focusing capability. This feature
lets you move the camera's lens very close to the subject. Here is an example of a macro photograph -- this is
a picture of part of a small electric motor, and the white disk is about the size of a U.S. quarter coin. If your
camera is not equipped with a macro setting, there is no way for you to take an image like this.

Cool Facts
• In the United States, there is roughly one camera for every adult.
• With a 3-megapixel camera, you can take a higher-resolution picture than most computer monitors
can display.
• The first consumer-oriented digital cameras were sold by Kodak and Apple in 1994.
• In 1998, Sony inadvertently sold over 700,000 camcorders with a limited ability to see through
clothes.
• You can use various software programs to "stitch" together a series of digital pictures to create a large
panorama.

Image Formats

On the Net, luckily, we really only have to deal with three main types of images: CompuServe GIF,
JPEG, and Bitmaps. At the moment, those are the only three that are roundly supported by the major
browsers. But what's the difference between them? What does it mean if a GIF is interlaced or non-interlaced?
Is a JPEG progressive because it enjoys art deco? Does a Bitmap actually offer directions somewhere? And
the most often asked question:

When do I use a specific image format?

Image or Graphic?

Technically, neither. If you really want to be strict, computer pictures are files, the same way Word
documents or solitaire games are files. They're all a bunch of ones and zeros all in a row. But we do have to
communicate with one another, so let's decide. Image. We'll use "image". That seems to cover a wide enough
topic range.
"Graphic" is more of an adjective, as in "graphic format." You see, we denote images on the Internet by
their graphic format. GIF is not the name of the image. GIF refers to the compression scheme used to create
the raster format set up by CompuServe. (More on that in a moment.)
So, they're all images unless you're talking about something specific.

44 Different Graphic Formats?

It does seem like a big number, doesn't it? In reality, there are not 44 completely different graphic formats.
Many of the 44 are different versions under the same compression umbrella -- interlaced and non-interlaced
GIF, for example.
Before getting into where we get all 44 (and there are even more than that), let us discuss for a moment.
There actually are only two basic methods for a computer to render, or store and display, an image. When you
save an image in a specific format, you are creating either a raster or meta/vector graphic format.

Raster

Raster image formats (RIFs) should be the most familiar to Internet users. A Raster format breaks the
image into a series of colored dots called pixels. The number of ones and zeros (bits) used to create each pixel
denotes the depth of color you can put into your images.

If your pixel is denoted with only one bit-per-pixel then that pixel must be black or white. Why? Because
that pixel can only be a one or a zero, on or off, black or white. Bump that up to 4 bits-per-pixel and you're
able to set that colored dot to one of 16 colors. If you go even higher to 8 bits-per-pixel, you can save that
colored dot at up to 256 different colors. Does that number, 256, sound familiar to anyone? That's the upper
color level of a GIF image. Sure, you can go with less than 256 colors, but you cannot have over 256. That's
why a GIF image doesn't work overly well for photographs and larger images. There are a whole lot more
than 256 colors in the world. Images can carry millions. But if you want smaller icon images, GIFs are the
way to go.
Raster image formats can also save at 16, 24, and 32 bits-per-pixel. At the two highest levels, the pixels
themselves can carry up to 16,777,216 different colors. The image looks great! Bitmaps saved at 24 bits-per-
pixel are great quality images, but of course they also run about a megabyte per picture.
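The bits-per-pixel arithmetic above is easy to check in a couple of lines of Python (the 640x480 default is just an example size):

```python
def color_depth_stats(bits_per_pixel, width=640, height=480):
    """How many colors a given bit depth allows, and the raw bytes needed
    to store an image of the given size at that depth."""
    colors = 2 ** bits_per_pixel              # 8 bpp -> 256, 24 bpp -> 16,777,216
    raw_bytes = width * height * bits_per_pixel // 8
    return colors, raw_bytes

print(color_depth_stats(24))   # (16777216, 921600) -- nearly a megabyte per picture
```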

The three main Internet formats, GIF, JPEG, and Bitmap, are all Raster formats.
Some other Raster formats include the following:

CLP   Windows Clipart
DCX   ZSoft Paintbrush
DIB   OS/2 Warp format
FPX   Kodak's FlashPic
IMG   GEM Paint format
JIF   JPEG Related Image format
MAC   MacPaint
MSP   Microsoft Paint
PCT   Macintosh PICT format
PCX   ZSoft Paintbrush
PPM   Portable Pixel Map (UNIX)
PSP   Paint Shop Pro format
RAW   Unencoded image format
RLE   Run-Length Encoding (used to lower image bit rates)
TIFF  Aldus Corporation format
WPG   WordPerfect image format

Pixels and the Web

Since I brought up pixels, I thought now might be a pretty good time to talk about pixels and the Web.
How much is too much? How many is too few?
There is a delicate balance between the crispness of a picture and the number of pixels needed to display it.
Let's say you have two images, each is 5 inches across and 3 inches down. One uses 300 pixels to span that
five inches, the other uses 1500. Obviously, the one with 1500 uses smaller pixels. It is also the one that offers
a more crisp, detailed look. The more pixels, the more detailed the image will be. Of course, the more pixels
the more bytes the image will take up.
So, how much is enough? That depends on whom you are speaking to, and right now you're speaking to
me. I recommend 100 pixels per inch. That creates a ten-thousand-pixel square inch. This provides a pretty
crisp image without going overboard on the bytes. It also allows some leeway to increase or decrease the size
of the image and not mess it up too much.
The lowest is 72 pixels per inch, the agreed upon low end of the image scale. In terms of pixels per
square inch, it's a whale of a drop to 5184. Try that. See if you like it, but I think you'll find that lower
definition monitors really play havoc with the image.

Meta/Vector Image Formats

You may not have heard of this type of image formatting, not that you had heard of Raster, either. This
formatting falls into a lot of proprietary formats, formats made for specific programs. CorelDraw (CDR),
Hewlett-Packard Graphics Language (HGL), and Windows Metafiles (EMF) are a few examples.

Where the Meta/Vector formats have it over Raster is that they are more than a simple grid of colored dots.
They're actual vectors of data stored in mathematical formats rather than bits of colored dots. This allows for a
strange shaping of colors and images that can be perfectly cropped on an arc. A squared-off map of dots
cannot produce that arc as well. In addition, since the information is encoded in vectors, Meta/Vector image
formats can be blown up or down (a property known as "scalability") without looking jagged or crowded (a
property known as "pixelating").

So that I do not receive e-mail from those in the computer image know, there is a difference in Meta and
Vector formats. Vector formats can contain only vector data whereas Meta files, as is implied by the name,
can contain multiple formats. This means there can be a lovely Bitmap plopped right in the middle of your
Windows Meta file. You'll never know or see the difference but, there it is. I'm just trying to keep everybody
happy.

What's A Bitmap?

I get that question a lot. Usually it's followed with "How come it only works on Microsoft Internet
Explorer?" The second question's the easiest. Microsoft invented the Bitmap format. It would only make sense
they would include it in their browser. Every time you boot up your PC, the majority of the images used in the
process and on the desktop are Bitmaps.
Against what I said above, Bitmaps will display on all browsers, just not in the familiar <IMG SRC="-->
format we're all used to. I see Bitmaps used mostly as return images from PERL Common Gateway Interfaces
(CGIs). A counter is a perfect example. Page counters that have that "odometer" effect are Bitmap images
created by the server, rather than as an inline image. Bitmaps are perfect for this process because they're a
simple series of colored dots. There's nothing fancy to building them. It's actually a fairly simple process. In
the script that runs the counter, you "build" each number for the counter to display. Note the counter is black
and white. That's only a one bit-per-pixel level image. To create the number zero in the counter, you would
build a grid 7 pixels wide by 10 pixels high. The pixels you want to remain black, you would denote as zero.
Those you wanted white, you'd denote as one.

0 0 0 0 0 0 0
0 0 1 1 1 0 0
0 1 1 1 1 1 0
0 1 1 0 1 1 0
0 1 1 0 1 1 0
0 1 1 0 1 1 0
0 1 1 0 1 1 0
0 1 1 1 1 1 0
0 0 1 1 1 0 0
0 0 0 0 0 0 0

See the number zero in the grid above? You create one of those patterns for the numbers 0 through 9. The
PERL script then returns the Bitmap image representing the numbers and you get that neat little odometer
effect. That's the concept of a Bitmap: a grid of colored points. The more bits per pixel, the more fancy the
Bitmap can be.
Bitmaps are good images, but they're not great. If you've played with Bitmaps versus any other image
formats, you might have noticed that the Bitmap format creates images that are a little heavy on the bytes.
The reason is that the Bitmap format is not very efficient at storing data. What you see is pretty much what
you get, one series of bits stacked on top of another.

Bitmap Image

Compression

I said above that a Bitmap was a simple series of pixels all stacked up. But the same image saved in GIF or
JPEG format uses fewer bytes to make up the file. How? Compression.
"Compression" is a computer term that covers a variety of mathematical techniques used to cut down an
image's byte size. Let's say you have an image where the upper right-hand corner has four pixels all the same
color. Why not find a way to make those four pixels into one? That would cut down the number of bytes by
three-fourths, at least in the one corner. That's a compression factor.
Bitmaps can be compressed to a point. The process is called "run-length encoding." Runs of pixels that are
all the same color are all combined into one pixel. The longer the run of pixels, the more compression.
Bitmaps with little detail or color variance will really compress. Those with a great deal of detail don't offer
much in the way of compression. Bitmaps that use the run-length encoding can carry either the common
".bmp" extension or ".rle". Another difference between the two files is that the common Bitmap can accept 16
million different colors per pixel. Saving the same image in run-length encoding knocks the bits-per-pixel
down to 8. That locks the level of color in at no more than 256. That's even more compression of bytes to
boot.
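A bare-bones Python sketch of run-length encoding over one row of pixel values shows why flat areas compress well and detailed areas don't:

```python
def rle_encode(pixels):
    """Minimal run-length encoding: collapse runs of identical pixel
    values into (value, run_length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1                  # extend the current run
        else:
            runs.append([p, 1])               # start a new run
    return runs

def rle_decode(runs):
    """Expand (value, run_length) pairs back into the original pixels."""
    return [p for p, n in runs for _ in range(n)]

row = [255, 255, 255, 255, 0, 0, 255]
print(rle_encode(row))                        # [[255, 4], [0, 2], [255, 1]]
assert rle_decode(rle_encode(row)) == row
```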
So, why not create a single pixel when all of the colors are close? You could even lower the number of
colors available so that you would have a better chance of the pixels being close in color. Good idea. The
people at CompuServe felt the same way.

JPEG Image Formats

JPEG is a compression algorithm developed by the people the format is named after, the Joint
Photographic Experts Group. JPEG's big selling point is that its compression factor stores the image on the
hard drive in fewer bytes than the image occupies when it is actually displayed. The Web took to the format
straightaway because not only did the image store in fewer bytes, it transferred in fewer bytes. As the Internet
adage goes, the pipeline isn't getting any bigger, so we need to make what is traveling through it smaller.
For a long while, GIF ruled the Internet roost but when JPEG appeared it was adopted quickly, even
though it brought some problems with it.
JPEG pictures can be saved at different compression levels. A few examples are presented below of a
picture at 400x336 whose uncompressed size is 153 kilobytes:

Compression 20% - 37 kb Compression 40% - 25 kb Compression 60% - 19 kb

Compression 80% - 12 kb Compression 90% - 7 kb Compression 95% - 4 kb

The difference between 20% and 60% compression is not great from a quality point of view, but the picture's
size decreases very much. At 95% the picture looks horrible; it is not worth it. In some cases you can't even
tell what the picture is meant to show, because of the high compression.

The GIF Image Formats

GIF, which stands for "Graphic Interchange Format," was first standardized in 1987 by CompuServe,
although the patent for the algorithm (mathematical formula) used to create GIF compression actually belongs
to Unisys. The first format of GIF used on the Web was called GIF87a, representing its year and version. It
saved images at 8 pits-per-pixel, capping the color level at 256. That 8-bit level allowed the image to work
across multiple server styles, including CompuServe, TCP/IP, and AOL. It was a graphic for all seasons, so to
speak.
CompuServe updated the GIF format in 1989 to include animation, transparency, and interlacing. They
called the new format, you guessed it: GIF89a. There's no discernible difference between a basic (known as
non-interlaced) GIF in 87 and 89 formats.


Animation

The concept of GIF89a animation is much the same as a picture book with small animation cells in each
corner. Flip the pages and the images appear to move. Here, you have the ability to set the cell's (technically
called an "animation frame") movement speed in 1/100ths of a second. An internal clock embedded right into
the GIF keeps count and flips the image when the time comes.
The animation process has been bettered along the way by companies who have found their own method of
compressing the GIFs further. As you watch an animation you might notice that very little changes from
frame to frame. So, why put up a whole new GIF image if only a small section of the frame needs to be
changed? That's the key to some of the newer compression factors in GIF animation. Less changing means
fewer bytes.

Transparency

The process is best described as similar to the weather forecaster on your local news. Each night they stand
in front of a big green (sometimes blue) screen and deliver the weather while that blue or green behind them is
"keyed" out and replaced by another source. In the case of the weather forecaster, it's usually a large map with
lots of Ls and Hs.
The process in television is called a "chroma key." A computer is told to hone in on a specific color, let's
say it's green. Chroma key screens are usually green because it's the color least likely to be found in human
skin tones. You don't want to use a blue screen and then chroma out someone's pretty blue eyes. That chroma
(color) is then "erased" and replaced by another image.

Think of that in terms of a transparent GIF. There are only 256 colors available in the GIF. The computer
is told to hone in on one of them. It's done by choosing a particular red/green/blue shade already found in the
image and blanking it out. The color is basically dropped from the palette that makes up the image. Thus
whatever is behind it shows through.
The shape is still there though. Try this: Get an image with a transparent background and alter its height
and width in your HTML code. You'll see what should be the transparent color seeping through.
Any color that's found in the GIF can be made transparent, not just the color in the background. If the
background of the image is speckled then the transparency is going to be speckled. If you cut out the color
blue in the background, and that color also appears in the middle of the image, it too will be made transparent.
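In spirit, GIF transparency is just "one palette index is skipped when drawing." A toy Python sketch (the palette and pixel values below are made up for illustration):

```python
def apply_transparency(indexed_pixels, palette, transparent_index):
    """Toy GIF-style transparency: any pixel whose palette index matches the
    chosen one is keyed out (None stands for 'let the background show')."""
    return [None if i == transparent_index else palette[i]
            for i in indexed_pixels]

# Hypothetical two-color palette: index 0 is a blue background, index 1 is white.
palette = [(0, 0, 255), (255, 255, 255)]
print(apply_transparency([0, 0, 1, 1, 0], palette, transparent_index=0))
# [None, None, (255, 255, 255), (255, 255, 255), None]
```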

Interlaced vs. Non-Interlaced GIF

When you do NOT interlace an image, you fill it in from the top to the bottom, one line after another.
Hopefully, you're on a slower connection, so you get the full effect of waiting for the image to come in. It can
be torture sometimes. That's where the brilliant Interlaced GIF89a idea came from.
Interlacing is the concept of filling in every other line of data, then going back to the top and doing it all
again, filling in the lines you skipped. Your television works that way. The effect on a computer monitor is
that the graphic appears blurry at first and then sharpens up as the other lines fill in. That allows your viewer
to at least get an idea of what's coming up rather than waiting for the entire image, line by line. The example
image below is of a spice shop in the Grand Covered Bazaar, Istanbul.
Both interlaced and non-interlaced GIFs get you to the same destination. They just do it differently. It's up
to you which you feel is better.
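For the curious, the actual GIF89a scheme is slightly fancier than "every other line": it makes four passes over the rows. A short Python sketch of the row order:

```python
def gif_interlace_order(height):
    """Row transmission order for an interlaced GIF: four passes of
    (starting row, step), per the GIF89a scheme."""
    passes = [(0, 8), (4, 8), (2, 4), (1, 2)]
    order = []
    for start, step in passes:
        order.extend(range(start, height, step))
    return order

print(gif_interlace_order(16))
# [0, 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]
```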

Progressive JPEGs

You can almost guess what this is all about. A progressive JPEG works a lot like the interlaced GIF89a by
filling in every other line, then returning to the top of the image to fill in the remainder. The example is again
presented three times at 1%, 50%, and 99% compression. Obviously, here's where bumping up the
compression does not pay off. Rule of thumb: If you're going to use progressive JPEG, keep the compression
up high, 75% or better.
