
The field of digital image processing refers to processing digital images by means of
a digital computer. A digital image is composed of a finite number of elements, each of which
has a particular location and value. These elements are called picture elements, image
elements, pels, or pixels. Pixel is the term used most widely to denote the elements of a
digital image.
An image is a two-dimensional function that represents a measure of some characteristic,
such as brightness or color, of a viewed scene. An image is a projection of a 3-D scene
onto a 2-D projection plane.
An image may be defined as a two-dimensional function f(x,y), where x and y are spatial
(plane) coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the
intensity of the image at that point. The term gray level is often used to refer to the intensity
of monochrome images. Color images are formed by a combination of individual 2-D
images. For example, in the RGB color system, a color image consists of three (red, green
and blue) individual component images. For this reason, many of the techniques
developed for monochrome images can be extended to color images by processing the
three component images individually. An image may be continuous with respect to the x-
and y-coordinates and also in amplitude. Converting such an image to digital form
requires that the coordinates, as well as the amplitude, be digitized.

An image processing system is the combination of the different elements involved in
digital image processing. Digital image processing is the processing of an image by means
of a digital computer, using different computer algorithms to perform image processing
on digital images.

Components of Image Processing System


It consists of the following components:
 Image Sensors:
The image sensor senses the intensity, amplitude, coordinates and other
features of the image and passes the result to the image processing
hardware. Together with the scene being imaged, it constitutes the problem domain.

 Image Processing Hardware:


Image processing hardware is the dedicated hardware used to process
the data obtained from the image sensors. It passes the result to a
general-purpose computer.

 Computer:
The computer used in an image processing system is an ordinary general-purpose
computer of the kind used in daily life.

 Image Processing Software:


Image processing software is the software that includes all the mechanisms
and algorithms used in the image processing system.

 Mass Storage:
Mass storage stores the image data (pixels) during processing.

 Hard Copy Device:


Once the image is processed, it can be recorded on a hard copy device. Hard copy
devices include printers, film, and digital recording media such as optical disks.

 Image Display:
It includes the monitor or display screen that displays the processed images.

 Network:
Network is the connection of all the above elements of the image processing
system.
Fundamental steps in image processing:

1. Image acquisition: to acquire a digital image


2. Image preprocessing: to improve the image in ways that increase the chances for success
of the other processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.
4. Image representation: to convert the input data to a form suitable for computer
processing.
5. Image description: to extract features that result in some quantitative
information of interest or features that are basic for differentiating one class of objects
from another.
6. Image recognition: to assign a label to an object based on the information provided by
its descriptors.
7. Image interpretation: to assign meaning to an ensemble of recognized objects.
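As a concrete illustration of the first few steps, here is a minimal sketch in Python/NumPy. The synthetic image, the 3×3 mean filter, and the threshold value 0.5 are all illustrative assumptions, not part of the notes:

```python
import numpy as np

# Step 1 - acquisition (simulated): a dark scene containing one bright square,
# plus a little sensor noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64), dtype=float)
img[20:40, 20:40] = 1.0
img += 0.05 * rng.standard_normal(img.shape)

# Step 2 - preprocessing: a 3x3 mean filter suppresses the noise.
pad = np.pad(img, 1, mode="edge")
smooth = sum(pad[i:i + 64, j:j + 64] for i in range(3) for j in range(3)) / 9.0

# Step 3 - segmentation: a global threshold separates object from background.
mask = smooth > 0.5

# Steps 4-5 - representation/description: one quantitative feature, the object
# area in pixels (the square covers roughly 20 x 20 = 400 pixels).
area = int(mask.sum())
```

A later recognition step could compare such extracted features against those of known object classes.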

APPLICATIONS OF DIGITAL IMAGE PROCESSING:


Since digital image processing has very wide applications and almost all of the technical
fields are impacted by DIP, we will just discuss some of the major applications of DIP.
Digital image processing has a broad spectrum of applications, such as
1. Remote sensing via satellites and other spacecraft
2. Image transmission and storage for business applications
3. Medical processing
4. RADAR (Radio Detection and Ranging)
5. SONAR (Sound Navigation and Ranging)
6. Acoustic image processing (the study of underwater sound is known as underwater
acoustics or hydroacoustics)
7. Robotics and automated inspection of industrial parts

Images acquired by satellites are useful in the tracking of:


1. Earth resources
2. Geographical mapping
3. Prediction of agricultural crops
4. Urban growth and weather monitoring
5. Flood and fire control and many other environmental applications
Space image applications include:
1. Recognition and analysis of objects contained in images obtained from deep
space-probe missions.
2. Image transmission and storage applications occur in broadcast television
3. Teleconferencing
4. Transmission of facsimile images (Printed documents and graphics) for office
automation
5. Communication over computer networks

Medical applications:
1. Processing of chest X-rays
2. Cineangiograms
3. Projection images of transaxial tomography
4. Medical images that occur in radiology and nuclear magnetic resonance (NMR)
5. Ultrasonic scanning
Vidicon Digital Camera
A Vidicon is a type of camera tube whose working is based on photoconductivity.
Basically, it converts optical energy into electrical energy through the variation in the
resistance of the material with respect to illumination.
The Vidicon camera tube is based on the photoconductive properties of semiconductors:
the photoconductive property of a semiconductor means a decrease in resistance with the
amount of incident light. When light falls on the target, the number of free electrons
created at any point is directly proportional to the intensity of the light falling on that
point; the brighter the light, the greater the number of free electrons. These electrons are
removed from the material by means of a positive voltage, and hence the material becomes
positively charged. The value of the charge on the target at any point is proportional to
the intensity of light at the corresponding point in the original scene. So, a charge image
of the picture is formed on the surface of the target.

Construction of Vidicon Camera Tube


The structure of the Vidicon camera tube is shown in the figure below. It consists of:

(i) Target Plate or Signal Plate

(ii) Scanning System

(i) Target Plate or Signal Plate


The target consists of a thin conducting metallic film on a photoconductive layer, thin
enough to be transparent. The side of this film facing the cathode is coated with a very
thin layer of photoconductive material, i.e. selenium or antimony compounds. This is
deposited on a transparent conducting film, coated on the inner surface of the faceplate.
This conductive coating is known as the signal electrode or signal plate. This side is
scanned by the electron beam, while the optical image is focused on the other side of the film.

The photoconductive target material is an intrinsic semiconductor that has a very high
resistivity in darkness, which decreases with increasing illumination. The photo layer
has a thickness of about 0.0001 cm and behaves like an insulator, with a resistance of
approximately 20 MΩ in the dark. When bright light falls on any area of the
photoconductive coating, the resistance is reduced to about 2 MΩ. Very little charge
leaks away between successive frames; this charge is restored by the beam, and the small
current that flows to the signal electrode in darkness is called the dark current.

(ii) Scanning System


The electron beam for scanning is formed by the combination of the cathode, control grid
(Grid 1), accelerator (Grid 2) and anode (Grid 3). The focusing coil produces an axial
field which focuses the beam on the film. Vertical and horizontal deflection of the beam,
so as to scan the whole film, is accomplished by passing sawtooth current waves through
the deflecting coils, which produce transverse horizontal and vertical magnetic fields
respectively. The alignment coil is used for initial adjustment of the direction of the
electron beam.

Working of Vidicon camera tube


When the scanning beam passes over the photoconductive material of the signal plate, it
deposits electrons so that the potential of this side of the plate is reduced to that of the
cathode. The other side of the film (plate), however, remains at its original potential;
consequently, a potential difference of approximately 40 V is created across each point
of the photoconductive material. Before the next scan (which occurs after an interval of
1/50 or 1/25 s), the charge leaks through the photoconductive material at a rate determined
by the conductivity of the material, which in turn depends on the amount of incident
light. White portions of the scene illuminate the film more strongly and make it more
conductive.
Video Signal Output from Vidicon Camera Tube

Applications of Vidicon camera tube


This tube is quite popular for CCTV (closed-circuit television) applications because of its
low cost, small size, simplicity and ease of operation. Some more applications of the
vidicon camera tube are as follows:

1. Widely used for outdoor recording, domestic and industrial recording.


2. Aerospace and oceanography.
3. It is also used in slides, pictures, medicine, education, etc.
4. It is the most popular tube in the television industry.

Advantages

1. No ghost image.
2. There is no halo effect.
3. It is low cost and simple.
4. No gamma corrections needed.
5. Signal response is close to that of the human eye.
6. It has a long life of about 5000 to 20,000 hours.
7. It is compact, having a length of 12-20 cm and a diameter of 1.5-4 cm.
8. By varying the target voltage as per the illumination of the scene, the sensitivity can
be adjusted.
9. Resolution is better than that of the image orthicon; a resolution of the order of
350 lines can be achieved under practical conditions.
Elements of Visual Perception
In human visual perception, the eyes act as the sensor or camera, neurons act as the
connecting cable and the brain acts as the processor.
The basic elements of visual perceptions are:
1. Structure of Eye
2. Image Formation in the Eye
3. Brightness Adaptation and Discrimination

Structure of Eye:
The eye is nearly a sphere with an average diameter of approximately 20 mm. The eye
is enclosed by three membranes:

The cornea and sclera – the cornea is a tough, transparent tissue that covers the anterior
surface of the eye. The rest of the optic globe is covered by the sclera.
The choroid – it contains a network of blood vessels that serve as the major
source of nutrition to the eye. It also helps reduce the amount of extraneous light
entering the eye. It has two parts:
(1) Iris diaphragm – it contracts or expands to control the amount of light that enters
the eye
(2) Ciliary body

The retina – it is the innermost membrane of the eye. When the eye is properly focused,
light from an object outside the eye is imaged on the retina. Various light receptors
are distributed over the surface of the retina.

Image Formation in the Eye:


An image is formed when the lens of the eye focuses light from the outside world onto
a light-sensitive membrane at the back of the eye called the retina. The lens focuses
light on the photoreceptive cells of the retina, which detect photons of light and
respond by producing neural impulses.

The distance between the lens and the retina is about 17 mm, and the focal length
ranges from approximately 14 mm to 17 mm.
Brightness Adaptation and Discrimination:
Digital images are displayed as a discrete set of intensities. The eye's ability to
discriminate between different intensity levels is an important consideration when
presenting image processing results.

The range of light intensity levels to which the human visual system can adapt is of the
order of 10^10, from the scotopic threshold to the glare limit. In photopic vision alone,
the range is about 10^6.

Brightness: The apparent brightness of an object is its perceived luminance and depends
on the luminance of its surround. Brightness is a subjective, psychological measure of
perceived intensity, and it is practically impossible to measure objectively; it is relative.
For example, a burning candle in a darkened room appears bright to the viewer, but it
does not appear bright in full sunshine.

Contrast: Simultaneous contrast refers to the fact that a region's perceived brightness
does not depend simply on its intensity, as the figure below demonstrates: all the center
squares have exactly the same intensity, yet they appear different. Contrast is defined as
the ratio (max − min)/(max + min), where max and min are the maximum and minimum
of the grating intensity, respectively.
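This contrast ratio (often called Michelson contrast) is straightforward to compute; a small sketch in Python:

```python
def michelson_contrast(intensities):
    # Contrast = (max - min) / (max + min) over the grating intensities.
    mx, mn = max(intensities), min(intensities)
    return (mx - mn) / (mx + mn)

# A grating swinging between 0.2 and 0.8 has contrast 0.6 / 1.0 = 0.6.
c = michelson_contrast([0.2, 0.8, 0.2, 0.8])
```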

Mach band effect:


We know that perceived brightness is not a simple function of intensity. The visual system
tends to undershoot or overshoot around the boundary of regions of different intensities.
Figure shows a striking example of this phenomenon. Although the intensity of the stripes
is constant, we actually perceive a brightness pattern that is strongly scalloped, especially
near the boundaries. These seemingly scalloped bands are called Mach bands.

Hue

Hue refers to a specific basic tone of color, the root color, and can roughly be thought of
as one of the main colors of the rainbow. It is not simply another name for color, since
colors are defined more fully by adding brightness and saturation. For example, blue is a
hue, but by varying brightness and saturation many colors can be created from it:
Prussian blue, navy blue, and royal blue are some commonly known colors of blue.

Hue spectrum has three primary colors, three secondary colors, and six tertiary colors.

Saturation

Saturation is the measure of the strength of hue included in the color. At maximum saturation,
the color is almost like the hue and contains no grey. At the minimum, the color contains the
maximum amount of grey.

What is the difference between Hue and Saturation?

• Hue is the identified root color and can roughly be taken as one of the main colors
of the rainbow.

• Saturation is the strength of the hue present in the color ranging from grey to the original
root color.
Color Fundamentals
Colors are seen as variable combinations of the primary colors of light:
red (R), green (G), and blue (B). The primary colors can be mixed to
produce the secondary colors: magenta (red+blue), cyan (green+blue),
and yellow (red+green). Mixing the three primaries, or a secondary with
its opposite primary color, produces white light.

Figure 15.1 Primary and secondary colors of light

RGB colors are used for color TV, monitors, and video cameras.
However, the primary colors of pigments are cyan (C), magenta (M), and
yellow (Y), and the secondary colors are red, green, and blue. A proper
combination of the three pigment primaries, or a secondary with its
opposite primary, produces black.

Figure Primary and secondary colors of pigments

CMY colors are used for color printing.


Color characteristics
The characteristics used to distinguish one color from another are:
 Brightness: means the amount of intensity (i.e. color level).
 Hue: represents dominant color as perceived by an observer.
 Saturation: refers to the amount of white light mixed with a hue.

Color Models
The purpose of a color model is to facilitate the specification of colors in
some standard way. A color model is a specification of a coordinate
system and a subspace within that system where each color is represented
by a single point. Color models most commonly used in image processing
are:

 RGB model for color monitors and video cameras


 CMY and CMYK (cyan, magenta, yellow, black) models for color
printing
 HSI (hue, saturation, intensity) model

The RGB color model


In this model, each color appears in its primary colors red, green, and
blue. This model is based on a Cartesian coordinate system. The color
subspace is the cube shown in the figure below. The different colors in
this model are points on or inside the cube, and are defined by vectors
extending from the origin.
Figure 15.3 RGB color model

All color values R, G, and B are assumed normalized to the range [0, 1].
In practice, each of R, G, and B is commonly represented by an integer from 0 to 255.
Each RGB color image consists of three component images, one for each
primary color as shown in the figure below. These three images are
combined on the screen to produce a color image.
Figure 15.4 Scheme of RGB color image

The total number of bits used to represent each pixel in an RGB image is
called the pixel depth. For example, in an RGB image in which each of the red,
green, and blue images is an 8-bit image, the pixel depth of the RGB
image is 24 bits. The figure below shows the component images of an
RGB image.

Figure A full-color image and its RGB component images (panels: full color; red, green, and blue)
The CMY and CMYK color model
Cyan, magenta, and yellow are the primary colors of pigments. Most
printing devices such as color printers and copiers require CMY data
input or perform an RGB to CMY conversion internally. This conversion
is performed using the equation
[𝐶]   [1]   [𝑅]
[𝑀] = [1] − [𝐺]
[𝑌]   [1]   [𝐵]
where all color values have been normalized to the range [0, 1].
In printing, combining equal amounts of cyan, magenta, and yellow
produces a muddy-looking black. In order to produce true black, a fourth
color, black, is added, giving rise to the CMYK color model.
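The conversion above, plus one common convention for extracting the black (K) component (the specific K formula is an added assumption, not taken from these notes), can be sketched as:

```python
import numpy as np

def rgb_to_cmy(rgb):
    # C = 1 - R, M = 1 - G, Y = 1 - B, with values normalized to [0, 1].
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy_to_cmyk(cmy):
    # Pull the shared grey component out of CMY as the black channel K.
    cmy = np.asarray(cmy, dtype=float)
    k = float(cmy.min())
    if k == 1.0:          # pure black: nothing left to rescale
        return 0.0, 0.0, 0.0, 1.0
    c, m, y = (cmy - k) / (1.0 - k)
    return float(c), float(m), float(y), k

# Pure red (1, 0, 0) is printed with full magenta and yellow, no cyan.
cmy_red = rgb_to_cmy([1.0, 0.0, 0.0])
```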

The figure below shows the CMYK component images of an RGB image.

Figure A full-color image and its CMYK component images (panels: full color; cyan, magenta, yellow, and black)


The HSI color model
The RGB and CMY color models are not suited for describing colors in
terms of human interpretation. When we view a color object, we describe
it by its hue, saturation, and brightness (intensity). Hence the HSI color
model has been presented. The HSI model decouples the intensity
component from the color-carrying information (hue and saturation) in a
color image. As a result, this model is an ideal tool for developing color
image processing algorithms.
The hue, saturation, and intensity values can be obtained from the RGB
color cube. That is, we can convert any RGB point to a corresponding
point in the HSI color model by working out the geometrical formulas.
Converting colors from RGB to HSI
The hue H is given by

	H = θ	          if B ≤ G
	H = 360° − θ	  if B > G

where

	θ = cos⁻¹ { (1/2)[(R − G) + (R − B)] / [(R − G)² + (R − B)(G − B)]^(1/2) }

The saturation S is given by

	S = 1 − [3 / (R + G + B)] min(R, G, B)

The intensity I is given by

	I = (1/3)(R + G + B)

All RGB values are normalized to the range [0, 1].
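These formulas translate directly into code. A sketch in Python for scalar R, G, B in [0, 1] (the guard for the achromatic case R = G = B, where θ is undefined, is an added assumption):

```python
import math

def rgb_to_hsi(r, g, b):
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0.0:                      # R = G = B: hue undefined, use 0
        theta = 0.0
    else:
        # Clamp guards against tiny floating-point overshoot outside [-1, 1].
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    h = theta if b <= g else 360.0 - theta
    total = r + g + b
    s = 0.0 if total == 0.0 else 1.0 - (3.0 / total) * min(r, g, b)
    i = total / 3.0
    return h, s, i
```

For pure red, green, and blue this yields hues of 0°, 120°, and 240° respectively, each fully saturated.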


Converting colors from HSI to RGB
The applicable equations depend on the value of H:

If 0° ≤ H < 120°:
	B = I(1 − S)
	R = I[1 + S cos H / cos(60° − H)]
	G = 3I − (R + B)

If 120° ≤ H < 240°:
	H = H − 120°
	R = I(1 − S)
	G = I[1 + S cos H / cos(60° − H)]
	B = 3I − (R + G)

If 240° ≤ H ≤ 360°:
	H = H − 240°
	G = I(1 − S)
	B = I[1 + S cos H / cos(60° − H)]
	R = 3I − (G + B)
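A sketch of these sector-wise equations in Python, with H in degrees and S, I in [0, 1]:

```python
import math

def hsi_to_rgb(h, s, i):
    def f(hh):
        # Common sub-expression: I * [1 + S*cos(H) / cos(60deg - H)]
        return i * (1.0 + s * math.cos(math.radians(hh))
                    / math.cos(math.radians(60.0 - hh)))

    h = h % 360.0
    if h < 120.0:                    # RG sector
        b = i * (1.0 - s)
        r = f(h)
        g = 3.0 * i - (r + b)
    elif h < 240.0:                  # GB sector
        h -= 120.0
        r = i * (1.0 - s)
        g = f(h)
        b = 3.0 * i - (r + g)
    else:                            # BR sector
        h -= 240.0
        g = i * (1.0 - s)
        b = f(h)
        r = 3.0 * i - (g + b)
    return r, g, b
```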

The next figure shows the HSI component images of an RGB image.
Figure 15.7 A full-color image and its HSI component images (panels: full color; hue, saturation, and intensity)

Basics of Full-Color Image Processing


Full-color image processing approaches fall into two major categories:
 Approaches that process each component image individually and
then form a composite processed color image from the individually
processed components.
 Approaches that work with color pixels directly.
In full-color images, color pixels really are vectors. For example, in the
RGB system, each color pixel can be expressed as
          [𝑐𝑅(𝑥, 𝑦)]   [𝑅(𝑥, 𝑦)]
𝑐(𝑥, 𝑦) = [𝑐𝐺(𝑥, 𝑦)] = [𝐺(𝑥, 𝑦)]
          [𝑐𝐵(𝑥, 𝑦)]   [𝐵(𝑥, 𝑦)]
For an image of size M×N, there are MN such vectors, c(x, y), for x = 0,1,
2,...,M-1; y = 0,1,2,...,N-1.
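In NumPy terms (array and size names here are illustrative), an RGB image is simply an M×N array of 3-vectors:

```python
import numpy as np

M, N = 4, 3
img = np.zeros((M, N, 3), dtype=float)   # an M x N image of RGB vectors
img[1, 2] = [0.9, 0.2, 0.1]              # assign the color pixel at (1, 2)

c = img[1, 2]                            # c(x, y): the vector [R, G, B] there
num_vectors = M * N                      # there are MN such vectors in total
```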
Color Transformation
As with the gray-level transformation, we model color transformations using
the expression
𝑔(𝑥, 𝑦) = 𝑇[ƒ(𝑥, 𝑦)]
where f(x, y) is a color input image, g(x, y) is the transformed color output image,
and T is the color transform.
This color transform can also be written
𝑠i = 𝑇i(𝑟1, 𝑟2, … , 𝑟𝑛) i = 1,2, … , 𝑛
For example, suppose we wish to modify the intensity of the image shown in Figure
15.8(a) using
𝑔(𝑥, 𝑦) = 0.7ƒ(𝑥, 𝑦)
 In the RGB color space, all three components must be transformed:
𝑠i = 0.7𝑟i i = 1,2,3
 In CMY space, all three component images must likewise be transformed:
𝑠i = 0.7𝑟i + 0.3 i = 1,2,3
 In HSI space, only the intensity component 𝑟3 is transformed:
𝑠3 = 0.7𝑟3
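Because C = 1 − R (and likewise for M and Y), the RGB and CMY versions of this intensity change agree, as the following sketch confirms:

```python
def dim_rgb(rgb, k=0.7):
    # RGB space: s_i = k * r_i
    return tuple(k * v for v in rgb)

def dim_cmy(cmy, k=0.7):
    # CMY space: s_i = k * r_i + (1 - k)
    return tuple(k * v + (1.0 - k) for v in cmy)

rgb = (0.2, 0.5, 0.9)
cmy = tuple(1.0 - v for v in rgb)               # C = 1 - R, etc.
via_rgb = dim_rgb(rgb)
via_cmy = tuple(1.0 - v for v in dim_cmy(cmy))  # convert the CMY result back
```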

Figure 15.8 (a) Original image. (b) Result of decreasing its intensity.
