OBJECTIVES:
• To become familiar with digital image fundamentals
• To get exposed to simple image enhancement techniques in the spatial and frequency domains
• To learn the concepts of degradation functions and restoration techniques
• To study image segmentation and representation techniques
• To become familiar with image compression and recognition methods
Digital image processing focuses on two major tasks:
• Improvement of pictorial information for human interpretation
• Processing of image data for storage, transmission and machine perception
Band: Applications
Gamma rays: Nuclear medicine & astronomy
X-rays: Medical diagnostics, industry
Microwave: RADAR
Radio waves: Medicine (MRI) & astronomy
Introduction to Digital Image Processing- Dr.S.Deepa, Mrs.B.Sathyabhama,Mrs.V.Subashree PEC
1.Gamma Rays: PET & Astronomy
[Figure: fundamental steps in digital image processing. From the problem domain: Image Acquisition, Enhancement, Restoration, Colour Image Processing, Image Compression, Morphological Processing, Segmentation, Representation & Description, Object Recognition]
3. Image Restoration
Improving the appearance of an image using objective criteria based on mathematical or probabilistic models of image degradation.
4. Morphological Processing
Tools for extracting image components that are useful in representing and describing shape.
5. Image Segmentation
Partitioning an image into its constituent parts or objects.
6. Object Recognition
Assigning a label to an object based on its descriptors.
7. Representation & Description
Converting the output of segmentation into a form suitable for computer processing, and extracting attributes (features) that describe the objects.
8. Image Compression
Techniques for reducing the storage required to save an image, or the bandwidth required to transmit it.
9. Colour Image Processing
Using colour as a basis for extracting features of interest from an image.
PART 2
Dr.S.Deepa
Mrs.V.Subashree
Mrs.B.Sathyabhama
Panimalar Engineering College
1.4 Components of an Image Processing System
1. Image Sensors
Image sensors are the physical devices used to acquire digital images. Two elements are required to acquire digital images:
THE SENSOR
The physical device that is sensitive to the energy radiated by the object we wish to image.
THE DIGITISER
The device that converts the output of the sensor into digital form.
2. Specialized Image Processing Hardware
Usually consists of the digitiser plus hardware that performs primitive operations, such as an arithmetic logic unit (ALU), at high speed.
3. Computer
A general-purpose computer, ranging from a PC to a supercomputer.
4. Image Processing Software
Consists of specialised modules that perform specific image processing tasks.
5. Mass Storage
Types:
1. Short-term storage
2. Online storage
3. Archival storage
Short-term storage is used to access images at the time of processing. Example: computer memory, frame buffers. Frame buffers are specialised boards that can store one or more images and be accessed rapidly at video rates (30 images/sec); frame buffers allow image zoom, image scroll and image pan.
Online storage: used to store images that require frequent and fast recall.
Archival storage: used for massive storage requirements that are accessed rarely. Example: magnetic tapes, optical disks housed in 'jukeboxes'.
6. Image Displays
Example: Colour monitors, stereo displays
7. Hard Copy
Example:
Printers
Film cameras
Heat-sensitive devices
Inkjet devices
Digital units, such as optical and CD-ROM disks
8. Networking
Used to share the large amounts of data involved in image processing applications.
1. Cornea and Sclera
The cornea is a tough, transparent tissue that covers the anterior surface of the eye. Continuous with the cornea, the sclera is an opaque membrane that encloses the remainder of the optic globe.
▪ Except for the blind spot, the distribution of receptors is radially symmetric about the fovea.
▪ Receptor density is measured in degrees from the visual axis.
▪ Cones are most dense in the center area of the fovea, which is the reason for photopic vision.
▪ Rods increase in density from the center out to approximately 20° off axis and then decrease in density out to the extreme periphery of the retina.
Rods and Cones
Rods
Rods are responsible for scotopic, or dim-light, vision. About 75 to 150 million rods are distributed over the retinal surface. Several rods are connected to a single nerve end, which reduces the amount of detail discernible by these receptors. Rods are not involved in color vision and are sensitive to low levels of illumination. This is why brightly colored objects in daylight appear colorless when seen by moonlight.
Rods and Cones
Cones
Cone vision is called photopic or bright-light vision.
Blind Spot
The photoreceptors are absent at the spot from which the optic nerve emerges; this is known as the blind spot.
Image Formation
Example: if a person looks at a tree 15 m high at a distance of 100 m, the height h of the retinal image (with the lens-to-retina distance taken as 17 mm) is given by 15/100 = h/17, so h = 2.55 mm.
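The calculation can be checked numerically (a minimal sketch; the variable names are ours):

```python
# Retinal image formation by similar triangles:
# object_height / object_distance = image_height / lens-to-retina distance
object_height_m = 15.0     # height of the observed tree
object_distance_m = 100.0  # distance from the eye to the tree
retina_distance_mm = 17.0  # distance from lens centre to retina

image_height_mm = (object_height_m / object_distance_m) * retina_distance_mm
print(round(image_height_mm, 2))  # 2.55
```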
Perceived Brightness
Perceived brightness is not a simple function of intensity. Although the intensity of the stripes shown in the figure is constant, the HVS perceives a brightness pattern that is strongly scalloped, especially near the boundaries. These seemingly scalloped bands are called Mach bands, after Ernst Mach (1865).
Simultaneous Contrast
Optical Illusion
The eye fills in non-existent information or wrongly perceives geometrical properties of objects.
Brightness Adaptation and Discrimination
Types of Sensors
▪ Single Sensor
▪ Line Sensor
▪ Array Sensor
Single Sensor
▪The sensing material transforms the
illumination energy into an electrical
voltage
▪Filters are used to increase the
selectivity of the sensor element.
▪Example: Photodiodes are single
element sensors that convert light
energy into electrical energy.
Line Sensor
Application: used in airborne imaging, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged.
Image Acquisition using Circular Sensor Strip
▪To obtain 3D images
▪Sensor strips mounted in a ring configuration
▪A rotating X-ray source provides illumination
▪The portion of the sensors opposite the source collects the X-ray energy that passes through the object
▪The output of the sensors is processed by
reconstruction algorithms to transform the sensed
data into meaningful cross-sectional images.
▪3-D digital volume consisting of stacked images
is generated
Applications
•Medical and industrial Computerized Axial Tomography (CAT)
•Magnetic Resonance Imaging (MRI) and
•Positron Emission Tomography (PET) imaging.
Array Sensor
▪Image sensor arrays are used in electromagnetic and ultrasonic sensing devices and in digital cameras.
▪CCD and CMOS sensors are used to form the
sensor array.
▪The response of each sensor is proportional to
the integral of the light energy projected onto the
surface of the sensor.
▪Since the sensor array is two dimensional, the
complete image can be obtained by focusing the
energy pattern onto the surface of the array.
Digitised Image using Sensor Array
Digital Camera
1.7 Image Sampling and Quantisation
Conversion of Analog to Digital Images involves two steps
▪Sampling
▪Quantisation
Sampling:
Digitizing the coordinate values is called sampling.
Quantisation
Digitizing the amplitude values is called quantization.
•In order to form a digital function, the gray-level values also must be
converted (quantized) into discrete quantities.
•The right side of Fig. 2.16(c) shows the gray-level scale divided into eight
discrete levels, ranging from black to white.
•The vertical tick marks indicate the specific value assigned to each of the
eight gray levels.
•The continuous gray levels are quantized simply by assigning one of the
eight discrete gray levels to each sample. The assignment is made
depending on the vertical proximity of a sample to a vertical tick mark.
•The digital samples resulting from both sampling and quantization are
shown in Fig. 2.16(d).
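The nearest-level assignment described above can be sketched in a few lines (a minimal illustration with made-up sample values; NumPy is assumed to be available):

```python
import numpy as np

# Continuous intensities in [0, 1] sampled along a scan line
samples = np.array([0.03, 0.22, 0.41, 0.55, 0.68, 0.80, 0.91, 0.99])

levels = np.linspace(0.0, 1.0, 8)  # 8 discrete gray levels, black to white

# Assign each sample to the vertically nearest tick mark (gray level)
indices = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
quantized = levels[indices]
print(indices)  # [0 2 3 4 5 6 6 7]
```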
Using Sensor Array
When a sensing array is used for image acquisition, there is no motion and the
number of sensors in the array establishes the limits of sampling in both
directions.
Figure (a) shows a continuous image projected onto the plane of an array
sensor.
Figure 2.17(b) shows the image after sampling and quantization.
Clearly, the quality of a digital image is determined to a large degree by the
number of samples and discrete gray levels used in sampling and quantization.
Resolution
[Figure: spatial resolution comparison: 128 x 128, 64 x 64, 32 x 32]
Intensity Resolution
▪It refers to the smallest discernible change in gray level and
it depends on the number of gray levels.
▪Quantization is the principal factor determining the Intensity
resolution of an image.
▪The most commonly used intensity resolution is 2^8, that is, 256 gray levels.
False Contouring
[Figure: false contouring as the number of gray levels is reduced: 256 levels, 16 levels, 4 levels]
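The effect can be reproduced by requantizing an image to fewer gray levels (a minimal sketch assuming an 8-bit input; NumPy assumed available, and the function name is ours):

```python
import numpy as np

def reduce_gray_levels(img, levels):
    """Requantize an 8-bit image to the given number of gray levels."""
    step = 256 // levels           # width of each quantization bin
    return (img // step) * step    # collapse each bin to its lowest value

gradient = np.arange(256, dtype=np.uint8)  # smooth ramp, 256 levels
coarse = reduce_gray_levels(gradient, 4)   # only 4 levels -> visible banding
print(np.unique(coarse))                   # four surviving gray levels
```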
Pixels are basic elements which form the image and their
proper manipulation gives rise to different appearances.
▪Neighbors of a pixels
▪Adjacency
▪Connectivity
▪Path or Curve
▪Regions and Boundaries
▪Edge
▪Distance Measures
Neighbors of Pixels – Conventional Indexing Method
Conventional Indexing
Neighborhood relation is used for analyzing regions in an Image
Three types
▪4 –Neighbors
▪D-Neighbors
▪8-Neighbors
4 Neighbors of a Pixel
▪ Any pixel p(x, y) has two vertical and two horizontal neighbors, given by (x+1, y), (x-1, y), (x, y+1), (x, y-1).
▪ This set of pixels is called the 4-neighbors of p, denoted N4(p).
▪ Each of them is at a unit distance from p(x, y).
4-neighbors of p:
N4(p) = {(x-1, y), (x+1, y), (x, y-1), (x, y+1)}

Diagonal (D-) neighbors of p:
ND(p) = {(x-1, y-1), (x+1, y-1), (x-1, y+1), (x+1, y+1)}

8-neighbors of p:
N8(p) = N4(p) ∪ ND(p) = {(x-1, y-1), (x, y-1), (x+1, y-1), (x-1, y), (x+1, y), (x-1, y+1), (x, y+1), (x+1, y+1)}
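The three neighborhoods can be expressed as small helper functions (a sketch; the names n4, nd and n8 are ours):

```python
def n4(x, y):
    """4-neighbors: the two horizontal and two vertical neighbors of (x, y)."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """Diagonal (D-) neighbors of (x, y)."""
    return {(x - 1, y - 1), (x + 1, y - 1), (x - 1, y + 1), (x + 1, y + 1)}

def n8(x, y):
    """8-neighbors: union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y) | nd(x, y)

print(sorted(n8(1, 1)))  # the eight pixels surrounding (1, 1)
```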
Connectivity
Distance Measures
For pixels p, q and z with coordinates (x, y), (s, t) and (v, w) respectively, D is a distance function (metric) if:
▪ D(p, q) ≥ 0, with D(p, q) = 0 iff p = q
▪ D(p, q) = D(q, p)
▪ D(p, z) ≤ D(p, q) + D(q, z)
Types
▪ Euclidean distance
▪ City-block distance
▪ Chessboard distance
Euclidean distance:
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
City-block distance:
D4(p, q) = |x - s| + |y - t|
The pixels with D4 ≤ 2 from the centre pixel form a diamond:
        2
    2   1   2
2   1   0   1   2
    2   1   2
        2
Chessboard distance:
D8(p, q) = max(|x - s|, |y - t|)
The pixels with D8 ≤ 2 from the centre pixel form a square:
2   2   2   2   2
2   1   1   1   2
2   1   0   1   2
2   1   1   1   2
2   2   2   2   2
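The three distance measures can be checked with small helper functions (a sketch; the function names are ours):

```python
import math

def d_euclidean(p, q):
    """Straight-line (Euclidean) distance between pixels p and q."""
    (x, y), (s, t) = p, q
    return math.sqrt((x - s) ** 2 + (y - t) ** 2)

def d_cityblock(p, q):
    """D4 distance: sum of absolute coordinate differences."""
    (x, y), (s, t) = p, q
    return abs(x - s) + abs(y - t)

def d_chessboard(p, q):
    """D8 distance: maximum absolute coordinate difference."""
    (x, y), (s, t) = p, q
    return max(abs(x - s), abs(y - t))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q), d_cityblock(p, q), d_chessboard(p, q))  # 5.0 7 4
```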
[Figure: light from an illumination source reflects off a scene element and reaches the eye]
Radiance: the total amount of energy that flows from the light source, measured in watts (W).
Luminance: the amount of energy an observer perceives from a light source, measured in lumens (lm).
Brightness: a subjective descriptor that is hard to measure, similar to the achromatic notion of intensity.
The 6~7 million cones in the eye are the sensors for colour vision.
3 principal sensing categories in eyes:
• Red light 65%, green light 33%, and blue light 2%
In 1931, the CIE (International Commission on Illumination) defined specific wavelength values for the primary colors:
• B = 435.8 nm, G = 546.1 nm, R = 700 nm
• However, we know that no single wavelength may be called red, green, or blue
Secondary colors: G + B = Cyan, R + G = Yellow, R + B = Magenta
Primary and secondary colors
Color TV
Chromaticity diagram
Color model, color space, color system
• Specify colors in a standard way
• A coordinate system in which each color is represented by a single point
Types:
▪RGB model - Used in hardware systems like monitors, Television
▪CMY model – Color Printing
▪CMYK model - Color printing
▪HSI model – Human perception & Image Processing Algorithm
Pixel depth: the number of bits used to represent each pixel in RGB space.
Full-color image: 24-bit RGB color image
• (R, G, B) = (8 bits, 8 bits, 8 bits)
How many colors can be represented in total? 2^24 = 16,777,216 colors.
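The count follows directly from the pixel depth:

```python
bits_per_channel = 8
channels = 3                                   # R, G, B
total_colors = 2 ** (bits_per_channel * channels)
print(f"{total_colors:,}")                     # 16,777,216
```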
A subset of colors is enough for some applications.
Safe RGB colors (safe Web colors, safe browser colors) → colors that can be faithfully reproduced by any system.
6^3 = 216 colors are called safe colors, the de facto standard for Internet applications.
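The safe palette can be enumerated directly, assuming the standard six component values 0, 51, 102, 153, 204, 255 (multiples of 51):

```python
# The six component values used by the safe (Web) palette
safe_values = [51 * i for i in range(6)]   # [0, 51, 102, 153, 204, 255]

# Every combination of the six values per channel gives the safe palette
safe_colors = [(r, g, b) for r in safe_values
                         for g in safe_values
                         for b in safe_values]
print(len(safe_colors))  # 216
```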
[Figure: full-color cube vs. safe-color cube]
CMY: the secondary colors of light, or the primary colors of pigments. Used to generate hardcopy output.
[C]   [1]   [R]
[M] = [1] - [G]
[Y]   [1]   [B]
(R, G, B normalized to [0, 1].)
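The conversion above is a per-component subtraction (a minimal sketch; rgb_to_cmy is our name, and RGB values are assumed normalized to [0, 1]):

```python
def rgb_to_cmy(r, g, b):
    """Convert normalized RGB (values in [0, 1]) to CMY."""
    return 1.0 - r, 1.0 - g, 1.0 - b

print(rgb_to_cmy(1.0, 0.0, 0.0))  # pure red -> (0.0, 1.0, 1.0): no cyan ink
```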
Would you describe a color using its R, G, B components? Humans describe a color by its hue, saturation, and brightness:
• Hue: color attribute
• Saturation: purity of the color (white → 0, primary color → 1)
• Brightness: achromatic notion of intensity
RGB → HSI model
[Figure: colors on a triangle perpendicular to the intensity line have the same hue]
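A sketch of the standard RGB-to-HSI conversion formulas (the function name is ours; inputs are assumed normalized to [0, 1], and hue is returned in degrees):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (values in [0, 1]) to (H in degrees, S, I)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:
        h = 0.0  # achromatic: hue is undefined, set to 0 by convention
    else:
        theta = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
        h = theta if b <= g else 360.0 - theta
    return h, s, i

print(rgb_to_hsi(1.0, 0.0, 0.0))  # pure red: hue 0, full saturation
```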