
Advanced Digital Image Processing

Books: Text/References
- Digital Image Processing by Rafael C. Gonzalez & Richard E. Woods
- Digital Image Processing Using MATLAB by Rafael C. Gonzalez & Richard E. Woods
- Fundamentals of Digital Image Processing by Anil K. Jain
D.K.Vishwakarma
Digital Image Processing

- What is an image?
- What is a digital image?
- What is digital image processing?
- What are its applications?
What is an Image?
An image is a two-dimensional function, f(x,y), where x and y are spatial coordinates, and the amplitude of f at any pair of coordinates (x,y) is called the intensity or grey level of the image at that point.
What is a Digital Image?
A digital image is an image in which the spatial coordinates x and y and the amplitude values of f are all finite, discrete quantities.
What is Digital Image Processing?
Digital image processing means the processing of digital images by means of a digital computer.

Fundamental digital image processing steps:
- Image Acquisition
- Image Enhancement
- Image Restoration
- Image Analysis
- Image Reconstruction
- Image Compression
Components of Digital Image Processing System

[Block diagram: the problem domain feeds image sensors and specialized image processing hardware, which feed a computer running image processing software, connected to mass storage, image displays, and hardcopy devices.]
..components of DIP
- Image sensors are the physical devices used to sense the object.
- Specialized image processing hardware usually consists of the digitizer and hardware that performs arithmetic and logical operations on the entire image. This unit performs the functions that require fast data throughput (e.g. digitizing and averaging video images at 30 frames/s).
- Computer: a general-purpose computer, ranging from a PC to a supercomputer, suitable for offline processing tasks.
- Software may be a well-designed package that also includes the capability for the user to write code.
- Mass storage is used to store images for three purposes: 1) short-term storage during processing, 2) on-line storage for fast recall, 3) archival storage for infrequent access.
What are DIP Applications?
The present era is a digital era, so digital processing of images is widely used in many applications, some of which are listed here:
- Computer Vision
- Face Detection
- Remote Sensing
- Medical image processing (Gamma-Ray Imaging, X-Ray Imaging)
- Analysis of space photographs
- Film Industry
- Intelligent Transport Systems
Contents Covered
- Introduction and Digital Image Fundamentals
- Elements of Visual Perception
- Image Sensing and Acquisition
- Image Sampling and Quantization
- Relationships between Pixels
- Intensity Transformations and Spatial Filtering
- Image Transforms
- Image Enhancement
- Image Restoration
- Image Analysis using Multi-resolution Techniques
- Image Segmentation
- Morphology
- Wavelet Transform
- Image Compression Techniques
Image Acquisition
Acquisition sources and devices:
- Human Eye
- Ordinary Camera
- X-Ray Machine
- Positron Emission Tomography
- Infrared Imaging
- Geophysical Imaging

Structure of the Eye
- Spherical in shape, with an average diameter of approximately 20 mm.
- Three membranes enclose the eye: the cornea and sclera as the outer cover, the choroid, and the retina.
- The iris (pupil) varies in diameter from approximately 2 to 8 mm.
- The lens.
Human Visual System
- Image Formation: cornea, sclera, pupil, iris, lens, retina, fovea
- Transduction: retina, rods, and cones
- Processing: optic nerve, brain
Retina and Fovea
- The retina has photosensitive receptors at the back of the eye.
- The fovea is a small, dense region of receptors:
  - only cones (no rods)
  - gives visual acuity
- Outside the fovea:
  - fewer receptors overall
  - a larger proportion of rods
Transduction (Retina)
- Transforms light into neural impulses.
- Receptors (rods and cones) signal bipolar cells.
- Bipolar cells signal ganglion cells.
- Axons of the ganglion cells form the optic nerve.
Rods vs. Cones

Cones:
- Number between 6 and 7 million per eye.
- Contain photo-pigment.
- Respond to high energy.
- Enhance perception.
- Concentrated in the fovea; exist sparsely elsewhere in the retina.
- Three types, sensitive to different wavelengths.

Rods:
- Number between 75 and 150 million per eye.
- Contain photo-pigment.
- Respond to low energy.
- Enhance sensitivity.
- Concentrated in the retina, outside the fovea.
- One type, sensitive to grayscale changes.
Image Formation & Eye Structure
Distribution of Light Receptor over Retina
Electromagnetic Spectrum
EM Spectrum of Blue, Green, & Red

Band No.  Name           Wavelength (µm)  Characteristics / Uses
1         Visible Blue   0.45-0.52        Maximum water penetration
2         Visible Green  0.52-0.60        Good for measuring plant vigor (strength)
3         Visible Red    0.63-0.69        Vegetation (collectively, plants) discrimination
Image Sensing & Acquisition
Sensors are used to transform illumination energy into digital images. Sensors are of three types:
- Single imaging sensor
- Line sensor
- Array sensor
Single Imaging Sensor
To generate a 2-D image using a single sensor, there have to be relative displacements in both the x- and y-directions between the sensor and the area to be imaged.

A film negative is mounted onto a drum whose mechanical rotation provides displacement in one dimension. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction. Since mechanical motion can be controlled with high precision, this method is an inexpensive (but slow) way to obtain high-resolution images.
Linear Sensor
A geometry that is used much more frequently than single sensors consists of an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction; motion perpendicular to the strip provides imaging in the other direction.

In-line sensors are used routinely in airborne imaging applications, in which the imaging system is mounted on an aircraft that flies at a constant altitude and speed over the geographical area to be imaged. One-dimensional imaging sensor strips that respond to various bands of the electromagnetic spectrum are mounted perpendicular to the direction of flight. The imaging strip gives one line of an image at a time, and the motion of the strip completes the other dimension of a two-dimensional image. Lenses or other focusing schemes are used to project the area to be scanned onto the sensors.
Array Sensor
Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format. This arrangement is also found in digital cameras. The response of each sensor is proportional to the integral of the light energy projected onto its surface, a property that is used in astronomical and other applications requiring low-noise images.

The illumination source reflects energy from a scene element, and the first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor.
Image Formation Model
As we know, an image can be represented as a 2-dimensional function of the form f(x,y). The value of f at spatial coordinates (x,y) is a positive scalar quantity whose physical meaning is determined by the source of the image. When an image is generated from a physical process, its intensity values are proportional to the energy radiated by a physical source (e.g. EM waves). As a consequence, f(x,y) must be nonzero and finite:

0 < f(x,y) < ∞          (1)

The function f(x,y) may be characterized by two components: a) the amount of source illumination incident on the scene being viewed, and b) the amount of illumination reflected by the objects in the scene. These are called the illumination and reflectance components, denoted i(x,y) and r(x,y). The two functions combine as a product to form f(x,y):

f(x,y) = i(x,y) · r(x,y)          (2)

where

0 < i(x,y) < ∞          (3)

and

0 < r(x,y) < 1          (4)
e.g. for representation of image
Let us take an example to understand the mathematical value of intensity. The illumination i(x,y) takes some typical values under different conditions:
1) On a clear day, the sun produces 90,000 lm/m² on the surface of the earth.
2) Under cloudy conditions, 10,000 lm/m².
3) On a clear evening, 0.1 lm/m².
4) The typical level for an office is 1,000 lm/m².
Similarly, typical values of r(x,y) are:
1) 0.01 for black velvet
2) 0.65 for stainless steel
3) 0.80 for flat-white wall paint
4) 0.90 for silver plate
Let the intensity of a monochrome image at any coordinate (x1,y1) be denoted by

l = f(x1,y1)

From equations 2, 3, and 4, it is evident that l lies in the range

Lmin ≤ l ≤ Lmax

where Lmin = imin·rmin and Lmax = imax·rmax.
We may expect typical values Lmin ≈ 10 and Lmax ≈ 1000; the interval [Lmin, Lmax] is called the gray scale. A common convention is to shift this interval to [0, L-1], where l = 0 represents black and l = L-1 represents white on the gray scale.
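As a quick check of the product model, the illumination and reflectance values listed above can be combined in a short Python sketch (the dictionaries and names are illustrative, not from the slides):

```python
# Sketch of equation (2), f(x,y) = i(x,y) * r(x,y), using the typical
# illumination and reflectance values listed above.

illumination = {            # i(x,y), in lm/m^2
    "clear day": 90_000,
    "cloudy": 10_000,
    "clear evening": 0.1,
    "office": 1_000,
}
reflectance = {             # r(x,y), dimensionless, 0 < r < 1
    "black velvet": 0.01,
    "stainless steel": 0.65,
    "flat-white paint": 0.80,
    "silver plate": 0.90,
}

# Every combination satisfies equation (1): 0 < f < infinity.
for i in illumination.values():
    for r in reflectance.values():
        f = i * r                        # equation (2)
        assert 0 < f < float("inf")      # equation (1)

# e.g. office lighting on flat-white wall paint:
print(illumination["office"] * reflectance["flat-white paint"])  # 800.0
```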
Image Sampling & Quantization
- The output of most sensors is a continuous voltage waveform whose amplitude and spatial behavior are related to the physical phenomenon being sensed.
- To create a digital image, we need to convert the continuous sensed data into digital form.
- This involves two processes: a) sampling and b) quantization.
Sampling & Quantization
- To convert a continuous function to digital form, we have to sample it in both coordinates and in amplitude.
- Digitizing the coordinate values is called sampling.
- Digitizing the amplitude values is called quantization.
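The two digitization steps can be sketched in code; a minimal Python illustration for one scan line (the function name and the test signal are assumptions, not from the slides):

```python
import math

def sample_and_quantize(f, x_max, n_samples, k):
    """Sample a continuous 1-D signal f on [0, x_max] at n_samples points
    (sampling), then map each amplitude in [0, 1] to an integer gray level
    in [0, 2**k - 1] (quantization). Assumes n_samples > 1."""
    L = 2 ** k
    # Sampling: evaluate f at evenly spaced coordinate values.
    samples = [f(x_max * n / (n_samples - 1)) for n in range(n_samples)]
    # Quantization: map each amplitude to one of L discrete levels.
    return [min(int(s * L), L - 1) for s in samples]

# Example: a smooth intensity profile along one scan line, 8 samples, 3 bits.
line = sample_and_quantize(lambda x: 0.5 + 0.5 * math.sin(x), 2 * math.pi, 8, 3)
print(line)  # eight integer gray levels, each in [0, 7]
```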
Sampling & Quantization
[Figure: a) Continuous image. b) A scan line from A to B in the continuous image. c) Sampling & quantization. d) Digital scan line.]
Sampling & Quantization
[Figures: the image before sampling and quantization, and the result of sampling and quantization.]
Representation of Digital Image
- Let f(s,t) represent a continuous image function of two variables, s and t. We convert this function into a digital image by sampling and quantization.
- Suppose the continuous image is sampled into a 2-D array, f(x,y), containing M rows and N columns, where x and y are discrete coordinates.
- For convenience we use integer values for these discrete coordinates: x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1.
- Thus the value of the digital image at the origin is f(0,0), and the next coordinate value along the first row is f(0,1).
- The coordinates of an image form the spatial domain, with x and y referred to as spatial variables or spatial coordinates.

A digital image can be represented as an M × N array:

           | f(0,0)      f(0,1)      ....  f(0,N-1)   |
           | f(1,0)      f(1,1)      ....  f(1,N-1)   |
f(x,y) =   | .           .           .     .          |          (5)
           | f(M-1,0)    f(M-1,1)    ....  f(M-1,N-1) |

In the above matrix of real numbers, each matrix element is called an image element, picture element, pixel, or pel.
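Equation (5) maps directly onto an array in code; a minimal sketch using NumPy (a library choice of this note, not of the slides):

```python
import numpy as np

# An M x N digital image is just a 2-D array of gray values;
# here the values are filled in row by row purely for illustration.
M, N = 4, 5
f = np.arange(M * N, dtype=np.uint8).reshape(M, N)

print(f.shape)          # (4, 5): M rows, N columns
print(f[0, 0])          # value at the origin, f(0,0)
print(f[0, 1])          # next value along the first row, f(0,1)
print(f[M - 1, N - 1])  # last pixel, f(M-1, N-1)
```

Note that the row index plays the role of x and the column index the role of y, matching the matrix convention of equation (5).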
Representation of Digital Image
- A more traditional matrix notation to represent a digital image and its elements is:

       | a(0,0)     a(0,1)     ....  a(0,N-1)   |
       | a(1,0)     a(1,1)     ....  a(1,N-1)   |
A =    | .          .          .     .          |          (6)
       | a(M-1,0)   a(M-1,1)   ....  a(M-1,N-1) |

- Clearly a(i,j) = f(x=i, y=j) = f(i,j), so equations 5 and 6 are identical matrices.
- We can also represent an image as a vector, v. For example, a column vector of size MN × 1 is formed by letting the first M elements of v be the first column of A, the next M elements be the second column, and so on.
- The digitization process requires decisions about the values of M, N, and the number, L, of discrete gray levels allowed for each pixel, where M and N are positive integers. Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically an integer power of 2:

L = 2^k          (7)

where k is the number of bits required to represent a gray value, and L is the number of intensity levels.
- The discrete levels should be equally spaced and should be integers in the interval [0, L-1].
- The number, b, of bits required to store a digitized image is

b = M × N × k          (8)

- When M = N, equation 8 becomes

b = N² × k          (9)

[Table: number of storage bits for various values of N and k.]
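Equations (7) and (8) can be checked with a short sketch (the function names are illustrative):

```python
def gray_levels(k):
    """Equation (7): L = 2**k gray levels for k bits per pixel."""
    return 2 ** k

def storage_bits(M, N, k):
    """Equation (8): b = M * N * k bits; reduces to N*N*k when M == N (eq. 9)."""
    return M * N * k

# e.g. a 1024 x 1024 image at 8 bits per pixel:
print(gray_levels(8))               # 256 gray levels
print(storage_bits(1024, 1024, 8))  # 8388608 bits = 1,048,576 bytes
```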
Spatial & Intensity Resolution
- Spatial resolution is the number of pixels (dots) per unit distance.
- EXAMPLE: consider a chart with alternating black and white vertical lines, each of width W units. The width of a line pair is 2W, so there are 1/(2W) line pairs per unit distance. Let W = 0.1 mm; then there are 5 line pairs per mm.
- Dots per unit distance is a measure of image resolution. The publishing industry uses dpi (dots per inch), a US standard.
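The 1/(2W) relation can be checked directly (an illustrative sketch, not from the slides):

```python
def line_pairs_per_unit(W):
    """A chart of alternating black and white lines, each of width W,
    has line pairs of width 2W, hence 1/(2W) line pairs per unit distance."""
    return 1.0 / (2.0 * W)

print(line_pairs_per_unit(0.1))  # W = 0.1 mm -> 5.0 line pairs per mm
print(line_pairs_per_unit(0.5))  # W = 0.5 mm -> 1.0 line pair per mm
```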
Effect of Reducing Spatial Resolution
Intensity Resolution
- The smallest discernible (clearly perceivable) change in intensity level.
- The number of intensity levels is usually an integer power of 2; the most common are 8 bits, 16 bits, etc.
- The effect can be observed on the next slide.
Image Interpolation
- Interpolation is a tool used for image zooming, shrinking, rotation, and geometric corrections.
- EXAMPLE: an image of 500×500 pixels has to be enlarged 2 times, to 1000×1000 pixels. The simplest way is to create an imaginary 1000×1000 grid with the same pixel spacing as the original image, and then shrink it so that it fits exactly over the original image. Obviously, the pixel spacing in the shrunken 1000×1000 grid will be less than the pixel spacing in the original image.
Types of Image Interpolation
- NEAREST NEIGHBOR INTERPOLATION: assigns to each location the intensity of its nearest neighbor in the original image. This approach is simple, but it creates undesirable artifacts such as distortion of straight edges.
- BILINEAR INTERPOLATION: uses the 4 nearest neighbors to estimate the intensity at a given location. Let (x,y) denote the coordinates of the location to which we want to assign an intensity value, and let v(x,y) be that intensity value. Then it can be determined from

v(x,y) = ax + by + cxy + d

where the four coefficients a, b, c, d are obtained from the four equations written using the 4 nearest neighbors of (x,y).
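Both schemes can be sketched in pure Python for a tiny grayscale image (the helper names are illustrative; the corner-weighted bilinear form used below is equivalent to fitting v(x,y) = ax + by + cxy + d over the 4 nearest neighbors):

```python
def nearest_neighbor_zoom(img, new_h, new_w):
    """Assign each output pixel the value of its nearest input pixel.
    Assumes new_h, new_w > 1."""
    h, w = len(img), len(img[0])
    return [[img[round(y * (h - 1) / (new_h - 1))][round(x * (w - 1) / (new_w - 1))]
             for x in range(new_w)] for y in range(new_h)]

def bilinear_zoom(img, new_h, new_w):
    """Estimate each output pixel from its 4 nearest input neighbors."""
    h, w = len(img), len(img[0])
    out = []
    for yy in range(new_h):
        y = yy * (h - 1) / (new_h - 1)        # map back into input coordinates
        y0 = min(int(y), h - 2)
        dy = y - y0
        row = []
        for xx in range(new_w):
            x = xx * (w - 1) / (new_w - 1)
            x0 = min(int(x), w - 2)
            dx = x - x0
            # Weighted average of the 4 surrounding input pixels.
            v = (img[y0][x0] * (1 - dx) * (1 - dy)
                 + img[y0][x0 + 1] * dx * (1 - dy)
                 + img[y0 + 1][x0] * (1 - dx) * dy
                 + img[y0 + 1][x0 + 1] * dx * dy)
            row.append(v)
        out.append(row)
    return out

tiny = [[0, 100], [100, 200]]
print(bilinear_zoom(tiny, 3, 3)[1][1])  # center of the zoomed image: 100.0
```

Nearest neighbor simply copies values (fast, but blocky edges), while the bilinear weights vary smoothly with dx and dy, which is why it avoids the staircase artifacts mentioned above.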
Image Details (Information)