
Evaluation Component        Duration   Weightage   Date, Time & Venue   Nature of Component
Mid Sem                     90 Mins    30%         -----                Closed/Open Book
Quiz/Assignment             -----      30%         -----                Closed Book
Comprehensive Examination   3 Hours    40%         -----                Closed/Open Book
L-1
The CCD was invented in 1969 by Willard Boyle and
George Smith at AT&T Bell Labs.
Why image processing?

(a) Improvement of pictorial information for human interpretation
    e.g., Google Earth

(b) Processing of image data for storage, transmission, and representation
    for autonomous machine perception.
Applications:

Remote sensing via satellites and other spacecraft: images acquired by satellites are used for mapping; prediction of agricultural crops, urban growth, and weather; and flood and fire control. (e.g., Unmanned Aerial Vehicle (UAV) imagery)

Space image applications: recognition and analysis of objects contained in images obtained from deep space-probe missions. (e.g., Hubble)

Image transmission and storage applications: occur in broadcast television, teleconferencing, and military communication.

Medical applications: X-ray, cine-angiogram, radiology, NMR, ultrasonic scanning.

Radar and sonar images: used for detection and recognition of various types of targets, or in guidance and maneuvering of aircraft or missile systems.
Digital image processing implies digital processing of any two-dimensional (2D) data.

Data: 2D images or other 2D data

Digital image: a 2D array of real or complex numbers represented by a finite number of bits.
How to obtain a digital image?

(Block diagram: Object → Imaging system → Sample & quantize → Digital storage (disk) ↔ Online digital computer ↔ Image buffer → Display / Record)
L-2

A simple image formation method

An IMAGE is defined by a two-dimensional function f(x,y), where f(x,y) is the intensity or gray-level value of the image at the spatial coordinates (x,y).

When x, y, and f are all discrete, the image is called a DIGITAL image.

A DIGITAL image is composed of a finite number of elements (say 256 x 256), each of which has a particular location and value. These elements are called picture elements, image elements, pels, or pixels.
f at any coordinate (x,y) is a positive scalar quantity whose physical meaning is determined by the source of the image.

The pixel value f is proportional to the energy radiated by the physical source.

So, 0 < f(x,y) < ∞
f(x,y) depends on
1. the amount of illumination from the source incident on the object, i(x,y)
2. the amount of illumination reflected from the object, r(x,y)

f(x,y) = i(x,y) · r(x,y) = l

where 0 < i(x,y) < ∞,
      0 < r(x,y) < 1,
and l = gray level of the monochrome image.

Lmin ≤ l ≤ Lmax,  where Lmin = imin · rmin
                        Lmax = imax · rmax

So, take Lmin = 0 and Lmax = L - 1 (say).
The interval [0, L-1] is defined as the GRAYSCALE.

l = 0      → black
l = L - 1  → white

8-bit grayscale: [0, 2^8 - 1] = [0, 255]
An image may be continuous with respect to the x- and y-coordinates and also in its amplitude.

Digitization of the coordinate values is called SAMPLING.

Digitization of the amplitude values is called QUANTIZATION.

Sampling depends on the arrangement of the sensors used to generate the image.
Representing Digital Images:

The result of sampling and quantization is an image in the form of an M x N matrix of real numbers:

Digital image =  | f(0,0)      f(0,1)     ...   f(0,N-1)    |
                 | f(1,0)      f(1,1)     ...   f(1,N-1)    |
                 | ...                                      |
                 | f(M-1,0)    f(M-1,1)   ...   f(M-1,N-1)  |    (M x N)

The values of M and N have to be positive integers, and f takes a finite set of discrete values.

Due to processing, storage, and sampling hardware considerations, the number of gray levels is typically an integer power of 2:

L = 2^k

The discrete gray levels are equally spaced integers in the interval [0, L-1].


The number of bits (b) required to store a digitized image is

b = M x N x k

For an 8-bit image, k = 8:
Gray levels = [0, 255]
L = 2^8 = 256
b = M x N x 8
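For instance (a worked example, not from the slides): a 1024 x 1024 image with k = 8 needs b = 1024 x 1024 x 8 = 8,388,608 bits = 1,048,576 bytes (1 MB).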
Spatial and gray level resolution

Spatial resolution:

Spatial resolution is the smallest distinguishable detail in an image.

It depends on sampling.

(Figure: a pattern of alternating vertical lines A and B, each of width w.)

Line pair: one A line together with one adjacent B line
Width spanned by a line pair = 2w

No. of line pairs per unit length = 1/(2w)

Spatial resolution = no. of distinguishable line pairs per unit length

Hence, spatial resolution = 1/(2w)
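For example (a quick worked check, not from the slides): if each line is w = 0.1 mm wide, a line pair spans 2w = 0.2 mm, so the spatial resolution is 1/(2w) = 5 line pairs per mm.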
An 8-bit M x N gray scale image:

No. of samples = M x N (i.e., the total number of pixels)

Typical effects of varying the number of samples in a digital image
(pixel size = constant, gray levels = 256):

Sub-sampling: the sub-sampled image is scaled back to the original size for comparison.
The spatial resolution goes down due to sub-sampling.
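A minimal sketch of sub-sampling followed by rescaling, assuming the image is held in a NumPy uint8 array img (the array, the factor of 4, and the replication-based upscaling are illustrative choices, not from the slides):

```python
import numpy as np

def subsample_and_rescale(img, factor):
    """Keep every factor-th row and column, then scale back up by pixel
    replication so the loss of spatial resolution is visible."""
    small = img[::factor, ::factor]                    # fewer samples
    big = np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
    return big[:img.shape[0], :img.shape[1]]           # crop to original size

# Example: a 256 x 256 horizontal gradient, sub-sampled by 4
img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
coarse = subsample_and_rescale(img, 4)
```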
Gray level resolution:

Refers to the smallest distinguishable change in gray level.

Gray level resolution is highly subjective, and it depends on the hardware used to capture the image.

(Example: the same image displayed with 2^8, 2^7, 2^6, 2^5, 2^4, 2^3, 2^2, and 2^1 gray levels.)
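A minimal sketch of reducing an 8-bit image to L = 2^k gray levels, again assuming a NumPy uint8 array img (names and the example k = 3 are illustrative):

```python
import numpy as np

def requantize(img, k):
    """Map an 8-bit image onto L = 2**k equally spaced gray levels."""
    L = 2 ** k
    step = 256 // L                       # width of each quantization bin
    return ((img // step) * step).astype(np.uint8)

img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
coarse_gray = requantize(img, 3)          # only 8 distinct gray levels remain
```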
L-3
N and k were varied independently in the previous examples.

How should N and k be varied to obtain an improved image?

Isopreference curves (curves of constant subjective image quality in the N-k plane).
Zooming and Shrinking Digital Images

Zooming requires 2 steps:

1. creation of new pixel locations
2. assignment of gray levels to those new locations

Pixel replication:
Duplicate the rows and columns of the image.

Bilinear interpolation:
The assignment of the pixel values is accomplished by bilinear interpolation.

Let (x', y') be the coordinates of a pixel in the zoomed image (i.e., in the new grid). Then

v(x', y') = a·x' + b·y' + c·x'·y' + d

The four coefficients a, b, c, d are determined from the four nearest-neighbor (NN) pixels of (x', y').
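A minimal sketch of both zooming approaches in NumPy (the 2x factor and names are illustrative; the bilinear version uses the usual four-nearest-neighbor weighting, which is equivalent to fitting v = a·x' + b·y' + c·x'·y' + d on each cell):

```python
import numpy as np

def zoom_replicate(img, factor):
    """Zoom by pixel replication: duplicate rows and columns."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def zoom_bilinear(img, factor):
    """Zoom by bilinear interpolation over the four nearest neighbors."""
    M, N = img.shape
    out = np.empty((M * factor, N * factor), dtype=float)
    for i in range(M * factor):
        for j in range(N * factor):
            x, y = i / factor, j / factor           # location in original grid
            x0, y0 = int(x), int(y)
            x1, y1 = min(x0 + 1, M - 1), min(y0 + 1, N - 1)
            dx, dy = x - x0, y - y0
            out[i, j] = (img[x0, y0] * (1 - dx) * (1 - dy) +
                         img[x1, y0] * dx * (1 - dy) +
                         img[x0, y1] * (1 - dx) * dy +
                         img[x1, y1] * dx * dy)
    return out.astype(img.dtype)

img = np.tile(np.arange(8, dtype=np.uint8) * 32, (8, 1))
zoomed = zoom_bilinear(img, 2)      # 16 x 16 result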


Image shrinking

Pixel deletion (the shrinking counterpart of pixel replication):
Delete alternate rows and columns to shrink the image by an integer factor.
Enhancement is needed for better representation and extraction of important information.

Methods of enhancement are highly subjective.

Image enhancement approaches

Two categories: spatial domain methods and frequency domain methods.

Spatial domain methods:
Spatial domain refers to the image plane itself, and these methods imply direct manipulation of the pixels in the image.

Frequency domain methods:
Modify the Fourier transform of the original image rather than the image plane itself.
L-4

Spatial domain process:

g(x,y) = T[f(x,y)]

where f(x,y) = original image
      g(x,y) = processed image
      T = transformation function or operator

Point processing: processing an image by considering the gray level of each pixel individually.

Mask processing: creating a mask about the pixel (x,y) and processing the neighborhood it covers.
The operator T is defined over some neighborhood of (x,y), i.e., over a subimage centered at (x,y). The subimages can be, for example, square or rectangular regions.
Point Processing

The simplest form of T is when the neighborhood size is 1 x 1 (i.e., point processing):

g(x,y) = T[f(x,y)]
s = T(r)

where s = g(x,y)
      r = f(x,y)

T is the gray level transformation function.
Gray level transformations for contrast enhancement

Basic transformation functions for image enhancement:

1. Linear (negative and identity transformations)
2. Logarithmic (log and inverse-log transformations)
3. Power law (nth power and nth root transformations)

Image identity:
s = r

Image negative:
s = L - 1 - r
For an 8-bit image: s = 255 - r

Log transformation:
s = c · log(1 + r)
where c = constant and r ≥ 0

Power law transformation:
s = c · r^γ
where c and γ are positive constants

γ < 1 correction: dark levels have to be stretched
γ > 1 correction: dark levels have to be compressed
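A minimal sketch of these point transformations in NumPy, assuming an 8-bit input array img (the scaling constants and the example γ = 0.4 are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def negative(img):
    """s = L - 1 - r for an 8-bit image (L = 256)."""
    return 255 - img

def log_transform(img):
    """s = c * log(1 + r), with c chosen so the output spans [0, 255]."""
    c = 255.0 / np.log(1.0 + 255.0)
    return (c * np.log1p(img.astype(float))).astype(np.uint8)

def power_law(img, gamma, c=1.0):
    """s = c * r**gamma on normalized intensities r in [0, 1]."""
    r = img.astype(float) / 255.0
    s = c * np.power(r, gamma)
    return np.clip(255.0 * s, 0, 255).astype(np.uint8)

img = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
dark_stretched = power_law(img, gamma=0.4)   # gamma < 1 brightens dark regions
```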
Piecewise linear transformation function
L-5
Example 3 x 3 block of gray-level values:

167  133  111
144  140  135
159  154  148
L-6
Probability basics (e.g., nH heads and nT tails in n coin tosses):

nH + nT = n

P(H) = nH/n,   P(T) = nT/n

nH/n + nT/n = 1

For any event A:  0 ≤ P(A) ≤ 1

Cumulative distribution function (CDF):

F(a) = P(x ≤ a)

1. F(-∞) = 0
2. F(∞) = 1
3. 0 ≤ F(x) ≤ 1
4. F(x1) ≤ F(x2) if x1 < x2
5. P(x1 < x ≤ x2) = F(x2) - F(x1)
6. F(x⁺) = F(x), where x⁺ = x + ε, ε > 0, ε → 0 (right-continuity)

Probability density function (pdf):

p(x) = dF(x)/dx

1. p(x) ≥ 0 for all x
2. ∫_{-∞}^{∞} p(x) dx = 1
3. F(x) = ∫_{-∞}^{x} p(α) dα
4. P(x1 < x ≤ x2) = ∫_{x1}^{x2} p(x) dx
Histogram of a digital image:

h(rk) = nk

where rk = the kth gray level
      nk = the number of pixels having gray level rk

Normalized histogram:

p(rk) = h(rk)/n = nk/n

where n = total number of pixels; p(rk) estimates the probability of occurrence of gray level rk, and

Σ_k p(rk) = 1
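A minimal sketch of computing h(rk) and p(rk) for an 8-bit image with NumPy (array names are illustrative):

```python
import numpy as np

def gray_histogram(img, L=256):
    """Return h[k] = number of pixels with gray level k, and p[k] = h[k]/n."""
    h = np.bincount(img.ravel(), minlength=L)    # h(r_k) = n_k
    p = h / img.size                             # normalized histogram, sums to 1
    return h, p

img = np.tile(np.arange(256, dtype=np.uint8), (4, 1))
h, p = gray_histogram(img)
assert abs(p.sum() - 1.0) < 1e-12
```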
Let r denote normalized gray levels, with r = 0 representing black and r = 1 representing white.

Consider a gray level transformation

s = T(r),   0 ≤ r ≤ 1

and let pr(r) and ps(s) be the probability density functions of r and s. Then

ps(s) ds = pr(r) dr

For histogram equalization we want a uniform output density, ps(s) = 1, so

ds = pr(r) dr

Integrating both sides,

∫_0^s dα = ∫_0^r pr(α) dα

s = ∫_0^r pr(α) dα = T(r)

i.e., the equalizing transformation T(r) is the cumulative distribution function of r:

s = T(r) = ∫_0^r pr(α) dα

In the discrete case,

pr(rk) = nk/n,   k = 0, 1, ..., L-1

sk = T(rk) = Σ_{j=0}^{k} pr(rj)

sk = T(rk) = Σ_{j=0}^{k} nj/n,   for k = 0, 1, ..., L-1
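A minimal sketch of discrete histogram equalization following the sk formula above, with the usual extra step of rescaling sk from [0, 1] back to [0, L-1] for display (NumPy; names are illustrative):

```python
import numpy as np

def histogram_equalize(img, L=256):
    """s_k = (L-1) * sum_{j<=k} n_j / n, then map each pixel r_k -> s_k."""
    h = np.bincount(img.ravel(), minlength=L)      # n_k
    cdf = np.cumsum(h) / img.size                  # sum_{j<=k} n_j / n, in [0, 1]
    s = np.round((L - 1) * cdf).astype(np.uint8)   # rescale to [0, L-1]
    return s[img]                                  # apply the mapping per pixel

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))   # low-contrast test image
equalized = histogram_equalize(img)
```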
L-7, 29/1/2014
Histogram matching (specification):

For the input image,

s = T(r) = ∫_0^r pr(α) dα

sk = T(rk) = Σ_{j=0}^{k} pr(rj) = Σ_{j=0}^{k} nj/n

For the desired (specified) density pz(z), define

G(z) = ∫_0^z pz(α) dα = s

G(zk) = Σ_{i=0}^{k} pz(zi) = sk

Since G(z) = T(r),

z = G⁻¹(s) = G⁻¹[T(r)]

where

s = T(r) = ∫_0^r pr(α) dα
s = G(z) = ∫_0^z pz(α) dα
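A minimal sketch of discrete histogram matching, z = G⁻¹[T(r)], where G is estimated from a reference image and inverted by a nearest-CDF lookup (NumPy; the reference-image formulation and names are illustrative assumptions):

```python
import numpy as np

def histogram_match(img, ref, L=256):
    """Map gray levels of `img` so its histogram approximates that of `ref`."""
    T = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size   # s_k = T(r_k)
    G = np.cumsum(np.bincount(ref.ravel(), minlength=L)) / ref.size   # G(z_k)
    # z_k = G^{-1}(s_k): for each s_k pick the smallest z with G(z) >= s_k
    z = np.searchsorted(G, T).clip(0, L - 1).astype(np.uint8)
    return z[img]

img = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
ref = np.tile(np.arange(256, dtype=np.uint8), (16, 1))
matched = histogram_match(img, ref)
```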
