
IMAGING

Film photography

Time-consuming

Real-time processing was not possible

No automation based on vision


Digital photography

Images are stored by capturing the binary data using
some electronic devices (SENSORS)

Sensors: Charge Coupled Device (CCD)
Complementary metal-oxide-semiconductor (CMOS)
Photomultiplier tube (PMT)

The CCD was invented in 1969 by Willard Boyle and
George Smith at AT&T Bell Labs.
Nobel Prize in Physics in 2009!
A simple image formation method
An IMAGE is defined by a two-dimensional function f(x, y),
where f(x, y) is the intensity or the gray-level value
of the image at the spatial coordinates (x, y).

When x, y and f are all discrete, the image is called a DIGITAL image

A DIGITAL image is composed of a finite number of
elements (say 256 x 256), each of which has a
particular location and value. These elements are
called picture elements, image elements, pels,
or pixels.
The value or amplitude of f at the spatial
coordinate (x, y) is a positive scalar quantity whose
physical meaning is determined by the source of
the image.

The pixel value f is proportional to the energy radiated by
the physical source.

So, 0 < f(x, y) < ∞
f(x, y) depends on
1. the amount of illumination incident on the object from the source, i(x, y)
2. the amount of illumination reflected by the object, r(x, y)

f(x, y) = i(x, y) r(x, y) = l

where 0 < i(x, y) < ∞
      0 < r(x, y) < 1
and l = gray level of the monochrome image

Lmin ≤ l ≤ Lmax, where Lmin = imin rmin
                       Lmax = imax rmax

So, Lmin = 0
    Lmax = L - 1 (say)
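As a rough numerical illustration (the values of i and r below are assumed, not from the slides): if imin = 10 and imax = 1000 units of incident illumination, with rmin = 0.01 and rmax = 0.8, then Lmin = 10 × 0.01 = 0.1 and Lmax = 1000 × 0.8 = 800, so every observed gray level l lies in [0.1, 800] before being rescaled to [0, L - 1].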
The interval [0, L - 1] is defined as the GRAYSCALE

l = 0 → black
l = L - 1 → white

8-bit grayscale: [0, 2^8 - 1] = [0, 255]

(Figure: grayscale bar from f = 0 to f = L - 1;
the example image has 16 gray levels, hence it is a 4-bit image)
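In general, a k-bit image has L = 2^k gray levels: k = 1 gives 2 levels (a binary image), k = 4 gives 16 levels, and k = 8 gives 256 levels.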
An image may be continuous with respect to the
x- and y-coordinates and also in its amplitude.

Digitizing the coordinate values is called SAMPLING.

Digitizing the amplitude values is called QUANTIZATION.
Sampling depends on the arrangement of the sensors used
to generate the image.
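A minimal NumPy sketch of the two steps (an assumed illustration, not part of the slides; the function name and the test scene are made up): the coordinates of a continuous scene f(x, y) are sampled on an M x N grid, and the amplitudes are then quantized to L = 2^k gray levels.

import numpy as np

def sample_and_quantize(scene, M=256, N=256, k=8):
    # scene: a function f(x, y) defined on [0, 1] x [0, 1] with values in [0, 1]
    xs = np.linspace(0.0, 1.0, M)                 # SAMPLING of the x coordinate
    ys = np.linspace(0.0, 1.0, N)                 # SAMPLING of the y coordinate
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    f = scene(X, Y)                               # continuous amplitudes in [0, 1]
    L = 2 ** k
    return np.floor(f * (L - 1) + 0.5).astype(np.uint16)  # QUANTIZATION to [0, L - 1]

# example scene: a horizontal ramp modulated by a sine pattern, quantized to 16 levels
img = sample_and_quantize(lambda x, y: 0.5 * x + 0.5 * np.abs(np.sin(4 * np.pi * y)), k=4)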
Representing Digital Images:

Sampling and quantization yield the
image in the form of a matrix of real numbers.

x and y vary over 0, 1, 2, ... and are indices, not the actual values of the physical coordinates
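Written out in the usual way (with the origin at the top-left corner), such an M x N digital image is the matrix

f(x, y) = [ f(0, 0)      f(0, 1)      ...  f(0, N-1)
            f(1, 0)      f(1, 1)      ...  f(1, N-1)
            ...
            f(M-1, 0)    f(M-1, 1)    ...  f(M-1, N-1) ]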
The number of bits (b) required to store a digitized image
of M x N pixels with k bits per pixel is

b = M x N x k

For an 8-bit image, k = 8

Gray levels: [0, 255]

L = 2^8 = 256

b = M x N x 8
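As a worked example (image size assumed for illustration): a 1024 x 1024 image with k = 8 bits per pixel needs b = 1024 x 1024 x 8 = 8,388,608 bits = 1,048,576 bytes = 1 MB of storage.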
Spatial and gray-level resolution

Spatial resolution:

Spatial resolution is the smallest distinguishable detail
in an image.

It depends on sampling.
(Figure: alternating vertical lines of width w: A B A B A B A B)

Line pairs: AA and BB

Distance between the line pairs = 2w

No. of lines per unit length = 1 / (2w)

Spatial resolution = no. of distinguishable lines per unit length

Hence, spatial resolution = 1 / (2w)
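For instance, assuming a line width of w = 0.05 mm: one line pair spans 2w = 0.1 mm, so 1 / (2w) = 1 / (0.1 mm) = 10 distinguishable lines per mm.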
Typical effects of varying the number of samples in a
digital image

(Pixel size = constant, and number of gray levels = 256)

Sub-sampling:
The sub-sampled image is scaled back to the original size for comparison
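A minimal NumPy sketch of one simple way to sub-sample (an assumed illustration, not necessarily the procedure used for the slide figures; function names and the dummy image are made up): keep every step-th pixel, then zoom the result back to the original size by pixel replication so it can be compared with the original.

import numpy as np

def subsample(img, step):
    # Keep every `step`-th row and column (reduces the number of samples).
    return img[::step, ::step]

def upscale_by_replication(small, step):
    # Repeat each pixel `step` times along both axes (nearest-neighbour zoom).
    return np.repeat(np.repeat(small, step, axis=0), step, axis=1)

img = (np.arange(256 * 256) % 256).reshape(256, 256).astype(np.uint8)  # dummy 256 x 256 gradient
small = subsample(img, 4)                      # 64 x 64 samples
rescaled = upscale_by_replication(small, 4)    # back to 256 x 256, visibly blockier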
Gray-level resolution:

Refers to the smallest distinguishable change in
gray level.

Gray-level resolution is highly subjective, and it also
depends on the hardware used to capture the image.


(Figure: gray-level resolution reduced from 2^8 levels down to 2^7, 2^6, 2^5, 2^4, 2^3, 2^2 and 2^1 levels)
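A minimal NumPy sketch (an assumed illustration; the function name and the dummy gradient image are made up) of reducing the gray-level resolution of an 8-bit image to L = 2^k levels; the result is stretched back to [0, 255] only so the coarser quantization is visible on screen.

import numpy as np

def requantize(img8, k):
    # img8: uint8 image with 256 gray levels; return an image with only 2^k levels.
    L = 2 ** k
    levels = (img8.astype(np.uint16) * L) // 256          # map 0..255 -> 0..L-1
    return (levels * 255 // (L - 1)).astype(np.uint8)     # stretch to display range 0..255

img8 = np.tile(np.arange(256, dtype=np.uint8), (64, 1))   # dummy gradient image
coarse = {k: requantize(img8, k) for k in range(7, 0, -1)}  # 2^7 down to 2^1 levels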
Enhancement is needed for better representation
and extraction of important information.

Methods of enhancement are highly subjective.


Image enhancement approach

Two categories:

1. Spatial domain methods
2. Frequency domain methods

Spatial domain method:
Spatial domain refers to the image plane itself, and
the method involves the direct manipulation
of the pixels in the image.

Frequency domain method:
Modifying the values in the Fourier transform of the
original image, then taking the inverse transform.
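A minimal NumPy sketch of the frequency-domain route (an assumed illustration; the ideal low-pass filter and the cutoff value are just one possible modification, and the function name is made up): compute the Fourier transform of the image, modify it, and take the inverse transform.

import numpy as np

def frequency_domain_smooth(img, cutoff=30):
    # 1. Fourier transform of the image, zero frequency shifted to the centre.
    F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
    # 2. Modify the transform: here, an ideal low-pass filter of radius `cutoff`.
    M, N = img.shape
    y, x = np.ogrid[:M, :N]
    dist = np.sqrt((y - M / 2) ** 2 + (x - N / 2) ** 2)
    F[dist > cutoff] = 0
    # 3. Inverse transform back to the spatial domain.
    return np.fft.ifft2(np.fft.ifftshift(F)).real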


Spatial domain process:

g(x, y) = T[f(x, y)]

where f(x, y) = original image
      g(x, y) = processed image
      T = transformation function or operator, defined over some neighborhood of (x, y)


Point Processing

The simplest form of T is when the neighborhood size is
1 x 1 (i.e., point processing):

g(x, y) = T[f(x, y)]
s = T(r)

where s = g(x, y)
      r = f(x, y)

T is the gray-level transformation function
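A minimal NumPy sketch (an assumed illustration; the two transformations and function names are common textbook examples, not necessarily the slides' exact ones) of point processing, where each output pixel s depends only on the corresponding input pixel r through s = T(r).

import numpy as np

def negative(img8):
    # T(r) = (L - 1) - r with L = 256: dark pixels become bright and vice versa.
    return 255 - img8

def contrast_stretch(img8):
    # Linearly stretch the occupied gray-level range [r_min, r_max] to [0, 255].
    # (Assumes the image is not constant, i.e. r_max > r_min.)
    r = img8.astype(float)
    r_min, r_max = r.min(), r.max()
    s = (r - r_min) / (r_max - r_min) * 255.0
    return s.astype(np.uint8)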
Gray level transformation for contrast enhancement
Basic transformation functions for image enhancement
