
Digital Image Processing

Relationships of Pixels II

Image description

f(x, y): intensity/brightness of the image at spatial coordinates (x, y)

0 < f(x, y) < ∞, and f is determined by 2 factors:
illumination component i(x, y): amount of source light incident on the scene
reflectance component r(x, y): amount of light reflected by the objects
f(x, y) = i(x, y) r(x, y)
where
0 < i(x, y) < ∞: determined by the light source
0 < r(x, y) < 1: determined by the characteristics of the objects

12/15/21
Sampling and Quantization

[Figure: a continuous image digitized by sampling (spatial coordinates) and quantization (amplitude)]
Sampling and Quantization
Sampling: digitization of the spatial coordinates (x, y)
Quantization: digitization in amplitude (also called gray-level quantization)
8-bit quantization: 2^8 = 256 gray levels (0: black, 255: white)
Binary (1-bit quantization): 2^1 = 2 gray levels (0: black, 1: white)
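A minimal sketch of uniform gray-level requantization (the pixel values are made up): dropping from 8 bits to k bits keeps 2^k evenly spaced levels.

```python
import numpy as np

# Requantize an 8-bit image (256 levels) to fewer gray levels.
img = np.array([0, 31, 64, 128, 200, 255], dtype=np.uint8)

def quantize(img, bits):
    levels = 2 ** bits
    step = 256 // levels           # width of each quantization bin
    return ((img // step) * step).astype(np.uint8)  # snap to bin floor

print(quantize(img, 2))   # 4 gray levels: [  0   0  64 128 192 192]
print(quantize(img, 1))   # binary-like:   [  0   0   0 128 128 128]
```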

Commonly used numbers of samples (resolution):

Digital still cameras: 640×480, 1024×1024, 4064×2704, and so on
Digital video cameras: 640×480 at 30 frames/second;
1920×1080 at 60 frames/second (HDTV)

Sampling and Quantization

An M x N digital image is expressed as the matrix

    f(0,0)      f(0,1)      . . .   f(0,N-1)
    f(1,0)      f(1,1)      . . .   f(1,N-1)
      .           .         . . .      .
      .           .         . . .      .
    f(M-1,0)    f(M-1,1)    . . .   f(M-1,N-1)

N : number of columns
M : number of rows
Sampling and Quantization
 Image coordinate convention (not valid for MATLAB!)

There is no universally accepted convention or notation. Always check carefully!

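The caveat above can be made concrete with a small sketch. NumPy is used here as a stand-in (it follows the same 0-based, row-first convention as the textbook), while MATLAB indexes from 1; the array values are made up.

```python
import numpy as np

# A 3x4 "image": M = 3 rows, N = 4 columns, indices start at 0.
# In MATLAB the same pixel f(0, 0) would be addressed as f(1, 1).
f = np.arange(12).reshape(3, 4)

print(f[0, 0])   # top-left pixel f(0, 0) -> 0
print(f[2, 3])   # bottom-right pixel f(M-1, N-1) -> 11
print(f.shape)   # (M, N) = (3, 4)
```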
Sampling and Quantization
 MATLAB Representation

Digital Images

Digital images are 2D arrays (matrices) of numbers:

Sampling

Effect of Sampling and Quantization

[Figures: the same image sampled at 250×210, 125×105, 50×42, and 25×21 samples, and quantized to 256, 16, 8, and 4 gray levels, down to a binary image]
RGB (color) Images

Pixel Neighborhood
• The pixels surrounding a given pixel. Most neighborhoods used in image
processing algorithms are small square arrays with an odd number of pixels.

Basic relationships between pixels
Arrangement of pixels:
0 1 1
0 1 0
0 0 1

4 neighbours N4(p) (the values above, below, left, and right of the center):
. 1 .
0 1 0
. 0 .

Diagonal neighbours ND(p):
0 . 1
. 1 .
0 . 1

8 neighbours N8(p) = ND(p) U N4(p):
0 1 1
0 1 0
0 0 1
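The neighborhood definitions above can be sketched as small coordinate-set functions (boundary handling is omitted for brevity; neighbors falling outside the image would need to be filtered out).

```python
# Neighborhoods of a pixel p = (x, y), as sets of coordinates.
def n4(x, y):
    """4-neighbors: horizontal and vertical neighbors of (x, y)."""
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(x, y):
    """Diagonal neighbors of (x, y)."""
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def n8(x, y):
    """8-neighbors: N8(p) = N4(p) U ND(p)."""
    return n4(x, y) | nd(x, y)

print(len(n4(5, 5)), len(nd(5, 5)), len(n8(5, 5)))   # 4 4 8
```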
Basic relationships between Pixels
 Connectivity between pixels:
An important concept used in establishing the boundaries of objects
and the components of regions.
Two pixels p and q are connected if
– they are adjacent in some sense, and
– their gray levels satisfy a specified criterion of similarity.

V: the set of gray-level values used to define the criterion of
similarity
Basic relationships between Pixels

4-connectivity:
The gray levels of p and q are in V, and q ∈ N4(p)

8-connectivity:
The gray levels of p and q are in V, and q ∈ N8(p)

m-connectivity (mixed connectivity):
The gray levels of p and q are in V, and q satisfies one of the following:
1) q ∈ N4(p), or
2) q ∈ ND(p) and N4(p) ∩ N4(q) contains no pixels whose values are in V
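A sketch (not taken from the slides) of the m-adjacency test on a small binary image, with V = {1} as the similarity set; the image is stored as a coordinate dictionary for brevity, and its values are made up.

```python
def n4(p):
    x, y = p
    return {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}

def nd(p):
    x, y = p
    return {(x + 1, y + 1), (x + 1, y - 1), (x - 1, y + 1), (x - 1, y - 1)}

def m_adjacent(img, p, q, V=frozenset({1})):
    """m-adjacency: q in N4(p), or q in ND(p) while N4(p) & N4(q)
    contains no pixel whose value is in V."""
    if img[p] not in V or img[q] not in V:
        return False
    if q in n4(p):
        return True
    if q in nd(p):
        common = n4(p) & n4(q)
        # coordinates outside the image default to 0 (background)
        return all(img.get(c, 0) not in V for c in common)
    return False

img = {(0, 0): 0, (0, 1): 1, (0, 2): 1,
       (1, 0): 0, (1, 1): 1, (1, 2): 0,
       (2, 0): 0, (2, 1): 0, (2, 2): 1}
print(m_adjacent(img, (1, 1), (0, 2)))  # False: (0, 1) is a shared 4-neighbor in V
print(m_adjacent(img, (1, 1), (2, 2)))  # True: no shared 4-neighbor is in V
```

The first call returns False precisely because the two pixels are already connected through the 4-neighbor (0, 1); this is how m-connectivity eliminates multiple paths.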
Basic relationships between pixels

Mixed connectivity:
Note: mixed connectivity can eliminate the multiple-path
connections that often occur with 8-connectivity.

[Figure: a pixel arrangement, the pixels 8-adjacent to the center pixel, and the same arrangement under m-adjacency]
Basic relationships between pixels
Path
Let the coordinates of pixel p be (x, y) and of pixel q be (s, t).
A path from p to q is a sequence of distinct pixels with
coordinates (x0, y0), (x1, y1), ..., (xn, yn), where
(x0, y0) = (x, y) and (xn, yn) = (s, t),
and (xi, yi) is adjacent to (xi-1, yi-1) for 1 ≤ i ≤ n
Regions
A set of pixels in an image where all component pixels are
connected
Boundary of a region
The set of pixels of a region R that have one or more neighbors
that are not in R
Distance Measures
Given pixels p and q with coordinates (x, y) and (s, t):
Euclidean distance between p and q:
De(p, q) = [(x - s)^2 + (y - t)^2]^(1/2)
– The pixels with De distance ≤ r from (x, y) form a disk of radius r centered at (x, y)
City-block distance between p and q:
D4(p, q) = |x - s| + |y - t|
– The pixels with D4 distance ≤ r from (x, y) form a diamond centered at (x, y)
– The pixels with D4 = 1 are the 4-neighbors of (x, y)
Chessboard distance between p and q:
D8(p, q) = max(|x - s|, |y - t|)
– The pixels with D8 distance ≤ r from (x, y) form a square centered at (x, y)
– The pixels with D8 = 1 are the 8-neighbors of (x, y)
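The three measures can be sketched directly from their definitions (the example pixel pair is made up):

```python
import math

# Distances between pixels p = (x, y) and q = (s, t).
def d_e(p, q):   # Euclidean distance
    return math.sqrt((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2)

def d_4(p, q):   # city-block distance
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_8(p, q):   # chessboard distance
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_e(p, q), d_4(p, q), d_8(p, q))   # 5.0 7 4
```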
Distance Measures

[Figure-only slides: worked examples of the Euclidean, city-block, and chessboard distances]
Shape Matching
 Suppose we want to find the difference between two shapes.
 These two shapes are almost identical, except that there is a hole in the second shape.
Shape Matching
 To find the features of the shapes, we take the skeleton of each shape, shown in the figures below the shapes.
The skeleton information

 Comparing the two skeletons:
– The skeleton of the first shape has only 5 line segments, whereas the skeleton of the second shape has 10 line segments.
– Similarly, counting the points where more than 2 line segments meet: the first skeleton has only 2 such points, whereas the second skeleton has 4.
 So by comparing the skeletons rather than the original shapes, a clear difference can be found, both in the number of line segments and in the number of points where more than 2 line segments meet.
Skeleton

 What is a skeleton?
 A skeleton is obtained when we remove some of the foreground points in such a way that the shape information, as well as the dimensions, are more or less retained.
 Now, the question is: how do we get this skeleton?
How to get the Skeleton
 Assume that the foreground region in the input binary image is made of some uniform, slow-burning material.
 Suppose we light a fire at all the points along the boundary of the foreground region.
 As the fire lines move inward, there will be points in the foreground region where fires coming from two different parts of the boundary meet, and at those points the fire extinguishes itself.
 The set of all such points is called the quench line.
 The skeleton of the region is the quench line obtained by this fire-propagation concept.
 This is also called the Medial Axis Transformation.
 A simple description of the movement of the fire line does not tell us how to compute the skeleton of a particular shape. For that, we use a distance measure.
 When we light the fire at all the boundary points simultaneously and it moves slowly into the foreground region, we can note, at every point, the time the fire takes to reach that point; this gives a distance transformation of the image.
 The distance transform is normally used for binary images: for every point we calculate the time the fire takes to reach that particular point.
 By applying the distance transformation, we get a gray-level image in which the intensity of each point inside the foreground region shows the distance of that point from the closest boundary point.
Distance transform
 Here we have a binary image whose foreground region is a rectangle. Taking the distance transform (the image on the right):
– all the boundary points get a distance value equal to 1,
– the points just inside the boundary get a distance value equal to 2,
– and the points further inside get a distance value equal to 3.
 In the gray-level image obtained by assigning these values, the intensity increases slowly from the boundary to the interior of the foreground region.
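The "fire propagation" idea can be sketched as a city-block distance transform computed by breadth-first search from the boundary (the test image is made up; boundary pixels get distance 1, the next layer 2, and so on, as in the rectangle example above).

```python
from collections import deque

def distance_transform(img):
    """City-block distance transform of a binary image (1 = foreground)."""
    rows, cols = len(img), len(img[0])
    dist = [[0] * cols for _ in range(rows)]
    fringe = deque()
    for r in range(rows):
        for c in range(cols):
            if img[r][c] == 1:
                nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
                # a boundary pixel touches the background (or the image edge)
                if any(not (0 <= rr < rows and 0 <= cc < cols)
                       or img[rr][cc] == 0 for rr, cc in nbrs):
                    dist[r][c] = 1
                    fringe.append((r, c))
    while fringe:                     # the fire spreads inward, layer by layer
        r, c = fringe.popleft()
        for rr, cc in [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]:
            if (0 <= rr < rows and 0 <= cc < cols
                    and img[rr][cc] == 1 and dist[rr][cc] == 0):
                dist[rr][cc] = dist[r][c] + 1
                fringe.append((rr, cc))
    return dist

img = [[1] * 6 for _ in range(5)]     # a 5x6 solid rectangle
for row in distance_transform(img):
    print(row)
```

For the 5×6 rectangle this prints distance 1 on the outer ring, 2 on the next ring, and 3 in the two center pixels, matching the rectangle example in the text.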
 By analyzing this distance-transformed image, you find that there are a few points at which there is a discontinuity in the curvature.
 By identifying these points of curvature discontinuity, we get exactly the points which lie on the skeleton of this particular shape.
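As a rough sketch only (an assumption on my part, not the slides' exact curvature criterion): skeleton points can be approximated as ridge points of the distance transform, i.e. pixels at least as far from the boundary as all their 4-neighbors, with at least one strictly nearer neighbor.

```python
def ridge_points(dist):
    """Approximate skeleton as local maxima of the distance transform."""
    rows, cols = len(dist), len(dist[0])
    pts = []
    for r in range(rows):
        for c in range(cols):
            if dist[r][c] == 0:           # background
                continue
            nbrs = [dist[rr][cc]
                    for rr, cc in [(r-1, c), (r+1, c), (r, c-1), (r, c+1)]
                    if 0 <= rr < rows and 0 <= cc < cols]
            if (all(v <= dist[r][c] for v in nbrs)
                    and any(v < dist[r][c] for v in nbrs)):
                pts.append((r, c))
    return pts

# distance transform of a 3x5 solid rectangle (written out by hand)
dist = [[1, 1, 1, 1, 1],
        [1, 2, 2, 2, 1],
        [1, 1, 1, 1, 1]]
print(ridge_points(dist))   # the middle row's interior: [(1, 1), (1, 2), (1, 3)]
```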
 The left column shows the original images, the middle column the distance-transformed images, and the right-most column the skeletons.
 By correlating the right-most column with the middle column, you can see that the skeleton can be obtained easily from the distance-transformed image.
Skeletons of shapes

 We can use this method when there is a need for shape matching or a shape-discrimination problem.
 Instead of processing the original shapes, we compare the shapes using their skeletons.
Use of skeleton
 We can obtain the length of a shape: take the different end points of the skeleton and the distances between every pair of end points; the maximum of all those pairwise distances gives the length of the shape.
 The skeleton thus gives us the ability to qualitatively and quantitatively differentiate between shapes.
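The length measure described above can be sketched as the maximum pairwise distance between skeleton end points (the end-point coordinates below are hypothetical):

```python
import math

def shape_length(endpoints):
    """Maximum Euclidean distance over all pairs of skeleton end points."""
    return max(math.dist(p, q)
               for i, p in enumerate(endpoints)
               for q in endpoints[i + 1:])

endpoints = [(0, 0), (0, 10), (6, 8)]    # made-up skeleton end points
print(shape_length(endpoints))           # 10.0
```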
Arithmetic and logical operations

 In a numerical system, whether the decimal number system or the binary number system, we can have arithmetic as well as logical operations.
 Similarly, for images we can also have arithmetic and logical operations.
 We can add two images pixel by pixel: each pixel of one image is added to the corresponding pixel of a second image.
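A sketch of pixel-by-pixel addition of two 8-bit images (the pixel values are made up); sums above 255 are clipped so the result stays a valid gray level.

```python
import numpy as np

a = np.array([[100, 200], [250, 10]], dtype=np.uint8)
b = np.array([[100, 100], [100, 100]], dtype=np.uint8)

# widen to int16 before adding so the sum cannot wrap around, then clip
s = np.clip(a.astype(np.int16) + b.astype(np.int16), 0, 255).astype(np.uint8)
print(s)   # [[200 255] [255 110]] -- 200+100 and 250+100 clip to 255
```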
NOT operation or invert operation
 We can invert a binary image with a NOT (invert) operation.
 If A is a binary image, then under the NOT operation all the pixels that were black (0) in the original image become white (1), and all the pixels that were white (1) become black (0).
AND, XOR
 Similarly, given two images A and B:
 the logical AND operation of A and B is shown in the lower-left image,
 and the XOR operation in the lower-right image.
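The logical operations can be sketched on small binary arrays (the images A and B below are made up):

```python
import numpy as np

a = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
b = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)

not_a = ~a          # NOT: black <-> white
a_and_b = a & b     # AND: 1 only where both images are 1
a_xor_b = a ^ b     # XOR: 1 where exactly one image is 1
print(a_and_b.astype(int))   # [[1 0 0] [0 1 0]]
print(a_xor_b.astype(int))   # [[0 1 0] [0 0 1]]
```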
Pixel Neighborhood operations

 A 3 by 3 matrix represents a part of an image with nine pixel elements Z1 to Z9.
 Suppose we want to replace every pixel value by the average of its neighborhood, including the pixel itself.
 This simple averaging operation is applied at every pixel: we replace the intensity by a function of the intensities of the neighboring pixels.
 We will see later that this is the simplest form of low-pass filtering to remove noise from an image.
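A minimal sketch of 3×3 neighborhood averaging, computed only at interior pixels for brevity (the image, with one bright "noise" pixel, is made up):

```python
import numpy as np

img = np.array([[10, 10, 10, 10],
                [10, 90, 10, 10],
                [10, 10, 10, 10]], dtype=float)

out = img.copy()
for r in range(1, img.shape[0] - 1):
    for c in range(1, img.shape[1] - 1):
        # average of Z1..Z9: the pixel and its 8 neighbors
        out[r, c] = img[r - 1:r + 2, c - 1:c + 2].mean()
print(out[1, 1], out[1, 2])   # the 90 "noise" pixel is smoothed toward 10
```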
Using templates
 We define a 3 by 3 template, shown in the right-hand figure, whose nine elements are W1 to W9.
 To perform the neighborhood operation, we place the template on the original image so that its center coincides with the pixel whose value we want to replace.
 We replace the value with the weighted sum of the values taken from the image and the corresponding elements of the template:

Z = Σ Wi Zi, with i varying from 1 to 9
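The template response at one pixel can be sketched as the element-wise product summed over the 3×3 neighborhood; with all weights equal to 1/9, this reduces to the averaging operation above. The weights and pixel values are made up.

```python
import numpy as np

W = np.full((3, 3), 1 / 9)                 # averaging template W1..W9
Z = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)     # 3x3 neighborhood Z1..Z9

response = (W * Z).sum()                   # Z = sum of W_i * Z_i, i = 1..9
print(round(response, 6))                  # 5.0, the neighborhood average
```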
Neighborhood operation
 The template operation can also be used for edge detection in different images.
Summary of the lecture
 Different distance measures
 Application of distance measures
 Arithmetic and logical operations on images
 Neighborhood operation on images
