
Advanced Digital Image Processing

Chapter 10: Image Segmentation

Mohammad Mahadi Hassan
Associate Professor, CSE
Image Segmentation
• Segmentation attempts to partition the pixels of an image into groups that strongly correlate with the objects in the image.
• It is typically the first step in any automated computer vision application.

2
Image Segmentation
• Segmentation algorithms are generally based on one of two basic properties of intensity values:
• Discontinuity: partition an image based on abrupt changes in intensity (such as edges).
• Similarity: partition an image into regions that are similar according to a set of predefined criteria.

3
Image Segmentation
• Image Segmentation

4
Image Segmentation
• Image Segmentation

5
Image Segmentation
• Detection of discontinuities:
• There are three basic types of gray-level discontinuities: points, lines, and edges.
• The common way to detect them is to run a mask through the image.

6
Point Detection:
• Note that the mask is the same as the mask of the Laplacian operator (Chapter 3).
• The only differences considered of interest are those large enough (as determined by a threshold T) to be considered isolated points:

|R| > T
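A minimal Python sketch of this test (the 3×3 Laplacian-style mask values and the SciPy convolution helper are assumptions, since the mask itself is not reproduced on this slide):

```python
import numpy as np
from scipy.ndimage import convolve

def detect_points(image, T):
    """Point detection sketch: convolve with a Laplacian-style mask and
    keep only responses whose magnitude exceeds the threshold T."""
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    R = convolve(image.astype(float), mask)
    return np.abs(R) > T  # boolean map of isolated-point candidates
```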

7
Point Detection:
8
Line Detection

[Figure: the four line-detection masks R1 (horizontal), R2 (+45°), R3 (vertical), R4 (-45°)]

• The horizontal mask gives its maximum response when a line passes through the middle row of the mask against a constant background.
• The same idea is used with the other masks.
• Note: the preferred direction of each mask is weighted with a larger coefficient (i.e., 2) than the other possible directions.

9
Line Detection
• Apply every mask to the image.
• Let R1, R2, R3, R4 denote the responses of the horizontal, +45 degree, vertical, and -45 degree masks, respectively.
• If, at a certain point in the image, |Ri| > |Rj| for all j ≠ i, that point is said to be more likely associated with a line in the direction of mask i.
10
Line Detection
• Alternatively, if we are interested in detecting all
lines in an image in the direction defined by a
given mask, we simply run the mask through the
image and threshold the absolute value of the
result.
• The points that are left are the strongest
responses, which, for lines one pixel thick,
correspond closest to the direction defined by
the mask.
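A hedged sketch of this directional line detection (the mask coefficients follow the usual textbook masks with the preferred direction weighted by 2; the SciPy helper is an assumption):

```python
import numpy as np
from scipy.ndimage import convolve

# Directional line-detection masks: horizontal, +45 degree, vertical, -45 degree.
MASKS = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], float),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], float),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], float),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], float),
}

def detect_lines(image, direction, T):
    """Run one directional mask over the image and threshold |R|."""
    R = convolve(image.astype(float), MASKS[direction])
    return np.abs(R) > T
```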

11
Line Detection

12
Edge Detection Approach
• Segmentation by finding pixels on a region boundary.
• Edges are found by looking at neighboring pixels.
• A region boundary is formed by measuring gray-value differences between neighboring pixels.

13
Edge Detection

• An edge is a set of connected pixels that lie on the boundary between two regions.
• An edge is a “local” concept, whereas a region boundary, owing to the way it is defined, is a more global idea.

14
Edge Detection

15
Edge Detection

16
Edge Detection

17
Edge Detection
• Detection of discontinuities: Image Derivatives

18
Edge Detection
• First column: images and gray-level profiles of a ramp edge corrupted by random Gaussian noise of mean 0 and σ = 0.0, 0.1, 1.0, and 10.0, respectively.
• Second column: first-derivative images and gray-level profiles.
• Third column: second-derivative images and gray-level profiles.

19
Edge Detection
• Gradient Operator

$$\nabla f = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$$

$$|\nabla f| = \left(G_x^2 + G_y^2\right)^{1/2}$$

For a 3×3 mask:

$$G_x = (z_7 + z_8 + z_9) - (z_1 + z_2 + z_3)$$
$$G_y = (z_3 + z_6 + z_9) - (z_1 + z_4 + z_7)$$
20
Edge Detection
• Prewitt and Sobel Operators

21
Edge Detection

22
Edge Detection

23
Edge Detection

24
Edge Detection

25
Edge Detection
• The Laplacian

26
Edge Detection

27
Edge Detection
• The Laplacian of Gaussian (LoG)

28
Edge Detection
• The Laplacian of Gaussian (LoG)

29
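As a rough illustration of the LoG idea, the sketch below smooths with a Gaussian, applies the Laplacian, and marks zero crossings; the sigma value and the simple sign-change test are assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_edges(image, sigma=2.0):
    """Laplacian-of-Gaussian sketch: filter, then mark zero crossings
    (sign changes between vertically or horizontally adjacent responses)."""
    response = gaussian_laplace(image.astype(float), sigma)
    edges = np.zeros(response.shape, bool)
    edges[:-1, :] |= (response[:-1, :] * response[1:, :]) < 0  # vertical neighbours
    edges[:, :-1] |= (response[:, :-1] * response[:, 1:]) < 0  # horizontal neighbours
    return edges
```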
The Hough Transform

30
The Hough Transform
• Global processing: The Hough Transform

31
The Hough Transform
• Global processing: The Hough Transform

32
The Hough Transform
• Global processing: The Hough Transform

33
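A minimal sketch of the Hough accumulation step for straight lines, using the normal parameterisation x·cosθ + y·sinθ = ρ (the resolution of the (ρ, θ) grid is an assumption):

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel; peaks in the
    accumulator correspond to lines x*cos(theta) + y*sin(theta) = rho."""
    h, w = edge_map.shape
    thetas = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    diag = int(np.ceil(np.hypot(h, w)))          # largest possible |rho|
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_map)
    for i, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, i), 1)             # accumulate votes for this theta
    return acc, thetas
```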
Region-Based Segmentation

34
What is a Region?
• Basic definition: a group of connected pixels with similar properties.
• Regions are important in interpreting an image because they may correspond to objects in a scene.
• For that, an image must be partitioned into regions that correspond to objects or parts of an object.

35
Basic formulation of Region

36
Region-Based vs. Edge-Based

Region-Based:
• Closed boundaries
• Multi-spectral images improve segmentation
• Computation based on similarity

Edge-Based:
• Boundaries formed not necessarily closed
• No significant improvement for multi-spectral images
• Computation based on difference

38
Region-Based Algorithms
• Region Growing
- An initial set of small areas is iteratively merged according to similarity constraints.
- Start by choosing an arbitrary seed pixel and comparing it with neighbouring pixels.
- The region is grown from the seed pixel by adding neighbouring pixels that are similar, increasing the size of the region.
- When the growth of one region stops, we simply choose another seed pixel that does not yet belong to any region and start again.

39
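A hedged sketch of region growing from a single seed (the intensity-difference tolerance and 4-connectivity are assumed similarity constraints, not prescribed by the slide):

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Grow a region from `seed` (row, col) by absorbing 4-connected
    neighbours whose intensity is within `tol` of the seed intensity."""
    h, w = image.shape
    region = np.zeros((h, w), bool)
    region[seed] = True
    seed_val = float(image[seed])
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not region[nr, nc]
                    and abs(float(image[nr, nc]) - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```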
Region-Based Algorithms
• Region splitting and merging

40
Image Thresholding
• What is thresholding?
• Simple thresholding
• Adaptive thresholding

41
Thresholding – A Key Aspect

• Most algorithms involve establishing a threshold level for a certain parameter.
• Correct thresholding leads to better segmentation.
• Using the available samples of image intensity, an appropriate threshold should be set automatically in a robust algorithm, i.e. no hard-wiring of gray values.

42
Automatic Thresholding
• Uses one or more of the following:
1. Intensity characteristics of objects
2. Sizes of objects
3. Fractions of the image occupied by objects
4. Number of different types of objects
• Size and probability of occurrence are the most popular.
• Intensity distributions are estimated by histogram computation.

43
Automatic Thresholding Methods

• Some automatic thresholding schemes:
1. P-tile method
2. Iterative threshold selection
3. Adaptive thresholding

44
Thresholding Methods
• P-tile Method: if an object occupies P% of the image pixels, then set a threshold T such that P% of the pixels have intensity below T.

• Iterative Thresholding: successively refines an approximate threshold to get a new value that partitions the image better.

45
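A minimal sketch of the P-tile rule, assuming the object is darker than the background and its area fraction p (in percent) is known in advance:

```python
import numpy as np

def p_tile_threshold(image, p):
    """Pick T so that p percent of the pixels have intensity below T."""
    return np.percentile(image, p)

# Hypothetical usage: a dark object covering roughly 30% of the frame.
# T = p_tile_threshold(img, 30); object_mask = img < T
```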
P-Tile Thresholding
• Thresholding is usually the first step in any segmentation approach.
• Single-value thresholding can be given mathematically as follows:

$$g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \\ 0 & \text{if } f(x, y) \le T \end{cases}$$

46
P-Tile Thresholding
• Basic global thresholding:
• Based on the histogram of an image: partition the image histogram using a single global threshold.

47
P-Tile Thresholding
• Basic global thresholding:
• The success of this technique very
strongly depends on how well the
histogram can be partitioned

48
Iterative P-Tile Thresholding
• The basic global thresholding algorithm:
1. Select an initial estimate for T (typically the average grey level in the image).
2. Segment the image using T to produce two groups of pixels: G1, consisting of pixels with grey levels > T, and G2, consisting of pixels with grey levels ≤ T.
3. Compute the average grey levels of the pixels in G1 to give μ1 and in G2 to give μ2.

49
Iterative P-Tile Thresholding
• The basic global thresholding algorithm (continued):
4. Compute a new threshold value:

$$T = \frac{\mu_1 + \mu_2}{2}$$

5. Repeat steps 2–4 until the difference in T between successive iterations is less than a predefined limit T∞.
This algorithm works very well for finding thresholds when the histogram is suitable.
50
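A sketch of this iterative procedure (the stopping limit eps plays the role of the predefined limit T∞; its value here is an assumption):

```python
import numpy as np

def iterative_threshold(image, eps=0.5):
    """Basic global thresholding: split at T, average the two groups,
    move T to the midpoint of the two means, and repeat until T settles."""
    T = image.mean()                      # step 1: initial estimate
    while True:
        g1 = image[image > T]             # step 2: pixels above T
        g2 = image[image <= T]            #         pixels at or below T
        mu1 = g1.mean() if g1.size else T # step 3: group means
        mu2 = g2.mean() if g2.size else T
        T_new = 0.5 * (mu1 + mu2)         # step 4: new threshold
        if abs(T_new - T) < eps:          # step 5: stop when the change is small
            return T_new
        T = T_new
```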
P-Tile Thresholding

51
P-Tile Thresholding

52
P-Tile Thresholding
• Limitation of P-Tile thresholding:

53
P-Tile Thresholding
• Limitation of P-Tile thresholding:

54
P-Tile Thresholding
• Limitation of P-Tile thresholding:

55
Adaptive Thresholding
• Adaptive thresholding is used in scenes with uneven illumination, where the same threshold value is not usable throughout the complete image.
• In such cases, look at small regions of the image and obtain thresholds for the individual sub-images. The final segmentation is the union of the regions of the sub-images.

56
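A hedged sketch of block-wise adaptive thresholding (the block size and the per-block mean rule are assumptions; the slides do not fix a particular local statistic):

```python
import numpy as np

def adaptive_threshold(image, block=32, offset=0.0):
    """Split the image into block x block sub-images, threshold each at its
    own mean plus an offset, and return the union of the results."""
    h, w = image.shape
    out = np.zeros((h, w), bool)
    for r in range(0, h, block):
        for c in range(0, w, block):
            sub = image[r:r + block, c:c + block]
            out[r:r + block, c:c + block] = sub > (sub.mean() + offset)
    return out
```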
Adaptive Thresholding
• Thresholding – Basic Adaptive Thresholding

57
Adaptive Thresholding
• Thresholding – Basic Adaptive Thresholding

58
Adaptive Thresholding
• Thresholding – Basic Adaptive Thresholding

59
Adaptive Thresholding
• Thresholding – Basic Adaptive Thresholding

60
Adaptive Thresholding
• Thresholding – Basic Adaptive Thresholding

61
Connected Components

Take the largest 2 components of the object


Connected Components

How many connected components are there in the object?
Connectivity (2D)
Two pixels are connected if their squares share:
• A common edge → 4-connectivity
• A common vertex → 8-connectivity

[Figures: two connected pixels; all pixels connected to x under 4-connectivity and under 8-connectivity]
Connectivity (2D)
Connected component:
A maximal set of pixels (voxels) in the object or background, such that any two pixels (voxels) are connected via a path of connected pixels (voxels).

[Figures: 8-connected object (1 component); 4-connected object (4 components)]
Connectivity (2D)
What is the component connected to the blue pixel?

8-connectivity 4-connectivity
Connectivity (2D)
Can we use arbitrary connectivity for the object and the background?

[Figures: object 8-connectivity (1 comp) with background 8-connectivity (1 comp); object 4-connectivity (4 comp) with background 4-connectivity (2 comp)]

Paradox: a closed curve does not disconnect the background, while an open curve does.
Connectivity (2D)
Use different connectivity for the object (O) and the background (B):
2D pixels: 4- and 8-connectivity respectively for O and B (or B and O).

[Figures: object 8-connectivity (1 comp) with background 4-connectivity (2 comp); object 4-connectivity (4 comp) with background 8-connectivity (1 comp)]
Finding Connected Components
The “flooding” algorithm:
Start from a seed pixel/voxel and expand the connected component.
Either do depth-first or breadth-first search (a LIFO stack or a FIFO queue); a runnable sketch follows the pseudocode below.

// Finding the connected component containing an object pixel p
1. Initialize:
   1. Create a result set S that contains only p.
   2. Create a Visited flag at each pixel, and set it to False except for p.
   3. Initialize a queue (or stack) Q that contains only p.
2. Repeat until Q is empty:
   1. Pop a pixel x from Q.
   2. For each unvisited object pixel y connected to x, add y to S, set its flag to visited, and push y to Q.
3. Output S.
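A minimal Python version of this flooding procedure (breadth-first with a FIFO queue and 4-connectivity; the boolean-mask representation of the object is an assumption):

```python
import numpy as np
from collections import deque

def flood_component(obj, p):
    """Return the connected component containing object pixel p (row, col)
    in the boolean object mask `obj`, using BFS over 4-connected neighbours."""
    h, w = obj.shape
    visited = np.zeros((h, w), bool)
    visited[p] = True
    S = [p]                       # result set
    Q = deque([p])                # FIFO queue -> breadth-first search
    while Q:
        r, c = Q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and obj[nr, nc] and not visited[nr, nc]:
                visited[nr, nc] = True
                S.append((nr, nc))
                Q.append((nr, nc))
    return S
```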
Finding Connected Components
Why have a “visited” flag?
• Avoid inserting the same pixel into the queue (stack) multiple times: only look for “unvisited” neighbors.
• Avoid searching the list S for visited pixels.

1. …
2. Repeat until Q is empty:
   1. Pop a pixel x from Q.
   2. For each unvisited object pixel y connected to x, add y to S, set its flag to visited, and push y to Q.
3. Output S.
Finding Connected Components
Labeling all components in an image:
Loop through each pixel (voxel). If it is not labeled, find its connected component, then label all pixels (voxels) in the component.

[Figures: 8-connected object (one component); 4-connected object (two components)]
Using Connected Components
• Pruning isolated islands from the main object
• Filling interior holes of the object

[Figures: take the largest 2 components of the object; invert the largest component of the background]
Summary
• Segmentation is the most essential step in most scene analysis and automatic pictorial pattern recognition problems.
• The choice of technique depends on the particular characteristics of the individual problem at hand.

73
