
Image Segmentation

Dr. Dipti Patra

Department of Electrical Engineering


National Institute of Technology
Rourkela
Image Segmentation

• Partitioning of an image into different constituent regions or objects depending on their properties.

• Isolates the object of interest in an application.

Why is segmentation difficult?

It can be difficult for many reasons:

• Non-uniform illumination
• No control of the environment
• Inadequate model of the object of interest
• Noise
Applications

• Automated target detection, e.g. infrared image segmentation in military applications.

• Automated industrial applications.

• Remote sensing: satellite image segmentation for detecting crop areas, vegetation, urban areas, etc.

• Bio-medical: MRI, ultrasound and CT image segmentation for diagnosis of diseases and disorders.
Segmentation

• Discontinuity: variation in pixel intensities; appropriate grouping of edge pixels.

• Similarity of pixels: neighbouring pixels with similar intensities are grouped into a region.
Discontinuity-based Image Segmentation

• Edge detection techniques are applied.

• A set of edge pixels is obtained.

• Some edge pixels correspond to boundaries of objects; some edge pixels are isolated.
Edges are significant local changes of intensity
in an image.

• Geometric events
– surface orientation (boundary)
discontinuities
– depth discontinuities
– color and texture discontinuities
• Non-geometric events
– illumination changes
– shadows
– inter-reflections
Edge Descriptors

Edge direction: perpendicular to the direction of maximum intensity change (i.e., the edge normal).
Edge strength: related to the local image contrast along the normal.
Edge position: the image position at which the edge is located.

Two edge pixels are linked based upon two criteria:

• Edge strength (magnitude of the gradient)

• Edge angle (direction of the gradient)

Boundary detection is done by examining the links between edge pixels.
Modeling Intensity Changes

• Step edge: the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side.
Modeling Intensity Changes
(cont’d)
• Ramp edge: a step edge where the intensity change is not instantaneous but occurs over a finite distance.
Modeling Intensity Changes
(cont’d)
• Ridge edge: the image intensity abruptly
changes value but then returns to the
starting value within some short distance
(i.e., usually generated by lines).
Modeling Intensity Changes
(cont’d)
• Roof edge: a ridge edge where the intensity change is not instantaneous but occurs over a finite distance (i.e., usually generated by the intersection of two surfaces).
Main Steps in Edge Detection

Smoothing: suppress as much noise as possible without destroying true edges.

Enhancement: apply differentiation to enhance the quality of edges (i.e., sharpening).
Main Steps in Edge Detection
Thresholding: determine which edge pixels should be discarded as noise and which should be retained (i.e., threshold the edge magnitude).

Localization: determine the exact edge location.
Sub-pixel resolution may be required in some applications to estimate the location of an edge to better than the spacing between pixels.
Edge Detection Using Derivatives

• Often, points that lie on an edge are detected by:

(1) Detecting the local maxima or minima of the first derivative.

(2) Detecting the zero-crossings of the second derivative.

[Figure: a 1-D intensity profile with its 1st and 2nd derivatives]
The first derivative of an image can be computed using the gradient.

Gradient of an image f(x,y) at location (x,y):

$\nabla f = \begin{bmatrix} G_x \\ G_y \end{bmatrix} = \begin{bmatrix} \partial f / \partial x \\ \partial f / \partial y \end{bmatrix}$

Magnitude of the gradient vector:

$|\nabla f| = \left( G_x^2 + G_y^2 \right)^{1/2}$

Direction of the gradient vector:

$\alpha(x, y) = \tan^{-1}\!\left( \frac{G_y}{G_x} \right)$

A commonly used approximation of the magnitude:

$|\nabla f| \approx \left| \frac{\partial f}{\partial x} \right| + \left| \frac{\partial f}{\partial y} \right|$
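As a rough illustration (not part of the original slides), the gradient magnitude and direction can be computed with NumPy central differences; np.gradient, np.hypot and np.arctan2 are standard NumPy functions.

```python
# Minimal sketch: gradient magnitude and direction of a greyscale image
# using central differences.
import numpy as np

def image_gradient(f):
    """Return (magnitude, direction) of the intensity gradient of image f."""
    f = f.astype(float)
    gy, gx = np.gradient(f)         # gy = df/dy (rows), gx = df/dx (columns)
    magnitude = np.hypot(gx, gy)    # sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(gy, gx)  # gradient angle in radians
    return magnitude, direction

# Example: a synthetic vertical step edge
img = np.zeros((5, 5)); img[:, 3:] = 255
mag, ang = image_gradient(img)
```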
An edge pixel at (x', y') is similar in magnitude to the edge pixel at (x, y) if

$\left| \nabla f(x, y) - \nabla f(x', y') \right| \le E$

where E is a non-negative threshold.

An edge pixel at (x', y') has an angle similar to that of the edge pixel at (x, y) if

$\left| \alpha(x, y) - \alpha(x', y') \right| \le A$

where A is a non-negative angle threshold.
Approximating Gradient

• ∂f/∂x and ∂f/∂y can be implemented using simple 1-D difference masks.

[Figure: difference masks giving good approximations at (x+1/2, y) and (x, y+1/2)]

Approximating Gradient

• A different approximation of the gradient gives a good approximation at (x+1/2, y+1/2).

• ∂f/∂x and ∂f/∂y can then be implemented using 2×2 difference masks.

[Figure: 2×2 difference masks]
Common Operators

• Gradient operator:

$g(m, n) = \sqrt{g_1^2(m, n) + g_2^2(m, n)}$

Examples: 1. Roberts operator

$g_1 = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \qquad g_2 = \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$
Common Operators

2. Prewitt operator and 3. Sobel operator

Vertical edges:

$\text{Prewitt: } \begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix} \qquad \text{Sobel: } \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}$

Horizontal edges:

$\text{Prewitt: } \begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix} \qquad \text{Sobel: } \begin{bmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & 2 & 1 \end{bmatrix}$
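An illustrative sketch (not from the slides) of applying the Sobel masks above with SciPy; scipy.ndimage.convolve is the assumed convolution routine.

```python
# Edge magnitude g(m,n) = sqrt(g1^2 + g2^2) using the Sobel masks.
import numpy as np
from scipy import ndimage

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # responds to vertical edges
SOBEL_Y = SOBEL_X.T                              # responds to horizontal edges

def sobel_edges(img):
    img = img.astype(float)
    gx = ndimage.convolve(img, SOBEL_X, mode='reflect')
    gy = ndimage.convolve(img, SOBEL_Y, mode='reflect')
    return np.hypot(gx, gy)
```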
Examples

[Figure: original image with the horizontal and vertical edge maps produced by the Prewitt operator]
Effect of Thresholding Parameters

[Figure: edge maps obtained with a small threshold vs. a large threshold]
Compass Operators

The gradient is estimated in eight directions using eight rotated masks $g_1, \ldots, g_8$:

$\begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & -1 \\ 0 & -1 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 0 & 0 \\ -1 & -1 & -1 \end{bmatrix}
\begin{bmatrix} 0 & 1 & 1 \\ -1 & 0 & 1 \\ -1 & -1 & 0 \end{bmatrix}$

$\begin{bmatrix} 1 & 0 & -1 \\ 1 & 0 & -1 \\ 1 & 0 & -1 \end{bmatrix}
\qquad\qquad
\begin{bmatrix} -1 & 0 & 1 \\ -1 & 0 & 1 \\ -1 & 0 & 1 \end{bmatrix}$

$\begin{bmatrix} 0 & -1 & -1 \\ 1 & 0 & -1 \\ 1 & 1 & 0 \end{bmatrix}
\begin{bmatrix} -1 & -1 & -1 \\ 0 & 0 & 0 \\ 1 & 1 & 1 \end{bmatrix}
\begin{bmatrix} -1 & -1 & 0 \\ -1 & 0 & 1 \\ 0 & 1 & 1 \end{bmatrix}$

The edge response is the maximum over all masks:

$g(m, n) = \max_k \left\{ \left| g_k(m, n) \right| \right\}$
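A rough sketch (not from the slides) of the compass response; only four distinct masks are convolved, since the other four are their negatives and give the same absolute response.

```python
import numpy as np
from scipy import ndimage

MASKS = [
    np.array([[ 1,  1,  1], [ 0, 0,  0], [-1, -1, -1]], float),  # N / S
    np.array([[ 1,  1,  0], [ 1, 0, -1], [ 0, -1, -1]], float),  # NW / SE
    np.array([[ 1,  0, -1], [ 1, 0, -1], [ 1,  0, -1]], float),  # W / E
    np.array([[ 0,  1,  1], [-1, 0,  1], [-1, -1,  0]], float),  # NE / SW
]

def compass_edges(img):
    img = img.astype(float)
    responses = [np.abs(ndimage.convolve(img, m, mode='reflect')) for m in MASKS]
    return np.max(responses, axis=0)   # g(m,n) = max_k |g_k(m,n)|
```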
Examples

[Figure: edge map obtained with the compass operator]
Edge Detection Steps Using Gradient

[Figure: the gradient-based edge detection pipeline with the Prewitt operator; |Gx| + |Gy| is used instead of the true magnitude because the square root is costly]

Example

[Figure: gradient magnitude image and the result after thresholding]
Practical Issues
• Noise suppression-localization tradeoff.
– Smoothing depends on mask size (e.g.,
depends on σ for Gaussian filters).
– Larger mask sizes reduce noise, but
worsen localization (i.e., add uncertainty to
the location of the edge) and vice versa.
[Figure: results with a smaller mask vs. a larger mask]
Effect of Smoothing on Derivatives

[Figure: a noisy 1-D signal and its derivative; without smoothing it is unclear where the edge is]
Effect of Smoothing on Derivatives (cont'd)

[Figure: the smoothed signal and its derivative]

Combine Smoothing with Differentiation

Since d/dx (f ∗ g) = f ∗ (dg/dx), smoothing and differentiation can be combined into a single convolution with the derivative of the smoothing filter (i.e., this saves one operation).
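An illustrative sketch (an assumption, not the slides' code): smoothing and differentiation in one step via Gaussian-derivative filtering with scipy.ndimage.gaussian_filter.

```python
import numpy as np
from scipy import ndimage

def smoothed_gradient_magnitude(img, sigma=1.4):
    img = img.astype(float)
    gx = ndimage.gaussian_filter(img, sigma, order=[0, 1])  # derivative of Gaussian along x
    gy = ndimage.gaussian_filter(img, sigma, order=[1, 0])  # derivative of Gaussian along y
    return np.hypot(gx, gy)
```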
Practical Issues (cont'd)

• Choice of threshold.

[Figure: gradient magnitude thresholded with a low threshold vs. a high threshold]

Practical Issues (cont'd)

• Edge thinning and linking.
Criteria for Optimal Edge
Detection
• Good detection
– Minimize the probability of false positives (i.e.,
spurious edges).
– Minimize the probability of false negatives (i.e.,
missing real edges).

• Good localization
– Detected edges must be as close as possible to the
true edges.

• Single response
– Minimize the number of local maxima around the true
edge.
Canny edge detector
• Canny has shown that the first derivative of the
Gaussian closely approximates the operator that
optimizes the product of signal-to-noise ratio and
localization.
(i.e., the analysis is based on "step edges" corrupted by "Gaussian noise")

J. Canny, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Analysis and Machine Intelligence, 8(6):679-698, 1986.
• Local edge linking approach: a local neighbourhood is used (e.g., the Canny edge detector).

• Global edge linking approach: each edge pixel is linked with other edge pixels across the whole image (e.g., the Hough transform).
Steps of Canny edge detector

[Figures: the smoothing step, followed by computation of the gradient magnitude (and direction)]
Canny Edge Detector

• Low error rate of detection
– Should match human perception results well
• Good localization of edges
– The distance between actual edges in an image and the edges found by a computational algorithm should be minimized
• Single response
– The algorithm should not return multiple edge pixels when only a single one exists
Flow-chart of Canny Edge Detector

Original image
→ Smoothing by Gaussian convolution
→ Differential operators along the x and y axes
→ Non-maximum suppression (finds peaks in the image gradient)
→ Hysteresis thresholding (locates edge strings)
→ Edge map
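A usage sketch of this pipeline with OpenCV (an assumption: OpenCV is installed and 'image.png' is a placeholder filename); cv2.GaussianBlur performs the smoothing and cv2.Canny performs the remaining steps, including hysteresis thresholding.

```python
import cv2

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
smoothed = cv2.GaussianBlur(img, (5, 5), 1.4)                # Gaussian convolution
edges = cv2.Canny(smoothed, threshold1=50, threshold2=100)   # hysteresis thresholds tl, th
cv2.imwrite('edge_map.png', edges)                           # binary edge map
```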
Canny Edge Detector Example

[Figure: original image; vertical edges; horizontal edges; norm of the gradient; after thresholding; after thinning]
Non-maxima suppression

• Check whether the gradient magnitude at pixel location (i, j) is a local maximum along the gradient direction.

Warning: this requires checking the interpolated values p and r at the two neighbouring positions along the gradient direction through (i, j).
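A simplified sketch (an assumption: gradient directions are quantized into four bins instead of interpolating p and r exactly, as the slides describe).

```python
import numpy as np

def non_max_suppression(mag, ang):
    """Keep only pixels that are local maxima of `mag` along the gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    deg = np.rad2deg(ang) % 180                      # fold directions into [0, 180)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = deg[i, j]
            if a < 22.5 or a >= 157.5:               # roughly horizontal gradient
                p, r = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                           # roughly 45 degrees
                p, r = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                          # roughly vertical gradient
                p, r = mag[i - 1, j], mag[i + 1, j]
            else:                                    # roughly 135 degrees
                p, r = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= p and mag[i, j] >= r:    # local maximum along the direction
                out[i, j] = mag[i, j]
    return out
```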
Hysteresis thresholding

• Standard thresholding:
- can only select "strong" edges;
- does not guarantee "continuity".

[Figure: gradient magnitude thresholded with a low threshold vs. a high threshold]
Hysteresis thresholding (cont'd)

• Hysteresis thresholding uses two thresholds:
- a low threshold tl
- a high threshold th (usually, th = 2 tl)

Pixels above th are "strong" edges; pixels between tl and th are "maybe" edges.

• For "maybe" edges, decide in favour of an edge if a neighbouring pixel is a strong edge.
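A minimal sketch (not the slides' implementation): hysteresis thresholding that keeps every "maybe" pixel connected to at least one strong pixel, using connected-component labelling from scipy.ndimage.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(mag, t_low, t_high):
    candidates = mag >= t_low                         # weak or strong pixels
    strong = mag >= t_high
    labels, n = ndimage.label(candidates, structure=np.ones((3, 3)))  # 8-connected components
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True            # components containing a strong pixel
    keep[0] = False                                   # background label
    return keep[labels]
```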
Hysteresis thresholding / Edge Linking

Idea: use a high threshold to start edge curves and a low threshold to continue them.

Use the edge "direction" for linking edges.

Hysteresis Thresholding / Edge Linking (cont'd)

[Figure: edge linking results using tl and th]

Note: large gaps are still difficult to bridge (i.e., more sophisticated algorithms are required).
Second Derivative in 2D: Laplacian

[Figure: the Laplacian and its discrete mask variations]

Laplacian - Example

[Figure: Laplacian-filtered image; edges are detected as zero-crossings]
Properties of Laplacian

• It is an isotropic operator.
• It is cheaper to implement than the
gradient (i.e., one mask only).
• It does not provide information about edge
direction.
• It is more sensitive to noise (i.e.,
differentiates twice).
Laplacian of Gaussian (LoG)
(Marr-Hildreth operator)
• To reduce the noise effect, the image is first smoothed.

• When the filter chosen is a Gaussian, we call it the LoG edge detector.

• σ controls the amount of smoothing.

• It can be shown that:

$\nabla^2 G(x, y) = \frac{x^2 + y^2 - 2\sigma^2}{\sigma^4}\, e^{-\frac{x^2 + y^2}{2\sigma^2}}$

(its graph is the inverted LoG, the familiar "Mexican hat" shape)
Laplacian of Gaussian (LoG) (cont'd)

[Figure: the (inverted) LoG kernel, the filtered image, and its zero-crossings]
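A quick sketch (not from the slides): LoG filtering with scipy.ndimage.gaussian_laplace followed by a simple zero-crossing test.

```python
import numpy as np
from scipy import ndimage

def log_zero_crossings(img, sigma=2.0):
    log = ndimage.gaussian_laplace(img.astype(float), sigma)
    # A pixel is marked when the LoG response changes sign against a neighbour.
    zc = np.zeros(log.shape, dtype=bool)
    zc[:-1, :] |= (log[:-1, :] * log[1:, :]) < 0   # sign change downwards
    zc[:, :-1] |= (log[:, :-1] * log[:, 1:]) < 0   # sign change to the right
    return zc
```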
Decomposition of LoG

• It can be shown that the LoG kernel can be written as a sum of separable terms, so the 2D LoG convolution can be implemented using four 1D convolutions.

Decomposition of LoG (cont'd)

[Figure: the sequence of 1D convolution steps]
Difference of Gaussians (DoG)
• The Laplacian of Gaussian can be approximated by the difference between two Gaussian functions of different widths:

$\nabla^2 G_{\sigma} \approx G_{\sigma_1} - G_{\sigma_2}, \quad \sigma_1 < \sigma_2$

[Figure: the actual LoG profile and its DoG approximation]

Difference of Gaussians (DoG) (cont'd)

[Figure: two Gaussian-smoothed images (a) and (b), and their difference (b) - (a)]
Gradient vs LoG
• The gradient works well when the image contains sharp intensity transitions and low noise.
• Zero-crossings of the LoG offer better localization, especially when the edges are not very sharp.

[Figure: responses of both operators on a step edge and on a ramp edge]
Gradient vs LoG (cont'd)

[Figure: the LoG behaves poorly at corners]
Marr and Hildreth's Method
• Edges are scale-dependent: a different edge map can be generated at each scale.
• Scale-space representation: the coarse-scale image f(x, y; s) is obtained from the fine-scale image f(x, y; 0) by convolution with a Gaussian kernel of width s:

$f(x, y; s) = f(x, y; 0) * g(x, y; s)$

$g(x, y; s) = \frac{1}{2\pi s} \exp\!\left( -\frac{x^2 + y^2}{2s} \right)$
Scale-Space Edge Detection Examples

[Figure: edge maps at a fine scale and at a coarse scale]
Image Segmentation based on property of similarity

[Figure: an image partitioned into regions f1, f2, f3, f4]

Extract the regions or objects of interest based on their properties of similarity.
Segmentation based on Thresholding
• Based on similarity of intensity values.
• Above threshold → binary 1.
• Below threshold → binary 0.
• Threshold selection, e.g. with a dark object and a bright background:

$f(x, y) < T$ : object point
$f(x, y) \ge T$ : background point

[Figure: histogram (number of pixels vs. grey values 0-255) with the dark object mode and the bright background mode separated by a threshold T]
Threshold operator
• Factors on which T depends:

$T = T\{ x, y, p(x, y), f(x, y) \}$

• the grey level at the point: f(x, y)
• the position of the point, varying from region to region: (x, y)
• a local property of the point: p(x, y)

• A thresholded image is defined as

$g(x, y) = \begin{cases} 1 & \text{if } f(x, y) > T \\ 0 & \text{if } f(x, y) \le T \end{cases}$

where f(x, y) is the intensity at (x, y) and p(x, y) is some local property at (x, y).
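A minimal sketch (not from the slides) of global thresholding with NumPy:

```python
import numpy as np

def threshold_image(f, T):
    """g(x,y) = 1 where f(x,y) > T, else 0."""
    return (f > T).astype(np.uint8)
```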
Threshold Types

• Global threshold: T depends only on f(x, y).

• Local threshold: T is a function of both p(x, y) and f(x, y).

• Dynamic threshold: T is a function of x, y, p(x, y) and f(x, y).

• In a controlled environment (proper lighting conditions), global thresholding is successful.

• In an uncontrolled environment, local and dynamic thresholding are successful.
Picking the threshold is the hard part

• A human operator decides the threshold.
• Use the mean grey level of the image.
• Choose T so that a fixed proportion of pixels is detected (set to 1) by the thresholding operation.
• Analyze the histogram of the image.
Dynamic (Adaptive) Thresholding

A more complex thresholding algorithm uses a spatially varying threshold. This approach is very useful for compensating the effects of non-uniform illumination. If T depends on the coordinates x and y, this is referred to as dynamic thresholding or adaptive thresholding.

Preprocessing the image to remove noise or other non-uniformities can improve the performance of the thresholding.

A technique which often provides better results is to use only edge points when creating the grey-level histogram.
Adaptive thresholding - how it works?
There are two main approaches to finding the threshold:
(i) the Chow and Kaneko approach and
(ii) local thresholding.
The assumption behind both methods is that smaller image
regions are more likely to have approximately uniform
illumination, thus being more suitable for thresholding.
Chow and Kaneko divide an image into an array of overlapping
subimages and then find the optimum threshold for each
subimage by investigating its histogram. The threshold for each
single pixel is found by interpolating the results of the
subimages. The drawback of this method is that it is computationally expensive and, therefore, not appropriate for real-time applications.
ADAPTIVE THRESHOLDING

[Figure: the image divided into subimages, each with its own threshold T11, T12, T13, T21, T22, T23, ...]
Adaptive thresholding - Local thresholding
An alternative approach to find the local threshold is to statistically
examine the intensity values of the local neighborhood of each
pixel. The statistic which is most appropriate depends largely on
the input image.

The size of the neighborhood has to be large enough to cover


sufficient foreground and background pixels, otherwise a poor
threshold is chosen. On the other hand, choosing regions which are
too large can violate the assumption of approximately uniform
illumination. This method is less computationally intensive than
the Chow and Kaneko approach and produces good results for
some applications.
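A rough sketch (an assumption, not the slides' algorithm): local thresholding of each pixel against the mean intensity of its neighbourhood, using scipy.ndimage.uniform_filter; the window size and offset are hypothetical tuning parameters.

```python
import numpy as np
from scipy import ndimage

def local_mean_threshold(img, size=25, offset=5):
    img = img.astype(float)
    local_mean = ndimage.uniform_filter(img, size=size)  # mean over a size x size window
    return (img > local_mean - offset).astype(np.uint8)  # 1 = foreground, 0 = background
```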
Optimal thresholding
• We will consider in detail a minimisation method for determining the threshold.

• Idealized object/background image histogram:

[Figure: bimodal histogram h(i) with the two modes separated at threshold T]
Optimal thresholding

• Any threshold separates the histogram into 2 groups, each group having its own statistics (mean, variance).
• The homogeneity of each group is measured by the within-group variance.
• The optimum threshold is the threshold which minimizes the within-group variance, thus maximizing the homogeneity of each group.
Optimal Thresholding
• p1(z): pdf of the dark region.
• p2(z): pdf of the bright region.

Overall (mixture) probability density:

$p(z) = P_1\, p_1(z) + P_2\, p_2(z)$

• $P_1 + P_2 = 1$

[Figure: histogram h(i) with the object mode p1(z) and the background mode p2(z), separated at T]
Optimal Thresholding

Dark region: object
Bright region: background

mean of object < mean of background, i.e. μ1 < μ2

Aim: T should be selected such that all pixels with a level below T are object pixels.
Optimal Threshold:
• What is the probability of erroneously classifying a point?

• Classifying a background point as an object point:

$E_1(T) = \int_{-\infty}^{T} p_2(z)\, dz$

• Classifying an object point as a background point:

$E_2(T) = \int_{T}^{\infty} p_1(z)\, dz$

• The overall probability of error is

$E(T) = P_2 E_1(T) + P_1 E_2(T)$

• The threshold value that minimizes the error satisfies

$\frac{\partial E(T)}{\partial T} = 0 \;\Rightarrow\; P_1\, p_1(T) = P_2\, p_2(T)$

• If p1(z) and p2(z) are Gaussian PDFs:

$p(z) = \frac{P_1}{\sqrt{2\pi}\,\sigma_1} \exp\!\left( -\frac{(z - \mu_1)^2}{2\sigma_1^2} \right) + \frac{P_2}{\sqrt{2\pi}\,\sigma_2} \exp\!\left( -\frac{(z - \mu_2)^2}{2\sigma_2^2} \right)$
• The general solution is a quadratic in T:

$A T^2 + B T + C = 0$

where

$A = \sigma_1^2 - \sigma_2^2$

$B = 2\left( \mu_1 \sigma_2^2 - \mu_2 \sigma_1^2 \right)$

$C = \sigma_1^2 \mu_2^2 - \sigma_2^2 \mu_1^2 + 2 \sigma_1^2 \sigma_2^2 \ln\!\left( \frac{\sigma_2 P_1}{\sigma_1 P_2} \right)$

• If the variances are equal ($\sigma_1 = \sigma_2 = \sigma$):

$T = \frac{\mu_1 + \mu_2}{2} + \frac{\sigma^2}{\mu_1 - \mu_2} \ln\!\left( \frac{P_2}{P_1} \right)$

• If, in addition, P1 = P2 (the probabilities of the two levels are equal):

$T = \frac{\mu_1 + \mu_2}{2}$
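A sketch (an assumption: the class parameters μ, σ, P are already known or estimated) of solving the quadratic above for the optimal threshold:

```python
import numpy as np

def optimal_gaussian_threshold(mu1, sig1, P1, mu2, sig2, P2):
    A = sig1**2 - sig2**2
    B = 2 * (mu1 * sig2**2 - mu2 * sig1**2)
    C = (sig1**2 * mu2**2 - sig2**2 * mu1**2
         + 2 * sig1**2 * sig2**2 * np.log(sig2 * P1 / (sig1 * P2)))
    if np.isclose(A, 0.0):                      # equal variances: single closed-form solution
        return 0.5 * (mu1 + mu2) + sig1**2 / (mu1 - mu2) * np.log(P2 / P1)
    roots = np.roots([A, B, C])
    # keep the root lying between the two class means
    return next(t.real for t in roots if min(mu1, mu2) <= t.real <= max(mu1, mu2))
```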
Histogram: approximation of the actual pdf
p(z) depends upon p1(z), p2(z), μ1, μ2, σ1, σ2.

These parameters should be chosen so that the mean square error between the mixture p(z) and the observed histogram h(z) is minimized.

The mean square error between the mixture density and the image histogram is

$e_{ms} = \frac{1}{n} \sum_{i=1}^{n} \left[ p(z_i) - h(z_i) \right]^2$

where n is the number of points in the histogram.
Optimal thresholding
– Minimisation of the within-group variance
– Computer and Robot Vision, Haralick & Shapiro, volume 1, page 20

• Let group o (object) be those pixels with greylevel <= T.
• Let group b (background) be those pixels with greylevel > T.
• The prior probability of group o is po(T).
• The prior probability of group b is pb(T).
• The following expressions can easily be derived for the prior probabilities of object and background:

$p_o(T) = \sum_{i=0}^{T} P(i)$

$p_b(T) = \sum_{i=T+1}^{255} P(i)$

$P(i) = h(i) / N$

• where h(i) is the histogram of an N-pixel image.
• The mean and variance of each group are as follows:

$\mu_o(T) = \sum_{i=0}^{T} i\, P(i) \,/\, p_o(T)$

$\mu_b(T) = \sum_{i=T+1}^{255} i\, P(i) \,/\, p_b(T)$

$\sigma_o^2(T) = \sum_{i=0}^{T} \left[ i - \mu_o(T) \right]^2 P(i) \,/\, p_o(T)$

$\sigma_b^2(T) = \sum_{i=T+1}^{255} \left[ i - \mu_b(T) \right]^2 P(i) \,/\, p_b(T)$
• The within-group variance is defined as:

$\sigma_W^2(T) = \sigma_o^2(T)\, p_o(T) + \sigma_b^2(T)\, p_b(T)$

• We determine the optimum T by minimizing this expression with respect to T.
– This only requires 256 comparisons for an 8-bit greylevel image.
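A sketch of this exhaustive search (not from the slides; the discrete formulation above is equivalent to Otsu's method):

```python
import numpy as np

def optimal_threshold(img):
    hist = np.bincount(img.ravel().astype(np.uint8), minlength=256).astype(float)
    P = hist / hist.sum()                         # P(i) = h(i) / N
    i = np.arange(256)
    best_T, best_var = 0, np.inf
    for T in range(1, 255):
        po, pb = P[:T + 1].sum(), P[T + 1:].sum()
        if po == 0 or pb == 0:
            continue
        mu_o = (i[:T + 1] * P[:T + 1]).sum() / po
        mu_b = (i[T + 1:] * P[T + 1:]).sum() / pb
        var_o = ((i[:T + 1] - mu_o) ** 2 * P[:T + 1]).sum() / po
        var_b = ((i[T + 1:] - mu_b) ** 2 * P[T + 1:]).sum() / pb
        within = var_o * po + var_b * pb          # sigma_W^2(T)
        if within < best_var:
            best_T, best_var = T, within
    return best_T
```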
Greylevel thresholding

[Figure: histogram h(i) and the within-group variance as a function of T; the optimum threshold Topt lies at the minimum of the within-group variance]
Greylevel thresholding

[Figure: low-noise image and the result of thresholding at T = 124]

Greylevel thresholding

[Figure: another image and the result of thresholding at T = 124]
• A high level of pixel mis-classification is noticeable.

• This is typical performance for thresholding.

– The extent of pixel mis-classification is determined by the overlap between the object and background histograms.
Greylevel thresholding

[Figure: overlapping object and background densities p(x) with means μo and μb; any threshold T mis-classifies some pixels]

Greylevel thresholding

[Figure: the same densities with greater overlap between object and background]
• It is easy to see that, in both cases, for any value of the threshold, object pixels will be mis-classified as background and vice versa.

• For greater histogram overlap, the pixel mis-classification is obviously greater.
– We could even quantify the probability of error in terms of the means and standard deviations of the object and background histograms.
Greylevel clustering
• Consider an idealized object/background histogram.

[Figure: bimodal histogram with the object and background modes centred on cluster centres c1 and c2]
Greylevel clustering

• Clustering tries to separate the histogram into 2 groups.

• The groups are defined by two cluster centres c1 and c2.
– Greylevels are classified according to the nearest cluster centre.
Greylevel clustering

• A nearest-neighbour clustering algorithm allows us to perform a greylevel segmentation using clustering.

– This is a simple case of the more general and widely used K-means clustering.

– It is a simple iterative algorithm with known convergence properties.
Greylevel clustering

• Given a set of greylevels

$g(1), g(2), \ldots, g(N)$

• we can partition this set into two groups

$g_1(1), g_1(2), \ldots, g_1(N_1)$

$g_2(1), g_2(2), \ldots, g_2(N_2)$
Greylevel clustering

• Compute the local means of each group:

$c_1 = \frac{1}{N_1} \sum_{i=1}^{N_1} g_1(i)$

$c_2 = \frac{1}{N_2} \sum_{i=1}^{N_2} g_2(i)$
Greylevel clustering
• Re-define the new groupings:

$\left| g_1(k) - c_1 \right| < \left| g_1(k) - c_2 \right|, \quad k = 1 \ldots N_1$

$\left| g_2(k) - c_2 \right| < \left| g_2(k) - c_1 \right|, \quad k = 1 \ldots N_2$

• In other words, all grey levels in set 1 are nearer to cluster centre c1 and all grey levels in set 2 are nearer to cluster centre c2.
Greylevel clustering

• But we have a chicken-and-egg situation.

– The problem with the above definition is that each group mean is defined in terms of the partitions and vice versa.

– The solution is to define an iterative algorithm and worry about the convergence of the algorithm later.
Greylevel clustering

• The iterative algorithm is as follows:

Initialize the label of each pixel randomly

Repeat
    c1 = mean of pixels assigned to the object label
    c2 = mean of pixels assigned to the background label

    Compute partition g1(1), g1(2), ..., g1(N1)
    Compute partition g2(1), g2(2), ..., g2(N2)

Until no pixel labelling changes
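A minimal sketch (not from the slides) of this two-class K-means on grey levels:

```python
import numpy as np

def two_means_segmentation(img, max_iter=100):
    g = img.astype(float).ravel()
    labels = np.random.randint(0, 2, g.size)                        # random initial labelling
    c1 = c2 = g.mean()
    for _ in range(max_iter):
        c1 = g[labels == 0].mean() if np.any(labels == 0) else g.min()
        c2 = g[labels == 1].mean() if np.any(labels == 1) else g.max()
        new_labels = (np.abs(g - c2) < np.abs(g - c1)).astype(int)   # nearest cluster centre
        if np.array_equal(new_labels, labels):                       # no pixel labelling changes
            break
        labels = new_labels
    return labels.reshape(img.shape), (c1, c2)
```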
Greylevel clustering

• Outline proof of algorithm convergence.

– Define a 'cost function' at iteration r:

$E^{(r)} = \frac{1}{N_1} \sum_{i=1}^{N_1} \left[ g_1^{(r)}(i) - c_1^{(r-1)} \right]^2 + \frac{1}{N_2} \sum_{i=1}^{N_2} \left[ g_2^{(r)}(i) - c_2^{(r-1)} \right]^2$

– $E^{(r)} > 0$
Greylevel clustering

• Now update the cluster centres:

$c_1^{(r)} = \frac{1}{N_1} \sum_{i=1}^{N_1} g_1^{(r)}(i)$

$c_2^{(r)} = \frac{1}{N_2} \sum_{i=1}^{N_2} g_2^{(r)}(i)$

• Finally, update the cost function:

$E_1^{(r)} = \frac{1}{N_1} \sum_{i=1}^{N_1} \left[ g_1^{(r)}(i) - c_1^{(r)} \right]^2 + \frac{1}{N_2} \sum_{i=1}^{N_2} \left[ g_2^{(r)}(i) - c_2^{(r)} \right]^2$
Greylevel clustering

• It is easy to show that

$E^{(r+1)} \le E_1^{(r)} \le E^{(r)}$

• Since $E^{(r)} > 0$, we conclude that the algorithm must converge,
– but what does the algorithm converge to?
Greylevel clustering

• E1 is simply the sum of the variances within each cluster, which is minimised at convergence.
– This gives sensible results for well-separated clusters.
– Similar performance to thresholding.
Greylevel clustering

[Figure: greylevel histogram partitioned into groups g1 and g2 around the cluster centres c1 and c2]
Let there be a small object against a vast background.

Edge detection → estimate the boundary pixels → compute the histogram of the edge pixels.

[Figure: histogram of the whole image (number of pixels vs. grey levels)]
Three regions:

1. High gradient = non-uniform intensity → boundary

2. Low gradient = uniform intensity →
   1. object
   2. background

Across a boundary the second derivative ∇²f changes sign:
• change from +ve to -ve on one side of the edge,
• change from -ve to +ve on the other side,
• change = 0 away from edges.

$s(x, y) = \begin{cases} 0 & \text{if } \nabla f < T \\ + & \text{if } \nabla f \ge T \text{ and } \nabla^2 f \ge 0 \\ - & \text{if } \nabla f \ge T \text{ and } \nabla^2 f < 0 \end{cases}$

Three distinct gray levels:

0 = all pixels that are not on an edge.
+ = all pixels on the dark side of an edge.
- = all pixels on the light side of an edge.

The boundary pixels having value + and - are estimated and their histogram is computed.

The threshold of this histogram is used to classify the edge pixels into object and background pixels.

This information is used to generate a segmented binary image.
Applications
• Thresholding is one of the most powerful and important tools for image segmentation.

• It has the advantages of small storage requirements and fast processing speed.

• The simplicity and speed of the thresholding algorithm make it one of the most widely used algorithms in automated systems ranging from medical applications to industrial manufacturing.
Region-Oriented Image Segmentation:

Let R be the region describing the complete image.

R1, R2, R3, ..., Rn are n different partitions (regions) of R.

[Figure: an image R partitioned into regions R1, R2, R3, R4, R5]
Properties

1. Completeness: $\bigcup_{i=1}^{n} R_i = R$ (the image is fully divided into different regions).

2. $R_i$ is a connected region: all pixels in a region are connected.

3. $R_i \cap R_j = \varnothing$ for all i, j where i ≠ j.

4. $P(R_i) = \text{TRUE}$ for i = 1, 2, 3, ..., n.

5. $P(R_i \cup R_j) = \text{FALSE}$ for i ≠ j.
Region Segmentation

• Region growing by pixel aggregation
• Region split and merge
Region Growing
1. Define the set of seed points.
2. Define the similarity function (predicate) P(Ri).
3. Append the neighbouring pixels based upon the similarity function (a code sketch follows the example below).

Example, with P(Ri): pixels within Ri must not differ in intensity by more than 3:

1 0 0 5 6 7
1 2 1 7 6 5
0 1 2 1 4 6
2 1 0 6 7 7
2 2 1 4 5 6
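An illustrative sketch (an assumption, not the slides' exact algorithm; for simplicity the predicate compares each candidate pixel with the seed value rather than with all pixels already in the region):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=3):
    """Grow a region from `seed`, appending 4-connected neighbours whose
    intensity differs from the seed value by at most `tol`."""
    img = img.astype(int)
    region = np.zeros(img.shape, dtype=bool)
    region[seed] = True
    queue = deque([seed])
    seed_value = img[seed]
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):      # 4-connectivity
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not region[ny, nx]
                    and abs(img[ny, nx] - seed_value) <= tol):
                region[ny, nx] = True
                queue.append((ny, nx))
    return region

# Example on the array shown above, growing from the top-left pixel:
grid = np.array([[1, 0, 0, 5, 6, 7],
                 [1, 2, 1, 7, 6, 5],
                 [0, 1, 2, 1, 4, 6],
                 [2, 1, 0, 6, 7, 7],
                 [2, 2, 1, 4, 5, 6]])
mask = region_grow(grid, seed=(0, 0), tol=3)
```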
Difficulties:

1. How to choose the seed points.
2. What similarity function to use.

How do we choose the seed(s) in practice?

• It depends on the nature of the problem.

• If targets need to be detected using infrared images, for example, choose the brightest pixel(s).

• Without a-priori knowledge, compute the histogram and choose the grey-level values corresponding to the strongest peaks.
How do we choose the similarity criteria (predicate)?

• The homogeneity predicate can be based on any characteristic of the regions in the image, such as:

• average intensity
• variance
• color
• texture
Region Merging
• Region merging operations eliminate false
boundaries and spurious regions by merging
adjacent regions that belong to the same
object.
• Merging schemes begin with a partition
satisfying condition (4) (e.g., regions
produced using thresholding).

• Then, they proceed to fulfill condition (5) by gradually merging adjacent image regions.
Region Merging (cont’d)
How to determine region similarity?
(1) Based on the gray values of the regions –
examples:
– Compare their mean intensities.
– Use surface fitting to determine whether the
regions may be approximated by one surface.
– Use hypothesis testing to judge the similarity of adjacent regions.

(2) Based on the weakness of the boundaries between the regions.
Region merging using hypothesis testing
• This approach considers whether or not to merge
adjacent regions based on the probability that they
will have the same statistical distribution of intensity
values.

• Assume that the gray-level values in an image region


are drawn from a Gaussian distribution
– Parameters can be estimated using the sample mean/variance of each region (R1, R2).
Region merging using hypothesis testing

• Given two regions R1 and R2 with m1 and m2 pixels respectively, there are two possible hypotheses:

H0: Both regions belong to the same object.
The intensities are all drawn from a single Gaussian distribution N(μ0, σ0).

H1: The regions belong to different objects.
The intensities of each region are drawn from separate Gaussian distributions N(μ1, σ1) and N(μ2, σ2).
Region merging using hypothesis testing

• The joint probability density under H0, assuming all pixels are independently drawn, is the product of the single Gaussian N(μ0, σ0) evaluated at every pixel of the two regions.

• The joint probability density under H1 is the product of the two separate Gaussians over their respective regions.
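Written out under the stated Gaussian assumptions and with maximum-likelihood estimates of the parameters (a standard-form reconstruction, not copied from the slides), the two densities reduce to:

\[
p(\text{pixels} \mid H_0) = \prod_{i=1}^{m_1 + m_2} \frac{1}{\sqrt{2\pi}\,\sigma_0} \exp\!\left( -\frac{(g_i - \mu_0)^2}{2\sigma_0^2} \right) = \frac{e^{-(m_1 + m_2)/2}}{\left( \sqrt{2\pi}\,\sigma_0 \right)^{m_1 + m_2}}
\]

\[
p(\text{pixels} \mid H_1) = \frac{e^{-m_1/2}}{\left( \sqrt{2\pi}\,\sigma_1 \right)^{m_1}} \cdot \frac{e^{-m_2/2}}{\left( \sqrt{2\pi}\,\sigma_2 \right)^{m_2}}
\]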
Region merging using hypothesis testing

• The likelihood ratio is defined as the ratio of the probability densities under the two hypotheses.
• If the likelihood ratio is below a threshold value, there is strong evidence that there is only one region, and the two regions may be merged.
Region merging by removing weak edges

• The idea is to combine two regions if the boundary between them is weak.

• A weak boundary is one for which the intensities on either side differ by less than some threshold.

• The relative lengths of the weak boundary and the region boundaries must also be considered.
Region merging by removing weak edges

• Approach 1: merge adjacent regions R1 and R2 if the weak part of their common boundary is long relative to the perimeter of the smaller region (see the condition below), where:

W is the length of the weak part of the boundary,
S = min(S1, S2) is the minimum of the perimeters of the two regions.
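A common form of this condition (an assumption; τ1 is a user-chosen threshold) is:

\[
\frac{W}{S} \ge \tau_1, \qquad S = \min(S_1, S_2)
\]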
Region merging by removing weak edges

• Approach 2: merge adjacent regions R1 and R2 if the weak part of the boundary is long relative to the common boundary itself (see the condition below), where:

W is the length of the weak part of the boundary,
S is the length of the common boundary between R1 and R2.
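Correspondingly (again an assumed form, with a second threshold τ2):

\[
\frac{W}{S} \ge \tau_2
\]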
Region Splitting
• Region splitting operations add missing
boundaries by splitting regions that contain
parts of different objects.
• Splitting schemes begin with a partition
satisfying condition (5), for example, the
whole image.

• Then, they proceed to satisfy condition (4) by gradually splitting image regions.
Region Splitting
• Two main difficulties in implementing
this approach:
– Deciding when to split a region (e.g., use
variance, surface fitting).
– Deciding how to split a region.
Region Splitting and Merging

• Splitting or merging might not produce good results when applied separately.
• Better results can be obtained by interleaving merge and split operations.
• This strategy takes a partition that possibly satisfies neither condition (4) nor (5), with the goal of producing a segmentation that satisfies both conditions.
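A compact sketch of the idea (an assumption, not the slides' algorithm): a quadtree split on a homogeneity test, followed by a simplified merge step that groups leaves with similar means (a full implementation would also check region adjacency).

```python
import numpy as np

def quadtree_split(img, y, x, h, w, max_std=10.0, min_size=8, leaves=None):
    """Recursively split a block until it is homogeneous (std <= max_std)."""
    if leaves is None:
        leaves = []
    block = img[y:y + h, x:x + w].astype(float)
    if block.std() <= max_std or min(h, w) <= min_size:
        leaves.append((y, x, h, w, block.mean()))        # homogeneous leaf
    else:
        h2, w2 = h // 2, w // 2
        quadtree_split(img, y,      x,      h2,     w2,     max_std, min_size, leaves)
        quadtree_split(img, y,      x + w2, h2,     w - w2, max_std, min_size, leaves)
        quadtree_split(img, y + h2, x,      h - h2, w2,     max_std, min_size, leaves)
        quadtree_split(img, y + h2, x + w2, h - h2, w - w2, max_std, min_size, leaves)
    return leaves

def split_and_merge(img, max_std=10.0, tol=10.0):
    leaves = quadtree_split(img, 0, 0, *img.shape, max_std=max_std)
    out = np.zeros(img.shape, dtype=int)
    centres = []                                         # one representative mean per label
    for y, x, h, w, mean in leaves:
        for lbl, c in enumerate(centres):                # merge step (mean similarity only)
            if abs(mean - c) <= tol:
                out[y:y + h, x:x + w] = lbl
                break
        else:
            centres.append(mean)
            out[y:y + h, x:x + w] = len(centres) - 1
    return out
```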
Region Splitting and Merging

[Figure: split-and-merge example]

[Figure: segmentation results using thresholding vs. split and merge]