LCE/7.5.1/RC 01
TEACHING NOTES
Department: ELECTRONICS & COMMUNICATION ENGINEERING
Unit: VII Date:
Topic name: Image Segmentation & Edge Detection No. of marks allotted by JNTUK:
Books referred: 01. Digital Image Processing by R C Gonzalez and R E Woods
02. www.wikipedia.org
03. www.google.com
Image Segmentation:
In computer vision, segmentation refers to the process of partitioning a digital image into
multiple segments (sets of pixels). The goal of segmentation is to simplify and/or change the
representation of an image into something that is more meaningful and easier to analyze. Image
segmentation is typically used to locate objects and boundaries in images. More precisely, image
segmentation is the process of assigning a label to every pixel in an image such that pixels with the
same label share certain visual characteristics.
The result of image segmentation is a set of segments that collectively cover the entire
image, or a set of contours extracted from the image. Each of the pixels in a region is similar with
respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent
regions are significantly different with respect to the same characteristics.
Edge Detection:
Edge detection is a problem of fundamental importance in image analysis. In typical images,
edges characterize object boundaries and are therefore useful for segmentation, registration, and
identification of objects in a scene. In this section, the construction, characteristics, and performance
of a number of gradient and zero-crossing edge operators will be presented.
An edge is a jump in intensity. The cross section of an edge has the shape of a ramp. An ideal
edge is a discontinuity (i.e., a ramp with an infinite slope). The first derivative assumes a local
maximum at an edge. For a continuous image f(x, y), where x and y are the row and column
coordinates respectively, we typically consider the two directional derivatives ∂x f(x, y) and ∂y f(x, y).
Of particular interest in edge detection are two functions that can be expressed in terms of these
directional derivatives: the gradient magnitude and the gradient orientation. The gradient
magnitude is defined as

|∇f(x, y)| = √[(∂x f(x, y))² + (∂y f(x, y))²]
and the gradient orientation is given by

θ(x, y) = arctan(∂y f(x, y) / ∂x f(x, y))
Local maxima of the gradient magnitude identify edges in f(x, y). When the first derivative achieves a
maximum, the second derivative is zero. For this reason, an alternative edge-detection strategy is to
locate zeros of the second derivatives of f(x, y). The differential operator used in these so-called
zero-crossing edge detectors is the Laplacian.
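As a concrete illustration of both strategies, here is a minimal NumPy sketch (the function names, step image, and threshold value are illustrative assumptions, not from the notes): one function thresholds the gradient magnitude, and the other computes the discrete Laplacian whose zero crossings mark edges.

```python
import numpy as np

def gradient_edges(f, thresh=0.5):
    """Gradient magnitude/orientation via finite differences;
    pixels whose magnitude exceeds `thresh` are marked as edges."""
    fy, fx = np.gradient(f.astype(float))   # derivatives along rows, columns
    magnitude = np.hypot(fx, fy)            # |grad f| = sqrt(fx^2 + fy^2)
    orientation = np.arctan2(fy, fx)        # direction of steepest ascent
    return magnitude > thresh, magnitude, orientation

def laplacian(f):
    """Discrete Laplacian (5-point stencil); edges lie near its zero crossings."""
    f = f.astype(float)
    lap = np.zeros_like(f)
    lap[1:-1, 1:-1] = (f[:-2, 1:-1] + f[2:, 1:-1] +
                       f[1:-1, :-2] + f[1:-1, 2:] - 4 * f[1:-1, 1:-1])
    return lap
```

On a vertical step image the gradient magnitude peaks at the step, while the Laplacian changes sign across it, in line with the first- and second-derivative behaviour described above.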


Faculty/Date: HOD/Date:

Thresholding:
Thresholding is the simplest method of image segmentation. From a grayscale image,
thresholding can be used to create binary images.
During the thresholding process, individual pixels in an image are marked as “object” pixels
if their value is greater than some threshold value (assuming an object to be brighter than the
background) and as “background” pixels otherwise. This convention is known as threshold above.
Variants include threshold below, which is opposite of threshold above; threshold inside, where a
pixel is labeled "object" if its value is between two thresholds; and threshold outside, which is the
opposite of threshold inside.
Typically, an object pixel is given a value of “1” while a background pixel is given a value of
“0.” Finally, a binary image is created by coloring each pixel white or black, depending on a pixel's
label.
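The threshold above and threshold inside conventions described here can be sketched in a few lines of NumPy (the function names and sample thresholds are illustrative assumptions):

```python
import numpy as np

def threshold_above(img, T):
    """Label pixels brighter than T as object (1), the rest as background (0)."""
    return (img > T).astype(np.uint8)

def threshold_inside(img, T_low, T_high):
    """Label pixels whose values lie between the two thresholds as object (1)."""
    return ((img >= T_low) & (img <= T_high)).astype(np.uint8)
```

Threshold below and threshold outside are simply the logical negations of these two labelings.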
Thresholding is called adaptive thresholding when a different threshold is used for different
regions in the image. This may also be known as local or dynamic thresholding.
The key parameter in the thresholding process is the choice of the threshold value (or
values, as mentioned earlier). Several different methods for choosing a threshold exist; users can
manually choose a threshold value, or a thresholding algorithm can compute a value automatically,
which is known as automatic thresholding. A simple method would be to choose the mean or
median value, the rationale being that if the object pixels are brighter than the background, they
should also be brighter than the average. In a noiseless image with uniform background and object
values, the mean or median will work well as the threshold; however, this will generally not be the
case. A more sophisticated approach might be to create a histogram of the image pixel intensities
and use the valley point as the threshold. The histogram approach assumes that there is some
average value for the background and object pixels, but that the actual pixel values have some
variation around these average values. However, this may be computationally expensive, and image
histograms may not have clearly defined valley points, often making the selection of an accurate
threshold difficult.
One method that is relatively simple, does not require much specific knowledge of the
image, and is robust against image noise, is the following iterative method:
1. An initial threshold (T) is chosen; this can be done randomly or according to any other method
desired.
2. The image is segmented into object and background pixels as described above, creating two sets:
G1 = {f(m, n) : f(m, n) > T} (object pixels)
G2 = {f(m, n) : f(m, n) ≤ T} (background pixels)

3. The average of each set is computed.
m1 = average value of G1
m2 = average value of G2
4. A new threshold is created that is the average of m1 and m2:
T′ = (m1 + m2)/2
5. Go back to step two, now using the new threshold computed in step four, keep repeating until the
new threshold matches the one before it (i.e. until convergence has been reached).
This iterative algorithm is a special one-dimensional case of the k-means clustering
algorithm, which is known to converge to a local minimum, meaning that a different initial
threshold may give a different final result.
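The five steps above can be sketched as a short NumPy routine (the function name and convergence tolerance are illustrative assumptions):

```python
import numpy as np

def iterative_threshold(img, T=None, tol=0.5):
    """Iteratively refine threshold T until it stops changing (steps 1-5)."""
    img = img.astype(float)
    if T is None:
        T = img.mean()                       # step 1: initial guess
    while True:
        g1 = img[img > T]                    # step 2: object pixels
        g2 = img[img <= T]                   #         background pixels
        m1 = g1.mean() if g1.size else T     # step 3: mean of each set
        m2 = g2.mean() if g2.size else T
        T_new = (m1 + m2) / 2.0              # step 4: new threshold
        if abs(T_new - T) < tol:             # step 5: stop at convergence
            return T_new
        T = T_new
```

On a bimodal image this settles midway between the two cluster means, consistent with its interpretation as one-dimensional k-means with k = 2.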
Region Oriented Segmentation:
The main goal of segmentation is to partition an image into regions. Some segmentation
methods, such as thresholding, achieve this goal by looking for boundaries between regions
based on discontinuities in gray levels or color properties. Region-based segmentation is a technique
that finds the regions directly. Here is the basic formulation for region-based segmentation:
(a) R1 ∪ R2 ∪ ... ∪ Rn = R
(b) Ri is a connected region, i = 1, 2, ..., n
(c) Ri ∩ Rj = ф for all i ≠ j
(d) P(Ri) = TRUE for i = 1, 2, ..., n
(e) P(Ri ∪ Rj) = FALSE for any adjacent regions Ri and Rj
P(Ri) is a logical predicate defined over the points in set Ri, and ф is the null set.
(a) Indicates that the segmentation must be complete; that is, every pixel must be in a region.
(b) Requires that points in a region must be connected in some predefined sense.
(c) Indicates that the regions must be disjoint.
(d) Deals with the properties that must be satisfied by the pixels in a segmented region; for example,
P(Ri) = TRUE if all pixels in Ri have the same gray level.
And condition (e) indicates that regions Ri and Rj are different in the sense of predicate P.
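One common way to realize this formulation in practice is region growing, which builds a connected region of pixels satisfying a predicate P. The sketch below is illustrative (the function name, the 4-connectivity choice, and the "within tol of the seed's gray level" predicate are assumptions, not from the notes):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a 4-connected region from `seed`: a pixel joins the region if its
    gray level differs from the seed's by at most `tol` (the predicate P)."""
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    region[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                region[nr, nc] = True
                queue.append((nr, nc))
    return region
```

The returned mask is connected by construction (condition (b)), every pixel in it satisfies P (condition (d)), and growing from further seeds outside the mask would partition the rest of the image.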
