
Color Image Edge Detection Algorithm

Based on Circular Shifting

RAJENDRA KUMAR YADLA

Chapter – 1

INTRODUCTION TO IMAGE PROCESSING

1.1 Introduction:

In the early days of computing, data was almost exclusively numerical. Later, textual data became more common. Today, many other forms of data exist: voice, music, speech, images, computer graphics, and so on. Each of these types of data is a signal. A signal is a function that conveys information. Before discussing digital image processing, let us look briefly at where it comes from.

First, digital image processing appeared relatively late in computer history; it had to wait for the arrival of the first graphical operating systems to become a practical concern. Secondly, digital image processing requires careful optimization, especially for real-time applications. As long as people have tried to send or receive messages through electronic media (telegraphs, telephones, television, radar, etc.), there has been the realization that these signals may be affected by the systems used to acquire, transmit, or process them. Such systems are imperfect and can introduce noise, distortion, or other artifacts.

Understanding the effects these systems have, and finding ways to correct them, is fundamental to signal processing. That is, we deliberately introduce information content into a signal and hope to extract it later. Sometimes these man-made signals are encodings of natural phenomena (an audio signal, an acquired image, etc.), and sometimes we create them from scratch (speech generation, computer-generated music, computer graphics). Finally, we can merge these technologies by acquiring a natural signal, processing it, and then transmitting it in some fashion. When the signal is an image and the processing is done digitally, this is called Digital Image Processing.

Vision allows humans to perceive and understand the world surrounding them. Computer vision aims to duplicate the effect of human vision by electronically perceiving and understanding an image. Giving computers the ability to see is not an easy task: we live in a three-dimensional (3D) world, and when computers try to analyze objects in 3D space, the available visual sensors (e.g., TV cameras) usually give two-dimensional (2D) images, and this projection to a lower number of dimensions incurs an enormous loss of information. To simplify the task of computer vision understanding, two levels are usually distinguished: low-level image processing and high-level image processing.

1.2 What is Digital Image Processing?

According to Ezra Pound, "an image is an intellectual and emotional complex in an instant of time". The term image refers to a two-dimensional light intensity function f(x, y), where x and y denote spatial coordinates and the value of f at any point (x, y) is proportional to the brightness (or gray level) of the image at that point. The x axis is usually the horizontal axis and the y axis is usually the vertical axis. The origin of the coordinate system is usually the upper left corner of the image: the x axis is positive from left to right and the y axis is positive from the top to the bottom of the image.

Image processing is any form of information processing for which both the input and output are images, such as photographs or frames of video. Image processing modifies pictures to improve them (enhancement, restoration), extract information (analysis, recognition), and change their structure (composition, image editing). It is a subclass of signal processing concerned specifically with pictures, and its aim is to improve image quality for human perception and/or computer interpretation.

A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. It can be considered a matrix whose row and column indices identify a point in the image and whose element value identifies the grey level at that point.
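To make the matrix view concrete, the following minimal Python sketch (assuming NumPy and Pillow are available; "test.png" is a placeholder filename for any test image) loads an image as a 2D array and reads the grey level at one point, with the origin at the upper left corner:

import numpy as np
from PIL import Image  # Pillow

# Load an image and convert it to a grey-level matrix f(x, y).
img = Image.open("test.png").convert("L")   # "L" = 8-bit grey level
f = np.asarray(img)                          # rows x columns matrix

rows, cols = f.shape
print("Image size:", rows, "rows x", cols, "columns")

# The origin (0, 0) is the upper left corner; the row index grows downward
# and the column index grows to the right.
print("Grey level at the origin:", f[0, 0])
print("Grey level at row 10, column 20:", f[10, 20])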

Digital Image Processing refers to the processing of digital images by means of a digital computer. Digital Image Processing (DIP) is a multidisciplinary science that borrows principles from diverse fields such as optics, surface physics, visual psychophysics, computer science and mathematics. In digital image processing the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by a computer or other digital hardware.

Digital images are divided into three types: binary images, monochrome (gray-level) images, and color images. A binary image can encode only two levels, conventionally dark as 0 and bright as 1. In a monochrome image the array values range over some set of gray levels. A color image records both brightness and color at each pixel, i.e., for each pixel it records the red, green and blue components.

There are a few fundamental steps of digital image processing that can be applied to images for different purposes and possibly with different objectives. They are:

Image Acquisition: Generally, the image acquisition stage involves preprocessing, such as scaling. The image is captured by a sensor (such as a monochrome or color TV camera) and digitized. If the output of the camera or sensor is not already in digital form, an analog-to-digital converter digitizes it.

Example: A frame grabber only needs circuits to digitize the electrical signal from the imaging sensor and store the image in the memory (RAM) of the computer.

Image Enhancement: This is one of the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured, or simply to highlight certain features of interest in an image. Enhancement is a very subjective area of image processing.

Example:

Image Restoration: This also improves the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation.

Example:

Color Image Processing: This area is gaining importance because of the significant increase in the use of digital images over the Internet.

Wavelets: These are a foundation for representing images in various degrees of resolution. Unlike the Fourier transform, whose basis functions are sinusoids, the wavelet transform is based on small waves, called "wavelets", of varying frequency and limited duration.

Compression: This reduces the storage space required to save an image, or the bandwidth required to transmit it. Image compression is familiar to most users of computers in the form of image file extensions, such as the .jpg extension used in the JPEG (Joint Photographic Experts Group) image compression standard.

Morphological Processing: These are tools for extracting image components that are useful in the representation and description of shape. From this stage onward, the output of the process is typically not an image but image attributes.

Image Segmentation: This partitions an image into its constituent parts or objects. In general, the more accurate the segmentation, the more likely recognition is to succeed.

Representation & Description: This stage decides whether the data should be represented as a boundary or as a complete region. Boundary representation focuses on external shape characteristics, such as corners and inflections. Region representation focuses on internal properties, such as texture or skeletal shape. Description, also called feature selection, deals with extracting attributes that yield some quantitative information of interest or are basic for differentiating one class of objects from another.

Object Recognition: This is the process that assigns a label (e.g., "vehicle") to an object based on its descriptors.

Knowledge Base: Knowledge about the problem domain is coded into an image processing system in the form of a knowledge database.

1.3 Applications of Digital Image Processing:

The many applications of digital image processing include astronomy, ultrasonic imaging, remote sensing, video communications and microscopy, among innumerable others. A few applications are as follows:

1. Image processing technology is used by planetary scientists to enhance images of Mars, Venus, or other planets.
2. Doctors use this technology to manipulate CAT scans and MRI
images.
3. One of the first applications of digital images was digitized
newspaper pictures sent by submarine cable between London and New
York.
4. The major application area of digital image processing techniques is
in problems dealing with machine perception which is focused on
procedures for extracting image information in a form suitable for
computer processing. Examples of the type of information used in
machine perception are statistical moments, Fourier transform
coefficients, and multidimensional distance measures.
5. Image processing in the laboratory can motivate students and make
science relevant to student learning.
6. In archeology, image processing methods are used to restore blurred pictures or images that were damaged after being photographed.
7. Image Processing is used in Astronomy, Geography, Biology,
Defense, Law enforcement, and so on.

Until a few years ago, segmentation techniques were proposed mainly for gray-level images, since for a long time these were the only kind of visual information that acquisition devices could capture and computer resources could handle.

1.4 What is Segmentation?

Segmentation distinguishes objects from the background; that is, it divides an image into parts that have a strong correlation with objects or areas of the real world contained in the image. Simply put, partitioning an image into several constituent components is called "segmentation". Another word for object detection is segmentation; the object to be segmented differs greatly in contrast from the background image.

Segmentation is concerned with splitting an image up into segments (also called regions or areas), each of which holds some property distinct from its neighbors. This is an essential part of scene analysis, answering questions such as: where is the object and how large is it, where is the background, how many objects are there, how many surfaces are there? Segmentation is a basic requirement for the identification and classification of objects in a scene. The segmentation of structure from 2D and 3D images is an important step for a variety of image analysis and visualization tasks.

Image segmentation is a subset of the expansive field of computer vision, which deals with the analysis of the spatial content of an image. In particular, it is used to separate regions from the rest of the image, in order to recognize them as objects. Image segmentation is a computational process and should not be treated as an end in itself.

Segmentation is an important part of practically any automated image recognition system, because it is at this stage that the interesting objects are extracted for further processing such as description or recognition.

Segmentation of an image is, in practice, the classification of each image pixel into one of the image parts. If the goal is to recognize black characters on a grey background, pixels can be classified as belonging to the background or as belonging to the characters: the image is composed of regions in only two distinct grey value ranges, dark text on a lighter background. The grey level histogram, i.e. the probability distribution of the grey values, then has two separated peaks, i.e. it is clearly bimodal. In such a case the segmentation, i.e. the choice of a grey level threshold separating the peaks, is trivial. The same technique can be used if there are more than two clearly separated peaks.

1.5 Procedure for Segmentation:

The goal of segmentation is to segment a given gray-level image into regions. First, we label the various regions of the image. Secondly, we create a region adjacency graph, which will be used to determine whether two regions are adjacent. Lastly, we compute the mean value of each region. If the mean values of two adjacent regions differ by less than a threshold, the regions are merged into one.
1. The first step is to read a chosen test image as;

2. Secondly, the test image had to be converted to a gray-level image so that thresholding would be possible. The threshold values of 10, 70, 130, 190, and 250 were hand-chosen so that the resulting image would be an evenly spread 5-level gray picture. This 5-level gray image can be seen in the following picture.

A region adjacency graph is then created. This graph determines which regions are connected. The mean value of each region is then calculated. If the mean values of two adjacent regions differ by less than a threshold, the regions are merged. This process continues until there are no more regions that can be merged.
3. The result of merging the test image can be as;

It can be seen in this image that each of the objects in the original image has a unique gray-level value. This is done so that each object can be recognized.
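The labeling and merging procedure described above can be sketched in Python as follows. This is a minimal single-pass illustration, assuming NumPy and SciPy are available and that `quantized` is a 2D array already reduced to a few gray levels; the merge threshold of 20 is an arbitrary example value, not taken from the text.

import numpy as np
from scipy import ndimage

def merge_regions(quantized, merge_thresh=20):
    """Label connected regions, then merge adjacent regions whose
    mean gray values differ by less than merge_thresh."""
    # 1. Label connected regions of equal quantized value.
    labels = np.zeros(quantized.shape, dtype=int)
    next_label = 0
    for level in np.unique(quantized):
        lab, n = ndimage.label(quantized == level)
        labels[lab > 0] = lab[lab > 0] + next_label
        next_label += n

    # 2. Mean gray value of every region.
    ids = np.unique(labels)
    means = {i: quantized[labels == i].mean() for i in ids}

    # 3. Region adjacency from horizontally/vertically touching pixels.
    adjacent = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        adjacent.update(zip(a[diff].ravel(), b[diff].ravel()))

    # 4. Merge adjacent regions with similar means (simple union-find).
    #    The text repeats merging until no change; this sketch does one pass.
    parent = {i: i for i in ids}
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for a, b in adjacent:
        ra, rb = find(a), find(b)
        if ra != rb and abs(means[ra] - means[rb]) < merge_thresh:
            parent[rb] = ra
    return np.vectorize(find)(labels)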

1.6 Types of Segmentation:

Segmentation methods can be divided into three groups according to the dominant features they employ: thresholding, edge-based segmentation, and region-based segmentation.

1.6.1 Supervised and Unsupervised Segmentation:

Supervised segmentation of color images is based on the palettized representation of the image and consists of a few steps: down-sampling the lattice upon which the image pixels are defined, blurring the image through low-pass filtering in order to average colors lying in almost uniform regions, and exploiting well-established and fast methods for color quantization in order to compute color clusters in the RGB space.

Unsupervised color image segmentation is based upon ideas such as processing images in their palettized format; using the low spatial frequency content in the low-low band of the 2-level wavelet transform of the image; and building and thresholding hue and chroma histograms. The goal of unsupervised segmentation is to create structure for the data by objectively partitioning the data into homogeneous groups, where the within-group object similarity and the between-group object dissimilarity are optimized.

1.6.2. Complete and Partial segmentation:

Complete segmentation is one which results in a set of disjoint regions corresponding uniquely with objects in the input image. A whole class of segmentation problems can be solved successfully using low-level processing. In this case, the image commonly consists of contrasted objects on a uniform background (simple assembly tasks, blood cells, printed characters, etc.). Here, a simple global approach can be used and the complete segmentation of an image into objects and background can be obtained.

Partial segmentation is one in which regions do not correspond directly with image objects. The image is divided into separate regions that are homogeneous with respect to a chosen property such as brightness, color, reflectivity, texture, etc. In a complex scene, a set of possibly overlapping homogeneous regions may result. The partially segmented image must then be subjected to further processing, and the final image segmentation may be found with the help of higher-level information.

Many segmentation methods are based on two basic properties of the gray-level values of pixels in relation to their local neighborhood: discontinuity and similarity. In the first category, the approach is to partition an image based on abrupt changes in gray level. The principal areas of interest in this category are the detection of isolated points and the detection of lines and edges in an image. Methods based on pixel discontinuity are called boundary-based methods and rely on the gradient features at a subset of the spatial positions of an image.

The principal approaches in the second category are based on thresholding and region labeling algorithms. Methods based on pixel similarity are called region-based methods and rely on the homogeneity of spatially localized features.

1.6.3 Region Based Segmentation:

Region finding techniques are based on the absence of discontinuities within the regions themselves. Region-based approaches try to form clusters of similar pixels. Initially, clusters can be found by grouping adjacent pixels with the same value.

The various approaches are distinguished by the particular rules used to decide when to merge regions and when to keep them distinct. The rules are normally based on some measure of the strength of the boundary between the regions. The strength of the boundary at any particular position is based on examination of the values of the pixels which lie on either side.

If the difference in value exceeds some threshold, the boundary location is said to be strong. If the difference is less than the threshold, the boundary location is said to be weak. If enough weak boundary locations exist, the regions can be merged.

Homogeneity is an important property of regions and is used
as the main segmentation criterion in region growing, whose basic idea is
to divide an image in to zones of maximum homogeneity. The criteria for
homogeneity can be based on gray-level, color, texture, shape, model,
etc.

A region-based method usually proceeds as follows: the image is partitioned into connected regions by grouping neighboring pixels of similar intensity levels. Adjacent regions are then merged under some criterion involving, perhaps, homogeneity or sharpness of region boundaries. Over-stringent criteria create fragmentation; lenient ones overlook blurred boundaries and over-merge. Hybrid techniques using a mix of the above methods are also popular. One advantage of region growing is that such techniques are generally better in noisy images, where borders are extremely difficult to detect.

1.7 Uses of Segmentation:

1. Image segmentation has been used widely in image processing applications such as content-based image retrieval, image understanding, object recognition, browsing and image classification. Recently, the use of image processing applications has spread over the Web and into our daily lives much more, so the speed requirements of these applications are increasing day by day.

2. Segmentation is a tool that has been widely used in medical image processing and computer vision for a variety of reasons. The goal is to segment the images with respect to their characteristics, such as bone and tissue types.

Chapter – 2

ANALYSIS

Types of Segmentation

2.1 Thresholding:

Thresholding has a long but checkered history in digital image processing. Thresholding is a commonly used operation whose goal is to segment an image into object and background. A threshold value is computed above (or below) which pixels are considered "object" and below (or above) which they are considered "background"; this eliminates unimportant shading variation.

One obvious way to extract the objects from the background is to select a threshold T that separates these modes. Then, any point (x, y) for which f(x, y) > T is called an object point; otherwise, the point is called a background point. Thresholding can also be done using neighborhood operations.

Thresholding essentially involves turning a color or grayscale image into a 1-bit binary image. This is done by making every pixel in the image either black or white, depending on its value. The pivotal value that is used to decide whether any given pixel is to be black or white is the threshold.

If the threshold value depends only on the gray values, it is called "global thresholding". If the threshold value depends on the gray values and on some local property, it is called "local thresholding". If the threshold value depends on the gray values, some local property, and the spatial coordinates, it is called "adaptive thresholding".

Different ways to choose the threshold value when we convert a gray image to a binary image are:

1. Randomly selecting one of the gray values of the image

2. The average of all the gray values of the image

3. The median of the gray values

4. (Minimum value + Maximum value)/2

Algorithm for Selecting the Threshold value Randomly:

Step 1 : Read image.

Step 2 : Convert the image into a two dimensional matrix (e.g., 64x64).

Step 3 : Select the threshold value randomly.

Step 4 : If the grey level value is greater than the threshold value, make it bright;
else, make it dark.

Step 5 : Display threshold image.

Algorithm for Selecting the Threshold value using Average of all


gray values:

Step 1 : Read image.

Step 2 : Convert the image into a two dimensional matrix (e.g., 64x64).

Step 3 : Select the threshold value by calculating the average of all the gray values.

Step 4 : If the grey level value is greater than the threshold value, make it bright;
else, make it dark.

Step 5 : Display threshold image.

Algorithm for Selecting the Threshold value using Median of all


gray values:

Step 1 : Read image.

Step 2 : Convert the image into a two dimensional matrix (e.g., 64x64).

Step 3 : Select the threshold value by calculating the median of all the gray values.

Step 4 : If the grey level value is greater than the threshold value, make it bright;
else, make it dark.

Step 5 : Display threshold image.

Algorithm for Selecting the Threshold value using Average of minimum


and maximum gray values:

Step 1 : Read image.

Step 2 : Convert the image into a two dimensional matrix (e.g., 64x64).

Step 3 : Select the threshold value by calculating the average of the minimum and maximum gray values.

Step 4 : If the grey level value is greater than the threshold value, make it bright;
else, make it dark.

Step 5 : Display threshold image.
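The four threshold-selection strategies above can be sketched together in Python. This is a minimal illustration assuming NumPy is available and that `gray` is a 2D array of grey levels; none of the function names below come from the text.

import numpy as np

def choose_threshold(gray, method="average"):
    """Select a global threshold from a grey-level image by one of the
    four strategies described above."""
    values = gray.ravel()
    if method == "random":                      # 1. random gray value
        return np.random.choice(values)
    if method == "average":                     # 2. mean of all gray values
        return values.mean()
    if method == "median":                      # 3. median of all gray values
        return np.median(values)
    if method == "minmax":                      # 4. (min + max) / 2
        return (values.min() + values.max()) / 2.0
    raise ValueError("unknown method")

def threshold_image(gray, t):
    """Pixels above the threshold become bright (255), the rest dark (0)."""
    return np.where(gray > t, 255, 0).astype(np.uint8)

# Example usage on a small synthetic image:
gray = np.random.randint(0, 256, size=(64, 64))
t = choose_threshold(gray, method="average")
binary = threshold_image(gray, t)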

Limitations:

1. As hardware costs dropped and sophisticated new algorithms were developed, thresholding became less important.

2. Thresholding destroys useful shading information and applies essentially infinite gain to noise at the threshold value, resulting in a significant loss of robustness and accuracy.

2.2 Edge Based Segmentation:

The edges of an image hold much of the information in that image. The edges tell where objects are, their shape and size, and something about their texture. An edge is where the intensity of an image moves from a low value to a high value or vice versa. Edges in images are areas with strong intensity contrasts, i.e. a jump in intensity from one pixel to the next. Edge-detecting an image significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image.

Edge detectors are a collection of very important local image pre-processing methods used to locate (sharp) changes in the intensity function. Edges are pixels where the brightness function changes abruptly. The following figure shows several typical standard edge profiles.

Edge-based segmentation represents a large group of methods based on information about edges in the image; it is one of the earliest segmentation approaches and still remains very important. It relies on edges found in an image by edge-detecting operators; these edges mark image locations of discontinuities in gray level, color, texture, etc.

The classification of edge detectors is based on the behavioral study of the edges with respect to the following operators:

• First Order derivative edge detection

• Second Order derivative edge detection

2.2.1. First Order edge detection Technique:

The first derivative assumes a local maximum at an edge. For an image f(x, y), at location (x, y), where x and y are the row and column coordinates respectively, we typically consider the two directional derivatives Gx = ∂f/∂x and Gy = ∂f/∂y. Two functions that can be expressed in terms of the directional derivatives are the gradient magnitude and the gradient orientation. The gradient magnitude is defined by

    |∇f| = sqrt(Gx^2 + Gy^2)

The gradient orientation is also an important quantity; it is given by

    θ = arctan(Gy / Gx)

The angle is measured with respect to the x axis. The other way of calculating the gradient is by estimating the finite differences, which can be approximated as

    Gx ≈ f(x+1, y) − f(x, y),   Gy ≈ f(x, y+1) − f(x, y)

Let us see the most popular classical gradient-based edge detectors.

2.2.1.1. Roberts edge detection operator:

The gradient magnitude of an image is obtained from the partial derivatives ∂f/∂x and ∂f/∂y at every pixel location. The simplest way to implement the first-order partial derivatives is by using the Roberts cross-gradient operator. The Roberts operator masks are;

Therefore the partial derivatives for the above two 2x2 masks are as follows;

The Roberts cross operator provides a simple approximation to the gradient magnitude. The two masks are designed to respond maximally to edges running at 45° to the pixel grid, one mask for each of the two perpendicular orientations. The masks can be applied separately to the input image to produce separate measurements of the gradient component in each orientation (call these Gx and Gy). These can then be combined to find the absolute magnitude of the gradient at each point and the orientation of that gradient. The gradient magnitude is given by

    |G| = sqrt(Gx^2 + Gy^2)

and, to speed up computation, it is often approximated by

    |G| ≈ |Gx| + |Gy|

This is how the Roberts cross edge detector is used.

Algorithm:

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display threshold image.

Step 6: Take 2x2 Robert Operator and place it on the Original Image.

Step 7: Find the Gradient Magnitude and replace the origin pixel by the

magnitude.

Step 8: Convolve the mask on the entire image.

Step 9: Display the Convolved image.
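A minimal Python sketch of the Roberts operator follows. It assumes NumPy and SciPy are available and that `gray` is a 2D grey-level array; the thresholding step from the algorithm above is omitted to keep the example focused on the gradient computation.

import numpy as np
from scipy.signal import convolve2d

# The two 2x2 Roberts cross masks (standard form).
ROBERTS_X = np.array([[1,  0],
                      [0, -1]], dtype=float)
ROBERTS_Y = np.array([[0,  1],
                      [-1, 0]], dtype=float)

def roberts_edges(gray):
    """Convolve the Roberts masks over the image and return the
    gradient magnitude at each pixel."""
    gx = convolve2d(gray, ROBERTS_X, mode="same", boundary="symm")
    gy = convolve2d(gray, ROBERTS_Y, mode="same", boundary="symm")
    return np.sqrt(gx**2 + gy**2)          # or np.abs(gx) + np.abs(gy)

# Example usage:
gray = np.random.randint(0, 256, size=(64, 64)).astype(float)
edges = roberts_edges(gray)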

Limitations:

1. It is highly vulnerable to noise, because the 2x2 masks use very few pixels to approximate the gradient.

2. It produces very weak responses to genuine edges unless they are very sharp.

2.2.1.2 Prewitt edge detection operator:

The Prewitt edge detector is a much better operator than the Roberts operator. Having 3x3 masks, it deals better with the effect of noise.

The Prewitt edge detection masks are one of the oldest and best understood methods of detecting edges in images. Basically, there are two masks, one for detecting image derivatives in X and one for detecting image derivatives in Y. The Prewitt operator is obtained by setting c = 1.

An approach using masks of size 3x3 is given by considering the below arrangement of pixels;

Therefore the partial derivatives for the above two 3x3 masks are as follows;

Likewise the Prewitt edge detector is used, as the masks have longer support.

Algorithm :

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display threshold image.

Step 6: Take 3x3 Prewitt Operator and place it on the Original Image.

Step 7: Find the Gradient Magnitude and replace the center pixel by the

magnitude.

Step 8: Convolve the mask on the entire image.

Step 9: Display the Convolved image.

Limitation:

These differentiate in one direction and average in the other direction,
so the edge detector is less vulnerable to noise.

2.2.1.3. Sobel edge detection operator:

The Sobel edge detector is very similar to the Prewitt edge detector. The difference between the two is that the weight of the center coefficient is 2 in the Sobel operator.

The Sobel operator performs a 2-D spatial gradient measurement on an image. Then, the approximate absolute gradient magnitude (edge strength) at each point can be found. The Sobel operator is more sensitive to diagonal edges than to vertical and horizontal edges. It uses a pair of 3x3 convolution masks, one estimating the gradient in the x-direction (columns) and the other estimating the gradient in the y-direction (rows). A convolution mask is usually much smaller than the actual image. The actual Sobel masks are shown below:

This operator consists of a pair of 3x3 convolution masks, one mask being simply the other rotated by 90°. The mask is slid over an area of the input image, changes that pixel's value, and then shifts one pixel to the right, continuing until it reaches the end of a row. It then starts at the beginning of the next row.

The center of the mask is placed over the pixel being manipulated in the image. Note that pixels in the first and last rows, as well as the first and last columns, cannot be manipulated by a 3x3 mask, because when placing the center of the mask over a pixel in the first row (for example), the mask extends outside the image boundaries.

An approach using the masks of size 3x3 is given by considering the below
arrangement of pixels;

Therefore the partial derivatives for the above two 3x3 masks are as follows;

Likewise the Sobel edge detector is used.

ALGORITHM FOR SOBEL OPERATOR:

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display threshold image.

Step 6: Take 3x3 Sobel Operator and place it on the Original Image.

Step 7: Find the Gradient Magnitude and replace the center pixel by the

magnitude.

Step 8: Convolve the mask on the entire image.

Step 9: Display the Convolved image.
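Following the same pattern as the Roberts sketch, a minimal Sobel implementation in Python might look like this (NumPy and SciPy assumed; `gray` is a 2D grey-level array; swapping in the Prewitt masks, which use 1 instead of 2 for the center weight, gives the Prewitt detector):

import numpy as np
from scipy.signal import convolve2d

# Standard 3x3 Sobel masks; the center row/column weight is 2.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_edges(gray):
    """Return the Sobel gradient magnitude; the center pixel of each
    3x3 neighborhood is replaced by the magnitude, as in Step 7 above."""
    gx = convolve2d(gray, SOBEL_X, mode="same", boundary="symm")
    gy = convolve2d(gray, SOBEL_Y, mode="same", boundary="symm")
    return np.sqrt(gx**2 + gy**2)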

Advantages:

1. An advantage of using a larger mask size is that errors due to the effects of noise are reduced by local averaging within the neighborhood of the mask.

2. An advantage of using masks of odd size is that the operators are centered and can therefore provide an estimate centered on a pixel (i, j).

Limitations:

Although the Prewitt masks are easier to implement than the Sobel masks, the latter have better noise suppression characteristics.

2.2.1.4. Robinson Edge Detector operator:

The Robinson edge detector is similar to the Sobel edge detector. The Robinson row, column and corner masks are shown below:

The masks are symmetrical about their directional axes. The edge magnitude is the maximum value found over all the masks, and the edge direction is defined by the mask that produces that maximum value.

ALGORITHM FOR ROBINSON OPERATOR:

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display threshold image.

Step 6: Take 3x3 Robinson Operator and place it on the Original Image.

Step 7: Find the Gradient Magnitude and replace the center pixel by the

magnitude.

Step 8: Convolve the mask on the entire image.

Step 9: Display the Convolved image.

2.2.1.5. Kirsch Edge Detector operator:

The Kirsch edge detector is similar to the above edge detectors. The Kirsch row, column and corner masks are shown below:

ALGORITHM FOR KIRSCH OPERATOR:

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display threshold image.

Step 6: Take the 3x3 Kirsch Operator and place it on the Original Image.

Step 7: Find the Gradient Magnitude and replace the center pixel by the

magnitude.

Step 8: Convolve the mask on the entire image.

Step 9: Display the Convolved image.
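Compass detectors such as Robinson and Kirsch convolve the image with a mask rotated through eight orientations and keep the maximum response. A minimal sketch follows (NumPy and SciPy assumed; only the standard Kirsch "north" mask is written out, the other seven masks being generated by rotating its outer ring of coefficients):

import numpy as np
from scipy.signal import convolve2d

# Kirsch "north" mask; the other compass masks come from rotating the ring.
KIRSCH_N = np.array([[ 5,  5,  5],
                     [-3,  0, -3],
                     [-3, -3, -3]], dtype=float)

def compass_masks(base):
    """Generate the eight compass masks by rotating the outer ring."""
    # Outer-ring positions listed clockwise starting at the top-left corner.
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    values = [base[r, c] for r, c in ring]
    masks = []
    for shift in range(8):
        m = np.zeros_like(base)
        for (r, c), v in zip(ring, np.roll(values, shift)):
            m[r, c] = v
        masks.append(m)
    return masks

def kirsch_edges(gray):
    """Edge magnitude = maximum response over the eight Kirsch masks."""
    responses = [convolve2d(gray, m, mode="same", boundary="symm")
                 for m in compass_masks(KIRSCH_N)]
    return np.max(np.abs(responses), axis=0)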

2.2.2. Second Order edge detection technique:

Second-order derivative edge detection techniques employ some form of spatial second-order differentiation to accentuate edges. An edge is marked if a significant spatial change occurs in the second derivative.

2.2.2.1 Laplacian of Gaussian edge detection operator:

The Laplacian operator is a very popular operator approximating the second derivative; it gives the gradient magnitude only. The principle used in the Laplacian of Gaussian method is that the second derivative of a signal is zero where the magnitude of the first derivative is maximal. The Laplacian is approximated in digital images by a convolution sum. The Laplacian of a 2-D function f(x, y) is defined as

    ∇²f = ∂²f/∂x² + ∂²f/∂y²

The 3x3 masks for the 4-neighborhood and 8-neighborhood versions of this operator are as follows;

The two partial derivative approximations for the Laplacian for a 3x3 region are
given as,

The 5x5 Laplacian is a convolution mask used to approximate the second derivative; it estimates the 2nd derivative in both the x and y directions.

ALGORITHM FOR LAPLACIAN AND GAUSSIAN OPERATOR:

Step 1: Read image.

Step 2: Convert image into two dimensional matrix (eg: 64X64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display Threshold image.

Step 6: Perform Gaussian Operation on Original image to reduce the noise.

Step 7: Take the 3x3 Laplacian Operator and place it on the image smoothed by the Gaussian operator.

Step 8: Find the Gradient Magnitude and replace the center pixel by the magnitude.

Step 9: Convolve the mask on the entire image.

Step 10: Display the Convolved image.
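A minimal Laplacian of Gaussian sketch in Python (NumPy and SciPy assumed; `gray` is a 2D grey-level array, and sigma = 1.0 is an arbitrary example value):

import numpy as np
from scipy import ndimage
from scipy.signal import convolve2d

# 3x3 Laplacian mask for the 4-neighborhood.
LAPLACIAN_4 = np.array([[ 0,  1, 0],
                        [ 1, -4, 1],
                        [ 0,  1, 0]], dtype=float)

def log_edges(gray, sigma=1.0):
    """Smooth with a Gaussian to suppress noise, then apply the
    Laplacian mask (Steps 6-9 of the algorithm above)."""
    smoothed = ndimage.gaussian_filter(gray.astype(float), sigma=sigma)
    return convolve2d(smoothed, LAPLACIAN_4, mode="same", boundary="symm")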

Limitations:

1. It responds doubly to some edges in the image.

Limitations for Edge Based Segmentation:

Edge-based methods center around contour detection: their weakness in connecting together broken contour lines makes them, too, prone to failure in the presence of blurring.

The main disadvantage of these edge detectors is their dependence on the size of objects and their sensitivity to noise.

Further, since conventional boundary finding relies on changes in the grey level rather than on their actual values, it is less sensitive to changes in the grey-scale distributions over images than region-based segmentation.

2.3 Algorithm for Adjacent Difference Method:

Step 1: Read image.

Step 2: Convert the image into a two dimensional matrix (e.g., 64x64).

Step 3: Calculate Threshold value of the image using average method.

Step 4: If grey level value is greater than or equal to the threshold value make it

bright, else make it dark.

Step 5: Display Threshold image.

Step 6: Calculate the Row Wise differences by subtracting A[i] [j+1] from A[i] [j],

where i,j are row and column indices respectively.

Step 7: Calculate the Column Wise differences by subtracting A [i+1] [j] from

A[i] [j], where i,j are row and column indices respectively.

Step 8: Calculate the Right Diagonal Wise differences by subtracting

A [i+1] [j+1] from A[i] [j], where i,j are row and column indices

respectively.

Step 9: Calculate the Left Diagonal Wise differences by subtracting A [i+1] [j-1]

from A[i] [j], where i,j are row and column indices respectively.

Step 10: If the Row Wise, Column Wise, Right Diagonal Wise, or Left Diagonal Wise difference value is greater than or equal to 6, make the pixel bright; else make it dark.

Step 11: Display Completely Segmented image.
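A minimal Python sketch of this adjacent difference method (NumPy assumed; `gray` is a 2D grey-level array; the difference threshold of 6 is taken from Step 10, while marking a pixel when any of the four directional differences reaches the threshold is an assumption, since the text does not state how the four tests are combined):

import numpy as np

def adjacent_difference_edges(gray, diff_thresh=6):
    """Mark a pixel as an edge when the absolute difference to its right,
    lower, lower-right or lower-left neighbor is >= diff_thresh."""
    a = gray.astype(int)
    edges = np.zeros_like(a, dtype=bool)

    # Row-wise: A[i][j] - A[i][j+1]
    edges[:, :-1] |= np.abs(a[:, :-1] - a[:, 1:]) >= diff_thresh
    # Column-wise: A[i][j] - A[i+1][j]
    edges[:-1, :] |= np.abs(a[:-1, :] - a[1:, :]) >= diff_thresh
    # Right-diagonal: A[i][j] - A[i+1][j+1]
    edges[:-1, :-1] |= np.abs(a[:-1, :-1] - a[1:, 1:]) >= diff_thresh
    # Left-diagonal: A[i][j] - A[i+1][j-1]
    edges[:-1, 1:] |= np.abs(a[:-1, 1:] - a[1:, :-1]) >= diff_thresh

    return np.where(edges, 255, 0).astype(np.uint8)   # bright = edge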

Chapter – 3

DESCRIPTION OF THE PROJECT

3.1 Introduction

Edge detection is a vital step in image processing and is one of the most crucial steps towards the classification and recognition of objects. Color plays a crucial role in image analysis and recognition. A color image has a vector of three values for every pixel, unlike gray images, where a single value represents the intensity of a pixel. The human vision system uses color, rather than shape and texture, as the major discriminating attribute. Many algorithms have been proposed for edge detection in color images. Of all the edge detectors, Sobel is the standard detector and Canny is the modern standard; researchers compare their results with those of the Canny detector. Novak and Shafer found that 90% of the edges are about the same in gray-level and in color images, which implies that 10% of the edges are missed in gray-level images. Since color images give more information than gray-level images, these left-over edges may be extracted from color images. In general, either gradient-based methods or vector-based methods are used to extract edges from images.

The real complement approach is used as a pre-processing step for real edge detection. The complement acts like a low-pass filter and highlights the weak intensity pixels. The nature of the complement process is that it reduces the gray intensity distribution to 50%, i.e., a gray-level image having intensities in the range 0-255 will be reduced to intensities in the range 0-128, whereby the weak intensity pixels are highlighted. In this new method a real complement approach is introduced; the resulting real edges give efficient edge information without discontinuities.

To achieve this, we use different shift operations. In the proposed method, the complement approach is used as a pre-processing step. Initially, the algorithm is tested on RGB images, where each channel is separately processed by the complement operation followed by matrix shifting operations. The proposed algorithm is further tested on the HSV and YUV color spaces. In addition, the algorithm is tested on both standard and real images, and the results are compared with the results of other methods. All the procedures used in the algorithm run in polynomial time.

3.2. Color Spaces

A color space relates numbers to actual colors; it is a three-dimensional object which contains all realizable color combinations. Color spaces can be either dependent on or independent of a given device. Device-dependent spaces express color relative to some other color space, whereas independent color spaces express color in absolute terms. Each dimension in a color space represents some aspect of color, such as lightness, saturation or hue, depending on the type of space.

A. RGB Model

An RGB color space is any additive color space based on the RGB color model. A particular RGB color space is defined by the three chromaticities of the red, green and blue additive primaries. RGB is a convenient color model for computer graphics because the human visual system works in a way that is similar to an RGB color space.

B. HSL and HSV Model

HSL and HSV are two related representations of points in the RGB color space which attempt to describe the perceptual color relationships more accurately than RGB. HSL stands for hue, saturation and lightness; HSV stands for hue, saturation and value.

C. YUV Model

The YUV model defines a color space in terms of one luminance (Y) and two chrominance (U, V) components. YUV models human perception of color in a different way from the standard RGB model. Y stands for the luminance (brightness) component and U, V stand for the chrominance (color) components. The transformation from RGB into YUV is given by
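(In the commonly used ITU-R BT.601 convention, the transformation is approximately:)

    Y =  0.299 R + 0.587 G + 0.114 B
    U = -0.147 R - 0.289 G + 0.436 B
    V =  0.615 R - 0.515 G - 0.100 B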

Since the luminance and chrominance components are separate, the YUV space is widely used in broadcast video systems, and hence it is also used in image and video processing.

3.3. Real Complement Approach

The Real Complement approach works like a low-pass filter, strongly attenuating the higher image frequencies. This method is based on the image complement, hence the name Real Complement. The method is inexpensive due to its simple arithmetic operations. The steps involved are as follows.

Step-1: Read the given image (i).

Step-2: Obtain the complement of the image. (ic).

Step-3: Do image differencing. (i1=i-ic).

Step-4: Perform global thresholding on i1 to get a binary image (i2).

Step-5: Obtain the real complement of the image by image differencing between i and i2 (i3 = i - i2).

The third step in the Real Complement approach is subtracting the complement of the input image from the input image itself. Let i be the given image and ic be its complement.

This image subtraction makes the background uniform, depending on the intensity of the background pixels of the image. For illustration, the images can be categorized into three cases based on the background information: (i) background very bright, (ii) background slightly dark, and (iii) background in an intermediate range. In the first case, the background of i1 remains almost the same, since the complement of the bright intensity values, when subtracted from the original intensities, leads to the nearest (bright) intensity range. In the second case, i.e., when the background is slightly dark, the same analogy works as in the first case, but nearer to the original range. In the third case, if the difference is a positive value, a similar gray value is retained; otherwise it becomes black. The fourth step in the Real Complement approach is the binary map formation. This is achieved by global thresholding. The binary map i2 is calculated from the image obtained in the previous step by using the following equation.
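A minimal Python sketch of the Real Complement pre-processing steps (NumPy assumed; `i` is an 8-bit grey-level channel; the global-threshold rule used for the binary map, pixels at or above the threshold mapping to 1, is the standard one and is an assumption here, as is using the mean as the global threshold):

import numpy as np

def real_complement(i):
    """Steps 1-5 of the Real Complement approach on one 8-bit channel."""
    i = i.astype(int)
    ic = 255 - i                      # Step-2: complement of the image
    i1 = np.clip(i - ic, 0, 255)      # Step-3: image differencing, i1 = i - ic
    t = i1.mean()                     # global threshold (assumed: mean value)
    i2 = (i1 >= t).astype(int)        # Step-4: binary map by global thresholding
    i3 = np.clip(i - i2, 0, 255)      # Step-5: real complement, i3 = i - i2
    return i3.astype(np.uint8)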

3.4. Circular Shift Operations

The literature reveals many edge detectors based on gradients, filters, derivatives, etc. Here, a simple method to detect real edges is introduced based on shift operations. The present approach to edge detection incorporates simple shift operations: the entire image data is shifted left, right, up and down. The following terms are used to represent the shift operations in the proposed algorithm:

i. RSHT: Row Shift
ii. CLSHT: Circular Left Shift
iii. CRSHT: Circular Right Shift
iv. CSHT: Column Shift
v. CTSHT: Circular Top Shift
vi. CBSHT: Circular Bottom Shift
vii. DSHT: Diagonal Shift

The procedures for finding the different shifted images are as follows.

A. Finding the RSHT image

i. Read the given image (i1).
ii. Apply the CLSHT (Circular Left Shift) operation on the entire original image (i2).
iii. Perform image differencing to obtain the left slope edge pixels (i3 = i1 - i2).
iv. Apply the CRSHT (Circular Right Shift) operation on the entire original image (i4).
v. Perform image differencing to obtain the right slope edge pixels (i5 = i1 - i4).
vi. To obtain the RSHT image, add (i3) and (i5).

B. Finding the CSHT image

i. Read the given image (i1).
ii. Apply the CTSHT (Circular Top Shift) operation on the entire original image (i2).
iii. Perform image differencing to obtain the top slope edge pixels (i3 = i1 - i2).
iv. Apply the CBSHT (Circular Bottom Shift) operation on the entire original image (i4).
v. Perform image differencing to obtain the bottom slope edge pixels (i5 = i1 - i4).
vi. To obtain the CSHT image, add (i3) and (i5).

C. Finding the DSHT image

i. Read the given image (i1).
ii. Apply a circular shift on the diagonal image data (i2).
iii. Perform image differencing to obtain the diagonal edge pixels (i3 = i1 - i2).
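A minimal Python sketch of these shift-based edge images, using NumPy's circular roll (implementing the diagonal shift as a combined one-pixel roll along both axes is an assumption, since the text does not specify the exact diagonal operation):

import numpy as np

def rsht_image(i1):
    """RSHT: difference against circular left and right shifts, then add."""
    i2 = np.roll(i1, -1, axis=1)          # CLSHT: circular left shift
    i3 = np.abs(i1.astype(int) - i2)      # left slope edge pixels
    i4 = np.roll(i1, 1, axis=1)           # CRSHT: circular right shift
    i5 = np.abs(i1.astype(int) - i4)      # right slope edge pixels
    return i3 + i5

def csht_image(i1):
    """CSHT: difference against circular top and bottom shifts, then add."""
    i2 = np.roll(i1, -1, axis=0)          # CTSHT: circular top shift
    i3 = np.abs(i1.astype(int) - i2)      # top slope edge pixels
    i4 = np.roll(i1, 1, axis=0)           # CBSHT: circular bottom shift
    i5 = np.abs(i1.astype(int) - i4)      # bottom slope edge pixels
    return i3 + i5

def dsht_image(i1):
    """DSHT: difference against a circular diagonal shift (assumed form)."""
    i2 = np.roll(np.roll(i1, 1, axis=0), 1, axis=1)
    return np.abs(i1.astype(int) - i2)    # diagonal edge pixels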

3.5. Proposed Methodology

This section presents the method we propose for extracting edges from color images. The block diagram of the proposed method is given in Fig. 1. The steps of the proposed method are as follows.

Step-1: Read the given color image.

Step-2: Obtain the Real complement for all three channels separately.

Step-3: Find RSHT image for individual real complement.

Step-4: Find CSHT image for individual real complement.

Step-5: Find DSHT image for individual real complement.

Step-6: To obtain strong real edges, add the result images obtained in Steps 3, 4 and 5.

Step-7: Post-processing of the resultant edge image to get the final output.
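Putting the pieces together, a minimal sketch of the whole pipeline for an RGB image (NumPy and Pillow assumed; it reuses the real_complement, rsht_image, csht_image and dsht_image functions sketched earlier; reducing the post-processing of Step-7 to a simple threshold, with an arbitrary value of 30, is an assumption):

import numpy as np
from PIL import Image

def color_edge_map(path, edge_thresh=30):
    """Real-complement each RGB channel, add its RSHT, CSHT and DSHT
    edge images, combine the channels and threshold the result."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    combined = np.zeros(rgb.shape[:2], dtype=int)
    for ch in range(3):                                    # Step-2: per channel
        rc = real_complement(rgb[:, :, ch])
        edges = rsht_image(rc) + csht_image(rc) + dsht_image(rc)  # Steps 3-6
        combined += edges
    # Step-7: simple post-processing (assumed): keep strong responses only.
    return np.where(combined >= edge_thresh, 255, 0).astype(np.uint8)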

Chapter – 4

EXPERIMENTAL RESULTS ON IMAGES

In this section, the results of the proposed method are presented. Both standard images and real-world images are used to show the efficiency of the proposed method. The results show that the proposed algorithm detects the exact real edges.

The standard color image of a woman is supplied as input to the proposed method.

The edge map for the given input image using right shift operations.

The edge map for the given input image using column shift operations.

The edge map for the given input image using the real complement approach.

The same algorithm is applied to a real-world image of a tree.

The real-world image of a tree is supplied as input to the proposed method.

The edge map for the given input image using right shift operations.

The edge map for the given input image using column shift operations.

The edge map for the given input image using the real complement approach.

Chapter – 5

CONCLUSION & FUTURE WORK

5.1 Conclusion

A new approach is proposed in this work to extract real edges from color images. The detected edges are more accurate compared with some existing color edge detectors. The algorithm incorporates simple shift and arithmetic operations rather than gradient and other operations, which are computationally expensive. It also manipulates the entire image at a time, which is not the case in the normal edge detection process. Experimental results indicate that the performance of the proposed method is satisfactory in almost all cases and that it runs in polynomial time.

5.2 Future Work

Our future work entails the reconstruction of images from the edge map.

REFERENCES

[1] T. N. Janakiraman and P. V. S. S. R. Chandra Mouli, "Color Image Edge Detection using Pseudo-Complement and Matrix Operations", Proceedings of World Academy of Science, Engineering and Technology, Volume 32, August 2008, ISSN 2070-3740.

[2] J. T. Allen and T. Huntsberger, "Comparing color edge detection and segmentation methods", IEEE Southeastcon '89.

[3] T. Carron and P. Lambert, "Fuzzy color edge extraction by inference rules: quantitative study and evaluation of performances", Proceedings of the IEEE International Conference on Image Processing, pp. 181-184, 1995.

[4] R. Nevatia, "A color edge detector and its use in scene segmentation", IEEE Transactions on Systems, Man and Cybernetics, Vol. 7, pp. 820-826, 1977.

[5] M. A. Abidi, R. A. Salinas, C. Richardson and R. C. Gonzalez, "Data fusion color edge detection and surface reconstruction through regularization", IEEE Transactions on Industrial Electronics, 43(3), pp. 355-363, 1996.

[6] J. Canny, "A computational approach to edge detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, 8(6), pp. 679-698, 1986.

[7] C. L. Novak and S. A. Shafer, "Color edge detection", Proceedings of the DARPA Image Understanding Workshop, Los Angeles, CA, USA, Vol. 1, pp. 35-37, 1987.

[8] R. C. Gonzalez and R. E. Woods, "Digital Image Processing" (3rd Edition), Prentice Hall, 2007.

[9] M. Morris Mano, "Computer System Architecture" (3rd Edition), Prentice Hall, 2006.
