
(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 2, February 2011

Image Processing: The Comparison of the Edge Detection Algorithms for Images in Matlab
Ehsan Azimirad
Department of Electrical and Computer Engineering, Tarbiat Moallem University of Sabzevar, Sabzevar, Iran
eazimi@sttu.ac.ir

Javad Haddadnia
Department of Electrical and Computer Engineering, Faculty of Electrical Engineering, Tarbiat Moallem University of Sabzevar, Sabzevar, Iran
haddadnia@sttu.ac.ir

Abstract—Edge detection is the first step in image segmentation. Image segmentation is the process of partitioning a digital image into multiple regions or sets of pixels, and edge detection is one of the most frequently used techniques in digital image processing. The goal of edge detection is to locate the pixels in the image that correspond to the edges of the objects seen in the image. Edge detection consists of three steps: filtering, enhancement and detection. Images are often corrupted by random variations in intensity values, called noise; some common types are salt-and-pepper noise, impulse noise and Gaussian noise. There is, however, a trade-off between edge strength and noise reduction: more filtering to reduce noise results in a loss of edge strength. To facilitate the detection of edges, it is essential to determine changes in intensity in the neighborhood of a point. Enhancement emphasizes pixels where there is a significant change in local intensity values and is usually performed by computing the gradient magnitude. Many points in an image have a nonzero gradient, and not all of these points are edges for a particular application; therefore, some method is needed to determine which points are edge points. The four most frequently used edge detection methods are compared here: Roberts, Sobel, Prewitt and Canny edge detection. Another edge detection method is spatial filtering. This paper presents a special mask for spatial filtering and compares the standard edge detection algorithms (Sobel, Canny, Prewitt and Roberts) with it.

Keywords—Spatial Filtering, Median Filter, Edge Detection, Image Segmentation.

I. INTRODUCTION

Over the years, several methods have been proposed for image edge detection, the process of marking the points in a digital image where luminous intensity changes sharply. Such methods have been applied in various areas such as traffic speed estimation [5], image compression [6] and classification of images [7]. Most traditional edge-detection algorithms in image processing convolve a filter operator with the input image and then map overlapping input image regions to output signals, which leads to considerable loss in edge detection [8,9]. Edge and feature points are basic low-level primitives for image processing, and edge and feature detection are two of the most common operations in image analysis. An edge in an image is a contour across which the brightness of the image changes abruptly. In image processing, an edge is often interpreted as one class of singularity. In a function, singularities can be characterized easily as discontinuities where the gradient approaches infinity. However, image data is discrete, so edges in an image are often defined as local maxima of the gradient; this is the definition used here. This topic has attracted many researchers and many achievements have been made [11-18]. For example, Rooms et al. proposed estimating out-of-focus blur in the wavelet domain by examining the sharpness of the sharpest edges [11]. Hanghang Tong et al. proposed new blur detection schemes, based on edge type and sharpness analysis using Haar wavelet transforms, that can determine whether an image is blurred and to what extent [12]. X. Marichal proposed using DCT information to qualitatively characterize blur extent [13]. Berthold K. Horn et al. described the processing performed in producing a line drawing from an image obtained through an image dissector camera, whose edge-marking phase uses a non-linear parallel line-follower [14]. Lixia Xue et al. proposed an edge detection algorithm for multispectral remote sensing images, extending the one-dimensional cloud-space mapping model to a multidimensional model [15]. Mike Heath et al. presented a paradigm based on experimental psychology and statistics, in which humans rate the output of low-level vision algorithms; they demonstrated the proposed experimental strategy by comparing four well-known edge detectors: Canny, Nalwa–Binford, Sarkar–Boyer, and Sobel [16]. Hoover et al. at USF conducted such a comparison study based on manually constructed ground truth for range segmentation tasks [17]. Krishna Kant Chintalapudi et al. showed that localized edge detection techniques are non-trivial to design in an arbitrarily deployed sensor network; they defined the notion of an edge and developed performance metrics for evaluating localized edge detection algorithms [10,18]. Using specific linear time-invariant (LTI) filters is the most common procedure applied to the edge detection problem, and the one which results in the least computational


http://sites.google.com/site/ijcsis/ ISSN 1947-5500


effort. In the case of first-order filters, an edge is interpreted as an abrupt variation in gray level between two neighboring pixels. The goal in this case is to determine at which points in the image the first derivative of the gray level, as a function of position, has high magnitude. By applying a threshold to the new output image, edges in arbitrary directions are detected. Alternatively, the output of the edge detection filter is the input of a polygonal approximation technique that extracts the features to be measured. A very important role in image analysis is played by what are termed feature points: pixels identified as having a special property. Feature points include edge pixels as determined by the well-known classic edge detectors of Prewitt, Sobel, Roberts and Canny, and by spatial filtering. Classical operators identify a pixel as a particular class of feature point by carrying out some series of operations within a window centered on the pixel under scrutiny. The classic operators work well where the area of the image under study has high contrast; in fact, they work very well within regions of an image that can be converted into a binary image by simple thresholding [1].

This paper is organized as follows. Section II provides some information about edge detection. Section III presents simulation results and compares the various edge detection methods. Section IV presents the conclusion.

II. EDGE DETECTION

An edge in an image is a significant local change in the image intensity, usually associated with a discontinuity in either the image intensity or its first derivative. Discontinuities in the image intensity can be either step edges, where the image intensity abruptly changes from one value on one side of the discontinuity to a different value on the opposite side, or line edges, where the image intensity abruptly changes value but then returns to the starting value within some short distance. However, step and line edges are rare in real images: because of low-frequency components or the smoothing introduced by most sensing devices, sharp discontinuities rarely exist in real signals. Step edges become ramp edges and line edges become roof edges, where the intensity changes are not instantaneous but occur over a finite distance. Illustrations of these edge shapes are shown in Fig.1.

A. Steps in Edge Detection
Edge detection consists of three steps, namely filtering, enhancement and detection. An overview of each step follows.
1) Filtering: Images are often corrupted by random variations in intensity values, called noise. Some common types of noise are salt-and-pepper noise, impulse noise and Gaussian noise; salt-and-pepper noise consists of random occurrences of both black and white intensity values. However, there is a trade-off between edge strength and noise reduction: more filtering to reduce noise results in a loss of edge strength.
2) Enhancement: To facilitate the detection of edges, it is essential to determine changes in intensity in the neighborhood of a point. Enhancement emphasizes pixels where there is a significant change in local intensity values and is usually performed by computing the gradient magnitude.
3) Detection: Many points in an image have a nonzero gradient, and not all of these points are edges for a particular application. Therefore, some method is needed to determine which points are edge points; frequently, thresholding provides the criterion used for detection.

B. Edge Detection Methods
The four most frequently used edge detection methods are compared: (1) Roberts edge detection, (2) Sobel edge detection, (3) Prewitt edge detection and (4) Canny edge detection. Another edge detection method is spatial filtering. The details of these methods are as follows:
1) The Roberts Detection: The Roberts Cross operator performs a simple, quick-to-compute, 2-D spatial gradient measurement on an image. It thus highlights regions of high spatial frequency, which often correspond to edges. In its most common usage, the input to the operator is a grayscale image, as is the output. Pixel values at each point in the output represent the estimated absolute magnitude of the spatial gradient of the input image at that point. Fig.2 shows the Roberts mask.

Edge detection techniques transform images into edge images by exploiting the changes of grey tones in the images. Edges are the sign of a lack of continuity, and of ending. As a result of this transformation, an edge image is obtained without changing the physical qualities of the main image. Objects consist of numerous parts of different color levels. In an image with different grey levels, despite an obvious change in the grey levels of the object, the shape of the image can be distinguished, as in Fig.1.

Figure 1. Type of Edges (a) Step Edge (b) Ramp Edge (c) Line Edge (d) Roof Edge


Figure 2. Roberts Mask
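The Roberts Cross measurement described above can be sketched as follows. The paper works in MATLAB and its mask figure is not reproduced in this text, so this illustrative Python version assumes the standard 2x2 Roberts Cross kernels and approximates the gradient magnitude as |Gx| + |Gy|.

```python
# A minimal sketch of the Roberts Cross operator on a grayscale image
# stored as nested lists; the two 2x2 kernels each respond to one diagonal.
ROBERTS_GX = [[1, 0], [0, -1]]
ROBERTS_GY = [[0, 1], [-1, 0]]

def roberts_magnitude(image):
    """Approximate gradient magnitude |Gx| + |Gy| at each valid pixel."""
    rows, cols = len(image), len(image[0])
    out = [[0] * (cols - 1) for _ in range(rows - 1)]
    for y in range(rows - 1):
        for x in range(cols - 1):
            gx = sum(ROBERTS_GX[i][j] * image[y + i][x + j]
                     for i in range(2) for j in range(2))
            gy = sum(ROBERTS_GY[i][j] * image[y + i][x + j]
                     for i in range(2) for j in range(2))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A tiny image with a vertical step from 0 to 9: the response is large
# along the step and zero in the flat regions.
img = [[0, 0, 9, 9],
       [0, 0, 9, 9],
       [0, 0, 9, 9]]
print(roberts_magnitude(img))
```

Because the kernels are only 2x2, the operator is very cheap but also sensitive to noise, which is the trade-off noted in the filtering step above.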

2) The Prewitt Detection: The Prewitt edge detector is an appropriate way to estimate the magnitude and orientation of an edge. Although differential gradient edge detection needs a rather time-consuming calculation to estimate the orientation from the magnitudes in the x- and y-directions, compass edge detection obtains the orientation directly from the kernel with the maximum response. The Prewitt operator is limited to 8 possible orientations; however, experience shows that most direct orientation estimates are not much more accurate. This gradient-based edge detector is estimated in a 3x3 neighbourhood for eight directions. All eight convolution masks are calculated, and the mask with the largest module is then selected. Fig.3 shows the Prewitt mask.
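The eight-direction compass scheme just described can be sketched as follows. The paper's own masks are not reproduced in this text, so the "north" template below is an assumption (the commonly cited Prewitt compass mask); the other seven kernels are generated by shifting its outer ring in 45° steps, and the orientation is read off the kernel with the maximum response.

```python
# A sketch of Prewitt compass edge detection: eight 3x3 kernels, one per
# 45-degree orientation, obtained by rotating the outer ring of a base
# template; the strongest response selects both magnitude and orientation.
RING = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]

def rotate45(kernel):
    """Rotate a 3x3 compass kernel by 45 degrees (shift its outer ring)."""
    out = [row[:] for row in kernel]
    vals = [kernel[r][c] for r, c in RING]
    vals = vals[-1:] + vals[:-1]          # one step around the ring
    for (r, c), v in zip(RING, vals):
        out[r][c] = v
    return out

NORTH = [[ 1,  1,  1],                    # assumed base template
         [ 1, -2,  1],
         [-1, -1, -1]]
KERNELS = [NORTH]
for _ in range(7):
    KERNELS.append(rotate45(KERNELS[-1]))

def compass_response(window):
    """Return (largest response, orientation index 0..7) for a 3x3 window."""
    scores = [sum(k[i][j] * window[i][j] for i in range(3) for j in range(3))
              for k in KERNELS]
    best = max(range(8), key=lambda i: scores[i])
    return scores[best], best

# Bright above, dark below: the "north" kernel (index 0) wins.
print(compass_response([[9, 9, 9], [9, 9, 9], [0, 0, 0]]))
```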

Figure 5. Edge patterns for Sobel edge detector

4) The Canny Detection: Canny edge detection is an important step towards mathematically solving edge detection problems. This edge detection method is optimal for step edges corrupted by white noise [2]. The Canny algorithm uses an optimal edge detector based on a set of criteria: finding the most edges by minimizing the error rate, marking edges as closely as possible to the actual edges to maximize localization, and marking edges only once when a single edge exists, for minimal response [3]. Canny used three criteria to design his edge detector. The first requirement is reliable detection of edges, with a low probability of missing true edges and a low probability of detecting false edges. Second, the detected edges should be close to the true location of the edge. Lastly, there should be only one response to a single edge. To quantify these criteria, the following functions are defined:

SNR(f) = \frac{A}{n_0}\,\frac{\left|\int_{-\infty}^{0} f(x)\,dx\right|}{\left[\int_{-\infty}^{\infty} f^{2}(x)\,dx\right]^{1/2}}          (1)

Figure 3. Prewitt Mask

3) The Sobel Detection: The Sobel operator performs a 2-D spatial gradient measurement on an image and so emphasizes regions of high spatial frequency that correspond to edges. Typically it is used to find the approximate absolute gradient magnitude at each point in an input grayscale image. In theory at least, the operator consists of a pair of 3x3 convolution kernels, one kernel simply being the other rotated by 90°; this is very similar to the Roberts Cross operator. The convolution masks of the Sobel detector are given in Fig.4, and Fig.5 shows the edge patterns for the Sobel edge detector.
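As a minimal sketch of the Sobel measurement just described (the paper's mask figure is not reproduced in this text, so the standard Sobel kernel pair is assumed; note one kernel is the other rotated by 90°):

```python
# A sketch of the Sobel operator: convolve the image with the standard
# horizontal and vertical 3x3 kernels and approximate the gradient
# magnitude as |Gx| + |Gy| over the valid (unpadded) region.
SOBEL_GX = [[-1, 0, 1],
            [-2, 0, 2],
            [-1, 0, 1]]
SOBEL_GY = [[-1, -2, -1],
            [ 0,  0,  0],
            [ 1,  2,  1]]

def sobel_magnitude(image):
    """Approximate gradient magnitude |Gx| + |Gy| at each valid pixel."""
    rows, cols = len(image), len(image[0])
    out = [[0] * (cols - 2) for _ in range(rows - 2)]
    for y in range(rows - 2):
        for x in range(cols - 2):
            gx = sum(SOBEL_GX[i][j] * image[y + i][x + j]
                     for i in range(3) for j in range(3))
            gy = sum(SOBEL_GY[i][j] * image[y + i][x + j]
                     for i in range(3) for j in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: the response is uniformly strong along the step.
img = [[0, 0, 9, 9]] * 4
print(sobel_magnitude(img))
```

The center-weighted coefficients (±2) give Sobel some built-in smoothing along the edge direction, which is why it tolerates noise better than the Roberts Cross.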

Loc(f) = \frac{A}{n_0}\,\frac{\left|f'(0)\right|}{\left[\int_{-\infty}^{\infty} f'^{2}(x)\,dx\right]^{1/2}}          (2)

Figure 4. Sobel Mask

where A is the amplitude of the signal and n_0^2 is the variance of the noise; SNR(f) defines the signal-to-noise ratio and Loc(f) defines the localization of the filter f(x). The Canny edge detection algorithm runs in 5 separate steps:
1. Smoothing: blurring the image to remove noise.
2. Finding gradients: edges should be marked where the gradients of the image have large magnitudes.
3. Non-maximum suppression: only local maxima should be marked as edges.
4. Double thresholding: potential edges are determined by thresholding.
5. Edge tracking by hysteresis: final edges are determined by suppressing all edges that are not connected to a very certain (strong) edge [19].
5) The Spatial Filtering Detection: we implement image edge detection so that we can identify the boundary of objects


in an image. For this, we apply a spatial mask; Fig.6 shows the spatial mask.

    [ -1  -2  -1 ]
    [ -2   0   2 ]
    [  1   2   1 ]

Figure 6. Spatial Mask

The mechanics of spatial filtering are illustrated in Fig.7. The process consists simply of moving the center of the filter mask ω from point to point in an image f. At each point (x, y), the response of the filter is the sum of the products of the filter coefficients and the corresponding neighborhood pixels in the area spanned by the filter mask [4].
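The sliding-mask mechanics described above can be sketched as follows (an illustrative Python version of the MATLAB procedure, using the special mask of Fig.6 and processing only the valid region, with no border padding):

```python
# A sketch of the spatial-filtering mechanics: slide the 3x3 mask w over
# the image f and, at each (x, y), sum the products of the mask
# coefficients and the pixels they cover. The mask is the special mask
# of Fig.6.
MASK = [[-1, -2, -1],
        [-2,  0,  2],
        [ 1,  2,  1]]

def spatial_filter(image):
    """Correlate `image` with MASK over the valid region (no padding)."""
    rows, cols = len(image), len(image[0])
    out = [[0] * (cols - 2) for _ in range(rows - 2)]
    for y in range(rows - 2):
        for x in range(cols - 2):
            out[y][x] = sum(MASK[i][j] * image[y + i][x + j]
                            for i in range(3) for j in range(3))
    return out

# The mask coefficients sum to zero, so a uniform (edge-free) region
# produces zero response — only intensity changes survive.
flat = [[5] * 4 for _ in range(4)]
print(spatial_filter(flat))  # → [[0, 0], [0, 0]]
```

The zero-sum property is what makes this mask behave as an edge detector: flat areas are suppressed and only transitions in intensity yield a nonzero response.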

Figure 7. The Mechanics of Spatial Filtering

III. SIMULATION RESULTS

The algorithm for image edge detection was tested on various images, and its outputs were compared with those of the existing edge detection algorithms. It was observed that the outputs of this algorithm show much more distinctly marked edges and thus have a better visual appearance than those currently in use. The sample output in Fig.8 compares the "Sobel", "Roberts", "Prewitt" and "Canny" edge detection algorithms with one another, and Fig.9 compares them with the "Spatial Filtering" algorithm. It can be observed that the output generated by "Spatial Filtering" finds the edges of the image more distinctly than any one of the standard edge detection algorithms (Sobel, Canny, Prewitt and Roberts). Moreover, "Spatial Filtering" traces more of the edges, and its outputs provide much more distinctly marked edges and thus a better visual appearance than the existing standard methods. Thus the "Spatial Filtering" edge detection algorithm provides better edge detection, extracts the edges with very high efficiency, and specifically avoids double edges, yielding an image with single edges.

Figure 8. Results of our algorithm compared with standard edge detection algorithms (Sobel, Canny, Prewitt and Roberts)

Figure 9. Results of our algorithm compared with Spatial Filtering


IV. CONCLUSION

This paper considered two methods for edge detection. In the first, the standard edge detection algorithms (Sobel, Canny, Prewitt and Roberts) were used for edge detection; in the second, the special spatial filtering method was used. It can be observed that the output generated by "Spatial Filtering" finds the edges of the image more distinctly than any one of the standard edge detection algorithms (Sobel, Canny, Prewitt and Roberts). Moreover, "Spatial Filtering" traces more of the edges, and its outputs provide much more distinctly marked edges and thus a better visual appearance than the existing standard methods. Thus the "Spatial Filtering" edge detection algorithm provides better edge detection, extracts the edges with very high efficiency, and specifically avoids double edges, yielding an image with single edges.

REFERENCES
[1] Abdallah A. Alshennawy and Ayman A. Aly, "Edge Detection in Digital Images Using Fuzzy Logic Technique", World Academy of Science, Engineering and Technology, 51, 2009.
[2] N. Senthilkumaran and R. Rajesh, "Edge Detection Techniques for Image Segmentation – A Survey of Soft Computing Approaches", International Journal of Recent Trends in Engineering, Vol. 1, No. 2, May 2009.
[3] Hong Shan Neoh and Asher Hazanchuk, "Adaptive Edge Detection for Real-Time Video Processing using FPGAs".
[4] N. B. Bahadure, "Image Processing: Filteration, Gray Slicing, Enhancement, Quantization, Edge Detection and Blurring of Images in Matlab", International Journal of Electronic Engineering Research, ISSN 0975-6450, Vol. 2, No. 2, 2010, pp. 145–151.
[5] D. J. Dailey, F. W. Cathey and S. Pumrin, "An Algorithm to Estimate Mean Traffic Speed Using Uncalibrated Cameras", IEEE Transactions on Intelligent Transport Systems, Vol. 1, 2000.
[6] U. Y. Desai, M. M. Mizuki, I. Masaki and K. P. Berthold, "Edge and Mean Based Image Compression", Massachusetts Institute of Technology Artificial Intelligence Laboratory, A.I. Memo No. 1584, 1996.
[7] B. Rafkind, M. Lee, Shih-Fu and C. H. Yu, "Exploring Text and Image Features to Classify Images in Bioscience Literature", Proceedings of the BioNLP Workshop on Linking Natural Language Processing and Biology at HLT-NAACL 06, pp. 73–80, New York City, 2006.
[8] A. Roka, Á. Csapó, B. Reskó and P. Baranyi, "Edge Detection Model Based on Involuntary Eye Movements of the Eye-Retina System", Acta Polytechnica Hungarica, Vol. 4, 2007.
[9] Shashank Mathur and Anil Ahlawat, "Application of Fuzzy Logic on Image Edge Detection", Intelligent Technologies and Applications.
[10] Leila Fallah Araghi and Mohammad Reza Arvan, "An Implementation of Image Edge and Feature Detection Using Neural Network", Proceedings of the International MultiConference of Engineers and Computer Scientists 2009, Vol. I, IMECS 2009, March 18-20, 2009, Hong Kong.
[11] F. Rooms and A. Pizurica, "Estimating image blur in the wavelet domain", ProRISC 2001, pp. 568–572.
[12] Hanghang Tong, Mingjing Li, Hongjiang Zhang and Changshui Zhang, "Blur Detection for Digital Images Using Wavelet Transform", ICME04, 2004.
[13] X. Marichal, W. Y. Ma and H. J. Zhang, "Blur Determination in the Compressed Domain Using DCT Information", Proceedings of the IEEE ICIP'99, pp. 386–390.
[14] Berthold K. P. Horn, "The Binford-Horn LINE-FINDER", Massachusetts Institute of Technology Artificial Intelligence Laboratory, 1971.
[15] Lixia Xue and Zuocheng Wang, "An Edge Detection Algorithm for Remote Sensing Image", The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B3b, Beijing, 2008.
[16] Mike Heath, Sudeep Sarkar, Thomas Sanocki and Kevin Bowyer, "Comparison of Edge Detectors: A Methodology and Initial Study", Computer Vision and Image Understanding, Vol. 69, No. 1, pp. 38–54, January 1998.
[17] A. Hoover, G. Jean-Baptiste, X. Jiang, P. J. Flynn, H. Bunke, D. Goldgof and K. Bowyer, "Range image segmentation: The user's dilemma", International Symposium on Computer Vision, 1995, pp. 323–328.
[18] K. Chintalapudi and R. Govindan, "Localized Edge Detection in Sensor Fields", Ad-hoc Networks Journal, 2003.
[19] J. Canny, "A Computational Approach to Edge Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, Nov. 1986.

AUTHORS PROFILE

Ehsan Azimi Rad received the B.Sc. degree in computer engineering and the M.Sc. degree in control engineering with honors from the Ferdowsi University of Mashhad, Mashhad, Iran, in 2006 and 2009, respectively. He is now a Ph.D. student in electrical and electronic engineering at Tarbiat Moallem University of Sabzevar in Iran. His research interests are fuzzy control systems and their applications in urban traffic and other problems, nonlinear control, image processing and pattern recognition.

Javad Haddadnia received his B.S. and M.S. degrees in electrical and electronic engineering with the first rank from Amirkabir University of Technology, Tehran, Iran, in 1993 and 1995, respectively. He received his Ph.D. degree in electrical engineering from Amirkabir University of Technology, Tehran, Iran in 2002. He joined Tarbiat Moallem University of Sabzevar in Iran. His research interests include neural networks, digital image processing, computer vision, and face detection and recognition. He has published several papers in these areas. He served as a Visiting Research Scholar at the University of Windsor, Canada during 2001-2002. He is a member of SPIE, CIPPR, and IEICE.

