(IJCSIS) International Journal of Computer Science and Information Security, Vol. 11, No. 12, 2013
A novel non-Shannon edge detection algorithm for noisy images
El-Owny, Hassan Badry Mohamed A.
Department of Mathematics, Faculty of Science, Aswan University, 81528 Aswan, Egypt.
Current: CIT College, Taif University, 21974 Taif, KSA.
Abstract — Edge detection is an important preprocessing step in image analysis, and successful results of image analysis depend strongly on it. Up to now, several edge detection methods have been developed, such as Prewitt, Sobel, Zero-crossing and Canny, but they are sensitive to noise. This paper proposes a novel edge detection algorithm for images corrupted with noise. The algorithm finds the edges by eliminating the noise from the image so that the correct edges are determined. The edges of the noisy image are determined using non-Shannon measures of entropy. The proposed method is tested under noisy conditions on several images and compared with conventional edge detectors such as the Sobel and Canny detectors. Experimental results reveal that the proposed method exhibits better performance and may efficiently be used for the detection of edges in images corrupted by Salt-and-Pepper noise.

Keywords — Non-Shannon Entropy; Edge Detection; Threshold Value; Noisy Images.
I. INTRODUCTION
Edge detection has been used extensively in areas related to image and signal processing. Its uses include pattern recognition, image segmentation, and scene analysis. Edges are also used to locate objects in an image and to measure their geometrical features. Hence, edge detection is an important identification and classification tool in computer vision. This topic has attracted many researchers, and several achievements have been made in investigating new and more robust techniques.

Natural images are prone to noise and artifacts. Salt-and-pepper noise is a form of noise typically seen on images, usually manifested as randomly occurring white and black pixels. It creeps into images in situations where quick transients, such as faulty switching, take place. White noise, on the other hand, is additive in nature: each pixel in the image is modified via the addition of a value drawn from a Gaussian distribution. To test the generality of the results, the proposed edge detection algorithm was tested on images containing both of these types of noise.

A large number of studies have been published in the field of image edge detection [1-16], which attests to its importance within the field of image processing. Many edge detection algorithms have been proposed, each of which has its own strengths and weaknesses; for this reason, there does not yet appear to be a single "best" edge detector. A good edge detector should be able to detect the edge for any type of image and should show high resistance to noise. Examples of approaches to edge detection include algorithms such as the Sobel and Prewitt edge detectors, which are based on the first-order derivative of the pixel intensities [1]. The Laplacian-of-Gaussian (LoG) edge detector is another popular technique, using instead second-order differential operators to detect the location of edges [2,17,18].
However, all of these algorithms tend to be sensitive to noise, which is an intrinsically high-frequency phenomenon. To solve this problem the Canny edge detector was proposed, which combines a smoothing function with zero-crossing-based edge detection [3]. Although it is more resilient to noise than the previously mentioned algorithms, its performance is still not satisfactory when the noise level is high. There are also many situations where sharp changes in color intensity do not correspond to object boundaries, such as surface markings, recording noise and uneven lighting conditions [4-7,19-22].

In this paper we present a new approach to detecting the edges of grayscale noisy images based on information theory, namely entropy-based thresholding. The proposed method decreases the computation time as much as possible, and its results compare very well with those of the other methods.

The paper is organized as follows: Section 2 describes in brief the basic concepts of Shannon and non-Shannon entropies. Section 3 is devoted to the proposed method of edge detection. In Section 4, the details of the edge detection algorithm are described. In Section 5, some particular images will be analyzed using the proposed algorithm and, moreover, a comparison with some existing methods will be provided for these images. Finally, conclusions are drawn in Section 6.

II. BASIC CONCEPT OF ENTROPY
Physically, entropy can be associated with the amount of disorder in a physical system. In [23], Shannon redefined the entropy concept of Boltzmann/Gibbs as a measure of uncertainty regarding the information content of a system. He defined an expression for measuring quantitatively the amount of information produced by a process.

In accordance with this definition, a random event $E$ that occurs with probability $p(E)$ is said to contain
$$ I(E) = \ln \frac{1}{p(E)} = -\ln p(E) $$
units of information. The amount $I(E)$ is called the self-information of event $E$. The amount of self-information of the event is inversely proportional to its probability. If $p(E) = 1$, then $I(E) = 0$ and no information is attributed to it; in this case, the uncertainty associated with the event is zero. Thus, if the event always occurs, then no information would be transferred by communicating that the event has occurred. If $p(E) = 0.8$, then some information would be transferred by communicating that the event has occurred [24].

http://sites.google.com/site/ijcsis/ ISSN 1947-5500

The basic concept of entropy in information theory has to do with how much randomness is in a signal or in a random event. An alternative way to look at this is to talk about how much information is carried by the signal [25]. Entropy is a measure of randomness. Let $p_1, p_2, \dots, p_k$ be the probability distribution of a discrete source. Therefore, $0 \le p_i \le 1$, $i = 1, 2, \dots, k$, and $\sum_{i=1}^{k} p_i = 1$, where $k$ is the total number of states. The entropy of a discrete source is often obtained from the probability distribution. The Shannon entropy can be defined as
$$ S = -\sum_{i=1}^{k} p_i \ln p_i . $$
This formalism has been shown to be restricted to the domain of validity of the Boltzmann–Gibbs–Shannon (BGS) statistics. These statistics seem to describe nature when the effective microscopic interactions and the microscopic memory are short ranged. Generally, systems that obey BGS statistics are called extensive systems. If we consider that a physical system can be decomposed into two statistically independent subsystems $A$ and $B$, the probability of the composite system is $p^{A+B} = p^A \, p^B$, and it has been verified that the Shannon entropy has the extensive (additive) property:
$$ S(A+B) = S(A) + S(B). \qquad (1) $$
More generally, from [25],
$$ S(A+B) = S(A) + S(B) + \psi(\alpha)\, S(A)\, S(B), \qquad (2) $$
where $\psi(\alpha)$ is a function of the entropic index. For the Shannon entropy, $\alpha = 1$.
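The additivity property (1) is easy to check numerically. The sketch below (illustrative only; the distributions and the function name are ours, not from the paper) computes the Shannon entropy and verifies additivity for two independent subsystems:

```python
import math

def shannon_entropy(p):
    """Shannon entropy S = -sum(p_i * ln p_i) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# Two independent subsystems A and B: the joint distribution is the
# outer product p^{A+B} = p^A * p^B, and S is additive (Equation 1).
pA = [0.5, 0.5]
pB = [0.2, 0.8]
pAB = [a * b for a in pA for b in pB]
assert abs(shannon_entropy(pAB) - (shannon_entropy(pA) + shannon_entropy(pB))) < 1e-12
```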
Rènyi entropy [26] for the generalized distribution can be written as follows:
$$ R_{\alpha} = \frac{1}{1-\alpha}\,\ln \sum_{i=1}^{k} p_i^{\alpha}, \qquad \alpha \ne 1,\ \alpha > 0; $$
this expression meets the BGS entropy in the limit $\alpha \to 1$. Rènyi entropy has a nonextensive property for statistically independent systems, defined by the following pseudo-additive entropic formula:
$$ R_{\alpha}(A+B) = R_{\alpha}(A) + R_{\alpha}(B) + (1-\alpha)\, R_{\alpha}(A)\, R_{\alpha}(B). $$
Tsallis [27-29] has proposed a generalization of the BGS statistics, based on the generalized entropic form
$$ T_{\alpha} = \frac{1 - \sum_{i=1}^{k} p_i^{\alpha}}{\alpha - 1}, $$
where $k$ is the total number of possibilities of the system and the real number $\alpha$ is an entropic index that characterizes the degree of nonextensivity. This expression meets the BGS entropy in the limit $\alpha \to 1$. The Tsallis entropy is nonextensive in such a way that, for a statistically independent system, the entropy of the system is defined by the following pseudo-additive entropic rule:
$$ T_{\alpha}(A+B) = T_{\alpha}(A) + T_{\alpha}(B) + (1-\alpha)\, T_{\alpha}(A)\, T_{\alpha}(B). $$
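As a sketch (ours, not from the paper), the Rènyi and Tsallis entropies can be implemented directly from the definitions above; the snippet checks the $\alpha \to 1$ limit and the Tsallis pseudo-additive rule numerically:

```python
import math

def renyi_entropy(p, alpha):
    """Rènyi entropy R_alpha = ln(sum p_i^alpha) / (1 - alpha), alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def tsallis_entropy(p, alpha):
    """Tsallis entropy T_alpha = (1 - sum p_i^alpha) / (alpha - 1), alpha != 1."""
    return (1.0 - sum(pi ** alpha for pi in p if pi > 0)) / (alpha - 1.0)

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.1, 0.2, 0.3, 0.4]
# Both generalized entropies approach the BGS (Shannon) entropy as alpha -> 1.
for alpha in (0.999, 1.001):
    assert abs(renyi_entropy(p, alpha) - shannon_entropy(p)) < 1e-2
    assert abs(tsallis_entropy(p, alpha) - shannon_entropy(p)) < 1e-2

# Tsallis pseudo-additivity for independent subsystems A and B:
# T(A+B) = T(A) + T(B) + (1 - alpha) * T(A) * T(B)
alpha = 0.7
pA, pB = [0.5, 0.5], [0.2, 0.8]
pAB = [a * b for a in pA for b in pB]
tA, tB = tsallis_entropy(pA, alpha), tsallis_entropy(pB, alpha)
assert abs(tsallis_entropy(pAB, alpha) - (tA + tB + (1 - alpha) * tA * tB)) < 1e-12
```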
The generalized entropy of Kapur of order $\alpha$ and type $\beta$ [30,31] is
$$ H_{\alpha,\beta}(p) = \frac{1}{\beta - \alpha}\,\ln \frac{\sum_{i=1}^{k} p_i^{\alpha}}{\sum_{i=1}^{k} p_i^{\beta}}, \qquad \alpha \ne \beta,\ \alpha, \beta > 0. \qquad (3) $$
In the limiting case, when $\alpha \to 1$ and $\beta \to 1$, $H_{\alpha,\beta}(p)$ reduces to the Shannon entropy $S$, and when $\beta \to 1$, $H_{\alpha,\beta}(p)$ reduces to the Rènyi entropy $R_{\alpha}$. Also, $H_{\alpha,\beta}(p)$ is a composite function which satisfies pseudo-additivity as:
$$ H_{\alpha,\beta}(A+B) = H_{\alpha,\beta}(A) + H_{\alpha,\beta}(B) + (1-\alpha)\, H_{\alpha,\beta}(A)\, H_{\alpha,\beta}(B). \qquad (4) $$
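The limiting behaviour of the Kapur entropy (3) can be checked numerically. The following sketch is illustrative (the test distribution and function names are ours):

```python
import math

def kapur_entropy(p, alpha, beta):
    """Kapur entropy of order alpha and type beta (Equation 3)."""
    num = sum(pi ** alpha for pi in p if pi > 0)
    den = sum(pi ** beta for pi in p if pi > 0)
    return math.log(num / den) / (beta - alpha)

def renyi_entropy(p, alpha):
    return math.log(sum(pi ** alpha for pi in p if pi > 0)) / (1.0 - alpha)

def shannon_entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.1, 0.2, 0.3, 0.4]
# beta -> 1: Kapur entropy reduces to the Rènyi entropy of order alpha.
assert abs(kapur_entropy(p, 0.5, 1.0001) - renyi_entropy(p, 0.5)) < 1e-2
# alpha -> 1 and beta -> 1: Kapur entropy reduces to the Shannon entropy.
assert abs(kapur_entropy(p, 0.999, 1.001) - shannon_entropy(p)) < 1e-2
```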
 
III. SELECTION OF THRESHOLD VALUE BASED ON KAPUR ENTROPY
A gray-level image can be represented by an intensity function, which determines the gray-level value for each pixel in the image. Specifically, in a digital image of size $M \times N$, an intensity function $f(x, y)$, $\{f(x, y) \mid x = 1, 2, \dots, M,\ y = 1, 2, \dots, N\}$, takes as input a particular pixel of the image and outputs its gray-level value, which is usually in the range 0 to 255 (if 256 levels are used).

Thresholding produces a new image based on the original one represented by $f$. It is basically another function $g(x, y)$, which produces a new image (i.e. the thresholded image). A threshold is calculated for each pixel value. This threshold is compared with the original image (i.e. $f(x, y)$) to determine the new value of the current pixel. $g$ can be represented by the following equation [31,32]:
$$ g(x, y) = \begin{cases} 0, & \text{if } f(x, y) < t \\ 1, & \text{if } f(x, y) \ge t \end{cases} $$
where $t$ is the thresholding value.
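The thresholding function above can be sketched in a few lines (a minimal illustration; the toy image and threshold value are made up):

```python
def threshold_image(f, t):
    """Binarize image f: g(x, y) = 0 if f(x, y) < t, else 1."""
    return [[0 if v < t else 1 for v in row] for row in f]

f = [[ 12, 200,  35],
     [180,  90,  15],
     [ 60, 240, 128]]
g = threshold_image(f, 100)
# Pixels >= 100 map to 1, the rest to 0.
assert g == [[0, 1, 0], [1, 0, 0], [0, 1, 1]]
```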
When entropy is applied to image-processing techniques, it measures the normality (i.e. normal or abnormal) of a particular gray-level distribution of an image. When a whole image is considered, the Kapur entropy as defined in (3) will indicate to what extent the intensity distribution is normal. When we extend this concept to image segmentation, i.e. dealing with foreground (object) and background regions in an image, the entropy is calculated for both regions, and the resulting entropy value provides an indication of the normality of the segmentation. In this case, two equations are needed, one for each region, each of them called a priori.

In image thresholding, when applying maximum entropy, every gray-level value is a candidate to be the threshold value. Each value will be used to classify the pixels into two groups based on their gray levels and their affinity, as less or greater than the threshold value $t$. Let $p_1, p_2, \dots, p_t, p_{t+1}, \dots, p_k$ be the probability distribution for an image with $k$ gray levels, where $p_i = h(i)/(M \times N)$ is the normalized histogram and $h(i)$ is the gray-level histogram. From this distribution, we can derive two probability distributions, one for the object (class A) and the other for the background (class B), as follows:
$$ p_A : \frac{p_1}{P_A}, \frac{p_2}{P_A}, \dots, \frac{p_t}{P_A}, \qquad p_B : \frac{p_{t+1}}{P_B}, \frac{p_{t+2}}{P_B}, \dots, \frac{p_k}{P_B}, \qquad (5) $$
where
$$ P_A = \sum_{i=1}^{t} p_i, \qquad P_B = \sum_{i=t+1}^{k} p_i, \qquad (6) $$
and $t$ is the threshold value. In terms of the definition of the Kapur entropy of order $\alpha$ and type $\beta$, the entropy of the object pixels and the entropy of the background pixels can be defined as follows:
,
   1ln 

 

 , , ,0
 (7)
,
   1ln 

 

 , , ,0 .
The Kapur entropy $H_{\alpha,\beta}(t)$ is parametrically dependent upon the threshold value $t$ for the object and background. It is formulated as the sum of the two class entropies, using the pseudo-additive property for statistically independent systems defined in (4). We try to maximize the information measure between the two classes (object and background): when $H_{\alpha,\beta}(t)$ is maximized, the luminance level $t$ that maximizes the function is considered to be the optimum threshold value. This can be achieved with cheap computational effort:
$$ t^{*} = \operatorname*{Argmax}_{t}\left[\, H_{\alpha,\beta}^{A}(t) + H_{\alpha,\beta}^{B}(t) + (1-\alpha)\, H_{\alpha,\beta}^{A}(t)\, H_{\alpha,\beta}^{B}(t) \,\right]. \qquad (8) $$
When $\alpha \to 1$ and $\beta \to 1$, the threshold value in (8) equals the value found by Shannon entropy; thus the proposed method includes Shannon's method as a special case. The following expression can be used as a criterion function to obtain the optimal threshold in the limit $\alpha \to 1$ and $\beta \to 1$:
$$ t^{*} = \operatorname*{Argmax}_{t}\left[\, H_{\alpha,\beta}^{A}(t) + H_{\alpha,\beta}^{B}(t) \,\right]. \qquad (9) $$
Now, we can describe the Kapur threshold algorithm for determining a suitable threshold value $t^{*}$ for given $\alpha$ and $\beta$ as follows (steps 1–4 appear in Algorithm 1 below):

II. if $0 < \alpha < 1$ and $0 < \beta < 1$ then
    apply Equation (8) to calculate the optimum threshold value $t^{*}$
else
    apply Equation (9) to calculate the optimum threshold value $t^{*}$
end-if

5. Output: the suitable threshold value $t^{*}$ of $I$, for $H_{\alpha,\beta}$, $\alpha, \beta > 0$, $\alpha \ne \beta$.
IV. PROPOSED ALGORITHM FOR EDGE DETECTION
The process of spatial filtering consists simply of moving a filter mask $w$ of order $m \times n$ from point to point in an image. At each point $(x, y)$, the response of the filter at that point is calculated using a predefined relationship. We will use the usual masks for detecting the edges. Assume that $m = 2a + 1$ and $n = 2b + 1$, where $a, b$ are nonnegative integers. For this purpose, the smallest meaningful size of the mask is $3 \times 3$, as shown in Fig. 1 [1].

Fig. 1: Mask coefficients showing coordinate arrangement, $w(s, t)$, $s, t \in \{-1, 0, 1\}$.

Fig. 2: Image region under the mask, $f(x+s, y+t)$, $s, t \in \{-1, 0, 1\}$.

The image region under the above mask is shown in Fig. 2. Edge detection requires, first, the classification of all pixels that satisfy the criterion of homogeneousness, and, second, the detection of all pixels on the borders between different homogeneous areas. In the proposed scheme, we first create a binary image by choosing a suitable threshold value using the Kapur entropy. A window is then applied to the binary image. All window coefficients are set equal to 1 except the centre, which is set equal to ×, as shown in Fig. 3.

1 1 1
1 × 1
1 1 1

Fig. 3: The window applied to the binary image.

Move the window over the whole binary image and find the probability $p$ of each central pixel of the image under the window. Then, the entropy of each central pixel of the image under the window is calculated as $H = -p \ln p$.
TABLE I. $p$ AND $H$ OF THE CENTRAL PIXEL UNDER THE WINDOW

p:  1/9     2/9     3/9     4/9     5/9     6/9     7/9     8/9
H:  0.2441  0.3342  0.3662  0.3604  0.3265  0.2703  0.1955  0.1047
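The values in Table I can be reproduced directly. The sketch below (illustrative; it assumes $p$ is the fraction of 1-valued pixels among the nine cells of the 3×3 window, and the sample window is made up) evaluates $H = -p \ln p$ for each possible fraction:

```python
import math

# H = -p * ln(p) for the possible window fractions p = k/9 (Table I).
expected = [0.2441, 0.3342, 0.3662, 0.3604, 0.3265, 0.2703, 0.1955, 0.1047]
for k, e in zip(range(1, 9), expected):
    p = k / 9
    H = -p * math.log(p)
    assert abs(H - e) < 5e-4

# Probability of a central pixel of a binary image under the 3x3 window:
window = [[1, 0, 1],
          [1, 1, 0],
          [0, 1, 1]]
p = sum(sum(row) for row in window) / 9
assert abs(p - 6 / 9) < 1e-12
```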
ALGORITHM 1: THRESHOLD VALUE SELECTION (KAPUR THRESHOLD)

1. Input: a digital grayscale noisy image $I$ of size $M \times N$.
2. Let $f(x, y)$ be the original gray value of the pixel at the point $(x, y)$, $x = 1, 2, \dots, M$, $y = 1, 2, \dots, N$.
3. Calculate the probability distribution $p_i$, $0 \le i \le 255$.
4. For all $t = 0, 1, \dots, 255$:
    I. Apply Equations (5) and (6) to calculate $p_A$, $p_B$, $P_A$ and $P_B$.
