
Edge Detection System For Noisy Images

CHAPTER 1
DIGITAL IMAGE PROCESSING
1.1. INTRODUCTION
Digital image processing is an area characterized by the need for extensive
experimental work to establish the viability of proposed solutions to a given problem.
An important characteristic underlying the design of image processing systems is the
significant level of testing and experimentation required. This characteristic implies that the
ability to formulate approaches and quickly prototype candidate solutions generally
plays a major role in reducing the cost and time required to arrive at a viable system
implementation.

An image may be defined as a two-dimensional function f(x, y), where x and y are
spatial (plane) coordinates, and the amplitude of f at any pair of coordinates (x, y) is
called the intensity or gray level of the image at that point. When x, y, and the intensity
values of f are all finite, discrete quantities, we call the image a digital image. The field
of digital image processing refers to processing digital images by means of a digital
computer. The digital image is composed of a finite number of elements, each of which
has a particular location and value. The elements are called pixels.
Vision is the most advanced of our senses, so it is not surprising that images play
the single most important role in human perception. However, unlike humans, who are
limited to the visual band of the electromagnetic (EM) spectrum, imaging machines
cover almost the entire EM spectrum, ranging from gamma to radio waves. They can
operate on images generated by sources that humans are not accustomed to associating
with images.
There are no clear-cut boundaries in the continuum from image processing at
one end to computer vision at the other. However, one useful paradigm is to consider
three types of computerized processes in this continuum: low-, mid-, and high-level
processes. Low-level processes involve primitive operations such as image
preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level
process is characterized by the fact that both its inputs and outputs are images. Mid-
level processing on images involves tasks such as segmentation (partitioning an image
into regions or objects), description of those objects to reduce them to a form suitable
for computer processing, and classification (recognition) of individual objects. A mid-
level process is characterized by the fact that its inputs generally are images, but its
outputs are attributes extracted from

those images (e.g., edges, contours, and the identity of individual objects). Finally,
higher- level processing involves “making sense” of an ensemble of recognized objects,
as in image analysis, and, at the far end of the continuum, performing the cognitive
functions normally associated with vision.

Fig 1.1 Types of Computerized Processes

1.2 STEPS IN DIGITAL IMAGE PROCESSING

Fig 1.2 Block diagram of digital image processing


1.2.1 Image Acquisition
It is the process of retrieving an image from some source. The image that is acquired
is completely unprocessed and is the result of whatever hardware was used to generate
it, which can be very important in fields that need a consistent baseline from which to
work. One of the ultimate goals of this process is to have a source of input that operates
within such controlled and measured guidelines that the same image can, if necessary,
be nearly perfectly reproduced under the same conditions, so anomalous factors are
easier to locate and eliminate.
1.2.2 Image Enhancement
Image enhancement is the process of manipulating an image so that the result is more
suitable than the original for a specific application. The word specific is important
here, because it establishes at the outset that enhancement techniques are problem
oriented. Thus, for example, a method that is quite useful for enhancing X-ray
images may not be the best approach for enhancing satellite images taken in the
infrared band of the electromagnetic spectrum.
1.2.3 Image Restoration
Image restoration is an area that also deals with improving the appearance of an
image. However, unlike enhancement, which is subjective, image restoration is
objective, in the sense that restoration techniques tend to be based on mathematical or
probabilistic models of image degradation.
1.2.4 Color Image Processing
Color image processing is an area that has been gaining in importance because
of the significant increase in the use of digital images over the Internet. It covers a
number of fundamental concepts in color models and basic color processing in a
digital domain. Color is also used as the basis for extracting features of interest in an
image. Wavelets are the foundation for representing images in various degrees of
resolution. In particular, they are used for image data compression and for pyramidal
representation, in which images are subdivided successively into smaller regions.
1.2.5 Compression
Compression, as the name implies, deals with techniques for reducing the storage
required to save an image, or the bandwidth required to transmit it.
Although storage technology has improved significantly over the past decade, the same
cannot be said for transmission capacity. This is true particularly in uses of the Internet,
which are characterized by significant pictorial content. Image compression is familiar
(perhaps inadvertently) to most users of computers in the form of image file extensions,
such as the jpg file extension used in the JPEG (Joint Photographic Experts Group)
image compression standard.

1.2.6 Morphological Processing


Morphological processing deals with tools for extracting image components that
are useful in the representation and description of shape.
1.2.7 Segmentation
Segmentation procedures partition an image into its constituent parts or objects.
In general, autonomous segmentation is one of the most difficult tasks in digital image
processing. A rugged segmentation procedure brings the process a long way toward the
successful solution of imaging problems that require objects to be identified
individually. On the other hand, weak or erratic segmentation algorithms almost always
guarantee eventual failure. In general, the more accurate the segmentation, the more
likely recognition is to succeed.

1.2.8 Representation and Description


Representation and description almost always follow the output of a
segmentation stage, which usually is raw pixel data, constituting either the boundary of
a region (i.e., the set of pixels separating one image region from another) or all the
points in the region itself. In either case, converting the data to a form suitable for
computer processing is necessary. The first decision that must be made is whether the
data should be represented as a boundary or as a complete region. Boundary
representation is appropriate when the focus is on external shape characteristics, such as
corners and inflections. Regional representation is appropriate when the focus is on
internal properties, such as texture or skeletal shape.
1.2.9 Recognition
Recognition is the process that assigns a label (e.g., “vehicle”) to an object based
on its descriptors. We conclude our coverage with the development of methods for
recognition of individual objects.

1.3 COMPONENTS OF IMAGE PROCESSING

Fig 1.3 Components of image processing

1. With reference to sensing, two elements are required to acquire digital images. The
first is a physical device that is sensitive to the energy radiated by the object we wish
to image. The second, called a digitizer, is a device for converting the output of the
physical sensing device into digital form.
2. Specialized image processing hardware usually consists of the digitizer plus
hardware that performs other primitive operations, such as arithmetic and logic
operations (ALU), e.g. noise reduction. This type of hardware sometimes is called
a front-end subsystem.
3. The computer in an image processing system is a general-purpose computer and can
range from a PC to a supercomputer.
4. Mass storage capability is a must in image processing applications.
5. Image displays in use today are mainly color monitors.
6. Hardcopy devices for recording images include laser printers and film cameras.
7. Networking is needed for communication between image processing systems.

1.4 EDGE DETECTION


1.4.1 What is an Edge?
An edge in an image is a significant local change in the image intensity, usually
associated with a discontinuity in either the image intensity or the first derivative of
the image intensity. The points at which image brightness changes sharply are typically
organized into a set of curved line segments termed edges. Simply put, an edge is a
boundary between two homogeneous regions.

Fig: 1.4 Example of an edge

1.4.2 What Is Edge Detection


Edge detection is one of the most important aspects of image processing,
image analysis and statistical pattern recognition. The significance of edges is that
they provide a compact description of the objects in the image. Using the object
edges, it is possible to obtain sufficient data for image analysis. Success in the
object recognition stage depends on the accuracy of the edge detection.
In general terms, the following factors must be considered in the detection stage:
first, as few true edges as possible should be lost; second, no false edges should be
detected; and third, each edge must be placed in its correct position.
Noise pollution of an image causes problems in receiving, transferring and
processing it. If the noise is not eliminated, it produces undesirable edges that are not
real edges; the resulting edge map is uneven and the main form of the objects is lost.
Using image smoothing methods it is possible to reduce the noise to some extent, but
smoothing also shifts the real position of the edges. In order to resolve this problem,
this work provides an edge detection system for images polluted by different noises;
after passing through the stages described later, the real position of the edges is
maintained and the effect of the noise on the final output is significantly reduced.
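The degradation just described is easy to reproduce. The sketch below (Python/NumPy; the function names are ours, not from any cited work) corrupts a flat grey image with Gaussian and salt-and-pepper noise, so any edge a detector later finds in it is purely a noise artefact:

```python
import numpy as np

def add_gaussian_noise(image, sigma=20.0, seed=0):
    """Add zero-mean Gaussian noise and clip back to the 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(image, density=0.05, seed=0):
    """Flip a fraction of pixels to pure black (pepper) or pure white (salt)."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape)
    noisy[mask < density / 2] = 0          # pepper
    noisy[mask > 1 - density / 2] = 255    # salt
    return noisy

# A flat grey test image: it contains no real edges at all.
img = np.full((64, 64), 128, dtype=np.uint8)
g = add_gaussian_noise(img)
sp = add_salt_pepper_noise(img)
```

Any gradient operator applied to `g` or `sp` will respond strongly at many pixels even though the underlying scene is featureless, which is exactly the failure mode this work addresses.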
1.4.3 Usage of Edge Detection
Edge detection is used to reduce unnecessary information in the image while
preserving the structure of the image. It is used to extract important features of an image
such as,
i. Corners
ii. Lines
iii. Curves

Edge detection is also used to recognize objects, boundaries, and segments of a given
image. It works by detecting discontinuities in brightness. Edge detection is used for
image segmentation and data extraction in areas such as image processing, computer
vision, and machine vision.
1.4.4 Types of edges
Step edge: The image intensity abruptly changes from one value on one side of the
discontinuity to a different value on the opposite side.
Ridge edge: The image intensity abruptly changes value but then returns to the starting
value within some short distance (generated usually by lines).
Ramp edge: A step edge where the intensity change is not instantaneous but occurs over
a finite distance.
Roof edge: A ridge edge where the intensity change is not instantaneous but occurs over
a finite distance (generated usually by the intersection of surfaces).

Fig: 1.5 Types of edges in an image

1.4.5 Steps of edge detection


(1) Smoothing: suppress as much noise as possible, without destroying the true edges.
(2) Enhancement: apply a filter to enhance the quality of the edges in the image
(sharpening).
(3) Detection: determine which edge pixels should be discarded as noise and which
should be retained (usually, thresholding provides the criterion used for detection).
(4) Localization: determine the exact location of an edge (sub-pixel resolution might
be required for some applications, that is, estimate the location of an edge to better
than the spacing between pixels). Edge thinning and linking are usually required in
this step.
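The four steps can be strung together end to end. The following is only a minimal NumPy sketch: it uses naive loops, a fixed 3x3 Gaussian-like kernel, and a simple global threshold in place of full non-maximum suppression and hysteresis:

```python
import numpy as np

def filter2(img, k):
    """Naive same-size correlation with edge padding (adequate for a sketch)."""
    kh, kw = k.shape
    p = np.pad(img.astype(float), ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    return np.array([[np.sum(p[i:i + kh, j:j + kw] * k)
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

# (1) Smoothing: a 3x3 Gaussian-like kernel suppresses noise.
gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
# (2) Enhancement: Sobel kernels emphasise intensity changes.
sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sy = sx.T

img = np.zeros((16, 16))
img[:, 8:] = 255.0                      # a synthetic vertical step edge
s = filter2(img, gauss)
gx, gy = filter2(s, sx), filter2(s, sy)
mag = np.hypot(gx, gy)                  # gradient magnitude
# (3) Detection: keep pixels whose gradient magnitude passes a threshold.
edges = mag > 0.5 * mag.max()
# (4) Localization would thin `edges` to single-pixel width (not shown).
```

On this synthetic image the surviving pixels form a two-pixel-wide band around column 8, which is why the thinning of step (4) is still needed.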
1.4.6 Edge detection method

There are different types of edge detection methods, as follows:


First order derivative/gradient
 Roberts operator
 Sobel operator
 Prewitt operator

Second Order Derivative


 Laplacian
 Laplacian of Gaussian

Optimal Edge Detection


 Canny Edge Detection
1.5 LITERATURE SURVEY

[1] Variational image denoising approach with diffusion porous media flow:
In this paper a novel image restoration system based on an anisotropic diffusion model
was proposed. The nonlinear PDE-based model performs efficient noise reduction while
preserving and enhancing the image boundaries. The contributions are the proposed
edge-stopping function and conductance parameter of the diffusion equation, and also
the robust mathematical treatment of the developed anisotropic diffusion model. For the
first time it was demonstrated that the diffusivity function is properly chosen, satisfying
the required conditions.
The authors also provided some mathematical discussion on the existence and
uniqueness of the solution of this forward parabolic equation. The computed numerical
approximation iterative algorithm represents another contribution. The performed
denoising experiments and the method comparison provided very encouraging results.
The diffusion technique executes faster and produces much better image enhancement
results than other PDE-based algorithms, like the influential Perona-Malik scheme, and
many non-PDE denoising methods. Because of its strong edge-preserving character, it
can be successfully used for edge detection or computer vision tasks like object
detection.
[2] Image quality assessment: from error visibility to structural
similarity:
Objective methods for assessing perceptual image quality have traditionally
attempted to quantify the visibility of errors between a distorted image and a reference
image using a variety of known properties of the human visual system. Under the
assumption that human visual perception is highly adapted for extracting structural
information from a scene, an alternative complementary framework for quality
assessment based on the degradation of structural information was introduced. As a
specific example of this concept, a Structural Similarity Index was developed and its
promise demonstrated through a set of intuitive examples, as well as through
comparison to both subjective ratings and state-of-the-art objective methods on a
database of images compressed with JPEG and JPEG2000.
[3] Scope of validity of PSNR in image/video quality assessment:
In this work a novel technique of hiding sound files inside unsuspecting image files was
presented, with minimal distortion of the image quality. Cuckoo search has been an
instrumental tool in this work, used to find the optimal solution set for the given
problem. Cuckoo search reduced the time required to reach the best solution due to its
mimicking of cuckoo egg-laying behavior, thereby using nature's evolutionary
techniques to its advantage.
As with most pixel-substitution algorithms, the size of the image does not increase
as a result of embedding sound into the image. The method outperforms most state-of-
the-art methods that attempt to hide binary data in an image, in terms of time efficiency
at reaching the optimal/sub-optimal solution and also at introducing minimal distortion
to the input signal.
[4] Fuzzy-based multi-scale edge detection (FWOMED):
In this paper a new fuzzy-based multi-scale edge detection technique was
proposed. The approach achieves an optimal edge detection using the wavelet
decomposition of the original signal followed by a novel fuzzy-based detection
technique applied across the scales. Results indicate a significant improvement in
locating edges compared to other multi-scale approaches.
[5] Image Processing Toolbox for Use with MATLAB:
In pattern recognition problems, it is usually recommended to extract a low
number of features in order to avoid the computational cost. However, using today's
computer capabilities we are able to extract and process more features than before. In
this way, in an off-line training process, it is possible to extract a very large number of
features with the goal of finding relevant features for the classification task. Afterwards,
in an on-line testing process, we can extract only the relevant features to classify the
samples. In this paper, we use this idea to present a highly general pattern recognition
methodology applied to image analysis. We combine feature extraction and feature
selection techniques with highly simple classifiers to achieve high classification
performances. The key idea of the proposed method is to select during training time,
from a large universe of features (in some cases more than 1500 features), only those
features that are relevant for the separation of the classes. We tested our methodology
on six different recognition problems (with 2, 3, 6, 10 and 40 classes) yielding
classification rates exceeding 85% in accuracy in every case using no more than 8
features. The selected features are so robust that well known and simple classifiers are
able to separate the classes.
1.6 THESIS OUTLINE
Chapter 1: Introduction. Image processing is a quickly moving field. Its
growth has been fueled by technological advances in digital imaging, computer
processors and mass storage devices. An attempt is made to review the edge detection
techniques which are based on discontinuities in intensity levels. The relative
performance of various edge detection techniques is evaluated on an image by using
MATLAB software. It is observed from the results that the Marr-Hildreth, LoG and
Canny edge detectors produce almost the same edge map. The Canny result is the
superior one for the selected image, since different edge detectors work better under
different conditions. Even though many edge detection techniques are available in the
literature, it remains a challenging task for the research community to detect the exact
edges, without noise, from the original image.
Chapter 2: There are many edge detection techniques in the literature for image
segmentation. The most commonly used discontinuity based edge detection techniques
are reviewed in this section. Those techniques are Roberts’s edge detection, Sobel Edge
Detection, Prewitt edge detection, Kirsh edge detection, Robinson edge detection, Marr-
Hildreth edge detection, LoG edge detection and Canny Edge Detection.
Chapter 3: In this section various noises are added and the edges of the image are
determined using the proposed system's algorithm.
Chapter 4: Morphology is a broad set of image processing operations that process
images based on shapes. In a morphological operation, each pixel in the image is
adjusted based on the value of other pixels in its neighbourhood. By choosing the size
and shape of the neighbourhood, you can construct a morphological operation that is
sensitive to specific shapes in the input image.
Chapter 5: This section presents the relative performance of the edge detection system
under various noises such as Gaussian, salt and pepper, Rayleigh, and gamma noise. We
have calculated factors like the structural similarity index and the peak signal-to-noise
ratio for each noise.
Chapter 6: Deals with Conclusions and future scope.
CHAPTER 2
EXISTING AND PROPOSED METHOD
2.1 Existing method
The existing method was proposed by S. Muhammad Hossein Mousavi and
Marwa Kharazi. They developed the technique using classic edge detection
operators; Canny, zero-cross, LoG, Roberts, Prewitt and Sobel are examples of
gradient-based edge detection methods. These operators, being so sensitive to
noise, are not suitable for the main stage of image processing.
2.2 Existing Edge Detection Methods
2.2.1 Canny Edge Detection
We have considered canny edge detection technique with regards to following criteria:
Detection: The probability of detecting real edge points should be maximized while the
probability of falsely detecting non-edge points should be minimized. This corresponds
to maximizing the signal-to-noise ratio.
Localization: The detected edges should be very close to the real edges. The gap
between a real edge and the detected edge should be minimal.
Number of responses: One real edge should not result in more than one detected edge.
Steps in Canny Edge Detection Algorithm
The Canny edge detection algorithm runs mainly in four sequential steps:
1. Smoothing
2. Finding gradients
3. Non-maximum suppression
4. Hysteresis thresholding
Smoothing
Images taken from a camera will contain some amount of noise. As noise can mislead
the result in finding edges, we have to reduce the noise. Therefore the image is first
smoothed by applying a Gaussian filter. The Gaussian filter convolves each pixel
neighbourhood in the image with the generated kernel and returns the smoothed image
as a two-dimensional array.
S = I ∗ g(x, y) = g(x, y) ∗ I

where g(x, y) = (1 / (√(2π) σ)) exp(−(x² + y²) / (2σ²))
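In practice g(x, y) is sampled on a small grid and the sampled weights are normalised to sum to one, which absorbs the constant factor in front of the exponential. A minimal NumPy sketch:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Sample g(x, y) on a size x size grid centred at the origin and
    normalise the weights so they sum to one."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()   # normalisation replaces the constant factor

k = gaussian_kernel(5, 1.0)   # 5x5 kernel, largest weight at the centre
```

Larger σ spreads the weight further from the centre, which is the blurring behaviour discussed later for the Canny parameters.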
Finding Gradients
Here we will find the edge strength by taking the gradient of the image. The Sobel
operator performs a 2-D spatial gradient measurement on an image.
The Sobel operator uses a pair of 3x3 convolution masks, one estimating the gradient in
the x-direction and the other estimating the gradient in the y-direction.
∇S = ∇(g ∗ I) = (∇g) ∗ I

∇S = [gx, gy]ᵀ ∗ I = [gx ∗ I, gy ∗ I]ᵀ

where ∇g = [∂g/∂x, ∂g/∂y]ᵀ = [gx, gy]ᵀ

Now the approximate absolute gradient magnitude (edge strength) at each point can be
found as

G = √(Gx² + Gy²)

and the orientation of the gradient can be found from the gradients in the x and y
directions:

θ = tan⁻¹(Gy / Gx)

Fig: 2.1 Image Gradient
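Given the two gradient images, the magnitude and orientation are one line each in NumPy. arctan2 is preferred over a plain tan⁻¹ because it handles the Gx = 0 case; the gradient values below are invented for illustration:

```python
import numpy as np

# Hypothetical gradient values at two pixels:
gx = np.array([3.0, 0.0])   # derivative estimates in the x-direction
gy = np.array([4.0, 2.0])   # derivative estimates in the y-direction

magnitude = np.sqrt(gx**2 + gy**2)            # G = sqrt(Gx^2 + Gy^2)
orientation = np.degrees(np.arctan2(gy, gx))  # theta = atan2(Gy, Gx), degrees
```

The second pixel (Gx = 0, a purely vertical gradient) would raise a division problem in a naive tan⁻¹(Gy/Gx); arctan2 returns 90° as expected.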

Non-Maximum Suppression
This is necessary to convert the blurred edges in the gradient magnitude image to
sharp edges. This is done by keeping only the local maxima in the
gradient image and deleting everything else. The algorithm is applied to each pixel in
the gradient image.
To find the edge points, we need to find the local maxima of the gradient magnitude.
Broad ridges must be thinned so that only the magnitudes at the points of greatest local
change remain.
All values along the direction of the gradient that are not peak values of a ridge are
suppressed.
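A straightforward (if slow) implementation quantises the gradient direction into four sectors and compares each pixel with its two neighbours along that direction. This is only a sketch; production code vectorises the loops:

```python
import numpy as np

def non_max_suppression(mag, angle_deg):
    """Keep a pixel only if it is a local maximum along its gradient direction."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    ang = np.mod(angle_deg, 180.0)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:        # gradient ~ horizontal
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                     # ~ 45 degrees
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                    # ~ vertical
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                              # ~ 135 degrees
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

mag = np.zeros((5, 5))
mag[:, 1], mag[:, 2], mag[:, 3] = 1.0, 3.0, 1.0   # a blurred vertical edge
thin = non_max_suppression(mag, np.zeros((5, 5)))  # gradient points in +x
```

The three-pixel-wide ridge collapses to the single column of peak magnitudes, which is exactly the thinning described above.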
Hysteresis Thresholding
After the non-maximum suppression step, the edge pixels are still marked with their
strength, pixel by pixel. The resulting image may still contain false edge points.
Potential edges are therefore determined by double thresholding (High and Low).
If the gradient at a pixel is:
Above "High": declare it an "edge pixel".
Below "Low": declare it a "non-edge pixel".
Between "Low" and "High": consider its neighbours iteratively, and declare it an
"edge pixel" if it is connected to an "edge pixel" directly or via pixels between
"Low" and "High".
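This rule amounts to a flood fill seeded at the strong pixels and allowed to spread only through weak pixels. A minimal NumPy sketch (8-connectivity assumed):

```python
import numpy as np
from collections import deque

def hysteresis(mag, low, high):
    """Double thresholding followed by connectivity: weak pixels survive only
    if they connect (8-neighbourhood) to a strong pixel."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    h, w = mag.shape
    while q:
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True
                    q.append((ni, nj))
    return edges

mag = np.array([[10., 6., 1., 1.],
                [ 1., 1., 1., 6.]])
edges = hysteresis(mag, low=5, high=9)
```

The weak pixel at (0, 1) survives because it touches the strong pixel at (0, 0); the equally weak pixel at (1, 3) is discarded because it is isolated.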

Fig: 2.2 Hysteresis Thresholding


Performance of Canny Edge Detection Algorithm
The performance of the canny algorithm depends heavily on the adjustable parameters,
σ, which is the standard deviation for the Gaussian filter, and the threshold values
"High" and "Low". σ also controls the size of the Gaussian filter. The bigger the value of σ, the
larger the size of the Gaussian filter becomes. This implies more blurring, necessary for
noisy images, as well as detecting larger edges. As expected, however, the larger the
scale of the Gaussian, the less accurate is the localization of the edge. Smaller values of
σ imply a smaller Gaussian filter which limits the amount of blurring, maintaining finer
edges in the image. The user can tailor the algorithm by adjusting these parameters to
adapt to different environments.
Canny’s edge detection algorithm is computationally more expensive compared to
Sobel, Prewitt and Robert’s operator. However, the Canny’s edge detection algorithm
performs better than all these operators under almost all scenarios.

Fig: 2.3 Example of Canny Edge Detection


2.2.2 Sobel Edge Detection
The Sobel operator performs a 2-D spatial gradient measurement on an image and so
emphasizes regions of high spatial frequency that correspond to edges. Typically it is
used to find the approximate absolute gradient magnitude at each point in an input
grayscale image.
A very common operator for doing this is the Sobel operator, which is an approximation
to the derivative of an image, computed separately in the x and y directions. If we look
at the x-direction, the gradient of the image in the x-direction is given by the operator
below. We use a 3 by 3 kernel matrix, one for each of the x and y directions. The kernel
for the x-direction has negative numbers on the left-hand side and positive numbers on
the right-hand side, and preserves a little bit of the center pixels. Similarly, the kernel
for the y-direction has negative numbers on the bottom and positive numbers on top,
and preserves a little bit of the middle row pixels.
How It Works
In theory at least, the operator consists of a pair of 3×3 convolution kernels as
shown in below Figure. One kernel is simply the other rotated by 90°. This is very
similar to the Roberts Cross operator.
Fig: 2.4 Sobel convolution kernels

Essentially, what the Sobel operator does is find the amount of difference by placing the
gradient matrix over each pixel of the image. We get two images as output, one for the
x-direction and the other for the y-direction. By
using Kernel Convolution, we can see in the example image below there is an edge
between the column of 100 and 200 values.
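That example can be reproduced numerically. The sketch below applies the x-direction Sobel kernel (as a plain correlation) at a pixel straddling the 100/200 boundary and at a pixel in a flat region; the image values follow the description above:

```python
import numpy as np

gx_kernel = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]])

# Five identical rows: left half 100, right half 200.
img = np.array([[100, 100, 100, 200, 200, 200]] * 5, dtype=float)

def response_at(i, j):
    """Kernel response centred on interior pixel (i, j) (plain correlation)."""
    return float(np.sum(img[i - 1:i + 2, j - 1:j + 2] * gx_kernel))

edge_resp = response_at(2, 3)   # straddles the 100 -> 200 boundary
flat_resp = response_at(2, 1)   # inside the flat 100 region
```

The kernel responds strongly (here 400) across the boundary and exactly zero in the constant regions, which is why its output image highlights the vertical edge.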

Fig: 2.5 Kernel Convolution


These kernels are designed to respond maximally to edges running vertically and
horizontally relative to the pixel grid, one kernel for each of the two perpendicular
orientations. The kernels can be applied separately to the input image, to produce
separate measurements of the gradient component in each orientation (call these Gx and
Gy). These can then be combined together to find the absolute magnitude of the gradient
at each point and the orientation of that gradient. The gradient magnitude is given by:

|G| = √(Gx² + Gy²)

Typically, an approximate magnitude is computed using:

|G| = |Gx| + |Gy|

which is much faster to compute. The angle of orientation of the edge (relative
to the pixel grid) giving rise to the spatial gradient is given by:

θ = tan⁻¹(Gy / Gx)
In this case, orientation 0 is taken to mean that the direction of maximum


contrast from black to white runs from left to right on the image, and other angles are
measured anti-clockwise from this.

Fig: 2.6 Example of Sobel Operator

2.2.3 Prewitt Edge Detection


Prewitt operator is used for edge detection in an image. It detects two types of edges
 Horizontal edges

 Vertical Edges

Edges are calculated by using the difference between corresponding pixel intensities of
an image. All the masks that are used for edge detection are also known as derivative
masks, because an image is also a signal, so changes in a signal can only be calculated
using differentiation. That is why these operators are also called derivative operators or
derivative masks.

All the derivative masks should have the following properties:

 Opposite sign should be present in the mask.

 Sum of mask should be equal to zero.

 More weight means more edge detection.


The Prewitt operator provides us with two masks, one for detecting edges in the
horizontal direction and another for detecting edges in the vertical direction.

Vertical direction
When we apply this mask on the image it highlights the vertical edges. It works
like a first-order derivative and calculates the difference of pixel intensities in an edge
region. As the center column is zero, it does not include the original values of the
image but rather calculates the difference of the right and left pixel values around the
edge. This increases the edge intensity, which becomes enhanced compared with the
original image.
Table: 2.1 Vertical Direction Mask

-1 0 1

-1 0 1

-1 0 1

The above mask finds the edges in the vertical direction because of its zero
column. When you convolve this mask with an image, it gives you the vertical edges
in the image.
Horizontal Direction
This mask highlights the horizontal edges in an image. It works on the same
principle as the above mask and calculates the difference among the pixel intensities of
a particular edge. As the center row of the mask consists of zeros, it does not include
the original values of the edge in the image but rather calculates the difference of the
pixel intensities above and below the particular edge, thus increasing the sudden change
of intensities and making the edge more visible. Both of the above masks follow the
principle of a derivative mask: both have opposite signs in them and both sum to
zero. The third condition does not apply to this operator, as the above masks are
standardized and we cannot change the values in them.
Table: 2.2 Horizontal Direction Mask
-1 -1 -1

0 0 0

1 1 1

Fig: 2.7 Original Image Fig: 2.8 After Applying Prewitt operator
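The two masks, and the derivative-mask properties listed above, can be checked directly. A small NumPy sketch (the step-edge patch is invented for illustration):

```python
import numpy as np

# Prewitt masks as described: zero centre column (vertical-edge mask, Table 2.1)
# and zero centre row (horizontal-edge mask, Table 2.2).
prewitt_v = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]])
prewitt_h = np.array([[-1, -1, -1],
                      [ 0,  0,  0],
                      [ 1,  1,  1]])

# The derivative-mask properties from the text hold for both masks:
for m in (prewitt_v, prewitt_h):
    assert (m < 0).any() and (m > 0).any()   # opposite signs present
    assert m.sum() == 0                      # mask sums to zero

# Response of the vertical mask centred on a vertical step edge:
patch = np.array([[10, 10, 90],
                  [10, 10, 90],
                  [10, 10, 90]], dtype=float)
response = float(np.sum(patch * prewitt_v))   # 3 * (90 - 10) = 240
```

Because the mask sums to zero, its response in any flat region is zero; only intensity differences across the centre column contribute.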

2.2.4 Zero Cross Edge Detection


The zero crossing detector looks for places in the Laplacian of an image where the value
of the Laplacian passes through zero --- i.e. points where the Laplacian changes sign.
Such points often occur at `edges' in images --- i.e. points where the intensity of the
image changes rapidly, but they also occur at places that are not as easy to associate
with edges. It is best to think of the zero crossing detector as some sort of feature
detector rather than as a specific edge detector. Zero crossings always lie on closed
contours and so the output from the zero crossing detector is usually a binary image
with single pixel thickness lines showing the positions of the zero crossing points.
The starting point for the zero crossing detector is an image which has been filtered
using the Laplacian of Gaussian filter. The zero crossings that result are strongly
influenced by the size of the Gaussian used for the smoothing stage of this operator. As
the smoothing is increased then fewer and fewer zero crossing contours will be found,
and those that do remain will correspond to features of larger and larger scale in the
image.
How It Works
The core of the zero crossing detector is the Laplacian of Gaussian (LoG) filter, so a
knowledge of that operator is assumed here. As described there, `edges' in images give
rise to zero crossings in the LoG output. For instance, the figure below shows the
response of a 1-D LoG filter to a step edge in the image.

Fig: 2.9 Response for Zero Cross


However, zero crossings also occur at any place where the image intensity
gradient starts increasing or starts decreasing, and this may happen at places that are not
obviously edges. Often zero crossings are found in regions of very low gradient where
the intensity gradient wobbles up and down around zero.

Once the image has been LoG filtered, it only remains to detect the zero crossings.
This can be done in several ways.
The simplest is to threshold the LoG output at zero, to produce a binary
image where the boundaries between foreground and background regions represent the
locations of zero crossing points. These boundaries can then be easily detected and
marked in a single pass, e.g. using some morphological operator. For instance, to locate
all boundary points, we simply have to mark each foreground point that has at least one
background neighbour.
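This simplest technique can be sketched as follows, using a 4-neighbourhood and treating positive LoG values as foreground (the 1-D-like LoG response is invented for illustration):

```python
import numpy as np

def zero_crossings(log_img):
    """Threshold the LoG output at zero, then mark every positive ('foreground')
    pixel that has at least one non-positive ('background') 4-neighbour."""
    fg = log_img > 0
    h, w = log_img.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if not fg[i, j]:
                continue
            for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                if 0 <= ni < h and 0 <= nj < w and not fg[ni, nj]:
                    out[i, j] = True
                    break
    return out

# A LoG-like response to a vertical step edge: sign change between columns 2 and 3.
log_img = np.array([[0., -1., -3., 3., 5., 6.]] * 3)
zc = zero_crossings(log_img)
```

Only column 3 is marked: it is the first foreground column after the sign change, which also illustrates the one-sided bias discussed next.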

The problem with this technique is that it will tend to bias the location of the zero
crossing edge to either the light side of the edge or the dark side of the edge, depending
on whether it is decided to look for the edges of foreground regions or for the edges of
background regions.
A better technique is to consider points on both sides of the threshold boundary,
and choose the one with the lowest absolute magnitude of the Laplacian, which will
hopefully be closest to the zero crossing.
Since the zero crossings generally fall between two pixels in the LoG-filtered
image, an alternative output representation is an image grid which is spatially shifted
half a pixel across and half a pixel down, relative to the original image. Such a
representation is known as a dual lattice. This does not actually localize the zero
crossing any more accurately, of course.

A more accurate approach is to perform some kind of interpolation to estimate the
position of the zero crossing to sub-pixel precision.

Fig: 2.10 Original Image Fig: 2.11 After Applying Zero Cross Operator

2.3 PROPOSED METHOD


In this project, taking the existing methods into consideration, a variety of
approaches to edge detection in noisy images (which suppress the effect of noise on the
edges) have been examined. One of them is the mathematical morphology method.

2.3.1 Mathematical Morphology method.


Binary images may contain numerous imperfections. In particular, the binary regions
produced by simple thresholding are distorted by noise and texture. Morphological
image processing pursues the goal of removing these imperfections by accounting for
the form and structure of the image. These techniques can be extended to greyscale
images.
Morphological image processing is a collection of non-linear operations related to the
shape or morphology of features in an image. Morphological operations rely only on
the relative ordering of pixel values, not on their numerical values, and are therefore
especially suited to the processing of binary images. Morphological operations can
also be applied to greyscale images whose light transfer functions are unknown,
so that their absolute pixel values are of no or minor interest.
Morphological techniques probe an image with a small shape or template called
a structuring element. The structuring element is positioned at all possible locations in
the image and it is compared with the corresponding neighbourhood of pixels. Some
operations test whether the element "fits" within the neighbourhood, while others test
whether it "hits", i.e. intersects, the neighbourhood.

Fig: 2.12 probing of an image with a structuring element

A morphological operation on a binary image creates a new binary image in which a
pixel has a non-zero value only if the test is successful at that location in the input
image. The structuring element is a small binary image, i.e. a small matrix of pixels,
each with a value of zero or one:

 The matrix dimensions specify the size of the structuring element.


 The pattern of ones and zeros specifies the shape of the structuring element.
 An origin of the structuring element is usually one of its pixels, although
generally the origin can be outside the structuring element.

Fig: 2.13 Examples of simple structuring elements

2.3.2 Fundamental Operations of the Mathematical Morphology Method


 Erosion
 Dilation
Erosion
The erosion of a binary image f by a structuring element s (denoted f ⊖ s) produces a new
binary image g = f ⊖ s with ones in all locations (x, y) of the structuring element's origin at
which that structuring element s fits the input image f, i.e. g(x, y) = 1 if s fits f and 0
otherwise, repeating for all pixel coordinates (x, y).

Fig: 2.14 Greyscale image Fig: 2.15 Erosion: a 2×2 square structuring element

Dilation
The dilation of an image f by a structuring element s (denoted f ⊕ s) produces a
new binary image g = f ⊕ s with ones in all locations (x, y) of a structuring element's
origin at which that structuring element s hits the input image f, i.e. g(x, y) = 1
if s hits f and 0 otherwise, repeating for all pixel coordinates (x, y). Dilation has the
opposite effect to erosion: it adds a layer of pixels to both the inner and outer
boundaries of regions.
Fig: 2.16 Binary Image Fig: 2.17 Dilation: a 2×2 square structuring element
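The "fits" and "hits" tests above can be sketched as follows (a pure-Python illustration assuming a 2×2 square structuring element with its origin at the top-left pixel; function names and border handling are our simplifying choices):

```python
# Binary erosion ("fits") and dilation ("hits") with a 2x2 square
# structuring element; positions where the element would leave the
# image are simply left as 0.

def erode(img, se_h=2, se_w=2):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - se_h + 1):
        for x in range(w - se_w + 1):
            # "fits": every pixel under the structuring element is 1
            if all(img[y + i][x + j] for i in range(se_h) for j in range(se_w)):
                out[y][x] = 1
    return out

def dilate(img, se_h=2, se_w=2):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - se_h + 1):
        for x in range(w - se_w + 1):
            # "hits": at least one pixel under the structuring element is 1
            if any(img[y + i][x + j] for i in range(se_h) for j in range(se_w)):
                out[y][x] = 1
    return out

square = [[0, 0, 0, 0],
          [0, 1, 1, 0],
          [0, 1, 1, 0],
          [0, 0, 0, 0]]
print(erode(square))   # only one origin position fits the 2x2 block
print(dilate(square))  # the region grows by a layer of pixels
```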

2.4 NOISE MODELS


Noise is unwanted information in digital images. Noise produces undesirable
effects such as artifacts, unrealistic edges, hidden lines, corrupted corners, blurred
objects and disturbed background scenes. To reduce these undesirable effects, prior
knowledge of noise models is essential for further processing. Digital noise may arise
from various kinds of sources such as Charge Coupled Device (CCD) and
Complementary Metal Oxide Semiconductor (CMOS) sensors. The point spread
function (PSF) and modulation transfer function (MTF) have been used for timely,
complete and quantitative analysis of noise models. The probability density function
(PDF), or histogram, is also used to design and characterize the noise models. Here we
discuss a few noise models, their types and categories in digital images.
2.4.1 Gaussian Noise Model
Gaussian noise is also called electronic noise because it arises in amplifiers or
detectors. It is caused by natural sources such as the thermal vibration of atoms and
the discrete nature of radiation from warm objects.
Gaussian noise generally disturbs the gray values in digital images. The
Gaussian noise model is therefore designed and characterized by its PDF, or normalized
histogram, with respect to gray value. This is given as

P(x) = (1 / (σ√(2π))) · e^(−(x − µ)² / (2σ²))

Where x=gray value, σ=standard deviation and µ=mean.


The Gaussian noise model generally represents a good approximation of
real-world scenarios. In the PDF shown in the figure below, the mean value is zero, the
variance is 0.1 and there are 256 gray levels.
Edge Detection System For Noisy Images

Fig: 2.18 PDF of Gaussian noise


Due to this symmetric randomness the normalized Gaussian noise curve is bell
shaped. The PDF of this noise model shows that roughly 68% of the noisy pixel values
of the degraded image lie between µ−σ and µ+σ. The shape of the normalized histogram
is almost the same in the spectral domain.
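As a quick numerical check on this model, one can draw zero-mean samples with variance 0.1 and verify the one-sigma mass (a stdlib Python sketch; the function name, sample size and seed are arbitrary choices of ours):

```python
# Sample zero-mean Gaussian noise with variance 0.1 (as in the text)
# and check that roughly 68% of the samples fall within one standard
# deviation of the mean, as the PDF above predicts.
import random

def gaussian_noise(n, mu=0.0, var=0.1, seed=42):
    rng = random.Random(seed)
    sigma = var ** 0.5
    return [rng.gauss(mu, sigma) for _ in range(n)]

noise = gaussian_noise(100_000)
sigma = 0.1 ** 0.5
within = sum(1 for v in noise if -sigma <= v <= sigma) / len(noise)
print(round(within, 2))   # close to the theoretical 0.68
```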

2.4.2 Speckle Noise Model


Speckle noise is a multiplicative noise. It appears in coherent imaging
systems such as laser, radar and acoustic imaging. Speckle noise can appear in an
image in a way similar to Gaussian noise. Its probability density function follows a
gamma distribution, which is shown in fig 2.19.

Fig: 2.19 Image of Speckle noise with variance 0.04


2.4.3 Photon Noise (Poisson Noise)
This noise appears due to the statistical nature of electromagnetic
waves such as x-rays, visible light and gamma rays. X-ray and gamma-ray sources
emit a number of photons per unit time. In medical x-ray and gamma-ray imaging
systems, these rays are injected into the patient's body from the source. Such sources
have random fluctuations in the number of photons, so the gathered image has spatial
and temporal randomness. This noise is also called quantum (photon) noise.
2.4.4 Gamma Noise Model
Gamma noise is generally seen in laser-based images. It obeys the Gamma
distribution, which is shown in fig 2.20.

Fig: 2.20 Gamma distribution

Fig: 2.21 Gamma noise

2.4.5 Impulse Valued Noise (Salt and Pepper Noise)


This is also called data-drop noise because it statistically drops the original data values.
This noise is also referred to as salt and pepper noise. However, the image is not fully
corrupted by salt and pepper noise; instead, some pixel values are changed while some
of their neighbours may remain unchanged.
This noise is seen in data transmission. Image pixel values are replaced by
corrupted pixel values, either the maximum or the minimum pixel value, i.e. 255 or 0
respectively, if the number of bits is 8 for transmission.
Consider a 3×3 image matrix whose central value is corrupted by pepper
noise. The central value, 212, is then replaced by the value zero.
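The corruption process can be sketched as follows (a pure-Python illustration; the function name, probability and seed are assumptions of ours, not the project's code):

```python
# Impulse (salt and pepper) noise for 8-bit images: each pixel is, with
# probability p, replaced by the maximum (255, salt) or minimum (0,
# pepper) gray value; all other pixels are left untouched.
import random

def salt_pepper(img, p, seed=7):
    rng = random.Random(seed)
    out = [row[:] for row in img]
    for y in range(len(img)):
        for x in range(len(img[0])):
            if rng.random() < p:
                # dead pixel: driven to an extreme gray level
                out[y][x] = 255 if rng.random() < 0.5 else 0
    return out

# The 3x3 matrix from the text: its centre value is 212; a pepper hit
# there would replace 212 by 0, a salt hit by 255.
block = [[123, 127, 130],
         [126, 212, 133],
         [128, 131, 129]]
noisy = salt_pepper(block, p=0.3)
# every pixel is either untouched or driven to an extreme (0 or 255)
assert all(noisy[y][x] in (block[y][x], 0, 255)
           for y in range(3) for x in range(3))
```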
In this connection, we can say that this noise inserts dead pixels, either dark
or bright. So with salt and pepper noise, progressively dark pixel values appear in
bright regions and vice versa.
Dead pixels in the picture are due to errors in analog-to-digital
conversion and errors in bit transmission. The percentage of noisy pixels can be
estimated directly from the pixel matrix. The PDF of salt and pepper noise, with zero
mean and variance 0.05, is shown in Fig 2.22. Here we meet two spikes: one at the
low gray levels (the dark, pepper region), called 'region a',

Fig: 2.22The PDF of salt and pepper


and another at the high gray levels (the bright, salt region), called 'region b'. We can
clearly see that the PDF values are at their minimum and maximum in 'region a' and
'region b', respectively.

2.4.6 Rayleigh Noise Model


Rayleigh noise is present in radar range images. Its probability density
function, plotted in fig 2.23, is

P(z) = (2 / b)(z − a) · e^(−(z − a)² / b)  for z ≥ a  (and P(z) = 0 for z < a)

with mean µ = a + √(πb / 4) and variance σ² = b(4 − π) / 4.

Fig:2.23 Rayleigh distribution
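As a numerical check on these formulas, Rayleigh samples can be drawn by inverse-transform sampling of the CDF F(z) = 1 − e^(−(z − a)² / b) (a stdlib Python sketch; the parameters a, b and the sample size are illustrative):

```python
# Draw Rayleigh-distributed samples via the inverse CDF,
# z = a + sqrt(-b * ln(1 - u)), and compare the sample mean against the
# theoretical mean mu = a + sqrt(pi * b / 4) given above.
import math
import random

def rayleigh_samples(n, a=0.0, b=2.0, seed=1):
    rng = random.Random(seed)
    return [a + math.sqrt(-b * math.log(1 - rng.random())) for _ in range(n)]

a, b = 0.0, 2.0
zs = rayleigh_samples(200_000, a, b)
mean = sum(zs) / len(zs)
print(round(mean, 2), round(a + math.sqrt(math.pi * b / 4), 2))
```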


CHAPTER 3
IMPLEMENTATION OF PROPOSED METHOD
3.1 Flow chart of general algorithm for classical operators

Start
1. Read the image and convolve it with a smoothing filter.
2. Convolve the resultant image with the chosen operator's gradient mask in the x direction.
3. Convolve the resultant image with the chosen operator's gradient mask in the y direction.
4. Set a threshold value T.
5. For each pixel, say M(i, j), compute the gradient magnitude G.
6. Is G > T? If yes, mark the pixel as an "edge"; if no, consider the next neighbouring pixel and repeat.
End

Fig: 3.1 general algorithm
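The flow chart can be sketched in code as follows (a pure-Python illustration; the Sobel masks are assumed here as the "chosen operator", and the threshold and toy test image are ours):

```python
# General gradient-operator algorithm: convolve with a 3x3 gradient mask
# in x and in y, compute the gradient magnitude G, and mark pixels where
# G exceeds the threshold T.
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def conv3(img, mask, y, x):
    """3x3 correlation of mask with the neighbourhood centred at (y, x)."""
    return sum(mask[i][j] * img[y + i - 1][x + j - 1]
               for i in range(3) for j in range(3))

def edge_map(img, t):
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = conv3(img, SOBEL_X, y, x)
            gy = conv3(img, SOBEL_Y, y, x)
            if math.hypot(gx, gy) > t:   # mark pixel as an "edge" when G > T
                out[y][x] = 1
    return out

# vertical step edge: dark half | bright half
img = [[0, 0, 100, 100]] * 4
print(edge_map(img, t=50))   # interior pixels along the step are marked
```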


Digital image processing is the use of computer algorithms to perform image processing
on digital images. It is a technology widely used for digital image operations such as
feature extraction, pattern recognition, segmentation, image morphology etc. Edge
detection is a well-developed field in its own right within image processing. Edge
detection is basically an image segmentation technique: it divides the spatial domain, on
which the image is defined, into meaningful parts or regions. Edges characterize
boundaries and are therefore a problem of fundamental importance in image processing.
Edges typically occur on the boundary between two different regions in an image. Edge
detection allows the user to observe those features of an image where there is a more or
less abrupt change in gray level or texture, indicating the end of one region in the image
and the beginning of another. It finds practical applications in medical imaging,
computer-guided surgery and diagnosis, locating objects in satellite images, face
recognition, fingerprint recognition, automatic traffic control systems, the study of
anatomical structure, etc. Many edge detection techniques have been developed for
extracting edges from digital images.
Gradient-based classical operators like Roberts, Prewitt and Sobel were initially used for
edge detection, but they do not give sharp edges and are highly sensitive to noisy
images. Laplacian-based Marr-Hildreth operators also suffer from two limitations: a
high probability of detecting false edges, and localization error that may be severe at
curved edges. The algorithm proposed by John F. Canny in 1986 is considered the ideal
edge detection algorithm for images that are corrupted with noise. Canny's aim was to
discover the optimal edge detection algorithm, one which reduces the probability of
detecting false edges and gives sharp edges.

3.2 Flow chart for the procedure of the proposed edge detection method

In this system, an edge detection method for detecting the edges of noisy and
non-noisy images is proposed. The phases are: initially, the RGB image is divided into
its three constituent channels R, G and B. A median filter of a definite size is then
applied to each channel [12]. All the channels are then attached together to obtain a new
RGB image. The aim of applying the median filter in this fashion is to achieve a kind of
special smoothing of the image, which can deal with all sorts of noise. It should be
noted that applying this filter to only one channel, or to the gray image, would not
achieve the desired result. Pixel selection by the median filter is presented in fig 3.2.
1. Acquire the RGB image from the input.
2. Apply the median filter to the decomposed channels R, G and B for initial smoothing of the image.
3. Apply a sharpening method to sharpen the edges, and convert the colored image to gray in order to apply the proposed edge detection filter.
4. Apply the proposed edge detection filter, from four sides, for edge detection of images polluted by different noises.
5. Apply a kind of post-processing to eliminate possible spots in the binary image.
6. Apply hit-or-miss morphology operations on the output to thin the edges.
7. Validate using the PSNR and SSIM factors by comparing the final edge-detected images, and compare the proposed method with the classic operators.
Fig: 3.2 Flow chart for the Procedure of the proposed edge detection method
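The channel-wise smoothing stage of this pipeline can be sketched as follows (a pure-Python illustration standing in for MATLAB's medfilt2; function names and the toy channel values are ours, and the sharpening, edge-filter and thinning stages are omitted):

```python
# Split-filter-reattach: median-filter each RGB channel separately with
# a 3x3 window, then recombine the channels into one RGB image.
from statistics import median

def median3(ch):
    """3x3 median filter on one channel; border pixels are left unchanged."""
    h, w = len(ch), len(ch[0])
    out = [row[:] for row in ch]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = median(ch[y + i][x + j]
                               for i in (-1, 0, 1) for j in (-1, 0, 1))
    return out

def smooth_rgb(r, g, b):
    # filter each channel separately, then attach them together again
    return [median3(c) for c in (r, g, b)]

r = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]   # impulse in the red channel
print(smooth_rgb(r, r, r)[0][1][1])  # the spike is removed -> 10
```

This illustrates why the median filter copes with impulse-like noise: the outlier never enters the output because it is not the middle of the sorted neighbourhood.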

In the next stage, the smoothed image is sharpened by the unsharp mask filter
[13]. Table 1 presents the filter used for edge sharpening. After edge sharpening, the
output image is converted to gray level for edge detection. Next, the proposed edge
detection filter, a 3×3 matrix, is applied to the image from four sides. This filter is
presented in formula 1. Applied after smoothing and sharpening the image, it produces
very clean, sharp edges and prevents the noise from being seen in the output image. In
order to eliminate superfluous noise in the output binary image, a post-processing step
is applied along the two dimensions of the image. Finally, the edges are thinned using
the hit-or-miss morphology system to achieve the final output. This system is
implemented in MATLAB. The flowchart of the proposed system is presented in
Figure 3.2.
To validate the acquired results, we use the PSNR and SSIM factors. The PSNR value
usually lies between 5 and 50; the higher this value, the better, and high similarity
between the input image and the reference is indicated by a value approaching 50. The
second factor is SSIM, a number between 0 and 1. The higher this value, the better the
result; the best results lie between 0.9 and 1.
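The PSNR check can be sketched as follows (a stdlib Python illustration computing 10·log10(MAX²/MSE) for 8-bit images; the function name and sample arrays are ours, and SSIM, which is more involved, is omitted):

```python
# Peak signal-to-noise ratio between a reference image and a test image.
# MAX (the peak) is 255 for 8-bit images; identical images give infinite
# PSNR, and larger distortions give smaller values.
import math

def psnr(ref, img, peak=255):
    mse = sum((r - i) ** 2
              for rr, ii in zip(ref, img)
              for r, i in zip(rr, ii)) / (len(ref) * len(ref[0]))
    return float('inf') if mse == 0 else 10 * math.log10(peak ** 2 / mse)

a = [[50, 60], [70, 80]]
b = [[52, 58], [71, 79]]          # small distortion -> high PSNR
print(round(psnr(a, b), 2))       # roughly 44 dB
```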
CHAPTER 4
SOFTWARE DESCRIPTION
4.1 INTRODUCTION
The tutorials are independent of the rest of the document. The primary objective is to
help you quickly learn the first steps. The emphasis here is "learning by doing".
Therefore, the best way to learn is by trying things yourself. Working through the
examples will give you a feel for the way that MATLAB operates. [20] In this
introduction we describe how MATLAB handles simple numerical expressions and
mathematical formulas.
MATLAB has many advantages compared to conventional computer languages
(e.g., C, FORTRAN) for solving technical problems. MATLAB is an interactive system
whose basic data element is an array that does not require dimensioning. The software
package has been commercially available since 1984 and is now considered a standard
tool at most universities and industries worldwide. [20]
It has powerful built-in routines that enable a very wide variety of computations. It
also has easy-to-use graphics commands that make the visualization of results
immediately available. Specific applications are collected in packages referred to as
toolboxes. There are toolboxes for signal processing, symbolic computation, control
theory, simulation, optimization, and several other fields of applied science and
engineering. [20]
4.2 BASIC FEATURES
As we mentioned earlier, the following tutorial lessons are designed to get you started
quickly in MATLAB. The lessons are intended to make you familiar with the basics of
MATLAB.
4.3 A MINIMUM MATLAB SESSION
The goal of this minimum session (also called starting and exiting sessions) is to learn
the first steps:
1. How to log on
2. How to invoke MATLAB
3. How to do a few simple calculations
4. How to quit MATLAB
4.3.1 Starting MATLAB
After logging into your account, you can enter MATLAB by double-clicking on the
MATLAB shortcut icon (MATLAB 7.0.4) on your Windows desktop. When you start
MATLAB, a special window called the MATLAB desktop appears. The desktop is a
window that contains other windows. [20] The major tools within or accessible from
the desktop are:
• The Command Window
• The Command History
• The Workspace
• The Current Directory
• The Help Browser
• The Start Button

Figure 4.1: The graphical interface to the MATLAB workspace



4.4 MATLAB's POWER OF COMPUTATIONAL MATHEMATICS


MATLAB is used in every facet of computational mathematics. Following are some of
the mathematical calculations where it is most commonly used: [20]
 Dealing with Matrices and Arrays
 2-D and 3-D Plotting and graphics
 Linear Algebra
 Algebraic Equations
 Non-linear Functions
 Statistics
 Data Analysis
 Calculus and Differential Equations
 Numerical Calculations
 Integration
 Transforms
 Curve Fitting
 Various other special functions

4.5 FEATURES
Following are the basic features of MATLAB:
 It is a high-level language for numerical computation, visualization and
application development.
 It also provides an interactive environment for iterative exploration, design and
problem solving.
 It provides vast library of mathematical functions for linear algebra, statistics,
Fourier analysis, filtering, optimization, numerical integration and solving
ordinary differential equations.
 It provides built-in graphics for visualizing data and tools for creating custom
plots.
 MATLAB's programming interface gives development tools for improving code
quality, maintainability, and maximizing performance.
 It provides tools for building applications with custom graphical interfaces.
 It provides functions for integrating MATLAB based algorithms with external
applications and languages such as C, Java, .NET and Microsoft Excel.
4.6 USES OF MATLAB
 MATLAB is widely used as a computational tool in science and engineering
encompassing the fields of physics, chemistry, math and all engineering streams.
It is used in a range of applications including: [20]
 signal processing and Communications
 image and video Processing
 control systems
 test and measurement
 computational finance
 computational biology
4.7 IMAGE PROCESSING TOOL BOX
The Image Processing Toolbox is a collection of functions that extend the capabilities of
MATLAB's numeric computing environment. [20] The toolbox supports a wide range
of image processing operations, including:
• Geometric operations
• Neighborhood and block operations
• Linear filtering and filter design
• Transforms
• Image analysis and enhancement
• Binary image operations
• Region of interest operations

TYPES OF IMAGE FORMAT


• BMP (Microsoft Windows Bitmap)
• GIF (Graphics Interchange Files)
• HDF (Hierarchical Data Format)
• JPEG (Joint Photographic Experts Group)
• PCX(Paintbrush)
• PNG (Portable Network Graphics)
• TIFF (Tagged Image File Format)
• XWD (X Window Dump)
• Raw-data and other types of image data

Data types in MATLAB


• Double (64-bitdouble-precision floating point)
• Single (32-bitsingle-precision floating point)
• Int32 (32-bit signed integer)
• Int16 (16-bit signed integer)
• Int8 (8-bit signed integer)
• Uint32 (32-bit unsigned integer)
• Uint16 (16-bit unsigned integer)
• Uint8 (8-bit unsigned integer)

TABLES
Table 4.1: Image to Display

IMAGE DISPLAY
image Create and display image object
imagesc Scale data and display as image
imshow Display image
colorbar Display color bar
getimage Get image data from axes
truesize Adjust display size of image
zoom Zoom in and zoom out of a 2-D plot

Table 4.2: Image to Conversion

Image Conversion
gray2ind Intensity image to indexed image
im2bw Image to binary image
im2double Image to double precision
im2uint8 Image to 8-bit unsigned integers
im2uint16 Image to 16-bit unsigned integers
ind2gray Indexed image to intensity image
mat2gray Matrix to intensity image
rgb2gray RGB image to grayscale
rgb2ind RGB image to indexed image
4.8 IMAGE OPERATIONS
• RGB image to gray image
• Image resize
• Image crop
• Image rotate
• Image histogram
• Image histogram equalization
• Image DCT/IDCT
• Convolution

4.9 IMREAD

SYNTAX
A = imread(filename)
A = imread(filename,fmt)
A = imread(___,idx)
A = imread(___,Name,Value)
[A,map] = imread(___)
[A,map,transparency] = imread(___)
A = imread(filename) reads the image from the file specified by filename, inferring the
format of the file from its contents. If filename is a multi-image file, then imread reads
the first image in the file.
A = imread(filename,fmt) additionally specifies the format of the file with the standard
file extension indicated by fmt. If imread cannot find a file with the name specified
by filename, it looks for a file named filename.fmt.
A = imread(___,idx) reads the specified image or images from a multi-image file. This
syntax applies only to GIF, CUR, ICO, TIF, and HDF4 files. You must specify
a filename input, and you can optionally specify fmt.
A = imread(___,Name,Value) specifies format-specific options using one or more name-
value pair arguments, in addition to any of the input arguments in the previous syntaxes.
[A,map] = imread(___) reads the indexed image in filename into A and reads its
associated colormap into map. Colormap values in the image file are automatically
rescaled into the range [0,1].
[A,map,transparency] = imread(___) additionally returns the image transparency. This
syntax applies only to PNG, CUR, and ICO files. For PNG files, transparency is the
alpha channel, if one is present. For CUR and ICO files, it is the AND (opacity) mask.

4.10 IMSHOW

SYNTAX
imshow(I)
imshow(I,[low high])
imshow(I,[])
imshow(RGB)
imshow(BW)
imshow(X,map)
imshow(filename)
imshow(___,Name,Value)
himage = imshow(___)
Description
imshow(I) displays the grayscale image I in a figure. imshow uses the default display
range for the image data type and optimizes figure, axes, and image object properties for
image display.
imshow(I,[low high]) displays the grayscale image I, specifying the display range as a
two-element vector, [low high]. For more information, see the Display Range
parameter.
imshow(I,[]) displays the grayscale image I, scaling the display based on the range of
pixel values in I. imshow uses [min(I(:)) max(I(:))] as the display range. imshow
displays the minimum value in I as black and the maximum value as white. For more
information, see the DisplayRange parameter.
imshow(RGB) displays the truecolor image RGB in a figure.
imshow(BW) displays the binary image BW in a figure. For binary
images, imshow displays pixels with the value 0 (zero) as black and 1 as white.
imshow(X,map) displays the indexed image X with the colormap map. A colormap
matrix can have any number of rows, but it must have exactly 3 columns. Each row is
interpreted as a colour, with the first element specifying the intensity of red, the second
green, and the third blue. Colour intensity can be specified on the interval [0, 1].
imshow(filename) displays the image stored in the graphics file specified by filename.
imshow(___,Name,Value) displays an image, using name-value pairs to control aspects
of the operation.
himage = imshow(___) returns the image object created by imshow.

4.11 IMHIST

SYNTAX
[counts,binLocations] = imhist(I)
[counts,binLocations] = imhist(I,n)
[counts,binLocations] = imhist(X,map)
imhist(___)
Description
[counts,binLocations] = imhist(I) calculates the histogram for the grayscale image I.
The imhist function returns the histogram counts in counts and the bin locations
in binLocations. The number of bins in the histogram is determined by the image type.
You optionally can compute the histogram counts and bin locations using a GPU
(requires Parallel Computing Toolbox™). For more information, see Image Processing
on a GPU.
[counts,binLocations] = imhist(I,n) specifies the number of bins, n, used to calculate the
histogram.
[counts,binLocations] = imhist(X,map) calculates the histogram for the indexed
image X with colormap map. The histogram has one bin for each entry in the colour
map.
This syntax is not supported on a GPU.
imhist(___) displays a plot of the histogram. If the input image is an indexed image, then
the histogram shows the distribution of pixel values above a colour bar of the colour
map.
If you use this syntax when I is a gpuArray, then no plot is displayed. imhist returns the
histogram counts in counts and does not return the histogram bin locations.

4.12 RGB2GRAY

SYNTAX
I = rgb2gray(RGB)
newmap = rgb2gray(map)
Description
I = rgb2gray(RGB) converts the truecolor image RGB to the grayscale image I.
The rgb2gray function converts RGB images to grayscale by eliminating the hue and
saturation information while retaining the luminance. If you have Parallel Computing
Toolbox™ installed, rgb2gray can perform this conversion on a GPU.

4.13 BWCONNCOMP
SYNTAX
CC = bwconncomp(BW)
CC = bwconncomp(BW,conn)
Description
CC = bwconncomp(BW) returns the connected components CC found in the binary
image BW. bwconncomp uses a default connectivity of 8 for two dimensions, 26 for
three dimensions, and conndef(ndims(BW),'maximal') for higher dimensions.
4.14 RGB2HSV
SYNTAX
HSV = rgb2hsv(RGB)
hsvmap = rgb2hsv(rgbmap)
Description
HSV = rgb2hsv(RGB) converts the red, green, and blue values of an RGB image to hue,
saturation, and value (HSV) values of an HSV image.

hsvmap = rgb2hsv(rgbmap) converts an RGB colour map to an HSV colour map.


CHAPTER 5
EXPERIMENTAL RESULTS
Here we apply the proposed system to a polished image of good resolution, corrupted in
turn by four different noises. The output follows the proposed system algorithm.
Step 1. Initially, the RGB image is given as the input.
Step 2. The given RGB image is decomposed into horizontal and vertical edges.
Step 3. The median filter is applied to the three decomposed channels R, G and B for
initial smoothing of the image.
Step 4. The image is then converted to gray scale.
Step 5. After applying the hit-or-miss morphology technique to the given noisy image,
we obtain the output as an edge-detected image.

Fig: 5.1 Horizontal and Vertical edges Fig: 5.2 Median Filter Edges

Fig: 5.3 RGB Smooth medfilt2 Fig: 5.4 Gray scale smooth
The original image without any added noise is shown in fig 5.1 with vertical and
horizontal edges. Initially the RGB image is divided into the three constituent channels
R, G and B. The median filter is then applied to the image, as shown in figure 5.3, for
sharpening. The edges after applying the median filter are shown in fig 5.2. The RGB
image is then converted into a gray scale image for smoothing, shown in fig 5.4.
AFTER APPLYING GAUSSIAN NOISE

Fig: 5.5 Horizontal and vertical edges

Fig: 5.6 Edges Gaussian Fig: 5.7 RGB Smooth medfilt2 & Gaussian noise

Fig: 5.8 Gray scale smooth & Gaussian noise Fig: 5.9 Gray scale smooth with wiener2 & Gaussian noise
The original image without any added noise is shown in fig 5.5 with vertical and
horizontal edges. Gaussian noise is then added to the original image, and the edges
detected under Gaussian noise are shown in fig 5.6. The noisy RGB image is passed
through the median filter to sharpen the edges, shown in fig 5.7. The RGB image is then
converted into gray scale for smoothing, shown in fig 5.8. The gray scale image
processed with the Wiener filter under Gaussian noise is shown in fig 5.9.
APPLYING SALT AND PEPPER NOISE

Fig: 5.10 Horizontal and vertical edges

Fig: 5.11 Edges Salt Pepper Fig: 5.12 RGB Smooth medfilt2 & Salt Pepper noise

Fig: 5.13 Gray scale smooth & salt pepper noise Fig: 5.14 Gray scale smooth with wiener2 & Salt Pepper noise
The original image without any added noise is shown in fig 5.10 with vertical and
horizontal edges. Salt and pepper noise is then added to the original image, and the
edges detected under salt and pepper noise are shown in fig 5.11. The noisy RGB image
is passed through the median filter to sharpen the edges, shown in fig 5.12. The RGB
image is then converted into gray scale for smoothing, shown in fig 5.13. The gray scale
image processed with the Wiener filter under salt and pepper noise is shown in fig 5.14.
APPLYING RAYLEIGH NOISE

Fig: 5.15 Horizontal and vertical edges

Fig: 5.16 Edges Rayleigh Fig: 5.17 RGB Smooth medfilt2 & Rayleigh Noise
The original image without any added noise is shown in fig 5.15 with vertical and
horizontal edges. Rayleigh noise is then added to the original image, and the edges
detected under Rayleigh noise are shown in fig 5.16. The noisy RGB image is passed
through the median filter to sharpen the edges, shown in fig 5.17. The RGB image is
then converted into gray scale for smoothing, shown in fig 5.18. The gray scale image
processed with the Wiener filter under Rayleigh noise is shown in fig 5.19.

Fig: 5.18 Gray scale smooth & Rayleigh Noise Fig: 5.19 Gray scale smooth with wiener2 & Rayleigh Noise
APPLYING GAMMA NOISE

Fig: 5.20 Horizontal and vertical edges

Fig: 5.21 Edges Gamma Fig: 5.22 RGB Smooth medfilt2 & Gamma Noise

Fig: 5.23 Gray scale smooth & Gamma Noise Fig: 5.24 Gray scale smooth with wiener2 & Gamma Noise

The original image without any added noise is shown in fig 5.20 with vertical and
horizontal edges. Gamma noise is then added to the original image, and the edges
detected under Gamma noise are shown in fig 5.21. The noisy RGB image is passed
through the median filter to sharpen the edges, shown in fig 5.22. The RGB image is
then converted into gray scale for smoothing, shown in fig 5.23. The gray scale image
processed with the Wiener filter under Gamma noise is shown in fig 5.24.
PSNR VALIDATIONS FOR VARIOUS NOISES
Table: 5.1 PSNR Validations

Noise PSNR (dB)
Gaussian 13.2352
Salt and pepper 24.5420
Rayleigh 23.5873
Gamma 13.9062

SSIM VALIDATIONS FOR VARIOUS NOISES

Table: 5.2 SSIM Validations

Noise SSIM
Gaussian 0.3226
Salt and pepper 0.9297
Rayleigh 0.8808
Gamma 0.4527
CHAPTER 6
CONCLUSION AND FUTURE SCOPE
CONCLUSION:
The proposed system is not only a new edge detection approach; it is also able
to deal with different noises in digital images and shows the least sensitivity to them,
under conditions in which conventional edge detection methods are highly sensitive to
different kinds of noise.
The system does not change the real positions of the edges, which allows it to be used
in applied cases with peace of mind. In the same situation it gives the best results,
validated both visually and statistically against other conventional edge detection
methods. By modifying the proposed system, a kind of image feature extraction method
can be created (this is left for future work).

FUTURE SCOPE:
This method gives the fine details of an edge, which can help a doctor to assess the
progression of an illness and decide how to treat it. Furthermore, the computational
time for edge detection is much smaller than that of the other traditional techniques.
The work can be extended to text images, satellite images, video frames, etc. In future
work, parallelizing the edge detection algorithms would provide better performance for
image-processing applications.
REFERENCES

[1]Jain, Anil K. Fundamentals of digital image processing. Prentice-Hall, Inc., 1989.


[2]Sobel, Irwin, and Gary Feldman. "A 3x3 isotropic gradient operator for image
processing." a talk at the Stanford Artificial Project in (1968): 271-272.
[3]Prewitt, Judith MS. "Object enhancement and extraction." Picture processing and
Psychopictorics 10.1 (1970): 15-19.
[4]Roberts, Lawrence G. Machine perception of three-dimensional solids. Ph.D. Thesis,
Massachusetts Institute of Technology, Cambridge, MA, USA, 1963.
[5]Lindeberg, Tony. "Scale selection properties of generalized scale-space interest point
detectors." Journal of Mathematical Imaging and Vision 46.2 (2013): 177-210.
[6]Haralick, Robert M. "Digital step edges from zero crossing of second directional
derivatives." IEEE Transactions on Pattern Analysis and Machine Intelligence 1 (1984):
58-68.
[7]Canny, John. "A computational approach to edge detection." IEEE Transactions on
pattern analysis and machine intelligence 6 (1986): 679-698.
[8]Shih, Ming-Yu, and Din-Chang Tseng. "A wavelet-based multiresolution edge
detection and tracking." Image and Vision Computing 23.4 (2005): 441-451.
[9]Lee, James, R. Haralick, and Linda Shapiro. "Morphologic edge detection." IEEE
Journal on Robotics and Automation 3.2 (1987): 142-156.
[10]Rajab, M. I., M. S. Woolfson, and S. P. Morgan. "Application of region-based
segmentation and neural network edge detection to skin lesions." Computerized Medical
Imaging and Graphics 28.1 (2004): 61-68.
[11]Akbari, A. Sheikh, and J. Soraghan."Fuzzy-based multiscale edge detection
(FWOMED)." (2003).
[12]Gonzalez, Rafael C., Richard E. Woods, and S. Eddins. "Digital Image Processing
Using MATLAB, Prentice Hall."." Upper Saddle River, NJ (2003).
[13]Math Works, January. "Image Processing Toolbox for Use with MATLAB: User’s
Guide." The Math WorksInc. (2003).
[14]Lehmann, Erich Leo, and George Casella. Theory ofpoint estimation. Springer
Science & Business Media, 2006.
[15]Huynh-Thu, Quan, and Mohammed Ghanbari. "Scope of validity of PSNR in
image/video quality assessment." Electronics letters 44.13 (2008): 800-801.
Edge Detection System For Noisy Images
[16]Wang, Zhou, et al. "Image quality assessment: from error visibility to structural
similarity." IEEE transactions on image processing 13.4 (2004): 600-612.
[17]Barbu, Tudor. "Variational image denoising approach with diffusion porous media
flow." Abstract and Applied Analysis. Vol. 2013. Hindawi Publishing Corporation,
2013. [18]Schottky,
Walter."ÜberspontaneStromschwankungeninverschiedenenElektrizitätsleitern." Annalen
der physik 362.23 (1918): 541-567.
[19]Forouzanfar, M., and H. Abrishami-Moghaddam. "Ultrasound SpeckleReduction in
theComplex Wavelet Domain." Principles of Waveform Diversity and Design, M.
Wicns, E. Mokole, S. Blunt, R. Sfhneible, and V. Amuso (eds.), SfiTech Publishing
(2010).
[20] MATLAB, Version2015a.
