Fuzzy C Means Brain
ABSTRACT
Segmentation is an important aspect of medical image processing, and clustering approaches are widely used in biomedical applications, particularly for brain tumor detection in abnormal Magnetic Resonance Images (MRI). Fuzzy clustering using the Fuzzy C-Means (FCM) algorithm has proved superior to other clustering approaches in terms of segmentation efficiency. The major drawback of the FCM algorithm, however, is the large computational time required for convergence. The effectiveness of the FCM algorithm in terms of computational rate is improved by modifying the cluster-center and membership-value update criteria. In this paper, the convergence rate of the conventional FCM is compared with that of the Improved FCM.
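The conventional FCM iteration alternates the membership update and the cluster-center update until the centers stop moving. The following is a minimal sketch of that iteration on 1-D intensity data; it is an illustrative Python toy (function names and data are ours, not the authors' implementation).

```python
# Minimal sketch of conventional Fuzzy C-Means on 1-D intensities.
# Hypothetical illustration only; names and data are invented.

def fcm(data, c=2, m=2.0, iters=50, eps=1e-6):
    # initialize cluster centers spread over the data range
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * (k + 1) / (c + 1) for k in range(c)]
    u = []
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in data:
            d = [abs(x - v) + eps for v in centers]  # eps avoids divide-by-zero
            row = [1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                   for i in range(c)]
            u.append(row)
        # center update: v_i = sum_k u_ik^m x_k / sum_k u_ik^m
        new = [sum(u[k][i] ** m * data[k] for k in range(len(data))) /
               sum(u[k][i] ** m for k in range(len(data))) for i in range(c)]
        if max(abs(a - b) for a, b in zip(new, centers)) < 1e-9:
            centers = new
            break
        centers = new
    return centers, u

centers, u = fcm([0.1, 0.15, 0.2, 0.8, 0.85, 0.9])
```

On this toy data the two centers settle near the two intensity groups, and each pixel's memberships sum to one by construction.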
CHAPTER-1
INTRODUCTION TO MATLAB
1.1 History of MATLAB
MATLAB is a programming tool for technical computing.

[Figure 1.1: Magic matrix with dimensions 5 x 5]

Cleve Moler discovered that existing programming languages were not user friendly for his needs in teaching mathematics. If students used programming languages such as FORTRAN, Pascal, or C, they would lose a lot of time
programming and learning these languages. This is why he started to develop MATLAB as an interactive calculator, without any programming facility, just to calculate with matrices. The first version of MATLAB was written in FORTRAN and used the libraries LINPACK and EISPACK for matrix computations. The next stage in the development of MATLAB came in 1983, when Jack Little and Steve Bangert joined Cleve Moler and reprogrammed MATLAB in the C language. They also added the possibility of interpreted programming in so-called M-code, among other features. They called this version MATLAB 1.0, and it was published on the market in 1984, when the company MathWorks was established. MathWorks has developed several newer versions from year to year. In 1990 they introduced the first version of SIMULINK, which also offers graphical programming. The usability of MATLAB and SIMULINK is further improved by so-called Toolboxes and Blocksets, which have been available and improved since 1990. The newest version of MATLAB on the market is version 7.2. A cheaper version of MATLAB is also available for students. We will present the basics of MATLAB through practical work in the next sections. Here you will find a slightly more elaborate answer to the question: what is MATLAB?
MATLAB is:
The time needed for developing a program in M-code is considerably shorter than when programming in Pascal, FORTRAN, or C.
The drawback of classical interpreted M-code is that it is not transferable to other microprocessors (PIC, ATMEL, etc.), as is possible with compiled C code.
[Figure: The MATLAB development environment, started by double-clicking the MATLAB icon: menu, tool line, Workspace, Command window, and Command history; MATLAB is closed by clicking the x.]
1.2.1 HELP in MATLAB
Help can be called in several ways in the MATLAB development environment (see Fig. 2.4):
Help in the Menu, or the command help in the Command Window followed by the name of the function we would like to explore, like this: help name_of_function, confirmed with Enter.
Example:
Using the MATLAB command help in the Command Window, find the information for the functions log and log10.
First, we type the command help log, and then help log10, in the Command Window. Figure 2.6 shows the display of the help command for the function log, while Figure 2.7 presents the help information for the function log10. We can see that log is the natural logarithm, while log10 is the common (base-10, or Briggs) logarithm.
1.3 MATLAB as a calculator
What you will learn in this section:
How to use the MATLAB environment interactively.
How numbers are defined and displayed in MATLAB.
How to calculate with numbers.
How mathematical functions (sin, log, etc.) are called.
How to arrange numbers in MATLAB.
The interactivity of the MATLAB development environment allows problems to be explored by trial; MATLAB can also be used as a calculator. The usage of calculators is well known, so we can start to explore MATLAB on the basis of this knowledge.
Choose the number 10, write it down in the MATLAB Command Window, and see what happens!
What kinds of numbers do we know in mathematics? Does MATLAB know all the sorts of numbers known in mathematics?
Probably, you remember that we do know: natural numbers, integer numbers,
rational numbers, real numbers, and complex numbers.
The number 10 just typed in the Command Window is a natural number. By the way, every typed number has to be confirmed with the Enter key. Other types of numbers are, for example:
Integer number: -5
Figure 1.9: Typing the different types of numbers in the MATLAB Command Window
We can see from Fig. 3.2 that rational and real numbers are displayed by MATLAB with 4 decimal places (digits). The rational number 1/3 is displayed in decimal form with a never-ending sequence of the digit 3, in other words: 0.33333... Other numbers, such as the square root of 2, can also only be represented by a never-ending sequence of decimal digits. The way numbers are presented in the MATLAB development environment can be set with the command format. This command allows us to write a number as a rational number, or in a decimal
format short determines the decimal form with 4 digits after the decimal point, and
format long determines the decimal form with 14 digits after the decimal point.
The default number display in the MATLAB Command Window is format short, as you can see in Fig. 3.2. Therefore we will display the same rational number 1/3 and the real number again with the command format long.
format short e for the exponent form with 4 decimal digits, and the corresponding format long e.
A flat costs about 150,000 Euros; therefore, in average everyday life we need numbers with about 6 digits before the decimal point. Of course, if you are a company owner or a businessman, then you operate with numbers that have more than 6 digits before the decimal point: the more wealth you have, the bigger numbers you need. You, as future engineers or scientists, are probably less interested in calculating wealth than in what kinds of numbers are needed in engineering and science.
The next example is from electrical engineering. Let's look at the simple RC circuit presented in Fig. 3.6, with resistor R, input voltage uin, and output (capacitor) voltage uC = y.
If the mass of the electron neutrino is the smallest mass, the mass of the Universe represents the biggest mass needed. Astronomy estimates the mass of the Universe as mU = 10^53 kg. Mass in physics therefore takes values which differ by 89 (36+53) decimal orders of magnitude.
Quantum physicists do not talk about the size of elementary particles, but according to classical theory there exists an estimate of the electron size: a diameter of an electron of 10^-15 m.
The greatest distances and sizes can again be observed in the Universe. Nowadays we can observe clusters of galaxies which are billions of light years away and whose size is millions of light years. Therefore, we can estimate: the size of the Universe > 10^34 m.
So, in physics we need numbers which differ by 49 (15+34) decimal orders of magnitude to represent sizes and distances.
We can conclude that engineers and scientists need really huge or really small numbers.
So the next question is: what are the biggest and the smallest numbers that can be written and used in MATLAB?
How are numbers written in MATLAB?
Numbers are stored as binary code in every computer. The size of a number used in the computer is determined by the number of bits, i.e. the number of binary digits used to write the value. Computers use 8-, 16-, 32-, or 64-bit binary numbers.
The usual fixed-point notation limits the biggest value that can be written with a 64-bit word, so computers use floating-point binary values. Floating-point representation is precisely described by IEEE Standard 754 from 1985, renewed in 2008. The mentioned standard can be found on the web.
[Figure 3.7: The IEEE 754 double format: the sign of the number, the sign and value of the exponent, and the mantissa.]
The IEEE Standard 754 exponent notation with 64 bits is called double. With this kind of format, we can write numbers from about 10^-308 to 10^+308.
The command in MATLAB to display the biggest representable number is realmax, while the smallest is displayed by the command realmin. The double format, i.e. the 64-bit floating-point format (see Fig. 3.7), is used for calculation in all versions of MATLAB; only in MATLAB Version 7 can the user choose the format used for calculation.
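The realmax and realmin values are simply the limits of the IEEE 754 double format, so the same numbers can be inspected from any language that uses doubles; here is a small Python sketch for illustration.

```python
import sys

# IEEE 754 double limits, the same values MATLAB reports via realmax/realmin
biggest = sys.float_info.max   # about 1.7977e+308
smallest = sys.float_info.min  # smallest normalized positive double, about 2.2251e-308
```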
NOTE: The Enter command will not be shown from here on; it is of course assumed that the user presses the Enter key after typing each command in the Command Window.
format loose (there is one empty line between two written lines), and format compact (no empty lines between written lines).
The basic mathematical operators are: adding + , subtracting - , multiplying * , dividing / , and power ^ .
The operators have different execution priorities. The power operator (^) has the highest priority. Next are the operators multiplying (*) and dividing (/), which have the same priority. Adding (+) and subtracting (-) have the same, and the lowest, priority.
Maybe someone would think it unusual, but the majority of errors in applying the priority rules happen because of the order of operators with the same priority, such as multiplying and dividing.
What is the order of execution of mathematical operators with the same priority? Operators with the same priority are executed one after another, from left to right, as they are written in the expression.
Figure 1.16: The execution priority of the operators with the same priority
First we calculate (5/2), then we multiply (5/2)*3, then we divide again by 3, ((5/2)*3)/3, and in the end multiply everything by 2: (((5/2)*3)/3)*2.
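The same left-to-right rule holds in most programming languages; a quick check of the steps above (Python used here for illustration):

```python
# Operators of equal priority evaluate left to right:
# 5/2 = 2.5, then *3 = 7.5, then /3 = 2.5, then *2 = 5.0
result = 5 / 2 * 3 / 3 * 2
fully_parenthesized = (((5 / 2) * 3) / 3) * 2
```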
Let's write the above mathematical expression with rational numbers.
A typical error happens when we translate a mathematical expression written with rational numbers into a MATLAB expression. Let's see a typical example. If we would like to calculate the rational number 1200/(40·30), then the expected result is 1. If we made the error, we might write the expression without the parentheses:
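The mistake can be reproduced in any language with left-to-right evaluation (Python sketch):

```python
# Intended: 1200/(40*30) = 1
correct = 1200 / (40 * 30)
# Typical error: omitting the parentheses evaluates as (1200/40)*30 instead
wrong = 1200 / 40 * 30
```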
A short list of the basic mathematical functions most often used in MATLAB:
Square root: sqrt()
Exponent function e^x: exp()
Natural logarithm: log()
Absolute value: abs()
Sine function: sin()
Arc sine: asin()
Hyperbolic cosine: cosh()
Inverse hyperbolic cosine: acosh()
Arc tangent: atan()
Hyperbolic tangent: tanh()
Inverse hyperbolic tangent: atanh()
Cotangent function: cot()
Arc cotangent: acot()
Consider the expression sin^2(π/2), written in MATLAB as sin(pi/2)^2.
Figure 1.20: The priority of the sin function is higher than the priority of the power operator
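Because the function call binds tighter than the power operator, sin(pi/2)^2 computes (sin(pi/2))^2. The same expression, in Python for illustration:

```python
import math

# sin(pi/2) = 1, so squaring it still gives 1
value = math.sin(math.pi / 2) ** 2
```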
1.3.2 How to arrange numbers
One number alone is not enough, because in our everyday life there are a lot of numbers and data. So nowadays it is important to arrange a mass of numbers, in other words data.
The next enigma shows how important the ordering of numbers (data) is.
ENIGMA: Calculate the number 25 using only the number 10 and any of the operators!
The enigma above can be solved only with a proper order of numbers and operators.
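One possible solution (a sketch; other orderings exist) relies purely on operator priority: the division is executed before the additions.

```python
# 10 + 10 + 10/2: the division binds first, giving 10 + 10 + 5 = 25
answer = 10 + 10 + 10 / 2
```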
Let's look at some other examples of number arrangement that are often in use.
Example 1: vector
[Table: a vector of values x = 0, 0.5, 1.0, 1.5, 2.0, 2.5 with the corresponding function values f(x).]
CHAPTER-2
OVERVIEW OF PROJECT
2.1 Introduction to Image Processing:
Image processing is a technique to enhance raw images received from cameras/sensors placed on space probes, aircraft, and satellites, or pictures taken in normal day-to-day life, for various applications. An image is a rectangular graphical object. Image processing
involves issues related to image representation, compression techniques and various complex
operations, which can be carried out on the image data. The operations that come under image
processing are image enhancement operations such as sharpening, blurring, brightening, edge
enhancement etc. Image processing is any form of signal processing for which the input is an
image, such as photographs or frames of video; the output of image processing can be either an
image or a set of characteristics or parameters related to the image. Most image-processing
techniques involve treating the image as a two-dimensional signal and applying standard signalprocessing techniques to it. Image processing usually refers to digital image processing, but
optical and analog image processing are also possible.
By mixing light of the three primary colours (red, green, and blue), and hence stimulating the three types of cones at will, we are able to generate almost any detectable colour. This is the reason why colour images are often stored as
three separate image matrices; one storing the amount of red (R) in each pixel, one the amount
of green (G) and one the amount of blue (B). We call such colour images as stored in an RGB
format. In grayscale images, however, we do not differentiate how much we emit of different
colours, we emit the same amount in every channel. We will be able to differentiate the total
amount of emitted light for each pixel; little light gives dark pixels and much light is perceived
as bright pixels. When converting an RGB image to grayscale, we have to consider the RGB
values for each pixel and make as output a single value reflecting the brightness of that pixel.
One approach is to take the average of the contributions from each channel: (R+G+B)/3. However, since perceived brightness is often dominated by the green component, a different, more "human-oriented" method is to use a weighted average, e.g.: 0.3R + 0.59G + 0.11B.
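Both conversions can be sketched for a single RGB pixel; this is an illustrative Python snippet with function names of our own choosing.

```python
def gray_average(r, g, b):
    # simple average of the three channels
    return (r + g + b) / 3

def gray_weighted(r, g, b):
    # weighted average emphasizing the green channel
    return 0.3 * r + 0.59 * g + 0.11 * b

# a pure-green pixel looks brighter under the weighted conversion
avg = gray_average(0, 255, 0)   # 85.0
wgt = gray_weighted(0, 255, 0)  # 150.45
```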
2.1.4 Image Enhancement:
Image enhancement is the process of adjusting digital images so that the results are more suitable for display or further analysis. For example, we can eliminate noise, which makes it easier to identify the key characteristics. In poor-contrast images, adjacent characters merge during binarization. We have to reduce the spread of the characters before applying a threshold to the word image. Hence, we introduce the POWER-LAW TRANSFORMATION, which increases the contrast of the characters and helps in better segmentation. The basic form of the power-law transformation is s = c·r^γ, where r and s are the input and output intensities, respectively, and c and γ are positive constants. A variety of devices used for image capture, printing, and display respond according to a power law. By convention, the exponent in the power-law equation is referred to as gamma. Hence, the process used to correct these power-law response phenomena is called gamma correction. Gamma correction is important if displaying an image accurately on a computer screen is of concern. In our experimentation, γ is varied in the range of 1 to 5.
If c is not equal to 1, then the dynamic range of the pixel values will be significantly affected by scaling. Thus, to avoid another stage of rescaling after the power-law transformation, we fix the value c = 1. With γ = 1, if the power-law transformed image is passed through binarization, there will be no change in the result compared to simple binarization. When γ > 1, there will be a change in the histogram plot, since there is an increase of samples in the bins towards the gray value of zero.
2.1.5 Edge Detection:
Edge detection is the name for a set of mathematical methods which aim at identifying points in a digital image at which the image brightness changes sharply or, more technically, has discontinuities. The points at which the image brightness changes sharply are typically organized into a set of curved line segments termed edges. The following is a list of various edge-detection methods:
Sobel Edge Detection Technique
Prewitt Edge Detection Technique
Roberts Edge Detection Technique
Zero-cross Threshold Edge Detection Technique
Canny Edge Detection Technique
In our project we use the CANNY EDGE DETECTION TECHNIQUE.
2.1.5.2 Canny Edge Detection:
The Canny edge detector is one of the most commonly used image processing tools, detecting edges in a very robust manner. It is a multi-step process which can be implemented on the GPU as a sequence of filters. The Canny edge detection technique is based on three basic objectives.
The edges located must be as close as possible to the true edges. That is, the distance between a point marked as an edge by the detector and the centre of the true edge should be minimal.
Single edge point response: the detector should return only one point for each true edge point. That is, the number of local maxima around the true edge should be minimal. This means that the detector should not identify multiple edge pixels where only a single edge point exists. The essence of Canny's work was in expressing the preceding criteria mathematically and then attempting to find optimal solutions to these formulations; in general, it is difficult to find a closed-form solution that satisfies all the preceding objectives. However, using numerical optimization with 1-D step edges corrupted by additive white Gaussian noise led to a good approximation to the optimal step edge detector.
Because the direction of the edge normal is unknown beforehand, this would require applying the 1-D edge detector in all possible directions. This task can be approximated by first smoothing the image with a circular 2-D Gaussian function, computing the gradient of the result, and then using the gradient magnitude and direction to estimate edge strength and direction at every point. Let f(x,y) denote the input image and G(x,y) denote the circular 2-D Gaussian function.
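The gradient magnitude and direction step can be sketched with finite differences; this Python toy is a stand-in for the smoothed-image gradient, not the full Canny pipeline.

```python
import math

def gradient(img, x, y):
    # central differences approximate the partial derivatives at (x, y)
    gx = (img[y][x + 1] - img[y][x - 1]) / 2.0
    gy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    magnitude = math.hypot(gx, gy)   # edge strength
    direction = math.atan2(gy, gx)   # gradient direction in radians
    return magnitude, direction

# a vertical step edge: strong horizontal gradient, direction ~ 0
img = [[0, 0, 1],
       [0, 0, 1],
       [0, 0, 1]]
mag, ang = gradient(img, 1, 1)
```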
2.1.6 Image Matching:
Recognition techniques based on matching represent each class by a prototype pattern vector. An unknown pattern is assigned to the class to which it is closest in terms of a predefined metric. The simplest approach is the minimum-distance classifier, which, as its name implies, computes the (Euclidean) distance between the unknown pattern and each of the prototype vectors, and chooses the smallest distance to make the decision. There is another approach based on correlation, which can be formulated directly in terms of images and is quite intuitive. We have used a different approach for image matching: comparing a reference image with the real-time image pixel by pixel. Though there are some disadvantages to pixel-based matching, it is one of the best techniques for the algorithm used in this project for decision making. The reference image is stored as a matrix in memory, and the real-time image is also converted into the desired matrix. For two images to be the same, their pixel values in the matrices must be the same. This is the simple fact used in pixel matching. Any mismatch in a pixel value increments the counter used to calculate the number of pixel mismatches.
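The pixel-by-pixel mismatch counter described above can be sketched as follows (Python illustration; the function name is ours):

```python
def count_mismatches(reference, live):
    # compare two equal-sized pixel matrices element by element
    mismatches = 0
    for row_ref, row_live in zip(reference, live):
        for p, q in zip(row_ref, row_live):
            if p != q:
                mismatches += 1   # any differing pixel value adds to the counter
    return mismatches

ref  = [[0, 10], [20, 30]]
live = [[0, 10], [25, 30]]
n = count_mismatches(ref, live)   # one pixel differs
```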
CHAPTER-3
IMAGE FILTERING TECHNIQUES
3.1 Introduction to filtering techniques:
Many applications in the fields of computational photography and image processing require smoothing techniques that can preserve edges well. Typical examples include image de-noising, fusion of differently exposed images, tone mapping of high dynamic range (HDR) images, detail enhancement via multi-lighting images, texture transfer from a source image to a destination image, and single-image haze removal. The smoothing
process usually decomposes an image to be filtered into two layers: a base layer formed by
homogeneous regions with sharp edges and a detail layer which can be either noise, e.g., a
random pattern with zero mean, or texture, such as a repeated pattern with regular structure.
There are two types of edge-preserving image smoothing techniques. One type is
global optimization based filters. The optimized performance criterion consists of a data term
and a regularization term.
The particle filter technique is used for positioning, navigation, and tracking, and is concerned with the problem of tracking single and multiple objects. A particle filter is a hypothesis tracker that approximates the filtered posterior distribution by a set of weighted particles. It weights particles based on a likelihood score and then propagates these particles according to a motion model. Particle-filter-based trackers have the theoretical possibility of tracking multiple hypotheses, whereas the Kalman filter is based on single-object tracking. We show that KPF performs robust multiple-object tracking. Particle filtering is a promising technique because it allows fusion of different sensor data, incorporation of constraints, and accounting for different uncertainties. The algorithm is based on a likelihood factor formed as a product of the likelihoods of the different objects. We show the benefit of using multiple objects compared to colour-based tracking only and texture-based tracking only. After each mean shift procedure, the weight is recomputed as the posterior density evaluated at the new particle positions, augmented with a particle density balancing factor.
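A minimal 1-D particle filter step (weight by likelihood, resample, propagate) can be sketched as below. This is an illustrative toy with invented names and a simplistic likelihood, not the KPF of the text.

```python
import random

random.seed(0)

def particle_filter_step(particles, measurement, noise=0.5):
    # weight each particle by a simple likelihood score (closer = heavier)
    weights = [1.0 / (1e-9 + abs(p - measurement)) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample particles in proportion to their weights
    resampled = random.choices(particles, weights=weights, k=len(particles))
    # propagate with a random-walk motion model
    return [p + random.gauss(0, noise) for p in resampled]

particles = [random.uniform(-10, 10) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(particles, measurement=3.0)
estimate = sum(particles) / len(particles)
```

After a few steps the particle cloud concentrates around the measurement.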
The Kalman particle filter (KPF) for single object tracking
KPF is applied to head tracking to test its ability to track with a weak dynamic model. The test videos involve various motions such as sudden acceleration, rotation, abrupt changes of direction, jumps, and out-of-plane rotation. The first test video sequence, 1FACE, consists of 797 frames of a human face moving in a typical laboratory environment. The face is modeled as an ellipse with a vertical major axis and a fixed aspect ratio of 1.4. Trackers are initialized manually. A few frames of the tracking results using PF and KPF are shown in Fig. 4. The PF tracker with the same dynamic model tends to lag behind the object and eventually loses the head at the 373rd frame. A PF tracker is able to succeed after doubling the dynamic noise and using 250 particles to saturate the search region. On the other hand, KPF with 30 particles and 3 iterations, despite being occasionally distracted by the background clutter, is able to track the face throughout the sequence.
The
SMC methods are a general class of Monte Carlo methods that sample sequentially from a sequence of target probability densities {f_n(x_1:n)} of increasing dimension, where each distribution f_n(x_1:n) is defined on the product space X^n.
Kalman filter versus particle filter:
Kalman filter: a single-object tracker; a recursive estimator based on a Gaussian-distributed state; takes less time when executing, i.e. lower computation time.
Particle filter: based on a general (nonlinear, non-Gaussian) probability distribution of the state; takes more computation time.
In this section, existing edge-preserving smoothing techniques are summarized, with the emphasis on the GIF. The task of edge-preserving smoothing is to decompose an image X into a base layer Z and a detail layer e:
X(p) = Z(p) + e(p).
This type of edge-preserving smoothing technique is based on local filtering. The BF is widely used due to its simplicity. However, the BF can suffer from gradient-reversal artifacts despite its popularity, and the results may exhibit undesired profiles around edges, usually observed in detail enhancement of conventional LDR images or tone mapping of HDR images. The GIF was introduced to overcome this problem. In the GIF, a guidance image G is used, which could be identical to the image X to be filtered. It is assumed that Z is a linear transform of G in the window ω(p):
Z(p) = a_p G(p) + b_p,  p ∈ ω(p),   (2)
where ω(p) is a square window.
Image filtering makes possible several useful tasks in image processing. A filter can be
applied to reduce the amount of unwanted noise in a particular image as shown in fig. Another
type of filter can be used to reverse the effects of blurring on a particular picture. Nonlinear
filters have quite different behavior compared to linear filters. For nonlinear filters, the filter
output or response of the filter does not obey the principles outlined earlier, particularly scaling
and shift invariance. Moreover, a nonlinear filter can produce results that vary in a nonintuitive manner.
Figure 2.1: A defected image, and the real image after applying filtering.
This paper mainly contains five sections which describe the different algorithms and techniques. Section 1 gives a simple introduction to image filtering.
3.6 WORKING EXAMPLE: The mean filter
The simplest filter to implement is known as the mean filter. The mean filter performs average smoothing on an image; the name perfectly describes the function of this filter. Each pixel in the image I is replaced with the mean of the pixels that surround it. In particular, noise is blended into the rest of the picture. A filter that performs average smoothing must use a kernel with all entries non-negative. For example, a 1-D kernel A of size m = 3:
A_avg = 1/3 [1 1 1]
Let I be an image of size N, m an odd number smaller than N, and A the kernel of a linear filter, that is, a mask of size m. Additionally, it is necessary for all the entries in the kernel to sum to one. If the sum is not equal to one, then the kernel must be divided by the sum of its entries (hence the multiplication by 1/3). If this requirement is not met, then
the filtered image will become brighter than the original image, in addition to undergoing the specified filtering effect. This requirement on the mean filter fulfills the second portion of the image filtering goal. This filter is effective at attenuating noise because averaging removes small variations; the effect is identical to averaging a set of data to reduce the influence of outliers. In a two-dimensional mean filter, averaging the m^2 noisy values around a pixel divides the standard deviation of the noise by m.
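A sketch of the 1-D mean filter with the m = 3 kernel from above (Python illustration; borders are left unchanged for simplicity):

```python
def mean_filter_1d(signal, m=3):
    # replace each interior sample with the mean of the m samples around it
    r = m // 2
    out = list(signal)                 # keep borders unchanged in this sketch
    for i in range(r, len(signal) - r):
        window = signal[i - r:i + r + 1]
        out[i] = sum(window) / m       # kernel entries 1/m, summing to one
    return out

# a single noise spike is blended into its neighbours
smoothed = mean_filter_1d([0, 0, 9, 0, 0])
```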
3.7 ALGORITHMS FOR IMAGE FILTERING
A. Linear Smoothing: The most common, simplest, and fastest kind of filtering is achieved by linear filters. A linear filter replaces each pixel with a linear combination of its neighbours, and a convolution kernel gives the prescription for the linear combination.
Linear filtering of a signal can be expressed as the convolution
y(n) = Σ_k x(k)·h(n − k)
of the input signal x(n) with the impulse response h(n) of the given filter, i.e. the filter output arising from the input of an ideal Dirac impulse. From the figure it is clear that image filtering is done by applying a function, and when we apply linear filtering each pixel is replaced by a linear combination of its neighbours.
Box blur:
A box blur, also known as a moving average, is a simple linear filter with a square kernel whose coefficients are all equal. It is the quickest blur algorithm, but it has a drawback: it lacks the smoothness of a Gaussian blur.
A box blur can be computed with a complexity independent of the filter radius. The algorithm is based on the fact that the sum S of elements in a rectangular window can be decomposed into sums C of the columns of this window:
S[i, j] = Σ_k C(i, j + k).
For large kernels one can instead transform each row (column) with the FFT, do the same with a zero-padded Gaussian kernel, then multiply the complex spectra and do the inverse transform.
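A radius-independent box blur in 1-D, using running (prefix) sums in the spirit of the decomposition above (Python sketch):

```python
def box_blur_1d(signal, r):
    # prefix sums: each window sum is two lookups, independent of the radius r
    prefix = [0]
    for v in signal:
        prefix.append(prefix[-1] + v)
    out = []
    for i in range(len(signal)):
        lo = max(0, i - r)
        hi = min(len(signal), i + r + 1)
        out.append((prefix[hi] - prefix[lo]) / (hi - lo))  # mean of the window
    return out

blurred = box_blur_1d([1, 2, 3, 4, 5], r=1)
```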
Hann Window
The Hann window is a smooth function defined as
H(t) = 1 + cos(t),  −π ≤ t ≤ π.
The algorithm that we propose for 1-D Hann smoothing is based on modulation of the input signal with a complex exponent. Consider discrete filtering with a Hann kernel: it can be rewritten as the sum of a box filter and a cosine-modulated input signal. We then solve the update formula for fast calculation of a cosine-modulated real-valued signal. The relative accuracy of this approximation increases as the filter radius increases, but it is acceptable even with a small radius.
Gaussian Blur:
Gaussian blur is considered a perfect blur for many applications, provided that the kernel support is large enough to fit the essential part of the Gaussian. A Gaussian filter on a square support is separable, i.e. in the case of 2-D filtering it can be decomposed into a series of 1-D filterings over rows and columns. When the filter radius is relatively small (less than a few dozen), the fastest way to calculate the filtering result is direct 1-D convolution. First of all, note that the result of convolution has length N+M−1, where N is the signal size and M is the filter kernel size (equal to 2r+1), i.e. the output signal is longer than the input signal.
Secondly, calculating the FFT of the complete image row is not optimal, since the complexity of the FFT is O(N log N). The complexity of the FFT (fast Fourier transform) approach can be reduced by breaking the signal into sections of approximate length M and performing overlap-add convolution section-wise. The FFT size should be selected so that circular convolution does not occur. Usually optimal performance is achieved when the FFT size F is selected as the smallest power of 2 larger than 2M, and the signal section size is selected as F−M+1 for full utilization of the FFT block. This reduces the per-pixel complexity of Gaussian blur to O(log r). However, the constant factor is quite large, so for many practical purposes Gaussian blur can be successfully implemented with simpler filters.
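Separability can be checked directly: filtering rows and then columns with a 1-D kernel reproduces the 2-D result. A Python sketch with a tiny normalized 3-tap kernel ([1 2 1]/4, a binomial approximation of a Gaussian; the helper names are ours):

```python
# 1-D Gaussian-like kernel (normalized binomial approximation [1 2 1]/4)
k = [0.25, 0.5, 0.25]

def conv_rows(img, k):
    # horizontally convolve each row (zero padding at the borders)
    h, w, r = len(img), len(img[0]), len(k) // 2
    return [[sum(k[j + r] * row[x + j] for j in range(-r, r + 1)
                 if 0 <= x + j < w) for x in range(w)] for row in img]

def transpose(img):
    return [list(col) for col in zip(*img)]

img = [[0, 0, 0], [0, 16, 0], [0, 0, 0]]
# separable filtering: rows first, then columns (via transpose)
sep = transpose(conv_rows(transpose(conv_rows(img, k)), k))
```

The impulse at the centre spreads into the outer product of the 1-D kernel with itself, exactly what one pass with the 2-D kernel would produce.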
B. Nonlinear Smoothing:
Median filtering:
Image enhancement is one of the most important concepts in image processing [6]. Its purpose is to improve the quality of low-contrast images, i.e., to enlarge the intensity difference between objects and background; histograms are very important in image enhancement and image processing. The median filter, a nonlinear smoother, replaces each pixel with the median of the pixel values in a window around it.
The straightforward implementation of the median filter requires O(r^2 log r) operations per pixel to sort the array of (2r+1) x (2r+1) pixels in a window. However, an optimization is possible when the image data take a limited range of discrete values, e.g. 8-bit pixel values. It is based on the fact that the median value can easily be calculated from a histogram of the pixel values in a window. For 8-bit pixel values such a histogram contains 256 bins and can be searched in constant time (8 comparisons) independently of the filter radius. When the filter window shifts, this histogram can be updated efficiently: if the filter window shifts one pixel down, the pixels of the upper window row are removed from the histogram (2r+1 operations), and the pixels of the new lower window row are added to the histogram (2r+1 operations).
To optimize the histogram search, a previously calculated median value can be used as a starting point in the search for the new median value. A further optimization of median filtering is possible by maintaining several histograms and combining them in a certain way.
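The sliding-histogram idea can be sketched in 1-D terms: keep a 256-bin histogram, remove the value leaving the window, add the value entering it, and walk the histogram for the median (Python; a simplified illustration, not the 2-D filter).

```python
def sliding_median_8bit(values, r):
    # maintain a 256-bin histogram over a sliding window of width 2r+1
    hist = [0] * 256
    for v in values[:2 * r + 1]:
        hist[v] += 1

    def median():
        # walk the histogram until half the window count is passed
        need = r + 1
        count = 0
        for bin_, n in enumerate(hist):
            count += n
            if count >= need:
                return bin_

    medians = [median()]
    for i in range(2 * r + 1, len(values)):
        hist[values[i - 2 * r - 1]] -= 1   # value leaving the window
        hist[values[i]] += 1               # value entering the window
        medians.append(median())
    return medians

meds = sliding_median_8bit([10, 200, 30, 40, 250, 50], r=1)
```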
Binary morphological operations:
A basic morphological operation is dilation. When a structuring element is defined inside a square window with radius r, the dilation operation sets to 1 all the pixels from which the structuring element overlaps at least one non-zero pixel of the source image. A straightforward implementation of dilation requires O(r^2) operations per pixel to check all the points of the structuring element.
If we keep the number of non-zero pixels that are overlapped by the structuring element, an efficient update rule can be used for this number. When the structuring-element window shifts one pixel to the right, some image pixels that can become overlapped shift in through the right border of the structuring element, and some image pixels shift out of the overlapping area through the left border. So, instead of counting the total number of overlapping pixels, we can increment the previous count by the number of pixels covered by the right border of the structuring element and decrement it by the number of pixels lying to the left of the left border. The complexity of this optimized dilation is O(r). A similar optimization is possible for the erosion operation; for erosion we count the number of zero image pixels overlaid by the structuring element.
Min/Max filters:
A max filter outputs the maximal pixel value from its rectangular window. A
straightforward implementation requires O(r²) operations per pixel.
In the case of small data bit depth, a histogram approach can be used. But when the bit depth
is large, another approach based on a 1D running max filter is more practical. A simple
and fast algorithm called MAXLINE uses a circular buffer of delayed input elements.
The anchor points to the current maximal value. When the window is shifted, a new element
is added to the delay line and compared against the anchor element. If the new element is smaller,
the maximum stays at the anchor; otherwise the anchor moves to the new element. When the anchor
shifts out of the delay line, the whole delay line is scanned for a new anchor.
This algorithm works very fast on IID (independent identically distributed) data, but has
a worst-case complexity of O(r) for monotonically decreasing data. An algorithm with a better
worst-case complexity (although a worse complexity on IID data) has also been introduced; it has a
complexity of O(log r). Such a running max algorithm can be used for adding pixels to the 2D window
of a 2D min/max filter with a worst-case complexity of O(log r) operations per pixel.
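The anchor-based running max described above can be sketched as follows. This is a simplified, hedged reading of the MAXLINE idea in Python; the names are illustrative and the original algorithm's buffer management may differ in detail.

```python
from collections import deque

# 1D running max over a window of the last w samples, keeping an
# "anchor" index to the current maximum inside a delay line.

def running_max(data, w):
    buf = deque(maxlen=w)   # delay line of the last w elements
    anchor = 0              # index (within buf) of the current maximum
    out = []
    for x in data:
        full = len(buf) == w
        buf.append(x)
        if full:
            anchor -= 1     # the oldest element shifted out of the line
        if anchor < 0:
            # The anchor left the delay line: rescan for a new maximum
            # (the expensive case, hit on monotonically decreasing data).
            anchor = max(range(len(buf)), key=lambda i: buf[i])
        elif x >= buf[anchor]:
            anchor = len(buf) - 1   # the new element becomes the maximum
        out.append(buf[anchor])
    return out
```

On typical (IID-like) data the rescan is rare, which is why the average cost is low even though the worst case is O(r) per sample.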
Grayscale morphological operations:
Grayscale morphology is simply a generalization from 1 bpp (bit per pixel) images to
images with multiple bits per pixel, where the Max and Min operations are used in place of the OR
and AND operations, respectively, of binary morphology. Grayscale morphological operations are
therefore based on min/max filters; when the structuring element is rectangular, they can be
optimized by using the min/max filters described above.
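For a rectangular (flat) structuring element, grayscale dilation is exactly a 2D max filter, and it is separable into two 1D passes. A minimal sketch, assuming clamped borders and illustrative function names:

```python
# Grayscale dilation with a flat (2r+1)x(2r+1) structuring element:
# a horizontal 1D max pass followed by a vertical one.

def max1d(row, r):
    """Brute-force 1D max filter (a running-max filter could replace it)."""
    n = len(row)
    return [max(row[max(0, i - r):min(n, i + r + 1)]) for i in range(n)]

def gray_dilate(img, r):
    """2D max filter = horizontal pass, then vertical pass."""
    horiz = [max1d(row, r) for row in img]
    cols = list(zip(*horiz))                  # transpose
    vert = [max1d(list(c), r) for c in cols]
    return [list(row) for row in zip(*vert)]  # transpose back
```

Grayscale erosion is the same construction with `min` in place of `max`; substituting a running-max filter for `max1d` gives the optimized complexity discussed above.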
The purpose of smoothing is to reduce noise and improve the visual quality of the image.
Often, smoothing is referred to as filtering. There are two types of filters that have been found
useful in nuclear medicine:
A. Spatial filters
B. Temporal filters
Spatial filters:
These are applied to both static and dynamic images, whereas temporal filters are applied
only to dynamic images. The simplest smoothing technique is the nine-point smooth. The
nine-point smooth takes a 3 x 3 square of pixels (nine in total) and determines the number of
counts in each pixel. The counts per pixel are then averaged, and that value is assigned to the
central pixel (Figure 5). The same operation can be repeated for the entire image or
restricted to a designated area. Similar operations can be performed with 5 x 5 or 7 x 7 squares.
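The nine-point smooth above amounts to replacing each interior pixel by the mean of its 3 x 3 neighbourhood. A minimal sketch (border pixels are left unchanged here; practical implementations vary in how they treat borders):

```python
# Nine-point smooth: each interior pixel becomes the average of the
# nine counts in its 3 x 3 neighbourhood.

def nine_point_smooth(img):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]      # borders are copied unchanged
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(img[y + dy][x + dx]
                        for dy in (-1, 0, 1)
                        for dx in (-1, 0, 1))
            out[y][x] = total / 9.0
    return out
```

The 5 x 5 and 7 x 7 variants mentioned above only change the neighbourhood radius and the divisor.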
CHAPTER-4
OVERVIEW OF PROJECT
4.1 Introduction:
Image segmentation is the process of partitioning an image into homogeneous regions
using attributes such as pixel intensity, spectral values, or textural properties. This
step is a fundamental task in image analysis and pattern recognition, especially for remote
sensing images.
Remote sensing imagery needs to be converted into tangible information which
can be utilized in conjunction with other data sets [1]. The volume of such images has
increased significantly in recent years. The obtained images provide many details
about the surface, which are useful for mapping, environmental monitoring, resource investigation,
disaster management, and military intelligence [2].
In this context, Alistair et al. [3] review studies that have applied remote
sensing imagery to characterize vegetation vulnerability, in both retrospective and
prospective modes, in natural terrestrial ecosystems including temperate forests,
tropical forests, boreal forests, semi-arid lands, coastal areas, and the Arctic. Abkar
et al. [4] describe a likelihood-based segmentation and classification method for
remotely sensed images. It is based on the optimization of a utility function that can be
described as a cost-weighted likelihood for a collection of objects and their parameters.
In their paper, Zhijian et al. [5] propose a Dynamic Statistical Region Merging method to
improve the segmentation accuracy and correctness for remote sensing images.
In addition, remote sensing images are strongly affected by luminance variations, noise,
and other distortions [6]. Thus, no single segmentation method can produce satisfying
results across urban regions, roads, vegetation, and water areas.
4.2 Proposed algorithm:
The improved FCM algorithm is based on the concept of data compression, in which the
dimensionality of the input is greatly reduced. The data compression includes two steps:
quantization and aggregation [3].
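The compression step can be sketched as follows. This is a hedged illustration only: the quantization rule shown (dropping the q low-order intensity bits) is one possible choice, and the exact rule of [3] may differ.

```python
from collections import Counter

# Data compression for the improved FCM: intensities are quantized and
# then aggregated, so clustering operates on a few weighted exemplar
# values instead of every pixel.

def compress(pixels, q=2):
    """Return (values, weights): the unique quantized intensities and
    how many pixels map to each of them."""
    counts = Counter(p >> q << q for p in pixels)   # quantization
    values = sorted(counts)                          # aggregation
    weights = [counts[v] for v in values]
    return values, weights
```

After compression, the FCM centre and membership updates are computed over the weighted exemplars, so each iteration's cost grows with the number of distinct quantized values rather than the number of image pixels, which is the source of the reduced convergence time.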
CHAPTER-5
EXPERIMENTAL RESULTS
CHAPTER-6
CONCLUSION AND FUTURE SCOPE
Conclusion:
The results show that both the conventional FCM and the modified FCM (MFCM) methods can
segment abnormal brain MR images, with the modified algorithm converging in considerably
less computational time.
Future scope:
Future research in MRI segmentation should strive toward
improving the accuracy, precision, and computation speed of
the segmentation algorithms, while reducing the amount of
manual interactions needed. This is particularly important as
MR imaging is becoming a routine diagnostic procedure in
clinical practice. It is also important that any practical
segmentation algorithm should deal with 3D volume
segmentation instead of 2D slice by slice segmentation, since
MRI data is 3D in nature. Volume segmentation ensures
continuity of the 3D boundaries of the segmented images
whereas slice by slice segmentation does not guarantee
continuation of the boundaries of the tissue regions between
slices.
REFERENCES
[1] Anton Bardera, Jaume Rigau, Imma Boada, Miquel Feixas, and Mateu Sbert,
"Image Segmentation Using Information Bottleneck Method," IEEE Transactions
on Image Processing, Vol. 18, No. 7, pp. 1601-1612, July 2009.
[2] J. Jaya and K. Thanushkodi, "Segmentation of MR Brain Tumor Using
Parallel ACO," (IJCNS) International Journal of Computer and Network
Security, Vol. 2, No. 6, pp. 150-153, June 2010.
[3] D. Jude Hemanth, D. Selvathi, and J. Anitha, "Effective Fuzzy Clustering
Algorithm for Abnormal MR Brain Image Segmentation," International Advance
Computing Conference (IACC 2009), IEEE, pp. 609-614, 2009.
[4] Jian Wu, Feng Ye, Jian-Lin Ma, Xiao-Ping Sun, Jing Xu, Zhi-Ming, "The
Segmentation and Visualization of Human Organs Based on Adaptive Region
Growing Method," IEEE 8th International Conference on Computer and
Information Technology Workshops, pp. 439-443, IEEE, 2008.
[5] Marcus Karnan and T. Logeswari, "An Improved Implementation of Brain
Tumor Detection Using Soft Computing," (IJCNS) International Journal of
Computer and Network Security, Vol. 2, No. 1, pp. 6-10, January 2010.