

Image Processing In
Biomedical Area
Literature Review

Group Members
Kumarasinghe C.U. 100282N
Kumarasiri M.K.D.S. 100285C
Liyanage K.L.D.U. 100299X
Mannapperuma J. 100330L
Wijesinghe W.O.K.I.S. 100609C

8/16/2013

Table of Contents
1 Introduction
2 Image Acquisition
2.1 DCE-MRI (Dynamic Contrast Enhanced - Magnetic Resonance Imaging)
2.2 FLAIR MRI images
2.3 MRI for Brain Tumor Detection
3 Tools Used for Medical Image Processing
3.1 Usimag Tool
3.2 Image Parser Tool
4 Medical Imaging in Diagnosing Diseases
4.1 Brain
4.1.1 Brain MR Image Feature Extraction and Segmentation
4.1.2 Diagnosing Brain Tumors
4.1.3 Using a Composite Feature Vector
4.1.4 Using Type II Fuzzy Logic
4.2 Kidney
4.2.1 Using a Shape-Optimized Framework for Segmentation
4.2.2 Using Wavelet Feature Extraction for 3D Segmentation
4.2.3 Using a CAD System
4.3 Liver
4.4 Heart
4.4.1 A Geometric Snake Model for Segmentation of Medical Imagery
5 Image Analysis and Processing Methods in Proving Correctness
5.1 Vertebra CT Image Segmentation using an Improved Level Set Method
5.2 Tumor Detection Using Probabilistic Neural Network (PNN) Techniques
5.3 Statistical Influence in Geodesic Active Contours
6 References
7 Work Involvement
7.1 100282N
7.2 100285C
7.3 100299X
7.4 100330L
7.5 100609C

1 Introduction
Biomedical imaging concentrates on the capture of images for both diagnostic and therapeutic
purposes. Image processing in the biomedical area covers signal gathering, image formation, image
display and, ultimately, medical diagnosis based on the extracted image features. The techniques
used differ with the type of image involved. Biomedical imaging technologies use X-rays (CT scans),
sound (ultrasound), magnetism (MRI), radioactive pharmaceuticals (nuclear medicine: SPECT, PET) or
light (endoscopy, OCT) to assess the current condition of an organ or tissue, and can monitor a
patient over time for diagnostic and treatment evaluation. In this review we discuss areas in which
image processing is heavily used: identifying brain tumors, kidney and liver diseases, and obtaining
a shape model of the heart. In addition, we discuss the tools that are used for image processing in
these areas.

2 Image Acquisition
In biomedical engineering, there are various technologies for capturing images of the human body,
and they differ in their applications. An accurate, clear image is essential for analyzing the
anatomy and the behavior of an organ or tissue. Engineers have therefore designed imaging
technologies, together with computer-aided diagnosis (CAD) systems, which play a major role in
diagnosing and treating disease. Biomedical imaging technologies use X-rays (CT scans), sound
(ultrasound), magnetism (MRI), radioactive pharmaceuticals (nuclear medicine: SPECT, PET) or light
(endoscopy, OCT) to assess the current condition of an organ or tissue [15]. Using these imaging
technologies, clinicians can monitor a patient over time for diagnostic and treatment evaluation. A
brief introduction to each imaging technology follows.

2.1 DCE-MRI (Dynamic Contrast Enhanced - Magnetic Resonance Imaging)


Magnetic Resonance Imaging (MRI) is a diagnostic study that captures images of the organs and
tissues of the body using a magnetic field and radio-frequency pulses. DCE-MRI is a faster imaging
technique that uses a contrast agent to make specific organs, tissues or tumors easier to see.
Gd-DTPA, the contrast agent used in DCE-MRI, is injected into the bloodstream and makes the organs
more visible in the captured images. During perfusion, Gd-DTPA changes the relaxation times of
tissues and therefore the image contrast. The patterns of contrast change give functional
information, while MRI provides good anatomical information, which helps differentiate diseases
affecting different organs of the body. Using this imaging technique, clinicians can diagnose
diseases by analyzing clear DCE-MRI images.

2.2 FLAIR MRI images


Fluid Attenuated Inversion Recovery (FLAIR) MRI is used for identifying brain tumors and other head
injuries. It is a cornerstone of neuroimaging protocols because it nulls the cerebrospinal fluid
(CSF) signal, producing contrast between lesions in the white or gray matter and the surrounding
brain tissue [15]. FLAIR produces strongly T2-weighted images with suppressed CSF signal [12].
Because CSF appears dark while tumors and edematous tissues appear bright, subtle lesions near the
CSF stand out against the background of attenuated fluid. FLAIR images are very useful in diagnosing
intracranial tumors, periventricular lesions, head injury and demyelinating diseases such as
multiple sclerosis, and in studying normal brain maturation. They provide better discrimination
between tumor and edema than T1- and T2-weighted images [7].

2.3 MRI for Brain Tumor Detection
MRI scan images of a given patient are color, gray-scale or intensity images, displayed here with a
default size of 220×220 pixels. A color image is first converted to gray scale and represented as a
large matrix whose entries are numerical values between 0 and 255, where 0 corresponds to black and
255 to white.
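As an illustration only, the following Python sketch (assuming NumPy and Pillow; the file name is hypothetical) shows a colour MRI slice being converted to the 220×220 grey-scale matrix of values in [0, 255] described above.

import numpy as np
from PIL import Image

img = Image.open("mri_slice.png")      # hypothetical input slice
if img.mode != "L":                    # "L" = 8-bit grey scale in Pillow
    img = img.convert("L")             # weighted RGB-to-grey conversion
img = img.resize((220, 220))           # default display size mentioned above
pixels = np.asarray(img)               # matrix with entries in 0..255
print(pixels.shape, pixels.min(), pixels.max())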

In synthetic and real ultrasound imaging, images are acquired sequentially, one at a time, so the
frame rate is a critical variable. At higher frame rates the images may be less clear, which can
lead to poor diagnostic decisions, and the possibility of acquiring enough data for high-precision
flow estimation is strictly limited. These constraints can be relaxed by using synthetic aperture
imaging, in which data are acquired simultaneously from all directions over a number of emissions
and the full image is then reconstructed from these data. Because a full data set is acquired, both
dynamic transmit and receive focusing are possible, improving contrast and resolution [22].

The specific synthetic and real ultrasound images have 512×512 pixels. The real ultrasound images of
the left kidney were acquired with a Mindray DC-7 ultrasound scanner using convex array transducers.

3 Tools Used for Medical Image Processing

3.1 Usimag Tool


UsimagTool is a tool that supports research in ultrasound image processing. It was built with the
intention of analyzing and visualizing ultrasound images, for example an image of a fetus inside the
mother. The tool brings together three platforms for the ease of the user: the Insight Toolkit (ITK)
for segmentation and registration of multidimensional data, the Visualization Toolkit (VTK) for
visualizing medical images, and the Fast Light Toolkit (FLTK) for implementing graphical objects.
The GUI supports 3D and auxiliary views. The tool provides five image processing algorithms: three
for filtering, one for segmentation and one for registration. Because it is open-source software, it
is broadly beneficial to the community.

Regarding the benefits of the tool, the source code is available for everyone to modify and reuse,
and it is efficient, robust and fast thanks to its implementation in standard object-oriented C++.
It is also a multi-platform application, able to run on many operating systems, and its easy-to-use
GUI provides good usability.

Visualization is performed in three main viewers. Each viewer shows two-dimensional slices from
different 3D data sets and supports the following functions: zoom; display of any of the three
orthogonal views; flipping the x, y or z axis; transposing the axes of the slice being viewed;
displaying selected points; showing image details; viewing a color overlay image; showing the pixel
value and cursor location; changing the intensity window and level; and switching between different
visualization modes.

For pre- and/or post-processing of the image data, some basic operations are included in the tool:
gradient magnitude computation, addition and multiplication of images, relabeling of segmented
images, connected-component labeling, and binary morphological operations (erosion, dilation,
opening and closing).

Several classic image filtering methods are implemented in UsimagTool, such as Gaussian, median,
bilateral and anisotropic filters.
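For illustration, the following Python sketch (not part of UsimagTool, which is written in C++) applies rough analogues of the filters named above using scikit-image; the ultrasound slice here is just random data.

import numpy as np
from skimage import filters, morphology, restoration

us = np.random.rand(128, 128)                         # placeholder ultrasound slice

smoothed = filters.gaussian(us, sigma=2.0)            # Gaussian filter
med = filters.median(us, morphology.disk(3))          # median filter
bilateral = restoration.denoise_bilateral(us, sigma_spatial=3)  # bilateral filter
# an anisotropic-diffusion filter would need an extra package (e.g. medpy)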

These features make the tool quite useful for researchers, thanks to the flexibility of its simple
architecture, which allows new algorithms to be added very quickly. This work can be extended easily
to future technologies, allowing faster detection of diseases.

3.2 Image Parser Tool
The Finite Element Method (FEM) is a powerful mathematical tool used to identify deformations of
tissues and organs from 3D tomographic medical images. These 3D images are generated from a series
of sectional images [18]. The Image Parser tool then generates an FEM mesh even though the regions
of interest (ROIs) are irregular and fuzzy. It uses a semi-automatic method to detect ROIs from the
image context, including neighboring tissues and organs; it can therefore successfully extract the
geometry of the ROIs from a complex medical image and generate the FEM mesh with the help of
user-defined information.

4 Medical Imaging in Diagnosing Diseases
4.1 Brain
4.1.1 Brain MR Image Feature Extraction and Segmentation

4.1.1.1 Overview
Medical image analysis is an important biomedical application; it is computationally complex in
nature and requires the help of automated systems. Such image analysis techniques are often used to
detect changes in the human body from scanned images, and automatic diagnosis from MR images of the
brain is one specific application. Image analysis techniques include image processing, image
segmentation, feature extraction, histogram equalization, etc.

Image preprocessing is necessary because patient movement during imaging and thermal noise can
easily corrupt the images. Preprocessing prepares the images for further processing such as feature
extraction, classification and filtering. Various filters are used, including Gaussian, Wiener and
unsharp filters. Histogram equalization is a further preprocessing tool for equalizing the image
intensity. This step is required because, when detecting the edges of a tumor, the tumor can appear
very dark in the image, which is confusing.

Segmentation holds an important position in the area of image processing. It becomes even more
important for medical images, where pre-surgery and post-surgery decisions are required to initiate
and speed up recovery. Computer-aided detection of abnormal tissue growth is motivated primarily by
the need for the maximum possible accuracy. Manual segmentation of these abnormal tissues cannot
compete with modern high-speed computing machines, which enable us to visually observe the volume
and location of unwanted tissues. A well-known segmentation problem within MRI is labeling voxels
according to their tissue type, which includes white matter (WM), grey matter (GM), cerebrospinal
fluid (CSF) and sometimes pathological tissue such as tumor.

The next step in this diagnostic system is classification, followed by segmentation. The feature
vector is supplied to a classifier that classifies brain MRI images into two categories, normal
brain and abnormal brain. Several classifiers are available for this task; in this literature review
we studied classifiers based on artificial neural networks and on fuzzy logic.

4.1.1.2 Data Description
Experiments were conducted on MR images collected from a sample set of patients. From each patient
in the sample a set of sequences of MR images was taken. Each volume consists of 24 slices in the
axial plane with 5 mm slice thickness. The MR imaging was performed on 3 T (Tesla) Siemens scanners.

4.1.1.3 Pre-processing
The preliminary step in medical image analysis is image preprocessing, which ensures the accuracy of
the subsequent steps. In the preprocessing step the original image is subdivided into small
structural elements and different types of features are then extracted. Three preprocessing steps
are proposed for the original image: histogram equalization, binarization and morphological
operations. Histogram equalization is used to distribute the intensities evenly; this is done with
thresholding and a mean filter. In this research the 2-D discrete Fourier transform of each image is
computed, because it is concentrated for normal tissue whereas it is widespread and amorphous for
abnormal images. The original image is smoothed by two filters, a Wiener filter and a Gaussian
filter, and the edges are then taken from the smoothed images rather than the original images to
reduce the effect of noise.
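The following Python fragment is a hedged sketch of this preprocessing chain (histogram equalisation, binarisation, a morphological opening, and Wiener plus Gaussian smoothing followed by edge extraction), assuming SciPy and scikit-image; the MR slice is a random placeholder.

import numpy as np
from scipy.signal import wiener
from skimage import exposure, filters, morphology

mr = np.random.rand(256, 256)                              # placeholder MR slice

equalised = exposure.equalize_hist(mr)                     # histogram equalisation
binary = equalised > filters.threshold_otsu(equalised)     # binarisation
opened = morphology.binary_opening(binary, morphology.disk(2))  # morphological operation

smooth = filters.gaussian(wiener(mr, 5), sigma=1.0)        # Wiener then Gaussian smoothing
edges = filters.sobel(smooth)                              # edges taken from the smoothed image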

4.1.1.4 Normalization
First, the MRI images are normalized to gray levels from 0 to 1 and the features are extracted from
the normalized images. After the relevant parts of the image are extracted, they are resized, the
multidimensional array is filtered with multidimensional filters, and all fractional values are
rounded. The gray-scale converted image is then combined with the filtered image to generate an
enlarged image.

Normalization subsequently reduces the dynamic range of the intensity values, which makes feature
extraction far easier. Once an image has been normalized, its characteristics roughly match those of
other normalized images. This process uses a coordinate structure to describe different brain
locations, which makes it easy to establish group relationships. Two methods were used for
normalization: the first is MRIreg positioning on the raw MRI scan, and the second applies a linear
normalization transform to the MRI scan.

4.1.1.5 Feature Extraction and segmentation


Feature extraction is the technique of extracting specific features from the preprocessed images of
the different abnormal categories. It is also the process of representing a raw image in a reduced
form to facilitate decision making such as pattern classification.

Magnetic resonance images (MRIs) of the brain are segmented to measure the efficacy of treatment
strategies for brain tumors. A genetic algorithm (GA) search was used to discover a feature set from
multi-spectral MRI data. Segmentations were performed using the fuzzy c-means (FCM) clustering
technique. The GA feature set produces a more accurate segmentation. The GA fitness function that
achieves the best results is the Wilks's lambda statistic when applied to FCM clusters. Compared to
linear discriminant analysis, which requires class labels, the same or better accuracy is obtained by
the features constructed from a GA search without class labels, allowing fully operator independent
segmentation. The GA approach therefore provides a better starting point for the measurement of
the response of a brain tumor to treatment.

Another extraction method has been developed using an improved geometric active contour model.

This model not only solves the boundary leakage problem but is also less sensitive to intensity
inhomogeneity. It defines the initial function as a binary level set function to improve
computational efficiency. The method was applied both to the authors' own data and to brain MR data
provided by the Internet Brain Segmentation Repository. Weak boundaries between brain tissue and
surrounding tissue are often seen in brain MR images, and they cause leakage through these
boundaries during brain extraction.

The reason for the improvement lies in the fact that in some areas the CSF (cerebrospinal fluid)
layer is thinner than elsewhere, so non-brain tissue has intensities similar to brain tissue and
leakage through weak boundaries is liable to occur. By observation, such leakage often occurs at
points of high curvature, so the method searches for such points; if their number reaches a preset
value, determined empirically from the image data, segmentation leakage is predicted and the
segmentation is stopped immediately. To correct the leakage of the evolving contour through weak
boundaries, the leakage through a weak boundary is first detected by calculating the Jaccard
coefficient, the weight of the mean curvature force Fcurv is then increased to prevent high
curvature, and the same slice is segmented again.

The authors therefore propose a method to correct leakage through weak boundaries effectively based
on local threshold estimation. The affected parts are separated with a higher threshold, which
reveals more detail of the weak boundary and lowers the risk of segmentation leakage. In local
threshold estimation the brain region is divided into several parts with two different thresholds,
which are used to separate the background and other non-brain tissues as mentioned previously.

4.1.1.6 Segmentation by thresholding
Thresholding is frequently used for image segmentation. It is a simple and effective segmentation
method for images whose regions have distinct intensities. The technique basically attempts to find
a threshold value that enables the classification of pixels into different categories. A major
weakness of this segmentation mode is that it generates only two classes, so it cannot deal with
multichannel images. It also ignores spatial characteristics, which makes the segmentation sensitive
to noise and to the intensity inhomogeneity expected in MRI; both effects can corrupt the histogram
of the image. To overcome these problems, various versions of the thresholding technique have been
introduced that segment medical images using information based on local intensities and
connectivity.
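A minimal Python sketch of the two flavours of thresholding discussed here, a single global threshold versus a local threshold driven by neighbourhood intensities, assuming scikit-image and a synthetic image:

import numpy as np
from skimage import filters

img = np.random.rand(256, 256)                              # placeholder image

global_mask = img > filters.threshold_otsu(img)             # one global threshold, two classes
local_mask = img > filters.threshold_local(img, block_size=35)  # threshold from local intensities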

4.1.1.7 Feature selection


This process is commonly used in machine learning. The best feature subset contains the minimum
number of dimensions that contribute to high accuracy; the remaining, irrelevant dimensions are
discarded. In this experiment feature selection was done using two approaches, forward selection and
backward selection.

4.1.1.8 Forward Selection


This process begins with no variables and adds them one by one, each time adding the variable that
decreases the error the most, until any further addition no longer decreases the error meaningfully.
A simple ranking-based feature selection criterion was used: a two-tailed t-test, which measures the
significance of a difference of means between two distributions.

4.1.1.9 Backward Selection


This process begins with all the variables and eliminates them one by one, at each step eliminating
the variable whose removal decreases the error the most, until any further removal increases the
error significantly. To reduce overfitting, the error referred to above is the error on a validation
set that is distinct from the training set.
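As a hedged illustration of both strategies (not the authors' code), scikit-learn's SequentialFeatureSelector can run greedy forward or backward selection with cross-validated error; the feature matrix and labels below are random stand-ins for the extracted features.

import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))            # 60 samples, 10 candidate features
y = rng.integers(0, 2, size=60)          # normal / abnormal labels

forward = SequentialFeatureSelector(SVC(), n_features_to_select=4,
                                    direction="forward").fit(X, y)
backward = SequentialFeatureSelector(SVC(), n_features_to_select=4,
                                     direction="backward").fit(X, y)
print(forward.get_support())             # mask of the features kept by forward selection
print(backward.get_support())            # mask of the features kept by backward selection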

4.1.1.10 Classification
The key process in an automated brain tumor detection system is brain image classification. The main
objective of this step is to differentiate the different abnormal brain images based on the optimal
feature set. Several conventional classifiers are available, but most earlier work relies on
artificial intelligence (AI) techniques, which yield more accurate results than the conventional
classifiers. The slow convergence of conventional neural networks is also discussed in the
literature, which emphasizes the need for modified neural networks with a superior convergence rate
for image classification applications. A multilayer perceptron (MLP) with two hidden layers has been
used. To evaluate the classification efficiency, two metrics were computed:

(1) The training performance


(2) The testing performance

The testing performance indicates the neural network's classification efficiency. A conventional
neural network is used for classification after training. A neuro-fuzzy classifier is used to detect
candidate circumscribed tumors. In this literature the back-propagation algorithm is used to train
the neural network, and the weights are adjusted using a basic delta rule. The fuzzy c-means
algorithm is based on the fuzzy c-partition introduced by various researchers in this field,
developed by Dunn and generalized by Bezdek; its aim is to find cluster centers that minimize a
dissimilarity function. The fuzzy-neural approach was found to make more accurate decisions than its
counterparts. A support vector machine (SVM) with a polynomial kernel was chosen to classify the
brain MRI images.
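For illustration, a minimal scikit-learn sketch of the two classifiers mentioned above, an MLP with two hidden layers and an SVM with a polynomial kernel, trained on random placeholder features:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                    # placeholder feature vectors
y = rng.integers(0, 2, size=100)                 # normal / abnormal labels

mlp = MLPClassifier(hidden_layer_sizes=(20, 10), max_iter=2000).fit(X, y)  # two hidden layers
svm = SVC(kernel="poly", degree=3).fit(X, y)                               # polynomial kernel
print("training accuracy:", mlp.score(X, y), svm.score(X, y))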

The techniques mentioned here determine, as a percentage, whether an input brain MR image represents
a healthy brain or a tumorous brain. Their drawbacks are:

 The large variance and complexity of tumor characteristics in images, such as sizes, shapes,
locations and intensities.
 Because of the complexity of the pathology in the human brain and the high quality required for
clinical diagnosis, intensity features alone cannot achieve acceptable results.

Although segmentation by thresholding is a simple technique, several factors can complicate the
thresholding operation, for example non-stationary and correlated noise, ambient illumination,
busyness of gray levels within the object and its background, inadequate contrast, and an object
size not commensurate with the scene. This motivated a new image thresholding method based on a
divergence function. In this method the objective function is constructed using the divergence
between the two classes, the object and the background, and the required threshold is found where
this divergence function shows a global minimum.

The GAC (geometric active contour) algorithm is more efficient than other extraction algorithms
because it uses a binary level set function, which eliminates the expensive re-initialization of
existing brain extraction algorithms. It can therefore extract brain tissue with high efficiency and
full automation.

4.1.2 Diagnosing Brain Tumors

4.1.2.1 Using Digital Image Processing Based on Soft Computing

Identifying a brain tumor must be very fast, because the patient cannot recover if the damage
exceeds 50%. CT scanning is a primary technique for detecting brain tumors. Normally the CT scan
image is examined manually by an expert doctor in a radiology lab, but instead of detecting tumors
manually, digital image processing can be used for fast detection.

First the CT scan image of the brain is converted to a gray-scale image by removing the red, green
and blue components. The gray-scale image is then converted to matrix form by scanning the X and Y
axes. Normally the pixel values in this matrix lie between 0 and 255. If the pixel value at a
certain location is 255 or 0, the surrounding values are searched to see whether they are also 255
or 0. If a brain tumor exists, the search returns a 3×3 or larger matrix. This method can identify
brain tumors very quickly, but several other implementations give more accuracy, such as type II
fuzzy logic and feature extraction, which are described later in this review. The following matrix
gives an example of pixel values.

  0   3   3   0 130 131 131 165
215 215 115 220 115 115 215 215
215 131 133 255 255 255 215 215
215 118 135 255 255 255 215 111
215 168 144 255 255 255 138 115
215 137 164 187 171 171 137 114
  0 138 154 165 236 171 171 215
215 215 215 215   0 215 215 215
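The following NumPy sketch is one possible reading of the search described above: it flags pixels whose entire 3×3 neighbourhood is saturated (all 255 or all 0). It is illustrative only, not the authors' implementation.

import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def saturated_blocks(gray):
    # a pixel is flagged only if every value in its 3x3 neighbourhood is 255 (or 0)
    all_white = minimum_filter(gray, size=3) == 255
    all_black = maximum_filter(gray, size=3) == 0
    return all_white | all_black

gray = np.full((8, 8), 215, dtype=np.uint8)      # background similar to the matrix above
gray[2:5, 3:6] = 255                             # the 3x3 block of 255s in the example
print(np.argwhere(saturated_blocks(gray)))       # centre of the suspected tumour block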

4.1.3 Using a Composite Feature Vector


Distinguishing pathological tissue from healthy tissue is very important in diagnosing brain tumors.
Although many technologies have advanced in this area, radiologists still make the diagnosis and
identify the location of the lesion manually, which is prone to error. Identifying the accurate
location of a brain tumor helps to minimize the damage to healthy tissue caused by therapy
procedures such as radiosurgery. Radiosurgery aims radiotherapy beams very precisely at the brain
tumor area; if the beams hit healthy tissue, it is damaged and cells die. Hence precisely
distinguishing pathological from healthy tissue is essential.

A composite feature vector can be used to distinguish pathological and healthy tissues in FLAIR MRI
images of the brain. Feature vectors have been developed using either statistical parameters or
wavelet parameters; here a combination of statistical and wavelet parameters is used.

In most cases researchers have focused mainly on the detection of the tumor only, but identifying
its exact location is also essential for treatment, surgery and management of the tumor. There are
several research works on multimodal images for detecting and identifying the tumor and edema
together; multilevel segmentation with an integrated Bayesian model and probabilistic segmentation
of brain tumors based on multimodal magnetic resonance images are a few of them.

The histogram of the intracranial brain image does not clearly separate the five tissue classes in
the brain: white matter, gray matter, CSF, tumor and edema. For segmentation it is therefore
necessary to design a composite feature vector.

The k-means algorithm is used to segment the blocks of FLAIR images into normal tissue, fluid and
pathological tissue regions based on the feature vectors. With the knowledge of these five feature
classes, a neural network is trained using the back-propagation algorithm. Pseudo-coloring is then
applied for better image interpretation and visualization. An ANN is a good tool for analyzing MRI
images and classifying them in terms of texture, intensity and contrast; feature extraction and
texture classification of MRI images using an ANN is described later in detail.

In developing the composite feature vector, preprocessing the image is the first step. For
preprocessing, the intracranial brain is extracted from the original image. Extracranial tissues
such as the skull and eyes should be removed to make segmentation easy and to eliminate the
possibility of false segmentation. MATLAB image processing software is used for the extraction. It
first reads the original image and converts it into a gray-level image. The MATLAB command "roipoly"
makes a mask of the intracranial brain, with 0s outside the region of interest and 1s inside;
"roipoly" returns a binary image that can be used as a mask for masked filtering and is used to
specify a polygonal region of interest (ROI) within an image. The mask returned by the command is
multiplied with the gray-level image to obtain the intracranial brain image.

The intracranial brain images are then processed block by block. The feature vector corresponding to
each block is computed by a MATLAB program. The feature vector of a block contains five parameters:
the mean, the variance and three energy measures of the high-frequency wavelet transforms. The mean
is calculated by summing the pixel values in the block and dividing by the number of pixels. The
equations for calculating the mean and variance are given below.

Pixels in the block: p1, p2, ..., pn

Mean: M = (1/n)(p1 + p2 + ... + pn)

Variance: V = (1/n) Σ (pi − M)²,  i = 1, ..., n

The three wavelet energies come from the horizontal, vertical and diagonal bands of the wavelet
transform. These high-frequency wavelet bands give information about the texture properties of the
image, while the mean and variance give information about the average pixel intensity and average
contrast.

The block size is normally 4×4 = 16 pixels. To process the whole image, the five features of a block
are stored as a row vector; the next block is then processed and its features stored as another row
vector, and all blocks are processed and stored in this way.
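The following Python sketch (assuming NumPy and PyWavelets; the block size and image are placeholders) computes the five-element feature vector described above for every 4×4 block: mean, variance and the three high-frequency wavelet energies.

import numpy as np
import pywt

def block_features(block):
    cA, (cH, cV, cD) = pywt.dwt2(block, "haar")          # one-level 2-D wavelet transform
    return [block.mean(), block.var(),                   # mean and variance
            np.sum(cH ** 2), np.sum(cV ** 2), np.sum(cD ** 2)]  # horizontal, vertical, diagonal energies

image = np.random.rand(64, 64)                           # placeholder intracranial FLAIR slice
features = np.array([block_features(image[r:r + 4, c:c + 4])
                     for r in range(0, image.shape[0], 4)
                     for c in range(0, image.shape[1], 4)])   # one row vector per block
print(features.shape)                                    # (256, 5)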

The k-means algorithm is a standard algorithm for unsupervised learning in neural networks, pattern
recognition, classification, cluster analysis, etc. It is simple and fast and can run on large data
sets. Here k-means is used to classify the patterns of the feature vectors into five segments:
tumor, edema, gray matter, white matter and CSF. With these five feature classes as inputs, an
artificial neural network is trained using the back-propagation algorithm.
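A minimal, self-contained sketch of this clustering step (scikit-learn assumed; the block feature matrix is random here, and assigning clusters to tissue types would still need expert knowledge):

import numpy as np
from sklearn.cluster import KMeans

features = np.random.rand(256, 5)                    # stand-in for the block feature vectors
labels = KMeans(n_clusters=5, n_init=10).fit_predict(features)   # five tissue clusters
print(np.bincount(labels))                           # number of blocks assigned to each cluster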

4.1.4 Using Type II Fuzzy Logic


Physical and neurological examinations and laboratory findings are used to diagnose brain tumors,
and improved diagnostic accuracy relies on advanced imaging techniques. To make a diagnosis,
neurologists usually use MRI and CT scan images. To identify a tumor, its location, mass and edges
should be clearly delineated.

The number of brain tumors identified has increased greatly in recent years, owing to improved
diagnostic methods, better medical care and a growing number of neurosurgeons. Survival time depends
on the stage at which the tumor is identified and on its histological type, so early detection of
brain tumors is essential.

Many systems have been proposed for the detection of brain tumors. One such system is a type II
fuzzy image processing expert system; the fuzzy framework is very useful for dealing with the
absence of sharp boundaries.

Fuzzy logic is a many-valued logic applied in areas such as pattern recognition and computer vision.
The image processing techniques used to identify brain tumors involve many uncertainties. Type I
fuzzy sets cannot model these uncertainties directly because their membership functions are crisp;
type II fuzzy sets can model them more accurately.

There are two types of type II fuzzy systems: interval-valued and generalized. In an interval-valued
system the upper and lower bounds of membership are crisp and the spread of the membership
distribution is ignored, under the assumption that membership values between the upper and lower
bounds are uniformly distributed or scattered. In a generalized type II fuzzy system the upper and
lower membership values and the spread of membership values between these bounds are all defined.

MRI images contain many uncertainties caused by non-uniformity and inhomogeneity. Type II fuzzy
logic provides powerful tools for identifying brain tumors in the presence of these uncertainties.
Developing a type II fuzzy system involves two steps: designing a strategy for building the system,
and implementing it.

There are two ways of generating fuzzy systems: supervised and unsupervised learning. Unsupervised
learning can be used to cluster the input data into classes on the basis of their statistical
properties alone; in supervised learning the training data contain both the inputs and the desired
outputs. Here only unsupervised learning was used for the design.

After the design, the system is applied to its application area. T1-weighted MRI images are used as
inputs. The proposed method has four stages: preprocessing, segmentation, feature extraction and
approximate reasoning.

In preprocessing, the noise and artifacts in the image are reduced using three common filters: a
median filter, unsharp masking and a Wiener filter. The algorithm used here is rule based: if the
center pixel is an outlier, the median filter is used; if it lies on an edge, the unsharp masking
filter is used; otherwise the Wiener filter is used.
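A loose Python sketch of this rule-based preprocessing (SciPy and scikit-image assumed; the outlier and edge tests below are simplified illustrations, not the paper's exact rules):

import numpy as np
from scipy.ndimage import median_filter
from scipy.signal import wiener
from skimage import filters

img = np.random.rand(128, 128)                   # placeholder T1-weighted slice

med = median_filter(img, size=3)                 # median filter
sharp = filters.unsharp_mask(img)                # unsharp masking
wien = wiener(img, 3)                            # Wiener filter

is_outlier = np.abs(img - med) > 0.3             # crude outlier test
is_edge = filters.sobel(img) > 0.2               # crude edge test

result = np.where(is_outlier, med, np.where(is_edge, sharp, wien))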

The preprocessed image is then segmented into four classes: white matter, gray matter, tumor and
CSF. For the segmentation, a probabilistic c-means method is used with type II fuzzy logic; this
methodology is described in detail in later sections together with the feature extraction
methodologies.

Approximate reasoning is done by defining diagnostic rules. Tumor shape, mass existence, patient age
and the experimental results are the parameters used here.

Given the uncertainties of the real world, a type II fuzzy expert system can provide better results
than a type I fuzzy expert system, but the preprocessing stage of this method needs further research.

4.2 Kidney

4.2.1 Using a Shape-Optimized Framework for Segmentation


Ultrasound imaging is used extensively for medical diagnosis. Compared with MRI, CT and X-ray, the
quality of ultrasound images is relatively low owing to attenuation, speckle, shadows and signal
dropouts, which makes the segmentation task more complex. More significantly, kidney segmentation in
ultrasound images has rarely been studied in certain respects, such as treating the noise of the
image as part of the texture feature, which then has to be extracted in every segmentation process.

In recent years the level set method, proposed by Osher and Sethian, has been used for segmentation.
Many models have been incorporated into level set functions, such as global intensity statistics,
texture models, shape prior models and Markov random field models. Although these models improve the
results, the complex calculations in the level set evolution lead to long computation times.

To counter these limitations of kidney segmentation in ultrasound images, a novel framework has been
presented. It combines NLTV (non-local total variation) image denoising, DRLSE (distance regularized
level set evolution) and shape priors, and consists of three processes: an initial process, a
segmentation process and a post-optimization process.

In the initial process, NLTV image denoising is exploited to reduce the influence of noise in the
ultrasound image caused by shadow, speckle, attenuation and signal dropouts. The result is an almost
homogeneous gray-scale image of the kidney area with the organ boundaries preserved. In NLTV
denoising the data fidelity term is implemented via iterative non-local methods, which preserve the
structural information in the denoised image.

In the segmentation process, the DRLSE method is used to obtain a coarse segmentation. DRLSE is a
simple level set evolution that reduces the segmentation time. Its main PDE contains two parts: a
term that detects object boundaries from image gradients and a term that maintains the signed
distance property. During level set evolution the level set function (LSF) may not remain smooth, so
re-initialization is normally essential; DRLSE, proposed by Chunming Li [11], is a level set
formulation with an intrinsic mechanism for maintaining this desirable property of the LSF. The
segmentation process produces a binary image, with the black and white regions representing the
kidney and the background respectively.

Since the noise cannot be completely eliminated by NLTV denoising, the result of the DRLSE method is
still influenced by noise. A post-optimization process is therefore applied with shape priors, to
eliminate the remaining noise from the monochrome image produced by DRLSE and to obtain a kidney
shape space. The shape prior exploits principal component analysis, with the boundaries represented
as the zero level set of a 2D scalar function. This representation is free of parameterization and
topologically flexible, because different topologies of the curve are represented by the constant
topology of the scalar function. The final outcome is called the optimized segmentation result of
DRLSE.

The results of this framework were compared with manual segmentations performed by surgeons using
the ITK-SNAP software. The qualitative results are very close to the manual segmentation, which
implies that the proposed method produces accurate results. Quantitative results were also analyzed
with respect to the manual segmentation using the metrics sensitivity (SN), specificity (SP) and
positive predictive value (PPV); higher values of these metrics indicate segmentation results closer
to the manual segmentation, and with the proposed method the values increased significantly.
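For reference, a minimal sketch of how these three metrics can be computed from binary masks (random placeholders stand in for the automatic and manual segmentations):

import numpy as np

auto = np.random.rand(128, 128) > 0.5        # automatic segmentation mask
manual = np.random.rand(128, 128) > 0.5      # manual (reference) segmentation mask

tp = np.sum(auto & manual)                   # true positives
tn = np.sum(~auto & ~manual)                 # true negatives
fp = np.sum(auto & ~manual)                  # false positives
fn = np.sum(~auto & manual)                  # false negatives

sn = tp / (tp + fn)                          # sensitivity
sp = tn / (tn + fp)                          # specificity
ppv = tp / (tp + fp)                         # positive predictive value
print(sn, sp, ppv)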

From the qualitative and quantitative results it can be concluded that applying NLTV denoising
before segmentation delineates the kidney boundaries more accurately and quickly than the prevailing
methods. In addition, better results with smooth boundaries can be obtained by using shape priors at
the end of the evolution, which drives it to its global minimum.

4.2.2 Using Wavelet Feature Extraction for 3D Segmentation


Polycystic kidney disease (PKD) is a genetic disease passed down through families. At the initial
stages of the disease, cysts cause kidney swelling, disturbing kidney function and leading to
chronic high blood pressure and kidney infections. The disease produces enlarged kidneys with
several cysts, and since the severity of renal impairment is coupled with the size of the kidneys,
the enlargement leads to severe renal failure. An automatic segmentation method has therefore been
proposed and evaluated to segment the kidneys in MRI: a 3D method that automatically segments the
kidney in 3D MRI using wavelet features and kidney geometry.

The objective of this technique is to refine the kidney boundaries. The technique has two stages,
training and application. In the training stage, ten training MRI volumes are used to train the
wavelet feature classification and to build a predefined model; here the kidney boundaries are
defined manually.

In the application stage, wavelet-based support vector machines (W-SVMs) are used to tentatively
label each voxel as kidney or non-kidney tissue. The kidney textures are captured by the trained
W-SVMs. By integrating texture features with the geometrical data of the kidney, the W-SVMs can
robustly differentiate kidney tissue from adjacent tissue. The trained W-SVMs label the voxels
around the surface as kidney or non-kidney tissue based on their texture features from different
wavelet filters. Subsequently, after the shape model is translated to the kidney region defined by
intensity profiles of the MRI, the kidney surface is driven to the boundary between kidney and
non-kidney tissue based on defined weighting functions and the labeled voxels. A 3D edge detection
method based on Canny edge detection is also employed, and high-intensity regions in the MRI are
combined with the detected edges.

The combined image is then used to localize the model using the lower edge of the kidney. The
segmented kidney is refined with a region-growing method in the model-defined region, and the model
is then re-localized based on the newly detected region.

In this technique a kidney probability model is used to refine the segmentation. The ten segmented
kidneys are registered using an affine transformation, although many other registration methods
could be used to create the model. Principal axis transformation is used because of its
computational speed and simplicity. It is derived from the classical theory of rigid bodies: a rigid
body is uniquely located by knowledge of the position of its center of mass and its orientation with
respect to the center of mass.

The basic parameters used for registering the kidney are the position of the center of mass, the
rotation of the kidney about the center of mass, and the lengths of the principal axes. These
properties uniquely determine the location and geometry of the kidney in 3D space. After the ten
volumes are registered individually, they are overlaid and a probability model is created for each
voxel, based on how many of the kidney representations are labeled as kidney tissue at that
particular location.

A quantitative performance assessment was conducted by comparing the results with the corresponding
gold standard obtained from manual segmentation. The Dice similarity was used as the metric for the
kidney segmentation algorithm; it is computed as

D(S, G) = 2|S ∩ G| / (|S| + |G|),

where S denotes the voxel set of the kidney segmented by the algorithm and G denotes the voxel set
of the kidney in the gold-standard data. For qualitative assessment of the proposed model, MR data
sets from seven mice were used, which are different from the MR data used to build the model and for
training.
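A short sketch of the Dice coefficient defined above, computed on two binary voxel masks (random placeholders for the algorithm output S and the gold standard G):

import numpy as np

S = np.random.rand(64, 64, 64) > 0.5         # algorithm segmentation
G = np.random.rand(64, 64, 64) > 0.5         # gold-standard segmentation

dice = 2 * np.sum(S & G) / (np.sum(S) + np.sum(G))
print(dice)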

In conclusion, this method employs a learning-based mechanism using W-SVMs to automatically collect
texture features in different regions of the kidney. The probability model is integrated into the
segmentation to adaptively identify kidney and non-kidney tissue. In this manner, even though the
kidney appears different in different parts and has weak boundaries near the liver, pancreas or
spleen, the model is still able to produce reliable and accurate segmentation in 3D MR images.

4.2.3 Using a CAD System


Advanced MRI techniques such as DCE-MRI are widely used to monitor a newly transplanted kidney.
Acute rejection is the most common cause of graft failure after kidney transplantation, and early
detection is crucial to preserving the function of the transplanted kidney. An algorithm to monitor
the behavior of the kidney has been proposed which consists of three major steps. In the first step,
the kidney is isolated from the surrounding anatomical structures by evolving a deformable model
based on two density functions. In the second step, the motion of the kidney is examined using the
finite element method (FEM). Finally, the results of the above steps are used to classify
transplants as normal or undergoing acute rejection.

Around 12,000 kidney transplants are performed annually in the United States [25]. Because the
supply of donor kidneys is limited, it is essential to preserve the transplanted kidney. Currently
the diagnosis of acute rejection is made by biopsy, which carries a risk of infection for the
patient; DCE-MRI is therefore a good way to minimize the harm that a biopsy may cause.

After a kidney transplantation, images are captured daily to analyze the behavior of the kidney.
Before a DCE-MRI image is captured, the contrast agent Gd-DTPA is injected into the bloodstream so
that, as it perfuses the organ, the kidney becomes easier to distinguish from the other anatomical
structures in the body. There are also several problems when capturing the image: the spatial
resolution of the MRI is low because of fast scanning, the images are blurred by the motion of the
kidney as the patient breathes, and the intensity of the kidney changes non-uniformly as the
contrast agent perfuses into the cortex, which complicates the segmentation procedure.

Steps have been taken to overcome these registration and segmentation problems. Gerig et al.
proposed the Hough transform as a solution to the registration problem, and an algorithm to overcome
the segmentation problem is based on a deformable model guided by a stochastic force that represents
the intensity and shape prior of the kidney.

The ultimate goal of the proposed algorithms is to construct mean intensity signal curves from the
DCE-MRI images. The algorithms were tested on thirty patients, and acute rejection could be
identified early using a Bayesian supervised classifier that learns statistical characteristics from
a training set of normal and acute-rejection cases.

4.3 Liver
Ultrasonography (visualizing inner body structures), computed tomography (2D and 3D imaging using
X-rays), magnetic resonance imaging (imaging using magnetic fields) and nuclear medicine (radiation
treatment, etc.) have been used to detect and prevent liver-related diseases such as liver cancer.

It is quite difficult to find the liver boundary because many other organs, such as the spinal cord,
kidney and gastrointestinal tract, are close to it. Liver boundary segmentation is done in two parts:

 Boundary detection
 Boundary extraction.

First the CT image must be converted to a binary-valued bitmap. The bitmap is generated by dividing
the original image into a set of 16×16 image blocks and assigning either a zero or a one to each
block. The right-bottom region of the image can then be eliminated, since the liver cannot be in
that area; this reduces the search area. After some further modification of the image, a Catmull-Rom
B-spline is used to interpolate the liver boundary.
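A rough sketch of the block-level binarisation step (the papers' exact assignment rule is not given, so comparing each block's mean intensity with a global Otsu threshold is an assumption made here for illustration):

import numpy as np
from skimage import filters

ct = np.random.rand(512, 512)                                # placeholder CT slice
thr = filters.threshold_otsu(ct)

block_means = ct.reshape(32, 16, 32, 16).mean(axis=(1, 3))   # mean of each 16x16 block
bitmap = (block_means > thr).astype(np.uint8)                # coarse binary bitmap (32x32)
bitmap[16:, 16:] = 0                                         # drop the right-bottom region
print(bitmap.shape)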

Liver contour extraction proceeds by first enhancing the image and performing a hematoma network
classification; this can then be applied iteratively to create the liver contour.

The main advantage of these imaging techniques is that they can remove the need for an unnecessary
biopsy. A biopsy is a medical procedure that removes a sample of the infected or cancerous tissue in
order to test whether a patient has a particular disease.

4.4 Heart

4.4.1 A Geometric Snake Model for Segmentation of Medical Imagery


Snake is another name for the active contour model, which is used to delineate an object outline in
noisy 2D images. The use of snakes for segmenting medical imagery is based on deformable contours,
which deform to fit various objects and motions. The model has been used for edge and curve
detection; for segmentation of MRI, CT and ultrasound images; for shape modeling; and for visual
tracking. Here the snake model is exploited to segment myocardial heart boundaries as a prerequisite
for deducing vital information such as the ejection fraction, the ventricular volume ratio and
cardiac output.

In the classical, energy-minimization theory of snake models, continuity splines move under the
influence of external image-dependent forces, internal forces and constraints set by the user. A
number of problems are associated with this method, namely initialization, the existence of several
minima, and the choice of the elasticity parameters. The present model, in contrast, unifies the
curve evolution approach to active contours with the classical energy methods. It automatically
handles topological changes, through merging and splitting of contours, within the gradient flow
energy framework; this capability follows from first principles of geometric energy minimization,
and the model can be seen as a derivation from the Euclidean curve-shortening evolution.

The main components of the snake model are curve evolution theory and mean curvature surface
evolution.

Curve evolution theory: the purpose of this theory is to reduce the set of vertices containing
information about the polygon to a subset of vertices carrying the significant information of the
original contour [13]. The mathematical foundation of the new contour model is Euclidean curve
shortening.

Mean curvature surface evolution: an inhomogeneous diffusion algorithm is developed that regards the
image as a surface, which is then evolved at a speed proportional to its mean curvature. This
reduces noise while preserving the image structure, and an adaptive scaling parameter enhances the
speed of the diffusion [4].

The application of this snake model to medical imagery is as follows. For 2D active contours, the
evolution equation concerns the propagating front in the image plane. This propagation may not be
smooth, so for evolution past discontinuities an entropy condition must be satisfied, which assures
the physical existence of the front.

The application is described using a number of 2D images from which contours are extracted using
snake/bubble techniques. The images were chosen from three modalities: MRI, CT and ultrasound. The
results were obtained on a Sparc 10 workstation, and the variation in per-iteration time between
different images is a function of the initial contours and the type of image. The metrics used for
the analysis are the number of iterations and the processing time.

The contour extraction results were compared quantitatively to distinguish the better technique.
Using the bubble technique for contour extraction on an MRI heart image took 45 iterations in 3 s to
produce the contour, while the snake (inward) evolution took 30 iterations in 2.5 s. Two bubbles
were then used to capture the edge of a cyst in a breast ultrasound image, where the cyst boundary
was found in 75 iterations in about 5 s. Finally, snake (inward) evolution was used on a CT bone
image, requiring 67 iterations in 8 s. The last two experiments demonstrate the speed and utility of
the snake model in the context of changing topologies, multiple contours, and finding boundaries in
noisy environments.

It is concluded that applying this technique to extract features from rather noisy medical imagery
yields powerful results. The approach is geometric, based on image-dependent Riemannian metrics and
the associated gradient flows for snake models.

5 Image Analysis and Processing Methods in Proving Correctness
The correctness of esthetic medical procedures can be proved by detecting errors in the irradiation
dosage delivered during the procedure. Since many such procedures use a fractional laser or
radiofrequency, errors may occur due to machine precision faults. To detect them, the treated area
of skin is photographed with a thermo-vision camera equipped with a photon detector and the picture
is loaded into MATLAB. What needs to be found is the area of overlapping pulses (used for the
procedure), which can be identified with the algorithm given in the article. By detecting errors in
such procedures, medical authorities can recommend proper esthetic procedures or treat patients
exposed to dangerous dosages.

According to the American Society of Plastic Surgeons, 13.8 million low-invasive procedures were
performed in 2011 in the USA alone. There have been large gains in the medical equipment market, and
for this equipment to be trusted it must be approved by a proper procedure, since there is a
possibility of adverse side effects.

The procedure delivers the energy of laser radiation or radiofrequency to the patient's skin in the
form of pulses, each affecting a definite tissue area. The safety of the treatment depends on
precise movement of the therapeutic equipment head across the patient's skin.

Several methods are currently used to monitor the efficiency and safety of laser esthetic
procedures: optoacoustic methods (an imaging technology based on the photoacoustic effect, which can
image structures in turbid environments) and optodynamic methods (dynamic measurement techniques
that use small lasers). These use highly sensitive cameras that image fluorescence phenomena in real
time and measure fluorescence intensity. The procedure given in the article automatically calculates
the degree of coverage of the treated area to verify the correctness of the procedure.

Contemporary lasers do not offer a qualitative analysis of the procedures performed with them; it is
assumed that an expert laser operator performs the cosmetic procedure correctly, without overlapping
irradiation doses. None of the existing solutions offers an analysis of the correctness of
procedures performed with laser equipment. The proposed procedure would solve this problem, since
defective laser treatments could be identified and corrected before more patients are harmed by side
effects.

5.1 Vertebra CT Image Segmentation using an Improved Level Set Method
This is another area in which image processing methods are used to improve CT images of the
vertebra. An improved level set method, also called the edge- and region-based level set method
(ERBLS), is used to segment blurry vertebra CT images with discontinuous boundaries. Most CT images
of the vertebra contain a lot of noise, so the boundaries cannot be recognized with the naked eye;
the intensity of the CT images also varies drastically, which results in blurry images. For
diagnosis, these images therefore have to be improved so that small defects of the vertebra can be
identified.

In the history of vertebra segmentation, several methods have been used to improve the CT images:

 Statistical Shape Model (SSM)
In this method the mean shape of the vertebra is modeled using a set of n points. Fourier and
wavelet descriptors are used to model these shapes, and the ambiguity of the shape boundaries is
thereby overcome.
 Active Shape Model (ASM)
This is also a kind of SSM, but it iteratively finds a boundary while maintaining a shape
constraint. Although both SSM and ASM overcome the ambiguity problem of CT images, they fail to
represent the minor boundary changes that differ among patients.
 Active Appearance Model (AAM)
This method combines appearance information with the shape constraints of the CT images and provides
more robust results than the previous two methods. However, because texture patterns of the vertebra
differ among patients, its application to vertebra segmentation has been difficult.

Thus, although various methods have been proposed, none of them robustly extracts minor boundary changes from blurry vertebra CT images. The multiple level set method, however, is able to detect small changes even when the intensity of the CT images is inhomogeneous. It uses an edge detection function (edf) and a region detection function (rdf) to segment ambiguous or discontinuous boundaries effectively. A simple initialization scheme is also introduced for the level set function: the Otsu threshold is used to initialize it, which yields the regions of interest and multiple initial curves. Thresholding is essential in image segmentation, as it separates objects from the background, and the Otsu method obtains an optimal threshold automatically; the authors implemented it in Matlab 7.0. Not only does this algorithm obtain accurate segmentation results even when the intensity is highly inhomogeneous, it also avoids the costly re-initialization step.
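
A minimal sketch of this initialization step, assuming the CT slice is available as a 2-D NumPy array and using scikit-image's Otsu implementation rather than the authors' Matlab code, could look as follows; the binary step function (negative inside the foreground, positive outside) is a common way to seed a level set without re-initialization.

import numpy as np
from skimage.filters import threshold_otsu

def initial_level_set(ct_slice, c0=2.0):
    t = threshold_otsu(ct_slice)      # optimal threshold obtained automatically
    foreground = ct_slice > t         # regions of interest / multiple initial curves
    return np.where(foreground, -c0, c0).astype(float)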

The level set method used to segment vertebra CT images has been extended to process 3D images by working on slices of 2D CT images, which drastically improves the accuracy of the generated boundaries. Experimental results on both synthetic and real images show that the proposed method is robust and efficient.

5.2 Tumor Detection Using Probabilistic Neural Network (PNN) Techniques


Here, brain MRI images are used to detect brain tumors automatically with a probabilistic neural network model. Learning vector quantization (LVQ) is used in this model rather than the conventional probabilistic neural network formulation; the LVQ-based model has several advantages over the conventional one, which are described later in this section.

Segmentation is widely accepted for identifying brain tumors through digital image processing. Brain MRI images are segmented into different regions, and a separate criterion is given for each region; the parameters of these criteria include the intensity, texture, color and range of the selected region. Color-based segmentation using the K-means algorithm has proven to be one of the best schemes; its implementation is described earlier.

There are several conventional segmentation techniques. The most popular is image thresholding, which is very simple to implement but struggles to separate an object from the background when both have the same intensity. Edge-based segmentation relies on discontinuities in image attributes such as texture and color, while region-based segmentation finds similarities and differences in order to merge and split regions.

After segmentation, the edges need to be detected. The Canny edge detection algorithm can be used for this; it finds edges by minimizing the error rate and can be implemented in MATLAB.
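
For instance, the same edge-detection step can be sketched in Python with scikit-image's Canny implementation (an illustrative alternative to the MATLAB route; the file name is hypothetical).

from skimage import io, feature

segmented = io.imread('brain_slice.png', as_gray=True)  # output of the segmentation step
edges = feature.canny(segmented, sigma=2.0)             # boolean edge map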

There are several kinds of neural network architectures, such as the multilayer perceptron neural network, the radial basis function neural network and the probabilistic neural network. Among them, the PNN is an effective tool because its statistical foundation is Bayesian estimation theory. A PNN consists of three layers (a minimal sketch follows the list below):

1. Input Layer
2. Pattern Layer
3. Competitive Layer
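
The sketch below illustrates this three-layer structure with plain NumPy; the feature vectors, labels and smoothing parameter sigma are assumptions of this review, not values from the cited work.

import numpy as np

def pnn_classify(train_x, train_y, x, sigma=1.0):
    # Pattern layer: one Gaussian kernel per training sample.
    activations = np.exp(-np.sum((train_x - x) ** 2, axis=1) / (2.0 * sigma ** 2))
    # Summation: Parzen estimate of each class-conditional density.
    classes = np.unique(train_y)
    scores = np.array([activations[train_y == c].mean() for c in classes])
    # Competitive layer: Bayesian decision, pick the class with the largest density.
    return classes[np.argmax(scores)]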

Learning vector quantization is used to obtain decision boundaries in the input space. It likewise consists of an input layer, a competitive layer and an output layer.
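
A single LVQ1 update step, sketched here as an assumption about the general technique rather than the authors' exact training scheme, moves the winning prototype toward the input when the class labels agree and away from it otherwise.

import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, lr=0.05):
    # Competitive layer: the prototype closest to the input wins.
    winner = np.argmin(np.sum((prototypes - x) ** 2, axis=1))
    sign = 1.0 if proto_labels[winner] == y else -1.0
    prototypes[winner] += sign * lr * (x - prototypes[winner])
    return prototypes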

The PNN methodology gives better performance in terms of processor time, and it can be developed further in the future because of its extensible network structure. [4]

5.3 Statistical Shape Influence in Geodesic Active Contours


In this technique, a curve is represented as the zero level set of a higher-dimensional surface, and the entire surface is evolved to minimize a metric defined by curvature and image gradient. The approach to object segmentation extends geodesic active contours by integrating shape information into the evolution process. As an initial step, a statistical shape model is computed over a training set of curves. An active contour is then evolved both locally, based on image curvature and gradient, and globally, based on a maximum a posteriori estimate of shape and pose.

To segment an object in an image, a probabilistic approach is followed to capture shape information. A curve representation is used to obtain a shape model, and a probability density function is then defined over the parameters of that representation.

Curve Representation: Each curve is embedded as the zero level set of a higher-dimensional surface whose height is sampled at regular intervals over the training data. The embedding function is the widely used signed distance function, where each sample encodes the distance to the nearest point on the curve. The goal is to build a shape model over the distribution of surfaces obtained via the signed distance function.
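
Assuming each training shape is available as a binary mask, the signed distance embedding can be computed with SciPy's Euclidean distance transform, as in the sketch below (the cited work does not prescribe a particular implementation).

import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    mask = np.asarray(mask, dtype=bool)
    inside = distance_transform_edt(mask)    # distance to the boundary for pixels inside
    outside = distance_transform_edt(~mask)  # distance to the boundary for pixels outside
    return outside - inside                  # the zero level set lies on the curve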

When measuring the variation in shape of a certain part of an object across a population, it is important to compare like parts of the object; this is known as the correspondence problem. One solution is to ensure that comparisons are done consistently by generating all point-wise correspondences explicitly. Another is to align the training data set before performing any operation on it, such as variance calculation or comparison.

The curve representation and the probability distribution obtained above can then be folded into the segmentation process. The energy functional of the Snake model is used to obtain the geodesic active contour for segmentation: the surface evolves at every point perpendicular to the level sets, as a function of the curvature at that point and the image gradient. Shape information is incorporated by extracting the pose of the evolving curve with respect to the shape model, so at each step of the evolution the shape parameters and rigid pose parameters of the final curve are estimated using a maximum a posteriori method.
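
The curvature- and gradient-driven part of this evolution can be sketched with scikit-image's morphological geodesic active contour; the shape-prior (MAP shape and pose) term of the cited method is omitted here, and the image file name and parameter values are assumptions for illustration.

import numpy as np
from skimage import io
from skimage.segmentation import (inverse_gaussian_gradient,
                                  morphological_geodesic_active_contour)

image = io.imread('femur_slice.png', as_gray=True)
gimage = inverse_gaussian_gradient(image)   # edge-stopping function: small at strong gradients

# Initial surface: a small disk placed inside the object to be segmented.
init = np.zeros(image.shape, dtype=np.int8)
rr, cc = np.ogrid[:image.shape[0], :image.shape[1]]
init[(rr - image.shape[0] // 2) ** 2 + (cc - image.shape[1] // 2) ** 2 < 15 ** 2] = 1

# Evolve the zero level set under curvature smoothing and the image gradient term.
segmentation = morphological_geodesic_active_contour(gimage, 200, init_level_set=init,
                                                     smoothing=2, balloon=1)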

Finally, the surface is evolved, starting from a single point that lies inside the object to be segmented. At each step the evolution of the current surface is computed, which leads to the final segmentation based on local gradient and global shape information.

The segmentation results of this method were tested on 2-D MRI slices of the femur (thigh bone) and the corpus callosum (a neural fiber bundle in the brain).

For the femur experiment, the training set contained 18 adjacent slices of the same femur. In both examples, the same initialization point was used to start the evolution, and the MAP estimator of shape and pose locks onto the femur slice as the curve evolves. The corpus callosum training set contained 49 samples. Two corpus callosum samples were tested; initially the MAP estimate was incorrect, but the pose and shape parameters converge onto the boundary as the curve evolves. These segmentations converged within a minute on a 550 MHz Pentium.

In conclusion, this technique presents a method of integrating prior shape information into the geodesic active contour approach to medical image segmentation.

6 References
[1] A.C. Phadke, J. Joshi. Feature Extraction and Texture Classification in MRI

[2] A. Deda, A. Samojedny, R. Koprowski, S. Wilczyński, Z. Wróbel. (2013, Jun) Image analysis and processing methods in verifying the correctness of performing low-invasive esthetic medical procedures

[3] A. El-Baz, A. Farag, M. El-Ghar, R. Fahmi, S. Yuksel, T. Eldiasty, W. Miller. A new CAD system for
the evaluation of kidney diseases using DCE-MRI

[4] A.I. El-Fallah, G.E. Ford. (1997, May) Mean curvature evolution and surface area scaling in image filtering.

[5] A. Hayat Gondal and M.N.A. Khan, A Review of Fully Automated Techniques for Brain Tumor
Detection from MR Images [PDF].Available: http://www.mecs-press.org/ijmecs/ijmecs-v5-
n2/IJMECS-V5-N2-8.pdf.

[6] A. Korzyńska, A. Witkowska, R. Koprowski, J. Małyszek, W. Zieleźnik, W. Wójcik and Z. Wróbel. (2012, Nov) Influence of the measurement method of features in ultrasound images of the thyroid in the diagnosis of Hashimoto's disease. [online]. Available: http://www.biomedical-engineering-online.com/content/11/1/91

[6] A. Kumar, A. Yezzi, Jr., A.Tannenbaum, P. Olver, and S. Kichenassamy (1997, April) A Geometric
Snake Model for Segmentation of Medical Imagery [PDF] Available:
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=563665

[7] A.K. Sinha, N. Pradhan. (2010, Dec) Development of a composite feature vector for the detection of pathological and healthy tissues in FLAIR MR images of brain

[8] A. Tristan, E. Muñoz-Moreno, L. Cordero-Grande, M. Martín-Fernandez, R. Cardenes. UsimagTool: an Interactive Research Tool for Ultrasound Image Processing

[9] B. Fei, H. Akbari. (2012, Feb).Automatic 3D Segmentation of the Kidney in MR Images Using
Wavelet Feature Extraction and Probability Shape Model

[10] C. Chang, C. Chen, E. Chen, P. Chung and H. Tsai (1998, Jun) An Automatic Diagnostic System for
CT Liver Image Classification. [PDF] Available: http://www.umbc.edu/rssipl/pdf/BME_6_98.pdf.

[11] C. Li, C. Xu, C. Gui, and M.D. Fox: Distance regularized level set evolution and its application to
image segmentation

[12] D. Atkinson, G.Scidmore, M.B. Zawadzki, M. Detrick, and W.G. Bradley. (1996, July) Fluid-
Attenuated Inversion Recovery (FLAIR) for Assessment of Cerebral Infarction

[13] D.A. Dahab, S.S.A. Ghoniemy, G.M. Selim. (2012, Oct) Automated Brain Tumor Detection and Identification Using Image Processing and Probabilistic Neural Network Techniques. [PDF] Available: http://www.ijipvc.org/article/IJIPVCV1I201.pdf

[14] E. Grimson, O. Faugeras, and M. Leventon (2000, June) Statistical Shape Influence in Geodesic
Active Contours [PDF] Available: http://www.spl.harvard.edu/publications/item/viewpdf/846/3492

[15] F. Visser, J. Hendrikse, J.J.M. Zwanenburg, P.R. Luijten, and T. Takahara. (2009, Oct) Fluid
attenuated inversion recovery (FLAIR) MRI at 7.0 Tesla: comparison with 1.5 and 3.0 Tesla

[16] F. Yang, J. Gu, T. Wen, W. Qin, Y. Xie. (2012, Oct) A shape-optimized framework for kidney segmentation in ultrasound images using NLTV denoising and DRLSE

[17] G. Wang, H. Yin, J. Wang, L. Sun, M. Vannier, T. Yamada. (2004, Oct) ImageParser: a tool for
finite element generation from three-dimensional medical images

[18] H. Li, H. Zhang, J. Liu, Z. Zhu. (2011, Sep) An automated and simple method for brain MR image
extraction

[18] http://en.wikipedia.org/wiki/Tomography

[19] http://www.math.uni-hamburg.de/projekte/shape/curve_evolution.html

[20] J. Huang, F. Jian, H. Wu, H. Li. (2013, May). An improved level set method for vertebra CT image
segmentation. [online]. Available: http://www.biomedical-engineering-online.com/content/12/1/48

[21] M. Acheroy, W. Philips, A. Pizurica, and I. Lemahieu (2003 Mar) A Versatile Wavelet Domain
Noise Filtration Technique for Medical Imaging [PDF]. Available:
http://telin.ugent.be/~sanja/Papers/TMI0161.pdf.

[22] J.R. Jensen, K.L. Gammelmark, M.H. Pedersen, and S.I. Nikolov (2006, December). Synthetic
Aperture Ultrasound Imaging

[23] M.H.F. Zarandi, M. Izadi, M. Zarinbal. Systematic image processing for diagnosing brain tumors:
A Type-II fuzzy expert system approach.

[24] S.S. Asole and V.J. Nagalkar (2012, May) Brain tumor detection using digital image processing
based on soft computing [PDF] Available: www.bioinfo.in/uploadfiles/13366343493_3_1_JSIP.pdf

[25] U.S. Department of Health and Human Services. Annual report of the U.S. scientific registry of transplant recipients and the organ procurement and transplantation network: transplant data: 1990-1999. Bureau of Health Resources Department, Richmond, VA; 2000

[26] V.P.G.P. Rathi and S. Palani. Brain tumor MRI image classification with feature selection and extraction using linear discriminant analysis [PDF]. Available: http://arxiv.org/ftp/arxiv/papers/1208/1208.2128.pdf

7 Work Involvement

7.1 100282N
• Usimag Tool
• Liver
• Intro - Image Analysis and Processing Methods in Proving Correctness

7.2 100285C
• Flair MRI Images
• Using Digital Image Processing based on soft computing
• Using Composite Feature Vector
• Using Type II Fuzzy Logic
• Tumor Detection Using Probabilistic Neural Network (PNN) Techniques

7.3 100299X
• Using a Shape-Optimized Framework for Segmentation
• Using Wavelet Feature Extraction for 3D Segmentation
• Heart
• Statistical Shape Influence in Geodesic Active Contours
• 1st Draft - Document Preparation

7.4 100330L
• Introduction
• Intro - Image Acquisition
• DCE MRI
• Image Parser Tool
• Using a CAD System
• Vertebra CT Image Segmentation using an Improved Level Set Method
• 2nd Draft - Document Preparation

7.5 100609C
• MRI for Brain Tumor Detection
• Overview - Brain MR Image Feature Extraction and Segmentation
• Data Description
• Preprocessing
• Normalization
• Feature Extraction and segmentation
• Segmentation by thresholding
• Feature selection
• Forward Selection
• Backward Selection
• Classification
