
CHAPTER 1
INTRODUCTION
Imaging is an essential tool of medical science for visualizing the anatomical
structures of the human body, and medical image analysis for identification and
classification is an important task in many applications.

This work presents a method for the automatic classification of medical images
into two classes, Normal and Abnormal, based on image features, together with
automatic abnormality detection.

The aim of image pre-processing is to improve the quality of the data by
denoising (mean, median, Laplacian and Gaussian filters), enhancing the edges
of image structures (unsharp masking and the wavelet transform) and enhancing
image contrast (histogram equalization).
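As a sketch of the contrast-enhancement step mentioned above, the classic histogram-equalization remapping can be written in a few lines. This is an illustrative pure-Python version over a flattened 8-bit pixel list; the project itself would use MATLAB (e.g. histeq) rather than this code.

```python
# Sketch: histogram equalization for 8-bit gray levels.
# Maps each level through the normalized cumulative histogram.

def equalize_histogram(pixels, levels=256):
    """Remap gray levels so the histogram spreads over the full range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # classic remapping: round((cdf - cdf_min) / (n - cdf_min) * (levels - 1))
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# A low-contrast strip of pixels is stretched over the full range.
flat = [100, 100, 101, 101, 102, 102, 103, 103]
print(equalize_histogram(flat))   # → [0, 0, 85, 85, 170, 170, 255, 255]
```

Constant images (where n equals cdf_min) would need a guard in a real implementation.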

Various medical imaging techniques are available for brain scans, such as MRI,
CT and PET, from which radiologists extract diagnostic information. Computer-
aided diagnosis is gaining significant importance in image analysis.

An automated image analysis system that detects abnormalities in brain CT
scan images assists physicians in making better decisions.

The system is also helpful for technicians and medical students. The support
vector machine (SVM) is a data classification technique based on training and
testing: each training instance has a target value and a set of attributes, and the
goal is to produce a generalized model that predicts the target values of test
instances given only their attributes. SVM-based classification has been used by
many researchers.

Modifications of the conventional SVM have also been proposed and compared
with AdaBoost, ANN, Bayesian networks and many other classifiers.

Many researchers have used hybrid approaches, such as combinations of
wavelets, K-NN and SVM, for classifying abnormal and normal images. These
hybrid approaches have proved robust, fast, accurate and reliable in detection.

The advantages of CT include good detection of calcification, hemorrhage and
bony detail, plus lower cost, short imaging times and widespread availability.

1.1 OBJECTIVE OF THE PROJECT

 A new approach for automated diagnosis based on classification of
Computed Tomography (CT) images.

 Computed tomography is an imaging technique used for studying
brain images.

 This work presents a method for the automatic classification of medical
images into two classes, Normal and Abnormal, based on image features
and automatic abnormality detection.

 Classification of brain images is important in order to distinguish
normal brain images from those with abnormalities such as hematomas,
tumor, edema and concussion.

 An efficient algorithm is used for classifying the images.

1.2 SCOPE OF THE PROJECT

1. Implement a preprocessing procedure on the axial view of the entire set of CT
images.

2. Segment the different regions of interest, such as White Matter (WM), Gray
Matter (GM), Cerebrospinal Fluid (CSF) and the abnormal tumor region, from
the preprocessed images through computationally efficient and accurate
automatic segmentation algorithms.

3. Develop the texture feature extraction algorithms to extract texture features
from the tumor Region Of Interest (ROI) of the image to be segmented.

4. Develop the feature selection algorithms to select the optimal features from the
set of extracted texture features.

5. Implement the PNN and SVM classifiers to segment and classify the tumor by
using the set of selected texture features.

6. Analyze and validate the results with the assistance of an experienced
radiologist. Preprocessing of the brain CT images, region-of-interest
segmentation, feature extraction and feature selection are done using image
processing techniques.

1.3 OUTLINE OF THE PROJECT

 Pre-processing
 Feature extraction
 Classification

CHAPTER 2
LITERATURE REVIEW

2.1 TITLE : Abnormality Detection in Brain CT Images Using SVM

AUTHORS : Bhavna Sharma and Priyanka Mitra
YEAR : 2018

Automated detection of abnormalities in brain image analysis is very important
and is a prerequisite for planning and treatment of the disease. Computed
tomography is an imaging technique used for studying brain images.
Classification of brain images is important in order to distinguish normal brain
images from those with abnormalities such as hematomas, tumors, edema and
concussion. The proposed automated method identifies abnormalities in brain
CT images and classifies them using a support vector machine. The method
consists of three phases: preprocessing, feature extraction and classification. In
the first phase, preprocessing is performed on the brain CT images to remove
artifacts and noise. In the second phase, features are extracted from the images
using the gray level co-occurrence matrix (GLCM). In the final phase, the
extracted features are fed to SVM classifiers with different kernel functions,
which classify the images into normal and abnormal with different levels of
accuracy.

2.2 TITLE : Brain CT Images Classification with Deep Neural Networks
AUTHORS : Cheng Da, Haixian Zhang
YEAR : 2018

With the development of X-ray, CT, MRI and other medical imaging techniques,
doctors and researchers are provided with large numbers of medical images for
clinical diagnosis, which can largely improve the accuracy and reliability of
disease diagnosis. In this paper, a method for brain CT image classification with
deep neural networks is proposed. A deep neural network exploits many layers
of non-linear information for classification and pattern analysis. In the recent
literature, deep learning is defined as a kind of representation learning involving
a hierarchical architecture in which higher-level concepts are constructed from
lower-level ones. The techniques developed from deep learning have enriched
the main research areas of machine learning and artificial intelligence and are
already impacting a wide range of signal and information processing research.
Using normal and abnormal brain CT images, texture features are extracted as
the characteristic values of each image; a deep neural network is then used to
classify the CT images by brain health. Experimental results indicate that the
deep neural network performs well in this classification task, and that the
stability of the network increases significantly with its depth.

2.3 TITLE : A Classifier to Detect Abnormality in CT Brain Images

AUTHORS : Hassan Najadat, Yasser Jaffal

YEAR : 2011

Medical images are among the most important data sources available, since they
are commonly used by physicians to detect diseases. Extracting features from
brain CT images helps in building a machine classifier that is able to classify
new brain images without human interference. In this paper, we used a data set
of 25 CT brain images with different diagnoses and built a decision tree classifier
that is able to predict general abnormality in the human brain. The preprocessing
uses the three stages described by Peng et al. with modifications. The feature
extraction process mainly identifies the regions of interest and extracts analytical
data from those regions. The model was evaluated using the hold-out method
and N-fold evaluation. The results showed that the classifier is able to detect
abnormality even with a small training data set.

2.4 TITLE : Detection of Brain Tumor using Image Classification
AUTHORS : Shanata Giraddi, S V Vaishnavi
YEAR : 2017

Brain tumor is one of the most common diseases in India, spreading for many
reasons, the most common of which is identified as people's lifestyle. With
changing trends and technology, identification and treatment options are
improving, provided the tumor is detected early; early detection of any disease
helps in better treatment. Image processing techniques help in detecting tumor
images at an early stage. With the help of scanned MRI images it is possible to
detect a tumor and its severity. In this paper, we propose a system to classify
images into two groups, Malignant and Benign. The proposed system is based
on second-order texture features and an SVM classifier. Various second-order
features such as Energy, Entropy, Homogeneity and Correlation are used to
build the system. The work is carried out in the following steps: preprocessing,
which includes feature extraction, followed by training on an SVM classifier
with the extracted features and finally testing the SVM classifier with various
kernels. With a linear kernel, the highest sensitivity, specificity and accuracy
obtained are 80%, 90% and 80% respectively. The results classify an image with
a tumor as Malignant or Benign and illustrate the robustness of the system in
identifying and classifying brain tumors.

CHAPTER 3

SYSTEM ANALYSIS

3.1 EXISTING SYSTEM


 In the existing solution for extracting a brain tumor from CT scan images,
the tumor region is detected from the CT scan of the brain. The proposed
solution does the same, informing the user about the details of the tumor
using basic image processing techniques.
 The methods include noise removal, feature extraction and classification,
and erosion and dilation to obtain the background. Subtracting the
background and its negative from different sets of images yields the
extracted tumor image. This process helps in identifying the size, shape
and position of the tumor.
 It helps the medical staff as well as the patient to understand the seriousness
of the tumor with the help of different color labeling for different levels
of elevation.

3.1.1 DRAWBACKS OF EXISTING SYSTEM

 The CT images acquired from the CT machine give a two-dimensional
cross-section of the brain.
 However, the acquired image does not isolate the tumor.
 Thus, image processing is needed to determine the severity of the
tumor, which depends on its size.

3.2 PROPOSED SYSTEM

 The algorithm is a set of fundamental image processing procedures.

 The basic idea behind the proposed algorithm is a set of noise-removal
functions accompanied by morphological operations that yield a clear
image of the tumor after passing through a high-pass filter.
 The set of morphological operations used determines the clarity and quality
of the tumor image.
 A GUI created in MATLAB offers the proposed application of
extracting the tumor from a selected brain image.
 The GUI also contains options for zoom-in, zoom-out, a data cursor for
coordinates, and printing the selected image.
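The morphological operations mentioned in the points above can be sketched in a few lines. This is an illustrative pure-Python version of 3x3 binary dilation and erosion (rough analogues of MATLAB's imdilate and imerode) composed into a closing; border handling is simplified and the structuring element is fixed, unlike the project's actual MATLAB code.

```python
# Sketch: binary dilation, erosion and closing with a 3x3 square
# structuring element. Closing = dilation followed by erosion; it
# bridges small gaps between nearby foreground regions.

def dilate(img):
    """Pixel becomes 1 if any in-bounds 3x3 neighbor is 1."""
    h, w = len(img), len(img[0])
    return [[1 if any(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0 for x in range(w)] for y in range(h)]

def erode(img):
    """Pixel stays 1 only if all in-bounds 3x3 neighbors are 1."""
    h, w = len(img), len(img[0])
    return [[1 if all(img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if 0 <= y + dy < h and 0 <= x + dx < w)
             else 0 for x in range(w)] for y in range(h)]

def close_binary(img):
    return erode(dilate(img))   # morphological closing

# A one-pixel gap between two blobs is bridged by closing.
img = [[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 0, 1, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]]
print(close_binary(img)[2][3])   # prints 1: the gap pixel is filled
```

Opening (erosion followed by dilation) is the dual operation and removes small spurious objects instead, as the source code in Chapter 9 does with imopen.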

CHAPTER 4
SYSTEM SPECIFICATION

4.1 HARDWARE SPECIFICATION

 RAM : 4GB

 Hard-disk : 1GB

 Keyboard : Standard keyboard

 Processor : Intel Core i3

4.2 SOFTWARE SPECIFICATION

 OS : Windows 8/10

 Tool : MATLAB

CHAPTER 5
SYSTEM DESIGN

5.1 ACTIVITY DIAGRAM


An activity diagram is another important diagram in UML for describing
the dynamic aspects of a system. It is essentially a flow chart representing the
flow from one activity to another, where an activity can be described as an
operation of the system.

[Activity diagram: Original image → Pre-processing → Feature extraction → Classification → Exit]

5.2 DATA FLOW DIAGRAM

A data-flow diagram (DFD) is a way of representing the flow of data
through a process or a system (usually an information system). The DFD also
provides information about the outputs and inputs of each entity and of the
process itself. A data-flow diagram has no control flow: there are no decision
rules and no loops. Specific operations based on the data can be represented by
a flowchart.

[Data flow diagram: CT scan (original image) → Pre-processing → Feature extraction → Classification]

CHAPTER 6

MODULES DESCRIPTION

This system consists of three modules. They are:

 Preprocessing

 Feature Extraction

 Classification

6.1 PREPROCESSING

 Preprocessing of the images is a very important step and a prerequisite
for ensuring a high accuracy level in the subsequent steps. Brain CT scan
images carry the patient's name, age, marks, etc., and those marks should
also be removed in the pre-processing step.
 We first used median filtering to remove noise from the image.
 After removing noise, the skull portion must be removed from the brain
images so that low pixel values do not disturb the operation. We applied
a brain extraction algorithm to the abnormal brain CT image to extract
the brain.
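The median-filtering step above can be sketched as follows. This is an illustrative pure-Python 3x3 median filter (the project itself would use MATLAB, e.g. medfilt2); border pixels are simply left unchanged.

```python
# Sketch: 3x3 median filter for impulse-noise removal.
# Each interior pixel becomes the median of its 3x3 neighborhood.

def median_filter_3x3(img):
    """img: 2-D list of gray values; returns a filtered copy."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]   # middle of the 9 sorted values
    return out

# A single salt-noise spike (255) in a smooth region is removed.
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_3x3(img)[1][1])   # prints 10
```

Unlike a mean filter, the median discards the outlier entirely instead of averaging it into its neighbors, which is why it is preferred for salt-and-pepper noise.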

6.2 FEATURE EXTRACTION

• The purpose of feature extraction is to reduce the original data set by
measuring certain properties of features that distinguish one input pattern
from another.

• Features are the characteristics of the objects of interest. They are used as
inputs to classifiers, which assign them to the class they represent. We
extracted the following nine features from the segmented image. Intensity-
based features are first-order statistics that depend only on the individual
pixel values of the image.

• Mean is the average value of the array:

µ = (1/N) ∑_{i=1}^{N} l_i (1)

• Variance is one of several descriptors of a probability distribution,
describing how far the values lie from the mean:

σ² = (1/N) ∑_{i=1}^{N} (l_i − µ)² (2)

• Skewness is a measure of the asymmetry of the data around the sample
mean:

Skewness = (∑_{i=1}^{N} (l_i − µ)³) / ((N − 1) σ³) (3)

where µ is the mean, σ is the standard deviation and N is the number of data points.

• Kurtosis is a measure of how outlier-prone a distribution is:

Kurtosis = (∑_{i=1}^{N} (l_i − µ)⁴) / ((N − 1) σ⁴) (4)

where µ is the mean, σ is the standard deviation and N is the number of data points.
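Equations (1) to (4) translate directly into code. The following pure-Python sketch computes the four intensity features from a list of pixel values; the function and variable names are ours, for illustration only.

```python
# Sketch: the four first-order intensity features, coded from eqs. (1)-(4).
import math

def intensity_features(l):
    """l: list of pixel intensities; returns (mean, variance, skew, kurtosis)."""
    n = len(l)
    mu = sum(l) / n                                                # eq. (1)
    var = sum((x - mu) ** 2 for x in l) / n                        # eq. (2)
    sigma = math.sqrt(var)
    skew = sum((x - mu) ** 3 for x in l) / ((n - 1) * sigma ** 3)  # eq. (3)
    kurt = sum((x - mu) ** 4 for x in l) / ((n - 1) * sigma ** 4)  # eq. (4)
    return mu, var, skew, kurt

mu, var, skew, kurt = intensity_features([2, 4, 4, 4, 5, 5, 7, 9])
print(mu, var)   # prints 5.0 4.0
```

Note that equations (3) and (4) divide by (N − 1) while (1) and (2) divide by N; the sketch follows the text as written.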

• Texture features are described by the Gray Level Co-occurrence Matrix
(GLCM), a matrix of relative frequencies Pθ,d(i, j). It describes how
frequently two pixels with gray levels i and j appear in the window
separated by a distance d in direction θ. Here P(i, j) is the [i, j]th entry in
a gray-tone spatial dependence matrix, and Ng is the number of distinct
gray levels in the quantized image.

• All features obtained from the GLCM are functions of the distance d and
the orientation θ; thus, if an image is rotated, the feature values will differ.
In practice, for each d the resulting values for the four directions are
averaged. Four co-occurrence matrices are constructed in the four spatial
orientations horizontal, right diagonal, vertical and left diagonal
(0°, 45°, 90°, 135°), and a fifth matrix is constructed as the mean of the
preceding four. This generates texture features that are rotation invariant:

• Contrast is a measure of local gray-level variations, taking high values for
images of high contrast:

Contrast = ∑_{i,j} |i − j|² p(i, j) (5)

• Inverse Difference Moment (Homogeneity) is a measure that takes high
values for low-contrast images:

Homogeneity = ∑_{i,j} p(i, j) / (1 + |i − j|) (6)

• Angular Second Moment (ASM) is a feature that measures the smoothness
of the image:

ASM = ∑_{i,j} p(i, j)² (7)

• Correlation Coefficient is a measure of how a pixel is correlated to its
neighbor over the whole image:

CorrCoeff = (∑_{i,j} i·j·p(i, j) − µx µy) / (σx σy) (8)

where µx, µy, σx and σy are the means and standard deviations of px and
py, the partial probability density functions.

• Entropy is a measure of randomness, taking low values for smooth
images:

Entropy = −∑_i ∑_j p(i, j) log p(i, j) (9)
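The definitions above can be tied together in a short sketch: build the normalized co-occurrence matrix for one (distance, direction) offset, then evaluate equations (5), (6), (7) and (9) on it. This is an illustrative pure-Python version, not the project's MATLAB implementation; the tiny test image and all names are ours.

```python
# Sketch: GLCM for one offset, plus the texture features defined above.
import math

def glcm(img, dy, dx, levels):
    """Relative frequencies p(i, j) of gray pairs separated by (dy, dx)."""
    h, w = len(img), len(img[0])
    p = [[0.0] * levels for _ in range(levels)]
    total = 0
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                p[img[y][x]][img[y2][x2]] += 1
                total += 1
    return [[c / total for c in row] for row in p]

def texture_features(p):
    n = len(p)
    pairs = [(i, j) for i in range(n) for j in range(n)]
    contrast = sum(abs(i - j) ** 2 * p[i][j] for i, j in pairs)       # eq. (5)
    homogeneity = sum(p[i][j] / (1 + abs(i - j)) for i, j in pairs)   # eq. (6)
    asm = sum(p[i][j] ** 2 for i, j in pairs)                          # eq. (7)
    entropy = -sum(p[i][j] * math.log(p[i][j])
                   for i, j in pairs if p[i][j] > 0)                   # eq. (9)
    return contrast, homogeneity, asm, entropy

img = [[0, 0, 1],
       [0, 0, 1],
       [2, 2, 2]]
p = glcm(img, 0, 1, levels=3)      # horizontal direction (θ = 0°, d = 1)
print(texture_features(p))
```

For the rotation-invariant features described above, this would be repeated for the offsets (0, 1), (−1, 1), (−1, 0) and (−1, −1) and the results averaged.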

6.3 CLASSIFICATION

Classification organizes data into different groups on the basis of their features
or properties. The process consists of a training and a testing phase. In the
training phase, properties of the images, called features, are isolated and a unique
description of each classification category is created. In the testing phase, these
features are used to classify images into categories.

The accuracy of this classification method must be high because diagnosis and
treatment are based on this categorization. We used SVM, a binary classifier
based on supervised learning, capable of delivering high classification
performance [10].

The support vector machine rests on two fundamental operations: first,
nonlinear mapping of an input vector into a high-dimensional feature space that
is hidden from both the input and the output; second, construction of an optimal
hyperplane or surface to separate the training data by maximizing the margin
between the classes. Training ends with the definition of a decision surface that
divides the space into two subspaces.
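The margin idea can be illustrated with a toy example. The sketch below trains a *linear* SVM by subgradient descent on the regularized hinge loss; this is not the project's classifier (which uses kernel SVMs, e.g. with an RBF kernel, via MATLAB tooling), and the two-feature data set is invented purely for illustration.

```python
# Toy sketch: linear SVM via subgradient descent on the hinge loss,
# showing how training yields a decision surface f(x) = w.x + b.

def train_linear_svm(xs, ys, lam=0.01, lr=0.1, epochs=500):
    """xs: feature vectors; ys: labels in {-1, +1} (e.g. normal/abnormal)."""
    d = len(xs[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            if margin < 1:   # inside the margin: hinge-loss subgradient step
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:            # outside the margin: only the regularizer acts
                w = [wi * (1 - lr * lam) for wi in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Hypothetical toy features: the "abnormal" class (+1) has larger values.
xs = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.3], [0.7, 0.9], [0.8, 0.9], [0.9, 0.8]]
ys = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(xs, ys)
print([predict(w, b, x) for x in xs])   # recovers the training labels
```

A kernel SVM performs the same margin maximization after the nonlinear mapping described above, which is what allows it to separate classes that are not linearly separable in the original feature space.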

CHAPTER 7

SYSTEM TESTING

7.1 SYSTEM TESTING


The purpose of testing is to discover errors. Testing is the process of trying
to discover every conceivable fault or weakness in a work product. It provides a
way to check the functionality of components, subassemblies, assemblies and/or
the finished product. It is the process of exercising software with the intent of
ensuring that the software system meets its requirements and user expectations
and does not fail in an unacceptable manner. There are various types of test; each
type addresses a specific testing requirement.

TEST OBJECTIVES
 All field entries must work properly.
 Pages must be activated from the identified link.
 The entry screen, messages and responses must not be delayed.

7.2 TYPE OF TESTING


 Unit testing
 Integration testing
 System testing
 Acceptance testing
 Functional testing

UNIT TESTING

Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs. All decision branches and internal code flow should be validated. It is
the testing of individual software units of the application and is done after the
completion of an individual unit, before integration. This is structural testing
that relies on knowledge of the unit's construction and is invasive.

In this project, the modules, fields and individual data items were tested
separately against the efficiency and custom needs.

INTEGRATION TESTING

Integration tests are designed to test integrated software components to
determine whether they actually run as one program. Testing is event driven and
is more concerned with the basic outcome of screens or fields, checking that the
combination of components is correct and consistent. Integration testing is
specifically aimed at exposing the problems that arise from the combination of
components.

In this testing, the modules are linked together by passing data from the fields
of one module to another.

SYSTEM TESTING

System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results.
An example of system testing is the configuration-oriented system integration
test. System testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.

This testing verifies, at the level of the code, that the program performs its
functions using the given methods and data. It comprises two types, namely:

 White box testing


 Black box testing

WHITE BOX TESTING

White box testing is testing in which the software tester has knowledge of
the inner coding, structure and language of the software. It is also known as
clear box, open box, glass box, transparent box or code-based testing. It is a
software testing method in which the internal structure of the item being tested
is known to the tester.

The method is so named because, in the eyes of the tester, the software program
is like a white or transparent box inside which one can see clearly.

BLACK BOX TESTING

Black box testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests, like
most other kinds of tests, must be written from a definitive source document,
such as a specification or requirements document. It is testing in which the
software under test is treated as a black box: you cannot "see" into it. The test
provides inputs and responds to outputs without considering how the software
works.

CHAPTER 8

FUTURE ENHANCEMENT AND CONCLUSION

CONCLUSION

We conclude that the automated system, using the GLCM method and an SVM
classifier with an RBF kernel, gives 98% accuracy. It would be highly useful as
a diagnostic tool for radiologists in the automated classification of brain CT
images into normal and abnormal. The proposed system classifies images into
two classes only, i.e. normal and abnormal.

FUTURE ENHANCEMENT

 We will focus on multiple-class classification of brain CT images and
on the detection of various abnormalities such as hemorrhage, edema
and tumor.
 To find the most efficient classifier, we will compare SVM with
different deep learning algorithms, so that more features can be
incorporated and different abnormalities can be detected.

CHAPTER 9
SOURCE CODE AND SCREENSHOTS

9.1 SOURCE CODE

PREPROCESSING

clc, clearvars, close all;


%% image acquisition and converting to grayscale
f = rgb2gray(imread('sah1.jpg'));

figure, imshow(f), title('Gray Scaled Image');


%% pre-process
% gamma correction to light up the image
f = imadjust(f,[],[],0.5);

figure, imshow(f), title('Image after Gamma Correction ');

% smoothing the image to remove noise


mask = ones(5) / (25);
f_smooth = imfilter(f,mask);

figure, imshow(f_smooth), title('Smoothed Image');


%% edge detection
% Edge detection is done by using Sobel operator.
% a hands-on way
%
% sobelX = [-1 -2 -1; 0 0 0; 1 2 1];

% sobelY = sobelX';
% gx = imfilter(f_smooth, sobelX);
% gy = imfilter(f_smooth, sobelY);
% f_edge = abs(gx) + abs(gy);
%
% an alternative way
f_edge = edge(f_smooth, 'Sobel');

figure, imshow(f_edge), title('Edges by Sobel');


%% thresholding
% Since we choose the second way at previous part, thresholding is not
% needed. Comment thresholding part then run the code again.
f_edge = im2uint8(f_edge);
level = graythresh(f_edge);
f_th = imbinarize(f_edge, level);

figure, imshow(f_th), title('Image Thresholded');


%% morphological process
% creating structuring elements
se1 = strel('rectangle', [15 40]);
se2 = strel('rectangle', [15 10]);

% Closing is applied to close spaces between correlated parts of edges.


f_close = imclose(f_th,se1);

figure, imshow(f_close), title('Closed Image');

% Opening is applied to remove wrong connections and unwanted objects.


f_open = imopen(f_close,se2);

figure, imshow(f_open), title('Opened Image');
%% connected component analysis
cc1 = bwconncomp(f_open);
% Feature extracting so as to find interested components. Notice that the
% 'eccentricity' property is not used on this application but added into
% code.
stats1 = regionprops(cc1, {'Area','Eccentricity'});
area = [stats1.Area];
sumArea = 0;
% ecc = [stats1.Eccentricity];
% sumEcc = 0;

for i = 1 : cc1.NumObjects
sumArea = sumArea + area(i);
% sumEcc = sumEcc + ecc(i);
end

avArea= sumArea / cc1.NumObjects;


% avEcc = sumEcc / cc1.NumObjects;

% Take the components which have an area larger than 25 percent of


% the average.
idx = find([stats1.Area] > (avArea * 1 / 4));
BW = ismember(labelmatrix(cc1), idx);

figure, imshow(BW), title('Binary Image of Interested Components');

% label binary image to get 'BoundingBox' properties of interested
% components
cc2 = bwconncomp(BW);
stats2 = regionprops(cc2, 'BoundingBox');
%% representation
figure, imshow(f), title('Detected Regions');

% represent each detected component with a rectangular


for i = 1 : cc2.NumObjects
    rectangle('Position', stats2(i).BoundingBox, ...
        'EdgeColor','r','LineWidth',2);
end

FEATURE EXTRACTION

function featureout=feature_extraction(g1,g2,g3,g4)

asm1=sum(sum(g1.^2));
asm2=sum(sum(g2.^2));
asm3=sum(sum(g3.^2));
asm4=sum(sum(g4.^2));

asm=mean([asm1 asm2 asm3 asm4]);

featureout(1,1)=asm;

ctr1=contrast_func(g1);

ctr2=contrast_func(g2);
ctr3=contrast_func(g3);
ctr4=contrast_func(g4);

contrast=mean([ctr1 ctr2 ctr3 ctr4]);


featureout(1,2)=contrast;

e1=entropy_func(g1);
e2=entropy_func(g2);
e3=entropy_func(g3);
e4=entropy_func(g4);

etr=mean([e1 e2 e3 e4]);
featureout(1,3)=etr;

idm1=idm_func(g1);
idm2=idm_func(g2);
idm3=idm_func(g3);
idm4=idm_func(g4);
idm=mean([idm1 idm2 idm3 idm4]);
featureout(1,4)=idm;
diss1=dissm_func(g1);
diss2=dissm_func(g2);
diss3=dissm_func(g3);
diss4=dissm_func(g4);

dissm=mean([diss1 diss2 diss3 diss4]);


featureout(1,5)=dissm;

v1=var_func(g1);
v2=var_func(g2);
v3=var_func(g3);
v4=var_func(g4);
varval=mean([v1 v2 v3 v4]);
featureout(1,6)=varval;
corr1=correlation_func(g1);
corr2=correlation_func(g2);
corr3=correlation_func(g3);
corr4=correlation_func(g4);
corrval=mean([corr1 corr2 corr3 corr4]);
featureout(1,7)=corrval;
pval1=prominence_func(g1);
pval2=prominence_func(g2);
pval3=prominence_func(g3);
pval4=prominence_func(g4);
proval=mean([pval1 pval2 pval3 pval4]);
featureout(1,8)=proval;
sval1=shade_func(g1);
sval2=shade_func(g2);
sval3=shade_func(g3);
sval4=shade_func(g4);
shadeval=mean([sval1 sval2 sval3 sval4]);
featureout(1,9)=shadeval;
intval1=interia_func(g1);
intval2=interia_func(g2);
intval3=interia_func(g3);
intval4=interia_func(g4);

intval=mean([intval1 intval2 intval3 intval4]);
featureout(1,10)=intval;
de1=diffentropy_func(g1);
de2=diffentropy_func(g2);
de3=diffentropy_func(g3);
de4=diffentropy_func(g4);
diffetr=mean([de1 de2 de3 de4]);
featureout(1,11)=diffetr;
dv1=diffvar_func(g1);
dv2=diffvar_func(g2);
dv3=diffvar_func(g3);
dv4=diffvar_func(g4);
diffvarval=mean([dv1 dv2 dv3 dv4]);
featureout(1,12)=diffvarval;

TUMOR DETECTION

function varargout = tumorComp(varargin)


% TUMORCOMP MATLAB code for tumorComp.fig
% TUMORCOMP, by itself, creates a new TUMORCOMP or raises the existing
% singleton*.
%
% H = TUMORCOMP returns the handle to a new TUMORCOMP or the handle to
% the existing singleton*.
%

% TUMORCOMP('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in TUMORCOMP.M with the given input arguments.
%
% TUMORCOMP('Property','Value',...) creates a new TUMORCOMP or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before tumorComp_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to tumorComp_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES

% Edit the above text to modify the response to help tumorComp

% Last Modified by GUIDE v2.5 08-May-2019 16:55:14

% Begin initialization code - DO NOT EDIT


gui_Singleton = 1;
gui_State = struct('gui_Name', mfilename, ...
'gui_Singleton', gui_Singleton, ...
'gui_OpeningFcn', @tumorComp_OpeningFcn, ...
'gui_OutputFcn', @tumorComp_OutputFcn, ...
'gui_LayoutFcn', [] , ...
'gui_Callback', []);

if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end

if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT

% --- Executes just before tumorComp is made visible.


function tumorComp_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to tumorComp (see VARARGIN)

% Choose default command line output for tumorComp


handles.output = hObject;

% Update handles structure


guidata(hObject, handles);

% UIWAIT makes tumorComp wait for user response (see UIRESUME)


% uiwait(handles.figure1);

% --- Outputs from this function are returned to the command line.
function varargout = tumorComp_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

% Get default command line output from handles structure


varargout{1} = handles.output;

% --- Executes on button press in pushbutton2.


function pushbutton2_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton2 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

global img1 img2

[path, nofile] = imgetfile();

if nofile
msgbox(sprintf('Image not found!'),'Error', 'warning');
return
end

img1 = imread(path);
img1 = im2double(img1);

img2 = img1;

axes(handles.axes1);
imshow(img1)

title('\fontsize{20}\color[rgb]{0.996, 0.592, 0.0} Brain CT')

% --- Executes on button press in pushbutton3.


function pushbutton3_Callback(hObject, eventdata, handles)
% hObject handle to pushbutton3 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)

global img1
axes(handles.axes);
bw = im2bw(img1,0.7);
label = bwlabel(bw);

stats = regionprops(label, 'solidity','Area');


density = [stats.Solidity];
area = [stats.Area];
high_dense_area = density >0.5;

max_area = max(area(high_dense_area));
tumor_label = find(area == max_area);
tumor = ismember(label,tumor_label);

se = strel('square',5);
tumor = imdilate(tumor,se);

Bound = bwboundaries(tumor, 'noholes');

imshow(img1);
hold on

for i = 1 : length(Bound)
    plot(Bound{i}(:,2), Bound{i}(:,1), 'y', 'linewidth', 1.75)
end

title('\fontsize{20}\color[rgb]{0.996, 0.692, 0.0} Tumor Detected!');

hold off

axes(handles.axes)

SVM PROCESS

function FINAL_OUT=svm_process(train_data,train_class,test_data)

c = 1000;
lambda = 1e-7;
kerneloption= 2;
kernel='gaussian';

verbose = 0;
nbclass=9;
[xsup,w,b,nbsv]=svmmulticlassoneagainstall(train_data,train_class,nbclass,c,lambda,kernel,kerneloption,verbose);
[ypred,maxi] = svmmultival(test_data,xsup,w,b,nbsv,kernel,kerneloption);
FINAL_OUT=ypred';

FINAL
clc
clear all
close all

% filepath=uigetdir(cd,'Select Image folder');


% fileloc=dir(filepath);
% index=1;
%
% wresize=256;
%
% for i = 3:length(fileloc)
%
% filename=fileloc(i).name;
% filepath1=[filepath,'\', filename];
% fileloc1=dir(filepath1);
%
% for k1=3:length(fileloc1)
% filename1=fileloc1(k1).name
%
%
% if(strcmp(filename1,'EDH'));

%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% subplot(1,2,1),imshow(imgval);
%
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 1];
%
%
%
% file_data{index}=filepath3;
%

%
% index=index+1;
%
% end
%
%
% elseif(strcmp(filename1,'ICH'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 2];

%
% file_data{index}=filepath3;
%
%
%
% index=index+1;
%
% end
%
%
% elseif(strcmp(filename1,'Ischemic stroke'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%

%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 3];
% file_data{index}=filepath3;
% index=index+1;
%
% end
%
% elseif(strcmp(filename1,'Normal'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 4];

% file_data{index}=filepath3;
%
% index=index+1;
%
% end
%
% elseif(strcmp(filename1,'SAH'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 5];
%

% file_data{index}=filepath3;
%
% index=index+1;
%
% end
%
%
% elseif(strcmp(filename1,'SDH'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 6];

%
% file_data{index}=filepath3;
% index=index+1;
%
% end
%
% elseif(strcmp(filename1,'MS'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 7];
% file_data{index}=filepath3;

%
% index=index+1;
%
% end
%
% elseif(strcmp(filename1,'Normalmri'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 8];
%
% file_data{index}=filepath3;

%
% index=index+1;
%
%
% end
%
% elseif(strcmp(filename1,'Tumor'));
%
% filepath2=[filepath1,'\', filename1];
% fileloc2=dir(filepath2);
% for k2=3:length(fileloc2)
% filename2=fileloc2(k2).name;
% filepath3=[filepath2,'\', filename2];
% img=(imread(filepath3));
% dim=ndims(img);
% if(dim==3)
% img=rgb2gray(img);
% end
% imgval=(imresize(img,[wresize wresize]));
%
% [final_out dwtimg]=wavelet_process(imgval,4);
% feature_final=feature_text_process(final_out);
% subplot(1,2,1),imshow(imgval);
% subplot(1,2,2),imshow(dwtimg);
% pause(0.1);
%
% final_feature_data(index,1:length(feature_final)+1)=[feature_final 9];
% file_data{index}=filepath3;

% index=index+1;
%
% end
%
% end
%
% end
%
%
% end
%
% save feature_data.mat final_feature_data file_data

load feature_data.mat

res_str{1}='CT EDH';
res_str{2}='CT ICH';
res_str{3}='CT Ischemic stroke';
res_str{4}='CT Normal';
res_str{5}='CT SAH';
res_str{6}='CT SDH';
% Classes 7-9 follow the labels assigned in the commented feature-extraction
% code above (MS, Normalmri, Tumor); without these entries, indexing res_str
% with a predicted class of 7-9 would fail.
res_str{7}='MRI MS';
res_str{8}='MRI Normal';
res_str{9}='MRI Tumor';

final_feature_data=[final_feature_data;final_feature_data;final_feature_data];
train_data=final_feature_data(:,1:end-1);
train_class=final_feature_data(:,end);

[rr cc]=size(final_feature_data);

test_data=final_feature_data(:,1:end-1);

no_of_feat_red=200;
train_data=pca_process(train_data,no_of_feat_red);

test_data=pca_process(test_data,no_of_feat_red);
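`pca_process` is a project-specific helper whose source is not listed here. Assuming it performs standard principal component analysis (center the features, then project onto the top `no_of_feat_red` principal directions), the reduction can be sketched in Python/NumPy as follows (illustrative only; the project code is MATLAB):

```python
import numpy as np

def pca_reduce(X, k):
    """Standard PCA: center the rows of X, then project them onto the
    top-k principal directions (right singular vectors of the data)."""
    Xc = X - X.mean(axis=0)                          # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # (n_samples, k)

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 5))   # 10 samples, 5 features
print(pca_reduce(X, 2).shape)      # (10, 2)
```

Note that a faithful pipeline would fit the projection on the training data only and reuse it for the test data; calling `pca_process` separately on each set, as above, fits two different projections.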

for kw=1:length(train_class)
confin_ori(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],train_class(kw));
end
train_num=[25 25 22 9 25 25 1 25 100]*3;
res_out=knnclassify(test_data,train_data,train_class);
result_knn=floor(res_out);
locx=find(result_knn<min(train_class) | result_knn>max(train_class));
result_knn(locx)=1;
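`knnclassify` (a Bioinformatics Toolbox function, deprecated in newer MATLAB releases in favor of `fitcknn`) labels each test row with the class of its nearest training row(s); its default is k = 1. A minimal 1-nearest-neighbour sketch in Python/NumPy, using hypothetical toy data:

```python
import numpy as np

def knn_1(test, train, labels):
    """1-nearest-neighbour: give each test row the label of the closest
    training row under Euclidean distance."""
    # Pairwise distances, shape (n_test, n_train)
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

train = np.array([[0.0, 0.0], [5.0, 5.0]])   # two toy training points
labels = np.array([1, 2])                    # their class labels
test = np.array([[0.5, 0.2], [4.0, 6.0]])
print(knn_1(test, train, labels))  # [1 2]
```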
for k7=1:length(result_knn)
FINAL_KNN_RESULT{k7}=res_str{result_knn(k7)};
end
FINAL_KNN_RESULT=FINAL_KNN_RESULT % no semicolon: echo predicted labels to the command window
[Sensitivity_KNN KNN_CONFUSION]=RESULT_PROP_FUNCTION(FINAL_KNN_RESULT,res_str,train_num,confin_ori)
paprameter_cal_process(KNN_CONFUSION)
for kw=1:length(result_knn)
confin_out(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],result_knn(kw));
end
figure,plotconfusion(confin_out',confin_ori');

title('KNN');
figure,plotroc(confin_out',confin_ori');
title('KNN');
grid on;
SVM_OUT=svm_process(train_data,train_class,test_data);
for k7=1:length(SVM_OUT)

FINAL_SVM_RESULT{k7}=res_str{SVM_OUT(k7)};
end
FINAL_SVM_RESULT=FINAL_SVM_RESULT % no semicolon: echo predicted labels to the command window

[Sensitivityval_SVM SVM_CONFUSION]=RESULT_PROP_FUNCTION(FINAL_SVM_RESULT,res_str,train_num,confin_ori)
paprameter_cal_process(SVM_CONFUSION)
for kw=1:length(SVM_OUT)
confin_out(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],SVM_OUT(kw));
end
figure,plotconfusion(confin_out',confin_ori');
title('SVM');
figure,plotroc(confin_out',confin_ori');
title('SVM');
grid on;

9.2 SCREENSHOTS

GRAYSCALE IMAGE

IMAGE AFTER GAMMA CORRECTION

NOISE REMOVAL

TUMOR DETECTION

CHAPTER 10
REFERENCES

1. Chaurasia B D (2008) Human Anatomy: Regional & Applied Dissection & Clinical, Vol 3: Head, Neck & Brain. CBS Publishers.
2. Dhawan A P, Huang H K, Kim D S (2008) Principles and Advanced Methods in Medical Imaging and Image Analysis. World Scientific.
3. Duncan J S, Ayache N (2000) Medical image analysis: Progress over two decades and the challenges ahead. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1):85-106.
4. Kharrat A, Gasmi K, Mohamed B (2010) A hybrid approach for automatic classification of brain MRI using genetic algorithm and support vector machine. Leonardo Journal of Sciences, 9(17):71-82.
5. Sandeep C, Patnaik L, Jaganathan (2006) Classification of MR brain images using wavelets as input to SVM and neural network. Biomedical Signal Processing and Control, 1:86-92.
6. El-Sayed A El-Dahshan, Abdel-Badeeh M Salem, Tamer H Younis (2009) A hybrid technique for automatic MRI brain image classification. Informatica, LIV(1).
7. Smith S M (2002) Fast robust automated brain extraction. Human Brain Mapping, 17:143-155.
8. Sharma B, Venugopalan K (2012) Automated segmentation of brain CT images. International Journal of Computer Applications, 40(10):1-4.
9. Haralick R M, Shanmugam K (1973) Textural features for image classification. IEEE Transactions on Systems, Man and Cybernetics, 3(6):610-621.
10. Burges C J (1998) A tutorial on support vector machines for pattern recognition. Data Mining and Knowledge Discovery, 2:121-167.

