CHAPTER 1
INTRODUCTION
Imaging is an essential tool in medical science for visualizing the anatomical
structures of the human body, and medical image analysis for identification and
classification is an important task in many applications.
The aim of image pre-processing is to improve the quality of the data through
denoising methods such as mean, median, Laplacian and Gaussian filters,
edge-enhancement methods such as unsharp masking and the wavelet transform, and
contrast-enhancement methods such as histogram equalization.
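The denoising and contrast-enhancement steps above can be sketched as follows. This is an illustrative Python sketch (the project itself uses MATLAB), combining a 3×3 median filter with classic histogram equalization; the image here is a tiny synthetic example.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(img):
    """Denoise with a 3x3 median filter, then equalize the histogram.

    `img` is assumed to be a 2-D uint8 grayscale array.
    """
    den = median_filter(img, size=3)              # removes salt-and-pepper noise
    hist = np.bincount(den.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    # classic histogram-equalization mapping from the cumulative histogram
    lut = np.clip((cdf - cdf_min) / (den.size - cdf_min) * 255, 0, 255)
    return lut.astype(np.uint8)[den]

# a tiny synthetic low-contrast image: background 100, a brighter 110 patch
img = np.full((8, 8), 100, dtype=np.uint8)
img[2:6, 2:6] = 110
out = preprocess(img)
```

After equalization the two gray levels are stretched to the full 0–255 range, which is the contrast improvement the text describes.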
Various medical imaging techniques are available for brain scanning, such as
MRI, CT and PET, from which radiologists extract diagnostic information.
Computer-aided diagnosis is gaining significant importance in image analysis.
This system is also helpful for technicians and medical students. The support
vector machine (SVM) is a data classification technique based on training and
testing: each training instance has a target value and a number of attributes,
and the main purpose of the SVM is to produce a generalized model that predicts
the target values of instances in the testing set given only their attributes.
SVM-based classification has been used by many researchers.
Modifications of the conventional SVM have also been proposed and compared with
AdaBoost, artificial neural networks, Bayesian networks and many other
classifiers. These hybrid approaches have proved to be robust, fast, accurate
and reliable in detection.
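The training/testing workflow described above can be illustrated in a few lines. This is a hedged Python sketch using scikit-learn's SVC on synthetic two-class data (not the project's dataset): each instance carries several attributes and a target value, the model is fitted on the training split and evaluated on the held-out test split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# each instance: 4 attributes plus a target value (0 = normal, 1 = abnormal)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = SVC(kernel='rbf').fit(X_train, y_train)   # training phase
acc = clf.score(X_test, y_test)                 # testing phase: attributes only
```

The classifier sees only the attributes of the test instances; their target values are used solely to score the predictions.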
1.2 SCOPE OF THE PROJECT
1. Implement a preprocessing procedure on the axial view for the entire set of
CT images.
2. Segment the regions of interest, namely White Matter (WM), Gray Matter (GM),
Cerebrospinal Fluid (CSF) and the abnormal tumor region, from the preprocessed
images using computationally efficient and accurate automatic segmentation
algorithms.
3. Extract texture features from the segmented regions.
4. Develop feature selection algorithms to select the optimal features from the
set of extracted texture features.
5. Implement the PNN and SVM classifiers to segment and classify the tumor
using the set of selected texture features.
6. Analyze and validate the results with the assistance of an experienced
radiologist.
Preprocessing of the brain CT images, region-of-interest segmentation, feature
extraction and feature selection are carried out using image processing
techniques. The pipeline consists of three stages: pre-processing, feature
extraction and classification.
CHAPTER 2
LITERATURE REVIEW
2.2 TITLE: Brain CT Images Classification with Deep Neural Networks
AUTHORS: Cheng Da, Haixian Zhang
YEAR: 2018
With the development of X-ray, CT, MRI and other medical imaging techniques,
doctors and researchers are provided with a large number of medical images for
clinical diagnosis, which can largely improve the accuracy and reliability of
disease diagnosis. In this paper, a method for brain CT image classification
with deep neural networks is proposed. A deep neural network exploits many
layers of non-linear information for classification and pattern analysis. In
the most recent literature, deep learning is defined as a kind of
representation learning that involves a hierarchical architecture in which
higher-level concepts are constructed from lower-level ones. The techniques
developed from deep learning have enriched the main research aspects of machine
learning and artificial intelligence and are already impacting a wide range of
signal and information processing research. Using normal and abnormal brain CT
images, texture features are extracted as the characteristic values of each
image; a deep neural network is then used to classify the CT images by brain
health. Experimental results indicate that the deep neural network performs
well on this classification task, and that the stability of the network
increases significantly as its depth increases.
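The idea of the reviewed paper (texture features fed to a multi-layer network) can be sketched as below. This is an illustrative Python example using scikit-learn's MLPClassifier as a stand-in for a deep neural network; the 9-dimensional feature vectors are synthetic, not the paper's data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
# synthetic stand-in for 9 texture features per scan
X = np.vstack([rng.normal(0, 1, (60, 9)), rng.normal(2, 1, (60, 9))])
y = np.array([0] * 60 + [1] * 60)     # 0 = normal scan, 1 = abnormal scan

net = MLPClassifier(hidden_layer_sizes=(32, 16),  # two hidden layers: the
                    max_iter=2000,                # "hierarchy" of concepts
                    random_state=1).fit(X, y)
train_acc = net.score(X, y)
```

Adding hidden layers is what gives the model the hierarchical representation the paper describes, at the cost of more parameters to train.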
2.3 TITLE: A Classifier to Detect Abnormality in CT Brain Images
YEAR: 2011
Medical images are among the most important data sources available, since they
are routinely used by physicians to detect diseases. Extracting features from
brain CT images helps in building a machine classifier that is able to classify
new brain images without human interference. In this paper, we used a data set
of 25 CT brain images with different diagnoses and built a decision tree
classifier that is able to predict general abnormality in the human brain. The
preprocessing uses the three stages described by Peng et al. with
modifications. Feature extraction mainly identified the regions of interest and
extracted analytical data from those regions. The model was evaluated using the
hold-out method and N-fold evaluation. The results showed that the classifier
is able to detect abnormality even with a small training data set.
2.4 TITLE: Detection of Brain Tumor using Image Classification
AUTHORS: Shanata Giraddi, S V Vaishnavi
YEAR: 2017
Brain tumor is one of the most common diseases in India, and its incidence is
attributed to many factors, of which lifestyle is the most commonly identified.
With changing trends and technology, identification and treatment options
improve, provided the tumor is detected early: early detection of any disease
allows better treatment. Image processing techniques help in detecting tumors
at an early stage; with the help of scanned MRI images it is possible to detect
a tumor and its severity. In this paper, we propose a system to classify images
into two groups, malignant or benign. The proposed system is based on
second-order texture features and an SVM classifier. Various second-order
features such as energy, entropy, homogeneity and correlation are used to build
the system. The work is carried out in the following steps: preprocessing,
which includes feature extraction, followed by training the SVM classifier on
the extracted features and finally testing the SVM classifier with various
kernels. With the linear kernel, the highest sensitivity, specificity and
accuracy obtained are 80%, 90% and 80% respectively. The output of the system
is the classification of a tumor image as malignant or benign, and the results
obtained illustrate the robustness of the system in identifying and classifying
brain tumors.
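The sensitivity, specificity and accuracy figures quoted above follow directly from a binary confusion matrix. The sketch below shows the standard definitions; the counts are hypothetical, chosen only so the resulting percentages match the paper's 80%/90% example.

```python
def metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# hypothetical counts: 8 of 10 malignant and 9 of 10 benign cases correct
sens, spec, acc = metrics(tp=8, fn=2, tn=9, fp=1)
```

With these counts, sensitivity is 0.8, specificity 0.9 and accuracy 0.85, illustrating how the three measures weigh the two error types differently.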
CHAPTER 3
SYSTEM ANALYSIS
The CT images acquired from the CT machine give two-dimensional cross-sections
of the brain. However, the acquired image does not by itself delineate the
tumor. Image processing is therefore needed to extract the tumor and to
determine its severity, which depends on its size.
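Since severity is judged from tumor size, the size follows directly from the segmentation mask and the scanner's pixel spacing. The Python sketch below uses a toy mask and a hypothetical spacing value to show the calculation.

```python
import numpy as np

# toy binary segmentation mask with a 3x4-pixel "tumor" region
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:6] = True

pixel_spacing_mm = 0.5            # hypothetical in-plane spacing of the scan
area_mm2 = mask.sum() * pixel_spacing_mm ** 2
# 12 pixels, each covering 0.25 mm^2, give 3.0 mm^2
```

In a real study the spacing would be read from the scan metadata rather than assumed.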
3.2 PROPOSED SYSTEM
CHAPTER 4
SYSTEM SPECIFICATION
RAM : 4 GB
Hard disk : 1 GB
OS : Windows 8/10
Tool : MATLAB
CHAPTER 5
SYSTEM DESIGN
5.2 DATA FLOW DIAGRAM
Data flow: CT scan image → pre-processing → feature extraction →
classification.
CHAPTER 6
MODULES DESCRIPTION
Preprocessing
Feature Extraction
Classification
6.1 PREPROCESSING
• Features are the characteristics of the objects of interest. They are used
as inputs to classifiers, which assign them to the class that they represent.
We extracted the following nine features from the segmented image. Intensity-
based features are first-order statistics that depend only on the individual
pixel values of the image.
Skewness = \frac{\sum_{i=1}^{N} (l_i - \mu)^3}{(N-1)\,\sigma^3}    (3)

Kurtosis = \frac{\sum_{i=1}^{N} (l_i - \mu)^4}{(N-1)\,\sigma^4}    (4)
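Equations (3) and (4) can be computed directly from the pixel values. The Python sketch below follows the text's (N−1) normalization; the four-element sample is only a worked example.

```python
import numpy as np

def first_order_features(pixels):
    """Skewness and kurtosis as in Eqs. (3)-(4), using the (N-1)
    normalization given in the text."""
    l = np.asarray(pixels, dtype=float).ravel()
    N, mu = l.size, l.mean()
    sigma = l.std(ddof=1)                         # sample standard deviation
    skew = ((l - mu) ** 3).sum() / ((N - 1) * sigma ** 3)
    kurt = ((l - mu) ** 4).sum() / ((N - 1) * sigma ** 4)
    return skew, kurt

# a symmetric sample: skewness should come out as exactly zero
skew, kurt = first_order_features([1, 2, 2, 3])
```

Skewness measures the asymmetry of the intensity histogram and kurtosis its peakedness, which is why both are useful first-order texture descriptors.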
• All features obtained from the GLCM are functions of the distance d and the
orientation θ; thus, if an image is rotated, the feature values change. In
practice, for each d the values for the four directions are averaged. Four
co-occurrence matrices are constructed in the four spatial orientations:
horizontal, right diagonal, vertical and left diagonal (0°, 45°, 90°, 135°).
A fifth matrix is constructed as the mean of the preceding four. This yields
texture features that are rotation invariant.
• Contrast is a measure of local gray-level variations; it takes high values
for images of high contrast.
Homogeneity = \sum_{i,j} \frac{p(i,j)}{1 + |i-j|}    (6)

CorrCoeff = \frac{\sum_{i,j} i\,j\,p(i,j) - \mu_x \mu_y}{\sigma_x \sigma_y}    (8)

where μx, μy, σx, σy are the means and standard deviations of px and py, the
partial probability density functions, respectively.
Entropy = -\sum_i \sum_j p(i,j) \log p(i,j)    (9)
6.3 CLASSIFICATION
Classification organizes data into different groups on the basis of their
features or properties. The process consists of a training phase and a testing
phase. In the training phase, image properties called features are isolated
and a unique description of each classification category is created. In the
testing phase, these features are used to classify images into categories.
The accuracy of this classification must be high because diagnosis and
treatment are based on this categorization. We used the SVM, a binary
classifier based on supervised learning that is capable of delivering high
classification performance [10].
CHAPTER 7
SYSTEM TESTING
TEST OBJECTIVES
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
UNIT TESTING
Unit testing involves the design of test cases that validate that the internal
program logic is functioning properly and that program inputs produce valid
outputs.
All decision branches and internal code flow should be validated. It is the
testing of individual software units of the application, and it is done after
the completion of an individual unit and before integration.
This is structural testing that relies on knowledge of the unit's construction
and is invasive.
In this project, unit testing was used to test the modules, fields and data
items separately against the efficiency requirements and the custom needs.
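A unit test in the sense described above checks one unit in isolation. The Python sketch below tests a hypothetical helper function (not taken from the project code) with two cases: the normal path and a degenerate input.

```python
import unittest

def normalize(values):
    """Scale a list of numbers to the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: constant input
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

class TestNormalize(unittest.TestCase):
    def test_range(self):
        out = normalize([2, 4, 6])
        self.assertEqual(min(out), 0.0)
        self.assertEqual(max(out), 1.0)

    def test_constant_input(self):
        self.assertEqual(normalize([5, 5]), [0.0, 0.0])

suite = unittest.TestLoader().loadTestsFromTestCase(TestNormalize)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Each test exercises one decision branch of the unit, which is exactly the branch coverage the paragraph above calls for.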
INTEGRATION TESTING
Integration tests are designed to test integrated software components to
determine whether they actually run as one program. Testing is event driven
and is more concerned with the basic outcome of screens or fields, checking
that the combination of components is correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise from the
combination of components.
In this project, the modules are linked together and data is transferred from
one module to another to verify the integration.
SYSTEM TESTING
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable
results. An example of system testing is the configuration-oriented system
integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.
In this project, system testing verifies that the program performs its
functions correctly when run with the project's methods and data. Two types of
testing are used, as follows.
White box testing is testing in which the software tester has knowledge of the
inner coding, structure and language of the software. It is also known as
clear box, open box, glass box, transparent box or code-based testing. It is a
software testing method in which the internal structure of the item being
tested is known to the tester; the method is so named because the software
program, in the eyes of the tester, is like a transparent box whose inside is
clearly visible.
Black box testing is testing the software without any knowledge of the inner
workings, structure or language of the module being tested. Black box tests,
like most other kinds of tests, must be written from a definitive source
document such as a specification or requirements document. The software under
test is treated as a black box: the tester cannot "see" into it, and the test
provides inputs and responds to outputs without considering how the software
works.
CHAPTER 8
CONCLUSION
We conclude that an automated system using the GLCM method and an SVM
classifier with an RBF kernel gives a 98% level of accuracy. This would be
highly useful as a diagnostic tool for radiologists in the automated
classification of brain CT images into normal and abnormal. The proposed
system classifies images into these two classes only, i.e. normal and
abnormal.
FUTURE ENHANCEMENT
CHAPTER 9
SOURCECODE AND SCREENSHOTS
9.1 SOURCECODE
PREPROCESSING
% sobelY = sobelX';
% gx = imfilter(f_smooth, sobelX);
% gy = imfilter(f_smooth, sobelY);
% f_edge = abs(gx) + abs(gy);
%
% an alternative way
f_edge = edge(f_smooth, 'Sobel');
figure, imshow(f_open), title('Opened Image');
%% connected component analysis
cc1 = bwconncomp(f_open);
% Extract features to find the components of interest. Note that the
% 'Eccentricity' property is not used in this application but is kept in
% the code.
stats1 = regionprops(cc1, {'Area','Eccentricity'});
area = [stats1.Area];
sumArea = 0;
% ecc = [stats1.Eccentricity];
% sumEcc = 0;
for i = 1 : cc1.NumObjects
sumArea = sumArea + area(i);
% sumEcc = sumEcc + ecc(i);
end
% components
cc2 = bwconncomp(BW);
stats2 = regionprops(cc2, 'BoundingBox');
%% representation
figure, imshow(f), title('Detected Regions'); % title fixed: the original 'Detected Vehicles' label was template residue
FEATURE EXTRACTION
function featureout=feature_extraction(g1,g2,g3,g4)
% Angular second moment (energy), averaged over the four GLCM orientations
asm1=sum(sum(g1.^2));
asm2=sum(sum(g2.^2));
asm3=sum(sum(g3.^2));
asm4=sum(sum(g4.^2));
asm=mean([asm1 asm2 asm3 asm4]);
featureout(1,1)=asm;
ctr1=contrast_func(g1);
ctr2=contrast_func(g2);
ctr3=contrast_func(g3);
ctr4=contrast_func(g4);
ctrval=mean([ctr1 ctr2 ctr3 ctr4]);
featureout(1,2)=ctrval;
e1=entropy_func(g1);
e2=entropy_func(g2);
e3=entropy_func(g3);
e4=entropy_func(g4);
etr=mean([e1 e2 e3 e4]);
featureout(1,3)=etr;
idm1=idm_func(g1);
idm2=idm_func(g2);
idm3=idm_func(g3);
idm4=idm_func(g4);
idm=mean([idm1 idm2 idm3 idm4]);
featureout(1,4)=idm;
diss1=dissm_func(g1);
diss2=dissm_func(g2);
diss3=dissm_func(g3);
diss4=dissm_func(g4);
dissval=mean([diss1 diss2 diss3 diss4]);
featureout(1,5)=dissval;
v1=var_func(g1);
v2=var_func(g2);
v3=var_func(g3);
v4=var_func(g4);
varval=mean([v1 v2 v3 v4]);
featureout(1,6)=varval;
corr1=correlation_func(g1);
corr2=correlation_func(g2);
corr3=correlation_func(g3);
corr4=correlation_func(g4);
corrval=mean([corr1 corr2 corr3 corr4]);
featureout(1,7)=corrval;
pval1=prominence_func(g1);
pval2=prominence_func(g2);
pval3=prominence_func(g3);
pval4=prominence_func(g4);
proval=mean([pval1 pval2 pval3 pval4]);
featureout(1,8)=proval;
sval1=shade_func(g1);
sval2=shade_func(g2);
sval3=shade_func(g3);
sval4=shade_func(g4);
shadeval=mean([sval1 sval2 sval3 sval4]);
featureout(1,9)=shadeval;
intval1=interia_func(g1);
intval2=interia_func(g2);
intval3=interia_func(g3);
intval4=interia_func(g4);
intval=mean([intval1 intval2 intval3 intval4]);
featureout(1,10)=intval;
de1=diffentropy_func(g1);
de2=diffentropy_func(g2);
de3=diffentropy_func(g3);
de4=diffentropy_func(g4);
diffetr=mean([de1 de2 de3 de4]);
featureout(1,11)=diffetr;
dv1=diffvar_func(g1);
dv2=diffvar_func(g2);
dv3=diffvar_func(g3);
dv4=diffvar_func(g4);
diffvarval=mean([dv1 dv2 dv3 dv4]);
featureout(1,12)=diffvarval;
TUMOR DETECTION
% TUMORCOMP('CALLBACK',hObject,eventData,handles,...) calls the local
% function named CALLBACK in TUMORCOMP.M with the given input arguments.
%
% TUMORCOMP('Property','Value',...) creates a new TUMORCOMP or raises the
% existing singleton*. Starting from the left, property value pairs are
% applied to the GUI before tumorComp_OpeningFcn gets called. An
% unrecognized property name or invalid value makes property application
% stop. All inputs are passed to tumorComp_OpeningFcn via varargin.
%
% *See GUI Options on GUIDE's Tools menu. Choose "GUI allows only one
% instance to run (singleton)".
%
% See also: GUIDE, GUIDATA, GUIHANDLES
if nargin && ischar(varargin{1})
gui_State.gui_Callback = str2func(varargin{1});
end
if nargout
[varargout{1:nargout}] = gui_mainfcn(gui_State, varargin{:});
else
gui_mainfcn(gui_State, varargin{:});
end
% End initialization code - DO NOT EDIT
% --- Outputs from this function are returned to the command line.
function varargout = tumorComp_OutputFcn(hObject, eventdata, handles)
% varargout cell array for returning output args (see VARARGOUT);
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
if nofile
msgbox(sprintf('Image not found!'),'Error', 'warning');
return
end
img1 = imread(path);
img1 = im2double(img1);
img2 = img1;
axes(handles.axes1);
imshow(img1)
global img1
axes(handles.axes);
bw = im2bw(img1,0.7);
label = bwlabel(bw);
% Solidity and Area of each component (reconstructed: the listing omitted
% these statistics, which high_dense_area and max_area depend on)
stats = regionprops(label,'Solidity','Area');
area = [stats.Area];
high_dense_area = [stats.Solidity] > 0.6;   % keep solid, high-density blobs
max_area = max(area(high_dense_area));
tumor_label = find(area == max_area);
tumor = ismember(label,tumor_label);
se = strel('square',5);
tumor = imdilate(tumor,se);
Bound = bwboundaries(tumor);   % tumor boundary (reconstructed: Bound was undefined)
imshow(img1);
hold on
for i = 1:length(Bound)
    plot(Bound{i}(:,2), Bound{i}(:,1), 'y', 'linewidth', 1.75)
end
hold off
axes(handles.axes)
SVM PROCESS
function FINAL_OUT=svm_process(train_data,train_class,test_data)
c = 1000;
lambda = 1e-7;
kerneloption= 2;
kernel='gaussian';
verbose = 0;
nbclass=9;
[xsup,w,b,nbsv]=svmmulticlassoneagainstall(train_data,train_class,nbclass,c,lambda,kernel,kerneloption,verbose);
[ypred,maxi] = svmmultival(test_data,xsup,w,b,nbsv,kernel,kerneloption);
FINAL_OUT=ypred';
FINAL
clc
clear all
close all
% The disabled block below builds the training feature matrix by walking the
% dataset folders. The original listing repeated the same body once per class
% (EDH, ICH, Ischemic stroke, Normal, SAH, SDH, MS, Normalmri, Tumor); it is
% collapsed here into a single loop, with the class label taken from the
% folder name.
%
% classnames = {'EDH','ICH','Ischemic stroke','Normal','SAH','SDH', ...
%               'MS','Normalmri','Tumor'};
% classlabel = find(strcmp(classnames, filename1));
% if ~isempty(classlabel)
%     filepath2 = [filepath1,'\', filename1];
%     fileloc2 = dir(filepath2);
%     for k2 = 3:length(fileloc2)
%         filename2 = fileloc2(k2).name;
%         filepath3 = [filepath2,'\', filename2];
%         img = imread(filepath3);
%         if ndims(img) == 3
%             img = rgb2gray(img);
%         end
%         imgval = imresize(img, [wresize wresize]);
%         [final_out dwtimg] = wavelet_process(imgval, 4);
%         feature_final = feature_text_process(final_out);
%         subplot(1,2,1), imshow(imgval);
%         subplot(1,2,2), imshow(dwtimg);
%         pause(0.1);
%         final_feature_data(index, 1:length(feature_final)+1) = [feature_final classlabel];
%         file_data{index} = filepath3;
%         index = index + 1;
%     end
% end
% end
% end
% save feature_data.mat final_feature_data file_data
load feature_data.mat
res_str{1}='CT EDH';
res_str{2}='CT ICH';
res_str{3}='CT Ischemic stroke';
res_str{4}='CT Normal';
res_str{5}='CT SAH';
res_str{6}='CT SDH';
% Labels 7-9 correspond to the MS, Normalmri and Tumor folders used when
% building feature_data.mat; without them, indexing res_str with those
% classes fails (label strings inferred from the folder names above).
res_str{7}='MRI MS';
res_str{8}='MRI Normal';
res_str{9}='MRI Tumor';
final_feature_data=[final_feature_data;final_feature_data;final_feature_data];
train_data=final_feature_data(:,1:end-1);
train_class=final_feature_data(:,end);
[rr cc]=size(final_feature_data);
test_data=final_feature_data(:,1:end-1);
no_of_feat_red=200;
train_data=pca_process(train_data,no_of_feat_red);
test_data=pca_process(test_data,no_of_feat_red);
for kw=1:length(train_class)
confin_ori(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],train_class(kw));
end
train_num=[25 25 22 9 25 25 1 25 100]*3;
res_out=knnclassify(test_data,train_data,train_class);
result_knn=floor(res_out);
locx=find(result_knn<min(train_class) | result_knn>max(train_class));
result_knn(locx)=1;
for k7=1:length(result_knn)
FINAL_KNN_RESULT{k7}=res_str{result_knn(k7)};
end
FINAL_KNN_RESULT=FINAL_KNN_RESULT
[Sensitivity_KNN KNN_CONFUSION]=RESULT_PROP_FUNCTION(FINAL_KNN_RESULT,res_str,train_num,confin_ori)
paprameter_cal_process(KNN_CONFUSION)
for kw=1:length(result_knn)
confin_out(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],result_knn(kw));
end
figure,plotconfusion(confin_out',confin_ori');
title('KNN');
figure,plotroc(confin_out',confin_ori');
title('KNN');
grid on;
SVM_OUT=svm_process(train_data,train_class,test_data);
for k7=1:length(SVM_OUT)
FINAL_SVM_RESULT{k7}=res_str{SVM_OUT(k7)};
end
FINAL_SVM_RESULT=FINAL_SVM_RESULT
[Sensitivityval_SVM SVM_CONFUSION]=RESULT_PROP_FUNCTION(FINAL_SVM_RESULT,res_str,train_num,confin_ori)
paprameter_cal_process(SVM_CONFUSION)
for kw=1:length(SVM_OUT)
confin_out(kw,1:9)=ismember([1 2 3 4 5 6 7 8 9],SVM_OUT(kw));
end
figure,plotconfusion(confin_out',confin_ori');
title('SVM');
figure,plotroc(confin_out',confin_ori');
title('SVM');
grid on;
9.2 SCREENSHOTS
NOISE REMOVAL
DETECT TUMOR
CHAPTER 10
REFERENCES
10. Burges C. J. C. (1998). A Tutorial on Support Vector Machines for Pattern
Recognition. Data Mining and Knowledge Discovery 2(2):121–167.