CHAPTER 1
INTRODUCTION
A brain tumor is one of the most serious diseases in medical science. Effective and efficient analysis is always a key concern for the radiologist in the premature phase of tumor growth. Histological grading, based on a stereotactic biopsy test, is the gold standard and the convention for detecting the grade of a brain tumor. The biopsy procedure requires the neurosurgeon to drill a small hole into the skull, from which tissue is collected. There are many risk factors involved in the biopsy test, including bleeding from the tumor and brain causing infection, seizures, severe migraine, stroke, coma and even death. But the main concern with the stereotactic biopsy is that it is not 100% accurate, which may result in a serious diagnostic error followed by a wrong treatment decision.
Since tumor biopsy is challenging for brain tumor patients, non-invasive imaging techniques like Magnetic Resonance Imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, the development of systems for the detection and prediction of the grade of tumors based on MRI data has become necessary. But at first sight of an imaging modality like MRI, proper visualisation of the tumor cells and their differentiation from nearby soft tissues is a somewhat difficult task, which may be due to low illumination in the imaging modality, the large amount of data, or the complexity and variance of tumors.
Automated defect detection in medical imaging using machine learning has become an emergent field in several medical diagnostic applications. Its application to the detection of brain tumors in MRI is very crucial, as it provides information about abnormal tissues which is necessary for planning treatment. Studies in the recent literature have also reported that automatic computerized detection and diagnosis of the disease, based on medical image analysis, could be a good alternative, as it would save radiologist time and also achieve a tested accuracy. Furthermore, if computer algorithms can provide robust and quantitative measurements, they could greatly aid in the clinical management of brain tumors by freeing physicians from part of the burden of manual image interpretation.
Machine learning based approaches like deep ConvNets play an important role in radiology and other medical science fields, making diagnosis much simpler than before and hence providing a feasible alternative to surgical biopsy for brain tumors. In this project, we attempt to detect and classify brain tumors and compare the results of binary and multi-class classification of brain tumors using a Convolutional Neural Network.
Magnetic resonance (MR) imaging and computed tomography (CT) scans of the brain are the two most common tests to check the existence of a tumor and recognise its position for progressive treatment decisions. These two scans are still used extensively for their convenience and their capability to yield high-definition images of pathological tissues. At present, several treatments are offered for tumors, which include surgery and therapies such as radiation therapy and chemotherapy. The choice of treatment relies on many factors, such as the size, kind, and grade of the tumor present in the MR image, and on whether or not the cancer has spread to other parts of the body.
Precise identification of the kind of brain abnormality is enormously needed for treatment planning. Computer-aided methods provide an associated estimation to assist medical doctors in image understanding and to lessen image reading time. These advancements increase the reliability and correctness of medical diagnosis; however, segmenting an MR image of the tumor and its area is itself a very problematic job, given the occurrence of tumors in specific positions within the brain image without clearly distinguishing boundaries.
MOTIVATION FOR THE WORK:
A brain tumor is defined as an abnormal growth of cells within the brain or central spinal canal. Some tumors can be cancerous, and thus they need to be detected and cured in time. The exact cause of brain tumors is not clear, nor is an exact set of symptoms defined; thus, people may be suffering from one without realizing the danger. Primary brain tumors can be either malignant (contain cancer cells) or benign (do not contain cancer cells).
SCOPE:
Our aim is to develop an automated system for the classification of brain tumors. The system incorporates image processing, pattern analysis, and computer vision techniques to extract useful and accurate information from these images with the least error possible. The proper combination and parameterization of the phases enables the development of adjunct tools that can help in the early diagnosis or the monitoring of tumor identification and location.
CHAPTER 2
LITERATURE REVIEW
Pan & Yang's survey focused on categorizing and reviewing the current progress on transfer learning for classification, regression and clustering problems. In this survey, they discussed the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, sample selection bias and covariate shift, explored some potential future issues in transfer learning research, and reviewed several current trends of transfer learning.
Kapoor and Thakur surveyed the various techniques that are part of medical image processing and are prominently used in discovering brain tumors from MRI images. Based on that research, their paper lists the various techniques in use, with a brief description of each. Another study compared seven standard classifiers, including the Adaptive Neuro-Fuzzy Classifier (ANFC), Naive Bayes, Support Vector Machine (SVM), Classification and Regression Tree (CART), and k-nearest neighbours (k-NN). The accuracy reported is on the BRaTS 2015 dataset (a subset of the BRaTS 2017 dataset), which consists of 200 HGG and 54 LGG cases; 56 three-dimensional quantitative MRI features were extracted manually from each patient's MRI and used for classification.
Nilesh Bhaskarrao Bahadure and Arun Kumar Ray, in "Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM", segmented MR images of the brain into normal tissues such as white matter, gray matter and cerebrospinal fluid (background), and tumor-infected tissues. Pre-processing was applied to eliminate the effect of unwanted noise, and a skull stripping algorithm was used to remove non-brain tissues.
In Hunnur et al., the authors proposed a method to detect brain tumors based mainly on a thresholding approach and morphological operations. It also calculates the area of the tumor and displays the stage of the tumor. MRI images of the brain are used as input, and the images are converted into grayscale images. A high pass filter is used to remove the noise present in the converted images, and a median filter is applied to remove impulse noise. Thresholding is used to extract the object from the background by selecting a threshold value. Morphological operations (dilation and erosion) are then performed. The tumor region is detected, and the image is shrunk to remove unwanted details. Finally, the tumor area is calculated and the stage of the tumor patient is displayed.
A hybrid segmentation technique combining watershed and thresholding based segmentation is used by Mustaqeem et al. Firstly, the quality of the scanned images is enhanced, and then morphological operations are applied to detect the tumor along with their proposed hybrid segmentation. The proposed system is easy to execute and thus can be managed easily. Working approach: the obtained MRI images are represented as two-dimensional matrices having pixels as their elements. A Gaussian low pass filter and averaging filters are used to remove salt-and-pepper noise from the image; each filtered pixel's value is replaced with values from its neighborhood. A Gaussian high pass filter is used to convert the grayscale image into a binary image format. Watershed segmentation is used to group pixels of an image on the basis of their intensities. Morphological operators are then applied to the converted binary image to separate the tumor part of the image.
Marroquin et al. presented automated 3D segmentation for brain MRI scans, using a model that lessens the impact of intensity inhomogeneities. A brain atlas is employed to find the nonrigid transformation that maps the standard brain; this transformation is further used to segment the brain from non-brain tissues, to compute prior probabilities, and to find an automatic initialization, finally applying the MPM-MAP algorithm to find the optimal segmentation. Major findings from the study show that the MPM-MAP algorithm is comparatively more robust than EM in terms of errors while estimating the posterior marginal. For optimal segmentation, the MPM-MAP algorithm involves only the solution of linear systems and is therefore computationally efficient.

Astina Minz et al. proposed the usage of the AdaBoost machine learning algorithm. The proposed system includes three main segments: pre-processing eradicates noise in the datasets and converts the images into grayscale, after which median filtering and thresholding segmentation are applied.
CHAPTER 3
HARDWARE REQUIREMENT:
RAM: 4.00 GB
SOFTWARE REQUIREMENT:
Python:
Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming.
PIP:
It is the package management system used to install and manage software packages
written in Python.
NumPy:
NumPy is the fundamental package for scientific computing with Python. It provides a powerful multidimensional array object and tools for working with these arrays, and contains various features including linear algebra, Fourier transform, and random number capabilities.
Pandas:
Pandas is the most popular Python library used for data analysis. It provides highly optimized performance and two main data structures:
1. Series
2. DataFrames
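A minimal sketch of the two structures, using the image counts from this project's Kaggle dataset (98 healthy, 155 tumor) purely as illustrative data:

```python
import pandas as pd

# A Series is a labelled one-dimensional array.
s = pd.Series([98, 155], index=["healthy", "tumor"])

# A DataFrame is a two-dimensional labelled table.
df = pd.DataFrame({"label": ["healthy", "tumor"], "count": [98, 155]})

print(s["tumor"])   # 155
print(df.shape)     # (2, 2)
```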
Anaconda:
Anaconda is a free and open-source distribution of the Python and R programming languages for scientific computing that aims to simplify package management and deployment. Package versions are managed by the package management system conda. The Anaconda distribution includes data-science packages suitable for Windows, Linux, and macOS, and comes with 1,500 packages selected from PyPI as well as the conda package and virtual environment manager. It also includes a GUI, Anaconda Navigator, as a graphical alternative to the command-line interface.
TensorFlow:
TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks, also used for machine learning applications such as neural networks. It is used for both research and production.
Keras:
Keras is an open-source neural network library written in Python. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. It contains numerous implementations of commonly used building blocks such as layers, objectives, activation functions and optimizers, and a host of tools to make working with image and text data easier, simplifying the coding necessary for writing deep neural network code.
CHAPTER 4
METHODOLOGY
ARTIFICIAL INTELLIGENCE:
Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems, enabling them even to mimic human behaviour. Machine Learning (ML) is a subset of AI which is programmed to think on its own, perform social interaction, learn new information from the provided data, and adapt as well as improve with experience. Although training time via Deep Learning (DL) methods is longer than with Machine Learning, feature extraction in DL is automatic, so large domain knowledge is not required for obtaining the desired results, unlike in ML.
BRAIN TUMOR:
In medical science, an anomalous and uncontrollable cell growth inside the brain is recognised as a tumor. The human brain is the most sensitive part of the body. It controls muscle movements and the interpretation of sensory information such as sight, sound and touch.
The human brain consists of Grey Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF), and on the basis of factors like the quantification of these tissues and the location of the abnormality, the tumor is identified. A tumor in the brain can affect such sensory information and muscle movements, or even result in more dangerous situations, including death. Depending upon the place of commencement, tumors can be categorised into primary tumors and secondary tumors. If the tumor originates inside the skull, then it is known as a primary brain tumor; otherwise, if the tumor's initiation place is somewhere else in the body and it moves towards the brain, then such a tumor is called a secondary tumor.
Secondary tumors commonly originate from cancers such as bronchogenic carcinoma. While some tumours, such as meningioma, can be easily segmented, others like gliomas and glioblastomas are much more difficult to localise. The World Health Organisation (WHO) categorises gliomas into High Grade Gliomas (HGG, malignant) and Low Grade Gliomas (LGG, up to grade III stage, benign). Although most LGG tumors have a slower growth rate compared to HGG and are responsive to treatment, there is a subgroup of LGG tumors which, if not diagnosed earlier and left untreated, could lead to GBM. In both cases, an early diagnosis of the tumor grade can lead to a good prognosis. Survival time for a GBM (Glioblastoma Multiforme) patient is typically short.
Magnetic Resonance Imaging (MRI) has become the standard non-invasive technique for brain tumor diagnosis over the last few decades, due to its improved soft tissue contrast and because it does not use harmful radiation, unlike methods such as CT (Computed Tomography), X-ray and PET (Positron Emission Tomography) scans. Since glioblastomas are infiltrative tumours, their borders are often fuzzy and hard to distinguish from healthy tissues. As a solution, more than one MRI modality is often combined, e.g. T1-weighted, T2-weighted (characterising transverse relaxation), proton density (PD) contrast imaging, diffusion MRI (dMRI), and fluid attenuation inversion recovery (FLAIR). T1-weighted images with intravenous contrast highlight the most vascular regions of the tumor (T1C gives much more accuracy than T1), called the 'enhancing tumor' (ET), along with the 'tumor core' (TC) that does not involve peritumoral edema. T2-weighted (T2W) and T2W-Fluid Attenuation Inversion Recovery (FLAIR) images are used to evaluate the tumor and peritumoral edema, together defined as the 'whole tumor' (WT). Gliomas and glioblastomas are difficult to distinguish in T1, T1C, T2 and PD; they are better distinguished in FLAIR. Here, we are using a Convolutional Neural Network (CNN) for the detection and classification of brain tumors.
Neural Networks (NN) form the base of deep learning, a subfield of machine learning where the algorithms are inspired by the structure of the human brain. NNs take in data, train themselves to recognize the patterns in this data, and then predict the outputs for a new set of similar data. NNs are made up of layers of neurons, which are the core processing units of the network. First we have the input layer, which receives the input; the output layer predicts our final output; in between exist the hidden layers, which perform most of the computation.
Our brain tumor images are composed of 128 by 128 pixels, which make up 16,384 pixels in total. Each pixel is fed as input to each neuron of the first layer. Neurons of one layer are connected to neurons of the next layer through channels, and each of these channels is assigned a numerical value known as a 'weight'. The inputs are multiplied by the corresponding weights, and their sum is sent as input to the neurons in the hidden layer. Each of these neurons is associated with a numerical value called the 'bias', which is then added to the input sum. This value is then passed through a threshold function called the 'activation function'. The result of the activation function determines whether the particular neuron will get activated or not. An activated neuron transmits data to the neurons of the next layer over the channels. In this manner the data is propagated through the network; this is called 'forward propagation'. In the output layer, the neuron with the highest value fires and determines the output; the values are basically probabilities.
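The forward pass just described (inputs times weights, plus a bias, through an activation) can be sketched in plain NumPy. The layer sizes here are illustrative, not the project's actual network; only the 16,384-pixel input matches the text above.

```python
import numpy as np

def sigmoid(z):
    # Squashes any real value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.random(16384)                          # one 128x128 image, flattened

W1 = rng.standard_normal((64, 16384)) * 0.01   # channel weights into a 64-neuron hidden layer
b1 = np.zeros(64)                              # one bias per hidden neuron
W2 = rng.standard_normal((1, 64)) * 0.01       # weights into a single output neuron
b2 = np.zeros(1)

h = sigmoid(W1 @ x + b1)   # weighted sum + bias, then activation
y = sigmoid(W2 @ h + b2)   # output: a probability between 0 and 1
print(y.shape)             # (1,)
```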
The predicted output is compared against the actual output to realize the error in prediction. The magnitude of the error gives an indication of the direction and magnitude of change needed to reduce the error. This information is then transferred backward through our network; this is known as 'back propagation'. Based on this information, the weights are adjusted. This cycle of forward propagation and back propagation is performed iteratively with multiple inputs, and continues until the weights are assigned such that the network can predict the type of tumor correctly in most cases. This brings our training process to an end. NNs may take hours or even months to train, but time is a reasonable trade-off when compared to their scope. Several experiments show that, after pre-processing the MRI images, a neural network classification algorithm performs well.
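The forward/backward cycle above amounts to iterative gradient descent on the weights. A minimal single-neuron sketch, with toy data and an illustrative learning rate (not the project's network):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the label is 1 when the input values are large
X = np.array([[0.0, 0.1], [0.9, 1.0], [0.1, 0.0], [1.0, 0.8]])
y = np.array([0.0, 1.0, 0.0, 1.0])

w = np.zeros(2)
b = 0.0
lr = 1.0                            # learning rate (illustrative)

for _ in range(500):
    p = sigmoid(X @ w + b)          # forward propagation
    err = p - y                     # prediction error
    w -= lr * (X.T @ err) / len(y)  # back propagation: adjust weights...
    b -= lr * err.mean()            # ...and bias against the error gradient

pred = (sigmoid(X @ w + b) > 0.5).astype(int)
print(pred)                         # [0 1 0 1]
```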
[Figure: system overview — input from medical professionals or users passes through the network's nodes and weights (w); results are shown on IoT-based devices or a web-based interface]
Image Acquisition:
Kaggle dataset:
Images can be in the form of .csv (comma separated values) or .dat (data) files in grayscale, RGB, or HSV, or simply in a .zip file, as was the case with our online Kaggle dataset. It contained 98 healthy MRI images and 155 tumor-infected MRI images.
BRaTS dataset:
The Multimodal Brain Tumor Segmentation (BRaTS) MICCAI challenge has always been focusing on the segmentation of brain tumors. A set of clinically acquired multimodal MRI scans of glioblastoma (GBM) and lower grade glioma (LGG), with pathologically confirmed diagnosis and available overall survival (OS), was provided as the training, validation and testing data for the BRaTS 2015 challenge. All BRaTS multimodal scans are available as NIfTI files (.nii.gz); these multimodal scans describe a) native (T1), b) post-contrast T1-weighted (T1C), c) T2-weighted (T2), and d) T2 Fluid Attenuated Inversion Recovery (FLAIR) volumes, and were acquired with different clinical protocols and various scanners from multiple institutions. They describe a mixture of pre- and post-operative scans, and their ground truth labels have been annotated by the fusion of segmentation results from algorithms. All the imaging datasets have been segmented manually, by one to four raters, following the same annotation protocol, and comprise the whole tumor, the tumor core (including cystic areas), and the enhancing tumor core.
The dataset contains 2 folders for the purpose of training and testing. The 'train' folder contains 2 sub-folders of HGG and LGG cases: 220 patients with HGG and 27 patients with LGG. The 'test' folder contains brain images of 110 patients with HGG and LGG cases combined. There are 5 different MRI image modalities for each patient: T1, T2, T1C, FLAIR, and OT (ground truth of the tumor segmentation). All these image files are stored in .mha format, are of size 240x240, have a resolution of 1 mm^3, and are skull-stripped. In the ground truth images, each voxel is labelled with zeros and non-zeros, where the non-zero labels mark the tumor sub-regions.
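Since each ground-truth voxel is labelled zero (background) or non-zero (a tumor sub-region), the whole-tumor mask can be recovered with a simple comparison. The tiny volume below is synthetic, not a BRaTS scan:

```python
import numpy as np

# Synthetic 4x4x4 "ground truth" volume: 0 = background, non-zero = tumor labels
seg = np.zeros((4, 4, 4), dtype=np.uint8)
seg[1:3, 1:3, 1:3] = 1   # a 2x2x2 tumor-core region
seg[2, 2, 2] = 4         # one enhancing-tumor voxel inside it

whole_tumor = seg > 0    # boolean mask of every labelled voxel
print(int(whole_tumor.sum()))   # 8 voxels belong to the whole tumor
```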
Data Augmentation:
The augmentation techniques applied include random rotation (0-10 degrees), horizontal and vertical shifts, and horizontal and vertical flips. Data augmentation is done to teach the network the desired invariance and robustness properties.
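The augmentations listed above can be expressed with Keras' ImageDataGenerator. The rotation range and flips mirror the text; the shift fractions are assumptions, since the text does not give their magnitudes:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=10,       # random rotation, 0-10 degrees
    width_shift_range=0.1,   # horizontal shift (fraction assumed)
    height_shift_range=0.1,  # vertical shift (fraction assumed)
    horizontal_flip=True,
    vertical_flip=True,
)

# One synthetic 128x128 grayscale image, with batch and channel dimensions
batch = np.random.rand(1, 128, 128, 1)
augmented = next(datagen.flow(batch, batch_size=1))
print(augmented.shape)   # (1, 128, 128, 1)
```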
Image Pre-Processing:
In the first step of pre-processing, the memory space of the image is reduced by scaling the image down and converting it to grayscale.
Activation Function:
The Sigmoid function is used in the case of binary classification, while the Softmax function is used for multi-class classification. The tanh function ranges from -1 to 1 and is considered better than sigmoid for binary classification using a feed-forward algorithm. ReLU (Rectified Linear Unit) ranges from 0 to infinity, and Leaky ReLU (an improved version of ReLU) ranges from -infinity to +infinity. ReLU is a non-linear operation whose output is f(x) = max(0, x); its purpose is to introduce non-linearity in our ConvNet, since most of the real-world data we would want our ConvNet to learn is non-negative. There are other non-linear functions, such as tanh or sigmoid, that can also be used instead of ReLU; most data scientists use ReLU since, performance-wise, it is better than the other two. Stride is the number of pixels by which the filter moves over the input matrix at a time. Sometimes the filter does not fit the input image perfectly; we then have two options: either pad the picture with zeros (zero-padding) so that it fits, or drop the part of the image where the filter does not fit. The latter is called valid padding, which keeps only the valid part of the image.
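The functions discussed above, written out in NumPy to make their ranges concrete (a sketch, not the project's implementation):

```python
import numpy as np

def sigmoid(z):               # range (0, 1): binary classification output
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):               # outputs sum to 1: multi-class output
    e = np.exp(z - z.max())   # subtract max for numerical stability
    return e / e.sum()

def relu(z):                  # f(x) = max(0, x): range [0, infinity)
    return np.maximum(0.0, z)

def leaky_relu(z, a=0.01):    # small negative slope: range (-inf, +inf)
    return np.where(z > 0, z, a * z)

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))       # [0. 0. 3.]
print(np.tanh(z))    # each value lies in (-1, 1)
print(softmax(z))    # three probabilities summing to 1
```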
Classifier models can basically be divided into two categories: generative models based on hand-crafted features, and discriminative models based on traditional learning such as Support Vector Machines (SVM) and Random Forests (RF), or on Convolutional Neural Networks (CNN). One difficulty with methods based on hand-crafted features is that they often require the computation of a large number of features in order to be accurate when used with many traditional machine learning techniques. This can make them slow to compute and expensive memory-wise. More efficient techniques employ lower numbers of features, using dimensionality reduction like PCA (Principal Component Analysis) or feature selection methods, but the reduction in the number of features often comes at the cost of reduced accuracy. Brain tumour segmentation methods of this kind exploit little prior knowledge of the brain's anatomy and instead rely mostly on the extraction of a large number of low-level image features, directly modelling the relationship between these features and the label of a given voxel.

In our project, we have used the Convolutional Neural Network architecture for brain tumor detection and classification. A CNN is a specialised NN capable of analysing the RGB layers of an image. Unlike others, it analyses one image at a time, identifies and extracts important features, and uses them to classify the image, learning high-level representations or abstractions from the input training data. The main building block used to construct a CNN architecture is the convolutional layer; a CNN also consists of the following layers:
• Convolutional Layer - This is the first layer, which extracts features from an input image. Convolution preserves the relationship between pixels by learning image features using small squares of input data. It is a mathematical operation that takes two inputs, an image matrix and a filter (or kernel), to generate a feature map. Convolution of an image with different filters can perform operations such as edge detection, blurring and sharpening.
• Activation Layer - It produces a single output based on the weighted sum of the inputs.
• Pooling Layer - This layer reduces the number of parameters when the images are too large. Spatial pooling (also called subsampling or down-sampling) reduces the dimensionality of each map but retains important information. It can be of different types:
• Max Pooling - taking the largest element from the feature map
• Average Pooling - taking the average of the elements in the feature map
• Sum Pooling - taking the sum of all elements in the feature map
• Fully Connected Layer - In the layer we call the FC layer, we flatten our matrix into a vector and feed it into a fully connected layer like a neural network. The feature map matrix is converted into a column vector (x1, x2, x3, ...). With the fully connected layers, we combine these features together to create a model. For classifying the input image, an activation function such as softmax or sigmoid is then applied.
[Figure: CNN layer stack — Fully Connected Layer, Dropout Layer, Flatten Layer, Dense Layer, Output Layer]
BLOCK DIAGRAM
Fig. 4: Block diagram
Flask Framework
A framework is the basis upon which software programs are built. It serves as a foundation for software developers, allowing them to create a variety of applications for certain platforms. It is a set of functions and predefined classes used to connect with the system software. It simplifies the life of a developer while giving them the ability to use certain extensions.
1. The front end of a website is the area with which the user immediately interacts. It contains everything that users see and interact with: text colours and styles, images and videos, graphs and tables, the navigation menu, buttons, and colours. HTML, CSS, and JavaScript are used for front-end development.
2. The backend of a website refers to the server side of the website. It saves and organizes data and ensures that everything on the client side of the website functions properly. It is the section of the website that you are unable to interact with or access; it is the part of the software that has no direct communication with the users. Back-end development is done with server-side languages such as Python.
Flask is used for developing web applications in Python and is implemented on top of Werkzeug. Its features include:
• Lightweight
CHAPTER-5
SOURCE CODE
CNN CODE:
import os
import warnings

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
warnings.filterwarnings('ignore')

import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.metrics import confusion_matrix, classification_report

MAIN_DIR = "brain_tumor_dataset/"
SEED = 40

subdirs = os.listdir(MAIN_DIR)[:2]

def load_images(folder):
    # Each class sub-folder gets the next integer label in turn
    imgs = []
    labels = []
    target = 0
    for i in os.listdir(folder):
        subdir = os.path.join(folder, i)
        for j in os.listdir(subdir):
            img_dir = os.path.join(subdir, j)
            try:
                img = cv2.imread(img_dir, cv2.IMREAD_GRAYSCALE)
                img = cv2.resize(img, (128, 128))
                imgs.append(img)
                labels.append(target)
            except Exception:
                continue
        target += 1
    return np.array(imgs), np.array(labels)

data, labels = load_images(MAIN_DIR)
data.shape, labels.shape

# Preview ten random samples
plt.figure(figsize=(22, 8))
for i in range(10):
    idx = np.random.randint(len(data))
    axs = plt.subplot(2, 5, i + 1)
    plt.imshow(data[idx], cmap='gray')
    plt.axis('on')
    axs.set_xticklabels([])
    axs.set_yticklabels([])
plt.subplots_adjust(wspace=None, hspace=None)

# Normalise intensities to [0, 1] and add the channel dimension
norm_data = (data / 255.0).reshape(-1, 128, 128, 1)
norm_data.shape, norm_data[0]

tf.random.set_seed(SEED)
model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=64,
                           kernel_size=3,
                           activation='relu',
                           input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=2, padding='valid'),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

history = model.fit(norm_data, labels, epochs=10)

np.random.seed(SEED)
idxs = np.random.choice(len(norm_data), 20, replace=False)  # 20 samples for evaluation
y_pred_prob = model.predict(norm_data[idxs])
y_pred = (y_pred_prob > 0.5).astype(int).ravel()            # threshold the sigmoid output
y_true = labels[idxs]
y_pred.shape, y_true.shape
y_pred

plt.figure(figsize=(8, 8))
cm = confusion_matrix(y_true, y_pred)
plt.imshow(cm)
plt.show()

print(classification_report(y_true, y_pred))
model.summary()

histdf = pd.DataFrame(history.history)
plt.figure(figsize=(12, 8))
plt.plot(histdf['accuracy'], label='Training Acc')
plt.plot(histdf['loss'], label='Loss')
plt.legend()
plt.show()
APPLICATION CODE:
import os

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
from flask import Flask, request, render_template
from werkzeug.utils import secure_filename

app = Flask(__name__)

##Load trained model
model = load_model("brain_tumor_model_CNN.h5")

def model_predict(img_path, model):
    print(img_path)
    img = image.load_img(img_path, target_size=(128, 128), color_mode='grayscale')
    x = image.img_to_array(img)
    ##Scaling
    x = x / 255.0
    x = np.expand_dims(x, axis=0)
    preds = model.predict(x)
    # Single sigmoid output: threshold at 0.5 (class 0 = no tumor)
    if preds[0][0] < 0.5:
        preds = "Great! You do not have Brain Tumor Disease."
    else:
        preds = "Sorry, You have Brain Tumor Disease, kindly contact your doctor."
    return preds

@app.route("/")
def index():
    return render_template('index.html')

@app.route("/predict", methods=['GET', 'POST'])
def upload():
    if request.method == 'POST':
        f = request.files['file']
        basepath = os.path.dirname(__file__)
        file_path = os.path.join(basepath, 'uploads', secure_filename(f.filename))
        f.save(file_path)
        ##Make predictions
        result = model_predict(file_path, model)
        return result
    return None

if __name__ == '__main__':
    app.run(debug=True)
HTML CODE:
{% block content %}
<h2>Image Classifier</h2>
<div>
    <form id="upload-file" method="post" enctype="multipart/form-data">
        <label for="imageUpload" class="upload-label">
            Choose...
        </label>
        <input type="file" name="file" id="imageUpload" accept=".png, .jpg, .jpeg">
    </form>
    <div class="img-preview">
        <div id="imagePreview">
        </div>
    </div>
    <div>
        <button type="button" class="btn btn-primary btn-lg" id="btn-predict">Predict!</button>
    </div>
</div>
<h3 id="result">
    <span></span>
</h3>
{% endblock %}
<html lang="en">
<head>
35
<meta charset="UTF-8">
<title>AI Demo</title>
<link href="https://cdn.bootcss.com/bootstrap/4.0.0/css/bootstrap.min.css"
rel="stylesheet">
<script src="https://cdn.bootcss.com/popper.js/1.12.9/umd/popper.min.js"></script>
<script src="https://cdn.bootcss.com/jquery/3.3.1/jquery.min.js"></script>
<script src="https://cdn.bootcss.com/bootstrap/4.0.0/js/bootstrap.min.js"></script>
</head>
<body>
<div class="container">
</div>
36
</nav>
<div class="container">
</div>
</body>
<footer>
</footer>
</html>
CSS CODE:
.img-preview {
    width: 256px;
    height: 256px;
    position: relative;
    margin-top: 1em;
    margin-bottom: 1em;
}

.img-preview > div {
    width: 100%;
    height: 100%;
    background-repeat: no-repeat;
    background-position: center;
}

input[type="file"] {
    display: none;
}

.upload-label {
    display: inline-block;
    padding: 12px 30px;
    background: #39D2B4;
    color: #fff;
    font-size: 1em;
    cursor: pointer;
}

.upload-label:hover {
    background: #34495E;
    color: #39D2B4;
}

.loader {
    border-radius: 50%;
    width: 50px;
    height: 50px;
    animation: spin 1s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}
CHAPTER 6
Epoch 1/10
Epoch 2/10
Epoch 3/10
7/7 [==============================] - 23s 3s/step - loss: 0.5236 - accuracy:
Epoch 4/10
Epoch 5/10
Epoch 6/10
Epoch 7/10
Epoch 8/10
Epoch 9/10
7/7 [==============================] - 21s 3s/step - loss: 0.3242 - accuracy:
Epoch 10/10
Loss: 0.2502
accuracy: 0.85 (support: 20)
Model: "sequential"
_________________________________________________________________
===============================================================
==
43
2D)
===============================================================
==
Non-trainable params: 0
_________________________________________________________________
Confusion Matrix
Fig. 6: Accuracy plot
Fig. 8: Negative prediction
CHAPTER 7
CONCLUSION
In this project, we performed the detection and identification of a brain tumor using a Convolutional Neural Network. The input MR images are read from the local device using the file path and converted into grayscale images. These images are pre-processed using an adaptive bilateral filtering technique for the elimination of the noise present inside the original image. Binary thresholding is applied to the denoised image, and Convolutional Neural Network segmentation is applied, which helps in figuring out the tumor region in the MR images. The proposed model obtained an accuracy of 84% and yields promising results with minimal errors and much less computational effort.
CHAPTER-8
FUTURE SCOPE
It is observed on experimentation that the proposed approach needs a vast training set for more accurate results. In the field of medical image processing, the gathering of medical data is a tedious job, and in a few cases the datasets might not be available. In all such cases, the proposed algorithm must be robust enough for accurate recognition of tumor regions from MR images. The proposed approach can be further improved by incorporating weakly trained algorithms that can identify abnormalities with minimal training data, and self-learning algorithms would aid in enhancing the accuracy of the system.
REFERENCES
[1]. S. Bauer et al., "A survey of MRI-based medical image analysis for brain tumor studies," Phys. Med. Biol., vol. 58, no. 13, pp. 97-129, 2013.
[2]. B. Menze et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Trans. Med. Imag., vol. 34, no. 10, pp. 1993-2024, Oct. 2015.
[3]. B. H. Menze et al., "A generative model for brain tumor segmentation in multi-modal images," in Medical Image Computing and Comput.-Assisted Intervention-MICCAI 2010. New York: Springer, 2010, pp. 151-159.
[4]. S. Bauer, L.-P. Nolte, and M. Reyes, "Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization," in Medical Image Computing and Comput.-Assisted Intervention-MICCAI 2011. New York: Springer, 2011, pp. 354-361.
[5]. C.-H. Lee et al., "Segmenting brain tumors using pseudo-conditional random fields," in Medical Image Computing and Comput.-Assisted Intervention.
[7]. R. Meier et al., "A hybrid model for multimodal brain tumor segmentation," in Proc. NCI-MICCAI BRATS, 2013, pp. 31-37.
[8]. Vinod Kumar, Jainy Sachdeva, Indra Gupta, "Classification of brain tumors using PCA-ANN," 2011 World Congress on Information and Communication Technologies.
[9]. Sergio Pereira, Adriano Pinto, Victor Alves, and Carlos A. Silva, "Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, May 2016.
[10]. Rasel Ahmmed, Md. Foisal Hossain, "Tumor Detection in Brain MRI Image Using Template based K-means and Fuzzy C-means Clustering Algorithm," 2016 International Conference on Computer Communication and Informatics (ICCCI-2016), Jan. 07-09, 2016, Coimbatore, India.
[11]. S.C. Turaga, J.F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H.S. Seung, "Convolutional networks can learn to generate affinity graphs for image segmentation," Neural Computation, 22(2):511-538, 2010.
[12]. Neethu Ouseph C, Asst. Prof. Mrs. Shruti K, "A Reliable Method for Brain Tumor Detection Using CNN Technique," Malabar Institute of Technology, India.
[14]. S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J.S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features," Nature Scientific Data, 4:170117 (2017), DOI: 10.1038/sdata.2017.117.
[15]. S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge," arXiv preprint arXiv:1811.02629 (2018).
[16]. A. Vidyarthi, N. Mittal, "CLAP: Closely link associated pixel based extraction of brain tumor in MR images," International Advance Computing Conference on Signal Processing and Communication (IACC), pp. 202-206.
[17]. International Journal of Computer Science and Communication, vol. 2, no. 2, July-December 2011, pp. 325-331.
[19]. Luxit Kapoor and Sanjeev Thakur, "A survey on brain tumor detection
[20]. Mahakud, "An adaptive filtering technique for brain tumor analysis
[21]. S.U. Aswathy, G. Glan Deva Dhas and S.S. Kumar, "A Survey on detection of brain tumor from MRI Brain images," 7th International Conference on Cloud Computing,