
CHAPTER 1

INTRODUCTION

A brain tumor is one of the most severe diseases in medical science. An effective and efficient analysis is always a key concern for the radiologist in the early phase of tumor growth. Histological grading, based on a stereotactic biopsy test, is the gold standard and the convention for determining the grade of a brain tumor. The biopsy procedure requires the neurosurgeon to drill a small hole into the skull, from which tissue is collected. There are many risk factors involved in a biopsy test, including bleeding from the tumor and brain, infection, seizures, severe migraine, stroke, coma and even death. But the main concern with the stereotactic biopsy is that it is not 100% accurate, which may result in a serious diagnostic error followed by wrong clinical management of the disease.

Since biopsy is so challenging for brain tumor patients, non-invasive imaging techniques such as Magnetic Resonance Imaging (MRI) have been extensively employed in diagnosing brain tumors. Therefore, developing systems for the detection and grading of tumors based on MRI data has become necessary. But at first sight of an imaging modality such as MRI, proper visualisation of the tumor cells and their differentiation from the nearby soft tissues is a difficult task. This may be due to low illumination in the imaging modality, the large amount of data, or the complexity and variance of tumors, such as unstructured shape, variable size and unpredictable location.

Automated defect detection in medical imaging using machine learning has become an emerging field in several medical diagnostic applications. Its application to the detection of brain tumors in MRI is crucial, as it provides information about abnormal tissue that is necessary for planning treatment. Studies in the recent literature have also reported that automatic computerised detection and diagnosis of the disease, based on medical image analysis, could be a good alternative, as it would save radiologist time and achieve a tested accuracy. Furthermore, if computer algorithms can provide robust and quantitative measurements of tumor depiction, these automated measurements will greatly aid the clinical management of brain tumors by freeing physicians from the burden of delineating tumors manually.

Machine learning based approaches such as deep ConvNets play an important role in radiology and other fields of medical science, making diagnosis much simpler than before and hence providing a feasible alternative to surgical biopsy for brain tumors. In this project, we attempt to detect and classify brain tumors and to compare the results of binary and multi-class classification using a Convolutional Neural Network (CNN) architecture.

Magnetic resonance (MR) imaging and computed tomography (CT) scans of the brain are the two most common tests used to check for the existence of a tumor and to recognise its position for further treatment decisions. These two scans are still used extensively for their convenience and their capability to yield high-definition images of pathological tissue. At present, several treatments are offered for tumors, including surgery and therapies such as radiation therapy and chemotherapy. The choice of treatment relies on many factors, such as the size, kind and grade of the tumor present in the MR image, and also on whether the cancer has reached other portions of the body.

Precise identification of the kind of brain abnormality is greatly needed for treatment operations, with the aim of diminishing diagnostic errors. Precision can be improved using computer-aided diagnosis (CAD) systems. The essential goal of computer vision is to produce a reliable output, an estimate that assists medical doctors in image understanding and lessens image reading time. These advancements increase the consistency and correctness of medical diagnosis. However, segmenting the tumor and its area in an MR image is itself a very difficult job. The occurrence of tumors at particular positions within the brain image, without distinguishing image intensities, is an additional issue that makes computerised brain tumor detection and segmentation difficult.

MOTIVATION FOR THE WORK:

A brain tumor is defined as abnormal growth of cells within the brain or central spinal canal. Some tumors can be cancerous, so they need to be detected and treated in time. The exact cause of brain tumors is not clear, and neither is an exact set of symptoms defined; thus, people may be suffering from one without realising the danger. Primary brain tumors can be either malignant (contain cancer cells) or benign (do not contain cancer cells).

SCOPE:

Our aim is to develop an automated system for classification of brain tumors. The

system can be used by neurosurgeons and healthcare specialists. The system

incorporates image processing, pattern analysis, and computer vision techniques and

is expected to improve the sensitivity, specificity, and efficiency of brain tumor

screening. The primary goal of medical imaging projects is to extract meaningful

and accurate information from these images with the least error possible. The proper

combination and parameterization of these phases enables the development of adjunct tools that can help in the early diagnosis of tumors or in monitoring their identification and location.

CHAPTER 2

LITERATURE REVIEW

Pan and Yang's survey focused on categorising and reviewing the current progress on transfer learning for classification, regression and clustering problems. They discussed the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning, sample selection bias and covariate shift, and also explored potential future issues in transfer learning research.

In "A Survey on Brain Tumor Detection Using Image Processing Techniques", Luxit Kapoor and Sanjeev survey the various techniques of medical image processing that are prominently used in discovering brain tumors from MRI images, listing the techniques in use with a brief description of each. Of all the steps involved in the process of detecting tumors, segmentation is the most significant.

S. Banerjee, S. Mitra and B. U. Shankar report the accuracy achieved by seven standard classifiers, viz. (i) Adaptive Neuro-Fuzzy Classifier (ANFC), (ii) Naive Bayes (NB), (iii) Logistic Regression (LR), (iv) Multilayer Perceptron (MLP), (v) Support Vector Machine (SVM), (vi) Classification and Regression Tree (CART), and (vii) k-nearest neighbours (k-NN). The accuracy is reported on the BRaTS 2015 dataset (a subset of the BRaTS 2017 dataset), which consists of 200 HGG and 54 LGG cases. Fifty-six three-dimensional quantitative MRI features were extracted manually from each patient's MRI and used for the classification.

Nilesh Bhaskarrao Bahadure and Arun Kumar Ray, in "Image Analysis for MRI Based Brain Tumor Detection and Feature Extraction Using Biologically Inspired BWT and SVM", segmented brain MR images into normal tissues such as white matter, grey matter and cerebrospinal fluid (background), and tumor-infected tissues. They used pre-processing to improve the signal-to-noise ratio and to eliminate the effect of unwanted noise, and a skull-stripping algorithm based on a threshold technique to improve skull-stripping performance.

In Hunnur et al., the authors proposed a method to detect brain tumors based mainly on a thresholding approach and morphological operations; it also calculates the area of the tumor and displays its stage. MRI images of the brain are used as input and converted into greyscale images. A high-pass filter is used to remove the noise present in the converted images, and a median filter is applied to remove impulse noise. Thresholding is used to extract the object from the background by selecting a threshold value. Morphological operations, dilation and erosion, are then performed. The tumor region is detected and the image is shrunk to remove the unwanted details present in it. Finally, the tumor area is calculated and the stage of the tumor patient is displayed.

A hybrid segmentation technique combining watershed and thresholding based segmentation is used by Mustaqeem et al. First the quality of the scanned images is enhanced, and then morphological operations are applied to detect the tumor along with their proposed hybrid segmentation. The proposed system is easy to execute and thus can be managed easily. Working approach: the obtained MRI images are represented as two-dimensional matrices having pixels as their elements. A Gaussian low-pass filter and averaging filters are used to remove salt-and-pepper noise from the image; each filtered pixel's value is replaced with values from its neighbourhood. A Gaussian high-pass filter is used to enhance the boundaries of the objects in the image. Threshold segmentation converts the greyscale image into a binary image, and watershed segmentation groups the pixels of an image on the basis of their intensities. Morphological operators are then applied to the binary image to separate the tumor part of the image.

Marroquin et al. presented automated 3D segmentation for brain MRI scans. Using a separate parametric model for each class, in preference to a single multiplicative class, lessens the effect of intensity variations. A brain atlas is employed to find the nonrigid transformation that maps the standard brain; this transformation is further used to segment the brain from non-brain tissue, to compute prior probabilities and to find an automatic initialisation, before finally applying the MPM-MAP algorithm to find the optimal segmentation. Major findings from the study show that the MPM-MAP algorithm is comparatively more robust than EM with respect to errors in estimating the posterior marginal. For optimal segmentation, the MPM-MAP algorithm involves only the solution of linear systems and is therefore computationally efficient. Astinaminz et al. implemented an effective automatic classification approach for brain images based on the AdaBoost machine learning algorithm. The proposed system includes three main segments: pre-processing eradicates noise in the datasets and converts the images to greyscale, and median filtering and thresholding segmentation are implemented on the pre-processed image.

CHAPTER 3

HARDWARE AND SOFTWARE REQUIREMENTS

HARDWARE REQUIREMENT:

• Processor: Intel® Core™ i3-2350M CPU @ 2.30 GHz

• Installed memory (RAM): 4.00 GB

• System Type: 64-bit Operating System

• Hard disk: 10 GB of available space or more.

• Display: Dual XGA (1024 x 768) or higher resolution monitors

• Operating system: Windows

SOFTWARE REQUIREMENT:

Python:

Python is an interpreted, high-level, general-purpose programming language created by Guido van Rossum and first released in 1991. Python's design philosophy emphasises code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects. Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming.

PIP:

It is the package management system used to install and manage software packages

written in Python.

NumPy:

NumPy is a general-purpose array-processing package. It provides a high-performance multidimensional array object and tools for working with these arrays, and is the fundamental package for scientific computing with Python. Its important features include:

• A powerful N-dimensional array object

• Sophisticated (broadcasting) functions

• Tools for integrating C/C++ and Fortran code

• Useful linear algebra, Fourier transform, and random number capabilities
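The array object, broadcasting and random-number features listed above can be illustrated with a short sketch; the array values here are invented for the example:

```python
import numpy as np

# A 2-D array (matrix) and a 1-D array (vector) with compatible shapes
a = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])   # shape (2, 3)
b = np.array([10.0, 20.0, 30.0])  # shape (3,)

# Broadcasting: b is stretched across each row of a
print(a + b)   # [[11. 22. 33.]  [14. 25. 36.]]

# Linear algebra: a (2x3) times its transpose (3x2) gives a 2x2 product
print((a @ a.T).shape)

# Random-number capabilities: a reproducible 2x2 block of "pixel" values
rng = np.random.default_rng(0)
print(rng.integers(0, 255, size=(2, 2)))
```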

Pandas:

Pandas is the most popular python library that is used for data analysis. It provides highly

optimized performance with back-end source code is purely written in C or Python. We

can analyze data in pandas with

1. Series

2. Data frames
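A minimal illustration of the two pandas structures named above; the values are invented for the example (the 98/155 counts echo our Kaggle dataset):

```python
import pandas as pd

# A Series: a labelled one-dimensional array
s = pd.Series([98, 155], index=['no_tumor', 'tumor'])
print(s['tumor'])   # 155

# A DataFrame: a labelled two-dimensional table
df = pd.DataFrame({
    'patient': ['p1', 'p2', 'p3'],
    'label':   ['tumor', 'no_tumor', 'tumor'],
})
print(df['label'].value_counts()['tumor'])   # 2
```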

Anaconda:

Anaconda is a free and open-source distribution of the Python and R programming

languages for scientific computing that aims to simplify package management and

deployment. Package versions are managed by the package management system conda.

The Anaconda distribution includes data-science packages suitable for Windows, Linux,

and macOS. Anaconda distribution comes with 1,500 packages selected from PyPI as

well as the conda package and virtual environment manager. It also includes a GUI,

Anaconda Navigator, as a graphical alternative to the command-line interface (CLI).

TensorFlow:

TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library, and is also used for machine learning applications such as neural networks. It is used for both research and production at Google.

Keras:

Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Keras contains numerous implementations of commonly used neural-network building blocks such as layers, objectives, activation functions and optimisers, and a host of tools that make working with image and text data easier, simplifying the coding necessary for deep neural networks.

CHAPTER 4

METHODOLOGY

ARTIFICIAL INTELLIGENCE:

Artificial intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems, enabling them even to mimic human behaviour. Its applications lie in the fields of computer vision, natural language processing, robotics, speech recognition, etc. Advantages of using AI include improved customer experience, accelerated speed to market, more sophisticated products, cost optimisation, enhanced employee productivity and improved operational efficiency. Machine Learning (ML) is a subset of AI which is programmed to think on its own, perform social interaction, learn new information from the provided data, and adapt as well as improve with experience. Although training time for Deep Learning (DL) methods is longer than for classical Machine Learning methods, this is compensated by the higher accuracy of the former. Also, DL being automatic, large domain knowledge is not required for obtaining the desired results, unlike in ML.

BRAIN TUMOR:

In medical science, an anomalous and uncontrollable cell growth inside the brain is recognised as a tumor. The human brain is the most sensitive part of the body. It controls muscle movements and the interpretation of sensory information like sight, sound, touch, taste, pain, etc.

The human brain consists of Grey Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF), and the presence of a tumor is identified on the basis of factors like the quantification of tissues, the location of abnormalities, malfunctions and pathologies, and diagnostic radiology. A tumor in the brain can affect such sensory information and muscle movements, or even result in more dangerous situations, including death. Depending upon the place of origin, tumors can be categorised into primary and secondary tumors. If the tumor originates inside the skull, it is known as a primary brain tumor; if the tumor's initiation place is somewhere else in the body and it moves towards the brain, it is called a secondary tumor.

Brain tumors can be of the following types on the basis of the axial plane: glioblastoma, sarcoma and metastatic bronchogenic carcinoma. While some tumors such as meningioma can be easily segmented, others like gliomas and glioblastomas are much more difficult to localise. The World Health Organisation (WHO) categorises gliomas into HGG (high-grade glioma/glioblastoma, stage IV, malignant) and LGG (low-grade glioma, stages II and III, benign). Although most LGG tumors have a slower growth rate compared to HGG and are responsive to treatment, there is a subgroup of LGG tumors which, if not diagnosed early and left untreated, could lead to GBM. In both cases correct treatment planning (including surgery, radiotherapy and chemotherapy, separately or in combination) becomes necessary, considering that an early and proper detection of the tumor grade can lead to a good prognosis. Survival time for a GBM (Glioblastoma Multiforme) or HGG patient is very low, in the range of 12 to 15 months.

Magnetic Resonance Imaging (MRI) has become the standard non-invasive technique for brain tumor diagnosis over the last few decades, due to its improved soft-tissue contrast and because, unlike methods such as CT (Computed Tomography), X-ray and PET (Positron Emission Tomography) scans, it does not use harmful radiation. The MRI image is basically a matrix of pixels having characteristic features.

Since glioblastomas are infiltrative tumors, their borders are often fuzzy and hard to distinguish from healthy tissue. As a solution, more than one MRI modality is often employed, e.g. T1 (spin-lattice relaxation), T1-contrasted (T1C), T2 (spin-spin relaxation), proton density (PD) contrast imaging, diffusion MRI (dMRI), and fluid attenuation inversion recovery (FLAIR) pulse sequences. T1-weighted images with intravenous contrast highlight the most vascular regions of the tumor, called the 'enhancing tumor' (ET), along with the 'tumor core' (TC) that does not involve peritumoral edema (T1C gives much better accuracy than T1). T2-weighted (T2W) and T2W Fluid Attenuation Inversion Recovery (FLAIR) images are used to evaluate the tumor and the peritumoral edema, together defined as the 'whole tumor' (WT). Gliomas and glioblastomas are difficult to distinguish in T1, T1C, T2 and PD; they are better identified in FLAIR modalities.

Here, we use a Convolutional Neural Network (CNN) for the detection and classification of the brain tumor.

BASIC OPERATION OF NEURAL NETWORKS:

Neural Networks (NNs) form the base of deep learning, a subfield of machine learning in which the algorithms are inspired by the structure of the human brain. NNs take in data, train themselves to recognise the patterns in this data, and then predict the outputs for a new set of similar data. NNs are made up of layers of neurons, the core processing units of the network. First we have the input layer, which receives the input; the output layer predicts our final output. In between lie the hidden layers, which perform most of the computations required by our network.

Our brain tumor images are composed of 128 by 128 pixels, which make up 16,384 pixels. Each pixel is fed as input to each neuron of the first layer. Neurons of one layer are connected to neurons of the next layer through channels, and each of these channels is assigned a numerical value known as a 'weight'. The inputs are multiplied by the corresponding weights and their sum is sent as input to the neurons in the hidden layer. Each of these neurons is associated with a numerical value called the bias, which is added to the input sum. This value is then passed through a threshold function called the 'activation function', whose result determines whether the particular neuron will get activated or not. An activated neuron transmits data to the neurons of the next layer over the channels. In this manner the data is propagated through the network; this is called 'forward propagation'. In the output layer, the neuron with the highest value fires and determines the output; the values are basically probabilities.

The predicted output is compared against the actual output to compute the error in prediction. The magnitude of the error gives an indication of the direction and magnitude of the change needed to reduce the error. This information is then transferred backwards through our network; this is known as 'back propagation'. Based on this information the weights are adjusted. This cycle of forward propagation and back propagation is performed iteratively with multiple inputs, and continues until the weights are assigned such that the network can predict the type of tumor correctly in most cases. This brings our training process to an end. NNs may take hours or even months to train, but time is a reasonable trade-off when compared to their scope. Several experiments show that after pre-processing MRI images, a neural network classification algorithm, more specifically a CNN (Convolutional Neural Network), performed best compared to Support Vector Machine (SVM) and Random Forest.

Fig. 1: A multi-layer perceptron model of a neural network (inputs from medical professionals or users flow through weighted nodes; results are shown on IoT-based devices or a web-based interface).
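The forward-propagation steps described above (weighted sum over the channels, bias, activation, highest-value output neuron) can be sketched with plain NumPy. The layer sizes and random weights here are toy values for illustration, not those of our trained network:

```python
import numpy as np

def sigmoid(z):
    # Activation function squashing any value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

x  = rng.random(4)       # toy input (e.g. four pixel values)
W1 = rng.random((3, 4))  # channel weights into 3 hidden neurons
b1 = rng.random(3)       # one bias per hidden neuron
W2 = rng.random((2, 3))  # weights into 2 output neurons
b2 = rng.random(2)

# Hidden layer: weighted sum of inputs, plus bias, through the activation
h = sigmoid(W1 @ x + b1)
# Output layer: same pattern; two probability-like values
y = sigmoid(W2 @ h + b2)

# The output neuron with the highest value "fires" and gives the class
prediction = int(np.argmax(y))
print(prediction)
```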

Image Acquisition:

Kaggle dataset:

Images can be in the form of .csv (comma separated values) or .dat (data) files in grayscale, RGB or HSV, or simply in a .zip file, as was the case for our online Kaggle dataset. It contained 98 healthy MRI images and 155 tumor-infected MRI images.

BRaTS MICCAI dataset:

The Multimodal Brain Tumor Segmentation (BRaTS) MICCAI has always been focusing

on the evaluation of state-of-the-art methods for the segmentation of brain tumors in

magnetic resonance imaging (MRI) scans. Ample multi-institutional routine clinically-

acquired multimodal MRI scans of glioblastoma (GBM) and lower grade glioma (LGG),

with pathologically confirmed diagnosis and available OS, was provided as the training,

validation and testing data for BRaTS 2015 challenge. All BRaTS multimodal scans are

available as NIfTI files (.nii.gz) and these multimodal scans describe a) native (T1) and

b) post-contrast T1- weighted (T1c), c) T2-weighted (T2), and d) T2 Fluid Attenuated

Inversion Recovery (FLAIR) volumes, and were acquired with different clinical

protocols and various scanners from multiple institutions. They described a mixture of

pre- and post-operative scans and their ground truth labels have been annotated by the

fusion of segmentation results from algorithms. All the imaging datasets have been

segmented manually, by one to four raters, following the same annotation protocol, and

their annotations were approved by experienced neuro-radiologists. Annotations

comprise the whole tumor, the tumor core (including cystic areas), and the C- enhancing

tumor core.

The dataset contains two folders, for training and testing. The 'train' folder contains two sub-folders of HGG and LGG cases: 220 HGG patients and 27 LGG patients. The 'test' folder contains brain images of 110 patients with HGG and LGG cases combined. There are five different MRI image modalities for each patient: T1, T2, T1C, FLAIR, and OT (ground truth of the tumor segmentation). All these image files are stored in .mha format, are of size 240x240 with a resolution of 1 mm^3, and are skull-stripped. In the ground-truth images, each voxel is labelled zero or non-zero, corresponding to a normal pixel or part of the tumor cells, respectively.

Fig. 2: Online Kaggle dataset (above) and BRaTS MICCAI dataset (below).
Data Augmentation:

Data augmentation consists of grey scaling (RGB/BW to ranges of grey), reflection (vertical/horizontal flips), Gaussian blur (which reduces image noise), histogram equalisation (which increases global contrast), rotation (which may not preserve image size), translation (moving the image along the x or y axis), and linear transformations such as random rotation (0-10 degrees), horizontal and vertical shifts, and horizontal and vertical flips. Data augmentation is done to teach the network the desired invariance and robustness properties when only a few training samples are available.
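Several of the augmentations listed above can be sketched with NumPy alone (reflection, rotation and translation); the small integer array below stands in for one greyscale MRI slice:

```python
import numpy as np

# Stand-in for one greyscale image (4x4 "pixels")
img = np.arange(16, dtype=np.uint8).reshape(4, 4)

h_flip = np.fliplr(img)  # horizontal reflection
v_flip = np.flipud(img)  # vertical reflection
rot    = np.rot90(img)   # 90-degree rotation (size preserved for a square)

# Translation by one pixel along x: shift right, zero-fill the exposed column
shift = np.zeros_like(img)
shift[:, 1:] = img[:, :-1]

print(h_flip[0])  # first row of the original, reversed
```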

Image Pre-Processing:

In the first step of pre-processing, the memory space of the image is reduced by scaling the grey level of the pixels to the range 0-255.
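A sketch of this first pre-processing step, assuming the slice arrives with an arbitrary intensity range (the raw values below are invented for the example):

```python
import numpy as np

# Toy MRI intensities with an arbitrary dynamic range
raw = np.array([[120.5, 980.0],
                [  0.0, 490.0]])

# Min-max scale the grey levels into 0-255 and store as 8-bit integers,
# which reduces the memory needed per pixel
lo, hi = raw.min(), raw.max()
scaled = ((raw - lo) / (hi - lo) * 255).astype(np.uint8)

print(scaled.min(), scaled.max())  # 0 255
```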

Activation Function:

The sigmoid function ranges from 0 to 1 and is used to predict a probability as the output in binary classification, while the softmax function is used for multi-class classification. The tanh function ranges from -1 to 1 and is considered better than sigmoid for binary classification with a feed-forward algorithm. ReLU (Rectified Linear Unit) ranges from 0 to infinity, and Leaky ReLU (an improved version of ReLU) ranges from -infinity to +infinity. ReLU is a non-linear operation whose output is f(x) = max(0, x). ReLU's purpose is to introduce non-linearity into our ConvNet, since the real-world data we want our ConvNet to learn consists mostly of non-negative values. Other nonlinear functions such as tanh or sigmoid can also be used instead of ReLU, but most data scientists use ReLU since, performance-wise, it is better than the other two. The stride is the number of pixels by which the filter moves over the input matrix at a time. Sometimes the filter does not fit the input image perfectly; we then have two options: either pad the picture with zeros (zero-padding) so that it fits, or drop the part of the image where the filter does not fit. The latter is called valid padding, which keeps only the valid part of the image.
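The activation functions compared above can be written out directly; a minimal NumPy sketch:

```python
import numpy as np

def sigmoid(x):                  # range (0, 1): binary-class probability
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):                     # range (-1, 1)
    return np.tanh(x)

def relu(x):                     # f(x) = max(0, x), range [0, +inf)
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):   # small slope for x < 0, range (-inf, +inf)
    return np.where(x > 0, x, alpha * x)

def softmax(x):                  # multi-class: outputs sum to 1
    e = np.exp(x - np.max(x))    # subtract max for numerical stability
    return e / e.sum()

z = np.array([-2.0, 0.0, 3.0])
print(relu(z))                       # [0. 0. 3.]
print(round(softmax(z).sum(), 6))    # 1.0
```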

Convolutional Neural Network:

Classifier models can basically be divided into two categories: generative models based on hand-crafted features, and discriminative models based on learning, such as the support vector machine (SVM), Random Forest (RF) and Convolutional Neural Network (CNN). One difficulty with methods based on hand-crafted features is that they often require the computation of a large number of features in order to be accurate when used with many traditional machine learning techniques, which can make them slow to compute and expensive memory-wise. More efficient techniques employ lower numbers of features, using dimensionality reduction like PCA (Principal Component Analysis) or feature selection methods, but the reduction in the number of features often comes at the cost of reduced accuracy. Brain tumor segmentation employs discriminative models because, unlike generative modelling approaches, these approaches exploit little prior knowledge of the brain's anatomy and instead rely mostly on the extraction of a large number of low-level image features, directly modelling the relationship between these features and the label of a given voxel. In our project, we have used the Convolutional Neural Network architecture for brain tumor detection and classification. A convolutional neural network processes closely knitted data and is used for image classification, image processing, face detection, etc. It is a specialised 3D structure with specialised NNs analysing the RGB layers of an image. Unlike others, it analyses one image at a time, identifies and extracts important features, and uses them to classify the image. Convolutional Neural Networks (ConvNets) automatically learn mid-level and high-level representations or abstractions from the input training data. The main building block used to construct a CNN architecture is the convolutional layer. A CNN also consists of several other layers, some of which are described below:

• Input Layer - It takes in the raw pixel values of the input image.

• Convolutional Layer - It is the first layer to extract features from an input image. Convolution preserves the relationship between pixels by learning image features using small squares of input data. It is a mathematical operation that takes two inputs, an image matrix and a filter (or kernel), to generate a feature map. Convolving an image with different filters can perform operations such as edge detection, blurring and sharpening.

• Activation Layer - It produces a single output based on the weighted sum of the inputs.

• Pooling Layer - Pooling layers reduce the number of parameters when the images are too large. Spatial pooling (also called subsampling or down-sampling) reduces the dimensionality of each map but retains the important information. Spatial pooling can be of different types: max pooling (taking the largest element in the feature map), average pooling (taking the average of the elements in the feature map) and sum pooling (taking the sum of all elements in the feature map).

• Fully Connected Layer - In the FC layer, we flatten our matrix into a vector and feed it into a fully connected layer like a neural network: the feature map matrix is converted into a column vector (x1, x2, x3, ...). With the fully connected layers, we combine these features together to create a model for classifying the input image into various classes based on the training set.

• Dropout Layer - It prevents nodes in a network from co-adapting to each other.

• Batch Normalisation Layer
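Stacked in order, the layers above give a small binary classifier for our 128x128 greyscale slices. This is a minimal sketch; the filter counts and layer sizes are illustrative choices, not our tuned configuration:

```python
from keras.models import Sequential
from keras.layers import (Input, Conv2D, MaxPooling2D, BatchNormalization,
                          Dropout, Flatten, Dense)

model = Sequential([
    Input(shape=(128, 128, 1)),             # raw pixel values, one grey channel
    Conv2D(32, (3, 3), activation='relu'),  # feature extraction + ReLU activation
    BatchNormalization(),                   # batch normalisation layer
    MaxPooling2D((2, 2)),                   # spatial down-sampling (max pooling)
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Flatten(),                              # feature maps -> column vector
    Dense(64, activation='relu'),           # fully connected layer
    Dropout(0.5),                           # prevents co-adaptation of nodes
    Dense(1, activation='sigmoid'),         # binary output: tumor / no tumor
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.summary()
```

A softmax output layer with one unit per class would replace the final sigmoid for the multi-class variant.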


Fig. 3: A diagram of a model trained from scratch using the CNN architecture (input, convolutional, activation, max pooling, flatten, dense, dropout, fully connected and output layers).

BLOCK DIAGRAM
Fig. 4: Block diagram

Flask Framework:

A framework is the basis upon which software programs are built. It serves as a foundation for software developers, allowing them to create a variety of applications for certain platforms. It is a set of functions and predefined classes used to connect with the system software and handle inputs and outputs. It simplifies the life of a developer while giving them the ability to use certain extensions, and it makes online applications scalable and maintainable.

Frontend Development vs Backend Development

1. The front end of a website is the area with which the user immediately interacts. It contains everything that users see and interact with: text colours and styles, images and videos, graphs and tables, the navigation menu, buttons, and colours. HTML, CSS, and JavaScript are used in developing the front end.

2. The back end of a website refers to the server side of the website. It saves and organises data and ensures that everything on the client side of the website functions properly. It is the section of the website that the user is unable to interact with or access, the part of the software that has no direct communication with users. Back-end development is done using languages such as Java, Python, C#, and JavaScript.

Flask is used for developing web applications in Python, and is implemented on Werkzeug and Jinja2. Advantages of using the Flask framework are:

• There is a built-in development server and a fast debugger provided.

• Lightweight

• Secure cookies are supported.

• Templating using Jinja2.

• Request dispatching using REST.

• Support for unit testing is built-in.
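The points above can be seen in a minimal Flask app (the route and message here are placeholders for illustration, not part of this project's application):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    # A route maps a URL path to a Python function;
    # the return value becomes the HTTP response body.
    return "Hello from Flask"

if __name__ == "__main__":
    # The built-in development server with the debugger enabled.
    app.run(port=5000, debug=True)
```

The project's actual application code in Chapter 5 follows this same pattern, with an upload route that feeds the saved image to the trained model.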

CHAPTER-5

SOURCE CODE

PYTHON SOURCE CODE

CNN CODE:

import pandas as pd
import numpy as np
import cv2
import os
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf

os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

from keras.models import Sequential
from keras.layers import Dense, Flatten, Conv2D, MaxPooling2D
from keras.layers import Input, Activation, LeakyReLU, Dropout
from keras.losses import BinaryCrossentropy
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report, confusion_matrix
import warnings

warnings.filterwarnings('ignore')

MAIN_DIR = "brain_tumor_dataset/"
SEED = 40

os.listdir(MAIN_DIR)

subdirs = os.listdir(MAIN_DIR)[:2]
for subdir in subdirs:
    print(f"{subdir} contains {len(os.listdir(MAIN_DIR + '/' + subdir))} images")

def load_images(folder):
    imgs = []
    target = 0
    labels = []
    for i in os.listdir(folder):
        subdir = os.path.join(folder, i)
        for j in os.listdir(subdir):
            img_dir = os.path.join(subdir, j)
            try:
                img = cv2.imread(img_dir)
                img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                img = cv2.resize(img, (128, 128))
                imgs.append(img)
                labels.append(target)
            except:
                continue
        target += 1
    imgs = np.array(imgs)
    labels = np.array(labels)
    return imgs, labels

data, labels = load_images(MAIN_DIR)
data.shape, labels.shape

def plot_images(start, end):
    plt.figure(figsize=(22, 8))
    for i in range(10):
        axs = plt.subplot(2, 5, i + 1)
        idx = np.random.randint(start, end)
        plt.imshow(data[idx], cmap='gray')
        plt.axis('on')
        axs.set_xticklabels([])
        axs.set_yticklabels([])
    plt.subplots_adjust(wspace=None, hspace=None)

plot_images(0, 97)    # 0 to 97 for images with no tumor
plot_images(98, 252)  # 98 to 252 for images with tumor

norm_data = data / 255.
norm_data = np.expand_dims(norm_data, axis=3)
norm_data.shape, norm_data[0]

tf.random.set_seed(SEED)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(filters=64,
                           kernel_size=3,
                           activation='relu',
                           input_shape=(128, 128, 1)),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(pool_size=2, padding='valid'),
    tf.keras.layers.Conv2D(32, 3, activation='relu'),
    tf.keras.layers.Conv2D(16, 3, activation='relu'),
    tf.keras.layers.MaxPool2D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

# Compile the model
model.compile(loss=tf.keras.losses.BinaryCrossentropy(),
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Fit the model
history = model.fit(norm_data, labels, epochs=10, validation_split=0.20)

result = model.evaluate(norm_data, labels, verbose=0)
print(f"Accuracy on Evaluation: {result[1]*100:.2f}%\nLoss: {result[0]:.4f}")

# Generating 20 random numbers to index from data
np.random.seed(SEED)
idxs = np.random.randint(0, 252, 20)

y_pred_prob = model.predict(norm_data[idxs])
y_pred = np.array([1 if prob > 0.5 else 0 for prob in y_pred_prob])
y_true = labels[idxs]
y_pred.shape, y_true.shape
y_pred

plt.figure(figsize=(8, 8))
cm = confusion_matrix(y_true, y_pred)
sns.heatmap(cm, cmap="Blues", annot=True, fmt=".2g", cbar=False)
plt.show()

print(classification_report(y_true, y_pred))

model.summary()

histdf = pd.DataFrame(history.history)
plt.figure(figsize=(12, 8))
plt.plot(histdf['accuracy'], label='Training Acc')
plt.plot(histdf['val_accuracy'], label='Validation Acc')
plt.plot(histdf['loss'], label='Loss')
plt.legend()
plt.show()

APPLICATION CODE:

import numpy as np
import os
import tensorflow as tf
from tensorflow.keras.applications.vgg19 import preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import load_model
from flask import Flask, request, render_template
from werkzeug.utils import secure_filename

# Define the Flask app
app = Flask(__name__)

# Load the trained model
model = load_model("brain_tumor_model_CNN.h5")

def model_predict(img_path, model):
    print(img_path)
    img = image.load_img(img_path, target_size=(64, 64))
    # Preprocess the image
    x = image.img_to_array(img)
    # Scale pixel values to [0, 1]
    x = x / 255.0
    x = np.expand_dims(x, axis=0)
    preds = model.predict(x)
    preds = np.argmax(preds)
    if preds == 0:
        preds = "Great!!!, You don't have Brain Tumor Disease."
    else:
        preds = "Sorry, You have Brain Tumor Disease, kindly contact your doctor."
    return preds

@app.route("/")
def index():
    return render_template('index.html')

@app.route('/predict', methods=['GET', 'POST'])
def upload():
    if request.method == 'POST':
        # Get the file from the POST request
        f = request.files['file']
        # Save the file to './uploads'
        basepath = os.path.dirname(__file__)
        file_path = os.path.join(basepath, 'uploads', secure_filename(f.filename))
        f.save(file_path)
        # Make the prediction
        preds = model_predict(file_path, model)
        result = preds
        return result
    return None

if __name__ == '__main__':
    app.run(port=5001, debug=True)

HTML CODE:

{% extends "base.html" %} {% block content %}

<h2>Image Classifier</h2>

<div>

<form id="upload-file" method="post" enctype="multipart/form-data">

<label for="imageUpload" class="upload-label">

Choose...

</label>

<input type="file" name="file" id="imageUpload" accept=".png, .jpg, .jpeg">

</form>

<div class="image-section" style="display:none;">

<div class="img-preview">

<div id="imagePreview">

</div>

</div>

<div>

<button type="button" class="btn btn-primary btn-lg"

id="btn-predict">Predict!</button>

</div>

</div>

<div class="loader" style="display:none;"></div>

<h3 id="result">

<span></span>

</h3>

</div>

{% endblock %}

<html lang="en">

<head>

<meta charset="UTF-8">

<meta name="viewport" content="width=device-width, initial-scale=1.0">

<meta http-equiv="X-UA-Compatible" content="ie=edge">

<title>AI Demo</title>

<link href="https://cdn.bootcss.com/bootstrap/4.0.0/css/bootstrap.min.css"

rel="stylesheet">

<script src="https://cdn.bootcss.com/popper.js/1.12.9/umd/popper.min.js"></script>

<script src="https://cdn.bootcss.com/jquery/3.3.1/jquery.min.js"></script>

<script src="https://cdn.bootcss.com/bootstrap/4.0.0/js/bootstrap.min.js"></script>

<link href="{{ url_for('static', filename='css/main.css') }}" rel="stylesheet">

</head>

<body>

<nav class="navbar navbar-dark bg-dark">

<div class="container">

<a class="navbar-brand" href="#">AI Demo</a>

<button class="btn btn-outline-secondary my-2 my-sm-0" type="submit">Help</button>

</div>

</nav>

<div class="container">

<div id="content" style="margin-top:2em">{% block content %}{% endblock %}</div>

</div>

</body>

<footer>

<script src="{{ url_for('static', filename='js/main.js') }}" type="text/javascript"></script>

</footer>

</html>

CSS CODE:

.img-preview {
    width: 256px;
    height: 256px;
    position: relative;
    border: 5px solid #F8F8F8;
    box-shadow: 0px 2px 4px 0px rgba(0, 0, 0, 0.1);
    margin-top: 1em;
    margin-bottom: 1em;
}

.img-preview > div {
    width: 100%;
    height: 100%;
    background-size: 256px 256px;
    background-repeat: no-repeat;
    background-position: center;
}

input[type="file"] {
    display: none;
}

.upload-label {
    display: inline-block;
    padding: 12px 30px;
    background: #39D2B4;
    color: #fff;
    font-size: 1em;
    transition: all .4s;
    cursor: pointer;
}

.upload-label:hover {
    background: #34495E;
    color: #39D2B4;
}

.loader {
    border: 8px solid #f3f3f3; /* Light grey */
    border-top: 8px solid #3498db; /* Blue */
    border-radius: 50%;
    width: 50px;
    height: 50px;
    animation: spin 1s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

CHAPTER 6

RESULTS AND DISCUSSIONS

Epoch 1/10
7/7 [==============================] - 27s 3s/step - loss: 0.6445 - accuracy: 0.6238 - val_loss: 0.2283 - val_accuracy: 1.0000
Epoch 2/10
7/7 [==============================] - 20s 3s/step - loss: 0.6191 - accuracy: 0.6485 - val_loss: 0.5810 - val_accuracy: 0.7059
Epoch 3/10
7/7 [==============================] - 23s 3s/step - loss: 0.5236 - accuracy: 0.7475 - val_loss: 0.5613 - val_accuracy: 0.7255
Epoch 4/10
7/7 [==============================] - 21s 3s/step - loss: 0.5105 - accuracy: 0.7327 - val_loss: 0.4832 - val_accuracy: 0.8235
Epoch 5/10
7/7 [==============================] - 22s 3s/step - loss: 0.4711 - accuracy: 0.7871 - val_loss: 0.7456 - val_accuracy: 0.6078
Epoch 6/10
7/7 [==============================] - 22s 3s/step - loss: 0.4688 - accuracy: 0.7772 - val_loss: 0.4794 - val_accuracy: 0.8235
Epoch 7/10
7/7 [==============================] - 23s 3s/step - loss: 0.4010 - accuracy: 0.8069 - val_loss: 0.5129 - val_accuracy: 0.7843
Epoch 8/10
7/7 [==============================] - 21s 3s/step - loss: 0.3686 - accuracy: 0.8317 - val_loss: 0.4498 - val_accuracy: 0.8627
Epoch 9/10
7/7 [==============================] - 21s 3s/step - loss: 0.3242 - accuracy: 0.8713 - val_loss: 0.4708 - val_accuracy: 0.8431
Epoch 10/10
7/7 [==============================] - 21s 3s/step - loss: 0.2497 - accuracy: 0.9059 - val_loss: 0.2540 - val_accuracy: 0.9216

Accuracy on Evaluation: 90.51%
Loss: 0.2502

              precision    recall  f1-score   support

           0       0.89      0.80      0.84        10
           1       0.82      0.90      0.86        10

    accuracy                           0.85        20
   macro avg       0.85      0.85      0.85        20
weighted avg       0.85      0.85      0.85        20
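These figures can be cross-checked by hand. The confusion-matrix counts below are inferred from the printed precision/recall values (they are not read directly from Fig. 5, so treat this as an illustrative consistency check, not reported data):

```python
# Counts consistent with the report: 10 no-tumor and 10 tumor samples.
cm = [[8, 2],   # class 0: 8 classified correctly, 2 predicted as class 1
      [1, 9]]   # class 1: 1 predicted as class 0, 9 classified correctly

tn, fp = cm[0]
fn, tp = cm[1]

accuracy = (tn + tp) / (tn + fp + fn + tp)                     # 17/20 = 0.85
precision_1 = tp / (tp + fp)                                   # 9/11 ~ 0.82
recall_1 = tp / (tp + fn)                                      # 9/10 = 0.90
f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)   # ~ 0.86
```

The derived accuracy, precision, recall, and F1 match the class-1 row of the report above.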

Model: "sequential"
_________________________________________________________________
Layer (type)                   Output Shape              Param #
=================================================================
conv2d (Conv2D)                (None, 126, 126, 64)      640
conv2d_1 (Conv2D)              (None, 124, 124, 32)      18464
max_pooling2d (MaxPooling2D)   (None, 62, 62, 32)        0
conv2d_2 (Conv2D)              (None, 60, 60, 32)        9248
conv2d_3 (Conv2D)              (None, 58, 58, 16)        4624
max_pooling2d_1 (MaxPooling2D) (None, 29, 29, 16)        0
flatten (Flatten)              (None, 13456)             0
dense (Dense)                  (None, 1)                 13457
=================================================================
Total params: 46,433
Trainable params: 46,433
Non-trainable params: 0
_________________________________________________________________

Process finished with exit code 0

Confusion Matrix

Fig:5 Confusion matrix

Accuracy / Loss plot:

Fig:6 Accuracy plot

Fig:7 Positive prediction

Fig:8 Negative prediction
CHAPTER 7

CONCLUSION

We proposed a computerized method for the segmentation and identification of a brain
tumor using a Convolutional Neural Network. The input MR images are read from the
local device using the file path and converted into grayscale images. These images are
pre-processed with an adaptive bilateral filtering technique to eliminate the noise
present in the original image. Binary thresholding is applied to the denoised image,
and Convolutional Neural Network segmentation is then applied, which helps in
figuring out the tumor region in the MR images. The proposed model obtained an
accuracy of 84% and yields promising results with low error and much less
computational time.

CHAPTER-8

FUTURE SCOPE

It is observed on examination that the proposed approach needs a vast training set for
more accurate results; in the field of medical image processing, gathering medical
data is a tedious job, and in a few cases the datasets might not be available. In all such
cases, the proposed algorithm must be robust enough for accurate recognition of tumor
regions from MR images. The proposed approach can be further improved by
incorporating weakly supervised algorithms that can identify abnormalities with
minimal training data; self-learning algorithms would also aid in enhancing the
accuracy of the algorithm and in reducing the computational time.

REFERENCES

[1]. S. Bauer et al., "A survey of MRI-based medical image analysis for brain tumor studies," Phys. Med. Biol., vol. 58, no. 13, pp. 97–129, 2013.

[2]. B. Menze et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Trans. Med. Imag., vol. 34, no. 10, pp. 1993–2024, Oct. 2015.

[3]. B. H. Menze et al., "A generative model for brain tumor segmentation in multimodal images," in Medical Image Computing and Comput.-Assisted Intervention (MICCAI 2010). New York: Springer, 2010, pp. 151–159.

[4]. S. Bauer, L.-P. Nolte, and M. Reyes, "Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization," in Medical Image Computing and Comput.-Assisted Intervention (MICCAI 2011). New York: Springer, 2011, pp. 354–361.

[5]. C.-H. Lee et al., "Segmenting brain tumors using pseudo-conditional random fields," in Medical Image Computing and Comput.-Assisted Intervention (MICCAI 2008). New York: Springer, 2008, pp. 359–366.

[7]. R. Meier et al., "A hybrid model for multimodal brain tumor segmentation," in Proc. NCI-MICCAI BRATS, 2013, pp. 31–37.

[8]. Vinod Kumar, Jainy Sachdeva, and Indra Gupta, "Classification of brain tumors using PCA-ANN," 2011 World Congress on Information and Communication Technologies.

[9]. Sergio Pereira, Adriano Pinto, Victor Alves, and Carlos A. Silva, "Brain tumor segmentation using convolutional neural networks in MRI images," IEEE Transactions on Medical Imaging, vol. 35, no. 5, May 2016.

[10]. Rasel Ahmmed and Md. Foisal Hossain, "Tumor detection in brain MRI image using template based K-means and fuzzy C-means clustering algorithm," 2016 International Conference on Computer Communication and Informatics (ICCCI 2016), Jan. 7–9, 2016, Coimbatore, India.

[11]. S. C. Turaga, J. F. Murray, V. Jain, F. Roth, M. Helmstaedter, K. Briggman, W. Denk, and H. S. Seung, "Convolutional networks can learn to generate affinity graphs for image segmentation," Neural Computation, 22(2):511–538, 2010.

[12]. Neethu Ouseph C and Shruti K, "A reliable method for brain tumor detection using CNN technique," Malabar Institute of Technology, India.

[13]. B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, et al., "The multimodal brain tumor image segmentation benchmark (BRATS)," IEEE Transactions on Medical Imaging, 34(10), 1993–2024, 2015. DOI: 10.1109/TMI.2014.2377694.

[14]. S. Bakas, H. Akbari, A. Sotiras, M. Bilello, M. Rozycki, J. S. Kirby, et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features," Nature Scientific Data, 4:170117, 2017. DOI: 10.1038/sdata.2017.117.

[15]. S. Bakas, M. Reyes, A. Jakab, S. Bauer, M. Rempfler, A. Crimi, et al., "Identifying the best machine learning algorithms for brain tumor segmentation, progression assessment, and overall survival prediction in the BRATS challenge," arXiv preprint arXiv:1811.02629, 2018.

[16]. A. Vidyarthi and N. Mittal, "CLAP: Closely link associated pixel based extraction of brain tumor in MR images," International Advance Computing Conference on Signal Processing and Communication (IACC), pp. 202–206.

[17]. International Journal of Computer Science and Communication, vol. 2, no. 2, July–December 2011, pp. 325–331.

[18]. K. Ramya and L. K. Joshila Grace, "Brain tumor detection based on watershed transformation," International Conference on Communication and Signal Processing (ICCSP), 2016.

[19]. Luxit Kapoor and Sanjeev Thakur, "A survey on brain tumor detection using image processing techniques," 7th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2017.

[20]. Minu Samantaray, Millee Panigrahi, K. C. Patra, Avipsa S. Panda, and Rina Mahakud, "An adaptive filtering technique for brain tumor analysis and detection," 10th International Conference on Intelligent Systems and Control (ISCO), 2016.

[21]. S. U. Aswathy, G. Glan Deva Dhas, and S. S. Kumar, "A survey on detection of brain tumor from MRI brain images," 7th International Conference on Cloud Computing, Data Science & Engineering (Confluence), 2017.

[22]. S. Banerjee, S. Mitra, and B. U. Shankar, "Synergetic neuro-fuzzy feature selection and classification of brain tumors," in 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2017, pp. 1–6.
