
Kingdom of Saudi Arabia

Ministry of Higher Education


Imam Abdulrahman Bin Faisal University
College of Computer Sciences & Information Technology

Deep Learning Techniques for Brain Tumor Classification

A project submitted
in partial fulfillment of the requirements for the degree of
Bachelor of Science in Computer Science

by
Aziza Mudrek Alsaiary (2200005607)
Ghadah Ahmed Hasen (2200001631)
Hedayah Ahmed Al Rumaih (2200004700)
Jumanah Abdullah AlGhamdi (2200002572)
Zahra Abdullah Almusajin (2200001773)

Supervised by
Garsah Farhan Alqarni

Committee Members
Farmanullah Zaman Mohammad Jan
Amal Hadi Saeed Alhajri
12/2023
DECLARATION

We hereby declare that this project report entitled “Deep Learning Techniques for Brain
Tumor Classification” is based on our original work except for citations and quotations, which
have been duly acknowledged. We also declare that it has not been previously or concurrently
submitted for any other degree or award at any university or any other institution. This project
work is submitted in the partial fulfillment of the requirements for the degree of “Bachelor of
Science in Computer Science” at Computer Science Department, College of Computer Sciences
and Information Technology, Imam Abdulrahman Bin Faisal University.

Project Team:
Group Number: 9F2

Team Member’s Name Signature Date

Jumanah Alghamdi Jumanah

Aziza Alsairy Aziza

Ghadah Hasen Ghadah

Hedayah Al Rumaih Hedayah

Zahra Almusajin Zahra

Project Supervisor:
Name Signature Date

Dr. Garsah Farhan Alqarni Garsah


Acknowledgement

We are extremely thankful to the Creator, Allah SWT, who gave us the knowledge, power, and
motivation to produce this project. We are also sincerely grateful to our supervisor, Dr. Garsah
Alqarni, for her constant support and guidance throughout our work. Dr. Alqarni directed us
along the right path, shared her valuable observations, and made sure we always showed our
creativity, all while encouraging and inspiring us.
We would like to express our gratitude to all team members for the hard work and constant
effort they have put into producing this project, continuously contributing to each other’s work
and lending a helping hand. Finally, we would like to thank the College of Computer Sciences and
Information Technology at Imam Abdulrahman Bin Faisal University for extending our
knowledge and providing us with the necessary information to complete this project.
ABSTRACT

A brain tumor can be malignant (cancerous) or benign (non-cancerous), and it is
defined by an abnormal aggregation of cells inside the rigid confines of the skull. Detection
and classification of brain tumors are crucial for proper therapeutic interventions. Many
traditional Machine Learning (ML) and Deep Learning (DL) approaches have therefore been
used in earlier studies for the correct diagnosis of brain tumor types. This research
project will use a DL model to classify brain tumor images into the Glioma,
Meningioma, or Pituitary classes. The DL model will be assessed in terms of
accuracy to evaluate its performance in classifying brain tumors.
Table of Contents

LIST OF TABLES ..................................................................................................................................... ix


LIST OF FIGURES .................................................................................................................................... x
Chapter 1: Introduction ........................................................................................................................... 11
1.1 Introduction ..................................................................................................................................... 11
1.2 Problem Statement.......................................................................................................................... 11
1.3 Justification ..................................................................................................................................... 12
1.4 Aims & Objectives .......................................................................................................................... 13
1.5 Scope/Limitations ............................................................................................................................ 13
1.6 Social, Professional, Legal, and Ethical Implications .................................................................. 14
1.7 Project Organization ...................................................................................................................... 14
Chapter 2: Background and Review of Literature ................................................................................ 15
2.1. Introduction .................................................................................................................................... 15
2.2. Background .................................................................................................................................... 15
2.3. Literature Review .......................................................................................................................... 23
2.3.1 Brain Tumor Segmentation Techniques ................................................................................ 23
2.3.2 Tumor Classification using Traditional Learning ............................................................... 29
2.3.3 Tumor Classification using Deep Learning .......................................................... 35
2.4. Knowledge Gap. ............................................................................................................................. 41
Chapter 3: SPMP & Requirement Specification.................................................................................... 42
3.1. Managerial Process Plans.............................................................................................................. 42
3.1.1 Startup Plan. ............................................................................................................................. 42
A Estimates ......................................................................................................................................... 42
B Staffing ............................................................................................................................................ 42
C Project Staff Training ...................................................................................................................... 42
3.1.2 Work Plan ................................................................................................................................. 44
A Work Breakdown Structure............................................................................................................. 44
B. Schedule Allocation ....................................................................................................................... 44
C. Resource Allocation ....................................................................................................................... 45
D. Budget Allocation .......................................................................................................................... 46
3.2 Overall description.......................................................................................................................... 46
3.2.1. Product perspective................................................................................................................. 46
3.2.2. Product functions .................................................................................................................... 46
3.3. Specific requirements ................................................................................................................... 47
3.3.1. External interface requirements ............................................................................................ 47
3.3.2. User interfaces ......................................................................................................................... 47
3.3.3. Functional Requirements ....................................................................................................... 49
3.3.4. Nonfunctional requirements .................................................................................................. 50
Chapter 4: Methodology & Design .......................................................................................................... 51
Overview ................................................................................................................................................ 51
4.1 Proposed methodology .................................................................................................... 51
4.1.1 Purpose .......................................................................................................................... 51
4.1.2 Data gathering............................................................................................................... 51
4.1.2.1 Dataset description ............................................................................................... 51
4.1.3 Data preprocessing ....................................................................................................... 52
4.1.3.1 Data augmentation................................................................................................ 52
4.1.4 Image classification ...................................................................................................... 52
4.1.5 Image features extraction ............................................................................................ 52
4.1.6 Deep learning models ................................................................................................... 52
4.1.7 Product perspective ...................................................................................................... 53
4.1.8 Product function ........................................................................................................... 53
4.1.9 User classes and characteristics .................................................................................................. 53
4.2 System Design .................................................................................................................................. 53
4.2.1 Consideration ........................................................................................................ 53
4.2.1.1 Assumption and Dependencies ............................................................................ 53
4.2.1.1.1 Related Software or Hardware ........................................................................ 53
4.2.1.1.2 End Users Expectations .................................................................................... 54
4.2.1.1.3 Operating System .............................................................................................. 54
4.2.1.1.4 Possible and/or probable changes in functionality ......................................... 54
4.2.1.2 General Constraints .............................................................................................. 54
4.2.1.2.1 Standard Compliance ....................................................................................... 54
4.2.1.2.2 End-User System ............................................................................................... 54
4.2.1.2.3 Interface Requirements .................................................................................... 54
4.2.1.2.4 Data Repository ................................................................................................. 54
4.2.1.2.5 Memory Capacity.............................................................................................. 54
4.2.1.2.6 Security Requirements ..................................................................................... 55
4.2.1.2.7 Performance Requirements ............................................................................. 55
4.2.1.2.8 Verification and Validation Requirements ..................................................... 55
4.2.2 Interface Design ..................................................................................................... 56
4.2.2.1 Overview about Interface Design ........................................................................ 56
4.2.2.2 Screen Images ........................................................................................................ 56
4.2.2.2.1 Main Interface ................................................................................................... 57
4.2.2.2.2 Sign In Interface ................................................................................................ 57
4.2.2.2.3 Forget My Password ......................................................................................... 58
4.2.2.2.4 Doctor Homepage.............................................................................................. 58
4.2.2.2.5 Patients ............................................................................................................... 59
4.2.2.2.6 Patient Information........................................................................................... 59
4.2.2.2.7 Create New MRI Scan ...................................................................................... 59
4.3 Detailed System Design............................................................................................................. 60
4.3.1 Definition, Classification and Responsibilities ................................................................... 61
4.3.2 Constraints and Composition .............................................................................................. 62
4.3.3 Preprocessing......................................................................................................................... 64
Chapter 5: Implementation ...................................................................................................................... 66
5.1. Model............................................................................................................................................... 66
5.1.2. DenseNet201 ............................................................................................................................ 66
5.2. Results and Evaluation .................................................................................................................. 68
5.2.1. Dataset and Preprocessing ..................................................................................................... 68
5.2.2. Performance Metrics .............................................................................................................. 69
5.2.3. Hyperparameters .................................................................................................................... 70
5.2.4. Experimental Results .............................................................................................................. 70
5.2.5. Benchmarking ......................................................................................................................... 71
Chapter 6: Conclusion .............................................................................................................................. 73
6.1 Introduction ............................................................................................................................... 73
6.2 Findings and Contributions ..................................................................................................... 73
6.3 Limitation .................................................................................................................................. 74
6.4 Lessons & Skills learned ........................................................................................................... 74
6.5 Recommendation & Future Works ......................................................................................... 76
References .................................................................................................................................................. 77
LIST OF TABLES

Table 1: project staffing plan ...................................................................................................................... 43


Table 2: Work breakdown structure............................................................................................................ 44
Table 3: Schedule allocation ....................................................................................................................... 44
Table 4: Assumptions, and Constraints....................................................................................................... 46
Table 5: Roles and responsibilities ............................................................................................................. 49
Table 6: Data split for training and testing ................................................................................................. 69
Table 7: Results obtained by DenseNet201 ................................................................................................ 71
Table 8: Comparison of results using the same figshare dataset................................................... 72
LIST OF FIGURES
Figure 1: Resource allocation ..................................................................................................................... 45
Figure 2: External interface ........................................................................................................................ 47
Figure 3: User interface .............................................................................................................................. 47
Figure 4: User interface .............................................................................................................................. 48
Figure 5: HunTumor Main Interface ........................................................................................................... 57
Figure 6: HunTumor Sign in Interface ........................................................................................................ 57
Figure 7: HunTumor Forget My Password Interface ................................................................... 58
Figure 8: HunTumor Doctor Homepage Interface ...................................................................................... 58
Figure 9: HunTumor List of patients Interface ........................................................................................... 59
Figure 10: HunTumor Patient Information Interface ................................................................... 59
Figure 11: HunTumor Create New MRI Scan Interface ............................................................................. 60
Figure 12: HunTumor Classification Pop-Up Interface .............................................................................. 60
Figure 13: Relation of Dense Blocks and Transition Layers ....................................................... 67
Figure 14: Three types of tumors in three angles of view (axial, coronal, and sagittal): (1) Glioma,
(2) Meningioma, and (3) Pituitary ................................................................................................ 68
Figure 15: Confusion matrix for brain tumor classification ........................................................ 70
Figure 16: DenseNet201 ROC Curve ......................................................................................................... 71
Chapter 1: Introduction
1.1 Introduction
This chapter introduces the research project on using deep learning for brain tumor image
classification. It starts by highlighting the challenge of accurately diagnosing brain tumors,
emphasizing the need for enhanced methods. The justification section explains why deep
learning could be a pivotal solution, given its potential to improve diagnosis accuracy and
speed. Following this, the aims and objectives section details the research's main goals, which
revolve around developing and validating a deep learning model for this purpose.
The study's scope and limitations address the research boundaries, potential challenges, and
the dataset's confines. Lastly, the implications section delves into the broader impacts of this
research on society, the professional medical community, legal standards, and ethical
considerations. Overall, this chapter sets the stage for the detailed exploration of deep learning's
role in brain tumor diagnostics.
1.2 Problem Statement
In the multifaceted domain of medical diagnostics, the urgency and precision required in
the classification and diagnosis of brain tumors stand paramount. A brain tumor, characterized
by an aggregation of abnormal cells within the rigid confines of the skull, can be either
malignant or benign, each bringing forth its own set of complications and treatment pathways.
The complexity of brain tumors, along with their categorization into primary or secondary,
renders their detection and classification a meticulous task that demands precision to facilitate
apt therapeutic interventions. Traditional diagnostic methodologies, such as X-rays, while
being cost-effective and rapid, present substantial challenges in deriving accurate clinical
diagnoses in comparison to other imaging modalities like CT or MRI scans [1][2][3].
Considering this, computer-aided diagnosis, especially through leveraging deep learning
techniques, appears as a beacon of potential to enhance accuracy and efficiency in diagnosing
brain tumors. The overarching aim of this research is to develop a model that harnesses the
prowess of deep learning, specifically focusing on grey-scaled segmented images, to ensure
swift and correct diagnosis. Whereas existing systems might provide a foundational approach
towards disease identification, the intricate nature of brain tumors, often leading to critical
conditions like increased intracranial pressure or metastasis, necessitates an advanced, reliable,
and rapid diagnostic model [2][3].

This research endeavors to use a deep learning model, trained on a substantial
dataset of brain tumor images, with the objective of assisting healthcare professionals in making
expedient and precise diagnoses. The imperativeness of this study is further underscored by the
potential of automated systems in supplying vital support to physicians, thereby potentially
saving lives through prompt and accurate intervention. By bridging the gap between the
criticality of precise brain tumor classification and the current limitations of diagnostic
methodologies, this research aspires to contribute meaningfully to the field of medical image
classification [2].
1.3 Justification

Brain tumors, with their intricate and varied nature, pose a significant challenge in accurate
and swift diagnosis, which is crucial for deciding the right course of treatment and improving
patient outcomes. Traditional diagnostic methods like X-rays, while being accessible and
quick, often fall short in providing the detailed and precise information that more advanced
imaging techniques like MRI provide. However, even with advanced imaging, the complexity
and variability of brain tumors can make accurate classification challenging for physicians,
especially in nuanced or early-stage cases.
In this context, the application of deep learning for brain tumor image classification
becomes pivotal. Earlier research and applications have showcased the capability of deep
neural networks, particularly when applied to grey-scaled segmented images, in enhancing the
accuracy and efficiency of disease diagnosis. Deep learning models, by learning intricate
patterns and features from large datasets of brain tumor images, have the potential to
significantly augment the capabilities of physicians in diagnosing tumors accurately and
promptly.
Moreover, an automated, reliable system for brain tumor classification can serve as a
valuable second opinion for healthcare professionals, reducing the likelihood of misdiagnoses
and ensuring that treatment can be started at the earliest, thereby potentially improving patient
survival and quality of life. Considering the serious implications of brain tumors and the clear
limitations of existing diagnostic methodologies, the incorporation of deep learning into brain
tumor image classification is not only beneficial but imperative.
This research, therefore, looks to build upon previous studies, aiming to develop a more
robust, accurate, and clinically applicable deep learning model for brain tumor image
classification.

1.4 Aims & Objectives
The aim is to develop an application for early detection with high accuracy levels of deep
learning to identify brain tumors:

• Develop a deep learning model capable of accurately detecting the presence or absence
of brain tumors in medical images [4].

• Explore the integration of multimodal imaging data, such as MRI, and CT to improve
the overall accuracy of brain tumor detection.

• Investigate the potential for the automated segmentation of brain tumors within medical
images, aiding in precise tumor localization.

• Assess the clinical impact of the deep learning-based brain tumor detection system,
including its potential to reduce diagnostic delays and improve patient outcomes [4].

• Collaborate with healthcare professionals to ensure the seamless integration of the deep
learning solution into existing clinical workflows, optimizing its utility in real-world
medical practice [4].

1.5 Scope/Limitations
The focal point of this research revolves around developing a deep learning model aimed
at refining the classification of brain tumor images, looking to provide a valuable tool for
enhancing diagnostic accuracy and timeliness. The scope encompasses sourcing and utilizing
a diverse and comprehensive dataset of brain tumor images, developing a model that learns
from the intricate patterns and variations within the data, and validating its effectiveness in a
controlled environment. However, the study is not without its limitations.
The efficacy and generalizability of the model might be constrained by the diversity and
size of the dataset used, potentially limiting its applicability to a wider range of cases and
populations. Moreover, the technical constraints, such as computational resources and
algorithmic challenges in model optimization, pose potential hurdles in model development
and validation. Furthermore, translating the model from a research environment to real-world
clinical settings, ensuring it adheres to regulatory and ethical standards while maintaining
reliability and interpretability, also presents a considerable challenge.

This research, while aiming for broad applicability, acknowledges these limitations and
seeks to address them through rigorous testing, validation, and adherence to best practices in
model development and deployment.
1.6 Social, Professional, Legal, and Ethical Implications
Incorporating deep learning models for brain tumor image classification carries various
implications across multiple domains. Socially, it holds the promise of enhanced healthcare
delivery by potentially providing quicker and more accurate diagnoses, yet it raises questions
about accessibility and data privacy.
Professionally, while it can serve as a robust diagnostic assistant to healthcare
professionals, it might also alter roles and responsibilities, requiring new skill sets and
adaptation to technological advancements. Legally, ensuring the model complies with
healthcare regulations, data protection laws, and patient confidentiality becomes imperative,
ensuring that the technology is implemented in a manner that safeguards patient interests and
adheres to regulatory frameworks.
Ethically, the deployment of such models demands meticulous attention to ensure unbiased,
transparent, and accountable practices. Ensuring the model makes equitable and reliable
decisions, maintains the privacy and confidentiality of patient data, and is used in a manner
that aligns with ethical standards is vital to ensuring the responsible integration of deep learning
into brain tumor diagnostics.
1.7 Project Organization
This mid-term report is structured into three chapters, each addressing a key
aspect of our project. The first chapter introduces our project's title, outlines the reasons behind
selecting this subject, emphasizes its significance, and highlights our specific aims. Moving on
to the second chapter, it delves into the background and reviews existing literature relevant to
our project. Finally, the third chapter outlines our software project management plan, offering
insights into the purpose, scope, and goals of our project for better clarity and organization.

Chapter 2: Background and Review of Literature
2.1. Introduction
This chapter provides the reader with the theoretical foundation needed to comprehend the
research methodologies employed in this project.
2.2. Background

A. Brain tumor

A brain tumour is a collection, or mass, of abnormal cells in the brain. The skull, which
encloses the brain, is very rigid, so any growth inside such a restricted space can cause problems.
Brain tumours can be cancerous (malignant) or noncancerous (benign). When benign or
malignant tumours grow, they can cause the pressure inside the skull to increase. This can cause
brain damage, and it can be life-threatening.
There are around 130 distinct forms of tumours that can affect the brain and central nervous
system, all of which can range from benign to malignant, from exceedingly rare to frequent.
But each of those 130 brain tumours has a primary or secondary classification. Primary brain
tumours are those that develop in the brain. They can arise from your brain cells, the meninges,
the membranes that surround your brain, your nerve cells, or your glands. Primary tumours can
be malignant or benign.
Most brain malignancies are secondary brain tumours. Breast cancer, kidney cancer, and skin
cancer are examples of conditions that begin in one area of the body and progress to the brain
[5]. Secondary brain tumours are always malignant, whereas benign tumours do not spread
from one area of the body to another. The four most typical types of brain tumours are:

• Meningioma: As they develop in the meninges, the membranes that border the skull
and spinal canal, these tumours are technically not brain tumours. However, the effects
of their development on the brain might range from memory loss to convulsions to
eyesight and hearing loss. Meningioma incidence rises with age, and because the
tumours grow slowly, symptoms may appear gradually.
Since meningiomas are often benign, clinicians may decide simply to observe asymptomatic
instances. However, if the tumour begins to negatively impact quality of life, doctors will
either perform a surgical removal or use radiation treatment to treat it. Chronic headaches
are the most typical early indication of meningioma.

• Gliomas: tumours that arise from glial cells are called gliomas. Normally, these cells
sustain the structure of the central nervous system, nourish it, remove cellular waste,
and break down dead neurons. Gliomas can arise from different kinds of glial cells:
astrocytic tumours, such as astrocytomas, which begin in the cerebrum;
oligodendroglial tumours, which are frequently discovered in the frontal and temporal
lobes; and glioblastomas, the most aggressive variety, which develop in the brain's
supporting tissue.

• Metastatic: these brain tumours are the most prevalent among adults and are typically
caused by breast or lung cancer. Smaller metastatic tumours may be treated with a
gamma knife, a sort of radiation treatment that concentrates 200 tiny radiation beams
into the tumorous region. Larger metastatic tumours are frequently surgically excised,
meaning eliminated.

• Astrocytoma: located in the cerebrum, these tumours arise from astrocytes, the
star-shaped cells of the brain. The grade of astrocytoma tumours, which refers
to their degree of malignancy and aggressiveness, varies: occasionally they progress
slowly (Grade II), and occasionally they do so more aggressively (Grade III) [6].

B. Deep Learning

A key area of machine learning called deep learning focuses on developing algorithms that
are modelled after the structure and operation of the human brain. Since the 1990s, DL
techniques have produced state-of-the-art results in image classification tasks and, therefore,
medical image analysis [7] because of their high accuracy and capacity to swiftly identify
crucial representations that discriminate between various image classes. CNNs, a subset of DL,
are the most popular neural network designs for images.

C. Machine learning

Machine Learning (ML) is a branch of AI that focuses on developing systems that can
learn from data and improve their accuracy over time without being explicitly programmed.

ML makes decisions and predictions based on a large amount of data used
to train algorithms, and then on fresh data, enabling improved data processing, more precise
decision-making, and better prediction. Building a machine learning application (or model)
is typically carried out by a data scientist in collaboration with a domain specialist [8].

D. Convolutional Neural Network

A convolutional neural network (CNN), a type of artificial neural network specifically


created to process pixel data, is a potent image processing tool. This network uses a
mathematical operation known as convolution, a specialized form of linear operation, to
process pixel data. Convolutional networks are essentially neural networks with at least one
layer that utilize convolution rather than standard matrix multiplication [9].
A hardware or software system known as a neural network is modelled after how neurons
in the human brain function. Traditional neural networks must be given pictures in pixel-by-
pixel, low-resolution chunks, which is not suitable for image processing. CNN's "neurons" are
structured more like the frontal lobe, which in humans and other animals is the region in charge
of processing visual inputs. The full visual field is covered by the layers of neurons,
overcoming the issue with typical neural networks' piecemeal picture processing [9].
A CNN uses a system much like a multilayer perceptron that has been optimized for low
processing demands. A CNN's layers consist of an input layer, an output layer, and hidden layers.
Among the many convolutional, pooling, fully connected, and normalization layers, the three
most commonly used are the convolutional, pooling, and fully connected (dense) layers.

• Convolution layer
A convolution layer is always the initial layer of a CNN. Convolutional layers operate on
the input using a convolution operation and send the outcome to the following layer. All the
pixels in a convolution's receptive area are combined into a single value. When a convolution
is applied to an image, the image size is reduced and all the data in the receptive field is combined
into a single pixel. The convolution layer's final output is a vector [16].

• Pooling layer
A pooling layer's purpose is to gradually shrink the representation's spatial size, as well as
the network's computational and parameter requirements. Usually, the average or maximum
function is applied to the subset under consideration during the pooling process.
Many forms of aggregation can be applied; the average pool and the maximum pool are the
most popular. Average pooling computes the average of the parameters inside the kernel,
whereas max pooling chooses the highest value of the kernel's internal parameters. Pooling
layers subsample the input, which reduces the number of computations required [17].

• Fully Connected Layer


CNNs, which have been demonstrated to be particularly useful in identifying and
categorizing images for computer vision, must have fully connected (FC) layers. Convolution
and pooling, which divide the image into features and analyse each one separately, are the first
steps in the CNN process. A fully connected neural network structure receives the output of
this procedure and uses it to make the final classification decision [18].
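For illustration, the following is a minimal sketch (assuming TensorFlow/Keras) of how these
three layer types are typically stacked; the filter counts, the 128x128 grey-scale input, and the
three output classes are illustrative assumptions, not this project's actual configuration.

```python
# Minimal CNN sketch (assumed TensorFlow/Keras); all sizes are illustrative.
from tensorflow.keras import layers, models

model = models.Sequential([
    # Convolution layer: 32 filters of size 3x3; each receptive field is
    # combined into a single output value, producing feature maps.
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(128, 128, 1)),
    # Pooling layer: max pooling keeps the largest value in each 2x2 window,
    # shrinking the spatial size and the computation that follows.
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    # Flatten the feature maps into a vector for the dense layers.
    layers.Flatten(),
    # Fully connected (dense) layers make the final classification decision.
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # e.g. glioma/meningioma/pituitary
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```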

E. Deep Belief Network

Deep belief networks (DBNs) are a type of deep learning technique that aims to solve the
issues with conventional neural networks. They do this by exploiting the network's layers of
stochastic latent variables. These stochastic binary latent variables, also known as feature
detectors and hidden units, are binary variables that have a chance of taking on any value within
a given range. A Deep Belief Network (DBN) can classify images as normal or abnormal,
indicating the presence or absence of tumours in MRI scans [10].

F. Fuzzy C-means algorithm

Unsupervised clustering challenges related to feature analysis, clustering, and classifier


building are addressed by the FCM method. FCM is frequently used in fields including
agricultural engineering, astronomy, chemistry, geology, image analysis, and target
identification. The FCM clustering technique, which is based on Ruspini's fuzzy
clustering theory, was introduced in the 1980s along with the development of fuzzy theory.
This algorithm is employed for analysis based on the separation of different input data points.
The cluster centres are generated for each cluster, and the clusters are constructed based on the
spacing between the data points.
FCM is a data clustering technique in which a data set is divided into n clusters, where each
data point in a cluster has a high degree of belonging (membership) to that cluster. In contrast,
a data point that is located far from a cluster's center will have a low degree of belonging to that
cluster [11].
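As a concrete illustration of these update rules, here is a small NumPy sketch of FCM; the
cluster count c, the fuzzifier m, and the synthetic 2D data (standing in for image feature
vectors) are illustrative assumptions.

```python
# A NumPy sketch of Fuzzy C-Means; c, m, and the data are illustrative.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # U[i, j] = degree of belonging (membership) of point i to cluster j.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # Cluster centres: membership-weighted means of the data points.
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance from every point to every centre (avoid division by zero).
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # Membership update: closer centres get a higher degree of belonging.
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# Three well-separated 2D blobs stand in for image feature vectors.
X = np.vstack([np.random.default_rng(s).normal(mu, 0.5, (50, 2))
               for s, mu in enumerate((0.0, 3.0, 6.0))])
centres, U = fuzzy_c_means(X, c=3)
labels = U.argmax(axis=1)  # hard assignment from the fuzzy memberships
print(centres)
```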

G. MRI

A non-invasive imaging method that creates finely detailed three-dimensional pictures is


MRI. Disease detection, diagnosis, and therapy monitoring are all accomplished with it. Its
foundation is a technology that stimulates and detects changes in the rotational axis of protons
in the water that makes up biological tissues [12].
The characteristics of hydrogen in water or lipids are used by MRI to take pictures.
Repetition time (TR) and time to echo (TE) are two of the most important parameters. TR
is the length of time between successive pulse sequences applied to the same slice. TE is the
amount of time between the delivery of an RF pulse and the reception of the echo signal [13].

H. Artificial neural networks

Artificial neural networks, often known as neural networks, are computer programs that
resemble animal brains' organic neural networks in certain ways. A neural network is composed
of neurons, a group of nodes that serve as a loose representation of the neurons found in a
biological brain. Each neuron computes and transmits a value to the layer of neurons below it
via connections.
Each neuron's output is determined by a non-linear function of the sum of its inputs, where
the signal at a connection is a real value. The connections are referred to as edges. Weights
on the edges and neurons are modified as the learning process develops; the weight regulates
the strength of the output signal [14].
A neural network contains the following components:
• Connections and Weights: The network is made up of connections that allow one
neuron's output to serve as an input for another neuron. Each link is given a weight to
indicate its relative relevance. Multiple input and output connections are possible for
one neuron at a time.
• Backpropagation: Backpropagation, which stands for "backward propagation of
errors," is a gradient descent-based supervised learning approach for artificial neural
networks. The technique determines the gradient of the error function in relation to the
weights of the neural network.
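The following toy NumPy sketch illustrates these two components together: weighted
connections evaluated in a forward pass, and backpropagation computing the gradient of a
squared error with respect to the weights. The layer sizes, random data, and learning rate are
all illustrative assumptions, not part of the cited material.

```python
# Toy backpropagation sketch in NumPy; sizes and data are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((4, 3))                 # 4 samples with 3 input features
y = rng.random((4, 1))                 # expected outputs (labels)
W1 = rng.random((3, 5))                # edge weights, input -> hidden
W2 = rng.random((5, 1))                # edge weights, hidden -> output
lr = 0.1                               # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(1000):
    # Forward pass: each neuron applies a non-linear function to the
    # weighted sum of its inputs.
    h = sigmoid(x @ W1)
    y_hat = sigmoid(h @ W2)
    # Backward pass: gradient of the squared error w.r.t. each weight.
    delta2 = (y_hat - y) * y_hat * (1 - y_hat)
    delta1 = (delta2 @ W2.T) * h * (1 - h)
    # Gradient descent: move the weights against the gradient.
    W2 -= lr * (h.T @ delta2)
    W1 -= lr * (x.T @ delta1)

print(float(((y_hat - y) ** 2).mean()))  # error shrinks as training proceeds
```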

Supervised, Semi-supervised and Unsupervised Learning: input information is necessary
for a machine learning problem to learn. Supervised learning is the process of trying to produce
a predicted output ŷ from an input x that has an expected output y, known as a label, such that
ŷ approximates y.
An unsupervised problem is trained on unlabelled data: to produce the anticipated
output ŷ, it automatically searches for and learns patterns and regularities in the input data. A
semi-supervised problem is a combination of supervised and unsupervised learning that learns
from both labelled and unlabelled data [15].

I. SVM

Support Vector Machine (SVM) is a machine learning approach employed for
classification, regression, and ranking problems. It operates by using a kernel approach to
project data points into a higher-dimensional space. SVM determines the decision boundary in
the supplied data space using structural risk minimization, finding the support vectors, or data
points, that best distinguish the classes. The aim of SVM is to find the ideal hyperplane that
optimally separates the classes in the data [19].
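A minimal sketch (assuming scikit-learn) of SVM classification with an RBF kernel follows;
the synthetic feature vectors are an illustrative stand-in for features extracted from MRI images.

```python
# Minimal scikit-learn SVM sketch; the synthetic data is illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly projects points into a higher-dimensional space.
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)
print("support vectors:", len(clf.support_vectors_))
print("test accuracy:", clf.score(X_test, y_test))
```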

J. Gray Level Co-occurrence Matrix

A statistical technique used in image processing to examine the spatial correlations between
pixels in an image is the Gray Level Co-occurrence Matrix (GLCM). It determines how
frequently distinct pixel values appear in pairings at various spatial distances and orientations.
Contrast, homogeneity, correlation, and energy are some of the textural properties that may be
extracted from images by GLCM and employed in a variety of applications, such as image
segmentation and classification [19].
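The sketch below (assuming a recent scikit-image, where the functions are named
graycomatrix and graycoprops) computes a GLCM and the four textural properties named
above; the random array is an illustrative stand-in for a grey-level MRI slice.

```python
# GLCM texture-feature sketch (assumes scikit-image >= 0.19 naming).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
# Count co-occurring pixel pairs at distance 1 in four directions.
glcm = graycomatrix(image, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
# The four textural properties, averaged over the four directions.
for prop in ("contrast", "homogeneity", "correlation", "energy"):
    print(prop, graycoprops(glcm, prop).mean())
```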

K. Linear Discriminant Analysis

In statistics, pattern recognition, and machine learning, the LDA (Linear Discriminant
Analysis) technique is used to identify a linear combination of features. It attempts to express
a single dependent variable as a linear combination of other characteristics or measures.
Although LDA clearly describes the distinctions between data classes, it is closely connected
to PCA (Principal Component Analysis) and factor analysis. It looks for vectors in the
underlying space that can distinguish between classes the best, combining them linearly to
provide the highest mean differences between the target classes [20].
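A minimal scikit-learn sketch of LDA is shown below, projecting synthetic three-class data
(an illustrative stand-in, not this project's features) onto the directions that best separate the
classes; LDA keeps at most n_classes - 1 components.

```python
# Minimal scikit-learn LDA sketch on synthetic three-class data.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
# LDA can keep at most n_classes - 1 discriminant directions.
lda = LinearDiscriminantAnalysis(n_components=2)
X_proj = lda.fit_transform(X, y)       # linear combinations of the features
print(X_proj.shape)                    # (300, 2)
```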

L. Principal Component Analysis

A method for feature extraction and dimensionality reduction in data analysis is called PCA
(Principal Component Analysis). It entails mapping the original data onto a collection of
principal components: the eigenvectors of the covariance matrix with the largest
eigenvalues.
The original data may be approximated using lower-dimensional feature vectors using
PCA, giving the data a linear representation with the fewest possible components. Maximizing
the separability between various groups or categories in the data is the primary objective of
PCA [20].
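A minimal scikit-learn sketch of PCA follows; the 50 input features and 10 retained
components are illustrative choices, not this project's configuration.

```python
# Minimal scikit-learn PCA sketch; feature/component counts are illustrative.
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(100, 50)            # 100 samples, 50 original features
pca = PCA(n_components=10)             # keep the 10 largest-eigenvalue directions
X_reduced = pca.fit_transform(X)       # lower-dimensional feature vectors
print(X_reduced.shape)                 # (100, 10)
print(pca.explained_variance_ratio_.sum())  # fraction of variance retained
```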

M. Computer Aided Detection (CAD)


A technology designed to decrease observational oversights—and thus the false negative
rates—of physicians interpreting medical images [21].

N. nnU-Net framework
Fully automated method for deep learning-based biomedical image segmentation [22].

O. U-Net framework
An architecture for semantic segmentation. The structure of this system comprises two
main components: a contracting path and an expansive path. The contracting path adheres to
the typical architecture found in convolutional networks. It involves repeatedly applying a pair
of 3x3 convolutions (without padding), each followed by a rectified linear unit (ReLU), and a
2x2 max-pooling operation with a stride of 2 to achieve down sampling.
In every down sampling, the number of feature channels is multiplied by 2. Conversely,
each step in the expansive path includes an up sampling of the feature map, followed by a 2x2
convolution (referred to as "up-convolution") that reduces the number of feature channels by
half. Subsequently, there is a concatenation with the corresponding cropped feature map from
the contracting path, followed by two 3x3 convolutions, each with a ReLU activation function.
Cropping is necessary due to the loss of border pixels during each convolution.
Finally, at the last layer, a 1x1 convolution is employed to map each 64-component feature
vector to the desired number of classes. In total, this network comprises 23 convolutional layers
[23].
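A compact Keras sketch of this encoder-decoder pattern is shown below. For brevity it uses
'same' padding, so the border-cropping step of the original unpadded design is unnecessary;
the depth, filter counts, 128x128 input, and two output classes are all reduced, illustrative
assumptions rather than the 23-layer original.

```python
# Compact U-Net-style sketch (assumed TensorFlow/Keras); sizes illustrative.
from tensorflow.keras import Input, Model, layers

def conv_block(x, filters):
    # Two 3x3 convolutions, each followed by a ReLU.
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

inputs = Input((128, 128, 1))
# Contracting path: each 2x2 max pool halves the size; filters double.
c1 = conv_block(inputs, 32)
p1 = layers.MaxPooling2D(2)(c1)
c2 = conv_block(p1, 64)
p2 = layers.MaxPooling2D(2)(c2)
b = conv_block(p2, 128)                      # bottleneck
# Expansive path: 2x2 up-convolution halves the channels, then the
# corresponding contracting feature map is concatenated back in.
u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
c3 = conv_block(layers.concatenate([u2, c2]), 64)
u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
c4 = conv_block(layers.concatenate([u1, c1]), 32)
# Final 1x1 convolution maps each feature vector to the class scores.
outputs = layers.Conv2D(2, 1, activation="softmax")(c4)
model = Model(inputs, outputs)
model.summary()
```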

P. Gliomas
Primary brain tumors that are thought to arise from neuroglial stem or progenitor cells
[24].

Q. Meningiomas
Mostly benign tumors, stemming from the arachnoid cap cells, accounting for almost a
quarter of intracranial tumors [25].

R. Pituitary tumors
Monoclonal benign adenomas that grow from the expansion of a single precursor cell [26].

S. Recurrent Neural Network (RNN)


A neural network that has feedback connections and is designed to learn sequential patterns
or patterns that change over time [27].

T. Neural Autoregressive Distribution Estimation (NADE)


Neural network structures applied to unsupervised distribution and density estimation
problems. They utilize the probability product rule along with a weight sharing approach
inspired by restricted Boltzmann machines to create an estimator that is manageable and
exhibits strong generalization capabilities [28].

U. Visual geometry group (VGG)


An improved deep convolutional neural network model that contains many layers [29].

V. Super Resolution Fuzzy C-Mean (SR-FCM)


A tool that increases the resolution of images, making them clearer and more informative [30].

W. Extreme learning machine (ELM)


A fast feed-forward neural network that does not need gradient-based backpropagation to
function [31].

X. Bayesian network
A class of probabilistic graphical models consisting of two segments: structure and
parameters. The structure involves a Directed Acyclic Graph (DAG) for the conditional
dependencies and independencies in random variables of nodes. The parameters involve
conditional probability distributions of nodes [32].

Y. Conditional random field (CRF)
Probabilistic technique for structured prediction used in NLP, computer vision,
bioinformatics, and more [33].

Z. K-NN algorithm
An algorithm that assigns an input vector that has no classification to the class of its nearest
neighbor [34].
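A minimal scikit-learn sketch: an unlabelled input is assigned the majority class of its k
nearest neighbours (the synthetic data and k = 5 are illustrative assumptions).

```python
# Minimal scikit-learn k-NN sketch; data and k are illustrative.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=150, n_features=8, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5)   # majority vote among 5 neighbours
knn.fit(X, y)
print(knn.predict(X[:3]))                   # classes assigned to three inputs
```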

AA. Deep Neural Network


A group of neurons structured in a sequence of layers, where they receive the activations
from neurons of the previous layer and perform some simple computations [35].
2.3. Literature Review
This section presents the literature review for this project, which covers three topics: Brain
Tumor Segmentation Techniques, Traditional Machine Learning methods for Tumor
Classification, and Deep Learning techniques for Tumor Classification.

2.3.1 Brain Tumor Segmentation Techniques


Chatzitheodoridou [36] explains the importance of developing Computer-Aided Detection
(CAD) systems to support more accurate decisions in clinics and radiology. In brain
tumor segmentation, four types of segmentation techniques are mentioned.
First, the nnU-Net framework technique, which is an automated framework that uses
convolutional neural networks (CNNs) and has produced significant results of graded
classification of brain tumors on the multimodal dataset benchmark BraTS and is considered
the best method of brain tumor segmentation. Second, the Tumor Slice Extraction, which is
used to acquire 2D images for MR volumes of patients from different manners for future
analysis.
Third, the Intensity Normalization for image pre-processing, such as Gaussian kernel
normalization and histograms, which works by finding noise and nonuniform heterogeneousity
(inhomogeneity) in the MR images to give an outline of the tumor characteristics. Fourth and
last is the Manual Tumor Detection, which is used when nnU-Net framework does not give a
segmentation of the tumor, and it works by manually detecting tumors with a boundary box,
which is a close-up image graph of the minimum and maximum x and y coordinates of the
tumor.

Kalvakolanu [37] classifies brain tumors into three categories: Gliomas, Meningiomas,
and Pituitary tumors, using registration and segmentation-based method of skull stripping to
produce MRI brain tumor images with the absence of the skull and a grab-cut mechanism to
determine if the no-skull MRI images gathered tumor characteristics for accuracy in
classification. Five brain tumor segmentation techniques are presented.
First, Naser et al. [38] proposes U-Net with CNN for image segmentation through an
encoder-decoder architecture that gathers information of the image, and it segments tumors
depending on their grades of glioma. Second is the Recurrent Neural Network (RNN) with
Optical Wavelet Statistical characteristics proposed by Begum et al. [39] which involves a
combination of RNN and optical wavelet statistical characteristics system to segment tumors
by processing sequential data and extracting information from brain tumor images.
Third is the Two CNNs with NADE, proposed by Hashemzehi et al. [40], which uses two
CNNs and NADE to create a joint distribution for brain tumor segmentation, where each CNN
learns characteristics of the tumor separately, then the results are combined to produce the
segmentation, and the NADE identifies the brain cell location to learn the shape of the tumor.
Fourth is the CNN with NADE and Fully Connected Network, which is a hybrid of CNN,
NADE, and a connected network for brain tumor segmentation, thus serving a feature
extraction, CNN independent feature learning, and a fully connected network for processing
and classification.
Fifth and last is the Transfer Learning with Visual Geometry Group (VGG), which divides
the NN into six blocks and performs fine tuning for each block independently, and transfer
learning leverages pre-trained weights from previous tasks and datasets to improve on the
current task [41].
Almadhoun [42] presents four techniques that detect abnormal growth in the brain
using a deep learning method called Stacked Autoencoders to predict tumor segments.
The first is Super Resolution Fuzzy-C Means (SR-FCM), which utilizes high-resolution
MRI images to enhance the accuracy of tumor detection. It involves segmenting the tumors
using the Fuzzy-C-Means approach and extracting features using the SqueezeNet architecture
from a CNN, and the classification process is performed using the extreme learning machine
(ELM) algorithm.
The second is Multiple Kernel based Probabilistic Clustering, which involves segmenting
the pre-processed MRI image using Multiple Kernel based Probabilistic Clustering, where
features are then extracted based on shape, texture, and intensity for each segment, then Linear

Discriminant Analysis is used to select important features for classification, and a deep learning
classifier is employed to classify the segments into tumor or non-tumor.
The third is Triangular Fuzzy Median Filtering, which applies triangular fuzzy median
filtering for image enhancement that helps in accurate segmentation based on an unsupervised
fuzzy set method, then Gabor features are extracted across each patient’s lesions, and similar
texture features are calculated, which are supplied to the ELM for tumor classification. The
fourth and last is CNN with ELM, which utilizes brain MRI for tumor evaluation, involving
automated segmentation and classification methods. The MRI image is pre-processed and
segmented using an unsupervised fuzzy set method, Gabor features are extracted, and similar
texture features are calculated, where features are then used for tumor classification using the
ELM.
Alberts [43] proposes three techniques of brain tumor segmentation: The first is Deep
Learning Frameworks, which have proven to be highly accurate in brain tumor segmentation
when large, annotated image datasets are available. These frameworks, such as CNNs, learn
feature maps by cascading convolutional layers over the input images. They perform well on
different scales of the image in medical segmentation.
The second is Bayesian Network with Hidden Variables, which is based on a Bayesian
network with hidden variables optimized by the expectation-maximization algorithm. It is
designed to handle non-Gaussian multivariate distributions using the concept of Gaussian
copulas, as it allows for intuitive semi-automated user interaction, and is less dependent on
training data, and can handle missing MR modalities.
The third and last is Conditional Random Field (CRF), which unifies brain tumor
segmentation and growth analysis within a single framework, acting on longitudinal datasets
and studying whether the tumor has grown or shrunken between two time points in the given
sequence, and it uses label prior maps generated by supervoxel segmenters and spatially
regularizes the brain tumor segmentations.
Hameed [1] proposes fourteen techniques for brain tumor segmentation. The first is Double
Threshold Method, used for extracting the skull from MRI images by setting two thresholds to
segment the image and extract the skull. The second is Nonparametric Active Contour
Deformable Model, which is used for excising brain tumors from brain MRI images, using a
deformable model to segment and remove the tumors.
The third is K-means Clustering Method which involves clustering the image pixels based
on their similarity and assigning them to different tumor regions. The fourth is Discrete

Wavelet Transforms and Grayscale Matching Matrices, which are used for feature extraction in
the first stage of tumor segmentation, then they analyze the image at different scales and extract
relevant features for classification. The fifth is Support Vector Machine (SVM), which is used
for classification in the segmentation process, by using a trained model to classify the
segmented tumor regions as normal, benign, or malignant.
The sixth is Histogram, which counts the number of pixels in an image and provides
information about the distribution of pixel intensities and by analyzing the intensity values. The
seventh is Resampling, involving resizing the image to a standard size for proper geometric
representation, and can be used as a preprocessing step before tumor segmentation. The eighth
is K-NN Algorithm, which is used to assign a new image to a class based on the majority vote
of its k nearest neighbors in the feature space.
The ninth is Distance Matrix, which is used for classification based on the similarity
between images, calculating the distance between the features of different images and assigning
them to different classes. The tenth is Particle Swarm Optimization (PSO) Algorithm, which
is used for feature selection in brain tumor classification, by optimizing the selection of relevant
features to improve the accuracy of the classification model. The eleventh is CNN, which is a
deep learning method that analyzes the image at different levels of abstraction and learns to
classify the tumor regions based on their features.
The twelfth is Naive Bayesian Classification, which is used to identify the tumor site
containing disseminated cancerous tissue, using a probabilistic model to classify the tumor
regions based on their features. The thirteenth is Stationary Wavelet Packet Transform, which
decomposes the image into different frequency bands and extracts relevant features for tumor
detection. The fourteenth and last is Adaptive Kernel Fuzzy C Means Clustering, which is an
algorithm that adapts the kernel function based on the data distribution to improve the accuracy
of the segmentation.
Pedapati and Tanneedi [44] provide nine brain tumor segmentation techniques. The first is
Mean Shift, which compares the intensity values of pixels in brain MRI images to identify
tumor regions, using a sliding window approach to iteratively shift the window towards the mode
of the pixel distribution, effectively segmenting the tumor. The second is Thresholding by
Histogram through Transform (ThH), which uses the histogram of pixel intensities to
determine a threshold value; pixels with intensities above the threshold are classified as tumor
regions.
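Otsu's method is one widely used realization of histogram-based thresholding; the sketch below (OpenCV, with a hypothetical file name) illustrates selecting a threshold from the intensity histogram, though it is not necessarily the exact ThH variant described here.

```python
# Illustrative histogram-based thresholding with Otsu's method via OpenCV;
# "brain_mri.png" is a hypothetical file, and this is not necessarily the
# exact ThH variant described by the authors.
import cv2

img = cv2.imread("brain_mri.png", cv2.IMREAD_GRAYSCALE)
# Otsu selects the threshold that best separates the two histogram modes;
# pixels above it are marked (value 255) as candidate tumor regions.
t, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print("selected threshold:", t)
```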

The third is Support Vector Machine (SVM), a machine learning algorithm applied to both
PET and MRI images to classify tumor regions based on features extracted from the images.
The fourth is Grey Level Co-Occurrence Matrix (GLCM), an image mining technique that
analyzes the spatial relationships between pixels based on their intensity values to extract
features for segmentation. The fifth is Canny Edge Detection, an image processing technique
that identifies edges in an image and is used in combination with other segmentation
techniques to improve the quality of the segmented brain tumor image.
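As an illustration of GLCM feature extraction, the following sketch uses scikit-image; the distances, angles, and properties chosen here are assumptions rather than the exact settings of the cited studies.

```python
# A sketch of GLCM texture-feature extraction with scikit-image; the
# distances, angles, and properties are assumptions, not the cited settings.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in MRI patch
glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```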
The sixth is Gabor Wavelet, a mathematical function used for feature extraction in
segmentation that analyzes the frequency and orientation of image patterns to identify tumor
regions. The seventh is Probabilistic Neural Network (PNN), a type of artificial neural
network used for classification; in brain tumor segmentation it is used to classify tumors
into malignant, metastatic, and benign based on extracted features.
The eighth is Technique for Artificial Neural Network (TANN). TANN is a brain tumor
classification technique that combines pre-processing, segmentation, feature extraction, and
classification steps, achieving high detection rates and fast classification speeds. The ninth and
last is K-Nearest Neighbor (K-NN), a classification algorithm that classifies tumors based on
their features and the features of their nearest neighbors, achieving high accuracy in brain
tumor classification.
Kanervo [45] proposes a random forest classifier model used for all training performed on
the data. Two brain tumor classification techniques are mentioned: the first is Random Forest
Trained on Iteratively Selected Patients, and the second is Nonlinear Microscopy, Infrared,
and Raman Microspectroscopy.
Random Forest Trained on Iteratively Selected Patients, proposed by Ellwaa et al. [46], utilizes
a random forest algorithm trained on a selected group of patients. It aims to segment brain
tumors by identifying patterns in the data obtained from Raman spectroscopy. Nonlinear
Microscopy, Infrared, and Raman Microspectroscopy, described by Meyer et al. [47], combines
nonlinear microscopy, infrared, and Raman microspectroscopy to analyze brain tumors, using
advanced imaging techniques to examine the molecular composition of tumor tissue and identify
specific characteristics.
Mlynarski [48] presents three brain tumor segmentation techniques. The first is Generative
Models, which involve registering the patient's scan to a brain atlas to provide a probabilistic
prior of different tissues; the segmentation is guided by differences between the patient's scan
and the atlas. The second is CNN, which has been mentioned before and has shown impressive
performance in brain tumor segmentation; these models automatically learn relevant image
features and have been successful in the BraTS challenge. The third and last is Mixed
Supervision, which involves training segmentation models using both fully annotated images
(with voxelwise labels) and weakly-annotated images. By combining various levels of
supervision, the segmentation accuracy can be improved, especially when the number of fully
annotated images is limited.
Parvez [49] mentions three GAN-based techniques for medical image segmentation and
augmentation. First, Medical Image Synthesis for Data Augmentation and Anonymization
using Generative Adversarial Networks. This approach, described by Shin et al. [50], uses
Pix2Pix to generate MRIs of the prostate. It utilizes real segmentation masks and does not
generate new ones, and the masks used in this approach have multiple classes with black
backgrounds surrounding the prostate. Second, Data Augmentation in Deep Learning using
Generative Adversarial Networks. Neff [51] proposes the use of Wasserstein GAN (WGAN)
to generate X-ray images of the lung and their corresponding masks, aiming to improve data
augmentation in deep learning. Third and last is Improving Prostate Whole Gland Segmentation
in T2-Weighted MRI with Synthetically Generated Data. Fernandez-Quilez et al. [52] use
DCGAN and Pix2Pix GAN to enhance the segmentation of the whole gland (WG) in T2W
images. This approach generates new images and their corresponding masks to improve the
quality of whole gland segmentation compared to standard augmentation methods.
Tahir [53] provides five brain tumor segmentation techniques. First, Image Filtration and
Denoising, which involves improving the quality of the image by removing noise and enhancing
the tumor region, thus helping reduce the complexity involved in tumor segmentation. Second,
Gray Scaled Segmentation, which segments the MR images based on gray scale intensity levels.
It extracts features such as tumor region, texture, color, location, and edge from the
segmented images for further analysis and classification. Third, Fuzzy C-Mean Clustering,
which uses fuzzy logic to classify the MR images into different clusters based on their
similarity. It helps in identifying the tumor region by grouping pixels with similar
characteristics. Fourth, Self-Organizing Maps (SOM), an unsupervised technique that
uses neural networks to classify the MR images.
It uses discrete wavelet transforms for feature extraction and achieves an accuracy of 94%.
Fifth and last, Deep Neural Network (DNN) Classification, which combines the use of deep
neural networks with gray scaled segmentation. It helps in accurately classifying the MR
images as normal (no tumor) or abnormal (tumor) by extracting and analyzing features from
the segmented images.

2.3.2 Tumor Classification using Traditional Learning


In a study conducted by E. Alberts et al. [54], a method combining PCA with Random Forest
classification is introduced for brain tumor recognition. The research utilized a dataset from
“The Cancer Imaging Archive” (TCIA) and achieved an accuracy of 83%. The study focuses
on classifying genomic brain tumors using low-dimensional texture features.
Iglesias et al. [55] introduced ROBEX in 2011, a robust learning-based system for brain
extraction, demonstrating significant robustness across various datasets and populations. With
a Random Forest classifier and a generative model, ROBEX was trained and tested on several
datasets, including the IBSR and the LPBA40, showcasing enhanced performance measures
when compared to other prevalent methods.
In a 2017 conference presentation and publication, Mathew et al. [56] unveiled a
methodology for MRI brain image tumor detection and classification using wavelet transform
and SVM. Employing local MR images of several types, the method demonstrated a
commendable classification accuracy of 86%, thereby contributing to ongoing advancements
in MRI brain image analysis and tumor detection.
Latif et al [57] presented a technique for multiclass brain glioma tumor classification in
their 2017 study, employing the BraTS dataset and a Random Forest algorithm for
classification. Achieving a classification accuracy of 86.87%, this method shows promise in
developing robust techniques for glioma tumor classification in MRI brain imaging.
In the research conducted by Gumaei et al. [58], a hybrid feature extraction method combined
with a Regularized Extreme Learning Machine (RELM) and PCA-NGIST was implemented for
brain tumor classification using the BraTS2013 dataset. This method achieved an impressive
classification accuracy of 94.233%, highlighting the potential of hybrid feature extraction
methods coupled with RELM.
Praveen et al. [59] utilized a dataset comprising real MRI scans from Goa Medical College
and synthetic images from the BraTS challenge, employing feature extraction using the grey
level co-occurrence matrix (GLCM) and classification via LS-SVM, a variation of SVM that
seeks to minimize the sum of squared errors, with a Multilayer Perceptron (MLP) kernel
function used specifically in classification algorithms. With a classification accuracy of
96.63%, their research highlights the effectiveness of GLCM for feature extraction and the LS-
SVM with MLP kernel for classification in brain tumor detection.
Nagtode et al. [60] conducted a study focused on brain tumor detection and classification,
utilizing discrete wavelet transform and a Probabilistic Neural Network. While the exact
dataset used is not specified in the available information, Support Vector Machines (SVM) was
applied for classification, achieving a commendable 96% classification accuracy. The
methodology leveraged 2D Gabor wavelet analysis and probabilistic neural network analysis,
demonstrating robustness against local distortions and variance of illumination, offering
substantial insights for early tumor identification and classification into benign and malignant
brain tumors.
Sudharani et al. [67] presented brain tumor lesion classification and identification scores by
utilizing the k-nearest neighbors (K-NN) method, which is one of the machine learning (ML)
techniques. They used a total of 229 images scanned using Computed Tomography (CT), a
medical imaging method that produces several cross-sectional images of the head using
specialized X-ray devices, and magnetic resonance imaging (MRI) techniques.
Before feeding the K-NN algorithm with inputs, a histogram was used to obtain the total
number of pixels in the images, and the images were resampled to 629x839 to ensure accurate
geometric representation of each image processed by this technique. The Manhattan metric was
used in this study as the distance measure of the classifier. Algorithm testing was done on 48
patient brain tumor images, and accuracy was around 95% for all images tested.
Ahmed et al. [68] presented support vector machine (SVM) and artificial neural network
(ANN) models to classify brain tumor MRI images. The dataset consisted of images of brains
with no tumor and brains with lower grade glioma or glioblastoma multiforme, obtained from
the oncology department of the University of Maryland Medical Center. To segment the MRI
images, a temper-based k-means and modified fuzzy c-means (TKFCM) algorithm was used,
which helps identify brain tumors based on the grey level intensity in a specific section of the
brain.
After that, first order statistic and region property based features were extracted. Using SVM,
the first order statistic features are used to separate tumor from non-tumor brain MRI images.
The second order statistic features handle recognizing the difference between a cancerous and
non-cancerous tumor using ANN. This proposed technique has achieved an accuracy of 97.37%
for classifying tumor and non-tumor brain images.
Machhale et al. [69] used two techniques for brain tumor classification: the first is Support
Vector Machine (SVM), and the second is a new hybrid classifier of SVM and K-Nearest
Neighbor (K-NN). The dataset consists of MRI images collected from BrainWeb: Simulated
Brain Database. Three types of features (symmetrical, grey scale, and texture features) were
extracted to distinguish one input from the other and then used as input for the classifier.
Both the SVM and the hybrid classifier (SVM-KNN) were used to classify a total of 50
images, and the hybrid classifier achieved the highest accuracy of 98%, compared to SVM,
which achieved a maximum accuracy rate of 96%.
Veeramuthu et al. [70] combined feature and image-based classifier (CFIC) to classify brain
tumor images. This method consists of two approaches, image-based classification and feature-
based classification: in the first approach the image itself serves as the input and its features are
extracted by a many-layered module; in the second approach, features extracted from the
images are sent to the classifier. The proposed CFIC classifier was trained and tested using the
Kaggle brain tumor detection 2020 dataset. The proposed method works based on feature
extraction, selection, and classification; each image in the dataset contains the feature vector
that is extracted for classification. The CFIC classifier has achieved an accuracy of 98.97%.
Cheng et al. [73] focused on three types of brain tumors, pituitary, meningioma, and
glioma, for classification. To improve classification performance, the proposed method starts
with using tumor regions augmented by image dilation as a region of interest and then
splitting it into sub-regions. The effectiveness of the proposed method was tested on a large set
of data (brain T1-weighted CE-MRI) obtained from Nanfang Hospital in China using three
feature extraction techniques: bag-of-words (BoW), intensity histogram, and grey level co-
occurrence matrix (GLCM). Using tumor regions as regions of interest improved the
accuracy up to 82.31%, 84.75%, and 88.19% for the three techniques (intensity histogram,
GLCM, and BoW), respectively.
Othman et al. [74] used Support Vector Machine (SVM) to classify MRI brain images
into normal or abnormal brain images. Axial, T2 FLAIR weighted, 256 x 256 pixel MRI data
are the input dataset for the algorithm, obtained from the Advanced Medical and Dentist
Institute in Malaysia. The features were extracted from the MRI images using discrete
wavelet transformation (DWT) and then fed to the SVM classifier. In the testing process, all
the normal (non-tumor) images were classified accurately, but not the tumor images; because
of this misclassification, the accuracy dropped to 65%.
For brain tumor classification, Sachdeva et al. [19] introduced a hybrid machine
learning (ML) algorithm that uses Support Vector Machine (SVM) and a genetic algorithm
(GA), an optimization technique used to provide solutions for optimization problems. The
dataset on which the GA-SVM algorithm is performed consists of 428 post-contrast
T1-weighted MRI brain images, obtained from the Department of Radiodiagnosis of the
Postgraduate Institute of Medical Education and Research in India.
A set of 71 texture and intensity features was collected from tumor images using different
techniques. These include the grey level co-occurrence matrix, which extracts features such as
homogeneity, correlation, contrast, and energy; Laplacian of Gaussian features, a group of
features calculated by filtering the image with Gaussians of different widths; directional Gabor
texture features, calculated by adjusting values of θ (degrees) and λ (pixels) for Gabor filters;
rotation invariant circular Gabor features, obtained by adjusting values of ψ (degrees) and λ
(pixels); rotation invariant local binary patterns, which use histograms to represent the
intensity pattern in a tumor location; and intensity and shape-based features, such as standard
deviation, mean intensity, kurtosis, and skewness, which provide information about the
intensity in tumor areas and define the tumor shape. The GA optimization method improved
the accuracy of SVM to 91.7%, according to test findings.
Amulya et al. [75] used a K-Nearest Neighbour (K-NN) model to classify brain tumor images
into tumor or non-tumor images. The dataset used in this research consists of 101 MR brain
images of normal and abnormal brains, retrieved from two databases, overcode.yak.net and
MR-TIP. The feature extraction methods applied to these data were Scale Invariant Feature
Transform (SIFT), which recognizes and extracts salient corner-like keypoints from images,
and Speeded Up Robust Features (SURF), which is considered an improved version of SIFT
in terms of speed. After extracting the features using the SIFT and SURF techniques, these
features are used as input for the K-NN classifier, which achieves an accuracy of 96.22%.
Rathi et al [20] utilized Support Vector Machine (SVM) for classifying brain tumors along
with techniques such as Linear Discriminant Analysis (LDA) and Principal Component
Analysis (PCA). The dataset used contains 140 tumor MR images obtained from Internet Brain
Segmentation Repository. To make decision-making and pattern classification easier, feature
extraction attempts to extract the visual information from brain tumor images and express it in
a condensed form. Shape-based features, intensity-based features, and texture-based features
are some of the feature extraction techniques used in this paper. These extracted features are
then used as input for the proposed method, which achieved an accuracy of 98.87%.

Selvaraj et al. [76] described the use of Least Squares Support Vector Machine (LS-SVM)
with a Radial Basis Function (RBF) kernel, which is used in LS-SVM for regression and
classification tasks. In this study, LS-SVM was used along with the RBF kernel to classify
brain MRI slices into tumor and non-tumor images. The dataset contains 1100 MR images
composed of tumor and non-tumor images, obtained from Siemens AG Medical Solutions,
Erlangen, Germany.
Features from grey level co-occurrence matrices (GLCM), such as contrast, energy, entropy,
correlation, and inverse difference moment, and other statistical features such as variance and
mean were extracted from the MRI slices. After that, these data were divided into two training
and test sets. The first set has more tumor images than non-tumor images, unlike the second
test set, which has an equal number of tumor and non-tumor images. These two sets were used
as input for the proposed LS-SVM classifier, and the experiments show that the first and
second sets achieved 98.64% and 98.92% classification accuracy, respectively.
Saritha et al. [77] presented an innovative approach using a Probabilistic Neural Network
(PNN) for classification and wavelet entropy-based spider web plots for feature extraction.
For the dataset, T2-weighted MR brain scans were obtained from the website of Harvard
Medical School, categorized into infectious disease, stroke, degenerative disease, brain tumor,
and normal, with 15 images in each category. The feature extraction technique, wavelet
entropy-based spider web plots, measures the entropy of the wavelet components of an image
and then creates a spider web plot whose calculated areas are used as features for the PNN
classifier. The results of the experiment show that this proposed model successfully achieved
an accuracy of 100%.
Zhang et al. [78] conducted a study on using a neural network (NN) derived approach to
classify MRI brain images as tumor or non-tumor images. This approach uses the wavelet
transform for feature extraction from the images and reduces the dimensionality of the features
using the Principal Component Analysis (PCA) technique. An axial T2-weighted MR brain
slice dataset in 256x256 resolution was used, collected from the Harvard Medical School
website and consisting of 18 non-tumor and 48 tumor images. The extracted features were then
fed to the classifier, and the results show that both the training and testing sets had a
classification accuracy of 100%.
In a study conducted by Arakeri et al. [79], a two-level hierarchical content-based medical
image retrieval (CBMIR) system was presented. The CBMIR system helps radiologists in brain
tumor diagnosis by retrieving comparable images from a medical image database. The two-level
hierarchical CBMIR system first classifies the brain tumor image as cancerous or non-cancerous
using SVM, and then searches for the images closest in shape within the identified class.
This system uses global and local feature extraction to detect an image as cancerous or non-
cancerous and to find similar images within the identified class. The collected MR images were
obtained from different places: Kidwai Memorial Institute of Oncology in Bangalore, Shirdi
Sai Cancer Hospital in Manipal, and Harvard Medical School. The proposed CBMIR system
achieved a classification accuracy of 97.95%.
Othman et al. [80] utilized a Probabilistic Neural Network (PNN) approach for classifying
brain tumor MRI scans, conducted in two stages. The first stage was feature extraction,
utilizing PNN with Principal Component Analysis (PCA) to decrease the dimensionality of
the data and calculate feature vectors. The data used in this study are MRI images consisting
of two sets: a training set with 20 subjects and a testing set with 15 subjects. The feature
vectors of the training images calculated using PCA are fed into the PNN classifier, which
achieved a classification accuracy ranging from 73% to 100% depending on the spread value.
The approach used in the study conducted by Basria et al. [81] is called Neuro Fuzzy, or
Adaptive Neuro Fuzzy Inference System (ANFIS), for brain tumor classification. ANFIS is
used to classify abnormal brain images based on where the tumors are located. A total of 35
tumor brain datasets were used, split into two independent sets: a training set of 20 subjects
and a testing set of 15 subjects. The first set was used to train the ANFIS, and the second set
was used to verify the accuracy, which reached 93.33% using the proposed approach.
Nanthagopal et al. [82] used two classifiers for classifying brain tumors: Support Vector
Machine (SVM) and Probabilistic Neural Network (PNN). Features such as dominant run
length and co-occurrence texture features are extracted from computed tomography (CT)
images by utilizing wavelet decomposition.
The dataset was obtained from Aarthi MRI and CT Scans in Madurai, India, and consists of
CT images of 16 patients (8 cancerous, 8 non-cancerous). The accuracies of the SVM and
PNN classifiers using the selected features are given in the study: the PNN classifier's
accuracy is 97.11%, compared to the SVM classifier's reported accuracy of 98.07%.
A reliable method for identifying the disease type in brain magnetic resonance images
(MRIs) is provided in the research conducted by Kalbkhani et al. [83]. The suggested method,
based on the generalized auto-regressive conditional heteroscedasticity (GARCH) model,
classifies MRI brain images into eight categories: normal, or one of seven other brain diseases.
For feature extraction, the Wavelet Transform technique was used, with Principal Component
Analysis (PCA) plus Linear Discriminant Analysis (LDA) for feature selection. The datasets
are not mentioned in this paper, but training and testing were done on MRI brain images.
For classification, two scenarios were considered: the first scenario was classifying the input
image into one of the 8 different classes, and the second was classifying the image as a tumor
or non-tumor image. For the first scenario, the accuracy of SVM and K-NN was 97.62% and
98.21%, respectively, and for the second scenario the accuracy of both classifiers reached 100%.
In the paper by Sindhumol et al. [84], a combination of Sparse Component based
Independent Component Analysis (SC-ICA) and Support Vector Machine (SVM) is used to
classify brain tissue in MRI. SC-ICA, a feature extraction technique applied to the study of
brain MRIs, enhances the classification of brain images and integrates independent component
analysis (ICA), spectral clustering, and SVM. It prioritizes both global and local information
to extract fine details from the images.
The SC-ICA feature extraction method was applied to clinical and synthetic datasets. Then,
the extracted features were analyzed and compared to the performance of the generated ICs.
The dataset, collected from the BrainWeb database, consists of synthetic MRI scans of
abnormal and normal brains. For classification accuracy, SC-ICA with SVM achieved 97%
with a standard deviation of 0.2.

2.3.3 Tumor Classification using Deep Learning

Wong et al. [61] devised a strategy for developing medical image classifiers under the
constraint of limited data, utilizing T1-weighted MRI scans from the TCGA dataset. Achieving
an 82% classification accuracy with the VGG-16 neural network model, their work stands out
for demonstrating a viable approach to constructing effective classifiers even with limited data
availability.
Afshar et al. [62] investigated the applicability of Capsule Networks (CapsNets) in
classifying brain tumors, using MRI images and coarse tumor boundaries. By employing the
T1-weighted Contrast Enhanced dataset and achieving a notable 90.89% classification
accuracy, their work substantiates the effectiveness of CapsNets in MRI-based brain tumor
classification.
In a study conducted by Anaraki et al. [63], T1-contrast enhanced MRI datasets (IXI,
REMBRANDT, TCGA-GBM, TCGA-LGG) were utilized to classify and grade brain tumors.

35
The integration of Convolutional Neural Networks (CNN) and Genetic Algorithms (GA) in
their methodology yielded a 90.9% classification accuracy, underscoring the potency of
melding CNN and GA for MRI-based brain tumor classification and grading.
Sajjad et al. [64] formulated a multi-grade brain tumor classification method, leveraging a
deep Convolutional Neural Network (CNN), specifically VGG-19, along with extensive data
augmentation. With the Radiopaedia dataset and a classification accuracy of 90.67%, their
research evidences the efficacy of deep CNNs coupled with data augmentation in classifying
multi-grade brain tumors.
In a study titled "Multi-Classification of Brain Tumor MRI Images Using Deep
Convolutional Neural Network with Fully Optimized Framework," Irmak et al [65] applied
CNN to multi-classification of brain tumor MRI images, using datasets such as T1-post-
contrast and FLAIR images from RIDER, REMBRANDT, TCGA-LGG, and Cheng et al. [73].
Achieving a robust classification accuracy of 98.14%, this research underscores the utility of
deep convolutional neural networks paired with a fully optimized framework for the multi-
classification of brain tumor MRI images.
Kumar et al [66], through their work "Multi-class brain tumor classification using residual
network and global average pooling," employed the ResNet50 neural network to achieve a
classification accuracy of 97.48% on the T1-weighted Contrast Enhanced dataset. This outcome
underscores the capability of the residual network and global average pooling in accurately
classifying brain tumors across multiple classes.
In a study conducted by Salçin [71], a deep learning method called Faster Region-based
Convolutional Neural Network (R-CNN) was used for the classification and detection of brain
tumors in MRI images. 3,064 publicly available MRI images from 233 patients were used.
These data cover three types of brain tumors: pituitary, meningioma, and glioma.
In the feature extraction step, input images are put through convolution filters to activate
features, and non-linear down-sampling is conducted to clarify the output. Unlike CNN, Faster
R-CNN can process the entire image and find the object location within a specific region of an
image using certain functions. This study has shown that Faster R-CNN performs better than
other deep learning methods, with 91.66% classification accuracy.
The aim of the study conducted by Dehkordi et al. [72] is to classify MRI brain tumor
images into tumor and non-tumor images using an improved convolutional neural network.
The improved CNN structure consists first of feature extraction and then optimal classification.
The dataset consisted of brain images collected from Harvard Medical School and BRATS2015.
The CNN layers are trained using the Nonlinear Levy Chaotic Moth Flame Optimizer
(NLCMFO) for hyperparameter optimization, which aims to enhance the performance of the
Moth Flame Optimizer (MFO), an algorithm utilized to address challenging real-world
optimization problems across several fields. The CNN model optimized using NLCMFO
reached an accuracy of 97.4%.
Research done by Jemimma et al. [10] intended to segment brain tumor regions to ensure
accurate classification and to classify each MRI image as a tumor or non-tumor brain. The
dataset was collected from the BRATS database, with images from 30 patients. A Probabilistic
Fuzzy C-means algorithm was used to separate important regions in the brain images and for
lower dimensional reduction.
To extract texture features, the segments were processed using the Local Directional Pattern
(LDP), which describes image texture features. After that, the extracted features are fed to a
Deep Belief Network (DBN), which classifies the MRI brain images into tumor or non-tumor
brain images. The proposed method was tested using the BRATS database, and its accuracy
reached 95.78%.

2.4. Summary of literature review


Table 1: Summary of papers discussing the classification of brain tumors

| Ref | Method | Dataset | Model | # of classes | Accuracy |
| --- | --- | --- | --- | --- | --- |
| [74] | Traditional | The Advanced Medical and Dentist Institute, Malaysia | SVM | 2 | 65% |
| [19] | Traditional | Postgraduate Institute of Medical Education and Research, Department of Radiodiagnosis, India | GA-SVM | 5 | 91.7% |
| [75] | Traditional | overcode.yak.net & MR-TIP database | K-NN using SURF & SIFT | 2 | 96.22% |
| [20] | Traditional | Internet Brain Segmentation Repository | SVM using PCA & LDA | 2 | 98.87% |
| [54] | Traditional | The Cancer Imaging Archive (TCIA) | PCA and Random Forest | - | 83% |
| [55] | Traditional | Several test datasets: the IBSR, the LPBA40, and the first two discs (77 T1-weighted scans) of the OASIS cross-sectional MRI dataset | Random Forest | - | - |
| [56] | Traditional | Local MR images (T1, T2, FLAIR) | SVM | 3 | 86% |
| [57] | Traditional | BraTS (T1, T2, T1c, FLAIR) | Random Forest | 3 | 86.87% |
| [58] | Traditional | BraTS2013 dataset | RELM, PCA-NGIST | 3 | 94.233% |
| [59] | Traditional | BRATS challenge & Goa Medical College | LS-SVM with MLP kernel | 2 | 96.63% |
| [60] | Traditional | - | SVM | 2 | 96% |
| [61] | Deep Learning | TCGA (T1-weighted) | VGG-16 | 3 | 82% |
| [62] | Deep Learning | T1-weighted Contrast Enhanced | CapsNets | 3 | 90.89% |
| [63] | Deep Learning | T1-contrast enhanced (IXI, REMBRANDT, TCGA-GBM, TCGA-LGG) | CNN, GA | 4 | 90.9% |
| [64] | Deep Learning | Radiopaedia | VGG-19 | 4 | 90.67% |
| [65] | Deep Learning | T1-post-contrast, FLAIR (RIDER, REMBRANDT, TCGA-LGG, Cheng et al. [73]) | CNN | 3 | 98.14% |
| [66] | Deep Learning | T1-weighted Contrast Enhanced | ResNet50 | 3 | 97.48% |
| [76] | Traditional | Siemens AG Medical Solutions, Erlangen, Germany | LS-SVM with RBF | 2 | 98.92% |
| [77] | Traditional | Harvard Medical School website | PNN with wavelet entropy-based spider web plots | 5 | 100% |
| [78] | Traditional | Harvard Medical School website | NN | 2 | 100% |
| [79] | Traditional | Kidwai Memorial Institute of Oncology, Bangalore & Shirdi Sai Cancer Hospital, Manipal & Harvard Medical School | CBMIR system utilizing SVM | 2 | 97.95% |
| [80] | Traditional | MRI images (source not mentioned) | PNN | - | 73% to 100%, depending on spread value |
| [81] | Traditional | Synthetic contrast enhanced MR images (source not mentioned) | Neuro Fuzzy | - | 93.33% |
| [82] | Traditional | Aarthi MRI and CT Scans, Madurai, India | SVM & PNN | 2 | PNN: 97.11%; SVM: 98.07% |
| [83] | Traditional | MRI brain images (source not mentioned) | SVM & K-NN with GARCH model | 8 / 2 | 8 classes: SVM 97.62%, K-NN 98.21%; 2 classes: both 100% |
| [84] | Traditional | BrainWeb database | SC-ICA & SVM | 5 | 97% |
| [73] | Traditional | Nanfang Hospital, China | Intensity histogram; GLCM; BoW | 3 | 82.31%; 84.75%; 88.19% |
| [67] | Traditional | 229 CT & MRI scans (source not mentioned) | K-NN | 10 | 95% |
| [68] | Traditional | MRI images from University of Maryland Medical Center, oncology department | SVM & ANN | 5 | 97.37% |
| [69] | Traditional | BrainWeb: Simulated Brain Database | SVM-KNN | 2 | 98% |
| [70] | Traditional | Kaggle Brain Tumor Detection 2020 dataset | CFIC | - | 98.97% |
| [71] | Deep Learning | 3,064 MRI scans of 233 patients | R-CNN | 3 | 91.66% |
| [72] | Deep Learning | Harvard Medical School & BRATS2015 | CNN optimized with NLCMFO | 2 | 97.4% |
| [10] | Deep Learning | BRATS database | DBN | 2 | 95.78% |
2.5. Knowledge Gap
Even with significant advancement in classifying brain tumors using both classic and
advanced Artificial Intelligence techniques, there are still unanswered questions and research
gaps which need to be addressed, such as:
1- The classification accuracy is based on segmentation. However, manual segmentation and
evaluation of MRI brain images conducted by radiologists are tedious; i.e., there is a need to
use automatic machine learning techniques with higher accuracy.
2- There is a need to implement the most recent image processing approaches to improve the
quality of real-world medical images.
3- Furthermore, there is limited use of sophisticated techniques for studying how brain tumors
change over time and for exploring the power of mix-and-match approaches.
4- Many studies use small and sometimes different sets of data, making it hard to compare
results or apply them widely.
Tackling these issues is essential to make tumor identification methods better and more widely
usable.

Chapter 3: SPMP & Requirement Specification
3.1. Managerial Process Plans

This section of the Project Management Plan specifies the project management processes
for the project. This section defines the plans for project start-up, risk management, project
work, project tracking, and project close-out.

3.1.1 Startup Plan.

The project's start-up plan identifies the materials and resources required. It serves as a project
estimation plan, staffing strategy, and training plan for project staff.

A Estimates

Project management success depends on accurately estimating the cost and timeline, which
are any project's two most crucial components. The amount of technology, software, human
resources, and other resources needed to finish this project will influence its cost. The second
crucial element is time estimation; it is projected that this project will take two to three months
to complete, allowing us to avoid any unexpected challenges.

B Staffing

Five students from Imam Abdulrahman bin Faisal University are on the team working on
this research in the field of computer science. Each project phase should be managed and
specified, and team members should be aware of the abilities needed for each step. The table
below gives more details.

C Project Staff Training

The team members' skills and abilities must cover a broad range of subjects to create
and investigate the brain tumor detection system with the highest level of efficiency. These
competencies, such as deep learning, data analytics, programming, analytical thinking, and
writing abilities, were acquired through prior and present course work.

Table 1: Project staffing plan

| No | Project Phase | Human Resource | Duration | Required skills |
| --- | --- | --- | --- | --- |
| 1 | Project Proposal | Team members | One week | Analytical and writing skills |
| 2 | Submit Project Management Plan (SPMP) | Team members | Two weeks | Analytical and writing skills |
| 3 | Submit Project Requirements Analysis (SRS) | Team members | Two weeks | Analytical and writing skills, and information gathering skills |
| 4 | Submit Project Design (SDS) | Team members | Two weeks | Analytical and writing skills, and database knowledge |
| 5 | Delivery & Presenting of Complete Project | Team members | One week | Analytical and writing skills |
3.1.2 Work Plan

A Work Breakdown Structure

The tasks and project breakdown activities that must be completed are listed in the table
below, along with an estimate of their duration, deliverables, and the current progress.
Table 2: Work breakdown structure

| Task No | Task Name | Estimated duration | Deliverables of the task | Status |
| --- | --- | --- | --- | --- |
| 1 | Project proposal | One week | - | Completed |
| 2 | Project planning (SPMP) | Two weeks | Midterm Report (document) | Completed |
| 3 | Project's requirements (SRS) | Two weeks | - | Incomplete |
| 4 | Designing (SDS) | Two weeks | - | Incomplete |
| 5 | Final report | One week | - | Incomplete |

B. Schedule Allocation

The time required for each task in the overall project process is shown in the table below;
group members are required to adhere to this schedule.

Table 3: Schedule allocation

| Time Frame | Work Activity |
| --- | --- |
| May 16 – May 20 | Project Proposal & Definition |
| Sep 27 – Sep 30 | Problem Statement & Literature Review, Software Project Management Plan (SPMP) |
| Oct 5 – Oct 10 | Midterm report |
| Oct 30 – Nov 6 | Software Requirements Specification (SRS) |
| Nov 12 – Nov 20 | Software Design Specification (SDS) |
| Dec 5 – Dec 13 | Final Report |

C. Resource Allocation

The process of distributing all currently available and necessary resources is known as
resource allocation.
Both human and non-human resources are efficiently used and applied for this project. The
figure below lists some of the resources needed for this project.

Figure 1: Resource allocation

D. Budget Allocation

Since this project depends entirely on non-human resources, such as software (Microsoft
Word, Sketch, and PowerPoint) and programming languages such as Python, it has
extremely few costs, if any at all.
3.2 Overall description
3.2.1. Product perspective
This table shows the project assumptions and constraints.
Table 4: Assumptions and Constraints

Assumptions:
1- Completion and submission of the project in due time.
2- All the members are expected to work equally.
3- All the needed resources are available to use.
4- All the members are expected to meet at least twice a week to review the work.
5- Apply deep learning techniques using the Python programming language.
6- The implementation will be applied using modern technological standards.
7- The system is connected to a database.
8- The system can be used by doctors and nurses.

Constraints:
1- Limited time: Project submission is restricted to the due date.
2- Limited budget: Project should not exceed the specified budget.
3- Limited resources: The system targets only patients with brain tumors.

3.2.2. Product functions


This product intends to increase the accuracy of rapid detection, implement safe, practical
approaches, and apply contemporary technology. Intelligent decision-making techniques and
technological advancements have the potential to save many lives and lower the demand for
caregivers and overall healthcare expenditures.

3.3. Specific requirements
3.3.1. External interface requirements
The project is brain tumor detection using deep learning techniques. It is supervised by Dr.
Gharsah Alqarni. The external interface is shown in the next figure.

Figure 2: External interface

3.3.2. User interfaces

This project is supervised by Dr. Gharsah Alqarni. The team members are Jumanah Abdullah
Alghamdi (Leader), Ghadah Ahmed Hasen (member), Hedaya Ahmed Al Rumaih (member),
Zahra Abdullah Almusajin (member), Aziza Modrek Alsaiary (member). Roles are given to
members based on their skills; Figure 3 below shows the internal structure.

Figure 3: User interface

Figure 4: User interface

3.3.3. Functional Requirements
Table 5: Roles and responsibilities

| Name | Role | Responsibilities |
| --- | --- | --- |
| Dr. Gharsah Alqarni | Project Supervisor | Oversees the work and checks the quality of each deliverable. |
| Jumanah Alghamdi | Project Manager | Sets the time and management; sets the project goals and objectives; motivates the team members to perform their tasks; helps the team in allocating tasks and resolving issues. |
| Jumanah Alghamdi | Project Plan Leader | Assigns members their work; responsible for the project schedule; helps team members locate resources. |
| Ghadah Hasen | Requirement Design Leader | The hiring of analysts and collectors. |
| Hedayah AlRumaih | Methodology | Creating and developing the methodologies of the project. |
| Zahra Almusajin | Project Testing Leader | Developing a testing plan. |
| Aziza Alsaiary | Project Design Leader | Creating software that complies with the needs; designing the document. |
| All Members | Implementation | Making our ideas and project plans a reality. |
| All Members | Analyzing the requirements | Agreeing on the requirements; analyzing the chosen ones. |
| All Members | Testing | Trying to find errors in the programs; testing if all functions work as planned. |
| All Members | Techniques and data gathering | Proper techniques for collecting data. |

3.3.4. Nonfunctional requirements
This section discusses the non-functional requirements of the project.
3.3.4.1 Operability
To maximize the performance and outcome, the application is trained on a dataset of
images of various brain tumors, then deployed into the hospital system.
3.3.4.2 Performance Requirements
The first impression of the application should be positive for the user when using it, as
health caregivers not only need the satisfaction of use, but more importantly an optimized outcome
in response. The application must be able to do the following:

a. Run on hospital systems using its database.
b. Always be accessible.
c. Extract brain images from the hospital database and support all input images.
d. Run fast with no complications from the number of users or number of input images.
e. Produce accurate results and diagnoses.

3.3.4.3 Security and Privacy Requirements


Since the application will be connected to the hospital database, patients’ confidentiality
should be guaranteed. The application should be able to keep patients’ information private and
prevent unauthorized access to the input images during their use in the application. Authorized
users include hospital workers, specifically those who work with the databases and have access
to patients’ brain images.

Chapter 4: Methodology & Design

Overview
This chapter presents an explanation of how the suggested solution satisfies the demands
and specifications of the model. It also describes how data is gathered, prepared for analysis,
and classified, and how models are created, requirements are examined, and system models
are built.
4.1 Proposed methodology
The proposed methodology involves a comprehensive approach to classifying brain
tumors from MRI images using deep learning. Firstly, a curated dataset comprising diverse brain
tumor cases will be collected, ensuring a balanced representation of various tumor types. The
preprocessing stage will include standardization, normalization, and augmentation techniques to
enhance the model's ability to generalize. Subsequently, a (DenseNet201) architecture will be
designed, leveraging state-of-the-art deep learning frameworks. Training will be conducted on
the dataset, with a focus on optimizing hyperparameters to achieve optimal performance.
Evaluation metrics such as accuracy, sensitivity, specificity, and area under the ROC curve will
be employed to assess the model's efficacy. Rigorous cross-validation and testing on an
independent dataset will validate the model's generalization capabilities and robustness in real-
world scenarios. The proposed methodology aims to provide a reliable and accurate tool for the
early detection and classification of brain tumors through non-invasive MRI analysis.
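As a concrete illustration of this pipeline, the following is a minimal transfer-learning sketch of a DenseNet201 classifier in Keras; the input size, classification head, and hyperparameters are illustrative assumptions rather than the project's final configuration.

```python
# A minimal transfer-learning sketch of a DenseNet201 classifier in Keras.
# Input size, head layers, and hyperparameters are illustrative assumptions,
# not the project's final configuration.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained features; optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(3, activation="softmax"),  # glioma / meningioma / pituitary
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_data, validation_data=val_data, epochs=20)  # hypothetical data
```

Sensitivity and specificity can then be derived per class from the confusion matrix of the test-set predictions.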
4.1.1 Purpose
This chapter contains comprehensive information on the project's needs. It discusses
the elements and capabilities of the suggested Deep Learning model. The goal of this model is to
use MRI scans to aid in the early diagnosis of brain tumors.
4.1.2 Data gathering
This section will discuss the collection of datasets for our model, which is a crucial component
of the project's construction. The dataset includes MRI images of 233 individuals with brain
tumors, sourced from an open-source dataset (T1W-CE MRI) presented by
Cheng et al. in 2017.
4.1.2.1 Dataset description

| Class name / category | Description |
| --- | --- |
| glioma | 1426 images |
| meningioma | 708 images |
| pituitary | 930 images |
| Total | 3064 images |

4.1.3 Data preprocessing
This is a crucial stage in the process of creating deep learning models. The data will be split
into training and testing sets. Before the MRI images are sent into the model for training
and testing, they will also be converted to a uniform size. Likewise, all new, unseen data will
pass through this preprocessing pipeline before being put into the model.
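A minimal sketch of such a pipeline is shown below, assuming images are loaded from files with OpenCV and split with scikit-learn; the target size and split ratio are assumptions.

```python
# A sketch of the preprocessing pipeline described above: uniform resizing,
# intensity normalization, and a train/test split. File paths, the target
# size, and the split ratio are assumptions.
import numpy as np
import cv2
from sklearn.model_selection import train_test_split

def preprocess(path, size=(224, 224)):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, size)                  # uniform size for the network
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2RGB)  # 3 channels for pretrained models
    return img.astype("float32") / 255.0         # scale intensities to [0, 1]

# `paths` and `labels` would be built by walking the dataset folders:
# X = np.stack([preprocess(p) for p in paths])
# X_train, X_test, y_train, y_test = train_test_split(
#     X, labels, test_size=0.2, stratify=labels, random_state=42)
```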
4.1.3.1 Data augmentation
Data augmentation is the process of artificially increasing the quantity of data allocated for
the model's training. This method aims to prevent overfitting and improve the model's
accuracy. By performing small adjustments to the images (such as flipping them horizontally),
we may generate a large number of images that would not otherwise be available to train the
model. The classes in the dataset we intend to use are unbalanced; certain classes have more
images than others. To address this issue, we can consider employing data augmentation
approaches, as sketched below.
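One common way to realize this in Keras is ImageDataGenerator; the particular transforms and their ranges below are assumptions.

```python
# An illustrative augmentation setup with Keras' ImageDataGenerator; the
# particular transforms and their ranges are assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(
    horizontal_flip=True,     # e.g., horizontally flipping scans
    rotation_range=10,        # small random rotations (degrees)
    width_shift_range=0.05,   # small random translations
    height_shift_range=0.05,
    zoom_range=0.1,
)
# Each epoch then sees slightly different variants of the training images:
# train_generator = augmenter.flow(X_train, y_train, batch_size=32)
```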
4.1.4 Image classification
Since understanding an image takes effort, classifying images is crucial. The goal is to find
the category that the image most closely matches and to label it appropriately. Object detection,
in contrast, involves both localization and classification tasks and is used to investigate
more realistic circumstances where several entities may coexist in a picture. Classifying a picture
is difficult due to its complexity and its dependence on several aspects. Using unique grey levels,
image classification seeks to identify the properties of a picture and express them in terms of the
object or kind of land cover they represent. Image classification is the most important
component of digital image analysis.
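Once a classifier assigns labels, its output can be scored with the metrics named in the proposed methodology (accuracy, sensitivity, specificity, AUC). The sketch below computes them with scikit-learn on illustrative stand-in predictions for a binary case.

```python
# Computing the evaluation metrics named in the methodology (accuracy,
# sensitivity, specificity, AUC) with scikit-learn; the labels and predicted
# probabilities below are illustrative stand-ins for a binary case.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_prob = np.array([0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("sensitivity:", tp / (tp + fn))  # true positive rate
print("specificity:", tn / (tn + fp))  # true negative rate
print("AUC:        ", roc_auc_score(y_true, y_prob))
```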
4.1.5 Image feature extraction
In order to recognize a pattern, feature extraction provides the legitimate texture features
that are present in the pattern. Feature mining is a particular type of dimensionality reduction
used in pattern recognition and image processing. The main goals of feature extraction are
finding the most relevant information in the raw data and representing it in a lower dimensional
space. When the inputs to algorithms become too large to manage and are deemed redundant,
the source data is reduced to a smaller feature representation set, commonly referred to as a
feature set. Feature extraction is the process of deriving a set of attributes from the input data.
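One common realization of this step in a deep learning pipeline is to use a pretrained CNN as a fixed feature extractor; the sketch below, using DenseNet201's pooled features, is an assumption-laden illustration rather than the project's final design.

```python
# Using a pretrained CNN as a fixed feature extractor: each image is mapped
# to a compact feature vector (dimensionality reduction). The model choice
# and input size are assumptions.
import numpy as np
from tensorflow.keras.applications import DenseNet201
from tensorflow.keras.applications.densenet import preprocess_input

extractor = DenseNet201(weights="imagenet", include_top=False, pooling="avg")

batch = np.random.rand(4, 224, 224, 3) * 255  # stand-in for a batch of MRI images
features = extractor.predict(preprocess_input(batch))
print(features.shape)  # (4, 1920): one 1920-dimensional feature vector per image
```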
4.1.6 Deep learning models
DL is a kind of ML that simulates how the human brain handles information. Instead of
requiring human-created rules to operate, DL leverages a vast quantity of data to transform the
inputs into a label set. Over the past ten years, the DL field has grown quickly and been
successfully applied to a wide range of typical tasks. Most importantly, DL has outperformed well-
known ML approaches in a number of disciplines, including cybersecurity, genomics, natural
language processing (NLP), robotics and automation, and the study of medical data.

4.1.7 Product perspective
Since brain tumors are dangerous and can endanger a patient's life if treatment is delayed,
we are developing a deep-learning model to help doctors detect the disease early on. This deep
learning method will be precise, quick, and incredibly dependable when applied in the healthcare
industry.
4.1.8 Product function
The model's output must be extremely accurate because it will be utilized in the medical
industry. For this reason, the model must have the following characteristics:
• Using the patient's MRI scan, the model ought to be able to identify the tumor.
• Due to the delicate nature of the medical industry, the model must produce high-accuracy outcomes.
4.1.9 User classes and characteristics
Our model is intended for usage by physicians in the medical profession. Because this is
a very sensitive field, a skilled artificial intelligence team should be in place to monitor the
model and look for any potential problems. We must ensure that the model is operating at peak
efficiency if we want to let it assess patients on its own and provide findings without intervention
from medical professionals. If not, the system won't be suitable.

4.2 System Design


This section discusses the design of the system.

4.2.1 Consideration
This section presents the issues faced with the HunTumor desktop application that must be resolved
before we start the designing phase.

4.2.1.1 Assumptions and Dependencies


During the HunTumor design phase, several assumptions should be taken into consideration.
Assumptions can be categorized according to their relevance to:

4.2.1.1.1 Related Software or Hardware


HunTumor is an advanced desktop application built using Python, tailored explicitly for
brain tumor classification. Its resilient functionalities and potent algorithms empower precise and
efficient tumor categorization. The software integrates a secure, dependable database system,
leveraging MySQL for seamless data handling and storage. MySQL guarantees the integrity and
accessibility of tumor classification data, enabling swift retrieval and thorough analysis. By
harnessing Python for programming and MySQL for the database, HunTumor presents a holistic
approach to classifying brain tumors, providing medical practitioners and researchers with a
robust tool for diagnostics.
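As an illustration of this Python-plus-MySQL combination, the sketch below stores a classification result with mysql-connector-python; the schema, credentials, table, and column names are all hypothetical.

```python
# A hypothetical sketch of persisting a classification result with
# mysql-connector-python; credentials, database, table, and column names
# are all assumptions.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="huntumor",
                               password="***", database="huntumor_db")
cur = conn.cursor()
cur.execute(
    "INSERT INTO diagnoses (patient_id, tumor_type, scan_path) VALUES (%s, %s, %s)",
    ("P-1024", "glioma", "/scans/P-1024/2023-12-01.png"),
)
conn.commit()
cur.close()
conn.close()
```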

4.2.1.1.2 End Users Expectations
Our project emphasizes brain tumor classification, specifically designed for doctors.
Within this platform, doctors have the ability to upload new MRI scans and retrieve their
patients' data.

4.2.1.1.3 Operating System


HunTumor will be able to run on both Windows and macOS operating systems. Both
operating systems are compatible with the Python language, which is what we are going
to use.

4.2.1.1.4 Possible and/or probable changes in functionality


The final design might differ from the initial concept. The initial plan might not be
feasible, or there could be an opportunity for an improved design. However, due to project time
constraints, only minor alterations can be made to avoid disrupting deadlines and schedules.

4.2.1.2 General Constraints


This section discusses the HunTumor desktop application design considerations.

4.2.1.2.1 Standard Compliance


The system is anticipated to be fast, of high quality, and efficient in performance, as befits a
graduation project that follows the university's standards and caliber.

4.2.1.2.2 End-User System


The system should be compatible with various browsers for ease of use and convenience
to doctors.

4.2.1.2.3 Interface Requirements


HunTumor is anticipated to be user-friendly, providing easy navigation and
understanding of application elements, as well as being bug-free, clear in its colors and
labels, and providing direct error messages.

4.2.1.2.4 Data Repository


The system is built using Python, as it suits the project best regarding compatibility, dataset
processing, and image classification abilities, therefore processing local datasets. Additionally, the
system utilizes MySQL as a database for storing patient data including their image diagnoses.

4.2.1.2.5 Memory Capacity


The system's capacity needs to be sufficient for it to work with the training model on the
provided amount of data, so that it performs at its best. The device requirements for
optimized results are the following:

RAM: 16 GB.

Storage size: 1 TB SSD.

Processor: Intel Core i7, 10th generation, 1.8 GHz (up to 2.3 GHz).

4.2.1.2.6 Security Requirements


The HunTumor desktop application works with patient data to classify brain tumor types.
Since it has access to patient files from the database, it must have efficient security measures.
Access is permitted only for doctors, thereby denying unauthorized access and keeping the
patients’ information safe. Also, if the user fails to sign in, for reasons such as a wrong ID or
password, the system will display an automated message stating that access is denied and the
user must enter their login information again.
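A hypothetical sketch of such a sign-in check is shown below, comparing the entered password against a stored salted hash with bcrypt; the table and column names are assumptions.

```python
# A hypothetical sketch of the sign-in check: the entered password is verified
# against a stored salted hash, and a failure yields the automated "access
# denied" message. The table and column names are assumptions.
import bcrypt

def sign_in(cursor, doctor_id: str, password: str) -> bool:
    cursor.execute("SELECT password_hash FROM doctors WHERE id = %s", (doctor_id,))
    row = cursor.fetchone()
    if row and bcrypt.checkpw(password.encode(), row[0].encode()):
        return True  # authorized: navigate to the doctor homepage
    print("Access denied. Please enter your login information again.")
    return False
```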

4.2.1.2.7 Performance Requirements


The HunTumor desktop application must work in the best way possible, with optimized
results. For the performance to be excellent, these points are considered:

1. Availability: The desktop application must be available at all times.
2. Response time: The desktop application must retrieve information from the database
and give results in a speedy manner.
3. Recovery from faults: The desktop application must be able to recover from faults and
operate properly.

Other requirements can be found in section 4.1.6.

4.2.1.2.8 Verification and Validation Requirements


The system should be able to determine whether it meets the requirements. It should
confirm the following:

1. Deny access to unauthorized individuals.
2. Update the database information after diagnoses.
3. Detect failures to ensure proper operation.

4.3 Interface Design
This section covers user interface design, interface design rules, screen images, screen objects
and actions, and other interfaces.

4.3.1 Overview of Interface Design


The interface design of the HunTumor desktop application is user-friendly, allowing doctors
to get accustomed to the application rapidly. First, the doctor signs in to the system using
their ID and password. Afterwards, doctors can view their patients or create a new MRI scan.

4.3.2 Interface Design Rules


The HunTumor desktop application aims to have a user-friendly, effective, and clear design
interface. Therefore, HunTumor will follow the Eight Golden Rules to achieve this goal. The
rules that are applicable to our desktop application are listed below:

• Strive for consistency: Consistency in writing style, button appearances, and colors is
crucial to prevent confusion for doctors using the application.
• Offer informative feedback: Whenever a doctor takes action in the app, ensure immediate
feedback, keeping them informed about the progress of their actions.
• Design dialogue to yield closure: After any interaction, present a confirmation message to
enhance the doctor's experience and keep them informed about the outcome of their
actions.
• Offer simple error handling: Strive to prevent errors in design; however, if an error
occurs, offer a clear error message to doctors along with instructions to resolve the issue.
• Permit easy reversal of actions: Allow doctors to easily undo or cancel actions, especially
in case of errors, to reduce anxiety and promote comfort in using the app.
• Support internal locus of control: Design the app to grant doctors a sense of complete
control, enhancing their feeling of autonomy.
• Reduce short-term memory load: Keep interfaces simple by presenting easily
recognizable information, reducing the need for users to rely on their memory.
• Introduce efficient shortcuts: Implement time-saving shortcuts within the application to
enable doctors to swiftly navigate or perform frequent tasks, enhancing their efficiency
and productivity.

4.3.3 Screen Images


The interfaces for the HunTumor application are described in this section, briefly describing each
interface and the necessary fields.

4.3.3.1 Main Interface
In the HunTumor application, the first page to appear is the main interface. The doctor will be
able to click on the Sign In button.

Figure 5: HunTumor Main Interface

4.3.3.2 Sign In Interface


In the sign-in interface, doctors should enter their IDs and passwords to continue signing in. In
case they forget their password, they may click on "Forget my password" to be redirected to the
Forget My Password interface.

Figure 6: HunTumor Sign in Interface

4.3.3.3 Forget My Password
The doctor will be able to enter their ID and email.

Figure 7: HunTumor Forget My Password Interface

4.3.3.4 Doctor Homepage


The doctor will be able to see their name and ID and choose whether they want to view their
patients or create a new MRI scan.

Figure 8: HunTumor Doctor Homepage Interface

4.3.3.5 Patients
The doctor will be able to see their patients with their IDs and last MRI scans.

Figure 9: HunTumor List of patients Interface

4.3.3.6 Patient Information
The doctor will be able to see their patient's information and tumor type, with a picture of the
last MRI scan.

Figure 10: HunTumor Patient Information Interface

4.3.3.7 Create New MRI Scan


The doctor will be able to choose the patient and add the newest MRI scan to the classifier. Also,
doctors will be able to choose the model which will be used in the classification process.

Figure 11: HunTumor Create New MRI Scan Interface

The result of the classification process will appear as a pop-up window.

Figure 12: HunTumor Classification Pop-Up Interface

4.4 Detailed System Design


In this section, we discuss the system’s components. Each component is explained thoroughly
including its preprocessing information.

4.4.1 Definition, Classification and Responsibilities
All system components are classified into categories and listed below with their definitions and
responsibilities.

Components:

1. Sign In:-
a. Classification: function.
b. Definition and responsibility: doctors access the system using ID and password.
2. Forgot My Password:-
a. Classification: function.
b. Definition and responsibility: allows doctors to renew their passwords when
forgotten.
3. Sign Out:-
a. Classification: function.
b. Definition and responsibility: doctors sign out of the system.
4. Patients:-
a. Classification: function.
b. Definition and responsibility: provides patient information.
5. New MRI Scan:-
a. Classification: function.
b. Definition and responsibility: doctors open a new page for classifying MRI scan
images.
6. Upload Image:-
a. Classification: function.
b. Definition and responsibility: doctors upload images of the MRI scans.
7. Model Name:-
a. Classification: function.
b. Definition and responsibility: doctors select the model type for classification.
8. Classify:
a. Classification: function.
b. Definition and responsibility: classify the input images.

9. Pop-Up Tumor Type:-
a. Classification: message.
b. Definition and responsibility: provide the result of the classification.
10. Save:-
a. Classification: function.
b. Definition and responsibility: save new classification into the patient’s information.

4.3.2 Constraints and Composition


In this section, we present the components’ functions with their constraints and composition,
specifying the pre-conditions and post-conditions.

Components:

1. Sign In:-
a. Constraints: doctor must be registered with their ID.
b. Pre-condition: ID and password.
c. Post-condition: system authorizes the doctor and navigates them to the
homepage.
2. Forgot My Password:-
a. Constraints: doctor must have an authorized account.
b. Pre-condition: ID and email.
c. Post-condition: doctor receives an email to renew their password.
3. Sign Out:-
a. Constraints: doctor must be signed in.
b. Pre-condition: click on the sign out button.
c. Post-condition: navigation to the sign in page.
4. Patients:-
a. Constraints: doctor must be signed in.
b. Pre-condition: click on patients button.
c. Post-condition: reveal patient’s information.
5. New MRI Scan:-
a. Constraints: doctor must be signed in.

b. Pre-condition: click on the new MRI scan button.
c. Post-condition: navigate to the new MRI scan page.
6. Upload Image:-
a. Constraints: doctor must have a saved image to upload.
b. Pre-condition: click on upload image button, select an image, click upload
button.
c. Post-condition: image uploaded successfully.
7. Model Name:-
a. Constraints: doctors must select one model each time.
b. Pre-condition: click on the desired model type.
c. Post-condition: model selected successfully.
8. Classify:
a. Constraints: doctor must upload images and select a model type.
b. Pre-condition: click on classify button.
c. Post-condition: classification done successfully.
9. Pop-Up Tumor Type:-
a. Constraints: doctor must select images, model type, and click classify button.
b. Pre-condition: click on classify.
c. Post-condition: Pop-Up Tumor Type message with patient ID and tumor type.
10. Save:-
a. Constraints: doctor must have new classification to update the database with.
b. Pre-condition: click on save button.
c. Post-condition: classification saved into patient’s information database
successfully.

4.3.3 Preprocessing
In this section, we demonstrate the components’ description, inputs, outputs, and constraints.

Components:

1. Sign In:-
a. Description: doctors sign in to navigate to the homepage.
b. Inputs: valid ID and password, then click on Sign In button.
c. Outputs: navigation to the homepage.
d. Constraints: doctors must have an authorized account and enter their ID and
password correctly.
2. Forgot My Password:-
a. Description: when the doctor clicks on Forgot My Password button, they will
be sent an email to renew their password if they are registered.
b. Inputs: ID and email, then click on Forgot My Password button.
c. Outputs: an email is sent to the doctor to renew their password.
d. Constraints: doctor must have a registered account in the system, and email
must match with the doctor’s email used in the system.
3. Sign Out:-
a. Description: doctor can sign out of their account.
b. Inputs: click on Sign Out button.
c. Outputs: navigation to the Sign In page.
d. Constraints: doctor must be signed in.
4. Patients:-
a. Description: doctors can view their patients list.
b. Inputs: click on Patients button.
c. Outputs: patients list is provided.
d. Constraints: doctor must be signed in.
5. New MRI Scan:-
a. Description: doctors can enter a new MRI scan for the classification process.
b. Inputs: click on New MRI Scan button.

c. Outputs: navigation to New MRI Scan page.
d. Constraints: doctor must be signed in.
6. Upload Image:-
a. Description: doctor can enter MRI scan images.
b. Inputs: click on Upload Image button, upload MRI scan images, then click on
Upload button.
c. Outputs: images uploaded successfully.
d. Constraints: doctor must have MRI scan images to upload.
7. Model Name:-
a. Description: provides a list of models for classification.
b. Inputs: click on the desired model.
c. Outputs: model selected successfully.
d. Constraints: doctor must be in the New MRI Scan page.
8. Classify:
a. Description: allows the uploaded images to be classified based on the selected
model.
b. Inputs: click on Classify button.
c. Outputs: classification done successfully.
d. Constraints: doctor must upload MRI scan images and select a model.
9. Pop-Up Tumor Type:-
a. Description: a message containing the patient ID and tumor type.
b. Inputs: click on Classify button.
c. Outputs: results of the classification.
d. Constraints: doctor must upload MRI scan images and select a model, then click
Classify button.
10. Save:-
a. Description: allows the doctor to save the new diagnosis (a sketch of the
classify-and-save flow is shown after this list).
b. Inputs: click on Save button.
c. Outputs: diagnosis saved into the patient’s information database.
d. Constraints: doctor must receive the Pop-Up Tumor Type message after
clicking Classify button.
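
The classify-and-save flow described above can be illustrated with a short Python sketch. This is a minimal illustration only: the model file name, the class ordering, and the save_to_database helper are hypothetical, not the project's actual implementation.

# Minimal sketch of the Classify / Pop-Up / Save flow (illustrative only).
import numpy as np
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image

CLASS_NAMES = ["Glioma", "Meningioma", "Pituitary"]  # assumed label order

def classify_scan(model_path, image_path):
    """Classify an uploaded MRI image with the selected model."""
    model = load_model(model_path)                            # Model Name component
    img = image.load_img(image_path, target_size=(224, 224))  # uploaded MRI scan
    x = image.img_to_array(img)[np.newaxis, ...] / 255.0      # batch of one, scaled
    probs = model.predict(x)[0]                               # class probabilities
    return CLASS_NAMES[int(np.argmax(probs))]                 # tumor type for the pop-up

# Pop-Up Tumor Type, then Save (save_to_database is a hypothetical helper):
# tumor_type = classify_scan("densenet201_brain.h5", "patient_123.png")
# save_to_database(patient_id="123", tumor_type=tumor_type)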

Chapter 5: Implementation
5.1. Model
5.1.2. DenseNet201
Among deep learning architectures, the Densely Connected Convolutional Network
(DenseNet) was created by improving the gradient flow of the ResNet architecture. Compared
to traditional convolutional network methods, DenseNet requires fewer parameters while
improving the network's gradient and information flow. The network is simpler to train
because every layer in the DenseNet structure has direct access to the original input signal and
the loss function. It additionally includes dense connections between the layers, which shape
how feature maps are passed through the network [90].

DenseNet uses feedforward connections to link every layer to all layers that come after
it. In contrast, a standard L-layer CNN contains only L connections; DenseNet increases the
number of direct connections between layers to L(L+1)/2, which improves the propagation of
features. Each layer of the model produces a feature map, and the feature maps of all preceding
layers are used as input for the next layer. By directly linking every layer in the network,
DenseNet optimizes the transit of information within it: it alleviates the vanishing-gradient
problem, increases feature diffusion, promotes feature reuse, and substantially reduces the
number of parameters [91]. Additionally, since multiple layers can reuse features, the
condensed DenseNet201 network produces parametrically efficient models that are simple to
train, enhance the diversity of layer inputs, and improve efficiency [90].

Specifically, DenseNet201 has 201 layers. The architecture is composed of many dense
blocks, transition layers, and a final classification layer. The precise details of the layers may
vary based on the implementation and particular needs; however, here is a general overview
[92][93]:

5.1.2.1. Input Layer


The initial layer, which takes the input data.

5.1.2.2. DenseBlocks
In DenseNet, several densely connected layers make up each dense block. Every layer in
these blocks receives the feature maps of all preceding layers. The number of filters varies
across blocks, but the feature map dimensions stay constant within each block.

5.1.2.3. Transition layers
These are the layers that connect consecutive DenseBlocks. Referred to as transition
layers, they are responsible for downsampling, using batch normalization, a 1x1 convolution,
and a 2x2 pooling layer.
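
To make this connectivity concrete, the following is a minimal Keras sketch of one dense block and one transition layer. The growth rate, layer counts, and compression factor are illustrative assumptions, not the exact DenseNet201 configuration.

# Illustrative sketch of DenseNet building blocks (sizes are examples only).
from tensorflow.keras import layers

def dense_block(x, num_layers, growth_rate):
    # Every layer receives the concatenation of all preceding feature maps.
    for _ in range(num_layers):
        y = layers.BatchNormalization()(x)
        y = layers.Activation("relu")(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        x = layers.Concatenate()([x, y])  # dense feedforward connection
    return x

def transition_layer(x, compression=0.5):
    # Downsampling between dense blocks: batch normalization,
    # a 1x1 convolution, then 2x2 average pooling.
    channels = int(x.shape[-1] * compression)
    x = layers.BatchNormalization()(x)
    x = layers.Conv2D(channels, 1)(x)
    return layers.AveragePooling2D(pool_size=2)(x)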

5.1.2.4. Global Average Pooling (GAP)


A more drastic feature map reduction is carried out by global average pooling (GAP),
which is comparable to conventional pooling techniques but reduces the dimensions of each
feature map to 1 × 1.

5.1.2.5. Fully Connected (Dense) Layer (FCL)


The last layer in a classification task generates the output logits for each of the classes
using a fully connected layer.

Figure 13: Architecture of DenseNet201 Model
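
As a sketch of how these pieces fit together for our three-class task, the pretrained DenseNet201 from Keras applications can be combined with a GAP layer, dropout, and a fully connected output layer. This is a minimal illustration assuming ImageNet weights; the exact configuration used in the project may differ.

# Minimal sketch: DenseNet201 backbone with GAP and a 3-class output layer.
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import DenseNet201

base = DenseNet201(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))        # input layer + dense blocks
x = layers.GlobalAveragePooling2D()(base.output)     # GAP: each map reduced to 1x1
x = layers.Dropout(0.5)(x)                           # dropout rate from Section 5.2.3
outputs = layers.Dense(3, activation="softmax")(x)   # FCL over the three tumor classes
model = Model(base.input, outputs)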

5.2. Results and Evaluation
5.2.1. Dataset and Preprocessing
The dataset used in this study is a brain MRI dataset developed by Jun Cheng from
233 patients with brain tumors, obtained from the public Figshare repository. It contains a total
of 3064 T1-weighted contrast-enhanced images, collected over a five-year period from Nanfang
Hospital, Guangzhou, China, and Tianjin Medical University General Hospital; the dataset was
first released in 2015 and updated in 2017. It consists of three types of brain tumors: Glioma
(1426 images), Meningioma (708 images), and Pituitary tumor (930 images), each imaged from
three angles of view (coronal, sagittal, and axial) and stored as 2D slices in .mat format [94].
For better performance, the images were resized from 512x512 pixels to 224x224 pixels and
split into two parts: 75% for the training set and 25% for the testing set:

Figure 14: Three types of tumors in three angles of view (axial, coronal, and sagittal): (1) Glioma, (2) Meningioma, and (3)
Pituitary.

Dataset Training Set Testing Set
Glioma 1069 357
Meningioma 531 177
Pituitary 697 233
Table 6: Data split for training and testing

As seen in Figure 14, the three different types of tumors (Glioma, Meningioma, and
Pituitary) are shown in three different angles, respectively: axial, coronal, and sagittal. Table 6
provides the exact numbers obtained by splitting the dataset into training and testing sets with a
ratio of 75:25. Data preprocessing helps to optimize the input dataset for the subsequent training
phase. We converted the .mat files to .png images. The model applies data augmentation
methods using the ImageDataGenerator from Keras. These methods include rotation (up to 90
degrees), flipping horizontally and vertically, and random zooming (up to a factor of 2). By
integrating these augmentations, the model is exposed to a wider range of variations in the
training set, which allows it to search and explore every region of the image.
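
A minimal sketch of this augmentation pipeline with Keras' ImageDataGenerator is shown below. The parameter values mirror the text; the rescaling factor and directory layout are assumptions.

# Sketch of the augmentation pipeline described above.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # assumed pixel scaling to [0, 1]
    rotation_range=90,       # rotation up to 90 degrees
    horizontal_flip=True,    # horizontal flipping
    vertical_flip=True,      # vertical flipping
    zoom_range=[0.5, 2.0],   # random zooming up to a factor of 2
)

train_gen = train_datagen.flow_from_directory(
    "data/train",              # hypothetical folder of converted .png images
    target_size=(224, 224),    # resized from 512x512 as described
    batch_size=16,
    class_mode="categorical",  # Glioma / Meningioma / Pituitary
)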

5.2.2. Performance Metrics


To evaluate the performance of the DenseNet201 architecture, four standard evaluation
criteria will be used: accuracy, precision, recall, and F1-score. These metrics are frequently
utilized to examine misclassification more closely and gain a better understanding of true
positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Other
performance metrics will also be used to evaluate the classification, such as the confusion
matrix, in which each row shows the instances of a predicted class while each column
represents the occurrences of an actual class. The diagonal shows which classes were classified
correctly. This is useful since we not only know which classes are misclassified, but also what
they are misclassified as. Another evaluation metric is the ROC curve, which evaluates the
model based on the AUC (Area Under the Curve); the greater the AUC, the better the model
[95].
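
A short sketch of computing these metrics with scikit-learn is given below, assuming the true labels and the model's predicted labels on the testing set are available; the function and variable names are illustrative.

# Sketch: evaluation metrics with scikit-learn (names are illustrative).
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix

def report_metrics(y_true, y_pred):
    print("Accuracy:", accuracy_score(y_true, y_pred))
    # Per-class precision, recall, and F1-score
    print(classification_report(y_true, y_pred,
                                target_names=["Glioma", "Meningioma", "Pituitary"]))
    # Note: scikit-learn places actual classes in rows and predicted classes
    # in columns of the confusion matrix.
    print(confusion_matrix(y_true, y_pred))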

5.2.3. Hyperparameters
To efficiently train the model, the brain tumor classification code implementing the
DenseNet201 model makes use of a number of important hyperparameters. The number of
samples handled in each training iteration is determined by the BATCH_SIZE, which is set to
16. The step size during optimization is influenced by the learning_rate, which is set to 1e-4.
The model traverses the training dataset 20 times over its 20 epochs of training. To avoid
overfitting, a dropout rate of 0.5 is applied after the global average pooling (GAP) layer, and
the Adam optimizer is used. To maximize the brain tumor classification model's performance,
it is important to balance model complexity and training speed using these hyperparameters.
The experiment was conducted using the Python programming language on an HP Spectre
x360 2-in-1 laptop (14-ef0xxx) with a 1 TB solid-state drive (SSD) and a 12th Gen Intel(R)
Core(TM) i7-1255U processor.
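
A minimal training sketch using these hyperparameters follows; it reuses the model and train_gen from the earlier sketches, and the categorical cross-entropy loss is an assumption not stated in the text.

# Sketch: compiling and training with the hyperparameters above.
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-4),  # learning rate of 1e-4
              loss="categorical_crossentropy",     # assumed loss for 3 classes
              metrics=["accuracy"])

# BATCH_SIZE = 16 is set on the generator; the model trains for 20 epochs.
history = model.fit(train_gen, epochs=20)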

5.2.4. Experimental Results

Figure 15 below shows the number of correctly classified and misclassified brain tumor
images. For Glioma, Meningioma, and Pituitary, 355, 150, and 216 images were predicted
correctly out of 357, 177, and 233 total images in the testing set, respectively.

Figure 15: Confusion matrix for brain tumor classification

The ROC curve below includes the 45-degree random-classifier line; the further the curve
deviates from this line, the greater the Area Under the Curve (AUC) and the better the model.
An AUC of 1, with the curve forming a right angle, is the best a model can achieve. In the case
of our model, the AUC is around 0.993, which is approximately equal to 1.

Figure 16: DenseNet201 ROC Curve
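
The reported AUC can be computed with scikit-learn's one-vs-rest multi-class ROC-AUC, as in the sketch below; the input names are assumptions.

# Sketch: multi-class ROC-AUC with scikit-learn (one-vs-rest averaging).
from sklearn.metrics import roc_auc_score

def multiclass_auc(y_true, y_prob):
    # y_true: integer class labels, shape (n_samples,)
    # y_prob: predicted probabilities from model.predict, shape (n_samples, 3)
    return roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")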

The DenseNet201 model achieved an accuracy of 95.86%, a loss value of 0.7402, an
average F1-score of 0.93, a recall of 0.92, and a precision of 0.94. These results show that the
model performed relatively well in terms of accuracy; nevertheless, further improvements may
be required to obtain better scores on the other evaluation metrics. These findings can inform
future work on this subject.

Model         Accuracy   Recall   Precision   F1-score   ROC-AUC
DenseNet201   95.86%     0.92     0.94        0.93       0.993
Table 7: Results obtained by DenseNet201

5.2.5. Benchmarking
We compared our obtained result using DenseNet201 model with other methods results in
different literatures such as CNN, CapsNet, GoogleNet, augmentation and partition, and CNN-
ELM on the same T1W-CE MRI dataset. As you can see in Table 3, with an accuracy of 95.86%,
our proposed (DenseNet201) model is considered the best classifier when compared to the other
classification algorithms in the table.

Ref No.               Technique                                   Accuracy
Nyoman et al. [96]    CNN                                         84.19%
Afshar et al. [97]    CapsNet                                     90.89%
Cheng et al. [73]     Tumor region partition and augmentation     91.28%
Deepak et al. [98]    GoogleNet                                   92.30%
Pashaei et al. [99]   CNN-ELM                                     93.68%
Proposed              DenseNet201                                 95.86%
Table 8: Comparison of results using the same Figshare dataset.

Chapter 6: Conclusion
6.1 Introduction
This report details our graduation project titled "DEEP LEARNING TECHNIQUES FOR
BRAIN TUMOR CLASSIFICATION." It encompasses the essential documentation for creating
a desktop application focused on Brain Tumor Classification. The report is structured into various
sections: an introduction, a literature review summarizing related studies, a methodology and
requirements engineering segment, a system design overview, an implementation chapter, and a
testing section. The conclusion will encapsulate the project's outcomes, emphasizing its
discoveries and contributions. Additionally, it will highlight lessons learned, challenges faced, and
recommendations for future work.

Through this project, each member of our group has acquired new skills, especially in
Machine Learning and Image Processing. We've delved into novel concepts, honed our research
abilities, and stayed abreast of the latest trends in Brain Tumor Classification. Adhering to the
project schedule taught us the significance of timely progress and milestone adherence. Moreover,
it enhanced our communication, analytical, and logical skills through comprehensive project
analysis. We've learned to handle pressure, estimate and prioritize tasks, and maintain clear team
communication, all of which significantly enrich our experience as computer science students.

Efficient planning, constant monitoring, and iterative revisions enabled us to finish project
components within the set timeline. The exceptional quality of our work is a testament to our
cohesive teamwork, evident through our consistent cooperation, high motivation, and dedication.
Our project was developed innovatively and robustly, resulting in comprehensive and timely
deliverables. This work reflects our commitment, ambition, and teamwork, aligning with our goal
to contribute to a high-quality healthcare system in line with the Kingdom's Vision 2030.

6.2 Findings and Contributions


• Appreciation of the advantages of merging healthcare with AI technology.
• Leveraging machine learning capabilities to achieve precise patient diagnoses.
• The effective utilization of machine learning in accurately identifying tumor types from
images.
• Utilizing pre-existing models to enhance accuracy and decrease training duration in classifying
tumor types.

These findings emphasize the profound impact of integrating AI into healthcare, particularly in
facilitating precise diagnoses and improving tumor classification accuracy, thereby potentially
revolutionizing patient care and treatment outcomes.

The project embodies various critical factors:
1. Creation of an automated system capable of precisely diagnosing diverse tumor types
from uploaded images.
2. Aligning with the Kingdom's Vision 2030, the project leverages cutting-edge technology
to advance the healthcare system, ensuring competitiveness in the age of AI.
3. Significantly reducing diagnosis time, enhancing efficiency in healthcare delivery.

These facets underscore the project's pivotal role in revolutionizing healthcare through
advanced technology, aiming to streamline diagnoses, fortify healthcare capabilities, and
propel progress toward the kingdom's visionary healthcare goals.

6.3 Limitations
The documentation and implementation stages posed various hurdles for our team. However,
we navigated these challenges by adapting to change and employing strong decision-making skills
to explore alternative solutions. The encountered challenges encompassed:

1. Selecting the most appropriate dataset for brain tumor analysis.
2. Venturing into machine learning for the first time.
3. Determining the optimal platform for our work.
4. Lack of prior experience among team members in the field of AI.

To address these challenges systematically, the team took the following actions:

1. Conducted extensive literature reviews to identify pertinent brain tumor datasets from
various research papers.
2. Engaged in watching instructional materials and experimenting with machine learning
on simpler tasks.
3. Explored different platforms to ascertain the best fit for our project's requirements.
4. Invested efforts in augmenting our knowledge base in the realm of AI through learning
and research initiatives.

6.4 Lessons & Skills learned


This project has been an invaluable source of knowledge, fostering growth in various
domains and enabling us to acquire essential skills that complement our existing expertise. The
implementation phase significantly broadened our understanding in technical and medical realms,
offering invaluable experiences. Despite encountering initial challenges due to the project's scale,
our cohesive teamwork allowed us to overcome obstacles and derive crucial insights. Amid the
myriad lessons learned, several noteworthy takeaways include:

1 AI Exploration: Engaging with diverse AI algorithms illuminated the profound impact
of AI on daily life.

2 Collaborative Advantage: Leveraging group dynamics led to diverse perspectives,
enhancing project outcomes.
3 Insights into Brain Tumor Classification: Acquiring knowledge on AI techniques for
tumor classification proved illuminating.
4 Skill Enhancement: Our writing and analytical abilities witnessed substantial
improvement throughout the project.
5 Effective Application of Past Knowledge: Applying prior learning effectively was a
significant learning curve.
6 Supervision and Teamwork: Recognizing the importance of guidance and collaborative
efforts in goal achievement.
7 Punctuality and Adaptability: Emphasizing schedule adherence and adaptability to
resolve issues and adapt plans.
8 Conflict Resolution: Swiftly resolving conflicts without disrupting workflow was a
crucial skill developed.
9 Research and Time Management: Strengthening research skills and mastering time
management and prioritization.
10 Holistic Problem-Solving: Embracing multifaceted perspectives to enhance problem-
solving and critical thinking.
11 Programming and Debugging Skills: Mastery of Python and debugging through
identifying logic and syntax errors.
12 Pre-processing Techniques: Enhanced understanding of diverse pre-processing
techniques pivotal in classification.
13 Solution-Oriented Approach: Navigating challenges by extracting solutions through
research and information retrieval.
14 Planning and Adaptability: Strategizing, planning alternatives, and managing multiple
tasks simultaneously.
15 Deepening ML Insights: Gaining profound insights into machine learning, particularly
image classification.
16 Strategic Milestones: Establishing clear objectives and milestones to guide project
progression.
17 Effective Communication: Leveraging collaborative tools for seamless communication
among remote team members.
18 Supervisory Guidance: Regular meetings with the supervisor ensured progress,
adherence to timelines, and valuable input consideration.
19 Motivation and Feedback: Sustaining motivation through honest feedback while
collectively pursuing project goals.

This project not only expanded our knowledge base but also honed our collaborative,
adaptive, and problem-solving skills crucial for tackling substantial projects with precision and
proficiency.

6.5 Recommendation & Future Works
Moving forward, our aim is to enhance the system's efficiency and effectiveness by
incorporating additional features and functions. To advance the project's outcomes and broaden its
scope post-implementation, our team proposes the following recommendations:

1- Expansion of Tumor Classification: Including a wider array of brain tumor types for
classification.
2- Exploration of Advanced DL Models: Exploring and integrating new deep learning
models and advanced pre-processing techniques to potentially yield more precise results
than the current methods employed.

References

[1] F. l. HAMEED and O. DAKKAK, "Brain Tumor Detection and Classification Using
Convolutional Neural Network (CNN)," 2022 International Congress on Human-
Computer Interaction, Optimization and Robotic Applications (HORA), Ankara, Turkey,
pp. 1-7, 2022.
[2] O. Parvez, “UIS Brage: Conditional data generation for an improved diagnosis in prostate
cancer” University of Stavanger, https://uis.brage.unit.no/uis-
xmlui/handle/11250/2788238 (accessed Oct. 7, 2023).

[3] M. N. Tahir, “Classification and characterization of brain tumor MRI by using Gray
scaled segmentation and DNN,” Theseus, https://www.theseus.fi/handle/10024/151825
(accessed Oct. 9, 2023).
[4] E. Chatzitheodoridou, “Brain Tumor Grade Classification in MR images using Deep
Learning,” Digitala Vetenskapliga Arkivet, https://www.diva-
portal.org/smash/get/diva2:1674243/FULLTEXT01.pdf (accessed Oct. 9, 2023).
[5] M. Hargrave, "Deep Learning," Updated Apr 30, 2019.
[6] R. Lustig, “What Are the Most Common Types of Brain Tumors?,”
Pennmedicine.org,https://www.pennmedicine.org/updates/blogs/neuroscience-
blog/2018/november/what-are-the-most-common-types-of-brain-tumors (accessed Oct.
9, 2023).
[7] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” arXiv.org,
https://arxiv.org/abs/1508.06576 (accessed Oct. 9, 2023).
[8] K.-K. Phoon and W. Zhang, “Future of machine learning in geotechnics,” Taylor &
Francis Online, https://www.tandfonline.com/doi/full/10.1080/17499518.2022.2087884
(accessed Oct. 9, 2023).
[9] G. Litjens et al., “A survey on deep learning in medical image analysis,” Medical Image
Analysis, vol. 42, pp. 60–88, Dec. 2017.
[10] T. A. Jemimma and Y. J. V. Raj, “Brain Tumor Segmentation and Classification Using
Deep Belief Network,” ScienceDirect, pp. 1390-1394, Jun. 2018.

[11] S. Ghosh and S. K. Dubey, “Comparative analysis of K-Means and fuzzy C-Means
algorithms,” International Journal of Advanced Computer Science and Applications, vol.
4, no. 4, Jan. 2013.
[12] “National Institute of Biomedical Imaging and
Bioengineering.” https://www.nibib.nih.gov/
[13] “MRI Basics.” https://case.edu/med/neurology/NR/MRI%20Basics.htm
[14] Wikipedia contributors, “Artificial neural network,” Wikipedia, Jul. 2021, [Online].
Available: https://en.wikipedia.org/w/index.php?title=Artificial_neural_%20network&o
ldid=1032050958
[15] S. Larsen, “Exploring Generative Adversarial Networks to Improve Prostate
Segmentation on MRI,” University of Stavanger, Jun. 28,
2022. https://uis.brage.unit.no/uis-xmlui/handle/11250/2680065 (accessed Oct. 10,
2023).
[16] “Deep learning.” https://www.deeplearningbook.org/
[17] D. Campora, “A practical approach to Convolutional Neural Networks slides,” Course
Hero, Jan. 20, 2021. https://www.coursehero.com/file/78404036/Daniel-Campora-A-
practical-approach-to-Convolutional-Neural-Networks-slidespdf/ (accessed Oct. 10,
2023).
[18] “Missing link - AI detection of oral
inflammation.” http://missinglink.ai/guides/convolution-neural-networks/fully-
connected-layers-convolutional-neuralnetworks-complete-guide/
[19] J. Sachdeva, V. Kumar, I. Gupta, N. Khandelwal, and C. K. Ahuja, “Multiclass brain
tumor classification using GA-SVM,” 2011 Developments in E-systems Engineering, pp.
182-187, Dec. 2011.
[20] V. P. G. P. Rathi, “Brain Tumor MRI Image Classification with Feature Selection and
Extraction using Linear Discriminant Analysis,” International Journal of Information
Sciences and Techniques, vol. 2, no. 4, pp. 131–146, Jul. 2012.
[21] R. A. Castellino, "Computer aided detection (CAD): an overview," Cancer Imaging, vol.
5, no. 1, pp. 17, 2005.

[22] H. R. Almadhoun, “Hamza Rafiq Almadhoun & Samy S. Abu-Naser, detection of brain
tumor using Deep Learning,” PhilArchive, https://philarchive.org/rec/ALMDOB
(accessed Oct. 10, 2023).
[23] O. Ronneberger, “U-NET: Convolutional Networks for Biomedical Image
Segmentation,” arXiv.org, pp. 234-241, May 18, 2015.
[24] M. Weller et al., “Glioma,” Nature Reviews Disease Primers, vol. 1, no. 1, Jul. 2015.
[25] C. Marosi et al., “Meningioma,” Critical Reviews in Oncology/Hematology, vol. 67, no.
2, pp. 153–171, Aug. 2008.
[26] S. Melmed, “Pathogenesis of pituitary tumors,” Nature Reviews Endocrinology, vol. 7,
no. 5, pp. 257–266, Mar. 2011.
[27] S. Grossberg, “Recurrent neural networks,” Scholarpedia, vol. 8, no. 2, p. 1888, Jan.
2013.
[28] B. Uria, et al., "Neural autoregressive distribution estimation," The Journal of Machine
Learning Research, vol. 17, no. 1, pp. 7184-7220, 2016.
[29] G. Boesch, “VGG Very Deep Convolutional Networks (VGGNet) – What you need to
know,” viso.ai, Mar. 2023, [Online]. Available: https://viso.ai/deep-learning/vgg-very-
deep-convolutional-networks/#:~:text=and%20many%20more.-
,What%20is%20VGG%3F,CNN)%20architecture%20with%20multiple%20layers.
[30] F. Özyurt, E. Sert, and D. Avcı, “An expert system for brain tumor detection: Fuzzy C-
means with super resolution and convolutional neural network with extreme learning
machine,” Medical Hypotheses, vol. 134, p. 109433, Jan. 2020.
[31] K. Erdem, “Introduction to extreme learning machines - towards data science,” Medium,
Dec. 14, 2021. [Online]. Available: https://towardsdatascience.com/introduction-to-
extreme-learning-machines-c020020ff82b
[32] P. Parviainen, “Bayesian networks,” University of Bergen, Oct. 18,
2019. https://www.uib.no/en/rg/ml/119695/bayesian-networks (accessed Oct. 10, 2023).
[33] C. Sutton, “An introduction to conditional random fields,” Foundations and Trends in
Machine Learning, vol. 4, no. 4, pp. 267–373, Jan. 2012.
[34] J. M. Keller, M. R. Gray, and J. A. Givens, "A fuzzy k-nearest neighbor algorithm," IEEE
Transactions on Systems, Man, and Cybernetics, vol. 4, pp. 580-585, 1985.

[35] G. Montavon, W. Samek, and K. R. Müller, “Methods for interpreting and understanding
deep neural networks,” Digital Signal Processing, vol. 73, pp. 1–15, Feb. 2018.
[36] E. Chatzitheodoridou, “Brain Tumor Grade Classification in MR images using Deep
Learning,” Digitala Vetenskapliga Arkivet, https://www.diva-
portal.org/smash/get/diva2:1674243/FULLTEXT01.pdf (accessed Oct. 9, 2023).
[37] T. S. Kalvakolanu, “Brain tumor detection and classification from MRI images,”
DigitalCommons@CalPoly, https://digitalcommons.calpoly.edu/theses/2267/ (accessed
Oct. 6, 2023).
[38] M. A. Naser and M. J. Deen. Brain tumor segmentation and grading of lower-grade
glioma using deep learning in MRI images. Computers in Biology and Medicine,
121:103758, 2020.
[39] S. S. Begum and D. R. Lakshmi, "Combining optimal wavelet statistical texture and
recurrent neural network for tumor detection and classification over MRI," in Multimedia
Tools and Applications, vol. 79, pp. 14009-14030, 2020.
[40] R. Hashemzehi, et al., "Detection of brain tumors from MRI images based on deep
learning using hybrid model CNN and NADE," Biocybernetics and Biomedical
Engineering, vol. 40, no. 3, pp. 1225-1232, 2020.
[41] Z. N. Khan, et al., "Brain tumor classification for MR images using transfer learning and
fine-tuning," Computerized Medical Imaging and Graphics, vol. 75, pp. 34-46, 2019.
[42] H. R. Almadhoun, “Hamza Rafiq Almadhoun & Samy S. Abu-Naser, detection of brain
tumor using Deep Learning,” PhilArchive, https://philarchive.org/rec/ALMDOB
(accessed Oct. 6, 2023).
[43] E. J. Alberts, “Tum,” Technische Universitat Munchen,
https://mediatum.ub.tum.de/doc/1453649/137528.pdf (accessed Oct. 6, 2023).
[44] P. Pedapati and R. V. Tanneedi, Brain tumour detection using hog by SVM - diva,
http://www.diva-portal.org/smash/get/diva2:1184069/FULLTEXT02.pdf (accessed Oct.
6, 2023).
[45] A. Kanervo, “Random forests : An application to tumour classification,” UTUPub,
https://www.utupub.fi/handle/10024/154159?show=full (accessed Oct. 7, 2023).

[46] A. Ellwaa, et al., "Brain tumor segmentation using random forest trained on iteratively
selected patients," Springer International Publishing, Athens, Greece, vol. 2, pp. 129-
137, 2016.
[47] T. Meyer, et al., "Nonlinear microscopy, infrared, and Raman microspectroscopy for
brain tumor analysis," Journal of Biomedical Optics, vol. 16, no. 2, pp. 021113-021113,
2011.
[48] P. Mlynarski, “Apprentissage Profond pour la segmentation des tumeurs cérébrales et des
organes à risque en radiothérapie,” http://www.theses.fr/,
https://www.theses.fr/2019AZUR4084 (accessed Oct. 7, 2023).
[49] O. Parvez, “UIS Brage: Conditional data generation for an improved diagnosis in prostate
cancer” University of Stavanger, https://uis.brage.unit.no/uis-
xmlui/handle/11250/2788238 (accessed Oct. 7, 2023).
[50] H. C. Shin, N. A. Tenenholtz, J. K. Rogers, C. G. Schwarz, M. L. Senjem, J. L. Gunter,
... & M. Michalski, "Medical image synthesis for data augmentation and anonymization
using generative adversarial networks," Springer International Publishing, Granada,
Spain, vol. 3, pp. 1-11, 2018.
[51] T. Neff, “Data Augmentation in Deep Learning using Generative Adversarial
Networks,” TU Graz: Graz University of Technology, Mar.
2018. https://www.tugraz.at/fileadmin/user_upload/Institute/ICG/Images/team_bischof/
mib/paper_pdfs/StudentsMasterTheses/2018_03_DA_neff.pdf (accessed Oct. 10, 2023).
[52] Alvaro Fernandez-Quilez et al., "Improving Prostate Whole Gland Segmentation In T2-
Weighted MRI With Synthetically Generated Data," 2021 IEEE 18th International
Symposium on Biomedical Imaging (ISBI), pp. 1915-1919, 2021.

[53] M. N. Tahir, “Classification and characterization of brain tumor MRI by using Gray
scaled segmentation and DNN,” Theseus, https://www.theseus.fi/handle/10024/151825
(accessed Oct. 9, 2023).
[54] E. Alberts et al., “Multi-modal image classification using Low-Dimensional Texture
features for genomic brain tumor recognition,” in Lecture Notes in Computer Science,
pp. 201–209, 2017.

[55] J. E. Iglesias, C.-Y. Liu, P. M. Thompson, and Z. Tu, “Robust brain extraction across
datasets and comparison with publicly available methods,” IEEE Transactions on
Medical Imaging, vol. 30, no. 9, pp. 1617–1634, Sep. 2011.
[56] A. R. Mathew and P. B. Anto, "Tumor detection and classification of MRI brain image
using wavelet transform and SVM," 2017 International Conference on Signal Processing
and Communication (ICSPC), pp. 75-78, July 2017
[57] G. Latif, M. M. Butt, A. H. Khan, O. Butt, and D. A. Iskandar, "Multiclass brain glioma
tumor classification using block-based 3D Wavelet features of MR images," 2017 4th
International Conference on Electrical and Electronic Engineering (ICEEE), pp. 333-
337, April 2017.
[58] A. Gumaei, M. M. Hassan, M. R. Hassan, A. Alelaiwi, and G. Fortino, "A hybrid feature
extraction method with regularized extreme learning machine for brain tumor
classification," IEEE Access, vol. 7, pp. 36266-36273, 2019.
[59] G. B. Praveen and A. Agrawal, "Hybrid approach for brain tumor detection and
classification in magnetic resonance images," 2015 Communication, Control and
Intelligent Systems (CCIS), pp. 162-166, IEEE, November 2015.

[60] S. A. Nagtode, B. B. Potdukhe, and P. Morey, "Two-dimensional discrete wavelet
transform and probabilistic neural network used for brain tumor detection and
classification," 2016 Fifth International Conference on Eco-friendly Computing and
Communication Systems (ICECCS), pp. 20-26, December 2016.
[61] K. C. L. Wong, T. Syeda-Mahmood, and M. Moradi, “Building medical image classifiers
with very limited data using segmentation networks,” Medical Image Analysis, vol. 49,
pp. 105–116, Oct. 2018.
[62] P. Afshar, K. N. Plataniotis, and A. Mohammadi, "Capsule Networks for Brain Tumor
Classification Based on MRI Images and Coarse Tumor Boundaries," in Proceedings of
ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech, and Signal
Processing (ICASSP). IEEE, pp. 1368–1372, 2019.
[63] A. Kabir Anaraki, M. Ayati, and F. Kazemi, "Magnetic resonance imaging-based brain
tumor grades classification and grading via convolutional neural networks and genetic
algorithms," Biocybernetics and Biomedical Engineering, vol. 39, no. 1, pp. 63–74, 2019.

[64] M. Sajjad et al., "Multi-grade brain tumor classification using deep CNN with extensive
data augmentation," Journal of Computational Science, vol. 30, pp. 174–182, 2019.
[65] E. Irmak, "Multi-Classification of Brain Tumor MRI Images Using Deep Convolutional
Neural Network with Fully Optimized Framework," Iranian Journal of Science and
Technology, Transactions of Electrical Engineering, vol. 45, no. 3, pp. 1015–1036, 2021.
[66] R. Lokesh Kumar et al., "Multi-class brain tumor classification using residual network
and global average pooling," Multimedia Tools and Applications, vol. 80, no. 9, pp.
13429–13438, 2021.
[67] Sudharani, K., Sarma, T. C., & Rasad, K. S. "Intelligent brain tumor lesion classification
and identification from MRI images using k-NN technique." 2015 International
Conference on Control, Instrumentation, Communication and Computational
Technologies (ICCICCT), IEEE, pp. 777-780, Dec. 2015.
[68] R. Ahmmed, A. S. Swakshar, M. F. Hossain, and M. A. Rafiq, "Classification of tumors
and its stages in brain MRI using support vector machine and artificial neural network,"
2017 International Conference on Electrical, Computer and Communication Engineering
(ECCE), pp. 229-234, Feb. 2017.
[69] K. Machhale, H. B. Nandpuru, V. Kapur, and L. Kosta, "MRI brain cancer classification
using hybrid classifier (SVM-KNN)," in 2015 International Conference on Industrial
Instrumentation and Control (ICIC), pp. 60–65, May 2015.
[70] A. Veeramuthu, S. Meenakshi, G. Mathivanan, K. Kotecha, J. R. Saini, V. Vijayakumar,
and V. Subramaniyaswamy, "MRI brain tumor image classification using a combined
feature and image-based classifier," Frontiers in Psychology, vol. 13, pp. 848784, 2022.
[71] K. Salçin, "Detection and classification of brain tumors from MRI images using Faster
R-CNN," Tehnički Glasnik, vol. 13, no. 4, pp. 337-342, 2019.
[72] A. A. Dehkordi, M. Hashemi, M. Neshat, S. Mirjalili, and A. S. Sadiq, "Brain tumor
detection and classification using a new evolutionary convolutional neural network,"
arXiv preprint arXiv:2204.12297, 2022.
[73] J. Cheng, W. Huang, S. Cao, R. Yang, W. Yang, Z. Yun, et al., "Enhanced performance
of brain tumor classification via tumor region augmentation and partition," PLOS ONE,
vol. 10, no. 10, p. e0140381, 2015.

[74] M. F. B. Othman, N. B. Abdullah, and N. F. B. Kamal, "MRI brain classification using
support vector machine," in 2011 Fourth International Conference on Modeling,
Simulation and Applied Optimization, pp. 1-4, April 2011.
[75] C. Amulya and G. Prathibha, "MRI brain tumor classification using SURF and SIFT
features," International Journal for Modern Trends in Science and Technology, vol. 2,
no. 7, pp. 123-127, 2016.
[76] H. Selvaraj, S. T. Selvi, D. Selvathi, and L. Gewali, "Brain MRI slices classification using
least squares support vector machine," International Journal of Intelligent Computing in
Medical Sciences & Image Processing, vol. 1, no. 1, pp. 21-33, 2007.
[77] M. Saritha, K. P. Joseph, and A. T. Mathew, "Classification of MRI brain images using
combined wavelet entropy based spider web plots and probabilistic neural network,"
Pattern Recognition Letters, vol. 34, no. 16, pp. 2151-2156, 2013.
[78] Y. Zhang, Z. Dong, L. Wu, and S. Wang, "A hybrid method for MRI brain image
classification," Expert Systems with Applications, vol. 38, no. 8, pp. 10049–10053, 2011.

[79] M. P. Arakeri and G. R. M. Reddy, "Medical image retrieval system for diagnosis of brain
tumor based on classification and content similarity," in 2012 Annual IEEE India
Conference (INDICON), Dec. 2012, pp. 416-421.
[80] M. F. Othman and M. A. M. Basri, "Probabilistic neural network for brain tumor
classification," in 2011 Second International Conference on Intelligent Systems,
Modelling and Simulation, pp. 136-138, Jan. 2011.
[81] M. A. M. Basria, M. F. Othmana, and A. R. Husaina, "An approach to brain tumor MR
image detection and classification using neuro fuzzy," 2012.
[82] A. Padma Nanthagopal and R. Sukanesh, "Wavelet statistical texture features-based
segmentation and classification of brain computed tomography images," IET Image
Processing, vol. 7, no. 1, pp. 25-32, 2013.
[83] H. Kalbkhani, M. G. Shayesteh, and B. Zali-Vargahan, "Robust algorithm for brain
magnetic resonance image (MRI) classification based on GARCH variances series,"
Biomedical Signal Processing and Control, vol. 8, no. 6, pp. 909-919, 2013.

[84] S. Sindhumol, A. Kumar, and K. Balakrishnan, "Spectral clustering independent
component analysis for tissue classification from brain MRI," Biomedical Signal
Processing and Control, vol. 8, no. 6, pp. 667-674, 2013.

[85] M. Kheirollahi, S. Dashti, Z. Khalaj, F. Nazemroaia, and P. Mahzouni, “Brain tumors:
Special characters for research and banking,” Advanced Biomedical Research, vol. 4,
Jan. 2015.
[86] L. Rosencrance, “software,” App Architecture, Mar. 2021, [Online]. Available:
https://www.techtarget.com/searchapparchitecture/definition/software
[87] T. Inhand, “spmp-document,” tutorialsinhand.com, Nov. 2020, [Online]. Available:
https://tutorialsinhand.com/tutorials/software-engineering-tutorial/software-
projectmanagement/spmp-document.aspx
[88] “Software Requirement Specification.”
https://www.tutorialspoint.com/software_testing_dictionary/software_requirement_s
pecification.htm
[89] Wikipedia contributors, “Design specification,” Wikipedia, Apr. 2023, [Online].
Available: https://en.wikipedia.org/wiki/Design_specification
[90] H. Çetiner and İ. Çetiner, "Classification of cataract disease with a DenseNet201 based
deep learning model," Journal of the Institute of Science and Technology, vol. 12, no. 3,
pp. 1264-1276, 2022.
[91] M. A. Talukder et al., "An efficient deep learning model to categorize brain tumor using
reconstruction and fine-tuning," Expert Systems with Applications, vol. Early Access,
p. 120534, 2023.
[92] A. Subramanian, "Understanding and visualizing DenseNets," Towards Data Science.
Available: https://towardsdatascience.com/understanding-and-visualizing-densenets-
7f688092391a
[93] S. H. Wang and Y. D. Zhang, "DenseNet-201-based deep neural network with composite
learning factor and precomputation for multiple sclerosis classification," ACM
Transactions on Multimedia Computing, Communications, and Applications (TOMM),
vol. 16, no. 2s, pp. 1-19, 2020.

[94] J. Cheng, “Brain tumor dataset,” figshare,
https://figshare.com/articles/dataset/brain_tumor_dataset/1512427 (accessed Nov.
27, 2023).

[95] N. Mukherjee, "Convolutional Neural Network for Breast Cancer Classification,"
Towards Data Science. Available: https://towardsdatascience.com/convolutional-neural-
network-for-breast-cancer-classification-52f1213dcc9

[96] Abiwinanda, N., Hanif, M., Hesaputra, S. T., Handayani, A., & Mengko, T. R. (2019).
"Brain tumor classification using convolutional neural network." In World Congress on
Medical Physics and Biomedical Engineering 2018: June 3-8, 2018, Prague, Czech
Republic (Vol. 1, pp. 183-189). Springer Singapore.

[97] P. Afshar, K. N. Plataniotis and A. Mohammadi, "Capsule Networks for Brain Tumor
Classification Based on MRI Images and Coarse Tumor Boundaries," ICASSP 2019 -
2019 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), Brighton, UK, 2019, pp. 1368-1372, doi: 10.1109/ICASSP.2019.8683759.

[98] Deepak, S., & Ameer, P. M. (2019). "Brain tumor classification using deep CNN
features via transfer learning." Computers in Biology and Medicine, 111, 103345.

[99] A. Pashaei, H. Sajedi and N. Jazayeri, "Brain Tumor Classification via Convolutional
Neural Network and Extreme Learning Machines," 2018 8th International Conference
on Computer and Knowledge Engineering (ICCKE), Mashhad, Iran, 2018, pp. 314-319,
doi: 10.1109/ICCKE.2018.8566571.

