
Page 1 of 5 - Cover Page Submission ID trn:oid:::1:2787742756

Sonia Verma
CS2024

ABC

ABES Engineering College

Document Details

Submission ID: trn:oid:::1:2787742756
Submission Date: Dec 18, 2023, 11:03 AM GMT+5:30
Download Date: Dec 18, 2023, 11:17 AM GMT+5:30
File Name: Ultimate_RP.pdf
File Size: 131.5 KB
Pages: 3
Words: 1,888
Characters: 10,838



Page 2 of 5 - AI Writing Overview Submission ID trn:oid:::1:2787742756

How much of this submission has been generated by AI?

*9% of qualifying text in this submission has been determined to be generated by AI.

Caution: Percentage may not indicate academic misconduct. Review required. It is essential to understand the limitations of AI detection before making decisions about a student's work. We encourage you to learn more about Turnitin's AI detection capabilities before using the tool.

* Low scores have a higher likelihood of false positives.

Frequently Asked Questions

What does the percentage mean?


The percentage shown in the AI writing detection indicator and in the AI writing report is the amount of qualifying text within the
submission that Turnitin's AI writing detection model determines was generated by AI.

Our testing has found that there is a higher incidence of false positives when the percentage is less than 20. In order to reduce the
likelihood of misinterpretation, the AI indicator will display an asterisk for percentages less than 20 to call attention to the fact that
the score is less reliable.

However, the final decision on whether any misconduct has occurred rests with the reviewer/instructor. They should use the
percentage as a means to start a formative conversation with their student and/or use it to examine the submitted assignment in
greater detail according to their school's policies.

What does 'qualifying text' mean?

Our model only processes qualifying text in the form of long-form writing. Long-form writing means individual sentences contained in paragraphs that make up a longer piece of written work, such as an essay, a dissertation, or an article. Qualifying text that has been determined to be AI-generated will be highlighted blue on the submission text.

Non-qualifying text, such as bullet points, annotated bibliographies, etc., will not be processed and can create disparity between the submission highlights and the percentage shown.

How does Turnitin's indicator address false positives?

False positives (human-written text incorrectly flagged as AI-generated) can occur in lists without much structural variation, text that literally repeats itself, or text that has been paraphrased without developing new ideas. If our indicator shows a higher amount of AI writing in such text, we advise you to take that into consideration when looking at the percentage indicated.

In a longer document with a mix of authentic writing and AI-generated text, it can be difficult to determine exactly where the AI writing begins and the original writing ends, but our model should give you a reliable guide to start conversations with the submitting student.

Disclaimer
Our AI writing assessment is designed to help educators identify text that might be prepared by a generative AI tool. Our AI writing assessment may not always be accurate (it may misidentify
both human and AI-generated text) so it should not be used as the sole basis for adverse actions against a student. It takes further scrutiny and human judgment in conjunction with an
organization's application of its specific academic policies to determine whether any academic misconduct has occurred.



Page 3 of 5 - AI Writing Submission Submission ID trn:oid:::1:2787742756

Detecting Cancer from MRI Using ML Algorithm


Harsh Diggal, Chirag Yadav, Dikshant Sharma, Deepanshu Tomar
Department of Computer Science, ABES Engineering College
Address
harsh.20b0121091@abes.ac.in
chirag.20b0121191@abes.ac.in
dikshant.20b0121028@abes.ac.in
deepanshu.20b0121057@abes.ac.in

Abstract— In the field of medical imaging, early detection of brain tumours is crucial for timely intervention so that lives can be saved. This project detects brain tumours using machine learning algorithms applied to MRI scans. The dataset, consisting of MRI images, was preprocessed to ensure robustness of the model, and the data was then divided into training, validation, and test sets. The model is built on a CNN architecture. The CNN, constructed with the Keras and TensorFlow libraries, consists of multiple layers for feature extraction and classification. Training uses early stopping and a model checkpoint to monitor and save the best-performing model. This project is intended to help medical professionals make faster and more accurate diagnoses, which will ultimately benefit their patients.

Keywords— Brain Tumour Detection, Medical Imaging, Convolutional Neural Network

I. INTRODUCTION

A brain tumour, a malignant growth in the brain, has become a substantial cause of death in both children and adults in recent years, with a devastating influence on life. Such a tumour develops spontaneously from the tissues surrounding the brain or skull. Brain tumours are collections of rapidly growing brain cells that can severely damage the nervous system, and the cancerous accumulation of cells can hinder normal brain function.

Surgical techniques are often used to treat tumours in the brain. ALI (automated lesion identification), lesionGnb (Gaussian naive Bayes lesion detection), and LINDA (lesion identification employing neighbourhood data analysis) have previously been utilised, but they failed to detect small lesion areas because of variation in brain size and stroke territory.

In recent years, deep learning techniques have become popular in brain diagnosis. This technology can detect tumours in medical images such as CT and MRI scans. Methods utilising region-based approaches and thresholding techniques have become prevalent in recent image processing studies.

In addition to the accuracy criterion, we use Specificity, Precision, and Sensitivity as benchmarks to evaluate network performance. This report introduces a Convolutional Neural Network (CNN) that classifies scanned images of the brain into two categories: those showing brain tumours and those showing healthy brains. CNN performance is evaluated in terms of accuracy and precision, and the results show that the technique can identify brain tumours accurately and with high performance.

II. METHODS FOR BRAIN TUMOUR SEGMENTATION

Brain tumours in MRI or CT scans can be difficult to segment manually due to the high number of images generated, the intricate 3D anatomical perspective, and image artefacts that impair quality and complicate interpretation. Consequently, there is a significant possibility of heterogeneity in judgements made by different observers, and even within the same observer.

The scientific literature has proposed a number of automated brain tumour segmentation approaches to help radiologists overcome these difficulties. These techniques use the latest developments in machine learning and artificial intelligence to automate the segmentation process, and they are designed to improve accuracy and reduce the workload associated with manual segmentation. Atlas-based segmentation, machine learning algorithms, deep learning neural networks, clustering algorithms, and hybrid approaches are examples of common methodologies.

A. Region-Based and Shallow Unsupervised ML

One of the most widely used approaches for automated image segmentation is region-based segmentation. This method forms regions within an image from connected pixels that satisfy certain similarity criteria, such as comparable pixel intensity levels, shapes, or textures. To precisely identify the target area, the image is divided into distinct regions; when grouping pixels, the method takes into account variables such as pixel values (e.g., grayscale changes) and spatial closeness (e.g., Euclidean distance, compactness of regions). Common area-based segmentation approaches in the context of brain




tumour segmentation include strategies like region growing and clustering algorithms.

Clustering-based segmentation is a strong region-based approach that divides an image into discrete groups: pixels with high similarity are assigned to the same region, while dissimilar pixels are assigned to different regions. Clustering, an unsupervised learning method, has been intensively researched for medical image segmentation.

B. Shallow Supervised ML

The primary objective of supervised machine learning algorithms for brain tumour segmentation has shifted from segmenting generic pictures to identifying the pixels that correspond to tumours. These methods treat segmentation as a classification task over tumorous pixels: numerous extracted features serve as inputs to supervised learning models, which produce an output vector specifying the segmentation classes. Pixel classification approaches are frequently preferred over traditional segmentation techniques for brain tumours, since tumour areas tend to be distributed across the image. As a result, typical supervised machine learning techniques have been applied to segmenting brain tumours from head MRI data.

C. Deep Learning-Based

The automated feature extraction offered by deep learning techniques reduces the need for manual feature design. For brain tumour classification with deep learning, the primary approach is to pass an image through a series of modules and then use the learned features to carry out segmentation. Various deep learning algorithms have been introduced in the scientific literature for segmenting brain tumours, including recurrent NNs, CNNs, long short-term memory (LSTM) networks, and deep CNNs.

III. PROPOSED METHODOLOGY

CNNs offer an organised approach to processing medical images. CNNs, a subset of artificial NNs, are developed specifically for tasks such as image recognition. They harness deep learning, an advanced computational method for both generative and descriptive tasks, and find applications in recommender systems, natural language processing (NLP), and machine vision, enabling image and video recognition. Unlike generic artificial neural networks, which attempt to mimic the behaviour of human brain neurons but may not be the ideal choice for image processing, CNNs adopt a multilayer perspective that demands fewer computational resources. This streamlined and more efficient design simplifies the training of data for both language communication and image processing by overcoming limitations and enhancing image processing capability. Our 12-layer CNN model yields superior results in tumour detection. The proposed methodology is illustrated in the accompanying figure, along with a brief explanation.

To maintain consistent image dimensions in our proposed approach, we resize each image to a fixed size of 224 × 224 × 3. We then apply a convolutional layer with 16 filters, each of size 3 × 3. For activation functions we predominantly use the Rectified Linear Unit (ReLU), a piecewise linear function that outputs the input directly if it is positive and zero otherwise. Continuing from that point, the next stage of processing employs 36 filters for convolution, followed by a max-pooling operation.

The pooling operation applies a 2D filter to each channel of a feature map, condensing the features within the filter's covered region. In a typical CNN model, convolutional and pooling layers are stacked on top of each other. Pooling layers reduce the size of the feature maps, which in turn reduces the computational demands of the network; they consolidate the information contained in the feature maps generated by the preceding convolutional layers. Instead of preserving precise feature positions as convolutional layers do, pooling layers generate summarised features, which makes the model more robust to variations in the positioning of features within the input image. In our method we use a 2 × 2 max-pooling operation, which selects the maximum value within the filter's region of coverage in the feature map; the output of the max-pooling layer is therefore a new feature map containing the most crucial features extracted from the preceding one. You can refer to for a visual depiction of this process.

The next stage of processing employs 64 filters for convolution followed by max pooling, and the stage after that employs 128 filters for convolution followed by max pooling. Additionally, we use a Dropout layer to prevent overfitting. Next come flattening and a dense layer with 64 hidden units, followed by an additional Dropout layer and a dense layer with one hidden unit and sigmoid activation.

For the activation function of the final layer we chose Sigmoid, owing to its demonstrated superiority in accuracy over the other activation functions we tried. We use the Adam optimiser and the "categorical cross-entropy" loss function for our model. The operational workflow of our CNN model, as described, is illustrated in Fig.
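The layer stack described above can be sketched in Keras, which the abstract names as the framework used. This is an illustrative reconstruction from the text, not the authors' code: the dropout rates, padding, and the use of binary cross-entropy (the single-output counterpart of the quoted "categorical cross-entropy", required by one sigmoid unit) are assumptions.

```python
# Illustrative sketch of the 12-layer CNN described in the text.
# Dropout rates (0.5) and "same" padding are assumptions; the report
# does not state them.
from tensorflow.keras import layers, models

def build_model(input_shape=(224, 224, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, (3, 3), activation="relu", padding="same"),
        layers.Conv2D(36, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(128, (3, 3), activation="relu", padding="same"),
        layers.MaxPooling2D((2, 2)),
        layers.Dropout(0.5),  # assumed rate
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),  # assumed rate
        layers.Dense(1, activation="sigmoid"),
    ])
    # One sigmoid unit implies binary cross-entropy (assumption; the text
    # quotes "categorical cross-entropy").
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
```

Counting the convolution, pooling, dropout, flatten, and dense layers gives the 12 layers the text mentions; this is one reading of the description, since the report does not list the stack explicitly.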

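The 2 × 2 max-pooling step described above can be illustrated on a toy feature map with NumPy; this is a minimal sketch of the operation itself, not the authors' code.

```python
import numpy as np

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 on a single-channel feature map."""
    h, w = fmap.shape
    # Group the map into non-overlapping 2x2 blocks, then keep each
    # block's maximum -- the "most crucial feature" in its region.
    blocks = fmap[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 6, 5, 1],
                 [7, 2, 8, 3],
                 [0, 9, 4, 4]])
pooled = max_pool_2x2(fmap)
# pooled is [[6, 5], [9, 8]]: each entry is the maximum of one 2x2 block,
# and the 4x4 map shrinks to 2x2, halving each spatial dimension.
```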



IV. EXPERIMENTAL RESULTS

A. Experimental Dataset

In our research experiment we used a dataset comprising a total of 3000 images spanning the FLAIR, T2, and T1 modalities, of which 1500 are class 1 images and 1500 are class 0 images, class 0 denoting non-tumour images and class 1 denoting tumour images. A few of these are shown in Fig. and.

Our final results were obtained using 1496 images for training and 401 images for testing, maintaining a 7:2 ratio, and training for 50 epochs. Our model comprises a 12-layer CNN architecture, and we took precautions to mitigate overfitting by excluding certain images from the dataset.
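The split reported later (1496 training and 401 test images drawn from the 3000-image set) can be sketched as follows. The labels and the random seed are placeholders, and the exclusion step is left abstract, since the report does not state which images were removed.

```python
import numpy as np

rng = np.random.default_rng(seed=0)  # assumed seed, for reproducibility

# Placeholder labels: 1500 tumour (class 1) and 1500 non-tumour (class 0)
# images, as in the dataset described above.
labels = np.array([1] * 1500 + [0] * 1500)
indices = rng.permutation(len(labels))

# After excluding some images (the report does not say which ones),
# 1496 images are kept for training and 401 for testing.
train_idx = indices[:1496]
test_idx = indices[1496:1496 + 401]
```

The abstract also mentions early stopping and a model checkpoint during training; in Keras those would be supplied as callbacks to `model.fit`, but the exact patience and monitored metric are not given in the report.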

B. Finding and Analysis

Tables and present the results of our experiments, in which we tested various CNN activation functions and optimizers. In our initial test we combined an SVM with the CNN, but this yielded an accuracy of only 24.37%. By employing Sigmoid in the final layer and Adam as the optimizer, we achieved an accuracy of 98.75%, obtained with 1496 training images, 401 test images, and a 7:2 split ratio. This represents a substantial improvement over traditional gradient descent optimisation techniques.

Fig. illustrates the final results of our proposed technique after 50 epochs. Additionally, we provide graphs of the training/validation loss and training/validation accuracy over the number of epochs in Fig. , , and .

We achieved an accuracy of 98.75% with our proposed model, surpassing the results reported by Seetha et al. and Tonmoy Hossain et al., as outlined in Table. A sample of a predicted output image can be observed in Fig.
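The accuracy figure above can be complemented by the Specificity, Precision, and Sensitivity benchmarks named in the introduction. A minimal sketch of how those metrics follow from a binary confusion matrix, computed on made-up predictions rather than the paper's data:

```python
import numpy as np

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for binary labels where 1 = tumour."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    return tp, fp, tn, fn

# Toy ground truth and predictions, not the paper's 401 test images.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
tp, fp, tn, fn = confusion_counts(y_true, y_pred)

sensitivity = tp / (tp + fn)   # recall: tumours correctly flagged
specificity = tn / (tn + fp)   # healthy scans correctly cleared
precision = tp / (tp + fp)     # flagged scans that truly have tumours
accuracy = (tp + tn) / len(y_true)
```

Reporting all four values matters in this setting: with balanced classes accuracy is informative, but sensitivity (missed tumours) and specificity (false alarms) capture the two clinically distinct failure modes.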

V. CONCLUSIONS

The primary applications of MRI typically involve tumour segmentation and classification. While CNNs can autonomously learn complex features from multi-modal MRI images of both healthy brain tissue and malignant tissue, our objective was to enhance their accuracy.

Initially, we conducted experiments combining an SVM with the CNN, but this yielded an accuracy of only 24.37%. We then explored various parameters. Modifying the last layer to use Sigmoid and transitioning to the RMSProp optimizer improved the accuracy significantly, to 95.71%. Adjusting the last layer to use SoftMax with the AdaMax optimiser further improved the accuracy to 97.63%. Pursuing further enhancements, we finally modified the last layer to use Sigmoid with the Adam optimizer, achieving an accuracy of 98.75%.

