
KNEE OSTEOARTHRITIS DETECTION USING
AN IMPROVED CENTERNET WITH
PIXEL-WISE VOTING SCHEME
A PROJECT REPORT
Submitted by
B. SASHANK [RA2011003010040]
SANJAY G [RA2011003010022]
Under the Guidance of
Mrs. P. NITHYAKANI
Assistant Professor, Department of Computing Technologies
in partial fulfillment of the requirements for the degree of
BACHELOR OF TECHNOLOGY
in
COMPUTER SCIENCE AND ENGINEERING

DEPARTMENT OF COMPUTING TECHNOLOGIES


COLLEGE OF ENGINEERING AND TECHNOLOGY
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
KATTANKULATHUR- 603 203
MAY 2024
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
KATTANKULATHUR - 603203
BONAFIDE CERTIFICATE

Certified that the 18CSP111L project report titled “KNEE OSTEOARTHRITIS DETECTION
USING AN IMPROVED CENTERNET WITH PIXEL-WISE VOTING SCHEME” is the
bonafide work of B. SASHANK [Reg. No.: RA2011003010040] and SANJAY G [Reg. No.:
RA2011003010022], who carried out the project work under my supervision. Certified further that,
to the best of my knowledge, the work reported herein does not form part of any other thesis or
dissertation on the basis of which a degree or award was conferred on an earlier occasion to this or
any other candidate.

Mrs. P. NITHYAKANI Dr. SUBASH R.


SUPERVISOR PANEL HEAD
Assistant Professor Associate Professor
Department of Computing Technologies Department of Computing Technologies

Dr. M. PUSHPALATHA
HEAD OF THE DEPARTMENT
Department of Computing Technologies

Examiner I Examiner II

Department of Computing Technologies
SRM Institute of Science & Technology
Own Work Declaration Form

Degree/ Course : B.Tech / CSE


Student Name : B. SASHANK, SANJAY G
Registration Number : RA2011003010040, RA2011003010022
Title of Work : KNEE OSTEOARTHRITIS DETECTION USING AN IMPROVED
CENTERNET WITH PIXEL-WISE VOTING SCHEME
We hereby certify that this assessment complies with the University’s Rules and
Regulations relating to academic misconduct and plagiarism, as listed on the
University website, in the Regulations, and in the Education Committee guidelines.

We confirm that all the work contained in this assessment is our own except
where indicated, and that we have met the following conditions:
● Clearly referenced / listed all sources as appropriate.
● Referenced and put in inverted commas all quoted text (from books, the web, etc.).
● Given the sources of all pictures, data, etc. that are not our own.
● Not made any use of the report(s) or essay(s) of any other student(s), either past
or present.
● Acknowledged in appropriate places any help that we have received from others
(e.g. fellow students, technicians, statisticians, external sources).
● Complied with any other plagiarism criteria specified in the Course handbook /
University website.
We understand that any false claim for this work will be penalized in accordance
with the University policies and regulations.
DECLARATION:
We are aware of and understand the University’s policy on academic misconduct and
plagiarism, and we certify that this assessment is our own work, except where
indicated by referencing, and that we have followed the good academic practices noted
above.
Student 1 Signature:

Student 2 Signature:

Date:
If you are working in a group, please write your registration numbers and sign
with the date for every student in your group.
ACKNOWLEDGEMENT

We express our humble gratitude to Dr. C. Muthamizhchelvan, Vice-Chancellor, SRM


Institute of Science and Technology, for the facilities extended for the project work and his
continued support.

We extend our sincere thanks to Dr. T. V. Gopal, Dean-CET, SRM Institute of Science and
Technology, for his invaluable support.

We wish to thank Dr. Revathi Venkataraman, Professor and Chairperson, School of


Computing, SRM Institute of Science and Technology, for her support throughout the project
work.

We are incredibly grateful to our Head of the Department, Dr. M. Pushpalatha, Department
of Computing Technologies, SRM Institute of Science and Technology, for her suggestions
and encouragement at all stages of the project work.

We want to convey our thanks to our Project Coordinator, Dr. S. Godfrey Winster; our Panel
Head, Dr. Subash R., Associate Professor; and our Panel Members, Dr. Divya Mohan, Associate
Professor, Mrs. P. Nithyakani, Assistant Professor, and Mr. S. Prabhu, Assistant Professor,
Department of Computing Technologies, SRM Institute of Science and Technology, for their
inputs during the project reviews and for their support.

We register our immeasurable thanks to our Faculty Advisor, Dr. Sivakumar S., Assistant
Professor, Department of Computing Technologies, SRM Institute of Science and
Technology, for leading and helping us to complete our course.

Our inexpressible respect and thanks to our guide, Dr. Sivakumar S., Assistant Professor,
Department of Computing Technologies, SRM Institute of Science and Technology, for
providing us with an opportunity to pursue our project under his mentorship. He provided us
with the freedom and support to explore the research topics of our interest. His passion for
solving problems and making a difference in the world has always been inspiring.

We sincerely thank all the staff and students of the Department of Computing Technologies, School
of Computing, SRM Institute of Science and Technology, for their help during our project.
Finally, we would like to thank our parents, family members, and friends for their
unconditional love, constant support and encouragement.

B. SASHANK
[Reg No:RA2011003010040]

SANJAY G
[Reg No:RA2011003010022]

ABSTRACT
Radiologists use multi-view images, such as computed tomography (CT) scans, MRIs, and
X-rays, to diagnose knee problems. X-ray is the most cost-effective imaging approach and is
used routinely. Several image-processing approaches are available to detect knee disease in its
early stages; however, the present methods can still be improved in terms of accuracy and
precision. Furthermore, the hand-crafted feature extraction techniques used in machine
learning-based approaches are time-consuming. This report therefore proposes a technique
based on a customized CenterNet with a pixel-wise voting scheme that automatically extracts
features for knee disease detection. The proposed model uses the most representative features
and a weighted pixel-wise voting scheme to produce a more accurate bounding box, based on
the voting score of each pixel inside the initial box. The model is a robust, improved
architecture based on CenterNet that uses DenseNet-201 as the base network for feature
extraction. It precisely detects knee osteoarthritis (KOA) in knee images and determines its
severity level according to the KL grading system (Grade I, Grade II, Grade III, and
Grade IV). The proposed technique outperforms existing techniques, with an accuracy of
99.14% on the test set and 98.97% under cross-validation.

TABLE OF CONTENTS

ABSTRACT vi

LIST OF TABLES ix

LIST OF FIGURES x

LIST OF ABBREVIATIONS xi

1. INTRODUCTION 1

1.1 Objective 1

1.2 Problem Statement 1

1.3 Software Requirements 3

1.4 Hardware Requirements 4

2 LITERATURE SURVEY 6
2.1 Motivation 6
2.2 Literature Survey 7

3 SYSTEM ARCHITECTURE AND DESIGN 19

3.1 Existing System 19


3.1.1 Disadvantages Of Existing System 19

3.2 Proposed System 19


3.2.1 Advantages Of Proposed System 19
3.3 Functional Requirements 20
3.4 Non-Functional Requirements 20
3.5 System Architecture 21
3.6 UML Diagrams 22
3.7 Goals 23
3.8 Use Case Diagram 23
3.8.1 Class Diagram 24

3.8.2 Activity Diagram 25
3.8.3 Sequence Diagram 26
3.8.4 Collaboration Diagram 27
3.8.5 Component Diagram 28
3.8.6 Deployment Diagram 28

4 METHODOLOGY 29

4.1 Data Preparation and Preprocessing: 29

4.2 Model Architecture 29

4.3 Training Procedure 30

4.4 Evaluation Metrics 30

4.5 Model Deployment 31

4.6 Performance Validation 31

5 RESULTS AND DISCUSSION 32

5.1 Results 33
5.1.1 Website Interface 34
5.1.2 Register and Login / Sign up page 35
5.1.3 Classification of Knee Osteoarthritis 37
5.1.4 Prediction of Knee Osteoarthritis 39
5.2 Discussion 41

6 CONCLUSION AND FUTURE SCOPE 43

6.1 Conclusion 43
6.1.1 Key Features 43
6.2 Future Scope 44

REFERENCES 45

APPENDIX 1 47

APPENDIX 2 54

LIST OF TABLES

Table No. TITLE Page No.

2.1 Comparison Tabular Format for Literature Survey: 10

LIST OF FIGURES
Fig. TITLE Page
No. No.

3.1 System architecture 21


3.2 Use Case Diagram 23
3.3 Class Diagram 24
3.4 Activity Diagram 25
3.5 Sequence Diagram 26
3.6 Collaboration Diagram 26
3.7 Component Diagram 27
3.8 Deployment Diagram 28

5.1 Website Interface 33

5.2 Website Interface 34

5.3 Register and Login / Sign up page 35

5.4 Register and Login / Sign up page 35

5.5 Classification of Knee Osteoarthritis 36

5.6 Classification of Knee Osteoarthritis 36

5.7 Classification of Knee Osteoarthritis 37

5.8 Classification of Knee Osteoarthritis 38

5.9 Prediction of Knee Osteoarthritis 39


5.10 Prediction of Knee Osteoarthritis 40

ABBREVIATIONS

OA Osteoarthritis
KOA Knee Osteoarthritis
AI Artificial Intelligence
CNN Convolutional Neural Network
MRI Magnetic Resonance Imaging
CT Computed Tomography
RGB Red, Green, Blue (color model)
GPU Graphics Processing Unit
CAD Computer-Aided Diagnosis
HIPAA Health Insurance Portability and Accountability Act
ML Machine Learning
KL Kellgren–Lawrence

CHAPTER 1
INTRODUCTION

Knee osteoarthritis (KOA) is a chronic joint disease caused by the progressive degeneration of the
articular cartilage in the knee. The symptoms of KOA include joint noises (cracking), swelling, pain,
and difficulty in movement. Moreover, severe KOA may lead to falls and, in turn, fractures of the
knee bone that can ultimately result in disability of the leg. The imaging techniques employed for the
analysis of knee disease include MRI, X-ray, and CT scans. MRI and CT scans are considered
suitable for KOA assessment; they are typically performed with an intravenous contrast agent, which
provides a clear view of the knee joint. However, these approaches are associated with high costs,
long examination times, and potential health risks, particularly for patients with renal insufficiency.
There is therefore a need for KOA assessment techniques that can be employed without a contrast
agent and that require minimal expense and examination time. X-ray imaging is considered the more
feasible option: it provides visualization of the bony structures and is a less expensive approach for
knee analysis.

1.1 OBJECTIVE

The main purpose of this work is to automate KOA detection in an effective manner. To this end, a
robust deep learning-based method is proposed: a CenterNet built on DenseNet-201 with a pixel-wise
voting scheme, which extracts the most representative features to detect and recognize early KOA
effectively. We utilize a weighted pixel-wise voting scheme to obtain the best localization results
from our customized CenterNet model.
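
For concreteness, the center-point detection step that a CenterNet-style model performs can be sketched as follows: an illustrative NumPy decoding of a single-class center heatmap into bounding boxes. The function name, threshold, and the simple local-maximum check are assumptions for illustration, not the exact implementation used in this project.

```python
import numpy as np

def decode_centernet(heatmap, wh, offset, score_thresh=0.5):
    """Decode CenterNet-style outputs into bounding boxes.

    heatmap: (H, W) per-pixel center confidence for one class
    wh:      (H, W, 2) predicted box width/height at each pixel
    offset:  (H, W, 2) sub-pixel offset of the center
    Returns a list of (x1, y1, x2, y2, score) tuples.
    """
    H, W = heatmap.shape
    boxes = []
    for y in range(H):
        for x in range(W):
            s = heatmap[y, x]
            if s < score_thresh:
                continue
            # Keep only local maxima in a 3x3 neighborhood (a simple
            # stand-in for the max-pooling trick used in CenterNet).
            y0, y1 = max(0, y - 1), min(H, y + 2)
            x0, x1 = max(0, x - 1), min(W, x + 2)
            if s < heatmap[y0:y1, x0:x1].max():
                continue
            cx = x + offset[y, x, 0]  # refine center with the offset map
            cy = y + offset[y, x, 1]
            w, h = wh[y, x]
            boxes.append((cx - w / 2, cy - h / 2,
                          cx + w / 2, cy + h / 2, float(s)))
    return boxes
```

A single strong peak in the heatmap thus yields a single box centered on that peak, with its size read from the width/height map at the same location.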

1.2 PROBLEM STATEMENT

It is possible to identify knee disease in its early stages using a variety of image-processing
techniques, but the current algorithms can still be improved in accuracy and precision. Hand-crafted
feature extraction mechanisms in machine learning-based techniques are also laborious.
Consequently, we propose an automated feature extraction method that uses a customized CenterNet
with a pixel-wise voting scheme.

1.3 SOFTWARE REQUIREMENTS

Software requirements deal with defining software resource requirements and prerequisites that need
to be installed on a computer to provide optimal functioning of an application. These requirements or
prerequisites are generally not included in the software installation package and need to be installed
separately before the software is installed.

Platform – In computing, a platform describes some sort of framework, either in hardware or


software, which allows software to run. Typical platforms include a computer’s architecture,
operating system, or programming languages and their runtime libraries.

An operating system is one of the first requirements mentioned when defining system (software)
requirements. Software may not be compatible with different versions of the same line of operating
systems, although some measure of backward compatibility is often maintained. For example, most
software designed for Microsoft Windows XP does not run on Microsoft Windows 98, although the
converse is not always true. Similarly, software designed using newer features of Linux kernel v2.6
generally does not run or compile properly (or at all) on Linux distributions using kernel v2.2 or
v2.4.

APIs and drivers – Software making extensive use of special hardware devices, like high-end
display adapters, needs special API or newer device drivers. A good example is DirectX, which is a
collection of APIs for handling tasks related to multimedia, especially game programming, on
Microsoft platforms.

Web browser – Most web applications and software that depend heavily on Internet technologies
make use of the default browser installed on the system. Microsoft Internet Explorer is a frequent
choice for software running on Microsoft Windows that makes use of ActiveX controls, despite their
vulnerabilities.

1) Software : Anaconda

2) Primary Language : Python

3) Web Framework : Flask

4) Development Environment : Jupyter Notebook

5) Database : SQLite3

6) Front-End Technologies : HTML, CSS, JavaScript, and Bootstrap 4

1.4 HARDWARE REQUIREMENTS

The most common set of requirements defined by any operating system or software application is the
physical computer resources, also known as hardware. A hardware requirements list is often
accompanied by a hardware compatibility list (HCL), especially in the case of operating systems. An
HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular
operating system or application. The following sub-sections discuss the various aspects of hardware
requirements.

Architecture – All computer operating systems are designed for a particular computer architecture.
Most software applications are limited to particular operating systems running on particular
architectures. Although architecture-independent operating systems and applications exist, most need
to be recompiled to run on a new architecture. See also a list of common operating systems and their
supporting architectures.

Processing power – The power of the central processing unit (CPU) is a fundamental system
requirement for any software. Most software running on x86 architecture define processing power as
the model and the clock speed of the CPU. Many other features of a CPU that influence its speed and
power, like bus speed, cache, and MIPS are often ignored. This definition of power is often
erroneous, as AMD Athlon and Intel Pentium CPUs at similar clock speed often have different
throughput speeds. Intel Pentium CPUs have enjoyed a considerable degree of popularity, and are
often mentioned in this category.

Memory – All software, when run, resides in the random access memory (RAM) of a computer.
Memory requirements are defined after considering demands of the application, operating system,
supporting software and files, and other running processes. Optimal performance of other unrelated
software running on a multi-tasking computer system is also considered when defining this
requirement.

Secondary storage – Hard-disk requirements vary, depending on the size of software installation,
temporary files created and maintained while installing or running the software, and possible use of
swap space (if RAM is insufficient).

Display adapter – Software requiring a better than average computer graphics display, like graphics
editors and high-end games, often define high-end display adapters in the system requirements.

Peripherals – Some software applications need to make extensive and/or special use of some
peripherals, demanding the higher performance or functionality of such peripherals. Such peripherals
include CD-ROM drives, keyboards, pointing devices, network devices, etc.

1) Operating System : Windows only

2) Processor : Intel Core i5 or above

3) RAM : 8 GB or above

4) Hard Disk : 25 GB in local drive

CHAPTER 2
LITERATURE SURVEY

2.1 Motivation

The motivation for this project is rooted in several key factors:


1. Enhancing Healthcare Accessibility: The project aims to enhance accessibility to healthcare
services by developing an advanced computer vision system for early detection of knee
osteoarthritis. This can particularly benefit underserved populations or regions with limited
access to specialized medical facilities.
2. Improving Diagnostic Precision: By employing sophisticated algorithms like the improved
CenterNet with a pixel-wise voting scheme, the project seeks to enhance the precision and
reliability of knee osteoarthritis detection from medical imaging data. This can assist
healthcare professionals in making more accurate diagnoses and treatment plans.
3. Empowering Preventive Healthcare: Early detection of knee osteoarthritis enables proactive
measures such as lifestyle modifications and early interventions, potentially preventing
disease progression and reducing the need for invasive treatments. This empowers individuals
to take control of their health and well-being.
4. Economic Impact: Timely detection and intervention can lead to cost savings by reducing the
need for advanced treatments and surgeries associated with advanced stages of knee
osteoarthritis. This can alleviate financial burdens on healthcare systems and individuals
while improving overall healthcare efficiency.
5. Advancing Medical Technology: The development of innovative computer vision techniques
for knee osteoarthritis detection contributes to the advancement of medical technology and
research. It opens doors for further innovation in automated diagnostic systems and enhances
the capabilities of medical imaging technology.
6. Customized Patient Care: Accurate detection of knee osteoarthritis facilitates personalized
treatment plans tailored to individual patient needs and disease severity. This personalized
approach to care enhances patient outcomes and satisfaction by addressing their unique
healthcare requirements.
7. Supporting Clinical Decision-Making: Automated detection algorithms serve as valuable
tools for healthcare providers, offering decision support in interpreting medical images and

formulating treatment strategies. This enhances diagnostic accuracy and enables more
informed clinical decisions.
8. Public Health Awareness: The project contributes to raising awareness about knee
osteoarthritis and the importance of early detection and management. By educating healthcare
professionals and the public, it promotes proactive measures for preventing and managing
this common musculoskeletal condition.
9. Research Collaboration: Collaborative efforts between researchers, healthcare professionals,
and technology experts foster interdisciplinary innovation in healthcare. The project
encourages collaboration to address complex healthcare challenges and improve patient
outcomes.
10. Patient Engagement: Early detection empowers patients to actively participate in their
healthcare journey by providing them with timely information about their condition. This
fosters patient engagement and encourages adherence to treatment plans and preventive
measures.
These motivations highlight the diverse benefits of developing advanced computer vision systems for
the early detection and management of knee osteoarthritis.

2.2 Literature Survey

History of patients with severe knee osteoarthritis


Background: One out of three adults over the age of 65, and one out of two over the age of 80, falls
annually. Fall risk increases for older adults with severe knee osteoarthritis, a matter that warrants
further research. The main purpose of this study was to investigate the history of falls, including the
frequency, mechanism, and location of falls, the activity during falling, and the injuries sustained
from falls, while also examining the patients' physical status. The secondary purpose was to
determine the effect of age, gender, chronic diseases, social environment, pain elsewhere in the body,
and components of health-related quality of life, such as pain, stiffness, physical function, and
dynamic stability, on fall frequency in adults aged 65 years and older with severe knee osteoarthritis.

Methods: An observational longitudinal study was conducted on 68 patients (11 males and 57
females) scheduled for total knee replacement due to severe knee osteoarthritis (grade 3 or 4) and
knee pain lasting at least one year. Patients were personally interviewed for fall history and asked to
complete self-administered questionnaires, such as the 36-item Short Form Health Survey (SF-36)
and the Western Ontario and McMaster Universities Arthritis Index (WOMAC), and a physical
performance test was administered.

Results: The frequency of falls was 63.2% for the past year. The majority of falls took place during
walking (89.23%). The main cause of falling was stumbling (41.54%). There was a high rate of
injurious falls (29.3%). The time patients needed to complete the physical performance test implied
the presence of disability and frailty. The high fall risk, high disability levels, and low quality of life
were confirmed by the questionnaires and the mobility test.

Conclusions: Patients with severe knee osteoarthritis were at greater risk of falling compared to
healthy older adults. Pain, stiffness, limited physical ability, and reduced muscle strength, all
consequences of severe knee osteoarthritis, restricted the patients' quality of life and increased their
fall risk. Therefore, patients with severe knee osteoarthritis should not postpone total knee
replacement, since they would otherwise face more complicated outcomes involving fractures, other
serious injuries, and disability.

MRI Synovitis Scoring Correlates with Knee Osteoarthritis Inflammation


Objective: To evaluate the association between synovitis on contrast-enhanced (CE) MRI and
microscopic and macroscopic features of synovial tissue inflammation.

Method: Forty-one patients (mean age 60 years, 61% women) with symptomatic radiographic knee
OA were studied: twenty underwent arthroscopy (macroscopic features were scored 0-4 and synovial
biopsies obtained), and twenty-one underwent arthroplasty (synovial tissues were collected). After
haematoxylin and eosin staining, the lining cell layer, synovial stroma, and inflammatory infiltrate of
the synovial tissues were scored (0-3). T1-weighted CE-MRI scans (3 T) were used to
semi-quantitatively score synovitis at 11 sites (0-22) according to Guermazi et al. Spearman's rank
correlations were calculated.

Results: The mean (SD) MRI synovitis score was 8.0 (3.7) and the total histology grade was 2.5
(1.6). Median (range) scores of the macroscopic features were 2 (1-3) for neovascularization, 1 (0-3)
for hyperplasia, 2 (0-4) for villi, and 2 (0-3) for fibrin deposits. The MRI synovitis score was
significantly correlated with the total histology grade [r = 0.6], as well as with the lining cell layer
[r = 0.4], stroma [r = 0.3], and inflammatory infiltrate [r = 0.5] grades. Moreover, the MRI synovitis
score was also significantly correlated with macroscopic neovascularization [r = 0.6], hyperplasia
[r = 0.6], and villi [r = 0.6], but not with fibrin [r = 0.3].

Conclusion: Synovitis severity on CE-MRI assessed by the new whole-knee scoring system of
Guermazi et al. is a valid, non-invasive method for determining synovitis, as it is significantly
correlated with both macroscopic and microscopic features of synovitis in knee OA patients.

Explainable machine learning for knee osteoarthritis diagnosis based on a
novel fuzzy feature selection methodology
Knee Osteoarthritis (KOA) is a degenerative joint disease of the knee that results from the
progressive loss of cartilage. Due to KOA's multifactorial nature and the poor understanding of its
pathophysiology, there is a need for reliable tools that will reduce the diagnostic errors made by
clinicians. The existence of public databases has facilitated the advent of advanced analytics in KOA
research; however, the heterogeneity of the available data, along with the observed high feature
dimensionality, makes this diagnostic task difficult. The objective of the present study is to provide a
robust feature selection (FS) methodology that can: (i) handle the multidimensional nature of the
available datasets and (ii) alleviate the shortcomings of existing feature selection techniques in
identifying the important risk factors that contribute to KOA diagnosis. To this end, multidimensional
data obtained from the Osteoarthritis Initiative database for individuals with and without KOA were
used. The proposed fuzzy ensemble feature selection methodology aggregates the results of several
FS algorithms (filter, wrapper, and embedded ones) based on fuzzy logic. The effectiveness of the
proposed methodology was evaluated using an extensive experimental setup that involved multiple
competing FS algorithms and several well-known ML models. A 73.55% classification accuracy was
achieved by the best-performing model (a Random Forest classifier) on a group of twenty-one
selected risk factors. Explainability analysis was finally performed to quantify the impact of the
selected features on the model's output, thus enhancing our understanding of the rationale behind the
decision-making mechanism of the best model.
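
The aggregation idea behind such a fuzzy ensemble can be illustrated with a small sketch: each FS algorithm's scores are rescaled to [0, 1] membership values and averaged before ranking. This is a simplified stand-in for the surveyed paper's fuzzy-logic aggregation; the function name and the mean aggregator are assumptions.

```python
import numpy as np

def fuzzy_ensemble_select(score_lists, k):
    """Aggregate feature scores from several selectors and pick the top k.

    score_lists: list of 1-D sequences, one per FS algorithm, each giving
    a relevance score for every feature (higher = more relevant).
    Each score vector is min-max rescaled to [0, 1], a simple membership
    value; memberships are averaged across algorithms, and the k
    highest-ranked feature indices are returned (sorted).
    """
    memberships = []
    for s in score_lists:
        s = np.asarray(s, dtype=float)
        rng = s.max() - s.min()
        # Constant score vectors carry no ranking information; treat all
        # features as equally relevant for that selector.
        memberships.append((s - s.min()) / rng if rng > 0 else np.ones_like(s))
    agg = np.mean(memberships, axis=0)
    return sorted(np.argsort(agg)[::-1][:k].tolist())
```

Averaging memberships rather than raw scores keeps selectors with very different score scales (e.g. mutual information vs. impurity importance) from dominating the ensemble.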

Analysis of knee joint vibration signals using ensemble empirical mode decomposition
Knee joint vibroarthrographic (VAG) signals acquired from extensive movements of the knee joints
provide insight into the current pathological condition of the knee. VAG signals are non-stationary,
aperiodic, and non-linear in nature. This investigation focused on analyzing VAG signals using
Ensemble Empirical Mode Decomposition (EEMD) and modeling a reconstructed signal using
Detrended Fluctuation Analysis (DFA). In the proposed methodology, the reconstructed signal is
used and entropy-based measures are extracted as features for training semi-supervised learning
classifier models. Features such as Tsallis entropy, permutation entropy, and spectral entropy were
extracted as quantified measures of the complexity of the signals. These features were converted into
training vectors for classification using a Random Forest. The study yielded an accuracy of 86.52%
in classifying the signals. The proposed work can be used in the non-invasive pre-screening of
knee-related issues such as articular damage and chondromalacia patellae, as it could prove useful in
classifying VAG signals into abnormal and normal sets.
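
As a rough illustration of one of the features mentioned above, a standard Bandt-Pompe permutation entropy estimator can be written in a few lines of NumPy. This is the textbook formulation, not the surveyed paper's exact feature pipeline.

```python
import math
import numpy as np

def permutation_entropy(x, order=3, normalize=True):
    """Permutation entropy of a 1-D signal (Bandt-Pompe estimator).

    Slides a window of `order` samples over x, maps each window to the
    permutation that sorts it, and computes the Shannon entropy of the
    resulting permutation distribution. When `normalize` is True the
    result is divided by log(order!), giving a value in [0, 1].
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - order + 1
    counts = {}
    for i in range(n):
        pattern = tuple(np.argsort(x[i:i + order]))  # ordinal pattern
        counts[pattern] = counts.get(pattern, 0) + 1
    probs = np.array(list(counts.values()), dtype=float) / n
    h = -np.sum(probs * np.log(probs))
    if normalize:
        h /= math.log(math.factorial(order))
    return h
```

A perfectly monotone signal produces a single ordinal pattern and hence zero entropy, while an irregular signal spreads probability over many patterns and scores higher, which is why such measures serve as complexity features for VAG classification.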

Gadolinium deposition in brain: Current scientific evidence and future perspectives


In the past four years, many publications have described a concentration-dependent deposition of
gadolinium in the brain, in both adults and children, seen as high signal intensities in the globus
pallidus and dentate nucleus on unenhanced T1-weighted images. Postmortem human and animal
studies have validated gadolinium deposition in these T1-hyperintense areas, raising new concerns
about the safety of gadolinium-based contrast agents (GBCAs). Residual gadolinium is deposited not
only in the brain but also in extracranial tissues such as the liver, skin, and bone. This review
summarizes the current evidence on gadolinium deposition in human and animal bodies, evaluates
the effects of different types of GBCAs on gadolinium deposition, introduces the possible entrance
and clearance mechanisms of gadolinium and the potential side effects that may be related to
gadolinium deposition in humans or animals, and puts forward some suggestions for further research.

Table 2.1: Comparison Tabular Format for Literature Survey:

S.N TITLE & METHODOL PROPOSED CONS CONCLUSION


O AUTHORS OGY SYSTEM

1 TITLE: The proposed The proposed 1. The In conclusion, the


Brain tumor method system integrates proposed proposed method
segmentation integrates a a Swin method combining
based on the Swin Transformer for may semantic, edge,
fusion of deep Transformer for semantic feature potentially and fusion
semantics and semantic feature extraction and face modules
edge extraction, employs a shifted challenges significantly
information in employing a patch tokenization in enhances brain
multimodal shifted patch strategy. It computatio tumor
MRI tokenization combines a nal segmentation in
AUTHOR: strategy. It convolutional complexity multimodal MRI.
Zhiqin Zhu, combines a neural network due to the The integration of
Xianyu He CNN-based (CNN)-based integration Swin Transformer,
et.al., edge detection edge detection of multiple ESAB, and MFIB
LINK: module utilizing module with an modules, demonstrated
https://www.sci an Edge Spatial Edge Spatial requiring superior

9
encedirect.com Attention Block Attention Block careful performance over
/science/article/ (ESAB). (ESAB) and a optimizatio existing methods,
abs/pii/S15662 Feature fusion Multi-Feature n for showcasing its
53522001981 occurs through a Inference Block practical potential for more
(2023) Multi-Feature (MFIB) using application accurate and
Inference Block graph convolution . clinically relevant
(MFIB) for effective tumor delineation
utilizing graph fusion of in medical
convolution for semantic and imaging.
effective edge information
integration of in multimodal
semantic and MRI for brain
edge features. tumor
segmentation.

2 TITLE: The The proposed 1. In conclusion, the


A novel methodology system utilizes an Challenges proposed deep
framework for involves enhanced deep include learning-based
potato leaf leveraging the learning model, potential approach
disease Plant Village an Efficient computatio demonstrates
detection using Dataset DenseNet variant nal robustness in
an efficient combined with with an extra complexity classifying potato
deep learning manually transition layer, to due to deep leaf diseases into
model collected data classify potato learning five classes,
AUTHOR: for Potato Leaf leaf diseases into model achieving 97.2%
R. Mahum, H. Roll, Potato five classes. enhanceme accuracy. It
Munir et.al., Verticillium Leveraging ‘The nts and introduces
LINK: Wilt, and Potato Plant Village dependenc advancements in
https://www.ta Healthy classes. Dataset’ and e on the disease detection,
ndfonline.com/ A modified manually availability utilizing enhanced
doi/abs/10.108 DenseNet-201 collected data, it and quality models and
0/10807039.20 with an aims to address of addressing
22.2064814 additional imbalanced manually imbalanced data
(2023) transition layer training, collected challenges,
and reweighted minimize data. marking a
cross-entropy overfitting, and significant step
loss function is achieve accurate towards efficient
trained to multi-class and accurate
classify five classification, automated disease
potato leaf marking a management in
diseases. This pioneering potato crops.
model aims to approach in
address potato disease
imbalance detection and
issues, minimize classification.
overfitting, and
achieve higher
accuracy,
evaluated at
97.2% through

10
rigorous
experimentation
.

3 TITLE: Explainable machine learning for knee osteoarthritis diagnosis based on a novel fuzzy feature selection methodology
AUTHOR: Christos Kokkotis, Charis Ntakolia et al.
LINK: https://pubmed.ncbi.nlm.nih.gov/35099771/ (2022)
METHODOLOGY: The methodology employed a multidimensional dataset from the Osteoarthritis Initiative database, focusing on knee osteoarthritis (KOA). A novel fuzzy ensemble feature selection technique amalgamated various FS algorithms (filter, wrapper, embedded) using fuzzy logic. Extensive evaluations encompassing multiple FS methods and machine learning models validated the approach, achieving 73.55% classification accuracy with a Random Forest classifier. Further explainability analysis elucidated the selected features' impact on the model's decision-making process, enhancing comprehension of the model's rationale.
PROPOSED SYSTEM: The system utilizes multidimensional data from the Osteoarthritis Initiative database for KOA analysis. It introduces a fuzzy ensemble feature selection method amalgamating various FS algorithms to address high-dimensionality issues. The approach aims to identify crucial risk factors for KOA diagnosis, achieving 73.55% classification accuracy using a Random Forest classifier and employing explainability analysis for enhanced comprehension of model decision-making.
DRAWBACKS: 1. The methodology's limitations include potential sensitivity to parameter settings, the necessity for domain expertise in fine-tuning fuzzy logic, and the reliance on available database heterogeneity.
CONCLUSION: The study introduced a robust fuzzy ensemble feature selection approach for KOA diagnosis, achieving 73.55% accuracy with the Random Forest classifier. The methodology effectively handled multidimensional datasets, aiding in identifying crucial risk factors. The subsequent explainability analysis enhanced understanding of the model's decision-making process, contributing to KOA diagnostic improvement.

4 TITLE: Automatic alert generation in a surveillance systems for smart city environment using deep learning algorithm
AUTHOR: B. Janakiramaiah, Kalyani Gadupudi et al.
LINK: https://www.researchgate.net/publication/338852091_Automatic_alert_generation_in_a_surveillance_systems_for_smart_city_environment_using_deep_learning_algorithm (2022)
METHODOLOGY: The methodology involves designing an Intelligent Video Surveillance (IVS) system employing Convolutional Neural Networks (CNNs). The system is developed to detect anomalous activities in video sequences for enhanced security. Real-time responsiveness is ensured, triggering automatic alerts during emergencies like fire or intrusion. Experimentation with various surveillance datasets validates the system's efficacy in achieving notably low false alarm rates.
PROPOSED SYSTEM: The system utilizes CNNs for an IVS framework in smart cities. It aims to detect anomalous activities in video sequences, enabling real-time response to emergencies (e.g., fire, theft) through automatic alerts. Experimentation with multiple surveillance datasets demonstrates the system's capability to achieve notably low false alarm rates.
DRAWBACKS: 1. The IVS system using CNNs might face challenges related to high computational requirements for real-time analysis. Additionally, reliance on accurately labeled datasets and potential complexity in fine-tuning for different surveillance scenarios could pose implementation difficulties.
CONCLUSION: The proposed IVS system leveraging convolutional neural networks addresses real-time response limitations in existing surveillance systems. Its capability to detect anomalies and trigger alerts during emergencies like fire or intrusions shows promising results with notably low false alarm rates, enhancing safety and security in smart cities.

5 TITLE: Bone Fracture Detection Using Deep Supervised Learning from Radiological Images: A Paradigm Shift
AUTHOR: Tanushree Meena and Sudipta Roy
LINK: https://www.mdpi.com/2075-4418/12/10/2420 (2022)
METHODOLOGY: The systematic review encompasses an analysis of existing literature on deep learning (DL) application in bone imaging. Comprehensive search strategies are employed across scientific databases to identify relevant studies. Inclusion criteria consider DL techniques for fracture detection in radiographs. Critical assessment and synthesis of findings elucidate DL's role, challenges, and future in bone imaging.
PROPOSED SYSTEM: The system involves leveraging DL to assist radiologists in bone imaging, particularly in fracture detection. It aims to enhance diagnostic accuracy and efficiency by analyzing medical images using DL techniques. The system addresses challenges in fracture diagnosis and explores DL's potential role in future advancements in bone imaging.
DRAWBACKS: 1. The drawbacks involve potential reliance on high-quality labeled data, computational resource requirements for deep learning models, and interpretability challenges in complex fracture cases.
CONCLUSION: Deep learning presents a promising avenue for assisting radiologists in bone imaging by aiding fracture detection. While demonstrating significant potential in diagnosing bone diseases, including fractures, challenges persist in data quality, interpretability, and the need for further research to enhance DL's role in revolutionizing bone imaging.

6 TITLE: Demystifying Supervised Learning in Healthcare 4.0: A New Reality of Transforming Diagnostic Medicine
AUTHOR: Sudipta Roy, Tanushree Meena et al.
LINK: https://www.mdpi.com/2075-4418/12/10/2549 (2022)
METHODOLOGY: The methodology involves a comprehensive review of the current landscape in healthcare, exploring traditional methods and the potential for supervised machine learning (SML) applications. It assesses SML's role in disease diagnosis, personalized medicine, imaging analysis, drug discovery, patient care, and nanotechnology, emphasizing the need for explainable AI and ethical considerations. The review critically evaluates data quality, knowledge management, and the skills necessary for data-intensive healthcare research to provide insights for future SML-focused research in healthcare and biomedical fields.
PROPOSED SYSTEM: The system envisions SML applications across healthcare sectors, aiming to revolutionize disease diagnosis, personalized medicine, clinical trials, imaging analysis, drug discovery, patient care, and nanotechnology. Emphasis is on data quality, knowledge management, and ethical considerations, necessitating explainable AI, while envisioning a knowledge-centric health management system for data-intensive healthcare research.
DRAWBACKS: 1. The drawbacks include challenges related to data quality management, the necessity for specialized skills in data-intensive healthcare research, and ethical considerations regarding AI and SML usage.
CONCLUSION: The review emphasizes the transformative potential of SML in healthcare. It underscores the need for data quality, knowledge-centric systems, and ethical considerations in leveraging SML for disease diagnosis, personalized medicine, and healthcare automation. This comprehensive insight informs future research directions in the healthcare and biomedical sectors.

7 TITLE: Attention UW-Net: A fully connected model for automatic segmentation and annotation of chest X-ray
AUTHOR: D. Pal, P. B. Reddy et al.
LINK: https://www.sciencedirect.com/science/article/abs/pii/S0010482522007910 (2022)
METHODOLOGY: The methodology, termed attention UW-Net, introduces an intermediate layer bridging encoder and decoder pathways in medical image segmentation. This layer consists of fully connected convolutional layers formed from encoder upsampling, linked to up/down-sampled blocks via skip-connections. Additionally, it is integrated into the decoder pathway through downsampling, enhancing segmentation accuracy across various organ sizes.
PROPOSED SYSTEM: The attention UW-Net innovates by introducing an intermediate layer bridging encoder and decoder pathways for medical image annotation. Leveraging fully connected convolutional layers and skip-connections, the model achieves high accuracy (average F1-scores) of 95.7%, 80.9%, 81.0%, and 77.6% for lung, heart, trachea, and collarbone segmentation, respectively, surpassing U-Net and other recent segmentation models.
DRAWBACKS: 1. The attention UW-Net, while effective in segmentation, might face challenges with increased computational complexity due to the additional intermediate layer and complex skip-connections.
CONCLUSION: The attention UW-Net presented in this study exhibits outstanding performance in segmenting various organs from chest X-rays. Its high accuracy across different object sizes demonstrates its efficacy for automatic annotation, promising considerable improvements in clinical workflow and reduced reliance on manual annotations, benefiting medical image analysis and diagnosis.

8 TITLE: Cricket Videos Summary generation using a Novel Convolutional Neural Network
AUTHOR: Shahbaz Sikandar, Rabbia Mahmum et al.
LINK: https://ieeexplore.ieee.org/document/9994106 (2022)
METHODOLOGY: The methodology, C-CNN, involves a supervised deep learning approach tailored for cricket video summarization. C-CNN employs a customized convolutional neural network to learn significant features from video frames, performing binary classification to distinguish between positive and negative content. Extensive experimentation demonstrates the superior performance of C-CNN over existing techniques in summarizing cricket videos.
PROPOSED SYSTEM: The Cricket-Convolutional Neural Network (C-CNN) employs a customized deep learning approach for summarizing cricket videos. C-CNN utilizes a supervised convolutional neural network to extract informative features from video frames, conducting binary classification to categorize content as positive or negative. Extensive experimentation demonstrates C-CNN's superior performance in cricket video summarization over existing methods.
DRAWBACKS: 1. The C-CNN for cricket video summarization may require substantial computational resources due to deep learning complexity, and it might need large annotated datasets for effective training.
CONCLUSION: The introduced C-CNN offers promising advancements in cricket video summarization by effectively extracting informative frames. Through extensive experimentation, the C-CNN demonstrates superior performance compared to existing methods, signifying its potential to efficiently summarize cricket videos by identifying key content, enhancing their utility for users.

9 TITLE: Detection and Classification of Knee Osteoarthritis
AUTHOR: Joseph Humberto Cueva, Darwin Castillo et al.
LINK: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9600223/ (2022)
METHODOLOGY: The methodology involves a semi-automatic Computer-Aided Diagnosis (CADx) model employing Deep Siamese convolutional neural networks and a fine-tuned ResNet-34 to detect knee osteoarthritis (OA) lesions based on the Kellgren and Lawrence (KL) scale. Training used a public dataset with a transfer learning approach to mitigate imbalanced data issues, validated against a private dataset. Classifications were compared with experienced radiologists' assessments.
PROPOSED SYSTEM: The system is a semi-automatic CADx model utilizing Deep Siamese convolutional neural networks and a fine-tuned ResNet-34 for simultaneous detection of OA lesions in both knees, graded by the KL scale. Trained on a public dataset and validated on a private dataset, it addresses imbalanced data using transfer learning, achieving a 61% multi-class accuracy.
DRAWBACKS: 1. Evaluation on the KL-1 and KL-2 classes showed comparatively lower performance, with potential overfitting concerns and the model's dependence on high-quality datasets for accurate predictions.
CONCLUSION: The proposed semi-automatic CADx model demonstrates promising performance in detecting OA lesions in knees based on the KL scale. Despite achieving an average multi-class accuracy of 61%, it exhibits better precision in identifying classes KL-0, KL-3, and KL-4 than KL-1 and KL-2, showcasing potential for improved OA diagnosis, albeit with limitations in certain severity grades.

10 TITLE: A complex network based approach for knee Osteoarthritis detection: Data from the Osteoarthritis initiative
AUTHOR: L. C. Ribas, R. Riad et al.
LINK: https://www.researchgate.net/publication/354753439_A_complex_network_based_approach_for_knee_Osteoarthritis_detection_Data_from_the_Osteoarthritis_initiative (2022)
METHODOLOGY: The methodology involves transforming knee X-ray images into complex networks using pixel-to-node mapping and Euclidean distance-based connections. A thresholding technique removes select edges to reveal texture properties, with an automated strategy for threshold selection. Statistical measures from the network generate a feature vector, evaluated in classification experiments using OAI database images, comparing against state-of-the-art learning models and traditional texture descriptors.
PROPOSED SYSTEM: The system maps knee X-ray images into complex networks via pixel-to-node transformation based on Euclidean distance. It employs a thresholding technique to reveal texture features, applying an automated threshold-selection strategy. Statistical measures from the network form a feature vector used for classification, demonstrating competitive performance in early knee OA detection compared to established learning models and traditional texture descriptors.
DRAWBACKS: 1. Evaluation of optimal thresholding parameters for edge removal may be challenging, affecting texture-property extraction. Performance is limited compared to specific learning models for knee OA detection.
CONCLUSION: The study introduces a novel complex network-based approach for OA detection from knee X-ray images, revealing promising potential in early OA prediction. Comparative analyses against established models and traditional descriptors showcase competitive performance, signifying the viability of this method for improved early diagnosis and treatment of knee osteoarthritis.

CHAPTER 3
SYSTEM ARCHITECTURE AND DESIGN

3.1 EXISTING SYSTEM:

In the literature, a new approach for early detection of knee osteoarthritis (OA) based on complex network theory was introduced. The approach models an X-ray image as a complex network and applies a set of thresholds to reveal texture properties, employing a specific strategy to automatically select the set of thresholds. A new set of statistical measures extracted from the network is used to compute a feature vector, evaluated in a classification experiment using knee X-ray images from the OsteoArthritis Initiative (OAI) database.
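The pixel-to-node modeling step can be sketched as follows. This is an illustrative reconstruction, not the published implementation: the neighborhood radius, threshold value, and the combined intensity/spatial edge weight are assumptions, and the automated threshold-selection strategy is replaced here by a fixed value.

```python
import numpy as np

def image_to_network_degrees(img, radius=2, threshold=0.3):
    """Model a grayscale image as a complex network and return node degrees.

    Each pixel is a node; a pixel connects to a neighbour within `radius`
    when a combined intensity/spatial distance falls below `threshold`.
    (Illustrative parameters -- the published method selects them automatically.)
    """
    h, w = img.shape
    norm = img.astype(np.float64) / 255.0
    degrees = np.zeros((h, w))
    r_max = np.sqrt(2 * radius ** 2)  # normalising constant for spatial distance
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            # np.roll wraps at the borders; acceptable for a sketch.
            shifted = np.roll(np.roll(norm, dy, axis=0), dx, axis=1)
            spatial = np.sqrt(dy ** 2 + dx ** 2) / r_max
            # Edge weight combines intensity difference and spatial distance.
            weight = 0.5 * np.abs(norm - shifted) + 0.5 * spatial
            degrees += (weight < threshold)
    return degrees

# Statistical measures of the degree map form the texture feature vector.
img = (np.random.default_rng(0).random((32, 32)) * 255).astype(np.uint8)
deg = image_to_network_degrees(img)
features = [deg.mean(), deg.std(), deg.max()]
```

In the published approach, several thresholds are applied and the statistics from each resulting network are concatenated into a single feature vector.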

3.1.1 DISADVANTAGES OF EXISTING SYSTEM:

● The existing study employs manual threshold selection for revealing texture properties from
X-ray images, which can introduce subjectivity and potential bias into the process.
● The existing study relies solely on texture features extracted from the complex network,
which may not capture all relevant information for knee disease detection.
● The existing study does not use the Mendeley Dataset, which might limit its evaluation to a
single dataset.

3.2 PROPOSED SYSTEM:

The proposed work in the paper is a technique based on a customized CenterNet with a pixel-wise
voting scheme to automatically extract features for knee disease detection. The proposed model uses
the most representative features and a weighted pixel-wise voting scheme to give a more accurate
bounding box based on the voting score from each pixel inside the former box. The proposed model
is a robust and improved architecture based on CenterNet, utilizing DenseNet-201 as the base network
for feature extraction. We utilized two datasets in the proposed study: 1) the Mendeley Dataset, used
for training and testing, and 2) the OAI Dataset, used for cross-validation. Various experiments have
been performed to assess the performance of the proposed model.

3.2.1 ADVANTAGES OF PROPOSED SYSTEM:

1. Our model utilizes more sophisticated techniques, such as customized CenterNet, which can
potentially capture both texture and spatial information, leading to better performance.
2. Using multiple datasets helps assess the generalization and robustness of the proposed
model.
3. Using DenseNet-201 for feature extraction can lead to improved generalization.
4. DenseNet's dense connections and efficient feature reuse reduce the risk of overfitting to the
training data.

3.3 FUNCTIONAL REQUIREMENTS

1. Data Collection
2. Image processing
3. Data augmentation
4. Training model
5. Final outcome

3.4 NON-FUNCTIONAL REQUIREMENTS

Non-functional requirements (NFRs) specify the quality attributes of a software system. They judge
the software system based on responsiveness, usability, security, portability, and other
non-functional standards that are critical to its success. An example of a non-functional
requirement is "how fast does the website load?" Failing to meet non-functional requirements can
result in systems that fail to satisfy user needs. Non-functional requirements allow you to impose
constraints or restrictions on the design of the system across the various agile backlogs; for
example, the site should load within 3 seconds when the number of simultaneous users exceeds
10,000. Describing non-functional requirements is just as critical as describing functional
requirements.

● Usability requirement
● Serviceability requirement
● Manageability requirement
● Recoverability requirement
● Security requirement
● Data Integrity requirement
● Capacity requirement
● Availability requirement
● Scalability requirement

● Interoperability requirement
● Reliability requirement
● Maintainability requirement
● Regulatory requirement
● Environmental requirement

3.5 SYSTEM ARCHITECTURE:

3.5.1 DATA FLOW DIAGRAM:

1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out
on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model
the system components: the system processes, the data used by those processes, the external
entities that interact with the system, and the information flows in the system.
3. The DFD shows how information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction, and may be partitioned
into levels that represent increasing information flow and functional detail.

Fig.3.1 System architecture


3.6 UML DIAGRAMS

UML stands for Unified Modeling Language. UML is a standardized, general-purpose modeling
language in the field of object-oriented software engineering. The standard is managed, and was
created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer
software. In its current form, UML comprises two major components: a meta-model and a notation.
In the future, some form of method or process may also be added to, or associated with, UML.

21
The Unified Modeling Language is a standard language for specifying, visualizing, constructing,
and documenting the artifacts of software systems, as well as for business modeling and other
non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the
modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of software
projects.

3.7 GOALS:
The Primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns, and
components.
7. Integrate best practices.

3.8 USE CASE DIAGRAM:


A use case diagram in the Unified Modeling Language (UML) is a type of behavioral diagram
defined by and created from a Use-case analysis. Its purpose is to present a graphical overview of the
functionality provided by a system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case diagram is to show what
system functions are performed for which actor. Roles of the actors in the system can be depicted.

Fig.3.2 Use Case Diagram

3.8.1 Class diagram:

The class diagram is used to refine the use case diagram and define a detailed design of the system.
The class diagram classifies the actors defined in the use case diagram into a set of interrelated
classes. The relationship or association between the classes can be either an "is-a" or "has-a"
relationship. Each class in the class diagram may be capable of providing certain functionalities.
These functionalities provided by the class are termed "methods" of the class. Apart from this, each
class may have certain "attributes" that uniquely identify the class.

Fig.3.3 Class Diagram

3.8.2 Activity diagram:

The process flows in the system are captured in the activity diagram. Similar to a state diagram, an
activity diagram also consists of activities, actions, transitions, initial and final states, and guard
conditions.

Fig.3.4 Activity Diagram

3.8.3 Sequence diagram:

A sequence diagram represents the interaction between different objects in the system. The important
aspect of a sequence diagram is that it is time-ordered. This means that the exact sequence of the
interactions between the objects is represented step by step. Different objects in the sequence
diagram interact with each other by passing "messages".

Fig.3.5 Sequence Diagram

3.8.4 Collaboration diagram:

A collaboration diagram groups together the interactions between different objects. The interactions
are listed as numbered interactions that help to trace the sequence of the interactions. The
collaboration diagram helps to identify all the possible interactions that each object has with other
objects.

Fig.3.6 Collaboration Diagram

3.8.5 Component diagram:

The component diagram represents the high-level parts that make up the system. This diagram
depicts, at a high level, what components form part of the system and how they are interrelated. A
component diagram depicts the components culled after the system has undergone the development
or construction phase.

Fig.3.7 Component Diagram

3.8.6 Deployment diagram:

The deployment diagram captures the configuration of the runtime elements of the application. This
diagram is by far most useful when a system is built and ready to be deployed.

Fig.3.8 Deployment Diagram

CHAPTER 4
METHODOLOGY

4.1 Data Preparation and Preprocessing:

The initial phase of the methodology encompasses the preparatory steps required for handling and
processing the data prior to model training. This involves the importation of essential Python
libraries such as NumPy, TensorFlow, OpenCV, and Matplotlib, which provide indispensable tools
for various tasks including data manipulation, image processing, visualization, and deep learning.

The dataset used for knee osteoarthritis detection, comprising knee X-ray images and corresponding
annotations, is acquired and subjected to preprocessing steps to ensure its suitability for model
training. This preprocessing may involve resizing the images to a standardized resolution,
normalizing pixel values to a common scale, and converting the annotations into appropriate formats
compatible with the chosen deep learning framework.
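The resize-and-normalize step described above can be sketched as follows. A nearest-neighbour resize keeps the sketch dependency-free; the actual pipeline would use OpenCV's `cv2.resize` (e.g. with `INTER_AREA` interpolation), and the 512×512 target resolution is an assumption, not the report's confirmed setting.

```python
import numpy as np

TARGET = (512, 512)  # assumed detector input resolution

def preprocess(img, target=TARGET):
    """Resize a grayscale X-ray to a standard resolution and normalise to [0, 1].

    Nearest-neighbour resize keeps the sketch self-contained; in practice
    cv2.resize with INTER_AREA interpolation would be used instead.
    """
    h, w = img.shape
    th, tw = target
    yi = np.arange(th) * h // th          # source row index for each target row
    xi = np.arange(tw) * w // tw          # source column index for each target column
    out = img[np.ix_(yi, xi)].astype(np.float32) / 255.0
    return out[..., np.newaxis]           # add channel axis for the CNN

raw = (np.random.default_rng(1).random((1024, 900)) * 255).astype(np.uint8)
x = preprocess(raw)                       # shape (512, 512, 1), values in [0, 1]
```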

In addition to basic preprocessing, data augmentation techniques are employed to increase the
diversity of the dataset and enhance the model's generalization capability. These techniques include
random transformations such as rotation, flipping, and zooming, which generate augmented samples
from the original data to expose the model to a wider range of variations.
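Two of the named transforms (flipping and zooming) can be illustrated with a minimal NumPy sketch; rotation would typically be handled by an image library such as OpenCV (`cv2.warpAffine`), and the zoom range here is an assumed, illustrative value.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img, max_zoom=0.1):
    """Apply a random horizontal flip and a random central zoom-crop.

    Illustrative augmentation only: zoom factor drawn from [1.0, 1 + max_zoom],
    cropped region resized back to the original shape by nearest neighbour.
    """
    if rng.random() < 0.5:
        img = np.fliplr(img)                       # horizontal flip
    z = 1.0 + rng.uniform(0, max_zoom)             # zoom factor
    h, w = img.shape[:2]
    ch, cw = int(h / z), int(w / z)                # size of the central crop
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = img[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbour resize back to the original shape.
    yi = np.linspace(0, ch - 1, h).astype(int)
    xi = np.linspace(0, cw - 1, w).astype(int)
    return crop[np.ix_(yi, xi)]

sample = np.arange(64 * 64, dtype=float).reshape(64, 64)
aug = augment(sample)  # same shape as the input, randomly flipped/zoomed
```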

Furthermore, the dataset undergoes quality checks to identify and handle missing or corrupted
images. Images that are deemed unsuitable for training are either removed from the dataset or
subjected to data imputation techniques to fill in missing values, ensuring the integrity of the dataset.

4.2 Model Architecture:

The model architecture is a pivotal component of the methodology, as it determines the underlying
framework upon which the knee osteoarthritis detection model is built. The architecture is primarily
based on the CenterNet framework, a state-of-the-art object detection model renowned for its
accuracy and efficiency.

Modifications are made to the original CenterNet architecture to tailor it specifically for knee
osteoarthritis detection. These modifications may involve alterations to the backbone network
architecture, incorporation of additional convolutional layers to capture more intricate features
relevant to knee joint detection, or integration of attention mechanisms to enhance the model's focus
on critical regions of interest.

A notable enhancement introduced in the model architecture is the implementation of a pixel-wise
voting scheme. This scheme facilitates improved object localization by aggregating predictions from
multiple overlapping bounding boxes, thereby refining the localization accuracy of knee joints within
the X-ray images.
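The voting step can be sketched as follows. This is a simplified illustration under the assumption that each pixel inside a coarse box carries a confidence score from the detector's heatmap and that the refined box is the tight bounding region of the strongest voters; the paper's exact weighting rule is not reproduced here.

```python
import numpy as np

def refine_box(heatmap, box, keep=0.5):
    """Refine a coarse box using pixel-wise votes from a detection heatmap.

    Pixels inside `box` vote with their heatmap confidence; the refined box is
    the tight bounding region of the strongest voters (those above a `keep`
    fraction of the peak score -- an assumed, illustrative rule).
    """
    x0, y0, x1, y1 = box
    region = heatmap[y0:y1, x0:x1]
    mask = region >= keep * region.max()           # strongest-voting pixels
    ys, xs = np.nonzero(mask)
    return (int(x0 + xs.min()), int(y0 + ys.min()),   # tight box around voters
            int(x0 + xs.max()) + 1, int(y0 + ys.max()) + 1)

# A synthetic heatmap with a bright blob inside an over-sized coarse box.
hm = np.zeros((128, 128))
hm[40:60, 50:80] = 1.0
print(refine_box(hm, (30, 20, 110, 100)))  # -> (50, 40, 80, 60)
```

The refined box shrinks to the high-confidence region, which is the intuition behind using per-pixel voting scores to correct an initially loose detection.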

4.3 Training Procedure:

The training procedure involves the iterative process of optimizing the model parameters using the
prepared dataset to learn the task of knee osteoarthritis detection. This process unfolds across
multiple epochs, with each epoch representing a complete pass through the entire training dataset.

During training, the model is optimized using a stochastic gradient descent (SGD) optimizer, with
carefully chosen learning rate scheduling and momentum parameters to ensure efficient convergence
towards the optimal solution.
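The SGD-with-momentum update follows the standard rule v ← μv − η∇L, w ← w + v; the sketch below illustrates it in NumPy together with a simple step-wise learning-rate schedule. The learning-rate, momentum, and decay values are placeholders, not the report's actual hyperparameters.

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update: v <- mu*v - lr*grad; w <- w + v."""
    v = momentum * v - lr * grad
    return w + v, v

def step_decay(epoch, base_lr=0.01, drop=0.5, every=30):
    """Step-wise learning-rate schedule: halve the rate every `every` epochs."""
    return base_lr * (drop ** (epoch // every))

# Minimise f(w) = w^2 (gradient 2w) for a few epochs as a toy demonstration.
w, v = np.array(5.0), np.array(0.0)
for epoch in range(100):
    w, v = sgd_momentum_step(w, 2 * w, v, lr=step_decay(epoch))
# w is now close to the minimum at 0
```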

Model performance is continuously monitored during training using evaluation metrics such as mean
average precision (mAP) and intersection over union (IoU), which quantify the accuracy of object
detection and localization.

To prevent overfitting and promote model generalization, regularization techniques such as dropout
and batch normalization are applied, which mitigate the risk of the model memorizing the training
data and improve its ability to generalize to unseen data.
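The two regularization techniques named above can be illustrated in isolation; this is a minimal NumPy sketch (inverted dropout, and batch normalization without the learnable scale/shift parameters), not the framework layers used in the actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def dropout(x, rate=0.5, training=True):
    """Inverted dropout: zero a fraction `rate` of activations and rescale,
    so the expected activation is unchanged and inference needs no scaling."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

def batch_norm(x, eps=1e-5):
    """Normalise a batch of activations to zero mean / unit variance per
    feature (learnable scale and shift omitted for brevity)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    return (x - mu) / np.sqrt(var + eps)

acts = rng.normal(size=(8, 4))
bn = batch_norm(acts)          # per-feature mean ~0, std ~1
dr = dropout(acts, rate=0.5)   # roughly half the entries zeroed, rest scaled by 2
```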

4.4 Evaluation Metrics:

The performance of the trained model is evaluated using a comprehensive set of evaluation metrics
tailored to the task of knee osteoarthritis detection. These metrics include precision, recall, F1-score,
and intersection over union (IoU), which collectively assess the model's ability to accurately detect
and localize knee joints within the X-ray images.
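These metrics have standard closed forms, sketched below for two axis-aligned boxes in (x0, y0, x1, y1) format and for counts of true/false positives and false negatives.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x0, y0, x1, y1) boxes."""
    ix0, iy0 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix1, iy1 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1-score from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # -> 0.14285714285714285 (25/175)
```

A predicted box is typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (commonly 0.5), which is how these box-level and count-level metrics connect.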

Confusion matrices and classification reports are generated to provide detailed insights into the
model's performance, showcasing its ability to correctly classify knee joints as either healthy or
affected by osteoarthritis.

Moreover, receiver operating characteristic (ROC) curves and area under the curve (AUC) scores
may be computed to quantify the model's discriminative power and its ability to balance true positive
and false positive rates, providing additional insights into its performance characteristics.
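The AUC can be computed without tracing the full ROC curve, via the rank-sum identity AUC = P(score of a random positive > score of a random negative); a small self-contained sketch:

```python
import numpy as np

def roc_auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney rank-sum identity:
    AUC = P(score of a random positive > score of a random negative)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()     # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()    # ties count half
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

print(roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # -> 0.75
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) gives the same value; the pairwise form above makes the probabilistic interpretation of AUC explicit.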

4.5 Model Deployment:

Once trained and validated, the model is deployed in clinical settings or healthcare facilities for
real-world application in knee osteoarthritis detection from X-ray images.

Integration with existing healthcare infrastructure such as Picture Archiving and Communication
Systems (PACS) or Radiology Information Systems (RIS) is facilitated to streamline the diagnostic
workflow and enable seamless integration of the model into existing clinical pipelines.

User-friendly interfaces or applications are developed to allow healthcare professionals to interact
with the model, visualize its predictions, and access diagnostic reports generated by the system,
thereby facilitating efficient and intuitive utilization of the model in clinical practice.

4.6 Performance Validation:

The deployed model undergoes rigorous validation and testing to ensure its reliability, accuracy, and
safety in real-world scenarios.

Independent validation studies involving large patient cohorts and diverse clinical settings are
conducted to assess the model's performance across various populations and imaging conditions,
providing robust evidence of its efficacy and generalizability.

Continuous monitoring and feedback mechanisms are established to track the model's performance
over time, identify potential issues or biases, and implement necessary updates or improvements,
ensuring the ongoing relevance and effectiveness of the model in clinical practice.

CHAPTER 5
RESULTS AND DISCUSSION

5.1 Results

The results section of the knee osteoarthritis detection system offers a comprehensive overview of
the classification and prediction outcomes, providing users with actionable insights into the severity
and progression of the condition. Through detailed analysis and visualization of classification results,
users can interpret the detected severity levels accurately, enabling informed decision-making
regarding treatment options and management strategies. Additionally, the predictive analytics
presented in this section offer valuable prognostic information, forecasting the future development of
knee osteoarthritis based on individual characteristics and risk factors. Visual representations, such as
charts, graphs, and heatmaps, enhance the interpretability of the results, facilitating clear
communication and understanding. Overall, the results section empowers users with valuable
information to guide clinical decision-making and optimize patient care, ultimately contributing to
improved outcomes and quality of life for individuals affected by knee osteoarthritis.

5.1.1 Website Interface

Figure 5.1: Website Interface

Figure 5.2: Website Interface

The website interface provides a cohesive platform for knee osteoarthritis detection and prediction,
offering users a seamless experience to access essential tools and information. Through intuitive
navigation, users can effortlessly explore key sections such as classification and prediction,
leveraging advanced algorithms like the improved CenterNet with Pixel-Wise Voting Scheme to
accurately assess the severity of osteoarthritis and forecast its progression. Additionally, the about
page furnishes users with comprehensive insights into the system's objectives, methodologies, and
potential healthcare implications, fostering transparency and understanding. To enable personalized
interactions and track previous activities, the login/register page facilitates secure access to user
accounts, ensuring data privacy and enhancing user engagement. With a user-centric design,
informative content, and intuitive functionalities, the website interface aims to empower users in
their journey towards knee osteoarthritis detection and management.

5.1.2 Register and Login / Sign up page

Figure 5.3: Register and Login / Sign up page

Figure 5.4: Register and Login / Sign up page

The sign-up or registration page serves as a pivotal gateway for users to unlock personalized features
and access advanced functionalities within the knee osteoarthritis detection and prediction system.
With a streamlined and user-friendly interface, this page guides users through a seamless registration
process, requiring basic information such as name, email, and password. By completing the
registration, users gain exclusive access to personalized features, including the ability to track
previous classifications, save preferences, and receive updates tailored to their needs and interests.
Furthermore, the registration page prioritizes data security and privacy, implementing robust
encryption protocols to safeguard user information and ensure peace of mind. With its intuitive
design and emphasis on user empowerment, the sign-up page plays a crucial role in enhancing the
overall user experience and fostering long-term engagement with the knee osteoarthritis detection
system.

5.1.3 Classification of Knee Osteoarthritis

Figure 5.5: Classification of Knee Osteoarthritis

Figure 5.6: Classification of Knee Osteoarthritis

Figure 5.7: Classification of Knee Osteoarthritis

Figure 5.8: Classification of Knee Osteoarthritis

Moving forward, the classification section of the website offers users a powerful tool for accurately
assessing the severity of knee osteoarthritis. Leveraging cutting-edge algorithms such as the
improved CenterNet with Pixel-Wise Voting Scheme, this section allows users to upload knee joint
images and receive real-time classification results. The classification process is intuitive and
efficient, guiding users through each step while providing comprehensive insights into the detected
severity levels. Additionally, the system ensures accuracy and reliability by incorporating advanced
image processing techniques and leveraging a diverse dataset for training and validation purposes.
Overall, the classification section empowers users with actionable information to make informed
decisions regarding treatment and management strategies for knee osteoarthritis.
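Concretely, the final step of the classification flow reduces to mapping the network's output probabilities to a severity label. The sketch below is illustrative only — `classify_severity`, the `STAGES` labels, and the toy probabilities are our own names for exposition, not the deployed code:

```python
import numpy as np

# severity labels matching the four stages the classifier reports
STAGES = ["Stage 0", "Stage 1", "Stage 2", "Stage 3"]

def classify_severity(probabilities):
    """Pick the highest-probability stage and return it with its confidence."""
    probabilities = np.asarray(probabilities, dtype=float)
    idx = int(np.argmax(probabilities))
    return STAGES[idx], float(probabilities[idx])

label, confidence = classify_severity([0.05, 0.15, 0.70, 0.10])
print(label, confidence)  # Stage 2 0.7
```

The confidence value alongside the label is what lets the interface present the result with an indication of how certain the model is.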

5.1.4 Prediction of Knee Osteoarthritis

Figure 5.9: Prediction of Knee Osteoarthritis

Figure 5.10: Prediction of Knee Osteoarthritis

Figure 5.11: Prediction of Knee Osteoarthritis

On the other hand, the prediction section of the website harnesses predictive analytics to forecast the
progression of knee osteoarthritis. By analyzing relevant data and leveraging machine learning
models, this section provides users with valuable insights into the likely future development of the
condition. Users can upload images or input relevant data points, allowing the system to generate
personalized predictions based on individual characteristics and risk factors. The prediction process is
dynamic and adaptive, continuously learning from new data to improve accuracy and reliability over
time. By empowering users with foresight into potential disease trajectories, the prediction section
enables proactive intervention and personalized healthcare management for knee osteoarthritis.

5.2 Discussion

Knee osteoarthritis (OA) represents a significant burden on global healthcare systems, necessitating
accurate and efficient detection methodologies to guide clinical decision-making and optimize
patient outcomes. In recent years, the advent of advanced imaging techniques and machine learning
algorithms has revolutionized the field of knee OA detection, offering promising avenues for early
diagnosis and intervention. The utilization of an improved CenterNet with a pixel-wise voting
scheme presents a novel approach to knee OA detection, capitalizing on the strengths of deep
learning architectures to enhance accuracy and reliability.

The application of the CenterNet framework, augmented with pixel-wise voting mechanisms,
introduces a paradigm shift in knee OA detection by leveraging the inherent spatial relationships
within medical imaging data. Unlike traditional methods that rely solely on feature extraction and
classification, this approach integrates contextual information at the pixel level, enabling more
precise localization and characterization of pathological changes within the knee joint. By harnessing
the collective intelligence of individual pixels through a voting scheme, the model achieves superior
performance in delineating anatomical structures and identifying disease-specific patterns associated
with knee OA.
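The voting idea described above can be sketched in a few lines. This is an illustrative toy, not the report's implementation: `pixelwise_vote` and the 2x2 score array are our own constructs, showing only the core mechanism that each pixel votes for its most likely class and the majority wins.

```python
import numpy as np

def pixelwise_vote(pixel_scores):
    """Aggregate per-pixel class scores into one prediction by majority vote.

    pixel_scores: array of shape (H, W, num_classes) holding each pixel's
    class scores (e.g. softmax outputs from a network head). Each pixel
    casts one vote for its highest-scoring class; the class with the most
    votes wins.
    """
    votes = np.argmax(pixel_scores, axis=-1)  # (H, W) per-pixel winners
    counts = np.bincount(votes.ravel(),
                         minlength=pixel_scores.shape[-1])
    return int(np.argmax(counts))             # majority class

# toy example: a 2x2 "image" with 3 candidate classes
scores = np.array([[[0.1, 0.8, 0.1], [0.2, 0.7, 0.1]],
                   [[0.6, 0.3, 0.1], [0.1, 0.6, 0.3]]])
print(pixelwise_vote(scores))  # class 1 wins 3 votes to 1
```

The appeal of this aggregation is robustness: a few noisy pixels cannot flip the outcome, because the decision reflects the consensus of the whole region.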

One of the key advantages of the proposed methodology lies in its ability to adapt to diverse imaging
modalities and patient populations. Unlike conventional algorithms that may struggle with variations
in image quality and patient demographics, the improved CenterNet with pixel-wise voting
demonstrates robustness and generalizability across heterogeneous datasets. This versatility is
paramount in real-world clinical settings, where clinicians encounter a wide spectrum of imaging
modalities and patient characteristics, necessitating a flexible and adaptable detection framework.

Furthermore, the integration of deep learning techniques with pixel-wise voting mechanisms
facilitates interpretability and transparency in knee OA detection. By visualizing the decision-making
process at the pixel level, clinicians gain valuable insights into the model's reasoning and can
validate the relevance of detected abnormalities in the context of clinical practice. This transparency
fosters trust and acceptance among healthcare professionals, paving the way for the seamless
integration of AI-powered detection systems into routine clinical workflows.

Despite the promising potential of the improved CenterNet with a pixel-wise voting scheme, several
challenges and considerations warrant attention. Firstly, the need for large annotated datasets remains
a bottleneck in model development and validation, necessitating collaborative efforts to curate
comprehensive datasets representative of diverse patient populations and disease phenotypes.
Additionally, the ethical implications surrounding the deployment of AI-driven detection systems,
including data privacy, algorithmic bias, and regulatory compliance, necessitate careful consideration
and proactive mitigation strategies.

In conclusion, the adoption of an improved CenterNet with a pixel-wise voting scheme holds
immense promise for advancing knee OA detection and improving patient care outcomes. By

leveraging deep learning architectures and pixel-level contextual information, this approach offers
unparalleled accuracy, versatility, and interpretability in delineating knee OA-related pathology.
However, continued research efforts, interdisciplinary collaboration, and ethical foresight are
imperative to realize the full potential of AI-driven knee OA detection systems and ensure their
responsible integration into clinical practice.

CHAPTER 6
CONCLUSIONS AND FUTURE SCOPE

6.1 Conclusion

In this study, we proposed a robust deep learning architecture to detect knee osteoarthritis (KOA)
and identify its severity levels based on KL grading, i.e., G-I, G-II, G-III, and G-IV. The proposed
method, an improved CenterNet with a DenseNet-201 backbone, is a robust technique for detecting
KOA and grading its severity. The system effectively overcomes the class-imbalance problem in the
dataset and, thanks to the dense connections among all layers, extracts the most representative
features from the identified ROI. Knowledge distillation is employed to keep the model simple
without increasing its computational cost, transferring knowledge from a complex network to a
simpler one and thereby making it more robust. Two datasets were used in this study: 1) the
Mendeley dataset for training and testing, and 2) the OAI dataset for cross-validation. Various
experiments were performed to assess the performance of the proposed model, which outperforms
existing techniques with good accuracy on both testing and cross-validation.
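The knowledge-distillation step mentioned above can be illustrated with a minimal numerical sketch. This is not the report's training code: `distillation_loss`, the temperature `T`, and the toy logits are our own illustrative names, showing only the soft-label term that transfers the teacher's output distribution to the student (in practice it is combined with the usual hard-label loss).

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; larger T gives softer distributions."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Cross-entropy between the temperature-softened teacher and student
    distributions: the 'soft-label' term of knowledge distillation."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(-np.sum(p_teacher * np.log(p_student + 1e-12)))

teacher = [4.0, 1.0, 0.2, 0.1, 0.0]   # confident, complex network
student = [3.5, 1.2, 0.3, 0.2, 0.1]   # simpler network being trained
print(distillation_loss(student, teacher))
```

Minimising this term pulls the student's softened output toward the teacher's, which is how the simpler network inherits the behaviour of the complex one without matching its size.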
6.1.1 Key Features

● Innovative Model Architecture: The study introduces an improved CenterNet model augmented
with a pixel-wise voting scheme, showcasing its effectiveness in knee osteoarthritis detection
from medical images.
● Performance Evaluation: Common metrics such as accuracy, precision, recall, and F1-score
are employed to rigorously assess the model's performance, providing a clear benchmark for
its efficacy.
● Practical Implications: The research underscores the potential of deep learning techniques in
revolutionizing medical image analysis, offering valuable insights for healthcare practitioners
and researchers.
● Methodological Rigor: Methodological rigor is maintained throughout the study, ensuring the
reliability and reproducibility of the findings, thus enhancing their credibility and
applicability.
● Contribution to Medical Imaging: This study contributes to the advancement of medical
imaging technologies, particularly in the domain of knee osteoarthritis detection, fostering
ongoing academic and practical discourse in the field.
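The evaluation metrics listed above can be computed directly from predicted and true labels. The sketch below is illustrative (our own `classification_metrics` helper, not the report's evaluation script), using one common macro-averaged formulation for the multi-class KL grades:

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    precisions, recalls = [], []
    for c in classes:
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = float(np.mean(precisions))
    recall = float(np.mean(recalls))
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = float(np.mean(y_true == y_pred))
    return accuracy, precision, recall, f1

# toy KL-grade labels for illustration only
acc, prec, rec, f1 = classification_metrics([0, 1, 2, 2, 3], [0, 1, 2, 3, 3])
print(acc, prec, rec, f1)  # 0.8 0.875 0.875 0.875
```

Note that macro averaging weights every grade equally, which matters for an imbalanced dataset like the one used here, where rarer severe grades would otherwise be drowned out by the majority class.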
6.2 Future Scope

1. Multi-Modal Integration: Future research could explore the integration of multiple modalities,
such as MRI and ultrasound, to further enhance the accuracy and robustness of knee
osteoarthritis detection models.
2. Real-Time Diagnosis: Developing methods for real-time diagnosis of knee osteoarthritis can
facilitate prompt clinical intervention and improve patient outcomes.
3. Cross-Dataset Validation: Extending the validation of the proposed model across diverse
datasets can enhance its generalizability and reliability in real-world clinical settings.
4. Interpretability and Explainability: Enhancing the interpretability and explainability of the
model's predictions can instill greater trust among healthcare professionals and facilitate its
clinical adoption.
5. Ethical Considerations: Addressing ethical considerations, such as patient privacy and data
security, is paramount in the development and deployment of AI-based medical imaging
systems.

These recommendations provide a roadmap for future research endeavors aimed at advancing the
state-of-the-art in knee osteoarthritis detection and improving patient care through innovative
technological solutions.

APPENDIX 1

CODING
from __future__ import division, print_function
# coding=utf-8
import io
import os
import random
import sqlite3
import smtplib
from email.message import EmailMessage

import cv2
import numpy as np
import torch
from PIL import Image

# Keras
from keras.models import load_model

# Flask utils
from flask import Flask, render_template, request, redirect, Response

app = Flask(__name__)

UPLOAD_FOLDER = 'static/uploads/'

# allow files of a specific type
ALLOWED_EXTENSIONS = {'png', 'jpg', 'jpeg'}

# function to check the file extension
def allowed_file(filename):
    return '.' in filename and \
        filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

model_path2 = 'xce.h5'  # path to the trained .h5 model

from keras import backend as K

def recall_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
    recall = true_positives / (possible_positives + K.epsilon())
    return recall

def precision_m(y_true, y_pred):
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision

def f1_m(y_true, y_pred):
    precision = precision_m(y_true, y_pred)
    recall = recall_m(y_true, y_pred)
    return 2 * ((precision * recall) / (precision + recall + K.epsilon()))

# load the classification model; custom metrics must be registered by name
CTS = load_model(model_path2,
                 custom_objects={'f1_score': f1_m,
                                 'precision_score': precision_m,
                                 'recall_score': recall_m},
                 compile=False)

from keras.preprocessing.image import load_img, img_to_array

def model_predict2(image_path, model):
    # load and normalise the X-ray image to the model's input size
    image = load_img(image_path, target_size=(128, 128))
    image = img_to_array(image)
    image = image / 255
    image = np.expand_dims(image, axis=0)

    # index of the highest-probability class is the predicted KL stage
    result = np.argmax(model.predict(image))
    print(result)

    if result == 0:
        return "The patient is diagnosed with Stage 0 Knee Osteoarthritis", "result.html"
    elif result == 1:
        return "The patient is diagnosed with Stage 1 Knee Osteoarthritis", "result.html"
    elif result == 2:
        return "The patient is diagnosed with Stage 2 Knee Osteoarthritis", "result.html"
    elif result == 3:
        return "The patient is diagnosed with Stage 3 Knee Osteoarthritis", "result.html"

@app.route("/about")
def about():
    return render_template("about.html")

@app.route('/')
@app.route('/home')
def home():
    return render_template('home.html')

@app.route('/logon')
def logon():
    return render_template('signup.html')

@app.route('/login')
def login():
    return render_template('signin.html')

@app.route('/index')
def index():
    return render_template('index.html')

@app.route('/index1')
def index1():
    return render_template('index1.html')

@app.route('/predict2', methods=['GET', 'POST'])
def predict2():
    file = request.files['file']  # fetch the uploaded input image
    filename = file.filename
    print("@@ Input posted = ", filename)

    file_path = os.path.join(UPLOAD_FOLDER, filename)
    file.save(file_path)

    print("@@ Predicting class......")
    pred, output_page = model_predict2(file_path, CTS)

    return render_template(output_page, pred_output=pred,
                           img_src=UPLOAD_FOLDER + file.filename)

@app.route("/signup")
def signup():
    global otp, username, name, email, number, password
    username = request.args.get('user', '')
    name = request.args.get('name', '')
    email = request.args.get('email', '')
    number = request.args.get('mobile', '')
    password = request.args.get('password', '')

    # generate and e-mail a one-time password for account verification
    otp = random.randint(1000, 5000)
    print(otp)
    msg = EmailMessage()
    msg.set_content("Your OTP is : " + str(otp))
    msg['Subject'] = 'OTP'
    msg['From'] = "evotingotp4@gmail.com"
    msg['To'] = email

    s = smtplib.SMTP('smtp.gmail.com', 587)
    s.starttls()
    s.login("evotingotp4@gmail.com", "xowpojqyiygprhgr")
    s.send_message(msg)
    s.quit()
    return render_template("val.html")

@app.route('/predict_lo', methods=['POST'])
def predict_lo():
    global otp, username, name, email, number, password
    if request.method == 'POST':
        message = request.form['message']
        # only create the account once the e-mailed OTP is confirmed
        if int(message) == otp:
            con = sqlite3.connect('signup.db')
            cur = con.cursor()
            cur.execute("insert into `info` (`user`,`email`,`password`,`mobile`,`name`) "
                        "VALUES (?, ?, ?, ?, ?)",
                        (username, email, password, number, name))
            con.commit()
            con.close()
            return render_template("signin.html")
    return render_template("signup.html")

@app.route("/signin")
def signin():
    mail1 = request.args.get('user', '')
    password1 = request.args.get('password', '')
    con = sqlite3.connect('signup.db')
    cur = con.cursor()
    cur.execute("select `user`, `password` from info where `user` = ? AND `password` = ?",
                (mail1, password1))
    data = cur.fetchone()

    if data is None:
        return render_template("signin.html")
    elif mail1 == str(data[0]) and password1 == str(data[1]):
        return render_template("index.html")
    else:
        return render_template("signin.html")

@app.route("/notebook1")
def notebook1():
    return render_template("Notebook.html")

@app.route("/notebook2")
def notebook2():
    return render_template("Detection.html")

# YOLOv5 detector used by the detection view
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt",
                       force_reload=True)
model.eval()
model.conf = 0.5   # confidence threshold
model.iou = 0.45   # NMS IoU threshold

def gen():
    """
    Takes a video stream from the webcam, runs each frame through the
    model, and yields the annotated frames as an MJPEG stream.
    """
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        success, frame = cap.read()
        if not success:
            break
        ret, buffer = cv2.imencode('.jpg', frame)
        img = Image.open(io.BytesIO(buffer.tobytes()))
        results = model(img, size=415)
        results.print()
        img = np.squeeze(results.render())
        img_BGR = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        frame = cv2.imencode('.jpg', img_BGR)[1].tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video')
def video():
    """
    Returns a multipart response whose body is the generator above,
    yielding a sequence of JPEG images.
    """
    return Response(gen(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

@app.route("/predict1", methods=["GET", "POST"])
def predict1():
    """
    Takes in an image, runs it through the detection model, and saves
    the annotated output image to the static folder.
    """
    if request.method == "POST":
        if "file" not in request.files:
            return redirect(request.url)
        file = request.files["file"]
        if not file:
            return
        img_bytes = file.read()
        img = Image.open(io.BytesIO(img_bytes))
        results = model(img, size=415)
        for img in results.render():
            img_base64 = Image.fromarray(img)
            img_base64.save("static/image0.jpg", format="JPEG")
        return redirect("static/image0.jpg")
    return render_template("index1.html")

if __name__ == '__main__':
    app.run(debug=False)

This Flask web application integrates several functionalities to create a versatile platform for image
processing, user management, and real-time object detection. At its core, the application leverages
deep learning models for image classification and object detection tasks.

The image classification feature allows users to upload images, typically medical scans related to
knee osteoarthritis. These images are processed using a pre-trained deep learning model loaded from
an h5 file. This model, which is likely trained on a dataset of knee images annotated with
osteoarthritis severity levels, predicts the stage of osteoarthritis based on the features present in the
input image. Users receive a diagnosis report indicating the severity stage of knee osteoarthritis
detected in the uploaded image.

User authentication is another critical aspect of the application. Users can sign up for an account by
providing their personal details, including username, name, email, mobile number, and password.
Upon sign-up, users are sent an OTP (One-Time Password) via email for verification. This
verification step ensures the validity of user accounts and enhances security. Once verified, user
information is stored in a SQLite database for future login and authentication.

The real-time object detection capability enhances the application's utility by providing live detection

of objects from webcam feeds. This functionality is powered by the YOLOv5 model, a
state-of-the-art deep learning model known for its speed and accuracy in object detection tasks. As
users access the webcam feed through the application, each frame is processed by the YOLOv5
model, which identifies and annotates objects in real-time. Users can observe bounding boxes around
detected objects, enabling applications in various domains such as surveillance, security, and
computer vision research.

Additionally, the application includes several auxiliary pages such as "About," "Home," "Logon"
(Sign Up), "Login," "Index," "Index1," "Notebook1," and "Notebook2." These pages facilitate user
navigation and interaction with the application, providing a seamless user experience.

Furthermore, the application incorporates email functionality to send OTP emails for user
verification during the sign-up process. It utilizes the smtplib library to establish connections with
SMTP servers and send email messages containing OTP codes to users' registered email addresses.

Overall, this Flask web application combines image processing, user management, and real-time
object detection capabilities to create a versatile platform with applications in healthcare, security,
and computer vision research.

APPENDIX 2

PLAGIARISM REPORT

SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
(Deemed to be University u/s 3 of UGC Act, 1956)

Office of Controller of Examinations


REPORT FOR PLAGIARISM CHECK ON THE DISSERTATION/PROJECT REPORTS FOR UG/PG PROGRAMMES
(To be attached in the dissertation/project report)

1. Name of the Candidates: B. SASHANK, SANJAY G
2. Address of the Candidate: 205, J.V.R Enclave, Ameerpet, Hyderabad, 500016
3. Registration Number: RA2011003010040, RA2011003010022
4. Date of Birth: 27-10-2002, 02-06-2003
5. Department: COMPUTING TECHNOLOGIES
6. Faculty: Engineering and Technology, School of Computing
7. Title of the Dissertation/Project: Knee Osteoarthritis Detection Using An Improved
   CenterNet With Pixel-Wise Voting Scheme
8. Whether the above project/dissertation is done by Individual or group (strike whichever
   is not applicable):
   a) The project/dissertation is done in a group; number of students who together
      completed the project: 2
   b) Name & Register number of candidates:
      B. SASHANK, RA2011003010040
      SANJAY G, RA2011003010022
9. Name and address of the Supervisor/Guide:
   Mrs. P. Nithyakani, Assistant Professor,
   Department of Computing Technologies,
   SRM Institute of Science and Technology, Kattankulathur-603203
   Mail ID: nithyakp@srmist.edu.in
   Mobile Number: 9790626371
10. Name and address of Co-Supervisor/Co-Guide (if any): NIL
11. Software Used: Turnitin
12. Date of Verification:
13. Plagiarism Details (attach the final report from the software):

Chapter | Title of the Chapter           | Similarity index (incl. self-citation) | Similarity index (excl. self-citation) | % of plagiarism after excluding quotes, bibliography, etc.
1       | INTRODUCTION                   | < 2% | < 2% | 1%
2       | LITERATURE SURVEY              | 2%   | 2%   | 2%
3       | SYSTEM ARCHITECTURE AND DESIGN | < 2% | < 1% | 1%
4       | METHODOLOGY                    | 2%   | < 2% | 1%
5       | RESULTS AND DISCUSSION         | 0%   |      |
6       | CONCLUSION AND FUTURE SCOPE    |      |      |
        | Appendices                     |      |      |

We declare that the above information has been verified and found true to the best of our knowledge.

Signature of the Candidate          Name & Signature of the Staff (who uses the plagiarism check software)

Name & Signature of the Supervisor/Guide          Name & Signature of the Co-Supervisor/Co-Guide

Name & Signature of the HOD
