Final Project Report
Dr. M. PUSHPALATHA
HEAD OF THE DEPARTMENT
Department of Computing Technologies
Examiner I Examiner II
Department of Computing Technologies
SRM Institute of Science & Technology
Own Work Declaration Form
We confirm that all the work contained in this assessment is my / our own except
where indicated, and that we have met the following conditions:
● Clearly referenced / listed all sources as appropriate.
● Referenced and put in inverted commas all quoted text (from books, web, etc).
● Given the sources of all pictures, data etc. that are not my own.
● Not made any use of the report(s) or essay(s) of any other student(s) either past
or present.
● Acknowledged in appropriate places any help that I have received from others
(e.g. fellow students, technicians, statisticians, external sources).
● Complied with any other plagiarism criteria specified in the Course handbook /
University website.
We understand that any false claim for this work will be penalized in accordance
with the University policies and regulations.
DECLARATION:
We are aware of and understand the University's policy on academic misconduct and
plagiarism, and we certify that this assessment is my / our own work, except where
indicated by referencing, and that we have followed the good academic practices noted
above.
Student 1 Signature:
Student 2 Signature:
Date:
If you are working in a group, please write your registration numbers and sign
with the date for every student in your group.
ACKNOWLEDGEMENT
We extend our sincere thanks to Dr. T. V. Gopal, Dean-CET, SRM Institute of Science and
Technology, for his invaluable support.
We are incredibly grateful to our Head of the Department, Dr. M. Pushpalatha, Department
of Computing Technologies, SRM Institute of Science and Technology, for her suggestions
and encouragement at all stages of the project work.
We want to convey our thanks to our Project Coordinator, Dr. S. Godfrey Winster; Panel
Head, Dr. Subash R, Associate Professor; and Panel Members, Dr. Divya Mohan, Associate
Professor, Mrs. P. Nithyakani, Assistant Professor, and Mr. S. Prabhu, Assistant Professor,
Department of Computing Technologies, SRM Institute of Science and Technology, for their
inputs during the project reviews and for their support.
We register our immeasurable thanks to our Faculty Advisor, Dr. Sivakumar S., Assistant
Professor, Department of Computing Technologies, SRM Institute of Science and
Technology, for leading and helping us to complete our course.
Our inexpressible respect and thanks to our guide, Dr. Sivakumar S., Assistant Professor,
Department of Computing Technologies, SRM Institute of Science and Technology, for
providing us with an opportunity to pursue our project under his mentorship. He provided us
with the freedom and support to explore the research topics of our interest. His passion for
solving problems and making a difference in the world has always been inspiring.
We sincerely thank all the staff and students of the Department of Computing Technologies,
School of Computing, SRM Institute of Science and Technology, for their help during our project.
Finally, we would like to thank our parents, family members, and friends for their
unconditional love, constant support and encouragement.
B. SASHANK
[Reg No:RA2011003010040]
SANJAY G
[Reg No:RA2011003010022]
ABSTRACT
Radiologists use multi-view imaging modalities such as computed
tomography (CT) scans, MRI, and X-rays to diagnose knee problems.
X-ray is the most cost-effective of these and is used routinely.
Several image-processing approaches are available to detect knee
disease in its early stages; however, the present methods can still
be improved in accuracy and precision. Furthermore, the
hand-crafted feature extraction techniques used in machine
learning-based approaches are time-consuming. This paper therefore
proposes a technique based on a customized CenterNet with a
pixel-wise voting scheme to automatically extract features for knee
disease detection. The proposed model uses the most representative
features and a weighted pixel-wise voting scheme to produce a more
accurate bounding box based on the voting score of each pixel
inside the preliminary box. It is a robust, improved architecture
based on CenterNet that uses DenseNet-201 as the base network for
feature extraction. The model precisely detects knee osteoarthritis
(KOA) in knee images and determines its severity level according to
the KL grading system (Grade I, Grade II, Grade III, and Grade IV).
The proposed technique outperforms existing techniques with an
accuracy of 99.14% on testing and 98.97% on cross-validation.
TABLE OF CONTENTS
ABSTRACT vi
LIST OF TABLES ix
LIST OF FIGURES x
LIST OF ABBREVIATIONS xi
1. INTRODUCTION 1
1.1 Objective 1
2 LITERATURE SURVEY 6
2.1 Motivation 6
2.2 Literature Survey 7
3.8.2 Activity Diagram 25
3.8.3 Sequence Diagram 26
3.8.4 Collaboration Diagram 27
3.8.5 Component Diagram 28
3.8.6 Deployment Diagram 28
4 METHODOLOGY 29
5.1 Results 33
5.1.1 Website Interface 34
5.1.2 Register and Login / Sign up page 35
5.1.3 Classification of Knee Osteoarthritis 37
5.1.4 Prediction of Knee Osteoarthritis 39
5.2 Discussion 41
6.1 Conclusion 43
6.1.1 Key Features 43
6.2 Future Scope 44
REFERENCES 45
APPENDIX 1 47
APPENDIX 2 54
LIST OF TABLES
LIST OF FIGURES
Fig. No.    TITLE    Page No.
ABBREVIATIONS
OA Osteoarthritis
AI Artificial Intelligence
CNN Convolutional Neural Network
MRI Magnetic Resonance Imaging
CT Computed Tomography
RGB Red, Green, Blue (color model)
GPU Graphics Processing Unit
CAD Computer-Aided Diagnosis
HIPAA Health Insurance Portability and Accountability Act
ML Machine Learning
CHAPTER 1
INTRODUCTION
Knee Osteoarthritis (KOA) is a chronic joint disease caused by the deterioration of the articular
cartilage in the knee. The symptoms of KOA include joint noises due to cracking, swelling, pain, and
difficulty in movement. Moreover, severe KOA may cause falls, i.e., fractures of the knee bone that
ultimately result in disability of the leg. Imaging techniques employed for the analysis of knee
disease include MRI, X-ray, and CT scans. MRI and CT scans are also considered suitable for KOA
assessment; they are typically performed with an intravenous contrast agent, which provides a clear
view of the knee joints. However, these approaches involve high costs, long examination times, and
potential health risks, particularly for patients with renal insufficiency. Techniques are therefore
needed that can assess KOA without a contrast agent and with minimal expense and examination
time. An X-ray is thus considered a more feasible approach: it provides bony structure visualization
and is a less expensive method for knee analysis.
1.1 OBJECTIVE
The main purpose of this work is to automate KOA detection effectively. We therefore propose a
robust deep learning-based method, a CenterNet built on DenseNet-201 with a pixel-wise voting
scheme, that extracts the most representative features to detect and recognize early KOA. We utilize
a weighted pixel-wise voting scheme to obtain the best localization results from our customized
CenterNet model.
It is possible to identify knee disease in its early stages using a variety of image-processing
techniques, but the current algorithms can still be improved in accuracy and precision. Hand-crafted
feature extraction mechanisms are also laborious in machine learning-based techniques.
Consequently, we propose an automated feature extraction method that uses a customized CenterNet
with a pixel-wise voting scheme.
1.3 SOFTWARE REQUIREMENTS
Software requirements deal with defining software resource requirements and prerequisites that need
to be installed on a computer to provide optimal functioning of an application. These requirements or
prerequisites are generally not included in the software installation package and need to be installed
separately before the software is installed.
The operating system is one of the first requirements mentioned when defining system (software)
requirements. Software may not be compatible with different versions of the same line of operating
systems, although some measure of backward compatibility is often maintained. For example, most
software designed for Microsoft Windows XP does not run on Microsoft Windows 98, although the
converse is not always true. Similarly, software designed using newer features of Linux kernel v2.6
generally does not run or compile properly (or at all) on Linux distributions using kernel v2.2 or
v2.4.
APIs and drivers – Software making extensive use of special hardware devices, like high-end
display adapters, needs special API or newer device drivers. A good example is DirectX, which is a
collection of APIs for handling tasks related to multimedia, especially game programming, on
Microsoft platforms.
Web browser – Most web applications and software depending heavily on Internet technologies
make use of the default browser installed on the system. Microsoft Internet Explorer is a frequent
choice for software running on Microsoft Windows, as it makes use of ActiveX controls, despite
their vulnerabilities.
1) Software : Anaconda
4) Back-end Framework : Jupyter Notebook
5) Database : Sqlite3
The most common set of requirements defined by any operating system or software application is the
set of physical computer resources, also known as hardware. A hardware requirements list is often
accompanied by a hardware compatibility list (HCL), especially in the case of operating systems. An
HCL lists tested, compatible, and sometimes incompatible hardware devices for a particular
operating system or application. The following sub-sections discuss the various aspects of hardware
requirements.
Architecture – All computer operating systems are designed for a particular computer architecture.
Most software applications are limited to particular operating systems running on particular
architectures. Although architecture-independent operating systems and applications exist, most need
to be recompiled to run on a new architecture. See also a list of common operating systems and their
supporting architectures.
Processing power – The power of the central processing unit (CPU) is a fundamental system
requirement for any software. Most software running on the x86 architecture defines processing
power as the model and the clock speed of the CPU. Many other features of a CPU that influence its
speed and power, such as bus speed, cache, and MIPS, are often ignored. This definition of power is
often erroneous, as AMD Athlon and Intel Pentium CPUs at similar clock speeds often have different
throughput speeds. Intel Pentium CPUs have enjoyed considerable popularity and are often
mentioned in this category.
Memory – All software, when run, resides in the random access memory (RAM) of a computer.
Memory requirements are defined after considering demands of the application, operating system,
supporting software and files, and other running processes. Optimal performance of other unrelated
software running on a multi-tasking computer system is also considered when defining this
requirement.
Secondary storage – Hard-disk requirements vary, depending on the size of software installation,
temporary files created and maintained while installing or running the software, and possible use of
swap space (if RAM is insufficient).
Display adapter – Software requiring a better than average computer graphics display, like graphics
editors and high-end games, often define high-end display adapters in the system requirements.
Peripherals – Some software applications need to make extensive and/or special use of some
peripherals, demanding the higher performance or functionality of such peripherals. Such peripherals
include CD-ROM drives, keyboards, pointing devices, network devices, etc.
CHAPTER 2
LITERATURE SURVEY
2.1 Motivation
formulating treatment strategies. This enhances diagnostic accuracy and enables more
informed clinical decisions.
8. Public Health Awareness: The project contributes to raising awareness about knee
osteoarthritis and the importance of early detection and management. By educating healthcare
professionals and the public, it promotes proactive measures for preventing and managing
this common musculoskeletal condition.
9. Research Collaboration: Collaborative efforts between researchers, healthcare professionals,
and technology experts foster interdisciplinary innovation in healthcare. The project
encourages collaboration to address complex healthcare challenges and improve patient
outcomes.
10. Patient Engagement: Early detection empowers patients to actively participate in their
healthcare journey by providing them with timely information about their condition. This
fosters patient engagement and encourages adherence to treatment plans and preventive
measures.
These motivations highlight the diverse benefits of developing advanced computer vision systems for
the early detection and management of knee osteoarthritis.
test was performed. Results: The frequency of falls was 63.2% for the past year. The majority of falls
took place during walking (89.23%), and the main cause of falling was stumbling (41.54%). There
was a high rate of injurious falling (29.3%). The time patients needed to complete the physical
performance test implied the presence of disability and frailty. The high fall risk, high disability
levels, and low quality of life were confirmed by questionnaires and the mobility test. Conclusions:
Patients with severe knee osteoarthritis were at greater risk of falling than healthy older adults. Pain,
stiffness, limited physical ability, and reduced muscle strength, all consequences of severe knee
osteoarthritis, restricted patients' quality of life and increased fall risk. Therefore, patients with
severe knee osteoarthritis should not postpone total knee replacement, since they would otherwise
face more complicated problems when falls are combined with fractures, other serious injuries, and
disability.
Explainable machine learning for knee osteoarthritis diagnosis based on a
novel fuzzy feature selection methodology
Knee Osteoarthritis (KOA) is a degenerative joint disease of the knee that results
from the progressive loss of cartilage. Due to KOA's multifactorial nature and the
poor understanding of its pathophysiology, there is a need for reliable tools that will
reduce diagnostic errors made by clinicians. The existence of public databases has
facilitated the advent of advanced analytics in KOA research; however, the
heterogeneity of the available data, along with the observed high feature
dimensionality, makes this diagnosis task difficult. The objective of the present study
is to provide a robust Feature Selection (FS) methodology that could: (i) handle the
multidimensional nature of the available datasets and (ii) alleviate the defectiveness
of existing feature selection techniques towards the identification of important risk
factors which contribute to KOA diagnosis. For this aim, we used multidimensional
data obtained from the Osteoarthritis Initiative database for individuals without or
with KOA. The proposed fuzzy ensemble feature selection methodology aggregates
the results of several FS algorithms (filter, wrapper and embedded ones) based on
fuzzy logic. The effectiveness of the proposed methodology was evaluated using an
extensive experimental setup that involved multiple competing FS algorithms and
several well-known ML models. A 73.55% classification accuracy was achieved by
the best performing model (Random Forest classifier) on a group of twenty-one
selected risk factors. Explainability analysis was finally performed to quantify the
impact of the selected features on the model’s output thus enhancing our
understanding of the rationale behind the decision-making mechanism of the best
model.
Analysis of knee joint vibration signals using ensemble empirical mode decomposition
Knee joint vibroarthrographic (VAG) signals acquired from extensive movements of the knee joints
provide insight about the current pathological condition of the knee. VAG signals are non-stationary,
aperiodic, and non-linear in nature. This investigation focused on analyzing VAG signals using
Ensemble Empirical Mode Decomposition (EEMD) and modeling a reconstructed signal using
Detrended Fluctuation Analysis (DFA). In the proposed methodology, the reconstructed signal was
used and entropy-based measures were extracted as features for training semi-supervised learning
classifier models. Features such as Tsallis entropy, permutation entropy, and spectral entropy were
extracted as quantified measures of the complexity of the signals. These features were converted into
training vectors for classification using Random Forest. This study yielded an accuracy of 86.52%
while classifying signals. The proposed work can be used in non-invasive pre-screening of
knee-related issues such as articular damage and chondromalacia patellae, as it could prove useful in
classifying VAG signals into abnormal and normal sets.
LINK: …encedirect.com/science/article/abs/pii/S1566253522001981 (2023)
This work introduces an Edge-Semantic Attention Block (ESAB), with feature fusion carried out
through a Multi-Feature Inference Block (MFIB) that uses graph convolution for effective integration
of semantic and edge information in multimodal MRI for brain tumor segmentation. The approach
requires careful performance optimization for practical application, but shows superior performance
over existing methods, showcasing its potential for more accurate and clinically relevant tumor
delineation in medical imaging.
TITLE: Using Deep Supervised Learning from Radiological Images: A Paradigm Shift
AUTHOR: Tanushree Meena and Sudipta Roy
LINK: https://www.mdpi.com/2075-4418/12/10/2420 (2022)
This work analyzes the existing literature on deep learning (DL) applications in bone imaging,
particularly fracture detection, aiming to enhance diagnostic accuracy and efficiency by analyzing
medical images with DL techniques. Comprehensive search strategies are employed across scientific
databases to identify relevant studies, with inclusion criteria covering DL techniques for fracture
detection in radiographs; critical assessment and synthesis of the findings elucidate DL's role,
challenges, and future in bone imaging. Limitations of the proposed system involve potential reliance
on high-quality labeled data, computational resource requirements for deep learning models, and
interpretability challenges in complex fracture cases. The study identifies a promising avenue for
assisting radiologists in bone imaging by aiding fracture detection; while demonstrating significant
potential in diagnosing bone diseases, including fractures, challenges persist in data quality,
interpretability, and the need for further research to enhance DL's role in revolutionizing bone
imaging.
…imaging analysis, drug discovery, patient care, and nanotechnology, emphasizing the need for
explainable AI and ethical considerations. The work highlights ethical considerations regarding AI
and SML usage, necessitating explainable AI, while envisioning a knowledge-centric health
management system for data-intensive healthcare research. The review critically evaluates data
quality, knowledge management, and the skills necessary for data-intensive healthcare research, and
this comprehensive insight informs future research directions in the healthcare and biomedical
sectors.
…sampled blocks via skip-connections; additionally, it is integrated into the decoder pathway
through downsampling, enhancing segmentation accuracy across various organ sizes. The model
achieved accuracies of 95.7%, 80.9%, 81.0%, and 77.6% for lung, heart, trachea, and collarbone
segmentation, respectively, surpassing U-Net and other recent segmentation models and reducing
reliance on manual annotations, benefiting medical image analysis and diagnosis.
AUTHOR: L. C. Ribas, R. Riad et al.
LINK: https://www.researchgate.net/publication/354753439_A_complex_network_based_approach_for_knee_Osteoarthritis_detection_Data_from_the_Osteoarthritis_initiative (2022)
This work models knee X-ray images as complex networks with distance-based connections. A
thresholding technique removes selected edges to reveal texture features, applying an automated
threshold selection strategy. Statistical measures computed from the network form a feature vector
used for classification, evaluated in experiments on images from the OAI database against
state-of-the-art learning models and traditional texture descriptors. A noted limitation is the focus on
texture property extraction and limited performance compared to specific learning models for knee
OA detection. Nonetheless, comparative analyses against established models and traditional
descriptors showcase competitive performance in early knee OA detection, signifying the viability of
this method for improved early diagnosis and treatment of knee osteoarthritis.
CHAPTER 3
SYSTEM ARCHITECTURE AND DESIGN
In the literature, a new approach was introduced for early detection of knee osteoarthritis (OA) based
on complex network theory. The approach models an X-ray image as a complex network and applies
a set of thresholds to reveal texture properties, employing a specific strategy to select the set of
thresholds automatically. A new set of statistical measures extracted from the network is used to
compute a feature vector, evaluated in a classification experiment using knee X-ray images from the
OsteoArthritis Initiative (OAI) database.
● The existing study employs manual threshold selection for revealing texture properties from
X-ray images, which can introduce subjectivity and potential bias into the process.
● The existing study relies solely on texture features extracted from the complex network,
which may not capture all relevant information for knee disease detection.
● The existing study does not use the Mendeley Dataset, which might limit its evaluation to a
single dataset.
The proposed work in this paper is a technique based on a customized CenterNet with a pixel-wise
voting scheme to automatically extract features for knee disease detection. The proposed model uses
the most representative features and a weighted pixel-wise voting scheme to produce a more accurate
bounding box based on the voting score of each pixel inside the preliminary box. The proposed model
is a robust, improved architecture based on CenterNet that uses DenseNet-201 as the base network
for feature extraction. We utilize two datasets in the proposed study: 1) the Mendeley dataset, used
for training and testing, and 2) the OAI dataset, used for cross-validation. Various experiments have
been performed to assess the performance of the proposed model.
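The weighted pixel-wise voting idea can be illustrated with a short NumPy sketch. The score-weighted centroid refinement below is our own illustrative formulation (the array names and the exact refinement rule are assumptions, not the precise scheme of the proposed model):

```python
import numpy as np

def refine_box_by_pixel_voting(scores, box):
    """Refine a preliminary bounding box using weighted pixel-wise voting.

    scores : 2D array of per-pixel confidence scores for the whole image.
    box    : (x0, y0, x1, y1) preliminary box predicted by the detector.

    Each pixel inside the box "votes" with its confidence score; the
    refined box is re-centred on the score-weighted centroid of the votes.
    """
    x0, y0, x1, y1 = box
    patch = scores[y0:y1, x0:x1]
    total = patch.sum()
    if total == 0:                       # no votes: keep the original box
        return box
    ys, xs = np.mgrid[y0:y1, x0:x1]     # pixel coordinates inside the box
    cx = (xs * patch).sum() / total     # weighted centroid, x
    cy = (ys * patch).sum() / total     # weighted centroid, y
    w, h = x1 - x0, y1 - y0
    nx0 = int(round(cx - w / 2))
    ny0 = int(round(cy - h / 2))
    return (nx0, ny0, nx0 + w, ny0 + h)
```

For example, if the confidence scores peak off-centre inside the preliminary box, the refined box shifts toward that peak while keeping its size.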
1. Our model utilizes more sophisticated techniques, such as customized CenterNet, which can
potentially capture both texture and spatial information, leading to better performance.
2. Using multiple datasets helps assess the generalization and robustness of the proposed
model.
3. Using DenseNet-201 for feature extraction can lead to improved generalization.
4. DenseNet's dense connections and efficient feature reuse reduce the risk of overfitting to the
training data.
1. Data Collection
2. Image processing
3. Data augmentation
4. Training model
5. Final outcome
● Usability requirement
● Serviceability requirement
● Manageability requirement
● Recoverability requirement
● Security requirement
● Data Integrity requirement
● Capacity requirement
● Availability requirement
● Scalability requirement
● Interoperability requirement
● Reliability requirement
● Maintainability requirement
● Regulatory requirement
● Environmental requirement
1. The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to
represent a system in terms of the input data to the system, the various processing carried out
on this data, and the output data generated by the system.
2. The data flow diagram (DFD) is one of the most important modeling tools. It is used to model
the system components: the system process, the data used by the process, the external entities
that interact with the system, and the information flows in the system.
3. The DFD shows how information moves through the system and how it is modified by a
series of transformations. It is a graphical technique that depicts information flow and the
transformations applied as data moves from input to output.
4. A DFD may be used to represent a system at any level of abstraction and may be partitioned
into levels that represent increasing information flow and functional detail.
Fig.3.1 System architecture
UML stands for Unified Modeling Language. UML is a standardized general-purpose modeling
language in the field of object-oriented software engineering. The standard is managed, and was
created by, the Object Management Group.
The goal is for UML to become a common language for creating models of object-oriented computer
software. In its current form, UML comprises two major components: a meta-model and a notation.
In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing,
and documenting the artifacts of software systems, as well as for business modeling and other
non-software systems.
The UML represents a collection of best engineering practices that have proven successful in the
modeling of large and complex systems.
The UML is a very important part of developing object-oriented software and the software
development process. The UML uses mostly graphical notations to express the design of software
projects.
3.7 GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can
develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns, and
components.
7. Integrate best practices.
Fig.3.2 Use Case Diagram
The class diagram is used to refine the use case diagram and define a detailed design of the system.
The class diagram classifies the actors defined in the use case diagram into a set of interrelated
classes. The relationship or association between the classes can be either an "is-a" or "has-a"
relationship. Each class in the class diagram may be capable of providing certain functionalities.
These functionalities provided by the class are termed "methods" of the class. Apart from this, each
class may have certain "attributes" that uniquely identify the class.
Fig.3.3 Class Diagram
The process flows in the system are captured in the activity diagram. Similar to a state diagram, an
activity diagram also consists of activities, actions, transitions, initial and final states, and guard
conditions.
Fig.3.4 Activity Diagram
A sequence diagram represents the interaction between different objects in the system. The important
aspect of a sequence diagram is that it is time-ordered. This means that the exact sequence of the
interactions between the objects is represented step by step. Different objects in the sequence
diagram interact with each other by passing "messages".
Fig.3.5 Sequence Diagram
A collaboration diagram groups together the interactions between different objects. The interactions
are listed as numbered interactions that help to trace the sequence of the interactions. The
collaboration diagram helps to identify all the possible interactions that each object has with other
objects.
3.8.5 Component diagram:
The component diagram represents the high-level parts that make up the system. This diagram
depicts, at a high level, what components form part of the system and how they are interrelated. A
component diagram depicts the components culled after the system has undergone the development
or construction phase.
The deployment diagram captures the configuration of the runtime elements of the application. This
diagram is by far the most useful when a system is built and ready to be deployed.
Fig.3.8 Deployment Diagram
CHAPTER 4
METHODOLOGY
The initial phase of the methodology encompasses the preparatory steps required for handling and
processing the data prior to model training. This involves the importation of essential Python
libraries such as NumPy, TensorFlow, OpenCV, and Matplotlib, which provide indispensable tools
for various tasks including data manipulation, image processing, visualization, and deep learning.
The dataset used for knee osteoarthritis detection, comprising knee X-ray images and corresponding
annotations, is acquired and subjected to preprocessing steps to ensure its suitability for model
training. This preprocessing may involve resizing the images to a standardized resolution,
normalizing pixel values to a common scale, and converting the annotations into appropriate formats
compatible with the chosen deep learning framework.
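The resizing and normalization steps described above can be sketched as follows. This is a minimal NumPy-only version; the project itself names OpenCV, whose `cv2.resize` would normally do the resizing, and the 224x224 target here is our illustrative assumption, not a resolution stated in the report:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2D image to a standardized resolution."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

def normalize(img):
    """Scale pixel values to the common [0, 1] range."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)   # epsilon guards constant images

def preprocess(img, target=224):
    """Resize then normalize, as in the preprocessing stage described above."""
    return normalize(resize_nearest(img, target, target))
```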
In addition to basic preprocessing, data augmentation techniques are employed to increase the
diversity of the dataset and enhance the model's generalization capability. These techniques include
random transformations such as rotation, flipping, and zooming, which generate augmented samples
from the original data to expose the model to a wider range of variations.
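A minimal sketch of such augmentation follows (NumPy only; a real pipeline would typically use framework preprocessing layers, and the exact transformations used in the project are not specified beyond rotation, flipping, and zooming):

```python
import numpy as np

def augment(img, rng):
    """Generate augmented variants of one 2D image: flips, a random
    90-degree rotation, and a central zoom (crop then stretch by indexing)."""
    out = [img,
           img[:, ::-1],                          # horizontal flip
           img[::-1, :],                          # vertical flip
           np.rot90(img, k=int(rng.integers(1, 4)))]  # random 90-degree rotation
    # central zoom: crop the middle 75% and stretch back to the original size
    h, w = img.shape
    crop = img[h // 8: h - h // 8, w // 8: w - w // 8]
    rows = np.arange(h) * crop.shape[0] // h
    cols = np.arange(w) * crop.shape[1] // w
    out.append(crop[rows][:, cols])
    return out
```

Applying this to each training image multiplies the dataset size and exposes the model to pose and scale variations it would not otherwise see.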
Furthermore, the dataset undergoes quality checks to identify and handle missing or corrupted
images. Images that are deemed unsuitable for training are either removed from the dataset or
subjected to data imputation techniques to fill in missing values, ensuring the integrity of the dataset.
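A simple quality check of the kind described might look like this (the rejection criteria below are our own illustrative choices; a production system would apply more domain-specific checks):

```python
import numpy as np

def is_usable(img):
    """Reject images that are missing, empty, constant, or contain NaNs."""
    if img is None:
        return False
    arr = np.asarray(img, dtype=np.float64)
    if arr.size == 0 or np.isnan(arr).any():
        return False
    if arr.max() == arr.min():     # blank or constant frame
        return False
    return True

def clean_dataset(images):
    """Keep only the images that pass the quality check."""
    return [im for im in images if is_usable(im)]
```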
The model architecture is a pivotal component of the methodology, as it determines the underlying
framework upon which the knee osteoarthritis detection model is built. The architecture is primarily
based on the CenterNet framework, a state-of-the-art object detection model renowned for its
accuracy and efficiency.
Modifications are made to the original CenterNet architecture to tailor it specifically for knee
osteoarthritis detection. These modifications may involve alterations to the backbone network
architecture, incorporation of additional convolutional layers to capture more intricate features
relevant to knee joint detection, or integration of attention mechanisms to enhance the model's focus
on critical regions of interest.
The training procedure involves the iterative process of optimizing the model parameters using the
prepared dataset to learn the task of knee osteoarthritis detection. This process unfolds across
multiple epochs, with each epoch representing a complete pass through the entire training dataset.
During training, the model is optimized using a stochastic gradient descent (SGD) optimizer, with
carefully chosen learning rate scheduling and momentum parameters to ensure efficient convergence
towards the optimal solution.
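The momentum update rule and a step-wise learning-rate schedule can be written out explicitly; the decay factor and schedule below are illustrative assumptions, not the report's tuned values:

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, lr, momentum=0.9):
    """One SGD-with-momentum update: v <- m*v - lr*g, then p <- p + v."""
    velocity = momentum * velocity - lr * grads
    return params + velocity, velocity

def step_decay_lr(base_lr, epoch, drop=0.5, every=10):
    """Step-wise schedule: halve the learning rate every `every` epochs."""
    return base_lr * (drop ** (epoch // every))
```

Minimizing a toy objective such as f(p) = p² with these helpers drives p toward zero, a quick sanity check that the velocity term accumulates in the right direction.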
Model performance is continuously monitored during training using evaluation metrics such as mean
average precision (mAP) and intersection over union (IoU), which quantify the accuracy of object
detection and localization.
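IoU, the localization half of these metrics, is simple to state precisely. A minimal implementation for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)   # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

mAP then averages precision over recall levels, counting a detection as correct only when its IoU against the ground-truth box clears a threshold (0.5 is the conventional choice).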
To prevent overfitting and promote model generalization, regularization techniques such as dropout
and batch normalization are applied, which mitigate the risk of the model memorizing the training
data and improve its ability to generalize to unseen data.
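Inverted dropout, the variant most frameworks implement, can be sketched in a few lines; the 0.5 rate used in the check is illustrative:

```python
import numpy as np

def dropout(x, rate, rng, training=True):
    """Zero a random fraction `rate` of activations during training and
    rescale the survivors by 1/(1-rate) so the expected value is unchanged.
    At inference time the input passes through untouched."""
    if not training or rate == 0.0:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)
```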
The performance of the trained model is evaluated using a comprehensive set of evaluation metrics
tailored to the task of knee osteoarthritis detection. These metrics include precision, recall, F1-score,
and intersection over union (IoU), which collectively assess the model's ability to accurately detect
and localize knee joints within the X-ray images.
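Given per-image detection counts, these three scores reduce to a few lines; the counts in the check below are made up for illustration:

```python
def detection_scores(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```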
Confusion matrices and classification reports are generated to provide detailed insights into the
model's performance, showcasing its ability to correctly classify knee joints as either healthy or
affected by osteoarthritis.
Moreover, receiver operating characteristic (ROC) curves and area under the curve (AUC) scores
may be computed to quantify the model's discriminative power and its ability to balance true positive
and false positive rates, providing additional insights into its performance characteristics.
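AUC has a direct probabilistic reading, the chance that a randomly chosen positive case is scored above a randomly chosen negative one, which yields a tiny reference implementation; the labels and scores in the check are invented for illustration:

```python
def auc_score(labels, scores):
    """AUC as the fraction of positive/negative pairs ranked correctly,
    counting ties as half a win (the Mann-Whitney U formulation)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```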
Once trained and validated, the model is deployed in clinical settings or healthcare facilities for
real-world application in knee osteoarthritis detection from X-ray images.
Integration with existing healthcare infrastructure such as Picture Archiving and Communication
Systems (PACS) or Radiology Information Systems (RIS) is facilitated to streamline the diagnostic
workflow and enable seamless integration of the model into existing clinical pipelines.
The deployed model undergoes rigorous validation and testing to ensure its reliability, accuracy, and
safety in real-world scenarios.
Independent validation studies involving large patient cohorts and diverse clinical settings are
conducted to assess the model's performance across various populations and imaging conditions,
providing robust evidence of its efficacy and generalizability.
Continuous monitoring and feedback mechanisms are established to track the model's performance
over time, identify potential issues or biases, and implement necessary updates or improvements,
ensuring the ongoing relevance and effectiveness of the model in clinical practice.
CHAPTER 5
RESULTS AND DISCUSSION
5.1 Results
The results section of the knee osteoarthritis detection system offers a comprehensive overview of
the classification and prediction outcomes, providing users with actionable insights into the severity
and progression of the condition. Through detailed analysis and visualization of classification results,
users can interpret the detected severity levels accurately, enabling informed decision-making
regarding treatment options and management strategies. Additionally, the predictive analytics
presented in this section offer valuable prognostic information, forecasting the future development of
knee osteoarthritis based on individual characteristics and risk factors. Visual representations, such as
charts, graphs, and heatmaps, enhance the interpretability of the results, facilitating clear
communication and understanding. Overall, the results section empowers users with valuable
information to guide clinical decision-making and optimize patient care, ultimately contributing to
improved outcomes and quality of life for individuals affected by knee osteoarthritis.
Figure 5.1: Website Interface
The website interface provides a cohesive platform for knee osteoarthritis detection and prediction,
offering users a seamless experience to access essential tools and information. Through intuitive
navigation, users can effortlessly explore key sections such as classification and prediction,
leveraging advanced algorithms like the improved CenterNet with Pixel-Wise Voting Scheme to
accurately assess the severity of osteoarthritis and forecast its progression. Additionally, the about
page furnishes users with comprehensive insights into the system's objectives, methodologies, and
potential healthcare implications, fostering transparency and understanding. To enable personalized
interactions and track previous activities, the login/register page facilitates secure access to user
accounts, ensuring data privacy and enhancing user engagement. With a user-centric design,
informative content, and intuitive functionalities, the website interface aims to empower users in
their journey towards knee osteoarthritis detection and management.
Figure 5.3: Register and Login / Sign up page
The sign-up or registration page serves as a pivotal gateway for users to unlock personalized features
and access advanced functionalities within the knee osteoarthritis detection and prediction system.
With a streamlined and user-friendly interface, this page guides users through a seamless registration
process, requiring basic information such as name, email, and password. By completing the
registration, users gain exclusive access to personalized features, including the ability to track
previous classifications, save preferences, and receive updates tailored to their needs and interests.
Furthermore, the registration page prioritizes data security and privacy, implementing robust
encryption protocols to safeguard user information and ensure peace of mind. With its intuitive
design and emphasis on user empowerment, the sign-up page plays a crucial role in enhancing the
overall user experience and fostering long-term engagement with the knee osteoarthritis detection
system.
Figure 5.6: Classification of Knee Osteoarthritis
Figure 5.8: Classification of Knee Osteoarthritis
Moving forward, the classification section of the website offers users a powerful tool for accurately
assessing the severity of knee osteoarthritis. Leveraging cutting-edge algorithms such as the
improved CenterNet with Pixel-Wise Voting Scheme, this section allows users to upload knee joint
images and receive real-time classification results. The classification process is intuitive and
efficient, guiding users through each step while providing comprehensive insights into the detected
severity levels. Additionally, the system ensures accuracy and reliability by incorporating advanced
image processing techniques and leveraging a diverse dataset for training and validation purposes.
Overall, the classification section empowers users with actionable information to make informed
decisions regarding treatment and management strategies for knee osteoarthritis.
Figure 5.9: Prediction of Knee Osteoarthritis
Figure 5.11: Prediction of Knee Osteoarthritis
On the other hand, the prediction section of the website harnesses predictive analytics to forecast the
progression of knee osteoarthritis. By analyzing relevant data and leveraging machine learning
models, this section provides users with valuable insights into the future development of the
condition. Users can upload images or input relevant data points, allowing the system to generate
personalized predictions based on individual characteristics and risk factors. The prediction process is
dynamic and adaptive, continuously learning from new data to improve accuracy and reliability over
time. By giving users foresight into potential disease trajectories, the prediction section
enables proactive intervention and personalized healthcare management for knee osteoarthritis.
5.2 Discussion
Knee osteoarthritis (OA) represents a significant burden on global healthcare systems, necessitating
accurate and efficient detection methodologies to guide clinical decision-making and optimize
patient outcomes. In recent years, the advent of advanced imaging techniques and machine learning
algorithms has revolutionized the field of knee OA detection, offering promising avenues for early
diagnosis and intervention. The utilization of an improved CenterNet with a pixel-wise voting
scheme presents a novel approach to knee OA detection, capitalizing on the strengths of deep
learning architectures to enhance accuracy and reliability.
The application of the CenterNet framework, augmented with pixel-wise voting mechanisms,
introduces a paradigm shift in knee OA detection by leveraging the inherent spatial relationships
within medical imaging data. Unlike traditional methods that rely solely on feature extraction and
classification, this approach integrates contextual information at the pixel level, enabling more
precise localization and characterization of pathological changes within the knee joint. By harnessing
the collective intelligence of individual pixels through a voting scheme, the model achieves superior
performance in delineating anatomical structures and identifying disease-specific patterns associated
with knee OA.
One of the key advantages of the proposed methodology lies in its ability to adapt to diverse imaging
modalities and patient populations. Unlike conventional algorithms that may struggle with variations
in image quality and patient demographics, the improved CenterNet with pixel-wise voting
demonstrates robustness and generalizability across heterogeneous datasets. This versatility is
paramount in real-world clinical settings, where clinicians encounter a wide spectrum of imaging
modalities and patient characteristics, necessitating a flexible and adaptable detection framework.
Furthermore, the integration of deep learning techniques with pixel-wise voting mechanisms
facilitates interpretability and transparency in knee OA detection. By visualizing the decision-making
process at the pixel level, clinicians gain valuable insights into the model's reasoning and can
validate the relevance of detected abnormalities in the context of clinical practice. This transparency
fosters trust and acceptance among healthcare professionals, paving the way for the seamless
integration of AI-powered detection systems into routine clinical workflows.
Despite the promising potential of the improved CenterNet with a pixel-wise voting scheme, several
challenges and considerations warrant attention. Firstly, the need for large annotated datasets remains
a bottleneck in model development and validation, necessitating collaborative efforts to curate
comprehensive datasets representative of diverse patient populations and disease phenotypes.
Additionally, the ethical implications surrounding the deployment of AI-driven detection systems,
including data privacy, algorithmic bias, and regulatory compliance, necessitate careful consideration
and proactive mitigation strategies.
In conclusion, the adoption of an improved CenterNet with a pixel-wise voting scheme holds
immense promise for advancing knee OA detection and improving patient care outcomes. By
leveraging deep learning architectures and pixel-level contextual information, this approach offers
unparalleled accuracy, versatility, and interpretability in delineating knee OA-related pathology.
However, continued research efforts, interdisciplinary collaboration, and ethical foresight are
imperative to realize the full potential of AI-driven knee OA detection systems and ensure their
responsible integration into clinical practice.
CHAPTER 6
CONCLUSIONS AND FUTURE SCOPE
6.1 Conclusion
In this study, we proposed a robust deep learning architecture to detect knee osteoarthritis (KOA)
and identify its severity levels based on KL grading, i.e., G-I, G-II, G-III, and G-IV. The proposed
method, an improved CenterNet with a DenseNet-201 backbone, is a robust technique for detecting
KOA and grading its severity. The proposed system effectively overcomes the challenge of class
imbalance in the dataset and, thanks to dense connections among all layers, extracts the most
representative features from the identified ROI. The knowledge distillation concept is employed to
keep the model simple without increasing its computational cost, transferring knowledge from a
complex network to a simple one and making it more robust. We utilized two datasets in this study:
1) the Mendeley dataset, used for training and testing, and 2) the OAI dataset, used for
cross-validation. Various experiments were performed to assess the performance of the proposed
model. The proposed technique outperforms existing techniques, with good accuracy in both testing
and cross-validation.
6.1.1 Key Features
ongoing academic and practical discourse in the field.
6.2 Future Scope
These recommendations provide a roadmap for future research endeavors aimed at advancing the
state-of-the-art in knee osteoarthritis detection and improving patient care through innovative
technological solutions.
APPENDIX 1
CODING
from __future__ import division, print_function
# coding=utf-8
import io
import os
import random
import smtplib
import sqlite3
from datetime import datetime
from email.message import EmailMessage

import cv2
import numpy as np
import pandas as pd
import torch
from PIL import Image

# Keras (classification model)
from keras.models import load_model
from keras.preprocessing.image import load_img, img_to_array

# Flask utils
from flask import Flask, redirect, render_template, request, Response, url_for
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_FOLDER = 'static/uploads/'

# NOTE: the definition of ALLOWED_EXTENSIONS and the first line of this
# helper were lost to a page break in the original listing; the `def` line
# below is a reconstruction.
def allowed_file(filename):
    return filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

# model_path2 and the custom metric functions (f1_m, precision_m, recall_m)
# are defined elsewhere in the full source; they are referenced here but not
# reproduced in this listing.
CTS = load_model(model_path2,
                 custom_objects={'f1_score': f1_m,
                                 'precision_score': precision_m,
                                 'recall_score': recall_m},
                 compile=False)
def model_predict2(image_path, model):
    print("Predicted")
    image = load_img(image_path, target_size=(128, 128))
    image = img_to_array(image)
    image = image / 255
    image = np.expand_dims(image, axis=0)
    result = np.argmax(model.predict(image))
    print(result)
    if result == 0:
        return "The patient is diagnosed with Stage 0 Knee Osteoarthritis", "result.html"
    elif result == 1:
        return "The patient is diagnosed with Stage 1 Knee Osteoarthritis", "result.html"
    elif result == 2:
        return "The patient is diagnosed with Stage 2 Knee Osteoarthritis", "result.html"
    elif result == 3:
        return "The patient is diagnosed with Stage 3 Knee Osteoarthritis", "result.html"
@app.route("/about")
def about():
    return render_template("about.html")

@app.route('/')
@app.route('/home')
def home():
    return render_template('home.html')

@app.route('/logon')
def logon():
    return render_template('signup.html')

@app.route('/login')
def login():
    return render_template('signin.html')

@app.route('/index')
def index():
    return render_template('index.html')

@app.route('/index1')
def index1():
    return render_template('index1.html')

@app.route('/predict2', methods=['GET', 'POST'])
def predict2():
    print("Entered")
    print("Entered here")
    file = request.files['file']  # fetch input
    filename = file.filename
    print("@@ Input posted = ", filename)
    # (the remainder of this handler is elided in the original listing)
@app.route("/signup")
def signup():
    global otp, username, name, email, number, password
    username = request.args.get('user', '')
    name = request.args.get('name', '')
    email = request.args.get('email', '')
    number = request.args.get('mobile', '')
    password = request.args.get('password', '')
    otp = random.randint(1000, 5000)
    print(otp)
    msg = EmailMessage()
    msg.set_content("Your OTP is : " + str(otp))
    msg['Subject'] = 'OTP'
    msg['From'] = "evotingotp4@gmail.com"
    msg['To'] = email
    s = smtplib.SMTP('smtp.gmail.com', 587)
    s.starttls()
    s.login("evotingotp4@gmail.com", "xowpojqyiygprhgr")
    s.send_message(msg)
    s.quit()
    return render_template("val.html")
@app.route('/predict_lo', methods=['POST'])
def predict_lo():
    global otp, username, name, email, number, password
    if request.method == 'POST':
        message = request.form['message']
        print(message)
        if int(message) == otp:
            print("TRUE")
            con = sqlite3.connect('signup.db')
            cur = con.cursor()
            cur.execute("insert into `info` (`user`,`email`,`password`,`mobile`,`name`) "
                        "VALUES (?, ?, ?, ?, ?)",
                        (username, email, password, number, name))
            con.commit()
            con.close()
            return render_template("signin.html")
    return render_template("signup.html")

@app.route("/signin")
def signin():
    mail1 = request.args.get('user', '')
    password1 = request.args.get('password', '')
    con = sqlite3.connect('signup.db')
    cur = con.cursor()
    cur.execute("select `user`, `password` from info where `user` = ? AND `password` = ?",
                (mail1, password1))
    data = cur.fetchone()
    if data is None:
        return render_template("signin.html")
    # (the successful-login branch is elided in the original listing)
@app.route("/notebook1")
def notebook1():
    return render_template("Notebook.html")

@app.route("/notebook2")
def notebook2():
    return render_template("Detection.html")

# `model` (a YOLOv5-style detector) is loaded earlier in the full source;
# the loading code is not reproduced in this listing.
model.eval()
model.conf = 0.5   # confidence threshold
model.iou = 0.45   # NMS IoU threshold

def gen():
    """
    Take a video stream from the webcam, run each frame through the
    model, and yield the annotated frames as an MJPEG stream.
    """
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        success, frame = cap.read()
        if success:
            ret, buffer = cv2.imencode('.jpg', frame)
            frame = buffer.tobytes()
            img = Image.open(io.BytesIO(frame))
            results = model(img, size=415)
            results.print()
            img = np.squeeze(results.render())
            img_BGR = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
        else:
            break
        frame = cv2.imencode('.jpg', img_BGR)[1].tobytes()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video')
def video():
    """
    Return a multipart response whose body is the gen() frame generator.
    """
    return Response(gen(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
# NOTE: the route decorator and `def` line of this handler were lost to a
# page break in the original listing; the names below are reconstructions.
@app.route('/predict1', methods=['GET', 'POST'])
def predict1():
    """
    Take an uploaded image, run it through the model, and save the
    annotated output image to the static folder.
    """
    if request.method == "POST":
        if "file" not in request.files:
            return redirect(request.url)
        file = request.files["file"]
        if not file:
            return
        img_bytes = file.read()
        img = Image.open(io.BytesIO(img_bytes))
        results = model(img, size=415)
        results.render()
        for img in results.render():
            img_base64 = Image.fromarray(img)
            img_base64.save("static/image0.jpg", format="JPEG")
        return redirect("static/image0.jpg")
    return render_template("index1.html")


if __name__ == '__main__':
    app.run(debug=False)
This Flask web application integrates several functionalities to create a versatile platform for image
processing, user management, and real-time object detection. At its core, the application leverages
deep learning models for image classification and object detection tasks.
The image classification feature allows users to upload images, typically medical scans related to
knee osteoarthritis. These images are processed using a pre-trained deep learning model loaded from
an h5 file. This model, which is likely trained on a dataset of knee images annotated with
osteoarthritis severity levels, predicts the stage of osteoarthritis based on the features present in the
input image. Users receive a diagnosis report indicating the severity stage of knee osteoarthritis
detected in the uploaded image.
User authentication is another critical aspect of the application. Users can sign up for an account by
providing their personal details, including username, name, email, mobile number, and password.
Upon sign-up, users are sent an OTP (One-Time Password) via email for verification. This
verification step ensures the validity of user accounts and enhances security. Once verified, user
information is stored in a SQLite database for future login and authentication.
The real-time object detection capability enhances the application's utility by providing live detection
of objects from webcam feeds. This functionality is powered by the YOLOv5 model, a
state-of-the-art deep learning model known for its speed and accuracy in object detection tasks. As
users access the webcam feed through the application, each frame is processed by the YOLOv5
model, which identifies and annotates objects in real-time. Users can observe bounding boxes around
detected objects, enabling applications in various domains such as surveillance, security, and
computer vision research.
Additionally, the application includes several auxiliary pages such as "About," "Home," "Logon"
(Sign Up), "Login," "Index," "Index1," "Notebook1," and "Notebook2." These pages facilitate user
navigation and interaction with the application, providing a seamless user experience.
Furthermore, the application incorporates email functionality to send OTP emails for user
verification during the sign-up process. It utilizes the smtplib library to establish connections with
SMTP servers and send email messages containing OTP codes to users' registered email addresses.
Overall, this Flask web application combines image processing, user management, and real-time
object detection capabilities to create a versatile platform with applications in healthcare, security,
and computer vision research.
APPENDIX 2
PLAGIARISM REPORT
SRM INSTITUTE OF SCIENCE AND TECHNOLOGY
(Deemed to be University u/s 3 of UGC Act, 1956)

3. Registration Number: RA2011003010040, RA2011003010022
5. Department: COMPUTING TECHNOLOGIES
6. Faculty: School of Computing, Faculty of Engineering and Technology
8. Whether the above project/dissertation is done by a group; if so, how many students together completed the project: 2
   b) Name & Register number of candidates:
      B. SASHANK  RA2011003010040
      SANJAY G    RA2011003010022
9. Name and address of the Supervisor/Guide:
   Mrs. P. Nithyakani, Assistant Professor,
   Department of Computing Technologies,
   SRM Institute of Science and Technology, Kattankulathur-603203
   Mail ID: nithyakp@srmist.edu.in
   Mobile Number: 9790626371
13. Plagiarism Details: (to attach the final report from the software)

   Chapter | Title of the Chapter | Similarity index (including self-citation) | Similarity index (excluding self-citation) | % of plagiarism after excluding quotes, bibliography, etc.
   1       | INTRODUCTION         | < 2% | < 2% | 1%
   2       | LITERATURE SURVEY    | 2%   | 2%   | 2%
   4       | METHODOLOGY          | 2%   | < 2% | 1%
   Appendices

We declare that the above information has been verified and found true to the best of our knowledge.

Signature of the Candidate        Name & Signature of the Staff (who uses the plagiarism check software)