Bachelor of Technology
in
Information Technology
by
Suryam Kumar, Budh Kishor, Tejasvi Malvi
October, 2023
I certify that
(a) The work contained in this report has been done by me under the guidance of
my supervisor.
(b) The work has not been submitted to any other Institute for any degree or
diploma.
(c) I have conformed to the norms and guidelines given in the Ethical Code of
Conduct of the Institute.
(d) Whenever I have used materials (data, theoretical analysis, figures, and text)
from other sources, I have given due credit to them by citing them in the text
of the thesis and giving their details in the references. Further, I have taken
permission from the copyright owners of the sources, whenever necessary.
Date: October 21, 2023 (Suryam Kumar, Budh Kishor, Tejasvi Malvi)
Place: Bilaspur (20107066 GGV/20/01467, 20107015 GGV/20/01415, 20107070 GGV/20/01471)
DEPARTMENT OF INFORMATION TECHNOLOGY
GURU GHASIDAS VISHWAVIDYALAYA
BILASPUR - 495009, INDIA
CERTIFICATE
This is to certify that the project report entitled “Collaboration Tool Project
Management Android App” submitted by Suryam Kumar, Budh Kishor,
Tejasvi Malvi (Roll No. 20107066 GGV/20/01467, 20107015 GGV/20/01415,
20107070 GGV/20/01471) to Guru Ghasidas Vishwavidyalaya towards partial
fulfilment of the requirements for the award of the degree of Bachelor of Technology
in Information Technology is a record of bonafide work carried out by them under
my supervision and guidance during October, 2023.
Abstract
Name of the students: Suryam Kumar, Budh Kishor, Tejasvi Malvi
Roll No: 20107066 GGV/20/01467, 20107015 GGV/20/01415, 20107070 GGV/20/01471
Degree for which submitted: Bachelor of Technology
Department: Department of Information Technology
Thesis title: Collaboration Tool Project Management Android App
Thesis supervisor: Dr. Rajesh Mahule
Month and year of thesis submission: October 21, 2023
Acknowledgements
We express our sincere gratitude to Dr. Rohit Raja, the Head of the Department of
Information Technology, for giving us this opportunity.
We thank all the esteemed faculty members, Dr. Rohit Raja (HoD), Dr. Amit Khaskalam,
Dr. Rajesh Mahule, Mr. Santosh Soni, Mr. Deepak Netam, Mr. Abhishek Jain,
Mr. Agnivesh Pandey, Mr. Pankaj Chandra, Mr. Suhel Ahamed, Mrs. Akansha Gupta,
and Mrs. Aradhana Soni, for their continued support and effort.
We thank the Dean of the Institute of Technology, Guru Ghasidas Vishwavidyalaya,
for providing all the necessary facilities.
We would also like to thank our mentor, Dr. Rajesh Mahule, Assistant Professor in
the Department of Information Technology, for his time, effort, and assistance during
the course of the semester. We truly appreciated his helpful recommendations and
guidance as we finished the project, and we will always be grateful to him for this.
We deeply appreciate Dr. Rajesh Mahule's unwavering assistance and ongoing
encouragement throughout the project; without his direction and steadfast support,
this report would not have been possible.
We would like to affirm that all of the work on this project was done by us and not
by anyone else.
Contents
Declaration i
Certificate ii
Abstract iii
Acknowledgements iv
Contents v
List of Tables ix
Abbreviations x
5 Model Architecture 23
5.1 Model Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.1.1 Loading Pre-Trained DenseNet121 Model . . . . . . . . . . . . 23
5.1.2 Transfer Learning . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.1.3 Sequential Model Initialization . . . . . . . . . . . . . . . . . . 24
5.1.4 Adding Layers . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
5.1.5 Model Compilation . . . . . . . . . . . . . . . . . . . . . . . . 24
5.2 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5.3 Results Of Model Training . . . . . . . . . . . . . . . . . . . . . . . . 26
A Features Overview 29
A.1 Creating Boards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.2 Including New People . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.3 Video Chat and Calling . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.4 Progress Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
A.5 Task Delegation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Bibliography 31
List of Figures
List of Tables
Abbreviations
UI User Interface
UX User Experience
API Application Programming Interface
SDK Software Development Kit
RTC Real-Time Communication
HTTPS Hypertext Transfer Protocol Secure
FCM Firebase Cloud Messaging
AR Augmented Reality
VR Virtual Reality
ML Machine Learning
JS JavaScript
XML Extensible Markup Language
Chapter 1
Introduction and Foundation
1.1 Introduction
1.2 Purpose
The primary purpose of this study is twofold: firstly, to investigate the effectiveness
of the DenseNet architecture in pneumonia detection, and secondly, to develop a
robust pneumonia detection model through deep learning techniques. By utilizing
the RSNA Pneumonia Detection Dataset, comprising over 26,000 annotated chest
X-ray images, the research seeks to train and fine-tune the model to accurately
classify images and identify cases of pneumonia. The ultimate goal is to contribute
to the advancement of automated pneumonia detection algorithms, facilitating early
diagnosis and intervention.
1.3 Scope
The scope of this research encompasses the development and evaluation of a
pneumonia detection model using the DenseNet architecture and the RSNA Pneumonia
Detection Dataset. The focus lies on leveraging deep learning methodologies to
accurately classify chest X-ray images and identify pneumonia cases. Furthermore,
the research aims to assess the generalizability of the developed model by evaluating
its performance on a held-out test set. This evaluation will provide insights into the
model's ability to detect pneumonia in unseen X-ray images, thus determining its
potential for real-world clinical applications.
1.4 Method
Data Collection: The RSNA Pneumonia Detection Dataset served as the primary
data source, providing a comprehensive collection of chest X-ray images with
pneumonia annotations.
Model Architecture: The DenseNet architecture, renowned for its densely connected
layers and effectiveness in image classification tasks, was chosen as the foundation
for the pneumonia detection model.
Model Training: The model was trained with the objective of optimizing its
performance in pneumonia detection. The Adam optimizer and binary cross-entropy
loss function were utilized for training, suitable for binary classification tasks.
1.5 Objective
Utilization of RSNA Pneumonia Detection Dataset: The project utilizes the RSNA
Pneumonia Detection Dataset, a rich and diverse collection of over 26,000 chest
X-ray images with pneumonia annotations, as the primary data source. By leveraging
this extensive dataset, the objective is to train and fine-tune the pneumonia
detection model to effectively distinguish between normal and pneumonia-affected
X-ray images.
1.6 Foundation
DenseNet Architecture:
– Serves as the foundation of the pneumonia detection model, chosen for its densely
connected layers and effectiveness in image classification.
RSNA Pneumonia Detection Dataset:
– Provides a diverse and comprehensive dataset for model training and evaluation.
– Serves as the primary data source for training and optimizing the pneumonia
detection model.
Chapter 2
Decoding Lung Opacity in Pneumonia: Understanding the Connection
Lung opacity in pneumonia arises from the inflammatory response within the lungs.
When the lung tissue becomes inflamed due to infection, there is an accumulation
of fluid, pus, or inflammatory substances in the air sacs (alveoli) or tissues of the
lungs. This accumulation leads to a decrease in the amount of air in the affected
areas and an increase in density, resulting in the appearance of opacity on imaging.
The extent and severity of lung opacity vary depending on the stage and severity of
pneumonia, as well as the underlying cause of infection.
Radiologists and healthcare providers play a pivotal role in analyzing lung opacity
patterns and distribution for pneumonia diagnosis and monitoring. The presence
and characteristics of lung opacity on imaging can provide valuable diagnostic
information, aiding in the identification of pneumonia and assessment of its severity.
Radiographic findings such as consolidation, airspace opacities, and interstitial
infiltrates are indicative of pneumonia and guide treatment decisions.
Furthermore, the distribution and morphology of lung opacity can help differentiate
between different types of pneumonia, such as bacterial, viral, or fungal. Bacterial
pneumonia often presents with lobar consolidation, while viral pneumonia may
exhibit diffuse interstitial infiltrates. Understanding these patterns is essential for
tailoring treatment regimens and predicting patient outcomes.
Lung opacity in pneumonia has significant clinical implications for patient
management. Healthcare providers use imaging findings to assess treatment response,
track infection progression, and identify complications such as pleural effusion or
abscess formation. Serial imaging studies allow clinicians to monitor changes in lung
opacity over time, guiding adjustments to treatment plans and ensuring appropriate
care for individuals with pneumonia.
Differentiating between various causes of lung opacity is crucial for initiating timely
and targeted interventions, minimizing the risk of misdiagnosis or inappropriate
treatment. Close collaboration between radiologists, pulmonologists, and infectious
disease specialists is essential for accurate diagnosis and optimal patient care.
Chapter 3
Unlocking Insights into Used Dataset
The dataset consists of approximately 30,000 chest X-ray images, each meticulously
annotated to indicate the presence or absence of pneumonia. However, for our
analysis, we have opted to work with a subset of 12,000 images. This subset has
been carefully selected to represent a balanced distribution of pneumonia-affected
and normal (non-pneumonia) images, ensuring a comprehensive representation of
both classes in our analysis.
In order to increase the diversity and robustness of our dataset, we have augmented
the selected subset of 12,000 images. Through augmentation techniques such as
rotation, flipping, and scaling, we have effectively doubled the size of our dataset
to 24,000 images. This augmented dataset provides a more varied and comprehensive
training dataset for our machine learning models, improving their ability to
generalize to unseen data and enhancing their overall performance.
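As an illustrative sketch, such augmentation can be expressed with Keras's ImageDataGenerator. The specific ranges below are assumptions; the report names the techniques (rotation, flipping, and scaling) but not the exact parameters.

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical augmentation settings; the report specifies only the kinds of
# transformations (rotation, flipping, scaling), not these exact values.
augmenter = ImageDataGenerator(
    rotation_range=15,     # random rotations up to +/-15 degrees
    horizontal_flip=True,  # random left-right flips
    zoom_range=0.1,        # random scaling by up to 10%
)

# Demo on a dummy batch of four 224x224 single-channel "X-ray" images.
images = np.random.rand(4, 224, 224, 1).astype("float32")
augmented = next(augmenter.flow(images, batch_size=4, shuffle=False))
```

Each pass over the data yields a differently transformed batch, which is how a 12,000-image subset can effectively act as a larger training set.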
To facilitate effective model training and evaluation, we have partitioned the
augmented dataset into two subsets: a training set and a validation/test set. The
training set comprises 80% of the data and is used to train our machine learning
models, while the remaining 20% constitutes the validation/test set, which is used
to assess the performance of our trained models on unseen data. This partitioning
strategy helps prevent overfitting and ensures the reliability and generalizability of
our models.
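A minimal sketch of such an 80/20 partition, using only the Python standard library (the function name and fixed seed are illustrative, not taken from the project's code):

```python
import random

def partition(items, train_fraction=0.8, seed=42):
    """Shuffle items and split them into training and validation/test subsets."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# With the augmented dataset of 24,000 images, this yields 19,200 training
# samples and 4,800 validation/test samples.
train_set, test_set = partition(range(24000))
```

Fixing the seed makes the split reproducible across runs, which matters when comparing model variants.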
In addition to the chest X-ray images and their corresponding annotations, the
dataset also includes metadata such as patient ID, age, gender, and supplementary
clinical information. This additional metadata provides valuable contextual
information that can be leveraged to gain deeper insights into the relationship between
patient characteristics and the diagnosis of pneumonia. By incorporating these
numerical features into our analysis, we can explore potential correlations and
patterns that may further inform our understanding of pneumonia detection.
Chapter 4
DenseNet-121: Empowering Pneumonia Detection with Dense Connectivity and CNNs
The hallmark of DenseNet-121 is its dense connectivity pattern, which enables direct
connections between all layers within a dense block. Unlike traditional CNN
architectures, where each layer is connected only to the subsequent layer, DenseNet-121
connects each layer to every other layer in a feed-forward fashion. This dense
connectivity promotes feature reuse, facilitates gradient flow, and reduces the number
of parameters, making the model more efficient and powerful.
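This connectivity pattern can be illustrated with a toy dense block; the depth and growth rate below are illustrative, not DenseNet-121's actual configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers

def dense_block(x, num_layers=3, growth_rate=8):
    """Toy dense block: every layer consumes the concatenation of all
    preceding feature maps (sizes here are illustrative)."""
    for _ in range(num_layers):
        # BN -> ReLU -> Conv ordering, as in DenseNet.
        y = layers.BatchNormalization()(x)
        y = layers.ReLU()(y)
        y = layers.Conv2D(growth_rate, 3, padding="same")(y)
        # Dense connectivity: append the new maps to everything seen so far.
        x = layers.Concatenate()([x, y])
    return x

# Channels grow linearly: 16 input channels + 3 layers * growth rate 8 = 40.
inputs = tf.keras.Input(shape=(32, 32, 16))
outputs = dense_block(inputs)
```

Because each layer only adds `growth_rate` new channels while reusing all earlier ones, parameter counts stay modest relative to the effective depth.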
At the end of the network, DenseNet-121 includes a global average pooling layer
followed by a fully connected layer with softmax activation for classification. The
global average pooling layer aggregates the feature maps across spatial dimensions,
producing a single feature vector for each channel. This feature vector is then fed
into the fully connected layer for classification into different classes.
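This pooling step can be seen in isolation. For a 224 × 224 input, DenseNet-121's final feature maps are 7 × 7 with 1024 channels, and global average pooling reduces each 7 × 7 map to a single value:

```python
import numpy as np
import tensorflow as tf

# Final DenseNet-121 feature maps for one 224x224 input: 7x7 spatial, 1024 channels.
feature_maps = np.random.rand(1, 7, 7, 1024).astype("float32")

# Global average pooling: the mean over the two spatial dimensions, per channel,
# producing one 1024-dimensional feature vector per image.
pooled = tf.keras.layers.GlobalAveragePooling2D()(feature_maps)
```

The resulting vector is what the fully connected softmax layer consumes for classification.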
Out of the total 30,000 images in the dataset, the majority, approximately 27,000,
were labeled as normal cases. This indicates a significant class imbalance, with
the normal class dominating the dataset. Conversely, only around 6,000 images
were classified as pneumonia cases, representing a much smaller proportion of the
dataset.
This imbalance in class distribution could have significant implications for model
training and performance. Models trained on imbalanced datasets tend to exhibit
biases towards the majority class, resulting in suboptimal performance in accurately
predicting minority classes. In the context of pneumonia detection, such biases could
lead to the underrepresentation of pneumonia cases and potentially compromise the
model’s ability to accurately identify and classify pneumonia-related abnormalities
in chest X-ray images.
Addressing this class imbalance is crucial to ensure the model's robustness and
effectiveness in real-world applications. Strategies such as class balancing techniques,
dataset augmentation, or specialized training algorithms may be employed to
mitigate the effects of imbalance and improve the model's predictive performance
across all classes.
To mitigate the effects of class imbalance, a decision was made to select an equal
number of images from each class for model training. This involved choosing 6,000
images from each class.

Figure 4.4: Centres of Lung Opacity rectangles (brown) over rectangles (yellow); sample size 2000.
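The equal-per-class selection described above amounts to simple undersampling, which could be sketched as follows (the function and data structure are illustrative, not the project's actual code):

```python
import random

def balance_classes(paths_by_class, n_per_class, seed=42):
    """Randomly undersample each class to n_per_class items and shuffle the result."""
    rng = random.Random(seed)
    balanced = []
    for label, paths in paths_by_class.items():
        # Draw n_per_class items without replacement from this class.
        balanced.extend((path, label) for path in rng.sample(paths, n_per_class))
    rng.shuffle(balanced)
    return balanced

# E.g. 6,000 images drawn from each of the normal and pneumonia classes
# would yield a balanced 12,000-image training pool.
```

Undersampling the majority class trades some data for an unbiased class prior, which the report identifies as the priority here.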
The data is divided into batches using the Keras ImageDataGenerator class. This
class facilitates real-time data augmentation and batch processing during model
training.
• The image size is set to 224 × 224, the batch size is set to 32, and the number
of classes is 2.
• Two instances of the ImageDataGenerator class are created: one for training
data (train_datagen) and one for testing data (test_datagen).
• The rescale parameter is set to 1/255, which normalizes the pixel values of
the images to the range [0, 1].
• For the training data generator, the flow_from_directory() method is called.
It takes the DIRECTORY path, target size (IMG_SIZE, IMG_SIZE), batch size
(BATCH_SIZE), class_mode set to 'categorical' (since there are multiple classes),
and the list of class names (CATEGORIES) as arguments. The subset parameter
is set to 'training', indicating that this generator will be used for training data.
• Similarly, for the testing data generator, flow_from_directory() is called
with the same parameters, except the subset parameter is set to 'validation',
indicating that this generator will be used for testing data.
During model training, the generators are used to load data in batches. Each batch
contains a set number of images (defined by BATCH_SIZE) along with their
corresponding labels. The data is loaded and preprocessed on-the-fly, which allows for
efficient memory usage.
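Collecting the points above, the data pipeline might look like the following sketch. DIRECTORY and CATEGORIES are assumed names, and a single generator with an explicit validation_split is used here so that the 'training' and 'validation' subsets partition the data consistently:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = 224
BATCH_SIZE = 32
CATEGORIES = ["Normal", "Lung Opacity"]  # assumed class-folder names

def make_generators(directory, validation_split=0.2):
    """Build training and validation generators over a one-folder-per-class dataset."""
    # Rescaling normalizes pixel values to [0, 1]; validation_split reserves a
    # fraction of each class for the 'validation' subset.
    datagen = ImageDataGenerator(rescale=1 / 255, validation_split=validation_split)
    train_gen = datagen.flow_from_directory(
        directory,
        target_size=(IMG_SIZE, IMG_SIZE),
        batch_size=BATCH_SIZE,
        class_mode="categorical",
        classes=CATEGORIES,
        subset="training",
    )
    test_gen = datagen.flow_from_directory(
        directory,
        target_size=(IMG_SIZE, IMG_SIZE),
        batch_size=BATCH_SIZE,
        class_mode="categorical",
        classes=CATEGORIES,
        subset="validation",
    )
    return train_gen, test_gen
```

Each generator then yields `(images, labels)` batches on demand during `model.fit()`, keeping only one batch in memory at a time.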
Chapter 5
Model Architecture
The code snippet defines a model using the Sequential API from Keras. The model
architecture is based on the DenseNet121 pre-trained model, with additional layers
added on top.
The DenseNet121 model is loaded using the DenseNet121() function. The include_top
parameter is set to False to exclude the top (fully connected) layers of the model.
Pre-trained weights from the ImageNet dataset are loaded using the weights
parameter. The input shape of the model is specified as (224, 224, 3), representing
RGB images with a size of 224 × 224 pixels.
Next, the first 100 layers of the DenseNet model are frozen by setting their trainable
attribute to False. This allows for transfer learning, where the pre-trained weights
are kept fixed during the initial training.
The DenseNet model is added to the Sequential model using the add() method,
treating it as a single layer within the architecture. A Flatten layer is then added,
which converts the output from the DenseNet model into a 1D vector. A Dense layer
with 64 units and 'relu' activation is added, performing a linear transformation
followed by rectified linear activation. Another Dense layer with 256 units and 'relu'
activation further transforms the output from the previous layer. Finally, a Dense
layer with softmax activation is added to produce the output probabilities for the
classification task.
The model is compiled using the compile() method. The Adam optimizer with a
learning rate of 0.001 is used. The loss function is set to ’categorical crossentropy’
since it is a multi-class classification problem. The accuracy metric is specified to
evaluate the model’s performance during training.
5.2 Summary
In summary, the model is based on the DenseNet121 architecture with some
modifications. It includes multiple Dense and Flatten layers, followed by a softmax
output layer. The model is compiled with the Adam optimizer and categorical
cross-entropy loss function. During training, the model utilizes batches of data in
an iterative manner, with each batch contributing to the optimization of the model's
parameters. This approach allows for efficient utilization of computational resources
and enables the training of models on large datasets.
5.3 Results Of Model Training
The code snippet demonstrates the training of a deep learning model for pneumonia
detection using the DenseNet121 architecture. Here’s a detailed explanation of the
training process:
• Epoch Number: Indicates the current epoch number out of the total
number of epochs.
• Batch Processing: The training data is divided into batches, and each
batch is processed sequentially. The number of batches processed in the
current epoch is displayed.
Overall, the training process involves iteratively optimizing the model’s parameters
to minimize the loss function and improve its ability to classify pneumonia cases. By
monitoring the training progress and validation performance, researchers can assess
the effectiveness of the model and make informed decisions to refine the training
process and enhance the model’s performance.
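The epoch and batch mechanics can be demonstrated on a miniature stand-in, a toy model on synthetic data compiled the same way (Adam with categorical cross-entropy), rather than the actual DenseNet training run:

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 64 samples, 8 features, 2 one-hot classes.
x = np.random.rand(64, 8).astype("float32")
y = tf.keras.utils.to_categorical(np.random.randint(0, 2, size=64), num_classes=2)

# Toy model, compiled as in the report: Adam + categorical cross-entropy.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(4, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Two epochs with batches of 16 -> 4 batches per epoch; loss and accuracy
# are recorded once per epoch in history.history.
history = model.fit(x, y, batch_size=16, epochs=2, verbose=0)
```

The `history` object is what allows the training and validation curves mentioned above to be monitored across epochs.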
Appendix A
Features Overview

A.1 Creating Boards
Description: Using this tool, users may arrange tasks and projects visually on boards.
Use: Users may create boards to efficiently organise and manage their tasks.

A.2 Including New People
Description: This feature allows users to create project teams and invite and add
new members.
Use: By inviting colleagues, stakeholders, or clients, users may promote cooperation.

A.3 Video Chat and Calling
Description: This feature makes it easier to communicate visually in real time within
the programme.
A.4 Progress Monitoring
Description: This feature gives users the ability to monitor the status and progress
of tasks and projects.
Use: Users are able to track project milestones, spot bottlenecks, and ensure
on-time completion.