MACHINE LEARNING
ACKNOWLEDGEMENT
I offer my gratitude to the Lord Almighty, who graced me and helped me all along to
complete the project successfully. To the Lord's glory I dedicate this work.
I owe my sincere thanks to Dr. M. Arularasu, M.E., Ph.D., Principal, TPGIT, for
taking a keen interest in the project and for guiding the work throughout, even under his busy
schedule.
I am also very thankful to Mr. N. Santhosh, MCA, Assistant Professor and Project
Supervisor, Department of Master of Computer Applications, for his valuable guidance and
motivation to complete this project, his efforts to bring out the best in me, his availability
at all times, and his thought-provoking questions and timely suggestions.
I am also very thankful to Mr. S. Kalidasan, MCA, Assistant Professor and Faculty
In-charge, Department of MCA; I have greatly benefited from his invaluable guidance and
his coordinated efforts to bring out the best in me.
I also express my thanks to all the staff members of the Department of MCA, TPGIT,
for their constant support and encouragement, and to all the non-teaching staff members for
their support.
TABLE OF CONTENTS
1 INTRODUCTION
2 SYSTEM ANALYSIS
3 DEVELOPMENT ENVIRONMENT
4 SYSTEM DESIGN
5 MODULE DESCRIPTION
5.1 MODULES
6 SYSTEM IMPLEMENTATION
6.1 CODING
7 SYSTEM TESTING
8 APPENDICES
8.1 SCREENSHOTS
9 CONCLUSION AND FUTURE ENHANCEMENT
9.1 CONCLUSION
REFERENCES
LIST OF FIGURES
INTRODUCTION
The global spread of the COVID-19 pandemic has caused significant losses. One of
the most critical issues facing medical and healthcare departments is detecting COVID-19
promptly. It is therefore of great importance to confirm the diagnosis of a suspected case,
not only to decide the next step of treatment for the patient, but also to reduce the number
of infected people. MRI is one of the most commonly used examination methods because of
its low cost, wide range of application, and speed. It plays a pivotal role in COVID-19
patient screening and disease detection. Because COVID-19 attacks human respiratory
epithelial cells, we can use MRI to assess the health of the patient's lungs. Detecting these
features from MRI has therefore become a top priority.
The deep convolutional neural network has achieved unprecedented development in
image recognition, especially in the field of auxiliary medical diagnosis technology. Neural
networks have been successfully used to identify pneumonia from MRI, achieving
performance better than that of radiologists.
The main objective of using deep learning models is to achieve higher accuracy of
classification with chest X-Ray and CT scan images by separating the COVID-19 cases
from non-COVID-19 cases. The databases of X-Ray and CT scan images are publicly
available in GitHub repository for the purpose of experiments. Both these datasets contain
chest images of COVID-19 and non-COVID-19 individuals. The X-Ray database contains
67 COVID images and the same number of non-COVID images whereas CT scan database
contains 345 COVID images and the same number of non-COVID images. To conduct the
experiment, images are down-sampled to 50×50 dimensions from their original size. The
random subsampling or holdout method is adopted to test the efficacy of the model.
Moreover, this result exhibits more consistency while layers are being changed in CNN
based deep models. To evaluate the framework in a robust and effective way, a number of
evaluation metrics such as classification accuracy, loss, area under the ROC curve (AUC),
precision, recall, F1 score, and the confusion matrix have been used. The values of these
metrics have been determined for different ratios of training and test samples and for
different numbers of layers in the deep model. The model is able to correctly classify chest
X-Ray and CT scan images of COVID-19 cases against non-COVID-19 cases.
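The evaluation metrics listed above can all be computed directly from a binary confusion matrix. The sketch below uses illustrative counts, not the project's actual results:

```python
# Compute accuracy, precision, recall and F1 from a binary confusion
# matrix. The counts passed in below are illustrative placeholders.
def metrics_from_confusion(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                       # sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics_from_confusion(tp=30, fp=5, fn=4, tn=30)
```

The same quantities are what scikit-learn reports via its `classification_report`, so this serves as a sanity check on any library output.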
CHAPTER 2
SYSTEM ANALYSIS
The radiologist checks the MRI, writes a report, and makes a recommendation to the
doctor. An appointment with a radiologist costs time, and because reports are produced by
humans, delays can occur; human error is also possible while writing a report, and a delayed
report can be a serious problem in an emergency. The cost of a radiologist is also high, and
very intensive tests can cost even more.
Chest MRI images are the research object. However, radiologists and experts mainly
interpret images based on personal clinical experience when analysing MRI images.
Usually, different doctors or experts have a different understanding of the same image.
Moreover, the appearance of the same patient's images in different periods is not entirely
consistent, and the conclusions drawn will differ. Also, the workload of interpreting images
is vast, and doctors are prone to misdiagnosis due to fatigue. Therefore, there is an urgent
need for a computer-aided diagnosis system to help radiologists interpret images faster and
more accurately. We use the method of automatically extracting features with deep
convolutional neural networks. This method does not require traditional manual feature
extraction, avoiding complex feature-extraction processes.
RAM          : 8 GB
Processor    : Intel i5 or higher
Hard Disk    : 512 GB (20 GB minimum)
GPU          : 2 GB
Floppy Drive : 1.44 MB
Keyboard     : Standard Windows keyboard
Mouse        : Two- or three-button mouse
Monitor      : SVGA
Flask Framework
Flask is a lightweight WSGI web application framework written in Python. It is
classified as a microframework because it does not require particular tools or libraries.
Applications that use the Flask framework include Pinterest and LinkedIn.
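A minimal Flask app can serve the model's results to a browser; the route name and message below are illustrative stand-ins, not the project's actual pages:

```python
# A minimal Flask app sketch: one route that could later serve the
# model's prediction page. Route and message are illustrative.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "COVID-19 detection demo"

# During development the app can be exercised without running a server:
client = app.test_client()
response = client.get("/")
```

Calling `app.run()` instead of using the test client starts the development server on localhost.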
TensorFlow
TensorFlow is Google Brain's second-generation system. Version 1.0.0 was released
on February 11, 2017. While the reference implementation runs on single devices,
TensorFlow can run on multiple CPUs and GPUs (with optional CUDA and SYCL
extensions for general-purpose computing on graphics processing units).[15] TensorFlow is
available on 64-bit Linux, macOS, Windows, and mobile computing platforms including
Android and iOS.
Its flexible architecture allows for the easy deployment of computation across a
variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to
mobile and edge devices.
TensorFlow computations are expressed as stateful dataflow graphs. The name
TensorFlow derives from the operations that such neural networks perform on
multidimensional data arrays, which are referred to as tensors. During the Google I/O
Conference in June 2016, Jeff Dean stated that 1,500 repositories on GitHub mentioned
TensorFlow, of which only 5 were from Google.[16]
In December 2017, developers from Google, Cisco, RedHat, CoreOS, and CaiCloud
introduced Kube Flow at a conference. Kubeflow allows operation and deployment of
TensorFlow on Kubernetes.
In March 2018, Google announced TensorFlow.js version 1.0 for machine learning in
JavaScript.
In January 2019, Google announced TensorFlow 2.0. It became officially available in
September 2019.
In May 2019, Google announced TensorFlow Graphics for deep learning in computer
graphics.
Tensor processing unit (TPU)
In May 2016, Google announced its Tensor Processing Unit (TPU), an application-
specific integrated circuit (ASIC, a hardware chip) built specifically for machine learning
and tailored for TensorFlow. A TPU is a programmable AI accelerator designed to provide
high throughput of low-precision arithmetic (e.g., 8-bit), and oriented toward using or
running models rather than training them. Google announced they had been running TPUs
inside their data centres for more than a year, and had found them to deliver an order of
magnitude better-optimised performance per watt for machine learning.
In May 2017, Google announced the second-generation TPUs, as well as the
availability of the TPUs in Google Compute Engine. The second-generation TPUs deliver
up to 180 teraflops of performance, and when organised into clusters of 64 TPUs, provide
up to 11.5 petaflops.
In February 2018, Google announced that they were making TPUs available in beta
on the Google Cloud Platform.
TensorFlow Lite
In May 2017, Google announced a software stack specifically for mobile
development, TensorFlow Lite.
Keras
Keras is an open-source software library that provides a Python interface for artificial
neural networks. Keras acts as an interface for the TensorFlow library. Up until version 2.3,
Keras supported multiple backends, including TensorFlow, Microsoft Cognitive Toolkit,
Theano, and PlaidML. As of version 2.4, only TensorFlow is supported. Designed to enable
fast experimentation with deep neural networks, it focuses on being user-friendly, modular,
and extensible. It was developed as part of the research effort of project ONEIROS (Open-
ended Neuro-Electronic Intelligent Robot Operating System), and its primary author and
maintainer is François Chollet, a Google engineer. Chollet is also the author of the Xception
deep neural network model.
Python
Guido van Rossum began working on Python in the late 1980s, as a successor to the
ABC programming language, and first released it in 1991 as Python 0.9.0. Python 2.0 was
released in 2000 and introduced new features such as list comprehensions and a garbage
collection system using reference counting; it was discontinued with version 2.7.18 in
2020. Python 3.0 was released in 2008 and was a major revision of the language that is not
completely backward-compatible, and much Python 2 code does not run unmodified on
Python 3.
History of Python
Python was conceived in the late 1980s by Guido van Rossum at Centrum Wiskunde
& Informatica (CWI) in the Netherlands as a successor to the ABC programming language,
which was inspired by SETL, capable of exception handling and interfacing with the Amoeba
operating system. Its implementation began in December 1989. Van Rossum shouldered sole
responsibility for the project, as the lead developer, until 12 July 2018, when he announced
his "permanent vacation" from his responsibilities as Python's Benevolent Dictator For Life, a
title the Python community bestowed upon him to reflect his long-term commitment as the
project's chief decision-maker. He now shares his leadership as a member of a five-person
steering council. In January 2019, active Python core developers elected Brett Cannon, Nick
Coghlan, Barry Warsaw, Carol Willing and Van Rossum to a five-member "Steering
Council" to lead the project. Guido van Rossum has since withdrawn his nomination for
the 2020 Steering Council.
Python 2.0 was released on 16 October 2000 with many major new features, including
a cycle-detecting garbage collector and support for Unicode.
Python 3.0 was released on 3 December 2008. It was a major revision of the language
that is not completely backward-compatible. Many of its major features were backported to
the Python 2.6.x and 2.7.x version series. Releases of Python 3 include the 2to3 utility,
which automates (at least partially) the translation of Python 2 code to Python 3.
Python's statements include (among others):
● The assignment statement, using a single equals sign =.
● The if statement, which conditionally executes a block of code, along with else and
elif (a contraction of else-if).
● The for statement, which iterates over an iterable object, capturing each element to a
local variable for use by the attached block.
● The while statement, which executes a block of code as long as its condition is true.
● The try statement, which allows exceptions raised in its attached code block to be
caught and handled by except clauses; it also ensures that clean-up code in a finally block
will always be run regardless of how the block exits.
● The continue statement, which skips the rest of the current iteration and continues
with the next item.
● The del statement, which removes a variable: the binding from the name to the value
is deleted, and trying to use that variable afterwards causes an error. A deleted variable can
be reassigned.
● The pass statement, which serves as a NOP. It is syntactically needed to create an
empty code block.
● The assert statement, used during debugging to check for conditions that should
apply.
● The yield statement, which returns a value from a generator function. From Python
2.5, yield is also an operator. This form is used to implement coroutines.
● The return statement, used to return a value from a function.
● The import statement, which is used to import modules whose functions or variables
can be used in the current program. There are three forms: import <module name> [as
<alias>]; from <module name> import *; and from <module name> import
<definition 1> [as <alias 1>], <definition 2> [as <alias 2>], ....
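The three import forms can be sketched as follows; the standard library's math module is used purely as a stand-in:

```python
# The three forms of the import statement, demonstrated with the
# standard library's math module.
import math as m                 # form 1: import <module> [as <alias>]
from math import *               # form 2: import every public name
from math import sqrt as root    # form 3: import specific names [as <alias>]

a = m.sqrt(16)    # qualified access through the alias
b = sqrt(16)      # name pulled in by the star import
c = root(16)      # aliased single definition
```

All three calls reach the same function, so the choice between forms is about namespace hygiene rather than behaviour.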
Python does not support tail call optimization or first-class continuations, and,
according to Guido van Rossum, it never will. However, better support for coroutine-like
functionality is provided in 2.5 by extending Python's generators. Before 2.5, generators
were lazy iterators; information was passed unidirectionally out of the generator. From
Python 2.5, it is possible to pass information back into a generator function, and from
Python 3.3, the information can be passed through multiple stack levels.
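Passing information back into a generator, as described above, uses the generator's send() method. A minimal running-average coroutine:

```python
# A running-average coroutine: values sent in with send() flow back
# into the paused generator, illustrating two-way communication.
def running_average():
    total, count = 0.0, 0
    average = None
    while True:
        value = yield average      # receives what the caller send()s
        total += value
        count += 1
        average = total / count

avg = running_average()
next(avg)                          # prime the coroutine to the first yield
r1 = avg.send(10)                  # average of [10]
r2 = avg.send(20)                  # average of [10, 20]
```

The priming call with next() is required: send() can only deliver a value to a generator that is already paused at a yield.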
Machine Learning
Machine learning involves computers discovering how they can perform tasks without
being explicitly programmed to do so. It involves computers learning from data provided so
that they carry out certain tasks. For simple tasks assigned to computers, it is possible to
program algorithms telling the machine how to execute all steps required to solve the
problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it
can be challenging for a human to manually create the needed algorithms. In practice, it can
turn out to be more effective to help the machine develop its own algorithm, rather than
having human programmers specify every needed step.
Artificial intelligence
The types of machine learning algorithms differ in their approach, the type of data
they input and output, and the type of task or problem that they are intended to solve.
Supervised learning
A support-vector machine is a supervised learning model that divides the data into
regions separated by a linear boundary.
Supervised learning algorithms build a mathematical model of a set of data that
contains both the inputs and the desired outputs.[38] The data is known as training data, and
consists of a set of training examples. Each training example has one or more inputs and the
desired output, also known as a supervisory signal. In the mathematical model, each training
example is represented by an array or vector, sometimes called a feature vector, and the
training data is represented by a matrix.
Through iterative optimization of an objective function, supervised learning
algorithms learn a function that can be used to predict the output associated with new inputs.
An optimal function will allow the algorithm to correctly determine the output for inputs that
were not a part of the training data. An algorithm that improves the accuracy of its outputs or
predictions over time is said to have learned to perform that task.
Types of supervised learning algorithms include active learning, classification and
regression. Classification algorithms are used when the outputs are restricted to a limited set
of values, and regression algorithms are used when the outputs may have any numerical value
within a range. As an example, for a classification algorithm that filters emails, the input
would be an incoming email, and the output would be the name of the folder in which to file
the email.
Similarity learning is an area of supervised machine learning closely related to
regression and classification, but the goal is to learn from examples using a similarity
function that measures how similar or related two objects are. It has applications in ranking,
recommendation systems, visual identity tracking, face verification, and speaker verification.
Self learning
Models
Training models
Usually, machine learning models require a lot of data in order for them to perform
well. Usually, when training a machine learning model, one needs to collect a large,
representative sample of data from a training set. Data from the training set can be as varied
as a corpus of text, a collection of images, and data collected from individual users of a
service. Overfitting is something to watch out for when training a machine learning model.
Trained models derived from biased data can result in skewed or undesired predictions.
Algorithmic bias is a potential result of data not being fully prepared for training.
ResNet
SCIKIT
Scikit-learn (formerly scikits.learn and also known as sklearn) is a free software
machine learning library for the Python programming language.[3] It features various
classification, regression and clustering algorithms including support vector machines,
random forests, gradient boosting, k-means and DBSCAN, and is designed to interoperate
with the Python numerical and scientific libraries NumPy and SciPy.
Overview of SCIKIT
The scikit-learn project started as scikits.learn, a Google Summer of Code project by
David Cournapeau. Its name stems from the notion that it is a "SciKit" (SciPy Toolkit), a
separately-developed and distributed third-party extension to SciPy.[4] The original codebase
was later rewritten by other developers. In 2010 Fabian Pedregosa, Gael Varoquaux,
Alexandre Gramfort and Vincent Michel, all from the French Institute for Research in
Computer Science and Automation in Rocquencourt, France, took leadership of the project
and made the first public release on 1 February 2010.[5] Of the various scikits, scikit-
learn as well as scikit-image were described as "well-maintained and popular" in November
2012. Scikit-learn is one of the most popular machine learning libraries on GitHub.
Implementation
Scikit-learn is largely written in Python, and uses numpy extensively for high-
performance linear algebra and array operations. Furthermore, some core algorithms are
written in Cython to improve performance. Support vector machines are implemented by a
Cython wrapper around LIBSVM; logistic regression and linear support vector machines by a
similar wrapper around LIBLINEAR. In such cases, extending these methods with Python
may not be possible.
Scikit-learn integrates well with many other Python libraries, such as matplotlib and
plotly for plotting, numpy for array vectorization, pandas dataframes, scipy, and many more.
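As a small illustration of scikit-learn's uniform estimator API (fit/predict), here is a linear SVM on a tiny synthetic two-class problem; the data is made up for the sketch and is not the project's dataset:

```python
# Fit a linear support vector machine (LIBLINEAR-backed, as noted
# above) on a toy 2-class problem using scikit-learn's fit/predict API.
import numpy as np
from sklearn.svm import LinearSVC

# Two well-separated clusters: class 0 near the origin, class 1 near (1, 1).
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

clf = LinearSVC()
clf.fit(X, y)
pred = clf.predict([[0.1, 0.0], [1.0, 0.9]])
```

Every scikit-learn estimator exposes the same fit/predict pair, which is what makes swapping classifiers (SVM, random forest, k-NN) a one-line change.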
An epoch elapses when an entire dataset is passed forward and backward through the
neural network exactly one time. If the entire dataset cannot be passed into the algorithm at
once, it must be divided into mini-batches. Batch size is the total number of training samples
present in a single mini-batch. An iteration is a single gradient update (update of the model's
weights) during training. The number of iterations is equivalent to the number of batches
needed to complete one epoch.
So if a dataset includes 1,000 images split into mini-batches of 100 images, it will
take 10 iterations to complete a single epoch.
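The relationship between dataset size, batch size, and iterations per epoch is simple ceiling division:

```python
import math

# Iterations (gradient updates) needed to complete one epoch equals
# the number of mini-batches the dataset splits into.
def iterations_per_epoch(dataset_size, batch_size):
    return math.ceil(dataset_size / batch_size)

n = iterations_per_epoch(1000, 100)   # the example from the text: 10
```

The ceiling matters when the dataset size is not an exact multiple of the batch size: the last, smaller batch still counts as one iteration.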
VGG Neural Networks
While previous derivatives of AlexNet focused on smaller window sizes and strides in
the first convolutional layer, VGG addresses another very important aspect of CNNs: depth.
Let’s go over the architecture of VGG:
Input. VGG takes in a 224x224 pixel RGB image. For the ImageNet competition, the authors
cropped out the centre 224x224 patch in each image to keep the input image size consistent.
Convolutional Layers. The convolutional layers in VGG use a very small receptive
field (3x3, the smallest possible size that still captures left/right and up/down). There are also
1x1 convolution filters which act as a linear transformation of the input, which is followed by
a ReLU unit. The convolution stride is fixed to 1 pixel so that the spatial resolution is
preserved after convolution.
Fully-Connected Layers. VGG has three fully-connected layers: the first two have
4096 channels each and the third has 1000 channels, 1 for each class.
Hidden Layers. All of VGG’s hidden layers use ReLU (a huge innovation from AlexNet that
cut training time). VGG does not generally use Local Response Normalization (LRN), as
LRN increases memory consumption and training time with no particular increase in
accuracy.
The Difference. VGG, while based on AlexNet, has several differences that separate
it from other competing models. Instead of using large receptive fields like AlexNet (11x11
with a stride of 4), VGG uses very small receptive fields (3x3 with a stride of 1). Because
there are now three ReLU units instead of just one, the decision function is more
discriminative. There are also fewer parameters (27 times the number of channels instead of
AlexNet's 49 times the number of channels).
VGG incorporates 1x1 convolutional layers to make the decision function more non-
linear without changing the receptive fields.
The small-size convolution filters allow VGG to have a large number of weight
layers; of course, more layers lead to improved performance. This isn't an uncommon
feature, though: GoogLeNet, another model that uses deep CNNs and small convolution
filters, also appeared in the 2014 ImageNet competition.
Xception Classification
CHAPTER 4
SYSTEM DESIGN
CHAPTER 5
MODULE DESCRIPTION
5.1 MODULES
● Transfer Learning
The images are collected from COVID-19-related papers from medRxiv, bioRxiv,
NEJM, JAMA, Lancet, etc. We first collected 760 preprints about COVID-19 from medRxiv
and bioRxiv, posted from January 19th to March 25th. Many of these preprints report
patient cases of COVID-19, and some of them show CT images in the reports. CT images
are associated with captions describing the clinical findings in the CTs. We used PyMuPDF
to extract the low-level structure information of the PDF files of the preprints and located
all the embedded figures. The quality (including resolution, size, etc.) of the figures was
well preserved. From the structure information, we also identified the captions associated
with the figures. Given these extracted figures and captions, we first manually selected all
CT images. Then, for each CT image, we read the associated caption to judge whether it is
positive for COVID-19. If we could not judge from the caption, we located the text
analysing the figure in the preprint to make a decision. For each CT image, we also
collected the meta information extracted from the paper, such as patient age, gender,
location, medical history, scan time, severity of COVID-19, and radiology report.
5.1.2 Data Preprocessing and Augmentation
Each image had to be preprocessed according to the deep neural network used. There
were two important steps involved: resizing and normalisation. Different neural networks
require images of different sizes according to their defined architecture. ResNet18,
DenseNet121, and MobileNetV2 expect images of size 224 × 224, while InceptionV3 and
Xception require images of size 299 × 299. All the images were also normalised according
to the respective architectures. Adequate training of a neural net requires big data. With less
data available, parameters are poorly estimated, and the learned networks generalise poorly.
Data augmentation solves this problem by utilising the existing data more efficiently. It
helps increase the size of the existing training dataset and helps the model not overfit this
dataset. In this case, there were a total of 1283 images of the normal (healthy) case and 3873
images of the pneumonia case in the training dataset. Out of these, four hundred images
were reserved for optimising the weighted classifier. This dataset was highly imbalanced.
There were already enough images of the pneumonia case; therefore, each image of only the
normal (healthy) case was augmented twice. Finally, after augmentation, there were 3399
healthy chest MRI images and 3623 pneumonia chest MRI images.
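The resizing and normalisation steps can be sketched with NumPy alone; the nearest-neighbour resize and the mean/std constants below are illustrative stand-ins for the per-architecture preprocessing, not the exact values each network uses:

```python
import numpy as np

# Nearest-neighbour resize to the input size a network expects
# (224x224 here), followed by a simple normalisation. The mean/std
# values are illustrative, not the per-architecture constants.
def preprocess(image, size=224, mean=0.5, std=0.25):
    h, w = image.shape[:2]
    rows = np.arange(size) * h // size         # source row for each output row
    cols = np.arange(size) * w // size         # source column for each output column
    resized = image[rows][:, cols]             # nearest-neighbour sampling
    scaled = resized.astype(np.float32) / 255.0
    return (scaled - mean) / std

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # fake scan
out = preprocess(img)
```

In practice a library resize (PIL, OpenCV, or the framework's own loader) with proper interpolation would replace the nearest-neighbour step, but the shape and scaling contract is the same.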
LeCun et al. first used a CNN, in 1989, for handwritten zip code recognition. This is a type
of feed-forward network. The main advantage of a CNN compared to its predecessors is that it is
capable of detecting the relevant features without any human supervision. A series of
convolution and pooling operations is performed on the input image, which is followed by a
single or multiple fully connected layers. The output layer depends on the operations being
performed. For multiclass classification, the output layer is a softmax layer. The main
disadvantage with deeper CNNs is vanishing gradients, which can be solved by using residual
networks introduced in the following section.
In transfer learning, a model that is trained for a particular task is employed as the
starting point for solving another task. Therefore, in transfer learning, pre-trained models are
used as the starting point for specific tasks, instead of going through the long process of
training with randomly initialised weights. Hence, it helps save the substantial computing
resources needed to develop neural network models for these problems. Prior work used
domain, task, and marginal probabilities to propose a framework for better understanding
transfer learning. The domain D was defined as a two-element tuple consisting of the feature
space χ and a marginal probability P(X), where X is a sample data point. Hence,
mathematically, the domain D can be defined as

D = {χ, P(X)}    (1)

Here, χ is the space of all term vectors, xi is the i-th term vector corresponding to some
document, and X is a particular learning sample (X = {x1, ⋯, xn} ∈ χ).
For a given domain D, the task T is defined as

T = {γ, P(Y∣X)} = {γ, η},  Y = {y1, ⋯, yn},  yi ∈ γ    (2)

where γ is the label space and η is a predictive function learned from the feature vector/label
pairs (xi, yi), where xi ∈ χ and yi ∈ γ:

η(xi) = yi    (3)

Here, η predicts a label for each feature vector.
Due to the lack of a sufficient dataset, training a deep-learning-based model for medical
diagnosis problems is computationally expensive, and the results achieved are also not up to
the mark. Hence, pre-trained deep learning models, previously trained on the ImageNet
dataset, were used in this paper. Further, all these pre-trained models were fine-tuned for
pneumonia classification. All the layers of the architectures used were trainable. Further
details related to fine-tuning follow.
All the architecture details used in this paper are discussed in Appendix A. Raw chest
MRI images, after being pre-processed and normalised, were used to train the network. Then,
data augmentation techniques were used to process the dataset more efficiently. All the
layers of the networks used were trainable, and these layers extracted the features from the
images. Some parameters must be set to train the network. Prior work has reported that
stochastic gradient descent (SGD) generalises better than adaptive optimizers; therefore,
SGD was used as the optimizer, and the model was trained for 25 epochs. The learning rate,
the momentum, and the weight decay were set to 0.001, 0.9, and 0.0001, respectively. These
settings were chosen to make sure that the networks were fine-tuned for pneumonia
diagnosis.
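The SGD update with the momentum and weight-decay settings above can be sketched in plain Python for a single scalar weight; the gradient value is illustrative:

```python
# One SGD-with-momentum update step, using the hyperparameters from
# the text: lr=0.001, momentum=0.9, weight decay=0.0001.
def sgd_step(w, grad, velocity, lr=0.001, momentum=0.9, weight_decay=0.0001):
    grad = grad + weight_decay * w          # L2 weight decay term
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w, v = 1.0, 0.0                 # initial weight and zero velocity
w, v = sgd_step(w, grad=0.5, velocity=v)
```

In a framework this is a one-line optimizer configuration (e.g. an SGD optimizer with the same lr, momentum, and weight decay), but the per-weight arithmetic is exactly this.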
Training an Image Using an Algorithm
This is a type of feed-forward network. The main advantage of a CNN compared to
its predecessors is that it is capable of detecting the relevant features without any human
supervision. ConvNets are more powerful than classical machine learning algorithms and
are also computationally efficient. The pixel values are put into numerical arrays based on
their categorised characteristics. These arrays are then put into different nodes in the
network and passed through multiple iterations based on the input given. CNN models are
used for geographical classification in multiple companies which require data to be
classified in a quick and secure way; a CNN almost acts like a filter, removing dust and
separating the features of the images. A series of convolution and pooling operations is
performed on the input image, followed by one or more fully connected layers. The output
layer depends on the operations being performed; for multiclass classification, the output
layer is a softmax layer. During the training process, in order to make the results
reproducible and more convincing, a random seed was fixed, and the data were randomly
divided into a training set and a test set at a ratio close to 9:1. Feature classification is used
for COVID-19 disease detection. The diseased features are extracted and compared against
the healthy lung image. When the lung is healthy and no disease features are classified, the
result is shown as healthy; when there is a disease, which when grey-scaled shows black
spots, the model classifies the image, showing which disease it is and the confidence of the
classification. Classification takes place between two numerical arrays: if the numerical
arrays match, the lung is classified as healthy or diseased, depending upon the dataset
provided. Classification is a simple but relevant procedure which gives a proper result and
is also used, for example, in plant disease detection.
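The seeded 9:1 split described above can be sketched as follows; the file names are placeholders, not the project's actual dataset:

```python
import random

# Reproducible ~9:1 train/test split: fixing the seed makes the
# shuffle, and therefore the split, identical on every run.
def split_dataset(items, test_ratio=0.1, seed=42):
    rng = random.Random(seed)          # fixed seed for reproducibility
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_ratio)
    return shuffled[n_test:], shuffled[:n_test]   # train, test

images = [f"img_{i}.png" for i in range(100)]     # placeholder file names
train, test = split_dataset(images)
```

Using a private Random instance rather than the module-level functions keeps the split independent of any other randomness in the training script.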
CHAPTER 7
SYSTEM TESTING
System testing falls under black box testing. It examines the external workings of the
software from the user's perspective.
Screenshot here…
● Load the COVID and normal images into the ResNet, Xception, VGG, and
Inception models.
APPENDIX
8.1 SCREENSHOTS
Home Page
Screenshot here…
About
Screenshot here…
Preventions
Screenshot here…
News
Screenshot here…
Contact Us
Screenshot here…
Image uploading
Screenshot here…
Screenshot here…
Model Prediction 1
Screenshot here…
Model Prediction 2
Screenshot here…
CHAPTER 9
CONCLUSION AND FUTURE ENHANCEMENT
9.1 CONCLUSION
REFERENCES
[1] Ooi GC, Khong PL, Müller NL, Yiu WC, Zhou LJ, Ho JC, Lam B, Nicolaou S, Tsang
KW (2004) Severe acute respiratory syndrome: temporal lung changes at thin-section CT in
30 patients. Radiology 230(3):836–844
[2] Wong KT, Antonio GE, Hui DS, Lee N, Yuen EH, Wu A, Leung CB, Rainer TH,
Cameron P, Chung SS, Sung JJ (2003) Severe acute respiratory syndrome: radiographic
appearances and pattern of progression in 138 patients. Radiology 228(2):401–406
[3] Xie X, Li X, Wan S, Gong Y (2006) Mining x-ray images of SARS patients. In: Data
Mining. Springer, Berlin, pp 282–294
[4] Huang C, Wang Y, Li X, Ren L, Zhao J, Hu Y, Zhang L, Fan G, Xu J, Gu X, Cheng Z
(2020) Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China.
Lancet 395(10223):497–506
[5] Wang Y, Hu M, Li Q, Zhang XP, Zhai G, Yao N (2020) Abnormal respiratory patterns
classifier may contribute to large-scale screening of people infected with COVID-19 in an
accurate and unobtrusive manner. arXiv preprint arXiv:2002.05534
[6] Koo HJ, Lim S, Choe J, Choi SH, Sung H, Do KH (2018) Radiographic and CT features
of viral pneumonia. Radiographics 38(3):719–739
[7] Li L, Qin L, Xu Z, Yin Y, Wang X, Kong B, Bai J, Lu Y, Fang Z, Song Q, Cao K (2020)
Artificial intelligence distinguishes covid-19 from community acquired pneumonia on chest
ct. Radiology: 200905
[8] Hansell DM, Bankier AA, MacMahon H, McLoud TC, Muller NL, Remy J (2008)
Fleischner society: glossary of terms for thoracic imaging. Radiology 246(3):697–722
[9] Narin A, Kaya C, Pamuk Z (2020) Automatic detection of coronavirus disease (covid-19)
using x-ray images and deep convolutional neural networks. arXiv preprint arXiv:2003.10849
[10] Apostolopoulos ID, Mpesiana TA (2020) Covid-19: automatic detection from x-ray
images utilising transfer learning with convolutional neural networks. Phys Eng Sci Med:1