


COPYRIGHTS

2021 IEEE International Biomedical Instrumentation and Technology Conference (IBITeC)


Copyright ©2021 by Institute of Electrical and Electronics Engineers, Inc. All rights reserved.

Copyright and Reprint Permission: Abstracting is permitted with credit to the source. Libraries
are permitted to photocopy beyond the limit of U.S. copyright law for private use of patrons
those articles in this volume that carry a code at the bottom of the first page, provided the per-
copy fee indicated in the code is paid through Copyright Clearance Center, 222 Rosewood
Drive, Danvers, MA 01923. For reprint or republication permission, email to IEEE Copyrights
Manager at pubs-permissions@ieee.org. All rights reserved. Copyright ©2021 by IEEE.

IEEE Catalog Number : CFP21TEG-ART


ISBN : 978-1-6654-4179-7


2021 IEEE International Biomedical Instrumentation and Technology Conference (IBITeC)

“The Improvement of Healthcare Technology To Achieve Universal Health Coverage”

20 – 21 October 2021, Virtual Meeting

ISBN: 978-1-6654-4179-7

WELCOME MESSAGE FROM DEAN

Distinguished guests and participants,

Assalamu’alaikum warahmatullahi wabarakatuh

Firstly, let us thank Allah Almighty, who has given us His blessings and mercies so we can gather
today, in good health and spirits.

On behalf of the Faculty of Industrial Technology, Universitas Islam Indonesia (UII), I welcome our
speakers and participants.

The development of technology has become very rapid with the advent of Industry 4.0, and almost all
aspects of our lives have been influenced by it.
In the field of health, technology and information systems have become vital tools to discuss. The
demands of the health community must be followed by the development of the technology used. Healthcare
technology is one of the technologies most affected by global regulations. Because technology is the only
instrument to generate added value, mastery of and the ability to create technology become crucial issues.
Also, the application of information technology in the field of health is believed to provide various
benefits for health care providers. With the support of these technologies, the benefits that can be obtained include
the availability of accurate and comprehensive patient health information so that professionals can provide the
best possible treatment.

To develop health technology, scientific meetings are needed as a means for sharing, disseminating, and
communicating between practitioners, researchers, government agencies, non-governmental institutions, and
industry.

On this occasion, the Electrical Engineering Department, Faculty of Industrial Technology, Universitas
Islam Indonesia held The International Biomedical Instrumentation and Technology Conference, IBITeC 2021.
This seminar is the second conference organized by the Electrical Engineering Department and co-organized by
Diponegoro University (UNDIP) and Universiti Teknologi Malaysia (UTM). We hope this activity can provide an
exchange of knowledge for researchers, practitioners, students, and lecturers to improve their abilities.

To our speakers and all those who support the seminar, we thank you for your cooperation in conducting
this seminar. Finally, our congratulations on attending the seminar. Hopefully, what we achieve here will benefit
institutions and society as a whole.

Wassalamu’alaikum warahmatullahi wabarakatuh

Dean,

Faculty of Industrial Technology

Prof. Dr. Ir. Hari Purnomo, M.T

WELCOME MESSAGE FROM CONFERENCE CHAIR

Distinguished guests, respected colleagues, ladies, and gentlemen,

Assalamu’alaikum. All praise is for Allah, who guided us to do good deeds and gave us the health bounty.
On behalf of the 2nd International Biomedical Instrumentation and Technology Conference (IBITeC) 2021
Committee, I would like to welcome you to this biennial conference held by the Department of Electrical
Engineering, Faculty of Industrial Technology, Universitas Islam Indonesia, Yogyakarta. This conference is co-
sponsored by IEEE Communication Society Indonesia Chapter, and co-organized by the Center for
BioMechanics, Bio-Materials, Bio Mechatronics, and Bio Signal Processing (CBIOM3S) of Diponegoro
University (UNDIP), Universiti Teknologi Malaysia (UTM), and UII IEEE Student Branch. The goal of this
conference is to facilitate researchers, practitioners, students, and lecturers around the world to publish, explore
and share their latest research in Biomedical Engineering and related fields in Biomedical Sensors Development,
Biomedical and Informatics, Biomedical Imaging, Internet of Things (IoT) and Healthcare Information System
with its associated topics. This year’s theme is “The Empowerment of Healthcare Technology to Achieve
Universal Health Coverage.”

The committee is delighted with the positive response of researchers to this conference. We received 50
submissions from Germany, France, Portugal, Morocco, Malaysia, Vietnam, China, India, Pakistan, Iraq, and our
own Indonesia. The papers were peer-reviewed by our reviewers from several countries to maintain the quality of
this conference. The acceptance rate of the 2nd IBITeC 2021 is 58%. All accepted and presented papers will be
forwarded for consideration to be published in the IEEE Xplore Digital Library and indexed by Indexing Service
Partners (Scopus, INSPEC, Semantic Scholar, EBSCO, and others that are available/eligible).

We are grateful for the contributions of our invited speakers. We will have three keynote speakers that
we believe could spread the new insight for biomedical engineering disciplines to follow the industry 4.0 needs.
The organization of a conference is very much a team effort. I want to thank all committees, editorial team, event-
organizer, reviewers, and other parties who have carried a vast and complicated workload. The 2nd IBITeC 2021
strives to offer plenty of opportunities, especially networking. Authors and participants have the chance to meet
and interact with each other to share and transfer their knowledge in similar fields. We hope that you can benefit
from this conference during discussions and, most importantly, networking among our peers. We hope that this
conference will provide unforgettable moments and experiences. Thank you. Wassalamu’alaikum.

Yogyakarta, October 2021

Firdaus, Ph.D

Conference Chair of The 2nd IBITeC 2021

TABLE OF CONTENTS

COPYRIGHT STATEMENT
TITLE PAGE
WELCOME MESSAGE FROM DEAN
WELCOME MESSAGE FROM CHAIR
AUTHORS INDEX
Quaternion-based AHRS with MEMS Motion Sensor for Biomedical Applications
Stress Effect on Attention Level Detection Using Neurosky Mindwave Headset
Cortical Functional Connectivity by Partial Directed Coherence In Relation To Create Thinking: A Case Study
Study of effect of silver nanoparticles on the electrical excitability of hippocampal neuronal network based on “Voltage Threshold Measurement Method”
Texture Analysis of Ultrasound Images to Differentiate Pneumonia and Covid-19
Green Synthesis of Silver Nanoparticles using Citrus aurantifolia (Bangladeshi Lemon Leaf) Extract and Its Antibacterial Activity
Preparation and Evaluation of Folic Acid-TPGS Polymeric Micelle as a Quercetin Anticancer Drug Carrier
Characterization of Silver Resistive Electrode as Heating Module for Portable Thermocycler Device
Hip Joint Implant Loading During Routine Activities of Elderly Indonesian
Optimization of Insole Shoe for Diabetic Mellitus Type 2 Using Finite Element Analysis
Analysis and detection of COVID-19 cases on chest X-ray images using a novel architecture self-development deep-learning
A Review on Opportunities and Challenges of Machine Learning and Deep Learning for Eye Movements Classification
Fetal Heart Rate Detection Algorithm from Noninvasive Fetal Electrocardiogram
Performance Improvement of Breast Cancer Diagnosis based on Mammogram Images using Feature Extraction and Classification Methods
Classification of Markerless 3D Dorsal Shapes in Adolescent Idiopathic Scoliosis Patients Using Machine Learning Approach
Combating Bias in COVID-19 Disease Detection Using Synthetic Annotations on Chest X-Ray Images
A Hybrid Risk Assessment Kit by Using Observing and Instrument Methods
The Statistical Analysis of Urodynamic Parameters with Different Stress Urinary Incontinence
Pressure Ulcer Prediction, Prevention and Assessment Using Biomedical System Design
GGB Color Normalization and Faster-RCNN Techniques for Malaria Parasite Detection
Computer-Aided Design for Analyzing the Influence of Single Radius on Flexion Angle of Artificial Knee Joint
Performance Evaluation of a New Dry-Contact Electrode for EEG Measurement
Conceptual Design of Bionic Foot for Transtibial Prosthesis
A Computational Stress Analysis of Active Prosthetic Hand “Asto Hand V4” for the Loaded Hook Position
Experiment on Deep Learning Models for COVID-19 Detection from Blood Testing
Effect of Chest X-Ray Contrast Image Enhancement on Pneumonia Detection using Convolutional Neural Networks
Classification of the Degree of Dehydration in Diarrhea Using the Naïve Bayes Algorithm

----- CONFERENCE PAPERS-----

AUTHOR INDEX

A. P. Bayuseno 52, 114


Aaisha Diaa-Aldeen Abdullah 119
Ade Reza Ismawan 124
Agung W. Setiawan 142
Agusta Rakhmat Taufani 148
Anh Tu Tran 59
Ardimas Andi Purwita 88, 136
Ardiyansyah Yatim 40
Arief Ruhullah Harris 7
Arkaan Nofarditya Ashadi 88
Auns Q. Al-Neami 104, 119
B. Setiyana 124
Bhuiyan Mohammad Mahtab Uddin 29
Chen Meng 19
D. Darmanto 114
Dabasish Kumar Saha 29
Donny Danudirdjo 71
Dwi Basuki Wibowo 46
Dwiajeng Puspita Ratri 148
E. Juliastuti 24
Elvira Sukma Wahyuni 77
Fiqhy Bismadhika 136
Gilar Pandu Annanto 130
Gunarajulu Renganathan 83
Haider Khaleel Raad 104
Hanung Adi Nugroho 109
Hasballah Zakaria 71
Hasna Nadila 71
Hassan Hosseinzadeh 29
Hassanain Ali Lafta 99
Hayder Hadi Mohammed 99
Hoang Nhut Huynh 59
Ilham A. E. Zaeni 148
Ismoyo Haryanto 46, 130
Jamari 52, 114
Janani Arivudaiyanambi 83
Joy James Costa 29
Julia Bt Mohd Yusof 13
Kun Hou 19
L. P. Pradipta 52
Laith Amer Al-Anbary 99
Luís Brito Palma 1
M. Ariyanto 124

Maheza Irna Mohd Sali 7
Minh Bao Pham 59
Mohammed Alfaqawi 93
Mouna Ben Mabrouk 93
Muhammad Ainul Fikri 65
Norhana Jusoh 33
Norjihada Izzah Ismail 33
Norlaili binti Mat Safri 13
Nunung Nurul Qomariyah 88, 136
P. K. Fergiawan 52
P. W. Anggoro 52, 114
Paulus Insap Santosa 65
Puspa Inayat binti Khalid 13
Quoc Tuan Nguyen Diep 59
R. Novriansyah 114, 124
Rania Al-Ashwal 7
Rayhan Ammarsyah 40
Retno Paras Rasmi 77
Rifky Ismail 46, 114, 124, 130
Rilo Berdin Taqriban 46
Rizki Nurfauzi 109
Roshida binti Abdul Majid 13
Rui Azevedo Antunes 1
Saeful Bahri 24
Saif Abdulmohsin A 99
Sasa Cukovic 83
Shihab Uddin Al Mahmud 29
Siti Fatimah Ibharm 33
Stéphane Davai 93
Suatmi Murnani 77
Sunu Wibirama 65
Suprijanto 24
Syafwendra Syafril 7
T. Prahasto 124
Trung Nghia Tran 59
Xiao-Ying Lü 19
Yan Huang 19
Yudan Whulanza 40
Zeena Sh. Saleh 104
Zhi-Gong Wang 19
Zubair Ahmed Ratan 29
Analysis and detection of COVID-19 cases on chest X-ray images using a novel architecture self-development deep-learning
Hoang Nhut Huynh
Dept. of Biomedical Engineering, Faculty of Applied Science, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University Ho Chi Minh City (VNUHCM), Ho Chi Minh City, Vietnam
nhut.huynhvlkt@hcmut.edu.vn

Quoc Tuan Nguyen Diep
Doctor Quoc Ltd., Ho Chi Minh City, Vietnam
edennguyen0511@gmail.com

Minh Bao Pham
Dept. of Biomedical Engineering, Faculty of Applied Science, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University Ho Chi Minh City (VNUHCM), Ho Chi Minh City, Vietnam
bao.pham1811539@hcmut.edu.vn

Anh Tu Tran
Laboratory of General Physics, Faculty of Applied Science, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University Ho Chi Minh City (VNUHCM), Ho Chi Minh City, Vietnam
tranatu@hcmut.edu.vn

Trung Nghia Tran
Laboratory of Laser Technology, Faculty of Applied Science, Ho Chi Minh City University of Technology (HCMUT), Vietnam National University Ho Chi Minh City (VNUHCM), Ho Chi Minh City, Vietnam
corresponding author: ttnghia@hcmut.edu.vn

Abstract— The recent ongoing pandemic coronavirus disease 2019 (COVID-19) is growing increasingly out of control globally, posing a severe threat to human health. The use of artificial intelligence (AI) in predicting COVID-19-positive individuals is becoming a promising tool that may enhance the existing diagnosis modalities. Algorithms supporting the classification of chest X-ray images face challenges in terms of dependability. With this aim, a convolutional neural network (CNN) model, FASNet, is proposed to identify chest X-ray images of three distinct conditions: pneumonia, COVID-19, and normal (or healthy) cases. The FASNet model consists of four convolution layers and two fully connected layers. Pre-trained deep learning models were used and included in our self-developed FASNet CNN model. In the first convolutional layer, a kernel of size 1×1 is used on each pixel as a fully connected connection with the aim of reducing the channel depth and the number of parameters of the model. An early-stopping class and a dropout layer are used to limit the number of neural connections and prevent overfitting. The dataset for this study was derived from an open-source collection of 6,432 images for training and testing. As a result, our approach successfully detected COVID-19 infected individuals, pneumonia, and healthy ones with 98.48% accuracy. These promising preliminary results lead us to expect that the FASNet model can be used in further development research to assist in diagnosing COVID-19. The results of the FASNet model correlate highly with those of other popular models such as ResNet50V2 and MobileNetV2.

Keywords—chest X-ray image, COVID-19, CNN, deep learning, FASNet

I. INTRODUCTION

Since early December 2019, the ongoing coronavirus disease 2019 (COVID-19) has created an unprecedented situation all over the world. Following the large-scale COVID-19 outbreak this year, especially in low-middle income countries, the COVID-19 pandemic has caused national medical systems to collapse through the lack of resources and the simultaneous high demand for intensive care units [1]-[6]. COVID-19 mainly damages the respiratory system and then may cause damage to other systems.

COVID-19 identification currently relies on the real-time reverse transcriptase polymerase chain reaction (RT-PCR) and serological testing [6]. In low-middle income countries, the shortage of the required materials and of specialized personnel to perform these tests mainly leads to the disease spreading widely. On the other hand, RT-PCR is time-consuming and has poor sensitivity and high false-positive rates [6]-[12]. Computed tomography (CT) scans and X-rays of the chest are also used to diagnose and identify COVID-19 infected individuals [6], [11], [12]. These are hospital-accepted techniques for diagnosing COVID-19 patients. However, they require specialized personnel to perform, assess, and interpret the images; this work needs experience and is difficult to carry out with the naked eye. Hence, an alternative method to support the diagnosis of COVID-19 is an urgent and crucial need.

The use of artificial intelligence (AI) to assess and interpret medical diagnostic images for COVID-19 patients is a promising method [6]. Moreover, it can be a practical tool in tracking the coronavirus's spread, identifying high-risk patients, adequately analyzing patients' previous data, and predicting the risk of death [13]. Thus, many studies have focused on CT images and X-ray images using machine learning (ML) and deep learning (DL) [14]-[23].

Chest X-ray imaging may play an important role in a rapid clinical assessment, as the machine is normally already available in the clinical unit. In comparison with CT imaging, X-ray imaging is faster, easier, cheaper, less harmful, and produces a digital result. Chest X-ray imaging can identify COVID-19 infected individuals early and separate them from those with other respiratory diseases.

In this paper, a self-developed convolutional neural network (CNN) model is proposed to categorize chest X-ray images of healthy, pneumonia, and COVID-19 patients in order to aid clinical diagnosis and reduce misinterpretation. The transfer learning technique and a dense neural network were employed in our model. Instead of training an entirely new neural network, this technique allowed lower-level weights to be reused for optimizing higher-level weights, compensating for the limited input data and saving computation time. With a view towards contributing a little effort to averting the COVID-19 pandemic, our CNN model should perform well in categorizing chest X-ray images into the groups COVID-19, pneumonia, and normal (healthy). This work was designed so that predictions can be made using chest X-ray images of any size. Public datasets of chest X-ray images were used for testing and evaluating the model. Our model was also benchmarked against popular pre-trained CNN models, i.e., ResNet50V2 and MobileNetV2. Our CNN model can also detect the location of COVID-19 damage zones in chest X-ray images.

II. FASNET AND PRE-TRAINED CNN MODEL

A. FASNet model

Our novel CNN was designed to better classify individuals with COVID-19, pneumonia, and normal cases from their chest X-ray images, after experimenting with hundreds of parameter combinations (e.g. dense layers, filters, convolution layers, normalizations, and max pooling). The proposed CNN was named "FASNet", where "FAS" stands for "Faculty of Applied Science". FASNet is made up of four convolutional layers and a fully connected linear layer, as shown in Fig. 1.

Fig. 1. Schematic of FASNet model architecture

The FASNet model shown in Fig. 1 consists of four convolutional layers and a fully connected linear layer. At the end of each convolutional layer, an additional intermediate max-pooling layer was added. The input dimension of FASNet is (150, 150, 3) and the output dimension is (512, 3).

1) First convolutional layer: The first layer consists of 64 filters. The kernel size is 1×1 with stride 1, and (150, 150, 3) is the size of the input image. The batch normalization function is coupled with a non-linear Rectified Linear Unit (ReLU) activation function to decrease the number of training parameters and control overfitting. A pooling layer with a maximum pooling size of 4×4 follows; the feature map is downsampled to (37, 37, 64) by this max-pooling layer.

2) Second convolutional layer: The second layer has a similar structure to the first layer. To prevent altering the dimension of the output feature map, the kernel size is set to 3×3 with stride 1. To maintain the stability of the neural network, the batch normalization function is employed for standardization, together with the non-linear ReLU activation, at this layer. To downsample the feature map to (17, 17, 128), a max-pooling layer with kernel size 2×2 and stride 2 is used.

3) Third convolutional layer: In the third convolutional layer the number of filters is increased to 128, combined with max-pooling and a non-linear ReLU activation layer. The result is (7, 7, 128).

4) Fourth convolutional layer: The output is upgraded in the fourth layer, increasing the number of convolution filters to 256 while keeping the max-pooling and non-linear ReLU activation layer the same as in the previous layers, converting the feature output to (2, 2, 512). This output map is fed into the linear layer and used for feature extraction.

5) Fully connected layer: In this layer, the feature map is flattened to a vector of 2,048 values and then fed into the final linear layer with 512 nodes. Softmax activation is used to categorize the three picture classes in FASNet's final output: COVID-19, pneumonia, and normal.

The FASNet model includes a total of 1,752,003 trainable parameters and no non-trainable parameters.

B. Pre-Trained CNN model

Twenty-four pre-trained models were first trained alongside our FASNet to detect subjects infected with COVID-19, subjects with pneumonia, and normal (healthy) subjects. Fig. 2 and Fig. 3 show the accuracy and the training time of these 24 models after one epoch, respectively. Two models, ResNet50V2 and MobileNetV2, achieved the most outstanding performance among the 24 models, as shown in Fig. 2 and Fig. 3.

Fig. 2. Accuracy of 24 models after one epoch

Fig. 3. Training time of 24 models after one epoch

Therefore, ResNet50V2 and MobileNetV2 were selected for benchmarking the performance of our FASNet.
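For readers who want to experiment with an architecture along these lines, the following is a minimal Keras sketch assembled from the layer description above (a 1×1 convolution with 64 filters, 4×4 max pooling, a 3×3 convolution with 128 filters, 2×2 max pooling, deeper blocks, a 512-node dense layer, and a three-way softmax). The later blocks and the resulting feature-map shapes are not fully specified in the text, so this is an approximation under stated assumptions rather than the authors' exact FASNet implementation.

```python
# Approximate FASNet-style classifier in Keras (not the authors' exact code).
# Layer sizes follow Section II-A where given; the third and fourth blocks are
# assumptions inferred from the reported output shapes.
from tensorflow.keras import layers, models

def build_fasnet_like(input_shape=(150, 150, 3), num_classes=3):
    model = models.Sequential([
        # Block 1: 1x1 convolution (channel mixing), batch norm, ReLU, 4x4 pooling.
        layers.Conv2D(64, (1, 1), strides=1, input_shape=input_shape),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.MaxPooling2D(pool_size=(4, 4)),          # -> roughly (37, 37, 64)

        # Block 2: 3x3 convolution, batch norm, ReLU, 2x2 pooling.
        layers.Conv2D(128, (3, 3), strides=1, padding="same"),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),          # -> roughly (18, 18, 128)

        # Blocks 3 and 4: deeper features (filter counts assumed from the text).
        layers.Conv2D(128, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(256, (3, 3), padding="same", activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),

        # Classifier head: flatten, 512-node dense layer, dropout, 3-way softmax.
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    return model

model = build_fasnet_like()
model.summary()   # the parameter count will differ from the 1,752,003 reported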

III. METHOD AND MATERIALS

A. Dataset

The dataset for this study was collected from the Kaggle database [24] and contains chest X-ray images of three groups: COVID-19, pneumonia, and healthy cases. The raw dataset consists of 6,432 chest X-ray pictures; 70% of the pictures are utilized for training, 20% for validation, and 10% for testing.

There are 1,782 images of COVID-19 infection in the Kaggle database, which is fewer than the pneumonia and healthy cases, as shown in Fig. 4. Because the dataset is imbalanced, with fewer COVID-19 images than pneumonia and healthy pictures for training, the model may identify pneumonia and healthy subjects rather than COVID-19. In this scenario, the overall accuracy becomes high but does not represent the accuracy of COVID-19 detection. That is not our primary objective in this research, which is instead to diagnose COVID-19 instances properly and avoid mishandling them. To overcome this difficulty, the data was re-organized in order to balance the three cases and ensure that the number of images in each case is almost equal. This helps to increase COVID-19 case detection while also balancing overall accuracy.

Fig. 4. Representative chest X-ray images from the Kaggle database. Image size is (224, 224).

Table I shows the distribution of images in the dataset for training, validation, and testing with FASNet. Table II shows the parameters for enhancing the dataset.

TABLE I. DISTRIBUTION OF CHEST X-RAY IMAGE DATASET
Patient case | Training | Validation | Testing | Total
Normal       | 1250     | 400        | 200     | 1850
COVID-19     | 1250     | 300        | 232     | 1782
Pneumonia    | 2000     | 500        | 300     | 2800

TABLE II. THE PARAMETERS FOR ENHANCING THE DATASET
Parameter          | Value
Rotation range     | 30
Width shift range  | 0.20
Height shift range | 0.20
Zoom range         | 0.15
Horizontal flip    | True
Fill mode          | Nearest

B. Proposed algorithm

Step 1 - Preprocessing: All chest X-ray images are resized to 150×150 pixels. The image data generator function was used to enhance the training data. To make the model more generalizable, the model encounters each image just once during training.

Step 2 - Model training with the pre-trained models and FASNet: Table III shows the training parameters. The categorical cross-entropy loss function is used to optimize the values of the model parameters. The learning rate was optimized depending on the number of epochs, calculated with the PyTorch library module during the model training phase [25]. Adaptive moment estimation (Adam) is a gradient-based optimization method for stochastic objective functions. This optimizer was selected because it is straightforward to implement, computationally efficient, less time-consuming, and requires little memory to process [26]. The pre-trained models are deployed with the Keras library on Tesla P100 GPUs and 25 GB of RAM powered by Google Colab Notebooks [27].

TABLE III. THE PARAMETERS FOR TRAINING THE MODELS
Parameter          | Value
Input shape images | (150, 150, 3)
Learning rate      | 1e-4
Batch size         | 32
Dropout            | 0.50
Number of epochs   | 20
Optimizer          | Adam
Loss               | Categorical cross-entropy
Metrics            | Accuracy

Step 3 - Flattening: Reduce the number of dimensions by flattening.

Step 4 - Dense layer: The dense layer is a neural network layer that is deeply connected, which means each neuron in the dense layer receives input from all neurons of its previous layer. The dense layer performs the following operation on the input X and returns the output:

Z = W·X + b,    (1)

where W is the weight matrix, b is the bias, and the activation is A = ReLU(Z).

Step 5 - Activations: The non-linear ReLU function is used because it does not activate all of the neurons simultaneously; a neuron is only deactivated if the linear transformation output is less than zero. The softmax function is used in the final layer to classify the three cases COVID-19, pneumonia, and normal (healthy) [27]:

softmax(z_i) = exp(z_i) / Σ_j exp(z_j).    (2)
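As a rough illustration of the procedure in Steps 1 and 2, the augmentation settings of Table II and the training parameters of Table III map onto Keras roughly as follows. The directory layout, the 1/255 rescaling, the early-stopping patience, and the placeholder model are assumptions for the sketch (any Keras model with a three-way softmax output, such as the FASNet-style sketch in Section II, could be substituted); this is not the authors' published training script.

```python
# Hypothetical training setup following Tables II and III (paths are placeholders;
# the actual Kaggle dataset layout may differ).
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import EarlyStopping

# Placeholder model with a 3-way softmax; substitute the FASNet-style network.
model = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(3, activation="softmax"),
])

# Step 1: preprocessing and augmentation (Table II).
train_gen = ImageDataGenerator(
    rescale=1.0 / 255,          # rescaling is an assumption, not stated in the text
    rotation_range=30,
    width_shift_range=0.20,
    height_shift_range=0.20,
    zoom_range=0.15,
    horizontal_flip=True,
    fill_mode="nearest",
)
val_gen = ImageDataGenerator(rescale=1.0 / 255)

train_data = train_gen.flow_from_directory(
    "data/train", target_size=(150, 150), batch_size=32, class_mode="categorical")
val_data = val_gen.flow_from_directory(
    "data/val", target_size=(150, 150), batch_size=32, class_mode="categorical")

# Step 2: compile and fit with the Table III parameters; early stopping limits
# overfitting as described in the text.
model.compile(optimizer=Adam(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
early_stop = EarlyStopping(monitor="val_loss", patience=3, restore_best_weights=True)
model.fit(train_data, validation_data=val_data, epochs=20, callbacks=[early_stop])
```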

C. Classification performance metrics

The classification performance of the deep learning model is quantitatively evaluated through the results of the COVID-19, pneumonia, and healthy diagnoses in chest X-ray images. The essential parameter for assessing categorization performance is accuracy: the number of accurate classifications divided by the total number of images in the dataset.

TABLE IV. CONFUSION MATRIX
Actual cases | Predicted: chest disease | Predicted: no chest disease
Positive     | True positive (TP)       | False negative (FN)
Negative     | False positive (FP)      | True negative (TN)

Table IV illustrates a basic 2×2 confusion matrix, estimated based on hypothesis testing and cross-validation [28]. The actual label of the tested image is compared with the classification prediction using the four following indicators:

• True positive (TP): number of correctly identified COVID-19 and pneumonia X-ray images;
• True negative (TN): number of correctly identified healthy X-ray cases;
• False positive (FP): number of incorrectly identified healthy X-ray cases;
• False negative (FN): number of incorrectly classified COVID-19 and pneumonia X-ray images.

In addition, other evaluation indicators are used, as shown in Eqs. (3)-(6):

• Precision (PRE): also known as the positive predictive value;
• Recall: the ratio of correctly predicted positive observations to all observations in the actual class;
• F1-score: the harmonic mean of precision and recall (sensitivity).

Accuracy = (TP + TN) / (TP + FP + FN + TN)    (3)
Precision = TP / (TP + FP)    (4)
Recall = TP / (TP + FN)    (5)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)    (6)

IV. EXPERIMENTAL RESULTS

A. ResNet50V2 model

Fig. 5. ResNeXt model architecture

The design of ResNet50V2 is comparable to that of previous neural networks, with convolution, pooling, activation, and fully connected layers. The residual block utilized in the network is shown in Fig. 5. The accuracy of the model's performance rating is 93.97%. Table V shows the results of the ResNet50V2 model's testing.

TABLE V. RESULTS WITH TESTING DATA (RESNET50V2)
Label         | Precision | Recall | f1-score | Support
COVID-19      | 0.93      | 1.00   | 0.96     | 26
Pneumonia     | 1.00      | 0.75   | 0.86     | 20
Normal        | 0.87      | 1.00   | 0.93     | 20
Accuracy      |           |        | 0.92     | 66
Macro avg.    | 0.93      | 0.92   | 0.92     | 66
Weighted avg. | 0.93      | 0.92   | 0.92     | 66

Fig. 6. Confusion matrix of training data of ResNet50V2

Fig. 6 shows the confusion matrix of the training data of the ResNet50V2 model, where "0" is COVID-19, "1" is normal, and "2" is pneumonia.

B. MobileNetV2 model

Fig. 7. MobileNetV2 model architecture

MobileNetV2, shown in Fig. 7, has been one of the most popular architectures for creating AI applications in computer vision since its introduction. In comparison to previous versions, MobileNetV2 includes various enhancements that make it more accurate, with fewer parameters and fewer calculations. The accuracy of the MobileNetV2 model's performance rating is 93.97%. Table VI shows the results of the MobileNetV2 model's testing.

TABLE VI. RESULTS WITH TESTING DATA (MOBILENETV2)
Label         | Precision | Recall | f1-score | Support
COVID-19      | 1.00      | 1.00   | 1.00     | 26
Pneumonia     | 1.00      | 0.80   | 0.89     | 20
Normal        | 0.83      | 1.00   | 0.91     | 20
Accuracy      |           |        | 0.94     | 66
Macro avg.    | 0.94      | 0.93   | 0.93     | 66
Weighted avg. | 0.95      | 0.94   | 0.94     | 66

Fig. 8. Confusion matrix of training data of MobileNetV2

Fig. 8 shows the confusion matrix of the training data of the MobileNetV2 model, where "0" is COVID-19, "1" is normal, and "2" is pneumonia.
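Per-class tables of this form (precision, recall, f1-score, support) and confusion matrices such as those in Figs. 6 and 8 can be reproduced from a trained model's predictions. A short sketch using scikit-learn follows, assuming `y_true` and `y_pred` are integer class labels with 0 = COVID-19, 1 = normal, 2 = pneumonia as in the figures; the label arrays shown are placeholders, not the paper's data.

```python
# Sketch: compute the confusion matrix and a per-class report (cf. Eqs. (3)-(6))
# from true vs. predicted labels; the label encoding follows Figs. 6, 8, and 9.
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

class_names = ["COVID-19", "Normal", "Pneumonia"]   # labels 0, 1, 2

# In practice y_true / y_pred would come from the test generator and
# model.predict(...).argmax(axis=1); placeholder values are used here.
y_true = np.array([0, 0, 1, 2, 2, 1])
y_pred = np.array([0, 0, 1, 2, 1, 1])

print(confusion_matrix(y_true, y_pred))
print(classification_report(y_true, y_pred, target_names=class_names, digits=2))

# Overall accuracy per Eq. (3): correct classifications / total images.
print("accuracy:", float((y_true == y_pred).mean()))
```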

C. FASNet model

Table VII shows the results of the FASNet model's testing.

TABLE VII. RESULTS WITH TESTING DATA (FASNET)
Label         | Precision | Recall | f1-score | Support
COVID-19      | 1.00      | 1.00   | 1.00     | 26
Pneumonia     | 1.00      | 0.90   | 0.95     | 20
Normal        | 0.95      | 0.95   | 0.97     | 20
Accuracy      |           |        | 0.98     | 66
Macro avg.    | 0.98      | 0.98   | 0.98     | 66
Weighted avg. | 0.99      | 0.98   | 0.98     | 66

Fig. 9. Confusion matrix of training data of FASNet model

Fig. 10. Classification results of FASNet model

Fig. 9 shows the confusion matrix of the training data of the FASNet model, where "0" is COVID-19, "1" is normal, and "2" is pneumonia. As shown in the results, the FASNet model, with a simpler architectural design, is able to outperform the other pre-trained models in terms of detection of COVID-19 on the 10% of images in the validation dataset. The results for the FASNet model show an accuracy rate of 98.48%. To verify the accuracy of the FASNet model, new testing data from a completely different dataset was used [29]. The results are depicted in Fig. 10, with a high correlation between the actual label (representing the subject's original state) and the predicted label (representing the model's categorization).

D. Comparison of grading performance of models

To benchmark the FASNet model against ResNet50V2 and MobileNetV2, training runs with different numbers of epochs were conducted. The parameter settings were as in Table III. To avoid data overfitting concerns during the training phase, the weights of the model were reset at the beginning of each fold run. The results are shown in Table VIII.

TABLE VIII. CLASSIFICATION PERFORMANCE WITH TESTING DATASET
Number of epochs | Model       | COVID-19 (%) | Pneumonia (%) | Normal (%)
20               | ResNet50V2  | 94           | 92            | 69
20               | MobileNetV2 | 100          | 89            | 91
20               | FASNet      | 100          | 100           | 91
40               | ResNet50V2  | 98           | 92            | 80
40               | MobileNetV2 | 100          | 92            | 91
40               | FASNet      | 100          | 100           | 95
100              | ResNet50V2  | 100          | 94            | 80
100              | MobileNetV2 | 100          | 91            | 100
100              | FASNet      | 100          | 100           | 94

In comparison with previous studies on the same dataset, the performance results are shown in Table IX. FASNet has clearly outperformed the prior research.

TABLE IX. RESULTS WITH THE SAME TESTING DATASET
Authors                  | Model        | Diseases                                      | Accuracy (%)
Hemdan et al. [30]       | COVIDX-Net   | COVID-19                                      | 90.00
Wang and Wong [31]       | COVID-Net    | COVID-19, non-COVID-19 pneumonia              | 92.40
Ghoshal and Tucker [32]  | Bayesian CNN | COVID-19, non-COVID-19, bacterial pneumonia   | 90.00
Our proposed method      | FASNet       | COVID-19, pneumonia, normal                   | 98.48

V. DISCUSSION

The time required for diagnosis of COVID-19 infection is a critical factor in limiting the disease's global spread. Chest X-ray imaging has played a crucial role in the diagnosis of COVID-19 [6], [11], [12]. Our work supports the importance of chest X-rays and offers a FASNet model that can predict COVID-19 infected individuals and separate out pneumonia or healthy persons.

In our model, the first convolutional layer is designed with a kernel size of 1×1 applied to each pixel as a fully connected connection, with the aim of reducing the channel depth and the number of parameters of the model. The early-stopping class and a dropout layer are used to limit the number of neural connections and prevent overfitting.

The dataset is usually imbalanced: because there are fewer COVID-19 images than pneumonia and healthy pictures for training, the model may identify pneumonia and healthy subjects rather than COVID-19. In this scenario, the overall accuracy becomes high but does not represent the accuracy of COVID-19 detection; this causes the accuracy to be imbalanced, as in other research. To overcome this problem, the data was re-organized in order to balance the cases and ensure that the number of images in each case is almost equal. This helps to increase COVID-19 case detection while also balancing overall accuracy.
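The rebalancing step described above amounts to capping each class at roughly the same number of images before training. A minimal sketch of one way to do this is given below, assuming the images are organized in one folder per class; the folder names, file extension, and target count are placeholders, not the authors' exact procedure.

```python
# Hedged sketch: undersample the larger classes so the three classes contribute
# a comparable number of training images, as described in the discussion.
import random
from pathlib import Path

def balanced_file_lists(root="data/train",
                        classes=("covid", "normal", "pneumonia"),
                        per_class=1250, seed=42):
    """Return a dict of class name -> list of image paths, capped at per_class."""
    rng = random.Random(seed)
    balanced = {}
    for name in classes:
        files = sorted(Path(root, name).glob("*.jpg"))
        rng.shuffle(files)
        balanced[name] = files[:per_class]      # keep at most per_class images
    return balanced

subset = balanced_file_lists()
for name, files in subset.items():
    print(name, len(files))
```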

Through the analysis mentioned above, the FASNet model is trustworthy. The FASNet model, with its simple architectural CNN design, can be readily expanded or modified, and this approach may be applied to classify medical images for other diseases. For this work, a trial-and-error method was used to design our model; thus, the parameters of this model need to be evaluated further to find the best values. On the other hand, the FASNet model still has a few misclassified cases due to poor image resolution. Code is available at https://github.com/HoangNhutHuynh/FAS

VI. CONCLUSION

With the view towards contributing a little effort to averting the COVID-19 pandemic, the FASNet CNN model was developed to categorize chest X-ray pictures into three categories: normal, pneumonia, and COVID-19. With a simple architectural CNN design, the accuracy of the FASNet model can reach 98.48%. The model was benchmarked and compared with other models using the same dataset. These promising preliminary results lead us to expect that the FASNet model can assist in diagnosing COVID-19 and other lung diseases. In the near future, the FASNet model will be improved to detect and evaluate the lung damage zones induced by COVID-19. This will help doctors to assess the patient's condition immediately and simultaneously keep an eye on the disease's development. To do so, collaboration with specialists to pinpoint the original data source is crucial.

CONFLICTS OF INTEREST

The authors declare no conflicts of interest.

ACKNOWLEDGMENT

The authors would like to thank Ho Chi Minh City University of Technology (HCMUT), VNU-HCM for the support of time and facilities for this study.

REFERENCES

[1] C. C. Lai, T. P. Shih, W. C. Ko, H. J. Tang, P. R. Hsueh, Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): the epidemic and the challenges, International Journal of Antimicrobial Agents, 2020, vol. 55(3), p. 105924.
[2] J. Zhao et al., Antibody responses to SARS-CoV-2 in patients of novel coronavirus disease 2019, Clinical Infectious Diseases, 2020, vol. 71(16), pp. 2027–2034.
[3] J. She, J. Jiang, L. Ye, L. Hu, C. Bai, Y. Song, 2019 novel coronavirus of pneumonia in Wuhan, China: emerging attack and management strategies, Clinical and Translational Medicine, 2020, vol. 9(1), pp. 1–7.
[4] S. Fang et al., GESS: a database of global evaluation of SARS-CoV-2/hCoV-19 sequences, Nucleic Acids Research, 2021, vol. 49, pp. D706–D714.
[5] S. Ludwig, A. Zarbock, Coronaviruses and SARS-CoV-2: a brief overview, Anesth. Analg., 2020, vol. 20, pp. 6–9.
[6] I. Salahshoori et al., Overview of COVID-19 disease: virology, epidemiology, prevention diagnosis, treatment, and vaccines, Biologics, 2021, vol. 1, pp. 2–40.
[7] C. P. West, V. M. Montori, P. Sampathkumar, COVID-19 testing: the threat of false-negative results, Mayo Clinic Proceedings, 2020, vol. 95(6), pp. 1127–1129.
[8] X. Wu et al., A follow-up study shows that recovered patients with re-positive PCR test in Wuhan may not be infectious, BMC Medicine, 2021, vol. 19, pp. 1–7.
[9] S. Armstrong, COVID-19: tests on students are highly inaccurate, early findings show, BMJ, 2020, vol. 371, p. m4941.
[10] S. Roy, Physicians' dilemma of false-positive RT-PCR for COVID-19: a case report, SN Compr. Clin. Med., 2021, vol. 3, pp. 255–258.
[11] Y. Fang et al., Sensitivity of chest CT for COVID-19: comparison to RT-PCR, Radiology, 2020, vol. 296(2), p. 32073353.
[12] T. Ai et al., Correlation of chest CT and RT-PCR testing in Coronavirus Disease 2019 (COVID-19) in China: a report of 1014 cases, Radiology, 2020, vol. 296(2), p. 32101510.
[13] A. Haleem, M. Javaid, R. Vaishya, Effects of COVID-19 pandemic in daily life, Curr. Med. Res. Pract., 2020, vol. 10, pp. 78–79.
[14] P. Saha et al., GraphCovidNet: a graph neural network based model for detecting COVID-19 from CT scans and X-rays of chest, Scientific Reports, 2021, vol. 11, p. 8304.
[15] T. D. Pham, Classification of COVID-19 chest X-rays with deep learning: new models or fine tuning?, Health Information Science and Systems, 2021, vol. 9(2), pp. 1–11.
[16] S. Shakouri et al., COVID19-CT-dataset: an open-access chest CT image repository of 1000+ patients with confirmed COVID-19 diagnosis, BMC Research Notes, 2021, vol. 14, p. 178.
[17] C. B. S. Maior, J. M. M. Santana, I. D. Lins, M. J. C. Moura, Convolutional neural network model based on radiological images to support COVID-19 diagnosis: evaluating database biases, PLoS ONE, 2021, vol. 16(3), p. e0247839.
[18] R. Jain, M. Gupta, S. Taneja, D. J. Hemanth, Deep learning based detection and analysis of COVID-19 on chest X-ray images, Applied Intelligence, 2021, vol. 51, pp. 1690–1700.
[19] N. El-Rashidy et al., Comprehensive survey of using machine learning in the COVID-19 pandemic, Diagnostics, 2021, vol. 11(7), p. 1155.
[20] L. Wynants et al., Prediction models for diagnosis and prognosis of COVID-19 infection: systematic review and critical appraisal, BMJ, 2020, vol. 369, p. m1328.
[21] A. Narin, C. Kaya, Z. Pamuk, Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks, Pattern Analysis and Applications, 2021, pp. 1–14.
[22] L. Li et al., Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy, Radiology, 2020, vol. 296(2), pp. E65–E71.
[23] A. I. Khan, J. L. Shah, M. M. Bhat, CoroNet: a deep neural network for detection and diagnosis of COVID-19 from chest X-ray images, Computer Methods and Programs in Biomedicine, 2020, vol. 196, p. 105581.
[24] Kaggle dataset, https://www.kaggle.com/prashant268/chest-xray-covid19-pneumonia.
[25] Z. Li, S. Arora, An exponential learning rate schedule for deep learning, 2020, arXiv:1910.07454.
[26] S. Ruder, An overview of gradient descent optimization algorithms, 2016, arXiv:1609.04747.
[27] F. Chollet, Keras: the Python deep learning library, 2018.
[28] M. Sokolova, G. Lapalme, A systematic analysis of performance measures for classification tasks, Information Processing & Management, 2009, vol. 45(4), pp. 427–437.
[29] J. P. Cohen, P. Morrison, L. Dao, COVID-19 image data collection, 2020, arXiv:2003.11597v1.
[30] E. E. Hemdan, M. Shouman, M. Karar, COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images, 2020, arXiv:2003.11055v1.
[31] L. Wang, Z. Q. Lin, A. Wong, COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images, Scientific Reports, 2020, vol. 10, p. 19549.
[32] B. Ghoshal, A. Tucker, Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection, 2020, arXiv:2003.10769.

