Applications of Computational Intelligence in
Multi-Disciplinary Research
Advances in Biomedical Informatics
Edited by
Ahmed A. Elngar
Faculty of Computers and Artificial Intelligence, Beni-Suef University, Beni-Suef City, Egypt
College of Computer Information Technology, American University in the Emirates,
United Arab Emirates
Rajdeep Chowdhury
Department of Computer Application, JIS College of Engineering, Kalyani, West Bengal, India
Mohamed Elhoseny
College of Computing and Informatics, University of Sharjah, United Arab Emirates
Faculty of Computers and Information, Mansoura University, Egypt
Series Editor
Valentina Emilia Balas
Department of Automatics and Applied Software, Faculty of Engineering,
“Aurel Vlaicu” University of Arad, Arad, Romania
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Copyright © 2022 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including
photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher.
Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with
organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website:
www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be
noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding,
changes in research methods, professional practices, or medical treatment may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information,
methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their
own safety and the safety of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury
and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of
any methods, products, instructions, or ideas contained in the material herein.
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress
ISBN: 978-0-12-823978-0
11.5 Challenges in healthcare IoT
    11.5.1 Technology-oriented challenges
    11.5.2 Adapting to remote healthcare and telehealth
    11.5.3 Data security
11.6 Security issues and defense mechanisms and IoT
    11.6.1 Security requirements in healthcare IoT
    11.6.2 Attacks on IoT devices
    11.6.3 Defensive mechanism
11.7 Covid-19: how IoT rose to the global pandemic
    11.7.1 About Covid-19
    11.7.2 Decoding the outbreak and identifying patient zero
    11.7.3 Quarantined patient care
    11.7.4 Public surveillance
    11.7.5 Safeguarding hygiene
    11.7.6 IoT and robotics
    11.7.7 Smart disinfection and sanitation tunnel
    11.7.8 Smart masks and smart medical equipment
11.8 Future of IoT in healthcare
    11.8.1 IoT and 5G
    11.8.2 IoT and artificial intelligence
11.9 Conclusion
References
Index
List of contributors

T. Lucia Agnes Beena  Department of Information Technology, St. Josephs College, Tiruchirappalli, India
Rokeya Akter  Department of Pharmacy, Jagannath University, Dhaka, Bangladesh
Altahir A. Altahir  Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia
Vijanth S. Asirvadam  Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia
Sakshi Bhargava  Department of Physical Sciences and Engineering, Banasthali Vidyapith, Tonk, India
Debnath Bhattacharyya  Computer Science and Engineering Department, Koneru Lakshmaiah Education Foundation, Guntur, India
Paras Bohara  Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India
Pankaj Kumar Chauhan  Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India
Parveen Chauhan  Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India
Md. Arifur Rahman Chowdhury  Department of Pharmacy, Jagannath University, Dhaka, Bangladesh; Department of Bioactive Materials Science, Jeonbuk National University, Jeonju, South Korea
Nhon V. Do  Hong Bang International University, Ho Chi Minh City, Vietnam
Kanika Dulta  Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India
Shimaa E. Elshenawy  Center of Stem Cell and Regenerative Medicine, Zewail City for Science, Zewail, Egypt
Poonam Gupta  G H Raisoni College of Engineering and Management, Pune, India
Nor Hisham B. Hamid  Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia
Joarder Kamruzzaman  School of Engineering and Information Technology, Federation University Australia, Churchill, VIC, Australia
Tai-Hoon Kim  Computer Science and Engineering Department, Global Campus of Konkuk University, Chungcheongbuk-do, Korea
Deepak Kumar Sharma  Department of Information Technology, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India
Indhra Om Prabha M  G H Raisoni College of Engineering and Management, Pune, India
Bijoy Kumar Mandal  Computer Science and Engineering Department, NSHM Knowledge Campus, Durgapur, India
Manirujjaman Manirujjaman  Institute of Health and Biomedical Innovation (IHBI), School of Clinical Sciences, Faculty of Health, Queensland University of Technology, Brisbane, QLD, Australia
Asif Ikbal Mondal  Computer Science and Engineering Department, Dumkal Institute of Engineering & Technology, Murshidabad, India
Pratyusa Mukherjee  School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India
Antoanela Naaji  Faculty of Economics, Computer Science and Engineering, “Vasile Goldis” Western University of Arad, Arad, Romania
Hien D. Nguyen  University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
Vuong T. Pham  Sai Gon University, Ho Chi Minh City, Vietnam
Prajoy Podder  Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh
T. Poongodi  School of Computing Science and Engineering, Galgotias University, Greater Noida, India
Marius Popescu  Faculty of Economics, Computer Science and Engineering, “Vasile Goldis” Western University of Arad, Arad, Romania
Chittaranjan Pradhan  School of Computer Engineering, KIIT Deemed to be University, Bhubaneshwar, India
Md. Habibur Rahman  Department of Pharmacy, Southeast University, Dhaka, Bangladesh
M. Rubaiyat Hossain Mondal  Bangladesh University of Engineering and Technology, Institute of Information and Communication Technology, Dhaka, Bangladesh
Patrick Sebastian  Department of Electrical and Electronics Engineering, Universiti Teknologi PETRONAS, Perak, Malaysia
D. Sumathi  SCOPE, VIT-AP University, Amaravati, India
P. Suresh  School of Mechanical Engineering, Galgotias University, Greater Noida, India
Jayesh S Vasudeva  Department of Instrumentation and Control Engineering, Netaji Subhas University of Technology (formerly known as Netaji Subhas Institute of Technology), New Delhi, India
Amanpreet Kaur Virk  Faculty of Applied Sciences and Biotechnology, Shoolini University of Biotechnology and Management Sciences, Solan, India
Chapter 1

Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern

Abbreviations

î(x, y)      2D cepstrum, with (x, y) representing quefrency coordinates
I(u, v)      2D discrete-time Fourier transform
G(x, y)      2D Gabor function
F(U, V)      2D discrete cosine transform (DCT) coefficient matrix
W            angular frequency
σx and σy    standard deviations of x and y
xc           x-axis coordinate of the iris circle
yc           y-axis coordinate of the iris circle
r            radius of the iris circle
gc           gray level of the center pixel, c
gp           gray level of the neighboring pixel, p
ψ⊕(sp)       binary iris code obtained as the XOR output
LBPp         MLBP operator
1.1 Introduction

The need for high security and surveillance in the present world has made the identification of people an increasingly important issue. Among the various identification modes, biometrics has been considered over the last few decades for its reliable and accurate identification [1–5]. Commonly used biometric features include the face, fingerprint, iris, retina, hand geometry, and DNA. Among these, iris recognition has attracted significant interest in research and commercialization [6–15]. Iris recognition has several applications in security systems for banks, border control, restricted areas, etc. [1–3]. One key part of such a system is the extraction of prominent texture information, or features, from the iris. This feature extraction step generates feature vectors or feature codes. The feature vectors of unknown images are matched against those of stored known ones: in an iris recognition system, the matching process compares the extracted feature code of a given image with the feature codes previously stored in the database. In this way, the identity of the given iris image can be established.
A generalized iris recognition scheme is presented in Fig. 1.1. The figure has two major parts, one showing feature extraction and the other describing iris identification. The system starts with image acquisition and ends with matching, that is, the decision to accept or reject the identity. In between, there are two main stages: iris image preprocessing and feature extraction [3,4]. Iris image preprocessing in turn includes iris segmentation, normalization, and enhancement [5,11]. In the acquisition stage, cameras capture images of the iris. The acquired images are then segmented. In iris segmentation, the inner and outer boundaries are detected to separate the iris from the pupil and the sclera. A circular edge detection method is used to segment the iris region by finding the pixels of the image that have sharp intensity differences with neighboring pixels [3]. Estimating the center and radius of each of the inner and outer circles is referred to as iris localization. After iris segmentation, any image artifacts are suppressed. Next is the normalization step, in which the images are transformed from a Cartesian to a pseudo-polar scheme; this is shown in Fig. 1.1, where boundary points are aligned at an angle. Image enhancement is then performed. As part of feature extraction, the important features are extracted and used to generate an iris code, or template. Finally, iris recognition is performed by calculating the difference between codes with a matching algorithm; for this purpose, the Hamming and Euclidean distances are well known and are also considered in this chapter [15]. The matching score is compared with a threshold to determine whether the given iris is authentic.
Despite significant research results so far [3–9,11,12,14], several challenges remain in iris recognition [13,15–26]. One problem is occlusion, that is, the hiding of the iris by eyelashes, eyelids, specular reflection, and shadows [21]. Occlusion can introduce irrelevant parts and hide useful iris texture [21]. Eye movement can also cause problems in iris region segmentation and thus in accurate recognition. Another issue is the computation time of iris identification. For large populations, the matching time can become prohibitively high for real-time applications, and the identification delay increases with both the population size and the length of the feature codes. It has been reported in the recent literature [13,18,22] that existing iris recognition methods still suffer from long run times, apart from other shortcomings. This is particularly true when the sample size is very large and the iris images are nonideal and captured with different types of cameras. Hence, devising a method that reduces the run time of iris recognition without compromising accuracy is still an important research problem. The identification delay can be reduced by shrinking the feature vector of the iris images. Thus this chapter focuses on reducing the feature vector, which leads to a reduction in identification delay without lowering identification accuracy. To shorten the feature vector, this work combines the Haar wavelet with the modified local binary pattern (MLBP). Note that in the context of face recognition [27–30] and fingerprint identification [31], the Haar wavelet transform demonstrates an excellent recognition rate at a low computation time. In Ref. [32], the Haar wavelet is also used, but without MLBP.
The main contributions of this chapter can be summarized as follows.
1. A new iris feature extraction method is proposed, based on repeated Haar wavelet transformation (HWT) and MLBP, where MLBP is the local binary pattern (LBP) operation followed by an Exclusive OR (XOR). The proposed method differs from the technique described in Ref. [30], which uses single-level HWT and LBP (without XOR) in the context of face recognition.
Iris feature extraction using three-level Haar wavelet transform and modified local binary pattern Chapter | 1 3
2. The efficacy of the HWT–MLBP method is evaluated using three well-known benchmark datasets: CASIA-Iris-V4 [33], CASIA-Iris-V1 [34], and the MMU iris database [35].
3. The new technique is compared with existing feature extraction methods in terms of feature vector length, false acceptance rate (FAR), and false rejection rate (FRR). It is shown that the proposed method outperforms the existing ones in terms of feature vector length.
The remainder of this chapter is organized as follows. Section 1.2 provides a survey of the relevant literature. Section 1.3 covers iris localization, where the inner and outer boundaries are detected. Section 1.4 describes iris normalization. Section 1.5 presents our proposed approach for encoding the iris features. Section 1.6 describes the iris recognition process based on the matching score. The effectiveness of the new method is evaluated in Section 1.7. Finally, Section 1.8 summarizes the work and discusses the remaining challenges and future work.
decomposition at each level is done such that the bandwidth of the output signal is half that of the input. In Ref. [26], a PILP technique is used for feature extraction, yielding a feature vector of size 1 × 128. The PILP method has four stages: key-point detection via phase-intensive patterns, removal of edge features, computation of an oriented histogram, and formation of the feature vector. Iris features have also been extracted using a 1D DCT and a relational measure (RM), where the RM encodes the difference in intensity levels between local regions of iris images [21]. The matching scores of these two approaches were fused using a weighted average; this score-level fusion compensates for images that are rejected by one method but accepted by the other [21]. Another way of extracting feature vectors from iris images is to use linear predictive coding coefficients (LPCC) and linear discriminant analysis (LDA) [24]. Llano et al. [19] used a 2D Gabor filter for feature extraction. Before applying this filter, a fusion of three different algorithms was performed at the segmentation level (FSL) of the iris images to improve their textural information. Oktiana et al. [36] proposed an iris feature extraction system using Gradientface-based normalization (GRF), where GRF uses the image gradient to remove variations in illumination. Furthermore, the work in Ref. [19] concatenated GRF with a Gabor filter, a difference of Gaussian (DoG) filter, binary statistical image features (BSIF), and LBP for iris feature extraction in a cross-spectral system. Shuai et al. [37] proposed an iris feature extraction method based on multiple-source feature fusion, performed with a Gaussian smoothing filter and texture histogram equalization. Besides these, there have been several recent studies in the field of iris recognition [38–49], some focusing on iris feature extraction methods [38,40–42,45,49] and some on iris recognition tasks [39,44,46,48].
The 2D Gabor function can be described mathematically by the following expression:

$$G(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\,\exp\!\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right)+2\pi j W x\right]$$

and the 2D DCT can be defined as:

$$F(U, V) = \frac{\sqrt{2}}{M}\sum_{X=0}^{M-1}\sum_{Y=0}^{N-1} f(X, Y)\,\cos\frac{(2Y+1)U\pi}{2M}\,\cos\frac{(2X+1)V\pi}{2M}$$

where f(X, Y) is the image space matrix; (X, Y) is the position of the current image pixel; F(U, V) (U, V = 1, 2, ..., M − 1) is the transform coefficient matrix; W is the angular frequency; and σx and σy are the standard deviations of x and y, respectively.
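To make the Gabor expression concrete, the following NumPy sketch evaluates G(x, y) on a grid. This is our illustration, not code from the chapter; the grid extent and parameter values are arbitrary.

```python
import numpy as np

def gabor2d(x, y, sigma_x, sigma_y, W):
    """Complex 2D Gabor function, following the expression above."""
    norm = 1.0 / (2.0 * np.pi * sigma_x * sigma_y)
    envelope = np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2))
    carrier = np.exp(2j * np.pi * W * x)   # complex modulation along x
    return norm * envelope * carrier

# Sample the function on a small grid (arbitrary illustrative parameters).
xs, ys = np.meshgrid(np.linspace(-4, 4, 65), np.linspace(-4, 4, 65))
kernel = gabor2d(xs, ys, sigma_x=1.0, sigma_y=1.0, W=0.5)
```

At the origin the envelope and carrier are both 1, so G(0, 0) reduces to the normalization factor 1/(2πσxσy).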
Machine learning (ML)-driven methods, for example neural networks and genetic algorithms, have been reported [46], and deep CNNs have also been applied [40]. Moreover, researchers are now investigating the effectiveness of multimodal biometric recognition systems [43,47].
A comparative summary of some of the most relevant works on iris feature extraction is given in Table 1.1. Several algorithms have been applied to different datasets, achieving varying performance results.
In this regard, edge points are detected first. For each edge point, a circle of the desired radius is drawn centered on that point, so every edge point contributes one circle. An accumulator matrix is then formed to track the intersections of these circles in the Hough space, with each accumulator cell counting the circles passing through it. The cell with the largest count in the Hough space identifies the center of the image circle. Several circular filters with different radius values are considered, and the best one is selected.
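The voting procedure just described can be sketched as follows. This is a minimal illustration under our own assumptions (the function names, the theta sampling density, and the brute-force radius search are ours), not the segmentation code used in the chapter.

```python
import numpy as np

def hough_circle_centers(edge_points, radius, shape):
    """Vote for circle centers: each edge point votes for every candidate
    center lying at distance `radius` from it; the accumulator peak marks
    the most supported center."""
    acc = np.zeros(shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for y, x in edge_points:
        cys = np.round(y - radius * np.sin(thetas)).astype(int)
        cxs = np.round(x - radius * np.cos(thetas)).astype(int)
        for cy, cx in zip(cys, cxs):
            if 0 <= cy < shape[0] and 0 <= cx < shape[1]:
                acc[cy, cx] += 1
    cy, cx = np.unravel_index(np.argmax(acc), acc.shape)
    return (cy, cx), acc[cy, cx]

def best_circle(edge_points, radii, shape):
    """Try several radii and keep the one whose accumulator peak is strongest."""
    results = [(hough_circle_centers(edge_points, r, shape), r) for r in radii]
    ((cy, cx), _votes), r = max(results, key=lambda item: item[0][1])
    return cy, cx, r
```

When the tested radius matches the true one, all edge points vote for the same center, so its accumulator peak dominates those of the wrong radii.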
FIGURE 1.2 Illustrations of (A) Daugman’s rubber sheet model; (B, F, J) original input images; (C, G, K) images with inner and outer boundary
detection; (D, H, L) segmented iris regions, and (E, I, M) iris images after normalization.
FIGURE 1.3 Block diagram of the proposed approach for iris feature extraction.
coefficients of LL1, LH1, HL1, and HH1. In this case, the approximation part of level 1, denoted LL1, becomes of size 32 × 256. Next, level-2 HWT is applied to LL1, which generates the wavelet coefficients LL2, LH2, HL2, and HH2; the approximation part of level 2 (LL2) becomes of size 16 × 128. After that, level-3 HWT is applied to LL2 to generate its wavelet coefficients LL3, LH3, HL3, and HH3; the approximation part of level 3 (LL3) becomes of size 8 × 64. Hence a highly distinctive region, LL3, is obtained by performing the wavelet transformation three times. The LL3 region is then used for the MLBP stage.
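The three-level decomposition can be sketched in plain NumPy. A 64 × 512 normalized iris image is assumed here, inferred from the stated 32 × 256 size of LL1 (the chapter does not give the input size explicitly), and the averaging form of the Haar step is one common convention.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar transform (averaging convention):
    returns (LL, LH, HL, HH), each half the input size per dimension."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise differences
    LL = (lo[0::2, :] + lo[1::2, :]) / 2.0
    LH = (lo[0::2, :] - lo[1::2, :]) / 2.0
    HL = (hi[0::2, :] + hi[1::2, :]) / 2.0
    HH = (hi[0::2, :] - hi[1::2, :]) / 2.0
    return LL, LH, HL, HH

# Repeat three times, keeping only the approximation part each time.
img = np.random.rand(64, 512)   # assumed size of the normalized iris image
LL1 = haar2d(img)[0]            # 32 x 256
LL2 = haar2d(LL1)[0]            # 16 x 128
LL3 = haar2d(LL2)[0]            # 8 x 64
```

Each level halves both dimensions, so three applications shrink 64 × 512 down to the 8 × 64 region used for MLBP.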
Now consider the MLBP operation [25], which generates robust binary features at low computational complexity. MLBP labels each pixel based on its neighboring pixels and a given threshold, and produces its output in binary format; this binary code describes the local texture pattern. Note that MLBP is an LBP followed by an XOR operation, as described next.
For a center pixel c and neighboring pixels p within a neighborhood of P pixels, the MLBP operation can be expressed as follows:

$$LBP_P = \sum_{p=0}^{P-1} S\!\left(g_p - g_c\right) 2^p \tag{1.4}$$

where LBP_P is the MLBP operator, g_c is the gray level of c, and g_p is the gray level of pixel p. Moreover, S(x) in (1.4) is the sign function defined as

$$S(x) = \begin{cases} 1 & \text{if } x \ge 0 \\ 0 & \text{otherwise} \end{cases} \tag{1.5}$$
Next, the center pixel value is generated by applying the XOR operation to the bits of LBP_P, which yields the following expression:

$$\psi_{\oplus}(s_p) = s_0 \oplus \cdots \oplus s_{P-1} \tag{1.6}$$

where ⊕ denotes the XOR operator and ψ⊕(sp) is the binary iris code obtained as the XOR output. Since XOR is commutative, this can be performed by circularly shifting s_p in the clockwise or anticlockwise direction. The XOR is then used to reduce the size from 8 × 64 to 1 × 64: it is computed down each column vector, so the eight-row iris signature is reduced to a single row. Figs. 1.7 and 1.8 describe the MLBP operation; Fig. 1.7 shows the center pixel in a 3 × 3 neighborhood, while Fig. 1.8 illustrates the computation of LBP_{8,1} with XOR for a single pixel.
Algorithm 2: Feature encoding using the proposed MLBP
Input: level-three approximation part of the normalized image
Output: binary sequence of the normalized iris image
Main process:
Step 1: Read the intensity values of the level-three approximation part of the normalized image.
Step 2: Convert the RGB image to grayscale form.
Step 3: Resize the image if required and then store the size [M, N] of the image.
Step 4: Divide the image into eight segments.
Step 5: For each of the image segments, apply a 3 × 3 kernel.
Step 6: For i = 1 : P   // P = 8 for a 3 × 3 kernel
Step 7:   Compute D(i) = gp(i) − gc(i)   // gp is the gray level of a neighboring pixel; gc is that of the center pixel
Step 8:   If D(i) < 0, set S(i) = 0; else set S(i) = 1.
Step 9: Compute LBPp = XOR(S(i))   // apply the XOR operation to get the binary mask
Step 10: Place the binary output of the XOR operation in the center pixel.
Step 11: Move the kernel to obtain a binary template.
Step 12: Apply the XOR operation across the columns.
So, in the case of MLBP, the initial LBP operation extracts the distinctive features to generate a unique iris code, which is then reduced from 8 × 64 features to 1 × 64 by applying the XOR operation.
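A compact NumPy sketch of this LBP-plus-XOR idea is given below. It is our illustration, not the authors' code: the neighbor ordering, the use of ≥ as the sign test, and the cropping of border pixels (rather than padding) are assumptions.

```python
import numpy as np

def mlbp_codes(patch):
    """LBP code for each interior pixel (3 x 3 neighborhood), then a
    column-wise XOR that collapses all code rows into a single row."""
    g = patch.astype(int)
    H, W = g.shape
    # Eight neighbor offsets in a fixed circular order (an assumption).
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros((H - 2, W - 2), dtype=int)
    center = g[1:H - 1, 1:W - 1]
    for p, (dy, dx) in enumerate(offs):
        nb = g[1 + dy:H - 1 + dy, 1 + dx:W - 1 + dx]
        # S(g_p - g_c): bit p is 1 when the neighbor is >= the center.
        lbp |= (nb >= center).astype(int) << p
    # XOR down each column: the multi-row code collapses to one row.
    code_row = np.bitwise_xor.reduce(lbp, axis=0)
    return lbp, code_row
```

On a monotonically increasing test patch, every interior pixel sees the same up/down neighbor pattern, so all LBP codes are equal and the column-wise XOR of an even number of identical codes is zero.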
where ED is the Euclidean distance between two coordinate points, (X1, Y1) and (X2, Y2). For the Hamming distance between two iris codes, the number of unmatched bits is divided by the number of bits used for the comparison. The main operation in the Hamming distance is an XOR gate, which computes the disagreement between two input bits. If P and Q are two bitwise templates of iris images and N is the number of bits in each iris code, then the Hamming distance can be expressed as follows:

$$HD = \frac{1}{N}\sum_{j=1}^{N} P_j \;(\mathrm{XOR})\; Q_j \tag{1.8}$$

where HD denotes the Hamming distance. According to (1.8), HD = 0 indicates complete similarity between two iris codes, while HD = 1 means total dissimilarity. In practice, two iris codes are assumed to be the same if the Hamming distance is lower than a threshold. Similar to the work in Ref. [6], this chapter considers two iris templates identical if their Hamming distance is below 0.32.
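The matching rule can be sketched as follows; the 0.32 threshold comes from the chapter, while the helper names and the example code are ours.

```python
import numpy as np

def hamming_distance(P, Q):
    """Fraction of disagreeing bits between two equal-length binary codes."""
    P, Q = np.asarray(P), np.asarray(Q)
    return np.count_nonzero(P != Q) / P.size

def same_iris(P, Q, threshold=0.32):
    """Accept the pair as the same iris when HD falls below the threshold."""
    return hamming_distance(P, Q) < threshold

a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
hamming_distance(a, a)   # 0.0 for identical codes
```

Flipping two of the eight bits gives HD = 0.25, which is below 0.32, so such a pair would still be accepted as the same iris.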