
Studies in Computational Intelligence 875

Deepak Gupta
Aboul Ella Hassanien
Ashish Khanna   Editors

Advanced
Computational
Intelligence
Techniques
for Virtual Reality in
Healthcare
Studies in Computational Intelligence

Volume 875

Series Editor
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
The series “Studies in Computational Intelligence” (SCI) publishes new develop-
ments and advances in the various areas of computational intelligence—quickly and
with a high quality. The intent is to cover the theory, applications, and design
methods of computational intelligence, as embedded in the fields of engineering,
computer science, physics and life sciences, as well as the methodologies behind
them. The series contains monographs, lecture notes and edited volumes in
computational intelligence spanning the areas of neural networks, connectionist
systems, genetic algorithms, evolutionary computation, artificial intelligence,
cellular automata, self-organizing systems, soft computing, fuzzy systems, and
hybrid intelligent systems. Of particular value to both the contributors and the
readership are the short publication timeframe and the world-wide distribution,
which enable both wide and rapid dissemination of research output.
The books of this series are submitted to indexing to Web of Science,
EI-Compendex, DBLP, SCOPUS, Google Scholar and Springerlink.

More information about this series at http://www.springer.com/series/7092


Deepak Gupta • Aboul Ella Hassanien • Ashish Khanna
Editors

Advanced Computational
Intelligence Techniques
for Virtual Reality
in Healthcare

Editors

Deepak Gupta
Department of Computer Science and Engineering
Maharaja Agrasen Institute of Technology
Guru Gobind Singh Indraprastha University
New Delhi, India

Aboul Ella Hassanien
Faculty of Computers and Information
Cairo University
Cairo, Egypt

Ashish Khanna
Department of Computer Science and Engineering
Maharaja Agrasen Institute of Technology
Guru Gobind Singh Indraprastha University
New Delhi, India

ISSN 1860-949X ISSN 1860-9503 (electronic)
Studies in Computational Intelligence
ISBN 978-3-030-35251-6 ISBN 978-3-030-35252-3 (eBook)
https://doi.org/10.1007/978-3-030-35252-3
© Springer Nature Switzerland AG 2020
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, expressed or implied, with respect to the material contained
herein or for any errors or omissions that may have been made. The publisher remains neutral with regard
to jurisdictional claims in published maps and institutional affiliations.

This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Dr. Deepak Gupta would like to dedicate this
book to his father Sh. R. K. Gupta, his mother
Smt. Geeta Gupta, his mentors Dr. Anil
Kumar Ahlawat, Dr. Arun Sharma for their
constant encouragement, his family members
including his wife, brothers, sisters, and kids,
and his students, who are close to his heart.

Prof. (Dr.) Aboul Ella Hassanien would like


to dedicate this book to his beloved wife Azza
Hassan El-Saman.

Dr. Ashish Khanna would like to dedicate this


book to his mentors Dr. A. K. Singh and
Dr. Abhishek Swaroop for their constant
encouragement and guidance and his family
members including his mother, wife and kids.
He would also like to dedicate this work to his
(Late) father Sh. R. C. Khanna with folded
hands for his constant blessings.
Preface

We are delighted to present our book entitled Advanced Computational
Intelligence Techniques for Virtual Reality in Healthcare. This volume has
attracted a diverse range of engineering practitioners, academicians, scholars, and
industry delegates, with abstracts received from different parts of the world.
Around 25 full-length chapters were received. Among these manuscripts, 11
chapters have been included in this volume. All the chapters submitted were
peer-reviewed by at least two independent reviewers, who were provided with a
detailed review proforma. The comments from the reviewers were communicated to
the authors, who incorporated the suggestions in their revised manuscripts. The
recommendations from two reviewers were taken into consideration while selecting
chapters for inclusion in the volume. The thoroughness of the review process is
evident from the large number of articles received, addressing a wide range of
research areas. The stringent review process ensured that each published chapter
met rigorous academic and scientific standards.
We would also like to thank the authors of the published chapters for adhering to
the time schedule and for incorporating the review comments. We wish to extend
our heartfelt acknowledgment to the authors, peer reviewers, committee members,
and production staff whose diligent work gave shape to this volume. We especially
want to thank our dedicated team of peer reviewers who volunteered for the arduous
and tedious step of quality checking and critique on the submitted chapters.
Lastly, we would like to thank Springer for accepting our proposal for pub-
lishing the volume titled Advanced Computational Intelligence Techniques for
Virtual Reality in Healthcare.

New Delhi, India Deepak Gupta


Cairo, Egypt Aboul Ella Hassanien
New Delhi, India Ashish Khanna

About This Book

Advanced Computational Intelligence Techniques for Virtual Reality in Healthcare


addresses the difficult task of integrating computational techniques with virtual
reality and healthcare.
The book covers the world of virtual reality in healthcare, cognitive and behavioural
training, understanding mathematical graphs, human–computer interaction, fluid
dynamics in the healthcare industry, accurate real-time simulation, healthcare
diagnostics, and more.
By presenting the computational techniques for virtual reality in healthcare, this
book teaches readers to use virtual reality in the healthcare industry, thus providing
a useful reference for educational institutes, industry, researchers, scientists,
engineers, and practitioners.

New Delhi, India Deepak Gupta


Cairo, Egypt Aboul Ella Hassanien
New Delhi, India Ashish Khanna

Contents

World of Virtual Reality (VR) in Healthcare . . . . . . . . . . . . . . . . . .... 1


Bright Keswani, Ambarish G. Mohapatra, Tarini Ch. Mishra,
Poonam Keswani, Pradeep Ch. G. Mohapatra, Md Mobin Akhtar
and Prity Vijay
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Virtual Reality Application in Medicine . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Medical Teaching and Training . . . . . . . . . . . . . . . . . . . . . . . . 6
2.2 Medical Treatment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.3 Experimenting Medicine Composition . . . . . . . . . . . . . . . . . . . 9
3 Key Research Opportunities in Medical VR Technology . . . . . . . . . . . 10
4 Computational Intelligence for Visualization of Useful Aspects . . . . . . 15
4.1 General Guidelines for Patient Care . . . . . . . . . . . . . . . . . . . . . 16
5 Surgical VR and Opportunities of CI . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.1 The JIGSAWS Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
6 Human Computer Interface in CI Based VR . . . . . . . . . . . . . . . . . . . . 17
6.1 Computer-Aided Design (CAD) Repairing Imitated Model
(Design for Artificial Body Part) . . . . . . . . . . . . . . . . . . . . . . . 19
6.2 Test and Treatment for Mental Sickness . . . . . . . . . . . . . . . . . . 19
6.3 Improvement for Treatment Safety . . . . . . . . . . . . . . . . . . . . . . 19
7 Advantages of VR Techniques . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Towards a VIREAL Platform: Virtual Reality in Cognitive
and Behavioural Training for Autistic Individuals . . . . . . . . ......... 25
Sahar Qazi and Khalid Raza
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.1 VIREAL: Decoding the Terminology . . . . . . . . . . . . . . . . . . . . 27
1.2 Historical Background of VIREAL . . . . . . . . . . . . . . . . . . . . . 28
1.3 Day-to-Day Applications of VIREAL . . . . . . . . . . . . . . . . . . . . 29


2 Autism and VIREAL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ... 30


2.1 Common Teaching Techniques for Autistic Children . . . . . ... 32
2.2 Qualitative and Quantitative Teaching Method – PECS . . . . ... 33
2.3 From VIREAL Toilets to Classroom: VR Design
and Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
3 Social and Parental Issues Related to VIREAL . . . . . . . . . . . . . . . . . . 34
4 Computational Intelligence in VIREAL Platforms . . . . . . . . . . . . . . . . 36
4.1 Where Do VIREAL and Machine Learning Intersect? . . . . . . . . 37
4.2 SLAM for VIREAL Environments . . . . . . . . . . . . . . . . . . . . . . 38
4.3 VIREAL on Mobile: Mobile App Developments for Autism . . . 39
4.4 Mind Versus Machine: Practicality of AI in Autism . . . . . . . . . 39
4.5 Limitations of Computational Intelligence in VIREAL . . . . . . . 41
5 Future Perspectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Assisting Students to Understand Mathematical Graphs Using Virtual
Reality Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .. 49
Shirsh Sundaram, Ashish Khanna, Deepak Gupta and Ruby Mann
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
1.1 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
1.2 Scope of VR in Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4 Implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
5 Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
6 Conclusion and Future Scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Short Time Frequency Analysis of Theta Activity for the Diagnosis
of Bruxism on EEG Sleep Record . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Md Belal Bin Heyat, Dakun Lai, Faijan Akhtar, Mohd Ammar Bin Hayat
and Shajan Azad
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
2 Stages of Sleep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
2.1 Non-rapid Eye Movement (NREM) . . . . . . . . . . . . . . . . . . . . . 64
2.2 Rapid Eye Movement (REM) . . . . . . . . . . . . . . . . . . . . . . . . . 65
3 History of Sleep Disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
3.1 Classification of Sleep Disorder . . . . . . . . . . . . . . . . . . . . . . . . 66
4 Electroencephalogram (EEG) Signal . . . . . . . . . . . . . . . . . . . . . . . . . . 69
4.1 EEG Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.2 Classification of EEG Signal . . . . . . . . . . . . . . . . . . . . . . . . . . 70

5 Subject Details and Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71


5.1 Welch Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.2 Hamming Window . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6 Analysis of the EEG Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
7 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
8 Future Scope of the Proposed Research . . . . . . . . . . . . . . . . . . . . . . . 80
9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Hand Gesture Recognition for Human Computer Interaction
and Its Applications in Virtual Reality . . . . . . . . . . . . . . . . . . . . ...... 85
Sarthak Gupta, Siddhant Bagga and Deepak Kumar Sharma
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
2 Process of Hand Gesture Recognition . . . . . . . . . . . . . . . . . . . . . . . . 87
2.1 Hand Segmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
2.2 Contour Matching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
2.3 Hand Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
2.4 Feature Extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
3 Latest Research in Hand Gesture Recognition . . . . . . . . . . . . . . . . . . 91
4 Applications of Virtual Reality and Hand Gesture Recognition
in Healthcare . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5 Hand Gesture Recognition Techniques . . . . . . . . . . . . . . . . . . . . . . . . 96
5.1 Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.2 Tracking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.3 Recognition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
6 Further Challenges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Fluid Dynamics in Healthcare Industries: Computational Intelligence
Prospective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Vishwanath Panwar, Sampath Emani, Seshu Kumar Vandrangi,
Jaseer Hamza and Gurunadh Velidi
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
2 A CI Critical Review in Relation to Fluid Dynamics in Healthcare
Industries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
A Novel Approach Towards Using Big Data and IoT for Improving
the Efficiency of m-Health Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
Kamta Nath Mishra and Chinmay Chakraborty
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
2 Literature Review . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

3 Proposed Architecture of IoT Based m-Health System . . . . . . . . . . . . 130


3.1 IoT Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
3.2 The Architecture of the Internet of Things . . . . . . . . . . . . . . . . 131
3.3 Proposed Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
4 Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
5 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
Using Artificial Intelligence to Bring Accurate Real-Time Simulation
to Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
Deepak Kumar Sharma, Arjun Khera and Dharmesh Singh
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
2 Applications of VR in Healthcare . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
2.1 Medical Education . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
2.2 Surgery Training and Planning . . . . . . . . . . . . . . . . . . . . . . . . . 146
2.3 Diagnostics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
2.4 Treatment of Patients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
3 Rendering in Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
3.1 Virtual Reality and 3D Game Systems . . . . . . . . . . . . . . . . . . . 150
3.2 Human Vision and Virtual Reality . . . . . . . . . . . . . . . . . . . . . . 150
3.3 Virtual Reality Graphics Pipeline . . . . . . . . . . . . . . . . . . . . . . . 152
3.4 Motion to Photons Latency . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
3.5 Improving Input Performance: Using Predictions for Future
Viewpoints Estimation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
3.6 Improving the Rendering Pipeline Performance . . . . . . . . . . . . . 156
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
Application of Chicken Swarm Optimization in Detection of Cancer
and Virtual Reality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
Ayush Kumar Tripathi, Priyam Garg, Alok Tripathy, Navender Vats,
Deepak Gupta and Ashish Khanna
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.1 Machine Learning Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
2.2 Feature Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
2.3 Genetic Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
3.1 Proposed Chicken Swarm Optimisation . . . . . . . . . . . . . . . . . . 176
3.2 Implementation of the Proposed Method . . . . . . . . . . . . . . . . . 178
4 Results and Discussions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5 Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
5.1 Cervical Cancer (Risk Factors) . . . . . . . . . . . . . . . . . . . . . . . . 187
5.2 Breast Cancer (Wisconsin) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188

6 Conclusions and Future Works . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190


References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
Computational Fluid Dynamics Simulations with Applications
in Virtual Reality Aided Health Care Diagnostics . . . . . . . . . . . . . . . . . 193
Vishwanath Panwar, Seshu Kumar Vandrangi, Sampath Emani,
Gurunadh Velidi and Jaseer Hamza
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
2 A Discussion and Critical Review of CFD Simulations
with Applications in VR-Aided Health Care Diagnostics . . . . . . . . . . . 195
3 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Data Analysis and Classification of Cardiovascular Disease and Risk
Factors Associated with It in India . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Sonia Singla, Sanket Sathe, Pinaki Nath Chowdhury, Suman Mishra,
Dhirendra Kumar and Meenakshi Pawar
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
2 Prevalence and Mortality Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
3 A Rate of Cardiovascular Ailment . . . . . . . . . . . . . . . . . . . . . . . . . . . 214
4 Spread of Ailment with Age and Beginning of Ailment . . . . . . . . . . . 215
5 Risk Ailments of Cardiovascular Infirmities . . . . . . . . . . . . . . . . . . . . 215
5.1 Smoking . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.2 Hypertension . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
5.3 Diet and Nutrition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.4 The Abundance of Sodium . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.5 Air Pollution Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
5.6 Gender . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.7 Ethnicity or Race . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.8 Low Financial Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
5.9 Psychosocial Stress . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
5.10 Diabetes and Glucose Intolerance . . . . . . . . . . . . . . . . . . . . . . . 219
6 Predictive Data Analysis of Cardiovascular Disease in an Urban
and Rural Area for Males and Females . . . . . . . . . . . . . . . . . . . . . . . 219
7 Classification of Heart Disease by Naive Bayes Using Weka Tools . . . 219
8 Medication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 222
9 Various Tests Available for Heart Check up . . . . . . . . . . . . . . . . . . . . 223
10 Virtual Reality in Health Care . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
11 Implantable Cardioverter Defibrillators . . . . . . . . . . . . . . . . . . . . . . . . 226
12 Use of Certain Medication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13 Cardiovascular Diseases Types . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
14 Prevention Measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227

15 Role of Yoga in Treatment of Heart Disease . . . . . . . . . . . . . . . . . . . 227


16 Burden of Disease . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
17 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
About the Editors

Dr. Deepak Gupta is an eminent academician who juggles versatile roles and
responsibilities spanning lectures, research, publications, consultancy,
community service, and Ph.D. and postdoctoral supervision. With 12 years of rich
expertise in teaching and two years in industry, he focuses on rational and practical
learning. He has contributed extensive literature in the fields of human–computer
interaction, intelligent data analysis, nature-inspired computing, machine learning,
and soft computing. He has served as Editor-in-Chief, Guest Editor, and Associate
Editor in SCI and various other reputed journals. He has completed his postdoc
from Inatel, Brazil, and Ph.D. from Dr. APJ Abdul Kalam Technical University.
He has authored/edited 35 books with national/international-level publishers
(Elsevier, Springer, Wiley, Katson). He has published 87 scientific research pub-
lications in reputed international journals and conferences including 39 SCI
Indexed Journals of IEEE, Elsevier, Springer, Wiley, and many more. He is the
convener and organizer of “ICICC” Springer Conference Series.

Dr. Aboul Ella Hassanien is Founder and Head of the Egyptian Scientific Research
Group (SRGE) and Professor of Information Technology at the Faculty of Computer
and Information, Cairo University. He is the former Dean of the Faculty of Computers
and Information, Beni Suef University. He has more than 800 scientific research papers
published in prestigious international journals and over 30 books covering such
diverse topics as data mining, medical images, intelligent systems, social networks,
and smart environment. He won several awards including the Best Researcher of the
Youth Award of Astronomy and Geophysics of the National Research Institute,
Academy of Scientific Research (Egypt, 1990). He was also granted a scientific
excellence award in humanities from the University of Kuwait in 2004
and received the Scientific Superiority University Award (Cairo University,
2013). He was also honored in Egypt as the best researcher at Cairo University in 2013.
He also received the Islamic Educational, Scientific and Cultural Organization
(ISESCO) Prize in Technology (2014) and the State Award for Excellence
in Engineering Sciences in 2015. He was awarded the Medal of Sciences and Arts
of the First Class by the President of the Arab Republic of Egypt in 2017.


Dr. Ashish Khanna is a highly qualified individual with around 15 years of rich
expertise in teaching, entrepreneurship, and research and development with spe-
cialization in Computer Science Engineering Subjects. He received his Ph.D.
degree from National Institute of Technology, Kurukshetra. He has completed his
M. Tech. in 2009 and B. Tech. from GGSIPU, Delhi, in 2004. He has published
many research papers in reputed journals and conferences. He also has papers in
SCI and Scopus Indexed Journals including some in Springer Journals. He is
Co-author of 10 textbooks for various engineering courses. He is Guest Editor of
many special issues of IGI Global, Bentham Science, and Inderscience journals.
He is convener and organizer of the ICICC-2018 Springer conference. He is also a
successful entrepreneur, having founded a publishing house named "Bhavya
Books", with 250 solution books and around 50 textbooks. He has also started a
research unit under the banner of "Universal Innovator".
World of Virtual Reality (VR)
in Healthcare

Bright Keswani, Ambarish G. Mohapatra, Tarini Ch. Mishra,


Poonam Keswani, Pradeep Ch. G. Mohapatra, Md Mobin Akhtar
and Prity Vijay

Abstract Virtual Reality (VR) technology is widely used in scientific, engineering,
and educational applications all over the world. The technology is advancing rapidly,
but its applications in medical fields are still limited. Medical technology
is one of the fastest-advancing fields, evolving to meet ever-growing health
requirements. Further, Computational Intelligence (CI) has contributed many
promising aspects to healthcare practices such as treatment, disease diagnosis,
direct follow-ups, rehabilitation setups, preventive measures, and administrative
management practices. Dental sciences have also witnessed many developments. In many

B. Keswani
Department of Computer Applications, Suresh Gyan Vihar University, Mahal Jagatpura, Jaipur,
India
e-mail: bright.keswani@mygyanvihar.com
A. G. Mohapatra (B)
Department of Electronics and Instrumentation Engineering, Silicon Institute of Technology,
Bhubaneswar, Odisha, India
e-mail: ambarish.mohapatra@gmail.com
T. Ch. Mishra
Department of Information Technology, Silicon Institute of Technology, Bhubaneswar, Odisha,
India
e-mail: tarini@silicon.ac.in
P. Keswani
Akashdeep PG College, Jaipur, Rajasthan, India
e-mail: Poonamkeswani777@gmail.com
P. Ch. G. Mohapatra
PCG Medical, Charampa, Bhadrak, Odisha, India
e-mail: pradeepch.gajendra.m@gmail.com
M. M. Akhtar
Riyadh Elm University, Riyadh, Saudi Arabia
e-mail: jmi.mobin@gmail.com
P. Vijay
Suresh Gyan Vihar University, Mahal Jagatpura, Jaipur, India
e-mail: prityvijay1@gmail.com

© Springer Nature Switzerland AG 2020


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_1

ways, VR-based surgery practices are governed by computer assistance. The
conjunction of these two technologies can, to a large extent, solve various issues
in modern healthcare systems. With the introduction of newer healthcare technology,
many medical issues can be overcome, and the scope of this kind of study
is boundless.

Keywords Virtual reality · Computational intelligence · Medical technology ·


Healthcare systems

1 Introduction

Virtual Reality (VR) is a leading and wide-ranging aspect of Information Technology
(IT). VR can represent a three-dimensional (3D) spatial concept with the aid of a
computer and other devices. It can simulate a variety of sensations, such as touch,
smell, vision, and hearing, and provide the simulated output to a user. Using
VR-enabled equipment, a user can interact with, control, and manage objects that
belong to the virtual environment. In this context, the VR system can be regarded
as an artificial, 3D spatial world from the user's perception. The ability to portray
3D information, the orientation towards human–computer interfacing, and the
immersion of the user in the virtual world make VR a class apart from other
simulation systems [1]. The VR system stands on three I's, namely Interaction,
Immersion, and Imagination, which are complementary to each other (Fig. 1).
Depending on the three I's, VR systems can be divided into Desktop, Distributed,
Immersive, and Augmented Virtual Reality systems. VR in medicine, in particular,
is expected to offer higher accuracy, greater interactivity, and improved realism;
consequently, Desktop VR has only limited applications in medicine [2]. Similarly, the Immersive

Fig. 1 A typical VR headset [3]



VR uses a Head Mounted Display (HMD) and data gloves, thereby isolating the user's
vision and other sensations and making the user a participant in the virtual system.
In Augmented VR, a virtual image is superimposed on a real object, thus enabling
the user to get real-time information. Distributed VR is a network of virtual
environments that can connect a large number of users across virtual environments
at various physical locations through communication networks [3–5]. VR and AR are
widely used in healthcare [6].
Currently, the applicability of VR and AR in healthcare includes:
• Training in surgical environments
• Healthcare education
• Mental health management, such as for Post Traumatic Stress Disorder (PTSD),
Obsessive Compulsive Disorder (OCD), stress management, and phobias
• Therapy, such as for Autism Spectrum Disorder, occupational therapy, and Sensory
Processing Disorder (SPD)
• Neuroplasticity in the case of neural rehabilitation and cognitive behaviour (Fig. 2).
Figure 3 reproduces a forecast by Tractica of the global market between 2014 and 2020,
showing annual shipment units and revenue of VR hardware and related content across
industrial sectors, covering HMDs and VR equipment such as motion-capture cameras,
displays, projectors, gesture-tracking devices and related application software [7].
The figure also predicts growth in software and content-creation tools. Virtual
surgeon-training tools and VR modules

Fig. 2 Predicted market size of VR and AR [57]


4 B. Keswani et al.

Fig. 3 Predicted market size of VR hardware and software [7]

for nurses are examples of such VR/AR applications. The
British startup Medical Realities has its own training tool that lets novice professionals
become familiar with surgery from a surgeon's point of view [8]. Similarly, to
curb risks to patients, VR Healthnet is creating a VR module
for nurses and medical professionals [9].
For more than a century, virtual consultation by general practitioners has been
common; telephonic consultation was a part of it. But this kind
of consultation led to disappointment due to shorter consultations and longer
waiting times. There was a 30% increase in waiting time in 2016 [10], and
nearly 90% of consultations in the UK lasted no more than 15 min [11]. That is
how telemedicine became popular in recent years and became of much
interest in managing chronic diseases. Recent studies show that patients suffering
from chronic conditions such as high blood pressure, high cholesterol and diabetes have seen
significant improvement with consultations through video services and e-mail [12].
In addition, virtual consultation is also helpful in addressing mental ailments,
especially among youngsters [10].
The solution in this case is an amalgamation of traditional healthcare and information
technology for health, referred to as Healthcare Information and Communication
Technology (ICT). eHealth, which is ICT applied to healthcare, is the answer.
mHealth, a component of eHealth, uses mobile phones and related services

Fig. 4 Schematic figure of VR-Health [12]

such as the short messaging service (SMS), 3G and 4G mobile technologies, the general
packet radio service (GPRS), the global positioning system (GPS), and Bluetooth
technology at its core [13]. However, there is a subtle difference between mobile
and wireless: wireless health solutions are not always mobile, and vice versa.
Mobile technology uses the core technologies discussed above, whereas wireless health
integrates technology into customary medical practices such as diagnosis, treatment
of illness and monitoring.
Similarly, uHealth (ubiquitous health) is capable of providing healthcare
solutions to anyone, anywhere, at any time using various broadband technologies and
many ubiquitous applications [14, 15]. uHealth, however, does not incorporate AR and VR
technologies.
Finally, considering various aspects such as the growth of VR/AR technology and
applications and the accomplishments of eHealth and mHealth, it can be inferred that
innovation in VR/AR healthcare application models is inevitable. New, innovative
VR/AR models are going to help patients and healthcare
staff alike. Figure 4 shows a schematic of VR-Health.

2 Virtual Reality Application in Medicine

The usability of Virtual Reality (VR) technologies is almost limitless. In
the field of medicine, VR technologies are primarily used for expressing
3D space and interactive surgical environments. Moreover, VR plays a significant
role in giving people perceptible and sensible information in a measurable
and reliable virtual environment, which supports a clearer view
of VR, innovation in VR, and active information acquisition. Hence, VR
technology has a pragmatic superiority in medicine in terms of study, surgical training,
pharmaceutical tests, diagnosis and treatment. The major aspects of VR technology
in medicine are discussed below [16, 17].

2.1 Medical Teaching and Training

VR is useful in learning new technologies and methodologies. Eventually, VR will take
the place of traditional medical experiments and will impart new teaching mechanisms.
VR uses multi-attribute data that creates an efficient mode for a practitioner
to master this new technology.

2.1.1 Medical Teaching

For medical practitioners, different sensory information, such as hearing, touch and
smell, and lively, dynamic 3D objects can be combined using VR technology
and used in classroom training, where these things can be experienced without
their physical existence; the structure of the human body, the structure of the heart
and the cause of a disease can be explored with this technique. In this process, a 3D
model of a human body is created, and anyone can go inside the model and see the
muscles, the skeletal structure and the other organ systems, their workings and their
status. Moreover, the condition of an organ can be assessed and a proper treatment
procedure defined. In other words, VR can provide an alternative, interactive way
of studying human anatomy. For example, Vesalius, Duke University's Internet resource
for surgical education, and the brain atlas of Harvard University are considered the
most famous virtual medical multimedia teaching resources [18].

2.1.2 Virtual Surgery Training

During surgery, 80% of failures occur due to human error, making
precision in surgery a priority. Surgical training is traditionally a
classroom-based process. However, in the classroom the condition of a patient may
vary depending on various unforeseen factors, resulting in an inappropriate and
therefore less effective training procedure. In addition,
the traditional process takes more time, incurs more cost and decreases operation
quality, which is not suitable for the patient [19].
In contrast, VR technology can provide a simulated workbench environment
for doctors. With its help, doctors can work with a 3D image of the human body,
learn how to deal with the actual clinical procedure,
and practice surgery on a virtual human body. A doctor can also
experience this virtual environment as real with the help of VR technologies
[2]. By taking feedback from expert professionals, the VR system can also add new
dimensions to the surgical workflow, and the process can be made recursive: once
a surgical procedure is complete, the VR system can evaluate it against various
parameters and standards. Such systems are risk-free, cost-effective, recursive
and self-assistive, and can help professionals improve their skill set [19].
This is shown in Fig. 5.

Fig. 5 Minimally invasive surgical trainer (MIST) system [19]

2.2 Medical Treatment

In conventional surgery, patient data are acquired using X-ray
images and MRI and CT scans, and these images are combined into a 3D image by
image processing. A doctor recreates the whole procedure mentally before performing
the actual surgery, and during surgery must also memorize all the 3D
images. In this scenario, a good surgical outcome can be expected only if the doctor is
skillful and experienced [6, 17]. VR technology is of great help in this kind of
scenario, supporting all channels of 3D display and
shared surgery, and thereby increases success rates in complicated surgeries [20].

2.2.1 Analysis and Data Acquisition

VR technology combines 2D images obtained from sources such as CT, PET and MRI
into high-fidelity images. To establish a 3D model, the 2D data are processed, the surface is
reconstructed, and virtual endoscope data are processed. This helps a doctor investigate
patient data using 3D images. Moreover, a doctor can explore regions inside
a 3D virtual model of a patient that are far beyond the reach of an endoscope. This is
helpful for proper analysis of diseased organs and surrounding tissues, avoiding
redundant invasive diagnosis [21].
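The slice-to-volume step described above can be sketched in a few lines. This is a minimal illustration on synthetic data, not a clinical reconstruction pipeline; real systems use calibrated slice spacing, interpolation and surface extraction, and the intensity threshold here is an invented placeholder.

```python
import numpy as np

def build_volume(slices):
    """Stack a list of equally sized 2D arrays into a (z, y, x) volume."""
    return np.stack(slices, axis=0)

def segment(volume, lo, hi):
    """Binary mask of voxels whose intensity lies in [lo, hi]."""
    return (volume >= lo) & (volume <= hi)

# Synthetic example: four 8x8 slices with a bright 'organ' on the middle two.
slices = [np.zeros((8, 8)) for _ in range(4)]
for s in slices[1:3]:
    s[3:5, 3:5] = 200.0  # bright 2x2 region on the two middle slices

vol = build_volume(slices)          # shape (4, 8, 8)
mask = segment(vol, 150.0, 255.0)   # voxels belonging to the bright structure
print(vol.shape, int(mask.sum()))   # (4, 8, 8) 8
```

The resulting boolean mask is the raw material from which a surface mesh for a 3D view would then be extracted.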

2.2.2 Designing a Trial Surgery Program

Before a surgery, a surgical simulator sets up a 3D model from the actual patient data.
Next, the doctor who will carry out the surgery performs a trial surgery
in a computerized virtual environment according to the planned procedure. Complicated
situations are handled by taking extra precautions, such as testing the edge and angle
of the knife. These steps are necessary to produce a flawless operation procedure.
Concurrently, all participating members of the surgical team can exchange ideas
based on the information they obtain from the 3D surgical environment, where
the surgery is performed by a computer, thereby enhancing the coordination of the
surgical team [21].

2.2.3 Result Prediction in Surgery

VR is useful for guiding and monitoring a surgical process. A patient's 3D
model is created initially, and a scanned image is added to the model. This enables
a doctor to integrate newly captured data into the patient's 3D model and predict the
result in a real environment [16].

2.2.4 Distance Medical Treatment

This technology broadens the scope of medical treatment with the help
of broadband networks and leverages the expertise of a professional to the fullest.
Distance diagnosis and distance operations are the two major uses of distance
medical treatment. Distance diagnosis enables a professional to consult with a
patient at a distant place remotely, using a computer, much like an
on-site examination; in this way, medical services can be rendered to more people.
A distance operation, in this context, is used to instruct a local doctor so that a
surgery can be conducted smoothly.
A distance medical system combined with improved imaging
becomes an efficient means of training medical practitioners. With it, academic
conferences can be relayed, surgeries can be demonstrated and medical courses can
be delivered without detaching medical professionals from their regular activities.
Satellite technology, broadband networks and image-processing techniques will bring
distance medical treatment toward perfection.

2.3 Experimenting Medicine Composition

New drug creation is one of the latest applications of VR. A molecule has a complicated
structure, and that 3D structure is difficult to translate onto a 2D display.
VR provides an opportunity to establish the molecular structure of a compound
medicine through a natural, visible 3D environment in which the interaction traits
of a molecule can be determined; the characteristics of the atoms
can also be studied. Figure 6 shows how UNC uses a ceiling-mounted Argonne
Remote Manipulator (ARM) to test receptor sites for a drug molecule.
A medicine, once successfully developed in a virtual environment, can then be
tested in a virtual environment (a virtual body). The effectiveness profile of the medicine is
provided to a computer, a virtual patient (the virtual body) tries the medicine, and
the physiological reactions of the virtual body appear under the medicinal action.
Testing a newly designed drug on a virtual patient will speed

Fig. 6 The GROPE system [19]

up the testing process, which is significant in two respects: cost effectiveness
and assessing the harm of the new drug to the human body.
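As a toy illustration of the kind of interaction trait such a 3D environment might expose, the sketch below evaluates the Lennard-Jones pair potential between two atoms as a function of their separation. The parameters are generic placeholders, not values for any real drug molecule.

```python
def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Pair interaction energy at separation r (same units as sigma).

    epsilon sets the well depth, sigma the distance at which the
    energy crosses zero; both are illustrative defaults here.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

# The energy is most favorable (minimal) near r = 2^(1/6) * sigma ~ 1.122.
r_min = 2 ** (1 / 6)
print(round(lennard_jones(r_min), 6))  # -1.0
```

In a VR molecular view, a quantity like this (summed over atom pairs) is what gives haptic feedback its "pull" toward a favorable docking pose.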

3 Key Research Opportunities in Medical VR Technology

Medical VR can be termed the second generation of VR. Traditionally, research
focused on 3D scientific visualization in aerospace, geological survey, computer-aided
design and manufacturing (CAD/CAM), transportation, and other nonmedical fields.
With the increasing power of computer processing and
virtual realism, a number of medical applications are emerging. In
VR, a person can be viewed as a 3D dataset that represents a real person. Simulators
in academic training and diagnosis using virtual endoscopy are upcoming research
areas; in the 21st century, VR is a new pathway in medicine.
There are, however, areas that still need to be addressed, among them real-time 3D
segmentation, interactive clinical systems, image segmentation and fusion with Digital
Signal Processing (DSP), high-volume data transmission and storage, and user
interfaces. Virtual realization in surgery dates back to the 1980s. Figure 7
shows the first VR system, created by Delp and Rosen for tendon transplantation of
the lower leg as an alternative surgical process [22].
The very first surgical simulator of the abdomen was created by Satava [23] in 1991
(Fig. 8); it used organ images created with simple graphics drawing programs.
Though not very interactive or realistic, these simulators provided an opportunity
to explore and practice surgical procedures. Merril of High Techsplanations
successfully created a sophisticated graphical version of the human torso with organs
that simulated physical properties, such as bending or stretching when pushed and
pulled, or edges retracting when cut, as shown in Fig. 9 [24]. A landmark
event was the 1994 release of the National Library of Medicine's "Visible Human" project
under Dr. M. Ackerman, which provided images reconstructed from an actual
person's data set.
Spitzer and Whitlock of the University of Colorado created a virtual cadaver from
1871 slices, each 1 mm thick, which were digitized and stored [25]. There was
little photorealism, because the available computing power was devoted to image
processing, leaving little for tissue properties, bleeding, wounding and instrument
interaction, and the images achieved were not realistic.
In 1995, Dr. J. Levy designed a hysteroscopy surgical simulator with a simple haptic device
and patient-specific anatomy and pathology. This enabled doctors to get hands-on
with the same virtual pathology as presented by a patient. Even with complicated
anatomy, a realistic image with tissue properties and haptic input can be achieved, as in
the central venous catheter placement simulator (Fig. 10) by Higgins of HT
Medical, Inc. [26]. In 1996, Boston Dynamics Inc., using the Phantom haptic device,
introduced a surgical simulator with high-fidelity haptics that focused on anastomosis,
ligating and dividing, etc., rather than full procedures [27]. Moreover, simulators of

Fig. 7 Lower limb simulator to evaluate tendon transplant. Courtesy of Dr. S. Delp and J. Rosen, MusculoGraphics, Inc., Evanston, IL [22]

Fig. 8 Early surgical simulation of the abdomen using simple graphics drawing programs [23]

Fig. 9 Improved graphic rendering of human torso, which includes organ properties [24]. Courtesy of J. Merril, High Techsplanations, Inc., Rockville, MD.

Fig. 10 Central venous catheter placement simulator for training [27]. Courtesy of Dr. G. Higgins, HT Medical, Inc., Rockville, MD.

catheter systems with balloon angioplasty and stent placement are being developed
for catheter-based endovascular therapy (Fig. 11).
Simulators come at four different levels and are now ready to be inducted into
medical academia, where their capabilities can be matched to the curriculum.
The levels are as below:
• Needle-based simulators, such as needle insertion into a vein, central venous
catheter placement, spinal tap, and liver biopsy.
• Scope-based simulators, where moving the control handle of the scope changes
the view on the monitor, as in angioplasty.
• Task-based simulators with single or multiple instruments, such as anastomosis and
cross-clamping.
• Simulators of complete surgical procedures.

Fig. 11 Angioplasty catheter in blood vessel [27]. Courtesy of G. Merrill, HT Medical, Inc., Rockville, MD.

These kinds of simulators provide added value from a technical standpoint.
Matching the curriculum with the technology, however, is of greater significance
nowadays. The primary focus is to make a professional
an expert with the instruments and in anastomosis. A professional expects a realistic
model from the technology rather than merely getting hands-on time on a simulator;
this realism will increase with the increase in computational power.
With the development of surgical simulation technology and real patient data
captured using VR and ICT, diagnostic procedures can
now be performed on collected information without invasive or minimally invasive
procedures being applied to the patient; Virtual Endoscopy is an example.
Endoscopic procedures are a natural application of this approach, but
it extends to areas not directly related to endoscopy: regions
such as the internal portions of the eye and ear, generally inaccessible to
an instrument, can now be reached with this technology. Virtual Endoscopy can
also work from a regular CT scan of the body part of concern, setting other organs and
tissues aside.
Using an advanced algorithm such as a flight-path algorithm, an organ can be traversed
with a resulting image comparable to performing the examination
with a video endoscope [28]. The lungs, stomach, uterus, sinuses and many more organs
are being successfully examined (Fig. 12), and organs such as the inner ear and ganglia are
being explored (Fig. 13) [29].
A resolution of 0.3 mm is enough to diagnose irregularities such as ulcers, polyps and
cancers, which change the surface. Usually, though, the rendered surfaces use generic
texture maps, so conditions such as infection, ischemia and superficial cancers are not
diagnosed properly. A lookup table correlating the Hounsfield units of a CT scan with
organ-specific color and texture can be applied; once real-time registration
and accuracy are solved, a virtual organ can have proper anatomy with precise coloring. Hence,
virtual endoscopy is useful in diagnosis. Energy-directed methods are useful for
totally noninvasive treatment; cryotherapy, for instance, can heal through protein denaturation. Data

Fig. 12 Virtual colonoscopy with internal view of the transverse colon [29]. Courtesy of Dr. R. Robb, Mayo Clinic, Rochester, MN.

Fig. 13 Virtual endoscopy of the inner ear with view of the semicircular canals, cochlea, and associated structures [29]. Courtesy of Dr. R. Robb, Mayo Clinic, Rochester, MN.

fusion and stereotaxis can be used by any physician to augment precision of location in
real time.
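The lookup-table idea mentioned above can be sketched as follows. The Hounsfield-unit ranges, tissue labels and colors here are rough textbook approximations chosen for illustration, not a clinically validated table.

```python
# Map Hounsfield units (HU) from a CT scan to a tissue class and an RGB
# color for organ-specific coloring in a virtual endoscopy view.
# Ranges are half-open [lo, hi); values are illustrative only.
HU_TABLE = [
    (-1024, -200, "air/lung",    (0.0, 0.0, 0.0)),
    (-200,  -50,  "fat",         (0.9, 0.8, 0.4)),
    (-50,    80,  "soft tissue", (0.8, 0.4, 0.4)),
    (80,    400,  "contrast",    (0.9, 0.9, 0.9)),
    (400,  3000,  "bone",        (1.0, 1.0, 0.9)),
]

def classify_hu(hu):
    """Return (tissue label, RGB color) for a Hounsfield value."""
    for lo, hi, label, color in HU_TABLE:
        if lo <= hu < hi:
            return label, color
    return "unknown", (1.0, 0.0, 1.0)  # magenta flags out-of-range voxels

print(classify_hu(-600)[0])  # air/lung
print(classify_hu(45)[0])    # soft tissue
print(classify_hu(900)[0])   # bone
```

Applied per voxel, a table like this is what turns a gray CT volume into the organ-specific coloring described in the text.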
Usually, a physician's office has many components, such as CT scanners, MRI
machines, ultrasound devices and more, whose main objective
is to capture patient data. It is possible that by the time a patient takes the chair beside
a physician, a 3D image of the patient will appear on the physician's desktop
(Fig. 14), a visual integration of the information acquired by the scanners in
the physician's office. Now, if the patient complains of a problem in the right flank,
the doctor can rotate the image and get the relevant information. Each pixel of the
image stores patient data, eventually creating a new medical avatar for the patient.
Such an image contains a patient's anatomic, physiological and historical data.
Information can now be searched directly from the image database instead
of searching volumes of written material. Images are useful at any stage of medical

Fig. 14 Full 3-D suspended holographic image of a mandible [29]. Courtesy of J. Prince,
Dimensional Media Associates, New York.

treatment, such as pre-operative and post-operative procedures and the analysis of
patient data. The patient information can also be shared irrespective of time and
place.

4 Computational Intelligence for Visualization of Useful Aspects

Clinical decision support systems (CDSS) were an early AI tool for
medicine, taking disease symptoms and demographic information as input.
In the 1970s, a CDSS could diagnose infection-causing bacteria and recommend
antibiotics [30]; Mycin, for instance, was a rule-based engine. David Heckerman developed
Pathfinder, which used Bayesian networks [31]: a graphical model
that encodes probabilistic relationships among variables of interest [31]. It
was very helpful in diagnosing lymph-node diseases. Medical imaging, such as
computer-aided detection (CAD) of tumors and polyps, also applies AI; such imaging is
helpful in mammography, cancer diagnosis, congenital heart disease and various arterial
defects [32].
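The probabilistic reasoning behind a system such as Pathfinder reduces, in its simplest one-finding form, to Bayes' rule. The prevalence and test characteristics below are invented for illustration, not clinical figures.

```python
def posterior(prior, sensitivity, specificity):
    """P(disease | positive finding) via Bayes' rule.

    prior       - P(disease) before observing the finding
    sensitivity - P(positive | disease)
    specificity - P(negative | no disease)
    """
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# Rare disease (1% prevalence) with a fairly accurate finding: even a
# positive result leaves the disease unlikely, which is exactly the kind
# of counterintuitive update a Bayesian-network CDSS makes explicit.
p = posterior(prior=0.01, sensitivity=0.95, specificity=0.90)
print(round(p, 3))  # 0.088
```

A full Bayesian network such as Pathfinder chains many such updates over dozens of interdependent findings rather than a single one.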
AI and Machine Learning (ML) can be used to create models based on large bodies of
patient data, called populations. These models can make real-time predictions, such as
the risk or presence of a disease, and can raise alerts in real time as well [33–35].
They consume the huge numbers of records collected from ICUs on a regular basis
[36]. Neural network (NN) and decision-tree algorithms are used as classifiers of
patient state to fire an alert. The Time Series Topic Model (a hierarchical Bayesian model)
developed by Suchi Saria is a physiological assessment for newborn infants
that captures time-series data from a newborn's first three hours [34]. This model
accurately estimated an infant's risk of infection and of cardiopulmonary
complications. Physiologic parameters have higher potential predictive capability
than invasive laboratory processes, thereby encouraging the study of non-invasive
neonatal care [17].
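A decision-tree-style patient-state classifier of the kind described above can be caricatured as a hand-built tree over two vital signs. The thresholds and feature names are illustrative assumptions standing in for a learned model, not clinical guidance.

```python
def alert(vitals):
    """Return True if the patient state warrants an alert.

    A hand-built two-level decision tree over heart rate (bpm) and
    oxygen saturation (%), mimicking the structure a learned
    decision-tree classifier would produce from ICU records.
    """
    if vitals["spo2"] < 90:
        return True                  # hypoxia: alert regardless of heart rate
    if vitals["hr"] > 130:
        return vitals["spo2"] < 95   # tachycardia plus borderline saturation
    return False

print(alert({"hr": 80,  "spo2": 98}))   # False
print(alert({"hr": 140, "spo2": 93}))   # True
print(alert({"hr": 140, "spo2": 97}))   # False
print(alert({"hr": 70,  "spo2": 85}))   # True
```

A production system would learn such thresholds from a large labeled population and evaluate them on streaming data rather than hard-coding them.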

4.1 General Guidelines for Patient Care

Advances in AI and ML technology signify the potential for
improving patient care. Current models concentrate on prediction problems:
classification, predicting a discrete-valued attribute, and regression, predicting a
real-valued attribute. These models are useful for specific diseases and consider
only a small population of data. Hence, the bigger challenge here is to create models that
take large population data and detect problems automatically.
Such a model would also be able to find threats such as hospital-acquired
disease and suboptimal patient care, and invent new ways of caring for patients.
Question Answering (QA) and Large-scale Anomalous Pattern Detection (LAPD)
are new AI tools with great potential to overcome the above-mentioned challenges.
IBM and Carnegie Mellon University have developed DeepQA for general
QA, which can be integrated into IBM Watson [37]. IBM and the Memorial Sloan Kettering
Cancer Center are designing a tool to diagnose and recommend treatment for various
types of cancer. IBM Watson provides a probabilistic approach for doctors to take
evidence-based decisions and is also going to be helpful for learning from user
interaction [38].
In this context, the Semantic Research Assistant (SRA) is another QA system
in the medical domain. SRA creates a knowledge base that answers queries from
doctors, providing answers using medical facts, rules and patient records. It is now
in use for cardiothoracic surgery, percutaneous coronary procedures and other such cases,
and can answer such queries in minutes [39].

5 Surgical VR and Opportunities of CI

Real-time surgical motion sensing is a current trend. A recent development in
this field is the automatic capture of a surgeon's motion and the implementation of this
tracking and training system on a robot. Surgical simulators now come with sensing
and recording systems that automatically record surgical motion [40–42].
This has created huge opportunities, such as automatic objective analysis of
a surgery and of training progress. The technology helps a doctor
acquire more skill, thereby decreasing complications for the patient [43]. Automated
surgical-skill assessment is highly important in healthcare and is a firm
step toward building surgical data science [44].

5.1 The JIGSAWS Model

Surgical skill evaluation has four primary objectives: (1) skill evaluation, (2)
gesture classification, (3) gesture segmentation and (4) task recognition. Spatiotemporal
characteristics given by the two-thirds power law [45] and the one-sixth power law [46]
are used for extracting features from kinematic data. In a similar context, reinforcement
learning is also useful for enhancing skill [47], and recorded video data can be
input to a deep learning system to estimate the position and accuracy of a surgical
robot [48–50].
A surgical process is primarily characterized by kinematic data. To classify a surgical
task, a k-nearest-neighbor classifier can be used with Dynamic Time Warping
[51, 52]. Gesture classification likewise requires gesture boundaries and labels.
Spatio-temporal features and Linear Dynamical Systems (LDS)
are used to classify gestures [53]; an LDS can classify surgical gestures from the
kinematic data, a condition tested against Gaussian Mixture Models. Dynamic
Time Warping is also helpful in gesture classification; in this process, an auto-encoder
is used with Dynamic Time Warping to align the extracted features.
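The k-nearest-neighbor/Dynamic Time Warping approach can be sketched with a plain-Python DTW distance and a 1-NN rule. The "trials" below are 1-D toy traces; real JIGSAWS trials carry 76 kinematic variables, so in practice the distance is computed over multivariate sequences.

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best warping path aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(query, labeled):
    """1-nearest-neighbor under DTW: label of the closest labeled trial."""
    return min(labeled, key=lambda lt: dtw(query, lt[1]))[0]

# Toy 'trials': a slow ramp vs. an oscillation, standing in for two tasks.
trials = [
    ("suturing",   [0, 1, 2, 3, 4, 5]),
    ("knot-tying", [0, 2, 0, 2, 0, 2]),
]
print(classify([0, 1, 1, 2, 3, 4, 5], trials))  # suturing
```

DTW's warping is what lets a slower execution of the same task still match its template, which is exactly why it suits surgical trials performed at different speeds.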
Figure 15 shows trials that last up to 2 min. A trial is represented by the
kinematic data (master and slave manipulators) of a surgical robot, recorded at
30 Hz. The data comprise 76 variables describing motion, position and velocity of the
master and slave manipulators. This is the JIGSAWS dataset, which was manually segmented
into 15 surgical gestures. The system is also able to synchronize video of a trial with the
kinematic data.
Figure 16 shows, for the Suturing task of the JIGSAWS dataset, the two individual
5th trials of subjects B (Novice) and E (Expert), using (x, y, z) coordinates for the
right hand.

6 Human Computer Interface in CI Based VR

A virtual environment should provide real-life imagery and sensation for a proper interactive
system. There is a constant drive in image processing to improve the quality of

Fig. 15 Snapshots of the three surgical tasks in the JIGSAWS dataset (from left to right): suturing,
knot-tying, needle-passing [58]

Fig. 16 The contactless control interface (Leap Motion) (top) and the RAVEN-II robot (bottom)
for surgical training [59]

medical images [2]. Knowledge of the sensory mechanisms of a normal
human being is still naïve; the analysis of tactile sensation, for example, is very
complicated. Hence, touch/sensory devices are only at the prototype stage, and complicated
surgery, such as cutting an organ by hand, is yet to be simulated.
A smart home with health-monitoring technology is considered a route to healthier
outcomes, better cognitive performance and behavioral improvement
[54]. The world is moving fast toward an aged population, and a smart home with
the necessary healthcare monitoring mechanisms promotes a better quality of life
at reduced healthcare cost. Earlier research suggests traditional methods to predict
an individual's mental condition and behavioral features and to screen for neurological
conditions [55, 56]. A smart home with monitoring technology can track changes in health on a
daily basis and detect early disease symptoms, providing better healthcare and
enhancing well-being [6].

6.1 Computer-Aided Design (CAD) of an Imitated Model (Design for an Artificial Body Part)

CAD is useful for medical image reconstruction and related imagery, and is also helpful
in creating a 3D structure of any body part. Consider, for example, hipbone replacement
surgery: before the surgery is carried out, a 3D image with the proper dimensions and
shape is created and measured. With this process, the success of the surgical
procedure increases greatly, and the chance of re-operation due to an
unsuitable artificial hipbone is eliminated.

6.2 Test and Treatment for Mental Sickness

A person's mental condition can also be examined using VR by comparing images
captured in a real environment with virtual figures. Acrophobia and xenophobia can be
treated with VR technology by creating a virtual environment that triggers the
patient's extreme reaction and in which the patient is repeatedly placed, so that the
process achieves a therapeutic effect.

6.3 Improvement for Treatment Safety

Radiotherapy is one of the riskier treatments, in which a doctor can rely only on
experience to set the radiation dose, and a patient always worries about being
over-dosed. Using VR, a doctor can perform a radiation experiment on a virtual human
under predefined conditions and then decide the actual dose for the real patient,
increasing patient safety. In addition, a virtual environment protects the
doctor from being exposed to radiation.

7 Advantages of VR Techniques

VR is useful for learning new technologies and methodologies, and will eventually
replace traditional medical experiments and impart new teaching mechanisms. It
provides an alternative, interactive way of studying human anatomy; Vesalius,
Duke University's Internet resource for surgical education, and the brain atlas of
Harvard University are among the most famous virtual medical multimedia teaching
resources [18]. VR technology can provide a simulated workbench environment in
which doctors work with a 3D image of the human body, learn how to deal with actual
clinical procedures, and practice surgery on a virtual human body that they can
experience as real. With feedback from expert professionals, the VR system can add
new dimensions to the surgical workflow, and the process can be made recursive:
once a surgical procedure is complete, the VR system can evaluate it against various
parameters and standards. In treatment, VR supports all channels of 3D display and
shared surgery, increasing success rates in complicated surgeries [20], and it enables
proper analysis of diseased organs and surrounding tissues so that redundant invasive
diagnosis is avoided [21]. Finally, new drug creation is one of the latest applications
of VR: a molecule's complicated 3D structure is difficult to translate onto a 2D
display, but in VR the natural, visible 3D environment of a compound's molecular
structure can be viewed and the interaction traits of a molecule determined.

8 Conclusion

The technologies discussed in this chapter are very effective, and they will probably
develop in the manner described here; technologies with greater impact than those
currently in use may also emerge in the future. We now have cutting-edge
information tools that have revolutionized the fundamentals of healthcare and patient
care. The tools and techniques that exist today are based on knowledge
and demonstration, yet there remains a constant need to evaluate
these technologies and concepts against demonstrated scientific evidence.
This process will increase the endurance of the technology we use today. The
powerful ideas of healthcare and patient care must never be discarded because of
preconceptions rooted in the Industrial Age.

References

1. Stanney, K. M. (2000). Handbook of virtual environments. In K. M. Stanney (Ed.), Handbook
of virtual environments: Design, implementation and applications (pp. 301–302). Mahwah, NJ:
Lawrence Erlbaum Associates, Inc.
2. Pulijala, Y., Ma, M., Pears, M., Peebles, D., & Ayoub, A. (2018). An innovative virtual reality
training tool for orthognathic surgery. International Journal of Oral and Maxillofacial Surgery,
47(9), 1199–1205.
3. Sik Lanyi, C. (2006). Virtual reality in healthcare. In A. Ichalkaranje, et al. (Eds.), Intelligent
paradigms for assistive and preventive healthcare (pp. 92–121). Berlin: Springer.
4. Yates, M., Kelemen, A., & Sik-Lanyi, C. (2016). Virtual reality gaming in the rehabilitation of
the upper extremities post-stroke. Brain Injury, 30(7), 855–863. https://doi.org/10.3109/
02699052.2016.1144146.
World of Virtual Reality (VR) in Healthcare 21

5. Tagaytayan, R., Kelemen, A., & Sik-Lanyi, C. (2016). Augmented reality in neurosurgery.
Archive of Medical Science. https://doi.org/10.5114/aoms.2016.58690. Published online: 22
March 2016.
6. Mazur, T., Mansour, T. R., Mugge, L., & Medhkour, A. (2018). Virtual reality–Based simulators
for cranial tumor surgery: A systematic review. World Neurosurgery, 110, 414–422.
7. Tractica from https://www.tractica.com/wpcontent/uploads/2015/09/VREI-15-Brochure.pdf.
Last Accessed September 27, 2018.
8. Medical Realities http://www.medicalrealities.com. Last Accessed September 26, 2018.
9. VR healthnet http://healthnet.com. Last Accessed September 26, 2017.
10. Chada, B. V. (2017). Virtual consultations in general practice: embracing innovation, carefully.
British Journal of General Practice, 264.
11. Kaffash, J. (2017, June 10). Average waiting time for GP appointment increases 30% in a
year. Pulse 2016. http://www.pulsetoday.co.uk/yourpractice/access/average-waiting-timefor-
gpappointment-increases-30-in-a-year/20032025. Last Accessed April 25, 2018.
12. Greenhalgh, T., Vijayaraghavan, S., Wherton, J., et al. (2016). Virtual on-line consultations:
advantages and limitations (VOCAL) stud. British Medical Journal Open, 6(1), e009388.
13. WHO. (2011). mHealth New horizons for health through mobile technologies, Global Obser-
vatory for eHealth series—Volume 3, WHO library cataloguing-in-publication data. http://
www.who.int/goe/publications/goe_mhealth_web.pdf. Last Accessed October 4, 2017.
14. Yountae, L., & Hyejung, C. (2012). Ubiquitous health in Korea: Progress, barriers, and
prospects. Healthcare Informatics Research, 18(4), 242–251. https://www.ncbi.nlm.nih.gov/
pmc/articles/PMC3548153/#B13. Last Accessed October 4, 2018.
15. Kang, S. W., Lee, S. H., & Koh, Y. S. (2007). Emergence of u-Health era. CEO Inf (602), 1–4.
16. Ganrya, L., Hersanta, B., Sidahmed-Mezia, M., Dhonneurb, G., & Meningauda, J. P. (2018).
Using virtual reality to control preoperative anxiety in ambulatory surgery patients: A pilot
study in maxillofacial and plastic surgery. Journal of Stomatology, Oral and Maxillofacial
Surgery, 119(4), 257–261.
17. Quero, G., Lapergola, A., Soler, L., Shabaz, M., Hostettler, A., Collins T, et al. (2019). Virtual
and augmented reality in oncologic liver surgery. Surgical Oncology Clinics of North America,
28(1), 31–44.
18. Qiumingguo, & Shaoxiang, Z. (2003). Development of applications is boundless. Computer
World, 2003.
19. Tanglei. (2001). Virtual surgery. Retrieved from http://www.sungraph.com.cn/, July 2001.
20. Hua, Q. (2004). The applications of VR in medicine. Retrieved from http://www.86vr.com/apply,
October 2004.
21. Vince, J. (2002). Virtual reality systems. Boston: Addison Wesley Publishing.
22. Zajac, F. R., & Delp, S. L. (1992). Force and moment generating capacity of lower limb muscles
before and after tendon lengthening. Clinical Orthopaedic Related Research, 284, 247–259.
23. Satava, R. M. (1993). Virtual reality surgical simulator: The first steps. In Surgical endoscopy
(vol. 7, pp. 203–205).
24. Merril, J. R., Merril, G. L., Raju, R., Millman, A., Meglan, D., Preminger, G. M., et al. (1995).
Photorealistic interactive three-dimensional graphics in surgical simulation. In R. M. Satava,
K. S. Morgan, H. B. Sieburg, R. Masttheus, & J. P. Christensen (Eds.), Interactive technology
and the new paradigm for healthcare (pp. 244–252). Washington, DC: IOS Press.
25. Spitzer, V. M., & Whitlock, D. G. (1992). Electronic imaging of the human body. Data storage
and interchange format standards. In M. W. Vannier, R. E. Yates, & J. J. Whitestone (Eds.),
Proceedings of Electronic Imaging of the Human Body Working Group (pp. 66–68).
26. Meglan, D. A., Raju, R., Merril, G. L., Merril, J. R., Nguyen, B. H., Swamy, S. N., & Higgins,
G. A. (1995). Teleos virtual environment for simulation-based surgical education. In R. M.
Satava, K. S. Morgan, H. B. Sieburg, R. Masttheus, & J. P. Christensen (Eds.), Interactive
technology and the new paradigm for healthcare (pp. 346–351). Washington, DC: IOS Press.
27. Raibert, M. A. Personal communication.

28. Lorensen, W. E., Jolesz, F. A., & Kikinis, R. (1995). The exploration of cross-sectional data
with a virtual endoscope, In R. M. Satava, K. S. Morgan, H. B. Sieburg, R. Masttheus, & J. P.
Christensen (Eds.), Interactive technology and the new paradigm for healthcare (pp. 221–230).
Washington, DC: IOS Press.
29. Geiger, B., & Kikinis, R. (1994). Simulation of endoscopy. In Proceedings of AAAI Spring
Symposium Series: Applications of Computer Vision in Medical Images Processing (pp. 138–
140). Stanford: Stanford University.
30. Buchanan, B. G., & Shortliffe, E. H. (1984). Rule-based expert systems. In The Mycin
experiments of the Stanford heuristic programming project. Boston: Addison-Wesley.
31. Heckerman, D. E., Horvitz, E. J., & Nathwani, B. N. (1992). Toward normative expert systems:
The pathfinder project. Methods of Information in Medicine, 31(2), 90–105.
32. Kang, K. W. (2012). Feasibility of an automatic computer-assisted algorithm for the detection
of significant coronary artery disease in patients presenting with acute chest pain. European
Journal Radiology, 81(4), 640–646.
33. Zhang, Y., & Szolovits, P. (2008). Patient specific learning in real time for adaptive monitoring
in critical care. Journal of Biomedical Informatics, 41(3), 452–460.
34. Saria, S. (2010). Integration of early physiological responses predicts later illness severity in
preterm infants. Science Translational Medicine, 2(48), 48–65.
35. Wiens, J., Guttag, J. V., & Horvitz, E. (2012). Patient risk stratification for hospital associated
C. difficile as a time-series classification task. In Advances in Neural Information Processing
Systems (NIPS) (Vol. 25, pp. 247–255).
36. Levin, S. R. (2012). Real-time forecasting of pediatric intensive care unit length of stay using
computerized provider orders. Critical Care Medicine, 40(11), 3058–3064.
37. Ferrucci, D. (2010). Building Watson: An overview of the DeepQA project. AI Magazine,
31(3), 59–79.
38. Kohn, M. S., & Skarulis, P. C. (2012). IBM Watson delivers new insights for treatment and
diagnosis. In Digital Health Conference, Presentation.
39. Lenat, D. (2010). Cyc to answer clinical researchers’ Ad Hoc Queries. AI Magazine, 31(3),
13–32.
40. Tsuda, S., Scott, D., Doyle, J., & Jones, D. B. (2009). Surgical skills training and simulation.
Current Problems in Surgery, 46(4), 271–370.
41. Forestier, G., Petitjean, F., Riffaud, L., & Jannin, P. (2015). Optimal sub-sequence matching for
the automatic prediction of surgical tasks. In AIME 15th Conference on Artificial Intelligence
in Medicine, 9105 (pp. 123–32).
42. Forestier, G., Petitjean, F., Riffaud, L., & Jannin, P. (2017). Automatic matching of surgeries
to predict surgeons’ next actions. Artificial Intelligence Medicine, 2017(81), 3–11.
43. Dlouhy, B. J., & Rao, R. C. (2014). Surgical skill and complication rates after bariatric surgery.
New England Journal of Medicine, 370(3), 285.
44. Maier-Hein, L., Vedula, S. S., Speidel, S., Navab, N., Kikinis, R., & Park, A. (2017). Surgical
data science for next-generation interventions. Nature Biomedical Engineering, 1(9), 691.
45. Shafiei, S. B., Cavuoto, L., & Guru, K. A. (2017). Motor skill evaluation during robot-
assisted surgery. In International Design Engineering Technical Conferences & Computers
and Information in Engineering Conference IDETC/CIE 2017. Cleveland, Ohio, USA.
46. Sharon, Y., & Nisky, I. (2017). What can spatiotemporal characteristics of movements in
RAMIS tell us? ArXiv e-prints 2017.
47. Li, K., & Burdick, J. W. (2017). A function approximation method for model-based high-
dimensional inverse reinforcement learning. ArXiv e-prints:1708.07738.
48. Marban, A., Srinivasan, V., Samek, W., Fernandez, J., & Casals, A. (2017). Estimating position
& velocity in 3d space from monocular video sequences using a deep neural network. In The
IEEE International Conference on Computer Vision (ICCV) 2017.
49. Rupprecht, C., Lea, C., Tombari, F., Navab, N., & Hager, G. D. (2016). Sensor substitution for
video based action recognition. IEEE/RSJ International Conference on Intelligent Robots and
Systems (IROS), 2016, 5230–5237.

50. Sarikaya, D., Corso, J. J., & Guru, K. A. (2017). Detection and localization of robotic tools
in robot assisted surgery videos using deep neural networks for region proposal and detection.
IEEE Transactions on Medical Imaging, 36(7), 1542–1549.
51. Fard, M. J., Pandya, A. K., Chinnam, R. B., Klein, M. D., & Ellis, R. D. (2017). Distance-
based time series classification approach for task recognition with application in surgical robot
autonomy. International Journal of Medical Robotics and Computer Assisted Surgery, 13(3),
e1766.
52. Bani, M. J., & Jamali, S. (2017). A new classification approach for robotic surgical tasks
recognition. ArXiv e-prints:1707.09849.
53. Ahmidi, N., Tao, L., Sefati, S., Gao, Y., Lea, C., & Bejar, B. (2017). A dataset and benchmarks
for segmentation and recognition of gestures in robotic surgery. IEEE Transactions on Biomedical
Engineering.
54. Alemdar, H., & Ersoy, C. (2010). Wireless sensor networks for healthcare: A survey. Computer
Network, 54(15), 2688–2710.
55. Esposito, A., Esposito, A. M., Likforman-Sulem, L., Maldonato, M. N., & Vinciarelli, A.
(2016). On the significance of speech pauses in depressive disorders: results on read and
spontaneous narratives. In Recent advances in nonlinear speech processing (pp. 73–82). Berlin:
Springer.
56. Tsanas, A., Little, M. A., McSharry, P. E., & Ramig, L. O. (2010). Accurate tele-monitoring of
Parkinson’s disease progression by noninvasive speech tests. IEEE Transactions on Biomedical
Engineering, 57(4), 884–893.
57. Virtual and augmented reality software revenue from https://www.statista.com/chart/4602/
virtual-and-augmented-realitysoftware-revenue/. Last Accessed September 27, 2018.
58. Gao, Y., Vedula, S. S., Reiley, C. E., Ahmidi, N., Varadarajan, B., & Lin, H. C. (2014). JHU-ISI
gesture and skill assessment working set (JIGSAWS): A surgical activity dataset for human
motion modeling. In Modeling and Monitoring of Computer Assisted Interventions (M2CAI)-
MICCAI Workshop (pp. 1–10).
59. Despinoy, F., Bouget, D., Forestier, G., Penet, C., Zemiti, N., & Poignet, P. (2016). Unsupervised
trajectory segmentation for surgical gesture recognition in robotic training. IEEE Transactions
on Biomedical Engineering, 2016, 1280–1291.
Towards a VIREAL Platform: Virtual
Reality in Cognitive and Behavioural
Training for Autistic Individuals

Sahar Qazi and Khalid Raza

Abstract VIREAL, or Virtual Reality (VR), is a two-way experience, created using
computers, that takes place in a simulated environment encapsulating vocal, visual and
sensory feedback. This computer-generated virtual world looks so similar to the
real world that a person cannot distinguish between the two. With the development of
computational technologies and techniques, virtual reality has become a powerful aid
in eliminating loopholes along the path of research. Autism, a neurological cognitive
and behavioural disorder marked by problems with social interaction
and communication in children, can be treated effectively with
VIREAL, which appears to be a compassionate platform for healthcare, and especially
for Autism Spectrum Disorder (ASD) and related psychiatric disorders. Many scientific
studies have lately shown the benefits of using virtual reality for patients with High
Functioning Autism (HFA) or for people with interaction difficulties. Manufacturers
have kept software enhancements and the affordability of VIREAL gadgets in mind
so that this magnanimous therapeutic experience can be used by
everyone. It is also a very practical therapeutic tool that distracts patients from
severe pain. VIREAL is a friendly approach that holds enormous potential in the
clinical prognosis and treatment sector. VIREAL-based techniques are doing wonders
for autistic children, their parents and their consultants. With all these benefits,
there are some limitations to VIREAL platforms, since most parents are not comfortable
with them, mainly because of parental concerns, and at times the children may develop
a fright from viewing such virtual environments, limiting their growth and understanding.
However, with the rapid progress of the VR industry, VIREAL devices intelligently extract
emotional-response knowledge and thus give children an appropriate rationale for reacting
to any scenario; with that knowledge they can approximate the mental and emotional status
of the child, eventually leading to healthy and happy learning for children with Autism.

Keywords VIREAL · Autism · Applied behaviour analysis (ABA) · Verbal behaviour analysis (VBA) · Picture exchange communication system (PECS)

S. Qazi · K. Raza (B)


Department of Computer Science, Jamia Millia Islamia, New Delhi 110025, India
e-mail: kraza@jmi.ac.in

© Springer Nature Switzerland AG 2020 25


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_2
26 S. Qazi and K. Raza

Abbreviations

ABA Applied behaviour analysis
AI Artificial intelligence
AR Augmented reality
ASD Autism spectrum disorder
CBT Cognitive behavioral therapy
CNN Convolutional neural network
DTT Discrete trial training
GPU Graphical Processing Unit
HFA High functioning autism
HMD Head-mounted display
LEEP Large expanse extra perspective
ML Machine learning
PECS Picture exchange communication system
PTSD Post-traumatic stress disorder
RDI Relationship development intervention
SLAM Simultaneous localization and mapping
VADIA VR adaptive driving intervention architecture
VBA Verbal behaviour analysis
VET Virtual environment theatre
VIREAL Virtual reality

1 Introduction

VIREAL, or simply virtual reality (VR), is a two-way experience, developed with the
use of computers, that takes place in a simulated milieu encapsulating vocal, visual
and sensory feedback. This computer-generated virtual world looks so similar
to the real world that a person cannot distinguish between the two. Virtual reality
creates experiences that are impossible in ordinary reality and that are
fascinating for the individual. Autism is a neurological cognitive and behavioural
disorder marked by problems with social interaction and communication
in children. Parents of such children observe the signs and symptoms within the first
2–3 years of their offspring's life. Autism is one of those psychiatric disorders
that is still new to the literature and the medical fraternity, and not much research
has been carried out on it. In order to develop a lucid understanding of the
disease, many researchers are pursuing a multi-faceted strategy employing a range of
psychological and computational techniques. One of the latest and best approaches
for ASD and related disorders has come with the introduction of VIREAL.
With the development of computational technologies and techniques, virtual reality
has become a powerful aid in eliminating loopholes along the path of research.
VIREAL seems to be a compassionate platform for healthcare, and especially for
Towards a VIREAL Platform: Virtual Reality in Cognitive … 27

Autism Spectrum Disorder (ASD) and related psychiatric disorders. The main limitation
is the lack of evidence for its efficacy in such disorders and for its implementation.
Many scientific studies have lately shown the benefits of using virtual reality
for patients with High Functioning Autism (HFA) or people with interaction
difficulties [1–4]. Social training using virtual reality has proved
beneficial compared with traditional social-skills training, such as simple
emotion recognition or role play, for the following reasons [1, 2, 5–9]:
(i) It has the potential to maintain a secure, free, regular real-life scenario for social
interactions. It has helped to decrease social anxiety when combined with
Cognitive Behavioral Therapy (CBT).
(ii) It also gives people the opportunity to repeatedly encounter dynamic
social interactions, which yields an extraordinary therapeutic benefit because no two
interaction sessions are ever the same, drawing responses from a multi-varied
training session. These dynamic interaction sessions have been able to facilitate
the enhancement of social communication skills for everyday life tasks.
(iii) Furthermore, it maintains a secure and supportive milieu that
helps autistic individuals make fewer errors and aids them in interacting
without fear or anxiety, whereas person-to-person interactions usually frighten such
individuals with the prospect of rejection.
(iv) VIREAL interaction sessions provide a manageable environment that is
concerned with every individual's needs and wants and is capable of taking
feedback from individuals so that it can learn and further improve its
performance.
(v) VR provides an interactive, learning and personalised platform for autistic
individuals and helps them to live a normal life in today's rat race!

1.1 VIREAL: Decoding the Terminology

VIREAL is an amalgam of two words, virtual and reality, where 'virtual' refers to
having the essence of something without being it in fact. It was in 1938 that Antonin
Artaud first explained the delusive and deceptive character of the term "virtual
reality" in his collection of essays "la réalité virtuelle" [10]. VIREAL is somewhat
related to 'Augmented Reality' (AR), an interactive and dynamic experience
of a real-world milieu in which the objects that exist in the real world
are added to, or augmented by, computer-generated perceptual information composed
of visual, verbal, olfactory, haptic and somatosensory features [11]. These additional
features, generated using special software, enhance the virtual milieu and
provide an extraordinary experience to the user. Many AR-based systems,
such as Microsoft's HoloLens and Magic Leap, use cameras in order to capture
the user's environment.

1.2 Historical Background of VIREAL

Before the 1950s, the origin of VIREAL concepts was vague and much disputed,
since it was difficult to come up with an exact description of the concept [12]. It
was Antonin Artaud who described the illusory aspect of virtual reality in his
stage theatre plays [13]. Science fiction has played a pivotal role in describing
modern VIREAL concepts. During the 1950s–70s, Morton Heilig described an
"Experience Theatre" that would encapsulate all the senses of the real world on screen.
He built a prototype of his idea, the digital computational device Sensorama, in 1962,
together with five short films to be screened in it engaging multiple senses.
He also created the 'Telesphere Mask', essentially a
telescopic television apparatus for personalised use, patented in
1960. The device gave a sensation of reality through 3D images, which may or may not
be in colour, together with sounds, aromas and cool breezes [14]. Around the
same time, Douglas Engelbart employed computer screens as input and
output devices, while in 1968 Ivan Sutherland, together with his student, devised
the first ever Head-Mounted Display (HMD) system for simulation purposes; it
offered both a friendly user interface and a touch of reality, and its VIREAL graphics
were simple wire-frame models. The only disadvantage was that the HMD
worn by the user was quite heavy and had to be suspended from the ceiling. From 1970
to 1990, the VIREAL community delivered virtual reality devices and tools for medicine,
flight simulation, the automobile industry and military training [15]. At
NASA's Jet Propulsion Laboratory (JPL), the American artist David Em was the first
to develop navigable virtual worlds (1977–1984) [16]. In 1978, MIT
created the Aspen Movie Map, a program that provided a crude virtual simulation of
Aspen, Colorado, in which users could move through the streets in one of three
modes: summer, winter, or polygons.
Back in 1979, Eric Howlett devised the 'Large Expanse, Extra Perspective'
(LEEP) optical system; the original system was recreated in 1985 for the NASA Ames
Research Center's VIREAL installation carried out by Scott Fisher. LEEP could
create a stereoscopic image with a field of view wide enough to give a reliable
sense of space, allowing users a deep, reality-like sensation of immersion.
The system forms the basis for most of the virtual reality helmets on the market
today [17]. By the 1980s the term "virtual reality" was on the lips of the public
because of Jaron Lanier, one of the modern pioneers of the field, who developed
several VIREAL devices such as the Data Glove, the EyePhone, and the Audio Sphere [18].
Between 1989 and 1992, Nicole Stenger created Angels, the first real-time
interactive immersive film; interaction was made possible with
a data glove and high-resolution goggles. Then in 1992, the researcher
Louis Rosenberg developed the Virtual Fixtures system at the U.S. Air Force's
Armstrong Labs, using a full upper-body exoskeleton to provide a
physically realistic 3D virtual experience; this produced the first
veritable VIREAL experience combining vision, sound, and sensation [19]. The

1990s saw widespread production and release of consumer headsets. In 1991, Sega
announced the Sega VR headset for arcade games and the Mega Drive console; it used
LCD screens in the visor, stereo headphones, and inertial sensors, allowing the
system to track, record and react to the movements of the user's head [20].
Nintendo released the Virtual Boy in Japan in July 1995 and in North America in
August 1995 [21]. In the same year, a cave-like 270-degree projection theatre,
the Virtual Environment Theatre (VET), produced by Chet Dagit and Bob Jacobson,
was created for public demonstrations in Seattle [22]. The entrepreneur Philip
Rosedale formed Linden Lab with the motivation of developing VIREAL hardware [23].
In 2001, Z-A Production developed the first PC-based cubic room, the SAS Cube
(SAS3), which was installed in Laval, France. In 2007, Google introduced Street
View, a service that shows panoramic scenes of an increasing number of worldwide
locations such as streets, building interiors and rural regions; a stereoscopic
3D mode was added in 2010 [24].
By 2016, around 230 companies were developing VIREAL-based products.
Facebook, the most popular online social network, focuses on VIREAL platform
development, and Google, Apple, Amazon, Microsoft, Sony, Samsung and others are
also working hard on introducing and developing VIREAL and Augmented Reality
platforms [25]. In 2017, Sony announced that it was developing location-tracking
technology for its PlayStation VR platform [26].

1.3 Day-to-Day Applications of VIREAL

VIREAL is a branch of information technology (IT) that has had a real impact
on human lives. For this very reason VIREAL has become popular and widely
successful on application-development platforms. VIREAL-based technologies are
now drawing interest in various day-to-day activities. For a VIREAL experience,
one needs an HMD and data gloves with an integrated tracking system; with this
basic apparatus, the user is ready to feel and live the VIREAL life [27–33].
Some of the day-to-day applications are shown in Fig. 1.
Currently, researchers and software developers at Vanderbilt University have started
VIREAL-based driving classes for autistic individuals under the name "Vanderbilt VR
Adaptive Driving Intervention Architecture" (VADIA), aimed at adolescents and adults
with autism. VADIA helps them learn basic driving and road etiquette: it can create
different driving situations in basic, medium and difficult modes, thereby helping
individuals learn to drive safely in any situation [34].
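The graded basic/medium/difficult modes described for VADIA can be pictured as a scenario table keyed by difficulty tier, with the learner promoted to a harder tier only after an error-free session. The sketch below is purely illustrative: the scenario names, fields and promotion rule are invented for the example and are not VADIA's actual data model.

```python
import random

# Hypothetical driving scenarios grouped into VADIA-style difficulty tiers.
SCENARIOS = {
    "basic": ["empty residential street", "single stop sign"],
    "medium": ["light traffic", "four-way intersection"],
    "difficult": ["night driving in rain", "merging onto a highway"],
}

def next_scenario(errors_last_session: int, current_tier: str) -> str:
    """Pick a scenario for the next session: promote the learner to the
    next harder tier after an error-free session, otherwise stay put."""
    tiers = list(SCENARIOS)  # ["basic", "medium", "difficult"]
    i = tiers.index(current_tier)
    if errors_last_session == 0 and i < len(tiers) - 1:
        i += 1  # graduate to the next difficulty mode
    return random.choice(SCENARIOS[tiers[i]])

# An error-free "basic" session yields a scenario drawn from the "medium" tier.
print(next_scenario(errors_last_session=0, current_tier="basic"))
```

The point of the design is that difficulty rises only on demonstrated competence, mirroring the text's idea of letting individuals learn to drive safely in progressively harder situations.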

Fig. 1 Applications of virtual reality

2 Autism and VIREAL

Autism is a psychiatric disorder to which VIREAL-based interactive therapies have
paid particular attention. Initially, a sandbox-play technique was used for
special-care children, and the study found it quite difficult to apply with such
children. Another study, at the University of Nottingham, United Kingdom, which used
VIREAL-based strategies, found them useful not only for autism but for many other
complex psychiatric disorders as well. Autism is a serious disorder, since it is very
difficult to understand such children and their expressions.
Manufacturers have kept software enhancements and the affordability of VIREAL
gadgets in mind so that this magnanimous therapeutic experience can be used by
everyone, and not just the wealthy. It is also a very practical therapeutic gadget
that distracts patients from severe pain. VIREAL is a friendly approach that holds
enormous promise in the clinical prognosis and treatment sector

[35]. Some of the applications of VIREAL gadgets being used for the treatment of
autism are listed in Table 1. These gadgets are usually simple computers that
create a realistic environment for children and help them to attain focus and
attention, perform any task without errors, identify emotions and the social
Table 1 Applications of VIREAL gadgets in ASD treatment [36]


VIREAL gear Number of Age Reliant variables Descriptions
subjects (years)
HMD 14–21x 2 7.5–9 • Completion of task Easy to wear
(3–5 min) • Attention and focus helmets for
children
HMD 40x 5 2 7.5–9 • Identify the virtual Easy to use and
min for objects wear
6 weeks
Computer 36 13–18 • Understanding virtual Children easily
monitor with a milieu grasped the
mouse • Error-less performance essentials of the
tool. An improved
performance of
children was
observed
Computer 34 13–18 • Performance A very few
monitor with a • Understanding and children could
joystick and a explanation of understand the
mouse participants virtual
environment, the
rest were simply
least interested
and not attentive
Computer 34 7.8–16 • Identification of Most patients
monitor with emotions were able to
mouse recognize and
understand
emotions easily
Computer 7 14–16 • Social skills Improved
monitor with • Verbal etiquette behavior and
mouse for • Social milieu social skills in
30–50 min behavioural children
understanding
Touch screens 2 8–15 • Understanding Improved
symbolism functioning,
• Enhanced imagination understanding and
creative
imagination in
children was
observed
Computer 3 7–8 • Emotional intelligence –
monitor with • Understanding gestures
mouse for such as eye contact
30–40 min

environment, develop vivid imagination, and understand emotional, social and
environmental behaviour [36].
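The dependent variables reported for these gadgets in Table 1 (task completion, attention, error-free performance) amount to simple per-session measurements that can be aggregated across sessions. The sketch below illustrates one way such records might be structured; the field names and scoring scheme are hypothetical, invented for the example rather than taken from any of the cited studies.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    child_id: str
    task: str
    completed: bool
    attention_score: float  # 0.0 to 1.0, e.g. derived from gaze tracking
    errors: int

def summarize(sessions: list) -> dict:
    """Aggregate per-session measurements into the kind of dependent
    variables listed in Table 1 (rates over all recorded sessions)."""
    return {
        "completion_rate": mean(1.0 if s.completed else 0.0 for s in sessions),
        "mean_attention": mean(s.attention_score for s in sessions),
        "error_free_rate": mean(1.0 if s.errors == 0 else 0.0 for s in sessions),
    }

# Three hypothetical sessions for one child.
log = [
    Session("c01", "identify emotion", True, 0.8, 0),
    Session("c01", "identify emotion", True, 0.6, 2),
    Session("c01", "virtual object search", False, 0.5, 1),
]
print(summarize(log))
```

Tracking such rates over weeks is what lets a study report outcomes like "improved performance of children was observed" in quantitative terms.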

2.1 Common Teaching Techniques for Autistic Children

The common teaching techniques for autistic children are as follows [37]:
(a) Applied Behaviour Analysis and Verbal Behaviour Analysis: Applied
Behaviour Analysis (ABA) is a modus operandi by which behaviour, including
speech, education, and academic and life skills, can be taught using scientific
principles. The approach assumes that children repeat behaviour that includes a
"bait" (a reward) and that autistic children are less likely to continue behaviour
that offers no bait. Reinforcement is gradually reduced so that children can learn
without any bait. A commonly practised form of ABA is Discrete Trial Training (DTT),
in which life skills such as eye contact, imitation and mimicking, self-help and
conversation are broken into small chunks and then taught separately to autistic
children. Another ABA approach is Errorless Learning, in which the therapist rewards
children with a present for every good response and prompts after every negative
response; the child is never told "no" outright. Instead, the therapist guides the
child towards the correct response.
Verbal Behaviour Analysis (VBA) is a state-of-the-art variant of ABA that
uses B. F. Skinner's 1957 analysis of verbal behaviour to teach life skills and
communication to autistic children. The VBA approach is focused on making children
understand that speaking and communicating will help them get what they want. It
is a more natural teaching technique than ABA.
(b) Relationship Development Intervention: Relationship Development Intervention
(RDI) is a parent-based clinical therapy that aims to treat autism at its roots.
Autistic children usually prefer to be aloof, the obvious reason being a lack of
communication. Communicating and exchanging personal experiences with others are
everyday life skills that make people feel connected to, and lively in, the world.
Emotional intelligence, the expression of one's true feelings, both good and bad,
is something that is often skipped when training autistic children. This approach
helps children to interact positively with other people, even without language.
The whole idea is to let children "express" themselves openly so that they feel
light and can enjoy being around people.
(c) Sensory Integration Therapy: Sensing stimuli is one of the ways children learn
about the world. Children with autism have difficulty calmly processing noise,
touch, sight, smell and/or movement. Autistic children may or may not respond to
these senses and are thus sometimes mistaken for being deaf, mute or blind. If
children cannot distinguish between these senses, or have difficulty responding to
them, they are typically diagnosed with sensory integration dysfunction, which is
very common among autistic children. Occupational therapists are trained specialists
who use sensory techniques that engage such children in joyful activities, helping
them to process the information they receive from their senses. The main aim of this
technique is not to 'teach' but to allow children to focus on their senses and act
accordingly.
(d) EduBOSS: EduBOSS, adapted from BOSS GNU/Linux, is a collection of
education-oriented applications for children with special needs such as autism.
It covers subjects like Maths, Science and Social Studies, along with their
tests, quizzes, supportive material, etc. [38].
(e) TEACCH: Treatment and Education of Autistic and related Communication-
handicapped CHildren (TEACCH) is a structured classroom approach with different
areas set aside for different purposes. It relies heavily on observational
learning. Images or verbal cues are used to build a daily time-schedule for
autistic children so that they can easily accomplish their assigned tasks
of the day [39].
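As a small illustration of the schedule-driven structure TEACCH relies on, the sketch below models a picture-based daily schedule as an ordered task list. The class and task names are hypothetical; TEACCH itself is a classroom methodology, not a software package.

```python
# Illustrative sketch of a TEACCH-style picture schedule.
# All names here are hypothetical examples, not part of any real TEACCH tool.

class PictureSchedule:
    """An ordered list of picture-labelled tasks for one day."""

    def __init__(self, tasks):
        self.tasks = list(tasks)              # e.g. ["breakfast", "circle time"]
        self.done = [False] * len(self.tasks)

    def next_task(self):
        """Return the first task not yet completed, or None when the day is over."""
        for task, finished in zip(self.tasks, self.done):
            if not finished:
                return task
        return None

    def complete(self, task):
        """Mark a task finished (the child moves its picture to the 'done' strip)."""
        self.done[self.tasks.index(task)] = True

schedule = PictureSchedule(["breakfast", "brush teeth", "circle time", "art"])
schedule.complete("breakfast")
print(schedule.next_task())  # -> brush teeth
```

The point of the design is that the child always sees exactly one "next" picture, mirroring how TEACCH schedules reduce ambiguity about what to do next.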

2.2 Qualitative and Quantitative Teaching Method – PECS

Picture Exchange Communication System (PECS) is a qualitative and quantitative
methodology, generally used to study and understand the observations of autistic
children taught via the PECS modus operandi [40]. PECS is an interactive method
which requires no speech or vocals and is widely accepted today for autism and
related disorders. It is based on the exchange of an image of an actual object:
the child searches for the image and then reaches out to someone in order to
convey a message efficiently. Hence, the main principle of PECS is that the child
initiates the interaction, can easily approach others, and uses only a single
image at a time so as to avoid confusion [41]. It is not aimed at teaching
'speech'; rather, children enrolled in the program grasp the basics of efficient
communication with PECS. The program starts with normal, basic activities built
around approaches such as chaining, prompting/cuing, modelling, and environmental
engineering [42]. The images to which autistic children are exposed may be
coloured or black and white, tangible drawings or even photographs. Mayer-Johnson
Picture Communication Symbols, often called PCS, are also commonly used as
stimuli.
34 S. Qazi and K. Raza

2.3 From VIREAL Toilets to Classroom: VR Design and Analysis

Virtual reality as a domain has attracted much attention and expanded into
almost all fields, including psychiatric disorders. VIREAL is a simple and
organised e-learning platform for special children, helping them to live, if not
a perfect, then a healthy life. Autistic children need special attention and a
support system which helps them learn and understand this not-so-perfect life.
Classrooms are interactive systems that bring individuals into one shared space
where they learn and understand concepts of life [43]. A real classroom has
walls, windows, a black/white board, chairs and tables, etc., while a
VIREAL-based classroom provides children with exactly the same environment as a
real one. A virtual classroom's main prerequisite is internet accessibility, and
it relies on the strong connectivity of the World Wide Web.
VIREAL-based classrooms rest on three principles: (a) ease of use, (b)
flexibility for both teacher and students and (c) concern for constraints (both
intrinsic and extrinsic), which is a wide aspect to ponder. The preliminary
design analogy is stated as: "The usability of a VIREAL classroom increases only
if the learning milieu satisfies all the classroom constraints". Intrinsic
constraints are focused on cognitive learning, while extrinsic constraints
depend more on amalgamating productive technologies for a better classroom
experience for autistic children. The extrinsic constraints are the more likely
to disturb the ideal scenario of a robust and efficient virtual classroom. For
this purpose, Cuendet et al. [44] proposed five mantras for curbing this common
problem and providing an exhilarating VIREAL classroom experience, as shown in
Fig. 2.
The latent components of VIREAL-based classrooms depend on a hardware device
known as the TinkerLamp, a camera-projector system in which a camera and a
projector point towards a tabletop. There are four versions of the TinkerLamp,
all built around the camera-projector system but varying in myriad ways, as
shown in Fig. 3. The initial version (a) lacks a mirror and hence has a smaller
projection area; it also lacks an embedded computer, making it more tedious to
use. The remaining versions (b, c and d) improve on (a) since all of them have a
small computer installed within the hardware, making utilisation easy for the
user.

3 Social and Parental Issues Related to VIREAL

With all the benefits, there are some limitations to VIREAL platforms: most
parents are not comfortable, mainly due to parental concerns, and at times
children may develop a fright on viewing such virtual environments, limiting
their growth and understanding [45, 46]. Although there are myriad VR platforms,
only a few are chosen for the training of autistic children.

Fig. 2 Five mantras of VIREAL classrooms

Fig. 3 The four versions of the hardware TinkerLamp [44]



There is a great need to pay heed to developing efficient and child-friendly
VIREAL platforms, which not only train such children but also bring out their
unique talents and help them face the cut-throat competitive world of today [47].
The common social and parental issues which revolve around VIREAL platforms
are described as follows [48]:
• Safety Issues: Individuals wearing a headset could end up injuring themselves
if they bump into surrounding walls, which can be dangerous. Some solutions have
been debated, like employing a circular walking arc to simulate straight
walking, but these still have loopholes to be worked on.
• User Addiction: Virtual reality becomes so alluring to some people that they
tend to live in it, which poses serious risks to their health and lifestyle.
Addiction is the biggest concern of parents and makes them hesitant towards
VIREAL.
• Criminality: Though VR teaches children and adults how to take care of
themselves in serious or confusing situations, it can also teach tricks for
executing criminal actions. For instance, the very famous game Grand Theft Auto
uses gestures for pulling the trigger of a gun or pistol, or thumb movements for
stabbing a person with a knife or sword.
• Reality Blues: Who would want to come out of a world where everything is
perfect and there is less anxiety and worry? The goodness of a virtual world
often disturbs an individual's sense of reality, which ultimately troubles their
real-life affairs and could damage their relationships.
• Post-Traumatic Stress Disorder (PTSD): Some games which are meant to enhance
real-life experiences often turn out to be very depressing for children.
Psychological issues do arise with VIREAL-based games, leaving a long-term
effect on children.
• VIREAL-based Torture: Military personnel have reportedly used VIREAL on
prisoners, torturing them by subjecting them to horrendous and atrocious images
or videos. This is very dangerous as it lacks control; such an act is inhumane
and immoral and must not be condoned.
• Privacy Policy: Before stepping into any novel technology, an individual
thinks about his/her privacy. VIREAL-based platforms are surely exciting to work
with, but the user needs to submit some personal and private information before
use, which can be misused.

4 Computational Intelligence in VIREAL Platforms

Today, we are more inclined towards "mixing" two or more things so that they
become more productive than their individual forms. Hence, computational power
and virtual reality have joined hands to provide bigger and better resources.
Machine learning techniques, when applied to VIREAL, need quantification and
assessment. Some important aspects of computational intelligence in VIREAL
platforms are discussed in the following sections.

4.1 Where Do VIREAL and Machine Learning Intersect?

Virtual reality and machine learning go hand in hand. For instance, VIREAL
ocular gear, such as the Oculus Rift, may require ultra-precise quantification
to empower its performance in virtual games [49]. Here, there is a need for an
algorithm which can automatically regulate and assess general parameters such as
height, stimulation, etc., for an overall exciting experience for individuals
playing the VR game [50]. Machine learning is the janitor that keeps up the
artificial intelligence (AI) behind VIREAL gaming and related platforms: it
contemplates the user's movements and understands how the user will interact in
a specific environment. The algorithm must have the potential to be
'responsive', as the AI will have 4D duties in such a case. In state-of-the-art
studies, AI has been observed to pay much heed to global intelligence and
robotic simulation in order to completely identify the collective behaviours of
human beings in specific situations. Boffins around the globe have been trying
hard to improve simulations by employing artificial intelligence for better
outcomes, which is undoubtedly a challenging process. A study by Cipresso and
Riva [51] successfully presented the simulation of virtual worlds on a hybrid
platform where experiments were executed in two ways: (a) the operators'
behaviour is regulated by the virtual-reality-based behaviour of humans exposed
to the simulation milieu, and (b) the hybrid technology shifts these rules into
the virtual world, thus forming a closely knit set of real behaviours
incorporated into virtual operators.
The best showcases of the combined power of VIREAL and machine learning are
described as follows [52]:
(a) Natural Language Processing (NLP): Amazon's Alexa [53] and Google Assistant
[54] are the best examples of VIREAL-ML-based assistants, nothing less than
today's Aladdin's genie. They are voice-controlled, so the user simply asks
these assistants to execute tasks such as playing music or movies, launching
games, etc. The voice recognition of these assistants has been found to be
near-perfect for British English, with a few more errors for US English. The
remaining problem is translation to other languages, which is still being
worked on.
(b) Hand Tracking Movements: Controlling technology today is easy: the user can
use either voice or hand movements. The VIREAL world is full of games,
classrooms, etc., which use hand movements for control, or which authenticate
access via special hand-tracked passwords set by the user.
(c) Video Games Reinforced: Convolutional Neural Networks (CNNs), Graphics
Processing Units (GPUs) and some other essential requisites are mandatory for
bringing reinforcement learning to the video-gaming fraternity with better and
faster processing. Reinforcement learning is basically focused on rewarding the
machine for a positive action; otherwise, no reward is assigned. However, due to
the delay between an action and its reward assignment, this approach is not yet
commonly used.
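The reward-for-positive-action loop described in (c) can be made concrete with a minimal tabular Q-learning sketch. The toy task below (a five-cell track with a reward at the far end) is entirely illustrative and not tied to any game engine; all constants and names are our own.

```python
import random

# Minimal tabular Q-learning sketch: an agent on a 1-D track of 5 cells
# must learn to walk right to reach a reward at the last cell.
# A toy illustration of reinforcement learning, not a production game-AI setup.

N_STATES, ACTIONS = 5, ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1           # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move the agent; reward 1.0 only on reaching the goal cell."""
    nxt = max(0, state - 1) if action == "left" else min(N_STATES - 1, state + 1)
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

random.seed(0)
for _ in range(500):                             # training episodes
    state = 0
    while state != N_STATES - 1:
        if random.random() < EPSILON:            # explore occasionally
            action = random.choice(ACTIONS)
        else:                                    # otherwise act greedily
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy should be "right" in every non-goal cell.
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```

The "delay between action and reward" mentioned above is visible here: the reward at the goal only propagates back to earlier cells over many episodes, via the discounted `best_next` term.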

Much work is still pending in this domain of computational intelligence, and
current research on computational intelligence in VIREAL is in its infancy [50].

4.2 SLAM for VIREAL Environments

SLAM (Simultaneous Localization and Mapping) is more a concept than a single
algorithm; in VIREAL applications it is used for mobile robots and headsets,
which assess their own position and then act accordingly, and it can be applied
to both 2D and 3D motion [50].
SLAM is difficult because it resembles the classic chicken-or-egg problem: a map
is needed for localization, while a good position approximation is needed for
mapping. Figure 4 represents an entire SLAM flowchart [55].
Statistical approaches employ estimators such as Kalman filters and Monte Carlo
methods, which give an approximation of the posterior probability of the robot's
position and of the map's features. Set-membership techniques rely on interval
constraint propagation [56, 57] and yield a set of feasible robot positions
along with an estimation of the map.
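To make the Kalman-filter estimation mentioned above concrete, the sketch below runs a one-dimensional filter fusing noisy motion commands with noisy position measurements. The noise values are invented for illustration; a real SLAM filter jointly tracks the full robot pose and every landmark.

```python
# One-dimensional Kalman filter sketch: estimating a robot's position
# from noisy motion commands and noisy position measurements.
# Values are illustrative only; a real SLAM filter maintains a joint
# state over the robot pose and all landmarks.

def kalman_step(x, p, u, z, q_var, r_var):
    """One predict/update cycle.
    x, p         : current estimate and its variance
    u, z         : commanded motion and observed position
    q_var, r_var : motion and measurement noise variances
    """
    # Predict: apply the motion, inflate the uncertainty.
    x_pred = x + u
    p_pred = p + q_var
    # Update: blend prediction with measurement via the Kalman gain.
    k = p_pred / (p_pred + r_var)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1 - k) * p_pred
    return x_new, p_new

x, p = 0.0, 1.0                        # initial estimate, high uncertainty
truth = 0.0
for z_noise in (0.05, -0.03, 0.02):    # pre-baked "sensor noise" samples
    truth += 1.0                       # the robot actually moves 1 unit
    x, p = kalman_step(x, p, u=1.0, z=truth + z_noise, q_var=0.1, r_var=0.2)

print(round(x, 2), round(p, 3))        # estimate near the true position 3.0
```

Note how the variance `p` shrinks with every measurement: this is the "good position approximation" that the mapping side of SLAM depends on.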
Many steps are involved in a SLAM application, and each can be implemented using
different algorithms. SLAM is composed of several parts, such as landmark
extraction, data association, state estimation, state update and landmark
update, and there are myriad ways to solve each of these smaller parts. The
objective is to let one apply SLAM within one's own new approach, be it an
implementation of SLAM on a mobile robot in an indoor environment or in an
entirely different environment. It can be of great use for training autistic
children, as they can become habituated to such a comfortable and user-friendly
environment.
The benefit of SLAM is that it can be used in different environments and different

Fig. 4 Flowchart of an entire SLAM model [55]



algorithms can be applied to it, giving flexibility to the user [58]. The full
SLAM model estimates the entire path and map using the posterior
p(x_{0:t}, m | z_{1:t}, u_{1:t}).
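The SLAM posterior over the whole path and map factorises recursively via Bayes' rule. A standard form (drawn from the general SLAM literature rather than from [55] specifically) is:

```latex
% Full-SLAM posterior: trajectory x_{0:t} and map m,
% given measurements z_{1:t} and controls u_{1:t}.
p(x_{0:t}, m \mid z_{1:t}, u_{1:t})
  \;\propto\;
  \underbrace{p(z_t \mid x_t, m)}_{\text{measurement model}}
  \;
  \underbrace{p(x_t \mid x_{t-1}, u_t)}_{\text{motion model}}
  \;
  p(x_{0:t-1}, m \mid z_{1:t-1}, u_{1:t-1})
```

Each filtering step thus multiplies the previous estimate by a motion model and a measurement model, which is exactly what the Kalman and Monte Carlo approaches approximate.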

4.3 VIREAL on Mobile: Mobile App Developments for Autism

VIREAL technology has grown enormously over the past decade and is still
evolving rapidly. Many companies, such as Google, Facebook and Amazon, have
already started to work with VR and have come up with tools beneficial to
society that help in day-to-day activities. Mobile app development companies
have also stepped in with VIREAL-based tools which give dynamic and robust
functionality to mobile phones, which are thus now called VIREAL gadgets [59].
This technology has streamlined the interaction business and has given better
opportunities to customers.
This mobile app development is not limited to the general communication business
but is also open to autism. Although VIREAL, ML and AI are currently converging
with one another, a company named Niantic is planning to develop a virtual
reality game based on the Harry Potter series, billed as "the Harry Potter
version of Pokemon GO", where AI controls the surrounding area with the help of
SLAM along with cameras, sensors, radar, etc. [50].

4.4 Mind Versus Machine: Practicality of AI in Autism

What is difficult about autism is the fact that it is hard to understand what
such children feel when they don't receive what they ask for. For instance, if
an autistic child asks his/her mother for an apple and the mother instead gives
the child a banana, it is hard to predict how the child will react. The general
psychology, not only in autism, is simple: if someone wants something and gets
it, they feel ecstatic, and if they don't, they feel sad and upset [60]. The
human mind is always considered superior to the mechanical one, as it is known
to be 'emotionally intelligent'. However, humans often fail to understand the
critical emotions of others and tend to hurt them, intentionally or
unintentionally. When it is a case of children, and children with special needs
at that, one has to be extra cautious. VIREAL technologies have paved the way to
a new dimension of mechanics and robotics which are not only efficient and
robust but also emotionally intelligent. This emotional intelligence can be used
to understand the psychology of autistic children (Fig. 5).
The VIREAL industry has a magnanimous and pivotal role in changing the mindsets
of consultants, specialists, therapists, parents and nursing staff so as to help
autistic children give their best in both academia and social skills. Virtual
reality developers,

Fig. 5 VIREAL-based healthcare

creators, producers and workers have given their best in developing apps, games
and educational programs to help children with autism live a normal life. Not
only that, VIREAL has helped children with neurodegenerative problems to
interact and communicate freely without any hesitation [61]. VIREAL-based
techniques incorporate two things which are doing wonders for autistic kids,
their parents and consultants: these devices intelligently extract
emotional-response knowledge and thus give the child an appropriate rationale
for reacting to any scenario, and with that knowledge they can approximate the
mind and emotional status of the child.
For the treatment of autism, VIREAL systems can be considered beneficial with
respect to the myriad different therapeutic items given to a child, a
non-redundant mode of therapy quite different from the usual ABA and VBA
therapies for autism. VIREAL systems are not meant to supplant the traditional
ways of treatment, but can be useful to practitioners and therapists. The
therapist/consultant will have to learn the basics of such VIREAL platforms so
that better treatment outcomes are obtained [60].

4.5 Limitations of Computational Intelligence in VIREAL

This chapter has discussed the blossoms of VIREAL platforms for autism, but
there are some limitations. World Wide Web-based virtual systems not only
provide the best of knowledge but also carry imprudent information which can be
very harmful for children, especially in their growing years. As the adage goes,
'excess of everything is bad': excessive use of VIREAL systems can lead to
severe health issues. VIREAL-based gadgets such as the Oculus Rift come with
big, bold warnings about epileptic seizures, growth issues in children,
trip-and-fall and collision hazards, blackouts, disturbances, dizziness and
nausea, repeated stress experiences, etc. [62].
VIREAL sickness (cybersickness) is a common condition in a person deeply exposed
to virtual reality, causing symptoms such as disturbances, headache, nausea,
vomiting, sweating, fatigue, numbness of the body and head, drowsiness,
irritation, unconsciousness and apathy [63]. Such symptoms arise when the VIREAL
system does not have a high frame rate, or when there is a time lag between
one's movement and the on-screen visual reaction to it [64]. Around 25–40% of
people experience VIREAL sickness, and common remedies are to soak one's hands
in ice water or to chew ginger. The manufacturing companies are working hard to
find solutions to reduce it [65].
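The frame-rate and lag explanation above can be made concrete with a simple motion-to-photon budget. The ~20 ms comfort threshold and the component timings below are commonly cited rules of thumb, not figures from this chapter's references.

```python
# Rough motion-to-photon latency budget for a VR headset.
# The ~20 ms comfort threshold is a commonly cited rule of thumb,
# not a figure taken from this chapter's sources.

COMFORT_THRESHOLD_MS = 20.0

def frame_time_ms(refresh_hz):
    """Time available to render one frame at a given refresh rate."""
    return 1000.0 / refresh_hz

def motion_to_photon_ms(refresh_hz, tracking_ms, display_ms):
    """Worst-case lag: tracking + one full frame of rendering + display scan-out."""
    return tracking_ms + frame_time_ms(refresh_hz) + display_ms

for hz in (60, 90, 120):
    lag = motion_to_photon_ms(hz, tracking_ms=2.0, display_ms=5.0)
    verdict = "comfortable" if lag <= COMFORT_THRESHOLD_MS else "likely to cause sickness"
    print(f"{hz} Hz: ~{lag:.1f} ms motion-to-photon -> {verdict}")
```

Under these assumed component timings, a 60 Hz display already exceeds the budget, which is why most VR headsets target 90 Hz or more.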

5 Future Perspectives

VIREAL is an industry-academic effort. VIREAL platforms have unleashed a new way
of living the life people dream of; they are evolving and revolutionising, and
form one of the leading domains where computational research is thriving. There
is much more to VIREAL systems, and it is being worked on by boffins and
computational engineers. These systems will get more physical and real with
time, and they will also change our lifestyle. One can explore a place far from
one's residence simply by putting on a headset or ocular gear. In surgery,
VIREAL can help save more people than traditional methods. It can also be vital
for amateur pilots learning how to fly an aeroplane through simulation
strategies. For psychological disorders such as ASD, phobias, Dravet syndrome,
epilepsy, etc., VIREAL is already very helpful in treatment. Autistic children
can live a normal, happy life just like other children. If the manufacturing
companies fix the loopholes and assure parents and consultants about safety and
privacy, it could become one of the optimal therapies for the disorder [66]
(Fig. 6).
VR and AI are interfacing with each other for potential commercial applications,
including in the healthcare sector. It is expected that VR, AI and internet
technology will put an end to the traditional way of doing things, including
within healthcare. They will make the adoption of new technologies simpler and
more straightforward, and will help present contextual data in proper order to
open up channels

Fig. 6 The future of VIREAL in healthcare

for healthcare. Big healthcare data helps develop insight about patients, while
leveraging machine learning will offer more personalised, tailored healthcare to
patients. AI is a leading driver of growth in healthcare and medicine, and hence
we can conclude that 'VR + AI = Future Healthcare'.

6 Conclusion

VIREAL, or virtual reality (VR), is a vivid computer-generated experience that
occurs in a simulated milieu encapsulating vocal, visual and sensational
feedback. The computationally developed world looks so similar to the real world
that a person can't distinguish between the two. VIREAL is a combination of two
words, virtual and reality, where 'virtual' means having the essence of
something without being it factually. It was in the year 1938 that Antonin
Artaud first explained the delusive and deceptive characteristics of the term
"virtual reality" ("la réalité virtuelle") in his collection of essays.
VIREAL is somewhat similar to 'augmented reality' (AR), an interactive and
dynamic experience of a real-world milieu wherein the objects existing in the
real world are augmented with computationally devised perceptual information
comprising visual, verbal, olfactory, haptic and somatosensory

features. These additional features, generated using special software, enhance
the virtual milieu and provide an extraordinary experience to the user. Many
AR-based systems, such as Microsoft's HoloLens and Magic Leap, use cameras to
capture the user's environment. For a VIREAL experience, one needs a
head-mounted display and data gloves with an inclusive tracking system; with
this basic apparatus, one is ready to feel and live the VIREAL life.
VIREAL-based technologies have also found their way into various day-to-day
activities, such as the gaming world, crime investigation, virtual tourism,
education, the treatment of various neurological and psychological disorders,
movies, events and concerts, military training, etc.
Autism is a neurological behavioural disorder characterised by problems with
social interaction and communication in children. It remains one of the
psychiatric disorders that is still new to the literature and the medical
fraternity, and not much research has yet been achieved on it. With the rise of
computational technologies, virtual reality has become a powerful aid in
eliminating loopholes along the research path. VIREAL seems to be a
compassionate platform for healthcare, especially for autism and related
psychiatric disorders; its only limitation is the lack of evidential tenets for
its efficiency and implementation in such disorders. Many scientific studies
have shown the benefits of using virtual reality: social training with virtual
reality has proved beneficial compared to traditional social-skills training,
for instance in simple emotion recognition. Applied Behaviour Analysis (ABA) is
a modus operandi for teaching autistic children, based on the principle that
behaviour, inclusive of speech, education, academics and life skills, can be
taught using scientific principles. This teaching approach assumes that the
behaviour children repeat includes a "bait" (reward), and that behaviour which
brings no bait is less likely to continue in autistic children. Reinforcement is
gradually reduced so that children can learn without any bait. Verbal Behaviour
Analysis (VBA) is similar to ABA, but is often preferred. Relationship
Development Intervention is another therapy, involving parents, which aims to
treat autism at its roots. Occupational therapists are trained specialists who
use sensory techniques in Sensory Integration Therapy, engaging autistic
children in joyful activities and helping them process the information they
receive from their senses; the main aim of this technique is not to 'teach', but
to allow children to focus on their senses and act accordingly. The Picture
Exchange Communication System (PECS) is a qualitative and quantitative
methodology utilised to study and understand the observations of autistic
children taught via the PECS modus operandi.
VIREAL is a simple, organised e-learning platform for special children, helping
them live, if not a perfect, then a healthy life. Autistic children need special
attention and a support system which helps them learn and understand this
not-so-perfect life. A VIREAL classroom's main prerequisite is internet
accessibility, and it rests on three principles: (a) ease of use, (b)
flexibility for both teacher and students and (c) concern for constraints (both
intrinsic and extrinsic), which is obviously a wide aspect to ponder.

With all the benefits, there are some limitations to VIREAL platforms: most
parents are not comfortable, mainly due to parental concerns, and at times
children may develop a fright on viewing such virtual environments, limiting
their growth and understanding. There is a great need to pay heed to developing
efficient and child-friendly VIREAL platforms, which not only train such
children but also bring out their unique talents and help them face the
cut-throat competitive world of today. VIREAL-based gadgets such as the Oculus
Rift come with big, bold warnings about epileptic seizures, growth issues in
children, trip-and-fall and collision hazards, blackouts, disturbances,
dizziness and nausea, repeated stress experiences, etc.
The VIREAL industry has a magnanimous and pivotal role in changing the mindsets
of consultants, specialists, therapists, parents and nursing staff so as to help
autistic children give their best in both academia and social skills. Virtual
reality developers, creators, producers and workers have given their best in
developing apps, games and educational programs to help children with autism
live a normal life. Not only that, VIREAL has helped children with
neurodegenerative problems to interact and communicate freely without any
hesitation [61]. VIREAL-based techniques incorporate two things which are doing
wonders for autistic kids, their parents and consultants: these devices
intelligently extract emotional-response knowledge and thus give the child an
appropriate rationale for reacting to any scenario, and with that knowledge they
can approximate the mind and emotional status of the child.

Acknowledgements Sahar Qazi is supported by a DST-INSPIRE fellowship provided by
the Department of Science & Technology, Government of India.

References

1. Kandalaft, M. R., Didehbani, N., Krawczyk, D. C., Allen, T. T., & Chapman, S. B. (2013).
Virtual reality social cognition training for young adults with high-functioning autism. Journal
of Autism and Developmental Disorders, 43, 34–44.
2. Maskey, M., Lowry, J., Rodgers, J., McConachie, H., & Parr, J. R. (2014). Reducing specific
phobia/fear in young people with autism spectrum disorders (ASDs) through a virtual reality
environment intervention. PLoS One, 9(7), e100374.
3. Parsons, S., & Mitchell, P. (2002). The potential of virtual reality in social skills training for
people with autistic spectrum disorders. Journal of Intellectual Disability Research, 46(5),
430–443.
4. Wainer, A., & Ingersoll, B. R. (2011). The use of innovative computer technology for teach-
ing social communication to individuals with autism spectrum disorders. Research in Autism
Spectrum Disorders, 5(1), 96–107.
5. Parsons, S., Mitchell, P., & Leonard, A. (2005). Do adolescents with autistic spectrum disorders
adhere to social conventions in virtual environments? Autism, 9, 95–117.
6. Wallace, S., Parsons, S., Westbury, A., White, K., & Bailey, A. (2010). Sense of presence and
atypical social judgments in immersive virtual environments: Responses of adolescents with
Autism Spectrum Disorders. Autism, 14, 199–213.
7. Bellani, M., Fornasari, L., Chittaro, L., & Brambilla, P. (2011). Virtual reality in autism: State
of the art. Epidemiology and Psychiatric Science, 20, 235–238.

8. Parsons, S., & Cobb, S. (2011). State-of-the-art of virtual reality technologies for children on
the autism spectrum. European Journal of Special Needs Education, 26, 355–366; Didehbani
et al. (2016). Computers in Human Behavior, 62, 703–711.
9. Tzanavari, A., Charalambous-Darden, N., Herakleous, K., & Poullis, C. (2015). Effectiveness
of an immersive virtual environment (CAVE) for teaching pedestrian crossing to children with
PDD-NOS. In 2015 15th IEEE International Conference on Advanced Learning Technologies.
10. Artaud, A. (1958). The theatre and its double (M. C. Richards, Trans.). New York: Grove
Weidenfeld.
11. Rosenberg, L. B. (1992). The use of virtual fixtures as perceptual overlays to enhance oper-
Assisting Students to Understand
Mathematical Graphs Using Virtual
Reality Application

Shirsh Sundaram, Ashish Khanna, Deepak Gupta and Ruby Mann

Abstract Many students face difficulties in understanding mathematical equations and their graphs. Implementing virtual reality to plot graphs of mathematical equations can help students understand the equations better. Virtual reality (VR) is a computer-generated environment that a user can explore and interact with. A virtual reality system allows a user to view three-dimensional images. VR has a wide range of applications: it is used in entertainment for gaming and 3D movies, in medicine for simulating surgical environments, in robotics development, and more. VR also has wide scope for application in the education system, though only a few studies have been proposed. In this paper, we introduce a new approach to help users understand any mathematical equation better by plotting its graph using a virtual reality application. Unity, a real-time engine, and C# are used to develop this novel approach. The proposed method is compared with the current method of learning mathematical equations.

Keywords Virtual reality · Three-dimensional displays · Mathematical equations · Computer-generated environment

1 Introduction

Virtual reality (VR) is an interactive computer-generated experience that replaces the user's surroundings with a simulated environment. It consists primarily of auditory and visual information, but may also provide other kinds of sensory feedback, such as haptics. This immersive environment can resemble the real world or be entirely fantastical.
Current VR technology most commonly uses virtual reality headsets or multi-projected environments, sometimes in combination with physical settings or props, to produce realistic images, sounds and other sensations that simulate a user's physical presence in a virtual or imaginary environment. Essentially, three things make VR more immersive than other kinds of media: 3D stereo vision, the user's dynamic control of perspective, and a surrounding experience. A person using virtual reality hardware can "look around" the artificial world, move around in it, and interact with virtual features or objects. The effect is typically created by VR headsets consisting of a head-mounted display with a small screen in front of the eyes, but can also be created through specially designed rooms with multiple large screens. If you compare watching a film on a small TV to watching the same movie in a cinema or in an IMAX cinema, where you have a massive screen, the experiences can be very different. Basically, the more of your field of view is covered by the screen, the more immersed you will feel. The screen size of these headsets might be tiny, but there is no escape. In a cinema, when you look around, you can see your friend sitting next to you. But with these headsets, you are enclosed: when you look around in the headset, you still see images from the virtual world instead of the real world. The interesting thing is that this kind of experience is overwhelming and persistent; it does not diminish over time.

1.1 Applications

VR is most commonly used in entertainment applications, for instance gaming and 3D film. Consumer virtual reality headsets were first released by video game companies in the early-to-mid 1990s. Beginning in the 2010s, next-generation commercial tethered headsets were released by Oculus (Rift), HTC (Vive) and Sony (PlayStation VR), setting off a new wave of application development [1]. 3D cinema has been used for games, fine art, music videos, and short films. Since 2015, roller coasters and amusement parks have incorporated virtual reality to match visual sensations with haptic feedback [2].
In robotics, virtual reality has been used to control robots in telepresence and telerobotic systems [3]. It has also been used in robotics development, for instance in trials that examine how robots, through virtual articulations, can be applied as an intuitive human user interface. Another example is the use of robots that are remotely controlled in hazardous situations, for example in space. Here, virtual reality simulation not only offers insights into the control and motion of robotic development but also opens up opportunities for inspection [4].
In the social sciences and psychology, virtual reality offers a cost-effective tool to study and reproduce interactions in a controlled environment [5]. It can also be used as a form of therapeutic intervention. For instance, there is the case of virtual reality exposure therapy (VRET), a form of exposure therapy for treating anxiety disorders, for example post-traumatic stress disorder (PTSD) and phobias [6].
In medicine, simulated VR surgical environments, under the supervision of specialists, can give realistic and repeatable training at a low cost, enabling students to recognize and correct errors as they occur [7].

1.2 Scope of VR in Education

Students don’t learn much with the help of books, they are expected to learn without
much scope for an immersive and experimental learning. They need to get a practical
idea of what is being taught to them, in order to achieve this VR can play a significant
job in the field of teaching for future generations. Virtual reality (VR) is the utilization
of three dimensional (3D) computer graphics in combination with interface devices to
make an interactive, immersive environment [8]. Because of upgrades in innovation
and decreases in cost, the utilization of VR in education has expanded incredibly in
the course of recent years [9].
VR provides an immersive experience in learning, it can show a proper use case
scenario of any topic with the help of real life examples. The best way to teach some-
thing is when students are themselves able to implement anything, this is possible
with the help of VR.
Virtual reality also removes barriers associated with transport and logistics in real
world and opens up immense opportunities to be explored. Students for instance
can go on a field trip to the Amazon rainforest from the comfort of their classroom
anywhere in the world. Experience near impossible tasks such as a field trip to the
moon or the surface of Mars can now be explored from the within the comforts
and safety of a classroom. Such a realistic multi-dimensional experience delivers
a truly immersive learning experience, making the knowledge gained much more
holistic. VR and technology generally are accepted to encourage learning through
commitment, inundation, and intuitiveness [10].
In this project we are trying to simulate a virtual reality environment to plot 3D
graphs of mathematical equations that will help to understand the equations better.
The equations will be given as input by the users. The user can interact with the
graph, move around in it, in the VR simulated environment.
This paper is structured as following: in Sect. 2 literature review has been done to
change following which the methodology and implementation of the proposed model
has been discussed in Sects. 3 and 4 respectively. In Sect. 5 the results obtained from
the proposed model are discussed. At last, the conclusion future scope of the paper
and the references have been presented.

2 Literature Review

Numerous investigations have demonstrated that students learn best when a variety of teaching techniques is used, and that different students respond best to different methods. This paper addresses the use of virtual reality as a new educational medium, intended to immerse students more deeply in computer simulations and to deliver educational experiences that are impractical using other methods [11]. Virtual environments are inherently three-dimensional. They can provide interactive playgrounds with a level of interactivity that goes far beyond what is possible in reality. If VR is used as a tool for mathematics education, it ideally offers an additional benefit to learning across a wide range of mathematical areas.
A few topics that are included in most mathematics curricula worldwide are well suited to being taught in VR environments. For students aged 10–18, such topics include 3D geometry, vector algebra, graph visualization in general and curve sketching, complex numbers (representations), and trigonometry, as well as other three-dimensional applications and problems. Students in elementary school benefit from a high level of interactivity and immersion throughout their first four years, when learning the four basic operations, but also when learning about fractions and solving real-life problems [12]. Understanding the properties of a function over complex numbers can be substantially more difficult than with a function over real numbers. One approach in the area of visualization and augmented reality gains insight into these properties: the applied visualization techniques use the full palette of a 3D scene graph's essential components, so that the complex function can be seen and comprehended through the position, the shape, the colouring and even the animation of the resulting visual object [13]. For productive use in the classroom, several conditions must be met: support for a variety of social settings, including students working alone and together, a teacher working with a single student or teaching a whole class, a student or the whole class taking a test, and so forth. Collaboration in these situations is largely governed by roles, and the teacher should be able to retain control over the activities [14]. Another line of work describes efforts to build a system for the improvement of spatial abilities and the maximization of transfer of learning; to support various teacher-student interaction scenarios, flexible methods for context- and user-dependent rendering of parts of the construction were implemented [15].
The basic assumption that the learning process will occur naturally through simple exploration and discovery of the virtual environment should be reconsidered. Despite the value of exploratory learning, when the information context is too unstructured the learning process can become difficult. Another possibility is to carefully define explicit tasks for the users/students through interaction with the teacher. The literature therefore recommends the use of various learning modes in virtual environments, from teacher-supported to self-directed learning [16]. Mathematical knowledge is often crucial when solving real-life problems. In particular, problems situated in three-dimensional space that require spatial skills are sometimes hard to solve for learners. Many learners experience difficulties with spatial imagination and lack spatial abilities; spatial abilities, however, constitute a significant component of human intelligence as well as of logical reasoning [17]. One system's aim was not to create a professional 3D modelling package but rather basic and intuitive 3D development tools in an immersive virtual environment for educational purposes. Like the CAD3D package co-created by its third author, which won the German-Austrian academic software award in 1993, its chief goal was to keep the user interface as simple as possible to encourage learning and productive use. The standard areas of application of that framework in mathematics and geometry education are vector analysis, descriptive geometry, and geometry in general; these areas had not been explicitly addressed by previous systems [18]. VRMath is an online application that uses VR (virtual reality) technology combined with the power of a Logo-like programming language, hypermedia and the Internet to facilitate the learning of 3-dimensional (3D) geometry concepts and processes. VRMath was designed within the framework of a design experiment (The Design-Based Research Collective, 2003), during which it would advance through a series of iterative cycles of design, implementation, reflection and revision into an educational tool that provides mathematics teachers with new and more powerful methods for fostering the development of 3D geometry knowledge [19].
Of the educational technologies currently in use, VR is seen as promising because of its unique ability to immerse students in the situations they are studying, for example in ancient cities, manufacturing environments, or an exploration of the human body. Research into the effectiveness of technology-based educational tools, including VR, has shown substantial benefits, for example reduced learning time and better learning outcomes [20]. The use of visual technologies for teaching and learning in industrial training has produced dramatic extensions of the once conventional lectures, demonstrations, and hands-on experiences. From the introduction of colour photography and full-motion video to computer-generated presentations with graphics and animations, visual technologies have enhanced the preparation of workforce specialists and professionals by bringing into classrooms and laboratories a breadth and depth of realism that has improved comprehension, increased learning performance and reduced training time. Occasionally, however, a training technology appears that prompts the realization that "this changes everything." Virtual reality is such a technology [21]. One article discusses the present use of virtual reality tools and their potential in science and engineering education, focusing on one software tool in particular, the Virtual Reality Modeling Language (VRML). A contribution of that article is to present software tools and give examples that may encourage instructors to create virtual reality models to enhance education in their own discipline [9].

3 Methodology

This work proposes a new method for visualisation of graphs using virtual reality. Visualisation plays a vital role in understanding, as different visualisations can lead to different perceptions. A complex function's graph is hard to understand, and when the graph is 3-dimensional, the added complexity creates more confusion among students. For instance, take the function f(x) = x + 1. We can substitute a number for x, say 3. That gives f(3) = 3 + 1 = 4, and we can form numerous pairs of the form (x, f(x)), for instance (5, 6), (8, 9), (1, 2) and (6, 7). But the function is clearer when we order the pairs by the input number: (1, 2), (2, 3), (3, 4) and so on. That is easy to grasp, but a more complex function, for instance f(x) = (x − 1)(x − 1)^4 + 5x^3 − 8x^2, is harder to understand. We could write down a couple of input-output pairs, but that presumably will not give a good grasp of the mapping it represents.
In this paper we present a new method for visualisation of graphs using virtual reality. The methodology is divided into two parts: A. visualisation of graphs using virtual reality; B. visualisation of scatter plots using VR.
A. Visualisation of graphs using virtual reality
Here the graphs are made from the equation passed to the proposed model, and the graph can be visualised in a virtual reality simulated environment. The pseudocode of the proposed model from which the graphs are made is given below:

Pseudocode 1: The proposed model

Input: The equation whose graph is required to be plotted
Output: The graph is plotted in the virtual reality simulated environment

1. Set the values of the upper range and lower range of variables x and z (z only if the equation is three-dimensional)
2. Set the values of the variables: resolution, step, and scale
3. Create a prefab and instantiate it
4. Initialize the variable t holding the time information from Unity
5. float xStep = xUpperRange - xLowerRange;
6. float zStep = zUpperRange - zLowerRange;
7. int i = 0;
8. for (float z = 0; z <= resolution; z++)
9. {
10. float v = (z / resolution) * zStep + zLowerRange;
11. for (float x = 0; x <= resolution; x++, i++)
12. {
13. float u = (x / resolution) * xStep + xLowerRange;
14. returnedValue = Plotter(u, v, t);
15. Plot the point returned from the function.
16. } }
17. Add VR look-walk to walk across the graphs in the VR environment
18. View the graph in the simulated environment

Pseudocode 2: Plotter
Input: The values of u, v, and t.
Output: The Vector3 P holding the x, y, z coordinates

1. Initialize a Vector3 object P.
2. Set the values of P.x, P.y, P.z using the given equation.
3. Use the value of t if animation of the graph is required.
4. Return the object P.
The explanation for the above pseudocode is given below:
• In line 1, the values of the lower range and upper range of the variables x and z are set. If the equation is 2-dimensional, then only the variable x is required. The default values of the lower range and upper range for both variables x and z are −1 and 1 respectively.
• In lines 2–3, the values of resolution, scale and step are set. Resolution is kept high for devices with a good configuration and lower for low-end devices. Graphs are plotted by placing points at the right coordinates. We have used a cube to represent the points of a graph by using a prefab. A prefab is a template that can be used to create new instances of an object that have the same properties as the parent.
• The variable t is used for animating a graph.
• In lines 5–7, xStep and zStep are calculated using the lower range and upper range for both the x and z variables.
• In lines 8–16, the value of each point of the graph is calculated using the function Plotter, as given in Pseudocode 2, by passing the values of u and v and the time t.
• Finally, the graph is plotted in the virtually simulated environment.
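The sampling loop of Pseudocode 1 together with the Plotter of Pseudocode 2 can be sketched outside Unity as follows. This is a Python stand-in for the C#/Unity implementation, assuming the range span is computed as upper minus lower; the unit-sphere parameterisation used for `plotter` is one illustrative choice, not taken from the chapter:

```python
import math

def plotter(u, v, t):
    """One possible Plotter (Pseudocode 2): a unit sphere for u, v in [-1, 1]."""
    r = math.cos(math.pi * 0.5 * v)      # radius of the latitude circle
    x = r * math.sin(math.pi * u)
    y = math.sin(math.pi * 0.5 * v)
    z = r * math.cos(math.pi * u)
    return (x, y, z)

def sample_grid(resolution=10, x_lo=-1.0, x_hi=1.0, z_lo=-1.0, z_hi=1.0, t=0.0):
    """Mirror of Pseudocode 1: map a (resolution+1)^2 grid of samples
    into [x_lo, x_hi] x [z_lo, z_hi] and evaluate the plotter at each."""
    x_step = x_hi - x_lo                 # span of the x range
    z_step = z_hi - z_lo                 # span of the z range
    points = []
    for zi in range(resolution + 1):
        v = zi / resolution * z_step + z_lo
        for xi in range(resolution + 1):
            u = xi / resolution * x_step + x_lo
            points.append(plotter(u, v, t))
    return points

pts = sample_grid(resolution=20)
# every sampled point lies on the unit sphere: x^2 + y^2 + z^2 = 1
assert all(abs(x * x + y * y + z * z - 1.0) < 1e-9 for x, y, z in pts)
```

In the Unity version, each returned point would be instantiated as a cube prefab at that position rather than appended to a list.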

B. Visualisation of scatter plots in VR

The scatter plot of a dataset shows the correlation between features. Understanding scatter plots for higher-dimensional datasets becomes complex, and it becomes hard to draw conclusions from them. So in our work we have also put emphasis on the visualisation of scatter plots in a VR environment. The process for this is provided below with explanations.
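Before the selected features are plotted in VR, their values are rescaled into a common range (the Plotscale parameter of Sect. 4). The chapter does not spell out the exact scaling, so the min-max formula below is an assumption, as are the names `normalize` and `plot_scale`:

```python
def normalize(values, plot_scale=10.0):
    """Min-max scale a list of numbers into [0, plot_scale]."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against a constant column
    return [(v - lo) / span * plot_scale for v in values]

# e.g. one feature column read from the input CSV file
heights = [150.0, 160.0, 170.0, 180.0]
print(normalize(heights))  # smallest value maps to 0.0, largest to 10.0
```

Rescaling each of the three chosen columns this way keeps features with very different units (say, height in cm versus income in dollars) on comparable axes inside the VR scene.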

4 Implementation

In this section the implementation of the proposed model is done and it contains
experimental setup, input parameters of the algorithms are shown in Tables 1 and 2
and the end of this section system’s framework has been described.
A. Experimental arrangement
The algorithm has been investigated the framework having setup of Intel® Core™
i5-7200U and CPU of 2.50 GHz × 4 under Windows 10. We implemented the
algorithms using C# and Unity 2018.3.3f1.
B. Input parameters
The system framework for the model is appeared beneath in Fig. 1.

Table 1 Input parameters for visualisation of graph in VR

Parameters Values Description
Resolution 10–100 Sharpness and clarity of the graph; a high value means high resolution
xLowerRange −1 (default value) Lower range for the x-axis variable
xUpperRange 1 (default value) Upper range for the x-axis variable
zLowerRange −1 (default value) Lower range for the z-axis variable
zUpperRange 1 (default value) Upper range for the z-axis variable

Table 2 Input parameters for visualisation of scatter plot in VR

Parameters Values Description
Inputfile Name of the file (string) Input CSV file for getting the scatter plot in VR
xName String The name of the first column/feature
yName String The name of the second column/feature
zName String The name of the third column/feature
Plotscale 10 (default value) The range to which all the values are normalized

Fig. 1 System framework for a visualisation of graph in VR and b visualisation of scatter plot in VR

Figure 1 comprises two sections, a and b. Section a shows the system architecture for the plotting of graphs from the equation, and section b shows the plotting of scatter graphs prepared from a dataset. For plotting graphs, any equation is provided as input along with the range of the variables (the default values are −1 and 1) and then passed to the model, which returns the points that are plotted in the VR simulated environment using Pseudocode 1. A grid overlay is used to represent the eight octants of the 3D coordinate system. The graphs are coloured and are animated if required. VR look-walk is used to move within the environment and explore it.
Section b of Fig. 1 shows the architecture for the plotting of scatter plots in VR. It first takes as input the dataset file and the names of the features whose correlation is required. The dataset is normalized first, so as to scale all the values into the specified range. Then the points are plotted in the environment. The graphs are coloured such that each feature has a different colour, and labels are provided. Finally, VR look-walk is used to move in the simulated environment and explore it.

5 Results and Discussions

This section discusses the results produced when the proposed model is executed. We have used a few shapes and plotted them using our model in the VR environment. The shapes are (i) circle, (ii) sphere, (iii) ellipse, (iv) animated ripple. To validate the plotted points, we also show figures displaying the values of X, Y and Z of sample points, which are then substituted into the equation to check whether it is satisfied.
1. Circle: The first shape taken is a circle. Figure 2i shows the circle in the VR simulated environment. Figure 2ii shows the values of X, Y, Z of a point on the circle; when put into the equation x^2 + y^2 = 4, it satisfies it (1.8056^2 + 0.8599^2 ≈ 4).
2. Sphere: Next, a sphere is plotted using the proposed model, as shown in Fig. 3. Figure 3i shows the sphere from the outside while Fig. 3ii shows it from within. Figure 3iii is used to check whether a point on the sphere satisfies the equation x^2 + y^2 + z^2 = 1 or not.

Fig. 2 i A circle with radius of 2. ii A point on the circle



Fig. 3 i The outside view of sphere, ii the inside view of sphere, iii values of a point on the sphere

3. Ellipse: The ellipse is plotted using the proposed model as shown in Fig. 4. Figure 4i shows the ellipse from the outside while Fig. 4ii shows it from the inside. Figure 4iii is used to check whether a point on the ellipse satisfies the equation (1/4)x^2 + y^2 + z^2 = 1 or not.

Fig. 4 i The outside view of ellipse, ii the inside view of ellipse, iii values of a point on the ellipse

Fig. 5 i A graph plotted from Eq. (1) at time T1 and ii a graph plotted from Eq. (1) at time T2
4. Ripple: An animated ripple is plotted, which changes its position with respect to time. Figure 5i, ii shows the ripple at times T1 and T2. The equation of the ripple is given by:

y = sin(π(4√(x^2 + z^2) − t)) / (1 + 10√(x^2 + z^2))   (1)
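The point-on-graph checks in Figs. 2ii, 3iii and 4iii amount to substituting the plotted coordinates back into the defining equation. A small sketch of that check, using the circle point reported in the text and reading Eq. (1) as sin(π(4d − t)) / (1 + 10d) with d = √(x² + z²), might look like:

```python
import math

def ripple_height(x, z, t):
    """Eq. (1): an animated ripple whose height depends on distance from the origin."""
    d = math.sqrt(x * x + z * z)
    return math.sin(math.pi * (4 * d - t)) / (1 + 10 * d)

# circle check from Fig. 2ii: x^2 + y^2 should be (approximately) 4
x, y = 1.8056, 0.8599
assert abs(x * x + y * y - 4.0) < 1e-3

# at the origin and t = 0 the ripple height is zero
assert ripple_height(0.0, 0.0, 0.0) == 0.0
```

The small tolerance in the circle check reflects the rounding of the coordinates displayed in the figure.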

6 Conclusion and Future Scope

The use of VR and e-learning in education has expanded greatly in the previous decade. Great improvements in the technology and the resulting decrease in costs have made VR substantially more accessible outside of the industries in which it is ordinarily used. Better instructional design and consideration of educational standards have hastened the rise of VR in education, specifically in the STEM (science, technology, engineering, and math) fields. The ability of VR and e-learning to decrease costs, to enable students to interact with imperceptible phenomena, and to improve learning outcomes, student engagement, and ease of use gives enormous potential to the field of education.
This study has introduced a new approach to help users understand any mathematical equation better by plotting its graph using a virtual reality application. For the implementation of the proposed method, the Unity tool and the C# language are used. The proposed method allows users to visualise and interact with the graphs of mathematical equations in a VR environment. The proposed method has two parts: first, it is used for visualisation of graphs of mathematical equations, and second, it is used for visualising scatter plots in VR. For evaluation of the proposed method, the shapes of a circle, a sphere, an ellipse, and an animated ripple are plotted and visualised in the VR environment and cross-checked by observing whether a point of the graph satisfies the equation of the shape or not.
The future scope of this work is to expand the proposed method to plot other types of graphs in the VR environment, so as to help experts with better data visualisation and to make it a generic tool.

Short Time Frequency Analysis of Theta
Activity for the Diagnosis of Bruxism
on EEG Sleep Record

Md Belal Bin Heyat, Dakun Lai, Faijan Akhtar, Mohd Ammar Bin Hayat
and Shajan Azad

Abstract Sleep is an important part of the life of every living organism. If normal
humans do not sleep properly, many diseases can develop. Bruxism is a neurological
sleep syndrome in which individuals involuntarily grind their teeth. Bruxism accounts
for 8–31% of all sleep disorders, which also include insomnia, narcolepsy, etc. The
present research consists of three steps: data selection, filtration, and computation of
the normalized value of theta activity. Three non-rapid eye movement sleep stages
(S0, S1, and S2) and the rapid eye movement stage are analyzed, and the parietal-
occipital (P4-O2) electroencephalogram (EEG) channel is used in the present work.
A total of eighteen subjects, bruxism patients and healthy humans, were studied
in this work. The average value of the bruxism patients' theta activity is higher than
that of the normal humans in the sleep stages S0, S1, and S2. Moreover, the proposed
approach is more accurate than other traditional systems.

Keywords Bruxism · Brain · EEG signal · Parietal occipital channel · Detection ·
Teeth · Sleep disorder

1 Introduction

Sleep is common to all zoological species [1–4]. It is a general behavior demon-
strated by mammals, insects, and humans [5, 6]. It is a state of reduced awareness
of environmental stimuli, distinguishable from wakefulness

M. B. B. Heyat · D. Lai (B)


Biomedical Imaging and Electrophysiology Laboratory, University of Electronic Science and
Technology of China, Chengdu, Sichuan, China
e-mail: dklai@uestc.edu.cn
F. Akhtar
School of Computer Science and Engineering, University of Electronic Science and Technology
of China, Chengdu, Sichuan, China
M. A. B. Hayat · S. Azad
Lucknow, UP, India
S. Azad
Hayat Institute of Nursing, Lucknow, Uttar Pradesh, India
© Springer Nature Switzerland AG 2020
D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_4

by altered consciousness, relatively inhibited sensory activity, reduced activity of
voluntary muscles, and diminished interaction with the surroundings. Sleep is dis-
tinguished from wakefulness by a decreased ability to react to stimuli, but it is more
easily reversed than hibernation. Mammalian sleep [7, 8] occurs in repeating periods.
Sleep is also observed in mammals, reptiles, insects, birds, fishes, and amphibians.
Development and the need for artificial light have significantly altered human sleep
patterns over the past two hundred years. Sleep is a common phenomenon for the
human body and mind. However, in some humans [9], fish, and crocodiles the eyes
are not closed during sleep, although there is a decrease in body movement and in
responses to stimuli. During sleep, the brain experiences cycles of increased activity,
which include dreaming. Sleep is the restorative balm that soothes and restores after
a protracted day of work and play. The proposed system saves valuable human time:
it is much faster than the traditional systems used in research related to the prognosis
of bruxism syndrome.

2 Stages of Sleep

Sleep is classified into two stages: Rapid Eye Movement (REM) and Non-Rapid
Eye Movement (NREM) [10–12].

2.1 Non-rapid Eye Movement (NREM)

During the NREM stage, approximately 50% of the sleep period is completed. NREM
is divided into five stages [13–16], as given below:

2.1.1 Non-rapid Eye Movement-1

This stage is the start of sleep: somnolence, or drowsy sleep, from which a person is
easily awakened. In this stage the eyes open and close slowly, and eye and muscle
movements gradually slow down. The body may evoke brief graphic images when
woken from this stage. The brain transitions from alpha-wave frequencies (8–13 Hz)
to theta-wave frequencies (4–7 Hz). This stage makes up 5–10% of the total sleep of
the human body. The organism loses some muscle tone and most conscious
awareness of the external environment.

2.1.2 Non-rapid Eye Movement-2

This stage starts at the completion of the NREM-1 stage. The eyes are fully closed
and the brain waves slow down; theta waves are observed, and sleep gradually
becomes deeper, so the sleeper is tougher to awaken. This stage makes up 45–55%
of the total sleep of the human body. Sleep spindles range from 12 to 14 Hz.
Muscular movements measured by EMG decrease, as does conscious alertness of
the exterior environment.

2.1.3 Non-rapid Eye Movement-3

This stage starts at the completion of the NREM-2 stage. Eye movement stops and
the brain slows down further.

2.1.4 Non-rapid Eye Movement-4

This stage starts at the completion of the NREM-3 stage. The mind builds delta
waves completely; this is the stage in which the human body goes into deep sleep,
and the brain, muscles, and other organs are free and in a relaxed mode. This stage
makes up 15–25% of the total sleep of the human body, and the other name of the
NREM-4 stage is the deep sleep stage. The delta waves are extremely slow waves
that start to appear, interspersed with smaller and faster waves.

2.1.5 Non-rapid Eye Movement-5

Not every human body passes through this phase. The eyes are closed but sleep is
disrupted. Humans going through this stage spend only one percent of their whole
sleep time in it.

2.2 Rapid Eye Movement (REM)

In this stage, breathing is fast, irregular, and shallow; the eyes move in divergent
directions; and the limb muscles are temporarily paralyzed. The heart rate and blood
pressure increase. The duration of this stage is 20–25% of total sleep. The temporary
paralysis caused by muscle atonia in rapid eye movement sleep protects organisms
from self-damage during the frequently vivid dreams that occur in the REM stage.
Rapid eye movement sleep is usually described in terms of tonic and phasic
components [17–19].

3 History of Sleep Disorder

Humans have long been fascinated with sleep. Charles Dickens first described sleep
disorders in 1836. In the 1950s and 1960s, sleep researchers such as William Dement
and Nathaniel Kleitman identified the sleep stages. In 1970, Dement started the first
sleep disorders clinic, which delivered all-night evaluations of patients with sleep
illnesses [20–22].

3.1 Classification of Sleep Disorder

The sleep disorders include Periodic Limb Movement Disorder (PLMD), Bruxism,
Insomnia, Narcolepsy, Rapid Eye Movement Behavioral Disorder (RBD), Nocturnal
Frontal Lobe Epilepsy (NFLE), and Sleep Apnea [23–25].

3.1.1 Insomnia

Insomnia is a symptom, not a stand-alone diagnosis. Its hallmark is "difficulty
initiating sleep", and it may also concern the quality of sleep obtained. Many
persons remain unaware of the social and medical options available to treat
insomnia [26–31]. There are three types of insomnia:
• Short-term Insomnia: symptoms lasting from one week to three weeks are
termed short-term insomnia.
• Transient Insomnia: symptoms lasting less than one week are termed transient
insomnia.
• Chronic Insomnia: symptoms lasting longer than three weeks are termed chronic
insomnia.

Causes of Insomnia

The main causes of insomnia are arthritis, allergies, asthma, chronic pain, hyperthy-
roidism, lower back pain, Parkinson's disease, acid reflux, etc.

Traditional Method for the Diagnosis of Insomnia

• Actigraphy Test: measures sleep-wake patterns over time. Actigraphs are small,
wrist-worn devices that quantify movement.
• Polysomnogram Test: this test records the physiological activity of sleep.

3.1.2 Narcolepsy

Narcolepsy is a sleep and neural disorder produced by the brain's inability to
control sleep-wake phases normally. The core features of narcolepsy are cataplexy
and fatigue; the syndrome is also frequently related to unexpected sleep attacks.
In order to appreciate the essentials of narcolepsy, it is essential first to analyze
the structure of normal sleep. Slumber happens in cycles: we primarily enter light
sleep stages and then progress into gradually deeper stages; the deep sleep stages
belong to NREM slumber. Narcolepsy touches both genders similarly; indications
usually first mature in youth and may continue unrecognized as they slowly grow.
The rate of familial association with narcolepsy is rather small, but a mixture of
inherited and ecological issues may be the source of this sleep syndrome [32–35].
Almost 90% of people with narcolepsy have low hypocretin levels in the
cerebrospinal fluid. Narcolepsy is divided into two types:
• With Cataplexy: Cataplexy is the most disturbing indication in some patients,
producing total loss of muscle tone and subsequently collapse numerous times
in a day; in milder cases it hardly ensues and causes only fleeting softness of the
facial musculature. In addition, narcolepsy with cataplexy has been characterized
by a loss of the hypocretin peptide and of the cells generating this peptide.
Hypocretin shortage can be verified by measuring cerebrospinal-fluid concen-
trations of hypocretin; values below one-third of standard values are the most
widely accepted criterion, based on receiver operating characteristic curve
analysis. The incidence of narcolepsy with cataplexy is accepted at 0.02% to
0.05% in Korea, Western Europe, and the US. Patients suffering from narcolepsy
with cataplexy also show scattered nocturnal sleep with frequent awakenings.
• Without Cataplexy: Narcolepsy without cataplexy is a distinctly different
disorder. A rising number of patients without cataplexy, but with complaints of
unexplained daytime sleepiness, are being recognized on the basis of positive
Multiple Sleep Latency Tests (MSLT). This criterion has led to a cumulative
number of patients being diagnosed with narcolepsy without cataplexy.

Causes of Narcolepsy

Genetic factors control the creation of chemicals in the brain that may signal sleep
cycles, and this abnormality apparently contributes to symptom development.
Narcolepsy likely involves many factors that interact to cause neurological
dysfunction and rapid eye movement sleep disorders.

Traditional Method for the Diagnosis of Narcolepsy

• Multiple Sleep Latency Test: It is frequently completed on the day after a
polysomnogram (PSG). During the assessment, the patient is asked to nap for
twenty minutes every two hours during the day while a specialist checks the
patient's brain activity [36].

3.1.3 Nocturnal Frontal Lobe Epilepsy (NFLE)

NFLE is a seizure sickness in which attacks occur only while sleeping. Numerous
common forms of epilepsy, including frontal lobe epilepsy (FLE), can manifest in a
nightly state. Detailed attention has been dedicated in recent years to those seizures
arising from epileptic foci located inside the frontal lobe, which are called nocturnal
frontal lobe epilepsy [37–39].

Causes of Nocturnal Frontal Lobe Epilepsy

The main causes of NFLE are stroke, tumors, traumatic injuries, etc.

Traditional Method for the Diagnosis of Nocturnal Frontal Lobe Epilepsy

NFLE is diagnosed by different methods:

• Video Electroencephalogram: A video EEG is usually achieved during an
overnight stay. Both a video camera and an electroencephalogram display work
together all night. Consultants then match what actually occurs when the patient
has a seizure with what appears on the electroencephalogram at the same time.
• Head Scan: Tumors and irregular blood vessels can produce frontal lobe
seizures. Head imaging, typically magnetic resonance imaging, is used to identify
them; magnetic resonance imaging uses radio waves and a dominant magnetic
field to produce meticulous brain images.

3.1.4 Sleep Apnea

Sleep apnea is a sleep breathing syndrome. It is a common sickness in which you
have one or more pauses in breathing, or shallow breaths, while you sleep. Breathing
pauses can last from a few seconds to minutes, and they might occur thirty times an
hour. Usually, normal breathing then starts again, occasionally with a loud snort or
choking sound [40, 41]. Sleep apnea is divided into three types:
• Obstructive Sleep Apnea: It is the most common form of sleep apnea. It is
supposed to affect approximately 5% of men and 3% of women, yet only around
10% of humans with obstructive sleep apnea pursue treatment, leaving the
majority of obstructive sleep apnea sufferers undiagnosed [42, 43].
• Central Sleep Apnea: It arises when the brain provisionally fails to signal the
muscles accountable for controlling breathing. Unlike obstructive sleep apnea,
which can be thought of as a mechanical problem, central sleep apnea is more of
a communication problem [44, 45].
Fig. 1 A human performing a sleep recording with a traditional system in Lucknow,
India; captured in 2017

• Complex Sleep Apnea: It [46] is the combination of central and obstructive sleep
apnea. Some patients with obstructive sleep apnea are diagnosed with the help
of Continuous Positive Airway Pressure (CPAP) machines.

Causes of Sleep Apnea

The main causes of sleep apnea are overweight, high blood pressure, smoking, a
thick neck, stroke, spinal injury, etc.

Traditional Method for the Diagnosis of Sleep Apnea

A CPAP instrument helps in the detection of sleep apnea. It is very time-consuming;
the total duration of this process is approximately six hours [47, 48]. A patient's
sleep being recorded with the traditional system is shown in Fig. 1.

4 Electroencephalogram (EEG) Signal

The British physician Richard Caton discovered in the nineteenth century that the
animal brain generates electricity. In 1924, the German psychiatrist Hans Berger,
in Jena, recorded the electric field of the human brain for the first time. The signals
generated by the human brain are converted into electrical form with the help of
the electroencephalograph and recorded on a computer, display device, or recording
device. An electroencephalogram [49–51] is a common tool used in the medical field
for performing sleep research.

4.1 EEG Generation

An electroencephalogram signal is a consequence of the flow of synaptic currents
that move in the dendrites of the neurons in the cerebral cortex. These currents
generate an electrical field, which is measured by the EEG system as the electroen-
cephalogram signal. An electroencephalogram signal is a result of the flow of positive
ions such as sodium, calcium, and potassium, and negative chloride ions, across the
cell membrane. The field is strong enough to be recorded by head electrodes placed
on the scalp for EEG measurement [52–54].

4.2 Classification of EEG Signal

An electroencephalogram signal consists of a few wave types: delta, theta, alpha,
beta, and gamma.
• Delta Waves: The frequency range of delta waves is 0.5–4 Hz. These waves are
the slowest in nature and are observed during sleep [55].
• Theta Waves: The frequency range of theta waves is 4–8 Hz. These waves are
associated with inefficiency and daydreaming. A small value of theta waves
shows very little difference between being awake and being asleep; we can say it
is the transition phase from consciousness to drowsiness. These waves result
from emotional stress such as frustration and disappointment [55].
• Alpha Waves: The frequency range of alpha waves is 8–12 Hz. They appear as
rounded or sinusoidal signals and are related to relaxation and disengagement;
they are slower waves. The intensity of alpha waves increases during peaceful
thinking with the eyes closed. These waves are found behind the head near the
occipital region and in the frontal lobe of the human brain, and they experience
an increase after smoking. Alpha waves show in the posterior half of the head,
usually over the occipital region of the brain, and can be detected in all parts of
the posterior lobes of the brain [50].
• Beta Waves: The frequency range of beta waves is 12–30 Hz. The beta waves
are small and fast in nature and are detected in the frontal and central areas of the
human brain. They become frequent when we are suppressing movement. It is
also found that the strength of beta waves increases with the intake of alcohol,
leading to a hyperexcitable state. The amplitude of a beta wave is approximately
30 μV.
• Gamma Waves: The frequency range of gamma waves is 31 Hz and above. They
reflect the mechanism of consciousness. They are low-amplitude waves and occur
rarely; the detection of gamma waves can be used in the diagnosis of certain brain
disorders [56].

5 Subject Details and Methodology

All data of the normal and bruxism subjects were downloaded from the PhysioNet
website [57–60] for a duration of 1 min. This website is used in the research work;
all data were downloaded as .m and .info files. A total of eighteen human recordings
are used in the research work.

5.1 Welch Method

The Welch method was introduced by the eminent scientist P. D. Welch. It is an
approach for estimating power spectral density and is used to estimate the strength
of a signal at specific frequencies. It is based on the idea of periodogram spectrum
estimates, which convert a signal from the time domain to the frequency domain.
Welch's method [61–63] is a further improvement of the standard periodogram
spectrum estimation technique and can also be treated as a modification of Bartlett's
method. The Welch approach reduces noise in the estimated power spectra in
exchange for a reduction in frequency resolution, which makes it a good choice for
reducing noise from imperfect and finite data. The differences between the Welch
method and Bartlett's technique are described below:
• The signal is split into L overlapping segments, each of length M, with adjacent
segments overlapping by D points.
• The overlapping segments are then windowed: after the data are split into over-
lapping segments, each of the L data segments has a window applied to it in the
time domain.
– Most window functions give more weight to the data at the center of the
segment than to the data at the edges, which represents a loss of information.
To mitigate that loss, the individual data segments are commonly overlapped in
time.
– The windowing of the segments makes the Welch approach a modified peri-
odogram. The periodogram of each segment is calculated by computing the
discrete Fourier transform and then computing the squared magnitude of the
result. The individual periodograms are averaged, which reduces the variance
of the individual power measurements. In the end, we obtain an array of power
measurements versus frequency bin.
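The segment-window-average procedure described above can be sketched in pure Python. This is an illustrative implementation, not the authors' code: the segment length, overlap, and the naive O(N²) DFT are choices made here for clarity (a real analysis would use an FFT and a library routine):

```python
import cmath
import math

def hamming(n_samples):
    """Hamming window w(n) = 0.54 - 0.46*cos(2*pi*n/(N-1))."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * n / (n_samples - 1))
            for n in range(n_samples)]

def periodogram(segment, window):
    """Modified periodogram: squared-magnitude DFT of a windowed segment."""
    n = len(segment)
    x = [s * w for s, w in zip(segment, window)]
    scale = sum(w * w for w in window)  # window power normalisation
    psd = []
    for k in range(n // 2 + 1):  # one-sided spectrum
        coeff = sum(x[m] * cmath.exp(-2j * math.pi * k * m / n)
                    for m in range(n))
        psd.append(abs(coeff) ** 2 / scale)
    return psd

def welch_psd(signal, seg_len, overlap):
    """Welch PSD: average modified periodograms over overlapping segments."""
    step = seg_len - overlap
    window = hamming(seg_len)
    segments = [signal[i:i + seg_len]
                for i in range(0, len(signal) - seg_len + 1, step)]
    n_bins = seg_len // 2 + 1
    avg = [0.0] * n_bins
    for seg in segments:
        for k, p in enumerate(periodogram(seg, window)):
            avg[k] += p / len(segments)
    return avg
```

For a sinusoid sampled at fs and analysed with seg_len-point segments, the spectral peak lands in the bin nearest f·seg_len/fs, with averaging over segments reducing the variance of each bin.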

5.2 Hamming Window

R. W. Hamming [64] designed the Hamming window. This window is optimized to
minimize the maximum side lobe, giving it a height of about one-fifth that of the
Hanning window.
 
w(n) = α − β cos(2πn/(N − 1))

where w(n) is the Hamming window, N is the number of samples in each frame,
and n = 0, 1, 2, 3, 4, 5, . . ., with α = 0.54 and β = 1 − α = 0.46.
The exact values α = 25/46 and β = 21/46 cancel the first side lobe of the
Hanning window by placing a zero at frequency 5π/(N − 1). Approximating the
coefficients to two decimal places notably lowers the level of the side lobes, to a
nearly equal-ripple condition. In the equiripple sense, the optimal values for the
coefficients are α = 0.53836 and β = 0.46164. The zero-phase version is given by:
 
W0(n) = 0.54 + 0.46 cos(2πn/(N − 1))

6 Analysis of the EEG Signal

The first step is to load the electroencephalogram signal [59, 60]. The normal human
sleep recording channels Fp2-F4, F4-C4, C4-P4, P4-O2, C4-A1, ROC-LOC, LOC-
ROC, EMG1-EMG2, ECG1-ECG2, DX1-DX2, SX1-SX2, SAO2, HR, PLETH,
STAT, and MIC are represented in Fig. 2. The bruxism patient sleep recording
channels Fp2-F4, F4-C4, C4-P4, P4-O2, F8-T4, T4-T6, FP1-FP3, F3-C3, C3-P3,
P3-O1, F7-T3, T3-T5, C4-A1, ROC-LOC, EMG1-EMG2, ECG1-ECG2, DX1-DX2,
and SX1-SX2 are represented in Fig. 3. In this research, the P4-O2 channel of the
EEG signal is extracted from all recorded channels; this is done for both the normal
human and the bruxism patient signals (Figs. 4 and 5). The third stage is filtering the
electroencephalogram signal: the low-pass filter passes low frequencies and blocks
high frequencies, and a low-pass cutoff frequency of 25 Hz is used here. The filtered
signals of both the bruxism patient and the normal human are shown in Figs. 6 and
7. The Hamming window is then applied to the filtered signals of the normal human
and the bruxism patient; of all the window filters, the Hamming window introduces
the least noise (Figs. 8 and 9). Finally, the power spectral density is estimated by
the Welch method, a technique designed by the renowned

Fig. 2 The representation of the sleep recordings of the normal humans: Fp2-F4, F4-C4, C4-
P4, P4-O2, C4-A1, ROC-LOC, LOC-ROC, EMG1-EMG2, ECG1-ECG2, DX1-DX2, SX1-SX2,
SAO2, HR, PLETH, STAT, and MIC

Fig. 3 The representation of the sleep recordings of the bruxism patients: Fp2-F4, F4-C4, C4-
P4, P4-O2, F8-T4, T4-T6, FP1-FP3, F3-C3, C3-P3, P3-O1, F7-T3, T3-T5, C4-A1, ROC-LOC,
EMG1-EMG2, ECG1-ECG2, DX1-DX2, and SX1-SX2

scientist P. D. Welch. This technique is used for the assessment of the signal power
at different frequencies. The Welch method divides the time series into data
segments; its output is a number of fast Fourier transform points, i.e., the Welch
power spectral density estimate (Figs. 10 and 11).
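The 25 Hz low-pass filtering step can be sketched as follows. This is an illustrative windowed-sinc FIR design, not the authors' filter: the chapter specifies only the cutoff frequency, so the tap count (51) and the sampling rate used in the example are assumptions:

```python
import math

def fir_lowpass(cutoff_hz, fs, num_taps=51):
    """Hamming-windowed sinc FIR low-pass taps; cutoff and fs in Hz."""
    fc = cutoff_hz / fs            # normalised cutoff (cycles/sample)
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        m = n - mid
        # ideal low-pass impulse response (sinc), centred on mid
        h = 2 * fc if m == 0 else math.sin(2 * math.pi * fc * m) / (math.pi * m)
        # Hamming window to control side lobes
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h * w)
    s = sum(taps)
    return [t / s for t in taps]   # normalise for unity DC gain

def convolve(signal, taps):
    """Direct-form convolution, output same length as input (zero-padded edges)."""
    half = len(taps) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += t * signal[k]
        out.append(acc)
    return out
```

With an assumed sampling rate of 256 Hz, `fir_lowpass(25.0, 256.0)` passes the theta band (4–8 Hz) essentially unchanged while strongly attenuating components well above 25 Hz, e.g. a 60 Hz interference tone.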

Fig. 4 The P4-O2 channel extracted from the sleep record of the normal human. The parietal-
occipital region is used in the proposed research

Fig. 5 The P4-O2 channel extracted from the sleep record of the bruxism patient. The parietal-
occipital region is used in the proposed research

Fig. 6 A low-pass finite impulse response filter is used in the filtration of the P4-O2 channel of
the EEG signal for the normal human

Fig. 7 A low-pass finite impulse response filter is used in the filtration of the P4-O2 channel of
the EEG signal for the bruxism patient

Fig. 8 The Hamming window applied to the P4-O2 channel for the normal human. The Hamming
window introduces negligible noise, which helps the accuracy of the system

Fig. 9 The Hamming window applied to the P4-O2 channel for the bruxism patient. The Hamming
window introduces negligible noise, which helps the accuracy of the system

Fig. 10 The Welch method applied to the P4-O2 channel for the measurement of the power spectral
density for the normal human. This method divides the signal time series into segments

Fig. 11 The Welch method applied to the P4-O2 channel for the measurement of the power spectral
density for the bruxism patient. This method divides the signal time series into segments

7 Results

The normalized values of the power spectral density are represented in Tables 1, 2,
3 and 4. The values cover the sleep stages S0, S1, S2, and REM. In Table 1, the
bruxism patients' normalized powers are 0.340360 and 0.098548, and the normal
humans' normalized powers are 0.196010 and 0.193010. The differences of the
normalized powers are 0.241812 for the bruxism patients and 0.003000 for the
normal humans; the normal human power is low compared with the bruxism patient
powers. In Table 2, the bruxism patients' normalized powers are 0.2703 and 0.2601,
and the normal humans' are 0.2744 and 0.26854. The differences of the

Table 1 Comparative results for the bruxism patient and normal human in the theta wave for the
P4-O2 channel of the EEG signal in the S0 sleep stage

Name of the subjects                   Bruxism patient   Bruxism patient   Normal human   Normal human
Normalized powers of the theta waves   0.340360          0.098548          0.196010       0.193010
Differences of two same subjects       0.241812                            0.003000
Observation                            Bruxism patient normalized power is high          Normal human normalized power is low

Table 2 Comparative results for the bruxism patient and normal human in the theta wave for the
P4-O2 channel of the EEG signal in the S1 sleep stage

Name of the subjects                   Bruxism patient   Bruxism patient   Normal human   Normal human
Normalized powers of the theta waves   0.27030           0.26010           0.27440        0.26854
Differences of two same subjects       0.01020                             0.00586
Observation                            Bruxism patient normalized power is high          Normal human normalized power is low

Table 3 Comparative results for the bruxism patient and normal human in the theta wave for the
P4-O2 channel of the EEG signal in the S2 sleep stage

Name of the subjects                   Bruxism patient   Bruxism patient   Normal human   Normal human
Normalized powers of the theta waves   0.34855           0.32266           0.19601        0.19301
Differences of two same subjects       0.02589                             0.00918
Observation                            Bruxism patient normalized power is high          Normal human normalized power is low

Table 4 Comparative results for the bruxism patient and normal human in the theta wave for the
P4-O2 channel of the EEG signal in the REM sleep stage

Name of the subjects                   Bruxism patient   Bruxism patient   Normal human   Normal human
Normalized powers of the theta waves   0.29592           0.21833           0.30669        0.30392
Differences of two same subjects       0.07759                             0.00277
Observation                            Bruxism patient normalized power is high          Normal human normalized power is low

normalized powers are 0.0102 for the bruxism patients and 0.00586 for the normal
humans; again, the normal human power is low compared with the bruxism patient
powers. In Table 3, the bruxism patients' normalized powers are 0.34855 and
0.32266, and the normal humans' are 0.19601 and 0.19301; the differences are
0.02589 for the bruxism patients and 0.00918 for the normal humans, and again the
normal human power is lower. In Table 4, the bruxism patients' normalized powers
are 0.29592 and 0.21833, and the normal humans' are 0.30669 and 0.30392; the
differences are 0.07759 for the bruxism patients and 0.00277 for the normal humans
(Table 4, Fig. 12).

Fig. 12 The comparative analysis of the bruxism disorder and the normal human in the theta wave.
For the bruxism disorder, the S0 sleep stage value is greater than in the other sleep stages; for the
normal human, the S2 sleep stage value is greater than in the other sleep stages
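The normalized theta power reported in Tables 1–4 is, in essence, the fraction of total spectral power falling in the theta band (4–8 Hz). A minimal sketch of that final step follows; the helper name and the exact normalisation are assumptions, since the chapter does not give its formula explicitly:

```python
def normalized_band_power(psd, freqs, band=(4.0, 8.0)):
    """Fraction of total PSD power that falls inside the given frequency band.

    psd   -- power spectral density estimate (e.g. the Welch output)
    freqs -- frequency in Hz of each PSD bin, same length as psd
    band  -- (low, high) edges in Hz; default is the theta band 4-8 Hz
    """
    in_band = sum(p for p, f in zip(psd, freqs) if band[0] <= f < band[1])
    return in_band / sum(psd)
```

In a pipeline like the one in Sect. 6, this would be applied to the Welch PSD of the filtered P4-O2 channel for each subject and sleep stage, yielding a single number comparable across the tables.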

8 Future Scope of the Proposed Research

In this research work, the study of bruxism using the P4-O2 channel of the EEG
signal is carried out. Different stages of sleep, sleep disorders, and EEG signals have
been discussed briefly. Earlier methods were graphical, so diagnosing from them
was a big issue. This method not only provides the mathematical values of the
normalized power, but also offers ways to identify other sleep syndromes. This
method can be a great aid in designing brain-interface systems. We can also use
this method to study other biomedical signals, such as the Galvanic Skin Response
(GSR), Electroretinogram (ERG), Electromyogram (EMG), Electrocardiogram
(ECG), and Electrooculogram (EOG) signals. An artificial neural network can
also be considered using the Power Spectral Density (PSD) of these EEG signals.

9 Conclusion

In the proposed work, we have developed a prognostic system for the sleep
syndrome bruxism using the P4-O2 channel of the EEG sleep record. The results
obtained from the theta wave for bruxism were higher than for normal subjects. The
proposed work will help ease the prognosis of patients. In future research work, we
intend to extend this study using machine learning and deep learning classifiers.
Additionally, we will design an automatic prognostic system for bruxism and other
sleep syndromes.

Acknowledgements The authors would like to thank Dr. Faez Iqbal Khan, Prof. Naseem, Prof.
Siddiqui, and Prof. Quddus for useful discussions. The authors also acknowledge the BMI-EP
Laboratory, UESTC, Chengdu, Sichuan, China for providing biomedical and computational
equipment. The National Natural Science Foundation of China supported this work under grant
61771100.

References

1. Tsuno, N., Besset, A., & Ritchie, K. (2005). Sleep and depression. The Journal of Clinical
Psychiatry.
2. Hasan, Y. M., Heyat, M. B. B., Siddiqui, M. M., Azad, S., & Akhtar, F. (2015). An overview
of sleep and stages of sleep. International Journal of Advanced Research in Computer and
Communication Engineering, 4(12), 505–507.
3. Khoramirad, A., et al. (2015). Relationship between sleep quality and spiritual well-
being/religious activities in Muslim women with breast cancer. Journal of Religion and Health,
54(6), 2276–2285.
4. BaHammam, A. S. (2011). Sleep from an Islamic perspective. Annals of Thoracic Medicine,
6(4), 187.
5. Pohlman, R., & Cichos, M. (1974, July 16). Apparatus for disintegrating concretions in body
cavities of living organisms by means of an ultrasonic probe. U.S. Patent No. 3,823,717.
Short Time Frequency Analysis of Theta Activity … 81

6. Zhang, R., et al. (2016). Real-time discrimination and versatile profiling of spontaneous reactive
oxygen species in living organisms with a single fluorescent probe. Journal of the American
Chemical Society, 138(11), 3769–3778.
7. Capellini, I., et al. (2008). Energetic constraints, not predation, influence the evolution of sleep
patterning in mammals. Functional Ecology, 22(5), 847–853.
8. Opp, M. R. (2009). Sleeping to fuel the immune system: Mammalian sleep and resistance to
parasites. BMC Evolutionary Biology, 9(1), 8.
9. Samson, D. R., & Nunn, C. L. (2015). Sleep intensity and the evolution of human cognition.
Evolutionary Anthropology: Issues, News, and Reviews, 24(6), 225–237.
10. Lee-Chiong, T. (2008). Sleep medicine: Essentials and review. Oxford University Press.
11. Imtiaz, S. A. (2015). Low-complexity algorithms for automatic detection of sleep stages and
events for use in wearable EEG systems.
12. Rechtschaffen, A., & Kales, A. (1968). A manual of standardized terminology, techniques and
scoring system for sleep stages of human subjects. Washington, DC: Public Health Service,
U.S. Government Printing Office.
13. Iber, C., Ancoli-Israel, S., Chesson, A., & Quan, S. (2007). The AASM manual for the scoring
of sleep and associated events: Rules, terminology and technical specifications. Westchester,
IL: American Academy of Sleep Medicine.
14. Gaudreau, H., Carrier, J., & Montplaisir, J. (2001). Age-related modifications of NREM sleep
EEG: From childhood to middle age. Journal of Sleep Research, 10(3), 165–172.
15. Marzano, C., et al. (2010). The effects of sleep deprivation in humans: Topographical electroen-
cephalogram changes in non-rapid eye movement (NREM) sleep versus REM sleep. Journal
of Sleep Research, 19(2), 260–268.
16. Holz, J., et al. (2012). EEG sigma and slow-wave activity during NREM sleep correlate with
overnight declarative and procedural memory consolidation. Journal of Sleep Research, 21(6),
612–619.
17. Nofzinger, E. A., et al. (2002). Human regional cerebral glucose metabolism during non-rapid
eye movement sleep in relation to waking. Brain, 125(5), 1105–1115.
18. Thirumalai, S. S., Shubin, R. A., & Robinson, R. (2002). Rapid eye movement sleep behavior
disorder in children with autism. Journal of Child Neurology, 17(3), 173–178.
19. Dang-Vu, T. T., et al. (2011). Interplay between spontaneous and induced brain activity during
human non-rapid eye movement sleep. Proceedings of the National Academy of Sciences,
108(37), 15438–15443.
20. Lakshminarayana Tadimeti, M. D., et al. (2000). Sleep latency and duration estimates among
sleep disorder patients: Variability as a function of sleep disorder diagnosis, sleep history, and
psychological characteristics. Sleep, 23(1), 1.
21. Van der Heijden, K. B., Smits, M. G., Someren, E. J. V., & Boudewijn Gunning, W. (2005).
Idiopathic chronic sleep onset insomnia in attention-deficit/hyperactivity disorder: A circadian
rhythm sleep disorder. Chronobiology International, 22(3), 559–570.
22. Senthilvel, E., Auckley, D., & Dasarathy, J. (2011). Evaluation of sleep disorders in the primary
care setting: History taking compared to questionnaires. Journal of Clinical Sleep Medicine,
7(1), 41–48.
23. Sateia, M. J. (2014). International classification of sleep disorders. Chest, 146(5), 1387–1394.
24. Thorpy, M. J. (2012). Classification of sleep disorders. Neurotherapeutics, 9(4), 687–701.
25. Ohayon, M. M., & Reynolds, C. F., III. (2009). Epidemiological and clinical relevance of
insomnia diagnosis algorithms according to the DSM-IV and the International Classification
of Sleep Disorders (ICSD). Sleep Medicine, 10(9), 952–960.
26. Heyat, M. B. B. (2016). Insomnia: Medical sleep disorder & diagnosis (Tech. Rep. V337729).
Hamburg, Germany: Anchor Academic Publishing.
27. Heyat, M. B. B. (2017). Hamming window is used in the detection of insomnia medical sleep
syndrome. In Proceedings of International Seminar on Present Scenario & Future Prospectives
of Research in Engineering and Sciences (ISPSFPRES) (pp. 65–71).
28. Heyat, M. B. B., Akhtar, S. F., & Azad, S. (2016, July). Power spectral density are used in the
investigation of insomnia neurological disorder. In Proceedings of Pre Congress Symposium,
Organized Indian Academy of Social Sciences (ISSA) (pp. 45–50).
82 M. B. B. Heyat et al.

29. Heyat, B., Akhtar, F., Mehdi, A., Azad, S., Hayat, A. B., & Azad, S. (2017). Normalized power
are used in the diagnosis of insomnia medical sleep syndrome through EMG1-EMG2 channel.
Austin Journal of Sleep Disorders, 4(1), 1027.
30. Heyat, M. B. B., & Siddiqui, S. A. (2015). An overview of Dalk therapy and treatment of
insomnia by Dalk therapy (Tech. Rep. 2). Lucknow, India: State Takmeel-ut-Tib-College and
Hospital.
31. Siddiqui, M. M., Srivastava, G., & Saeed, S. H. (2016). Diagnosis of insomnia sleep disorder
using short time frequency analysis of PSD approach applied on EEG signal using channel
ROC-LOC. Sleep Science, 9(3), 186–191.
32. Peyron, C., Faraco, J., Rogers, W., Ripley, B., Overeem, S., Charnay, Y., et al. (2000). A
mutation in a case of early onset narcolepsy and a generalized absence of hypocretin peptides
in human narcoleptic brains. Nature Medicine, 6(9), 991.
33. Thorpy, M. J., Shapiro, C., Mayer, G., Corser, B. C., Emsellem, H., Plazzi, G., et al. (2019). A
randomized study of solriamfetol for excessive sleepiness in narcolepsy. Annals of Neurology,
85(3), 359–370.
34. Guilleminault, C., & Pelayo, R. (2000). Narcolepsy in children. Pediatric Drugs, 2(1), 1–9.
35. Rahman, T., Farook, O., Heyat, B. B., & Siddiqui, M. M. (2016). An overview of narcolepsy.
International Advanced Research Journal in Science, Engineering and Technology, 3(3), 85–87.
36. Veasey, S. C., Yeou-Jey, H., Thayer, P., & Fenik, P. (2004). Murine Multiple Sleep Latency
Test: Phenotyping sleep propensity in mice. Sleep, 27(3), 388–393.
37. Manni, R., Terzaghi, M., & Repetto, A. (2008). The FLEP scale in diagnosing nocturnal frontal
lobe epilepsy, NREM and REM parasomnias: Data from a tertiary sleep and epilepsy unit.
Epilepsia, 49(9), 1581–1585.
38. Derry, C. P., Heron, S. E., Phillips, F., Howell, S., MacMahon, J., Phillips, H. A., et al.
(2008). Severe autosomal dominant nocturnal frontal lobe epilepsy associated with psychiatric
disorders and intellectual disability. Epilepsia, 49(12), 2125–2129.
39. Farooq, O., Rahman, T., Heyat, M. B. B., Siddiqui, M. M., & Akhtar, F. (2016). An overview of
NFLE. International Journal of Innovative Research in Electrical, Electronics, Instrumentation
and Control Engineering, 4, 209–211.
40. Bixler, E. O., Vgontzas, A. N., Lin, H. M., Calhoun, S. L., Vela-Bueno, A., & Kales, A.
(2005). Excessive daytime sleepiness in a general population sample: The role of sleep apnea,
age, obesity, diabetes, and depression. The Journal of Clinical Endocrinology & Metabolism,
90(8), 4510–4515.
41. Dempsey, J. A., Veasey, S. C., Morgan, B. J., & O’Donnell, C. P. (2010). Pathophysiology of
sleep apnea. Physiological Reviews, 90(1), 47–112.
42. Cai, A., Wang, L., & Zhou, Y. (2016). Hypertension and obstructive sleep apnea. Hypertension
Research, 39(6), 391.
43. Elshaug, A. G., Moss, J. R., Southcott, A. M., & Hiller, J. E. (2007). Redefining success in
airway surgery for obstructive sleep apnea: A meta analysis and synthesis of the evidence.
Sleep, 30(4), 461–467.
44. Cowie, M. R., Woehrle, H., Wegscheider, K., Angermann, C., d’Ortho, M. P., Erdmann, E.,
et al. (2015). Adaptive servo-ventilation for central sleep apnea in systolic heart failure. New
England Journal of Medicine, 373(12), 1095–1105.
45. Aurora, R. N., Chowdhuri, S., Ramar, K., Bista, S. R., Casey, K. R., Lamm, C. I., et al.
(2012). The treatment of central sleep apnea syndromes in adults: Practice parameters with an
evidence-based literature review and meta-analyses. Sleep, 35(1), 17–40.
46. Javaheri, S., Smith, J., & Chung, E. (2009). The prevalence and natural history of complex
sleep apnea. Journal of Clinical Sleep Medicine, 5(03), 205–211.
47. McEvoy, R. D., Antic, N. A., Heeley, E., Luo, Y., Ou, Q., Zhang, X., et al. (2016). CPAP
for prevention of cardiovascular events in obstructive sleep apnea. New England Journal of
Medicine, 375(10), 919–931.
48. Chirinos, J. A., Gurubhagavatula, I., Teff, K., Rader, D. J., Wadden, T. A., Townsend, R.,
et al. (2014). CPAP, weight loss, or both for obstructive sleep apnea. New England Journal of
Medicine, 370(24), 2265–2275.

49. Sanei, S., & Chambers, J. A. (2007). EEG signal processing.


50. Subha, D. P., Joseph, P. K., Acharya, R., & Lim, C. M. (2010). EEG signal analysis: A survey.
Journal of Medical Systems, 34(2), 195–212.
51. Lakshmi, M. R., Prasad, T. V., & Prakash, D. V. C. (2014). Survey on EEG signal processing
methods. International Journal of Advanced Research in Computer Science and Software
Engineering, 4(1).
52. Heyat, M. B. B., Shaguftah, Hasan, Y. M., & Siddiqui, M. M. (2015). EEG signals and wire-
less transfer of EEG signals. International Journal of Advanced Research in Computer and
Communication Engineering, 4(12), 502–504.
53. Heyat, M. B. B., & Siddiqui, M. M. (2015). Recording of EEG, ECG, EMG signal. International
Journal of Advanced Research in Computer Science and Software Engineering, 5(10), 813–815.
54. Rappelsberger, P., Pockberger, H., & Petsche, H. (1982). The contribution of the cortical layers
to the generation of the EEG: Field potential and current source density analyses in the rabbit’s
visual cortex. Electroencephalography and Clinical Neurophysiology, 53(3), 254–269.
55. Van Luijtelaar, G., Hramov, A., Sitnikova, E., & Koronovskii, A. (2011). Spike–wave discharges
in WAG/Rij rats are preceded by delta and theta precursor activity in cortex and thalamus.
Clinical Neurophysiology, 122(4), 687–695.
56. Berk, L., Alphonso, C., Thakker, N., & Nelson, B. (2014). Humor similar to meditation
enhances EEG power spectral density of gamma wave band activity (31–40 Hz) and synchrony
(684.5). The FASEB Journal, 28(1_supplement), 684–685.
57. Goldberger, A. L., Amaral, L. A., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., et al.
(2000). PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource
for complex physiologic signals. Circulation, 101(23), e215–e220.
58. Terzano, M. G., Parrino, L., Sherieri, A., Chervin, R., Chokroverty, S., Guilleminault, C., et al.
(2001). Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP)
in human sleep. Sleep Medicine, 2(6), 537–553.
59. Heyat, M. B. B., Lai, D., & Zhang, F. I. K. Y. (2019). Sleep bruxism detection using decision
tree method by the combination of C4-P4 and C4-A1 channels of scalp EEG. IEEE Access.
60. Lai, D., et al. (2019). Prognosis of sleep bruxism using power spectral density approach applied
on EEG signal of both EMG1-EMG2 and ECG1-ECG2 channels. IEEE Access, 7, 82553–
82562.
61. Villwock, S., & Pacas, M. (2008). Application of the Welch-method for the identification of
two- and three-mass-systems. IEEE Transactions on Industrial Electronics, 55(1), 457–466.
62. Rahi, P. K., & Mehra, R. (2014). Analysis of power spectrum estimation using welch method for
various window techniques. International Journal of Emerging Technologies and Engineering,
2(6), 106–109.
63. Barbe, K., Pintelon, R., & Schoukens, J. (2009). Welch method revisited: Nonparametric power
spectrum estimation via circular overlap. IEEE Transactions on Signal Processing, 58(2),
553–565.
64. Kumar, S., Singh, K., & Saxena, R. (2011). Analysis of Dirichlet and generalized “Hamming”
window functions in the fractional Fourier transform domains. Signal Processing, 91(3), 600–
606.
Hand Gesture Recognition for Human
Computer Interaction and Its
Applications in Virtual Reality

Sarthak Gupta, Siddhant Bagga and Deepak Kumar Sharma

Abstract Computers are emerging as the most utilitarian products in human
society, and therefore the interaction between humans and computers will have a
very significant influence on society. As a result, enormous efforts are being
made to augment research in the domain of human computer interaction and to
develop more efficient and effective techniques for reducing the barrier between
humans and computers. The primary objective is to develop a conducive
environment in which very natural interaction between humans and computers is
feasible. In order to achieve this goal, gestures play a pivotal role and are
the core area of research in this domain. Hand gesture recognition, a significant
component of virtual reality, finds applications in numerous fields including video
games, cinema, robotics, education, marketing, etc. Virtual reality also caters to a
variety of healthcare applications involving the procedures used in surgical operations,
including remote surgery, augmented surgery, software emulation of surgeries
prior to the actual surgeries, therapies, training in medical education, medical data
visualization and much more. Many tools and techniques have been developed
to support the creation of such virtual environments. Gesture recognition
signifies the method of tracking human gestures and of representing and
converting those gestures into meaningful signals. Contact-based and vision-based devices
are used to create and implement these recognition systems effectively. The
chapter begins with an introduction to hand gesture recognition and the process of
carrying it out. Further, the latest research being carried out in the domain of
hand gesture recognition is described. This is followed by
details of the applications of virtual reality and hand gesture recognition in the field of

S. Gupta · S. Bagga · D. K. Sharma (B)


Department of Information Technology, Netaji Subhas University of Technology (Formerly Netaji
Subhas Institute of Technology), New Delhi, India
e-mail: dk.sharma1982@yahoo.com
S. Gupta
e-mail: sarthakgupta259@gmail.com
S. Bagga
e-mail: siddhantbagga1@gmail.com

© Springer Nature Switzerland AG 2020 85


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_5
86 S. Gupta et al.

healthcare. Then, various techniques which are applied in hand gesture recognition
are described. Finally, the challenges in the field of hand gesture recognition have
been explained.

Keywords Artificial intelligence · Virtual reality · Hand gesture recognition ·


Human computer interaction · Healthcare · Representations · Recognition ·
Natural interfaces

1 Introduction

As the name suggests, human computer interaction [1] focuses on the interaction
carried out between human beings and computers. As a matter of fact, the scope
of HCI is not limited to computers but extends to all forms of technology.
This multidisciplinary field of study involves many areas of research, as shown in
Fig. 1.
Enormous research work is being carried out on the development of interfaces
incorporating the latest technologies, which can further be used in the requisite
interactions between humans and computers in virtual environments. A pivotal part
of human-computer interaction is gesture recognition, whose type is determined by
the number of channels used, viz. unimodal and multimodal [3]. A variety of
modalities are considered in order to comprehend the behaviour of the user, includ-
ing gestures, speech, body movements, facial expressions, etc.

Fig. 1 Fields of study in human computer interaction [2]

Hand Gesture Recognition for Human Computer Interaction … 87

There are basically three different levels of user activity in HCI, which are described as follows:
1. Physical level [4]: It involves determining the mechanical aspects of humans’
interaction with the machines.
2. Cognitive level [5]: It involves how the humans comprehend the machines and
vice versa.
3. Affective level [6]: It involves making HCI a very pleasing experience for the
humans so that the user continues to interact with the machine.
Hand-gesture recognition is an important part of human computer interaction.
Moving body parts to express something is known as a gesture; it can
be either a movement of the whole hand or of only the fingers. Hand gesture
recognition is a prominent part of various applications of virtual reality. Virtual
reality is a computer-simulated environment combining visuals and audio
in a three-dimensional space, with users experiencing reality within that computer-
generated environment. It finds applications in computer games, movies, theme parks,
etc.

2 Process of Hand Gesture Recognition

Conventional methods of interacting with the computer include keyboards, joysticks,
mice and other input devices, but they depend heavily on external devices
for the purpose of communication. Therefore, newer methods, including hand
gesture recognition, are required to improve the level of interactivity between
users and computers by using the movements of the hand to express and signify
something to the machine. There are two types of hand-gesture recognition, viz. static
and dynamic. Static recognition is based on the shape of
the hand, whereas dynamic recognition involves the movement of the hands, with
recognition carried out on the basis of the trajectory of the hand's movement in space.
Conventionally, hand gesture recognition was carried out using special data
gloves [7]. Data can be sent from the glove to the computer in real time, and
based on these data, feedback can be delivered from the virtual environment to the
user through the glove. The demerit of this approach is the high cost of the additional
equipment. Vision-based hand-gesture recognition is the newer approach,
which uses a precise camera to capture an image of the hand and then
comprehend its movements. A block diagram of the process of hand-gesture
recognition is shown in Fig. 2.
First of all, an image is captured by the camera and then segmentation of the image is
carried out; in segmentation, the image is partitioned into various parts. A technique
known as hand tracking is employed to determine the consecutive positions of the hand.
Next, the most significant features are extracted. Finally, the classifier plays a salient
role in recognition of the gesture: it takes a set of features as input and outputs a
label. The Hidden Markov Model (HMM) and the Conditional Random Field are the most
commonly used classifiers. The process of hand gesture recognition [8] is
described in the next section.

Fig. 2 Process of hand-gesture recognition [8]

2.1 Hand Segmentation

Prior to determining the movements of the hands, hand segmentation is carried out.
There are various techniques for carrying out hand segmentation which are
described as follows.

2.1.1 Segmentation Based on Skin Colour [9]

The image in RGB colour space is transformed into the HSI model, and a threshold
is then applied to convert that image into a binary image. Further, noise
minimization is carried out. A segmentation algorithm [9] based on skin colour is depicted in
Fig. 3.
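As a rough illustration of the colour-based step (not the exact algorithm of [9]), the sketch below converts RGB pixels to hue and saturation using the standard library and thresholds a skin-like range to produce a binary mask. The threshold values are invented for the example.

```python
# Minimal sketch of colour-based hand segmentation: threshold a skin-like
# hue/saturation range. Thresholds are illustrative assumptions, not [9]'s.
import colorsys
import numpy as np

def skin_mask(rgb, hue_max=0.1, sat_min=0.2):
    """Return a binary mask: True where a pixel's hue/saturation look skin-like."""
    h, w, _ = rgb.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            r, g, b = rgb[i, j] / 255.0
            hue, _light, sat = colorsys.rgb_to_hls(r, g, b)
            mask[i, j] = hue <= hue_max and sat >= sat_min
    return mask

# toy 2x2 image: one skin-toned pixel, three background pixels
img = np.array([[[210, 160, 120], [0, 0, 255]],
                [[0, 255, 0], [30, 30, 30]]], dtype=np.uint8)
print(skin_mask(img).astype(int))   # only the skin-toned pixel is marked
```

A real system would follow this with the noise-minimization step described above.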

2.1.2 Frame Differencing

One frame is subtracted from another. If the difference obtained at a pixel is
significant, that pixel is labelled 'foreground'. This technique is used to determine
the body edges. For instance, an algorithm for frame differencing [10] is illustrated
in Fig. 4.
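The frame-differencing rule itself can be sketched in a few lines: every pixel whose intensity changed by more than a threshold between consecutive frames is labelled foreground. The threshold value here is an illustrative assumption.

```python
# Hedged sketch of the frame-differencing step: threshold the absolute
# change between consecutive frames. The threshold is a toy value.
import numpy as np

def foreground_mask(prev_frame, curr_frame, threshold=25):
    """Label as foreground every pixel that changed by more than `threshold`."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

prev = np.zeros((3, 3), dtype=np.uint8)
curr = prev.copy()
curr[1, 1] = 200          # a moving hand brightens one pixel
mask = foreground_mask(prev, curr)
print(mask.sum())         # → 1
```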
Hand Gesture Recognition for Human Computer Interaction … 89

Fig. 3 Process of segmentation based on colour [9]

Fig. 4 Flow chart of frame differencing algorithm [10]

2.2 Contour Matching

A series of points that defines a line or a curve in an image is called a contour;
each point in the series encodes the position of the next point. A contour model
contains all the pixels lying on the edges of objects, and these points are positioned
over all possible points in the target image or another contour. A match value is
then computed from the edge pixels of the contour model: edge pixels of the model
that correspond to pixels of the target image are added up, and a high value indicates
a significant resemblance between the target image and the contour model.
Matching by contours is
much better than matching by templates because in contour matching only the edge
pixels are considered, in contrast to the whole image in template matching. This
process is therefore much faster and yields more precise matches. Moreover, image
transformations (scaling, translation and rotation) do not cause any problems in
contour matching. An algorithm for contour matching is described in [11] and
illustrated in Fig. 5.

Fig. 5 Flow chart of contour matching algorithm [11]
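The edge-pixel matching score described above can be sketched as a simple overlap count between a contour model and the target's edge map. The toy edge maps below stand in for real extracted contours.

```python
# Illustrative contour-matching score: count how many edge pixels of a
# contour model coincide with edge pixels of the target (toy edge maps).
import numpy as np

def contour_score(model_edges, target_edges):
    """Fraction of model edge pixels that also lie on target edges."""
    hits = np.logical_and(model_edges, target_edges).sum()
    return hits / model_edges.sum()

model = np.zeros((5, 5), dtype=bool)
model[1:4, 1] = True                         # a small vertical edge
target_same = model.copy()
target_shifted = np.roll(model, 2, axis=1)   # same edge, displaced
print(contour_score(model, target_same))     # → 1.0
print(contour_score(model, target_shifted))  # → 0.0
```

A high score signals resemblance; in practice the model is swept over candidate positions in the target image.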

2.3 Hand Tracking

In order to determine the path traversed by the hand and its corresponding trajectory,
the hand is tracked. The most widely used approach is to compute the centroid of
the segmented hand in each frame and then connect those points to obtain the path
followed by the hand. The moments of the image are computed from the intensities
of its pixels, and the centroid is then calculated from these image moments.
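The centroid computation above follows directly from the image moments: cx = M10/M00 and cy = M01/M00, which for a binary mask reduce to the mean pixel coordinates.

```python
# Centroid of a binary segmented-hand image from its raw moments,
# computed here with plain NumPy on a toy mask.
import numpy as np

def centroid(binary_hand):
    """Centre of mass (row, col): first moments divided by the zeroth moment."""
    ys, xs = np.nonzero(binary_hand)
    return ys.mean(), xs.mean()

hand = np.zeros((5, 5), dtype=bool)
hand[2, 1:4] = True                    # a horizontal blob on row 2
cy, cx = centroid(hand)
print(cy, cx)                          # → 2.0 2.0
```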
Hand Gesture Recognition for Human Computer Interaction … 91

2.4 Feature Extraction

Retrieving a set of characteristics from an image in order to classify that
image is known as feature extraction. Various methods have been developed
for feature extraction. Some techniques use the hand shape, the
positions of the fingertips, the centre of the palm, etc. in order to identify the hand. In other
techniques, a feature vector is created in which the first parameter represents the
aspect ratio of the hand's bounding box and the other parameters can be the brightness
values of the pixels in the image. In [12], SGONG (a self-growing and self-organized
neural gas algorithm) is used, which takes three features: the palm area, the middle
of the palm and the slope of the hand.
In [13], the COG (centre of gravity) of the segmented hand is computed and
the distance from the COG to the farthest point on the fingers is calculated. Finally, a
binary signal is retrieved to identify the number of fingers. In [14], blocks
were obtained from the segmented image, with each block indicating a measurement
of brightness; various experiments were carried out in order to determine the
right block size. ZMs (Zernike moments) and PZMs (pseudo Zernike moments)
are used in [15], and this method involves four steps. First, the input is segmented
to obtain a binary hand silhouette using colour segmentation.
The MBC (minimum bounding circle) is used in the next step. Next, using morpho-
logical operations, the fingers and palm are separated according to the radius of the MBC. The
Zernike moments and pseudo Zernike moments of the significant parts of the fingers
and palm are then calculated with respect to the centre of the MBC. At the end,
nearest-neighbour techniques are employed to perform the matches.
In [16], the MCT (modified census transform) is used as an operator on the pixels,
and a linear classifier is then applied. In [17], joint movements are observed and the
concept of ROM (range of motion) is used in order to determine the angle between
the initial position and the complete movement.
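The COG-based feature of [13] can be sketched as the distance from the centre of gravity to the farthest hand pixel (roughly, a fingertip). The toy mask below is an invented stand-in for a segmented hand.

```python
# Sketch of the feature in [13]: distance from the hand's centre of
# gravity to the farthest hand pixel. Pure NumPy, toy binary mask.
import numpy as np

def cog_to_farthest(binary_hand):
    """Max Euclidean distance from the COG to any pixel of the hand."""
    ys, xs = np.nonzero(binary_hand)
    cy, cx = ys.mean(), xs.mean()                 # centre of gravity
    d = np.sqrt((ys - cy) ** 2 + (xs - cx) ** 2)  # distances of all hand pixels
    return d.max()

hand = np.zeros((7, 7), dtype=bool)
hand[3, 1:6] = True        # palm strip
hand[0:3, 3] = True        # an extended "finger"
print(round(cog_to_farthest(hand), 2))   # → 2.25
```

Thresholding such distances along the contour is one way to count extended fingers, as [13] does with its binary signal.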
The process of hand gesture recognition has been implemented using numerous
algorithms and techniques, some of which have been described in the following
section.

3 Latest Research in Hand Gesture Recognition

In recent times, a lot of effort has been made to carry out more efficient hand
gesture recognition in the field of human computer interaction. The latest
techniques are described as follows:
1. In [18], a system is developed for hand gesture recognition that uses
skin detection along with comparison based on the hand posture contour.
Hand gestures are recognized using a 'bag-of-features' and an SVM (support
vector machine). A grammar is then developed that creates the commands
to control the interacting program. SIFT (scale invariant feature transform) is
applied after the extraction of the requisite points in the image. The important
points from the training images are mapped to a histogram vector, termed a
'bag of words', using K-means clustering. The hand is detected in
each frame, and then the important points, which primarily contain the hand
gesture, are extracted. These points are fed to the cluster
model to map them to a histogram vector, and this histogram vector is then fed
into the support vector machine classifier for hand gesture recognition. The generation
of the 'bag of words' is illustrated in Fig. 6.
2. In [19], an MPCNN (max-pooling convolutional neural network) is used for car-
rying out supervised feature learning and recognition of hand gestures. The
contour is retrieved by colour-based segmentation, and smoothing is then carried
out to eliminate noisy edges. The architecture of the MPCNN is shown in
Fig. 7.
3. In [20], only features involving the shape and orientation of the hand
are considered, such as the centroid (centre of mass), folded and unfolded thumb
and fingers, and the orientation of the fingers and thumb. For segmentation,
the K-means clustering algorithm is used.

Fig. 6 Generation of histogram vector (bag of words) [18]

Fig. 7 Architecture of MPCNN [19]



Fig. 8 Work flow of exemplar based technique [23]

4. In [21], a combination of RGB and depth descriptors is employed for carrying
out hand gesture recognition.
5. In [22], a DBN (dynamic Bayesian network) is used. Skin extraction is carried
out first, followed by motion tracking. A cycle gesture network is
defined for modelling a continuous stream of gestures.
6. In [23], an exemplar-based technique is used that relies on motion-divergence
fields, which can be normalized to gray-scale images. MSERs (maximally stable
extremal regions) are detected on the motion-divergence maps. Local descriptors
are extracted to capture the patterns of local motion. TF-IDF (term frequency-
inverse document frequency) weighting is used for matching the gestures in the
database with the input gestures. The work flow is shown in Fig. 8.
7. In [24], a superpixel-based method is developed that uses a Kinect depth
camera along with a superpixel earth mover's distance metric. In this method,
the difference between hand gestures is measured by the earth mover's distance.
The proposed framework is shown in Fig. 9.
8. In [24], rotation normalization based on the geometrical orientation of the ges-
ture is employed for alignment of the extracted hand. The normalized binary
silhouettes are then represented using Krawtchouk moments. The workflow of
this method is shown in Fig. 10.
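To make the bag-of-features idea of [18] concrete, the sketch below quantizes local descriptors (stand-ins for real SIFT vectors) against a k-means codebook and builds the normalized histogram that would be fed to the SVM. The codebook and descriptors are toy NumPy values, not real SIFT output.

```python
# Hedged sketch of the bag-of-features step: quantize descriptors to their
# nearest codeword and form a histogram ("bag of words"). Toy data only.
import numpy as np

def bag_of_words(descriptors, codebook):
    """Quantize each descriptor to its nearest codeword; return the histogram."""
    # pairwise distances, shape (n_descriptors, n_codewords)
    d = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()             # normalized histogram vector

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])     # 2 "visual words"
frame_descriptors = np.array([[0.5, 0.2], [9.8, 10.1], [10.2, 9.9]])
h = bag_of_words(frame_descriptors, codebook)
print(h)                                 # one third vs two thirds of the votes
```

In the full pipeline of [18], this histogram is the per-frame feature passed to the SVM classifier.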

4 Applications of Virtual Reality and Hand Gesture


Recognition in Healthcare

The advancement of Virtual Reality has great scope in the field of healthcare. It can
be used to improve diagnosis of medical conditions, medical training, etc. A few
applications of Virtual Reality in healthcare have been mentioned below.

Fig. 9 Framework for superpixel based approach [24]

Fig. 10 Workflow of the proposed approach [24]

Virtual Reality can be used to enhance the Human Computer Interaction (HCI)
experience. An obvious application of VR is as a communication interface. It can
be used to collect and visualize the patients’ medical data. By using VR, the visual-
izations would be better, more detailed and highly interactive and thus help in more
accurate diagnoses. VR also has huge potential when it comes to medical training and

education [25]. Students can be taught about various anatomical features with better
understanding and clarity with the help of VR tools. Details of various surgeries and
procedures can be presented in greater detail to improve surgical skills [26]. In fact,
a study conducted in [27] has found that using a VR based training curriculum for
laparoscopic procedure shortened the learning curve as compared to other traditional
training methods, hence reinforcing the importance of VR as a medium of medical
education. Another major application of Virtual Reality is simulated surgery [28].
Surgeons can perform a trial surgery in a virtual environment to reduce the chances
of error while performing the actual surgery.
During surgery, real-time visualization is essential. However, traditional tools
require physical contact, which is not suitable for conditions where sterility is impor-
tant. Moreover, such tools are not very intuitive and may divert the attention of the
surgeons. In [29], a system has been proposed that keeps track of the patient’s 3D
anatomy and lets the surgeon interact with it in real-time. The interface is touch-less,
and based on vision-based hand-gesture recognition, which makes it more intuitive
to operate (see Fig. 11). They have used Histogram of Oriented Gradients (HOG)
features for detecting the hands. For gesture recognition, a multi-class Support Vector
Machine (SVM) has been trained using the HOG features.
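The HOG features of [29] are, at their core, magnitude-weighted histograms of gradient orientations. The simplified sketch below computes one such histogram over a whole image (real HOG adds cells, blocks and normalization), with an invented test image.

```python
# Simplified stand-in for a HOG descriptor: bin gradient orientations,
# weighted by gradient magnitude. Bin count and image are illustrative.
import numpy as np

def orientation_histogram(gray, n_bins=9):
    """Magnitude-weighted histogram of unsigned gradient orientations."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # unsigned orientation
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())       # magnitude-weighted votes
    return hist / (hist.sum() + 1e-9)

gray = np.tile(np.arange(8, dtype=float), (8, 1))    # pure horizontal gradient
h = orientation_histogram(gray)
print(h.argmax())                                    # → 0 (horizontal-gradient bin)
```

A multi-class SVM trained on such descriptors then assigns each detected hand region a gesture label, as in [29].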
In [30], the need for a more intuitive interaction system is emphasized, that allows
for 6 degrees of freedom (6-DOF) in order to effectively explore 3-dimensional data.
They have proposed a system that, among other devices, uses “data gloves” in order
to interact naturally with the virtual environment. The data gloves calculate flexion
of each finger, and the roll and pitch values of the hand are measured by a tilt sensor.
The calculated values of the data glove are processed and compared with reference
gestures. Once the gesture has been recognized, the corresponding action is taken in
the virtual environment. This process has been described in Fig. 12.
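The glove-matching step of [30] can be sketched as nearest-reference classification of the measured flexion/tilt vector. The reference gestures, sensor layout and tolerance below are invented for illustration.

```python
# Sketch of data-glove gesture matching: compare the measured glove vector
# against stored references and pick the closest one within a tolerance.
# Sensor values, gestures and tolerance are invented for the example.
import numpy as np

REFERENCE_GESTURES = {
    # five finger-flexion values + roll + pitch, all normalized to [0, 1]
    "fist": np.array([1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5]),
    "open_hand": np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.5, 0.5]),
    "point": np.array([0.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.5]),
}

def recognize(measured, tolerance=0.8):
    """Return the reference gesture nearest to the measured glove vector."""
    name, dist = min(
        ((g, np.linalg.norm(measured - ref)) for g, ref in REFERENCE_GESTURES.items()),
        key=lambda pair: pair[1],
    )
    return name if dist <= tolerance else None

reading = np.array([0.1, 0.9, 1.0, 0.95, 1.0, 0.5, 0.4])   # noisy "point"
print(recognize(reading))          # → point
```

Once a gesture is recognized, the corresponding action is triggered in the virtual environment, as Fig. 12 describes.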

Fig. 11 Real-time visualization model for computer assisted surgery [29]



Fig. 12 Data glove gesture recognition pipeline [30]

5 Hand Gesture Recognition Techniques

Hand-gesture recognition can be divided into two main categories based on the method
of acquiring information: data-glove based and vision based. Vision-based tech-
niques are generally low-cost and more natural, since additional hardware is
usually not required. Data-glove based techniques are usually more accurate
since they use sensors, but at the same time they are bulkier and hinder
the natural movement of the hands [31]. In this section, a few of the vision-based
techniques for hand-gesture recognition are discussed.
Hand-gesture recognition systems consist of 3 main phases: detection, tracking
and recognition. Some of the techniques for these tasks have been shown in Fig. 13
[32].

5.1 Detection

The first step is detecting hands and segmenting the corresponding image
regions. Segmentation is extremely important since it separates the pertinent data
from the image background. A few important features used for detection are discussed
below:

Fig. 13 Techniques for hand gesture recognition [32]

1. Colour: Skin colour is an important feature that is often used for detection and
segmentation of hands. One of the major decisions is which colour model
to use. Many colour models exist, such as RGB, HSV, YUV, etc. In general, chro-
maticity-based colour models are preferred, since the dependence on illumination
can then be effectively reduced. These techniques are supposed to be invariant
to slight changes in skin colour and illumination. However, problems arise when
the background has a colour distribution similar to skin. In [33], background subtrac-
tion has been done to tackle this problem, but this approach only works well
when the camera is static with respect to the image background. In [34, 35], work
has been done to remove the background from dynamic scenes.
2. Shape: The shape of the hand is distinctive and can easily be seen in the contours of objects extracted from images. However, due to the two-dimensional nature of images, this feature is susceptible to occlusion or a bad viewpoint. The approach does not directly depend on skin colour or illumination, but contours produced by edge detectors yield edges in large numbers, often associated with irrelevant objects. Skin colour and background subtraction are therefore often used alongside contour extraction to obtain good detection results.
3. Pixel values: Pixel values are another feature often used for hand detection. This usually involves training an algorithm to detect hands by providing it with a set of positive and negative samples. Recently, boosting-based machine learning techniques have shown significant results. Boosting is based on the idea that a strong learner can be built from a weighted combination of many weak learners; [36] provides an overview of various boosting techniques.

4. 3D model: Certain techniques utilize three-dimensional models of the hand for detection. The advantage of this approach is that detection can be viewpoint-independent.
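As a minimal illustration of the colour-based detection described in item 1, the sketch below thresholds pixels in a chromaticity-oriented (HSV) space. The hue, saturation and value thresholds are illustrative assumptions only and would need tuning for real skin tones and lighting conditions.

```python
import colorsys

def skin_mask(rgb_image, h_max=0.14, s_min=0.2, v_min=0.35):
    """Return a binary mask (1 = candidate skin pixel) for an image given as
    a nested list of (r, g, b) tuples with channels in 0..255.
    The thresholds are illustrative, not tuned values from the literature."""
    mask = []
    for row in rgb_image:
        mask_row = []
        for r, g, b in row:
            h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            # Chromaticity test: skin hues cluster near red-orange (low hue)
            # with moderate saturation; very dark pixels are shadow/background.
            is_skin = h <= h_max and s >= s_min and v >= v_min
            mask_row.append(1 if is_skin else 0)
        mask.append(mask_row)
    return mask
```

Thresholding hue and saturation rather than raw RGB values is what reduces the dependence on illumination, since brightness changes mostly affect the value channel.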

5.2 Tracking

Tracking is done for the temporal modelling of data, to convey important information regarding hand movement. Tracking is a particularly challenging task, since the movement of hands is usually very fast and the appearance of the hand may change considerably over only a few frames. A few techniques for hand tracking are described below:
1. Template-based: These techniques are very similar to hand detection techniques. A hand detector is invoked in the spatial region in which the hand was previously detected.
2. Optimal estimation: Optimal estimation refers to inferring certain parameters from indirect or uncertain observations. Kalman filters [37] are well-studied linear optimal estimators; [38, 39] are recent works that use Kalman filters for hand tracking.
3. Particle filtering: Kalman filters can only model Gaussian distributions. To model arbitrary distributions, particle filters are used; the location of the hand is determined using a set of particles. A disadvantage of this approach is that many particles are required, although attempts have been made to limit their number by exploiting constraints of human anatomy. [40, 41] are examples of particle filtering applied to hand tracking.
4. CamShift: CamShift is based on the MeanShift algorithm. In MeanShift, a fixed-size window is moved towards the region of highest density; by comparing the contents of the window with a sample pattern, the most similar distribution is found. A problem with this technique is that it is not flexible with respect to the size of the object, and it fails if the object moves along the depth dimension of the image. To overcome this, Continuous Adaptive MeanShift (CamShift) was proposed, in which the size of the window is adaptively adjusted. In [42], CamShift is used for hand-gesture recognition, and in [43, 44] a combination of CamShift and Kalman filters is used for tracking hands in videos.
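To make the optimal-estimation idea from item 2 concrete, the following is a minimal one-dimensional Kalman filter tracking a hand's x-coordinate under a constant-position motion model. The noise variances are illustrative assumptions, not values from any of the cited works.

```python
def kalman_track(measurements, process_var=1.0, measurement_var=4.0):
    """Minimal 1-D Kalman filter: smooths noisy x-coordinates of a tracked
    hand under a constant-position model. Returns the filtered estimates."""
    x, p = measurements[0], 1.0   # initial state estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        # Predict: state unchanged, uncertainty grows by the process noise.
        p += process_var
        # Update: blend prediction and measurement by the Kalman gain.
        k = p / (p + measurement_var)
        x += k * (z - x)
        p *= (1 - k)
        estimates.append(x)
    return estimates
```

Each filtered estimate lies between the previous estimate and the new measurement, which damps measurement jitter during fast hand motion; the works cited above use higher-dimensional state vectors (position, velocity, hand-model parameters) built on the same predict/update cycle.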

5.3 Recognition

Finally, recognition involves using the relevant information to identify the gesture by classifying it into one of the known categories. Vision-based hand gesture recognition techniques can be classified as static or dynamic. For classifying static gestures, simple linear and non-linear classifiers can be used. However, for classifying

dynamic gestures, some temporal model needs to be used, since these gestures have a time dimension as well. A few techniques used for gesture recognition are described below:
1. K-means: In k-means, the objective is to find k centre points, one per cluster, such that the total distance from each point to the centre of its assigned cluster is minimized. In [45], the k-means algorithm has been used to cluster points into two sets, from which the convex hull of the hands is detected.
2. K-nearest neighbours (KNN): This method classifies objects based on the k nearest objects in the feature space. The value of k controls the smoothness of the decision boundary; the larger the value of k, the smoother the boundary. Several modifications to the KNN algorithm have been proposed: in [46] the neighbours are weighted according to their distance from the object to be classified, and in [47] a fuzzy k-nearest-neighbours algorithm is proposed. In [48], a k-nearest-neighbour classifier is used to classify hand gestures; a novel technique is proposed that classifies gestures based on the x and y projections of the hand.
3. Mean shift clustering: Prior knowledge of the number of clusters is not required, and there is no constraint on the shape of the clusters. The mean shift vector always points in the direction of maximum increase in density in the feature space.
4. Support Vector Machine (SVM): The SVM is a non-linear classifier. The basic idea behind support vector machines is to map non-linearly separable data into a higher-dimensional space in which it becomes linearly separable and can be easily classified. SVMs usually perform better than most other linear and non-linear classifiers. In [49], several SVMs are fused together: three SVMs are individually trained on frontal (FSVM), left (LSVM) and right (RSVM) images (see Fig. 14).

Fig. 14 Fusion of SVMs for hand gesture classification [49]



Fig. 15 Hand gesture recognition using k-means clustering and SVM [50]

Another example of using SVMs for hand-gesture classification is shown in [50]. The authors use k-means clustering to build bag-of-words vectors, followed by an SVM for gesture recognition (see Fig. 15).
5. Hidden Markov Models (HMM): The backbone of HMMs is the Markov chain. Markov chains are probabilistic structures in which transition probabilities determine the next state given the current state. Markov chains must satisfy the Markov property, i.e. the future state depends only on the current state and not on the sequence of states before it. Hidden Markov Models are used for modelling temporal data, usually in cases where the underlying probability distribution is unknown but certain output observations are available. In the context of hand gesture recognition, each state may refer to a hand position, and the transition probabilities define the probability of the hand's position changing from one state to another. In [51], an HMM has been trained for each gesture. At test time, the input is passed through all HMMs, and the one with the maximum forward probability is taken as the recognized action. The most general topology for an HMM is the fully connected (ergodic) topology (see Fig. 16a); another commonly used topology is the left-right banded (LRB) topology (see Fig. 16b).
6. Soft Computing Approach: Soft computing is a collection of techniques that aim
to handle ambiguous situations. Soft computing tools such as Artificial Neural
Networks, Genetic Algorithms, fuzzy sets, rough sets, etc. are extensively used
in hand gesture recognition and other tasks that have ambiguity associated with
them.
7. Time Delay Neural Networks (TDNN): Time Delay Neural Networks [53] have been used to model temporal data. Owing to the delays, each neuron has access to more than one input at a time, so it can model relationships between current and past inputs.
8. Finite State Machine (FSM): A finite state machine is an automata-based computation model with a finite number of states. In [54], an FSM has been used with a modified Knuth-Morris-Pratt algorithm to speed up gesture recognition (see Fig. 17).

Fig. 16 a Ergodic topology and b LRB topology of HMM [52]

Fig. 17 FSM for a gesture with 4 states [54]
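The maximum-forward-probability decision rule described for HMMs in item 5 can be sketched as follows. The toy two-state parameters used in the usage note are illustrative assumptions, not trained models from [51].

```python
def forward_probability(obs, start_p, trans_p, emit_p):
    """Forward algorithm: probability of an observation sequence under an HMM.
    States and observations are integer indices; probabilities are plain lists."""
    n_states = len(start_p)
    # Initialize with the start distribution weighted by the first emission.
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in range(n_states)]
    for o in obs[1:]:
        # Propagate: sum over predecessor states, then apply the emission.
        alpha = [
            emit_p[s][o] * sum(alpha[r] * trans_p[r][s] for r in range(n_states))
            for s in range(n_states)
        ]
    return sum(alpha)

def recognize_gesture(obs, gesture_hmms):
    """Classify by the HMM (one per gesture) with maximum forward probability."""
    return max(gesture_hmms,
               key=lambda g: forward_probability(obs, *gesture_hmms[g]))
```

Here each entry of `gesture_hmms` holds a `(start_p, trans_p, emit_p)` triple for one gesture's trained HMM; the observation sequence would come from the tracking stage (e.g. quantized hand positions).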

6 Further Challenges

Although hand gesture recognition has come a long way, there is still much work to be done before it can be applied in healthcare at scale. The various applications of hand gesture recognition in healthcare and medicine have been outlined in this chapter, and it is extremely important to further improve hand-gesture recognition techniques so that these applications can be widely adopted. Some challenges that still need to be addressed are listed below:
1. One of the major tasks that needs further work is the recognition of hand gestures against different backgrounds.
2. For static gestures, a number of factors such as viewpoint, degrees of freedom and different silhouette scales need to be considered, which makes the process difficult.

3. For dynamic gestures, additional factors such as the speed of the gesture must also be taken into account, which adds a further layer of complexity to hand gesture recognition.
4. Hand gesture recognition techniques must be robust and scalable, and should deliver real-time processing. Robustness plays an important role for recognition under different lighting conditions, cluttered backgrounds, etc.
5. User-independence is a must-have feature for many hand gesture recognition
systems.
6. Many colour-based techniques fail when image regions have a colour distribution similar to that of a hand. These techniques must be made more robust to such adversarial examples and difficult backgrounds.
7. Weak classifiers are another reason for poor performance of many gesture
recognition techniques.
8. Template matching for tracking is not an effective solution since it is not robust
to various illumination conditions.
9. Finding the optimal set of techniques for a particular application is a very challenging task, as there is no theoretical way to do it; the choice rests on results and practical experience.
10. Creation of a general gesture recognition framework is very difficult because
most systems involve gestures that are application specific.
Many more similar limitations exist in the current hand gesture recognition
systems that need to be overcome in order for these systems to be used at scale.

7 Conclusion

In this chapter, the importance of hand gesture recognition and virtual reality in the
field of healthcare has been outlined. Hand gesture recognition techniques provide a major form of Human Computer Interaction (HCI), which is essential for communication between users and virtual environments. The need for HCI has
been highlighted, and various ways for HCI have been discussed. Considering the
multifarious applications of this technology, there has been huge research interest
in the field. A glimpse of various techniques and some of the latest research in this
area has been presented in this chapter. Different steps involved in hand gesture
recognition have been discussed and the importance of techniques used for each
of these steps has been highlighted. Various challenges and limitations of the cur-
rent state-of-the-art tools have been covered. Vision-based and glove-based gesture
recognition have been compared and contrasted against each other, and it was found
that vision-based techniques provide low-cost, sterile solutions for many healthcare
applications. The vast potential for hand gesture recognition has been emphasized,
which should pave the way for more robust, scalable, real-time and accurate gesture
recognition systems.

References

1. Sinha, G., Shahi, R., & Shankar, M. (2010). Human Computer Interaction. 2010 3rd
International Conference on Emerging Trends in Engineering and Technology.
2. Chakraborty, B. K., Sarma, D., Bhuyan, M. K., & MacDorman, K. F. (2018). Review of
constraints on vision-based gesture recognition for human–computer interaction. IET Computer
Vision, 12(1), 3–15.
3. Jaimes, A., & Sebe, N. (2007). Multimodal human-computer interaction: A survey. Computer
Vision and Image Understanding, 108(1–2), 116–134.
4. Chapanis, A. (1965). Man machine engineering. Belmont: Wadsworth.
5. Norman, D. (1986). Cognitive Engineering. In D. Norman & S. Draper (Eds.), User centered
design: New perspective on human-computer interaction. Hillsdale: Lawrence Erlbaum.
6. Picard, R. W. (1997). Affective computing. Cambridge: MIT Press.
7. Han, Y. (2010). A low-cost visual motion data glove as an input device to interpret human hand
gestures. IEEE Transactions on Consumer Electronics, 56(2), 501–509.
8. Choudhury, A., Talukdar, A. K., & Sarma, K. K. (2014). A Conditional Random Field
Based Indian Sign Language Recognition System under Complex Background. 2014 Fourth
International Conference on Communication Systems and Network Technologies.
9. Habili, N., Lim, C. C., & Moini, A. (2004). Segmentation of the face and hands in sign language
video sequences using color and motion cues. IEEE Transactions on Circuits and Systems for
Video Technology, 14(8), 1086–1097.
10. Iqbal, J., Ul Haq, A., & Wali, S. (2015). Moving target detection and tracking.
11. Choudhury, A., Talukdar, A., Sarma, K. (2014). A novel hand segmentation method for
multiple-hand gesture recognition system under complex background. In 2014 International
Conference on Signal Processing and Integrated Networks, SPIN 2014. https://doi.org/10.1109/
spin.2014.6776936.
12. Stergiopoulou, E., & Papamarkos, N. (2009). Hand gesture recognition using a neural network
shape fitting technique. Engineering Applications of Artificial Intelligence, 22(8), 1141–1158.
13. Malima, A., Ozgur, E., & Cetin, M. (n.d.). A fast algorithm for vision-based hand gesture
recognition for robot control. In 2006 IEEE 14th Signal Processing and Communications
Applications. https://doi.org/10.1109/siu.2006.1659822.
14. Hasan, M. M., & Mishra, P. K. (2011). HSV brightness factor matching for gesture recognition
system. International Journal of Image Processing (IJIP), 4(5), 456–467.
15. Chang, C. C., Chen, J. J., Tai, W., & Han, C. C. (2006). New approach for static gesture
recognition. Journal of Information Science and Engineering, 22, 1047–1057.
16. Just, A. (2006). Two-handed gestures for human–computer interaction. Ph.D. thesis.
17. Parvini, F., & Shahabi, C. (2007). An algorithmic approach for static and dynamic gesture
recognition utilising mechanical and biomechanical characteristics. International Journal of
Bioinformatics Research and Applications, 3(1), 4.
18. Dardas, N. H., & Georganas, N. D. (2011). Real-time hand gesture detection and recog-
nition using bag-of-features and support vector machine techniques. IEEE Transactions on
Instrumentation and Measurement, 60(11), 3592–3607.
19. Nagi, J., Ducatelle, F., Di Caro, G. A., Ciresan, D., Meier, U., Giusti, A., et al. (2011). Max-
pooling convolutional neural networks for vision-based hand gesture recognition. In 2011 IEEE
International Conference on Signal and Image Processing Applications (ICSIPA).
20. Panwar, M. (2012). Hand gesture recognition based on shape parameters. In 2012 International
Conference on Computing, Communication and Applications.
21. Ohn-Bar, E., & Trivedi, M. M. (2014). Hand gesture recognition in real time for automo-
tive interfaces: A multimodal vision-based approach and evaluations. IEEE Transactions on
Intelligent Transportation System, 15(6), 2368–2377.
22. Suk, H.-I., Sin, B.-K., & Lee, S.-W. (2010). Hand gesture recognition based on dynamic
Bayesian network framework. Pattern Recognition, 43(9), 3059–3072.

23. Shen, X., Hua, G., Williams, L., & Wu, Y. (2012). Dynamic hand gesture recognition: An
exemplar-based approach from motion divergence fields. Image and Vision Computing, 30(3),
227–235. https://doi.org/10.1016/j.imavis.2011.11.003.
24. Padam Priyal, S., & Bora, P. K. (2013). A robust static hand gesture recognition system using
geometry based normalizations and Krawtchouk moments. Pattern Recognition, 46(8), 2202–
2219. https://doi.org/10.1016/j.patcog.2013.01.033.
25. Hoffman, H., & Vu, D. (1997). Virtual reality: teaching tool of the twenty-first century?
Academic Medicine: Journal of the Association of American Medical Colleges, 72(12),
1076–1081.
26. Gallagher, A. G., Ritter, E. M., Champion, H., Higgins, G., Fried, M. P., Moses, G., et al. (2005).
Virtual reality simulation for the operating room: Proficiency-based training as a paradigm shift
in surgical skills training. Annals of Surgery, 241(2), 364.
27. Aggarwal, R., Ward, J., Balasundaram, I., Sains, P., Athanasiou, T., & Darzi, A. (2007). Proving
the effectiveness of virtual reality simulation for training in laparoscopic surgery. Annals of
Surgery, 246(5), 771–779.
28. Satava, R. M. (1993). Virtual reality surgical simulator. Surgical Endoscopy, 7(3), 203–205.
29. Liu, J. Q., Fujii, R., Tateyama, T., Iwamoto, Y., & Chen, Y. W. (2017). Kinect-based gesture
recognition for touchless visualization of medical images. International Journal of Computer
and Electrical Engineering, 9(2), 421–429.
30. Krapichler, C., Haubner, M., Engelbrecht, R., & Englmeier, K. H. (1998). VR interaction
techniques for medical imaging applications. Computer Methods and Programs in Biomedicine,
56(1), 65–74.
31. Khan, R. Z., & Ibraheem, N. A. (2012). Comparative study of hand gesture recognition system.
In Proceedings of International Conference of Advanced Computer Science & Information
Technology in Computer Science & Information Technology (CS & IT ) (Vol. 2, No. 3, pp. 203–
213).
32. Rautaray, S. S., & Agrawal, A. (2015). Vision based hand gesture recognition for human
computer interaction: A survey. Artificial Intelligence Review, 43(1), 1–54.
33. Rehg, J. M., & Kanade, T. (1994, November). Digiteyes: Vision-based hand tracking for human-
computer interaction. In Proceedings of the 1994 IEEE Workshop on Motion of Non-rigid and
Articulated Objects, 1994 (pp. 16–22). IEEE.
34. Ramesh, V. (2003, October). Background modeling and subtraction of dynamic scenes. In
Proceedings. Ninth IEEE International Conference on Computer Vision, 2003 (pp. 1305–1312).
IEEE.
35. Zivkovic, Z. (2004, August). Improved adaptive Gaussian mixture model for background sub-
traction. In Proceedings of the 17th International Conference on Pattern Recognition, 2004.
ICPR 2004 (Vol. 2, pp. 28–31). IEEE.
36. Schapire, R. E. (2003). The boosting approach to machine learning: An overview. In Nonlinear
estimation and classification (pp. 149–171). New York, NY: Springer.
37. Kalman, R. E. (1960). A new approach to linear filtering and prediction problems. Journal of
Basic Engineering, 82(1), 35–45.
38. Stenger, B., Mendonça, P. R., & Cipolla, R. (2001, September). Model-based hand tracking
using an unscented Kalman filter. In BMVC (Vol. 1, pp. 63–72).
39. Isard, M., & Blake, A. (1996, April). Contour tracking by stochastic propagation of condi-
tional density. In European Conference on Computer Vision (pp. 343–356). Berlin, Heidelberg:
Springer.
40. Shan, C., Tan, T., & Wei, Y. (2007). Real-time hand tracking using a mean shift embedded
particle filter. Pattern Recognition, 40(7), 1958–1970.
41. Stenger, B., Thayananthan, A., Torr, P. H., & Cipolla, R. (2006). Model-based hand track-
ing using a hierarchical bayesian filter. IEEE Transactions on Pattern Analysis and Machine
Intelligence, 28(9), 1372–1384.
42. Nadgeri, S. M., Sawarkar, S. D., & Gawande, A. D. (2010, November). Hand gesture recognition
using CAMSHIFT algorithm. In 2010 3rd International Conference on Emerging Trends in
Engineering and Technology (ICETET ) (pp. 37–41). IEEE.

43. Peng, J. C., Gu, L. Z., & Su, J. B. (2006). The hand tracking for humanoid robot using Camshift
algorithm and Kalman filter. Journal-Shanghai Jiaotong University-Chinese Edition, 40(7),
1161.
44. Luo, Y., Li, L., Zhang, B. S., & Yang, H. M. (2009). Video hand tracking algorithm based on
hybrid Camshift and Kalman filter. Application Research of Computers, 26(3), 1163–1165.
45. Li, Y. (2012, June). Hand gesture recognition using Kinect. In 2012 IEEE 3rd International
Conference on Software Engineering and Service Science (ICSESS) (pp. 196–199). IEEE.
46. Dudani, S. A. (1976). The distance-weighted k-nearest-neighbor rule. IEEE Transactions on
Systems, Man, and Cybernetics, 4, 325–327.
47. Keller, J. M., Gray, M. R., & Givens, J. A. (1985). A fuzzy k-nearest neighbor algorithm. IEEE
Transactions on Systems, Man, and Cybernetics, 4, 580–585.
48. Kollorz, E., Penne, J., Hornegger, J., & Barke, A. (2008). Gesture recognition with a time-
of-flight camera. International Journal of Intelligent Systems Technologies and Applications,
5(3), 334.
49. Chen, Y. T., & Tseng, K. T. (2007, September). Multiple-angle hand gesture recognition
by fusing SVM classifiers. In IEEE International Conference on Automation Science and
Engineering, 2007. CASE 2007 (pp. 527–530). IEEE.
50. Dardas, N., Chen, Q., Georganas, N. D., & Petriu, E. M. (2010, October). Hand gesture recogni-
tion using bag-of-features and multi-class support vector machine. In 2010 IEEE International
Symposium on Haptic Audio-Visual Environments and Games (HAVE) (pp. 1–5). IEEE.
51. Chen, F. S., Fu, C. M., & Huang, C. L. (2003). Hand gesture recognition using a real-time
tracking method and hidden Markov models. Image and Vision Computing, 21(8), 745–758.
52. Elmezain, M., Al-Hamadi, A., Appenrodt, J., & Michaelis, B. (2009). A hidden markov model-
based isolated and meaningful hand gesture recognition. International Journal of Electrical,
Computer, and Systems Engineering, 3(3), 156–163.
53. Waibel, A., Hanazawa, T., Hinton, G., Shikano, K., & Lang, K. J. (1990). Phoneme recognition
using time-delay neural networks. In Readings in speech recognition (pp. 393–404).
54. Hong, P., Turk, M., & Huang, T. S. (2000). Constructing finite state machines for fast gesture
recognition. In Proceedings 15th International Conference on Pattern Recognition, 2000 (Vol.
3, pp. 691–694). IEEE.
Fluid Dynamics in Healthcare Industries:
Computational Intelligence Prospective

Vishwanath Panwar, Sampath Emani, Seshu Kumar Vandrangi, Jaseer Hamza and Gurunadh Velidi

Abstract The main aim of this study is to discuss and critically review the con-
cept of computational intelligence in relation to the context of fluid dynamics in
healthcare industries. The motivation or specific objective is to discern how, in the
recent past, scholarly investigations have yielded insights into the CI concept as that
which is shaping the understanding of fluid dynamics in healthcare. Also, the study
strives to predict how CI might shape fluid dynamics understanding in the future of
healthcare industries. From the secondary sources of data that have been consulted,
it is evident that the CI concept is gaining increasing adoption and application in
healthcare fluid dynamics. Some of the specific areas where it has been applied
include the determination of occlusion device performance, the determination of
device safety in cardiovascular medicine, the determination of optimal ventilation
system designs in hospital cleanrooms and operating rooms, and the determination of
the efficacy of intra-arterial chemotherapy for cancer patients; especially relative to
patient vessel geometries. Other areas include analyzing idealized medical devices
from the perspective of inter-laboratory studies and how the CI techniques could
inform healthcare decisions concerning the management of unruptured intracranial
aneurysms. For the future, the study recommends further investigation of the challenges that CI-based approaches face when moderating factors (such as patients presenting with multiple conditions) are present, and of how those challenges could be mitigated to assure efficacy in the healthcare fluid dynamics context.

Keywords Fluid dynamics · Health-care · Medicine · Computational intelligence

V. Panwar
VTU-RRC, Belagavi, India
S. Emani
Department of Chemical Engineering, Universiti Teknologi Petronas, Seri Iskandar, Malaysia
S. K. Vandrangi (B) · J. Hamza
Department of Mechanical Engineering, Universiti Teknologi Petronas, Seri Iskandar, Malaysia
e-mail: seshu1353@gmail.com
G. Velidi
University of Petroleum and Energy Studies, Bidholi, via Prem Nagar, Dehradun, Uttarakhand
248007, India

© Springer Nature Switzerland AG 2020 107


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_6
108 V. Panwar et al.

1 Introduction

Computational intelligence (CI) refers to a computer's ability to learn certain tasks from experimental data or observations [1]. Other studies document that CI
entails sets of nature-inspired approaches and methodologies through which real-
world problems that are deemed complex could be addressed, especially in situations
where traditional or mathematical modeling could be less applicable for some reasons
[2, 3]. Some of the reasons that might prompt the use of CI in the place of traditional
or mathematical modeling include a stochastic nature of the processes, the presence
of uncertainties in the target processes, and the presence of processes that are too
complex to apply mathematical reasoning [4, 5]. In healthcare settings, especially
where fluids are involved, complexities have been reported relative to the stimulation
of the functions of test stations, diagnostic systems, surgical techniques, and many
medical implants [4, 6]. Given that the engineers’ understanding of the basic flow
of liquid chemicals, blood, or air calls for the implementation of fine-tuned software
[7], the implication for medical applications is that design safety margins remain extremely tight [8, 9], especially in fluid dynamics, which is concerned with the movement, properties, and interaction of fluids (gases and liquids) in motion [10]. The
main aim of this study is to discuss and critically review the concept of computational
intelligence in relation to the context of fluid dynamics in healthcare industries.
The motivation or specific objective is to discern how, in the recent past, scholarly
investigations have yielded insights into the CI concept as that which is shaping the
understanding of fluid dynamics in healthcare. Also, the study strives to predict how
CI might shape fluid dynamics understanding in the future of healthcare industries.
Indeed, it is projected that the review will offer a broad view of CI as an exciting
field, especially with its ever-growing importance tied to the increasing computational
power and availability of data in the healthcare setting. How CI continues to lend itself
to fluid dynamics in healthcare industries is the central subject under investigation.

2 A CI Critical Review in Relation to Fluid Dynamics in Healthcare Industries

With the evolution of the digital age coming in the wake of the advent of the informa-
tion age, there has been a profound effect on health science operations. In particular,
different stages of healthcare firms experience a flow of vast dataset amounts [10,
11]. This trend has prompted the need for knowledge extraction and its use toward
improving the dataset entries [12]. Through intelligent computer systems, there has
been increasing support to healthcare personnel charged with managerial and med-
ical contexts. One of the specific systems that have supported the work of health
professionals is the case of CI approaches. Particularly, CI has gained increasing
popularity because of the degree to which it copes with uncertain information [12],
as well as vast amounts of clinical data [13, 14].
Fluid Dynamics in Healthcare Industries … 109

As mentioned earlier, biologically inspired computational algorithms define CI operations. Some of the major pillars surrounding the CI concept include fuzzy
systems, genetic algorithms, and neural networks [15]. Novel recent developments
and strategies have used CI in healthcare. One of the specific application fields entails
computer-aided diagnosis (CAD). Specifically, two categories of CAD methods have
been employed relative to the extension of the work of CI in governing fluid dynamics
in healthcare. One of the methods is that which seeks to offer enhanced diagnostic
procedures to clinicians to improve human decision-making [16]. Another type of
CAD method as a CI-led strategy for governing the understanding of fluid dynamics
in healthcare entails that which strives to conform or offer one or more potential
diseases processes—relative to the given set of signs [17, 18].for the majority of CAD
procedures that conform to the former type, the majority rely on image processing
algorithms; an example being the magnetic resonance image (MRI) segmentation of
the brain, which aids in discerning pathological zones [17] and providing room for
decision-making regarding image-guided surgery [19, 20]. Another area involves the
diagnosis of diffuse lung disease, which has seen CI employed toward high-resolution
computed tomography [17, 18, 20]. From the previous literature, the latter CI-led
procedure has provided room for the identification of abnormal patterns, especially
when complemented by sparse representations [19, 21].
It is also worth indicating that segmentation as a CI-led healthcare approach has
gained increasing application at the microscopic scale in such a way that marked
controlled watershed transforms have played a leading healthcare industry role in
measuring variations in intracellular calcium [22]. Also, various imaging techniques
have evolved and attracted efforts to their associated image fusing problem [23].
However, some CI-led diagnostic enhancements linked to fluid dynamics in healthcare have deviated from the perspective of image processing. An example is the k-medoids clustering algorithm, which has been used to study partitions for diagnosing Guillain-Barre syndrome [24]. In particular, this CI-associated algorithm can manage both numerical and categorical data because it relies only on the samples' distance matrix [25].
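Because medoid-based clustering operates only on a pairwise distance matrix, mixed numerical and categorical data can be handled through any suitable dissimilarity measure. The sketch below is a simplified k-medoids assign/update loop under that assumption, not the exact algorithm of [24].

```python
def k_medoids(dist, medoids, iterations=10):
    """Simplified k-medoids: `dist` is a symmetric n x n distance matrix
    (nested lists), `medoids` are initial medoid indices. Alternates between
    assigning points to the nearest medoid and re-choosing each cluster's
    medoid as the member minimizing total within-cluster distance."""
    n = len(dist)
    clusters = {}
    for _ in range(iterations):
        # Assignment step: each point joins its nearest medoid's cluster.
        clusters = {m: [] for m in medoids}
        for i in range(n):
            nearest = min(medoids, key=lambda m: dist[i][m])
            clusters[nearest].append(i)
        # Update step: the new medoid minimizes total distance in its cluster.
        new_medoids = [
            min(members, key=lambda c: sum(dist[c][j] for j in members))
            for members in clusters.values()
        ]
        if set(new_medoids) == set(medoids):
            break  # converged: medoids no longer change
        medoids = new_medoids
    return medoids, clusters
```

Since only `dist` is ever consulted, the same loop works whether the entries come from Euclidean distances on numerical features or from a mismatch count on categorical ones.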
As aforementioned, another class of methods involves classification systems
whose application is mostly felt in early disease diagnoses where definitive diagnos-
tic tests have not been established [22–24]. An example, as indicated in the recent
literature, is the use of a hybrid approach combining Bayesian networks, genetic algorithms, and rough sets to carry out computer-aided screening for Alzheimer's disease based on neuropsychological ratings [26, 27]. An additional example of
CI-led systems that rely on classification systems toward disease diagnosis involves
an expert system targeting psychotic disorders based on multi-criteria decision support [28], besides social simulation networks for developing effective interventional and preventative strategies for AIDS epidemics, and particle swarm optimization for enhancing blood bank assignment [29, 30].
To discern how CI has shaped the understanding and incorporation of technology
in healthcare, one of the areas that have been investigated involves the management
of unruptured intracranial aneurysms [31]. In the etiopathogenesis of intracranial aneurysms, the importance of hemodynamics has been acknowledged, and this has prompted

CI-based approaches towards hemodynamic prediction [32]. In most cases, supervised CFD analyses of intracranial aneurysms have been performed [33].
Regarding the acquisition and processing of medical images, the rotational acquisi-
tion has been employed due to its capacity to offer 100 images in about six seconds
[34], with the exposure for each image recorded at 5 ms. To apply this CI-based approach
to investigate the concept of fluid dynamics surrounding intracranial aneurysms, raw
medical images have been uploaded to software such as @neuFuse software before
visualizing relevant hemodynamic data [35]. The workflow of investigations that have strived to employ CI in uncovering hemodynamic predictions for unruptured intracranial aneurysms, hence informing management approaches, can be summarized as follows. According to Qureshi et al. [8], the operational workflow stretches from the point involving
medical images to that of hemodynamic results. Indeed, findings demonstrate that
through CI-based methods, unruptured intracranial aneurysms could be managed.
Another healthcare area where CI-based techniques have been used to investi-
gate the aspect of fluid dynamics entails cardiovascular medicine. Particularly, the
objective of employing CI-based approaches in cardiovascular medicine has been
to discern the challenges, benefits, and methods of CI that could be used to facilitate low-risk, economical, and rapid prototyping through the development of devices that include ventricular assist devices,
valve prostheses, and stents [32, 33]. To impact upon clinical medicine, CI analy-
ses investigating cardiovascular regions have targeted areas around the vasculature.
Some of the stages that have preceded the simulation exercises include clinical imag-
ing (such as X-ray angiography, MRI and CT that offer adequate physiological and
anatomical detail), reconstruction and segmentation, discretization (to divide the tar-
get geometries into various discrete volumetric cells or elements), and the setting of
boundary conditions (such as a case in which the target region needs to have at least
one outlet and one inlet—because it is impractical to discretize the cardiovascular
system in its entirety) [36, 37].
Also, the setting of boundary conditions before the CI-based simulation exercise
targeting cardiovascular medicine (in relation to fluid flow) has been established in
such a way that the inlet/outlet boundaries and the physiological conditions at the
wall have had to be specified [38, 39]. Therefore, the boundary conditions have had
to be defined at the walls, outlets, and inlets—relative to factors such as assumptions
or physical models, population data, and patient-specific data [40].
From the findings that have been documented from such investigations, an emerg-
ing theme is that CI-based approaches pose several beneficial effects in relation to
the understanding of fluid dynamics in cardiovascular medicine. For instance, the
post-processing enabled by CI-based investigations offers additional data that gives
new insight into disease processes and physiology [41]. A specific example is a case
in which many studies acknowledge the difficulty and invasive nature of measur-
ing arterial WSS (wall shear stress) [42–44], yet this factor is crucial and plays an
important role in developing in-stent restenosis and atherosclerosis [45]. However,
CI-based techniques have paved the way for WSS computation, ensuring further that
its (WSS’s) spatial distribution is mapped successfully [45]. The studies suggest fur-
ther that through CI-based techniques in understanding fluid dynamics in the context
Fluid Dynamics in Healthcare Industries … 111

of cardiovascular medicine, the relationship between atherogenesis and hemodynamic disturbance could be established [31], an outcome explaining atherosclerotic
plaque’s preferential deposition at bifurcation regions and arterial bends [38, 40, 41,
44]. In cardiovascular medicine as a target area for CI-based method application
in relation to fluid dynamics in healthcare, the modeling has also paved the way
for the understanding of how WSS affects endothelial homeostasis. In particular,
results from the CI-based technique modeling in cardiovascular medicine indicate
that increased WSS is associated with laminar, non-disturbed blood flow, under which
endothelial cell activation is inhibited [11, 18]. Also, the modeling in the context
of cardiovascular medicine has led to the understanding that disturbed or turbulent
blood flow causes WSS reduction, with the secondary effect being the stimulation of adverse vessel remodeling [19, 24].
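As a point of reference for the WSS values such pipelines compute, the idealized steady Poiseuille limit gives wall shear stress in closed form, τ_w = 4μQ/(πR³). The sketch below uses assumed, rough arterial values and is only a sanity-check limit; the patient-specific studies cited above derive WSS from velocity gradients of the simulated 3D flow field.

```python
# Idealized wall-shear-stress estimate for steady, fully developed laminar
# flow in a straight cylindrical vessel: tau_w = 4 * mu * Q / (pi * R^3).
# All numbers below are illustrative, not taken from the cited studies.
import math

def poiseuille_wss(mu_pa_s, flow_m3_s, radius_m):
    """Wall shear stress (Pa) for fully developed laminar pipe flow."""
    return 4.0 * mu_pa_s * flow_m3_s / (math.pi * radius_m ** 3)

# Rough arterial values: blood viscosity ~3.5 mPa*s, flow ~5 mL/s, radius ~2 mm.
tau = poiseuille_wss(3.5e-3, 5e-6, 2e-3)
# tau lands on the order of a few pascals, the range usually quoted for arteries.
```

The cubic dependence on radius is the point of interest here: modest vessel narrowing raises WSS sharply, which is one reason spatial WSS maps from CFD are diagnostically valuable.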
It is also notable that some studies have employed the CI concept to develop medical devices [18], while others have employed the concept to test device safety [21,
30]. The latter investigations’ main aim has been to determine the methodology and
suitability of fluid flow simulation in idealized medical devices. With comparison
metrics identified, these investigations have defined the flow conditions, as well as
the model geometries. Particularly, most of the common model geometries that have
been used include cylindrical nozzles with sudden expansions and conical collec-
tors on either side, as well as throats capable of causing hemolysis in certain flow
conditions. One of the trends or scholarly observations that have motivated CI-led
investigations seeking to test medical device safety is that as blood flows through
the medical devices, it could be subjected to thrombosis and hemolysis [24, 38, 40,
41, 44], and high shear stresses on blood could result in deleterious effects [11, 20]. Hence, CI has been used to conduct hemolysis predictions relative to idealized
medical devices; with other active research areas involving CI methods for platelet
thrombosis and activation prediction. Indeed, it is worth indicating that these CI
investigations that target fluid dynamics in healthcare have strived to inform ideal-
ized medical device developmental stages and also predict the degree to which the
final device designs might be safe [2, 8]. For such investigations, which come in the
form of inter-laboratory studies, devices that have been considered for CI-based
investigation include idealized and simplified medical devices with small nozzles
whose characteristics could be likened to blood-carrying medical devices; exam-
ples being hypodermic needles, syringes, cannulas, catheters, hemodialysis sets, and
blood tubing [11].
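One widely used form of such hemolysis prediction is a power-law damage model, HI = C·τ^α·t^β, accumulated over a blood element's shear-stress history along its path through the device. The constants below are the commonly quoted Giersiepen-type values, and the sample stress history is hypothetical; the cited studies' exact formulations may differ.

```python
# Power-law hemolysis sketch: damage accumulated over the shear history of a
# blood element (tau in Pa, exposure time in s). Constants are the widely
# quoted Giersiepen-type values; treat them, and the sample history, as
# illustrative rather than as the cited studies' exact method.
def hemolysis_index(stress_history):
    """stress_history: list of (tau_Pa, exposure_s) segments along a pathline.
    Returns percent hemoglobin release, summed linearly over segments."""
    C, alpha, beta = 3.62e-5, 2.416, 0.785
    return sum(C * tau ** alpha * dt ** beta for tau, dt in stress_history)

# A blood element passing briefly through a high-shear nozzle throat,
# then dwelling in a longer low-shear region:
hi = hemolysis_index([(400.0, 0.01), (10.0, 1.0)])
```

The strong exponent on τ explains the observation above: brief passages through high-shear throats dominate the predicted damage, even when exposure times are short.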
From the findings, these investigations suggest that through CI application in
healthcare fluid dynamics, shear stress distributions vary widely [3, 8]. The impli-
cation for efforts aimed at blood damage modeling is that poor prediction of shear
stresses hampers the development of accurate and reliable idealized medical devices
[11, 15]. Also, findings aimed at applying CI techniques to predict the safety of ide-
alized medical devices indicate that in healthcare fluid dynamics, the CI perspective,
if applied, gives insight into some of the complications that are worth considering—
before discerning the extent to which the medical devices could be deemed safe.

Fig. 1 Nozzle conditions for CI-based investigations seeking to predict idealized medical device
safety. Source Kheyfets et al. [11]

Some of these complications, from the CI-based investigations and recommendations, include features or factors such as parameter sensitivity, the range of hematocrit and viscosity over which devices are used, sharp corners, secondary flows, transitional
flows, and pulsatile flows [22–27] (Fig. 1).
Computational intelligence has also been applied to analyze hospital room ven-
tilation systems. Some of the parameters that have been considered in these inves-
tigations include the flow of bacteria particles from patients and natural and forced
ventilation [7]. Whereas some studies document that infection transfer via contact
forms a leading cause of health care-associated infections or hospital-acquired infec-
tions [11], airborne bacteria have also been documented to cause infection through
inhalation [2, 13]. For respiratory diseases such as tuberculosis (TB) and SARS, CI
has been used to understand infectious particle dynamics [19]. In the CI modeling,
the target hospital rooms have been those with exhausts, inlets, medical equipment,
lamps, wardrobes, beds, doctors, and patients [10]. Figure 2 shows the CI-led room
layout of the modeling studies.
In the above room layout, the computational intelligence investigation concerning
fluid dynamics in healthcare has aimed at airflow pattern optimization in hospital

Fig. 2 The CI-led modeling room layout to examine hospital room ventilation systems. Source Farag et al. [16]

cleanrooms. Also, CI techniques in these settings have strived to optimize temperature distribution and airflow pattern for better thermal comfort levels [17, 20]. Indeed,
findings in these investigations suggest that there is upward air movement next to
patients and doctors while downward air movement occurs next to the wall; features
that arise from natural convection [26, 37]. Additional findings demonstrate that
next to the doctors and above the wardrobes, two recirculation zones exist and could
form platforms where bacteria might not only be trapped but also stay longer [25].
Indeed, the findings are of clinical importance in that they pave the way for further predictions of the duration that the bacteria are likely to take before leaving the room, especially after coughing. Hence, CI techniques are seen to play a leading role in
improving hospital ventilation designs and increasing the comfort level. Through
CI incorporation into the examination of the fluid dynamics concept in healthcare,
especially by analyzing the state of ventilation designs, it is evident that the tech-
niques allow for informed decision-making regarding indoor ventilation with good
air quality control, upon which infection could be curbed through the minimization
of airborne respiratory spread, as well as other hospital infections. Therefore, CI is
seen to provide better insight into ventilation design and aerosol contamination in
hospital cleanrooms [5], upon which airflow patterns could be optimized relative
to the CI-based simulation outcomes [11–14]. As such, it can be inferred that dur-
ing coughing episodes, CI-based techniques prove informative in such a way that
they allow for the analysis of ventilation system performance in clean or isolation
rooms, upon which optimal airflow patterns could be established, and design patterns
developed as deemed appropriate.
It is also worth noting that CI-based techniques and simulations have been con-
ducted to investigate the state of horizontal airflow ventilation systems; with crucial

insights gained from the contexts of hospital surgical sites. The aim of these stud-
ies has been to control SSI (surgical site infection) arising from airborne particles.
Indeed, most of the previous scholarly studies affirm that when ultraclean air is
distributed properly, infectious particles are likely to be diluted and isolated from
surgical sites [46–48]. To create aseptic environments around patients in the hospi-
tal surgical sites, additional studies affirm that laminar or downward unidirectional
flows are adopted in most cases [15, 49]. However, other studies document that
this airflow pattern has demerits in such a way that unidirectional airflow patterns
tend to be affected by overhead accessories (including medical lamps), as well as
thermal plumes surrounding wounds [14, 20]. Hence, CI-based approaches that con-
sider indoor particles as the main factor that influences airflow pattern have been
implemented to counteract clean air’s function in relation to infectious particle iso-
lation [22]. Also, CI-based approaches have been implemented relative to the need
to respond to the affirmation that when facilities such as medical lamps are placed
upstream of patients, they could cause particulate accumulation [3], as well as serious whirlpools [36]. Similarly, CI-based techniques have been applied in healthcare
fluid dynamics to address the documented trend whereby human body surface temperatures tend to be higher than those of the surrounding air, which causes buoyancy-driven upward airflow plumes [15, 33]. Hence, the motivation of employing CI techniques to examine airflow patterns in hospital surgical sites has been to
examine how best the disturbance exerted on downward airflow patterns could be
countered while seeking to avoid ventilating system carriage of infectious particles
to the patients’ wounds [40, 41, 43, 44]. Particularly, the CI-based techniques have
responded to these demerits in the operating rooms (associated with conventional
airflow patterns) by focusing on the characteristics, feasibility, and the ability of
horizontal airflow patterns to exert a contamination control effect [12, 14] (Fig. 3).
From the results, most of the CI-based investigations that have sought to discern
the feasibility of alternative horizontal airflow patterns towards controlling operating
room contamination suggest that the resultant system exhibits superior results in such
a way that it prevents particles from striking the patients’ wound areas. Specific results
demonstrate that in the operating room, this system causes 95.1% of nurse-generated
particles to escape, as well as 87.8% of surgeon-generated particles [34, 46–48]. The
emerging trend is that CI-based investigations have proved insightful to the field
of fluid dynamics in healthcare whereby they sensitize practitioners regarding the
importance of ensuring that the patient’s direction is prescribed correctly—to main-
tain low particle concentration, especially around their wound areas. With the impact
of operating room layout and relative position of source being highly influential in
relation to particle concentration documented [11], the emerging theme is that CI
techniques have proved crucial in informing how the operating rooms and ventilation
pattern designs could be set in such a way that the main particle sources are placed
in downstream locations in relation to the patients’ wound areas [17–21].
The CI concept has also gained application in healthcare fluid dynamics from
the perspective of new occlusion device development for cancers. In particular, CI
techniques have been employed to discern the level of performance of the target

Fig. 3 An illustration of air distribution, flow visualization, operating room baseline model, and
boundary conditions for CI-based simulation of operating rooms. Source Homma et al. [17]


devices’ clinical occlusion [42]. The Navier-Stokes and continuity equations that
have been employed by the CI techniques to govern the flow of blood are:

∇ · v = 0

ρ(∂v/∂t + v · ∇v) = −∇p + μ∇²v + f

It is also notable that these investigations have made several assumptions. For
instance, it has been assumed that the flow of blood is laminar and incompressible.
Also, the density of blood has been set at 1060 kg/m³, with the governing equations
solved by using the FLUENT software. For the artery wall, the boundary conditions
have also been assumed to be no-slip; with the inlet having fully-developed pulsatile
flows while the outlets have been set to have uniform pressure. Figure 4 illustrates
these assumptions.
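Under the stated assumptions (laminar, incompressible flow with no-slip walls), the equations above reduce, for fully developed flow in a straight tube, to a one-dimensional problem that can be discretized directly. The sketch below is a minimal finite-difference illustration of that limit, not the FLUENT workflow the cited studies used, and the material values are rough illustrative choices.

```python
# Finite-difference sketch of the fully developed pipe-flow limit of the
# Navier-Stokes equations above: mu * (1/r) d/dr(r du/dr) = dp/dx, with a
# no-slip wall u(R) = 0 and a symmetry condition du/dr = 0 on the axis.
import numpy as np

def pipe_velocity_profile(mu, dpdx, R, N=200):
    """Solve for the axial velocity u(r) on r_i = i*h, i = 0..N, with u(R) = 0."""
    h = R / N
    G = dpdx / mu                                # right-hand side of the reduced equation
    A = np.zeros((N, N))                         # unknowns u_0 .. u_{N-1}; u_N = 0 (no slip)
    b = np.full(N, G)
    A[0, 0], A[0, 1] = -4.0 / h**2, 4.0 / h**2   # axis limit via symmetry: 4(u1-u0)/h^2 = G
    for i in range(1, N):
        r, rp, rm = i * h, (i + 0.5) * h, (i - 0.5) * h
        A[i, i - 1] = rm / (r * h**2)            # conservative flux form of (1/r)(r u')'
        A[i, i] = -(rp + rm) / (r * h**2)
        if i + 1 < N:                            # u_N = 0 drops out of the system
            A[i, i + 1] = rp / (r * h**2)
    u = np.linalg.solve(A, b)
    return np.append(u, 0.0)

# Illustrative values: blood viscosity 3.5 mPa*s, 2 mm radius, mild pressure gradient.
u = pipe_velocity_profile(mu=3.5e-3, dpdx=-100.0, R=2e-3)
# The centerline value can be checked against the analytic Poiseuille result
# -dpdx * R^2 / (4 * mu), which this conservative scheme reproduces.
```

The real simulations differ in every practical respect (3D geometry, pulsatile inlets, unstructured meshes), but the same ingredients appear: a discretized momentum balance, a no-slip wall, and a linear solve.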
Also, the specific determination of the occlusion performance of the target devices
has been achieved based on two main forms of flow experiments. These forms
have been conducted in the form of digital particle image velocimetry (DPIV) and an occlusion experiment [24, 32], as demonstrated in Fig. 5 (respectively).
Aimed at informing future cancer treatment, the investigations have specifically proposed spherical occlusion devices as an alternative cancer treatment option based on CI techniques. From the findings, most of the scholarly investigations
targeting this subject have reported that when the spherical occlusion device’s metal

Fig. 4 An illustration of the assumptions for investigations applying CI to determine device occlusion performance in healthcare. Source Ansaloni et al. [24]

density ranges from 14 to 27%, there is a feasible and significant reduction in the
rate of blood flow [11–16]. Particularly, the reduction ranges from 30% to 50% [22–
25]. Additional findings demonstrate that the proposed spherical occlusion device
reduces the flow successfully—as designed or desired. When the metal density of
the occlusion device is set at 27%, the CI-based investigations indicate further that
the rate of blood flow reduces by 44% [32–34]. As such, it is evident that CI-based
investigations have given a new dimension to the future of the treatment of cancer,
having sensitized healthcare professionals regarding some of the ideal conditions
under which the proposed spherical occlusion device (that needs to be deployed in
the artery’s upstream) could perform best and reduce the rate of blood flow towards
the downstream cancer cells.
For oral cancer, additional scholarly studies have focused on how CI-based tech-
niques could lead to informed decision-making relative to the implementation of
intra-arterial chemotherapy. Specifically, these studies acknowledge that when intra-
arterial chemotherapy is used to treat oral cancers, anti-cancer agents tend to be
delivered into tumor-feeding arteries in higher concentration [46–48]. However, the
extent to which the use of this approach proves adequate in relation to anti-cancer
agent distribution into different external carotid artery branches poses a dilemma
[46]. As such, CI-based techniques have been employed in a quest to steer improve-
ments in intra-arterial chemotherapy effectiveness in situations where anti-cancer
agents are distributed into different external carotid artery branches.
A specific example of cancer that has been investigated using the above CI-based
approach has been the case of tongue cancer. Methodologically, vessel models have
been combined with catheter models, and the wall shear stress calculated after tracing
the blood streamline from a given common carotid artery toward the respective
outlets. In the findings, these investigations indicate that when the catheter tip is
located under or below the target arteries and the external carotid artery bifurcation,

Fig. 5 The hepatic artery model and occlusion experiment setup to determine CI-led device
occlusion performance. Source Au et al. [25]

anti-cancer agents flow into the intended arteries [33–35, 37]. Similarly, the CI-based
investigations suggest that by shifting the catheter tip toward the target artery, anti-
cancer agents flow into it (the target artery) [34]. In all branches with anti-cancer
agent flows, additional findings demonstrate that there is contact between the catheter
tip and blood streamlines to the target arteries [35].
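The streamline tracing step described here amounts to numerically integrating dx/dt = v(x) through the computed velocity field. A minimal Runge-Kutta sketch, run on a synthetic rotational field rather than patient-specific CFD data, looks like this; the field and all values are illustrative.

```python
# Streamline tracing as RK4 integration of dx/dt = v(x) through a steady
# velocity field. The toy field below is solid-body rotation, whose exact
# streamlines are circles; real studies integrate through patient-specific
# CFD velocity data instead.
import numpy as np

def trace_streamline(velocity, x0, dt=0.01, steps=1000):
    """Integrate a streamline through a steady velocity field with RK4."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        k1 = velocity(x)
        k2 = velocity(x + 0.5 * dt * k1)
        k3 = velocity(x + 0.5 * dt * k2)
        k4 = velocity(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

def rotation(x):
    # Solid-body rotation about the origin: v = (-y, x).
    return np.array([-x[1], x[0]])

# 628 steps of dt = 0.01 is roughly one full revolution of the unit circle.
path = trace_streamline(rotation, [1.0, 0.0], dt=0.01, steps=628)
```

Seeding such tracers at the catheter tip and checking which outlet each path reaches is, in essence, how the contact between tip position and target-artery streamlines noted above is established.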
Based on the scholarly results above, it is evident that CI-based techniques are seen
to play a crucial role in healthcare fluid dynamics in such a way that they increase the
understanding that catheter tip location plays a crucial and determining role relative
to the control of anti-cancer agents in conventional intra-arterial chemotherapy. Also,
the findings are insightful in such a way that they indicate that in the tumor-feeding
artery, the anti-cancer agent distribution rate increases when healthcare practitioners
opt to place the catheter tip toward and under or below the target arteries. The
eventuality is that CI techniques inform the importance of considering a trend in

which the reliability of intra-arterial chemotherapy as a technique for anti-cancer agent supply to the target arteries could be compromised by high shear stresses
experienced at the target arteries—based on the patient’s vessel geometry, hence
avoiding adverse outcomes or serious complications.

3 Conclusion

In summary, this study has provided a discussion and critical review of some of
the recent scholarly investigations and findings reported relative to the use of CI
techniques in healthcare fluid dynamics. Particularly, the aim of the review has
been to determine some of the emerging trends, points of agreement, and points
of deviation among scholars regarding the future of fluid dynamics in healthcare
industries, with particular focus on the role of computational intelligence in enhanc-
ing informed decision-making among healthcare professionals. From the findings,
it is evident that the CI concept is gaining increasing adoption and application in
healthcare fluid dynamics. Some of the specific areas where it has been applied
include the determination of occlusion device performance, the determination of
device safety in cardiovascular medicine, the determination of optimal ventilation
system designs in hospital cleanrooms and operating rooms, and the determination
of the efficacy of intra-arterial chemotherapy for cancer patients; especially relative
to patient vessel geometries. Other areas where CI-based techniques are seen to gain
application include the evaluation of idealized medical devices from the perspective of inter-laboratory studies and how the CI techniques could inform healthcare
decisions concerning the management of unruptured intracranial aneurysms. In the
future, this study recommends that critical reviews focus on other subjects such as
some of the challenges facing CI techniques or scholarly investigations advocating
for the use of the CI concept in healthcare fluid dynamics and the efficacy of using
CI-based techniques in investigating and informing healthcare fluid dynamics deci-
sions in situations where patients present with different conditions simultaneously.
In so doing, the impact of other moderating factors in shaping the effectiveness of
CI-based approaches in healthcare fluid dynamics might be predicted.

References

1. Morris, P. D., Ryan, D., Morton, A. C., Lycett, R., Lawford, P. V., Hose, D. R., et al. (2013).
Virtual fractional flow reserve from coronary angiography: Modeling the significance of coro-
nary lesions: Results from the VIRTU-1 (VIRTUal fractional flow reserve from coronary
angiography) study. JACC: Cardiovascular Interventions, 6(2), 149–157.
2. Tu, S., Barbato, E., Köszegi, Z., Yang, J., Sun, Z., Holm, N. R., et al. (2014). Fractional flow
reserve calculation from 3-dimensional quantitative coronary angiography and TIMI frame
count: A fast computer model to quantify the functional significance of moderately obstructed
coronary arteries. JACC: Cardiovascular Interventions, 7(7), 768–777.

3. Nørgaard, B. L., Gaur, S., Leipsic, J., Ito, H., Miyoshi, T., Park, S.-J., et al. (2015). Influence
of coronary calcification on the diagnostic performance of CT angiography derived FFR in
coronary artery disease: A substudy of the NXT trial. JACC: Cardiovascular Imaging, 8(9),
1045–1055.
4. Erhart, P., Hyhlik-Dürr, A., Geisbüsch, P., Kotelis, D., Müller-Eschner, M., Gasser, T. C., et al.
(2015). Finite element analysis in asymptomatic, symptomatic, and ruptured abdominal aortic
aneurysms: In search of new rupture risk predictors. JACC: Cardiovascular Imaging, 49(3),
239–245.
5. Morris, P. D., van de Vosse, F. N., Lawford, P. V., Hose, D. R., & Gunn, J. P. (2015). “Virtual”
(computed) fractional flow reserve: Current challenges and limitations. JACC: Cardiovascular
Interventions, 8(8), 1009–1017.
6. Morlacchi, S., & Migliavacca, F. (2013). Modeling stented coronary arteries: Where we are,
where to go. Annals of Biomedical Engineering, 41(7), 1428–1444.
7. Peach, T., Ngoepe, M., Spranger, K., Zajarias-Fainsod, D., & Ventikos, Y. (2014). Personalizing
flow-diverter intervention for cerebral aneurysms: From computational hemodynamics to bio-
chemical modeling. International Journal for Numerical Methods in Biomedical Engineering,
30(11), 1387–1407.
8. Qureshi, M. U., Vaughan, G. D., Sainsbury, C., Johnson, M., Peskin, C. S., Olufsen, M. S.,
et al. (2014). Numerical simulation of blood flow and pressure drop in the pulmonary arterial
and venous circulation. Biomechanics and Modeling in Mechanobiology, 13(5), 1137–1154.
9. Schneiders, J., Marquering, H., Van Ooij, P., Van den Berg, R., Nederveen, A., Verbaan, D., et al.
(2015). Additional value of intra-aneurysmal hemodynamics in discriminating ruptured versus
unruptured intracranial aneurysms. American Journal of Neuroradiology, 36(10), 1920–1926.
10. Lungu, A., Wild, J., Capener, D., Kiely, D., Swift, A., & Hose, D. (2014). MRI model-based non-
invasive differential diagnosis in pulmonary hypertension. Journal of Biomechanics, 47(12),
2941–2947.
11. Kheyfets, V. O., Rios, L., Smith, T., Schroeder, T., Mueller, J., Murali, S., et al. (2015). Patient-
specific computational modeling of blood flow in the pulmonary arterial circulation. Computer
Methods and Programs in Biomedicine, 120(2), 88–101.
12. Bertoglio, C., Barber, D., Gaddum, N., Valverde, I., Rutten, M., Beerbaum, P., et al. (2014).
Identification of artery wall stiffness: In vitro validation and in vivo results of a data assimilation
procedure applied to a 3D fluid–structure interaction model. Journal of Biomechanics, 47(5),
1027–1034.
13. Sonntag, S. J., Li, W., Becker, M., Kaestner, W., Büsen, M. R., Marx, N., et al. (2014). Combined
computational and experimental approach to improve the assessment of mitral regurgitation
by echocardiography. Annals of Biomedical Engineering, 42(5), 971–985.
14. Bluestein, D., Einav, S., & Slepian, M. J. (2013). Device thrombogenicity emulation: A
novel methodology for optimizing the thromboresistance of cardiovascular devices. Journal of
Biomechanics, 46(2), 338–344.
15. Chiu, W.-C., Girdhar, G., Xenos, M., Alemu, Y., Soares, J. S., Einav, S., et al. (2014). Throm-
boresistance comparison of the HeartMate II ventricular assist device with the device throm-
bogenicity emulation-optimized HeartAssist 5 VAD. Journal of Biomechanical Engineering,
136(2), 021014.
16. Farag, M. B., Karmonik, C., Rengier, F., Loebe, M., Karck, M., von Tengg-Kobligk, H.,
et al. (2014). Review of recent results using computational fluid dynamics simulations in
patients receiving mechanical assist devices for end-stage heart failure. Methodist DeBakey
Cardiovascular Journal, 10(3), 185.
17. Homma, A., Onimaru, R., Matsuura, K., Robbins, K. T., & Fujii, M. (2015). Intra-arterial
chemoradiotherapy for head and neck cancer. Japanese Journal of Clinical Oncology, 46(1),
4–12.
18. Martufi, G., & Gasser, T. C. (2013). The role of biomechanical modeling in the rupture risk
assessment for abdominal aortic aneurysms. Journal of Biomechanical Engineering, 135(2),
021010.

19. Hariharan, P., Giarra, M., Reddy, V., Day, S., Manning, K., Deutsch, S., et al. (2011). Experi-
mental particle image velocimetry protocol and results database for validating computational
fluid dynamic simulations of the FDA benchmark nozzle model. Journal of Biomechanical
Engineering, 133, 041002.
20. Ohhara, Y., Oshima, M., Iwai, T., Kitajima, H., Yajima, Y., Mitsudo, K., et al. (2016). Investi-
gation of blood flow in the external carotid artery and its branches with a new 0D peripheral
model. Biomedical Engineering Online, 15(1), 16.
21. Pant, S., Bressloff, N. W., Forrester, A. I., & Curzen, N. (2010). The influence of strut-connectors
in stented vessels: A comparison of pulsatile flow through five coronary stents. Annals of
Biomedical Engineering, 38(5), 1893–1907.
22. Xenos, M., Girdhar, G., Alemu, Y., Jesty, J., Slepian, M., Einav, S., et al. (2010). Device throm-
bogenicity emulator (DTE)—design optimization methodology for cardiovascular devices: A
study in two bileaflet MHV designs. Journal of Biomechanics, 43(12), 2400–2409.
23. Wu, J., Paden, B. E., Borovetz, H. S., & Antaki, J. F. (2010). Computational fluid dynamics anal-
ysis of blade tip clearances on hemodynamic performance and blood damage in a centrifugal
ventricular assist device. Artificial Organs, 34(5), 402–411.
24. Ansaloni, L., Coccolini, F., Morosi, L., Ballerini, A., Ceresoli, M., Grosso, G., et al. (2015).
Pharmacokinetics of concomitant cisplatin and paclitaxel administered by hyperthermic
intraperitoneal chemotherapy to patients with peritoneal carcinomatosis from epithelial ovarian
cancer. British Journal of Cancer, 112(2), 306.
25. Au, J. L.-S., Guo, P., Gao, Y., Lu, Z., Wientjes, M. G., Tsai, M., et al. (2014). Multiscale tumor
spatiokinetic model for intraperitoneal therapy. The AAPS Journal, 16(3), 424–439.
26. Bhandari, A., Bansal, A., Singh, A., & Sinha, N. (2017). Perfusion kinetics in human brain
tumor with DCE-MRI derived model and CFD analysis. Journal of Biomechanics, 59, 80–89.
27. Barnes, S. L., Whisenant, J. G., Loveless, M. E., & Yankeelov, T. E. (2012). Practical dynamic
contrast enhanced MRI in small animal models of cancer: Data acquisition, data analysis, and
interpretation. Pharmaceutics, 4(3), 442–478.
28. Bhandari, A., Bansal, A., Jain, R., Singh, A., & Sinha, N. (2019). Effect of tumor volume on
drug delivery in heterogeneous vasculature of human brain tumors. Journal of Engineering
and Science in Medical Diagnostics and Therapy, 2(2), 021004.
29. Goodman, M. D., McPartland, S., Detelich, D., & Saif, M. W. (2016). Chemotherapy for
intraperitoneal use: A review of hyperthermic intraperitoneal chemotherapy and early post-
operative intraperitoneal chemotherapy. Journal of Gastrointestinal Oncology, 7(1), 45.
30. De Vlieghere, E., Carlier, C., Ceelen, W., Bracke, M., & De Wever, O. (2016). Data on in vivo
selection of SK-OV-3 Luc ovarian cancer cells and intraperitoneal tumor formation with low
inoculation numbers. Data in Brief, 6, 542–549.
31. Gremonprez, F., Descamps, B., Izmer, A., Vanhove, C., Vanhaecke, F., De Wever, O., et al.
(2015). Pretreatment with VEGF (R)-inhibitors reduces interstitial fluid pressure, increases
intraperitoneal chemotherapy drug penetration, and impedes tumor growth in a mouse
colorectal carcinomatosis model. Oncotarget, 6(30), 29889.
32. Stachowska-Pietka, J., Waniewski, J., Flessner, M. F., & Lindholm, B. (2012). Computer
simulations of osmotic ultrafiltration and small-solute transport in peritoneal dialysis: A
spatially distributed approach. American Journal of Physiology-Renal Physiology, 302(10),
F1331–F1341.
33. Steuperaert, M., Debbaut, C., Segers, P., & Ceelen, W. (2017). Modelling drug transport during
intraperitoneal chemotherapy. Pleura and Peritoneum, 2(2), 73–83.
34. Kim, M., Gillies, R. J., & Rejniak, K. A. (2013). Current advances in mathematical modeling
of anti-cancer drug penetration into tumor tissues. Frontiers in Oncology, 3, 278.
35. Magdoom, K., Pishko, G. L., Kim, J. H., & Sarntinoranont, M. (2012). Evaluation of a voxelized
model based on DCE-MRI for tracer transport in tumor. Journal of Biomechanical Engineering,
134(9), 091004.
36. Steuperaert, M., Falvo D’Urso Labate, G., Debbaut, C., De Wever, O., Vanhove, C., Ceelen,
W., et al. (2017). Mathematical modeling of intraperitoneal drug delivery: Simulation of drug
distribution in a single tumor nodule. Drug Delivery, 24(1), 491–501.

37. Pishko, G. L., Astary, G. W., Mareci, T. H., & Sarntinoranont, M. (2011). Sensitivity analysis of
an image-based solid tumor computational model with heterogeneous vasculature and porosity.
Annals of Biomedical Engineering, 39(9), 2360.
38. Stylianopoulos, T. (2017). The solid mechanics of cancer and strategies for improved therapy.
Journal of Biomechanical Engineering, 139(2), 021004.
39. Stylianopoulos, T., Martin, J. D., Chauhan, V. P., Jain, S. R., Diop-Frimpong, B., Bardeesy, N.,
et al. (2012). Causes, consequences, and remedies for growth-induced solid stress in murine
and human tumors. Proceedings of the National Academy of Sciences, 109(38), 15101–15108.
40. Barker, P. B., X. Golay, & Zaharchuk, G. (2013). Clinical perfusion MRI: Techniques and
applications. Cambridge University Press.
41. Winner, K. R. K., Steinkamp, M. P., Lee, R. J., Swat, M., Muller, C. Y., Moses, M. E., et al.
(2016). Spatial modeling of drug delivery routes for treatment of disseminated ovarian cancer.
Cancer Research, 76(6), 1320–1334.
42. Zhang, Y., Furusawa, T., Sia, S. F., Umezu, M., & Qian, Y. (2013). Proposition of an out-
flow boundary approach for carotid artery stenosis CFD simulation. Computer Methods in
Biomechanics and Biomedical Engineering, 16(5), 488–494.
43. Tabakova, S., Nikolova, E., & Radev, S. (2014). Carreau model for oscillatory blood flow in a
tube. In AIP Conference Proceedings. AIP.
44. Zhan, W., Gedroyc, W., & Xu, X. Y. (2014). Effect of heterogeneous microvasculature dis-
tribution on drug delivery to solid tumour. Journal of Physics D: Applied Physics, 47(47),
475401.
45. Sui, B., Gao, P., Lin, Y., Jing, L., Sun, S., & Qin, H. (2015). Hemodynamic parameters dis-
tribution of upstream, stenosis center, and downstream sides of plaques in carotid artery with
different stenosis: A MRI and CFD study. Acta Radiologica, 56(3), 347–354.
46. Marsden, A. L., Bazilevs, Y., Long, C. C., & Behr, M. (2014). Recent advances in computational
methodology for simulation of mechanical circulatory assist devices. Wiley Interdisciplinary
Reviews: Systems Biology and Medicine., 6(2), 169–188.
47. Wu, J., Liu, G., Huang, W., Ghista, D. N., & Wong, K. K. (2015). Transient blood flow in
elastic coronary arteries with varying degrees of stenosis and dilatations: CFD modelling and
parametric study. Computer Methods in Biomechanics and Biomedical Engineering, 18(16),
1835–1845.
48. Khader, A. S., Shenoy, S. B., Pai, R. B., Kamath, G. S., Sharif, N. M., & Rao, V. (2011). Effect
of increased severity in patient specific stenosis of common carotid artery using CFD—A case
study. World Journal of Modelling and Simulation, 7(2), 113–122.
49. Consolo, F., Dimasi, A., Rasponi, M., Valerio, L., Pappalardo, F., Bluestein, D., et al. (2016).
Microfluidic approaches for the assessment of blood cell trauma: A focus on thrombotic risk in
mechanical circulatory support devices. The International Journal of Artificial Organs, 39(4),
184–193.
A Novel Approach Towards Using Big
Data and IoT for Improving
the Efficiency of m-Health Systems

Kamta Nath Mishra and Chinmay Chakraborty

Abstract The application of big data in healthcare is growing at a tremendous speed, and many new discoveries and methodologies have been published in this field over the last decade. Big data technologies are being used effectively in biomedical informatics and healthcare research. Mobile phones, sensors, patients, hospitals, researchers, and other organizations generate huge amounts of healthcare data these days. Large volumes of clinical data are continuously produced by medical organizations and are used for detecting and curing diseases. The real challenge in m-health systems is how to discover, gather, analyze, and manage this information to make a person's life better and easier by predicting life-threatening risks at an early stage. Researchers have developed a number of technologies that can reduce the overheads involved in the overall management of chronic illnesses. Medical devices that continually monitor health indicators, or that track online health data in real time while a patient self-administers physiotherapy, are now in huge demand. Many patients have started using mobile applications (apps) to manage different daily health needs on a regular basis because of the easy availability of high-speed Internet connections on smartphones and in cybercafes. These devices and mobile applications are now increasingly used and integrated with telemedicine and telehealth via the Internet of Things (IoT). In this chapter, the authors discuss the applications and challenges of biomedical big data. Further, the chapter presents novel approaches to advancing healthcare systems using big data technologies and distributed computing systems.

Keywords Big data technologies · Clinical informatics · Healthcare systems ·


Imaging informatics · Public health informatics · Medical internet of things

K. N. Mishra (B)
Computer Science and Engineering, Birla Institute of Technology, Ranchi,
Jharkhand, India
e-mail: mishrakn@yahoo.com
C. Chakraborty
Electronics and Communication Engineering, Birla Institute of Technology, Ranchi,
Jharkhand, India
e-mail: cchakrabarty@bitmesra.ac.in

© Springer Nature Switzerland AG 2020 123


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_7

1 Introduction

The Internet of Things (IoT) is a novel concept and an area of great interest worldwide because of its vast applications in healthcare systems. It is remarkable that approximately 20 billion devices are connected to each other through the IoT [1]. Fundamentally, the IoT represents the internetworking of electronic devices, which allows connected devices to exchange information for the purpose of solving specific problems. This form of IoT-based internetworking has made human life much easier than ever before. According to reports of the World Health Organization (WHO), India is facing severe health problems and human life expectancy is gradually declining [2]. The IoT is one of the most promising solutions in this regard. Further, the healthcare industry can efficiently help patients manage their own day-to-day diseases, and patients can obtain assistance via mobile and IoT-based devices in emergency cases [3]. Therefore, m-health services can be used to provide standard healthcare services and quality medication to patients as per their requirements [4].
The m-health system under the IoT is fully responsible for complete patient care; these advanced systems adjust to a patient's situation, and their parameters can be tuned to the patient's disease type. Hence, with the help of the IoT, a mobile healthcare system will be able to manage the present and future health status of critically ill patients. The use of the IoT in mobile health services has great potential to increase the capacity of primary healthcare services to a large extent, making frequent interaction between patients and health service providers, including doctors, easily possible. For example, Intel has introduced a wearable-to-analytics solution that directly links wearable devices with a big data analytics engine for instantaneous handling and detection of any variation in the data. Apple is working on a sensor-laden medical wearable, called the "iWatch", for blood monitoring via the human skin, whereas Google has announced progress on contact lenses that can investigate and display glucose levels with the help of tears.
Dell recently started a pilot project that focuses on the analysis and monitoring of chronic diseases. Thus, a diabetic patient can be actively monitored, and diet-related reminders and suggestions can be provided in day-to-day life. El Camino Hospital in the USA announced that it achieved a 39–40% reduction in the last six months via telemedicine-based analysis for identifying patients at high risk; its telemedicine-based m-healthcare system can immediately recommend the most appropriate form of intervention [5, 6].
Telemedicine-based healthcare analysis systems have existed for the last few decades, but further advances in promising digital healthcare tools provide modern solutions for gathering and transferring massive amounts of real-time medical data [7, 8]. Hence, there is vast potential to increase the capability of healthcare systems to reduce risks and improve interaction between patients and health service providers, including physicians and surgeons. These solutions can also sustain the holistic care of patients, which reduces the possibility of avoidable risks. In any hospital, the IoT is created using Internet Protocol (IP) communications, medical devices, sensor systems, and a medical database, which can be used to process electronic medical records from remote places. The integration of an enterprise service bus with IP and other medical devices permits the exchange of data among all parties, including medical staff, doctors, and patients. All of these things together are called the Internet of Medical Things (IoMT). The IoMT helps deliver accurate biomedical information and medical resources to clinicians in a timely manner at critical points where immediate care is required (Fig. 1).
Healthcare device setups are becoming more complex and are therefore creating challenges for information technology professionals integrating healthcare systems with the IoT and m-health systems. Professionals integrating m-health and the IoT are advised to reconsider how to apply business intelligence so that the communication network, IoT m-health systems, and medical device support are brought together in the best way for improving the performance of IoT-based m-health big data systems. The data requirements for delivering a well-organized and efficient m-health system have always been a major concern of our society.

Fig. 1 Big data healthcare components interactions



The focus of value-based mobile healthcare is shifting from achieving financial incentives to a new model in which health service providers are rewarded on the basis of how well their patients are cured and cared for at low cost within specified time limits [9, 10]. Figure 2 describes the facilities required at common hospitals providing m-health and IoT-based m-health services; it clearly shows that group-wise support is needed for the patients and medical staff members of a hospital for its smooth functioning [11]. The integration of the IoT and wearable technology with mobile healthcare systems has always been considered a task with high potential to further enhance the accessibility and availability of m-healthcare services at a moderately low cost [12]. The concept of the IoT reflects a worldwide network of intercommunicating devices and services that are connected to the Internet and available everywhere at any time. In an m-health system it is known as the "Internet of Healthcare Things (IoHT)" or the "Internet of Patients (IoP)", which emphasizes cost-effective interactions between patients and health service providers through a secure communication system [13, 14].

Fig. 2 The basic features of a smart hospital

2 Literature Review

Gia et al. [15] presented a Fog-based mobile health monitoring system for data storage and m-health services and discussed the extraction of electrocardiogram (ECG) characteristics. Doukas and Maglogiannis [16] presented a novel approach to the online organization, processing, and control of IoT-enabled applications based on big data; their installed trial product received patient information from IoT devices and forwarded it efficiently to a cloud computing system. Tsiachri Renta et al. [17] focused on loading m-health data obtained from distributed IoT devices to a distant Cloud. Their database management system permitted IoT devices to collect critical data of users/patients in real time, and the cloud-enabled methods ensured quicker processing of the stored data so that authorized users could receive speedy warnings in crisis cases. Shahid et al. [18] developed a framework that enables visualization and data analysis for predicting mobile and electronic health shocks built on predefined m-health databases. The proposed framework works efficiently on Cloud computing infrastructure to achieve the defined goals, and it includes geographical information systems (GIS), Amazon Web Services (AWS), and fuzzy rule-based summarization procedures.
Chen et al. [19] targeted the protection of medical data shared through cloudlets. They considered an encoding scheme for data gathering, and a dependency model was designed to recognize reliable and secure endpoints, including m-health hospitals, health service providers, and clinicians, for sharing the medical data; the dependency framework was also able to link patients with health service providers and doctors. Zhang et al. [20] developed a patient-centered cyber-physical system (Health-CPS) that aims to ensure suitable and effective mobile healthcare services. The system gathers data in an integrated way and supports parallel processing and distributed storage facilities. Fazio et al. [12] designed an automated m-health system for remote patient monitoring under FIWARE; the authors emphasized improving processing and communication speed using the facilities provided by FIWARE. Vijay et al. [21] proposed a Cloud-based calorie monitoring system using e-health. The system can categorize various food objects in a meal with great accuracy and calculate the total calories of energy available in the food. Jindal [22] proposed an efficient method to estimate heart rate using the accelerometer (a smartphone-embedded sensor) and photoplethysmography (PPG) signals. The system is composed of three data-transfer steps and needs Cloud connectivity to choose ideal PPG signals through deep learning concepts of machine intelligence, classifying the signals to estimate heart rate with very high accuracy.
Muhammad et al. [23] described an IoT-Cloud-based mobile healthcare solution for monitoring a patient's voice pathology. It includes a voice pathology recognition tool that applies local binary patterns to the voice signal represented via a Mel-spectrum approach; an intelligent machine learning approach was applied to conduct the pathological computations with high accuracy. Gupta et al. [24] introduced a Cloud-IoT solution for the predictive monitoring of physical activity. The model uses embedded medical sensors, a Cloud framework, and XML Web services for rapid, safe, and smooth data acquisition and transfer, and it generates alerts to notify the sick person of anomalies or complications while performing physical activities. Shamim et al. [25] described the Healthcare-Industrial IoT (HealthIIoT) concept for real-time health monitoring of elderly and differently abled people.
Nguyen et al. [26] presented a health monitoring and control system that offers highly reliable monitoring of cardiac patients at minimal cost with high efficiency. Their Fog-based approach consists of smart gateways and energy-efficient IoT sensors; the sensors collect ECG, respiration-rate, and body-temperature data and transmit the collected data to the gateways with minimal data loss for automatic analysis and notification in a wireless environment. Ahmad et al. [10] proposed a Fog-based m-health solution that acts as an intermediary layer between the Cloud and end IoT devices; it improves data security and privacy at the boundary level with the help of a Cloud Access Security Broker (CASB). Chakraborty et al. [27] described a Fog-enabled platform that can handle latency-sensitive m-health data. Dubey et al. [28] discussed a Fog-assisted service-oriented architecture to authorize and analyze raw biomedical data obtained via IoT devices; they used resource-constrained embedded computing instances to carry out the biomedical data analysis. Negash et al. [29] focused on the implementation of a Fog-based smart e-health gateway that can support IoT-linked m-healthcare services. The healthcare gateways of the system are positioned in a distributed network at different geographical locations, and each gateway is responsible for administering and controlling multiple IoT devices directly connected with patients and medical service providers. A Fog-based smart healthcare gateway was also offered by Rahmani et al. [11], who described the possibility of using a smart e-health gateway to provide real-time storage, processing, and analysis of patient data. An early warning score (EWS) under the IoT platform was discussed to assess the proposed system's performance.
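The early warning score mentioned above aggregates per-vital-sign sub-scores into a single number that flags patient deterioration. The sketch below illustrates the idea in Python; the threshold bands are simplified illustrative values chosen by us, not the clinical NEWS tables.

```python
# Illustrative early warning score (EWS): each vital sign is scored
# 0 (normal) to 3 (severely abnormal) and the sub-scores are summed.
# The bands below are simplified for illustration only.

def band_score(value, bands):
    """Return the score of the first band containing `value`.
    `bands` is a list of (low, high, score) tuples."""
    for low, high, score in bands:
        if low <= value <= high:
            return score
    return 3  # outside all listed bands: severely abnormal

HEART_RATE_BANDS = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]
RESP_RATE_BANDS = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]
TEMPERATURE_BANDS = [(36.1, 38.0, 0), (35.1, 36.0, 1), (38.1, 39.0, 1)]

def early_warning_score(heart_rate, resp_rate, temperature):
    return (band_score(heart_rate, HEART_RATE_BANDS)
            + band_score(resp_rate, RESP_RATE_BANDS)
            + band_score(temperature, TEMPERATURE_BANDS))

# A stable patient scores 0; deteriorating vitals raise the score.
print(early_warning_score(72, 16, 36.8))   # -> 0
print(early_warning_score(118, 23, 38.5))  # -> 5
```

In a gateway-based deployment such as the one described by Rahmani et al., a function of this kind would run on the smart e-health gateway so that an alert can be raised before the data even reaches the Cloud.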
Lee et al. [30] proposed an IoT-enabled cyber-physical system that supports data analysis and knowledge acquisition approaches to further enhance productivity in different industries. A novel intelligence framework was introduced that facilitates handling industrial informatics, depending on the sensors and locations, for the mining of big data systems. Rizwan et al. [31] studied the strengths and shortcomings of a variety of traffic control systems. They proposed a very low-cost, real-time traffic management tool that deploys IoT devices and sensors to gather real-time traffic data. Zhang et al. [32] developed a novel computing paradigm, Firework, that permits distributed data sharing and processing in an IoT-based collaborative edge platform; Firework handles the distributed data by exposing virtual data views to end users through predefined interfaces. An IoT-based smart city concept [33, 34] has also been introduced, which uses big data analytics for improving quality of life. The system uses different types of sensors, including weather sensors, water sensors, surveillance sensors, network sensors, parking sensors, and smart home management sensors.
Ahlgren et al. [35] described the importance of the IoT for further enhancing the life and living standards of citizens through services such as air quality, transportation, and energy efficiency. IoT-based systems must be based on open data and should include protocols and interfaces that allow third-party innovation. Based on this idea, the researchers designed and developed a Green IoT platform to establish open platforms for smart city development. Sezer et al. [36] proposed an improved framework that integrates semantic web methods, the IoT, and big data for the analysis and design of the envisioned IoT system. Cheng et al. [37] designed and developed an edge analytics tool that performs real-time processing of data at the edges of networks in a cloud-based environment. Wang et al. [38] discussed the challenges and scope of using big data and the IoT for developing maritime clusters, and developed a novel framework for integrating industrial IoT with big data.
Pérez and Carrera [39] conducted an extensive study of the performance characteristics of application interfaces for hosting IoT workloads in the cloud, combining sophisticated data-centric technologies to provide multi-tenant data transfer capabilities, multi-protocol support, and superior querying mechanisms with software-based solutions. Another study, by Villari et al. [40], partially resolves the big data storage problem by using the AllJoyn Lambda software solution, which maps AllJoyn onto the Lambda architecture and is useful for the storage and analysis of big data.

Jara et al. [41] conducted a study to emphasize the open solutions and challenges for big data-based cyber-physical systems; the study focuses on cloud security and the incorporation of data obtained from various sources. Ding et al. [42] proposed a cluster-based mechanism for statistical analysis on an IoT-big data platform, in which the statistical analysis is performed in a distributed environment using multiple servers in parallel. Vuppalapati et al. [43] examined the importance of big data in mobile healthcare and observed that medical sensors generate a large amount of health information. On the basis of these observations, they proposed a sensor integration framework describing a scalable cloud architecture that provides a holistic scheme for controlling all the sensors of m-health systems; here, Apache Kafka and Spark are applied to process large datasets in a real-time environment.
Ahmad et al. [44] analyzed human behavior using big data analytics in the social IoT [45]. The performance of a big data-based ecosystem was analyzed for smart cities, and they concluded that collaborative filtering schemes can be used in forthcoming years to analyze human behavior with high accuracy. Arora et al. [46] used big data analytics methods to organize network-enabled devices, analyzing the efficiency of machine learning algorithms such as k-nearest neighbors (KNN), support vector machines (SVM), Naive Bayes (NB), and random forests. Yen et al. [47] investigated the potential of service discovery and composition methods for solving real-world, day-to-day problems based on data obtained through game-based crowdsourcing, which can harness human intelligence to accomplish specific control assignments.
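Classifiers such as the KNN algorithm mentioned above are simple enough to sketch from scratch. The following toy example classifies hypothetical vital-sign feature vectors by majority vote among the nearest neighbors; the data, labels, and thresholds are invented purely for illustration.

```python
import math
from collections import Counter

def knn_predict(train, labels, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance."""
    dists = sorted(
        (math.dist(x, query), y) for x, y in zip(train, labels)
    )
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (heart rate in bpm, systolic BP in mmHg)
train = [(70, 118), (66, 120), (75, 115),    # labeled "normal"
         (112, 160), (105, 155), (120, 165)]  # labeled "at-risk"
labels = ["normal"] * 3 + ["at-risk"] * 3

print(knn_predict(train, labels, (72, 119)))   # -> normal
print(knn_predict(train, labels, (110, 158)))  # -> at-risk
```

A production system of the kind Arora et al. describe would of course use a tuned library implementation on real sensor data; the point here is only the shape of the computation.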

3 Proposed Architecture of IoT Based m-Health System

3.1 IoT Components

The Internet of things includes a huge number of components that work collec-
tively for realizing the concept of accessing and using networked resources. The
components of IoT and the corresponding layers are explained in Table 1 [48].

3.1.1 Physical Objects

The physical objects, or physical devices, collect, recognize, and supervise information about differently abled users in their natural or medical scenarios. This may include devices monitoring glucose level, blood pressure, heart rate, and other daily-life medical parameters. The physical objects are connected to the Internet to transmit the medical information of differently abled patients to the concerned authorities, including doctors.

Table 1 Components and layers of the IoT-based m-health system

IoT layer       | IoT components                                    | Tasks
Application     | Applications                                      | Provides care and assistance to disabled persons and permits them to view records
Middleware      | Data management, device discovery, access control | Establishes the communication between the IoT and other applications
Access gateway  | Communication technologies                        | Sends and receives information through the Internet using gateways and enables medical devices to interchange data/information
Edge technology | Physical objects                                  | Provides and monitors data about differently abled persons

3.1.2 Communication Technologies

The well-known types of networks used in IoT healthcare applications to handle the illnesses of physically challenged persons are Wide Area Networks (WANs) and Local Area Networks (LANs) [44].

3.2 The Architecture of the Internet of Things

The IoT architecture consists of four main layers, shown in Fig. 3 [49, 50]. The two lower-level layers perform data capture, whereas the two higher-level layers are responsible for data consumption in different applications. The functional layers of the IoT architecture, from the bottom up, are as follows:
(a) Perception Layer: This hardware-dependent layer consists of various data collection elements such as cameras, wireless sensor networks (WSNs), intelligent terminals, GPS, and electronic data interfaces (EDIs) [51].
(b) Gateway Access Layer: This layer includes the working functions of the network and transport layers and is responsible for data handling. It can perform data broadcast, message routing, and message publishing/subscribing. The gateway layer receives information from the edge layer and sends information to the middleware layer using communication technologies such as Ethernet, Wi-Fi, and WSN [49, 50].
Fig. 3 The IoT framework

(c) Middleware layer: This is a software platform that provides abstraction to applications over the Internet of Things. It also provides many services, e.g., data aggregation, device discovery and management, access control, semantic data analysis, data filtering, and information discovery, with the help of the Electronic Product Code (EPC) and the Object Naming Service (ONS).
(d) Applications layer: This is the top layer and is responsible for delivering a variety of applications to the various users of the IoT. It includes two sub-layers, namely data management and application service [52].
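The four-layer flow above can be mimicked with a minimal message-passing sketch. All function names, field names, and the glucose threshold below are illustrative choices of ours, not part of any real IoT stack; the point is only how a reading travels from the perception layer up to the application layer.

```python
# Minimal sketch of the four IoT layers as a processing pipeline.

def perception_layer():
    """Edge devices produce raw sensor readings."""
    return {"device": "glucose-meter-01", "raw": "5.6 mmol/L"}

def gateway_layer(reading):
    """Forward the reading, tagging it with transport metadata."""
    return {**reading, "transport": "wifi"}

def middleware_layer(packet):
    """Aggregate/filter: parse the raw value into a number."""
    value = float(packet["raw"].split()[0])
    return {"device": packet["device"], "glucose_mmol_l": value}

def application_layer(record):
    """Deliver a patient-facing message (7.8 mmol/L is an
    illustrative cut-off, not medical advice)."""
    status = "normal" if record["glucose_mmol_l"] < 7.8 else "high"
    return f"{record['device']}: glucose {status}"

message = application_layer(middleware_layer(gateway_layer(perception_layer())))
print(message)  # -> glucose-meter-01: glucose normal
```

Each function stands in for one layer of Fig. 3; in a real deployment the hand-offs would be network protocols (e.g., Wi-Fi or WSN at the gateway) rather than direct calls.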

3.3 Proposed Model

The proposed model of the big data and IoT-based m-health system is presented in Fig. 4. The model provides patient empowerment, surveillance, fitness and training, disease monitoring, and rehabilitation facilities to patients and m-health hospitals. Patients are connected to general physicians, super-specialists, nurses, and other healthcare officials through the Internet of Things; hence, it becomes easy for patients to get an appointment with general physicians and specialist doctors [53]. Further, the proposed model provides critical care services in m-health hospitals and home services for taking care of patients using IoT-based applications. The following facilities are provided by the proposed model:

(i) Reminding Patients About Appointments

Appointment reminders are voice- or SMS-based messages sent by hospitals to patients for fixing an appointment with the doctor. The system also covers vaccination reminders, treatment results, and appointment postponements. In both developing and developed countries, the mobile phone has become the main channel for receiving appointment reminders [54].
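At its core, such a reminder service selects the appointments that fall inside a look-ahead window and renders an SMS body for each. A stdlib-only sketch (the record fields, window size, and message format are our own illustrative choices):

```python
from datetime import datetime, timedelta

def due_reminders(appointments, now, window_hours=24):
    """Return SMS texts for appointments within the next `window_hours`."""
    horizon = now + timedelta(hours=window_hours)
    return [
        f"Reminder: appointment with {a['doctor']} at {a['when']:%H:%M on %d %b}"
        for a in appointments
        if now <= a["when"] <= horizon
    ]

now = datetime(2020, 3, 1, 9, 0)
appointments = [
    {"doctor": "Dr. Rao", "when": datetime(2020, 3, 1, 15, 30)},
    {"doctor": "Dr. Sen", "when": datetime(2020, 3, 5, 10, 0)},  # too far ahead
]
for text in due_reminders(appointments, now):
    print(text)  # -> Reminder: appointment with Dr. Rao at 15:30 on 01 Mar
```

A hospital system would run this selection periodically and hand the resulting texts to an SMS or voice gateway for delivery.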

(ii) Providing Mobile Based Telemedicine to Patients

Mobile telemedicine can be defined as the direct interaction or consultation between health professionals and patients using voice, imaging, text, or video calls through a mobile phone. The chronic diseases of patients living at home can be managed through telemedicine facilities provided by hospitals. The shortage of health professionals, including doctors, is the main driver of the move towards telemedicine; it connects community health workers and physicians of urban areas to the patients of rural areas, enhancing the quality of medical care and reducing unnecessary referral costs [55].

(iii) Patient Monitoring and Raising Awareness


With reference to m-health, online patient monitoring can be defined as the use of Internet of Things-based technologies to monitor the illness and treatment of a remotely located patient. To provide these facilities, household remote sensors and imaging devices linked to mobile phones are used for data transmission and communication between patients and medical health professionals. Therefore, the need to visit a health center can be minimized [56].

Fig. 4 Proposed model of IoT and big data-based m-health system

Raising public awareness includes the use of health information products in games and quizzes to inform people about critical diseases, e.g., HIV/AIDS. These programs are usually available for mobile phones as downloadable applications, and video-based stories/songs are used for communicating with people.
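Remote monitoring of the kind described above can be reduced to a rolling check over the incoming sensor stream. The sketch below flags any heart-rate reading that strays too far from the moving average of the previous readings; the window size and tolerance are illustrative parameters, not clinical values.

```python
from collections import deque

def monitor(readings, window=5, tolerance=25):
    """Return (index, value) alerts for heart-rate readings that deviate
    from the moving average of the previous `window` readings by more
    than `tolerance` bpm."""
    history = deque(maxlen=window)
    alerts = []
    for t, bpm in enumerate(readings):
        if len(history) == history.maxlen:
            avg = sum(history) / len(history)
            if abs(bpm - avg) > tolerance:
                alerts.append((t, bpm))
        history.append(bpm)
    return alerts

# Steady readings, then a sudden spike at index 6.
readings = [72, 75, 70, 74, 73, 71, 128, 74]
print(monitor(readings))  # -> [(6, 128)]
```

In an IoT deployment this check would run continuously on the gateway or in the Cloud, with each alert forwarded to the patient's health service provider rather than printed.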

4 Discussions

The use of mobile devices to support the treatment process now plays a vital role in m-health management and control systems. This feature of m-health permits access to and use of electronic medical records (EMRs) through mobile technologies as and when medical treatment is required. The use of IoT-based technology for continuous and seamless monitoring of serious patients can help patients and avert further medical crises. Monitoring patients with IoT-based m-health tools has clear benefits, such as cost minimization and efficient utilization of medical equipment and healthcare professionals. In developed countries, medication is properly managed through message- and video-call-based approaches to avoid further complications for patients. In general, the authors have observed that IoT-based applications can drastically reduce the cost of medical care for chronic disease patients, by 30 to 40 percent; this observation is based on recent clinical experience. If remotely deployed health technology can attain its full potential in improving patient adherence using the IoT, it will be a boon for human society. Additional benefits in m-healthcare systems could be achieved if IoT-based technologies bring significant changes in patient monitoring, patient advice, situation-based raising of alerts, diet control, and exercise-related advice. IoT-based m-health systems can also introduce financial incentive schemes for patients who demonstrate improved lifestyle behaviors [57, 58].
On the basis of expert interviews, it is estimated that IoT-based monitoring and control of m-health systems can reduce the day-to-day burden of diabetes, blood pressure, and anemia-associated cases by 20–25%; hence, the overall economic position of a family can improve. It is observed that patients and m-health professionals, including doctors, profit greatly from the integration of IoT technologies and big data with healthcare systems. It is therefore necessary to develop and use approaches that permit humans and machines to incorporate big data into m-health systems for the betterment of patients. The needs of special target groups, e.g., researchers and health professionals including doctors and nurses, play a vital role in running an m-health project. There is a huge demand for technologists and technologies with which we can manage, scrutinize, and develop the highly diversified, interlinked, complex data of IoT-based m-health systems. Further, a large amount of medical and healthcare data/knowledge already exists in a scattered form, and these data sets need to be brought together for the benefit of patients and m-health hospitals. Figure 5 represents the interaction between m-health and IoT components in a cloud computing system [59].

Fig. 5 Interaction between m-health and IoT components in a cloud computing system

5 Conclusions

This research work reveals that there is huge potential for delivering more targeted, wide-reaching, and cost-efficient healthcare by extending the currently existing IoT, m-health, and big data trends. The authors have shown that the m-healthcare realm has very specific characteristics and vast challenges, which may need dedicated effort and research work to realize the full strength of IoT-integrated m-health big data systems. The computing requirements for monitoring and controlling data obtained from an IoT-based m-health environment can exploit the efficiency of sensors and health-related software applications installed on personal computers. In this research work, the authors proposed a distributed framework to integrate the sensing, monitoring, processing, and delivery of quality m-health services to remotely located patients. IoT and big data integrated environments that include wearable medical sensors will be very useful for monitoring chronic disease patients. The basic advantage of the proposed big data and IoT-based m-health system is its flexible nature, which lets different applications execute in a coordinated way; hence, the concept of shared computing resources for helping/curing remote and urban patients has now become a reality. The proposed framework needs adjustments in real scenarios, where processing speeds at different corners of the big data-integrated, IoT-based m-health system vary. The proposed system can be very useful in IoT-based scenarios and other areas where exhaustive data acquisition and very high-volume data processing activities are to be performed, such as the diagnosis of a critical disease on the basis of available symptoms and medical diagnosis reports.
Big data applications provide opportunities to discover new knowledge and
create novel techniques for further improving the quality and standard of healthcare
systems. Researchers have developed a number of technologies that can
reduce the overheads of managing chronic illnesses. Medical devices that
continually monitor health indicators, or that track online health data in real
time as a patient self-administers physiotherapy, are now in huge demand. Many patients
have now started using mobile applications (apps) to manage different daily
health needs on a regular basis because of the easy availability of high-speed Internet
connections on smartphones and in cybercafés. These devices and mobile applications
are increasingly used and integrated with telemedicine and telehealth
via the Internet of Things (IoT). In this chapter, the authors have discussed the
applications and challenges of biomedical big data in the fields of bioinformatics,
clinical informatics, imaging informatics, and public health informatics. Further,
this chapter presented novel approaches to advancing healthcare systems using
big data technologies.

References

1. Internet of Things (IoT). (2019). Number of connected devices worldwide from 2012 to 2020
(in billions). Available: https://www.statista.com/statistics/471264/iot-numberof-connected-
devices-worldwide/.
2. Institute of Health Metrics and Evaluation. (2015). Available: http://www.healthdata.org/
pakistan.
3. Andriopoulou F, Dagiuklas T, Orphanoudakis T (2017) Integrating IoT and fog computing for
healthcare service delivery. Springer International Publishing, Switzerland

4. Dong B, Yang J, Ma Y, Zhang X (2016) Medical monitoring model of internet of things based
on the adaptive threshold difference algorithm. Int J Multimedia and Ubiquitous Eng
5. Abu KE (2017) Analytics and telehealth emerging technologies: the path forward for smart
primary care environment. J Healthc Comm 2(S1):67
6. Chinmay C (2019) Mobile health (m-Health) for tele-wound monitoring. Mobile Health Appli-
cations for Quality Healthcare Delivery 5:98–116. https://doi.org/10.4018/978-1-5225-8021-
8.ch005
7. Chinmay C, Gupta B, Ghosh SK (2013) A review on telemedicine-based WBAN framework
for patient monitoring. Int J Telemed e-Health 19(8):619–626
8. Chakraborty C, Gupta B, Ghosh SK (2014) Mobile metadata assisted community database of
chronic wound. International Journal of Wound Medicine 6:34–42
9. Redowan M, Fernando LK, Rajkumar B (2018) Cloud-fog interoperability in IoT-enabled
healthcare solutions. In 19th ACM International Conference on Distributed Computing and
Networking (pp. 1–10). January 4–7, 2018
10. Ahmad M, Amin MB, Hussain S, Kang BH, Cheong T, Lee S (2016) Health fog: a novel
framework for health and wellness applications. J Supercomp 72(10):3677–3695
11. Rahmani AM, Gia TN, Negash B, Anzanpour A, Azimi I, Jiang M, Liljeberg P (2017) Exploit-
ing smart e-health gateways at the edge of healthcare internet-of-things: a fog computing
approach. Future Generation Computer Systems
12. Fazio M, Celesti A, Márquez FG, Glikson A, Villari M (2015) Exploiting the FIWARE cloud
platform to develop a remote patient monitoring system. In Proceedings of the IEEE Symposium
on Computers and Communication (ISCC) (pp. 264–270). https://doi.org/10.1109/ISCC.2015.
7405526
13. Hassanalieragh M, Page A, Soyata T, Sharma G, Aktas M, Mateos G, Kantarci B, Andreescu S
(2015) Health monitoring and management using internet-of-things (IoT) sensing with cloud-
based processing: opportunities and challenges. In Proceedings of the IEEE International
Conference on Services Computing (pp. 285–292)
14. Mahmud R, Ramamohanarao K, Buyya R (2017) Fog computing: a taxonomy, survey and
future directions. In Di Martino B, Yang L, Li, K-C, Antonio E (eds) Internet of everything:
algorithms, methodologies, technologies and perspectives (pp. 103–130). Springer, Berlin
15. Gia TN, Jiang M, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H (2015) In Proceedings
of the IEEE International Conference on Computer and Information Technology; Ubiquitous
Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive
Intelligence and Computing (pp. 356–363)
16. Doukas C, Maglogiannis I (2012) Bringing IoT and cloud computing towards pervasive health-
care. In Proceedings of the Sixth International Conference on Innovative Mobile and Internet
Services in Ubiquitous Computing (pp. 922–926)
17. Tsiachri Renta P, Sotiriadis S, Petrakis EG (2017) Healthcare sensor data management on
the cloud. In Proceedings of the 2017 Workshop on Adaptive Resource Management and
Scheduling for Cloud Computing (ARMS-CC ’17) (pp. 25–30). ACM
18. Mahmud S, Iqbal R, Doctor F (2016) Cloud enabled data analytics and visualization
framework for health-shocks prediction. Future Gener Comput Syst (Special Issue on Big
Data in the Cloud) 65(Supplement C):169–181
19. Chen M, Qian Y, Chen J, Hwang K, Mao S, Hu L (2017) Privacy protection and intrusion
avoidance for cloudlet-based medical data sharing. IEEE Transactions on Cloud Computing
99:1
20. Zhang Y, Qiu M, Tsai CW, Hassan MM, Alamri A (2017) Health-CPS: healthcare cyber-
physical system assisted by cloud and big data. IEEE Syst J 11(1):88–95
21. Vijay BP, Pallavi K, Abdulsalam Y, Parisa P, Shervin S, Ali ANS (2017) An intelligent
cloud-based data processing broker for mobile e-health multimedia applications. Future Gener
Comput Syst 66(Supplement C):71–86
22. Jindal V (2016) Integrating mobile and cloud for PPG signal selection to monitor heart rate dur-
ing intensive physical exercise. In Proceedings of International Conference on Mobile Software
Engineering and Systems (MOBILESoft’16) (pp. 36–37). ACM

23. Muhammad G, Rahman SMM, Alelaiwi A, Alamri A (2017) Smart health solution integrating
IoT and cloud: a case study of voice pathology monitoring. IEEE Comm Mag 55(1):69–73
24. Gupta PK, Maharaj BT, Malekian R (2017) A novel and secure IoT based cloud centric architec-
ture to perform predictive analysis of users activities in sustainable health centres. Multimedia
Tools Appl 76(18):18489–18512
25. Shamim HM, Ghulam M (2016) Cloud-assisted industrial internet of things (IIoT)—
Enabled framework for health monitoring. Computer Networks, 101(Supplement C), 192–202.
Industrial Technologies and Applications for the Internet of Things
26. Nguyen GT, Jiang M, Sarker VK, Rahmani AM, Westerlund T, Liljeberg P, Tenhunen H
(2017) Low-cost fog-assisted health-care IoT system with energy-efficient sensor nodes. In Pro-
ceedings of 13th International Wireless Communications and Mobile Computing Conference
(IWCMC) (pp. 1765–1770)
27. Chakraborty S, Bhowmick S, Talaga P, Agrawal DP (2016) Fog networks in healthcare appli-
cation. In Proceedings of 13th IEEE International Conference on Mobile Ad Hoc and Sensor
Systems (MASS) (pp. 386–387)
28. Dubey H, Yang J, Constant N, Amiri AM, Yang Q, Makodiya K (2015) Fog data: enhancing tele-
health big data through fog computing. In Proceedings of the ASE BigData & SocialInformatics
2015 (ASE BD&SI’15). ACM, New York, Article 14, 6
29. Negash B, Gia TN, Anzanpour A, Azimi I, Jiang M, Westerlund T, Rahmani AM, Liljeberg
P, Tenhunen H (2018) Leveraging fog computing for healthcare IoT (pp. 145–169). Springer
International Publishing, Cham
30. Lee C, Yeung C, Cheng M (2015) Research on IoT based cyber physical system for industrial
big data analytics. In IEEE International Conference on Industrial Engineering and Engineering
Management (IEEM) (pp. 1855–1859). IEEE
31. Rizwan P, Suresh K, Babu MR (2016) Real-time smart traffic management system for smart
cities by using internet of things and big data. In International Conference on Emerging
Technological Trends (ICETT) (pp. 1–7). IEEE
32. Zhang Q, Zhang X, Zhang Q, Shi W, Zhong H (2016) Firework: big data sharing and processing
in collaborative edge environment. In Fourth IEEE Workshop on Hot Topics in Web Systems
and Technologies (HotWeb) (pp. 20–25). IEEE
33. Rathore MM, Ahmad A, Paul A (2016) Iot-based smart city development using big data ana-
lytical approach. In IEEE International Conference on Automatica (ICA-ACCA) (pp. 1–8).
IEEE
34. Kamta NM, Chakraborty C (2019) A novel approach toward enhancing the quality of life in
smart cities using clouds and IoT-based technologies. Digital Twin Technologies and Smart
Cities, Internet of Things (Technology, Communications and Computing) (pp. 19–35). https://
doi.org/10.1007/978-3-030-18732-3_2
35. Ahlgren B, Hidell M, Ngai ECH (2016) Internet of things for smart cities: Interoperability and
open data. IEEE Internet Computing 20(6):52–56
36. Sezer OB, Dogdu E, Ozbayoglu M, Onal A (2016) An extended iot framework with semantics,
big data, and analytics. In IEEE International Conference on Big Data (Big Data) (pp. 1849–
1856). IEEE
37. Cheng B, Papageorgiou A, Cirillo F, Kovacs E (2015) Geelytics: geo-distributed edge analytics
for large scale iot systems based on dynamic topology. In IEEE 2nd World Forum on Internet
of Things (WF-IoT) (pp. 565–570). IEEE
38. Wang H, Osen OL, Li G, Li W, Dai HN, Zeng W (2015) Big data and industrial internet
of things for the maritime industry in northwestern Norway. In TENCON 2015-2015 IEEE
Region 10 Conference (pp. 1–5). IEEE
39. Perez JL, Carrera D (2015) Performance characterization of the servioticy api: an iot-as-
a-service data management platform. In IEEE First International Conference on Big Data
Computing Service and Applications (Big Data Service) (pp. 62–71). IEEE
40. Villari M, Celesti A, Fazio M, Puliafito A (2014) Alljoyn Lambda: an architecture for the
management of smart environments in IoT. In International Conference on Smart Computing
Workshops (SMARTCOMP Workshops) (pp. 9–14). IEEE

41. Jara AJ, Genoud D, Bocchi Y (2014) Big data for cyber physical systems: an analysis of chal-
lenges, solutions and opportunities. In Eighth International Conference on Innovative Mobile
and Internet Services in Ubiquitous Computing (IMIS) (pp. 376–380). IEEE
42. Ding Z, Gao X, Xu J, Wu H (2013) IOT-statisticDB: A general statistical database cluster
mechanism for big data analysis in the internet of things. In Green Computing and Communi-
cations (GreenCom), 2013 IEEE and Internet of Things (iThings/CPSCom), IEEE International
Conference on and IEEE Cyber, Physical and Social Computing (pp. 535–543). IEEE
43. Vuppalapati C, Ilapakurti A, Kedari S (2016) The role of big data in creating sense ehr, an
integrated approach to create next generation mobile sensor and wearable data driven electronic
health record (ehr). In IEEE Second International Conference on Big Data Computing Service
and Applications (BigDataService) (pp. 293–296). IEEE
44. Ahmad A, Rathore MM, Paul A, Rho S (2016) Defining human behaviors using big data
analytics in social internet of things. In IEEE 30th International Conference on Advanced
Information Networking and Applications (AINA) (pp. 1101–1107). IEEE
45. Ahmed E, Rehmani MH (2017) Introduction to the special section on social collaborative
internet of things. Computers & Electrical Engineering, 382–384
46. Arora D, Li KF, Loffler A (2016) Big data analytics for classification of network enabled
devices. In 30th International Conference on Advanced Information Networking and Applica-
tions Workshops (WAINA) (pp. 708–713). IEEE
47. Yen LL, Zhou G, Zhu W, Bastani F, Hwang SY (2015) A smart physical world based on service
technologies, big data, and game-based crowd sourcing. In IEEE International Conference on
Web Services (ICWS) (pp. 765–772). IEEE
48. Wassnaa A (2015) Privacy and security issues in IoT healthcare applications for the disabled
users. A survey (pp. 1–40). Master Degree Thesis, Western Michigan University
49. Santucci G (2011) From internet to data to internet of things. In Proceedings of the International
Conference on Future Trends of the Internet. Journal of Wireless Personal Communications,
58(1):49–69
50. Atzori L, Lera A, Morabito G (2010) The internet of things: a survey. Comput Netw 54(15):1–17
51. Hussain S, Schaffner S, Moseychuck D (2009) Applications of wireless sensor networks and
RFID in a smart home environment. In IEEE 7th Annual Conference on Communication
Networks and Services Research, Moncton, NB (pp. 153–157)
52. Jia X, Feng Q, Fan T, Lei Q (2012) RFID technology and its applications in internet of things
(IoT). In IEEE 2nd International Conference on Consumer Electronics, Communications and
Networks (CECNet), Yichang (pp. 1282–1285), April 2012
53. Annette CN (2016) Internet & audiology—ehealth in the large perspective. Vingstedkursus
(pp. 1–52). Erikholm Research Centre, Part of Oticon, August 25–26, 2016
54. Milovanovic DA, Bojkovic ZS (2017) New generation IoT-based healthcare applications:
requirements and recommendations. Int J Syst Appl Eng Devel 11:17–20
55. Higinio M, David G, Rafael MT, Jorge A, Julian S (2017) An IoT-based computational
framework for healthcare monitoring in mobile environments. Sensors Journal 17:1–25
56. Christos P, Christoph T, Nikolaos G, Pantelis A, Nigel J, Bin Z, Guixia K, Cindy F, Clara L,
Chunxue B, Kostas D, Katarzyna W, Panayiotis K (2016) A new generation of e-health systems
powered by 5G, WWRF WG e/m-health and wearable vertical industries platform. Wireless
World Research Forum, White Paper (pp. 1–37)
57. Joyia GJ, Liaqat RM, Farooq A, Rehman S (2017) Internet of medical things (IOMT):
Applications, benefits and future challenges in healthcare domain. J Comm 12(4):240–247
58. Patan R, Rajasekhara BM, Suresh K (2017) Design and development of low investment smart
hospital using internet of things through innovative approaches. Biomedical Research Journal
28(11):4979–4985
59. Kubo (2014) The research of IoT based on RFID technology. In IEEE 7th International
Conference on Intelligent Computation Technology and Automation, China, Changsha
(pp. 832–835)
Using Artificial Intelligence to Bring
Accurate Real-Time Simulation
to Virtual Reality

Deepak Kumar Sharma, Arjun Khera and Dharmesh Singh

Abstract There has always been a glaring gap between theoretical possibilities,
clinical trials, and real-world applications in the medical industry. Any new
research, experimentation, or training in this sector has always been subject to extreme
scrutiny and legal intricacies, due to the complexity of the human body and the
complications that might arise from the application of prematurely tested
techniques or tools. The introduction of virtual reality into the medical industry is
bringing all these troubles to heel. Simulations generated by virtual reality are
currently being explored to impart education and practical medical experience to
students and doctors alike, and to generate engaging environments for patients, thus
assisting in aspects ranging from the treatment of medical conditions to rehabilitation.
This book chapter aims to develop an understanding of how virtual reality
is being applied in the healthcare industry. A formal study of various solutions for
reducing latency is presented, along with the research being done in the area to
improve performance and make the experience more immersive. It is evident
that motion-to-photon latency plays a crucial role in determining a genuine virtual
reality experience. Among many techniques, foveated rendering and gaze-tracking systems
seem the most promising for creating exciting opportunities for virtual reality systems
in the future.
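Since foveated rendering with gaze tracking is singled out above as especially promising, a minimal sketch may help fix the idea: render at full quality only near the tracked gaze point and progressively coarser toward the periphery, where visual acuity falls off sharply. This is an illustrative Python sketch, not an implementation from this chapter; the 5°/15° eccentricity thresholds and the pixels-per-degree figure are our own assumptions.

```python
import math

def eccentricity_deg(gaze, pixel, pixels_per_degree=20.0):
    """Angular distance (degrees) between the gaze point and a pixel,
    assuming a flat screen with a fixed pixels-per-degree density."""
    dx, dy = pixel[0] - gaze[0], pixel[1] - gaze[1]
    return math.hypot(dx, dy) / pixels_per_degree

def shading_rate(ecc_deg):
    """Pick a shading level: full resolution near the fovea,
    progressively coarser toward the periphery (illustrative thresholds)."""
    if ecc_deg < 5.0:       # foveal region: full quality
        return 1
    elif ecc_deg < 15.0:    # near periphery: half resolution
        return 2
    else:                   # far periphery: quarter resolution
        return 4

gaze = (960, 540)                                          # looking at screen centre
print(shading_rate(eccentricity_deg(gaze, (960, 540))))    # at the fovea -> 1
print(shading_rate(eccentricity_deg(gaze, (960, 740))))    # 10 deg away -> 2
print(shading_rate(eccentricity_deg(gaze, (300, 540))))    # far periphery -> 4
```

Because only the small foveal region is shaded at full rate, most of the frame can be rendered far more cheaply, which is what makes the technique attractive for latency reduction.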

1 Introduction

The field of Virtual Reality has witnessed a meteoric resurgence in the last few years.
This has largely been made possible due to the combined effects of significantly

D. K. Sharma (B) · A. Khera · D. Singh


Department of Information Technology, Netaji Subhas University of Technology (Formerly Netaji
Subhas Institute of Technology), New Delhi, India
e-mail: dk.sharma1982@yahoo.com
A. Khera
e-mail: arjunk.it@nsit.net.in
D. Singh
e-mail: dharmeshs.it@nsit.net.in
© Springer Nature Switzerland AG 2020
D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_8

improved hardware and the falling cost of the required equipment. However,
in terms of progress, the current technology is still a long way from achieving the
desired results under the given constraints. The term virtual reality was conceived
back in 1965, when Ivan Sutherland set the goal to “make that (virtual) world in the window
look real, sound real, feel real, and respond realistically to the viewer’s actions”
[1]. The term virtual reality does not carry a concrete definition and hence is often
misinterpreted. Various authors [2–4] provide varying definitions of virtual reality;
scrutinised carefully, however, all the literature on this topic shares the core
concepts of creating an environment and engaging the user with that environment. A
better framing is: “Virtual reality can be defined as a three-dimensional,
computer-generated simulation in which one can navigate around, interact with, and
be immersed in another environment. Virtual reality provides a reality that mimics
our everyday one.” [5] (Figs. 1 and 2).
Virtual reality systems take control of our sensory inputs by replacing natural stimulations
with artificial ones. The crucial takeaway here is that any computer-generated
graphic can be deemed virtual reality; it is the level of immersion that
differentiates the types of virtual reality. By level of immersion,
we mean factors such as whether the system provides 2D or 3D visual depth, whether head
motion is taken into account, whether the user is allowed to be in motion, and whether
haptics are part of the system. Based on these parameters, a traditional classification
describes virtual reality as either immersive, semi-immersive, or non-immersive [6]. Fully
immersive systems such as CAVE usually consist of a projection room containing a
Fig. 1 How the human body interprets natural stimulations [59]

Fig. 2 How virtual reality mimics natural stimulation [59]



Table 1 A comparison of various types of virtual reality systems [7]

                      Non-immersive   Semi-immersive   Fully immersive
Resolution            High            High             Medium–low
Sense of immersion    None–low        Medium–high      Very high
Interaction           Low             Medium           High
Cost                  Lowest          Expensive        Very expensive

simulator; such systems are limited in their purpose by their sheer cost and lack
of flexibility. Non-immersive systems, which usually involve desktop computer applications,
form the lower end of the immersion spectrum and are much easier to design
and implement. In the middle comes the category of semi-immersive systems, such
as modern head-mounted systems. These are still expensive and require high-end
hardware to run, but provide a much richer and more accessible form of immersion. With
the rapid pace of advancements being made, the target is to bridge the gap and make
these systems cheaper yet more immersive (Table 1).
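The traditional immersion classification summarised above (and in Table 1) can be expressed as a simple rule over a system's capabilities. The following is a toy Python sketch; the boolean feature names are our own choice of illustration, not a standard taxonomy.

```python
def classify_vr_system(stereo_3d, head_tracking, haptics, surrounds_user):
    """Toy rule following the traditional immersion classification:
    non-immersive -> semi-immersive -> fully immersive."""
    if surrounds_user and stereo_3d and head_tracking and haptics:
        return "fully immersive"      # e.g. a CAVE projection room
    if stereo_3d and head_tracking:
        return "semi-immersive"       # e.g. a modern head-mounted display
    return "non-immersive"            # e.g. a desktop 3D application

print(classify_vr_system(True, True, True, True))      # fully immersive
print(classify_vr_system(True, True, False, False))    # semi-immersive
print(classify_vr_system(False, False, False, False))  # non-immersive
```

The point of the rule is that immersion is a property of the whole input/output loop, not of graphics quality alone: a desktop application with superb rendering still classifies as non-immersive.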
Another way to judge the immersion level of a user is to divide it
along two paradigms, consumption and interaction [8]. Consumption dictates how
a user takes input from the virtual world, particularly the number of sensory
outputs provided by the system and their level of detail. Hence, the consumption
model can be broadly classified along three lines: visual, audio, and haptic outputs.
The human visual system is the most complex and plays a much more important role
than the others, which is why visual fidelity is the first and foremost benchmark
for measuring the performance of a virtual system. Most current developments
in the field of virtual reality are focused on improving the immersion of head-mounted
systems, specifically the level of visual immersion, as these
two factors play a key role. The primary aim of this book chapter is to stress why
reducing latency is a key challenge to the application of virtual reality through
head-mounted devices in the medical industry and what efforts are being undertaken
to address it. The first section of this book chapter delves into the applications
of virtual reality in healthcare and the challenges faced. In conclusion to this section,
a hypothesis is formed that motion-to-photon latency is a critical factor for these
practical applications, alongside the increasing demands for deeper immersion.
To this end, the second section provides a step-wise, detailed study
of the efforts being undertaken to eliminate any form of motion and virtual reality
sickness while simultaneously providing the most immersive experience. The
last section provides an overview of the further challenges to be overcome.
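The latency concern can be made concrete with a small budget calculation: at a 90 Hz display, a new frame is scanned out roughly every 11.1 ms, and the total motion-to-photon delay is often required to stay under a commonly cited comfort threshold of about 20 ms. A sketch in Python; the individual per-stage figures below are illustrative assumptions, not measurements from this chapter.

```python
def motion_to_photon_ms(stages):
    """Total motion-to-photon latency: the time from a head movement
    to the corresponding photons leaving the display."""
    return sum(stages.values())

# Illustrative per-stage delays (milliseconds) for a head-mounted display.
stages = {
    "sensor sampling":  1.0,   # IMU / tracker read-out
    "pose estimation":  1.5,   # sensor fusion
    "render queue":     2.0,   # CPU submission + GPU scheduling
    "GPU rendering":    8.0,   # drawing the stereo frame
    "display scan-out": 5.5,   # refresh / pixel persistence
}

total = motion_to_photon_ms(stages)
print(f"total latency: {total:.1f} ms")            # 18.0 ms
print("within 20 ms comfort budget:", total <= 20.0)
```

The arithmetic makes the engineering trade-off obvious: with rendering consuming most of the budget, techniques that cut rendering cost (such as foveated rendering) or hide it (such as late reprojection of the frame against the newest head pose) are the natural levers.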

2 Applications of VR in Healthcare

2.1 Medical Education

As opposed to conventional rote learning, virtual reality adds a completely new
dimension to education: experience.

2.1.1 Experiential Learning

The study of human anatomy is mainly illustrative, so the application of VR
has great potential in medical education [9]. For instance, VR can be used to
explore organs by “flying” around, behind, or even inside them. VR can therefore be
used to gain an in-depth understanding of human anatomy that is arguably on par with
any conventional method so far, even cadaveric dissection [10]. Haptic devices give
users a sense of “touch”, which further expands the immersion level of the user [11]
(Fig. 3).

Fig. 3 Man playing with VR goggles (Margaret M Hansen. Originally published in the Journal
of Medical Internet Research (http://www.jmir.org), 01.09.2008. Except where otherwise noted,
articles published in the Journal of Medical Internet Research are distributed under the terms of the
Creative Commons Attribution License (http://www.creativecommons.org/licenses/by/2.0/), which
permits unrestricted use, distribution, and reproduction in any medium, provided (1) the original
work is properly cited, including full bibliographic details and the original article URL on www.
jmir.org, and (2) this statement is included)

Apart from these advantages, the cost of VR development has dropped
significantly, which has drawn increasing attention towards creating more
advanced VR techniques for use in medical education. Now, users not only get to
observe and interact with 3D models but can also manipulate certain aspects of the
environment and observe the reactions. For instance, VR applications can give
users the ability to turn certain systems on and off [12].

2.1.2 Distance Education

Apart from the said advantages of VR in medical education, it also provides a completely
new way to experience distance education. So far, distance education has involved
two-dimensional presentation of educational material, but virtual reality techniques
allow visualisation of data in three dimensions, with interactive functionalities that
provide a greater level of immersion in a computer-generated virtual world. It is common
knowledge that virtual reality techniques engage students’ attention and turn
education into an entertaining experience, thereby contributing to active participation
of students in the learning process. Virtual reality techniques are therefore used
to create “virtual worlds”, which are rapidly shaping the educational technology
landscape.
Second Life (SL) is one of the most popular virtual worlds [13]. Within these
platforms, end users choose a pseudonym and create their own selves (a.k.a.
avatars). These are three-dimensional graphical representations of the users in the
virtual world, which they may use to navigate, communicate with other users, and
perform other typical tasks within the virtual world via the computer’s keyboard.
Moreover, users may create and purchase various physical objects in the virtual world.
Furthermore, the SL program provides a voice feature which lets players hear other
avatars’ voices depending on their location [14]. Other software can also
be embedded into SL, creating a plethora of opportunities. One example is Wii
[15], a gaming platform created by Nintendo, which may drive users to log in and
have fun while learning.
Another example is the virtual world known as the Second Health Project [16].
Second Health is a fully equipped, high-technology healthcare system that primarily
focuses on communicating complex healthcare messages, for example by simulating
conditions such as heart attacks through animations. Another example is the Advanced
Learning and Immersive Virtual Environment (ALIVE) created at the University of Southern
Queensland [17]. The aim of the ALIVE team is to provide trainers with tools and
resources to develop learning content, which is made real in a 3D virtual world.
The ALIVE DX Editor is a simple-to-use, interactive game creator which allows
users to create three-dimensional learning content through actions as simple
as dragging and dropping a 3D scene from the gallery.

2.2 Surgery Training and Planning

Traditionally, junior surgeons have needed to be physically present in the operating room
under the supervision of a senior surgeon to acquire surgical skills. This method,
however, is proving inefficient and implausible over the years due to the increasing
number of trainees, high costs, and ethical reasons. Furthermore, as surgical operations
become more advanced and complex, observation alone no longer seems
sufficient for acquiring particular skills and training.
In comparison to traditional methods, e-learning, and videos, VR training is more
realistic, and complex surgical procedures can thus be explained in a very intuitive
way. Trainees can interact with anatomical structures and observe the changes that occur
as the surgical procedure progresses. Furthermore, the performance of a trainee
can be recorded, compared, and analysed [18]. In addition, patient participation or
senior supervision is no longer needed for basic skill training and acquisition, since
VR is able to simulate an environment sufficient for such needs.

2.2.1 Laparoscopic

The assimilation of skills required to safely conduct laparoscopic surgery necessitates
extensive training. VR simulators are very popular in laparoscopic
training, as acquiring certain skills with traditional methods is no longer efficient and
poses a potential risk to patients.
It has also been shown that trainees who train on simulators demonstrate better
psychomotor skills than those who do not [19].
Lap Mentor [20], LapSim [21], Simendo [22] and MIST-VR (Minimally Invasive
Surgical Trainer-Virtual Reality) [23] are commonly used virtual reality simulators
in laparoscopy. MIST-VR is the earliest and the most basic simulator; it can simulate
some of the manoeuvres involved in surgery, such as “grasping” and “manipulating”. Lap
Mentor and LapSim are more modern, and both provide basic-skill and procedural
training. Simendo is the latest VR simulator and has a smaller application range. None
of these VR simulators uses an HMD (head-mounted display), because in real laparoscopic
surgery the various tasks are accomplished by observing a monitor.

2.2.2 Orthopaedics

Research and development of VR simulators for orthopaedics has been considerably
slower than in other surgical disciplines. This is evident from the small number of
research papers written on the topic [24].
The latest orthopaedic simulator, Sim-Ortho, developed by OSSimTech [25], is a
next-generation virtual reality open-surgery training platform. It provides a 3D environment
for the surgery with haptic feedback, which replicates the applied force and the
resistance felt by surgeons when they manipulate tools to cut and drill bones
and tissues, providing the trainee a “life-like” experience. Other simulators, such as the
Procedicus KSA VR simulator and the Procedicus virtual arthroscopy (VA) simulator,
have been frequently used to train trainees since 2002. These also provide haptic
feedback every time a trainee touches an organ. ARTHRO Mentor, developed by 3D
Systems, is one of the latest and most advanced arthroscopic training simulators. Like
the other simulators, it can also provide haptic feedback; in addition, it can
create different positions that help trainees acquire more skills.
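The force and resistance feedback these simulators describe is commonly approximated with a penalty (spring-damper) model: the deeper the virtual tool penetrates a surface, the stronger the opposing force. A minimal Python sketch of that general technique; the stiffness and damping constants are illustrative assumptions, not parameters of any simulator named above.

```python
def haptic_feedback_force(penetration_m, velocity_m_s, k=800.0, b=2.0):
    """Penalty-based haptic force (newtons) pushing the tool back out of
    the tissue: F = k * penetration + b * inward velocity."""
    if penetration_m <= 0.0:          # tool is not touching the surface
        return 0.0
    return k * penetration_m + b * max(velocity_m_s, 0.0)

print(haptic_feedback_force(0.0, 0.0))     # free motion -> 0.0 N
print(haptic_feedback_force(0.002, 0.05))  # 2 mm into tissue -> 1.7 N
```

In a real device this force is recomputed at a very high rate (haptic loops typically run near 1 kHz) and sent to the motors of the haptic arm, which is how "harder" materials such as bone can simply be modelled with a larger stiffness k.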

2.2.3 Surgical Planning

Another typical use of VR is preoperative planning. Traditionally, planning for surgical
procedures has relied on the ability of a surgeon to visualise in
3D using two-dimensional materials such as MRI (Magnetic Resonance Imaging),
CT (Computed Tomography) scans, etc. This visualisation is often difficult given the
complexity of anatomic structures and the different radiographic techniques used to
represent them. VR simulators are capable of combining all the two-dimensional data into
an easy-to-understand 3D view.
Most VR simulators in fact focus on preoperative planning. The obstacle in planning
has two phases. The first is the conversion of 2D radiographic data into
a 3D model [26]. This is difficult, since those techniques often record different
aspects of the same anatomical region onto a 2D plane. The second is the
simulation of the 3D model, so that the solution can be verified before the surgery
is actually performed. In other words, if a simulator is modelling a joint, the
surgeon can observe changes as different tissues are cut. This helps not only in
verifying the current solution but also in brainstorming new ones. In addition, since the
models are normally patient-specific, surgeons can practically perform the operation
in VR before performing it in reality. This also reduces the potential risks
associated with the surgery.
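The first planning phase, turning stacked 2D radiographic slices into a 3D model, can be illustrated in plain Python: consecutive slices are stacked into a voxel volume and thresholded into a binary mask of, say, bone. The slice data and the threshold below are synthetic, and a real pipeline would go on to extract a mesh from the mask; this sketch only shows the stacking step.

```python
def slices_to_volume(slices, threshold):
    """Stack 2D radiographic slices into a 3D voxel mask: a voxel is True
    where the intensity exceeds the threshold (e.g. bone in a CT scan)."""
    return [[[value > threshold for value in row] for row in slice_2d]
            for slice_2d in slices]

def count_voxels(mask):
    """Number of 'on' voxels in the 3D mask."""
    return sum(value for slice_2d in mask for row in slice_2d for value in row)

# Three synthetic 4x4 slices: a bright region that shrinks with depth.
s0 = [[0.0] * 4 for _ in range(4)]
s1 = [[0.0] * 4 for _ in range(4)]
s2 = [[0.0] * 4 for _ in range(4)]
for i in range(3):
    for j in range(3):
        s0[i][j] = 400.0       # 3x3 bright patch
for i in range(2):
    for j in range(2):
        s1[i][j] = 400.0       # 2x2 bright patch
s2[0][0] = 400.0               # single bright voxel

mask = slices_to_volume([s0, s1, s2], threshold=300.0)
print(len(mask), len(mask[0]), len(mask[0][0]))  # volume shape: 3 4 4
print(count_voxels(mask))                        # 9 + 4 + 1 = 14 voxels
```

The resulting voxel mask is exactly the kind of patient-specific 3D representation the surgeon can then inspect, cut, and rehearse against in the simulator.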

2.3 Diagnostics

In comparison to traditional methods, virtual reality provides the ability to measure
and store patients’ responses to various situations. Physicians are thus able to
assess responses that could not be assessed earlier. It is also able to reduce personnel
time and cost, thereby improving clinical efficacy and efficiency.
Virtual reality has proven efficient in the diagnosis of diseases at very early
stages, for example Alzheimer’s disease and schizophrenia. Alzheimer’s disease can be
diagnosed by studying interactions between different parts of the brain linked to
memory and navigation as patients navigate through 3D virtual environments
[27]. Schizophrenic patients also exhibit changes in certain areas of the brain when
perceiving the environment. Researchers have performed studies to detect early-stage
changes in the peripheral vision of glaucoma patients using the Oculus Rift, which had
exhibited promising results [28]. Researchers have also been able to approximately measure
the range of motion of the cervical spine using the Oculus Rift, to detect any abnormalities
[29].
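Measuring cervical range of motion with a headset, as in the study above, essentially reduces to recording head-orientation samples while the patient turns their head and taking the angular extremes. A simplified sketch using yaw (left-right rotation) only; the sample values and the normal-range figure in the comment are illustrative assumptions, not data from [29].

```python
def cervical_rom_deg(yaw_samples_deg):
    """Range of motion: the span between the extreme left and right
    head rotations recorded during the exercise."""
    return max(yaw_samples_deg) - min(yaw_samples_deg)

# Head yaw (degrees) sampled while the patient turns left, then right.
samples = [0, -15, -40, -62, -30, 0, 20, 45, 58, 10]
rom = cervical_rom_deg(samples)
print(f"cervical rotation ROM: {rom} degrees")
print("possibly restricted:", rom < 120)   # a total span well above this is often quoted as normal
```

A full assessment would repeat this for pitch (flexion/extension) and roll (lateral bending), but the headset's built-in orientation tracking is what makes the measurement essentially free.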

2.4 Treatment of Patients

Due to the simulation abilities of virtual reality, it has a vast number of use cases
in the treatment of different kinds of diseases. Since it mainly creates an
interactive, three-dimensional virtual environment, it proves very useful in
dealing with mental and physical health issues.

2.4.1 Autism

Autism is a mental condition that severely inhibits the brain's ability to socialise,
with devastating long-term effects. An inability to synthesise input stimuli is
theorised to be one cause of autism. Attention deficit hyperactivity disorder (ADHD)
and attention deficit disorder (ADD) have different effects but are thought to share
this cause. Moreover, while autism is rare, ADHD and ADD are more frequent
in children. Virtual reality can be used to provide input stimuli in a controlled
manner, increasing them slowly with respect to the individual's attention
level. Furthermore, children normally respond better to less complex and more
structured interactions. Virtual reality has been found to be of great value in
treating autism and related conditions [30].

2.4.2 Parkinson’s Disease

Parkinson’s disease is a neurological disorder that affects movement. As gait distur-


bances are common in Parkinson’s disease (PD), it further aggravates fall risk and
problems with mobility. Virtual reality can be used to create “serious games” with
the motive of increasing motor precision. For instance, a game could be developed
that precisely detects wrist movements to balance a ball on a table will increase
movement precision. Another example might include a virtual environment created
for gait training under normal and dual-task conditions with physical obstacles [31].
Overall, virtual reality can be used to target different symptoms of disease to lower
risk associated with it.

2.4.3 Alzheimer’s Disease

Alzheimer’s diseases (AD) is a progressive disease that gradually inhibits memory


and other vital mental functions. Traditionally, drugs have been used to help for a
Using Artificial Intelligence to Bring Accurate … 149

time with memory and cognitive symptoms. Virtual reality can be used to create
virtual environments for training people in cognitive skills such as spatial
navigation and precision motor skills. For instance, in one study a person with AD
was given cognitive skills training using virtual reality. The skills were found to
improve noticeably, suggesting the approach could help other people with AD as
well [32].

2.4.4 Psychological Disorders

The treatment of psychological disorders often requires patients to confront the
situations they fear. This is known as exposure therapy, which helps patients
acknowledge their fears and gradually change their perspective on the disastrous
consequences they have assumed. However, effective as it may be, it is very
difficult to recreate the desired situation and expose the patient to it. Moreover,
since these are psychological disorders, the exposure should be gradual rather than
sudden. This means that more than one scenario needs to be created and presented
to the patient in a gradual manner. The virtual environments created by virtual
reality prove very valuable in exposure therapy [33], as they can be created
individually for every patient.
Overall, virtual reality is proving very advantageous against various diseases.
We are confident that as research moves forward, both in innovative healthcare
uses of virtual reality and in the advancement of the hardware and software used
to create virtual environments, many more efficient uses of virtual reality will
materialise.

3 Rendering in Virtual Reality

In spite of the tremendous advances made in the past few years, a truly immersive
virtual reality system still remains out of reach. This problem stems from two
constantly opposing demands in the development of virtual reality systems: the
need to ship cheaper virtual reality devices, and the need to increase the depth of
immersion. In this section, we first delve into the end-to-end virtual reality
pipeline that generates the virtual world and discuss why true immersion is
difficult to achieve in comparison to present non-immersive systems. This is
followed by the developments in the graphics industry that are addressing these
issues, thereby providing the maximum possible immersion under the given
hardware constraints.

Fig. 4 3D video games rendering pipeline [60]

3.1 Virtual Reality and 3D Game Systems

Virtual reality is extremely demanding when it comes to rendering virtual
environments. The rendering of games in non-immersive environments provides a
good starting point of comparison for understanding virtual reality (Fig. 4).
Computer-generated graphics in the film industry pre-process scenes, and
rendering can take more than a day due to the use of path tracing for
photorealistic simulation of light. The challenge in games, however, is to interpret
user inputs in real time and produce the scenes accordingly. Each input is fed into
a later frame in the graphics pipeline for processing, which introduces latency,
since the time to update the frames in response to these inputs is directly tied to
the rendering pipeline. For a playable experience the latency needs to be at least
under 150 ms, though most modern video games operate at much lower latencies.
There are multiple sources of this latency. One is the use of rasterization-based
rendering algorithms instead of ray tracing. Secondly, post-processing components
add to the rendering time: although the long pipeline provides high resolution and
increased throughput, its complexity adds to the latency. Lastly, the
synchronisation points highlighted in the diagram by the red vertical lines mark
stages at which frames cannot be partially processed; a frame does not move on to
the next stage until all the pixels of the current stage have been processed. In order
to study the graphics pipeline of virtual reality, we first need to understand the
unique challenges presented by the human eye in the use of head-mounted devices
(Fig. 5).

3.2 Human Vision and Virtual Reality

The problem is that a brute-force extension of current 2D screen rendering
technologies is grossly insufficient for rendering virtual reality. The key point is
that when rendering a VR environment, we are visualising an environment that
imitates reality, which requires dealing with the workings of the human eye. When
we talk about

Fig. 5 Extent of human vision reality [61]

the human visual system, we introduce a number of new variables into the equation,
such as FoV and depth perception, bringing challenges not present on 2D screens.
Field of view (FoV) is defined as the extent of the observable world seen by the
eye at a given time, measured in degrees. Each of our eyes has a 160° horizontal
and a 175° vertical FoV [34]. The eyes work together to provide stereoscopic
depth perception over roughly 120° horizontally and 135° vertically [34]. In
addition, our eyes can move roughly 90° in a mere 1/10 of a second [35]. So while
a 2D monitor can function with less than a 30° horizontal and 17° vertical FoV,
involving merely 2 MPixels, virtual reality requires 120° horizontally and 135°
vertically for a full stereoscopic display, translating to 116 MPixels (assuming 60
pixels per degree). It is not just the wide FoV that presents problems for VR: the
human eye can also determine depth, so an immersive VR environment needs a
display dealing with both the wide FoV and proper depth perception.
Also, as discussed previously, latency in VR systems needs to be kept to a
minimum. While the most demanding rendering requirements for 3D models on
2D screens, which come from gaming, can work with a frame latency anywhere
between 16 and 33 ms and a frame rate of 30 FPS, VR systems need a frame
latency of about 5 ms and a frame rate exceeding 90 FPS for a basic visual
experience [36]. In addition, this work has to be done twice, as we have two eyes,
which is highly taxing on the GPU. This means that current rendering pipelines as
well as existing GPU hardware need to deal with both reduced frame latency and
increased frame rates for VR.
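The pixel budgets quoted above follow from simple arithmetic. The short sketch below reproduces them from the stated FoV and acuity figures; the function name and structure are ours, not from the cited sources.

```python
# Back-of-the-envelope pixel budget comparison between a conventional
# 2D monitor and a stereoscopic VR display, using the figures quoted
# in the text (60 pixels per degree as the target acuity).

PIXELS_PER_DEGREE = 60

def megapixels(h_fov_deg, v_fov_deg, eyes=1):
    """Pixels needed to cover the given field of view at target acuity."""
    width = h_fov_deg * PIXELS_PER_DEGREE
    height = v_fov_deg * PIXELS_PER_DEGREE
    return eyes * width * height / 1e6

monitor_mp = megapixels(30, 17)        # typical 2D monitor viewing cone
vr_mp = megapixels(120, 135, eyes=2)   # full stereoscopic overlap, both eyes

print(f"2D monitor: {monitor_mp:.2f} MPixels")   # ~1.84 MPixels
print(f"VR display: {vr_mp:.2f} MPixels")        # ~116.64 MPixels
```

The roughly 60-fold gap between the two budgets is the quantitative core of the argument that brute-force rendering does not transfer from monitors to headsets.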

3.3 Virtual Reality Graphics Pipeline

As explained in the previous section, the need for high-resolution images makes
virtual reality highly taxing. Even if brute force might one day be enough to
achieve these requirements, the high cost and bulk of the equipment would still
keep virtual reality impractical; these shortcomings were the primary reason why
virtual reality failed to progress in the late nineties. Modern head-mounted gear
has led to a resurgence in the field due to its relatively low cost, mobility and
ease of use, achieved by changing the workings of the traditional rendering
pipeline (Fig. 6).
The virtual reality graphics pipeline can be broken down into the following steps
[36]:

1. Input

This involves acquiring data from the respective input devices and sending it on
for computation of the next frame. As discussed in the introduction, the number of
input devices used depends on the level of immersion. Head tracking plays a
crucial role in the current generation of head-mounted gear, as it determines the
gaze of the user. The time taken to detect and send data regarding the user's gaze
is a determining factor in the latency of the system.

2. Generation of new frame

Similar to 3D video games, virtual reality systems also need a rendering pipeline.
However, in order to fit the latency and frame refresh rates within the given
constraints, the pipeline has to significantly reduce the number of steps of the
video game rendering pipeline. Instead of using multiple passes involving PostFX and 2D

Fig. 6 Virtual reality graphics pipeline [36]



Fig. 7 Virtual reality rendering pipeline [60]

shading, the rendering uses a single 3D pass. This reduces throughput, which is
compensated for by significantly lowering the quality of the images (Fig. 7).
However, this alone is not enough to meet the constraints, so the pipeline also
makes use of Time Warp. Instead of queuing the current input to the beginning of
the rendering pipeline, the latest head-tracking data is applied to the rendered
frames produced by the GPU by warping the images, thereby reducing latency.
Though warping the image is an added overhead in the rendering pipeline, the
reduction in latency it provides far outweighs its computational time. In addition
to rendering the image, the pipeline also has to account for the distortion produced
by the lenses used in head-mounted displays. To correct for it, the shaders
pre-distort the images by opposite amounts.
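The core of rotational Time Warp can be sketched as a re-projection of the finished frame. The sketch below assumes a pinhole camera model and a pure head rotation, for which the re-projection is the homography H = K·R·K⁻¹; the intrinsics, panel size and function names are illustrative choices, not taken from any particular headset SDK.

```python
import numpy as np

# Minimal sketch of rotational Time Warp: instead of re-rendering, the
# already-rendered frame is re-projected to account for the small head
# rotation that occurred after the frame was generated.

def intrinsics(width, height, h_fov_deg):
    """Assumed pinhole intrinsic matrix for one eye's panel."""
    f = (width / 2) / np.tan(np.radians(h_fov_deg) / 2)  # focal length, pixels
    return np.array([[f, 0, width / 2],
                     [0, f, height / 2],
                     [0, 0, 1.0]])

def yaw_rotation(deg):
    """Rotation about the vertical axis (head turning left/right)."""
    a = np.radians(deg)
    return np.array([[np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def timewarp_pixel(px, py, K, R):
    """Map a pixel of the rendered frame to its warped display position."""
    H = K @ R @ np.linalg.inv(K)       # homography for a pure rotation
    p = H @ np.array([px, py, 1.0])
    return p[0] / p[2], p[1] / p[2]

K = intrinsics(1080, 1200, 100)        # one-eye panel, 100° horizontal FoV
# A 1° late head yaw shifts the centre pixel horizontally by a few pixels.
x, y = timewarp_pixel(540.0, 600.0, K, yaw_rotation(1.0))
```

Because the warp is a single matrix applied per pixel, it costs far less than re-rendering the scene, which is why its overhead is outweighed by the latency it hides.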
3. Output
The time to output a frame rendered by the system consists of transferring the new
frame data from the GPU to the head-mounted device, followed by pixel switching
to update the pixels on the display.
The aim of the pipeline is to reduce the latency of the system without lowering
visual fidelity. In the following section we discuss latency in current virtual
reality systems with respect to head-mounted devices.

3.4 Motion to Photons Latency

The most important factor in providing a truly immersive VR experience is
matching the sensory input of the virtual world to human sensory expectations.
Perceived latency is described as 'motion to photons': the length of time between
an input (e.g., changing the head position) and the moment when a full frame of
pixels reflecting the associated change is displayed. For example, given an object
placed

at the centre of line of sight, a sudden shift of vision to the right by turning your
head should result in that object moving to the left almost in sync to the speed of
your heads movement. Any delay in this objects movement will result in a breaking
of perception of stationarity. The key takeaway is that is that in order to make a user
perceive the virtual world objects as real, they must always be at the right position.
Results achieving 99% percent accuracy will still be a failure as our visual system is
designed to detect such anomalies and which can lead to disorienting and nauseating
VR experiences.
The natural question that arises is: how much latency is acceptable? To avoid
motion sickness, the current industry threshold for latency is set at 20 ms,
although research suggests the optimum should be around 7 ms. For comparison,
if we were to allow a latency of 50 ms on a system with a 1K × 1K resolution
over a 100° FoV and rotate our head at 60°/s, the latency would introduce a
variation of three degrees, which is very noticeable [36]. The end goal for
engineers is a truly immersive VR experience without the nauseating simulator
sickness caused by motion-to-photons latency. A ballpark estimate for an
immersive display would be 60 pixels per degree, which would require an
astounding 9.6K horizontal and 10.5K vertical pixels, translating to 100.8
MPixels for each eye [37]. Given current hardware limitations, such a figure is
unachievable for the foreseeable future. Hence, continuing to improve
immersiveness without motion sickness requires ingenious methods to reduce
latency without compromising on visual fidelity. The most common strategies
[38] applied both to reduce latency and to minimize any remaining latency are:
1. Lower the complexity of the virtual world
2. Improve the rendering pipeline performance
3. Remove the delays along the path from the rendered image to the switching pixels
4. Use predictions to estimate future viewpoints and world states
5. Shift or distort the rendered image to compensate for last moment viewpoint
errors and missing frames
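The worked example above reduces to a one-line calculation: the angular error introduced by latency is the head rotation speed multiplied by the delay. A minimal sketch (the function name is ours):

```python
# How far (in degrees) the world appears to lag behind a head rotation,
# for a given motion-to-photons latency. Error = rotation speed x latency.

def angular_error_deg(head_speed_deg_per_s, latency_ms):
    return head_speed_deg_per_s * latency_ms / 1000.0

# 60°/s head turn with 50 ms latency -> 3° of error, clearly noticeable.
print(angular_error_deg(60, 50))   # 3.0
# At the 20 ms industry threshold the error drops to 1.2°.
print(angular_error_deg(60, 20))   # 1.2
```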

3.5 Improving Input Performance: Using Predictions for Future Viewpoints Estimation

Accurately predicting human movement unlocks exciting new opportunities,
especially when combined with virtual reality. As discussed in Sect. 2, exploiting
gaze tracking allows patients to interact with the virtual environment using only
their eyes. The following subsections present new methods that improve the
overall virtual reality experience by exploiting improved motion prediction
techniques.

Fig. 8 Taxonomy of gaze based interaction [39]

3.5.1 Gaze Tracking

Eye tracking is witnessing a surge in popularity due to virtual reality. As discussed
previously, reducing motion-to-photons latency is a prime concern for VR
developers. To this end, new rendering methods such as foveation aim to improve
the visual experience while simultaneously reducing the rendering time and
computational power required. Foveation and its applications are discussed in
Sect. 3.6.1; a key prerequisite for foveation, however, is accurate eye tracking. To
understand how gaze tracking is proving to be game-changing, we refer to the
following taxonomy [39], which splits gaze-based interaction into four forms:
diagnostic (off-line measurement), active (selection, look to shoot), passive
(foveated rendering, a.k.a. gaze-contingent displays), and expressive (gaze
synthesis) (Fig. 8).
Here we present a few techniques that are improving the accuracy of gaze tracking
and exploiting its benefits, particularly for rendering virtual environments.

3.5.2 Improved Gaze Prediction

Gaze is important but hard to predict, especially in virtual environments: unlike a
2D environment, which fixes the viewer's pose, a 3D virtual environment gives
the viewer 360° of freedom. A study on gaze prediction using deep learning [40]
lists the key factors as the historical gaze path, temporal saliency and spatial
saliency. Extensive work on saliency detection has already been done, with
improvements extending to stereo and video; saliency research in VR, however, is
still immature. The saliency of the same object differs across spatial scales, so the
model uses a multi-scale categorisation, namely local, FoV and global saliency.
These correspond, respectively, to saliency around the current gaze point, the
sub-image corresponding to the current FoV, and the global scene.
The model uses a convolutional neural network (CNN) for feature extraction from
the saliency maps and the corresponding images. Simultaneously, in order to incorporate

the history of the scan path, the model employs a long short-term memory (LSTM)
network for encoding. Lastly, the extracted features of both models are combined
and used to predict the gaze displacement between the gaze point at the current
time and the gaze point at an upcoming time.
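The fusion step can be sketched in a few lines of numpy. Everything below is a placeholder: the feature sizes, the random vectors standing in for the CNN and LSTM outputs, and the untrained linear head are illustrative, not values from the cited model.

```python
import numpy as np

# Sketch of combining saliency features with a gaze-history encoding to
# predict a 2D gaze displacement (dx, dy). Random vectors stand in for
# the trained CNN and LSTM; a random linear head stands in for the
# learned prediction layer.

rng = np.random.default_rng(0)

saliency_features = rng.standard_normal(128)   # stand-in for CNN output
history_encoding = rng.standard_normal(64)     # stand-in for LSTM state

# Concatenate the two feature streams, as described in the text.
fused = np.concatenate([saliency_features, history_encoding])   # shape (192,)

W = rng.standard_normal((2, fused.size)) * 0.01  # placeholder weights
b = np.zeros(2)

dx, dy = W @ fused + b                 # predicted displacement (degrees)
current_gaze = np.array([10.0, -4.0])  # illustrative gaze point
next_gaze = current_gaze + np.array([dx, dy])
```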

3.5.3 Gaze Tracking for Frame Predictions

The movement of gaze provides prescience of human intent. This can allow for
pre-rendering of future scenes by predicting frames in advance, unlocking
significant reductions in latency. However, unlike normal third-person videos,
which typically have a static background, egocentric videos have to deal with
additional background motion. Egocentric vision also involves a coordination of
head motion, gaze prediction and body pose [41], presenting a significant
challenge.
Deep Future Gaze [42] proposes a generative adversarial network (GAN) based
model that learns visual cues during training and can predict future frames.
Moreover, while conventional video prediction uses random noise as input, Deep
Future Gaze uses the current input frame. To achieve this, the model first uses a
2D convolutional network to extract a latent representation of the current frame,
so that the motion dynamics of the generated frames remain consistent with the
current frame over time. The output is then passed through a two-stream
spatial-temporal convolution model that separates foreground and background
motion in order to handle complex background motion. The combination of these
models forms the Future Frame Generation Module and produces three outputs,
representing the learned foreground, mask and background. These three streams
are combined to generate future frames, which are then sent to the next stage of
the GAN, consisting of two 3D convolutional networks: the generator and the
discriminator. The Temporal Saliency Prediction Module, which employs the
generator, predicts the anticipated gaze location, while the discriminator
distinguishes generated frames from real ones by classifying its inputs as real or
fake. The GAN improves the quality of future predictions based on feedback from
the discriminator, which also helps the model predict future gaze more accurately.
The model has been shown to significantly outperform competitive baselines and
hence provides a starting point for further research into future frame rendering
using gaze prediction.

3.6 Improving the Rendering Pipeline Performance

We have already looked at the rendering pipeline in Sect. 3.3. Here we discuss
some proposed solutions aimed at reducing the motion-to-photons latency in this
step of the graphics pipeline.

3.6.1 Foveated Rendering

Foveated rendering exploits the fact that the sharp focus of human gaze is limited
to a small region served by the fovea. As explained in Sect. 3.2, human vision
spans roughly 120° horizontally and 135° vertically, yet most fine detail is limited
to a 5° central circle. This region produces clear vision and is known as the foveal
region, while the remainder is termed peripheral vision and lacks fidelity. Outside
the foveal region there is a gradual degradation in the ability to focus, and vision
suffers from astigmatism and chromatic aberration; neural factors make this
degradation even more pronounced as distance from the foveal region increases.
This degradation of quality with distance from the fovea is termed foveation [43].
The angular distance from the central gaze direction is called eccentricity, and
acuity falls off rapidly as eccentricity increases. Current methods render a
high-resolution image over the whole display, wasting compute resources: the
foveal region occupies only about 0.8% of a 60° solid-angle display.
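The resulting shading budget can be sketched numerically: assign each pixel a resolution factor that is full inside the foveal circle and falls off with eccentricity. The linear falloff, its slope and the floor value below are illustrative assumptions, not parameters of any particular foveated renderer.

```python
import numpy as np

# Sketch of a foveated shading-rate map: full resolution inside the 5°
# foveal circle, then an (assumed) linear falloff down to a floor.

def resolution_factor(ecc_deg, fovea_deg=5.0, slope=0.02, floor=0.1):
    """1.0 inside the foveal circle, linear falloff outside, clamped."""
    factor = 1.0 - slope * np.maximum(ecc_deg - fovea_deg, 0.0)
    return np.maximum(factor, floor)

# Eccentricity across a 120° x 135° field with gaze at the centre.
xs = np.linspace(-60, 60, 121)       # horizontal angle, degrees
ys = np.linspace(-67.5, 67.5, 136)   # vertical angle, degrees
ecc = np.sqrt(xs[None, :] ** 2 + ys[:, None] ** 2)
rate = resolution_factor(ecc)

print(rate[68, 60])   # near the gaze point -> 1.0 (full resolution)
print(rate.mean())    # average shading rate across the field is far below 1
```

The mean of this map is the fraction of full-resolution shading work actually needed, which is the source of foveated rendering's performance gains.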
Foveation in rendering is not a new idea and has been studied before [44, 45].
However, it is witnessing a surge in research, particularly for VR, after a
demonstration by NVIDIA [46] displaying new foveation techniques, followed by
a claim from Oculus [47] that foveated rendering could speed up computation and
bring about a 25% performance gain.
Implementing foveated rendering, however, is not all sunshine. Even though the
foveal and peripheral regions differ significantly in visual acuity, foveated
rendering must be implemented with care. Peripheral vision allows a person to
make sense of their surroundings without active study, and excessive blurring
there can lead to tunnel vision. More importantly, a proper implementation
requires a high-speed gaze tracker, so that the location of high acuity can be
updated and aligned with eye saccades, preserving the perception of a constant
high resolution across the field of view. Aliasing introduced by the lower spatial
resolution can also lead to prominent temporal artefacts, especially when the
scenery changes due to motion.

Requirements for Implementing Foveated Rendering

Foveated rendering provides performance gains by under-sampling the peripheral
regions, which has the negative effect of blurring the periphery. Even though the
peripheral region suffers degraded visual quality, the human eye is still very adept
at detecting motion there. Peripheral vision suffers from aliasing because
resolution acuity is lower than detection acuity: detection acuity determines what
we can perceive, while resolution acuity determines how finely we can resolve
detail such as orientation. Since detection acuity degrades more slowly than
resolution acuity [48], detection acuity should serve as the estimate of acuity for
foveated rendering. Targeting resolution acuity in foveated rendering leads to loss of contrast

in the peripheral region, hence the need for a post-processing step to maintain
contrast and preserve the required details.
Another important prerequisite for a successful application of foveated rendering
is the requirement of accurate and low latency eye tracking. In addition, the saccades
also have to be taken into account to ensure that the images are not too distorted and
break the virtual immersion. If saccades were to be dropped from consideration, then
up to 20–40 Ms of latency for eye-tracking can be considered acceptable, even in
cases where foveation is more pronounced [49]. In the following section we present a
few implementations of foveated rendering that can bring about visible performance
improvements in the virtual reality rendering pipeline and hence play an important
role in reducing motion to photons latency.

Contrast Preserving Foveation

The foveation research by Patney et al. [50] concluded that images which
preserved contrast and were temporally stable proved far superior, in terms of
perceptual quality, to non-contrast-preserving or temporally unstable images.
Based on this analysis, they presented a foveated renderer that provides
performance gains through reduced peripheral sampling while avoiding spatial
and temporal aliasing and preserving image contrast through post-process contrast
enhancement.
The renderer uses pre-filtered shading terms wherever possible, so that the system
can vary the shading rate with eccentricity without introducing aliasing. The
model uses texture mipmapping [51] for texture pre-filtering, LEAN mapping [52]
for pre-filtering normal maps, and exponential-variance shadow maps [53] for
pre-filtering shadows. The renderer maintains a constant sampling rate but still
suffers aliasing, as the rate is not high enough; to reduce these effects the system
employs post-process anti-aliasing. To deal with gaze-dependent artefacts caused
by eye saccades, a new variance sampling algorithm derived from temporal
anti-aliasing [54] is used, introducing variable-size sampling and saccade-aware
reconstruction, and providing a 10× reduction in temporal instability. Lastly,
post-process contrast enhancement restores the contrast lost through the filtering
of shading attributes. The system provides significant cuts in rendering cost, with
a 70% reduction in the shading of pixel quads without significant perceptual
degradation.
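The idea behind post-process contrast enhancement can be sketched as pushing each pixel away from its local mean, restoring contrast flattened by peripheral filtering. The 3×3 box mean and the gain value below are illustrative choices, not the parameters of the cited renderer.

```python
import numpy as np

# Sketch of contrast enhancement: amplify each pixel's deviation from
# its local neighbourhood mean. Filtering flattens these deviations;
# this pass restores them.

def enhance_contrast(img, gain=1.5):
    padded = np.pad(img, 1, mode="edge")
    # 3x3 box-filter local mean, computed by summing shifted views.
    local_mean = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    return local_mean + gain * (img - local_mean)

# A flat patch with one brighter pixel: after enhancement the bright
# pixel moves further above its surroundings.
img = np.array([[0.4, 0.4, 0.4],
                [0.4, 0.6, 0.4],
                [0.4, 0.4, 0.4]])
out = enhance_contrast(img)
```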

Simulating Peripheral Vision Using Generative Neural Networks

The development of foveated rendering requires an understanding of peripheral
vision and its role. Current foveated rendering models work on the basis of gaze
estimation, gradually reducing image quality with increasing eccentricity. This
requires extremely fast and accurate eye-tracking hardware for the foveated
rendering algorithm to work properly. However, eye saccades

present a daunting challenge and can often result in gaze-dependent artefacts. In
the previous section we saw a combination of post-process contrast enhancement
and temporal anti-aliasing used to deal with saccades and related quality issues in
peripheral vision. Another plausible option is to generate peripheral vision
paralleling that of humans using neural networks; such a model might one day
best other post-processing steps in both quality and performance. Side-Eye [55]
provides a first step in this direction by using a generative network to produce
peripheral vision in real time. Existing methods of peripheral vision generation
are extremely slow and hence unsuitable for most production purposes; Side-Eye
reduced the running time to 700 ms, a performance improvement of roughly
21,000×, and the model can be further optimised to approach 33 ms in the near
future.
The architecture is a foveated generative network based on components of
CNN-based deconvolution approaches and fully convolutional segmentation
approaches. Fully convolutional networks can operate on large images and
produce output of the same spatial dimensions. Side-Eye employs four
convolutional layers with 256, 512, 512 and 3 kernels respectively. In contrast,
the Texture Tiling Model currently in use takes much longer to construct the
foveated image; the main advantage of the foveated generative network is that it
completes the foveation in a single pass.

Permissible Latency Across Foveated Rendering Techniques

As mentioned before, foveated rendering can significantly improve the
performance of the virtual reality experience. Statistics released by Tobii, a leader
in the eye-tracking space, claim a 57% reduction in average GPU load when using
dynamic foveated rendering. This consistently reduced GPU load not only makes
it easier to maintain frame rates but also provides capacity for higher frame rates,
which is crucial for an immersive experience. The question is how much latency
is acceptable before it deteriorates the user experience.
A study by Albert et al. [56] compares the latency requirements of three different
foveation techniques across three radii of foveation region. The techniques
compared are subsampling, Gaussian blur and foveated coarse pixel shading
(fCPS); subsampling sets a minimum benchmark, while Gaussian blur establishes
the upper benchmark. The values were compared at peripheral eccentricities of 5°,
10° and 20° respectively. For larger latencies, in the range of 80–150 ms, there is
a significant effect on quality; for latencies up to about 40 ms there was not much
difference. fCPS proved much better than subsampling at providing foveation.
The study highlights that although subjects were specifically asked to look for
peripheral artefacts, the latency threshold still came to around 50–70 ms. It also
stated that improvements to foveated rendering, such as temporal stability, can
improve latency tolerance.

3.6.2 Real Time Ray Tracing

The introduction of real-time ray tracing could prove game-changing for rendering
in virtual reality. Current rendering methods employ the concept of frames for
display, but ray tracing allows the output of each pixel to be computed and sent
directly from the GPU to the display. This is known as beam racing, and it
eliminates the need for display synchronisation and hence for frames. In addition,
ray tracing can directly render the barrel-distorted images required by the lens,
eliminating the post-processing lens-warp step and thereby further reducing
latency. Moreover, ray tracing supports mixed primitives such as triangles, voxels
and light fields, and unlike rasterization, which must divide the image into
multiple planes in order to render a wide field of view, ray tracing can render such
scenes directly. Lastly, ray tracing provides a drastic improvement in the image
quality of virtual environments, which could prove to be a turning point for the
application of virtual reality in fields such as human anatomy and surgery.
A further step in this direction is the use of path tracing; however, it is
computationally very expensive and currently out of reach of present-day
hardware given the constraints. There are two ways to reduce the computation
needed for path tracing. The first is to trace paths only for the required areas by
employing foveated rendering. The second is to compute only a few paths per
pixel, which has the negative side effect of noise in the image. Below we look at
denoising algorithms that can reconstruct the full image.
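The noise problem can be demonstrated directly: a pixel's value is a Monte Carlo estimate whose standard deviation shrinks only as 1/√N in the number of paths per pixel. The sketch below uses a random radiance distribution as a stand-in for real path contributions; the distribution and trial counts are illustrative.

```python
import numpy as np

# Why low path counts produce noisy images: averaging N path samples
# estimates the pixel value with standard deviation proportional to
# 1/sqrt(N), so cutting paths to save compute directly adds noise.

rng = np.random.default_rng(42)

def pixel_estimate_noise(paths_per_pixel, trials=2000):
    """Std-dev of the pixel estimate across many independent renders."""
    samples = rng.exponential(1.0, size=(trials, paths_per_pixel))
    return samples.mean(axis=1).std()

noise_1 = pixel_estimate_noise(1)
noise_16 = pixel_estimate_noise(16)
noise_256 = pixel_estimate_noise(256)
# Each 16x increase in paths per pixel cuts the noise by roughly 4x,
# which is why denoisers are so attractive: they recover quality
# without paying the 16x compute cost.
```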

Denoising Algorithms

The computation involved in path ray tracing can be effectively reduced by large
factors if instead of 10, much lesser amounts of rays were used for each pixel.
Moreover, due to the computation expensiveness of tracing the rays the sampling
rate is extremely low. The combined effect of these two results in introduction of
a very noisy image, wherein most of the energy is concentrated in a small subset
of paths or pixels. Advances in deep convolutional networks have produced highly
accurate denoised results.
Research led by NVIDIA [57] developed a new variant of these deep convolutional
networks, introducing recurrent connections into a deep autoencoder structure.
This provides better temporal stability, allows larger pixel neighbourhoods to be
considered, and increases the speed of execution. The procedure also has the added
benefit of modelling relationships based on auxiliary per-pixel input channels, such
as depth and normal. The network is fully convolutional and consists of distinct
encoder and decoder stages working at decreasing and increasing spatial resolutions,
respectively. It also employs a recurrent neural network after the encoding stage for
temporal stability. These convolutional recurrent blocks are used after every encoding
stage to retain temporal features at multiple scales. The algorithm also uses
skip connections, which jump over a set of layers and make training easier [58].
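The skip connections mentioned above can be sketched in a few lines (a toy dense residual computation in plain Python, not the paper's convolutional architecture): the block's output is the input plus the learned transformation, so an identity path is always available.

```python
def relu(vec):
    """Element-wise rectified linear unit."""
    return [max(0.0, v) for v in vec]

def dense(vec, weights):
    """A toy fully connected layer with ReLU; `weights` is a list of rows."""
    return relu([sum(w * v for w, v in zip(row, vec)) for row in weights])

def residual_block(vec, w1, w2):
    """Skip connection: output = input + F(input). Even if the two dense
    layers contribute nothing (zero weights), the input passes through."""
    transformed = dense(dense(vec, w1), w2)
    return [v + t for v, t in zip(vec, transformed)]

zero = [[0.0] * 3 for _ in range(3)]   # layers that have learned nothing...
x = [1.0, -2.0, 3.0]
y = residual_block(x, zero, zero)      # ...still pass the signal through: y == x
```

This identity path is what eases the training of very deep denoisers [58].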
Using Artificial Intelligence to Bring Accurate … 161

References

1. Sutherland, I. E. (1965). The ultimate display. Proceedings of IFIP 65, 2, 506–508.


2. Fuchs, H., Bishop, G., et al. (1992). Research directions in virtual environments. NSF
Invitational Workshop, University of North Carolina.
3. Gigante, M. (1993). Virtual reality: Definitions, history and applications. Virtual Reality
Systems, 3–14. ISBN 0-12-22-77-48-1.
4. Steuer, J. (1995). Defining virtual reality: Dimensions determining telepresence. In F. L. Biocca
(Ed.), Communication in the age of virtual reality. Hillsdale, NJ: Lawrence Erlbaum Associates.
5. Briggs, J. C. (1996). The promise of virtual reality. The Futurist, 30.
6. Alqahtani, A. S., Daghestani, L., & Ibrahim, L. F. (2017). Environments and system types of
virtual reality technology in STEM: A survey.
7. Alqahtani, A., Daghestani, L., & Ibrahim, L. F. (2017). Environments and system types of
virtual reality technology in STEM: A survey.
8. Shakibamanesh, A. (2014). Improving results of urban design research by enhancing advanced
semi-experiments in virtual environments. IJAUP, 24(2), 131–141.
9. Fairén González, M., Farrés, M., Moyes Ardiaca, J., & Insa, E. (2017). Virtual reality to
teach anatomy. In Eurographics 2017: education papers (pp. 51–58). European Association
for Computer Graphics (Eurographics).
10. Codd, A. M., & Choudhury, B. (2011). Virtual reality anatomy: Is it comparable with traditional
methods in the teaching of human forearm musculoskeletal anatomy? Anatomical Sciences
Education, 4(3), 119–125.
11. Basdogan, C., & Srinivasan, M. A. (2002). Haptic rendering in virtual environments. In
Handbook of virtual environments (pp. 157–174). CRC Press.
12. Górski, F., Buń, P., Wichniarek, R., Zawadzki, P., & Hamrol, A. (2015). Immersive city bus
configuration system for marketing and sales education. Procedia Computer Science, 75, 137–
146.
13. Second Life [Online Virtual World]. (2019). Accessible from http://secondlife.com.
14. Kamel Boulos, M. N., Hetherington, L., & Wheeler, S. (2007). Second Life: An overview of the
potential of 3-D virtual worlds in medical and health education. Health Information and
Libraries Journal, 24(4), 233–245.
15. Wii [Video Game Console]. (2019). Retrieved from URL: http://wii.com.
16. The Future of Healthcare Communication. (2007). Second health. [Online Virtual World].
Retrieved from URL: http://secondhealth.wordpress.com.
17. Hansen, M. (2008). Versatile, immersive, creative and dynamic virtual 3-D healthcare learning
environments: A review of the literature. Journal of Medical Internet Research, 10(3), e26.
18. Aïm, F., Lonjon, G., Hannouche, D., & Nizard, R. (2016). Effectiveness of virtual reality
training in orthopaedic surgery. Arthroscopy: The Journal of Arthroscopic & Related Surgery,
32(1), 224–232.
19. Mohan, P. V. R., & Chaudhry, R. (2009). Laparoscopic simulators: Are they useful! Medical
Journal Armed Forces India, 65(2), 113–117.
20. Lap Mentor [Simulator]. (2017). Accessible from https://simbionix.com/simulators/lap-
mentor/.
21. LapSim [Simulator]. (2018). Accessible from https://surgicalscience.com/systems/lapsim/.
22. Simendo [Simulator]. (2018). Accessible from https://www.simendo.eu.
23. Wilson, M. S., Middlebrook, A., Sutton, C., Stone, R., & McCloy, R. F. (1997). MIST VR:
A virtual reality trainer for laparoscopic surgery assesses performance. Annals of the Royal
College of Surgeons of England, 79(6), 403.
24. Vaughan, N., Dubey, V. N., Wainwright, T. W., & Middleton, R. G. (2016). A review of virtual
reality based training simulators for orthopaedic surgery. Medical Engineering & Physics,
38(2), 59–71.
25. SimOrtho [Simulator]. (2019). Retrieved from https://ossimtech.com/en-us/Simulators.
26. Salb, T., Weyrich, T., & Dillmann, R. (1999, April). Preoperative planning and training
simulation for risk reducing surgery. In International Training and Education Conference
(ITEC).
162 D. K. Sharma et al.

27. Cara, M. (2015, October 23). VR tests could diagnose very early onset Alzheimer's. Retrieved
from https://www.wired.co.uk/article/alzheimers-virtual-reality.
28. Universal Health Network. (2015, September 23). Retrieved from https://www.uhn.ca/
corporate/News/Pages/more_than_a_videogame_virtual_reality_helps_eye_research.aspx.
29. Xu, X., Chen, K. B., Lin, J. H., & Radwin, R. G. (2015). The accuracy of the Oculus Rift
virtual reality head-mounted display during cervical spine mobility measurement. Journal of
Biomechanics, 48(4), 721–724.
30. Strickland, D. (1997). Virtual reality for the treatment of autism. Studies in Health Technology
and Informatics, 81–86.
31. Mirelman, A., Maidan, I., Herman, T., Deutsch, J. E., Giladi, N., & Hausdorff, J. M. (2011).
Virtual reality for gait training: can it induce motor learning to enhance complex walking and
reduce fall risk in patients with Parkinson’s disease? The Journals of Gerontology: Series A,
66(2), 234–240.
32. White, P. J., & Moussavi, Z. (2016). Neurocognitive treatment for a patient with Alzheimer's
disease using a virtual reality navigational environment. Journal of Experimental Neuroscience,
10, JEN-S40827.
33. Gega, L. (2017). The virtues of virtual reality in exposure therapy. The British Journal of
Psychiatry, 210(4), 245–246.
34. Visual Search 2. (1995). Proceedings of the 2nd International Conference on Visual Search,
p. 270, Optican.
35. AltDev Blog, John Carmack. (2013, February 22). Latency mitigation strategies. [Blog Post].
Retrieved from https://web.archive.org/web/20140719053303/; http://www.altdev.co/2013/02/
22/latency-mitigation-strategies/.
36. Rambling in Valve Time, Abrash, M. (2012, December 29). Latency—The sine qua non of AR
and VR. [Blog post]. Retrieved from http://blogs.valvesoftware.com/abrash/latency-the-sine-
qua-non-of-ar-and-vr/.
37. Kanter, D. (2015). Graphics processing requirements for enabling immersive VR.
38. LaValle, S. M. (2016). Visual rendering, virtual reality (Chap. 7). Retrieved from http://msl.cs.
uiuc.edu/vr/vrch3.pdf.
39. Duchowski, A. T. (2018). Gaze-based interaction: A 30 year retrospective. Computers &
Graphics. https://doi.org/10.1016/j.cag.2018.04.002.
40. Xu, Y., Dong, Y., Wu, J., Sun, Z., Shi, Z., Yu, J., & Gao, S. (2018). Gaze prediction in dynamic
360° immersive videos. CVPR.T.
41. Land, M. F. (2004). The coordination of rotations of the eyes head and trunk in saccadic turns
produced in natural situations. Experimental Brain Research, 159(2), 151–160.
42. Zhang, M., et al. (2017). Deep future gaze: gaze anticipation on egocentric videos using
adversarial networks. CVPR.
43. Guenter, B., Finch, M., Drucker, S., Tan, D., & Snyder, J. (2012). Foveated 3D graphics. ACM
Transactions on Graphics, 31(6), 164:1–164:10.
44. Levoy, M., & Whitaker, R. (1989). Gaze-directed volume rendering. Technical report,
University of North Carolina.
45. Ohshima, T., Yamamoto, H., & Tamura, H. (1996). Gaze directed adaptive rendering for
interacting with virtual space. In Proceedings of the 1996 Virtual Reality Annual Interna-
tional Symposium (VRAIS 96), IEEE Computer Society, Washington, DC, USA, VRAIS ’96
(pp. 103–110).
46. Patney, A., Kim, J., Salvi, M., Anton Kaplanyan, M., Wyman, C., Benty, N., Lefohn, A., Luebke,
D. (2016). Perceptually-based foveated virtual reality. In ACM SIGGRAPH 2016 Emerging
Technologies (SIGGRAPH ‘16). ACM, New York, NY, USA, Article 17, 2 pp. https://doi.org/
10.1145/2929464.2929472.
47. Oculus Go. Fixed foveated rendering. Documentation. Retrieved from https://developer.oculus.
com/documentation/unreal/latest/concepts/unreal-ffr/.
48. Wang, Y.-Z., Bradley, A., & Thibos, L. N. (1997). Aliased frequencies enable the discrimination
of compound gratings in peripheral vision. Vision Research, 37(3), 283–290.

49. Albert, R., Patney, A., Luebke, D., & Kim, J. (2017). Latency requirements for foveated ren-
dering in virtual reality. ACM Transactions on Applied Perception, 14(4), 1–13. https://doi.org/
10.1145/3127589.
50. Patney, A., Salvi, M., Kim, J., Kaplanyan, A., Wyman, C., Benty, N., Luebke, D., & Lefohn,
A. (2016). Towards foveated rendering for gaze-tracked virtual reality. ACM Transactions on
Graphics, 35(6), Article 179 (November 2016), 12 pp. https://doi.org/10.1145/2980179.2980246.
51. Williams, L. (1983). Pyramidal parametrics. SIGGRAPH. Computational Graphics, 17(3),
1–11.
52. Olano, M., & Baker, D. (2010). Lean mapping. In Symposium on Interactive 3D Graphics and
Games (pp. 181–188).
53. Lauritzen, A., Salvi, M., & Lefohn, A. (2011). Sample distribution shadow maps. In Symposium
on Interactive 3D Graphics and Games (pp. 97–102).
54. Karis, B. (2014). High-quality temporal supersampling. In Advances in Real-Time Rendering
in Games, SIGGRAPH Courses.
55. Fridman, L., Jenik, B., Keshvari, S., Reimer, B., Zetzsche, C., Rosenholtz, R. (2017). SideEye:
A generative neural network based simulator of human peripheral vision. arXiv:1706.04568v2
[cs.NE].
56. Albert, R., Patney, A., Luebke, D., Kim, J. (2017). Latency requirements for foveated rendering
in virtual reality. ACM Transactions on Applied Perception, 14(4), Article 25 (September 2017),
13 pp.
57. Chaitanya, C. R. A., Kaplanyan, A. S., Schied, C., Salvi, M., Lefohn, A., Nowrouzezahrai, D.,
& Aila, T. (2017). Interactive reconstruction of Monte Carlo image sequences using a recurrent
denoising autoencoder. ACM Transactions on Graphics, 36(4), Article 98 (July 2017), 12 pp.
https://doi.org/10.1145/3072959.3073601.
58. He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep residual learning for image recognition.
ArXiv e-prints.
59. LaValle, S. M. (2016). Bird’s eye view, virtual reality (Chap. 2). Retrieved from http://msl.cs.
uiuc.edu/vr/vrch2.pdf.
60. Road to VR, Dr. Morgan McGuire. (2017, November 29). How NVIDIA Research is reinventing
the display pipeline for the future of VR. [Blog Post]. Retrieved from https://www.roadtovr.com/
exclusive-how-nvidia-research-is-reinventing-the-display-pipeline-for-the-future-of-vr-part-1
61. Peripheral Vision. Retrieved from https://commons.wikimedia.org/wiki/File:Peripheral_
vision.svg#filelinks
Application of Chicken Swarm
Optimization in Detection of Cancer
and Virtual Reality

Ayush Kumar Tripathi, Priyam Garg, Alok Tripathy, Navender Vats,
Deepak Gupta and Ashish Khanna

Abstract Cancer is one of the most common diseases and is among the leading
causes of death worldwide. Symptom awareness and screening are essential these
days in order to reduce its risks. Several machine learning models have already been
proposed to predict whether a cancer is malignant or benign. In this paper, we attempt
to propose a better way to do the same. We discuss in detail how we have applied
Chicken Swarm Optimization as a feature selection algorithm to cancer feature
datasets in order to predict whether a cancer is malignant or benign. We also elucidate
how Chicken Swarm Optimization provides better results than several other machine
learning models such as Random Forest, k-NN, Decision Trees and Support Vector
Machines. Feature selection is a technique used to eliminate redundant features from
a large dataset in order to obtain a better subset of features for processing; to achieve
this, we have used Chicken Swarm Optimization. The Chicken Swarm Optimization
algorithm is a bio-inspired algorithm that mimics the hierarchy and behaviour of
a chicken swarm in order to optimize problems. On the basis of these predictions
we can also provide quick treatment by using virtual reality simulators that can be
helpful for complex

A. K. Tripathi (B) · P. Garg · A. Tripathy · N. Vats · D. Gupta · A. Khanna
Maharaja Agrasen Institute of Technology (MAIT), New Delhi, India
e-mail: ayush8.tripathi@gmail.com
P. Garg
e-mail: gargpriyam21@gmail.com
A. Tripathy
e-mail: aloktripathy242@gmail.com
N. Vats
e-mail: navendervats@gmail.com
D. Gupta
e-mail: deepakgupta@mait.ac.in
A. Khanna
e-mail: ashishkhanna@mait.ac.in

© Springer Nature Switzerland AG 2020
D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_9

oncological surgeries. The results are better than those of the other models, as the
proposed model achieves very high accuracy compared with the others discussed in
the paper.

Keywords Cancer · Chicken swarm optimization · Feature selection · Machine
learning · Evolutionary algorithms · Classification · Nature inspired

1 Introduction

In the past few years, the field of cancer research has evolved continuously. Scientists
have applied several methods to detect cancer before it causes any symptoms [1].
Various new strategies have been proposed for the prediction of cancer, and as a
result large stockpiles of data have been collected and are available for medical
research [2, 3]. However, the precise prediction of cancer remains the most
challenging and interesting task for any physician, and this is where machine learning
methods come in handy [4]. Various machine learning techniques have already been
applied to this task, such as Artificial Neural Networks (ANN), SVMs and Decision
Trees [5]. In this paper, we propose the prediction of cancer using a recently proposed
algorithm, i.e. Chicken Swarm Optimization. With early prediction we can provide
treatments using virtual reality simulators that can significantly reduce the
complexity of surgical procedures [6]. Low-cost VR can be a very effective tool and
also helps surgeons learn complex surgical oncology procedures in a short period of
time [7]. Here, Chicken Swarm Optimization has been used as a feature selection
technique and is applied to publicly available cervical cancer and breast cancer
datasets.
The importance of segregating patients into low- or high-risk groups has become
so essential that researchers are now turning towards machine learning strategies
in order to predict cancer [8, 9]. These techniques are being utilized for early
diagnosis and for tracking the progression of cancer treatment. The ability of machine
learning tools to detect key features in a large dataset also explains their importance.
Some of these tools include Decision Trees, Random Forests, Artificial Neural
Networks, Support Vector Machines, and many bio-inspired algorithms [10]. Even
though it has been proven that machine learning models can improve our
understanding of cancer progression, a significant level of validation is required
before these methods can be adopted in regular clinical practice [11, 12]. We have
also compared the performance of Chicken Swarm Optimization for feature selection
on the breast cancer and cervical cancer datasets with other techniques, which
include k-NN, Decision Trees, Random Forests and Support Vector Machines, for
validation of results. The results show that CSO provides better accuracy than the
other methods discussed.
Feature selection is a technique of utmost importance in the field of machine
learning. It demands a heuristic approach to find an optimal feature subset [13].
The technique generates a better subset of a given complex dataset by removing
redundant features, and the computational complexity of the learning algorithm is
also significantly reduced by this method. There are brute-force methods as well as
forward selection and backward elimination techniques for feature selection, but
these are often not a great fit [14]. So, for feature selection, among the best algorithms
available are evolutionary and genetic algorithms.
Genetic algorithms belong to a class of experience-based search and optimization
strategies based on the Darwinian paradigm [15]. The natural selection process takes
place in these optimization strategies by simulating the evolution of species. The
algorithm is initialized by creating strings with random contents, each string
representing a member of the population. Next, the fitness of each member of the
population is calculated as a measure of the health of that individual within the
population [16]. In the selection phase, members are chosen from the current
population to enter a mating pool and produce new individuals for the next generation,
in such a way that an individual's chance of selection is proportional to its relative
fitness. Crossover is then performed, in which the features of two parent individuals
are combined to form two children that may exhibit new patterns in comparison to
their parents [17]. Mutation is then introduced to guard against premature
convergence; maintaining genetic diversity in the population is the main purpose of
mutation. Replacement follows, in which the parent population is completely replaced
by the offspring. Finally, the genetic algorithm terminates when a certain convergence
criterion is met [18].
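The cycle just described (initialization, fitness, selection, crossover, mutation, replacement) can be sketched on the toy "one-max" problem, where fitness is simply the number of 1-bits; the problem and all parameter values here are illustrative, not those used in this paper.

```python
import random

rng = random.Random(42)

def fitness(bits):
    """One-max toy fitness: the count of 1-bits in the string."""
    return sum(bits)

def select(pop):
    """Fitness-proportional (roulette-wheel) selection of one parent."""
    return rng.choices(pop, weights=[fitness(p) + 1 for p in pop], k=1)[0]

def crossover(a, b):
    """Single-point crossover producing two children."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(bits, rate=0.05):
    """Flip each bit with a small probability to preserve diversity."""
    return [b ^ (rng.random() < rate) for b in bits]

pop = [[rng.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):                  # replacement: offspring replace parents
    nxt = []
    while len(nxt) < len(pop):
        c1, c2 = crossover(select(pop), select(pop))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt
best = max(pop, key=fitness)
```

After a few dozen generations the best string is dominated by 1-bits, illustrating how selection pressure plus crossover and mutation climb the fitness landscape.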
An evolutionary algorithm is an optimization technique which mimics the ideas of
natural evolution, where three basic concepts are considered:
1. Parents generate offspring (crossover).
2. Individual offspring undergo some changes (mutation).
3. The fitter individuals are more likely to survive (selection).
The algorithm is initialized by creating a population of randomly generated
individuals [19]. After this, a series of steps is repeated until a stopping criterion is
reached. The next step is mutation, where a single bit is flipped from 1 to 0 or vice
versa. Then each individual in the population is evaluated. The next step is the
iteration process, which is based on the concept of survival of the fittest, meaning that
individuals who yield higher accuracy have a greater likelihood of survival.
Evolutionary strategies provide the user with a set of candidate solutions to evaluate.
Evolutionary algorithms can be applied to feature selection, as is evident from the
numerous papers available [20].
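Applied to feature selection, an individual can be a bit mask over the features, with single-bit-flip mutation and survival of the fittest as described above. The sketch below uses a made-up scoring function standing in for classifier accuracy; in practice each mask would be scored by training a classifier on the selected columns.

```python
import random

rng = random.Random(7)
N_FEATURES = 10
USEFUL = {0, 3, 5}   # hypothetical informative features (stand-in for real data)

def accuracy(mask):
    """Stand-in evaluator: rewards keeping useful features, penalizes extras.
    A real evaluator would train and score a classifier on the selected columns."""
    kept = {i for i, bit in enumerate(mask) if bit}
    return len(kept & USEFUL) - 0.1 * len(kept - USEFUL)

def mutate(mask):
    """Flip a single randomly chosen bit of the feature mask."""
    i = rng.randrange(len(mask))
    child = list(mask)
    child[i] ^= 1
    return child

pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(12)]
for _ in range(200):   # survival of the fittest: keep the best evaluations
    pop += [mutate(p) for p in pop]
    pop = sorted(pop, key=accuracy, reverse=True)[:12]
best = max(pop, key=accuracy)
```

The surviving mask converges towards keeping exactly the informative features, which is the behaviour exploited for feature selection throughout this paper.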
The Chicken Swarm Optimization algorithm [21] is a bio-inspired algorithm proposed
for optimization applications. Bio-inspired algorithms like the one used in this paper
have proven to be very helpful in solving optimization problems [22, 23], and new
research continues to produce new algorithms [24–26]. In this algorithm, the
population is divided into various groups, where each group comprises chicks, some
hens and a dominant rooster. The division of the population and the identity of each
chicken depend entirely on the fitness values of the chickens themselves. The
chickens with the best fitness values act as roosters, each group having only a single
rooster; the chickens with the worst fitness values are treated as chicks, and the
remainder are designated as hens. Groups are assigned randomly to the hens, and
mother-child relationships are also established between the chicks and hens randomly.
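The fitness-based hierarchy just described can be sketched as follows (the role fractions and group count are illustrative, and the CSO position-update equations are omitted):

```python
import random

rng = random.Random(3)

def assign_roles(fitness_values, n_groups=3, chick_frac=0.2):
    """Partition a swarm by fitness: the top individuals become roosters (one
    per group), the worst fraction become chicks, and the rest are hens.
    Hens join a random rooster's group; each chick follows a random mother hen."""
    order = sorted(range(len(fitness_values)),
                   key=lambda i: fitness_values[i], reverse=True)
    n_chicks = int(len(order) * chick_frac)
    roosters = order[:n_groups]
    chicks = order[-n_chicks:]
    hens = order[n_groups:len(order) - n_chicks]
    group_of = {h: rng.choice(roosters) for h in hens}    # random group assignment
    mother_of = {c: rng.choice(hens) for c in chicks}     # random mother-child link
    return roosters, hens, chicks, group_of, mother_of

fit = [rng.random() for _ in range(20)]
roosters, hens, chicks, group_of, mother_of = assign_roles(fit)
```

In the full algorithm these roles determine each individual's movement rule, and the hierarchy is periodically re-established as fitness values change.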
• Chicken Swarm Optimization is used as a search design to locate ideal features
in the given dataset.
• Chicken Swarm Optimization as a feature selection algorithm is discussed.
• Four classifiers are used to evaluate the features selected by the proposed
algorithm: (a) Decision Trees, (b) k Nearest Neighbours, (c) Support Vector
Machine (SVM) and (d) Random Forest.
• The proposed method is described briefly and kept easy to understand.
The remainder of the paper is organized as follows. Background on the methods is
given in Sect. 2, while Sect. 3 (Methodology) briefly explains the proposed method
along with the datasets used, the parameters and the implementation. Results of the
proposed solution are discussed in Sect. 4 (Results and Discussions). Comparisons
with other results are shown in Sect. 5 (Comparison), and Sect. 6 (Conclusions)
concludes the research with future scope for the proposed algorithm and the selected
datasets.

2 Background

2.1 Machine Learning Methods

In this paper, the selected cancer datasets, i.e. the Cervical Cancer (Risk Factors)
dataset and the Breast Cancer (Wisconsin) dataset, were passed to the proposed
Chicken Swarm Optimization method, and the respective accuracies obtained were
validated using different ML classifier algorithms: (a) Decision Trees, (b) k Nearest
Neighbours, (c) Support Vector Machine (SVM) and (d) Random Forest. These
algorithms can be described as follows:

2.1.1 K-nearest Neighbours

The K-nearest Neighbours algorithm, popularly known as the KNN algorithm, is a
very versatile and robust classification algorithm, often regarded as a benchmark for
more complex algorithms such as Support Vector Machines and Neural Networks.
It is a highly powerful classifier despite its simplicity [27]. It has a variety of
applications, including data compression, genetics and even economic forecasting.
The algorithm is illustrated in Fig. 1.

Fig. 1 Explaining kNN algorithm

KNN comes under the category of supervised learning algorithms. Supervised
learning algorithms are a class of machine learning algorithms focused on the task
of mapping input data to targets, given a set of examples. This basically means that
we are given a labelled dataset consisting of training observations (x, y), where x
denotes the features and y denotes the target we are predicting, and we wish to learn
a relationship between x and y.
For classification, the KNN algorithm essentially takes a majority vote among the
K instances most similar to a given unseen observation. A distance metric between
two data points is used to determine similarity. The most popular choice for this task
is the Euclidean distance between the two points: an ordinary straight-line distance
between two points in a given Euclidean space. The task can also be carried out
using the Manhattan distance [28].
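As a quick illustration, the two metrics can be computed for arbitrary points; here for (0, 0) and (3, 4):

```python
import math

def euclidean(p, q):
    """Ordinary straight-line distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def manhattan(p, q):
    """Sum of absolute coordinate differences (city-block distance)."""
    return sum(abs(a - b) for a, b in zip(p, q))

d_e = euclidean((0, 0), (3, 4))   # → 5.0
d_m = manhattan((0, 0), (3, 4))   # → 7
```

The choice of metric changes which neighbours count as "nearest", and hence can change the vote.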
So, given a positive integer K, an unseen observation x and a chosen similarity
metric d, the KNN algorithm runs through the entire available dataset and computes
the distance metric between the unseen observation x and each training observation.
These distances are saved in an ordered set, arranged in increasing order, and the
first K entries are selected from this sorted list.
The K in this algorithm must be picked by the programmer in such a way that the
best possible fit for the dataset is achieved. If the value of K is chosen to be very
small, the region of each prediction will be restrained and the classifier will be forced
to neglect the overall distribution; this provides a flexible fit with low bias but high
variance [29]. Selecting a higher value of K, however, brings more voters into each
prediction, which results in lower variance but increased bias. The KNN algorithm
is represented below:
Algorithm 1: [K-nearest Neighbours (KNN)].
Input: Dataset and a query point
Output: Predicted label (or value) for the query point
1. Let k be the number of nearest neighbours and S the training set.
2. For each point in S:
2.1 Calculate the distance between the query point and the current point from S.
2.2 Save the distance in an ordered set.
3. Sort the ordered set of distances in increasing order.
4. Select the first k entries from the sorted list.
5. Retrieve the labels of these entries.
6. If the task is regression,
6.1 return the mean of the selected k labels.
7. If the task is classification,
7.1 return the mode of the selected k labels.
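The steps above translate directly into a short classifier; the toy dataset here is hypothetical and only illustrates the majority vote.

```python
import math
from collections import Counter

def knn_predict(training, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `training` is a list of (point, label) pairs."""
    neighbours = sorted(training, key=lambda t: math.dist(t[0], query))[:k]
    labels = [label for _, label in neighbours]
    return Counter(labels).most_common(1)[0][0]   # mode of the k labels

# Hypothetical 2-D points standing in for tumour feature vectors.
data = [((1, 1), "benign"), ((1, 2), "benign"), ((2, 1), "benign"),
        ((8, 8), "malignant"), ((8, 9), "malignant"), ((9, 8), "malignant")]
pred = knn_predict(data, (2, 2), k=3)   # → "benign"
```

For regression, the final line would return the mean of the k labels instead of the mode.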

It is worth mentioning that the minimal training phase of KNN comes at a price:
prediction is expensive both in terms of memory and computation. Since a potentially
large dataset may be stored, the memory cost is high, and since classification requires
a pass through the whole dataset, the computation cost is also high, which is
undesirable.

2.1.2 Decision Tree

Decision trees have influenced a wide range of machine learning applications. In
decision analysis, they are used to represent decision making visually and explicitly.
They are a supervised learning method used for classification and regression models
[30] and are widely used in machine learning applications. A typical representation
of a decision tree is shown in Fig. 2.
The algorithm essentially maps out the various possible outcomes; then, on the
basis of their probabilities, benefits and costs, decision trees provide a way to weigh
possible actions against one another. As is evident from the name, the method uses
a tree-like model, which makes it possible to identify the best choice mathematically.
Typically, a tree is mapped which starts with one node and branches into two or
more possible outcomes, which again branch further into other possibilities. There
are three different types of nodes: a chance node, which gives the probabilities of
certain results; a decision node, which shows a decision that needs to be made; and
an end node, which shows the final outcome obtained along that path [31].
Fig. 2 C4.5 decision tree

After constructing the tree, we check the given test conditions beginning from the
root node; one of the outgoing edges determines the path, after which the condition
at the next node is analysed in turn. Only when the test conditions lead us to a leaf
node do we say the traversal is complete. The algorithm of the decision tree is as
shown below:
Algorithm 2: (Decision Tree (DT)).
Input: Dataset
Output: A decision tree
1. Check for the following base cases:
• Every sample in the given list belongs to the same class. In this case, a leaf
node is created for the decision tree that chooses that class.
• None of the features provides any information gain. In this case, C4.5 creates
a decision node higher up the tree using the expected value of the class.
• An instance of a previously unseen class is encountered. Again, C4.5 creates
a decision node higher up the tree using the expected value.
2. For every attribute, the normalized information gain ratio obtained by splitting
on that attribute is computed.
3. Let best_a be the attribute with the highest normalized information gain.
4. A node is created that splits on best_a.
5. Repeat the above steps on the sublists obtained by splitting on best_a, and add
the resulting nodes as children of the node.
Decision trees are capable of generating understandable rules. They perform
classification at low computational cost and are well suited to handling variables
that are both categorical and continuous. They also help clearly identify the fields
that are most relevant for classification or prediction. However, they are not
appropriate for estimation tasks where the target is the prediction of the value of a
continuous attribute.
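The information gain used in step 2 of the algorithm can be computed as the entropy reduction achieved by a candidate split. The sketch below uses the plain gain rather than C4.5's normalized gain ratio, and the labels are made up for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(labels, partitions):
    """Entropy reduction from splitting `labels` into `partitions`."""
    n = len(labels)
    remainder = sum(len(p) / n * entropy(p) for p in partitions)
    return entropy(labels) - remainder

# Hypothetical malignant/benign labels and one candidate attribute split.
parent = ["M", "M", "M", "B", "B", "B", "B", "B"]
split = [["M", "M", "M", "B"], ["B", "B", "B", "B"]]
gain = information_gain(parent, split)
```

C4.5 additionally divides this gain by the split's own entropy (the gain ratio) to avoid favouring attributes with many values.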

2.1.3 Random Forests

Random Forest belongs to the class of supervised learning algorithms and is used
for both classification and regression tasks. Random Forest has two main parts:
Random and Forest. Random stands for randomly selecting the data points from the
given dataset that are fed to the decision trees in the forest, and the forest is simply
a collection of many decision trees (decision trees are explained above).
Let’s start from the forest, as mentioned a forest is a collection of many decision
trees and each tree is given a dataset to make predictions. Each prediction of this
ensemble of decision trees is taken into account to make predictions for our Forest.
In classification tasks, we take the prediction that was done by most of the decision
trees and that becomes the prediction of the Random Forest. For example, suppose
Application of Chicken Swarm Optimization in Detection … 173

Fig. 3 Random forests

we have an ensemble of 10 decision trees and 8 of them predict that a given image
is a dog then our random forest model’s prediction will also be that the given image
is a dog.
In regression tasks, we simply take the mean of the predictions made by the decision
trees, and the result becomes the output of the random forest model. Note that the
larger the ensemble of decision trees, the more accurate the prediction of the random
forest model will be. For example, take the same ensemble of 10 decision trees, but
now for a regression task, with predictions 8, 8.01, 8.02, 7.99, 8.10, 7.98, 8, 8.01,
8, 7.97. Taking the average gives 8.008, which is the output of the random forest
model. One difference between a decision tree and the random forest is that in the
random forest there is no pruning, i.e., each tree is allowed to grow completely, as
shown in Fig. 3.
Now, coming to the Random part: it means that we randomly sample a part of the
given dataset for each decision tree. This is done to give different samples to each
decision tree so that the output of the random forest is not biased. The more decision
trees in the ensemble, the higher the prediction accuracy of the random forest model
[32].
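The two halves, random sampling with replacement for each tree and aggregation of the ensemble's predictions, can be sketched as follows, reusing the dog-image and 8.008 examples from the text; the per-tree predictions are supplied by hand rather than produced by actual trained trees.

```python
import random
from collections import Counter
from statistics import mean

rng = random.Random(1)

def bootstrap(data):
    """Sample the dataset with replacement for one tree (the 'Random' part)."""
    return [rng.choice(data) for _ in data]

def forest_classify(tree_predictions):
    """Majority vote of the ensemble (classification tasks)."""
    return Counter(tree_predictions).most_common(1)[0][0]

def forest_regress(tree_predictions):
    """Mean of the ensemble's predictions (regression tasks)."""
    return mean(tree_predictions)

label = forest_classify(["dog"] * 8 + ["cat"] * 2)   # 8 of 10 trees say dog
value = forest_regress([8, 8.01, 8.02, 7.99, 8.10, 7.98, 8, 8.01, 8, 7.97])
```

In a full implementation, each bootstrap sample would train one decision tree, and the tree predictions fed to the aggregators would come from those trees.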
Random Forests have several advantages: they can be used for both classification
and regression tasks, and they are easy to understand and manipulate, since the
default hyperparameters already give satisfying accuracy and the number of
hyperparameters is small enough to manage easily. Generally, there is no overfitting
in Random Forest models if the ensemble of decision trees is large enough.
The main disadvantage of Random Forests is that they are slow and can hardly be
used for real-time applications with high accuracy demands. If we want higher
accuracy, we need a larger ensemble of decision trees, in which case the time taken
by each decision tree to make predictions adds up, making the overall model slow;
and if the ensemble is not large enough, we get higher speed but the accuracy of the
model is decreased.

2.1.4 Support Vector Machine

Support vector machines are supervised learning models used in machine learning
to analyse data for classification and regression analysis, as illustrated in Fig. 4. A
support vector machine model represents the samples as points in space, plotted so
that samples of different classes are divided by a clear gap that is as wide as possible.
Using the kernel trick, SVMs can expertly perform non-linear classification in
addition to linear classification, implicitly mapping inputs into high-dimensional
feature spaces. The support vector machine is a supervised learning algorithm, but
when the data are unlabelled an unsupervised learning approach is required, which
searches for a natural clustering of the data into groups and then maps new data to
these groups. The construction of hyperplanes in a large or infinite-dimensional
space is essentially the work of support vector machines, which can then be used for
classification, regression and many other functions, including outlier detection [33].
The main goal of the support vector machine is to find a hyperplane that distinctly
classifies the data points in an N-dimensional space. Many hyperplanes could be
chosen to separate the two classes of data points; in the support vector machine,
our main focus is on finding the plane with the maximum margin, where the margin
refers to the distance between the plane and the data points of both classes.
Maximizing the margin gap provides some reinforcement, so that data points we
receive in the future can be classified with more assurance.
The hyperplanes used in support vector machines are basically decision boundaries
that help classify the data points; the dimension of these hyperplanes depends on
the number of features. Data points falling on either side of the hyperplane are
assigned to distinct classes. Influencing the position and orientation of the
hyperplane are the

Fig. 4 Support vector machine


Application of Chicken Swarm Optimization in Detection … 175

data points closest to it, which are called support vectors. Removing these support
vectors would affect the position of the hyperplane; it is these support vectors
that are used to maximize the margin of the classifier.
There are basically four tuning parameters in an SVM: the kernel, the
regularization parameter, gamma, and the margin. In a linear SVM, the hyperplane
is found by transforming the problem using some linear algebra, and this is where
the concept of the kernel comes in; two kernels, the polynomial and the
exponential, are involved in calculating the separation line in the higher
dimension.
How much to avoid misclassifying each training example is declared to the SVM by
the regularization parameter, generally termed the C parameter. For larger values
of C, the optimization selects a smaller-margin hyperplane if that hyperplane gets
all the training points classified correctly. For smaller values of C, the
optimizer chooses a larger-margin separating hyperplane, even if it misclassifies
more points [34].
The gamma parameter defines how far the influence of a single training example
reaches, where low values mean 'far' and high values mean 'near'. The margin is
the separation of the line from the closest class points: a good margin is one
where the separation is maximal for both classes, while a line lying close to one
of the classes gives a bad margin.
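The hyperplane decision rule and the role of gamma can be sketched as follows. This is a toy illustration with a hand-picked weight vector, not a trained SVM; the function names are our own:

```python
import math

def decision_value(w, b, x):
    """On which side of the hyperplane w·x + b = 0 does point x fall?"""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(w, b, x):
    """Assign the class by the sign of the decision value."""
    return 1 if decision_value(w, b, x) >= 0 else -1

def rbf_kernel(x, z, gamma):
    """Exponential (RBF) kernel: a low gamma lets a training example's
    influence reach 'far'; a high gamma keeps it 'near'."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

w, b = [1.0, -1.0], 0.0           # hypothetical separating hyperplane x1 - x2 = 0
label = classify(w, b, [2.0, 1.0])
```

In a trained SVM, w and b are determined by the support vectors and the C parameter; here they are chosen by hand purely to show the geometry of the decision rule.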

2.2 Feature Selection

Feature selection is a technique of utmost importance in the field of machine
learning. It demands a heuristic approach to find an optimal feature subset [13].
The technique generates a better subset of a given complex dataset by removing
the redundant features from it, which also significantly reduces the computational
complexity of the algorithm. There are brute-force methods for feature selection,
as well as forward selection and backward elimination techniques, but neither is
a great fit for large feature spaces [14]. Among the best approaches available
for feature selection are evolutionary and genetic algorithms.

2.3 Genetic Algorithm

Genetic algorithms belong to a class of experience-based search and optimization
strategies based on the Darwinian paradigm [15]. The natural selection process is
realized by simulating the evolution of the species. The algorithm is initialized
by creating strings with random contents, each string representing one member of
the population. Next, the fitness of each member of the population is calculated
as a measure of the degree of healthiness of that individual in the population
[16]. In the selection phase, members are chosen from the current population to
enter a mating pool and produce new individuals for the next generation, in such
a way that an individual's chance of selection is proportional to its relative
fitness. Crossover then combines the features of two parent individuals to form
two children that may exhibit new patterns compared to their parents [17].
Mutation is introduced to guard against premature convergence; its main purpose
is to maintain genetic diversity in the population. In the replacement step, the
parent population is completely replaced by the offspring. Finally, the genetic
algorithm terminates when a certain convergence criterion is met [18].
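The selection-crossover-mutation-replacement cycle described above can be sketched as a deliberately minimal GA. The toy fitness (count of 1-bits), the parameter values and the fixed generation limit are all assumptions made for the illustration:

```python
import random

def fitness(individual):
    """Toy fitness: number of 1-bits in the string."""
    return sum(individual)

def select(population, rng):
    """Fitness-proportional (roulette-wheel) selection into the mating pool."""
    weights = [fitness(ind) + 1 for ind in population]   # +1 avoids all-zero weights
    return rng.choices(population, weights=weights, k=len(population))

def crossover(a, b, rng):
    """Single-point crossover producing two children."""
    point = rng.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(ind, rate, rng):
    """Flip each gene with a small probability to keep genetic diversity."""
    return [1 - g if rng.random() < rate else g for g in ind]

rng = random.Random(1)
population = [[rng.randint(0, 1) for _ in range(10)] for _ in range(20)]
for _ in range(30):                       # fixed generation limit as the stop criterion
    pool = select(population, rng)
    children = []
    for i in range(0, len(pool) - 1, 2):
        c1, c2 = crossover(pool[i], pool[i + 1], rng)
        children += [mutate(c1, 0.01, rng), mutate(c2, 0.01, rng)]
    population = children                 # replacement: offspring replace the parents
best = max(population, key=fitness)
```

Each loop iteration mirrors one generation of the text's description: selection into a mating pool, pairwise crossover, mutation, and full replacement of the parents.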

3 Methodology

3.1 Proposed Chicken Swarm Optimisation

The popular Chicken Swarm Optimization algorithm has been applied to the publicly
available Cervical Cancer (Risk Factors) and Breast Cancer (Wisconsin) datasets
to address the problem of feature selection and to spot the possibility of cancer
at an early stage. Four well-known machine learning methods, k-NN, SVM, C4.5 and
Naïve Bayes [10], have been compared, together with the algorithms from various
other papers [11, 12]. The performance of the proposed method has been estimated
using four machine learning models: k-NN, SVM, Decision Tree and Random Forest.
The implementation has been carried out using Python and its libraries.
Before going further into the algorithm, we review the equations used in the
proposed Chicken Swarm Optimisation for calculating the fitness at the various
positions during the algorithm. The position-update equation for a rooster is:

    x_{i,j}^{t+1} = x_{i,j}^{t} * (1 + Randn(0, σ²))                              (1)

    σ² = 1,                                if f_i ≤ f_k,
         exp((f_k − f_i) / (|f_i| + ε)),   otherwise,    k ∈ [1, N], k ≠ i        (2)

where Randn(0, σ²) is a Gaussian distribution with mean value 0 and standard
deviation σ². The position-update equations for a hen are:

    x_{i,j}^{t+1} = x_{i,j}^{t} + S1 * Rand * (x_{r1,j}^{t} − x_{i,j}^{t})
                                + S2 * Rand * (x_{r2,j}^{t} − x_{i,j}^{t})        (3)

where

    S1 = exp((f_i − f_{r1}) / (abs(f_i) + ε))                                     (4)

and

    S2 = exp(f_{r2} − f_i)                                                        (5)

Here Rand is a uniform random number over [0, 1]; r1 ∈ [1, N] is the index of
the rooster that is the ith hen's group member, while r2 ∈ [1, N] is the index of
a chicken (rooster or hen) chosen arbitrarily from the swarm. The position-update
equation for a chick is:

    x_{i,j}^{t+1} = x_{i,j}^{t} + FL * (x_{m,j}^{t} − x_{i,j}^{t})                (6)

where x_{m,j}^{t} stands for the position of the ith chick's mother.
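Equations (1)-(6) translate almost directly into code. The sketch below is a hedged rendition of the three update rules; the function names, the value of ε and the treatment of FL as a caller-supplied step size are our own assumptions:

```python
import math
import random

EPS = 1e-12  # the small ε in Eqs. (2) and (4); exact value is an assumption

def rooster_update(x, f_i, f_k, rng):
    """Eqs. (1)-(2): rooster position update (σ² used as the Gaussian spread,
    following the wording of the text)."""
    if f_i <= f_k:
        sigma2 = 1.0
    else:
        sigma2 = math.exp((f_k - f_i) / (abs(f_i) + EPS))
    return [xj * (1.0 + rng.gauss(0.0, sigma2)) for xj in x]

def hen_update(x, x_r1, x_r2, f_i, f_r1, f_r2, rng):
    """Eqs. (3)-(5): a hen follows its group rooster (r1) and a random chicken (r2)."""
    s1 = math.exp((f_i - f_r1) / (abs(f_i) + EPS))
    s2 = math.exp(f_r2 - f_i)
    return [xj + s1 * rng.random() * (a - xj) + s2 * rng.random() * (b - xj)
            for xj, a, b in zip(x, x_r1, x_r2)]

def chick_update(x, x_mother, fl):
    """Eq. (6): a chick follows its mother with step size FL."""
    return [xj + fl * (m - xj) for xj, m in zip(x, x_mother)]
```

With fl = 1 a chick jumps exactly onto its mother's position; values between 0 and 2 interpolate or overshoot, which is the usual range for this parameter in CSO.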


The algorithm for the proposed Chicken Swarm Optimisation and its flow chart
diagram can be found below.
Algorithm: Chicken Swarm Optimisation (CSO)
Input: Dataset
Output: Subset of selected features
1. Initialize the population of chickens.
2. Associate with each chicken a sample array of features chosen randomly from
the dataset for feature selection.
3. Evaluate each chicken's fitness value using its sample array; set t = 0.
4. While (t < MG), where MG = Max Generations:
5.   If (t % G is 0)
6.     Sort the chickens according to their fitness values and set up a hierarchy
       among them.
7.     Randomly divide the swarm into groups of roosters, hens and chicks, and
       establish the mother-child relationship in each brood.
8.   End if
9.   For i from 1 to N:
10.    If i is R (rooster)
11.      Update its location using Eqs. (1) and (2).
12.    End if
13.    If i is H (hen)
14.      Update its location using Eqs. (3), (4) and (5).
15.    End if
16.    If i is C (chick)
17.      Update its location using Eq. (6).
18.    End if
19.    For j = 1 to Number_of_Features:
20.      Let R(0,1) be a random real number between 0 and 1.
21.      If sigmoid(Updated_Position) > R(0,1)
22.        Put 1 at position 'j' in the corresponding sample array.
23.      Else
24.        Put 0 at position 'j' in the corresponding sample array.
25.      End if
26.    End for
27.    If the new subset of features gives better fitness than the previous one
28.      Update the sample array.
29.    End if
30.  End for
31. End while
32. Evaluate the final fitness using machine learning algorithms.
33. Calculate the accuracy of the results.
34. Compare the results with k-NN and Random Forests.
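The generational loop of the listing (from the While down to End while) has roughly the following shape. This is a structural sketch only: `update_position`, `binarize` and `evaluate` are stand-ins for the update equations, the sigmoid thresholding and the ML-based fitness evaluation, and the role-assignment fractions are assumptions:

```python
import random

def cso_feature_selection(n_chickens, n_features, max_gen, reorder_every,
                          update_position, binarize, evaluate, rng=None):
    """Skeleton of the CSO loop: periodic reordering, per-role updates, greedy keep."""
    rng = rng or random.Random(0)
    positions = [[rng.uniform(-1, 1) for _ in range(n_features)]
                 for _ in range(n_chickens)]
    masks = [binarize(p, rng) for p in positions]
    fits = [evaluate(m) for m in masks]
    roles = ["rooster"] * n_chickens            # placeholder until the first reorder
    for t in range(max_gen):
        if t % reorder_every == 0:              # rebuild the hierarchy
            order = sorted(range(n_chickens), key=lambda i: fits[i], reverse=True)
            ranks = [order.index(i) for i in range(n_chickens)]
            roles = ["rooster" if r < n_chickens // 5 else
                     "hen" if r < 4 * n_chickens // 5 else "chick"
                     for r in ranks]
        for i in range(n_chickens):
            candidate = update_position(positions[i], roles[i], rng)
            mask = binarize(candidate, rng)     # sigmoid thresholding step
            fit = evaluate(mask)
            if fit > fits[i]:                   # keep only improving subsets
                positions[i], masks[i], fits[i] = candidate, mask, fit
    best = max(range(n_chickens), key=lambda i: fits[i])
    return masks[best], fits[best]
```

A caller would plug in the update rules of Eqs. (1)-(6) for `update_position` and a classifier-accuracy function for `evaluate`; any stubs with matching signatures exercise the control flow.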
The flowchart of the CSO is shown in Fig. 5.

3.2 Implementation of the Proposed Method

In this section, the experimental setup, parameters, datasets and implementation
of the proposed approach are discussed.

3.2.1 Experimental Setup

The code was executed and tested on Google Colaboratory with the following
notebook settings:
• Runtime type: Python 3
• Hardware accelerator: None
Google Colab is a free cloud service that can provide access to a Tesla K80 GPU.
Python libraries such as PyTorch, NumPy, Matplotlib, pandas and scikit-learn were used.

3.2.2 Parameters

CSO contains six parameters. As the chicken is primarily kept as a food source,
and only hens lay eggs, which are a further source of food, keeping hens is more
favourable for humans; thus the hen parameter is set greater than the rooster
parameter. Considering individual differences, not every hen lays eggs at the
same time, which is why the hen parameter is also bigger than the mother-hen
parameter. We also assume that the adult chicken population surpasses

Fig. 5 Algorithm of the proposed chicken swarm optimisation (CSO)



that of the chicks, i.e. the chick parameter. As for the swarm size, it should be
neither too big nor too small; after many tests, values between 5 and 30 were
found to generate the best results.
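The ordering constraints discussed above might be encoded as follows. The specific fractions are illustrative assumptions; the chapter fixes only the orderings and the swarm-size range:

```python
# Illustrative CSO parameter choice (the fractions are assumptions, not the
# chapter's exact settings; only the inequalities below come from the text).
SWARM_SIZE = 20            # chosen in the recommended range [5, 30]
ROOSTER_FRACTION = 0.2     # share of the swarm that are roosters
HEN_FRACTION = 0.6         # hens outnumber roosters
MOTHER_FRACTION = 0.3      # mother hens, fewer than hens overall
CHICK_FRACTION = 1.0 - ROOSTER_FRACTION - HEN_FRACTION   # the rest are chicks

# Sanity checks mirroring the constraints in the text:
assert 5 <= SWARM_SIZE <= 30
assert HEN_FRACTION > ROOSTER_FRACTION                   # more hens than roosters
assert HEN_FRACTION > MOTHER_FRACTION                    # not every hen is a mother
assert ROOSTER_FRACTION + HEN_FRACTION > CHICK_FRACTION  # adults outnumber chicks
```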

3.2.3 Datasets

In this paper, the Cervical Cancer (Risk Factors) dataset and the Wisconsin
Diagnostic Breast Cancer (WDBC) dataset, both publicly available at the UCI
machine learning repository, are passed through the proposed Chicken Swarm
Optimisation algorithm. A detailed explanation of these datasets follows.

Breast Cancer (Wisconsin)

The Wisconsin Diagnostic Breast Cancer (WDBC) dataset is publicly available at
the UCI machine learning repository [36]. The dataset was donated by Nick Street
and was compiled by Dr. William H. Wolberg, General Surgery Dept., University of
Wisconsin, Clinical Sciences Center, Madison, WI 53792; W. Nick Street, Computer
Sciences Dept., University of Wisconsin, 1210 West Dayton St., Madison, WI 53706;
and Olvi L. Mangasarian, Computer Sciences Dept., University of Wisconsin, 1210
West Dayton St., Madison, WI 53706, in November 1995. The dataset was first used
in the publication [37].
The features in the selected dataset are computed from a digitized image of a
fine needle aspirate (FNA) of a breast mass; they describe characteristics of the
cell nuclei present in the image. The Multisurface Method-Tree (MSM-T), a
classification method that uses linear programming to construct a decision tree,
was used to obtain a separating plane [38]. An exhaustive search in the space of
one to three separating planes and one to four features was used to extract the
relevant features. The article [39] describes the actual linear program used to
obtain the separating plane in the 3-dimensional space. The selected dataset has
also been used in the publications [40, 41]. This dataset focuses solely on the
prediction of the indicators and diagnosis of breast cancer.
The WDBC dataset has multivariate characteristics, and classification is the main
task associated with it. There are 569 instances in the dataset and 32 attributes
(ID, diagnosis, and 30 real-valued features). The mean, standard error, and
"worst" or largest (mean of the three largest values) of each base feature were
computed for each image, resulting in 30 features. All the features are recorded
with four significant digits. The selected

Table 1 Attribute information of breast cancer (Wisconsin) dataset

Feature                 | Type    | Feature                  | Type
ID                      | Integer | Smoothness se            | Integer
Diagnosis               | Boolean | Compactness se           | Integer
Radius mean             | Integer | Concavity se             | Integer
Texture mean            | Integer | Concave points se        | Integer
Perimeter mean          | Integer | Symmetry se              | Integer
Area mean               | Integer | Fractal dimension se     | Integer
Smoothness mean         | Integer | Radius worst             | Integer
Compactness mean        | Integer | Texture worst            | Integer
Concavity mean          | Integer | Perimeter worst          | Integer
Mean concave points     | Integer | Worst area               | Integer
Mean symmetry           | Integer | Worst smoothness         | Integer
Mean fractal dimension  | Integer | Worst compactness        | Integer
Radius se               | Integer | Concavity worst          | Integer
Texture se              | Integer | Concave points worst     | Integer
Perimeter se            | Integer | Symmetry worst           | Integer
Area se                 | Integer | Fractal dimension worst  | Integer

dataset contains no missing values, and the class is distributed as 357 benign
and 212 malignant instances. The attribute information is shown in Table 1.

Cervical Cancer (Risk Factors)

The Cervical Cancer (Risk Factors) dataset is publicly available at the UCI
machine learning repository [42]. The dataset was collected at 'Hospital
Universitario de Caracas' in Caracas, Venezuela, and comprises demographic
information, historical medical records, and habits of 858 patients. It contains
several unknown values, because many patients decided not to answer
privacy-related questions (missing values). The dataset has also been used by
[43]. It focuses solely on the prediction of the indicators and diagnosis of
cervical cancer; the features cover the demographic information, historical
medical records and habits best suited for predicting cervical cancer at an
early stage.
The Cervical Cancer (Risk Factors) dataset has multivariate characteristics, and
classification is the main task associated with it. There are 858 instances in
the dataset with 36 real- and integer-valued attributes, all recorded with four
significant digits. The selected dataset contains a few missing values. The
attribute information is shown in Table 2, and the comparison between the chosen
datasets is shown in Table 3.

Table 2 Attribute information of cervical cancer (risk factors) dataset

Feature                              | Type    | Feature                            | Type
Age                                  | Integer | STDs: pelvic inflammatory disease  | Boolean
Number of sexual partners            | Integer | STDs: genital herpes               | Boolean
First sexual intercourse (age)       | Integer | STDs: molluscum contagiosum        | Boolean
Number of pregnancies                | Integer | STDs: AIDS                         | Boolean
Smokes                               | Boolean | STDs: HIV                          | Boolean
Smokes (years)                       | Boolean | STDs: Hepatitis B                  | Boolean
Smokes (packs/year)                  | Boolean | STDs: HPV                          | Boolean
Hormonal contraceptives              | Boolean | STDs: number of diagnosis          | Integer
Hormonal contraceptives (years)      | Integer | STDs: time since first diagnosis   | Integer
IUD                                  | Boolean | STDs: time since last diagnosis    | Integer
IUD (years)                          | Integer | Dx: cancer                         | Boolean
STDs                                 | Boolean | Dx: CIN                            | Boolean
STDs (number)                        | Boolean | Dx: HPV                            | Boolean
STDs: condylomatosis                 | Boolean | Dx                                 | Boolean
STDs: cervical condylomatosis        | Boolean | Hinselmann: target variable        | Boolean
STDs: vaginal condylomatosis         | Boolean | Schiller: target variable          | Boolean
STDs: vulvo-perineal condylomatosis  | Boolean | Cytology: target variable          | Boolean
STDs: syphilis                       | Boolean | Biopsy: target variable            | Boolean

Table 3 Comparison of the selected datasets

Information               | Cervical cancer | Breast cancer
Data set characteristics  | Multivariate    | Multivariate
Attribute characteristics | Integer, real   | Real
Associated tasks          | Classification  | Classification
Number of instances       | 858             | 569
Number of attributes      | 36              | 32
Missing values?           | Yes             | No
Area                      | Life            | Life

Heat Map Representation

A heat map is a pictorial representation of data in matrix form, in which the
colour intensity of each cell encodes the data: the higher the intensity of the
colour in the heat map, the more significant the data. Here, we have computed the
correlation matrix and plotted it as a heat map to give a better estimate of how
the various attributes/features are correlated with each other and of what impact
one attribute's value has on another. The higher the intensity of the cell
corresponding to a pair of features, the more correlated those features are [44].
Note that correlation is the statistical parameter that shows how two given
parameters depend on each other and to what extent they affect each other. The
correlation value between each possible combination of parameters is calculated
with the "Seaborn" library, and colours corresponding to those values are plotted
in the heat map. The heat maps for the attributes in the Breast Cancer and
Cervical Cancer datasets are shown in Figs. 6 and 7; the correlation values have
been scaled to the range 0 to 1 to make comparisons easy.
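The underlying computation, including a rescaling to [0, 1], can be sketched dependency-free as follows. In our pipeline the correlation matrix and heat map came from pandas/Seaborn; the (r + 1)/2 rescaling below is one plausible reading of "scaled to the range 0 to 1" and is an assumption:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

def scaled_correlation_matrix(columns):
    """Matrix of pairwise correlations, rescaled from [-1, 1] to [0, 1]."""
    return [[(pearson(a, b) + 1.0) / 2.0 for b in columns] for a in columns]

cols = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]   # toy feature columns
matrix = scaled_correlation_matrix(cols)
# A perfectly correlated pair maps to 1.0, a perfectly anti-correlated pair to 0.0.
```

Passing such a matrix to a plotting routine (e.g. Seaborn's heat map) then yields cells whose colour intensity tracks the pairwise correlation, as described above.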

3.2.4 Implementation

First, the population of the chicken swarm was initialized with random values,
and features were selected randomly and stored in a sample array corresponding to
each chicken. We then evaluated the fitness value of each chicken, set up a
hierarchy among them, and divided them into groups, with each group having one
rooster, two hens and two chicks. The mother-child relationship was established
between hens and chicks.
Each chicken's position is then updated using Eqs. (1)-(6), according to its
place in the hierarchy.

Fig. 6 Heat map of the dataset correlation (cervical cancer)

Then, for each feature, we pass the new position through a sigmoid function and
compare the result with a random real value between 0 and 1. Note that the
sigmoid function S(x) is given as:

    S(x) = 1 / (1 + e^(-x)) = e^x / (1 + e^x)                                    (13)

Fig. 7 Heat map of the dataset correlation (breast cancer)

If the result is greater than the random value, we assign the value 1 at the
corresponding position in the sample array; otherwise we assign 0. Here, 1 means
the feature is accepted and 0 means the feature is not accepted.
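This thresholding step, using the sigmoid of Eq. (13), can be written as:

```python
import math
import random

def sigmoid(x):
    """Eq. (13): S(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def binarize(position, rng):
    """1 = feature accepted, 0 = feature rejected, per the rule above."""
    return [1 if sigmoid(p) > rng.random() else 0 for p in position]

rng = random.Random(0)
mask = binarize([4.0, -4.0, 0.0], rng)   # large positive entries are usually kept
```

Because the comparison is against a fresh random number per feature, the mapping is stochastic: strongly positive positions are accepted with high probability rather than deterministically.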
After forming the new subset of features, we take those features and use machine
learning algorithms (here, Random Forest and k-NN) to check the accuracy. If the
accuracy is better than the previous value, we update the position and the sample
array; otherwise we reiterate. The dataset was divided into testing and training
data in the ratio of 20:80, as shown in Fig. 8.
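The 20:80 split and the wrapper-style accuracy check might look like the sketch below. The classifier itself is abstracted away behind `fit_and_score` (in the chapter it is Random Forest or k-NN); the function names and the toy data are assumptions:

```python
import random

def train_test_split_rows(rows, test_fraction=0.2, seed=0):
    """Shuffle and split rows into train and test in the 80:20 ratio used here."""
    rng = random.Random(seed)
    shuffled = rows[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * test_fraction)
    return shuffled[cut:], shuffled[:cut]        # (train, test)

def subset_accuracy(mask, train, test, fit_and_score):
    """Wrapper fitness: accuracy of a model trained on the selected columns only."""
    cols = [j for j, bit in enumerate(mask) if bit == 1]
    project = lambda rows: [([r[0][j] for j in cols], r[1]) for r in rows]
    return fit_and_score(project(train), project(test))

rows = [([i, i % 3], i % 2) for i in range(100)]   # toy (features, label) rows
train, test = train_test_split_rows(rows)
assert len(train) == 80 and len(test) == 20
```

Each candidate feature mask produced by the swarm is scored this way, and only masks that improve the accuracy are kept, which is the greedy update described above.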

Fig. 8 Implementation for detection and prognosis of cancer

4 Results and Discussions

The results obtained after passing the selected cancer datasets, i.e. the
Cervical Cancer (Risk Factors) dataset and the Breast Cancer (Wisconsin) dataset,
through the proposed Chicken Swarm Optimization method are discussed in this
section. After applying the proposed CSO to the selected cancer datasets, the
quality of the extracted/selected features was measured and evaluated using four
machine learning algorithms, i.e. Decision Trees, k-nearest neighbours (k-NN),
Support Vector Machine (SVM) and Random Forests, giving accuracies of 99.48,
97.82, 98 and 99.53% respectively for the Cervical Cancer (Risk Factors) dataset,
and 99.21, 98.54, 98.54 and 99.76% respectively for the Breast Cancer (Wisconsin)
dataset, as shown in Fig. 9. The proposed Chicken Swarm Optimisation algorithm
also reduced the computation time of the prediction, as the results were
calculated within a few seconds. The number of features selected by the proposed
algorithm was also favourable: 15 features out of 32 for the Cervical Cancer
(Risk Factors) dataset and 14 features out of 32 for the Breast Cancer
(Wisconsin) dataset.

5 Comparison

In this section, the proposed Chicken Swarm Optimization is compared with various
other studies on the detection of cancer.

Fig. 9 Accuracy comparison for different ML algorithms for evaluating the results

5.1 Cervical Cancer (Risk Factors)

In 2018, Yasha Singh, Dhruv Shrivatsva, P. S. Chand and Surrinder Singh published
a paper [45] in which they compared, in chronological order, the various recent
algorithms for the screening of cervical cancer. The comparison with this study
is shown in Fig. 10. In 2017,

Fig. 10 Accuracy comparison with other algorithms in the study shown



Fig. 11 Accuracy comparison with other algorithms in the study shown

Muhammed Fahri Unlersen, Kadir Sabanci and Muciz Ozcan [46] proposed machine
learning methods, namely k-NN and MLP, for feature selection in determining the
possibility of cervical cancer. This comparison is shown in Fig. 11.

5.2 Breast Cancer (Wisconsin)

In 2016, Hiba Asri, Hajar Mousannif, Hassan Al Moatassime and Thomas Noel
published a paper [10] in which the machine learning methods k-NN, C4.5, Naïve
Bayes and Support Vector Machine (SVM) were used for feature selection in
determining the possibility of breast cancer. The comparison with this study is
shown in Fig. 12. The results from other studies [11, 12] were also observed and
compared with the results of the proposed Chicken Swarm Optimisation for the
prediction of breast cancer by feature selection; the comparisons from these
studies are shown in Fig. 13.
The proposed Chicken Swarm Optimisation achieves the best accuracy of 99.53% for
feature selection on the Cervical Cancer (Risk Factors) dataset [42] and the best
accuracy of 99.76% for feature selection on the selected Breast Cancer
(Wisconsin) dataset [36], with a comparatively fast computation time of a few
seconds.
The proposed Chicken Swarm Optimisation clearly outperforms the basic ML
algorithms; it even outperforms all the algorithms shown in Figs. 9 and 13. It is
also shown above that the Chicken Swarm algorithm outperforms the other

Fig. 12 Accuracy comparison with other algorithms in the study shown

Fig. 13 Accuracy comparison with other algorithms in the study shown

algorithms at selecting features, without causing any harm to the accuracy of the
original results. It can therefore be argued that the Chicken Swarm Optimisation
algorithm for feature selection can be applied in various practical applications,
and that the proposed algorithm will play a very beneficial role in the
prediction of cancer at an early stage.

Table 4 Accuracy comparison for the selected datasets

Machine learning method | Cervical cancer (%) | Breast cancer (%)
Random forest           | 99.53               | 99.76
k-NN                    | 97.82               | 98.54
Decision tree           | 97.48               | 99.21
SVM                     | 98                  | 98.54

6 Conclusions and Future Works

In this paper, feature selection by the Chicken Swarm Optimization algorithm has
been explained. A better subset of features was found by applying the Chicken
Swarm Optimization algorithm, and a competitive accuracy was obtained without any
harm to performance; it showed better results than the other machine learning
models discussed in this paper. The models, the respective accuracies obtained,
and the validation of the results using different ML algorithms are given in
Table 4.
The results above clearly demonstrate the superiority of performing feature
selection with Chicken Swarm Optimization over the other algorithms discussed
above, so the proposed algorithm can now be applied by researchers for feature
selection for the early detection of cancers. With early prediction we can also
provide treatments using virtual reality simulators, which can significantly
reduce the complexity of surgical procedures; low-cost VR can be a very effective
tool and also helps surgeons learn complex surgical oncology procedures in a
short period of time.
The datasets used in this paper are the Cervical Cancer (Risk Factors) dataset
and the Wisconsin Diagnostic Breast Cancer (WDBC) dataset. In further studies,
these datasets can also be used with the various feature selection algorithms
that emerge in the future [47]. Chicken swarm optimization is highly suitable as
a feature selection algorithm. It can also be used for prediction on various
other cancer datasets, as it achieves very good optimisation results and
robustness, and it can be compared, in terms of accuracy, with other bio-inspired
feature selection algorithms proposed in the future.

References

1. Hanahan, D., & Weinberg, R. A. (2011). Hallmarks of cancer: The next generation. Cell, 144,
646–674.
2. Ferlay, J., Soerjomataram, I., Ervik, M., Dikshit, R., Eser, S & Mathers, C. (2013) GLOBOCAN
2012 v1.0, cancer incidence and mortality worldwide International Agency for Research on
Cancer. IARC CancerBase no 11. Lyon; France. Accessed January 1, 2016.
3. World Cancer Report. (2008). International agency for research on cancer. Retrieved February
26, 2011.

4. Polley, M. Y. C., Freidlin, B., Korn, E. L., Conley, B. A., Abrams, J. S., & McShane, L. M.
(2013). Statistical and practical considerations for clinical evaluation of predictive biomarkers.
Journal of the National Cancer Institute, 105, 1677–1683.
5. Cruz, J. A., & Wishart, D. S. (2006). Applications of machine learning in cancer prediction
and prognosis. Cancer Informatics, 2, 59.
6. Parham, G., Bing, E. G., Cuevas, A., Fisher, B., Skinner, J., Mwanahamuntu, M., & Sullivan,
R. (2019). Creating a low-cost virtual reality surgical simulation to increase surgical oncology
capacity and capability. Ecancermedicalscience 13, 910.
7. Katic, D., Wekerle, A. L., Gortler, J., et al. (2013). Context-aware augmented reality in laparo-
scopic surgery. Computerized Medical Imaging and Graphics, 37(2), 174–182. https://doi.org/
10.1016/j.compmedimag.2013.03.003.
8. Tyrer, J., Duffy, S. W., & Cuzick, J. (2004). A breast cancer prediction model incorporating
familial and personal risk factors. Stat Med, 23(7), 1111–1130.
9. Moyer, V. A. (2013). Medications to decrease the risk for breast cancer in women: Recom-
mendations from the U.S. preventive services task force recommendation statement. Annals of
Internal Medicine, 159(10), 698–708.
10. Asri, H., Mousannif, H., Al Moatassime, H., & Noel, T. (2016) Using machine learning
algorithms for breast cancer risk prediction and diagnosis. Procedia Computer Science, 83,
1064–1069. ISSN 1877-0509.
11. Ahmad, L. G., Eshlaghy, A. T., Poorebrahimi, A., Ebrahimi, M., & Razavi, A. R. (2013). Using
three machine learning techniques for predicting breast cancer recurrence. Journal of Health
and Medical Informatics, 4, 124. https://doi.org/10.4172/2157-7420.1000124.
12. Mihaylov, I., Nisheva, M., & Vassilev, D. (2019). Application of machine learning models for
survival prognosis in breast cancer studies. Information, 10, 93.
13. Ramaswami, M., & Bhaskaran, R. (2009). A study on feature selection techniques in
educational data mining. arXiv preprint arXiv:0912.3924.
14. Liu, H., & Yu, L. (2005). Toward integrating feature selection algorithms for classification and
clustering. IEEE Transactions on Knowledge and Data Engineering, 17(3), 1–12.
15. Genetic Algorithm. https://www.geeksforgeeks.org/genetic-algorithms/.
16. Eiben, A. E., & Smith, J. E. (2003). Introduction to evolutionary computing (Vol. 53). Berlin:
Springer.
17. Abo-Hammour, Z. S., Alsmadi, O. M., & Al-Smadi, A. M. (2011). Frequency-based model
order reduction via genetic algorithm approach. In 7th International Workshop on Systems,
Signal Processing and their Applications (WOSSPA).
18. Mohamed, K. S. (2018) Bio-inspired machine learning algorithm: Genetic algorithm. In
Machine learning for model order reduction (pp 19–34). Cham: Springer.
19. Xue, B., Zhang, M., Browne, W. N., & Yao, X. (2016). A survey on evolutionary computation
approaches to feature selection. IEEE Transactions on Evolutionary Computation, 20(4), 606–
626.
20. Evolutionary Algorithm as Feature Selection. https://www.kdnuggets.com/2017/11/
rapidminer-evolutionary-algorithms-feature-selection.html.
21. Meng, X. B., Yu, L., Gao, X., & Zhang, H. (2014). A new bio-inspired algorithm: Chicken
swarm optimization. pp. 86–94. https://doi.org/10.1007/978-3-319-11857-4_10.
22. Das, S., & Suganthan, P. N. (2011). Differential evolution: A survey of the state-of-the-art.
IEEE Transactions on Evolutionary Computation, 15(1), 4–31.
23. Yang, X. S. (2013). Bat algorithm: Literature review and applications. International Journal
of Bio-Inspired Computation, 5(3), 141–149.
24. Gandomi, A. H., & Alavi, A. H. (2012). Krill herd: A new bio-inspired optimization algorithm.
Communications in Nonlinear Science and Numerical Simulation, 17, 4831–4845.
25. Cuevas, E., Cienfuegos, M., Zaldivar, D., & Cisneros, M. (2013). A swarm optimization algo-
rithm inspired in the behavior of the social-spider. Expert Systems with Applications, 40,
6374–6384.

26. Kumar, S., Nayyar, A., & Kumari, R. (2019). Arrhenius artificial bee colony algorithm. In S.
Bhattacharyya, A. Hassanien, D. Gupta, A. Khanna, I. Pan (Eds.) International conference on
innovative computing and communications. Lecture notes in networks and systems (Vol. 56)
Singapore: Springer.
27. Wang, J., Neskovic, P., & Cooper, L. N. (2007). Improving nearest neighbor rule with a simple
adaptive distance measure. Pattern Recognition Letters, 28(2), 7.
28. Zhou, Y., Li, Y., & Xia, S. (2009). An improved KNN text classification algorithm based on
clustering. Journal of Computers, 4(3), 8.
29. KNN. https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm.
30. Decision Trees. https://en.wikipedia.org/wiki/C4.5_algorithm.
31. Quinlan, J. R. (2014). C4.5: Programs for machine learning (Vol. 302). https://books.google.
com/books?hl=fr&lr=&id=b3ujBQAAQBAJ&pgis=1. Accessed January 5, 2016.
32. Random Forests. https://en.wikipedia.org/wiki/Random_forest.
33. Berk, R. A. (2016). Random forests, statistical learning from a regression perspective (pp. 205–
258). Cham: Springer.
34. Noble, W. S. (2006). What is a support vector machine? Nature Biotechnology, 24(12), 1565–
1567. https://doi.org/10.1038/nbt1206-1565.
35. Support Vector Machine. https://en.wikipedia.org/wiki/Support-vector_machine.
36. Wisconsin Diagnostic Breast Cancer Dataset. http://archive.ics.uci.edu/ml/datasets/breast+
cancer+wisconsin+%28diagnostic%29.
37. Street, W. N., Wolberg, W. H., & Mangasarian, O. L. (1993) Nuclear feature extraction for
breast tumor diagnosis. In IS&T/SPIE 1993 International Symposium on Electronic Imaging:
Science and Technology (Vol. 1905, pp. 861–870), San Jose, CA.
38. Bennett, K. P. (1992) Decision tree construction via linear programming. In Proceedings of the
4th Midwest Artificial Intelligence and Cognitive Science Society, pp. 97–101.
39. Bennett, K. P., & Mangasarian, O. L. (1992). Robust linear programming discrimination of
two linearly inseparable sets. Optimization methods and software, 1, 23–34.
40. Antos, A., Kégl, B., Linder, T., & Lugosi, G. (2002). Data-dependent margin-based general-
ization bounds for classification. Journal of Machine Learning Research, 3, 73–98.
41. Bradley, P. S., Bennett, K. P., & Demiriz, A. (2000). Constrained k-means clustering. Microsoft
Res Redmond (Microsoft Research Dept. of Mathematical Sciences One Microsoft Way Dept.
of Decision Sciences and Eng. Sys).
42. Cervical cancer Dataset. https://archive.ics.uci.edu/ml/datasets/Cervical+cancer+%28Risk+
Factors%29.
43. Fernandes, K., Cardoso, J. S., & Fernandes, J. (2017). Transfer learning with partial observ-
ability applied to cervical cancer screening. In Iberian conference on pattern recognition and
image analysis. Cham: Springer.
44. Heat Map. https://en.wikipedia.org/wiki/Heat_map.
45. https://arxiv.org/pdf/1811.00849.pdf.
46. Ünlerşen, Muhammed, Sabanci, Kadir, & Ozcan, Muciz. (2017). Determining cervical cancer
possibility by using machine learning methods. International Journal of Recent Technology
and Engineering, 3, 65–71.
47. Dwivedi, R. K., Aggarwal, M., Keshari, S. K., & Kumar, A. (2019). Sentiment analysis and fea-
ture extraction using rule-based model (RBM). In S. Bhattacharyya, A. Hassanien, D. Gupta, A.
Khanna & Pan, I (Eds.) International conference on innovative computing and communications.
Lecture notes in networks and systems (Vol. 56). Singapore: Springer.
Computational Fluid Dynamics
Simulations with Applications in Virtual
Reality Aided Health Care Diagnostics

Vishwanath Panwar, Seshu Kumar Vandrangi, Sampath Emani, Gurunadh Velidi and Jaseer Hamza

Abstract Currently, medical scans yield large 3D data volumes. To analyze the
data, image processing techniques are worth employing. Also, the data could be
visualized to offer non-invasive and accurate 3D anatomical views regarding the
inside of patients. Through this visualization approach, several medical processes or
healthcare diagnostic procedures (including virtual reality (VR) aided operations)
can be supported. The main aim of this study has been to discuss and provide a
critical review of some of the recent scholarly insights surrounding the subject of
CFD simulations with applications in VR-aided health care diagnostics. The study’s
specific objective has been to unearth how CFD simulations have been applied to
different areas of health care diagnostics, with a focus on VR environments. Some
of the VR-based health care areas in which CFD simulations have gained
increasing application include medical device performance and diseases or health
conditions such as colorectal cancer, cancer of the liver, and heart failure. From the
review, an emerging theme is that CFD simulations form a promising path whereby
they sensitize VR operators in health care regarding some of the best paths that are
worth taking to minimize patient harm or risk. Hence, CFD simulations have paved
the way for VR operators to make more informed and accurate decisions regarding
disease diagnosis and treatment tailoring relative to the needs and conditions with
which patients present.

Keywords CFD · Health-care · Medicine · Virtual reality · Image-processing

V. Panwar
VTU-RRC, Belagavi, India
S. K. Vandrangi · J. Hamza
Department of Mechanical Engineering, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia
S. Emani (B)
Department of Chemical Engineering, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia
e-mail: sampath.evs@gmail.com
G. Velidi
University of Petroleum and Energy Studies, Bidholi, via Prem Nagar, Dehradun, Uttarakhand
248007, India

© Springer Nature Switzerland AG 2020


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_10

1 Introduction

Preoperative diagnostics, training, and medical education have experienced dramatic improvements due to advanced technologies. One of these technologies entails virtual
reality [1]. In some of the recent scholarly investigations, methods that could enable
students and doctors to manipulate and visualize healthcare diagnostic situations or
3D models arising from MRI and CT scans have been proposed [1, 2]. These methods
have also been used to analyze fluid flow simulation results [3]. A specific healthcare
example is a case in which the finite element approach has been employed to conduct
fluid flow simulation to compute artery wall shear stress, with the resultant virtual
reality systems making the education processes more effective and also shortening
the length of training programs in healthcare [4, 5].
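As a rough numerical anchor for the artery wall shear stress computation mentioned above, the idealized Hagen–Poiseuille relation can be sketched as follows; the viscosity, flow rate, and radius values are illustrative assumptions rather than data from the cited studies, and a genuine finite element simulation would compute this quantity as a field over the patient-specific vessel wall.

```python
import math

# Wall shear stress for fully developed laminar flow in a straight,
# rigid tube (Hagen-Poiseuille): tau_w = 4 * mu * Q / (pi * R^3).
# A patient-specific FE/CFD simulation resolves this over the whole
# vessel wall; this closed form is only an idealized sanity check.

def wall_shear_stress(mu, Q, R):
    return 4.0 * mu * Q / (math.pi * R**3)

mu = 0.0035   # blood dynamic viscosity, Pa*s (assumed)
Q = 5.0e-6    # volumetric flow rate, m^3/s (~5 mL/s, assumed)
R = 2.0e-3    # vessel radius, m (assumed)

tau = wall_shear_stress(mu, Q, R)   # a few pascals, a physiological magnitude
```

Values of this order (a few pascals) are what such simulations examine when screening for regions of abnormally low or high shear.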
Currently, medical scans yield large 3D data volumes [6]. To analyze the data,
image processing techniques are worth employing [4, 6]. Also, the data could be
visualized to offer non-invasive and accurate 3D anatomical views regarding the
inside of patients [7]. Through this visualization approach, several medical processes
or healthcare diagnostic procedures, including virtual reality (VR) aided operations,
can be supported. Some of these processes include surgical simulation, surgical train-
ing, surgical planning, quantitative measurement, and diagnosis [8]. Indeed, virtual
reality reflects a revolutionizing and new concept that has progressed in medical
diagnosis and reached a new level [8]. Through virtual reality simulations, such as
computational fluid dynamics (CFD)-based simulations, trainees acquire essential skills by practicing on virtual patients in controlled environments, free of pressure and with minimal supervision [9]. The implication is that virtual reality (VR) simulator skills could be exploited to achieve the desired health
care diagnostic outcomes in training rooms [8–10]. It is also notable that in medical
application contexts, VR gains its use in the better planning of surgery, enhanced
quantitative correlations, enhanced picture understanding, and enhanced picture con-
trol. The emerging theme is that VR has stretched beyond the beneficial effect of
helping patients to cope with surgery-related stress and paved the way for pain
reduction in healthcare settings [11, 12]. The main aim of this study is to discuss
and provide a critical review of some of the recent scholarly insights surrounding
the subject of CFD simulations with applications in VR-aided health care diagnostics. The study’s specific objective is to unearth how CFD simulations have been applied to different areas of health care diagnostics, with a focus on VR environments. Hence, the motivation is to determine how CFD simulations have supported
the realization of the intended goals of VR-aided health care diagnostics. Also, the
motivation of the study is to give insight into the extent to which CFD simulations,
upon utilization in VR-aided health care diagnostics settings, could aid in optimiz-
ing processes surrounding the treatment of diseases or health conditions with which
patients present.

2 A Discussion and Critical Review of CFD Simulations with Applications in VR-Aided Health Care Diagnostics

In the healthcare industry, VR has gained application in areas such as disease diagnosis, improving drug design processes, non-invasive breast screening, and virtual colonoscopy (in a quest to replace optical colonoscopy) [11, 13]. Some of the
specific beneficial effects accruing from VR implementation in the healthcare sec-
tor include cognitive rehabilitation, pain management, training doctors and nurses,
physical therapy, and addressing fears and phobias among patients [13–15].
One of the areas that have seen CFD simulations gain application in VR-aided
health care diagnostics entails the examination of blood movement or hemodynamic
indices relative to medical imaging data used to generate computer-based vascular
models [14]. To achieve this process, vascular models have been created and dis-
cretized into finite element meshes with millions of pieces. Also, rheological proper-
ties that include viscosity and density have been specified, with hemodynamic states
prescribed both at the vessel exit and entry; translating into boundary conditions
[16]. For the applicable governing equations, solutions have been achieved by using
high-performance computing [17, 18]. The objective of the simulations has been
to examine parameters such as the wall shear stress before predicting the possible
onset of cardiovascular disease progression, ensuring that through CFD simulations,
VR-aided health care diagnostics are supported [19] (Fig. 1).
Methodologically, such investigations have been conducted in four major stages.
Whereas the first stage has involved model creation and obtaining vessel morphol-
ogy [19], the second stage has been to apply simulation and boundary conditions
[20]. The third stage has centered on post-processing, culminating into the fourth
stage in the form of outcome presentation. Indeed, model creation and obtaining
vessel morphology in studies involving CFD simulations with applications in VR-
aided health care diagnostics imply that 3D medical imaging data with various 2D
planar images are obtained [21]. Before CFD simulation, boundary conditions are
then applied to the model, including time-varying blood flow waveforms. The role
of the post-processing stage is to ensure that the medical imaging data’s output for-
mat is converted (as well as the CFD solver) to obtain quantitative and volumetric
requirements of the intended display software [2].
Indeed, findings from the investigations suggest that through CFD simulations,
semi-automated workflows for integrating CFD capabilities specific to each patient
could be achieved, hence supporting VR in healthcare diagnostics.
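The four-stage workflow described above can be sketched as a simple pipeline. Every function below is a hypothetical placeholder standing in for a real segmentation, meshing, and solver toolchain, not an actual API from the cited studies; only the stage ordering and the rheological values follow the description in the text.

```python
# Sketch of the four-stage patient-specific CFD workflow described above
# (model creation, boundary conditions, solving, post-processing).
# All function names and return values are illustrative placeholders.

def create_model(imaging_data):
    """Stage 1: obtain vessel morphology from 3D medical imaging data
    and discretize it into a finite element mesh."""
    return {"geometry": imaging_data, "mesh_cells": 2_000_000}

def apply_boundary_conditions(model, density=1060.0, viscosity=0.0035):
    """Stage 2: specify rheological properties (blood density in kg/m^3,
    viscosity in Pa*s) and hemodynamic states at vessel entry/exit."""
    model.update(density=density, viscosity=viscosity,
                 inlet="time-varying flow waveform", outlet="resistance")
    return model

def solve(model):
    """High-performance solve of the governing equations (placeholder)."""
    model["wall_shear_stress"] = "field over mesh"
    return model

def post_process(model):
    """Stages 3-4: convert solver output to the volumetric format
    expected by the VR display software."""
    return {"wss": model["wall_shear_stress"], "format": "volumetric"}

result = post_process(solve(apply_boundary_conditions(create_model("CT stack"))))
```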
Another area where CFD simulation has been employed to support informed
decision-making regarding the use of VR in health care diagnostics involves nasal
airflow and how functional rhinosurgery treatment could be planned. Over time,
treatment methods in throat, nose, and ear surgery have improved [19] but how to
predict successful individual therapy remains a challenge [21, 22]. Therefore, air-
flow simulations have been conducted in a quest to support virtual rhinosurgery,
a VR-aided health care procedure. Particularly, the CFD-led airflow simulations
that have been conducted are those involving anatomies of nasal airways; including

Fig. 1 A flowchart representing CFD simulations in VR-aided health care diagnostics for
hemodynamic indices. Source Pareek et al. [1]

cases of paranasal and frontal sinuses [23]. Through complex airflow characteristics’
CFD simulations targeting individual anatomies, the pathophysiology and physiol-
ogy of nasal breathing have been studied [24, 25], ensuring that VR-aided health care
procedures are supported and planned accordingly; especially virtual rhinosurgery.
Methodologically, the data that has been employed in such simulations involves a
nasal airway reference model obtained from quasi ideal human anatomies; a specific

example being a helical CT scan [24, 25]. For numeric fluid computation, the phys-
ical modeling mechanisms have involved fundamental equations such as turbulence
equations, the energy equation, Reynolds-averaged Navier-Stokes equation, and the
conservation of mass or continuity equation [24, 26]. Specific variables or parameters
that have played a moderating effect in these equations include turbulence properties,
temperature, pressure, and velocity components [27]. The generalized form of these
conservation equations has been obtained in the form of the transport equation for the
respective transported and mass-related quantities (such as enthalpy and velocity) to
translate into:

$$\frac{\partial}{\partial t}(\rho\Phi) + V_j\,\frac{\partial}{\partial x_j}(\rho\Phi) = \Gamma\,\frac{\partial^2 \Phi}{\partial x_j^2} + S_\Phi$$

where the terms denote, from left to right, the time alteration rate, convective transport, diffusive transport, and production of the transported quantity $\Phi$.
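A minimal one-dimensional sketch of this generic transport equation can be written with explicit upwind convection and central diffusion; the grid, velocity, and diffusion coefficient are assumed illustrative values, not parameters from the cited nasal-airflow simulations, and periodic boundaries are used purely for simplicity.

```python
import numpy as np

# Explicit 1D solver for the generic transport equation
#   d(phi)/dt + V * d(phi)/dx = Gamma * d2(phi)/dx2 + S
# with first-order upwind convection (V > 0), central diffusion,
# and periodic boundaries (np.roll). All values are illustrative.

def advect_diffuse(phi, V, Gamma, dx, dt, steps, S=0.0):
    phi = phi.copy()
    for _ in range(steps):
        conv = -V * (phi - np.roll(phi, 1)) / dx
        diff = Gamma * (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
        phi = phi + dt * (conv + diff + S)
    return phi

n, dx = 100, 0.01
phi0 = np.zeros(n)
phi0[40:60] = 1.0                       # initial rectangular pulse
V, Gamma, dt = 0.5, 1e-3, 0.005

# stability of the explicit scheme: CFL and diffusion-number checks
assert V * dt / dx <= 1.0 and Gamma * dt / dx**2 <= 0.5

phi = advect_diffuse(phi0, V, Gamma, dx, dt, steps=50)
```

With zero source term the scheme conserves the total transported quantity, mirroring the conservation form of the equation above.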

From the results, the investigations reveal that when CFD simulations are used to
generate a “nose” model through which VR-aided health care procedures could be
used to study the pathophysiology and physiology of nasal breathing, the resultant
framework is stable and well suited for application to various collections of non-
pathologic and pathologic anatomies [27–30]. The implication for VR-aided health
care diagnostics is that this path of CFD simulations targeting individual nasal flows
supports virtual rhinosurgery in such a way that it enables the VR system users to
gain a deeper understanding of airflow in the nasal path, upon which potential airflow phenomena in nasal cavities could be predicted [27–30]. However, an emerging
dilemma is whether these results hold regardless of the potential moderating or pre-
dictive role of other factors that could be operating on the part of patients (such as
patients presenting with multiple conditions that could compromise the efficacy of
the CFD framework). How the latter dilemma could be addressed forms an additional
area of research interest.
Apart from nasal path airflow, CFD simulations seeking to support informed
decision-making in VR-aided health care diagnostics have been applied to multi-
scale lung ventilation modeling. In particular, the proposed CFD framework has
been that which could be employed in VR-aided health care settings in the form
of a ventilation model, having demonstrated how certain boundary conditions and
parenchyma or tree alterations influence lung behavior, as well as treatment efficiency
and the prediction of the impact of the pathologies [28, 31]. Therefore, the CFD
simulations seeking to support VR in examining lung ventilation have strived to
develop a tree-parenchyma coupled framework.
During the process of model development, the parenchyma has been investigated
in the form of an elastic homogenized medium, with the trachea-bronchial tree repre-
sented by a space-filling dyadic resistive pipe network responsible for irrigating the
parenchyma. The eventuality is that the parenchyma and the tree have been coupled,
with the chosen algorithm being that which takes advantage of the resultant tree struc-
ture and poses superiority in such a way that fast matrix-vector product computation
could be achieved [32–34]. Also, the proposed CFD framework seeking to support

VR-based examination of lung ventilation has been that which could be applied
in the modeling of both mechanically induced and free respiration [35]. Indeed, a
nonlinear Robin model and other boundary conditions have also been defined and
tested in these investigations. Apart from the tree-parenchyma model that treats the
parenchyma in the form of an elastic (continuous) model, another model that has
been investigated in similar settings involves the exit compartment model, which
perceives the parenchyma as that which constitutes sets of independent compliant
compartments, with individual compartments exhibiting unique compliance coeffi-
cients [36, 37]. It is also notable that the central assumption in these investigations
has been a case in which the airflow in the respective dyadic tree branches involves
model fluid dissipation and resistance [38]. The branching network of pipes has been
developed to represent bronchial trees through which inhaled air is received by the
lung tissues [39, 40] (Fig. 2).
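The resistive behavior of such a space-filling dyadic pipe network can be illustrated with a Poiseuille-resistance sketch. The branch geometry, per-generation scaling ratio, and generation count below are assumed for illustration only and are not taken from the cited tree-parenchyma models.

```python
import math

# Illustrative Poiseuille resistance of a symmetric dyadic airway tree.
# Each branch: R = 8 * mu * L / (pi * r^4); the two identical child
# subtrees act in parallel, halving their combined resistance.

MU_AIR = 1.8e-5  # dynamic viscosity of air, Pa*s

def branch_resistance(length, radius, mu=MU_AIR):
    return 8.0 * mu * length / (math.pi * radius**4)

def tree_resistance(length, radius, generations, scale=0.79):
    """Total resistance of a symmetric dyadic tree (assumed scaling)."""
    r = branch_resistance(length, radius)
    if generations == 1:
        return r
    child = tree_resistance(length * scale, radius * scale, generations - 1)
    return r + child / 2.0   # two identical subtrees in parallel

# trachea-like root branch (assumed dimensions), 10 generations
total = tree_resistance(0.12, 0.009, 10)
```

Because the radius enters the resistance to the fourth power, narrowing a branch (a constriction) sharply raises the resistance of its whole subtree, which is exactly the mechanism by which ventilation distribution is altered.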
Indeed, findings demonstrate that ventilation distribution tends to be altered by
constrictions. For VR-aided health care diagnostics, the resultant data from the CFD
simulations prove important in such a way that it gives insight into how ventilation
as a health care parameter could be focused upon to determine plug distribution in
the simulated tree.
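The finding that constrictions alter ventilation distribution can be illustrated with a toy exit-compartment sketch, in which each compartment obeys a single resistance-compliance relation driven by a sinusoidal pressure. All parameter values are arbitrary illustrative assumptions, not quantities from the cited models.

```python
import math

# Toy exit-compartment ventilation model: R * dV/dt + V / C = P(t),
# integrated with explicit Euler. V is volume change relative to rest
# (arbitrary units). A constricted branch (higher R) feeding a
# compartment of equal compliance delivers a smaller tidal volume.

def tidal_volume(R, C, T=8.0, dt=1e-3, amp=500.0, period=4.0):
    V, vmax, vmin, t = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        P = amp * math.sin(2.0 * math.pi * t / period)
        V += dt * (P - V / C) / R
        vmax, vmin = max(vmax, V), min(vmin, V)
        t += dt
    return vmax - vmin   # peak-to-peak volume excursion

normal = tidal_volume(R=1.0, C=0.05)
constricted = tidal_volume(R=20.0, C=0.05)   # narrowed feeding branch
```

The constricted compartment receives markedly less ventilation than the normal one over the same breathing cycle, reproducing in miniature the redistribution effect reported for the simulated tree.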
With a respiratory component in focus, CFD simulations have also been
extended to the context of liver biopsy, with the central objective being to apply
patient-specific data to design a simulator model that would support VR-aided health
care processes relative to the understanding of real-time hepatic interaction, as well
as modifiable respiratory movements [41]. To construct virtual patients aimed at
supporting virtual environments in health care diagnostics, some of the procedures
that have preceded the detailing of organ behavior simulations include the definition
of the software framework and the anatomy employed in the simulation [42, 43];
especially during needle insertion as patients breathe [44].
Methodologically, this construction of virtual patients to be used in liver biopsy
simulators (hence supporting VR-aided health care diagnostics) has seen patient
databases constructed to pave the way for organ motion and position visualiza-
tion. In turn, the environment has been adapted to ensure that it supports tool-tissue
interactions [45] (Fig. 3).
Before the implementation stage, some of the segmentations that have also been
conducted include skin, lung, bone, diaphragm, and liver segmentation (Fig. 4).
For respiration simulation in these investigations, specific parameters that have
been examined include natural respiration processes, soft-tissue behavior simulation,
rib cage simulation, diaphragm simulation, and liver simulation [45, 46]. From the
results, it is evident that when a liver biopsy simulator is implemented and consti-
tutes 3D virtual organs associated with patient data, virtual reality environments could
conduct real-time and on-line computation of hepatic feedback and organ motion as
needle insertion progresses [47–49]. The resultant inference is that through CFD sim-
ulations seeking to achieve liver biopsy simulators, VR operators are better placed
to perform diagnostic procedures in 3D environments that provide room for hand

Fig. 2 An illustration of the trachea-bronchial tree, tree-parenchyma coupled model, and an exit-compartment model. Source Chnafa et al. [4]

Fig. 3 Virtual patient construction for liver biopsy simulator. Source Doost et al. [6]

Fig. 4 CFD creation of a virtual patient after segmentation and meshing. Source Doost et al. [7]

co-location with virtual targets. Given the efficacy of the simulator in VR environ-
ments, some of the additional medical environments where it has been used include
ultrasound and fluoroscopic image guidance [48].
Apart from liver biopsy, which is a procedure aimed at removing a small piece
of the liver for further medical analysis to determine signs of disease or damage,
another area that has seen CFD simulations applied to support VR-aided health
care processes involves liver surgery planning. For traditional surgical planning,
volumetric data that has been used is that which is stored in intensity-based image
stacks. The data comes from CT (computerized tomography) scanners and allows
surgeons to view it in 2D imager viewers [3]. In turn, the surgeons use the image
slices at their disposal to establish 3D models of the vasculature, tumor, and liver
[4, 9]. However, this task is challenging and tends to be compounded by situations
where tumors exhibit anatomical variability [12]. It is also notable that when 2D
volumetric data set representations are provided, surgeons are likely to miss crucial
information, hence draw inaccurate conclusions—due to the perceived anatomical
variability that the 2D representations fail to reveal [11]. The trickle-down effect of
such inaccuracy is a case of suboptimal treatment strategy decision.
In response to this dilemma (and the need to steer improvements in how surgeons
understand complex interior structures in the liver), CFD simulations have been con-
ducted to support VR-aided health care diagnostics in terms of making accurate and
informed treatment strategy decisions [22]. The motivation has arisen from most
of the previous scholarly investigations that contend that in most cases, the work

Fig. 5 CFD-led VR liver surgery planning. Source Imanparast et al. [10]

of surgical planning is 3D-oriented and that 2D inputs are unsuitable [3–6]. Three
main stages that have been implemented in the CFD simulations seeking to support
VR-based liver surgery planning include image analysis, the refinement of the seg-
mentation, and planning the treatment [7]. Figure 5 summarizes the VR-based liver
surgery planning relative to the incorporation of CFD simulation outcomes.
Indeed, findings from the CFD simulations seeking to support VR processes in
liver treatment planning suggest that through the simulations, VR exhibits significant
improvements in the liver resection surgical methods. Also, the simulations are seen
to play a crucial role in optimization in such a way that the best VR approaches
through which easy and quick preoperative planning can be achieved are established
[10]. Similarly, the results indicate that when CFD simulations are implemented to
understand how best VR-based liver treatment planning could be realized, surgeons
tend to gain a detailed understanding of the complexity associated with the interior of
the liver structure [22]. From the CFD simulations, it remains inferable that surgeons
gain crucial knowledge through which liver treatment planning can be achieved, and
decisions for or against proceeding with surgery made appropriately.
From the documentation above, it is evident that CFD simulations in VR-based
health care environments have allowed surgeons to stretch beyond 2D gray-valued
images and the mental construction of 3D structures, which prove unreliable in sit-
uations where patients present with complex cases involving anatomical variability
[14]. Hence, the simulations are contributory to liver surgery planning because they
ensure that accurate interpretations are made in VR-aided health care diagnostics for
patients with liver complications. Another positive trend accruing from CFD simu-
lations relative to the creation and analysis of the behavior of virtual patients is that
the resultant information shows important information (including liver segments),

which allows VR operators to use a CFD-led application such as LiverPlanner to gain exposure to virtual liver surgery planning systems that incorporate the input of
VR technology and high-level image analysis algorithms towards establishing opti-
mal resection paths for individual patients [24, 26, 27, 30]. Overall, findings reported
from the CFD simulations targeting liver surgery planning in VR health care envi-
ronments suggest that the proposed CFD tools are better placed to reduce planning
times [19–21], hence improved patient outcomes.
CFD simulations targeting VR-aided health care diagnostics have also been imple-
mented relative to flow visualization in coronary artery grafts. The motivation of
employing the simulations has been to determine some of the predictive forces
responsible for failure rates of the grafts, as well as discern some of the techniques
through which the failures could be addressed when VR-aided health care diagnos-
tics are used. Indeed, several reasons are documented to account for regular coronary
artery graft failure. These failures demand repeated heart surgeries. From the major-
ity of the previous scholarly observations, the repeated surgeries have a tertiary and
adverse effect in terms of potential heart failure [35–38]. Thus, CFD simulations
have been conducted to ensure that in VR health care environments, medical imag-
ing modalities that offer varying artery views (such as magnetic resonance imaging,
ultrasound, and X-ray angiography) are supported further through blood flow simula-
tion in vessels. Particularly, the increasing attention toward blood flow simulation to
support VR techniques in addressing coronary artery graft failure has been informed
by the need to discern causes of failure, an area that has had to be addressed via
CFD simulation and, in turn, allow for successful VR-led visualization of blood flow
through arteries.
For such investigations, the objective has been to gain an understanding of how
3D modifications applied to arterial bypass grafts could pose hemodynamic effects,
especially due to previous scholarly assertions holding that near areas experiencing
flow disturbances, atherosclerotic lesions tend to develop [47, 48], examples being
bifurcations and arterial branches [2]. Thus, the CFD experimental setup has been
established in such a way that a new artery piece is attached to a damaged coronary
artery’s point downstream from the point of damage. The role of the simulated graft
has been to ensure that new blood is brought to the heart sections that might have
been starved. Also, angiograms of damaged arteries have been simulated to discern
the degree of damage, upon which how and why future lesions might form have been
investigated and predicted (Fig. 6).
Notably, such simulations have aimed at supporting the creation of virtual car-
diovascular laboratories in which the bifurcated artery’s overall view is presented to
understand an originally blocked artery and how blood eventually flows into newly
grafted segments, courtesy of CFD simulations; hence proving informative for VR-
aided health care diagnostics seeking to understand why and how lesions tend to
form. The emerging theme is that through CFD simulations, VR health care envi-
ronments supporting the flow of fluids through simulated arterial grafts are created.
Specific results indicated that the resultant VR environment accruing from CFD-led
simulations exhibits good potential for increasing the understanding of the grafts’

Fig. 6 An illustration of the simulated graft. Source Lewis et al. [11]

failure modalities relative to the location of the graft and individual patients’ vessel
characteristics, upon which potential heart failure could be reduced.
Apart from arterial graft failures, another area that has received attention regarding
the use of CFD simulations in informing VR-aided health care diagnostics involves
colonoscopy, with a colonoscopy simulator generated to allow VR health care opera-
tors to make informed decisions relative to the characteristics with which individual
patients present. In the colon, one of the widely applied gold standards for detecting
and removing precancerous polyps entails colonoscopy. With the procedure prov-
ing too challenging to master, the need to gain exposure to various pathology and
patient scenarios could not be overstated. Hence, CFD simulations have been used
to establish a colonoscopy simulator that would enable VR operators in health care
diagnostics to reduce patient discomfort and risk. Particularly, the objective of the
CFD simulations has been to counter some of the shortfalls associated with previous
forms of simulators. Thus, the new version, whose efficacy has been investigated, is
that which constitutes a haptic device that provides room for instrumented colono-
scope insertion, as well as a colonoscope camera view simulator. For VR-aided health
care diagnostics, the CFD simulations have strived to pave the way for the provision

of force feedback to VR operators [47]. The simulation environment has been set in
such a way that photorealistic visualization has been combined with the surrounding
organs and tissues, the colon, and the colonoscope’s physically accurate models.
With the diagnosis of colorectal cancer in focus, the methodology of the CFD
simulations has been set in such a way that the developed virtual environment, which
has been computer-generated, has been that which mimics the view seen normally
by gastroenterologists while conducting colonoscopy procedures. Also, the resultant
virtual environment has been that which constitutes a haptic interface to provide room
for the interaction between users and the virtual environments [32] (Fig. 7).
Findings demonstrate that through CFD simulation, the resultant colonoscopy
simulator yields significant improvements to and counters the deficiencies associated
with previous versions of simulators based on four major parameters. The parameters
that the CFD-simulated colonoscopy improves include haptic fidelity, visual realism,
physical and anatomical realism, and case complexity [44–46]. The implication for
VR-aided health care diagnostics is that the colonoscopy simulator developed from
CFD simulations paves the way for the provision of accurate physical realism at the
selected interactive rates. Specifically, the CFD-generated colonoscope simulator
allows health care VR operators to understand how physical interactions with the
colon cause loop formations, rather than rely on the previous trend in which loop
occurrences would be predicted and mimicked by assessing parameters such as the

Fig. 7 Summary of the CFD simulations seeking to support VR-based procedures for colorectal
cancer. Source Nguyen et al. [14]

actual position in the colon versus the depth of the colonoscope. Hence, the CFD-
led colonoscope simulator comes with the provision of accurate physical behaviors
in the colon, which allow VR operators to make informed decisions in a new VR
environment that is marked by reduced deficiencies associated with previous versions
of the colonoscopes.
Lastly, CFD simulation has been used to complement the work of VR oper-
ators in health care diagnosis relative to the context of symmetric hemodialysis
catheters. Indeed, the ease of tip positioning and low access recirculation account
for the increasing use of symmetric-tip dialysis catheters [5]. Therefore, CFD simu-
lations have been applied to analyze several parameters. Some of these parameters
include venous outflow deflection, recirculation, shear-induced platelet activation potency, and regions of flow separation [9, 10, 12]. Notably, regions of low flow are at risk for thrombus development [47]. In such investigations, the experimental
conditions have been set in such a way that one of the assumptions has been that
blood is a Newtonian fluid. Also, the performance of the simulated catheter has been
investigated in a setting where the experimental conditions have been set to involve
high hemodialysis flow rate. With the superior vena cava on target, the CFD-simulated catheter tip position has been that which experiences a steady-state, laminar
flow. Imperative to highlight is that these CFD simulations seeking to inform VR
health care diagnostic decisions have targeted the superior vena cava in the place
of simulating hemodialysis catheters in robust right atrial models due to three main
reasons. These reasons include the complexity of the tricuspid valve function, the
proportion of flow from the inferior vena cava, and assumption complexity regarding
atrial anatomy.
In the findings, the investigations contend that there is a significant difference
in the CFD simulation-led catheters. For example, flow reattachment or separation
from the combined impact of larger side slots and distal tip cause larger areas of
flow stagnation in the Palindrome catheter. Also, the catheter is observed to exhibit
the highest shear-induced platelet activation potency mean levels. As documented
by previous investigations in clinical scenarios, the two outcomes reflect risk factors
for catheter thrombosis [11–13, 15]. Also, CFD simulations depict that the simulated
catheters such as Glide-Path and Palindrome exhibit minimal recirculation because a wide septum divides venous and arterial lumens. Furthermore, the distal tip design
allows for flow deflection, hence low recirculation in the VectorFlow device. The
implication for VR health care diagnostics and processes is that through such CFD
simulations, the design of the catheter tip in the VR environments plays a crucial
role and forms a determinant factor in determining the rate of recirculation.
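The Newtonian-fluid assumption noted above is a simplification; the Carreau model cited in [49] is a common shear-thinning alternative. The sketch below contrasts the two. It is a minimal illustration only: the Carreau parameter values are typical literature values for blood, not numbers taken from the reviewed studies, and the constant Newtonian viscosity is likewise an assumed figure.

```python
# Minimal sketch (illustrative parameters, not from the reviewed studies):
# apparent blood viscosity under the Carreau shear-thinning model versus a
# constant Newtonian viscosity.

def carreau_viscosity(shear_rate, mu0=0.056, mu_inf=0.00345,
                      lam=3.313, n=0.3568):
    """Apparent viscosity (Pa*s) at a given shear rate (1/s)."""
    return mu_inf + (mu0 - mu_inf) * (
        1.0 + (lam * shear_rate) ** 2) ** ((n - 1.0) / 2.0)

NEWTONIAN_MU = 0.0035  # assumed constant viscosity (Pa*s)

for gamma in (1.0, 10.0, 100.0, 1000.0):
    print(f"shear rate {gamma:6.0f} 1/s: "
          f"Carreau {carreau_viscosity(gamma):.5f} Pa*s "
          f"vs Newtonian {NEWTONIAN_MU:.5f}")
```

At high shear rates the Carreau curve approaches the Newtonian value, which is one reason the Newtonian assumption is often considered acceptable in high-flow-rate settings such as the hemodialysis simulations discussed above.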

3 Conclusion

In summary, this study has discussed and critically reviewed some of the recent
scholarly study outcomes regarding CFD simulations with applications in VR-aided
health care diagnostics. From the findings, it is evident that there is an increasing
206 V. Panwar et al.

trend in CFD simulator development aimed at discerning how VR-led health care
diagnostic processes could be optimized. The CFD simulations target various
clinical settings, ranging from medical devices to diseases and health conditions
such as colorectal cancer, liver cancer, and heart failure. From the analysis of
the documented results, an emerging theme is that CFD simulations form a promising
path: they sensitize VR operators in health care to the approaches most likely to
minimize patient harm or risk while achieving optimal outcomes, for example when
planning surgery or when analyzing how the environment surrounding an organ might
interact with and contribute to a given abnormality. In so doing, CFD simulations
have paved the way for VR operators to make more informed and accurate decisions
regarding disease diagnosis and the tailoring of treatment to the needs and
conditions with which patients present.

References

1. Pareek, T. G., Mehta, U., & Gupta, A. (2018). A survey: Virtual reality model for medical
diagnosis. Biomedical and Pharmacology Journal, 11(4), 2091–2100.
2. Antoniadis, A. P., Mortier, P., Kassab, G., Dubini, G., Foin, N., et al. (2015). Biome-
chanical modeling to improve coronary artery bifurcation stenting: Expert review docu-
ment on techniques and clinical implementation. JACC: Cardiovascular Interventions, 8(10),
1281–1296.
3. Bavo, A., Pouch, A. M., Degroote, J., Vierendeels, J., Gorman, J. H., et al. (2017).
Patient-specific CFD models for intraventricular flow analysis from 3D ultrasound imaging:
Comparison of three clinical cases. Journal of Biomechanics, 50(11), 144–150.
4. Chnafa, C., Mendez, S., & Nicoud, F. (2014). Image-based large-eddy simulation in a realistic
left heart. Computers & Fluids, 94(6), 173–187.
5. Belinha, J. (2016). Meshless methods: The future of computational biomechanical simulation.
Journal of Biometrics and Biostatistics, 7(4), 1–3.
6. Doost, S. N., Ghista, D., Su, B., Zhong, L., & Morsi, Y. S. (2016). Heart blood flow simulation:
A perspective review. Biomedical Engineering Online, 15(1), 101.
7. Doost, S. N., Zhong, L., Su, B., & Morsi, Y. S. (2017). Two-dimensional intraventricular flow
pattern visualization using the image-based computational fluid dynamics. Computer Methods
in Biomechanics and Biomedical Engineering, 20(5), 492–507.
8. Douglas, P. S., Pontone, G., Hlatky, M. A., Patel, M. R., Norgaard, B. L., et al. (2015). Clinical
outcomes of fractional flow reserve by computed tomographic angiography-guided diagnostic
strategies vs. usual care in patients with suspected coronary artery disease: The prospective
longitudinal trial of FFRCT: Outcome and resource impacts study. European Heart Journal,
36(47), 3359–3367.
9. Galassi, F., Alkhalil, M., Lee, R., Martindale, P., Kharbanda, R. K., et al. (2018). 3D reconstruc-
tion of coronary arteries from 2D angiographic projections using non-uniform rational basis
splines (NURBS) for accurate modelling of coronary stenoses. PLoS ONE, 13(1), e0190650.
10. Imanparast, A., Fatouraee, N., & Sharif, F. (2016). The impact of valve simplifications on
left ventricular hemodynamics in a three dimensional simulation based on in vivo MRI data.
Journal of Biomechanics, 49(9), 1482–1489.
11. Lewis, M. A., Pascoal, A., Keevil, S. F., & Lewis, C. A. (2016). Selecting a CT scanner for
cardiac imaging: The heart of the matter. The British Journal of Radiology, 89(1065), 20160376.
Computational Fluid Dynamics Simulations with Applications … 207

12. Leng, S., Jiang, M., Zhao, X.-D., Allen, J. C., Kassab, G. S., Ouyang, R.-Z., et al. (2016).
Three-dimensional tricuspid annular motion analysis from cardiac magnetic resonance feature-
tracking. Annals of Biomedical Engineering, 44(12), 3522–3538.
13. Mittal, R., Seo, J. H., Vedula, V., Choi, Y. J., Liu, H., et al. (2016). Computational modeling of
cardiac hemodynamics: Current status and future outlook. Journal of Computational Physics,
305(2), 1065–1082.
14. Nguyen, V.-T., Loon, C. J., Nguyen, H. H., Liang, Z., & Leo, H. L. (2015). A semi-automated
method for patient-specific computational flow modelling of left ventricles. Computer Methods
in Biomechanics and Biomedical Engineering, 18(4), 401–413.
15. Morris, P. D., Narracott, A., von Tengg-Kobligk, H., Soto, D. A. S., Hsiao, S., Lungu, A., et al.
(2016). Computational fluid dynamics modelling in cardiovascular medicine. Heart, 102(1),
18–28.
16. Moosavi, M.-H., Fatouraee, N., Katoozian, H., Pashaei, A., Camara, O., & Frangi, A. F. (2014).
Numerical simulation of blood flow in the left ventricle and aortic sinus using magnetic res-
onance imaging and computational fluid dynamics. Computer Methods in Biomechanics and
Biomedical Engineering, 17(7), 740–749.
17. Itu, L., Rapaka, S., Passerini, T., Georgescu, B., Schwemmer, C., Schoebinger, M., et al.
(2016). A machine-learning approach for computation of fractional flow reserve from coronary
computed tomography. Journal of Applied Physiology, 121(1), 42–52.
18. Su, B., Zhang, J.-M., Tang, H. C., Wan, M., Lim, C. C. W., et al. (2014). Patient-specific blood
flows and vortex formations in patients with hypertrophic cardiomyopathy using computational
fluid dynamics. In 2014 IEEE Conference on Biomedical Engineering and Sciences (IECBES).
IEEE.
19. Kawaji, T., Shiomi, H., Morishita, H., Morimoto, T., Taylor, C. A., Kanao, S., et al. (2017).
Feasibility and diagnostic performance of fractional flow reserve measurement derived from
coronary computed tomography angiography in real clinical practice. The International Journal
of Cardiovascular Imaging, 33(2), 271–281.
20. Khalafvand, S., Zhong, L., & Ng, E. (2014). Three-dimensional CFD/MRI modeling reveals
that ventricular surgical restoration improves ventricular function by modifying intraventricular
blood flow. International Journal for Numerical Methods in Biomedical Engineering, 30(10),
1044–1056.
21. Koo, B.-K., Erglis, A., Doh, J.-H., Daniels, D. V., Jegere, S., Kim, H.-S., et al. (2011). Diag-
nosis of ischemia-causing coronary stenoses by noninvasive fractional flow reserve com-
puted from coronary computed tomographic angiograms: results from the prospective mul-
ticenter DISCOVER-FLOW (Diagnosis of ischemia-causing stenoses obtained via noninva-
sive fractional flow reserve) study. Journal of the American College of Cardiology, 58(19),
1989–1997.
22. Tu, S., Westra, J., Yang, J., von Birgelen, C., Ferrara, A., et al. (2016). Diagnostic accuracy of fast
computational approaches to derive fractional flow reserve from diagnostic coronary angiog-
raphy: The international multicenter FAVOR pilot study. JACC: Cardiovascular Interventions,
9(19), 2024–2035.
23. Wittek, A., Grosland, N. M., Joldes, G. R., Magnotta, V., & Miller, K. (2016). From finite
element meshes to clouds of points: A review of methods for generation of computational
biomechanics models for patient-specific applications. Annals of Biomedical Engineering,
44(1), 3–15.
24. Zhang, J. M., Luo, T., Tan, S. Y., Lomarda, A. M., Wong, A. S. L., et al. (2015). Hemodynamic
analysis of patient-specific coronary artery tree. International Journal for Numerical Methods
in Biomedical Engineering, 31(4), e02708.
25. Wong, K. K., Wang, D., Ko, J. K., Mazumdar, J., Le, T.-T., et al. (2017). Computational
medical imaging and hemodynamics framework for functional analysis and assessment of
cardiovascular structures. Biomedical Engineering Online, 16(1), 35.

26. Zhang, J.-M., Shuang, D., Baskaran, L., Wu, W., Teo, S.-K., et al. (2018). Advanced analy-
ses of computed tomography coronary angiography can help discriminate ischemic lesions.
International Journal of Cardiology, 267(18), 208–214.
27. Wexelblat, A. (2014). Virtual reality: Applications and explorations. Academic Press.
28. Bush, J. (2008). Viability of virtual reality exposure therapy as a treatment alternative.
Computers in Human Behavior, 24(3), 1032–1040.
29. Fluet, G., Merians, A., Patel, J., Van Wingerden, A., Qiu, Q., et al. (2014). Virtual reality-
augmented rehabilitation for patients in sub-acute phase post stroke: A feasibility study.
In 10th International Conference on Disability, Virtual Reality & Associated Technologies,
Gothenburg, Sweden.
30. Dascal, J., Reid, M., IsHak, W.W., Spiegel, B., Recacho, J., et al. (2017). Virtual reality and
medical inpatients: A systematic review of randomized, controlled trials. Innovations in Clinical
Neuroscience, 14(1–2), 14.
31. Miloff, A., Lindner, P., Hamilton, W., Reuterskiöld, L., Andersson, G., et al. (2016). Single-
session gamified virtual reality exposure therapy for spider phobia vs. traditional exposure
therapy: Study protocol for a randomized controlled non-inferiority trial. Trials, 17(1), 60.
32. Hawkins, R. P., Han, J.-Y., Pingree, S., Shaw, B. R., Baker, T. B., & Roberts, L. J. (2010).
Interactivity and presence of three eHealth interventions. Computers in Human Behavior, 26(5),
1081–1088.
33. Garcia, A. P., Ganança, M. M., Cusin, F. S., Tomaz, A., Ganança, F. F., & Caovilla, H. H.
(2013). Vestibular rehabilitation with virtual reality in Ménière’s disease. Brazilian Journal of
Otorhinolaryngology, 79(3), 366–374.
34. Cameirao, M. S., Badia, S. B. I., Duarte, E., Frisoli, A., & Verschure, P. F. (2012). The combined
impact of virtual reality neurorehabilitation and its interfaces on upper extremity functional
recovery in patients with chronic stroke. Stroke, 43(10), 2720–2728.
35. Kim, Y. M., Chun, M. H., Yun, G. J., Song, Y. J., & Young, H. E. (2011). The effect of
virtual reality training on unilateral spatial neglect in stroke patients. Annals of Rehabilitation
Medicine, 35(3), 309.
36. Subramanian, S. K., Lourenço, C. B., Chilingaryan, G., Sveistrup, H., & Levin, M. F. (2013).
Arm motor recovery using a virtual reality intervention in chronic stroke: Randomized control
trial. Neurorehabilitation and Neural Repair, 27(1), 13–23.
37. Nolin, P., Stipanicic, A., Henry, M., Joyal, C. C., & Allain, P. (2012). Virtual reality as a
screening tool for sports concussion in adolescents. Brain Injury, 26(13–14), 1564–1573.
38. Steuperaert, M., Debbaut, C., Segers, P., & Ceelen, W. (2017). Modelling drug transport during
intraperitoneal chemotherapy. Pleura and Peritoneum, 2(2), 73–83.
39. Magdoom, K., Pishko, G. L., Kim, J.H., & Sarntinoranont, M. (2012). Evaluation of a voxelized
model based on DCE-MRI for tracer transport in tumor. Journal of Biomechanical Engineering,
134(9), 091004.
40. Kim, M., Gillies, R. J., & Rejniak, K. A. (2013). Current advances in mathematical modeling
of anti-cancer drug penetration into tumor tissues. Frontiers in Oncology, 3(11), 278.
41. Pishko, G. L., Astary, G. W., Mareci, T. H., & Sarntinoranont, M. (2011). Sensitivity analysis of
an image-based solid tumor computational model with heterogeneous vasculature and porosity.
Annals of Biomedical Engineering, 39(9), 2360.
42. Stylianopoulos, T., Martin, J. D., Chauhan, V. P., Jain, S. R., Diop-Frimpong, B., Bardeesy, N.,
et al. (2012). Causes, consequences, and remedies for growth-induced solid stress in murine
and human tumors. Proceedings of the National Academy of Sciences, 109(38), 15101–15108.
43. Steuperaert, M., Falvo D’Urso Labate, G., Debbaut, C., De Wever, O., Vanhove, C., et al. (2017).
Mathematical modeling of intraperitoneal drug delivery: Simulation of drug distribution in a
single tumor nodule. Drug Delivery, 24(1), 491–501.
44. Stylianopoulos, T. (2017). The solid mechanics of cancer and strategies for improved therapy.
Journal of Biomechanical Engineering, 139(2), 021004.

45. Winner, K. R. K., Steinkamp, M. P., Lee, R. J., Swat, M., Muller, C. Y., Moses, M. E., et al.
(2016). Spatial modeling of drug delivery routes for treatment of disseminated ovarian cancer.
Cancer Research, 76(6), 1320–1334.
46. Zhan, W., Gedroyc, W., & Xu, X. Y. (2014). Effect of heterogeneous microvasculature dis-
tribution on drug delivery to solid tumour. Journal of Physics D: Applied Physics, 47(47),
475401.
47. Au, J. L.-S., Guo, P., Gao, Y., Lu, Z., Wientjes, M. G., Tsai, M., et al. (2014). Multiscale tumor
spatiokinetic model for intraperitoneal therapy. The AAPS Journal, 16(3), 424–439.
48. Zhang, Y., Furusawa, T., Sia, S. F., Umezu, M., & Qian, Y. (2013). Proposition of an out-
flow boundary approach for carotid artery stenosis CFD simulation. Computer Methods in
Biomechanics and Biomedical Engineering, 16(5), 488–494.
49. Tabakova, S., Nikolova, E., & Radev, S. (2014). Carreau model for oscillatory blood flow in a
tube. In AIP Conference Proceedings. AIP.
Data Analysis and Classification
of Cardiovascular Disease and Risk
Factors Associated with It in India
Sonia Singla, Sanket Sathe, Pinaki Nath Chowdhury, Suman Mishra,
Dhirendra Kumar and Meenakshi Pawar

Abstract Cardiovascular disease (CVD) is one of the leading causes of mortality
in India and around the globe. High sodium intake, high blood pressure, stress,
smoking, family history, and several other factors are associated with heart
disease. Air and noise pollution in India is also among the worst and is likely to
cause more deaths; among the top five causes of death worldwide are heart disease,
COPD, lower respiratory infections, and lung cancer. In India, the absence of
information and of treatment facilities in both rural and urban zones is a critical
concern. Young people have a higher chance of being affected by CVD owing to
alcohol use, smoking, and an unhealthy diet. By 2030, the prevalence rate in India
might rise to twice that of 2018. This survey aims at investigating recent advances
in understanding the epidemiology of CVD, its causes, and the risk factors
associated with it. One of the current trends in cardiology is the proposed use of
artificial intelligence (AI) to augment and broaden the effectiveness of the
cardiologist. This is because AI would take into
S. Singla (B)
University of Leicester, Leicester, UK
e-mail: ssoniyaster@gmail.com
S. Sathe
Savitribai Phule Pune University, Pune, India
e-mail: sathesfaction25@yahoo.com
P. N. Chowdhury
Kalyani Government Engineering College, Kalyani, India
e-mail: pinakinathc@gmail.com
S. Mishra
School of Biotechnology and Bioinformatics, D.Y. Patil University, Navi Mumbai, India
e-mail: mishrasuman428@gmail.com
D. Kumar
Translational Health Science and Technology Institute, Faridabad, Haryana, India
e-mail: dhiru.kumar7@gmail.com
M. Pawar
MIMER Medical College, Talegaon Dabhade, India
e-mail: dr.meenakshipawar@yahoo.com

© Springer Nature Switzerland AG 2020 211


D. Gupta et al. (eds.), Advanced Computational Intelligence Techniques for Virtual Reality
in Healthcare, Studies in Computational Intelligence 875,
https://doi.org/10.1007/978-3-030-35252-3_11
212 S. Singla et al.

account an accurate measure of patient functioning and diagnosis from the very
beginning to the end of the care process. Specifically, the application of
artificial intelligence in cardiology aims to focus on research and development,
clinical practice, and population health. Intended to become a widespread
instrument in cardiovascular health care, AI technologies incorporate complex
algorithms for determining the significant steps required for successful diagnosis
and treatment. The role of artificial intelligence specifically extends to the
identification of novel drug therapies, disease stratification and insights,
remote patient monitoring and diagnostics, the integration of multi-omic data, and
the extension of physician efficiency and effectiveness. Virtual reality emerged
during the 1990s in the realm of computer games and has gradually been making its
way into medicine ever since. Today, several specialists are investigating how VR
can help treat everything from agoraphobia to burn wounds to stroke. Research
suggests that using a virtual reality interface can help improve movement and
coordination of the arms, hands, and fingers in stroke survivors.

Keywords India · Prevalence · Incidence · Mortality · CVD · Smoking ·
Hypertension · Medicines · Diet and nutrients · Air pollution · Data analysis ·
Virtual reality · Artificial intelligence · Stroke

1 Introduction

The Indian subcontinent has among the highest rates of cardiovascular diseases
(CVDs) worldwide [1]. Cardiovascular disease (CVD) covers coronary,
cardiomyopathy, congenital, and vascular diseases. As the term suggests, it is a
disorder of the heart and blood vessels, and it is one of the major causes of
mortality in India and around the world. Very few people are aware that tobacco
use, alcohol use, overweight, stress, and a deficient diet with a high proportion
of salt are associated with hypertension, otherwise known as high blood pressure,
which is a significant risk factor for coronary heart disease [2]. Hypertension
becomes increasingly common in older women as they enter menopause [3]. Much of
the impact in India is due to poverty, lack of awareness, lack of treatment
facilities, and early onset of disease, which has affected both urban and rural
areas [4]. High-risk behavior combined with diabetes, hypertension, and smoking
between the ages of 35 and 70, together with the absence of treatment, is a major
cause of CVD in India [5]. A third of the disease burden is attributable to
tobacco use, physical inactivity, high-risk sexual practices, injury, violence,
and other factors in the early developmental stages of youth, adding to the threat
of chronic illness [6]. In the United Kingdom, death among South Asians is most
commonly due to CVD. Smoking, blood pressure, obesity, and cholesterol levels also
vary between European and South Asian people. Compared with Europeans, South
Asians have been found to have smaller coronary vessels, and angiography has
revealed triple-vessel disease along with several lesions [7]. Seventy percent of
the population lives in
Data Analysis and Classification of Cardiovascular … 213

Fig. 1 Causes of death worldwide in 2015 [9]

rural India, which lacks medical facilities; delays in treatment owing to the
unavailability of specialists and the scarcity of hospitals are also a major
concern [8]. Despite having a positive attitude, Metabolic Syndrome (MS) patients
were not following good lifestyle practices for prevention [1] (Fig. 1).
Stress-related distress is seen to be common among older women and, increasingly,
among younger women owing to smoking, fast food, and alcohol use in European
countries; such women are at high risk of coronary artery disease [3]. About 52%
of CVD deaths in India occur at ages under 70 years [10], and investigations
conducted in 1995–1996 and 2004 showed that the number of people receiving
specialist treatment increased most for diabetes, followed by injuries, heart
disease, and cancer in 2004 [11]. Diabetes is related to CVD risk factors and
reduces life expectancy [12]. From May to October 2012, most of the patients at
primary health care facilities in Odisha were suffering from respiratory (17%)
and cardiovascular (10.2%) disease [13]. Approximately 6 million people live with
heart disease in the United States and 610,000 die from it each year; in 2009 the
death rate for men was higher than that for women [14]. In a 2012–2014 study,
information gathered from 400 urban and 400 rural households in western India
revealed an absence of training in medication use; drugs for cardiovascular
disease were mostly taken without prescription, without checking expiry dates,
and in inappropriate doses [15]. The well-known actors Abir Goswami and Razzak
Khan died of heart attack and cardiac arrest, which is caused by a sudden failure
of blood flow as the heart stops pumping; its primary cause is coronary artery
disease (CAD) [16, 17]. Dev Anand, Reema Lagoo, Vinod Mehra, Navin Nischol, Om
Puri, and Inder Kumar are some of the great personalities of the Indian film
industry who died of heart attacks [16, 17].

2 Prevalence and Mortality Rate

Cross-sectional data from September 2015 to July 2016 show that most people
affected by CVD were women, 56% more than men; the prevalence of diabetes was 9%,
that of hypertension 22%, that of hypercholesterolemia 20%, and that of previous
and current smoking about 14% and 4%, respectively [18]. A 2016 investigation in
the urban zone of Varanasi showed that the prevalence of hypertension was 32.9%,
with mean systolic and diastolic BP of 124.25 ± 15.05 and 83.45 ± 9.49 mm Hg; men
were more affected than women [2]. The prevalence of hypertension among adults
(>20 years) was 159 per thousand for both urban and rural zones in 1995 [19]. In
2009–2012, across 20 urban centers, namely Delhi, Karnataka (Bangalore, Mysore),
Andhra Pradesh (Hyderabad, Vishakhapatnam), Maharashtra (Pune, Ambernath,
Ahmednagar), Uttar Pradesh (Agra, Kanpur), Rajasthan (Jodhpur), Himachal Pradesh
(Manali), Chandigarh, Uttarakhand (Dehradun, Mussoorie), Orissa (Chandipur), Assam
(Tezpur), Jammu and Kashmir (Leh), Madhya Pradesh (Gwalior), Tamil Nadu (Chennai),
and Kerala (Kochi), the overall prevalence of diabetes was 16%, with little
difference between men and women (approx. 16.6% and 12.7%), the prevalence of
hypertension was 21%, and the prevalence of dyslipidemia was high at about 45.6%;
both men and women are at high risk of CAD [4]. In 2010–2012, a cross-sectional
examination in Vellore using the Rose angina questionnaire and electrocardiography
found the prevalence of coronary heart disease to be 3.4% among rural men and 7.3%
among urban men, and 7.4% among rural women and 13.4% among urban women, higher
among women than men, compared with the prevalence survey conducted between 1991
and 1994 [20].
The 2010–2012 cross-sectional survey shows that prevalence increased in both urban
and rural areas compared with 1991–1994. Alcohol use, overweight, raised blood
pressure, and smoking have put Delhi at high risk of cardiovascular disease. The
mean body mass index was found to be 24.4–26.0 kg/m2 in urban Delhi and
20.2–23.0 kg/m2 in rural areas; systolic blood pressure was 121.2–129.8 mm Hg in
urban and about 114.9–123.1 mm Hg in rural areas, and diastolic blood pressure was
74.3–83.9 mm Hg in urban and about 73.1–82.3 mm Hg in rural areas [21].

3 Incidence of Cardiovascular Disease

In 2010–2011, sudden cardiac death at ages 35 years and above, among patients who
underwent autopsy, occurred in 39.7/100,000 of the population during the study
interval. It was 4.6 times more frequent in males than in females, with an
approximate incidence of 65.8/100,000 in males compared with 14.3/100,000 in
females [22]. The incidence rate is 145 per 100,000 per year [23].

4 Distribution of Disease by Age and Age at Onset

The mean age at initiation of smoking in urban and rural regions was 22.24 ± 7.2
and 21.1 ± 7.4 years, respectively [24]. The mean age at initiation of smoking
among adolescents was 19 ± 2.34 years [25]. The mean age for sudden cardiac death
was observed to be 55 ± 10 years [22].

5 Risk Factors for Cardiovascular Disease

5.1 Smoking

Tobacco is used for chewing and smoking by children aged 10–13 years, but more
commonly at ages 14–19. According to a World Bank report, around 82,000–99,000
children start smoking every day. Approximately 6 million people die worldwide
from consuming tobacco and from exposure to smoke [26]. Tobacco used in cigarettes
contains chemical compounds such as acetone ((CH3)2CO), used in nail-polish
remover; acetic acid (CH3COOH), in hair dye; ammonia (NH3), in household cleaners;
arsenic (As), in insect sprays and rechargeable batteries; benzene (C6H6), an
essential component of gasoline; butane (C4H10), which on reaction with plentiful
oxygen forms carbon dioxide and, where oxygen is limited, carbon monoxide; carbon
monoxide, found in car exhaust fumes; hexamine, in barbecue lighter fluid; lead,
in batteries; naphthalene, an ingredient in mothballs; methanol, in rocket fuel;
nicotine, used as an insecticide; tar, a material for paving roads; and toluene,
used for making paint [27]. These chemical compounds cause swelling of the cells
of blood vessels, narrowing them and provoking various heart conditions, for
instance atherosclerosis, in which cholesterol hardens with other substances in
the blood to form a plaque that blocks the flow of blood, and abdominal aortic
aneurysm, in which the abdominal aorta weakens and can develop an aneurysm [28].
In India, at about age 15, 47% of men and 14% of women either smoke or use tobacco
as cigarettes, beedis, hookah, chillum, pipe, etc. [29]. In 2005, data from
private and government schools in Noida showed a prevalence between ages 11 and
19 years that was higher in boys than in girls. Early initiation of smoking or
chewing tobacco, among 70% of boys and 80% of girls, begins at an age less than or
equal to 15 years and is generally found more in private schools than in
government schools [30].

5.2 Hypertension

Hypertension is the most important risk factor for CVD, and its prevalence
increases with age. The prevalence rate was found to be higher in men than in
women [31].

5.3 Diet and Nutrition

Low intake of fruits and vegetables and high intake of fast foods such as pizza
and burgers raise blood pressure owing to the saturated fats and cholesterol they
contain, which form plaque in the walls of blood vessels, reducing their diameter
and elasticity [32].

5.4 Excess Sodium

Young people and adults ingest excess amounts of salt by eating various kinds of
products bought from the market. The World Health Organization (WHO) recommended
in 2003 that adults take less than 2.0 g/day of sodium, which corresponds to about
5 g/day of salt; those who exceed this are at high risk of hypertension [33, 34].

5.5 Air Pollution Effects

India is ranked the seventh most polluted nation with regard to air pollution. The
harmful gases mostly come from vehicles. Air pollution adds organic substances,
particulate matter, and chemical substances to the air, which harm people and
other living organisms [35]. Polluted air has a negative influence on various
organs, ranging from minor upper respiratory irritation to coronary illness, lung
tumors, acute respiratory infections in children, and chronic bronchitis in
adults, and it aggravates pre-existing heart and lung disease and asthmatic
attacks [36]. In 2018, before Diwali, PM2.5 and NO2 values increased compared with
the previous Diwali day in the Delhi areas of Anand Vihar, R.K. Puram, and Punjabi
Bagh. These areas are quite unsafe to breathe in, as residents are at greater risk
of developing heart disease, COPD, and cancer [37] (Tables 1, 2 and Figs. 2, 3, 4).

Table 1 The remarks for AQI index, as taken from CPCB [37]

AQI Remark
0–50 Good
51–100 Satisfactory
101–200 Moderate
201–300 Poor
301–400 Very poor
401–500 Severe
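The AQI bands in Table 1 amount to a simple lookup. As an illustration only, a function mapping a CPCB AQI reading to its remark might look like the following (the function and constant names are our own, not CPCB terminology).

```python
# Hedged sketch: map a CPCB AQI value to the remark bands of Table 1.
AQI_BANDS = [
    (50, "Good"),
    (100, "Satisfactory"),
    (200, "Moderate"),
    (300, "Poor"),
    (400, "Very poor"),
    (500, "Severe"),
]

def aqi_remark(aqi):
    """Return the CPCB remark for an AQI value in [0, 500]."""
    if not 0 <= aqi <= 500:
        raise ValueError("AQI outside the 0-500 scale")
    for upper, remark in AQI_BANDS:
        if aqi <= upper:
            return remark

print(aqi_remark(42))   # prints "Good"
print(aqi_remark(310))  # prints "Very poor"
```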

Table 2 Prevalence of ever tobacco use among boys and girls


Tobacco Boys Girls Total Place References
Smoking or chewing or both 12.2 10.2 11.2 Noida [30]
Never tobacco 87.8 89.8 88.8 Noida [30]

Fig. 2 India map showing AQI index on Diwali day 2018 [37]

Fig. 3 PM2.5 value in Anand Vihar, Punjabi Bagh and RK Puram [37]

Fig. 4 The correlation coefficient for PM2.5 and NO2 is 0.468688 and that for PM2.5 and PM10 is 0.8 [37]
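The correlation values reported in Fig. 4 are Pearson coefficients. As a hedged illustration of how such a coefficient would be computed from paired pollutant readings, consider the sketch below; the readings are made-up placeholders, not the CPCB data behind the figure.

```python
# Hedged sketch: Pearson correlation between two pollutant series.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pm25 = [110.0, 180.0, 240.0, 310.0, 150.0]  # placeholder readings (ug/m3)
no2 = [40.0, 55.0, 70.0, 95.0, 60.0]        # placeholder readings (ug/m3)
print(round(pearson(pm25, no2), 3))          # -> 0.964
```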

5.6 Gender

Women’s after their menopause is at high danger of creating cardiovascular sickness


than young women’s and men [38]. After menopause the cholesterol and low thick-
ness lipoprotein (LDL) builds 10–14% while high thickness lipoprotein level stays
unaltered, the low LDL and cholesterol can to some degree help in expanding the
life expectancy in women [39].

5.7 Ethnicity or Race

Ethnicity plays a role in CVD: South Asians exhibit triple-vessel disease more
often than Europeans [7].

5.8 Low Socioeconomic Status

Tobacco use, a low-nutrition diet, and consumption of low-quality liquor are
increasing among people of low socioeconomic status, among whom diabetes and
hypertension are also increasingly common [40]. Consumption of unsafe, low-quality
liquor was found among low-income people lacking education and living in rural
areas [32]. Mental illness and anxiety were observed among individuals suffering
from heart disease [41]. Among patients with mental disorders, including
schizophrenia and severe mental illness, 53% have CVD [19].

5.9 Psychosocial Stress

Young people are likely to be affected by this issue mainly because of online
networking sites such as Facebook and Twitter. The absence of family support and
a luxurious lifestyle also have significant effects [32]. Stress may lead to
hypertension, dizziness, depression, and changes in behavior, and patients
experiencing these are more prone to heart disease [42]. Women living in urban
and rural towns suffer from social pressures and fear of sexual violence, all of
which add to psychosocial stress [43].

5.10 Diabetes and Glucose Intolerance

The prevalence of diabetes in both urban and rural areas is rising rapidly, and
with it the risk of heart disease; patients suffering from acute or chronic
disease should undergo diabetes screening with a glucose tolerance test [12, 42].
Awareness of diabetes among the rural population is very low [42].

6 Predictive Data Analysis of Cardiovascular Disease in Urban and Rural Areas for Males and Females

Predictive data analysis in Excel shows that urban and rural cases are likely to
rise by 2030 [9]. Coronary artery disease (CAD) accounts for 60% of all cases and
47% of the burden of disease, which is steadily increasing in the rural population
in absolute numbers [44] (Figs. 5, 6, 7 and 8).
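A forecast of this kind presumably extrapolates a trend over past observations. As an illustration only, the sketch below performs a least-squares linear extrapolation to 2030 over made-up case counts; the years and counts are placeholders, not the data behind Figs. 5–8.

```python
# Hedged sketch: least-squares linear trend extrapolation, as an Excel-style
# forecast might do. Data below are invented placeholders.

def linear_forecast(years, counts, target_year):
    """Fit y = a + b*year by least squares and evaluate at target_year."""
    n = len(years)
    my = sum(years) / n
    mc = sum(counts) / n
    b = sum((y - my) * (c - mc) for y, c in zip(years, counts)) / \
        sum((y - my) ** 2 for y in years)
    a = mc - b * my
    return a + b * target_year

years = [2000, 2005, 2010, 2015]      # placeholder survey years
cases = [120.0, 150.0, 185.0, 230.0]  # placeholder cases per 100,000
print(round(linear_forecast(years, cases, 2030), 1))  # -> 335.5
```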

7 Classification of Heart Disease by Naive Bayes Using Weka Tools

Heart patients are often not identified until a later phase of the illness or the
development of complications [45] (Tables 3, 4 and Figs. 9, 10).
Time taken to build model: 0.01 s.
The dataset was taken from GitHub [38]. We used Weka tools to build the
classification model for heart patients and analyzed it with the Naive Bayes
classification algorithm, as Naive Bayes showed higher accuracy than the other
algorithms. To test the developed model, we used 10-fold cross-validation. The
outcomes can be used to create a management plan for heart patients, since such
patients are often not identified until a late stage [45].
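The experiment above ran Weka's Naive Bayes classifier with 10-fold cross-validation. As a hedged sketch of what that classifier does internally, the following implements a minimal Gaussian Naive Bayes from scratch; the four-feature toy records are invented for illustration and are not the GitHub heart dataset used in the chapter.

```python
# Hedged sketch: Gaussian Naive Bayes from scratch on invented toy records.
import math
from collections import defaultdict

def fit(rows, labels):
    """Per-class feature means/variances and class priors."""
    by_class = defaultdict(list)
    for row, label in zip(rows, labels):
        by_class[label].append(row)
    model = {}
    for label, members in by_class.items():
        n = len(members)
        cols = list(zip(*members))
        means = [sum(col) / n for col in cols]
        # clamp variance away from zero so constant features stay usable
        variances = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                     for col, m in zip(cols, means)]
        model[label] = (means, variances, n / len(rows))
    return model

def predict(model, row):
    """Most probable class under independent Gaussian likelihoods."""
    best, best_lp = None, -math.inf
    for label, (means, variances, prior) in model.items():
        lp = math.log(prior)
        for x, m, var in zip(row, means, variances):
            lp += -0.5 * (math.log(2 * math.pi * var) + (x - m) ** 2 / var)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# toy records: [age, systolic BP, cholesterol, smoker(0/1)]; 1 = heart disease
rows = [[40, 120, 180, 0], [45, 118, 190, 0], [50, 125, 200, 0],
        [62, 150, 260, 1], [58, 145, 250, 1], [65, 160, 270, 1]]
labels = [0, 0, 0, 1, 1, 1]
model = fit(rows, labels)
print(predict(model, [60, 155, 255, 1]))  # -> 1
```

In Weka terms, this corresponds to the NaiveBayes learner; the chapter's 10-fold cross-validation would repeat fit/predict on ten train/test splits and average the accuracy.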

Fig. 5 Data analysis in the urban area, likely to rise by 2030, for the age group 20–29 years [9]

Fig. 6 Forecast data for the female in an Urban area for age 20–29 years [9]

Fig. 7 Forecast data for the male in the rural area for age 20–29 years [9]

Fig. 8 Forecast for females in the rural area till 2030 for age 20–29 years [9]

Table 3 Classification by Naïve Bayes algorithm [45]
Correctly classified instances 226 83.7037%
Incorrectly classified instances 44 16.2963%

Table 4 Hypertension prevalence rate in some states
State Men Women Total
Andhra Pradesh 16.2 10.0 13.1
Assam 19.6 16.0 17.8
Sikkim 27.3 16.5 21.9
Rajasthan 12.4 6.9 9.7
Uttar Pradesh 10.1 7.6 8.9

Fig. 9 Prevalence of overweight in young students



Fig. 10 Prevalence of obesity among young students

8 Medication

List of medications provided to patients:
• ACE inhibitors (angiotensin-converting enzyme inhibitors)—These help relax blood
vessels by preventing the enzyme from producing angiotensin II, which narrows the
blood vessels. They are used for treating high blood pressure [46].
• Angiotensin-II antagonists (ARBs)—These prevent the binding of angiotensin II to
receptors on the muscle surrounding the blood vessels, thus preventing high blood
pressure. A few examples are listed below [47].
(1) Azilsartan (Edarbi)
(2) Candesartan (Atacand)
(3) Eprosartan
(4) Irbesartan (Avapro)
(5) Losartan (Cozaar)
(6) Olmesartan (Benicar)
(7) Telmisartan (Micardis)
(8) Valsartan (Diovan)
• ARNi (angiotensin-II receptor-neprilysin inhibitor)
• Antiarrhythmic medicines—These are given to prevent heart attack and stroke, and
are used to treat arrhythmia, i.e. irregular heartbeats [48]

• Anticoagulant medicines—Anticoagulant medicines such as warfarin are given to
prevent blood clots. They are recommended for patients at high risk of developing
clots, to prevent stroke and heart attack. Other medicines include rivaroxaban
(Xarelto), dabigatran (Pradaxa), apixaban (Eliquis) and edoxaban (Lixiana) [49]
• Antiplatelet medicines—Although platelets are the smallest blood cells, they play
a central role in the development of arterial thrombosis, so inhibiting them can
reduce thrombus formation and thus reduce mortality in patients suffering from
coronary artery disease. Antiplatelet medicines such as aspirin prevent blood clots
by stopping blood cells from sticking together; 75 mg tablets are used to prevent
heart attack and stroke [50].
• Beta-blockers—Beta-blockers such as bisoprolol fumarate are used for the treatment
of heart failure and provide protection to the heart, and are thus useful in the
treatment of various forms of CAD [51].
• Calcium channel blockers—Calcium channel blockers like amlodipine improve
blood flow by widening the blood vessels. They are used to treat high blood pressure,
chest pain and CAD. Some examples of calcium channel blockers, which are often
prescribed along with cholesterol-lowering drugs, are given below [52]
• Amlodipine (Norvasc)
• Diltiazem (Cardizem, Tiazac, others)
• Felodipine
• Isradipine
• Nicardipine
• Nifedipine (Adalat CC, Afeditab CR, Procardia)
• Nisoldipine (Sular)
• Verapamil (Calan, Verelan)
• Cholesterol-lowering medicines (lipid-lowering medicines) such as statins—These
are used to lower cholesterol and triglycerides in the blood. Atorvastatin is taken
to reduce the risk of heart disease [53].
• Digoxin—It makes the heart pump more blood by increasing its activity, and
reduces shortness of breath [54].

9 Various Tests Available for Heart Check up

Electrocardiogram (ECG)—An ECG is done to check whether the heart is working
properly; it measures the electrical activity of the heart. It can show up the following
heart-related problems [55].
1. Any blockage by cholesterol or other substances—CAD.
2. Abnormal heart rhythm conditions, known as arrhythmias.
3. Any past heart attacks.
4. Cardiomyopathy.

Magnetic Resonance Imaging (MRI)—MRI is useful for checking the effects of coronary
artery disease and the anatomy of the heart in congenital heart disease [56] (Figs. 11
and 12).
Angiography—Angiography is used to examine the blood vessels and is similar to an
X-ray. Because normal X-rays do not give a clear picture, angiography is done instead:
a small cut is made around the wrist, a thin tube is inserted into the artery, dye is
injected, and X-rays are taken as the dye flows through the blood vessels. It is useful
for checking peripheral artery disease, angina, atherosclerosis, and the blood supply
to the lungs, kidneys and brain [57].
Risk factors associated with angiography:
Haematoma—A collection of blood where the small cut is made, which leads to bruising.
Haemorrhage—Even a small amount of bleeding from the cut site may be harmful in
some cases.
Pseudoaneurysm—Bleeding from the cut site leading to the formation of a lump, which
may need to be operated on.
Arrhythmias—As the name suggests, a disturbance of the rhythm of the heart, which
can settle without drug treatment or with it.
Cerebrovascular accident—A clot or bleed in a vessel in the brain, causing a stroke.
Myocardial infarction—A heart attack occurring due to blockage of the arteries, which
can be treated by angioplasty and may, in very rare cases, even lead to death.
Reaction to dye—Although rare, an allergic reaction against the dye can occur; it
can be treated with drugs but can sometimes become serious.
Pulmonary embolism—A clot in the veins going towards the lungs, which can be treated
with drugs.

Fig. 11 ECG report of a patient suffering from depression



Fig. 12 ECG report of a patient suffering from cardiovascular heart disease

10 Virtual Reality in Health Care

Virtual reality has been making headlines for its potential to change the ways in
which we interact with our surroundings.
Breakthrough technologies like the Oculus Rift headset have made for incredibly
lifelike experiences, notably in gaming and other forms of digital entertainment.
Mounting years of clinical experience have established the utility of printed
models of patient anatomy in numerous treatment and teaching scenarios, most
notably as pre- and intra-procedural planning tools guiding decision-making for
congenital heart disease and catheter-based interventions. Partly owing to a
continued lack of reimbursement and an under-defined (and slow to advance)
regulatory status, these use cases remain largely investigational even as they become
increasingly common. Patients, physicians and imaging centers therefore remain
burdened by the associated cost of creating such models, and the perceptual and
decision-making improvements, while evidently significant, still may not clearly or
independently justify a potentially surprising expense. Simulation and implantable-device
applications may represent a deeper well of underlying value in cardiovascular
intervention; however, further development of these applications depends on, and is
throttled by, progress in materials science and tissue-engineering research. The
importance of simulation applications has of late also come into competition with
digital analogs, including augmented and virtual reality.
Besides its boom in the media sector, virtual reality has also emerged as an innovative
tool in healthcare.
Both virtual and augmented reality technologies are springing up in healthcare
settings, for example operating rooms, or are being streamed to consumers by means of
telehealth communications. In many cases, virtual reality has enabled medical
professionals to deliver care more safely and effectively.
Virtual reality that enables doctors to visualize the heart in three dimensions could
help in the diagnosis of heart conditions. A pilot study published in the open access
journal Cardiovascular Ultrasound reveals that doctors can diagnose heart conditions
quickly and easily from virtual three-dimensional animated images, or 'holograms', of
the heart. Three-dimensional (3D) holograms enable doctors to 'dive' into the beating
heart and see the inner parts of the organ [58].

11 Implantable Cardioverter Defibrillators

An implantable cardioverter defibrillator (ICD) is a small electrical device used to
treat some types of abnormal heart rhythm, or arrhythmia, which can be dangerous.
It is a little bigger than a matchbox in size, and it is normally implanted just under
the collarbone. It is made up of a pulse generator, i.e. a battery-powered electronic
circuit, and one or more electrode leads which are placed in the heart through a
vein [14].

12 Use of Certain Medication

Medications used for mental illness can have rare cardiovascular side effects; for
example, aripiprazole, used to treat schizophrenia, can lead to slower heartbeat,
heart attack, chest pain, etc. [59].

13 Cardiovascular Diseases Types

Stroke—A stroke happens when the blood supply to the brain is cut off. It occurs
for one of two reasons: either the blood supply to the brain is blocked by a blood
clot, known as an ischaemic stroke, or a blood vessel supplying the brain bursts,
known as a haemorrhagic stroke [14].
Arrhythmia—As the name suggests, this is associated with an abnormal heartbeat. The
types of arrhythmia include atrial fibrillation, in which the heartbeat is faster;
bradycardia, in which the heartbeat is slow; and ventricular fibrillation, in which
the person can become unconscious and, if not treated, can suffer sudden death [60].
Coronary heart disease—CAD, also known as ischaemic heart disease, is caused by
blockage of the heart's blood supply by a substance formed of cholesterol or fat,
called atheroma. The coating of the artery wall with atheroma is called
atherosclerosis and is the main cause of death worldwide. It can be caused by
smoking, high blood pressure due to hypertension, alcohol and diabetes [61].
Heart failure—This is the condition in which the heart is unable to pump blood
adequately. The main causes of heart failure are CAD, high blood pressure,
cardiomyopathy, congenital heart disease, etc. [62].

14 Prevention Measures

Changes in routine lifestyle, quitting tobacco, physical exercise, yoga, and check-ups
of blood pressure and cholesterol, along with a fruit- and vegetable-rich diet, lower
salt intake and low alcohol consumption, are some of the preventive measures. The
government should increase taxes on tobacco, alcohol and fast food, and spread
awareness about CVD in order to check the spread of this disease [63]. It has been
found that stress and physical inactivity increase the risk of cardiovascular disease,
and yoga is highly beneficial for reducing stress among patients [64].

15 Role of Yoga in Treatment of Heart Disease

While research on using yoga as a treatment for heart patients is still in its
scientific infancy, there is growing evidence to suggest that yogic practices have a
positive effect on both the prevention and cure of heart disease. Several yogic
practices strike at the root causes of the disease by reducing hypertension, lowering
elevated cholesterol levels, and better managing mental and emotional stress. When
performed regularly under expert guidance, and combined with a proper diet, yogic
practices can help reduce blockages, aid the faster development of collaterals,
increase blood circulation, calm the sympathetic nervous system which governs the
production of stress hormones, and induce positive thinking (thereby reducing heart
neurosis).
However, particularly in the therapeutic stage of heart disease, yoga therapy must
work in conjunction with medical treatment, and all practices must be undertaken only
after consultation with the physician.
Yoga Nidra: An advanced relaxation technique which incorporates breath awareness
and visualization to support the healing process. In the field of heart disease, this
practice is considered an effective preventive, curative and palliative measure in all
degrees of heart strain and failure. Relaxation has been shown to lower the pulse
rate, decrease blood pressure and relieve the working strain on the heart muscles.
This technique can even be used while the patient is still in the Intensive Care Unit,
recovering from a heart attack.
Meditation and Chanting: OMKAR or other mantras create positive vibrations which
influence the body and mind and reduce mental and emotional stress [52].
Cardiovascular patients are encouraged to exercise and remain active for various
benefits, including improvement of inflammatory markers and vascular reactivity. HF
patients commonly have comorbidities that keep them from taking part in conventional
exercise programs and require individualized exercise prescription. The metabolic
demand of yoga is adaptable, ranging from chair-based practice to continuous flow.
Options for the delivery of yoga to HF patients may range from participation in a
cardiac rehabilitation facility to a supervised home-based program using smart and
connected technology, fostering a sense of mastery and connection. Published research
to date supports yoga as a safe and effective addition to the management of HF
patients and their QoL. Smart and connected technologies to augment yoga-based
therapeutic intervention in center or home settings could benefit hard-to-reach
populations. Efforts using 3D room sensors, for example the Microsoft Kinect, for
qualitative analysis of yoga and Tai Chi postures [65] could lead to wide-scale
adoption through sustainable channels. These low-cost hardware/software combinations
on mobile phones or gaming platforms could assess therapeutic outcomes such as
compliance with ideal postures, breathing, or energy expenditure. Such applications
can engage multiple participants, supporting motivation and adherence [66].
Studies comparing group yoga versus at-home yoga versus a control could be of value
in gauging the benefits of social support for patients at risk for, or diagnosed
with, cardiovascular disease [67].

16 Burden of Disease

According to health data, the leading cause of death in India is ischemic heart
disease [68].
The proportion of IHD to stroke mortality in India is substantially higher than
the global average and is comparable to that of Western industrialized nations.
Together, IHD and stroke are responsible for more than one-fifth (21.1%) of all
deaths and one-tenth of the years of life lost in India (years of life lost is a
measure that quantifies premature mortality by weighting younger deaths more than
older deaths). The years of life lost owing to CVD in India increased by 59% from
1990 to 2010 (23.2 million to 37 million) [65].

Fig. 13 The percentage change from 2007 to 2017

17 Conclusion

CVD is found to be the main cause of death in India and around the world. Ischemic
coronary artery disease and stroke are the primary causes of about 70% of CVD
deaths [10]. Knowledge of CVD and its risk factors is considerably lacking in urban
and rural zones, as well as among school children. Family history and ethnicity are
additional factors in CVD. Young people with a family history of smoking and diabetes
have a higher chance of heart disease. Air pollution is also a major problem in India
and is worst in the three states of Delhi, UP and Haryana. It is also one of the
causes of respiratory disease, cardiovascular disease and skin cancer (Fig. 13).

References

1. Verma, A., Mehta, S., Mehta, A., & Patyal, A. (2019). Knowledge, attitude and practices
toward health behavior and cardiovascular disease risk factors among the patients of metabolic
syndrome in a teaching hospital in India. Journal of Family Medicine and Primary Care On
Web, 8(1), 178–183.
2. Singh, S., Shankar, R., & Singh, G. P. (2017). Prevalence and associated risk factors of hyper-
tension: A cross-sectional study in urban Varanasi. International Journal of Hypertension,
2017, 5491838.
3. European Institute of Women’s Health, E. (2013). Gender and Chronic Disease Policy Briefings.
European Institute of Women’s Health.
4. Sekhri, T., Kanwar, R. S., Wilfred, R., Chugh, P., Chhillar, M., Aggarwal, R., et al. (2014).
Prevalence of risk factors for coronary artery disease in an urban Indian population. BMJ Open,
4, e005346.

5. Marbaniang, I. P., Kadam, D., Suman, R., Gupte, N., Salvi, S., Patil, S., et al. (2017).
Cardiovascular risk in an HIV-infected population in India. Heart Asia, 9, e010893.
6. Sunitha, S., Gururaj, G. (2014). Health behaviours & problems among young people in India:
Cause for concern & call for action. Indian Journal of Medical Research, 140, 185–208.
7. Chaturvedi, N. (2003). Ethnic differences in cardiovascular disease. Heart, 89, 681–686.
8. Khairnar, V. D., Saroj, A., Yadav, P., Shete, S., & Bhatt, N. (2019) Primary Healthcare Using
Artificial Intelligence. In: Bhattacharyya S., Hassanien A., Gupta D., Khanna A., Pan I. (eds)
International Conference on Innovative Computing and Communications. Lecture Notes in
Networks and Systems, vol 56. Springer, Singapore.
9. Murthy, K. J. R. (2005). Economic burden of chronic obstructive pulmonary disease. In A.
Indrayan, (Ed.) Burden of Disease in India, p. 264.
10. Chauhan, S., Aeri, D. (2013). Prevalence of cardiovascular disease in India and its economic
impact—A review. International Journal of Scientific Research. 3.
11. Engelgau, M. M., Karan, A., & Mahal, A. (2012). The economic impact of non-communicable
diseases on households in India. Global Health, 8, 9.
12. Schnell, O., & Standl, E. (2006). Impaired glucose tolerance, diabetes, and cardiovascular
disease. Endocrine Practice, 12(Suppl 1), 16–19.
13. Swain, S., Pati, S., & Pati, S. (2017). A chart review of morbidity patterns among adult patients
attending primary care setting in urban Odisha, India: An international classification of primary
care experience. Journal of Family Medicine and Primary Care On Web, 6, 316–322.
14. British Heart Foundation, B.H.F. Medicines for Heart Conditions—Treatments—British Heart
Foundation. https://www.bhf.org.uk/heart-health/treatments/medication.
15. Mirza, N., Ganguly, B. (2016). Utilization of medicines available at home by general population
of rural and urban set up of Western India. Journal of Clinical and Diagnostic Research, 10,
FC05–FC09.
16. Latest News, Breaking News Live, Current Headlines, India News Online | The Indian Express.
http://indianexpress.com/.
17. Famous Celebrities who Died Because Of Heart Attack | https://www.bollywoodpapa.com/
bollywood-actors/dev-anand/famous-celebrities-died-heart-attack.
18. Khetan, A., Zullo, M., Hejjaji, V., Barbhaya, D., Agarwal, S., Gupta, R., et al. (2017). Prevalence
and pattern of cardiovascular risk factors in a population in India. Heart Asia, 9, e010931.
19. Shah, B., & Mathur, P. (2010). Surveillance of cardiovascular disease risk factors in India: The
need & scope. Indian Journal of Medical Research, 132, 634–642.
20. Oommen, A. M., Abraham, V. J., George, K., & Jose, V. J. (2016). Prevalence of coronary heart
disease in rural and urban Vellore: A repeat cross-sectional survey. Indian Heart Journal, 68,
473–479.
21. Prabhakaran, D., Roy, A., Praveen, P. A., Ramakrishnan, L., Gupta, R., Amarchand, R., et al.
(2017). 20-Year trend of CVD risk factors: Urban and rural national capital region of India.
Global Heart, 12, 209–217.
22. Srivatsa, U. N., Swaminathan, K., Sithy Athiya Munavarah, K., Amsterdam, E., Shantaraman,
K. (2016). Sudden cardiac death in South India: Incidence, risk factors and pathology. Indian
Pacing and Electrophysiology Journal, 16, 121–125.
23. Das, S. K., Banerjee, T. K., Biswas, A., Roy, T., Raut, D. K., Mukherjee, C. S., et al. (2007).
A prospective community-based study of stroke in Kolkata. Indian Stroke, 38, 906–910.
24. Bhagyalaxmi, A., Atul, T., & Shikha, J. (2013). Prevalence of risk factors of non-communicable
diseases in a District of Gujarat, India. Journal of Health, Population and Nutrition, 31, 78–85.
25. Use v, control. (2013). Perceptions of young male smokers. International Journal of Research
Development and Health.
26. Bani T Aeri, D. S. (2014). Risk factors associated with the increasing cardiovascular diseases
prevalence in India: A review. Journal of Nutrition and Food Sciences, 05.
27. American Lung Association: What’s In a Cigarette? | American Lung Association, http://www.
lung.org/stop-smoking/smoking-facts/whats-in-a-cigarette.html.
28. Smoking and Cardiovascular Health [fact sheet on how smoking affects the heart
and circulatory system]. (2014).

29. Chadda, R. K., & Sengupta, S. N. (2003). Tobacco use by Indian adolescents. Tobacco Induced
Diseases, 1, 8.
30. Narain, R., Sardana, S., Gupta, S., & Sehgal, A. (2011). Age at initiation & prevalence of
tobacco use among school children in Noida, India: A cross-sectional questionnaire based
survey. Indian Journal of Medical Research, 133, 300–307.
31. Gupta, R., Xavier, D. (2018). Hypertension: The most important non-communicable disease
risk factor in India. Indian Heart Journal.
32. Allen, L., Williams, J., Townsend, N., Mikkelsen, B., Roberts, N., Foster, C., et al. (2017).
Socioeconomic status and non-communicable disease behavioural risk factors in low-income
and lower-middle-income countries: A systematic review. The Lancet Global Health, 5, e277–
e289.
33. O’Donnell, M. J., Mente, A., Smyth, A., & Yusuf, S. (2013). Salt intake and cardiovascular
disease: Why are the data inconsistent? European Heart Journal, 34, 1034–1040.
34. Cappuccio, F. P. (2013). Cardiovascular and other effects of salt consumption. Kidney
International Supplements, 2011(3), 312–315.
35. Nayana, A., Amudha, T. (2019). A computational study on air pollution assessment modeling. In
S. Bhattacharyya, A. Hassanien, D. Gupta, A. Khanna, I. Pan, (Eds.) International Conference
on Innovative Computing and Communications. Lecture Notes in Networks and Systems, Vol.
56. Singapore: Springer.
36. Shah, A. S. V., Langrish, J. P., Nair, H., McAllister, D. A., Hunter, A. L., Donaldson, K., et al.
(2013). Global association of air pollution and heart failure: A systematic review and meta-
analysis. Lancet, 382, 1039–1048.
37. National Air Quality Index. https://app.cpcbccr.com/AQI_India/.
38. GitHub.com. https://raw.githubusercontent.com/renatopp/arff-datasets/master/classification/
heart.statlog.arff.
39. Abbey, M., Owen, A., Suzakawa, M., Roach, P., & Nestel, P. J. (1999). Effects of menopause
and hormone replacement therapy on plasma lipids, lipoproteins and LDL-receptor activity.
Maturitas, 33, 259–269.
40. Kinra, S., Bowen, L. J., Lyngdoh, T., Prabhakaran, D., Reddy, K. S., Ramakrishnan, L., et al.
(2010). Sociodemographic patterning of non-communicable disease risk factors in rural India:
A cross sectional study. BMJ, 341, c4974.
41. Ormel, J., Von Korff, M., Burger, H., Scott, K., Demyttenaere, K., Huang, Y., et al. (2007).
Mental disorders among persons with heart disease—Results from world mental health surveys.
General Hospital Psychiatry, 29, 325–334.
42. Michael, A. J., Krishnaswamy, S., Muthusamy, T. S., Yusuf, K., & Mohamed, J. (2005). Anx-
iety, depression and psychosocial stress in patients with cardiac events. Malaysian Journal of
Medical Sciences, 12, 57–63.
43. Sahoo, K. C., Hulland, K. R. S., Caruso, B. A., Swain, R., Freeman, M. C., Panigrahi, P., et al.
(2015). Sanitation-related psychosocial stress: A grounded theory study of women across the
life-course in Odisha, India. Social Science and Medicine, 139, 80–89.
44. Kavi, A., Walvekar, P. R., & Patil, R. S. (2019). Biological risk factors for coronary artery disease
among adults residing in rural area of North Karnataka, India. Journal of Family Medicine and
Primary Care On Web, 8(1), 148–153.
45. Kumar, M. (2018). Classification of heart diseases patients using data mining techniques.
IJRECE, 6.
46. Angiotensin-converting enzyme (ACE) inhibitors - Mayo Clinic [Internet]. [cited 2018 Mar
7]. Available from: https://www.mayoclinic.org/diseases-conditions/high-blood-pressure/in-
depth/ace-inhibitors/ART-20047480.
47. Angiotensin II receptor blockers—Mayo Clinic [Internet]. [cited 2018 Mar 7]. Available from:
https://www.mayoclinic.org/diseases-conditions/high-blood-pressure/in-depth/angiotensin-ii-
receptor-blockers/art-20045009.
48. Medications for Arrhythmia [Internet]. [cited 2018 Mar 7]. Available from: https://www.heart.
org/HEARTORG/Conditions/Arrhythmia/PreventionTreatmentofArrhythmia/Medications-
for-Arrhythmia_UCM_301990_Article.jsp.

49. Anticoagulant medicines—NHS.UK [Internet]. [cited 2018 Mar 11]. Available from: https://
www.nhs.uk/conditions/anticoagulants/.
50. Knight, C. J. (2003). Antiplatelet treatment in stable coronary artery disease. Heart, 89(10),
1273–1278.
51. Boudonas, G. E. (2010). β-blockers in coronary artery disease management. Hippokratia, 14(4),
231–235.
52. Heart Disease and Yoga [Internet]. [cited 2019 Apr 4]. Available from: https://www.yogapoint.
com/articles/heartdiseaseandyogapaper.htm.
53. Atorvastatin | C33H35FN2O5—PubChem [Internet]. [cited 2018 Mar 7]. Available from:
https://pubchem.ncbi.nlm.nih.gov/compound/atorvastatin.
54. Digoxin | Heart and Stroke Foundation [Internet]. [cited 2018 Mar 11]. Available from: http://
www.heartandstroke.ca/heart/treatments/medications/digoxin.
55. NHS. Electrocardiogram (ECG)—NHS.UK [Internet]. 2015 [cited 2018 Feb 26]. Available
from: https://www.nhs.uk/conditions/electrocardiogram/.
56. Cardiac, Heart MRI [Internet]. [cited 2018 Feb 26]. Available from: https://www.radiologyinfo.
org/en/info.cfm?pg=cardiacmr.
57. Angiography—NHS.UK [Internet]. 2017 [cited 2018 Feb 27]. Available from: https://www.
nhs.uk/conditions/angiography/.
58. Thompson-Butel, A. G., Shiner, C. T., McGhee, J., Bailey, B. J., Bou-Haidar, P., McCorriston,
M., et al. (2019). The role of personalized virtual reality in education for patients post stroke-a
qualitative case series. Journal of Stroke and Cerebrovascular Diseases, 28, 450–457.
59. Pacher, P., & Kecskemeti, V. (2004). Cardiovascular side effects of new antidepressants and
antipsychotics: New drugs, old concerns? Current Pharmaceutical Design, 10(20), 2463–2475.
60. Arrhythmia—NHS.UK [Internet]. [cited 2018 Feb 28]. Available from: https://www.nhs.uk/
conditions/arrhythmia/.
61. Coronary heart disease—NHS.UK [Internet]. [cited 2018 Feb 28]. Available from: https://
www.nhs.uk/conditions/coronary-heart-disease/.
62. Heart failure—NHS.UK [Internet]. [cited 2018 Feb 28]. Available from: https://www.nhs.uk/
conditions/heart-failure.
63. Gupta, R., Joshi, P., Mohan, V., Reddy, K. S., & Yusuf, S. (2008). Epidemiology and causation
of coronary heart disease and stroke in India. Heart, 94(1), 16–26.
64. Holt, S. (2014). Cochrane corner: Yoga to prevent cardiovascular disease. Advances in
Integrative Medicine, 1(3), 150.
65. Prabhakaran, D., Jeemon, P., & Roy, A. (2016). Cardiovascular diseases in India: Current
epidemiology and future directions. Circulation, 133, 1605–1620.
66. Pullen, P. R., Seffens, W. S., & Thompson, W. R. (2018). Yoga for heart failure: A review and
future research. International Journal of Yoga, 11(2), 91–98.
67. Haider, T., Sharma, M., & Branscum, P. (2017). Yoga as an alternative and complimentary ther-
apy for cardiovascular disease: A systematic review. Journal of Evidence Based Complementary
& Alternative Medicine, 22(2), 310–316.
68. India | Institute for Health Metrics and Evaluation [Internet]. [cited 2019 Apr 4]. Available
from: http://www.healthdata.org/india.
