
Towards a ubiquitous real-time COVID-19 detection system
Mohamed Sbai, Hajer Taktak and Faouzi Moussa
Faculty of Sciences of Tunis, LIPAH LR11ES14, University of Tunis El Manar,
Tunis, Tunisia
Received 19 July 2020
Revised 19 July 2020
Accepted 19 July 2020

Abstract

Purpose – In view of the intensive spread of Coronavirus disease 2019 (COVID-19), and in order to reduce the rate at which the disease spreads, the objective of this article is to propose an approach to detect suspected COVID-19 cases in real time.
Design/methodology/approach – Ubiquitous computing offers a new opportunity to reshape conventional solutions into personalized services that adapt to the contextual situation of each environment. The health system is seen as a key application of ubiquitous computing: health services become available anytime and anywhere to monitor patients according to their context. This paper aims to design and validate a contextual model for ubiquitous health systems that detects suspected COVID-19 cases in real time, in order to reduce the propagation of this infectious disease and issue the necessary instructions.
Findings – This paper presents the performance results of the COVID-19 detection approach, namely the reduction of the COVID-19 propagation rate achieved thanks to the real-time intervention of the system.
Originality/value – Following the COVID-19 pandemic spread, the authors tried to find a solution to
detect the disease in real time. In this paper, a real-time COVID-19 detection system based on the ontological
description supported by Semantic Web Rule Language (SWRL) rules was developed. The proposed ontology
contains all relevant concepts related to COVID-19, including personal information, location, symptoms, risk
factors, laboratory test results and treatment planning. The SWRL rules are constructed from medical
recommendations.
Keywords COVID-19, Context sensitive applications, Ubiquitous system
Paper type Research paper

1. Introduction
Ubiquitous computing is a thriving research and application axis. It began with the advent
of the miniaturization of electronic devices and the rise of mobility and wireless networks
(Greenfield, 2006). The basic concepts of this new paradigm were founded by Mark Weiser
(Weiser, 1993). Technically speaking, it involves incorporating and disseminating various electronic devices throughout our physical environment and our daily life, thus making them imperceptible to the user (Tran et al., 2009; Kranz et al., 2010; Pantsar-Syväniemi et al., 2011). The current development of ubiquitous computing is helping to create new generations of services and represents one of the ways to make our society more comfortable and secure. One of the most important applications of ubiquitous computing is health care, in which the physiological parameters of patients are transmitted via modern communication technologies, making it possible to carry out the necessary procedures in case of emergency. The health system is seen as a key part of ubiquitous computing, which means that health services are available anytime, anywhere to monitor patients based on their context.
The goal of our research is to create a ubiquitous health system that can detect whether a mobile user interacting with our system is suspected of Coronavirus disease 2019 (COVID-19) or not.
COVID-19 (World Health Organization, 2020) is an infectious disease caused by a novel coronavirus that was discovered in Wuhan (China) in December 2019. COVID-19 is now a pandemic and affects many countries (216 countries, 4,620,034 cases and 308,450 deaths) around the world.
COVID-19 (World Health Organization, 2020) is a life-threatening respiratory disease that
can be fatal in patients weakened by age or other chronic illness. It is transmitted through
close contact with infected persons. The disease could also be transmitted by asymptomatic
patients, but there is a lack of scientific data to prove this with certainty. The most common
symptoms of COVID-19 (World Health Organization, 2020) are fever, dry cough and fatigue.
Other less common symptoms may also occur in some people, such as aches and pains,
nasal congestion, headache, conjunctivitis, sore throat, diarrhea, loss of taste or smell, skin
rash or discoloration of fingers or feet. These symptoms are usually mild and appear
gradually. Some people, although infected, have only very mild symptoms.
The objective of this article is to find a ubiquitous solution that enables real-time detection of suspected COVID-19 cases, allowing rapid intervention at the right time and place to reduce the propagation of this infectious disease and to issue the necessary instructions.
The article is organized in several parts. Section 2 deals with related works in context-
aware software platforms. Section 3 focuses on our contribution. In this section, we will
detail our proposed system. Section 4 validates our proposal. In Section 5, we present a
performance study of our contribution. Section 6 concludes the paper with some future
works.

2. Related work
This section focuses on evaluating and comparing existing research. We briefly describe the
architectures and argue their strengths and shortcomings, and conclude by comparing
them.
A three-layer architecture (Lehmann et al., 2010) was presented for developing adaptive
smart environment user interfaces. Owing to the ubiquitous nature of its target applications,
this architecture only supports direct adaptations. Information is read from sensors, and only the environment pillar of the context of use is targeted; as such, multiple data sources are not supported.
The architecture uses a modeling approach based on generative runtime models, which
could be less flexible than interpreted runtime models for performing advanced adaptations
(Akiki et al., 2014). Furthermore, the work does not specify whether the architecture is meant
to support all levels of abstraction. The architecture does not support user feedback but refers to the work of Brdiczka et al. (2007), which does not offer an architecture but uses user feedback for refining initial situation models at runtime to improve the reliability of detected situations.
CAMELEON-RT (Balme et al., 2004) is a reference architecture model for distributed,
migratable and plastic user interfaces within interactive spaces. This architecture targets all
context-of-use pillars (i.e. user, platform and environment) and can be considered general-
purpose because of its implementation neutrality. The architecture provides a good
conceptual representation of the extensibility of adaptive behaviour through the use of open-
adaptive components, which allow new adaptive behaviour to be added at runtime (Oreizy
et al., 1999). Both direct and indirect adaptations could in theory be implemented using these
components. The CAMELEON framework supports all levels of abstraction. The architecture depicts observers that collect data on the system, user, platform and environment and feed them to a situation synthesizer, thereby supporting multiple data sources.
CEDAR (Akiki et al., 2012) is a reference architecture for stakeholders interested in developing adaptive enterprise application user interfaces (UIs) based on an interpreted runtime model-driven approach. The architecture follows the levels of abstraction suggested by CAMELEON for representing its UI models. It supports both direct and indirect adaptation and the extensibility of its adaptive behaviour, which is stored in a relational database. CEDAR presents components for supporting trade-off analysis and user feedback on the UI adaptations. The architecture also introduces a basic crowdsourcing approach for empowering end-users to participate in the UI adaptation process.
FAME (Duarte and Carrico, 2006) is an architecture targeting adaptive multimodal UIs
using a set of context models in combination with user inputs. It only targets modality
adaptation and is therefore not meant to be a general-purpose reference for adapting other
UI characteristics. The adopted approach allows designer input on the characteristics of the user interface, hence providing good control over the UI. Adaptive behaviour can be extended
using device changes, environmental changes, and user inputs that feed into related models.
According to Akiki et al. (2014), the combination of the multiple data sources and
the adaptive behaviour matrices should be able to support both direct and indirect
adaptations.
Malai (Blouin and Beaudoux, 2010; Blouin et al., 2011) is an architectural model for
interactive systems and forms a basis for a technique that uses aspect-oriented modeling for
adapting user interfaces. The extensibility of adaptive behaviour is poor, as multiple
presentations have to be defined at design-time by the developer, to be later switched at
runtime. Although Malai supports multiple levels of abstraction, the modeling approach
relies on generating code (such as Swing, .NET, etc.) to represent the UI. Furthermore, it
does not describe multiple sources for acquiring adaptive behaviour data. In theory, both
direct and indirect adaptations can be supported. Malai allows developers to define feedback
that would help users to understand the state of the interactive system, but the user cannot
provide feedback on the adaptations (i.e. reverse an unwanted adaptation).
After analyzing the works reviewed in this section, it became clear that most of the architectures did not address several key criteria (Table 1). For example, only the work of Akiki et al. (2012) presents components for managing trade-off analysis and user feedback. Additionally, despite the importance of integration into existing software systems that are at a mature development stage, all of the evaluations, except that of Akiki et al. (2012), were conducted by building new prototypes. Furthermore, empowering new design participants was only partially addressed by Akiki et al. (2012), while the other architectures did not incorporate any components for supporting this feature.

3. Contribution
After an in-depth study of the different architectures presented in the previous section, we propose our architecture, which is formed of the following layers (Figure 1): Data Acquisition Layer, Context Processing Layer, Reasoning Layer and Application Layer.
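To make this layering concrete, the sketch below chains four placeholder layer classes in Python. It is an illustrative outline only, under our own naming assumptions; the paper does not prescribe an implementation language or these class names.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class Layer(ABC):
    """One stage of the proposed four-layer pipeline (illustrative)."""

    @abstractmethod
    def process(self, data: Dict[str, Any]) -> Dict[str, Any]:
        ...


class DataAcquisitionLayer(Layer):
    def process(self, data):
        # Stub: collect static (profile) and dynamic (sensor) context data here.
        return data


class ContextProcessingLayer(Layer):
    def process(self, data):
        # Stub: map the raw context data onto the ontology here.
        return data


class ReasoningLayer(Layer):
    def process(self, data):
        # Stub: apply situation rules and infer the current situation here.
        return data


class ApplicationLayer(Layer):
    def process(self, data):
        # Stub: trigger the services associated with the inferred situation here.
        return data


def run_pipeline(raw_context: Dict[str, Any]) -> Dict[str, Any]:
    layers = [DataAcquisitionLayer(), ContextProcessingLayer(),
              ReasoningLayer(), ApplicationLayer()]
    for layer in layers:
        raw_context = layer.process(raw_context)
    return raw_context
```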

3.1 Data acquisition layer


This layer is primarily used to collect and transmit static and dynamic data from the COVID-19 domain, including the user profile, real-time physiological parameters and environmental information gathered from portable or fixed monitoring nodes. This combined information constitutes all the risk factors related to COVID-19. Medical studies indicate that there are many risk factors that contribute to the development of COVID-19 (a small data-structure sketch of these factors follows the list):
 Contact: COVID-19 is transmitted through close contact with infected persons. The disease could also be transmitted by asymptomatic patients, but there is a lack of scientific data to prove this with certainty.
Table 1. Comparative study of related works

Related work | Direct and indirect adaptation | Empowering new design participants | Extensibility of adaptive behavior | Integrating in existing systems | Levels of abstraction | Modeling approach | Multiple data sources | Trade-off analysis | User feedback on the adapted UI
Lehmann et al. (2010) | + | – | – | – | – | + | + | – | –
Balme et al. (2004) | ++ | – | ++ | – | ++ | – | ++ | – | –
Akiki et al. (2012) | ++ | + | + | + | ++ | + | + | ++ | ++
Duarte and Carrico (2006) | ++ | – | ++ | – | + | – | ++ | – | –
Blouin and Beaudoux (2010), Blouin et al. (2011) | ++ | – | + | – | ++ | + | + | – | –

Figure 1. Real-time COVID-19 detection system

 Age: older people are the most at risk of having severe symptoms.
 Temperature: a very important factor in the detection of COVID-19. If a person has a temperature >= 38°C, then they may be suspected of COVID-19.
 Comorbidities: people with chronic renal failure on dialysis should be monitored, as
well as those with heart failure; patients with a history of cardiovascular disease;
insulin-dependent diabetics; people with chronic respiratory insufficiency under
oxygen therapy or asthma or cystic fibrosis or any chronic respiratory condition
that may decompensate with viral infection; people undergoing chemotherapy.
 Physiological factors: disruption of certain vital signs can cause complications and
could indicate a worsening of the disease. Therefore, all these physiological
parameters should be evaluated and observed regularly. Examples of parameters
favoring COVID-19: Dyspnea, cough, sore throat, chest pain, headache, diarrhea,
etc.
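As a rough illustration of what the acquisition layer might gather, the dataclass below groups the risk-factor inputs listed above; the field names are our own assumptions and are not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Covid19RiskContext:
    """Risk-factor data collected by the acquisition layer (illustrative)."""
    age: int
    temperature_celsius: float          # from a temperature biosensor
    had_close_contact: bool             # questionnaire answer
    symptoms: List[str] = field(default_factory=list)       # e.g. "cough", "dyspnea"
    comorbidities: List[str] = field(default_factory=list)  # e.g. "diabetes"
```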
3.2 Context processing layer
The development of ubiquitous systems first requires all context information to be modeled correctly, with very high precision, using the best possible formalism in order to facilitate its subsequent exploitation.
For this work, we have opted for ontologies, which are known in the literature as a very powerful formalism for modeling context with rich and extensible semantic expressiveness (Kapitsaki et al., 2010; Bettini et al., 2010; Alegre et al., 2016).
The proposed context model is made up of a generic ontology (Figure 2). The
structuring of the ontologies of our context model makes it possible to render the
proposed model:
 general enough to be used by different ubiquitous applications;
 sufficiently specific to cover the main contextual entities proposed in the literature
of mobile and context-sensitive applications; and
 sufficiently flexible to allow its extension by taking into account new entities
specific to a given field of application.

In this section, we present our context model design starting with the highest level of
abstraction. After developing the generic context model based on the core ontology
(Figure 2), we will introduce domain ontologies, their objectives, and describe each domain.
3.2.1 User ontology. This part of the ontology (Figure 3) describes the user information
that can modify the adaptation service. The user’s preferences include his age, preferred
languages and preferred modalities (voice, gesture, pen click, mouse click, etc.). The user
answers a questionnaire to find out if he was in direct contact with a person infected by
COVID-19 and to find out his physiological condition (Fever, dyspnea, chest pain, cough,
diarrhea, sore throat, etc).
3.2.2 Environment ontology. This ontology describes spatial and temporal information (Figure 4); a small sketch of these concepts follows the list:
 Temporal information can be a date or a time used as a timestamp. Time is an essential aspect of contextual information; it is therefore important to timestamp information as soon as it is produced.
 Location describes information related to the user's location {latitude, longitude and altitude} in a given place, where the available mobile resources can be found. Mobile resources are mobile devices, such as tablets, smartphones and laptops, and smart objects, such as biosensors, environmental sensors, etc.
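A minimal sketch of the temporal and location concepts above, assuming hypothetical class and field names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Location:
    latitude: float
    longitude: float
    altitude: float


@dataclass
class ContextReading:
    """A sensed value stamped with time and place as soon as it is produced."""
    value: float
    location: Location
    timestamp: datetime


# Example: a temperature reading taken in Tunis (coordinates illustrative).
reading = ContextReading(38.7, Location(36.8, 10.18, 20.0),
                         datetime.now(timezone.utc))
```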

Figure 2.
Core ontology

Figure 3.
User ontology

Figure 4.
Environment
ontology

3.2.3 Service ontology. Today's environments are becoming smarter in order to respond to user demands anytime and anywhere based on location; user interaction is aimed at getting better services from providers. There are three types of services (Figure 5): Service Profile (name, score, ID, etc.), Interactive Service (InteractiveHealthService, InteractiveMediaservice, etc.) and Adaptation Service (TranscodinService, TransformationService, TransmodingService).
3.2.4 Device ontology. This ontology includes computer peripherals such as personal
digital assistants (PDAs) and sensors. Basically, this ontology (Figure 6) covers the mobile
device used to collect and send data, as well as all the fixed and portable biomedical
equipment used by patients to monitor their vital signs, in addition to environmental
sensors to detect any change in the environment. Figure 6 shows the types of devices found
in this ontology. BioSensor parameters are detected by “BloodPressureSensor,”
“GlucoseSensor,” “TemperatureSensor” and “RespirationRateSensor.” Environmental
information can be obtained using the thermometer, hygrometer, air quality sensors,
barometer and GPS.
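For illustration, the biosensors named in this ontology could be wrapped behind a common reading interface, as in the sketch below; the classes and simulated values are our own assumptions, not part of the described device ontology.

```python
from abc import ABC, abstractmethod
import random


class BioSensor(ABC):
    """Common interface for the biosensors named in the device ontology."""

    @abstractmethod
    def read(self) -> float:
        ...


class TemperatureSensor(BioSensor):
    def read(self) -> float:
        # Stub: a real deployment would query the wearable device.
        return round(random.uniform(36.0, 40.0), 1)


class GlucoseSensor(BioSensor):
    def read(self) -> float:
        return round(random.uniform(0.7, 2.5), 2)  # simulated value (units assumed)


sensors = {"temperature": TemperatureSensor(), "glucose": GlucoseSensor()}
context = {name: sensor.read() for name, sensor in sensors.items()}
```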

Figure 5.
Service ontology

Figure 6.
Device ontology

3.3 Reasoning layer


This layer is responsible for identifying the current situation, making inferences from the
ontology information, and determining and deploying appropriate services based on the
inferred situations.
3.3.1 Reasoning process. The reasoning process is responsible for making inferences
about ontology information, determining and deploying appropriate services based on
identified situations. It is based on the Java Expert System Shell (JESS) inference engine (Clark and Parsia, 2020), which executes Semantic Web Rule Language (SWRL) situation rules, infers current situations and determines appropriate actions based on the inferred situations.
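The reasoning itself is performed by the JESS engine executing SWRL rules; as a purely illustrative stand-in (not the JESS API), the Python loop below shows the general shape of firing situation rules against contextual facts.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Set


@dataclass
class SituationRule:
    name: str
    condition: Callable[[Dict], bool]   # tests contextual facts
    situation: str                      # situation asserted when the rule fires


def infer_situations(facts: Dict, rules: List[SituationRule]) -> Set[str]:
    """Very small forward pass: assert every situation whose rule condition holds."""
    inferred: Set[str] = set()
    for rule in rules:
        if rule.condition(facts):
            inferred.add(rule.situation)
    return inferred


rules = [
    SituationRule(
        "HighLevelTemperature",
        lambda f: f.get("temperature", 0) >= 38,
        "HighLevelTemperatureSituation",
    ),
]
print(infer_situations({"temperature": 38.7}, rules))
```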
3.3.1.1 Situation rules. Situation rules are used to infer situations from contextual
information. Certain situations trigger the appropriate services to be provided to the user.
However, other situations, such as gesture modality detection, can be used to ensure continuity of service.
The rules are based on contextual constraints. A context constraint is defined by a context parameter and a context expression, which is categorized as a simple expression and/or a composite expression (Tables 2 and 3), thus forming a multi-level context ontology, as shown in Figure 7.
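The following sketch illustrates the distinction between simple and composite context expressions described above; the class names and the evaluation strategy are our own assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SimpleExpression:
    """Compares one context parameter against a value, e.g. 'Temperature Level >= 38'."""
    parameter: str
    predicate: Callable[[object], bool]

    def evaluate(self, context: dict) -> bool:
        return self.predicate(context.get(self.parameter))


@dataclass
class CompositeExpression:
    """Combines simple expressions with AND / OR."""
    operator: str                      # "AND" or "OR"
    operands: List[SimpleExpression]

    def evaluate(self, context: dict) -> bool:
        results = (e.evaluate(context) for e in self.operands)
        return all(results) if self.operator == "AND" else any(results)


fever = SimpleExpression("temperature", lambda t: t is not None and t >= 38)
print(fever.evaluate({"temperature": 38.7}))
```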
3.3.1.2 Situation ontology. This part represents the possible situations (Figure 8) that we can define for each context (Table 2). We call these situations "contextual situations" because they depend on contextual information. This class contains two subclasses: the External Situation class and the Internal Situation class. The first represents situations related to the user's environment and devices, such as house temperature and battery situations. The Internal Situation class represents situations related to a specific domain, such as the health status of the person, for example "Suspect COVID 19 Situation", "High Level Respiration Rate Situation", etc. Each situation has data type properties, such as the type of situation, a maximum value and a minimum value, which are defined by the developer and the domain expert; in our work, the doctor.

4. Potential scenario and validation

The objective of our system is to automatically detect persons suspected of COVID-19 in order to take the necessary decisions and give the appropriate instructions.
To verify whether a user is suspected of COVID-19 or not, our system assigns the user a score according to the different factors favoring COVID-19, which are classified into the four categories listed after Tables 2 and 3:

Table 2. Example of rules specified by the doctor

Situation | Rules
COVID-19 Suspect Situation | IF user has score >= 4 THEN user has situation "SuspectCOVID19Situation"; IF user has situation "SuspectCOVID19Situation" THEN activate emergency call
High Level Temperature Situation | IF user has TemperatureSensor data AND Temperature Level >= 38 THEN user has situation "HighLevelTemperatureSituation"
Diabetes Type 1 Situation | IF user has GlucoseSensor data AND Glucose Level > Min Value AND Glucose Level < Max Value THEN user has situation "DiabetType1Situation"

Table 3. Example of rules specified by the developer

Services Rules | IF Service IS Health-Care AND Service Has Type "Service Type" AND User HAS Situation AND Situation Has Type "Situation Type" WHERE "Situation Type" IS EQUAL TO "Service Type" THEN Situation Trigger Service WHERE Service Is Provided To User
Smart Services Rules | IF Service IS Health-Care AND Service Is Deployed On Device "D1" AND User HAS Situation AND Situation Is Detected By Detection_Function AND Situation Depends On Device "D2" AND Detection_Function Has Modality "Modality Type" THEN Situation Migrate Service WHERE Service Is Deployed On Device "D2"
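For illustration, the doctor-specified rules of Table 2 can be re-expressed as plain predicates as below. The thresholds come from the table; the encoding itself is ours, not the paper's SWRL source, and the glucose bounds are left unset because the paper states they are defined by the domain expert.

```python
# Illustrative re-encoding of the Table 2 rules as plain predicates.

GLUCOSE_MIN = None  # Min Value: to be set by the domain expert (the doctor)
GLUCOSE_MAX = None  # Max Value: to be set by the domain expert (the doctor)


def suspect_covid19(user: dict) -> bool:
    # IF user has score >= 4 THEN situation "SuspectCOVID19Situation"
    return user.get("score", 0) >= 4


def high_level_temperature(user: dict) -> bool:
    # IF Temperature Level >= 38 THEN situation "HighLevelTemperatureSituation"
    temperature = user.get("temperature")
    return temperature is not None and temperature >= 38


def diabetes_type1(user: dict) -> bool:
    # IF Glucose Level > Min Value AND Glucose Level < Max Value
    # THEN situation "DiabetType1Situation" (as stated in Table 2)
    glucose = user.get("glucose")
    if glucose is None or GLUCOSE_MIN is None or GLUCOSE_MAX is None:
        return False
    return GLUCOSE_MIN < glucose < GLUCOSE_MAX
```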
(1) Temperature: the value is captured by the "TemperatureSensor." If this value is >= 38°C ("HighLevelTemperatureSituation"), then our system assigns 2 points to the user's score.
(2) Contact: if the user had direct contact with a person carrying COVID-19, then our system assigns 3 points to the user's score.
(3) Physiological factors: if the user has any of the following symptoms: dyspnea, cough, chest pain, headache, diarrhea, vomiting or sore throat, then our system assigns 1 point to the user's score.
(4) Comorbidities: if the user has any of the following conditions: respiratory failure, chronic kidney failure, liver failure, diabetes, high blood pressure, etc., then our system assigns 1 point to the user's score.

Figure 7. Situation rule ontology

Figure 8. Situation ontology

The contact factor and the physiological factors are obtained explicitly by our system through a questionnaire presented to the user. The temperature and comorbidity parameters are obtained implicitly by our system through biosensors.
If the user's score is >= 4, then the situation is the "Suspect COVID 19 Situation" (Table 2). In this case, the call service of the care provider or relevant health facility is automatically triggered and a state of emergency is notified, in order to intervene as soon as possible and guide the suspected user through the necessary instructions. Otherwise, this is a "Non Suspect COVID 19 Situation" and the user may have other health abnormalities depending on the values captured.
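Putting the four categories together, a minimal sketch of the scoring logic described above (the function and parameter names are ours; the point values and the threshold of 4 are taken from the text):

```python
def covid19_score(temperature: float, had_contact: bool,
                  symptoms: list, comorbidities: list) -> int:
    """Compute the suspicion score from the four factor categories."""
    score = 0
    if temperature >= 38:          # HighLevelTemperatureSituation
        score += 2
    if had_contact:                # close contact with an infected person
        score += 3
    if symptoms:                   # dyspnea, cough, chest pain, ... (1 point)
        score += 1
    if comorbidities:              # respiratory failure, diabetes, ... (1 point)
        score += 1
    return score


def classify(score: int) -> str:
    return "SuspectCOVID19Situation" if score >= 4 else "NonSuspectCOVID19Situation"


# Example: fever plus close contact reaches the threshold (2 + 3 = 5 >= 4).
print(classify(covid19_score(38.6, True, [], [])))
```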

4.1 Scenario
Let us take the example of a mobile "user A" who uses his smartphone and wants to use our system to monitor his health and have his situations managed more conveniently by "user B" (a doctor in our case). The smartphone automatically connects to our system, which allows him to authenticate himself (a) or create an account (b).

When “user A” is logged into our system, he can semantically track his health status and/or
detect abnormal situations in a given location. He answers a questionnaire (c) concerning his
age, his physiological condition (Dyspnea, cough, chest pain, headache, diarrhea, vomiting,
sore throat, etc) and whether he had contact with people infected with COVID-19. This
information, together with the values captured by the intelligent sensors, constitutes all the
contextual information.

 All this information is collected by the data acquisition layer and stored in a database; in other words, the information simulated via the prototype interfaces is automatically stored in a database.
 This information is converted into OWL format and inserted into our ontology by the context processing layer (an illustrative sketch of this step follows the list).
 Then the reasoning layer uses the situation inference engine, which is based on JESS, to execute the situation rules presented in Table 2 and infer the situations.
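As an illustrative sketch of the conversion step mentioned in the second bullet, the snippet below uses the rdflib library to emit a few triples for a user observation. The namespace and property names are invented for illustration and do not come from the paper's ontology.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace; the real ontology IRI is not given in the paper.
COVID = Namespace("http://example.org/covid19-ontology#")

g = Graph()
g.bind("covid", COVID)

user = COVID.UserA
g.add((user, RDF.type, COVID.User))            # declare the individual
g.add((user, COVID.hasTemperature, Literal(38.7)))
g.add((user, COVID.hadCloseContact, Literal(True)))

print(g.serialize(format="turtle"))
```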

The "user A" profile will be analyzed in real time by a doctor. Our system will migrate the "user A" profile (name, age, place, physiological parameters, comorbidity parameters) to the doctor's interface (d) in order to assess the risks of COVID-19.

For this scenario, our system assigned a score of 7 to "user A" based on his current situation. Hence, the situation is "Suspect COVID 19" (Table 2). In this situation, the emergency number is called automatically and our system puts a doctor on the line with "user A". The latter receives an interface (e) showing his state of health and the instructions to follow (stay at home; isolate yourself; cover your nose and your mouth; etc.).

5. Performance study
The objective of our approach is to detect in real time the persons affected by COVID-19, to intervene at the right time and place, and to reduce the propagation rate of the disease. In order to show the contribution of our approach, we carried out a study on some cities of Tunisia. Figure 9 presents a comparative study of the number of cases detected by our system compared to the number of cases declared by the Ministry of Health.

Figure 9. COVID-19 cases

Figure 10. Propagation rate of COVID-19
We note that the number of cases reported by the Ministry of Health does not reflect the actual number of cases, which increases the rate of spread of the disease.
Figure 10 presents the performance result of our approach in reducing the propagation rate of COVID-19 thanks to the real-time intervention of our system.

6. Conclusion and future work


In this article, we have developed a real-time COVID-19 detection system based on the
ontological description supported by SWRL rules. The proposed ontology contains all
relevant concepts related to COVID-19, including personal information, location, symptoms,
risk factors, laboratory test results and treatment planning. The SWRL rules are constructed
from medical recommendations.
Although this article presents a new vision for the construction of monitoring systems
for COVID-19, several challenges remain.
 Security and privacy: the collection and use of personal information is a serious threat to privacy. It is therefore necessary to authenticate authorized entities, either by identifying the organizations responsible for deploying these applications or by building a reliable and secure communication network.
 Generalization: owing to time constraints, the ontology model was developed to collect and share data related to COVID-19, but the same ontology approach can be generalized and extended to other diseases, such as diabetes, cardiovascular diseases, etc.

References
Akiki, P.A., Bandara, A.K. and Yu, Y. (2012), “Using interpreted runtime models for devising adaptive
user interfaces of enterprise applications”, in Proceedings of the 14th International Conference on
Enterprise Information Systems. Wroclaw, SciTePress, Poland, pp. 72-77.
Akiki, P.A., Bandara, A.K. and Yu, Y. (2014), “Adaptive model-driven user interface development
systems”, ACM Computing Surveys, Vol. 47 No. 1.
Alegre, U., Augusto, J.C. and Clark, T. (2016), “Engineering context-aware systems and applications”,
Journal of Systems and Software, Vol. 117 No. C, pp. 55-83.
Balme, L., Demeure, A., Barralon, N., Coutaz, J. and Calvary, G. (2004), "CAMELEON-RT: a software architecture reference model for distributed, migratable, and plastic user interfaces", in Proceedings of the 2nd European Symposium on Ambient Intelligence, Springer, Eindhoven, The Netherlands, pp. 291-302.
Bettini, C., Brdiczka, O., Henricksen, K., Indulska, J., Nicklas, D., Ranganathan, A. and Riboni, D. (2010),
“A survey of context modelling and reasoning techniques”, Pervasive and Mobile Computing,
Vol. 6 No. 2, pp. 161-180.
Blouin, A. and Beaudoux, O. (2010), "Improving modularity and usability of interactive systems with
Malai”, in Proceedings of the 2nd ACM SIGCHI Symposium on Engineering Interactive
Computing Systems. ACM, New York, NY, pp. 115-124.
Blouin, A., Morin, B., Beaudoux, O., Nain, G., Albers, P. and Jézéquel, J.M. (2011), “Combining aspect-
oriented modeling with property-based reasoning to improve user interface adaptation”, in
Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing
Systems. ACM, Pisa, Italy, pp. 85-94.
Brdiczka, O., Crowley, J.L. and Reignier, P. (2007), Learning Situation Models for Providing Context-Aware Services, Springer.
Clark, K. and Parsia, B. (2020), "JESS: OWL 2 reasoner for Java", available at: http://clarkparsia.com/Jess
Duarte, C. and Carrico, L. (2006), “A conceptual framework for developing adaptive multimodal
applications”, in Proceedings of the 11th International Conference on User Interfaces, ACM,
Sydney, Australia, pp. 132-139.
Greenfield, A. (2006), Everyware: The Dawning Age of Ubiquitous Computing, New Riders, Berkeley, CA, ISBN 0-321-38401-6.
Kapitsaki, G.M., Prezerakos, G.N. and Tselikas, N.D. (2010), "Context-aware web service development: methodologies and approaches", in Enabling Context-Aware Web Services: Methods, Architectures, and Technologies, 1st ed., Chapman and Hall/CRC Press, pp. 3-29.
Kranz, M., Holleis, P. and Schmidt, A. (2010), "Embedded interaction: interacting with the internet of things", IEEE Internet Computing, Vol. 14 No. 2, pp. 46-53.
Lehmann, G., Rieger, A., Blumendorf, M. and Albayrak, S. (2010), “A 3-layer architecture for smart
environment models”, Proceedings of the 8th Annual IEEE International Conference on
Pervasive Computing and Communications. IEEE, Mannheim, Germany, pp. 636-641.
Oreizy, P., Gorlick, M.M., Taylor, R.N., Heimhigner, D., Johnson, G., Medvidovic, N., Quilici, A.,
Rosenblum, D.S. and Wolf, A.L. (1999), “An architecture-based approach to self-adaptive
software”, IEEE Intelligent Systems, IEEE, Vol. 14 No. 3, pp. 54-62.
Pantsar-Syväniemi, S., Kuusijärvi, J. and Ovaska, E. (2011), "Context-awareness microarchitecture for smart spaces", in Advances in Grid and Pervasive Computing, Lecture Notes in Computer Science, Vol. 6646, pp. 148-157.
Tran, M.H., Han, J. and Colman, A. (2009), “Social context: supporting interaction awareness in
ubiquitous environments”, in Mobile and Ubiquitous Systems: Networking and Services,
MobiQuitous, pp. 1-10.
Weiser, M. (1993), “Some computer science issues in ubiquitous computing”, Communications of the
ACM, Vol. 36 No. 7, pp. 74-84.
World Health Organization (2020), “Homepage”, available at: www.who.int/home

Corresponding author
Mohamed Sbai can be contacted at: mohamed.sbai155@gmail.com
