
Future Generation Computer Systems 92 (2019) 564–581

Contents lists available at ScienceDirect

Future Generation Computer Systems


journal homepage: www.elsevier.com/locate/fgcs

Zeus: A resource allocation algorithm for the cloud of sensors


Igor L. Santos (a,d,*), Luci Pirmez (a), Flavia C. Delicato (a), Gabriel M. Oliveira (a), Claudio M. Farias (a), Samee U. Khan (b), Albert Y. Zomaya (c)

a: Programa de Pós-Graduação em Informática (PPGI), Universidade Federal do Rio de Janeiro, RJ 20001-970, Brazil
b: Faculty of Electrical and Computer Engineering, North Dakota State University, ND 58102, USA
c: Centre for Distributed and High Performance Computing, School of Information Technologies, The University of Sydney, NSW 2006, Australia
d: Centro Federal de Educação Tecnológica Celso Suckow da Fonseca (CEFET-RJ) - DEPRO, RJ 20271-110, 229, Brazil

highlights

• We formulate the resource allocation problem in CoS.
• Zeus is a hybrid algorithm proposed to solve the resource allocation problem in CoS.
• Zeus is scalable in terms of the number of VNs and applications supported.
• By identifying requests in common for applications, Zeus saves energy for sensors.
• By leveraging edge computing, Zeus supports delay-sensitive applications.

Article info

Article history:
Received 16 June 2017
Received in revised form 26 February 2018
Accepted 12 March 2018
Available online 20 March 2018

Keywords: Cloud of sensors; Edge computing; Resource allocation; Mixed integer non-linear programming; Data provisioning; Data freshness

Abstract

The cloud of sensors (CoS) paradigm brings together cloud computing, edge computing and wireless sensor and actuator networks (WSAN) in the form of a three-tier CoS architecture. By employing CoS virtualization, a set of virtual nodes (VNs) is made available to applications, decoupling them from the CoS infrastructure. Assigning VNs to application requests, in order to timely and efficiently meet application requirements, gives rise to the challenge of resource allocation in CoS. In this work, we formulate the problem of resource allocation in CoS and propose Zeus, a hybrid (partly decentralized) algorithm for solving it. Zeus has two key features. First, it is able to identify requests that are common to multiple applications and perform their required tasks only once, sharing the results among the applications to save resources. Second, its hybrid approach, leveraging edge computing, makes Zeus scalable in terms of the number of VNs and applications supported, and suitable for delay-sensitive applications.

© 2018 Elsevier B.V. All rights reserved.

* Corresponding author.
E-mail addresses: igor.santos@cefet-rj.br (I.L. Santos), luci@nce.ufrj.br (L. Pirmez), fdelicato@dcc.ufrj.br (F.C. Delicato), gmartinsoc@gmail.com (G.M. Oliveira), claudiofarias@nce.ufrj.br (C.M. Farias), samee.khan@ndsu.edu (S.U. Khan), zomaya@it.usyd.edu.au (A.Y. Zomaya).

1. Introduction

The novel cloud of sensors (CoS) paradigm [1] has gained momentum recently. The CoS combines wireless sensor and actuator networks (WSANs) [2] with cloud computing [3], making up a two-tier architecture. With the more recent emergence of edge computing [4], which leverages computation and networking capabilities at the edge of the network, it became interesting to investigate a new model, where cloud, sensors and edge devices are integrated in a three-tier architecture, leveraging mutual advantages. On the one hand, WSANs can benefit from the virtually unlimited resources of clouds, besides the low latency, mobility and location-awareness support of the edge, to implement services that exploit the sensors and their produced data [4]. On the other hand, the cloud/edge computing paradigms can benefit from WSANs by extending their scope to deal with real-world objects, enabling the delivery of new services in real-world scenarios [5].

Virtualization is a key feature that grounds both the cloud and edge computing paradigms [4], and can also be used to perform WSAN virtualization [5], the predecessor of CoS virtualization [1]. The authors in [6] define virtualization in terms of hiding from clients the variety of types of infrastructures, platforms and data available at the back-end, facilitating application delivery and allowing resource sharing among applications. Among the several approaches for performing CoS virtualization [7,5], in our previous work [1] we proposed Olympus, a CoS virtualization model based on information fusion. By employing Olympus, a set of virtual nodes (VNs) is made available to applications, providing a clean

https://doi.org/10.1016/j.future.2018.03.026
0167-739X/© 2018 Elsevier B.V. All rights reserved.
decoupling between the CoS infrastructure and applications. Assigning such VNs to application requests in a timely and efficient way raises the challenge of resource allocation in CoS.

Similarly to traditional clouds, the goal of resource allocation in CoS is to maximize the amount of application requirements met by the CoS infrastructure, while ensuring a target operational cost [8]. However, in CoS, the data acquisition from the underlying infrastructure and the data sharing are mandatory, in contrast to the sharing of the computational, storage and communication capabilities of such infrastructure, as in traditional clouds. Therefore, we consider resource allocation in CoS under the perspective of data provisioning [9,10]. The philosophy behind data provisioning consists of abstracting data acquisition/access from users and applications, thus allowing data sharing among applications while meeting their requirements. Among the possible requirements, data freshness stands out, given its importance for distributed systems based on sensing data. Although there are several definitions [11], in this work we quantify the freshness of a given data item as the time elapsed since its acquisition.

As in other works in the literature [12], the problem of resource allocation in CoS falls within the typical class of mixed integer non-linear programming problems (MINPP) [13,14]. To solve our MINPP in order to seek the optimal solution, there are a number of methods in the literature, such as linear programming techniques and their variants [15]. However, our formulated MINPP is an NP-complete problem [16], so its explosive combinatorial nature hinders the quick search for optimal solutions when it grows in terms of the number of VNs and applications. This aspect harms the typical delay-sensitive applications from the scenario of edge computing, which usually require strict response times, in the order of milliseconds to sub-seconds [4]. In this sense, another challenge arises regarding how to solve, in polynomial time, practical instances of our MINPP with arbitrary sizes [17]. For NP-complete problems, an entire research area deals with the quick search for solutions, emphasizing heuristic techniques that produce near-optimal solutions and also show low computation overhead [18–20]. Among the several classifications of heuristic techniques shown in [20], we chose to use constructive techniques for the following reasons. First, they do not require initial solutions, allowing the decision-making node that uses these constructive techniques to be more autonomous, in terms of requiring fewer inputs to make decisions, than other techniques shown in [20]. In addition, constructive techniques do not require the decision-making node to have information about the whole scenario to make decisions. Instead, it can make localized decisions, with the information available locally, fostering decentralized decision-making.

In face of the abovementioned challenges, the goal of this work is to propose Zeus, a heuristic-based algorithm to solve the MINPP for resource allocation in CoS. Our algorithm has two key features, from which come the main benefits of our proposal, as we show in the performed evaluations. First, Zeus analyzes the application requests and identifies those that are common to multiple applications, performing them only once and sharing the outcome among these multiple applications. Such a feature allows Zeus to contribute to saving resources, consequently improving the WSANs' lifetime. Second, Zeus leverages the concept of edge computing in its operation. By doing so, the resource allocation process is not limited to occurring at the cloud tier, as in two-tier CoS architectures [7]. Moreover, by taking advantage of edge computing, Zeus is designed as a hybrid algorithm. In centralized resource allocation algorithms for CoS [16,21,22], the decision is taken by a centralized entity with a global view of the CoS, and for a set of passive VNs. In turn, in decentralized algorithms [23], the resource allocation decision is taken cooperatively among VNs, based on their local views and independently of a centralized entity. In Zeus, we share features of both centralized and decentralized algorithms. We combine a centralized decision phase, which runs in either the cloud or edge tier, and a decentralized decision phase, held fully within the VNs. This hybrid approach makes Zeus scalable in terms of the number of VNs and applications to be executed in the CoS. Moreover, handling the decision on resource allocation at the edge of the network allows Zeus to avoid the unpredictable and often high latency typical of communication with the cloud. Therefore, Zeus is suitable for delay-sensitive applications, with restrictive response times.

The rest of this paper is organized as follows. Section 2 describes aspects of CoS virtualization, used to ground the challenge of resource allocation in CoS. Section 3 describes the MINPP formulation and Zeus. Section 4 describes the performed evaluation. Sections 5 and 6 discuss related work and final remarks, respectively.

2. Background on CoS virtualization

The CoS infrastructure, VNs and applications are the key entities involved in the challenge of resource allocation in CoS. A CoS virtualization model, such as Olympus [1], typically provides models for describing such entities. However, Olympus builds on a typical two-tier CoS architecture, encompassing only the sensor and cloud tiers. Thus, in this work we extended Olympus in order to better exploit the capabilities of the edge tier. In Sections 2.1, 2.2 and 2.3 we describe the models, based on Olympus, used to respectively represent the CoS infrastructure, VNs, and applications.

2.1. The CoS infrastructure

As shown in Fig. 1, we consider a CoS infrastructure composed of the sensors tier (ST), edge tier (ET) and cloud tier (CT). The main entities of our CoS infrastructure are (i) the physical sensor and actuator nodes (PSANs) in the ST, (ii) the edge nodes (ENs) in the ET, and (iii) the cloud nodes (CNs) in the CT. The definitions of such entities are given per Eqs. (1)–(3).

PSAN = ⟨PS, TM, BW, LS, LA, RE, WI, LC⟩  (1)
EN = ⟨PS, TM, BW, LC⟩  (2)
CN = ⟨PS, TM, BW, LC⟩.  (3)

Definition 1. Per Eq. (1), we describe each PSAN as a tuple, in terms of processing speed PS, total memory TM, bandwidth BW, list of sensing units LS, list of actuation units LA, remaining energy RE, identification of the physical WSAN infrastructure to which it pertains WI, and information about its geographical location LC.

Definition 2. Per Eqs. (2) and (3), we describe, respectively, each EN and CN as tuples, in terms of processing speed PS, total memory TM, bandwidth BW and information about its geographical location LC.

The ST encompasses the physical WSAN infrastructures (WSANIs), each of which is owned and administered by an infrastructure provider (InP). Each WSANI comprises a set of PSANs deployed over a geographical area and connected by wireless links, and every PSAN pertains to a single WSANI. The InPs define the physical and administrative (logical) boundaries of their respective WSANIs. Moreover, each PSAN must have a valid communication path to reach the sink node within such boundaries. These communication paths are defined by underlying networking protocols chosen by the InP, such as [24–29], and are out of the scope of our work.

The ET comprises typical physical edge devices, while the CT comprises multiple data centres (more powerful than physical edge devices). Since both edge and cloud are strongly based on virtualization, we consider the existence of edge nodes (EN) at the
Fig. 1. The CoS infrastructure.

ET and cloud nodes (CN) at the CT, which are virtual instances hosted by the physical devices of the ET and CT, respectively. The deployment of ENs and CNs on their respective physical hosts is transparent to our three-tier CoS architecture, and is handled by typical cloud and edge computing virtualization models. As in [21], our proposal is agnostic to the cloud and edge computing virtualization models adopted to provide the CNs and ENs, so we do not present the properties of such virtualization models in detail. We follow the principle of overlay virtualization from Khan et al. [30], with respect to building VNs in an overlay layer, over either physical nodes (PSANs) or readily available virtual nodes (ENs and CNs). According to Khan et al., overlays have several advantages: they are distributed, lack central control and allow resource sharing, making them an ideal candidate for CoS virtualization.

2.2. Virtual node model

Based on Olympus [1], in our current work the VNs are computational entities implemented as software instances that run on top of the edge or cloud tiers, and coordinate a set of underlying nodes (either PSANs, ENs or CNs). In this sense, we consider VNs analogous to typical IoT resources as defined in [31,32]. Moreover, since Olympus is an information fusion based approach for CoS virtualization [1], each VN represents an information fusion technique. The formal definition of a VN used in our current work is given per Eq. (4).

VN = ⟨PPw, TM, BW, LC, NNRV, NRV, PLo, TLo, DUpTm, LU⟩.  (4)

Definition 3. Per Eq. (4), we describe each VN as a tuple in terms of the following properties: processing power (PPw), total memory (TM), bandwidth (BW) and location coordinates (LC). In addition, VN descriptions include the lists of provided values of non-negotiable and negotiable application requirements (NNRV and NRV). Moreover, they include the local processing load required to respond to an application request (PLo), the transmission load of output data (TLo), the time required for performing a data update (DUpTm), and the list of underlying nodes (LU).

During CoS operation, the resource provisioning process [21,33] handles the instantiation of VNs, and their management after instantiation. In such a process, the values of the properties of VNs are initialized and updated, when necessary. Properties such as PPw, TM, BW and LC are computed as functions over the respective capabilities of the underlying nodes. The other properties of VNs in Eq. (4) depend on the respective properties of the information fusion technique implemented by the VN. The resource provisioning process can be either proactive or reactive. In the proactive process, the InPs have the responsibility to instantiate VNs and configure their properties independently of the arrivals of applications during the CoS operation. In the reactive process, the instantiation of VNs occurs dynamically and automatically, in response to the arrival of an application in the CoS. We consider a proactive process in this work, since it imposes a lower overhead on the CoS infrastructure. In our proactive process, all VNs are instantiated and allocated to run on their host ENs or CNs from the beginning of CoS operation. Also, each VN is connected to all the underlying nodes in its respective LU. The InPs define, according to their own criteria, the information fusion technique implemented by each VN and how to calculate each of the properties of VNs. Regardless of the resource provisioning process, to reduce the overhead caused on the CoS infrastructure we consider that VNs can temporarily release their underlying nodes, whenever they are not providing data to an application.

The data provided by a VN is the output of its implemented information fusion technique. Such output data may comprise sensing/processed data or control actions, in the case of VNs that perform actuation. Either way, such output data is always of a single data type. We consider that each data type is unique, and that a variable number of VNs can provide data of the same type. The InPs define and describe the data types available in the CoS. For instance, in the context of structural health monitoring (SHM), a data type D can be a structural damage index calculated through a given damage detection technique [34]. In addition, InPs can define the data types R1 and R2 as the raw data obtained from two different structures (1 and 2, respectively), each of which can be used for calculating D.

For updating its provided data, each VN coordinates the engagement of its underlying CoS infrastructure to perform tasks such as sensing, processing or actuation on the environment, according to the definition of its information fusion technique. For this coordination, the VN uses a typical WSAN task scheduling and execution algorithm. In our work, this procedure is called a data update and is transparent to users and applications. Proposing a WSAN task scheduling and execution algorithm is out of the scope of our work. Therefore, we suggest existing approaches such as [35,36] and, in our work, we consider that each VN has its own fixed value of DUpTm in Eq. (4).

For providing data to an application request, VNs must meet two categories of application requirements: non-negotiable, which must be completely (100%) fulfilled by VNs, and negotiable, which allow partial fulfilment (under 100%) [37]. The values of requirements provided by VNs are stored in NNRV and NRV, from Eq. (4). In our work, the data type is a non-negotiable requirement. In addition, besides updating the value of the data provided by the VN, the data updates are used to improve the values of the negotiable requirements, such as the data freshness. For instance, an application request may define a data freshness threshold of three seconds or less. The most recent data obtained by the VN must meet this threshold, otherwise a data update is necessary.

Ideally, each VN should always update its data upon its allocation to an application request, in order to meet all requests with the best (zero) data freshness. However, to avoid the constant
re-execution of data updates, it is possible to meet requests by simply re-using historical data obtained from previous updates and stored locally in the VN, reducing the system overhead. In our work, each VN exposes a data provisioning service, an interface between VNs and application requests. The implementation of this service consists of deciding between performing a data update or simply accessing the VN memory and retrieving historical data, before providing data to a request.

2.3. Application model

Our application model is inspired by a workflow-based approach, widely used in Web Services [38]. End users model an application as a workflow that comprises several activities. In this work, we call each activity of an application workflow an application request to data of a given data type. Thus, requests are logical interfaces to the data provisioning services implemented by VNs. Users from any level of expertise in an application domain define requests based on the data types available in the CoS. The InPs are responsible for considering the desired level of expertise of their target users when defining the data types. Considering the example in the context of SHM, presented in Section 2.2, users from a higher level of expertise can be interested in building applications using D and R1, or D and R2. In other words, the VN that provides D can receive data from either the VN that provides R1 or the VN that provides R2. To support users from a lower level of expertise, the InPs can abstract data acquisition. For instance, InPs might choose to provide only data types D1 or D2, which respectively represent the damage indexes calculated for civil structures 1 and 2. An application manager subsystem in the cloud tier supports users in the application development process. This subsystem allows users to express requests for the data types existing in the CoS. The specification of this subsystem is extensive, and is out of the scope of our work. For discovering and selecting the available data types in the CoS, we suggest using typical service discovery methods [39,37]. To express and manage applications and their requests, we suggest using domain specific languages (DSL) [40,41] or semantic queries [36]. We define requests and applications in our work per Eqs. (5) and (6), respectively.

RQ = ⟨PE, NNR, NR⟩  (5)
APP = ⟨TI, LRQ⟩, TI ∈ R+, TI ≤ τ.  (6)

Definition 4. Per Eq. (5), each request is described in terms of the list of predecessor requests (PE) and the lists of non-negotiable and negotiable requirements (NNR and NR).

Definition 5. Per Eq. (6), users describe each application in terms of its arrival time in the CoS, within the timespan τ (TI), and the list of requests (LRQ).

We model precedence relationships among requests of the same application in terms of the data dependence among requests. Users must define the list PE, including identifiers of other requests from the same application (from LRQ) that must run to completion before starting the current request. Based on such information, we model the requests of an application as vertices connected by directed edges without loops, thus forming a directed acyclic graph (DAG) [35]. Fig. 2 shows an example of an application with eight requests modelled as a DAG. Moreover, each request includes the description of its application requirements in the lists NNR and NR. In addition, for NR, the list includes the thresholds that define the negotiable interval of each requirement. For instance, in this work, a data freshness threshold is included in NR, and an identifier of a data type in NNR.

In the current version of our work, we consider only time-based applications [42,43]. A time-based application defines the exact moment in time for demanding data from the VN, using a pull model. We also consider that requests are aperiodic, i.e. a VN provides data to a request only once, and then the request is never reactivated. During CoS operation, applications arrive in the CoS through logical entities called application entry points (AEPs). Applications may arrive either through the internet or from a user device (such as a PC or smartphone) connected directly to an EN. Either the CNs or ENs are possible candidates to hold AEPs. Considering that the CoS operates for a given timespan τ > 0, an application must arrive at a given AEP at a time TI within this timespan.

Considering the CoS infrastructure, VN and application models just described, we conclude that the challenge of resource allocation in CoS consists of making three decisions. The first one regards which VN should provide data to each request. The second decision regards when each of such data provisions will occur. The third decision regards whether each VN will perform a data update or reuse data in the data provision process. In Section 3 we describe our approach to handle these decisions jointly in our proposed algorithm for resource allocation in CoS.

3. Resource allocation in CoS

In this section, we describe the proposed MINPP for resource allocation in CoS and our proposed hybrid algorithm for solving it, called Zeus. Section 3.1 introduces the parameters and decision variables used in our MINPP. Sections 3.2 and 3.3 describe the MINPP formulation and the adopted CoS energy model, respectively. Finally, Section 3.4 presents Zeus.

3.1. Parameters and variables design

Among the MINPP input parameters, we define the sets in Table 1, and the remaining parameters in Table 2. Regarding the entities defined in Section 2.1, we define the sets SPSAN, comprising the PSANs, and SECN, comprising the hosts of VNs (ENs and CNs).

Moreover, we model a resource provisioning function per Eq. (7), implemented by the proactive resource provisioning process mentioned in Section 2.2. This resource provisioning function maps the sets SPSAN and SECN to the set SVN, which comprises the VNs defined in Section 2.2. This function also outputs the matrices of VN × PSAN mapping (VPM_ik) and VN × host mapping (VECM_ih). The matrices VECM_ih and VPM_ik store the usage levels (0%–100%) of, respectively, each host by each VN and each PSAN by each VN. Such matrices are also input parameters to our MINPP.

f_phy→vir(SPSAN, SECN) = ⟨SVN, VPM_ik, VECM_ih⟩.  (7)

In addition, we represent the applications defined in Section 2.3 in terms of a digraph G = (SRQ, SPRE), where (a, b) ∈ SPRE represents that request a must be met before request b. We assume the digraph G only contains immediate precedence relationships, that is, if SRQ = {1, 2, 3} and {(1, 2), (2, 3)} ⊂ SPRE, then (1, 3) ∉ SPRE.

In our MINPP formulation, we consider a time-based decision window, defined by a fixed duration (ETM_DW, in Table 2). The resource allocation decision, achieved by solving the MINPP, is valid for the duration of this decision window. Such a decision must be reviewed whenever any of the sets defined in Table 1 changes (for instance, due to application arrivals), or when the execution of the current resource allocation decision is completed (when the decision window ends). We consider partitioning the decision window into P periods, so that it is possible to allocate VNs to requests within these periods. We refer to such periods through the

Fig. 2. Application modelled as a DAG of requests.
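An application workflow like the one in Fig. 2 can be represented directly as a DAG of requests with immediate precedence pairs. The sketch below is a minimal illustration; the request identifiers and edges are hypothetical examples, not taken from the paper:

```python
# Sketch: an application as a DAG of requests (cf. Fig. 2).
# Request identifiers and precedence edges below are hypothetical examples.
from graphlib import TopologicalSorter

# SPRE-style immediate precedence pairs (a, b): request a must be met before b.
SPRE = {(0, 1), (0, 2), (1, 3), (2, 3), (3, 4), (4, 5), (4, 6), (5, 7), (6, 7)}

# Build the PE list (predecessors) of each request, as in Eq. (5).
PE = {rq: set() for pair in SPRE for rq in pair}
for a, b in SPRE:
    PE[b].add(a)

# Any valid execution order must respect every precedence; a DAG guarantees
# such an order exists (TopologicalSorter raises CycleError otherwise).
order = list(TopologicalSorter(PE).static_order())
assert all(order.index(a) < order.index(b) for a, b in SPRE)
```

Here `TopologicalSorter` treats the `PE` mapping as node-to-predecessors, matching the role of the PE list in Definition 4.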

Table 1
Summary of sets.
SPSAN = {k | k ∈ N0, k < K}: set of K PSANs; subscript index k
SECN = {h | h ∈ N0, h < H}: set of H hosts (EN or CN) of VNs; subscript index h
SVN = {i | i ∈ N0, i < I}: set of I VNs; subscript indexes i, c
SRQ = {j | j ∈ N0, j < J}: set of J requests; subscript indexes j, a, b
SPRE = {(a, b) | a, b ∈ SRQ, a ≠ b}: set of precedences (a precedes b); subscript index (a, b)
SP = {p | p ∈ N0, p < P} and SP1 = {p | p ∈ N, p < P}: set of all P periods, and set of periods starting from p = 1; subscript indexes p, q
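For concreteness, the index sets of Table 1 can be instantiated for a toy problem; all sizes below are arbitrary illustration values, not from the paper:

```python
# Toy instantiation of the MINPP index sets summarized in Table 1.
K, H, I, J, P = 4, 2, 3, 5, 6          # arbitrary example sizes

SPSAN = set(range(K))                  # k: physical sensor and actuator nodes
SECN = set(range(H))                   # h: hosts (ENs or CNs) of VNs
SVN = set(range(I))                    # i, c: virtual nodes
SRQ = set(range(J))                    # j, a, b: requests
SP = set(range(P))                     # p, q: all periods, including dummy period 0
SP1 = set(range(1, P))                 # periods from p = 1 onwards

# SPRE: immediate precedences among requests (hypothetical pairs).
SPRE = {(0, 1), (1, 2), (2, 3)}
assert all(a in SRQ and b in SRQ and a != b for a, b in SPRE)
assert SP1 == SP - {0}                 # period 0 disallows allocations and updates
```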

Table 2
Summary of remaining parameters.
BW_ic: Bandwidth between VNs i and c
TLo_i: Transmission load of VN i
PLo_i: Processing load of VN i
LDUPW_i: Last data update time of VN i, achieved during the previous decision window
DUpTm_i: Data update time of VN i
PPw_i: Processing power of VN i
PPw_h: Processing power of the host h (EN or CN)
DFT_j: Data freshness threshold of request j
VECM_ih: VN i × host h mapping
Pmax_h: Maximum power dissipated by host h
b_ik: Amount of bytes requested by VN i to PSAN k
ETM_DW: End time of the decision window
WEVRA_i: Limit of energy reserved for the window in VN i (resource allocation)
WEP_k: Limit of energy reserved for the window in PSAN k
WEVDU_i: Limit of energy reserved for the window in VN i (data updates)
Vsup_k: Sensing supply voltage of PSAN k
Isens_k: Total current required for the sensing activity of PSAN k
ξsens_k: Time duration for the sensing component of PSAN k to collect data
Eelec_k: Radio energy dissipation of PSAN k
εamp_k: Amplifier energy dissipation of PSAN k
d_ik: Distance between PSAN k and VN i
[C × Vdd² + f]_k: Processing constant of PSAN k
n_ik: Number of actuations of PSAN k to VN i
Eact_k: Energy spent for each actuation by PSAN k
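Several of the PSAN parameters in Table 2 (Vsup, Isens, ξsens, Eelec, εamp, d, Eact) are the usual inputs of first-order sensing, radio and actuation energy models. The paper's own energy model (Section 3.3) is not reproduced in this excerpt, so the sketch below only illustrates how such parameters commonly combine, and is an assumption:

```python
# Sketch of per-PSAN energy terms built from the Table 2 parameters.
# These are the classic first-order formulas such parameters usually feed;
# the paper's exact Section 3.3 expressions may differ.

def sensing_energy(vsup_k, isens_k, xi_sens_k):
    # E_sens = Vsup * Isens * ξsens: supply voltage x current x sensing duration.
    return vsup_k * isens_k * xi_sens_k

def transmission_energy(eelec_k, eamp_k, b_ik, d_ik):
    # First-order radio model: E_tx = Eelec*b + εamp*b*d^2, for b_ik bytes
    # sent by PSAN k over distance d_ik.
    return eelec_k * b_ik + eamp_k * b_ik * d_ik ** 2

def actuation_energy(eact_k, n_ik):
    # n_ik actuations requested by VN i, each costing Eact per actuation.
    return eact_k * n_ik

# Example with arbitrary values for one PSAN serving one VN:
total = (sensing_energy(3.0, 0.002, 0.5)
         + transmission_energy(50e-9, 100e-12, 1024, 10.0)
         + actuation_energy(0.01, 3))
```

Such per-window totals would then be compared against budget limits like WEP_k from Table 2.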

p
Table 3 real-valued decision variable STMi indicates the start time of each
Decision variables. period p in a VN i.
Definition Description
p
xij ∈ {0, 1} 3.2. MINPP formulation for resource allocation in CoS
Whether VN I is allocated to request j in period p
∀i, j, p ∈ SVN , SRQ , SP
p
yi ∈ {0, 1} In this section, we describe the objective function of our prob-
Whether VN i updates its data in period p
∀i, p ∈ SVN , SP lem, defined per Eq. (8). We also describe the constraints of our
p
STMi ∈ R
Start time of period p in VN i problem, defined per Eqs. (18) to (30).
∀i, p ∈ SVN , SP ∑ ∑ ∑ p p
max xaij ∗ uij (8)
j∈SRQ i∈SVN p∈SP 1
set SP. Each VN considers the same amount P of periods, and every
where:
period p has a start time, an end time and a duration. However, the
times and duration vary among VNs. For instance, a given period p ∀i ∈ SVN
p p p−1
could have different start times, end times and durations in VN 1 xaij = xij − xij ∀j ∈ SRQ (9)
and VN 2. ∀p ∈ SP 1

Moreover, our MINPP employs three decision variables shown


p p p ∀i ∈ SVN
in Table 3: xij , yi and STMi . The resource allocation decision p p p
ETMi = TTMi + PTMi + UTMi + STMi
p p
(10)
p
comprises feasible values of such variables. We define xij as binary ∀p ∈ SP
values to indicate whether to allocate VN i to request j in period p. p p ∀i ∈ SVN
UTMi = DUpTmi × yi (11)
A request is always fully processed during a single period. When ∀p ∈ SP
p
assigning xij = 1 at any given period p, for a given VN i and request ∑
p PP wi = VECMih × PP wh ∀i ∈ SVN (12)
j, the values of xij for every period p′ > p must also be equal to
h∈SECN
one, for the same values of i and j. Thus, whenever an allocation
⎨ PLoi
⎧ ∑ p
occurs, it cannot be undone or re-done during the remainder of xaij p>0 ∀i ∈ SVN
PTMi = PP wi j∈SRQ
p p
the decision window. In addition, the binary variables yi indicate (13)
∀p ∈ SP
whether VN i performs a data update in period p. Finally, the

0 p=0
TTM_i^p = Σ_{(a,b) ∈ SPRE} Σ_{c ∈ SVN, c ≠ i} Σ_{q ∈ [1..P], q ≠ p} xa_ia^p × (TLo_i / BW_ic) × xa_cb^q,  ∀i ∈ SVN, ∀p ∈ SP  (14)
LDU_i^p = max({y_i^q × STM_i^q + UTM_i^q + LDUPW_i × (y_i^q − 1) | q ∈ [0..p]}),  ∀i ∈ SVN, ∀p ∈ SP1  (15)
DF_i^p = STM_i^p + UTM_i^p − LDU_i^p,  ∀i ∈ SVN, ∀p ∈ SP1  (16)
u_ij^p = 0 if DF_i^p > DFT_j, and u_ij^p = 1 − DF_i^p / DFT_j if DFT_j ≥ DF_i^p ≥ 0,  ∀i ∈ SVN, ∀j ∈ SRQ, ∀p ∈ SP1.  (17)

Subject to:

xa_ij^p ≥ 0,  ∀i ∈ SVN, ∀j ∈ SRQ, ∀p ∈ SP1  (18)
x_ij^0 = 0,  ∀i ∈ SVN, ∀j ∈ SRQ  (19)
y_i^0 = 0,  ∀i ∈ SVN  (20)
STM_i^0 = 0,  ∀i ∈ SVN  (21)
Σ_{i ∈ SVN} Σ_{p ∈ SP1} xa_ij^p ≤ 1,  ∀j ∈ SRQ  (22)
Σ_{p ∈ SP1} xa_ij^p ≤ FRR_ij,  ∀i ∈ SVN, ∀j ∈ SRQ  (23)
Σ_{i ∈ SVN} Σ_{p ∈ SP1} (x_ia^p − x_ib^p) ≥ 1,  ∀(a, b) ∈ SPRE  (24)
STM_i^p ≥ max({xa_ib^p × STM_c^q × xa_ca^q | (a, b) ∈ SPRE, c ∈ SVN, q ∈ SP, q ≠ p}),  ∀i ∈ SVN, ∀p ∈ SP1  (25)
ETM_i^{P−1} ≤ ETM_DW,  ∀i ∈ SVN  (26)
STM_i^p ≥ ETM_i^{p−1},  ∀i ∈ SVN, ∀p ∈ SP1  (27)
Σ_{h ∈ SECN} ERA_ih ≤ WEVRA_i,  ∀i ∈ SVN  (28)

and as a function of time, which affects data freshness directly. Thus, we consider, for each period p of a VN, besides its start time (STM_i^p), its end time (ETM_i^p), defined per Eq. (10), and its duration. We define the time spent for updating data (UTM_i^p) per Eq. (11). If a decision is made for updating the data of VN i at period p, this time is equal to DUpTm_i. The time spent for processing (PTM_i^p), defined per Eq. (13), is the time necessary for a VN i to perform its internal procedures in order to meet a request j. Each allocation of VN i to a request j generates the standard processing load in the VN i (PLo_i). In addition, the VN i has a processing power (PPw_i), which is defined per Eq. (12). It is a fraction of the processing power of the host (PPw_h), defined by VECM_ih. It is important to mention that no processing occurs in period zero, since no allocation is possible in this period. Finally, per Eq. (14) we define the time spent for transmission (TTM_i^p). When VN i is allocated to request "a", the output data must be transmitted to all VNs c that are allocated to the successors b of request "a". Each of such output data transmissions takes a time equal to the ratio between TLo_i and BW_ic. Eq. (14) calculates, for a VN i, the sum of the times spent for each transmission.

Moreover, we calculate the moment in time of the last data update of a VN i at a given period p (LDU_i^p) per Eq. (15). This is the highest time value, among the current period p and the previous periods of the current decision window, at which a VN i performed a data update. If no data update decision happens during the current decision window, the moment of the last data update considered will be the one that occurred in the previous window (LDUPW_i), which is provided as an input parameter for the problem. Then, we formally define data freshness per Eq. (16). The data freshness of a given VN i in period p is the difference between two moments in time. The first is the moment during the current period p at which a request will be served by VN i, and the second is LDU_i^p. In the best case, the data has just been updated in VN i during the current period p, thus both moments are equal and data freshness is the best (zero).

Finally, we formally define utility per Eq. (17). We assume that the utility decreases linearly and has maximum and minimum values of 100% and 0%, respectively. We define that, whenever the data freshness (DF_i^p) of a given VN i, in a given period p, is zero, the utility is maximum (100%), i.e. a data update happened in such a period. We assume that values of DF_i^p higher than the data freshness threshold of request j (DFT_j) are useless, thus the utility is minimum (0%) in this case.

By the constraint defined per Eq. (18), once an allocation of a VN to a request occurs, it cannot be undone or re-done during the remainder of the decision window. Moreover, since our objective function has a recursive term, it is necessary to consider period zero as a dummy period. In period zero, we disallow allocations and data updates by VNs. In addition, the start time of period zero is set to the start time of the current decision window, which is time zero (a relative time). We express such statements by constraints, respectively defined per Eqs. (19)–(21). Per Eq. (22) we ensure
∑ ∑ p ∀k that requests are performed only once and by a single VN i during
⎣EDUik × VPMik × yi ⎦ ≤ WEPk (29)
∈ SPSAN the decision window. Per Eq. (23) we ensure the allocation of
i∈SVN p∈SP 1 VNs to requests only when VNs match the defined non-negotiable
requirements of the requests. The matrix of non-negotiable re-
⎡ ⎤
∑ ∑ p
⎣EDUik × VPMik × yi ⎦ ≤ WEVDUi ∀i ∈ SVN . (30) quirements (FRRij ) has the value of one set at each position where a
VN i meets the non-negotiable requirements of request j, and zero
k∈ p∈SP 1
otherwise. Per Eq. (24) we ensure that whenever a VN is allocated
SPSAN to a request b in a given period, all the ‘‘a’’ predecessors of b must
p also have VNs allocated, at previous periods. Finally, the constraint
To define our objective function, we define xaij as an auxiliary
p p defined per Eq. (25) ensures that a request b should never start at
expression per Eq. (9). When we multiply uij by the value of xaij the
a time prior to the start of its predecessor request ‘‘a’’. By Eq. (26)
result is the amount of utility obtained by allocating VN i to request we ensure that the end time of the last period of each VN is smaller
j at period p. As expressed by Eq. (8), we want to maximize the than the end time of the decision window (ETMDW ). By Eq. (27) we
sum of all utilities obtained by each allocation. Thus, we make the ensure values of start times so that two consecutive periods will
following considerations regarding our way for calculating utility. never overlap. In addition, the successor period will never happen
First, in our formulation, we obtain values of utility for each period, at a prior moment in time, in relation to a predecessor period.
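To make the freshness and utility definitions concrete, the following Python sketch (our own illustration, not the authors' implementation; the names mirror the symbols of Eqs. (15)–(17), and the numeric inputs are invented) evaluates them for a single VN over the periods of a decision window:

```python
def last_data_update(y, stm, utm, ldupw, p):
    """Eq. (15): latest data-update moment of a VN up to period p.

    y[q] = 1 if the VN updated its data at period q, else 0.
    stm[q], utm[q] = start time and update duration of period q.
    ldupw = time before the window start at which the previous
    window's update happened; its term is negative, so it only
    wins the max when no update occurred in the current window.
    """
    return max(y[q] * (stm[q] + utm[q]) + ldupw * (y[q] - 1)
               for q in range(p + 1))

def data_freshness(stm_p, utm_p, ldu_p):
    """Eq. (16): elapsed time between serving and the last update."""
    return stm_p + utm_p - ldu_p

def utility(df, dft):
    """Eq. (17): linearly decreasing utility, 1 at df=0, 0 past dft."""
    if df > dft:
        return 0.0
    return 1.0 - df / dft

# Periods 0..2: dummy period 0, an update at period 1, none at period 2.
y, stm, utm = [0, 1, 0], [0.0, 2.0, 5.0], [0.0, 0.5, 0.0]
ldu = last_data_update(y, stm, utm, ldupw=1.0, p=2)  # update ended at t=2.5
df = data_freshness(stm[2], utm[2], ldu)             # 5.0 - 2.5 = 2.5
print(ldu, df, utility(df, dft=5.0))                 # 2.5 2.5 0.5
```

With a freshness threshold of 5.0 s, data that is 2.5 s old yields exactly half of the maximum utility, matching the linear decrease assumed by the formulation.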
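The allocation constraints can likewise be sketched as a feasibility check. The dictionary-based encoding below is our own simplification: mapping each request to at most one (VN, period) pair captures Eq. (22) by construction, and the remaining checks cover Eqs. (19)–(21), (23) and (24) in spirit.

```python
def feasible(alloc, frr, spre):
    """Check a candidate allocation against a simplified reading of
    the MINPP constraints.

    alloc: request j -> (VN i, period p); one entry per request (Eq. 22).
    frr:   frr[i][j] is True iff VN i meets the non-negotiable
           requirements of request j (Eq. 23).
    spre:  precedence pairs (a, b), meaning a must precede b (Eq. 24).
    """
    for j, (i, p) in alloc.items():
        if not frr[i][j]:      # non-negotiable requirements must match
            return False
        if p < 1:              # period 0 is a dummy period (Eqs. 19-21)
            return False
    for a, b in spre:          # predecessors must run at earlier periods
        if a in alloc and b in alloc and alloc[a][1] >= alloc[b][1]:
            return False
    return True

# Two requests, two VNs: VN 0 only qualifies for request 0.
frr = {0: {0: True, 1: False}, 1: {0: True, 1: True}}
assert feasible({0: (0, 1), 1: (1, 2)}, frr, [(0, 1)])
assert not feasible({0: (0, 2), 1: (1, 1)}, frr, [(0, 1)])  # b before a
```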
Finally, the constraints in Eqs. (28)–(30) regard the energy costs of allocating each VN to requests. This energy is fully consumed by the ENs/CNs that host each VN. Thus, we need a model for the energy consumption at the edge/cloud tiers. It is important to mention that our MINPP is agnostic to any specific energy model. In this paper, we adopted the same energy model used in our previous work, which targets the edge and cloud paradigms [43], briefly described in Section 3.3.

3.3. Energy model

The constraint defined per Eq. (28) concerns the total energy consumption of a VN i during the decision window (comprising all periods p). It considers the energy consumed by all hosts h of VN i (ERA_ih). The sum of the ERA_ih must be smaller than the limit WEVRA_i. ERA_ih is defined per Eq. (31) and is divided into two portions of energy consumption by host h: the first is used for the communication of the output data of VN i (ECM_ih), and the second for processing the requests inside VN i (EPR_ih). ECM_ih is defined per Eq. (35) in terms of the fraction of the power of host h allocated to be dissipated in response to running VN i (P_ih) and the sum of all times spent for communication (TTM_i^p) by the VN i in all periods p. EPR_ih is defined per Eq. (36) similarly to ECM_ih, but using the processing times (PTM_i^p). We describe P_ih through Eqs. (32)–(34).

As in [43], we use the hypothetical linear power model, which describes the power consumption of a host per Eq. (32). Eq. (32) is divided into two parts: static and dynamic power. The static power Ps is consumed even if the host is idle. The dynamic power (Pmax − Ps) × U is proportional to the utilization U of the host, where Pmax is the maximum power the host can dissipate. We consider that the value of the Linear Deviation Ratio (LDR) is zero; thus we have an ideal power model. We also ignore the static power consumption, because it does not vary with the allocation of VNs to requests, reducing the model to Eq. (33). Finally, based on Eq. (33), Eq. (34) describes the fraction of the power allocated by each host h to a VN i in our work.

ERA_ih = ECM_ih + EPR_ih,  ∀i ∈ SVN, ∀h ∈ SECN  (31)

P(U) = (1 + LDR) × [Ps + (Pmax − Ps) × U]  (32)

P(U) = Pmax × U  (33)

P_ih = Pmax_h × VECM_ih,  ∀i ∈ SVN, ∀h ∈ SECN  (34)

ECM_ih = P_ih × Σ_{p=1..P} TTM_i^p,  ∀i ∈ SVN, ∀h ∈ SECN  (35)

EPR_ih = P_ih × Σ_{p=1..P} PTM_i^p,  ∀i ∈ SVN, ∀h ∈ SECN  (36)

The constraint defined per Eq. (29) regards the total energy consumption of a PSAN k during the decision window, with respect to data updates. This energy consumption must be smaller than the limit WEP_k. The sum of the y_i^p is the number of data updates of VN i during the decision window. Every data update consumes the same amount of energy from the same PSANs. According to [43], the energy spent by a PSAN k in response to a data update of VN i (EDU_ik) is defined per Eq. (37).

EDU_ik = ESE_ik + ETX_ik + ERX_ik + EPR_ik + EAC_ik,  ∀i ∈ SVN, ∀k ∈ SPSAN  (37)

ESE_ik = b_ik × Vsup_k × Isens_k × ξsens_k,  ∀i ∈ SVN, ∀k ∈ SPSAN  (38)

ETX_ik = b_ik × (Eelec_k + εamp_k × (d_ik)^2),  ∀i ∈ SVN, ∀k ∈ SPSAN  (39)

ERX_ik = b_ik × Eelec_k,  ∀i ∈ SVN, ∀k ∈ SPSAN  (40)

EPR_ik = b_ik × [C × Vdd^2 + f]_k,  ∀i ∈ SVN, ∀k ∈ SPSAN  (41)

EAC_ik = n_ik × Eact_k,  ∀i ∈ SVN, ∀k ∈ SPSAN  (42)

The energy consumption of PSANs comprises four major components: sensing, computation, communication and actuation. Eqs. (38)–(42) are adapted from our previous work [43] to represent the energy consumption of PSANs. In Eq. (38), let Isens_k and Vsup_k be, respectively, the total current and supply voltage required for the sensing activity, while ξsens_k is the time duration of sensing data from the PSAN and b_ik is the number of bits collected by the sensing activity. Moreover, we distinguish the energy spent for transmission (ETX_ik) and for reception (ERX_ik) per Eqs. (39) and (40). The size of the transmitted/received data is b_ik, and d_ik represents the distance between transmitter and receiver. Eelec_k and εamp_k denote the energy dissipation of the radio and of the transmission amplifier, respectively. The energy consumed for processing all b_ik bits of data (EPR_ik) is given by Eq. (41), where Vdd denotes the thermal voltage of the processor and C and f are processor-dependent parameters. The last part of the energy consumption of a device is actuation (EAC_ik). This part is hard to estimate because it is highly dependent on the specific actuation task. Eq. (42) describes the energy consumption for actuation, where n_ik is the number of actuations, each consuming a fixed amount of energy Eact_k. For instance, for an actuation driving a fan with two motors (possibly in response to temperature variations), n_ik = 2.

Finally, we must constrain the energy spent by PSANs also from the perspective of the VNs, so that a single VN will not prematurely deplete the batteries of the several PSANs subordinated to it. Simply by changing the sum (from index i to index k) in Eq. (29), we reach Eq. (30), which regards the total energy consumption of a VN i during the decision window, with respect to data updates. This value must be smaller than the limit WEVDU_i. For the sake of simplicity, we index this consumption by VN, although the PSANs are the ones that effectively consume the energy.

3.4. Zeus

This section describes Zeus, our hybrid, heuristic-based algorithm to solve the MINPP for resource allocation in CoS. Our MINPP is NP-complete. For the sake of space, we omit the proof of NP-completeness, but a similar proof can be found in [16]. Due to this NP-completeness, we cannot expect to solve practical instances of the resource allocation problem of arbitrary size optimally and in polynomial time [17]. An algorithm for solving our MINPP can be implemented either in the edge or in the cloud tier. Thus, the scope of the resource allocation can be more global (when implemented at the cloud) or more local (when implemented at the edge), affecting the size of the MINPP in terms of the number of entities considered. Depending on the size of an instance of the MINPP and the available CPU speed, we will often have to be satisfied with computing approximate solutions via heuristic algorithms. This justifies designing Zeus as a heuristic algorithm, which finds sub-optimal solutions in polynomial time for our problem.

In Zeus, the decisions regarding x_ij^p, y_i^p and STM_i^p are taken separately for each request at a time, and are valid for the moment at which they are taken. Therefore, the Zeus formulation does not consider the decision window mentioned in Section 3.1. Moreover, the fast execution of Zeus (in polynomial time) allows it to run
as an online solution for resource allocation. Therefore, Zeus was designed to be interlaced with the execution of application requests. Finally, Zeus is a hybrid algorithm and thus comprises two phases. The first phase is centralized and can be implemented in the AEPs defined in Section 2.3. The function denoting this phase is fcen (Fig. 3); in this phase, the decision regarding x_ij^p is taken. The second phase is decentralized and implemented in each VN. The respective function is fdec, shown in Fig. 4; in this phase, the decisions regarding y_i^p and STM_i^p are taken.

3.4.1. First phase

The first phase of Zeus, in Fig. 3, starts by defining the empty unique DAG (UDAG) structure (line 2) and running a timer (line 3), whose duration buff_tm is provided as a parameter to fcen. fcen runs periodically, and each cycle is described by the block between lines 4 and 40. In summary, fcen waits for a time duration (fixed for all cycles) to buffer the arriving applications. Simultaneously, the requests that the buffered applications have in common are merged according to their requirements. Lines 6 to 23 describe our UDAG formation algorithm, which is inspired by [44]. The main difference is that we build a UDAG of requests, which differ from the tasks considered in [44]. Another difference is that we consider negotiable (data freshness) and non-negotiable (data type) requirements in the UDAG formation.

In line 6, the AEP accesses all applications that arrived at it at the beginning of the current cycle of fcen. The AEP iterates over each request (rq) of each application (line 7) to assess whether it should generate a new unique request (urq) in the UDAG (line 15), or merge the rq into an existing urq (lines 10–14). The function cmp_non_neg_rr (line 10) returns true if the non-negotiable requirements of the rq and the urq are the same. In line 11, the function update_more_restr_neg_rr chooses the values of the negotiable requirements that are more restrictive between the urq and the rq being merged. We use the max function; thus the new data freshness of the urq is always the highest value between the rq and the urq. In line 12, the urq stores the identifier and information of the rq being merged into it. Moreover, in line 13 the rq keeps track of which urq it was merged into. Next, the AEP iterates over each precedence relationship (edge) pr in the graph of each application (line 16), similarly to the procedure for adding urqs to the UDAG. The AEP iterates through the existing uprs in the UDAG (line 18), and if no matching edge is found in the UDAG, the AEP adds a new upr to it (line 23). In our work, edges have no requirements to be merged. Moreover, since the AEP stores, in the rq, the information about its related urqs, the comparisons performed in lines 19, 20 and 21 are trivial.

Fig. 3. First phase of Zeus.

After buffering all applications that arrived in the current cycle, the AEP checks if its timer has expired, starting a new cycle otherwise. If the timer has expired, the AEP starts making its decision regarding variable x_ij^p (between lines 28 and 33). From lines 28 to 32, pairs of rqs and VNs are formed. In line 31, the function eval_non_negotiable_rr filters all the pairs of rqs and VNs, keeping only the pairs that meet the non-negotiable requirements. Line 33 calls the function choose_best_pairs, which considers a given rule (parameter rule_fcen) to decide which pairs of rqs and VNs will in fact be performed. The result of this decision is stored in the variable x (analogous to x_ij^p). For instance, we can consider two rules. The first is the energy balancing rule (EBR), which consists of allocating the VNs with the highest remaining energy to each rq. The second is the queue time reduction rule (QTRR), which consists of allocating the VNs with the shortest queues to each rq. Among the several rules that could be formulated for deciding x, we chose EBR and QTRR for the following reasons. First, in the context of WSAN applications, due to the energy-constrained nature of the nodes [26–29], a main concern is how to extend the WSAN lifetime; we covered this by formulating the EBR rule. Second, in the context of edge applications, which require strict response times [4], the main concern is reducing the response time of applications; since this response time is mainly influenced by the queues formed in the CoS system, we formulated the QTRR.

Line 35 fills each urq with the information regarding the VN allocated to it. Lines 36 and 37 fill each urq with information regarding the VNs allocated to the successors of the urq. The procedure in line 38 transmits the urq to its VN, where it will be handled by fdec. We assume that all requests have at least one feasible pair of rq and VN. Thus, line 39 erases the UDAG, and line 40 resets the buffering timer. Section 3.4.2 describes the second phase of Zeus.

3.4.2. Second phase

The second phase of Zeus is shown in Fig. 4. It starts by obtaining the reference to the data provisioning service of the VN (line 2), where the data provisioning service is started. This service is able to provide data to one request at a time. In line 3, the queue of unique requests of the VN is started, empty. The algorithm of fdec runs periodically, and each cycle is described by the block between lines 4 and 15.

The VN iterates over each urq arriving from the different AEPs in line 5. Line 6 adds each urq to the VN queue, which is a first-
in-first-out (FIFO) queue of unlimited size in this work. When removing an urq from the queue, the VN considers the precedence restrictions among the urqs. This avoids that the next urq to be removed, if it has a pending precedence restriction (i.e. cannot yet be performed), blocks the execution of another urq that sits in the middle of the queue without precedence restrictions. If the data provisioning service of the VN has finished providing data to an urq (line 7), the VN transmits the output data to all the VNs allocated to the successors of the current urq (line 8). Due to this procedure, data incoming from other VNs (input data of an urq in the queue) may also arrive at the current VN. Therefore, the VN fills the respective urq with the incoming data and removes the corresponding precedence restriction. If the urq has no successor, the VN transmits the data back to the AEPs from which the applications respective to the urq arrived. In line 9, the data provisioning service is set to idle. Next, the VN checks if the data provisioning service is idle and the queue is not empty (line 10). Then, per line 12, the VN obtains the next urq from the queue that has no precedence restrictions. The VN then decides if it should perform a data update (line 13), considering a given rule (parameter rule_fdec). For instance, we consider two rules. First, the avoid negative utility rule (ANUR) updates data whenever the current data freshness provided by the VN is greater than the data freshness threshold of the urq. Second, the maximum utility rule (MUR) consists of always performing the data update. Among the several rules that could be formulated for deciding y (analogous to y_i^p), we chose ANUR and MUR for the following reason: these two rules represent the two extremes of all possible decisions regarding utility. MUR allows obtaining the maximum possible utility for all applications, while ANUR ensures obtaining at least the minimum acceptable utility for all applications.

Next, line 14 sets the start time of the urq to the current time of the VN. In line 15, the function provide_data starts processing the urq, setting the value is_idle of the data provisioning service to False. Finally, it is important to mention that our proposal is based on a set of user-configurable rules (rule_fcen and rule_fdec) for making decisions. Therefore, the user may dynamically change the rules during the CoS operation, disseminating new rules to the AEPs and VNs.

Fig. 4. Second phase of Zeus.

4. Experimental evaluation

To help understand our proposal and support the scenario used in our evaluation, in Section 4.1 we present an illustrative and hypothetical example in the application domain of SHM for smart buildings. In Section 4.2 we describe the metrics used for assessing the performance of Zeus in our experiments. Four experiments were performed with the goal of ascertaining whether the contributions intended with the adoption of Zeus were effectively achieved. The first experiment (described in Section 4.3.1) aims at assessing Zeus scalability, both in terms of the number of VNs and the number of applications executed in the CoS. The second experiment (Section 4.3.2) aims to assess how the use of the edge tier in Zeus allows supporting delay-sensitive applications, in comparison to an approach using only the cloud tier. Moreover, Zeus identifies tasks that are common to multiple applications, performing them only once and sharing the outcome among these applications. Such a feature contributes to improving the WSAN lifetime, and this is assessed in a third experiment (described in Section 4.3.3). Finally, we perform a fourth experiment to assess the quality of the solutions of our MINPP found by Zeus, and show how much energy it can save for the WSANs when reusing data among multiple applications (Section 4.3.4).

4.1. Illustrative example

The goal of this illustrative example, based on [34], is to implement a CoS system for monitoring the structural health of buildings in a smart city. Among the requirements of a smart building, assuring the safety of its residents is a key issue, especially considering seismic events. In cities prone to frequent seismic tremors, managers close buildings for months to undergo a detailed manual inspection, to ensure the safety of people. Such an approach is inefficient, resulting in frequent inoperative periods of the building and consequent economic losses.

4.1.1. CoS infrastructure deployment

In this hypothetical scenario (Fig. 5), let us consider a smart building composed of four floors, covering a rectangular area of 50 × 50 m and with three metres of height between the floors. The smart building has six PSANs deployed on each floor. All the PSANs are MICAz motes [45], whose radio supports the 2.40–2.48 GHz band and a 250 kbps data rate. MICAz motes include the main board, with processor, radio, memory and batteries. Per floor, five PSANs are connected to an MTS400 board equipped with accelerometer, barometric pressure, ambient light, relative humidity and temperature sensing units. The remaining PSAN on each floor has no sensing units, and is connected to an actuator, a stoplight-style signal, whose values must be set according to the current structural soundness. The common energy source of MICAz motes (two AA batteries) provides up to 16 kJ of energy [46]. All the PSANs deployed in the smart building pertain to the same WSANI. Other relevant parameters regarding the sensors tier are summarized in Table 4. Most values in Table 4 were retrieved from our previous works [43,34] and adapted to the current scenario when necessary. The PSANs are programmed in the NesC language, under the TinyOS development environment [47], version 2.1. Further information on the configurations used in the physical and MAC layers can be found in [34]. The InP deploys the PSANs in key locations where the shock response of the structure is the highest. We randomly define such key locations according to the following procedure, also used in [43]. Given the coordinates of the centre of a floor, the "PSAN deployment radius" parameter in Table 4 defines a circular area, centred at the floor centre, within which all the PSANs of the floor are randomly (uniform probability) positioned.

Besides the PSANs, we consider the deployment of gateway nodes of the WSANI in the same corner of each floor, each one attached to a local desktop, composing the ENs (one per floor). Each PSAN is directly connected to the EN of its floor at one-hop distance. The ENs connect all floors through the building Local Area Network (LAN), and to the Internet by using the existing broadband connections (for instance, ADSL, VDSL, satellite) in the building, thus being able to reach the CN.
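The random placement procedure described in Section 4.1.1 (uniform positions inside the "PSAN deployment radius" circle centred at the floor centre) can be sketched as follows. This is our own illustration of the described procedure; the square-root radial sampling is the standard way to obtain a distribution that is uniform over the circle's area rather than clustered near the centre.

```python
import math
import random

def deploy_psans(n, centre, radius):
    """Place n PSANs uniformly at random inside a circular area."""
    positions = []
    for _ in range(n):
        r = radius * math.sqrt(random.random())   # area-uniform radius
        theta = 2.0 * math.pi * random.random()   # uniform angle
        positions.append((centre[0] + r * math.cos(theta),
                          centre[1] + r * math.sin(theta)))
    return positions

random.seed(42)
floor_centre = (25.0, 25.0)                       # centre of a 50 x 50 m floor
pts = deploy_psans(6, floor_centre, radius=25.0)  # six PSANs per floor
assert all(math.dist(p, floor_centre) <= 25.0 for p in pts)
```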
Table 5
Parameters of CoS virtualization obtained from [34].

Parameter                          | Value
DUpTm_i (data type 5)              | 0.025 s
DUpTm_i (data types 6–9)           | 0.125 s
DUpTm_i (data types 1–4)           | 0.500 s
b_ik (data type 5)                 | 0 bytes
b_ik (data types 6–9)              | 1 byte
b_ik (data types 1–4)              | 1024 bytes
Output data size (data type 5)     | 1 byte
Output data size (data types 6–9)  | 1 byte
Output data size (data types 1–4)  | 10 bits × 5 PSANs with sensing units
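Combining the b_ik values above with the constants of Table 4, the PSAN data-update energy model of Eqs. (37)–(42) can be sketched as follows. This is our own illustration; in particular, the 10 m transmitter-receiver distance is an assumption for the example, not a value from the paper.

```python
# Constants from Table 4 (sensors tier).
VSUP, ISENS, XI_SENS = 1.0, 1e-3, 0.5        # sensing: V, A, s
EELEC, EAMP = 50.0 / 10**9, 10.0 / 10**12    # radio electronics, amplifier
EPROC_PER_BIT = 7.310e-10                    # [C x Vdd^2 + f]_k, J/bit
EACT, N_ACT = 0.02, 1                        # actuation energy, count

def edu(bits, d):
    """Eq. (37): energy a PSAN spends for one data update of its VN."""
    ese = bits * VSUP * ISENS * XI_SENS      # Eq. (38) sensing
    etx = bits * (EELEC + EAMP * d**2)       # Eq. (39) transmission
    erx = bits * EELEC                       # Eq. (40) reception
    epr = bits * EPROC_PER_BIT               # Eq. (41) processing
    eac = N_ACT * EACT                       # Eq. (42) actuation
    return ese + etx + erx + epr + eac

# One 1024-byte update (data types 1-4, Table 5) over an assumed 10 m hop;
# with these formulas the sensing term dominates (roughly 4.1 J in total).
print(edu(bits=1024 * 8, d=10.0))
```

Dividing the 16 kJ battery budget of a MICAz mote by this per-update cost gives a rough upper bound on the number of data updates a PSAN can serve, which is exactly what the limits WEP_k and WEVDU_i of Eqs. (29)–(30) ration out per decision window.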

Table 6
Relevant parameters of applications.

Parameter               | Value      | Source of data
Total simulation time   | 180 s      | Current work
Deadline margin         | 20% (1.2)  | Current work
Buffering time of AEPs  | 1.0 s      | Current work
DFT_j lambda            | 1.0 s      | Current work
Request size            | 320 bytes  | [48]
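The deadline rule of Section 4.1.3 (the sum of DUpTm_i along the critical path of the application DAG, scaled by the deadline margin of Table 6) can be made concrete with a longest-path computation. The DAG below assumes data types 1–4 feed data type 5, which feeds data types 6–9; this is one plausible reading of the workflow, used here only for illustration.

```python
# DUpTm values from Table 5, indexed by data type.
dup_tm = {1: 0.5, 2: 0.5, 3: 0.5, 4: 0.5, 5: 0.025,
          6: 0.125, 7: 0.125, 8: 0.125, 9: 0.125}
edges = [(t, 5) for t in (1, 2, 3, 4)] + [(5, t) for t in (6, 7, 8, 9)]

def critical_path(dup_tm, edges):
    """Longest completion time over the request DAG (critical path)."""
    preds = {n: [] for n in dup_tm}
    for a, b in edges:
        preds[b].append(a)
    memo = {}
    def finish(n):
        if n not in memo:
            memo[n] = dup_tm[n] + max((finish(p) for p in preds[n]),
                                      default=0.0)
        return memo[n]
    return max(finish(n) for n in dup_tm)

margin = 1.2                            # 20% deadline margin (Table 6)
deadline = margin * critical_path(dup_tm, edges)
print(round(deadline, 3))               # 0.78
```

Under this reading, the critical path is 0.500 + 0.025 + 0.125 = 0.65 s, so each application's deadline is 1.2 × 0.65 s = 0.78 s.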

Fig. 5. Deployment of the three-tier CoS infrastructure in the smart building.

Table 4
Parameters of CoS infrastructure.

Parameter                        | Value               | Source
Sensors tier                     |                     | [43,34]
Vsup_k                           | 1.0 V               |
Isens_k                          | 1 mA                |
ξsens_k                          | 0.5 s               |
Eelec_k                          | 50.0/(10^9)         |
εamp_k                           | 10.0/(10^12)        |
[C × Vdd^2 + f]_k                | 7.310e−10 J/bit     |
n_ik                             | 1 actuation         |
Eact_k                           | 0.02 J              |
PSAN deployment radius           | 25 m                |
Edge tier                        |                     | [43]
EN core speed PPw_h              | 3 × 10^6 cycles/s   |
Number of cores of each EN       | 4 cores             |
Average bandwidth of ENs BWd_ic  | (10.0^6) bps        |
Pmax_h                           | 20.0 W              |
Transmission delay lambda        | 0.2 s               |
Cloud tier                       |                     | [43]
CN core speed PPw_h              | 6 × 10^6 cycles/s   |
Number of cores of each CN       | 128 cores           |
Average bandwidth of CNs BWd_ic  | (10.0^6) bps        |
Pmax_h                           | 100.0 W             |
Transmission delay lambda        | 0.2 s               |

4.1.2. VNs description

In our example, we consider nine data types (numbered 1 to 9) and nine VNs, one VN to provide (and numbered respectively to) each data type. In the simulations, we choose the edge tier as the best location in our three-tier CoS architecture to run the VNs, according to the properties of this tier assessed in our previous work [43]. The ENs are in a privileged position, in relation to the CNs, for linking PSANs from different WSANIs under the same VN, and for reducing the latency for time-sensitive applications. Relevant parameters of the VNs are shown in Table 5.

In line with our proactive resource provisioning process (see Section 2.2), the InP instantiates VNs 1, 2, 3 and 4 (Fig. 5), one per floor, which provide to applications the cooperative damage coefficients (CDCs) of the respective floor. The CDC [34] is a representation of local decisions made by each PSAN. For each floor, the CDC is calculated by the five PSANs with sensing units, in their respective positions, each one performing procedures as described in [34], and coordinated by the VN. Thus, the respective VN stores all the CDCs of a floor. As in [34], we consider a 10-bit CDC for each PSAN. Thus, data types 1, 2, 3 and 4 are each a list of tuples. This list contains one tuple per PSAN with sensing units in the floor, of the format ⟨CDC_value, location_in_floor⟩.

In addition, the InP implements a VN for providing data type 5, which is the output of the information fusion technique for deciding about the existence of damage in the building, described in [34]. Data type 5 uses data types 1, 2, 3 and 4 as input. VN 5 has no subordinated PSANs, and is able to perform its function fully within its host EN. Data type 5 has the format of a Boolean variable. Moreover, the InP implements VNs for providing data types 6, 7, 8 and 9, and performing actuation on each floor, depending on data type 5. Such data types consist of actuation feedback (i.e. actuation acknowledgement and logs) retrieved from the physical actuator in the floor respective to the VN. The InP defines that, when the VN performs a data update, the occurrence of an actuation on the physical environment is implicit. This physical actuation is part of the information fusion technique implemented by the InP within the VN. Therefore, if several requests access this VN simultaneously, the VN only needs to physically actuate once, and provides the same "actuation feedback" data to all the simultaneous requests. Finally, we consider that every PSAN is dedicated exclusively to an assigned VN (the VPM_ik matrix has only 0 and 1 values, and no sharing of a PSAN by VNs occurs). In addition, whenever a host EN or CN is shared among VNs, the matrix VECM_ih is filled with equal portions of the capacity of the host for each VN sharing it.

4.1.3. Applications description

In our illustrative example, we consider the arrival of several applications that can be described by the same workflow structure, based on [34]. Table 6 shows the relevant parameters of the applications. Each application has nine requests, each one requesting one of the data types 1–9, and numbered according to the requested data type. The arrival time of each application is randomly and uniformly distributed between the start of the simulation (simulation time = 0) and the total simulation time (duration). When an application arrives in the CoS, it is immediately assigned to its AEP.

Moreover, each application has an expected deadline to be completed. This deadline is computed as the sum of all the data update times in the application critical path. Thus, the application deadline is the sum of the data update times of the requests in the critical path of the application DAG, i.e. the sum of DUpTm_i for data types 1–4, 6–9 and 5, as shown in Table 5, considering a deadline margin
shown in Table 6. For generating the data freshness thresholds of applications, we used an exponential distribution with a lambda parameter. Choosing a high lambda parameter (≥ 1.0) means a probability density concentrated closer to zero; thus we have applications that are more restrictive in their data freshness requirement. When choosing a lower lambda (< 1.0), the applications are more permissive in terms of data freshness threshold. In Section 4.2 we present the evaluation metrics used in our work.

4.2. Metrics

Table 7 summarizes the metrics used in the performed evaluations.

Table 7
Metrics and acronyms.

Metric                                         | Acronym
Total energy consumption by the sensors tier   | TECST
Total energy consumption by the edge tier      | TECET
Total energy consumption by the cloud tier     | TECCT
Lifetime of sensors tier                       | LTST
Average makespan of applications               | AMSA
Percentage of late applications                | PLA
Percentage of applications completed           | PAC
Total utility obtained for applications        | TUTOB

TECST is the sum of the energy consumption of the PSANs during the simulation. TECET is the sum of the energy consumption of the edge nodes during the simulation, for running the VNs and AEPs. The definition of TECCT is similar to that of TECET, but for the cloud tier. Considering that, usually, replaceable batteries are the energy source of a PSAN, we can infer which particular PSAN will be the first one to have its battery depleted: this is the PSAN with the maximum energy consumption among all PSANs in the sensors tier (MECST). Based on that, we define LTST as the time until the first node dies in the sensors tier. We also assume that the common energy source of a PSAN is two AA batteries, capable of providing 16 kJ during the lifetime of the PSAN, resulting in the formula LTST = 16 kJ / (MECST / simulation time).

We also define AMSA as follows. The makespan of a single application is computed as the total time since the application arrival until its last request is processed by an allocated VN. AMSA is then calculated as the average makespan over all applications in the simulation. Moreover, some applications may not be completed within the simulation time, for two reasons. The first, and less frequent, is that the application arrival was randomly set too close to the end of the simulation. The second is that the system can be overloaded, and may not be able to process all the requests in the queues within the simulation time. Therefore, we count the number of applications completed through PAC. In addition, considering that delay-sensitive applications often have a strict makespan deadline (defined in Section 4.1.3), we define the PLA. The PLA is calculated only among the applications successfully

The simulations were implemented in the standard Python programming language (version 2.7.11) [49]. We used a desktop computer (Intel Core 2 Duo 2.80 GHz processor and 4 GB RAM) to run the simulations in a controlled environment within the Federal University of Rio de Janeiro. Each experiment was repeated 30 times, which provided a reasonable 95% confidence interval for the results. In this work, we focus on the evaluation of the allocation of VNs to requests in CoS. Therefore, our simulation uses high-level events to collect the results, rather than implementing all the lower network-level details. In this section, we describe the design of E1, E2 and E3, along with the analysis of the achieved results.

4.3.1. Experiment E1

We designed experiment E1 to assess the scalability of Zeus in terms of the number of VNs and the number of applications, in the most adverse configuration. Such a configuration consists of (i) not sharing the results of requests in common for multiple applications (i.e. not using our UDAG formation algorithm) and (ii) using the EBR and MUR rules as the parameters rule_fcen and rule_fdec, respectively. Considering this configuration, we vary (i) the number of applications and (ii) the number of VNs. Regarding the number of applications, we performed 10 variations, adding 200 new applications to the scenario at each variation. As for the number of VNs, we considered incrementing the number of VNs per data type, one by one, from one to five. When performing such a variation, every new VN added is a replica of an existing VN in the base scenario with nine VNs described in Section 4.1. For instance, we first increment the number of VNs per data type by one, having two VNs per data type and thus a total of 18 VNs. For three VNs per data type, we have 27 VNs in total, and so on. We consider the elastic capacity of the edge, so that every new VN can be instantiated to run on the same host EN as its original replica, and have the same amount of computational resources as its original replica. In addition, new VNs are created along with a new and exclusive set of PSANs (configured as the other PSANs described in Section 4.1.1), when necessary. Although the physical network density increases when we use this strategy, we consider no interference among the underlying PSANs of different VNs. Therefore, we are increasing the capacity of the CoS infrastructure to provide data to an also increasing number of applications. Fig. 6 shows the results of E1.

In Fig. 6a, the TECST metric achieves its maximum value for 400 applications and one VN per data type (DT). This is the saturation point for one VN per data type. Since we use the MUR rule in E1, the VNs always perform data updates to meet requests. Therefore, 400 applications are enough to minimize the time between every two successive data updates for any given VN. Hence, for any number of applications greater than the saturation point, the VNs are said to be saturated. For two, three, four and five VNs per DT, the saturation points are, respectively, 800, 1200, 1600 and 2000 applications. In Fig. 6b, the LTST metric is higher, as higher is the
completed. Finally, we define the TUTOB metric as the total utility number of VNs per data type (more than a month for five VNs
obtained for applications. Utility is measured in our work for each per data type and 200 applications). The EBR rule balances the
request as a number between 0 and 100%, and is used in our load of requests among all available VNs, avoiding overloading the
objective function according to Eq. (8). When running a simulation same VN and thus improving the LTST, as the number of VNs per
for a given number of requests, the maximum amount of utility DT increases. In addition, the LTST converges to five days as the
that is possible to achieve is equal to the number of requests times number of applications increases, for any amount of VNs per DT.
100%. TUTOB is, therefore, a relative value (between 0% and 100%) In Fig. 6c, the TECET metric increases for processing a number of
that represents the percentage of this maximum amount of utility applications greater than the saturation point, for each amount of
that was obtained for applications when running a simulation. VNs per DT. For 2000 applications, the TECET ranges from 6 kJ (one
VN per DT) to 10 kJ (five VNs per DT). Since we did not use a cloud
4.3. Design and analysis of experiments node in this experiment, the TECCT is zero.
In Fig. 6d, the values for AMSA metric grow slower, as the
To execute the four envisioned experiments (E1, E2, E3 and number of VNs per DT increases. The value of AMSA for one VN
E4), we designed a discrete event simulation using SimPy [49], per DT is close to 1 s for 200 applications, growing to nearly 80 s for
a process-based discrete-event simulation framework written in 2000 applications. This is due to the larger VN queues formed when
I.L. Santos et al. / Future Generation Computer Systems 92 (2019) 564–581 575

Fig. 6. Results of experiment E1.

the number of applications increases. The use of the MUR rule, along with not considering the UDAG formation algorithm, makes VNs always perform a data update for each request individually. This contributes to form such queues, causing delays to applications and increasing their makespan. In addition, when increasing the number of VNs per DT from one to five, the AMSA drops, respectively, from 80 to 20 s, for 2000 applications. This is due to the distribution of the load of applications, mainly in respect to data updates, through the EBR rule. In Fig. 6e, the PAC metric decreases more slowly as the number of VNs per DT increases. Using five VNs per DT, we achieved the best PAC curve, with a value greater than 80% for 1800 applications. In Fig. 6f, the PLA grows more slowly as the number of VNs per DT increases. For five VNs per DT and 1200 applications, we still find a PLA lower than 100%. Therefore, considering more VNs per DT allows more applications to be completed in simulation time (higher PAC) and within their deadlines (lower PLA).

To prove that our algorithm is scalable in our scenario, we consider the definition of scalability as follows, based on [50]. Scalability is inherently about concurrency; it is about doing more work at the same time. Therefore, to prove that our algorithm is scalable, in the scenario used in E1 (where the DAG of requests allows the concurrency of requests a priori), we must prove the following statement. When increasing the number of VNs (to process more work/requests), the algorithm can perform more requests (work) simultaneously (concurrently), and without a clear maximum limit in both the number of VNs and requests [50]. In our work, we are able to prove this statement by assessing the results obtained for the PAC and PLA metrics. With the increase in the number of VNs per DT, our algorithm can reach higher percentages of applications completed and in time (punctual), without a clear limit. In addition, the saturation point grows linearly as the number of VNs per DT grows. And the values of AMSA grow as a sigmoid curve, as the number of VNs per DT grows. Thus, both metrics do not show an exponential or combinatorial (explosive) growth, which would hinder the scalability of Zeus.

Besides proving that Zeus is scalable, it is important to assess the overhead of adding a new VN per data type in Zeus, in order to process growing loads, i.e. proving that such scalability is worth the resources spent to achieve it. Adding a new VN per DT has a positive effect on the LTST, because the workload of requests is better distributed (balanced) among VNs. The LTST raises from around 9 to 33 days when we vary the number of VNs per DT from 1 to 5, respectively. In addition, the TECST and TECET grow linearly, at a constant rate, with the increasing amount of applications, also denoting a non-explosive growth.

Finally, in every aspect assessed in E1, the hybrid approach of Zeus succeeded in handling a growing load (in terms of an increasing number of applications), and in scaling (in terms of an increasing number of VNs) to accommodate such a growing load. Thus, the hybrid approach of Zeus is scalable.

4.3.2. Experiment E2

We designed experiment E2 to assess how the use of the edge tier in Zeus allows supporting delay-sensitive applications, in comparison to an approach using the cloud tier only. In E2, we considered four smart buildings (replicating four times the base scenario of Section 4.1). In addition, we used five VNs per data type, considering the better results achieved for this configuration in E1. However, we consider that each building is independent, having its exclusive set of data types. Moreover, we designed applications, each for one of the four buildings, with the same DAG structure, but with the exclusive data types of the respective building. In addition, we used our UDAG formation algorithm during E2, and considered the QTRR and ANUR rules as the parameters rule_fdec and rule_fcen, respectively. In E2, we vary (i) the use of two distinct scenarios of configurations, one with the edge tier and the other one without it, and (ii) the number of applications. In the configuration using the edge tier, each building has its AEP, which makes local decisions for the building (AEPs in ENs). In the configuration without the edge (AEP in CN), the AEP makes the resource allocation decision for all the four smart buildings together, in the CN. Regarding the number of applications, we performed 10 variations, adding 200 new applications per building (800 in total) in each variation. Fig. 7 shows the results of E2.

In Fig. 7a, the AMSA metric decreases as the number of applications grows, for the configuration with a single AEP in CN. A decreasing behaviour in the values of AMSA is also shown by the configuration of multiple AEPs in ENs. The decreasing behaviour of each curve, separately, occurs because of a higher efficiency achieved by the identification of requests in common for a greater amount of applications, in comparison to using such a mechanism on few applications. Thus, more applications will be able to share the same data, avoiding delays due to data updates, and consequently reducing the makespan of applications, on average. In addition, for 8000 applications, the AMSA is higher (around
Fig. 7. Results of experiment E2.

Fig. 8. Results of experiment E3.
1.18 s) in the configuration with one AEP in CN, in comparison to the configuration with multiple AEPs in ENs (around 0.97 s). We explain this result by the additional time required for communication between the cloud, where the decision is taken, and the edge, which holds the VNs. The PAC and PLA values achieved are 100% for every configuration in this experiment. In Fig. 7b, the TECCT metric grows to around 12 kJ for 8000 applications, in the configuration with one AEP in CN, and drops to zero when using multiple AEPs in ENs. In Fig. 7c, the TECET metric grows to around 3.7 kJ for 8000 applications, in the configuration with one AEP in CN. The TECET is insignificantly greater when using multiple AEPs (less than 1% difference between both curves). This result occurs exclusively due to the need for additional processing in the edge tier, a less time- and energy-intensive task than communication.

We conclude that leveraging the use of the edge tier is a better option for saving energy and reducing the makespan of applications, because otherwise applications take additional time to be communicated between the CT and the ET. The physical proximity of AEPs, VNs and their PSANs is better explored as an intrinsic characteristic of the edge, speeding up communications among such entities in comparison to any approach hosting some of these entities in the cloud. When using the CT to make decisions in the performed experiments, the CN uses an unnecessary amount of energy for processing applications insignificantly faster than the ENs. At some point, the cloud would be more efficient in terms of energy, but only when considering a far more computationally intensive scenario (greater than 8000 applications). Finally, according to Bonomi et al. [4], delay-sensitive applications require strict response times, in the order of milliseconds to sub-seconds. When leveraging the use of the edge tier, we achieved an application makespan (time for acquiring, deciding and actuating over the acquired data) of less than one second, on average (for 8000 applications). The response time of applications (time for only deciding and actuating over the acquired data) is, consequently, lower than one second. Therefore, Zeus is more suitable for delay-sensitive applications, with restrictive response times, when leveraging the use of the edge tier, in comparison to a typical two-tier CoS architecture that faces the communication delay with the cloud, such as [7]. Finally, this is a contribution of our work specifically to improve the area of CoS systems.

4.3.3. Experiment E3

We designed experiment E3 to show how the UDAG formation algorithm used in Zeus contributes to saving resources of the nodes, consequently improving the WSANs' lifetime. We used the EBR and MUR rules as the parameters rule_fcen and rule_fdec, respectively. Moreover, we considered the same scenario as E1, however with a fixed number of five VNs per data type, considering the better results achieved for this configuration. In E3 we vary the number of applications using the same method as in E1. In addition, unlike E1, we used two distinct scenarios of configurations, one with the UDAG formation algorithm and the other without it. Fig. 8 shows the results of E3.

In Fig. 8a, when not using the UDAG formation algorithm, as in E1, the values of the TECST metric grow linearly until the saturation point of five VNs per data type (2000 applications). When using the algorithm, the values of TECST drop to a nearly constant curve, assuming a value around 90 J for any number of applications. For 2000 applications, TECST drops by 86% when using the algorithm. More importantly, when more applications are used, the possibility of finding common requests among them increases. Thus, the proposed UDAG formation algorithm is able to fully utilize the unique requests to reduce the value of TECST. In Fig. 8b, the LTST metric is shown for both configurations (with and without the UDAG formation algorithm). LTST without using the UDAG formation algorithm is the same as in E1. When using the algorithm, LTST is better in comparison to when not using such algorithm, for any given number of applications. For 2000 applications, LTST increases from around 5 to around 38 days when using the UDAG formation algorithm. The peak lifetime is around 2 months, for 200 applications, when using the UDAG formation algorithm. In Fig. 8c, the TECET metric shows curves with shapes similar to the TECST curves, however there is a difference of proportions between them. The edge tier consumes around 1 kJ for 2000 applications when using the UDAG formation algorithm. The energy saving in the edge tier, when using the UDAG formation algorithm, is explained by the reduced number of requests communicated between the AEPs and VNs, both running on ENs.

Finally, Zeus allows identifying tasks that are common for multiple applications, performing them only once and sharing the outcome among the applications. Therefore, Zeus reduces the energy
consumption of the ST and consequently extends the lifetime of WSANs.
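The request-sharing effect measured in E3 can be illustrated with a short sketch (a simplification of ours, not the paper's actual UDAG formation algorithm; the function name and workload are hypothetical): a request whose data freshness threshold tolerates the age of the last reading reuses it, so only stale requests trigger a data update on the PSANs.

```python
def count_data_updates(requests, share=True):
    """Count the sensor data updates needed to serve timestamped requests.

    requests: list of (time, data_type, max_age) tuples, in time order.
    Without sharing (as in E1 with the MUR rule), every request triggers
    its own data update; with sharing, a request reuses the last reading
    of its data type while that reading is still fresh enough.
    """
    last_update = {}  # data_type -> time of the most recent update
    updates = 0
    for t, dtype, max_age in requests:
        fresh = dtype in last_update and (t - last_update[dtype]) <= max_age
        if share and fresh:
            continue          # reuse: no energy spent on the sensors tier
        updates += 1          # stale (or sharing disabled): new reading
        last_update[dtype] = t
    return updates

# Hypothetical workload: four applications ask for "temperature",
# each tolerating readings up to 5 time units old.
reqs = [(0, "temperature", 5), (1, "temperature", 5),
        (2, "temperature", 5), (9, "temperature", 5)]
print(count_data_updates(reqs, share=False))  # 4: one update per request
print(count_data_updates(reqs, share=True))   # 2: updates at t=0 and t=9
```

The gap between the two counts is exactly the sensors-tier work (and energy) that sharing common requests avoids, which is why TECST flattens in Fig. 8a when the algorithm is enabled.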
4.3.4. Experiment E4
We designed experiment E4 to assess the quality of the solutions calculated by Zeus, and to show how much energy it can save
for the WSANs when reusing data among multiple applications.
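The conflicting nature of the two rules compared in E4 can be sketched as follows (a toy model of ours — the linear utility decay is purely illustrative and is not Zeus' actual utility function): MUR always refreshes the data, obtaining full utility at the cost of one data update per request, while ANUR reuses a cached reading that still fits the request's freshness threshold, trading some utility for energy.

```python
def serve(rule, data_age, freshness_threshold):
    """Return (utility %, data updates) for one request under each rule.

    MUR always performs a data update, so the request sees age-zero data.
    ANUR reuses the cached reading while its age fits the request's
    freshness threshold; the utility penalty below is an illustrative
    linear model of ours, not the function used by Zeus.
    """
    if rule == "MUR" or data_age > freshness_threshold:
        return 100.0, 1   # fresh data, but one update on the sensors tier
    return 100.0 * (1.0 - data_age / (2.0 * freshness_threshold)), 0

print(serve("MUR", data_age=5, freshness_threshold=5))   # (100.0, 1)
print(serve("ANUR", data_age=5, freshness_threshold=5))  # (50.0, 0)
```

Summed over all requests, the first tuple component drives TUTOB and the second drives TECST, which is the trade-off Fig. 9 quantifies.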
To assess the quality of solutions provided by Zeus, we used the
well-known strategy of comparing the solutions obtained by Zeus
with a baseline [20]. It is important to mention that running a
traditional optimization algorithm to obtain optimum solutions to use as a baseline is not possible, due to the NP-completeness and combinatorial growth of our MINPP. Therefore, we used
a baseline estimated by us, representing an expected optimum
solution. Since we know the number of requests in every scenario,
and that the maximum utility that can be obtained for each re-
quest is 100%, by multiplying both values we can reach a reliable
maximum baseline for utility. Moreover, using the MUR as the
parameter rule_fdec in Zeus generates the baselines for comparison
(utility is maximized with this rule). Therefore, we consider that
the implementation of Zeus using the MUR rule is our baseline.
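The baseline construction just described amounts to a one-line calculation; a sketch (the function and variable names are ours):

```python
def tutob(request_utilities):
    """TUTOB: utility obtained relative to the maximum achievable.

    Each request can yield at most 100% utility, so the estimated optimum
    (matched by the MUR baseline) is simply 100 * number_of_requests;
    TUTOB expresses the utility actually obtained as a percentage of it.
    """
    baseline = 100.0 * len(request_utilities)  # expected optimum (MUR)
    return 100.0 * sum(request_utilities) / baseline

# Hypothetical run: reusing data serves two of four requests with
# non-zero staleness, lowering their individual utilities below 100%.
print(tutob([100.0, 100.0, 80.0, 60.0]))  # 85.0
```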
We can compare results obtained by Zeus when using the ANUR
rule as the parameter rule_fdec to this baseline for utility. Thus,
we can have a reliable understanding of how good the solutions
provided by Zeus are when considering the reuse of data, sharing such data among multiple applications (respecting their data
freshness requirements and reducing utility obtained). The results
obtained by using these two rules (MUR and ANUR) are worth
comparing, due to their conflicting nature. Besides utility, in E4 we also assessed the energy consumption for achieving each solution, and how much energy can be saved in our comparison with the baseline (MUR). Although the MUR rule maximizes utility, this rule also imposes a greater energy overhead due to data updates on the sensors tier, in comparison to the ANUR rule.

In E4, we used two distinct configurations, one with the MUR rule as the parameter rule_fdec and the other with the ANUR rule. Moreover, we vary the number of applications using the same method as in E1. Besides such variations, we considered as fixed the following parameters: (i) using the UDAG formation algorithm, (ii) using the EBR rule as the parameter rule_fcen and (iii) using five VNs per data type. Fig. 9 shows the results of E4.

Fig. 9. Results of experiment E4.

In Fig. 9a, the TUTOB metric shows a constant curve at 100% when using the MUR rule, as expected. When using the ANUR rule, the TUTOB metric drops from around 96% for 200 applications to around 80% for 2000 applications. This is because when the number of applications increases, more opportunities of reusing data to meet applications occur. In these opportunities, applications are met with values of the data freshness requirement that are greater than zero, thus obtaining values of utility that are lower than 100%.

In Fig. 9b, the TECST metric grows until nearly 71 J for 2000 applications when using the MUR, because every VN is performing the maximum number of data updates in the available simulation time. When using the ANUR rule, the TECST is lower in comparison to when using the MUR. For 2000 applications, the TECST is reduced from 71 to 60 J. This represents around 15% of energy saving for the sensors tier.

In Fig. 9c, the LTST metric is shown for both configurations (using the MUR and ANUR rules). The LTST is greater when using the ANUR rule in comparison to when using the MUR, as expected, since less energy is spent in the sensors tier using ANUR. For 2000 applications, the LTST raises to around 50 days when using the ANUR rule, in comparison to 42 days when using the MUR rule.

Finally, the approach of Zeus is based on sharing the data provisioned by VNs among multiple applications, considering the data freshness requirement of applications. This approach is reflected by the algorithm using the ANUR rule. In comparison to the algorithm using the MUR rule, which was used as a baseline, the approach proposed in our work is capable of achieving near-optimal solutions, in the worst case, of values around 80% of the optimal solution. Moreover, by using the ANUR rule, we can save around 15% of energy for the sensors tier, in comparison to the baseline algorithm using the MUR rule.

5. Related Work

The research on CoS is recent, with most existing proposals focusing on virtualization models, instead of the formulation of resource allocation mechanisms. Thus, in our discussion, we also considered works from the IoT field [44,23,22,12,51–54], which could be adapted to CoS.

From the field of CoS, Delgado et al. [16] formulated a mixed integer and linear programming problem and proposed a centralized heuristic algorithm based on linear programming to perform resource allocation. Their problem seeks to maximize the number of applications sharing the CoS, while considering constraints related to storage, processing power, bandwidth, and energy consumption requirements of the sensors tier. Dinh et al. [21] proposed a centralized model for the CoS to provide on-demand sensing services for multiple applications with generic requirements, such as the data-sensing interval. The authors also designed a request aggregation scheme that considers the more restrictive data-sensing interval among the aggregated requests. Therefore, their scheme performs only once the requests with data-sensing intervals in common and shares the results of the consolidated request among the aggregated requests. In addition, the authors formulate a linear problem for minimizing the energy consumption of physical sensors, as well as the bandwidth consumption of sensing traffic.

In the IoT field, Narman et al. [22] propose a generic model, with a greedy algorithm, to perform server allocation to IoT requests dynamically. Moreover, they consider a priority-based queuing of IoT requests. Zeng et al. [12] formulated a mixed integer nonlinear programming problem to perform resource allocation in the edge tier. They aim at minimizing the completion time of requests, influenced by computation, I/O interrupts and transmissions, while
considering load balancing on both the client side and the edge side. Moreover, they propose a three-stage algorithm for solving their formulated problem, based on the linear programming relaxation method and a greedy approach. Yu et al. [23] proposed a game-theoretical approach to allocate resources to requests optimally, using virtual machines (VMs) in a cloud-based vehicular network. Their approach is decentralized, such that each VM competes for resources with other VMs based on its local view. The authors aim to meet the QoS requirements of VMs while ensuring the usage levels of computation and storage resources from the physical infrastructure. Aazam et al. [51] proposed a probabilistic model for resource allocation in the cloud of things using the edge tier to decide what type of data to upload to the cloud tier, avoiding burdening the core network and the cloud. Their approach considers that users and their application requests have an unpredictable probability of ceasing to use resources from the physical infrastructure at any moment. The authors call this a relinquish probability and centre the design of their probabilistic model on it. Moreover, their model also considers as parameters the service type, service price, and variance of the relinquish probability. Farias et al. [44] proposed a framework for Shared Sensor and Actuator Networks (SSAN), including an energy-efficient centralized task scheduling algorithm. A major feature of their work is that the algorithm performs tasks in common to multiple applications only once. Their framework also handles precedences among requests of the same application when making scheduling decisions. As a major drawback in comparison to our work, their work does not consider the cloud and edge tiers, thus not supporting CoS applications which demand, respectively, high processing capacity for analytics and strict response times. Ekmen et al. [54] proposed an energy-efficient multi-copy and multi-path WSAN routing strategy. Their basic idea behind multi-copying is to duplicate only the WSAN-generated data that passes through some central nodes, instead of duplicating all the WSAN-generated data, as a precaution against WSAN malfunctioning. A limited number of nodes with higher data transmission rates are determined as central, considering the WSAN lifetime maximization objective. As a major strength of this paper, the authors formulate a mixed integer programming problem to determine optimal routing. Moreover, a heuristic algorithm for finding good solutions for large problem instances in reasonable times is proposed. As a drawback, Ekmen et al. do not consider the edge tier in their proposal, with all the potential for supporting WSAN data storage, processing and analytics that such a tier can bring, besides the management capabilities and delay-sensitive application support. Xu et al. [52] propose Zenith, a centralized edge computing resource allocation model. Their model allows service providers to establish resource sharing contracts with edge infrastructure providers. Based on the established contracts, service providers employ a latency-aware resource allocation algorithm that enables delay-sensitive requests to run to completion, having their requirements considered in such decisions. Their algorithm is auction-based and ensures truthfulness and utility maximization for both the Edge Computing Infrastructure Providers and the Service Providers. Wang et al. [53] present the Edge NOde Resource Management (ENORM) framework, to address the resource management challenge in the edge tier. In ENORM, provisioning enables cloud servers to offload workloads onto edge nodes. Auto-scaling takes resource availability on the edge node into account and allocates/de-allocates resources provided to a workload. ENORM was able to reduce the latency of applications by between 20% and 80%, and the data transfer and communication frequency between the edge node and the cloud by up to 95%. Wang et al. consider the edge tier clearly in their proposal, and their provisioning and auto-scaling mechanisms are designed as centralized solutions. As drawbacks, their work does not consider precedences among requests of the same application, or requests in common among multiple applications.

Our work differs from all previous proposals mainly because of its hybrid approach. Despite the inherently distributed nature of CoS and IoT, only in [23] is a fully decentralized solution proposed, while all the others are fully centralized approaches. In our hybrid approach, we combine and make the most of the features of both centralized and decentralized solutions. As in decentralized approaches, Zeus is scalable in terms of the number of VNs and applications. As in centralized approaches, each AEP running on the ET has information about a set of local VNs, and centralizes resource allocation decisions for this whole set. Moreover, considering the edge tier in the CoS architecture is another differential of our work, in comparison to [44,23,22,16,21], allowing us to support delay-sensitive applications.

Our work also differs from all related proposals in the formulation of our optimization problem and solution algorithm. We formulate a MINPP based on the application requirements of data freshness and data type, and our constraints consider the precedences among requests and the energy consumption of the CoS infrastructure. Thus, Zeus is able to save resources from the CoS infrastructure by reusing data, while improving the data freshness provided to applications. Moreover, heuristic algorithms, such as Zeus, are well known for their low computation overhead. Thus, the computation overhead and time spent for making resource allocation decisions is reduced, allowing the hybrid and online approach of our work.

Moreover, our work differs from [23,22,16,12,21,51–54] by handling precedences among requests and from [23,22,16,12,51–54] by performing requests in common among multiple applications only once, and sharing the result among multiple applications. Dinh et al. [21] also consider requests in common in their solution, although their strategy is different. Our approach for handling requests in common among applications is similar to the one of Farias et al. [44]. Both approaches formulate a unique DAG of requests, rearranging their precedence relationships through a similar algorithm. Our approach differs from both works of Dinh et al. and Farias et al. by considering requests in common based on their negotiable requirement of data freshness, and the non-negotiable requirement of data type. Finally, each one of the aforementioned contributions is not exclusive to our work. Other works also contribute on each feature separately, such as considering the edge tier, and supporting delay-sensitive applications. However, to the best of our knowledge, no algorithm found in the literature supports simultaneously all four features, namely, being a hybrid algorithm, handling precedences among requests, sharing requests in common among multiple applications, and considering the edge tier to distribute the applications' workload. Each of these four features is specifically important to the area of CoS for the reasons mentioned above. Put together, they constitute the research gap that we investigate in our work, which is summarized in Table 8.

6. Conclusion

In this paper we presented Zeus, a hybrid and heuristic-based algorithm to obtain near-optimal solutions in reduced computation time to the MINPP for resource allocation in Clouds of Sensors (CoS). The main distinct features of Zeus are as follows: (i) Zeus can perform requests in common for multiple applications only once, sharing the results of this single execution among these multiple applications. (ii) Zeus considers the existence of precedence relationships (dependencies between data inputs and outputs) among the requests of a same application. (iii) Zeus leverages the concept of edge computing in its operation. (iv) Zeus has characteristics of both centralized and decentralized algorithms. Therefore, the partly-decentralized design of Zeus makes the most of the features of each computational tier of the CoS system.
Table 8
Summary of strengths of related work and research gap.
Criteria: 1 Degree of decentralization (1.1 Centralized, 1.2 Hybrid, 1.3 Decentralized); 2 Handles precedences; 3 Shares requests in common; 4 Considers the edge tier.

Farias et al. [44]    1.1   2   3
Yu et al. [23]        1.3
Narman et al. [22]    1.1
Delgado et al. [16]   1.1
Zeng et al. [12]      1.1           4
Dinh et al. [21]      1.1       3
Aazam et al. [51]     1.1           4
Xu et al. [52]        1.1           4
Wang et al. [53]      1.1           4
Ekmen et al. [54]     1.1
Zeus                  1.2   2   3   4
Our proposal has the following major contributions. First, as compared to Zeus using the MUR and ANUR rules, in terms of the
decentralized algorithms, Zeus is scalable in terms of the number quality of solutions obtained, using the same baseline of our cur-
of VNs and applications. We assessed the scalability of Zeus in the rent work. Finally, designing a fully autonomous and self-adaptive
most adverse configuration. In the performed evaluation, the en- algorithm, which operates based on a set of rules and policies,
ergy consumption of the sensors and edge tiers grow linearly with the increasing number of applications (ranging from 200 to 2000) running in the CoS system. Moreover, the values of makespan of applications grow as a sigmoid curve as the number of VNs per data type grows (from 1 to 5). Therefore, in every aspect assessed in our experiments, Zeus succeeded in handling a growing load (in terms of an increasing number of applications) and in scaling (in terms of an increasing number of VNs) to accommodate such a growing load. Thus, the hybrid approach of Zeus achieved good scalability.

Second, since it leverages the presence of an edge tier in the CoS architecture, Zeus supports delay-sensitive applications with restrictive response times. In the performed experiments, Zeus achieved an application makespan (the time for acquiring, deciding and actuating over the acquired data) of less than one second on average (for 8000 applications). The response time of applications (the time for only deciding and actuating over the acquired data) is, consequently, also lower than one second. Therefore, when leveraging the edge tier, Zeus is better suited to delay-sensitive applications with restrictive response times than a typical two-tier CoS architecture.

Third, by sharing among multiple applications the outcome of tasks that are common to them, Zeus reduces the energy consumption of the sensors tier. When using the UDAG formation algorithm, and for 2000 applications running in the CoS, the energy consumption of the sensors tier is reduced by 86%. Also, for 2000 applications, the lifetime of the sensors tier increases from around 5 to around 38 days when we use the UDAG formation algorithm. Consequently, Zeus contributes to saving energy and extends the lifetime of WSANs.
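The effect of sharing common task outcomes can be illustrated with a toy deduplication model. This is only a sketch of the principle: the unit-cost assumption, the task names and the function names below are our own illustration, not the cost model used in Zeus.

```python
# Toy model: each executed sensing task costs one energy unit, so
# executing only the union of requested tasks (instead of every
# application's full task list) stands in for the sharing that Zeus
# performs across applications.

def energy_without_sharing(app_tasks):
    """Every application triggers all of its own tasks independently."""
    return sum(len(tasks) for tasks in app_tasks.values())

def energy_with_sharing(app_tasks):
    """A task requested by several applications is executed only once."""
    distinct = set()
    for tasks in app_tasks.values():
        distinct.update(tasks)
    return len(distinct)

apps = {
    "app1": {"temp@zone1", "humidity@zone1"},
    "app2": {"temp@zone1", "vibration@zone2"},
    "app3": {"temp@zone1", "humidity@zone1", "vibration@zone2"},
}

e_no = energy_without_sharing(apps)   # 7 task executions
e_sh = energy_with_sharing(apps)      # 3 distinct tasks
savings = 1 - e_sh / e_no             # ~57% saved in this toy example
```

The more the applications' task sets overlap, the closer `e_sh` gets to the size of a single application's workload, which is the regime in which the large savings reported above become possible.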
Finally, the Zeus approach is based on sharing the data provisioned by VNs while considering the data freshness requirement of applications. This approach is embodied in the algorithm's ANUR rule. In comparison to the algorithm using the MUR rule, which was used as a baseline, the approach proposed in our work achieves near-optimal solutions, reaching, in the worst case, values around 80% of the optimal solution. Moreover, by using the ANUR rule, we can save around 15% of the energy of the sensors tier in comparison to the baseline algorithm using the MUR rule.
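The contrast between the two rules can be sketched as a cache-reuse policy driven by the applications' freshness bound. The class, method names and exact reuse condition below are illustrative assumptions of ours; the formal ANUR and MUR definitions are given earlier in the paper.

```python
import time

class VirtualNode:
    """Minimal VN sketch: caches its last provisioned reading.
    Names are illustrative, not taken from the Zeus implementation."""

    def __init__(self):
        self.last_value = None
        self.last_update = None   # timestamp of the last acquisition
        self.acquisitions = 0     # how often the sensors tier was used

    def acquire(self):
        # Stand-in for triggering the sensors tier (the costly step).
        self.acquisitions += 1
        self.last_value = "reading"
        self.last_update = time.monotonic()
        return self.last_value

    def provide(self, max_age, always_update=False):
        """always_update=True mimics an MUR-like baseline (always
        refresh from the sensors); otherwise an ANUR-like rule reuses
        cached data whose age still meets the freshness bound."""
        if always_update or self.last_update is None:
            return self.acquire()
        if time.monotonic() - self.last_update <= max_age:
            return self.last_value   # reuse: no sensor energy spent
        return self.acquire()        # stale: refresh from sensors

# Two applications with a 5 s freshness bound share one acquisition:
vn = VirtualNode()
vn.provide(max_age=5.0)   # first request triggers the sensors
vn.provide(max_age=5.0)   # second request is served from the cache
```

Under this reading, the energy gap between ANUR and MUR grows with the rate at which applications request the same data type within each other's freshness windows.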
There are several research directions that deserve further investigation. One of them is proposing new rules for Zeus, so that its behaviour can be tailored to different QoS requirements and/or application domains. Designing a reactive resource provisioning mechanism and comparing its performance with the proactive strategy used in Zeus is another avenue we intend to pursue. We also aim at providing support to event-based applications. In addition, we suggest investigating different types of heuristics for solving the MINPP of resource allocation. Designing such new heuristics, whose adaptation is based on learning techniques, is a final and ambitious goal that we believe will contribute to the full realization of the potential of a three-tier CoS architecture.

Acknowledgements

Igor L. Santos's and Gabriel M. Oliveira's work is supported by scholarships from the CAPES Foundation, Brazil, from the Ministry of Education of Brazil. This work was partially supported by the Brazilian funding agencies FAPERJ, Brazil (under grant 213967 for Flavia C. Delicato) and CNPq, Brazil (under grant 307378/2014-4 for Flavia C. Delicato and grant 304527/2015-7 for Luci Pirmez). Flavia C. Delicato and Luci Pirmez are CNPq fellows. The contribution of Samee U. Khan is based upon work supported by (while serving at) the National Science Foundation, United States. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. Professor Zomaya's work is partially supported by an Australian Research Council Linkage Industry, Australia Grant (LP160100406).
Igor L. Santos received his Doctorate degree in Informatics in 2017 and his Bachelor degree in Production Engineering in 2011 from the Federal University of Rio de Janeiro (UFRJ). He is currently a lecturer at the Department of Production Engineering (DEPRO) of the Federal Centre for Technological Education Celso Suckow da Fonseca (CEFET-RJ), Brazil. His research interests include Industry 4.0, Wireless Sensor and Actuator Networks, Cloud Computing, Information Fusion and Structural Health Monitoring.
Luci Pirmez received her Ph.D. degree in 1996 from the Federal University of Rio de Janeiro (UFRJ), where she is a researcher and professor of post-graduation courses in computer science. Her research interests include Wireless Sensor and Actuator Networks, Network Management and Information Security.

Flavia C. Delicato received her Ph.D. degree in 2005 from the Federal University of Rio de Janeiro (UFRJ), where she is an associate professor. Her research interests are Wireless Sensor and Actuator Networks, Middleware, Adaptive Systems and the Internet of Things (IoT). She is a CNPq fellow, level 1.

Gabriel M. Oliveira is currently pursuing a Master's degree in Informatics at the Federal University of Rio de Janeiro (UFRJ). His research interests include Wireless Sensor and Actuator Networks, Cloud Computing and Information Fusion.

Claudio M. Farias received an M.Sc. degree in Computer Science in 2010 and his doctorate degree in 2014 from the Federal University of Rio de Janeiro (UFRJ), Brazil. He is currently a professor at the Tercio Pacitti Institute for Applications and Computational Research. His research interests include nanonets, wireless sensor networks, network security, VoIP, real-time communications and video processing.

Samee U. Khan is an associate professor of electrical and computer engineering at North Dakota State University. His research interests include cloud, grid, and big data computing, wired and wireless networks, and smart grids. Khan has a Ph.D. in computer science from the University of Texas at Arlington. He is a senior member of the IEEE. For more information, see: http://sameekhan.org/.

Albert Y. Zomaya is the chair professor of high-performance computing and networking in the School of Information Technologies at Sydney University. His research interests are complex systems, parallel and distributed computing, and green computing. Zomaya has a Ph.D. in control engineering from Sheffield University, UK. He is a Fellow of AAAS, IEEE, and IET (UK).