
Computer Communications 144 (2019) 124–131

Contents lists available at ScienceDirect

Computer Communications
journal homepage: www.elsevier.com/locate/comcom

A trust management framework for clouds


Yefeng Ruan ∗, Arjan Durresi
Indiana University Purdue University Indianapolis, Department of Computer and Information Science, Indianapolis, IN, 46202, USA

ARTICLE INFO

Keywords: Trust management; Secure cloud computing; Reliability

ABSTRACT

In today's cloud computing platforms, more and more users are working or collaborating in the multi-cloud environment, in which collaborators, clouds, and computing nodes may belong to different institutions or organizations. Those different organizations might have their own policies. Security is still a big concern in cloud computing. To help cloud vendors and customers detect and prevent potential attacks, we propose a trust management framework. We consider link/flow level trust, node level trust, and task/mission level trust.

We introduce a new security metric, trustability (trust–reliability), and a new algorithm to calculate it. Trustability measures how much a system can be trusted under a specific attack vector. Trustability can be used to explore the design space of resource configuration in order to choose the right trade-off between trustability and the cost of redundancy. We show that our trust management framework can guide administrators and customers in making decisions. For example, based on real-time trust information, cloud administrators can migrate tasks from suspect nodes to trustworthy nodes, dynamically allocate resources, and manage the trade-off between the degree of redundancy and the cost of resources.

1. Introduction

Nowadays, cloud computing platforms are becoming more and more widely used and welcomed in many fields, including e-commerce, web applications, data storage, healthcare, gaming, mobile social networks, and so on [1–3]. Cloud computing platforms can provide customers with Internet-based services, without requiring customers to purchase an amount of hardware [4]. However, security and privacy are still two significant concerns for cloud computing platforms and applications [5–10]. For example, data confidentiality and auditability are two critical properties for cloud vendors to convince customers to put their sensitive information in clouds [1]. Also, it is essential for cloud vendors to provide available and reliable services, which is called business continuity and service availability in [1], to customers.

According to [1], clouds can be classified into public and private clouds depending on their owners and serving objects. Public clouds are generally developed by big companies, e.g., Google and Amazon, and are designed to be accessible to public customers in a pay-as-you-go manner, such as Amazon EC2. On the other hand, private clouds are usually owned by private companies or organizations, and only internal users have access to them. In reality, as different owners can own the clouds, it is possible that a single mission or task will involve or be distributed over multiple clouds. In this work, we call this scenario the multi-cloud environment, as in [11].

In cloud computing platforms, on the one hand, a single task might be distributed over multiple computing nodes. For example, one computing node pre-processes the data, a second computing node does the data mining tasks, and a third one visualizes the results to the end users. On the other hand, a single computing node may be shared by multiple tasks. In such cases, it is possible that tasks share a node with other untrustworthy tasks.

Faced with such new challenges, the old security model that consists in defending the perimeter of the system is not valid anymore. We have to assume that whatever defense mechanisms we deploy in the systems will sooner or later be breached by attackers. We have to design systems that can survive various attacks, with a calculated and acceptable degradation in performance, by using additional resources planned for such conditions. Therefore, besides traditional security measures, such as cryptography, access control policies, and so on, more measures should be taken in cloud computing platforms. For example, when multiple cloud computing platforms are involved, not only load balance and redundancy, but also the trustworthiness of computing nodes, groups of nodes, tasks, and cloud computing platforms, should be taken into account.

In this paper, we apply our measurement theory based trust management framework to cloud computing platforms, addressing three levels of trust measurement: flow level trust, node level trust, and task level trust. Both the node level trust and the task level trust depend on the flow level trust. Although packet information is

∗ Corresponding author.
E-mail addresses: yefruan@iupui.edu (Y. Ruan), adurresi@iupui.edu (A. Durresi).

https://doi.org/10.1016/j.comcom.2019.05.018
Received 17 May 2018; Received in revised form 18 October 2018; Accepted 23 May 2019
Available online 28 May 2019
0140-3664/© 2019 Elsevier B.V. All rights reserved.

more detailed than flow information and may also be available in some cases, typically the amount of packets is much higher than the number of flows, such that it is very difficult to handle packet information [12]. Flows, which are aggregations of packets, also exhibit traffic features between the sender and the receiver. Therefore, in this work, we use flow level measurements rather than packet level measurements. To summarize, we estimate trustworthiness based on the network flow traffic.

We show that by using two trust metrics, trustworthiness and confidence, we can help cloud vendors and cloud customers estimate the trust of both computing nodes and tasks. Based on the trust evaluation results, in case of attacks, cloud administrators can migrate tasks from suspect nodes to trustworthy nodes. Also, cloud administrators can dynamically allocate resources to tasks with the guidance of our trust management framework. Our main contributions include the following.

• Propose a measurement theory based trust management framework for cloud computing platforms;
• Show how trust can help to detect attacks by using a testbed example;
• Propose a metric, trustability (trust–reliability), and a new algorithm (the trustability assessment algorithm) which takes into account both trust and reliability;
• Show how the trustability assessment algorithm can help to dynamically allocate resources by using an example.

The rest of the paper is organized as follows: we introduce literature works in Section 2. We introduce our measurement theory based trust management framework in Section 3. We illustrate the usage of our trust management framework for clouds in Section 4. We show the usage of our trust management framework with an attack example on our testbed in Section 5. We propose a trustability assessment algorithm and show its usage for resource configuration in Section 6. Finally, we conclude this paper in Section 7.

2. Background and related works

As security is a very hot research topic in clouds, many works have been proposed to detect attacks and to diminish the damage [6,13–16]. There exist several works on trust between cloud vendors and cloud customers. For example, in [17], the author explored the role of mutual trust between cloud providers and cloud customers in data storage systems. In [18], the authors listed several factors which need to be considered in estimating trust, such as ownership, control, transparency, and so on. Therefore, we can see that there is a significant need in cloud computing platforms for cloud vendors to be able to provide trust information to their customers.

On the other hand, there are also some works focusing on trust or risk assessment in distributed systems. In [19], the authors defined risk using the concept of fuzzy belief to deal with risk's uncertainty property. In [20], the authors established a network of hosts, connected by the flows among them, and explored both the PageRank and HITS algorithms in their work. Similarly, in [21], the authors assessed hosts' risk based on their flows and the host network. Compared with these existing works, our trust management framework can estimate trust for any portion of the system. In addition to that, our work can be used to guide vendors to dynamically allocate resources accordingly.

In this work, we adapt and apply our measurement theory based trust management framework to fill the gap between the need for trust and the analysis of trust in cloud computing platforms. We provide an approach for cloud vendors or administrators to assess the trust of nodes and tasks in cloud environments. Also, we provide cloud vendors guidance for dynamically allocating resources. Compared with other existing works, in addition to the trustworthiness, we also include confidence in our trust management framework. Confidence can be used to measure how certain the trustworthiness evaluation is. Furthermore, we develop a reconfiguration capability of tasks over elements of the system, such as tasks, computing nodes, and networks, based on their trust values and the trust required by various tasks [22].

3. Trust management framework

Trust has been shown to be very helpful in many decision-making fields, such as IT systems, sociology [23], electronic commerce [24], Internet of Things [25], financial analysis [26], and so on. Therefore, there are many trust management frameworks proposed in the literature [27].

3.1. Trust metrics

In this paper, we apply our measurement theory based trust management framework [28] in cloud computing platforms. Compared with existing trust models, our framework is flexible and can be adapted to many formulas.

Our trust management framework has two metrics: trustworthiness and confidence. Trustworthiness (denoted as m) is like a comprehensive summary of multiple ''measurements''. It measures to what extent the truster will trust the trustee: for example, how good is the quality, or what is the probability that the measured object will behave as the truster expects? Although the specific semantic meanings and processing methods might differ, this summarization of ''measurements'' is very similar to the averaging of sample measurements in statistics [29]. Suppose that we have a set of measurement results $M = \{m_1, m_2, \ldots, m_k\}$; then the trustworthiness m can be calculated as in Eq. (1).

$m = \frac{\sum_{i=1}^{k} m_i}{k}$    (1)

Similar to sampling in statistics, depending on the number of incidents and the intensity of measurements, we would have a distribution of measurements in a range around the summarized trustworthiness m. Such a distribution, which shows to what extent we are confident about the trustworthiness evaluation, is similar to the ''error'' in physical measurements, which represents the variance of the actual value from the summarized value. Therefore, we associate confidence (denoted as c) with the ''variance'' or the ''error'' of measurement theory, in an inversely proportional manner. It is intuitive that the smaller the ''variance'' or ''error'' is, the higher the confidence is. Therefore, in our trust management framework, a trust tuple has two metrics, trustworthiness m and confidence c, and can be represented as T(m, c).

We define both m and c as continuous values in the range [0, 1]. A higher trustworthiness value means that the trustee is more trustworthy: 0 means the most untrustworthy, while 1 refers to the most trustworthy scenario. Similarly, a higher confidence value means the truster is more confident about the trustworthiness evaluation m.

We define confidence c with respect to the error in a corresponding form. As a result, we further introduce another intermediate metric, the range R, which is deduced from confidence c. If we consider c as the percentage of known fact, then the percentage of uncertain fact would be 1 − c. Therefore, R is the total trustworthiness interval times the percentage of uncertain fact. Generally, for a trust tuple T(m = 0.5, c = 0), which is the most neutral and uncertain trust fact, we would like the possible trust interval [m − R/2, m + R/2] to cover the whole interval [0, 1], i.e., the ''real'' trust value could be any value in the range [0, 1]. On the contrary, when c = 1, which represents the highest confidence, we would like R = 0, which means that both the worst and best expected trustworthiness equal m. Following these guidelines, the relation between confidence and range can simply be defined as R = 1 * (1 − c) = 1 − c.

To better fit the error characteristic, the radius r, which is half of the range R, is introduced. r shows how far the best or worst expected trust can be from the summarized trustworthiness value m. Therefore, in this definition, m is equivalent to the measurement mean, and r is equivalent to the square root of the variance (i.e., the standard error). The conversion between r and c can be written as in Eq. (2).

$c = \begin{cases} 1 - 2r, & \text{if } r \le 0.5 \\ 0, & \text{otherwise} \end{cases} \qquad \text{and} \qquad r = \frac{1 - c}{2}$    (2)
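As a concrete illustration of Eqs. (1) and (2), the sketch below is ours, not the authors' code; we read the ''standard error'' interpretation of r as the standard deviation of the mean, which makes confidence grow with the number of measurements, as the framework requires.

```python
import math

def trust_tuple(measurements):
    """Summarize a list of scores in [0, 1] into a trust tuple T(m, c).

    m is the plain mean (Eq. (1)); r is taken as the standard error of the
    mean, and confidence is c = 1 - 2r, floored at 0 (Eq. (2)).
    """
    k = len(measurements)
    m = sum(measurements) / k
    variance = sum((x - m) ** 2 for x in measurements) / k
    r = math.sqrt(variance / k)       # standard error of the mean
    c = max(0.0, 1.0 - 2.0 * r)
    return m, c

# Identical measurements: zero error, hence full confidence.
m, c = trust_tuple([1.0, 1.0, 1.0, 1.0])
# A 1:1 mix of positive and negative scores: same-size sample, lower confidence.
m2, c2 = trust_tuple([0.0, 1.0, 0.0, 1.0])
```

Note that doubling the number of conflicting measurements raises the confidence while leaving m unchanged, matching the two properties discussed for Figs. 3 and 4 below.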


Fig. 1. The relation among m, c, and r (using the Normal distribution as an example). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Fig. 2. A flow measurement normalization example.

To illustrate the relationship among m, c and r, we use the Normal distribution as an example in Fig. 1. Here, the black line represents the mean of a set of measurements $M = \{m_1, m_2, \ldots, m_k\}$, which is the trustworthiness m. The blue line represents the standard error r, and the confidence c = 1 − 2r is represented by the red line. The more consistent the measurement results are, the smaller the standard error r is, which results in higher confidence.

Note that, although we use the Normal distribution as an example, our trust management framework can be adapted to any other distribution. For example, in case the measurement results are binary, i.e., positive and negative, we can use the Beta distribution to calculate the mean m and error r.

3.2. Error propagation theory

In a trust management framework, it is inevitable that we need to perform some operations on trust information. For example, we might need to aggregate two trust tuples T1(m1, c1) and T2(m2, c2) together, or we may need to infer indirect trust given a chain of trust tuples. In our framework, the propagated trustworthiness m is defined by user-defined formulas, such as multiplication, weighted mean, and so on. As confidence is derived from the error, we use the error propagation theory [30] to calculate the propagated error first; then the error can be converted to confidence as shown in Eq. (2). By using the error propagation theory, our framework is flexible and can be adapted to various formulas as long as they are differentiable.

Given a set of variables which have error (or uncertainty) r, error propagation theory (also called propagation of uncertainty) is used to calculate the error (or uncertainty) of a function of the variables [30]. To illustrate this, we use two trust tuples T1(m1, c1) and T2(m2, c2) as an example; however, it is easy to extend to cases with more than two trust tuples. We represent the function as f(T1, T2). As long as f(T1, T2) is differentiable, the error of the function r_f can be computed as in Eq. (3) [30].

$r_f^2 = \left(\frac{\partial f}{\partial T_1}\right)^2 r_1^2 + \left(\frac{\partial f}{\partial T_2}\right)^2 r_2^2 + 2\,\frac{\partial f}{\partial T_1}\,\frac{\partial f}{\partial T_2}\,\mathrm{cov}(T_1, T_2)$    (3)

Here cov(T1, T2) is the covariance between T1 and T2. In the case that T1 and T2 are independent, the covariance becomes zero.

4. Trust management in cloud computing platforms

4.1. Measurement of flows

As indicated in Section 1, we measure trust based on network flows among computing nodes in cloud computing platforms. In our approach, we treat each network flow as an atomic measurement. We assume that the source and destination of flows are known, such that we know the truster and trustee correspondingly. Moreover, flows between the truster and the trustee are treated like conversations or observations between them. In order to know the trust relationship, we need to analyze traffic flows. Anomalous flows can decrease trustworthiness. To distinguish anomalous traffic from normal traffic, there exist many methodologies, such as machine learning-based methods [31–33], rule-based methods [34], and so on. We use anomaly detection results as our trust management framework's input.

In this paper, we launch a testbed and use bursts as examples to simulate flows. We use Ostinato (https://ostinato.org/) to generate bursts and collect packets by using Wireshark (https://www.wireshark.org/). Together with the packet features collected by Wireshark, we also have three additional features for bursts: the number of packets in a burst, the burst rate (number of bursts per second), and the number of bursts in a flow. In the rest of this paper, we assume that we have profiles which specify both normal and abnormal traffic patterns for the above three burst features for each flow. For a given traffic, we compare it with the profile. Considering the above three features, we assume that, based on the profile, we can normalize their anomalous scores within the range [0, 1]. An example of normalization is shown in Fig. 2. In this example, we assume that there exist two thresholds. If the measured value is less than or equal to the lower threshold, its anomalous score is 0, which means it is a normal flow. On the other hand, if the measured value is greater than or equal to the higher threshold, its anomalous score is 1, which means it is an anomalous flow. For values between the lower and higher thresholds, we assume the anomalous score is linearly related to the measured value. To combine the three features, there are many ways, e.g., their average, a weighted mean, and so on.

Besides continuous anomaly scores, our trust management system can also handle binary cases. In some cases, the output of anomaly detection is a binary result (normal or abnormal) rather than a continuous value [31]. For example, classification algorithms and clustering algorithms will classify traffic into two categories. Depending on the output of anomaly detection, corresponding distributions can be applied. For example, we can use the Beta distribution for binary results. For other discrete cases, we can use the Dirichlet distribution.

4.2. Trust modeling: Trustworthiness and confidence

We evaluate flow trust based on the flows' anomaly detection results. As defined in Section 3.1, we calculate m as the mean of $M = \{m_1, m_2, \ldots, m_k\}$, and confidence is derived from M's error. As indicated by [35], confidence should have two important properties. First, given a fixed conflict ratio of evidence or measurements (i.e., positive vs.


Fig. 3. The conflict ratio's effect on confidence.

Fig. 4. The total number of measurements' effect on confidence.

Fig. 5. The effect of the forgetting factor.

negative), confidence should increase as the amount of evidence or measurements increases. Second, given a fixed amount of evidence or measurements, confidence increases when the conflict ratio decreases.

We show these two properties in Figs. 3 and 4. In this example, we only consider the two extreme anomaly scores {0, 1}; in other words, we consider that the anomaly detection results are either positive or negative. We can see that, when the total number of measurements is fixed, confidence is smallest when the ratio of positive to negative measurements is 1:1. Also, in Fig. 4, we fix the conflict ratio at 1:1, which means that we have the same number of positive and negative measurements. We can see that confidence monotonically increases with the total number of measurements. In other words, given a fixed ratio of positive to negative measurements, more measurements make the trustworthiness estimation more confident.

As nodes' or objects' behavior may change over time, the trust assessment should be dynamically updated as well. Therefore, we divide flows based on time. For example, flows collected within one hour can be considered as a measurement window. The length of the time window will be tuned in future work.

Besides this, the trust assessment should also emphasize recent measurements over old measurements, as recent measurements are more likely to reflect the real-time situation. Therefore, we forget previous measurements by a forgetting factor σ, where σ ≤ 1. Instead of treating previous measurements as important as current measurements, for each time window, we discount previous measurements by σ. The larger σ is, the more important old measurements are. When σ = 1, we consider old measurements as equally important as current measurements. When σ = 0, we only use the most recent measurements to estimate trustworthiness. Note that in each new window, old measurements are discounted by σ again, so the old measurements will be discounted by σ, σ², σ³, and so on, as time goes on.

By discounting the old measurements, instead of using the mean, we use the weighted mean as the trustworthiness m. Correspondingly, we use the weighted sample variance to calculate confidence. We show them in Eq. (4). Here, the weights of the most recent measurements are 1, and the weights of previous measurements are discounted, i.e., multiplied by σ. Note that, within a single time window, all the anomaly detection results $m_i$ are treated as equally important, following the definitions of Eqs. (1) and (2).

$m = \frac{\sum_{i=1}^{k} w_i m_i}{\sum_{i=1}^{k} w_i} \qquad r = \sqrt{\frac{\sum_{i=1}^{k} w_i (m_i - m)^2}{\sum_{i=1}^{k} w_i}}$    (4)

There exists a trade-off between the number of available measurements and timely trust information. If we set σ too large, we will have more measurements available but lose the relative importance of the recent measurements. On the other hand, if we set σ too small, we can track the trust estimation in real time, but with a limited number of measurements. We show the effect of the forgetting factor in Fig. 5. In this example, we divide time into 20 time windows, and each time window has 5 new measurements. For the first 10 time windows, we assume they have 4 positive measurements and 1 negative measurement; therefore, for the first 10 time windows, m = 0.8. For the next 10 time windows, we assume that the object changes its behavior, and each time window has 1 positive measurement and 4 negative measurements. We can see that, given smaller forgetting factors (forgetting more rapidly), m decreases more rapidly. On the other hand, smaller forgetting factors result in lower confidence.

4.3. Trust of nodes

In [21], the authors argued that the risk of a node/host is determined by both its incoming and outgoing links/flows. It is reasonable to assume that if a node sends/receives a large number of anomalous flows, it may be executing some malicious missions, or it may be compromised. In addition to that, in cloud computing platforms, we consider that a node's trust will also be affected by the tasks that are executing on the node. For example, if we know that a malicious task is running on a specific node, although the node's incoming and outgoing flows have not exhibited anomalies yet, we will still treat this node as a suspect.

In summary, we take all the incoming and outgoing flows into account. Similarly, all the tasks running on the node will be considered. We represent the measurements of all the incoming and outgoing flows as flow_I and flow_O, respectively, and all the tasks running on a node are denoted as $Task = \{task_1, task_2, \ldots, task_n\}$. For some types of attacks, incoming and outgoing flows are of different importance, and we might consider using a weighted mean of them. However, in the following attack example, we consider incoming and outgoing flows to be equally important, and we use w_flow to denote the weight of incoming and outgoing flows.
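The discounting scheme of Eq. (4) can be sketched as follows. This is our minimal reading of the scheme (weight 1 for the newest window, multiplied by σ for each additional window of age), not the authors' implementation:

```python
import math

def discounted_trust(windows, sigma):
    """Weighted-mean trust over time windows (Eq. (4)).

    `windows` is a list of per-window measurement lists, oldest first.
    Measurements in the newest window get weight 1; a window that is
    j windows older gets weight sigma**j.
    """
    weighted = []
    newest = len(windows) - 1
    for age, window in enumerate(windows):
        w = sigma ** (newest - age)
        weighted.extend((w, x) for x in window)
    total_w = sum(w for w, _ in weighted)
    m = sum(w * x for w, x in weighted) / total_w
    var = sum(w * (x - m) ** 2 for w, x in weighted) / total_w
    r = math.sqrt(var)                       # weighted "error" of Eq. (4)
    c = 1.0 - 2.0 * r if r <= 0.5 else 0.0   # back to confidence via Eq. (2)
    return m, c

# A behavior change: an all-positive window followed by an all-negative one.
m_fast, c_fast = discounted_trust([[1.0] * 5, [0.0] * 5], sigma=0.5)
m_slow, c_slow = discounted_trust([[1.0] * 5, [0.0] * 5], sigma=0.9)
# The smaller sigma forgets the old window faster, so m_fast < m_slow.
```

With σ = 0.5 the trustworthiness already drops to 1/3 after one bad window, while σ = 0.9 keeps it near 0.47, which is the qualitative behavior shown in Fig. 5.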

Table 1
Lower and higher thresholds for the three features.

Feature                        Lower threshold    Higher threshold
Number of packets in a burst   10                 70
Burst rate                     0.1                3
Number of bursts in a flow     10                 70

By considering the flows' trust and the tasks' trust as the two factors that determine trust for computing nodes, we represent it as in Eq. (5). Here, w_flow and w_task control the relative weight of the flows' trust to the tasks' trust. By following the error propagation theory, the confidence of the node can be calculated as in Eq. (6).

$m_{node} = \frac{w_{flow}\, m_{flow} + \sum_{i=1}^{n} w_{task}\, m_{task_i}}{w_{flow} + \sum_{i=1}^{n} w_{task}}$    (5)

$c_{node} = 1 - 2 \left[ \left( \frac{w_{flow} (1 - c_{flow})}{2 (w_{flow} + \sum_{i=1}^{n} w_{task})} \right)^2 + \sum_{i=1}^{n} \left( \frac{w_{task} (1 - c_{task_i})}{2 (w_{flow} + \sum_{i=1}^{n} w_{task})} \right)^2 \right]^{\frac{1}{2}}$    (6)

4.4. Trust of tasks

Similar to the trust of computing nodes, as tasks are involved with both flows and nodes (a set of nodes $Node = \{node_1, node_2, \ldots, node_N\}$), we consider both of them in evaluating tasks' trust. However, compared with the trust of nodes, where we consider all the incoming and outgoing flows, here we only take the flows that belong to the corresponding task into account. In other words, a task's flow trust is only derived from its own flows (both incoming and outgoing). Similarly, we assume that incoming and outgoing flows are equally important in the following attack example.

Similar to the trust of nodes, we first calculate the flow trust for each task. Also, we use the weighted mean of the flow trust and the nodes' trust to calculate trust for tasks, as shown in Eqs. (7) and (8).

$m_{task} = \frac{w_{flow}\, m_{flow} + \sum_{i=1}^{N} w_{node}\, m_{node_i}}{w_{flow} + \sum_{i=1}^{N} w_{node}}$    (7)

$c_{task} = 1 - 2 \left[ \left( \frac{w_{flow} (1 - c_{flow})}{2 (w_{flow} + \sum_{i=1}^{N} w_{node})} \right)^2 + \sum_{i=1}^{N} \left( \frac{w_{node} (1 - c_{node_i})}{2 (w_{flow} + \sum_{i=1}^{N} w_{node})} \right)^2 \right]^{\frac{1}{2}}$    (8)

5. A testbed example

In this section, we show how to use our trust management framework in cloud computing platforms. We show an example of a possible attack on our testbed platform.

We launch a testbed in which we use bursts as examples to simulate flows. We use 5 virtual machines to simulate 5 nodes. Each virtual machine runs Ubuntu 16.0.1 and has 2.1 GB of memory. The virtual machines have four 64-bit processors. We use Ostinato to generate bursts and collect packets by using Wireshark. As mentioned in Section 4, we use three features: the number of packets in a burst, the burst rate, and the number of bursts in a flow. Moreover, as indicated in Fig. 2, for simplicity we assume that the anomalous score is linearly related to the measured value. To illustrate this, we set the lower and higher thresholds as an example and show them in Table 1. Note that here we use these thresholds as examples; in reality, tasks and nodes can have their own pre-defined profiles and therefore different thresholds.

For each feature, we can normalize the measured values into the range [0, 1] as shown in Fig. 2. To consider them together, we use the weighted mean of the three normalized results. For different applications or tasks, we might assign different weights, since the three features might not be equally important in some cases. In this paper, for simplicity, we assume that the three features are equally important, such that we use the mean of the three normalized values as the anomalous score.

5.1. An attack example in cloud computing platforms

In Fig. 6, we show an example of an attack in cloud computing platforms on our testbed. In this example, we have 6 tasks which are running on 5 computing nodes. Among the 6 tasks, tasks T2, T4 and T6 are distributed over multiple nodes and have incoming and outgoing flows among these nodes. For tasks T1, T3 and T5, we assume that they can be accomplished on a single node, such that there are no flows for them. Also, we assume that each node has profiles for all the tasks running on it, and then anomalous flows can be detected based on the profiles.

We calculate trust in an iterative way: first the flow level trust, then the node level trust, and last the task level trust. In the following, we assume that a malicious flow has all three features equal to 1, and a normal flow has all three features equal to 0. Therefore, a malicious flow has anomalous score 1 (trustworthiness score 0), and a normal flow has anomalous score 0 (trustworthiness score 1).

We assume that task T4 is a malicious task. Fig. 6(a) to (c) show the process of the attack. In (a), it begins to launch an attack on node N3. In the next step, node N2 is compromised and begins to send malicious flows to node N5, which is also running task T4. Also, as node N2 is compromised, flows between node N2 and node N1 will be anomalous as well. Finally, in (c), node N5 is also compromised. In this example, we assume that nodes are compromised, so not only task T4 is affected, but also the other tasks running on the same nodes. In Fig. 6, red lines represent anomalous flows, and blue lines represent normal flows. We will see how the malicious task (task T4) affects other nodes and tasks.

In this example, we have 3 time windows (TW1, TW2, TW3), which correspond to the scenarios of Fig. 6(a), (b) and (c). In addition to TW1, TW2, TW3, we assume that there exists a prior time window TW0, which includes the prior knowledge. Initially, in TW0, we assume that all the nodes, tasks, and flows are normal. Therefore, we let m = 1 and c = 1 for all the nodes and tasks. Regarding flows, in TW0, we assume that there are 10 normal flows for each link in Fig. 6. For example, there exist 10 normal flows between node N1 and node N2. Obviously, all the flow trust initially has m = 1 and c = 1 as well.

For the links between each pair of nodes, we assume that each contains 10 flows in each time window. Therefore, node N2 has 30 incoming and outgoing flows in total in each time window, as it has three links, with nodes N1, N3 and N5. As we indicated before, for simplicity, we consider incoming flows as equally important as outgoing flows. In other words, 10 is the total number of flows between a pair of nodes, no matter how many of them are incoming or outgoing. For the time windows TW1, TW2, TW3, we assume that each link contains 10 normal flows (each measurement m_i = 1) if the link is not affected (blue links). Otherwise, we assume that all 10 flows are anomalous (red links), which means that their measurement results are m_i = 0.

In Table 2, we list the trust information for all the nodes and tasks for the 4 time windows. As we assume that initially all the nodes and tasks are not affected, we assign them m = 1 and c = 1. In this example, we let the forgetting factor σ = 0.8. To account for the importance of the flow trust relative to the tasks' and nodes' trust, we let w_node = w_task = w_flow / 2. In future work, these weights can be adjusted to specific scenarios. Also, note that within each time window, we update the nodes' and tasks' trust using the previous time window's results as prior knowledge.

From Table 2, we can see that although initially only task T4 is malicious, it can affect other nodes and tasks as well. First of all, as task T4 is distributed over nodes N2, N3 and N5, their trustworthiness will decrease a lot, which means that nodes N2, N3 and N5 are compromised by the malicious task T4. In addition to that, we can see that it also affects tasks T2, T5 and T6, as they are running on the affected nodes N2, N3, and N5. Finally, in TW3, we can see that the
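Equations (5) and (6) amount to a weighted mean plus error propagation over the flow trust and the per-task trusts. The sketch below uses our own function names, and its default w_task = w_flow / 2 mirrors the testbed example; Eqs. (7) and (8) have the identical shape with nodes in place of tasks:

```python
import math

def node_trust(flow_trust, task_trusts, w_flow=1.0, w_task=0.5):
    """Node-level trust from flow trust and per-task trust (Eqs. (5)-(6)).

    flow_trust is a tuple (m, c); task_trusts is a list of (m, c) tuples.
    """
    m_flow, c_flow = flow_trust
    total_w = w_flow + w_task * len(task_trusts)
    # Eq. (5): weighted mean of flow and task trustworthiness.
    m_node = (w_flow * m_flow + sum(w_task * m for m, _ in task_trusts)) / total_w
    # Eq. (6): propagate each radius (1 - c)/2 through the weighted mean.
    s = (w_flow * (1 - c_flow) / (2 * total_w)) ** 2
    s += sum((w_task * (1 - c) / (2 * total_w)) ** 2 for _, c in task_trusts)
    c_node = 1 - 2 * math.sqrt(s)
    return m_node, c_node

# A node with clean flows but one fully distrusted task loses trustworthiness:
print(node_trust((1.0, 1.0), [(0.0, 1.0), (1.0, 1.0)]))  # (0.75, 1.0)
```

Applying the same aggregation with `task_trusts` replaced by the trusts of the nodes a task runs on yields the task-level values of Eqs. (7) and (8).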

Fig. 6. An attack example in cloud computing platforms. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Table 2
Trust information for all the nodes and tasks in 4 time windows.

     TW0 (m, c)   TW1 (m, c)     TW2 (m, c)     TW3 (m, c)
T1   (1, 1)      (1, 1)         (1, 1)         (0.84, 0.92)
T2   (1, 1)      (1, 1)         (0.76, 0.90)   (0.57, 0.91)
T3   (1, 1)      (1, 1)         (1, 1)         (0.84, 0.92)
T4   (1, 1)      (0.86, 0.93)   (0.65, 0.93)   (0.50, 0.95)
T5   (1, 1)      (1, 1)         (0.72, 0.88)   (0.60, 0.91)
T6   (1, 1)      (1, 1)         (1, 1)         (0.79, 0.91)
N1   (1, 1)      (1, 1)         (0.84, 0.92)   (0.71, 0.92)
N2   (1, 1)      (0.87, 0.93)   (0.65, 0.93)   (0.47, 0.94)
N3   (1, 1)      (0.72, 0.88)   (0.60, 0.91)   (0.43, 0.92)
N4   (1, 1)      (1, 1)         (1, 1)         (0.77, 0.88)
N5   (1, 1)      (1, 1)         (0.85, 0.93)   (0.65, 0.93)
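The per-feature linear normalization and weighted-mean scoring that produce the flow anomaly score can be sketched as follows. This is an illustrative sketch only: the threshold values and equal weights below are hypothetical stand-ins for the pre-defined profiles of Table 1, not values from the paper.

```python
def normalize(value, low, high):
    """Linearly map a measured value into [0, 1] between the two thresholds."""
    if value <= low:
        return 0.0
    if value >= high:
        return 1.0
    return (value - low) / (high - low)

def flow_anomaly_score(features, thresholds, weights):
    """Weighted mean of the normalized per-feature anomaly scores."""
    scores = [normalize(v, lo, hi) for v, (lo, hi) in zip(features, thresholds)]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Hypothetical thresholds for (packets per burst, burst rate, bursts per flow).
score = flow_anomaly_score(
    features=[120, 3.5, 40],
    thresholds=[(50, 200), (1.0, 5.0), (10, 60)],
    weights=[1.0, 1.0, 1.0],  # equal weights here, purely for illustration
)
```

Per-task or per-node profiles would simply supply different `thresholds` and `weights` to the same scoring function.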

Fig. 7. Trustability assessment results vs. m.

Fig. 8. Trustability assessment results vs. c.

malicious effect spread to all the nodes and tasks in this example. They all decrease their trust from the initial status (m = 1, c = 1).

Table 2 shows that our trust management framework can derive trust information for nodes and tasks in cloud computing platforms. The derived trust information is very helpful for cloud administrators when making decisions. When a trust decrease is detected on any node, cloud administrators may need to focus monitoring or investigation efforts on the suspect nodes. After an investigation, corresponding measures should be taken to limit potential damage. For example, tasks T2, T5 and T6 can be migrated in advance if we find that the nodes they are running on are losing trust. Or, at least, alarms should be raised for further investigation. Alarms and administrators' decision making will be part of our future work.

6. Trustability: Trust, redundancy and reliability

6.1. Trust-reliability assessment

In Sections 4 and 5, we introduced how to evaluate trustworthiness and confidence for tasks and nodes. It is crucial for both cloud vendors and customers to monitor trustworthiness (m) and confidence (c). Compared with existing works, in addition to the trustworthiness itself, we also measure, via confidence, how certain the trustworthiness evaluation is. Since trust is related to system reliability, and to consider trustworthiness and confidence together, we call this trustability (trust-reliability) assessment. Trustability measures how much a system can be trusted under a specific attack vector. We propose an algorithm to assess trustability in Algorithm 1.

Algorithm 1: Trustability Assessment
    Input: m, c, λ1, λ2, m_threshold
    Output: Trustability
    if m ≥ m_threshold then
        λ ← λ1
    else
        λ ← λ2
    end
    Normalized_mc ← (2 · (m − 0.5) · c + 1) / 2
    Trustability ← exp(−λ · (1 − Normalized_mc))
    return Trustability

In Algorithm 1, we first shift the trustworthiness by (m − 0.5), since 0.5 means neutral trustworthiness. To consider m and c together, we multiply the shifted m by c and then normalize the product into the range [0, 1]. Finally, we use an exponential function to assess trustability, in which λ depends on m. Basically, if both m and c are high, we want the corresponding trustability assessment result to be high as well (controlled by λ1); otherwise, if m is low, we want the trustability assessment result to decrease dramatically (controlled by λ2). Therefore, typically, we have λ1 ≤ λ2. To better illustrate this, we plot Figs. 7 and 8, with m_threshold = 0.5, λ1 = 4, and λ2 = 8.

In Fig. 7, we fix the confidence (c = 0.2 and c = 0.8, respectively). We can see that when m is low, the trustability assessment result is always low. On the other hand, for high m (when m is greater than m_threshold), the trustability assessment result increases dramatically as confidence increases. For low c, the trustability assessment result

does not increase much even if we increase m. Similarly, we fix the trustworthiness (m = 0.2 and m = 0.8) in Fig. 8. We can see that if the trustworthiness m is low, the trustability assessment result is always low, no matter how confident it is. If the trustworthiness m is high, then increasing the confidence increases the trustability assessment result as well. In summary, to get a high trustability assessment result, both m and c must be high.

We also show the trustability assessment results for the attack example (Fig. 6) in Table 3. Again, we can see that task T4 can potentially affect all the nodes and other tasks in this example.

Table 3
Trustability assessment results for all the nodes and tasks in 4 time windows.

     TW0   TW1      TW2      TW3
T1   1     1        1        0.4729
T2   1     1        0.3451   0.1746
T3   1     1        1        0.4729
T4   1     0.5164   0.2365   0.1353
T5   1     1        0.2936   0.1948
T6   1     1        1        0.3889
N1   1     1        0.4729   0.2931
N2   1     0.5360   0.2365   0.0146
N3   1     0.2936   0.1948   0.0109
N4   1     1        1        0.3501
N5   1     1        0.4976   0.2365

6.2. Redundancy

Redundancy is a basic requirement in many networking frameworks in order to provide reliable services. On the one hand, it increases services' reliability by providing backups; on the other hand, it requires more resources and consequently has a higher cost. Therefore, there is a trade-off between the degree of redundancy and cost.

When tasks have a certain level of redundancy, there are multiple methods or paths to implement them. For each method, we can use Eqs. (7) and (8) to evaluate m and c. Given more than one backup method, we first need to aggregate the methods. We use Fig. 9 as an example to illustrate the aggregation methodology. In this example, Task 1 requires three nodes (p1 (N1, N2, N3) or p2 (N4, N5, N6)) to implement it, and it distributes two copies (p1 and p2) of the task over six nodes. For each single method, p1 or p2, we have shown how to calculate trust for tasks in Eqs. (7) and (8). Given each method's trust, we aggregate them together and evaluate the trust metrics for the replicated task.

Fig. 9. An example of redundancy.

In the above example, there exists more than one implementation of the task (i.e., redundancy), so we follow the redundancy theory (Eq. (9)) to aggregate them, as shown in Eqs. (10) and (11). Here, we assume that the implementations are independent of each other. Basically, Eq. (9) calculates the probability that at least one of two independent events happens. For example, if we have (m1, c1) = (0.8, 0.8) and (m2, c2) = (0.9, 0.9), then (m, c) = (0.98, 0.9717). We can see that by adding more redundancy, we can increase tasks' m and c, and consequently their trustability assessment results.

P(E_A ∪ E_B) = P(E_A) + P(E_B) − P(E_A ∩ E_B)   (9)

m = m1 + m2 − m1 · m2   (10)

c = 1 − 2 · √( ((1 − m2) · (1 − c1) / 2)² + ((1 − m1) · (1 − c2) / 2)² )   (11)

6.3. Resource configuration

In the cloud computing scenario, given a set of devices or resources, the service providers or vendors need to assign the right resources to applications. Suppose that a vendor has a set of candidate devices which can provide the functionality an application needs; however, these devices might have different trustability assessment results. At the same time, applications might also have different requirements.

By using Eqs. (7) and (8) and Algorithm 1 together, we are able to calculate and assess the trustability of each implementation. If there is a need to increase trustability, we can add redundancy and use Eqs. (10) and (11) together with the trustability assessment algorithm to evaluate trust for the application. Therefore, we can use the trustability metric to explore the design space of resource configuration and choose the right trade-off between trustability and the cost of redundancy. Note that our trustability assessment results are dynamic, based on real-time monitored traffic. This information can also be used to dynamically configure the resources for the applications. In summary, by using our trust management framework, we can guide vendors in resource configuration.

7. Summary

In this paper, we adapted and applied our measurement theory based trust management framework for cloud computing platforms. Our trust management framework consists of two metrics: trustworthiness and confidence. It begins from flow measurements. We derived the trust of nodes based on all the tasks running on them and all the flows they send and receive. Similarly, for tasks, their trust depends on the flows and the nodes which implement the tasks. We provided a way for cloud vendors to estimate nodes' and tasks' real-time trust information. We used an example of an attack on our testbed to illustrate the usage of our trust management framework. We showed that although tasks themselves are not malicious initially, they can be affected and compromised by other tasks if we are not aware of the nodes' trust. We introduced a new security metric, trustability (trust-reliability), that measures how much a system can be trusted under a specific attack vector. Redundancy is a powerful tool to increase the survivability of cloud computing platforms. Our new metric trustability can be used to explore the design space of resource configuration and to choose the right trade-off between trustability and the cost of redundancy. We showed that by adding more copies or paths for tasks, we can increase their trustability assessment results. Also, we guided the administrators to dynamically allocate resources to tasks. For example, when the trustability of the resources used by a task decreases below a given threshold, the administrator can allocate additional paths for the task, while if the trust-reliability of a task is very high, the administrators might decrease its degree of redundancy.

In summary, our trust management framework can provide guidance for administrators and cloud customers to make decisions, e.g., migrating tasks from suspect nodes to trustworthy nodes, dynamically allocating resources, and managing the trade-off between the degree of redundancy and the cost of resources.
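As an illustrative sketch (not the authors' released code), Algorithm 1 and the aggregation rules of Eqs. (10) and (11) translate directly into the following, using the parameters from Figs. 7 and 8 (λ1 = 4, λ2 = 8, m_threshold = 0.5) as defaults:

```python
import math

def trustability(m, c, lam1=4.0, lam2=8.0, m_threshold=0.5):
    """Algorithm 1: assess trustability from trustworthiness m and confidence c."""
    # Low trustworthiness is penalized more aggressively (lam1 <= lam2).
    lam = lam1 if m >= m_threshold else lam2
    # Shift m around the neutral value 0.5, weight by c, renormalize to [0, 1].
    normalized_mc = (2 * (m - 0.5) * c + 1) / 2
    return math.exp(-lam * (1 - normalized_mc))

def aggregate_redundant(m1, c1, m2, c2):
    """Eqs. (10)-(11): trust of two independent redundant implementations."""
    m = m1 + m2 - m1 * m2  # Eq. (9): at least one implementation is trustworthy
    c = 1 - 2 * math.sqrt(((1 - m2) * (1 - c1) / 2) ** 2
                          + ((1 - m1) * (1 - c2) / 2) ** 2)
    return m, c

# Single implementation vs. a replicated task (values from Section 6.2).
single = trustability(0.8, 0.8)
m, c = aggregate_redundant(0.8, 0.8, 0.9, 0.9)  # (0.98, 0.9717)
replicated = trustability(m, c)                 # redundancy raises trustability
```

With these defaults, `trustability(0.86, 0.93)` reproduces the 0.5164 entry for T4 in TW1 of Table 3, and aggregating the two example implementations raises trustability from about 0.35 to about 0.87, illustrating the redundancy trade-off explored in Section 6.3.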


Acknowledgments

This work was partially supported by the National Science Foundation, USA, under Grant No. 1547411, and by the National Institute of Food and Agriculture (NIFA), USA, USDA Award 2017-67003-26057.
