Computer Communications
journal homepage: www.elsevier.com/locate/comcom
1. Introduction

Nowadays, cloud computing platforms are becoming more and more widely used and welcomed in many fields, including e-commerce, web applications, data storage, healthcare, gaming, mobile social networks, and so on [1–3]. Cloud computing platforms can provide customers with Internet-based services without requiring customers to purchase large amounts of hardware [4]. However, security and privacy are still two significant concerns for cloud computing platforms and applications [5–10]. For example, data confidentiality and auditability are two critical properties for cloud vendors to convince customers to put their sensitive information in clouds [1]. It is also essential for cloud vendors to provide available and reliable services to customers, which is called business continuity and service availability in [1].

According to [1], clouds can be classified into public and private clouds depending on their owners and serving objects. Public clouds are generally developed by big companies, e.g., Google and Amazon, and are designed to be accessible to public customers in a pay-as-you-go manner, such as Amazon EC2. On the other hand, private clouds are usually owned by private companies or organizations, and only internal users have access to them. In reality, as clouds can have different owners, it is possible that a single mission or task will involve or be distributed over multiple clouds. In this work, we call this scenario the multi-clouds environment, as in [11].

In cloud computing platforms, on the one hand, a single task might be distributed over multiple computing nodes. For example, one computing node pre-processes the data, a second computing node performs the data mining tasks, and a third one visualizes the results to the end users. On the other hand, a single computing node may be shared by multiple tasks. In such cases, it is possible that tasks are co-located with other, untrustworthy tasks on the same node.

Faced with such new challenges, the old security model that consists in defending the perimeter of the system is no longer valid. We have to assume that whatever defense mechanisms we deploy in our systems will sooner or later be breached by attackers. We have to design systems that can survive various attacks, with a calculated and acceptable degradation in performance, by using additional resources planned for such conditions. Therefore, besides traditional security measures, such as cryptography and access control policies, more measures should be taken in cloud computing platforms. For example, when multiple cloud computing platforms are involved, not only should load balance and redundancy be taken into account, but also the trustworthiness of computing nodes, groups of nodes, tasks, and cloud computing platforms.

In this paper, we apply our measurement theory based trust management framework to cloud computing platforms, addressing three levels of trust measurement: flow level trust, node level trust, and task level trust. Both the node level trust and the task level trust depend on the flow level trust. Although packet information is more detailed than flow information and may also be available in some cases, the number of packets is typically much higher than the number of flows, which makes packet information very difficult to handle [12]. Flows, which are aggregations of packets, also exhibit the traffic features between the sender and the receiver. Therefore, in this work, we use flow level measurements rather than packet level measurements. To summarize, we estimate trustworthiness based on the network flow traffic.

We show that by using our two trust metrics, trustworthiness and confidence, we can help cloud vendors and cloud customers estimate the trust of both computing nodes and tasks. Based on the trust evaluation results, in case of attacks, cloud administrators can migrate tasks from suspect nodes to trustworthy nodes. Cloud administrators can also dynamically allocate resources to tasks with the guidance of our trust management framework. Our main contributions include the following.

• We propose a measurement theory based trust management framework for cloud computing platforms;
• We show how trust can help to detect attacks, using a testbed example;
• We propose a metric, trustability (trust-reliability), and a new algorithm (the trustability assessment algorithm) which takes into account both trust and reliability;
• We show how the trustability assessment algorithm can help to dynamically allocate resources, using an example.

The rest of the paper is organized as follows: we review the literature in Section 2. We introduce our measurement theory based trust management framework in Section 3. We illustrate the usage of our trust management framework for clouds in Section 4. We show the usage of our trust management framework with an attack example on our testbed in Section 5. We propose a trustability assessment algorithm and show its usage for resource configuration in Section 6. Finally, we conclude the paper in Section 7.

∗ Corresponding author.
E-mail addresses: yefruan@iupui.edu (Y. Ruan), adurresi@iupui.edu (A. Durresi).
https://doi.org/10.1016/j.comcom.2019.05.018
Received 17 May 2018; Received in revised form 18 October 2018; Accepted 23 May 2019
Available online 28 May 2019
0140-3664/© 2019 Elsevier B.V. All rights reserved.
Y. Ruan and A. Durresi, Computer Communications 144 (2019) 124–131

2. Background and related works

As security is a very hot research topic in clouds, many works have been proposed to detect attacks and to diminish the damage [6,13–16]. Several works address trust between cloud vendors and cloud customers. For example, in [17], the authors explored the role of mutual trust between cloud providers and cloud customers in data storage systems. In [18], the authors listed several factors which need to be considered in estimating trust, such as ownership, control, transparency, and so on. Therefore, we can see that there is a significant need in cloud computing platforms for cloud vendors to be able to provide trust information to their customers.

On the other hand, there are also some works focusing on trust or risk assessment in distributed systems. In [19], the authors defined risk using the concept of fuzzy belief to deal with risk's uncertainty property. In [20], the authors established a network of hosts connected by the flows among them, and explored both the PageRank and HITS algorithms in their work. Similarly, in [21], the authors assessed hosts' risk based on their flows and the host network. Compared with these existing works, our trust management framework can estimate trust for any portion of the system. In addition, our work can be used to guide vendors to dynamically allocate resources accordingly.

In this work, we adapt and apply our measurement theory based trust management framework to fill the gap between the need for trust and the analysis of trust in cloud computing platforms. We provide an approach for cloud vendors or administrators to assess the trust of nodes and tasks in cloud environments. We also provide cloud vendors with guidance for dynamically allocating resources. Compared with other existing works, in addition to trustworthiness, we also include confidence in our trust management framework. Confidence can be used to measure how certain the trustworthiness evaluation is. Furthermore, we develop a capability to reconfigure tasks over elements of the system, such as computing nodes and networks, based on their trust values and the trust required by various tasks [22].

3. Trust management framework

Trust has been shown to be very helpful in many decision-making fields, such as IT systems, sociology [23], electronic commerce [24], Internet of Things [25], financial analysis [26], and so on. Therefore, many trust management frameworks have been proposed in the literature [27].

3.1. Trust metrics

In this paper, we apply our measurement theory based trust management framework [28] in cloud computing platforms. Compared with existing trust models, our framework is flexible and can be adapted to many formulas.

Our trust management framework has two metrics: trustworthiness and confidence. Trustworthiness (denoted as m) is like a comprehensive summary of multiple "measurements". It measures to what extent the truster will trust the trustee: for example, how good is the quality, or what is the probability that the measured object will behave as the truster expects? Although the specific semantic meanings and processing methods might differ, this summarization of "measurements" is very similar to the averaging of sample measurements in statistics [29]. Suppose that we have a set of measurement results M = {m_1, m_2, …, m_k}; then trustworthiness m can be calculated as in Eq. (1).

m = (Σ_{i=1}^{k} m_i) / k    (1)

Similar to sampling in statistics, depending on the number of incidents and the intensity of measurements, we would have a distribution of measurements in a range around the summarized trustworthiness m. Such a distribution, which shows to what extent we are confident about the trustworthiness evaluation, is similar to the "error" in physical measurements, which represents the variance of the actual value from the summarized value. Therefore, we associate confidence (denoted as c) with the "variance" or "error" of measurement theory, in an inversely proportional manner: intuitively, the smaller the "variance" or "error", the higher the confidence. Therefore, in our trust management framework, a trust tuple has two metrics, trustworthiness m and confidence c, and is represented as T(m, c).

We define both m and c as continuous values in the range [0, 1]. A higher trustworthiness value means that the trustee is more trustworthy: 0 means the most untrustworthy, while 1 refers to the most trustworthy scenario. Similarly, a higher confidence value means the truster is more confident about the trustworthiness evaluation m.

We define confidence c with respect to the error in a corresponding form. To this end, we introduce another intermediate metric, the range R, which is deduced from the confidence c. If we consider c as the percentage of known fact, then the percentage of uncertain fact is 1 − c. Therefore, R is the total trustworthiness interval times the percentage of uncertain fact. For a trust tuple T(m = 0.5, c = 0), which is the most neutral and uncertain trust fact, we would like the possible trust interval [m − R/2, m + R/2] to cover the whole interval [0, 1], i.e., the "real" trust value could be any value in the range of [0, 1]. On the contrary, when c = 1, which represents the highest confidence, we would like R = 0, which means that both the worst and best expected trustworthiness equal m. Following these guidelines, the relation between confidence and range can simply be defined as R = 1 ∗ (1 − c) = 1 − c.

To better fit the error characteristic, we introduce the radius r, which is half of the range R. r shows how far the best or worst expected trust can be from the summarized trustworthiness value m. Therefore, in this definition, m is equivalent to the measurement mean, and r is equivalent to the square root of the variance (or the standard error). The conversion between r and c can be written as in Eq. (2).

c = 1 − 2r if r ≤ 0.5, and c = 0 otherwise;    r = (1 − c) / 2    (2)
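As a concrete illustration of Eqs. (1) and (2), the sketch below (Python, not from the paper) summarizes a set of flow measurements into a trust tuple T(m, c). It assumes one possible reading of the radius r as the standard error of the mean; the paper says only that r is equivalent to the square root of the variance (or the standard error).

```python
import statistics

def trust_tuple(measurements):
    """Summarize measurement results M = {m_1, ..., m_k} into T(m, c).

    m is the sample mean (Eq. (1)); the radius r is taken here as the
    standard error of the mean (an assumption), and confidence follows
    Eq. (2): c = 1 - 2r when r <= 0.5, and 0 otherwise.
    """
    k = len(measurements)
    m = sum(measurements) / k                          # Eq. (1)
    r = (statistics.pvariance(measurements) / k) ** 0.5
    c = 1 - 2 * r if r <= 0.5 else 0.0                 # Eq. (2)
    return m, c

# Ten identical normal-flow measurements give full trust, full confidence.
print(trust_tuple([1.0] * 10))  # (1.0, 1.0)
```

With mixed observations, e.g. four normal flows and one anomalous one, m drops to 0.8 and the spread of the measurements lowers c below 1, as the framework intends.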
Fig. 3. The conflict ratio's effect on confidence.
Fig. 5. The effect of the forgetting factor.
By considering flows' trust and tasks' trust as two factors that determine trust for computing nodes, we represent it as in Eq. (5). Here, w_flow and w_task control the relative weight of flows' trust to tasks' trust. By following error propagation theory, the confidence of the node can be calculated as in Eq. (6).

m_node = (w_flow · m_flow + Σ_{i=1}^{n} w_task · m_task_i) / (w_flow + Σ_{i=1}^{n} w_task)    (5)

c_node = 1 − 2 · [ (w_flow (1 − c_flow) / (2 (w_flow + Σ_{i=1}^{n} w_task)))² + Σ_{i=1}^{n} (w_task (1 − c_task_i) / (2 (w_flow + Σ_{i=1}^{n} w_task)))² ]^{1/2}    (6)

4.4. Trust of tasks

Similar to the trust of computing nodes, as tasks are involved with both flows and nodes (a set of nodes Node = {node_1, node_2, …, node_N}), we consider both of them in evaluating tasks' trust. However, compared with the trust of nodes, where we consider all the incoming and outgoing flows, here we only take the flows that belong to the corresponding task into account. In other words, a task's flow trust is derived only from its own flows (both incoming and outgoing). Similarly, we assume that incoming and outgoing flows are equally important in the following attack example.

Similar to the trust of nodes, we first calculate flow trust for each task. We then use the weighted mean of flow trust and nodes' trust to calculate trust for tasks, as shown in Eqs. (7) and (8).

m_task = (w_flow · m_flow + Σ_{i=1}^{N} w_node · m_node_i) / (w_flow + Σ_{i=1}^{N} w_node)    (7)

c_task = 1 − 2 · [ (w_flow (1 − c_flow) / (2 (w_flow + Σ_{i=1}^{N} w_node)))² + Σ_{i=1}^{N} (w_node (1 − c_node_i) / (2 (w_flow + Σ_{i=1}^{N} w_node)))² ]^{1/2}    (8)

5. A testbed example

In this section, we show how to use our trust management framework in cloud computing platforms, through an example of a possible attack on our testbed platform.

We launch a testbed in which we use bursts as examples to simulate flows. We use 5 virtual machines to simulate 5 nodes. Each virtual machine runs the Ubuntu 16.0.1 system, has 2.1 GB of memory, and has four 64-bit processors. We use Ostinato to generate bursts and Wireshark to collect packets. As mentioned in Section 4, we use three features: the number of packets in a burst, the burst rate, and the number of bursts in a flow. Moreover, as indicated in Fig. 2, for simplicity we assume that the anomalous score is linearly related to the measured value. To illustrate this, we set the lower and higher thresholds as an example, shown in Table 1. Note that here we use these thresholds only as examples; in reality, tasks and nodes can have their own pre-defined profiles and therefore different thresholds.

Table 1
Lower and higher thresholds for three features.

Feature                        Lower threshold    Higher threshold
Number of packets in a burst   10                 70
Burst rate                     3                  0.1
Number of bursts in a flow     10                 70

For each feature, we can normalize the measured values into the range [0, 1] as shown in Fig. 2. To consider them together, we use the weighted mean of the three normalized results. For different applications or tasks, we might assign different weights, since the three features might not be equally important in some cases. In this paper, for simplicity, we assume that the three features are equally important, such that we use the mean of the three normalized values as the anomalous score.

5.1. An attack example in cloud computing platforms

In Fig. 6, we show an example of an attack in cloud computing platforms on our testbed. In this example, we have 6 tasks running on 5 computing nodes. Among the 6 tasks, tasks T2, T4 and T6 are distributed over multiple nodes and have incoming and outgoing flows among these nodes. For tasks T1, T3 and T5, we assume that they can be accomplished on a single node, such that there is no flow for them. Also, we assume that each node has profiles for all the tasks running on it, so that anomalous flows can be detected based on the profiles.

We calculate trust in an iterative way: first flow level trust, then node level trust, and last task level trust. In the following, we assume that a malicious flow has all three features equal to 1, and a normal flow has all three features equal to 0. Therefore, a malicious flow has anomalous score 1 (trustworthiness score 0), and a normal flow has anomalous score 0 (trustworthiness score 1).

We assume that task T4 is a malicious task. Fig. 6(a) to (c) show the process of the attack. In (a), it begins to launch an attack on node N3. In the next step, node N2 is compromised and begins to send malicious flows to node N5, which is also running task T4. Also, as node N2 is compromised, flows between node N2 and node N1 will be anomalous as well. Finally, in (c), node N5 is also compromised. In this example, since the nodes themselves are compromised, not only is task T4 affected, but also the other tasks running on the same nodes. In Fig. 6, red lines represent anomalous flows, and blue lines represent normal flows. We will see how the malicious task (task T4) affects other nodes and tasks.

In this example, we have 3 time windows (TW1, TW2, TW3), which correspond to the scenarios of Fig. 6(a), (b) and (c). In addition to TW1, TW2, TW3, we assume that there exists a prior time window TW0, which includes the prior knowledge. Initially, in TW0, we assume that all the nodes, tasks, and flows are normal. Therefore, we let m = 1 and c = 1 for all the nodes and tasks. Regarding flows, in TW0, we assume that there are 10 normal flows for each link in Fig. 6. For example, there exist 10 normal flows between node N1 and node N2. Obviously, all the flow trust initially has m = 1 and c = 1 as well.

For the link between each pair of nodes, we assume that it carries 10 flows in each time window. Therefore, node N2 has 30 incoming and outgoing flows in total in each time window, as it has three links, with nodes N1, N3 and N5. As we indicated before, for simplicity, we consider incoming flows as equally important as outgoing flows. In other words, 10 is the total number of flows between a pair of nodes, no matter how many of them are incoming or outgoing flows. For time windows TW1, TW2, TW3, we assume that each link carries 10 normal flows (each measurement m_i = 1) if the link is not affected (blue links). Otherwise, we assume that all 10 flows are anomalous (red links), which means that their measurement results are m_i = 0.

In Table 2, we list the trust information for all the nodes and tasks over 4 time windows. As we assume that initially all the nodes and tasks are not affected, we assign m = 1 and c = 1 for them. In this example, we let the forgetting factor σ = 0.8. To account for the importance of flow trust relative to tasks' and nodes' trust, we let w_node = w_task = w_flow / 2. In future work, these weights can be adjusted for different specific scenarios. Also, note that within each time window, we update nodes' and tasks' trust using the previous time window's results as prior knowledge.

From Table 2, we can see that although initially only task T4 is malicious, it can affect other nodes and tasks as well. First of all, as task T4 is distributed over nodes N2, N3 and N5, their trustworthiness will decrease a lot, which means that nodes N2, N3 and N5 are compromised by malicious task T4. In addition to that, we can see that it also affects tasks T2, T5 and T6, as they are running on the affected nodes N2, N3, and N5. Finally, in TW3, we can see that the
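Since Eqs. (5)–(6) and Eqs. (7)–(8) share the same weighted-mean and error-propagation form, both can be sketched with one helper. This is an illustrative Python sketch, not the paper's implementation; the names `aggregate_trust`, `w_flow` and `w_comp` are ours, and uniform component weights are assumed, matching the paper's simplified setting.

```python
def aggregate_trust(flow, components, w_flow, w_comp):
    """Aggregate a flow-level trust tuple with component trust tuples.

    For a node, `components` are the tasks running on it (Eqs. (5)-(6));
    for a task, they are the nodes hosting it (Eqs. (7)-(8)).
    Trustworthiness is the weighted mean; confidence propagates the
    'errors' (1 - c) of each component, following error propagation theory.
    """
    m_flow, c_flow = flow
    w_total = w_flow + w_comp * len(components)
    # Weighted mean of trustworthiness values (Eqs. (5)/(7)).
    m = (w_flow * m_flow + sum(w_comp * mi for mi, _ in components)) / w_total
    # Sum of squared, weight-scaled errors (Eqs. (6)/(8)).
    err2 = (w_flow * (1 - c_flow) / (2 * w_total)) ** 2
    err2 += sum((w_comp * (1 - ci) / (2 * w_total)) ** 2
                for _, ci in components)
    c = 1 - 2 * err2 ** 0.5
    return m, c

# A node whose flows are fully trusted but which hosts one suspect task,
# with w_task = w_flow / 2 as in the paper's example setting:
print(aggregate_trust((1.0, 1.0), [(0.5, 0.8)], w_flow=1.0, w_comp=0.5))
# m ≈ 0.833, c ≈ 0.933
```

Note how the suspect task pulls down both the node's trustworthiness and, through the propagated error, its confidence.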
Fig. 6. An attack example in cloud computing platforms. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 2
Trust information for all the nodes and tasks over 4 time windows.

     TW0 (m, c)   TW1 (m, c)     TW2 (m, c)     TW3 (m, c)
T1   (1, 1)       (1, 1)         (1, 1)         (0.84, 0.92)
T2   (1, 1)       (1, 1)         (0.76, 0.90)   (0.57, 0.91)
T3   (1, 1)       (1, 1)         (1, 1)         (0.84, 0.92)
T4   (1, 1)       (0.86, 0.93)   (0.65, 0.93)   (0.50, 0.95)
T5   (1, 1)       (1, 1)         (0.72, 0.88)   (0.60, 0.91)
T6   (1, 1)       (1, 1)         (1, 1)         (0.79, 0.91)
N1   (1, 1)       (1, 1)         (0.84, 0.92)   (0.71, 0.92)
N2   (1, 1)       (0.87, 0.93)   (0.65, 0.93)   (0.47, 0.94)
N3   (1, 1)       (0.72, 0.88)   (0.60, 0.91)   (0.43, 0.92)
N4   (1, 1)       (1, 1)         (1, 1)         (0.77, 0.88)
N5   (1, 1)       (1, 1)         (0.85, 0.93)   (0.65, 0.93)
(in Eq. (9)) to aggregate them, as shown in Eqs. (10) and (11). Here, we assume that implementations are independent from each other. Basically, Eq. (9) calculates the probability that at least one of two independent events happens. For example, if we have (m_1, c_1) = (0.8, 0.8) and (m_2, c_2) = (0.9, 0.9), then (m, c) = (0.98, 0.9717). We can see that by adding more redundancy, we can increase tasks' m and c, and consequently their trustability assessment results.

P(E_A ∪ E_B) = P(E_A) + P(E_B) − P(E_A ∩ E_B)    (9)

m = m_1 + m_2 − m_1 m_2    (10)

c = 1 − 2 √( ((1 − m_2)(1 − c_1) / 2)² + ((1 − m_1)(1 − c_2) / 2)² )    (11)

Table 3
Trustability assessment results for all the nodes and tasks over 4 time windows.

     TW0   TW1      TW2      TW3
T1   1     1        1        0.4729
T2   1     1        0.3451   0.1746
T3   1     1        1        0.4729
T4   1     0.5164   0.2365   0.1353
T5   1     1        0.2936   0.1948
T6   1     1        1        0.3889
N1   1     1        0.4729   0.2931
N2   1     0.5360   0.2365   0.0146
N3   1     0.2936   0.1948   0.0109
N4   1     1        1        0.3501
N5   1     1        0.4976   0.2365
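The redundancy combination of Eqs. (9)–(11) can be checked with a short sketch (Python, illustrative only; the helper name `combine_redundant` is ours), which reproduces the worked example with (0.8, 0.8) and (0.9, 0.9):

```python
def combine_redundant(t1, t2):
    """Combine the trust tuples of two independent redundant implementations.

    Trustworthiness follows the inclusion-exclusion rule of Eqs. (9)-(10);
    confidence follows the error-propagation form of Eq. (11).
    """
    m1, c1 = t1
    m2, c2 = t2
    m = m1 + m2 - m1 * m2                            # Eq. (10)
    err = (((1 - m2) * (1 - c1) / 2) ** 2 +
           ((1 - m1) * (1 - c2) / 2) ** 2) ** 0.5
    c = 1 - 2 * err                                  # Eq. (11)
    return m, c

# The worked example: (0.8, 0.8) and (0.9, 0.9) combine to (0.98, 0.9717).
m, c = combine_redundant((0.8, 0.8), (0.9, 0.9))
print(round(m, 4), round(c, 4))  # 0.98 0.9717
```

Because the "errors" 1 − c_1 and 1 − c_2 are each scaled by the other implementation's residual untrustworthiness 1 − m, adding a trustworthy replica raises both m and c.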
6.3. Resource configuration
Acknowledgments

This work was partially supported by the National Science Foundation, USA, under Grant No. 1547411, and by the National Institute of Food and Agriculture (NIFA), USDA, USA, Award 2017-67003-26057.

References

[1] M. Armbrust, A. Fox, R. Griffith, A.D. Joseph, R. Katz, A. Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, M. Zaharia, A view of cloud computing, Commun. ACM 53 (4) (2010) 50–58.
[2] H.T. Dinh, C. Lee, D. Niyato, P. Wang, A survey of mobile cloud computing: Architecture, applications, and approaches, Wirel. Commun. Mob. Comput. 13 (18) (2013) 1587–1611.
[3] C. Thota, R. Sundarasekar, G. Manogaran, R. Varatharajan, M. Priyan, Centralized fog computing security platform for iot and cloud in healthcare system, in: Exploring the Convergence of Big Data and the Internet of Things, IGI Global, 2018, pp. 141–154.
[4] R.L. Krutz, R.D. Vines, Cloud Security: A Comprehensive Guide to Secure Cloud Computing, Wiley Publishing, 2010.
[5] C. Rong, S.T. Nguyen, M.G. Jaatun, Beyond lightning: A survey on security challenges in cloud computing, Comput. Electr. Eng. 39 (1) (2013) 47–54.
[6] S. Pearson, Privacy, security and trust in cloud computing, in: Privacy and Security for Cloud Computing, Springer, 2013, pp. 3–42.
[7] M. Ali, S.U. Khan, A.V. Vasilakos, Security in cloud computing: Opportunities and challenges, Inf. Sci. 305 (2015) 357–383.
[8] S. Singh, Y.-S. Jeong, J.H. Park, A survey on cloud computing security: Issues, threats, and solutions, J. Netw. Comput. Appl. 75 (2016) 200–222.
[9] Q. Jiang, J. Ma, F. Wei, On the security of a privacy-aware authentication scheme for distributed mobile cloud computing services, IEEE Syst. J. 12 (2) (2018) 2039–2042.
[10] D. Alsmadi, V. Prybutok, Sharing and storage behavior via cloud computing: Security and privacy in research and practice, Comput. Hum. Behav. 85 (2018) 218–226.
[11] M.A. AlZain, E. Pardede, B. Soh, J.A. Thom, Cloud computing security: From single to multi-clouds, in: System Science (HICSS), 2012 45th Hawaii International Conference on, 2012, pp. 5490–5499, http://dx.doi.org/10.1109/HICSS.2012.153.
[12] S. Song, L. Ling, C. Manikopoulo, Flow-based statistical aggregation schemes for network anomaly detection, in: 2006 IEEE International Conference on Networking, Sensing and Control, IEEE, 2006, pp. 786–791.
[13] A. Sarwar, M.N. Khan, A review of trust aspects in cloud computing security, Int. J. Cloud Comput. Serv. Sci. 2 (2) (2013) 116.
[14] N.H. Hussein, A. Khalid, A survey of cloud computing security challenges and solutions, Int. J. Comput. Sci. Inf. Secur. 14 (1) (2016) 52.
[15] A. Botta, W. De Donato, V. Persico, A. Pescapé, Integration of cloud computing and internet of things: a survey, Future Gener. Comput. Syst. 56 (2016) 684–700.
[16] Z. Yan, X. Li, M. Wang, A.V. Vasilakos, Flexible data access control based on trust and reputation in cloud computing, IEEE Trans. Cloud Comput. 5 (3) (2017) 485–498.
[17] A. Barsoum, A. Hasan, Enabling dynamic data and indirect mutual trust for cloud computing storage systems, IEEE Trans. Parallel Distrib. Syst. 24 (12) (2013) 2375–2385.
[18] K.M. Khan, Q. Malluhi, Establishing trust in cloud computing, IT Prof. 12 (5) (2010) 20–27.
[19] N. Feng, M. Li, An information systems security risk assessment model under uncertain environment, Appl. Soft Comput. 11 (7) (2011) 4332–4340.
[20] S. Wang, R. State, M. Ourdane, T. Engel, Riskrank: Security risk ranking for IP flow records, in: 2010 International Conference on Network and Service Management, 2010, pp. 56–63, http://dx.doi.org/10.1109/CNSM.2010.5691334.
[21] M. Rezvani, V. Sekulic, A. Ignjatovic, E. Bertino, S. Jha, Interdependent security risk analysis of hosts and flows, IEEE Trans. Inf. Forensics Secur. 10 (11) (2015) 2325–2339.
[22] Y. Ruan, A. Durresi, A trust management framework for cloud computing platforms, in: 2017 IEEE 31st International Conference on Advanced Information Networking and Applications (AINA), 2017, pp. 1146–1153, http://dx.doi.org/10.1109/AINA.2017.108.
[23] Y. Ruan, L. Alfantoukh, A. Fang, A. Durresi, Exploring trust propagation behaviors in online communities, in: Network-Based Information Systems (NBiS), 2014 17th International Conference on, 2014, pp. 361–367, http://dx.doi.org/10.1109/NBiS.2014.91.
[24] P. Resnick, K. Kuwabara, R. Zeckhauser, E. Friedman, Reputation systems, Commun. ACM 43 (12) (2000) 45–48.
[25] Y. Ruan, A. Durresi, L. Alfantoukh, Trust management framework for internet of things, in: 2016 IEEE 30th International Conference on Advanced Information Networking and Applications (AINA), 2016, pp. 1013–1019, http://dx.doi.org/10.1109/AINA.2016.136.
[26] Y. Ruan, A. Durresi, L. Alfantoukh, Using twitter trust network for stock market analysis, Knowl.-Based Syst. 145 (2018) 207–218.
[27] Y. Ruan, A. Durresi, A survey of trust management systems for online social communities - trust modeling, trust inference and attacks, Knowl.-Based Syst. 106 (2016) 150–163.
[28] Y. Ruan, P. Zhang, L. Alfantoukh, A. Durresi, Measurement theory-based trust management framework for online social communities, ACM Trans. Internet Technol. (TOIT) 17 (2) (2017) 16:1–16:24.
[29] D.C. Montgomery, G.C. Runger, Applied Statistics and Probability for Engineers, John Wiley & Sons, New York, 2010.
[30] A.A. Clifford, Multivariate Error Analysis, John Wiley & Sons, 1973.
[31] C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, W.-Y. Lin, Intrusion detection by machine learning: A review, Expert Syst. Appl. 36 (10) (2009) 11994–12000.
[32] C. Modi, D. Patel, B. Borisaniya, H. Patel, A. Patel, M. Rajarajan, A survey of intrusion detection techniques in cloud, J. Netw. Comput. Appl. 36 (1) (2013) 42–57.
[33] F.A. Narudin, A. Feizollah, N.B. Anuar, A. Gani, Evaluation of machine learning classifiers for mobile malware detection, Soft Comput. 20 (1) (2016) 343–357.
[34] H. Hu, G.-J. Ahn, K. Kulkarni, Detecting and resolving firewall policy anomalies, IEEE Trans. Dependable Secure Comput. 9 (3) (2012) 318–331.
[35] Y. Wang, M.P. Singh, Formal trust model for multiagent systems, in: Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI'07), San Francisco, CA, USA, 2007, pp. 1551–1556.