Journal of Manufacturing Science and Engineering AUGUST 2015, Vol. 137 / 040901-1
Copyright © 2015 by ASME
Table 2 Performance indicators for cloud service evaluation [32]

Response time: Quantifying the time taken by the cloud provider to respond to a service request, e.g., EC2 response time is several minutes
Accuracy: Quantifying the difference between the user experience and the service level promised by the cloud provider
Transparency: Qualitatively measuring how users' usability is affected by changes in service, in the context of the fast evolution of cloud-based services
Interoperability: Qualitatively measuring the ability of a service to interact with other services offered by the same or different providers
Stability: Quantifying the variability in the performance of a service, which is largely and easily affected by the total number of VMs and the computing load
Reliability: Quantifying the probability of a service operating without failure for an assigned computing load and promised service level
Throughput: Quantifying the maximum number of tasks that can be handled and completed by a service per unit time
Scalability: Qualitatively measuring the ability to handle a large number of application requests simultaneously
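The quantitative indicators above come in different units and scales, so before they can be aggregated for ranking they are typically normalized onto a common scale. A minimal sketch of min-max normalization, where cost-type attributes (lower is better) are inverted so that 1 always means best; the attribute names and service measurements are hypothetical, for illustration only:

```python
# Hypothetical raw measurements for three services (A, B, C).
raw = {
    "response_time": {"A": 2.0, "B": 1.0, "C": 4.0},  # seconds, lower is better
    "throughput":    {"A": 120, "B": 90,  "C": 150},  # tasks/s, higher is better
}
cost_type = {"response_time": True, "throughput": False}

def normalize(values, invert):
    """Min-max scale onto [0, 1]; invert cost-type attributes
    so that 1 always means best."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0
    return {k: ((hi - v) if invert else (v - lo)) / span
            for k, v in values.items()}

scores = {attr: normalize(vals, cost_type[attr]) for attr, vals in raw.items()}
print(scores["response_time"])  # B scores 1.0 (fastest), C scores 0.0
```

The normalized scores can then feed directly into an AHP-style or utility-based comparison across services.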
similar to a web service: SaaS is almost a web-based application, while IaaS provides a virtual environment for platform deployment, monitored by web-based monitoring tools. As a result, the majority of current research on cloud service evaluation is derived from web service evaluation methods. A common approach is to extract and select important service performance indicators, based on which an analytical model can be established to rank alternative services by quantifying the intrinsic attributes and comparing them to the overall goal. To establish the analytical model, the AHP method and the utility function are reviewed, considering different performance evaluation attributes.

Performance Indicators. For cloud computing, especially in the context of IaaS, the performance can be evaluated based on both qualitative and quantitative indicators, with the most important ones summarized in Table 2 [32]. The problem becomes how to aggregate and fuse the information expressed by these attributes to quantitatively compare several service candidates against the goals and constraints specified by the tenants.

AHP-Based Selection. The AHP is a widespread structured technique for organizing and analyzing complex decisions, especially for group decision-making, based on mathematics and psychology. AHP provides a comprehensive and rational framework for structuring a decision-making problem, evaluating alternative solutions by quantifying intrinsic attributes and relating them to the overall goal, and providing a best-suited solution. An AHP-based rating process comprises three main steps: decomposition, comparative judgment, and synthesis [48]. In the decomposition stage, the decision problem is decomposed into a hierarchy of more easily comprehended subproblems. The purpose of this step is to determine the layers and the elements/attributes contained in each layer of the hierarchy. An example of an AHP hierarchy for cloud service selection with respect to performance attributes is shown in Fig. 4, which contains three layers, with service alternatives as the bottom layer and attributes as the middle layer under the overall goal. In the comparative judgment stage, pair comparisons between elements are conducted with respect to their impact on the elements above them in the hierarchy. Two types of numerical priorities are assigned to connect the elements in the hierarchy: {w_A1, w_A2, ..., w_An, ..., w_Nn} represents the influence of the alternatives with respect to the attributes, and {w_1, w_2, ..., w_n} denotes the priorities with respect to the overall selection. Finally, a numerical factor is calculated for each of the alternatives, representing the alternatives' relative ability to achieve the decision goal. The priority of each alternative in Fig. 4 can be calculated as

\[
\begin{bmatrix} P_A \\ P_B \\ \vdots \\ P_N \end{bmatrix}
=
\begin{bmatrix}
w_{A1} & w_{A2} & w_{A3} & \cdots & w_{An} \\
w_{B1} & w_{B2} & w_{B3} & \cdots & w_{Bn} \\
\vdots & \vdots & \vdots &        & \vdots \\
w_{N1} & w_{N2} & w_{N3} & \cdots & w_{Nn}
\end{bmatrix}
\begin{bmatrix} w_1 \\ w_2 \\ \vdots \\ w_n \end{bmatrix}
\tag{2}
\]

Fig. 4 AHP hierarchy for performance-based cloud service selection

A framework for evaluating cloud offerings and ranking them based on their ability to meet tenants' QoS requirements is proposed in Refs. [32,49], by comparing the service alternatives via performance attributes and generating a global ranking of services based on AHP. This work also addresses the challenge of the different dimensional units of various QoS attributes by providing a uniform way to evaluate the relative ranking of cloud services for each type of QoS attribute. Research in Ref. [50] explored techniques for aggregating and evaluating the multilevel QoS parameters of cloud services, which facilitate the ranking and selection of IaaS and SaaS services according to users' requirements. An AHP hierarchy of a cloud service weighting model is defined, in which QoS parameters of both IaaS and SaaS are considered as the ranking criteria and are layered and categorized based on their influential relations. Similar work can be found in Ref. [51], which introduced an AHP-based SaaS service selection approach to objectively score and rank services.

Utility Function Based Selection. Unlike AHP, which focuses on the relative importance of decision criteria through pair comparison, a utility function quantifies the preferences of a decision maker and aggregates several of the decision maker's degrees of satisfaction toward a particular criterion. Zeng et al. [52] discussed cloud service selection depending on the trade-off between maximized gain and minimized cost, through two steps: (1) searching for all alternatives that satisfy the tenants' requirements; (2) finding the optimal service among the candidates to reach the trade-off between performance and cost. Limam and Boutaba [53] proposed a reputation-aware service selection framework to rate SaaS services, with the aim of reducing the time and risk of the selection and utilization of software services. The reputation depends on the feedback of users, which is formed by aggregating the perceived utility of the customer's baseline satisfaction and the perceived disconfirmation. Then, the utility is calculated according to quality monitoring results. In Table 3, commonly used methods for service selection are summarized.
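The synthesis step in Eq. (2) is a matrix-vector product of the two priority sets: the alternative-versus-attribute weights and the attribute-versus-goal weights. A minimal sketch with hypothetical weights (three alternatives, three attributes), for illustration only:

```python
# Hypothetical priority weights, for illustration only.
# alternative_weights[i][j]: influence of alternative i w.r.t. attribute j
alternative_weights = [
    [0.5, 0.3, 0.2],  # service A
    [0.3, 0.4, 0.3],  # service B
    [0.2, 0.3, 0.5],  # service C
]
# priorities of the attributes w.r.t. the overall selection goal
attribute_weights = [0.6, 0.3, 0.1]

# Eq. (2): P_i = sum_j w_ij * w_j  (matrix-vector product)
priorities = [
    sum(w_ij * w_j for w_ij, w_j in zip(row, attribute_weights))
    for row in alternative_weights
]
best = max(range(len(priorities)), key=priorities.__getitem__)
print([round(p, 2) for p in priorities])  # [0.41, 0.33, 0.26]
print("service", "ABC"[best])             # service A
```

Because each weight set is normalized to sum to 1, the resulting priorities also sum to 1, giving a directly comparable relative ranking of the alternatives.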
Table 3 Commonly used methods for service selection

AHP: Input, attribute and decision matrix; output, weights of alternatives; applicable when there are multiple attributes and limited explicit selection options [48,50,51]
Utility function: Input, decision matrix; output, a subset of alternatives; applicable when it is not easy to aggregate or compare attributes [52,53]
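The two-step utility-based procedure discussed above (filter by requirements, then maximize the gain/cost trade-off) can be sketched as follows; the candidate services, requirement thresholds, and gain/cost weighting are all hypothetical, for illustration only:

```python
# Hypothetical candidates: (name, response_time_s, reliability, cost_per_hour)
candidates = [
    ("S1", 2.0, 0.99, 0.40),
    ("S2", 1.2, 0.95, 0.90),
    ("S3", 0.8, 0.99, 1.20),
    ("S4", 5.0, 0.90, 0.10),
]

# Step 1: keep only alternatives satisfying the tenant's requirements
# (response time <= 2.5 s, reliability >= 0.95; thresholds are made up).
feasible = [c for c in candidates if c[1] <= 2.5 and c[2] >= 0.95]

# Step 2: pick the service maximizing utility = weighted gain minus weighted cost.
def utility(c, alpha=1.0, beta=0.5):
    _, resp, rel, cost = c
    gain = rel / resp            # crude performance proxy
    return alpha * gain - beta * cost

best = max(feasible, key=utility)
print(best[0])  # S3: fastest feasible service despite its higher cost
```

The weights alpha and beta encode the decision maker's preference between performance and cost; shifting them toward cost would instead favor the cheaper feasible services.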
4 Limitations of Cloud Computing

The IoT for CM allows for the collection of real-time process and condition data from machine equipment networked across the manufacturing enterprise. These data and information provide a holistic perspective of the operational state of the equipment for remote monitoring, diagnosis, and prognosis. One significant characteristic of performing real-time monitoring in this process is the large variety of data types and the large amount of data, due to the variety of measurement techniques with high sampling rates employed for CM. While processing of different data types can be facilitated by machine-to-machine communication techniques such as MTConnect [21], the massive data to be processed pose a major challenge to current cloud computing technologies when applied to manufacturing. A good understanding of the limitations of cloud computing on computational and network performance can help in:

• selecting computing services to ensure the quality of data analysis
• selecting sensor types to achieve an optimal trade-off between data resolution and data processing quality
• enabling intelligent data transmission, i.e., transmitting collected raw measurement data or features extracted by local agents, in view of energy efficiency [54–56].

For information and knowledge sharing through crowdsourcing for cloud-based design, monitoring, and decision-making, cybersecurity is a significant challenge. This refers to protecting IP, sensitive information, and the security of devices and assets networked in the IoT [57]. Existing infrastructure, such as supervisory control and data acquisition networks, can be a significant vulnerability, given its designed function [58]. In addition, a challenge lies in the filtering of individual enterprise-sensitive information while maximizing the pooling of shared information.

4.1 Computational Performance. Virtualization brings a number of challenging issues to maintaining stable performance of each VM. The most popular option among current cloud services is QoS-based resource management, which trades off execution quality against the assigned resources via a load balancing or high availability mechanism [59]. There is a number of reported works on dynamic resource management, for instance, based on game theory [60] or k-means clustering [61]. However, these efforts only address the scaling problem of one resource or a single tier. Sotomayor et al. [62] found that resource provisioning encompasses three dimensions: hardware resources, software resources, and the time during which those resources must be guaranteed to be available. A complete resource provisioning model is also needed to allow resource consumers to specify requirements across these three dimensions. Manvi and Shyam [31] surveyed several resource provisioning models and evaluated them with proposed performance metrics including reliability, QoS, delay, and control overhead. Buyya and Ranjan [63] pointed out various challenges in enabling QoS-oriented resource management in distributed servers to satisfy competing applications' demand for computing services, including:

(1) How to ensure that requests finish their execution within estimated completion times in the presence of resource performance fluctuations, especially in large-scale, decentralized, and distributed systems?
(2) How to fulfill the movement or transfer of large volumes of data in scenarios where data are stored in distributed devices?
(3) How to develop resource prediction models for facilitating proactive scaling in the cloud, so that hosted applications are able to withstand variations in workload with the least drop in performance and availability?

Iosup et al. [5] attempted to answer experimentally the question of whether the performance of clouds is sufficient for executing many-task scientific computing, from the perspective of performance metrics including resource acquisition/release time, computing performance, I/O performance, and reliability. The computational performance evaluation results indicated that floating point and double-precision float operations are six to eight times slower than the theoretical maximum. A potential reason for this is the overrun or thrashing of the memory caches by the working sets of other applications sharing the same physical machines. The performance and cost of clouds were also compared, with workload traces taken from grids and parallel production infrastructure. A conclusion was reached that the performance of all the cloud environments investigated is low for high-demand usage, and that they should only be considered when resources are needed instantly and temporarily. Similar work is found in Refs. [64,65]. Huber et al. [64] executed several benchmarks to analyze the performance of native and virtualized systems. The results showed that the performance overheads for CPU and memory virtualization were up to 5% and 40%, respectively. The performance of floating point operations was demonstrated to drop by 3–5% overall, with up to a 20% drop for some benchmarks. The main cause of the overall performance drop was suggested to stem from the allocation of large memory areas. Although different experiments delivered different results, it is found that the performance overhead of VMs increases, and that the overhead is determined by the specific test bed and resource allocation technique.

Schad et al. [66] carried out a study of performance in terms of CPU efficiency and memory speed on Amazon EC2. Experiments were performed at different levels (e.g., single EC2 instance, multiple instances, and different locations), and different types of EC2 instances were taken into consideration. The experimental results indicated that, regardless of CPU efficiency or memory speed, the performance is far less stable than one would expect. For example, the variance for memory speed is around 8–10%, while the variance of a local physical cluster is only 0.3%. A possible reason is the different system types used by virtual instances. Also, the variance on the cloud was compared with the variance on a local physical cluster, which indicated that the same MapReduce job suffered from a significantly higher performance variance on EC2.

4.2 Network Bottleneck. Besides the sharing of resources such as CPU and storage facilities, which affects cloud computing performance, the sharing of I/O resources also affects network performance. Resources of network links and bandwidth are shared
Table 4 Selected research and experiments related to computational and network performance evaluation under the effect of virtualization

Computational performance:
[5,64,65] Evaluates computing performance by comparing high-performance usage against local clusters
[66] Evaluates the performance variance of EC2 with respect to CPU performance and memory speed

Network performance:
[70] Evaluates the effect of I/O virtualization on storage and network performance over clouds based on Eucalyptus
[71] Investigates the bottleneck of I/O-virtualized network performance and its root cause under high workload
[68,69] Investigates the degradation of network performance with high packet transmission rates in a virtualized single host
[72,73] Evaluates the network performance of VMs running on multicore platforms by comparison with a single host
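The stability comparison in Ref. [66], cloud versus local cluster, comes down to comparing the relative dispersion of repeated benchmark measurements. A minimal sketch using the coefficient of variation; the memory-speed samples below are hypothetical, for illustration only:

```python
import statistics

def coefficient_of_variation(samples):
    """Standard deviation as a fraction of the mean (relative dispersion)."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Hypothetical memory-speed measurements (GB/s), illustration only:
# repeated runs on cloud instances vs. a local physical cluster.
cloud_runs = [4.1, 3.6, 4.4, 3.8, 4.5, 3.5, 4.2, 3.7]
local_runs = [4.00, 4.01, 3.99, 4.00, 4.02, 3.98, 4.01, 4.00]

cv_cloud = coefficient_of_variation(cloud_runs)
cv_local = coefficient_of_variation(local_runs)
print(f"cloud CV: {cv_cloud:.1%}, local CV: {cv_local:.2%}")
# The cloud runs show far higher relative variation than the local
# cluster, mirroring the 8-10% vs. 0.3% contrast reported above.
```

For a CM workload, such a dispersion measure, computed over repeated runs on candidate services, could feed directly into the "stability" indicator of Table 2.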