
2010 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing

Energy Efficient Allocation of Virtual Machines in Cloud Data Centers

Anton Beloglazov and Rajkumar Buyya
Cloud Computing and Distributed Systems (CLOUDS) Laboratory
Department of Computer Science and Software Engineering
The University of Melbourne, Australia
{abe, raj}

Abstract—Rapid growth of the demand for computational power has led to the creation of large-scale data centers. They consume enormous amounts of electrical power, resulting in high operational costs and carbon dioxide emissions. Moreover, modern Cloud computing environments have to provide high Quality of Service (QoS) for their customers, resulting in the necessity to deal with the power-performance trade-off. We propose an efficient resource management policy for virtualized Cloud data centers. The objective is to continuously consolidate VMs leveraging live migration and to switch off idle nodes to minimize power consumption, while providing the required Quality of Service. We present evaluation results showing that dynamic reallocation of VMs brings substantial energy savings, thus justifying further development of the proposed policy.

Keywords-Energy efficiency; Cloud computing; Energy consumption; Green IT; Resource management; Virtualization; Allocation of virtual machines; Live migration of virtual machines.

I. INTRODUCTION

In recent years, IT infrastructures have continued to grow rapidly, driven by the demand for computational power created by modern compute-intensive business and scientific applications. However, a large-scale computing infrastructure consumes enormous amounts of electrical power, leading to operational costs that exceed the cost of the infrastructure itself within a few years. For example, in 2006 the cost of electricity consumed by IT infrastructures in the US was estimated at 4.5 billion dollars and was expected to double by 2011 [1]. Beyond overwhelming operational costs, high power consumption reduces system reliability and device lifetime due to overheating. Another problem is significant CO2 emissions that contribute to the greenhouse effect.

One way to reduce the power consumption of a data center is to apply virtualization technology, which allows several servers to be consolidated onto one physical node as Virtual Machines (VMs), reducing the amount of hardware in use. The recently emerged Cloud computing paradigm leverages virtualization and provides on-demand resource provisioning over the Internet on a pay-as-you-go basis [2]. This allows enterprises to drop the costs of maintaining their own computing environments and to outsource their computational needs to the Cloud. It is essential for Cloud providers to offer reliable Quality of Service (QoS) to customers, negotiated in terms of Service Level Agreements (SLA), e.g. throughput and response time. Therefore, to ensure efficient resource management and provide higher utilization of resources, Cloud providers (e.g. Amazon EC2) have to deal with the power-performance trade-off, as aggressive consolidation of VMs can lead to performance loss.

In this work we leverage live migration of VMs and propose heuristics for dynamic reallocation of VMs according to their current resource requirements, while ensuring reliable QoS. The objective of the reallocation is to minimize the number of physical nodes serving the current workload, while idle nodes are switched off in order to decrease power consumption. A lot of research has been done on power-efficient resource management in data centers, e.g. [3], [4]. In contrast to previous studies, the proposed approach can effectively handle strict QoS requirements, heterogeneous infrastructure and heterogeneous VMs. The algorithms are implemented as fast heuristics; they do not depend on a particular type of workload and do not require any knowledge about the applications executing on the VMs.

II. REALLOCATION HEURISTICS

Allocation of VMs can be divided into two parts: the first is admission of new requests for VM provisioning and placement of the VMs on hosts, whereas the second is optimization of the current allocation of VMs. The first part can be considered as a bin packing problem with variable bin sizes and prices. To solve it we apply a modification of the Best Fit Decreasing (BFD) algorithm. In our modification (MBFD) we sort all VMs in decreasing order of current utilization and allocate each VM to the host that provides the least increase of power consumption due to this allocation. This allows us to leverage the heterogeneity of the nodes by choosing the most power-efficient ones. The complexity of the allocation part of the algorithm is n · m, where n is the number of VMs that have to be allocated and m is the number of hosts.

Optimization of the current allocation of VMs is carried out in two steps: in the first step we select VMs that need to be migrated; in the second step the chosen VMs are placed on hosts using the MBFD algorithm. We propose four heuristics for choosing VMs to migrate. The first heuristic, Single Threshold (ST), is based on the idea of setting an upper utilization threshold for hosts and placing VMs while keeping

978-0-7695-4039-9/10 $26.00 © 2010 IEEE
DOI 10.1109/CCGRID.2010.45

the total utilization of CPU below this threshold. The other three heuristics are based on the idea of setting upper and lower utilization thresholds for hosts and keeping the total utilization of CPU by all VMs between these thresholds. If the utilization of CPU for a host falls below the lower threshold, all VMs have to be migrated from this host and the host has to be switched off in order to eliminate idle power consumption. If the utilization goes over the upper threshold, some VMs have to be migrated from the host to reduce utilization and prevent potential SLA violations. The aim is to preserve free resources in order to prevent SLA violations due to consolidation in cases when the resource requirements of VMs increase. We propose three policies for choosing the VMs that have to be migrated from the host: (1) Minimization of Migrations (MM) – migrating the least number of VMs to minimize migration overhead; (2) Highest Potential Growth (HPG) – migrating the VMs that have the lowest usage of CPU relative to the requested amount, in order to minimize the total potential increase of utilization and SLA violations; (3) Random Choice (RC) – migrating the necessary number of VMs picked according to a uniformly distributed random variable. At each time frame all VMs are reallocated using the MBFD algorithm with the additional condition of keeping the upper utilization threshold not violated. The new placement is achieved by live migration of VMs.

III. EVALUATION

The proposed heuristics have been evaluated by simulation using the CloudSim toolkit [5]. The simulated data center comprises 100 heterogeneous physical nodes. Each node is modeled to have one CPU core with performance equivalent to 1000, 2000 or 3000 MIPS, 8 GB of RAM and 1 TB of storage. Users submit requests for provisioning of 290 heterogeneous VMs that fill the full capacity of the data center. For the benchmark policies we simulated a Non Power Aware policy (NPA) and DVFS, which adjusts the voltage and frequency of the CPU according to current utilization. The policies have been evaluated with different values of the thresholds. We present results obtained using the ST policy and the MM policy, the best two-threshold policy.

Table I
SIMULATION RESULTS

Policy      Energy     SLA     Migr.    Avg. SLA
NPA         9.15 KWh   -       -        -
DVFS        4.40 KWh   -       -        -
ST 50%      2.03 KWh   5.41%   35 226   81%
ST 60%      1.50 KWh   9.04%   34 231   89%
MM 30-70%   1.48 KWh   1.11%   3 359    56%
MM 40-80%   1.27 KWh   2.75%   3 241    65%
MM 50-90%   1.14 KWh   6.69%   3 120    76%

The simulation results presented in Table I show that dynamic reallocation of VMs according to current CPU utilization brings higher energy savings compared with static allocation policies. The MM policy achieves the best energy savings: 83%, 66% and 23% less energy consumption relative to the NPA, DVFS and ST policies respectively with thresholds of 30-70%, while ensuring a percentage of SLA violations of 1.1%; and 87%, 74% and 43% with thresholds of 50-90% and 6.7% of SLA violations. The MM policy leads to more than 10 times fewer VM migrations than ST. The results show the flexibility of the proposed algorithms, as the thresholds can be adjusted according to SLA requirements: strict SLA (1.11% of violations) allow achieving an energy consumption of 1.48 KWh, whereas if the SLA are relaxed (6.69%), the energy consumption is further reduced to 1.14 KWh.

IV. CONCLUSION

In this work we have proposed and evaluated heuristics for dynamic reallocation of VMs to minimize energy consumption, while providing reliable QoS. The obtained results show that the technique of dynamic reallocation of VMs and switching off idle servers brings substantial energy savings and is applicable to real-world Cloud data centers. Besides the reduction of operational and establishment costs, the work has social significance as it decreases carbon dioxide footprints and the energy consumption of modern IT infrastructures. For future work, we propose to investigate the consideration of multiple system resources in reallocation decisions. Besides that, resources such as the network interface and disk storage also significantly contribute to the overall energy consumption. Other interesting directions for future work are the investigation of setting the utilization thresholds dynamically according to the current set of VMs allocated to a host, decentralization of the optimization algorithms to improve scalability and fault tolerance, and leveraging multi-core CPU architectures.

REFERENCES

[1] R. Brown et al., "Report to congress on server and data center energy efficiency: Public law 109-431," Lawrence Berkeley National Laboratory, 2008.
[2] R. Buyya, C. Yeo, and S. Venugopal, "Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities," in Proceedings of HPCC'08, IEEE CS Press, Los Alamitos, CA, USA, 2008.
[3] D. Kusic, J. O. Kephart, J. E. Hanson, N. Kandasamy, and G. Jiang, "Power and performance management of virtualized computing environments via lookahead control," Cluster Computing, vol. 12, no. 1, pp. 1–15, 2009.
[4] S. Srikantaiah, A. Kansal, and F. Zhao, "Energy aware consolidation for cloud computing," Cluster Computing, vol. 12, pp. 1–15, 2009.
[5] R. Buyya, R. Ranjan, and R. N. Calheiros, "Modeling and simulation of scalable cloud computing environments and the CloudSim toolkit: Challenges and opportunities," in Proceedings of HPCS'09, IEEE Press, 2009.
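To make the heuristics of Section II concrete, the following Python sketch combines the MBFD placement step with the Minimization of Migrations (MM) selection step. The Host class, the linear power-vs-utilization model, the wattage figures and the example thresholds are illustrative assumptions, not the authors' implementation:

```python
# Sketch of the reallocation loop: MM selects VMs to migrate from hosts
# that violate the utilization thresholds; MBFD re-places the selected VMs
# on the hosts where they cause the least power increase.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Host:
    mips: float                # total CPU capacity (MIPS)
    p_idle: float = 160.0      # assumed idle power draw (W)
    p_max: float = 250.0       # assumed power draw at full load (W)
    used: float = 0.0          # MIPS currently allocated

    def power(self, used: float) -> float:
        # assumed linear power-vs-utilization model
        return self.p_idle + (self.p_max - self.p_idle) * used / self.mips

def mbfd(vms: List[float], hosts: List[Host]) -> List[Optional[Host]]:
    """Modified Best Fit Decreasing: sort VMs by decreasing CPU demand and
    put each on the host with the least power increase. Complexity O(n*m)."""
    placement: List[Optional[Host]] = []
    for demand in sorted(vms, reverse=True):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h.used + demand > h.mips:
                continue                     # VM does not fit on this host
            delta = h.power(h.used + demand) - h.power(h.used)
            if delta < best_delta:
                best, best_delta = h, delta
        if best is not None:
            best.used += demand
        placement.append(best)               # in decreasing-demand order
    return placement

def select_mm(vm_loads: List[float], lower: float, upper: float) -> List[float]:
    """Minimization of Migrations: if a host's utilization exceeds the upper
    threshold, migrate the fewest VMs (largest first) to drop below it; if it
    falls under the lower threshold, migrate everything so the host can be
    switched off."""
    total = sum(vm_loads)
    if total < lower:
        return list(vm_loads)                # empty the host entirely
    migrate: List[float] = []
    for load in sorted(vm_loads, reverse=True):
        if total <= upper:
            break
        migrate.append(load)
        total -= load
    return migrate
```

For example, with thresholds of 30-70% a host running VMs with utilizations 0.5, 0.3 and 0.2 exceeds the upper threshold, and `select_mm` picks only the largest VM; the selected VMs would then be re-placed on other hosts by `mbfd`, and any emptied hosts switched off.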