
Performance-to-Power Ratio Aware Virtual Machine

(VM) Allocation in Energy-Efficient Clouds


Xiaojun Ruan
Department of Computer Science
West Chester University of Pennsylvania, PA, USA
xruan@wcupa.edu

Haiquan Chen
Department of Math and Computer Science
Valdosta State University, GA, USA
hachen@valdosta.edu

Abstract—The last decade has witnessed dramatic advances in cloud computing research and techniques. One of the key challenges in this field is how to reduce the massive energy consumption of cloud computing data centers. To address this issue, many power-aware virtual machine (VM) allocation and consolidation approaches have been proposed. However, most of these existing solutions save energy at the price of significant performance degradation. In this paper, we present a novel VM allocation algorithm called "PPRGear", which leverages the Performance-to-Power Ratios (PPRs) of various host types. By achieving the optimal balance between host utilization and energy consumption, PPRGear guarantees that host computers run at their most power-efficient levels (i.e., the levels with the highest Performance-to-Power Ratios), so that energy consumption is reduced tremendously with little sacrifice of performance. Our extensive experiments with real-world traces show that, compared with three baseline energy-efficient VM allocation and selection algorithms, PPRGear reduces energy consumption by up to 69.31% for various host computer types, with fewer migrations and shutdowns and little performance degradation for cloud computing data centers.

I. INTRODUCTION

Cloud computing has been widely adopted by businesses, individuals, and even large enterprises. However, energy consumption has become a major concern in the last decade, since cloud data centers consume significant power and generate enormous power bills. According to data disclosed by The New York Times in 2012, Facebook data centers consume about 60 million watts, and Google data centers consume almost 300 million watts [8]. In 2013, data centers in the United States consumed a total of 91 billion kWh of electrical energy and generated 97 million metric tons of carbon dioxide (CO2) [18]. In 2014, more than 2% of US electricity usage was consumed by data centers [5]. Therefore, many energy-efficient approaches have been explored at the facility level, in cooling systems, or through computing resource allocation. Among these methods, computing resource allocation is considered the most achievable and cost-effective approach, since it does not require any hardware modification or upgrades. Energy-efficient VM management has been explored through task scheduling [26], workload consolidation [25][19], temperature-aware capping [24], request batching [22], etc.

A. Novelty of Our Approach

To the best of our knowledge, our work is the first to leverage the performance-to-power ratio of computing nodes in VM allocation and migration to achieve the optimal balance between host utilization and energy consumption. Most current VM placement and migration policies are based on primitive system characteristics such as power, utilization, network bandwidth, or storage space. In this paper, by contrast, we propose an energy-efficient VM allocation strategy based on the Performance-to-Power Ratio (PPR), which is not a primitive characteristic of host computers.

In our system, each type of host computer maintains 11 gears (gear 0 to gear 10) that indicate which utilization level (0%, 10%, ..., 100%) the host computer is working at. The gear with the highest PPR is chosen as the best gear, and the top n gears with the highest PPRs are chosen as preferred gears. When the current working gear of a host is not within the range of the preferred gears, the host is considered either over utilized or underutilized. Before executing any tasks, we evaluate the characteristics of each computing node at the different utilization levels. This evaluation determines the best gear with the highest PPR and the preferred gears with the n highest PPR values. By allocating and migrating VMs in the cloud, we aim to keep computing nodes working at their best gears. When a computing node is working at a gear higher than any preferred gear, it is considered over utilized; when it is working at a gear lower than any preferred gear, it is considered underutilized. If a computing node is over utilized, one or more VMs on that host will be selected and migrated out. If a computing node is underutilized, the cloud will try either to migrate VMs into this host, or to migrate out all of its VMs so that it can be shut down to save energy.

B. Our Contributions

In this paper, we propose PPRGear, a novel VM allocation scheme based on the Performance-to-Power Ratio (PPR) that achieves the optimal balance between host utilization and energy consumption. Our solution dynamically allocates VMs to, and migrates VMs among, hosts with the objective of making all host computers operate at their most power-efficient levels (i.e., the levels with the highest performance-to-power ratios). Specifically, this paper makes the following contributions:
• We propose a novel VM allocation algorithm called PPRGear which allocates and migrates virtual machines in clouds based on host performance-to-power ratios. As a result, host computers run at their most power-efficient levels (i.e., the levels with the highest performance-to-power ratios), so that energy consumption is reduced tremendously without sacrificing cloud-end computation performance.
• PPRGear dynamically monitors host utilization and automatically triggers virtual machine migration and consolidation when a host is over utilized or underutilized, in order to achieve the optimal balance between host utilization and energy consumption.
• Our extensive experiments on CloudSim [4] with real-world traces show that, compared with ThrRs, MadMmt, and IqrMc, PPRGear reduces energy consumption significantly for various host computer types with barely any performance degradation for cloud computing data centers. In addition, the SLA violation rate of PPRGear is almost the same as that of Dynamic Voltage and Frequency Scaling (DVFS), a primitive non-migration energy-efficient baseline algorithm.

The remainder of the paper is organized as follows: Section II introduces the motivation and observations behind this paper. Section III presents an overview of our approach, and Section IV details the algorithmic design. Section V compares PPRGear with the baselines and presents our simulation results. Section VI reviews related work. Section VII concludes the paper.

II. OUR OBSERVATIONS

Conserving energy over a period of time is not the ultimate goal, because there is not much room to save energy when the workload is heavy. The ultimate goal of energy efficiency should be to improve the effectiveness of energy usage; in other words, to accomplish more tasks with less energy. The Standard Performance Evaluation Corporation (SPEC) developed an energy benchmark suite, SPECpower_ssj2008 [1]. A number of corporations have run SPECpower_ssj2008 on their host computers and uploaded the experimental results to the SPEC website. The Performance-to-Power Ratio, or PPR, is calculated as the number of Server Side Java operations (ssj_ops) completed during a certain time period divided by the average active power consumption in that period. PPR indicates the effectiveness of power consumption in a computing node.

Fig. 1 presents the power consumption trends and PPR trends as CPU utilization increases for four different host models: Fujitsu Primergy RX1330 M1, Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4. According to DVFS, the power consumption of a CPU increases exponentially while CPU utilization increases linearly. However, the power consumption of a host computer includes CPUs, memory, secondary storage, network adapters, etc. Möbius et al. state that the power consumption model of a host computer can be based solely on CPU utilization metrics [12]. According to the data collected by SPECpower_ssj2008 in Fig. 1, we can observe that the power consumption of a host computer increases roughly linearly as CPU utilization increases, even though the four host computers have different configurations, as presented in Table II.

Fig. 1(a) presents the power consumption and PPR of host model Fujitsu Primergy RX1330 M1 at utilizations from 0 to 1. As expected, power consumption increases almost linearly as CPU utilization increases. However, the highest PPR value appears at utilization level 0.7; in other words, the computing node works most energy-efficiently at CPU utilization 0.7. Likewise, Figs. 1(b), 1(c), and 1(d) present similar PPR trends for host models Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4. Therefore, instead of trying to reduce power by decreasing utilization, PPRGear attempts to keep computing nodes working at the highest PPR as long as possible, in order to balance utilization and power consumption.
[Figure 1 appears here. Each subfigure plots the Performance-to-Power Ratio and the power consumption (W) against CPU utilization from 0.0 to 1.0.]
(a) Fujitsu PRIMERGY RX1330 M1 [15] (b) Inspur NF5280M4 [17] (c) Dell PowerEdge R820 [14] (d) IBM NeXtScale nx360 [16]
Fig. 1. PPR and power consumption with increase in CPU utilization

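As a concrete reading of this definition, the fragment below computes per-gear PPR values from SPECpower_ssj2008-style measurements. It is a minimal sketch; the ssj_ops and watt figures are hypothetical placeholders, not data from the paper.

# Hypothetical per-gear measurements for one host type (gear 0 = idle,
# gear 10 = 100% utilization). Real values would come from a
# SPECpower_ssj2008 run for that host model.
ssj_ops_per_gear = [0, 50_000, 99_000, 146_000, 190_000, 230_000,
                    265_000, 290_000, 300_000, 305_000, 308_000]
avg_active_watts = [60, 75, 88, 100, 113, 127, 142, 158, 176, 196, 218]

# PPR: Server Side Java operations completed per watt of average active power.
ppr_per_gear = [ops / watts
                for ops, watts in zip(ssj_ops_per_gear, avg_active_watts)]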
III. PPRGEAR

Most current VM placement and migration policies are based on primitive system characteristics such as power, utilization, network bandwidth, or storage space. We propose an energy-efficient VM allocation strategy based on the Performance-to-Power Ratio (PPR), which is not a primitive characteristic of host computers. Before executing any tasks, we evaluate the characteristics of each computing node at different utilization levels, which we call gears. The evaluation determines the best gear with the highest PPR and the preferred gears with the n highest PPR values. By allocating and migrating VMs in the cloud, we attempt to keep computing nodes working at the best gear (or as close to it as possible). When a computing node is working at a gear higher than any preferred gear, it is over utilized; when it is working at a gear lower than any preferred gear, it is underutilized. If a computing node is over utilized because of increasing VM utilization, one or more VMs will be selected and migrated out. If a computing node is underutilized, the cloud will try to migrate out all of its VMs and then shut the node down. This scheme is called PPRGear.

Before a cloud can be utilized power-efficiently, each computing node in the cloud hardware platform should be evaluated in the following steps to find the best gear and the n preferred gears. First, performance and power data is collected using the benchmark suite SPECpower_ssj2008 [1]. In this paper, the performance and power data was collected by the following vendors: Fujitsu, Inspur, Dell, and IBM. Table II presents the configurations of the hosts used in the later experiments. Power is the average power consumption over a certain time interval, which is 300 seconds in this paper. We define performance as ssj_ops over time, where ssj_ops stands for server side Java operations. Second, PPR is calculated simply as performance over power for the eleven gears at CPU utilizations from 0%, 10%, ..., to 100%. Third, the gears are sorted by PPR in descending order to find the best gear and the n preferred gears.

Fig. 1 indicates that although power consumption increases linearly as utilization increases, the highest PPR values may not appear at the highest power consumption values. Therefore, even though a host computer offers its highest performance at the highest utilization, the CPU is not working as efficiently as it does at a lower utilization with a higher PPR. Based on the PPR values, PPRGear decides how to allocate new VMs and how to migrate running VMs.

Fig. 2 demonstrates an example of gear selection for the four host types. In this example, the number of preferred gears is set to 3. According to the PPR values in Fig. 1, the best gear is selected for each host type and marked in white; three preferred gears (including the best gear) are selected and marked in light gray; all gears higher than the preferred gears are marked as over utilized gears in dark gray; and all gears lower than the preferred gears are marked as underutilized gears in a dot pattern. Each gear has a corresponding CPU utilization; for example, gear 7 means the host computer is working at CPU utilization 70%. PPRGear attempts to keep each host working at the preferred gears as long as possible. If the utilization goes below or above the preferred gears, PPRGear will conduct migrations to move VMs either out or in. When PPRGear is migrating VMs to a destination host computer, it attempts to allocate VMs so as to make the destination host work at the CPU utilization level closest to that of the best gear. Fig. 2 also indicates that different host models may have different selections of best and preferred gears based on their PPR values. Therefore, allocation and migration may vary with different gear selections.

[Figure 2 appears here, showing the gears (Gear 0 to Gear 10) of the four host types Fujitsu PRIMERGY RX1330 M1, Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4.]
Fig. 2. An illustration of gear selection (3 preferred gears and 1 best gear)

Fig. 3 serves as an example of VM allocation and migration under the PPRGear framework. Assume there are 7 hosts in the cloud: Host 1 is over utilized; Hosts 5, 6, and 7 are underutilized; Hosts 2 and 3 are working at preferred gears; and
Host 4 is working at the best gear. In this case, Host 1 needs to migrate VMs out to lower its utilization, while Hosts 5, 6, and 7 need either to migrate VMs in to increase their utilization, or to migrate their VMs out so they can be shut down. Underutilized hosts and the hosts at preferred gears are possible destination hosts.

[Figure 3 appears here, showing VMs on seven hosts with arrows for allocation, migration from underutilized hosts, and migration from over utilized hosts.]
Fig. 3. A possible snapshot of VM allocation and migration (3 machines underutilized, 1 over utilized, 1 at the best gear, 2 at preferred gears)

IV. ALGORITHMIC DESIGN

In this section, we detail the algorithmic design of PPRGear, which contains two modules: VM Allocation and VM Migration.

A. VM Allocation

The VM Allocation algorithm is used to place a VM, which is either newly created or being migrated from an over utilized or underutilized host. PPRGear VM allocation chooses an appropriate destination host with both computing capability and power consumption in mind. Algorithm 1 presents how PPRGear allocates hosts for VMs. First, PPRGear traverses all hosts to collect the current CPU utilization of each host; the current CPU utilization is the sum over all VMs running on or migrating to the host. Second, it skips all over utilized hosts for energy-efficiency reasons, even though some of them may still have enough resources to run the VM. Third, PPRGear calculates the expected host utilization by summing the current host utilization and the predicted utilization of the VM on that host. If the expected host CPU utilization is at the best gear, or at a CPU utilization within a 3% tolerance of the best gear, the host is chosen as the destination for the VM. If no host's expected utilization is within the tolerable range, PPRGear selects the host whose expected utilization is closest to the best gear utilization.

Algorithm 1 VM Placement Algorithm
  H_list ← all hosts
  vm ← a new virtual machine
  utilizationDiff ← MAX_VALUE
  h_chosen ← null
  for each h in H_list do
    utilization ← 0
    VM_h ← all virtual machines on h
    for each v in VM_h do
      utilization ← utilization + v.utilization
    end for
    if utilization > highest preferred gear utilization ∨ utilization + vm.utilization > 1 then
      continue
    else if utilization + vm.utilization > best gear utilization - 0.03 ∧ utilization + vm.utilization < best gear utilization + 0.03 then
      return h
    else if abs(utilization + vm.utilization - best gear utilization) < utilizationDiff then
      utilizationDiff ← abs(utilization + vm.utilization - best gear utilization)
      h_chosen ← h
    end if
  end for
  return h_chosen
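As a concrete companion to Algorithm 1, the following Python sketch mirrors the placement loop. It is a minimal illustration under assumed data structures: the Host and VM classes, their field names, and place_vm are our hypothetical stand-ins, not the paper's CloudSim implementation.

from dataclasses import dataclass, field

@dataclass
class VM:
    utilization: float               # fraction of host CPU this VM consumes

@dataclass
class Host:
    best_gear_util: float            # utilization of the gear with the highest PPR
    max_preferred_util: float        # utilization of the highest preferred gear
    min_preferred_util: float        # utilization of the lowest preferred gear
    vms: list = field(default_factory=list)

    def utilization(self) -> float:
        # Sum over all VMs currently running on (or migrating to) the host.
        return sum(vm.utilization for vm in self.vms)

def place_vm(hosts, vm, tolerance=0.03):
    """Algorithm 1 sketch: pick the destination whose expected utilization is
    at (or nearest to) the best gear, skipping over-utilized hosts."""
    chosen, best_diff = None, float("inf")
    for h in hosts:
        util = h.utilization()
        expected = util + vm.utilization
        if util > h.max_preferred_util or expected > 1.0:
            continue                         # over utilized, or would overflow
        diff = abs(expected - h.best_gear_util)
        if diff < tolerance:
            return h                         # within the 3% tolerance: done
        if diff < best_diff:
            best_diff, chosen = diff, h
    return chosen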
B. VM Migration

VMs migrate to other hosts when their current host is either over utilized or underutilized. VMs on over utilized and underutilized hosts are selected for migration with power consumption in mind, and the VM Allocation algorithm is then in charge of finding appropriate hosts for the selected VMs. Therefore, VM Migration contains three steps: 1) detect over utilized and underutilized hosts; 2) select VMs for migration; 3) choose appropriate destination hosts. When VMs are migrating from either underutilized or over utilized hosts, PPRGear considers underutilized hosts as possible destinations first; hosts at preferred gears are the second choice. We distinguish two cases to describe PPRGear VM Migration.

1) Over Utilized Hosts Detection and VM Selection: The utilization of a host varies even when no VMs migrate in or out, because the utilization of each VM varies over time. In other words, a host currently working at the best gear may exceed the highest preferred gear after a while without any VMs migrating in, and vice versa.

Algorithm 2 shows over utilized host detection and VM selection for migrating VMs out of over utilized hosts. PPRGear first traverses all hosts to find over utilized hosts and puts them in the over utilized host list H_over. Whether a host is over utilized depends on whether its current CPU utilization is higher than that of the highest preferred gear. Consequently, if the number of preferred gears is set large enough, gear 10 (the highest gear) may be included in the preferred gears; PPRGear will then never migrate VMs because of over utilization, since no host can be considered over utilized. Second, for each host h in H_over, all migratable VMs are put into the list VM_h, which is sorted in descending order of utilization. Third, the algorithm finds the VM(s) whose removal brings the host utilization as close to the best gear utilization as possible, and adds them to the list V_migrate. When Algorithm 2 finishes, V_migrate contains the VMs to migrate out, and PPRGear calls Algorithm 1 to find appropriate destination hosts for the VMs in V_migrate.

Algorithm 2 VM Selection on Over Utilized Hosts for Migration
  H_list ← all hosts
  H_over ← ∅
  V_migrate ← ∅
  for each h in H_list do
    utilization ← 0
    VM_h ← all virtual machines on h
    for each vm in VM_h do
      utilization ← utilization + vm.utilization
    end for
    if utilization > utilization of highest preferred gear then
      H_over ← H_over ∪ {h}
    end if
  end for
  for each h in H_over do
    VM_h ← migratable virtual machines on h
    sort VM_h in descending order of utilization
    while h is over utilized do
      v ← the vm in VM_h whose removal brings h's utilization closest to the best gear utilization
      remove v from VM_h and from h
      V_migrate ← V_migrate ∪ {v}
    end while
  end for
  return V_migrate

2) Underutilized Hosts Detection and VM Selection: Underutilized host detection checks whether a host's current utilization level is lower than that of the lowest preferred gear. If the number of preferred gears is set high enough, PPRGear may not be able to find any underutilized hosts for migration, because the range of preferred gears is wide.

Unlike the over utilized case, when migrating VMs from underutilized hosts, destination hosts that were themselves underutilized may turn into hosts working at preferred gears. In other words, some underutilized hosts may become well utilized after VMs migrate in from other underutilized hosts; such hosts are removed from the underutilized host list. Therefore, the underutilized host list H_under needs to be updated after each VM migration operation. Furthermore, the order in which underutilized hosts migrate their VMs matters, since the status of destination hosts may change during migration. VM selection for underutilized hosts is simpler than for over utilized hosts: once PPRGear decides to migrate VMs from an underutilized host, all of its VMs migrate out, and the host is shut down after the migration.

Algorithm 3 presents the algorithm for finding VMs to migrate out of underutilized hosts. First, if the utilization of a host is lower than the lowest preferred gear utilization, the host is put into the list H_under. Second, H_under is sorted in ascending order of utilization. Last, all VMs are migrated away from the host with the lowest utilization, making sure to update the destination hosts' utilization before migrating VMs from the next host in H_under.

Algorithm 3 VM Selection on Underutilized Hosts for Migration
  H_list ← all hosts
  H_under ← ∅
  V_migrate ← ∅
  for each h in H_list do
    utilization ← 0
    VM_h ← all virtual machines on h
    for each vm in VM_h do
      utilization ← utilization + vm.utilization
    end for
    if utilization < utilization of lowest preferred gear then
      H_under ← H_under ∪ {h}
    end if
  end for
  sort H_under in ascending order of utilization
  for each h in H_under do
    if h is still underutilized then
      V_migrate ← all virtual machines on h
      call Algorithm 1 to allocate the VMs in V_migrate
    end if
  end for
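The migration side can be sketched in the same style, reusing the hypothetical Host, VM, and place_vm definitions from the placement sketch above. This mirrors the intent of Algorithms 2 and 3 rather than reproducing the authors' CloudSim code.

def select_from_overutilized(hosts):
    """Algorithm 2 sketch: from each over-utilized host, repeatedly pick the
    VM whose removal lands the host closest to its best gear utilization."""
    to_migrate = []
    for h in hosts:
        while h.utilization() > h.max_preferred_util and h.vms:
            v = min(h.vms, key=lambda vm:
                    abs(h.utilization() - vm.utilization - h.best_gear_util))
            h.vms.remove(v)
            to_migrate.append(v)
    return to_migrate

def drain_underutilized(hosts):
    """Algorithm 3 sketch: empty underutilized hosts, least-utilized first,
    so they can be shut down; destinations are found with place_vm, and host
    states are re-checked after every migration."""
    under = sorted((h for h in hosts
                    if 0 < h.utilization() < h.min_preferred_util),
                   key=lambda h: h.utilization())
    for h in under:
        if h.utilization() >= h.min_preferred_util:
            continue                 # earlier migrations filled this host up
        for vm in list(h.vms):
            dest = place_vm([x for x in hosts if x is not h], vm)
            if dest is not None:     # once empty, the host can be shut down
                h.vms.remove(vm)
                dest.vms.append(vm)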
V. PERFORMANCE EVALUATION

To demonstrate the performance and energy efficiency of PPRGear, in this section we evaluate PPRGear on four different host models under different workloads, in terms of energy consumption, Service-Level Agreement (SLA) violation, shutdown times, and migration times, using the simulator CloudSim 3.0.3 [4]. CloudSim 3.0.3 is an event-driven simulator of infrastructures and application services in cloud computing, with customizable policies for virtual machine selection, allocation, migration, and provisioning on configurable host models. We implemented PPRGear in CloudSim 3.0.3 with real-world host models. Based on our simulation results, compared with the energy-efficient baseline algorithms ThrRs, MadMmt, and IqrMc [3], PPRGear reduces energy consumption by up to 69.31% with fewer migrations and shutdowns. Compared with DVFS, PPRGear reduces power consumption by up to 95.3% under light workload with little Service-Level Agreement violation. When the workload is extremely high, the SLA violation of PPRGear is almost the same as that of DVFS.

A. Experimental Setup

We simulated a cloud computing center with 800 homogeneous host computers, for each of four different models: Fujitsu Primergy RX1330 M1, Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4. Table II presents the specifications of the four models, among which the Inspur NF5280M4 is equipped with the largest memory capacity (64 GB) and the Dell PowerEdge R820 with the most computing cores (40). The power consumption and performance data of all four host models were collected in mid-2014 by the host manufacturers using the benchmark suite SPECpower_ssj2008 [1].

We assume that all virtual machines are configured with the same specification, as presented in Table I. A VM's MIPS rating is mapped from the host computer's CPU frequency to quantitatively evaluate CPU utilization [3]. MIPS, the number of processing elements, memory, and VM size determine the resources requested from the host computer. Bandwidth and VM size determine the VM migration cost via the equation migration_time = VM_size / bandwidth (with the Table I defaults, 2.5 GB × 8 / 100 Mbit/s ≈ 200 seconds).

TABLE I
VIRTUAL MACHINE CONFIGURATION

Configuration Parameter | Default Value
MIPS                    | 2000
Processing Elements     | 2
Memory                  | 1 GB
Bandwidth               | 100 Mbit/s
VM Size                 | 2.5 GB

In order to fairly evaluate PPRGear's performance, the resources of the 800 host computers are evenly allocated to all VMs in the cloud at the beginning of the simulation. Then PPRGear starts to manage VMs by migrating them and allocating host computer resources. The PlanetLab [13] workload is adopted in the experiments. The workload data was collected on March 3rd, 2011; the workload is 24 hours long, and the interval between CPU utilization measurements is 5 minutes.

B. The Baseline Algorithms

In order to investigate the energy-efficiency improvement and performance impact, PPRGear was compared with three energy-efficient VM allocation and selection algorithms, IqrMc, MadMmt, and ThrRs [3], which use different policies to select hosts and VMs for migration. Each of these three algorithms has two core steps. The first step decides whether the current host is over utilized based on a utilization threshold; the threshold is either configured as a constant value before the experiments, or generated dynamically from historical utilization information. Second, the algorithm selects an appropriate VM from a host for migration if the host is currently over utilized. VM selection repeats until the host is no longer over utilized.
• Static Threshold VM allocation policy with Random Selection VM selection policy (ThrRs) uses a static utilization threshold instead of generating one in real time. In our experiments, the utilization threshold is set to a constant 0.8. When a host is over utilized, ThrRs randomly selects a VM for migration, regardless of the VM's performance impact on the host. ThrRs is a primitive energy-efficient VM allocation and selection algorithm.
• Median Absolute Deviation VM allocation policy with Minimum Migration Time VM selection policy (MadMmt) uses the robust statistic Median Absolute Deviation over the historical utilization data's median to compute the utilization threshold. According to this dynamic utilization threshold, if a host is over utilized, MadMmt selects the VM that takes the least time to migrate.
• Interquartile Range VM allocation policy with Maximum Correlation VM selection policy (IqrMc) uses another robust statistic, the Interquartile Range, to analyze historical utilization. IqrMc selects the VM most correlated with the host's CPU for migration [21].
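For reference, the dynamic thresholds used by MadMmt and IqrMc can be sketched as follows. The exact safety parameters are tunable in [3]; the values of s below are illustrative assumptions, not the paper's settings.

import statistics

def mad_upper_threshold(util_history, s=2.5):
    """MadMmt-style dynamic threshold: 1 - s * MAD of recent host utilization
    (cf. [3]). A host above this threshold is treated as over utilized."""
    med = statistics.median(util_history)
    mad = statistics.median([abs(u - med) for u in util_history])
    return 1.0 - s * mad

def iqr_upper_threshold(util_history, s=1.5):
    """IqrMc-style dynamic threshold: 1 - s * IQR of recent host utilization."""
    q1, _, q3 = statistics.quantiles(util_history, n=4)
    return 1.0 - s * (q3 - q1)

THR_RS_THRESHOLD = 0.8   # ThrRs uses this static threshold in the experiments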
TABLE II
HOST MODELS [15] [17] [14] [16]

Model                      | CPU                                | Frequency | Cores | RAM   | Test Date
Fujitsu Primergy RX1330 M1 | Intel Xeon E3-1275, 8 MB L3 Cache  | 2500 MHz  | 4     | 16 GB | Jul 30, 2014
Inspur NF5280M4            | Intel Xeon E5-2699 v3, 45 MB L3 Cache | 2300 MHz | 18   | 64 GB | Aug 29, 2014
Dell PowerEdge R820        | Intel Xeon E5-4650 v2, 25 MB L3 Cache | 2400 MHz | 40   | 48 GB | Apr 1, 2014
IBM NeXtScale nx360 M4     | Intel Xeon E5-2660 v2, 25 MB L3 Cache | 2200 MHz | 20   | 24 GB | Mar 17, 2014

We also compared PPRGear with a popular non-VM-migration energy-efficient algorithm, DVFS [9]. DVFS has been proved effective for CMOS integrated circuits such as CPUs and other processors. Since host computer utilization varies over time, it is not energy-efficient to keep CPUs working at the highest frequency all the time. DVFS dynamically alters the CPU working voltage and frequency to adapt to the current CPU utilization in order to conserve energy. However, DVFS only passively adapts to the utilization of the current host computer; VM allocation is not controlled by DVFS. Therefore, DVFS is considered "local", since it is only effective on the local host computer. Furthermore, DVFS adapts CPU voltage and frequency based only on CPU utilization, whereas the energy consumption of a cloud computing center includes CPU, memory, secondary storage, network adapters, etc. Hence, although DVFS conserves energy for processors, the overall energy conservation rate is not as significant. DVFS is used as a baseline algorithm in the experiments. Because DVFS works locally on each host, VMs are never migrated once they are allocated.

C. Workload Control

In our experiments, host utilization is calculated from all VMs allocated on the host. VM utilization is stated in the workload trace file, with a 300-second interval between measurements. In order to test PPRGear under different workloads, the original VM utilization is also manipulated by multiplying it by a control factor between 0.1 and 4. In all four subfigures of Fig. 4, energy consumption increases as the workload increases. Compared with DVFS, the energy conservation rate is as high as 95.3% when the number of preferred gears is 1 and the workload is 0.1x on the Fujitsu Primergy RX1330 M1 host model. Since DVFS does not apply VM migration to conserve energy, we further compared PPRGear with the other energy-efficient VM allocation and selection algorithms: ThrRs, MadMmt, and IqrMc. Fig. 4 also shows that the energy conservation rate is as high as 69.31% when the number of preferred gears is 1 and the workload is 0.1x on the Dell PowerEdge R820 host model. As the workload increases, the energy conservation rate decreases on all host models.

D. Average Active Power

It is worth mentioning that CPU utilization is not always exactly at the gear-assigned utilization levels such as 10% or 70%. Without loss of generality, we assume that power consumption increases linearly between two consecutive gears. Consequently, we can use linear interpolation to approximate the exact power consumption for a host working at any utilization level. The average active power at any given utilization U can be approximated using Eq. 1, where P_h and U_h are the power consumption and utilization of the next higher gear, and P_l and U_l are the power consumption and utilization of the next lower gear.

\text{AverageActivePower}(U) = \frac{P_h - P_l}{U_h - U_l} \, U - \frac{P_h U_l - P_l U_h}{U_h - U_l} \qquad (1)
utilization on current host computer. VM allocation is not
controlled by DVFS. Therefore, DVFS is considered as ”local” E. Preferred number of gears and Best Gear
since it is only effective on local host computer. Furthermore, Number of preferred gears indicates the top n highest PPR
DVFS adapts CPU voltage and frequency only based on gears are the preferred gears to stay. Number of preferred gears
CPU utilization. However, energy consumption of a cloud is used to judge whether a host is over utilized or underutilized.
computing center includes CPU, memory, secondary storage, If current host utilization is higher than the highest utilization
network adapter, etc. Hence, although DVFS conserves energy of preferred gears, then the current host is over utilized. If the
for processors, the overall energy consumption conservation current host utilization is lower than the lowest utilization of
rate is not as significant. DVFS is used as baseline algorithm preferred gears, then the current host is underutilized.
in the experiments. Because DVFS works locally on each host,
The best gear is the gear with the highest PPR value. In
VMs will not be migrated once they are allocated.
other words, the host computer achieves the most power-
efficient utilization level while working at the best gear. Best
C. Workload Control gear is used when migrating and allocating VMs. When
In our experiments, host utilization is calculated based on all migrating VMs from other hosts or allocating newly created
VM allocated on the host. VM utilization is stated in workload VMs, PPRGear attempts to make the targeting host computing
trace file with 300 seconds interval between each measure- work as close to the best gear utilization as possible.
ment. In order to test PPRGear under different workloads, the Note that there is always one best gear in one host computer
original VM utilization is also manipulated by multiplying and the best gear is also one of the preferred gears. Number of
a control factor between 0.1 to 4. In all four subfigures of preferred gears is set by cloud administrators and has signif-
Fig. 4, energy consumption increases as workload increases. icant performance impact on PPRGear. When the number of
Compared with DVFS, the energy conservation rate is as preferred gears is set small, the cloud works energy-efficiently
high as 95.3% when the number of preferred gears is 1 and but it could also lead to highly frequent VM migrations and
workload is 0.1x on Fujitsu Primergy RX1330 M1 host model. host shutdowns which will be harmful to host computers’
Since DVFS does not apply VM migration to conserve energy, reliability. If the number of preferred gears is large, it is very
we further compared PPRGear with other VM allocation and possible that the highest gear (gear 10 at utilization 100 %) is
selection energy efficient algorithms: ThrRs, MadMmt, and included in the preferred gears. Then, PPRGear does not work
IqrMc. Fig 4 also presents that the energy conservation rate energy-efficiently since there are too many preferred gears.
is as high as 69.31% when the number of preferred gears of Therefore, overutilized hosts and underutilized hosts will be
gears is 1 and workload is 0.1x on Dell PowerEdge R820 host very rare and few migrations will be triggered in PPRGear.
model. When workload increases, the energy conservation rate
decreases in all host models. F. Impact of Workload on Energy Consumption
Fig. 4 presents the energy consumption of various host types
D. Average Active Power
under different workloads. Host models of Figs. 4(a), 4(b),
It is worth mentioning that CPU utilization is not always 4(c), 4(d) are Fujitsu Primergy RX1330 M1, Inspur
exactly at the gear-assigned utilization levels such as 10% NF5280M4, Dell PowerEdge R820, and IBM NeXtScale
or 70%. Without loss of generality, we assume that power nx360 M4, respectively. The corresponding performance-to-
consumption increases linearly between two consecutive gears. power ratios and average active power are presented in Ta-
Consequently, we can use linear interpolation to approximate ble III.
TABLE III
PERFORMANCE-TO-POWER RATIO OF GEARS [15] [17] [14] [16]

Gear Level:   Gear0  Gear1  Gear2  Gear3  Gear4  Gear5  Gear6  Gear7  Gear8  Gear9  Gear10
Utilization:  0%     10%    20%    30%    40%    50%    60%    70%    80%    90%    100%

Performance-to-Power Ratio Values
Fujitsu Primergy RX1330 M1:  0  2425  4281  5857  6991  7821   8467   8540   8410   8231   8041
Inspur NF5280M4:             0  3796  6295  8063  9385  10590  11519  11536  11570  11198  10441
Dell PowerEdge R820:         0  2599  4538  5995  7130  8050   8705   9194   9533   10013  9372
IBM NeXtScale nx360 M4:      0  2589  4445  5858  6965  7849   8477   8952   9070   9012   8731

Average Active Power (Watt)
Fujitsu Primergy RX1330 M1:  13.8  20.8  23.9  26.3  29.1  32.6  36.2  42.0  48.6  55.9  63.7
Inspur NF5280M4:             44.4  83.3  101   118   135   146   161   190   218   255   301
Dell PowerEdge R820:         71.8  135   156   176   198   219   243   269   297   318   374
IBM NeXtScale nx360 M4:      44.4  83.3  101   118   135   146   161   190   218   255   301

[Figure 4 appears here. Each subfigure plots energy consumption (kWh) against workload (0.1x to 4x) for PPRGear with different numbers of preferred gears, IqrMc, MadMmt, ThrRs, and DVFS.]
(a) Fujitsu PRIMERGY RX1330 M1 (b) Inspur NF5280M4 (c) Dell PowerEdge R820 (d) IBM NeXtScale nx360
Fig. 4. Energy consumption of various host types under varying workloads
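Given the PPR rows of Table III, the best gear and preferred gears of Section III follow mechanically. A minimal sketch, using the Fujitsu row of Table III as input:

# PPR values per gear for the Fujitsu Primergy RX1330 M1, from Table III.
PPR_FUJITSU = [0, 2425, 4281, 5857, 6991, 7821, 8467, 8540, 8410, 8231, 8041]

def select_gears(ppr, n_preferred=3):
    """Return (best_gear, preferred_gears): the gear with the highest PPR and
    the n gears with the highest PPRs (the best gear is always among them)."""
    ranked = sorted(range(len(ppr)), key=lambda g: ppr[g], reverse=True)
    return ranked[0], sorted(ranked[:n_preferred])

best, preferred = select_gears(PPR_FUJITSU)
print(best, preferred)   # 7 [6, 7, 8] -> best gear at 70% utilization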

G. Impact of Preferred Gears Number on Energy Consumption, Migration and Shutdown Times

Fig. 5 shows the impact of the number of preferred gears on energy consumption, migration times, and shutdown times under the original workload 1x. Fig. 5(a) shows the impact of the number of preferred gears on energy cost. The energy conservation rate is significant when the number of preferred gears is small. When the number of preferred gears is large enough, the energy consumption becomes the same as that of DVFS. The reason is that when the number of preferred gears is set large, the host computers will mostly be working at the utilization levels of the preferred gears; the number of migrations drops, and so does the effectiveness of PPRGear. PPRGear also uses DVFS on individual host computers; therefore, if the number of preferred gears is large enough, PPRGear works exactly the same as DVFS.

One interesting observation is that, as the number of preferred gears increases, the Fujitsu Primergy RX1330 M1 kept a significant energy conservation rate until the number of preferred gears was 4, and the Inspur NF5280M4 and IBM NeXtScale nx360 M4 kept significant energy conservation rates until the number of preferred gears was 3, whereas the Dell PowerEdge R820 only kept a good energy conservation rate until the number of preferred gears was 2. This observation indicates that the effective number of preferred gears depends on the number of cores each host has: the more cores a host has, the more VMs it can execute.

Fig. 5(b) shows the impact of the number of preferred gears on migrations under the original workload 1x. According to Fig. 5(b), the more cores a host has, the fewer migrations are conducted in the cloud, due to the greater computing capacity. Fig. 5(c) reveals the impact of the number of preferred gears on shutdown times under the original workload 1x; the impact fades as the number of preferred gears gets larger.

[Figure 5 appears here, comparing the four host models.]
(a) Energy, workload 1x (b) Migration #, workload 1x (c) Shutdown #, workload 1x
Fig. 5. Impact of number of preferred gears on energy consumption, migration and shutdown times under the original workload 1x

H. Impact of Workload on SLA

A Service-Level Agreement (SLA) is made between the cloud service provider and the customer before the service begins; it embodies the promises made by the cloud service provider about the computing resources offered to the customer. If the SLA is violated, the cloud service provider usually refunds some money to the customer's account, depending on the SLA violation.

Instead of using execution time to evaluate performance, we use the SLA to evaluate the service-quality impact of PPRGear compared with ThrRs, MadMmt, IqrMc, and DVFS. Figs. 6(a), 6(b), 6(c), and 6(d) indicate that the SLA violation rates of PPRGear are either very close to or almost the same as those of ThrRs, MadMmt, IqrMc, and DVFS under different workloads and numbers of preferred gears.

[Figure 6 appears here, plotting overall SLA violation (%) against workload for PPRGear, IqrMc, MadMmt, ThrRs, and DVFS.]
(a) Fujitsu PRIMERGY RX1330 M1 (b) Inspur NF5280M4 (c) Dell PowerEdge R820 (d) IBM NeXtScale nx360
Fig. 6. SLA violation rate
nx360 M4 kept significant energy conservation rate until the
number of preferred gears was 3. However, Dell PowerEdge
I. Impact of Workload on Migration# and Shutdown#
R820 only kept good energy conservation rate until the number
of preferred gears was 2. This observation indicates that the When workload is getting heavier, smaller number of pre-
effective number of preferred gears depends on the cores that ferred gears may cause more migration and shutdown times.
each host has. The more cores a host has, the more VM a host Fig 7 presents the impact of workload on migration and
can execute. shutdown times. Fig. 7(a) shows that PPRGear (number of
Fig. 5(b) shows the impact of the number of preferred gears preferred gears is 1 and 2, respectively) causes fewer migration
on migration under the original workload 1x. According to times compared with other baseline algorithms when workload
Fig. 5(b), the more cores that a host has, the less migrations is less or equal to 1.5x.
 
 )XMLWVX 35,0(5*< 5; 0
)XMLWVX 35,0(5*<5; 0 )XMLWVX 35,0(5*< 5; 0
 ,QVSXU 1)0 ,QVSXU 1)0
,QVSXU 1)0
 'HOO 3RZHU(GJH 5
 'HOO3RZHU(GJH 5 'HOO 3RZHU(GJH 5
,%0 1H;W6FDOH Q[

(QHUJ\ &RQVXPSWLRQ N:K


,%0 1H;W6FDOH Q[ ,%0 1H;W6FDOH Q[
 

 RI +RVW 6KXWGRZQV


 RI 90 0LJUDWLRQ




 


 




  
              
3UHIHUUHG *HDU 3UHIHUUHG *HDU 3UHIHUUHG *HDU

(a) Energy Workload 1x (b) Migration # Workload 1x (c) Shutdown # Workload 1x

Fig. 5. Impact of number of preferred gears on energy consumption, migration and shutdown times under the original workload 1x

   
3UHIHUUHG *HDU  3UHIHUUHG *HDU  3UHIHUUHG *HDU 
3UHIHUUHG *HDU 
3UHIHUUHG *HDU  3UHIHUUHG *HDU  3UHIHUUHG *HDU 
 3UHIHUUHG *HDU  
 ,TU0F  ,TU0F ,TU0F
,TU0F
0DG0PW 0DG0PW 0DG0PW
2YHUDOO 6/$ 9LRODWLRQ 

2YHUDOO 6/$ 9LRODWLRQ 

2YHUDOO 6/$ 9LRODWLRQ 

2YHUDOO 6/$ 9LRODWLRQ 


0DG0PW
7KU56 7KU5V 7KU5V
 7KU5V 
'9)6 '9)6 '9)6
 '9)6 

 

 
 

 
 

   
[ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [
:RUNORDG :RUNORDG :RUNORDG :RUNORDG

(a) Fujitsu PRIMERGY RX1330 M1 (b) Inspur NF5280M4 (c) Dell PowerEdge R820 (d) IBM NeXtScale nx360

Fig. 6. SLA violation rate

 
3UHIHUUHG *HDU 
3UHIHUUHG *HDU   3UHIHUUHG *HDU 
 3UHIHUUHG *HDU 
,TU0&
,TU0F 
0DG0PW
0DG0PW
  7KU5V
 RI +RVW 6KXWGRZQV

7KU5V
 RI 90 0LJUDWLRQ




 

 





 
[ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [ [
:RUNORDG :RUNORDG

(a) Migration # Comparison (b) Shutdown # Comparison

Fig. 7. Migration # and Shutdown # Fujitsu PRIMERGY RX1330 M1

VI. RELATED WORK

An important aspect of energy-efficient clouds is accomplishing more jobs with less power. In energy-efficient clouds, power consumption is measured at the computing node level: different components, such as processors, memory, and second-level storage, have different power consumption models, and it is indirect and impractical to measure the power consumption of individual components in order to evaluate the overall power consumption. According to recent studies, although Dynamic Voltage Frequency Scaling demonstrates that the relationship between CPU power consumption and utilization is exponential, the relationship between overall power consumption and CPU utilization has been shown to be linear [6][11][12]. Based on this conclusion, most research on energy-efficient clouds has been conducted at the virtual machine level.

Virtual machine allocation, migration, and consolidation have been explored for both performance [2] and energy efficiency based on different strategies. Power-aware VM consolidation saves a significant amount of energy in clouds but may cause noteworthy performance degradation. Beloglazov et al. analyzed the energy-performance tradeoff for energy and performance efficient dynamic VM consolidation [3]. Consolidation can be triggered by conditions set up based on different policies. Agrawal et al. proposed pSciMapper, a power-aware consolidation framework based on the characteristics of scientific workloads [25]. Xu et al. designed algorithms to consolidate workloads while minimizing both energy consumption and network load [19]: assuming all VMs have been placed on physical hosts, VMs are reassigned with both energy consumption and network overhead in mind. Kim et al. proposed a VM placement strategy based on correlation information of core utilization [10].

In order to reduce both energy consumption and network overhead, Xu et al. applied VM packing algorithms, and interesting trade-offs have been found between energy consumption and network overhead [19]. Verma et al. presented pMapper, which places applications in virtualized systems with power and migration cost awareness [20]. Ghribi et al. explored the VM placement problem and used exact algorithms for both VM placement and workload consolidation to reduce energy consumption [7]. When the workload is extremely low (utilization < 10%), it is hard to conserve energy, since there is not much room to further reduce utilization (the DVFS strategy). Hence, Wang et al. proposed request batching [22] to group received requests into batches; the requests are served in batches, and hosts are shut down between batches. Xiao et al. used VMs to allocate system resources based on skewness to conserve energy [23].

VII. CONCLUSION

Energy consumption has become a big concern in the last decade, since cloud data centers consume significant power and generate giant power bills. In cloud computing, computing resources are allocated to virtual machines generated for customers, and the placement and migration of virtual machines have a huge impact on both performance and energy cost. If an energy-efficient algorithm only considers energy consumption when scheduling virtual machines, it is impossible to utilize computing resources efficiently. In this paper, we presented PPRGear, an energy-efficient virtual machine allocation scheme for energy-efficient clouds. To the best of our knowledge, our work is the first to leverage the performance-to-power ratio of computing nodes in virtual machine allocation and migration for an energy-efficient cloud solution. By achieving the optimal balance between host utilization and energy consumption, PPRGear is able to guarantee that host computers run at the most power-efficient levels (i.e., the levels with the highest Performance-to-Power Ratios), so that energy consumption can be tremendously reduced without sacrifice of performance. Our extensive experiments with real-world traces show that, compared with ThrRs, MadMmt, and IqrMc, PPRGear is able to reduce energy consumption by up to 69.31% with fewer migration and shutdown times.

REFERENCES

[1] SPEC Power, https://www.spec.org/power/.
[2] S. Bazarbayev, M. Hiltunen, K. Joshi, W. H. Sanders, and R. Schlichting. Content-based scheduling of virtual machines (VMs) in the cloud. In Distributed Computing Systems (ICDCS), 2013 IEEE 33rd International Conference on, pages 93–101, July 2013.
[3] Anton Beloglazov and Rajkumar Buyya. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurrency and Computation: Practice and Experience, 24(13):1397–1420, September 2012.
[4] Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, César A. F. De Rose, and Rajkumar Buyya. CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Software: Practice and Experience, January 2011.
[5] Khosrow Ebrahimi, Gerard F. Jones, and Amy S. Fleischer. A review of data center cooling technology, operating conditions and the corresponding low-grade waste heat recovery opportunities. Renewable and Sustainable Energy Reviews, 31:622–638, 2014.
[6] Xiaobo Fan, Wolf-Dietrich Weber, and Luiz Andre Barroso. Power provisioning for a warehouse-sized computer. In Proceedings of the 34th Annual International Symposium on Computer Architecture, ISCA '07, pages 13–23, New York, NY, USA, 2007. ACM.
[7] C. Ghribi, M. Hadji, and D. Zeghlache. Energy efficient VM scheduling for cloud data centers: Exact allocation and migration algorithms. In Cluster, Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on, pages 671–678, May 2013.
[8] James Glanz. The cloud factories: Power, pollution and the internet. The New York Times, 2012.
[9] Tom Guérout, Thierry Monteil, Georges Da Costa, Rodrigo Neves Calheiros, Rajkumar Buyya, and Mihai Alexandru. Energy-aware simulation with DVFS. Simulation Modelling Practice and Theory, 39:76–91, 2013.
[10] Jungsoo Kim, Martino Ruggiero, David Atienza, and Marcel Lederberger. Correlation-aware virtual machine allocation for energy-efficient datacenters. In Design, Automation Test in Europe Conference Exhibition (DATE), 2013, pages 1345–1350, March 2013.
[11] D. Kusic, J. O. Kephart, J. E. Hanson, Nagarajan Kandasamy, and Guofei Jiang. Power and performance management of virtualized computing environments via lookahead control. In Autonomic Computing, 2008. ICAC '08. International Conference on, pages 3–12, June 2008.
[12] C. Möbius, W. Dargie, and A. Schill. Power consumption estimation models for processors, virtual machines, and servers. IEEE Transactions on Parallel and Distributed Systems, 25(6):1600–1614, June 2014.
[13] KyoungSoo Park and Vivek S. Pai. CoMon: A mostly-scalable monitoring system for PlanetLab. SIGOPS Operating Systems Review, 40(1):65–74, January 2006.
[14] SPEC. Dell Inc. PowerEdge R820 (Intel Xeon E5-4650 v2 2.40 GHz), https://www.spec.org/power_ssj2008/results/res2014q2/power_ssj2008-20140401-00654.html.
[15] SPEC. Fujitsu FUJITSU Server PRIMERGY RX1330 M1, https://www.spec.org/power_ssj2008/results/res2014q3/power_ssj2008-20140804-00662.html.
[16] SPEC. IBM NeXtScale nx360 M4 (Intel Xeon E5-2660 v2), https://www.spec.org/power_ssj2008/results/res2014q2/power_ssj2008-20140421-00657.html.
[17] SPEC. Inspur Corporation NF5280M4 (Intel Xeon E5-2699 v3), https://www.spec.org/power_ssj2008/results/res2014q4/power_ssj2008-20140905-00673.html.
[18] Patrick Thibodeau. Data centers are the new polluters. Computerworld, 2014.
[19] N. Tziritas, Cheng-Zhong Xu, T. Loukopoulos, S. U. Khan, and Zhibin Yu. Application-aware workload consolidation to minimize both energy consumption and network load in cloud environments. In Parallel Processing (ICPP), 42nd International Conference on, Oct 2013.
[20] Akshat Verma, Puneet Ahuja, and Anindya Neogi. pMapper: Power and migration cost aware application placement in virtualized systems. In Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, Middleware '08, pages 243–264, New York, NY, USA, 2008. Springer-Verlag New York, Inc.
[21] Akshat Verma, Gargi Dasgupta, Tapan Kumar Nayak, Pradipta De, and Ravi Kothari. Server workload analysis for power minimization using consolidation. In Proceedings of the 2009 USENIX Annual Technical Conference, USENIX'09, pages 28–28, Berkeley, CA, USA, 2009. USENIX Association.
[22] Yefu Wang and Xiaorui Wang. Virtual batching: Request batching for server energy conservation in virtualized data centers. IEEE Transactions on Parallel and Distributed Systems, 24(8):1695–1705, Aug 2013.
[23] Zhen Xiao, Weijia Song, and Qi Chen. Dynamic resource allocation using virtual machines for cloud computing environment. IEEE Transactions on Parallel and Distributed Systems, 24(6):1107–1117, June 2013.
[24] Sungkap Yeo, Mohammad M. Hossain, Jen-Cheng Huang, and Hsien-Hsin S. Lee. ATAC: Ambient temperature-aware capping for power efficient datacenters. In Proceedings of the ACM Symposium on Cloud Computing, SOCC '14, pages 17:1–17:14, New York, NY, USA, 2014. ACM.
[25] Qian Zhu, Jiedan Zhu, and G. Agrawal. Power-aware consolidation of scientific workflows in virtualized environments. In High Performance Computing, Networking, Storage and Analysis (SC), 2010 International Conference for, pages 1–12, Nov 2010.
[26] Xiaomin Zhu, L. T. Yang, Huangke Chen, Ji Wang, Shu Yin, and Xiaocheng Liu. Real-time tasks oriented energy-aware scheduling in virtualized clouds. IEEE Transactions on Cloud Computing, 2(2):168–180, April 2014.
