[Fig. 1, panels (c) and (d): Power (W) vs. Utilization (0.0 - 1.0) — (c) Dell PowerEdge R820 [14], (d) IBM NeXtScale nx360 [16]]
10% ... 100%. Third, sort gears by PPR in descending order to find the best gear and the n preferred gears.

Fig. 1 indicates that although power consumption increases linearly as utilization increases, the highest PPR values may not appear at the highest power consumption values. Therefore, even though a host computer offers the highest performance at the highest utilization, the CPU is not working as efficiently as it does at a lower utilization, due to the lower PPR. Based on PPR values, PPRGear decides how to allocate new VMs and how to migrate running VMs.

Fig. 2 demonstrates an example of gear selection for the four host types (Fujitsu PRIMERGY RX1330 M1, Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4). In this example, the number of preferred gears is set to 3. According to the PPR values in Fig. 1, the best gear is selected from each group and marked in white; three preferred gears (including the best gear) are selected and marked in light gray; all gears higher than the preferred gears are selected as overutilized gears and marked in dark gray; all gears lower than the preferred gears are selected as underutilized gears and marked in a dot pattern. Each gear has a corresponding CPU utilization; for example, gear 7 means the current host computer is working at CPU utilization 70%. PPRGear attempts to keep each host working at the preferred gears as long as possible. If the utilization goes below or above the preferred gears, PPRGear conducts migrations to move VMs out or in. When PPRGear migrates VMs to a destination host, it attempts to allocate VMs so that the destination host works at the CPU utilization level closest to that of the best gear. Fig. 2 indicates that different host models may have different selections for best and preferred gears based on their PPR values. Therefore, allocation and migration may vary due to different gear selections.

Fig. 2. An illustration of gear selection (3 preferred gears and 1 best gear)

Fig. 3 serves as an example of VM allocation and migration based on our PPRGear framework. Assume there are 7 hosts in the cloud. Host 1 is overutilized; Hosts 5, 6, and 7 are underutilized; Hosts 2 and 3 are working at preferred gears; and Host 4 is working at the best gear.
Fig. 3. A possible snapshot of VM allocation and migration (3 machines underutilized, 1 over utilized, 1 at the best gear, 2 at preferred gears)
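The destination-selection rule described above (fill the destination host toward the best-gear utilization) might be sketched like this. `choose_destination` is a hypothetical helper name of ours; the real PPRGear policy must also account for capacity and SLA constraints not shown here.

```python
def choose_destination(vm_load, hosts, best_util):
    """Pick the host whose utilization after receiving the VM lands
    closest to the best-gear utilization (hypothetical illustration).

    hosts: list of (name, current_utilization) pairs; loads are CPU
    fractions in [0, 1]. Hosts that would exceed 100% are skipped.
    """
    feasible = [(name, u) for name, u in hosts if u + vm_load <= 1.0]
    if not feasible:
        return None  # no host can absorb this VM
    name, _ = min(feasible, key=lambda h: abs(h[1] + vm_load - best_util))
    return name

# A VM adding 20% load, best gear at 70% (gear 7): the 50%-loaded host
# ends up exactly at the best-gear utilization, so it is chosen.
hosts = [("host1", 0.9), ("host2", 0.5), ("host3", 0.2)]
```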
We also compared PPRGear with DVFS [9], a popular energy-efficient algorithm that does not migrate VMs. DVFS has been proved effective for CMOS integrated circuits such as CPUs and other processors. Since host utilization varies over time, it is not energy-efficient to keep CPUs working at the highest frequency all the time. DVFS dynamically alters the CPU voltage and frequency to adapt to the current CPU utilization in order to conserve energy. However, DVFS only passively adapts to the utilization of the current host; VM allocation is not controlled by DVFS. Therefore, DVFS is considered "local" since it is only effective on the local host. Furthermore, DVFS adapts CPU voltage and frequency based solely on CPU utilization, whereas the energy consumption of a cloud computing center includes CPU, memory, secondary storage, network adapters, etc. Hence, although DVFS conserves energy for processors, its overall energy conservation rate is less significant. DVFS is used as a baseline algorithm in the experiments. Because DVFS works locally on each host, VMs will not be migrated once they are allocated.

C. Workload Control

In our experiments, host utilization is calculated based on all VMs allocated on the host. VM utilization is stated in the workload trace file with a 300-second interval between measurements. In order to test PPRGear under different workloads, the original VM utilization is also manipulated by multiplying a control factor between 0.1 and 4. In all four subfigures of Fig. 4, energy consumption increases as workload increases. Compared with DVFS, the energy conservation rate is as high as 95.3% when the number of preferred gears is 1 and the workload is 0.1x on the Fujitsu Primergy RX1330 M1 host model. Since DVFS does not apply VM migration to conserve energy, we further compared PPRGear with other energy-efficient VM allocation and selection algorithms: ThrRs, MadMmt, and IqrMc. Fig. 4 also shows that the energy conservation rate is as high as 69.31% when the number of preferred gears is 1 and the workload is 0.1x on the Dell PowerEdge R820 host model. When the workload increases, the energy conservation rate decreases on all host models.

D. Average Active Power

It is worth mentioning that CPU utilization is not always exactly at the gear-assigned utilization levels such as 10% or 70%. Without loss of generality, we assume that power consumption increases linearly between two consecutive gears. Consequently, we can use linear interpolation to approximate the exact power consumption for a host working at any utilization level. The average active power at any given utilization U can be approximated using Eq. 1, where P_h and U_h are the power consumption and utilization of the next higher gear, and P_l and U_l are the power consumption and utilization of the next lower gear.

    AverageActivePower(U) = ((P_h - P_l) / (U_h - U_l)) * U - (P_h * U_l - P_l * U_h) / (U_h - U_l)    (1)

E. Preferred Number of Gears and Best Gear

The number of preferred gears indicates that the top n highest-PPR gears are the preferred gears for a host to stay at. The number of preferred gears is used to judge whether a host is overutilized or underutilized. If the current host utilization is higher than the highest utilization of the preferred gears, the host is overutilized. If the current host utilization is lower than the lowest utilization of the preferred gears, the host is underutilized.

The best gear is the gear with the highest PPR value. In other words, the host computer achieves the most power-efficient utilization level while working at the best gear. The best gear is used when migrating and allocating VMs. When migrating VMs from other hosts or allocating newly created VMs, PPRGear attempts to make the target host work as close to the best-gear utilization as possible.

Note that there is always exactly one best gear per host computer, and the best gear is also one of the preferred gears. The number of preferred gears is set by cloud administrators and has a significant performance impact on PPRGear. When the number of preferred gears is set small, the cloud works energy-efficiently, but it could also lead to highly frequent VM migrations and host shutdowns, which are harmful to host computers' reliability. If the number of preferred gears is large, it is very possible that the highest gear (gear 10 at utilization 100%) is included in the preferred gears. Then PPRGear does not work energy-efficiently since there are too many preferred gears: overutilized and underutilized hosts will be very rare and few migrations will be triggered in PPRGear.

F. Impact of Workload on Energy Consumption

Fig. 4 presents the energy consumption of various host types under different workloads. The host models of Figs. 4(a), 4(b), 4(c), and 4(d) are Fujitsu Primergy RX1330 M1, Inspur NF5280M4, Dell PowerEdge R820, and IBM NeXtScale nx360 M4, respectively. The corresponding performance-to-power ratios and average active power are presented in Table III.
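The overutilized/underutilized test described in the text reduces to two threshold comparisons. A minimal sketch follows, assuming gear k maps to utilization k/10; the function name is ours, not the paper's.

```python
def classify_host(utilization, preferred_gears):
    """Classify a host against its preferred gears (gear k <-> utilization k/10).

    Above the highest preferred-gear utilization -> overutilized;
    below the lowest -> underutilized; otherwise the host stays put.
    """
    high = max(preferred_gears) / 10.0
    low = min(preferred_gears) / 10.0
    if utilization > high:
        return "overutilized"
    if utilization < low:
        return "underutilized"
    return "preferred"

# With preferred gears {6, 7, 8}: a host at 95% should shed VMs,
# while a host at 30% is a candidate to be drained and shut down.
```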
TABLE III
PERFORMANCE-TO-POWER RATIO OF GEARS [15] [17] [14] [16]

Gear Level:   Gear0  Gear1  Gear2  Gear3  Gear4  Gear5  Gear6  Gear7  Gear8  Gear9  Gear10
Utilization:  0%     10%    20%    30%    40%    50%    60%    70%    80%    90%    100%

Performance-to-Power Ratio Values
Fujitsu Primergy RX1330 M1:   0   2425   4281   5857   6991   7821   8467   8540   8410   8231   8041
Inspur NF5280M4:              0   3796   6295   8063   9385  10590  11519  11536  11570  11198  10441
Dell PowerEdge R820:          0   2599   4538   5995   7130   8050   8705   9194   9533  10013   9372
IBM NeXtScale nx360 M4:       0   2589   4445   5858   6965   7849   8477   8952   9070   9012   8731

Average Active Power (Watt)
Fujitsu Primergy RX1330 M1:   13.8   20.8   23.9   26.3   29.1   32.6   36.2   42.0   48.6   55.9   63.7
Inspur NF5280M4:              44.4   83.3   101    118    135    146    161    190    218    255    301
Dell PowerEdge R820:          71.8   135    156    176    198    219    243    269    297    318    374
IBM NeXtScale nx360 M4:       44.4   83.3   101    118    135    146    161    190    218    255    301
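Eq. 1 is ordinary linear interpolation between the two gears bracketing the current utilization. A short sketch using the Table III average-active-power row for the Fujitsu Primergy RX1330 M1; the function name and gear lookup are ours.

```python
# Average active power row for the Fujitsu Primergy RX1330 M1 (Table III).
FUJITSU_POWER = [13.8, 20.8, 23.9, 26.3, 29.1, 32.6, 36.2, 42.0, 48.6, 55.9, 63.7]

def average_active_power(u, gear_power):
    """Approximate active power at utilization u via Eq. 1.

    gear_power[k] is the measured power at utilization k/10, so the next
    lower and higher gears bracket u and Eq. 1 interpolates between them.
    """
    if not 0.0 <= u <= 1.0:
        raise ValueError("utilization must lie in [0, 1]")
    k = min(int(u * 10), 9)                 # index of the next lower gear
    u_l, u_h = k / 10.0, (k + 1) / 10.0
    p_l, p_h = gear_power[k], gear_power[k + 1]
    # Eq. 1: (Ph - Pl)/(Uh - Ul) * U - (Ph*Ul - Pl*Uh)/(Uh - Ul)
    return (p_h - p_l) / (u_h - u_l) * u - (p_h * u_l - p_l * u_h) / (u_h - u_l)

# At u = 0.15, halfway between gear 1 (20.8 W) and gear 2 (23.9 W),
# Eq. 1 yields the midpoint, ~22.35 W.
```

At a gear boundary the formula reduces to the measured value itself, which is a quick sanity check on the reconstruction of Eq. 1.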
[Fig. 4: energy consumption vs. workload — (a) Fujitsu PRIMERGY RX1330 M1, (b) Inspur NF5280M4, (c) Dell PowerEdge R820, (d) IBM NeXtScale nx360]
G. Impact of Preferred Gears Number on Energy Consumption, Migration and Shutdown Times

Fig. 5 shows the impact of the number of preferred gears on energy consumption, migration and shutdown times under the original workload 1x. Fig. 5(a) shows the impact of the number of preferred gears on energy cost under the original workload 1x. The energy conservation rate is significant when the number of preferred gears is small. When the number of preferred gears is large enough, the energy consumption becomes the same as that of DVFS. The reason is that when the number of preferred gears is set large, the host computer will mostly be working at the utilization levels of the preferred gears; the number of migrations will be reduced, and so will the effectiveness of PPRGear. PPRGear also uses DVFS on individual host computers. Therefore, if the number of preferred gears is large enough, PPRGear works exactly the same as DVFS.

One interesting observation is that, as the number of preferred gears increases, the Fujitsu Primergy RX1330 M1 kept a significant energy conservation rate until the number of preferred gears was 4, and the Inspur NF5280M4 and IBM NeXtScale nx360 M4 kept significant energy conservation rates until the number of preferred gears was 3. However, the Dell PowerEdge R820 only kept a good energy conservation rate until the number of preferred gears was 2. This observation indicates that the effective number of preferred gears depends on the number of cores each host has. The more cores a host has, the more VMs the host can execute.

Fig. 5(b) shows the impact of the number of preferred gears on migration under the original workload 1x. According to Fig. 5(b), the more cores a host has, the fewer migrations will be conducted in the cloud due to the greater computing capacity. Fig. 5(c) reveals the impact of the number of preferred gears on shutdown times under the original workload 1x. The impact fades as the number of preferred gears gets larger, according to Fig. 5(c).

H. Impact of Workload on SLA

A Service-Level Agreement (SLA) is made between cloud service providers and customers before the service. The SLA comprises the promises made by cloud service providers about the computing resources offered to customers. If the SLA is violated, the cloud service provider usually refunds some money to the customer's account, depending on the SLA violation.

Instead of using time to evaluate performance, we use the SLA to evaluate the service-quality impact of PPRGear compared with ThrRs, MadMmt, IqrMc, and DVFS. Figs. 6(a), 6(b), 6(c), and 6(d) indicate that the SLAs of PPRGear are either very close to or almost the same as those of ThrRs, MadMmt, IqrMc, and DVFS under different workloads and numbers of preferred gears.

I. Impact of Workload on Migration# and Shutdown#

When the workload gets heavier, a smaller number of preferred gears may cause more migration and shutdown times. Fig. 7 presents the impact of workload on migration and shutdown times. Fig. 7(a) shows that PPRGear (with the number of preferred gears set to 1 and 2, respectively) causes fewer migrations than the other baseline algorithms when the workload is less than or equal to 1.5x.
Fig. 5. Impact of number of preferred gears on energy consumption, migration and shutdown times under the original workload 1x

[Fig. 6: overall SLA violation vs. workload — (a) Fujitsu PRIMERGY RX1330 M1, (b) Inspur NF5280M4, (c) Dell PowerEdge R820, (d) IBM NeXtScale nx360]

[Fig. 7: # of VM migrations and # of host shutdowns vs. workload]
VI. RELATED WORK

An important aspect of energy-efficient clouds is accomplishing more jobs with less power. In energy-efficient clouds, power consumption is measured at the computing-node level, since different components, such as processors, memory, and second-level storage, have different power consumption models, and it is indirect and impractical to measure the power consumption of individual components in order to evaluate overall power consumption. According to recent studies, although Dynamic Voltage and Frequency Scaling demonstrates that the relationship between CPU power consumption and utilization is exponential, the relationship between overall power consumption and CPU utilization has been proved linear [6][11][12]. Based on this conclusion, most research on energy-efficient clouds has been conducted at the virtual machine level.

Virtual machine allocation, migration, and consolidation have been explored for both performance [2] and energy efficiency based on different strategies. Power-aware VM consolidation saves a significant amount of energy in clouds but may cause noteworthy performance degradation. Beloglazov et al. analyzed the energy-performance tradeoff for energy- and performance-efficient dynamic VM consolidation [3]. Consolidation can be triggered by conditions set up based on different policies. Agrawal et al. proposed pSciMapper, a power-aware consolidation framework based on the characteristics of scientific workloads [25]. Xu et al. designed algorithms to consolidate workloads while minimizing both energy consumption and network workload [19]: assuming all VMs have been placed on physical hosts, VMs are reassigned with both energy consumption and network overhead in mind. Kim et al. proposed a strategy of VM placement based on correlation information of core utilization [10].

In order to reduce both energy consumption and network overhead, Xu et al. applied VM packing algorithms, and interesting trade-offs have been found between energy consumption and network overhead [19]. Verma et al. presented pMapper, which places applications in virtualized systems with power and migration cost awareness [20]. Ghribi et al. explored the VM placement problem and used exact algorithms for both VM placement and workload consolidation to reduce energy consumption [7]. When the workload is extremely low (utilization < 10%), it is hard to conserve energy since there is not much room to further reduce utilization (the DVFS strategy). Hence, Wang et al. proposed request batching [22] to group received requests into batches; the requests are served in batches and hosts are shut down between batches. Xiao et al. used VMs to allocate system resources based on skewness to conserve energy [23].

VII. CONCLUSION

Energy consumption has become a big concern in the last decade since cloud data centers consume significant power and generate giant power bills. In cloud computing, computing resources are allocated to virtual machines generated for customers. The placement and migration of virtual machines have a huge impact on both performance and energy cost. If an energy-efficient algorithm only considers energy consumption when scheduling virtual machines, it is impossible to utilize computing resources efficiently. In this paper, we presented PPRGear, an energy-efficient virtual machine allocation scheme for energy-efficient clouds. To the best of our knowledge, our work is the first to leverage the performance-to-power ratio of computing nodes in virtual machine allocation and migration for an energy-efficient cloud solution. By achieving the optimal balance between host utilization and energy consumption, PPRGear is able to guarantee that host computers run at the most power-efficient levels (i.e., the levels with the highest performance-to-power ratios) so that energy consumption can be tremendously reduced without sacrificing performance. Our extensive experiments with real-world traces show that, compared with ThrRs, MadMmt, and IqrMc, PPRGear is able to reduce energy consumption by up to 69.31% with fewer migration and shutdown times.

REFERENCES

[1] SPEC Power, https://www.spec.org/power/.
[2] S. Bazarbayev, M. Hiltunen, K. Joshi, W. H. Sanders, and R. Schlichting. Content-based scheduling of virtual machines (VMs) in the cloud. In Distributed Computing Systems (ICDCS), 2013 IEEE 33rd International Conference on, pages 93–101, July 2013.
[3] Anton Beloglazov and Rajkumar Buyya. Optimal online deterministic algorithms and adaptive heuristics for energy and performance efficient dynamic consolidation of virtual machines in cloud data centers. Concurr. Comput.: Pract. Exper., 24(13):1397–1420, September 2012.
[4] Rodrigo N. Calheiros, Rajiv Ranjan, Anton Beloglazov, César A. F. De Rose, and Rajkumar Buyya. CloudSim: A toolkit for modeling and simulation of cloud computing environments and evaluation of resource provisioning algorithms. Softw. Pract. Exper., January 2011.
[5] Khosrow Ebrahimi, Gerard F. Jones, and Amy S. Fleischer. A review of data center cooling technology, operating conditions and the corresponding low-grade waste heat recovery opportunities. Renewable and Sustainable Energy Reviews, 31(0):622–638, 2014.
[6] Xiaobo Fan, Wolf-Dietrich Weber, and Luiz Andre Barroso. Power provisioning for a warehouse-sized computer. In Proceedings of the 34th Annual International Symposium on Computer Architecture, ISCA '07, pages 13–23, New York, NY, USA, 2007. ACM.
[7] C. Ghribi, M. Hadji, and D. Zeghlache. Energy efficient VM scheduling for cloud data centers: Exact allocation and migration algorithms. In Cluster, Cloud and Grid Computing (CCGrid), 2013 13th IEEE/ACM International Symposium on, pages 671–678, May 2013.
[8] James Glanz. The cloud factories: Power, pollution and the internet. The New York Times, 2012.
[9] Tom Gurout, Thierry Monteil, Georges Da Costa, Rodrigo Neves Calheiros, Rajkumar Buyya, and Mihai Alexandru. Energy-aware simulation with DVFS. Simulation Modelling Practice and Theory, 39(0):76–91, 2013.
[10] Jungsoo Kim, Martino Ruggiero, David Atienza, and Marcel Lederberger. Correlation-aware virtual machine allocation for energy-efficient datacenters. In Design, Automation Test in Europe Conference Exhibition (DATE), 2013, pages 1345–1350, March 2013.
[11] D. Kusic, J. O. Kephart, J. E. Hanson, Nagarajan Kandasamy, and Guofei Jiang. Power and performance management of virtualized computing environments via lookahead control. In Autonomic Computing, 2008. ICAC '08. International Conference on, pages 3–12, June 2008.
[12] C. Mobius, W. Dargie, and A. Schill. Power consumption estimation models for processors, virtual machines, and servers. Parallel and Distributed Systems, IEEE Transactions on, 25(6):1600–1614, June 2014.
[13] KyoungSoo Park and Vivek S. Pai. CoMon: A mostly-scalable monitoring system for PlanetLab. SIGOPS Oper. Syst. Rev., 40(1):65–74, January 2006.
[14] SPEC. Dell Inc. PowerEdge R820 (Intel Xeon E5-4650 v2 2.40 GHz), https://www.spec.org/power_ssj2008/results/res2014q2/power_ssj2008-20140401-00654.html.
[15] SPEC. Fujitsu FUJITSU Server PRIMERGY RX1330 M1, https://www.spec.org/power_ssj2008/results/res2014q3/power_ssj2008-20140804-00662.html.
[16] SPEC. IBM NeXtScale nx360 M4 (Intel Xeon E5-2660 v2), https://www.spec.org/power_ssj2008/results/res2014q2/power_ssj2008-20140421-00657.html.
[17] SPEC. Inspur Corporation NF5280M4 (Intel Xeon E5-2699 v3), https://www.spec.org/power_ssj2008/results/res2014q4/power_ssj2008-20140905-00673.html.
[18] Patrick Thibodeau. Data centers are the new polluters. Computerworld, 2014.
[19] N. Tziritas, Cheng-Zhong Xu, T. Loukopoulos, S. U. Khan, and Zhibin Yu. Application-aware workload consolidation to minimize both energy consumption and network load in cloud environments. In Parallel Processing (ICPP), 42nd International Conference on, Oct 2013.
[20] Akshat Verma, Puneet Ahuja, and Anindya Neogi. pMapper: Power and migration cost aware application placement in virtualized systems. In Proceedings of the 9th ACM/IFIP/USENIX International Conference on Middleware, Middleware '08, pages 243–264, New York, NY, USA, 2008. Springer-Verlag New York, Inc.
[21] Akshat Verma, Gargi Dasgupta, Tapan Kumar Nayak, Pradipta De, and Ravi Kothari. Server workload analysis for power minimization using consolidation. In Proceedings of the 2009 Conference on USENIX Annual Technical Conference, USENIX'09, pages 28–28, Berkeley, CA, USA, 2009. USENIX Association.
[22] Yefu Wang and Xiaorui Wang. Virtual batching: Request batching for server energy conservation in virtualized data centers. Parallel and Distributed Systems, IEEE Transactions on, 24(8):1695–1705, Aug 2013.
[23] Zhen Xiao, Weijia Song, and Qi Chen. Dynamic resource allocation using virtual machines for cloud computing environment. Parallel and Distributed Systems, IEEE Transactions on, 24(6):1107–1117, June 2013.
[24] Sungkap Yeo, Mohammad M. Hossain, Jen-Cheng Huang, and Hsien-Hsin S. Lee. ATAC: Ambient temperature-aware capping for power efficient datacenters. In Proceedings of the ACM Symposium on Cloud Computing, SOCC '14, pages 17:1–17:14, New York, NY, USA, 2014. ACM.
[25] Qian Zhu, Jiedan Zhu, and G. Agrawal. Power-aware consolidation of scientific workflows in virtualized environments. In High Performance Computing, Networking, Storage and Analysis (SC), 2010 International Conference for, pages 1–12, Nov 2010.
[26] Xiaomin Zhu, L. T. Yang, Huangke Chen, Ji Wang, Shu Yin, and Xiaocheng Liu. Real-time tasks oriented energy-aware scheduling in virtualized clouds. Cloud Computing, IEEE Transactions on, 2(2):168–180, April 2014.