
Energy Efficient / Green Cloud Computing
BY
Navneet Singh
Pursuing Ph.D. From SBBSU, Jalandhar
Outline
• Basics of cloud computing
• Review of literature: techniques for green cloud computing
• Dynamic consolidation of virtual machines
• Scope of research
• Research Problems
• Objectives
• Methodology
• Hardware & Software Requirement
What is Cloud Computing ?
John McCarthy, a mathematician and computer scientist.
The idea of cloud computing was given by John McCarthy in 1961. In a public speech on the occasion of MIT's centennial, he predicted that computer time-sharing technology might result in a future in which computing power, and even specific applications, could be sold through the utility business model, like water or electricity, and would be available on demand in a metered way.
• Cloud computing is the 5th utility (after water, electricity, gas, and telephony), delivered in a metered way, i.e., on a pay-as-per-use pattern.
Cloud Service Model
Three Service Model
• Software as a Service [SaaS]: Applications delivered as a service to end users, typically through a web browser. Examples of SaaS include Salesforce.com, Google Apps, and Microsoft Office 365.
• Platform as a Service [PaaS]: An application development and deployment platform delivered as a service to developers, who use the platform to build, deploy, and manage applications. Examples of PaaS include Google App Engine and Microsoft Azure.
• Infrastructure as a Service [IaaS]: IaaS refers to the sharing of hardware resources for executing services using virtualization technology. Examples of IaaS include Amazon S3 and Elastic Compute Cloud [EC2].
CaaS
• Recently a new type of service, called Containers as a Service (CaaS), has been introduced by Google and Amazon Web Services.
• Docker is a good example of a container management system.
What is a container?
• Containers encapsulate discrete components of application logic, provisioned only with the minimal resources needed to do their job.
• Unlike virtual machines (VMs), containers have no need for embedded operating systems (OS); calls for OS resources are made via an application programming interface (API).
• Containers are easily packaged, lightweight, and designed to run anywhere. Multiple containers can be deployed in a single VM.
• Containerisation is, in effect, OS-level virtualisation (as opposed to VMs, which run on hypervisors, each with a full embedded OS).
Cloud Deployment Model
Four Deployment Models
• Private Clouds: For exclusive use by a single organization; typically controlled, managed, and hosted by the organization's IT department. An example of a private cloud is the Eucalyptus system.
• Public Clouds: For use by multiple organizations on a shared basis; hosted and managed by a third-party service provider. The best-known examples of public clouds include Microsoft Azure and Google App Engine.
• Hybrid Clouds: When a single organization adopts both private and public clouds for a single application in order to take advantage of the benefits of both. An example of a hybrid cloud is Amazon Web Services [AWS].
• Community Clouds: For use by a group of related organizations that wish to make use of a common cloud computing environment. A special case of the community cloud is the Government or G-cloud.
Use of cloud computing
• Many industries, such as banking, healthcare, and education, are moving towards the cloud due to the efficiency of services provided by the 'pay-per-use' pattern, metered on resources such as processing power used, transactions carried out, bandwidth consumed, data transferred, or storage occupied.
Cloud Computing in Govt. sector in
India?
• Aneka, developed by Manjrasoft, is used in many sectors in India.
• Dr. Rajkumar Buyya, CEO of Manjrasoft Pty Ltd and the University of Melbourne, Australia.
• The Department of Space, Government of India, adopted Aneka as the cloud computing platform supporting the development of high-performance GIS applications.
• A number of institutions in India, such as MSRIT Bangalore and C-DAC Hyderabad, and in other countries, have used Aneka for setting up a cloud computing lab and used it to offer practical exposure to their students (studying cloud/grid/high-performance computing courses), in addition to building applications.
Cloud Consumer, SLA, Cloud Provider
Contents of SLA
Inside view ‘Microsoft Azure data center’
Components of Data center
Where data center power goes
Energy facts about Data Center
• Currently it is estimated that servers consume
0.5% of the world’s total electricity usage.
• Server energy demand doubles every 4-6
years.
• With their enormous appetite for energy,
today’s data centers emit as much carbon
dioxide as all of Argentina.
• Data center emissions are expected to
quadruple by 2020.
• The average data center consumes as much energy as 25,000 households.
How to make Data center Energy
Efficient?
• We can save the energy spent on cooling by building data centers in regions where the climate is cold.
• Facebook's massive Arctic server farm was built on the edge of the Arctic Circle in northern Sweden.
• This server farm can forgo air conditioning and instead cool itself with fresh Arctic air.
• Heat from the servers can also be used to help heat buildings in cold climates, thus reducing the energy consumption of conventional heaters.
Question?
• How to reduce the energy consumed by IT
equipment of a Data center?
• Energy consumed by IT equipment:
• Servers — 30%
• Storage — 10%
• Other IT equipment — 10%
So the major portion of energy is consumed by servers. If we can save energy here, we can make our data center energy efficient.
• The data center efficiency is measured by a
metric called power usage effectiveness (PUE),
which is the ratio of total data center energy
usage to IT equipment energy usage.
• A higher PUE value indicates that most of the data center's energy is consumed by cooling and other overheads instead of computing.
• The average PUE value of data centers in a 2015 survey was found to be 2, although aggressive measures can bring PUE down to about 1.2 (the theoretical ideal is 1.0).
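The PUE ratio defined above can be sketched in a few lines; the wattages used below are made-up illustrative figures, not measurements from any particular facility.

```python
def pue(total_facility_power_w, it_equipment_power_w):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (all power reaches the computing equipment)."""
    return total_facility_power_w / it_equipment_power_w

# Illustrative: a facility drawing 2 MW in total, of which 1 MW
# reaches the IT equipment, has the 2015-survey-average PUE of 2.0.
print(pue(2_000_000, 1_000_000))  # -> 2.0
```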
Energy efficient data center

• Data centers are known to consume a lot of electric power.
• While economizing on a data center's power consumption, utmost care needs to be taken that it never comes at the cost of the services provided to end users, i.e., SLA violations must be kept as low as possible.
Motivation for Green Data Centers
• Economic
– New data centers run on the megawatt scale, requiring millions of dollars to operate.
– Recently, institutions are looking for new ways to reduce costs; no more "blank checks."
– Many facilities are at their peak operating envelope and cannot expand without a new power source.
• Environmental
– 70% of the U.S. energy sources are fossil fuels.
– 2.8 billion tons of CO2 are emitted each year from U.S. power plants.
– Sustainable energy sources are not ready.
– Need to reduce energy dependence until a more sustainable energy source is deployed.
Literature Review of Techniques for Green Cloud Computing
Server power management

I. Independent voltage scaling [IVS]: In IVS, each node independently handles its own power consumption by making use of dynamic voltage scaling. This way, IVS can easily account for energy savings of 20–30%.
II. Coordinated voltage scaling [CVS]: CVS uses dynamic voltage scaling in a synchronized way, so that all nodes operate very close to a mean frequency setting across the data center.
III. Vary-On Vary-Off [VOVO]: VOVO works like a governor, regulating the server nodes so that at any point in time only as many servers are kept active as are sufficient to support the workload.
IV. Coordinated policy: In this policy, VOVO and CVS work in conjunction, where VOVO restricts the number of active servers and CVS controls and reduces the power consumption of the individual active nodes.
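The VOVO governor described above can be sketched as a simple sizing function; the per-server capacity figure and the function name are illustrative assumptions, not taken from the cited work.

```python
import math

def vovo_active_servers(workload_rps, server_capacity_rps, total_servers):
    """Vary-On Vary-Off governor: keep only as many servers active as are
    just sufficient for the current workload, capped at the pool size."""
    needed = math.ceil(workload_rps / server_capacity_rps) if workload_rps > 0 else 0
    # Keep at least one node on so the service stays reachable.
    return min(max(needed, 1), total_servers)

# 2500 requests/s against servers handling 1000 req/s each:
# only 3 of the 10 servers need to stay powered on.
print(vovo_active_servers(2500, 1000, 10))  # -> 3
```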
Virtualization

• When we observe that an application no longer needs an exclusive server, we can club a number of applications together and make them run on a single server, of course within its capacity to handle that load. This is what we mean by the term virtualization.
• It significantly decreases the amount of hardware required.
485 Watts vs. 552 Watts
• Consolidated: all eight VMs run on Node 1 @ 170 W, while Nodes 2–4 sit idle @ 105 W each, for a total of 485 W.
• Spread out: two VMs run on each of Nodes 1–4 @ 138 W each, for a total of 552 W.
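The figures in the comparison above can be checked with simple arithmetic: the consolidated layout wins even while the three vacated nodes are still drawing idle power.

```python
# Consolidated: one loaded node (170 W) plus three idle nodes (105 W each).
consolidated = 170 + 3 * 105   # 485 W

# Spread out: four moderately loaded nodes at 138 W each.
spread = 4 * 138               # 552 W

print(consolidated, spread)    # -> 485 552
```

Switching the three idle nodes to a low-power mode (as dynamic consolidation does) would widen the gap further.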
Live migration
Memory transfer phases:
• Push phase
The source VM remains in running mode while certain
pages are pushed across the network to the new
destination. To ensure consistency, pages modified
throughout this process must be re-sent.
• Stop-and-copy phase
The source VM is stopped, pages are copied across to the
destination VM, then the new VM is started.
• Pull phase
The new VM executes and, if it accesses a page that has
not been copied yet, this page is faulted in ("pulled")
across the network from the source VM.
Popular hypervisors, such as Xen and VMware, allow migrating an OS as it continues to run.
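In practice the push phase is iterative ("pre-copy"): all pages are sent once, then only the pages dirtied during each round are re-sent, until the remaining dirty set is small enough for a short stop-and-copy. A toy model of this loop, where the fixed re-dirtying fraction is a simplifying assumption:

```python
def precopy_rounds(total_pages, dirty_fraction, stop_copy_threshold, max_rounds=30):
    """Simulate pre-copy live migration.
    Returns (rounds, pages_sent, pages_left_for_stop_and_copy)."""
    to_send = total_pages
    sent = 0
    for round_no in range(1, max_rounds + 1):
        sent += to_send
        # Assumption: a fixed fraction of the pages just sent gets
        # dirtied again while the round is in flight.
        to_send = int(to_send * dirty_fraction)
        if to_send <= stop_copy_threshold:
            return round_no, sent, to_send  # small residue -> stop-and-copy
    return max_rounds, sent, to_send        # give up and stop-and-copy the rest

# 100,000 pages, 10% re-dirtied per round, stop when <= 100 pages remain.
print(precopy_rounds(100_000, 0.1, 100))  # -> (3, 111000, 100)
```

The lower the dirty fraction, the fewer rounds and the shorter the final downtime; a workload that dirties pages faster than the network can push them forces the fallback after `max_rounds`.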
Data center architecture
Data center architecture
• Source: Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking, pp. 183–186, ACM 2010.
• In all cases, the power consumption of the
architecture highly depends on the
parameters of the architecture, namely the
number of ports, denoted by n, that a server
or switch can have and the number of
structural levels, denoted by k.
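As one concrete instance of how n and k determine scale (and therefore power), the BCube architecture — one of the topologies analyzed in that line of work; picking it here is an assumption for illustration — hosts n^(k+1) servers when built from n-port switches with k+1 switch levels:

```python
def bcube_servers(n, k):
    """Number of servers in a BCube_k built from n-port switches."""
    return n ** (k + 1)

def bcube_switches(n, k):
    """BCube_k uses (k + 1) levels of n^k switches each."""
    return (k + 1) * n ** k

# 8-port switches, one extra level (k = 1): 64 servers, 16 switches.
print(bcube_servers(8, 1), bcube_switches(8, 1))  # -> 64 16
```

Because both counts grow with n and k, the switch count (and its power draw) scales together with the server population, which is why the architecture's power consumption depends so strongly on these two parameters.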
Energy aware profiling
• Schubert et al. [25] state that developers lack tools that indicate where the energy-hungry sections of their code are located and that help them optimize their code for energy consumption more accurately, instead of just relying on their own intuition.
Resource Monitoring Unit (RMU), Energy Profiling Unit (EPU)
Dynamic Thermal Management for
data center
• ASHRAE has suggested a maximum server inlet temperature of 27 °C for data center environments. Every 10 °C rise in temperature over 21 °C can result in a 50% decrease in hardware reliability.
• CADE (Corporate Average Datacenter Efficiency) is a new industry-standard efficiency measure developed by McKinsey that tells how energy efficient your data center is.
Dynamic consolidation of virtual
machines
• Dynamic consolidation of virtual machines
(VMs) is an effective way to improve the
utilization of resources and energy efficiency
in cloud data centers.
• The reduction in energy consumption can be achieved by switching idle nodes to low-power modes (e.g., sleep or hibernation), thus eliminating the idle power consumption.
Energy Efficient Cloud Computing :
Using dynamic VM consolidation

Prof. Ripal Nathuji and Prof. Karsten Schwan
Georgia Institute of Technology, Atlanta, GA 30032
Dynamic VM consolidation consists of
two basic processes
• migrating VMs from underutilized hosts to minimize the number of active hosts;
• offloading VMs from hosts when those become overloaded, to avoid performance degradation experienced by the VMs.
• Another capability provided by virtualization is live migration, which is the ability to transfer a VM between physical servers (referred to as hosts, or nodes) with close to zero downtime.
• Dynamic VM consolidation in Clouds is not
trivial since modern service applications often
experience highly variable workloads causing
dynamic resource usage patterns.
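The two consolidation processes above can be sketched with static CPU-utilization thresholds; the threshold values and action names are illustrative assumptions, not the algorithm evaluated in the thesis (which must also cope with the variable workloads just mentioned, e.g., via adaptive thresholds).

```python
UNDERLOAD = 0.3   # below this, try to vacate the host and switch it off
OVERLOAD = 0.8    # above this, offload VMs to avoid SLA violations

def classify_host(cpu_utilization):
    """Decide the consolidation action for a host from its CPU utilization."""
    if cpu_utilization < UNDERLOAD:
        return "migrate-all-and-sleep"   # underutilized: empty it, power it down
    if cpu_utilization > OVERLOAD:
        return "offload-some-vms"        # overloaded: relieve it via migration
    return "keep"                        # comfortably loaded: leave it alone

print([classify_host(u) for u in (0.1, 0.5, 0.95)])
# -> ['migrate-all-and-sleep', 'keep', 'offload-some-vms']
```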
Scope of research
• The scope of my research is energy-efficient
dynamic VM consolidation in Infrastructure as
a Service (IaaS) Cloud data centers under QoS
constraints.
• Another aspect distinguishing the work presented in this thesis from the related research is the distributed architecture of the VM management system.
Why distributed architecture ?
• A distributed VM management system
architecture is essential for large-scale Cloud
providers, as it enables the natural scaling of
the system to thousands of compute nodes.
• Another benefit of making the VM
management system distributed is the
improved fault tolerance by eliminating single
points of failure.
Research Problems

In particular, the following research problems are investigated:
• How to define workload-independent QoS
requirements.
• When to migrate VMs.
• Which VMs to migrate.
• Where to migrate the VMs selected for
migration.
• When and which physical nodes to switch
on/off.
• How to design distributed dynamic VM
consolidation algorithms.
Objectives
• Explore, analyze, and classify the research in the area of energy-efficient computing to gain a systematic understanding of the existing techniques and approaches.
• Conduct competitive analysis of algorithms for dynamic VM consolidation to obtain theoretical performance estimates and insights into the design of online algorithms for dynamic VM consolidation.
• Propose a workload-independent QoS metric that can be used in defining system-wide QoS constraints in IaaS environments.
• Propose an approach to designing a dynamic VM consolidation system in a distributed manner.
• Develop online algorithms for energy-efficient distributed dynamic VM consolidation for IaaS environments satisfying workload-independent QoS constraints.
• Design and implement a distributed dynamic VM consolidation system that can be used to evaluate the proposed algorithms on a multi-node IaaS testbed.
Methodology
The research methodology followed in this
thesis consists of several consecutive steps
summarized below:
(a). Conduct theoretical analysis of dynamic VM
consolidation algorithms to obtain theoretical
performance estimates and insights into
designing such algorithms.
(b). Develop distributed dynamic VM consolidation algorithms based on the insights from the conducted competitive analysis and the derived system model.
(c). Evaluate the proposed algorithms through
discrete-event simulation using the CloudSim
simulation toolkit extended to support power
and energy-aware simulations.
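Step (c) uses CloudSim, a Java toolkit. As a language-neutral sketch of the kind of model such power- and energy-aware simulations rely on, here is a linear host power model (similar in spirit to the linear models shipped with CloudSim; the idle/max wattages are made-up illustrative figures):

```python
def host_power_w(utilization, p_idle=100.0, p_max=250.0):
    """Linear power model: idle power plus a utilization-proportional part."""
    return p_idle + (p_max - p_idle) * utilization

def energy_kwh(utilization_trace, interval_s=300):
    """Integrate power over a trace of fixed-interval utilization samples."""
    joules = sum(host_power_w(u) * interval_s for u in utilization_trace)
    return joules / 3.6e6  # joules -> kWh

print(host_power_w(0.5))  # -> 175.0
# A host idling, half-loaded, then fully loaded for 5 minutes each:
print(energy_kwh([0.0, 0.5, 1.0]))
```

Note that even at zero utilization the host draws 100 W here, which is exactly the idle power that consolidation eliminates by putting vacated hosts to sleep.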
Hardware & Software Requirement
• Software requirement
• Operating System: Windows 7
• Simulator: CloudSim
• Sun’s Java 6 or newer

• Hardware Requirement
• Processor: Intel(R) Core™ i3-4005U CPU @ 1.70GHz
• RAM : 4 GB
• Hard Disk: 630 GB
References
[1].ENM Elnozahy, M Kistler, R Rajamony , Energy-efficient server clusters.
Power-Aware Computer Systems, 2002 – Springer.
[2].RK Sharma, CE Bash, CD Patel, Balance of power: Dynamic thermal
management for internet data centers. IEEE Internet
Computing (Volume:9, Issue: 1), pp. 42-49, IEEE, 2005.
[3]. AJ Younge, G Von Laszewski, L Wang, Efficient resource management for
cloud computing environments, Green Computing Conference, pp. 357-
364, IEEE 2010.
[4]. C Clark, K Fraser, S Hand, JG Hansen, Live migration of virtual machines. In
Proceedings of the 2nd conference on Symposium on Networked Systems
Design & Implementation - Volume 2, pp. 273-286, USENIX
Association Berkeley, CA, USA, 2005.
[5]. L Gyarmati, TA Trinh, How can architecture help to reduce energy
consumption in data center networking? In Proceedings of the 1st
International Conference on Energy-Efficient Computing and Networking,
pp. 183-186, ACM 2010.
[6]. I Alzamil, K Djemame, D Armstrong, Energy-Aware Profiling for Cloud
Computing Environments. Electronic Notes in Theoretical Computer
Science Volume 318, pp. 91-108, 25 November 2015.
[8] J.S. Chase, D.C. Anderson, P.N. Thakar, A.M. Vahdat, R.P. Doyle, Managing energy and server
resources in hosting centers, in: Proceedings of the 18th ACM Symposium on Operating Systems
Principles, ACM, New York, NY, USA, 2001, pp. 103–116.

[9] E. Elnozahy, M. Kistler, R. Rajamony, Energy-efficient server clusters, Power- Aware Computer
Systems (2003) 179–197.

[10] R. Nathuji, K. Schwan, VirtualPower: coordinated power management in virtualized enterprise systems, ACM SIGOPS Operating Systems Review 41 (6) (2007) 265–278.

[11] R. Raghavendra, P. Ranganathan, V. Talwar, Z. Wang, X. Zhu, No ‘‘power’’ struggles: coordinated multi-level power management for the data center, SIGARCH Computer Architecture News 36 (1) (2008) 48–59.

[12] D. Kusic, J.O. Kephart, J.E. Hanson, N. Kandasamy, G. Jiang, Power and performance management of virtualized computing environments via lookahead control, Cluster Computing 12 (1) (2009) 1–15.

[13] S. Srikantaiah, A. Kansal, F. Zhao, Energy aware consolidation for cloud computing, Cluster
Computing 12 (2009) 1–15.

[14] M. Cardosa, M. Korupolu, A. Singh, Shares and utilities based power consolidation in virtualized
server environments, in: Proceedings of the 11th IFIP/IEEE Integrated Network Management, IM
2009, Long Island, NY, USA, 2009.
[15] A. Verma, P. Ahuja, A. Neogi, pMapper: power and migration cost aware
application placement in virtualized systems, in: Proceedings of the 9th
ACM/IFIP/USENIX International Conference on Middleware, Springer, 2008, pp.
243–264.

[16] A. Gandhi, M. Harchol-Balter, R. Das, C. Lefurgy, Optimal power allocation in server farms, in: Proceedings of the 11th International Joint Conference on Measurement and Modeling of Computer Systems, ACM, New York, NY, USA, 2009, pp. 157–168.

[17] M. Gupta, S. Singh, Greening of the internet, in: Proceedings of the ACM
Conference on Applications, Technologies, Architectures, and Protocols for
Computer Communication, SIGCOMM 2003, New York, NY, USA, 2003, pp. 19–26.

[18] N. Vasic, D. Kostic, Energy-aware traffic engineering, in: Proceedings of the 1st
ACM International Conference on Energy-Efficient Computing and Networking, e-
Energy 2010, Passau, Germany, 2010, pp. 169–178.

[19] C. Panarello, A. Lombardo, G. Schembra, L. Chiaraviglio, M. Mellia, Energy saving and network performance: a trade-off approach, in: Proceedings of the 1st ACM International Conference on Energy-Efficient Computing and Networking, e-Energy 2010, Passau, Germany, 2010, pp. 41–50.
[20] L. Chiaraviglio, I. Matta, GreenCoop: cooperative green routing with energy efficient servers, in:
Proceedings of the 1st ACM International Conference on Energy-Efficient Computing and
Networking, e-Energy 2010, Passau, Germany, 2010, pp. 191–194.

[21] M. Koseoglu, E. Karasan, Joint resource and network scheduling with adaptive offset determination
for optical burst switched grids, Future Generation Computer Systems 26 (4) (2010) 576–589.

[22] L. Tomas, A. Caminero, C. Carrion, B. Caminero, Network-aware metascheduling in advance with autonomous self-tuning system, Future Generation Computer Systems 27 (5) (2010) 486–497.

[23] E. Dodonov, R. de Mello, A novel approach for distributed application scheduling based on prediction of communication events, Future Generation Computer Systems 26 (5) (2010) 740–752.

[24] L. Gyarmati, T. Trinh, How can architecture help to reduce energy consumption in data center networking? in: Proceedings of the 1st ACM International Conference on Energy-Efficient Computing and Networking, e-Energy 2010, Passau, Germany, 2010, pp. 183–186.

[25] C. Guo, G. Lu, H. Wang, S. Yang, C. Kong, P. Sun, W. Wu, Y. Zhang, Secondnet: a data center network
virtualization architecture with bandwidth guarantees, in: Proceedings of the 6th International
Conference on Emerging Networking EXperiments and Technologies, CoNEXT 2010, Philadelphia,
USA, 2010.

[26] L. Rodero-Merino, L. Vaquero, V. Gil, F. Galan, J. Fontan, R. Montero, I. Llorente, From infrastructure
delivery to service management in clouds, Future Generation Computer Systems 26 (8) (2010)
1226–1240.
[27] R.N. Calheiros, R. Buyya, C.A.F.D. Rose, A heuristic for mapping
virtual machines and links in emulation testbeds, in: Proceedings of
the 38th International Conference on Parallel Processing, Vienna,
Austria, 2009.

[28] R.K. Sharma, C.E. Bash, C.D. Patel, R.J. Friedrich, J.S. Chase,
Balance of power: dynamic thermal management for internet data
centers, IEEE Internet Computing (2005) 42–49.

[29] T. Kgil, A. Saidi, N. Binkert, S. Reinhardt, K. Flautner, and T. Mudge, “PicoServer: Using 3D stacking technology to build energy efficient servers,” ACM J. Emerging Technol. Comput. Syst., vol. 4, no. 4, pp. 1–34, Oct. 2008.

[30] P. Ranganathan, P. Leech, D. Irwin, and J. Chase, “Ensemble-level power management for dense blade servers,” ACM SIGARCH Comput. Archit. News, vol. 34, no. 2, pp. 66–77, May 2006.
[31] T. Mudge and U. Holzle, “Challenges and opportunities for extremely energy-
efficient processors,” IEEE Micro, vol. 30, no. 4, pp. 20–24, Jul./Aug. 2010.

[32] A. Szalay, G. Bell, H. Huang, A. Terzis, and A. White, “Low-power Amdahl-balanced blades for data intensive computing,” ACM SIGOPS Oper. Syst. Rev., vol. 44, no. 1, pp. 71–75, Jan. 2010.
[33] Y. Wang, H. Liu, D. Liu, Z. Qin, Z. Shao, and E. H.-M. Sha, “Overhead-aware energy optimization for real-time streaming applications on multiprocessor system-on-chip,” ACM Trans. Des. Autom. Electron. Syst., vol. 16, no. 2, p. 14, Mar. 2011.

[34] Y. Wang, D. Liu, Z. Qin, and Z. Shao, “Memory-aware optimal scheduling with communication overhead minimization for streaming applications on chip multiprocessors,” in Proc. IEEE RTSS, Dec. 2010, pp. 350–359.

[35] Y. Wang, D. Liu, Z. Qin, and Z. Shao, “Optimally removing inter-core communication overhead for streaming applications on MPSoCs,” IEEE Trans. Comput., vol. 62, no. 2, pp. 336–350, Feb. 2013.

[36] J. Xu and J. Fortes, “Multi-objective virtual machine placement in virtualized data center environments,” in Proc. IEEE/ACM Int. Conf. CPSCom, Dec. 2010, pp. 179–188.
[37] L. Keys, S. Rivoire, and J. Davis, “The search for energy-efficient building blocks for
the data center,” in Proc. Comput. Architect., 2012, pp. 172–182.

[38] S. W. Keckler, W. J. Dally, B. Khailany, M. Garland, and D. Glasco, “GPUs and the future of parallel computing,” IEEE Micro, vol. 31, no. 5, pp. 7–17, Sep./Oct. 2011.

[39] P. Lotfi-Kamran, B. Grot, M. Ferdman, S. Volos, O. Kocberber, J. Picorel, A. Adileh, D. Jevdjic, S. Idgunji, E. Ozer, and B. Falsafi, “Scale-out processors,” in Proc. 39th Int. Symp. Comput. Archit., Jun. 2012, pp. 500–511.

[40] S. Pelley, D. Meisner, P. Zandevakili, T. Wenisch, and J. Underwood, “Power routing: Dynamic power provisioning in the data center,” ACM SIGPLAN Notices, vol. 45, no. 3, pp. 231–242, Mar. 2010.
[41] M. Floyd, S. Ghiasi, T. Keller, K. Rajamani, F. Rawson, J. Rubio, and M. Ware,
“System power management support in the IBM POWER6 microprocessor,” IBM J.
Res. Develop., vol. 51, no. 6, pp. 733–746, Nov. 2007.

[42] X. Fan, W.-D. Weber, and L. A. Barroso, “Power provisioning for a warehouse-sized
computer,” ACM SIGARCH Comput. Archit. News, vol. 35, no. 2, pp. 13–23, May
2007.

[43] D. Meisner, B. T. Gold, and T. F. Wenisch, “PowerNap: Eliminating server idle power,” SIGPLAN Notices, vol. 44, no. 3, pp. 205–216, Mar. 2009.
[44] R. Simanjorang, H. Yamaguchi, H. Ohashi, K. Nakao, T. Ninomiya, S. Abe, M. Kaga, and A. Fukui,
“High-efficiency high-power dc–dc converter for energy and space saving of power-supply system in
a data center,” in Proc. IEEE APEC Expo., Mar. 2011, pp. 600–605.

[45] A. H. Beitelmal and C. D. Patel, “Thermo-fluids provisioning of a high performance high density data
center,” Distrib. Parallel Databases, vol. 21, no. 2/3, pp. 227–238, Jun. 2007.
[46] S. V. Garimella, L.-T. Yeh, and T. Persoons, “Thermal management challenges in telecommunication
systems and data centers,” IEEE Trans. Compon. Packag. Manuf. Technol., vol. 2, no. 8, pp. 1307–
1316, Aug. 2012.

[47] D. Quirk and M. Patterson, “Ab-10-c021 the “right” temperature in datacom environments,”
ASHRAE Trans., vol. 116, no. 2, p. 192, 2010.

[48] J. Moore, J. Chase, P. Ranganathan, and R. Sharma, “Making scheduling “cool”: Temperature-aware
workload placement in data centers,” in Proc. Annu. Conf. USENIX Annu. Tech. Conf., Apr. 2005, pp.
61–75.
[49] N. Jiang and M. Parashar, “Enabling autonomic power-aware management of instrumented data
centers,” in Proc. IEEE Int. Symp. Parallel Distrib. Process., May 2009, pp. 1–8.

[50] M. Ellsworth, L. Campbell, R. Simons, and R. Iyengar, “The evolution of water cooling for IBM large
server systems: Back to the future,” in Proc. 11th Intersoc. Conf. Thermal Thermomech. Phenom.
Electron. Syst., May 2008, pp. 266–274.

[51] S. Zimmermann, I. Meijer, M. K. Tiwari, S. Paredes, B. Michel, and D. Poulikakos, “Aquasar: A hot
water cooled data center with direct energy reuse,” Energy, vol. 43, no. 1, pp. 237–245, Jul. 2012.
