
International Journal of Computer Trends and Technology (IJCTT) volume 4 Issue 8 August 2013

ISSN: 2231-2803 http://www.ijcttjournal.org Page 2466



Efficient Allocation of Virtual Machines to Optimize Energy in Cloud Data Centre
Jagjeet Singh 1, Sarpreet Singh 2
1 Research Fellow, 2 Asst. Professor
1,2 Sri Guru Granth Sahib World University, Fatehgarh Sahib, Punjab.

Abstract: Advancements in technology have resulted in a growing demand for computing power that has in turn led to the creation of large-scale data centers. These data centers consume enormous amounts of electrical power, resulting in high operational costs and CO2 emissions. Moreover, modern Cloud computing environments have to provide a high Quality of Service (QoS) for their customers, resulting in the need to deal with the power-performance trade-off. We propose an efficient resource management policy for virtualized Cloud data centers. The proposed scheme considers maximum and minimum CPU utilization threshold values. If the CPU utilization of a host falls below the minimum threshold, all VMs are migrated from this host and the host is switched off in order to eliminate idle power consumption. We present evaluation results showing that dynamic reallocation of VMs brings substantial energy savings, thus justifying further development of the proposed policy.

Keywords: Virtualisation, Energy Efficiency, Cloud Computing, Energy Consumption, CPU Scheduling, Allocation of VMs, CloudSim.

I. INTRODUCTION

In recent years, IT infrastructures have continued to grow rapidly, driven by the demand for computational power created by modern compute-intensive business and scientific applications. However, a large-scale computing infrastructure consumes enormous amounts of electrical power, leading to operational costs that exceed the cost of the infrastructure itself within a few years. For example, in 2006 the cost of the electricity consumed by IT infrastructures in the US was estimated at $4.5 billion and was expected to double by 2011 [1]. Apart from the overwhelming operational costs, high power consumption results in reduced system reliability and shorter device lifetimes owing to overheating. Another problem is the significant CO2 emissions that contribute to the greenhouse effect. One of the ways to reduce the power consumption of a data center is to use virtualization technology. This technology allows one to consolidate several servers onto one physical node as Virtual Machines (VMs), reducing the amount of hardware in use. The recently emerged Cloud computing paradigm leverages virtualization and provides on-demand resource provisioning over the Internet on a pay-as-you-go basis [2]. This allows enterprises to drop the costs of maintaining their own computing environment and to outsource their computational needs to the Cloud. It is essential for Cloud providers to offer reliable Quality of Service (QoS) to customers, negotiated in terms of Service Level Agreements (SLA), e.g. throughput and response time. Therefore, to ensure efficient resource management and provide higher utilization of resources, Cloud providers (e.g. Amazon EC2) have to manage the power-performance trade-off, as aggressive consolidation of VMs can lead to performance loss. Based on trends from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE), it has been estimated that by 2014 infrastructure and energy costs would contribute about 75%, whereas IT would contribute just 25%, of the cost of operating a data center [3].

II. RESOURCE MANAGEMENT

Cloud computing is becoming one of the most rapidly expanding technologies in the computing industry today. It allows users to migrate their data and computation to a remote location with minimal impact on system performance [7]. In general this provides a number of benefits that could not otherwise be realised. These benefits include:

Scalable: Clouds are designed to deliver as much computing power as any user needs. While in practice the underlying infrastructure is not infinite, cloud resources are designed to ease the developer's dependence on any specific hardware.

Quality of Service (QoS): Unlike standard data centers and advanced computing resources, a well-designed Cloud can project a much higher QoS than is typically attainable. This is due to the lack of dependence on specific hardware, so any physical machine failure can be mitigated without the user's knowledge.

Specialized Environment: Within a Cloud, the user can utilize custom tools and services to meet their needs. This may be to use the latest library or toolkit, or to support legacy code within new infrastructure.

Cost Effective: Users pay only for the hardware required for each project. This greatly reduces the risk for institutions that may be looking to build a scalable system, thus providing greater flexibility, since the user is only paying for the needed infrastructure while maintaining the option to expand services in the future.

Simplified Interface: Whether using a specific application, a set of tools or web services, Clouds provide access to a potentially vast amount of computing resources in a simple and user-centric way. We have investigated such an interface within Grid systems through the use of the Cyberaide project [8], [9].

There are a number of underlying technologies, services, and infrastructure-level configurations that make Cloud computing possible. One of the most important of these technologies is the use of virtualization [10], [11].

Virtualization is a way to abstract the hardware and system resources from the operating software. This is typically performed within a Cloud environment across a large set of servers using a Hypervisor or Virtual Machine Monitor (VMM) that lies between the hardware and the Operating System (OS). From here, one or more virtualized OSs can be started simultaneously, leading to one of the key advantages of Cloud computing. This, together with the advent of multi-core processing capabilities, allows for a consolidation of resources within any data center. It is the Cloud's job to exploit this capability to its maximum potential while still maintaining a given QoS. Virtualization is not specific to Cloud computing. IBM originally pioneered the concept in the 1960s with the M44/44X systems; it has only recently been reintroduced for general use on x86 platforms. Today there are a number of Clouds that offer Infrastructure as a Service (IaaS). The Amazon Elastic Compute Cloud (EC2) [12] is perhaps the most popular of these and is used extensively in the IT industry. Eucalyptus [13] is becoming popular in both the scientific and industry communities; it provides the same interface as EC2 and allows users to build an EC2-like cloud using their own internal resources. Other scientific Cloud-specific projects exist, such as OpenNebula [14], InVIGO [15], and Cluster-on-Demand; they provide their own interpretation of private Cloud services within a data center. Using a Cloud deployment overlaid on a Grid computing system has been explored by the Nimbus project with the Globus Toolkit. All of these clouds leverage the power of virtualization (typically using the Xen hypervisor) to create an enhanced data center.
III. RELATED WORK

Anton Beloglazov and Rajkumar Buyya in 2012 [1] explained that the rapid growth in demand for computational power driven by modern service applications, combined with the shift to the Cloud computing model, has led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and greenhouse gas emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to sleep mode allows Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers leads to the necessity of addressing the energy-performance trade-off, as aggressive consolidation may result in performance degradation. Owing to the variability of workloads experienced by modern applications, the VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, the authors conduct a competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, the authors propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data of resource usage by VMs. The proposed algorithms significantly reduce energy consumption while ensuring a high level of adherence to the Service Level Agreements (SLA). The authors validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs.

Anton Beloglazov and Rajkumar Buyya in 2010 [2] propose an energy-efficient resource management system for virtualized Cloud data centers that reduces operational costs and provides the required Quality of Service (QoS). Energy savings are achieved by continuous consolidation of VMs according to the current utilization of resources, the virtual network topologies established between VMs, and the thermal state of computing nodes. The authors present the first results of a simulation-driven evaluation of heuristics for dynamic reallocation of VMs using live migration according to current requirements for CPU performance. The results show that the proposed technique brings substantial energy savings while ensuring reliable QoS. This justifies further investigation and development of the proposed resource management system.
In this paper the authors present a decentralized architecture of the energy-aware resource management system for Cloud data centers. They define the problem of minimizing the energy consumption while meeting QoS requirements and state the requirements for VM allocation policies. Moreover, the authors propose three stages of continuous optimization of VM placement and present heuristics for a simplified version of the first stage. The heuristics are evaluated by simulation using the extended CloudSim toolkit. One of the heuristics leads to a significant reduction of the energy consumption of a Cloud data center: by 83% compared to a non-power-aware system and by 66% compared to a system that applies only the DVFS technique but does not adapt the allocation of VMs at run-time. Moreover, the MM policy allows flexible adjustment of SLA by setting appropriate values of the utilization thresholds: SLA can be relaxed, leading to further improvement of energy consumption. The policy supports heterogeneity of both the hardware and the VMs, does not require any knowledge about specific applications running on the VMs, and is independent of the workload type.

Anton Beloglazov, Jemal Abawajy and Rajkumar Buyya in 2012 [3] explain that Cloud computing offers utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables the hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs and carbon footprints. Therefore, Green Cloud computing solutions are needed that not only minimize operational costs but also reduce the environmental impact. In this paper, the authors define an architectural framework and principles for energy-efficient Cloud computing. Based on this architecture, the authors present their vision, open research challenges, and resource provisioning and allocation algorithms for energy-efficient management of Cloud computing environments. The proposed energy-aware allocation heuristics provision data center resources to client applications in a way that improves the energy efficiency of the data center, while delivering the negotiated Quality of Service (QoS). In particular, the paper conducts a survey of research in energy-efficient computing and proposes: (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms considering QoS expectations and power usage characteristics of the devices; and (c) a number of open research challenges, addressing which can bring substantial benefits to both resource providers and consumers. The authors validated their approach by conducting a performance evaluation study using the CloudSim toolkit. The results demonstrate that the Cloud computing model has great potential as it offers significant cost savings and demonstrates high potential for the improvement of energy efficiency under dynamic workload scenarios.
Corentin Dupont and Giovanni Giuliani in 2011 [5] explained that data centres are powerful ICT facilities that constantly evolve in size, complexity, and power consumption. At the same time, users' and operators' requirements become more and more complex. However, existing data centre frameworks do not usually take energy consumption into account as a key parameter of the data centre's configuration. To lower the power consumption while fulfilling performance requirements, the authors propose a flexible and energy-aware framework for the (re)allocation of virtual machines in a data centre. The framework, being independent of the data centre management system, computes and enacts the best possible placement of virtual machines based on constraints expressed through service level agreements. The framework's flexibility is achieved by decoupling the expressed constraints from the algorithms using the Constraint Programming (CP) paradigm and language, building on a cluster management library known as Entropy. Finally, the experimental and simulation results demonstrate the effectiveness of this approach in achieving the pursued energy optimisation goals.

IV. PROPOSED HEURISTIC APPROACH

Our proposed heuristic employs a task scheduling algorithm that schedules tasks to the VMs according to CPU power. According to our concept, if tasks are scheduled to the VMs earlier, the load is managed in a better way and this results in a smaller number of VM migrations. Further, migration of VMs is considered on the basis of the lowest CPU usage, on which the tasks are totally dependent. This process helps to minimize the total potential increase of utilization and SLA violations. For validation of our proposed work, we simulate the Non Power Aware (NPA) policy, MM and DVFS, and compare the proposed scheme with these benchmarks.

4.1 Proposed Model

In our proposed work, we use the concept of maximum and minimum utilization threshold values to fulfil the following objectives:

1. To find a better virtual machine resource management policy that reduces the migration of virtual machines.
2. To find a better scheme for virtual machine migration based on low CPU utilization.

4.2 Basic Design of System

We implement the proposed scheme, which considers maximum and minimum utilization threshold values. If the CPU utilization of a host falls below the minimum threshold, all VMs have to be migrated from this host and the host has to be switched off in order to eliminate idle power consumption. If the utilization goes over the maximum threshold, some VMs have to be migrated from the host to reduce the utilization and prevent a potential Service Level Agreement violation.
Further, to migrate the least number of VMs, the scheme uses minimization of migrations to reduce migration overhead. Our research follows a similar line of implementation by managing the load of task execution on the resources on the basis of CPU power. Better-managed resources reduce the number of over-utilized or under-utilized hosts and hence result in a smaller number of VM migrations.


Fig 1: Proposed Approach

Further, migration of VMs will be considered on the basis of the lowest CPU usage, on which the tasks are totally dependent. This process will be helpful to minimize the total potential increase of utilization and SLA violations.
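As an illustration of the host check described above, the following minimal Java sketch captures the threshold logic. It is not the exact implementation used in our simulations; the helper methods currentCpuUtilization(), migrateAllVmsAndSwitchOff() and migrateSomeVms() are hypothetical placeholders, and utilization is assumed to be reported as a fraction in [0, 1].

    import org.cloudbus.cloudsim.Host;

    // Minimal sketch of the per-host threshold check (helpers are hypothetical).
    void checkHost(Host host, double minThreshold, double maxThreshold) {
        double utilization = currentCpuUtilization(host);   // hypothetical helper
        if (utilization < minThreshold) {
            // Under-utilized host: migrate every VM away and switch the host off
            migrateAllVmsAndSwitchOff(host);
        } else if (utilization > maxThreshold) {
            // Over-utilized host: migrate some VMs to avoid an SLA violation
            migrateSomeVms(host);
        }
    }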

4.3 Proposed Algorithm

We propose a scheme for selecting an energy-efficient allocation of virtual machines in a cloud data center. The proposed scheme considers maximum and minimum utilization threshold values. If the CPU utilization of a host falls below the minimum threshold, all VMs have to be migrated from this host and the host has to be switched off in order to eliminate idle power consumption. If the utilization goes over the maximum threshold, some VMs have to be migrated from the host to reduce the utilization and prevent a potential Service Level Agreement violation. Further, for migration the scheme uses minimization of migrations to reduce migration overhead. Our research follows a similar line of implementation by using a task scheduling algorithm that schedules the tasks to the VMs according to CPU power. According to our concept, if tasks are scheduled to the VMs earlier, the load is managed in a better way and this results in a smaller number of VM migrations. Further, migration of VMs is considered on the basis of the lowest CPU usage, on which the tasks are totally dependent. This process helps to minimize the total potential increase of utilization and SLA violations. For validation of our proposed work, we simulate the Non Power Aware policy and DVFS; comparison is done with these schemes.
In this scheme we use the following task scheduling algorithm:
1. Start the algorithm.
2. Sort the list of cloudlets (tasks) on the basis of cloudlet size.
3. Loop while there are cloudlets to be scheduled.
4. Pick the cloudlet C(i) from the list, where i = {1, 2, 3, ..., n}.
5. Find the VM V(j) that may run the cloudlet successfully, where j = {1, 2, 3, ..., m}.
6. Bind VM V(j) to the cloudlet C(i).
7. If there are more cloudlets in the list, go to step 3.
8. Return control to the simulation.
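For concreteness, the steps above could be realized in CloudSim (Java) roughly as sketched below. The pairing of the sorted cloudlet list with VMs ordered by MIPS is our reading of steps 4-6 (the suitability test is not spelled out above), and the snippet assumes the standard CloudSim 3.x classes Cloudlet, Vm and DatacenterBroker.

    import java.util.Collections;
    import java.util.List;
    import org.cloudbus.cloudsim.Cloudlet;
    import org.cloudbus.cloudsim.DatacenterBroker;
    import org.cloudbus.cloudsim.Vm;

    // Sketch of steps 2-7: sort cloudlets by length and VMs by CPU power,
    // then bind the i-th largest cloudlet to the i-th fastest VM,
    // wrapping around when there are more cloudlets than VMs.
    void scheduleCloudlets(List<Cloudlet> cloudlets, List<Vm> vms, DatacenterBroker broker) {
        Collections.sort(cloudlets, (a, b) -> Long.compare(b.getCloudletLength(), a.getCloudletLength()));
        Collections.sort(vms, (a, b) -> Double.compare(b.getMips(), a.getMips()));
        for (int i = 0; i < cloudlets.size(); i++) {
            Vm target = vms.get(i % vms.size());
            broker.bindCloudletToVm(cloudlets.get(i).getCloudletId(), target.getId());
        }
    }

After binding, the cloudlets would be handed to the simulation in the usual way (e.g. via the broker's submitCloudletList method), corresponding to step 8.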

V. RESULTS

The proposed heuristics have been evaluated by simulation using the CloudSim toolkit [5]. The simulated data center includes 5 hosts. Each node is modeled to have one CPU core with performance equivalent to 3000, 2660, 2500, 1000 and 2000 MIPS respectively, 4 GB of RAM and 1 TB of storage. For the benchmark policies we simulated a Non Power Aware policy (NPA), Minimum Migration (MM) and DVFS, which adjusts the voltage and frequency of the CPU according to current utilization.
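As a rough sketch only, such a host list could be created with the standard CloudSim 3.x API as follows; the provisioners and scheduler shown are common CloudSim defaults, and the bandwidth value is an assumption since it is not specified above.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;
    import org.cloudbus.cloudsim.Host;
    import org.cloudbus.cloudsim.Pe;
    import org.cloudbus.cloudsim.VmSchedulerTimeShared;
    import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
    import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
    import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

    // Five single-core hosts with the MIPS ratings listed above,
    // 4 GB of RAM and 1 TB of storage each.
    List<Host> createHosts() {
        int[] mips = {3000, 2660, 2500, 1000, 2000};
        int ram = 4096;           // MB
        long storage = 1000000;   // MB (roughly 1 TB)
        long bw = 10000;          // assumed value; not specified in the text
        List<Host> hosts = new ArrayList<Host>();
        for (int i = 0; i < mips.length; i++) {
            List<Pe> peList = Arrays.asList(new Pe(0, new PeProvisionerSimple(mips[i])));
            hosts.add(new Host(i, new RamProvisionerSimple(ram), new BwProvisionerSimple(bw),
                    storage, peList, new VmSchedulerTimeShared(peList)));
        }
        return hosts;
    }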
The simulation results presented in Table 1 show that the Minimum CPU Utilization (MCU) policy brings higher energy savings compared to the MM, NPA and DVFS policies.
Table 1: Simulation Results

Policy   Energy (kWh)   SLA       Migrations   Overall SLA
MCU      0.03           0.002%    2            0.07%
MM       0.05           0.011%    13           0.16%
NPA      0.17           0.261%    13           0.85%
DVFS     0.09           0         0            0
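In relative terms, the energy values in Table 1 mean that MCU reduces energy consumption by roughly (0.17 - 0.03)/0.17, i.e. about 82%, compared to NPA, by about 67% compared to DVFS, and by 40% compared to MM.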




VI. CONCLUSION

In our proposed work we evaluate a scheme that minimizes energy consumption while providing reliable QoS. The obtained results show that the technique of task scheduling combined with migration from hosts with minimum CPU usage brings substantial energy savings and is applicable to real-world Cloud data centers. For future work, we propose to investigate setting the utilization thresholds dynamically according to the current set of VMs allocated to a host, and to investigate optimization over multiple system resources in VM reallocation policies, such as RAM and network bandwidth utilization.

REFERENCES

[1] R. Brown et al., "Report to Congress on Server and Data Center Energy Efficiency: Public Law 109-431," Lawrence Berkeley National Laboratory, 2008.
[2] R. Buyya, C. S. Yeo, and S. Venugopal, "Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities," in Proceedings of HPCC'08, IEEE CS Press, Los Alamitos, CA, USA, 2008.
[3] A. Beloglazov and R. Buyya, "Energy Efficient Resource Management in Virtualized Cloud Data Centers," 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, 2010.
[4] A. Beloglazov and R. Buyya, "Energy Efficient Allocation of Virtual Machines in Cloud Data Centers," 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, 2010.
[5] R. Buyya, R. Ranjan, and R. N. Calheiros, "Modeling and simulation of scalable cloud computing environments and the CloudSim toolkit: Challenges and opportunities," in Proceedings of HPCS'09, IEEE Press, NY, USA, 2009.
[6] A. Beloglazov and R. Buyya, "Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers," published online in Wiley InterScience (www.interscience.wiley.com), pp. 1397-1420, Vol. 24, 2012.
[7] A. Beloglazov and R. Buyya, "Energy Efficient Resource Management in Virtualized Cloud Data Centers," 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing, 2010.
[8] A. Beloglazov, J. Abawajy, and R. Buyya, "Energy-aware resource allocation heuristics for efficient management of data centers for Cloud computing," Future Generation Computer Systems, pp. 755-768, Vol. 28, 2012.
[9] C. Hankendi and A. K. Coskun, "Adaptive Energy-Efficient Resource Sharing for Multi-threaded Workloads in Virtualized Systems," IRNet IEEE International Conference on Cluster, Cloud and Grid Computing, 2011.
[10] C. Dupont and G. Giuliani, "An Energy Aware Framework for Virtual Machine Placement in Cloud Federated Data Centres," WASE International Conference on Information Engineering, pp. 251-254, 2011.
[11] R. Bhaskar, S. R. Deepu, and B. S. Shylaja, "Dynamic Allocation Method for Efficient Load Balancing in Virtual Machines for Cloud Computing Environment," Advanced Computing: An International Journal, pp. 56-59, Vol. 3, No. 5, September 2012.
[12] M. Kesavan and A. Gavrilovska, "Elastic Resource Allocation in Datacenters: Gremlins in the Management Plane," Journal of Information Processing Systems, pp. 345-347, Vol. 6, Issue 2, June 2010.
[13] A. Beloglazov and R. Buyya, "Managing Overloaded Hosts for Dynamic Consolidation of Virtual Machines in Cloud Data Centers Under Quality of Service Constraints," IEEE Transactions on Parallel and Distributed Systems, Vol. 3, Issue 5, Sep 2012.
