
Virtual Machine Allocation in Current Cloud

Computing Middleware
GWDG eScience Group
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen, Germany
Abstract—This technical report surveys the current status of virtual machine (VM) allocation in cloud computing middleware. The middleware we investigate are the Open Cloud Computing Interface (OCCI), OpenNebula (ONE) and OpenStack (OSA). We address four aspects in this report: 1) our requirements; 2) our utility functions; 3) the data; 4) the algorithms for maximizing the utility functions.
Index Terms—Cloud computing, Virtual machine allocation
Considering the current cloud environment at GWDG, we aim to optimize our own utility functions and to deploy the resulting strategies in the various cloud middleware we use. Our requirements are:
• Least energy consumption: VM consolidation, smaller carbon footprint, green computing.
• Minimize the number of active servers and of VM live migrations, subject to performance constraints.
• Reasonable resource utilization: neither over-provisioned nor under-provisioned.
• Load balancing among servers.
• Higher customer satisfaction and acceptance rate.
• Avoid SLA violations through monitoring, keep QoS guarantees and achieve higher service reliability.
• Higher profit.
The following are the current ONE/OSA default VM allocation strategies.
A. OpenStack
OpenStack has evolved from “Cactus” through “Diablo” to the most recent, fifth release, “Essex”. Its most important components are the user interface “OpenStack Dashboard”, “OpenStack Compute (Nova)”, “OpenStack Object Storage (Swift)”, “OpenStack Identity” and “OpenStack Image Service (Glance)”. These components communicate with each other through a RabbitMQ message queue and RPC; for example, nova-api publishes VM requests to the queue for the other services to process. Communication with the outside world is over HTTP. For VM scheduling, the most important component is “nova-scheduler”.
In OpenStack, nova-scheduler maps nova-api calls to the appropriate OpenStack components. It runs as a daemon and picks a compute/network/volume server from the pool of available resources according to the scheduling algorithm in place. A scheduler can base its decisions on various factors such as load, memory, physical distance of the availability zone, CPU architecture, etc. Since the data centers on which OpenStack runs differ from each other, nova-scheduler should be customized to factor in the current state of the entire cloud infrastructure and to apply more sophisticated algorithms that ensure efficient usage. To that end, nova-scheduler implements a pluggable architecture that lets you choose (or write) your own scheduling algorithm. All schedulers should inherit from the class “Scheduler” in “nova.scheduler.driver”.
Currently nova-scheduler implements a few basic scheduling drivers:
• “chance.ChanceScheduler”, inherits “driver.Scheduler”; with this method, a compute host is chosen randomly across availability zones.
• “simple.SimpleScheduler”, inherits “chance.ChanceScheduler”; the hosts whose load is least are chosen to run the instance. The load information may be fetched from a load balancer.
• “multi.MultiScheduler”, inherits “driver.Scheduler”; it holds a mapping of methods to topics so that the right driver can be chosen for each request type. For compute it points to “filter_scheduler.FilterScheduler”; for volume (binding extra storage to a VM instance) it points to “chance.ChanceScheduler”.
• “filter_scheduler.FilterScheduler” supports filtering and weighting to make informed decisions on where a new instance should be created. This scheduler works only with compute nodes [2]; see Figures 1 and 2. First, suitable hosts are filtered according to predefined filters, e.g., AllHostsFilter, AvailabilityZoneFilter, ComputeFilter, CoreFilter, IsolatedHostsFilter, JsonFilter, RamFilter, SimpleCIDRAffinityFilter, DifferentHostFilter, SameHostFilter and so on. Different filters are designed for different purposes; for instance, JsonFilter makes it possible to write complicated queries for host-capability filtering, based on a simple JSON-like syntax. Once all suitable hosts are known, they are weighted and the one with the least cost is selected. Therefore, to define our own strategy from scratch, a new filter and new cost functions are required.
Fig. 1. Hosts filtering
Fig. 2. Host weighting
“nova.scheduler.manager”, as the scheduling manager, sets the default scheduling algorithm. In Nova version 2011.2:
flags.DEFINE_string('scheduler_driver',
'nova.scheduler.chance.ChanceScheduler',
'Driver to use for the scheduler')
In Nova version 2011.3:
flags.DEFINE_string('scheduler_driver',
'nova.scheduler.multi.MultiScheduler',
'Default driver to use for the scheduler')
In Nova version 2012.1:
flags.DEFINE_string('scheduler_driver',
'nova.scheduler.multi.MultiScheduler',
'Default driver to use for the scheduler')
Therefore, the default driver changed from ChanceScheduler to MultiScheduler in 2011.3 and has remained unchanged since.
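The filter-then-weight pipeline described above can be sketched in Python. This is an illustrative sketch only, not actual Nova code: the Host fields, the filter functions and the cost function are all assumptions standing in for Nova's real filter and weighting classes.

```python
# Illustrative sketch of filter-then-weight scheduling (not actual Nova code).
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_ram_mb: int
    free_cores: int
    load: float  # hypothetical load metric, lower is better

def ram_filter(host, req):
    return host.free_ram_mb >= req["ram_mb"]

def core_filter(host, req):
    return host.free_cores >= req["vcpus"]

def schedule(hosts, req, filters, cost_fn):
    """Keep hosts passing all filters, then pick the one with least cost."""
    candidates = [h for h in hosts if all(f(h, req) for f in filters)]
    if not candidates:
        raise RuntimeError("no valid host found")
    return min(candidates, key=cost_fn)

hosts = [
    Host("node1", free_ram_mb=2048, free_cores=2, load=0.7),
    Host("node2", free_ram_mb=8192, free_cores=8, load=0.2),
    Host("node3", free_ram_mb=1024, free_cores=4, load=0.1),
]
req = {"ram_mb": 2048, "vcpus": 2}
best = schedule(hosts, req, [ram_filter, core_filter], cost_fn=lambda h: h.load)
print(best.name)  # node2: node3 fails the RAM filter, node2 has less load than node1
```

Defining a custom strategy then amounts to adding a predicate to the filter list and swapping the cost function, which mirrors the "new filter" and "new cost functions" extension points mentioned above.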
B. OCCI with OpenStack Features
OpenStack is one of the major players in the cloud community at the moment. Still, it currently lacks a standardized interface for managing VMs. This is changing: during the Cloud Plugfest, OCCI was demonstrated on top of OpenStack [4]. The current status is:
• General support for VM management is available.
• First set of tests available.
• Aligned with the coding Standards of OpenStack.
• OCCI Compliant.
• Deployment and Management of VMs and Volumes.
• Scaleup (Resize), Rebuild, Imaging of VMs.
• Integrated as a nova-wsgi WSGI service.
At the moment, fetching infrastructure configuration through OCCI is not supported; whether it will be depends on how and what the OCCI implementation for OpenStack aims to achieve.
C. OpenNebula
According to [5], the Scheduler module (as of version 3.4) is in charge of assigning pending VMs to known hosts. OpenNebula's architecture defines this module as a separate process that can be started independently of oned. The OpenNebula scheduling framework is designed in a generic way, so it is highly modifiable and can easily be replaced by third-party schedulers.
The approach is very similar to OpenStack's filtering and weighting strategy. OpenNebula comes with a match-making scheduler (mm_sched) that implements the Rank Scheduling Policy. The goal of this policy is to prioritize the resources most suitable for the VM.
The match-making algorithm works as follows:
• First, those hosts that do not meet the VM requirements (see the “REQUIREMENTS” attribute) or do not have enough resources (available CPU and memory) to run the VM are filtered out.
• The “RANK” expression is evaluated upon this list using
the information gathered by the monitor drivers.
• Any variable reported by the monitor driver can be
included in the rank expression.
• Those resources with a higher rank are used first to
allocate VMs.
This scheduling algorithm easily allows the implementation of several placement heuristics (see below), depending on the RANK expression used.
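The match-making steps can be sketched as follows. The host records and the numbers are hypothetical, though FREECPU and RUNNING_VMS are real variables reported by OpenNebula's monitor drivers; evaluating the RANK expression with Python's `eval` is a simplification of OpenNebula's own expression parser.

```python
# Sketch of OpenNebula-style match-making: filter on requirements,
# then evaluate a RANK expression on each remaining host.
hosts = [
    {"NAME": "host1", "FREECPU": 120, "FREEMEMORY": 4096, "RUNNING_VMS": 3},
    {"NAME": "host2", "FREECPU": 380, "FREEMEMORY": 8192, "RUNNING_VMS": 1},
    {"NAME": "host3", "FREECPU": 50,  "FREEMEMORY": 1024, "RUNNING_VMS": 6},
]

vm = {"CPU": 100, "MEMORY": 2048, "RANK": "FREECPU"}

def match_make(hosts, vm):
    # 1) filter out hosts without enough free CPU/memory for the VM
    ok = [h for h in hosts
          if h["FREECPU"] >= vm["CPU"] and h["FREEMEMORY"] >= vm["MEMORY"]]
    # 2) evaluate the RANK expression with the host's monitor variables in scope
    # 3) resources with a higher rank come first
    return sorted(ok, key=lambda h: eval(vm["RANK"], {}, h), reverse=True)

print([h["NAME"] for h in match_make(hosts, vm)])  # ['host2', 'host1']
```

host3 is filtered out for lack of free CPU; of the remaining hosts, host2 ranks highest because it reports the most FREECPU.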
There are three default placement policies in ONE, namely:
Packing Policy
• Target: Minimize the number of cluster nodes in use
• Heuristic: Pack the VMs in the cluster nodes to reduce
VM fragmentation
• Implementation: Use those nodes with more VMs running
Striping Policy
• Target: Maximize the resources available to VMs in a node
• Heuristic: Spread the VMs across the cluster nodes
• Implementation: Use those nodes with fewer VMs running
Load-aware Policy
• Target: Maximize the resources available to VMs in a node
• Heuristic: Use those nodes with less load
• Implementation: Use those nodes with more FREECPU
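Under this rank-based scheme, each of the three policies reduces to a RANK expression in the VM template; the expressions below follow the OpenNebula scheduler documentation, using the RUNNING_VMS and FREECPU monitor variables.

```
# Packing: use the nodes with more VMs running
RANK = "RUNNING_VMS"
# Striping: use the nodes with fewer VMs running
RANK = "- RUNNING_VMS"
# Load-aware: use the nodes with more free CPU
RANK = "FREECPU"
```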
We now review related work. In [18], the authors propose a joint-VM provisioning approach in which multiple VMs are consolidated and provisioned together, based on an estimate of their aggregate capacity needs. This approach exploits statistical multiplexing among the workload patterns of multiple VMs: the peaks and valleys in one workload pattern do not necessarily coincide with the others. A performance constraint describing the capacity a VM needs in order to achieve a certain level of application performance is also taken into account.
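The benefit of statistical multiplexing can be shown with a toy example (the demand traces below are made up): provisioning for the aggregate peak needs less capacity than the sum of the individual peaks whenever the peaks do not coincide.

```python
# Toy illustration of statistical multiplexing in joint-VM provisioning.
# Two VMs whose demand peaks do not coincide (traces are hypothetical).
vm1 = [10, 80, 20, 10]   # CPU demand over four intervals
vm2 = [70, 10, 20, 60]

# Provisioning each VM for its own peak:
separate = max(vm1) + max(vm2)
# Joint provisioning for the peak of the aggregate demand:
joint = max(a + b for a, b in zip(vm1, vm2))

print(separate, joint)  # 150 90
```

Here joint provisioning needs 90 capacity units instead of 150, because vm1's peak falls in a valley of vm2 and vice versa.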
In [19], the authors introduce a novel technique called Backward Speculative Placement (BSP) that projects the past demand behavior of a VM onto a candidate target host as a basis for handling deployment requests and reallocation decisions; past VM demands are collected for future prediction. The authors also restrict the maximum number of migrations. The optimization component is defined such that the difference between the most and the least loaded hosts is minimized. However, the only resource BSP considers is CPU.
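A toy sketch of the BSP idea (CPU only, as in BSP; all hosts, loads and demand values are hypothetical): project the VM's past average demand onto each candidate host and keep the placement that minimizes the spread between the most and least loaded hosts.

```python
# Simplified sketch of Backward Speculative Placement (CPU only).
def place(vm_past_demand, host_loads):
    """Pick the host whose projected load minimizes the max-min load spread."""
    avg = sum(vm_past_demand) / len(vm_past_demand)  # projected future demand
    best_host, best_spread = None, float("inf")
    for h, load in host_loads.items():
        projected = dict(host_loads)
        projected[h] = load + avg  # speculatively add the VM to host h
        spread = max(projected.values()) - min(projected.values())
        if spread < best_spread:
            best_host, best_spread = h, spread
    return best_host

hosts = {"h1": 0.6, "h2": 0.2, "h3": 0.4}
print(place([0.3, 0.1, 0.2], hosts))  # h2: filling the least-loaded host evens out the spread
```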
In [20], the authors present an application-centric, energy-aware strategy for VM allocation that considers energy consumption, performance (application response time) and SLA violations. They evaluate the optimal combination of applications of different types (CPU-, memory- and I/O-intensive) through VM consolidation, so as to maximize resource utilization and minimize energy consumption. However, this work does not discuss in much detail how multiple resource types are handled when dealing with resource over-provisioning and under-provisioning. VM migration is not considered either.
In [21], the authors analyze the key challenges of providing cloud services while meeting SLA and capacity constraints. VM migration moves a VM from one node to another; it makes it possible, on the one hand, to save as much energy as possible and, on the other hand, to keep VM performance at its expected level. However, based on the analysis of VM migration impact in [22], even though live migration has recently been introduced, for instance in OpenStack, "no visible downtime and no transaction loss" remains an ideal goal. The number of migrations should be controlled and minimized, as migration affects the performance of the other VMs running on the source and destination hosts, with an impact mainly proportional to the amount of memory allocated to the migrated VM.
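Since migration impact is mainly proportional to the VM's memory, a first-order estimate of live-migration duration is memory size divided by available network bandwidth. The sketch below is a rough simplification (real pre-copy migration also retransmits pages dirtied during the copy, approximated here by a hypothetical dirty_factor), and the numbers are illustrative.

```python
def migration_time_s(mem_mb, bandwidth_mb_s, dirty_factor=1.0):
    """Rough live-migration duration estimate: memory / bandwidth.

    dirty_factor > 1 approximates pages re-sent because they were
    dirtied during pre-copy rounds (a hypothetical simplification).
    """
    return mem_mb * dirty_factor / bandwidth_mb_s

print(migration_time_s(4096, 100))        # ~41 s for a 4 GB VM over ~100 MB/s
print(migration_time_s(4096, 100, 1.5))   # ~61 s with 50% retransmission
```

Even this crude model shows why the number of migrations must be bounded: each one keeps the network and both hosts busy for a time that grows linearly with VM memory.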
In [23], the authors present a dynamic resource allocation solution implemented in OpenStack; the objectives of the work are balancing load, minimizing power consumption and differentiating among classes of cloud services. Finally, the trade-off between these objectives and the cost of VM live migration is discussed.
In [24] [25] [26], the authors present a comprehensive framework and solution (algorithms) for handling the energy concern in VM allocation and VM consolidation in data centers. The proposed approach is to be implemented in OpenStack.
In [27], the authors present a dynamic server consolidation and VM live-migration algorithm for managing SLA violations.
In [28], the authors present a VM consolidation strategy implemented on an OpenStack cloud that takes SLA constraints into account.
In [29], the authors present a power consumption evaluation of the effects of live migration of VMs. The results show that the power overhead of migration is much smaller when a consolidation strategy is employed than under regular deployment without consolidation. These results are based on a typical physical server whose power follows a linear model of the CPU utilization percentage.
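The linear power model mentioned above can be written as P(u) = P_idle + (P_max − P_idle)·u, where u is CPU utilization in [0, 1]. The wattages in the sketch below are placeholders, not measurements from [29].

```python
def power_watts(util, p_idle=150.0, p_max=250.0):
    """Linear server power model: idle power plus a utilization-proportional part.

    util is CPU utilization in [0, 1]; p_idle and p_max are hypothetical wattages.
    """
    assert 0.0 <= util <= 1.0
    return p_idle + (p_max - p_idle) * util

print(power_watts(0.0))   # 150.0 (idle)
print(power_watts(0.5))   # 200.0
print(power_watts(1.0))   # 250.0 (fully loaded)
```

The model makes the consolidation argument concrete: because an idle server still draws p_idle, packing load onto fewer servers and switching the rest off saves more energy than spreading the same load thinly.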
We now define the utility functions, namely the targets we want to achieve.
A. Power and Energy Cost
We consider energy-aware VM scheduling in IaaS clouds. As shown in the following figure, a power meter provides the actual power consumed by the System Under Test (SUT) [1].
Fig. 3. Energy collection
B. Optimal Resource Utilization
C. Customers Satisfaction
D. QoS and Other Operation Costs
E. Profit
What values can we get? Where do we get them? How do we get them?
• Monitoring information from libvirt, cgroups and OpenTSDB.
• Service history information, if any.
• Infrastructure configuration information (OpenStack can retrieve it).
• Service requests.
• Energy consumption.
More state-of-the-art (SotA) analysis is required. The next step is to review the following papers: [6] [7] [8] [9] [10] [11] [12] [13] [14] [15] [16] [17].
[1] Qingwen Chen “Towards energy-aware VM scheduling in IaaS clouds
through empirical studies”.
[2] (2012) Filtering and weighting in Openstack, [Online]. Available: scheduler.html
[3] (2011) Proposals in Openstack, [Online]. Available: http://www.mail-
[4] (2012) OCCI and Openstack, [Online]. Available: http://occi-
[5] (2012) Match-making in Opennebula, [Online]. Available:
[6] C. Abhishek, G. Pawan, S. Prashant “Quantifying the benefits of resource
multiplexing in on-demand data centers”.
[7] W. Guiyi, V. Athanasios, Z. Yao, X. Naixue “A game-theoretic method
of fair resource allocation for cloud computing services”.
[8] C. Sivadon “Optimal Virtual Machine Placement across Multiple Cloud Providers”.
[9] A. Janki “Negotiation for resource allocation in IaaS Cloud”.
[10] A. Marcoss “Evaluating the cost-benefit of using cloud computing to
extend the capacity of clusters”.
[11] A. Bo “Automated negotiation with decommitment for dynamic resource
allocation in cloud computing”.
[12] L. Ignacio “On the management of virtual machines for cloud infrastructures”.
[13] J. Pramod “Usage management in cloud computing”.
[14] G. Hadi “Multi-dimensional SLA-based resource allocation for multi-
tier cloud computing systems”.
[15] S. Shekhar “Energy aware consolidation for cloud computing”.
[16] G. Hadi “Maximizing Profit in Cloud Computing System via Resource Allocation”.
[17] L. Jim “Performance Model Driven QoS Guarantees and Optimization
in Clouds”.
[18] X. Q. Meng “Efficient Resource Provisioning in Compute Clouds via
VM Multiplexing” in Proc. ICAC 2010.
[19] N. M. Calcavecchia “VM Placement Strategies for Cloud Scenarios” in Proc. IEEE 5th International Conference on Cloud Computing.
[20] H. Viswanathan, “Energy-Aware Application-Centric VM Allocation for
HPC Workloads”
[21] S. Das, “Faster and Efficient VM Migrations for Improving SLA and
ROI in Cloud Infrastructures”
[22] F. Hermenier, “Entropy: a Consolidation Manager for Clusters”
[23] F. Wuhib, “Dynamic Resource Allocation with Management Objectives
Implementation for an OpenStack Cloud”
[24] A. Beloglazov, “energy-aware resource allocation heuristics for efficient
management of data centers for cloud computing”
[25] A. Beloglazov, “OpenStack Neat: A Framework for Dynamic Consoli-
dation of Virtual Machines in OpenStack Clouds A Blueprint”
[26] A. Beloglazov, “Optimal Online Deterministic Algorithms and Adaptive
Heuristics for Energy and Performance Efficient Dynamic Consolidation
of Virtual Machines in Cloud Data Centers”
[27] N. Bobroff, “Dynamic Placement of VMs for Managing SLA Violations”
[28] A. Corradi, “VM consolidation: A real case based on OpenStack Cloud”
FGCS 2012.
[29] Q. Huang, “Power Consumption of VM Live-Migration in Clouds” CMC