
TABLE OF CONTENTS

TITLE Page No.


Abstract i
Chapter 1 - Introduction 1
1.1. Introduction to the Product 1
1.2. Introduction to VMP – Risk Assessment 5
1.3. Scope & Opportunity 8
1.4. Proposed System 9
1.5. Organization of the Thesis 10

Chapter 2 - Literature Review


2.1. The XaaS Service Models 12
2.2. Cloud Computing Scenarios & Roles 13
2.3. Virtualization 15
2.4. VM Placement Approaches 18
2.5. Optimization Techniques 19
Chapter 3 - Design and Implementation
3.1. Virtual Machine Scheduling 24
3.2. Objectives & Considerations 26
3.3. State of the Art 28
3.4. Main Challenges, Models 31
3.5. The Proposed Algorithm 32
3.6. Clustering Phase 37
3.7. UML Diagrams 38
3.7.1 Use Case Diagram 42
3.7.2 Sequence Diagram 43
3.7.3 Class Diagram 44
3.7.4 ER Diagram 46
3.7.5 Data Dictionary 47
Chapter 4 – Testing 48
Chapter 5 - Results 50
Chapter 6 – Conclusion 53
Chapter 7 – Future Work 54
References 55

LIST OF FIGURES
Name of the Figure Page No.
Figure 1.1: Cloud Computing Architecture 1
Figure 1.2: Dynamic Resource Provisioning by VMP Architecture 3
Figure 1.3: Problem Under Provision & Over Provision 7
Figure 1.4: OVMP Algorithm Proposed 9
Figure 2: Optimization Techniques 20
Figure 3.1: Activity Client 32
Figure 3.2: Activity Server 33
Figure 3.3: Use Case 42
Figure 3.4: Sequence 43
Figure 3.5: Class 44
Figure 3.6: Components of Client 45
Figure 3.7: Components of Server 46
Figure 3.8: ER Diagrams 46

Table 1: Test Cases 49


Table 2: Characteristics of different Cluster nodes 50
Dynamic Resource Provisioning Virtual Machine
Based SIP Servers
ABSTRACT:
Resource provisioning is one of the common management tasks in modern
cloud computing. In a joint virtual machine provisioning approach,
multiple VMs are consolidated and provisioned together based on an
estimate of their aggregate capacity needs, rather than each VM being
provisioned individually. Such VM multiplexing potentially leads to
significant capacity savings compared to individual-VM based
provisioning: the savings are realized by packing VMs more densely onto
hardware resources without sacrificing application performance
commitments. To utilize virtual machine processing resources in the
cloud efficiently, we propose an optimal virtual machine placement
(OVMP) algorithm. This algorithm can minimize the cost spent in each
plan for hosting virtual machines in a multiple cloud provider
environment under future demand and price uncertainty. The OVMP
algorithm makes its decision based on the optimal solution of a
stochastic integer program (SIP) for renting resources from cloud
providers. The performance of the OVMP algorithm is evaluated by
numerical studies and simulation. The results clearly show that the
proposed OVMP algorithm can minimize users' budgets, and that it can be
applied to provision resources in emerging cloud computing
environments.

Chapter 1

INTRODUCTION

The cloud computing model enables ubiquitous, convenient, on-demand
network access to a shared pool of configurable computing resources
(e.g., networks, servers, storage, applications, and services) that
can be rapidly provisioned and released with minimal management effort
or service provider interaction.

Figure 1.1: Cloud Computing Architecture

Figure 1.1 shows the cloud model, which promotes availability and is
composed of five essential characteristics, three service models and
four deployment models.

Cloud computing infrastructure and services are available on an
"on-need" basis. The computing infrastructure includes hard disks,
development platforms, computing power, databases, or complete software
applications. To access these resources from the cloud vendors,
organizations do not need to make any large-scale capital expenditures
of their own. An organization pays only for as much of the computing
infrastructure as it uses from the vendors who host the services.
The billing model of cloud computing is similar to electricity billing,
which is done on the basis of usage. The vendor provides the cloud
computing services, and organizations are the users of those
services.

By provision of shared resources as metered on-demand services


over networks, Cloud Computing is emerging as a promising paradigm
for providing configurable computing resources (i.e., networks,
servers, storage, applications, and services) that can be rapidly
allocated and released with minimal management effort. Cloud end-
users (e.g., service consumers and developers of cloud services) can
access various services from cloud providers such as Amazon, Google
and SalesForce. They are relieved from the burden of IT maintenance
and administration and their total IT cost is expected to decrease. From
the perspective of a cloud provider or an agent, however, resource
allocation and scheduling become challenging issues. This may be due
to the scale of resources to manage, and the dynamic nature of service
behavior (with rapid demands for capacity variations and resource
mobility), as well as the heterogeneity of cloud systems. As such,
finding optimal placement schemes for resources, and making resource
reconfigurations in response to the changes of the environment are
difficult.

There are a multitude of parameters and considerations (e.g.,
performance, cost, locality, reliability and availability) in the decision
of where, when and how to place and reallocate virtualized resources
in cloud environments. Some of the considerations are aligned with
one another while others may be contradictory.
This work investigates challenges involved in the problem of VM
scheduling in cloud environments, and tackles these challenges using
approaches ranging from combinatorial optimization techniques and
mathematical modeling to simple heuristic methods.

Note that the term scheduling in the context of this thesis is referred
to as the initial placement of VMs and the readjustment of placement
over time.

Figure 1.2: Dynamic resource provisioning by virtual machine placement architecture

Scientific contributions of this thesis include modeling for dynamic
cloud scheduling via VM migration in multi-cloud environments, cost-
optimal VM placement across multiple clouds under dynamic pricing
schemes, modeling and placement of cloud services with internal
structure, as well as optimizing VM placement within data centers for
predictable and time-constrained load peaks.
Virtualization is the creation of a virtual version of any of the
resources like a server, an operating system, a storage device or network
resources. It broadly describes the separation of a resource or request for a
service from the physical entity.

Virtual machine placement is the process of mapping virtual
machines to physical machines, i.e., of selecting a suitable host for
each virtual machine. The process involves assessing the resource
requirements, the virtual machine's hardware, the anticipated usage of
resources, and the placement goal. The dynamic placement goal can
either be maximizing the usage of available resources, or saving power
by being able to shut down some servers when their resources are not
utilized at the computational time.
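The placement process described above can be sketched in code. The following is a minimal illustration, not the algorithm proposed in this thesis: the `Host` record, its capacity fields and the preference for already-active hosts are all assumptions chosen to show the power-saving placement goal.

```python
# A minimal sketch of VM-to-host placement (hypothetical Host/VM records).
# The goal here is power saving: pack VMs onto already-active hosts first,
# so that unused hosts can stay idle and be shut down.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_free: int          # free CPU units
    ram_free: int          # free RAM (GB)
    vms: list = field(default_factory=list)

def place(vms, hosts):
    """First-fit placement: prefer hosts that already run VMs."""
    placement = {}
    for vm_name, cpu, ram in vms:
        # Active hosts sort first, so idle hosts stay empty and can sleep.
        candidates = sorted(hosts, key=lambda h: len(h.vms) == 0)
        for h in candidates:
            if h.cpu_free >= cpu and h.ram_free >= ram:
                h.cpu_free -= cpu
                h.ram_free -= ram
                h.vms.append(vm_name)
                placement[vm_name] = h.name
                break
    return placement

hosts = [Host("pm1", 8, 16), Host("pm2", 8, 16)]
vms = [("vm1", 4, 8), ("vm2", 2, 4), ("vm3", 2, 4)]
print(place(vms, hosts))   # → {'vm1': 'pm1', 'vm2': 'pm1', 'vm3': 'pm1'}
```

In this toy run all three VMs fit on `pm1`, so `pm2` never becomes active and could be powered down.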

Infrastructure-as-a-Service (IaaS) is a computational service model
that provides virtual server instances, which consumers can start,
stop, access and configure, together with their storage. In the
enterprise, cloud computing allows a company to pay for only as much
capacity as is needed to utilize the services, and to bring more online
as soon as required, because of this pay-for-what-you-use model. It is
a large-scale distributed computing paradigm in which a pool of
computing resources is available to users (called cloud consumers) via
the Internet. Computing resources such as software, processing power,
storage and network bandwidth are presented to cloud consumers as
accessible public utility services. In the IaaS model, virtualization
technologies can be used to provide resources to cloud consumers.
Consumers can specify the required software stack, e.g., operating
systems and applications, and then package them all together into
Virtual Machines (VMs).

1.2. Introduction to Virtual Machine Placement (VMP) – Risk Assessment

Allocation of Resources in Cloud Computing


In cloud computing, resource provisioning is an important issue, as it
dictates how resources are allotted to a cloud application such that
the service level agreements (SLAs) of applications are met. One
approach bases resource provisioning on an FCFS scheduling algorithm
and analyses the response-time distribution. This in turn is used to
develop a heuristic algorithm that defines an allocation scheme and
requires a small number of servers. The effectiveness of the algorithm
was evaluated over a range of operating conditions. A new modification,
called randomly dependent priority, was found to have the best
performance in terms of the required number of servers. For allocation
of resources in a cloud environment, virtual instances are offered by
Amazon EC2; each instance is attributed with different capacities in
terms of RAM size, CPU capacity and I/O bandwidth. These declared
capacities are the nominal details of virtual instances on EC2. To
address fault-tolerance, EC2 distributes its virtual instances across
several data centers, each of which is called an availability zone. To
execute in different data centers, two virtual instances are run in
different zones. There are six availability zones: four located in the
U.S. and the other two in Europe. Similar performance characteristics
appear on the different types of virtual instances as well.

Cloud computing provides services in a pay-as-you-go manner over the
Internet. Therefore, service providers can simply rent resources from
infrastructure providers without making any advance investment in
infrastructure. Customers can choose between two payment plans offered
by cloud providers, namely the reservation plan (pre-paid) and the
on-demand plan (pay per use). The price of resources in the on-demand
plan is more expensive than in the reservation plan. This can lead to
an under-provisioning problem when the quantity of reserved resources
is unable to meet the demand fully. If a customer reserves more
resources than required, then an over-provisioning problem occurs,
which cannot be neglected either, since the reserved resources will be
underutilized, again increasing the cost; this is referred to as the
oversubscribed cost. It is therefore necessary to minimize both
on-demand and oversubscribed costs. The cost of maintenance by
infrastructure providers and the energy consumption increase rapidly.
The energy usage can be minimized using virtualization technology,
which allows using fewer physical servers with much higher per-server
utilization. However, this brings new challenges for the management of
VMs. So, VMs in cloud data centers must be provisioned and managed very
productively, paving the way for optimizing the energy-performance
trade-off.

This study mainly considers IaaS, as it has been recognized as the most
promising model. Such an IaaS is represented by a large-scale data
center comprising a large number of heterogeneous physical nodes. Each
node is characterized by its CPU performance, disk storage, amount of
RAM and network bandwidth. The proposed architecture is adopted to
enhance the efficient management of workloads by assigning them in a
proper manner. Stochastic programming has been applied to many
problems, including resource provisioning; however, to the best of our
understanding, it has never been studied for energy-efficient VM
consolidation, which we also consider in our work. The dynamic VM
consolidation problem can be divided into four sub-problems:

1. Checking whether a host is under-loaded.

2. Checking whether a host is overloaded.
3. A selection policy for migrating VMs from an overloaded host.
4. VM placement, for placing VMs, on allocation or migration, onto
other active or reactivated hosts.

Figure 1.3: Problems in Under provisioning and Over provisioning

1.3. Scope & Opportunity

The following are the main set of possibilities which lay the foundation
for taking up this project:

Dynamic assignment of physical resources to virtual machines is
required to achieve energy efficiency and improve the quality of
service related to SLAs. Dynamic VM consolidation, as a vital control
process, is an efficient management mechanism for improving energy
efficiency. The dynamic VM consolidation problem has four sub-problems.

1. Determine when a host is considered to be overloaded (host
overloading detection). Live migration is required to migrate one or
more VMs from the overloaded host.

2. Determine when a host is considered to be under-loaded. Here the
host is ready to be switched to sleep mode, allowing migration of all
its VMs.

3. Determine which VMs must be selected for migration from an
overloaded host.

4. Determine which hosts must be selected to place the migrated VMs.
The objective of this procedure is to perform placement in a way that
optimizes the energy-performance trade-off inside the cloud data
center.

Each of the aforementioned sub-problems must operate in a well-
optimized way.
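The four sub-problems above can be illustrated with one consolidation pass. This is only a sketch under assumed structures: the utilization thresholds, the dictionary-based host records, and the smallest-VM-first selection policy are illustrative choices, not the policies proposed in this thesis.

```python
# A minimal sketch of one dynamic VM consolidation pass covering the four
# sub-problems. Thresholds, host records and the selection policy are
# illustrative assumptions.

OVERLOAD, UNDERLOAD = 0.8, 0.2   # assumed CPU utilization thresholds

def utilization(host):
    return sum(vm["cpu"] for vm in host["vms"]) / host["capacity"]

def consolidate(hosts):
    to_migrate = []
    for h in hosts:
        u = utilization(h)
        if u > OVERLOAD:                       # 1. overloading detection
            vm = min(h["vms"], key=lambda v: v["cpu"])  # 3. pick smallest VM
            h["vms"].remove(vm)
            to_migrate.append(vm)
        elif u < UNDERLOAD and h["vms"]:       # 2. under-load detection
            to_migrate.extend(h["vms"])        # evacuate; host can sleep
            h["vms"] = []
            h["asleep"] = True
    for vm in to_migrate:                      # 4. placement of migrated VMs
        for h in hosts:
            if (not h.get("asleep")
                    and utilization(h) + vm["cpu"] / h["capacity"] <= OVERLOAD):
                h["vms"].append(vm)
                break
        else:                                  # no active host fits:
            for h in hosts:                    # reactivate a sleeping host
                if h.get("asleep"):
                    h["asleep"] = False
                    h["vms"].append(vm)
                    break

hosts = [
    {"capacity": 10, "vms": [{"cpu": 6}, {"cpu": 3}]},   # 0.9: overloaded
    {"capacity": 10, "vms": [{"cpu": 1}]},               # 0.1: under-loaded
    {"capacity": 10, "vms": [{"cpu": 4}]},               # 0.4: fine
]
consolidate(hosts)
print([len(h["vms"]) for h in hosts], hosts[1].get("asleep"))  # → [2, 0, 2] True
```

After the pass, the overloaded host sheds one VM, the under-loaded host is evacuated and put to sleep, and the migrated VMs are re-placed on the remaining active hosts.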

1.4. Proposed System

1. Data access via VM multiplexing is a complex task in reliable
cloud resource provisioning applications. An efficient algorithm is
required to provide efficient data access in cloud computing.
2. So in this thesis we propose to develop an optimal virtual machine
placement (OVMP) algorithm.
3. This algorithm can minimize the cost spent in each plan for
hosting virtual machines in a multiple cloud provider environment
under future demand and price uncertainty.
4. The OVMP algorithm makes its decision based on the optimal
solution of stochastic integer programming (SIP) for renting
resources from cloud providers.
5. The results clearly show that the proposed OVMP algorithm can
minimize users' budgets. This algorithm can be applied to
provision resources in emerging cloud computing environments.


Figure 1.4: Proposed OVMP algorithm

This study explores optimal methods considering both the under-
provisioning and over-provisioning problems of resource management,
and also aims to reduce the overall energy consumption. Under the IaaS
model, the OVMP algorithm can host a certain number of VMs, taking
into consideration the uncertainty of future demands and resource
prices when trading off on-demand and oversubscribed costs. The result
of the OVMP algorithm is obtained as the optimal solution of a
stochastic integer programming (SIP) formulation with two-stage
recourse. The performance of OVMP is evaluated; the results show that
the OVMP algorithm can reduce the total cost while meeting the
requirements of both providers and customers. The proposed algorithm
reduces energy consumption and the number of SLA violations, resulting
in maximization of overall performance.
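The two-stage structure described above can be shown with a toy numeric example. The prices, demand scenarios and probabilities below are invented for illustration, and exhaustive search stands in for a real SIP solver: the first stage fixes the number of reserved VMs before demand is known, and the recourse stage buys on-demand VMs for any shortfall.

```python
# Illustrative two-stage decision in the spirit of OVMP (all numbers are
# assumptions): first stage reserves VMs, recourse covers shortfall on demand.

R_PRICE = 2.0    # assumed cost per reserved VM (pre-paid)
O_PRICE = 5.0    # assumed cost per on-demand VM (pay per use)

# demand scenarios with probabilities (the "uncertainty")
SCENARIOS = [(10, 0.3), (20, 0.5), (30, 0.2)]

def expected_cost(reserved):
    cost = R_PRICE * reserved                # first-stage (reservation) cost
    for demand, prob in SCENARIOS:
        shortfall = max(0, demand - reserved)
        cost += prob * O_PRICE * shortfall   # recourse (on-demand) cost
    return cost

# Exhaustive search stands in for the SIP solver in this toy setting.
best = min(range(0, 31), key=expected_cost)
print(best, expected_cost(best))   # → 20 50.0
```

Reserving 20 VMs is optimal here: reserving fewer shifts too much load onto expensive on-demand instances, while reserving 30 pays for capacity that is rarely used, which mirrors the under-provisioning/over-provisioning trade-off OVMP balances.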

1.5. Organization of the Thesis:

The upcoming chapters of the thesis are organized in the following way.
Chapter 1 introduces the product and the project. In Chapter 2 we
present a literature survey. Chapter 3 provides an introduction to the
proposed algorithm and its design. Chapter 4 covers testing, and
Chapter 5 shows the experimental results. Chapter 6 summarizes the
study and concludes the thesis, and Chapter 7 outlines future work,
followed by the list of references.

Chapter 2

LITERATURE REVIEW

Cloud Computing provides a paradigm shift following the shift


from mainframe to client-server architecture in the early 1980s. It is
a new paradigm in which computing is delivered as a service rather
than a product, whereby shared resources, software, and information
are provided to consumers as a utility over networks.

The vision of this paradigm can be traced back to 1969 when


Leonard Kleinrock, one of the chief scientists of the original
Advanced Research Projects Agency Network (ARPANET) project,
which preceded the Internet, stated at the time of ARPANET’s
development:
“as of now, computer networks are still in their infancy, but as they
grow up and become sophisticated, we will probably see the spread
of ‘computer utilities’ which, like present electric and telephone
utilities, will service individual homes and offices across the
country.”

Over the past decades, new computing paradigms (e.g., Grid


Computing, P2P Computing, and Cloud Computing) promising to
deliver this vision of computing utilities have been proposed and
adopted. Of all these paradigms, the two most frequently mentioned
ones, with differing areas of focus, are Grid Computing and Cloud
Computing. Grids are designed to support sharing of pooled
resources, usually used for solving problems that may require
thousands of processor cores or hundreds of terabytes of storage,
while cloud technologies are driven by economies of scale, focusing
on delivering resource capacities to the public in the form of a
utility and enabling access to leased resources (e.g., computation
power, storage capacity, and software services) at prices comparable
to in-house hosting. The distinctions between these two paradigms
are sometimes not clear, as they share the same vision. An in-depth
comparison between grids and clouds is beyond the scope of this
thesis, but a number of valuable works on the subject are available.

2.1. The XaaS Service Models

Commonly associated with cloud computing are the following


service models, differing in the service offered to the customers:

I. Software as a Service (SaaS)

In the SaaS model, software applications are delivered as services


that execute on infrastructure managed by the SaaS vendor itself
or a third-party infrastructure provider. Consumers are enabled to
access services over various clients such as web browsers and
programming interfaces, and are typically charged on a
subscription basis. The implementation and the underlying cloud
infrastructure where the service is hosted are transparent to
consumers.

II. Platform as a Service (PaaS)

In the PaaS model, cloud providers deliver a computing platform


and/or solution stack typically including operating system,
programming language execution environment, database, and
web server. Application developers can develop and run their
software on a cloud platform without having to manage or control
the underlying hardware and software layers, including network,
servers, operating systems, or storage, but maintain the control
over the deployed applications and possibly configuration
settings for the application-hosting environment.

III. Infrastructure as a Service (IaaS)

In the IaaS model, computing resources such as storage, network,


and computation resources are provisioned as services.
Consumers are able to deploy and run arbitrary software, which
can include operating systems and applications. Consumers do
not manage or control the underlying physical infrastructure but
have to control their own virtual infrastructures typically
constructed by virtual machines hosted by the IaaS vendor. This
thesis work is mainly focusing on the IaaS model, although it
may be generalized also to apply to the other models.

2.2. Cloud Computing Scenarios and Roles

Based on the classification of cloud services into SaaS, PaaS, and


IaaS, three main stakeholders in a cloud provisioning scenario can
be identified:

I. Infrastructure Providers (IPs) provision infrastructure resources


such as virtual instances, networks, and storage to consumers
usually by utilizing hardware virtualization technologies. In the
IaaS model, a consumer rents resources from an infrastructure
provider or multiple infrastructure providers, and establishes its
own virtualized infrastructure, instead of maintaining an
infrastructure with dedicated hardware. There are numerous
infrastructure providers on the market, such as Amazon Elastic

Compute Cloud (EC2) [3], GoGrid, and Rackspace. To simplify
the application delivery for consumers, some infrastructure
providers go a step further with the PaaS model, i.e., in addition
to supporting application hosting environments, these
infrastructure providers also provide development infrastructure
including programming environment, tools, configuration
management, etc. Some notable providers of this type include
Google App Engine, Salesforce.com, and AppFog. In academia,
some on-going projects such as ConPaaS and 4CaaSt are
developing new PaaS frameworks that enable flexible
deployment and management of cloud-based services and
applications.

II. Service Providers (SPs) use either their own resources (taking
both the SP and IP roles) or resources leased from one or
multiple IPs to deliver end-user services to their consumers. An
SP can be a telco service provider or an internet service provider
(e.g., LinkedIn). These services can potentially be developed using
PaaS tools, as mentioned previously. In
particular, when cloud resources are leased from external IPs,
SPs are not in charge of maintaining the underlying hardware
infrastructures. Without having direct control over the low-
level hardware resources, SPs can use performance metrics
(e.g., response time) to optimize their applications by scaling
their rented resources from IPs, providing required Quality of
Service (QoS) to the end users.

III. Cloud End Users who are the consumers of the services
offered by SPs and usually have no concerns on where and
how the services are hosted.

2.3. Virtualization
Virtualization is a technology that separates computing functions and
implementations from physical hardware. Early related research dates
back to the 1960s and the joint work of IBM T. J. Watson and MIT on
the M44/44X Project. Now virtualization has become the foundation of
Cloud Computing, since it enables isolation between hardware and
software, between users, and between processes and resources. These
isolation problems have not been well solved by traditional operating
systems. With virtualization, software capable of execution on the raw
hardware can be run in a virtual environment. Depending on the layer
where the virtualization occurs, two major categories of virtualization
can be identified (as illustrated in Figure 1):
Hypervisor- based Virtualization. This technology is based on a
layer of software (i.e., the hypervisor) that manages the resources of
physical hosts and provides the necessary services for the VMs to run.
Instead of direct access to the underlying hardware layer, all VMs
request resources from the hypervisor that is in charge of resource
allocation and scheduling for VMs. There are two major types of
implementations of this kind of virtualization, briefly described as
follows.

I. Full virtualization fully emulates system hardware, and thus does


not require changes to the operating system (OS) or applications.
Virtualization is done transparently at the hardware level of the
system. Well known implementations include Microsoft Virtual
PC, VMware Workstation, VirtualBox, and KVM.

II. Paravirtualization requires changes to the OS and possibly the


applications to take full advantage of optimizations of the
virtualized hardware layer, and thus achieves better performance
than Full Virtualization. As a well-established example, Xen
offers a Paravirtualization solution.

In environments with hypervisor-based virtualization, Cloud


services can be encapsulated in virtual appliances (VAs), and
deployed by instantiating virtual machines with their virtual
appliances. Moreover, since the underlying hardware is emulated,
multiple different operating systems are usually allowed to run in
virtual machines atop the hypervisor. This new type of service
deployment provides a direct route for traditional on-premise
applications to be rapidly redeployed in Software as a Service
(SaaS) manner for SPs. By decoupling the infrastructure provider
possessing hardware (and usually operating system) from the
application stack provider, virtual appliances allow economies of
scale which is a great attraction for IT industries. This thesis work
is based on hypervisor-based virtualization. Throughout the thesis,
unless otherwise specified, the term virtualization refers to this
category.

2.4. VM Placement Approaches


Many different goals for VM placement have been considered in previous
works, including energy saving, cost reduction, load balancing, reduction
of SLA violations, network delays, congestion, and service downtime.
According to the goals of placement, VM placement approaches can be
generally divided into two types:
 Power-based approach: Determines an energy-efficient VM-PM
mapping based on resource utilization.
 QoS-based approach: Determines a VM-PM mapping using the
maximum fulfilment of quality of service requirements.
According to the type of optimization techniques used to obtain a VM-
PM mapping, the VM placement techniques can largely be categorized
into six categories:
 Heuristic Bin Packing: The VM placement problem is formulated
as vector bin packing. VMs are considered to be small items that
are tightly packed into the minimum number of bins, each
considered a PM. Several heuristic methods are developed to
approximate the optimal solution to this packing problem.
 Biology-based optimization: Several bio-inspired optimization
techniques such as the ant colony optimization method, the self-
adaptive particle swarm optimization method, and genetic
algorithms are applied to pack VMs into the smallest number of
PMs, given the current workload.
 Linear programming: The VM placement problem is constructed as
a linear programming problem which considers a number of
constraints derived from practical applications. LP-relaxation-
based methods are developed to solve the formulated model.
 Constraint programming: Some works present a resource management
framework, which includes a dynamic utility-based VM
provisioning manager and a dynamic VM placement manager, to
obtain a suitable VM-PM mapping. Both management tasks are
regarded as constraint satisfaction problems. More practical aspects
can be taken into consideration by extending the constraints in
these problems.
 Stochastic integer programming: Because the future demand of
VM for providing network services is uncertain, the stochastic
integer programming technique is used to predict a suitable VM-
PM mapping.
 Simulated annealing optimization: Some works propose a dynamic
runtime mapping framework that adopts a simulated annealing
optimization algorithm to map VMs onto a small set of PMs in
order to minimize power consumption without significant system
degradation.
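The heuristic bin-packing approach in the first category above can be sketched with the classic first-fit-decreasing rule. The capacities and demands below are illustrative, and a single CPU dimension stands in for the multi-dimensional (vector) packing of real VM placement.

```python
# First-fit-decreasing sketch of heuristic bin packing: VMs (items), sorted
# by CPU demand, are packed into as few PMs (bins) as possible.
# Capacities and demands are illustrative.

def first_fit_decreasing(vm_demands, pm_capacity):
    """Return a list of PMs, each a list of the VM demands packed onto it."""
    pms = []                      # each PM: [remaining_capacity, [vms...]]
    for d in sorted(vm_demands, reverse=True):
        for pm in pms:
            if pm[0] >= d:        # first PM with enough room
                pm[0] -= d
                pm[1].append(d)
                break
        else:
            pms.append([pm_capacity - d, [d]])   # open a new PM
    return [vms for _, vms in pms]

print(first_fit_decreasing([5, 7, 3, 2, 4, 6], pm_capacity=10))
# → [[7, 3], [6, 4], [5, 2]]
```

Here 27 units of demand are packed into three PMs of capacity 10, which is optimal for this instance; in general first-fit-decreasing only approximates the optimum, which is why it is called a heuristic.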

2.5. Optimization Techniques


In computer science and operations research, optimization refers
to the selection of the best element from a set of alternatives
with regard to some criteria. The two types of entities involved in
the cloud are the cloud service provider and the cloud consumer.
Cloud service
providers provision their resources on rental basis to cloud
consumers and cloud consumers submit their requests for
provisioning the resources. They both have their own motivations
when they become part of cloud environment. Consumers are
concerned with the performance of their applications, whereas
providers are more interested in efficient utilization of their
resources. Thus these Optimization Techniques can be classified
into two types: Static techniques and Dynamic techniques.

Following are some of the optimization criteria followed while
provisioning resources in cloud environment.

Figure 2.1. Optimization techniques

2.5.1. Static Provisioning

The cloud service provider allocates resources to the consumer in
advance; that is, the cloud consumer has to decide statically, before
using the resources, how much capacity he or she wants. Deterministic
approaches, integer programming, linear programming and deadline
provisioning algorithms show how resources are allocated statically
for deadline-based workflows. Here only the expected values of all the
parameters are considered. With static provisioning we may encounter
the over-provisioning or under-provisioning problem.

2.5.2. Dynamic provisioning

The cloud service provider allocates resources to the consumer as
needed and when required, and the consumer pays per use. This is also
called the pay-as-you-go model. Here, genetic algorithms, the
stochastic approach, approximate dynamic programming, Benders
decomposition and sample-average approximation are used for
provisioning resources dynamically.

I. Genetic algorithm

Genetic algorithms are commonly used to generate high-quality
solutions to optimization and search problems by relying on bio-
inspired operators such as mutation, crossover and selection. However,
genetic algorithms do not scale well with complexity, and operating on
dynamic data sets is difficult.
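As a concrete illustration of these operators applied to VM placement, the toy genetic algorithm below evolves an assignment of VMs to PMs. All parameters (population size, mutation rate, demands, capacities, fitness penalty) are illustrative assumptions, not values from this thesis.

```python
# A toy genetic algorithm sketch for VM placement: a chromosome assigns each
# VM to a PM index; fitness counts how few PMs are used while heavily
# penalizing capacity violations. All parameters are illustrative.

import random
random.seed(0)

VM_CPU = [5, 7, 3, 2, 4, 6]   # assumed VM demands
PM_CAP = 10
N_PMS = 6

def fitness(chrom):
    load = [0] * N_PMS
    for vm, pm in zip(VM_CPU, chrom):
        load[pm] += vm
    used = sum(1 for l in load if l > 0)
    overflow = sum(max(0, l - PM_CAP) for l in load)
    return used + 10 * overflow            # lower is better

def evolve(pop_size=40, generations=200):
    pop = [[random.randrange(N_PMS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]              # selection (elitist)
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_CPU))
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                child[random.randrange(len(child))] = random.randrange(N_PMS)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(fitness(best))   # a feasible packing; 3 PMs is optimal for this instance
```

Even on this tiny instance the machinery is visible: selection keeps the densest packings, crossover recombines partial packings, and mutation keeps the search from stagnating.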

II. Stochastic approach

In this section, stochastic programming with multistage recourse is
presented as the core formulation. First, the original form of the
stochastic
integer programming formulation is derived. Then, the formulation is
transformed into the deterministic equivalent formulation (DEF) which
can be solved by traditional optimization solver software.

III. Approximate dynamic programming

Approximate Dynamic Programming (ADP) is a powerful technique to
solve large-scale discrete-time multistage stochastic control
processes, i.e., complex Markov Decision Processes (MDPs). These
processes consist of a state space S; at each time step t, the system
is in a particular state S_t ∈ S, from which we can take a decision
x_t from the feasible set X_t. This decision results in rewards or
costs, typically given by C_t(S_t, x_t), and brings us to a new state
S_{t+1} with probability P(S_{t+1} | S_t, x_t), i.e., the next state
is conditionally independent of all previous states and actions.
Therefore, the decision not only determines the direct costs, but also
the environment within which future decisions take place, and hence
influences the future costs. The goal is to find a policy. A policy
π ∈ Π can be seen as a decision function X^π_t(S_t) that returns a
decision x_t ∈ X_t for all states S_t ∈ S, with Π being the set of
potential decision functions or policies. The problem of finding the
best policy can be written as (1):

\min_{\pi \in \Pi} \; \mathbb{E}^{\pi}\left[ \sum_{t=0}^{T} \gamma^{t}\, C_t\big(S_t, X^{\pi}_{t}(S_t)\big) \right] \qquad (1)

where γ is a discount factor and T denotes the planning horizon,
which could be infinite.
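For a small finite-horizon MDP, the objective in (1) can be solved exactly by backward induction, which is the baseline that ADP approximates. The two-state chain below (a host that is "normal" or "overloaded", with a "migrate" action) and all costs and probabilities are invented for illustration.

```python
# Backward induction on a tiny finite-horizon MDP matching the objective in
# (1). States, actions, costs and transition probabilities are illustrative.

GAMMA = 0.9
T = 3
STATES = [0, 1]          # e.g., host "normal" / "overloaded"
ACTIONS = [0, 1]         # e.g., "do nothing" / "migrate a VM"

def cost(s, x):          # C_t(S_t, x_t): migrating costs 1, overload costs 5
    return 1.0 * x + 5.0 * (s == 1 and x == 0)

def p_next(s, x):        # P(S_{t+1} | S_t, x_t)
    if x == 1:
        return {0: 0.9, 1: 0.1}     # migration usually relieves overload
    return {0: 0.6, 1: 0.4} if s == 0 else {0: 0.2, 1: 0.8}

def backward_induction():
    V = {s: 0.0 for s in STATES}    # terminal value at horizon T
    policy = {}
    for t in reversed(range(T)):
        newV = {}
        for s in STATES:
            q = {x: cost(s, x)
                    + GAMMA * sum(p * V[s2] for s2, p in p_next(s, x).items())
                 for x in ACTIONS}
            x_best = min(q, key=q.get)
            policy[(t, s)] = x_best
            newV[s] = q[x_best]
        V = newV
    return V, policy

V, policy = backward_induction()
print(policy[(0, 1)])    # → 1: in the overloaded state, migrating is optimal
```

Exact backward induction enumerates every state at every step; ADP replaces the exact value function V with an approximation precisely because this enumeration is infeasible for large state spaces.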

IV. Benders decomposition

The goal of this algorithm is to break down the optimization problem
into multiple smaller problems which can be solved independently and
in parallel. The Benders decomposition algorithm can decompose integer
programming problems with complicating variables into two major parts:
a master problem and a subproblem.

V. Sample average approximation

Solving the stochastic programming formulation directly, with all
scenarios considered, may not be efficient when the number of
scenarios is large. The sample-average approximation (SAA) approach is
used to address this complexity. This approach is applied to a sampled
set of scenarios.
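The SAA idea can be sketched on the reservation decision: instead of summing over every possible demand scenario, the expected cost is estimated from N sampled demands. The demand model and prices below are illustrative assumptions.

```python
# Sample-average approximation (SAA) sketch for a reservation decision: the
# expectation over all scenarios is replaced by an average over N samples.
# The demand distribution and prices are illustrative assumptions.

import random
random.seed(1)

R_PRICE, O_PRICE = 2.0, 5.0        # assumed reserved / on-demand prices

def sample_demand():
    return random.randint(10, 30)  # assumed demand distribution

def saa_cost(reserved, samples):
    on_demand = sum(max(0, d - reserved) for d in samples) / len(samples)
    return R_PRICE * reserved + O_PRICE * on_demand

samples = [sample_demand() for _ in range(5000)]   # one fixed sample set
best = min(range(0, 31), key=lambda r: saa_cost(r, samples))
print(best)   # close to the true optimum of 22 for this distribution
```

Note that the same sample set is reused for every candidate decision; re-sampling per candidate would make the comparison noisy, which is why SAA fixes the scenarios first and then optimizes over the resulting deterministic problem.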

One of the main advantages of and motivations behind Cloud
Computing is reducing the capital expenditures (CAPEX) of systems
from the perspective of both cloud users and providers. By renting
resources from cloud providers in a pay-per-use manner, cloud
customers benefit from lower initial investments and relief from IT
maintenance. In turn, by taking advantage of virtualization
technologies, cloud providers can increase the energy efficiency of
their infrastructures and scale the costs of the offered virtualized
resources. The paradigm has proven suitable for a wide range of
applications, e.g., hosting websites and social network applications,
scientific workflows, Customer Relationship Management, and high
performance computing.

3. DESIGN AND IMPLEMENTATION

OVERVIEW OF DESIGN AND IMPLEMENTATION

As discussed in the Proposed System section of the Introduction,
Figure 3.1 shows an overview of the steps involved in the design and
implementation of the project.

3.1. Virtual Machine Scheduling

Given a set of admitted services and the availability of local and


possibly remote resources, there are a number of scheduling problems
to be solved to determine where to store data and where to execute and
reallocate VMs. We categorize these problems into two classes,
namely single-cloud environments and multi-cloud environments. The
following sections describe these two scenarios respectively, as well as
the challenges and the state of the art of VM scheduling.

3.1.1. Scheduling in Single-cloud Scenarios

In this thesis, VM scheduling in single-cloud environments refers to
scenarios where VMs are scheduled within a single infrastructure
provider, which may operate multiple geographically distributed data
centers. This is consistent with the Private Cloud scenario described in
Chapter 2, while cases where the private infrastructure outsources (part
of) its workload to external infrastructure provider(s) belong to the
class of scenarios discussed in the following section. In single-cloud
scenarios, resource characteristics, including the real-time state of the
whole infrastructure, the revenue model, and the scheduling policies,
are usually exposed to the scheduling optimization process. A
scheduling algorithm can thus take full advantage of the information
potentially available.

According to a study by IDC and IBM in 2008, most test servers


run at 10% utilization. Furthermore, 30% of all defects are caused by
wrongly configured servers and 85% of computing sites are idle [39].
To improve the infrastructures that rely on virtualization technologies,
VMs running in the cloud need to be properly configured and
scheduled. As another key aspect, from a profit perspective, Service-
Level Agreement (SLA) compliance is also crucial as violations in
SLA can result in significant revenue loss to both the customer and the
provider. This may also require accurate and efficient SLA compliance
monitoring.

3.1.2 Scheduling in Multi-cloud Scenarios

Multi-cloud scenarios include (i) one cloud infrastructure that
offloads its workload to another infrastructure, for example in order
to lower operating costs while maintaining customer satisfaction,
and (ii) a cloud user who deploys and manages VMs across multiple
cloud infrastructures, gaining advantages such as avoiding the vendor
lock-in problem and improving service availability and fault tolerance.
This is consistent with the cases of cloud bursting, cloud federation,
and multi-cloud mentioned in Chapter 2. In such cases, decision making
is usually focused on selecting which cloud to run in, not which server.
The detailed states of the infrastructures are commonly opaque to the
cloud user or the cloud infrastructure that initiates the non-local
actions. Conversely, the remote cloud infrastructures usually expose
only business-related information, such as VM instance types, pricing
schemes, locality of the infrastructures, and legal information, to the
optimization process. VM scheduling in these scenarios is further
complicated by obstacles in integrating resources from various cloud
providers, which usually have their own resource characteristics,
protocols, and APIs.

3.2 Objectives and Considerations

There are a multitude of parameters and considerations involved in the


decision on where and when to place or reallocate data objects and
computations in cloud environments. An automated scheduling
mechanism should take the considerations and trade-offs into account,
and allocate resources in a manner that benefits the stakeholder for
which it operates (SP or IP). For both of these, this often leads to the
problem of optimizing cost or performance subject to a set of
constraints. Among the main considerations are:

• Performance. In order to improve the utilization of physical
resources, data centers increasingly employ virtualization and
consolidation as a means to support a large number of disparate
applications running simultaneously on server platforms. With
different VM scheduling strategies, the achieved performance may
differ significantly. In scenarios where multiple cloud providers are
involved, performance is of additional concern, as preserving the
performance of systems constructed by integrating resources from
heterogeneous infrastructures is a highly complex challenge.

• Energy-efficiency. In line with the interest in eco-efficient
technologies, increasing the overall efficiency of cloud
infrastructures in terms of power, cost, and utilization has
naturally become a major concern. However, this usually conflicts
with other concerns, e.g., performance.

• Costs. The price model was dominated by fixed prices in the early
phase of cloud adoption. However, cloud market trends show that the
use of dynamic pricing schemes is increasing. Reducing deployment
costs by dynamically placing services among clouds, or by
dynamically reconfiguring services (e.g., resizing VMs without
harming service performance), thus becomes possible. In addition,
internal implicit costs of VM scheduling, e.g., the interference
and overhead that one VM causes on other VMs running concurrently
on the same physical host, should also be taken into account.

• Locality. In general, for considerations of usability and


accessibility, VMs should be located close to users (which could
be other services or VMs). However, due to e.g., legal issues and
security reasons, locality may become a constraint for optimal
scheduling. This may apply to both cloud providers with
geographically distributed data centers and service providers
utilizing resources from multiple cloud providers.
• Reliability and continuous availability. Service reliability and
availability are among the central goals of VM scheduling. To
achieve them, VMs may be replicated across multiple (at least two)
geographical zones. During this procedure, factors such as the
importance of the data and/or service encapsulated in the VMs, their
expected usage frequency, and the reliability of the different data
centers must be taken into account. At the same time, scheduling VMs
within a single-cloud environment may also cause service
degradation, e.g., by introducing additional delays due to VM
migration, or by co-locating too many VMs with competing demands
on a single physical server.
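An automated scheduler must weigh these considerations against one another; one common approach is a weighted sum of normalized metrics, as in this sketch (the weights, hosts, and metric values are illustrative assumptions, not values from the thesis):

```python
# Rank candidate hosts by a weighted sum of normalized penalties, where
# lower is better for every metric: hourly cost, expected latency (ms),
# power draw (W), and distance to users (km). All values are hypothetical.

WEIGHTS = {"cost": 0.4, "performance": 0.3, "energy": 0.2, "locality": 0.1}

hosts = [
    {"name": "dc-eu",   "cost": 0.12, "performance": 40,  "energy": 180, "locality": 100},
    {"name": "dc-us",   "cost": 0.10, "performance": 90,  "energy": 160, "locality": 6000},
    {"name": "dc-asia", "cost": 0.08, "performance": 140, "energy": 200, "locality": 9000},
]

def normalized(metric):
    # Scale each metric by its maximum so the four penalties are comparable.
    hi = max(h[metric] for h in hosts)
    return {h["name"]: h[metric] / hi for h in hosts}

def score(host):
    return sum(WEIGHTS[m] * normalized(m)[host["name"]] for m in WEIGHTS)

best = min(hosts, key=score)
print(best["name"])  # -> dc-eu
```

In this toy instance the nearby, well-performing data center wins despite its higher price; shifting the weights toward cost would favor the cheaper remote sites instead.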

3.3 State of the Art

Virtual machine scheduling in distributed environments has
been extensively studied in the context of cloud computing.
Such approaches address separate problems, such as initial
placement, consolidation, or trade-offs between honouring
SLAs and constraining provider operating costs. Studied
scenarios are usually encoded as assignment or packing
problems in mathematical models and are then solved either
by approximations, e.g., greedy packing and heuristic methods,
or by existing mathematical programming solvers such as
Gurobi, CPLEX, and GLPK. As before, related work can be
separated into two sets: (i) VM scheduling in single-cloud
environments and (ii) VM scheduling in multi-cloud
environments.
In the single-cloud scenario, given a set of physical
machines and a set of services (encapsulated within VMs) with
dynamically changing demands, deciding how many instances
to run for each service, and where to place and execute them
while observing resource constraints, is an NP-hard problem.
The trade-off between solution quality and computation cost is
a challenge. To address this issue, various approximation
approaches have been applied. One proposed algorithm can
produce high-quality solutions for hard placement problems
with thousands of machines and thousands of VMs within 30
seconds. This approximation algorithm strives to maximize the
total satisfied application demand, to minimize the number of
application starts and stops, and to balance the load across
machines. Other work presents the Entropy resource manager
for homogeneous clusters, which performs dynamic
consolidation based on constraint programming and takes
migration overhead into account. Entropy chooses migrations
that can be implemented efficiently, incurring a low
performance overhead. The CHOCO constraint programming
solver, with optimizations such as identifying lower and upper
bounds that are close to the optimal value, is employed to solve
the problem. To reduce electricity cost in high performance
computing clouds that operate multiple geographically
distributed data centers, researchers have studied the impact of
VM placement policies on cooling and maximum data center
temperatures. They developed a model of data center cooling
for a realistic data center and cooling system, and designed VM
distribution policies that intelligently place and migrate VMs
across the data centers to take advantage of time-based
differences in electricity prices and temperatures. Targeting
energy efficiency and SLA compliance, an integrated
management framework has been proposed for governing
Cloud Computing infrastructures based on three management
actions, namely VM migration, VM reconfiguration, and power
management on physical machines.

Incorporating an autonomic management loop optimized using a
wide variety of heuristics, ranging from rules to random methods,
the proposed approach can save up to 61.6% of energy while keeping
SLA violations acceptably low.

For VM scheduling across multiple IPs, information about the
number of physical machines, the load of these physical machines, and
the state of resource distribution on the IPs' side is normally hidden
from the SP and hence is not among the parameters that can be used
for placement decisions. Only provisioning-related information, such as
VM instance types and price schemes, is exposed to the SP. Therefore,
most work on VM scheduling across multi-cloud environments
focuses on cost aspects. Chaisiri et al. propose a stochastic integer
programming (SIP) based algorithm that can minimize the cost
spent on each placement plan for hosting virtual machines in a
multiple cloud provider environment under future demand and price
uncertainty. Van den Bossche et al. examine the workload outsourcing
problem in a multi-cloud setting with deadline-constrained workloads,
and present a cost-optimal optimization method to maximize the
utilization of the internal data center and minimize the cost of running
the outsourced tasks in the cloud, while fulfilling the QoS constraints
for applications. Tordsson et al. propose a cloud brokering mechanism
for optimized placement of VMs to obtain optimal cost-performance
trade-offs across multiple cloud providers. Similarly, other work
explores the multi-cloud scenario by deploying a compute cluster on
top of a multi-cloud infrastructure for provisioning loosely-coupled
Many-Task Computing (MTC) applications. In this way, the cluster
nodes can be provisioned with resources from different clouds to
improve the cost-effectiveness of the deployment, or to implement
high-availability strategies.

3.4 Main Challenges - Models

A. VM Placement problem based on Stochastic Integer


Programming (SIP)

The cloud providers supply a pool of resources to users in the form
of virtual machines. These VMs can be divided into a set of classes,
with each class representing a different application type, with the goal
of minimizing all costs at the provider's end while meeting the
demands of users. The demands and prices are not fixed; their values
can be estimated from probability distributions. SIP is found to be
useful for solving this VM placement problem.

The problem statement is given below. Assume a set of VM classes,
which represent application types, and a set of cloud providers who
supply these VM classes as computing resources to users. Consider
that the providers offer four types of resources: computing power,
storage, network bandwidth, and electric power, denoted by the
superscripts (p), (s), (n), and (e), respectively. The uncertainty of
demands and prices is also taken into account.
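The demand uncertainty assumed by this SIP formulation can be represented as a discrete probability distribution per VM class, as in this sketch (the classes, demand levels, and probabilities are hypothetical):

```python
# For each VM class Vi: a list of (demand d, probability that d is realized),
# i.e., a discrete approximation of the demand distribution.
demand_dist = {
    "web":   [(10, 0.5), (20, 0.3), (40, 0.2)],
    "batch": [(2, 0.7), (8, 0.3)],
}

def expected_demand(cls):
    dist = demand_dist[cls]
    # A valid distribution must sum to 1.
    assert abs(sum(p for _, p in dist) - 1.0) < 1e-9
    return sum(d * p for d, p in dist)  # E[v_i(d)]

for cls in demand_dist:
    print(cls, round(expected_demand(cls), 2))
```

The SIP model optimizes over these scenarios jointly rather than just over the expected demand, which is what lets it hedge the reservation decision against the expensive high-demand outcomes.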

B. VM placement problem based on Bin Packing method

VM placement is viewed as a bin packing problem with variable
bin sizes and prices, where the bins represent physical machines (PMs)
and the items are the VMs. Bin sizes are the available CPU capacities
of the PMs, and prices correspond to the power consumption of the
PMs. A modified version of the Best Fit Decreasing (BFD) algorithm
is used: all VMs are sorted in decreasing order of their current CPU
utilization, and each VM is allocated to the host that incurs the
smallest growth in power consumption from the allocation. This
accommodates heterogeneous PMs, since the most power-efficient
hosts are chosen first.
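The modified BFD heuristic described above can be sketched as follows; the host power models and VM demands are hypothetical, and with this linear power model the idle power cancels out of the delta (a fuller model would charge idle power only when a host is first activated):

```python
def power(host, load):
    # Simple linear power model: idle power plus a per-unit-load increment.
    return host["idle_w"] + host["w_per_cpu"] * load

def place(vms, hosts):
    """vms: list of CPU demands; hosts: dicts with capacity and power model.
    Returns {host name: list of assigned VM demands}."""
    placement = {h["name"]: [] for h in hosts}
    for vm in sorted(vms, reverse=True):          # decreasing CPU utilization
        best, best_delta = None, None
        for h in hosts:
            used = sum(placement[h["name"]])
            if used + vm > h["cpu"]:
                continue                           # capacity constraint
            delta = power(h, used + vm) - power(h, used)  # power growth
            if best is None or delta < best_delta:
                best, best_delta = h, delta
        if best is None:
            raise RuntimeError(f"no host can fit VM with demand {vm}")
        placement[best["name"]].append(vm)
    return placement

hosts = [
    {"name": "efficient", "cpu": 8, "idle_w": 50,  "w_per_cpu": 5},
    {"name": "legacy",    "cpu": 8, "idle_w": 120, "w_per_cpu": 12},
]
print(place([4, 3, 2, 2], hosts))
```

On this instance the heuristic fills the power-efficient host to its CPU capacity first and spills the remaining VMs onto the legacy host, matching the intent described above.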

3.5. The Proposed Methodology

The proposed algorithm (OVMP) combines SIP with an enhanced
version of the BFD VM placement algorithm. First, the SIP algorithm
is executed by the global manager to perform the reservation and to
meet the on-demand and utilization phases of VM placement. Then
the local managers execute the modified BFD algorithm to place the
VMs in a way that minimizes the number of physical machines used.
Finally, the VMM performs the actual VM placement, migration, and
resizing activities after receiving commands from the global and local
managers specifying the number of VMs, their location, and timing.
Once the VMM has all the required information, it starts the actual
placement. OVMP's objective is to minimize the overall cost and
maximize the return on investment (ROI).
Existing System
Input: Data process on OS module description
Output: Formation of the cloud resource process
1. Derive the constraint parameters for the processing virtual
machines
2. For each VM i:
3. Decompose xi(t): xi(t) = x̂i(t) + x̃i(t)
   x̂i(t): trend and seasonal components
   x̃i(t): irregular fluctuations
4. Perform forecast operations on each virtual machine
placement process and construct the virtual machine process
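Step 3's decomposition can be sketched with a simple centered moving average, one common way to separate a smooth trend component from irregular fluctuations (the window size and load trace are hypothetical; production forecasters typically use richer seasonal models):

```python
def decompose(x, window=3):
    """Split a load signal into (smooth, residual) so that for every t,
    x[t] == smooth[t] + residual[t]: smooth approximates the trend/seasonal
    part and residual captures irregular fluctuations."""
    half = window // 2
    smooth = []
    for t in range(len(x)):
        lo, hi = max(0, t - half), min(len(x), t + half + 1)
        smooth.append(sum(x[lo:hi]) / (hi - lo))  # centered moving average
    residual = [xt - st for xt, st in zip(x, smooth)]
    return smooth, residual

load = [10, 12, 11, 30, 12, 11, 13]   # spike at t=3 is an irregular burst
trend, irregular = decompose(load)
print([round(v, 1) for v in irregular])
```

The burst at t=3 lands almost entirely in the residual series, so a forecaster fed only the smooth component is not misled by one-off spikes.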

PROPOSED ALGORITHM
Parameters are denoted as follows:

• V = {V1, V2, V3, ..., Vlast} denotes the set of VM classes, where a
VM class represents an application type.
• P = {P1, P2, P3, ..., Plast} denotes the set of cloud providers who
supply a pool of resources to the user.
• Di = {di1, di2, di3, ..., div} denotes the set of maximum numbers of
required VMs of class Vi. The total required demand D is the
Cartesian product of all Di over all i.
• vi(d) denotes the number of required VMs of class Vi if demand d
is realized.
• tj(p), tj(s), tj(n) denote the maximum capacity of the corresponding
resource that cloud provider Pj can supply to the user.
• ri(p), ri(s), ri(n) denote the amount of the corresponding resource
required by a single VM of class Vi.
• cj(rev)(p), cj(rev)(s), cj(rev)(n) denote the prices of the corresponding
resources in the reservation phase for provider Pj.
• cj(util)(p), cj(util)(s), cj(util)(n) denote the prices of the corresponding
resources in the utilization phase for provider Pj.
• cj(od)(p), cj(od)(s), cj(od)(n) denote the prices of the corresponding
resources in the on-demand phase for provider Pj; these prices can be
random.
• Cj(od)(p), Cj(od)(s), Cj(od)(n) denote the sets of possible prices of the
resources offered by provider Pj in the on-demand phase.
• N denotes the number of cloud providers.
Step 1: Obtain the sample reservation cost from the cloud provider.
Step 2: Initialize the cost for the basic variables, e.g., Cijk(r)(ω).
for i = 1, 2, ..., N do
  for j = 1, 2, ..., N do
    for k = 1, 2, ..., N do
      z = scenario of reservation phase * decision variable of
          reservation phase
          (or) z = Cijk(r)(ω) * Xijk(r)(ω)
      I = min(decision variable of reservation phase, on-demand
          phase, expanding phase) * c(y)
          (or) I = min(Xijk(r)(ω), Xijk(o)(ω), Xijk(e)(ω)) * c(y)
    end for
  end for
end for
for i = 1, 2, ..., N do
  for j = 1, 2, ..., N do
    for k = 1, 2, ..., N do
      for t = 1, 2, ..., N do
        c(y) = scenario of reservation phase * decision variable of
               reservation phase + scenario of on-demand phase *
               decision variable of on-demand phase + scenario of
               expanding phase * decision variable of expanding phase
               (or) c(y) = Cijkt(r)(ω) * Xijkt(r)(ω) + Cijkt(o)(ω) * Xijkt(o)(ω)
               + Cijkt(e)(ω) * Xijkt(e)(ω)
      end for
    end for
  end for
end for
subject to the constraints:
Xijk(e)(ω) <= Xijk(r)(ω)
Xijk(R) = Xijkt(r)(ω)
Xijk(e)(ω) + Xijk(o)(ω) >= dit(ω)
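The per-scenario cost accumulation in the pseudocode above can be sketched as follows; the phase prices C and decisions X are hypothetical values, not outputs of the thesis experiments:

```python
# For one scenario omega, the total cost sums reservation ('r'),
# on-demand ('o'), and expanding ('e') phase terms, C_ijk * X_ijk each.

def scenario_cost(prices, decisions):
    """prices/decisions: dicts keyed by phase 'r', 'o', 'e';
    each phase maps an (i, j, k) index tuple to a price / VM count."""
    total = 0.0
    for phase in ("r", "o", "e"):
        for key, c in prices[phase].items():
            total += c * decisions[phase].get(key, 0)   # C_ijk * X_ijk
    return total

prices = {
    "r": {(0, 0, 0): 1.0, (0, 1, 0): 1.2},
    "o": {(0, 0, 0): 3.0, (0, 1, 0): 2.8},
    "e": {(0, 0, 0): 2.0, (0, 1, 0): 2.1},
}
decisions = {
    "r": {(0, 0, 0): 5, (0, 1, 0): 2},   # reserved VMs
    "o": {(0, 0, 0): 1},                 # on-demand VMs
    "e": {(0, 1, 0): 1},                 # expanded (used) reserved VMs
}
print(scenario_cost(prices, decisions))
```

In the full SIP model this quantity is computed per scenario and weighted by the scenario probabilities; the sketch evaluates just one scenario, and the constraint Xijk(e)(ω) <= Xijk(r)(ω) holds for the sample decisions shown.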
3.6. Clustering Phase

The SIP algorithm has two stages. Stage 1 determines the number of VMs
to be provisioned in the reservation phase, and Stage 2 determines the
number of VMs allocated in the utilization and on-demand phases.
A proactive algorithm for resource allocation in cloud computing is
proposed in this section. The main objective of this approach is to
track the status of available and allocated resources in the server; the
approach also contributes to maintaining load balancing across the
slave servers.

Figure 3.1. Activity Client

Each cluster includes a master server and one or more slave
servers. Each slave server serves a virtual machine which is allocated by
CSPi. In the slave server, each virtual machine has a unique ID. The
main role of the master server is to record the resource states of each VMi
at the slave servers. Each virtual machine contains the resources
CPU, RAM, and storage, denoted R1, R2, and R3, respectively.

Figure 3.2. Activity Server

CSPi allocates one or more virtual machines to a slave server with
unique IDs, and each slave server has its own resource capacity with
which to execute the VMi resources. Each virtual machine takes a
fixed amount of space for its logical allocation at the slave server.
Physically, the VMi are placed in the data center, and references to the
VMi are passed to the master server by CSPi when users issue requests.

A question then arises: if a new user requests resources from
MSi, how are the resources allocated? Initially, the master server
contacts CSPi about the requested VMi, CSPi sends a reference to the
VMi, and MSi then decides where on SSi the VMi should be allocated.
MSi records many kinds of information about the resources of each
VMi: how many VMi are executing and where in the cluster they are
executing, as well as how many resources of each VMi are in use,
where they are in use, and how many resources are free and utilized in
each VMi. When a user request for resources arrives, MSi checks its
list of free resources on the VMi and, if free resources are available,
allocates resources from an already-running VMi to the requesting
user. Each VMi has its own capacity for resource allocation; the free
resources of a VMi are calculated by subtracting the total utilized
resources from the total resources of the VMi at SSi. In equation form,
eq. 1 gives the free resources at a virtual machine:

Ci = Ai - Bi                                     (1)

where C denotes the free resources of the VMi with respect to resource
Ri, A stands for the total resources available on the VMi, and B stands
for the utilized resources of the VMi at SSi. For i = 1 we calculate the
free resources of resource type 1 (CPU), for i = 2 those of resource
type 2 (RAM), and for i = 3 those of resource type 3 (storage). A
tabular representation for finding the free resources of a VMi at the
slave server is given below.
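Eq. 1's free-resource computation, applied per resource type, can be sketched as follows (the capacity and usage figures are hypothetical):

```python
RESOURCES = ("cpu", "ram_gb", "storage_gb")   # R1, R2, R3

def free_resources(total, used):
    """C = A - B for each resource type; raises if usage exceeds capacity."""
    free = {}
    for r in RESOURCES:
        if used[r] > total[r]:
            raise ValueError(f"{r}: used {used[r]} exceeds total {total[r]}")
        free[r] = total[r] - used[r]
    return free

vm_total = {"cpu": 8, "ram_gb": 32, "storage_gb": 500}
vm_used  = {"cpu": 3, "ram_gb": 20, "storage_gb": 120}
print(free_resources(vm_total, vm_used))
# -> {'cpu': 5, 'ram_gb': 12, 'storage_gb': 380}
```

The master server would evaluate this per VM and serve a request from any VM whose free vector covers the requested amounts.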
3.7. UML DIAGRAMS
Unified Modeling Language:

The Unified Modeling Language allows the software engineer to express
an analysis model using a modeling notation that is governed by a set of
syntactic, semantic, and pragmatic rules.

A UML system is represented using five different views that describe the
system from distinctly different perspectives. Each view is defined by a
set of diagrams, as follows.
• User Model View
i. This view represents the system from the user's
perspective.
ii. The analysis representation describes a usage
scenario from the end-user's perspective.

• Structural Model View
i. In this model, the data and functionality are modeled
from inside the system.
ii. This view models the static structures.

• Behavioral Model View
This view represents the dynamic (behavioral) parts of the
system, depicting the interactions among the structural
elements described in the user model and structural model
views.

• Implementation Model View
In this view, the structural and behavioral parts of the system
are represented as they are to be built.

• Environmental Model View
In this view, the structural and behavioral aspects of the
environment in which the system is to be implemented are
represented.

UML modeling is carried out in two different domains:

• UML analysis modeling, which focuses on the user model and
structural model views of the system.
• UML design modeling, which focuses on the behavioral,
implementation, and environmental model views.

Use case diagrams represent the functionality of the system from a
user's point of view. Use cases are used during requirements elicitation
and analysis to represent the functionality of the system, focusing on
the behavior of the system from an external point of view.

Actors are external entities that interact with the system. Examples of
actors include human users, such as an administrator or a bank customer,
or another system, such as a central database.

3.7.1. Use Case Diagram

Figure 3.3: Use Case

3.7.2. Sequence Diagram

Figure 3.4: Sequence

3.7.3. Class Diagram

Figure 3.5: Class

3.7.4. ER Diagram

Figure 3.8: ER Diagrams

3.7.5. Data Dictionary
mysql -u root -p123456

drop database OVMPcloud;

create database OVMPcloud;

use OVMPcloud;

create table users(
  id int(11) NOT NULL AUTO_INCREMENT,
  passid varchar(50),
  phone varchar(50),
  email varchar(50),
  address varchar(50),
  userid varchar(50),
  postcode varchar(50),
  country varchar(50),
  roles varchar(50),
  PRIMARY KEY (`id`),
  UNIQUE (userid),
  UNIQUE (email)
);

insert into users(passid, phone, email, address, userid, postcode, country, roles)
values('123456', '924816937', 'admin@OVMPcloud.com', 'address #1234',
       'admin', '33333', 'INDIA', 'admin');

create table bookings(
  id int(11) NOT NULL AUTO_INCREMENT,
  userid varchar(50),
  vmname varchar(50),
  bookdate varchar(50),
  fixeddate varchar(50),
  used varchar(10) NOT NULL DEFAULT 'no',
  PRIMARY KEY (`id`)
);

commit;

select 'Thank You For Using This Script. Your Database is Ready to
use!!!' as notification from dual;
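To smoke-test this schema without a running MySQL server, the same tables can be mirrored in an in-memory SQLite database (a sketch: SQLite ignores varchar lengths and uses INTEGER PRIMARY KEY in place of AUTO_INCREMENT, and the booking row inserted below is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Mirror of the users and bookings tables from the data dictionary.
cur.execute("""CREATE TABLE users(
    id INTEGER PRIMARY KEY,
    passid TEXT, phone TEXT, email TEXT UNIQUE, address TEXT,
    userid TEXT UNIQUE, postcode TEXT, country TEXT, roles TEXT)""")
cur.execute("""CREATE TABLE bookings(
    id INTEGER PRIMARY KEY,
    userid TEXT, vmname TEXT, bookdate TEXT, fixeddate TEXT,
    used TEXT NOT NULL DEFAULT 'no')""")
# Same admin row as the MySQL script above.
cur.execute("""INSERT INTO users(passid, phone, email, address, userid,
    postcode, country, roles)
    VALUES('123456', '924816937', 'admin@OVMPcloud.com', 'address #1234',
           'admin', '33333', 'INDIA', 'admin')""")
# Hypothetical booking; 'used' falls back to its DEFAULT of 'no'.
cur.execute("INSERT INTO bookings(userid, vmname, bookdate, fixeddate) "
            "VALUES('admin', 'vm-01', '2024-01-01', '2024-01-05')")
conn.commit()
row = cur.execute("SELECT roles, used FROM bookings "
                  "JOIN users USING(userid) "
                  "WHERE vmname = 'vm-01'").fetchone()
print(row)  # -> ('admin', 'no')
```

The join confirms both the UNIQUE userid linkage and the DEFAULT 'no' on the used column before the script is ever run against the production MySQL instance.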

Chapter 4
TESTING

In non-local scenarios, cloud users may want to distribute VMs
across multiple providers for various purposes, e.g., to construct a
user-side cloud environment and prevent potential vendor lock-in
by migrating applications and data between data centers and cloud
providers. Most likely, the decision on how VMs are distributed among
cloud providers is not a one-time event; rather, it needs to be adjusted
as conditions change. We investigate dynamic cloud scheduling in
scenarios where conditions change continuously, and propose a linear
programming model to dynamically reschedule VMs (including
modeling of VM migration overhead) upon new conditions such as
price changes and service demand variation. Our model can be applied
in various scenarios through selection of the corresponding objectives
and constraints, and offers the flexibility to express different levels of
migration overhead when restructuring an existing virtual
infrastructure, i.e., the VM layout.

In scenarios where new instance types are introduced, the proposed
mechanisms can accurately determine the break-off point at which the
improved performance resulting from migration outweighs the
migration overhead. It is also demonstrated that our cloud mechanism
can cope with scenarios where prices change over time. Performance
changes, as well as the transformation of the VM distribution across
cloud providers as a consequence of price changes, can be precisely
calculated. In addition, the ability of the proposed mechanism to
handle the tradeoff between vertical elasticity (resizing VMs) and
horizontal elasticity (adding VMs), as well as to improve decision
making in complex scale-up scenarios with multiple options for service
reconfiguration, e.g., deciding how many new VMs to deploy, and
how many and which VMs to migrate, is also evaluated in scenarios
based on commercial cloud providers' offerings.

A joint-VM approach can potentially exploit the multiplexing among the
demand patterns of multiple VMs to reach an aggregate capacity measure
that is bound only by the aggregate peak behavior.
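A small numeric illustration of this multiplexing effect, with hypothetical demand traces: provisioning the two VMs jointly needs capacity for the peak of their summed demand, which can be well below the sum of their individual peaks when the demand patterns are anti-correlated.

```python
vm_a = [8, 2, 1, 2, 9, 1]   # peaks early and late
vm_b = [1, 8, 9, 8, 1, 2]   # peaks in the middle

sum_of_peaks = max(vm_a) + max(vm_b)                  # separate provisioning
peak_of_sum = max(a + b for a, b in zip(vm_a, vm_b))  # joint provisioning

print(sum_of_peaks, peak_of_sum)  # -> 18 10
```

Here joint provisioning needs capacity for 10 units instead of 18, an overbooking-style saving that grows with the number of multiplexed VMs.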

Table 1: Test Cases

Test No. | Module Name                           | Test Name                | Expected Output                             | Result
1        | Deploy SaaS                           | SaaS test                | Should display SaaS homepage                | P
2        | Run SaaS without IaaS deployment      | SaaS test                | System runs with restricted functionalities | P
3        | Login to IaaS with proper credentials | IaaS authentication test | Should display operations                   | P
Chapter 5
RESULTS

Table 2: Characteristics of different cluster nodes

However, data access via virtual multiplexing is a complex task for
reliable cloud resource provisioning applications.

Capacity utilization of the provisioned virtual machines is a major
concern in joint virtual provisioning, and the cost spent is a key
factor in cloud computing applications. A better system was therefore
required to meet these requirements.

The OVMP algorithm makes decisions based on the optimal solution of
stochastic integer programming (SIP) to rent resources from cloud
providers. The algorithm can minimize the cost spent on each plan
for hosting virtual machines in a multiple cloud provider environment
under future demand and price uncertainty.

The results show that the OVMP algorithm can minimize the total cost
while the requirements of both providers and customers are met.

CONCLUSION
As cloud computing becomes increasingly popular, efficient
management of VM images, such as image propagation to compute
nodes and image snapshotting for checkpointing or migration, is
critical. The performance of these operations directly affects the
usability of the benefits offered by cloud computing systems. This
work introduced several techniques that integrate with cloud
middleware to efficiently handle two patterns: multi-deployment and
multi-snapshotting. We propose a lazy VM deployment scheme that
fetches VM image content as needed by the application executing in
the VM, thus reducing the pressure on the VM storage service under
heavily concurrent deployment requests. Furthermore, we leverage
object versioning to save only local VM image differences back to
persistent storage when a snapshot is created, yet provide the illusion
that the snapshot is a different, fully independent image. This has two
important benefits. First, it handles the management of updates
independently of the hypervisor, thus greatly improving the portability
of VM images and compensating for the lack of VM image format
standardization. Second, it handles snapshotting transparently at the
level of the VM image repository, greatly simplifying the management
of snapshots. We demonstrated the benefits of our approach through
experiments on hundreds of nodes using benchmarks as well as
real-life applications. Compared with simple approaches based on
pre-propagation, our approach shows a major improvement in both
execution time and resource usage: the total time to perform a
multi-deployment was reduced by up to a factor of 25, while storage
and bandwidth usage were reduced by as much as 90%. Compared with
approaches that use copy-on-write images (i.e., qcow2) based on raw
backing images stored in a distributed file system (i.e., PVFS), we
show a speedup of multi-deployment by a factor of 2 and comparable
multi-snapshotting performance, all with the added benefits of
transparency and portability. Based on these encouraging results, we
plan to explore the multi-deployment and multi-snapshotting patterns
more extensively. With respect to multi-deployment, one possible
optimization is to build a prefetching scheme based on previous
experience with the access pattern. With respect to multi-snapshotting,
interesting reductions in time and storage space can be obtained by
introducing deduplication schemes. We also plan to fully integrate the
current work with Nimbus and explore its benefits for more complex
HPC applications in the real world.

FUTURE WORK
The expansion of Cloud computing has resulted in the creation of huge
data centers globally, containing large numbers of computers that
consume large amounts of energy, resulting in high operating costs. To
reduce energy consumption, providers must optimize resource usage by
performing dynamic consolidation of virtual machines (VMs) in an
efficient way.

REFERENCES

BOOKS

• Gerard Ian Prudhomme, "JAVA Technologies", 2016.

• Herbert Schildt, "JAVA Complete Reference", 2017, Oracle Press.

• Yehuda Shiran, Tomer Shiran, "JavaScript Programming", 2017,
BPB Publications.

• Dr. Edward Lavieri, "Mastering Java 11: Develop modular and
secure Java applications using concurrency and advanced JDK
libraries", 2nd Edition, 2018, Packt.

• Marco Pistoia, Deepak Gupta, Ashok Ramani, "JAVA 2
Networking", 2nd Edition, Prentice Hall PTR.

• Shadab Siddiqui, Pallavi Jain, "J2EE Professional", 2002,
Premier Press.

• Larne Pekowsky, "JAVA Server Pages", 2nd Edition, Addison-Wesley.

• Robin Nixon, "Learning PHP, MySQL, JavaScript, CSS &
HTML5: A Step-by-Step Guide to Creating Dynamic
Websites", 3rd Edition, 2009, O'Reilly.

WEB LINKS
1. Zhen Xiao, Weijia Song, and Qi Chen. Dynamic Resource Allocation
using Virtual Machines for Cloud Computing Environment. Volume 24,
Issue 6, June 2013, IEEE.
2. Rafael Moreno-Vozmediano, Ruben S. Montero, Ignacio M.
Llorente. ACDC '09: Proceedings of the 1st Workshop on Automated
Control for Datacenters and Clouds, June 2009, pages 19–24.
https://doi.org/10.1145/1555271.1555277.
3. Amazon Elastic Compute Cloud (EC2). http://aws.amazon.com/ec2/.
4. A. Kovari, P. Dukan. KVM & OpenVZ virtualization based IaaS
open source cloud virtualization platforms: OpenNode, Proxmox VE.
October 2012, IEEE.
5. Amazon Simple Storage Service (S3). http://aws.amazon.com/s3/.
6. Jessie J. Walker, David Freet, Rajeev Agrawal, Youakim Badr.
Open source cloud management platforms and hypervisor
technologies: A review and comparison. 11 July 2016, IEEE.
7. M. Armbrust, A. Fox, R. Griffith, A. Joseph, R. Katz, A.
Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M.
Zaharia. A view of cloud computing. Commun. ACM, 53:50–58,
April 2010.
8. A. Bar-Noy and S. Kipnis. Designing broadcasting algorithms in the
postal model for message-passing systems. In SPAA '92:
Proceedings of the 4th Annual ACM Symposium on Parallel
Algorithms and Architectures, pages 13–22, New York, 1992.
ACM.

9. Anshul Gandhi, Yuan Chen, Daniel Gmach, Martin Arlitt, Manish
Marwah. Minimizing data center SLA violations and power
consumption via hybrid resource provisioning. September 2011,
IEEE.
10. Zhibo Cao, Shoubin Dong. Dynamic VM Consolidation for
Energy-Aware and SLA Violation Reduction in Cloud Computing.
September 2013, IEEE.
11. B. Claudel, G. Huard, and O. Richard. TakTuk, adaptive
deployment of remote executions. In HPDC '09: Proceedings of the
18th ACM International Symposium on High Performance
Distributed Computing, pages 91–100, New York, 2009. ACM.
12. A. J. Kleywegt, A. Shapiro, and T. Homem-de-Mello. The
sample average approximation method for stochastic discrete
optimization. SIAM Journal on Optimization, 12:479–502, 2001.
13. Alexander Shapiro and Andy Philpott. A Tutorial on Stochastic
Programming.
https://www2.isye.gatech.edu/people/faculty/Alex_Shapiro/Tutorial
SP.pdf, 2007.
14. P. H. Carns, W. B. Ligon, R. B. Ross, and R. Thakur. PVFS: A
parallel file system for Linux clusters. In Proceedings of the 4th
Annual Linux Showcase and Conference, pages 317–327, Atlanta,
GA, 2000.
15. Rüdiger Schultz, François V. Louveaux. Stochastic Integer
Programming. Handbooks in Operations Research and Management
Science, Volume 10, 2003, pages 213–266.
16. S. Chaisiri, B.-S. Lee, and D. Niyato. Optimal Virtual Machine
Placement across Multiple Cloud Providers. In Proceedings of the
4th IEEE Asia-Pacific Services Computing Conference, pages
103–110.
17. R. V. den Bossche, K. Vanmechelen, and J. Broeckhove.
Cost-Optimal Scheduling in Hybrid IaaS Clouds for Deadline
Constrained Workloads. In Proceedings of the 2010 IEEE
International Conference on Cloud Computing, pages 228–235.
IEEE Computer Society, 2010.
18. W. Li, J. Tordsson, and E. Elmroth. Modeling for Dynamic Cloud
Scheduling via Migration of Virtual Machines. In Proceedings of
the 3rd IEEE International Conference on Cloud Computing
Technology and Science (CloudCom 2011), pages 163–171, 2011.
19. IBM. Industry Developments and Models – Global Testing
Services: Coming of Age. IDC, 2008 and IBM Internal Reports.
20. Salesforce.com, Inc. Salesforce.com. http://www.salesforce.com,
visited March 2014.
