
Virtual Machine Live Migration
Contents
Abstract
1. Introduction
2. Virtualization
Before Virtualization
After Virtualization
Virtualization Approaches
Hosted Architecture
Hypervisor (Bare Metal) Architecture
2.1. Hypervisor
Mainframe Origins
Classification
2.2. Type-1 (Native or Bare-Metal) Hypervisors
2.3. Type-2 (Hosted) Hypervisors
3. VM Live Migration Strategies
3.1. Pre-Copy Memory Migration
3.2. Post-Copy Memory Migration
3.3. Hybrid Migration
4. Popular Cloud Workloads
4.1. Workload Statistics (Cloud)
4.2. Choosing a Benchmark for Testing
5. Parameters
Migration Link Bandwidth
Page Dirty Rate
5.1. Memory Access Features
5.2. Downtime
5.3. Total Migration Time
5.4. Network Traffic
6. Experimental Test Bed Setup
6.1. Hardware Setup
QEMU-KVM and libvirt VMM Setup
6.2. Network Storage Setup
6.3. Server and VMs Configuration
6.4. Web Serving
6.5. Parameters Measurement Setup
6.5.1. Benchmarks
6.5.2. Base
6.5.3. Peak
7. Performance Evaluation
7.1. Test VM
7.2. TMT and Downtime
Downtime
TMT (Total Migration Time)
Network Traffic (NetHogs)
Memory Access Features (MAF)
MAF Results
8. Related Work
9. Conclusion
10. References
Abstract:

Migrating a running operating system instance between physical hosts is an important capability for administrators of data centers and clusters. Live migration provides a clean separation between the hardware and the software of a running system and helps with fault management, load balancing, and low-level system maintenance.

By carrying out the migration while the OS continues to run, one can achieve high performance with very short service downtime. This study discusses how an entire running OS can be migrated smoothly, with recorded service downtimes of up to one minute, and observes that live migration is practical even when servers are carrying interactive load.

In this paper, we build an experimental test bed and measure the effect of different parameters on virtual machine live migration. Downtime and total migration time are compared for different design cases, and migration is evaluated while the virtual machines run a variety of services under different constraints. Finally, a performance analysis of live migration is carried out by considering a test VM, total migration time (TMT), and downtime.

1. Introduction
Operating system virtualization has attracted a great deal of attention in recent years, especially in data centers and compute clusters. Para-virtualization allows many OS instances to run simultaneously on a single physical machine with good performance, making better use of the available hardware while isolating the individual OS instances from one another.

In this paper we study a further benefit enabled by virtualization: live OS migration. Migrating an entire OS, together with all of its running applications, as a single unit avoids many of the difficulties caused by process-level migration techniques. In particular, the narrow interface between a virtualized OS and the virtual machine monitor (VMM) largely eliminates the problem of 'residual dependencies', in which the original host must remain available and network-reachable in order to service certain system calls or memory accesses on behalf of migrated processes. With whole-VM migration, the source host may be decommissioned once migration completes, which is particularly valuable when migration is performed to allow maintenance of the original host.

Secondly, migrating at the level of an entire virtual machine means that the in-memory state can be transferred in a consistent and efficient fashion. The diagram below illustrates the migration process, showing how data is transferred from the source node to the destination node and how the communicating nodes cooperate during the migration.

Overall, live migration is an extremely powerful tool for cluster administration: it permits a clean separation between hardware and software concerns and treats each VM as a single, coherent unit of management.

2. Virtualization:
In the modern business world, the major challenges that CIOs and IT managers face are cost-effective utilization of IT infrastructure, responsiveness to new business ideas, and the flexibility to adapt to organizational change. IT managers must also operate under tight budget constraints and strict regulatory demands. Virtualization is a fundamental technology that helps IT organizations deploy innovative solutions to these business challenges. [1]

The essential idea of virtualization is to separate a resource or request for a service from the underlying physical delivery of that service. With virtual memory, for example, software gains access to more memory than is physically installed, via the background swapping of data to disk storage. Similar techniques can be applied to the rest of the IT infrastructure, including networks, storage, laptop or server hardware, and operating systems with their applications.

This blend of virtualization technologies, or virtual infrastructure, provides a layer of abstraction between the computing, storage and networking hardware and the applications running on it (see Figure 1). The deployment of virtual infrastructure is non-disruptive, since the user experience is largely unchanged. However, virtual infrastructure gives administrators the advantage of managing pooled resources across the enterprise, allowing IT managers to be more responsive to dynamic organizational needs and to better leverage infrastructure investments. [1]

Before Virtualization:
• Single OS image per machine.
• Multiple applications on one machine can conflict with each other.
• Expensive, inflexible infrastructure.
• Underutilized resources.

After Virtualization:
• Operating systems and applications are independent of the underlying hardware.
• Virtual machines can be provisioned on any system.
• OS and applications are managed as a single unit by encapsulating them in a virtual machine.

With virtual infrastructure solutions such as VMware, enterprise IT managers can address challenges that include:

• Server consolidation and containment: eliminating 'server sprawl' by deploying systems as virtual machines (VMs) that can run safely and move transparently across shared hardware, increasing server utilization rates from 5-15% to 60-80%.
• Test and development optimization: rapidly provisioning test and development servers by reusing pre-configured systems, enhancing developer collaboration and standardizing development environments.
• Business continuity: reducing the cost and complexity of business continuity (high-availability and disaster-recovery solutions) by encapsulating entire systems into single files that can be replicated and restored on a target server for emergency use.

Virtualization Approaches:
Although virtualization has played a key role in enterprise computing for decades, it was only relatively recently (1998) that VMware brought the benefits of virtualization to industry-standard x86-based platforms, which now account for the majority of desktop, laptop and server shipments. A key benefit of virtualization is the ability to run multiple operating systems on a single physical system while sharing the underlying hardware, an approach known as partitioning. [2]

Today, virtualization can be applied at several layers of the system stack, including hardware-level virtualization, operating-system-level virtualization and high-level language virtual machines. Hardware virtualization was pioneered on IBM mainframes in the early 1970s, and more recently Unix/RISC system vendors began with hardware partitioning capabilities before moving to software-based partitioning. [1]

For Unix/RISC and industry-standard x86 systems, two different approaches are used: hosted and hypervisor architectures, as shown in the diagram. The hosted approach provides partitioning services on top of a standard operating system and supports the broadest range of hardware configurations. In contrast, a hypervisor architecture is the first layer of software installed on a clean x86-based system (hence it is also known as the "bare metal" approach). Since it has direct access to the hardware resources, a hypervisor is more efficient than a hosted architecture and provides greater scalability, robustness and performance. [2]

Hosted Architecture:
• Installs and runs as an application.
• Relies on the host OS for device support and physical resource management.

Hypervisor (Bare Metal) Architecture:
• Lean, virtualization-centric kernel.
• Service console for management agents and helper applications. [2]
2.1. Hypervisor:
A hypervisor, formally known as a virtual machine monitor (VMM), is computer software, firmware or hardware that creates and runs virtual machines. A computer on which one or more virtual machines run is called the host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages their execution. Multiple instances of a variety of operating systems may share the same virtualized hardware resources: for example, Linux, macOS and Windows instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all containers share a single kernel, although the guest user spaces can differ, as when different Linux distributions run on the same kernel.

The term hypervisor is a variant of supervisor, a traditional term for the kernel of an operating system: the hypervisor can be regarded as the supervisor of the supervisors. [3]

Mainframe origins:
The first hypervisors providing full virtualization were IBM's test tool SIMMON and the CP-40 system, which began production use in 1967 and became the first version of IBM's CP/CMS operating system. CP-40 ran on an S/360-40 that had been modified at the IBM Cambridge Scientific Center to support dynamic address translation, a feature that enabled virtualization. Before this, computer hardware had been virtualized only to the extent needed to let multiple user applications run concurrently, as in CTSS and the IBM M44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.

Programmers soon re-implemented CP-40 (as CP-67) for the IBM System/360-67, the first production computer system capable of full virtualization, first shipped in 1966. It included page-translation-table hardware for virtual memory and other techniques that allowed full virtualization of all kernel tasks, including I/O and interrupt handling. CP-67 was a full-virtualization operating system, whereas the ill-fated TSS/360 did not offer full virtualization. Both CP-40 and CP-67 entered commercial use in 1967. CP/CMS was available to IBM customers from 1968 to the early 1970s, in source-code form and without support.

Classification:
In a 1974 article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified hypervisors into two basic types:

2.2. Type-1 (native or bare-metal) hypervisors

These hypervisors run directly on the host's hardware to control the hardware and to manage the guest operating systems; for this reason they are sometimes called bare-metal hypervisors. The first hypervisors, developed by IBM in the 1960s, were native hypervisors; they included the test software SIMMON and the CP/CMS operating system (the predecessor of IBM's z/VM). Modern equivalents include Antsle OS, Microsoft Hyper-V and the Xbox One system software, the POWER Hypervisor, VMware ESXi and Xen. [3]

2.3. Type-2 (hosted) hypervisors:

These hypervisors run on a conventional operating system (OS) just as other computer programs do. A guest operating system runs as a process on the host, and Type-2 hypervisors abstract the guest operating systems from the host operating system. Examples of this type include Parallels Desktop for Mac, QEMU, VirtualBox, VMware Player and VMware Workstation. [4]

The distinction between these two types is not always clear-cut. For example, Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a Type-1 hypervisor. At the same time, since Linux distributions and FreeBSD are still general-purpose operating systems, with applications competing with the VMs for host resources, KVM and bhyve can also be categorized as Type-2 hypervisors.
3. VM Live Migration Strategies:

Memory migration plays a key role in VM live migration. Transferring the memory state, rather than copying the complete virtual machine image, can be done in several ways, and the technique used usually depends on the purpose of the migration and the workloads involved. The whole process can be assisted by the guest operating system. [5]

Although several techniques can be used to move a virtual machine's memory state from a source to a destination, two of them are preferred by most data-center vendors: pre-copy memory migration and post-copy memory migration. A hybrid of the two also plays an important role in some cases. [6]

3.1. Pre-copy memory migration:

In pre-copy migration, the contents of the VM's memory are copied from the source to the destination over multiple iterations while the virtual machine continues to run on the source. The whole process is illustrated in the figure below.

In the first iteration all memory pages of the virtual machine are copied; each subsequent iteration transfers only the pages dirtied during the previous round. This iterative stage is called the warm-up phase. When the number of remaining dirty pages becomes small enough, or the maximum number of rounds is reached, the warm-up phase ends and the virtual machine is suspended. The CPU state and the remaining dirty pages are then transferred, after which the virtual machine resumes on the destination from exactly where it left off. The interval between suspending the VM on the source and resuming it on the destination is known as the downtime. [7]
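
As a rough illustration, the iterative structure described above can be sketched in pseudocode. The helper functions (transfer_pages, dirty_pages, suspend, resume, transfer_cpu_state) and the thresholds below are placeholders introduced only for this sketch; they are not the interface of QEMU-KVM or any other hypervisor.

# Illustrative pre-copy sketch (Python); all helpers and thresholds are hypothetical.
def precopy_migrate(vm, src, dst, max_rounds=30, dirty_threshold=50):
    transfer_pages(vm.all_pages(), src, dst)        # round 0: copy all memory pages
    for _ in range(max_rounds):                     # warm-up phase
        dirtied = dirty_pages(vm)                   # pages modified since the last round
        if len(dirtied) <= dirty_threshold:         # converged enough to stop iterating
            break
        transfer_pages(dirtied, src, dst)
    suspend(vm, src)                                # downtime starts here
    transfer_pages(dirty_pages(vm), src, dst)       # stop-and-copy: last dirty pages
    transfer_cpu_state(vm, src, dst)
    resume(vm, dst)                                 # downtime ends here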

3.2. Post-copy memory migration:

As shown in the figure, post-copy migration is initiated by suspending the virtual machine at the source. While it is suspended, a minimal subset of the VM's execution state (the CPU state, registers and other non-pageable memory) is copied to the target, and the virtual machine is then resumed on the destination host. In the background, the source continues to push the remaining memory pages to the destination; if the resumed VM tries to access a page that has not yet arrived, a page fault is generated. [5]

These faults, known as network faults, are trapped at the destination and redirected to the source, which responds with the faulted page. [6]

With post-copy, the total migration time (and the time needed to evict the VM from the source) is comparable to, and sometimes shorter than, that of pre-copy migration.
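
Using the same placeholder helpers as in the pre-copy sketch, the post-copy flow and its demand fetching of faulted pages can be sketched as follows; real hypervisors implement this inside the hypervisor and kernel rather than in guest-visible code.

# Illustrative post-copy sketch (Python); all helpers are hypothetical.
def postcopy_migrate(vm, src, dst):
    suspend(vm, src)
    transfer_cpu_state(vm, src, dst)               # minimal execution state only
    resume(vm, dst)                                # VM runs at the destination immediately
    background_push(vm.all_pages(), src, dst)      # source keeps pushing remaining pages

def on_page_fault(vm, page, src, dst):
    # A "network fault": the page has not yet arrived at the destination,
    # so it is fetched on demand from the source before the VM continues.
    transfer_pages([page], src, dst)
    mark_present(vm, page)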

Having described both the pre-copy and post-copy mechanisms, it is interesting to compare their efficiency, their downtime and their total migration time, which indicates which process is faster and smoother. The figure below compares the pre-copy and post-copy migration strategies.

From the figure it can be observed that the post-copy technique takes less time, which is why it is usually preferred over the pre-copy technique in this comparison.

3.3. Hybrid migration:

For hybrid migration, a combined approach has been proposed. As in post-copy, the registers and device state are transferred first, but in addition a small working set of memory is copied before the VM is resumed at the destination. If this small amount of memory is already present at the destination host when execution is handed over, the number of network page faults can be reduced compared with pure post-copy, so the degraded response time in this strategy becomes shorter than in the post-copy case.

Considering the performance of these strategies, migration can take place across different networks such as a LAN, MAN or WAN, and a virtual machine can move from one country or continent to another. [5]

4. Popular cloud workloads:

A workload in a hybrid cloud environment is an independent service, or a collection of code and data, that can be executed. Because a cloud workload runs across the whole stack of computing resources, it can also be thought of as the total amount of work that the computing resources have to perform in a given period of time. [8]

According to industry experts, the application, the operating system and the middleware are all part of the definition of a cloud workload. Each cloud workload therefore has its own characteristics, and the best platform to run it on depends on the nature of that particular workload. [9]

4.1. Workload statistics (cloud):

Cloud computing workloads are distributed across public and private clouds and among several major providers, the largest being Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP). The types of job run on each platform differ in multiple ways and tend to vary, as described in a recent report from TECHnalysis Research. [10]

The mix of cloud workloads also varies by provider. For example, while databases are the number-one workload on Azure and GCP, the top workload on AWS is web/content hosting, as shown below. [10]
4.2. Choosing a benchmark for testing:
The rationale for choosing the benchmark is shown in the diagram below. It can be observed that MySQL tends to be CPU-bound when the Olio database is active, while Apache/PHP tends to be memory-bound. In this study the system under test (SUT) is analyzed by splitting the workload across two or more networked VMs, in this case two. These networked VMs are hosted on two different nodes, which makes it easier to partition the physical resources. [8]

All of these nodes mount a Network File System (NFS) share exported by the storage device and mounted on the head node, which holds the virtual machine images and virtual disks. In addition, a local virtual disk is mounted on the server that runs MySQL. The load is driven from the head node, where the multi-threaded workload generator runs together with Faban's master component. [9]

5. Parameters:
The migration of a virtual machine is characterized by its total migration time and its total downtime. The total migration time is the period during which both virtual machines are being synchronized, which may affect the reliability of the service. Downtime is the interval during which execution of the virtual machine is suspended. Total migration time is the time spent in all six stages of migration, whereas total downtime is the time required for the last three stages. Equations for both are given below.
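With T_i denoting the duration of migration stage i, where stages 1-6 are the six migration stages and stages 4-6 are stop-and-copy, commitment and activation, the two metrics can be written as

T_{\text{migration}} = \sum_{i=1}^{6} T_i

T_{\text{downtime}} = \sum_{i=4}^{6} T_i = T_{\text{stop-and-copy}} + T_{\text{commitment}} + T_{\text{activation}}

so the total downtime is a sub-part of the total migration time.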

There are multiple factors that affect migration, and they need to be understood in order to model migration accurately. In this section we identify these factors and briefly discuss their effects.

Migration link bandwidth:

Migration link bandwidth is one of the most significant parameters for migration performance. The link bandwidth is roughly inversely proportional to both the total downtime and the total migration time.

Page dirty rate:

The page dirty rate is the rate at which the running virtual machine modifies its memory pages; it determines how many pages must be re-sent in each pre-copy iteration.

5.1 Memory Access Features:

Direct memory access (DMA) is a technique in which data is transferred between main memory and input/output devices, in either direction, independently of the processor.

The principle of accessing main memory independently for input/output is shown in the following figure. During such a transfer, the DMA controller sends signals to the processor known as Bus Request (BR) signals.

5.2. Downtime:
Downtime is the interval during which the execution of the virtual machine is suspended. It consists of the last three migration stages, stop-and-copy, commitment and activation, as captured by the downtime equation above.

For downtime measurement, the following data are recorded: the start time of the migration, the end time of the migration, and the product IDs and serial numbers.

5.3. Total Migration Time:

The total migration time is the period during which both virtual machines are being synchronized, which may affect the reliability of the service.

In other words, it is the time from the start of the migration process until the destination virtual machine has taken over and the source virtual machine can be discarded; it consists of the six stages summed in the migration-time equation above. Total downtime is therefore a sub-part of the total migration time.

5.4. Network Traffic:

Network traffic is the total amount of data moving across a network in a given period of time. The data is organized into network packets, which constitute the load on the network. This parameter is used for network traffic measurement, control and simulation.

6. Experimental Test Bed Setup:

A Local Area Network (LAN) was selected for the experimental test bed. One storage server and two client servers were used in this work. As shown in the figure, the hypervisors were installed on the two client servers, while all image (ISO) files were stored on the storage server.

For the Network File System (NFS), an Ubuntu server was first installed on one PC to export the shared storage; all VM images therefore reside on the Ubuntu storage server and are accessed from the QEMU hosts.

Two Ubuntu desktops were then installed, followed by the KVM hypervisor with QEMU, libvirt and VMM (Virtual Machine Manager), on the two hosts (Host A and Host B). Several VMs were then created on both desktops using the NFS shared storage, with the VM images placed in the required directory. These VMs were used for testing the various workloads. As shown in the figure, Host A was set as the source host and Host B as the target host, and live migration was then performed between them.

Variable workloads were applied to the VMs, namely Media Streaming, two different Web Serving user sets (one with 250 users and one with 500 users) and Web Search.

After this step, live migration of the VMs was performed and many experiments were carried out to find the MAF (memory access features) of the VMs while running the different workloads under different conditions, and to measure the downtime, total migration time and network traffic as a function of the workload.

6.1. Hardware Setup:

The source host, the target host and the shared-storage server machine have the following configuration:
• Intel Core i7-4790, 3.6 GHz x 4
• 8 GB of RAM
• 2 TB hard drive
• Intel desktop board, Linux Ubuntu Server 16.04 (open source) and a 100 Mb/s Ethernet network connection.

The virtual machines have the following configuration:

• The idle VM has 2 virtual CPUs, 1 GB of memory, a 20 GB hard drive and shared internet access from the host. The Data Analytics benchmark requires a large amount of RAM; its VM therefore has 3584 MB of memory, 4 CPUs, a 20 GB hard drive and shares the internet connection from the host. The Media Streaming and TPC-W benchmarks have the same configuration:
• 1024 MB of memory
• 2 virtual CPUs
• a 20 GB hard drive and a shared internet connection from the host.
• The same configuration is allotted to the Web Serving benchmark.

QEMU-KVM & VMV libvirt VMM Setup

QEMU is the foremost software that generates the hardware that is employed by guest OS identical to
keyboard, and network card. It can track unaccompanied, but as it accomplished exclusively in software,
it develops intensely slow. For its operation, KVM is basically a Kernel Virtual Machine. QEMU KVM is
that kind of KVM in which there is a hypervisor, this hypervisor is based into a Linux Kernel. This Linux
Kernel is basically based on the QEMU due to which it is said to be a QEMU KVM. In this modern era of
science and technology, KVM has got development attention and is used in almost every field of Live
Migration. A special type of AMD-V and VT-x is required in order to perform the testing using the
QEMU KVM. In order to operate QEMU KVM, the host in it must be Linux. KVM is basically a very
simple process and can easily be performed. KVM can run directly from a command line as given below.
There is a need to remember all the steps of running a KVM module and hence the commands associated
to it although it’s a painful task to be performed. KVM module uses such type of above mentioned codes
in order to perform the operations with more precision and accuracy [34]. Though, previously by means
of QEMU-KVIRTUAL MACHINE , the libvirt library required to connect which yield maintenance of
the boundary with dissimilar virtualization equipments. The hardware wires virtualization allowance for
KVIRTUAL MACHINE (e.g., hardware used for this work) merely underneath command line is
employed to connect QEMU-KVIRTUAL MACHINE :

install qemu-kvm libvirt-bin

Then, the operator cloud (in this work) were new. Furthermore, to understand and effort suitably through
VMs a Graphical User Interface (GUI) is recycled. In this exertion, Virtual Machine Viewer (VMV) is
employed by means of beneath command line:

install virt-viewer

After that, VIRTUAL MACHINEs are shaped affording to the necessities. It provides informal controller
of VIRTUAL MACHINEs over the server. VIRTUAL MACHINEs can simply remove, generate and
migrate from the Virtual Machine Viewer (VMV).

6.2. Network Storage Setup:

Since this work focuses on VM migration over a LAN, the Network File System (NFS) has been used. NFS is a data-storage server attached to the network that offers file access to its clients. Access control for an NFS share differs from that of a local virtual disk: NFS servers are usually unwilling to grant access as the superuser, whereas servers such as CIFS and AFS grant permissions through authenticated networking principals. The nfs-kernel-server package on the storage server and nfs-common on the client side are used to set up this service. The OS image of the Linux desktop is stored in the directory below:

/var/lib/libvirt/images

This directory is the default location of libvirt VM images and is shared with the clients after the image has been copied into it.

6.3. Server and VMs Configuration

For this set of VM experiments, two client servers and one storage server are used. All of the VMs run on the two client servers. As shown in the experimental test bed setup, the VMs are connected to Server 1, Server 2, the client computer and the NAS (Network Attached Storage). Both servers run CentOS 7 Linux and have the Kernel-based Virtual Machine (KVM) installed. Finally, the Ubuntu 16.04 desktop image is stored in the default libvirt image directory. On the client side, two Ubuntu 16.04 servers are set up; one is selected as the target host and the other as the source host.

6.4. Web Serving:

This benchmark has four tiers:

• Web server
• Database server
• Memcached server
• Clients

The web server runs Elgg and connects to the Memcached and database servers. The clients send requests to log in to the social network [30]. For this work, 300 and 500 simultaneous clients send requests, with a 300-second steady-state time and a 30-second ramp-up time. First, the database server is started with the following command:

docker run -dt --net=host --name=mysql_server cloudsuite/web-serving:db_server ${WEB_SERVER_IP}

Here WEB_SERVER_IP defaults to the host machine's IP address, since host networking is used for this benchmark. The Memcached and web servers are then started with the commands below:

docker run -dt --net=host --name=memcache_server cloudsuite/web-serving:memcached_server

docker run -dt --net=host --name=web_server cloudsuite/web-serving:web_server /etc/bootstrap.sh ${DATABASE_SERVER_IP} ${MEMCACHED_SERVER_IP} ${MAX_PM_CHILDREN}

The database and Memcached server IPs are optional, and MAX_PM_CHILDREN, the number of child processes for PHP-FPM, is set to eighty for this work. All of these servers run together on one VM along with the client. Finally, the command below starts the client, which runs the benchmark:

docker run -p 8080:8080 --net=host --name=faban_client cloudsuite/web-serving:faban_client ${WEB_SERVER_IP} ${LOAD_SCALE}

Here the load scale is 50, which controls the number of users simultaneously logging in to the web servers and exercising the social-networking platform.

6.5. Parameters Measurement Setup:

Five metrics were taken into account to assess the performance of live VM migration, as described above. There are many ways to find the Working Set Size (WSS), the Resident Set Size (RSS), the Proportional Set Size (PSS) and the total network traffic generated by a virtual machine. RSS and PSS are appropriate measures of the amount of memory resident for a process and of how much memory is attributed to it, even when the process is not actively using all of the memory allocated to it. RSS and PSS can be obtained directly from the Linux kernel, since the kernel maintains them. However, the WSS depends on which pages the application actually touches over an interval, which the kernel does not track directly, so it is harder to obtain the WSS from the kernel alone. Below is the key portion of the script used to find the WSS, PSS and RSS [33].

# read referenced counts: excerpt from the WSS measurement script that parses
# the Rss, Pss and Referenced fields of /proc/<pid>/smaps.
# ($smaps, $pid, $pausetarget, $snapshot and the $ts* timestamps are set
# earlier in the full script.)
$rss = $pss = $referenced = 0;
open SMAPS, $smaps or die "ERROR: can't open $smaps: $!";
# slurp smaps quickly to minimize unwanted WSS growth during reading:
my @smaps = <SMAPS>;
$ts5 = Time::HiRes::gettimeofday();
close SMAPS;
# resume the paused target process as soon as smaps has been read:
kill 'CONT', $pid if ($pausetarget and $snapshot != -1);
foreach my $line (@smaps) {
    if ($line =~ /^Rss:/) {
        $metric = \$rss;
    } elsif ($line =~ /^Pss:/) {
        $metric = \$pss;
    } elsif ($line =~ /^Referenced:/) {
        $metric = \$referenced;
    } else {
        next;
    }
    # now pay the split cost, after filtering out most lines:
    my ($junk1, $kbytes, $junk2) = split ' ', $line;
    $$metric += $kbytes;
}

# time calculations
if ($snapshot != -1 or $pausetarget) {
    $sleeptime = $ts4 - $ts3;
} else {
    $sleeptime += $ts4 - $ts3;
}
$readtime = $ts5 - $ts4;
$durtime = $ts5 - $ts1;
if ($pausetarget) {
    $esttime = $ts4 - $ts3;
} else {
    $esttime = $durtime - ($settime / 2) - ($readtime / 2);
}
The script quoted above is deployed on all virtual machines and started at a specified time to determine the WSS; using the process identifier (PID), it can capture the WSS over a given interval. The collected figures are stored in a text file while the script is running.

Furthermore, to find the total migration time (TMT) and the downtime, the two shell scripts below are run on the source and target servers respectively, since the virtualization layer does not expose these performance metrics directly:
#!/bin/bash
sudo virsh migrate --live --unsafe migratedVMName qemu+ssh://username@targetHostIP/system &
sudo ping -i 0.01 migratedVMIP | ts '[%.S]' >> OutputFileName.txt

This script initiates the migration from the source side and, at the same time, pings the virtual machine that is being migrated from the source to the destination at an interval of 0.01 seconds, saving the time-stamped results to a text file.

#!/bin/bash
sudo ping -i 0.01 migratedVMIP | ts '[%.S]' >> OutputFileName.txt

On the target server, the shell script above runs a very high-frequency ping (one packet every 1/100 of a second) to the migrated virtual machine and saves the results in a text file. Finally, the total network traffic of the migration is measured on the target server using NetHogs, which shows how much network traffic is received by the target server from the source host, broken down per process. For example, if the migrated virtual machine has the process identifier 11768, NetHogs shows exactly how much network traffic is received by the target server under that PID.

In this work, for every benchmark a ramp-up period is allowed before performance statistics are collected. The gathered statistics cover the entire runtime of each benchmark.

6.5.1. Benchmarks:
The benchmark used in this live-migration experiment is SPECjvm2008. This benchmark:

• consists of 39 different workloads, since testing is performed under a variety of loads;
• uses these workloads to measure the performance of the Java Virtual Machine (JVM) and of the underlying hardware;
• sets a default run time for each workload of 4 minutes (240 seconds), so that the testing can be performed accurately.

The benchmark can be run in two modes:

1. Base
2. Peak

6.5.2. Base:
In base mode each workload runs for the default duration of 240 seconds: 120 seconds are used for warm-up and the operation then continues for the remaining 120 seconds.

6.5.3. Peak:
When the benchmark is run in peak mode, the user is allowed to tune the Java Virtual Machine in order to raise the performance.

In this experiment we used 21 different benchmark workloads, as listed in the table below. Each benchmark group has a name, and the workloads belonging to it are listed alongside; as the table shows, a single benchmark group may contain more than one workload. The 21 workloads were selected according to their benchmark group, whose name forms part of the workload identifier used in the migration test scripts. The next part of this section describes the workloads of each benchmark group, their characteristics and how they operate, i.e., the procedure each follows to perform its testing and produce accurate results. Different workloads stress different aspects of the system: some rely on floating-point calculations, others on data-access patterns and similar operations, as described below. [15]

Each workload exhibits its own loading behaviour, depending on its type and characteristics. The workloads are run individually, and after a fixed period the migration is performed by transferring the running VM from one host to the other. In this experiment, each workload is started and, one minute later, the VM is migrated from Server 1 to Server 2 of the test bed with the help of the client computer, as shown in the experimental setup for live VM migration. The total migration time obtained for each type of workload is given in the following sections.

Figure 1 SPECjvm2008 Benchmark workloads

1. Compiler:
• Two different workloads are used in this group, namely compiler.compiler and compiler.sunflow
• The compiler.compiler workload measures the time taken to compile OpenJDK
• The compiler.sunflow workload measures the time taken to compile the sunflow benchmark

2. Compress:
• Only one workload is used in this group, the compress workload
• It uses the Lempel-Ziv (LZW) method to compress and decompress data
• The compression algorithm accesses the input data in a pseudo-random pattern
• This workload is likewise migrated from Server 1 to Server 2 during the live-migration test

3. Crypto:
• Three different workloads are used in this benchmark group
• The workloads exercise cryptographic operations
• The testing covers not only the JVM's execution but also the crypto protocols provided by the vendor
• The tests use the AES and DES cipher protocols

4. Derby:
• One workload is used in this benchmark group
• It uses an open-source database written in pure Java (Derby)
• The Derby workload exercises three kinds of operations: decimal operations, synchronization and database operations. [15]

5. Mpegaudio:
• One workload is used in this benchmark group
• Mpegaudio is heavily based on floating-point calculations
• The input data files used range in size from about 20 kilobytes (KB) to 3 megabytes (MB)

6. Scimark Large:
• Five different workloads are used in this benchmark group
• The main purpose of SciMark is to evaluate the data-access patterns exercised by the various workloads
• These workloads are floating-point intensive, testing the precision and accuracy of the results.

7. Scimark Small:

• Five different workloads are used in this benchmark group
• Like SciMark Large, the SciMark Small workloads use various floating-point calculations
• Each workload in this group is divided into two workgroups according to the dataset size

8. Serial:
• One workload is used in this benchmark group
• Its main objective is to exercise the serialization and deserialization of objects and primitives
• The dataset is derived from the JBoss benchmark, so the results are accurate and representative

9. Sunflow:
• One workload is used in this benchmark group
• It is a multi-threaded workload that runs a large number of bundles of internally dependent threads
• The way the work is distributed in Sunflow can be reconfigured

10. Xml:
• Two different workloads are used in this benchmark group
• Both workloads exercise string operations

7. Performance Evaluation

7.1 Test VM:

1. On Server 1, a VM was created with the help of KVM, and its definition (in XML format) was saved on the Network Attached Storage (NAS). The VM runs Ubuntu 16.04 LTS, and the SPECjvm2008 benchmark was used to perform the testing under different loads. [16]
2. Three different VMs with suitable hardware capabilities were used:
   a. VM 1 with 1 CPU and 1 GB of RAM
   b. VM 2 with 2 CPUs and 2 GB of RAM
   c. VM 3 with 3 CPUs and 3 GB of RAM
3. In this VM testing, 21 different workloads provided by the SPECjvm2008 benchmark were used, each run separately in base mode.
4. After starting a workload at the beginning of an experiment, the VM running that individual workload is migrated from Server 1 to Server 2 after 1 minute.

Figure 2 Virtual Machine Live Migration of Shared Storage

5. The whole process of applying the workload and migrating the VM is driven by the client computer in an automated way.
6. A bash script was written to let the client computer run the SPECjvm2008 benchmark on the virtual machine; it also triggers the live migration between Server 1 and Server 2 of the test bed (a sketch of this automation is given after this list).
7. Following the above steps, the client starts the VM and then executes the workloads; one minute after the workload starts, the client migrates the VM from Server 1 to Server 2.
8. After triggering the migration, the client records the start and end times of the live migration in the log files.
9. The run time of each workload is recorded every second by the script used during the VM testing.
10. A script called the "memusg" script is used to record the memory usage of the processes.
11. The top utility is used to record memory use and CPU utilisation during the live migration.
12. The bash script is run 10 times for every workload on the virtual machine, and the virtual machine is restarted after each run.
13. The scripts used during the testing are self-contained, so other researchers can repeat the testing on their own hardware.
14. The percentage of load applied and the run time of the workloads can also be modified.
15. The scripts used during the testing record the start and end times of:
   i. the workload
   ii. the live migration
   iii. the memory used by the workload [16]
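
The bash automation itself is not reproduced in this document; the sketch below shows the same control flow (start a workload, migrate the VM after one minute, log the migration start and end times, restart the VM) written in Python with subprocess calls to virsh. The target URI, the VM name and the command used to launch the workload inside the VM are placeholder values, not the actual ones used in this test bed.

import subprocess, time
from datetime import datetime

VM_NAME = "testvm"                                   # hypothetical libvirt domain name
TARGET_URI = "qemu+ssh://user@server2/system"        # hypothetical target host URI

def run_workload_and_migrate(workload, runs=10, migrate_after=60):
    for i in range(runs):
        # start the SPECjvm2008 workload inside the VM (placeholder command;
        # the real benchmark is launched through the SPECjvm2008 harness)
        subprocess.Popen(["ssh", VM_NAME, "run-specjvm2008", "-base", workload])
        time.sleep(migrate_after)                    # let the workload run for one minute
        start = datetime.now()
        # live-migrate the VM from Server 1 to Server 2
        subprocess.run(["virsh", "migrate", "--live", "--unsafe",
                        VM_NAME, TARGET_URI], check=True)
        end = datetime.now()
        with open("migration_log.txt", "a") as log:  # record start/end time of the migration
            log.write(f"{workload},run{i},{start},{end}\n")
        # restart the VM before the next run, as described in step 12
        subprocess.run(["virsh", "reboot", VM_NAME], check=True)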

7.2 TMT and Downtime

TMT (total migration time) is one of the metrics used in the VM testing. As described above, 21 different workloads were used during the testing, and the migration time of each can be read off after the tests. The following results are reported:

Downtime of each VM (Idle VM, Media Streaming, Web Serving 250, Web Serving 500 and Web Search)

Overall migration time of each VM

Total traffic generated by each VM when migrating from one host to another

Memory access features (Media Streaming, Web Serving 250, Web Serving 500 and Web Search)

The result figures follow.


Downtime

Fig: Jobinfo to see migration status

TMT (Total Migration Time)


Network traffic as measured by the lightweight application NetHogs

Fig: Idle VM
Fig: webserving 250

Fig: Webserving 500

Fig: Media streaming

Memory Access Features (MAF)

One of the most important and crucial aspects of VM live migration is Memory Migration which provides
continuous service to users. It can be understood from the method of pre-copy migration that the
migration performance of a VM is highly related to how quick the VM disturbs its memory pages (i.e.,
memory dirty rate), and the finishing conditions have been set by the underlying hypervisor. So now if
these iterations have been terminated too early then there will be raise in service downtime and migration
time of VM in the larger dirty rate.

After the iteration phase there will be need of more dirtied pages to transfer, more downtime and there
will be encounter of migration time. Adding more to this, a VM which is functioning with variable
workloads will have variable memory access feature. More, it is very hard to guarantee a very excellent
performance of migration with static termination conditions of variable VMs; those are functioning at
variable workloads. For example, CPU- workloads will have variable MAFs as compare to memory-
intensive workloads.

WSS known as working set size usually have been used to determine that how much VM is dirting its
memory pages that is also known as memory dirty rate of a VM. Working set size is defined as the
collection of pages which were most recently accessed by the VM. For Example, a workload could had
been 200 Mbytes of main memory and pages that collected, but its only touching 80 Mbytes memory per
second to carry on the task which is not standing with predictions and that is the working site size, which
could be hot memory used by the VM usually.

To manage memory resources effectively, the WSS of each workload must be estimated accurately at any given point in time. Depending on the WSS, a large number of dirty pages may have to be re-transmitted in successive iterations; migration then takes substantially longer and the downtime can become very large. Moreover, when the memory dirty rate exceeds the network bandwidth available for the migration, the migration cannot complete at all. In other words, the memory dirtying rate determines whether the iterative phase of pre-copy can reach a valid end point. Some pages remain untouched during the whole lifetime of the VM, while others are modified frequently. The fundamental question for pre-copy migration is therefore: how does one decide when it is time to end the pre-copy phase, given that an excessive amount of time and resources would otherwise be wasted? At one extreme, a single pre-copy pass over every memory page is sufficient to move a consistent image to the target if the VM being migrated never changes its memory. At the other extreme, when pages are dirtied faster than they can be copied, one should stop and copy immediately, because all pre-copy work would be futile. The migration overhead caused by the WSS is generally acceptable, but it should not be neglected.
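
A simplified model is often used to reason about this convergence (stated here only for illustration, not derived from the measurements in this report): if V_0 is the memory that must initially be copied, D the dirty rate and B the available copy bandwidth, the data remaining after iteration i of pre-copy is roughly

    V_i ≈ V_0 · (D / B)^i,    final stop-and-copy downtime ≈ V_n / B

so pre-copy only converges when D < B. Taking the earlier example of an 80 MB/s working set and assuming, say, a 1 Gbit/s link (about 125 MB/s), D/B ≈ 0.64 and the remaining data shrinks every round; if D were to reach or exceed B, the iterations would make no progress and the hypervisor should fall back to stop-and-copy immediately.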

On the other hand, different VMs running different workloads are configured differently and will certainly have different MAFs depending on those workloads. Hence, no fixed termination condition is suitable for all VMs with all workloads. For some workloads a large number of pages is rarely modified, while there may also be a large set of memory pages that is updated all the time; this characteristic varies from workload to workload. The frequently updated pages are the ones that have to be transferred in the stop-and-copy phase, and they form the WSS of that particular workload. Moreover, the dirty pages contain the stack and the local variables accessed by the currently executing processes, as well as the pages used for network and disk traffic.

In prior work, four benchmarks with different MAFs were tested; they needed two, three or four pre-copy iterations before the final stop-and-copy round. This work likewise shows the different MAFs of different workloads (see the conclusion section). Live migration of a VM therefore produces different results depending on the point in time at which the migration starts and on the characteristics of the workload. This is the main motivation for the present work, as little attention has been paid to this issue so far.

MAF results

Fig: Media Streaming

Fig: Webserving 250 and 500


Fig: WSS of different workloads for Webserving 250 and 500

Fig: Websearch
8. Related Work:
This project on Virtual Machine Live Migration essentially consists of migrating different kinds of workloads from Server 1 to Server 2 while the VMs are under test. VM migration can also serve users who work on different workloads or physical hosts at different times by moving their data for them: for example, a user might migrate an OS instance to the computer at home while driving home from work. Work in that setting aims to optimise the transferred data for slow links, which otherwise make the task take a long time; it may therefore slow down or even stop execution of the operating system while the data is moved, in order to reduce the size of the image that has to be transferred. Our main concern, in contrast, is the live migration time on fast networks with downtimes of tens of milliseconds. [20]

Live migration has been a topic of interest for a long time, going back to the early 1980s. One researcher who studied real-time applications related to live migration focused on the problems caused by the residual dependencies of a migrated process. He concluded that a migrated process keeps depending on the machine it was migrated from for a considerable period of time. [21] Examples of such residual dependencies include:

1. Open file descriptors
2. Shared memory segments
3. Local resources through which data is manipulated

These dependencies are undesirable because the source machine must remain continuously available for as long as they exist, which affects the migration process negatively.

9. Conclusion:
• In this Virtual Machine Live Migration testing, we applied 21 different workloads with various characteristics to the virtual machines. KVM was used as the hypervisor and SPECjvm2008 as the benchmark suite in order to generate the different kinds of workloads for the tests.
• A bash script was used during the experimentation to run the various workloads.
• The results obtained from the testing show the live migration time and the memory size of each virtual machine.
• There is a direct relation between memory size and live migration time: as the memory size increases, the total migration time increases as well. [18]
• Our results also indicate that, for a single workload, the live migration time is not determined by the CPU utilisation of the virtual machine being migrated. [21]
• The results further show a connection between the network bandwidth and the live migration time.
• After analysing the results, we conclude that the live migration time depends on the bandwidth of the network used during the testing.
• In other words, virtual machine live migration benefits most from a high-bandwidth network. [17]
• This test bed can also be used to record the energy conservation of virtual machine live migration; through such measurements, energy-efficient configurations for running VMs can be predicted and identified.
• With the results obtained from the KVM experiments, we will in future be able to compare against the other hypervisors on the market.
• Following this testing, we plan to contribute further by adding a simulation tool to the Virtual Machine Live Migration setup, which will allow us to calculate the energy conservation.

10. References:
1. Srinarayan Sharma, Virtualization: A Review and Future Directions. Executive Overview.
2. Christian Limpach, I.P., Andrew Warfield, Virtualization Overview.
3. Hypervisor.
4. What is a Hypervisor.
5. Krushi Damania, S.H., An Overview of VM Live Migration Strategies and Technologies.
6. Arab, H.B., Virtual Machines Live Migration.
7. Virtual Machine Guide. VMware, Inc., 3145 Porter Drive, Palo Alto, CA 94304.
8. Surath Liyanage, S.K., Virtual Machine Migration Strategy in Cloud Computing. 14th International Symposium on Distributed Computing and Applications for Business Engineering and Science (DCABES).
9. William Voorsluys, J.B., Srikumar Venugopal, et al., Cost of Virtual Machine Live Migration in Clouds: A Performance Evaluation. September 2011.
10. Ramel, D., What workloads run best on which clouds?
11. Aljarih, O., VM Live Migration.
12. Osama Alrajeh, M.F., Performance of Virtual Machine Live Migration with Various Workloads.
13. Michael Hines, U.D., Post-copy live migration of virtual machines. ACM SIGOPS Operating Systems Review.
14. Steven Hand, J.G.H., Live migration of virtual machines. 2nd Symposium on Networked Systems Design and Implementation (USENIX NSDI '05), Boston, MA, USA, Volume 2.
15. Cerroni, W., Network performance of multiple virtual machine live migration in cloud federations. Journal of Internet Services and Applications.
16. Bloch, T., What is the best way to set up an experiment environment for cloud performance testing for live VM migration.
17. Hansen, J.G., et al., Live Migration of Virtual Machines.
18. Govil, M.C., A critical survey of live virtual machine migration techniques. Journal of Cloud Computing.
19. Asadullah Shaikh, M.S., Analyzing Virtual Machine Live Migration in Application Data Context.
20. Mohammad, T. and C.S. Eati, A Performance Study of VM Live Migration over the WAN. Master Thesis, Electrical Engineering.
21. Xing Li, Qinming He, Jianhai Chen, Kejiang Ye, Ting Yin, Informed Live Migration Strategies of Virtual Machines for Cluster Load Balancing. IFIP International Conference on Network and Parallel Computing.
