VM Live Migration Strategies Explained
Contents
Abstract
1. Introduction
2. Virtualization
Before Virtualization
After Virtualization
Virtualization Approaches
Hosted Architecture
Hypervisor (Bare-Metal) Architecture
2.1. Hypervisor
Mainframe Origins
Classification
2.2. Type-1 (Native or Bare-Metal) Hypervisors
2.3. Type-2 (Hosted) Hypervisors
3. VM Live Migration Strategies
3.1. Pre-copy Memory Migration
3.2. Post-copy Memory Migration
3.3. Hybrid Migration
4. Popular Cloud Workloads
4.1. Workload Statistics (Cloud)
4.2. Choosing a Benchmark for Testing
5. Parameters
Migration Link Bandwidth
Page Dirty Rate
5.1. Memory Access Features
5.2. Downtime
5.3. Total Migration Time
5.4. Network Traffic
6. Experimental Test Bed Setup
6.1. Hardware Setup
QEMU-KVM & VMV (libvirt) VMM Setup
6.2. Network Storage Setup
6.3. Server and VMs Configuration
6.4. Web Serving
6.5. Parameters Measurement Setup
6.5.1. Benchmarks
6.5.2. Base
6.5.3. Peak
7. Performance Evaluation
7.1. Test VM
7.2. TMT and Downtime
Downtime
TMT (Total Migration Time)
Network Traffic Measured with NetHogs
Memory Access Features (MAF)
MAF Results
8. Related Work
9. Conclusion
10. References
Abstract:
Migrating a running operating system between physical hosts is an important administrative capability for data centers and clusters. Live migration provides a clean separation between the software and the hardware of a running system, and it assists in fault management, load balancing, and low-level system maintenance. Because the OS continues to run while the migration is carried out, high performance can be maintained with very little service downtime. This study discusses how the migration of a complete running OS can be performed smoothly, with recorded service downtimes of no more than a minute. It is observed that live migration performs well in practice even when the servers carry interactive load.
In this paper we build an experimental test bed as a case study and investigate the effects of different parameters on virtual machine live migration. Downtime and total migration time are compared across different design cases. The study also examines how migration behaves while the virtual machines run different services under other constraints. Finally, a performance analysis of VM live migration is carried out, considering the test VM, the total migration time (TMT), and the downtime.
1. Introduction
Operating system virtualization has attracted a great deal of attention in recent years, especially in data collection, data organization, and computing centers. Paravirtualization allows many OS instances to run simultaneously on a single physical machine with good performance, enabling better utilization of the available resources and isolation between individual OS instances.
In this paper we study an additional benefit enabled by virtualization: OS migration. Migrating a complete OS and all of its running applications as a single unit avoids many of the difficulties produced by process-level migration techniques. In particular, the narrow interface between a virtualized OS and the virtual machine monitor reduces the occurrence of 'residual dependencies', in which the original host must remain available to service certain system calls or memory accesses on behalf of migrated processes. With virtual machine migration, the source host may instead be decommissioned once the migration completes. This is especially helpful when the migration is performed in order to permit maintenance of the original host.
Secondly, migrating at the level of the complete virtual machine means that the entire in-memory state is transferred, and this can be done in a consistent and efficient fashion. The diagram below describes the process that takes place during migration.
From the figure, it can be seen how data is transferred from the source node to the destination node and how the communicating node assists in the migration.
Overall, live migration is a powerful tool for system administration: it permits a separation of hardware and software considerations and consolidates a VM's state into a single coherent unit of management.
2. Virtualization:
In the modern business world, the major challenges that CIOs and IT managers face are cost-effective utilization of IT infrastructure, effective response to new business initiatives, and flexibility to accommodate organizational change. IT managers operate under tight budget constraints and strict regulatory demands. Virtualization is a fundamental technology that helps an organization's IT managers deploy innovative solutions to these major business challenges.[1]
The essential idea of virtualization is to separate a resource or request from the physical delivery of that service. In the case of virtual memory, for example, software gains access to more memory than is physically installed, through the background swapping of data to disk storage. In a similar manner, virtualization techniques can be applied to the rest of the IT infrastructure, including networks, storage, laptop or server hardware, and operating systems with their applications.
Before Virtualization:
Single OS per Machine.
Running multiple applications on one machine can create conflicts.
Inflexible and costly infrastructure.
Underutilized resources.
After Virtualization:
The operating system and its applications are independent of the underlying hardware.
Virtual machines can be provisioned on any system.
The OS and its applications can be managed as a single unit by encapsulating them in a virtual machine.
Virtualization Approaches:
Although virtualization has played a key role in enterprise computing for decades, it was only relatively recently (1998) that VMware brought the benefits of virtualization to industry-standard x86-based platforms, which now account for the majority of desktop, laptop, and server shipments. A major advantage of virtualization is the ability to run a variety of operating systems on a single physical system while sharing its hardware, a capability known as partitioning.[2]
Hosted Architecture:
Installs and runs as an application.
Relies on the host OS for hardware support and physical resource management.
2.1. Hypervisor:
The term hypervisor is a variant of supervisor, the traditional term for the kernel of an operating system: the hypervisor can be regarded as the supervisor of the supervisors, operating a level above them.[3]
Mainframe origins:
The first hypervisors were IBM's SIMMON and CP-40, research tools that provided full virtualization. CP-40 began production use in 1967 and became the first version of IBM's CP/CMS operating system. CP-40 ran on an S/360-40 that had been modified at the IBM Cambridge Scientific Center to support dynamic address translation, a feature that enables virtualization. Before this time, computer hardware had been virtualized only to the extent of allowing multiple user applications to run simultaneously, as in CTSS and the IBM M44/44X. With CP-40, the hardware's supervisor state was virtualized as well, allowing multiple operating systems to run concurrently in separate virtual machine contexts.
Programmers reimplemented CP-40 as CP-67 for the IBM System/360-67, the first production machine capable of full virtualization, first shipped in 1966. This machine provided page-translation-table hardware for virtual memory, together with other techniques that allowed full virtualization of all kernel tasks, including I/O and interrupt handling. (Its ill-fated official operating system, TSS/360, did not employ full virtualization.) Both CP-40 and CP-67 entered commercial use in 1967. CP/CMS was available to IBM customers from 1968 to the early 1970s, in source-code form and without support.
Classification:
In a 1974 article, Formal Requirements for Virtualizable Third Generation Architectures, Gerald J. Popek and Robert P. Goldberg classified hypervisors into two basic types: Type-1 (native or bare-metal) hypervisors, which run directly on the host's hardware, and Type-2 (hosted) hypervisors, which run on a conventional operating system.
The distinction between these two types of hypervisor is not always clear, however. For example, Linux's Kernel-based Virtual Machine (KVM) and FreeBSD's bhyve are kernel modules that effectively convert the host operating system into a Type-1 hypervisor. At the same time, since Linux distributions and FreeBSD remain general-purpose operating systems, with applications competing with the VMs for resources, KVM and bhyve can also be regarded as Type-2 hypervisors.
3. VM Live Migration Strategies:
Memory migration plays the key role in VM live migration strategies. The memory state of a virtual machine can be transferred in several ways; the methodology used generally depends on the purpose and the applications of the work, and the whole process is assisted by the guest operating system.[5]
Although there are a couple of possible techniques that can be used to shift virtual machine memory state from a particular source to a specific destination, two of them are the most common, since data-center vendors prefer them: pre-copy memory migration and post-copy memory migration. A hybrid of the two also plays an important role.[6]
In post-copy migration, page faults raised at the destination for pages that have not yet been transferred are categorized as network faults: they are trapped at the destination and redirected to the source, which responds with the faulted pages.[6]
With the hybrid approach, the total migration time and the eviction time are comparable to, and sometimes less than, those of the pure post-copy and pre-copy techniques.
Having outlined both the pre-copy and the post-copy mechanisms, it is interesting to compare their efficiency and their downtime; comparing the total migration time also shows which process is faster and smoother. In the following figure, this comparison is made between the pre-copy and post-copy migration strategies.
From the figure it can be observed that the post-copy technique takes less time, which is why it is usually preferred over the pre-copy technique.
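To make the two strategies concrete, the sketch below shows how a libvirt-managed KVM guest could be migrated with virsh in the default pre-copy mode and in post-copy mode. The guest and host names (vm1, hostB) are assumptions, not taken from this work:

# Pre-copy (default): iteratively copy memory, then stop and copy the remainder
virsh migrate --live vm1 qemu+ssh://hostB/system

# Post-copy: start the migration with post-copy enabled, then switch over;
# remaining pages are pulled on demand after the VM resumes on the target
virsh migrate --live --postcopy vm1 qemu+ssh://hostB/system &
virsh migrate-postcopy vm1

Hybrid migration corresponds to running a bounded number of pre-copy rounds before making the post-copy switch-over.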
4. Popular Cloud Workloads:
According to industry experts, a cloud workload is defined by its application, operating system, and middleware. Each particular cloud workload therefore has its own properties, and the best platform to run it on depends on the nature of that workload.[9]
All these nodes share a Network File System (NFS) located on a storage device, which is mounted on the head node and stores the virtual machine images and virtual disks. Usually, a local virtual disk is mounted in the particular server that runs MySQL. The load is driven from the head node, where the multi-threaded workload generator runs together with Faban's master component.[9]
5. Parameters:
The migration of a regular virtual machine is characterized by its total migration time and its total downtime. The migration time is the period during which the two virtual machines are synchronized, which may affect the reliability of the machine. Downtime is the period during which the execution of the virtual machine is suspended. The total migration time is thus the time spent in all six stages of the migration, while the total downtime is the time required for the last three stages. The first equation below gives the total migration time; the equation for the total downtime follows.
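Reconstructed from this six-stage description (stages 4-6 being stop-and-copy, commitment, and activation), the two equations are:

$$T_{\text{migration}} = \sum_{i=1}^{6} T_i \qquad\qquad T_{\text{downtime}} = \sum_{i=4}^{6} T_i = T_{\text{stop-and-copy}} + T_{\text{commit}} + T_{\text{activate}}$$

where $T_i$ denotes the time spent in stage $i$ of the migration.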
There are multiple factors that affect migration, and they need to be understood for accurate migration modeling. In this section we identify these factors and briefly discuss their effects.
5.1. Memory Access Features:
The principle of accessing data for input-output independently of the processor, taking it directly from main memory, is shown in the following figure.
During such a data transfer, the DMA controller sends signals to the processor known as BR (Bus Request) signals.
5.2. Downtime:
Downtime is the period during which the execution of the virtual machine is suspended. The total downtime consists of the three final stages, stop-and-copy, commitment, and activation, as captured in the downtime equation above.
During downtime data collection, the following data are recorded: the start time of the migration, the end time of the migration, and the product IDs and serial numbers.
5.3. Total Migration Time:
Total migration time, in other words, runs from when the migration process starts until the second virtual machine can take control of the data and the first virtual machine can be discarded. It consists of six stages, commonly identified as pre-migration, reservation, iterative pre-copy, stop-and-copy, commitment, and activation.
It can thus be seen that the total downtime is a subpart of the total migration time.
6.2. Network Storage Setup:
For the Network File System (NFS), an Ubuntu server was first installed on one PC to provide the shared storage: all VM images reside on the Ubuntu server and are accessed from the QEMU hosts.
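As an illustration, a minimal NFS configuration for sharing the default libvirt image directory could look as follows; the subnet and server name are assumptions:

# On the Ubuntu storage server: export the image directory (/etc/exports)
/var/lib/libvirt/images 192.168.1.0/24(rw,sync,no_root_squash)

# Activate the export and the NFS service
sudo exportfs -ra
sudo systemctl start nfs-kernel-server

# On Host A and Host B: mount the shared image store at the same path
sudo mount -t nfs storage-server:/var/lib/libvirt/images /var/lib/libvirt/images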
After this, two Ubuntu desktops (Host A and Host B) were installed with the KVM hypervisor and QEMU, along with libvirt and VMM (Virtual Machine Manager). Several VMs were then created on both desktops using the NFS shared storage, with the VM images gathered in the desired directory. These VMs were used for testing various workloads. As can be seen from the figure, Host A was set as the source while Host B was selected as the target host, after which the live migration was performed.
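The migration step itself, from Host A to Host B over the shared storage, can be expressed with virsh roughly as follows (domain and host names assumed):

# Live-migrate guest "vm1" to Host B; keep it defined there and
# remove the definition from the source once migration completes
virsh migrate --live --persistent --undefinesource --verbose vm1 qemu+ssh://hostB/system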
Variable workloads were applied to the VMs: media streaming, two different web-serving user sets (one with 250 users and another with 500 users), and web search.
Live migration of the VMs was then performed, and many experiments were carried out to find the memory access features (MAF) of the VMs while running the different workloads under different conditions. Depending on the workload, the downtime, total migration time, and network traffic were measured.
6.3. Server and VMs Configuration:
The source host, the target host, and the shared-storage server machine have the following configuration:
Intel Core i7-4790, 3.6 GHz x 4
8 GB of RAM
2 TB hard drive
Intel desktop board, Linux Ubuntu Server 16.04 (open source), and a 100 Mb/s Ethernet network connection
The virtual machines have the following outline:
The idle VM has a typical configuration of 2 CPUs, 1 GB of memory, a 20 GB hard drive, and a public internet connection. The Data Analytics benchmark requires a large quantity of RAM; it therefore has 3584 MB of memory, 4 CPUs, a 20 GB hard drive, and shares the internet connection from the host. The Media Streaming and TPC-W benchmarks have the same pattern:
1024 MB of memory
2 CPUs
20 GB hard drive, sharing the internet connection from the host
The same resources are allotted for the Web Serving benchmark.
QEMU-KVM & VMV (libvirt) VMM Setup
QEMU is the foremost software component: it provides the emulated hardware used by the guest OS, such as the keyboard and network card. It can run standalone, but because it then executes exclusively in software it becomes intensely slow. KVM (Kernel-based Virtual Machine) supplies the missing speed: QEMU-KVM is QEMU running on top of the hypervisor built into the Linux kernel, which is why the combination is called QEMU-KVM. KVM has received strong development attention in recent years and is used in almost every area of live migration. Hardware virtualization support (Intel VT-x or AMD-V) is required in order to perform testing with QEMU-KVM, and the host must run Linux. KVM is simple to operate and can be run directly from the command line, as shown below.
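The exact invocation is not reproduced in the text; a minimal sketch of booting a guest directly with QEMU-KVM, with the image path and resource sizes assumed, is:

# Boot a disk image with KVM acceleration, 1 GB of RAM and 2 vCPUs
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 -hda /var/lib/libvirt/images/vm1.img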
Remembering all the steps of running a KVM guest this way, and all the commands associated with it, is a painful task, although the KVM module uses such command-line invocations to perform its operations with precision and accuracy [34]. Therefore, instead of driving QEMU-KVM directly, the libvirt library is used to connect to it; libvirt maintains a uniform management interface over dissimilar virtualization tools. On hardware with virtualization extensions for KVM (such as the hardware used in this work), the following command line is employed to connect to QEMU-KVM:
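The command itself does not appear in the text; connecting to the local system-level QEMU-KVM driver through libvirt is typically done as:

# Connect to the QEMU/KVM driver and list all defined guests
virsh --connect qemu:///system list --all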
Additionally, to view and work comfortably with the VMs, a graphical user interface (GUI) is used. In this work, Virtual Machine Viewer (VMV, virt-viewer) is employed, installed with the command below (shown here for an Ubuntu host):
sudo apt install virt-viewer
After that, the VMs are created according to the requirements. The viewer provides easy control of the VMs on the server: VMs can simply be created, removed, and migrated from the Virtual Machine Viewer (VMV).
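The same create/remove/view operations are also available from the command line; a brief sketch, with the domain name vm1 assumed:

virsh define vm1.xml     # register a VM from its XML description
virsh start vm1          # boot it
virt-viewer vm1          # open its graphical console
virsh shutdown vm1       # stop it gracefully
virsh undefine vm1       # remove the VM definition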
/var/lib/libvirt/images
The directory above is the default location of libvirt VM images; it is shared with the client hosts once the images have been transferred into this directory.
6.4. Web Serving:
Here WEB_SERVER_IP defaults to the machine's own IP address, since host networking is used for this benchmark. The Memcached and web servers are then started with the commands below:
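The original start-up commands are not reproduced here; a plausible minimal sketch for the two services is:

# Start memcached as a daemon: 1 GB of cache, default port, memcache user
memcached -d -u memcache -m 1024 -p 11211

# Start the web server that fronts the social-networking workload
sudo systemctl start apache2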
Here the load scale is 50, which controls the number of users that are simultaneously signing in to the web servers and requesting the social-networking pages.
6.5. Parameters Measurement Setup:
The following fragment of the measurement script performs the per-pass time calculations:
# time calculations
# $ts1..$ts5 appear to be timestamps taken at successive points in each pass
if ($snapshot != -1 or $pausetarget) {
    $sleeptime = $ts4 - $ts3;       # sleep time of this pass only
} else {
    $sleeptime += $ts4 - $ts3;      # accumulate sleep time across passes
}
$readtime = $ts5 - $ts4;            # time taken to read the memory statistics
$durtime = $ts5 - $ts1;             # total duration of this pass
if ($pausetarget) {
    $esttime = $ts4 - $ts3;         # estimate: just the pause interval
} else {
    # estimate: total duration minus half the set-up and read overheads
    $esttime = $durtime - ($settime / 2) - ($readtime / 2);
}
The script cited above is deployed in all the virtual machines and started after a precise time to discover the working set size (WSS). It can thereby capture the WSS, by means of the process identifier (PID), within a particular interval. The collected figures are stored in a text file while the script is running.
Furthermore, to find the total migration time (TMT) and the downtime, the two shell scripts below are deployed on the source and the target server respectively, since the virtualization layer does not expose these performance metrics by itself:
The first script generates traffic from the source side: it pings the VM that is currently migrating from the source to the destination at an interval of 0.01 seconds and saves the results in a text file.
On the target server, the shell script above likewise runs a very high-frequency ping (a period of 1/100 s) to the migrated VM and saves the output in a text file. Finally, the total network traffic generated by the test is measured on the target server with NetHogs, which shows how much network traffic is directed to the target server from the source host. For example, if the migrated VM has the process identifier 11768, NetHogs displays precisely how much network traffic is received by the target server under that PID.
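The two shell scripts are not listed in the text; a minimal sketch of the probe and the traffic measurement, with the VM address and network interface as assumptions, is:

#!/bin/bash
# High-frequency ping probe: the gap in the replies brackets the downtime
# (sub-0.2 s intervals may require root on Linux)
VM_IP=192.168.1.50                   # assumed address of the migrating VM
ping -i 0.01 "$VM_IP" > ping_during_migration.txt &

# Per-process network traffic on the target, refreshed every second
sudo nethogs -d 1 eth0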
In this study, a ramp-up period is assigned for every benchmark before performance statistics are gathered; the gathered statistics then cover the whole lifetime of each benchmark run.
6.5.1. Benchmarks:
The benchmark used in this virtual machine live migration experiment is SPECjvm2008. This benchmark:
consists of 39 different workloads, since testing was performed at different loads;
uses these workloads to check the performance of the Java Virtual Machine (JVM) and of the underlying hardware;
assigns each workload a default run time of 4 minutes (240 seconds), so that testing can be performed quite accurately;
runs in one of two modes:
1. Base
2. Peak
6.5.2. Base:
In base mode each workload runs for a specific duration of 240 seconds: the first 120 seconds are used for warm-up, and the operation then continues for the remaining 120 seconds of measurement.
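A base-mode run of a single workload can be launched from the SPECjvm2008 harness roughly as follows; the warm-up and iteration-time flags are recalled from the harness documentation, and the exact syntax should be verified against the kit:

# Run the "compress" workload: 120 s warm-up, 120 s measured iteration
java -jar SPECjvm2008.jar -wt 120s -it 120s compress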
6.5.3. Peak:
When the benchmark runs in peak mode, its users may tune the Java Virtual Machine in order to raise performance.
In this experiment we used 21 different benchmark workloads, as given in the table below. Each benchmark group has a name, and the workloads belonging to it are listed against that name; the table makes it evident that more than one workload test can be performed on a certain benchmark group. The 21 workloads were applied according to benchmark group, and the group name is used when composing the test command for a benchmark in the VM live migration. The next portion discusses the workloads of each benchmark group, their specification, and their working behaviour, i.e., the procedure each follows in order to perform the testing and provide accurate results to the user. Different workloads exercise different operations: some use floating-point calculations, while others exercise data-access patterns and similar operations, as given below.[15]
Each benchmark group thus has different kinds of workloads associated with it, exhibiting different loading behaviour depending on their type and characteristics. Each workload is run individually, and after a certain period of time the migration is performed by transferring the VM from one host to the other. In our procedure, each workload is shifted from Server 1 to Server 2 of the test bed 1 minute after it starts, under the control of the client computer, as shown in the experimental setup for the VM live migration. The total migration time recorded for each type of workload is given in the next sections of this testing.
1. Compiler:
Two different workloads are used in this group: compiler.compiler and compiler.sunflow.
The compiler.compiler workload measures OpenJDK compilation time.
The compiler.sunflow workload measures compilation of the sunflow benchmark.
2. Compress:
Only one workload, compress, is used in this group.
It uses the Lempel-Ziv technique to compress and decompress the load.
The compression algorithm accesses the input data in a pseudo-random pattern.
The workload is transferred from Server 1 to Server 2 during the VM live migration.
3. Crypto:
Three different workloads are used in this group of the benchmark.
During the testing phase of the various loads, it exercises cryptographic operations.
The testing exercises not only the execution of the JVM but also the protocol implementations provided by the vendor.
The testing follows the AES and DES protocols.
4. Derby:
One workload is used in this group of the benchmark.
Derby testing uses a pure-Java database.
Derby tests three different kinds of operations: decimal operations, synchronization, and database operations.[15]
5. Mpegaudio:
One workload is used in this group of the benchmark.
Mpegaudio performs testing that is based on floating-point calculations.
Its input data files range in size from 20 kilobytes (KB) to 3 megabytes (MB).
6. Scimark Large:
Five different workloads are used in this group of the benchmark.
The main purpose of Scimark is to evaluate the data-access patterns exercised by the various load tests.
This load testing also uses floating-point calculations, giving precision and accuracy in the results.
7. Scimark Small:
Five different workloads are used in this group of the benchmark.
Scimark Small uses floating-point calculations similar to Scimark Large.
According to its dataset, each load test in Scimark is divided into the two workgroups (large and small).
8. Serial:
One workload is used in this group of the benchmark.
Its main objective is to exercise the serialization of various objects and the primitives associated with them.
The JBoss benchmark provides the datasets for this benchmark, so that the results are more accurate and precise.
9. Sunflow:
One workload is used in this group of the benchmark.
This workload is multi-threaded and runs quite a large number of bundles of dependent threads.
The workflow used in Sunflow is reconfigurable.
10. Xml:
Two different workloads are used in this group of the benchmark.
Both workloads exercise string operations.
7. Performance Evaluation
7.1. Test VM:
1. In Server 1, a VM was created with the help of KVM, and its image definition in XML format was saved on the Network Attached Storage (NAS). The VM ran Ubuntu 16.04 LTS. The SPECjvm2008 benchmark was used to perform the testing at the different loads.[16]
2. Three different VMs with suitable hardware capacities were used:
a. VM 1 with 1 CPU and 1 GB of RAM
b. VM 2 with 2 CPUs and 2 GB of RAM
c. VM 3 with 3 CPUs and 3 GB of RAM
3. In this testing, 21 different workloads provided by the SPECjvm2008 benchmark were used; each load was operated separately in base mode.
4. After applying a load in the initial stage of the experiment, the individual load was moved from Server 1 to Server 2 after 1 minute.
5. This whole procedure of applying and transferring the load was carried out by the client computer in an automated way.
6. A bash script was made so that the client computer could operate the SPECjvm2008 benchmark on the virtual machine; it also drives the live migration between Server 1 and Server 2 of the test bed setup (a sketch is given after this list).
7. Following the above steps, the client started the VM and executed the workloads; one minute after applying a load, the client transferred it from Server 1 to Server 2.
8. After transferring the load, the client noted the start and end times of the live migration in the log files.
9. Each workload's run time is recorded every second by the script used during the testing of the VM.
10. A script called the "memusg" script was used to record the memory usage of the processes.
11. The top utility was used in addition, to record memory use and CPU utilisation during the VM live migration.
12. The bash script on the virtual machine runs 10 times for every workload, and the VM is restarted after each run.
13. The scripts used during the testing are self-contained, so other researchers can perform the testing on their own hardware.
14. The percentage of load applied, and the run time for which the loads operate, can be modified as well.
15. The script used during the testing records the start and end times of:
i. the workload
ii. the live migration
iii. the memory used by the workload[16]
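The driver script itself is not listed; a minimal sketch of steps 6-8, with host names, the domain name, and paths as assumptions, might be:

#!/bin/bash
# Start one SPECjvm2008 workload inside the VM, migrate after 60 s,
# and log the start/end times of the live migration
WORKLOAD=$1
ssh vm1 "java -jar SPECjvm2008.jar $WORKLOAD" &

sleep 60
echo "migration start: $(date +%s.%N)" >> migration.log
virsh migrate --live vm1 qemu+ssh://server2/system
echo "migration end: $(date +%s.%N)" >> migration.log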
7.2. TMT and Downtime:
The following were measured for each VM (Idle VM, Media Streaming, Web Serving 250, Web Serving 500, Web Search):
Downtime of each VM
Overall migration time of each VM
Total traffic generated by each VM when migrating from one host to another
Memory access features (Media Streaming, Web Serving 250, Web Serving 500, Web Search)
Fig: Idle VM
Fig: Web Serving 250
Memory Access Features (MAF)
One of the most important and crucial aspects of VM live migration is memory migration, which is what provides continuous service to users. From the pre-copy migration method it can be understood that the migration performance of a VM is highly related to how quickly the VM dirties its memory pages (i.e., its memory dirty rate) and to the finishing conditions set by the underlying hypervisor. If these iterations are terminated too early, the service downtime and the migration time of the VM rise under a larger dirty rate.
If more dirtied pages remain to transfer after the iteration phase, more downtime and a longer migration time are incurred. In addition, a VM running variable workloads will have variable memory access features, so it is very hard to guarantee excellent migration performance using static termination conditions for VMs that are running variable workloads. For example, CPU-intensive workloads will have different MAFs compared to memory-intensive workloads.
The working set size (WSS) is usually used to determine how quickly a VM is dirtying its memory pages, that is, the memory dirty rate of the VM. The working set is defined as the collection of pages most recently accessed by the VM. For example, a workload could have 200 MB of main memory and pages allocated, yet touch only 80 MB of memory per second to carry on its task; that 80 MB is the working set size, which is usually the hot memory used by the VM.
To manage memory resources effectively, it is important that the WSS of each workload at hand is accurately approximated at any given point in time. Depending on the WSS, a noteworthy number of dirty pages may be re-transmitted in successive iterations; the migration then takes fundamentally longer, and the downtime can be enormous. Additionally, when the memory dirtying rate is larger than the network bandwidth available for VM migration, the migration cannot be completed at all. In particular, the memory dirtying rate determines whether the iterative phase of pre-copy can reach a legitimate end point. Moreover, some pages remain clean during the whole lifetime of the VM, while others are frequently changed. The fundamental question for pre-copy migration is: how does one decide when it is time to end the pre-copy stage, given the excessive amount of time and resources being wasted? Clearly, a single pre-copy pass over every memory page would be sufficient to move a consistent image to the target if the VM being migrated never changed its memory. Conversely, whenever the rate of dirtying pages is faster than the rate of copying, one should immediately stop and copy, since all further pre-copy work would be futile. The migration overhead due to the WSS is generally acceptable, but it ought not to be dismissed.
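This trade-off can be made precise with a simple iterative model, assumed here for illustration and not taken from this work: let $M$ be the VM memory size, $B$ the migration link bandwidth, and $R$ a constant page dirty rate. Each pre-copy round must resend the memory dirtied during the previous round:

$$T_0 = \frac{M}{B}, \qquad V_i = R\,T_{i-1}, \qquad T_i = \frac{V_i}{B} = \frac{M}{B}\left(\frac{R}{B}\right)^{i}$$

The rounds shrink, and pre-copy converges, only when $R < B$; if $R \ge B$, each round resends at least as much as the last, and the hypervisor must fall back to stop-and-copy.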
Moreover, different VMs running different workloads will have different configurations and will certainly have different MAFs depending on those workloads. Therefore, no fixed termination condition is suitable for all VMs with various workloads. For some workloads there are countless pages that are rarely modified, and possibly a large set of memory pages that are updated all the time; this feature varies from workload to workload. The frequently updated pages are the ones that must be moved in the stop-and-copy stage, and they constitute the WSS for the particular workload. Typically, dirty pages contain the stack and the local variables being accessed within the currently executing processes, together with the pages being used for network and disk traffic.
In prior work, four benchmarks were tested, and they had different MAFs: there were two, three, or four pre-copy iterations before the final stop-and-copy round. This research likewise shows different MAFs for different workloads in the results section. Therefore, performing live migration of a VM will give different results depending on the point at which the migration begins and on the features of the workload; this is the main motivation for this research, since little focus has been given to this issue so far.
MAF results
Fig: Web Search
8. Related Work:
This Virtual Machine Live Migration project essentially consists of different kinds of loads that are transferred from Server 1 to Server 2 during the testing of the VM. VM migration has also been proposed as a tool to let users move their running workloads between physical hosts at different times: for example, a user could transfer a running OS instance to the computer at home as they drive home from work. Work in that setting aims to optimize the data transferred over slow links, where tasks take a long time to complete; it may therefore slow down or stop the execution of the operating system while transferring data from one place to another, with the effect of reducing the size of the image that has to be transferred. Our major area of concern, in contrast, is live migration over fast networks, with downtimes in the tens of milliseconds.[20]
Live migration has been a topic of interest for a very long time, since the early 1980s. One researcher, studying the real-time applications of this work, focused on the problems and issues arising from the residual dependencies of a migrated process; he concluded that a migrated process retains dependencies on the machine from which it was migrated for a considerable period of time.[21] Typical examples of such residual dependencies include open file descriptors, shared memory segments, and other local resources.
9. Conclusion:
In this Virtual Machine Live Migration testing, we applied 21 different workloads with various characteristics across the virtual machines' capacities. KVM was used as the hypervisor, and SPECjvm2008 was used as the benchmark, in order to have different kinds of workloads for the testing.
A bash script was used during the experimentation to run the testing of the various workloads.
The results obtained from the testing show the VM live migration time against the memory size of the virtual machine.
There is a direct relation between memory size and live migration time: as the memory size increases, the total migration time for such testing increases in the same manner.[18]
Our results also show that the live migration time of a single workload does not depend on the CPU utilisation of the virtual machine on the host performing the migration.[21]
From the results it is concluded that there is a connection between the network bandwidth and the live migration time.
After analysing the results, we conclude that the live migration time depends on the bandwidth of the network used during the testing; VM live migration benefits most from a high-bandwidth network.[17]
This testing can also be used to record the energy consumption of VM live migration; with such measurements, energy-efficient cases for VM testing can be predicted and identified.
With the results obtained from this KVM experimentation, we will in the future be able to compare against the other hypervisors on the market.
Following this successful testing, we plan to contribute further by adding a simulation tool for VM live migration, which will allow us to estimate the energy conservation.
10. References:
1. Srinarayan Sharma. Virtualization: A Review and Future Directions. Executive Overview.
2. Christian Limpach, I.P., Andrew Warfield. Virtualization Overview.
3. Hypervisor.
4. What is a Hypervisor.
5. Krushi Damania, S.H. An Overview of VM Live Migration Strategies and Technologies.
6. Arab, H.B. Virtual Machines Live Migration.
7. Virtual Machine Guide. VMware, Inc.
21. Xing Li, Qinming He, Jianhai Chen, Kejiang Ye, Ting Yin. Informed Live Migration Strategies of Virtual Machines for Cluster Load Balancing. IFIP International Conference on Network and Parallel Computing.