
A

Seminar Report
On

HARDWARE VIRTUALISATION
Submitted in partial fulfillment of the
requirements for the award of the degree
of

BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING

Submitted by
SHASHANK TIWARI

Roll No. -1612210089

B. TECH (CS-81)

Under the guidance of

MR. DEVENDRA KUMAR

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


SHRI RAMSWAROOP MEMORIAL GROUP OF PROFESSIONAL COLLEGES,

APJAKTU, LUCKNOW

JAN-2021
ABSTRACT

The goal of this report is to examine the field of hardware virtualisation and to provide an
overview of its core concepts, definitions, and implementation techniques.

Numerous systems have been designed which use virtualization to subdivide the ample resources
of a modern computer. Some require specialized hardware, or cannot support commodity
operating systems. Some target 100% binary compatibility at the expense of performance. Others
sacrifice security or functionality for speed. Few offer resource isolation or performance
guarantees; most provide only best-effort provisioning, risking denial of service.

The goal is to integrate all these facilities into a single virtual machine so that the user does not
have to resort to different resources for different purposes. This integration is one of the central
aims of this report.

ACKNOWLEDGEMENT

I wish to express my profound and sincere gratitude to Mr. Devendra Kumar, Department of
Computer Science and Engineering, SRMGPC, Lucknow, who guided me through the intricacies of
this seminar with matchless magnanimity.
I thank Dr. Atul Kumar, Head of the Department of Computer Science and Engineering,
SRMGPC, Lucknow, for extending his support during the course of this work.

SHASHANK TIWARI
DECLARATION

I, SHASHANK TIWARI, Roll No. 161210089, studying in the final year of Bachelor of
Technology in Computer Science and Engineering at SHRI RAMSWAROOP MEMORIAL
GROUP OF PROFESSIONAL COLLEGES, LUCKNOW, hereby declare that this project work
entitled “HARDWARE VIRTUALISATION” is submitted by me in fulfilment of my
seminar report work for the session 2020-2021.
I further declare that the matter embodied in this report has not been submitted previously by me
for the award of any degree or certificate to any other university or institution. The work done in
this project has been kept confidential. I hereby declare that the above statement is written with
absolute authenticity.
SHASHANK TIWARI
LIST OF SYMBOLS, ABBREVIATIONS AND NOMENCLATURE

 VM: Virtual Machine
 VMM: Virtual Machine Monitor (hypervisor)
 OS: Operating System
 TLB: Translation Lookaside Buffer
 P2V: Physical-to-Virtual
 LPC: Last-chance Page Cache
1. INTRODUCTION

Hardware virtualization is the virtualization of computers as complete hardware platforms, certain logical
abstractions of their componentry, or only the functionality required to run various operating systems.
Virtualization hides the physical characteristics of a computing platform from the users, presenting instead
an abstract computing platform.[1][2] At its origins, the software that controlled virtualization was called a
"control program", but the terms "hypervisor" or "virtual machine monitor" became preferred over time.

The term "virtualization" was coined in the 1960s to refer to a virtual machine (sometimes called "pseudo
machine"), a term which itself dates from the experimental IBM M44/44X system.[3] The creation and
management of virtual machines has more recently been called "platform virtualization" or "server
virtualization".
Platform virtualization is performed on a given hardware platform by host software (a control program),
which creates a simulated computer environment, a virtual machine (VM), for its guest software. The guest
software is not limited to user applications; many hosts allow the execution of complete operating systems.
The guest software executes as if it were running directly on the physical hardware, with several notable
caveats. Access to physical system resources (such as network access, display, keyboard, and disk
storage) is generally managed at a more restrictive level than access to the processor and system memory.
Guests are often restricted from accessing specific peripheral devices, or may be limited to a subset of the
device's native capabilities, depending on the hardware access policy implemented by the virtualization
host.
Virtualization often exacts performance penalties, both in the resources required to run the hypervisor
and in the reduced performance of the virtual machine compared to running natively on the physical
machine.
Working
How Virtualization Happens
(adapted from Diane Barrett and Gregory Kipper, Virtualization and Forensics, 2010)
Virtualizing Hardware Platforms
Hardware virtualization, sometimes called platform or server virtualization, is executed on a particular
hardware platform by host software; essentially, it hides the physical hardware. The host software, which
is in effect a control program, is called a hypervisor. The hypervisor creates a simulated computer
environment for the guest software, which can be anything from user applications to complete OSes. The
guest software performs as if it were running directly on the physical hardware. However, access to
physical resources such as network access and physical ports is usually managed at a more restrictive level
than access to the processor and memory. Guests are often restricted from accessing specific peripheral
devices. Managing network connections and external ports such as USB from inside the guest software
can be challenging. [Figure 1.4: the concept behind virtualizing hardware platforms]

Hardware virtualization layer


The hardware virtualization layer is created by installing Microsoft Hyper-V on one or more compatible
hardware platforms. Hyper-V, Microsoft's entry into the hypervisor market, is a very thin layer that
presents a small attack surface. It can do this because Microsoft does not embed drivers. Instead, Hyper-V
uses vendor-supplied drivers to manage VM hardware requests.
Warning
Hardware targeted for virtualization must itself provide hardware virtualization support.
Each VM exists within a partition, starting with the root partition. The root partition must run Windows
Server 2008 x64 or Windows Server 2008 Core x64. Subsequent partitions, known as child partitions,
usually communicate with the underlying hardware via the root partition. Some calls directly from a child
partition to Hyper-V are possible using WinHv (the Windows hypervisor interface library) if the OS
running in the partition is "enlightened." An enlightened OS understands how to behave in a Hyper-V
environment. Communication is limited for an unenlightened OS partition, and applications there tend to
run much more slowly than those in an enlightened one. The performance issues generally stem from the
emulation software required to interface with hosted services.
Reasons for virtualization
 In the case of server consolidation, many small physical servers are replaced by one larger physical
server to decrease the need for more (costly) hardware resources such as CPUs, and hard drives.
Although hardware is consolidated in virtual environments, typically OSs are not. Instead, each OS
running on a physical server is converted to a distinct OS running inside a virtual machine.
The large server can thereby "host" many such "guest" virtual machines. This is known as
Physical-to-Virtual (P2V) transformation.
 In addition to reducing equipment and labor costs associated with equipment maintenance,
consolidating servers can also have the added benefit of reducing energy consumption and the
environmental footprint of the technology sector. For example, a typical server
runs at 425 W[4] and VMware estimates a hardware reduction ratio of up to 15:1.[5]
 A virtual machine (VM) can be more easily controlled and inspected from a remote site than a
physical machine, and the configuration of a VM is more flexible. This is very useful in kernel
development and for teaching operating system courses, including running legacy operating
systems that do not support modern hardware.[6]
 A new virtual machine can be provisioned as required without the need for an up-front hardware
purchase.
 A virtual machine can easily be relocated from one physical machine to another as needed. For
example, a salesperson going to a customer can copy a virtual machine with the demonstration
software to their laptop, without the need to transport the physical computer. Likewise, an error
inside a virtual machine does not harm the host system, so there is no risk of the OS crashing on the
laptop.
 Because of this ease of relocation, virtual machines can readily be used in disaster recovery
scenarios without concern over the impact of refurbished or faulty power sources.
Types of Virtualisation
Full virtualization

[Figure: logical diagram of full virtualization]


In full virtualization, the virtual machine simulates enough hardware to allow an unmodified "guest" OS
designed for the same instruction set to be run in isolation. This approach was pioneered in 1966 with the
IBM CP-40 and CP-67, predecessors of the VM family.
Hardware-assisted virtualization
In hardware-assisted virtualization, the hardware provides architectural support that facilitates building a
virtual machine monitor and allows guest OSs to be run in isolation.[7] Hardware-assisted virtualization was
first introduced on the IBM System/370 in 1972, for use with VM/370, the first virtual machine operating
system.
In 2005 and 2006, Intel and AMD provided additional hardware to support virtualization. Sun
Microsystems (now Oracle Corporation) added similar features in their UltraSPARC T-Series processors
in 2005.
In 2006, first-generation 32- and 64-bit x86 hardware support was found to rarely offer performance
advantages over software virtualization.[8]
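
As a concrete illustration, a program can query these processor extensions directly via the CPUID instruction. The sketch below (GCC or Clang on x86) checks the feature bits that Intel VT-x and AMD-V advertise; the bit positions are architectural, but treat the snippet as a minimal sketch rather than production feature detection.

```c
/* Minimal sketch (GCC/Clang, x86): detect hardware virtualization
 * support via CPUID. Intel VT-x is reported in CPUID leaf 1, ECX bit 5
 * (VMX); AMD-V is reported in CPUID leaf 0x80000001, ECX bit 2 (SVM). */
#include <stdio.h>
#include <cpuid.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
        printf("Intel VT-x (VMX) supported\n");

    if (__get_cpuid(0x80000001, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 2)))
        printf("AMD-V (SVM) supported\n");

    return 0;
}
```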
Paravirtualization
In paravirtualization, the virtual machine does not necessarily simulate hardware, but instead (or in
addition) offers a special API that can only be used by modifying the "guest" OS. For this to
be possible, the "guest" OS's source code must be available. If the source code is available, it is sufficient
to replace sensitive instructions with calls to VMM APIs (e.g. "cli" with "vm_handle_cli()"), then
recompile the OS and use the new binaries. This system call to the hypervisor is called a "hypercall" in
TRANGO and Xen; it is implemented via a DIAG ("diagnose") hardware instruction in IBM's CMS under
VM (which was the origin of the term hypervisor).
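
The substitution described above can be pictured with a small sketch. Here `vm_handle_cli()` is a hypothetical VMM API standing in for a real hypercall; the point is that the same kernel source compiles either to the privileged instruction (native build) or to a call into the hypervisor (paravirtualized build).

```c
/* Illustrative sketch only: how a paravirtualized guest kernel might
 * replace the privileged "cli" (clear interrupt flag) instruction with
 * a call into the hypervisor. vm_handle_cli() is a hypothetical VMM
 * API name; real systems (e.g. Xen) expose an equivalent hypercall. */

#ifdef CONFIG_PARAVIRT
/* Paravirtualized build: ask the VMM to mask our virtual interrupts. */
extern void vm_handle_cli(void);          /* hypothetical VMM API */
#define local_irq_disable() vm_handle_cli()
#else
/* Native build: execute the privileged instruction directly. */
#define local_irq_disable() __asm__ volatile("cli" ::: "memory")
#endif

void enter_critical_section(void) {
    local_irq_disable();   /* same source, different binary per target */
    /* ... guest kernel critical section ... */
}
```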
Operating-system-level virtualization
In operating-system-level virtualization, a physical server is virtualized at the operating system level,
enabling multiple isolated and secure virtualized servers to run on a single physical server. The "guest"
operating system environments share the same running instance of the operating system as the host system.
Thus, the same operating system kernel is also used to implement the "guest" environments, and
applications running in a given "guest" environment view it as a stand-alone system.
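
On Linux, this style of virtualization is built from kernel namespaces (the foundation of containers such as LXC and Docker). The minimal sketch below gives a child process its own UTS namespace, so the "guest" can change its hostname without affecting the host while both share one kernel; it requires root privileges and is Linux-specific.

```c
/* Minimal sketch of OS-level virtualization on Linux: all "guests"
 * share the host kernel, but each gets an isolated view of kernel
 * resources via namespaces. Here a child process receives its own
 * UTS namespace, so changing its hostname does not affect the host.
 * Requires root (or CAP_SYS_ADMIN). */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

static int guest_main(void *arg) {
    (void)arg;
    sethostname("guest", 5);       /* visible only inside this namespace */
    char name[64];
    gethostname(name, sizeof(name));
    printf("inside guest namespace: %s\n", name);
    return 0;
}

int main(void) {
    static char stack[1024 * 1024];           /* child stack grows down */
    pid_t pid = clone(guest_main, stack + sizeof(stack),
                      CLONE_NEWUTS | SIGCHLD, NULL);
    waitpid(pid, NULL, 0);
    char name[64];
    gethostname(name, sizeof(name));
    printf("host hostname unchanged: %s\n", name);
    return 0;
}
```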
ADVANTAGES:
Hardware virtualization lowers overall costs for cloud users and increases flexibility.
The main advantages are:
Lower cost: Because of server consolidation, costs decrease; multiple OSes can now coexist on a
single piece of hardware. This minimizes the quantity of rack space, reduces the number of servers, and
ultimately drops power consumption.
Efficient resource utilization: Physical resources can be shared among virtual machines. Resources left
unused by one virtual machine can be used by another when needed.
Increased IT flexibility: Virtualization enables rapid provisioning of hardware resources, and those
resources can then be managed consistently.
Advanced hardware virtualization features: Modern hypervisors support highly sophisticated
operations that maximize the abstraction of hardware and help ensure maximum uptime, such as
dynamically migrating a running virtual machine from one host to another.
i. Economical: Hardware virtualization suits large-scale as well as small-scale industries. Since most of
the budget is spent on hardware, virtualization eliminates much of this cost and benefits the
customer. It also increases the lifespan of the existing hardware, which reduces energy costs.
ii. Efficient backup and recovery
Disasters are unexpected, and data can be destroyed in seconds. Virtualization makes recovery
easier and more accurate, with less manpower and far fewer resources.
iii. Efficient IT operations
Virtualization gives IT staff an easier way to install and maintain software, rather than maintaining
hardware. Everything can be done from a computer, with less downtime, quicker recovery, and fewer
outages.
iv. Disaster recovery in hardware virtualization
In the cloud, where operation is continuous, a disaster recovery plan should be in place to ensure
that performance and maintenance requirements are met after the data is retrieved. A disaster recovery
plan in hardware virtualization protects both the hardware and the software, and it can be implemented
by several methods, described below.
v. Tape backup
In this method, data is stored offsite, and recovery can be difficult and time-consuming. A customer
restoring from the latest copy of the data recovers at most what the backup holds. A backup device and
ongoing storage media are required.
vi. File and application replication
Here, the data is replicated to a separate disk, and control software is required for replicating
application and data-file storage, which can be on the same site. This method is mainly used for
database-type applications.
vii. Hardware and software redundancy
This method maintains duplicate hardware and software applications at two separate geographic
locations. It provides the highest level of disaster recovery protection among virtualization
solutions.
The Virtual Machine Interface
Memory Management
Virtualizing memory is undoubtedly the most difficult part of paravirtualizing an architecture, both in
terms of the mechanisms required in the hypervisor and the modifications required to port each guest OS.
The task is easier if the architecture provides a software-managed TLB, as these can be efficiently
virtualized in a simple manner [13]. A tagged TLB is another useful feature supported by most
server-class RISC architectures, including Alpha, MIPS and SPARC. Associating an address-space
identifier tag with each TLB entry allows the hypervisor and each guest OS to efficiently coexist in
separate address spaces, because there is no need to flush the entire TLB when transferring execution.
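
The idea behind a tagged TLB can be made concrete with a small sketch. The struct below is illustrative only (field names and widths are not taken from any particular architecture): an entry hits only when both the virtual page and the address-space identifier (ASID) match, so switching between hypervisor and guest requires changing only the current ASID rather than flushing the whole TLB.

```c
/* Conceptual sketch of a tagged TLB entry. Because each entry carries
 * an ASID, the hypervisor and each guest OS can coexist in the TLB:
 * on a world switch only the current ASID changes, and no global TLB
 * flush is required. Field widths are illustrative. */
#include <stdint.h>
#include <stdbool.h>

typedef struct {
    uint64_t vpn;     /* virtual page number */
    uint64_t pfn;     /* physical frame number */
    uint16_t asid;    /* address-space identifier tag */
    bool     valid;
} tlb_entry_t;

/* An entry hits only if both the virtual page and the ASID match. */
static bool tlb_hit(const tlb_entry_t *e, uint64_t vpn, uint16_t asid) {
    return e->valid && e->vpn == vpn && e->asid == asid;
}

int main(void) {
    tlb_entry_t e = { .vpn = 0x42, .pfn = 0x1234, .asid = 7, .valid = true };
    /* Same page, different address space: miss, but no flush was needed. */
    return (tlb_hit(&e, 0x42, 7) && !tlb_hit(&e, 0x42, 9)) ? 0 : 1;
}
```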

Virtualizing the CPU has several implications for guest OSes. Principally, the insertion of a hypervisor
below the operating system violates the usual assumption that the OS is the most privileged entity in the
system. In order to protect the hypervisor from OS misbehavior (and domains from one another) guest
OSes must be modified to run at a lower privilege level. Many processor architectures only provide two
privilege levels. In these cases the guest OS would share the lower privilege level with applications. The
guest OS would then protect itself by running in a separate address space from its applications, and
indirectly pass control to and from applications via the hypervisor to set the virtual privilege level and
change the current address space. Again, if the processor’s TLB supports address-space tags then
expensive TLB flushes can be avoided.

Rather than emulating existing hardware devices, as is typically done in fully-virtualized environments,
Xen exposes a set of clean and simple device abstractions. This allows us to design an interface that is both
efficient and satisfies our requirements for protection and isolation. To this end, I/O data is transferred to
and from each domain via Xen, using shared-memory, asynchronous buffer-descriptor rings. These provide
a high-performance communication mechanism for passing buffer information vertically through the
system, while allowing Xen to efficiently perform validation checks (for example, checking that buffers
are contained within a domain’s memory reservation).
Detailed Design

Control Transfer: Hypercalls and Events
Two mechanisms exist for control interactions between Xen and an overlying domain: synchronous calls
from a domain to Xen may be made using a hypercall, while notifications are delivered to domains from
Xen using an asynchronous event mechanism. The hypercall interface allows domains to perform a
synchronous software trap into the hypervisor to perform a privileged operation, analogous to the use of
system calls in conventional operating systems. An example use of a hypercall is to request a set of
page-table updates, in which Xen validates and applies a list of updates, returning control to the calling
domain when this is completed.
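
The following sketch illustrates the mechanism in C with inline assembly. The interrupt vector 0x82 matches the software-trap vector used by early 32-bit Xen, but the hypercall number and register conventions here are illustrative assumptions rather than the actual Xen ABI, which varies by version and architecture.

```c
/* Sketch of the hypercall mechanism: a synchronous software trap from
 * the guest into the hypervisor, analogous to a system call. Vector
 * 0x82 matches early 32-bit Xen; treat the constants as illustrative. */
#define HYPERCALL_MMU_UPDATE 1   /* illustrative hypercall number */

static inline long hypercall2(int op, unsigned long a1, unsigned long a2) {
    long ret;
    __asm__ volatile("int $0x82"            /* trap into the hypervisor */
                     : "=a"(ret)
                     : "0"(op), "b"(a1), "c"(a2)
                     : "memory");
    return ret;                             /* status from the hypervisor */
}

/* Example: ask the hypervisor to validate and apply page-table updates. */
static long mmu_update(unsigned long *reqs, int count) {
    return hypercall2(HYPERCALL_MMU_UPDATE,
                      (unsigned long)reqs, (unsigned long)count);
}
```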

Data Transfer: I/O Rings
The presence of a hypervisor means there is an additional protection domain between guest OSes and I/O
devices, so it is crucial that a data transfer mechanism be provided that allows data to move vertically
through the system with as little overhead as possible. Two main factors have shaped the design of our
I/O-transfer mechanism: resource management and event notification. For resource accountability, we
attempt to minimize the work required to demultiplex data to a specific domain when an interrupt is
received from a device; the overhead of managing buffers is carried out later, where computation may be
accounted to the appropriate domain. Similarly, memory committed to device I/O is provided by the
relevant domains wherever possible to prevent the crosstalk inherent in shared buffer pools; I/O buffers
are protected during data transfer by pinning the underlying page frames within Xen.
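
A minimal sketch of such a descriptor ring is given below, assuming a single producer and a single consumer sharing one page of memory. The structure and names are illustrative, not Xen's actual ring definitions; the key idea is that slots carry buffer descriptors (address and length) rather than payload data, and free-running producer/consumer indices let either side poll for work asynchronously.

```c
/* Sketch of a single-producer/single-consumer descriptor ring in
 * shared memory, in the spirit of Xen's asynchronous I/O rings.
 * Slots carry buffer descriptors, not the data itself, so payloads
 * move without copying through the hypervisor. Simplified. */
#include <stdint.h>
#include <stdbool.h>

#define RING_SIZE 64                      /* must be a power of two */

struct buf_desc {
    uint64_t gaddr;                       /* guest buffer address */
    uint32_t len;                         /* buffer length in bytes */
};

struct io_ring {                          /* lives in a shared page */
    volatile uint32_t prod;               /* producer index, free-running */
    volatile uint32_t cons;               /* consumer index, free-running */
    struct buf_desc slots[RING_SIZE];
};

static bool ring_put(struct io_ring *r, struct buf_desc d) {
    if (r->prod - r->cons == RING_SIZE)   /* ring full */
        return false;
    r->slots[r->prod % RING_SIZE] = d;
    __sync_synchronize();                 /* publish slot before index */
    r->prod++;
    return true;
}

static bool ring_get(struct io_ring *r, struct buf_desc *out) {
    if (r->cons == r->prod)               /* ring empty */
        return false;
    __sync_synchronize();                 /* read index before slot */
    *out = r->slots[r->cons % RING_SIZE];
    r->cons++;
    return true;
}

int main(void) {
    static struct io_ring ring;           /* stands in for the shared page */
    ring_put(&ring, (struct buf_desc){ .gaddr = 0x1000, .len = 4096 });
    struct buf_desc d;
    /* A real consumer would now validate and map the described buffer. */
    return (ring_get(&ring, &d) && d.len == 4096) ? 0 : 1;
}
```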

Subsystem virtualisation covers the following:


 CPU Scheduling
 Time and Timers
 Virtual address translation
 Physical Memory
 Network
 Disk
Future Work
We believe that Xen and XenoLinux are sufficiently complete to be useful to a wider audience, and so
intend to make a public release of our software in the very near future. A beta version is already under
evaluation by selected parties; once this phase is complete, a general 1.0 release will be announced on our
project page.

After the initial release we plan a number of extensions and improvements to Xen. To increase the
efficiency of virtual block devices, we intend to implement a shared universal buffer cache indexed on
block contents. This will add controlled data sharing to our design without sacrificing isolation. Adding
copy-on-write semantics to virtual block devices will allow them to be safely shared among domains,
while still allowing divergent file systems. To provide better physical memory performance, we plan to
implement a last-chance page cache (LPC): effectively a system-wide list of free pages, of non-zero length
only when machine memory is undersubscribed. The LPC is used when the guest OS virtual memory
system chooses to evict a clean page; rather than discarding the page completely, it may be added to the
tail of the free list. A fault occurring for that page before it has been reallocated by Xen can therefore be
satisfied without a disk access.

An important role for Xen is as the basis of the XenoServer project, which looks beyond individual
machines and is building the control systems necessary to support an Internet-scale computing
infrastructure. Key to our design is the idea that resource usage be accounted precisely and paid for by
the sponsor of that job: if payments are made in real cash, we can use a congestion pricing strategy [28]
to handle excess demand, and use excess revenues to pay for additional machines. This necessitates
accurate and timely I/O scheduling with greater resilience to hostile workloads. We also plan to
incorporate accounting into our block storage architecture by creating leases for virtual block devices.
Conclusion
Xen provides an excellent platform for deploying a wide variety of network-centric services, such as local
mirroring of dynamic web content, media stream transcoding and distribution, multiplayer game and
virtual reality servers, and ‘smart proxies’ [2] to provide a less ephemeral network presence for transiently-
connected devices. Xen directly addresses the single largest barrier to the deployment of such services: the
present inability to host transient servers for short periods of time and with low instantiation costs. By
allowing 100 operating systems to run on a single server, we reduce the associated costs by two orders of
magnitude. Furthermore, by turning the setup and configuration of each OS into a software concern, we
facilitate hosting on much smaller-granularity timescales. As our experimental results show, the
performance of XenoLinux over Xen is practically equivalent to the performance of the baseline Linux
system. This fact, which comes from the careful design of the interface between the two components,
means that there is no appreciable cost in having the resource management facilities available. Our
ongoing work to port the BSD and Windows XP kernels to operate over Xen is confirming the generality
of the interface that Xen exposes.
References:
[1] A. Awadallah and M. Rosenblum. The vMatrix: A network of virtual machine monitors for dynamic
content distribution. In Proceedings of the 7th International Workshop on Web Content Caching and
Distribution (WCW 2002), Aug. 2002.
[2] A. Bakre and B. R. Badrinath. I-TCP: indirect TCP for mobile hosts. In Proceedings of the 15th
International Conference on Distributed Computing Systems (ICDCS 1995), pages 136–143, June 1995.
[3] G. Banga, P. Druschel, and J. C. Mogul. Resource containers: A new facility for resource management
in server systems. In Proceedings of the 3rd Symposium on Operating Systems Design and Implementation
(OSDI 1999), pages 45–58, Feb. 1999.
[4] A. Bavier, T. Voigt, M. Wawrzoniak, L. Peterson, and P. Gunningberg. SILK: Scout paths in the Linux
kernel. Technical Report 2002-009, Uppsala University, Department of Information Technology, Feb.
2002.
[5] B. N. Bershad, S. Savage, P. Pardyak, E. G. Sirer, M. Fiuczynski, D. Becker, S. Eggers, and C.
Chambers. Extensibility, safety and performance in the SPIN operating system. In Proceedings of the 15th
ACM SIGOPS Symposium on Operating Systems Principles, volume 29(5) of ACM Operating Systems
Review, pages 267–284, Dec. 1995.
[6] A. Brown and M. Seltzer. Operating System Benchmarking in the Wake of Lmbench: A Case Study of
the Performance of NetBSD on the Intel x86 Architecture. In Proceedings of the 1997 ACM
SIGMETRICS Conference on Measurement and Modeling of Computer Systems, June 1997.
[7] E. Bugnion, S. Devine, K. Govil, and M. Rosenblum. Disco: Running commodity operating systems on
scalable multiprocessors. In Proceedings of the 16th ACM SIGOPS Symposium on Operating Systems
Principles, volume 31(5) of ACM Operating Systems Review, pages 143–156, Oct. 1997.
