
UNIT II

VIRTUALIZATION BASICS

Basics of Virtualization
Before Virtualization
Virtualization is a technique that allows a single physical instance of an application or
resource to be shared among multiple organizations or tenants (customers). It does so by
assigning a logical name to a physical resource and providing a pointer to that physical
resource on demand.
One of the main cost-saving, hardware-reducing, and energy-saving techniques used by cloud
providers is virtualization. Virtualization is done with software-based computers that share the
underlying physical machine resources among different virtual machines (VMs). With OS
virtualization each VM can use a different operating system (OS), and each OS is isolated from
the others. Many companies use VMs to consolidate servers, enabling different services to run
in separate VMs on the same physical machine. VMs allow time-sharing of a single computer
among several single-tasking operating systems. Utilizing VMs requires the guest operating
systems to use memory virtualization to share the memory of the one physical host. Memory
Virtualization removes volatile random access memory (RAM) resources from individual
systems, and aggregates those resources into a virtualized memory pool available to any
computer in the cluster. Memory virtualization leverages this large shared pool of memory,
which improves overall performance, system utilization, and efficiency. Allowing
applications on multiple servers to share data without replication also reduces the total amount
of memory needed.
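The pooling idea above can be sketched in a few lines. This is a toy model, not a real memory-virtualization API; all class and host names are invented for illustration:

```python
# Toy model of a virtualized memory pool: RAM contributed by several
# physical hosts is aggregated, and VMs draw from the shared pool on
# demand instead of being bound to one machine's RAM.

class MemoryPool:
    def __init__(self, host_ram_mb):
        # Aggregate the RAM contributed by each physical host.
        self.capacity = sum(host_ram_mb.values())
        self.allocated = {}              # vm_name -> MB granted

    def grant(self, vm, mb):
        if mb > self.free():
            raise MemoryError(f"pool exhausted: {vm} wants {mb} MB")
        self.allocated[vm] = self.allocated.get(vm, 0) + mb

    def release(self, vm):
        self.allocated.pop(vm, None)

    def free(self):
        return self.capacity - sum(self.allocated.values())

pool = MemoryPool({"host-a": 4096, "host-b": 8192})
pool.grant("vm1", 2048)
pool.grant("vm2", 6144)
print(pool.free())   # 4096 MB still available in the shared pool
```

The point of the sketch is that a VM's memory grant is bounded only by the pool, not by any single host's physical RAM.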
One of the most important ideas behind cloud computing is scalability, and the key technology
that makes that possible is virtualization. Virtualization, in its broadest sense, is the emulation
of one or more workstations or servers within a single physical computer. Put simply,
virtualization is the emulation of hardware within a software platform. The most well-known
virtualization software in use today is VMware, which simulates the hardware resources
of an x86-based computer to create a fully functional virtual machine. An operating system
and associated applications can then be installed on this virtual machine, just as would be done
on a physical machine. Multiple virtual machines can be installed on a single physical machine,
as separate entities. This eliminates any interference between the machines, each operating
separately. Although virtualization technology has been around for many years, it is only now
beginning to be fully deployed. One of the reasons for this is the increase in processing power
and advances in hardware technology.
Before Virtualization
• Single OS image per machine.
• Software and hardware tightly coupled.
• Running multiple applications on the same machine often creates conflicts.
• Underutilized resources.
• Inflexible and costly infrastructure.
After Virtualization

• Hardware independence of operating systems and applications.
• Virtual machines can be provisioned to any system.
• OS and applications can be managed as a single unit by encapsulating them into virtual
machines.

Virtualization Architecture
• Installs and runs as an application
• Relies on host OS for device support and physical resource management

Objectives of virtualization:
• Increased use of hardware resources
With improvements in technology, typical server hardware resources are not being used to their
full capacity. On average, only 5-15% of hardware resources are being utilized. One of the
goals of virtualization is to resolve this problem. By allowing a physical server to run
virtualization software, a server’s resources are used much more efficiently. This can greatly
reduce both management and operating costs. For example, if an organization used 5 different
servers for 5 different services, instead of having 5 physical servers, these servers could be run
on a single physical server operating as virtual servers.
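The consolidation argument can be made concrete with a back-of-the-envelope calculation (the figures here are illustrative, taken from the 5-servers example above):

```python
# Five lightly loaded servers consolidated as VMs onto one host.
servers = 5
utilization_each = 0.10       # ~10% average utilization per server

# Aggregate load, expressed as a fraction of one server's capacity:
consolidated_load = servers * utilization_each
print(f"one host at {consolidated_load:.0%} utilization")
```

Five machines idling at 10% become one machine at roughly 50%, which is where the hardware, power, and management savings come from.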
• Reduced management and resource costs
Due to the sheer number of physical servers and workstations in use today, most organizations
have to deal with issues such as space, power, and cooling. Not only is this bad for the
environment, but the increased power demand and the construction of additional buildings are
also very costly for businesses. With a virtualized infrastructure, businesses can save large
amounts of money because they require far fewer physical machines.
• Improved business flexibility
Whenever a business needs to expand its number of workstations or servers, it is often a lengthy
and costly process. An organization first has to make room for the physical location of the
machines. The new machines then have to be ordered, set up, and so on. This is a time-consuming
process that wastes a business's resources both directly and indirectly. Virtual machines, by
contrast, can be set up easily: there are no additional hardware costs, no need for extra physical
space, and no need to wait around. Virtual machine management software also makes it easier for
administrators to set up virtual machines and control access to particular resources.
• Improved security and reduced downtime
When a physical machine fails, all of its software content usually becomes inaccessible, and
there is often some downtime until the problem is fixed. Virtual machines, however, are separate
entities from one another. If one of them fails or is infected by a virus, it is completely
isolated from all the other software on that physical machine, including other virtual machines.
This greatly increases security, because problems can be contained. Another great advantage of
virtual machines is that they are not hardware dependent: if a server fails due to a hardware
fault, the virtual machines stored on that server can be migrated to another server.
Functionality can then resume as though nothing had happened, even though the original server
may no longer be working.

Virtualization Concept
Creating a virtual machine over existing operating system and hardware is referred to as
hardware virtualization. Virtual machines provide an environment that is logically separated
from the underlying hardware.
The machine on which the virtual machine is created is known as the host machine, and the
virtual machine itself is referred to as the guest machine. The virtual machine is managed by
software or firmware known as a hypervisor.

Hypervisor
The hypervisor is a firmware or low-level program that acts as a Virtual Machine Manager.
There are two types of hypervisor:
Type 1 hypervisors execute on the bare system. LynxSecure, RTS Hypervisor, Oracle VM, Sun
xVM Server, and VirtualLogic VLX are examples of Type 1 hypervisors. The following diagram
shows the Type 1 hypervisor.
A Type 1 hypervisor does not have any host operating system, because it is installed on a
bare system.
Type 2 hypervisors are software interfaces that emulate the devices with which a system
normally interacts. KVM, Microsoft Hyper-V, VMware Fusion, Virtual Server 2005 R2, Windows
Virtual PC, and VMware Workstation 6.0 are examples of Type 2 hypervisors. The following
diagram shows the Type 2 hypervisor.
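The classification above can be captured in a small lookup table. The type assignments below simply follow the examples listed in the text; the helper function is illustrative only:

```python
# Classify hypervisors as Type 1 (bare-metal) or Type 2 (hosted),
# following the examples given in the text above.
HYPERVISOR_TYPE = {
    "LynxSecure": 1, "RTS Hypervisor": 1, "Oracle VM": 1,
    "Sun xVM Server": 1, "VirtualLogic VLX": 1,
    "KVM": 2, "Microsoft Hyper-V": 2, "VMware Fusion": 2,
    "Virtual Server 2005 R2": 2, "Windows Virtual PC": 2,
    "VMware Workstation 6.0": 2,
}

def needs_host_os(name):
    # Type 2 hypervisors run as programs managed by a host OS;
    # Type 1 hypervisors are installed directly on the bare system.
    return HYPERVISOR_TYPE[name] == 2

print(needs_host_os("Oracle VM"))       # False: bare-metal
print(needs_host_os("VMware Fusion"))   # True: hosted
```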

Virtualization Processes
Some of the service management processes involved with virtualization include:
Demand Management
Capacity Management
Financial Management
Availability Management
Information Security Management
IT Service Continuity Management
Release and Deployment Management
Service Asset & Configuration Management
Knowledge Management
Incident Management
Problem Management
Change Management
Service Desk Function
Benefits:
Cost reduction: Sharing of resources helps reduce costs.
Isolation: Virtual machines are isolated from each other as if they were physically separate
machines.
Encapsulation: VMs encapsulate a complete computing environment.
Hardware independence: VMs run independently of the underlying hardware.
Portability: VMs can be migrated between different hosts.
TAXONOMY OF VIRTUALIZATION TECHNIQUES
Virtualization covers a wide range of emulation techniques that are applied to different areas
of computing.
A classification of these techniques helps us better understand their characteristics and use.

The first classification discriminates among the services or entities that are being emulated.
Virtualization is mainly used to emulate execution environments, storage, and networks.

Among these categories, execution virtualization constitutes the oldest, most popular, and
most developed area. Therefore, it deserves major investigation and further categorization.
We can divide execution virtualization techniques into two major categories by considering
the type of host they require.

• Process-level techniques are implemented on top of an existing operating system, which
has full control of the hardware.
• System-level techniques are implemented directly on hardware and do not require, or require
minimal support from, an existing operating system.
Within these two categories we can list various techniques that offer the guest a different
type of virtual computation environment:
• bare hardware
• operating system resources
• a low-level programming language
• application libraries.
Execution virtualization:
Execution virtualization includes all techniques that aim to emulate an execution
environment that is separate from the one hosting the virtualization layer.
All these techniques concentrate their interest on providing support for the execution of
programs, whether these are the operating system, a binary specification of a program compiled
against an abstract machine model, or an application. Therefore, execution virtualization can
be implemented directly on top of the hardware by the operating system, an application, or
libraries dynamically or statically linked to an application image.

Hardware-level virtualization:
Hardware-level virtualization is a virtualization technique that provides an abstract execution
environment, in terms of computer hardware, on top of which a guest operating system can be
run.
It is also called system virtualization, since it provides an instruction set architecture (ISA)
to virtual machines, which is the representation of the hardware interface of a system.

Hypervisors:
A fundamental element of hardware virtualization is the hypervisor, or virtual machine
manager (VMM). It recreates a hardware environment in which guest operating systems are
installed. There are two major types of hypervisors: Type I and Type II .
Type I:
Type I hypervisors run directly on top of the hardware. They take the place of the operating
system and interact directly with the underlying hardware. This type of hypervisor is also
called a native virtual machine, since it runs natively on hardware.

Type II:
Type II hypervisors require the support of an operating system to provide virtualization
services. This means that they are programs managed by the host operating system, which mediates
their access to the hardware on behalf of the guest operating systems. This type of hypervisor
is also called a hosted virtual machine, since it is hosted within an operating system.

Hardware Virtualization Techniques :

Full virtualization:
Full virtualization refers to the ability to run a program, most likely an operating system,
directly on top of a virtual machine and without any modification, as though it were run on the
raw hardware. To make this possible, the virtual machine manager is required to provide a
complete emulation of the entire underlying hardware.
Para-virtualization:
This is a non-transparent virtualization solution that allows implementing thin virtual machine
managers. Para-virtualization techniques expose a software interface to the virtual machine that
is slightly modified from the host, and as a consequence guests need to be modified. The aim of
para-virtualization is to provide the capability to demand the execution of performance-critical
operations directly on the host.
Partial virtualization:
Partial virtualization provides a partial emulation of the underlying hardware, thus not
allowing the complete execution of the guest operating system in complete isolation. Partial
virtualization allows many applications to run transparently, but not all the features of the
operating system can be supported, as happens with full virtualization.

Operating System-Level Virtualization :


It offers the opportunity to create different and separated execution environments for
applications that are managed concurrently. Unlike hardware virtualization, there is no virtual
machine manager or hypervisor; the virtualization is done within a single operating system,
where the OS kernel allows for multiple isolated user-space instances.

Programming language-level virtualization


Programming language-level virtualization is mostly used to achieve ease of deployment of
applications, managed execution, and portability across different platforms and operating
systems.
The main advantage of programming language-level virtual machines, also called process virtual
machines, is the ability to provide a uniform execution environment across different platforms.
Programs compiled into bytecode can be executed on any operating system and platform for which
a virtual machine able to execute that code has been provided.
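The process-VM idea can be sketched with a tiny stack-based interpreter. The instruction set below is invented for illustration and is not modeled on any real bytecode format:

```python
# A miniature process VM: the same bytecode runs unchanged on any
# platform that provides this interpreter (a toy instruction set).

def run(bytecode):
    stack = []
    for op, arg in bytecode:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op}")
    return stack.pop()

# (2 + 3) * 4, compiled once, executable wherever run() exists:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(run(program))   # 20
```

The program is data; only the interpreter is platform-specific, which is exactly the portability property the text describes.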

Application-level virtualization:
Application-level virtualization is used when there is a desire to virtualize only one
application.
Application virtualization software allows users to access and use an application from a
computer other than the one on which the application is installed.

Other types of virtualizations


Other than execution virtualization, other types of virtualizations provide an abstract
environment to interact with. These mainly cover storage, networking, and client/server
interaction .
Storage virtualization: It is a system administration practice that allows decoupling the
physical organization of the hardware from its logical representation. Using this technique,
users do not have to worry about the specific location of their data, which can be identified
using a logical path. Storage virtualization allows us to harness a wide range of storage
facilities and represent them under a single logical filesystem.
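The logical-path idea can be sketched as a mapping layer. All paths and device names below are made up for illustration:

```python
# Storage virtualization sketch: users address data by logical path;
# a mapping layer hides which physical device actually holds it.

class VirtualFilesystem:
    def __init__(self):
        self._map = {}    # logical path -> (device, physical location)

    def place(self, logical, device, physical):
        self._map[logical] = (device, physical)

    def locate(self, logical):
        # Callers never need to know the device or physical layout.
        return self._map[logical]

vfs = VirtualFilesystem()
vfs.place("/data/report.txt", "san-array-1", "lun7/block42")
vfs.place("/data/archive.zip", "nas-2", "vol3/obj99")

print(vfs.locate("/data/report.txt"))   # ('san-array-1', 'lun7/block42')
```

Moving the data to another array only requires updating the mapping; the logical path the user sees never changes.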

Network virtualization: Network virtualization is a process of logically grouping physical
networks and making them operate as a single network or as multiple independent networks,
called virtual networks.

It combines hardware appliances and specific software for the creation and management of a
virtual network.

Virtualization Structures/Tools and Mechanisms


In general, there are three typical classes of VM architecture. Figure below shows the
architectures of a machine before and after virtualization.
Before virtualization, the operating system manages the hardware. After virtualization, a
virtualization layer is inserted between the hardware and the operating system. In such a case,
the virtualization layer is responsible for converting portions of the real hardware into virtual
hardware. Therefore, different operating systems such as Linux and Windows can run on the
same physical machine, simultaneously.
Depending on the position of the virtualization layer, there are several classes of VM
architectures, namely the hypervisor architecture, para-virtualization, and host-based
virtualization. The hypervisor is also known as the VMM (Virtual Machine Monitor). They
both perform the same virtualization operations.
Hypervisor and Xen Architecture
The hypervisor supports hardware-level virtualization (see Figure (b)) on bare metal devices
like CPU, memory, disk and network interfaces. The hypervisor software sits directly between
the physical hardware and its OS. This virtualization layer is referred to as either the VMM or
the hypervisor.
The hypervisor provides hypercalls for the guest OSes and applications. Depending on the
functionality, a hypervisor can assume a microkernel architecture, like Microsoft Hyper-V,
or a monolithic hypervisor architecture, like the VMware ESX for server virtualization.
A micro-kernel hypervisor includes only the basic and unchanging functions (such as physical
memory management and processor scheduling). The device drivers and other changeable
components are outside the hypervisor.
A monolithic hypervisor implements all the aforementioned functions, including those of the
device drivers. Therefore, the size of the hypervisor code of a micro-kernel hypervisor is
smaller than that of a monolithic hypervisor. Essentially, a hypervisor must be able to convert
physical devices into virtual resources dedicated for the deployed VM to use.

The Xen Architecture


Xen is an open-source hypervisor program developed at Cambridge University. Xen is a micro-
kernel hypervisor, which separates the policy from the mechanism. The Xen hypervisor
implements all the mechanisms, leaving the policy to be handled by Domain 0, as shown in
the figure above. Xen does not include any device drivers natively; it just provides a
mechanism by which guest OSes can have direct access to the physical devices.
As a result, the size of the Xen hypervisor is kept rather small. Xen provides a virtual
environment located between the hardware and the OS. A number of vendors are in the process
of developing commercial Xen hypervisors, among them are Citrix XenServer and Oracle VM.
The core components of a Xen system are the hypervisor, kernel, and applications. The
organization of the three components is important. Like other virtualization systems, many
guest OSes can run on top of the hypervisor. However, not all guest OSes are created equal,
and one in particular controls the others.
The guest OS, which has control ability, is called Domain 0, and the others are called Domain
U. Domain 0 is a privileged guest OS of Xen. It is first loaded when Xen boots without any file
system drivers being available. Domain 0 is designed to access hardware directly and manage
devices. Therefore, one of the responsibilities of Domain 0 is to allocate and map hardware
resources for the guest domains (the Domain U domains).
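Domain 0's role as the policy layer can be sketched as a simple device broker. This is a toy model, not Xen's actual interfaces; all names are invented:

```python
# Toy model of Xen's split: the hypervisor provides mechanism, while
# Domain 0 decides policy — which Domain U guest gets which device.

class Domain0:
    def __init__(self, devices):
        self.free_devices = set(devices)
        self.assignments = {}      # device -> guest domain (Domain U)

    def assign(self, device, domain_u):
        if device not in self.free_devices:
            raise RuntimeError(f"{device} is not available")
        self.free_devices.remove(device)
        self.assignments[device] = domain_u

dom0 = Domain0({"nic0", "disk0", "disk1"})
dom0.assign("disk0", "domU-1")
dom0.assign("nic0", "domU-2")
print(sorted(dom0.free_devices))   # ['disk1']
```

Because allocation decisions live in Domain 0 rather than in the hypervisor, the hypervisor itself stays small, as the text notes.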

Binary Translation with Full Virtualization


Depending on implementation technologies, hardware virtualization can be classified into two
categories: full virtualization and host-based virtualization.
Full virtualization does not need to modify the host OS. It relies on binary translation to trap
and virtualize the execution of certain sensitive, nonvirtualizable instructions. The guest
OSes and their applications consist of noncritical and critical instructions.
In a host-based system, both a host OS and a guest OS are used. A virtualization software
layer is built between the host OS and guest OS.

Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while critical
instructions are discovered and replaced with traps into the VMM to be emulated by software.
Both the hypervisor and VMM approaches are considered full virtualization.
Why are only critical instructions trapped into the VMM? This is because binary translation
can incur a large performance overhead.
Noncritical instructions do not control hardware or threaten the security of the system, but
critical instructions do. Therefore, running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security.
Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies. As shown
in the figure below, VMware puts the VMM at Ring 0 and the guest OS at Ring 1. The VMM
scans the instruction stream and identifies the privileged, control-sensitive, and
behavior-sensitive instructions. When these instructions are identified, they are trapped
into the VMM, which emulates their behavior. The method used in this emulation is called
binary translation. Therefore, full virtualization combines binary translation and direct
execution. The guest OS is unaware that it is being virtualized.
Indirect execution of complex instructions via binary translation of guest OS requests using the
VMM plus direct execution of simple instructions on the same host.

The performance of full virtualization may not be ideal, because it involves binary translation,
which is rather time-consuming. In particular, the full virtualization of I/O-intensive
applications is a really big challenge. Binary translation employs a code cache to store
translated hot instructions to improve performance, but this increases the cost of memory usage.
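The scan-and-trap scheme, including the code cache for translated hot instructions, can be sketched as follows. The instruction names and the "critical" set are invented for illustration:

```python
# Full-virtualization sketch: noncritical instructions "run directly",
# while critical ones trap into the VMM and are binary-translated.
# A code cache ensures each hot critical instruction is translated once.

CRITICAL = {"CLI", "STI", "HLT", "OUT"}    # invented critical set

code_cache = {}                # instruction -> cached translation
translations = 0               # how many translations actually happened

def execute(instr):
    global translations
    if instr not in CRITICAL:
        return f"direct:{instr}"           # runs on hardware unchanged
    if instr not in code_cache:            # trap into the VMM
        translations += 1
        code_cache[instr] = f"emulated:{instr}"
    return code_cache[instr]               # reuse the cached translation

stream = ["MOV", "CLI", "ADD", "CLI", "OUT", "CLI"]
results = [execute(i) for i in stream]
print(translations)    # 2  (CLI translated once, OUT once; repeats hit cache)
```

This mirrors the trade-off described above: direct execution keeps the common case fast, and the cache amortizes translation cost at the price of extra memory.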

Host-Based Virtualization

An alternative VM architecture is to install a virtualization layer on top of the host OS. This
host OS is still responsible for managing the hardware. The guest OSes are installed and run
on top of the virtualization layer. Dedicated applications may run on the VMs.
Certainly, some other applications can also run with the host OS directly. This host-based
architecture has some distinct advantages:

First, the user can install this VM architecture without modifying the host OS. The virtualizing
software can rely on the host OS to provide device drivers and other low-level services. This
will simplify the VM design and ease its deployment.

Second, the host-based approach appeals to many host machine configurations. Compared to
the hypervisor/VMM architecture, the performance of the host-based architecture may also be
low. When an application requests hardware access, it involves four layers of mapping which
downgrades performance significantly. When the ISA of a guest OS is different from the ISA
of the underlying hardware, binary translation must be adopted. Although the host-based
architecture has flexibility, the performance is too low to be useful in practice.
Para-Virtualization with Compiler Support
Para-virtualization needs to modify the guest operating systems. A para-virtualized VM
provides special APIs requiring substantial OS modifications in user applications. Performance
degradation is a critical issue of a virtualized system. No one wants to use a VM if it is much
slower than using a physical machine.
The virtualization layer can be inserted at different positions in a machine software stack.
However, para-virtualization attempts to reduce the virtualization overhead, and thus improve
performance by modifying only the guest OS kernel. The guest operating systems are
paravirtualized.
The traditional x86 processor offers four instruction execution rings: Rings 0,1, 2, and 3. The
lower the ring number, the higher the privilege of instruction being executed. The OS is
responsible for managing the hardware and the privileged instructions to execute at Ring 0,
while user-level applications run at Ring 3.

Para-virtualized VM architecture: the use of a para-virtualized guest OS, assisted by an
intelligent compiler, to replace nonvirtualizable OS instructions with hypercalls.
Para-Virtualization Architecture:
When the x86 processor is virtualized, a virtualization layer is inserted between the hardware
and the OS. According to the x86 ring definitions, the virtualization layer should also be
installed at Ring 0. Para-virtualization replaces nonvirtualizable instructions with hypercalls
that communicate directly with the hypervisor or VMM. However, when the guest OS kernel is
modified for virtualization, it can no longer run on the hardware directly.
Although para-virtualization reduces the overhead, it incurs other problems. First, its
compatibility and portability may be in doubt, because it must support the unmodified OS as
well. Second, the cost of maintaining para-virtualized OSes is high, because they may require
deep OS kernel modifications. Finally, the performance advantage of para-virtualization varies
greatly due to workload variations.

KVM (Kernel-Based VM):


This is a Linux para-virtualization system, included as part of the Linux kernel from version
2.6.20 onward. Memory management and scheduling activities are carried out by the existing
Linux kernel. KVM does the rest, which makes it simpler than a hypervisor that controls the
entire machine. KVM is a hardware-assisted para-virtualization tool, which improves performance
and supports unmodified guest OSes such as Windows, Linux, Solaris, and other UNIX variants.
Unlike the full virtualization architecture, which intercepts and emulates privileged and
sensitive instructions at runtime, para-virtualization handles these instructions at compile
time. The guest OS kernel is modified to replace the privileged and sensitive instructions with
hypercalls to the hypervisor or VMM. Xen assumes such a para-virtualization architecture. The
guest OS running in a guest domain may run at Ring 1 instead of at Ring 0, which implies that
the guest OS may not be able to execute some privileged and sensitive instructions. These
privileged instructions are implemented by hypercalls to the hypervisor. After replacing the
instructions with hypercalls, the modified guest OS emulates the behavior of the original
guest OS.
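The compile-time rewriting that para-virtualization performs can be sketched as a one-pass source transformation. Instruction and hypercall names below are invented for illustration:

```python
# Para-virtualization sketch: the guest kernel's privileged/sensitive
# instructions are rewritten ahead of time into hypercalls, so nothing
# needs to be trapped and translated at runtime.

PRIVILEGED = {"CLI": "hypercall_disable_irq",
              "STI": "hypercall_enable_irq",
              "HLT": "hypercall_yield"}

def paravirtualize(kernel_code):
    # A source-level rewrite, done once when the guest OS is modified.
    return [PRIVILEGED.get(instr, instr) for instr in kernel_code]

guest_kernel = ["MOV", "CLI", "ADD", "STI", "HLT"]
print(paravirtualize(guest_kernel))
# ['MOV', 'hypercall_disable_irq', 'ADD', 'hypercall_enable_irq',
#  'hypercall_yield']
```

Contrast this with the full-virtualization scheme earlier in the section: there the VMM discovers critical instructions at runtime, whereas here they are gone before the guest ever runs.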

IMPLEMENTATION LEVELS OF VIRTUALIZATION


Virtualization is a computer architecture technology by which multiple virtual machines (VMs)
are multiplexed in the same hardware machine. The purpose of a VM is to enhance resource
sharing by many users and improve computer performance in terms of resource utilization and
application flexibility.
Hardware resources (CPU, memory, I/O devices, etc.) or software resources (operating system
and software libraries) can be virtualized in various functional layers.
The idea is to separate the hardware from the software to yield better system efficiency. For
example, computer users gained access to much enlarged memory space when the concept of
virtual memory was introduced. Similarly, virtualization techniques can be applied to enhance
the use of compute engines, networks and storage.
Levels of Virtualization:
A traditional computer runs with host operating system specially tailored for its hardware
architecture, as shown in Figure 2.11 (a). After virtualization, different user applications
managed by their own operating systems (guest OS) can run on the same hardware,
independent of the host OS.
This is often done by adding additional software, called a virtualization layer, as shown in
Figure 2.11 (b). This virtualization layer is known as the hypervisor or virtual machine monitor
(VMM). The VMs are shown in the upper boxes, where applications run with their own guest
OS over the virtualized CPU, memory, and I/O resources. The main function of the software
layer for virtualization is to virtualize the physical hardware of a host machine into virtual
resources to be used by the VMs, exclusively. The virtualization software creates the
abstraction of VMs by interposing a virtualization layer at various levels of a computer system.
Common virtualization layers include the instruction set architecture (ISA) level, hardware
level, operating system level, library support level, and application level.

Virtualization ranging from hardware to applications in five abstraction levels.


Instruction Set Architecture Level:
At the ISA level, virtualization is performed by emulating a given ISA by the ISA of the host
machine. For example, MIPS binary code can run on an x86-based host machine with the help
of ISA emulation. With this approach, it is possible to run a large amount of legacy binary code
written for various processors on any given new hardware host machine. Instruction set
emulation leads to virtual ISAs created on any hardware machine.
The basic emulation method is through code interpretation. An interpreter program interprets
the source instructions into target instructions one by one. One source instruction may require
tens or hundreds of native target instructions to perform its function. Obviously, this process
is relatively slow. For better performance, dynamic binary translation is desired.
This approach translates basic blocks of dynamic source instructions to target instructions. The
basic blocks can also be extended to program traces or super blocks to increase translation
efficiency. Instruction set emulation requires binary translation and optimization. A virtual
instruction set architecture (V-ISA) thus requires adding a processor-specific software
translation layer to the compiler.
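Code interpretation, the basic emulation method described above, can be sketched with a made-up three-instruction guest ISA (not real MIPS):

```python
# ISA-level emulation sketch: an interpreter executes a made-up guest
# ISA on the host, one instruction at a time. Each guest instruction
# may cost many host operations, which is why interpretation is slow
# and why translating whole basic blocks dynamically is preferred.

def interpret(program):
    regs = {"r0": 0, "r1": 0}
    for instr in program:
        op, *args = instr.split()
        if op == "LI":                    # load immediate: LI r0 7
            regs[args[0]] = int(args[1])
        elif op == "ADD":                 # ADD r0 r1  ->  r0 += r1
            regs[args[0]] += regs[args[1]]
        elif op == "SUB":                 # SUB r0 r1  ->  r0 -= r1
            regs[args[0]] -= regs[args[1]]
    return regs

print(interpret(["LI r0 7", "LI r1 5", "ADD r0 r1"]))
# {'r0': 12, 'r1': 5}
```

Dynamic binary translation would instead translate the three-instruction block once into host code and reuse it, trading translation cost for much faster repeated execution.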
Hardware Abstraction Level:
Hardware-level virtualization is performed right on top of the bare hardware. The idea is to
virtualize a computer’s resources, such as its processors, memory, and I/O devices. The
intention is to upgrade the hardware utilization rate by multiple users concurrently.
Operating System Level:
This refers to an abstraction layer between the traditional OS and user applications. OS-level
virtualization creates isolated containers on a single physical server, with OS instances that
utilize the hardware and software in data centers.
The containers behave like real servers. OS-level virtualization is commonly used in creating
virtual hosting environments to allocate hardware resources among a large number of mutually
distrusting users. It is also used, to a lesser extent, in consolidating server hardware by moving
services on separate hosts into containers or VMs on one server.
Library Support Level:
Most applications use APIs exported by user-level libraries rather than using lengthy system
calls to the OS. Since most systems provide well-documented APIs, such an interface becomes
another candidate for virtualization.
Virtualization with library interfaces is possible by controlling the communication link
between applications and the rest of a system through API hooks. The software tool WINE has
implemented this approach to support Windows applications on top of UNIX hosts. Another
example is the vCUDA which allows applications executing within VMs to leverage GPU
hardware acceleration.
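The API-hook idea can be demonstrated in miniature: interpose on a library call and redirect it without the application changing, which is the spirit of what WINE does for Windows API calls. The "library" function below is hypothetical:

```python
# Library-level virtualization sketch: an API hook interposes on the
# call path between the application and a (hypothetical) library.
import functools

def library_read(path):
    # Stand-in for a real library entry point.
    return f"bytes-from:{path}"

def hook(redirect_prefix):
    # Wrap the library entry point; redirect paths transparently.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(path):
            return fn(redirect_prefix + path)
        return wrapper
    return decorator

# The application keeps calling library_read; the hook rewires it.
library_read = hook("/virtual")(library_read)
print(library_read("/etc/config"))   # bytes-from:/virtual/etc/config
```

The application's code is untouched; only the communication link between it and the library has been intercepted, which is the defining trait of this level.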
User-Application Level:
Virtualization at the application level virtualizes an application as a VM. On a traditional OS,
an application often runs as a process. Therefore, application-level virtualization is also
known as process-level virtualization. The most popular approach is to deploy high-level
language (HLL) VMs.
Relative Merits of Virtualization at Various Levels (More “X”’s Means Higher Merit,
with a Maximum of 5 X’s)

VMM Design Requirements and Providers


Hardware-level virtualization inserts a layer between the real hardware and traditional
operating systems. This layer is commonly called the virtual machine monitor (VMM), and it
manages the hardware resources of a computing system. Each time a program accesses the
hardware, the VMM captures the access; in this sense, the VMM acts as a traditional OS.
One hardware component, such as the CPU, can be virtualized as several virtual copies.
Therefore, several traditional operating systems which are the same or different can sit on the
same set of hardware simultaneously.
Three requirements for a VMM
• First, a VMM should provide an environment for programs which is essentially identical to
the original machine.

• Second, programs run in this environment should show, at worst, only minor decreases in
speed.

• Third, a VMM should be in complete control of the system resources.

Virtualization Support at the OS Level


With the help of VM technology, a new computing mode known as cloud computing is
emerging. Cloud computing is transforming the computing landscape by shifting the hardware
and staffing costs of managing a computational center to third parties, just like banks. However,
cloud computing has at least two challenges.
• The first is the ability to use a variable number of physical machines and VM instances
depending on the needs of a problem.

• The second challenge concerns the slow operation of instantiating new VMs.

Currently, new VMs originate either as fresh boots or as replicates of a template VM, unaware
of the current application state. Therefore, to better support cloud computing, a large amount
of research and development should be done.

Why OS-Level Virtualization?


Hardware-level virtualization incurs significant performance overhead, and reducing it can even
require hardware modification. OS-level virtualization provides a feasible solution to these
hardware-level virtualization issues. Operating system virtualization inserts a virtualization layer inside
an operating system to partition a machine’s physical resources. It enables multiple isolated
VMs within a single operating system kernel. This kind of VM is often called a virtual
execution environment (VE), Virtual Private System (VPS), or simply container. From the
user’s point of view, VEs look like real servers. This means a VE has its own set of processes,
file system, user accounts, network interfaces with IP addresses, routing tables, firewall rules,
and other personal settings. Although VEs can be customized for different people, they share
the same operating system kernel.
Advantages of OS Extensions
(1) VMs at the operating system level have minimal startup/shutdown costs, low resource
requirements, and high scalability.
(2) For an OS-level VM, it is possible for a VM and its host environment to synchronize
state changes when necessary.
These benefits can be achieved via two mechanisms of OS-level virtualization:
(1) All OS-level VMs on the same physical machine share a single operating system kernel

(2) The virtualization layer can be designed in a way that allows processes in VMs to access as
many resources of the host machine as possible, but never to modify them.
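
The second mechanism, read-through access with private writes, can be sketched in Python. This is an illustrative model (the `Container` class and `host_fs` dict are invented), not how any real container runtime is implemented:

```python
class Container:
    """OS-level VM sketch: reads pass through to shared host resources,
    while writes land in a private per-container layer (copy-on-write)."""
    def __init__(self, name, host_fs):
        self.name = name
        self._host = host_fs     # shared resources under the single kernel
        self._private = {}       # this container's own modifications

    def read(self, path):
        # A private copy, if present, shadows the shared host resource.
        if path in self._private:
            return self._private[path]
        return self._host[path]

    def write(self, path, data):
        # Writes never modify the host; they go to the private layer.
        self._private[path] = data

host_fs = {"/etc/hostname": "host"}
c1 = Container("c1", host_fs)
c2 = Container("c2", host_fs)
c1.write("/etc/hostname", "c1")

print(c1.read("/etc/hostname"))   # c1
print(c2.read("/etc/hostname"))   # host (c2 is isolated from c1's change)
print(host_fs["/etc/hostname"])   # host (the shared resource is untouched)
```

Both containers can read as much of the host's state as the layer exposes, but neither can modify it, which is exactly the access-without-modification property of mechanism (2).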
Virtualization on Linux or Windows Platforms
Virtualization support on Windows-based platforms is still in the research stage. The
Linux kernel offers an abstraction layer that allows software processes to work with and operate
on resources without knowing the hardware details. New hardware may require a new Linux
kernel to support it. Therefore, different Linux platforms use patched kernels to provide special
support for extended functionality.
Middleware Support for Virtualization
Library-level virtualization is also known as user-level Application Binary Interface (ABI) or
API emulation. This type of virtualization can create execution environments for running alien
programs on a platform rather than creating a VM to run the entire operating system. API call
interception and remapping are the key functions performed. Well-known library-level
virtualization systems include the Windows Application Binary Interface (WABI), lxrun, WINE,
Visual MainWin, and vCUDA.

VIRTUALIZATION OF CPU, MEMORY, AND I/O DEVICES


To support virtualization, processors such as the x86 employ a special running mode and
instructions, known as hardware-assisted virtualization. In this way, the VMM and guest OS
run in different modes and all sensitive instructions of the guest OS and its applications are
trapped in the VMM. To save processor states, mode switching is completed by hardware.
For the x86 architecture, Intel and AMD have proprietary technologies for hardware-assisted
virtualization.
Hardware Support for Virtualization: Modern operating systems and processors permit
multiple processes to run simultaneously. If there is no protection mechanism in a processor,
all instructions from different processes will access the hardware directly and cause a system
crash. Therefore, all processors have at least two modes, user mode and supervisor mode, to
ensure controlled access of critical hardware. Instructions running in supervisor mode are
called privileged instructions. Other instructions are unprivileged instructions. In a virtualized
environment, it is more difficult to make OSes and applications run correctly because there are
more layers in the machine stack.

CPU Virtualization: A VM is a duplicate of an existing computer system in which a majority


of the VM instructions are executed on the host processor in native mode. Thus, unprivileged
instructions of VMs run directly on the host machine for higher efficiency. Other critical
instructions should be handled carefully for correctness and stability. The critical instructions
are divided into three categories:

Privileged instructions - Privileged instructions execute in a privileged mode and will be
trapped if executed outside this mode.
Control-sensitive instructions - Control-sensitive instructions attempt to change the
configuration of resources used.
Behavior-sensitive instructions - Behavior-sensitive instructions have different behaviors
depending on the configuration of resources, including the load and store operations over the
virtual memory.
A CPU architecture is virtualizable if it supports the ability to run the VM's privileged and
unprivileged instructions in the CPU's user mode while the VMM runs in supervisor mode. When
the privileged instructions, including control- and behavior-sensitive instructions, of a VM are
executed, they are trapped in the VMM. In this case, the VMM acts as a unified mediator for
hardware access from different VMs to guarantee the correctness and stability of the whole
system. RISC CPU architectures can be naturally virtualized because all control- and
behavior-sensitive instructions are privileged instructions.
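
The trap-and-emulate behavior described above can be sketched in a few lines of Python. This is a toy model with invented opcode names, not a real VMM:

```python
PRIVILEGED = {"cli", "hlt", "mov_cr3"}   # invented opcode names

class Trap(Exception):
    """Raised when the guest, running in user mode, hits a privileged op."""

def guest_execute(op):
    if op in PRIVILEGED:
        raise Trap(op)               # the hardware traps to the VMM
    return "executed " + op + " natively"

def vmm_run(op, vm_state):
    """The VMM mediates: safe ops run directly on the host, while trapped
    ops are emulated against the VM's virtual state, not real hardware."""
    try:
        return guest_execute(op)
    except Trap as t:
        vm_state["emulated"].append(str(t))
        return "emulated " + str(t) + " in VMM"

state = {"emulated": []}
print(vmm_run("add", state))       # unprivileged: runs directly
print(vmm_run("mov_cr3", state))   # privileged: trapped and emulated
```

Because only sensitive instructions trap, the common case pays no virtualization cost, which is why this design satisfies the VMM efficiency requirement stated earlier.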

Hardware-Assisted CPU Virtualization: This technique attempts to simplify virtualization


because full or para-virtualization is complicated. Intel and AMD add an additional privilege
mode level (some people call it Ring -1) to x86 processors. Therefore, operating systems can
still run at Ring 0 and the hypervisor can run at Ring -1. All the privileged and sensitive
instructions are trapped in the hypervisor automatically. This technique removes the difficulty
of implementing binary translation of full virtualization. It also lets the operating system run
in VMs without modification.

Memory Virtualization: Virtual memory virtualization is similar to the virtual memory


support provided by modern operating systems. In a traditional execution environment, the
operating system maintains mappings of virtual memory to machine memory using page tables,
which is a one-stage mapping from virtual memory to machine memory. All modern x86 CPUs
include a memory management unit (MMU) and a translation lookaside buffer (TLB) to
optimize virtual memory performance. However, in a virtual execution environment, virtual
memory virtualization involves sharing the physical system memory in RAM and dynamically
allocating it to the physical memory of the VMs. That means a two-stage mapping process
should be maintained by the guest OS and the VMM, respectively: virtual memory to physical
memory and physical memory to machine memory. Furthermore, MMU virtualization should
be supported, which is transparent to the guest OS. The guest OS continues to control the
mapping of virtual addresses to the physical memory addresses of VMs. But the guest OS
cannot directly access the actual machine memory. The VMM is responsible for mapping the
guest physical memory to the actual machine memory. Figure below shows the two-level
memory mapping procedure.

Two-level memory mapping procedure
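
The two-stage mapping can be illustrated with a small Python sketch, using page-granular dictionaries as stand-ins for the guest page table and the VMM's table (all addresses here are made up):

```python
PAGE = 0x1000                          # 4 KB pages

guest_page_table = {0x1000: 0x4000}    # guest OS: virtual page -> guest-physical page
vmm_table        = {0x4000: 0x9000}    # VMM: guest-physical page -> machine page

def translate(vaddr):
    """Two-stage walk: virtual -> guest-physical -> machine address."""
    vpage, offset = vaddr & ~(PAGE - 1), vaddr & (PAGE - 1)
    gpa = guest_page_table[vpage] + offset                     # stage 1 (guest OS)
    return vmm_table[gpa & ~(PAGE - 1)] + (gpa & (PAGE - 1))   # stage 2 (VMM)

# A VMM can fold both stages into one cached, direct mapping
# (the idea behind shadow page tables):
shadow = {v: vmm_table[p] for v, p in guest_page_table.items()}

print(hex(translate(0x1234)))   # 0x9234
print(hex(shadow[0x1000]))      # 0x9000
```

The folded `shadow` mapping shows why MMU virtualization can stay transparent to the guest OS: the guest keeps managing stage 1, while the VMM alone controls the mapping to machine memory.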


I/O Virtualization: I/O virtualization involves managing the routing of I/O requests between
virtual devices and the shared physical hardware. There are three ways to implement I/O
virtualization:
• Full device emulation
• Para virtualization
• Direct I/O

Device emulation for I/O virtualization is implemented inside a middle layer that maps real
I/O devices into the virtual devices for the guest device driver to use.

Full device emulation is the first approach for I/O virtualization. Generally, this approach
emulates well known, real-world devices. All the functions of a device or bus infrastructure,
such as device enumeration, identification, interrupts, and DMA, are replicated in software.
This software is located in the VMM and acts as a virtual device. The I/O access requests of
the guest OS are trapped in the VMM which interacts with the I/O devices.
A single hardware device can be shared by multiple VMs that run concurrently. However,
software emulation runs much slower than the hardware it emulates. The para virtualization
method of I/O virtualization is typically used in Xen. It is also known as the split driver model
consisting of a frontend driver and a backend driver. The frontend driver is running in Domain
U and the backend driver is running in Domain 0. They interact with each other via a block of
shared memory. The frontend driver manages the I/O requests of the guest OSes and the
backend driver is responsible for managing the real I/O devices and multiplexing the I/O data
of different VMs. Although para I/O-virtualization achieves better device performance than
full device emulation, it comes with a higher CPU overhead.
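
The split driver model described above can be sketched as follows; the ring, driver names, and request format are illustrative stand-ins, not Xen's actual shared-memory interface:

```python
from collections import deque

shared_ring = deque()            # stands in for the shared-memory block

def frontend_submit(vm_id, request):
    """Frontend driver in Domain U: forwards a guest's I/O request."""
    shared_ring.append((vm_id, request))

def backend_drain(device_log):
    """Backend driver in Domain 0: multiplexes all VMs' requests
    onto the real device (modeled here as an append-only log)."""
    while shared_ring:
        vm_id, request = shared_ring.popleft()
        device_log.append("vm%d:%s" % (vm_id, request))

device_log = []
frontend_submit(1, "read blk 7")     # guest 1 in Domain U
frontend_submit(2, "write blk 3")    # guest 2 in Domain U
backend_drain(device_log)            # Domain 0 services both
print(device_log)
```

The multiplexing in `backend_drain` is where the extra CPU overhead of para I/O-virtualization comes from: every request crosses from the guest into Domain 0 before touching the device.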

Virtualization in Multi-Core Processors: Virtualizing a multi-core processor is relatively


more complicated than virtualizing a uni-core processor. Though multicore processors are
claimed to have higher performance by integrating multiple processor cores in a single chip,
multi-core virtualization has raised some new challenges to computer architects, compiler
constructors, system designers, and application programmers. There are mainly two
difficulties: Application programs must be parallelized to use all cores fully, and software must
explicitly assign tasks to the cores, which is a very complex problem.
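
The second difficulty, explicitly assigning tasks to cores, can be illustrated with a deliberately simple round-robin scheduler in Python; real systems use far richer policies that also weigh cache affinity, task length, and contention:

```python
def assign_round_robin(tasks, num_cores):
    """Map each task to a core id, cycling through the available cores."""
    return {task: i % num_cores for i, task in enumerate(tasks)}

plan = assign_round_robin(["t0", "t1", "t2", "t3", "t4"], num_cores=2)
print(plan)   # {'t0': 0, 't1': 1, 't2': 0, 't3': 1, 't4': 0}

# Even this naive policy keeps the per-core load within one task of balanced.
load = {core: list(plan.values()).count(core) for core in set(plan.values())}
print(load)   # {0: 3, 1: 2}
```
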

Other Types of Virtualization


Virtualization is the ability to run multiple operating systems on a single physical system and
share the underlying hardware resources. The various types include:
a. Hardware virtualization
b. Software virtualization
c. Storage virtualization
d. OS virtualization
e. Memory virtualization
f. Network virtualization
g. Data virtualization
h. Desktop virtualization

a. Hardware Virtualization
Here are the three types of hardware virtualization:
Full Virtualization
Emulation Virtualization
Paravirtualization

Full Virtualization
In full virtualization, the underlying hardware is completely simulated. Guest software does
not require any modification to run.
Emulation Virtualization
In emulation virtualization, the virtual machine simulates the hardware and hence becomes
independent of it. Here too, the guest operating system does not require modification.

Paravirtualization
In paravirtualization, the hardware is not simulated. The guest software runs in its own
isolated domain.
VMware vSphere is a highly developed infrastructure that offers a management framework for
virtualization. It virtualizes the system, storage, and networking hardware.

b. Software Virtualization
Software virtualization is the virtualization of applications or computer programs. One of the
most widely used software virtualization programs is SVS (Software Virtualization Solution),
developed by Altiris. The concept is similar to hardware virtualization where physical
machines are simulated as virtual machines. Software virtualization involves creating a virtual
layer or virtual hard drive space where applications can be installed. From this virtual space,
applications can then be run as though they had been installed onto the host OS. Once a user has
finished using an application, they can ‘switch it off’. When an application is switched off, any
changes that the application made to the host OS are completely reversed. This means that the
registry and installation directories will retain no trace of the application ever being installed
or executed.
Software virtualization offers many benefits, such as:
• The ability to run applications without making permanent registry or library changes
• The ability to run multiple versions of the same application
• The ability to install applications that would otherwise conflict with each other (by
using multiple virtual layers)
• The ability to test new applications in an isolated environment
Software virtualization is easy to implement, and you can try it out yourself by downloading
Altiris’s SVS application for free.
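
The "switch it off and reverse all changes" behavior can be sketched with a small Python model; the class and key names are invented, and this is not how Altiris SVS is actually implemented:

```python
class VirtualLayer:
    """While active, an application's registry writes are captured in the
    virtual layer; switching the layer off reverses them completely."""
    def __init__(self, host_registry):
        self._host = host_registry
        self._layer = {}
        self.active = False

    def write(self, key, value):
        if self.active:
            self._layer[key] = value   # captured; the host stays untouched
        else:
            self._host[key] = value    # a normal, permanent change

    def read(self, key):
        if self.active and key in self._layer:
            return self._layer[key]
        return self._host.get(key)

    def switch_off(self):
        self._layer.clear()            # all captured changes vanish
        self.active = False

registry = {}
layer = VirtualLayer(registry)
layer.active = True
layer.write("HKLM/Software/App", "installed")
print(layer.read("HKLM/Software/App"))   # installed (visible while active)
layer.switch_off()
print(layer.read("HKLM/Software/App"))   # None: no trace remains
```

Note that `registry` is still empty at the end: the host OS never saw the application's changes at all, which is exactly the reversal property described above.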

c. Storage Virtualization
Storage virtualization involves the virtualization of physical storage devices. It is a technique
that allows many different users or applications to access storage, regardless of where that
storage is located or what kind of storage device it is. When storage is virtualized, it appears
standardized and local to host machines, even though the storage may be distributed across
many different locations and many different types of hard drives. The great thing about storage
virtualization is that it allows many different machines and servers to access distributed storage
devices. However, a particular machine accessing a virtualized storage area will see one large
storage area, as though it is a single massive hard drive, rather than a load of scattered hard
drives. Other benefits offered by storage virtualization include the ability for administrators to
mask particular hard drives or storage volumes from particular machines, which improves
security, and the ability to increase a storage volume's size in real time. The latter is very
useful because if a server appears to be running out of space on its virtualized storage area, an
administrator can increase its size immediately with just a few clicks. One of the most widely
deployed storage virtualization technologies is the SAN (Storage Area Network). Just as its name suggests,
a SAN is a large network of storage devices. These storage devices which are usually held in a
rack are independent of any servers or machines, and are instead directly connected to an
organization’s network. Through the use of Storage Area Networks, a business can improve its
flexibility. For example, a Storage Area Network’s size can easily be increased by adding
additional hard drives or storage volumes. They can also serve as an efficient backup solution,
for example by backing up data in a remote location away from the main business site. Lastly,
Storage Area Networks can provide better storage volume utilization. This means that instead
of having one hard drive per server, a server can spread its data across multiple different hard
drives or storage volumes. This is a much more efficient use of hard drives, because a single
hard drive is not being constantly written and rewritten to. Instead, multiple hard drives share
the load, which should increase the lifespan of individual hard drives.

Storage Virtualization
Types:
Block – It works before the file system exists. It replaces controllers and takes over at the
disk level.
File – The server that uses the storage must have software installed on it in order to enable
file-level usage.
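
Block-level virtualization can be sketched as a logical volume that concatenates several drives into one large address space; this is an illustrative Python model, not a real SAN controller:

```python
class LogicalVolume:
    """Presents several physical drives as one large block device."""
    def __init__(self, drives, blocks_per_drive):
        self.drives = drives           # dicts standing in for physical drives
        self.bpd = blocks_per_drive

    @property
    def size(self):
        return len(self.drives) * self.bpd   # one large storage area

    def _locate(self, block):
        # Map a logical block to (physical drive, offset on that drive).
        return self.drives[block // self.bpd], block % self.bpd

    def write(self, block, data):
        drive, offset = self._locate(block)
        drive[offset] = data

    def read(self, block):
        drive, offset = self._locate(block)
        return drive.get(offset)

vol = LogicalVolume([{}, {}], blocks_per_drive=100)
vol.write(150, "payload")      # transparently lands on the second drive
print(vol.size)                # 200
vol.drives.append({})          # "grow the volume with a few clicks"
print(vol.size)                # 300
print(vol.read(150))           # payload
```

The client of `LogicalVolume` sees a single contiguous block range; where block 150 physically lives, and how much total capacity exists, can change without the client noticing.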

d. OS Virtualization
Although similar to full virtualization, OS-level virtualization is actually quite different. Full
virtualization involves the virtualization of a machine's entire hardware. Each virtual
environment is then run under its own operating system and, more importantly, its own kernel.
OS virtualization is different in that separate virtual environments with their own separate
kernels are not created. Instead, OS virtualization works by running virtual environments
(known as containers) under a single kernel. Each container environment within an OS
virtualization solution will be isolated from other containers and will look and act like a
physical server. A container environment can then run applications and accept the workload of
its physical machine. The end result of OS virtualization is effectively the same as full
virtualization but as you can see, the process for each solution is actually different.
OS virtualization has a number of practical uses. In fact, it has been used in virtual hosting
environments for many years. Virtual hosting involves hosting more than one domain name on
the same physical machine. By using OS virtualization, web hosts can create secure isolated
environments for different domain names. This is obviously advantageous because otherwise
the resources of a single machine would be wasted if it could only be used as a host for one
domain name. Other benefits of using OS virtualization include the separation of
applications along with the ability to more easily manage resources. For example, using OS
virtualization, you could separate or group applications into different containers. Software
resources would also be more manageable because administrators would be dealing with
smaller ‘chunks’ of resources, rather than entire groups of resources under a single
environment.
OS virtualization sounds great, so you may be wondering why most organizations today use
full virtualization solutions rather than OS virtualization solutions. Both solutions provide
similar end results; however, there are distinct differences between the two, and OS
virtualization has its own set of pros and cons. The major advantage of OS virtualization over
full virtualization solutions is that it is far more efficient.
OS virtualization has very little overheads because it does not need to emulate hardware.
Communication between hardware and software is carried out by a container’s host operating
system’s kernel, so again there is very little overhead. However, OS virtualization does have
its disadvantages. Firstly, OS virtualization cannot run operating systems which are different
from its original host operating system. If you want to run a Linux-based environment within
a Windows operating system, then OS virtualization is no good to you. Container environments
also have a number of restrictions within them. For example, a container cannot modify its
kernel directly, it cannot mount or dismount file systems and it cannot carry out other top level
actions. A full virtualization solution on the other hand, gives a user a completely unrestricted
environment on which many different operating systems can be installed. In the end it was the
flexibility that full virtualization solutions offered which made them the standard solution for
virtualization. Along with hardware-assisted virtualization and para-virtualization
technology, full virtualization is now just as efficient as OS virtualization. However, OS
virtualization is a technology that is still widely used; for example in web hosting environments
and it will continue to be used in the future.

e. Memory virtualization
Memory virtualization decouples memory from the server to provide a shared, distributed, or
networked memory function. It enhances performance by providing greater memory capacity
without any addition to the main memory; in the simplest form, a portion of the disk drive
serves as an extension of the main memory.
Implementations:
Application-level integration – applications running on connected computers directly connect
to the memory pool through an API or the file system.
OS-level integration – The operating system first connects to the memory pool and makes
that pooled memory available to applications.
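
Application-level integration can be sketched as a pool API that applications call directly; the `MemoryPool` class and its methods are hypothetical, not a real product's interface:

```python
class MemoryPool:
    """Aggregates RAM contributed by cluster nodes into one shared pool."""
    def __init__(self):
        self.contrib = {}   # node -> MB of RAM it adds to the pool
        self.store = {}     # shared data, visible to every attached node

    def contribute(self, node, mbytes):
        self.contrib[node] = self.contrib.get(node, 0) + mbytes

    @property
    def capacity(self):
        return sum(self.contrib.values())   # total pooled memory

    def put(self, key, value):
        self.store[key] = value             # one copy serves all nodes

    def get(self, key):
        return self.store.get(key)

pool = MemoryPool()
pool.contribute("node-a", 4096)
pool.contribute("node-b", 8192)
pool.put("shared-dataset", b"\x00" * 16)   # stored once, not replicated

print(pool.capacity)                        # 12288 (MB)
print(len(pool.get("shared-dataset")))      # 16
```

Storing `shared-dataset` once in the pool, rather than once per server, is the replication saving mentioned above.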

f. Network virtualization
It refers to the management and monitoring of a computer network as a single managerial entity
from a single software-based administrator’s console. It is intended to allow network
optimization of data transfer rates, scalability, reliability, flexibility and security. It also
automates many network administrative tasks. Network virtualization is specifically useful for
networks experiencing a huge, rapid and unpredictable increase of usage.
The intended result of network virtualization provides improved network productivity and
efficiency.

Two categories:
a. Internal – Provide network like functionality to a single system.
b. External – Combine many networks or parts of networks into a virtual unit.

g. Data virtualization
Data virtualization lets users easily manipulate data without knowing technical details such as
how it is formatted or where it is physically located. It decreases data errors and workload.

h. Desktop virtualization
Desktop virtualization provides work convenience and security. Because the desktop can be
accessed remotely, you are able to work from any location and on any PC. It provides a lot of
flexibility for employees to work
from home or on the go. It also protects confidential data from being lost or stolen by keeping
it safe on central servers.
