Department of Information Technology
Virtualization: Comparison of Windows and Linux

Chapter 1
Concept of Virtualization

Virtualization provides a set of tools for increasing flexibility and lowering costs, things that
are important in every enterprise and Information Technology organization. Virtualization
solutions are becoming increasingly available and rich in features.

Since virtualization can provide significant benefits to your organization in multiple areas,
you should be establishing pilots, developing expertise and putting virtualization technology to
work now.

In essence, virtualization increases flexibility by decoupling an operating system and the
services and applications supported by that system from a specific physical hardware platform.
It allows the establishment of multiple virtual environments on a shared hardware platform.
Organizations looking to innovate find that the ability to create new systems and services
without installing additional hardware (and to quickly tear down those systems and services
when they are no longer needed) can be a significant boost to innovation.

Virtualization can also excel at supporting innovation through the use of virtual
environments for training and learning. These services are ideal applications for virtualization
technology. A student can start course work with a known, standard system environment. Class
work can be isolated from the production network. Learners can establish unique software
environments without demanding exclusive use of hardware resources.

As the capabilities of virtual environments continue to grow, we’re likely to see increasing
use of virtualization to enable portable environments tailored to the needs of a specific user.
These environments can be moved dynamically to an accessible or local processing
environment, regardless of where the user is located. The user’s virtual environments can be
stored on the network or carried on a portable memory device. A related concept is the
Appliance Operating System, an application package oriented operating system designed to run
in a virtual environment. The package approach can yield lower development and support costs
as well as ensuring the application runs in a known, secure environment. An Appliance
Operating System solution provides benefits to both application developers and the consumers
of those applications.

Virtualization can also be used to lower costs. One obvious benefit comes from the
consolidation of servers into a smaller set of more powerful hardware platforms running a
collection of virtual environments. Not only can costs be reduced by reducing the amount of
hardware and reducing the amount of unused capacity, but application performance can actually
be improved since the virtual guests execute on more powerful hardware.

Further benefits include the ability to add hardware capacity in a non-disruptive manner and
to dynamically migrate workloads to available resources. Depending on the needs of your
organization, it may be possible to create a virtual environment for disaster recovery.
Introducing virtualization can significantly reduce the need to replicate identical hardware
environments and can also enable testing of disaster scenarios at lower cost. Virtualization
provides an excellent solution for addressing peak or seasonal workloads.

Cost savings from server consolidation can be compelling. If you aren’t exploiting
virtualization for this purpose, you should start a program now. As you gain experience with
virtualization, explore the benefits of workload balancing and virtualized disaster recovery
environments.

Regardless of the specific needs of your enterprise, you should be investigating
virtualization as part of your system and application portfolio as the technology is likely to
become pervasive. We expect operating system vendors to include virtualization as a standard
component, hardware vendors to build virtual capabilities into their platforms, and
virtualization vendors to expand the scope of their offerings.

Virtualization is unquestionably one of the hottest trends in information technology today. This is
no accident. While a variety of technologies fall under the virtualization umbrella, all of them are
changing the IT world in significant ways.

Virtualization is a key enabling technology that can be leveraged to achieve business
benefits. Virtualization technology enables customers to run multiple operating systems
concurrently on a single physical server, where each of the operating systems runs as a self-
contained computer.

Virtualization is a system or a method of dividing computer resources into multiple isolated
environments. It is possible to distinguish four types of such virtualization: emulation,
para-virtualization, operating system-level virtualization, and multi-server (cluster) virtualization.

Each virtualization type has its pros and cons that determine its appropriate applications.
Emulation makes it possible to run any unmodified operating system which supports the
platform being emulated. Implementations in this category range from pure emulators (like
Bochs) to solutions which let some code execute natively on the CPU in order to increase
performance. The main disadvantages of emulation are low performance and low density.
Examples: VMware products, QEMU, Bochs, Parallels.
Para-virtualization is a technique for running multiple modified OSs on top of a thin layer called a
hypervisor, or virtual machine monitor. Para-virtualization has better performance than
emulation, but the disadvantage is that the guest OS needs to be modified. Examples: Xen,
UML.
Operating system-level virtualization enables multiple isolated execution environments
within a single operating system kernel. It has the best possible (i.e., close to native)
performance and density, and features dynamic resource management. On the other hand, this
technology does not allow running kernels from different OSs at the same time.
Examples: FreeBSD Jail, Solaris Zones/Containers, Linux-VServer, OpenVZ, and
Virtuozzo.

Simply put, virtualization is an idea whose time has come. The term virtualization broadly
describes the separation of a resource or request for a service from the underlying physical
delivery of that service. With virtual memory, for example, computer software gains access to
more memory than is physically installed, via the background swapping of data to disk storage.
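The virtual-memory example above can be sketched as a toy simulator: a small pool of physical frames backed by a larger "disk", with pages swapped out transparently when the pool is full. This is an illustrative model only; the class name and the LRU eviction policy are invented for the sketch, not drawn from any particular operating system.

```python
from collections import OrderedDict

class TinyVirtualMemory:
    """Toy model: more virtual pages than physical frames,
    with transparent swapping to a backing store."""

    def __init__(self, physical_frames):
        self.frames = OrderedDict()   # resident pages, in LRU order
        self.capacity = physical_frames
        self.backing_store = {}       # the "disk": swapped-out pages
        self.page_faults = 0

    def access(self, page, data=None):
        if page in self.frames:                      # hit: refresh LRU position
            self.frames.move_to_end(page)
        else:                                        # fault: bring the page in
            self.page_faults += 1
            if len(self.frames) >= self.capacity:
                victim, vdata = self.frames.popitem(last=False)
                self.backing_store[victim] = vdata   # swap the LRU page out
            self.frames[page] = self.backing_store.pop(page, None)
        if data is not None:
            self.frames[page] = data
        return self.frames[page]

vm = TinyVirtualMemory(physical_frames=2)
vm.access(0, "a")
vm.access(1, "b")
vm.access(2, "c")   # pool full: page 0 is swapped out to the backing store
```

The software above "sees" three pages while only two frames exist, which is exactly the abstraction the text describes.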

Similarly, virtualization techniques can be applied to other IT infrastructure layers, including
networks, storage, laptop or server hardware, operating systems and applications. This blend of
virtualization technologies, or virtual infrastructure, provides a layer of abstraction between
computing, storage and networking hardware, and the applications running on it. The
deployment of virtual infrastructure is non-disruptive, since the user experiences are largely
unchanged. However, virtual infrastructure gives administrators the advantage of managing
pooled resources across the enterprise, allowing IT managers to be more responsive to dynamic
organizational needs and to better leverage infrastructure investments. We introduce
virtualization technologies, focusing on three areas: hardware virtualization, presentation
virtualization, and application virtualization.

Concept of a VE

A Virtual Environment (VE, also known as a VPS, container, partition, etc.) is an isolated program
execution environment which, from the point of view of its owner, looks and feels like a
separate physical server. A VE has its own set of processes (starting from init), file system, users
(including root), network interfaces with IP addresses, routing tables, firewall rules
(netfilter/iptables), etc.
Multiple VEs co-exist within a single physical server. Different VEs can run different Linux
distributions, but all VEs operate under the same kernel.
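A minimal sketch of the VE idea follows: each VE carries its own processes, users, and network identity, while every VE shares one host kernel. All names here (the class, the distributions, the addresses) are invented for illustration.

```python
from dataclasses import dataclass, field

SHARED_KERNEL = "Linux 2.6.x"   # every VE on this host runs on the same kernel

@dataclass
class VirtualEnvironment:
    """Toy model of a VE: private processes, users and network identity,
    but no kernel of its own."""
    name: str
    distro: str                               # distributions may differ per VE
    ip: str
    processes: list = field(default_factory=lambda: ["init"])
    users: set = field(default_factory=lambda: {"root"})

    def spawn(self, proc: str) -> None:
        self.processes.append(proc)           # visible only inside this VE

ve1 = VirtualEnvironment("ve1", "Debian", "10.0.0.1")
ve2 = VirtualEnvironment("ve2", "Fedora", "10.0.0.2")
ve1.spawn("httpd")                            # ve2's process list is untouched
```

Note what the model captures: the two VEs run different distributions and hold disjoint process lists, yet both reference the single `SHARED_KERNEL`.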

Virtualization Technologies

To understand modern virtualization technologies, think first about a system without them.
Imagine, for example, an application such as Microsoft Word running on a standalone desktop
computer. Figure 1 shows how this looks.

Figure 1: A system without virtualization

The application is installed and runs directly on the operating system, which in turn runs
directly on the computer’s hardware. The application’s user interface is presented via a display
that’s directly attached to this machine. This simple scenario is familiar to anybody who’s ever
used a computer.

But it’s not the only choice. In fact, it’s often not the best choice. Rather than locking these
various parts together—the operating system to the hardware, the application to the operating
system, and the user interface to the local machine—it’s possible to loosen the direct reliance
these parts have on each other.

Doing this means virtualizing aspects of this environment, something that can be done in
various ways. The operating system can be decoupled from the physical hardware it runs on
using hardware virtualization, for example, while application virtualization allows an analogous
decoupling between the operating system and the applications that use it. Similarly,
presentation virtualization allows separating an application’s user interface from the physical
machine the application runs on. All of these approaches to virtualization help make the links
between components less rigid. This lets hardware and software be used in more diverse ways,
and it also makes both easier to change. Given that most IT professionals spend most of their
time working with what’s already installed rather than rolling out new deployments, making
their world more malleable is a good thing.

Each type of virtualization also brings other benefits specific to the problem it addresses.
Understanding what these are requires knowing more about the technologies themselves.
Accordingly, the next sections take a closer look at each one.

1.1 Hardware Virtualization


For most IT people today, the word “virtualization” conjures up thoughts of running multiple
operating systems on a single physical machine. This is hardware virtualization, and while it’s not
the only important kind of virtualization, it is unquestionably the most visible today.

The core idea of hardware virtualization is simple: Use software to create a virtual machine
(VM) that emulates a physical computer. By providing multiple VMs at once, this approach allows
running several operating systems simultaneously on a single physical machine. Figure 2 shows how
this looks.

Figure 2: Illustrating hardware virtualization

When used on client machines, this approach is often called desktop virtualization, while using it
on server systems is known as server virtualization. Desktop virtualization can be useful in a variety
of situations. One of the most common is to deal with incompatibility between applications and
desktop operating systems. For example, suppose a user running Windows Vista needs to use an
application that runs only on Windows XP with Service Pack 2. By creating a VM that runs this
older operating system, then installing the application in that VM, this problem can be solved.

Still, while desktop virtualization is useful, the real excitement around hardware virtualization is
focused on servers. The primary reason for this is economic: Rather than paying for many under-
utilized server machines, each dedicated to a specific workload, server virtualization allows
consolidating those workloads onto a smaller number of more fully used machines. This implies
fewer people to manage those computers, less space to house them, and fewer kilowatt hours of
power to run them, all of which saves money.
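The consolidation argument can be made concrete with a little arithmetic: pack each server's average utilization onto as few virtualized hosts as possible. The first-fit heuristic and the 80% headroom cap below are illustrative assumptions for the sketch, not a real capacity-planning methodology.

```python
def consolidation_plan(workloads_pct, host_capacity_pct=80):
    """First-fit-decreasing bin packing of per-workload CPU utilization
    onto hosts; fewer hosts means less space and less power."""
    hosts = []
    for load in sorted(workloads_pct, reverse=True):
        for host in hosts:
            if sum(host) + load <= host_capacity_pct:
                host.append(load)             # fits on an existing host
                break
        else:
            hosts.append([load])              # open a new host
    return hosts

# Ten lightly used servers (one workload each, % CPU) consolidate
# onto a handful of more fully used virtualized hosts.
loads = [15, 10, 25, 5, 30, 20, 10, 35, 15, 25]
hosts = consolidation_plan(loads)
```

With these example numbers, ten physical machines collapse to three hosts, each kept under the assumed 80% ceiling.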

Server virtualization also makes restoring failed systems easier. VMs are stored as files, and so
restoring a failed system can be as simple as copying its file onto a new machine. Since VMs can
have different hardware configurations from the physical machine on which they’re running, this
approach also allows restoring a failed system onto any available machine. There’s no requirement
to use a physically identical system.
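Because a VM is stored as a file, a restore can be sketched as an ordinary file copy to any available machine. The directory layout and the `webserver.vhd` name below are hypothetical, chosen only to make the sketch runnable.

```python
import shutil
import tempfile
from pathlib import Path

def restore_vm(vm_image: Path, new_host_dir: Path) -> Path:
    """"Restore" a VM by copying its image file onto another machine's
    storage; no physically identical hardware is required."""
    new_host_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(vm_image, new_host_dir))

# Simulate a failed host and a fresh replacement using two directories.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    image = tmp / "failed-host" / "webserver.vhd"
    image.parent.mkdir()
    image.write_bytes(b"...disk contents...")
    restored = restore_vm(image, tmp / "new-host")
    restored_bytes = restored.read_bytes()    # identical to the original image
```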

Hardware virtualization can be accomplished in various ways, and so Microsoft offers three
different technologies that address this area:

Virtual Server 2005 R2: This technology provides hardware virtualization on top of Windows via
add-on software. As its name suggests, Virtual Server provides server virtualization, targeting
scalable multi-user scenarios.

Virtual PC 2007: Like Virtual Server, this technology also provides hardware virtualization on top
of Windows via add-on software. Virtual PC provides desktop virtualization, however, and so it’s
designed to support multiple operating systems on a single-user computer.

Windows Server virtualization: Like Virtual Server, Windows Server virtualization provides server
virtualization. Rather than relying on an add-on, however, support for hardware virtualization is
built directly into Windows itself. Windows Server virtualization is part of Windows Server
2008, and it’s scheduled to ship shortly after the release of this new operating system.

All of these technologies are useful in different situations, and all are described in more detail
later in this overview.

1.2 Presentation Virtualization


Many of the applications people use most are designed to both run and present their user interface on
the same machine. Microsoft Office is one common example, but there are plenty of others. While
accepting this default is fine much of the time, it’s not without some downside. For example,
organizations that manage many desktop machines must make sure that any sensitive data on those
desktops is kept secure. They’re also obliged to spend significant amounts of time and money
managing the applications resident on those machines. Letting an application execute on a remote
server, yet display its user interface locally—presentation virtualization—can help. Figure 3 shows
how this looks.

Figure 3: Illustrating presentation virtualization

As the figure shows, this approach allows creating virtual sessions, each interacting with a remote
desktop system. The applications executing in those sessions rely on presentation virtualization to
project their user interfaces remotely. Each session might run only a single application, or it might
present its user with a complete desktop offering multiple applications. In either case, several virtual
sessions can use the same installed copy of an application.

Running applications on a shared server like this offers several benefits, including the following:

Data can be centralized, storing it safely on a central server rather than on multiple desktop
machines. This improves security, since information isn’t spread across many different systems.

The cost of managing applications can be significantly reduced. Instead of updating each application
on each individual desktop, for example, only the single shared copy on the server needs to be
changed. Presentation virtualization also allows using simpler desktop operating system images
or specialized desktop devices, commonly called thin clients, both of which can lower
management costs.

Organizations need no longer worry about incompatibilities between an application and a desktop
operating system. While desktop virtualization can also solve this problem, as described earlier,
it’s sometimes simpler to run the application on a central server, and then use presentation
virtualization to make the application accessible to clients running any operating system.

In some cases, presentation virtualization can improve performance. For example, think about a
client/server application that pulls large amounts of data from a central database down to the
client. If the network link between the client and the server is slow or congested, this application
will also be slow. One way to improve its performance is to run the entire application—both
client and server—on a machine with a high-bandwidth connection to the database, then use
presentation virtualization to make the application available to its users.

Microsoft’s presentation virtualization technology is Windows Terminal Services. First
released for Windows NT 4, it’s now a standard part of Windows Server 2003. Terminal
Services lets an ordinary Windows desktop application run on a shared server machine yet
present its user interface on a remote system, such as a desktop computer or thin client. While
remote interfaces haven’t always been viewed through the lens of virtualization, this
perspective can provide a useful way to think about this widely used technology.

1.3 Application Virtualization


Virtualization provides an abstracted view of some computing resource. Rather than run directly on a
physical computer, for example, hardware virtualization lets an operating system run on a software
abstraction of a machine. Similarly, presentation virtualization lets an application’s user interface be
abstracted to a remote device. In both cases, virtualization loosens an otherwise tight bond between
components.

Another bond that can benefit from more abstraction is the connection between an application
and the operating system it runs on. Every application depends on its operating system for a range of
services, including memory allocation, device drivers, and much more. Incompatibilities between an
application and its operating system can be addressed by either hardware virtualization or
presentation virtualization, as described earlier. But what about incompatibilities between two
applications installed on the same instance of an operating system? Applications commonly share
various things with other applications on their system, yet this sharing can be problematic. For
example, one application might require a specific version of a dynamic link library (DLL) to
function, while another application on that system might require a different version of the same
DLL. Installing both applications leads to what’s commonly known as DLL hell, where one of them
overwrites the version required by the other. To avoid this, organizations often perform extensive
testing before installing a new application, an approach that’s workable but time-consuming and
expensive.

Application virtualization solves this problem by creating application-specific copies of all
shared resources, as Figure 4 illustrates. The problematic things an application might share with
other applications on its system—registry entries, specific DLLs, and more—are instead packaged
with it, creating a virtual application. When a virtual application is deployed, it uses its own copy of
these shared resources.
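The idea can be sketched as a lookup order: a virtual application resolves a shared resource from its private package first, falling back to the system-wide copy only if no private copy exists. The class and the DLL names below are illustrative; this is a conceptual model, not SoftGrid's actual mechanism.

```python
# Global installation: last writer wins -- the classic "DLL hell".
system_dlls = {}

def install_globally(dll, version):
    system_dlls[dll] = version               # may silently break another app

class VirtualApplication:
    """Toy model of application virtualization: each app is packaged
    with private copies of the resources it would otherwise share."""
    def __init__(self, name, private_dlls):
        self.name = name
        self.private_dlls = dict(private_dlls)   # registry entries, DLLs, ...

    def resolve(self, dll):
        # Private copy first; fall back to the real system only if absent.
        return self.private_dlls.get(dll, system_dlls.get(dll))

install_globally("msvcrt.dll", "6.0")        # AppA's required version
install_globally("msvcrt.dll", "7.0")        # AppB overwrites it globally

virt_a = VirtualApplication("AppA", {"msvcrt.dll": "6.0"})
virt_b = VirtualApplication("AppB", {"msvcrt.dll": "7.0"})
```

Globally, only version 7.0 survives; the two virtual applications each still see the version they were packaged with, which is the conflict-avoidance property the text describes.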

Figure 4: Illustrating application virtualization

Application virtualization makes deployment significantly easier. Since applications no longer
compete for DLL versions or other shared aspects of their environment, there’s no need to test new
applications for conflicts with existing applications before they’re rolled out. And as Figure 4
suggests, these virtual applications can run alongside ordinary applications—not everything needs to
be virtualized.

SoftGrid Application Virtualization is Microsoft’s technology for this area. A SoftGrid
administrator can create virtual applications, and then deploy those applications as needed. By
providing an abstracted view of key parts of the system, application virtualization reduces the time
and expense required to deploy and update applications.

1.4 Other Virtualization Technologies

This overview looks at three kinds of virtualization: hardware, presentation, and application. Similar
kinds of abstraction are also used in other contexts, however. Among the most important are
network virtualization and storage virtualization.

The term network virtualization is used to describe a number of different things. Perhaps
the most common is the idea of a virtual private network (VPN). VPNs abstract the notion of a
network connection, allowing a remote user to access an organization’s internal network just as
if she were physically attached to that network. VPNs are a widely implemented idea, and they
can use various technologies. In the Microsoft world, the primary VPN technologies today are
Internet Security and Acceleration (ISA) Server 2006 and Internet Application Gateway 2007.

The term storage virtualization is also used quite broadly. In a general sense, it means
providing a logical, abstracted view of physical storage devices, and so anything other than a
locally attached disk drive might be viewed in this light. A simple example is folder redirection
in Windows, which lets the information in a folder be stored on any network-accessible drive.
Much more powerful (and more complex) approaches also fit into this category, including
storage area networks (SANs) and others. However it’s done, the benefits of storage
virtualization are analogous to those of every other kind of virtualization: more abstraction and
less direct coupling between components.

Chapter 2
XEN: APPROACH & OVERVIEW

2.1 Virtualization on Linux

Modern computers are sufficiently powerful to use virtualization to present the illusion of many
smaller virtual machines (VMs), each running a separate operating system instance. This has
led to a resurgence of interest in VM technology. Here we present Xen, a high performance
resource-managed virtual machine monitor (VMM) which enables applications such as server
consolidation, co-located hosting facilities, distributed web services, secure computing
platforms and application mobility. Successful partitioning of a machine to support the
concurrent execution of multiple operating systems poses several challenges.
Firstly, virtual machines must be isolated from one another: it is not acceptable for the
execution of one to adversely affect the performance of another. This is particularly true when
virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a
variety of different operating systems to accommodate the heterogeneity of popular
applications. Thirdly, the performance overhead introduced by virtualization should be small.
Xen enables users to dynamically instantiate an operating system to execute whatever they
desire. In the XenoServer project, we are deploying Xen on standard server hardware at
economically strategic locations within ISPs or at Internet exchanges. We perform admission
control when starting new virtual machines and expect each VM to pay in some fashion for the
resources it requires. We discuss our ideas and approach in this direction elsewhere; this chapter
focuses on the VMM.
In a traditional VMM the virtual hardware exposed is functionally identical to the
underlying machine. Although full virtualization has the obvious benefit of allowing
unmodified operating systems to be hosted, it also has a number of drawbacks. This is
particularly true for the prevalent IA-32, or x86, architecture. Support for full virtualization was
never part of the x86 architectural designs. Certain supervisor instructions must be handled by
the VMM for correct virtualization, but executing these with insufficient privilege fails silently
rather than causing a convenient trap. Efficiently virtualizing the x86 MMU is also difficult.
These problems can be solved, but only at the cost of increased complexity and reduced
performance. VMware's ESX Server dynamically rewrites portions of the hosted machine code
to insert traps wherever VMM intervention might be required. This translation is applied to the
entire guest OS kernel (with associated translation, execution, and caching costs) since all non-
trapping privileged instructions must be caught and handled. ESX Server implements
shadow versions of system structures such as page tables and maintains consistency with the
virtual tables by trapping every update attempt. This approach has a high cost for update-
intensive operations such as creating a new application process.
Notwithstanding the intricacies of the x86, there are other arguments against full
virtualization. In particular, there are situations in which it is desirable for the hosted operating
systems to see real as well as virtual resources: providing both real and virtual time allows a
guest OS to better support time-sensitive tasks, and to correctly handle TCP timeouts and RTT
estimates, while exposing real machine addresses allows a guest OS to improve performance by
using super pages or page coloring.
We avoid the drawbacks of full virtualization by presenting a virtual machine abstraction
that is similar but not identical to the underlying hardware, an approach which has been dubbed
Para-virtualization. This promises improved performance, although it does require
modifications to the guest operating system. It is important to note, however, that we do not
require changes to the application binary interface (ABI), and hence no modifications are
required to guest applications.
We distill the discussion so far into a set of design principles:
1. Support for unmodified application binaries is essential, or users will not transition to
Xen. Hence we must virtualize all architectural features required by existing standard
ABIs.
2. Supporting full multi-application operating systems is important, as this allows
complex server configurations to be virtualized within a single guest OS instance.
3. Para-virtualization is necessary to obtain high performance and strong resource isolation
on uncooperative machine architectures such as x86.

4. Even on cooperative machine architectures, completely hiding the effects of resource
virtualization from the guest OS risks both correctness and performance.

Note that our Para-virtualized x86 abstraction is quite different from that proposed by the
recent Denali project. Denali is designed to support thousands of virtual machines running
network services, the vast majority of which are small-scale and unpopular.
In contrast, Xen is intended to scale to approximately 100 virtual machines running industry
standard applications and services. Given these very different goals, it is instructive to contrast
Denali's design choices with our own principles.
Firstly, Denali does not target existing ABIs, and so can elide certain architectural features
from their VM interface. For example, Denali does not fully support x86 segmentation although
it is exported (and widely used) in the ABIs of NetBSD, Linux, and Windows XP.
Secondly, the Denali implementation does not address the problem of supporting application
multiplexing, nor multiple address spaces, within a single guest OS. Rather, applications are
linked explicitly against an instance of the Ilwaco guest OS in a manner rather reminiscent of a
libOS in the Exokernel. Hence each virtual machine essentially hosts a single-user single-
application unprotected operating system. In Xen, by contrast, a single virtual machine hosts a
real operating system which may itself securely multiplex thousands of unmodified user-level
processes. Although a prototype virtual MMU has been developed which may help Denali in
this area, we are unaware of any published technical details or evaluation.
Thirdly, in the Denali architecture the VMM performs all paging to and from disk. This is
perhaps related to the lack of memory management support at the virtualization layer.

Memory Management
• Segmentation: Cannot install fully-privileged segment descriptors and cannot overlap
with the top end of the linear address space.
• Paging: Guest OS has direct read access to hardware page tables, but updates are
batched and validated by the hypervisor. A domain may be allocated discontiguous
machine pages.
CPU
• Protection: Guest OS must run at a lower privilege level than Xen.
• Exceptions: Guest OS must register a descriptor table for exception handlers with Xen.
Aside from page faults, the handlers remain the same.
• System Calls: Guest OS may install a `fast' handler for system calls, allowing direct
calls from an application into its guest OS and avoiding indirection through Xen on
every call.
• Interrupts: Hardware interrupts are replaced with a lightweight event system.
• Time: Each guest OS has a timer interface and is aware of both `real' and `virtual' time.
Device I/O
• Virtual devices are elegant and simple to access. Data is transferred using
asynchronous I/O rings. An event mechanism replaces hardware interrupts for
notifications.
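The paging rule above, where the guest reads its page tables directly but writes go through batched updates that the hypervisor validates, can be sketched as follows. This is a conceptual model with invented names, not Xen's real hypercall interface.

```python
class Hypervisor:
    """Toy model of Xen-style paging: the guest reads its page table
    directly, but must submit writes in a batch that the hypervisor
    validates before applying."""
    def __init__(self, machine_pages_owned):
        self.owned = set(machine_pages_owned)   # pages this domain may map

    def pt_update_batch(self, page_table, updates):
        applied = 0
        for virt, machine_page in updates:
            if machine_page in self.owned:      # validation step
                page_table[virt] = machine_page
                applied += 1
        return applied                          # invalid entries are rejected

# The domain owns a discontiguous set of machine pages.
xen = Hypervisor(machine_pages_owned={7, 9, 42})
guest_pt = {}
ok = xen.pt_update_batch(
    guest_pt,
    [(0x1000, 42), (0x2000, 7), (0x3000, 99)]   # 99 is not owned: rejected
)
```

Batching amortizes the cost of entering the hypervisor across many updates, while validation prevents a domain from mapping pages it does not own.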

Paging within the VMM is contrary to our goal of performance isolation: malicious virtual
machines can encourage thrashing behavior, unfairly depriving others of CPU time and disk
bandwidth. In Xen we expect each guest OS to perform its own paging using its own
guaranteed memory reservation and disk allocation.
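The isolation argument can be modeled simply: each guest evicts pages only within its own guaranteed reservation, so a thrashing neighbor costs the others nothing. The classes and numbers below are illustrative only.

```python
class Guest:
    """Toy model: each guest pages only within its own fixed reservation,
    so one guest's thrashing cannot consume another's memory or disk."""
    def __init__(self, name: str, reserved_pages: int):
        self.name = name
        self.reserved = reserved_pages
        self.resident = []                    # pages currently in memory
        self.evictions = 0

    def touch(self, page) -> None:
        if page in self.resident:
            return
        if len(self.resident) >= self.reserved:
            self.resident.pop(0)              # evict within own quota only
            self.evictions += 1
        self.resident.append(page)

noisy = Guest("noisy", reserved_pages=2)
quiet = Guest("quiet", reserved_pages=4)
for p in range(100):                          # the noisy guest thrashes...
    noisy.touch(p)
quiet.touch("a")                              # ...the quiet guest is unaffected
```

Contrast this with paging inside the VMM, where the noisy guest's evictions would compete with everyone else's for shared disk bandwidth.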
Finally, Denali virtualizes the “namespaces” of all machine resources, taking the view that
no VM can access the resource allocations of another VM if it cannot name them (for example,
VMs have no knowledge of hardware addresses, only the virtual addresses created for them by
Denali). In contrast, we believe that secure access control within the hypervisor is sufficient to
ensure protection; furthermore, as discussed previously, there are strong correctness and
performance arguments for making physical resources directly visible to the guest OS.
In the following section we describe the virtual machine abstraction exported by Xen and
discuss how a guest OS must be modified to conform to this. Note that we reserve
the term guest operating system to refer to one of the OSes that Xen can host, and we use the term
domain to refer to a running virtual machine within which a guest OS executes; the distinction
is analogous to that between a program and a process in a conventional system. We call Xen
itself the hypervisor since it operates at a higher privilege level than the supervisor code of the
guest operating systems that it hosts.

2.2 DETAILED DESIGN

In this section we introduce the design of the major subsystems that make up a Xen-based
server. In each case we present both Xen and guest OS functionality for clarity of exposition.
The current discussion of guest OSes focuses on XenoLinux as this is the most mature;
nonetheless our ongoing porting of Windows XP and NetBSD gives us confidence that Xen is
guest OS agnostic.

2.2.1 Control Transfer: Hypercalls and Events

Two mechanisms exist for control interactions between Xen and an overlying domain:
synchronous calls from a domain to Xen may be made using a hypercall, while notifications are
delivered to domains from Xen using an asynchronous event mechanism.
The hypercall interface allows domains to perform a synchronous software trap into the
hypervisor to perform a privileged operation, analogous to the use of system calls in
conventional operating systems.
An example use of a hypercall is to request a set of page table updates, in which Xen
validates and applies a list of updates, returning control to the calling domain when this is
completed. Communication from Xen to a domain is provided through an asynchronous event
mechanism, which replaces the usual delivery mechanisms for device interrupts and allows
lightweight notification of important events such as domain-termination requests. Akin to
traditional Unix signals, there are only a small number of events, each acting to flag a particular
type of occurrence. For instance, events are used to indicate that new data has been received
over the network, or that a virtual disk request has completed.
Pending events are stored in a per-domain bitmask which is updated by Xen before invoking
an event-callback handler specified by the guest OS. The callback handler is responsible for
resetting the set of pending events, and responding to the notifications in an appropriate
manner. A domain may explicitly defer event handling by setting a Xen-readable software flag:
this is analogous to disabling interrupts on a real processor.
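The pending-event bitmask and deferral flag described above can be sketched as a toy model. This is purely illustrative (the class and method names are invented for this example; the real mechanism operates on shared memory and processor upcalls, not Python objects):

```python
class Domain:
    """Toy model of Xen's per-domain pending-event bitmask and callback."""
    def __init__(self, handler):
        self.pending = 0          # per-domain bitmask of pending events
        self.deferred = False     # Xen-readable flag: defer event delivery
        self.handler = handler    # guest-registered event-callback handler

    def deliver(self, event_bit):
        """Xen side: set the pending bit, then upcall unless deferred."""
        self.pending |= (1 << event_bit)
        if not self.deferred:
            self._upcall()

    def enable_events(self):
        """Guest side: re-enable delivery and drain anything pending."""
        self.deferred = False
        if self.pending:
            self._upcall()

    def _upcall(self):
        events, self.pending = self.pending, 0   # handler resets the pending set
        self.handler(events)

received = []
dom = Domain(handler=received.append)
dom.deliver(0)                 # e.g. "network data received"
dom.deferred = True            # analogous to disabling interrupts
dom.deliver(3)                 # e.g. "virtual disk request completed" (queued)
dom.enable_events()            # queued event drained on re-enable
print(received)                # -> [1, 8]
```

Note how the second event is held in the bitmask while delivery is deferred, mirroring a guest running with interrupts disabled.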


2.2.2 Data Transfer: I/O Rings


The presence of a hypervisor means there is an additional protection domain between guest OS
and I/O devices, so it is crucial that a data transfer mechanism be provided that allows data to
move vertically through the system with as little overhead as possible.
Two main factors have shaped the design of our I/O-transfer mechanism: resource management
and event notification. For resource accountability, we attempt to minimize the work required
to demultiplex data to a specific domain when an interrupt is received from a device. The
overhead of managing buffers is carried out later where computation may be accounted to the
appropriate domain.
Similarly, memory committed to device I/O is provided by the relevant domains wherever
possible to prevent the crosstalk inherent in shared buffer pools; I/O buffers are protected
during data transfer by pinning the underlying page frames within Xen.

Figure 5: I/O Rings


This Figure shows the structure of our I/O descriptor rings. A ring is a circular queue of
descriptors allocated by a domain but accessible from within Xen. Descriptors do not directly
contain I/O data; instead, I/O data buffers are allocated out-of-band by the guest OS and
indirectly referenced by I/O descriptors. Access to each ring is based around two pairs of
producer-consumer pointers: domains place requests on a ring, advancing a request producer
pointer, and Xen removes these requests for handling, advancing an associated request
consumer pointer. Responses are placed back on the ring similarly, save with Xen as the
producer and the guest OS as the consumer. There is no requirement that requests be processed
in order: the guest OS associates a unique identifier with each request which is reproduced in
the associated response. This allows Xen to unambiguously reorder I/O operations due to
scheduling or priority considerations.
This structure is sufficiently generic to support a number of different device paradigms. For
example, a set of `requests' can provide buffers for network packet reception; subsequent
`responses' then signal the arrival of packets into these buffers. Reordering is useful when
dealing with disk requests as it allows them to be scheduled within Xen for efficiency, and the
use of descriptors with out-of-band buffers makes implementing zero-copy transfer easy. We
decouple the production of requests or responses from the notification of the other party: in the
case of requests, a domain may enqueue multiple entries before invoking a hypercall to alert
Xen; in the case of responses, a domain can defer delivery of a notification event by specifying
a threshold number of responses. This allows each domain to trade off latency and throughput
requirements, similarly to the flow-aware interrupt dispatch in the ArseNIC Gigabit Ethernet
interface.
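The descriptor-ring behaviour above can be illustrated with a simplified model. Python queues stand in for the shared circular buffer and its two producer-consumer pointer pairs, and all names are invented for the sketch; the point is that requests carry a guest-chosen identifier, so Xen may service them out of order:

```python
from collections import deque

class IORing:
    """Toy I/O descriptor ring: the guest produces requests, Xen produces
    responses. Each request carries a unique guest-chosen id which is
    reproduced in the response, so responses may return out of order."""
    def __init__(self):
        self.requests = deque()   # guest -> Xen direction
        self.responses = deque()  # Xen -> guest direction

    def push_request(self, req_id, sector):
        """Guest side: enqueue a descriptor (data buffers live out-of-band)."""
        self.requests.append((req_id, sector))

    def service_all(self):
        """Xen side: drain the ring, possibly reordering for efficiency
        (here, a trivial sort by sector stands in for disk scheduling)."""
        served = sorted(self.requests, key=lambda r: r[1])
        self.requests.clear()
        for req_id, sector in served:
            self.responses.append((req_id, f"data@{sector}"))

ring = IORing()
ring.push_request(1, 900)   # guest tags each request with a unique id
ring.push_request(2, 100)
ring.service_all()          # Xen reorders: sector 100 is serviced first
print([rid for rid, _ in ring.responses])   # -> [2, 1]
```

The guest matches each response back to its originating request by id, which is exactly what makes the reordering unambiguous.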

2.3 Subsystem Virtualization

The control and data transfer mechanisms described are used in our virtualization of the various
subsystems. In the following, we discuss how this virtualization is achieved for CPU, timers,
memory, network and disk.
2.3.1 CPU scheduling
Xen currently schedules domains according to the Borrowed Virtual Time (BVT) scheduling
algorithm. We chose this particular algorithm because it is both work-conserving and has a
special mechanism for low-latency wake-up (or dispatch) of a domain when it receives an
event. Fast dispatch is particularly important to minimize the effect of virtualization on OS
subsystems that are designed to run in a timely fashion; for example, TCP relies on the timely
delivery of acknowledgments to correctly estimate network round-trip times. BVT provides
low-latency dispatch by using virtual-time warping, a mechanism which temporarily violates
`ideal' fair sharing to favor recently-woken domains. However, other scheduling algorithms
could be trivially implemented over our generic scheduler abstraction. Per-domain scheduling
parameters can be adjusted by management software running in Domain0.
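The core of BVT dispatch can be sketched in a few lines. This is a minimal illustration of virtual-time warping only (real BVT also involves weights, context-switch allowances and limits on warp time, all omitted here; the names are invented):

```python
# Minimal sketch of Borrowed Virtual Time dispatch: the runnable domain with
# the smallest *effective* virtual time runs next, and a warp credit lets a
# recently-woken domain temporarily jump the queue.
class Vcpu:
    def __init__(self, name, warp=0):
        self.name = name
        self.avt = 0          # actual virtual time consumed so far
        self.warp = warp      # warp credit for low-latency wake-up
        self.warped = False   # set when an event wakes this domain

    def evt(self):
        # Warping temporarily violates ideal fair sharing in favour of
        # the recently-woken domain.
        return self.avt - (self.warp if self.warped else 0)

def pick(domains):
    return min(domains, key=lambda d: d.evt())

a = Vcpu("batch")
b = Vcpu("interactive", warp=50)
a.avt, b.avt = 10, 40              # the batch domain has consumed less time
assert pick([a, b]).name == "batch"
b.warped = True                    # an event wakes the interactive domain
assert pick([a, b]).name == "interactive"   # evt = 40 - 50 < 10
```

This is why, for example, a domain handling TCP acknowledgments can be dispatched promptly even when it is behind on its fair share.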

2.3.2 Time and timers


Xen provides guest OSes with notions of real time, virtual time and wall-clock time. Real time is
expressed in nanoseconds since machine boot; it is maintained to the accuracy of the
processor's cycle counter and can be frequency-locked to an external time source (for example,
via NTP). A domain's virtual time only advances while it is executing: this is typically used by
the guest OS scheduler to ensure correct sharing of its time-slice between application processes.
Finally, wall-clock time is specified as an offset to be added to the current real time. This
allows the wall-clock time to be adjusted without affecting the forward progress of real time.
Each guest OS can program a pair of alarm timers, one for real time and the other for virtual
time. Guest OSes are expected to maintain internal timer queues and use the Xen-provided alarm
timers to trigger the earliest timeout. Timeouts are delivered using Xen's event mechanism.
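The relationship between the three time notions can be captured in a short sketch, assuming the semantics stated above: wall-clock time is real time plus an adjustable offset, and virtual time advances only while the domain executes (names invented for illustration):

```python
# Sketch of the three time notions Xen exports to a guest.
class DomainClock:
    def __init__(self):
        self.real_ns = 0          # nanoseconds since machine boot
        self.virt_ns = 0          # advances only while this domain executes
        self.wall_offset_ns = 0   # adjusting this never disturbs real time

    def tick(self, ns, running):
        self.real_ns += ns
        if running:
            self.virt_ns += ns

    def wall_clock_ns(self):
        return self.real_ns + self.wall_offset_ns

clk = DomainClock()
clk.tick(1_000, running=True)
clk.tick(2_000, running=False)    # descheduled: virtual time stands still
clk.wall_offset_ns = 500          # e.g. a clock adjustment made by the guest
print(clk.real_ns, clk.virt_ns, clk.wall_clock_ns())   # -> 3000 1000 3500
```

Keeping the wall-clock adjustment as an offset is what lets real time remain strictly monotonic while the guest's visible clock is corrected.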

2.3.3 Virtual address translation


As with other subsystems, Xen attempts to virtualize memory access with as little overhead as
possible. As discussed in Section 2.1.1, this goal is made somewhat more difficult by the x86
architecture's use of hardware page tables. The approach taken by VMware is to provide each
guest OS with a virtual page table, not visible to the memory-management unit (MMU) [10].
The hypervisor is then responsible for trapping accesses to the virtual page table, validating
updates, and propagating changes back and forth between it and the MMU-visible `shadow'
page table. This greatly increases the cost of certain guest OS operations, such as creating new
virtual address spaces, and requires explicit propagation of hardware updates to `accessed' and
`dirty' bits.
Although full virtualization forces the use of shadow page tables, to give the illusion of
contiguous physical memory, Xen is not so constrained. Indeed, Xen need only be involved in
page table updates, to prevent guest OSes from making unacceptable changes.


Thus we avoid the overhead and additional complexity associated with the use of shadow page
tables. The approach in Xen is to register guest OS page tables directly with the MMU, and
restrict guest OS to read-only access. Page table updates are passed to Xen via a hypercall; to
ensure safety, requests are validated before being applied.
To aid validation, we associate a type and reference count with each machine page frame. A
frame may have any one of the following mutually-exclusive types at any point in time: page
directory (PD), page table (PT), local descriptor table (LDT), global descriptor table (GDT), or
writable (RW). Note that a guest OS may always create readable mappings to its own page
frames, regardless of their current types. A frame may only safely be retasked when its
reference count is zero. This mechanism is used to maintain the invariants required for safety;
for example, a domain cannot have a writable mapping to any part of a page table as this would
require the frame concerned to simultaneously be of types PT and RW.
The type system is also used to track which frames have already been validated for use in
page tables. To this end, guest OSes indicate when a frame is allocated for page-table use; this
requires a one-off validation of every entry in the frame by Xen, after which its type is pinned
to PD or PT as appropriate, until a subsequent unpin request from the guest OS. This is
particularly useful when changing the page table base pointer, as it obviates the need to validate
the new page table on every context switch. Note that a frame cannot be retasked until it is both
unpinned and its reference count has reduced to zero; this prevents guest OSes from using unpin
requests to circumvent the reference-counting mechanism.
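The per-frame invariants described above can be modelled compactly. This is an illustrative sketch only (the real check lives in the hypervisor's validation path, and the class names are invented): a frame carries one mutually-exclusive type, and it may only be retasked once it is unpinned and its reference count is zero.

```python
# Toy model of Xen's per-frame type and reference-count invariants.
class Frame:
    TYPES = {"PD", "PT", "LDT", "GDT", "RW"}

    def __init__(self):
        self.ftype, self.refs, self.pinned = "RW", 0, False

    def set_type(self, new_type):
        assert new_type in self.TYPES
        if self.ftype != new_type:
            # Retasking requires no outstanding references and no pin:
            # a frame cannot simultaneously be, say, PT and RW.
            if self.refs != 0 or self.pinned:
                raise PermissionError("frame still in use")
            self.ftype = new_type

f = Frame()
f.set_type("PT")          # validated once, then usable as a page table
f.pinned = True           # pinned: no revalidation on each context switch
try:
    f.set_type("RW")      # a page table may never also be writable
except PermissionError as e:
    print("rejected:", e)
```

The rejected transition is exactly the invariant the text describes: no domain can hold a writable mapping to a live page table.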
To minimize the number of hypercalls required, guest OS can locally queue updates before
applying an entire batch with a single hypercall. This is particularly beneficial when creating
new address spaces. However, we must ensure that updates are committed early enough to
guarantee correctness. Fortunately, a guest OS will typically execute a TLB flush before the
first use of a new mapping: this ensures that any cached translation is invalidated. Hence,
committing pending updates immediately before a TLB flush usually suffices for correctness.
However, some guest OSes elide the flush when it is certain that no stale entry exists in the TLB.
In this case it is possible that the first attempted use of the new mapping will cause a page-not-
present fault. Hence the guest OS fault handler must check for outstanding updates; if any are
found then they are flushed and the faulting instruction is retried.
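The batching scheme can be sketched as follows (an illustrative model with invented names; the real queue lives in the guest kernel and the commit is an actual hypercall): updates accumulate locally and are flushed in one call, at the latest when the guest performs a TLB flush.

```python
# Sketch of batched page-table updates: the guest queues updates locally and
# commits the whole batch with a single hypercall.
class PTUpdateQueue:
    def __init__(self):
        self.pending = []
        self.hypercalls = 0       # count of traps into the hypervisor
        self.committed = []

    def queue(self, entry, value):
        self.pending.append((entry, value))

    def commit(self):
        """One hypercall validates and applies the entire batch."""
        if self.pending:
            self.hypercalls += 1
            self.committed.extend(self.pending)
            self.pending.clear()

    def tlb_flush(self):
        """Committing immediately before a TLB flush preserves correctness."""
        self.commit()

q = PTUpdateQueue()
for entry in range(8):
    q.queue(entry, entry * 0x1000)   # e.g. populating a new address space
q.tlb_flush()
print(q.hypercalls, len(q.committed))   # -> 1 8
```

Eight updates cost one trap instead of eight; a guest that elides the flush would instead trigger the commit from its page-fault handler, as the text notes.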


2.3.4 Physical memory


The initial memory allocation, or reservation, for each domain is specified at the time of its
creation; memory is thus statically partitioned between domains, providing strong isolation. A
maximum allowable reservation may also be specified: if memory pressure within a domain
increases, it may then attempt to claim additional memory pages from Xen, up to this
reservation limit. Conversely, if a domain wishes to save resources, perhaps to avoid incurring
unnecessary costs, it can reduce its memory reservation by releasing memory pages back to
Xen.
XenoLinux implements a balloon driver, which adjusts a domain's memory usage by
passing memory pages back and forth between Xen and XenoLinux's page allocator. Although
we could modify Linux's memory-management routines directly, the balloon driver makes
adjustments by using existing OS functions, thus simplifying the Linux porting effort.
However, para-virtualization can be used to extend the capabilities of the balloon driver; for
example, the out-of-memory handling mechanism in the guest OS can be modified to
automatically alleviate memory pressure by requesting more memory from Xen.
Most operating systems assume that memory comprises at most a few large contiguous
extents. Because Xen does not guarantee to allocate contiguous regions of memory, guest OSes
will typically create for themselves the illusion of contiguous physical memory, even though their
underlying allocation of hardware memory is sparse. Mapping from physical to hardware
addresses is entirely the responsibility of the guest OS, which can simply maintain an array
indexed by physical page frame number. Xen supports efficient hardware-to-physical mapping
by providing a shared translation array that is directly readable by all domains. Updates to this
array are validated by Xen to ensure that the OS concerned owns the relevant hardware page
frames.
Note that even if a guest OS chooses to ignore hardware addresses in most cases, it must
use the translation tables when accessing its page tables (which necessarily use hardware
addresses). Hardware addresses may also be exposed to limited parts of the OS's memory-
management system to optimize memory access. For example, a guest OS might allocate
particular hardware pages so as to optimize placement within a physically indexed cache, or
map naturally aligned contiguous portions of hardware memory using super-pages.
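The two mapping directions described in this section can be sketched together. This is an invented, simplified model: each guest keeps its own physical-to-hardware array, while the hardware-to-physical translation array is shared and read-only, with updates validated by Xen against frame ownership.

```python
# Sketch of the shared hardware-to-physical translation array, with updates
# validated so a domain can only describe frames it actually owns.
class MachineMemory:
    def __init__(self, num_frames):
        self.owner = [None] * num_frames       # hardware frame -> owning domain
        self.hw_to_phys = [0] * num_frames     # shared, readable by all domains

    def update_translation(self, domain, hw_frame, phys_frame):
        if self.owner[hw_frame] != domain:     # validated by Xen
            raise PermissionError("domain does not own this frame")
        self.hw_to_phys[hw_frame] = phys_frame

mm = MachineMemory(8)
mm.owner[5] = "domA"                   # a sparse hardware allocation...
mm.update_translation("domA", 5, 0)    # ...presented as physical frame 0
try:
    mm.update_translation("domB", 5, 3)
except PermissionError:
    print("update rejected")           # -> update rejected
```

The validation step is what keeps the shared array trustworthy even though every domain can read it directly.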

2.3.5 Network
Xen provides the abstraction of a virtual firewall-router (VFR), where each domain has one or
more network interfaces (VIFs) logically attached to the VFR. A VIF looks somewhat like a
modern network interface card: there are two I/O rings of buffer descriptors, one for transmit
and one for receive. Each direction also has a list of associated rules of the form (<pattern>,
<action>): if the pattern matches, then the associated action is applied. Domain0 is responsible
for inserting and removing rules. In typical cases, rules will be installed to prevent IP source
address spoofing, and to ensure correct de-multiplexing based on destination IP address and
port. Rules may also be associated with hardware interfaces on the VFR. In particular, we may
install rules to perform traditional firewalling functions such as preventing incoming
connection attempts on insecure ports.
To transmit a packet, the guest OS simply enqueues a buffer descriptor onto the transmit ring.
Xen copies the descriptor and, to ensure safety, then copies the packet header and executes any
matching filter rules. The packet payload is not copied since we use scatter-gather DMA;
however note that the relevant page frames must be pinned until transmission is complete. To
ensure fairness, Xen implements a simple round-robin packet scheduler.
To efficiently implement packet reception, we require the guest OS to exchange an unused page
frame for each packet it receives; this avoids the need to copy the packet between Xen and the
guest OS, although it requires that page-aligned receive buffers be queued at the network
interface. When a packet is received, Xen immediately checks the set of receive rules to
determine the destination VIF, and exchanges the packet buffer for a page frame on the relevant
receive ring. If no frame is available, the packet is dropped.
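The (pattern, action) rule matching on the receive path can be sketched as a small function. The rule shapes and field names here are invented for illustration; the real VFR matches on packet headers in the hypervisor. The first matching rule determines the destination VIF; a packet that matches nothing is dropped.

```python
# Toy model of VFR receive-rule matching: first matching (pattern, action)
# pair wins, otherwise the packet is dropped.
def route(packet, rules):
    for pattern, action in rules:
        if all(packet.get(k) == v for k, v in pattern.items()):
            return action
    return "drop"

rules = [
    # de-multiplex on destination IP address and port, as in the text
    ({"dst_ip": "10.0.0.2", "dst_port": 80}, "deliver:vif-domA"),
    ({"dst_ip": "10.0.0.3"},                 "deliver:vif-domB"),
]
print(route({"dst_ip": "10.0.0.2", "dst_port": 80}, rules))  # -> deliver:vif-domA
print(route({"dst_ip": "10.0.0.9"}, rules))                  # -> drop
```

In the real system the "deliver" action is the page exchange described above: the packet buffer is swapped for an unused page frame queued on the destination VIF's receive ring.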

2.3.6 Disk
Only Domain0 has direct unchecked access to physical (IDE and SCSI) disks. All other
domains access persistent storage through the abstraction of virtual block devices (VBDs),
which are created and configured by management software running within Domain0. Allowing
Domain0 to manage the VBDs keeps the mechanisms within Xen very simple and avoids more
intricate solutions such as the UDFs used by the Exo-kernel.
A VBD comprises a list of extents with associated ownership and access control
information, and is accessed via the I/O ring mechanism. A typical guest OS disk scheduling
algorithm will reorder requests prior to enqueuing them on the ring in an attempt to reduce
response time, and to apply differentiated service (for example, it may choose to aggressively
schedule synchronous metadata requests at the expense of speculative read ahead requests).
However, because Xen has more complete knowledge of the actual disk layout, we also support
reordering within Xen, and so responses may be returned out of order. A VBD thus appears to
the guest OS somewhat like a SCSI disk.
A translation table is maintained within the hypervisor for each VBD; the entries within
this table are installed and managed by Domain0 via a privileged control interface. On
receiving a disk request, Xen inspects the VBD identifier and offset and produces the
corresponding sector address and physical device. Permission checks also take place at this
time. Zero-copy data transfer takes place using DMA between the disk and pinned memory
pages in the requesting domain.
Xen services batches of requests from competing domains in a simple round-robin fashion;
these are then passed to a standard elevator scheduler before reaching the disk hardware.
Domains may explicitly pass down reorder barriers to prevent reordering when this is necessary
to maintain higher level semantics (e.g. when using a write-ahead log). The low-level
scheduling gives us good throughput, while the batching of requests provides reasonably fair
access. Future work will investigate providing more predictable isolation and differentiated
service, perhaps using existing techniques and schedulers.
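The two steps described in this section, per-VBD translation with permission checks followed by round-robin service across domains, can be sketched together. All tables, extents and queue contents below are invented example data:

```python
# Sketch of VBD request handling: translate (vbd, offset) through a per-VBD
# table, check permissions, then service per-domain batches round-robin.
from itertools import cycle

vbd_table = {          # vbd id -> (physical device, base sector, writable?)
    "vbd0": ("sda", 10_000, True),
    "vbd1": ("sdb", 0, False),
}

def translate(vbd, offset, write):
    dev, base, writable = vbd_table[vbd]
    if write and not writable:          # permission check at lookup time
        raise PermissionError("read-only VBD")
    return dev, base + offset

queues = {"domA": [("vbd0", 1, False), ("vbd0", 2, False)],
          "domB": [("vbd1", 7, False)]}
order = []
for dom in cycle(sorted(queues)):       # simple round-robin across domains
    if not any(queues.values()):
        break
    if queues[dom]:
        order.append((dom, translate(*queues[dom].pop(0))))
print(order)
```

In the real system the translated requests would then pass through an elevator scheduler before reaching the disk; the round-robin draining shown here is what provides the fairness between competing domains.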


Chapter 3
Microsoft’s Virtualization Roadmap

Microsoft’s virtualization roadmap combines:

• A long-term vision for how customers can drastically reduce the complexity of IT
infrastructure as a part of the overall Dynamic Systems Initiative.

• A solid product roadmap that offers valuable current and near-term solutions, enabling
customers to take a series of practical steps in line with the long-term vision.

Microsoft is delivering application development tools, server applications, operating
systems, and management solutions that provide immediate improvements to address the
complexity in customers’ IT environments. As a part of the virtualization solutions, customers
will see improvements in the current product offering, Virtual Server 2005 R2; new
advanced products such as System Center Virtual Machine Manager that will address key
management challenges; and Windows Server virtualization as a part of Windows Server
“Longhorn”, which will provide an improved virtualization platform with increased scalability,
performance and reliability.

With hardware capacity growing and more robust virtualization platforms and management
capabilities, more customers can benefit from consolidation, easier management and
automation capabilities. Virtualization is a key technology for reducing the cost and complexity
of IT management, and Microsoft has committed significant resources to making virtualization
more broadly accessible and affordable for customers. Microsoft’s virtualization solutions
increase hardware utilization and enable organizations to rapidly configure and deploy new
servers, with the following key benefits:


Efficient use of hardware resources.

Virtual machine isolation and resource management enable multiple workloads to
coexist on fewer servers, allowing organizations to make more efficient use of their
hardware resources. Windows Server virtualization, a part of Windows Server
“Longhorn”, and Virtual Server 2005 R2 with Windows Server 2003 provide the
broadest interoperability with existing storage, network and security infrastructures.
With advancements in server hardware such as 64-bit technology, multi-processor and
multi-core systems, virtualization provides an easy way to optimize hardware
utilization.

Enhanced administrative productivity and responsiveness.

Windows Server virtualization enables IT organizations to enhance their administrative
productivity and rapidly deploy new servers to address changing business needs. Easy
integration into existing server management tools, such as System Center Operations
Manager, and sophisticated tools such as System Center Virtual Machine Manager
(SCVMM) facilitate management of Windows virtual machines. The ability to
consolidate workloads in a hardware-agnostic environment and an integrated physical
and virtual IT management framework enable administrators to lower operational
costs and create more agile datacenters.

Well-supported server virtualization solution.

Virtual Server 2005 R2 is extensively tested and supported by Microsoft in conjunction
with its server operating systems and applications. Hence Virtual Server 2005 R2 is a
well-supported virtualization solution both within Microsoft and across the broader ISV
community. With Windows Server virtualization being an integral component of
Windows Server “Longhorn” and Virtual Machine Manager being part of the System
Center family, you can be assured that the upcoming virtualization solutions from
Microsoft will also be extensively tested and well-supported. The use of a common
virtual hard disk (VHD) format ensures investment protection for all the virtual
machines created for Virtual Server with a transparent migration path to Windows
Server virtualization.


A key deliverable for Microsoft’s Dynamic Systems Initiative.

As a part of the Dynamic Systems Initiative (DSI), Microsoft’s industry-wide effort to
dramatically simplify and automate how businesses design, deploy, and operate IT
systems to enable self-managing dynamic systems, Microsoft is providing businesses
with tools to help them more flexibly utilize their hardware resources. Virtual Server
2005 R2, Windows Server virtualization, and Virtual Machine Manager are key
examples of how Microsoft is continuing to deliver technology that results in
improved server hardware utilization and provides for more flexible provisioning of
data center resources.

The next few sections will focus on the key virtualization products, both at the platform and the
management level.

3.1 Virtual Server 2005 R2


Microsoft Virtual Server 2005 R2 is the most cost-effective server virtualization technology
engineered for the Windows Server System™ platform. As a key part of any server
consolidation strategy, Virtual Server increases hardware utilization and enables organizations
to rapidly configure and deploy new servers.

3.1.1 Usage Scenarios

Virtual Server 2005 R2 offers improved hardware efficiency by providing a great solution for
isolation and resource management, which enable multiple workloads to coexist on fewer
servers. Virtual Server can be used to improve operational efficiency in consolidating
infrastructure, applications, and branch office server workloads, consolidating and re-hosting
legacy applications, automating and consolidating software test and development
environments, and reducing disaster impact.

• Consolidate Infrastructure, Application, and Branch Office Server Workloads

Virtual Server enables workload consolidation for infrastructure services, branch office
services, and for disaster recovery environments, resulting in fewer physical systems for
reduced hardware footprint. Virtual Server 2005 R2 is ideal for server consolidation in
both the datacenter and the branch office, allowing organizations to make more efficient
use of their hardware resources. It allows IT organizations to enhance their
administrative productivity and rapidly deploy new servers to address changing
business needs and increases the hardware utilization rates for an optimized IT
infrastructure.

• Consolidate and automate your software test and development environment

Customers across all segments are looking for ways to decrease costs and accelerate
application and infrastructure installations and upgrades, while delivering a
comprehensive level of quality assurance. Virtual Server enables you to consolidate
your test and development server farm and automate the provisioning of virtual
machines, improving hardware utilization and operational flexibility. For developers,
Virtual Server enables easy deployment and testing of a distributed server application
using multiple virtual machines on one physical server.

• Re-host legacy applications

Virtual Server enables migration of legacy operating systems (Windows NT4.0 Server
and Windows 2000 Server) and their associated custom applications from older
hardware to new servers running Windows Server 2003. Virtual Server 2005 R2
delivers the best of both worlds: application compatibility with legacy environments,
while taking advantage of the reliability, manageability and security features of
Windows Server 2003 running on the latest hardware. Virtual Server 2005 R2 delivers
this capability by enabling customers to run legacy applications in their native software
environment in virtual machines, without rewriting application logic, reconfiguring
networks or retraining end users. This gives customers time to refresh older
infrastructure systems first, then either upgrade or rewrite out-of-service applications on
a timetable that best fits their business needs. Virtual Server 2005 R2 enables better
customer choice for legacy application migration with outstanding application
compatibility.

• Disaster Recovery solutions

Virtual Server 2005 R2 can be used as part of a disaster recovery plan that requires
application portability and flexibility across hardware platforms. Consolidating physical
servers onto fewer physical machines running virtual machines decreases the number of
physical assets that must be available in a disaster recovery location. In the event of
recovery, virtual machines can be hosted anywhere, on host machines other than those
affected by a disaster, speeding up recovery times and maximizing organization
flexibility.

3.1.2 Key Features

Virtualization facilitates broad device compatibility and complete support for Windows server
environments.

Virtual machine isolation

Virtual machine isolation ensures that if one virtual machine crashes or hangs, it cannot
impact any other virtual machine or the host system. Maximum application
compatibility is achieved through isolation. This allows customers to further leverage
their existing storage, network and security infrastructures.

Broad device compatibility

Virtual Server runs on Windows Server 2003 which supports most Windows Server
Catalog devices, providing compatibility with a wide range of host system hardware.

Multithreaded VMM

Virtual Server’s Virtual Machine Monitor (VMM) provides the software infrastructure to create,
manage and interact with virtual machines on multi-processor hardware.


Broad x86 guest OS compatibility

Virtual Server can run all major x86 operating systems in the virtual machine guest
environment. Microsoft will also support specific distributions of Linux running in the
virtual machine environment.

iSCSI clustering

Flexible clustering scenarios provide high availability for mission-critical environments
while improving patching and hardware maintenance processes. iSCSI clustering
between physical hosts of Virtual Server 2005 R2 offers a cost-effective means of
increasing server availability.

Figure 6: Virtual Server 2005 R2: Administration Window

X64 support

Virtual Server 2005 R2 runs on the following 64-bit host operating systems: Windows
Server 2003 Standard x64 Edition, Windows Server 2003 Enterprise x64 Edition, and
Windows XP Professional x64 Edition, providing increased performance and memory
headroom.

Comprehensive COM API

Enables complete scripted control of virtual machine environments. Virtual Server
supports a full-featured Component Object Model (COM) Application Programming
Interface (API) that contains 42 interfaces and hundreds of calls, allowing scripts to
control nearly every aspect of the product.

Virtual Hard Disks (VHDs)


Virtual Server encapsulates virtual machines in portable Virtual Hard Disks, enabling
flexible configuration, versioning and deployment.

PXE Boot

The emulated network card in Virtual Server 2005 R2 now supports Pre-boot Execution
Environment (PXE) boot. This network boot allows customers to provision their virtual
machines in all of the same ways that they do their physical servers.

Active Directory integration

Virtual machines in Virtual Server function the way you would expect a physical
machine to, offering full Active Directory integration. This level of integration enables
delegated administration and secure, authenticated guest access.

Microsoft Operations Manager 2005 Management Pack for Virtual Server

A management pack developed specifically for Virtual Server enables advanced
management features within virtual machines.

3.2 Windows Server Virtualization


Windows Server virtualization is a hypervisor-based technology that is a part of Windows
Server “Longhorn”. The Windows hypervisor is a thin layer of software running directly on the
hardware that works in conjunction with an optimized instance of Windows Server
“Longhorn” to allow multiple operating system instances to run on a physical server
simultaneously. It leverages powerful processor enhancements to provide customers with
a scalable, reliable, secure and highly available virtualization platform.

3.2.1 Usage Scenarios

Windows Server virtualization is integrated as the virtualization role in Windows Server
“Longhorn” and provides a more dynamic virtual environment for consolidating workloads. It
provides a virtualization platform that enables improved operational efficiency for workload
consolidation, business continuity management, automating and consolidating software test and
development environments, and creating a dynamic datacenter.

• Production server consolidation

Organizations are looking at production servers in their datacenters and finding overall
hardware utilization levels often between 5% and 15% of the capacity of the server. In
addition, physical constraints such as space and power prevent them from
expanding their datacenters. By consolidating several production servers with Windows
Server virtualization, businesses can benefit from increased hardware utilization and
reduced overall total cost of ownership.

• Business continuity management

IT administrators are always trying to find ways to reduce or eliminate downtime from
their environment. Windows Server virtualization will provide capabilities for efficient
disaster recovery to eliminate downtime. The robust and flexible virtualization
environment created by Windows Server virtualization minimizes the impact of
scheduled and unscheduled downtime.

• Software test and development

One of the biggest areas where virtualization technology will continue to be relevant is
software test and development, where automated and consolidated environments must
be agile enough to accommodate constantly changing requirements. Windows Server
virtualization helps minimize test hardware, improve lifecycle management and
improve test coverage.

• Dynamic datacenter

The rich set of features of Windows Server virtualization combined with the new
management capabilities extended by Virtual Machine Manager enables organizations
to create a more agile infrastructure. Administrators will be able to dynamically add
resources to virtual machines and move them across physical machines transparently
without impacting users.


3.2.2 Key Features

There are several new features in Windows Server virtualization that help create a scalable,
secure and highly available virtualization platform as a part of Windows Server “Longhorn”.
The following are some of the key components and features of Windows Server virtualization.

Windows hypervisor

A very thin layer of software that leverages Windows Server driver support and hardware-assisted virtualization technology. The minimal code base, with no third-party code or drivers, helps create a more secure and robust base for virtualization solutions.

Dynamic resource management

Windows Server virtualization provides the capability to hot add resources such as
CPU, memory, networks and storage to the virtual machines with no downtime.
Combined with the hot add features of Windows Server “Longhorn”, this enables
administrators to manage their hardware resources without impacting their SLA
commitments.

64-bit guest support

A key new feature of the Windows Server virtualization platform is the ability to support 64-bit guests. This enables organizations to virtualize more memory-intensive applications and benefit from the larger memory pool accessible in a 64-bit environment.

Multi-processor guest support

Windows Server virtualization now provides the capability to allocate multiple CPU resources to a single virtual machine, enabling virtualization of multi-threaded applications. This capability, combined with 64-bit guest support, makes Windows Server virtualization a scalable virtualization platform.

Live Migration of virtual machines

Windows Server virtualization will provide the ability to move a virtual machine from
one physical machine to another with minimal downtime. This capability combined
with host clustering of physical machines provides high availability and flexibility to
achieve an agile and dynamic datacenter.

New device virtualization architecture

Windows Server virtualization provides a new virtualized I/O architecture. This provides customers with the highest possible performance with the lowest possible overhead.

Offline VHD manipulation

Windows Server virtualization gives administrators the ability to securely access files within a virtual hard disk without having to instantiate a virtual machine. This gives administrators granular access to VHDs and lets them perform some management tasks offline.

3.3 System Center Virtual Machine Manager


As a part of the System Center family of management products, Virtual Machine Manager facilitates management of Windows virtual machines. It enables increased physical server utilization by allowing simple and fast consolidation onto virtual infrastructure, with integrated identification of consolidation candidates, fast physical-to-virtual (P2V) migration, and intelligent workload placement based on performance knowledge and user-defined business policies. Virtual Machine Manager also enables rapid provisioning of new virtual machines by administrators and end users through a self-service provisioning tool.

3.3.1 Usage Scenarios

Virtual Machine Manager delivers simple and complete support for consolidating physical hardware on virtual infrastructure and optimizing utilization. It also provides rapid provisioning of virtual machines from physical machines, from templates in the image library, or by end users.

• Production server consolidation

As organizations look to consolidate their production servers, Virtual Machine Manager provides a way to transfer knowledge about the system and the environment through the virtualization process and helps maintain knowledge continuity. By consolidating several production servers with Virtual Server 2005 R2 or Windows Server virtualization, businesses reduce overall total cost of ownership and still maintain a unified management framework across their physical and virtual environments.

• Increasing operational agility

Businesses across segments are looking for ways to drive efficiency through their IT
environments and increase operational agility. Virtual Machine Manager provides a
mechanism to enable functionality such as rapid server provisioning, rapid recovery,
and scalable migration capability to make the overall virtual infrastructure robust and
easy to manage.

• Integrated management

Virtual Machine Manager helps create a centralized virtual machine management infrastructure across multiple Virtual Server 2005 R2 and Windows Server virtualization hosts. As organizations adopt virtualization across production, test, and development areas, and as management capabilities grow more sophisticated, administrators can deploy and manage virtual and physical environments in an integrated way.

3.3.2 Key Features

System Center Virtual Machine Manager focuses on the unique requirements of virtual machines and is designed to enable increased physical server utilization, centralized management of virtual machine infrastructure and rapid provisioning of new virtual machines. The following are some of the key features of Virtual Machine Manager.

Consolidation candidate identification

The first step in migrating from a physical datacenter with a one-workload-per-server model is to identify the physical workloads that are appropriate for consolidation onto virtual hardware. The decision depends on factors such as historical performance, peak load characteristics and access patterns. Virtual Machine Manager leverages the existing historical performance data in the System Center Operations Manager database to list the consolidation candidates in rank order.
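The ranking idea can be sketched in a few lines. This is an illustrative toy, not Virtual Machine Manager's actual algorithm; the record fields and thresholds are hypothetical stand-ins for the Operations Manager performance data.

```python
def rank_candidates(servers, avg_threshold=15.0, peak_threshold=60.0):
    """Return servers whose average CPU utilization is below avg_threshold
    and whose peak stays below peak_threshold, best candidates first."""
    candidates = [
        s for s in servers
        if s["avg_cpu"] <= avg_threshold and s["peak_cpu"] <= peak_threshold
    ]
    # Lowest average utilization first: the most under-used server is the
    # strongest consolidation candidate.
    return sorted(candidates, key=lambda s: s["avg_cpu"])

servers = [
    {"name": "web01",  "avg_cpu": 7.0,  "peak_cpu": 35.0},
    {"name": "db01",   "avg_cpu": 55.0, "peak_cpu": 90.0},
    {"name": "file01", "avg_cpu": 4.0,  "peak_cpu": 20.0},
]
print([s["name"] for s in rank_candidates(servers)])  # ['file01', 'web01']
```

Here db01 is excluded because its utilization is already high, and the remaining servers are ordered with the most under-used first.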

Intelligent Placement

The act of assigning and activating a given virtual workload on a physical virtual host server is referred to as placement. Placement is at the crux of maximizing the utilization of physical assets. Virtual Machine Manager brings a deep and holistic approach to placement, combining historical performance data of the virtual workload with intelligence about the virtual host system. Business rules and associated models are also leveraged to determine placement options.
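A simple placement policy can illustrate the idea: keep only hosts with enough free capacity, then pick the tightest fit. This is a deliberately naive sketch, not the product's placement engine; all names and fields are hypothetical.

```python
def place(vm, hosts):
    """Pick a host for vm: filter to hosts with enough free capacity,
    then choose the tightest memory fit so large hosts stay free
    for large workloads. Returns None if no host can take the VM."""
    feasible = [
        h for h in hosts
        if h["free_mem_gb"] >= vm["mem_gb"] and h["free_cpus"] >= vm["cpus"]
    ]
    if not feasible:
        return None
    return min(feasible, key=lambda h: h["free_mem_gb"] - vm["mem_gb"])

hosts = [
    {"name": "host1", "free_mem_gb": 8,  "free_cpus": 4},
    {"name": "host2", "free_mem_gb": 32, "free_cpus": 16},
]
print(place({"mem_gb": 4, "cpus": 2}, hosts)["name"])  # host1
```

A real placement engine would weigh historical load, business rules and many more dimensions; the point here is only the two-step structure of filter-then-score.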

Host Provisioning

Virtual Machine Manager will identify the physical virtual hosts in the enterprise through integrated discovery with Active Directory. This helps organizations easily scale the management of virtual machines and hosts across the datacenter and branch offices.

Central Library

Virtual Machine Manager provides a central repository for all the building blocks of a virtual machine, such as VHDs, offline virtual machines, templates, and even ISO images. Each item in the library carries rich metadata that enables more controlled management of the objects. A template is a new object type that lets an administrator create approved virtual machine configurations that serve as a gold standard for subsequent virtual machine deployments.
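The template concept can be modelled as a small data structure: an approved configuration that every new VM is stamped from. This is purely illustrative; the fields and names are hypothetical, not Virtual Machine Manager's object model.

```python
from dataclasses import dataclass, field

@dataclass
class Template:
    """An approved 'gold standard' configuration for new virtual machines."""
    name: str
    cpus: int
    mem_gb: int
    vhd: str                                  # base virtual hard disk in the library
    tags: dict = field(default_factory=dict)  # searchable metadata

    def instantiate(self, vm_name):
        # Each deployment copies the approved settings; only the identity
        # differs, so every VM starts from the same known-good state.
        return {"name": vm_name, "cpus": self.cpus,
                "mem_gb": self.mem_gb, "base_vhd": self.vhd}

gold = Template("win-web", cpus=2, mem_gb=4, vhd="library/win-web.vhd")
vm = gold.instantiate("web07")
```

Because every VM is derived from the same template, configuration drift between deployments is eliminated by construction.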

Self-service provisioning

Virtual infrastructure is commonly used in test and development environments, where virtual machines are constantly provisioned and torn down for testing purposes. With Virtual Machine Manager, administrators can selectively extend self-service provisioning capabilities to user groups and define quotas. The automated provisioning tool manages the virtual machines through their lifecycle, including teardown.

Chapter 4
x86 Virtualization

x86 virtualization is the method by which x86-based "guest" operating systems can run within another "host" x86 operating system, with little or no modification of the guest OS. The x86 processor architecture did not originally meet the Popek and Goldberg virtualization requirements, which made it very difficult to implement a general virtual machine on an x86 processor. In 2005 and 2006, Intel and AMD added extensions to their respective x86 implementations that resolved this and other virtualization difficulties.

Intel and AMD have independently developed virtualization extensions to the x86
architecture. Though not directly compatible with each other, they serve largely the same
functions. Either will allow a virtual machine hypervisor to run an unmodified guest operating
system without incurring significant emulation performance penalties.

4.1 Intel Virtualization Technology for x86 (Intel VT-x)


Previously codenamed "Vanderpool", VT-x represents Intel's technology for virtualization on the x86 platform. Intel plans to add Extended Page Tables (EPT), a technology for page-table virtualization, in the Nehalem architecture.

The following modern Intel processors include support for VT-x:

• Pentium 4 662 and 672
• Pentium Extreme Edition 955 and 965 (not Pentium 4 Extreme Edition with HT)
• Pentium D 920-960, except 945, 935, 925 and 915
• some models of the Core processor family
• some models of the Core 2 processor family
• Xeon 3000 series
• Xeon 5000 series

• Xeon 7000 series
• some models of the Atom processor family
• all Intel Core i7 processors

Intel Celeron, Pentium Dual-Core and Pentium M processors do not have VT-x.
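Rather than consulting processor model lists, a Linux host can be checked directly via the CPU feature flags in /proc/cpuinfo. The sketch below is a minimal, Linux-specific illustration; the function name is my own, while vmx (VT-x) and svm (AMD-V) are the standard kernel flag strings.

```python
def virtualization_support(cpuinfo_text):
    """Return 'VT-x', 'AMD-V', or None based on CPU feature flags.

    Pass the contents of /proc/cpuinfo (Linux only): the 'vmx' flag
    indicates Intel VT-x and 'svm' indicates AMD-V. Note the flag shows
    what the CPU supports, not whether the BIOS has it enabled."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "VT-x"
            if "svm" in flags:
                return "AMD-V"
    return None

# Typical use on a Linux host:
# print(virtualization_support(open("/proc/cpuinfo").read()))
```

The same check is what tools like `lscpu` perform internally when they report a "Virtualization" line.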

4.1.1 IOMMU

An input/output memory management unit (IOMMU) enables guest virtual machines to use peripheral devices directly, such as Ethernet and accelerated graphics cards, through DMA and interrupt remapping. Both AMD and Intel have released specifications: AMD simply calls its implementation "IOMMU", while Intel calls its implementation "Intel Virtualization Technology for Directed I/O" (VT-d).

A note on 64-bit guests

One can run a 64-bit guest on a 32-bit host OS if the underlying processor runs in 64-bit mode and supports the virtualization extensions; however, not all platforms support this. Note that a 32-bit host OS's memory-addressing limits can become problematic, so users should generally install 64-bit operating systems on 64-bit-capable processors.

Activating the virtualization features

Intel's VT-x feature must be activated in the BIOS before applications can make use of it. Most computer and motherboard/BIOS/chipset manufacturers disable this support by default but provide an option to activate it; some do not. The AMD-V feature may also be controlled by a BIOS setting.

VT-d (Intel's hardware virtualization)

As VT-x enhances virtualization at the CPU level, VT-d does the same at the chipset level: VT-x is a CPU feature, while VT-d is a chipset feature, so a chipset with VT-d is required. A VT-d-capable chipset is not the only requirement, however. Some VT-d-capable chipsets are enabled by default; others have to be enabled by the OEM manufacturer (at the chipset level) before they can be used. The BIOS must also contain the VT-d features. So, for VT-d to work on your system, you need a motherboard with a VT-d-capable chipset that is enabled by default or was enabled by the OEM, together with a BIOS that includes the VT-d features. If the BIOS does not include VT-d support, a firmware update may solve the problem; the BIOS also typically offers an option to enable or disable the feature. For a list of VT-d-capable chipsets, and whether each is enabled by default or depends on the OEM, see the Intel chipset list [3]. Finally, the virtualization software itself has to support VT-d. Software that supports VT-d includes Xen, VMware, KVM, Hyper-V and Parallels (experimental support).
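One way to confirm that all of these layers (chipset, BIOS and kernel) have actually produced a working IOMMU on a Linux host is to check whether the kernel has populated any IOMMU groups in sysfs. The helper below is an illustrative sketch with a hypothetical name, not part of any product discussed here.

```python
import os

def iommu_active(sysfs_root="/sys/kernel/iommu_groups"):
    """Return True if the Linux kernel has populated IOMMU groups,
    i.e. VT-d / AMD's IOMMU is present and enabled end to end
    (chipset + BIOS + kernel). Linux-specific."""
    try:
        # Each subdirectory is one IOMMU group of devices; an empty or
        # missing directory means no working IOMMU was set up at boot.
        return len(os.listdir(sysfs_root)) > 0
    except FileNotFoundError:
        return False

# Typical use on a Linux host:
# print(iommu_active())
```

An equivalent manual check is listing /sys/kernel/iommu_groups or searching the kernel boot log for DMAR/IOMMU messages.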

Intel has addressed the rise in virtualization demand by creating Intel® Virtualization Technology (Intel® VT), a suite of powerful enhancements to Intel® processors, chipsets, and I/O devices that enables hardware-assisted virtualization support from the core platform architecture. The hardware assists that Intel VT provides help hypervisor providers deliver simpler and more robust code, decreasing software overhead and its potential impact on solution performance.
Intel VT comprises:
• Intel® Virtualization Technology for IA-32, Intel® 64 and Itanium® architectures (Intel® VT-x and Intel® VT-i)
• Intel® Virtualization Technology for Directed I/O
• Intel® Virtualization Technology for Connectivity

4.2 Introducing Intel® Virtualization Technology for Connectivity

Intel’s latest addition to its suite of virtualization technologies is Intel Virtualization Technology for Connectivity (Intel® VT for Connectivity). This new collection of I/O virtualization technologies improves overall system performance by streamlining communication between the host CPU and I/O devices within the virtual server. This enables a lowering of CPU utilization, a
reduction of system latency and improved networking and I/O throughput. Intel VT for
Connectivity includes:
• Virtual Machine Device Queues (VMDq)
• Intel® I/O Acceleration Technology
• Single Root I/O Virtualization (SR-IOV) implementation in Intel® devices

4.2.1 Virtual Machine Device Queues (VMDq)

In a traditional virtualization implementation, the hypervisor abstracts the I/O device and shares that hardware resource among multiple virtual machines. To route the packets coming from that shared I/O device, the hypervisor sorts the incoming packets by destination virtual machine and then delivers them accordingly.
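The sorting work described above can be pictured with a small sketch: group incoming packets into per-VM queues by destination MAC address. This is a toy model of the routing decision, not NIC or hypervisor code; all names and fields are hypothetical.

```python
from collections import defaultdict

def sort_packets(packets, mac_to_vm):
    """Group incoming packets into per-VM queues by destination MAC.
    Without VMDq the hypervisor spends CPU cycles doing exactly this
    classification in software; VMDq moves it into the NIC's hardware
    queues, so the hypervisor only forwards pre-sorted groups."""
    queues = defaultdict(list)
    for pkt in packets:
        vm = mac_to_vm.get(pkt["dst_mac"])
        if vm is not None:           # drop packets for unknown destinations
            queues[vm].append(pkt)
    return dict(queues)

mac_to_vm = {"aa:00:00:00:00:01": "vm1", "aa:00:00:00:00:02": "vm2"}
packets = [{"dst_mac": "aa:00:00:00:00:01", "data": "p1"},
           {"dst_mac": "aa:00:00:00:00:02", "data": "p2"}]
print(sort_packets(packets, mac_to_vm))
```

The per-destination grouping is the same whether it runs in hypervisor software or in network silicon; VMDq's contribution is relocating it off the CPU.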
This sorting and grouping in the hypervisor consumes CPU cycles, impacting overall virtual server performance. VMDq technology enhances networking performance and reduces CPU utilization in the virtualized environment. It reduces I/O overhead on the hypervisor in a virtualized server by performing the data sorting in the network silicon, making use of the multiple-queue capability of the network device. With VMDq, as data packets enter the network adapter, they are sorted, and packets for the same destination are grouped together. The packets are then sent to the hypervisor, which directs them to their respective destinations. Relieving the hypervisor of packet filtering improves overall CPU utilization and throughput levels.

Intel® I/O Acceleration Technology

Intel I/O Acceleration Technology (Intel® I/OAT) is a suite of features that improves data acceleration across the platform, from I/O and networking devices to the memory and processors, helping to improve system performance. The features include Intel® Quick-Data Technology, Direct Cache Access (DCA), MSI-X, low-latency interrupts, and Receive Side Coalescing (RSC). Intel Quick-Data Technology
moves data-copy operations from the CPU to the chipset, and DCA enables the CPU to pre-fetch data, thereby avoiding cache misses and improving application response times. MSI-X helps in load-balancing I/O network interrupts, and low-latency interrupts automatically tune interrupt
interval times depending on the latency sensitivity of the data. RSC provides lightweight
coalescing of receive packets, which increases the efficiency of the host network stack.

4.2.2 Single Root I/O Virtualization (SR-IOV) Implementation

Single Root I/O Virtualization (SR-IOV) is a Peripheral Component Interconnect Special Interest Group (PCI-SIG) specification. Intel is actively participating, along with other industry leaders within the PCI-SIG working group, in defining new standards for enhancing the virtualization capabilities of I/O devices. SR-IOV provides a standard mechanism for devices to advertise their ability to be simultaneously shared among multiple virtual machines, and allows the partitioning of a PCI function into many virtual interfaces for the purpose of sharing the resources of a PCI Express* (PCIe) device in a virtual environment. Intel plans to support the SR-IOV specification in its networking devices.
Each virtual function can support a unique and separate data path for I/O-related functions
within the PCI Express* hierarchy. Use of SR-IOV with a networking device, for example,
allows the bandwidth of a single port (function) to be partitioned into smaller slices that may be
allocated to specific virtual machines, or guests, via a standard interface. A common
methodology for configuration and management is also established to further enhance the
interoperability of various devices in a PCIe hierarchy. Such sharing of resources can increase
the total utilization of any given resource presented on an SR-IOV capable PCIe device,
potentially reducing the cost of a virtual system.
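On a Linux host, a device's SR-IOV capability is exposed through standard sysfs attributes such as sriov_totalvfs. The helper below is an illustrative sketch (the function name is my own); it simply reads that attribute for a given network device.

```python
import os

def sriov_total_vfs(netdev, sysfs_root="/sys/class/net"):
    """Number of SR-IOV virtual functions (VFs) the device can expose,
    read from the standard Linux sysfs attribute sriov_totalvfs.
    Returns 0 if the device is not SR-IOV capable. Linux-specific."""
    path = os.path.join(sysfs_root, netdev, "device", "sriov_totalvfs")
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return 0

# Typical use on a Linux host:
# print(sriov_total_vfs("eth0"))
```

Writing a count to the companion attribute sriov_numvfs is how an administrator actually instantiates VFs, which can then be assigned to individual guests.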

4.2.3 Intel® Virtualization Technology for Connectivity (Intel VT for Connectivity)

Figure 7: Intel Virtualization Technology for Connectivity

• Improved data acceleration across the platform
• Hardware assists in the I/O silicon to improve data processing
• Continued investment in hardware assists and offload technologies to improve I/O performance

4.3 AMD virtualization (AMD-V)


AMD markets its virtualization extensions to the 64-bit x86 architecture as "AMD Virtualization", abbreviated AMD-V. The technology is also still referred to by its internal AMD project code name, "Pacifica".

AMD-V operates on the AMD Athlon 64 and Athlon 64 X2 with family "F" or "G" on socket AM2 (not 939), the Turion 64 X2, second- and third-generation Opterons, the Phenom, and all newer processors. Sempron processors do not include support for AMD-V.

On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support AMD-V; prior processors do not have it. Note also that even if your processor supports AMD-V, the feature is often disabled in the BIOS by default, possibly by the computer builder or manufacturer, and must be enabled before use.

AMD has published a specification for a technology named the "I/O Memory Management Unit" (IOMMU) as a complement to AMD-V. It provides a way of configuring interrupt delivery to individual virtual machines and an I/O memory translation unit for preventing a virtual machine from using DMA to break isolation. The IOMMU also plays an important role in advanced operating systems (absent virtualization) and in the AMD Torrenza architecture.

Chapter 5

VMware

5.1 VMware Desktop Technology


5.1.1 Consumer Desktop Technology

VMware desktop virtualization technology lets you run multiple operating systems on a single physical computer. With VMware Fusion you can easily run Windows applications on your Mac, including high-end games and other graphics-intensive applications. With the free VMware Player you can run Windows and Linux applications on Windows or Linux PCs.

5.1.2 Technical Desktop Technology

Reduce development time and shrink QA and support costs with VMware Workstation.
Consolidate multiple development and test workstations onto a single physical system running
multiple virtual machines. Record and replay virtual machine activity to capture, diagnose, and
resolve non-deterministic bugs and race conditions.

5.1.3 Enterprise Desktop Technology

Organizations trust VMware enterprise desktop technology to manage and support the needs of their extended, global workforce while strengthening security and control over corporate resources and sensitive information. VMware View (formerly VMware Virtual Desktop Infrastructure, or VDI) lets organizations streamline desktop management and control, reduce operating costs and deliver complete desktop environments with greater application compatibility.

5.1.4 Build the Desktop of Tomorrow, Today

Business is more dependent on technology than ever before, yet more frustrated by its inflexibility. With an increasingly mobile and globally dispersed workforce using multiple devices on multiple platforms, users struggle to connect to their data and applications across a tangle of web, desktop, and server-based solutions. IT struggles to retrofit and manage the tightly bound, single-purpose system of OS, applications, and hardware. And when a user breaks or misplaces a computer, productivity stops, security is breached, and intellectual property can be lost. Reconnecting is no easy task: days are lost bringing users back online, and weeks go by as IT tries to recoup lost information.

5.1.5 Give Users a Personal View of Their Data and Applications

The desktop of the future will not be a single physical device but a collection of different devices and environments. Applications and data may be located across a combination of locations, for example a virtual desktop running on a server, a home notebook computer and a webmail account. End users want the same view regardless of which device they use to connect to their desktop or where their applications and data are located; in other words, the user wants a universal client. IT organizations, on the other hand, want to simplify management and take control of desktops and applications cost-effectively. The universal client is the next evolution in desktop computing, including virtual desktop infrastructure.

Decouple applications, data, and operating system from the hardware and deliver them to
the user rather than a device. Give users a personal view of their applications and data, whether
they’re on a thin client or laptop, in the office or on the road. Intelligently deliver applications
and data to any device and allow users to focus on their jobs rather than the tools. Balance the
requirements of your business with the needs of your users and create a seamless experience
where applications and data follow the user and not the device.

5.1.6 Improve your Desktop Management

The monolithic model of tightly coupled hardware, operating system, and applications cannot keep up with today’s global economy while complying with business, regulatory, and security objectives. You need flexible solutions that drive your infrastructure based on the needs of your business. Desktop virtualization helps you:

• Deliver personalized applications and desktops to a user, not a device
• Enable flexibility with control
• Centrally manage and secure user desktop environments

5.2 Virtual Datacenter OS from VMware


5.2.1 The Virtual Datacenter Operating System Defined

VMware's flagship product, VMware Infrastructure, coupled with VMware's comprehensive roadmap of groundbreaking new products, provides a virtual datacenter OS for IT environments of all sizes. The virtual datacenter OS addresses customers’ needs for flexibility, speed, resiliency and efficiency by transforming the datacenter into an “internal cloud”: an elastic, shared, self-managing and self-healing utility that can federate with external clouds of computing capacity, freeing IT from the constraints of static, hardware-mapped applications. The virtual datacenter OS guarantees appropriate levels of availability, security and scalability to all applications, independent of hardware and location.

Just as the single-server OS was an indispensable part of the traditional IT stack, the virtual datacenter OS is an indispensable platform for the business computing of the future.

5.2.2 VMware’s Virtual Datacenter OS

VMware Infrastructure delivers the virtual datacenter OS through the following essential
components:

• Application vServices guarantee the appropriate levels of availability, security and scalability to all applications, independent of hardware and location.
• Infrastructure vServices abstract, aggregate and allocate on-premise servers, storage and networks for maximum infrastructure efficiency.
• Cloud vServices federate the on-premise infrastructure with third-party cloud infrastructure.
• Virtualization Management lets you proactively manage the virtual datacenter OS and the applications running on it.

5.2.3 Why Do You Need A Virtual Datacenter Operating System?

The traditional IT stack, with its tight coupling of software and hardware, falls short of supporting customers’ needs. The accelerating rate of business change, non-negotiable requirements for 24x7 business resiliency and inexorable pressure to reduce cost are all increasing the pressure on IT.

At the same time, IT has a dramatic opportunity to change the status quo by leveraging the immense power and attractive economics of x86 hardware, the maturing of virtualization technologies, increasing choice in new application architectures and the availability of vast new clouds of cheap and readily accessible computing power.

5.2.4 The Virtual Datacenter OS Simplifies Computing

The virtual datacenter OS allows an IT professional to:

• Automatically manage applications to pre-defined SLAs by providing built-in availability, security and performance assurance for all applications.
• Run applications on a highly unified, reliable, efficient infrastructure made up of industry-standard components that can be easily replaced.
• Move applications with the same service-level expectations across on-premise or off-premise computing clouds for the lowest TCO and highest operational efficiency.

The virtual datacenter OS enables a dramatically simpler and more efficient model of computing that meets customers’ needs. In this new model, customers define the desired outcomes and the computing infrastructure guarantees those outcomes precisely. For example, IT professionals would like to provision an application and specify its service levels, response time, security protection, and availability level, and have the infrastructure deliver and assure those service levels at the lowest possible cost with minimal maintenance effort.

Conclusion

Virtualization technologies have matured to the point where the technology is being deployed
across a wide range of platforms and environments. The usage of virtualization has gone
beyond increasing the utilization of infrastructure, to areas like data replication and data
protection. This white paper looks at the continuing evolution of virtualization, its potential,
some tips on optimizing virtualization as well as how to future proof the technology.

After all, server virtualization's value is well-established. Many companies have migrated significant percentages of their servers to virtual machines hosted on larger servers, gaining benefits in hardware utilization, energy use, and datacenter space. And those companies that haven't done so are hatching plans to consolidate their servers in the future. These are all capital or infrastructure costs, though. What does server virtualization do for human costs, the IT operations piece of the puzzle?

Base-level server consolidation offers a few benefits for IT operations. It makes hardware maintenance much easier, since virtual machines can be moved to other physical servers when it's time to maintain or repair the original server. This turns hardware maintenance from a weekend and late-night effort into a part of the regular business day, which is certainly a great convenience.

REFERENCES

[1] www.google.com

[2] www.ieeexplore.com

[3] www.redhat.com/doc/virtualiztion

[4] www.vmware.com/help

[5] www.wikipedia.com

