Chapter 1
Concept of Virtualization
Virtualization provides a set of tools for increasing flexibility and lowering costs, things that
are important in every enterprise and Information Technology organization. Virtualization
solutions are becoming increasingly available and rich in features.
Since virtualization can provide significant benefits to your organization in multiple areas,
you should be establishing pilots, developing expertise and putting virtualization technology to
work now.
Virtualization can also excel at supporting innovation through the use of virtual
environments for training and learning. These services are ideal applications for virtualization
technology. A student can start course work with a known, standard system environment. Class
work can be isolated from the production network. Learners can establish unique software
environments without demanding exclusive use of hardware resources.
As the capabilities of virtual environments continue to grow, we’re likely to see increasing
use of virtualization to enable portable environments tailored to the needs of a specific user.
These environments can be moved dynamically to an accessible or local processing
environment, regardless of where the user is located. The user’s virtual environments can be
stored on the network or carried on a portable memory device. A related concept is the
Appliance Operating System, an application-package-oriented operating system designed to run
in a virtual environment. The package approach can yield lower development and support costs.
Department of Information Technology Virtualization: Comparison of Windows and Linux
Virtualization can also be used to lower costs. One obvious benefit comes from the
consolidation of servers into a smaller set of more powerful hardware platforms running a
collection of virtual environments. Not only can costs be reduced by cutting both the amount of
hardware and the amount of unused capacity, but application performance can actually
be improved, since the virtual guests execute on more powerful hardware.
Further benefits include the ability to add hardware capacity in a non-disruptive manner and
to dynamically migrate workloads to available resources. Depending on the needs of your
organization, it may be possible to create a virtual environment for disaster recovery.
Introducing virtualization can significantly reduce the need to replicate identical hardware
environments and can also enable testing of disaster scenarios at lower cost. Virtualization
provides an excellent solution for addressing peak or seasonal workloads.
Cost savings from server consolidation can be compelling. If you aren’t exploiting
virtualization for this purpose, you should start a program now. As you gain experience with
virtualization, explore the benefits of workload balancing and virtualized disaster recovery
environments.
Virtualization is unquestionably one of the hottest trends in information technology today. This is
no accident. While a variety of technologies fall under the virtualization umbrella, all of them are
changing the IT world in significant ways.
Each virtualization type has its own pros and cons that determine its appropriate applications.
Emulation makes it possible to run any unmodified operating system that supports the
platform being emulated. Implementations in this category range from pure emulators (like
Bochs) to solutions that allow some code to be executed natively on the CPU in order to
increase performance. The main disadvantages of emulation are low performance and low
density.
Examples: VMware products, QEMU, Bochs, Parallels.
Para-virtualization is a technique to run multiple modified OSs on top of a thin layer called a
hypervisor, or virtual machine monitor. Para-virtualization has better performance compared to
emulation, but the disadvantage is that the “guest” OS needs to be modified. Examples: Xen,
UML.
Operating system-level virtualization enables multiple isolated execution environments
within a single operating system kernel. It has the best possible (i.e., close-to-native)
performance and density, and features dynamic resource management. On the other hand, this
technology does not allow running different kernels at the same time.
Examples: FreeBSD Jail, Solaris Zones/Containers, Linux-VServer, OpenVZ and
Virtuozzo.
Simply put, virtualization is an idea whose time has come. The term virtualization broadly
describes the separation of a resource or request for a service from the underlying physical
delivery of that service. With virtual memory, for example, computer software gains access to
more memory than is physically installed, via the background swapping of data to disk storage.
Concept of a VE
A Virtual Environment (VE; also known as a VPS, container, or partition) is an isolated program
execution environment which, from the point of view of its owner, looks and feels like a
separate physical server. A VE has its own set of processes starting from init, file system, users
(including root), network interfaces with IP addresses, routing tables, firewall rules
(netfilter/iptables), etc.
Multiple VEs co-exist within a single physical server. Different VEs can run different Linux
distributions, but all VEs operate under the same kernel.
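This arrangement can be pictured with a minimal sketch (hypothetical names only; no real kernel interfaces are used here): several VEs each start their own process tree at PID 1 and have their own users, while sharing a single kernel object.

```python
class Kernel:
    version = "2.6.18"   # one kernel, shared by every VE on the host


class VE:
    def __init__(self, name, kernel):
        self.name = name
        self.kernel = kernel
        self.processes = {1: "init"}   # each VE has its own init as PID 1
        self.users = {"root"}          # and its own users, including root

    def spawn(self, cmd):
        pid = max(self.processes) + 1  # PID numbers are private to this VE
        self.processes[pid] = cmd
        return pid


host_kernel = Kernel()
ve1, ve2 = VE("web", host_kernel), VE("db", host_kernel)
ve1.spawn("httpd")
# identical PID numbers can exist in different VEs without clashing,
# yet both VEs run under the very same kernel
print(ve1.processes, ve2.processes)
```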
Virtualization Technologies
To understand modern virtualization technologies, think first about a system without them.
Imagine, for example, an application such as Microsoft Word running on a standalone desktop
computer. Figure 1 shows how this looks.
The application is installed and runs directly on the operating system, which in turn runs
directly on the computer’s hardware. The application’s user interface is presented via a display
that’s directly attached to this machine. This simple scenario is familiar to anybody who’s ever
used a computer.
But it’s not the only choice. In fact, it’s often not the best choice. Rather than locking these
various parts together—the operating system to the hardware, the application to the operating
system, and the user interface to the local machine—it’s possible to loosen the direct reliance
these parts have on each other.
Doing this means virtualizing aspects of this environment, something that can be done in
various ways. The operating system can be decoupled from the physical hardware it runs on
using hardware virtualization, for example, while application virtualization allows an analogous
decoupling between the operating system and the applications that use it. Similarly,
presentation virtualization allows separating an application’s user interface from the physical
machine the application runs on. All of these approaches to virtualization help make the links
between components less rigid. This lets hardware and software be used in more diverse ways,
and it also makes both easier to change. Given that most IT professionals spend most of their
time working with what’s already installed rather than rolling out new deployments, making
their world more malleable is a good thing.
Each type of virtualization also brings other benefits specific to the problem it addresses.
Understanding what these are requires knowing more about the technologies themselves.
Accordingly, the next sections take a closer look at each one.
The core idea of hardware virtualization is simple: Use software to create a virtual machine
(VM) that emulates a physical computer. By providing multiple VMs at once, this approach allows
running several operating systems simultaneously on a single physical machine. Figure 2 shows how
this looks.
When used on client machines, this approach is often called desktop virtualization, while using it
on server systems is known as server virtualization. Desktop virtualization can be useful in a variety
of situations. One of the most common is to deal with incompatibility between applications and
desktop operating systems. For example, suppose a user running Windows Vista needs to use an
application that runs only on Windows XP with Service Pack 2. By creating a VM that runs this
older operating system, then installing the application in that VM, this problem can be solved.
Still, while desktop virtualization is useful, the real excitement around hardware virtualization is
focused on servers. The primary reason for this is economic: rather than paying for many
underutilized server machines, each dedicated to a specific workload, server virtualization allows
consolidating those workloads onto a smaller number of more fully used machines. This implies
fewer people to manage those computers, less space to house them, and fewer kilowatt hours of
power to run them, all of which saves money.
Server virtualization also makes restoring failed systems easier. VMs are stored as files, and so
restoring a failed system can be as simple as copying its file onto a new machine. Since VMs can
have different hardware configurations from the physical machine on which they’re running, this
approach also allows restoring a failed system onto any available machine. There’s no requirement
to use a physically identical system.
Hardware virtualization can be accomplished in various ways, and so Microsoft offers three
different technologies that address this area:
Virtual Server 2005 R2: This technology provides hardware virtualization on top of Windows via
add-on software. As its name suggests, Virtual Server provides server virtualization, targeting
scalable multi-user scenarios.
Virtual PC 2007: Like Virtual Server, this technology also provides hardware virtualization on top
of Windows via add-on software. Virtual PC provides desktop virtualization, however, and so it’s
designed to support multiple operating systems on a single-user computer.
Windows Server virtualization: Like Virtual Server, Windows Server virtualization provides server
virtualization. Rather than relying on an add-on, however, support for hardware virtualization is
built directly into Windows itself. Windows Server virtualization is part of Windows Server
2008, and it’s scheduled to ship shortly after the release of this new operating system.
All of these technologies are useful in different situations, and all are described in more detail
later in this overview.
As the figure shows, this approach allows creating virtual sessions, each interacting with a remote
desktop system. The applications executing in those sessions rely on presentation virtualization to
project their user interfaces remotely. Each session might run only a single application, or it might
present its user with a complete desktop offering multiple applications. In either case, several virtual
sessions can use the same installed copy of an application.
Running applications on a shared server like this offers several benefits, including the following:
Data can be centralized, storing it safely on a central server rather than on multiple desktop
machines. This improves security, since information isn’t spread across many different systems.
The cost of managing applications can be significantly reduced. Instead of updating each application
on each individual desktop, for example, only the single shared copy on the server needs to be
changed. Presentation virtualization also allows using simpler desktop operating system images
or specialized desktop devices, commonly called thin clients, both of which can lower
management costs.
Organizations need no longer worry about incompatibilities between an application and a desktop
operating system. While desktop virtualization can also solve this problem, as described earlier,
it’s sometimes simpler to run the application on a central server, and then use presentation
virtualization to make the application accessible to clients running any operating system.
In some cases, presentation virtualization can improve performance. For example, think about a
client/server application that pulls large amounts of data from a central database down to the
client. If the network link between the client and the server is slow or congested, this application
will also be slow. One way to improve its performance is to run the entire application—both
client and server—on a machine with a high-bandwidth connection to the database, then use
presentation virtualization to make the application available to its users.
Windows Terminal Services lets an ordinary Windows desktop application run on a shared
server machine yet present its user interface on a remote system, such as a desktop computer or
thin client. While remote interfaces haven’t always been viewed through the lens of
virtualization, this perspective can provide a useful way to think about this widely used technology.
Another bond that can benefit from more abstraction is the connection between an application
and the operating system it runs on. Every application depends on its operating system for a range of
services, including memory allocation, device drivers, and much more. Incompatibilities between an
application and its operating system can be addressed by either hardware virtualization or
presentation virtualization, as described earlier. But what about incompatibilities between two
applications installed on the same instance of an operating system? Applications commonly share
various things with other applications on their system, yet this sharing can be problematic. For
example, one application might require a specific version of a dynamic link library (DLL) to
function, while another application on that system might require a different version of the same
DLL. Installing both applications leads to what’s commonly known as DLL hell, where one of them
overwrites the version required by the other. To avoid this, organizations often perform extensive
testing before installing a new application, an approach that’s workable but time-consuming and
expensive.
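One way to picture how application virtualization sidesteps this conflict is a per-application overlay of shared libraries that falls back to the system-wide copy. The sketch below is illustrative only; the file names and version numbers are invented, and real products implement this via filesystem and registry redirection rather than a lookup table.

```python
# system-wide shared libraries, visible to every application
system_dlls = {"common.dll": "7.0"}

# application virtualization gives each app a private overlay,
# so two apps can require different versions of the same DLL
app_overlays = {
    "legacy_app": {"common.dll": "6.0"},   # the version this app requires
}

def resolve(app, dll):
    # the app's private copy wins; otherwise the shared one is used
    return app_overlays.get(app, {}).get(dll, system_dlls.get(dll))

print(resolve("legacy_app", "common.dll"))  # -> 6.0
print(resolve("new_app", "common.dll"))     # -> 7.0
```

Neither application overwrites the other's dependency, which is precisely the situation that DLL hell makes impossible on an unvirtualized system.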
This overview looks at three kinds of virtualization: hardware, presentation, and application. Similar
kinds of abstraction are also used in other contexts, however. Among the most important are
network virtualization and storage virtualization.
The term network virtualization is used to describe a number of different things. Perhaps
the most common is the idea of a virtual private network (VPN). VPNs abstract the notion of a
network connection, allowing a remote user to access an organization’s internal network just as
if she were physically attached to that network. VPNs are a widely implemented idea, and they
can use various technologies. In the Microsoft world, the primary VPN technologies today are
Internet Security and Acceleration (ISA) Server 2006 and Intelligent Application Gateway 2007.
The term storage virtualization is also used quite broadly. In a general sense, it means
providing a logical, abstracted view of physical storage devices, and so anything other than a
locally attached disk drive might be viewed in this light. A simple example is folder redirection
in Windows, which lets the information in a folder be stored on any network-accessible drive.
Much more powerful (and more complex) approaches also fit into this category, including
storage area networks (SANs) and others. However it’s done, the benefits of storage
virtualization are analogous to those of every other kind of virtualization: more abstraction and
less direct coupling between components.
Chapter 2
XEN: APPROACH & OVERVIEW
Modern computers are sufficiently powerful to use virtualization to present the illusion of many
smaller virtual machines (VMs), each running a separate operating system instance. This has
led to a resurgence of interest in VM technology. Here we present Xen, a high performance
resource-managed virtual machine monitor (VMM) which enables applications such as server
consolidation, co-located hosting facilities, distributed web services, secure computing
platforms and application mobility. Successful partitioning of a machine to support the
concurrent execution of multiple operating systems poses several challenges.
Firstly, virtual machines must be isolated from one another: it is not acceptable for the
execution of one to adversely affect the performance of another. This is particularly true when
virtual machines are owned by mutually untrusting users. Secondly, it is necessary to support a
variety of different operating systems to accommodate the heterogeneity of popular
applications. Thirdly, the performance overhead introduced by virtualization should be small.
Xen enables users to dynamically instantiate an operating system to execute whatever they
desire. In the XenoServer project, we are deploying Xen on standard server hardware at
economically strategic locations within ISPs or at Internet exchanges. We perform admission
control when starting new virtual machines and expect each VM to pay in some fashion for the
resources it requires. We discuss our ideas and approach in this direction elsewhere; this paper
focuses on the VMM.
In a traditional VMM the virtual hardware exposed is functionally identical to the
underlying machine. Although full virtualization has the obvious benefit of allowing
unmodified operating systems to be hosted, it also has a number of drawbacks. This is
particularly true for the prevalent IA-32, or x86, architecture. Support for full virtualization was
never part of the x86 architectural designs. Certain supervisor instructions must be handled by
the VMM for correct virtualization, but executing these with insufficient privilege fails silently
rather than causing a convenient trap. Efficiently virtualizing the x86 MMU is also difficult.
These problems can be solved, but only at the cost of increased complexity and reduced
performance. VMware's ESX Server dynamically rewrites portions of the hosted machine code
to insert traps wherever VMM intervention might be required. This translation is applied to the
entire guest OS kernel (with associated translation, execution, and caching costs) since all
non-trapping privileged instructions must be caught and handled. ESX Server implements
shadow versions of system structures such as page tables and maintains consistency with the
virtual tables by trapping every update attempt. This approach has a high cost for update-
intensive operations such as creating a new application process.
Notwithstanding the intricacies of the x86, there are other arguments against full
virtualization. In particular, there are situations in which it is desirable for the hosted operating
systems to see real as well as virtual resources: providing both real and virtual time allows a
guest OS to better support time-sensitive tasks, and to correctly handle TCP timeouts and RTT
estimates, while exposing real machine addresses allows a guest OS to improve performance by
using superpages or page coloring.
We avoid the drawbacks of full virtualization by presenting a virtual machine abstraction
that is similar but not identical to the underlying hardware, an approach which has been dubbed
Para-virtualization. This promises improved performance, although it does require
modifications to the guest operating system. It is important to note, however, that we do not
require changes to the application binary interface (ABI), and hence no modifications are
required to guest applications.
We distill the discussion so far into a set of design principles:
1. Support for unmodified application binaries is essential, or users will not transition to
Xen. Hence we must virtualize all architectural features required by existing standard
ABIs.
2. Supporting full multi-application operating systems is important, as this allows
complex server configurations to be virtualized within a single guest OS instance.
3. Para-virtualization is necessary to obtain high performance and strong resource isolation
on uncooperative machine architectures such as x86.
Note that our Para-virtualized x86 abstraction is quite different from that proposed by the
recent Denali project. Denali is designed to support thousands of virtual machines running
network services, the vast majority of which are small-scale and unpopular.
In contrast, Xen is intended to scale to approximately 100 virtual machines running industry
standard applications and services. Given these very different goals, it is instructive to contrast
Denali's design choices with our own principles.
Firstly, Denali does not target existing ABIs, and so can elide certain architectural features
from their VM interface. For example, Denali does not fully support x86 segmentation although
it is exported (and widely used) in the ABIs of NetBSD, Linux, and Windows XP.
Secondly, the Denali implementation does not address the problem of supporting application
multiplexing, nor multiple address spaces, within a single guest OS. Rather, applications are
linked explicitly against an instance of the Ilwaco guest OS in a manner rather reminiscent of a
libOS in the Exokernel. Hence each virtual machine essentially hosts a single-user single-
application unprotected operating system. In Xen, by contrast, a single virtual machine hosts a
real operating system which may itself securely multiplex thousands of unmodified user-level
processes. Although a prototype virtual MMU has been developed which may help Denali in
this area, we are unaware of any published technical details or evaluation.
Thirdly, in the Denali architecture the VMM performs all paging to and from disk. This is
perhaps related to the lack of memory management support at the virtualization layer.
Memory Management
• Segmentation: Cannot install fully-privileged segment descriptors, and segments cannot
overlap with the top end of the linear address space.
• Paging: Guest OS has direct read access to hardware page tables, but updates are
batched and validated by the hypervisor. A domain may be allocated discontiguous
machine pages.
Paging within the VMM is contrary to our goal of performance isolation: malicious virtual
machines can encourage thrashing behavior, unfairly depriving others of CPU time and disk
bandwidth. In Xen we expect each guest OS to perform its own paging using its own
guaranteed memory reservation and disk allocation.
Finally, Denali virtualizes the “namespaces” of all machine resources, taking the view that
no VM can access the resource allocations of another VM if it cannot name them (for example,
VMs have no knowledge of hardware addresses, only the virtual addresses created for them by
Denali). In contrast, we believe that secure access control within the hypervisor is sufficient to
ensure protection; furthermore, as discussed previously, there are strong correctness and
performance arguments for making physical resources directly visible to guest OSes.
In the following section we describe the virtual machine abstraction exported by Xen and
discuss how a guest OS must be modified to conform to this. Note that in this paper we reserve
the term guest operating system to refer to one of the OSes that Xen can host, and we use the term
domain to refer to a running virtual machine within which a guest OS executes; the distinction
is analogous to that between a program and a process in a conventional system. We call Xen
itself the hypervisor since it operates at a higher privilege level than the supervisor code of the
guest operating systems that it hosts.
In this section we introduce the design of the major subsystems that make up a Xen-based
server. In each case we present both Xen and guest OS functionality for clarity of exposition.
The current discussion of guest OSes focuses on XenoLinux as this is the most mature;
nonetheless our ongoing porting of Windows XP and NetBSD gives us confidence that Xen is
guest OS agnostic.
Two mechanisms exist for control interactions between Xen and an overlying domain:
synchronous calls from a domain to Xen may be made using a hypercall, while notifications are
delivered to domains from Xen using an asynchronous event mechanism.
The hypercall interface allows domains to perform a synchronous software trap into the
hypervisor to perform a privileged operation, analogous to the use of system calls in
conventional operating systems.
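The analogy to system calls can be made concrete with a toy dispatch table. This is only an illustrative model; real hypercalls are numbered software-trap entry points, not Python functions, and the `mmu_update` handler below is a stand-in for the validation the hypervisor performs.

```python
HYPERCALLS = {}

def hypercall(number):
    """Register a handler under a hypercall number, syscall-style."""
    def register(fn):
        HYPERCALLS[number] = fn
        return fn
    return register

@hypercall(1)
def mmu_update(domain, updates):
    # the "hypervisor" validates, then applies the privileged
    # operation on the calling domain's behalf
    domain["page_table"].update(updates)
    return len(updates)

def trap(domain, number, *args):
    # synchronous transfer of control into the hypervisor,
    # returning to the caller once the operation completes
    return HYPERCALLS[number](domain, *args)

dom = {"page_table": {}}
applied = trap(dom, 1, {0x1000: 7})
print(applied)  # -> 1
```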
An example use of a hypercall is to request a set of page table updates, in which Xen
validates and applies a list of updates, returning control to the calling domain when this is
completed. Communication from Xen to a domain is provided through an asynchronous event
mechanism, which replaces the usual delivery mechanisms for device interrupts and allows
lightweight notification of important events such as domain-termination requests. Akin to
traditional Unix signals, there are only a small number of events, each acting to flag a particular
type of occurrence. For instance, events are used to indicate that new data has been received
over the network, or that a virtual disk request has completed.
Pending events are stored in a per-domain bitmask which is updated by Xen before invoking
an event-callback handler specified by the guest OS. The callback handler is responsible for
resetting the set of pending events, and responding to the notifications in an appropriate
manner. A domain may explicitly defer event handling by setting a Xen-readable software flag:
this is analogous to disabling interrupts on a real processor.
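The mechanism just described can be sketched roughly as follows. This is an illustrative model only: the flag and bitmask layout are assumptions for the sketch, not Xen's actual data structures.

```python
class EventChannel:
    def __init__(self, callback):
        self.pending = 0        # per-domain bitmask, updated by "Xen"
        self.deferred = False   # software flag: like disabling interrupts
        self.callback = callback

    def send(self, bit):
        self.pending |= 1 << bit   # flag the type of occurrence
        if not self.deferred:
            self.dispatch()

    def dispatch(self):
        # the handler is responsible for resetting the pending set
        events, self.pending = self.pending, 0
        if events:
            self.callback(events)


seen = []
ch = EventChannel(seen.append)
ch.deferred = True    # defer handling, as with interrupts disabled
ch.send(0)            # e.g. "network data received"
ch.send(3)            # e.g. "disk request completed"
ch.deferred = False
ch.dispatch()
print(seen)  # -> [9], i.e. bits 0 and 3 were both pending
```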
Requests are placed on the ring by the guest OS, which advances a request producer pointer;
Xen removes them, advancing an associated request consumer pointer. Responses are placed
back on the ring similarly, save with Xen as the producer and the guest OS as the consumer.
There is no requirement that requests be processed
in order: the guest OS associates a unique identifier with each request which is reproduced in
the associated response. This allows Xen to unambiguously reorder I/O operations due to
scheduling or priority considerations.
This structure is sufficiently generic to support a number of different device paradigms. For
example, a set of 'requests' can provide buffers for network packet reception; subsequent
'responses' then signal the arrival of packets into these buffers. Reordering is useful when
dealing with disk requests as it allows them to be scheduled within Xen for efficiency, and the
use of descriptors with out-of-band buffers makes implementing zero-copy transfer easy. We
decouple the production of requests or responses from the notification of the other party: in the
case of requests, a domain may enqueue multiple entries before invoking a hypercall to alert
Xen; in the case of responses, a domain can defer delivery of a notification event by specifying
a threshold number of responses. This allows each domain to trade off latency and throughput
requirements, similarly to the flow-aware interrupt dispatch in the ArseNIC Gigabit Ethernet
interface.
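The two decoupling ideas above, out-of-order completion via unique request identifiers and batched notification via a threshold, can be sketched in a toy ring model. The class and field names are invented for illustration; the real rings are fixed-size shared-memory structures with producer/consumer pointers.

```python
from collections import deque

class IORing:
    """Toy descriptor ring: IDs let responses return out of order,
    and a threshold batches notification events (illustrative only)."""
    def __init__(self, notify_threshold=1):
        self.requests = deque()
        self.responses = {}
        self.next_id = 0
        self.unnotified = 0
        self.notify_threshold = notify_threshold
        self.notifications = 0

    def enqueue(self, payload):
        # guest OS side: producer of requests, each tagged with a unique ID
        req_id = self.next_id
        self.next_id += 1
        self.requests.append((req_id, payload))
        return req_id

    def complete(self, req_id, result):
        # "Xen" side: completions may arrive out of order; the ID in the
        # response tells the guest which request it answers
        self.responses[req_id] = result
        self.unnotified += 1
        if self.unnotified >= self.notify_threshold:
            self.notifications += 1   # one event for the whole batch
            self.unnotified = 0


ring = IORing(notify_threshold=2)
a = ring.enqueue("read block 7")
b = ring.enqueue("read block 2")
ring.complete(b, "data-2")   # reordered for scheduling efficiency
ring.complete(a, "data-7")
print(ring.notifications)    # -> 1: two responses, a single notification
```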
The control and data transfer mechanisms described are used in our virtualization of the various
subsystems. In the following, we discuss how this virtualization is achieved for CPU, timers,
memory, network and disk.
2.3.1 CPU scheduling
Xen currently schedules domains according to the Borrowed Virtual Time (BVT) scheduling
algorithm. We chose this particular algorithm since it is both work-conserving and has a
special mechanism for low-latency wake-up (or dispatch) of a domain when it receives an
event. Fast dispatch is particularly important to minimize the effect of virtualization on OS
subsystems that are designed to run in a timely fashion; for example, TCP relies on the timely
delivery of acknowledgments to correctly estimate network round-trip times. BVT provides
low-latency dispatch by using virtual-time warping, a mechanism which temporarily violates
'ideal' fair sharing to favor recently-woken domains. However, other scheduling algorithms
could be trivially implemented over our generic scheduler abstraction. Per-domain scheduling
parameters can be adjusted by management software running in Domain0.
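The effect of virtual-time warping can be illustrated with a toy model. This is not the actual BVT algorithm, which also accounts for context-switch allowances and limits on warp time; the numbers here are invented.

```python
def pick_next(domains):
    # run the domain with the smallest *effective* virtual time; a warped
    # (recently woken) domain has its virtual time temporarily reduced,
    # letting it preempt others for low-latency dispatch
    def effective(d):
        return d["avt"] - (d["warp"] if d["warped"] else 0)
    return min(domains, key=effective)


doms = [
    {"name": "dom0", "avt": 100, "warp": 0,  "warped": False},
    {"name": "domU", "avt": 120, "warp": 50, "warped": True},  # just got an event
]
print(pick_next(doms)["name"])  # -> domU, since 120 - 50 = 70 < 100
```

Without the warp, dom0 (with the lower accumulated virtual time) would run first; the temporary violation of fair sharing is what buys the fast dispatch.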
The approach in Xen is to register guest OS page tables directly with the MMU and restrict the
guest OS to read-only access; we thus avoid the overhead and additional complexity associated
with the use of shadow page tables. Page table updates are passed to Xen via a hypercall; to
ensure safety, requests are validated before being applied.
To aid validation, we associate a type and reference count with each machine page frame. A
frame may have any one of the following mutually-exclusive types at any point in time: page
directory (PD), page table (PT), local descriptor table (LDT), global descriptor table (GDT), or
writable (RW). Note that a guest OS may always create readable mappings to its own page
frames, regardless of their current types. A frame may only safely be retasked when its
reference count is zero. This mechanism is used to maintain the invariants required for safety;
for example, a domain cannot have a writable mapping to any part of a page table as this would
require the frame concerned to simultaneously be of types PT and RW.
The type system is also used to track which frames have already been validated for use in
page tables. To this end, guest OSes indicate when a frame is allocated for page-table use. This
requires a one-off validation of every entry in the frame by Xen, after which its type is pinned
to PD or PT as appropriate, until a subsequent unpin request from the guest OS. This is
particularly useful when changing the page table base pointer, as it obviates the need to validate
the new page table on every context switch. Note that a frame cannot be retasked until it is both
unpinned and its reference count has reduced to zero; this prevents guest OSes from using unpin
requests to circumvent the reference-counting mechanism.
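These invariants can be sketched as a small model. It is a simplification under stated assumptions: the real implementation packs type and count into per-frame metadata and actually validates every page-table entry, which is elided here.

```python
class Frame:
    TYPES = {"PD", "PT", "LDT", "GDT", "RW"}

    def __init__(self):
        self.type = "RW"    # mutually exclusive type at any point in time
        self.refcount = 0
        self.pinned = False

    def retask(self, new_type):
        assert new_type in self.TYPES
        # a frame may only safely be retasked when it is both
        # unpinned and its reference count is zero
        if self.pinned or self.refcount > 0:
            raise RuntimeError("frame in use: cannot retask")
        self.type = new_type


f = Frame()
f.retask("PT")     # one-off validation of every entry would happen here
f.pinned = True    # pinned until the guest OS issues an unpin request
try:
    f.retask("RW") # a writable page table would break safety: refused
except RuntimeError as err:
    print(err)     # -> frame in use: cannot retask
```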
To minimize the number of hypercalls required, guest OSes can locally queue updates before
applying an entire batch with a single hypercall. This is particularly beneficial when creating
new address spaces. However, we must ensure that updates are committed early enough to
guarantee correctness. Fortunately, a guest OS will typically execute a TLB flush before the
first use of a new mapping: this ensures that any cached translation is invalidated. Hence,
committing pending updates immediately before a TLB flush usually suffices for correctness.
However, some guest OSes elide the flush when it is certain that no stale entry exists in the TLB.
In this case it is possible that the first attempted use of the new mapping will cause a page-not-
present fault. Hence the guest OS fault handler must check for outstanding updates; if any are
found then they are flushed and the faulting instruction is retried.
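The batching and fault-handler interplay described above can be sketched as follows (hypothetical names; the real interface is Xen's batched `mmu_update` hypercall):

```python
# Sketch of locally queued page-table updates committed in one batch,
# at the latest when the TLB is flushed.
class UpdateQueue:
    def __init__(self):
        self.pending = []      # queued (pte_address, new_value) pairs
        self.committed = {}    # stands in for the real page tables
        self.hypercalls = 0    # how many batched hypercalls were issued

    def queue(self, pte, value):
        self.pending.append((pte, value))

    def flush_updates(self):
        # One hypercall applies the whole batch.
        if self.pending:
            self.hypercalls += 1
            self.committed.update(self.pending)
            self.pending.clear()

    def tlb_flush(self):
        # Committing immediately before a TLB flush preserves correctness.
        self.flush_updates()

    def page_fault(self, pte):
        # If the guest elided the TLB flush, the fault handler must check
        # for outstanding updates and retry the faulting access.
        if any(p == pte for p, _ in self.pending):
            self.flush_updates()
            return self.committed[pte]   # the retried access now succeeds
        raise KeyError(pte)

q = UpdateQueue()
for pte in (0x1000, 0x1008, 0x1010):
    q.queue(pte, "mapping")
print(q.page_fault(0x1008), q.hypercalls)  # three updates, one hypercall
```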
2.3.5 Network
Xen provides the abstraction of a virtual firewall-router (VFR), where each domain has one or
more network interfaces (VIFs) logically attached to the VFR. A VIF looks somewhat like a
modern network interface card: there are two I/O rings of buffer descriptors, one for transmit
and one for receive. Each direction also has a list of associated rules of the form (<pattern>,
<action>): if the pattern matches, the associated action is applied. Domain0 is responsible
for inserting and removing rules. In typical cases, rules will be installed to prevent IP source
address spoofing, and to ensure correct de-multiplexing based on destination IP address and
port. Rules may also be associated with hardware interfaces on the VFR. In particular, we may
install rules to perform traditional firewalling functions such as preventing incoming
connection attempts on insecure ports.
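A minimal sketch of such (<pattern>, <action>) rule lists, with illustrative field names and actions (not Xen's actual rule syntax):

```python
# First-match rule evaluation over a list of (pattern, action) pairs.
def apply_rules(rules, packet):
    """Return the action of the first matching rule, else a default."""
    for pattern, action in rules:
        if all(packet.get(k) == v for k, v in pattern.items()):
            return action
    return "allow"

transmit_rules = [
    # Prevent IP source-address spoofing: only the VIF's own address may
    # appear as the source; anything else is dropped.
    ({"src": "10.0.0.2"}, "allow"),
    ({}, "drop"),                       # catch-all
]

receive_rules = [
    # Demultiplex on destination address and port to the right VIF.
    ({"dst": "10.0.0.2", "dport": 80}, "deliver-to-vif2"),
]

print(apply_rules(transmit_rules, {"src": "10.0.0.2"}))   # allow
print(apply_rules(transmit_rules, {"src": "10.0.0.99"}))  # drop (spoofed)
print(apply_rules(receive_rules, {"dst": "10.0.0.2", "dport": 80}))
```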
To transmit a packet, the guest OS simply enqueues a buffer descriptor onto the transmit ring.
Xen copies the descriptor and, to ensure safety, then copies the packet header and executes any
matching filter rules. The packet payload is not copied since we use scatter-gather DMA;
however note that the relevant page frames must be pinned until transmission is complete. To
ensure fairness, Xen implements a simple round-robin packet scheduler.
To efficiently implement packet reception, we require the guest OS to exchange an unused page
frame for each packet it receives; this avoids the need to copy the packet between Xen and the
guest OS, although it requires that page-aligned receive buffers be queued at the network
interface. When a packet is received, Xen immediately checks the set of receive rules to
determine the destination VIF, and exchanges the packet buffer for a page frame on the relevant
receive ring. If no frame is available, the packet is dropped.
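The page-exchange receive path can be modeled like this (names and structures are illustrative only; the point is the swap-instead-of-copy and the drop-when-empty behavior):

```python
from collections import deque

# Guest posts unused page frames; on receipt Xen swaps the packet's buffer
# for a posted frame rather than copying the payload.
class ReceiveRing:
    def __init__(self):
        self.posted_frames = deque()   # page-aligned buffers from the guest
        self.delivered = []

    def post_frame(self, frame_id):
        self.posted_frames.append(frame_id)

    def receive(self, packet):
        if not self.posted_frames:
            return "dropped"           # no frame available: drop the packet
        frame = self.posted_frames.popleft()
        # "Exchange": the packet's buffer page is handed to the guest in
        # place of the frame it gave up -- no payload copy is needed.
        self.delivered.append((frame, packet))
        return "delivered"

ring = ReceiveRing()
ring.post_frame(0xA000)
print(ring.receive("pkt1"))   # delivered
print(ring.receive("pkt2"))   # dropped: the guest posted only one frame
```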
2.3.6 Disk
Only Domain0 has direct unchecked access to physical (IDE and SCSI) disks. All other
domains access persistent storage through the abstraction of virtual block devices (VBDs),
which are created and configured by management software running within Domain0. Allowing
Domain0 to manage the VBDs keeps the mechanisms within Xen very simple and avoids more
intricate solutions such as the UDFs used by the Exokernel.
A VBD comprises a list of extents with associated ownership and access control
information, and is accessed via the I/O ring mechanism. A typical guest OS disk scheduling
algorithm will reorder requests prior to enqueuing them on the ring in an attempt to reduce
response time, and to apply differentiated service (for example, it may choose to aggressively
schedule synchronous metadata requests at the expense of speculative read-ahead requests).
However, because Xen has more complete knowledge of the actual disk layout, we also support
reordering within Xen, and so responses may be returned out of order. A VBD thus appears to
the guest OS somewhat like a SCSI disk.
A translation table is maintained within the hypervisor for each VBD; the entries within
this table are installed and managed by Domain0 via a privileged control interface. On
receiving a disk request, Xen inspects the VBD identifier and offset and produces the
corresponding sector address and physical device. Permission checks also take place at this
time. Zero-copy data transfer takes place using DMA between the disk and pinned memory
pages in the requesting domain.
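The per-VBD translation step, extent lookup plus permission check, can be sketched as follows (extent fields and names are invented for illustration):

```python
# (vbd, offset) -> (device, sector) translation with access control.
class VBDTable:
    def __init__(self):
        # vbd_id -> list of extents: (start, length, device, base_sector, mode)
        self.extents = {}

    def add_extent(self, vbd, start, length, device, base, mode="rw"):
        self.extents.setdefault(vbd, []).append((start, length, device, base, mode))

    def translate(self, vbd, offset, write=False):
        for start, length, device, base, mode in self.extents.get(vbd, []):
            if start <= offset < start + length:
                # Permission checks take place at translation time.
                if write and "w" not in mode:
                    raise PermissionError("read-only extent")
                return device, base + (offset - start)
        raise ValueError("offset outside VBD")

table = VBDTable()
table.add_extent(vbd=1, start=0, length=1000, device="sda", base=5000)
table.add_extent(vbd=1, start=1000, length=500, device="sdb", base=0, mode="r")
print(table.translate(1, 1200))   # ('sdb', 200)
```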
Xen services batches of requests from competing domains in a simple round-robin fashion;
these are then passed to a standard elevator scheduler before reaching the disk hardware.
Domains may explicitly pass down reorder barriers to prevent reordering when this is necessary
to maintain higher level semantics (e.g. when using a write-ahead log). The low-level
scheduling gives us good throughput, while the batching of requests provides reasonably fair
access. Future work will investigate providing more predictable isolation and differentiated
service, perhaps using existing techniques and schedulers.
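The reorder-barrier semantics can be illustrated with a small sketch: requests between barriers may be reordered (here simply sorted by sector, standing in for the elevator scheduler), but never across a barrier:

```python
# Barrier-respecting reordering: each run between barriers is reordered
# independently; nothing moves across a barrier.
def schedule(requests):
    """requests: list of sector numbers, or the string 'BARRIER'."""
    out, run = [], []
    for r in requests:
        if r == "BARRIER":
            out.extend(sorted(run))   # reorder only within the run
            run = []
        else:
            run.append(r)
    out.extend(sorted(run))
    return out

# A write-ahead log issues a barrier between the log write and the data:
print(schedule([90, 10, "BARRIER", 50, 5]))   # [10, 90, 5, 50]
```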
Chapter 3
Microsoft’s Virtualization Roadmap
• A long-term vision for how customers can drastically reduce the complexity of IT
infrastructure as part of the overall Dynamic Systems Initiative.
• A solid product roadmap that offers valuable current and near-term solutions, enabling
customers to take a series of practical steps in line with the long-term vision.
With hardware capacity growing and virtualization platform and management capabilities
becoming more robust, more customers can benefit from consolidation, easier management,
and automation. Virtualization is a key technology for reducing the cost and complexity
of IT management, and Microsoft has committed significant resources to making virtualization
more broadly accessible and affordable for customers. Microsoft's virtualization solutions
increase hardware utilization and enable organizations to rapidly configure and deploy new
servers.
As a part of the Dynamic Systems Initiative (DSI), Microsoft's industry-wide effort to
dramatically simplify and automate how businesses design, deploy, and operate IT
systems to enable self-managing dynamic systems, Microsoft is providing businesses
with tools to help them more flexibly utilize their hardware resources. Virtual Server
2005 R2, Windows Server virtualization, and Virtual Machine Manager are key
examples of how Microsoft is continuing to deliver technology that results in
improved server hardware utilization and provides for more flexible provisioning of
data center resources.
The next few sections will focus on the key virtualization products, both at the platform and the
management level.
Virtual Server 2005 R2 offers improved hardware efficiency by providing a great solution for
isolation and resource management, which enables multiple workloads to coexist on fewer
servers. Virtual Server can be used to improve operational efficiency in consolidating
infrastructure, applications, and branch office server workloads, consolidating and re-hosting
legacy applications, automating and consolidating software test and development
environments, and reducing disaster impact.
It is also well suited to consolidating infrastructure services and disaster recovery
environments, resulting in fewer physical systems and a reduced hardware footprint.
Virtual Server 2005 R2 is ideal for server consolidation in both the datacenter and the
branch office, allowing organizations to make more efficient use of their hardware
resources. It allows IT organizations to enhance their administrative productivity,
rapidly deploy new servers to address changing business needs, and increase hardware
utilization rates for an optimized IT infrastructure.
Customers across all segments are looking for ways to decrease costs and accelerate
application and infrastructure installations and upgrades, while delivering a
comprehensive level of quality assurance. Virtual Server enables you to consolidate
your test and development server farm and automate the provisioning of virtual
machines, improving hardware utilization and operational flexibility. For developers,
Virtual Server enables easy deployment and testing of a distributed server application
using multiple virtual machines on one physical server.
Virtual Server enables migration of legacy operating systems (Windows NT 4.0 Server
and Windows 2000 Server) and their associated custom applications from older
hardware to new servers running Windows Server 2003. Virtual Server 2005 R2
delivers the best of both worlds: application compatibility with legacy environments,
while taking advantage of the reliability, manageability and security features of
Windows Server 2003 running on the latest hardware. Virtual Server 2005 R2 delivers
this capability by enabling customers to run legacy applications in their native software
environment in virtual machines, without rewriting application logic, reconfiguring
networks, or retraining end users. This gives customers time to refresh older
infrastructure systems first, then either upgrade or rewrite out-of-service applications on
a timetable that best fits their business needs.
Virtual Server 2005 R2 can be used as part of a disaster recovery plan that requires
application portability and flexibility across hardware platforms. Consolidating physical
servers onto fewer physical machines running virtual machines decreases the number of
physical assets that must be available in a disaster recovery location. In the event of a
recovery, virtual machines can be hosted anywhere, on host machines other than those
affected by the disaster, speeding recovery and maximizing organizational flexibility.
Virtualization facilitates broad device compatibility and complete support for Windows server
environments.
Virtual machine isolation ensures that if one virtual machine crashes or hangs, it cannot
impact any other virtual machine or the host system. Maximum application
compatibility is achieved through isolation. This allows customers to further leverage
their existing storage, network and security infrastructures.
Virtual Server runs on Windows Server 2003 which supports most Windows Server
Catalog devices, providing compatibility with a wide range of host system hardware.
Multithreaded VMM
Virtual Server’s Virtual Machine Monitor provides the software infrastructure to create,
manage, and interact with virtual machines on multiprocessor hardware.
Virtual Server can run all major x86 operating systems in the virtual machine guest
environment. Microsoft will also support specific distributions of Linux running in the
virtual machine environment.
iSCSI clustering
X64 support
Virtual Server 2005 R2 runs on the following 64-bit host operating systems: Windows
Server 2003 Standard x64 Edition, Windows Server 2003 Enterprise x64 Edition and
Windows XP Professional x64 Edition, providing increased performance and memory
headroom.
Virtual Server encapsulates virtual machines in portable Virtual Hard Disks, enabling
flexible configuration, versioning and deployment.
PXE Boot
The emulated network card in Virtual Server 2005 R2 now supports Pre-boot Execution
Environment (PXE) boot. This network boot allows customers to provision their virtual
machines in all of the same ways that they do their physical servers.
Virtual machines in Virtual Server function the way you would expect a physical
machine to, offering full Active Directory integration. This level of integration enables
delegated administration and secure, authenticated guest access.
Windows Server virtualization supports key scenarios such as server consolidation,
business continuity management, automating and consolidating software test and
development environments, and creating a dynamic datacenter.
Organizations are looking at production servers in their datacenters and finding overall
hardware utilization levels often between 5% and 15% of the capacity of the server. In
addition, physical constraints like space and power are constraining them from
expanding their datacenters. By consolidating several production servers with Windows
Server virtualization, businesses can benefit from increased hardware utilization and
reduced overall total cost of ownership.
IT administrators are always trying to find ways to reduce or eliminate downtime from
their environment. Windows Server virtualization will provide capabilities for efficient
disaster recovery, helping to eliminate downtime. The robust and flexible virtualization
environment created by Windows Server virtualization minimizes the impact of
scheduled and unscheduled downtime.
One of the biggest areas where virtualization technology will continue to be relevant is
the software test and development area to create automated and consolidated
environments that are agile enough to accommodate the constantly changing
requirements. Windows Server virtualization helps minimize test hardware, improve
lifecycle management, and improve test coverage.
• Dynamic datacenter
The rich set of features of Windows Server virtualization combined with the new
management capabilities extended by Virtual Machine Manager enables organizations
to create a more agile infrastructure. Administrators will be able to dynamically add
resources to virtual machines and move them across physical machines transparently
without impacting users.
There are several new features in Windows Server virtualization that help create a scalable,
secure and highly available virtualization platform as a part of Windows Server “Longhorn”.
The following are some of the key components and features of Windows Server virtualization.
Windows hypervisor
A very thin layer of software that leverages the Windows Server driver support and
hardware-assisted virtualization technology. The minimal code base, with no third-party
code or drivers, helps create a more secure and robust base for virtualization solutions.
Windows Server virtualization provides the capability to hot add resources such as
CPU, memory, networks and storage to the virtual machines with no downtime.
Combined with the hot add features of Windows Server “Longhorn”, this enables
administrators to manage their hardware resources without impacting their SLA
commitments.
A key new feature of the Windows Server virtualization platform is the ability to
support 64-bit guests. This enables organizations to be able to virtualize more
applications that are memory intensive and benefit from the increased memory pool
accessible in a 64-bit environment.
Windows Server virtualization now provides the capability to allocate multiple CPU
resources to a single virtual machine and enables virtualization of multi-threaded
applications. This capability combined with the 64-bit guest support makes Windows
Server virtualization a scalable platform for virtualization.
Windows Server virtualization will provide the ability to move a virtual machine from
one physical machine to another with minimal downtime. This capability combined
with host clustering of physical machines provides high availability and flexibility to
achieve an agile and dynamic datacenter.
Windows Server virtualization gives administrators the ability to securely access
files within a virtual hard disk without having to instantiate a virtual machine,
providing granular access to VHDs and allowing some management tasks to be
performed offline.
Virtual Machine Manager delivers simple and complete support for consolidating physical
hardware on virtual infrastructure and optimizing utilization. It also provides rapid provisioning
of virtual machines from physical machines, templates in the image library, or by end users.
Businesses across segments are looking for ways to drive efficiency through their IT
environments and increase operational agility. Virtual Machine Manager provides a
mechanism to enable functionality such as rapid server provisioning, rapid recovery,
and scalable migration capability to make the overall virtual infrastructure robust and
easy to manage.
• Integrated management
System Center Virtual Machine Manager focuses on unique requirements of virtual machines
and is designed to enable increased physical server utilization, centralized management of
virtual machine infrastructure and rapid provisioning of new virtual machines. The following
are some of the key features in Virtual Machine Manager.
The first step in migrating from a physical datacenter with a one-workload-per-server
model is to identify the appropriate physical workloads for consolidation onto virtual
hardware. Appropriate candidates are determined by several factors, such as historical
performance, peak-load characteristics, and access patterns. Virtual Machine Manager
leverages the existing historical performance data in the System Center Operations
Manager database to list the consolidation candidates in rank order.
Intelligent Placement
The act of assigning and activating a given virtual workload onto a physical virtual host
server is referred to as placement. Placement is at the crux of maximizing the utilization of
physical assets. Virtual Machine Manager brings a deep and holistic approach to
placement and combines the knowledge from historical performance data of the virtual
workload and the intelligence about the virtual host system. Business rules and
associated models are also leveraged by Virtual Machine Manager to determine the
placement options.
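The placement idea can be sketched as a simple scoring function. Note that the scoring rule below is invented purely for illustration; Virtual Machine Manager's actual algorithm, which also weighs business rules and models, is not described in this document:

```python
# Hypothetical placement sketch: pick the host that can satisfy a
# workload's historical peak demand with the most headroom left over.
def place(workload_peak_cpu, workload_peak_mem, hosts):
    """hosts: dict name -> (free_cpu, free_mem). Returns best host or None."""
    candidates = []
    for name, (cpu, mem) in hosts.items():
        if cpu >= workload_peak_cpu and mem >= workload_peak_mem:
            # Prefer the host with the most remaining headroom.
            headroom = (cpu - workload_peak_cpu) + (mem - workload_peak_mem)
            candidates.append((headroom, name))
    return max(candidates)[1] if candidates else None

hosts = {"hostA": (4, 8), "hostB": (16, 32), "hostC": (1, 2)}
print(place(2, 4, hosts))   # hostB: largest headroom
```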
Host Provisioning
Virtual Machine Manager will identify the physical virtual hosts in the enterprise
through integrated discovery with Active Directory. This helps organizations easily
scale the management of virtual machines and hosts across the datacenter and branch
offices.
Central Library
Virtual Machine Manager provides a central repository for all the building blocks for a
virtual machine such as VHDs, offline virtual machines, templates, and even ISO
images. Each item in the library has models and rich metadata that enable more
controlled management of the objects. A template is a new object type that enables an
administrator to create approved virtual machine configurations that serve as a gold
standard for subsequent virtual machine deployments.
Self-service provisioning
Chapter 4
x86 Virtualization
x86 virtualization is the method by which x86-based "guest" operating systems can run within
another "host" x86 operating system, with little or no modification of the guest OS. The x86
processor architecture did not originally meet the Popek and Goldberg virtualization
requirements. As a result, it was very difficult to implement a general virtual machine on an
x86 processor. In 2005 and 2006, extensions to their respective x86 architectures by Intel and
AMD resolved this and other virtualization difficulties.
Intel and AMD have independently developed virtualization extensions to the x86
architecture. Though not directly compatible with each other, they serve largely the same
functions. Either will allow a virtual machine hypervisor to run an unmodified guest operating
system without incurring significant emulation performance penalties.
Intel Celeron, Pentium Dual-Core, and Pentium M processors do not have VT technology.
4.1.1 IOMMU
One can run a 64-bit guest on a 32-bit host OS, if the underlying processor runs in 64-bit mode
and supports virtualization extensions; however not all platforms support this. Note, however,
that a 32-bit host OS's memory addressing limits can become problematic, and users should
generally install 64-bit operating systems on 64-bit capable processors.
Intel's VT-x feature needs activation in the BIOS before applications can make use of it. Most
computer and motherboard/BIOS/chipset manufacturers disable this support by default but
make an option available to activate it; some do not. The AMD-V feature may also be
controlled by a BIOS setting.
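On a Linux host, one quick way to check whether the processor advertises these extensions is to look for the `vmx` (Intel VT-x) or `svm` (AMD-V) flag in `/proc/cpuinfo`. This is a Linux-specific convenience, not something described in the text, and note that a BIOS that disables the feature may still leave the flag visible, so this shows capability rather than BIOS activation:

```python
# Detect hardware virtualization support from /proc/cpuinfo-style text.
def hw_virt_support(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

sample = "processor : 0\nflags : fpu vme svm lahf_lm\n"
print(hw_virt_support(sample))   # AMD-V

# On a real Linux system:
# with open("/proc/cpuinfo") as f:
#     print(hw_virt_support(f.read()))
```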
Just as VT-x enhances virtualization at the CPU level, VT-d does the same at the chipset
level: VT-x is a CPU feature, whereas VT-d is integrated into the chipset, so a chipset with
VT-d is required. A VT-d-capable chipset is not the only requirement, however. Some VT-d
capable chipsets are enabled by default; others have to be enabled by the OEM manufacturer
(at the chipset level) before they can be used. The BIOS must also include the VT-d features.
So, for VT-d to work on your system, you need a motherboard with a VT-d-capable chipset
that is enabled by default or was enabled by the OEM manufacturer, together with a BIOS that
includes the VT-d features; if the BIOS doesn't have them, an update of the firmware could
solve the problem. The BIOS also offers an option to enable or disable the feature. For a list
of VT-d-capable chipsets and whether they are enabled by default or depend on the OEM
manufacturer, see the Intel chipset list [3]. Finally, the virtualization software has to support
VT-d. For some virtualization software, such as Xen and Hyper-V, VT-d is a main
requirement; other virtualization software that supports VT-d includes VMware, KVM, and
Parallels (experimental support).
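As a practical aside (an assumption of this sketch, not something from the text): on Linux, when the kernel has initialized an IOMMU such as VT-d, `/sys/kernel/iommu_groups` contains one directory per IOMMU group, which gives a quick way to check that the feature chain described above actually ended up active:

```python
import os

# True if the kernel reports at least one IOMMU group (i.e. VT-d or an
# equivalent IOMMU is enabled and in use).
def iommu_active(path="/sys/kernel/iommu_groups"):
    try:
        return len(os.listdir(path)) > 0
    except FileNotFoundError:
        return False

print("IOMMU active:", iommu_active())
```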
Intel has addressed the rise in virtualization demand by creating Intel® Virtualization
Technology (Intel® VT), a suite of powerful enhancements to Intel® processors, chipsets,
and I/O devices enabling hardware-assisted virtualization support from the core platform
architecture. The hardware assists that Intel VT provides to the virtualization software help
hypervisor providers deliver simpler and more robust code, decreasing software overhead and
its potential impact on solution performance.
Intel VT comprises:
• Intel® Virtualization Technology for IA-32, Intel® 64 architecture, and Itanium®
processors (Intel® VT-x and Intel® VT-i)
• Intel® Virtualization Technology for Directed I/O
• Intel® Virtualization Technology for Connectivity
Intel’s latest addition to its suite of virtualization technologies is Intel Virtualization Technology
for Connectivity (Intel® VT for Connectivity). This new collection of I/O virtualization
technologies improves overall system performance by improving communication between the
host CPU and I/O devices within the virtual server. This enables a lowering of CPU utilization, a
reduction of system latency and improved networking and I/O throughput. Intel VT for
Connectivity includes:
• Virtual Machine Device Queues (VMDq)
• Intel® I/O Acceleration Technology
• Single Root I/O Virtualization (SR-IOV) implementation in Intel® devices
In today’s traditional virtualization implementation, hypervisor abstracts the I/O device and
shares that hardware resource with multiple virtual machines. To route the packets coming from
that shared I/O device, hypervisor sorts the incoming packets based on the destined virtual
machine and then delivers the packets accordingly.
This sorting and grouping done in the hypervisor consumes CPU cycles, thereby impacting
overall virtual server performance. VMDq technology enhances networking performance and
reduces CPU utilization in the virtualized environment. It reduces I/O overhead on the
hypervisor in a virtualized server by performing data sorting in the network silicon. VMDq
makes use of the multiple-queue capability of the network device. With VMDq, as data packets
enter the network adapter they are sorted, and packets with the same destination are grouped
together. The packets are then sent to the hypervisor, which directs them to their respective
destinations. Relieving the hypervisor of packet filtering improves overall CPU utilization and
throughput levels.
Intel I/O Acceleration Technology (Intel® I/OAT) is a suite of features that improves data
acceleration across the platform, from I/O and networking devices to the memory and
processors, helping to improve system performance. The features include Intel® Quick-Data
Technology, Direct Cache Access (DCA), MSI-X, low-latency interrupts, and Receive Side
Coalescing (RSC). Intel Quick-Data Technology moves data copying from the CPU to the
chipset, and DCA enables the CPU to pre-fetch data, thereby avoiding cache misses and
improving application response times. MSI-X helps in load-balancing I/O network interrupts,
and low-latency interrupts automatically tune interrupt interval times depending on the latency
sensitivity of the data. RSC provides lightweight coalescing of receive packets, which increases
the efficiency of the host network stack.
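What VMDq does in silicon, sorting incoming packets into per-VM queues by destination so the hypervisor no longer filters every packet itself, can be modeled in software like this (purely illustrative field names and mapping):

```python
from collections import defaultdict

# Sort packets into per-VM queues by destination MAC address.
def sort_into_queues(packets, mac_to_vm):
    queues = defaultdict(list)
    for pkt in packets:
        vm = mac_to_vm.get(pkt["dst_mac"], "default")
        queues[vm].append(pkt)
    return dict(queues)

mac_to_vm = {"aa:01": "vm1", "aa:02": "vm2"}
packets = [{"dst_mac": "aa:01", "data": "p1"},
           {"dst_mac": "aa:02", "data": "p2"},
           {"dst_mac": "aa:01", "data": "p3"}]
queues = sort_into_queues(packets, mac_to_vm)
print(sorted(queues))        # ['vm1', 'vm2']
print(len(queues["vm1"]))    # 2
```

With the grouping done before the hypervisor sees the traffic, it only has to hand each pre-sorted queue to its VM.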
AMD-V operates on AMD Athlon 64 and Athlon 64 X2 with family "F" or "G" on socket
AM2 (not 939), Turion 64 X2, Opteron 2nd generation and 3rd-generation, Phenom, and all
newer processors. Sempron processors do not include support for AMD-V.
On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor")
and the Athlon 64 FX ("Windsor") as the first AMD processors to support AMD-V. Prior
processors do not have AMD-V. Note also that even if your processor supports AMD-V, the
feature is often disabled in the BIOS by default, possibly by the computer builder or
manufacturer, and must be enabled before use.
AMD has published a specification for a technology named the I/O Memory Management
Unit (IOMMU) as a complement to AMD-V. This provides a way of configuring interrupt
delivery to individual
virtual machines and an IO memory translation unit for preventing a virtual machine from
using DMA to break isolation. The IOMMU also plays an important role in advanced operating
systems (absent virtualization) and the AMD Torrenza architecture.
Chapter 5
VMware
VMware desktop virtualization technology lets you run multiple operating systems on a single
physical computer. Easily run Windows applications on your Mac, including high-end games
and other graphics applications, with VMware Fusion. Run Windows and Linux applications on
Windows or Linux PCs with the free VMware Player.
Reduce development time and shrink QA and support costs with VMware Workstation.
Consolidate multiple development and test workstations onto a single physical system running
multiple virtual machines. Record and replay virtual machine activity to capture, diagnose, and
resolve non-deterministic bugs and race conditions.
Organizations trust VMware enterprise desktop technology to manage and support the needs of
their extended, global workforce while strengthening security and control over corporate
resources and sensitive information. VMware View (formerly VMware Virtual Desktop
Infrastructure (VDI)) lets organizations streamline desktop management and control, reduce
operating costs, and deliver complete desktop environments with greater application
compatibility.
Business is more dependent on technology than ever before, yet more frustrated by its
inflexibility. With an increasingly mobile and globally dispersed workforce using multiple
devices on multiple platforms, users struggle to connect to their data and applications across a
tangle of web, desktop, and server-based solutions. IT struggles to retrofit and manage the
tightly bound, single-purpose system of OS, applications, and hardware. And when a user breaks
or misplaces a computer—productivity stops, security is breached, and intellectual property can
be lost. Reconnecting is no easy task: days are lost bringing users back on line and weeks go by
as IT tries to recoup lost information.
The desktop of the future will not be a single physical device but a collection of different
devices and environments. Applications and data may be located across a combination of
locations, for example a virtual desktop running on a server, a home notebook computer, and a
webmail account. End users want the same view regardless of what device they use to connect
to their desktop or where their applications and data are located – the user wants a universal
client. IT organizations, on the other hand, want to simplify management and take control of
desktops and applications cost-effectively. The universal client is the next evolution in desktop
computing, including virtual desktop infrastructure.
Decouple applications, data, and operating system from the hardware and deliver them to
the user rather than a device. Give users a personal view of their applications and data, whether
they’re on a thin client or laptop, in the office or on the road. Intelligently deliver applications
and data to any device and allow users to focus on their jobs rather than the tools. Balance the
requirements of your business with the needs of your users and create a seamless experience
where applications and data follow the user and not the device.
The monolithic model of tightly coupled hardware, operating system, and applications cannot
keep up with today’s global economy while complying with business, regulatory, and security
objectives. You need flexible solutions to drive your infrastructure based on the needs of your
business.
Just like the single server OS was an indispensable part of the traditional IT stack, the virtual
datacenter OS is an indispensable platform for business computing of the future.
VMware Infrastructure delivers the virtual datacenter OS through the following essential
components:
The traditional IT stack, with its tight coupling of software and hardware, falls short of
supporting customers' needs. The accelerating rate of business change, non-negotiable
requirements for 24x7 business resiliency, and inexorable pressure to reduce cost are increasing
the pressure on IT.
At the same time, IT has a dramatic opportunity to change the status quo by leveraging the
immense power and attractive economics of x86 hardware, the maturing of virtualization
technologies, increasing choice in new application architectures, and the availability of vast
new clouds of cheap and readily accessible computing power.
The virtual datacenter OS enables a dramatically simpler and more efficient model of
computing that meets customers' needs. In this new model, customers define the desired
outcomes and the computing infrastructure guarantees those outcomes precisely. For example,
IT professionals would like to provision an application and specify its service levels
(response time, security protection, and availability level), and the infrastructure should
deliver and assure those service levels at the lowest possible cost and with minimal
maintenance effort.
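To illustrate this declarative model, the desired outcomes could be captured in a simple
specification that the platform is then responsible for assuring. The sketch below is purely
hypothetical; the field names and the `meets` check are illustrations of the idea, not part of
any VMware product API:

```python
from dataclasses import dataclass

@dataclass
class ServiceLevelSpec:
    """Desired outcomes declared once by the IT professional."""
    max_response_ms: int     # response-time objective
    availability_pct: float  # e.g. 99.9 means "three nines"
    encrypted: bool          # whether security protection is required

def meets(spec: ServiceLevelSpec, observed_ms: int,
          observed_pct: float, observed_encrypted: bool) -> bool:
    """Return True if observed behaviour satisfies the declared spec."""
    return (observed_ms <= spec.max_response_ms
            and observed_pct >= spec.availability_pct
            and (observed_encrypted or not spec.encrypted))

# Declare the outcome; the infrastructure's job is to keep meets(...) true.
web_app = ServiceLevelSpec(max_response_ms=200,
                           availability_pct=99.9,
                           encrypted=True)
```

The point of the declarative style is that the administrator states *what* must hold, while
placement, scaling, and failover decisions are left to the platform.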
Conclusion
Virtualization technologies have matured to the point where they are being deployed
across a wide range of platforms and environments. The use of virtualization has gone
beyond increasing infrastructure utilization into areas such as data replication and data
protection. This white paper has looked at the continuing evolution of virtualization, its
potential, some tips on optimizing it, and how to future-proof the technology.
Server virtualization's value is well established. Many companies have migrated
significant percentages of their servers to virtual machines hosted on larger servers,
gaining benefits in hardware utilization, energy use, and data center space, and those that
have not yet done so are planning similar consolidations. These are all capital or
infrastructure costs, though. What does server virtualization do for human costs, the IT
operations piece of the puzzle?
Base-level server consolidation offers a few benefits for IT operations. It makes hardware
maintenance much easier, since virtual machines can be moved to other physical servers when
it is time to maintain or repair the original server. This turns hardware maintenance from a
weekend or late-night effort into part of the regular business day, which is certainly a great
convenience.
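On a Linux host running KVM, for example, this kind of maintenance-time move can be done
with libvirt's live migration. The guest name `guest01` and host `host2.example.com` below are
placeholders:

```shell
# List the guests running on the server that needs maintenance
virsh list

# Live-migrate a guest to another physical host over SSH;
# the VM keeps running while its memory is copied across.
virsh migrate --live guest01 qemu+ssh://host2.example.com/system

# Confirm the guest is now running on the destination host
virsh --connect qemu+ssh://host2.example.com/system list
```

Once the guest is off the original server, that machine can be patched, repaired, or replaced
during normal working hours without any service outage.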