Virtualization: Virtualization and Basics of Virtualization
Contents
Introduction
1. What is virtualization?
2. Cloud computing vs. virtualization
3. A hypervisor
3.1. Types of hypervisors
3.2. Security implications of hypervisors
4. Types of virtualization
4.1. Application virtualization
4.1.1. Description
4.1.2. Benefits of application virtualization
4.1.3. Limitations of application virtualization
4.2. Desktop virtualization
4.2.1. Types of desktop virtualization
4.2.2. Benefits of desktop virtualization
4.3. Server virtualization
4.3.1. Types of server virtualization
4.3.2. Benefits of server virtualization
4.3.3. Limitations of server virtualization
4.4. Storage virtualization
4.4.1. The anatomy of storage virtualization
4.4.2. Benefits of storage virtualization
4.5. Network virtualization
4.5.1. External virtualization
4.5.2. Internal virtualization
4.5.3. Use in testing
4.5.4. Wireless network virtualization
References
Group name - IET Sec-C - Group U
Introduction
While virtualization technology can be traced back to the 1960s, it wasn’t widely adopted until
the early 2000s. The technologies that enabled virtualization—like hypervisors—were developed
decades ago to give multiple users simultaneous access to computers that performed batch
processing. Batch processing was a popular computing style in the business sector that ran routine
tasks thousands of times very quickly (like payroll).
But, over the next few decades, other solutions to the many users/single machine problem grew in
popularity while virtualization didn’t. One of those other solutions was time-sharing, which
isolated users within operating systems—inadvertently leading to other operating systems like
UNIX, which eventually gave way to Linux. All the while, virtualization remained a largely
unadopted, niche technology.
Fast forward to the 1990s. Most enterprises had physical servers and single-vendor IT stacks,
which didn’t allow legacy apps to run on a different vendor’s hardware. As companies updated
their IT environments with less-expensive commodity servers, operating systems, and applications
from a variety of vendors, they were bound to underused physical hardware—each server could
only run one vendor-specific task.
This is where virtualization really took off. It was the natural solution to two problems: companies
could partition their servers and run legacy apps on multiple operating system types and versions.
Servers started being used more efficiently (or not at all), thereby reducing the costs associated
with purchase, set up, cooling, and maintenance.
Virtualization’s widespread applicability helped reduce vendor lock-in and made it the foundation
of cloud computing. It’s so prevalent across enterprises today that specialized virtualization
management software is often needed to help keep track of it all.
In this paper we will take a brief look at types, benefits, and limitations of virtualization and how
virtualization works.
1. What is virtualization?
Virtualization is the process of running a virtual instance of a computer system in a layer
abstracted from the actual hardware. Most commonly, it refers to running multiple operating
systems on a computer system simultaneously. To the applications running on top of the
virtualized machine, it can appear as if they are on their own dedicated machine, where the
operating system, libraries, and other programs are unique to the guest virtualized system and
unconnected to the host operating system which sits below it.
There are many reasons why people utilize virtualization in computing. To desktop users, the
most common use is to be able to run applications meant for a different operating system without
having to switch computers or reboot into a different system. For administrators of servers,
virtualization also offers the ability to run different operating systems but, perhaps more
importantly, it offers a way to segment a large system into many smaller parts, allowing the server
to be used more efficiently by a number of different users or applications with different needs. It
also allows for isolation, keeping programs running inside of a virtual machine safe from the
processes taking place in another virtual machine on the same host.
2. Cloud computing vs. virtualization
Virtualization and cloud computing are distinct technologies. Virtualization creates virtual IT
assets by separating physical infrastructure from operating systems and applications, while cloud
computing is the delivery of shared data and software over a network. Virtualization is software
that manipulates hardware, whereas cloud computing is a service that results from that
manipulation; an easy way to separate the two is to think of virtualization as a technology and
cloud as a service. Confusion between the two arises because cloud computing and virtualization
often work together to
provide services. For example, in an IT environment, virtualization is used to create virtual
versions of hardware while cloud computing is used to store and access data and programs over
the Internet by using these virtual machines.
3. A hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or
hardware that creates and runs virtual machines. A computer on which a hypervisor is running
one or more virtual machines is defined as a host machine. Each virtual machine is called a guest
machine. The hypervisor presents the guest operating systems with a virtual operating platform
and manages the execution of the guest operating systems. Multiple instances of a variety of
operating systems may share the virtualized hardware resources.
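The hypervisor's bookkeeping role described above can be sketched as a toy resource ledger. This is only an illustration of the concept, not how any real hypervisor (KVM, Xen, Hyper-V) is implemented; all class and method names here are invented:

```python
# Minimal sketch: a host's physical resources are carved into isolated
# guest allocations, and the hypervisor tracks what each guest holds.

class Hypervisor:
    def __init__(self, total_cpus, total_mem_gb):
        self.total_cpus = total_cpus
        self.total_mem_gb = total_mem_gb
        self.guests = {}  # guest name -> (vcpus, mem_gb)

    def used_mem(self):
        return sum(mem for _, mem in self.guests.values())

    def create_guest(self, name, vcpus, mem_gb):
        # Refuse allocations beyond physical memory; real hypervisors can
        # overcommit, but overcommitment is exactly what degrades performance.
        if self.used_mem() + mem_gb > self.total_mem_gb:
            raise MemoryError("insufficient host memory")
        self.guests[name] = (vcpus, mem_gb)

host = Hypervisor(total_cpus=16, total_mem_gb=64)
host.create_guest("linux-guest", vcpus=4, mem_gb=16)
host.create_guest("windows-guest", vcpus=4, mem_gb=32)
```

Note that the two guests can run entirely different operating systems; the hypervisor only cares about the resources it hands out.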
4. Types of virtualization
4.1. Application virtualization (AV)
Application virtualization is software technology that encapsulates application software from the
underlying operating system on which it is executed. A fully virtualized application is not
installed in the traditional sense, although it is still executed as if it were. The application
behaves at runtime as if it were directly interfacing with the original operating system and all the
resources managed by it, but it can be isolated or sandboxed to varying degrees.
In this context, the term "virtualization" refers to the artifact being encapsulated (application),
which is quite different from its meaning in hardware virtualization, where it refers to the artifact
being abstracted (physical hardware).
4.1.1. Description
Modern operating systems such as Microsoft Windows and Linux can include limited application
virtualization. For example, Windows 7 provides Windows XP Mode, which enables older Windows
XP applications to run unmodified on Windows 7.
Since a fully virtualized application works with a single package rather than many files spread
throughout the system, it becomes easy to run the application on a different computer, and
previously incompatible applications can be run side-by-side.
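The redirection principle behind this can be sketched in a few lines. Real products intercept file and registry calls at the operating-system level; the `redirect` helper and the paths below are purely hypothetical:

```python
# Sketch: an application's file accesses are transparently re-anchored
# under a private, per-application root, so nothing touches the real system.
import os.path

def redirect(app_root, requested_path):
    # Strip the leading separator and re-anchor the path under the sandbox.
    relative = requested_path.lstrip("/\\")
    return os.path.join(app_root, relative)

# The app "thinks" it writes to a system-wide location...
virtual_path = redirect("/sandbox/legacy-app", "/etc/legacy-app.conf")
# ...but the write actually lands inside the app's private root.
```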
Examples of this technology for the Windows platform include AppZero, 2X Software, Citrix
XenApp, Novell ZENworks AV, Application Jukebox, Microsoft AV, Software Virtualization
Solution, Spoon (formerly Xenocode), Symantec Workspace Virtualization and Workspace
Streaming, VMware ThinApp, P-apps, and Oracle Secure Global Desktop.
4.1.3. Limitations of application virtualization
Some types of software, such as antivirus packages and applications that require heavy OS
integration (such as Stardock's WindowBlinds or TGTSoft's StyleXP), are difficult to virtualize.
Only file and registry-level compatibility issues between legacy applications and newer operating
systems can be addressed by application virtualization. For example, applications that don't
manage the heap correctly will not execute on Windows Vista as they still allocate memory in the
same way, regardless of whether they are virtualized or not. For this reason, specialist application
compatibility fixes (shims) may still be needed, even if the application is virtualized.
Moreover, in software licensing, application virtualization bears great licensing pitfalls mainly
because both the application virtualization software and the virtualized applications must be
correctly licensed.
4.2. Desktop virtualization
4.2.1. Types of Desktop Virtualization
In VDI (Virtual Desktop Infrastructure), VMs—each containing a desktop image—run on a server
within the datacenter. VDI technology leverages a hypervisor to split a server into different
desktop images that users can remotely access via their end-devices. VDI provisions a dedicated
VM running its own OS to each user within the virtualized environment.
RDS (Remote Desktop Services)—also called Remote Desktop Session Host (RDSH), and formerly Terminal Services—allows
users to remotely access shared desktops and Windows applications on Microsoft Windows Server
OS. In RDS, users access remote desktops by sharing the hardware, OS (in this case, a Windows
Server), apps, and host resources.
c. Desktop-as-a-Service (DaaS)
DaaS’s functionality is similar to that of VDI: users access their desktops and apps from any end-
device or platform. However, in VDI, we have to purchase, deploy, and manage all the hardware
components ourselves. In DaaS, we outsource desktop virtualization to a third party that develops
and operates the virtual desktops for us.
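The provisioning contrast between the VDI and RDS models above can be illustrated with a toy sketch. The function names, user names, and host name are hypothetical:

```python
# Sketch: VDI gives each user a dedicated VM with its own OS, while RDS
# gives each user a session on one shared Windows Server host.

def vdi_provision(users):
    # One VM per user: isolation at the machine level.
    return {user: f"vm-{user}" for user in users}

def rds_provision(users, shared_host="winserver-01"):
    # Every user shares the same host OS and hardware.
    return {user: shared_host for user in users}

users = ["alice", "bob"]
vdi = vdi_provision(users)
rds = rds_provision(users)
```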
Client Virtualization
In Client virtualization, we install a hypervisor on a client device to allow us to run multiple OSes.
Client virtualization eliminates the need for users to have their own dedicated hardware and
software. Client virtualization deployment has two variants:
a. Presentation virtualization
Presentation virtualization provides a web-based portal through which users interact
with published desktops and apps. Organizations can use this approach to deliver apps or
desktops from a shared server.
b. Application virtualization
Application virtualization allows apps to run on other platforms. For example, we can run
Windows apps on Linux. We can use Application virtualization to simplify OS migration by
creating portable software. We can then transfer applications between computers without having
to install them.
4.3. Server virtualization
There are a couple of problems with the traditional approach of dedicating each physical server to
a single task, though. One is that it doesn't take advantage of modern server computers' processing
power. Most servers use only a small fraction of their overall
processing capabilities. Another problem is that as a computer network gets larger and more
complex, the servers begin to take up a lot of physical space. A data center might become
overcrowded with racks of servers consuming a lot of power and generating heat.
Server virtualization attempts to address both of these issues in one fell swoop. By using specially
designed software, an administrator can convert one physical server into multiple virtual
machines. Each virtual server acts like a unique physical device, capable of running its own
operating system (OS). In theory, we could create enough virtual servers to use all of a machine's
processing power, though in practice that's not always the best idea.
4.3.1. Types of Server Virtualization
Full virtualization uses a special kind of software called a hypervisor. The hypervisor interacts
directly with the physical server's CPU and disk space. It serves as a platform for the virtual
servers' operating systems. The hypervisor keeps each virtual server completely independent and
unaware of the other virtual servers running on the physical machine. Each guest server runs
its own OS; we can even have one guest running Linux and another running Windows.
The hypervisor monitors the physical server's resources. As virtual servers run applications, the
hypervisor relays resources from the physical machine to the appropriate virtual server.
Hypervisors have their own processing needs, which means that the physical server must reserve
some processing power and resources to run the hypervisor application. This can impact overall
server performance and slow down applications.
The para-virtualization approach is a little different. Unlike the full virtualization technique, the
guest servers in a para-virtualization system are aware of one another. A para-virtualization
hypervisor doesn't need as much processing power to manage the guest operating systems,
because each OS is already aware of the demands the other operating systems are placing on the
physical server. The entire system works together as a cohesive unit.
An OS-level virtualization approach doesn't use a hypervisor at all. Instead, the virtualization
capability is part of the host OS, which performs all the functions of a fully virtualized hypervisor.
The biggest limitation of this approach is that all the guest servers must run the same OS. Each
virtual server remains independent from all the others, but we can't mix and match operating
systems among them. Because all the guest operating systems must be the same, this is called a
homogeneous environment.
4.3.2. Benefits of server virtualization
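The homogeneity constraint just described can be sketched as a simple admission check. This is an illustration only; real container managers enforce kernel compatibility very differently, and all names here are invented:

```python
# Sketch: because OS-level guests share the host kernel, the host must
# refuse any guest whose OS differs from its own.

class ContainerHost:
    def __init__(self, host_os):
        self.host_os = host_os
        self.containers = []

    def launch(self, name, guest_os):
        # The defining limitation of OS-level virtualization.
        if guest_os != self.host_os:
            raise ValueError(f"{guest_os} guest cannot run on {self.host_os} host")
        self.containers.append(name)

host = ContainerHost("linux")
host.launch("web", "linux")   # fine: same OS as the host
```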
Server virtualization provides a way for companies to practice redundancy without purchasing
additional hardware. Redundancy refers to running the same application on multiple servers. It's a
safety measure - if a server fails for any reason, another server running the same application can
take its place. This minimizes any interruption in service. It wouldn't make sense to build two
virtual servers performing the same application on the same physical server. If the physical server
were to crash, both virtual servers would also fail. In most cases, network administrators will
create redundant virtual servers on different physical machines.
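The placement rule administrators follow here—redundant copies never share a physical machine—can be sketched as a small anti-affinity routine. The function and host names are hypothetical:

```python
# Sketch: place each redundant copy of an application on a physical host
# that is not already running a copy, so one hardware failure cannot
# take out both.

def place_redundant(app, physical_hosts, placements):
    """Pick a host that is not already running a copy of `app`."""
    for host in physical_hosts:
        if app not in placements.get(host, []):
            placements.setdefault(host, []).append(app)
            return host
    raise RuntimeError("no host available without an existing copy")

placements = {}
first = place_redundant("payroll", ["host-a", "host-b"], placements)
second = place_redundant("payroll", ["host-a", "host-b"], placements)
```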
Virtual servers offer programmers isolated, independent systems in which they can test new
applications or operating systems. Rather than buying a dedicated physical machine, the network
administrator can create a virtual server on an existing machine. Because each virtual server is
independent in relation to all the other servers, programmers can run software without worrying
about affecting other applications.
Server hardware will eventually become obsolete, and switching from one system to another can
be difficult. In order to continue offering the services provided by these outdated systems -
sometimes called legacy systems - a network administrator could create a virtual version of the
hardware on modern servers. From an application perspective, nothing has changed. The
programs perform as if they were still running on the old hardware. This can give the company
time to transition to new processes without worrying about hardware failures, particularly if the
company that produced the legacy hardware no longer exists and can't fix broken equipment.
An emerging trend in server virtualization is called migration. Migration refers to moving a server
environment from one place to another. With the right hardware and software, it's possible to
move a virtual server from one physical machine in a network to another. Originally, this was
possible only if both physical machines ran on the same hardware, operating system and
processor. It's possible now to migrate virtual servers from one physical machine to another even if
both machines have different processors, but only if the processors come from the same
manufacturer.
4.3.3. Limitations of Server Virtualization
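The compatibility rule described in this paragraph reduces to a simple vendor check, sketched below. It reflects the state of affairs the text describes, not any specific product's migration logic:

```python
# Sketch: migration across different processor models is possible, but
# only between processors from the same manufacturer.

def can_migrate(source_cpu_vendor, target_cpu_vendor):
    return source_cpu_vendor == target_cpu_vendor

ok = can_migrate("Intel", "Intel")     # different models, same vendor: allowed
blocked = can_migrate("Intel", "AMD")  # cross-vendor: not allowed
```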
For servers dedicated to applications with high demands on processing power, virtualization isn't
a good choice. That's because virtualization essentially divides the server's processing power up
among the virtual servers. When the server's processing power can't meet application demands,
everything slows down. Tasks that shouldn't take very long to complete might last hours. Worse,
it's possible that the system could crash if the server can't meet processing demands. Network
administrators should take a close look at CPU usage before dividing a physical server into
multiple virtual machines.
It's also unwise to overload a server's CPU by creating too many virtual servers on one physical
machine. The more virtual machines a physical server must support, the less processing power
each server can receive. In addition, there's a limited amount of disk space on physical servers. Too
many virtual servers could impact the server's ability to store data.
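The arithmetic behind this concern can be shown with a toy calculation. The overhead figure is invented purely for illustration; real hypervisor overhead varies widely:

```python
# Sketch: the more virtual servers share one physical CPU, the smaller
# each one's share, and the hypervisor itself reserves some capacity.

def per_vm_share(physical_cores, vm_count, hypervisor_overhead=0.5):
    usable = physical_cores - hypervisor_overhead
    return usable / vm_count

share_4 = per_vm_share(8, 4)    # a comfortable share per VM
share_30 = per_vm_share(8, 30)  # a sliver per VM: likely overloaded
```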
Another limitation is migration. Right now, it's only possible to migrate a virtual server from one
physical machine to another if both physical machines use the same manufacturer's processor. If a
network uses one server that runs on an Intel processor and another that uses an AMD processor,
it's impossible to port a virtual server from one physical machine to the other.
Why would an administrator want to migrate a virtual server in the first place? If a physical server
requires maintenance, porting the virtual servers over to other machines can reduce the amount of
application downtime. If migration isn't an option, then all the applications running on the virtual
servers hosted on the physical machine will be unavailable during maintenance.
Many companies are investing in server virtualization despite its limitations. As server
virtualization technology advances, the need for huge data centers could decline. Server power
consumption and heat output could also decrease, making server utilization not only financially
attractive, but also a green initiative. As networks use servers closer to their full potential, we
could see larger, more efficient computer networks. It's not an exaggeration to say that virtual
servers could lead to a complete revolution in the computing industry. We'll just have to wait and
see.
4.4. Storage Virtualization
Storage virtualization runs on multiple storage devices, making them appear as if they were a
single storage pool. Pooled storage devices can be from different vendors and networks. The
storage virtualization engine identifies available storage capacity from multiple arrays and storage
media, aggregates it, manages it and presents it to applications.
The virtualization software works by intercepting storage system I/O (Input/Output) requests to
servers. Instead of the CPU processing the request and returning data to storage, the engine maps
physical requests to the virtual storage pool and accesses requested data from its physical location.
Once the computer process is complete, the virtualization engine sends the I/O from the CPU to
its physical address, and updates its virtual mapping. The engine centralizes storage management
into a browser-based console, which allows storage admins to effectively manage multi-vendor
arrays as a single storage system.
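The mapping step at the heart of this process can be sketched as a small translation table. The class and array names are hypothetical; real engines manage far richer metadata:

```python
# Sketch: the virtualization engine aggregates capacity from multiple
# arrays into one virtual address space, then resolves each virtual
# block back to its physical (array, block) location on every I/O.

class StorageVirtualizer:
    def __init__(self):
        self.mapping = {}  # virtual block -> (array, physical block)

    def add_capacity(self, array, physical_blocks, start_virtual):
        for i, pblock in enumerate(physical_blocks):
            self.mapping[start_virtual + i] = (array, pblock)

    def read(self, virtual_block):
        # Intercept the I/O request and resolve its physical location.
        return self.mapping[virtual_block]

pool = StorageVirtualizer()
pool.add_capacity("vendor-a-array", [10, 11, 12], start_virtual=0)
pool.add_capacity("vendor-b-array", [700, 701], start_virtual=3)
```

To the server, the two vendors' arrays appear as one contiguous pool of five blocks.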
4.4.1. The Anatomy of Storage Virtualization
Data Level
a. Block-Based
Block-based storage virtualization is the most common type of storage virtualization. Block-based
virtualization abstracts the storage system’s logical storage from its physical components. Physical
components include memory blocks and storage media, while logical components include drive
partitions.
The storage virtualization engine discovers all available blocks on multiple arrays and individual
media, regardless of the storage system’s physical location, logical partitions, or manufacturer. The
engine leaves data in its physical location and maps the address to the virtual storage pool. This
enables the engine to present multi-vendor storage system capacity to servers, as if the storage
were a single array.
b. File Level
File level virtualization works over NAS (Network-attached storage) devices to pool and
administrate separate NAS appliances. While managing a single NAS is not particularly difficult,
managing multiple appliances is time-consuming and costly. NAS devices are physically and
logically independent of each other, which requires individual management, optimization and
provisioning. This increases complexity and requires that users know the physical pathname to
access a file.
One of the most time-consuming operations with multiple NAS appliances is migrating data
between them. As organizations outgrow legacy NAS devices, they often buy a new and larger
one. This often requires migrating data from older appliances that are near their capacity
thresholds. This in turn requires significant downtime to configure the new appliance, migrate
data from the legacy device, and test the migrated data before going live. But downtime affects
users and projects, and extended downtime for a data migration can financially impact the
organization.
Virtualizing data at the file level masks the complexity of managing multiple NAS appliances, and
enables administrators to pool storage resources instead of limiting them to specific applications or
workgroups. Virtualizing NAS devices also makes downtime unnecessary during data migration.
The virtualization engine maintains correct physical addresses and re-maps changed addresses to
the virtual pool. A user can access a file from the old device and save to the new without ever
knowing migration occurred.
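The zero-downtime behaviour described here comes from remapping the logical path while the user keeps using it, which can be sketched as follows. All class, appliance, and path names are hypothetical:

```python
# Sketch: users see one logical path; the engine remaps it in place when
# a file migrates from an old NAS appliance to a new one.

class FileVirtualizer:
    def __init__(self):
        self.table = {}  # logical path -> (appliance, physical path)

    def publish(self, logical, appliance, physical):
        self.table[logical] = (appliance, physical)

    def migrate(self, logical, new_appliance, new_physical):
        # Remap in place; the logical path the user sees never changes.
        self.table[logical] = (new_appliance, new_physical)

    def resolve(self, logical):
        return self.table[logical]

ns = FileVirtualizer()
ns.publish("/projects/report.doc", "old-nas", "/vol1/report.doc")
ns.migrate("/projects/report.doc", "new-nas", "/vol7/report.doc")
```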
Intelligence
The virtualization engine may be located in different computing components. The three most
common are host, network and array. Each serves a different storage virtualization use case.
a. Host-Based
Primary use case: Virtualizing storage for VM environments and online applications. Some servers
provide virtualization from the OS level. The OS virtualizes available storage to optimize capacity
and automate tiered storage schedules.
More common host-based storage virtualization pools storage in virtual environments and
presents the pool to a guest operating system. One common implementation is a dynamically
expandable VM that acts as the storage pool. Since VMs expect to see hard drives, the
virtualization engine presents underlying storage to the VM as a hard drive. In fact, the
“hard drive” is really the logical storage pool created from disk- and array-based storage assets.
This virtualization approach is most common in cloud and hyper-converged storage. A single host
or hyper-converged system pools available storage into virtualized drives and presents the
drives to guest machines.
b. Network-Based
Primary use case: SAN storage virtualization. Network-based storage virtualization is the most
common type for SAN owners, who use it to extend their investment by adding more storage. The
storage virtualization intelligence runs from a server or switch, across Fibre Channel or iSCSI
networks.
The network-based device abstracts storage I/O running across the storage network, and can
replicate data across all connected storage devices. It also simplifies SAN management with a
single management interface for all pooled storage.
c. Array-Based
Primary use case: Storage tiering. Storage-based virtualization in arrays is not new. Some RAID
levels are essentially virtualized as they abstract storage from multiple physical disks into a single
logical array.
Today, array-based virtualization usually refers to a specialized storage controller that intercepts
I/O requests from secondary storage controllers and automatically tiers data within connected
storage systems. The appliance enables admins to assign media to different storage tiers, usually
placing SSDs in high-performance tiers and HDDs in nearline or secondary tiers. Virtualization also
allows admins to mix media in the same storage tier.
This virtualization approach is more limited than host or network-based virtualization, since
virtualization only occurs over connected controllers. The secondary controllers need the same
amount of bandwidth as the virtualization storage controllers, which can affect performance.
However, if an enterprise has heavily invested in an advanced hybrid array, the array’s storage
intelligence may outpace what storage virtualization can provide. In this case, array-based
virtualization allows the enterprise to retain the array’s native capabilities and add virtualized
tiering for better efficiency.
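The tier assignment just described amounts to a simple media-to-tier policy, sketched below. The tier names mirror the text; the function itself is illustrative, not any vendor's policy engine:

```python
# Sketch: the virtualization controller maps media types onto storage
# tiers, with an explicit fallback for anything unrecognized.

def assign_tier(medium):
    policy = {"ssd": "high-performance", "hdd": "nearline"}
    return policy.get(medium, "secondary")

tiers = {m: assign_tier(m) for m in ["ssd", "hdd", "tape"]}
```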
Band
a. In-Band
In-band storage virtualization occurs when the virtualization engine operates between the host
and storage. Both I/O requests and data pass through the virtualization layer, which allows the
engine to provide advanced functionality like data caching, replication and data migration.
In-band takes up fewer host server resources, because it does not have to find and attach multiple
storage devices. The server only sees the virtually pooled storage in its data path. However, the
larger the pool grows, the higher the risk that it will impact data path throughput.
b. Out-of-Band
Out-of-band storage virtualization splits the path into control (metadata) and data paths. Only the
control path runs through the virtualization appliance, which intercepts I/O requests from the
host, looks up and maps metadata on physical memory locations, and issues an updated I/O
request to storage. Data does not pass through the device, which makes caching impossible.
Out-of-band virtualization installs agents on individual servers to direct their storage I/O to the
virtualization appliance. Although this adds somewhat to individual server loads, out-of-band
virtualization does not bottleneck data like in-band virtualization can. Nevertheless, best practice
is to guard against disruption by deploying redundant out-of-band appliances.
4.5. Network Virtualization
4.5.3. Use in testing
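The control/data split described above can be sketched in a few lines. The dictionaries stand in for the metadata appliance and the storage arrays; all names and addresses are hypothetical:

```python
# Sketch of out-of-band virtualization: the host consults the appliance
# only for metadata (the control path), then moves the data itself
# directly (the data path), so data never passes through the appliance.

metadata_service = {"lun-5/block-9": ("array-2", 4096)}  # control path
arrays = {("array-2", 4096): b"payload"}                 # data path

def out_of_band_read(virtual_address):
    # 1) Control path: look up the physical location via the appliance.
    physical = metadata_service[virtual_address]
    # 2) Data path: the host reads directly from the storage array.
    return arrays[physical]

data = out_of_band_read("lun-5/block-9")
```

Because the appliance never sees the data itself, it cannot cache it, exactly as noted above.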
In software testing, software developers use network virtualization to test software under development
in a simulation of the network environments in which the software is intended to operate. As a
component of application performance engineering, network virtualization enables developers to
emulate connections between applications, services, dependencies, and end users in a test
environment without having to physically test the software on all possible hardware or system
software. Of course, the validity of the test depends on the accuracy of the network virtualization
in emulating real hardware and operating systems.
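The kind of emulated link a developer would test against can be sketched as follows. The class, parameters, and figures are invented for illustration:

```python
# Sketch: an emulated network link with configurable latency and loss,
# letting software be tested against realistic conditions with no real
# network hardware.
import random

class EmulatedLink:
    def __init__(self, latency_ms, loss_rate, seed=42):
        self.latency_ms = latency_ms
        self.loss_rate = loss_rate
        self.rng = random.Random(seed)  # seeded for reproducible tests

    def send(self, packet):
        # Returns (delivered packet or None if dropped, delay in ms).
        if self.rng.random() < self.loss_rate:
            return None, self.latency_ms
        return packet, self.latency_ms

link = EmulatedLink(latency_ms=120, loss_rate=0.0)
delivered, delay = link.send(b"ping")
```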
Microsoft Virtual Server uses virtual machines to make a "network in a box" for x86 systems.
These containers can run different operating systems, such as Microsoft Windows or Linux, either
associated with or independent of a specific network interface controller (NIC).
References
EiABC Pre-Engineering
Section – C
Virtualization
Group members: