
Virtualization

Virtualization and basics of virtualization

By IET Sec-C - Group U EmTe 1012


Virtualization and basics of virtualization

Contents
Introduction
1. What is virtualization?
2. Cloud computing vs. virtualization
3. A hypervisor
   3.1. Types of Hypervisors
   3.2. Security implications of hypervisors
4. Types of virtualization
   4.1. Application virtualization
      4.1.1. Description
      4.1.2. Benefits of application virtualization
      4.1.3. Limitations of application virtualization
   4.2. Desktop virtualization
      4.2.1. Types of Desktop Virtualization
      4.2.2. Benefits of desktop virtualization
   4.3. Server virtualization
      4.3.1. Types of Server Virtualization
      4.3.2. Benefits of server virtualization
      4.3.3. Limitations of Server Virtualization
   4.4. Storage Virtualization
      4.4.1. The Anatomy of Storage Virtualization
      4.4.2. Benefits of Storage Virtualization
   4.5. Network Virtualization
      4.5.1. External virtualization
      4.5.2. Internal virtualization
      4.5.3. Use in testing
      4.5.4. Wireless network virtualization
References
Group name - IET Sec-C - Group U

List of figures

Figure 1. The anatomy of storage virtualization


Introduction

While virtualization technology can be traced back to the 1960s, it wasn’t widely adopted until
the early 2000s. The technologies that enabled virtualization—like hypervisors—were developed
decades ago to give multiple users simultaneous access to computers that performed batch
processing. Batch processing was a popular computing style in the business sector that ran routine
tasks thousands of times very quickly (like payroll).

But, over the next few decades, other solutions to the many users/single machine problem grew in
popularity while virtualization didn’t. One of those other solutions was time-sharing, which
isolated users within operating systems—inadvertently leading to other operating systems like
UNIX, which eventually gave way to Linux. All the while, virtualization remained a largely
unadopted, niche technology.

Fast forward to the 1990s. Most enterprises had physical servers and single-vendor IT stacks,
which didn’t allow legacy apps to run on a different vendor’s hardware. As companies updated
their IT environments with less-expensive commodity servers, operating systems, and applications
from a variety of vendors, they were bound to underused physical hardware—each server could
only run one vendor-specific task.

This is where virtualization really took off. It was the natural solution to two problems: companies
could partition their servers and run legacy apps on multiple operating system types and versions.
Servers started being used more efficiently (or freed up entirely), thereby reducing the costs associated
with purchase, set up, cooling, and maintenance.

Virtualization’s widespread applicability helped reduce vendor lock-in and made it the foundation
of cloud computing. It’s so prevalent across enterprises today that specialized virtualization
management software is often needed to help keep track of it all.

In this paper we will take a brief look at types, benefits, and limitations of virtualization and how
virtualization works.


1. What is virtualization?
Virtualization is the process of running a virtual instance of a computer system in a layer
abstracted from the actual hardware. Most commonly, it refers to running multiple operating
systems on a computer system simultaneously. To the applications running on top of the
virtualized machine, it can appear as if they are on their own dedicated machine, where the
operating system, libraries, and other programs are unique to the guest virtualized system and
unconnected to the host operating system which sits below it.

There are many reasons why people utilize virtualization in computing. To desktop users, the
most common use is to be able to run applications meant for a different operating system without
having to switch computers or reboot into a different system. For administrators of servers,
virtualization also offers the ability to run different operating systems but, perhaps more
importantly, it offers a way to segment a large system into many smaller parts, allowing the server
to be used more efficiently by a number of different users or applications with different needs. It
also allows for isolation, keeping programs running inside of a virtual machine safe from the
processes taking place in another virtual machine on the same host.

2. Cloud computing vs. virtualization

Virtualization and cloud computing are often used interchangeably. However, the two are entirely
different technologies. Here’s a breakdown of virtualization vs. cloud computing to understand
how the technologies differ:

Virtualization is software that separates physical infrastructure (servers, workstations, storage,
etc.) from computing environments to create virtual IT assets. Virtualization enables multiple
virtual machines (VMs) to run on a single piece of hardware by making computing environments
independent of physical infrastructure. Cloud computing means storing and accessing data and
programs over the internet instead of through a local network, or what is today called on-premises.

Virtualization creates virtual IT assets by separating physical infrastructure from operating
systems and applications, while cloud computing is the delivery of shared data and software. Put
differently, virtualization is software that manipulates hardware, while cloud computing refers to
a service that results from that manipulation. An easy way to separate
the two is to think of virtualization as technology and cloud as a service. Confusion between the
two technologies occurs because cloud computing and virtualization often work together to
provide services. For example, in an IT environment, virtualization is used to create virtual
versions of hardware while cloud computing is used to store and access data and programs over
the Internet by using these virtual machines.

3. A hypervisor
A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or
hardware that creates and runs virtual machines. A computer on which a hypervisor runs
one or more virtual machines is called a host machine. Each virtual machine is called a guest
machine. The hypervisor presents the guest operating systems with a virtual operating platform
and manages the execution of the guest operating systems. Multiple instances of a variety of
operating systems may share the virtualized hardware resources.


3.1. Types of Hypervisors


 Type-1: native or bare-metal hypervisors
These hypervisors run directly on the host's hardware to control the hardware and to manage
guest operating systems; for this reason, they are sometimes called bare-metal hypervisors. The
first hypervisors, which IBM (International Business Machines Corporation) developed in the
1960s, were native hypervisors. These included the test software SIMMON and the CP/CMS
operating system (the predecessor of IBM's z/VM). Modern equivalents include Oracle VM Server
for SPARC, Oracle VM Server for x86, Citrix XenServer, VMware ESX/ESXi and Microsoft
Hyper-V.

 Type-2: hosted hypervisors

These hypervisors run on a conventional operating system just as other computer programs do.
Type-2 hypervisors abstract guest operating systems from the host operating system. VMware
Workstation and VirtualBox are examples of type-2 hypervisors.

3.2. Security implications of hypervisors


Malware and rootkits can use hypervisor technology to install themselves as a hypervisor below
the operating system, which can make them more difficult to detect: the malware can intercept
any operation of the operating system (such as someone entering a password) without the
anti-malware software necessarily detecting it, since the malware runs below the entire operating
system. Implementation of the concept has allegedly occurred in the SubVirt laboratory rootkit
(developed jointly by Microsoft and University of Michigan researchers) as well as in the Blue Pill
malware package.

4. Types of virtualization
4.1. Application virtualization (AV)
Application virtualization is software technology that encapsulates application software from the
underlying operating system on which it is executed. A fully virtualized application is not
installed in the traditional sense, although it is still executed as if it were. The application
behaves at runtime like it is directly interfacing with the original operating system and all the
resources managed by it, but can be isolated or sandboxed to varying degrees.

In this context, the term "virtualization" refers to the artifact being encapsulated (application),
which is quite different from its meaning in hardware virtualization, where it refers to the artifact
being abstracted (physical hardware).

4.1.1. Description
Modern operating systems such as Microsoft Windows and Linux can include limited application
virtualization. For example, Windows 7 provides Windows XP Mode, which enables older
Windows XP applications to run unmodified on Windows 7.

Full application virtualization requires a virtualization layer. Application virtualization layers
replace part of the runtime environment normally provided by the operating system. The layer
intercepts all disk operations of virtualized applications and transparently redirects them to a
virtualized location, often a single file. The application remains unaware that it accesses a virtual
resource instead of a physical one. Since the application is now working with one file instead of
many files spread throughout the system, it becomes easy to run the application on a different
computer, and previously incompatible applications can be run side-by-side.
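The redirection mechanism described above can be pictured with a short Python sketch. This is a simplified illustration only (the sandbox path and helper names are invented, not any product's API): the layer remaps the application's file paths into a single per-application store, so the app can "write" to a read-only system location without ever touching it.

```python
import os

SANDBOX = "/tmp/appv_sandbox"  # hypothetical per-application store

def redirect(path: str) -> str:
    """Map a requested absolute path into the application's sandbox."""
    # Strip the leading separator so the path nests inside the sandbox.
    inside = os.path.join(SANDBOX, path.lstrip(os.sep))
    os.makedirs(os.path.dirname(inside), exist_ok=True)
    return inside

def virtualized_open(path, mode="r"):
    """Open files through the redirection layer; the application keeps
    using its original paths and never sees the sandbox."""
    return open(redirect(path), mode)

# The app "writes to" a system-owned location; the layer redirects it.
with virtualized_open("/etc/myapp/settings.ini", "w") as f:
    f.write("theme=dark\n")
```

Because every redirected file lands under one sandbox root, copying that directory to another computer effectively moves the whole application state with it, which is the portability benefit listed above.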

Examples of this technology for the Windows platform include AppZero, 2X Software, Citrix
XenApp, Novell ZENworks Application Virtualization, Application Jukebox, Microsoft App-V,
Software Virtualization Solution, Spoon (formerly Xenocode), Symantec Workspace Virtualization
and Workspace Streaming, VMware ThinApp, P-apps, and Oracle Secure Global Desktop.

4.1.2. Benefits of application virtualization


 Allows applications to run in environments that do not suit the native application:
e.g., Wine allows some Microsoft Windows applications to run on Linux.
e.g., CDE, a lightweight application virtualization tool, allows Linux applications to
run in a distribution-agnostic way.
 May protect the operating system and other applications from poorly written or buggy
code and in some cases provide memory protection and IDE (integrated development
environment) style debugging features, for example as in the IBM OLIVER.
 Uses fewer resources than a separate virtual machine.
 Run applications that are not written correctly, for example applications that try to store
user data in a read-only system-owned location.
 Run incompatible applications side-by-side, at the same time and with minimal regression
testing against one another.
 Reduce system integration and administration costs by maintaining a common software
baseline across multiple diverse computers in an organization.
 Implement the security principle of least privilege by removing the requirement for end-
users to have Administrator privileges in order to run poorly written applications.
 Simplified operating system migrations.
 Improved security, by isolating applications from the operating system.
 Allows applications to be copied to portable media and then imported to client
computers without the need to install them (so-called portable software).

4.1.3. Limitations of application virtualization


Not all software can be virtualized. Some examples include applications that require a device
driver and 16-bit applications that need to run in shared memory space.

Some types of software such as antivirus packages and applications that require heavy OS
integration, such as Stardock's WindowBlinds or TGTSoft's StyleXP, are difficult to virtualize.

Only file and registry-level compatibility issues between legacy applications and newer operating
systems can be addressed by application virtualization. For example, applications that don't
manage the heap correctly will not execute on Windows Vista as they still allocate memory in the
same way, regardless of whether they are virtualized or not. For this reason, specialist application
compatibility fixes (shims) may still be needed, even if the application is virtualized.

Moreover, in software licensing, application virtualization bears great licensing pitfalls mainly
because both the application virtualization software and the virtualized applications must be
correctly licensed.


4.2. Desktop virtualization


Desktop virtualization is a technology that allows the creation and storage of multiple user
desktop instances on a single host, residing in a data center or the cloud. It is achieved by using a
hypervisor, which resides on top of the host server hardware to manage and allow virtual
desktops to utilize the computing power of the underlying server hardware. The hypervisor
creates VMs that simulate the user’s desktop environments, which can hold different operating
systems, applications, personalized settings, and user data. Users can remotely access as well as
operate these desktops from any endpoint device.

4.2.1. Types of Desktop Virtualization


Desktop virtualization has two major deployment models: Hosted Desktop and Client
Virtualization.

 Hosted Desktop Virtualization


Under this model, a server residing in a data center hosts the virtual machines. Users can
connect to the server through standard protocols such as Remote Desktop Protocol (RDP) or
connection brokers. There are three major variants under Hosted Desktop Virtualization:

a. Virtual Desktop Infrastructure (VDI)

In VDI, the VMs—each containing a desktop image—run on a server within the data center.
VDI technology leverages a hypervisor to split a server into different desktop images that users
can remotely access via their end devices. VDI provisions a dedicated VM running its own OS to
each user within the virtualized environment.

b. Remote Desktop Services (RDS)

RDS—also called Remote Desktop Session Host (RDSH), and formerly Terminal Services—allows
users to remotely access shared desktops and Windows applications on Microsoft Windows Server
OS. In RDS, users access remote desktops by sharing the hardware, OS (in this case, a Windows
Server), apps, and host resources.

c. Desktop-as-a-Service (DaaS)

DaaS’s functionality is similar to that of VDI: users access their desktops and apps from any end
device or platform. However, in VDI, we have to purchase, deploy, and manage all the hardware
components ourselves. In DaaS, we outsource desktop virtualization to a third party that develops
and operates the virtual desktops for us.

 Client Virtualization
In Client virtualization, we install a hypervisor on a client device to allow us to run multiple OSes.
Client virtualization eliminates the need for users to have their own dedicated hardware and
software. Client virtualization deployment has two variants:

a. Presentation virtualization

Presentation virtualization provides a web-based portal through which users interact with
published desktops and apps. Organizations can use this approach to deliver apps or desktops
from a shared server.


b. Application virtualization

Application virtualization allows apps to run on other platforms. For example, we can run
Windows apps on Linux. We can use Application virtualization to simplify OS migration by
creating portable software. We can then transfer applications between computers without having
to install them.

4.2.2. Benefits of desktop virtualization


 Simplified administration. Desktop virtualization enables IT admins to manage a server from a
centralized location, allowing for quicker deployments and simplified maintenance. This saves
IT resources and time for an organization.
 Secure and mobile access to apps. Organizations can use virtualized desktops to provide their
remote employees with high-throughput apps, for example by enabling GPU sharing, via a
secure connection from any end device or platform.
 Enhanced employee productivity. Employees can securely access their corporate virtual
desktops from any end device, at any location and time. Desktop virtualization is a good fit
for telework because, unlike typical mobile computing technologies, it lets employees access
specialized apps and functionality on the go.
 Reduced downtimes and accelerated deployments. With virtualized desktops, users can easily
be migrated to other VMs in case there is a hardware failure. As such, there’s no lost time and
productivity. Similarly, IT admins can quickly deploy new hardware within a centralized
infrastructure—getting new employees on board and up to speed.
 Reduced IT costs. Desktop virtualization allows organizations to shift their IT budgets from
capital to operating expenditures. By delivering computationally-intensive apps on VMs that
are hosted on a data center, organizations can extend the shelf life of older PCs or even less
powerful machines. Besides, we also save on software licensing requirements because we only
need to install apps on a single, centralized server as opposed to individual workstations.
 Enhanced user experience. Desktop virtualization can provide a feature-rich experience
regardless of the hardware on which apps run. For example, users can still access USB ports or
printing services on their end devices.

4.3. Server virtualization


Server computers - machines that host files and applications on computer networks - have to be
powerful. Some have central processing units (CPUs) with multiple processors that give these
servers the ability to run complex tasks with ease. Computer network administrators usually
dedicate each server to a specific application or task. Many of these tasks don't play well with
others - each needs its own dedicated machine. One application per server also makes it easier to
track down problems as they arise. It's a simple way to streamline a computer network from a
technical standpoint.

There are a couple of problems with this approach, though. One is that it doesn't take advantage of
modern server computers' processing power. Most servers use only a small fraction of their overall
processing capabilities. Another problem is that as a computer network gets larger and more
complex, the servers begin to take up a lot of physical space. A data center might become
overcrowded with racks of servers consuming a lot of power and generating heat.

Server virtualization attempts to address both of these issues in one fell swoop. By using specially
designed software, an administrator can convert one physical server into multiple virtual
machines. Each virtual server acts like a unique physical device, capable of running its own
operating system (OS). In theory, we could create enough virtual servers to use all of a machine's
processing power, though in practice that's not always the best idea.

4.3.1. Types of Server Virtualization


There are three ways to create virtual servers: full virtualization, para-virtualization and OS-level
virtualization. They all share a few common traits. The physical server is called the host. The
virtual servers are called guests. The virtual servers behave like physical machines. Each system
uses a different approach to allocate physical server resources to virtual server needs.

Full virtualization uses a special kind of software called a hypervisor. The hypervisor interacts
directly with the physical server's CPU and disk space. It serves as a platform for the virtual
servers' operating systems. The hypervisor keeps each virtual server completely independent and
unaware of the other virtual servers running on the physical machine. Each guest server runs
its own OS; we can even have one guest running on Linux and another on Windows.

The hypervisor monitors the physical server's resources. As virtual servers run applications, the
hypervisor relays resources from the physical machine to the appropriate virtual server.
Hypervisors have their own processing needs, which means that the physical server must reserve
some processing power and resources to run the hypervisor application. This can impact overall
server performance and slow down applications.
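To make the relaying idea concrete, here is a toy Python sketch (purely illustrative, not how any real hypervisor schedules CPU): it reserves a fraction of the host's capacity for the hypervisor's own overhead and divides the rest among guests in proportion to demand, so an overcommitted host slows every guest down.

```python
HYPERVISOR_OVERHEAD = 0.10  # fraction of CPU reserved for the hypervisor itself

def allocate_cpu(total_cores: float, demands: dict) -> dict:
    """Divide the CPU left after hypervisor overhead among guest VMs.

    demands maps VM name -> cores requested. If the host can satisfy
    everyone, each guest receives its full request; otherwise shares are
    scaled down proportionally and all guests run slower.
    """
    available = total_cores * (1 - HYPERVISOR_OVERHEAD)
    requested = sum(demands.values())
    scale = min(1.0, available / requested) if requested else 0.0
    return {vm: cores * scale for vm, cores in demands.items()}

# An 8-core host: demand fits, so each guest receives its full request.
print(allocate_cpu(8, {"linux_guest": 2, "windows_guest": 3}))
# Overcommitted host: 12 cores demanded, only ~7.2 available after
# overhead, so every guest's share shrinks proportionally.
print(allocate_cpu(8, {"vm1": 4, "vm2": 4, "vm3": 4}))
```

The same arithmetic underlies the limitation discussed in section 4.3.3: once total demand exceeds what the host can supply, every virtual server's effective share shrinks at once.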

The para-virtualization approach is a little different. Unlike the full virtualization technique, the
guest servers in a para-virtualization system are aware of one another. A para-virtualization
hypervisor doesn't need as much processing power to manage the guest operating systems,
because each OS is already aware of the demands the other operating systems are placing on the
physical server. The entire system works together as a cohesive unit.

An OS-level virtualization approach doesn't use a hypervisor at all. Instead, the virtualization
capability is part of the host OS, which performs all the functions of a fully virtualized hypervisor.
The biggest limitation of this approach is that all the guest servers must run the same OS. Each
virtual server remains independent from all the others, but we can't mix and match operating
systems among them. Because all the guest operating systems must be the same, this is called a
homogeneous environment.

4.3.2. Benefits of server virtualization


Server virtualization conserves space through consolidation. It's common practice to dedicate each
server to a single application. If several applications only use a small amount of processing power,
the network administrator can consolidate several machines into one server running multiple
virtual environments. For companies that have hundreds or thousands of servers, the need for
physical space can decrease significantly.

Server virtualization provides a way for companies to practice redundancy without purchasing
additional hardware. Redundancy refers to running the same application on multiple servers. It's a
safety measure - if a server fails for any reason, another server running the same application can
take its place. This minimizes any interruption in service. It wouldn't make sense to build two
virtual servers performing the same application on the same physical server. If the physical server
were to crash, both virtual servers would also fail. In most cases, network administrators will
create redundant virtual servers on different physical machines.
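The placement rule in that last sentence is easy to express in code. The Python sketch below is a hypothetical helper (not any real orchestration API): it pins each replica of an application to a distinct physical host, so a single hardware failure leaves at least one copy running.

```python
def place_replicas(app: str, hosts: list, replicas: int = 2) -> dict:
    """Assign each replica of an application to a different physical host,
    so one hardware failure cannot take down every copy at once."""
    if replicas > len(hosts):
        raise ValueError("not enough physical hosts for the requested redundancy")
    placement = {}
    for i in range(replicas):
        placement[f"{app}-replica{i}"] = hosts[i]
    return placement

# Each replica lands on a different machine; if host1 crashes, the
# copy on host2 keeps serving requests.
print(place_replicas("payroll", ["host1", "host2", "host3"]))
```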


Virtual servers offer programmers isolated, independent systems in which they can test new
applications or operating systems. Rather than buying a dedicated physical machine, the network
administrator can create a virtual server on an existing machine. Because each virtual server is
independent in relation to all the other servers, programmers can run software without worrying
about affecting other applications.

Server hardware will eventually become obsolete, and switching from one system to another can
be difficult. In order to continue offering the services provided by these outdated systems -
sometimes called legacy systems - a network administrator could create a virtual version of the
hardware on modern servers. From an application perspective, nothing has changed. The
programs perform as if they were still running on the old hardware. This can give the company
time to transition to new processes without worrying about hardware failures, particularly if the
company that produced the legacy hardware no longer exists and can't fix broken equipment.

An emerging trend in server virtualization is called migration. Migration refers to moving a server
environment from one place to another. With the right hardware and software, it's possible to
move a virtual server from one physical machine in a network to another. Originally, this was
possible only if both physical machines ran on the same hardware, operating system and
processor. It's possible now to migrate virtual servers from one physical machine to another even if
both machines have different processors, but only if the processors come from the same
manufacturer.

4.3.3. Limitations of Server Virtualization


The benefits of server virtualization can be so enticing that it's easy to forget that the technique
isn't without its share of limitations. It's important for a network administrator to research server
virtualization and his or her own network's architecture and needs before attempting to engineer a
solution.

For servers dedicated to applications with high demands on processing power, virtualization isn't
a good choice. That's because virtualization essentially divides the server's processing power up
among the virtual servers. When the server's processing power can't meet application demands,
everything slows down. Tasks that shouldn't take very long to complete might last hours. Worse,
it's possible that the system could crash if the server can't meet processing demands. Network
administrators should take a close look at CPU usage before dividing a physical server into
multiple virtual machines.

It's also unwise to overload a server's CPU by creating too many virtual servers on one physical
machine. The more virtual machines a physical server must support, the less processing power
each server can receive. In addition, there's a limited amount of disk space on physical servers. Too
many virtual servers could impact the server's ability to store data.

Another limitation is migration. Right now, it's only possible to migrate a virtual server from one
physical machine to another if both physical machines use the same manufacturer's processor. If a
network uses one server that runs on an Intel processor and another that uses an AMD processor,
it's impossible to port a virtual server from one physical machine to the other.

Why would an administrator want to migrate a virtual server in the first place? If a physical server
requires maintenance, porting the virtual servers over to other machines can reduce the amount of
application downtime. If migration isn't an option, then all the applications running on the virtual
servers hosted on the physical machine will be unavailable during maintenance.

Many companies are investing in server virtualization despite its limitations. As server
virtualization technology advances, the need for huge data centers could decline. Server power
consumption and heat output could also decrease, making server utilization not only financially
attractive, but also a green initiative. As networks use servers closer to their full potential, we
could see larger, more efficient computer networks. It's not an exaggeration to say that virtual
servers could lead to a complete revolution in the computing industry. We'll just have to wait and
see.

4.4. Storage Virtualization


Storage virtualization is the technology of abstracting physical data storage resources to make
them appear as if they were a centralized resource. Virtualization masks the complexities of
managing resources in memory, networks, servers and storage.

Storage virtualization runs on multiple storage devices, making them appear as if they were a
single storage pool. Pooled storage devices can be from different vendors and networks. The
storage virtualization engine identifies available storage capacity from multiple arrays and storage
media, aggregates it, manages it and presents it to applications.

The virtualization software works by intercepting storage system I/O (input/output) requests
from servers. Rather than sending a request straight to a physical device, the engine maps it
against the virtual storage pool and accesses the requested data at its physical location. Once the
operation is complete, the virtualization engine returns the I/O result and updates its virtual
mapping. The engine centralizes storage management into a browser-based console, which allows
storage admins to effectively manage multi-vendor arrays as a single storage system.
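As a rough illustration of the mapping step, the hypothetical Python sketch below pools blocks from multi-vendor arrays into one virtual address range and translates a server's virtual block request back to its physical location. Real virtualization engines keep far richer metadata; the device names here are invented.

```python
class StorageVirtualizer:
    """Toy virtual-to-physical block mapper for a pooled storage engine."""

    def __init__(self):
        self.mapping = {}   # virtual block number -> (device, physical block)
        self.next_virtual = 0

    def add_array(self, device: str, num_blocks: int):
        """Fold a new array's blocks into the single virtual pool."""
        for physical in range(num_blocks):
            self.mapping[self.next_virtual] = (device, physical)
            self.next_virtual += 1

    def resolve(self, virtual_block: int):
        """Translate a server's I/O request to its physical location."""
        return self.mapping[virtual_block]

pool = StorageVirtualizer()
pool.add_array("vendorA_array", 4)   # becomes virtual blocks 0-3
pool.add_array("vendorB_array", 4)   # becomes virtual blocks 4-7
# The server only asks for virtual block 5; the engine finds it on vendor B.
print(pool.resolve(5))   # ('vendorB_array', 1)
```

Because servers address only the virtual range, the engine can later remap a virtual block to a new physical home (for example during a migration) without the server noticing.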

4.4.1. The Anatomy of Storage Virtualization

Figure 1. The anatomy of storage virtualization.


 Data Level
a. Block-Based

Block-based storage virtualization is the most common type of storage virtualization. Block-based
virtualization abstracts the storage system’s logical storage from its physical components. Physical
components include memory blocks and storage media, while logical components include drive
partitions.

The storage virtualization engine discovers all available blocks on multiple arrays and individual
media, regardless of the storage system’s physical location, logical partitions, or manufacturer. The
engine leaves data in its physical location and maps the address to the virtual storage pool. This
enables the engine to present multi-vendor storage system capacity to servers, as if the storage
were a single array.

b. File Level

File-level virtualization works over NAS (network-attached storage) devices to pool and
administer separate NAS appliances. While managing a single NAS is not particularly difficult,
managing multiple appliances is time-consuming and costly. NAS devices are physically and
logically independent of each other, which requires individual management, optimization and
provisioning. This increases complexity and requires that users know the physical pathname to
access a file.

One of the most time-consuming operations with multiple NAS appliances is migrating data
between them. As organizations outgrow legacy NAS devices, they often buy a new and larger
one. This often requires migrating data from older appliances that are near their capacity
thresholds. This in turn requires significant downtime to configure the new appliance, migrate
data from the legacy device, and test the migrated data before going live. But downtime affects
users and projects, and extended downtime for a data migration can financially impact the
organization.

Virtualizing data at the file level masks the complexity of managing multiple NAS appliances, and
enables administrators to pool storage resources instead of limiting them to specific applications or
workgroups. Virtualizing NAS devices also makes downtime unnecessary during data migration.
The virtualization engine maintains correct physical addresses and re-maps changed addresses to
the virtual pool. A user can access a file from the old device and save to the new without ever
knowing migration occurred.
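The migration transparency described above can be sketched as follows. The class and path names are hypothetical, chosen only to illustrate the re-mapping idea:

```python
# Sketch of file-level virtualization: users keep a stable virtual path
# while the engine re-maps it when data migrates from an old NAS
# appliance to a new one.

class FileVirtualizer:
    def __init__(self):
        self.table = {}  # virtual path -> (appliance, physical path)

    def publish(self, vpath, appliance, ppath):
        """Expose a file under a virtual path."""
        self.table[vpath] = (appliance, ppath)

    def migrate(self, vpath, new_appliance, new_ppath):
        """Move the file's physical location; the virtual path is unchanged."""
        self.table[vpath] = (new_appliance, new_ppath)

    def open(self, vpath):
        """Resolve the virtual path to wherever the file currently lives."""
        return self.table[vpath]

fv = FileVirtualizer()
fv.publish("/projects/report.doc", "nas_old", "/vol1/report.doc")
fv.migrate("/projects/report.doc", "nas_new", "/vol7/report.doc")
print(fv.open("/projects/report.doc"))  # ('nas_new', '/vol7/report.doc')
```

A user opening `/projects/report.doc` before and after the migration sees the same path, which is exactly why no downtime is needed.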

• Intelligence
The virtualization engine may be located in different computing components. The three most
common are host, network and array. Each serves a different storage virtualization use case.

a. Host-Based

Primary use case: Virtualizing storage for VM environments and online applications. Some servers
provide virtualization at the OS level. The OS virtualizes available storage to optimize capacity
and automate tiered storage schedules.

More common host-based storage virtualization pools storage in virtual environments and
presents the pool to a guest operating system. One common implementation is a dynamically


expandable VM that acts as the storage pool. Since VMs expect to see hard drives, the
virtualization engine presents underlying storage to the VM as a hard drive. In fact, the
"hard drive" is a logical storage pool created from disk- and array-based storage assets.

This virtualization approach is most common in cloud and hyper-converged storage. A single host
or hyper-converged system pools available storage into virtualized drives, and presents the drives
to guest machines.
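The dynamically expandable virtual disk described above can be sketched like this (device names are invented):

```python
# Sketch of a host-based "virtual hard drive": one contiguous drive
# presented to a guest VM, backed by extents drawn from disk- and
# array-based storage assets that can be added over time.

class VirtualDisk:
    def __init__(self):
        self.extents = []  # list of (backing_device, size_in_blocks)

    def expand(self, device, blocks):
        """Grow the virtual disk by attaching another backing extent."""
        self.extents.append((device, blocks))

    @property
    def size(self):
        # The guest sees a single drive of the combined size.
        return sum(blocks for _, blocks in self.extents)

vdisk = VirtualDisk()
vdisk.expand("local_ssd", 1000)
vdisk.expand("array_lun_3", 4000)
print(vdisk.size)  # 5000
```

The guest never learns that its "drive" spans a local SSD and an array LUN; it simply sees its capacity grow.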

b. Network-Based

Primary use case: SAN storage virtualization. Network-based storage virtualization is the most
common type for SAN owners, who use it to extend their investment by adding more storage. The
storage virtualization intelligence runs from a server or switch, across Fibre Channel or iSCSI
networks.

The network-based device abstracts storage I/O running across the storage network, and can
replicate data across all connected storage devices. It also simplifies SAN management with a
single management interface for all pooled storage.

c. Array-Based

Primary use case: Storage tiering. Storage-based virtualization in arrays is not new. Some RAID
levels are essentially virtualized as they abstract storage from multiple physical disks into a single
logical array.

Today, array-based virtualization usually refers to a specialized storage controller that intercepts
I/O requests from secondary storage controllers and automatically tiers data within connected
storage systems. The appliance enables admins to assign media to different storage tiers, usually
SSDs to high-performance tiers and HDDs to nearline or secondary tiers. Virtualization also
allows admins to mix media in the same storage tier.

This virtualization approach is more limited than host or network-based virtualization, since
virtualization only occurs over connected controllers. The secondary controllers need the same
amount of bandwidth as the virtualization storage controllers, which can affect performance.

However, if an enterprise has heavily invested in an advanced hybrid array, the array’s storage
intelligence may outpace what storage virtualization can provide. In this case, array-based
virtualization allows the enterprise to retain the array’s native capabilities and add virtualized
tiering for better efficiency.
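The tiering policy described above reduces to a simple mapping from media type to tier. This sketch uses invented device names and a deliberately simple rule:

```python
# Sketch of array-based tiering: SSDs are assigned to the performance
# tier, HDDs to nearline, and anything else falls back to secondary.

def assign_tier(media):
    tiers = {"ssd": "performance", "hdd": "nearline"}
    return tiers.get(media, "secondary")

devices = [("lun0", "ssd"), ("lun1", "hdd"), ("lun2", "ssd")]
placement = {name: assign_tier(media) for name, media in devices}
print(placement)
# {'lun0': 'performance', 'lun1': 'nearline', 'lun2': 'performance'}
```

A real controller would also weigh access frequency and admin overrides, which is what lets media types be mixed within a tier.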

• Band
a. In-Band

In-band storage virtualization occurs when the virtualization engine operates between the host
and storage. Both I/O requests and data pass through the virtualization layer, which allows the
engine to provide advanced functionality like data caching, replication and data migration.

In-band takes up fewer host server resources, because it does not have to find and attach multiple
storage devices. The server only sees the virtually pooled storage in its data path. However, the
larger the pool grows, the higher the risk that it will impact data path throughput.
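Because both requests and data pass through an in-band engine, it can serve repeated reads from a cache. A minimal sketch, with an invented class name and an in-memory dict standing in for the backend storage:

```python
# Sketch of in-band virtualization: the data path runs through the
# engine, so it can cache reads and serve repeats without touching
# the backend.

class InBandEngine:
    def __init__(self, backend):
        self.backend = backend  # block -> data, standing in for storage
        self.cache = {}

    def read(self, block):
        if block in self.cache:       # hit: served from the engine itself
            return self.cache[block]
        data = self.backend[block]    # miss: fetch from backend storage
        self.cache[block] = data
        return data

engine = InBandEngine({0: b"alpha", 1: b"beta"})
engine.read(0)                 # first read goes to the backend
print(engine.read(0))          # b'alpha' (second read served from cache)
```

This caching is exactly the capability out-of-band designs give up, since data never flows through their appliance.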

b. Out-of-Band


Out-of-band storage virtualization splits the path into control (metadata) and data paths. Only the
control path runs through the virtualization appliance, which intercepts I/O requests from the
host, looks up and maps metadata on physical memory locations, and issues an updated I/O
request to storage. Data does not pass through the device, which makes caching impossible.

Out-of-band virtualization installs agents on individual servers to direct their storage I/O to the
virtualization appliance. Although this adds somewhat to individual server loads, out-of-band
virtualization does not bottleneck data like in-band can. Nevertheless, best practice is to guard
against virtualization disruption by deploying redundant out-of-band appliances.
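The control/data path split described above can be sketched as follows. The engine answers only the metadata lookup; the host then fetches the data directly from storage (all names are invented):

```python
# Sketch of out-of-band virtualization: the appliance handles only the
# control path (metadata lookup); the data path goes from the host
# straight to storage, which is why the appliance cannot cache data.

class OutOfBandEngine:
    def __init__(self, metadata):
        self.metadata = metadata  # block -> (device, physical address)

    def lookup(self, block):
        return self.metadata[block]

def host_read(engine, devices, block):
    device, addr = engine.lookup(block)  # control path: via the engine
    return devices[device][addr]         # data path: direct to storage

devices = {"array1": {7: b"payload"}}
engine = OutOfBandEngine({0: ("array1", 7)})
print(host_read(engine, devices, 0))  # b'payload'
```

The `host_read` function plays the role of the agent installed on each server, directing its storage I/O through the engine for lookups only.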

4.4.2. Benefits of Storage Virtualization


• Enables dynamic storage utilization and virtual scalability of attached storage resources, both
block and file.
• Avoids downtime during data migration. Virtualization operates in the background to
maintain data's logical address and preserve access.
• Centralizes management of multi-vendor storage devices in a single dashboard, which saves
management overhead and money.
• Protects existing investments by expanding the storage available to a host or SAN.
• Can add storage intelligence such as tiering, caching, replication and a centralized management
interface in a multi-vendor environment.

4.5. Network Virtualization


In computing, network virtualization is the process of combining hardware and software network
resources and network functionality into a single, software-based administrative entity, a virtual
network. Network virtualization involves platform virtualization, often combined with resource
virtualization.

Network virtualization is categorized as either external virtualization, combining many networks
or parts of networks into a virtual unit, or internal virtualization, providing network-like
functionality to software containers on a single network server.

In software testing, developers use network virtualization to test software under development
in a simulation of the network environments in which it is intended to operate. As a
component of application performance engineering, network virtualization enables developers to
emulate connections between applications, services, dependencies, and end users in a test
environment without having to physically test the software on all possible hardware or system
software. Of course, the validity of the test depends on the accuracy of the network virtualization
in emulating real hardware and operating systems.

4.5.1. External virtualization


External network virtualization combines or subdivides one or more local area networks (LANs) into
virtual networks to improve a large network's or data center's efficiency. A virtual local area
network (VLAN) and network switch comprise the key components. Using this technology, a
system administrator can configure systems physically attached to the same local network into
separate virtual networks. Conversely, an administrator can combine systems on separate local
networks into a VLAN spanning the segments of a large network.
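Both directions of this reconfiguration, partitioning one physical LAN and combining separate segments, come down to VLAN membership. A sketch with invented host names:

```python
# Sketch of external virtualization via VLANs: hosts on the same
# physical switch can be isolated into separate virtual networks,
# and hosts on different segments combined into one spanning VLAN.

vlan_membership = {
    "host_a": 10,  # same switch as host_b, but isolated on VLAN 10
    "host_b": 20,
    "host_c": 20,  # different physical segment, joined with host_b
}

def same_virtual_network(h1, h2):
    """Two hosts share a virtual network iff their VLAN IDs match."""
    return vlan_membership[h1] == vlan_membership[h2]

print(same_virtual_network("host_b", "host_c"))  # True
print(same_virtual_network("host_a", "host_b"))  # False
```

Physical adjacency no longer decides who can talk to whom; the VLAN ID does.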


4.5.2. Internal virtualization


Internal virtualization is also called a virtual channel. Internal network virtualization configures a
single system with software containers, such as Xen hypervisor control programs, or
pseudo-interfaces, such as a VNIC (virtual network interface card), to emulate a physical network in
software. This can improve a single system's efficiency by isolating applications in separate
containers or pseudo-interfaces.
• Examples
Citrix and Vyatta have built a virtual network protocol stack combining Vyatta's routing, firewall,
and VPN functions with Citrix's NetScaler load balancer, branch repeater wide area network
(WAN) optimization, and secure sockets layer (SSL) VPN. OpenSolaris network virtualization
provides a so-called "network in a box".

Microsoft Virtual Server uses virtual machines to make a "network in a box" for x86 systems.
These containers can run different operating systems, such as Microsoft Windows or Linux, either
associated with or independent of a specific network interface controller (NIC).

4.5.3. Use in testing


Network virtualization may be used in application development and testing to mimic real-world
hardware and system software. In application performance engineering, network virtualization
enables emulation of connections between applications, services, dependencies, and end users for
software testing.

4.5.4. Wireless network virtualization


Wireless network virtualization can have a very broad scope, ranging from spectrum sharing and
infrastructure virtualization to air interface virtualization. As in wired network
virtualization, where physical infrastructure owned by one or more providers can be shared
among multiple service providers, wireless network virtualization requires the physical wireless
infrastructure and radio resources to be abstracted and isolated into a number of virtual resources,
which can then be offered to different service providers. In other words, virtualization, whether
of wired or wireless networks, can be considered a process of splitting the entire network system.

However, the distinctive properties of the wireless environment, such as time-varying channels,
attenuation, mobility, and broadcast transmission, make the problem more complicated.
Furthermore, wireless network virtualization depends on specific access technologies, and
wireless networks involve many more access technologies than wired networks, each with its own
particular characteristics, which makes convergence, sharing and abstraction difficult to achieve.
Therefore, it may be inaccurate to consider wireless network virtualization as a subset of network
virtualization.


References

1. Balwinder Singh Sodhi, "Topics in Virtualization and Cloud Computing", July 2017.
2. "Introduction to Virtualization", NDG in partnership with VMware IT Academy,
https://www.vmware.com/go/academy
3. Betsy Gamrat, "Using Vagrant and Ansible to Deploy Virtual Machines for Web Development".
4. Devansh Agarwal, "Virtualization and Hypervisors", Medium.
5. Christine Taylor, "Storage Virtualization", January 8, 2020.
6. Morty Eisen (Marcum Technology), "Introduction to Virtualization", Long Island Chapter of the
IEEE Circuits and Systems (CAS) Society, April 28, 2011.
7. Dmytro Ageyev, Oleg Bondarenko, Tamara Radivilova and Walla Alfroukh, "Classification of
Existing Virtualization Methods Used in Telecommunication Networks", Kharkiv National
University of Radio Electronics, Kharkiv, Ukraine, http://nure.ua
8. "Topics in Virtualization and Cloud Computing and Storage Systems Introduction", Autumn 2015.

Group name - IET Sec-C - Group U

EiABC Pre-Engineering
Section – C
Virtualization
Group members:

1. Fikre Azazh (UGR/8382/12)


2. Hunachew Mulat (UGR/7368/12)
3. Gizachew Asefa (UGR/2920/12)
4. Gemechis Diriba (UGR/2795/12)
5. Girum Tesfaye (UGR/2443/12)

12th February, 2021
