
Server Virtualization

Related terms:

Operating System, Virtual Machine, Hypervisor, Network Virtualization, Data Center


Server Virtualization
Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Publisher Summary
This chapter discusses how server virtualization happens along with exploring many
of the most common server virtualization products and hypervisors. It explains
that server virtualization is the separation of server computing functions from the
physical resources they reside on and that the purpose of server virtualization is
ultimately to save money by more efficiently utilizing server capacity, saving room
and power in data centers by having fewer machines, and creating a much more
flexible operating environment for the organization. Furthermore, it takes a look at
the growing trends in server virtualization, including storage virtualization, network
virtualization, and workload management. It also provides an understanding of the
differences between server virtualization and desktop virtualization, highlighting
that while the virtualization concepts are the same, servers and desktops have
radically different requirements in both functionality and their overall place in a
larger enterprise environment. Finally, it concludes with a summary of descriptions
of some of the more common virtual servers that are likely to be encountered during
investigations.


Network Architectures and Overlay Networks
Casimer DeCusatis, in Handbook of Fiber Optic Data Communication (Fourth
Edition), 2013

13.6 Virtual network overlays


Server virtualization drives several new data center networking requirements
[17–19]. In addition to the regular requirements of interconnecting physical servers,
network designs for virtualized data centers have to support the following:

• Huge number of endpoints. Today physical hosts can effectively run tens of
virtual machines, each with its own networking requirements. In a few years,
a single physical machine will be able to host 100 or more virtual machines.
• Large number of tenants fully isolated from each other. Scalable multitenancy
support requires a large number of networks that have address space isolation,
management isolation, and configuration independence. Combined with a
large number of endpoints, these factors will make multitenancy at the phys-
ical server level an important requirement in the future.
• Dynamic network and network endpoints. Server virtualization technology
allows for dynamic and automatic creation, deletion, and migration of virtual
machines. Networks must support this function in a transparent fashion,
without imposing restrictions due to, e.g., IP subnet requirements.
• A decoupling of the current tight binding between the networking require-
ments of virtual machines and the underlying physical network.

Rather than treat virtual networks simply as an extension of physical networks, these
requirements can be met by creating virtual overlay networks in a way similar to
creating virtual servers over a physical server: independent of physical infrastruc-
ture characteristics, ideally isolated from each other, dynamic, configurable, and
manageable. Hypervisor-based overlay networks can provide networking services
to virtual servers in a data center. Overlay networks are a method for building
one network on top of another. The major advantage of overlay networks is their
separation from the underlying infrastructure in terms of address spaces, protocols,
and management. Overlay networks allow a tenant to create networks designed to
support specific distributed workloads, without regard to how that network will be
instantiated on the data center’s physical network. In standard TCP (Transmission
Control Protocol)/IP networks, overlays are usually implemented by tunneling. The
overlay network payload is encapsulated within an overlay header and delivered to
the destination by tunneling over the underlying infrastructure.
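The encapsulate-then-tunnel pattern can be sketched in a few lines. This is a purely hypothetical illustration using a bare 4-byte overlay identifier, not any standard header format; real overlays such as VXLAN and NVGRE define richer headers, as described below.

```python
import struct

def encapsulate(payload: bytes, overlay_id: int) -> bytes:
    """Prepend a (hypothetical) 4-byte overlay header; the result is
    carried as ordinary payload across the underlying network."""
    return struct.pack("!I", overlay_id) + payload

def decapsulate(tunneled: bytes) -> tuple[int, bytes]:
    """At the far tunnel endpoint, strip the overlay header and
    recover the original packet for delivery."""
    (overlay_id,) = struct.unpack("!I", tunneled[:4])
    return overlay_id, tunneled[4:]
```

The underlay only ever sees the outer header, which is what gives the overlay its independent address space.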

Overlay networks allow the virtual network to be defined through software and
decouple the virtual network from the limitations of the physical network. Therefore,
the physical network is wired and configured once and the subsequent provisioning
of the virtual networks does not require the physical network to be rewired or re-
configured. Overlay networks hide the MAC addresses of the VMs from the physical
infrastructure, which significantly reduces the size of ternary content-addressable
memory (TCAM) and access control list (ACL) tables. This overlay is transparent to
physical switches external to the server, and is thus compatible with other networking
protocols (including Layer 3 ECMP or Layer 2 TRILL). This allows L3 routing along
with ECMP to be more effectively utilized, reducing the problems of larger broadcast
domains within the data center. As the virtual network is independent of the physical
network topology, these approaches enable the ability to reduce the broadcast
domains within a data center while still retaining the ability to support VM migration.
In other words, where VM migration typically required flat Layer 2 domains, overlay
networking technologies allow segmenting a data center while still supporting VM
migration across the data center and potentially between different data centers.

Many different types of overlays have been proposed, both as industry standards and
vendor proprietary implementations. We will focus on open standards, reviewing
some of the major proposals including NVGRE (Network Virtualization using Gener-
ic Routing Encapsulation), VXLAN (Virtual Extensible LAN), and DOVE (Distributed
Overlay Virtual Ethernet).

VXLAN is an IETF standard proposal [20] for an overlay Layer 2 network over a
Layer 3 network. Each Layer 2 network is identified through a 24-bit value referred
to as VXLAN Network Identifier (VNI). This allows for a large number (16 million)
of Layer 2 networks on a common physical infrastructure that can extend across
subnet boundaries. VMs access the overlay network through VXLAN Tunnel End-
points (VTEPs), which are instantiated in the hypervisor. The VTEPs join a common
multicast group on the physical infrastructure. This scheme discovers the location
of the VTEP of the destination VM by multicasting initial packets over this multicast
group. Subsequent packets are encapsulated using the proposed VXLAN header format,
which includes the VNI to identify the overlay Layer 2 network, and are sent directly
to the destination VTEP, which removes the VXLAN headers and delivers the packets
to the destination VM.
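The 8-byte VXLAN header is simple enough to build directly. The sketch below follows the layout in the VXLAN proposal (a flags byte whose I bit marks a valid VNI, followed by the 24-bit VNI); the function names are ours, not part of any API.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Build an 8-byte VXLAN header: flags byte (I bit 0x08 set to
    indicate a valid VNI), 3 reserved bytes, 24-bit VNI, 1 reserved byte."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!BBBB", 0x08, 0, 0, 0) + struct.pack("!I", vni << 8)

def vtep_encapsulate(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the VXLAN header to the inner Ethernet frame; the outer
    UDP/IP headers are then added by the VTEP's network stack."""
    return vxlan_header(vni) + inner_frame
```

The 24-bit VNI is what allows roughly 16 million isolated Layer 2 networks, compared with the 12-bit VLAN ID's limit of 4094.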

NVGRE is an IETF standard proposal [21], which uses Generic Routing Encapsulation
(GRE), a tunneling protocol defined by IETF RFC 2784 and extended by RFC 2890.
It is similar to VXLAN as it also uses a 24-bit value referred to as a Tenant Network
Identifier (TNI) in the encapsulation header. VMs access the overlay network through
NVGRE endpoints, which are instantiated in the hypervisor. The current proposal
does not describe the method to discover the NVGRE endpoint of the destination
VM and leaves this for the future. NVGRE endpoints encapsulate packets and send
them directly to the destination endpoint. As currently proposed, NVGRE provides
a method for encapsulating and sending packets to a destination across Layer 2 or
Layer 3 networks. NVGRE creates an isolated virtual Layer 2 network that may be
confined to a single physical Layer 2 network or extend across subnet boundaries.
Each TNI is associated with an individual GRE tunnel. Packets sent from a tunnel
endpoint are forwarded to the other endpoints associated with the same TNI via
IP multicast. Use of multicast means that the tunnel can extend across a Layer 3
network, thereby limiting broadcast traffic by splitting a large broadcast domain into
multiple smaller domains. An NVGRE endpoint receives Ethernet packets from a
VM, encapsulates them, and sends them through the GRE tunnel. The endpoint
decapsulates incoming packets, distributing them to the proper VM. To enable
multitenancy and overcome the 4094 VLAN limit, an NVGRE endpoint isolates
individual TNIs by inserting the TNI specifier in the GRE header (NVGRE does not
use the packet sequence number defined by RFC 2890). The current draft of NVGRE
does not describe some details of address assignment or load balancing.
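An NVGRE header can be sketched the same way. The layout below follows the description above: a GRE header with only the Key Present bit set (no checksum, and no sequence number, which NVGRE does not use), protocol type 0x6558 for Transparent Ethernet Bridging, and a key carrying the 24-bit TNI plus an 8-bit flow identifier. The function name is ours.

```python
import struct

def nvgre_header(tni: int, flow_id: int = 0) -> bytes:
    """Build an 8-byte NVGRE header: GRE flags with only the Key
    Present bit set, protocol type 0x6558 (Transparent Ethernet
    Bridging), and a 4-byte key holding the 24-bit TNI and an
    8-bit flow identifier."""
    if not 0 <= tni < 2**24:
        raise ValueError("TNI must fit in 24 bits")
    flags = 0x2000                     # K bit set; C and S bits clear
    proto = 0x6558                     # Transparent Ethernet Bridging
    key = (tni << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", flags, proto, key)
```

Placing the TNI in the GRE key is what isolates tenants beyond the 4094-VLAN limit.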

DOVE, also known by its IETF standard designation Network Virtualization Overlay
version 3 (NVO3) [22], is a Layer 2/3 overlay network that employs packet encap-
sulation to form instances of overlay networks that separate the virtual networks
from the underlying infrastructure and from each other. The separation means
separate address spaces, ensuring that virtual network traffic is seen only by network
endpoints connected to their own virtual network, and allowing different virtual
networks to be managed by different administrators. Upon creation, every DOVE
instance is assigned a unique identifier and all the traffic sent over this overlay
network carries the DOVE instance identifier in the encapsulation header in order
to be delivered to the correct destination virtual machine.
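The isolation guarantee can be pictured as per-instance forwarding state. The class below is a hypothetical sketch (the names are ours, not taken from the NVO3 drafts): traffic tagged with one DOVE instance identifier is delivered only to endpoints attached to that same instance.

```python
from collections import defaultdict

class OverlaySwitch:
    """Hypothetical per-instance membership table for a DOVE-style switch."""

    def __init__(self) -> None:
        self.members: dict[int, set[str]] = defaultdict(set)

    def attach(self, instance_id: int, endpoint: str) -> None:
        """Register a VM endpoint on one virtual network instance."""
        self.members[instance_id].add(endpoint)

    def may_deliver(self, instance_id: int, endpoint: str) -> bool:
        """Traffic carrying instance_id in its encapsulation header is
        only ever delivered to members of that same instance."""
        return endpoint in self.members[instance_id]
```

Because membership is keyed by instance identifier, two tenants can even reuse the same addresses without their traffic ever crossing.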

Figure 13.6 shows DOVE switches residing in data center hosts and providing
network service for hosted virtual machines so that virtual machines are connected
to independent isolated overlay networks. As virtual machine traffic never leaves
physical hosts in a nonencapsulated form, physical network devices are not aware
of virtual machines, their addresses, and their connectivity patterns. Using DOVE,
virtual switches learn the MAC address of their physical host, not the VMs, and route
traffic using IP addressing. In this way, DOVE enables a single MAC address for
each physical server (or dual redundant addresses for high availability), significantly
reducing the size of TCAM and ACL tables. DOVE may be thought of as a multipoint
tunnel for communication between systems, including discovery mechanisms and
provisions for attachment to non-DOVE networks. DOVE networks connect to other
non-DOVE networks through special purpose edge appliances known as DOVE
gateways. The DOVE gateways receive encapsulated packets from DOVE switches in
physical servers, strip the DOVE headers, and forward the packets to the non-DOVE
network using the appropriate network interfaces.



An Introduction to Virtualization
In Virtualization for Security, 2009

Types of Virtualization
Server Virtualization is the most common form of virtualization, and the
original. Managed by the VMM, physical server resources are used to provision
multiple virtual machines, each presented with its own isolated and indepen-
dent hardware set. The top three forms of virtualization are full virtualization,
paravirtualization, and operating system virtualization. An additional
form, called native virtualization, is gaining in popularity and blends the best
of full virtualization and paravirtualization along with hardware acceleration
logic.
Other areas have and continue to experience benefits of virtualization, includ-
ing storage, network, and application technologies.


Terminal Services and XenApp Server Deployment
Tariq Bin Azad, in Securing Citrix Presentation Server in the Enterprise, 2008

Server Virtualization
One of the most exciting new technologies to be introduced to server-based com-
puting in the last few years is server virtualization. Server virtualization allows a
host operating system to provide guest operating systems a completely virtualized
hardware environment. For example, a single dual processor server running Win-
dows Server 2003 as the host operating system could virtualize servers for Windows
servers, Linux servers, or NetWare servers. By completely separating and virtual-
izing the hardware required by the guest operating system, server virtualization
provides many benefits. While things would appear to be easier on the surface as
far as the hardware planning for this environment, special consideration must be
given to guarantee the resources needed by a particular guest operating system.
The Datacenter and Enterprise Editions of Windows Server 2003 provide some of
this functionality with the Resource Manager component CD that ships with the
software. Additional third-party software is available to assist in “controlling” the
virtualized environment; one product in particular called ArmTech is from Aurema
(www.aurema.com). Aurema provides a specific toolset for “fair sharing” of resources,
especially within a virtualized server context.

Server virtualization requires a special application to run on top of the host operating
system. This software provides the management and hardware virtualization for the
guest operating systems. Microsoft produces a relatively new offering known as
Virtual Server 2005. Virtual Server 2005 is based on software created by Connectix
(a company recently purchased by Microsoft) that allowed Macintosh and Windows
users to virtualize x86 architecture operating systems. The biggest player in this
space is definitely VMware. VMware offers a host of products for virtualization
and management thereof, but the product that most closely relates to Microsoft's
Virtual Server 2005 would be VMware GSX Server. VMware has been working on
computer virtualization for quite some time and has developed a suite of products
to aid in deploying and supporting this solution. One of our personal favorites is
VMotion, which allows for uninterrupted transfer of guest operating systems from
one host to another (very powerful stuff indeed!).

Server virtualization is definitely a situation in which scaling up is the way to go. “Big
Steel” is typically needed to see the return on investment from such a consolidation.
The following would be a good list to start with and grow from there:

▪ Eight-way P4 2.8 GHz or faster

▪ 16 GB of RAM (the more the better; hot-add capability would be useful)

▪ Multiple physical network cards (to allow for teaming or assigning to specific
guest operating systems)
▪ RAID 1 for host operating system, separate RAID 5 stripe(s) for the guest
operating systems
▪ Redundant power supplies

This setup would most likely support six or more (depending on applications)
XenApp servers and would be excellent at consolidating the other pieces of the Citrix
Access Suite, such as Web Interface, Secure Gateway, and the Secure Ticket Authority.


Server Virtualization and Networking


Gary Lee, in Cloud Networking, 2014

Abstract
Server virtualization is widely adopted in cloud data center networks, but this
technology adds some additional challenges for cloud networking especially at the
server-network interface. In this chapter, we describe virtual machines (VMs) and
how hypervisors from some leading software vendors establish virtual switches
within the host for local connectivity. Next, we provide an overview of PCI-Express
and describe how single-root IO virtualization can provide an alternative to a vSwitch
in the host. We then provide some details on how the physical and virtual switches
interact and how the industry is establishing edge virtual bridging standards in
order to unify network functionality down into the servers. Finally, we discuss VM
migration and the requirements this poses on the networks.


Information Security Essentials for Information Technology Managers
Albert Caballero, in Computer and Information Security Handbook (Third Edition),
2017

Public Cloud
There are many core ideas and characteristics behind the architecture of the public
cloud, but possibly the most alluring is the ability to create the illusion of infinite
capacity. Whether it is one server or thousands, performance appears the same,
with consistent service levels that are transparent to the end user. This is
accomplished by abstracting the physical infrastructure through virtualization of the
operating system so that applications and services are not locked into any particular
device, location, or hardware. Cloud services are also on demand, which is to say
that you pay only for what you use, which should drastically reduce the cost
of computing for most organizations. Investing in hardware and software that is
underutilized and depreciates quickly is far less appealing than leasing a service
that, with minimal upfront costs, an organization can deploy as an entire infrastructure.

Server, network, storage, and application virtualization are the core components that
most cloud providers specialize in delivering. These different computing resources
make up the bulk of the infrastructure in most organizations, so it is easy to see
the attractiveness of the solution. In the cloud, provisioning these resources is
fully automated, and they scale up and down quickly. To assess and compare the
risk involved in utilizing a provider or service, an organization must understand
how each provider protects and configures each of the major architecture components
of the cloud. Make sure to request that the cloud provider
furnish information regarding the reference architecture in each of the following
areas of their infrastructure:

• Compute: Physical servers, OS, CPU, memory, disk space, etc.

• Network: VLANs, DMZ, segmentation, redundancy, connectivity, etc.

• Storage: LUNs, ports, partitioning, redundancy, failover, etc.

• Virtualization: Hypervisor, geolocation, management, authorization, etc.

• Application: Multitenancy, isolation, load-balancing, authentication, etc.

An important aspect of pulling off this type of elastic and resilient architecture is
commodity hardware. A cloud provider needs to be able to provision more physical
servers, hard drives, memory, network interfaces, and just about any operating
system or server application transparently and efficiently. To be able to do this,
servers and storage need to be provisioned dynamically and they are constantly being
reallocated to and from different customer environments with minimum regard
for the underlying hardware. As long as the service level agreements for up time
are met and the administrative overhead is minimized, the cloud provider does
little to guarantee or disclose what the infrastructure looks like. It is incumbent
upon the subscriber to ask and validate the design characteristics of every cloud
provider they contract services from. There are many characteristics that define a
cloud environment; see Fig. 24.10 for a comprehensive list of cloud design
characteristics.
Figure 24.10. Characteristics of cloud computing.

Most of the key characteristics can be summarized in the list that follows [15].

• On demand: The always-on nature of the cloud allows organizations to
perform self-service administration and maintenance, over the Internet, of
their entire infrastructure without the need to interact with a third party.
• Resource pooling: Cloud environments are usually configured as large pools of
computing resources such as CPU, RAM, and storage from which a customer
can choose to use or leave to be allocated to a different customer.
• Measured service: The cloud brings tremendous cost savings to the end user
due to its pay-as-you-go nature; therefore, it is critical for the provider to be
able to measure the level of service and resources each customer utilizes.
• Network connectivity: The ease with which users can connect to the cloud is
one of the reasons why cloud adoption is so high. Organizations today have a
mobile workforce, which requires connectivity for multiple platforms.
• Elasticity: A vital component of the cloud is that it must be able to scale up as
customers demand it. A subscriber may spin up new resources seasonally or
during a big campaign and bring them down when no longer needed. It is the
degree to which a system can autonomously adapt capacity over time.
• Resiliency: A cloud environment must always be available, as most service
agreements guarantee availability at the expense of the provider if the system
goes down. The cloud is only as good as it is reliable, so it is essential that the
infrastructure be resilient and delivered with availability at its core.
• Multitenancy: A multitenant environment refers to the idea that all tenants
within a cloud should be properly segregated from each other. In many cases
a single instance of software may serve many customers, so for security and
privacy reasons it is critical that the provider takes the time to build in secure
multitenancy from the bottom up. A multitenant environment focuses on the
separation of tenant data in such a way as to take every reasonable measure to
prevent unauthorized access or leakage of resources between tenants.

The most significant cloud security challenges revolve around how and where the
data is stored as well as whose responsibility it is to protect. In a more traditional
IT infrastructure or private cloud environment the responsibility to protect the data
and who owns it is clear. When a decision is made to migrate services and data to
a public cloud environment certain things become unclear and difficult to prove
and define. The most pressing challenges to assess are [3]:

• Data residency: This refers to the physical geographic location where the data
stored in the cloud resides. There are many industries that have regulations
requiring organizations to maintain their customer or patient information
within their country of origin. This is especially prevalent with government
data and medical records. Many cloud providers have data centers in several
countries and may migrate virtual machines or replicate data across disparate
geographic regions causing cloud subscribers to fail compliance checks or
even break the law without knowing it.
• Regulatory compliance: Industries that are required to meet regulatory com-
pliance such as HIPAA or security standards such as those in the PCI typically
have a higher level of accountability and security requirements than those who
do not. These organizations should take special care of what cloud services
they decide to deploy and that the cloud provider can meet or exceed these
compliance requirements. Many cloud providers today can provision part of
their cloud environment with strict HIPAA or PCI standards enforced and
monitored but only if you ask for it, and at an additional cost, of course.
• Data privacy: Maintaining the privacy of users is of high concern for most orga-
nizations. Whether employees, customers, or patients, personally identifiable
information is a high valued target. Many cloud subscribers do not realize that
when they contract a provider to perform a service that they are also agreeing
to allow that provider to gather and share metadata and usage information
about their environment.
• Data ownership: Many cloud services are contracted with stipulations stating
that the cloud provider has permission to copy, reproduce, or retain all data
stored on their infrastructure, in perpetuity. This is not what most subscribers
believe is the case when they migrate their data to the cloud.
• Data protection: This isn't always clear unless it is discussed before engaging
the service. Many providers do have security monitoring available, but in most
cases it is turned off by default or costs significantly more for the same level
of service. A subscriber should always validate that the provider can protect the
company's data just as effectively as, or even more effectively than, the company itself.

If these core challenges with public cloud adoption are not properly evaluated,
some potential security issues could crop up. On the other hand, these
issues can be avoided with proper preparation and due diligence. The type of data
and operations in your unique cloud instance will determine the level of security
required.


Choosing the Right Solution for the Task
In Virtualization for Security, 2009

Paravirtualization
A server virtualization platform attempts to present a generic hardware interface to a
guest virtual machine, whereas a paravirtualized platform requires the guest virtual
machine to use virtualization-specific drivers in order to run within the virtualized
platform. In certain circumstances paravirtualization can offer performance benefits,
but some administrators consider the requirement of specialized drivers within the
operating system as invasive. Examples of paravirtualized platforms include the Xen
and TRANGO hypervisors.

Even though paravirtualization offers some unique advantages, there are many
problems with the technology in general. Before you can use a particular operating
system within a paravirtualized environment, you must first make sure that the
virtualization platform offers drivers for the guest operating system. This means
that the platform may support Linux kernel version 2.4 but it does not necessarily
support kernel version 2.6. It also means that one development group from
company ‘A’ may offer one version of the drivers while open source group ‘B’
offers another. It can become confusing for an administrator to decide which
drivers to use, depending on the pros and cons of each implementation. And those differences can
be very complex. Different operating systems may have vastly different methods for
transferring data to and from hardware devices. This can make paravirtualization
extremely complex and thus prone to errors.


Identifying the Level of Effort and Cost


Tom Laszewski, Prakash Nauduri, in Migrating to the Cloud, 2012

Oracle Virtualization and Oracle Enterprise Linux: Database and Middleware Hardware and Software
Oracle VM is server virtualization software that fully supports both Oracle and
non-Oracle applications. Combined with OEL, Oracle VM provides a single point
of enterprise-class support for your company's entire virtualization environment,
including Oracle Database, Fusion Middleware, and Oracle Applications which are
certified with Oracle VM. It is not a complete, integrated hardware, software, oper-
ating system, and application solution like Exadata and Exalogic, but it does provide
you with an end-to-end cloud solution from one vendor. Oracle VM Server runs on
any x86 machine and does not require an operating system to be installed on the
server (this is known as bare-metal provisioning). Oracle VM Server is managed by
Oracle VM Manager using a Web browser-based console. Oracle provides Oracle
VM Server templates which are preinstalled and preconfigured software images
that are designed to make configuring your virtualized operating system, database,
middleware, or application software stack a fast and easy process.


Welcome to Cloud Networking


Gary Lee, in Cloud Networking, 2014

Virtualization
In cloud data centers, server virtualization can help improve resource utilization and,
therefore, reduce operating costs. You can think of server virtualization as logically
dividing up a physical server into multiple smaller virtual servers, each running its
own operating system. This provides more granular utilization of server resources
across the data center. For example, if a small company wants a cloud service
provider to set up a web hosting service, instead of dedicating an underutilized
physical server, the data center administrator can allocate a virtual machine,
allowing multiple web hosting virtual machines to run on a single physical server.
This saves money for both the hosting data center as well as the consumer. We will
provide more information on server virtualization in Chapter 6.
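The consolidation argument above can be made concrete with a toy placement routine. This first-fit sketch is purely illustrative (no hypervisor exposes exactly this interface): it packs VMs, described here by their CPU demand, onto as few hosts as possible.

```python
def place_vms(vm_cpus: list[int], host_capacity: int) -> list[list[int]]:
    """First-fit placement: assign each VM to the first host with
    enough spare capacity, opening a new host only when none fits."""
    hosts: list[list[int]] = []   # VMs placed on each host
    free: list[int] = []          # remaining capacity per host
    for demand in vm_cpus:
        for i, room in enumerate(free):
            if demand <= room:
                hosts[i].append(demand)
                free[i] -= demand
                break
        else:
            hosts.append([demand])
            free.append(host_capacity - demand)
    return hosts
```

Five small web-hosting VMs that would otherwise idle on five physical servers fit on two: `place_vms([4, 4, 4, 2, 2], 8)` yields `[[4, 4], [4, 2, 2]]`.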

Virtualization is also becoming important in the cloud data center network. New
tunneling protocols can be used at the edge of the network that effectively provide
separate logical networks for services such as public cloud hosting where multiple
corporations may each have hundreds of servers or virtual machines that must
communicate with each other across a shared physical network. For this type of
application, these multitenant data centers must provide virtual networks that are
separate, scalable, flexible, and secure. We will discuss virtual networking in Chapter
7.


How Virtualization Happens


Diane Barrett, Gregory Kipper, in Virtualization and Forensics, 2010

Virtualizing Hardware Platforms


Hardware virtualization, sometimes called platform or server virtualization, is exe-
cuted on a particular hardware platform by host software. Essentially, it hides the
physical hardware. The host software, which is actually a control program, is called
a hypervisor. The hypervisor creates a simulated computer environment for the
guest software that could be anything from user applications to complete OSes.
The guest software performs as if it were running directly on the physical hardware.
However, access to physical resources such as network access and physical ports is
usually managed at a more restrictive level than the processor and memory. Guests
are often restricted from accessing specific peripheral devices. Managing network
connections and external ports such as USB from inside the guest software can be
challenging. Figure 1.4 shows the concept behind virtualizing hardware platforms.
Figure 1.4. Hardware Virtualization Concepts




Copyright © 2018 Elsevier B.V. or its licensors or contributors. ScienceDirect ® is a registered trademark of Elsevier B.V. Terms and conditions apply.
