
Virtual Systems & Services

CPU Virtualization

MAY 22, 2020


USMAN ZAFAR | MITE-F18-015
Superior University Lahore
Virtualization
Introduction to virtualization

Virtualization is the creation of a virtual form of a computing resource, such as a computer, server, or other hardware component, or of a software-based resource such as an operating system. A common example is partitioning a hard disk during OS installation: the physical drive is split into multiple logical disks to provide better data storage and retrieval (a form of storage virtualization).

Types of virtualization

Virtualization is classified based on the resource that is being virtualized. There are various categories, such as:

 Network virtualization
 Server virtualization
 Desktop virtualization
 Hardware virtualization
 Software virtualization
 Storage virtualization

Of these, server virtualization is the most commonly used. Server virtualization involves pooling resources from one or more physical servers and partitioning them into multiple virtual servers. A special software layer called a hypervisor is used for this purpose.

There are various types of hypervisors: type 1 hypervisors ("bare-metal" hypervisors that run directly on the raw hardware) and type 2 hypervisors (hosted hypervisors that run on top of a host operating system). The key players in the type 1 market are VMware, Microsoft, and Citrix. Red Hat's Kernel-based Virtual Machine (KVM) is also widely used; although it is hosted by Linux, it runs inside the kernel and is generally classed as a type 1 hypervisor.
Advantages of virtualization:

 Gain better performance and efficiency from existing computing resources by using techniques such as CPU virtualization.
 Boost virtual machine (VM) security. Since VMs are logically separated from each
other, a malware attack or other software glitch on one VM won't affect other VMs.
 Save money on hardware. Virtualization software costs less, and virtual machines require less hardware to run than physical machines.
 Gain peace of mind. VMs provide better reliability in terms of disaster recovery as
well as better backup and retrieval capabilities.

The challenges of managing a virtual environment

The common challenges in implementing and managing a virtual environment are:

 Discovering new VMs: Discovering and adding new VMs into a network can
become tiresome, especially when you have to add credentials individually for each
VM. This is where automated discovery comes in handy, since you can create
multiple credentials and add all those devices at the same time. Some network
management solutions even support one-click discovery, where once you add the
vCenter or the corresponding hypervisor, all the VMs under it are auto-detected.

 VM sprawl: This happens when the number of VMs in an environment goes beyond
a certain manageable number; VM sprawl can heavily affect the performance of
your virtual devices. Unused VMs take up a lot of the virtualization server’s CPU
and memory, resulting in lag or unresponsiveness in active VMs. VM sprawl can
also open up security loopholes.

 Resource allocation: Allocation of memory and processing power for VMs should
be properly planned beforehand and demands a strong understanding of how your
network is growing. Over-allocated or under-allocated storage space for VMs not
only heavily impacts performance, but also hinders the creation of new VMs in your
environment when you run out of storage. Beyond this, unused VMs and orphaned VM disk files must be constantly monitored and removed so that your network storage is optimized.
 Monitoring VM performance: Most network monitoring software doesn't support VM monitoring, so a separate tool is required for that purpose. This
further complicates the network and might result in devices being left
unmonitored. An integrated VM monitoring solution will go a long way in helping
you get the best performance out of your network.

CPU Virtualization Basics


CPU virtualization emphasizes performance and runs directly on the processor
whenever possible. The underlying physical resources are used whenever possible
and the virtualization layer runs instructions only as needed to make virtual
machines operate as if they were running directly on a physical machine.
CPU virtualization is not the same thing as emulation. ESXi does not use emulation
to run virtual CPUs. With emulation, all operations are run in software by an
emulator. A software emulator allows programs to run on a computer system other
than the one for which they were originally written. The emulator does this by
emulating, or reproducing, the original computer’s behavior by accepting the same
data or inputs and achieving the same results. Emulation provides portability and
runs software designed for one platform across several platforms.
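The software-emulation idea above can be sketched as a tiny interpreter: every guest "instruction" is reproduced in software, which is why emulation is portable but slow. The instruction set below is invented purely for illustration.

```python
# Toy emulator: each guest "instruction" is reproduced in software.
# The two-opcode instruction set here is invented for illustration only.

def emulate(program):
    """Interpret a tiny made-up instruction set and return the registers."""
    regs = {"a": 0, "b": 0}
    for op, *args in program:
        if op == "mov":          # mov reg, value
            regs[args[0]] = args[1]
        elif op == "add":        # add dst, src
            regs[args[0]] += regs[args[1]]
    return regs

# The same program produces the same result on any host platform,
# because every step runs in the emulator rather than on real hardware.
result = emulate([("mov", "a", 2), ("mov", "b", 3), ("add", "a", "b")])
# result["a"] == 5
```

The portability comes at a cost: each guest instruction costs many host instructions, which is exactly the overhead ESXi avoids by running virtual CPUs directly on the processor.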
When CPU resources are overcommitted, the ESXi host time-slices the physical
processors across all virtual machines so each virtual machine runs as if it has its
specified number of virtual processors. When an ESXi host runs multiple virtual
machines, it allocates to each virtual machine a share of the physical resources.
With the default resource allocation settings, all virtual machines associated with
the same host receive an equal share of CPU per virtual CPU. This means that a single-processor virtual machine is assigned only half of the resources of a dual-processor virtual machine.
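The proportional-share arithmetic described above can be sketched in a few lines of Python; the host capacity and vCPU counts are hypothetical numbers, not taken from any real configuration.

```python
# Default-shares model: every vCPU on the host receives an equal slice
# of physical CPU capacity. All numbers below are hypothetical.

def cpu_share(vcpus, all_vm_vcpus, host_capacity_mhz):
    """CPU share (in MHz) for a VM with `vcpus` virtual processors."""
    total_vcpus = sum(all_vm_vcpus)
    return host_capacity_mhz * vcpus / total_vcpus

vms = [1, 2, 1]                              # vCPU counts of three VMs
shares = [cpu_share(v, vms, 8000) for v in vms]
# The single-processor VMs each get half the share of the dual-processor
# VM: shares == [2000.0, 4000.0, 2000.0]
```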

Software-Based CPU Virtualization


With software-based CPU virtualization, the guest application code runs directly
on the processor, while the guest privileged code is translated and the translated
code runs on the processor.
The translated code is slightly larger and usually runs more slowly than the native version. As a result, guest applications with a small privileged code component run at speeds very close to native. Applications with a significant privileged code component, such as those making frequent system calls, traps, or page table updates, can run more slowly in the virtualized environment.
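A toy cost model makes this trade-off concrete. The per-instruction costs below are invented for illustration, not measured values for any real translator.

```python
# Toy cost model of binary translation: application code runs natively,
# privileged code runs as translated code at an assumed penalty.
# Both cost constants are invented for illustration, not measured.

NATIVE_COST = 1.0        # relative cost of a directly executed instruction
TRANSLATED_COST = 1.4    # assumed cost of a translated privileged instruction

def estimated_cost(total_instructions, privileged_fraction):
    """Total relative cost of a workload under software virtualization."""
    direct = total_instructions * (1 - privileged_fraction)
    privileged = total_instructions * privileged_fraction
    return direct * NATIVE_COST + privileged * TRANSLATED_COST

# 1% privileged code: cost barely above native.
light = estimated_cost(1_000_000, 0.01)
# 30% privileged code (syscall- or trap-heavy): a visible slowdown.
heavy = estimated_cost(1_000_000, 0.30)
```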

Hardware-Assisted CPU Virtualization


Certain processors provide hardware assistance for CPU virtualization.
When using this assistance, the guest can use a separate mode of execution called
guest mode. The guest code, whether application code or privileged code, runs in
the guest mode. On certain events, the processor exits out of guest mode and
enters root mode. The hypervisor executes in the root mode, determines the
reason for the exit, takes any required actions, and restarts the guest in guest
mode.
When you use hardware assistance for virtualization, there is no need to translate
the code. As a result, system calls or trap-intensive workloads run very close to
native speed. Some workloads, such as those involving updates to page tables, lead
to a large number of exits from guest mode to root mode. Depending on the
number of such exits and total time spent in exits, hardware-assisted CPU
virtualization can speed up execution significantly.
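The guest-mode/root-mode round trip can be modeled as a dispatch loop. The exit reasons and handler actions below are simplified stand-ins for what a real hypervisor does when the hardware reports an exit code.

```python
# Simplified model of hardware-assisted execution: the guest runs in
# guest mode until an event forces an exit; the hypervisor, in root
# mode, determines the reason, handles it, and resumes the guest.
# Exit reasons and actions here are illustrative, not real VMX codes.

def handle_exits(exit_events):
    """Process a stream of guest-mode exits and record handler actions."""
    actions = []
    for reason in exit_events:
        if reason == "hlt":            # guest halted: stop running it
            actions.append("halt guest")
            break
        elif reason == "io":           # emulate the I/O access, resume
            actions.append("emulate io")
        elif reason == "page_fault":   # fix up mappings, resume guest
            actions.append("map page")
    return actions

handle_exits(["io", "page_fault", "io", "hlt"])
# -> ["emulate io", "map page", "emulate io", "halt guest"]
```

Each round trip has a fixed hardware cost, which is why exit-heavy workloads (such as those updating page tables frequently) benefit less from hardware assistance.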

Virtualization and Processor-Specific Behavior


Although VMware software virtualizes the CPU, the virtual machine detects the
specific model of the processor on which it is running.

Processor models might differ in the CPU features they offer, and applications
running in the virtual machine can make use of these features. Therefore, it is not
possible to use vMotion® to migrate virtual machines between systems running on
processors with different feature sets. You can avoid this restriction, in some cases,
by using Enhanced vMotion Compatibility (EVC) with processors that support this
feature. See the vCenter Server and Host Management documentation for more
information.
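The restriction follows from a simple subset check: a running VM may already depend on any feature its source CPU exposes, so the destination must offer at least that feature set (EVC works by masking hosts down to a common baseline for the same reason). A sketch with made-up feature sets:

```python
# Why migration between different CPU generations is restricted: the
# destination host must expose a superset of the source host's CPU
# features. The flag sets below are illustrative examples.

def can_migrate(source_features, destination_features):
    """Migration is safe only if the destination is a feature superset."""
    return set(source_features) <= set(destination_features)

old_host = {"sse2", "sse4_2"}
new_host = {"sse2", "sse4_2", "avx2"}
# Old to new is safe; new to old could remove a feature the VM is using.
```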

Performance Implications of CPU Virtualization


CPU virtualization adds varying amounts of overhead depending on the workload
and the type of virtualization used.
An application is CPU-bound if it spends most of its time executing instructions
rather than waiting for external events such as user interaction, device input, or
data retrieval. For such applications, the CPU virtualization overhead includes the
additional instructions that must be executed. This overhead takes CPU
processing time that the application itself can use. CPU virtualization overhead
usually translates into a reduction in overall performance.
For applications that are not CPU-bound, CPU virtualization likely translates into
an increase in CPU use. If spare CPU capacity is available to absorb the overhead,
it can still deliver comparable performance in terms of overall throughput.
ESXi supports up to 128 virtual processors (CPUs) for each virtual machine.

Let’s have a look at the virtualization technologies of two famous processor manufacturers.

Intel Virtualization Technology (Intel® VT)


Virtualization abstracts hardware, allowing multiple workloads to share a common set of resources. On shared virtualized hardware, a variety of workloads can co-locate while maintaining full isolation from each other, freely migrate across infrastructures, and scale as needed.
Businesses tend to gain significant capital and operational efficiencies through
virtualization because it leads to improved server utilization and consolidation,
dynamic resource allocation and management, workload isolation, security, and
automation. Virtualization makes on-demand self-provisioning of services and software-defined orchestration of resources possible, scaling anywhere in a hybrid cloud, on-premises or off-premises, to meet specific business needs.
Intel® Virtualization Technology (Intel® VT) represents a growing portfolio of
technologies and features that make virtualization practical by eliminating
performance overheads and improving security. Intel® Virtualization Technology
(Intel® VT) provides hardware assist to the virtualization software, reducing its size,
cost, and complexity. Special attention is also given to reduce the virtualization
overheads occurring in cache, I/O, and memory. Over the last decade or so, a
significant number of hypervisor vendors, solution developers, and users have
been enabled with Intel® Virtualization Technology
(Intel® VT), which is now serving a broad range of customers in the consumer,
enterprise, cloud, communication, technical computing, and many more sectors.
The Intel® Virtualization Technology (Intel® VT) portfolio currently includes (but is not limited to) the following.

Intel CPU virtualization features enable faithful abstraction of the full capabilities of an Intel® CPU to a virtual machine (VM). All software in the VM can run without any performance or compatibility hit, as if it were running natively on a dedicated CPU. Live migration from one Intel® CPU generation to another, as well as nested virtualization, is possible.

Intel® VT for Directed I/O (Intel® VT-d)


In computing, an input/output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct memory access (DMA)-capable I/O bus to the main memory. Like a traditional MMU, which translates CPU-visible virtual addresses to physical addresses, the IOMMU takes care of mapping device-visible virtual addresses (also called device addresses or I/O addresses in this context) to physical addresses. Some units also provide memory protection from misbehaving devices.

Intel® VT-d is a feature integrated into the chipset and therefore not tied to a specific CPU. Before Intel VT-d and hypervisors supporting it, any VM running on top of a VMM saw emulated, or para-virtualized, devices. Figure 4 shows how Intel VT-d works. No matter what type of hardware was physically present in the server, the VM itself sees a virtualized device. So, for example, on VMware vSphere*, you would typically see a VMXNET* network card instead of the real network interface card (NIC) installed on the server. This has both pros and cons:

 Pros: This hides any change between hardware vendors and makes it possible for VMs to migrate easily.
 Cons: Performance takes a hit, whether in CPU utilization, bandwidth, or latency. This is true even if the emulated device is based on a para-virtualized or synthetic driver.
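The IOMMU's job can be modeled as a page-granular lookup table that remaps device-visible addresses and faults on unmapped ones. The page size and addresses below are illustrative.

```python
# Toy IOMMU: a page-granular table remaps device-visible I/O addresses
# to physical addresses and blocks unmapped DMA, which is the memory
# protection property mentioned above. Addresses are illustrative.

PAGE = 4096

class Iommu:
    def __init__(self):
        self.table = {}                    # I/O page -> physical page

    def map(self, iova, phys):
        """Map one device-visible page onto one physical page."""
        self.table[iova // PAGE] = phys // PAGE

    def translate(self, iova):
        """Translate a device address, faulting if it is unmapped."""
        phys_page = self.table.get(iova // PAGE)
        if phys_page is None:
            raise PermissionError("DMA fault: unmapped I/O address")
        return phys_page * PAGE + iova % PAGE

bus = Iommu()
bus.map(0x1000, 0x40000)                   # device page -> physical page
# bus.translate(0x1234) yields 0x40234; translating an unmapped address
# such as 0x9000 raises PermissionError instead of reaching memory.
```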

AMD-V (AMD virtualization)


AMD-V (AMD virtualization) is a set of hardware extensions for the x86 processor architecture. Advanced Micro Devices (AMD) designed the extensions to perform repetitive tasks normally performed by software and to improve resource use and virtual machine (VM) performance.
Early virtualization efforts relied on software emulation to replace hardware
functionality. But software emulation can be a slow and inefficient process.
Because many virtualization tasks were handled through software, VM behavior
and resource control were often poor, resulting in unacceptable VM performance
on the server.
Processors lacked the internal microcode to handle intensive virtualization tasks in
hardware. Both Intel Corp. and AMD addressed this problem by creating processor
extensions that could offload the repetitive and inefficient work from the software.
By handling these tasks through processor extensions, the trapping and emulation of virtualization tasks in the operating system were essentially eliminated, vastly improving VM performance on the physical server.
AMD Virtualization (AMD-V) technology was first announced in 2004 and added to
AMD's Pacifica 64-bit x86 processor designs. By 2006, AMD's Athlon 64 X2 and
Athlon 64 FX processors appeared with AMD-V technology, and today, the
technology is available on Turion 64 X2, second- and third-generation Opteron,
Phenom and Phenom II processors.
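Whether a given processor ships with these extensions can be checked from software. On Linux, AMD-V appears as the svm flag in /proc/cpuinfo (and Intel VT-x as vmx); a minimal, Linux-specific sketch:

```python
# Sketch: detect hardware virtualization support from CPU flag names.
# Linux-specific: /proc/cpuinfo does not exist on other platforms.

def virtualization_support(cpuinfo_text):
    """Return 'Intel VT-x', 'AMD-V', or 'none' based on CPU flags."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return "none"

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print(virtualization_support(f.read()))
    except FileNotFoundError:
        print("cannot read /proc/cpuinfo on this platform")
```

Note that the flag only reports what the silicon offers; the feature must also be enabled in firmware for a hypervisor to use it.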
