KVM
(Kernel-based Virtual Machine)
Asst. Prof. NV Mahajan

KVM History
* KVM was initially developed by Qumranet, a small company located in Israel
* Red Hat acquired Qumranet in September 2008, by which time KVM had become more production ready
* KVM has been the default VMM in Red Hat Enterprise Linux (RHEL) since version 5.4, and in Red Hat Enterprise Virtualization for Servers
* Qumranet released the code of KVM to the open source community
* Today, well-known companies such as IBM, Intel and AMD count among the contributors to the project

Introduction
* The goal of KVM development was to build a modern open source hypervisor, exploiting the experience gained from previous-generation hypervisors and the powerful technologies available at the hardware level
* KVM is available as a loadable module for operating systems such as Linux, OpenBSD, FreeBSD, OpenSolaris, Solaris x86 and MS DOS
* Intel or AMD hardware with the x86 architecture is a must for KVM to execute
* KVM simply turns the Linux kernel into a hypervisor when the KVM kernel module is loaded

Linux Features
* Linux has a mature and proven memory manager, including support for NUMA and large-scale systems
* Power management is already mature and field-proven in Linux
* Linux enjoys one of the largest ecosystems of hardware vendors, thanks to the nature of the open source community
* Hardware vendors are able to participate in the development of the Linux kernel, which ensures that the latest hardware features are rapidly adopted in the Linux kernel

KVM and Linux
* KVM is able to inherit key features from the Linux kernel
- Scheduler, memory management with swapping, I/O stacks, power management, host CPU hot-plugging
- Users can reuse the existing Linux process management infrastructure, e.g., top to look at CPU usage, taskset to pin VMs to specific CPUs, and kill to pause or terminate VMs
* Able to use any storage supported by Linux to store VM images
- Local disks with IDE, SCSI and SATA; Network Attached Storage (NAS) including NFS and Samba/CIFS; or SAN with support for iSCSI and Fibre Channel
* Whenever new features are added to Linux, KVM inherits them without additional engineering

KVM
(Kernel-based Virtual Machine)
[Figure: KVM architecture - guests and QEMU code running on top of the kernel's KVM module]
* A full virtualization solution for Linux on x86 hardware

Overview of KVM
* The kvm.ko module exports a device called /dev/kvm, which enables a guest mode for a VM
- It provides the core virtualization infrastructure that turns Linux into a hypervisor
* A processor-specific module, kvm-intel.ko or kvm-amd.ko, supplies the vendor-/technology-specific support (Intel VMX, AMD SVM)
* User space drives the modules through an ioctl() interface

QEMU
* For I/O (disk, network, PCI, USB, serial/parallel ports) and CPU emulation, KVM uses the Quick Emulator (QEMU), a generic and open source machine emulator and virtualizer
* KVM developers maintain a modified QEMU called qemu-kvm
* QEMU interacts with the KVM modules and safely executes instructions from the VM directly on the CPU, without using dynamic translation

KVM modes
* A normal Linux process has two modes of execution: kernel and user
* KVM adds a third mode: guest mode (VT-x non-root mode)
* Communication channels
- KVM to QEMU: through /dev/kvm
- KVM to guest rings 0 and 3: through the VMCS and vmx instructions

[Figure: guest user (ring 3) and guest kernel (ring 0) running above QEMU (host ring 3) and KVM (host ring 0)]

/dev/kvm Device Node
* ioctl() system calls are used by processes running in user mode to communicate with a driver
* KVM exposes a device file called /dev/kvm to applications to make use of the ioctls provided
* QEMU opens the device file (/dev/kvm) and controls the hypervisor through ioctl() calls
- kvm_ioctl(), kvm_vm_ioctl(), kvm_vcpu_ioctl(), kvm_device_ioctl() and so on
* Operations provided by /dev/kvm include (see the sketch below)
- Creation of a new virtual machine
- Allocation of memory to a virtual machine
- Reading and writing virtual CPU registers
- Injecting an interrupt into a virtual CPU
- Running a virtual CPU
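A minimal sketch of exercising these operations through ioctl() on /dev/kvm (error handling omitted; a fuller walkthrough appears on the "To create a new VM" slide):

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdio.h>
#include <sys/ioctl.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);                 /* open the device node */
    int version = ioctl(kvm, KVM_GET_API_VERSION, 0);   /* system-level ioctl */
    printf("KVM API version: %d\n", version);

    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);            /* create a new VM; returns a VM fd */
    printf("VM fd: %d\n", vmfd);
    return 0;
}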
KVM-QEMU

* A QEMU process is started in user mode, and certain emulated devices (keyboard, mouse, display, hard drive and NIC) are virtually attached to the VM
* When a VM performs I/O operations (PIO and MMIO), these are intercepted by KVM and redirected to the QEMU process (see the sketch below)
* QEMU injects interrupts from devices through KVM
* To emulate DMA, QEMU uses threads to do the I/O
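A sketch of how the user-space side sees intercepted I/O: after KVM_RUN returns, the kvm_run structure describes the access so the emulator can satisfy it. The kvm_run field names come from linux/kvm.h, but the device-emulation hooks here are hypothetical placeholders:

#include <linux/kvm.h>
#include <stdint.h>

/* hypothetical device-emulation hooks, assumed for illustration */
extern void device_read(uint64_t addr, void *data, unsigned len);
extern void device_write(uint64_t addr, const void *data, unsigned len);

/* called after KVM_RUN returns with an I/O-related exit */
void handle_io_exit(struct kvm_run *run)
{
    switch (run->exit_reason) {
    case KVM_EXIT_MMIO:   /* guest touched emulated device memory */
        if (run->mmio.is_write)
            device_write(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
        else
            device_read(run->mmio.phys_addr, run->mmio.data, run->mmio.len);
        break;
    case KVM_EXIT_IO:     /* port I/O: data sits at run->io.data_offset */
        /* ... forward to the emulated device, as QEMU does ... */
        break;
    }
}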
To create a new VM

* From the kernel sample code (see the sketch below):
- Open /dev/kvm and check the API version
- Make a KVM_CREATE_VM call to create a VM
- Use mmap to allocate some memory for the VM
- Make a KVM_CREATE_VCPU call to create a vCPU within the VM, and mmap its control area
- Set the FLAGS and CS:IP registers of the vCPU
- Make a KVM_RUN call to execute the vCPU
- Check that the vCPU execution had the expected result, switching on the exit reason (KVM_EXIT_IO, KVM_EXIT_HLT, ...)
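A compact C sketch of this sequence, modeled on the well-known public KVM API example: it maps one page of guest memory at 0x1000 holding two bytes of 16-bit code (out %al,(%dx); hlt), so the only exits to handle are KVM_EXIT_IO and KVM_EXIT_HLT. Error handling is omitted, and KVM_SET_USER_MEMORY_REGION is the current form of the memory-region ioctl:

#include <fcntl.h>
#include <linux/kvm.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    const uint8_t code[] = { 0xee, 0xf4 };   /* out %al,(%dx); hlt */

    int kvm = open("/dev/kvm", O_RDWR);                       /* open and check version */
    if (ioctl(kvm, KVM_GET_API_VERSION, 0) != 12) return 1;

    int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);                  /* create the VM */

    /* allocate one page of guest memory and map it at guest physical 0x1000 */
    void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    memcpy(mem, code, sizeof(code));
    struct kvm_userspace_memory_region region = {
        .slot = 0, .guest_phys_addr = 0x1000,
        .memory_size = 0x1000, .userspace_addr = (uintptr_t)mem,
    };
    ioctl(vmfd, KVM_SET_USER_MEMORY_REGION, &region);

    int vcpufd = ioctl(vmfd, KVM_CREATE_VCPU, 0);             /* create a vCPU ... */
    int map_size = ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, 0);
    struct kvm_run *run = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                               MAP_SHARED, vcpufd, 0);        /* ... and its control area */

    struct kvm_sregs sregs;                                   /* set FLAGS and CS:IP */
    ioctl(vcpufd, KVM_GET_SREGS, &sregs);
    sregs.cs.base = 0; sregs.cs.selector = 0;
    ioctl(vcpufd, KVM_SET_SREGS, &sregs);
    struct kvm_regs regs;
    memset(&regs, 0, sizeof(regs));
    regs.rip = 0x1000;                                        /* entry point */
    regs.rax = 'A'; regs.rdx = 0x3f8;                         /* byte and port to write */
    regs.rflags = 0x2;                                        /* bit 1 must stay set */
    ioctl(vcpufd, KVM_SET_REGS, &regs);

    for (;;) {
        ioctl(vcpufd, KVM_RUN, 0);                            /* run until the next exit */
        switch (run->exit_reason) {
        case KVM_EXIT_IO:                                     /* emulate the port write */
            putchar(*((char *)run + run->io.data_offset));
            break;
        case KVM_EXIT_HLT:                                    /* guest halted: done */
            return 0;
        default:
            return 1;
        }
    }
}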
CPU Virtualization

* A vCPU is implemented as a Linux thread; each thread has its own guest mode (see the sketch below)
* The Linux scheduler is responsible for scheduling a vCPU, since it is a normal thread

[Figure: vCPU threads inside each QEMU process, scheduled by the host onto physical CPUs]
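The point that each vCPU is just a host thread can be sketched as one pthread per vCPU fd, each looping on KVM_RUN; the fds are assumed to have been created with KVM_CREATE_VCPU as shown earlier, and exit handling is elided:

#include <linux/kvm.h>
#include <pthread.h>
#include <sys/ioctl.h>

static void *vcpu_thread(void *arg)
{
    int vcpufd = *(int *)arg;
    for (;;) {
        ioctl(vcpufd, KVM_RUN, 0);  /* enter guest mode on this thread */
        /* ... handle the exit reason as in the earlier sketch ... */
    }
    return NULL;
}

/* launch one host thread per vCPU; the Linux scheduler places them
 * on physical CPUs like any other threads */
void start_vcpus(int *vcpufds, int n)
{
    for (int i = 0; i < n; i++) {
        pthread_t tid;
        pthread_create(&tid, NULL, vcpu_thread, &vcpufds[i]);
    }
}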
KVM API

* Three sets of ioctls make up the KVM API
- System ioctls: query and set global attributes, which affect the whole KVM subsystem. A system ioctl is used to create VMs
- VM ioctls: query and set attributes that affect an entire virtual machine. A VM ioctl is used to create vCPUs
- vCPU ioctls: query and set attributes that control the operation of a single virtual CPU
/* ioctls for /dev/kvm fds */
#define KVM_GET_API_VERSION   _IO(KVMIO, 0x00)
#define KVM_CREATE_VM         _IO(KVMIO, 0x01)

/* ioctls for VM fds */
#define KVM_SET_MEMORY_REGION _IOW(KVMIO, 0x40, struct kvm_memory_region)
#define KVM_CREATE_VCPU       _IO(KVMIO, 0x41)

/* ioctls for vcpu fds */
#define KVM_RUN               _IO(KVMIO, 0x80)
#define KVM_GET_REGS          _IOR(KVMIO, 0x81, struct kvm_regs)
#define KVM_SET_REGS          _IOW(KVMIO, 0x82, struct kvm_regs)

VM Execution
[Figure: VM execution loop crossing user space, kernel and guest modes]

KVM-QEMU Execution Flow
1. VM entry is done when the guest starts execution
2. When a sensitive instruction has to be executed, a VM exit occurs
3. If QEMU intervention is needed to execute I/O or any other task, control is transferred to QEMU
- On completion, QEMU calls ioctl and requests KVM to continue guest processing; the flow returns to step 1

[Figure: qemu in user space driving the kvm module in the Linux kernel (hypervisor) through /dev/kvm, with VM entry and VM exit transitions]

Management Tools - Libvirt
* A library/daemon/tool set to manage qemu-kvm
- General VM life-cycle management
- VM properties (number of CPUs, memory size, I/O configuration) are defined in separate XML files
- Supports multiple hypervisors: KVM, QEMU, Xen, VirtualBox, LXC, ...
- CPU pinning
- Manages devices, networks, snapshots, storage pools, ...; used by oVirt, OpenStack, etc.
* The libvirt process is daemonized and is called libvirtd
* Client tools: virsh, virt-manager
Libvirt

* virsh - command line interface
* virt-manager - graphical user interface
* A virsh or virt-manager client asks libvirtd to start talking to the hypervisor (a C-level sketch follows below)
* A separate qemu-kvm process is launched for each VM by libvirtd at the request of virsh or virt-manager
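As a sketch of the C API that virsh and virt-manager are built on, the snippet below connects to the local qemu-kvm hypervisor through libvirtd and queries a domain; the domain name "myvm" is only an example (build with -lvirt):

#include <libvirt/libvirt.h>
#include <stdio.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");  /* talks to libvirtd */
    if (!conn) return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "myvm"); /* example VM name */
    if (dom) {
        virDomainInfo info;
        virDomainGetInfo(dom, &info);                       /* state, memory, vCPUs */
        printf("vCPUs: %hu, max memory: %lu KiB\n", info.nrVirtCpu, info.maxMem);
        virDomainFree(dom);
    }
    virConnectClose(conn);
    return 0;
}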
Live Migration with KVM

* Provides the ability to move a running VM between physical hosts with no interruption to service
- Transparent to the end user: the VM remains powered on, network connections remain active, and user applications continue to run while the VM is relocated to a new physical host
* KVM also supports saving a virtual machine's current state to disk, to allow it to be stored and resumed at a later time
* Precopy live migration
- Copy VM memory before switching the execution host

Live Migration Requirements
* The guest must be installed on shared networked storage using one of the following protocols: Fibre Channel, iSCSI, NFS or GFS2
* Shared storage must be mounted at the same location on the source and destination systems; the mounted directory name must be identical
* Both systems must have the appropriate ports open
* Both systems must have identical network configurations
- All bridging and network configurations must be exactly the same on both hosts

Live Migration Steps (Precopy)
* A migration request arrives (migrate the VM from A to B)
* Transfer memory
- First transfer all memory pages (first iteration)
- In every following iteration, transfer the pages dirtied during the previous iteration, until convergence
* Stop the VM at A
* Transfer the VM state to B
- Copy CPU registers, device states and the dirty pages from the last iteration
* Continue the VM at B
- Send (broadcast) an Ethernet packet to announce the new location (the full loop is sketched below)
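A conceptual C sketch of the precopy loop; every helper function here is a hypothetical placeholder for illustration, not an actual QEMU/KVM interface:

#include <stdbool.h>
#include <stddef.h>

/* hypothetical helpers, assumed for illustration only */
extern size_t send_all_pages(void);          /* iteration 1: copy every page */
extern size_t send_dirty_pages(void);        /* copy pages dirtied since the last round */
extern bool   converged(size_t dirty_left);  /* dirty set small enough to stop? */
extern void   stop_vm_at_source(void);
extern void   send_cpu_and_device_state(void);
extern void   resume_vm_at_destination(void);
extern void   announce_new_location(void);   /* e.g., broadcast an Ethernet frame */

void precopy_migrate(void)
{
    size_t dirty = send_all_pages();          /* first iteration: all memory */
    while (!converged(dirty))                 /* next iterations: dirty pages only */
        dirty = send_dirty_pages();

    stop_vm_at_source();                      /* brief downtime begins */
    send_dirty_pages();                       /* final dirty pages */
    send_cpu_and_device_state();              /* CPU registers, device state */
    resume_vm_at_destination();               /* downtime ends */
    announce_new_location();
}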
Live Migration Limitation

* Only possible with CPUs from the same vendor (that is, Intel to Intel or AMD to AMD only)
* The No eXecute (NX) bit must be set the same way (on or off) on both CPUs for live migration

Live Migration with Tools

* Live KVM migration with virsh
- # virsh migrate --live GuestName DestinationURL
  (e.g., virsh migrate --live myvm qemu+ssh://host2.example.com/system, with a hypothetical guest and destination)
* Migrating with virt-manager

KVM - VM Templates
* A VM template is a pre-configured operating system image that can be used to quickly deploy VMs
* Using templates, many repetitive installation and configuration tasks are avoided
* The virt-clone utility, available with virt-manager, creates templates

VM Snapshots
* A VM snapshot is a file-based representation of the system state at a particular point in time
* The snapshot includes configuration and disk data
* By taking a snapshot of a VM, its state is preserved, and the VM can easily be reverted to it in the future if needed
* Used to save a VM's state before a potentially destructive operation
* libvirt supports taking live snapshots
* Snapshots can be created using the virsh command and virt-manager, or directly through the libvirt API as sketched below
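A hedged sketch of creating a live snapshot through the libvirt C API (roughly what virsh snapshot-create does); the domain and snapshot names are examples only:

#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom = virDomainLookupByName(conn, "myvm");  /* example name */

    /* snapshot description: a name plus optional metadata */
    const char *xml =
        "<domainsnapshot>"
        "  <name>before-upgrade</name>"
        "  <description>state prior to a risky change</description>"
        "</domainsnapshot>";

    virDomainSnapshotPtr snap = virDomainSnapshotCreateXML(dom, xml, 0);
    if (snap)
        virDomainSnapshotFree(snap);  /* virDomainRevertToSnapshot() would restore it */

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}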
Thank You

KVM Features

* Each VM is a process, and hence leverages the standard Linux security model to provide isolation and resource control
* Supports hybrid virtualization by using para-virtualized device drivers for an optimized I/O interface
* Inherits the performance and scalability of Linux, supporting VMs with up to 16 vCPUs and 256 GB RAM at 95%-135% performance relative to bare metal
* The modern Linux scheduler accrues further enhancements such as the Completely Fair Scheduler (CFS), control groups, network namespaces and real-time extensions, for obtaining QoS, service levels and accounting for VMs
* Offers lower latency for applications in VMs and a higher degree of determinism, which is important for mission-critical enterprise workloads

KVM Hypervisors: Virtualization Management S/W
* QEMU (Quick Emulator): virtualizes I/O to provide hardware such as hard disks, CD drives or network cards to the VMs

[Figure: KVM server stack - BIOS, kernel with virtualization interface, and guest VMs on the host, scaling as the workload increases]

KVM execution model
* User mode:
- The ioctl() system call allows user space to execute several operations: create new virtual machines, assign memory to a virtual machine, and assign and start virtual CPUs
* Kernel mode:
- If the processor exits the guest due to pending memory or I/O operations, the kernel performs the necessary tasks and resumes the flow of execution
- If external events such as signals or I/O operations initiated by the guest exist, it exits to user mode
* Guest mode:
- The extended instruction set of a virtualization-capable CPU is used to execute the native code

KVM Hypervisors: Virtualization Management S/W
* Libvirt: a toolkit to interact with the virtualization capabilities of recent versions of Linux (and other OSes). This is a building block for KVM.
* virsh (Virtualization Shell): a shell for managing hypervisors and VMs directly from the host OS terminal.
* Cloning: replicating a VM's state/data so that a new OS installation is not needed. After cloning, the cloned machine can be used like a normal VM.

Working
[Figure: qemu-kvm running in Linux user space on top of the Linux kernel]

* 1) How do you know whether your machine supports BIOS Virtualization Technology (VT)?
* 2) What is the configuration needed on a host machine for KVM installation?
* 3) What is the specification of the guest machine which you have created?
* 4) Is it possible to run a 32-bit KVM guest on a 64-bit host, or vice versa? Comment.
* 5) What is the result of kill -9 -1 in your KVM guest machine?
* 6) How do you access the LAN and USB port of your host machine from the guest machine?
* 7) What is the role of QEMU in virtualization? Differentiate it from KVM.

* This refers to the standard check egrep -c '(vmx|svm)' /proc/cpuinfo: if the output is 0, then KVM won't work, as the system supports neither VMX nor SVM hardware virtualization. If the output is 1 or greater, your system is all set and ready to go for KVM installation.

Installing KVM on Ubuntu 14.04