Virtualization has made a lot of progress during the last decade, primarily due
to the development of myriad open-source virtual machine hypervisors. This
progress has almost eliminated the barriers between operating systems and
dramatically increased utilization of powerful servers, bringing immediate
benefit to companies. Until recently, the focus has been on software-emulated
virtualization, and the two most common approaches to it are full
virtualization and paravirtualization.
In full virtualization, a layer, commonly called the hypervisor or the virtual
machine monitor, exists between the virtualized operating systems and the
hardware. This layer multiplexes the system resources between competing
operating system instances. Paravirtualization is different in that the
hypervisor operates in a more cooperative fashion, because each guest
operating system is aware that it is running in a virtualized environment, so
each cooperates with the hypervisor to virtualize the underlying hardware.
How do you find out whether your system will run KVM? First, you need a
processor that supports virtualization. For a detailed list, have a look
at wiki.xensource.com/xenwiki/HVM_Compatible_Processors.
Additionally, you can check /proc/cpuinfo: if you see vmx (Intel) or svm
(AMD) in the cpu flags field, your system supports KVM.
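This check is easy to run from the shell; the flag names vmx and svm correspond to Intel VT and AMD-V, respectively:

```shell
# Look for the hardware-virtualization CPU flags:
# vmx = Intel VT, svm = AMD-V
if grep -Eq 'vmx|svm' /proc/cpuinfo; then
    echo "This CPU supports KVM"
else
    echo "No hardware virtualization support found"
fi
```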
Conclusion
With the introduction of KVM into the Linux kernel, future Linux
distributions will have built-in support for virtualization, giving them an edge
over other operating systems. There will be no need for any dual-boot
installation in the future, because all the applications you require could be run
directly from the Linux desktop. KVM is just one more of the many existing
open-source hypervisors, reaffirming that open source has been instrumental
to the progress of virtualization technology.
KVM, however, is not perfect due to its newness; it has some limitations
including the following:
At the time of this writing, KVM supports only Intel and AMD
virtualization, whereas Xen supports IBM PowerPC and Itanium as well.
SMP support for hosts is lacking in the current release.
Performance tuning is still in its early stages.
However, the project is continuing at a rapid pace, and according to Avi Kivity,
KVM already is further ahead than Xen in some areas and surely will catch up
in other areas in the future.
For other distributions, you need to download a kernel of version 2.6.20 or
above. When compiling a custom kernel, select Device Drivers→Virtualization
when configuring the kernel, and enable support for hardware-based
virtualization. You also can get the KVM module, along with the required user-
space utilities,
from sourceforge.net/project/showfiles.php?group_id=180599.
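Once the new kernel is built and booted, the KVM modules must be loaded before any guest can start. A minimal sketch, assuming an Intel processor (substitute kvm-amd on AMD hardware):

```shell
# Load the generic KVM module plus the vendor-specific one
modprobe kvm
modprobe kvm-intel    # or: modprobe kvm-amd

# Verify that the modules are in place
lsmod | grep kvm
```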
Using the compiled kernel with virtualization support enabled, the next step is
to create a disk image for the guest operating system. You do so with qemu-
img, as shown below. Note that the size of the image is 6GB, but using
QEMU's copy-on-write format (qcow), the file will grow as needed, instead of
occupying the full 6GB:
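The qemu-img invocation for such an image looks like the following; vmdisk.img is an assumed filename, and the copy-on-write format is selected with -f (recent QEMU releases also offer the newer qcow2 format):

```shell
# Create a 6GB copy-on-write disk image for the guest;
# the file starts small and grows only as the guest writes data
qemu-img create -f qcow vmdisk.img 6G
```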
With your virtual disk created, load the guest operating system into it. The
following example assumes that the guest operating system is on a CD-ROM.
In addition to populating the virtual disk from the CD-ROM ISO image, you
must boot from the virtual disk once the installation is done:
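A typical installation command, assuming the installer ISO is named guestos.iso (a hypothetical filename), would be:

```shell
# Boot the VM from the CD-ROM image (-boot d) with 384MB of RAM,
# installing onto the copy-on-write disk created earlier
qemu-kvm -m 384 -cdrom guestos.iso -hda vmdisk.img -boot d
```

After the installer finishes, drop the -cdrom and -boot switches to boot the freshly installed system directly from vmdisk.img.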
The I/O in the current release of KVM is handled by QEMU, so let's look at
some important QEMU switches:
-m: memory in terms of megabytes.
-cdrom: the file, ideally an ISO image, acts as a CD-ROM drive to the
VM. If no cdrom switch is specified, the ide1 master acts as the CD-
ROM.
-hda: points to a QEMU copy-on-write image file. To attach more hard
disks, we could specify:
# qemu-kvm -m 384 -hda vmdisk1.img -hdb vmdisk2.img -hdc vmdisk3.img
-boot: allows us to customize the boot options; the d option (-boot d)
boots from the CD-ROM.
The default command starts the guest OS in a subwindow, but you can start in
full-screen mode by passing the following switch:
-full-screen