VIRTUALIZATION: COMPARISON OF WINDOWS AND LINUX

Ms. Pooja Sharma, Lecturer (I.T.), PCE, Jaipur, Email: er9.pooja@gmail.com
Charnaksh Jain, IV yr (I.T.), PCE, Jaipur, Email: charnaksh@gmail.com

Abstract
Virtualization as a concept is not new; computational environment virtualization has been around since the first mainframe systems. But recently, the term "virtualization" has become ubiquitous, representing any type of process obfuscation where a process is somehow removed from its physical operating environment. Because of this ambiguity, virtualization can be applied to almost any part of an IT infrastructure. For example, mobile device emulators are a form of virtualization because the hardware platform normally required to run the mobile operating system has been emulated, removing the OS's binding to the hardware it was written for. But this is just one example of one type of virtualization; many definitions of the term "virtualization" circulate in the current lexicon, and all (or at least most) of them are correct, which can be quite confusing. This paper focuses on virtualization as it pertains to the data center, but before considering any type of data center virtualization, it is important to define what technology or category of service you are trying to virtualize. Generally speaking, virtualization falls into three categories: operating system, storage, and applications. These categories, however, are very broad and do not adequately delineate the key aspects of data center virtualization.

Keywords
Full-Virtualization, Para-Virtualization, Hypervisor (Hyper-V), Guest Operating System, Host Operating System.

1. Introduction
Virtualization provides a set of tools for increasing flexibility and lowering costs, things that are important in every enterprise and Information Technology organization. Virtualization solutions are becoming increasingly available and rich in features. Since virtualization can provide significant benefits to your organization in multiple areas, you should be establishing pilots, developing expertise and putting virtualization technology to work now. In essence, virtualization increases flexibility by decoupling an operating system and the services and applications supported by that system from a specific physical hardware platform. It allows the establishment of multiple virtual environments on a shared hardware platform. Organizations looking to innovate find that the ability to create new systems and services without installing additional hardware (and to quickly tear down those systems and services when they are no longer needed) can be a significant boost to innovation. Virtualization can also excel at supporting innovation through the use of virtual environments for training and learning. These services are ideal applications for virtualization technology. A student can start course work with a known, standard system environment.


Class work can be isolated from the production network. Learners can establish unique software environments without demanding exclusive use of hardware resources. As the capabilities of virtual environments continue to grow, we're likely to see increasing use of virtualization to enable portable environments tailored to the needs of a specific user. These environments can be moved dynamically to an accessible or local processing environment, regardless of where the user is located. The user's virtual environments can be stored on the network or carried on a portable memory device.

A related concept is the Appliance Operating System, an application-package-oriented operating system designed to run in a virtual environment. The package approach can yield lower development and support costs as well as ensuring the application runs in a known, secure environment. An Appliance Operating System solution provides benefits to both application developers and the consumers of those applications.

Virtualization can also be used to lower costs. One obvious benefit comes from the consolidation of servers into a smaller set of more powerful hardware platforms running a collection of virtual environments. Not only can costs be reduced by reducing the amount of hardware and the amount of unused capacity, but application performance can actually improve, since the virtual guests execute on more powerful hardware. Further benefits include the ability to add hardware capacity in a non-disruptive manner and to dynamically migrate workloads to available resources. Depending on the needs of your organization, it may be possible to create a virtual environment for disaster recovery. Introducing virtualization can significantly reduce the need to replicate identical hardware environments and can also enable testing of disaster scenarios at lower cost. Virtualization provides an excellent solution for addressing peak or seasonal workloads.
Cost savings from server consolidation can be compelling. If you aren't exploiting virtualization for this purpose, you should start a program now. As you gain experience with virtualization, explore the benefits of workload balancing and virtualized disaster recovery environments. Regardless of the specific needs of your enterprise, you should be investigating virtualization as part of your system and application portfolio, as the technology is likely to become pervasive. We expect operating system vendors to include virtualization as a standard component, hardware vendors to build virtual capabilities into their platforms, and virtualization vendors to expand the scope of their offerings.

2. System Architecture
A functional Red Hat Virtualization system is multilayered and is driven by the privileged Red Hat Virtualization component. Red Hat Virtualization can host multiple guest operating systems, each of which runs in its own domain; Red Hat Virtualization schedules virtual CPUs within the virtual machines to make the best use of the available physical CPUs. Each guest operating system handles its own applications and schedules them accordingly.
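The placement idea behind vCPU scheduling can be sketched in miniature. This is not Xen's or Red Hat's actual scheduler; it is only a toy round-robin placement of guest vCPUs onto physical CPUs, with all names illustrative:

```python
# Minimal sketch (not the real hypervisor scheduler): assign each
# guest's virtual CPUs to physical CPUs round-robin, the simplest way
# to spread many vCPUs across a smaller set of pCPUs.

def assign_vcpus(guests, num_pcpus):
    """guests: dict of guest name -> number of vCPUs.
    Returns dict of guest name -> list of pCPU indices."""
    placement = {}
    next_pcpu = 0
    for name, vcpu_count in guests.items():
        pcpus = []
        for _ in range(vcpu_count):
            pcpus.append(next_pcpu % num_pcpus)  # wrap around the pCPUs
            next_pcpu += 1
        placement[name] = pcpus
    return placement

# Example: three guests with five vCPUs total sharing 4 physical CPUs.
print(assign_vcpus({"guest1": 2, "guest2": 2, "guest3": 1}, 4))
```

A real scheduler would also rebalance continuously according to load; this static placement only illustrates the many-to-few mapping.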

You can deploy Red Hat Virtualization in one of two modes: full virtualization or para-virtualization. Full virtualization provides total abstraction of the underlying physical system and creates a new virtual system in which the guest operating systems can run. No modifications are needed in the guest OS or application; the guest is not aware of the virtualized environment and runs normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines (these guest operating systems are aware that they are running on a virtual machine) and provides near-native performance. You can deploy both para-virtualization and full virtualization across your virtualization infrastructure.

The first domain, known as domain0 (dom0), is automatically created when you boot the system. Domain0 is the privileged guest and possesses management capabilities: it can create new domains and manage their virtual devices. Domain0 handles the physical hardware, such as network cards and hard disk controllers, and also handles administrative tasks such as suspending, resuming, or migrating guest domains to other virtual machines. The hypervisor (Red Hat's virtual machine monitor) is a virtualization platform that allows multiple operating systems to run on a single host simultaneously within a full virtualization environment. A guest is an operating system (OS) that runs on a virtual machine in addition to the host or main OS.

You can configure each guest with a number of virtual CPUs (called vCPUs). The Virtual Machine Manager schedules the vCPUs according to the workload on the physical CPUs. You can grant a guest any number of virtual disks; the guest sees these as either hard disks or (for full virtual guests) as CD-ROM drives. Each virtual disk is served to the guest from a block device or from a regular file on the host. The device on the host contains the entire full disk image for the guest, and usually includes partition tables, multiple partitions, and potentially LVM physical volumes.

Virtual networking interfaces, such as virtual network interface cards (VNICs), run on the guest. These network interfaces are configured with a persistent virtual media access control (MAC) address. The default installation of a new guest installs the VNIC with a MAC address selected at random from a reserved pool of over 16 million addresses, so it is unlikely that any two guests will receive the same MAC address. Complex sites with a large number of guests can allocate MAC addresses manually to ensure that they remain unique on the network.

Each guest has a virtual text console that connects to the host; you can redirect guest logins and console output to it. You can also configure any guest, full virtual or para-virtualized, to use a virtual graphical console that corresponds to the normal video console on the physical host. It employs the features of a standard graphics adapter, such as boot messaging, graphical booting, and multiple virtual terminals, and can launch the X Window System. The graphical console can also be used to configure the virtual keyboard and mouse.
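The random MAC allocation described above can be sketched directly. The 00:16:3e prefix is the OUI reserved for Xen guests; the three remaining bytes give 2**24 (about 16.7 million) possible addresses, which matches the "over 16 million" pool:

```python
import random

# Sketch of random guest MAC allocation from a reserved pool.
# 00:16:3e is the OUI reserved for Xen guest interfaces; only the
# last three bytes are randomized.

XEN_OUI = (0x00, 0x16, 0x3E)

def random_guest_mac(rng=random):
    tail = tuple(rng.randint(0, 255) for _ in range(3))
    return ":".join("%02x" % b for b in XEN_OUI + tail)

print(random_guest_mac())  # e.g. 00:16:3e:5a:91:07
```

With roughly 16.7 million addresses, collisions among a handful of guests are unlikely but not impossible, which is why large sites assign MACs manually.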

With Red Hat Virtualization, each guest's memory comes from a slice of the host's physical memory. For para-virtualized guests, you can set both the initial memory and the maximum size of the virtual machine. You can add (or remove) physical memory from the virtual machine at runtime without exceeding the maximum size you specify. This process is called ballooning.
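A toy model makes the ballooning constraint concrete: the current allocation can move at runtime, but never above the maximum fixed at creation. The class and its floor value are illustrative, not a real hypervisor API:

```python
# Toy model of memory ballooning: a guest's current allocation is
# adjusted at runtime but clamped between a floor and the maximum
# set when the guest was created.

class GuestMemory:
    def __init__(self, initial_mb, max_mb, floor_mb=64):
        assert floor_mb <= initial_mb <= max_mb
        self.max_mb = max_mb
        self.floor_mb = floor_mb
        self.current_mb = initial_mb

    def balloon(self, target_mb):
        """Clamp the requested target into the [floor, max] range."""
        self.current_mb = max(self.floor_mb, min(target_mb, self.max_mb))
        return self.current_mb

g = GuestMemory(initial_mb=512, max_mb=1024)
print(g.balloon(2048))  # capped at the maximum: 1024
print(g.balloon(768))   # allowed: 768
```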

Guests can be identified by any of three identifiers: domain name (domain-name), domain identity (domain-id), or UUID. The domain-name is a text string that corresponds to a guest configuration file; it is used to launch the guest, and when the guest runs the same name is used to identify and control it. The domain-id is a unique, non-persistent number that is assigned to an active domain and is used to identify and control it. The UUID is a persistent, unique identifier that is kept in the guest's configuration file and ensures that the guest is identified over time by system management tools. It is visible to the guest when it runs. A new UUID is automatically assigned to each guest by the system tools when the guest is first installed.
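The persistent/non-persistent split among these identifiers can be sketched with a small registry. The class and method names are illustrative; only the behavior (UUID fixed at definition time, a fresh domain-id on every start) mirrors the description above:

```python
import itertools
import uuid

# Sketch of the three guest identifiers: a persistent domain name and
# UUID fixed in the configuration, and a non-persistent domain-id
# handed out fresh each time the guest starts.

class GuestRegistry:
    def __init__(self):
        self._next_id = itertools.count(1)  # id 0 is reserved for dom0
        self.guests = {}

    def define(self, name):
        """Create the persistent identity (name + UUID)."""
        self.guests[name] = {"uuid": str(uuid.uuid4()), "domain_id": None}

    def start(self, name):
        """Assign a fresh, non-persistent domain-id on each boot."""
        self.guests[name]["domain_id"] = next(self._next_id)
        return self.guests[name]["domain_id"]

reg = GuestRegistry()
reg.define("webserver")
first_boot = reg.start("webserver")
second_boot = reg.start("webserver")
# The UUID stays the same across boots; the domain-id does not.
print(first_boot, second_boot)
```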

3.1.2 Intel Virtualization Technology for x86 (Intel VT-x)
Previously codenamed "Vanderpool", VT-x is Intel's technology for virtualization on the x86 platform. Intel plans to add Extended Page Tables (EPT), a technology for page table virtualization, in the upcoming Nehalem architecture. The following modern Intel processors include support for VT-x:
• Pentium 4 models 662 and 672
• Pentium Extreme Edition 955 and 965 (not Pentium 4 Extreme Edition with HT)
• Pentium D 920-960, except 945, 935, 925, and 915
• some models of the Core processor family
• some models of the Core 2 processor family
• Xeon 3000 series
• Xeon 5000 series
• Xeon 7000 series
• some models of the Atom processor family

3. Implementation
Intel and AMD have independently developed virtualization extensions to the x86 architecture. They are not directly compatible with each other, but serve largely the same functions. Either will allow a virtual machine hypervisor to run an unmodified guest operating system without incurring significant emulation performance penalties.
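On Linux, the presence of these extensions is advertised as CPU flags in /proc/cpuinfo: "vmx" for Intel VT-x and "svm" for AMD-V. A quick check, written over an already-read flags line so it can be exercised without real hardware:

```python
# Detect hardware virtualization support from a /proc/cpuinfo dump:
# the "vmx" flag indicates Intel VT-x, "svm" indicates AMD-V.

def virtualization_support(cpuinfo_text):
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    if "vmx" in flags:
        return "Intel VT-x"
    if "svm" in flags:
        return "AMD-V"
    return None

sample = "flags\t\t: fpu vme de pse msr pae vmx sse2"
print(virtualization_support(sample))  # Intel VT-x
```

On a real system you would pass in `open("/proc/cpuinfo").read()`; note the flag can also be disabled in firmware even when the silicon supports it.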

3.1 Hardware Implementation

3.1.1 AMD Virtualization (AMD-V)
AMD's virtualization extensions to the 64-bit x86 architecture are named AMD Virtualization, abbreviated AMD-V; they are still sometimes referred to by "Pacifica", the AMD internal project code name. AMD-V is present in AMD Athlon 64 and Athlon 64 X2 with family "F" or "G" on socket AM2 (not 939), Turion 64 X2, second- and third-generation Opteron, Phenom, and all newer processors. Sempron processors do not include support for AMD-V. On May 23, 2006, AMD released the Athlon 64 ("Orleans"), the Athlon 64 X2 ("Windsor") and the Athlon 64 FX ("Windsor") as the first AMD processors to support AMD-V; prior processors do not have AMD-V. AMD has also published a specification for a technology named IO Memory Management Unit (IOMMU) to complement AMD-V. This provides a way of configuring interrupt delivery to individual virtual machines and an IO memory translation unit for preventing a virtual machine from using DMA to break isolation. The IOMMU also plays an important role in advanced operating systems (absent virtualization) and the AMD Torrenza architecture.

Neither Intel Celeron nor Pentium Dual-Core nor Pentium M processors have VT technology.

3.2 Software Implementation

Virtual Server 2005 R2
One way to support multiple virtual machines on a single physical machine is to run virtualization software largely on top of the operating system. Writing this software is challenging, especially for older processors that don't provide built-in support for hardware virtualization. Yet it's a viable solution, one that's proven quite successful in practice. One example of this success is Virtual Server 2005 R2, a freely available technology for Windows Server 2003. Figure 5 illustrates how Virtual Server supports multiple virtual machines on a single physical machine. Whatever guest operating systems are running, all of them require storage. To allow this, Microsoft has defined a virtual hard disk (VHD) format. A VHD is really just a file, but to a virtual machine, it appears to be an attached disk drive. Guest operating systems and their applications rely on one or more VHDs for storage. In fact, all of Microsoft's hardware virtualization technologies use the same VHD format, making it easier to move information among them.
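The "really just a file" point can be made concrete. In a fixed-size VHD, the raw disk image is followed by a 512-byte footer whose first eight bytes are the ASCII cookie "conectix" (per Microsoft's published VHD specification); a minimal sniff test for that layout:

```python
import os
import tempfile

# Check whether a file ends with a VHD-style footer: the last 512
# bytes of a fixed-size VHD begin with the "conectix" cookie.

def looks_like_fixed_vhd(path):
    if os.path.getsize(path) < 512:
        return False
    with open(path, "rb") as f:
        f.seek(-512, os.SEEK_END)
        return f.read(8) == b"conectix"

# Demo with a fake image: 4 KB of zeroed "disk" plus a footer stub
# (a real footer also carries geometry, timestamps, and a checksum).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\x00" * 4096)
    f.write(b"conectix" + b"\x00" * 504)

print(looks_like_fixed_vhd(f.name))  # True
```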

Operating system-level virtualization also does not require hardware assistance to perform efficiently.

Virtual PC 2007
The most commercially important aspect of hardware virtualization today is the ability to consolidate workloads from multiple physical servers onto one machine. Yet it can also be useful to run guest operating systems on a desktop machine. Virtual PC is architecturally much like Virtual Server. Virtual Server is significantly more scalable than Virtual PC, and it supports a wider array of storage options. Virtual Server also includes administrative tools that target professional IT staff, while Virtual PC is designed to be managed by users. While Virtual PC does provide a few things that are lacking in Virtual Server, such as sound card support, it's fair to think of it as offering a simpler approach to hardware virtualization for desktop users.

Flexibility
Operating system-level virtualization is not as flexible as other virtualization approaches, since it cannot host a guest operating system different from the host one, or a different guest kernel. For example, with Linux, different distributions are fine, but other operating systems such as Windows cannot be hosted. This limitation is partially overcome in Solaris Containers by the branded zones feature, which provides the ability to run an environment within a container that emulates a Linux 2.4-based release or an older Solaris release.

Storage
Some operating-system virtualizers provide file-level copy-on-write mechanisms. (Most commonly, a standard file system is shared between partitions, and partitions which change the files automatically create their own copies.) This is easier to back up, more space-efficient and simpler to cache than the block-level copy-on-write schemes common on whole-system virtualizers. Whole-system virtualizers, however, can work with non-native file systems and create and roll back snapshots of the entire system state.
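The file-level copy-on-write scheme described above can be sketched directly: partitions share a read-only base tree, and a file is copied into a partition's private overlay only when that partition first modifies it. Class and method names here are illustrative:

```python
import os
import shutil
import tempfile

# File-level copy-on-write sketch: reads fall through to a shared
# base directory until the partition makes its first modification,
# at which point the file is copied into a private overlay.

class CowPartition:
    def __init__(self, base_dir, overlay_dir):
        self.base = base_dir
        self.overlay = overlay_dir

    def _paths(self, name):
        return os.path.join(self.base, name), os.path.join(self.overlay, name)

    def read(self, name):
        base_path, overlay_path = self._paths(name)
        path = overlay_path if os.path.exists(overlay_path) else base_path
        with open(path) as f:
            return f.read()

    def append(self, name, data):
        base_path, overlay_path = self._paths(name)
        if not os.path.exists(overlay_path):   # first write: private copy
            shutil.copy(base_path, overlay_path)
        with open(overlay_path, "a") as f:
            f.write(data)

base = tempfile.mkdtemp()
with open(os.path.join(base, "motd"), "w") as f:
    f.write("hello")

p1 = CowPartition(base, tempfile.mkdtemp())
p2 = CowPartition(base, tempfile.mkdtemp())
p1.append("motd", " from p1")   # p1 gets its own copy
print(p1.read("motd"))          # hello from p1
print(p2.read("motd"))          # hello  (p2 still sees the shared base)
```

The space savings come from the unmodified files: only changed files consume per-partition storage, which is also why such overlays are easy to back up.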


Restrictions inside the container
The following actions are often prohibited:
• Modifying the running kernel by direct access and loading kernel modules.
• Mounting and dismounting file systems.
• Creating device nodes.
• Accessing raw, divert, or routing sockets.
• Modifying kernel runtime parameters, such as most sysctl settings.
• Changing securelevel-related file flags.
• Accessing network resources not associated with the container.
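A container manager enforcing the list above amounts to gating privileged operations against a deny list. This toy policy check uses illustrative operation names, not a real container runtime's API:

```python
# Toy policy check mirroring the restriction list: privileged
# operations are denied when the caller runs inside a container.
# Operation names are illustrative.

DENIED_IN_CONTAINER = {
    "load_kernel_module",
    "mount_filesystem",
    "create_device_node",
    "open_raw_socket",
    "set_sysctl",
    "change_securelevel",
}

def allowed(operation, in_container):
    return not (in_container and operation in DENIED_IN_CONTAINER)

print(allowed("open_raw_socket", in_container=True))   # False
print(allowed("open_raw_socket", in_container=False))  # True
```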

4. Evaluation

Overhead
Operating system-level virtualization usually imposes little or no overhead, because programs in a virtual partition use the operating system's normal system call interface and do not need to be subject to emulation or run in an intermediate virtual machine, as is the case with whole-system virtualizers (such as VMware and QEMU) or para-virtualizers (such as Xen and UML).

5. Conclusion and Future Work
Virtualization technologies have matured to the point where the technology is being deployed across a wide range of platforms and environments. The usage of virtualization has gone beyond increasing the utilization of infrastructure to areas like data replication and data protection. This paper has looked at the continuing evolution of virtualization, its potential, some tips on optimizing virtualization, and how to future-proof the technology.

After all, server virtualization's value is well-established. Many companies have migrated significant percentages of their servers to virtual machines hosted on larger servers, gaining benefits in hardware utilization, energy use, and data center space. And those companies that haven't done so thus far are hatching plans to consolidate their servers in the future. These are all capital or infrastructure costs, though. What does server virtualization do for human costs, the IT operations piece of the puzzle?

Base-level server consolidation offers a few benefits for IT operations. It makes hardware maintenance much easier, since virtual machines can be moved to other physical servers when it's time to maintain or repair the original server. This moves hardware maintenance from a weekend and late-night effort to a part of the regular business day, certainly a great convenience. The next step for most companies is to leverage the portability of virtual machines to achieve IT operational agility. Because virtual machines are captured in disk images, they are portable, no longer bound to an individual physical server. The ease of reproducing virtual images means that application capacity can be easily dialed up and down with the creation or tear-down of additional virtual images. Server pooling allows virtual machines to be automatically migrated according to application load. More sophisticated virtualization uses include high availability, where virtual machines can be moved automatically, by the virtualization management software itself, when hardware failures occur.
Seeing the magic of a virtual machine automatically being brought up on a new server after its original host goes down, all without any human intervention, vividly demonstrates the power of more sophisticated virtualization use. Certainly these kinds of uses demonstrate virtualization's power to transform IT operations, enabling IT organizations to offer the kind of responsiveness and creativity that could only be dreamed of a few years ago. The deftness with which applications can be migrated, upsized, downsized, cloned, and so on is something that will forever change the way IT does its job.

