
Information Technology Essays - Virtualization, VMware, the Linux Kernel and Linux Device Drivers
Virtualization is a technique for hiding the physical characteristics of computing resources from the systems, applications, or end users that interact with those resources. It can make a single physical resource (such as a storage device, a server, an application, or an operating system) appear to operate as multiple logical resources. It can also make multiple physical resources (such as storage devices or servers) appear as a single logical resource, or make one physical resource appear as one logical resource with quite different characteristics.
The universal theme of all virtualization technologies is hiding technical detail through encapsulation. Virtualization creates an external interface that conceals an underlying implementation, for example by simplifying a control system, by combining resources at different physical locations, or by multiplexing access. Owing to the recent development of new virtualization platforms and technologies, attention has been refocused on virtualization. It is a proven software technology that is rapidly transforming the IT landscape and fundamentally changing the way people compute. (VMWare 2008)
Virtualization can benefit anyone who uses a computer, from IT professionals and Mac enthusiasts to commercial businesses and government organizations. It saves time, effort, and money while achieving more with the computer hardware users already own. (VMWare 2008)
Virtualization is used in many different contexts, which can be grouped into two major types: platform virtualization, which involves the simulation of whole computers, and resource virtualization, which involves the simulation of combined, fragmented, or simplified resources. Virtualization is also an important concept in non-computer contexts. Many control systems present a virtualized interface to a complex device; the gas pedal of a modern car, for example, does much more than simply increase the flow of fuel to the engine.
Virtualization can improve the return on investment in any business. It effectively lets one computer do the job of multiple computers by sharing the resources of a single machine across several environments. Virtual servers and virtual desktops allow multiple operating systems and applications to be hosted locally as well as in remote locations, giving freedom from physical and geographical limitations. Beyond the basic advantages of energy savings and lower capital expenses through more efficient use of hardware, building a virtual infrastructure also brings better desktop management, improved disaster recovery, increased security, and high availability of resources. The benefits of virtualization include the following:
Through virtual machines, the workloads of several under-utilized servers can be consolidated onto a smaller number of machines, possibly a single machine. This yields savings in hardware, environmental costs, and the management and administration of the server infrastructure.
Virtual machines also serve to run legacy applications. A legacy application may simply not run on newer hardware or operating systems, and even if it does, it may under-utilize the server, so consolidating several applications onto one machine is useful. This is difficult to do without virtualization, because applications are usually not written to co-exist within a single execution environment.
Virtual machines offer secure, isolated sandboxes for running untrusted applications. Such an execution environment can even be created dynamically, on the fly, as something is downloaded from the Internet. Virtualization thus plays an important role in building secure computing platforms.
Virtual machines can be used to create operating systems or execution environments with guaranteed resource limits; such partitioning usually goes hand in hand with quality-of-service guarantees in the design of QoS-enabled operating systems.
Virtual machines can provide the illusion of hardware, or of a hardware configuration that is not physically present (such as multiple processors or SCSI devices), and can simulate networks of independent computers.
They can run multiple operating systems of entirely different natures simultaneously, some of which may be hard or impossible to run on newer real hardware.
They allow powerful debugging and performance monitoring: operating systems can be debugged without losing productivity or setting up complicated debugging scenarios.
Virtual machines isolate whatever they run, so they provide fault and error containment; faults can even be injected proactively into software to study its subsequent behavior.
Virtual machines aid application and system mobility, since they make software migration easier.
They are excellent tools for research and academic experiments; because they provide isolation, they are safer to work with, and they encapsulate the complete state of a running system: the state can be saved, examined, modified, and reloaded.
They enable existing operating systems to run on shared-memory multiprocessors.
They make it easy to create arbitrary test scenarios, leading to very imaginative and effective quality assurance. Virtualization can also be used to retrofit new features into existing operating systems without "too much" work.
Tasks such as system migration, backup, and recovery become easier and more manageable.
Virtualization is an effective means of providing binary compatibility. Virtualization on commodity hardware has become extremely popular for co-located hosting; such hosting is economical, secure, and appealing overall.
In essence, virtualization converts hardware into software. Software such as VMware ESX transforms the resources of an x86-based computer, including the CPU, RAM, network controller, and hard disk, to create a fully functional virtual machine that can run its own operating system and applications just like a real computer. Multiple virtual machines share the hardware resources without interfering with one another, which makes it possible to safely run several operating systems and applications on a single computer.

VMware
The VMware approach to virtualization inserts a thin layer of software directly on the computer hardware or on a host operating system. This layer contains a virtual machine monitor that creates virtual machines and allocates hardware resources dynamically and transparently, allowing multiple operating systems to run concurrently on a single physical computer. (VMWare 2008) Virtualizing a single physical computer is just the beginning: VMware provides a robust virtualization platform that can scale across hundreds of interconnected physical computers and storage devices to form a complete virtual infrastructure.

VMware, Inc. is a publicly listed company on the New York Stock Exchange. It designs and develops proprietary virtualization software products for x86-compatible computers, including both commercial and freeware versions. VMware's desktop software runs atop Microsoft Windows, Linux, and Mac OS X, while its enterprise-level software, such as ESX Server, runs directly on the server hardware without requiring a host operating system. The name VMware combines "VM", the abbreviation for "virtual machine", with the second half of "software".
Achieving the Benefits of Virtualization: The free VMware Server is built on VMware's proven technology. With this robust yet easy-to-use software, developers can create multiple environments with different operating systems on the same server to streamline software development and testing. Testing of new applications, patches, and operating systems can be simplified by letting system administrators test in secure virtual machines and use snapshots to roll back to a clean state.
Server provisioning can be simplified by building a virtual machine once and then deploying it multiple times. Software can be evaluated in ready-to-run virtual machines without installation and configuration. Legacy operating systems such as Windows NT Server 4.0 and Windows 2000 Server can be re-hosted in a virtual machine running on new hardware and a new operating system.
Pre-built, ready-to-run virtual appliances, which include the virtual hardware, operating system, and application environment, can also be leveraged. Virtual appliances for web, file, print, proxy, email, and other infrastructure services are available for download on the Virtual Appliance Marketplace.

Linux
Free operating systems, as exemplified by Linux, have several advantages. One is that their internals are open for all to view. Anyone with the requisite skills can readily examine, understand, and modify an operating system, an area once considered dark and mysterious, whose code was restricted to a small number of programmers.
Linux has helped to democratize operating systems. Yet the Linux kernel remains a large and complex body of code that cannot simply be hacked at random; would-be kernel hackers need an entry point where they can approach the code without being overwhelmed by its complexity. Frequently, device drivers provide that gateway (Corbet, Rubini & Kroah-Hartman 2005).
Device drivers play an important role in the Linux kernel. They are distinct black boxes that make a particular piece of hardware respond to a well-defined internal programming interface while completely hiding the details of how the device works. User activities are performed through a set of standardized calls that are independent of the specific driver; the role of the device driver is to map those calls to device-specific operations that act on the real hardware.
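A minimal sketch in C can make this mapping concrete. The skeleton below, written in the style of Corbet, Rubini & Kroah-Hartman (2005), registers a character device whose read and write file operations stand in for device-specific work; the device name "sketch" and its single byte of state are hypothetical, chosen only for illustration.

/* Hypothetical example driver: the name "sketch" and the single byte
 * of device state exist only to illustrate how standardized calls are
 * mapped to device-specific operations. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static int major;		/* major number assigned by the kernel */
static char device_byte;	/* stand-in for real device state */

static ssize_t sketch_read(struct file *filp, char __user *buf,
			   size_t count, loff_t *ppos)
{
	if (*ppos > 0 || count < 1)
		return 0;	/* one byte only; report EOF afterward */
	if (copy_to_user(buf, &device_byte, 1))
		return -EFAULT;
	*ppos = 1;
	return 1;		/* one byte "read from the device" */
}

static ssize_t sketch_write(struct file *filp, const char __user *buf,
			    size_t count, loff_t *ppos)
{
	if (count < 1)
		return -EINVAL;
	if (copy_from_user(&device_byte, buf, 1))
		return -EFAULT;
	return count;		/* pretend the device consumed it all */
}

static const struct file_operations sketch_fops = {
	.owner = THIS_MODULE,
	.read  = sketch_read,
	.write = sketch_write,
};

static int __init sketch_init(void)
{
	major = register_chrdev(0, "sketch", &sketch_fops);
	return major < 0 ? major : 0;
}

static void __exit sketch_exit(void)
{
	unregister_chrdev(major, "sketch");
}

module_init(sketch_init);
module_exit(sketch_exit);
MODULE_LICENSE("GPL");

The key design point is the file_operations table: the kernel dispatches the standardized read and write system calls through it, and the driver translates them into whatever the hardware actually requires.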
Writing Linux device drivers is interesting for a number of reasons. Individuals need to know about drivers in order to gain access to a particular device that interests them, and by making a Linux driver available for their products, hardware vendors can add the large and growing Linux user base to their potential market (Corbet, Rubini & Kroah-Hartman 2005). The Linux kernel must keep running reliably regardless of whether hardware is added to or removed from the system, and this places an additional burden on the device driver author.
For USB drivers, when the device to which a driver is bound is removed from the system, any pending urbs that were submitted to the device begin to fail with the error -ENODEV. The driver must recognize this error and properly clean up any pending I/O when it occurs. Hot-pluggable devices are no longer limited to traditional peripherals such as mice and keyboards; a number of systems now support the addition and removal of entire CPUs and memory modules. The Linux kernel handles the addition and removal of such core system devices properly, so individual device drivers need not pay attention to them.
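As a hedged sketch of the cleanup just described, the following urb completion handler, modeled on the pattern in Corbet, Rubini & Kroah-Hartman (2005), checks the status codes the USB core reports when a device disappears; the callback name my_bulk_callback is hypothetical.

#include <linux/usb.h>

/* Completion handler for a bulk urb; the USB core calls this when the
 * transfer finishes or fails. */
static void my_bulk_callback(struct urb *urb)
{
	switch (urb->status) {
	case 0:
		/* Success: process urb->transfer_buffer here and, if
		 * desired, resubmit with usb_submit_urb(urb, GFP_ATOMIC). */
		break;
	case -ENODEV:		/* the device was unplugged */
	case -ECONNRESET:	/* the urb was unlinked */
	case -ESHUTDOWN:	/* the host controller is going away */
		/* Do not resubmit; let disconnect() free any resources. */
		return;
	default:
		dev_err(&urb->dev->dev, "bulk urb failed: %d\n",
			urb->status);
		break;
	}
}

The essential rule is that once -ENODEV or a similar shutdown status is seen, the driver stops resubmitting urbs and defers resource cleanup to its disconnect() method.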

References
VMware 2008, VMware, Inc., retrieved April 17, 2008, from www.vmware.com.
Mullins, R 2007, 'VMware the bright spot on a gray Wall Street day', IDG News Service.
Corbet, J, Rubini, A & Kroah-Hartman, G 2005, Linux Device Drivers, 3rd edn, O'Reilly Media.
