
REMOTE BASED KVM MANAGEMENT TOOL (USING CLOUD COMPUTING CONCEPT)

CISCO

PROJECT GUIDE :-

PROJECT SYNOPSIS
By: AMAN SAGAR

Computing clouds provide computation, software, data access, and storage resources without requiring cloud users to know the location and other details of the computing infrastructure. Within limits, cloud users can consume any amount of these resources without having first to acquire servers or other computing equipment. End users access cloud-based applications through a web browser or a lightweight desktop or mobile application, while the business software and data are stored on servers at a remote location. Cloud application providers strive to give the same or better service and performance as if the software were installed locally on end-user computers. This type of data-centre environment allows enterprises to get their applications up and running faster, with easier manageability and less maintenance, and enables IT to adjust resources (such as servers, storage, and networking) more rapidly to meet fluctuating and unpredictable business demand.

Objective/Aim :- This project allows a user to control multiple computers from a single keyboard, video monitor, and mouse. Although multiple computers are connected to the KVM (Kernel-based Virtual Machine) host, typically a smaller number of computers is controlled at any given time using virtualization. In this way, many computers can be monitored and controlled at the same time and from the same place.

Technical details :- Virtualization has made a lot of progress during the last decade, primarily due to the development of a myriad of open-source virtual machine hypervisors. This progress has almost eliminated the barriers between operating systems and dramatically increased utilization of powerful servers, bringing immediate benefit to companies. Until recently, the focus has always been on software-emulated virtualization. Two of the most common approaches to software-emulated virtualization are full virtualization and paravirtualization.

• In full virtualization, a layer commonly called the hypervisor or the virtual machine monitor exists between the virtualized operating systems and the hardware. This layer multiplexes the system resources between competing operating system instances.

• Paravirtualization is different in that the hypervisor operates in a more cooperative fashion: each guest operating system is aware that it is running in a virtualized environment, so each cooperates with the hypervisor to virtualize the underlying hardware.

KVM (for Kernel-based Virtual Machine) is a fast-growing open-source full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module that provides the core virtualization infrastructure and a processor-specific module. Using the KVM hypervisor, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, and so on. The kernel component of the KVM hypervisor is included in mainline Linux. Considering the timeline of virtualization techniques, KVM is a relative newcomer. Several incumbent open-source methods exist today, such as Xen, Bochs, UML, Linux-VServer, and coLinux, but KVM is receiving a surprising amount of exposure in the press.

The approach that KVM takes is to turn a Linux kernel into a hypervisor simply by loading a kernel module. The kernel module exports a device called /dev/kvm, which enables a guest mode of the kernel (in addition to the traditional kernel and user modes). With /dev/kvm, a VM has its own address space, separate from that of the kernel or any other VM that is running. Devices in the device tree (/dev) are common to all user-space processes, but /dev/kvm is different in that each process that opens it sees a different map (to support isolation of the VMs). Installing the kvm kernel module thus turns the Linux kernel into a hypervisor. Because the standard Linux kernel is the hypervisor, it benefits from changes to the standard kernel (memory support, scheduler, and so on), and optimizations to these Linux components benefit both the hypervisor (the host operating system) and the Linux guest operating systems. With the kernel acting as a hypervisor, you can then start other operating systems, such as another Linux kernel or Windows. KVM is a unique hypervisor: instead of creating major portions of an operating system kernel themselves, as other hypervisors have done, the KVM developers devised a minimally intrusive method that turns the Linux kernel itself into a hypervisor by developing KVM as a kernel module. Integrating the KVM hypervisor capabilities into a host Linux kernel as a loadable module simplifies management and improves performance in virtualized environments, which was probably the main reason the developers added the KVM hypervisor to the Linux kernel. A typical KVM installation consists of the following components:
• A device driver for managing the virtualization hardware; this driver exposes its capabilities via a character device, /dev/kvm.

• A user-space component for emulating PC hardware; currently, this is handled in user space and is a lightly modified QEMU process.

• An I/O model directly derived from QEMU's, with support for copy-on-write disk images and other QEMU features.
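To make the first component concrete, the following minimal C sketch (an illustration added here, not part of the original design; error handling is trimmed) opens the /dev/kvm character device, queries the API version, and creates an empty virtual machine whose address space is isolated from the kernel and from other VMs:

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Each process that opens /dev/kvm sees its own map of VMs. */
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* The stable KVM API reports version 12. */
        printf("KVM API version: %d\n",
               ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* Create a VM with its own address space, separate from the
           kernel's and from every other VM's. */
        int vmfd = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vmfd < 0) { perror("KVM_CREATE_VM"); return 1; }

        /* A real monitor such as QEMU would now add guest memory
           (KVM_SET_USER_MEMORY_REGION), create vCPUs (KVM_CREATE_VCPU),
           and enter the guest with KVM_RUN. */
        close(vmfd);
        close(kvm);
        return 0;
    }

Run on a host with the kvm module loaded, this prints the API version and exits; a failure to open /dev/kvm usually means the module is not loaded or the user lacks permission on the device.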

Innovativeness & Usefulness :- This approach has numerous advantages. By adding KVM virtualization capabilities to a standard Linux kernel, the virtualized environment can benefit from all the ongoing work on the Linux kernel itself. Under this model, every virtual machine is a regular Linux process, scheduled by the standard Linux scheduler. Traditionally, a normal Linux process has two modes of execution: kernel and user. User mode is the default mode for applications, and an application goes into kernel mode when it requires some service from the kernel, such as writing to the hard disk. The KVM hypervisor adds a third mode, guest mode. Guest-mode processes are processes that run from within the virtual machine, and guest mode, just like normal (non-virtualized) operation, has its own kernel and user-space variations. Normal kill and ps commands work on guest-mode processes: from the non-virtualized instance, a KVM virtual machine appears as a normal process and can be killed just like any other process, as the sketch below illustrates. KVM makes use of hardware virtualization to virtualize processor state, and memory management for the virtual machine is handled from within the kernel. I/O in the current version is handled in user space, primarily through QEMU. A wide variety of guest operating systems work with the KVM hypervisor, including many versions of Linux, BSD (Berkeley Software Distribution), Solaris, and Windows.
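As a sketch of this property (added for illustration; the QEMU binary name and the guest image path are assumptions, not part of the original synopsis), the C program below starts a KVM guest as an ordinary child process and later stops it with a standard signal:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t vm = fork();
        if (vm == 0) {
            /* Child: boot an unmodified guest image under KVM.
               "qemu-system-x86_64" and "guest.img" are illustrative. */
            execlp("qemu-system-x86_64", "qemu-system-x86_64",
                   "-enable-kvm", "-m", "512", "guest.img", (char *)NULL);
            perror("execlp");
            _exit(1);
        }
        /* The guest now shows up in ps like any other process. */
        printf("guest running as PID %d\n", (int)vm);
        sleep(30);              /* management work would happen here */
        kill(vm, SIGTERM);      /* stop the VM like any normal process */
        waitpid(vm, NULL, 0);
        return 0;
    }

Because the VM is just a scheduled Linux process, no hypervisor-specific management command is needed to observe or terminate it.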

Hardware & Software to be used :-

• An Intel processor supporting virtualization (Intel VT); a quick check is sketched below

• The C programming language for the code write-up
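One way to confirm the first requirement on a given machine is to read the CPUID feature bits. The short C sketch below is an added illustration, assuming a GCC or Clang toolchain for the <cpuid.h> helper; only the Intel VT-x check named in the hardware list is shown:

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        /* CPUID leaf 1, ECX bit 5 = VMX (Intel VT-x). AMD-V would be
           leaf 0x80000001, ECX bit 2 (SVM) instead. */
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (ecx & (1u << 5)))
            puts("Intel VT-x supported");
        else
            puts("VT-x not reported by CPUID");
        return 0;
    }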

Market Potential & Competitive advantage :- The largest advantage of cloud computing (virtualization) thus far has been the increase in speed to market for new services and the lowering of investment and access barriers to those services, for buyers and consumers as well as for vendors and systems integrators. Additional critical success factors (CSFs) for cloud computing include:
• Lowers the asset investment threshold: Companies can avoid capital expenditure and improve utilization by using third-party cloud services, avoiding the need to invest in their own capital assets.

• Lowers switching costs via self-service: Self-service and alternative channels of service open up as Web sites, portals, and mobile networks enable wider access to and choice of IT services for business and IT.

• Lowers access-to-services barriers: The speed of procurement increases through access to on-demand services, which have the potential to shift risk responsibility from internal to external providers.

• Removes intermediaries (or creates new intermediaries/aggregators/brokers): Businesses can go direct to gain convenience and leverage existing and new services.

• Lowers the innovation access barrier: IT services can be refreshed faster and more frequently.

• Increases growth potential: Additional sources of revenue can be enabled and exploited through both existing and new market access, providing new growth potential for businesses using cloud access and services.

The following are a few business metrics that can be used to measure the benefits of cloud computing when applied to an organization:

1. Speed and rate of change: Cost reduction and the cost of adoption/de-adoption are lower and faster in the cloud. Cloud computing creates additional cost-transformation benefits by reducing decision delays through pre-built services and a faster rate of transition to new capabilities. This is a common goal for business improvement programs that lack resources and skills and that are time sensitive.

2. Total cost of ownership optimization: Users can identify cloud services that let them select and configure a service from a standard catalog, or parameterize it to offer a level of customization that meets their needs. Because a cloud service is designed to run on the Web, provisioning from a catalog-style service means it should be ready to provision and run within minutes or hours of selecting the service. Traditionally this has often been decoupled when IT projects are handed off to production services; in cloud computing environments, these steps are joined up.

3. Rapid provisioning and de-provisioning: Resources are scaled up and down to follow business activity as it expands, grows, or is redirected. Provisioning time can be compressed from weeks to hours.

4. Increased margin and cost control: Revenue growth and cost-control opportunities allow companies to pursue new customers and markets for business growth and service improvement.

5. Dynamic usage: Elastic provisioning and service management target real end users and real business needs for functionality as the scope of users and services evolves and seeks new solutions.
