
SASIDHAR TALASILA * et al.

[IJESAT] INTERNATIONAL JOURNAL OF ENGINEERING SCIENCE & ADVANCED TECHNOLOGY

ISSN: 2250-3676
Volume-2, Special Issue-1, 45-51

ABIQUO: A PROFICIENT ENTERPRISE CLOUD MANAGEMENT PLATFORM INTENDED FOR HYPERVISOR INDEPENDENCY, INTEGRITY AND SCALABILITY
Sasidhar Talasila 1, Illa Pavan Kumar 2, Subrahmanyam Kodukula 3

1 M.Tech, Department of CSE, K L University, Andhra Pradesh, India, sasidhar369@gmail.com
2 M.Tech, Department of CSE, K L University, Andhra Pradesh, India, illa.pavankumar@gmail.com
3 Professor, Department of CSE, K L University, Andhra Pradesh, India, smkodukula@yahoo.com

Abstract
Infrastructure as a Service (IaaS) is one of the primary services provided by cloud computing, in which physical infrastructure is abstracted to provide computing, storage, and networking as a service; virtualization is the key concept behind IaaS. Virtualization brings the facility to transfer virtual machines between physical servers. Through virtualization we can create and control virtual images for different platforms; this creation and control is provided by a layer of software called a hypervisor, sometimes called a Virtual Machine Monitor (VMM). A hypervisor brings the ability to execute multiple operating systems and their applications concurrently on a single physical machine. An effective cloud management platform for hypervisors is therefore a key consideration in virtualization. However, most existing platforms are hypervisor dependent, i.e., they do not allow switching virtual machines from one hypervisor to another; hence there is a need for hypervisor independence in cloud management for scalability reasons. This paper discusses such a hypervisor-independent platform, Abiquo. First we investigate the architectures of two popular hypervisors, the Xen hypervisor and the VMware ESXi hypervisor. In isolation both are very efficient, but when virtual machines need to be switched from one hypervisor to another, managing the virtual machines created by the respective hypervisors becomes an overhead. Finally we explore the architecture of the new cloud management platform Abiquo, which overcomes this overhead, i.e., provides hypervisor independence.

Index Terms: Virtualization, Hypervisors, Abiquo Platform.

1. INTRODUCTION
Cloud computing is the fastest growing part of information technology. Cloud services are simpler to acquire and to scale up or down; Infrastructure as a Service (IaaS) is one of the major services provided by cloud computing, and the concept behind providing this service efficiently is virtualization. Virtualization allows abstraction and isolation of lower-level functionality and the underlying hardware. This enables portability of higher-level functions and sharing and/or aggregation of physical resources [1]. One example of virtualization is server virtualization. The role of the hypervisor in virtualization is to present a virtual operating platform to the guest operating systems and to manage their execution; multiple instances of a variety of operating systems may share the virtualized hardware resources. Hypervisors are installed on server hardware whose only task is to run guest operating systems. Amazon EC2 currently utilizes a highly customized version of the Xen hypervisor; from VMware we have ESXi, the base for VMware vSphere; and Microsoft's Hyper-V technology is used in the Windows Azure platform. The rest of this paper is organized as follows. In the next section we discuss virtualization and the virtualization techniques that we follow. In Section 3 we discuss the architecture of the Xen hypervisor, and the VMware ESXi architecture is discussed in Section 4. The new design to manage different hypervisors is discussed in Section 5.

2. VIRTUALIZATION
Virtualization is a technique to access pooled resources: it assigns a logical name to a physical resource and then provides a pointer to that physical resource when a request is made. Virtualization provides efficient management of resources, and the mapping of virtual resources to physical resources is dynamic: the mapping can be assigned based on rapidly changing conditions, and changes to a mapping assignment can be nearly instantaneous. Successful partitioning of a machine to support the concurrent execution of multiple operating systems poses several challenges. First, virtual machines must be isolated from one another: it is not acceptable for the execution of one to adversely affect the performance of another. This is particularly true when virtual machines are owned by mutually untrusting users. Second, it is necessary to support a variety of different operating systems to accommodate the heterogeneity of popular applications. Third, the performance overhead introduced by virtualization should be small [2]. A virtual machine has all the attributes and characteristics of a physical system but is strictly software that emulates a physical machine.

IJESAT | Jan-Feb 2012
Available online @ http://www.ijesat.org 45

2.1. Server Virtualization

Server virtualization is a proven technology that enables multiple virtual machines to run on a single physical server. Each virtual machine is completely isolated from the others and is decoupled from the underlying host by a thin layer of software known as a hypervisor. This allows each virtual machine to run different operating systems and applications. Because the machines are decoupled from the underlying host, a guest can also be moved from one physical server host to another while running; this is known as live migration. These attributes are transforming how organizations approach virtual computing [3].

2.2. Hypervisor

A low-level program is required to provide system resource access to virtual machines, and this program is referred to as the hypervisor or Virtual Machine Monitor (VMM). A hypervisor running on bare metal is a Type 1, or native, VMM. Examples of Type 1 Virtual Machine Monitors are LynxSecure, RTS Hypervisor, Oracle VM, Sun xVM Server, VirtualLogix VLX, VMware ESX and ESXi, and Wind River VxWorks, among others. The operating system loaded into a virtual machine is referred to as the guest operating system, and there is no constraint on running the same guest on multiple VMs on a physical system. Type 1 VMMs have no host operating system because they are installed on a bare system. An operating system running on a Type 1 VMM is a full virtualization because it is a complete simulation of the hardware it runs on. Some hypervisors are installed over an operating system and are referred to as Type 2, or hosted, VMMs [4]. Examples of Type 2 Virtual Machine Monitors are Containers, KVM, Microsoft Hyper-V, Parallels Desktop for Mac, Wind River Simics, VMware Fusion, Virtual Server 2005 R2, Xen, Windows Virtual PC, and VMware Workstation 6.0 and Server, among others. This is a very rich product category. Type 2 virtual machines are installed over a host operating system; for Microsoft Hyper-V, that operating system would be Windows Server.

2.3. Virtualization Techniques

Paravirtualization requires that the host operating system provide a virtual machine interface for the guest operating system and that the guest access hardware through that interface. An operating system running as a guest on a paravirtualization system must be ported to work with the host interface; this is an example of the Type 2 approach. In the full virtualization technique, the VMM is installed as a Type 1 hypervisor directly on the hardware. All operating systems in full virtualization communicate directly with the VM hypervisor, so no modification is required in the guest OS or applications. The guest OS and applications are not aware of the virtualized environment, so they can execute on the VM just as they would on a physical system [4]. Guest operating systems in full virtualization systems are therefore generally faster than with the paravirtualization technique. Figure 1 and Figure 2 show these two virtualization techniques.
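The distinction between the two techniques can be sketched as a toy model. This is purely illustrative code (the class and method names are ours, not any real hypervisor's API): a paravirtualized guest is modified to call the hypervisor explicitly through a hypercall, while an unmodified fully virtualized guest runs privileged instructions that the hardware traps into the VMM for emulation.

```python
# Illustrative model (not real hypervisor code): how a paravirtualized
# guest and a fully virtualized guest each reach the VMM.

class VMM:
    """The hypervisor: the only layer allowed to touch the 'hardware'."""
    def __init__(self):
        self.log = []

    def hypercall(self, op):
        # Paravirtualization: the ported guest calls the VMM explicitly.
        self.log.append(("hypercall", op))
        return f"executed {op}"

    def trap(self, op):
        # Full virtualization: the unmodified guest issues a privileged
        # instruction, the CPU traps, and the VMM emulates it.
        self.log.append(("trap-and-emulate", op))
        return f"emulated {op}"

class ParavirtGuest:
    """Guest OS ported to the VMM interface (e.g., a Xen PV guest)."""
    def __init__(self, vmm):
        self.vmm = vmm
    def run_privileged(self, op):
        return self.vmm.hypercall(op)   # aware of the hypervisor

class FullVirtGuest:
    """Unmodified guest OS (e.g., an HVM guest)."""
    def __init__(self, vmm):
        self.vmm = vmm
    def run_privileged(self, op):
        # The guest believes it runs on bare metal; the trap is raised
        # by the hardware, not by the guest's own code.
        return self.vmm.trap(op)

vmm = VMM()
pv, hvm = ParavirtGuest(vmm), FullVirtGuest(vmm)
pv.run_privileged("update_page_table")
hvm.run_privileged("update_page_table")
print(vmm.log)
```

The model also hints at why full virtualization needs no guest changes: only the `ParavirtGuest` class encodes any knowledge of the hypervisor.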

Figure 1. Paravirtualization Technique


Figure 2. Full Virtualization Technique

3. ARCHITECTURE OF XEN

Xen is open source virtualization software based on paravirtualization technology. To guarantee resource isolation, the guest OS does not directly access physical resources; for that, Xen changes the management of the execution of privileged instructions at the processor level. Normally the x86 architecture supports four privileged execution modes on the processor, called rings. Ring 0 allows full access to the hardware (privileged mode); ring 3 prevents the execution of privileged instructions and is typically used for application execution. In a traditional operating system, the kernel runs in ring 0 for full access to the hardware. In the case of Xen, the hypervisor runs in ring 0 and the operating systems (both guest OSes and the host OS) run in ring 1 [5]. This means that if an OS wants to execute a privileged instruction, the kernel has to go through the hypervisor. The hypervisor, in conjunction with the host OS, provides system management utilities for users to manage VMs. For example, when a VM-creation command is sent to the hypervisor, it sets up the virtual hardware for the VM, allocates memory to the VM, and loads the kernel into the virtual address space; the VM then boots like a real machine. This section gives an overview of the Xen 3.0 architecture. Figure 3 shows this architecture; the components are the Xen Virtual Machine Monitor (VMM) and four VMs (Domain 0, VM 1, VM 2, and VM 3). We discuss each of these below.

3.1. Xen Virtual Machine Monitor (VMM) or Xen Hypervisor

The Xen hypervisor is the basic abstraction layer of software that sits directly on the hardware, below any operating system. It is responsible for CPU scheduling and memory partitioning of the various virtual machines running on the hardware device [6]. The hypervisor not only abstracts the hardware for the virtual machines but also controls their execution as they share the common processing environment. It has no knowledge of networking, external storage devices, video, or any other common I/O functions found on a computing system.

3.2. Domain 0

Domain 0 runs a modified Linux kernel; it is a unique virtual machine running on the Xen hypervisor that has special rights to access physical I/O resources and to interact with the other virtual machines (the Domain U PV and HVM guests) running on the system. All Xen virtualization environments require Domain 0 to be running before any other virtual machine can be started. Two drivers are included in Domain 0 to support network and local disk requests from Domain U PV and HVM guests: the Network Backend Driver and the Block Backend Driver. The Network Backend Driver communicates directly with the local networking hardware to process all virtual machine requests coming from the Domain U guests [6]. The Block Backend Driver communicates with the local storage disk to read and write data from the drive based upon Domain U requests.
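The split between a guest's frontend driver and Domain 0's backend driver communicates over a shared-memory ring of requests and responses. The following is a toy single-producer/single-consumer model of that idea, not Xen's actual ring interface (the names `IORing`, `post_request`, and `backend_service` are ours):

```python
from collections import deque  # stand-in for a fixed-size shared memory page

# Toy model of Xen's shared-memory I/O ring (illustrative only): the
# Domain U frontend posts requests, Domain 0's backend consumes them,
# performs the real I/O, and posts responses back on the same ring.

class IORing:
    """Request/response ring shared between a guest and Domain 0."""
    def __init__(self, size=8):
        self.size = size
        self.requests = deque()
        self.responses = deque()

    def post_request(self, req):
        # Called by the Domain U frontend (e.g., the PV Block Driver).
        if len(self.requests) >= self.size:
            raise BufferError("ring full")
        self.requests.append(req)

    def backend_service(self, disk):
        # Called by the Domain 0 backend (e.g., the Block Backend Driver),
        # which is the only side with access to the real device.
        while self.requests:
            op, sector, payload = self.requests.popleft()
            if op == "write":
                disk[sector] = payload
                self.responses.append(("ok", sector))
            elif op == "read":
                self.responses.append(("data", disk.get(sector)))

disk = {}                                   # Domain 0's local storage disk
ring = IORing()
ring.post_request(("write", 7, b"hello"))   # guest side
ring.post_request(("read", 7, None))
ring.backend_service(disk)                  # Domain 0 side
print(list(ring.responses))
```

The key property the model preserves is that the guest never touches `disk` directly; all I/O flows through the shared ring and is mediated by Domain 0.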

3.3. Domain U
In Figure 3, VM 1 and VM 2 are the Domain U guests. These have no direct access to physical hardware on the machine, as a Domain 0 guest does, and are often referred to as unprivileged. All paravirtualized virtual machines running on a Xen hypervisor are referred to as Domain U PV Guests and run modified Linux, Solaris, FreeBSD, and other UNIX operating systems. All fully virtualized machines running on a Xen hypervisor are referred to as Domain U HVM Guests and run standard Windows or any other unmodified operating system. A Domain U PV Guest is aware that it does not have direct access to the hardware and recognizes that other virtual machines are running on the same machine. A Domain U HVM Guest is not aware that it is sharing processing time on the hardware and that other virtual machines are present. A Domain U PV Guest contains two drivers for network and disk access, the PV Network Driver and the PV Block Driver. The operations performed by the virtual machine monitor can be classified as CPU, memory, and I/O operations.

Figure 3. The Architecture of the Xen Hypervisor

CPU operations: As discussed earlier, the Intel x86 architecture provides four privilege modes. These modes, or rings, are numbered 0 to 3, with 0 being the most privileged. In a non-virtualized system, the OS executes at ring 0 and the applications at ring 3; rings 1 and 2 are typically not used. In Xen paravirtualization, the VMM executes at ring 0, the guest OS at ring 1, and the applications at ring 3. This approach ensures that the VMM possesses the highest privilege, while the guest OS executes in a more privileged mode than the applications and is isolated from them. Privileged instructions issued by the guest OS are verified and executed by the VMM [7].

Memory operations: In a non-virtualized environment, the OS expects contiguous memory. Guest operating systems in Xen paravirtualization are modified to access memory in a non-contiguous manner. Guest operating systems are responsible for allocating and managing page tables; however, direct writes are intercepted and validated by the Xen VMM.

I/O operations: In a fully virtualized environment, hardware devices are emulated. Xen paravirtualization instead exposes a set of clean and simple device abstractions. For example, I/O data to and from guest operating systems is transferred using a shared memory ring architecture (memory shared between Domain 0 and the guest domain) through which incoming and outgoing messages are sent [7].

4. ARCHITECTURE OF VMWARE ESXi

VMware ESXi is VMware's next-generation hypervisor. It is functionally equivalent to ESX, but ESXi eliminates the Linux-based service console that is required for the management of ESX. Removing the console from the architecture results in a hypervisor without any general operating system dependencies, which improves reliability and security. The result is a footprint of less than 90 MB, allowing ESXi to be embedded on a host's flash device and eliminating the need for a local boot disk. The heart of ESXi is the VMkernel, shown in Figure 4. All other processes run on top of the VMkernel, which controls all access to the hardware in the ESXi host. The VMkernel is a POSIX-like OS developed by VMware and is similar to other OSs in that it supports process creation, file systems, and process threads. Unlike a general OS, however, the VMkernel is designed exclusively around running virtual machines, so the hypervisor focuses on resource scheduling, device drivers, and input/output (I/O) stacks [8]. The "user world" processes are the various processes executing above the VMkernel to provide management access, hardware monitoring, and the execution compartment in which a virtual machine operates; the hypervisor layer is managed through these processes. Another process is the virtual machine monitor (VMM) process, which is responsible for providing the execution environment in which the guest OS operates and interacts with the set of virtual hardware presented to it. Each VMM process has a corresponding helper process known as VMX, and each virtual machine has one of each. The hostd process provides a programmatic interface to the VMkernel. It is used by the vSphere API and by the vSphere Client when making a direct management connection to the host. The hostd process manages local users and groups and evaluates the privileges of users interacting with the host. It also functions as a reverse proxy for all communications to the ESXi host.

VMware ESXi relies on the Common Information Model (CIM) system for hardware monitoring and health status. The CIM broker provides a set of standard APIs that remote management applications can use to query the hardware status of the ESXi host, and third-party hardware vendors can develop their own hardware-specific CIM plug-ins to augment the hardware information that can be obtained from the host. The Direct Console User Interface (DCUI) process provides a local management console for ESXi [8]. The vpxa process is responsible for vCenter Server communications; it runs under the security context of the vpxuser. Commands and queries from vCenter Server are received by this process before being forwarded to the hostd process for processing. The agent process is installed and executes when the ESXi host is joined to a High Availability (HA) cluster. The syslog daemon is responsible for forwarding logging data to a remote syslog receiver.

To enable management communication, only a limited number of network ports are open on ESXi. The most important ports and services are the following [9]:

80: This port serves a reverse proxy that is open only to display the static web page seen when browsing to the server. Otherwise, this port redirects all traffic to port 443 to provide SSL-encrypted communication with the ESXi host.

443: This port also acts as a reverse proxy for a number of services, providing SSL-encrypted communication with them. The services include the VMware Virtual Infrastructure API (VI API), which provides access to the RCLIs, the VI Client, VirtualCenter Server, and the SDK.

427: This port provides access to the Service Location Protocol, a generic protocol used to search for the VI API.

5989: This port is open for the CIM server, which is an interface for third-party management tools.

902: This port is open to support the older VIM API, specifically the older versions of the VI Client and VirtualCenter.
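The port list above can be collected into a small lookup table. The port numbers and service descriptions come from the text [9]; the helper function `service_for` is our own illustrative addition, not part of any VMware tool.

```python
# ESXi management ports as described in the text [9], as a lookup table.

ESXI_PORTS = {
    80:   "reverse proxy; static web page, redirects traffic to 443",
    443:  "reverse proxy; SSL-encrypted access to the VI API, VI Client, "
          "VirtualCenter Server, and SDK",
    427:  "Service Location Protocol (discovery of the VI API)",
    5989: "CIM server (interface for third-party management tools)",
    902:  "older VIM API (older VI Client / VirtualCenter versions)",
}

def service_for(port):
    """Return the service behind an open ESXi management port."""
    return ESXI_PORTS.get(port, "port not open for management traffic")

print(service_for(427))
print(service_for(22))
```

A table like this makes the reverse-proxy design visible at a glance: only 443 carries the actual encrypted management traffic, while 80 exists mainly to redirect to it.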


5. THE ABIQUO PLATFORM


From the previous sections we see that virtualization is growing rapidly. Initially deployed as a development tool to support efficient testing and to reduce the need to replicate hardware, over the last decade it has become widely adopted in enterprise datacenters and hosting environments. While the benefits of virtualization (improved utilization and efficiency, scalability, portability, and energy saving) are substantial, they increase the complexity of implementing the infrastructure, so an efficient management environment is needed. The Abiquo Platform provides this type of environment. It is next-generation cloud management software that allows IT operations managers, responsible for maintaining the hardware, to create and manage pooled compute resources from any servers virtualized by any supported hypervisor [11]. Abiquo introduces a Resource Cloud that separates physical infrastructure from virtual application infrastructure: the physical infrastructure, managed by the IT infrastructure organization, contributes resources to the Resource Cloud, while virtual enterprises consume them. The Resource Cloud can then be provisioned or sold to those who need it, the users of the IT resources. These consumers can use Abiquo to create virtual datacenters through which they can instantly deploy bundled virtual servers, storage, other physical resources, and applications from public and private virtual image libraries.
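The Resource Cloud separation can be sketched as a minimal model. This is our own illustration of the idea, not Abiquo's actual API: physical hosts (of any hypervisor) contribute capacity on one side, and virtual enterprises provision it on the other, without seeing which host backs them.

```python
# Minimal sketch of the Resource Cloud idea (illustrative names, not
# Abiquo's API): physical infrastructure contributes capacity, virtual
# enterprises consume it without knowing the backing host or hypervisor.

class ResourceCloud:
    def __init__(self):
        self.capacity_cpus = 0   # total capacity contributed so far

    def contribute(self, host_name, cpus):
        # Physical side: the IT organization adds a virtualized host.
        self.capacity_cpus += cpus

    def provision(self, enterprise, cpus):
        # Virtual side: an enterprise carves out a virtual datacenter.
        if cpus > self.capacity_cpus:
            raise RuntimeError("insufficient capacity in the resource cloud")
        self.capacity_cpus -= cpus
        return {"enterprise": enterprise, "cpus": cpus}

cloud = ResourceCloud()
cloud.contribute("esxi-host-1", 16)     # hosts may run different hypervisors
cloud.contribute("xen-host-1", 8)
vdc = cloud.provision("acme-corp", 12)  # consumer never names a host
print(vdc, cloud.capacity_cpus)
```

The point of the sketch is the interface boundary: `provision` takes no host or hypervisor argument, which is exactly the hypervisor independence the platform is designed around.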

5.1. Architectural Components


The Abiquo Server architecture is based on enterprise-class technologies such as Java and MySQL, as well as other widely deployed technologies such as DHCP servers and Network-Attached Storage (NAS) servers. The Abiquo Platform includes the entire infrastructure required to manage a cloud environment. The architectural components of Abiquo are shown in Figure 5.

Figure 4. The Architecture of the VMware ESXi Hypervisor

Figure 5. Components of the Abiquo Platform Architecture


Three major modules provide the scalable hypervisor management. The first is the Abiquo Server, which contains the business logic of Abiquo and includes some third-party services. The second is Abiquo Remote Services, which manages interactions between the Abiquo Server, the cloud nodes, and other elements of the platform. The third is the Abiquo V2V Services module, which manages the business process manager remote services and complex asynchronous tasks such as image conversions and persistent virtual machines [10]. Another component, shared libraries, manages virtual machine images: users can capture and store virtual machine images in private, shared, or even public libraries, and can combine sets of VM images into a single appliance for easy re-deployment. Shared libraries allow the IT organization to define standard VM images, for example built to company anti-virus, directory, and control requirements.
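The shared-library idea can be sketched as follows. The class and method names here are illustrative, not Abiquo's actual API: images are captured into libraries with a visibility scope, and sets of images are combined into a single re-deployable appliance.

```python
# Sketch of the shared image library described above (illustrative
# names, not Abiquo's API): images live in private, shared, or public
# libraries and can be bundled into one appliance.

class ImageLibrary:
    def __init__(self):
        self.images = {}   # image name -> visibility scope

    def capture(self, name, visibility="private"):
        if visibility not in ("private", "shared", "public"):
            raise ValueError(f"unknown visibility: {visibility}")
        self.images[name] = visibility

    def build_appliance(self, name, image_names):
        # Combine a set of captured VM images into one appliance.
        missing = [n for n in image_names if n not in self.images]
        if missing:
            raise KeyError(f"unknown images: {missing}")
        return {"appliance": name, "images": list(image_names)}

lib = ImageLibrary()
lib.capture("corp-base-linux", "shared")   # standard IT-approved image
lib.capture("db-server", "private")
app = lib.build_appliance("web-stack", ["corp-base-linux", "db-server"])
print(app)
```

A "shared" image in this model plays the role the text describes: a standard base that the IT organization defines once and every enterprise can redeploy from.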


6. CONCLUSION

The management and maintenance of cloud infrastructure is difficult for new cloud developers; when converting their physical servers into virtual servers, several issues may occur, such as managing access to multiple hypervisors. To diminish these issues, a new cloud management environment such as the Abiquo platform is required. We conclude that platforms of this type will be very useful for large organizations to manage virtual machines easily. Abiquo is a solution that employs a standards-based modular architecture designed to interface with any type of hypervisor (i.e., hypervisor independence), as well as with networking components, storage systems, and databases. Finally we conclude that, unlike products initially built around a particular hypervisor, Abiquo is a complete solution able to support additional platforms and services and to provide customers with full flexibility.

ACKNOWLEDGEMENTS
We would like to express our gratitude to all those who made this paper possible. We thank Mr. K. Satyanarayana, Chancellor of K L University, and Dr. K. Raja Sekhara Rao, Dean, K L University, for stimulating suggestions and encouragement. We further thank Prof. S. Venkateswarlu and Dr. K. Subrahmanyam, who encouraged us to go ahead with this paper.

BIOGRAPHIES
Sasidhar Talasila is pursuing his M.Tech in Computer Science Engineering at K L University, Guntur, and completed his B.Tech from JNTUH in 2010.

REFERENCES

[1] Mladen A. Vouk, "Cloud Computing - Issues, Research and Implementations," Journal of Computing and Information Technology (CIT), vol. 16, no. 4, 2008, pp. 235-246.
[2] Paul Barham, Boris Dragovic, Keir Fraser, Steven Hand, Tim Harris, et al., "Xen and the Art of Virtualization," SOSP'03, ACM, October 2003, USA.
[3] Citrix XenServer: http://www.citrix.com/English/ps2/products/feature.asp?contentID=2300351.
[4] Barrie Sosinsky, Cloud Computing Bible, Wiley Publishing, Inc., 2011.
[5] Geoffroy Vallee, Thomas Naughton, Hong Ong and Stephen L. Scott, "Checkpoint/Restart of Virtual Machines Based on Xen," U.S. Department of Energy.
[6] How Does Xen Work: http://xen.org/files/Marketing/HowDoesXenWork.pdf.
[7] Tim Abels, Puneet Dhawan, Balasubramanian Chandrasekaran, "An Overview of Xen Virtualization," Dell Power Solutions, pp. 109-111, August 2005.
[8] Dave Mishchenko, VMware ESXi: Planning, Implementation, and Security, Course Technology, Cengage Learning, 2011.
[9] Charu Chaubal, "The Architecture of VMware ESXi," VMware White Paper, 2008.
[10] Abiquo Architecture Overview: http://wiki.abiquo.com/display/ABI18/Architecture+Overview.
[11] Abiquo Cloud Vision: http://wiki.abiquo.com/display/ABI18/The+Abiquo+Cloud+Vision.


Illa Pavan Kumar is pursuing his M.Tech in Computer Science Engineering at K L University, Guntur, and completed his B.Tech from Kakatiya University in 2010.


Dr. Kodukula Subrahmanyam, a Gold Medalist from Andhra University (1992-93), is currently working as a Professor in the Computer Science & Engineering Department, School of Computing, K L University, Guntur. He has been in the teaching profession for the past 20 years; prior to joining K L University he worked as Programme Leader in the School of Engineering, Science & Technology at KDU University, Malaysia, for about 10 years. He has published more than 30 papers in national and international journals and conferences and has attended various workshops in Malaysia, Singapore, the USA, and India. His research interests include Knowledge Management, Communication Technologies, and Soft Systems Methodologies. He has guided over 100 students in their Masters and Bachelors dissertations and is currently guiding 4 towards their PhDs.

