
UNIT III

Cloud Computing
Virtualization Concepts
Cloud Virtualization Concepts
• Virtualization Technology: Definition, Understanding and
Benefits of Virtualization
• "Implementation Level of Virtualization,"
• Virtualization Structure/Tools and Mechanisms
• Hypervisor VMware, KVM, Xen.
• "Virtualization: of CPU, Memory, I/O Devices,"
• Virtual Cluster and Resources Management,
• Virtualization of Server,
• Desktop, Network, and Virtualization of data-centre.
Virtualization Technology
• Virtualization is a technology that allows us to install different
operating systems on the same hardware.
• They are completely separated and independent from each
other.
• In Wikipedia, you can find the definition as – “In computing,
virtualization is a broad term that refers to the abstraction of
computer resources.”
• Virtualization hides the physical characteristics of computing
resources from the applications and end users that interact with them.
• This includes making a single physical resource (such as a
server, an operating system, an application or a storage
device) appear to function as multiple virtual resources. It
can also include making multiple physical resources (such as
storage devices or servers) appear as a single virtual
resource.
Virtualization is often:
• The creation of many virtual resources from one physical
resource.
• The creation of one virtual resource from one or more
physical resources.
What is virtualization?
• Virtualization uses software to create an abstraction layer
over the physical hardware. In doing so, it creates virtual
compute systems, known as virtual machines (VMs).
• This allows organizations to run multiple virtual
computers, operating systems, and applications on
a single physical server — essentially partitioning it into
multiple virtual servers.
• Simply put, one of the main advantages of virtualization is
that it’s a more efficient use of the physical computer
hardware; this, in turn, provides a greater return on a
company’s investment.
What is a virtual machine (VM)?
• In the simplest terms possible, a virtual machine (VM) is a
virtual representation of a physical computer.
• Virtualization allows an organization to create
multiple virtual machines—each with its own operating
system (OS) and applications—on a single physical machine.
• A virtual machine can’t interact directly with a physical
computer, however. Instead, it needs a lightweight software
layer called a hypervisor to coordinate with the physical
hardware upon which it runs.
What is a hypervisor?
• The hypervisor is essential to virtualization—it's a thin
software layer that allows multiple operating systems to run
alongside each other and share the same physical computing
resources.
• These operating systems come as the
aforementioned virtual machines (VMs)—virtual
representations of a physical computer—and
the hypervisor assigns each VM its own portion of the
underlying computing power, memory, and storage.
• This prevents the VMs from interfering with each other.
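• A minimal sketch of how a hypervisor’s per-VM resource assignment can be inspected, using the libvirt Python bindings and assuming a local KVM/QEMU host reachable at qemu:///system (the URI and running VMs are assumptions, not part of the slides):

```python
# A minimal sketch, assuming the libvirt Python bindings ("pip install libvirt-python")
# and a local KVM/QEMU hypervisor reachable at "qemu:///system".
import libvirt

conn = libvirt.open("qemu:///system")        # connect to the hypervisor
for dom in conn.listAllDomains():            # every VM (domain) the hypervisor manages
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print(f"{dom.name()}: {vcpus} vCPU(s), "
          f"{mem_kib // 1024} MiB of the host's memory assigned")
conn.close()
```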
Cloud Virtualization Architecture
• Host Machine: The physical machine on which the virtual
machine is built and run is known as the Host Machine.
• Guest Machine: The virtual machine itself is referred to as
the Guest Machine.
Benefits of Virtualization
• More flexible and efficient allocation of resources.
• Enhance development productivity.
• It lowers the cost of IT infrastructure.
• Remote access and rapid scalability.
• High availability and disaster recovery.
• Pay-per-use of the IT infrastructure, on demand.
• Enables running multiple operating systems.
Drawback of Virtualization
• High Initial Investment: Setting up cloud and virtualization infrastructure
requires a large initial investment, although it reduces companies' costs over time.
• Learning New Infrastructure: As companies shift from physical servers
to the cloud, they need highly skilled staff who can work with the cloud,
which means hiring new staff or training current staff.
• Risk of Data: Hosting data on third-party resources puts the data at risk,
as it becomes an easier target for attackers.
Types of Virtualization
• Today the term virtualization is widely applied to a number
of concepts, some of which are described below −
• Server Virtualization
• Client & Desktop Virtualization
• Services and Applications Virtualization
• Network Virtualization
• Storage Virtualization
Server Virtualization
• Virtualizing your server infrastructure so that you no longer need a
separate physical server for each purpose; multiple virtual servers run on
a single physical machine.
Client & Desktop Virtualization
• This is similar to server virtualization, but applied on the user’s
side: the users’ desktops are virtualized, and their machines are replaced
with thin clients that draw on data-center resources.
Services and Applications Virtualization
• The virtualization technology isolates applications from the
underlying operating system and from other applications, in order to
increase compatibility and manageability. For example – Docker can
be used for that purpose.
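• A hedged sketch of application isolation with Docker, using the Docker SDK for Python ("pip install docker") and assuming a local Docker daemon; the image and command are illustrative:

```python
# Run an application in its own isolated container, independent of the host OS libraries.
import docker

client = docker.from_env()                       # talk to the local Docker engine
output = client.containers.run(
    "python:3.12-slim",                          # public image (assumed available)
    ["python", "-c", "print('hello from an isolated app')"],
    remove=True,                                 # clean the container up afterwards
)
print(output.decode())
```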
Network Virtualization
• It is a part of the virtualization
infrastructure that is used
especially when you are going to
virtualize your servers. It
helps you create
multiple virtual switches, VLANs,
NAT rules, etc.
• The diagram illustrates
the VMware schema.
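• A minimal sketch of creating a NAT-ed virtual network with the libvirt Python bindings; the network name, bridge name, and address range below are illustrative assumptions:

```python
# Define and start a NAT-ed virtual network that VMs can attach to
# instead of a physical switch (all names/addresses are placeholders).
import libvirt

NET_XML = """
<network>
  <name>demo-nat</name>
  <forward mode='nat'/>
  <bridge name='virbr-demo' stp='on'/>
  <ip address='192.168.150.1' netmask='255.255.255.0'>
    <dhcp><range start='192.168.150.10' end='192.168.150.100'/></dhcp>
  </ip>
</network>
"""

conn = libvirt.open("qemu:///system")
net = conn.networkDefineXML(NET_XML)   # persist the virtual network definition
net.create()                           # start it (creates the bridge, NAT rules, DHCP)
print("Active virtual networks:", conn.listNetworks())
conn.close()
```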

Storage Virtualization
• This is widely used in
datacenters, where large
amounts of storage are pooled;
it helps you create,
delete, and allocate
storage to different
hardware.
• This allocation is done
over a network
connection. The leading
technology here is the SAN
(Storage Area Network).
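• A toy illustration (not a real SAN API) of the idea behind storage virtualization: callers ask a pooled layer for volumes and never see which physical array backs them; the array names and capacities are assumptions:

```python
# Toy storage-virtualization layer: map "virtual volumes" onto physical arrays.
class StoragePool:
    def __init__(self, physical_arrays):
        # physical_arrays: dict of array name -> free capacity in GiB (assumed values)
        self.free = dict(physical_arrays)
        self.volumes = {}                          # volume name -> (array, size)

    def create_volume(self, name, size_gib):
        for array, free in self.free.items():      # pick any array with enough space
            if free >= size_gib:
                self.free[array] -= size_gib
                self.volumes[name] = (array, size_gib)
                return f"{name} -> {size_gib} GiB carved from {array}"
        raise RuntimeError("no physical array has enough free capacity")

    def delete_volume(self, name):
        array, size = self.volumes.pop(name)
        self.free[array] += size                   # capacity is returned to the pool

pool = StoragePool({"array-A": 500, "array-B": 200})
print(pool.create_volume("vm-data", 300))          # caller never sees the backing array
```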
Data Virtualization
• This is the kind of virtualization in which data collected from
various sources is managed in a single place, without users needing
technical details such as how the data is collected, stored, and
formatted. The data is then arranged logically so that its virtual view
can be accessed remotely by interested stakeholders and users through
various cloud services.
• Many large companies provide such services, including Oracle, IBM,
AtScale, and CData.
Implementation Levels of Virtualization
1) Instruction Set Architecture Level (ISA)
• ISA virtualization can work through ISA emulation. This is used to run
many legacy codes written for a different hardware configuration.
These codes run on any virtual machine using the ISA. With this, a
binary code that originally needed some additional layers to run is
now capable of running on the x86 machines. It can also be tweaked
to run on the x64 machine. With ISA, it is possible to make the virtual
machine hardware agnostic.
• For basic emulation, an interpreter is needed: it reads each guest
instruction and converts it into equivalent instructions that the host
hardware can execute, which then allows processing. This is one of the five
implementation levels of virtualization in Cloud Computing, as sketched below.
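• A toy interpreter sketch for a made-up three-instruction "guest ISA", showing the fetch–decode–execute loop that ISA emulation relies on (the instruction set is purely illustrative):

```python
# Emulate a tiny, invented guest ISA on the host by interpreting each instruction.
def emulate(program):
    regs = {"r0": 0, "r1": 0}
    pc = 0
    while pc < len(program):
        op, *args = program[pc]                 # fetch and decode the guest instruction
        if op == "mov":                         # translate it into host actions
            regs[args[0]] = args[1]
        elif op == "add":
            regs[args[0]] += regs[args[1]]
        elif op == "halt":
            break
        pc += 1
    return regs

# "Guest" code expressed as decoded tuples: r0 = 2; r1 = 40; r0 = r0 + r1
print(emulate([("mov", "r0", 2), ("mov", "r1", 40), ("add", "r0", "r1"), ("halt",)]))
```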
2) Hardware Abstraction Level (HAL)
• True to its name HAL lets the virtualization perform at the level of the
hardware. This makes use of a hypervisor which is used for
functioning. The virtual machine is formed at this level, which
manages the hardware using the virtualization process. It allows the
virtualization of each of the hardware components, which could be
the input-output device, the memory, the processor, etc.
• Multiple users can share the same hardware, with multiple virtualization
instances running at the very same time. This level is mostly used in
cloud-based infrastructure.
3) Operating System Level
• At the level of the operating system, the virtualization model
creates an abstraction layer between the operating system and the
applications. Each virtual environment is an isolated container on top of
the operating system and physical server, sharing its software and
hardware. Each of these containers then functions as a server.
• When there are several users and no one wants to share the
hardware, then this is where the virtualization level is used. Every
user will get his virtual environment using a dedicated virtual
hardware resource. In this way, there is no question of any conflict.
4) Library Level
• Working with the operating system directly is cumbersome, so
applications instead use the APIs exported by user-level libraries. These APIs
are well documented, which is why the library virtualization level is
preferred in such scenarios. API hooks make this possible, as they control
the communication link from the application to the system.
5) Application Level
• The application-level virtualization is used when there is a desire to
virtualize only one application and is the last of the implementation
levels of virtualization in Cloud Computing. One does not need to
virtualize the entire environment of the platform.
• This is generally used when you run virtual machines that execute high-
level language programs. The application sits above the virtualization layer,
which in turn runs on top of the operating system.
• It lets the high-level language programs compiled to be used at the
application level of the virtual machine run seamlessly.
Virtualization Structures/Tools and
Mechanisms
• The virtualization layer is inserted between the hardware and the OS.
• The virtualization layer is responsible for converting portions of the real
hardware into virtual hardware.
• Depending on the position of the virtualization layer, there are
several classes of VM architectures, namely:
• Hypervisor and Xen architecture,
• Full virtualization and host-based virtualization,
• Para-virtualization.
Hypervisor and Xen Architecture

• A hypervisor can assume a micro-kernel architecture or a monolithic
hypervisor architecture.
• A micro-kernel hypervisor includes only the basic and unchanging
functions (such as physical memory management and processor
scheduling). The device drivers and other changeable components
are outside the hypervisor.
• A monolithic hypervisor implements all the aforementioned
functions, including those of the device drivers. Therefore, the size of
the hypervisor code of a micro-kernel hypervisor is smaller than that
of a monolithic hypervisor.
Xen Architecture

• Xen is an open source hypervisor program developed at Cambridge
University. Xen is a micro-kernel hypervisor, which separates
policy from mechanism. It implements all the mechanisms,
leaving the policy to be handled by Domain 0.
Hardware virtualization
• Hardware virtualization can be classified into two categories: full
virtualization and host-based virtualization.
Full Virtualization
• With full virtualization, noncritical instructions run on the hardware
directly while critical instructions are discovered and replaced with
traps into the VMM to be emulated by software.
• Both the hypervisor and VMM approaches are considered full
virtualization. Noncritical instructions do not control hardware or
threaten the security of the system, but critical instructions do.
Therefore, running noncritical instructions on hardware not only can
promote efficiency, but also can ensure system security.
Host-Based Virtualization
• An alternative VM architecture is to install a virtualization layer on
top of the host OS. This host OS is still responsible for managing the
hardware.
• The guest OSes are installed and run on top of the virtualization layer.
Dedicated applications may run on the VMs.
• Certainly, some other applications can also run with the host OS
directly. This host based architecture has some distinct advantages,
as enumerated next. First, the user can install this VM architecture
without modifying the host OS. Second, the host-based approach
appeals to many host machine configurations.
Para-Virtualization
• It needs to modify the guest operating systems. A para-virtualized VM
provides special APIs requiring substantial OS modifications in user
applications.
• Performance degradation is a critical issue of a virtualized system.
Virtualization of CPU, Memory, And I/O Devices

• CPU Virtualization
• Memory Virtualization
• I/O virtualization
• Para-virtualization
• Virtualization in Multi-Core Processors
Hardware Support for Virtualization
• Modern operating systems and processors permit multiple processes
to run simultaneously. If there is no protection mechanism in a
processor, all instructions from different processes will access the
hardware directly and cause a system crash.
• Therefore, all processors have at least two modes, user mode and
supervisor mode, to ensure controlled access of critical hardware.
• Instructions running in supervisor mode are called privileged
instructions.
• Other instructions are unprivileged instructions. In a virtualized
environment, it is more difficult to make OSes and applications run
correctly because there are more layers in the machine stack.
CPU Virtualization
• Unprivileged instructions of VMs run directly on the host machine for
higher efficiency.
• Other critical instructions should be handled carefully for correctness
and stability.
• The critical instructions are divided into three categories:
1. privileged instructions,
2. control-sensitive instructions, and
3. behavior-sensitive instructions.
1. Privileged instructions execute in a privileged mode and will be
trapped if executed outside this mode.
2. Control-sensitive instructions attempt to change the configuration
of resources used.
3. Behavior-sensitive instructions have different behaviors depending
on the configuration of resources, including the load and store
operations over the virtual memory.
• A CPU architecture is virtualizable if it supports the ability to run the
VM’s privileged and unprivileged instructions in the CPU’s user mode
while the VMM runs in supervisor mode.
• When the privileged instructions including control- and behavior-
sensitive instructions of a VM are executed, they are trapped in the
VMM.
• RISC CPU architectures can be naturally virtualized because all
control- and behavior-sensitive instructions are privileged
instructions.
• On the contrary, x86 CPU architectures are not primarily designed to
support virtualization.
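• A simplified trap-and-emulate sketch: unprivileged instructions run "directly", while privileged or sensitive ones raise a trap that the VMM handles against the VM's virtual state. The instruction names and VMM logic are illustrative, not a real ISA:

```python
# Trap-and-emulate in miniature: the VMM intercepts privileged instructions.
PRIVILEGED = {"write_cr3", "hlt", "out"}        # stand-ins for sensitive instructions

class Trap(Exception):
    pass

def cpu_execute(instr):
    if instr in PRIVILEGED:
        raise Trap(instr)                       # hardware traps into the VMM
    return f"{instr}: executed directly on the host CPU"

def vmm_run(guest_instructions):
    for instr in guest_instructions:
        try:
            print(cpu_execute(instr))
        except Trap as trap:
            # The VMM emulates the effect on the VM's *virtual* hardware instead.
            print(f"{trap}: trapped; emulated by the VMM against virtual state")

vmm_run(["add", "load", "write_cr3", "store", "hlt"])
```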
VIRTUAL CLUSTERS AND RESOURCE
MANAGEMENT
• A physical cluster is a collection of servers interconnected by a
physical network such as a LAN whereas virtual clusters have VMs
that are interconnected logically by a virtual network across several
physical networks.
• Virtual clusters are built based on application partitioning or customization.
The most important issue is how to store the VM images in
the system efficiently.
• There are common installations for most users or applications, such
as operating systems or user-level programming libraries.
Three critical design issues of virtual clusters
• Live migration of VMs,
• Memory and file migrations, and
• Dynamic deployment of virtual clusters.
VMs in a virtual cluster have the following
interesting properties:
• The virtual cluster nodes can be either physical or virtual machines
• The purpose of using VMs is to consolidate multiple functionalities on the
same server
• VMs can be colonized (replicated) in multiple servers for the purpose of
promoting distributed parallelism, fault tolerance, and disaster recovery.
• The size (number of nodes) of a virtual cluster can grow or shrink
dynamically
• The failure of any physical nodes may disable some VMs installed on the
failing nodes.
• But the failure of VMs will not pull down the host system.
VM Migration
• Virtual machine migration is the task of moving a virtual machine
from one physical hardware environment to another.
• It is part of managing hardware virtualization systems and is
something that providers look at as they offer virtualization services.
Main benefits of migrating to the cloud
• Scalability: Cloud computing can scale up to support larger workloads and greater
numbers of users far more easily than on-premises infrastructure, which requires
companies to purchase and set up additional physical servers, networking
equipment, or software licenses.
• Cost: Companies that move to the cloud often vastly reduce the amount they
spend on IT operations, since the cloud providers handle maintenance and
upgrades. Instead of keeping things up and running, companies can focus more
resources on their biggest business needs – developing new products or
improving existing ones.
• Performance: For some businesses, moving to the cloud can enable them to
improve performance and the overall user experience for their customers. If their
application or website is hosted in cloud data centers instead of in various on-
premises servers, then data will not have to travel as far to reach the users,
reducing latency.
• Flexibility: Users, whether they're employees or customers, can access the cloud
services and data they need from anywhere.
Main challenges of migrating to the cloud?
• Migrating large databases: Often, databases will need to move to a different
platform altogether in order to function in the cloud. Moving a database is
difficult, especially if there are large amounts of data involved. Some cloud
providers actually offer physical data transfer methods, such as loading data onto
a hardware appliance and then shipping the appliance to the cloud provider, for
massive databases that would take too long to transfer via the Internet. Data can
also be transferred over the Internet. Regardless of the method, data migration
often takes significant time.
• Data integrity: After data is transferred, the next step is making sure data is
intact and secure, and is not leaked during the process.
• Continued operation: A business needs to ensure that its current systems remain
operational and available throughout the migration. They will need to have some
overlap between on-premises and cloud to ensure continuous service; for
instance, it's necessary to make a copy of all data in the cloud before shutting
down an existing database. Businesses typically need to move a little bit at a time
instead of all at once.
Hot and Cold Migrations
1. Cold Migration :
A powered-down Virtual Machine is moved to a separate host or data
store. The Virtual Machine’s power state is OFF and there is no need for
common shared storage. No CPU compatibility check is performed, but the
outage (downtime) is long. Log files and configuration files are migrated from
the source host to the destination host.
2. Hot Migration :
• A powered-on Virtual Machine is moved from one physical host to
another. The source host’s state is cloned to the destination
host and then the source copy is discarded. The complete state is
shifted to the destination host, and the network connection is moved to the
destination Virtual Machine.
Live migration of a VM from one machine to another consists
of the following six steps:
• Stage-0:
Is Pre-Migration stage having functional Virtual Machine on primary host.
• Stage-1:
Is Reservation stage initializing container on destination host.
• Stage-2:
Is the Iterative pre-copy stage, where shadow paging is enabled and all dirty
pages are copied in successive rounds.
• Stage-3:
Is Stop-and-copy, where the first host’s Virtual Machine is suspended and all
remaining Virtual Machine state is synchronized on the second host.
• Stage-4:
Is Commitment, where the Virtual Machine state on the first host is released
once the destination confirms it has a consistent copy.
• Stage-5:
Is the Activation stage, where the second host’s Virtual Machine starts,
establishes connections to all local devices, and resumes normal
activity.
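• A sketch of the iterative pre-copy idea behind stages 2–3: keep resending pages dirtied during the previous round until the remainder is small enough for a brief stop-and-copy. All numbers (page counts, dirty rate, threshold) are illustrative assumptions:

```python
# Toy model of iterative pre-copy live migration.
import random

def live_migrate(total_pages=10_000, dirty_rate=0.10, stop_copy_threshold=200):
    remaining = total_pages                     # round 1 copies every page
    round_no = 0
    while remaining > stop_copy_threshold:
        round_no += 1
        print(f"pre-copy round {round_no}: sending {remaining} pages while the VM runs")
        # pages dirtied while this round was being transferred must be resent
        remaining = int(remaining * dirty_rate * random.uniform(0.5, 1.0))
    print(f"stop-and-copy: VM paused, final {remaining} pages synchronized")
    print("commitment + activation: destination VM resumes, source copy discarded")

live_migrate()
```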
Migration of Memory, Files, and Network
Resources
Memory Migration
• The Internet Suspend-Resume (ISR) technique exploits temporal locality, as
memory states are likely to have considerable overlap between the suspended and
the resumed instances of a VM.
File System Migration
• Location-independent view of the file system that is available on all hosts. A
simple way to achieve this is to provide each VM with its own virtual disk
which the file system is mapped to and transport the contents of this virtual
disk along with the other states of the VM.
Network Migration
• To enable remote systems to locate and communicate with a VM, each VM
must be assigned a virtual IP address known to other entities. This address
can be distinct from the IP address of the host machine where the VM is
currently located. Each VM can also have its own distinct virtual MAC
address.
VIRTUALIZATION FOR DATA-CENTER
AUTOMATION
• Data-center automation means that huge volumes of hardware, software,
and database resources in these data centers can be allocated
dynamically to millions of Internet users simultaneously, with
guaranteed QoS.
• The latest virtualization development highlights high availability (HA),
backup services, workload balancing, and further increases in client
bases.
• Automation of data-center operations includes resource scheduling,
architectural support, power management, automatic or autonomic
resource management, performance of analytical models, and so on.
Server Consolidation in Data Centers
• Server consolidation is an approach to improve the low utility ratio of
hardware resources by reducing the number of physical servers. The
use of VMs increases resource management complexity.
Two categories:
• chatty workloads and noninteractive workloads.
• Chatty workloads may burst at some point and return to a silent state
at some other point. For example, video services may be used by many people
at night and by few people during the day.
• Noninteractive workloads do not require people’s efforts to make
progress after they are submitted.
Advantages:
• It enhances hardware utilization.
• Many underutilized servers are consolidated into fewer servers to
enhance resource utilization. Consolidation also facilitates backup
services and disaster recovery.
• In a virtual environment, the images of the guest OS and their
applications are readily cloned and reused.
• Total cost of ownership is reduced
• Improves availability and business continuity
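• A sketch of the packing decision behind server consolidation: place VM workloads onto as few physical hosts as possible (a first-fit-decreasing heuristic). The capacities and VM demands are illustrative; real schedulers also weigh memory, I/O, and SLAs:

```python
# First-fit-decreasing consolidation of VM CPU demands onto physical hosts.
def consolidate(vm_cpu_demands, host_capacity=100):
    hosts = []                                          # each entry = used capacity
    placement = {}
    for vm, demand in sorted(vm_cpu_demands.items(), key=lambda kv: -kv[1]):
        for i, used in enumerate(hosts):
            if used + demand <= host_capacity:          # first host that still fits
                hosts[i] += demand
                placement[vm] = i
                break
        else:                                           # otherwise power on a new host
            hosts.append(demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

plan, n_hosts = consolidate({"web": 35, "db": 60, "cache": 20, "batch": 45, "dev": 15})
print(plan, f"-> {n_hosts} physical servers instead of 5")
```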
Virtual Storage Management
• Virtual storage includes the storage managed by VMMs and guest
OSes. Generally, the data stored in this environment can be classified
into two categories:
• VM Images and Application data.
• The VM images are special to the virtual environment, while
application data includes all other data which is the same as the data in
traditional OS environments.
• In data centers, there are often thousands of VMs, which leads to a flood
of VM images.
• Parallax is a distributed storage system customized for virtualization
environments.
• Content Addressable Storage (CAS) is a solution to reduce the total size
of VM images, and therefore supports a large set of VM-based systems
in data centers.
Overview of the Parallax system architecture.
Need of Content Addressed Storage (CAS)
• Storing fixed data that is less likely to change.
• Managing the fixed data or content.
• Providing security for fixed content.
• Making fixed content available whenever needed.
• Providing optimized and centrally managed storage solutions.
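• A sketch of the content-addressing idea behind CAS: blocks are stored under the hash of their content, so identical blocks shared by many VM images are kept only once (block contents here are made-up examples):

```python
# Content-addressed block store: the address *is* the SHA-256 of the content.
import hashlib

store = {}                                   # content hash -> block bytes

def put_block(data: bytes) -> str:
    key = hashlib.sha256(data).hexdigest()
    store.setdefault(key, data)              # duplicate blocks collapse to one copy
    return key

# Two VM images that share a common OS block: only two blocks are stored, not three.
image_a = [put_block(b"common-os-block"), put_block(b"app-A-data")]
image_b = [put_block(b"common-os-block")]
print(len(store), "unique blocks stored for", len(image_a) + len(image_b), "references")
```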
Cloud OS for Virtualized Data Centers
• Virtual infrastructure (VI) managers and OSes are specially tailored
for virtualizing data centers which often own a large number of
servers in clusters.
• Nimbus, Eucalyptus, and OpenNebula are all open source software
available to the public.
• Only vSphere 4 is a proprietary OS for cloud resource virtualization
and management over data centers.
• These VI managers are used to create VMs and aggregate them into
virtual clusters as elastic resources.
Eucalyptus for Virtual Networking of Private
Cloud
• Eucalyptus CLIs can manage both Amazon Web Services and its own private instances.
Clients are free to move instances between Eucalyptus and Amazon
Elastic Compute Cloud. The virtualization layer manages the network, storage, and
compute resources. Instances are isolated by hardware virtualization.
∙ Instance Manager (IM) controls the execution,
inspection, and terminating of VM instances on
the host
∙ Group Manager (GM) gathers information
about, and schedules, VM execution on specific
instance managers, and also manages the virtual
instance network.
∙ Cloud Manager (CM) is the entry-point into the
cloud for users and administrators. It queries
node managers for information about resources,
makes scheduling decisions, and implements
them by making requests to group managers.
Important Features of Eucalyptus Architecture
• Images: A good example is the Eucalyptus Machine Image (EMI), a
software module bundled and uploaded to the cloud.
• Instances: When we run an image and utilize it, it becomes an instance.
• Networking: It can be further subdivided into three modes: Static
mode (allocates IP addresses to instances), System mode (assigns a MAC
address and attaches the instance’s network interface to the physical
network via the NC), and Managed mode (creates a local network of
instances).
• Access Control: It is used to impose restrictions on users.
• Elastic Block Storage: It gives block-level storage volumes that can be attached
to an instance.
• Auto-scaling and Load Balancing: It is used to create or destroy instances or
services based on requirements.
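• A hedged sketch of how such a cloud is typically driven: because Eucalyptus exposes an EC2-compatible API, standard AWS tooling such as boto3 can usually be pointed at a private Eucalyptus endpoint. The endpoint URL, credentials, and image ID below are placeholder assumptions:

```python
# Launch one instance from a Eucalyptus Machine Image via the EC2-compatible API.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://eucalyptus.example.internal:8773/services/compute",  # placeholder
    aws_access_key_id="YOUR-EUCA-ACCESS-KEY",
    aws_secret_access_key="YOUR-EUCA-SECRET-KEY",
    region_name="eucalyptus",
)

resp = ec2.run_instances(ImageId="emi-12345678", MinCount=1, MaxCount=1,
                         InstanceType="m1.small")
print(resp["Instances"][0]["InstanceId"])
```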
What is the difference between the vSphere ESX
and ESXi architectures?
• VMware vSphere 4 is VMware Inc.'s flagship server virtualization
suite and consists of several technologies that provide live migration,
disaster recovery protection, power management and automatic
resource balancing for data centers.
• The primary difference between ESX and ESXi is that ESX is based on a
Linux-based console OS, while ESXi offers a menu for server
configuration and operates independently from any general-purpose
OS.
vSphere 4 Architecture
