
Cloud computing Unit 2

Virtualization

Virtualization is a technique for separating a service from the underlying physical delivery
of that service. It is the process of creating a virtual version of something such as computer
hardware. It was initially developed during the mainframe era. It involves using specialized
software to create a virtual or software-created version of a computing resource rather than the
actual version of the same resource. With the help of virtualization, multiple operating systems
and applications can run on the same machine and the same hardware at the same time,
increasing the utilization and flexibility of the hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers is Virtualization. Virtualization allows sharing of a single
physical instance of a resource or an application among multiple customers and organizations
at one time. It does this by assigning a logical name to physical storage and providing a pointer
to that physical resource on demand. The term virtualization is often synonymous with
hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-
as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies
provide a virtual environment for not only executing applications but also for storage, memory,
and networking.
 Host Machine: The machine on which the virtual machine is going to be built is known as
Host Machine.
 Guest Machine: The virtual machine is referred to as a Guest Machine.
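
As a rough sketch of the idea of assigning a logical name to a physical resource and resolving it through a pointer on demand, consider the following Python snippet; the volume names and device paths are made up purely for illustration:

    # Minimal sketch: a logical-to-physical mapping resolved on demand.
    # The volume names and device paths are hypothetical examples.
    logical_to_physical = {
        "sales-data": "/dev/disk/by-id/physical-disk-01",
        "web-logs": "/dev/disk/by-id/physical-disk-02",
    }

    def resolve(logical_name: str) -> str:
        """Return the physical resource currently backing a logical name."""
        return logical_to_physical[logical_name]

    print(resolve("sales-data"))  # callers never deal with the physical layout directly

The consumer only ever uses the logical name; the mapping underneath can change without the consumer noticing, which is the essence of the abstraction.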

Work of Virtualization in Cloud Computing


Virtualization has a prominent impact on cloud computing. In cloud computing, users store data
in the cloud, but with the help of virtualization, users get the extra benefit of sharing the
infrastructure. Cloud vendors take care of the required physical resources, but they charge a
significant amount for these services, which impacts every user and organization. Virtualization
helps users and organizations obtain the services a company requires through external
(third-party) providers, which helps reduce costs to the company. This is how virtualization
works in cloud computing.

Benefits of Virtualization
 More flexible and efficient allocation of resources.
 Enhance development productivity.
 It lowers the cost of IT infrastructure.
 Remote access and rapid scalability.
 High availability and disaster recovery.
 Pay-per-use of the IT infrastructure on demand.
 Enables running multiple operating systems.

Drawbacks of Virtualization
 High Initial Investment: Clouds have a very high initial investment, but it is also true that
they help in reducing the cost of companies.
 Learning New Infrastructure: As companies shift from servers to the cloud, they require
highly skilled staff who can work with the cloud easily; for this, they have to hire new staff or
provide training to current staff.
 Risk of Data: Hosting data on third-party resources can put the data at risk, as it has the
chance of being attacked by a hacker or cracker very easily.

Characteristics of virtualization:

Abstracting Physical Resources

One of the most significant characteristics of virtualization is the ability to abstract physical
resources. This means that virtual machines can be created that are completely independent of
the underlying physical hardware. This allows multiple virtual machines to run on the same
physical machine, each with their own operating system and applications. This is known as
server virtualization, and it is the most common use of virtualization today.

For example, a single physical server can host multiple virtual machines, each running its own
operating system and applications. This allows for efficient use of resources, as a single physical
machine can be used to run multiple applications, rather than having to purchase and maintain
multiple physical servers.

Isolation of Resources

Another key characteristic of virtualization is the isolation of resources. This means that each
virtual machine is isolated from the others, and they cannot access each other's resources. This
provides security and stability, as a problem with one virtual machine will not affect the others.

For example, a company may use virtualization to run their email server, web server, and
database server on the same physical machine. If the email server were to be compromised, the
web server and database server would still be protected and continue to function properly.

Flexibility

Virtualization also provides flexibility in terms of resource allocation. Virtual machines can be
easily created, deleted, and modified as needed. This allows for easy scaling, as more resources
can be allocated to a virtual machine as needed. It also allows for easier testing and development,
as virtual machines can be created to test new software and configurations without affecting the
production environment.

For example, a company may use virtualization to create a test environment for their new
software. They can create a virtual machine with the same specifications as their production
environment and test the software without affecting their live systems. Once the software has
been tested and is ready for production, the virtual machine can be deleted, and the software can
be deployed to the production environment.

Portability

Virtualization also provides portability, as virtual machines can be easily moved between
physical machines. This allows for easy disaster recovery, as virtual machines can be quickly
moved to a different physical machine in the event of a disaster. It also allows for easy migration
between physical machines, as virtual machines can be moved to new hardware without affecting
the applications or data.

For example, a company may use virtualization to create a disaster recovery plan. They can
create a virtual machine that contains all of their important data and applications and store it on a
separate physical machine. In the event of a disaster, the virtual machine can be quickly moved
to a new physical machine, and the company can continue to operate as normal.

Networking

Virtualization also provides networking capabilities, as virtual machines can be connected to
virtual networks. This allows for easy communication between virtual machines, as well as the
ability to connect to physical networks. This allows for easy integration of virtual machines into
existing networks and the ability to create isolated networks for specific purposes.

For example, a company may use virtualization to create a virtual network for their development
team. They can create a virtual network that connects all of their development virtual machines,
allowing them to easily communicate and share resources. They can also connect this virtual
network to their physical network, allowing the development team to access the internet and
other resources. Additionally, this virtual network can be isolated from the rest of the company's
network for added security.

Snapshots and Backup

Virtualization also provides the ability to create snapshots of virtual machines. This allows for
easy backup and recovery of virtual machines, as well as the ability to quickly revert to a
previous state. This is especially useful for testing and development, as it allows for easy
experimentation without the risk of losing data or compromising the production environment.

For example, a company may use virtualization to test a new software update. They can create a
snapshot of their virtual machine before installing the update, and if the update causes any issues,
they can easily revert to the previous snapshot. This eliminates the need to manually restore data
and configurations, saving both time and resources.
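
As a hedged illustration of this snapshot workflow, the sketch below drives VirtualBox's VBoxManage command-line tool from Python; the VM name is a placeholder for an existing VirtualBox VM, and restoring a snapshot normally requires the VM to be powered off first:

    import subprocess

    VM_NAME = "test-vm"  # hypothetical VM name; replace with an existing VirtualBox VM

    def take_snapshot(name: str) -> None:
        # "VBoxManage snapshot ... take" records the current VM state under a name.
        subprocess.run(["VBoxManage", "snapshot", VM_NAME, "take", name], check=True)

    def restore_snapshot(name: str) -> None:
        # Reverts the (powered-off) VM to a previously recorded state.
        subprocess.run(["VBoxManage", "snapshot", VM_NAME, "restore", name], check=True)

    take_snapshot("before-update")
    # ... install and test the software update inside the VM ...
    # If the update causes problems, roll back:
    restore_snapshot("before-update")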

Desktop Virtualization

Desktop virtualization is another form of virtualization that allows multiple virtual desktops to
run on a single physical machine. This allows for easy deployment and management of desktops,
as well as the ability to access desktops remotely. This is especially useful for companies with a
mobile workforce, as it allows employees to access their desktop from any location.

For example, a company may use desktop virtualization to provide remote access for their sales
team. The sales team can access their virtual desktop from anywhere, allowing them to work on
presentations, access customer data, and communicate with the rest of the team. This eliminates
the need for remote employees to carry a laptop or access company data on a personal computer,
improving security and productivity.

Taxonomy of virtualization

 Virtualization is mainly used to emulate the execution environment, storage, and
networks. The execution environment is classified into two:

– Process-level – implemented on top of an existing operating system.

– System-level – implemented directly on hardware, requiring no or minimal support from an
existing operating system.

OR,

Virtualization covers a wide range of emulation techniques that are applied to different areas of
computing. A classification of these techniques helps us better understand their characteristics
and uses.

Virtualization is mainly used to emulate

● Execution Environments: To provide support for the execution of programs, e.g., an OS or an
application.

○ Process Level: Implemented on top of an existing OS that has full control of the hardware

○ System Level: Implemented directly on hardware and does not require support from an existing
OS.

● Storage: Storage virtualization is a system administration practice that allows decoupling the
physical organization of the hardware from its logical representation.

● Networks: Network virtualization combines hardware appliances and specific software for the
creation and management of a virtual network.

OR,

Taxonomy of virtualization

 Virtual machines are broadly classified into two types: System Virtual Machines (also
known as Virtual Machines) and Process Virtual Machines (also known as Application
Virtual Machines). The classification is based on their usage and degree of similarity to
the linked physical machine. The system VM mimics the whole system hardware stack
and allows for the execution of a whole operating system. The process VM, on the other
hand, provides a layer on top of an operating system that is used to replicate the
programming environment for the execution of specific processes.

 A Process Virtual Machine, also known as an application virtual machine, operates as a
regular program within a host OS and supports a single process. It is formed when the
process begins and deleted when it terminates. Its goal is to create a platform-independent
programming environment that abstracts away features of the underlying hardware or
operating system, allowing a program to run on any platform. With Linux, for example,
the Wine software aids in the execution of Windows applications.

 A System Virtual Machine, such as VirtualBox, offers a full system platform that allows
the operation of a whole operating system (OS).

 Virtual Machines are used to distribute and designate suitable system resources to
software (which might be several operating systems or an application), and the software
is restricted to the resources provided by the VM. The actual software layer that allows
virtualization is the Virtual Machine Monitor (also known as Hypervisor). Hypervisors
are classified into two groups based on their relationship to the underlying hardware.
Native VM is a hypervisor that takes direct control of the underlying hardware, whereas
hosted VM is a different software layer that runs within the operating system and so has
an indirect link with the underlying hardware.

 The system VM abstracts the Instruction Set Architecture, which differs slightly from
that of the actual hardware platform. The primary benefits of system VM include
consolidation (it allows multiple operating systems to coexist on a single computer
system with strong isolation from each other), application provisioning, maintenance,
high availability, and disaster recovery, as well as sandboxing, faster reboot, and
improved debugging access.

 The process VM enables conventional application execution inside the underlying
operating system and supports a single process. To support the execution of numerous
applications associated with numerous processes, we can construct numerous instances of
the process VM. The process VM is created when the process starts and terminated when
the process ends. The primary goal of a process VM is to provide platform independence
(in terms of the development environment), which implies that applications may be
executed in the same way on any of the underlying hardware and software platforms. A
process VM, as opposed to a system VM, abstracts a high-level programming language.
Although a process VM is built using an interpreter, it achieves speed comparable to
compiler-based programming languages by using a just-in-time compilation mechanism.

o Java Virtual Machine (JVM) and Common Language Runtime (CLR) are two popular
examples of process VMs that are used to virtualize the Java programming
language and the .NET Framework programming environment, respectively.

Difference between Cloud Computing and Virtualization:

1. Cloud computing is used to provide pools of automated resources that can be accessed
on demand, while virtualization is used to create various simulated environments from a
physical hardware system.
2. Cloud computing setup is tedious and complicated, while virtualization setup is simple
compared to cloud computing.
3. Cloud computing is highly scalable, while virtualization is less scalable compared to
cloud computing.
4. Cloud computing is very flexible, while virtualization is less flexible than cloud
computing.
5. For disaster recovery, cloud computing relies on multiple machines, while virtualization
relies on a single peripheral device.
6. In cloud computing, the workload is stateless, while in virtualization the workload is
stateful.
7. The total cost of cloud computing is higher than that of virtualization.
8. Cloud computing requires a lot of dedicated hardware, while in virtualization a single
dedicated hardware system can do a great job.
9. Cloud computing provides unlimited storage space, while in virtualization storage space
depends on the physical server capacity.
10. Cloud computing is of two types: public cloud and private cloud. Virtualization is of two
types: hardware virtualization and application virtualization.
11. In cloud computing, configuration is image based, while in virtualization, configuration
is template based.
12. In cloud computing, we utilize the entire server capacity and the servers are consolidated,
while in virtualization, the servers are provided on demand.
13. In cloud computing, the pricing follows a pay-as-you-go model and consumption is the
metric on which billing is done, while in virtualization the pricing is totally dependent on
infrastructure costs.

Pros of Virtualization in Cloud Computing:

 Efficient Utilization of Hardware –
With the help of virtualization, hardware is used efficiently by the user as well as the cloud
service provider. The user's need for a physical hardware system decreases, which results in
lower cost. From the service provider's point of view, they virtualize the hardware using
hardware virtualization, which reduces the amount of hardware that has to be provided to
users. Before virtualization, companies and organizations had to set up their own servers,
which required extra space to place them, engineers to check their performance, and extra
hardware cost; with virtualization, all these limitations are removed by cloud vendors, who
provide the services without the customer setting up any physical hardware system.
 Availability increases with Virtualization –
One of the main benefits of virtualization is that it provides advanced features that allow
virtual instances to be available at all times. It also has the capability to move a virtual
instance from one virtual server to another, which is a very tedious and risky task in a
server-based system. During migration of data from one server to another, it ensures its
safety. Also, we can access information from any location, at any time, from any device.
 Disaster Recovery is efficient and easy –
With the help of virtualization, data recovery, backup, and duplication become very easy. In
the traditional method, if a server system is damaged due to some disaster, the surety of data
recovery is very low. But with virtualization tools, real-time data backup, recovery, and
mirroring become easy tasks and provide assurance of virtually zero data loss.
 Virtualization saves Energy –
Virtualization helps save energy because, when moving from physical servers to virtual
servers, the number of servers decreases; due to this, monthly power and cooling costs
decrease, which saves money as well. As cooling cost reduces, the carbon produced by
devices also decreases, resulting in a cleaner, pollution-free environment.
 Quick and Easy Setup –
In traditional methods, setting up physical systems and servers is very time-consuming: first
purchase them in bulk, then wait for shipment, then wait for setup, and after that spend more
time installing the required software. With virtualization, the entire process is done in much
less time, which results in a productive setup.
 Cloud Migration becomes easy –
Most companies that have already spent a lot on servers are doubtful about shifting to the
cloud. But it is more cost-effective to shift to cloud services, because all the data present on
their servers can easily be migrated to the cloud server, saving on maintenance charges,
power consumption, cooling cost, the cost of a server maintenance engineer, and so on.

Cons of Virtualization:

 Data can be at Risk –
Working on virtual instances on shared resources means that our data is hosted on a
third-party resource, which puts our data in a vulnerable position. Any hacker can attack our
data or try to perform unauthorized access. Without a security solution, our data is under
threat.
 Learning New Infrastructure –
As organizations shift from servers to the cloud, they require skilled staff who can work with
the cloud easily. They must either hire new IT staff with the relevant skills or provide
training on those skills, which increases the cost to the company.
 High Initial Investment –
It is true that virtualization reduces the cost of companies, but it is also true that the cloud
has a high initial investment. It provides numerous services that are not required, and when
an inexperienced organization tries to set up in the cloud, it may purchase unnecessary
services that are not even required.
 A hypervisor is a form of virtualization software used in Cloud hosting to divide and
allocate the resources on various pieces of hardware. The program which provides
partitioning, isolation, or abstraction is called a virtualization hypervisor. The
hypervisor is a hardware virtualization technique that allows multiple guest operating
systems (OS) to run on a single host system at the same time. A hypervisor is
sometimes also called a virtual machine manager (VMM).

 Types of Hypervisor –

 TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a
“Native Hypervisor” or “Bare metal hypervisor”. It does not require any base server
operating system. It has direct access to hardware resources. Examples of Type 1
hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V
hypervisor.

 Pros & Cons of Type-1 Hypervisor:


 Pros: Such hypervisors are very efficient because they have direct access to the
physical hardware resources (like CPU, memory, network, and physical storage).
This also strengthens security, because there is no third-party layer in between that an
attacker could compromise.
 Cons: One problem with Type-1 hypervisors is that they usually need a dedicated
separate machine to perform their operation and to instruct different VMs and control
the host hardware resources.

 TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a
"Hosted Hypervisor". Such hypervisors do not run directly on the underlying
hardware; rather, they run as an application on a host system (physical machine).
Basically, the software is installed on an operating system, and the hypervisor asks the
operating system to make hardware calls. Examples of Type 2 hypervisors include
VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints
like PCs. The Type-2 hypervisor is very useful for engineers and security analysts (for
checking malware, malicious source code, and newly developed applications).

 Pros & Cons of Type-2 Hypervisor:


 Pros: Such hypervisors allow quick and easy access to a guest operating system while
the host machine is running. They usually come with additional useful features for
guest machines, which enhance coordination between the host machine and the guest
machine.
 Cons: There is no direct access to the physical hardware resources, so these
hypervisors lag in performance compared to Type-1 hypervisors. There are also
potential security risks: if an attacker gains access to the host operating system, they
can exploit its weaknesses and also access the guest operating system.

What Does Xen Hypervisor Mean?

Xen is a hypervisor that enables the simultaneous creation, execution, and management of
multiple virtual machines on one physical computer. Xen was developed by XenSource, which
was purchased by Citrix Systems in 2007. Xen was first released in 2003. It is an open-source
hypervisor and also comes in an enterprise version.

Xen Hypervisor

Xen is primarily a bare-metal, type-1 hypervisor that can be directly installed on computer
hardware without the need for a host operating system. Because it’s a type-1 hypervisor, Xen
controls, monitors and manages the hardware, peripheral and I/O resources directly. Guest
virtual machines request Xen to provision any resource and must install Xen virtual device
drivers to access hardware components. Xen supports multiple instances of the same or different
operating systems with native support for most operating systems, including Windows and
Linux. Moreover, Xen can be used on x86, IA-32 and ARM processor architecture.

VMware:

VMware is a virtualization and cloud computing software provider based in Palo Alto, Calif.
Founded in 1998, VMware is a subsidiary of Dell Technologies. EMC Corporation originally
acquired VMware in 2004; EMC was later acquired by Dell Technologies in 2016. VMware
bases its virtualization technologies on its bare-metal hypervisor ESX/ESXi in x86 architecture.
With VMware server virtualization, a hypervisor is installed on the physical server to allow
multiple virtual machines (VMs) to run on the same physical server. Each VM can run its own
operating system (OS), which means multiple OSes can run on one physical server. All the VMs
on the same physical server share resources, such as networking and RAM. In 2019, VMware
added support to its hypervisor to run containerized workloads in a Kubernetes cluster. These
workloads can be managed by the infrastructure team in the same way as virtual machines, and
DevOps teams can deploy containers the way they are used to.

Microsoft Hyper-V

Ever since Microsoft Hyper-V was released on Windows Server 2008, this hypervisor has been
one of the most popular virtualization options on the market. With Microsoft Hyper-V you can
create and run virtual machines without maintaining physical hardware.

The growth of virtualized working environments has led to an increased usage of Microsoft
Hyper-V and performance monitoring tools to help manage abstract resources. In this article,
we’re going to look at what Microsoft Hyper-V is.

What is Hyper-V? (And What is it Used For?)

Microsoft Hyper-V is a type of hypervisor. A hypervisor is a program used to host multiple
virtual machines on one single piece of hardware. Every virtual machine has its own applications
and programs separate from the underlying hardware. Hyper-V is one of, if not the, most famous
hypervisors in the world.

Hypervisors like Hyper-V are used for many different reasons depending on the environment in
which they are deployed. In general, Hyper-V helps network administrators create a virtualized
computing environment in a format that is easy to manage. From one piece of hardware an
administrator can manage a range of virtual machines without having to change computers.

Hyper-V can be used for many different purposes but it is most commonly used to create private
cloud environments. You deploy Hyper-V on servers to produce virtual resources for your users.

Why is Microsoft Hyper-V Important?

Microsoft Hyper-V is important because it allows users to transcend beyond the limitations of
physical hardware. Managing physical hardware is incredibly complex for larger organizations,
who have to manage often disparate and out-of-date hardware. Managing these devices is not
only time-consuming for administrators but also very costly.

Purchasing new hardware adds up very quickly, particularly when you take into account the
amount of extra office space needed to fit these devices in. Enterprises have started to attempt to
reduce costs and manage devices more efficiently. At the same time, modern devices have the
storage, CPU, and RAM to be able to sustain a variety of virtual OS’s from one location making
this even more effective.

What are the Minimum Requirements of Microsoft Hyper-V?

 Windows Server 2008
 Windows 8
 4 GB RAM
 A 64-bit CPU with hardware-assisted virtualization (Intel VT or AMD-V)

Metrics baseline
A metrics baseline consists of data collected in previous projects. It can be used to set a goal and
to determine whether trends show the likelihood of meeting that goal. Baselines become an
essential piece of a key performance indicator.

Baseline Measurement
Baseline Measurement, or baselining as it is called for short, is the process of establishing the
starting point of any process or metric, from which the improvement or impact of any change
measure is calculated. It is used to gauge how effective an improvement or change initiative is.

Now, let us look at where and how a baseline measurement is used.

Where do we use Baseline Measurement?


 In Change initiatives like automation, Business Process Reengineering, Mergers &
Acquisitions etc.
 In medical field for measuring treatment effectiveness
 Product improvement and modification
 Software version changes
The above are just a few of the many places where baselining of data is done. There are a lot

more applications of Baseline measurement.



How to do a Baseline Measurement?


1. The first thing to look at is scope of the initiative: the departments /teams it is going to cover,
the product lines, the scenarios being considered etc.
2. The next thing is to set the objective/Goal of the initiative & its unit of measurement. For
example, if a medical research team is going to measure the impact of a medicine that
reduces fever, the goal should be the normal body temperature.
3. The next step is to collect historical data of the measure. The best way is to collect the past
data. In cases where all of the data cannot be collected, an appropriate sampling method can
be applied. Care should be taken to ensure the sample is a representative of the original data
lying behind. Some scenarios require ‘Surveying’ customers to collect data. Some other
scenarios require data of competitive product or industry average. Employing and involving
experts who can judge the right approach for data collection is the key to success of this
activity.
4. Estimate the baseline with an appropriate statistical method. Sometimes it is enough to
simply plot a line graph and then arrive at the baseline. Some methods simply require
averaging the past data. Some complex scenarios require cleansing the data of abnormal
cases or advanced statistical techniques. The user should adopt an appropriate method for
arriving at the baseline.
Once the baseline value or range is obtained, the project can be kicked-off and the result can be

monitored against the baseline measurement.
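
As a minimal sketch of step 4, assuming the historical data has already been collected as a simple list of numbers, the baseline can be estimated by averaging the past data (one of the methods mentioned above), optionally with a spread around it:

    from statistics import mean, stdev

    # Hypothetical measurements collected from previous projects/periods.
    past_values = [42.1, 39.8, 41.5, 40.2, 43.0, 38.9, 41.7]

    baseline = mean(past_values)   # simple averaging of past data
    spread = stdev(past_values)    # optional: gives a rough range around the baseline

    print(f"Baseline: {baseline:.2f} "
          f"(typical range {baseline - spread:.2f} to {baseline + spread:.2f})")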

Scalability and Elasticity in Cloud Computing

Cloud Elasticity: Elasticity refers to the ability of a cloud to automatically expand or


compress the infrastructural resources on a sudden up and down in the requirement so that the
workload can be managed efficiently. This elasticity helps to minimize infrastructural costs.
This is not applicable for all kinds of environments, it is helpful to address only those scenarios
where the resource requirements fluctuate up and down suddenly for a specific time interval. It
is not quite practical to use where persistent resource infrastructure is required to handle the
heavy workload.
Elasticity is vital for mission-critical or business-critical applications, where any compromise in
performance may lead to huge business losses. Thus, elasticity comes into the picture where
extra resources are provisioned for such applications to meet the performance requirements.

It works in such a way that when the number of client accesses increases, applications are
automatically provisioned with extra computing, storage, and network resources such as CPU,
memory, storage, or bandwidth; and when there are fewer clients, those resources are
automatically reduced as per requirement.

Elasticity in the cloud is a well-known feature associated with scale-out solutions (horizontal
scaling), which allows resources to be dynamically added or removed when required.
It is mostly associated with public cloud resources, which are generally offered as pay-per-use
or pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (like compute, storage, or
network) dynamically as needed to adapt to workload changes in the applications in an
autonomic manner.
It makes for maximum resource utilization, which results in savings in infrastructure costs
overall.
Depending on the environment, elasticity is applied to resources in the system, not limited to
hardware, software, network, QoS, and other policies.
Elasticity depends entirely on the environment, as in some cases it may become a negative
characteristic for applications that must have guaranteed performance.
It is most commonly used in pay-per-use public cloud services, where IT managers are willing
to pay only for the duration for which they consumed the resources.
Example: Consider an online shopping site whose transaction workload increases during
festive season like Christmas. So for this specific period of time, the resources need a spike up.
In order to handle this kind of situation, we can go for a Cloud-Elasticity service rather than
Cloud Scalability. As soon as the season goes out, the deployed resources can then be
requested for withdrawal.
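
The provisioning and release of resources in such a scenario is typically driven by a monitoring loop with thresholds. The following is a provider-agnostic Python sketch of a threshold-based elasticity policy; the thresholds, instance limits, and the random metric source are made up for illustration:

    # Sketch of a threshold-based elasticity (auto-scaling) policy.
    import random
    import time

    MIN_INSTANCES, MAX_INSTANCES = 2, 10
    SCALE_UP_AT, SCALE_DOWN_AT = 75.0, 25.0   # average CPU % thresholds
    instances = MIN_INSTANCES

    def average_cpu_percent() -> float:
        # Stand-in for a real monitoring API; returns a random utilization value.
        return random.uniform(0, 100)

    for _ in range(5):  # one evaluation per "monitoring interval"
        cpu = average_cpu_percent()
        if cpu > SCALE_UP_AT and instances < MAX_INSTANCES:
            instances += 1          # provision one more instance
        elif cpu < SCALE_DOWN_AT and instances > MIN_INSTANCES:
            instances -= 1          # release an idle instance
        print(f"avg CPU {cpu:5.1f}% -> {instances} instances")
        time.sleep(1)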
Cloud Scalability: Cloud scalability is used to handle the growing workload where good
performance is also needed to work efficiently with software or applications. Scalability is
commonly used where the persistent deployment of resources is required to handle the
workload statically.
Example: Consider that you are the owner of a company whose database size was small in the
early days; as time passed, your business grew and the size of your database also increased. In
this case, you just need to request that your cloud service vendor scale up your database
capacity to handle the heavier workload.
It is totally different from what you have read above in Cloud Elasticity. Scalability is used to
fulfill the static needs while elasticity is used to fulfill the dynamic need of the organization.
Scalability is a similar kind of service provided by the cloud where the customers have to pay-
per-use. So, in conclusion, we can say that Scalability is useful where the workload remains
high and increases statically.
Types of Scalability:

1. Vertical Scalability (Scale-up) –


In this type of scalability, we increase the power of existing resources in the working
environment in an upward direction.

2. Horizontal Scalability: In this kind of scaling, the resources are added in a horizontal row.

3. Diagonal Scalability –
It is a mixture of both Horizontal and Vertical scalability where the resources are added both
vertically and horizontally.

Difference Between Cloud Elasticity and Scalability:

1. Elasticity is used just to meet the sudden ups and downs in the workload for a small period
of time, while scalability is used to meet a static increase in the workload.
2. Elasticity is used to meet dynamic changes, where the resources needed can increase or
decrease, while scalability is always used to address an increase in workload in an
organization.
3. Elasticity is commonly used by small companies whose workload and demand increase only
for a specific period of time, while scalability is used by giant companies whose customer
circle persistently grows, in order to do their operations efficiently.
4. Elasticity is a short-term plan adopted just to deal with an unexpected increase in demand or
seasonal demand, while scalability is a long-term plan adopted to deal with an expected
increase in demand.

System metrics
System metrics are measurement types found in the system. Each resource that can be monitored
for performance, availability, reliability, and other attributes has one or more metrics about
which data can be collected. Sample metrics include the amount of CPU on a node, or the
amount of CPU usage on a node.

In cloud computing, a system metric refers to a measurable quantity or parameter that provides
insights into the performance, health, and utilization of various components within a cloud-based
system. These metrics help cloud service providers and users monitor and manage the
infrastructure, applications, and services hosted on the cloud platform. System metrics play a
crucial role in ensuring optimal performance, resource allocation, and scalability. Here are some
common system metrics in cloud computing:

1. CPU Utilization: This metric measures the percentage of CPU capacity being used by a virtual
machine (VM) or a container. Monitoring CPU utilization helps identify performance
bottlenecks and indicates if additional resources are required.
2. Memory Utilization: This metric tracks the amount of available memory being used by
applications and processes. Monitoring memory usage helps prevent out-of-memory errors and
ensures efficient memory allocation.
3. Network Throughput: Network throughput measures the amount of data transmitted over a
network connection within a given time period. It helps evaluate network performance and detect
potential network congestion.
4. Disk I/O Operations: Disk I/O metrics track the number of read and write operations to storage
devices. Monitoring disk I/O helps optimize storage performance and prevent data access delays.
5. Latency: Latency is the time delay between a request and the corresponding response.
Monitoring latency helps assess the responsiveness of applications and services.
6. Request Rate: This metric measures the rate at which requests or transactions are being
processed by a system. It's particularly important for web servers and APIs.
7. Availability/Uptime: Availability is the percentage of time that a service or resource is
operational and accessible. Monitoring uptime helps ensure that services meet their SLAs
(Service Level Agreements).
8. Load Average: Load average indicates the average number of processes in the system's run
queue over a certain time period. It provides insights into the system's workload and potential
resource contention.
9. Resource Allocation: Metrics related to the allocation of CPU, memory, storage, and other
resources help ensure that applications have the necessary resources to function properly without
resource starvation.
10. Elasticity Metrics: These metrics track how well a system scales in response to changing
demands. They include metrics related to auto-scaling events, instance provisioning, and
resource allocation.

11. Error Rates: Monitoring the frequency of errors, exceptions, and failures helps identify issues
affecting application stability and user experience.

Cloud service providers often offer monitoring and analytics tools that collect and display these
system metrics in real-time or over specific time intervals. These tools enable cloud
administrators and users to make informed decisions about resource provisioning, application
optimization, and troubleshooting.
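
As a small sketch of collecting a few of these metrics on a single node, the snippet below uses the third-party psutil package (assumed to be installed, e.g. with pip install psutil); the monitoring agents offered by cloud providers gather similar data automatically:

    import psutil

    cpu_percent = psutil.cpu_percent(interval=1)   # CPU utilization sampled over 1 second
    mem = psutil.virtual_memory()                  # memory utilization
    disk = psutil.disk_io_counters()               # cumulative disk I/O operations

    print(f"CPU utilization:    {cpu_percent:.1f}%")
    print(f"Memory utilization: {mem.percent:.1f}%")
    print(f"Disk reads/writes:  {disk.read_count}/{disk.write_count}")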

load testing in cloud computing


Load testing in cloud computing is a performance testing technique that involves simulating
various levels of user activity and demand on a cloud-based application or system. The goal of
load testing is to determine how well the application or system can handle different levels of
load, ensuring that it performs optimally under expected and peak usage conditions. Cloud
computing is particularly well-suited for load testing due to its scalability and flexibility.

Here's how load testing in cloud computing typically works:

1. Creating Test Scenarios: Load testers define realistic scenarios that simulate user behavior and
interactions with the application. These scenarios could include actions like browsing web pages,
making API requests, submitting forms, or performing transactions.
2. Generating Load: Load testing tools or frameworks generate a significant amount of virtual
users or simulated traffic to replicate the expected user load on the application. These virtual
users perform the predefined actions simultaneously.
3. Scalability Testing: One advantage of cloud computing is the ability to easily scale resources up
or down. Load tests can take advantage of this by provisioning additional virtual machines or
resources during testing to simulate high user loads and measure system performance.
4. Performance Monitoring: Throughout the load test, various performance metrics such as
response times, throughput, latency, CPU and memory utilization, and error rates are monitored
and recorded.
5. Analyzing Results: Once the load test is complete, the collected performance metrics are
analyzed to assess the application's performance under different load levels. This helps identify
bottlenecks, resource constraints, and areas for optimization.
6. Optimization and Scaling: Based on the analysis, adjustments can be made to the application's
architecture, code, and infrastructure to address any performance issues. If the cloud platform
supports auto-scaling, load tests can help fine-tune the auto-scaling policies.
7. Repeat Testing: Load testing is often an iterative process. After making optimizations, the load
test can be run again to verify the improvements and ensure that the changes have the desired
impact on system performance.
8. Stress Testing: In addition to load testing, stress testing involves pushing the application beyond
its expected capacity to identify its breaking point. This helps determine how the system fails and
whether it recovers gracefully.
9. Failover and Recovery Testing: Cloud environments also allow testing failover and recovery
scenarios, where an instance or server fails, and traffic is automatically redirected to healthy
instances.
10. Realistic Environment Simulation: Cloud computing environments can simulate
geographically dispersed user bases, which is crucial for applications with a global user base.

Load testing in cloud computing offers several advantages, including the ability to test at scale
without investing in a dedicated infrastructure, flexibility to simulate various user loads, and
access to cloud-specific features like auto-scaling. However, it's important to design load tests
carefully to accurately reflect real-world scenarios and to choose appropriate testing tools and
methodologies to achieve meaningful results.
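
As a minimal, hedged sketch of the idea (not a substitute for dedicated tools such as JMeter or Locust), the snippet below fires a batch of concurrent requests at a placeholder URL and summarizes response times and errors; it assumes the third-party requests package is installed:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from statistics import mean

    import requests

    TARGET_URL = "https://example.com/"   # placeholder target, not a real system under test
    VIRTUAL_USERS = 20

    def one_request(_):
        start = time.perf_counter()
        response = requests.get(TARGET_URL, timeout=10)
        return time.perf_counter() - start, response.status_code

    with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
        results = list(pool.map(one_request, range(VIRTUAL_USERS)))

    latencies = [latency for latency, _ in results]
    errors = sum(1 for _, status in results if status >= 400)
    print(f"avg latency {mean(latencies):.3f}s, max {max(latencies):.3f}s, errors {errors}")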

Cloud Resource Ceiling

In cloud computing, a "resource ceiling" typically refers to the maximum allocation or limit
placed on various computing resources, such as CPU, memory, storage, and network bandwidth,
for a specific cloud service or instance. These limits are defined and enforced by the cloud
provider to ensure fair and efficient resource allocation among their customers. Here are some
common examples of resource ceilings in cloud computing:

1. Compute Resources (CPU and Memory): Cloud providers often allow users to select the
amount of CPU cores and memory for their virtual machines (VMs) or containers. Resource
ceilings may be set to prevent overcommitting these resources, ensuring that one user's workload
does not consume all available compute resources on a physical host.
2. Storage Quotas: Cloud storage services like Amazon S3, Google Cloud Storage, or Azure Blob
Storage may impose resource ceilings in terms of storage capacity, IOPS (Input/Output
Operations Per Second), or network bandwidth for data transfer. Users are often limited by their
subscription plan or can request higher limits if needed.
3. Network Bandwidth: Cloud providers may impose limits on the amount of incoming and
outgoing network traffic for a specific virtual machine or service. These limits help maintain
network performance and prevent one tenant from monopolizing network resources.
4. API Rate Limits: Cloud providers typically have rate limits on their APIs to prevent abuse and
ensure fair access for all users. These rate limits may vary depending on the type of API request
and the user's subscription level.
5. Scaling Limits: Some cloud services, such as auto-scaling groups or load balancers, may have
maximum and minimum scaling limits to control the number of instances or resources that can
be dynamically provisioned.
6. Resource Pools: In some cloud environments, resource ceilings can be set for resource pools,
which are groups of resources allocated to specific departments, projects, or teams. This helps
manage resource allocation within an organization.
7. Budget and Cost Controls: Cloud providers often offer budgeting and cost control features to
set spending limits or alerts, effectively acting as resource ceilings for financial expenditures.

It's important for cloud users to be aware of these resource ceilings and plan their deployments
accordingly. Exceeding resource limits can lead to degraded performance, unexpected costs, or
service interruptions. Many cloud providers offer tools and APIs to monitor resource usage and
adjust limits as needed, but these may require specific permissions and adherence to provider
policies.
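
To make the idea of an enforced ceiling concrete, the sketch below implements a simple token-bucket limiter of the kind that commonly sits behind API rate limits (point 4 above); the capacity and refill rate are made-up values, not any specific provider's limits:

    import time

    class TokenBucket:
        def __init__(self, capacity: int, refill_per_sec: float):
            self.capacity = capacity
            self.tokens = float(capacity)
            self.refill_per_sec = refill_per_sec
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            # Refill tokens for the elapsed time, never exceeding the ceiling.
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False

    bucket = TokenBucket(capacity=5, refill_per_sec=1.0)  # ~5 burst requests, 1/sec sustained
    for i in range(8):
        print(f"request {i}: {'accepted' if bucket.allow() else 'throttled'}")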

Network capacity
Network capacity in cloud computing refers to the amount of bandwidth and connectivity
available within a cloud environment. It is a critical aspect of cloud infrastructure, as it directly
impacts the performance, reliability, and scalability of cloud-based applications and services.
Here are key considerations related to network capacity in cloud computing:

1. Bandwidth: Bandwidth refers to the maximum amount of data that can be transmitted over a
network connection in a given period, usually measured in bits per second (bps), kilobits per
second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps). Cloud providers
offer various levels of bandwidth to meet the needs of different users and services. Adequate
bandwidth is essential for fast data transfer and low-latency communication.
2. Latency: Latency is the time it takes for data to travel from the source to the destination over a
network. Lower latency is crucial for applications that require real-time or near-real-time
communication, such as video conferencing, online gaming, and financial trading platforms.
Cloud providers often strive to reduce network latency by optimizing their data center locations
and infrastructure.
3. Scalability: Scalability in terms of network capacity involves the ability to increase or decrease
bandwidth and network resources dynamically based on demand. Cloud providers typically offer
scalable network solutions that allow users to adjust their network capacity as needed, which is
particularly useful for handling traffic spikes and accommodating growing workloads.
4. Virtual Private Cloud (VPC): Cloud providers often offer the concept of a Virtual Private
Cloud (VPC), which allows users to create isolated network environments within the cloud.
VPCs provide control over network routing, security, and resource allocation. Users can define
their own IP address ranges, subnets, and firewall rules, enhancing security and customization.
5. Content Delivery Networks (CDNs): CDNs are distributed networks of servers located in
multiple data centers around the world. They help improve the performance and reliability of
web applications by caching and serving content from servers geographically closer to end-users.
CDNs can significantly enhance a cloud application's network capacity and reduce latency.
6. Redundancy and High Availability: To ensure network reliability, cloud providers often build
redundancy into their network infrastructure. This redundancy includes multiple data centers,
redundant network paths, and failover mechanisms to minimize downtime in the event of
hardware failures or network issues.
7. Data Transfer Costs: It's important to consider the costs associated with data transfer in and out
of the cloud. Some cloud providers may charge for data egress (outbound data transfer) or have
tiered pricing based on the volume of data transferred. Understanding these costs is essential for
budgeting and cost management.
8. Security and Compliance: Cloud users must consider security and compliance requirements
when configuring network capacity. Features like network encryption, access control, and
compliance certifications (e.g., SOC 2, HIPAA) may be necessary to meet specific business
needs.

Overall, network capacity is a critical aspect of cloud infrastructure that directly impacts the
performance, availability, and cost-effectiveness of cloud-based applications and services. Cloud
users should carefully assess their network requirements and work with their chosen cloud
provider to configure and optimize network resources accordingly.
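
A quick back-of-the-envelope calculation shows why bandwidth matters when planning data transfers; the figures below are illustrative only, and real throughput is usually lower because of protocol overhead:

    # Estimate transfer time from data size and link bandwidth (illustrative values).
    data_size_gb = 10          # data to move, in gigabytes
    bandwidth_mbps = 500       # link bandwidth, in megabits per second

    data_size_megabits = data_size_gb * 1000 * 8      # GB -> megabits (decimal units)
    transfer_seconds = data_size_megabits / bandwidth_mbps

    print(f"~{transfer_seconds:.0f} s (~{transfer_seconds / 60:.1f} min) "
          f"to move {data_size_gb} GB at {bandwidth_mbps} Mbps")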

Server and instance type:


What is an Instance in Cloud Computing?

An instance in cloud computing is a server resource provided by third-party cloud services.


While you can manage and maintain physical server resources on premises, it is costly and
inefficient to do so. Cloud providers maintain hardware in their data centers and give you virtual
access to compute resources in the form of an instance. You can use the cloud instance for
running compute-intensive workloads like containers, databases, microservices, and virtual
machines.

Why are cloud instances important?

A cloud instance allows software developers to scale beyond traditional physical boundaries.
Unlike physical servers, developers don’t need to worry about the underlying hardware when
deploying workloads on a cloud instance. There are two main benefits of cloud instances.

Scalability

Developers scale computing resources in a cloud instance according to their workload
requirements. For example, software developers deploy an application on an instance. As the app
gains more users, it experiences heavy traffic that slows down response times. Developers can
then scale the cloud resources vertically by increasing the CPU, memory, storage, and network
resources allocated to the particular instance, or horizontally by adding more instances.

Fault tolerance

Organizations create redundancy by using multiple duplicate instances for backup. They are
especially useful for managing memory-intensive workloads like data processing. For example,
an application can still run on other instances in the US and Asia if a cloud instance hosted in
Europe fails.

What types of workloads can you run on a cloud instance?

Compute intensive

You can run high performance computing workloads on instances, such as distributed analytics,
machine learning (ML) algorithms, batch processing, ad serving, video encoding, scientific
modeling, and scalable multi-player gaming applications.

Memory intensive

Instances are useful for running memory-intensive workloads such as real-time data ingestion,
distributed in-memory caches, big data analytics, memory-intensive enterprise applications, and
high-performance databases.

Graphics intensive

Applications that render graphics require high processing and storage capabilities. You can run
virtual reality applications, 3D rendering, animation, computer vision, video streaming, and other
graphics workloads on a cloud instance.

How do cloud instances work?

A cloud instance abstracts physical computing infrastructure using virtual machine technology. It
is similar to having your own server machine in the cloud. You basically create and manage your
own virtual server instance in the cloud computing environment. You can configure this cloud
server to meet your memory, graphics processing, CPU, and other requirements.

The steps for creating a new instance are:

1. You use a visual interface or API calls to programmatically create instances


2. You specify the resources you require or use pre-existing instance types that your cloud provider
defines
3. You can then host your own operating system and other software applications on an instance

The cloud provider will typically charge you only for the resources you actually use. You can
create and destroy as many instances as you like. For example, you can use Amazon Machine
Images (AMI) to configure and launch cloud instances on AWS.
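
As a hedged sketch of those steps using API calls, the snippet below launches a single instance with the AWS SDK for Python (boto3); it assumes boto3 is installed and AWS credentials are configured, and the AMI ID is a placeholder rather than a real image:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
        InstanceType="t2.micro",           # a pre-defined instance type
        MinCount=1,
        MaxCount=1,
    )
    print("Launched:", instances[0].id)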

What is the instance life cycle?

Developers use a series of steps to set up, run, manage, and stop an instance. The following
stages describe an instance life cycle.

Provisioning

Provisioning an instance means setting the compute resources that the instance requires. When
developers launch a provisioned instance, it goes into a pending stage.

Running

At this stage, the instance is deployed and active on the cloud. Developers can deploy workloads
such as containerized applications on running instances. They are billed the moment an instance
starts running.

Stopping

Developers might stop an instance to troubleshoot issues that affect the workloads that run on it.
When they stop an instance, it enters the stopping stage before being completely halted.
Developers can modify the settings of the instance while it is stopped.

Terminated

Developers can shut down an instance when it is no longer in use. By shutting down an instance,
the cloud platform prepares to terminate the instance and remove its corresponding data in the
instance store volume. The instance store volume is temporary storage that resides on the same
computer as the instance.
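
Continuing the boto3 sketch from the previous section, the snippet below walks an already-launched instance through the stages described above; the instance ID is a placeholder:

    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instance = ec2.Instance("i-0123456789abcdef0")   # placeholder instance ID

    print(instance.state["Name"])   # e.g. "pending" or "running"
    instance.stop()                 # enters "stopping", then "stopped"
    instance.wait_until_stopped()
    instance.start()                # back to "running" (settings can be changed while stopped)
    instance.wait_until_running()
    instance.terminate()            # "shutting-down", then "terminated"; instance-store data is discarded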
