Cloud Computing Unit 2
Virtualization
Virtualization is a technique for separating a service from the underlying physical delivery
of that service. It is the process of creating a virtual version of something, such as computer
hardware, and it was initially developed during the mainframe era. It uses specialized
software to create a virtual, software-based version of a computing resource rather than the
actual version of that resource. With the help of virtualization, multiple operating systems
and applications can run on the same machine and the same hardware at the same time,
increasing the utilization and flexibility of the hardware.
In other words, one of the main cost-effective, hardware-reducing, and energy-saving
techniques used by cloud providers is Virtualization. Virtualization allows sharing of a single
physical instance of a resource or an application among multiple customers and organizations
at one time. It does this by assigning a logical name to physical storage and providing a pointer
to that physical resource on demand. The term virtualization is often synonymous with
hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-
as-a-Service (IaaS) solutions for cloud computing. Moreover, virtualization technologies
provide a virtual environment for not only executing applications but also for storage, memory,
and networking.
Host Machine: The machine on which the virtual machine is going to be built is known as
Host Machine.
Guest Machine: The virtual machine is referred to as a Guest Machine.
Benefits of Virtualization
More flexible and efficient allocation of resources.
Enhanced development productivity.
It lowers the cost of IT infrastructure.
Remote access and rapid scalability.
High availability and disaster recovery.
Pay-per-use of the IT infrastructure on demand.
Enables running multiple operating systems.
Drawbacks of Virtualization
High Initial Investment: Virtualized clouds require a very high initial investment, although
they do help reduce companies' costs over time.
Characteristics of virtualization:
One of the most significant characteristics of virtualization is the ability to abstract physical
resources. This means that virtual machines can be created that are completely independent of
the underlying physical hardware. This allows multiple virtual machines to run on the same
physical machine, each with their own operating system and applications. This is known as
server virtualization, and it is the most common use of virtualization today.
For example, a single physical server can host multiple virtual machines, each running its own
operating system and applications. This allows for efficient use of resources, as a single physical
machine can be used to run multiple applications, rather than having to purchase and maintain
multiple physical servers.
Isolation of Resources
Another key characteristic of virtualization is the isolation of resources. This means that each
virtual machine is isolated from the others, and they cannot access each other's resources. This
provides security and stability, as a problem with one virtual machine will not affect the others.
For example, a company may use virtualization to run their email server, web server, and
database server on the same physical machine. If the email server were to be compromised, the
web server and database server would still be protected and continue to function properly.
Flexibility
Virtualization also provides flexibility in terms of resource allocation. Virtual machines can be
easily created, deleted, and modified as needed. This allows for easy scaling, as more resources
can be allocated to a virtual machine as needed. It also allows for easier testing and development,
as virtual machines can be created to test new software and configurations without affecting the
production environment.
For example, a company may use virtualization to create a test environment for their new
software. They can create a virtual machine with the same specifications as their production
environment and test the software without affecting their live systems. Once the software has
been tested and is ready for production, the virtual machine can be deleted, and the software can
be deployed to the production environment.
Portability
Virtualization also provides portability, as virtual machines can be easily moved between
physical machines. This allows for easy disaster recovery, as virtual machines can be quickly
moved to a different physical machine in the event of a disaster. It also allows for easy migration
between physical machines, as virtual machines can be moved to new hardware without affecting
the applications or data.
For example, a company may use virtualization to create a disaster recovery plan. They can
create a virtual machine that contains all of their important data and applications and store it on a
separate physical machine. In the event of a disaster, the virtual machine can be quickly moved
to a new physical machine, and the company can continue to operate as normal.
Networking
Virtualization also extends to networking: virtual networks can be created and managed in
software, independently of the physical network hardware.
For example, a company may use virtualization to create a virtual network for their development
team. They can create a virtual network that connects all of their development virtual machines,
allowing them to easily communicate and share resources. They can also connect this virtual
network to their physical network, allowing the development team to access the internet and
other resources. Additionally, this virtual network can be isolated from the rest of the company's
network for added security.
Snapshots
Virtualization also provides the ability to create snapshots of virtual machines. This allows for
easy backup and recovery of virtual machines, as well as the ability to quickly revert to a
previous state. This is especially useful for testing and development, as it allows for easy
experimentation without the risk of losing data or compromising the production environment.
For example, a company may use virtualization to test a new software update. They can create a
snapshot of their virtual machine before installing the update, and if the update causes any issues,
they can easily revert to the previous snapshot. This eliminates the need to manually restore data
and configurations, saving both time and resources.
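The snapshot-and-revert workflow above can be sketched in a few lines of Python. This is a toy model, not a real hypervisor API: the `VirtualMachine` class and its state dictionary are invented for illustration.

```python
import copy

class VirtualMachine:
    """Toy stand-in for a VM whose disk/config state can be snapshotted."""

    def __init__(self, name):
        self.name = name
        self.state = {"packages": ["base-os"], "config": {}}
        self._snapshots = {}

    def take_snapshot(self, label):
        # Deep-copy so later changes cannot alter the saved state.
        self._snapshots[label] = copy.deepcopy(self.state)

    def revert(self, label):
        self.state = copy.deepcopy(self._snapshots[label])

vm = VirtualMachine("test-vm")
vm.take_snapshot("before-update")
vm.state["packages"].append("update-v2")   # apply the risky update
vm.revert("before-update")                 # the update misbehaved; roll back
print(vm.state["packages"])                # ['base-os']
```

In real deployments the same idea is exposed by hypervisor tooling, for example VirtualBox's `VBoxManage snapshot` subcommands (`take`, `restore`).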
Desktop Virtualization
Desktop virtualization is another form of virtualization that allows multiple virtual desktops to
run on a single physical machine. This allows for easy deployment and management of desktops,
as well as the ability to access desktops remotely. This is especially useful for companies with a
mobile workforce, as it allows employees to access their desktop from any location.
For example, a company may use desktop virtualization to provide remote access for their sales
team. The sales team can access their virtual desktop from anywhere, allowing them to work on
presentations, access customer data, and communicate with the rest of the team. This eliminates
the need for remote employees to carry a laptop or access company data on a personal computer,
improving security and productivity.
Taxonomy of virtualization
– System-level – implemented directly on hardware, with little or no requirement for an
existing operating system.
OR,
Virtualization covers a wide range of emulation techniques that are applied to different areas of
computing. A classification of these techniques helps us better understand their characteristics
and uses:
● Execution Environments: provide support for the execution of programs, e.g., an OS or an
application.
○ Process Level: implemented on top of an existing OS that has full control of the hardware.
○ System Level: implemented directly on hardware and does not require support from an
existing OS.
● Storage: Storage virtualization is a system administration practice that allows decoupling the
physical organization of the hardware from its logical representation.
● Networks: Network virtualization combines hardware appliances and specific software for the
creation and management of a virtual network.
OR,
Taxonomy of virtualization
Virtual machines are broadly classified into two types: System Virtual Machines (also
known as Virtual Machines) and Process Virtual Machines (also known as Application
Virtual Machines). The classification is based on their usage and degree of similarity to
the linked physical machine. A system VM mimics the whole system hardware stack
and allows for the execution of a whole operating system. A process VM, on the other
hand, provides a layer above an operating system that replicates the programming
environment for the execution of specific processes.
A System Virtual Machine, such as VirtualBox, offers a full system platform that allows
the operation of a whole operating system (OS).
Virtual Machines are used to distribute and designate suitable system resources to
software (which might be several operating systems or an application), and the software
is restricted to the resources provided by the VM. The actual software layer that allows
virtualization is the Virtual Machine Monitor (also known as Hypervisor). Hypervisors
are classified into two groups based on their relationship to the underlying hardware.
A native VM (hypervisor) takes direct control of the underlying hardware, whereas a
hosted VM is a separate software layer that runs within the operating system and
therefore has an indirect link to the underlying hardware.
The system VM abstracts the Instruction Set Architecture, which differs slightly from
that of the actual hardware platform. The primary benefits of system VM include
consolidation (it allows multiple operating systems to coexist on a single computer
system with strong isolation from each other), application provisioning, maintenance,
high availability, and disaster recovery, as well as sandboxing, faster reboot, and
improved debugging access.
o Java Virtual Machine (JVM) and the Common Language Runtime (CLR) are two popular
examples of Process VMs, used to virtualize the Java programming language and
the .NET Framework programming environment, respectively.
S.NO | Cloud Computing | Virtualization
7. | The total cost of cloud computing is higher than that of virtualization. | The total cost of virtualization is lower than that of cloud computing.
8. | Cloud computing requires many dedicated hardware resources. | A single piece of dedicated hardware can do a great job in it.
From the service provider's point of view, they virtualize the hardware using hardware
virtualization, which decreases the amount of hardware that must be provided to users and
results in lower costs. Before virtualization, companies and organizations had to set up their
own servers, which required extra space to house them, engineers to check their performance,
and extra hardware cost. With virtualization, all these limitations are removed by cloud
vendors, who provide physical services without the customer setting up any physical
hardware system.
Availability increases with Virtualization –
One of the main benefits of virtualization is that it provides advanced features that allow
virtual instances to be available at all times. It also has the capability to move a virtual
instance from one virtual server to another, which is a very tedious and risky task in a
server-based system. During migration of data from one server to another, it ensures the
data's safety. Also, we can access information from any location, at any time, from any device.
Disaster Recovery is efficient and easy –
With the help of virtualization, data recovery, backup, and duplication become very easy. In
the traditional method, if a server system is damaged in a disaster, the chances of data
recovery are very low. But with virtualization tools, real-time data backup, recovery, and
mirroring become easy tasks and offer assurance of near-zero data loss.
Virtualization saves Energy –
Virtualization helps save energy because, when moving from physical servers to virtual
servers, the number of servers decreases; monthly power and cooling costs therefore
decrease, which saves money as well. As the cooling load falls, the carbon produced by the
devices also decreases, resulting in a cleaner, less polluted environment.
Quick and Easy Set up –
In traditional methods, setting up physical systems and servers is very time-consuming: first
they must be purchased in bulk, then you wait for shipment, then more time goes into setting
them up and installing the required software. With virtualization, the entire process is done
in far less time, resulting in a productive setup.
Cloud Migration becomes easy –
Most companies that have already spent a lot on servers have doubts about shifting to the
cloud. But it is more cost-effective to shift to cloud services, because all the data present on
their servers can easily be migrated to a cloud server, saving on maintenance charges, power
consumption, cooling costs, the cost of a server maintenance engineer, and so on.
Cons of Virtualization:
Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a
“Native Hypervisor” or “Bare metal hypervisor”. It does not require any base server
operating system. It has direct access to hardware resources. Examples of Type 1
hypervisors include VMware ESXi, Citrix XenServer, and Microsoft Hyper-V
hypervisor.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a
"Hosted Hypervisor". Such hypervisors do not run directly on the underlying hardware;
rather, they run as an application on a host system (physical machine). Basically, the
software is installed on an operating system, and the hypervisor asks the operating
system to make hardware calls. Examples of Type 2 hypervisors include VMware Player
and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The
Type 2 hypervisor is very useful for engineers and security analysts (for checking
malware or malicious source code and newly developed applications).
Xen is a hypervisor that enables the simultaneous creation, execution, and management of
multiple virtual machines on one physical computer. Xen was first released in 2003 and was
developed by XenSource, which was purchased by Citrix Systems in 2007. It is an open-source
hypervisor and also comes in an enterprise version.
Xen Hypervisor
Xen is primarily a bare-metal, type-1 hypervisor that can be directly installed on computer
hardware without the need for a host operating system. Because it’s a type-1 hypervisor, Xen
controls, monitors and manages the hardware, peripheral and I/O resources directly. Guest
virtual machines request Xen to provision any resource and must install Xen virtual device
drivers to access hardware components. Xen supports multiple instances of the same or different
operating systems with native support for most operating systems, including Windows and
Linux. Moreover, Xen can be used on x86, IA-32 and ARM processor architecture.
VMware:
VMware is a virtualization and cloud computing software provider based in Palo Alto, Calif.
Founded in 1998, VMware is a subsidiary of Dell Technologies. EMC Corporation originally
acquired VMware in 2004; EMC was later acquired by Dell Technologies in 2016. VMware
bases its virtualization technologies on its bare-metal hypervisor ESX/ESXi in x86 architecture.
With VMware server virtualization, a hypervisor is installed on the physical server to allow for
multiple virtual machines (VMs) to run on the same physical server. Each VM can run its own
operating system (OS), which means multiple OSes can run on one physical server. All the VMs
on the same physical server share resources, such as networking and RAM. In 2019, VMware
added support to its hypervisor to run containerized workloads in a Kubernetes cluster in a
similar way. These types of workloads can be managed by the infrastructure team in the same
way as virtual machines, and DevOps teams can deploy containers as they are used to.
Microsoft Hyper-V
Ever since Microsoft Hyper-V was released on Windows Server 2008, this hypervisor has been
one of the most popular virtualization options on the market. With Microsoft Hyper-V you can
create and run virtual machines without maintaining physical hardware.
The growth of virtualized working environments has led to an increased usage of Microsoft
Hyper-V and performance monitoring tools to help manage abstract resources. In this article,
we’re going to look at what Microsoft Hyper-V is.
Hypervisors like Hyper-V are used for many different reasons depending on the environment in
which they are deployed. In general, Hyper-V helps network administrators create a virtualized
computing environment in a format that is easy to manage. From one piece of hardware an
administrator can manage a range of virtual machines without having to change computers.
Hyper-V can be used for many different purposes but it is most commonly used to create private
cloud environments. You deploy Hyper-V on servers to produce virtual resources for your users.
Microsoft Hyper-V is important because it allows users to transcend the limitations of
physical hardware. Managing physical hardware is incredibly complex for larger organizations,
which often have to manage disparate and out-of-date hardware. Managing these devices is not
only time-consuming for administrators but also very costly.
Purchasing new hardware adds up very quickly, particularly when you take into account the
amount of extra office space needed to house these devices. Enterprises have started trying to
reduce costs and manage devices more efficiently. At the same time, modern devices have the
storage, CPU, and RAM to sustain a variety of virtual OSes from one location, making this
even more effective.
Metrics baseline
A metrics baseline consists of data collected in previous projects. It can be used to set a goal
and to determine whether trends show a likelihood of meeting that goal. Baselines become an
essential piece of a key performance indicator.
Baseline Measurement
Baseline measurement, or baselining as it is called for short, is the process of establishing the
starting point of any process or metric, from which the improvement or impact of any change
measure is calculated. It is used to gauge how effective an improvement or change initiative is.
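As a minimal sketch of baselining, the Python below builds a baseline from historical measurements and flags a new value that deviates from it. The metric (page-load time) and the two-standard-deviation rule are illustrative assumptions, not part of any standard.

```python
from statistics import mean, stdev

def exceeds_baseline(history, new_value, k=2.0):
    """Flag a measurement that deviates more than k standard
    deviations from the baseline established by past data."""
    baseline = mean(history)
    spread = stdev(history)
    return abs(new_value - baseline) > k * spread

# Baseline: average page-load time (ms) collected in previous releases.
history = [120, 118, 125, 122, 119, 121]
print(exceeds_baseline(history, 160))  # True  (well above baseline)
print(exceeds_baseline(history, 123))  # False (within normal variation)
```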
Cloud Elasticity
Elasticity works in such a way that when the number of client accesses grows, applications are
automatically provisioned extra computing, storage, and networking resources (such as CPU,
memory, storage, or bandwidth), and when there are fewer clients, those resources are
automatically reduced as per requirement.
Elasticity in the cloud is a well-known feature associated with scale-out solutions (horizontal
scaling), which allows resources to be dynamically added or removed when required.
It is mostly associated with public cloud resources and is usually featured in pay-per-use or
pay-as-you-go services.
Elasticity is the ability to grow or shrink infrastructure resources (such as compute, storage,
or network) dynamically as needed, to adapt to workload changes in the applications in an
autonomic manner.
It maximizes resource utilization, which results in savings in infrastructure costs overall.
Depending on the environment, elasticity can be applied to resources in the system that are
not limited to hardware: software, networking, QoS, and other policies.
Elasticity depends entirely on the environment, as in some cases it may become a negative
trait when the performance of certain applications must be guaranteed.
It is most commonly used in pay-per-use public cloud services, where IT managers are
willing to pay only for the duration for which they consumed the resources.
Example: Consider an online shopping site whose transaction workload increases during a
festive season like Christmas. For this specific period of time, the resources need to spike up.
To handle this kind of situation, we can go for a cloud elasticity service rather than cloud
scalability. As soon as the season is over, the deployed resources can be requested for
withdrawal.
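The festive-season example can be expressed as a small autoscaling rule: size the fleet so that average CPU lands near a target utilization. This is a hypothetical policy, not any provider's API; the 60% target and the instance bounds are assumed values.

```python
import math

def desired_instances(current, cpu_percent, target=60, min_n=2, max_n=20):
    """Proportional autoscaling rule: scale the instance count so the
    observed CPU load would be spread to roughly `target` percent."""
    wanted = math.ceil(current * cpu_percent / target)
    return max(min_n, min(max_n, wanted))   # clamp to fleet bounds

# Festive-season spike: 4 instances running at 95% average CPU.
print(desired_instances(4, 95))   # 7  -> scale out
# Season over: 10 instances idling at 10% CPU.
print(desired_instances(10, 10))  # 2  -> scale back in
```

The same rule handles both directions of elasticity: resources are added under load and withdrawn when demand falls.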
Cloud Scalability: Cloud scalability is used to handle the growing workload where good
performance is also needed to work efficiently with software or applications. Scalability is
commonly used where the persistent deployment of resources is required to handle the
workload statically.
Example: Consider that you are the owner of a company whose database size was small in the
early days, but as time passed your business grew and the size of your database increased as
well. In this case, you just need to request that your cloud service vendor scale up your
database capacity to handle the heavy workload.
This is quite different from what you read above about cloud elasticity. Scalability is used to
fulfill the static needs of the organization, while elasticity is used to fulfill its dynamic needs.
Scalability is a similar kind of cloud service where the customers pay per use. So, in
conclusion, we can say that scalability is useful where the workload remains high and
increases statically.
Types of Scalability:
1. Vertical Scalability: In this kind of scaling, resources are added to a single node, for
example extra CPU or RAM.
2. Horizontal Scalability: In this kind of scaling, the resources are added in a horizontal row,
i.e., more nodes or machines.
3. Diagonal Scalability –
It is a mixture of both Horizontal and Vertical scalability where the resources are added both
vertically and horizontally.
4. Elasticity is short-term planning, adopted just to deal with an unexpected increase in
demand or seasonal demand. Scalability is long-term planning, adopted just to deal with an
expected increase in demand.
System metrics
System metrics are measurement types found in the system. Each resource that can be monitored
for performance, availability, reliability, and other attributes has one or more metrics about
which data can be collected. Sample metrics include the total CPU capacity of a node, or the
amount of CPU usage on a node.
In cloud computing, a system metric refers to a measurable quantity or parameter that provides
insights into the performance, health, and utilization of various components within a cloud-based
system. These metrics help cloud service providers and users monitor and manage the
infrastructure, applications, and services hosted on the cloud platform. System metrics play a
crucial role in ensuring optimal performance, resource allocation, and scalability. Here are some
common system metrics in cloud computing:
1. CPU Utilization: This metric measures the percentage of CPU capacity being used by a virtual
machine (VM) or a container. Monitoring CPU utilization helps identify performance
bottlenecks and indicates if additional resources are required.
2. Memory Utilization: This metric tracks the amount of available memory being used by
applications and processes. Monitoring memory usage helps prevent out-of-memory errors and
ensures efficient memory allocation.
3. Network Throughput: Network throughput measures the amount of data transmitted over a
network connection within a given time period. It helps evaluate network performance and detect
potential network congestion.
4. Disk I/O Operations: Disk I/O metrics track the number of read and write operations to storage
devices. Monitoring disk I/O helps optimize storage performance and prevent data access delays.
5. Latency: Latency is the time delay between a request and the corresponding response.
Monitoring latency helps assess the responsiveness of applications and services.
6. Request Rate: This metric measures the rate at which requests or transactions are being
processed by a system. It's particularly important for web servers and APIs.
7. Availability/Uptime: Availability is the percentage of time that a service or resource is
operational and accessible. Monitoring uptime helps ensure that services meet their SLAs
(Service Level Agreements).
8. Load Average: Load average indicates the average number of processes in the system's run
queue over a certain time period. It provides insights into the system's workload and potential
resource contention.
9. Resource Allocation: Metrics related to the allocation of CPU, memory, storage, and other
resources help ensure that applications have the necessary resources to function properly without
resource starvation.
10. Elasticity Metrics: These metrics track how well a system scales in response to changing
demands. They include metrics related to auto-scaling events, instance provisioning, and
resource allocation.
11. Error Rates: Monitoring the frequency of errors, exceptions, and failures helps identify issues
affecting application stability and user experience.
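A monitoring tool's threshold check over metrics like these can be sketched as below. The metric names and threshold values are hypothetical examples, not any provider's defaults.

```python
# Hypothetical alerting thresholds for a few common system metrics.
THRESHOLDS = {
    "cpu_percent": 80.0,
    "memory_percent": 90.0,
    "error_rate": 0.01,      # 1% of requests
    "latency_ms": 250.0,
}

def breached(sample):
    """Return the metrics in `sample` that exceed their threshold."""
    return {name: value for name, value in sample.items()
            if name in THRESHOLDS and value > THRESHOLDS[name]}

sample = {"cpu_percent": 91.5, "memory_percent": 42.0,
          "error_rate": 0.002, "latency_ms": 310.0}
print(breached(sample))  # {'cpu_percent': 91.5, 'latency_ms': 310.0}
```

A real monitoring stack would collect samples continuously and feed checks like this into alerting and auto-scaling decisions.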
Cloud service providers often offer monitoring and analytics tools that collect and display these
system metrics in real-time or over specific time intervals. These tools enable cloud
administrators and users to make informed decisions about resource provisioning, application
optimization, and troubleshooting.
Load Testing
1. Creating Test Scenarios: Load testers define realistic scenarios that simulate user behavior and
interactions with the application. These scenarios could include actions like browsing web pages,
making API requests, submitting forms, or performing transactions.
2. Generating Load: Load testing tools or frameworks generate a significant amount of virtual
users or simulated traffic to replicate the expected user load on the application. These virtual
users perform the predefined actions simultaneously.
3. Scalability Testing: One advantage of cloud computing is the ability to easily scale resources up
or down. Load tests can take advantage of this by provisioning additional virtual machines or
resources during testing to simulate high user loads and measure system performance.
4. Performance Monitoring: Throughout the load test, various performance metrics such as
response times, throughput, latency, CPU and memory utilization, and error rates are monitored
and recorded.
5. Analyzing Results: Once the load test is complete, the collected performance metrics are
analyzed to assess the application's performance under different load levels. This helps identify
bottlenecks, resource constraints, and areas for optimization.
6. Optimization and Scaling: Based on the analysis, adjustments can be made to the application's
architecture, code, and infrastructure to address any performance issues. If the cloud platform
supports auto-scaling, load tests can help fine-tune the auto-scaling policies.
7. Repeat Testing: Load testing is often an iterative process. After making optimizations, the load
test can be run again to verify the improvements and ensure that the changes have the desired
impact on system performance.
8. Stress Testing: In addition to load testing, stress testing involves pushing the application beyond
its expected capacity to identify its breaking point. This helps determine how the system fails and
whether it recovers gracefully.
9. Failover and Recovery Testing: Cloud environments also allow testing failover and recovery
scenarios, where an instance or server fails, and traffic is automatically redirected to healthy
instances.
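Steps 1-4 above can be sketched with Python's standard library alone. The `fake_request` function is a stand-in for a real HTTP call, and the user counts are illustrative; real load tests would use a dedicated tool and far higher volumes.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for a real HTTP call; sleeps to simulate server latency."""
    time.sleep(0.01)
    return 200

def load_test(virtual_users, requests_per_user):
    latencies = []

    def user_session():
        # Each virtual user performs its predefined actions in sequence.
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_request()
            latencies.append(time.perf_counter() - start)

    # Generate load: all virtual users run concurrently.
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(user_session)

    # Performance monitoring: summarize the recorded metrics.
    return {"requests": len(latencies),
            "avg_latency_s": statistics.mean(latencies),
            "p95_latency_s": sorted(latencies)[int(0.95 * len(latencies))]}

results = load_test(virtual_users=5, requests_per_user=10)
print(results["requests"])  # 50
```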
Load testing in cloud computing offers several advantages, including the ability to test at scale
without investing in a dedicated infrastructure, flexibility to simulate various user loads, and
access to cloud-specific features like auto-scaling. However, it's important to design load tests
carefully to accurately reflect real-world scenarios and to choose appropriate testing tools and
methodologies to achieve meaningful results.
Cloud Resource Ceiling
In cloud computing, a "resource ceiling" typically refers to the maximum allocation or limit
placed on various computing resources, such as CPU, memory, storage, and network bandwidth,
for a specific cloud service or instance. These limits are defined and enforced by the cloud
provider to ensure fair and efficient resource allocation among their customers. Here are some
common examples of resource ceilings in cloud computing:
1. Compute Resources (CPU and Memory): Cloud providers often allow users to select the
amount of CPU cores and memory for their virtual machines (VMs) or containers. Resource
ceilings may be set to prevent overcommitting these resources, ensuring that one user's workload
does not consume all available compute resources on a physical host.
2. Storage Quotas: Cloud storage services like Amazon S3, Google Cloud Storage, or Azure Blob
Storage may impose resource ceilings in terms of storage capacity, IOPS (Input/Output
Operations Per Second), or network bandwidth for data transfer. Users are often limited by their
subscription plan or can request higher limits if needed.
3. Network Bandwidth: Cloud providers may impose limits on the amount of incoming and
outgoing network traffic for a specific virtual machine or service. These limits help maintain
network performance and prevent one tenant from monopolizing network resources.
4. API Rate Limits: Cloud providers typically have rate limits on their APIs to prevent abuse and
ensure fair access for all users. These rate limits may vary depending on the type of API request
and the user's subscription level.
5. Scaling Limits: Some cloud services, such as auto-scaling groups or load balancers, may have
maximum and minimum scaling limits to control the number of instances or resources that can
be dynamically provisioned.
6. Resource Pools: In some cloud environments, resource ceilings can be set for resource pools,
which are groups of resources allocated to specific departments, projects, or teams. This helps
manage resource allocation within an organization.
7. Budget and Cost Controls: Cloud providers often offer budgeting and cost control features to
set spending limits or alerts, effectively acting as resource ceilings for financial expenditures.
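A resource ceiling can be modeled as an admission check: a request is granted only if it keeps usage under every limit. The `ResourcePool` class and its limits below are invented for illustration, not a real provider's quota API.

```python
class ResourcePool:
    """Hypothetical per-tenant resource ceiling: requests that would
    push usage past any limit are rejected outright."""

    def __init__(self, ceilings):
        self.ceilings = ceilings               # e.g. {"vcpu": 16, "ram_gb": 64}
        self.used = {k: 0 for k in ceilings}

    def allocate(self, **request):
        # Check every requested resource before committing any of them.
        for res, amount in request.items():
            if self.used[res] + amount > self.ceilings[res]:
                return False                   # ceiling would be exceeded
        for res, amount in request.items():
            self.used[res] += amount
        return True

pool = ResourcePool({"vcpu": 16, "ram_gb": 64})
print(pool.allocate(vcpu=8, ram_gb=32))   # True
print(pool.allocate(vcpu=12, ram_gb=16))  # False: 8 + 12 > 16 vCPUs
```

Checking all resources before committing any of them keeps a rejected request from partially consuming the quota.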
It's important for cloud users to be aware of these resource ceilings and plan their deployments
accordingly. Exceeding resource limits can lead to degraded performance, unexpected costs, or
service interruptions. Many cloud providers offer tools and APIs to monitor resource usage and
adjust limits as needed, but these may require specific permissions and adherence to provider
policies.
Network capacity
Network capacity in cloud computing refers to the amount of bandwidth and connectivity
available within a cloud environment. It is a critical aspect of cloud infrastructure, as it directly
impacts the performance, reliability, and scalability of cloud-based applications and services.
Here are key considerations related to network capacity in cloud computing:
1. Bandwidth: Bandwidth refers to the maximum amount of data that can be transmitted over a
network connection in a given period, usually measured in bits per second (bps), kilobits per
second (Kbps), megabits per second (Mbps), or gigabits per second (Gbps). Cloud providers
offer various levels of bandwidth to meet the needs of different users and services. Adequate
bandwidth is essential for fast data transfer and low-latency communication.
2. Latency: Latency is the time it takes for data to travel from the source to the destination over a
network. Lower latency is crucial for applications that require real-time or near-real-time
communication, such as video conferencing, online gaming, and financial trading platforms.
Cloud providers often strive to reduce network latency by optimizing their data center locations
and infrastructure.
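One reason data center location matters is that propagation delay is bounded by physics: light in fibre travels at roughly two-thirds the speed of light, about 200,000 km/s. A rough lower-bound estimate, under that assumption:

```python
# Sketch: theoretical minimum round-trip time from physical distance alone.
# Assumes signals travel ~200,000 km/s in fibre (about 2/3 of c); real RTTs
# are higher because of routing, queuing, and processing delays.

FIBRE_KM_PER_MS = 200.0  # ~200,000 km/s expressed as km per millisecond

def min_rtt_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBRE_KM_PER_MS

print(min_rtt_ms(6000))  # a ~6,000 km path can never beat 60.0 ms RTT
```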
3. Scalability: Scalability in terms of network capacity involves the ability to increase or decrease
bandwidth and network resources dynamically based on demand. Cloud providers typically offer
scalable network solutions that allow users to adjust their network capacity as needed, which is
particularly useful for handling traffic spikes and accommodating growing workloads.
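The scaling behaviour described here can be sketched as a simple rule: derive a desired capacity from current load, then clamp it to configured minimum and maximum limits (the numbers below are illustrative assumptions, not provider defaults):

```python
import math

# Sketch: a load-based scaling decision clamped to min/max instance limits.
# MIN/MAX and the per-instance capacity are assumed example values.
MIN_INSTANCES, MAX_INSTANCES = 2, 10
REQUESTS_PER_INSTANCE = 500  # assumed requests/sec one instance can handle

def desired_instances(current_rps: float) -> int:
    needed = math.ceil(current_rps / REQUESTS_PER_INSTANCE)
    return max(MIN_INSTANCES, min(MAX_INSTANCES, needed))

print(desired_instances(120))   # light load -> floor of 2
print(desired_instances(3200))  # -> 7
print(desired_instances(9000))  # heavy load -> capped at 10
```

The clamp is what connects autoscaling to the resource ceilings discussed earlier: demand drives the target, but the configured limits always have the final say.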
4. Virtual Private Cloud (VPC): Cloud providers often offer the concept of a Virtual Private
Cloud (VPC), which allows users to create isolated network environments within the cloud.
VPCs provide control over network routing, security, and resource allocation. Users can define
their own IP address ranges, subnets, and firewall rules, enhancing security and customization.
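The idea of defining your own IP address ranges and subnets can be illustrated with Python's standard `ipaddress` module. The `10.0.0.0/16` range is just an example private range:

```python
# Sketch: carving a VPC-style address range into subnets with the
# standard library. 10.0.0.0/16 is an illustrative private range.
import ipaddress

vpc = ipaddress.ip_network("10.0.0.0/16")
subnets = list(vpc.subnets(new_prefix=24))  # split into /24 subnets

print(len(subnets))                                      # 256 subnets
print(subnets[0])                                        # 10.0.0.0/24
print(ipaddress.ip_address("10.0.0.42") in subnets[0])   # True
```

A real VPC configuration layers routing tables and firewall rules on top, but the addressing arithmetic is exactly this kind of prefix subdivision.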
5. Content Delivery Networks (CDNs): CDNs are distributed networks of servers located in
multiple data centers around the world. They help improve the performance and reliability of
web applications by caching and serving content from servers geographically closer to end-users.
CDNs can significantly enhance a cloud application's network capacity and reduce latency.
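The core CDN routing idea, serving each request from the closest edge location, can be sketched as a nearest-neighbour choice. The point-of-presence names and coordinates are illustrative; real CDNs use anycast routing and live latency measurements rather than raw distance:

```python
# Sketch: pick the geographically closest edge location for a user.
# PoP names and coordinates are made-up examples.
import math

EDGES = {
    "frankfurt": (50.1, 8.7),
    "virginia": (38.9, -77.0),
    "singapore": (1.3, 103.8),
}

def nearest_edge(user_lat: float, user_lon: float) -> str:
    return min(EDGES, key=lambda pop: math.dist((user_lat, user_lon), EDGES[pop]))

print(nearest_edge(48.8, 2.3))  # a user in Paris -> "frankfurt"
```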
6. Redundancy and High Availability: To ensure network reliability, cloud providers often build
redundancy into their network infrastructure. This redundancy includes multiple data centers,
redundant network paths, and failover mechanisms to minimize downtime in the event of
hardware failures or network issues.
7. Data Transfer Costs: It's important to consider the costs associated with data transfer in and out
of the cloud. Some cloud providers may charge for data egress (outbound data transfer) or have
tiered pricing based on the volume of data transferred. Understanding these costs is essential for
budgeting and cost management.
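Tiered egress pricing means the rate per GB drops as volume grows. A sketch of how such a bill is computed, with made-up tiers and rates (not any provider's actual price list):

```python
# Sketch: tiered data-egress billing. Tiers and rates are illustrative
# placeholders, not a real provider's prices.

TIERS = [  # (upper bound of tier in GB, price per GB in USD)
    (100, 0.09),
    (1000, 0.08),
    (float("inf"), 0.07),
]

def egress_cost(gb: float) -> float:
    cost, lower = 0.0, 0.0
    for upper, rate in TIERS:
        if gb <= lower:
            break
        cost += (min(gb, upper) - lower) * rate
        lower = upper
    return round(cost, 2)

print(egress_cost(50))   # 50 GB  * 0.09            = 4.5
print(egress_cost(500))  # 100*0.09 + 400*0.08      = 41.0
```

Note that each tier's rate applies only to the gigabytes that fall inside that tier, which is why the 500 GB bill is not simply 500 times a single rate.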
8. Security and Compliance: Cloud users must consider security and compliance requirements
when configuring network capacity. Features like network encryption, access control, and
compliance certifications (e.g., SOC 2, HIPAA) may be necessary to meet specific business
needs.
Overall, network capacity is a critical aspect of cloud infrastructure that directly impacts the
performance, availability, and cost-effectiveness of cloud-based applications and services. Cloud
users should carefully assess their network requirements and work with their chosen cloud
provider to configure and optimize network resources accordingly.
Cloud instances
A cloud instance allows software developers to scale beyond traditional physical boundaries. Unlike with physical servers, developers do not need to worry about the underlying hardware when deploying workloads on a cloud instance. Cloud instances offer two main benefits:
Scalability
Fault tolerance
Organizations create redundancy by running multiple duplicate instances as backups, which is especially useful for demanding workloads such as large-scale data processing. For example, if a cloud instance hosted in Europe fails, the application can continue running on its instances in the US and Asia.
Compute intensive
You can run high-performance computing workloads on instances, such as distributed analytics, machine learning (ML) algorithms, batch processing, ad serving, video encoding, scientific modeling, and scalable multiplayer gaming applications.
Memory intensive
Instances are useful for running memory-intensive workloads such as real-time data ingestion,
distributed in-memory caches, big data analytics, memory-intensive enterprise applications, and
high-performance databases.
Graphics intensive
Applications that render graphics require high processing and storage capabilities. You can run
virtual reality applications, 3D rendering, animation, computer vision, video streaming, and other
graphics workloads on a cloud instance.
A cloud instance abstracts physical computing infrastructure using virtual machine technology. It is similar to having your own server machine in the cloud: you create and manage your own virtual server instance in the cloud computing environment, and you can configure this cloud server to meet your memory, graphics processing, CPU, and other requirements.
The cloud provider typically charges you only for the resources you actually use, and you can create and destroy as many instances as you like. For example, you can use an Amazon Machine Image (AMI) to configure and launch cloud instances on AWS.
Developers use a series of steps to set up, run, manage, and stop an instance. The following
stages describe an instance life cycle.
Provisioning
Provisioning an instance means specifying the compute resources that the instance requires. When developers launch a provisioned instance, it enters the pending stage.
Running
At this stage, the instance is deployed and active in the cloud. Developers can deploy workloads, such as containerized applications, on running instances. Billing begins the moment an instance starts running.
Stopping
Developers might stop an instance to troubleshoot issues that affect the workloads running on it. When they stop an instance, it enters the stopping stage before being completely halted. Developers can modify the instance's settings while it is stopped.
Terminated
Developers can shut down an instance when it is no longer in use. When an instance is shut down, the cloud platform terminates it and removes the data in its instance store volume. The instance store volume is temporary storage that resides on the same physical host as the instance.
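The life cycle stages above can be sketched as a tiny state machine. The state names mirror the stages in the text; the exact transition set is an illustrative simplification of what real cloud platforms allow:

```python
# Sketch: the instance life cycle as a state machine. States mirror the
# stages described above; transitions are an illustrative simplification.

TRANSITIONS = {
    "pending": {"running"},
    "running": {"stopping", "terminated"},
    "stopping": {"stopped"},
    "stopped": {"running", "terminated"},
    "terminated": set(),
}

class Instance:
    def __init__(self):
        self.state = "pending"  # provisioning places the instance here

    def transition(self, new_state: str):
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

vm = Instance()
vm.transition("running")
vm.transition("stopping")
vm.transition("stopped")      # settings can be modified here
vm.transition("terminated")   # instance store data is now gone
print(vm.state)               # terminated
```

Encoding the allowed transitions explicitly makes illegal moves (for example, restarting a terminated instance) fail loudly instead of silently corrupting state.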