
UNIT 2

CLOUD COMPUTING

SCALING IN CLOUD
Scaling in the cloud is the process of adding or removing cloud computing resources (services, systems, and applications) as you need them. Pricing follows a utility model based on scaling resources up and down. A scalable business or technology allows you to reduce costs.

Scale Up / Down Pricing


Scale Up (add resources) – more usage, higher billing
Scale Down (remove resources) – less usage, lower billing

Types of Cloud Scaling


1. Vertical Scaling (easy) – Add more power (RAM, storage such as solid-state drives (SSDs), and CPUs/processors) to an existing instance.
2. Horizontal Scaling (complex) – Add more servers to spread the load across multiple machines. It requires administration tasks such as updates, security, monitoring, and synchronizing your application, data, and backups across many instances.
3. Auto Scaling – Scaling performed automatically through an API, for example, a settlement process for a bank that horizontally scales itself depending on how many trades it is processing (see the sketch after this list).
4. Side-by-side Scaling – Adding instances for different purposes on demand. The most common example is a firm that adds development and test instances of a service as required by a project.
5. Global Scaling – Scaling a service to run in different geographical locations, for example, a content delivery network that delivers videos from dozens of geographically distributed data centers so that videos are served to users from a data center close to them.
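To make the auto scaling idea concrete, here is a minimal sketch of a horizontal auto-scaling decision, not any particular provider's API. The thresholds and instance limits are made-up illustrative values; a real setup would read the average CPU figure from the cloud provider's monitoring service and call its compute API to add or remove servers.

```python
# Minimal horizontal auto-scaling decision sketch (illustrative only).
SCALE_OUT_THRESHOLD = 0.75   # add a server above 75% average CPU
SCALE_IN_THRESHOLD = 0.25    # remove a server below 25% average CPU
MIN_INSTANCES, MAX_INSTANCES = 2, 10

def desired_instances(current: int, average_cpu: float) -> int:
    """Return how many instances we should be running given average CPU load."""
    if average_cpu > SCALE_OUT_THRESHOLD and current < MAX_INSTANCES:
        return current + 1            # scale out: spread load over one more server
    if average_cpu < SCALE_IN_THRESHOLD and current > MIN_INSTANCES:
        return current - 1            # scale in: release an under-used server
    return current                    # load is within the target band

print(desired_instances(3, 0.90))     # -> 4
print(desired_instances(3, 0.10))     # -> 2
```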

Scalable Cloud-Based Services:


1. Infrastructure-as-a-Service (IaaS)
2. Platform-as-a-Service (PaaS)
3. Storage-as-a-Service (STaaS)
4. Data-as-a-Service (DaaS)
5. Database-as-a-Service (DBaaS)

Scalability applies to four general areas of a system:


1. Disk I/O
2. Memory
3. Network I/O
4. CPU

Cloud Scalability Benefits


1. Performance – Scalability in the cloud facilitates performance. A scalable architecture can handle the bursts of traffic and heavy workloads that come with business growth. The performance of a system is measured by many different metrics; one of the main ones is response time.
2. Cost-efficient – You can allow your business to grow without making expensive changes to the current setup. This reduces the cost implications of storage growth, making scalability in the cloud very cost effective.
3. Easy and quick – Scaling up or scaling out in the cloud is simple; you can commission additional VMs with a few clicks, and after the payment is processed, the additional resources are available without delay.
4. Capacity – Scalability ensures that as your business grows, the storage space in the cloud grows as well. Scalable cloud computing systems accommodate your data growth requirements, so you don't have to worry about additional capacity needs.
5. Scalability caveats – Scalability also has some limitations. If you want a fully scalable system, you have a large task to handle: it requires planning and repeated testing of your data storage. If you already have applications, splitting up the system will require code changes, updates, and monitoring. You have to be well prepared for the digital transformation of your infrastructure.
Sharding
To shard a database for scalability is to split your data across separate database servers. Instead of having all of your data on one database server, you split the data into "shards". This can help with performance in a few ways (a minimal routing sketch follows the list):
• Data requests are spread across multiple servers instead of hitting the same database server every time.
• Less data on each shard reduces index sizes, which can improve data seek time.
• Less data on each shard means fewer rows of data, so queries can run faster since there is less data to traverse or calculate.
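As a rough illustration of the idea, not any particular database's sharding feature, the sketch below routes each record to one of several database servers by hashing its key. The connection strings are made-up placeholders.

```python
import hashlib

# Made-up connection strings; a real deployment would point at separate DB servers.
SHARDS = [
    "postgresql://db-shard-0.example.internal/app",
    "postgresql://db-shard-1.example.internal/app",
    "postgresql://db-shard-2.example.internal/app",
]

def shard_for(key: str) -> str:
    """Pick a shard deterministically by hashing the record's key."""
    digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SHARDS)
    return SHARDS[index]

# The same user ID always maps to the same shard, so reads and writes agree.
print(shard_for("user:42"))
```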
Partitioning
Database partitioning is similar to database sharding, but not exactly the same. Database partitioning separates the data into distinct parts. Common partitioning methods include (a range-partitioning sketch follows the list):
• Splitting data by range (alphabetically or numerically)
• Row wise (horizontal partitioning)
• Column wise (vertical partitioning)
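Here is a small sketch of the range-partitioning idea in application code, assuming rows are split horizontally by an alphabetical range. The partition names and ranges are hypothetical.

```python
# Hypothetical alphabetical range partitioning of a customers table.
# Rows are split horizontally: each partition holds full rows for a key range.
PARTITIONS = {
    "customers_a_m": ("a", "m"),   # last names starting a..m
    "customers_n_z": ("n", "z"),   # last names starting n..z
}

def partition_for(last_name: str) -> str:
    """Return the partition (table) that should hold this row."""
    first = last_name[:1].lower()
    for table, (lo, hi) in PARTITIONS.items():
        if lo <= first <= hi:
            return table
    raise ValueError(f"No partition covers last name {last_name!r}")

print(partition_for("Smith"))   # -> customers_n_z
```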
Application code database optimizations
You can also perform application-level database optimizations (a query-caching sketch follows this list), such as:
• Using database indexes
• Table partitioning
• Caching database queries
• De-normalization
• Running large queries/batch queries offline
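To illustrate the query-caching bullet, here is a minimal in-process cache sketch. The `fetch` callable stands in for whatever function actually runs the query against the database and is hypothetical; the 60-second TTL is an illustrative choice.

```python
import time

_CACHE: dict[str, tuple[float, object]] = {}
TTL_SECONDS = 60   # how long a cached result stays valid

def cached_query(sql: str, fetch):
    """Return a cached result for `sql` if still fresh, else call `fetch(sql)`."""
    now = time.time()
    hit = _CACHE.get(sql)
    if hit and now - hit[0] < TTL_SECONDS:
        return hit[1]                      # cache hit: skip the database entirely
    result = fetch(sql)                    # cache miss: hit the database once
    _CACHE[sql] = (now, result)
    return result

# Usage: the lambda plays the role of the real database call.
rows = cached_query("SELECT COUNT(*) FROM orders", lambda q: 42)
```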
Having your system set up as services, such as a microservices architecture, can make monitoring, feature updates, debugging, and scaling easier.

CAPACITY PLANNING IN CLOUD COMPUTING


Capacity planning in the cloud answers questions such as:
1. How much capacity is available in the data center?
2. How much of available capacity is currently being consumed?
3. When will capacity free up?
4. What is the forecast for new requests?
5. What is the return on investment?

One benefit of cloud is having smaller recurring operating expenses rather than one big capital
expenditure. The cloud can also make capacity planning easier since adding and removing
capacity is much easier.

Cloud Capacity Planning: 8 Steps for Successful Implementation


1. Consider Service Level Agreements
Your organization needs to meet service level agreements (SLAs). Most SLAs are geared toward
avoiding downtime and ensuring continuous service delivery. This is typically done through
ensuring availability and creating a back-up and recovery plan in the event of downtime.
2. Monitor Applications Utilization
Utilization patterns help you find use spikes and dips in server, application, and systems
utilization. Utilization changes based on the day or season. When you monitor application
utilization, you can properly manage capacity.
Consider this scenario: you have 4 vCPUs and 2 GB of memory on-premises at 10 percent utilization. In the cloud, you may only need 1 vCPU and 500 MB of memory. Capacity planning can help you manage that transition (see the sketch below).
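A tiny sketch of the right-sizing arithmetic behind that scenario: size the cloud instance to observed utilization plus headroom, rather than to the on-premises allocation. The 30 percent headroom figure is an assumption for illustration.

```python
def right_size(allocated_vcpus: float, allocated_gb: float,
               utilization: float, headroom: float = 0.30):
    """Suggest cloud capacity from observed utilization plus headroom."""
    factor = utilization * (1 + headroom)
    return allocated_vcpus * factor, allocated_gb * factor

# 4 vCPUs and 2 GB on-premises at 10% utilization:
vcpus, mem_gb = right_size(4, 2, 0.10)
print(vcpus, mem_gb)   # ~0.52 vCPU and ~0.26 GB -> a 1 vCPU / 0.5 GB instance fits
```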
3. Review Workload Analytics
Assessing your workloads today is an important step in moving your workloads into the cloud.
You’ll need to consider why workloads change—and what happens when they do. Reviewing
historical trends and data will be essential in that evaluation, and you'll need to consider the
business forecast. How will that impact your future workloads? For example, planning to add a
new customer next quarter will affect capacity demands, and knowing those demands can help
you prepare a better plan.
4. Decide What to Put in the Cloud
A top consideration when moving into the cloud is what to put in it. You could end up migrating
everything to the cloud eventually, but you’ll probably start small: just a few applications,
systems, and servers.
Things to consider:
• Applications with a large memory requirement may not be good cloud candidates without
refactoring
• Do potential cloud applications depend on an application you’ll be keeping on-premises?
• Public vs. private? Moving an application into the public cloud can give you greater
availability. But if you move an application into the private cloud, it gives you more
flexibility and control.
• Just as you have to secure on-premises infrastructure, you'll need to make smart moves to lock down your cloud migration.
• How will moving applications to the cloud affect support? Will it change?
• Finally, are there any external dependencies you need to consider for your applications?
An example might be an application dependent on a confidential data source or
dependent on another cloud application where availability is a concern.
5. Evaluate Data Management Policies
Data management policies ensure that your organization is properly managing and retaining data.
That stays just as important when you migrate to the cloud.
Some key areas to monitor include creation, access, retention, archiving, and deletion. Compliance with data management policies is important, no matter where your workloads are.
6. Pick the Proper Size
Many organizations make the mistake of not having the right size of cloud instance when they
migrate.  (See number 2 for how to know how much you need!) While cloud vendors make
recommendations, it’s ultimately up to you to make sure you’ve sized your cloud instances for
your needs.
As you size your cloud instances, you should make sure you understand how much
customization is possible. Will you be able to size up or down easily? How much flexibility will
you have? Also, consider how many pre-defined sizes make sense at this stage.
The explosion of cloud adoption has put a lot of pressure on technology leaders to move to the
cloud...and move quickly. But sometimes these moves can result in enormous bills. This is often
called "cloudshock" because the unexpected cost involved has a heart-stopping effect. 
7. Plan for the Worst
Even if you move into the cloud, you still need a disaster recovery plan. You should know what
the recovery time is for your applications, systems, and servers and be able to determine what
potential impact—and cost—downtime due to disaster would have on your business. Depending
on your recovery window, plan your disaster recovery accordingly.
8. Meet Business Requirements
Today, the business counts on IT as a strategic partner. Their business requirements are your IT
requirements. So on your way to the cloud, you’ll need to map business needs to capacity
requirements. This might mean knowing how the business will grow (e.g. 30 percent in three
months) and translating that growth into infrastructure needs.
Your goal should be to support business objectives. And you should be able to tell business users
in their language (cost, response times, availability, etc.) what’s going to happen in the next three
to six months.
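As a simple illustration of translating business growth into capacity requirements, the numbers below (30 percent quarterly growth, 10 TB of current storage) are made up for the example.

```python
def project_capacity(current_units: float, growth_rate: float, periods: int) -> float:
    """Compound a capacity figure forward by a per-period growth rate."""
    return current_units * (1 + growth_rate) ** periods

# e.g. 30% growth expected per quarter, starting from 10 TB of storage today
storage_tb_now = 10.0
print(project_capacity(storage_tb_now, 0.30, 1))   # ~13.0 TB next quarter
print(project_capacity(storage_tb_now, 0.30, 3))   # ~22.0 TB in nine months
```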

LOAD BALANCING IN CLOUD COMPUTING



Cloud load balancing refers to distributing client requests across multiple application servers
that are running in a cloud environment. Like other forms of load balancing, cloud load
balancing enables you to maximize application performance and reliability; its advantages over
traditional load balancing of on-premises resources are the (usually) lower cost and the ease of
scaling the application up or down to match demand.

Hardware vs. Software Load Balancing


Traditional load balancing solutions rely on proprietary hardware housed in a data center, and
require a team of sophisticated IT personnel to install, tune, and maintain the system. Only large
companies with big IT budgets can reap the benefits of improved performance and reliability. In
the age of cloud computing, hardware-based solutions have another serious drawback: they do
not support cloud load balancing, because cloud infrastructure vendors typically do not allow
customer or proprietary hardware in their environment.
Fortunately, software-based load balancers can deliver the performance and reliability benefits of
hardware-based solutions at a much lower cost. Because they run on commodity hardware, they
are affordable even for smaller companies. And they are ideal for cloud load balancing, as they
can run in the cloud like any other software application.

Benefits of Cloud Load Balancing


The benefits of cloud load balancing in particular arise from the scalable and global character of
the cloud itself. The ease and speed of scaling in the cloud means that companies can handle
traffic spikes (like those on Cyber Monday) without degraded performance by placing a cloud
load balancer in front of a group of application instances, which can quickly autoscale in reaction
to the level of demand.
The ability to host an application at multiple cloud hubs around the world can boost reliability. If
a power outage hits the northeastern U.S. after a snowstorm, for example, the cloud load
balancer can direct traffic away from cloud resources hosted there to resources hosted in other
parts of the country.

How Can NGINX Plus Help?


NGINX Plus and NGINX are the best-in-class load-balancing solutions used by high-traffic
websites such as Dropbox, Netflix, and Zynga. More than 400 million websites worldwide rely
on NGINX Plus and NGINX to deliver their content quickly, reliably, and securely.
As a software load balancer, NGINX Plus is significantly less expensive than hardware solutions
with similar capabilities. Furthermore, it can be easily deployed in a cloud infrastructure such as 
Amazon EC2 to load balance across multiple cloud resources.

Cloud load balancing is the process of distributing workloads and computing resources across multiple servers. This kind of distribution ensures maximum throughput in minimum response time. The workload is segregated among two or more servers, hard drives, network interfaces, or other computing resources, enabling better resource utilization and system response time. Thus, for a high-traffic website, effective use of cloud load balancing can ensure business continuity. The common objectives of using load balancers are maximizing throughput, minimizing response time, and avoiding overload of any single resource.

How does load balancing work?



Here, load refers not only to website traffic but also to the CPU load, network load, and memory capacity of each server. A load balancing technique makes sure that each system in the network has roughly the same amount of work at any instant of time, so that none of them is excessively overloaded or under-utilized. The load balancer distributes requests depending on how busy each server or node is. Without a load balancer, clients may have to wait while their requests are processed, which can lead to poor response times. Information such as the number of jobs waiting in the queue, CPU processing rate, and job arrival rate is exchanged between the processors during load balancing. Failure to apply load balancing correctly can have serious consequences, data loss being one of them.
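As a rough illustration of how a balancer picks a target, the sketch below implements a least-connections policy over a pool of backend servers; the server addresses and connection counts are hypothetical.

```python
# Least-connections selection: send the next request to the backend that is
# currently handling the fewest active connections.
active_connections = {
    "10.0.0.11:8080": 12,
    "10.0.0.12:8080": 4,
    "10.0.0.13:8080": 9,
}

def pick_backend(connections: dict[str, int]) -> str:
    """Return the backend with the fewest active connections."""
    return min(connections, key=connections.get)

backend = pick_backend(active_connections)   # -> "10.0.0.12:8080"
active_connections[backend] += 1             # account for the new request
```

Other common policies include round robin (rotate through the servers in order) and weighted variants that favor larger servers.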

FILE SYSTEM AND STORAGE


Cloud file storage is a method for storing data in the cloud that provides servers and applications
access to data through shared file systems. This compatibility makes cloud file storage ideal for
workloads that rely on shared file systems and provides simple integration without code changes.

What is a cloud file system?


A file system in the cloud is a hierarchical storage system that provides shared access to file data.
Users can create, delete, modify, read, and write files and can organize them logically in
directory trees for intuitive access.

What is cloud file sharing?


Cloud file sharing can be defined as a service that provides simultaneous access for multiple users to a common set of file data in the cloud. Security for file sharing in the cloud is managed with user and group permissions, enabling administrators to tightly control access to the shared file data.

Benefits of Cloud File Storage


Scalability – scale resources up or down as needed.
Interoperability – applications integrate with shared file services that follow existing file system semantics.
Budget and Resources – less costly.
Web Serving – shared file storage for web serving applications.
Content Management Systems (CMS) – a CMS is a software application or set of related programs used to create and manage digital content.
Big Data Analytics – big data is a field that treats ways to analyze, systematically extract information from, or otherwise deal with data sets that are too large or complex for traditional data-processing application software.
Media & Entertainment – media is accessed using network file protocols such as NFS.
Home Directories – storing files accessible only by specific users and groups.
Database Backups – locational flexibility for recovery.
Development Tools – easily share code.
Container Storage – holds data and provides persistent shared access.

Requirements for Cloud File Storage


Fully managed – provides a fully managed file system that can be launched in minutes.
Performance – provides consistent throughput and low-latency performance.
Compatibility – integrates seamlessly with existing applications, with no new code to write.
Security – provides network security and access control permissions.
Availability – redundant across multiple sites and always accessible when needed.
Affordability – pay only for capacity used, with no upfront provisioning costs.

Fully managed solutions include Amazon EFS, Amazon FSx for Windows File Server, and Amazon FSx for Lustre.

Types of Cloud Storage


There are three types of cloud storage: Object, File, and Block.
1. Object Storage - Applications developed in the cloud often take advantage of object storage's
vast scalability and metadata characteristics. Object storage solutions like Amazon Simple
Storage Service (Amazon S3) are ideal for building modern applications from scratch that
require scale and flexibility, and can also be used to import existing data stores for analytics,
backup, or archive.
2. File Storage - Many applications need to access shared files and require a file system. This type of storage is often supported with a Network Attached Storage (NAS) server. File storage solutions like Amazon Elastic File System (EFS) and Amazon FSx for Windows File Server are ideal for use cases like large content repositories, development environments, media stores, and user home directories, while Amazon FSx for Lustre is ideal for high-performance computing and machine learning workloads.


3. Block Storage - Other enterprise applications like databases or ERP systems often require
dedicated, low latency storage for each host. This is analogous to direct-attached storage (DAS)
or a Storage Area Network (SAN). Block-based cloud storage solutions like Amazon Elastic
Block Store (EBS) are provisioned with each virtual server and offer the ultra-low latency
required for high performance workloads.

How is File Storage Different?


Although object storage solutions enable storage of files as objects, accessing them from existing applications requires new code, the use of APIs, and direct knowledge of naming semantics. File storage solutions that support existing file system semantics and permission models have a distinct advantage: they do not require new code to be written, because applications are easily configured to work with shared file storage.
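As a small illustration of the API-style access that object storage requires (as opposed to file system paths), the following sketch uses the boto3 SDK for Amazon S3. The bucket name and object key are placeholders, and AWS credentials are assumed to be already configured.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-app-assets"   # placeholder bucket name

# Objects are written and read through API calls, not through file system paths.
s3.put_object(Bucket=bucket, Key="reports/2024/q1.json", Body=b'{"revenue": 100}')

response = s3.get_object(Bucket=bucket, Key="reports/2024/q1.json")
print(response["Body"].read())
```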
Block storage can be used as the underlying storage component of a self-managed file storage
solution. However, the one-to-one relationship required between the host and volume makes it
difficult to have the scalability, availability, and affordability of a fully managed file storage
solution and would require additional budget and management resources to support. Using a
fully managed cloud file storage solution removes complexities, reduces costs, and simplifies
management.

What are the AWS File Storage Services?


There’s a vast amount of file-based data in the world and AWS provides fully managed file
system services that help you easily address the diverse needs of your file-based applications and
workloads.
Business application storage
Organizations require their mission-critical business applications to be highly available, and a great many of these applications use shared file storage. Migrating these applications to the cloud provides scalability, high availability and durability, security, and reduced costs, while increasing agility.
AWS offers two file system services optimized for your business applications.
• Amazon EFS provides a cloud-native fully managed file system that provides scalable, elastic
file storage for a broad range of Linux based applications.
• Amazon FSx for Windows File Server provides a fully managed native Windows file system
with the features and performance optimized for Windows-based business applications.
Amazon EFS and Amazon FSx for Windows File Server enable customers to migrate their Linux
and Windows-based applications to AWS using fully managed file systems with the features,
compatibility, performance, and security that these applications rely on.
Compute-optimized storage
Compute-intensive applications, like high performance computing, machine learning, and media
processing, often require massive throughput and low latencies from a file system. These
workloads often execute for a short period of time using input data that is stored in a low-cost
data lake.
• AWS offers Amazon FSx for Lustre for these compute-intensive applications. Amazon FSx for
Lustre allows customers to easily process their data with a file system that’s optimized for the
performance and cost of short-lived, compute-intensive processing jobs, with input and output
stored on Amazon S3.

All the Best!
