CLOUD COMPUTING
Unit 2
SCALING IN CLOUD
Scaling is the process of adding or removing cloud computing resources as you need them, under a utility pricing model that lets you scale services, systems, and applications up or down. A scalable business or technology allows you to cut costs. Techniques that help a system handle more load include:
• De-normalization
• Running large queries/batch queries offline
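The scale-up/scale-down idea above can be sketched as a simple threshold rule. This is an illustrative sketch, not any vendor's API; the function name, thresholds, and instance limits are all assumptions chosen for the example.

```python
# Hypothetical threshold-based scaling rule (names are illustrative):
# scale up when average CPU utilization is high, down when it is low.

def desired_instance_count(current: int, avg_cpu: float,
                           scale_up_at: float = 0.75,
                           scale_down_at: float = 0.25,
                           min_instances: int = 1,
                           max_instances: int = 10) -> int:
    """Return the instance count after applying the scaling rule."""
    if avg_cpu > scale_up_at:
        return min(current + 1, max_instances)   # scale up, capped
    if avg_cpu < scale_down_at:
        return max(current - 1, min_instances)   # scale down, floored
    return current                               # within band: no change

print(desired_instance_count(4, 0.90))  # high load -> 5
print(desired_instance_count(4, 0.10))  # low load  -> 3
```

Real autoscalers add cooldown periods and smoothing so the fleet does not oscillate, but the core decision is this threshold comparison.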
Structuring your system as services, as in a microservices architecture, can make monitoring, feature updates, debugging, and scaling easier.
One benefit of the cloud is replacing one big capital expenditure with smaller recurring operating expenses. The cloud can also simplify capacity planning, since capacity can be added and removed much more easily.
• Public vs. private? Moving an application into the public cloud can give you greater availability, while moving it into a private cloud gives you more flexibility and control.
• Just as you secure on-premises infrastructure, you'll need to make smart moves to lock down your cloud migration.
• How will moving applications to the cloud affect support? Will it change?
• Finally, are there any external dependencies you need to consider for your applications?
An example might be an application dependent on a confidential data source or
dependent on another cloud application where availability is a concern.
5. Evaluate Data Management Policies
Data management policies ensure that your organization is properly managing and retaining data.
That stays just as important when you migrate to the cloud.
Some key areas to monitor include:
Creation, Access, Retention, Archiving and Deletion. Compliance with data management
policies is important, no matter where your workloads are.
6. Pick the Proper Size
Many organizations make the mistake of not having the right size of cloud instance when they
migrate. (See number 2 for how to know how much you need!) While cloud vendors make
recommendations, it’s ultimately up to you to make sure you’ve sized your cloud instances for
your needs.
As you size your cloud instances, you should make sure you understand how much
customization is possible. Will you be able to size up or down easily? How much flexibility will
you have? Also, consider how many pre-defined sizes make sense at this stage.
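One way to reason about sizing is to pick the smallest pre-defined size that covers measured peak demand plus some headroom. The catalogue below is made up for illustration; it is not a real vendor's instance list, and the 20 percent headroom figure is an assumption.

```python
# Illustrative right-sizing helper. SIZES is a made-up catalogue,
# not any real cloud vendor's instance types or prices.

SIZES = [  # (name, vCPUs, memory_GiB, hourly_cost_usd), smallest first
    ("small",   2,  4, 0.05),
    ("medium",  4,  8, 0.10),
    ("large",   8, 16, 0.20),
    ("xlarge", 16, 32, 0.40),
]

def right_size(peak_vcpus: float, peak_mem_gib: float, headroom: float = 0.2):
    """Smallest size whose capacity covers peak demand plus headroom."""
    need_cpu = peak_vcpus * (1 + headroom)
    need_mem = peak_mem_gib * (1 + headroom)
    for name, vcpus, mem, cost in SIZES:
        if vcpus >= need_cpu and mem >= need_mem:
            return name, cost
    raise ValueError("no single instance size fits; consider scaling out")

print(right_size(3, 6))  # peak of 3 vCPUs / 6 GiB -> ('medium', 0.1)
```

Sizing from measured peaks rather than guesses is what guards against the over-provisioned bills described below.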
The explosion of cloud adoption has put a lot of pressure on technology leaders to move to the
cloud...and move quickly. But sometimes these moves can result in enormous bills. This is often
called "cloudshock" because the unexpected cost involved has a heart-stopping effect.
7. Plan for the Worst
Even if you move into the cloud, you still need a disaster recovery plan. You should know what
the recovery time is for your applications, systems, and servers and be able to determine what
potential impact—and cost—downtime due to disaster would have on your business. Depending
on your recovery window, plan your disaster recovery accordingly.
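The cost of downtime mentioned above can be estimated with simple arithmetic, which helps justify the recovery window you choose. The dollar figures below are invented for the example.

```python
# Back-of-envelope downtime cost estimate (all figures illustrative).

def downtime_cost(revenue_per_hour: float, outage_hours: float,
                  recovery_labour_per_hour: float = 0.0) -> float:
    """Lost revenue plus recovery labour for an outage of a given length."""
    return outage_hours * (revenue_per_hour + recovery_labour_per_hour)

# Example: $5,000/hour of revenue at risk, a 4-hour recovery window,
# and $500/hour of staff time spent recovering.
print(downtime_cost(5000, 4, 500))  # -> 22000.0
```

Comparing this figure against the cost of a faster recovery tier makes the disaster recovery trade-off concrete.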
8. Meet Business Requirements
Today, the business counts on IT as a strategic partner, and its business requirements are your IT requirements. So on your way to the cloud, you'll need to map business needs to capacity requirements. This might mean knowing how the business will grow (e.g., 30 percent in three months) and translating that growth into infrastructure needs.
Your goal should be to support business objectives. And you should be able to tell business users
in their language (cost, response times, availability, etc.) what’s going to happen in the next three
to six months.
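Translating a growth figure like "30 percent in three months" into infrastructure needs can be sketched as below. The sketch assumes load scales roughly linearly with business volume and that the fleet runs at a target utilization; both are simplifying assumptions, and real workloads should be measured.

```python
# Sketch: translate projected business growth into server counts,
# assuming load grows linearly with business volume (a simplification).

import math

def servers_needed(current_servers: int, growth_rate: float,
                   utilization_target: float = 0.7) -> int:
    """Servers required after growth, keeping utilization at the target."""
    # Assume the current fleet runs at the target utilization today.
    future_load = current_servers * utilization_target * (1 + growth_rate)
    return math.ceil(future_load / utilization_target)

# 10 servers today, 30 percent growth expected in three months:
print(servers_needed(10, 0.30))  # -> 13
```

The output is exactly the kind of statement business users can act on: "30 percent growth means three more servers within three months."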
Cloud load balancing refers to distributing client requests across multiple application servers
that are running in a cloud environment. Like other forms of load balancing, cloud load
balancing enables you to maximize application performance and reliability; its advantages over
traditional load balancing of on-premises resources are the (usually) lower cost and the ease of
scaling the application up or down to match demand.
Cloud load balancing is the process of distributing workloads and computing resources across two or more servers. This kind of distribution ensures maximum throughput with minimum response time. The workload is segregated among two or more servers, hard drives, network interfaces, or other computing resources, enabling better resource utilization and system response time. Thus, for a high-traffic website, effective use of cloud load balancing can ensure business continuity. The common objectives of using load balancers are:
• Even distribution of the workload across servers
• Better resource utilization and higher throughput
• Lower response times and continuous availability
Here, load refers not only to website traffic but also to the CPU load, network load, and memory capacity of each server. A load balancing technique ensures that each system in the network has roughly the same amount of work at any instant, so that none of them is excessively overloaded or under-utilized. The load balancer distributes requests depending on how busy each server or node is. Without a load balancer, clients must wait while their requests queue on a single busy server, which makes for a frustrating experience. During load balancing, the processors exchange information such as the number of jobs waiting in the queue, the CPU processing rate, and the job arrival rate. Applying load balancers incorrectly can have serious consequences, data loss being one of them.
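The "distribute depending on how busy each server is" idea above corresponds to a least-connections policy. The sketch below is a minimal in-process illustration, not production load-balancer code; the class and server names are invented for the example.

```python
# Minimal least-connections load balancer sketch (illustrative only):
# each request goes to the server with the fewest active requests.

class LeastConnectionsBalancer:
    def __init__(self, servers):
        self.active = {s: 0 for s in servers}  # server -> open request count

    def acquire(self):
        """Pick the least-loaded server and count the new request."""
        server = min(self.active, key=self.active.get)
        self.active[server] += 1
        return server

    def release(self, server):
        """Mark a request on `server` as finished."""
        self.active[server] -= 1

lb = LeastConnectionsBalancer(["app-1", "app-2"])
print(lb.acquire())  # -> app-1 (all equal; min returns the first)
print(lb.acquire())  # -> app-2
lb.release("app-1")
print(lb.acquire())  # -> app-1 (least loaded again)
```

Round-robin is the simpler alternative that ignores per-server load; least-connections reflects the busy-ness tracking the paragraph above describes.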