
1) Virtualization is a key enabler of the first four of the five key attributes of cloud computing:
Service-based: A service-based architecture is where clients are abstracted
from service providers through service interfaces.
Scalable and elastic: Services can be altered to affect capacity and
performance on demand.
Shared services: Resources are pooled in order to create greater efficiencies.
Metered usage: Services are billed on a usage basis.
Internet delivery: The services provided by cloud computing are based on Internet protocols and formats.
2) What is load balancing?
One characteristic of cloud computing is virtualized network access to a service.
No matter where you access the service, you are directed to the available
resources. The technology used to distribute service requests to resources is
referred to as load balancing. Load balancing can be implemented in hardware,
as is the case with F5's BigIP servers, or in software, such as the Apache
mod_proxy_balancer extension, the Pound load balancer and reverse proxy
software, and the Squid proxy and cache daemon.
Load balancing is an optimization technique; it can be used to increase
utilization and throughput, lower latency, reduce response time, and avoid
system overload.
The following network resources can be load balanced:
Network interfaces and services such as DNS, FTP, and HTTP
Connections through intelligent switches
Processing through computer system assignment
Storage resources
Access to application instances
Without load balancing, cloud computing would be very difficult to manage. Load balancing provides the necessary redundancy to make an intrinsically unreliable system reliable through managed redirection.
It also provides fault tolerance when coupled with a failover mechanism. Load
balancing is nearly always a feature of server farms and computer clusters and
for high availability applications.
A load-balancing system can use different mechanisms to assign service
direction. In the simplest load-balancing mechanisms, the load balancer listens on a network port for service requests. When a request from a client or service
requester arrives, the load balancer uses a scheduling algorithm to
assign where the request is sent. Typical scheduling algorithms in use today are
round robin and weighted round robin, fastest response time, least connections
and weighted least connections, and custom assignments based on other factors.
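The scheduling algorithms above can be sketched in a few lines. This is a minimal illustration, not a production balancer; the server addresses and connection counts are invented placeholders.

```python
from collections import defaultdict
from itertools import cycle

# Hypothetical backend pool; the addresses are placeholders, not real hosts.
SERVERS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin():
    """Hand out servers in strict rotation."""
    return cycle(SERVERS)

def weighted_round_robin(weights):
    """Repeat each server in proportion to its weight before rotating."""
    expanded = [s for s, w in weights.items() for _ in range(w)]
    return cycle(expanded)

def least_connections(active):
    """Pick the server currently holding the fewest open connections."""
    return min(SERVERS, key=lambda s: active[s])

rr = round_robin()
first_four = [next(rr) for _ in range(4)]  # wraps back to the first server

conns = defaultdict(int, {"10.0.0.1": 5, "10.0.0.2": 2, "10.0.0.3": 7})
quietest = least_connections(conns)  # the server with only 2 connections
```

Real appliances layer response-time measurements and health state on top of these basic policies, but the selection step reduces to choices like these.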
A session ticket is created by the load balancer so that subsequent related traffic
from the client that is part of that session can be properly routed to the same
resource. Without this session record or persistence, a load balancer would not
be able to correctly failover a request from one resource to another. Persistence
can be enforced using session data stored in a database and replicated across
multiple load balancers. Other methods store a client-side cookie in the client's browser or use a rewrite engine that modifies the URL.
Of all these methods, a session cookie stored on the client has the least amount of overhead for the load balancer because it allows the load balancer to make an independent selection of resources.
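Cookie-based persistence can be sketched as follows. This is a toy model under assumed names (`POOL`, `LB_STICKY`): on a first visit the balancer picks any server and records it in a cookie, and later requests carrying that cookie are routed straight back without consulting any shared session store.

```python
import random

POOL = ["web1", "web2", "web3"]   # placeholder backend names
COOKIE = "LB_STICKY"              # invented cookie name

def route(request_cookies):
    """Return (chosen server, cookies to set on the response)."""
    server = request_cookies.get(COOKIE)
    if server in POOL:                    # sticky hit: honor the cookie
        return server, {}
    server = random.choice(POOL)          # first visit: pick any server
    return server, {COOKIE: server}       # ask the client to remember it

first, set_cookies = route({})            # first request sets the cookie
repeat, _ = route(set_cookies)            # repeat request lands on the same server
```

Because the routing decision travels with the client, any balancer in a redundant pair can honor it, which is why this method carries the least overhead.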
The algorithm can be based on a simple round robin system where the next
system in a list of systems gets the request. Round robin DNS is a common
application, where IP addresses are assigned out of a pool of available IP
addresses. Google uses round robin DNS, as described in the next section.
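Round robin DNS can be modeled as a name server that rotates its answer list on every query, so successive clients favor different addresses. This is a toy simulation; the addresses come from the reserved documentation range, not a real zone.

```python
class RoundRobinDNS:
    """Toy authoritative server for one name, with an invented record set."""

    def __init__(self, addrs):
        self.addrs = list(addrs)

    def resolve(self):
        """Return the current answer list, then rotate it for the next query."""
        answer = list(self.addrs)
        self.addrs.append(self.addrs.pop(0))  # move the head to the tail
        return answer

dns = RoundRobinDNS(["192.0.2.10", "192.0.2.11", "192.0.2.12"])
# Each call returns the full list, but the first (preferred) address rotates.
```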
3) What is a hypervisor?
Load balancing virtualizes systems and resources by mapping a logical address
to a physical address. Another fundamental technology for abstraction creates
virtual systems out of physical systems. If load balancing is like playing a game of hot potato, then virtual machine technology is akin to playing slice and
dice with the potato. Given a computer system with a certain set of resources,
you can set aside portions of those resources to create a virtual machine. From
the standpoint of applications or users, a virtual machine has all the attributes
and characteristics of a physical system but is strictly software that emulates a
physical machine. A system virtual machine (or a hardware virtual machine) has
its own address space in memory, its own processor resource allocation, and its
own device I/O using its own virtual device
drivers. Some virtual machines are designed to run only a single application or
process and are referred to as process virtual machines.
A virtual machine is a computer that is walled off from the physical computer
that the virtual machine is running on. This makes virtual machine technology
very useful for running old versions of operating systems, testing applications in
what amounts to a sandbox, or in the case of cloud computing, creating virtual machine instances that can be assigned a workload. Virtual machines provide the capability of running multiple machine instances, each with its own operating system.
From the standpoint of cloud computing, these features enable virtual machine monitors (VMMs) to manage application provisioning, provide for machine instance cloning and replication, allow for graceful system failover, and provide several other desirable features. The downside of virtual machine technology is that having resources indirectly addressed means there is some level of overhead.

When you hear the term "load balancing", what do you think of? The term
evokes the idea of work that needs to be done that is too much for one person to
handle, prompting another person to help out. In the world of web hosting, load
balancing provides a similar function, spreading the "work" of running a
website across multiple servers in order to ensure that the servers hosting the
site do not get overloaded.
In order to provide load balancing for a web site, you need to have more than
one web server, either virtual or physical. Once that is in place, the method of
load balancing can be determined. For the purposes of this discussion, we will
be reviewing load balancing methods provided by a dedicated appliance, rather
than solutions such as round robin DNS.
High Level Architecture
The web servers hosting a load-balanced site may have public IP addresses, but
visitors accessing the site won't actually connect directly to the servers. Instead,
they will first access a load balancer appliance and let it handle the traffic
appropriately. A Virtual IP Address is created that is only present on the load
balancer. The site's DNS host record for WWW will be pointed to this IP. Once
the traffic reaches the load balancer, the appliance decides how to distribute it to the actual servers (called the pool) running the site. The
load balancer runs health checks on the servers by checking to make sure the
site is up and running correctly. If it detects that a server's website is not
loading, it will not route any traffic to that server. This is not only beneficial for
times when the web site is not running correctly on a server, but also when a
server needs to be brought down for maintenance.
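The health-check behavior described above amounts to filtering the pool by a probe before routing. A minimal sketch, assuming a hypothetical `is_healthy` probe (a real appliance would typically issue an HTTP request and check the status code):

```python
def filter_healthy(pool, is_healthy):
    """Route traffic only to servers whose health probe succeeds."""
    return [server for server in pool if is_healthy(server)]

pool = ["web1", "web2", "web3"]      # placeholder server names
down = {"web2"}                      # simulate one server failing its check
healthy = filter_healthy(pool, lambda s: s not in down)
# Traffic is now distributed only among the servers still in `healthy`,
# whether "web2" crashed or was deliberately taken down for maintenance.
```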
Persistence

One problem that arises with load balancing has to do with persistence. For
some web applications, it is necessary to make sure a user's session continues to
use the first server that they contacted. If their traffic is sent to another server in the pool during a transaction, data from that transaction may be missing on the initial server's back end. This issue can be solved by configuring the load balancer to evaluate the source address of the user, or by using cookies to make the user's connection persistent.
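Source-address persistence can be sketched by hashing the client's IP into a stable pool index, so every request from that address lands on the same server without cookies or a shared session table. The pool names are placeholders.

```python
import hashlib

POOL = ["web1", "web2", "web3"]  # placeholder pool members

def pick_by_source(client_ip):
    """Hash the client's source address into a stable pool index."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return POOL[int.from_bytes(digest[:4], "big") % len(POOL)]
```

One design trade-off: source hashing is stateless and survives balancer restarts, but if the pool size changes, many clients are remapped to different servers, whereas cookie persistence pins each client explicitly.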
Other Features
In addition to ensuring site uptime, these appliances can also provide other
performance related features:
Caching: The appliance can store content that does not change (such as images)
and serve them directly to the client without sending traffic to the web server.
Compression: Reduces the amount of traffic for HTTP objects by compressing files before they are sent.
SSL Offloading: Processing SSL traffic is demanding on a web server's CPU, so
a load balancer can perform this processing instead.
High availability: Two load balancing appliances can be used in case one fails.
In our always-connected society, everyone expects their website to be up and
performing well all of the time, even in times of high traffic. Employing a load
balancing appliance is a great way to work towards these goals and improve the
uptime and performance of your website.

Advantages of Load Balancing


Scalability
The amount of traffic a website receives has a substantial effect on its
performance, and load balancing provides the capability to handle most sudden
spikes in traffic by spreading the traffic across multiple servers. Adding more
load balanced servers to handle increased traffic is much easier and faster to
implement than moving a site to an entirely new, more powerful server. This is
especially advantageous for sites which operate on virtual web servers, since
existing servers can easily be cloned and added to the load balanced array.
As a site's traffic fluctuates, load balancing allows server administrators to increase or decrease the number of web servers depending on the site's current needs. Here are a few examples of how regular changes in site traffic might
necessitate the use of load balancing to adjust server capacity:
Educational websites: Universities which use their website to allow
students to enroll for classes online will often see a large increase in site
traffic each semester during the enrollment period. This increase can
cause site slowness at these peak times, and adding one or more load
balanced web servers to increase the site's capacity can address this issue.
E-commerce websites: E-commerce sites usually see a large increase in
traffic over the holiday season; adding one or more load balanced web
servers can keep the site from experiencing slowness during this busy
time.
Changes to the number of load balanced servers can be made as needed, so
servers can be added in preparation for periods of increased traffic, then
removed when they are no longer necessary. Utilizing load balancing provides this exceptional scalability, ensuring that a website is always prepared to meet its users' demands.
Redundancy
Utilizing load balancing to maintain a website on more than one web server
(whether they are dedicated servers or virtual servers) can greatly limit the
impact of hardware failure on a site's overall uptime. Since the website traffic is
sent to two or more web servers, if one server fails, the load balancer will
automatically transfer the traffic to the other working web server(s).
Load balancing can be used in two different modes: in active/active mode, all
servers actively receive traffic, while in active/passive mode, the active server
receives the traffic, and the passive server will come online if the active server
fails. Maintaining multiple load balanced servers provides the assurance that a
working server will always be online to handle site traffic even in the case of
hardware failure.
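The active/passive mode described above can be sketched as a two-node pair that swaps roles when the active node's probe fails. The node names are illustrative, and a real appliance would also handle the failed node rejoining the pair.

```python
class ActivePassivePair:
    """Two-node sketch: all traffic goes to the active node until it fails."""

    def __init__(self, active, passive):
        self.active, self.passive = active, passive

    def route(self, is_up):
        """Fail over by swapping roles if the active node's probe fails."""
        if not is_up(self.active):
            self.active, self.passive = self.passive, self.active
        return self.active

pair = ActivePassivePair("web-a", "web-b")
pair.route(lambda s: True)           # normal operation: "web-a" serves
pair.route(lambda s: s != "web-a")   # "web-a" fails: "web-b" takes over
```

Active/active mode, by contrast, would distribute requests across both nodes with one of the scheduling algorithms discussed earlier.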
Flexibility
Using multiple load balanced servers to handle a site's traffic allows
administrators the flexibility to perform maintenance on a server without
impacting site uptime. This can be done by pointing all traffic to one server and
placing the load balancer in active/passive mode. Software upgrades and code
updates can be deployed to the passive server and tested in a production
environment, and when administrators are comfortable that the updates have
completed with no issues, they can switch the passive server to active and do
the same work on the other server(s). Any server maintenance can be staggered in this way, with at least one server remaining available, ensuring that the site's users do not experience any outages.
For enterprise websites, maintaining uptime and performance is essential, and
load balancing is a key part of making sure your site has the capacity to do so.
Host VM
The host virtual machine and the guest virtual machine are the two components that make up a virtual machine arrangement. The guest VM is an independent instance of an operating system together with its associated software and data. The host VM is the underlying hardware that provides the guest with computing resources such as processing power, memory, and disk and network I/O (input/output).
A virtual machine monitor (VMM) or hypervisor intermediates between the host
and guest VM, isolating individual guest VMs from one another and making it
possible for a host to support multiple guests running different operating
systems.
A guest VM can exist on a single physical machine but is usually distributed
across multiple hosts for load balancing. A host VM, similarly, may exist as
part of the resources of a single physical machine or as smaller parts of the
resources of multiple physical machines.