Techopedia Explains Server Redundancy
Ref: Techopedia, 14 Apr 2021
Server redundancy is implemented in enterprise IT infrastructure
where server availability is of paramount importance. To enable server
redundancy, a server replica is created with the same computing power,
storage, applications and other operational parameters.
For server infrastructure, "redundancy" is a vital part of keeping everything
working, despite the word's negative connotations.
The key difference is that the redundant server is not kept online: it is not
used as a "live" server until it is needed. It nevertheless has network
connectivity ready to go and can receive power at any time. The redundant
server is typically brought in during traffic spikes or outages, either to
spread the workload or to bring the network back up.
What is server failover?
Ref: What is server failover? | Failover meaning | Cloudflare
Server failover is the practice of having a backup server (or servers) prepared to
automatically take over if the primary server goes offline. Server failover works like a
backup generator. When the power goes out in a building or home, a backup
generator temporarily restores electricity. Similarly, in server failover, a secondary
server takes over when the primary server fails. The goal of server failover is to
improve a network or website's fault tolerance, or its ability to continue operating
when one of its parts fails.
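The backup-generator behavior described above can be sketched in a few lines. This is a minimal illustration, not a production pattern; the server names and the `fetch_from` stand-in for a real network call are hypothetical:

```python
def fetch_from(server: str, request: str) -> str:
    """Placeholder for a real network call; raises ConnectionError on failure."""
    if server == "primary.example.com":  # simulate a failed primary
        raise ConnectionError(f"{server} is unreachable")
    return f"response from {server} for {request}"

def fetch_with_failover(request: str, servers: list[str]) -> str:
    """Try each server in order, failing over to the next on error."""
    last_error = None
    for server in servers:
        try:
            return fetch_from(server, request)
        except ConnectionError as err:
            last_error = err  # this server failed; fall through to the next one
    raise RuntimeError("all servers failed") from last_error

result = fetch_with_failover("GET /", ["primary.example.com", "backup.example.com"])
print(result)  # served by the backup because the primary raised
```

Real failover systems detect failure with health checks and reroute at the network or DNS level rather than in application code, but the control flow is the same: detect the failure, then retry against a standby.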
A server's primary job is to store content and data to share with other computers.
While there are different types of servers, web servers are perhaps the most
well-known because they keep websites and applications operational. When web
servers fail, they cannot process requests, which means they cannot serve data to clients.
Without server failover, a failed server can cause a loading error or a site outage.
Common causes of server failure include:
• Power outages
• Natural disasters
Failover often goes hand in hand with a process called load balancing. Load
balancers increase application availability and performance by distributing traffic
across more than one server. To ensure requests are assigned to servers that can
handle the traffic, many load balancers monitor server health and implement failover.
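The combination just described, distributing traffic across servers while skipping any that fail a health check, can be sketched as a round-robin scheduler. This is a simplified model under assumed names (`LoadBalancer`, the example IPs); real load balancers run health probes continuously rather than being told a server is down:

```python
import itertools

class LoadBalancer:
    """Round-robin distribution that skips servers marked unhealthy."""
    def __init__(self, servers):
        self.health = {s: True for s in servers}
        self._cycle = itertools.cycle(servers)

    def mark_down(self, server):
        self.health[server] = False  # e.g. after a failed health check

    def mark_up(self, server):
        self.health[server] = True

    def next_server(self):
        # Scan at most one full rotation looking for a healthy server.
        for _ in range(len(self.health)):
            server = next(self._cycle)
            if self.health[server]:
                return server
        raise RuntimeError("no healthy servers available")

lb = LoadBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")
print([lb.next_server() for _ in range(4)])
# → ['10.0.0.1', '10.0.0.3', '10.0.0.1', '10.0.0.3']
```

Because the unhealthy server is simply skipped in rotation, failover falls out of the same mechanism that does the load balancing.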
Failover setups generally take one of two forms:
• Active-standby: a backup server sits idle until the primary fails.
• Active-active: two or more servers share traffic, and the remaining servers absorb the load if one fails.
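The difference between the two configurations can be shown with two toy routing functions (the server names and round-robin choice for active-active are illustrative assumptions):

```python
def route_active_standby(servers, request_id):
    """All traffic goes to the first (active) server; the standby sits idle."""
    return servers[0]

def route_active_active(servers, request_id):
    """Traffic is spread across all servers, here by simple round-robin."""
    return servers[request_id % len(servers)]

servers = ["app-1", "app-2"]
print([route_active_standby(servers, i) for i in range(4)])
# → ['app-1', 'app-1', 'app-1', 'app-1']
print([route_active_active(servers, i) for i in range(4)])
# → ['app-1', 'app-2', 'app-1', 'app-2']
```

Active-active gets more use out of the hardware, but both servers must stay within capacity so that either can absorb the other's traffic after a failure.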
Systems that aim for as little downtime as possible (or 99.999% uptime) are
considered highly available (HA). If an HA system experiences downtime, it should
last only a few seconds or minutes at a time. Highly regulated industries, such as
government services, may need to meet high availability standards for compliance purposes.
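As a quick sanity check on the "five nines" figure, the arithmetic works out to roughly five minutes of allowed downtime per year:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

for uptime, label in [(0.999, "three nines"), (0.9999, "four nines"), (0.99999, "five nines")]:
    downtime = MINUTES_PER_YEAR * (1 - uptime)
    print(f"{label} ({uptime:.3%} uptime): ~{downtime:.1f} minutes of downtime per year")
```

Each extra nine cuts the annual downtime budget by a factor of ten, which is why every step toward continuous availability costs disproportionately more.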
Continuous availability (CA) systems, on the other hand, are designed to avoid
downtime altogether. No downtime means that users can stay connected to a site or
application at all times, even during maintenance. One area where CA might be
necessary, for example, is online stock trading, where transactions are highly
time-sensitive. CA systems are more complex to build and maintain because they must
account for every single point of failure, from servers to physical location to power access.
Cloudflare Load Balancing achieves fast failover by actively monitoring servers and
instantly rerouting traffic when an issue is detected, resulting in zero downtime.
Learn more about Cloudflare Load Balancing.
Server Load Balancing (SLB) is a technology that distributes traffic for
high-traffic sites among several servers using a network-based hardware or
software-defined appliance. When load is balanced across multiple geographic
locations, the intelligent distribution of traffic is referred to as global server
load balancing (GSLB). The servers can be on premises in a company's own data
centers, or hosted in a private cloud or the public cloud.
Server load balancers intercept traffic for a website and reroute that traffic to
servers.
FAQs
What is Server Load Balancing?
Server Load Balancing (SLB) provides network services and content delivery using a
series of load balancing algorithms. It prioritizes responses to the specific requests
from clients over the network. Server load balancing distributes client traffic to
servers to ensure consistent, high-performance application delivery.
Server load balancing ensures application delivery, scalability, reliability and high
availability.
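Round-robin (shown earlier) is only one of the algorithms an SLB can apply. Another common one is least-connections, which sends each new request to the server currently doing the least work. A minimal sketch, with hypothetical server names and connection counts:

```python
def least_connections(active: dict[str, int]) -> str:
    """Pick the server currently handling the fewest open connections."""
    return min(active, key=active.get)

connections = {"web-1": 12, "web-2": 4, "web-3": 9}
server = least_connections(connections)
print(server)  # → web-2
connections[server] += 1  # record the newly assigned connection
```

Least-connections adapts to uneven request durations, whereas plain round-robin assumes every request costs roughly the same.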
How does Server Load Balancing Work?
Server load balancing is delivered in two main forms, hardware appliances and
software-defined appliances, and provides benefits such as:
• Increased scalability: load balancers can spin server resources up or down in
response to traffic spikes, directing requests to the pool of servers best suited
to handle the increase and keeping application performance optimized.
For more information on server load balancing, see the following resources.