
Research Problem – Requirements
• The most important functional requirement of the load balancer is to ensure that all the
traffic pertaining to one call goes to the same CPS process (a minimal call-affinity sketch
follows this list).

• Performance: The LB is the single point of entry into the cluster (NE) and hence has to be
fast enough not to become the bottleneck of the cluster.

• Scalability: More nodes can be added behind the LB (load balancer) at run time. A load
balancer should be able to scale both statically and dynamically.

• Awareness of the load at the nodes to which the traffic is being routed. Ideally, the load
balancer should be adaptive.

• The LB should be able to handle failures of internal nodes. The aim is not to make the LB
handle all kinds of faults, but it should be able to handle basic fault situations such as an
internal node crashing.
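
A minimal sketch of how the first requirement can be met (an illustration under assumptions, not the paper's implementation; names such as CallAffinityLB and route are made up): the load balancer remembers which internal node first handled a call ID, so every later message of that call reaches the same CPS process.

    import hashlib

    class CallAffinityLB:
        def __init__(self, nodes):
            self.nodes = list(nodes)   # internal CPS nodes
            self.affinity = {}         # call_id -> node

        def route(self, call_id):
            # The first message of a call picks a node by hashing the call ID;
            # later messages reuse the stored mapping, satisfying the
            # "same call -> same CPS process" requirement.
            if call_id not in self.affinity:
                digest = int(hashlib.md5(call_id.encode()).hexdigest(), 16)
                self.affinity[call_id] = self.nodes[digest % len(self.nodes)]
            return self.affinity[call_id]

    lb = CallAffinityLB(["node1", "node2", "node3"])
    assert lb.route("call-42") == lb.route("call-42")   # all messages of one call stick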

Research Problem – criteria of load balancing – stateless applications
• In case the applications are stateless, the load balancer may route an incoming message to
any node; it is then the responsibility of the application to replicate the call state. The
figures below show messages from one call (denoted by the same colour) ending up at
different nodes (a minimal sketch of this arrangement follows the figures).

[Figure: external nodes Ext 1 and Ext 2 send messages to the LB, which routes messages of the
same call to different internal nodes (Node 1, Node 2, …, Node n).]

[Figure: the same routing through the LB to Node 1, Node 2, …, Node n, with the nodes
replicating call state to a shared backend.]
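
A minimal sketch of the stateless arrangement above (an assumption for illustration; Backend, Node and handle are hypothetical names): the LB may pick any node for any message, and each node reads and writes the call state in a shared backend instead of keeping it locally.

    import random

    class Backend:
        """Shared store through which the nodes replicate call state."""
        def __init__(self):
            self.call_state = {}

    class Node:
        def __init__(self, name, backend):
            self.name, self.backend = name, backend

        def handle(self, call_id, message):
            # The node keeps nothing between messages; it fetches and updates
            # the call state in the shared backend instead.
            state = self.backend.call_state.setdefault(call_id, [])
            state.append((self.name, message))
            return state

    backend = Backend()
    nodes = [Node(f"node{i}", backend) for i in range(1, 4)]

    # The LB can send each message of the same call to a different node...
    for msg in ("INVITE", "ACK", "BYE"):
        random.choice(nodes).handle("call-7", msg)

    # ...and the full call state is still available through the backend.
    print(backend.call_state["call-7"])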

Stateless Load Balancer – LB via NAT
The advantage of load balancing via NAT is that the nodes can run any operating system that
supports the TCP/IP protocol, the internal nodes can use private Internet addresses, and only
one externally visible IP address is needed for the load balancer.

[Figure: Ext 1 and Ext 2 send traffic to the LB, which forwards it via NAT to the internal
nodes Node 1, Node 2, …, Node n.]
The disadvantage is that the scalability of the virtual server via NAT is limited, since all traffic
passes through the load balancer.
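
A minimal sketch of the NAT mechanism (an assumption, not the paper's code; Packet and the addresses are illustrative): the LB rewrites the destination of an incoming packet to a private node address and rewrites the source of the reply back to its single public address, which is why all traffic has to cross the LB.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class Packet:
        src: str
        dst: str
        payload: str

    VIP = "203.0.113.10"                          # the only externally visible address
    NODES = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # private internal addresses

    def nat_in(pkt: Packet, node: str) -> Packet:
        # Inbound: rewrite destination VIP -> chosen internal node.
        return replace(pkt, dst=node)

    def nat_out(pkt: Packet) -> Packet:
        # Outbound: rewrite internal source -> VIP, so the external node only sees the LB.
        return replace(pkt, src=VIP)

    request = Packet(src="198.51.100.7", dst=VIP, payload="INVITE")
    inside = nat_in(request, NODES[0])            # forwarded to 10.0.0.1
    reply = nat_out(Packet(src=NODES[0], dst=request.src, payload="200 OK"))
    print(inside, reply, sep="\n")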

Stateless Load Balancer – LB using IP Tunneling
In load balancing using IP tunneling, the load balancer schedules requests to the different
internal nodes, and the nodes return their replies directly to the external nodes.

[Figure: the LB tunnels requests from Ext 1 and Ext 2 to the internal nodes Node 1, Node 2, …,
Node n, which reply directly to the external nodes.]

The original IP packet is encapsulated in another IP packet and directed to a chosen internal node. At
the internal node, the packet is decapsulated and the original packet is retrieved. The original packet
carries the source IP address and port where it originated, which are used to establish a connection
back to the external node.
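
A minimal sketch of the tunneling mechanism (an assumption for illustration): the LB wraps the original packet in an outer packet addressed to an internal node; the node unwraps it and, because the inner packet still carries the external source address, can reply directly to the external node without going back through the LB.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Packet:
        src: str
        dst: str
        payload: object

    def encapsulate(original: Packet, lb_addr: str, node_addr: str) -> Packet:
        # Outer IP header: LB -> internal node; the original packet is the payload.
        return Packet(src=lb_addr, dst=node_addr, payload=original)

    def decapsulate(outer: Packet) -> Packet:
        # The internal node strips the outer header and recovers the original packet.
        return outer.payload

    original = Packet(src="198.51.100.7", dst="203.0.113.10", payload="INVITE")
    tunneled = encapsulate(original, lb_addr="203.0.113.10", node_addr="10.0.0.2")
    inner = decapsulate(tunneled)

    # The reply goes straight back to the original source, bypassing the LB.
    reply = Packet(src=inner.dst, dst=inner.src, payload="200 OK")
    print(reply)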

Dynamic Addition and Removal of Nodes – problem
Typically a stateless load balancer uses a hash algorithm to route a message. In the following
cluster, the hash for a certain call ID yields Node 1.

[Figure: a four-node cluster (Node 1 to Node 4) behind the LB; the hash for the call ID selects
Node 1.]

Now if a node is removed, the hash for the same call returns Node 3 (see the sketch after the
figure below).

[Figure: the same cluster with one node removed (Node 1 to Node 3); the hash for the same call
ID now selects Node 3.]
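
A minimal sketch of the problem (an assumption; the call IDs are made up): with plain modulo hashing, removing or adding a node changes the node chosen for the same call ID, which breaks call affinity.

    import hashlib

    def route(call_id, nodes):
        digest = int(hashlib.md5(call_id.encode()).hexdigest(), 16)
        return nodes[digest % len(nodes)]

    nodes = ["node1", "node2", "node3", "node4"]
    before = route("call-1234", nodes)

    nodes.remove("node4")                # one internal node is removed
    after = route("call-1234", nodes)    # the same call may now map to a different node

    print(before, "->", after)
    # Consistent hashing is one common remedy: only the calls that were on the
    # removed node get remapped, instead of most of them.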

Capacity Based Load Balancing


• In all the discussion above we assumed that the internal nodes have equal processing
capacity. In reality this may not be the case. For example, in a cluster running Diameter, SIP
and COPS applications, it could very easily be that some nodes run all three protocols, some
nodes run just a dedicated protocol, and others run yet different combinations. The point is
that the load balancer cannot distribute traffic to the internal entities assuming they have
equal traffic-handling capacity.

• Assume that today the standard CPU speed is 1600 MHz, and two years later, when we want
to add more nodes (new hardware) to the cluster, the commonly available CPU speed is
2400 MHz. Traffic can then no longer be distributed evenly amongst the internal nodes,
because different nodes have different processing capacity. Hence the need for a
capacity-based load balancer.

Peer Capacity is the parameter of interest for the capacity-based load balancer. For example, if
every node in a cluster has two processors running at 1600 MHz, then Peer Capacity may take
values from 1 to 4, where a value of 1 means that the Peer is designed to consume half of one
processor.
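
A minimal sketch of capacity-based distribution (an assumption; the selection scheme and the peer_capacity values are illustrative, following the 1-to-4 example above): the LB weights each internal node by its Peer Capacity instead of assuming equal capacity.

    import random
    from collections import Counter

    # node name -> Peer Capacity (e.g. 4 = both 1600 MHz processors fully available)
    peer_capacity = {"node1": 4, "node2": 2, "node3": 1}

    def pick_node():
        # Weighted choice: a node with capacity 4 receives roughly four times
        # the traffic of a node with capacity 1.
        nodes, weights = zip(*peer_capacity.items())
        return random.choices(nodes, weights=weights, k=1)[0]

    distribution = Counter(pick_node() for _ in range(7000))
    print(distribution)   # roughly 4000 / 2000 / 1000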

Overload Control
• The arguments for and against doing overload control entirely at the load balancer are
given below:

• Advantages:

• The load balancer is the front door of the cluster. The point of entry is a logical place
to make sure that excess traffic does not enter the cluster (a minimal admission-control
sketch follows this list).

• There is no proprietary interface required between the Peers and the load balancer
for receiving feedback from the nodes.

• Disadvantages:

• The processing logic at the load balancer increases and would thus lower its
performance.

• The load balancer would have to keep track of the load at the internal nodes, thereby
introducing state into it.

• It is not possible to configure the load balancer to use the overload metrics provided
by the nodes.

• It is not possible for the load balancer to detect the load at the internal nodes
accurately. For example, suppose an internal node is shared so that 20% of it is dedicated
to COPS, 30% to Diameter and 50% to SIP. If the load balancer is balancing Diameter traffic
and measures the response time from the Peer to find out how loaded it is, the Diameter
Peer might start consuming CPU allocated to the other protocols, and there is no way the
load balancer can know this.
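
A minimal sketch of overload control done entirely at the load balancer (an assumption; the paper only weighs this placement, and the TokenBucket parameters are made up): a token bucket at the point of entry rejects excess new calls before they enter the cluster.

    import time

    class TokenBucket:
        def __init__(self, rate, burst):
            self.rate, self.capacity = rate, burst
            self.tokens, self.last = burst, time.monotonic()

        def admit(self):
            now = time.monotonic()
            # Refill tokens for the elapsed time, capped at the burst size.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False      # excess traffic is rejected at the point of entry

    limiter = TokenBucket(rate=100.0, burst=20.0)   # admit roughly 100 new calls per second
    if not limiter.admit():
        print("503 Service Unavailable")            # e.g. reject the new SIP call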

Results and Conclusion


• As IP Telephony becomes more popular and Call Processing Servers become more
distributed, the demand for greater scalability and dependability is increasing. Distributed
system performance and dependability can degrade significantly when servers become
overloaded by client requests. To alleviate such bottlenecks, the load balancer must
implement a congestion control algorithm. It should also be possible for the operator or
service provider to add extra hardware to the system without interrupting the ongoing traffic.

• This paper lists four classes of load balancers for IP traffic: Network-Based, Network-Layer,
Transport-Layer and Application-Layer load balancers. Every load balancer should fall into
one of these four categories.

• Performance and scalability are the most important requirements for any load balancer.
However, providing congestion control and the ability to add or remove servers from the
load balancer at run time are very important functionalities as well. A load balancer that can
adapt to changing load on the servers or to a changing topology is called an adaptive load
balancer. Without the intelligence to adapt to changing conditions, a load balancer should
rather be called a load distributor.
