
Congestion Control Algorithms

• General Principles of Congestion Control
• Congestion Prevention Policies
• Traffic Shaping
• Flow Specifications
• Congestion Control in Virtual-Circuit Subnets
• Choke Packets
• Load Shedding
• Jitter Control
• Congestion Control for Multicasting
Congestion

When too much traffic is offered, congestion sets in and performance degrades sharply.
General Principles of Congestion Control
A. Monitor the system.
– detect when and where congestion occurs.
B. Pass information to where action can be taken.
C. Adjust system operation to correct the problem.
Metrics for monitoring
• % of all packets discarded for lack of buffer space
• Avg queue lengths
• No. of packets that time out and are retransmitted
• Avg packet delay
• Std deviation of packet delay.
• In all these metrics, rising numbers indicate growing congestion.
General Principles of Congestion Control
•Passing information to where action can be taken
•A bit or field can be reserved in every packet for routers to fill in whenever congestion rises above some threshold level.
•When a router detects this congested state, it fills in the field in all outgoing packets, to warn the neighbours.
Congestion Prevention Policies

(Figure 5-26: Policies that affect congestion.)

Traffic Shaping
• Traffic shaping forces packets to be transmitted at a more predictable rate.
• Widely used in ATM networks
• Regulates the avg rate of data transmission.
• Monitoring a traffic flow is called traffic
policing.
Traffic Shaping
The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.
Traffic Shaping
The Leaky Bucket Algorithm
• Leaky bucket – no matter at what rate water enters the bucket, the outflow is at a constant rate whenever there is any water in the bucket, and zero when the bucket is empty.
– When the bucket is full, any additional water
entering it spills over the sides and is lost
Traffic Shaping
The Leaky Bucket Algorithm
• Conceptually, each host is connected to the network by an interface containing a finite internal queue.
• If a packet arrives when the queue is full, the packet is discarded.
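The interface described above can be sketched as a simple queue that drains at a fixed rate; the class name, capacity, and rate are illustrative choices, not values from the text.

```python
from collections import deque

class LeakyBucket:
    """Queue-based leaky bucket: packets drain at a constant rate;
    arrivals to a full queue are discarded. Sketch only."""
    def __init__(self, capacity, rate):
        self.capacity = capacity   # max packets the internal queue holds
        self.rate = rate           # packets released per clock tick
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
            return True            # packet accepted
        return False               # bucket full: packet discarded

    def tick(self):
        """Release up to `rate` packets this tick (zero when empty)."""
        released = []
        for _ in range(min(self.rate, len(self.queue))):
            released.append(self.queue.popleft())
        return released
```

Note that the outflow is constant regardless of how bursty the arrivals are, which is exactly the smoothing behaviour described above.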
Traffic Shaping
The Token Bucket Algorithm

(Figure 5-34.) (a) Before. (b) After.

The token bucket allows some burstiness (up to the number of tokens the bucket can hold).
Traffic Shaping
The Token Bucket Algorithm
• For many applications, it is better to allow the
output to speed up somewhat when large
bursts arrive, so a more flexible algorithm is
needed, preferably one that never loses data.
• Token bucket algorithm – here, the bucket holds tokens, generated by a clock at the rate of one token every ΔT sec.
• In fig. (a), the bucket holds three tokens, with five packets waiting to be transmitted.
Traffic Shaping
The Token Bucket Algorithm
• For a packet to be transmitted, it must
capture and destroy one token
• In fig. (b), three of the five packets have gotten through, but the other two are stuck waiting for two more tokens to be generated.
Traffic Shaping
The Token Bucket Algorithm
• Comparison between the leaky bucket and token bucket algorithms:
– The leaky bucket does not allow idle hosts to save permission to send large bursts later, but the token bucket does allow saving, up to the maximum size of the bucket, n.
– Token bucket algorithm throws away tokens when
the bucket fills up but never discards packets, but
leaky bucket algorithm discards packets when the
bucket fills up.
Traffic Shaping
The Token Bucket Algorithm
• Implementation of the token bucket algorithm
– The counter is incremented by one every ΔT and
decremented by one whenever a packet is sent.
– When the counter hits zero, no packets may be
sent.
– In the byte-count variant, the counter is incremented by k bytes every ΔT and decremented by the length of each packet sent.
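The counter implementation described above can be sketched as follows; the class and method names are assumptions for illustration.

```python
class TokenBucket:
    """Counter-based token bucket: the counter gains one token every
    delta-T interval (capped at the bucket size n) and a packet may
    only be sent while the counter is positive. Sketch only."""
    def __init__(self, max_tokens, tokens=0):
        self.max_tokens = max_tokens  # bucket size n
        self.tokens = tokens          # current counter value

    def tick(self):
        """One delta-T interval elapses: generate a token, up to n.
        Excess tokens are thrown away, but packets never are."""
        self.tokens = min(self.tokens + 1, self.max_tokens)

    def send(self):
        """Capture and destroy one token if available."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False  # counter at zero: packet must wait, not discarded
```

An idle host accumulates tokens up to n, which is exactly the saved burst permission the comparison above describes.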
Flow Specification
• Traffic shaping is most effective when the sender, receiver, and subnet all agree to it.
• To get agreement, it is necessary to specify
the traffic pattern in a precise way.
• This agreement is called a flow specification.
• Flow specification can apply either to the
packets sent on a virtual circuit, or to a
sequence of datagrams sent between a
source and a destination.
Flow Specification
• Some of the parameters of flow specification
– Max. packet size
– Token bucket rate
– Token bucket size
– Max. transmission rate, etc.
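A flow specification with the parameters listed above might be represented as a simple record; the field names and units (bytes, bytes/sec) are assumptions, not part of the text.

```python
from dataclasses import dataclass

@dataclass
class FlowSpec:
    """Illustrative flow-specification record. The fields follow the
    parameters named above; units are assumed."""
    max_packet_size: int        # bytes
    token_bucket_rate: int      # bytes/sec
    token_bucket_size: int      # bytes
    max_transmission_rate: int  # bytes/sec

# A hypothetical agreement between sender, receiver, and subnet:
spec = FlowSpec(max_packet_size=1500,
                token_bucket_rate=125_000,
                token_bucket_size=64_000,
                max_transmission_rate=250_000)
```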
Congestion Control in Virtual-Circuit
Subnets: Admission control

(a) A congested subnet. (b) A redrawn subnet that eliminates congestion, and a virtual circuit from A to B.
Congestion Control in Virtual-Circuit
Subnets: Admission control
Admission Control
- Once congestion has been signaled, no more virtual circuits are set up until the problem has gone away.
Alternative Approach
- Allow new virtual circuits, but carefully route them around the problem areas.
Example
- A host attached to router A wants to set up a connection to a host attached to router B. Normally, this connection would pass through one of the congested routers. To avoid this, we can redraw the subnet as shown in fig. (b), omitting the congested routers and all of their lines. The dashed line shows a possible route for the virtual circuit that avoids the congested routers.
Choke Packets
• Each router can monitor the utilization of its output lines and other resources, e.g., by associating with each line a real variable, u, whose value between 0.0 and 1.0 reflects the recent utilization of that line.
• u_new = a·u_old + (1 − a)·f
• f is a sample of the instantaneous line utilization (either 0 or 1).
• a determines how fast the router forgets recent history.
• When u > threshold, the output line enters a warning state.
• Each newly arrived packet is checked to see if its output line is in the warning state.
• If so, the router sends a choke packet back to the source host, giving it the destination found in the packet.
• When the source host gets the choke packet, it is required to
reduce the traffic sent to the specific destination by X percent.
• Hosts can reduce traffic by adjusting their policy parameters,
for example window size.
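The utilization update u_new = a·u_old + (1 − a)·f can be sketched as follows; the choice of a = 0.9 and the threshold value are illustrative, not prescribed by the text.

```python
def update_utilization(u_old, f, a=0.9):
    """Exponentially weighted average of line utilization.
    f is an instantaneous sample (0 or 1); a close to 1 means the
    router forgets recent history slowly."""
    return a * u_old + (1 - a) * f

# A sustained run of busy samples pushes u up toward 1.0:
u = 0.0
for sample in [1, 1, 1, 1]:
    u = update_utilization(u, sample)

threshold = 0.3                 # illustrative warning threshold
line_in_warning_state = u > threshold
```

With a = 0.9, four consecutive busy samples raise u from 0.0 to 0.3439, crossing the example threshold and putting the line into the warning state.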
Weighted Fair Queueing
•While using choke packets, the problem is that the action to be taken
by the source hosts is voluntary.
•Because of this, some hosts may get less bandwidth than others
•To get around this problem, fair queuing algorithm has been proposed
•Routers have multiple queues for each output line, one for each
source
•When a line becomes idle, the router scans the queues round
robin, taking the first packet on the next queue.
•ie., with n hosts competing for a given output line, each host gets
to send one out of every n packets
•Problem – gives more bandwidth to hosts that use large
packets than to hosts that use small packets.
•To fix this, instead of a packet-by-packet round robin, a byte-by-byte round robin is simulated.
•A remaining problem is that it gives the same priority to all hosts. It is often desirable to give file and other servers more bandwidth than clients; doing so is called weighted fair queueing.
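The byte-by-byte round robin can be simulated by giving each packet a finish number, the cumulative byte count of its own flow, and transmitting packets in finish-number order; this is a minimal sketch with an assumed input shape of {flow_id: [packet_length, ...]}.

```python
def fair_queue_order(flows):
    """Simulated byte-by-byte fair queueing. Each packet's finish
    number is its flow's cumulative byte count; packets go out in
    finish-number order, so small packets no longer lose to big ones.
    Returns a list of (flow_id, packet_index) in send order."""
    sched = []
    for flow, lengths in sorted(flows.items()):
        finish = 0
        for i, length in enumerate(lengths):
            finish += length          # bytes of this flow sent so far
            sched.append((finish, flow, i))
    sched.sort()                      # earliest finish number first
    return [(flow, i) for _, flow, i in sched]
```

A weighted variant would divide each flow's byte counts by its weight before sorting, giving servers more than one byte per round.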
Hop-by-Hop Choke Packets
(in high speed nets)
Before the choke packet reaches the source, a lot of data will already have been sent by the source.
An alternative approach is to have the choke packet take effect at every hop it passes through.
Here, as soon as the choke packet reaches F, F is required to reduce the flow to D, and so on.

(a) A choke packet that affects only the source. (b) A choke packet that affects each hop it passes through.
Load Shedding
• When routers are being inundated by packets
that they cannot handle, they just throw them
away – load shedding
• Which packet to discard may depend on the
applications running.
• Example, consider transmitting a document
containing ASCII text and pictures –losing a line
of pixels in some image is far less damaging than
losing a line of readable text.
Load Shedding
• To implement intelligent discard policy,
applications must mark their packets in priority
classes to indicate how important they are.
• For this to work, there must be some significant incentive not to mark every packet as VERY IMPORTANT – NEVER EVER DISCARD.
• Incentive can be in the form of money – low
priority packets being cheaper to send than the
high priority ones.
Load Shedding
• Marking packets by class requires one or
more header bits in which to put the priority.
• ATM cells have 1 bit reserved in the header
for this purpose, so every ATM cell is labeled
either as low priority or high priority.
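An intelligent discard policy along these lines might be sketched as a bounded buffer that sheds the lowest-priority packet when full; two priority classes are used, mirroring the one-bit ATM label, and all names here are illustrative.

```python
import heapq
from itertools import count

class PriorityDropQueue:
    """Load-shedding sketch: a bounded buffer keyed on priority
    (0 = low, 1 = high, like a one-bit header label). When full,
    the lowest-priority packet is shed, which may be the arrival."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []            # min-heap: lowest priority on top
        self._seq = count()       # FIFO tiebreak within a class

    def enqueue(self, priority, packet):
        """Returns the shed packet if the buffer overflowed, else None."""
        heapq.heappush(self.heap, (priority, next(self._seq), packet))
        if len(self.heap) > self.capacity:
            return heapq.heappop(self.heap)[2]   # shed lowest priority
        return None
```

In the document example, a line of pixels would travel as priority 0 and a line of text as priority 1, so images are shed before readable text.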
Jitter Control
• Jitter (the variation in arrival time between packets) can be bounded by computing the expected transit time for each hop along the path.
• When a packet arrives at a router, the router checks
to see how much the packet is behind or ahead of its
schedule.
• This information is stored in the packet and updated
at each hop. If the packet is ahead of schedule, the
router tries to get it out the door quickly.
Jitter control cont…
• In fact, the algorithm for determining which
of several packets competing for an output
line should go next can always choose the
packet furthest behind in its schedule.
• In this way, packets that are ahead of
schedule get slowed down and packets that
are behind schedule get speeded up, in both
cases reducing the amount of jitter.
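The scheduling rule above, always choosing the packet furthest behind its schedule, can be sketched in a few lines; the (packet_id, lateness) data shape is an assumption for illustration.

```python
def next_packet(waiting):
    """Among packets competing for an output line, pick the one
    furthest behind schedule. Each entry is (packet_id, lateness)
    in some time unit: lateness > 0 means behind schedule,
    lateness < 0 means ahead of schedule."""
    return max(waiting, key=lambda p: p[1])[0]
```

Packets ahead of schedule (negative lateness) thus keep waiting and are slowed down, while late packets are sped up, squeezing the jitter from both sides.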
Congestion control for multicasting
• All of the congestion control algorithms
discussed so far deal with messages from a
single source to a single destination.
• In this section we will describe a way of
managing multicast flows from multiple
sources to multiple destinations.
RSVP-Resource reSerVation
Protocol
• It allows multiple senders to transmit to
multiple groups of receivers, permits
individual receivers to switch channels freely,
and optimizes bandwidth use while at the
same time eliminating congestion.
• The protocol uses multicast routing based on spanning trees.
RSVP-cont…..
• Each group is assigned a group address.
• To send to a group, a sender puts the group’s
address in its packets.
• The standard multicast routing algorithm
then builds a spanning tree covering all group
members.
• The routing algorithm is not part of RSVP.
RSVP-cont…..
• The only difference with normal multicasting
is a little extra information that is multicast
to the group periodically to tell the routers
along the tree to maintain certain data
structures in their memories.
• To get better reception and eliminate
congestion, any of the receivers in a group
can send a reservation message up the tree
to the sender.
RSVP-cont…..
• The message is propagated using the reverse path
forwarding algorithm.
• At each hop, the router notes the reservation and
reserves the necessary bandwidth.
• If insufficient bandwidth is available, it reports back
failure.
• By the time the message gets back to the source,
bandwidth has been reserved all the way from the
sender to the receiver making the reservation request
along the spanning tree.
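The hop-by-hop reservation can be sketched as a walk along the reverse path that reserves bandwidth on each link and reports failure if any link lacks capacity; the rollback of partial reservations and all names are illustrative assumptions, not details given in the text.

```python
def reserve_path(path, capacity, demand):
    """Walk the links from receiver toward sender, reserving
    `demand` bandwidth on each. On failure, roll back any links
    already reserved (an assumed cleanup step) and report False.
    `capacity` maps link -> remaining bandwidth and is mutated."""
    reserved = []
    for link in path:
        if capacity[link] < demand:
            for done in reserved:         # undo partial reservation
                capacity[done] += demand
            return False                  # report failure back down the tree
        capacity[link] -= demand
        reserved.append(link)
    return True                           # bandwidth reserved end to end
```

By the time the walk completes, bandwidth is held on every hop from the receiver back to the sender, matching the behaviour described above.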
RSVP-The ReSerVation Protocol
Bandwidth reservation is done with reverse path forwarding along
the spanning tree.

(a) A network. (b) The multicast spanning tree for host 1. (c) The multicast spanning tree for host 2.
RSVP-The ReSerVation Protocol (2)

(a) Host 3 requests a channel to host 1. (b) Host 3 then requests a second channel, to host 2. (c) Host 5 requests a channel to host 1.