
A

REPORT

On

CONGESTION CONTROL

Presented by: Guided by:

ZAFARYAB HAIDER Mr. BRAHMA DEO SAH

Mahatma Gandhi Mission's

College of Engineering & Technology, Sector 62,

NOIDA (U.P)

UP TECHNICAL UNIVERSITY: LUCKNOW


ACKNOWLEDGEMENT

The written word has an unfortunate tendency to degenerate genuine feelings of
gratitude into stiff formality, but I have no other way to record my feelings permanently.

First of all, I, Zafaryab Haider of BT-CS, would like to express my deep sense of gratitude
towards my guide, Mr. Brahma Deo Sah, Lecturer, Computer Science & Engineering
Department, for his valuable guidance, constant encouragement and inspiring efforts towards
the completion of this dissertation. Without his efforts this dissertation could not have been completed.

I would like to thank Mr. Mohd Haider, Head of the Computer Science & Engineering
Department, for the college facilities he provided.

I am also thankful to my friends for their timely advice, moral support, and
encouragement.

I also acknowledge the co-operation of all the other individuals who directly or indirectly
helped in making this report a success.

I
ABSTRACT

Congestion is said to occur in the network when the resource demands exceed the
capacity and packets are lost due to too much queuing in the network. During congestion, the
network throughput may drop to zero and the path delay may become very high. A congestion
control scheme helps the network to recover from the congestion state.

A congestion avoidance scheme allows a network to operate in the region of low delay
and high throughput. Such schemes prevent a network from entering the congested state.
Congestion avoidance is a prevention mechanism while congestion control is a recovery
mechanism.

We compare the concept of congestion avoidance with those of flow control and
congestion control. A number of possible alternatives for congestion avoidance have been
identified, and from these a few were selected for study. The criteria for selection and the goals for these
schemes are described; in particular, we wanted each scheme to be globally efficient, fair,
dynamic, convergent, robust, distributed, and configuration independent. These goals, and the test
cases used to verify whether a particular scheme meets them, are described.

We model the network and the user policies for congestion avoidance as a feedback
control system. The key components of a generic congestion avoidance scheme are: congestion
detection, congestion feedback, feedback selector, signal filter, decision function, and
increase/decrease algorithms. These components have been explained.
The congestion avoidance research was carried out using a combination of analytical modeling and
simulation techniques, and the features of the simulation model used are described. This is the
first report in a series on congestion avoidance schemes; the other reports in the series describe the
application of these ideas, leading to the development of specific congestion avoidance schemes.

II
INDEX

Acknowledgement I
Abstract II
List of Figures IV
1. Introduction 1
2. Congestion Control 2-3
2.1 What is Congestion? 2
2.2 Causes of Congestion 2
3. Principles of Congestion Control 4
4. Congestion Control Techniques 5-7
4.1 Open Loop Techniques 5
4.2 Closed Loop Techniques 6
4.3 Load Shedding 7
5. Example of Congestion Control in TCP 8
6. Congestion Control at Routers 10
7. Traffic Shaping 12-13
7.1 Leaky Bucket 12
7.2 Token Bucket 13
8. Conclusion 14
9. References 15
LIST OF FIGURES

Figure No. Name of Figure Page No.

1. Performance Degradation during Congestion 2


2. Back Pressure 6
3. Choke Packet 6
4. Slow Start 8
5. Fair Queuing 10
6. Leaky Bucket
a. With Water 12
b. With Packet 12
7. Token Bucket
a. Before 13
b. After 13

IV
INTRODUCTION

The Internet can be considered a queue of packets, where transmitting nodes are
constantly adding packets and receiving nodes are removing them. Now consider a situation
where too many packets are present in this queue (or in the Internet, or in a part of it), so that
transmitting nodes are pouring packets in at a higher rate than receiving nodes are removing
them. This degrades performance, and such a situation is termed Congestion. The main cause of
congestion is that more packets enter the network than it can handle. So the objective of
congestion control can be summarized as: keep the number of packets in the network below the
level at which performance falls off dramatically.
The nature of a packet-switched network can be summarized in the following points:
A network of queues
At each node, there is a queue of packets for each outgoing channel
If the packet arrival rate exceeds the packet transmission rate, the queue size grows without
bound
When the line for which packets are queuing becomes more than 80% utilized, the queue
length grows alarmingly
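The last point can be illustrated with a toy Python sketch (the rates and step counts below are illustrative, not taken from this report): below capacity the queue stays empty, while above capacity it grows without bound.

```python
# Toy fluid model of a router queue: each step, 'arrival_rate' packets
# arrive and up to 'service_rate' packets are transmitted.
def queue_length(arrival_rate, service_rate, steps):
    """Return the queue backlog over time for constant rates."""
    q, history = 0.0, []
    for _ in range(steps):
        q = max(0.0, q + arrival_rate - service_rate)  # backlog cannot go negative
        history.append(q)
    return history

# Below capacity the queue stays empty; above it, growth is unbounded.
stable = queue_length(arrival_rate=8, service_rate=10, steps=100)
congested = queue_length(arrival_rate=12, service_rate=10, steps=100)
```

In the second run the backlog grows by two packets every step, which is exactly the unbounded queue growth described above.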

When the number of packets dumped into the network is within its carrying capacity,
they are all delivered, except for a few that have to be rejected due to transmission errors, and
the number delivered is proportional to the number of packets sent. However, as traffic increases
too far, the routers are no longer able to cope, and they begin to lose packets. This tends to make
matters worse. At very high traffic, performance collapses completely and almost no packets are
delivered. In the following sections, the causes of congestion, the effects of congestion and
various congestion control techniques are discussed in detail.

1
CONGESTION

WHAT IS CONGESTION?
Congestion occurs when a source sends more packets than the destination can handle;
when congestion occurs, performance degrades.
Packets are normally stored temporarily in the buffers of the source and the destination
before being forwarded to their upper layers. Congestion occurs when these buffers fill up on the
destination side. At a very high traffic rate, performance collapses completely and no packets are
delivered.
This is well demonstrated by the graph given below:-

Fig.1 Performance degradation in congestion


CAUSES OF CONGESTION:-
The main causes of the congestion are as follows:-
• Packet arrival rate exceeds the outgoing link capacity.
• Insufficient memory to store arriving packets
• Bursty Traffic
• Slow processor
If there is insufficient memory to hold these packets, packets will be lost (dropped).
Adding more memory may not help in certain situations: even if routers had an infinite amount
of memory, congestion would get worse rather than better, because by the time packets reach the
head of the queue to be dispatched onto the output line, they have already timed out (repeatedly),
and duplicates may be present. All these packets are still forwarded from router to router towards
the destination, only increasing the load on the network more and more.
Slow processors also cause congestion. If a router's CPU is slow at performing the tasks required
of it (queuing buffers, updating tables, reporting exceptions, etc.), queues can build up even when
there is excess line capacity.
Congestion tends to feed upon itself and get worse. Routers respond to overloading by
dropping packets. When these packets contain TCP segments, the segments don't reach their
destination and are therefore left unacknowledged, which eventually leads to timeouts and
retransmissions. So the major cause of congestion is often the bursty nature of traffic: if hosts
could be made to transmit at a uniform rate, congestion would be much less common, and the
other causes alone would rarely lead to congestion, since they mainly act as catalysts that
intensify congestion when the traffic is bursty.
ISSUES:- If the rate of packet processing is less than the rate of packet arrival, the input queues
of the router become longer and longer; and if the rate of packet processing is greater than the
rate of packet departure, the output queues of the router become longer and longer.

3
PRINCIPLES OF CONGESTION CONTROL:-

Congestion control refers to the techniques and mechanisms that can either prevent congestion
before it happens, or remove congestion after it has happened.
The main steps followed in congestion control are:-
1) Closed loop solutions, which are based on a feedback loop:-
• Monitor the system to detect when and where congestion occurs.
• Pass this information to places where action can be taken.
• Adjust system operation to correct the problem.
2) Open loop solutions attempt to solve the problem through good design, because once the
system is up and running, midcourse corrections are not made. This includes deciding when to
accept new traffic, deciding when to discard packets and which ones, and making scheduling
decisions at various points in the network.

4
CONGESTION CONTROL TECHNIQUES
Congestion control techniques are broadly divided into two categories:-
1. Open loop congestion control (prevention).
• Retransmission policy
• Window policy
• Acknowledgement policy
• Discarding policy
• Admission policy

2. Closed loop congestion control (removal).


• Back pressure
• Choke packet
• Implicit signaling
• Explicit signaling

OPEN LOOP CONGESTION CONTROL:


Here policies are applied to prevent congestion before it happens.
Retransmission policy: If the sender feels that a sent packet is lost or corrupted, the packet
needs to be retransmitted. In general, retransmission may increase congestion, which can be
prevented by using a good retransmission policy and well-chosen retransmission timers.

Window policy: The type of window used at the sender's end may also affect congestion. A
Selective Repeat window is better than a Go-Back-N window, because with Selective Repeat
only the lost packets are resent, so there is no duplication.

Acknowledgement policy: The acknowledgement policy of the receiver can also affect
congestion. If the receiver does not acknowledge every packet it receives, it may slow down the
sender, thereby preventing congestion; several approaches are used for this. Sending fewer
acknowledgements also imposes less load on the network.
Discarding policy: A good discarding policy at the routers may prevent congestion while not
harming the integrity of the transmission, e.g. discarding less sensitive packets in an audio
transmission, thereby preserving the quality of the sound while preventing congestion.

Admission policy: Here the resource requirements of a flow are checked before it is admitted
to the network. If there is congestion, or a possibility of congestion in the future, the router
denies establishment of the virtual-circuit connection.

CLOSED LOOP CONGESTION CONTROL:


These policies try to remove congestion once it has occurred:-
Back pressure: The technique in which a congested node stops receiving data from the
immediate upstream node or nodes. This may congest the upstream nodes, which in turn reject
data from nodes further upstream, and so on. It is node-to-node congestion control that
propagates in the direction opposite to the data flow, as illustrated in the given figure:-

Fig. 2 Back pressure


Choke packet: A choke packet is a packet sent by a node directly to the source to inform it
about congestion. It is not a node-to-node approach: here the router warns the source station
directly, and the intermediate nodes are not warned. The figure given below illustrates it:-

Fig 3 Choke Packet


6
Implicit signaling: In this case there is no communication between the congested nodes and the
source station; the source guesses that there is congestion. E.g., when a source sends several
packets and does not get an acknowledgement for a while, it assumes the network is congested
and slows down.

Explicit Signaling: In this case the node experiencing congestion explicitly signals the source
or the destination. It differs from the choke-packet approach in that the signal is included in a
packet that carries data; no new packet is used. The signal can travel either backward or forward.
• Backward Signaling: The signal warns the source that there is congestion and that it needs
to slow down, to avoid the discarding of packets.
• Forward Signaling: The signal warns the destination that there is congestion and that it
needs to slow down in sending the acknowledgements.

LOAD SHEDDING:
When none of the above techniques makes the congestion disappear, routers can bring out the
heavy artillery: load shedding. It is one of the simplest and most effective techniques. In this
method, whenever a router finds that there is congestion in the network, it simply starts dropping
packets. There are different methods by which a router can decide which packets to drop; the
simplest is to choose the packets to be dropped at random. More effective methods exist, but
they require some cooperation from the sender. For many applications, some packets are more
important than others, so the sender can mark packets with priority classes to indicate how
important they are. If such a priority policy is implemented, the intermediate nodes can drop
packets from the lower priority classes and use the available bandwidth for the more important
packets.

7
EXAMPLE OF CONGESTION CONTROL
Congestion Control in TCP:-
Nowadays the sender's window is controlled not only by the receiver but also by the congestion
in the network. The sender has two pieces of information: the receiver-advertised window size
(rwnd) and the congestion window size (cwnd). The actual size of the sender's window is the
minimum of these two:
Actual window size = minimum (rwnd, cwnd)

TCP’s general policy for handling congestion is based on three phases: slow start, congestion
avoidance and congestion detection.

Slow Start: Exponential Increase: This algorithm is based on the idea that the size of cwnd
starts at one maximum segment size (MSS), i.e. cwnd = 1 MSS, and that the sender's window
size always equals cwnd, since cwnd is much smaller than rwnd. After every acknowledgement,
cwnd is incremented by 1 MSS, which doubles the window each round:
Start -- cwnd = 1
After round 1 -- cwnd = 2^1 = 2
After round 2 -- cwnd = 2^2 = 4
After round 3 -- cwnd = 2^3 = 8

Fig 4 Slow Start, Exponential Increase

In the case of delayed ACKs, the increase in the window size is less than a power of 2.
Slow start cannot continue indefinitely; there is a threshold (ssthresh) that ends this phase,
whose value is commonly 65,535 bytes.
Congestion Avoidance: Additive Increase: This algorithm slows down the exponential growth
of the previous phase once the threshold has been reached, thereby avoiding congestion. The
window undergoes an additive increase: the size of the congestion window is increased by one
each time the whole window is acknowledged (each round), until congestion is detected. To
illustrate this, see the previous figure:-
Start -- cwnd = 1
After round 1 -- cwnd = 1+1 = 2
After round 2 -- cwnd = 2+1 = 3
After round 3 -- cwnd = 3+1 = 4
In this case, after the sender has received acknowledgements for a complete window of
segments, the window size is increased by one segment.

Congestion Detection: Multiplicative Decrease: If congestion occurs, cwnd must be
decreased: the threshold is set to half of the current window size. This is called multiplicative
decrease.
Most TCP implementations have two reactions:
1. If a timeout occurs, there is a stronger possibility of congestion. In this case TCP reacts
strongly:
• It sets the value of the threshold to half of the current window size.
• It sets cwnd to the size of one segment.
• It starts the slow-start phase.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a segment
may have been dropped, but some segments after it have arrived safely, since the three duplicate
ACKs were received. In this case TCP reacts weakly:
• It sets the value of the threshold to half of the current window size.
• It sets cwnd to the value of the threshold.
• It starts the congestion-avoidance phase.
Note:
• If a segment is missing, the sender waits for three duplicate ACKs before taking any
step.
• Slow start is used at the start of the connection (and again after a timeout); at all other
times additive increase is resumed.
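The three phases and the two reactions above can be put together in a short Python sketch. The class and method names are mine, and real TCP stacks work per-ACK and in bytes rather than per-round and in MSS units, so this is only an illustrative model of the policy, not an implementation of TCP:

```python
# Sketch of TCP congestion control policy, in units of segments (MSS).
class TcpCongestionSketch:
    def __init__(self, ssthresh=16):
        self.cwnd = 1              # slow start begins with 1 MSS
        self.ssthresh = ssthresh   # threshold ending the slow-start phase

    def on_round_acked(self):
        """A whole window of segments was acknowledged."""
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2         # slow start: exponential increase
        else:
            self.cwnd += 1         # congestion avoidance: additive increase

    def on_timeout(self):
        """Strong congestion signal: restart slow start."""
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = 1

    def on_three_dup_acks(self):
        """Weak congestion signal: halve and resume additive increase."""
        self.ssthresh = max(self.cwnd // 2, 1)
        self.cwnd = self.ssthresh

    def send_window(self, rwnd):
        return min(rwnd, self.cwnd)   # actual window = minimum (rwnd, cwnd)
```

With ssthresh = 8, successive acknowledged rounds take cwnd through 2, 4, 8 (exponential) and then 9, 10, … (additive); a timeout then resets cwnd to 1 and halves the threshold.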
CONGESTION CONTROL AT ROUTERS
Packets from different flows gather at a router for processing, where they are queued; good
queuing improves the quality of service. Queuing algorithms determine:
– How packets are buffered.
– Which packets get transmitted.
– Which packets get marked or dropped.
Some of the possible choices in queuing algorithms:
FIFO Queuing: The first packet to arrive is the first to be transmitted. FIFO follows a Drop-
Tail policy: if the average arrival rate is higher than the average processing rate, the queue fills
up and newly arriving packets are discarded. Dropping packets from several connections at once
introduces global synchronization.
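A minimal sketch of Drop-Tail FIFO queuing (a hypothetical class, not any router's actual code): arrivals are queued while there is room, and once the buffer is full every further arrival is simply dropped from the tail.

```python
from collections import deque

# FIFO queue with Drop-Tail: when the buffer is full, new arrivals are discarded.
class DropTailFifo:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.dropped = 0

    def arrive(self, packet):
        if len(self.queue) < self.capacity:
            self.queue.append(packet)
        else:
            self.dropped += 1      # tail drop: arriving packet discarded

    def transmit(self):
        """First packet to arrive is the first to be transmitted."""
        return self.queue.popleft() if self.queue else None
```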

Priority Queuing: Packets are first marked with a priority, and the router implements multiple
FIFO queues, one per priority class. It always transmits out of the highest-priority non-empty
queue, and does not stop serving that queue until it is empty. This is better than FIFO, since
higher-priority data is transferred first.
Problem: high-priority packets can 'starve' lower-priority packets.
One practical use in the Internet is to protect routing-update packets by giving them a higher
priority and a special queue at the router.

Fair Queuing: The basic problem with FIFO is that it does not discriminate between different
packet sources; an "ill-behaved" flow can capture an arbitrarily large share of the network's
capacity. Fair Queuing (FQ) was therefore introduced. It maintains a separate queue for each
flow, and services these queues in a round-robin fashion.
Fig. 5 Fair Queuing (flows 1-4 receiving round-robin service)


If a queue reaches a particular length, additional packets for it are discarded, ensuring that no
source can claim an arbitrarily large share of the network. Ideal FQ does bit-by-bit round-robin.
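The packet-by-packet approximation of that round-robin service can be sketched as follows (flow IDs and packet names here are made up): each pass serves at most one packet from every non-empty flow queue.

```python
from collections import deque

# Packet-by-packet round robin over per-flow queues (the coarse
# approximation of ideal bit-by-bit Fair Queuing).
def fair_queue_service(flows):
    """flows: dict flow_id -> list of packets.
    Yields (flow_id, packet) pairs in round-robin service order."""
    queues = {fid: deque(pkts) for fid, pkts in flows.items()}
    while any(queues.values()):
        for fid, q in queues.items():
            if q:
                yield fid, q.popleft()   # one packet per flow per pass

order = list(fair_queue_service({1: ["a1", "a2"], 2: ["b1"], 3: ["c1", "c2"]}))
```

Even though flow 1 has the most packets, it cannot crowd out flows 2 and 3: every pass gives each backlogged flow exactly one transmission opportunity.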

Weighted Fair Queuing (WFQ): Here we assign a weight to each flow (queue); the weight
logically specifies the number of bits (or, at packet granularity, packets) to transmit each time
the router services that queue. A higher priority means a higher weight. This controls the
percentage of the link capacity that each flow receives. If the weights are 3, 2 and 1, then three
packets are processed from the first queue, two from the second and one from the third on each
pass.
If the system imposed no priorities, all the weights would be the same.
• An issue: how does the router learn of the weight assignments?
– Manual configuration
– Signaling from sources or receivers.
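Under the assumption that weights are applied at packet granularity, the 3, 2, 1 example above can be sketched like this (queue contents are illustrative):

```python
from collections import deque

# Weighted round robin: each pass takes up to weight[i] packets from queue i.
def weighted_service_order(queues, weights):
    qs = [deque(q) for q in queues]
    order = []
    while any(qs):
        for q, w in zip(qs, weights):
            for _ in range(w):        # serve up to 'w' packets from this queue
                if q:
                    order.append(q.popleft())
    return order

pkts = weighted_service_order(
    [["a1", "a2", "a3", "a4"], ["b1", "b2", "b3"], ["c1", "c2"]],
    weights=[3, 2, 1])
```

On the first pass this serves three packets from the first queue, two from the second and one from the third, so while all three queues stay backlogged the flows receive link capacity in the ratio 3:2:1.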

11
TRAFFIC SHAPING
Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the
network. There are two methods to do it:
Leaky Bucket: Consider a bucket with a small hole at the bottom: whatever the rate of water
pouring into the bucket, the rate at which water comes out of that small hole is constant. This
scenario is depicted in Fig. 6(a). Once the bucket is full, any additional water entering it spills
over the sides and is lost (i.e. it does not appear in the output stream through the hole
underneath).

The same idea of a leaky bucket can be applied to packets, as shown in Fig. 6(b). Conceptually,
each network interface contains a leaky bucket, and the following steps are performed:
 When the host has a packet to send, the packet is thrown into the bucket.
 The bucket leaks at a constant rate, meaning the network interface transmits packets at a
constant rate.
 Bursty traffic is converted into uniform traffic by the leaky bucket.
 In practice the bucket is a finite queue that outputs at a finite rate.

This arrangement can be simulated in the operating system or built into the hardware. The
implementation of this algorithm is easy and consists of a finite queue: whenever a packet
arrives, it is queued up if there is room in the queue; if there is no room, the packet is
discarded.

Fig.6 (a) A leaky bucket with water. (b) A leaky bucket with packets
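The finite-queue implementation described above can be sketched in Python (the bucket size and leak rate below are illustrative, and one `tick` stands for one transmission interval):

```python
from collections import deque

# Leaky bucket: a finite queue that admits arrivals while there is room
# and drains at a constant rate each tick.
class LeakyBucket:
    def __init__(self, bucket_size, leak_rate):
        self.bucket = deque()
        self.bucket_size = bucket_size   # finite queue: room in the bucket
        self.leak_rate = leak_rate       # packets output per tick

    def arrive(self, packet):
        """Queue the packet if there is room; otherwise it 'spills over'."""
        if len(self.bucket) < self.bucket_size:
            self.bucket.append(packet)
            return True
        return False                     # bucket full: packet discarded

    def tick(self):
        """Output at a constant rate, however bursty the input was."""
        out = []
        for _ in range(min(self.leak_rate, len(self.bucket))):
            out.append(self.bucket.popleft())
        return out
```

A burst of five arrivals into a bucket of size three accepts only the first three packets, and they then leave at exactly one packet per tick: the burst has been smoothed into uniform traffic.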
Token Bucket: The leaky bucket algorithm described above enforces a rigid pattern on the output
stream, irrespective of the pattern of the input. For many applications it is better to allow the output
to speed up somewhat when a larger burst arrives than to lose the data. The token bucket algorithm
provides such a solution: here the bucket holds tokens, generated at regular intervals.
The main steps of this algorithm can be described as follows:

 At regular intervals, tokens are thrown into the bucket.
 The bucket has a maximum capacity.
 If there is a ready packet, a token is removed from the bucket and the packet is sent.
 If there is no token in the bucket, the packet cannot be sent.

Figure 7 shows the two scenarios, before and after the tokens present in the bucket have been
consumed. In Fig. 7(a) the bucket holds two tokens and three packets are waiting to be sent out of
the interface; in Fig. 7(b) two packets have been sent out by consuming two tokens, and one packet
is still left.
The token bucket algorithm is less restrictive than the leaky bucket algorithm, in the sense that it
allows bursty traffic. However, the size of a burst is limited by the number of tokens available in the
bucket at that instant of time.
The implementation of the basic token bucket algorithm is simple: a variable is used just to count
the tokens. This counter is incremented every t seconds and decremented whenever a packet is sent.
When the counter reaches zero, no further packets are sent, as shown in Fig. 7.

Fig. 7 (a) Before. (b) After
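The counter-based implementation can be sketched as follows (a hypothetical class; the capacity and refill rate are illustrative, and `tick` stands for the t-second interval):

```python
# Token bucket: a counter incremented every interval (up to a maximum)
# and decremented once per packet sent; with no tokens, packets wait.
class TokenBucket:
    def __init__(self, capacity, tokens_per_tick):
        self.capacity = capacity
        self.tokens = capacity           # start with a full bucket
        self.tokens_per_tick = tokens_per_tick

    def tick(self):
        """Every t seconds, tokens are added, up to the bucket's capacity."""
        self.tokens = min(self.capacity, self.tokens + self.tokens_per_tick)

    def try_send(self):
        """Send one packet if a token is available."""
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False                     # no token: the packet cannot be sent
```

Unlike the leaky bucket, a full token bucket lets a burst go out immediately, but the burst size is capped by the number of tokens on hand; afterwards the flow is limited to the token generation rate.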


CONCLUSION

With the growth of the Internet, more and more real-time multimedia services
employing non-congestion-controlled protocols (usually UDP) have come to constitute a major
share of Internet traffic. This can lead to network breakdown.
In order to avoid such situations we take control measures, which are either Open Loop
(preventive) or Closed Loop (corrective). If these are in any case unable to overcome congestion,
we can fall back on the load-shedding technique.
Traffic-shaping techniques are also very helpful in avoiding congestion. As it is always
preferable to avoid or prevent than to detect and cure, we should always apply traffic shaping
in our networks.

14
REFERENCES

[1] Data Communications and Networking, Fourth Edition, by Behrouz A. Forouzan.

[2] Computer Networks, Fourth Edition, by Andrew S. Tanenbaum.
[3] Computer Networks: A Systems Approach, Third Edition, by Larry Peterson and Bruce Davie,
and the Experiment Manual prepared by Emad Aboelela (University of Massachusetts).
[4] Networking E-Book of CSE IIT Kharagpur (Version 2).

15
