Presented by: ZAFARYAB HAIDER
Guided by: Mr. BRAHMA DEO SAH
Mahatma Gandhi Missions College of Engineering & Technology, Sector 62, NOIDA (U.P.)
U.P. Technical University, Lucknow
ACKNOWLEDGEMENT

Written words have an unfortunate tendency to degenerate genuine gratitude into stiff formality, but I have no other way to record my feelings permanently. First of all, I, Zafaryab Haider of BT-CS, would like to express my deep sense of gratitude towards my guide Mr. Brahma Deo Sah, Lecturer, Computer Science & Engineering Department, for his valuable guidance, constant encouragement and inspiring efforts towards the completion of this dissertation. Without his efforts this dissertation could not have been completed. I would like to thank Mr. Mohd Haider, Head of the Computer Science & Engineering Department, for the college facilities he provided. I am also thankful to my friends for their timely advice, moral support and encouragement. I also acknowledge the co-operation of all the other individuals who directly and indirectly helped me in making this report a success.
ABSTRACT

Congestion is said to occur in a network when resource demands exceed the capacity and packets are lost due to too much queuing in the network. During congestion, the network throughput may drop to zero and the path delay may become very high. A congestion control scheme helps the network to recover from the congested state, while a congestion avoidance scheme allows a network to operate in the region of low delay and high throughput by preventing it from entering the congested state in the first place. Congestion avoidance is thus a prevention mechanism, whereas congestion control is a recovery mechanism. We compare the concept of congestion avoidance with those of flow control and congestion control.

A number of possible alternatives for congestion avoidance have been identified, and from these a few were selected for study. The criteria for selection and the goals for these schemes have been described: in particular, we wanted the schemes to be globally efficient, fair, dynamic, convergent, robust, distributed and configuration independent. The test cases used to verify whether a particular scheme meets these goals have also been described.

We model the network and the user policies for congestion avoidance as a feedback control system. The key components of a generic congestion avoidance scheme are congestion detection, congestion feedback, feedback selector, signal filter, decision function, and increase/decrease algorithms; these components have been explained. The congestion avoidance research was done using a combination of analytical modeling and simulation techniques, and the features of the simulation model used have been described. This is the first report in a series on congestion avoidance schemes; other reports in the series describe the application of these ideas, leading to the development of specific congestion avoidance schemes.
INDEX

Acknowledgement
Abstract
List of Figures
1. Introduction
2. Congestion Control
   2.1 What is Congestion?
   2.2 Causes of Congestion
3. Principles of Congestion Control
4. Congestion Control Techniques
   4.1 Open Loop Techniques
   4.2 Closed Loop Techniques
   4.3 Load Shedding
5. Example of Congestion Control in TCP
6. Congestion Control at Routers
7. Traffic Shaping
   7.1 Leaky Bucket
   7.2 Token Bucket
8. Conclusion
9. References
LIST OF FIGURES

1. Performance Degradation during Congestion
2. Back Pressure
3. Choke Packet
4. Slow Start
5. Fair Queuing
6. Leaky Bucket
   a. With Water
   b. With Packets
7. Token Bucket
   a. Before
   b. After
INTRODUCTION

The Internet can be considered a queue of packets, where transmitting nodes are constantly adding packets and receiving nodes are removing them. The nature of a packet-switching network can be summarized in the following points:

- It is a network of queues.
- At each node, there is a queue of packets for each outgoing channel.
- If the packet arrival rate exceeds the packet transmission rate, the queue size grows without bound.
- When the line for which packets are queuing becomes more than 80% utilized, the queue length grows alarmingly.

Now consider a situation where too many packets are present in this queue (or in the internet, or a part of the internet), such that transmitting nodes are constantly pouring packets in at a higher rate than receiving nodes are removing them. This degrades performance, and such a situation is termed Congestion. The main reason for congestion is that more packets enter the network than it can handle.

When the number of packets dumped into the network is within its carrying capacity, they are all delivered, except a few that have to be rejected due to transmission errors, and the number delivered is proportional to the number sent. However, as traffic increases too far, the routers are no longer able to cope and begin to lose packets, which tends to make matters worse. At very high traffic, performance collapses completely and almost no packets are delivered. The objective of congestion control can therefore be summarized as: maintain the number of packets in the network below the level at which performance falls off dramatically.

In the following sections, the causes of congestion, the effects of congestion and various congestion control techniques are discussed in detail.
CONGESTION

WHAT IS CONGESTION?

Congestion occurs when the source sends more packets than the destination can handle. Packets are normally stored temporarily in the buffers of the source and the destination before being forwarded to their upper layers, and congestion occurs when these buffers fill up on the destination side. When congestion occurs, performance degrades; at a very high traffic rate, performance collapses completely and no packets are delivered. This can be demonstrated through the graph given below:

Fig. 1 Performance degradation during congestion

CAUSES OF CONGESTION

The main causes of congestion are as follows:
- Packet arrival rate exceeds the outgoing link capacity.
- Insufficient memory to store arriving packets.
- Bursty traffic.
- Slow processors.

If there is insufficient memory to hold arriving packets, packets will be lost (dropped). Adding more memory may not help in certain situations: even if a router had an infinite amount of memory, congestion would get worse instead of being reduced, because by the time packets get to the head of the queue, to be dispatched out to the output line, they have already timed out (repeatedly) and duplicates may also be present. All these packets are still forwarded to the next router, all the way up to the destination, only increasing the load on the network more and more. Congestion thus tends to feed upon itself and get even worse.

Routers respond to overloading by dropping packets. When these packets contain TCP segments, the segments do not reach their destination and are therefore left unacknowledged, which eventually leads to timeout and retransmission.

Slow processors also cause congestion. If a router's CPU is slow at performing the tasks required of it (queuing buffers, updating tables, reporting exceptions, etc.), queues can build up even when there is excess line capacity.

The major cause of congestion, however, is often the bursty nature of traffic. If hosts could be made to transmit at a uniform rate, congestion would be less common, and the other causes would rarely lead to congestion on their own; they act more like an enzyme that boosts congestion when the traffic is bursty.

Issues: if the rate of packet processing is less than the rate of packet arrival, the router's input queues become longer and longer; and if the rate of packet processing is more than the rate of packet departure, the router's output queues become longer and longer.
PRINCIPLES OF CONGESTION CONTROL

Congestion control refers to the techniques and mechanisms that can either prevent congestion before it happens, or remove congestion after it has happened. Two broad approaches exist:

1. Open loop solutions attempt to solve the problem through good design, so that congestion does not occur in the first place; once the system is up and running, midcourse corrections are not made. This includes deciding when to accept new traffic, deciding when to discard packets and which ones, and making scheduling decisions at various points in the network.

2. Closed loop solutions are based on a feedback loop, with three main steps:
   - Monitor the system to detect when and where congestion occurs.
   - Pass this information to places where action can be taken.
   - Adjust system operation to correct the problem.
CONGESTION CONTROL TECHNIQUES

Congestion control techniques are broadly divided into two categories:

1. Open loop congestion control (prevention): retransmission policy, window policy, acknowledgement policy, discarding policy, admission policy.
2. Closed loop congestion control (removal): back pressure, choke packet, implicit signaling, explicit signaling.

OPEN LOOP CONGESTION CONTROL

Here, policies are applied to prevent congestion before it happens.

Retransmission policy: If the sender feels that a sent packet is lost or corrupted, the packet needs to be retransmitted. In general this may increase congestion, which can be limited by using a good retransmission policy and well-tuned retransmission timers.

Window policy: The type of window at the sender's end may also affect congestion. A selective repeat window is better than a Go-Back-N window, because with selective repeat there is no chance of duplication.

Acknowledgement policy: The acknowledgement policy of the receiver can also affect congestion. If the receiver does not acknowledge every packet it receives, it may slow down the sender. Sending fewer acknowledgements also imposes less load on the network, thereby helping to prevent congestion.

Discarding policy: A good discarding policy at the routers may prevent congestion while not harming the integrity of the transmission. For example, discarding less sensitive packets in an audio transmission preserves the quality of the sound while preventing congestion.

Admission policy: Here, the resource requirements of a flow are checked before it is admitted to the network. If there is congestion, or a possibility of congestion in the future, the router denies establishing the virtual-circuit connection.

CLOSED LOOP CONGESTION CONTROL

These policies try to remove congestion after it occurs.

Back pressure: This is a node-to-node congestion control technique that propagates in the direction opposite to the data flow: a congested node stops receiving data from its immediate upstream node or nodes. This may congest the upstream nodes, which in turn reject data from nodes further upstream, and so on. It is illustrated in the figure below:

Fig. 2 Back pressure

Choke packet: A choke packet is a packet sent by a node to the source to inform it about congestion. Unlike back pressure, this is not a node-to-node approach: the router directly warns the source station, and the intermediate nodes are not warned. The figure below illustrates it:

Fig. 3 Choke Packet
Implicit signaling: In this case there is no communication between the congested nodes and the source station; the source guesses that there is congestion. For example, when a source sends several packets and does not get an acknowledgement for a while, it assumes that there is congestion and slows down.

Explicit signaling: Here the node experiencing congestion explicitly signals the source or the destination. Unlike the choke packet approach, no new packet is used; instead, the signal is included in a packet that carries data. Explicit signaling can be either forward or backward.

Backward signaling: The signal warns the source that there is congestion and that it needs to slow down, to avoid the discarding of packets.

Forward signaling: The signal warns the destination that there is congestion and that it needs to slow down in sending acknowledgements.

LOAD SHEDDING

When none of the above techniques make the congestion disappear, routers can bring out the heavy artillery: load shedding. It is one of the simplest and most effective techniques: whenever a router finds that there is congestion in the network, it simply starts dropping packets. There are different methods for deciding which packets to drop. The simplest is to choose the packets to be dropped at random; more effective policies, however, require some cooperation from the sender. For many applications, some packets are more important than others, so the sender can mark packets with priority classes to indicate how important they are. If such a priority policy is implemented, the intermediate nodes can drop packets from the lower priority classes and use the available bandwidth for the more important packets.
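The priority-based dropping idea can be sketched in a few lines. This is an illustrative sketch only: the helper name `shed_enqueue`, the numeric priorities and the capacity value are assumptions, not part of any real router implementation.

```python
# Illustrative sketch of priority-aware load shedding: when the queue is
# full, the lowest-priority packet is discarded first (larger number =
# more important). `shed_enqueue` is a hypothetical helper, not a real API.
import heapq

def shed_enqueue(queue, packet, priority, capacity):
    """Enqueue `packet`; if over capacity, evict and return the
    lowest-priority packet, else return None."""
    heapq.heappush(queue, (priority, packet))
    if len(queue) > capacity:
        return heapq.heappop(queue)[1]  # shed the least important packet
    return None

q = []
for pkt, prio in [("audio", 1), ("routing", 9), ("bulk", 2)]:
    victim = shed_enqueue(q, pkt, prio, capacity=2)
    if victim:
        print("dropped:", victim)  # the low-priority audio packet is shed
```

With random dropping, the `heapq` ordering would simply be replaced by picking an arbitrary queue entry; the priority-aware version keeps the routing update at the cost of the less important packet.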
EXAMPLE OF CONGESTION CONTROL

Congestion Control in TCP

Nowadays the sender's window is controlled not only by the receiver but also by the congestion in the network. The sender has two pieces of information: the receiver-advertised window size (rwnd), whose value is mostly 65,535 bytes, and the congestion window size (cwnd). The actual size of the sender's window is the minimum of the two:

Actual window size = minimum (rwnd, cwnd)

TCP's general policy for handling congestion is based on three phases: slow start, congestion avoidance and congestion detection.

Slow Start (Exponential Increase): This algorithm is based on the idea that cwnd starts at one maximum segment size (MSS), i.e. cwnd = 1 MSS, and that the sender's window size always equals cwnd, since cwnd is much smaller than rwnd. After every acknowledgement, cwnd is incremented by 1 MSS, so the window doubles every round:

Start:          cwnd = 1
After round 1:  cwnd = 2^1 = 2
After round 2:  cwnd = 2^2 = 4
After round 3:  cwnd = 2^3 = 8

Fig. 4 Slow Start, Exponential Increase

In the case of delayed ACKs, the increase in the window size is less than a power of 2. Slow start cannot grow indefinitely: there is a threshold at which this phase stops.
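The exponential growth of slow start, capped by the threshold, can be sketched with a few lines of code. This is a minimal illustration in MSS units, not a real TCP implementation; the name `ssthresh` stands for the threshold that ends the slow-start phase.

```python
# Sketch of TCP slow start: cwnd (in MSS units) doubles each round,
# since every acknowledged segment adds 1 MSS, until it reaches ssthresh.

def slow_start(ssthresh, rounds):
    """Return cwnd after each round of slow start, capped at ssthresh."""
    cwnd = 1
    history = [cwnd]
    for _ in range(rounds):
        cwnd = min(cwnd * 2, ssthresh)  # doubling per round-trip, then capped
        history.append(cwnd)
    return history

print(slow_start(ssthresh=16, rounds=5))  # [1, 2, 4, 8, 16, 16]
```

Once cwnd hits the threshold, growth stops here; in real TCP the connection then switches to the additive-increase phase described next.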
Congestion Avoidance (Additive Increase): This algorithm slows down the exponential growth of the previous phase. Once cwnd has reached the threshold, the window undergoes an additive increase: the size of the congestion window is increased by one MSS each time the whole window is acknowledged (each round), until congestion is detected. Compared with the previous figure:

Start:          cwnd = 1
After round 1:  cwnd = 1 + 1 = 2
After round 2:  cwnd = 2 + 1 = 3
After round 3:  cwnd = 3 + 1 = 4

In this case, after the sender has received acknowledgements for a complete window of segments, the window size is increased by one segment, thereby avoiding congestion.

Congestion Detection (Multiplicative Decrease): If congestion occurs, cwnd must be decreased and the threshold reduced to half of the current window size. This is called multiplicative decrease. Most TCP implementations have two reactions:

1. If a timeout occurs, there is a stronger possibility of congestion. TCP reacts strongly:
   - sets the threshold to half of the current window size,
   - sets cwnd to the size of one segment, and
   - restarts the slow-start phase.

2. If three duplicate ACKs are received, there is a weaker possibility of congestion: a segment may have been dropped, but some segments after it have arrived safely, since three duplicate ACKs were received. TCP reacts weakly:
   - sets the threshold to half of the current window size,
   - sets cwnd to the value of the threshold, and
   - starts the congestion-avoidance phase.

Note: if a segment is missing, the sender waits for three duplicate ACKs before taking any step. Slow start is resumed only at the start of a connection and after a timeout; at all other times additive increase is resumed.
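The three reactions above can be sketched as small state-update functions. The function names and the MSS-unit arithmetic are illustrative assumptions, not code from any real TCP stack.

```python
# Sketch of TCP's AIMD reactions (cwnd and threshold in MSS units;
# names are illustrative, not from a real implementation).

def on_round_acked(cwnd):
    """Additive increase: one extra MSS per fully acknowledged window."""
    return cwnd + 1

def on_timeout(cwnd):
    """Strong reaction: halve the threshold, restart slow start."""
    ssthresh = max(cwnd // 2, 1)
    return 1, ssthresh              # new cwnd = 1 MSS

def on_triple_dup_ack(cwnd):
    """Weak reaction: halve the threshold, resume additive increase."""
    ssthresh = max(cwnd // 2, 1)
    return ssthresh, ssthresh       # new cwnd = new threshold

print(on_timeout(16))         # (1, 8)
print(on_triple_dup_ack(16))  # (8, 8)
```

The contrast is visible in the return values: a timeout throws cwnd all the way back to 1 MSS, while three duplicate ACKs only cut it to the halved threshold.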
CONGESTION CONTROL AT ROUTERS

Packets from different flows gather at a router for processing, and they have to be queued in a way that improves the quality of service. Queuing algorithms determine:
- how packets are buffered,
- which packets get transmitted, and
- which packets get marked or dropped.

Some of the possible queuing algorithms are:

FIFO Queuing: The first packet to arrive is the first to be transmitted. If the average arrival rate is higher than the average processing rate, the queue fills up and newly arriving packets are discarded; this is the Drop Tail policy. The system does not stop serving until the queue is empty. FIFO has two basic problems: it does not discriminate between different packet sources, so an "ill-behaved" flow can capture an arbitrarily large share of the network's capacity, and it introduces global synchronization when packets are dropped from several connections at once.

Priority Queuing: Packets are first marked with a priority, and the router implements multiple FIFO queues, one for each priority class, always transmitting out of the highest-priority non-empty queue. It is better than FIFO in that higher-priority data is transferred first. The problem is that high-priority packets can "starve" the lower-priority classes. One practical use in the Internet is to protect routing-update packets by giving them a higher priority and a special queue at the router.

Fair Queuing (FQ): The Fair Queuing algorithm was introduced to address FIFO's problems. It maintains a separate queue for each flow and services these queues in round-robin fashion. If a queue reaches a particular length, additional packets for it are discarded, thereby ensuring that no single source can take an arbitrarily large share of the network. Ideal fair queuing does bit-by-bit round robin.

Fig. 5 Fair Queuing (round-robin service over flows 1-4)

Weighted Fair Queuing (WFQ): Here a weight is assigned to each flow (queue), such that the weight logically specifies the number of bits to transmit each time the router services that queue; this controls the percentage of the link capacity that the flow receives. A higher priority means a higher weight. If the weights are 3, 2 and 1, it means that three packets are processed from the first queue, two from the second and one from the third in each round. If the system imposed no priorities, the weights would all be the same. One issue is how the router learns of the weight assignments: through manual configuration, or through signaling from the sources or receivers.
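The 3 : 2 : 1 example can be sketched as a packet-granularity weighted round robin. This is a minimal illustration only (real WFQ approximates bit-by-bit service and accounts for packet sizes); the function name, flow names and weights are assumptions.

```python
# Sketch of weighted round-robin service over per-flow FIFO queues.
# With all weights equal to 1 this degenerates to plain fair queuing.
from collections import deque

def weighted_round_robin(flows, weights):
    """Serve up to `weight` packets from each non-empty flow per round;
    return the order in which packets leave the router."""
    order = []
    while any(flows.values()):
        for name, q in flows.items():
            for _ in range(weights[name]):
                if q:
                    order.append(q.popleft())
    return order

flows = {"q1": deque([1, 2, 3, 4]), "q2": deque([5, 6]), "q3": deque([7])}
print(weighted_round_robin(flows, {"q1": 3, "q2": 2, "q3": 1}))
# [1, 2, 3, 5, 6, 7, 4]
```

In the first round, three packets leave queue 1, two leave queue 2 and one leaves queue 3, matching the weights; the remaining packet of queue 1 is served in the next round.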
TRAFFIC SHAPING

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. There are two methods to do it.

Leaky Bucket: Consider a bucket with a small hole at the bottom. Whatever the rate at which water is poured into the bucket, the rate at which water comes out of the small hole is constant. Once the bucket is full, any additional water entering it spills over the sides and is lost (i.e. it does not appear in the output stream through the hole underneath). This scenario is depicted in figure 6(a).

The same idea can be applied to packets, as shown in figure 6(b). Conceptually, each network interface contains a leaky bucket, and the following steps are performed:

- When the host has to send a packet, the packet is thrown into the bucket.
- The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
- Bursty traffic is thus converted to uniform traffic by the leaky bucket.
- In practice the bucket is a finite queue that outputs at a finite rate.

The implementation of this algorithm is easy and consists of a finite queue: whenever a packet arrives, if there is room in the queue it is queued up, and if there is no room the packet is discarded. This arrangement can be simulated in the operating system or built into the hardware.

Fig. 6 (a) A leaky bucket with water. (b) A leaky bucket with packets.
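The finite-queue behaviour can be sketched as a tick-based simulation. This is an illustrative sketch under assumed parameters (arrivals per tick, queue capacity, leak rate), not a driver-level implementation.

```python
# Leaky bucket sketch: a finite queue drained at a constant rate.
# A burst of 5 packets is smoothed into 1 packet per tick; the packet
# that finds the queue full is discarded.
from collections import deque

def leaky_bucket(arrivals, capacity, leak_rate):
    """Return (packets sent per tick, number of packets dropped)."""
    queue = deque()
    sent, dropped = [], 0
    for burst in arrivals:                 # one entry per clock tick
        for pkt in range(burst):
            if len(queue) < capacity:
                queue.append(pkt)
            else:
                dropped += 1               # bucket full: packet is lost
        out = min(leak_rate, len(queue))   # constant-rate leak
        for _ in range(out):
            queue.popleft()
        sent.append(out)
    return sent, dropped

print(leaky_bucket(arrivals=[5, 0, 0, 0], capacity=4, leak_rate=1))
# ([1, 1, 1, 1], 1)
```

The bursty input (5 packets in one tick) leaves the interface as a uniform stream of one packet per tick, which is exactly the rigid output pattern the text describes.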
Token Bucket: The leaky bucket algorithm described above enforces a rigid pattern at the output stream, irrespective of the pattern of the input. For many applications it is better to allow the output to speed up somewhat when a larger burst arrives than to lose the data. The token bucket algorithm provides such a solution: it is less restrictive than the leaky bucket algorithm, in the sense that it allows bursty traffic. However, the size of a burst is limited by the number of tokens available in the bucket at that particular instant of time.

In this algorithm the bucket holds tokens, generated at regular intervals, and the bucket has a maximum capacity. The main steps can be described as follows:

- At regular intervals, tokens are thrown into the bucket.
- If there is a ready packet, a token is removed from the bucket and the packet is sent.
- If there is no token in the bucket, the packet cannot be sent.

Figure 7 shows the two scenarios, before and after the tokens present in the bucket have been consumed. In Fig. 7(a) the bucket holds two tokens, and three packets are waiting to be sent out of the interface. In Fig. 7(b) two packets have been sent out by consuming two tokens, and one packet is still left; since no token remains, no further packet is sent out.

The implementation of the basic token bucket algorithm is simple: a variable is used just to count the tokens. This counter is incremented every t seconds and decremented whenever a packet is sent; whenever the counter reaches zero, no packet can be sent.

Fig. 7 (a) Before. (b) After.
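The counter-based implementation can be sketched as another tick-based simulation. The parameter names and the arrival pattern are illustrative assumptions; the point is that an accumulated burst may be sent at once, up to the number of tokens in the bucket.

```python
# Token bucket sketch: tokens accrue at a fixed rate up to the bucket
# capacity; each packet sent consumes one token, so bursts are allowed
# but bounded by the tokens saved up so far.

def token_bucket(arrivals, rate, capacity, tokens=0):
    """Return packets sent per tick; one token is consumed per packet."""
    sent = []
    for burst in arrivals:                     # one entry per clock tick
        tokens = min(tokens + rate, capacity)  # refill, capped at bucket size
        out = min(burst, tokens)               # burst limited by token count
        tokens -= out
        sent.append(out)
    return sent

print(token_bucket(arrivals=[0, 0, 5, 0], rate=1, capacity=3))
# [0, 0, 3, 0]
```

After two idle ticks the bucket has saved three tokens, so three packets of the five-packet burst go out immediately, unlike the leaky bucket, which would have released only one per tick.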
CONCLUSION

With the development of the Internet, real-time multimedia services employing non-congestion-controlled protocols (usually UDP) have come to constitute the major share of Internet traffic. This may lead to network breakdown. To avoid such situations we take control measures, which are either open loop (preventive) or closed loop (detective). As it is always preferable to avoid or prevent than to detect and cure, traffic shaping techniques are very helpful in avoiding congestion and should always be applied in networks. If these are in any case unable to overcome congestion, we can fall back on the load shedding technique.
REFERENCES

- Data Communications and Networking, Fourth Edition, by Behrouz A. Forouzan.
- Computer Networks, Fourth Edition, by Andrew S. Tanenbaum.
- Computer Networks: A Systems Approach, Third Edition, by Larry Peterson and Bruce Davie.
- Networking E-Book of CSE, IIT Kharagpur (Version 2), and the Experiment Manual prepared by Emad Aboelela (University of Massachusetts).