Unit Test 2 CS9213 C.N

Published by Vinod Deenathayalan on Feb 17, 2011

1. Define the window management techniques used in TCP for congestion control.

Window management techniques:
 Slow start
 Dynamic window sizing on congestion
 Fast retransmit
 Fast recovery
 Limited transmit

Slow Start
awnd = MIN[credit, cwnd]
where
awnd = allowed window, in segments
cwnd = congestion window, in segments (assumes MSS bytes per segment)
credit = amount of unused credit granted in the most recent ack (rcvwindow)
cwnd = 1 for a new connection; during slow start it is increased by 1 for each ack received, up to a maximum.
Effect of TCP Slow Start

Dynamic Window Sizing on Congestion
 A lost segment indicates congestion.
 It is prudent (conservative) to reset cwnd to 1 and begin the slow-start process.
 This may not be conservative enough: it is "easy to drive a network into saturation but hard for the net to recover" (Jacobson).
 Instead, use slow start with linear growth in cwnd after reaching a threshold value (congestion avoidance).
Illustration of Slow Start and Congestion Avoidance

Fast Retransmit
 RTO is generally noticeably longer than the actual RTT, so if a segment is lost, TCP may be slow to retransmit.
 TCP rule: if a segment is received out of order, an ack must be issued immediately for the last in-order segment.
 Tahoe/Reno fast retransmit rule: if 4 acks are received for the same segment (i.e., 3 duplicate acks), the segment is highly likely to have been lost, so retransmit immediately rather than waiting for the timeout.

Fast Recovery
 When TCP retransmits a segment using fast retransmit, a segment was assumed lost, so congestion avoidance measures are appropriate at this point, e.g., the slow-start/congestion-avoidance procedure.
 This may be unnecessarily conservative, since the multiple acks indicate that segments are actually getting through.
 Fast recovery: retransmit the lost segment, cut the threshold in half, set the congestion window to threshold + 3, then proceed with linear increase of cwnd. This avoids the initial slow start.

Limited Transmit
 If the congestion window at the sender is small (e.g., cwnd = 3), fast retransmit may not get triggered.
 Under what circumstances does a sender have a small congestion window? Is the problem common?
 If the problem is common, why not reduce the number of duplicate acks needed to trigger retransmit?
 Limited transmit algorithm: the sender can transmit a new segment when three conditions are met:
1. Two consecutive duplicate acks are received.
2. The destination's advertised window allows transmission of the segment.
3. The amount of outstanding data after sending would be less than or equal to cwnd + 2.

2. Explain congestion control mechanisms for packet-switching networks.

 Send a control packet from a congested node to some or all source nodes. This choke packet has the effect of stopping or slowing the rate of transmission from sources and hence limits the total number of packets in the network. The approach requires additional traffic on the network during a period of congestion.
 Rely on routing information. Routing algorithms provide link delay information to other nodes, which influences routing decisions. This information could also be used to influence the rate at which new packets are produced. However, because these delays are influenced by the routing decisions themselves, they may vary too rapidly to be used effectively for congestion control.
 Make use of an end-to-end probe packet. Such a packet could be timestamped to measure the delay between two particular end points. This has the disadvantage of adding overhead to the network.
 Allow packet-switching nodes to add congestion information to packets as they go by. There are two possible approaches here. A node could add such information to packets going in the direction opposite to the congestion; this information quickly reaches the source node, which can reduce the flow of packets into the network. Alternatively, a node could add such information to packets going in the same direction as the congestion; the destination then either asks the source to adjust the load or returns the signal back to the source in the packets (or acknowledgments) going in the reverse direction.

3. Explain ABR & GFR traffic management.

ABR Traffic Management
 CBR, rt-VBR, nrt-VBR: traffic contract with open-loop control
 UBR: best-effort sharing of unused capacity
 ABR: share unused (available) capacity using closed-loop control of the source
– Allowed Cell Rate (ACR): current maximum cell transmission rate
– Minimum Cell Rate (MCR): network-guaranteed minimum cell rate
– Peak Cell Rate (PCR): maximum
value for ACR
– Initial Cell Rate (ICR): initial value of ACR
 ACR is dynamically adjusted based on feedback to the source in the form of Resource Management (RM) cells. RM cells contain three fields:
– Congestion Indication (CI) bit
– No Increase (NI) bit
– Explicit Rate (ER) field
Flow of Data and RM Cells – ABR Connection
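The closed-loop adjustment of ACR by RM-cell feedback described above can be sketched as follows. This is only a sketch: the rate-increase and rate-decrease factors (rif, rdf) and the exact increase/decrease rules are assumptions in the spirit of ABR source behavior, not values given in this text.

```python
def adjust_acr(acr, pcr, mcr, ci, ni, er, rif=1 / 16, rdf=1 / 16):
    """Sketch of an ABR source reacting to one backward RM cell.

    acr: current Allowed Cell Rate; pcr/mcr: contract bounds.
    ci/ni: Congestion Indication / No Increase bits from the RM cell.
    er: Explicit Rate field from the RM cell.
    rif/rdf: assumed rate increase/decrease factors (not from the text).
    """
    if ci:                     # congestion indicated: multiplicative decrease
        acr -= acr * rdf
    elif not ni:               # no congestion and increase allowed
        acr += rif * pcr       # additive increase, a fraction of PCR
    acr = min(acr, er, pcr)    # never above the explicit rate or PCR
    return max(acr, mcr)       # never below the guaranteed MCR
```

For example, a source at 100 cells/s with PCR = 1000 and MCR = 10 increases to 162.5 on a clean RM cell, decreases to 93.75 when CI is set, and is clamped to the ER field when that is lower.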

ABR Source Reaction Rules
Variations in Allowed Cell Rate

ABR Capacity Allocation
 Two functions of ATM switches:
– Congestion control: throttle back on rates based on buffer dynamics
– Fairness: throttle back as required to ensure fair allocation of available capacity between connections
 Two categories of switch algorithms:
– Binary: use of the EFCI, CI and NI bits
– Explicit rate: use of the ER field

Binary Feedback Schemes
 Single FIFO queue at each output port buffer
– switch sets EFCI, CI, NI based on threshold(s) in each queue
 Multiple queues per port
– separate queue for each VC, or group of VCs
– uses threshold levels as above
 Selective feedback to dynamically allocate a fair share of capacity
– switch will mark cells that exceed their fair share of buffer capacity

Explicit Rate Feedback Schemes
 Basic scheme at the switch:
1. Compute the fair share of capacity for each VC.
2. Determine the current load or degree of congestion.
3. Compute an explicit rate (ER) for each VC and send it to the source in an RM cell.
 Several examples of this scheme:
– Enhanced proportional rate control algorithm (EPRCA)
– Explicit rate indication for congestion avoidance (ERICA)
– Congestion avoidance using proportional control (CAPC)

EPRCA
 The switch calculates the mean current load on each connection, called the MACR:
MACR(I) = (1 − α) × MACR(I−1) + α × CCR(I)
Note: a typical value for α is 1/16.
 When the queue length at an output port exceeds the established threshold, congestion is threatened, and the switch updates the ER field in RM cells for all VCs on that port as:
ER ← min[ER, DPF × MACR]
where DPF is the down-pressure factor parameter, typically set to 7/8.
 Effect: lowers the ERs of VCs that are consuming more than their fair share of switch capacity.

ERICA
 Makes adjustments to ER based on the switch load factor:
Load Factor (LF) = Input rate / Target rate
where the input rate is averaged over a fixed interval and the target rate is typically 85–90% of the link bandwidth.
 When LF > 1, congestion is threatened, and ERs are reduced per VC on a fair-share basis:
– Fairshare = target rate / number of VCs
– Current VCshare = CCR / LF
– newER = min[oldER, max[Fairshare, VCshare]]
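The ERICA computation can be sketched directly from the formulas above (a sketch only; the averaging of the input rate over the measurement interval is omitted):

```python
def erica_er(old_er, ccr, input_rate, target_rate, n_vcs):
    """One ERICA explicit-rate update for a single VC.

    ccr: the VC's Current Cell Rate; n_vcs: active VCs on the port.
    Uses the formulas from the text: LF, Fairshare, VCshare, newER.
    """
    lf = input_rate / target_rate        # Load Factor = input / target
    if lf <= 1.0:
        return old_er                    # no congestion threat: ER unchanged
    fairshare = target_rate / n_vcs      # equal split of the target rate
    vcshare = ccr / lf                   # scale this VC's rate down by LF
    return min(old_er, max(fairshare, vcshare))
```

With a target rate of 100, 10 VCs and an input rate of 150 (LF = 1.5), a VC sending at CCR = 30 with oldER = 40 gets newER = min(40, max(10, 20)) = 20.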

GFR Traffic Management
 Simple, like UBR:
– no policing or shaping of traffic at the end system
– no guaranteed frame delivery
– depends on higher-level protocols (like TCP) for reliable data transfer
 Like ABR, provides capacity reservation and a traffic contract for QoS:
– guaranteed minimum rate without loss
– specified by PCR, MCR, MBS, MFS and CDVT
 Requires that the network recognize frames as well as cells:
– in congestion, the network discards whole frames, not just individual cells

GFR Mechanism: Frame-Based GCRA (F-GCRA)

4. Explain Frame Relay congestion control mechanisms.

Frame Relay Congestion Control requirements:
 Minimize frame discard
 Maintain QoS (per-connection bandwidth)
 Minimize monopolization of the network
 Simple to implement, little overhead
 Minimal additional network traffic
 Resources distributed fairly
 Limit the spread of congestion
 Operate effectively regardless of traffic flow
 Have minimum impact on other systems in the network
 Minimize variance in QoS

Frame Relay Techniques
Congestion Avoidance with Explicit Signaling: two general strategies were considered:
 Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes
– forward explicit congestion avoidance
 Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick action
– backward explicit congestion avoidance

Congestion Control: BECN/FECN

Two bits for explicit signaling:
 Forward Explicit Congestion Notification (FECN)
– for traffic in the same direction as the received frame
– indicates that this frame has encountered congestion
 Backward Explicit Congestion Notification (BECN)
– for traffic in the opposite direction of the received frame
– indicates that frames transmitted may encounter congestion

Explicit Signaling Response
 Network response
– each frame handler monitors its queuing behavior and takes action
– uses the FECN/BECN bits
– some or all connections are notified of congestion
 User (end-system) response
– on receipt of the BECN/FECN bits in a frame:
– BECN at the sender: reduce transmission rate
– FECN at the receiver: notify the peer (via LAPF or a higher layer) to restrict flow

Frame Relay Traffic Rate Management Parameters
 Committed Information Rate (CIR): average data rate, in bits per second, that the network agrees to support for a connection
 Data Rate of User Access Channel (Access Rate): fixed rate of the link between the user and the network (for network access)
 Committed Burst Size (Bc): maximum amount of data over an interval agreed to by the network
 Excess Burst Size (Be): maximum amount of data, above Bc, over an interval that the network will attempt to transfer

Committed Information Rate (CIR) Operation
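As a stand-in for the CIR operation figure, the burst parameters can be sketched as the usual three-way decision over a measurement interval T = Bc/CIR. The function name and the "mark DE / discard" labels are mine; they follow the standard interpretation of Bc and Be:

```python
def classify_burst(bits_in_interval, cir, bc, be):
    """Classify the data a user sends in one measurement interval.

    The interval is T = Bc/CIR seconds, so sending Bc bits in T
    is exactly the committed rate CIR. Returns 'forward',
    'mark_de' (forwarded with the Discard Eligibility bit set),
    or 'discard'.
    """
    t = bc / cir                      # measurement interval, seconds
    assert t > 0
    if bits_in_interval <= bc:        # within the committed burst
        return "forward"
    if bits_in_interval <= bc + be:   # excess burst: forward, DE bit set
        return "mark_de"
    return "discard"                  # beyond Bc + Be
```

For CIR = 64 kbps with Bc = Be = 64 kbit, the interval is one second: 50 kbit is forwarded, 100 kbit is forwarded with DE set, and 200 kbit is discarded.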

Relationship of Congestion Parameters: note that T = Bc/CIR.

5. Explain the types of traffic control functions used in ATM networks & the QoS parameters.

Traffic Control and Congestion Functions

Traffic Control Strategy
 Determine whether a new ATM connection can be accommodated
 Agree performance parameters with the subscriber
 Traffic contract between subscriber and network
 This is congestion avoidance; if it fails, congestion may occur and congestion control is invoked

Traffic Control functions
 Resource management using virtual paths
 Connection admission control
 Usage parameter control
 Selective cell discard
 Traffic shaping
 Explicit forward congestion indication

Resource Management Using Virtual Paths
 Allocate resources so that traffic is separated according to service characteristics
 Virtual path connections (VPCs) are groupings of virtual channel connections (VCCs)

Applications
 User-to-user applications
– VPC between a UNI pair
– network has no knowledge of QoS for individual VCCs
– user checks that the VPC can meet the VCCs' demands
 User-to-network applications
– VPC between a UNI and a network node
– network is aware of and accommodates the QoS of the VCCs
 Network-to-network applications
– VPC between two network nodes
– network is aware of and accommodates the QoS of the VCCs

Resource Management Concerns
 Cell loss ratio, maximum cell transfer delay and peak-to-peak cell delay variation are all affected by the resources devoted to the VPC.
 If a VCC goes through multiple VPCs, performance depends on the consecutive VPCs and on node performance:
– VPC performance depends on the capacity of the VPC and the traffic characteristics of the VCCs
– VCC-related function depends on switching/processing speed and priority

Traffic Parameters
 Traffic pattern of the flow of cells: the intrinsic nature of the traffic
 Source traffic descriptor: may be modified inside the network
 Connection traffic descriptor

Source Traffic Descriptor
 Peak cell rate (PCR)
– upper bound on the traffic that can be submitted
– defined in terms of the minimum spacing between cells T: PCR = 1/T
– mandatory for CBR and VBR services
 Sustainable cell rate (SCR)
– upper bound on the average rate
– calculated over a large time scale relative to T
– required for VBR; enables efficient allocation of network resources between VBR sources
– only useful if SCR < PCR
 Maximum burst size (MBS)
– maximum number of cells that can be sent at PCR
– if bursts are at MBS, idle gaps must be enough to keep the overall rate below SCR
– required for VBR
 Minimum cell rate (MCR)
– minimum commitment requested of the network; can be zero
– used with ABR and GFR
– ABR and GFR provide rapid access to spare network capacity up to PCR
– PCR − MCR represents the elastic component of the data flow, shared among ABR and GFR flows
 Maximum frame size (MFS)
– maximum number of cells in a frame that can be carried over a GFR connection
– only relevant to GFR

Connection Traffic Descriptor: includes the source traffic descriptor plus:
 Cell delay variation tolerance (CDVT)
– amount of variation in cell delay introduced by the network interface and the UNI
– a bound on delay variability due to the slotted nature of ATM, physical layer overhead and layer functions (e.g., cell multiplexing)
– represented by the time variable τ
 Conformance definition
– specifies the conforming cells of a connection at the UNI
– enforced by dropping or marking cells that exceed the definition

Quality of Service Parameters

 Cell transfer delay (CTD)
– time between transmission of the first bit of a cell at the source and reception of the last bit at the destination
– typically has a probability density function: a fixed delay due to propagation etc., plus cell delay variation due to buffering and scheduling
– the maximum cell transfer delay (maxCTD) is the maximum requested delay for the connection; a fraction α of cells exceed this threshold and are discarded or delivered late
 Peak-to-peak cell delay variation (CDV)
– the remaining (1 − α) cells are within the QoS; the delay experienced by these cells lies between the fixed delay and maxCTD
– this range is the peak-to-peak CDV; CDVT is an upper bound on CDV
 Cell loss ratio (CLR)
– ratio of cells lost to cells transmitted

6. Explain the retransmission timer management techniques in TCP.

Retransmission Strategy
 TCP relies exclusively on positive acknowledgements and retransmission on acknowledgement timeout; there is no explicit negative acknowledgement.
 Retransmission is required when:
– a segment arrives damaged, as indicated by a checksum error, causing the receiver to discard it
– a segment fails to arrive

Timers
 A timer is associated with each segment as it is sent.
 If the timer expires before the segment is acknowledged, the sender must retransmit.
 Key design issue: the value of the retransmission timer.

 Too small: many unnecessary retransmissions, wasting network bandwidth.
 Too large: delay in handling a lost segment.

Two strategies:
 Fixed timer
 Adaptive timer
The timer should be longer than the round-trip delay (send segment, receive ack), but this delay is variable.

Problems with an adaptive scheme:
 The peer TCP entity may accumulate acknowledgements and not acknowledge immediately.
 For retransmitted segments, the sender can't tell whether an acknowledgement is a response to the original transmission or to the retransmission.
 Network conditions may change suddenly.

Adaptive retransmission timer: Average Round-Trip Time (ARTT)
ARTT(K+1) = [1/(K+1)] × Σ(i=1..K+1) RTT(i) = [K/(K+1)] × ARTT(K) + [1/(K+1)] × RTT(K+1)

RFC 793 exponential averaging: Smoothed Round-Trip Time (SRTT)
SRTT(K+1) = α × SRTT(K) + (1 − α) × RTT(K+1)
The older the observation, the less it is counted in the average.

RFC 793 retransmission timeout:
RTO(K+1) = MIN(UB, MAX(LB, β × SRTT(K+1)))
where UB and LB are prechosen fixed upper and lower bounds. Example values: 0.8 < α < 0.9 and 1.3 < β < 2.0.
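The averaging and timeout formulas above can be sketched as one update step (a sketch; α, β and the bounds are example values from the ranges given):

```python
def update_rto(srtt, rtt_sample, alpha=0.85, beta=1.5, lb=1.0, ub=60.0):
    """One RFC 793-style update: exponential averaging plus a clamped RTO.

    alpha (0.8..0.9) weights history; beta (1.3..2.0) pads the estimate;
    lb/ub are example lower/upper bounds in seconds.
    Returns the new (SRTT, RTO) pair.
    """
    srtt = alpha * srtt + (1 - alpha) * rtt_sample   # SRTT(K+1)
    rto = min(ub, max(lb, beta * srtt))              # RTO(K+1)
    return srtt, rto
```

For example, with SRTT = 2.0 s and a new sample of 4.0 s, SRTT becomes 2.3 s and RTO becomes 3.45 s.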

 The network must ensure that this capacity is available and also polices the incoming traffic on a CBR connection to ensure that the subscriber does not exceed its allocation. 8.  A VBR connection is defined in terms of a sustained rate for a normal use and a faster burst rate for occasional use at peak periods. but it is understood that the user will not continuously require this faster rate. What is Back Pressure? Signals are exchanged between switching elements in adjacent stages so that the generic SE can grant a packet transmission to its upstream SE’s only within the current idle buffer capacity. Real time variable bit rate (rt-VBR)  The faster rate is guaranteed.Send Deliver Accept In-order    In-window  Retransmit First-only Batch individual  Acknowledge immediate cumulative Part A 7. . Difference between CBR and Real time variable bit rate(rt-VBR) Constant Bit Rate (CBR)  Requires that a fixed data rate be made available by the ATM provider.

9. Define round-trip time (RTT).
Round-trip time (RTT) refers to the amount of time it takes for a signal to travel from a particular terrestrial system to a designated satellite and back to its source. The signal is generally a data packet. Round-trip time is also referred to as round-trip delay, and the RTT is also known as the ping time. RTT may also be used to find the best possible route.

10. What are the effects of congestion?
 Buffers fill
 Packets discarded
 Sources retransmit
 Routers generate more traffic to update paths
 Good packets resent
 Delays and costs propagate

11. Define PCR and MCR.
Peak cell rate (PCR) is an ATM (Asynchronous Transfer Mode) term describing the rate, in cells per second, that the source device may never exceed; it is an upper bound on the traffic submitted by the source (PCR = 1/T, where T is the minimum cell spacing).
Minimum cell rate (MCR) is an ATM ABR service traffic descriptor: the minimum rate at which the source is always allowed to send. It is used with ABR and GFR; the source is guaranteed the requested minimum cell rate and gets access to unused capacity up to PCR (the elastic capacity being PCR − MCR).

12. Define BECN and FECN.
In a frame relay network, BECN (backward explicit congestion notification) is a header bit transmitted by the destination terminal requesting that the source terminal send data more slowly. FECN (forward explicit congestion notification) is a header bit transmitted by the source (sending) terminal requesting that the destination (receiving) terminal slow down its requests for data. FECN and BECN are intended to minimize the possibility that packets will be discarded (and thus have to be resent) when more packets arrive than can be handled.

13. What is the difference between flow control & congestion control?
Flow control means preventing the source from sending data that the sink will end up dropping because it runs out of buffer space. This is fairly easy with a sliding window protocol: just make sure the source's window is no larger than the free space in the sink's buffer. TCP does this by letting the sink advertise its free buffer space in the window field of the acknowledgements.
Congestion control means preventing (or trying to prevent) the source from sending data that will end up getting dropped by a router because its queue is full. This is more complicated, because packets from different sources travelling different paths can converge on the same queue.

14. What are the congestion control techniques?
 Backpressure
 Policing
 Choke packet
 Implicit congestion signaling
 Explicit congestion signaling
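The two limits in question 13 can be combined in one line, tying flow control back to the congestion window of question 1 (a sketch; the names are mine):

```python
def usable_window(cwnd, advertised_window, bytes_in_flight):
    """How much more data a TCP sender may transmit right now.

    The advertised window protects the receiver's buffer (flow control);
    cwnd protects the routers' queues (congestion control). The sender is
    bound by the smaller of the two, minus what is already in flight.
    """
    return max(0, min(cwnd, advertised_window) - bytes_in_flight)
```

So a sender with cwnd = 8000 but an advertised window of 4000 and 1000 bytes outstanding may send only 3000 more bytes: either mechanism alone can throttle it.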
