Unit Test 2: CS9213 – COMPUTER NETWORKS AND MANAGEMENT

1. Define window management techniques used in TCP for congestion control.
Window Management techniques:
– Slow start
– Dynamic window sizing on congestion
– Fast retransmit
– Fast recovery
– Limited transmit
Slow Start
awnd = MIN[credit, cwnd]
where
awnd = allowed window, in segments
cwnd = congestion window, in segments (assumes MSS bytes per segment)
credit = amount of unused credit granted in the most recent ack (rcvwindow)
cwnd = 1 for a new connection; during slow start, cwnd is increased by 1 for each ack received, up to a maximum.
Effect of TCP Slow Start
Congestion Avoidance: use slow start up to a threshold value, then linear growth in cwnd after cwnd reaches that threshold.
Illustration of Slow Start and Congestion Avoidance
Dynamic Window Sizing on Congestion
– A lost segment indicates congestion.
– It is prudent (conservative) to reset cwnd to 1 and begin the slow-start process.
– This may not be conservative enough: it is "easy to drive a network into saturation but hard for the net to recover" (Jacobson).
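The slow-start/congestion-avoidance behaviour above can be sketched numerically. This is a minimal illustration, not the full TCP algorithm; the function name, the per-ACK accounting, and the simplified loss handling are assumptions made for the example.

```python
def cwnd_trace(n_acks, ssthresh, loss_at=None):
    """Trace cwnd (in segments) across ACK arrivals.

    Slow start: cwnd += 1 per ACK while cwnd < ssthresh (doubles per RTT).
    Congestion avoidance: cwnd += 1/cwnd per ACK (~ +1 segment per RTT).
    On a detected loss: halve the threshold, reset cwnd to 1 (slow start).
    """
    cwnd, trace = 1.0, []
    for i in range(n_acks):
        if i == loss_at:
            ssthresh = max(cwnd / 2, 2.0)  # cut threshold in half
            cwnd = 1.0                     # restart slow start
        elif cwnd < ssthresh:
            cwnd += 1.0                    # slow-start growth
        else:
            cwnd += 1.0 / cwnd             # linear congestion-avoidance growth
        trace.append(round(cwnd, 2))
    return trace

# exponential growth up to the threshold, then slow linear growth
print(cwnd_trace(8, ssthresh=8))
```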
Fast Retransmit
– RTO is generally noticeably longer than the actual RTT, so if a segment is lost, TCP may be slow to retransmit.
– TCP rule: if a segment is received out of order, an ack must be issued immediately for the last in-order segment.
– Tahoe/Reno Fast Retransmit rule: if 4 acks are received for the same segment (i.e. 3 duplicate acks), it is highly likely a segment was lost, so retransmit immediately rather than waiting for timeout.
Fast Recovery
– When TCP retransmits a segment using Fast Retransmit, a segment was assumed lost, so congestion avoidance measures are appropriate at this point, e.g. the slow-start/congestion-avoidance procedure.
– This may be unnecessarily conservative, since multiple duplicate ACKs indicate that segments are actually getting through.
– Fast Recovery: retransmit the lost segment, cut the threshold in half, set the congestion window to threshold + 3, then proceed with linear increase of cwnd. This avoids the initial slow start.
Limited Transmit
– If the congestion window at the sender is small, fast retransmit may not get triggered, e.g. cwnd = 3.
– Under what circumstances does a sender have a small congestion window? Is the problem common? If the problem is common, why not reduce the number of duplicate acks needed to trigger retransmit?
Limited Transmit Algorithm: the sender can transmit a new segment when three conditions are met:
1. Two consecutive duplicate acks are received
2. The destination's advertised window allows transmission of the segment
3. The amount of outstanding data after sending is less than or equal to cwnd + 2
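The fast-retransmit/fast-recovery rules can be sketched as follows. This is a simplified Reno-style sketch of the rules named above; the dictionary-based state and function name are illustrative assumptions.

```python
def on_dup_ack(state):
    """React to one duplicate ACK (simplified Reno-style sketch).

    After 3 duplicate ACKs: fast retransmit the lost segment, then
    fast recovery — halve the threshold, set cwnd = threshold + 3,
    and continue with linear growth instead of restarting slow start.
    """
    state["dupacks"] += 1
    if state["dupacks"] == 3:
        state["ssthresh"] = max(state["cwnd"] // 2, 2)  # cut threshold in half
        state["cwnd"] = state["ssthresh"] + 3           # threshold + 3
        state["retransmit"] = True                      # fast retransmit now
    return state

s = {"cwnd": 16, "ssthresh": 32, "dupacks": 0, "retransmit": False}
for _ in range(3):          # three duplicate ACKs arrive
    s = on_dup_ack(s)
print(s)  # ssthresh = 8, cwnd = 11, retransmit triggered
```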
2. Explain congestion control mechanisms for packet-switching networks.
– Send a control packet from a congested node to some or all source nodes. This choke packet will have the effect of stopping or slowing the rate of transmission from sources and hence limit the total number of packets in the network. This approach requires additional traffic on the network during a period of congestion.
– Rely on routing information. Routing algorithms provide link delay information to other nodes, which influences routing decisions. This information could also be used to influence the rate at which new packets are produced. However, because these delays are being influenced by the routing decision, they may vary too rapidly to be used effectively for congestion control.
– Make use of an end-to-end probe packet. Such a packet could be timestamped to measure the delay between two particular end points. This has the disadvantage of adding overhead to the network.
– Allow packet-switching nodes to add congestion information to packets as they go by. There are two possible approaches here. A node could add such information to packets going in the direction opposite to the congestion; this information quickly reaches the source node, which can reduce the flow of packets into the network. Alternatively, a node could add such information to packets going in the same direction as the congestion; the destination then either asks the source to adjust the load or returns the signal back to the source in the packets (or acknowledgments) going in the reverse direction.

3. Explain ABR & GFR Traffic Management.
ABR Traffic Management
– CBR, rt-VBR, nrt-VBR: traffic contract with open-loop control
– UBR: best-effort sharing of unused capacity
– ABR: share unused (available) capacity using closed-loop control of the source
  – Allowed Cell Rate (ACR): current maximum cell transmission rate
  – Minimum Cell Rate (MCR): network-guaranteed minimum cell rate
  – Peak Cell Rate (PCR): maximum value for ACR
  – Initial Cell Rate (ICR): initial value of ACR
ACR is dynamically adjusted based on feedback to the source in the form of Resource Management (RM) cells. RM cells contain three fields:
– Congestion Indication (CI) bit
– No Increase (NI) bit
– Explicit Cell Rate (ER) field
Flow of Data and RM Cells – ABR Connection
ABR Source Reaction Rules
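The source reaction rules are only named above, so the following is a hedged sketch of the standard ATM Forum behaviour: decrease ACR when CI is set, increase it when neither CI nor NI is set, and clamp the result to [MCR, min(ER, PCR)]. The rate-increase/rate-decrease factor names (RIF, RDF) and their values are assumptions not taken from this text.

```python
def update_acr(acr, rm, pcr, mcr, rif=1/16, rdf=1/16):
    """Adjust the Allowed Cell Rate from a returning RM cell (sketch).

    CI = 1           -> multiplicative decrease by the rate decrease factor.
    CI = 0, NI = 0   -> additive increase by RIF * PCR.
    The result is clamped to [MCR, min(ER, PCR)].
    """
    if rm["CI"]:
        acr -= acr * rdf            # congestion: back off
    elif not rm["NI"]:
        acr += rif * pcr            # no congestion, increase allowed
    acr = min(acr, rm["ER"], pcr)   # never above ER or PCR
    return max(acr, mcr)            # never below the guaranteed MCR

# congestion indicated: rate drops, but never below MCR
print(update_acr(80_000, {"CI": 1, "NI": 0, "ER": 100_000},
                 pcr=100_000, mcr=10_000))
```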
Variations in Allowed Cell Rate
ABR Capacity Allocation
Two Functions of ATM Switches
– Congestion control: throttle back on rates based on buffer dynamics
– Fairness: throttle back as required to ensure fair allocation of available capacity between connections
Two categories of switch algorithms:
– Binary: EFCI, CI and NI bits
– Explicit rate: use of the ER field
Binary Feedback Schemes
– Single FIFO queue at each output port buffer: switch issues EFCI, CI, NI based on threshold(s) in the queue
– Multiple queues per port: separate queue for each VC, or group of VCs; uses threshold levels as above
– Selective feedback to dynamically allocate a fair share of capacity: switch will mark cells that exceed their fair share of buffer capacity
Explicit Rate Feedback Schemes
Basic scheme at the switch:
1. Compute a fair share of capacity for each VC
2. Determine the current load or degree of congestion
3. Compute an explicit rate (ER) for each VC and send it to the source in an RM cell
Several examples of this scheme:
– Enhanced proportional rate control algorithm (EPRCA)
– Explicit rate indication for congestion avoidance (ERICA)
– Congestion avoidance using proportional control (CAPC)
EPRCA
The switch calculates the mean current load on each connection, called the MACR:
MACR(I) = (1 – α) × MACR(I–1) + α × CCR(I)
Note: a typical value for α is 1/16.
When the queue length at an output port exceeds the established threshold, congestion is threatened; the switch updates the ER field in RM cells for all VCs on that port as:
ER ← min[ER, DPF × MACR]
where DPF is the down pressure factor parameter, typically set to 7/8.
ERICA
Makes adjustments to ER based on the switch load factor:
Load Factor (LF) = Input rate / Target rate
where the input rate is averaged over a fixed interval and the target rate is typically 85–90% of the link bandwidth. When LF > 1, ERs are reduced by VC on a fair-share basis:
– Fairshare = target rate / number of VCs
– VCshare = CCR / LF
– newER = min[oldER, max[Fairshare, VCshare]]
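The EPRCA and ERICA computations can be sketched directly from the formulas above. The function names and the sample rates (in cells/s) are illustrative.

```python
ALPHA = 1 / 16   # EPRCA averaging weight (typical value from the text)
DPF = 7 / 8      # down pressure factor (typical value from the text)

def eprca_macr(macr, ccr, alpha=ALPHA):
    """MACR(I) = (1 - alpha) * MACR(I-1) + alpha * CCR(I)."""
    return (1 - alpha) * macr + alpha * ccr

def eprca_er(er, macr, dpf=DPF):
    """When the queue threshold is exceeded: ER <- min[ER, DPF * MACR]."""
    return min(er, dpf * macr)

def erica_er(old_er, ccr, input_rate, target_rate, n_vcs):
    """ERICA: reduce ER on a fair-share basis when the load factor LF > 1."""
    lf = input_rate / target_rate
    if lf <= 1:
        return old_er                     # no overload, leave ER alone
    fairshare = target_rate / n_vcs
    vcshare = ccr / lf
    return min(old_er, max(fairshare, vcshare))

macr = eprca_macr(macr=64_000, ccr=80_000)   # exponentially averaged load
print(eprca_er(er=70_000, macr=macr))        # ER lowered toward DPF * MACR
print(erica_er(old_er=70_000, ccr=30_000, input_rate=110_000,
               target_rate=100_000, n_vcs=10))
```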
GFR Traffic Management
– Simple, like UBR: no policing or shaping of traffic at the end-system; no guaranteed frame delivery; depends on higher-level protocols (like TCP) for reliable data transfer mechanisms
– Like ABR, provides capacity reservation and a traffic contract for QoS: a guaranteed minimum rate without loss
– Specify PCR, MCR, MBS, MFS, CDVT
– Requires that the network recognize frames as well as cells: in congestion, the network discards whole frames, not just individual cells
GFR Mechanism
Frame-Based GCRA (F-GCRA)

4. Explain Frame Relay congestion control mechanisms.
Frame Relay Congestion Control
Objectives:
– Minimize frame discard
– Maintain QoS (per-connection bandwidth)
– Minimize monopolization of the network
– Simple to implement, little overhead
– Minimal additional network traffic
– Resources distributed fairly
– Limit spread of congestion
– Operate effectively regardless of flow
– Have minimum impact on other systems in the network
– Minimize variance in QoS
Frame Relay Techniques
Congestion Avoidance with Explicit Signaling
Two general strategies considered:
– Hypothesis 1: Congestion always occurs slowly, almost always at egress nodes – forward explicit congestion avoidance
– Hypothesis 2: Congestion grows very quickly in internal nodes and requires quick action – backward explicit congestion avoidance
Congestion Control: BECN/FECN
Two Bits for Explicit Signaling
– Forward Explicit Congestion Notification: for traffic in the same direction as the received frame; "this frame has encountered congestion"
– Backward Explicit Congestion Notification: for traffic in the opposite direction of the received frame; "frames transmitted may encounter congestion"
Explicit Signaling Response
– Network response: each frame handler monitors its queuing behavior and takes action, using the FECN/BECN bits; some or all connections are notified of congestion
– User (end-system) response: on receipt of BECN/FECN bits in a frame – BECN at sender: reduce transmission rate; FECN at receiver: notify peer (via LAPF or a higher layer) to restrict flow
Frame Relay Traffic Rate Management Parameters
– Committed Information Rate (CIR): average data rate in bits/second that the network agrees to support for a connection
– Data Rate of User Access Channel (Access Rate): fixed-rate link between user and network (for network access)
– Committed Burst Size (Bc): maximum data over an interval agreed to by the network
– Excess Burst Size (Be): maximum data, above Bc, over an interval that the network will attempt to transfer
Committed Information Rate (CIR) Operation
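CIR operation can be sketched as a classification of the data offered in one measurement interval T = Bc/CIR: up to Bc is transmitted, between Bc and Bc + Be is transmitted with discard eligibility marked, and beyond Bc + Be is discarded. The discard-eligibility marking is standard frame relay practice (the text above does not name the DE bit); function names and sample numbers are illustrative.

```python
def classify(bits_sent, cir, bc, be):
    """Classify traffic offered during one interval T = Bc / CIR.

    <= Bc        : transmit (within the committed rate)
    <= Bc + Be   : transmit, but mark as discard-eligible
    >  Bc + Be   : discard
    """
    t = bc / cir                         # measurement interval in seconds
    if bits_sent <= bc:
        return t, "transmit"
    if bits_sent <= bc + be:
        return t, "transmit, DE marked"
    return t, "discard"

# CIR = 64 kbps, Bc = 64 kbit, Be = 32 kbit  ->  T = 1 s
print(classify(bits_sent=80_000, cir=64_000, bc=64_000, be=32_000))
```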
Relationship of Congestion Parameters
Note that T = Bc/CIR.

5. Explain the types of traffic control functions used in ATM networks & QoS parameters.
Traffic Control and Congestion Functions
Traffic Control Strategy
– Determine whether a new ATM connection can be accommodated
– Agree performance parameters with the subscriber
– Traffic contract between subscriber and network
This is congestion avoidance; if it fails, congestion may occur and congestion control is invoked.
Traffic Control functions:
– Resource management using virtual paths
– Connection admission control
– Usage parameter control
– Selective cell discard
– Traffic shaping
– Explicit forward congestion indication
Resource Management Using Virtual Paths
Allocate resources so that traffic is separated according to service characteristics. Virtual path connections (VPC) are groupings of virtual channel connections (VCC).
Applications:
– User-to-user applications: VPC between a UNI pair; no knowledge of QoS for individual VCCs; the user checks that the VPC can take the VCCs' demands
– User-to-network applications: VPC between a UNI and a network node; network aware of and accommodates QoS of VCCs
– Network-to-network applications: VPC between two network nodes; network aware of and accommodates QoS of VCCs
Resource Management Concerns
– Cell loss ratio
– Max cell transfer delay
– Peak-to-peak cell delay variation
All are affected by the resources devoted to the VPC. If a VCC goes through multiple VPCs, performance depends on the consecutive VPCs and on node performance:
– VPC performance depends on the capacity of the VPC and the traffic characteristics of the VCCs
– VCC-related functions depend on switching/processing speed and priority
Traffic Parameters
Traffic pattern of the flow of cells:
– Intrinsic nature of the traffic: source traffic descriptor
– As modified inside the network: connection traffic descriptor
Source Traffic Descriptor
– Peak cell rate: upper bound on the traffic that can be submitted; defined in terms of the minimum spacing T between cells, PCR = 1/T; mandatory for CBR and VBR services
– Sustainable cell rate: upper bound on the average rate, calculated over a large time scale relative to T; required for VBR; enables efficient allocation of network resources between VBR sources; only useful if SCR < PCR
– Maximum burst size: maximum number of cells that can be sent at PCR; if bursts are at MBS, idle gaps must be enough to keep the overall rate below SCR; required for VBR
– Minimum cell rate: minimum commitment requested of the network; can be zero; used with ABR and GFR; ABR and GFR provide rapid access to spare network capacity up to PCR; PCR – MCR represents the elastic component of the data flow, shared among ABR and GFR flows
– Maximum frame size: maximum number of cells in a frame that can be carried over a GFR connection; only relevant to GFR
Connection Traffic Descriptor
Includes the source traffic descriptor plus:
– Cell delay variation tolerance: the amount of variation in cell delay introduced by the network interface and UNI; a bound on delay variability due to the slotted nature of ATM, physical layer overhead and layer functions (e.g. cell multiplexing); represented by the time variable τ
– Conformance definition: specifies the conforming cells of a connection at the UNI; enforced by dropping or marking cells that exceed the definition
Quality of Service Parameters – maxCTD
Cell transfer delay (CTD): the time between transmission of the first bit of a cell at the source and reception of the last bit at the destination. It typically has a probability density function (see next slide): a fixed delay due to propagation etc., plus cell delay variation due to buffering and scheduling. Maximum cell transfer delay (maxCTD) is the maximum requested delay for the connection; a fraction α of cells exceed this threshold and are discarded or delivered late.
Peak-to-peak CDV & CLR
– Peak-to-peak Cell Delay Variation: the remaining (1 – α) cells are within QoS; the delay experienced by these cells lies between the fixed delay and maxCTD, and this range is the peak-to-peak CDV. CDVT is an upper bound on CDV.
– Cell loss ratio: ratio of cells lost to cells transmitted.

6. Explain retransmission timer management techniques in TCP.
Retransmission Strategy
TCP relies exclusively on positive acknowledgements and retransmission on acknowledgement timeout; there is no explicit negative acknowledgement. Retransmission is required when:
– a segment arrives damaged, as indicated by a checksum error, causing the receiver to discard it
– a segment fails to arrive
Timers
A timer is associated with each segment as it is sent. If the timer expires before the segment is acknowledged, the sender must retransmit.
Key design issue: the value of the retransmission timer.
– Too small: many unnecessary retransmissions, wasting network bandwidth
– Too large: delay in handling a lost segment
The timer should be longer than the round-trip delay (send segment, receive ack), but this delay is variable.
Two strategies: fixed timer; adaptive timer.
Problems with the Adaptive Scheme
– The peer TCP entity may accumulate acknowledgements and not acknowledge immediately
– For retransmitted segments, the sender can't tell whether an acknowledgement is a response to the original transmission or to the retransmission
– Network conditions may change suddenly
Adaptive Retransmission Timer
Average Round-Trip Time (ARTT):
ARTT(K+1) = (1/(K+1)) × Σ(i = 1 to K+1) RTT(i) = (K/(K+1)) × ARTT(K) + (1/(K+1)) × RTT(K+1)
RFC 793 Exponential Averaging
Smoothed Round-Trip Time (SRTT):
SRTT(K+1) = α × SRTT(K) + (1 – α) × RTT(K+1)
The older the observation, the less it is counted in the average.
RFC 793 Retransmission Timeout:
RTO(K+1) = MIN(UB, MAX(LB, β × SRTT(K+1)))
where UB and LB are prechosen fixed upper and lower bounds. Example values for α, β: 0.8 < α < 0.9; 1.3 < β < 2.0.
Implementation Policy Options
– Send policy
– Deliver policy
– Accept policy: in-order
– Retransmit policy: first-only, batch, individual
– Acknowledge policy: immediate, cumulative

7. Difference between CBR and real-time variable bit rate (rt-VBR)
Constant Bit Rate (CBR): requires that a fixed data rate be made available by the ATM provider. The network must ensure that this capacity is available, and it also polices the incoming traffic on a CBR connection to ensure that the subscriber does not exceed its allocation.
Real-time Variable Bit Rate (rt-VBR): a VBR connection is defined in terms of a sustained rate for normal use and a faster burst rate for occasional use at peak periods. The faster rate is guaranteed, but it is understood that the user will not continuously require this faster rate.

8. What is Back Pressure?
Signals are exchanged between switching elements (SEs) in adjacent stages so that a given SE grants packet transmissions to its upstream SEs only within its current idle buffer capacity.
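The back-pressure idea of question 8 can be sketched as a credit check between adjacent stages; the class and method names are illustrative assumptions.

```python
class SwitchingElement:
    """Sketch of back pressure: an SE grants upstream transmissions
    only while it has idle buffer capacity, throttling the upstream
    stage otherwise."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size
        self.queue = []

    def grant(self):
        """Credit signal sent to the upstream SE."""
        return len(self.queue) < self.buffer_size

    def accept(self, packet):
        if not self.grant():
            raise RuntimeError("no grant: upstream must hold the packet")
        self.queue.append(packet)

se = SwitchingElement(buffer_size=2)
se.accept("p1")
se.accept("p2")
print(se.grant())  # False: buffer full, so the upstream SE is throttled
```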
9. What are the effects of congestion?
– Buffers fill and packets are discarded
– Sources retransmit
– Routers generate more traffic to update paths
– Delays and costs propagate

10. Define Round-trip time (RTT).
Round Trip Time, also referred to as Round Trip Delay, refers to the amount of time it takes for a signal to travel from a particular terrestrial system to a designated satellite and back to its source. The signal is generally a data packet, and the RTT is also known as the ping time. RTT may also be used to find the best possible route.

11. Define PCR and MCR.
Peak Cell Rate (PCR) is an ATM (Asynchronous Transfer Mode) term describing the rate, in cells per second, that the source device may never exceed: an upper bound on the traffic submitted by the source (PCR = 1/T, where T is the minimum cell spacing).
Minimum Cell Rate (MCR) is an ATM ABR service traffic descriptor: the minimum rate at which the source is always allowed to send. Used with ABR and GFR, it is the minimum cell rate requested, with access to unused capacity up to PCR (elastic capacity = PCR – MCR).

12. Define BECN and FECN.
In a frame relay network, FECN (forward explicit congestion notification) is a header bit transmitted by the source (sending) terminal requesting that the destination (receiving) terminal slow down its requests for data. BECN (backward explicit congestion notification) is a header bit transmitted by the destination terminal requesting that the source terminal send data more slowly. FECN and BECN are intended to minimize the possibility that packets will be discarded (and thus have to be resent) when more packets arrive than can be handled.

13. What is the difference between flow control & congestion control?
Flow control means preventing the source from sending data that the sink will end up dropping because it runs out of buffer space. This is fairly easy with a sliding window protocol: just make sure the source's window is no larger than the free space in the sink's buffer. TCP does this by letting the sink advertise its free buffer space in the window field of the acknowledgements.
Congestion control means preventing (or trying to prevent) the source from sending data that will end up getting dropped by a router because its queue is full. This is more complicated, because packets from different sources travelling different paths can converge on the same queue.

14. What are the congestion control techniques?
– Backpressure
– Policing
– Choke packet
– Implicit congestion signaling
– Explicit congestion signaling
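The flow-control versus congestion-control distinction of question 13 reduces to one line at a TCP sender: the usable window is bounded by both the receiver's advertised window (flow control) and cwnd (congestion control). The function name is illustrative.

```python
def usable_window(advertised, cwnd, outstanding):
    """Segments the sender may still transmit.

    Flow control: stay within the receiver's advertised window.
    Congestion control: stay within the congestion window (cwnd).
    """
    return max(0, min(advertised, cwnd) - outstanding)

# here the receiver's buffer (flow control) is the binding limit
print(usable_window(advertised=4, cwnd=10, outstanding=3))
```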