22 - Trans - Congestion
[figure: five-layer protocol stack — application, transport, network, data link, physical]
Transport Layer
Congestion Control – An Introduction
Some slides are adapted from “Computer Networking – a Top-Down Approach”
© 1996-2012 by J.F. Kurose and K.W. Ross, All Rights Reserved
CS 210
Introduction to Computer Networks
Questions
• Why does congestion happen? What are the costs of congestion?
• What are the two broad approaches to congestion control?
• How does TCP provide congestion control?
• What is TCP slow start?
• What is fairness? How does TCP provide fairness?
Causes/costs of Congestion: scenario 1
• two senders, two receivers
• original data rate: λin; throughput: λout
[figures: λout vs. λin, capped at R/2; delay vs. λin]
Causes/costs of Congestion: scenario 2
idealization: perfect knowledge
• sender sends only when router buffers are available (no buffer is ever overrun)
[figure: λout vs. λin — throughput equals λin up to R/2]
Causes/costs of congestion: scenario 2
idealization: known loss
• packets can be lost, dropped at router due to full buffers
• sender only resends if packet known to be lost
[figure: λout vs. λin — when sending at R/2, some packets are retransmissions, but asymptotic goodput is still R/2]
Causes/costs of Congestion: scenario 2
realistic: duplicates
❖ packets can be lost, dropped at router due to full buffers
❖ sender times out prematurely, sending two copies, both of which are delivered
[figure: λout vs. λin — when sending at R/2, some packets are retransmissions, including duplicates that are delivered]
“costs” of congestion:
❖ more work (retransmissions) for a given “goodput”
❖ unneeded retransmissions: link carries multiple copies of a packet
▪ decreasing goodput
Causes/costs of Congestion: scenario 3
[figure: multihop topology (Hosts C and D); λout vs. λ'in, both on a scale of C/2 — as λ'in grows, λout eventually collapses]
another “cost” of congestion: when a packet is dropped, any upstream transmission capacity used for that packet was wasted
TCP Congestion Control: additive increase, multiplicative decrease
❖ approach: sender increases transmission rate (window size), probing for usable bandwidth, until loss occurs
▪ additive increase: increase cwnd by 1 MSS (Maximum Segment Size) every RTT until loss detected
▪ multiplicative decrease: cut cwnd in half after loss
[figure: sawtooth pattern of cwnd over time]
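The AIMD rule above can be sketched in a few lines. This is a minimal illustration, not a real TCP implementation: 1 MSS is taken as 1 unit, and loss detection is modeled as an explicit flag passed in per RTT.

```python
# Minimal AIMD sketch: +1 MSS per RTT, halve on loss (units of MSS).
MSS = 1

def aimd_update(cwnd, loss):
    """Return the new congestion window after one RTT."""
    if loss:
        return max(MSS, cwnd / 2)   # multiplicative decrease: halve cwnd
    return cwnd + MSS               # additive increase: +1 MSS per RTT

# Trace the classic sawtooth: grow linearly, halve on a loss event.
cwnd = 8
trace = []
for rtt in range(10):
    loss = (rtt == 5)               # pretend a loss is detected at RTT 5
    cwnd = aimd_update(cwnd, loss)
    trace.append(cwnd)
print(trace)
```

The trace grows by 1 each RTT, drops by half at the loss, then resumes linear growth — the sawtooth in the figure.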
TCP Slow Start
❖ when connection begins, increase rate exponentially until first loss event:
▪ initially cwnd = 1 MSS
▪ double cwnd every RTT
▪ done by incrementing cwnd for every ACK received
❖ summary: initial rate is slow but ramps up exponentially fast
[figure: segment exchange between Host A and Host B, with the number of segments per RTT doubling]
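The doubling mechanism can be sketched directly from the bullets above: one ACK arrives per segment sent, and each ACK grows cwnd by 1 MSS, so cwnd doubles every RTT. A minimal sketch, assuming no loss and that every segment is ACKed within one RTT:

```python
# Slow start sketch: cwnd grows by 1 MSS per ACK, doubling each RTT.
MSS = 1

def slow_start_rtt(cwnd):
    """One RTT of slow start: each ACK grows cwnd by 1 MSS."""
    acks = cwnd // MSS        # one ACK per segment sent this RTT
    return cwnd + acks * MSS  # net effect: cwnd doubles

cwnd = MSS                    # initially cwnd = 1 MSS
history = [cwnd]
for _ in range(4):
    cwnd = slow_start_rtt(cwnd)
    history.append(cwnd)
print(history)                # exponential growth: [1, 2, 4, 8, 16]
```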
TCP: Switching from Slow Start to CA
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.
Implementation:
• variable ssthresh
• on loss event, ssthresh is set to 1/2 of cwnd just before loss event
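Putting the last two slides together, the ssthresh logic can be sketched as a small per-RTT update. This is an illustrative sketch (units of MSS, loss signaled externally, timeout-style restart to cwnd = 1), not a faithful TCP state machine:

```python
# Sketch of the slow-start / congestion-avoidance switch via ssthresh.
def on_rtt(cwnd, ssthresh, loss):
    """Advance one RTT; return (cwnd, ssthresh) in units of MSS."""
    if loss:
        ssthresh = cwnd / 2        # remember 1/2 of cwnd at the loss
        return 1, ssthresh         # timeout: restart from cwnd = 1 MSS
    if cwnd < ssthresh:
        return cwnd * 2, ssthresh  # slow start: exponential growth
    return cwnd + 1, ssthresh      # congestion avoidance: linear growth

cwnd, ssthresh = 1, 8
for rtt in range(6):               # no loss: 1, 2, 4, 8, then 9, 10, 11
    cwnd, ssthresh = on_rtt(cwnd, ssthresh, loss=False)
print(cwnd)                        # prints 11
cwnd, ssthresh = on_rtt(cwnd, ssthresh, loss=True)
print(cwnd, ssthresh)              # prints 1 5.5
```

Note how growth is exponential while cwnd < ssthresh and linear afterwards, and how the loss event records half of the pre-loss cwnd in ssthresh.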
TCP Throughput
• Average TCP throughput as a function of window size and RTT?
– ignore slow start, assume there is always data to send
• W: window size (measured in bytes) at which loss occurs
– avg. window size (# in-flight bytes) is 3/4 W
– avg. throughput is 3/4 W per RTT

avg TCP throughput = (3/4) · W / RTT bytes/sec
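A quick worked example of the formula above, with hypothetical numbers (W = 80,000 bytes at loss, RTT = 100 ms):

```python
# Average TCP throughput = (3/4) * W / RTT  (W in bytes, RTT in seconds).
def avg_tcp_throughput(w_bytes, rtt_sec):
    return 0.75 * w_bytes / rtt_sec

# Hypothetical values: loss window W = 80,000 bytes, RTT = 100 ms.
print(avg_tcp_throughput(80_000, 0.1))   # prints 600000.0 (bytes/sec)
```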
TCP Fairness
fairness goal: if k TCP sessions share the same bottleneck link of bandwidth R, each should have an average rate of R/k
[figure: TCP connections 1 and 2 sharing a bottleneck router of capacity R]
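A toy simulation shows why AIMD drives two competing flows toward the fair share: additive increase raises both rates equally, but multiplicative decrease shrinks the larger flow by more, so the gap between them halves at every loss. This sketch assumes synchronized losses whenever the bottleneck of capacity R is oversubscribed:

```python
# Two AIMD flows sharing a bottleneck of capacity R.
R = 100.0
x1, x2 = 10.0, 70.0            # start far from the fair share R/2
for _ in range(200):
    x1 += 1.0                  # additive increase for both flows
    x2 += 1.0
    if x1 + x2 > R:            # loss when the bottleneck is oversubscribed
        x1 /= 2                # multiplicative decrease for both;
        x2 /= 2                # the gap x2 - x1 is halved each time
print(round(x1), round(x2))    # x1 ≈ x2: the flows converge to equal shares
```

Each flow still oscillates in a sawtooth, but the two sawtooths converge onto each other, giving each flow the same average rate.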
Fairness – cont’d
Fairness and UDP
• multimedia apps often do not use TCP
– do not want rate throttled by congestion control
• instead use UDP:
– send audio/video at constant rate, tolerate packet loss

Fairness, parallel TCP connections
• application can open multiple parallel connections between two hosts
• web browsers do this
• e.g., link of rate R with 9 existing connections:
– new app asks for 1 TCP, gets rate R/10
– new app asks for 11 TCPs, gets R/2
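The arithmetic behind the parallel-connections example can be written out directly: if the link splits its rate R equally among all TCP connections, an app opening k connections alongside n existing ones gets k/(n+k) of R.

```python
# Share of link rate R for an app opening k parallel TCP connections
# when n connections already exist (equal split per connection assumed).
def app_share(R, n, k):
    return R * k / (n + k)

R = 1.0
print(app_share(R, 9, 1))    # prints 0.1  -> one connection gets R/10
print(app_share(R, 9, 11))   # prints 0.55 -> 11 connections get ~R/2
```

This is why opening many parallel connections is unfair: the app's share grows with k even though no single connection exceeds its per-connection share.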