Congestion Avoidance and Control

(Originally published in: Proc. SIGCOMM '88, Vol. 18, No. 4, August 1988)
Van Jacobson*
University of California
Lawrence Berkeley Laboratory
Berkeley, CA 94720
van@helios.ee.lbl.gov
In October of '86, the Internet had the first of what became a series of 'congestion collapses'. During this period, the data throughput from LBL to UC Berkeley (sites separated by 400 yards and three IMP hops) dropped from 32 Kbps to 40 bps. Mike Karels[1] and I were fascinated by this sudden factor-of-thousand drop in bandwidth and embarked on an investigation of why things had gotten so bad. We wondered, in particular, if the 4.3BSD (Berkeley UNIX) TCP was mis-behaving or if it could be tuned to work better under abysmal network conditions. The answer to both of these questions was "yes".

Since that time, we have put seven new algorithms into the 4BSD TCP:

(i) round-trip-time variance estimation
(ii) exponential retransmit timer backoff
(iii) slow-start
(iv) more aggressive receiver ack policy
(v) dynamic window sizing on congestion
(vi) Karn's clamped retransmit backoff
(vii) fast retransmit

Our measurements and the reports of beta testers suggest that the final product is fairly good at dealing with congested conditions on the Internet.

This paper is a brief description of (i)-(v) and the rationale behind them. (vi) is an algorithm recently developed by Phil Karn of Bell Communications Research, described in [KP87]. (vii) is described in a soon-to-be-published RFC.

Algorithms (i)-(v) spring from one observation: The flow on a TCP connection (or ISO TP-4 or Xerox NS SPP connection) should obey a 'conservation of packets' principle. And, if this principle were obeyed, congestion collapse would become the exception rather than the rule. Thus congestion control involves finding places that violate conservation and fixing them.

By 'conservation of packets' I mean that for a connection 'in equilibrium', i.e., running stably with a full window of data in transit, the packet flow is what a physicist would call 'conservative': A new packet isn't put into the network until an old packet leaves. The physics of flow predicts that systems with this property should be robust in the face of congestion. Observation of the Internet suggests that it was not particularly robust. Why the discrepancy?

There are only three ways for packet conservation to fail:

1. The connection doesn't get to equilibrium, or

2. A sender injects a new packet before an old packet has exited, or

3. The equilibrium can't be reached because of resource limits along the path.

In the following sections, we treat each of these in turn.

* This work was supported in part by the U.S. Department of Energy under Contract Number DE-AC03-76SF00098.

[1] The algorithms and ideas described in this paper were developed in collaboration with Mike Karels of the UC Berkeley Computer System Research Group. The reader should assume that anything clever is due to Mike. Opinions and mistakes are the property of the author.

1 Getting to Equilibrium: Slow-start

Failure (1) has to be from a connection that is either starting or restarting after a packet loss. Another way to look at the conservation property is to say that the sender uses acks as a 'clock' to strobe new packets into the network. Since the receiver can generate acks no faster than data packets can get through the network,
the protocol is 'self clocking' (fig. 1). Self clocking systems automatically adjust to bandwidth and delay variations and have a wide dynamic range (important considering that TCP spans a range from 800 Mbps Cray channels to 1200 bps packet radio links). But the same thing that makes a self-clocked system stable when it's running makes it hard to start: to get data flowing there must be acks to clock out packets but to get acks there must be data flowing.

To start the 'clock', we developed a slow-start algorithm to gradually increase the amount of data in transit.[2] Although we flatter ourselves that the design of this algorithm is rather subtle, the implementation is trivial: one new state variable and three lines of code in the sender (an illustrative C sketch appears after the footnote below):

• Add a congestion window, cwnd, to the per-connection state.

• When starting or restarting after a loss, set cwnd to one packet.

• On each ack for new data, increase cwnd by one packet.

• When sending, send the minimum of the receiver's advertised window and cwnd.

[2] Slow-start is quite similar to the CUTE algorithm described in [Jai86b]. We didn't know about CUTE at the time we were developing slow-start but we should have: CUTE preceded our work by several months. When describing our algorithm at the Feb., 1987, Internet Engineering Task Force (IETF) meeting, we called it soft-start, a reference to an electronics engineer's technique to limit in-rush current. The name slow-start was coined by John Nagle in a message to the IETF mailing list in March, '87. This name was clearly superior to ours and we promptly adopted it.
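The following is a minimal C sketch of the four sender-side rules above, for illustration only; it is not the 4.3BSD source. The structure and function names are ours, and windows are counted in packets for clarity where a real TCP would count bytes.

    /* Illustrative sketch of the sender-side slow-start rules above;
     * not the 4.3BSD code.  Windows are in packets for clarity. */
    struct conn {
        unsigned cwnd;      /* congestion window, in packets */
        unsigned snd_wnd;   /* receiver's advertised window, in packets */
    };

    void on_start_or_loss(struct conn *c)
    {
        c->cwnd = 1;        /* restart the ack 'clock' with one packet */
    }

    void on_ack_of_new_data(struct conn *c)
    {
        c->cwnd += 1;       /* each ack for new data opens the window by one packet */
    }

    unsigned send_window(const struct conn *c)
    {
        /* never send more than min(advertised window, cwnd) */
        return c->snd_wnd < c->cwnd ? c->snd_wnd : c->cwnd;
    }

Because every returning ack adds a packet to the window, the window roughly doubles each round trip time, which is the exponential opening described in the figure caption below.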
Figure 2: The horizontal direction is time. The continuous time line has been chopped into one-round-trip-time pieces stacked vertically with increasing time going down the page. The grey, numbered boxes are packets. The white numbered boxes are the corresponding acks.

As each ack arrives, two packets are generated: one for the ack (the ack says a packet has left the system so a new packet is added to take its place) and one because an ack opens the congestion window by one packet. It may be clear from the figure why an add-one-packet-to-window policy opens the window exponentially in time.

If the local net is much faster than the long haul net, the ack's two packets arrive at the bottleneck at essentially the same time. These two packets are shown stacked on top of one another (indicating that one of them would have to occupy space in the gateway's outbound queue). Thus the short-term queue demand on the gateway is increasing exponentially and opening a window of size W packets will require W/2 packets of buffer capacity at the bottleneck.
Figure 3: Trace data of the start of a TCP conversation between two Sun 3/50s running Sun OS 3.5 (the 4.3BSD TCP). The two Suns were on different Ethernets connected by IP gateways driving a 230.4 Kbps point-to-point link (essentially the setup shown in fig. 7).

Each dot is a 512 data-byte packet. The x-axis is the time the packet was sent. The y-axis is the sequence number in the packet header. Thus a vertical array of dots indicates back-to-back packets and two dots with the same y but different x indicate a retransmit.

'Desirable' behavior on this graph would be a relatively smooth line of dots extending diagonally from the lower left to the upper right. The slope of this line would equal the available bandwidth. Nothing in this trace resembles desirable behavior.

The dashed line shows the 20 KBps bandwidth available for this connection. Only 35% of this bandwidth was used; the rest was wasted on retransmits. Almost everything is retransmitted at least once and data from 54 to 58 KB is sent five times.
that R and the variation in R increase quickly with load. If the load is ρ (the ratio of average arrival rate to average departure rate), R and σR scale like 1/(1 − ρ). To make this concrete, if the network is running at 75% of capacity, as the Arpanet was in last April's collapse, one should expect round-trip time to vary by a factor of sixteen (±2σ).

The TCP protocol specification, [RFC793], suggests estimating mean round trip time via the low-pass filter

    R ← αR + (1 − α)M

where R is the average RTT estimate, M is a round trip time measurement from the most recently acked data packet, and α is a filter gain constant with a suggested value of 0.9. Once the R estimate is updated, the retransmit timeout interval, rto, for the next packet sent is set to βR.

The parameter β accounts for RTT variation (see [Cla82], section 5). The suggested β = 2 can adapt to loads of at most 30%. Above this point, a connection will respond to load increases by retransmitting packets that have only been delayed in transit. This forces the network to do useless work, wasting bandwidth on duplicates of packets that will be delivered, at a time when it's known to be having trouble with useful work. I.e., this is the network equivalent of pouring gasoline on a fire.

We developed a cheap method for estimating variation (see appendix A)[3] and the resulting retransmit timer essentially eliminates spurious retransmissions.

[3] We are far from the first to recognize that transport needs to estimate both mean and variation. See, for example, [Edg83]. But we do think our estimator is simpler than most.
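To make the filter concrete, here is a small C sketch of an RFC793-style estimator and timeout computation as described above. The function and variable names are ours, not from any particular implementation, and floating point is used for clarity where BSD code would use scaled integer arithmetic.

    /* Illustrative RFC793-style retransmit timer (not production code). */
    #define ALPHA 0.9   /* filter gain for the smoothed RTT (suggested value) */
    #define BETA  2.0   /* margin for RTT variation (suggested value) */

    static double srtt = 0.0;   /* smoothed round trip time estimate, R */

    /* Feed each new RTT measurement m (seconds); returns the
     * retransmit timeout to use for the next packet sent. */
    double rfc793_rto(double m)
    {
        if (srtt == 0.0)
            srtt = m;                                  /* seed from the first sample */
        else
            srtt = ALPHA * srtt + (1.0 - ALPHA) * m;   /* R <- aR + (1 - a)M */
        return BETA * srtt;                            /* rto = BR */
    }

With a fixed β = 2 this timer has no notion of how variable the RTT actually is, which is why the paper replaces it with the mean-plus-variation estimator of appendix A.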
Figure 4: Same conditions as the previous figure (same time of day, same Suns, same network path, same buffer and window sizes), except the machines were running the 4.3+ TCP with slow-start.

No bandwidth is wasted on retransmits but two seconds is spent on the slow-start so the effective bandwidth of this part of the trace is 16 KBps, two times better than figure 3. (This is slightly misleading: Unlike the previous figure, the slope of the trace is 20 KBps and the effect of the 2 second offset decreases as the trace lengthens. E.g., if this trace had run a minute, the effective bandwidth would have been 19 KBps. The effective bandwidth without slow-start stays at 7 KBps no matter how long the trace.)
A pleasant side effect of estimating β rather than using a fixed value is that low load as well as high load performance improves, particularly over high delay paths such as satellite links (figures 5 and 6).

Another timer mistake is in the backoff after a retransmit: If a packet has to be retransmitted more than once, how should the retransmits be spaced? Only one scheme will work, exponential backoff, but proving this is a bit involved.[4] To finesse a proof, note that a network is, to a very good approximation, a linear system. That is, it is composed of elements that behave like linear operators: integrators, delays, gain stages, etc. Linear system theory says that if a system is stable, the stability is exponential. This suggests that an unstable system (a network subject to random load shocks and prone to congestive collapse[5]) can be stabilized by adding some exponential damping (exponential timer backoff) to its primary excitation (senders, traffic sources).
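A brief C sketch of what such exponential damping looks like in a sender, for illustration only; the ceiling value is our assumption, not a number from the paper.

    /* Illustrative exponential retransmit backoff: each successive
     * retransmission of the same packet doubles the timeout, up to an
     * assumed ceiling. */
    #define RTO_MAX 64.0   /* seconds; cap is an assumption */

    double backoff_rto(double base_rto, unsigned nretrans)
    {
        double rto = base_rto;
        while (nretrans-- > 0 && rto < RTO_MAX)
            rto *= 2.0;            /* exponential damping of the send rate */
        return rto < RTO_MAX ? rto : RTO_MAX;
    }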
[4] An in-progress paper attempts a proof. If an IP gateway is viewed as a 'shared resource with fixed capacity', it bears a remarkable resemblance to the 'ether' in an Ethernet. The retransmit backoff problem is essentially the same as showing that no backoff 'slower' than an exponential will guarantee stability on an Ethernet. Unfortunately, in theory even exponential backoff won't guarantee stability (see [Ald87]). Fortunately, in practise we don't have to deal with the theorist's infinite user population and exponential is "good enough".

[5] The phrase congestion collapse (describing a positive feedback instability due to poor retransmit timers) is again the coinage of John Nagle, this time from [Nag84].

3 Adapting to the path: congestion avoidance

If the timers are in good shape, it is possible to state with some confidence that a timeout indicates a lost packet and not a broken timer. At this point, something can be done about (3). Packets get lost for two reasons: they
"-''i_i
, i .... . i
ii , --'
!" !'C" -"
I I I I I I I t I I
10 20 30 40 50 60 70 80 90 1O0 110
Packet
Figure 5: Trace data showing per-packet round trip time on a well-behaved Arpanet connection. The x-axis is the packet number (packets were numbered sequentially, starting with one) and the y-axis is the elapsed time from the send of the packet to the sender's receipt of its ack. During this portion of the trace, no packets were dropped or retransmitted.

The packets are indicated by a dot. A dashed line connects them to make the sequence easier to follow. The solid line shows the behavior of a retransmit timer computed according to the rules of RFC793.
Figure 6: Same data as above but the solid line shows a retransmit timer computed according to the algorithm in appendix A.
• On any timeout, set cwnd to half the current window size (this is the multiplicative decrease).

• On each ack for new data, increase cwnd by 1/cwnd (this is the additive increase).[10]

• When sending, send the minimum of the receiver's advertised window and cwnd.

Note that this algorithm is only congestion avoidance, it doesn't include the previously described slow-start. Since the packet loss that signals congestion will result in a re-start, it will almost certainly be necessary to slow-start in addition to the above. (An illustrative C sketch of how these rules combine with slow-start appears after the footnotes below.)

… we have to be sure that during the non-equilibrium window adjustment, our control policy allows the gateway enough free bandwidth to dissipate queues that inevitably form due to path testing and traffic fluctuations. By an argument similar to the one used to show exponential timer backoff is necessary, it's possible to show that an exponential (multiplicative) window increase policy will be 'faster' than the dissipation time for some traffic mix and, thus, leads to an unbounded growth of the bottleneck queue.

[10] This increment rule may be less than obvious. We want to increase the window by at most one packet over a time interval of length R (the round trip time). To make the algorithm 'self-clocked', it's better to increment by a small amount on each ack rather than by a large amount at the end of the interval. (Assuming, of course, that the sender has effective silly window avoidance (see [Cla82], section 3) and doesn't attempt to send packet fragments because of the fractionally sized window.) A window of size cwnd packets will generate at most cwnd acks in one R. Thus an increment of 1/cwnd per ack will increase the window by at most one packet in one R. In TCP, windows and packet sizes are in bytes so the increment translates to maxseg*maxseg/cwnd where maxseg is the maximum segment size and cwnd is expressed in bytes, not packets.

[11] We have also developed a rate-based variant of the congestion avoidance algorithm to apply to connectionless traffic (e.g., domain server queries, RPC requests). Remembering that the goal of the increase and decrease policies is bandwidth adjustment, and that 'time' (the controlled parameter in a rate-based scheme) appears in the denominator of bandwidth, the algorithm follows immediately: The multiplicative decrease remains a multiplicative decrease (e.g., double the interval between packets). But subtracting a constant amount from the interval does not result in an additive increase in bandwidth. This approach has been tried, e.g., [Kli87] and [PP87], and appears to oscillate badly. To see why, note that for an inter-packet interval I and decrement c, the bandwidth change of a decrease-interval-by-constant policy is

    1/I → 1/(I − c)

a non-linear, and destabilizing, increase. An update policy that does result in a linear increase of bandwidth over time is

    I_i = α I_{i-1} / (α + I_{i-1})

where I_i is the interval between sends when the i-th packet is sent and α is the desired rate of increase in packets per packet/sec. We have simulated the above algorithm and it appears to perform well. To test the predictions of that simulation against reality, we have a cooperative project with Sun Microsystems to prototype RPC dynamic congestion control algorithms using NFS as a test-bed (since NFS is known to have congestion problems yet it would be desirable to have it work over the same range of networks as TCP).
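The following C sketch is our illustration, not code from the paper or from 4.3BSD, of how the congestion-avoidance bullets above can be combined with the slow-start rules of section 1. The byte-based increment follows footnote 10's maxseg*maxseg/cwnd form; the 'target' threshold used to glue the two phases together is an assumption about how the combination is structured, not a quotation of the actual algorithm.

    /* Illustrative combination of slow-start (section 1) with the
     * congestion-avoidance rules above.  Not the 4.3BSD source; the
     * 'target' threshold is our assumption about how the two combine. */
    struct tcp_cb {
        unsigned long cwnd;     /* congestion window, in bytes */
        unsigned long target;   /* window size to grow back toward (assumption) */
        unsigned long snd_wnd;  /* receiver's advertised window, in bytes */
        unsigned long maxseg;   /* maximum segment size, in bytes */
    };

    void on_timeout(struct tcp_cb *tp)
    {
        /* multiplicative decrease: remember half the current window ... */
        tp->target = tp->cwnd / 2;
        if (tp->target < tp->maxseg)
            tp->target = tp->maxseg;
        /* ... and re-start the ack clock from one packet (slow-start) */
        tp->cwnd = tp->maxseg;
    }

    void on_ack_of_new_data(struct tcp_cb *tp)
    {
        if (tp->cwnd < tp->target)
            tp->cwnd += tp->maxseg;                          /* slow-start: exponential opening */
        else
            tp->cwnd += tp->maxseg * tp->maxseg / tp->cwnd;  /* additive increase */
    }

    unsigned long usable_window(const struct tcp_cb *tp)
    {
        return tp->snd_wnd < tp->cwnd ? tp->snd_wnd : tp->cwnd;  /* min(awnd, cwnd) */
    }

Per footnote 10, a window of cwnd bytes generates about cwnd/maxseg acks per round trip, so the maxseg*maxseg/cwnd increment grows the window by roughly one segment per round trip time.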
Figure 7: Test setup to examine the interaction of multiple, simultaneous TCP conversations sharing a bottleneck link. The two sites' local nets are 10 Mbps Ethernets. 1 MByte transfers (2048 512-data-byte packets) were initiated 3 seconds apart from four machines at LBL to four machines at UCB, one conversation per machine pair (the dotted lines above show the pairing). All traffic went via a 230.4 Kbps link connecting IP router csam at LBL to IP router cartan at UCB.

The microwave link queue can hold up to 50 packets. Each connection was given a window of 16 KB (32 512-byte packets). Thus any two connections could overflow the available buffering and the four connections exceeded the queue capacity by 160%.
4 Future work: the gateway side of congestion control

While algorithms at the transport endpoints can insure the network capacity isn't exceeded, they cannot insure fair sharing of that capacity. Only in gateways, at the convergence of flows, is there enough information to control sharing and fair allocation. Thus, we view the gateway 'congestion detection' algorithm as the next big step.

The goal of this algorithm is to send a signal to the endnodes as early as possible, but not so early that the gateway becomes starved for traffic. Since we plan to continue using packet drops as a congestion signal, gateway 'self protection' from a mis-behaving host should fall out for free: That host will simply have most of its packets dropped as the gateway tries to tell it that it's using more than its fair share. Thus, like the endnode algorithm, the gateway algorithm should reduce congestion even if no endnode is modified to do congestion avoidance. And nodes that do implement congestion avoidance will get their fair share of bandwidth and a minimum number of packet drops.

Since congestion grows exponentially, detecting it early is important: if detected early, small adjustments to the senders' windows will cure it. Otherwise massive adjustments are necessary to give the net enough spare capacity to pump out the backlog. But, given the bursty nature of traffic, reliable detection is a non-trivial problem. [JRC87] proposes a scheme based on averaging between queue regeneration points. This should yield good burst filtering but we think it might have convergence problems under high load or significant second-order dynamics in the traffic.[13] We plan to use some of our earlier work on ARMAX models for round-trip-time/queue length prediction as the basis of detection. Preliminary results suggest that this approach works well at high load, is immune to second-order effects in the traffic and is computationally cheap enough to not slow down kilopacket-per-second gateways.

[13] The time between regeneration points scales like 1/(1 − ρ) and the variance of that time like 1/(1 − ρ)³ (see [Fel71], chap. VI.9). Thus the congestion detector becomes sluggish as congestion increases and its signal-to-noise ratio decreases dramatically.

Acknowledgements

I am very grateful to the members of the Internet Activity Board's End-to-End and Internet-Engineering task forces for this past year's abundant supply of interest, encouragement, cogent questions and network insights.
Figure 8: Trace data from four simultaneous TCP conversations without congestion avoidance over the paths shown in figure 7.

4,000 of 11,000 packets sent were retransmissions (i.e., half the data packets were retransmitted).

Since the link data bandwidth is 25 KBps, each of the four conversations should have received 6 KBps. Instead, one conversation got 8 KBps, two got 5 KBps, one got 0.5 KBps and 6 KBps has vanished.
A.1 Theory

The RFC793 algorithm for estimating the mean round trip time is one of the simplest examples of a class of estimators called recursive prediction error or stochastic gradient algorithms. In the past 20 years these algorithms have revolutionized estimation and control theory[14] and it's probably worth looking at the RFC793 estimator in some detail.

    A ← A + g(M − A)

Think of A as a prediction of the next measurement. M − A is the error in that prediction and the expression above says we make a new prediction based on the old prediction plus some fraction of the prediction error. The prediction error is the sum of two components: (1) error due to 'noise' in the measurement (random, unpredictable effects like fluctuations in competing traffic) and (2) error due to a bad choice of A. Calling the random error Er and the estimation error Ee,

    A ← A + g·Er + g·Ee

[14] See, for example, [LS83].
Figure 9: Multiple, simultaneous TCPs with congestion avoidance. Trace data from four simultaneous TCP conversations using congestion avoidance over the paths shown in figure 7.

89 of 8281 packets sent were retransmissions (i.e., 1% of the data packets had to be retransmitted).

Two of the conversations got 8 KBps and two got 4.5 KBps (i.e., all the link bandwidth is accounted for; see fig. 11). The difference between the high and low bandwidth senders was due to the receivers. The 4.5 KBps senders were talking to 4.3BSD receivers which would delay an ack until 35% of the window was filled or 200 ms had passed (i.e., an ack was delayed for 5-7 packets on the average). This meant the sender would deliver bursts of 5-7 packets on each ack.

The 8 KBps senders were talking to 4.3+BSD receivers which would delay an ack for at most one packet (the author doesn't believe that delayed acks are a particularly good idea). I.e., the sender would deliver bursts of at most two packets.

The probability of loss increases rapidly with burst size so senders talking to old-style receivers saw three times the loss rate (1.8% vs. 0.5%). The higher loss rate meant more time spent in retransmit wait and, because of the congestion avoidance, smaller average window sizes.
The gEe term gives A a kick in the right direction while the gEr term gives it a kick in a random direction. Over a number of samples, the random kicks cancel each other out so this algorithm tends to converge to the correct average. But g represents a compromise: We want a large g to get mileage out of Ee but a small g to minimize the damage from Er. Since the Ee terms move A toward the real average no matter what value we use for g, it's almost always better to use a gain that's too small rather than one that's too large. Typical gain choices are 0.1-0.2 (though it's a good idea to take a long look at your raw data before picking a gain).

It's probably obvious that A will oscillate randomly around the true average and the standard deviation of A will be g·sdev(M). Also that A converges to the true average exponentially with time constant 1/g. So a smaller g gives a stabler A at the expense of taking a much longer time to get to the true average.

If we want some measure of the variation in M, say to compute a good value for the TCP retransmit timer, there are several alternatives. Variance, σ², is the conventional choice because it has some nice mathematical properties. But computing variance requires squaring (M − A) so an estimator for it will contain a multiply
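A cheaper alternative, and the kind of estimator the paper's appendix A argues for, is to run the same low-pass filter on the absolute prediction error, which needs no squaring. Here is a small C sketch of that idea; the gains of 1/8 and 1/4 and the 'A + 2D' timeout rule are our illustrative choices, not a quotation of the appendix's fixed-point code.

    /* Illustrative mean + mean-deviation RTT estimator in the spirit of
     * appendix A.  Floating point for clarity; gains and the timeout
     * rule below are assumptions, not quoted from the paper. */
    #include <math.h>

    struct rtt_est {
        double a;   /* smoothed RTT estimate (the 'A' above)  */
        double d;   /* smoothed mean deviation of the RTT     */
    };

    double rtt_update(struct rtt_est *e, double m)
    {
        double err = m - e->a;                 /* prediction error, M - A */

        e->a += 0.125 * err;                   /* A <- A + g(M - A), g = 1/8 */
        e->d += 0.25 * (fabs(err) - e->d);     /* same filter applied to |M - A| */

        return e->a + 2.0 * e->d;              /* timeout: mean plus two deviations */
    }

Because the deviation filter needs only an add, a gain multiply (a shift in fixed point) and an absolute value, it fits the 'cheap method for estimating variation' the body of the paper refers to.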
Figure 10: The thin line shows the total bandwidth used by the four senders without congestion avoidance (fig. 8), averaged over 5 second intervals and normalized to the 25 KBps link bandwidth. Note that the senders send, on the average, 25% more than will fit in the wire.

The thick line is the same data for the senders with congestion avoidance (fig. 9). The first 5 second interval is low (because of the slow-start), then there is about 20 seconds of damped oscillation as the congestion control 'regulator' for each TCP finds the correct window size. The remaining time the senders run at the wire bandwidth. (The activity around 110 seconds is a bandwidth 're-negotiation' due to connection one shutting down. The activity around 80 seconds is a reflection of the 'flat spot' in fig. 9 where most of conversation two's bandwidth is suddenly shifted to conversations three and four; a colleague and I find this 'punctuated equilibrium' behavior fascinating and hope to investigate its dynamics in a future paper.)
Figure 11: Figure 10 showed the old TCPs were using 25% more than the bottleneck link bandwidth. Thus, once the bottleneck queue filled, 25% of the senders' packets were being discarded. If the discards, and only the discards, were retransmitted, the senders would have received the full 25 KBps link bandwidth (i.e., their behavior would have been anti-social but not self-destructive). But fig. 8 noted that around 25% of the link bandwidth was unaccounted for.

Here we average the total amount of data acked per five second interval. (This gives the effective or delivered bandwidth of the link.) The thin line is once again the old TCPs. Note that only 75% of the link bandwidth is being used for data (the remainder must have been used by retransmissions of packets that didn't need to be retransmitted).

The thick line shows delivered bandwidth for the new TCPs. There is the same slow-start and turn-on transient followed by a long period of operation right at the link bandwidth.
Figure 12: Because of the five second averaging time (needed to smooth out the spikes in the old TCP data), the congestion avoidance window policy is difficult to make out in figures 10 and 11. Here we show effective throughput (data acked) for TCPs with congestion control, averaged over a three second interval.

When a packet is dropped, the sender sends until it fills the window, then stops until the retransmission timeout. Since the receiver cannot ack data beyond the dropped packet, on this plot we'd expect to see a negative-going spike whose amplitude equals the sender's window size (minus one packet). If the retransmit happens in the next interval (the intervals were chosen to match the retransmit timeout), we'd expect to see a positive-going spike of the same amplitude when the receiver acks its cached data. Thus the height of these spikes is a direct measure of the sender's window size.

The data clearly shows three of these events (at 15, 33 and 57 seconds) and the window size appears to be decreasing exponentially. The dotted line is a least squares fit to the six window size measurements obtained from these events. The fit time constant was 28 seconds. (The long time constant is due to lack of a congestion avoidance algorithm in the gateway. With a 'drop' algorithm running in the gateway, the time constant should be around 4 seconds.)