Licentiate Thesis

Sara Landström
March 2005
Supervisor
Lars-Åke Larzon, Ph.D.,
Luleå University of Technology and Uppsala University
Assistant supervisors
Ulf Bodin, Ph.D., Luleå University of Technology
Krister Svanbro, Ericsson Research AB
Published 2005
Printed in Sweden by University Printing Office, Luleå
To Peter and Emelie
Abstract
Contents
Abstract i
Publications v
Acknowledgments vii
Thesis Introduction 1
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.1 The Internet . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Wireless Wide-area Networks . . . . . . . . . . . . . . . . 5
2 Research Area . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1 Focus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
3.1 Data Collection . . . . . . . . . . . . . . . . . . . . . . . . 10
4 Contribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
5 Continuation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Papers 15
Publications
• Paper 2
Mats Folke, Sara Landström and Ulf Bodin, “On the TCP Minimum Retransmission Timeout in a High-speed Cellular Network”. To be presented at the 11th European Wireless Conference, Nicosia, Cyprus, April 10-13, 2005.

• Paper 3
Sara Landström and Lars-Åke Larzon, “Buffer management for TCP over HS-DSCH”. Technical report, LTU–TR–05/09–SE, Luleå University of Technology, Sweden, February 2005.

• Paper 4
Acknowledgments
The first person I would like to thank is my supervisor, Lars-Åke. He has believed in me from the start and has always invited me to ask questions, share ideas and discuss everything that remotely relates to being a Ph.D. student. To Ulf and Krister, who have been my assistant supervisors, I would like to say that I appreciate our discussions and your involvement in my education.
Among my colleagues in the Computer Science and Networking hallway, Mats deserves a special thank you. His help and willingness to discuss everyday problems and ideas have been invaluable. My fellow Ph.D. students all have one thing in common: they are helpful and good company. I would also like to direct a thank you to Arne Simonsson at Ericsson Research AB, for always taking the time to answer my questions.
Waiting for me to come home each day is my daughter Emelie. Thank you for reminding me that there is a world where entirely different things matter than the ones I struggle with at work. I would also like to thank my husband Peter for all his love, support, and patience.
My research has mainly been supported by Vinnova, the Swedish Agency
for Innovation Systems, but also by the PCC++ graduate school and Ericsson
Research AB. Thank you for your financial support and for providing me with
the opportunity to work with you.
Thesis Introduction
Thesis Introduction 3
1 Introduction
This section gives a short introduction to networking and wireless wide-area networks, and sets the stage for the research presentation that follows.
[Figure: The TCP/IP protocol stack: Application, TCP/UDP, IP, Network.]
The current Internet architecture cannot disregard implicit signals, since it is not possible to assume that explicit signaling is supported along the entire network path between two communicating network nodes.
The flows representing the majority of the long-lived sessions at the end of the 1980s ran on top of TCP, and congestion control was therefore made part of TCP. User and application behaviors are, however, different today. New applications with real-time demands on the transport service, such as streaming and gaming applications, have led to longer-lived UDP sessions. TCP includes mechanisms, such as its reliable in-order delivery guarantee, that introduce delay variations and reduce the application's control over the data flow. UDP is therefore preferred by applications with strict timing requirements.
Out of concern for the changing traffic patterns, an initiative was taken to provide UDP flows with congestion control as well. The effort led to the design of the Datagram Congestion Control Protocol (DCCP) [19]. The protocol is built as a toolbox from which a suitable congestion control profile can be chosen; its main limitation is the existing set of congestion control profiles. One of the difficulties in designing congestion control algorithms is that the mechanism also performs resource allocation when resources become scarce. It is therefore desirable that the algorithms are fair to existing congestion control schemes, i.e., that of TCP, and avoid starving other flows that also implement congestion control.
The future of DCCP depends on whether it gains acceptance in the wider network community. For application designers, changing from UDP to DCCP is a big step. As a measure of complexity, UDP is defined in about ten pages, whereas DCCP consists of a minimum of three specifications, the largest of which is almost two hundred pages. Another barrier to deployment is the use of Network Address Translation (NAT) tools and firewalls, which have to be extended to handle DCCP correctly.
Meanwhile, the Internet has become a gathering of both wired and wireless
networks. In the next section I will discuss the history of the mobile telephony
networks and their approach to providing data services.
high speed while being engaged in a session and switching cells over a wide
area. The mobile switching center (MSC) coordinates the activities of all the
base stations and connects the cellular system to the public switched telephone
network (PSTN).
It all began in 1895: Nikola Tesla was ready to transmit a radio signal 50 miles to West Point, New York, when a fire consumed his lab. Meanwhile, Guglielmo Marconi had been granted a patent for wireless telegraphy in England in 1896. One year later, he used the Tesla oscillator to demonstrate the usability of radio in mobile communication by keeping in contact with ships sailing on the English Channel [55]. The world's first wireless cellular
system was however not implemented until 1979 in Japan by Nippon Telephone
and Telegraph company (NTT). The invention of the cellular concept enabled
large-scale radio communications and was refined by many telecommunication
companies working in parallel. The idea is to split the coverage zone into small
cells and reuse portions of the spectrum to increase spectrum usage at the
expense of a larger infrastructure.
Here in Scandinavia, the Nordic Mobile Telephone (NMT) system was intro-
duced in 1981. It belongs to the first generation of mobile systems, which were
generally incompatible in Europe due to the use of different frequencies and
protocols. The first universal digital cellular system (2G) that gained world-
wide acceptance was the Global System for Mobile (GSM) deployed in the early
1990s. GSM was designed before the Internet became a commodity and hence
the data rate requirements were low, since voice produces relatively low bit rate
traffic. In order to increase the 2G data rates for Internet-type services, a number of techniques under the name 2.5G were developed. For GSM, High
Speed Circuit Switched Data (HSCSD), General Packet Radio Service (GPRS)
and/or Enhanced Data rates for GSM (or Global) Evolution (EDGE) are viable
extensions.
The third generation of GSM technology (3GSM) has a Wideband-CDMA
(W-CDMA) air interface, which has been developed as an open standard by
operators in conjunction with the 3GPP standards development organization.
Already over 85% of the world’s network operators have chosen 3GSM’s under-
lying technology platform to deliver their third generation services [24]. Another
name for W-CDMA is Universal Mobile Telecommunications Service (UMTS).
W-CDMA includes a shared high-speed channel for traffic from the base station
to the mobile users. This high-speed shared downlink packet access (HSDPA)
mode is the focus of several of the studies in this thesis. Figure 1.2 illustrates
the evolution of the mobile cellular networks.
The telecommunication industry has not been able to agree on one global
3G standard. Therefore a second partnership project (3GPP2) is developing
another 3G standard in parallel to W-CDMA, which is called cdma2000 and is
not building on GSM technology. The partners are from North America, Japan,
Korea and China.
The roots of the wireless wide-area networks are in the telephone industry,
from which users have come to expect a high quality of service and a high degree
of stability. Telecommunication is often referred to as having six nines, 99.9999%, availability. Requirements of the emergency services are one of the reasons for the high demands.

[Figure 1.2: Evolution of the mobile cellular networks. 2.5G: IS-95B, HSCSD, GPRS, EDGE. 3G: W-CDMA (HSDPA), EDGE, cdma2000-1xRTT, cdma2000-1xEV (DV, DO), TD-SCDMA.]
From being voice-call communication systems, mobile wireless cellular systems have evolved in the direction of the Internet. Since these systems were originally designed for voice traffic, with circuit switching as the means of distributing capacity, certain changes have been necessary to connect to the Internet and carry data traffic. The core business is, however, still voice calls, and the quality of this service must therefore continue to be ensured. Different types of messaging services, like the short message service (SMS) and the multimedia message service (MMS), have also become popular.
Wireless local-area networks (WLANs), on the other hand, have a background in data services. Compared to WWANs, they generally provide higher data rates at the expense of mobility; the user must stay close to an access point to achieve the high data rates. Successors to the successful IEEE 802.11 standard, IEEE 802.16 and IEEE 802.20, are currently being designed to allow increased coverage [57], e.g., wireless broadband for residential and small-office use. The primary standardization bodies for WLANs are the Institute of Electrical and Electronics Engineers (IEEE) and the European Telecommunications Standards Institute (ETSI).
The wireless medium and user mobility challenge some of the implicit assumptions made when the congestion control and avoidance mechanisms of TCP were designed. Transmission errors are more frequent [2] and round trip times may vary to a larger extent than in a wired network [41]. Another key issue is the efficient utilization of the available frequencies, since licenses for the radio spectrum are relatively expensive. This is in contrast to the IETF philosophy
8 Congestion Control in Wireless Cellular Networks
2 Research Area
There are many challenges to the Internet and the telecommunication industry.
In this section I will briefly touch upon a few of them, which are related to my
research.
Transporting both voice and data traffic as packet-switched services at the
IP layer would allow the efficient deployment of new services, such as real-time
multimedia with integrated voice and video. Furthermore, having an all-IP
network instead of separate voice and data networks means that fewer pieces of
equipment need to be deployed and maintained.
Voice over IP (VoIP) is the key to a common IP platform for wireline and wireless networks. Still, the traditional circuit-switched voice networks have been well tuned for efficient spectrum utilization, so VoIP has a lot to prove with regard to its cost effectiveness. Nonetheless, a first step was taken on the 25th of August, 2003, when a specification for Push to talk over Cellular (PoC) was submitted to the Open Mobile Alliance (OMA) [48].
The interest in VoIP over wired networks is also growing. This trend has
the potential to change the traffic patterns on the Internet. A larger share of
long-lived UDP sessions is undesirable from a network point of view, since no
regulation of the traffic flows is performed by UDP. The network may therefore
become unstable and perform a high degree of useless work.
Congestion control performs resource allocation when competition for resources is intense. The problems of delivering service assurances and of performing congestion control are therefore related. Time-constrained services using UDP are pushing the development of congestion control profiles that combine satisfactory service delivery with network stability. For flows with strict timing requirements, there is a send rate threshold below which the data stream is useless to the receiver, as well as a maximum delay that can be tolerated.
A forum for exchanging ideas in this area has been the IETF working group for DCCP [31]. DCCP is a new transport protocol that offers no delivery guarantees; the objective is to create an alternative to UDP, with congestion control, for long-lived traffic flows. I have followed the standardization process of DCCP, whose main weakness is the usability of the congestion control profiles it currently provides. Resolving this is key to the success of DCCP.
IP-based traffic is often characterized as bursty, especially when TCP is used as the transport protocol. To maintain high system utilization for IP-based traffic over WWANs, aggregating data from multiple users may be beneficial. Shared channel solutions, of which HSDPA is one example, are therefore likely to become more common. The challenge with service multiplexing is to give service assurances when mixing data from many users and/or different traffic types.
The level of service quality required is tied to people's expectations and habits, and it comes into focus when a service is offered on a different technology than before. For example, telephone services are now being provided over the Internet, and wireless cellular mobile networks are offering data services. Culture clashes follow from this development. We expect a certain call reliability, whereas we are quite familiar with the best-effort thinking of the Internet. We are used to paying different tariffs for our telephone calls, but we do not usually get a differentiated bill for our Internet usage. Thus, in working with technology we must be aware of these conceptions and allow the market to mature.
2.1 Focus
My work relates to the heterogeneous platforms and diverse applications aspiring to become part of the global Internet. In my research I attempt to find
solutions for applications in an all-IP network that involves both wired and wire-
less links. There may still be links requiring special attention and techniques
to enhance performance. In such situations I believe that solving the problems
locally is often preferable if the technology is widely spread. When new inven-
tions are being made it is important to be there from the start and develop a
generic solution.
I am particularly interested in how congestion control can be used to allow a
continued proliferation of applications, in a way that does not disturb the current
core activities of a subnet. Furthermore, understanding how new services can
be introduced in cellular systems and how applications can co-exist in a cellular
environment is of interest to me. Therefore I have studied the existing congestion
control algorithms and those that are under development, as well as general
design of transport protocols.
My research around HSDPA aims to widen the understanding of the special
characteristics of a wireless cellular system with channels especially designed for
data transport. There are many points in common between HSDPA and the
next generation of WLAN technologies as well.
Producing implementations is an important part of networking research,
since much of the research is applied and theoretical analysis usually does not
allow design choices to be critically tested in a wide range of settings. I have
worked with implementing HSDPA, DCCP and a queue management technique
called PDPC. The existence of implementations makes it easier to perform re-
search into these areas and pushes development forward.
3 Methodology
The eligible tools for performance analysis of computer systems are measure-
ment, simulation and analytical modeling. The system must be studied under
an appropriate workload and its performance evaluated by a suitable metric.
design space). On the other hand, it takes time to implement additional details, debug, and make changes as development continues. Foreseeing the future is virtually impossible. Details may also distract from the research problem at hand and even make the effects less distinct. To enable large-scale simulations, the appropriate level of detail must therefore be chosen with care [27].
If you have knowledge of the situation to be studied, it is easier to choose an appropriate level of detail, since it is then possible to reason about the effects that a certain component might have in that particular setting. By clearly stating which assumptions we have made and which scenario the model is intended for, we guard against misuse of the model and encourage others to give us feedback on our simplifications.
In addition to the simulation experiments, I have put forward and supervised
“real-world” projects:
4 Contribution
My research contributes to bridging the gap between wireline and wireless networks by ensuring that new transport protocol mechanisms are evaluated for the wireless realm. Parts of my research aim at widening the understanding of how shared WWAN channels can be used and at identifying issues that need to be considered when managing these networks. Finally, research into appropriate congestion control algorithms for time-constrained services is important to the network community as a whole and to its potential for further growth.
Seen from a larger perspective this type of research may in the end result
in better user perceived performance regarding the quality and availability of
services in WWANs. This requires efficient service implementations.
Paper 1
The first paper, Congestion Control in a High Speed Radio Environment, was also chronologically the first. It evaluates TFRC and TCP in HSDPA. The purpose of the evaluation was to detect weaknesses in the design of TFRC and to determine whether interactions between radio-block scheduling at the link layer and congestion control algorithms at the transport layer would be problematic.
By exposing TFRC to many different environmental conditions, of which this study is an example, we will arrive at a robust design suitable for wireless environments as well.
As part of this evaluation, I updated the TFRC code in the widely used network simulator ns-2 to conform to the RFC; the earlier implementation was produced prior to standardization.
Paper 2
On the Minimum Retransmission Timeout of TCP in a High-speed Cellular Environment continues the work in the previous paper. Except for a few corner cases, TCP works rather well in wireless networks. One of its weaknesses is the use of a timer to determine when a packet is to be retransmitted. Delay variations are inherent to a radio environment, and sudden delay variations may cause the timer to trigger retransmissions prematurely. A lower bound on the retransmission timer has historically been motivated by poor clock granularity and the “conservation of packets” principle described in [33], but lately a much reduced lower bound has been adopted in modern implementations [26].
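The timer in question is computed from smoothed round-trip-time estimates and then floored at the minimum bound under discussion. A minimal sketch in the style of RFC 6298 (Jacobson's algorithm); the constants are the standard ones, but real implementations differ in clock granularity and in the floor value:

```python
def update_rto(srtt, rttvar, rtt_sample, min_rto=1.0, alpha=1/8, beta=1/4):
    """One retransmission-timeout update in the style of RFC 6298.

    srtt, rttvar: smoothed RTT and RTT variance in seconds (None before
    the first sample). min_rto is the lower bound discussed in the text:
    historically 1 second, much lower in modern stacks.
    Returns the updated (srtt, rttvar, rto).
    """
    if srtt is None:
        # First measurement: initialize per RFC 6298.
        srtt = rtt_sample
        rttvar = rtt_sample / 2
    else:
        rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
        srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = max(min_rto, srtt + 4 * rttvar)
    return srtt, rttvar, rto
```

With the historical 1-second floor, a path with a steady 100 ms RTT still gets a 1-second timeout; lowering min_rto lets the timer track the actual path delay, which is precisely the trade-off evaluated in the paper.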
We have evaluated the effect of decreasing the minimum retransmit timeout
interval on TCP performance for HSDPA. The importance of the minimum
retransmit interval to the performance of the retransmit timer has previously
been pointed out in [13]. Since TCP is a central part of the current network
architecture, optimizing its behavior for a particular environment may lead to an
overall performance degradation. It is therefore important to evaluate the effect
of proposed changes to the protocol under conditions which may be problematic.
I contributed with the idea of investigating the impact of changing the lower
bound on the retransmission timeout interval, assisted in interpreting the results
and participated in the writing process.
Paper 3
Buffer management has previously been shown to have a significant effect on TCP performance over both wired and wireless links. In the paper Buffer Management for TCP over HS-DSCH, I study the problem of finding a robust and efficient buffer configuration for a high-speed shared channel when data is buffered for each user individually. A similar problem was studied for dedicated
3G links in [10] and its companion papers. I wanted to see whether their findings also applied to a shared high-speed wireless channel, and I also contribute by identifying a number of factors that must be considered when performing buffer management for HSDPA.
As part of this study, I implemented for ns-2 the buffer strategy called Packet Discard Prevention Counter (PDPC), proposed in [59] for low statistical multiplexing environments.
Paper 4
In the last paper, Properties of TCP-like Congestion Control, I analyze the design of a congestion control algorithm that attempts to imitate the congestion control and avoidance behavior of TCP, but within an unreliable service concept. TCP-like congestion control is currently being standardized by the IETF, and a sanity check of the algorithm was therefore motivated before deployment can be recommended.
I have supervised a Master's thesis student, Nils-Erik Mattsson, who implemented a substantial part of the Datagram Congestion Control Protocol (DCCP) in the network simulator ns-2. Among other features, DCCP includes TCP-like congestion control. The code is available and has been handed out to a number of interested parties.
The work presented here is part of a larger evaluation of DCCP and is linked to the TFRC study in Paper 1. The next step is to compare the use of TFRC and TCP-like congestion control for streaming and real-time applications.
We have also made an implementation of DCCP-Thin for Symbian OS. The observations made for DCCP-Thin are reported in [11] and will also be presented in Linköping at “Radiovetenskap och Kommunikation” (RVK 05), 14–16 June 2005. Furthermore, a kernel version of DCCP for FreeBSD and a patch for Ethereal were released as part of a network project that I proposed and supervised [12].
My contribution
In all the papers included in this thesis except Buffer Management for TCP over HS-DSCH, I have been the main author and carried out all the experimental work.
5 Continuation
My efforts up to this point have been concentrated on evaluating a number of
congestion control mechanisms. We have observed the performance in terms of
system throughput, fairness and individual transfer rates. These are metrics
primarily suitable for bulk transfers, but also reasonable when studying relatively new algorithms with the purpose of validating their operation. When data is produced during the session itself, or the client wishes to hold only small portions of a flow at a time, the application behavior can be quite different and the set of metrics used so far incomplete. Studying in depth the performance of new congestion control algorithms for applications with harder timing constraints, with appropriate application models and performance metrics, is therefore part of my future plans.
A number of issues can be identified, such as the congestion control response to application-limited periods, start-up costs, discrete send rates, packet sizes, smoothness, a minimum useful transfer rate, and delay variations as perceived by the user. A related question is which information a network can provide in order to improve congestion control and thus application performance. Also, can the network use existing congestion control algorithms to prioritize certain services over others by feeding them different information?
Furthermore, the multitude of applications seems to grow indefinitely. Traditional cellular networks have been built and optimized for one main service, i.e., phone calls. These networks are now being transformed into a platform supporting a multitude of services. If optimizations are attempted for each service, the network is likely to become highly complex; it is therefore interesting to investigate how the services can co-exist and where tuning of the network is necessary.
Papers
Paper 1
Paper published as
Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Congestion Control in a High Speed
Radio Environment”. In Proceedings of the International Conference on Wireless
Networks, pages 617-623, Las Vegas, Nevada, USA, 21-24 June 2004.
Congestion Control
in a High Speed Radio Environment
Sara Landström†, Lars-Åke Larzon†,‡, Ulf Bodin†
† Luleå University of Technology
‡ Uppsala University
Abstract
1 Introduction
The High-speed Down-link Packet Access (HSDPA) mode is part of the 3GPP WCDMA specification release 5 [29]. It supports peak data rates on the order of 10 Mbps with low delays. A key component of HSDPA is the channel scheduler. The channel is divided into 2 ms slots that are assigned to the users according to a scheduling algorithm.
A round-robin (RR) scheduler lets users take turns to transmit in an orderly
fashion, whereas a signal-to-interference (SIR) scheduler gives precedence to the
user with the best predicted signaling conditions.
As scheduling is tightly coupled to data availability, which is regulated at a higher level by the transport protocol, we study the interactions between congestion control in the transport layer and channel scheduling in the physical layer.
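The two scheduling disciplines can be sketched as follows; the `backlog` and `sir` fields are illustrative stand-ins for per-user queue state and predicted signal quality, not the simulator's actual data structures:

```python
def round_robin(users, last_index):
    """RR: pick the next backlogged user after the previously served one."""
    n = len(users)
    for step in range(1, n + 1):
        i = (last_index + step) % n
        if users[i]["backlog"] > 0:
            return i
    return None  # no user has data to send

def max_sir(users):
    """SIR: pick the backlogged user with the best predicted signal conditions."""
    backlogged = [i for i, u in enumerate(users) if u["backlog"] > 0]
    if not backlogged:
        return None
    return max(backlogged, key=lambda i: users[i]["sir"])
```

RR gives every backlogged user an equal share of slots regardless of channel quality, while max-SIR maximizes instantaneous throughput at the cost of fairness; this is exactly the trade-off examined in the results section.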
Of the transport protocols that perform congestion control, TCP is the most widely deployed. Another way of performing congestion control is to apply TCP Friendly Rate Control (TFRC), in which an equation-based model of TCP Reno, derived in [49], is used. TFRC has been designed to give smoother rate changes than TCP and is primarily suitable for streaming media applications [20]. An important factor in the send rate equation is the estimate of the round trip time.
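For reference, the send rate equation used by TFRC (RFC 3448, Section 3.1) can be written down directly; note that the allowed rate is roughly inversely proportional to the round trip time R, which is why its estimate matters so much:

```python
from math import sqrt

def tfrc_rate(s, R, p, b=1, t_rto=None):
    """TCP throughput equation used by TFRC (RFC 3448, Section 3.1).

    s: segment size in bytes, R: round trip time estimate in seconds,
    p: loss event rate, b: packets acknowledged per ACK.
    Returns the allowed send rate X in bytes per second.
    """
    if t_rto is None:
        t_rto = 4 * R  # simplification recommended by the RFC
    denom = (R * sqrt(2 * b * p / 3)
             + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom
```

The rate falls as either the loss event rate or the RTT estimate grows, so delay spikes that inflate R directly depress the send rate, a sensitivity that shows up later in the SIR-scheduling results.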
In this paper, we study the performance of TFRC and TCP over an HSDPA link layer with both an RR and a SIR scheduler. As expected, SIR is not as fair as RR, but it does give significantly larger throughput to the users with the best SIRs. We found channel utilization helpful in explaining the observed loads and in comparing the two congestion control algorithms. TFRC and TCP performed equally well. Both protocols are, however, sensitive to the delay spikes resulting from SIR scheduling, and performance could be improved in this respect.
3 Evaluation
Our evaluation of TFRC and TCP over the high-speed downlink shared channel (HS-DSCH) in HSDPA is based on simulations. Performance is investigated for both an RR and a SIR scheduler using two different loads, resulting in four scenarios for each congestion control mechanism.
3.1 Model
The network simulator version 2.27 (ns-2) was complemented with a module for simulating HSDPA [5]. We also modified the TFRC code to follow RFC 3448, and measures were taken to remove the bug reported in [18]. The changes are described in a document retrievable from http://www.csee.ltu.se/~saral.
Table 1.1 gives an overview of the radio models implemented and their con-
figuration.
Phenomenon                                Model/Configuration
Path loss                                 Exponential, propagation constant 3.5
Shadow fading                             Standard deviation 8 dB
Self interference                         Constant 10%
Intra-cell interference (orthogonality)   Constant 40%
Inter-cell interference                   Modeled by distance and shadow fading
Fast HARQ                                 No, immediately retransmitted
Code multiplexing                         Max 3 users
BLER                                      Uniformly distributed, 10% for SIRs above −3.5 dB, 50% for lower SIR levels
There is no fast power control over the high-speed shared channel; instead, link adaptation is employed. The combinations of coding rates and modulation types included in the simulator are introduced in Table 1.2, with SIR levels established in [50]. Note that with these combinations a maximum bit rate of 7.20 Mbps can be achieved. We assume that the number of spreading codes and the power assigned to HS-DSCH change on long time scales compared to the simulation time. The average power was fixed at 10 W, and 12 out of 16 channelization codes were used.
Seven cells with omni-directional antennas and a 500 m radius were simulated, and the performance in the center cell was analyzed. A fixed delay of 75 ms in each direction was used to model the delay over the wired links between the sources and base stations. The wired links were overprovisioned such that the bottleneck was at the wireless link. On reaching the base station, user data were stored in individual buffers, each capable of holding 90 IP packets. This means that it is possible for a single user to capture the wireless channel with SIR scheduling. With our simulation set-up, packets can only be lost in the queue awaiting transport over the wireless link.
We varied the load by simulating either 50 or 65 (30% more) stationary mobile terminals in the coverage area. The nodes were distributed uniformly over the seven cells. The effects of scheduling showing at this load would probably become apparent at higher loads with better-tuned scheduling algorithms than RR and SIR. Alternatively, the load could have been varied by changing the average waiting times between transfers.
Every session consisted of a mobile downloading a file, followed by a truncated exponentially distributed waiting time with mean 2 seconds and a minimum value of 0.5 seconds. The waiting time was initiated as soon as all the data had reached the receiver¹. The file sizes were randomly chosen from nine possible sizes, where the number of packets i is given by equation 1.4. The relative frequencies with which the file sizes were selected were 1:2:3:4:5:6:7:8:9, where 9 corresponds to the smallest file size. Two relatively large file sizes were included, i.e., 740950 and 1483350 bytes, for three reasons: first, for the short file sizes slow start is essentially never left; secondly, the behavior of the schedulers has a larger impact on longer file transfers; and finally, TFRC is targeted at longer-lived sessions. A fixed payload size of 1450 bytes was used in order to create the same number of packets for both TCP and TFRC given a certain transfer size.
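The traffic model above can be sketched as follows. The seven smallest file sizes are hypothetical placeholders, since equation 1.4 is not reproduced here; only the two largest values are taken from the text, and “truncated” is read as clipping the waiting time at its 0.5 s minimum, which is one plausible interpretation:

```python
import random

# Nine candidate file sizes in bytes. Only the two largest (740950 and
# 1483350) are given in the paper; the seven smaller values are
# hypothetical stand-ins for the sizes defined by equation 1.4.
FILE_SIZES = [1450, 2900, 5800, 11600, 23200, 46400, 92800, 740950, 1483350]

def next_session(rng):
    """Draw one (file_size, waiting_time) pair for the traffic model.

    File sizes are selected with relative frequencies 9:8:...:1, the
    smallest size being the most likely. The waiting time is exponential
    with mean 2 s, clipped below at the 0.5 s minimum.
    """
    weights = list(range(9, 0, -1))            # 9 for smallest ... 1 for largest
    size = rng.choices(FILE_SIZES, weights=weights, k=1)[0]
    wait = max(0.5, rng.expovariate(1 / 2.0))  # mean 2 s, minimum 0.5 s
    return size, wait
```

Seeding the `random.Random` instance differently per replication mirrors the paper's use of different seeds for positions, start times, and file sizes in each run.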
TFRC includes neither connection establishment nor tear-down, which for short flows results in higher throughput. Therefore, to enable comparisons, TCP was configured to send data with the initial SYN segment. By setting the initial window of TCP to two segments and sending data on the SYN, the window is doubled as if no handshake were made. A minimum retransmit timeout of 1 second and a timer granularity of 0.01 seconds were used for TCP.
Five minutes system time was simulated in each run and all scenarios were
repeated twenty times. The random number generators giving the positions
of the mobiles, the starting times of the transfers and the file sizes were given
different seeds in each replication of the same scenario.
4 Results
When analyzing the material, we aggregated data from all the replications of the same scenario. The data from the first 5 simulated seconds in each run were removed to avoid initialization bias. Only performance in the center cell was studied. In most cases the results obtained for both loads were similar; hence, if nothing else is stated, the figures represent the case when there are 65 mobiles in the system. The confidence intervals are at the 95% level.

¹ We reset the transport layer endpoints after receiving the last piece of data.
We have used two approaches when performing the analysis. First, we look
at transport layer events, thereafter we study when data are available to the
scheduler.
[Figure 1.1: The number of timeouts as a function of file size (Mbits), for SACK and TFRC with SIR and RR scheduling.]
such that three duplicate acknowledgments are not generated, as is needed for a fast retransmission to be made. When a timeout occurs, a parameter called the slow start threshold is set to half the current window, forcing TCP into congestion avoidance whenever the window size exceeds this value. Since TCP increases its rate more slowly during congestion avoidance and the flows are buffered individually, later timeouts are associated with decreased bottleneck capacity, leading to an accumulation of segments in the affected buffer.
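The timeout reaction described above, with the slow start threshold governing the switch to congestion avoidance, can be sketched as follows. This is a simplified model for illustration, not the simulator code used in the thesis; the two-segment lower bound on the threshold is a common convention and an assumption here.

```python
MSS = 1450  # segment size in bytes, matching the payload size used above

def on_timeout(cwnd, mss=MSS):
    """On a retransmit timeout, the slow start threshold is set to half
    the current window and the congestion window restarts at one segment."""
    ssthresh = max(cwnd // 2, 2 * mss)  # common two-segment floor (assumption)
    return mss, ssthresh                # (new cwnd, new ssthresh)

def grow(cwnd, ssthresh, mss=MSS):
    """Per-ACK window growth: exponential while cwnd < ssthresh (slow
    start), roughly one segment per round trip above it (congestion
    avoidance)."""
    if cwnd < ssthresh:
        return cwnd + mss               # slow start: +1 MSS per ACK
    return cwnd + mss * mss // cwnd     # congestion avoidance: ~+1 MSS per RTT
```

After a timeout at a 20-segment window, the source restarts at one segment but leaves slow start once it again reaches ten segments, which is why later timeouts keep the flow in the slower, linear-growth regime.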
In Figure 1.1 the average number of timeouts per transfer size with TCP is shown. The larger number of timeouts with SIR scheduling is caused by changes in the available capacity due to competing sources, and not by the probing behavior of TCP. Of the total number of timeouts with TCP and SIR scheduling, close to 70% were spurious.
Due to the fact that it is enough for one packet to reach the receiver for the next acknowledgment to be sent in TFRC, together with the relatively large timeout interval, no-feedback timeouts are rare, see Figure 1.1. No timeouts of this type were observed for TFRC with RR. With SIR scheduling there were a few occurrences, but they were considerably fewer than the TCP retransmit timeouts. The no-feedback timeouts cannot be said to be spurious, since their purpose is to prevent data from being sent continuously at the same rate when it is not getting through. Fine-tuning the duration of this interval such that excessive packet loss does not occur when the bandwidth is suddenly decreased is however important, and so is finding a way to allow the flow to start over relatively quickly in the future.
[Figure 1.2: The average number of congestion events detected through later-sent packets arriving at the receiver in TFRC, or through three duplicate acknowledgments in TCP, as a function of file size (Mbits).]
Congestion Control in a High Speed Radio Environment 27
[Figure: The share of the slots as a function of the potential number of receivers in each slot, for SACK with SIR and RR scheduling at 50 and 65 mobiles.]
[Figure: The share of the slots as a function of the potential number of receivers in each slot, for TFRC with SIR and RR scheduling at 50 and 65 mobiles.]
When comparing the number of bytes transferred with TCP under SIR scheduling with 50 mobiles and under RR scheduling with 65 mobiles, the difference is smaller than when comparing the system throughput for the same number of mobiles under the two schedulers. This indicates that an RR scheduler needs a larger number of mobiles to generate the same offered load, i.e., number of transfers. With TFRC the RR scheduler never reaches the same levels as the SIR scheduler. Since TFRC is unreliable, the artifacts that may result from SIR scheduling, i.e., higher loss rates, do not lead to retransmissions or to as severe send rate reductions. Therefore it is likely that a higher offered load can be sustained and that the limiting factor may be the influence of the loss rate on the quality.
[Figure: Bit-rate (bps) as a function of the potential number of receivers in each slot, for SACK and TFRC with SIR and RR scheduling.]
users should have a bit rate exceeding 50 Kbps. We found that this condition was met for both protocols. When looking at the 5th percentile bit rates on a per-flow-size basis (Figures 1.6 and 1.7), we find that it is the small flows that do not reach 50 Kbps. RR scheduling results in higher transfer rates for this group of flows than SIR scheduling, but the RR scheduler operates at a lower offered load with the current application model.
5 Discussion
Future studies include investigating a range of propagation delays for the wired
links in the path and different buffer strategies at the wireless channel. With
these additional dimensions in the evaluation follow needs to more accurately
track delay spikes and their influence on the probability for packet loss. Finding
appropriate buffer sizes, that balance the risk of buffer overflow and long queuing
delays for wireless channels and different applications is non-trivial. Especially if
TFRC and TCP are to co-exist in an environment where the available channel
capacity can vary substantially. In this study we used the same application
model for both TFRC and TCP, in the future we would like to include a model
of a streaming application for TFRC and look at other ways to distribute the
transfers among the mobiles.
[Figure 1.6: The 5th percentile bit-rate (bps) for SACK with SIR and RR scheduling at 50 and 65 mobiles.]
6 Conclusions
We have performed an initial investigation of how congestion control at the transport layer leads to different physical channel utilization patterns in a high-speed shared wireless cellular environment. We have found that, with an application model where the waiting time is initiated as soon as a transfer is finished, the observed load is the result of both the nature of the scheduling algorithms for the shared environment and the congestion control algorithms.
As expected, the SIR scheduler gives higher average transfer rates at the expense of fairness compared to the RR scheduler. Since high-SIR users complete their transfers faster with the SIR scheduler, a larger part of the generated load comes from these users. In general, a higher load is created for the same number of mobiles with SIR scheduling than with RR scheduling. The main reason is that the channel is better utilized, partly because the average transfer times are shorter, which leads to a faster initialization of the following transfers.
The difference in transfer rates between TFRC and TCP is small, although the system throughput is higher with TFRC. This can be explained by the distributions of the number of potential receivers being similar; thus the retransmissions performed by TCP take up capacity corresponding to the additional transfers performed with TFRC.
We conclude that the common type of application model used in this study leads to offered loads that depend on algorithms at both the transport and the physical layer. This is however not unreasonable, since users are likely to transfer more data if they get fast responses.
[Figure 1.7: The 5th percentile bit-rate (bps) for TFRC with SIR and RR scheduling at 50 and 65 mobiles.]
Paper 2
To be presented at EW 2005
Mats Folke, Sara Landström and Ulf Bodin, “On the TCP Minimum Retransmission
Timeout in a High-speed Cellular Network”. To be presented at European Wireless,
Nicosia, Cyprus, April 10-13 2005.
On the TCP Minimum Retransmission Timeout
in a High-speed Cellular Network
Mats Folke Sara Landström Ulf Bodin
Division of Computer Science and Networking
Luleå University of Technology
Sweden
Abstract
1 Introduction
The High-Speed Downlink Shared Channel (HS-DSCH) in Wideband CDMA (WCDMA) release 6 has theoretical peak bit-rates for data services of 14 Mbps [38]. Moreover, delays considerably shorter than for the shared data channel technologies in previous releases of WCDMA are supported.
HS-DSCH is primarily shared in the time domain, where users are assigned time slots according to a scheduling algorithm that runs independently at each base station. The short Transmission Time Interval (TTI) of 2 ms enables fast link adaptation, fast scheduling and fast Hybrid Automatic Repeat reQuest (HARQ). The channel was designed for bursty Internet traffic, typical of web browsing.
TCP (Transmission Control Protocol) ensures reliable transfer of HTTP traffic. Avoiding delay spikes is important to TCP. In particular, delay spikes may cause spurious timeouts, resulting in unnecessary retransmissions and multiplicative decreases in congestion window sizes, as described by Inamura et al. [32]. There are several mechanisms in HS-DSCH that can cause considerable delay variations, appearing as delay spikes to TCP.
In HS-DSCH, the data rate depends on the Signal-to-Interference Ratio (SIR) of the receiving user. Consequently, fluctuations in the interference levels lead to delay variations. SIR is affected by path loss, fading and interference from other transmissions. Schedulers aiming at optimizing system throughput give precedence to users with high SIRs. With a Round Robin (RR) scheduler, the delay of an individual IP packet is determined both by the number of active users and by the SIR of the receiving user.
Using the network simulator version 2 (ns-2) [44] we evaluate the performance of TCP Sack [14], [42], [4] and TCP NewReno [16] for the RR and SIR schedulers, respectively. Modern implementations of TCP have a lower minimum bound on the retransmission timer than the customary 1 second. In this paper we evaluate the sensitivity of TCP to the setting of this minimum bound and its impact on the number of spurious timeouts, fairness, goodput and throughput.
2 TCP fundamentals
In TCP the send rate is gradually increased and drastically decreased according to its congestion control and avoidance mechanisms, thus providing the link layer with an irregular flow of data. Typically, a TCP source in slow start begins by sending two to four segments [1] and then waits for the receiver to acknowledge them before releasing more data. The send rate is increased exponentially as long as the acknowledgments keep arriving in time. This results in TCP sources alternating between releasing bursts of data and being idle, until they have opened their congestion windows enough to always have data buffered for HS-DSCH.¹ For short transfers, TCP may never reach such a window size.
When the first packet is lost, TCP leaves slow start and enters congestion avoid-
ance, where the send rate is increased linearly.
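The burst-and-wait pattern of slow start can be illustrated with a small model of the idealized window doubling; a sketch under a no-loss assumption, not the simulator code used in the paper.

```python
def rtts_in_slow_start(total_segments, initial_window=2):
    """Count the round trips needed to deliver a transfer when the
    congestion window doubles every RTT (idealized, loss-free slow
    start starting from the two-segment initial window)."""
    window, sent, rtts = initial_window, 0, 0
    while sent < total_segments:
        sent += window   # one burst of `window` segments per round trip
        window *= 2      # exponential growth while ACKs arrive in time
        rtts += 1
    return rtts
```

A seven-segment transfer, for example, already needs three round trips, which is why short transfers spend their whole lifetime in slow start and keep the link-layer buffer intermittently empty.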
When a new segment creates a gap in the receive buffer (i.e. its segment
number is not consecutive with respect to previous segments’), the receiver
generates a duplicate acknowledgment indicating where the beginning of the first
gap is. If three duplicate acknowledgments are consecutively received, the TCP
source assumes that the bytes pointed at have been lost due to buffer overflow
somewhere along the data path. The missing bytes are retransmitted and the
congestion window is reduced to half its current size. This retransmission is
called a fast retransmit.
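The duplicate-acknowledgment trigger described above can be sketched on the sender side as follows; an illustrative model only, with the standard threshold of three duplicates.

```python
class FastRetransmitDetector:
    """Sender-side sketch: count duplicate acknowledgments and signal a
    fast retransmit when the third duplicate for the same byte arrives."""
    DUP_THRESHOLD = 3

    def __init__(self):
        self.last_ack = None
        self.dup_count = 0

    def on_ack(self, ack_no):
        """Return True exactly when the third duplicate acknowledgment
        arrives, i.e., when a fast retransmit should be triggered."""
        if ack_no == self.last_ack:
            self.dup_count += 1
            return self.dup_count == self.DUP_THRESHOLD
        self.last_ack = ack_no
        self.dup_count = 0
        return False
```

Note that only the third duplicate fires the retransmission; further duplicates for the same segment do not trigger it again.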
¹We assume that the congestion windows, and not the receive windows, limit the TCP send rates.
For a fast retransmit to take place, at least three segments sent after the first lost segment must arrive at their destination and trigger duplicate acknowledgments. If segments at the end of a transfer are lost, or multiple packets are lost from one window, there might not be enough segments left to trigger a fast retransmit. The send window may also be too small to begin with. In such cases the TCP source must rely on its timeout mechanism for recovery. If the oldest segment is not acknowledged within a time frame called the retransmit timeout (RTO), the TCP source starts over from a congestion window of one segment and re-enters the slow start phase. It then retransmits the presumably lost segment.
The sender continuously samples the round trip time (RTT) and adjusts
the RTO. The RTO is based on the mean RTT and a factor accounting for
the fluctuations in the RTT. Traditionally, there has been a lower bound of 1
second on the RTO due to poor clock granularity. We will refer to this bound as
the minRTO. The clock granularity has however improved and therefore some
modern implementations have chosen to significantly reduce the lower bound.
For instance Linux version 2.4 uses a minRTO of 200 ms. This might have
an impact on TCP performance over wireless links, where the lower bound
has shielded against delay spikes in the range of the lower bound. Such delay
increases can occur if the available forwarding capacity rapidly decreases and
they may cause the retransmit timer to expire prematurely.
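The RTO computation described above (a smoothed RTT plus a term for its fluctuations, floored at the minRTO) can be sketched with the standard estimator in the spirit of RFC 2988; the gain constants below are the standard ones, not values taken from the paper.

```python
class RtoEstimator:
    """RFC 2988-style retransmission timeout: smoothed RTT plus four
    times the RTT variation, with a configurable minRTO floor."""
    ALPHA, BETA = 1 / 8, 1 / 4  # standard smoothing gains

    def __init__(self, min_rto=1.0):
        self.min_rto = min_rto
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        """Feed one RTT measurement (seconds) and return the new RTO."""
        if self.srtt is None:             # first measurement initializes state
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return self.rto()

    def rto(self):
        return max(self.min_rto, self.srtt + 4 * self.rttvar)
```

With a 100 ms RTT, the computed RTO is around 300 ms, so a 1-second minRTO dominates while a 200 ms minRTO leaves the timer exposed to delay spikes of a few hundred milliseconds.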
With Selective Acknowledgments (SACKs) [14], [42], the receiver can inform the sender about all non-contiguous blocks of data that have been received; thus the sender knows which segments to retransmit. Without the SACK option the sender does not know exactly which packets have been lost. TCP NewReno [16] is the TCP variant recommended when one of the two communicating TCP end points in a session does not support the use of SACK.
The NewReno algorithm is active during fast recovery, i.e., from the receipt of three duplicate acknowledgments until a timeout occurs or all the data sent has been acknowledged. In short, the NewReno algorithm considers each duplicate acknowledgment to be an indication of a segment leaving the network, and therefore the sender is allowed to send a new segment on each duplicate acknowledgment. This variation of the TCP congestion recovery behavior is more likely to keep the ACK clock going during loss events than that of TCP Reno, thereby avoiding a timeout. The difference compared to SACK-based loss recovery [4] is that the NewReno sender does not know where the gaps in the receive sequence are.
3 Method
The impact of different settings of the TCP retransmit timeout lower bound (minRTO) has been evaluated through simulations. In this section we introduce the simulation environment; thereafter the chosen evaluation metrics are presented.
[Figure 2.1: Simplified topology illustrating the connection between the traffic sources and the mobile nodes.]
We also evaluate the effect of changing the lower bound of the retransmission
timer on system performance. The objective is to maximize system goodput,
while maintaining fairness between the users. The system throughput is useful
when analyzing the goodput, since it gives an indication of the amount of traffic
offered to the system.
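The excerpt does not state which fairness measure is used for "fairness in goodput"; Jain's fairness index is a common choice for this kind of per-user goodput comparison, and is shown here purely as an illustrative assumption, not as the papers' actual metric.

```python
def jains_index(goodputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2). It ranges from
    1/n (one user receives everything) to 1 (perfectly equal shares)."""
    n = len(goodputs)
    total = sum(goodputs)
    squares = sum(x * x for x in goodputs)
    return (total * total) / (n * squares)
```

Equal per-user goodputs give an index of 1, while a single user monopolizing the channel among n users gives 1/n, so values between roughly 0.5 and 0.7 indicate a moderately uneven goodput distribution.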
4 Results
In Figures 2.2 and 2.3 we clearly see that a longer minRTO results in a smaller share of the flows suffering from spurious timeouts.³ By comparing Figures 2.2 and 2.3 we find that the SIR scheduler causes fewer spurious timeouts for a shorter minRTO, whereas the RR scheduler is better (i.e., fewer spurious timeouts) for a longer minRTO. We also see that most delay spikes do not last for more than 0.5 seconds, because for longer minRTOs the share of flows suffering from spurious timeouts is virtually zero.
Different values of the minRTO do not result in any significant differences
in goodput fairness, except when using an RR scheduler at high loads. For this
case a longer minRTO is better than a short one, as shown in Figures 2.4 and 2.5.
We believe that this decrease in fairness is the result of the increase in spurious
timeouts. Comparing the two schedulers, we see that the RR scheduler produces
slightly higher fairness than the SIR scheduler for moderate load. Regardless
of scheduler and minRTO, the fairness steadily decreases as the load increases
above 75 users.
³We have also looked at the total number of spurious timeouts, for which the results correspond.
[Figure 2.2: SIR scheduling: the share of all flows experiencing at least one spurious timeout using TCP Sack, for minRTO values from 0.0 s to 1.0 s, as a function of the number of users. The confidence level is 90%. TCP NewReno gave similar results.]
[Figure 2.3: RR scheduling: the share of all flows experiencing at least one spurious timeout, for minRTO values from 0.0 s to 1.0 s, as a function of the number of users.]
[Figure 2.4: The fairness in goodput among different flows using TCP Sack and a specified scheduler, for different values of minRTO, as a function of the number of users. The confidence level is 90%. TCP NewReno gave similar results.]
[Figure 2.5: Fairness in goodput as a function of the number of users.]
[Figure 2.6: The total throughput in the system using TCP Sack, averaged over ten runs for different values of minRTO, as a function of the number of users. The confidence level is 90%. TCP NewReno gave similar results.]
Figures 2.6 and 2.7 present the throughput for the whole system. For SIR
scheduling, the different values of the minRTO do not result in any differences
in throughput. We note a small difference in throughput when using an RR
scheduler at high loads.
5 Discussion
The results presented raise a key question: why is RR scheduling more sensitive to changes in the minRTO than SIR scheduling? Spurious timeouts occur when the delay suddenly increases, such that a packet is delayed long enough for the RTO timer to expire. In our system, increased packet delays are the result of intensified competition at the MAC layer.
With an RR scheduler the competition is intensified for all users whenever a new user arrives at a cell, since they all compete on equal terms. However, given the slow start behavior of TCP, the traffic of one new user is not enough to create a delay spike. Several new users must arrive within a short period of time for any rapid increase in competition to occur. With SIR scheduling the arriving users only compete with the users having worse SIR than
themselves. This means that if a number of users arrive at a cell, the likelihood of all of them contributing to the competition observed by a particular user is smaller.
[Figure 2.7: The total throughput in the system as a function of the number of users.]
When increasing the load beyond the current point, we speculate that the minRTO might have an effect for SIR scheduling. The average round trip times for the same number of users are shorter for SIR scheduling than for RR scheduling, indicating that the SIR scheduler is more efficient. This is more evident during high loads.
Furthermore, we have compared cumulative distributions of the RTTs for the two schedulers. In general the RTT for SIR scheduling is shorter, but there are several occurrences of very long RTTs compared to RR scheduling.
We have only studied TCP traffic and even though TCP probably will be
the protocol used by most of the applications, it is of interest to discuss its
performance when competing with traffic using UDP. A UDP traffic source may
very well start sending at a high rate compared to the start-up behavior of TCP.
This means that a single, or a few high-rate UDP flows can cause sudden service
interruptions interpreted as delay spikes by the TCP flows transferring data in
the same cell.
To conclude, we see that there are differences in the number of spurious timeouts between the two schedulers for the minimum retransmission bounds studied and for our application model. These differences do not seem to have any major effect on fairness, goodput or throughput; nor does the choice between the two TCP versions.
Paper 3
Technical report, Luleå University of Technology
Sara Landström and Lars-Åke Larzon, “Buffer management for TCP over HS-DSCH”.
Technical report, LTU–TR–05/09–SE, Luleå University of Technology, Sweden, Febru-
ary 2005.
Buffer Management for TCP over HS-DSCH
Sara Landström†, Lars-Åke Larzon†,‡, Ulf Bodin†
†Luleå University of Technology
‡Uppsala University
Abstract
1 Introduction
On the Internet, buffering is usually performed on a per-link basis, except when
it comes to wireless cellular systems, where per-user queuing is common practice.
Previous studies of buffer management over wireless cellular systems focus on
dedicated channel types [59], [10].
In this paper we study how appropriate buffer management can improve the performance of HS-DSCH, which is a shared channel, when transfers are made using TCP as the transport protocol. TCP connections in the slow start phase alternate between sending data and waiting for acknowledgments. Improved link utilization may therefore be achieved through time division. With the increased amount of data services being offered over wireless cellular networks, it is probable that the shared channel concept will become increasingly important.
In low-load situations, buffer management for HS-DSCH primarily targets the user experience in terms of transfer rates. When the traffic load increases, buffer management can help to ensure that the resources are spent wisely, since it interacts with the TCP congestion control and avoidance mechanisms.
One of the key issues is that we do not want to transfer stale data or multiple
copies of the same data over the link. It is therefore likely that the queue
should be kept small to prevent data from aging in the queue and unnecessarily
triggering timeouts. Meanwhile, we want to minimize the number of packets
that have to be dropped in order to keep the buffer size small. We also want to
enable high transfer rates and ensure that data is available to be transferred.
We will now present the main features of HSDPA and relate them to TCP
and current buffer management principles. We also expand on the different
aspects of buffering and previous work before presenting the results from a
simulation study of queue management for HS-DSCH.
in the form of lost packets and by the use of a timer.¹ The buffer strategy interacts with these mechanisms through its drop pattern and the delay that it induces. Choosing an appropriate buffer strategy is thus important to ensure high channel utilization and acceptable transfer rates.
Passive Queuing
The traditional approach to buffering is to set an absolute limit on the amount
of data that can be buffered. Packets are then dropped when the buffer capacity
is exhausted. This strategy is known as passive queue management.
to wired links, which means that each packet can add substantial delay and slow
down loss recovery.
We argue that whether to base decisions on the average queue size or not also depends on the number of flows being handled. In the simple case when there is only one flow, it is possible to detect overbuffering with knowledge of the transport protocol and the current buffer level for a given bandwidth-delay product. Essentially, a packet should be dropped as soon as the lower threshold is exceeded, to get a prompt send rate reduction in slow start. Thereafter, equally spaced drops are preferable given the probing behavior of TCP.
With RED the likelihood of losing a packet is high close to the upper threshold, but rather low close to the lower threshold. Increasing the dropping probability will only marginally increase the likelihood of dropping a packet at the right moment, while also increasing the risk of losing multiple packets from the same TCP window. Figure 3.1 illustrates this relation. Dropping more than one packet from a TCP window complicates loss recovery [14].
[Figure 3.1: RED dropping probabilities with different maximum drop probabilities (10% and 100%) and different relations between the lower and upper thresholds (t_max = 3·t_min or 4·t_min). The lower threshold is here set to 5 IP packets.]
A change has later been made to the RED algorithm [15] in order to decrease the sensitivity to the parameter settings. Instead of dropping all packets when the average queue size exceeds t_max, the dropping probability is slowly increased from max_p to 1 between t_max and 2·t_max. An evaluation of this modified algorithm is found in [56].
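The dropping probability profile described above, in both its classic and modified ("gentle") form, can be sketched as follows. This is a simplified illustration that ignores RED's exponentially weighted queue averaging and its count-based adjustment of the per-packet drop probability.

```python
def red_drop_probability(avg_q, t_min, t_max, max_p, gentle=False):
    """RED dropping probability as a function of the average queue size.
    Classic RED jumps to probability 1 above t_max; the modified (gentle)
    variant instead increases the probability linearly from max_p to 1
    between t_max and 2*t_max."""
    if avg_q < t_min:
        return 0.0
    if avg_q < t_max:
        # linear ramp from 0 at t_min up to max_p at t_max
        return max_p * (avg_q - t_min) / (t_max - t_min)
    if gentle and avg_q < 2 * t_max:
        # gentle region: linear ramp from max_p up to 1 at 2*t_max
        return max_p + (1 - max_p) * (avg_q - t_max) / t_max
    return 1.0
```

With t_min = 5, t_max = 15 and max_p = 0.1, a queue average of 20 packets is dropped with certainty under classic RED but only with probability 0.4 under the gentle variant, which illustrates the reduced parameter sensitivity.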
[Figure 3.2: A graphical comparison of the three queuing principles, with the dropping probability on the y-axis. The proportions in this figure are not exact.]
For HS-DSCH, it is not necessarily the wireless hop that dominates the pipe capacity, as was the assumption for the 3G links in the previous studies [59], [10]. The actual radio link round trip time is short and the available bit rate can vary substantially, which means that other guidelines for how to set t_min have to be applied.
Dropping Policies
Data may be dropped from the tail or the front of the queue. A packet may also be randomly selected for dropping. Randomly selecting a packet is mainly of interest when the buffer contains packets from several transfers and users. In such a case we want to distribute the packet losses among the flows and primarily drop packets belonging to flows that occupy a large share of the buffer, without having to keep track of individual flows. Since each buffer is dedicated to one user in the case of HS-DSCH, we will not consider random dropping further.
In [63], the drop-from-front scheme was shown to give a shorter average
queuing delay than drop-from-tail for passive buffering. The decrease in delay
is roughly proportional to the fraction of packets dropped, since dropping from
the front decreases the service time.
Another motivation for the use of drop-from-front is that the fast retransmit
mechanism of TCP can be exploited to convey the congestion signal faster to
the sender as proposed in [39], which for instance can help to avoid a large slow
start overshoot. A large buffer overflow in slow start has been shown to be a
problem in low statistical multiplexing environments [25].
Finally, if the passive buffer only keeps data for one transfer the drop-from-
front approach ensures that there are always enough segments to trigger a fast
retransmit following the dropped segment (assuming that the buffer can hold
at least three segments).
Although we have discussed the dropping policies from the perspective of passive queue management, most buffer management algorithms can be arbitrarily combined with a drop policy. In this study we consider passive buffering with a drop-from-tail scheme (DT), passive buffering where packets are dropped from the front (DF), RED with drop-from-front (RED) and finally PDPC with drop-from-front (PDPC). Buffer sizes are measured in IP packets. The notation “DT 4” translates to a passive queue management algorithm with packets being dropped from the tail and room for at most 4 IP packets.
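The two passive variants, DT and DF, differ only in which packet is sacrificed when the buffer is full, and can be sketched with a single class; an illustration of the dropping policies, not the ns-2 implementation used in the study.

```python
from collections import deque

class PassiveBuffer:
    """Passive queue with an absolute limit in IP packets, dropping from
    the tail (DT) or from the front (DF) when full. The notation 'DT 4'
    corresponds to PassiveBuffer(4, drop_front=False)."""

    def __init__(self, limit, drop_front=False):
        self.limit = limit
        self.drop_front = drop_front
        self.queue = deque()

    def enqueue(self, pkt):
        """Return the dropped packet, or None if `pkt` was accepted."""
        if len(self.queue) < self.limit:
            self.queue.append(pkt)
            return None
        if self.drop_front:
            dropped = self.queue.popleft()  # DF: evict the oldest packet
            self.queue.append(pkt)          # and accept the new arrival
            return dropped
        return pkt                          # DT: reject the arrival itself

    def dequeue(self):
        """Deliver the next packet to the channel (None if empty)."""
        return self.queue.popleft() if self.queue else None
```

Note the asymmetry exploited by drop-from-front: the lost packet is the oldest one, so the loss signal reaches the receiver, and thereby the sender, a full queueing delay earlier than with tail drop.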
2 Evaluation
We use simulations to illustrate the effects of buffer management over a shared
channel. The data has been obtained through simulations using the Network
Simulator version 2.27 (ns-2) [44]. For the simulations the PDPC algorithm was
implemented according to the state chart in [59] and the model of HS-DSCH,
first used in [5], was extended by wrap-around for interference calculations and
moving users. See [3] for an explanation of wrap-around for moving users.
The transport protocol is TCP SACK, as implemented in the ns-2 module tcp-sack1. The connection set-up, but not the tear-down, was simulated. Based on the Ethernet MTU of 1500 bytes, the TCP segment size was set to 1460 bytes.
Simulation model
The network topology is shown in Figure 3.3. Instead of a 3G link, we use a wired link with a corresponding fixed delay and bandwidth. The purpose is to illustrate the behavior of the queue management principles when the buffer is dedicated to one flow. Knowledge of the general characteristics, such as the variations in queue length and drop patterns, supports our evaluation of the buffer management strategies for HS-DSCH.
[Figure 3.3: The network topology for the simulations of the dedicated channel: a service provider connected via a 70 ms link and a 64 kbps/60 ms link to the user. The bandwidth of the first link is over-provisioned.]
The buffer size is given in IP packets, and the maximum queue length for the passive buffer scheme corresponds to t_min in the active queue management algorithms. Both RED and PDPC were set to drop packets from the front. The relation between t_min and other parameters in PDPC follows the description in Section 1.2.
RED is the most complex of the algorithms investigated and it includes a random element. The configuration of the RED parameters is given in Table 3.1. For comparison purposes, the distance between the two thresholds, t_min and t_max, and the maximum dropping probability have the same settings as in [59]. Each TCP transfer was 250 kbytes.
Results
The trend for tail drop is that the number of packets lost increases with the queue size up to a buffer capacity of about 40 IP packets, see Figure 3.4. The reason is that the slow start overshoot is potentially bigger the larger the buffer is. At larger buffer sizes the drops occur towards the end of the transfer and thus fewer segments are dropped. However, dropping segments late is costly, since a timeout is often necessary to recover, which is reflected by the noticeably longer transfer times.
[Figure 3.4: The number of packets lost as a function of the buffer size in IP packets, for DT, RED, DF and PDPC.]
Discussion
RED is difficult to configure. By reducing the distance between the upper and lower thresholds, the average queuing delay can be reduced, but we instead increase the risk of dropping closely spaced packets. Another alternative would be to use the modified algorithm, which allows for a slow increase of the dropping probability between t_max and 2·t_max. We kept the configuration we had in this experiment, since our focus is not on optimizing any particular algorithm, but rather on finding general guidelines that will apply to HSDPA.
We repeated the experiments with a faster outgoing link and different delays.
When the bandwidth is higher and the delays shorter, loss recovery is faster and
thus has less effect on the transfer times as can be expected.
[Figure 3.5: (a) The transfer time in seconds and (b) the average queuing delay in seconds, as functions of the buffer size in IP packets, for DT, PDPC, RED and DF.]
Simulation model
The application model determines the results to a large extent. For instance, if most files are small enough to fit into the buffer, the dropping strategy never comes into play. In this section the effects of two different file size distributions are studied. In the first scenario, 250 kbyte TCP transfers are made; in the second, file sizes are drawn from a long-tailed distribution where the majority of the transfers are short.
A fixed number of mobile users are spread out over the simulation area. New
sessions are generated independently of the perceived transfer rates, through a
session generator for which the average waiting time between sessions can be
configured. The waiting time is uniformly distributed. The destination is picked
randomly among the idle users. If there is no idle user, the session is dropped.
The session generator enables comparisons to be made at a reasonably sim-
ilar offered load as opposed to an application model where each user generates
its next session after a waiting time that is initiated when the previous transfer
has been concluded. In the latter case, a higher average transfer rate results in
more transfers being generated. Even with the session generator a system with
low transfer rates has less ability to accept the offered sessions, since all the mo-
bile users may be occupied. System goodput captures both the achieved transfer
rates and the degree to which the system performs useful work. Each simulation
corresponds to 5 minutes of simulated time and each scenario was repeated
ten times.
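The session generator described above can be sketched as follows; the function name, the transfer-duration placeholder and all numeric values are illustrative, not taken from the simulator.

```python
import random

# Sketch of the session generator: new sessions arrive independently of
# the perceived transfer rates, with uniformly distributed waiting times,
# the destination is picked among the idle users, and the session is
# dropped when no user is idle.

def generate_sessions(n_users, mean_wait, sim_time, rng=random.Random(1)):
    idle = set(range(n_users))
    busy_until = {}          # user -> time the current transfer ends
    accepted = dropped = 0
    t = 0.0
    while t < sim_time:
        # Uniform waiting time with the configured mean.
        t += rng.uniform(0.0, 2.0 * mean_wait)
        # Release users whose transfers have finished.
        for u, end in list(busy_until.items()):
            if end <= t:
                idle.add(u)
                del busy_until[u]
        if idle:
            u = rng.choice(sorted(idle))
            idle.remove(u)
            # Placeholder transfer duration; in the simulator it depends
            # on the radio conditions and the scheduler.
            busy_until[u] = t + rng.uniform(0.5, 5.0)
            accepted += 1
        else:
            dropped += 1     # no idle user: the session is dropped
    return accepted, dropped
```

A system with low transfer rates keeps users busy longer, so more sessions find no idle user and are dropped, which is how the generator holds the offered load roughly constant across configurations.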
The cell plan consists of seven cells with omni-directional antennas and a 500 m
cell radius. Initially the mobile units are spread uniformly within a circle
enclosing the cell plan. For simplicity, a mobile is associated with the base
station to which it is closest in distance. A hand-over only results in one
missed transmission opportunity.
The performance is sensitive to radio conditions and positions of the mobile
users.
[Figure 3.6: The number of users as a function of speed in kph.]
Therefore a mobile unit is given a position, speed and direction for each
new session assigned to it. The speed is taken from a pedestrian and low mobility
speed distribution as shown in Figure 3.6 and recommended in [46], whereas
any direction is equally likely and positions are chosen as when initializing the
simulation.
The deterministic loss in signal strength due to distance is assumed to follow
a power law with a path-loss exponent (propagation constant) of 3.5. The location-dependent
path loss, referred to as shadow fading, is normally distributed in dB with a standard
deviation of 8 dB, and there is a 0.5 correlation between base stations. The
autocorrelation profile for the shadow process is first-order negative exponential
and we use a correlation distance of 40 m.
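A first-order negative-exponential autocorrelation with a given correlation distance can be realized with an AR(1) process (Gudmundson's model). The sketch below assumes a fixed step size along the mobile's path and is illustrative rather than the simulator's implementation.

```python
import math
import random

# Sketch of log-normal shadow fading: normal in dB (sigma = 8 dB) with a
# first-order negative-exponential autocorrelation and a 40 m correlation
# distance, realized as an AR(1) process sampled every step_m meters.

def shadow_fading_path(n_steps, step_m=1.0, sigma_db=8.0, d_corr=40.0,
                       rng=random.Random(7)):
    a = math.exp(-step_m / d_corr)   # per-step correlation coefficient
    s = rng.gauss(0.0, sigma_db)     # initial shadowing value in dB
    samples = [s]
    for _ in range(n_steps - 1):
        # Innovation scaled so the marginal std stays sigma_db.
        s = a * s + math.sqrt(1.0 - a * a) * rng.gauss(0.0, sigma_db)
        samples.append(s)
    return samples
```

With a 1 m step and a 40 m correlation distance, consecutive samples differ only slightly, while samples far apart along the path become nearly independent.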
Multi-path fading leads to self interference and loss of orthogonality when
data for several users are transmitted simultaneously within a cell using code
multiplexing. These phenomena are modeled by constants, which have the
values 0.1 and 0.4, respectively². All transmissions in other cells contribute to
the interference level.
In Table 3.2, the combinations of coding rates and modulation types that are
available in the simulator are summarized. We assume that 12 out of 16 codes
and a power of 10W have been allocated to HS-DSCH. Code multiplexing is
possible for up to three users in one time slot and the block errors are uniformly
distributed. Lost radio blocks are immediately retransmitted.
² A value of 1 would mean that all orthogonality is lost.
58 Congestion Control in Wireless Cellular Networks
[Figure 3.7: The simulation topology: the service provider, the bottleneck
buffer, the data users, and the 25 ms one-way delay.]
Since TCP has a bias against long round trip time connections, the server
was placed at the same distance from all base stations. This prevents the TCP
bias from affecting the results. The one-way propagation delay between the air
interface and the server was fixed to 25 ms in both directions. The topology is
depicted in Figure 3.7.
In reality, active queue management will be performed at the serving radio
network controller (SRNC) for HSDPA. We assume that the SRNC and the
base station can transfer data seamlessly between each other and that only a
small amount of data resides between the air interface and the actively managed
queue at any point in time.
Statistical methods
Details of the statistical methods used in this paper can be found in [45] and
the software used for the statistical computations is R [60]. Below we briefly
account for the applied methods and their underlying assumptions.
For comparison of means when we have two or three samples we chose the
paired t-test with the significance coefficient adjusted for multiple comparisons
using the method suggested by Bonferroni. The t-test assumes that the difference
between the data sets is normally distributed. There is one t-test for data
sets with equal variance and another for unequal variance. If the data sets are
normally distributed, Bartlett's test can be used to determine whether the
variances are equal. The assumption of normality is verified through a normal
probability plot.
The null hypothesis for the paired t-test is that there are no differences in
means and the alternative hypothesis is that there are differences in means. We
can reject the null hypothesis if the computed p-value is less than our predeter-
mined significance coefficient, which we have set to 0.05.
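The paired t-test with a Bonferroni adjustment can be sketched as follows. The p-value would normally come from a t distribution (here obtained via R), so the sketch only computes the test statistic, its degrees of freedom and the adjusted significance coefficient; names are illustrative.

```python
import math
from statistics import mean, stdev

# Sketch of the paired t-test statistic and the Bonferroni adjustment of
# the significance coefficient for multiple comparisons.

def paired_t_statistic(xs, ys):
    """t statistic and degrees of freedom for paired samples; assumes the
    differences are approximately normally distributed."""
    d = [x - y for x, y in zip(xs, ys)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1

def bonferroni_alpha(alpha, n_comparisons):
    """Significance coefficient adjusted for multiple comparisons."""
    return alpha / n_comparisons
```

With three pairwise comparisons and an overall significance coefficient of 0.05, each individual test is judged against 0.05/3 ≈ 0.0167.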
For multiple comparisons of means (more than three means to compare in
this study), we have used analysis of variance (ANOVA). ANOVA allows us to
extend our hypothesis to include more than two treatments on one population
or, alternatively, to ask whether all the means from more than two populations
are equal. This is equivalent to asking whether the treatments have any overall
effect. The assumptions are that the residuals resulting from the model have
equal variance and that they are normally distributed. Thereafter, Tukey's test³
has been performed to detect significant differences between means and to
construct 95% confidence intervals for these differences.
RR scheduling We start with the longest waiting time, 0.4 seconds, between
initiating new transfers and compare the system goodput for two buffer sizes
with DT. The paired t-test was performed to detect any significant difference in
mean system goodput. Our reference buffer, which can keep the entire transfer,
gives between 1635 and 9809 bits better system goodput per second and cell
than DT 30 with 95% confidence. The system goodput for the buffer of 4 IP
packets was not significantly different from that of the reference buffer at this
confidence level.
For a waiting time of 0.3 seconds, we study DT, DF, PDPC and RED with
4 and 30 IP packets as the maximum sizes of the buffers. We use a two-factor
ANOVA to analyze the data. Both the queue-strategy and the queue-size effects
are significant, as well as their interaction. Therefore we have to study
the effect of the queue strategy at each queue size and vice versa. Table 3.3
accounts for the 95% confidence intervals for the significant differences between
means. The table reveals that all schemes are significantly better than RED
4. The other short buffer configurations and the long RED queue give higher
system goodput than DT 30. DF 4, PDPC 4 and RED 30 give slightly higher
goodput than PDPC 30. Tukey’s method was used to perform the multiple
comparisons. We also compared DF 4 to the reference buffer using a paired
t-test. The null hypothesis that the means are equal could not be rejected at
the 95% confidence level.
When decreasing the waiting time further, we find that it is the same dif-
ferences in means that are significant and that these differences have increased
in size. There is also a small but significant difference in means between DF 30
and DT 4.
³ We could have analyzed the experiments using ANOVA and blocking, but the R implementation
does not support Tukey's test for blocked experiments. In practice this means that it
is harder to detect small differences.
Table 3.3: 95% confidence intervals for the significant differences in means with
RR scheduling for 0.3 seconds waiting time. The unit is bits per second and
cell.
Table 3.4: 95% confidence intervals for the significant differences in means with
SIR scheduling for 0.3 seconds waiting time. The unit is bits per second and
cell.
Simulation model We use the Pareto distribution with the average set to 25
kbytes and the shape parameter to 1.1 as recommended in [46] for Web traffic.
Values larger than 2 Mbytes are rounded down to 2 Mbytes. The number of
mobile units in this scenario is 200 and the performance was studied at two
offered loads: one where the average waiting time between new sessions was
0.0175 seconds and one with 0.015 seconds. The results were similar in both
cases.
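The file-size model can be sketched by inverse-CDF sampling from a Pareto distribution; the scale parameter follows from the requested mean, since an untruncated Pareto(shape, scale) variable has mean shape·scale/(shape − 1). The helper name and seed are illustrative.

```python
import random

# Sketch of the file-size model: Pareto-distributed sizes with shape 1.1,
# scaled so the (untruncated) mean is 25 kbytes, with values above
# 2 Mbytes rounded down to 2 Mbytes.

def pareto_file_size(mean_kb=25.0, shape=1.1, cap_kb=2048.0,
                     rng=random.Random(3)):
    scale = mean_kb * (shape - 1.0) / shape      # ~2.27 kbytes here
    u = 1.0 - rng.random()                       # uniform in (0, 1]
    size = scale / (u ** (1.0 / shape))          # inverse-CDF sampling
    return min(size, cap_kb)
```

With shape 1.1 the distribution is heavily long-tailed: the median is only a few kbytes, so the majority of transfers are short while a few large transfers dominate the offered volume.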
Results The differences in mean system goodput observed between the buffer
sizes in the previous scenario have been reduced and are no longer statistically
significant, which is to be expected since most files no longer overflow the larger
buffers. Neither the buffer strategy nor the interaction term has any significant
effect.
Discussion
We will first discuss the results for the long TCP transfers, which are strongly
connected to the actual buffer size that the buffer management principle oper-
ates at. For RR scheduling it is the small buffers, DF 4, DT 4 and PDPC 4, that
give the best performance in terms of system goodput. RED, which often drops
the first packet when the queue is larger than for the other schemes, results in a
lower system goodput for the short buffer size. With the large buffer size, RED
gives marginally higher system goodput than PDPC 30, whose average queue
length is larger than that of the passive schemes but smaller than that of
RED 30.
With SIR scheduling DF 4, DT 4 and PDPC 4 distinguish themselves from
the other configurations. It is likely that a shorter buffer contributes to a higher
degree of statistical multiplexing over the radio link and thus evens out unfairness
problems. The total amount of data buffered for HS-DSCH is also decreased,
resulting in a more dynamic system.
A drop-from-front policy is preferable to drop-from-tail for passive buffering
when the buffer is relatively large. For short buffers, the dropping policy has
no significant influence.
The mean system goodput for the long-tail distribution is higher than with
the large transfers for two reasons: fewer flows perform retransmissions and the
majority of the flows are short. Short flows only demand capacity for shorter
periods of time and thus have less impact on other simultaneous sessions. Short
flows also increase the degree of statistical multiplexing over the bottleneck link.
Simulation model
Three cases are studied:
• The exact same topology as before for HSDPA, see Figure 3.7.
Results
In Figures 3.8, 3.9(a) and 3.9(b) the results are shown. The median transfer
time out of 30 simulations has been plotted, since the data is not normally
distributed. Simulations have been carried out for buffer sizes of 5, 10, 15, 20,
30, 50 and 100 IP packets.
In the first scenario, the median transfer times are in the range from 0.6 to
2.0 seconds. PDPC gives the best performance over the entire range of buffer sizes. DT
and DF perform well up to a buffer size of 15-20 IP packets. At this point the
RED buffer no longer induces any packet losses and approaches the minimum
transfer time for this scenario.
[Figure 3.8: The topology in Figure 3.7 with only one TCP user. Median
transfer time as a function of the buffer size for DT, PDPC, RED and DF.]
In the second scenario, Figure 3.9(a), the minimum transfer time is around
1.3 seconds and the highest median transfer times are more than 3 seconds.
RED results in the lowest median transfer times for all buffer sizes. The other
buffers perform worse at small buffer sizes; PDPC, followed by DF and DT,
comes closest to the lower boundary.
With a low degree of losses over the wired hop, Figure 3.9(b), PDPC has the
lowest transfer times of about 0.6 seconds. The highest median transfer times
are two times larger. DF and DT have relatively low median transfer times
for buffers of 10-15 IP packets. For larger buffers their performance degrades,
whereas the performance of the RED queue improves.
Discussion
Compared to the previous scenario with the individual buffers in Section 2.1, we
have a higher and varying bandwidth. A buffer size of 10-15 IP packets seems
to give a stable performance for PDPC, DF and DT in all the investigated
scenarios. It is likely that this buffer size represent a good trade-off between
enabling high transfer rates and keeping the delay down. With shorter buffer
sizes, the TCP window is kept too small to allow high peak transfer rates. On
the other hand, when the buffer is larger, variations in the channel capacity
cause many packets to be lost until the buffer becomes large enough to prevent
them.
[Figure 3.9: Median transfer times for varying buffer sizes and environments.
(a) Median transfer time versus buffer size. (b) The same topology with 25 ms
one-way delay, but with randomly uniformly distributed losses of 1% added
over the fixed link.]
The extra degree of freedom when deciding whether to drop packets or not
strengthens the position of PDPC over channels with varying capacity. DF and
PDPC had similar performance in the scenario with individual buffers for a
fixed link capacity, see Section 2.1, whereas PDPC exhibits better performance
than DF over HS-DSCH.
3 Discussion
We have studied a scenario where only one transfer at a time takes place for
each user, but in reality a user can initiate several transfers at the same time,
for instance if HTTP is used without pipelining and the browser is configured to
allow more than one parallel connection. Such combined streams are more
aggressive than a single TCP session, which means that the queue sizes would
probably grow for RED and PDPC, possibly giving a more problematic error
recovery even for the smallest buffer configurations. The cost of keeping the
queue at a minimum for multiple simultaneous transfers through passive queuing
may be larger than in the case of only one transfer at a time, since it may
come at the price of many packets being dropped. The full impact on the
system of several transfers sharing a buffer is, however, part of our future plans.
RED was designed for a scenario where multiple flows traverse the bottleneck
buffer, which is a scenario that remains to be studied for HS-DSCH. For one flow
at a time, RED performs poorly. The reason is the late reaction to increases
in the queue length, which often results in timeouts in combination with losing
many packets. Together with passive drop-tail queue management, RED is
not to be recommended. Instead we argue for the use of passive drop-from-front
management and PDPC, which have been shown to be suitable for short buffers.
It is likely that to improve buffer management further, system parameters such
as radio conditions, link utilization and the current amount of data that is
buffered for the system as a whole should be taken into account. In low load
situations larger buffer sizes may allow higher transfer rates and high utilization,
but when the load increases the buffer sizes may have to be reduced to keep
buffering delays down and to allow smooth TCP operation.
4 Conclusions
HS-DSCH is a relatively new technique and it is hard to foresee what character-
istics the traffic mix for this channel will have. A queue management principle
that exhibits robustness to application parameters and handles TCP well is
therefore likely to be the best choice. There is a trade-off between low queuing
delays, system goodput and peak transfer rates which has been illustrated in
this paper. In general, drop-from-front is a better choice than drop-from-tail
for passive queue management.
Paper published as
Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Properties of TCP-like congestion
control”. In Proceedings of the Swedish National Computer Networking Workshop,
pages 13-18, Karlstad, Sweden, 23-24 November 2004.
Properties of TCP-like Congestion Control
Sara Landström†, Lars-Åke Larzon†,‡, Ulf Bodin†
† Luleå University of Technology
‡ Uppsala University
Abstract
In this paper we investigate the performance of TCP-like congestion con-
trol and compare it to TCP SACK. TCP-like congestion control is cur-
rently up for standardization as part of the Datagram Congestion Control
Protocol (DCCP) in the IETF. DCCP offers an unreliable transport ser-
vice with congestion control to upper layers.
We have found that TCP-like is fair to TCP SACK when the loss
rate is low. In the high loss, low round trip time regime, TCP-like seizes
more bandwidth and is able to better maintain a smooth send rate than
TCP SACK. In low round trip time environments, the absence of a lower
bound on the transmit timeout in TCP-like, which corresponds to the re-
transmission timeout in TCP, contribute to this difference in performance.
Another factor is the decoupling of the congestion control state from in-
dividual packets that is possible in TCP-like, since it offers an unreliable
transport service.
1 Introduction
Traditionally, TCP has been the dominating Internet transport protocol, but
time-constrained media services are becoming more frequent and promote the
use of UDP. Most UDP flows lack congestion control mechanisms and exist
at the expense of the TCP flows. The increased share of UDP flows might
eventually cause severe starvation of TCP flows or even a congestion collapse;
therefore, TCP-friendly rate regulation of all longer transfers is desirable from a
fairness perspective.
To support this ambition a new transport protocol, called the Datagram
Congestion Control Protocol (DCCP) [37], has been designed. It currently of-
fers the choice of two congestion control algorithms, TCP Friendly Rate Control
(TFRC) [20] and TCP-like congestion control [23]. The former has been stud-
ied in [20] and [62], but the latter has to our knowledge not been extensively
evaluated.
TFRC is targeted at applications desiring a smoother send rate than cur-
rently possible using TCP, whereas TCP-like congestion control is designed
to closely trace the behavior of TCP SACK [14], thus prioritizing throughput.
There are, however, differences that spring from TCP-like providing an unreliable
service while TCP enforces reliability. For instance, the delay variations are
expected to be smaller since retransmissions are not performed at the transport
layer and in-order delivery has been abandoned. The removal of these features
gives the application designer finer-grained control than ordinary TCP over
which data is to be sent and when. Thereby TCP-like may become an attractive
option for applications like streaming media, where the time constraints
may allow selective retransmissions.
In this paper we concentrate on mapping out the differences in the design of
TCP-like congestion control and TCP SACK. We also show, through simulations
in the Network Simulator, ns-2 [44], the impact they have on the send rate,
smoothness and fairness of the protocols. TCP is used as a reference point,
since TCP-like expressly attempts to imitate its behavior and also because it is
one of the most prevalent protocols.
Packet   9          10         11     12     13         14
Status   Not recv   Not recv   Recv   Recv   Not recv   Recv
injecting replicas of packets into the network when the original transmission
has merely been delayed. In TCP-like, new packets are sent when the timeout
expires; therefore, this argument is not compelling.
The standard behavior of TCP is to restart the retransmission timer when
the cumulative acknowledgment point is advanced. This tight coupling of the
congestion control state to individual packets increases the likelihood for
timeouts. For instance, if a fast retransmit of packet p is triggered, the retransmitted
packet will be the last packet to reach the receiver if there are other packets
already on the way. In TCP-like, by contrast, p would have been assumed lost
when the DCCP event corresponding to three duplicate acknowledgments arriving
in TCP occurred. The arrival of any of the packets already on their way, or of
the newly sent packet (sent instead of the retransmission) would then restart
the timer.
When a timeout occurs TCP-like resets pipe. Packets still in the network
at this point will not further reduce pipe, but could serve a purpose during the
following slow start period. In [13] the performance of different methods for
updating cwnd during slow start was investigated for TCP. Increasing cwnd by
the number of newly acknowledged packets, when slow start has been entered
due to a loss event, was deemed too aggressive. The reason is that after a loss
event the sender cannot be certain that acknowledged packets actually left the
network during the last round trip time; since the reports are cumulative, they
may have left the network long ago¹. In TCP-like, there is enough information
to deduce when a packet left the network, and we therefore suggest always
increasing cwnd based on the number of packets acknowledged. This option is,
however, left for future studies.
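The two slow-start update rules discussed above can be contrasted in a small sketch; this is a hypothetical helper, not DCCP or ns-2 code, and the clamping to ssthresh is a simplification.

```python
# Sketch contrasting two slow-start update rules after a loss event.
# The default increases cwnd by at most one packet per acknowledgment,
# the behavior deemed safe for TCP; per_acked credits every newly
# acknowledged packet, which the text suggests is safe for TCP-like
# because it can tell when a packet actually left the network.

def slow_start_update(cwnd, newly_acked, ssthresh, per_acked=False):
    if cwnd >= ssthresh:
        return cwnd                # congestion avoidance handled elsewhere
    if per_acked:
        cwnd += newly_acked        # credit every newly acknowledged packet
    else:
        cwnd += 1                  # at most one packet per acknowledgment
    return min(cwnd, ssthresh)
```

The difference matters most right after a timeout, when several packets may be acknowledged at once: the per-acknowledged-packet rule reopens the window in fewer round trips.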
To summarize, TCP-like can discount packets when they have been con-
firmed lost resulting in a quick exit from loss recovery. Also, the acknowledg-
ment of essentially any new packet restarts the transmit timer and releases a
new packet, which keeps the ack clock going through difficult congestion events
and decreases the likelihood of a timeout occurring. Finally, there is no lower
bound on the transmit timeout which means that less time is spent waiting for
the timer to expire in environments where the round trip time is short, if a
timeout is necessary to recover.
3 Simulations
Our evaluation is based on simulations carried out in the Network Simulator
version 2.27 (ns-2). The TCP SACK agent (TCPSack1) is part of the simulator
and includes most TCP algorithms such as fast recovery, fast retransmit, slow
start, congestion avoidance and limited transmit. We have been involved in the
implementation of DCCP and will use our DCCP implementation [43] which
implements all protocol features relevant for this study. In all simulations the
initial window was set to 2 packets and the buffers at both end points were set
large enough not to limit the send rate. The simulation scenarios are similar to
those presented in [20] when TFRC was introduced, which makes a comparison
of TCP-like and TFRC possible.
¹ Through the SACK option it can be derived when a packet left the network, but this is
[Figure 4.1: The receive rates (bytes/s) of TCP SACK and TCP-like congestion
control as functions of the loss rate (%), for round trip times of 20 ms, 100 ms
and 200 ms.]
Table 4.1: Results of the full factorial test: the effect of a 1 second minimum
transmission timeout and of reliability on the receive rate of TCP-like. Effects
and receive rates are in bytes/s.

RTT      Loss rate   p      Effect (min. timeout)   p      Effect (reliability)   Receive rate    R-Squared
200 ms   1%          1      0 ± 2987                0.69   −598 ± 3012            86765 ± 1506    45%
200 ms   5%          0.84   −64 ± 651               0.56   191 ± 657              35293 ± 328     61%
20 ms    10%         0.00   −18937 ± 2423           0.26   1393 ± 2443            95322 ± 1221    78%
200 ms   10%         0.00   −932 ± 346              0.04   372 ± 348              21580 ± 174     60%
round trip time of a connection is low and stable. However, it makes the protocol
more sensitive to delay spikes. The Eifel Detection and Response algorithms
could be implemented for TCP-like to mitigate this potential drawback.
The effect of enforcing a minimum transmission timeout of 1 second on
the receive rate of TCP-like can be easily investigated. It is also possible to
imitate application layer retransmissions by adding another packet to be sent
for each packet detected as lost by the sender. Table 4.1 gives the results of
an experiment designed as a full factorial test [45] where these two features are
turned on and off in a few environments characterized by their round trip time
and loss rate. Each setting was simulated thirty times².
A p-value less than 0.05 indicates that a factor is significant at the 95% confi-
dence level. The effect of the interaction between the factors was not significant
and is therefore excluded from the table. A minimum transmission timeout of
1 second has a negative impact on performance when the loss rate is high and
the effect becomes significant for lower loss rates when the round trip time is
low. At the 95% confidence level, reliability results in insignificant changes in all the
investigated scenarios, except when both the round trip time and the loss rate
are high. The Effect column gives information about the confidence interval for
the effect.
The R-Squared statistic captures how much of the variability can be
explained by the fitted model. The best fit is given when the RTT is 20 ms and
² The average effect found in the table is computed over all simulations for that particular
environmental setting; these values are therefore not directly comparable to the receive rates
shown in Fig. 4.1.
[Figure 4.2: A comparison of the cwnd sizes (packets) over time (seconds) for
TCPL and SACK under the same conditions.]
the loss rate 10%. Part of the discrepancy between the receive rate observed by
TCP-like and TCP SACK is still unaccounted for. It is reasonable to assume
that the remaining difference can be attributed to the decoupling of the conges-
tion control state from individual packets as illustrated in the previous section.
This assumption is strengthened by the trace of the congestion windows when
the loss rate is 5% and the delay is 10 ms, shown in Fig. 4.2. Timeouts are less
frequent in the case of TCP-like (the congestion window is never down to one
packet) and there are fewer long loss recovery periods identified by cwnd being
frozen.
Figure 4.3: The topology used in the simulations. The queue parameters were
scaled with the bandwidth that in some scenarios was up to 256 Mbytes/s.
[Figure 4.4: TCP-like and TCP SACK when sharing a 16 Mbytes/s link with
RED queuing. Upper: loss rate (%) versus the total number of TCP SACK and
TCP-like flows. Lower: normalized send rate of the SACK and TCPL flows
versus the total number of flows.]
[Figure 4.5: Coefficient of variation of the send rate between flows of the same
type, as a function of the loss rate (%).]
The TCP-like flows maintain a low coefficient of variation although the loss
rate is high, whereas the spread of the throughput between the TCP SACK
sessions rapidly increases. When the loss rates are low, TCP-like and TCP
SACK exhibit similar fairness between the flows.
Fairness can also be measured over different time scales. For this purpose,
we ran 150 seconds long simulations. These simulations were partitioned into
time intervals of length δ and the send rate in every time interval computed.
The equivalence ratio in a time interval for users A and B is then the minimum
of the two ratios, sendrate_A/sendrate_B and sendrate_B/sendrate_A. By taking
the minimum of the two ratios, a value between 0 and 1 is obtained. For perfect
fairness this equivalence ratio should be 1. The average value of the equivalence ratios for a
time series gives an estimate of how the bandwidth has been distributed on the
time scale δ [20]. The shorter the time scale, the more likely it is that we will
observe a smaller equivalence ratio.
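The computation can be sketched as follows; for positive rates, min(a, b)/max(a, b) equals the minimum of the two ratios a/b and b/a. The function name is illustrative.

```python
# Sketch of the equivalence-ratio computation: average over the time
# intervals of min(a/b, b/a) for the per-interval send rates of two
# users; 1 means perfect fairness on the chosen time scale.

def equivalence_ratio(rates_a, rates_b):
    ratios = []
    for a, b in zip(rates_a, rates_b):
        if a == 0.0 and b == 0.0:
            continue                          # no traffic in this interval
        ratios.append(min(a, b) / max(a, b))  # equals min(a/b, b/a)
    return sum(ratios) / len(ratios) if ratios else 1.0
```

For the worked example later in the text, equivalence_ratio([2.0, 1.0], [1.0, 2.0]) gives (0.5 + 0.5)/2 = 0.5, even though the two users received the same total bandwidth over both intervals.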
We have computed the equivalence ratio for two scenarios corresponding to
the situations when 32 and 128 flows respectively were active in Fig. 4.4. The
main difference is that the starting times of the flows are chosen from a uniform
random distribution in the interval between 0-40s. Previously all flows were
started within a time span of less than 10ms. We also removed the first 50
seconds before performing our analysis. The reduction in fairness compared to
the results in Fig. 4.4 indicates that new TCP SACK flows have a harder time
grabbing bandwidth from already active sources. Also, the equivalence ratio
does not consider which flow got more bandwidth during an interval. For
instance, if TCP-like sends at a rate that is twice that of TCP SACK in the
first interval and TCP SACK sends at twice the rate of TCP-like in the next
interval, the equivalence ratio will be (0.5 + 0.5)/2 = 0.5, not (0.5 + 1.5)/2 = 1.
4 Conclusions
TCP-like features several of the numerous TCP improvements that have been
proposed during the last decade. While being more aggressive than TCP Reno,
it is reasonably fair to TCP SACK at loss rates observed on the Internet today.
Simulations show that at shorter round trip times, up to approximately 100 ms,
the differences between TCP-like and TCP SACK are more pronounced.
If in addition the loss rate is high, not having a minimum transmission timeout
gives TCP-like better performance. TCP-like also recovers faster from severe
congestion as its lack of reliability makes it less dependent on individual packets.
In the future we would like to explore the delay variations generated by
the two algorithms observed from an application point of view. Also, TCP-like
attempts to regulate the acknowledgment pace when losses are detected on the
return path; investigating the performance impact that this algorithm may have
is another area to look into.
[Figure 4.6: Equivalence ratio as a function of the time scale for throughput
measurement: the upper line is for the case of 32 active flows and the lower
line represents 128 flows. The lower figure shows the coefficient of variation
(smoothness) of the flows.]
Bibliography
[5] Ulf Bodin and Arne Simonsson. Effects on TCP from Radio-Block Schedul-
ing in WCDMA High Speed Downlink Shared Channels. In QoFIS, volume
2811 of Lecture Notes in Computer Science, pages 214–223. Springer, 2003.
[7] Jin Cao, William S. Cleveland, Dong Lin, and Don X. Sun. On the nonsta-
tionarity of Internet traffic. In ACM Sigmetrics, pages 102–112, Cambridge,
Massachusetts, United States, 2001. ACM Press.
[8] Vinton G. Cerf and Robert E. Kahn. A Protocol for Packet Network Inter-
connection. IEEE Transactions on Communications, COM-22(5):637–648,
May 1974.
[10] Hannes Ekström and Andreas Schieder. Buffer Management for the Inter-
active Bearer in GERAN. In IEEE VTC, pages 2505–2509, Apr. 2003.
[11] Magnus Erixzon. DCCP-Thin for Symbian OS. Master’s thesis, Luleå
University of Technology, Sep. 2004. 2004:261 CIV.
[13] Aaron Falk and Mark Allman. On the Effective Evaluation of TCP. ACM
SIGCOMM Computer Communication Review, 29(5):59–70, Oct. 1999.
[14] Kevin Fall and Sally Floyd. Simulation-based comparisons of Tahoe, Reno
and SACK TCP. ACM SIGCOMM Computer Communication Review,
26(3):5–21, Jul. 1996.
[15] S. Floyd. Optimum functions for computing the drop probability. E-mail
available at http://www.aciri.org/floyd/REDfunc.txt, Oct. 1997.
[17] S. Floyd and E. Kohler. Internet Research Needs Better Models. ACM
SIGCOMM Computer Communications Review, 33(1):29–34, Jan. 2003.
[18] Sally Floyd. A bug in the TFRC code in NS. Mail to the DCCP mailing
list, http://www.ietf.org/mail-archive/web/dccp/index.html, 2003.
[19] Sally Floyd, Mark Handley, and Eddie Kohler. Problem Statement for
DCCP. Internet draft, IETF, Oct. 2002. Work in progress.
[20] Sally Floyd, Mark Handley, Jitendra Padhye, and Jörg Widmer. Equation-
based congestion control for unicast applications. In ACM SIGCOMM Pro-
ceedings of the conference on Applications, Technologies, Architectures, and
Protocols for Computer Communication, pages 43–56, Stockholm, Sweden,
Aug. 2000.
[21] Sally Floyd and Van Jacobson. Traffic Phase Effects in Packet-Switched
Gateways. Journal of Internetworking: Practice and Experience, 3(3):115–
156, Sep. 1992.
[22] Sally Floyd and Van Jacobson. Random early detection gateways for con-
gestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397–413,
Aug. 1993.
[23] Sally Floyd and Eddie Kohler. Profile for DCCP Congestion Control ID 2:
TCP-like Congestion Control. Internet Draft 6, IETF, Jul. 2004. Work in
progress.
[39] T.V. Lakshman, Arnold Neidhardt, and Teunis J. Ott. The Drop from
Front Strategy in TCP and in TCP over ATM. In IEEE INFOCOM, pages
1242–1250, Mar. 1996.
[40] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard
Kleinrock, Daniel C. Lynch, Jon Postel, Lawrence G. Roberts, and
Stephen S. Wolff. The Past and Future History of the Internet. Commu-
nications of the ACM, 40(2):102–108, Feb. 1997.
[41] R. Ludwig and R. H. Katz. The Eifel Algorithm: Making TCP Robust
Against Spurious Retransmissions. ACM Computer Communications Re-
view, 30(1):30–36, Jan. 2000.
[42] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Ac-
knowledgement Options. RFC Standards Track 2018, IETF, Oct. 1996.
[43] Nils-Erik Mattsson. A DCCP module for ns-2. Master’s thesis, Luleå
University of Technology, Feb. 2004. 2004:175 CIV.
[44] S. McCanne and S. Floyd. ns Network Simulator. Technical report, Infor-
mation Sciences Institute, 2004.
[45] Douglas C. Montgomery. Design and Analysis of Experiments. John Wiley
& Sons, 5th edition, 1997.
[46] Motorola. Evaluation Methods for High Speed Downlink Packet Access
(HSDPA). Technical report, 3GPP, Jul. 2000.
[47] John Nagle. Congestion Control in IP/TCP Internetworks. RFC 896,
IETF, Jan. 1984.
[48] Open Mobile Alliance. OMA Technical Section, Push to Talk over Cellular
Working Group. http://www.openmobilealliance.org/, Feb. 2005.
[49] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP Through-
put: A Simple Model and its Empirical Validation. In ACM SIGCOMM
conference on Applications, technologies, architectures, and protocols for
computer communication, pages 303–314, 1998.
[50] S. Parkvall, E. Dahlman, P. Frenger, P. Beming, and M. Persson. The Evo-
lution of WCDMA Towards Higher Speed Downlink Packet Data Access.
In IEEE VTC (Spring), 2001.
[51] Vern Paxson and Sally Floyd. Wide area traffic: the failure of Poisson
modeling. IEEE/ACM Transactions on Networking, 3(3):226–244, 1995.
[52] Vern Paxson and Sally Floyd. Why we don't know how to simulate the
Internet. In Winter Simulation Conference, pages 1037–1044, 1997.
[53] Janne Peisa and Eva Englund. TCP performance over HS-DSCH. In IEEE
VTC Spring, volume 2, pages 987–991, May 2002.
[54] J. Postel. Transmission Control Protocol. RFC 793, IETF, Sep. 1981.
[55] Theodore Rappaport. Wireless Communications: Principles and Practice.
Prentice Hall, 2nd edition, Dec. 2001.
[56] V. Rosolen, O. Bonaventure, and G. Leduc. A RED discard strategy for
ATM networks and its performance evaluation with TCP/IP traffic. ACM
SIGCOMM Computer Communications Review, 29(3):23–43, Jul. 1999.
[57] I. Stojanovic, M. Airy, D. Gesbert, and H. Saran. Performance of TCP/IP
Over Next Generation Broadband Wireless Access Networks. In IEEE
WPMC, Aalborg, Denmark, Sep. 2001.
[58] Mats Sågfors, Reiner Ludwig, Michael Meyer, and Janne Peisa. Buffer
Management for Rate-Varying 3G Wireless Links Supporting TCP Traffic.
In IEEE VTC, pages 675–679, Apr. 2003.
[59] Mats Sågfors, Reiner Ludwig, Michael Meyer, and Janne Peisa. Queue
Management for TCP Traffic over 3G Links. In IEEE WCNC, pages 1663–
1668, Mar. 2003.
[60] R Development Core Team. R: A language and environment for statistical
computing.
[61] W. Willinger and V. Paxson. Where Mathematics meets the Internet.
Notices of the American Mathematical Society, 45(8):961–970, Aug. 1998.
[62] Yang Richard Yang, Min Sik Kim, and Simon S. Lam. Transient Behav-
iors of TCP-friendly Congestion Control Protocols. Computer Networks,
41(2):193–210, Feb. 2003.
[63] N. Yin and M.V. Hluchyj. Implications of Dropping Packets from the Front
of a Queue. In 7th ITC, Oct. 1990.