LICENTIATE THESIS

Congestion Control in Wireless


Cellular Networks

Sara Landström

Luleå University of Technology


Department of Computer Science and Electrical Engineering
Division of Computer Science and Networking

:|: -|: - --  ⁄ -- 


Congestion Control in Wireless
Cellular Networks
by

Sara Landström

Division of Computer Science and Networking


Department of Computer Science and Electrical Engineering
Luleå University of Technology
S-971 87 Luleå, Sweden

March 2005

Supervisor
Lars-Åke Larzon, Ph.D.,
Luleå University of Technology and Uppsala University
Assistant supervisors
Ulf Bodin, Ph.D., Luleå University of Technology
Krister Svanbro, Ericsson Research AB
Published 2005
Printed in Sweden by University Printing Office, Luleå
To Peter and Emelie
Abstract

Through the introduction of the third generation of mobile cellular network technologies, a major step towards ubiquitous wireless access to the Internet has been taken. There are, however, still challenges due to the differing characteristics and prerequisites of wired and wireless networks.
An important network element is congestion control. The purpose of congestion control is to ensure network stability and achieve a reasonably fair distribution of the network resources among the users. TCP is a well-established protocol which offers reliable transport of data and applies congestion control. Since its usage is widespread, it is of interest to follow up on proposed changes to the protocol and to learn how to tune wireless networks for optimal TCP performance. We have performed a study of buffer management for TCP with High Speed Downlink Packet Access (HSDPA) and evaluated the effect of reducing the lower bound of the retransmit timeout interval in an environment with varying capacity.
A number of the features that TCP consists of introduce arbitrary delay; therefore, applications with strict timing requirements sometimes trade reliable transport for smaller delay variations. Until recently, UDP has been the main alternative to TCP. UDP provides neither service guarantees nor congestion control, but on the other hand does not introduce any delay in itself. There are, however, concerns that increased usage of UDP would cause network instability and starve the TCP flows, which reduce their send rate when competition intensifies. Therefore, a new transport protocol called the Datagram Congestion Control Protocol (DCCP) is being designed to provide an alternative for applications that do not desire the service model of TCP. Currently, DCCP includes two profiles for congestion control, TFRC and TCP-like. For these new algorithms, verifying the design, identifying weaknesses and suggesting improvements, as I have done, is important in order to drive the development forward.
Through the studies that comprise this thesis, I contribute to the stable operation of the future Internet and to its merging with wireless cellular data architectures.

Contents

Abstract

Publications

Acknowledgments

Thesis Introduction
1 Introduction
  1.1 The Internet
  1.2 Wireless Wide-area Networks
2 Research Area
  2.1 Focus
3 Methodology
  3.1 Data Collection
4 Contribution
5 Continuation

Papers
1 Congestion Control in a High Speed Radio Environment
2 On the TCP Minimum Retransmission Timeout in a High-speed Cellular Network
3 Buffer management for TCP over HS-DSCH
4 Properties of TCP-like congestion control
Publications

The following papers are included in this thesis:

• Paper 1
Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Congestion Control in a High Speed Radio Environment”. In Proceedings of the International Conference on Wireless Networks, pages 617-623, Las Vegas, Nevada, USA, 21-24 June 2004.

• Paper 2
Mats Folke, Sara Landström and Ulf Bodin, “On the TCP Minimum Retransmission Timeout in a High-speed Cellular Network”. To be presented at the 11th European Wireless conference, Nicosia, Cyprus, April 10-13 2005.

• Paper 3
Sara Landström and Lars-Åke Larzon, “Buffer management for TCP over HS-DSCH”. Technical report, LTU–TR–05/09–SE, Luleå University of Technology, Sweden, February 2005.

• Paper 4
Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Properties of TCP-like congestion control”. In Proceedings of the Swedish National Computer Networking Workshop, pages 13-18, Karlstad, Sweden, 23-24 November 2004.

Furthermore, a report describing the ns-2 module for simulating HSDPA will be made available as a technical report during 2005 under the title “Description of HSDPA module for ns-2”, with Mats Folke and me as authors. The user manual will also be made available at http://www.csee.ltu.se/~saral.

Acknowledgments

The first person that I would like to thank is my supervisor, Lars-Åke. Lars-Åke has believed in me from the start and he has always invited me to ask questions, share ideas and discuss everything that remotely relates to being a Ph.D. student. To Ulf and Krister, who have been my assistant supervisors, I’d like to say that I appreciate our discussions and your involvement in my education.

Among my colleagues in the Computer Science and Networking hallway, Mats deserves a special thank you. His help and willingness to discuss everyday problems and ideas have been a valuable asset to me. My fellow Ph.D. students all have one thing in common: they are helpful and nice company. I would also like to direct a thank you to Arne Simonsson at Ericsson Research AB, for always taking the time to answer my questions.
Waiting for me to come home each day is my daughter Emelie. Thank you for reminding me that there is a world where entirely different things matter than the things I struggle with at work. I would also like to thank my husband Peter for all his love, support, and patience.

My research has mainly been supported by Vinnova, the Swedish Agency for Innovation Systems, but also by the PCC++ graduate school and Ericsson Research AB. Thank you for your financial support and for providing me with the opportunity to work with you.

Thesis Introduction

1 Introduction
In this section, a short introduction to networking and wireless wide-area networks is given. It also serves to set the stage for the research presentation that follows.

1.1 The Internet


“The Internet was conceived in the era of time-sharing, but has survived into
the era of personal computers, client/server and peer-to-peer computing, and the
network computer.” [40]
There was a time when computers were expensive and rare; sharing their computing power therefore seemed appealing. Others, like J.C.R. Licklider¹, found the novel ways through which humans would be able to communicate through a computer network to be a driving force.
It was quickly realized that computers communicate in a different way than human beings; therefore, the circuit-switched approach used in ordinary phone systems was deemed unsuitable. Instead of circuits, packet switching was considered better suited to the bursty computer communication pattern. The first paper on packet-switching theory was published in 1961 [36]. Information that is to be sent is gathered into packets before being sent towards the destination, and each packet is handled individually, not as part of a sequence of packets, by the nodes relaying traffic in the network. The internal network nodes buffer a packet if the outgoing link is occupied when it arrives. The manner in which the buffers are managed affects network performance.
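The buffering just described can be sketched as a drop-tail FIFO queue. This is a minimal toy model (class name and interface are mine, not from the thesis); drop-tail is just one common buffer-management policy:

```python
from collections import deque

class DropTailBuffer:
    """Toy model of a router's output queue: packets wait while the
    outgoing link is busy; arrivals beyond the limit are dropped."""

    def __init__(self, limit):
        self.limit = limit          # maximum number of queued packets
        self.queue = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) < self.limit:
            self.queue.append(packet)
            return True
        self.dropped += 1           # buffer full: drop-tail policy
        return False

    def dequeue(self):
        # Called when the outgoing link becomes free.
        return self.queue.popleft() if self.queue else None
```

How such a buffer is sized and which packets are dropped when it fills is exactly the kind of management choice whose performance impact is studied later in this thesis.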
In short, the Internet can be thought of as a network of roads, where cars (packets) share the road along certain routes. The degree to which the traffic from different connections is mixed together is called the degree of statistical multiplexing.
The transport and forwarding service of the Internet was first intended to be
carried out by one protocol, called the Transmission Control Protocol (TCP) [8].
However, it was hard to support all the application requirements within one
framework. For instance, early work on voice applications revealed that packet losses were in some cases better dealt with by the application. Therefore, the
addressing and forwarding of individual packets was removed from TCP and
the Internet Protocol (IP) was formed to provide these services.
IP and its datagrams represent the minimum building block of a computer network, as illustrated in Figure 1.1. The User Datagram Protocol (UDP) was also created to give access to the services of IP, which are best effort, and to provide process multiplexing. The significance of the best-effort paradigm is that a packet may or may not arrive at its destination, depending on the current status of the network. For instance, no guarantees are given regarding the delay and integrity of a packet, nor the order in which packets will arrive. This model was adopted since few assumptions about the underlying network technologies had to be made. As a result, networks can be built on top of very diverse technologies. The choice of placing advanced algorithms and state in the end-nodes, instead of in the internal network nodes, has strongly contributed to the success of the Internet.

¹ J.C.R. Licklider has been attributed the first recorded description of the social interactions that may be possible through interconnecting computers.

Figure 1.1 (protocol stack, top to bottom: Application — TCP/UDP — IP — Network): A simple illustration of the layering principle of the Internet. By Network I refer to network technologies such as Ethernet, token ring and IEEE 802.11.
In the spirit of academic tradition the specifications of the computer network
protocols were made available for free. A dynamic forum for the exchange of
ideas was created by Crocker in 1969 when he started the request for comments
(RFC) series [9]. The RFCs have become Internet standardization documents
and the standardization documents of concern for this thesis are now being
produced within the Internet Engineering Task Force (IETF).
IETF stems from a board of researchers created by the Advanced Research
Projects Agency (ARPA) to give technical advice on the Internet program in
1981. Due to the heterogeneity of the Internet, one of the technical beliefs of
the IETF is that tight engineering optimizations are generally not feasible.
The first TCP versions included a method for the receiver to control the rate
at which the sender was transmitting, but no algorithms for handling dynamic
network conditions [54]. As the number of hosts connected to the Internet grew,
the need for dealing with the variable network conditions arose [47]. In the late
1980s the Internet suffered from the first of a series of congestion collapses,
i.e., the network became overwhelmed by the traffic load, which prevented it
from doing useful work. In order to ensure the operability of the network, a
TCP sender should attempt to adapt its send rate to a level which can be
handled by the network under the current circumstances. Thus, a number
of algorithms under the name of “Congestion Avoidance and Control” were
proposed by Jacobson for TCP in 1988 [33] to achieve network stability.
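The core idea of these algorithms — probe for capacity gently, back off sharply when congestion is signaled — can be sketched as a simplified additive-increase/multiplicative-decrease update. This is a toy, per-round-trip model of my own (real TCP counts the window in segments and adds fast retransmit/recovery and other details omitted here):

```python
def aimd_step(cwnd, loss, ssthresh, alpha=1.0, beta=0.5):
    """One round-trip update of a TCP-style congestion window:
    additive increase while the network absorbs the load,
    multiplicative decrease on a congestion signal (packet loss)."""
    if loss:
        ssthresh = max(cwnd * beta, 2.0)   # back off multiplicatively
        cwnd = ssthresh
    elif cwnd < ssthresh:
        cwnd *= 2.0                        # slow start: exponential growth
    else:
        cwnd += alpha                      # congestion avoidance: linear growth
    return cwnd, ssthresh
```

The asymmetry — linear growth, multiplicative back-off — is what lets many independent senders converge towards a stable and roughly fair sharing of a bottleneck.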
The congestion signals, which the algorithms are to respond to, can be either implicit, such as packet losses or increasing round-trip times, or explicit, as when internal network nodes signal congestion by setting a dedicated bit in a packet header. The current Internet architecture cannot disregard implicit signals, since explicit signaling cannot be assumed to be supported along the entire network path between two communicating network nodes.
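The two signaling styles can be illustrated with a toy router decision (a hypothetical sketch of mine, not the actual RED/ECN algorithms; the constants mirror the ECN codepoint idea but the packet representation is illustrative):

```python
ECT = 0b10   # ECN-capable transport codepoint
CE  = 0b11   # congestion experienced

def handle_packet(pkt, queue_len, threshold):
    """Toy router decision: when the queue grows past a threshold,
    signal congestion explicitly by marking an ECN-capable packet,
    or implicitly by dropping a non-capable one."""
    if queue_len <= threshold:
        return "forward"
    if pkt.get("ecn") == ECT:
        pkt["ecn"] = CE          # explicit signal: set the CE mark
        return "forward-marked"
    return "drop"                # implicit signal: loss
```

The sender reacts the same way to a mark as to a loss, but the marked packet still reaches its destination — which is why explicit signaling is attractive where it is supported.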
The flows representing the majority of the long-lived sessions at the end of the 1980s ran on top of TCP; therefore, congestion control was made part of TCP. However, user and application behaviors are different today. New applications with real-time demands on the transport service, like streaming and gaming applications, have led to longer-lived UDP sessions. TCP includes mechanisms, such as a reliable in-order delivery guarantee, which introduce delay variations and reduce the application’s control over the data flow. Therefore, UDP is preferred by applications with strict timing requirements.
Out of concern for the changing traffic patterns, an initiative was taken to provide UDP-style flows with congestion control as well. The effort led to the design of the Datagram Congestion Control Protocol (DCCP) [19]. The protocol has been built as a toolbox from which a suitable congestion control profile can be chosen. The limitation of the protocol is foremost the existing congestion control profiles. One of the problems with designing congestion control algorithms is that this mechanism also performs resource allocation when resources become scarce. Thus, it is desirable that the algorithms are fair to existing congestion control schemes, i.e., that of TCP, and avoid starving other flows that also implement congestion control.
The future of DCCP depends on whether it will gain acceptance in the wider network community or not. For application designers, it is a big step to change from UDP to DCCP. As a measure of complexity, it can be mentioned that UDP is defined in about ten pages, whereas DCCP consists of a minimum of three specifications, of which the largest is almost two hundred pages. A barrier to deployment is the use of Network Address Translation (NAT) devices and firewalls, which have to be extended to handle DCCP correctly.
Meanwhile, the Internet has become a gathering of both wired and wireless
networks. In the next section I will discuss the history of the mobile telephony
networks and their approach to providing data services.

1.2 Wireless Wide-area Networks


The Internet started out as a project for sharing computer resources, whereas mobile telephony grew out of a desire to provide voice communication while on the move. These different incentives led, as we will see, to different designs.
The type of wireless systems that I consider in this thesis are often referred to as wireless wide-area networks (WWANs). They accommodate a large number of users over a large geographic area, within a limited frequency spectrum². The land area is divided into cells and the communication within each cell is supported by a base station. The system allows a user to move at relatively high speed while engaged in a session, switching cells over a wide area. The mobile switching center (MSC) coordinates the activities of all the base stations and connects the cellular system to the public switched telephone network (PSTN).

² The radio frequency spectrum is controlled by governments; therefore, techno-politics has a major impact on this industry.
It all began in 1895 – Nikola Tesla was ready to transmit a radio signal 50 miles to West Point, New York, when a fire consumed his lab. Meanwhile, Guglielmo Marconi had been granted a patent for wireless telegraphy in England in 1896. One year later, he used the Tesla oscillator to demonstrate the usability of radio in mobile communication by keeping in contact with ships sailing on the English Channel [55]. The world’s first wireless cellular system was, however, not implemented until 1979, in Japan, by the Nippon Telephone and Telegraph company (NTT). The cellular concept enabled large-scale radio communications and was refined by many telecommunication companies working in parallel. The idea is to split the coverage zone into small cells and reuse portions of the spectrum, increasing spectrum usage at the expense of a larger infrastructure.
Here in Scandinavia, the Nordic Mobile Telephone (NMT) system was introduced in 1981. It belongs to the first generation of mobile systems, which were generally incompatible in Europe due to the use of different frequencies and protocols. The first universal digital cellular system (2G) that gained worldwide acceptance was the Global System for Mobile (GSM), deployed in the early 1990s. GSM was designed before the Internet became a commodity and hence the data rate requirements were low, since voice produces relatively low-bit-rate traffic. In order to increase the 2G data rates for Internet-type services, a number of techniques under the name of 2.5G were developed. For GSM, High Speed Circuit Switched Data (HSCSD), General Packet Radio Service (GPRS) and/or Enhanced Data rates for GSM (or Global) Evolution (EDGE) are viable extensions.
The third generation of GSM technology (3GSM) has a Wideband-CDMA
(W-CDMA) air interface, which has been developed as an open standard by
operators in conjunction with the 3GPP standards development organization.
Already, over 85% of the world’s network operators have chosen 3GSM’s underlying technology platform to deliver their third generation services [24]. Another name for W-CDMA is the Universal Mobile Telecommunications System (UMTS). W-CDMA includes a shared high-speed channel for traffic from the base station to the mobile users. This High Speed Downlink Packet Access (HSDPA) mode is the focus of several of the studies in this thesis. Figure 1.2 illustrates
the evolution of the mobile cellular networks.
The telecommunication industry has not been able to agree on one global 3G standard. Therefore, a second partnership project (3GPP2) is developing another 3G standard in parallel to W-CDMA, called cdma2000, which does not build on GSM technology. The partners are from North America, Japan, Korea and China.
The roots of the wireless wide-area networks are in the telephone industry, from which users have come to expect a high quality of service and a high degree of stability. Telecommunication is often referred to as having six nines, 99.9999%, availability. The requirements of the emergency services are one of the reasons for the high demands.

Figure 1.2: The evolution of different WWAN technologies (2G: IS-95, GSM, IS-136 & PDC; 2.5G: IS-95B, HSCSD, GPRS, EDGE; 3G: cdma2000-1xRTT, cdma2000-1xEV/DV/DO, W-CDMA, EDGE, HSDPA, TD-SCDMA).
From being voice call communication systems, the mobile wireless cellular systems have evolved in the direction of the Internet. Since the systems were originally designed for voice traffic, with circuit switching as the means for distributing capacity, certain changes have been necessary to connect to the Internet and allow data traffic to be transferred. The core business is, however, still voice calls, and the quality of this service must therefore also be ensured henceforth. Different types of messaging services, like the short message service (SMS) and the multimedia message service (MMS), have also become popular.
Wireless local-area networks (WLANs), on the other hand, have a background in data services. Compared to WWANs, they generally provide higher data rates at the expense of mobility. The user is required to stay close to an access point to achieve the high data rates. Successors to the successful IEEE 802.11 standard, IEEE 802.16 and IEEE 802.20, are currently being designed to allow increased coverage [57], e.g., wireless broadband for residents and small office use. The primary standardization bodies for WLANs are the Institute of Electrical and Electronics Engineers (IEEE) and the European Telecommunications Standards Institute (ETSI).
The wireless media and user mobility challenge some of the implicit assumptions that were made when designing the congestion control and avoidance mechanisms of TCP. Transmission errors are more frequent [2] and round-trip times may vary to a larger extent than in a wired network [41]. Another key issue is the efficient utilization of the available frequencies, since licenses for the radio spectrum are relatively expensive. This is in contrast to the IETF philosophy that tight optimizations are virtually impossible due to heterogeneity.

2 Research Area
There are many challenges to the Internet and the telecommunication industry.
In this section I will briefly touch upon a few of them, which are related to my
research.
Transporting both voice and data traffic as packet-switched services at the
IP layer would allow the efficient deployment of new services, such as real-time
multimedia with integrated voice and video. Furthermore, having an all-IP
network instead of separate voice and data networks means that fewer pieces of
equipment need to be deployed and maintained.
Voice over IP (VoIP) is the key to a common IP platform for wireline and wireless networks. Still, the traditional circuit-switched voice networks have been well tuned for efficient spectrum utilization; thus, VoIP has a lot to prove with regard to its cost effectiveness. Nonetheless, a first step was taken on the 25th of August, 2003, when a specification for Push to talk over Cellular (PoC) was submitted to the Open Mobile Alliance (OMA) [48].
The interest in VoIP over wired networks is also growing. This trend has the potential to change the traffic patterns on the Internet. A larger share of long-lived UDP sessions is undesirable from a network point of view, since no regulation of the traffic flows is performed by UDP. The network may therefore become unstable and perform a high degree of useless work.
Congestion control performs resource allocation when competition for resources is intense. Therefore, the problems of delivering service assurances and performing congestion control are related. Time-constrained services using UDP are pushing the development of congestion control profiles that combine satisfactory service delivery and network stability. For flows with strict timing requirements, there is a send rate threshold below which the data stream will be useless to the receiver, and there is also a maximum delay that can be tolerated.
A forum for exchanging ideas in this area has been the IETF working group for DCCP [31]. DCCP is a new transport protocol which offers no delivery guarantees. The objective is to create an alternative to UDP for long-lived traffic flows, one which applies congestion control. I have followed the standardization process of DCCP, whose main weakness is the usability of the congestion control profiles it currently provides. This is a key question to resolve for the success of DCCP.
IP-based traffic is often characterized as bursty, especially when TCP is used as the transport protocol. To maintain high system utilization for IP-based traffic over WWANs, gathering data from multiple users may be beneficial. Shared channel solutions, of which HSDPA is one example, are therefore likely to become more common. The problem with service multiplexing is to be able to give service assurances when mixing data from many users and/or of different traffic types.
The level of service quality required is tied to people’s expectations and habits, and it comes into focus when a service is to be offered on top of another technology than before. For example, telephone services are now being provided over the Internet, and wireless cellular mobile networks are offering data services. With this development follow culture clashes. We expect a certain call reliability, whereas we are quite familiar with the best-effort thinking of the Internet. We are used to paying different rates for our telephone calls, but we do not usually get a differentiated bill for our Internet usage. Thus, in working with technology we must be aware of these conceptions and allow the market to mature.

2.1 Focus
My work is related to the heterogeneous platforms and diverse applications aspiring to become part of the global Internet. In my research I attempt to find solutions for applications in an all-IP network that involves both wired and wireless links. There may still be links requiring special attention and techniques to enhance performance. In such situations I believe that solving the problems locally is often preferable if the technology is widely spread. When new inventions are being made, it is important to be there from the start and develop a generic solution.
I am particularly interested in how congestion control can be used to allow a continued proliferation of applications, in a way that does not disturb the current core activities of a subnet. Furthermore, understanding how new services can be introduced in cellular systems and how applications can co-exist in a cellular environment is of interest to me. Therefore, I have studied the existing congestion control algorithms and those under development, as well as the general design of transport protocols.
My research around HSDPA aims to widen the understanding of the special
characteristics of a wireless cellular system with channels especially designed for
data transport. There are many points in common between HSDPA and the
next generation of WLAN technologies as well.
Producing implementations is an important part of networking research, since much of the research is applied, and theoretical analysis usually does not allow design choices to be critically tested in a wide range of settings. I have worked with implementing HSDPA, DCCP and a queue management technique called PDPC. The existence of implementations makes it easier to perform research in these areas and pushes development forward.

3 Methodology
The available tools for performance analysis of computer systems are measurement, simulation and analytical modeling. The system must be studied under an appropriate workload and its performance evaluated with a suitable metric.

3.1 Data Collection


My approach in this thesis has been mainly experimental; I have used simulation and modeling. Several factors have led to the widespread use of simulation within the networking discipline. Firstly, networks are large and heterogeneous, making it difficult to derive a theoretical model. Secondly, the results of a study should be repeatable, but certain network elements, like the radio medium, inherently introduce variations.
Analytical modeling and simulation can be used in any stage of the life-cycle of a product, but measurement requires that a prototype exists and that access to the system, which is often restricted, is available. Measurement may seem the most accurate evaluation method, but environmental parameters, which are often uncontrollable, may make it difficult to generalize from the results [34]. The importance of measurement nevertheless lies in its unique potential to provide a crucial “reality check”.

Compared to measurement, simulation allows a wider range of parameter settings and environments to be explored – often at a lower cost. Simulation generally requires fewer assumptions than analytical modeling and can include more details, often resulting in higher accuracy. It is also an important tool for developing intuition. Preferably, a combination of methods should be used to strengthen the results of a performance evaluation study.
In order to elevate the state of the art in Internet simulation, an effort was made in the late 1990s to extend and advance the Network Simulator, ns-2. The key goal was to facilitate studies of scale and protocol interactions [52]. I have used ns-2 as my simulation platform throughout this thesis. It contains detailed models of the transport layer protocols and queuing strategies I have identified as important to study. Also, since the simulator is frequently used by many researchers, bugs are likely to be discovered and studies can be made comparable. In most cases some modifications have been necessary to bring the code up to date and create the simulation scenarios. A firm recommendation when dealing with ns-2 is to verify for yourself that the code simulates what you think it does and to check all the parameter settings.
When performing simulations I have consulted the rich literature describing the particular difficulties associated with networking simulation studies and offering advice [52], [61], [21], [51], [13], [17]. I have also tried to validate the simulation results through detailed analysis of the protocols and techniques under study. Therefore, I have often started with relatively simple scenarios, where it is easy to foresee the outcome given knowledge of the elements under study. Another piece of advice is to begin by retrieving a lot of data from the system for validation and to provoke important events. Current validation techniques are outlined in [28].
Dr. Ulf Bodin implemented the first version of an ns-2 module for simulating HSDPA. This module has since been extended and refined by Mats Folke and me. In this work, we have been confronted with many decisions regarding the appropriate level of detail. A lack of detail can cause wrong answers, by making the model incorrect or simply inapplicable (not moving within a relevant part of the design space). On the other hand, it takes time to implement additional details, and to debug and change them as development continues. Foreseeing the future is virtually impossible. Details may also distract from the research problem at hand and even make the effects less distinct. In order to enable large-scale simulations, the appropriate level of detail must therefore be chosen with care [27].
If you have knowledge of the situation that you are to study, it is easier to choose an appropriate level of detail, since it is then possible to reason about the effects that a certain part might have in that particular setting. By clearly stating which assumptions we have made and which scenario the model is intended for, we guard against misuse of the model and encourage others to give us feedback on our simplifications.
In addition to the simulation experiments, I have put forward and supervised
“real-world” projects:

• Within the framework of a network project for undergraduate students, a kernel version of DCCP for FreeBSD was implemented [12]. The resulting code has been built into FreeBSD KAME and is still maintained through them [30].

• A Master Thesis student, Magnus Erixzon, implemented a down-scaled version of DCCP, called DCCP-Thin, in Symbian OS. This implementation was tested over a mobile cellular link (GPRS) and its performance was related to that of TCP under similar conditions [11].

In theory, experimentation may seem straightforward, but when designing an experiment there are suddenly a lot of hard questions to resolve before even the first experiment can be carried out: knowing which data to extract, which parameters to vary (if possible) and their relevant values, how to configure the protocols correctly, etc. Experimentation requires an infinite amount of patience and judicious planning, as well as an observant practitioner during the execution of the plans. Furthermore, experience with the environment and with experimental work is a factor whose effect I have seen in my own work. And still, data gathering is only a first step.
There are no absolute guarantees when it comes to simulation. I have carefully selected the methods used, applied them and scrutinized the outcome.

4 Contribution
My research contributes to bridging the gap between wired and wireless networks
by ensuring that new transport protocol mechanisms are evaluated for
the wireless realm. Parts of my research aim at widening the understanding of
how shared WWAN channels can be used and at identifying issues that need to be
considered when managing these networks. Finally, research into appropriate
congestion control algorithms for time-constrained services is important to the
networking community as a whole and to its potential for further growth.

Seen from a larger perspective, this type of research may in the end result
in better user-perceived performance regarding the quality and availability of
services in WWANs. This requires efficient service implementations.

Paper 1
The first paper, Congestion Control in a High Speed Radio Environment, was
also the first to be written. It is an evaluation of TFRC and TCP in HSDPA.
The purpose of the evaluation was to detect any weaknesses in the design
of TFRC and to determine whether any interactions between radio-block scheduling
at the link layer and congestion control algorithms at the transport layer would be
problematic.
By exposing TFRC to many different environmental conditions, as in this study,
we can arrive at a robust design suitable for wireless environments as well.
As part of this evaluation I updated the TFRC code in the widely used
network simulator ns-2 to conform to the RFC standard. The earlier
implementation was produced prior to standardization.

Paper 2
On the TCP Minimum Retransmission Timeout in a High-speed Cellular Network
is a continuation of the work in the previous paper. Except for a few
corner cases, TCP works rather well in wireless networks. One of its weaknesses
is the use of a timer to determine when a packet is to be retransmitted. Delay
variations are inherent to a radio environment, and the timer may trigger
premature retransmissions when sudden delay variations occur. A lower bound on
the retransmission timer has historically been motivated by poor clock granu-
larity and the “conservation of packets” principle described in [33], but lately a
much reduced lower bound has been adopted in modern implementations [26].
We have evaluated the effect of decreasing the minimum retransmit timeout
interval on TCP performance for HSDPA. The importance of the minimum
retransmit interval to the performance of the retransmit timer has previously
been pointed out in [13]. Since TCP is a central part of the current network
architecture, optimizing its behavior for a particular environment may degrade
overall performance in other settings. It is therefore important to evaluate the effect
of proposed changes to the protocol under conditions which may be problematic.
I contributed with the idea of investigating the impact of changing the lower
bound on the retransmission timeout interval, assisted in interpreting the results
and participated in the writing process.

Paper 3
Buffer management has previously been shown to have a significant effect on
TCP performance over both wired and wireless links. In the paper Buffer
Management for TCP over HS-DSCH I study the problem of finding a robust

and efficient buffer configuration for a high-speed shared channel, when data is
buffered for each user individually. A similar problem was studied for dedicated
3G links in [10] and its companion papers. I wanted to see if their findings also
applied to a shared wireless high-speed channel, and I also contribute by
identifying a number of factors that must be considered when performing buffer
management for HSDPA.
The buffer strategy called Packet Discard Prevention Counter (PDPC), proposed
in [59] for low statistical multiplexing environments, was implemented for
ns-2 by me as part of this study.

Paper 4
In the last paper, Properties of TCP-like Congestion Control, I have analyzed
the design of a congestion control algorithm that attempts to imitate the con-
gestion control and avoidance behavior of TCP, but within an unreliable service
concept. TCP-like congestion control is currently being standardized by the
IETF, and a sanity check of the algorithm was therefore warranted before
deployment can be recommended.
I supervised a Master’s Thesis student, Nils-Erik Mattsson, who
implemented a substantial part of the Datagram Congestion Control Protocol
(DCCP) in the network simulator ns-2. Among other features, DCCP
includes TCP-like congestion control. The code is available and has been handed
out to a number of interested parties.
The work presented here is part of a larger evaluation of DCCP and is
linked to the TFRC study in Paper 1. Comparing the use of TFRC and TCP-
like congestion control for streaming and real-time applications is the next step.
We have also made an implementation of DCCP-Thin for Symbian OS. The
observations made for DCCP-Thin are reported in [11] and will also be presented
in Linköping at “Radiovetenskap och Kommunikation” (RVK 05), 14–16 June,
2005. Furthermore, a kernel version of DCCP for FreeBSD and a patch for
Ethereal were released as part of a network project which I proposed and
supervised [12].

My contribution
In all the papers included in this thesis except Buffer Management for TCP over
HS-DSCH, I have been the main author and carried out all the experimental
work.

5 Continuation
My efforts up to this point have been concentrated on evaluating a number of
congestion control mechanisms. We have observed the performance in terms of
system throughput, fairness and individual transfer rates. These are metrics

primarily suitable for bulk transfers, but also reasonable when studying
relatively new algorithms with the purpose of validating their operation. When
data is produced during the session itself, or the client wishes to hold only
small portions of a flow at a time, the application behavior can be quite dif-
ferent and the set of metrics used so far incomplete. Studying in depth the
performance of new congestion control algorithms for applications with harder
timing constraints, with appropriate application models and performance metrics,
is therefore part of my future plans.
A number of issues can already be identified, such as the congestion control
response to application-limited periods, start-up costs, discrete send rates, packet
sizes, smoothness, a minimum useful transfer rate, and delay variations as per-
ceived by the user. A related question is which information a
network can provide in order to improve congestion control and thus applica-
tion performance. Also, can the network use existing congestion control algo-
rithms to prioritize certain services over others by feeding them different
information?
Furthermore, the multitude of applications seems to grow indefinitely. Tra-
ditional cellular networks have been built and optimized for one main service,
i.e., phone calls. These networks are now being transformed into a platform
supporting a multitude of services. If optimizations are attempted for each ser-
vice, the network is likely to become highly complex; it is therefore interesting
to investigate how the services can co-exist and where tuning of the network is
necessary.
Papers

Paper 1

Congestion Control in a High Speed Radio Environment

Paper published as

Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Congestion Control in a High Speed
Radio Environment”. In Proceedings of the International Conference on Wireless
Networks, pages 617-623, Las Vegas, Nevada, USA, 21-24 June 2004.

Congestion Control in a High Speed Radio Environment
Sara Landström† , Lars-Åke Larzon†,‡ , Ulf Bodin†
† Luleå University of Technology
‡ Uppsala University

Abstract

This paper explores interactions between congestion control mechanisms


at the transport layer and scheduling algorithms at the physical layer in
the High-Speed Down-link Packet Access extension to WCDMA. Two dif-
ferent approaches to congestion control – TCP SACK and TFRC – are
studied. We find that TCP SACK and TFRC in most respects perform
the same way. SIR scheduling yields a higher system throughput for both
congestion control algorithms than RR scheduling, but introduces delay
variations that sometimes lead to spurious timeouts. The no feedback
timeout of TFRC exhibits similar sensitivity to delay spikes as the re-
transmit timeout in TCP SACK. The consequences of delay spikes are
however different.

1 Introduction
The High-Speed Down-link Packet Access (HSDPA) mode is part of the 3GPP
WCDMA specification release 5 [29]. It supports peak data rates in the order of
10 Mbps with low delays. A key component of HSDPA is the channel scheduler.
The channel is divided into 2 ms slots that are assigned to the users according
to a scheduling algorithm.
A round-robin (RR) scheduler lets users take turns to transmit in an orderly
fashion, whereas a signal-to-interference (SIR) scheduler gives precedence to the
user with the best predicted signaling conditions.
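The two policies can be sketched as follows. This is an illustrative sketch of the scheduling rules as described above, not the simulator's code; the user records and predicted SIR values are hypothetical.

```python
# Illustrative sketch of the two slot-scheduling policies (not the
# simulator's code); the user records and predicted SIR values below
# are hypothetical.

def rr_schedule(users, turn):
    """Round-robin: backlogged users take turns in a fixed order."""
    backlogged = [u for u in users if u["queued"] > 0]
    if not backlogged:
        return None
    return backlogged[turn % len(backlogged)]

def sir_schedule(users):
    """SIR: the backlogged user with the best predicted SIR wins the slot."""
    backlogged = [u for u in users if u["queued"] > 0]
    if not backlogged:
        return None
    return max(backlogged, key=lambda u: u["predicted_sir_db"])

users = [
    {"id": 1, "queued": 5, "predicted_sir_db": -1.0},
    {"id": 2, "queued": 0, "predicted_sir_db": 8.0},  # nothing to send
    {"id": 3, "queued": 2, "predicted_sir_db": 4.5},
]
print(sir_schedule(users)["id"])   # 3: best SIR among users with data
print(rr_schedule(users, 0)["id"]) # 1: first backlogged user this turn
```

Even in this sketch, user 2's excellent SIR is irrelevant when it has no queued data, which is why data availability at the transport layer matters to the scheduler.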
As scheduling is tightly coupled to data availability - which is regulated at
a higher level by the transport protocol - we study the interactions between
congestion control in the transport layer and channel scheduling in the physical
layer.
Of the transport protocols that perform congestion control, TCP is the most
widely deployed. Another way of performing congestion control is to apply
TCP Friendly Rate Control (TFRC) in which an equation-based model of TCP
Reno, derived in [49], is used. TFRC has been designed to give smoother rate

changes compared to TCP and is primarily suitable for streaming media appli-
cations [20]. An important factor in the send rate equation is the estimate of
the round trip time.
In this paper, we study the performance of TFRC and TCP over an HSDPA
link layer with both an RR and a SIR scheduler. As expected, SIR is not as fair
as RR, but does on the other hand give significantly larger throughput to the
users with the best SIRs. We found channel utilization helpful in explaining the
observed loads and comparing the two congestion control algorithms. TFRC
and TCP performed equally well. Both protocols are however sensitive to delay
spikes resulting from SIR scheduling and performance could be improved in this
respect.

2 TFRC vs. TCP


TCP SACK (from now on referred to as TCP) is a complete transport protocol
with many different features such as congestion control, reliability and session
management. In this study, we focus on the mechanisms that affect data avail-
ability in lower layers, i.e., the congestion control and avoidance mechanisms.
We compare TCP to the alternative approach to congestion control given
by TFRC. There are a number of fundamental differences between TCP and
TFRC: TCP is sender-oriented and uses a sliding window to control the send
rate, whereas TFRC is receiver-oriented and uses an equation-based scheme. In
the following sections we briefly describe the transport layer mechanisms that
we study in this paper, and how they differ between TFRC and TCP.

2.1 Adaptive timeouts


If the sender does not receive an acknowledgment of transmitted data before
a timeout, the sending rate is reduced as timeouts are interpreted as signs
of network congestion. Both TFRC and TCP use a moving average filter on
RTT samples to calculate an RTT estimate that controls the timeout values.
However, where TCP reduces the sending rate to a minimum after a timeout
and forces the sender into slow start, TFRC only halves the sending rate. The
reason is that this TFRC timeout, called the no feedback timeout, is only a
supplement to the retransmit timeout of TCP. The TCP retransmit timeout
is instead modeled by the send rate equation that TFRC complies to. The
no feedback interval is relatively long (four times the estimated RTT) to be
compared to the retransmit timeout, which is set to the estimated RTT with a
margin accounting for RTT fluctuations.
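The timeout computation described above can be sketched in the style of the standard TCP estimator: exponentially weighted averages of the RTT and of its variation, plus a lower bound. The gains (1/8, 1/4), the fourfold variation margin and the exact API below are common defaults used as assumptions for illustration, not the simulator's configuration; the 1-second minimum matches the TCP setting used later in this paper.

```python
# Illustrative RTT filter and retransmit-timeout computation in the style
# of the standard TCP estimator. The gains (1/8, 1/4), the 4x variation
# margin and the class interface are assumptions for illustration.

class RtoEstimator:
    def __init__(self, min_rto=1.0):
        self.srtt = None        # smoothed round trip time (seconds)
        self.rttvar = None      # smoothed RTT variation (seconds)
        self.min_rto = min_rto  # lower bound on the timeout

    def sample(self, rtt):
        """Feed one RTT measurement and return the resulting timeout."""
        if self.srtt is None:   # first measurement initializes the filter
            self.srtt = rtt
            self.rttvar = rtt / 2.0
        else:                   # exponentially weighted moving averages
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return self.rto()

    def rto(self):
        # Estimated RTT plus a margin for RTT fluctuations, lower-bounded.
        return max(self.srtt + 4.0 * self.rttvar, self.min_rto)

est = RtoEstimator(min_rto=1.0)
for rtt in (0.16, 0.18, 0.15, 0.40):  # hypothetical samples ending in a spike
    timeout = est.sample(rtt)
```

With a 1-second lower bound, moderate RTT fluctuation never tightens the timeout below one second; lowering min_rto lets the timer track the estimate more closely but risks spurious timeouts on delay spikes, which is the trade-off studied in the second paper of this thesis.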
The Eifel algorithm, presented in [41], has the potential to improve TCP
performance in wireless systems by making it possible to faster regain the send
rate used before a timeout if the acknowledgment causing the timeout arrives
late. Timeouts that would not have occurred if the timeout interval had been
longer are called spurious. The appropriate actions to take after a spurious
timeout are however still being debated [26].

2.2 Congestion avoidance


During normal operation, i.e., when the sender has left the slow start phase,
the send rate is increased slowly to avoid filling up queues too fast. In TFRC,
the receiver detects lost packets through the arrival of three later sent packets.
Packet losses during one RTT only result in one congestion event, as in TCP.
The TFRC receiver informs the sender of the perceived loss rate, p, and the
receive rate, Xrecv , which the sender uses to calculate a tentative new sending
rate, Xcalc using equation 1.1.
Xcalc = s / (R ∗ f(p)),    (1.1)

where s is the mean packet size, R the round trip time, and f(p) is given by

f(p) = sqrt(2p/3) + 12 ∗ sqrt(3p/8) ∗ p ∗ (1 + 32p^2).    (1.2)

To determine the new sending rate, the computed sending rate is compared to
the receive rate according to equation 1.3:

Xsend = min(Xcalc, 2 ∗ Xrecv).    (1.3)
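Equations 1.1–1.3 can be transcribed directly; the sketch below reads the denominator of equation 1.1 as the product R ∗ f(p), and the example values at the end are hypothetical.

```python
# Direct transcription of equations 1.1-1.3, reading the denominator of
# equation 1.1 as the product R * f(p); example values are hypothetical.
import math

def f(p):
    # Loss-rate term, equation 1.2.
    return (math.sqrt(2.0 * p / 3.0)
            + 12.0 * math.sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p ** 2))

def allowed_send_rate(s, R, p, x_recv):
    """Equations 1.1 and 1.3.

    s: mean packet size (bytes), R: round trip time (seconds),
    p: loss event rate, x_recv: reported receive rate (bytes/s).
    """
    x_calc = s / (R * f(p))           # equation 1.1: tentative rate
    return min(x_calc, 2.0 * x_recv)  # equation 1.3: cap at 2x receive rate

# Hypothetical example: 1450-byte packets, 150 ms RTT, 1% loss event rate.
rate = allowed_send_rate(s=1450, R=0.15, p=0.01, x_recv=1e6)
```

Note how the cap in equation 1.3 dominates when the receiver reports a low receive rate, while the loss term f(p) dominates for high loss event rates.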

As TCP uses cumulative acknowledgments, a duplicated acknowledgment


(dupack) indicates an out-of-order reception of data. Upon reception of four
successive acknowledgments, all with the same acknowledgment number, TCP
assumes a packet lost and halves its sending rate.

2.3 Slow start


In the beginning of each transfer and after a timeout in TCP, the session is in a
slow start phase. During this phase, the send rate increases exponentially until
a lost packet is detected. In TCP, this is accomplished by increasing the window
that controls the send rate proportionally to the number of acknowledgments
received.
TFRC mimics the TCP slow start behavior when the estimated loss rate
is zero by doubling the sending rate once per round trip time, if the reported
receive rate matches the current send rate. If the receive rate is lower than the
send rate, the new sending rate is set to twice the receive rate. This means that
the sending rate is limited only by the current values of Xsend and Xrecv until
a loss event occurs.
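The TFRC slow-start update described above can be sketched as follows; this is our interpretation of the doubling rule in the text, and the rates in the example are hypothetical.

```python
# Sketch of the TFRC slow-start update described above; an interpretation
# of the text, with hypothetical example rates.

def slow_start_update(x_send, x_recv):
    """One per-RTT update while the estimated loss rate is zero."""
    if x_recv >= x_send:
        return 2.0 * x_send   # receive rate keeps up: double the send rate
    return 2.0 * x_recv       # otherwise, set the rate to twice the receive rate

rate = 10000.0  # bytes/s, arbitrary starting rate
for reported in (10000.0, 20000.0, 25000.0):  # hypothetical receiver reports
    rate = slow_start_update(rate, reported)
# The last report (25000 < 40000) caps the rate at 2 * 25000 = 50000 bytes/s.
```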

3 Evaluation
Our evaluation of TFRC and TCP over the high speed down-link channel (HS-
DSCH) in HSDPA is based on simulations. Performance is investigated both
for an RR and a SIR scheduler using two different loads. This results in four
scenarios for each congestion control mechanism.

3.1 Model
The network simulator version 2.27 (ns-2) was complemented with a module for
simulating HSDPA [5]. We also modified the TFRC code to follow RFC3448
and measures were taken to remove the bug reported in [18]. The changes are
described in a document retrievable from http://www.csee.ltu.se/~saral.
Table 1.1 gives an overview of the radio models implemented and their con-
figuration.

Phenomena                                  Model/Configuration
Path loss                                  Exponential, propagation constant 3.5
Shadow fading                              Standard deviation 8 dB
Self interference                          Constant 10%
Intra cell interference (orthogonality)    Constant 40%
Inter cell interference                    Modeled by distance and shadow fading
Fast HARQ                                  No, immediately retransmitted
Code multiplexing                          Max 3 users
BLER                                       Uniformly distributed, 10% for SIRs
                                           over −3.5 dB, 50% for lower SIR levels

Table 1.1: Summary of radio models and parameters

There is no fast power control over the high speed shared channel; instead,
link adaptation is employed. The combinations of coding rates and modula-
tion types included in the simulator are introduced in Table 1.2, and the
corresponding SIR levels were established in [50]. Note that with these
combinations a maximum bit rate of 7.20 Mbps can be achieved. We assume
that the number of spreading codes
and the power assigned to HS-DSCH, change on long time scales compared to
the simulation time. The average power was fixed to 10 W and 12 out of 16
channelization codes were used.

Coding (Rate)   Modulation (Type)   SIR (dB)   Bitrate (Mbps)   Radio block (Bytes)
0.25            QPSK                -3.5       1.44             360
0.50            QPSK                 0         2.88             720
0.38            16QAM                3.5       4.32             1080
0.63            16QAM                7.5       7.20             1800

Table 1.2: Combinations of coding rates and modulation types
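Link adaptation over Table 1.2 amounts to a threshold lookup on the predicted SIR. The sketch below assumes that the most aggressive combination whose SIR level is met is always chosen, which is our reading of how the table is applied.

```python
# Link adaptation as a threshold lookup on the predicted SIR, following
# Table 1.2. That the most aggressive combination whose SIR level is met
# is chosen is our assumption about how the table is applied.

# (coding rate, modulation, SIR threshold dB, bit rate Mbps, radio block bytes)
MCS_TABLE = [
    (0.25, "QPSK",  -3.5, 1.44,  360),
    (0.50, "QPSK",   0.0, 2.88,  720),
    (0.38, "16QAM",  3.5, 4.32, 1080),
    (0.63, "16QAM",  7.5, 7.20, 1800),
]

def select_mcs(sir_db):
    """Return the last (most aggressive) entry whose threshold is met."""
    chosen = None
    for entry in MCS_TABLE:
        if sir_db >= entry[2]:
            chosen = entry
    return chosen  # None if even the -3.5 dB threshold is not met

mcs = select_mcs(5.0)  # meets the 3.5 dB threshold but not 7.5 dB
```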

Seven cells with omni-directional antennas and a 500 m radius were simulated
and the performance in the center cell was analyzed. A fixed delay was
used to model the delay over the wired links between the sources and base
stations. The delay, 75 ms, was the same in both directions. The wired links were
overprovisioned such that the bottleneck was the wireless link. When
reaching the base station, user data were stored in individual buffers, each
capable of holding 90 IP packets. This means that it is possible for a single user to
capture the wireless channel with SIR scheduling. With our simulation set-up,
packets can only be lost in the queue awaiting transport over the wireless link.
We varied the load by simulating either 50 or 65 (30% more) stationary
mobile terminals present in the coverage area. The nodes were distributed
uniformly over the seven cells. The effects of scheduling that show at this load
would probably first become apparent at higher loads with scheduling
algorithms better tuned than RR and SIR. Alternatively, the load could have been
varied by changing the average waiting times between transfers.
Every session consisted of a mobile downloading a file followed by a truncated
exponentially distributed waiting time with mean 2 seconds and a minimum
value of 0.5 seconds. The waiting time was initiated as soon as all the data had
reached the receiver.1 The file sizes were randomly chosen from nine possible
sizes, where the number of packets, i_n, is given by equation 1.4:

i_n = 2 ∗ i_{n-1} + 1, for n = 1, 2, ..., 9, i_0 = 1.    (1.4)

The relation between the frequencies with which the file sizes were likely to be
selected was 1:2:3:4:5:6:7:8:9, where 9 corresponds to the smallest file size. Two
relatively large file sizes have been included, i.e., 740950 and 1483350 bytes. The
reason was threefold: first, for the short file sizes slow start is essentially never
left; secondly, the behavior of the schedulers has a larger impact on longer file
transfers; and finally, TFRC is targeted at longer-lived sessions. A fixed payload
size of 1450 bytes was used, in order to create the same number of packets for
both TCP and TFRC given a certain transfer size.
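The file size distribution above can be reproduced as follows. The sketch assumes i_0 = 1 in the recurrence, since that choice makes the nine sizes end at 511 and 1023 packets, i.e., exactly the 740950- and 1483350-byte files discussed in the text.

```python
# Reproduction of the file size distribution; assumes i_0 = 1 in equation
# 1.4 (our reading), which makes the largest sizes come out as the 740950-
# and 1483350-byte files discussed in the text.
import random

PAYLOAD = 1450  # bytes per packet, as stated above

sizes_pkts = []
i = 1                   # i_0 = 1 (assumption)
for n in range(1, 10):  # n = 1, 2, ..., 9
    i = 2 * i + 1       # i_n = 2 * i_{n-1} + 1
    sizes_pkts.append(i)

sizes_bytes = [p * PAYLOAD for p in sizes_pkts]

# Selection frequencies 1:2:...:9, with weight 9 for the smallest size.
weights = list(range(9, 0, -1))

def draw_file_size(rng=random):
    return rng.choices(sizes_bytes, weights=weights, k=1)[0]
```

Note that 255, 511 and 1023 packets correspond to the 369750-, 740950- and 1483350-byte sizes that the loss-rate analysis in Section 4.1 refers to.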
TFRC does not include connection establishment or tear-down, which for
short flows results in higher throughput. Therefore, to enable comparisons, TCP
was configured to send data with the initial SYN segment. By setting the initial
window of TCP to two segments and sending data on the SYN, the window is
doubled as if no handshake had been made. A minimum retransmit timeout of 1
second and a timer granularity of 0.01 seconds were used for TCP.
Five minutes of system time were simulated in each run and all scenarios were
repeated twenty times. The random number generators giving the positions
of the mobiles, the starting times of the transfers and the file sizes were given
different seeds in each replication of the same scenario.

4 Results
When analyzing the material, we aggregated data from all the replications of
the same scenario. The data from the first 5 simulated seconds in each run were
removed to avoid initialization bias. Only performance in the center cell was
studied. In most cases the results obtained for both loads were similar; hence,
if nothing else is stated, the figures represent the case when there are 65 mobiles
present in the system. The confidence intervals are at the 95% level.

1 We reset the transport layer endpoints after receiving the last piece of data.
We have used two approaches in the analysis. First, we look
at transport layer events; thereafter, we study when data are available to the
scheduler.

4.1 Transport layer events


Packet loss – To TFRC, which offers unreliable transport, the packet loss
rate is an important metric. If the loss rate is too high, the data that reach
the receiver are not going to be useful. Bursty losses may also be problematic
depending on the application. However, when we study the share of packets
that was lost for TFRC, the purpose is to detect dependencies between the
transfer sizes and their loss rates that help our analysis. For instance, the
transitions between slow start and congestion avoidance are controlled by the
packet loss events and how they are detected. An inspection of the packet loss
rates experienced by TFRC reveals that there are basically three groups of file
sizes, which we will call small, medium and large.
Small: The six smallest transfer sizes fall in the first group. These flows do
not lose any packets and hence never leave slow start due to packet loss.
Medium: The 369750-byte flows form a group of their own. Approximately
80% of these transfers get through intact. The remaining 20% of the flows lose
less than a tenth of the data.
Large: For the two largest file sizes, the transition from slow start to con-
gestion avoidance cannot be avoided. In the case of 740950 bytes, the flows
all reach the maximum available capacity and about 15% have high loss rates
(20%), since the file sizes are still rather short. For the 1483350-byte flows the
loss rate stays below 10%, except in the case of SIR scheduling and 65 mobiles,
where 2% of the flows experience higher loss rates. These flows are long enough
to compensate for the high loss rates encountered when probing for an initial
estimate of the capacity.
We expect to find that the TCP flows exhibit a similar behavior, since the
slow start strategies are comparable.
Detection of packet losses – Long propagation delays potentially result
in each source having large amounts of data residing in the network. Only part
of this data is actually in the bottleneck buffer at a given time. If packets travel
closely together, buffer overflow will occur at lower send rates. TCP is window-
based and can send several segments back to back. TFRC, on the other hand,
spaces out the packets over a round trip time, thereby decreasing the probability
that a large part of the data in flight finds itself in the queue at the edge of the
wireless link simultaneously.
With TCP, the window has to be larger than the bottleneck buffer size to
sustain any losses, since new data must be acknowledged before a new segment is
released. It is most vulnerable to timeouts when quickly increasing the send rate
during slow start, since many segments can then be lost from the same window,

[Figure omitted: average number of timeouts vs. file size (Mbits), with curves
for SACK SIR, TFRC SIR, SACK RR and TFRC RR.]

Figure 1.1: Timeout events for TCP and TFRC.

such that the three duplicated acknowledgments needed for a fast retransmission
are not generated. When a timeout occurs, a parameter called
the slow start threshold is set to half the current window, forcing TCP into
congestion avoidance whenever the window size exceeds this value. Since TCP
increases its rate more slowly during congestion avoidance and the flows are buffered
individually, later timeouts are connected to decreased bottleneck capacity -
leading to an accumulation of segments in the sensitive buffer.
In Figure 1.1 the average number of timeouts per transfer size with TCP is
shown. The larger number of timeouts with SIR scheduling is caused by changes
in the available capacity due to competing sources and not the probing behavior
of TCP. Of the total number of timeouts with TCP and SIR scheduling close
to 70% were spurious.
Since it is enough in TFRC for one packet to reach the receiver for the next
acknowledgment to be sent, and since the timeout interval is relatively large,
no feedback timeouts are rare, see Figure 1.1. No timeouts of
this type were observed for TFRC with RR. With SIR scheduling there were
a few occurrences, but they were considerably fewer than the TCP retransmit
timeouts. The no feedback timeouts cannot be said to be spurious, since their
purpose is to prevent data from being sent continuously at the same rate when it is not
getting through. Fine-tuning the duration of this interval such that excessive
packet loss does not occur when the bandwidth is suddenly decreased is however
important, and so is finding a way to allow the flow to start over relatively quickly

[Figure omitted: average number of congestion events vs. file size (Mbits),
with curves for TFRC SIR, TFRC RR, SACK SIR and SACK RR.]

Figure 1.2: The average number of congestion events detected through later
sent packets arriving at the receiver in TFRC, or through three duplicate ac-
knowledgments in TCP.

when resources become available again.


Since there is no packet reordering possible with our network configuration,
the arrival of three duplicated acknowledgments in TCP usually confirms the
loss of at least one segment.2 TCP then has a chance to retransmit the
presumably lost segment before the timeout expires. For TFRC, we observe few
congestion events detected through later sent packets arriving, relative to the
loss rates. Hence, we conclude that the losses are often correlated and occur in
bursts during a round trip time. This type of event is more frequent for TFRC
for the two largest file sizes than the fast retransmits are for TCP, which can
be explained by TFRC being slower to reduce its send rate. The situation is
reversed for the 740950 byte files, which is likely to be a consequence of TCP
sending its segments back to back instead of spacing them out as TFRC does.
Thus, TCP encounters its first losses earlier, i.e., for a smaller file size. The three
dupack events and the packet losses that lead to congestion events in TFRC are
summarized in Figure 1.2. The confidence intervals for these events are narrow
and there are no apparent differences between the schedulers.

2 The exception is if a large number of retransmissions triggered the duplicate acknowledgments.

[Figure omitted: share of the slots vs. potential number of receivers in each
slot, with curves for SACK SIR 50, SACK RR 50, SACK SIR 65 and SACK RR 65.]

Figure 1.3: Distribution of the number of potential receivers for TCP.

4.2 Data availability


For system throughput, having data to be transferred at the highest data rate in
every slot represents the optimal situation. However, only the SIR scheduler can
take advantage of having several potential receivers with different SIR conditions
to choose between. A requirement for good throughput, which is independent of
scheduler characteristics, is that there should be at least one potential receiver
in each slot. Hence, we focus on the distribution of the number of potential
receivers for this study.
The distribution of the number of potential receivers for TCP is shown in
Figure 1.3 and for TFRC in Figure 1.4. As can be seen, there are few slots that
are not used. SIR scheduling gives a larger concentration around 1-2 potential
receivers than RR scheduling does and the right tail is thinner for SIR than for
RR.
In general the SIR scheduler gets more files through than the RR scheduler
with the same number of users in the system. The current application model,
where the waiting period is initiated as soon as the transfer has been com-
pleted, leads to high SIR users generating a larger part of the data with SIR
scheduling than with RR scheduling. This is not an unlikely scenario however,
since poor SIR users might wait for an indication of improved signal reception
before attempting transfers. The application model does however discriminate
against the RR scheduler and we would like to try other application models in

[Figure omitted: share of the slots vs. potential number of receivers in each
slot, with curves for TFRC SIR 50, TFRC RR 50, TFRC SIR 65 and TFRC RR 65.]

Figure 1.4: Distribution of the number of potential receivers for TFRC.

the future.
When comparing the number of bytes transferred with TCP and SIR schedul-
ing for 50 mobiles and RR for 65 mobiles, the difference in result is smaller than
when comparing the system throughput with the same number of mobiles for
the two schedulers. This indicates that an RR scheduler needs a larger number
of mobiles to generate the same offered load, i.e., number of transfers. With
TFRC the RR scheduler never reaches the same levels as the SIR scheduler.
Since TFRC is unreliable, the artifacts that might come of SIR scheduling, i.e.,
higher loss rates, do not lead to retransmissions or to as severe send rate re-
ductions. Therefore it is likely that a higher offered load can be sustained and
that the limiting factor may be the influence of the loss rate on the quality.

4.3 Transfer rates


The most important metric for a file transfer is the obtained transfer rate. The
transfer rates in Figure 1.5 put the previously observed events into perspective.
The difference between the two protocols is small, although the difference in
performance between the schedulers is large.
A metric used in [53] for determining whether the system has appropriate
settings is the 5th percentile transfer rate. According to this metric, the
observed transfer rates for the HSDPA channel are supposed to be on a level with
the Circuit Switched Equivalent for web browsing, which means that 95% of the

[Figure omitted: average bit rate (bps), with curves for SACK SIR, TFRC SIR,
TFRC RR and SACK RR.]

Figure 1.5: Average transfer rates for TCP and TFRC.

users should have a bit rate exceeding 50 Kbps. We found that this condition
was met for both protocols. When looking at the 5th percentile bit rates on a
per flow size basis, Figure 1.6 and Figure 1.7, we find that it is the small flows
that do not reach 50 Kbps. RR scheduling results in higher transfer rates for
this group of flows than SIR, but the RR scheduler operates at a lower offered
load with the current application model.

5 Discussion
Future studies include investigating a range of propagation delays for the wired
links in the path and different buffer strategies at the wireless channel. With
these additional dimensions in the evaluation follows the need to more accurately
track delay spikes and their influence on the probability of packet loss. Finding
appropriate buffer sizes that balance the risk of buffer overflow against long queuing
delays for wireless channels and different applications is non-trivial, especially if
TFRC and TCP are to co-exist in an environment where the available channel
capacity can vary substantially. In this study we used the same application
model for both TFRC and TCP; in the future we would like to include a model
of a streaming application for TFRC and look at other ways to distribute the
transfers among the mobiles.

[Figure omitted: 5th percentile bit rate (bps), with curves for SACK RR 50,
SACK RR 65, SACK SIR 50 and SACK SIR 65.]

Figure 1.6: 5th percentile transfer rates for TCP.

6 Conclusions
We have performed an initial investigation of how congestion control at the
transport layer leads to different physical channel utilization patterns in a high-
speed shared wireless cellular environment. We have found that, with an appli-
cation model where the waiting time is initiated as soon as a transfer is finished,
the observed load is the result of both the nature of the scheduling algorithms
for the shared environment and the congestion control algorithms.
As expected, the SIR scheduler gives higher average transfer rates at the
expense of fairness compared to the RR scheduler. Since high SIR users complete
their transfers faster with the SIR scheduler, a larger part of the generated load
comes from these users. In general, a higher load is created for the same number
of mobiles with SIR scheduling than with RR scheduling. The main reason being
that the channel is better utilized partly because the average transfer times are
shorter, which leads to a faster initialization of the following transfers.
The difference in transfer rates between TFRC and TCP is small, although
the system throughput is higher with TFRC. This can be explained by the
distributions of the number of potential receivers being similar; the
retransmissions performed by TCP take up capacity corresponding to the
additional transfers performed with TFRC.
We conclude that the common type of application model used in this study
leads to offered loads that depend on algorithms both at the transport and the

[Plot omitted: 5th percentile bit rate (bps) versus the potential number of receivers in each slot, for TFRC RR 50, TFRC RR 65, TFRC SIR 50 and TFRC SIR 65.]

Figure 1.7: 5th percentile transfer rates for TFRC.

physical layer. It is however not unreasonable, since users are likely to transfer
more data if they get fast responses.
Paper 2

On the TCP Minimum Retransmission Timeout in a High-speed Cellular Network

To be presented at EW 2005

Mats Folke, Sara Landström and Ulf Bodin, “On the TCP Minimum Retransmission
Timeout in a High-speed Cellular Network”. To be presented at European Wireless,
Nicosia, Cyprus, April 10-13 2005.

On the TCP Minimum Retransmission Timeout
in a High-speed Cellular Network
Mats Folke Sara Landström Ulf Bodin
Division of Computer Science and Networking
Luleå University of Technology
Sweden

Abstract

HS-DSCH is a high-speed shared radio channel for cellular mobile telephony.
The algorithm for distributing the channel resources together
with the characteristics of the radio medium result in delay variations.
The TCP minimum retransmission timeout has effectively prevented delay
variations within its range from deteriorating TCP performance. Recently,
however, this bound has been shortened in modern, widely deployed TCP
implementations. The aim of our study is to find out how a shorter minimum
retransmission timeout affects TCP performance over HS-DSCH.
We have implemented a model of HS-DSCH in the network simulator
ns-2. Our simulations cover a wide range of different minimum retransmis-
sion timeout values and loads, two types of schedulers (Round-Robin and
Signal-to-Interference-Ratio (SIR) scheduling) and two versions of TCP
(TCP Sack and NewReno).
Our results show that the number of spurious timeouts increases with
the load. The SIR scheduler causes fewer spurious timeouts in general.
The RR scheduler is however better than the SIR scheduler for longer
minimum retransmission timeouts. The minimum retransmission timeout
has consequences for goodput fairness, but it does not affect the total
system throughput. The studied TCP versions produced similar results.

1 Introduction
The High-Speed Down-link Shared Channel (HS-DSCH) in Wide-band CDMA
(WCDMA) release 6 has theoretical peak bit-rates for data services of 14 Mbps [38].
Moreover, delays considerably shorter than for other shared data channel tech-
nologies in previous releases of WCDMA are supported.
HS-DSCH is primarily shared in the time domain, where users are assigned
time slots according to a scheduling algorithm that runs independently at each
base station. The short Transmission Time Interval (TTI) of 2 ms enables fast
link adaptation, fast scheduling and fast Hybrid Automatic Repeat reQuest

(HARQ). The channel was designed for bursty Internet traffic, typical of web
browsing.
TCP (Transmission Control Protocol) ensures reliable transfer of HTTP
traffic. Avoiding delay spikes is important to TCP. In particular, delay spikes
may cause spurious timeouts, resulting in unnecessary retransmissions and mul-
tiplicative decreases in congestion window sizes as described by Inamura et
al.[32]. There are several mechanisms in HS-DSCH that can cause considerable
delay variations appearing as delay spikes to TCP.
In HS-DSCH, the data rate depends on the Signal to Interference Ratio
(SIR) of the receiving user. Consequently, fluctuations in the interference levels
lead to delay variations. SIR is affected by path-loss, fading and interference
from other transmissions. Schedulers aiming at optimizing system throughput
give precedence to users with high SIRs. With a Round Robin
(RR) scheduler the delay of an individual IP packet is determined both by the
number of active users and by the SIR of the receiving user.
Using the network simulator version 2 (ns-2)[44] we evaluate the performance
of TCP Sack[14], [42], [4] and TCP NewReno [16] for the RR and SIR scheduler
respectively. Modern implementations of TCP have a lower minimum bound on
the retransmission timer than the customary 1 second. In this paper we evaluate
the sensitivity of TCP regarding the setting of this minimum bound and its
impact on the number of spurious timeouts, fairness, goodput and throughput.

2 TCP fundamentals
In TCP the send rate is gradually increased and drastically decreased accord-
ing to its congestion control and avoidance mechanisms, thus providing the link
layer with an irregular flow of data. Typically, a TCP source in slow start
begins by sending two to four segments [1] and then waits for the receiver to
acknowledge them before releasing more data. The send rate is increased expo-
nentially as long as the acknowledgments keep arriving in time. This results in
TCP sources alternating between releasing bursts of data and being idle until
they have opened their congestion window enough to always have data buffered
for HS-DSCH1 . For short transfers, TCP may never reach such a window size.
When the first packet is lost, TCP leaves slow start and enters congestion avoid-
ance, where the send rate is increased linearly.
When a new segment creates a gap in the receive buffer (i.e. its segment
number is not consecutive with respect to previous segments’), the receiver
generates a duplicate acknowledgment indicating where the beginning of the first
gap is. If three duplicate acknowledgments are consecutively received, the TCP
source assumes that the bytes pointed at have been lost due to buffer overflow
somewhere along the data path. The missing bytes are retransmitted and the
congestion window is reduced to half its current size. This retransmission is
called a fast retransmit.
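As a rough sketch, the duplicate-acknowledgment bookkeeping described above can be written as follows. The class and variable names are our own, and the initial congestion window of 8 segments is an arbitrary illustration, not a value from any particular TCP implementation:

```python
DUPACK_THRESHOLD = 3  # three duplicate ACKs trigger a fast retransmit

class FastRetransmitSender:
    """Toy model of a TCP sender's duplicate-ACK bookkeeping."""

    def __init__(self):
        self.last_ack = 0   # highest cumulative ACK seen so far
        self.dup_acks = 0   # consecutive duplicates of last_ack
        self.cwnd = 8       # congestion window in segments (arbitrary start)

    def on_ack(self, ack_no):
        """Return the sequence number to fast-retransmit, or None."""
        if ack_no > self.last_ack:   # new data acknowledged
            self.last_ack = ack_no
            self.dup_acks = 0
            return None
        self.dup_acks += 1           # duplicate ACK: a gap at the receiver
        if self.dup_acks == DUPACK_THRESHOLD:
            self.cwnd = max(1, self.cwnd // 2)  # halve the window
            return self.last_ack     # retransmit the first missing segment
        return None

sender = FastRetransmitSender()
for ack in (1000, 1000, 1000, 1000):  # one new ACK, then three duplicates
    result = sender.on_ack(ack)
print(result)  # -> 1000, and cwnd is halved from 8 to 4
```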
1 We assume that the congestion windows and not the receiving windows limit the TCP sources’ sending windows and that HS-DSCH is the bottleneck.



For a fast retransmit to take place, at least three segments sent after the first
lost segment must arrive at their destination and trigger duplicate acknowledg-
ments. If segments at the end of a transfer are lost or multiple packet losses
from a window occur, there might not be enough segments left to trigger a fast
retransmit. The send window may also be too small to begin with. In such cases
the TCP source must rely on its timeout mechanism for recovery. If the oldest
segment is not acknowledged within a time frame called the retransmit timeout
(RTO), the TCP source starts over from a congestion window of one segment and
re-enters the slow start phase. It then retransmits the presumably lost segment.
The sender continuously samples the round trip time (RTT) and adjusts
the RTO. The RTO is based on the mean RTT and a factor accounting for
the fluctuations in the RTT. Traditionally, there has been a lower bound of 1
second on the RTO due to poor clock granularity. We will refer to this bound as
the minRTO. The clock granularity has however improved and therefore some
modern implementations have chosen to significantly reduce the lower bound.
For instance, Linux version 2.4 uses a minRTO of 200 ms. This might have
an impact on TCP performance over wireless links, where the lower bound
has shielded against delay spikes of comparable duration. Such delay
increases can occur if the available forwarding capacity rapidly decreases,
and they may cause the retransmit timer to expire prematurely.
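The RTO computation outlined above can be sketched with the standard smoothed estimator (gains 1/8 and 1/4, RTO = SRTT + 4 * RTTVAR) and a configurable minRTO clamp. The class name and the simplified first-sample rule are our own, not taken from any particular implementation:

```python
class RtoEstimator:
    """Sketch of a TCP-style RTT estimator with a minRTO clamp."""

    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self, min_rto=1.0):
        self.min_rto = min_rto  # 1.0 s traditionally, 0.2 s in Linux 2.4
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt):
        """Feed one RTT measurement (seconds) and return the new RTO."""
        if self.srtt is None:   # first measurement (simplified rule)
            self.srtt, self.rttvar = rtt, rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt
        return max(self.min_rto, self.srtt + self.K * self.rttvar)

est = RtoEstimator(min_rto=0.2)
print(est.sample(0.1))  # about 0.3: the variance term dominates at first
print(est.sample(0.1))  # a stable RTT lets the RTO shrink towards minRTO
```

With min_rto=1.0 the same RTT samples would all return 1.0, which illustrates how the traditional bound masks short delay spikes.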
With Selective Acknowledgments (SACKs) [14], [42], the receiver can inform
the sender about all non-contiguous blocks of data that have been received, thus
the sender knows which segments to retransmit. Without the SACK option
the sender does not know exactly which packets have been lost. TCP
NewReno [16] is the TCP variant recommended if one of the two communicating
TCP end points in a session does not support the use of SACK.
The NewReno algorithm is active during fast recovery, i.e., from the receipt
of three duplicate acknowledgments to a timeout or until all the data sent
has been acknowledged. In short, the NewReno algorithm considers each du-
plicate acknowledgment to be an indication of a segment leaving the network
and therefore the sender is allowed to send a new segment on each duplicate
acknowledgment. This variation of the TCP congestion recovery behavior is
more likely to keep the ack clock going during loss events than that of TCP
Reno, thereby avoiding a timeout. The difference compared to SACK-based
loss recovery [4] is that the NewReno sender does not know where the gaps in
the receive sequence are.

3 Method
The impact of different settings of the TCP retransmit timeout lower bound
(minRTO) has been evaluated through simulations. In this section we intro-
duce the simulation environment, thereafter the chosen evaluation metrics are
presented.

Traffic sources

25ms/5Gbps Mobile nodes

Figure 2.1: Simplified topology illustrating the connection between the traffic
sources and the mobile nodes.

3.1 Simulation Environment


A model of HSDPA has been implemented in ns version 2.27 [44]. The radio
model includes log normal shadow fading with a standard deviation of 8 dB and
exponential path loss with a propagation constant of 3.5. Self interference is
assumed to be 10 percent and the interference from simultaneous transmissions
within a user’s own cell is approximated to 40 percent. Code multiplexing for
up to three users in the same slot for a given cell is supported. The interference
from transmissions in other cells than a user’s own cell is dampened by distance.
The coding and modulation combinations, as well as the introduction of block
errors are described in [5]. No fast HARQ is implemented; instead, damaged
radio-blocks are immediately retransmitted.
When starting a simulation the mobile terminals are randomly distributed
according to a uniform distribution for the x-axis and the y-axis on a cell plan
consisting of seven cells. All cells have omnidirectional antennas and a radius
of 500 m. The traffic sources are at equal distance from the base stations and
the mobile users are associated with the closest base station. During a transfer
the mobile node moves with a speed drawn from a low mobility model [46]. All
directions are equally likely. Wrap-around is supported both for the moving
users and interference calculations.
A session consists of a user (mobile terminal) downloading a file followed by a
waiting time drawn from an exponential distribution with a mean of 1.5 seconds.
The waiting time is initiated as soon as the last data byte has reached the
receiver2 . The file sizes are drawn from a Pareto distribution with a mean of
25000 bytes and the shape parameter set to 1.1. The mobile node is also moved
to a new position each time it starts a new transfer. A simplified model of the
topology can be found in Figure 2.1.
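Assuming a classic Pareto distribution, the session parameters above (mean file size 25000 bytes, shape 1.1, mean waiting time 1.5 s) can be sampled as in the following sketch. The function name and the derivation of the minimum file size from the mean are our own:

```python
import random

MEAN_FILE_SIZE = 25000.0  # bytes, from the application model
SHAPE = 1.1               # Pareto shape parameter
MEAN_WAIT = 1.5           # seconds, mean of the exponential waiting time

# A classic Pareto(x_m, shape) distribution has mean shape*x_m/(shape-1),
# so the minimum file size x_m follows from the desired mean:
X_M = MEAN_FILE_SIZE * (SHAPE - 1) / SHAPE  # about 2273 bytes

def next_session(rng=random):
    """Draw (file_size_bytes, waiting_time_s) for one user session."""
    file_size = X_M * rng.paretovariate(SHAPE)  # paretovariate has x_m = 1
    wait = rng.expovariate(1.0 / MEAN_WAIT)
    return file_size, wait
```

Note that with a shape parameter as low as 1.1 the distribution is very heavy-tailed: most files are small, but rare large files dominate the offered load.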
We vary the load by setting the total number of users for the simulation.
Initial studies suggested that 20-50 users generate what can be regarded as low
load, 50-100 users moderate load, and above 100 users high load.
We initially used a range of up to 500 users, but we settled on 150 users as
a maximum. Above this point about 50% of the transfers had a goodput of
2 We reset the TCP endpoints, thus no tear down is performed, but connection establishment takes place for every flow.



less than 10 kbit/s, regardless of scheduler and minRTO setting. We consider
10 kbit/s to be too low for the web traffic that HS-DSCH was designed for.

3.2 Evaluation metrics


If the RTO is set too low the risk for premature timeouts is high. With timeout
intervals larger than necessary the sender might be idle for periods waiting for
the timer to expire. When studying the effect of different minRTO settings,
we count the number of spurious timeouts on a per-flow basis. Although ex-
tended idle periods negatively influence the transfer rates of individual flows,
they do not result in unnecessary retransmissions. We therefore consider spu-
rious timeouts more destructive for system performance and focus on them for
this study.
We expect spurious timeouts to decrease the fairness between users, since
users that have been hit by a spurious timeout perform double work. To
measure fairness we use the metric given by Equation 2.1, where x_i is the
goodput experienced by a particular flow i. This metric was suggested in [34].
f(x_1, x_2, \dots, x_n) = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2} \qquad (2.1)

We also evaluate the effect of changing the lower bound of the retransmission
timer on system performance. The objective is to maximize system goodput,
while maintaining fairness between the users. The system throughput is useful
when analyzing the goodput, since it gives an indication of the amount of traffic
offered to the system.
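Equation 2.1 is the fairness index of Jain et al. [34]; a direct implementation (function name ours) makes its range explicit: 1.0 when all flows get equal goodput, approaching 1/n when a single flow dominates.

```python
def jain_fairness(goodputs):
    """Jain's fairness index over per-flow goodputs (Equation 2.1)."""
    n = len(goodputs)
    total = sum(goodputs)
    return total * total / (n * sum(x * x for x in goodputs))

print(jain_fairness([100, 100, 100, 100]))  # -> 1.0 (perfectly fair)
print(jain_fairness([1, 1, 1, 3]))          # -> 0.75
```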

4 Results
In Figures 2.2 and 2.3 we clearly see that a longer minRTO results in a smaller
share of the flows suffering from spurious timeouts3 . By comparing Figures 2.2
and 2.3 we find that the SIR scheduler causes fewer spurious timeouts for a
shorter minRTO, however the RR scheduler is better (i.e. fewer spurious time-
outs) for a longer minRTO. We see that most delay spikes do not last for more
than 0.5 seconds, because for longer minRTOs the share of flows suffering from
spurious timeouts is virtually zero.
Different values of the minRTO do not result in any significant differences
in goodput fairness, except when using an RR scheduler at high loads. For this
case a longer minRTO is better than a short one, as shown in Figures 2.4 and 2.5.
We believe that this decrease in fairness is the result of the increase in spurious
timeouts. Comparing the two schedulers, we see that the RR scheduler produces
slightly higher fairness than the SIR scheduler for moderate load. Regardless
of scheduler and minRTO, the fairness steadily decreases as the load increases
above 75 users.
3 We have also looked at the total number of spurious timeouts; those results correspond with the ones depicted.



[Plot omitted: share of flows with at least one spurious timeout versus number of users, for minRTO values from 0.0 s to 1.0 s.]

Figure 2.2: SIR scheduling: The share of all flows experiencing at least one
spurious timeout using TCP Sack and a specified scheduler for different values
of minRTO. The confidence level is 90%. TCP NewReno gave similar results.


Figure 2.3: The results for RR scheduling corresponding to Figure 2.2.



[Plot omitted: fairness in goodput per flow versus number of users, for minRTO values from 0.0 s to 1.0 s.]

Figure 2.4: The fairness in goodput among different flows using TCP Sack and a
specified scheduler for different values of minRTO. The confidence level is 90%.
TCP NewReno gave similar results.


Figure 2.5: The results for RR scheduling corresponding to Figure 2.4.



[Plot omitted: average total throughput (bits/s) versus number of users, for minRTO values from 0.0 s to 1.0 s.]

Figure 2.6: The total throughput in the system using TCP Sack, averaged over
ten runs for different values of minRTO. The confidence level is 90%. TCP
NewReno gave similar results.

Figures 2.6 and 2.7 present the throughput for the whole system. For SIR
scheduling, the different values of the minRTO do not result in any differences
in throughput. We note a small difference in throughput when using an RR
scheduler at high loads.

5 Discussion
The results presented raise a key question: Why is RR scheduling more sensitive
to changes in the minRTO than SIR scheduling? Spurious timeouts occur when
the delay suddenly increases, such that a packet is delayed long enough for the
retransmission timer to expire. In our system, increased packet delays are the
result of intensified competition at the MAC layer.
With an RR scheduler the competition is intensified for all users whenever
a new user arrives to a cell, since they all compete on equal terms. However,
given the slow start behavior of TCP, the traffic of one new user is not enough
to create a delay spike. There must be several new users arriving within a short
period of time in order for any rapid increase in competition to occur. With SIR
scheduling the arriving users only compete with the users having worse SIR than
themselves. This means that if a number of users arrive at a cell, the likelihood
of all of them contributing to the competition observed by a particular user is


Figure 2.7: The results for RR scheduling corresponding to Figure 2.6.

lower for SIR scheduling than with RR scheduling.


The size of the files which the new users choose to download also affects the
outcome. The files must be big enough to capture a large number of time slots.
In our application file size distribution, small files are common and large files
rare. This makes it less probable that spurious timeouts would occur compared
to a scenario where a majority of the file sizes are large.
The effect of our file size distribution is different for the two schedulers. If a
high-SIR user arrives to a cell which uses SIR scheduling and begins to transfer
a small file, other users are not affected, since the new transfer will use few time
slots. On the other hand, if RR scheduling is applied, a new user would probably
consume more time slots since the RR scheduler does not try to optimize system
throughput.
To conclude, in order for delay spikes to occur when an RR scheduler is
used, it is sufficient that a fairly large number of users arrives at a cell. If a
SIR scheduler is used, the number of users arriving must be higher and the size
of their transfers larger (per user) than with RR scheduling to create the same
effect.
From the results it is likely that the delay spikes when using the RR scheduler
have a duration of less than 1 second. However, no such conclusion can be drawn
when we use the SIR scheduler. Either the delay spikes during SIR scheduling
are longer than 1 second, so that a minRTO of 1 second cannot shield against
them, or the SIR scheduler, which optimizes system throughput, is able to
process more data faster. If we were to increase

the load beyond the current point we speculate that the minRTO might have
an effect for SIR scheduling. The average round trip times for the same number
of users, are shorter for SIR scheduling than for RR scheduling, indicating that
the SIR scheduler is more efficient. This is more evident during high loads.
Furthermore, we have compared cumulative distributions of the RTTs for the
two schedulers. In general the RTT for SIR scheduling is shorter, but there are
several occurrences of very long RTTs compared to RR scheduling.
We have only studied TCP traffic and even though TCP probably will be
the protocol used by most of the applications, it is of interest to discuss its
performance when competing with traffic using UDP. A UDP traffic source may
very well start sending at a high rate compared to the start-up behavior of TCP.
This means that a single, or a few high-rate UDP flows can cause sudden service
interruptions interpreted as delay spikes by the TCP flows transferring data in
the same cell.
To conclude, we see that there are differences in the number of spurious
timeouts when using the two schedulers for the minimum retransmission bounds
studied and for our application model. These differences do not seem to
have any major effect on fairness, goodput or throughput, nor does the choice
of TCP version.
Paper 3

Buffer management for TCP over HS-DSCH

Technical report, Luleå University of Technology

Sara Landström and Lars-Åke Larzon, “Buffer management for TCP over HS-DSCH”.
Technical report, LTU–TR–05/09–SE, Luleå University of Technology, Sweden, Febru-
ary 2005.

Buffer Management for TCP over HS-DSCH
Sara Landström† , Lars-Åke Larzon†,‡ , Ulf Bodin†

Luleå University of Technology

Uppsala University

Abstract

In this paper we investigate the influence of buffer management for TCP
on the performance of the High Speed Downlink Shared Channel (HS-DSCH)
introduced in WCDMA release 5. HS-DSCH is a shared channel, but user
data is buffered individually prior to the wireless link. Three queue
management principles, namely passive queuing, the Packet Discard Prevention
Counter (PDPC) method and the Random Early Detection (RED) algorithm,
were evaluated for a number of buffer sizes and scenarios. Also, a
buffer large enough to prevent packets from being lost was included for
reference.
For round robin (RR) scheduling of radio blocks, PDPC and the passive
approach, which both manage to keep the buffer short, gave the best
system goodput and, together with the excessively large buffer, the
shortest average transfer times. With signal-to-interference ratio (SIR)
scheduling, the strategy of avoiding all packet losses resulted in a lower
system goodput than the short buffers.
As illustrated in this article, peak transfer rates may not be achieved
with very small buffers, but buffers of 10-15 IP packets seem to represent
a good trade-off between transfer rates, delay and system goodput. We
would like to investigate how to make use of system parameters, such as
the total amount of data currently offered to HS-DSCH, to regulate
individual buffer sizes.

1 Introduction
On the Internet, buffering is usually performed on a per-link basis, except when
it comes to wireless cellular systems, where per-user queuing is common practice.
Previous studies of buffer management over wireless cellular systems focus on
dedicated channel types [59], [10].
In this paper we study how appropriate buffer management can improve
performance of HS-DSCH, which is a shared channel, when transfers are made
using TCP as transport protocol. TCP connections in the slow start phase
alternate between sending data and waiting for acknowledgments. Improved
link utilization may therefore be achieved through time division. With the

increased amount of data services being offered over wireless cellular networks, it
is probable that the shared channel concept will become increasingly important.
In low load situations buffer management for HS-DSCH primarily targets
user experience in terms of transfer rates. When the traffic load increases,
buffer management can help to ensure that the resources are spent wisely, since
it interacts with the TCP congestion control and avoidance mechanisms.
One of the key issues is that we do not want to transfer stale data or multiple
copies of the same data over the link. It is therefore likely that the queue
should be kept small to prevent data from aging in the queue and unnecessarily
triggering timeouts. Meanwhile, we want to minimize the number of packets
that have to be dropped in order to keep the buffer size small. We also want to
enable high transfer rates and ensure that data is available to be transferred.
We will now present the main features of HSDPA and relate them to TCP
and current buffer management principles. We also expand on the different
aspects of buffering and previous work before presenting the results from a
simulation study of queue management for HS-DSCH.

1.1 Radio resource management


HS-DSCH is primarily shared in the time domain, but also through code divi-
sion. It supports theoretical peak data rates in the order of 14 Mbps. There
are basically three techniques that enable these increased data rates: fast link
adaptation, fast hybrid ARQ and fast scheduling. These techniques all rely on
a rapid adaptation of the transmission parameters to the instantaneous channel
conditions.
In addition to increased data rates compared to earlier versions of shared
channels in WCDMA, lower delays can be achieved. Users are scheduled on a
2 ms basis, which is the length of the transmission time interval (TTI).
Enabling high link utilization and ensuring low delays are examples of
requirements that may conflict. One of the purposes of our evaluation is
therefore to determine which factors must be considered when performing
buffering for HS-DSCH and what the trade-offs are. The scheduler is central
to this problem, since it largely determines how resources are distributed and
thus the available bit rate from a user perspective.
In our evaluation we use two different types of schedulers. The signal-to-
interference ratio (SIR) scheduler chooses the next data receiver based on who
has the most favorable signal conditions. The round robin (RR) scheduler,
attempts to distribute the time slots fairly by assigning a slot to each active
user in turn. The SIR and the RR scheduler represent two extremes from a
time fairness perspective and are often used as reference points, as in [35].
Most schedulers are hybrids of SIR and RR scheduling, hence any conclusions
drawn may apply to algorithms that combine their characteristics.
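The two extremes can be contrasted in a per-TTI selection step. This is a sketch only; real schedulers must also handle code multiplexing, empty queues and SIR estimation, and the names and SIR values below are illustrative:

```python
from collections import deque

def rr_pick(queue):
    """Round robin: serve the user at the head of the queue, then requeue."""
    user = queue.popleft()
    queue.append(user)
    return user

def sir_pick(users, sir):
    """SIR scheduling: serve the active user with the best current SIR."""
    return max(users, key=lambda u: sir[u])

active = deque(["a", "b", "c"])
sirs = {"a": 3.0, "b": 12.0, "c": 7.0}  # illustrative SIR values

print([rr_pick(active) for _ in range(4)])  # -> ['a', 'b', 'c', 'a']
print(sir_pick(list(active), sirs))         # -> 'b', as long as b is active
```

The sketch makes the fairness trade-off visible: RR gives every active user an equal share of slots regardless of SIR, while SIR scheduling starves low-SIR users whenever a better-placed user has data to receive.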
The scheduling algorithm controls access to the channel, while the bottleneck
buffer strategy influences the amount of data for a given user that is available
to the scheduler. TCP regulates its send rate by interpreting congestion signals

in the form of lost packets and by the use of a timer1 . The buffer strategy
interacts with these mechanisms through its drop pattern and the delay that it
induces. Choosing the appropriate buffer strategy is thus important to ensure
high channel utilization and acceptable transfer rates.

1.2 Buffer management


The three main considerations in buffer management are: an appropriate buffer
size, a suitable algorithm that determines when a packet needs to be dropped,
and a dropping policy. We will begin by introducing the buffer management
algorithms.

Passive Queuing
The traditional approach to buffering is to set an absolute limit on the amount
of data that can be buffered. Packets are then dropped when the buffer capacity
is exhausted. This strategy is known as passive queue management.
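Passive queuing amounts to a drop-tail buffer; a minimal sketch (class name ours):

```python
class DropTailQueue:
    """Passive queue management: drop arrivals when the buffer is full."""

    def __init__(self, limit):
        self.limit = limit  # maximum number of queued packets
        self.queue = []

    def enqueue(self, pkt):
        """Return True if the packet is queued, False if it is dropped."""
        if len(self.queue) >= self.limit:
            return False
        self.queue.append(pkt)
        return True

q = DropTailQueue(limit=3)
print([q.enqueue(i) for i in range(5)])  # -> [True, True, True, False, False]
```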

Random Early Detection (RED) gateways


RED gateways belong to the class of active queue management principles and are
currently the recommended strategy for use on the Internet [6]. RED is primarily
intended for a scenario where multiple flows traverse the same queue, therefore
a probabilistic approach to dropping was taken in order to avoid biases and
global synchronization.
The original algorithm compares the average queue size against one lower
threshold, t min, below which no packets are dropped, and an upper threshold,
t max, above which all packets are dropped. The drop probability at t max is
max p. In between the thresholds, the exact dropping probability depends on
the average queue size, avg, and the number of packets that have arrived since
the last packet was dropped, count. To separate the packet drops in the time
domain, the packet dropping probability based solely on the average queue size,
p_b = \frac{max\_p \, (avg - t\_min)}{t\_max - t\_min}, \qquad (3.1)

is adjusted by

p_a = \frac{p_b}{1 - count \cdot p_b}, \qquad (3.2)

yielding the final dropping probability p_a.
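The two-step computation can be sketched as follows. The EWMA averaging of the queue size and the guard against count * p_b approaching 1, both present in full RED implementations, are omitted here:

```python
def red_drop_probability(avg, count, t_min, t_max, max_p):
    """Drop probability of the original RED algorithm (Eqs. 3.1 and 3.2)."""
    if avg < t_min:
        return 0.0  # below the lower threshold: never drop
    if avg >= t_max:
        return 1.0  # above the upper threshold: always drop
    p_b = max_p * (avg - t_min) / (t_max - t_min)
    # The adjustment spaces drops out: the more packets admitted since
    # the last drop (count), the likelier this packet is to be dropped.
    return p_b / (1 - count * p_b)

print(red_drop_probability(avg=10, count=0, t_min=5, t_max=15, max_p=0.1))
# -> 0.05: halfway between the thresholds, no spacing adjustment yet
```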
In [59] the advantages and disadvantages of basing decisions on the average
queue size were discussed in relation to the outgoing link capacity. The larger
the link bandwidth, the less importance each additional packet has in terms of
delay in the buffer and a slower reaction to changes in the queue size can be
tolerated. However, wireless links can have relatively low bandwidths compared
1 The network buffers can also set a bit in the IP header to signal congestion, which is referred to as Explicit Congestion Notification (ECN).



to wired links, which means that each packet can add substantial delay and slow
down loss recovery.
We argue that whether or not to base decisions on the average queue size also
depends on the number of flows being handled. In the simple case of a single
flow, overbuffering can be detected given knowledge of the transport protocol
and the current buffer level for a given bandwidth*delay product. Essentially,
a packet should be dropped as soon as the lower threshold is exceeded to get a
prompt send rate reduction in slow start. Thereafter, equally spaced drops are
preferable for the probing behavior of TCP.
With RED the likelihood of losing a packet is high close to the upper thresh-
old, but rather low close to the lower threshold. Increasing the dropping prob-
ability will only marginally increase the likelihood of dropping a packet at the
right moment, while also increasing the risk of losing multiple packets from the
same TCP window. Figure 3.1 illustrates this relation. Dropping more than one
packet from a TCP window complicates loss recovery [14].

[Plot omitted: RED dropping probability versus queue size in IP packets, for max p of 10% and 100% and t max at 3*t min and 4*t min, with t min set to 5 IP packets.]

Figure 3.1: RED dropping probabilities with different maximum drop proba-
bilities and relations between the lower and the upper thresholds. The lower
threshold is here set to 5 IP packets.

A change has later been made to the RED algorithm [15] in order to decrease
the sensitivity to the parameter settings. Instead of dropping all packets
when the average queue size exceeds t_max, the dropping probability is slowly
increased from max_p to 1 between t_max and 2*t_max. An evaluation of this
modified algorithm can be found in [56].
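The relation between the thresholds and the dropping probability described
above can be sketched as follows. This is an illustrative Python sketch of the
"gentle" RED variant, not the ns-2 implementation; the function and parameter
names are chosen for readability:

```python
def red_drop_probability(avg_queue, t_min, t_max, max_p):
    """RED dropping probability as a function of the average queue size.

    Implements the 'gentle' variant: the probability rises linearly from
    0 to max_p between t_min and t_max, then from max_p to 1 between
    t_max and 2 * t_max, instead of jumping straight to 1 at t_max.
    """
    if avg_queue < t_min:
        return 0.0
    if avg_queue < t_max:
        # Original RED region: linear ramp from 0 to max_p.
        return max_p * (avg_queue - t_min) / (t_max - t_min)
    if avg_queue < 2 * t_max:
        # Gentle region: linear ramp from max_p to 1.
        return max_p + (1 - max_p) * (avg_queue - t_max) / t_max
    return 1.0
```

With t_min = 5, t_max = 15 and max_p = 0.1, the probability is 0 below 5
packets, 0.1 at 15 packets, and reaches 1 only at 30 packets.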
Buffer management for TCP over HS-DSCH 51

Packet Discard Prevention Counter (PDPC)


In [59], it was shown that the Packet Discard Prevention Counter (PDPC)
method outperforms RED gateways [22] and passive queuing schemes for TCP
over dedicated 3G channels. PDPC takes advantage of there being only one or
a few flows sharing the buffer immediately in front of the 3G link and considers
the congestion control and avoidance mechanisms of TCP. In addition to low
statistical multiplexing, it was assumed that the wireless hop is limiting the
transfer rate when designing PDPC, which simplifies the configuration of the
buffer parameters.
Assuming that the buffer is dedicated to one user, a deterministic approach
that does not require more knowledge than RED can be used, without risking
biases against certain connections and global synchronization. PDPC utilizes
a counter to inflict packet drops regularly when the instantaneous queue size
is larger than the lower threshold, t_min. In [58], t_min is set to the
estimated pipe capacity; the counter n, which largely determines the spacing
between packet drops, is set to 2*t_min; and the upper threshold, above which
all packets are dropped, to 4*t_min. Figure 3.2 illustrates the relations
between the dropping probabilities and the threshold values for the discussed
buffer principles.
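The counter-based dropping can be sketched as follows. This is a simplified
illustration built from the parameter relations given in the text (t_min, n =
2*t_min, upper threshold 4*t_min); the exact state chart in [59] differs in
detail:

```python
class PDPCBuffer:
    """Simplified sketch of the Packet Discard Prevention Counter idea.

    When the instantaneous queue exceeds t_min, one packet is dropped and
    the counter then prevents further drops for the next n accepted
    packets; above the upper threshold every packet is dropped.
    """

    def __init__(self, t_min):
        self.t_min = t_min          # e.g. the estimated pipe capacity
        self.n = 2 * t_min          # spacing between inflicted drops
        self.t_upper = 4 * t_min    # hard limit: drop everything above
        self.queue = []
        self.counter = 0            # accepts remaining before next drop

    def enqueue(self, packet):
        """Returns True if the packet was queued, False if dropped."""
        if len(self.queue) >= self.t_upper:
            return False
        if len(self.queue) >= self.t_min:
            if self.counter == 0:
                self.counter = self.n   # re-arm the prevention counter
                return False            # inflict a single drop
            self.counter -= 1
        self.queue.append(packet)
        return True
```

With t_min = 5, the sixth arriving packet is dropped and the following ten are
admitted before the counter allows another drop.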

[Figure: loss probability (y-axis) vs. buffer size in IP packets (x-axis) for
the passive, PDPC and RED schemes, with t_min and t_max marked]

Figure 3.2: A graphical comparison of the three queuing principles with the
dropping probability on the y-axis. The proportions in this figure are not
exact.

For HS-DSCH, it is not necessarily the wireless hop that dominates the pipe
capacity, as was assumed for the 3G links in the previous studies [59], [10].
The actual radio link round trip time is short and the available bit rate can
vary substantially, which means that other guidelines for how to set t_min have
to be applied.

Dropping Policies
Data may be dropped from the tail or the front of the queue. A packet may
also be randomly selected for dropping. Random selection is mainly of interest
when the buffer contains packets from several transfers and users. In such a
case we want to distribute the packet losses among the flows and primarily drop
packets belonging to flows that occupy a large share of the buffer, without
having to keep track of individual flows. Since each buffer
is dedicated to one user in the case of HS-DSCH we will not consider random
dropping further.
In [63], the drop-from-front scheme was shown to give a shorter average
queuing delay than drop-from-tail for passive buffering. The decrease in delay
is roughly proportional to the fraction of packets dropped, since dropping from
the front decreases the service time.
Another motivation for the use of drop-from-front is that the fast retransmit
mechanism of TCP can be exploited to convey the congestion signal faster to
the sender as proposed in [39], which for instance can help to avoid a large slow
start overshoot. A large buffer overflow in slow start has been shown to be a
problem in low statistical multiplexing environments [25].
Finally, if the passive buffer only keeps data for one transfer the drop-from-
front approach ensures that there are always enough segments to trigger a fast
retransmit following the dropped segment (assuming that the buffer can hold
at least three segments).
Although we have discussed the dropping policies from the perspective of
passive queue management, most buffer management algorithms can be arbi-
trarily combined with a drop policy. In this study we consider passive buffering
with a drop-from-tail scheme (DT), passive buffering where packets are dropped
from the front (DF), RED with drop-from-front (RED) and finally PDPC with
drop-from-front (PDPC). Buffer sizes are measured in IP packets. The notation
“DT 4” translates to a passive queue management algorithm with packets being
dropped from the tail and room for at most 4 IP packets.
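The two passive policies can be sketched as a single queue class. This is a
minimal illustration of the DT and DF behavior, not the ns-2 implementation:

```python
from collections import deque

class PassiveBuffer:
    """Passive (size-limited) buffer with a configurable drop policy.

    policy='DT' drops the arriving packet when the buffer is full;
    policy='DF' instead drops the packet at the head of the queue and
    admits the new arrival, so the congestion signal sits at the front
    of the queue and reaches the TCP sender sooner.
    """

    def __init__(self, size, policy='DT'):
        self.size = size
        self.policy = policy
        self.queue = deque()

    def enqueue(self, packet):
        """Returns the dropped packet, or None if nothing was dropped."""
        if len(self.queue) < self.size:
            self.queue.append(packet)
            return None
        if self.policy == 'DF':
            dropped = self.queue.popleft()  # drop from the front
            self.queue.append(packet)
            return dropped
        return packet                        # drop from the tail
```

For a full buffer of size 3 holding packets 1-3, a fourth arrival is dropped
under DT, whereas under DF packet 1 is dropped and packet 4 is admitted.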

2 Evaluation
We use simulations to illustrate the effects of buffer management over a shared
channel. The data has been obtained through simulations using the Network
Simulator version 2.27 (ns-2) [44]. For the simulations the PDPC algorithm was
implemented according to the state chart in [59] and the model of HS-DSCH,
first used in [5], was extended by wrap-around for interference calculations and
moving users. See [3] for an explanation of wrap-around for moving users.
The transport protocol is TCP SACK, as implemented in the ns-2 module tcp-
sack1. The connection set-up, but not the tear-down, was simulated. Based on
the Ethernet MTU of 1500 bytes, the TCP segment size was set to 1460 bytes.

2.1 Individual buffers


Our initial scenario was chosen to illustrate the behavioral differences of RED,
PDPC and passive queuing and is similar to the scenario investigated in [58].

Simulation model
The network topology is shown in Figure 3.3. Instead of a 3G link, we use a
wired link with a corresponding fixed delay and bandwidth. The purpose is to
illustrate the behavior of the queue management principles when the buffer is
dedicated to one flow. Knowledge of the general characteristics, such as the
variations in queue length and drop patterns, is to support our evaluation of
the buffer management strategies for HS-DSCH.

[Figure: a service provider connected to the user over a 70 ms link followed
by a 64 kbps link with 60 ms delay]

Figure 3.3: The network topology for the simulations of the dedicated channel.
The bandwidth of the first link is over-provisioned.

Parameter      Setting   Explanation
thresh         t_min     Size of the passive buffers.
maxthresh      4*t_min   Distance between t_max and t_min.
drop_front     true      Drop packets from the front.
mean_pktsize   1500      The size of our TCP segments.
q_weight       1         Base decisions on the instantaneous queue size.
linterm        10        Drop every 10th packet at t_max.
gentle         true      Slow increase of the loss rate after t_max
                         (the default value).
limit          8*t_min   Absolute maximum size of the buffer.

Table 3.1: Configuration of RED parameters.
The buffer size is given in IP packets and the maximum queue length for
the passive buffer scheme corresponds to t_min in the active queue management
algorithms. Both RED and PDPC were set to drop packets from the front. The
relation between t_min and the other parameters in PDPC follows the description
in Section 1.2.
RED is the most complex algorithm of those investigated and it includes a
random element. The configuration of the RED parameters is given in Table 3.1.
For comparison purposes, the distance between the two thresholds, t_min and
t_max, and the maximum dropping probability have the same settings as in [59].
Each TCP transfer was 250 kbytes.

Results
The trend for tail drop is that the number of packets lost increases with
the buffer size up to a capacity of about 40 IP packets, see Figure 3.4.
The reason is that the slow start overshoot is potentially bigger the larger
the buffer is. At larger buffer sizes the drops occur towards the end of the
transfer and thus fewer segments are dropped. However, dropping segments late
is costly, since a timeout is often necessary to recover, which is reflected
by the noticeably
increased transfer times in Figure 3.5(a).

[Figure: number of packets lost (y-axis) vs. buffer size in number of IP
packets (x-axis) for DT, RED, DF and PDPC]

Figure 3.4: Number of packets dropped as a function of the buffer size.


RED drops fewer packets than drop-tail, but the transfers are not always com-
pleted faster. The reason is that the RED queue reacts much later to congestion
and even if a fast retransmit is made, there are often too many packets ahead
in the queue for the transport layer retransmission to reach the receiver in
time to prevent a timeout. The average queuing delay gives an indication of
the queue size that the buffer algorithm operates at. From Figure 3.5(b) we
conclude that RED results in a larger average queue than the other investigated
strategies.
In terms of both packet losses and transfer times, PDPC gives the best per-
formance, closely followed by drop-from-front, which suffers a few more losses
than PDPC when exiting slow start.

Discussion

RED is difficult to configure. By reducing the distance between the upper and
the lower thresholds, the average queuing delay can be reduced, but instead we
increase the risk of dropping closely spaced packets. Another alternative would
be to disable the gentle mode, which allows for a slow increase of the dropping
probability between t_max and 2*t_max. We kept the configuration we had in
this experiment, since our focus is not on optimizing any particular algorithm,
but rather on finding general guidelines that apply for HSDPA.
We repeated the experiments with a faster outgoing link and different delays.
When the bandwidth is higher and the delays shorter, loss recovery is faster and
thus has less effect on the transfer times as can be expected.

[Figure: (a) transfer time in seconds and (b) average queuing delay in seconds
(y-axes) vs. buffer size in number of IP packets (x-axis) for DT, PDPC, RED
and DF]

(a) Transfer times as a function of the buffer size.

(b) The average queuing delay as a function of the buffer size.

Figure 3.5: Buffers dedicated to one user.



2.2 HSDPA goodput


For a dedicated channel, what primarily determines the performance from a
system perspective is how long the user keeps the channel. The activity degree
has an influence on other users, since power is a shared resource and each
transmission also generates interference.
When the channel is shared in time, users compete for time slots. The
amount of double work that is brought about through queue management thus
influences system goodput when resources are scarce. We define useful data as
the data that must reach the receiver for the transfer to be completed.
Replicated application layer data may reach the receiver as a consequence of
transport layer retransmissions.
We evaluate the system performance by studying the system goodput per
second and cell. System goodput is the amount of unique application layer data
that the system has transferred.
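The metric can be sketched as follows; the (unique_bytes, duplicate_bytes)
per-transfer log format is a hypothetical assumption for illustration, not the
format used in the simulator:

```python
def system_goodput(transfers, sim_time, n_cells):
    """System goodput in bits per second and cell.

    `transfers` is assumed to be a list of (unique_bytes, duplicate_bytes)
    tuples, one per transfer. Only the unique application layer data
    counts, so transport layer retransmissions that deliver replicated
    data do not inflate the figure.
    """
    unique_bits = sum(unique * 8 for unique, _dup in transfers)
    return unique_bits / (sim_time * n_cells)
```

For example, two 250 kbyte transfers completed during a 300 second simulation
over 7 cells yield 500000 * 8 / (300 * 7) bits per second and cell, regardless
of any duplicate bytes delivered.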

Simulation model
The application model determines the results to a large extent. For instance,
if most files are small enough to fit into the buffer, the dropping strategy
never comes into play. In this section the effects of two different file size
distributions are studied. In the first scenario, 250 kbyte TCP transfers are
made; in the second, file sizes are drawn from a long-tail distribution where
the majority of the transfers are short.
A fixed number of mobile users are spread out over the simulation area. New
sessions are generated independently of the perceived transfer rates, through a
session generator for which the average waiting time between sessions can be
configured. The waiting time is uniformly distributed. The destination is picked
randomly among the idle users. If there is no idle user, the session is dropped.
The session generator enables comparisons to be made at a reasonably sim-
ilar offered load as opposed to an application model where each user generates
its next session after a waiting time that is initiated when the previous transfer
has been concluded. In the latter case, a higher average transfer rate results in
more transfers being generated. Even with the session generator a system with
low transfer rates has less ability to accept the offered sessions, since all the mo-
bile users may be occupied. System goodput captures the results of the transfer
rates and the degree to which the system performs useful work. Each simula-
tion corresponds to 5 minutes simulated time and each scenario was repeated
ten times.
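The session generator described above can be sketched as follows. The fixed
transfer duration is a hypothetical stand-in for the simulated transfers, used
only to mark users as busy:

```python
import random

def generate_sessions(users, sim_time, mean_wait):
    """Sketch of the session generator.

    Waiting times between new sessions are uniformly distributed with the
    configured mean; the destination is picked randomly among idle users
    and the session is dropped when every user is busy. `busy_until` maps
    each user to the time at which it becomes idle again.
    """
    TRANSFER_TIME = 10.0  # assumed fixed busy period per session
    t, started, dropped = 0.0, 0, 0
    busy_until = {u: 0.0 for u in users}
    while t < sim_time:
        t += random.uniform(0, 2 * mean_wait)  # uniform, mean = mean_wait
        idle = [u for u, free_at in busy_until.items() if free_at <= t]
        if not idle:
            dropped += 1          # no idle user: the session is dropped
            continue
        busy_until[random.choice(idle)] = t + TRANSFER_TIME
        started += 1
    return started, dropped
```

Because sessions arrive independently of the perceived transfer rates, a slow
system shows up as dropped sessions rather than as a lower offered load.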
The cell plan consists of seven cells with omnidirectional antennas and a
500 m cell radius. Initially, the mobile units are spread uniformly in the
plane within a circle enclosing the cell plan. For simplicity, a mobile is
associated with the base station to which it is closest in distance. A
hand-over only results in one missed transmission opportunity.
The performance is sensitive to radio conditions and positions of the mobile
users. Therefore a mobile unit is given a position, speed and direction for each
new session assigned to it. The speed is taken from a pedestrian and low
mobility speed distribution as shown in Figure 3.6 and recommended in [46],
whereas any direction is equally likely and positions are chosen as when
initializing the simulation.

[Figure: histogram of the number of users (y-axis) over speed in kph (x-axis)]

Figure 3.6: The low speed and mobility model.
The deterministic loss in signal strength due to distance is assumed to be
exponential with a propagation constant of 3.5. The location dependent path
loss, referred to as shadow fading, is normally distributed in dB with a standard
deviation of 8 dB and there is a 0.5 correlation between base stations. The
autocorrelation profile for the shadow process is first order negative exponential
and we use a correlation distance of 40 m.
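The shadow fading process with a first-order negative exponential
autocorrelation profile can be sketched as an autoregressive process along the
mobile's track. This is a simplified illustration that omits the 0.5
correlation between base stations:

```python
import math
import random

def shadow_fading_track(n_steps, step_m, sigma_db=8.0, d_corr_m=40.0):
    """Shadow fading samples along a track, in dB around the mean path loss.

    A first-order autoregressive process: between two positions step_m
    metres apart the correlation is rho = exp(-step_m / d_corr_m), which
    gives the negative exponential autocorrelation profile with the
    stated correlation distance. The marginal distribution is normal in
    dB with standard deviation sigma_db.
    """
    rho = math.exp(-step_m / d_corr_m)
    s = random.gauss(0.0, sigma_db)
    track = [s]
    for _ in range(n_steps - 1):
        # Innovation scaled so the stationary variance stays sigma_db**2.
        s = rho * s + math.sqrt(1 - rho * rho) * random.gauss(0.0, sigma_db)
        track.append(s)
    return track
```

With a 10 m step and a 40 m correlation distance, consecutive samples are
correlated with rho = exp(-0.25), about 0.78.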
Multi-path fading leads to self interference and loss of orthogonality when
data for several users are transmitted simultaneously within a cell using code
multiplexing. These phenomena are modeled by constants, which have the values
0.1 and 0.4 respectively.[2] All transmissions in other cells contribute to
the interference level.
In Table 3.2, the combinations of coding rates and modulation types that are
available in the simulator are summarized. We assume that 12 out of 16 codes
and a power of 10 W have been allocated to HS-DSCH. Code multiplexing is
possible for up to three users in one time slot and the block errors are
uniformly distributed. Lost radio blocks are immediately retransmitted.

[2] A value of 1 would mean that all orthogonality is lost.

Coding (rate)   Modulation (type)   SIR (dB)   Bit rate (Mbps)   Radio block (bytes)
0.25            QPSK                -3.5       1.44               360
0.50            QPSK                 0.0       2.88               720
0.38            16QAM                3.5       4.32              1080
0.63            16QAM                7.5       7.20              1800

Table 3.2: Combinations of coding rates and modulation types.
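Link adaptation based on Table 3.2 can be sketched as selecting the
highest-rate combination whose SIR requirement is met. This is an illustrative
simplification; the real system also weighs in block error targets:

```python
# (coding rate, modulation, required SIR [dB], bit rate [Mbps], block [bytes])
MCS_TABLE = [
    (0.25, "QPSK", -3.5, 1.44, 360),
    (0.50, "QPSK", 0.0, 2.88, 720),
    (0.38, "16QAM", 3.5, 4.32, 1080),
    (0.63, "16QAM", 7.5, 7.20, 1800),
]

def select_mcs(sir_db):
    """Pick the highest-rate entry whose SIR requirement is met.

    Returns None when even the most robust combination cannot be used.
    The table is ordered by increasing SIR requirement, so the last
    feasible entry is the fastest one.
    """
    best = None
    for entry in MCS_TABLE:
        if sir_db >= entry[2]:
            best = entry
    return best
```

A user at 4 dB SIR would, for instance, be served with rate-0.38 coded 16QAM
at 4.32 Mbps, while a user below -3.5 dB cannot be scheduled at all.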

[Figure: the service provider connected through a 25 ms fixed link to the
bottleneck buffer in front of the radio interface serving the users]

Figure 3.7: The network topology for the simulations of HS-DSCH.

Since TCP has a bias against long round trip time connections, the server
was placed at the same distance from all base stations. This prevents the TCP
bias from affecting the results. The one-way propagation delay between the air
interface and the server was fixed to 25 ms in both directions. The topology is
depicted in Figure 3.7.
In reality, active queue management will be performed at the serving radio
network controller (SRNC) for HSDPA. We assume that the SRNC and the
base station can transfer data seamlessly between each other and that only a
small amount of data is between the air interface and the actively managed
queue at any point in time.

Statistical methods
Details of the statistical methods used in this paper can be found in [45] and
the software used for the statistical computations is R [60]. Below we briefly
account for the applied methods and their underlying assumptions.
For comparison of means when we have two or three samples, we chose the
paired t-test with the significance level adjusted for multiple comparisons
using the method suggested by Bonferroni. The t-test assumes that the
difference between the data sets is normally distributed. There is a t-test
for data sets with equal variance and another for unequal variance. If the
data sets are normally distributed, Bartlett's test can be used to determine
whether the variances are equal or not. The assumption of normality is
verified through a normal probability plot.
The null hypothesis for the paired t-test is that the means are equal and the
alternative hypothesis is that they differ. We can reject the null hypothesis
if the computed p-value is less than our predetermined significance level,
which we have set to 0.05.
For multiple comparisons of means (more than three means to compare in
this study), we have used analysis of variance (ANOVA). ANOVA allows us to
extend our hypothesis to include more than two treatments on one population
or, alternatively, to ask whether all the means from more than two populations
are equal. This is equivalent to asking whether the treatments have any overall
effect. The assumptions are that the residuals resulting from the model have
equal variance and that they are normally distributed. Thereafter, Tukey's
test[3] has been performed to detect significant differences between means and
to construct 95% confidence intervals for these differences.
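The paired t statistic and the Bonferroni adjustment can be sketched as
follows; comparing the statistic against the t distribution (and thus
obtaining the p-value) is left to a statistics package such as R:

```python
import math
from statistics import mean, stdev

def paired_t_statistic(xs, ys):
    """t statistic for the paired t-test.

    The test assumes the pairwise differences are normally distributed;
    the statistic is compared against a t distribution with len(xs) - 1
    degrees of freedom (the table lookup is not included here).
    """
    diffs = [x - y for x, y in zip(xs, ys)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

def bonferroni_level(alpha, n_comparisons):
    """Per-comparison significance level under Bonferroni adjustment."""
    return alpha / n_comparisons
```

With three pairwise comparisons and an overall level of 0.05, each individual
test is therefore performed at the 0.05/3 level.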

HS-DSCH for long transfers


In this scenario there are 60 mobile users and new sessions are generated with
an average waiting time, which is varied between 0.2 and 0.4 seconds.

RR scheduling We start with the longest waiting time, 0.4 seconds, between
initiating new transfers and compare the system goodput for two buffer sizes
with DT. The paired t-test was performed to detect any significant difference in
mean system goodput. Our reference buffer, which can keep the entire transfer,
gives between 1635 and 9809 bits better system goodput per second and cell
than DT 30 with 95% confidence. The system goodput for the buffer of 4 IP
packets was not significantly different from that of the reference buffer at this
confidence level.
For a waiting time of 0.3 seconds, we study DT, DF, PDPC and RED with
4 and 30 IP packets as the maximum sizes of the passive buffers. We use a
two-factor ANOVA to analyze the data. Both the queue strategy and the queue
size effects are significant, as well as their interaction. Therefore we have
to study the effect of the queue strategy at each queue size and vice versa.
Table 3.3 lists the 95% confidence intervals for the significant differences
between means. The table reveals that all schemes are significantly better
than RED 4. The other short buffer configurations and the long RED queue give
higher system goodput than DT 30. DF 4, PDPC 4 and RED 30 give slightly higher
goodput than PDPC 30. Tukey's method was used to perform the multiple
comparisons. We also compared DF 4 to the reference buffer using a paired
t-test. The null hypothesis that the means are equal could not be rejected at
the 95% confidence level.
When decreasing the waiting time further, we find that it is the same dif-
ferences in means that are significant and that these differences have increased
in size. There is also a small but significant difference in means between DF 30
and DT 4.
[3] We could have analyzed the experiments using ANOVA and blocking, but the R
implementation does not support Tukey's test for blocked experiments. In
practice this means that it is harder to detect small differences.

Strategy 1   Strategy 2   Lower limit   Upper limit
DF 4         RED 4              77105        101669
DT 4         RED 4              70924         95487
PDPC 4       RED 4              79134        103878
DF 30        RED 4              82739        107303
DT 30        RED 4              56849         81413
PDPC 30      RED 4              71673         96237
RED 30       RED 4              71637         96237
DF 4         DT 30               7973         32537
DT 4         DT 30               1793         26356
PDPC 4       DT 30              10183         34747
RED 30       DT 30               2542         27106
DF 4         PDPC 30             5675         30238
PDPC 4       PDPC 30             7885         32448
RED 30       PDPC 30              243         24807

Table 3.3: 95% confidence intervals for the significant differences in means
with RR scheduling for 0.3 seconds waiting time. The unit is bits per second
and cell.

SIR scheduling As with RR scheduling, at the lowest investigated load we
study DT for different buffer sizes. The results are similar, that is, the
reference buffer has a significantly higher system goodput than DT 30. The 95%
confidence interval for the difference in means is [849, 4317] bits per second
and cell.
With 0.3 seconds waiting time between new sessions, we get the significant
differences shown in Table 3.4. DF 4 gives higher system goodput than all the
other schemes but DT 4. DT 4 performs better than RED 4 and all the large
buffers. Of the large buffers DT 30 results in lower system goodput than the
other strategies.
Since DF 4 performs better than the other configurations, we compare it
against the reference buffer using a paired t-test. The small buffer improves
system goodput by 31173 to 41178 bits per second and cell compared to the
reference buffer with 95% confidence.
At the highest investigated load, the characteristics of the scheduler
dominate and therefore no differences in means can be detected.

HS-DSCH for a long-tail distribution


Web browsing is an important service for the mobile Internet, since it accounts
for a large part of the transfers on the Internet today. The majority of the
generated flows are short, but the distribution of transfer sizes exhibits a
long tail [7]. Web traffic uses TCP as the underlying transport protocol, and
short transfers often remain in the slow start phase of TCP throughout their
existence. In this phase, sources alternate between sending data and waiting
for acknowledgments.

Strategy 1   Strategy 2   Lower limit   Upper limit
DF 4         PDPC 4              1493         23823
DF 4         RED 4               6364         28694
DF 4         DF 30               9381         31711
DF 4         DT 30              37631         59961
DF 4         PDPC 30            10054         32384
DF 4         RED 30             17410         39740
DT 4         RED 4               3369         25700
DT 4         DF 30               6386         28716
DT 4         DT 30              34636         56966
DT 4         PDPC 30             7059         29390
DT 4         RED 30             14415         36745
PDPC 4       DT 30              24972         47303
PDPC 4       RED 30              4751         27082
DF 30        DT 30              17084         39415
PDPC 30      DT 30              16411         38741
RED 30       DT 30               9055         31386

Table 3.4: 95% confidence intervals for the significant differences in means
with SIR scheduling for 0.3 seconds waiting time. The unit is bits per second
and cell.

Hence, statistical multiplexing over a shared channel is suitable to increase
link utilization. For optimal performance, the shared resources of HSDPA must
however be appropriately distributed among the users.

Simulation model We use the Pareto distribution with the average set to 25
kbytes and the shape parameter to 1.1 as recommended in [46] for Web traffic.
Values larger than 2 Mbytes are rounded down to 2 Mbytes. The number of
mobile units in this scenario is 200 and the performance was studied at two
offered loads; one where the average waiting time between new sessions was
0.0175 seconds and one with 0.015 seconds. The results were similar in both
cases.
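Drawing file sizes from this truncated Pareto distribution can be sketched as
follows. The scale is derived from the desired mean; note that the truncation
at 2 Mbytes pulls the effective mean somewhat below 25 kbytes:

```python
import random

def web_file_size(mean_bytes=25_000, shape=1.1, cap_bytes=2_000_000):
    """Draw a file size from a truncated Pareto distribution.

    For a Pareto distribution with shape a > 1 and scale x_m the mean is
    a * x_m / (a - 1), so the scale is derived from the desired mean.
    Values above the cap are rounded down to the cap, matching the 2
    Mbyte truncation used in the scenario.
    """
    scale = mean_bytes * (shape - 1) / shape
    size = scale * random.paretovariate(shape)  # Pareto with x_m = 1, rescaled
    return min(size, cap_bytes)
```

With shape 1.1 the distribution is heavy-tailed: most draws are a few kbytes,
while the occasional draw hits the 2 Mbyte cap.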

Results The differences in mean system goodput observed between the buffer
sizes in the previous scenario have been reduced and are no longer statistically
significant, which is to be expected since most files no longer overflow the
larger buffers. Neither the buffer strategy nor the interaction term has any
significant effect.

Discussion
We will first discuss the results for the long TCP transfers, which are strongly
connected to the actual queue size that the buffer management principle oper-
ates at. For RR scheduling, it is the small buffers, DF 4, DT 4 and PDPC 4, that
give the best performance in terms of system goodput. RED, which often drops
the first packet at a larger queue size than the other schemes do, results in a
lower system goodput for the short buffer size. With the large buffer size, RED
gives marginally higher system goodput than PDPC 30, which has an average
queue length that is larger than that of the passive schemes, but smaller than
that of RED 30.
With SIR scheduling, DF 4, DT 4 and PDPC 4 distinguish themselves from
the other configurations. It is likely that a shorter buffer contributes to a
higher degree of statistical multiplexing over the radio link and thus evens
out unfairness problems. The total amount of data buffered for HS-DSCH is also
decreased, resulting in a more dynamic system.
A drop-from-front policy is preferable to drop-from-tail for passive buffering
when the buffer is relatively large. For short buffers, the dropping policy has
no significant influence.
The mean system goodput for the long-tail distribution is higher than with
the large transfers for two reasons: fewer flows perform retransmissions and the
majority of the flows are short. Short flows only demand capacity for shorter
periods of time and thus have less impact on other simultaneous sessions. Short
flows also increase the degree of statistical multiplexing over the bottleneck
link.

2.3 HSDPA transfer rates


So far we have concentrated on system goodput under high load. In this section
we investigate the peak transfer rates achieved by TCP, when there is only one
user in the center cell, for various buffer configurations.

Simulation model
Three cases are studied:

• The exact same topology as before for HSDPA, see Figure 3.7.

• A one-way delay of 75 ms, instead of 25 ms.

• The same topology with 25 ms one-way delay, but with randomly, uniformly
  distributed losses of 1% added over the fixed link. That is, there is more
  than one bottleneck link.

Results
In Figures 3.8, 3.9(a) and 3.9(b), the results are shown. The median transfer
time out of 30 simulations has been plotted, since the data is not normally
distributed. Simulations have been carried out for buffer sizes of 5, 10, 15,
20, 30, 50 and 100 IP packets.
In the first scenario, the median transfer times are in the range from 0.6 to
2.0 seconds. PDPC gives the best performance over the entire range of buffer
sizes. DT and DF perform well up to a buffer size of 15-20 IP packets. At this
point the RED buffer no longer induces any packet losses and approaches the
minimum transfer time for this scenario.

[Figure: median transfer time in seconds (y-axis) vs. buffer size in IP
packets (x-axis) for DT, PDPC, RED and DF]

Figure 3.8: The topology in Figure 3.7 with only one TCP user.
In the second scenario, Figure 3.9(a), the minimum transfer time is around
1.3 seconds and the highest median transfer times are more than 3 seconds.
RED results in the lowest median transfer times for all buffer sizes. The other
buffers perform worse at small buffer sizes; PDPC, followed by DF and DT,
comes closest to the lower boundary.
With a low degree of losses over the wired hop, Figure 3.9(b), PDPC has the
lowest transfer times of about 0.6 seconds. The highest median transfer times
are two times larger. DF and DT have relatively low median transfer times
for buffers of 10-15 IP packets. For larger buffers their performance degrades,
whereas the performance of the RED queue improves.

Discussion
Compared to the previous scenario with the individual buffers in Section 2.1, we
have a higher and varying bandwidth. A buffer size of 10-15 IP packets seems
to give a stable performance for PDPC, DF and DT in all the investigated
scenarios. It is likely that this buffer size represents a good trade-off
between enabling high transfer rates and keeping the delay down. With shorter
buffer sizes, the TCP window is kept too small to allow high peak transfer
rates. On
the other hand, when the buffer is larger, variations in the channel capacity
cause many packets to be lost until the buffer becomes large enough to prevent
them.

[Figure: median transfer time in seconds (y-axis) vs. buffer size in IP
packets (x-axis) for DT, PDPC, RED and DF in the two scenarios]

(a) A one-way delay of 75 ms, instead of 25 ms.

(b) The same topology with 25 ms one-way delay, but with randomly, uniformly
distributed losses of 1% added over the fixed link.

Figure 3.9: Median transfer times for varying buffer sizes and environments.
The extra degree of freedom when deciding whether to drop packets or not
strengthens the position of PDPC over channels with varying capacity. DF and
PDPC had similar performance in the scenario with individual buffers for a
fixed link capacity, see Section 2.1, whereas PDPC exhibits better performance
than DF over HS-DSCH.

3 Discussion
We have studied a scenario where only one transfer at a time takes place for
each user, but a user could also initiate several transfers at the same time,
for instance if HTTP is used without pipelining and the browser is configured
to allow more than one parallel connection. Such combined streams are more
aggressive than a single TCP session, which means that the queue sizes would
probably grow for RED and PDPC, possibly giving a more problematic error
recovery even for the smallest buffer configurations. The cost of keeping the
queue at a minimum through passive queuing may be larger for multiple
simultaneous transfers than for a single transfer, since it may come at the
price of many packets being dropped. The full impact on the system of several
transfers sharing a buffer is however part of our future plans.
RED was designed for a scenario where multiple flows traverse the bottleneck
buffer, which is a scenario that remains to be studied for HS-DSCH. For one flow
at a time, RED performs poorly. The reason is the late reaction to increases
in the queue length, which often results in timeouts in combination with losing
many packets. Together with passive drop-tail queue management, RED is
not to be recommended. Instead we argue for the use of passive drop-from-front
management and PDPC, which have been shown to be suitable for short buffers.
It is likely that to improve buffer management further, system parameters such
as radio conditions, link utilization and the current amount of data that is
buffered in the system as a whole should be taken into account. In low load
situations larger buffer sizes may allow higher transfer rates and high
utilization, but when the load increases the buffer sizes may have to be
reduced to keep buffering delays down and to allow smooth TCP operation.

4 Conclusions
HS-DSCH is a relatively new technique and it is hard to foresee what character-
istics the traffic mix for this channel will have. A queue management principle
that exhibits robustness to application parameters and handles TCP well is
therefore likely to be the best choice. There is a trade-off between low queuing
delays, system goodput and peak transfer rates which has been illustrated in
this paper. In general, drop-from-front is a better choice than drop-from-tail
for passive queue management.
All the investigated buffer management principles operate at different average
queue lengths, which determines performance at high loads. A short buffer
increases the degree of statistical multiplexing and reduces the amount of
data being buffered in the system as a whole, resulting in the highest system
goodput. The use of short buffers is further motivated by the decreased waiting
time in the buffer and a reduced risk of data aging while waiting in line. In
low load scenarios, being able to avoid burst packet losses due to variations
in the forwarding capacity enables high transfer rates. There seems to exist a
point where high transfer rates can be achieved with a relatively small buffer.
Paper 4

Properties of TCP-like congestion control

Paper published as:

Sara Landström, Lars-Åke Larzon and Ulf Bodin, “Properties of TCP-like
congestion control”. In Proceedings of the Swedish National Computer
Networking Workshop, pages 13-18, Karlstad, Sweden, 23-24 November 2004.
Properties of TCP-like Congestion Control

Sara Landström†, Lars-Åke Larzon†,‡, Ulf Bodin†

† Luleå University of Technology
‡ Uppsala University
Abstract
In this paper we investigate the performance of TCP-like congestion con-
trol and compare it to TCP SACK. TCP-like congestion control is cur-
rently up for standardization as part of the Datagram Congestion Control
Protocol (DCCP) in the IETF. DCCP offers an unreliable transport ser-
vice with congestion control to upper layers.
We have found that TCP-like is fair to TCP SACK when the loss
rate is low. In the high-loss, low round trip time regime, TCP-like seizes
more bandwidth and is better able to maintain a smooth send rate than
TCP SACK. In low round trip time environments, the absence of a lower
bound on the transmit timeout in TCP-like, which corresponds to the retransmission timeout in TCP, contributes to this difference in performance.
Another factor is the decoupling of the congestion control state from in-
dividual packets that is possible in TCP-like, since it offers an unreliable
transport service.

1 Introduction
Traditionally, TCP has been the dominant Internet transport protocol, but
time-constrained media services are becoming more frequent and promote the
use of UDP. Most UDP flows lack congestion control mechanisms and exist
at the expense of the TCP flows. The increased share of UDP flows might
eventually cause severe starvation of TCP flows or even a congestion collapse;
therefore, TCP-friendly rate regulation of all longer transfers is desirable from a
fairness perspective.
To support this ambition a new transport protocol, called the Datagram
Congestion Control Protocol (DCCP) [37], has been designed. It currently of-
fers the choice of two congestion control algorithms, TCP Friendly Rate Control
(TFRC) [20] and TCP-like congestion control [23]. The former has been stud-
ied in [20] and [62], but the latter has to our knowledge not been extensively
evaluated.
TFRC is targeted at applications desiring a smoother send rate than cur-
rently possible using TCP, whereas TCP-like congestion control is designed
to closely trace the behavior of TCP SACK [14] thus prioritizing throughput.


There are, however, differences that spring from TCP-like providing an unreliable service while TCP enforces reliability. For instance, delay variations are expected to be smaller since retransmissions are not performed at the transport layer and in-order delivery has been abandoned. The removal of these features gives the application designer finer-grained control than ordinary TCP over which data is sent and when. Thereby TCP-like may become an attractive option for applications such as streaming media, where the time constraints may allow selective retransmissions.
In this paper we concentrate on mapping out the differences in the design of
TCP-like congestion control and TCP SACK. We also show, through simulations
in the Network Simulator, ns-2 [44], the impact they have on the send rate,
smoothness and fairness of the protocols. TCP is used as a reference point,
since TCP-like expressly attempts to imitate its behavior and also because it is
one of the most prevalent protocols.

2 TCP SACK and TCP-like congestion control


In this section we will discuss general characteristics of TCP-like and TCP
SACK congestion control, such as the conditions that must be fulfilled to be
allowed to transmit data, when acknowledgments are sent and which information
they include. We also compare the criteria that are used to increase the send
rate.
Thereafter we will illustrate, through examples, how loss detection and recovery have been implemented in the two protocols and the implications of the behavioral differences on performance. We will also point out areas where further refinements of the TCP-like algorithm are possible.
DCCP is a packet-oriented protocol, whereas TCP is byte-oriented. In this study we use the unit of packets also when referring to TCP segments, and it is assumed that the send rate is limited by congestion rather than by the resources of the receiver.

2.1 General algorithm characteristics


The send rate in both TCP-like and TCP SACK, when competing for bandwidth,
is limited by the size of the congestion window, cwnd. For TCP-like, this window
represents the number of packets allowed in the network. In TCP
SACK, the window also confines a sequence of packets – only packets with
sequence numbers lower than the sum of the highest acknowledged sequence
number and cwnd can normally be sent. When the oldest outstanding packet
has been acknowledged, the window can be moved past this packet. In TCP-
like, the variable pipe represents an estimate of the number of packets in the
network and new packets may be sent as long as pipe is less than cwnd. During
loss recovery TCP SACK deploys an algorithm similar to that of TCP-like and
the sender therefore also maintains a pipe variable in this state.
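The send-eligibility conditions described above can be sketched as follows (a simplified illustration with our own function names, not the actual DCCP or ns-2 code):

```python
def tcplike_may_send(pipe, cwnd):
    """TCP-like: a new packet may be sent while the estimated number of
    packets in the network (pipe) is below the congestion window."""
    return pipe < cwnd

def sack_may_send(next_seq, highest_acked, cwnd):
    """TCP SACK outside loss recovery: the window also confines a sequence
    range anchored at the highest cumulatively acknowledged packet."""
    return next_seq < highest_acked + cwnd

# With cwnd = 8: TCP-like sends while fewer than 8 packets are estimated
# to be in flight, regardless of which sequence numbers they carry.
print(tcplike_may_send(pipe=7, cwnd=8), tcplike_may_send(pipe=8, cwnd=8))  # True False
```

The difference matters during loss recovery: TCP-like only counts packets, whereas TCP SACK's window normally cannot slide past an unacknowledged packet.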
TCP uses a cumulative acknowledgment scheme, i.e., the highest in-order
sequence number received is acknowledged. With the SACK option, information
about gaps in the sequence of received packets can be acquired. In TCP-like,
acknowledgments of data-carrying packets are normally sent for every second
packet. The acknowledgments account for the state of all packets to date
for which the acknowledgment information has not in turn been acknowledged.
Acknowledgment information is reliably transferred in DCCP, thus the sender
acknowledges the acknowledgments sent by the receiver. The detailed packet
history is carried in the ack vector option.
Both TCP variants have two phases, called slow start and congestion avoid-
ance. During congestion avoidance TCP-like updates its send rate in the same
manner as TCP SACK, but in slow start there are differences. When the TCP-
like sender is in slow start, cwnd is increased by one packet for each packet
newly reported as received. This is similar to TCP SACK with appropriate
byte counting, or when the receiver acknowledges every packet. These schemes
are more aggressive than a TCP connection with the delayed acknowledgment
algorithm enabled, which updates cwnd by one for each feedback packet received.
Throughout this paper we configure TCP SACK to send feedback for every data
packet.
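The different slow-start growth rates can be illustrated with a small sketch (our own simplification, assuming no losses and that every feedback packet reports a fixed number of newly received data packets):

```python
def slow_start_growth(cwnd, feedback_packets, pkts_per_feedback):
    """Per-packet counting (TCP-like, or SACK with appropriate byte
    counting): cwnd grows by every packet newly reported as received."""
    return cwnd + feedback_packets * pkts_per_feedback

def slow_start_growth_per_ack(cwnd, feedback_packets):
    """Delayed-ack TCP: cwnd grows by one per feedback packet, i.e. half
    as fast when every second packet is acknowledged."""
    return cwnd + feedback_packets

# One round trip starting at cwnd = 4, receiver acking every 2nd packet:
print(slow_start_growth(4, feedback_packets=2, pkts_per_feedback=2))  # 8
print(slow_start_growth_per_ack(4, feedback_packets=2))               # 6
```

Per-packet counting doubles the window each round trip even with delayed acknowledgments, while per-ack counting only multiplies it by 1.5.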

2.2 Loss detection and recovery


When TCP SACK detects a congestion event through the arrival of four con-
secutive acknowledgments of the same sequence number, the indicated packet is
assumed lost and retransmitted. If the SACK options report multiple lost pack-
ets and pipe is less than cwnd, additional packets may also be retransmitted.
Pipe is decreased for each new packet that the SACK option states has been
received. In the corresponding situation, TCP-like also reduces pipe by one for
each packet with unknown state, for which three later sent packets have been
acknowledged. Thereby pipe is usually less in TCP-like when multiple packets
are lost from a window of data, leaving room for more packets to be sent. In
the example below, when the acknowledgment for packet 14 reaches the sender
it will give pipetcplike = pipesack − 2.

Packet   9         10        11    12    13        14
Status   Not recv  Not recv  Recv  Recv  Not recv  Recv
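The bookkeeping of the example can be reproduced with a small sketch (hypothetical helper functions of our own, not protocol code): TCP SACK discounts only packets reported received, while TCP-like additionally discounts unknown packets for which three later-sent packets have been acknowledged.

```python
# Receive status of the example window: True = reported received.
status = {9: False, 10: False, 11: True, 12: True, 13: False, 14: True}

def pipe_reduction_sack(status):
    """SACK: pipe decreases for each packet the SACK option reports received."""
    return sum(1 for received in status.values() if received)

def pipe_reduction_tcplike(status):
    """TCP-like: additionally discount packets of unknown state with at
    least three later-sent packets acknowledged (assumed lost)."""
    seqs = sorted(status)
    reduction = pipe_reduction_sack(status)
    for seq in seqs:
        if not status[seq]:
            later_acked = sum(1 for s in seqs if s > seq and status[s])
            if later_acked >= 3:
                reduction += 1  # packets 9 and 10 qualify in the example
    return reduction

print(pipe_reduction_tcplike(status) - pipe_reduction_sack(status))  # 2
```

Packets 9 and 10 each have three later packets (11, 12, 14) acknowledged and are discounted, giving pipe_tcplike = pipe_sack - 2 as in the example; packet 13 has only one later packet acknowledged and is not yet discounted.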

TCP SACK has a retransmission timeout, which allows the connection to
recover when the loss event is severe. A similar transmit timeout exists in
TCP-like, except that there is no lower bound on how short it may be. Previously,
one of the reasons for the lower bound was the poor clock granularity, which
is no longer of immediate concern for modern operating systems. A second
motivation for having a minimum retransmission timeout is to avoid
injecting replicas of packets into the network when the original transmission
has merely been delayed. In TCP-like, new packets are sent when the timeout
expires; therefore this argument is not compelling.
The standard behavior of TCP is to restart the retransmission timer when
the cumulative acknowledgment point is advanced. This tight coupling of the
congestion control state to individual packets increases the likelihood of
timeouts. For instance, if a fast retransmit of packet p is triggered, the retransmitted
packet will be the last packet to reach the receiver if there are other packets
already on the way. In TCP-like, by contrast, p would have been assumed lost when
the DCCP event corresponding to three duplicate acknowledgments arriving in
TCP occurred. The arrival of either any of the packets already on their way or
the newly sent packet (sent instead of the retransmission) would then restart
the timer.
When a timeout occurs, TCP-like resets pipe. Packets still in the network
at this point will not further reduce pipe, but could serve a purpose during the
following slow start period. In [13] the performance of different methods for
updating cwnd during slow start was investigated for TCP. Increasing cwnd by
the number of newly acknowledged packets, when slow start has been entered
due to a loss event, was deemed too aggressive. The reason is that after a loss
event the sender cannot be certain that acknowledged packets actually left the
network during the last round trip time; since the reports are cumulative, the
packets may have left the network long ago1. In TCP-like, there is enough information
to deduce when a packet left the network and we therefore suggest always
increasing cwnd based on the number of packets acknowledged. This option is,
however, left for future studies.
To summarize, TCP-like can discount packets when they have been con-
firmed lost resulting in a quick exit from loss recovery. Also, the acknowledg-
ment of essentially any new packet restarts the transmit timer and releases a
new packet, which keeps the ack clock going through difficult congestion events
and decreases the likelihood of a timeout occurring. Finally, there is no lower
bound on the transmit timeout, which means that, when a timeout is necessary
to recover, less time is spent waiting for the timer to expire in environments
where the round trip time is short.

3 Simulations
Our evaluation is based on simulations carried out in the Network Simulator
version 2.27 (ns-2). The TCP SACK agent (TCPSack1) is part of the simulator
and includes most TCP algorithms such as fast recovery, fast retransmit, slow
start, congestion avoidance and limited transmit. We have been involved in the
implementation of DCCP and will use our DCCP implementation [43] which
implements all protocol features relevant for this study. In all simulations the
initial window was set to 2 packets and the buffers at both end points were set
large enough not to limit the send rate. The simulation scenarios are similar to
those presented in [20] when TFRC was introduced, which makes a comparison
of TCP-like and TFRC possible.

1 Through the SACK option it can be derived when a packet left the network, but this is currently not exploited.

Figure 4.1: The receive rates (bytes/s) of TCP SACK and TCP-like congestion control as functions of the loss rate (0-20%), for round trip times of 20 ms, 100 ms and 200 ms.

3.1 Environmental impact


We have identified a number of differences between ordinary TCP and TCP-like
congestion control that can affect the throughput of a connection.
The decreased sensitivity to multiple packets being lost from the same window
and the absence of a minimum transmit timeout are factors that are likely to
be more relevant in certain environments. We therefore compared the receive
rates of TCP SACK and TCP-like congestion control for various loss rates and
round trip times. The results are presented in Fig. 4.1. Losses were distributed
according to a uniform distribution. The simulations were repeated forty
times for each setup and 95% confidence intervals are shown. The confidence
intervals are narrow and therefore appear to coincide with the marks on the
lines. At higher loss rates, and especially when the round trip time is short,
TCP-like has a higher receive rate.
When loss events are frequent, timeouts may be necessary for the transfers to
recover. The removal of the minimum retransmission timeout, which is currently
1 second in TCP SACK, is likely to be an advantage for TCP-like when the
round trip time of a connection is low and stable. However, it makes the protocol
more sensitive to delay spikes. The Eifel Detection and Response algorithms
could be implemented for TCP-like to mitigate this potential drawback.

RTT     Loss   MinRTO                  Reliability            Transfer        R2
        rate   P-value  Effect         P-value  Effect        rate
20 ms    1%    1             0 ± 15    0.75        2 ± 15     122069 ± 8      33%
200 ms   1%    1             0 ± 2987  0.69     -598 ± 3012    86765 ± 1506   45%
20 ms    5%    0.02       -611 ± 521   0.76       80 ± 526    118671 ± 263    37%
200 ms   5%    0.84        -64 ± 651   0.56      191 ± 657     35293 ± 328    61%
20 ms   10%    0.00     -18937 ± 2423  0.26     1393 ± 2443    95322 ± 1221   78%
200 ms  10%    0.00       -932 ± 346   0.04      372 ± 348     21580 ± 174    60%

Table 4.1: The impact of a 1 second minimum retransmission timeout and
application layer retransmissions on the receive rates of TCP-like.
The effect of enforcing a minimum transmission timeout of 1 second on
the receive rate of TCP-like can be easily investigated. It is also possible to
imitate application layer retransmissions by adding another packet to be sent
for each packet detected as lost by the sender. Table 4.1 gives the results of
an experiment designed as a full factorial test [45] where these two features are
turned on and off in a few environments characterized by their round trip time
and loss rate. Each setting was simulated thirty times2 .
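The analysis behind such a 2x2 full factorial test can be sketched as follows (with made-up responses, not the measured rates of Table 4.1): the main effect of a factor is the difference between the mean response with the factor on and with it off.

```python
def main_effect(runs, factor):
    """runs: list of (settings dict, response). The main effect of `factor`
    is mean(response | factor on) - mean(response | factor off)."""
    on = [r for s, r in runs if s[factor]]
    off = [r for s, r in runs if not s[factor]]
    return sum(on) / len(on) - sum(off) / len(off)

# Hypothetical receive rates (bytes/s) for the four factor combinations:
runs = [
    ({"min_rto": False, "reliability": False}, 100000),
    ({"min_rto": True,  "reliability": False},  80000),
    ({"min_rto": False, "reliability": True },  99000),
    ({"min_rto": True,  "reliability": True },  79000),
]
print(main_effect(runs, "min_rto"))      # -20000.0
print(main_effect(runs, "reliability"))  # -1000.0
```

In this made-up data the minimum RTO clearly hurts the receive rate while reliability barely matters; a significance test (the p-values in Table 4.1) would then judge whether such effects exceed the run-to-run noise.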
A p-value less than 0.05 indicates that a factor is significant at the 95% confi-
dence level. The effect of the interaction between the factors was not significant
and is therefore excluded from the table. A minimum transmission timeout of
1 second has a negative impact on performance when the loss rate is high and
the effect becomes significant for lower loss rates when the round trip time is
low. At the 95% confidence level, reliability results in insignificant changes in all the
investigated scenarios, except when both the round trip time and the loss rate
are high. The Effect column gives information about the confidence interval for
the effect.
The R-squared statistic captures how much of the variability can be
explained by the fitted model. The best fit is obtained when the RTT is 20 ms and
the loss rate is 10%. Part of the discrepancy between the receive rates observed
for TCP-like and TCP SACK is still unaccounted for. It is reasonable to assume
that the remaining difference can be attributed to the decoupling of the congestion
control state from individual packets, as illustrated in the previous section.
This assumption is strengthened by the trace of the congestion windows, when
the loss rate is 5% and the delay is 10 ms, shown in Fig. 4.2. Timeouts are less
frequent in the case of TCP-like (the congestion window is never down to one
packet) and there are fewer long loss recovery periods, identified by cwnd being
frozen.

2 The average effect found in the table is computed over all simulations for that particular environmental setting; these values are therefore not directly comparable to the receive rates shown in Fig. 4.1.

Figure 4.2: A comparison of the cwnd sizes (packets) of TCP-like and TCP SACK over the first 10 seconds under the same conditions.

3.2 Send rate, fairness and smoothness


In the previous section, TCP-like congestion control was shown to give a
higher receive rate than TCP SACK when traversing separate links with induced
random loss. We have also investigated the scenario depicted in Fig. 4.3, where
the two protocols co-exist over a link with varying capacity for both RED and
drop-tail queuing. Half the flows are TCP SACK and the other half consists
of TCP-like flows. Each dot in Fig. 4.4 is the normalized mean send rate of
the last 60 seconds of a 75 seconds long simulation for an individual flow. The
initial 15 seconds are removed, since we are interested in the performance once
the system has stabilized. This figure is representative of our findings, i.e., when
the loss rate is low TCP-like congestion control has a slightly higher normalized
mean throughput. This gap increases with an increasing number of competitors
and a diminished link capacity.
Fig. 4.5 shows how the coefficient of variation (CoV) between flows of the
same type in one simulation changes as the bandwidth is varied for 16 TCP
SACK and 16 TCP-like sessions. Each setting has been repeated ten times
and the mean send rate was computed over the second half of the 30 simulated
seconds. TCP-like congestion control manages to maintain a low coefficient of
variation although the loss rate is high, whereas the spread of the throughput
between the TCP SACK sessions rapidly increases. When the loss rates are low,
TCP-like and TCP SACK exhibit similar fairness between the flows.

Figure 4.3: The topology used in the simulations. The queue parameters were
scaled with the bandwidth, which in some scenarios was up to 256 Mbytes/s.

Figure 4.4: TCP-like and TCP SACK sharing a 16 Mbytes/s link with RED
queuing. The upper panel shows the loss rate (%) and the lower panel the
normalized send rate of the individual SACK and TCPL flows, both as functions
of the total number of TCP SACK and TCP-like flows (0-128).

Figure 4.5: Coefficient of variation of the send rate between flows of the same
type, as a function of the loss rate (%).
Fairness can also be measured over different time scales. For this purpose,
we ran 150 seconds long simulations. These simulations were partitioned into
time intervals of length δ and the send rate in every time interval computed.
The equivalence ratio in a time interval for users A and B is then the minimum
of the two ratios, sendrate_userA / sendrate_userB and sendrate_userB / sendrate_userA.
By taking the minimum of the two ratios, a value between 0 and 1 is
obtained. For perfect fairness this
equivalence ratio should be 1. The average value of the equivalence ratios for a
time series gives an estimate of how the bandwidth has been distributed on the
time scale δ [20]. The shorter the time scale, the more likely it is that we will
observe a smaller equivalence ratio.
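A minimal sketch of this computation (our own naming):

```python
def equivalence_ratio(rate_a, rate_b):
    """Minimum of the two send-rate ratios; 1 means perfect fairness in
    the interval, values near 0 mean one flow dominated."""
    if rate_a == 0 or rate_b == 0:
        return 0.0
    return min(rate_a / rate_b, rate_b / rate_a)

def mean_equivalence_ratio(rates_a, rates_b):
    """Average over a time series of per-interval send rates, estimating
    fairness on the time scale of one interval."""
    ratios = [equivalence_ratio(a, b) for a, b in zip(rates_a, rates_b)]
    return sum(ratios) / len(ratios)

# The flows swap roles between intervals, yet the mean ratio stays at 0.5,
# since the metric ignores which flow got more bandwidth in each interval:
print(mean_equivalence_ratio([2.0, 1.0], [1.0, 2.0]))  # 0.5
```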
We have computed the equivalence ratio for two scenarios corresponding to
the situations when 32 and 128 flows, respectively, were active in Fig. 4.4. The
main difference is that the starting times of the flows are chosen from a uniform
random distribution in the interval 0-40 s. Previously, all flows were
started within a time span of less than 10 ms. We also removed the first 50
seconds before performing our analysis. The reduction in fairness compared to
the results in Fig. 4.4 indicates that new TCP SACK flows have a harder time
grabbing bandwidth from already active sources. Also, the equivalence ratio
does not consider which flow got more bandwidth during an interval. For
instance, if TCP-like sends at a rate that is twice that of TCP SACK in the
first interval and TCP SACK sends at twice the rate of TCP-like in the next
interval, the equivalence ratio will be (0.5 + 0.5)/2 = 0.5, not (0.5 + 1.5)/2 = 1.
The inter-flow fairness between TCP-like flows seems independent of changes
in the loss rate resulting from increasing the number of active flows, but both
the inter-flow fairness for TCP SACK flows and the fairness between flows of
different types decrease, as visualized in Fig. 4.6.
The coefficient of variation of the send rate time series can also be used to
measure the variability in the send rate, a property commonly referred to as
the smoothness. A low coefficient of variation means that the flow is sending
data at a steady rate. If the send rate varies a lot, it may be difficult for media
applications with a limited play-out buffer to present an uninterrupted flow of
data. In TCP, minor congestion events lead to the congestion window being
cut in half and more severe loss events result in the protocol having to start over
in slow start from a window of one segment. By and large, TCP-like responds in the
be minor when the loss rate is low. At higher loss rates, the looser coupling of
the congestion control state to individual packets and not having a minimum
transmit timeout are likely to give TCP-like an advantage.
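The coefficient of variation used as the smoothness measure can be sketched as follows (using the population standard deviation; whether the paper used the sample or population variant is not stated):

```python
import math

def coefficient_of_variation(rates):
    """CoV = standard deviation / mean of the send-rate time series.
    A low value indicates a smooth, steady send rate."""
    mean = sum(rates) / len(rates)
    var = sum((r - mean) ** 2 for r in rates) / len(rates)
    return math.sqrt(var) / mean

steady = [100, 100, 100, 100]    # perfectly smooth flow
sawtooth = [50, 100, 150, 100]   # rate halved after a loss, then rebuilt
print(coefficient_of_variation(steady))    # 0.0
print(coefficient_of_variation(sawtooth))  # about 0.35
```

Being dimensionless, the CoV allows smoothness comparisons between flows with very different mean rates.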
Furthermore, Fig. 4.6 shows the computed smoothness. TCP-like maintains
a steadier send rate than TCP and is insensitive to high loss rates even over
short time scales. The coefficient of variation for TCP SACK is significantly
higher over shorter time scales and for higher loss rates.

4 Conclusions
TCP-like features several of the numerous TCP improvements that have been
proposed during the last decade. While being more aggressive than TCP Reno,
it is reasonably fair to TCP SACK at loss rates observed on the Internet today.
Simulations show that at shorter round trip times (up to approximately 100 ms)
the differences between TCP-like and TCP SACK are more pronounced.
If in addition the loss rate is high, not having a minimum transmission timeout
gives TCP-like better performance. TCP-like also recovers faster from severe
congestion as its lack of reliability makes it less dependent on individual packets.
In the future we would like to explore the delay variations generated by
the two algorithms, observed from an application point of view. Also, TCP-like
attempts to regulate the acknowledgment pace when losses are detected on the
return path; investigating the performance impact that this algorithm may have
is another area to look into.
Figure 4.6: The upper panel shows the equivalence ratio for TCPL vs TCPL,
SACK vs SACK and SACK vs TCPL flow pairs; the upper line is for the case of
32 active flows and the lower line represents 128 flows. The lower panel shows
the smoothness (coefficient of variation) of the flows. Both are plotted against
the timescale for throughput measurement (0.2-10 seconds).
Bibliography

[1] M. Allman, S. Floyd, and C. Partridge. Increasing TCP's Initial Window. RFC Standards Track 3390, IETF, Oct. 2002.

[2] Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan Seshan, and Randy H. Katz. A comparison of mechanisms for improving TCP performance over wireless links. IEEE/ACM Transactions on Networking, 5(6):756–769, 1997.

[3] Christian Bettstetter. Mobility modeling in wireless networks: categorization, smooth movement, and border effects. ACM SIGMOBILE Mobile Computing and Communications Review, 5(3):55–66, Jul. 2001.

[4] E. Blanton, M. Allman, K. Fall, and L. Wang. A Conservative Selective Acknowledgment (SACK)-based Loss Recovery Algorithm for TCP. RFC Standards Track 3517, IETF, Apr. 2003.

[5] Ulf Bodin and Arne Simonsson. Effects on TCP from Radio-Block Scheduling in WCDMA High Speed Downlink Shared Channels. In QoFIS, volume 2811 of Lecture Notes in Computer Science, pages 214–223. Springer, 2003.

[6] B. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on Queue Management and Congestion Avoidance in the Internet. Informational RFC 2309, IETF, Apr. 1998.

[7] Jin Cao, William S. Cleveland, Dong Lin, and Don X. Sun. On the nonstationarity of Internet traffic. In ACM Sigmetrics, pages 102–112, Cambridge, Massachusetts, United States, 2001. ACM Press.

[8] Vinton G. Cerf and Robert E. Kahn. A Protocol for Packet Network Interconnection. IEEE Transactions on Communications, COM-22(5):637–648, May 1974.

[9] S. Crocker. Host software. http://www.ietf.org, RFC001, Apr. 1969.


[10] Hannes Ekström and Andreas Schieder. Buffer Management for the Interactive Bearer in GERAN. In IEEE VTC, pages 2505–2509, Apr. 2003.

[11] Magnus Erixzon. DCCP-Thin for Symbian OS. Master's thesis, Luleå University of Technology, Sep. 2004. 2004:261 CIV.

[12] Magnus Erixzon, Joakim Häggmark, and Nils-Erik Mattsson. DCCP Projects. http://www.dccp.org, Jun. 2003.

[13] Aaron Falk and Mark Allman. On the Effective Evaluation of TCP. ACM SIGCOMM Computer Communication Review, 29(5):59–70, Oct. 1999.

[14] Kevin Fall and Sally Floyd. Simulation-based comparisons of Tahoe, Reno and SACK TCP. ACM SIGCOMM Computer Communication Review, 26(3):5–21, Jul. 1996.

[15] S. Floyd. Optimum functions for computing the drop probability. E-mail available at http://www.aciri.org/floyd/REDfunc.txt, Oct. 1997.

[16] S. Floyd, T. Henderson, and A. Gurtov. The NewReno Modification to TCP's Fast Recovery Algorithm. RFC Standards Track 3782, IETF, Apr. 2004.

[17] S. Floyd and E. Kohler. Internet Research Needs Better Models. ACM SIGCOMM Computer Communications Review, 33(1):29–34, Jan. 2003.

[18] Sally Floyd. A bug in the TFRC code in NS. Mail to the DCCP mailing list, http://www.ietf.org/mail-archive/web/dccp/index.html, 2003.

[19] Sally Floyd, Mark Handley, and Eddie Kohler. Problem Statement for DCCP. Internet draft, IETF, Oct. 2002. Work in progress.

[20] Sally Floyd, Mark Handley, Jitendra Padhye, and Jörg Widmer. Equation-based congestion control for unicast applications. In ACM SIGCOMM Proceedings of the conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 43–56, Stockholm, Sweden, Aug. 2000.

[21] Sally Floyd and Van Jacobson. Traffic Phase Effects in Packet-Switched Gateways. Journal of Internetworking: Practice and Experience, 3(3):115–156, Sep. 1992.

[22] Sally Floyd and Van Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Transactions on Networking, 1(4):397–413, Aug. 1993.

[23] Sally Floyd and Eddie Kohler. Profile for DCCP Congestion Control ID 2: TCP-like Congestion Control. Internet Draft 6, IETF, Jul. 2004. Work in progress.

[24] GSM Association. GSM World – 3GSM Platform. http://www.gsmworld.com/technology/3g/index.shtml, Feb. 2005.

[25] Andrei Gurtov. TCP Performance in Presence of Congestion and Corruption Losses. Master's thesis, University of Helsinki, Dec. 2000.

[26] Andrei Gurtov and Reiner Ludwig. Responding to Spurious Timeouts in TCP. In IEEE Infocom, pages 2313–2322, 2003.

[27] John Heidemann, Nirupama Bulusu, Jeremy Elson, Chalermek Intanagonwiwat, Kun-chan Lan, Ya Xu, Wei Ye, Deborah Estrin, and Ramesh Govindan. Effects of detail in wireless network simulation. In SCS Multiconference on Distributed Simulation, pages 3–11, Phoenix, Arizona, USA, Jan. 2001. Society for Computer Simulation.

[28] John Heidemann, Kevin Mills, and Sri Kumar. Expanding Confidence in Network Simulation. IEEE Network Magazine, 15(5):58–63, Sep./Oct. 2001.

[29] Hämeenlinna. Overview of 3GPP Release 5. Technical Report 030375, ETSI Mobile Competence Center, Jun. 2003.

[30] Jun-ichiro itojun Hagino. DCCP in FreeBSD Kame? Mail sent to the IETF DCCP mailing list, Nov. 2004.

[31] IETF. Datagram Congestion Control Protocol (dccp) working group. http://www.ietf.org/html.charters/dccp-charter.html, Feb. 2005.

[32] H. Inamura, G. Montenegro, R. Ludwig, A. Gurtov, and F. Khafizov. TCP over Second (2.5G) and Third (3G) Generation Wireless Networks. RFC Best Current Practice 3481, IETF, Feb. 2003.

[33] V. Jacobson. Congestion Avoidance and Control. In ACM SIGCOMM, pages 314–329, Stanford, CA, Aug. 1988.

[34] Raj Jain. The Art of Computer System Performance Analysis. John Wiley & Sons, 1991.

[35] Niranjan Joshi, Srinivas R. Kadaba, Sarvar Patel, and Ganapathy S. Sundaram. Downlink Scheduling in CDMA Data Networks. In ACM Mobicom, pages 179–190, 2000.

[36] Leonard Kleinrock. Information Flow in Large Communication Nets. RLE Quarterly Progress Report, Jan. 1972.

[37] Eddie Kohler, Mark Handley, and Sally Floyd. Datagram Congestion Control Protocol (DCCP). Internet Draft 7, IETF, Jul. 2004. Work in progress.

[38] Troels Emil Kolding, Klaus Ingemann Pedersen, Jeroen Wigard, Frank Frederiksen, and Preben Elgaard Mogensen. High Speed Downlink Packet Access: WCDMA Evolution. In IEEE Vehicular Technology Society News, pages 4–10, Feb. 2003.

[39] T.V. Lakshman, Arnold Neidhardt, and Teunis J. Ott. The Drop from Front Strategy in TCP and in TCP over ATM. In IEEE INFOCOM, pages 1242–1250, Mar. 1996.

[40] Barry M. Leiner, Vinton G. Cerf, David D. Clark, Robert E. Kahn, Leonard Kleinrock, Daniel C. Lynch, Jon Postel, Lawrence G. Roberts, and Stephen S. Wolff. The Past and Future History of the Internet. Communications of the ACM, 40(2):102–108, Feb. 1997.

[41] R. Ludwig and R. H. Katz. The Eifel Algorithm: Making TCP Robust Against Spurious Retransmissions. ACM Computer Communications Review, 30(1):30–36, Jan. 2000.

[42] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow. TCP Selective Acknowledgement Options. RFC Standards Track 2018, IETF, Oct. 1996.

[43] Nils-Erik Mattsson. A DCCP module for ns-2. Master's thesis, Luleå University of Technology, Feb. 2004. 2004:175 CIV.

[44] S. McCanne and S. Floyd. ns Network Simulator. Technical report, Information Sciences Institute, 2004.

[45] Douglas C. Montgomery. Design and Analysis of Experiments. John Wiley & Sons, 5th edition, 1997.

[46] Motorola. Evaluation Methods for High Speed Downlink Packet Access (HSDPA). Technical report, 3GPP, Jul. 2000.

[47] John Nagle. Congestion Control in IP/TCP Internetworks. RFC 896, IETF, Jan. 1984.

[48] Open Mobile Alliance. OMA Technical Section – Push to Talk Over Cellular Working Group. http://www.openmobilealliance.org/, Feb. 2005.

[49] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP Throughput: A Simple Model and its Empirical Validation. In ACM SIGCOMM conference on Applications, technologies, architectures, and protocols for computer communication, pages 303–314, 1998.

[50] S. Parkvall, E. Dahlman, P. Frenger, P. Beming, and M. Persson. The Evolution of WCDMA Towards Higher Speed Downlink Packet Data Access. In IEEE VTC (Spring), 2001.

[51] Vern Paxson and Sally Floyd. Wide area traffic: the failure of Poisson modeling. IEEE/ACM Transactions on Networking, 3(3):226–244, 1995.

[52] Vern Paxson and Sally Floyd. Why we don't know how to simulate the Internet. In Winter Simulation Conference, pages 1037–1044, 1997.

[53] Janne Peisa and Eva Englund. TCP performance over HS-DSCH. In IEEE VTC Spring, volume 2, pages 987–991, May 2002.

[54] J. Postel. Transmission Control Protocol. RFC 793, IETF, Sep. 1981.

[55] Theodore Rappaport. Wireless Communications: Principles and Practice. Prentice Hall, 2nd edition, Dec. 2001.

[56] V. Rosolen, O. Bonaventure, and G. Leduc. A RED discard strategy for ATM networks and its performance evaluation with TCP/IP traffic. ACM SIGCOMM Computer Communications Review, 29(3):23–43, Jul. 1999.

[57] I. Stojanovic, M. Airy, D. Gesbert, and H. Saran. Performance of TCP/IP Over Next Generation Broadband Wireless Access Networks. In IEEE WPMC, Aalborg, Denmark, Sep. 2001.

[58] Mats Sågfors, Reiner Ludwig, Michael Meyer, and Janne Peisa. Buffer Management for Rate-Varying 3G Wireless Links Supporting TCP Traffic. In IEEE VTC, pages 675–679, Apr. 2003.

[59] Mats Sågfors, Reiner Ludwig, Michael Meyer, and Janne Peisa. Queue Management for TCP Traffic over 3G Links. In IEEE WCNC, pages 1663–1668, Mar. 2003.

[60] R Development Core Team. R: A language and environment for statistical computing.

[61] W. Willinger and V. Paxson. Where Mathematics meets the Internet. Notices of the American Mathematical Society, 45(8):961–970, Aug. 1998.

[62] Yang Richard Yang, Min Sik Kim, and Simon S. Lam. Transient Behaviors of TCP-friendly Congestion Control Protocols. Computer Networks, 41(2):193–210, Feb. 2003.

[63] N. Yin and M.V. Hluchyj. Implications of Dropping Packets from the Front of a Queue. In 7th ITC, Oct. 1990.
