
UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL – UFRGS

INSTITUTO DE INFORMÁTICA
PROGRAMA DE PÓS-GRADUAÇÃO EM COMPUTAÇÃO

Analyzing TCP Performance


over IEEE 802.11
Mobile Ad hoc Networks
By

OSCAR NÚÑEZ MORI


T.I 1080 PPGC-UFRGS

Individual Work I - CMP401

Prof. Dr. Juergen Rochol


Advisor

Porto Alegre, June 2003



UNIVERSIDADE FEDERAL DO RIO GRANDE DO SUL


Reitora: Profª. Wrana Panizzi
Pró-Reitor de Ensino: Prof. José Carlos Ferraz Hennemann
Pró-Reitora Adjunta de Pós-Graduação: Profª. Jocélia Grazia
Diretor do Instituto de Informática: Prof. Philippe Olivier Alexandre Navaux
Coordenador do PPGC: Prof. Carlos Alberto Heuser
Bibliotecária-Chefe do Instituto de Informática: Beatriz Regina Bastos Haro

Acknowledgments

Special thanks

To our Blessed Mother, the Virgin Mary, Queen of Peace, who through her
messages in Medjugorje and Angüera brought me to her beloved Son,
Jesus Christ, our Lord, and filled my heart with the love of God [MAR 2003].

To my beloved and tender mother, Dr. Maria Teolita Mori Hidalgo, who
always trusted in me and made this adventure possible.

To my dear father, Oscar Marcelo, who is in heaven with our dearest Lady.

To my beloved sons, Oscar Brian and Oscar Marcelo, who bear the weight
of my absence with courage.

To all my family in Peru, who miss me very much.

To my lovely friend Eliene Sulzbacher.

To my great advisor Juergen Rochol.

And to all my dear friends at UFRGS, in Porto Alegre and in Brazil.

Thank you very much for your understanding and friendship.

Table of Contents
List of Abbreviations ............................................................................................ 7
List of Figures ......................................................................................................... 9
List of Tables …………………………………………………………………….. 11
Abstract ……………………………………………………………………………. 12
Resumo ……………………………………………………………………………. 13

1 Introduction ......................................................................................................... 14
1.1 Motivation ............................................................................................................. 15

2 General overview of IEEE 802 .................................................................... 16


2.1 IEEE Std 802.11 ………………………………………………………………… 17
2.1.1 Main Characteristics of Wireless LAN ……………………………………… 17
2.1.2 Components of the IEEE 802.11 architecture …………………………………. 18
2.1.3 IEEE 802.11 Reference Model ……………………………………………….. 19

3 Ad Hoc Networks ……………………………………………………………… 27


3.1 Definition: MANET …………………………………………………………….. 27
3.2 Multi hop wireless networks …………………………………………………… 27
3.3 Mobile Topology ………………………………………………………………… 28
3.4 Main Characteristics of MANET …………………………………………… 29
3.5 Routing protocols in MANET ………………………………………………….. 29
3.5.1 Table Driven Routing Protocols ……………………………………………… 30
3.5.2 On-Demand Routing Protocols ……………………………………………….. 30
3.5.3 Table Driven versus On Demand …………………………………………….. 30
3.5.4 Ad-Hoc Wireless Multicasting ………………………………………………… 31
3.6 MANET Applications ........................................................................................... 32

4 General overview of Transmission Control Protocol …………………. 33


4.1 The window principle applied to TCP ………………………………………… 34
4.2 TCP segment format …………………………………………………………….. 34
4.3 Acknowledgements and retransmissions ………………………………………. 34
4.4 Establishing a TCP connection …………………………………………………. 35
4.5 TCP application programming interface ………………………………………. 37
4.6 TCP congestion control algorithms …………………………………………….. 39
4.6.1 Slow Start ………………………………………………………………………. 39
4.6.2 Congestion avoidance …………………………………………………………. 41
4.6.3 Fast Retransmit ………………………………………………………………… 42
4.6.4 Fast Recovery ………………………………………………………………….. 43
4.7 Current TCP version ……………………………………………………………. 44
4.7.1 TCP Tahoe ……………………………………………………………………… 45
4.7.2 TCP Reno ………………………………………………………………………. 45
4.7.3 TCP New Reno ..................................................................................................... 45
4.7.4 TCP SACK ........................................................................................................... 46

4.7.5 TCP Vegas ............................................................................................................ 46


4.7.6 TCP Santa Cruz ................................................................................................... 47
4.8 TCP in Ad Hoc Networks ……………………………………………………….. 47

5 Problems with TCP in Ad Hoc Networks .................................................. 51


5.1 Why does TCP throughput perform poorly in wireless networks? .......................... 51
5.1.1 Effect of a high BER ………………………………………………………….. 52
5.1.2 Effect of Route Re-computations …………………………………………….. 52
5.1.3 Effect of Network Partitions …………………………………………………. 53
5.1.4 Effect of Multi-path Routing …………………………………………………. 54
5.2 What Does Congestion Window Really Mean in Ad Hoc Networks? ………. 54
5.3 Other factors …………………………………………………………………….. 54
5.3.1 Multi-hop factor problem in MANET ………………………………………... 54
5.3.2 Beaconing interval factor in MANET ………………………………………... 61
5.3.3 Packet sizes factor in MANET ………………………………………………... 62

6 Proposed solutions in TCP over wireless ………………………………… 63


6.1 Link Layer Schemes …………………………………………………………….. 63
6.1.1 SNOOP Protocol ……………………………………………………………….. 63
6.1.2 TULIP …………………………………………………………………………... 68
6.1.3 ADAPTIVE Link Layer ………………………………………………………. 70
6.1.4 ELN-ACK Protocol ……………………………………………………………. 70
6.2 Split-Connection Schemes ………………………………………………………. 70
6.2.1 Indirect TCP …………………………………………………………………… 71
6.2.2 WTCP …………………………………………………………………………... 74
6.3. End to End (E2E) Schemes …………………………………………………….. 76
6.3.1 TCP-Probing …………………………………………………………………... 76
6.3.2 TCP with NACK ……………………………………………………………….. 79
6.3.3 TCP with SIF ....................................................................................................... 80
6.3.4 E2E-ELN Protocol .............................................................................................. 81
6.3.5 E2E-ELN-RXMT Protocol ……………………………………………………. 81
6.4 New Layer Schemes ……………………………………………………………. 81
6.4.1 ATCP …………………………………………………………………………… 81
6.4.2 LSSA ……………………………………………………………………………. 85
6.5 Emergent Schemes ………………………………………………………………. 86
6.5.1 Feedback-Based Scheme: TCP-F ……………………………………………... 86
6.5.2 TCP-BUS ……………………………………………………………………….. 89
6.5.3 MAITE …………………………………………………………………………. 91
6.5.4 IR-TCP …………………………………………………………………………. 95
6.5.5 FAST-TCP ……………………………………………………………………… 95
6.5.6 TCP with SPACK ……………………………………………………………… 96

7 Proposed solutions in IEEE 802.11 MAC to improve TCP


performance ……………………………………………………………………. 97
7.1 A Novel MAC with fast Collision Resolution (FCR) in WLAN ………………. 97
7.2 Receiver-Based AutoRate (RBAR) protocol ….………………………………. 99

7.3 Data-driven Cut-through Multiple Access (DCMA) Protocol ……………...… 103

8 Conclusions ……………………………………………………………………… 105

References .............................................................................................................. 107



List of Abbreviations
AP Access Point

ACK Acknowledgement

BER Bit Error Rate

BSS Basic Service Set

CCS Cellular Communication System

CSMA/CA Carrier Sense Multiple Access with Collision Avoidance

CSMA/CD Carrier Sense Multiple Access with Collision Detection

CWND Congestion Window

DS Direct Sequence

DCF Distributed Coordination Function

E2E End to End

ELN-ACK Explicit Loss Notification with Acknowledgment

FH Frequency Hopping

ISO International Organization for Standardization

LSSA Link Signal strength Agent

MANET Mobile Ad hoc Network

MAC Medium Access Control

MSDU MAC Service Data Unit

MSR Mobility Support Routers

MH Mobile Host

MAITE Mobility Awareness Incorporated as TCP Enhancement

NACK Negative Acknowledgement



OFDM Orthogonal Frequency Division Multiplex

OSI Open Systems Interconnection

PLCP Physical Layer Convergence Procedure

PCF Point Coordination Function

RFC Request for Comments

RTT Round-trip time

STA Station

SSTHRESH Slow Start Threshold Size

SPACK Splitted ACK

TCP Transmission Control Protocol

TULIP Transport Unaware Link Improvement Protocol

TRIC TCP Rate Implicit Control

TCP-SF TCP Smart Framing

WLAN Wireless Local Area Networks

WAP Wireless Application Protocol



List of Figures
FIGURE 2.1 - General Overview IEEE 802 ………………………………………….. 16

FIGURE 2.2 - Complete IEEE 802.11 Architecture ………………………………….. 19

FIGURE 2.3 - IEEE 802.11 Reference Model ……………………………………….. 20

FIGURE 2.4 - General PLCP Frame Format ………………………………………… 21

FIGURE 2.5 - MAC architecture ……………………………………………………... 22

FIGURE 2.6 - Basic Access Method of DCF …………………………………………. 23

FIGURE 2.7 - RTS-CTS-DATA-ACK and NAV Setting ……………………………. 24

FIGURE 2.8 - Beacon generation in an IBSS ………………………………………… 25

FIGURE 2.9 - Infrastructured and Ad hoc Networks ………………………………… 26

FIGURE 3.1 - Logical architecture of an IBSS ………………………………………. 27

FIGURE 3.2 - Multi-hop wireless network …………………………………………… 28

FIGURE 3.3 - Existing multicast routing protocols for ad hoc wireless networks…… 32

FIGURE 4.1 - TCP – Window principle applied to TCP…………………………….. 34

FIGURE 4.2 - TCP – Segment format ………………………………………………... 35

FIGURE 4.3 - TCP Connections establishment ………………………………………. 38

FIGURE 4.4 - TCP Slow start in action ……………………………………………… 40

FIGURE 4.5 - TCP slow start and congestion avoidance behavior in action ………… 42

FIGURE 4.6 - TCP fast retransmit in action ………………………………………….. 43

FIGURE 4.7 - Comparison of performance of four TCPs under different mobility metrics…… 49

FIGURE 5.1 - Route change forced by mobility ……………………………………… 53

FIGURE 5.2 - Partitions formed and recombined by mobility ……………………….. 54

FIGURE 5.3 - TCP-Reno throughput over an 802.11 fixed, linear, multi-hop Network 55

FIGURE 5.4 - Instability problem in the four hop TCP Reno connection ……………. 57

FIGURE 5.5 - Throughput of two TCP connections with different sender and receiver 58

FIGURE 5.6 - Throughput of two TCP connections with the same hop Number…….. 60

FIGURE 5.7 - RD time Vs. beaconing interval ……………………………………….. 61

FIGURE 6.1 - Flowchart for snoop Data/Ack ………………………………………… 65

FIGURE 6.2 - Indirect Transport Layer………………………………………………. 72

FIGURE 6.3 - Transport connection ………………………………………………….. 74

FIGURE 6.4 - Probing State Transition Diagram …………………………………….. 78

FIGURE 6.5 - Option field for Negative Acknowledgement in TCP header …………. 79

FIGURE 6.6 - Data flow through the TCP/ATCP/IP …………………………………. 82

FIGURE 6.7 - State transition diagram for ATCP at the sender ……………………… 83

FIGURE 6.8 - Link State Machine …………………………………………………… 86

FIGURE 6.9 - The TCP-F state machine …………………………………………….. 87

FIGURE 6.10 - Topology used in MAITE …………………………………………… 91

FIGURE 6.11 - State transition of MAITE’s features at a mobile host that acts as
a TCP Sender ………………………………………………………… 93

FIGURE 6.12 - State transition diagram at a supervisory host showing MAITE


Features ………………………………………………………………. 94

FIGURE 6.13 - SPACK transmission ………………………………………………… 96

FIGURE 7.1 - Throughput for Various Number of stations ………………………….. 99

FIGURE 7.2 - Timeline with changes to the DCF ……………………………………. 101

FIGURE 7.3 - RBAR vs. ARF ………………………………………………………... 102

FIGURE 7.4 - Fast forwarding in DCMA …………………………………………….. 104

FIGURE 7.5 - Comparative Performance IEEE 802.11 vs. DCMA …………………. 105

List of Tables
TABLE 5.1 - Route discovery (RD) time at 1 second beaconing interval for different
hop counts ……………………………………………………………….. 56

TABLE 5.2 - Average End-to-End Delay at Different Packet Sizes…………………… 56



Abstract

Wireless networks are one of the most challenging environments for the Internet
protocols, and for TCP in particular. TCP protocols designed for wired networks perform
poorly in wireless environments. The main reason for this poor performance is that TCP
cannot distinguish packet losses caused by wireless errors from those caused by congestion.
Traditional transport connections set up without any modification in wireless ad hoc
networks are plagued by problems such as high bit error rates, frequent route changes, and
partitions. In this work, we present a general overview of the IEEE 802.11 standard, TCP,
and wireless ad hoc networks, together with different solutions to congestion in TCP over
wireless networks.

Keywords: Wireless Ad Hoc Networks, Congestion, IEEE 802.11, Wireless, TCP, MANET.

“Analisando o Desempenho do TCP em Redes Móveis Ad Hoc IEEE 802.11”

Resumo

As redes sem fios (“wireless”) são um dos ambientes mais desafiadores para os
protocolos da Internet, em particular para o TCP. Os protocolos TCP projetados para redes
com cabeamento não apresentam bom desempenho em ambientes wireless. A razão principal
deste baixo desempenho está relacionada ao fato de o TCP não distinguir entre os pacotes
perdidos devido a erros wireless e os pacotes perdidos devido a congestionamento na rede.
As conexões de transporte tradicionais, sem modificações, em redes wireless Ad Hoc são
atingidas por problemas tais como: taxas de erro elevadas, mudanças freqüentes de rota e
partições. Neste trabalho apresenta-se uma visão geral do padrão IEEE 802.11, do TCP, das
redes wireless Ad Hoc e de diferentes soluções para o congestionamento em redes sem fios.

Palavras-chave: Wireless Ad Hoc Networks, Congestion, IEEE 802.11, Wireless, TCP, MANET.

1 Introduction
Wireless networks are one of the most challenging environments for the Internet
protocols, and for TCP in particular. Two approaches can be identified:

The first approach allows mobile wireless devices to function like any other Internet-connected
device, providing seamless interworking between the wired and wireless worlds. A second
approach, called the "walled garden", places a Web client on the wireless device and uses some
form of proxy server at the boundary between the wireless network and the Internet. This is the
approach adopted by the Wireless Application Protocol (WAP) Forum [HUS 2001].

Wireless communication faces challenges different from those present in the wired
world, and if mobility is present, even more challenges appear. Mobility is the origin of many
of the problems to be solved by wireless communication systems, but other physical
challenges exist as well, such as signal attenuation, reflection, refraction, and multi-path
propagation. Furthermore, all of these problems are reflected in the upper layers of a
communications protocol stack [ARA 2001].

TCP communications over wireless channels must address two main problems. The
first problem refers to the high bit error rate (BER) that a wireless channel experiences.
High BER could cause the corruption of data transmitted over a link, which may result in a
loss of TCP data segments or acknowledgments (ACKs). The second problem refers to the
effect of disconnections that could occur while mobile hosts are “handed off” from one cell
to another or when physical obstacles impede the signals from reaching the receivers either
at the base stations or at a mobile host [ARA 2001].

All of this results in a waste of bandwidth and battery power, which are
unnecessarily used to retransmit and process information.

Due to the strong drive toward wireless Internet access through mobile terminals, these
problems must be carefully studied in order to build improved systems.

The wireless world consists of satellite network systems, Cellular Communication
Systems (CCS), Wireless Local Area Networks (WLAN), and terrestrial microwave systems
in general. In this individual work, we review wireless link characteristics using Mobile
Ad hoc Networks (MANET) based on the IEEE 802.11 standard.

WLANs were standardized as IEEE 802.11 in 1999 and use either direct sequence (DS)
or frequency hopping (FH) spread spectrum radios, in the 900 MHz or 2.4 GHz frequency
bands. While the original bit rate was 2 Mbit/s, more recent WLANs offer 5.5 Mbit/s and 11
Mbit/s bit rates, with 54 Mbit/s in IEEE 802.11a.

It is important to note that WLANs deploy carrier sense multiple access with collision
avoidance (CSMA/CA) to share the channel, instead of IEEE 802.3 Ethernet's CSMA with
collision detection (CSMA/CD).

To achieve interoperability between WLAN devices supplied by different vendors, the
IEEE designed the 802.11 standard.

Meanwhile, two new standardization projects were initiated to provide higher
speeds. IEEE 802.11a uses a high-speed Orthogonal Frequency Division Multiplexing
(OFDM) physical layer; it is deployed in the 5 GHz frequency band, providing bit rates
ranging between 6 and 54 Mbit/s. For an increased bit rate, IEEE 802.11b was developed
over the existing physical layer. Commercial 802.11b solutions provide either 5.5 Mbit/s or
11 Mbit/s rates, using the 2.4 GHz frequency band [XYL 2001].

In this TI, we present different proposed solutions to improve TCP throughput in the
presence of noisy wireless environments and mobility problems.

We hope that this TI will be useful as a tutorial for a better understanding of wireless
networks.

1.1 Motivation

We started this work because applications such as e-mail, Web browsing, chat, and
general access services over wireless networks are designed to be used with TCP
(Transmission Control Protocol). However, upper-layer protocols like TCP are designed to
operate over relatively error-free links. Consequently, when TCP is used over a wireless
channel, the overall performance can be greatly degraded.

The increasing importance of wireless communication has promoted studies on how to


improve the performance of popular protocols like TCP over wireless links. The
importance of these studies is even greater if one considers the fact that legacy applications
are designed to employ traditional TCP communication and that these applications are
being deployed on today's wireless communications systems. TCP has been designed to
interpret packet loss as a sign of congestion. In a wireless world, this is typically the wrong
response to packet loss.

On the other hand, a very interesting application is in emergencies, for example after
natural disasters where the entire communications infrastructure has collapsed and restoring
communications quickly is essential. Using ad hoc wireless networks, an infrastructure
could be set up in hours instead of the days or weeks required for wire-line communications
[COR 2002].

2 General overview of IEEE 802


The IEEE 802 family of standards deals with the Physical and Data Link layers as
defined by the International Organization for Standardization (ISO) Open Systems
Interconnection (OSI) Basic Reference Model (ISO/IEC 7498-1:1994), as shown in
figure 2.1.

FIGURE 2.1 - General Overview IEEE 802

The access standards define seven types of medium access technologies and associated
physical media, each appropriate for particular applications or system objectives. Other
types are under investigation [INT 99].

The standards defining the access technologies are as follows:

IEEE Std 802 Overview and Architecture.


IEEE Std 802.1B LAN/MAN Management.
IEEE Std 802.1D Media Access Control (MAC) Bridges.
IEEE Std 802.1E System Load Protocol.
IEEE Std 802.1F Common Definitions and Procedures for IEEE 802
Management Information.
IEEE Std 802.1G Remote Media Access Control (MAC) Bridging.
IEEE Std 802.2 Logical Link Control.
IEEE Std 802.3 CSMA/CD Access Method and Physical Layer
Specifications.
IEEE Std 802.4 Token Passing Bus Access Method and Physical Layer
Specifications.
IEEE Std 802.5 Token Ring Access Method and Physical Layer
Specifications.
IEEE Std 802.6 Distributed Queue Dual Bus Access Method and Physical
Layer Specifications.
IEEE Std 802.9 Integrated Services (IS) LAN Interface at the Medium
Access.
IEEE Std 802.10 Interoperable LAN/MAN Security.
IEEE Std 802.11 Wireless LAN Medium Access Control (MAC) and Physical
Layer Specifications.
IEEE Std 802.12 Demand Priority Access Method, Physical Layer and
Repeater Specifications.

2.1 IEEE Std 802.11

2.1.1 Main Characteristics of Wireless LAN

- Ubiquity: mobile ad hoc networks may be deployed at any place and at any moment
to exchange information.

- Some countries impose specific requirements for radio equipment in addition to


those specified in this standard.

- In IEEE 802.11, the addressable unit is a station (STA). The STA is a message
destination, but not (in general) a fixed location.

- The IEEE 802.11 physical layer uses a medium that has neither absolute nor
readily observable boundaries outside of which stations with conformant physical
layer transceivers are known to be unable to receive network frames.

- The IEEE 802.11 physical layer is unprotected from outside signals.

- The IEEE 802.11 physical layer communicates over a medium significantly less
reliable than wired physical layers.

- The IEEE 802.11 physical layer has dynamic topologies.

- The IEEE 802.11 physical layer lacks full connectivity, and therefore the
assumption normally made that every STA can hear every other STA is invalid
(i.e., STAs may be "hidden" from each other).

- The IEEE 802.11 physical layer has time-varying and asymmetric propagation
properties.

- Well-defined coverage areas simply do not exist in the wireless physical layer;
propagation characteristics are dynamic and unpredictable, and small changes in
position or direction may result in dramatic differences in signal strength. Similar
effects occur whether a STA is stationary or mobile.

2.1.2 Components of the IEEE 802.11 architecture

The IEEE 802.11 architecture consists of several components that interact to provide a
wireless LAN that supports station mobility transparently to upper layers.

The basic service set (BSS) is the basic building block of an IEEE 802.11 LAN. Figure 2.2
shows two BSSs, each of which has two stations (STAs) that are members of the BSS.

It is useful to think of the ovals used to depict a BSS as the coverage area within which
the member stations of the BSS may remain in communication. (The concept of area, while
not precise, is often good enough.) If a station moves out of its BSS, it can no longer
directly communicate with other members of the BSS.

The association between a station (STA) and a BSS is dynamic (STAs turn on, turn off,
come within range, and go out of range). To become a member of an infrastructure BSS, a
station must become associated. These associations are dynamic and involve the use of the
distribution system service (DSS).

The DS enables mobile device support by providing the logical services necessary to
handle address to destination mapping and seamless integration of multiple BSSs. A DS
may be created from many different technologies including current IEEE 802 wired LANs.
IEEE 802.11 does not constrain the DS to be either data link or network layer based, nor
does it constrain a DS to be either centralized or distributed in nature. IEEE 802.11
explicitly does not specify the details of DS implementations; instead, it specifies services.

An access point (AP) is a STA that provides access to the DS by providing DS services
in addition to acting as a STA. Note that all APs are also STAs; thus they are addressable
entities.

The DS and BSSs allow IEEE 802.11 to create a wireless network of arbitrary size and
complexity. IEEE 802.11 refers to this type of network as the extended service set (ESS)
network, which appears the same to the LLC layer as an independent BSS network: stations
within an ESS may communicate, and mobile stations may move from one BSS to another
(within the same ESS) transparently to LLC. In IEEE 802.11, the ESS architecture (APs
and the DS) provides traffic segmentation and range extension.
To integrate the IEEE 802.11 architecture with a traditional wired LAN, a final logical
architectural component is introduced: the portal. A portal is the logical point at which
MSDUs from an integrated non-IEEE 802.11 LAN (e.g., an IEEE 802.3 Ethernet LAN)
enter the IEEE 802.11 DS. For example, figure 2.2 shows a portal connecting to a wired
IEEE 802 LAN; all data from non-IEEE 802.11 LANs enter the IEEE 802.11 architecture
via a portal. It is possible for one device to offer both the functions of an AP and a portal;
this could be the case when a DS is implemented from IEEE 802 LAN components. For a
better understanding, see figure 2.2.

FIGURE 2.2 - Complete IEEE 802.11 Architecture

2.1.3 IEEE 802.11 Reference Model

Figure 2.3 presents a reference model that gives the scope of the physical layer,
composed of the Physical Layer Convergence Procedure (PLCP) and the Physical Medium
Dependent (PMD) sublayers, as well as of the data link layer, composed of the MAC
sublayer [ISO 99].

FIGURE 2.3 - IEEE 802.11 Reference Model

2.1.3.1 PMD Sub Layer

The Physical Medium Dependent (PMD) sublayer provides different transmission
techniques (FHSS, DSSS, OFDM, and diffuse infrared) for the modulation and encoding of
the signal between two or more stations, each using the same modulation system:

a) OFDM: Orthogonal Frequency Division Multiplexing is a communication technique
that divides a communication channel into a number of equally spaced frequency bands.
A subcarrier carrying a portion of the user information is transmitted in each band. Each
subcarrier is orthogonal to (independent of) every other subcarrier, which is what
distinguishes it from Frequency Division Multiplexing (FDM), used for variants of digital
subscriber line (DSL) and for cable modems. OFDM is also used in the European Digital
Video Broadcasting standard as well as in digital radio in the USA. It is used by IEEE
Std 802.11a [OFD 2002].

b) DSSS: Direct Sequence Spread Spectrum. With this technique, the transmitted
signal is spread over an allowed band (for example 25 MHz). A random binary string
(the spreading code) is used to modulate the transmitted signal. The data bits are mapped
into a pattern of "chips" and mapped back into bits at the destination. The number of
chips that represent a bit is the spreading ratio: the higher the spreading ratio, the more
resistant the signal is to interference; the lower the spreading ratio, the more bandwidth
is available to the user. The IEEE 802.11 standard requires a spreading ratio of eleven.
The transmitter and the receiver must be synchronized with the same spreading code. If
orthogonal spreading codes are used, then more than one LAN can share the same band.
It is used by IEEE Std 802.11b [PRE 2002] (see the sketch after this list).

c) FHSS: Frequency Hopping Spread Spectrum. This technique splits the band into
many small sub-channels (1 MHz). The signal then hops from sub-channel to sub-channel,
transmitting short bursts of data on each channel for a set period, called the dwell time.
The hopping sequence must be synchronized at the sender and at the receiver, or
information is lost. The FCC requires that the band be split into at least 75 sub-channels
and that the dwell time be no longer than 400 ms. In order to jam an FHSS system, the
whole band must be jammed. The sub-channels are smaller than in DSSS. If orthogonal
hopping sequences are used, many FHSS LANs can be co-located [PRE 2002].

d) INFRARED: infrared systems are simple in design and therefore inexpensive.
They use the same signal frequencies used on fiber optic links. Infrared transmission
operates in the light spectrum; the transmission spectrum is shared with the sun and with
fluorescent lights, and it requires an unobstructed line of sight. IR cannot penetrate
opaque objects such as walls or curtains, but it can bounce off them. Moreover, if there
is enough interference from other sources, it can render the LAN useless [PRE 2002].
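
To make the spreading idea in item b) more concrete, the following minimal Python sketch
spreads data bits with an 11-chip Barker-like code and recovers them by correlation. The
code values, the list-based signal representation, and the "flipped chips" channel are purely
illustrative simplifications, not the modulation details of the standard.

    # Minimal illustration of the DSSS spreading idea of item b). Real
    # transceivers work on modulated waveforms, not lists of +/-1 chips.
    BARKER_11 = [+1, -1, +1, +1, -1, +1, +1, +1, -1, -1, -1]  # 11 chips per bit

    def spread(bits):
        """Map each data bit to 11 chips: bit 1 -> code, bit 0 -> inverted code."""
        chips = []
        for b in bits:
            sign = 1 if b == 1 else -1
            chips.extend(sign * c for c in BARKER_11)
        return chips

    def despread(chips):
        """Correlate each group of 11 chips with the code and decide the bit."""
        bits = []
        for i in range(0, len(chips), 11):
            corr = sum(c * k for c, k in zip(chips[i:i + 11], BARKER_11))
            bits.append(1 if corr > 0 else 0)  # positive correlation -> bit 1
        return bits

    data = [1, 0, 1, 1]
    tx = spread(data)
    for i in (3, 17, 30):        # corrupt a few chips to mimic narrowband interference
        tx[i] = -tx[i]
    print(despread(tx) == data)  # True: the correlation still recovers the bits

The point of the example is the spreading gain: even with some chips corrupted, the
correlation over eleven chips still decides each bit correctly.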

2.1.3.2 PLCP Sub Layer

The Physical Layer Convergence Procedure (PLCP) sublayer (see figure 2.4) provides
common service access points to its layer and defines a method of mapping the 802.11
PHY sublayer service data units (PSDU) into a framing format suitable for sending and
receiving user data and management information between two or more stations using the
associated physical medium dependent system. This allows the 802.11 MAC to operate
with minimum dependence on the PMD sublayer [ZHE 2002]. Figure 2.4 shows a generic
PLCP frame format; the original PLCP is transmitted at 1.0 Mbit/s and above, while IEEE
802.11a transmits at up to 54.0 Mbit/s, and future standards may offer more.

PLCP Frame Format

    PLCP Preamble (144 bits)  |          PLCP Header (48 bits)            |  MPDU
    SYNC      | SFD           | SIGNAL  | SERVICE | LENGTH   | CRC        |  MAC Protocol
    128 bits  | 16 bits       | 8 bits  | 8 bits  | 16 bits  | 16 bits    |  Data Unit

    SYNC   : Synchronization
    SFD    : Start Frame Delimiter
    SIGNAL : Signaling
    CRC-16 : Frame Check Sequence
    PPDU   : PLCP Protocol Data Unit

FIGURE 2.4 - PLCP Frame Format



2.1.3.3 Medium Access Control

The Medium Access Control (MAC) Layer defines two different access methods,
the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF) as
shown in figure 2.5.

MAC Architecture

(Figure: the MAC comprises the Distributed Coordination Function (DCF), used for
contention services and as the basis for the PCF, and the Point Coordination Function
(PCF), required for contention-free services.)

FIGURE 2.5 - MAC architecture

DCF: The Distributed Coordination Function (DCF) is the fundamental access
method of the IEEE 802.11 MAC, known as carrier sense multiple access with
collision avoidance (CSMA/CA). The DCF shall be implemented in all STAs, for use
within both IBSS and infrastructure network configurations. For a STA to transmit,
it shall sense the medium to determine if another STA is transmitting. If the medium
is not determined to be busy, the transmission may proceed. The CSMA/CA
algorithm mandates that a gap of a minimum specified duration exist between
contiguous frame sequences. A transmitting STA shall ensure that the medium is
idle for this required duration before attempting to transmit. If the medium is
determined to be busy, the STA shall defer until the end of the current transmission.
After deferral, or prior to attempting to transmit again immediately after a successful
transmission, the STA shall select a random backoff interval and shall decrement
the backoff interval counter while the medium is idle. Figure 2.6 introduces the
basic access method timing of the DCF, and a simplified sketch of these rules
follows the figure.

Basic Access Method

(Figure: after a busy medium, stations defer for DIFS and then select a slot in the
contention window, decrementing the backoff counter as long as the medium is idle;
immediate access is allowed when the medium has been free for at least DIFS.)

    IFS  : Interframe space
    SIFS : Short interframe space
    PIFS : PCF interframe space
    DIFS : DCF interframe space
    EIFS : Extended interframe space

    BackoffTime = Random() x aSlotTime

FIGURE 2.6 - Basic Access Method of DCF
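
As a rough illustration of the access rules just described, the Python sketch below models a
station that draws a random backoff, freezes its counter while the medium is busy, and
doubles its contention window after a collision. The contention window values and the
slot-based timing are simplifications chosen for the example, not the exact parameters of
the standard.

    import random

    # Illustrative sketch of the DCF basic access rules: after the DIFS deferral,
    # the backoff counter is decremented only on idle slots and frozen on busy
    # ones; the contention window doubles after a collision (exponential backoff).
    class DcfStation:
        def __init__(self, cw_min=15, cw_max=1023):   # example values
            self.cw_min, self.cw_max = cw_min, cw_max
            self.cw = cw_min                # current contention window
            self.backoff = None

        def start_access(self):
            # BackoffTime = Random() x aSlotTime, expressed here in slots
            self.backoff = random.randint(0, self.cw)

        def on_slot(self, medium_idle):
            """Called once per slot; returns True when the station may transmit."""
            if medium_idle and self.backoff is not None:
                if self.backoff == 0:
                    return True             # counter reached zero on an idle slot
                self.backoff -= 1
            return False                    # busy slot: counter frozen, not reset

        def on_collision(self):
            self.cw = min(self.cw * 2 + 1, self.cw_max)   # widen the window
            self.start_access()

        def on_success(self):
            self.cw = self.cw_min           # reset the window after a success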

A refinement of the method may be used under various circumstances to further
minimize collisions: the transmitting and receiving STAs exchange short control
frames, a Request to Send (RTS) and a Clear to Send (CTS), after determining that
the medium is idle and after any deferrals or backoffs, prior to data transmission.
This avoids, for example, hidden station collisions (with three STAs A, B, and C in
a line, A transmits to B and at the same time C transmits to B, so B receives two
packets that collide, and neither A nor C is aware of it). While the source and the
destination are exchanging control messages (RTS, CTS), data, and ACKs, the
other stations keep a Network Allocation Vector (NAV) and wait for the contention
window, contending for the medium when their backoff timers reach zero. (See
figure 2.7; a small sketch of the NAV behaviour follows the figure.)

RTS / CTS / Data / ACK and NAV setting

(Figure: after DIFS, the source sends RTS; the destination answers with CTS after SIFS;
data and ACK follow, separated by SIFS. Other stations set their NAV on hearing the RTS
or the CTS, defer access, and back off after the deferral during the contention window.)

FIGURE 2.7 - RTS-CTS-DATA-ACK and NAV Setting
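
The small sketch below illustrates the NAV behaviour of figure 2.7: a station that overhears
an RTS or a CTS reserves the medium for the advertised duration, even if it cannot sense the
ongoing data transfer itself. The dictionary-based frame representation and the duration
values are invented for illustration only.

    # Virtual carrier sense: overhearing an RTS or CTS sets the NAV, and the
    # medium is treated as busy until the NAV expires.
    class VirtualCarrierSense:
        def __init__(self):
            self.nav_expires_at = 0.0       # time until which the medium is reserved

        def on_overheard_frame(self, frame, now):
            if frame["type"] in ("RTS", "CTS"):
                # the duration covers the rest of the CTS/DATA/ACK exchange
                self.nav_expires_at = max(self.nav_expires_at, now + frame["duration"])

        def medium_reserved(self, now):
            return now < self.nav_expires_at

    vcs = VirtualCarrierSense()
    vcs.on_overheard_frame({"type": "RTS", "duration": 0.0006}, now=0.0)
    print(vcs.medium_reserved(0.0003))   # True: another pair owns the medium, defer
    print(vcs.medium_reserved(0.0010))   # False: NAV expired, contention may resume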

Note that all stations in an ad hoc wireless network periodically transmit beacon
signals to each other to announce their presence to the other stations; in figure 2.8, STAs
15, 22, and 31 are sending beacon transmissions.

Beacon Generation in an IBSS

(Figure: within each beacon interval, the stations of the IBSS (STAs 15, 22, and 31) wake
up for an awake period and, after a random delay D1, one of them transmits the beacon
when the medium is not busy. IBSS: Independent BSS = Ad Hoc.)

FIGURE 2.8 - Beacon generation in an IBSS

PCF: the Point Coordination Function (PCF) is an optional access method that is
only usable on infrastructure network configurations (which include a DS medium,
always APs, and optional portal entities). This access method uses a point coordinator
(PC), which shall operate at the access point of the BSS, to determine which STA
currently has the right to transmit. The operation is essentially that of polling
(checking the status of the STAs to detect changes), with the PC performing the role
of polling master. The operation of the PCF requires additional coordination, not
included in Std 802.11, to permit efficient operation in cases where multiple point-
coordinated BSSs are operating on the same channel, in overlapping physical space.
The PCF uses a virtual carrier-sense mechanism aided by an access priority
mechanism. The PCF distributes information within Beacon management frames to
gain control of the medium by setting the network allocation vector (NAV) (think of
the NAV as a counter) in STAs. In addition, all frame transmissions under the PCF
may use an interframe space (IFS) that is smaller than the IFS for frames transmitted
via the DCF; the use of a smaller IFS implies that point-coordinated traffic shall have
priority access to the medium over STAs in overlapping BSSs operating under the
DCF access method. The access priority provided by a PCF may be utilized to create
a contention-free (CF) access method. The PC controls the frame transmissions of
the STAs so as to eliminate contention for a limited period of time.

The DCF and the PCF shall coexist in a manner that permits both to operate
concurrently within the same BSS. When a point coordinator (PC) is operating in a BSS,
the two access methods alternate, with a contention-free period (CFP) followed by a
contention period (CP). In figure 2.9 we can see the difference between an infrastructured
and an ad hoc network.

FIGURE 2.9 - Infrastructured and Ad hoc Networks



3 Ad Hoc Networks

An ad hoc network is a network composed solely of stations within mutual
communication range of each other via the wireless medium (WM). The term ad hoc is
often used informally to refer to an independent basic service set (IBSS); an IBSS is a BSS
that forms a self-contained network, in which no access to a distribution system (DS) is
available. In figure 3.1, the stations STA1 and STA2 operate without a distribution system
(DS) [ISO 99].

802.11 Independent BSS

(Figure: STA 1 and STA 2 communicate directly over the 802.11 MAC/PHY, with no
distribution system between them.)

FIGURE 3.1 - Logical architecture of an IBSS

It is important to see the difference between mobile and portable STAs: a portable STA
is moved from point to point but is only used while at a fixed location, whereas a mobile
station can access the WLAN while in motion.

3.1 Definition: MANET

A "mobile ad hoc network" (MANET) is an autonomous system of mobile routers


(and associated hosts) connected by wireless links. the union which form an arbitrary
graph. The routers are free to move randomly and organize themselves arbitrarily; thus, the
network's wireless topology may change rapidly and unpredictably. Such a network may
operate in a standalone fashion, or may be connected to the larger Internet [IET 2002].

3.2 Multi-hop Wireless Networks

A mobile ad hoc wireless network becomes a multi-hop wireless network when
messages between a transmitting node and a receiving node must be relayed by intermediate
nodes, extending the communication range of the network. These types of networks are
useful in any situation where temporary network connectivity is needed over larger areas.
Recent work has concentrated on developing MAC layer protocols and routing protocols for
these types of networks. Figure 3.2 shows an example of a simple multi-hop network.

FIGURE 3.2 - Multi-hop wireless network

3.3 Mobile Topology

Ad hoc networks are wireless networks of mobile hosts, in which the topology
rapidly changes due to the movement of mobile hosts. This frequent topology change may
lead to sudden packet losses and delays. Transport Protocols like TCP, which have been
designed for reliable fixed networks, misinterpret this packet loss as congestion and invoke
congestion control, leading to unnecessary retransmissions and loss of throughput [CHA
2001].

The topology of an ad hoc network changes every time an MH’s movement results
in the establishment of new wireless links (an MH moves within range of another) or link
disconnections (an MH moves out of range of another which was within its range). The rate
of topology change is dependent on the extent of mobility and transmission range of the
hosts. Routes are heavily dependent on the relative location of MHs. Hence, routes may be
repeatedly invalidated in an unpredictable and arbitrary fashion due to the mobility of hosts.
The mobility of a single node may affect several routes that pass through it [CHA 2001].

3.4 Main Characteristics of a MANET

In addition to the wireless characteristics already mentioned, the main characteristics of a MANET are as follows:

- It is typically created in a spontaneous manner.

- It is limited in time and space.

- Dynamic topologies: nodes are free to move arbitrarily, at different speeds, so the
network topology may change randomly and at unpredictable times.

- Energy-constrained operation: some or all of the nodes may depend on batteries or
other exhaustible means for their energy.

- Limited bandwidth: less than in wired networks; in addition, the realized throughput of
wireless links, after accounting for the effects of multi-path propagation, fading, noise,
etc., is often much less than a radio's maximum transmission rate.

- Security threats: a MANET is more exposed than a fixed, cabled network. The
increased possibility of eavesdropping, spoofing, and hacking should be considered.

3.5 Routing protocols in MANET [PAD 2000]

The dynamic nature of the topology in ad hoc networks poses many interesting
problems in the domain of routing protocols. As a result, ad hoc networks have been
studied extensively in the context of routing (i.e., at the network layer) [CHA 2001].

In these networks, it is necessary to have a routing protocol that:

• Quickly provides relatively stable, loop-free routes

• Adapts to the mobility of the network

Conventional protocols like link state and distance vector do not match these requirements
because they do not converge quickly enough or scale well as mobility increases [CHA
2001].

Routing protocols are also mentioned in this work because they are associated with
poor throughput performance in wireless networks as shown in chapter 5 of this TI.

The traditional routing protocols deployed in wired networks cannot be used in
mobile ad hoc networks because of node mobility.

All nodes of ad hoc networks behave as routers and take part in discovery and
maintenance of routes to other nodes in the network. The ad hoc routing protocols can be
divided into two classes: Table-driven and On-demand.

3.5.1 Table Driven Routing Protocols

In Table-driven routing protocols, each node maintains one or more tables containing
routing information to every other node in the network. All nodes update these tables to
maintain a consistent up-to-date view of the network. When the network topology changes
the nodes propagate update messages throughout the network in order to maintain
consistent and up-to-date routing information about the whole network. Some of the most
important table-driven ad hoc routing protocols are the following:

- Dynamic Destination-Sequenced Distance-Vector Routing Protocol


(DSDV) [PER 94].
- Cluster head Gateway Switch Routing Protocol (CGSR) [CHI 97].
- The Wireless Routing Protocol (WRP) [MUR 96].
- Global State Routing (GSR) [CHE 98].
- Fisheye State Routing (FSR) [IWA 99].
- Hierarchical State Routing - (HSR) [IWA 99].
- Zone-based Hierarchical Link State Routing Protocol (ZHLS) [JOA 99], etc.

3.5.2 On-Demand Routing Protocols

These protocols take a lazy approach to routing. In contrast to table-driven routing
protocols, up-to-date routes are not maintained at every node; instead, routes are created
as and when required. When a source wants to send to a destination, it invokes the route
discovery mechanism to find a path to the destination (a simplified sketch of this idea
follows the list below). The route remains valid until the destination becomes unreachable
or until the route is no longer needed. Some of the most important on-demand ad hoc
routing protocols are the following:

- Cluster based Routing Protocol (CBRP) [IET 99].


- Dynamic Source Routing Protocol (DSRP) [IET 99a].
- Ad hoc On-demand Distance Vector Routing (AODV) [IET 99b].
- Temporally Ordered Routing Algorithm (TORA) [PAR 97].
- Associativity-Based Routing (ABR) [TOH 96].
- Signal Stability Routing (SSR) [DUB 97], etc.
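
As a simplified illustration of the route discovery idea shared by the protocols above, the
sketch below floods a route request over a small topology and returns the first path that
reaches the destination. It is an abstraction of the general mechanism, not an implementation
of any particular protocol; real protocols such as DSR and AODV add sequence numbers,
route caching, and route maintenance on top of this idea.

    from collections import deque

    def discover_route(links, source, destination):
        """links: dict mapping each node to the set of its current one-hop neighbours."""
        visited = {source}
        queue = deque([[source]])          # each entry is the path accumulated so far
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node == destination:
                return path                # a route reply would travel back along this path
            for neighbour in links.get(node, ()):
                if neighbour not in visited:
                    visited.add(neighbour)
                    queue.append(path + [neighbour])
        return None                        # destination unreachable (network partition)

    topology = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
    print(discover_route(topology, "A", "D"))   # ['A', 'B', 'C', 'D']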

3.5.3 Table Driven versus On Demand



Some characteristics of both schemes are presented here [ADH 2002]:

Table-driven:

- Table-driven or proactive approach uses periodic route updates, and can either be
link-state based or distance-vector based.

- The approach is called proactive because it always keeps routes up to date and reacts
to link changes as they occur.

- Mobility is treated as link changes.

- Drawbacks of this approach are that it is inefficient if there is little demand for
routes and that it tends to become unstable at high mobility.

On-Demand:

- In on-demand driven or reactive approach, there are no periodic route updates.

- This approach is termed reactive, meaning that it reacts specifically to link changes
only when needed.

- Routes are discovered on demand by the source node.

- It is also possible to use caching.

- The advantage of this approach is that it is both power and bandwidth efficient.

3.5.4 Ad-Hoc Wireless Multicasting

For wireless networks, the most natural communication type is broadcasting since
traditional radios are based on omni-directional antennas. However, problems arise in ad
hoc wireless network multicasting due to the mobility of sources, destinations, and
intermediate nodes in the distribution tree. In addition, there are hidden terminal problems
and multicast group dynamics [ADH 2002].

It is important to realize that multicasting is a “selective broadcast”, from one


transmitter to a selected group of receivers.

A classification of current ad hoc multicast routing protocols is presented below.



Multicast Routing Protocols for MANET

    Source-Based Tree          : DVMRP
    Core-Based Tree            : AODV
    Multicast Mesh             : CAMP
    Group-Based Forwarding     : ODMRP
    Location-Based Forwarding  : LBM

    DVMRP: Distance Vector Multicast Routing Protocol
    CAMP: Core-Assisted Mesh Protocol
    ODMRP: On-Demand Multicast Routing Protocol
    LBM: Location-Based Multicast [ADH 2002]

FIGURE 3.3 - Existing multicast routing protocols for ad hoc wireless networks.

More information may be acquired in [ADH 2002].

3.6 MANET Applications

We have different MANET applications as follows:

- Collaborative work: for some business, educational, engineering environments, the


need for collaborative computing might be more important outside office
environments than inside.

- Security applications: companies can implement ad hoc security systems using tiny
sensing computers.

- Crisis management applications: these arise, for example, as a result of natural
disasters where the entire communications infrastructure is destroyed and restoring
communication quickly is essential. By using a MANET, an ad hoc infrastructure could
be set up in hours instead of the days or weeks required for wire-line communications.

- Sensor Network applications

- Personal area Networking.



4 General overview of Transmission Control Protocol


In [ROD 2001] we find a good introduction to the Transmission Control Protocol
(TCP); in its chapter five, the authors give a good overview of TCP using mainly
RFC 793 as a reference.

The primary purpose of TCP is to provide reliable logical circuit or connection


service between pairs of processes. It does not assume reliability from the lower-level
protocols (such as IP), so TCP must guarantee this itself.

TCP can be characterized by the following facilities:

- Stream Data Transfer: From the application's viewpoint, TCP transfers a contiguous
stream of bytes through the network. The application does not have to bother with
chopping the data into basic blocks or datagrams. TCP does this by grouping the bytes
in TCP segments, which are passed to IP for transmission to the destination. Also,
TCP itself decides how to segment the data and it can forward the data at its own
convenience. Sometimes, an application needs to be sure that all the data passed to
TCP has actually been transmitted to the destination. For that reason, a push function
is defined. It will push all remaining TCP segments still in storage to the destination
host. The normal close connection function also pushes the data to the destination.

- Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received
within a timeout interval, the data is retransmitted. Since the data is transmitted in
blocks (TCP segments), only the sequence number of the first data byte in the segment
is sent to the destination host. The receiving TCP uses the sequence numbers to
rearrange the segments when they arrive out of order, and to eliminate duplicate
segments.

- Flow Control: The receiving TCP, when sending an ACK back to the sender, also
indicates to the sender the number of bytes it can receive beyond the last received TCP
segment, without causing overrun and overflow in its internal buffers. This is sent in
the ACK in the form of the highest sequence number it can receive without problems.
This mechanism is also referred to as a window-mechanism, and we discuss it in more
detail later in this chapter.

- Multiplexing: Achieved through the use of ports, just as with UDP.

- Logical Connections: The reliability and flow control mechanisms described above
require that TCP initializes and maintains certain status information for each data
stream. The combination of this status, including sockets, sequence numbers and
window sizes, is called a logical connection. The pair of sockets used by the sending
and receiving processes uniquely identifies each connection.

- Full Duplex: TCP provides for concurrent data streams in both directions.

4.1 The window principle applied to TCP

A simple transport protocol might use the following principle: send a packet and then
wait for an acknowledgment from the receiver before sending the next packet. If the
ACK is not received within a certain amount of time, retransmit the packet.

While this simple mechanism ensures reliability, it uses only a part of the available
network bandwidth. The window principle gives better use of the network bandwidth
(better throughput); at the same time, it is important to realize that:

- The sender groups its packets to be transmitted.


- The sender can send all packets within the window without receiving an ACK, but
must start a timeout timer for each of them.
- The receiver must acknowledge each packet received, indicating the sequence
number of the last well-received packet.
- The sender slides the window on each ACK received.
- The receiver may delay replying to a packet with an acknowledgment, according to
the availability of its buffers and the window size of the communication; this is a form of
flow control.

FIGURE 4.1 - Window principle applied to TCP

The above window principle is used in TCP, but with a few differences:

- Since TCP provides a byte-stream connection, sequence numbers are assigned to


each byte in the stream. TCP divides this contiguous byte stream into TCP segments to
transmit them. The window principle is used at the byte level, that is, the segments sent and
ACKs received will carry byte-sequence numbers and the window size is expressed as a
number of bytes, rather than a number of packets.
- The window size is determined by the receiver when the connection is established and is
variable during the data transfer. Each ACK message will include the window size that the
receiver is ready to deal with at that particular time.
The sender's data stream can now be seen as divided into bytes that have been sent and
acknowledged, bytes that have been sent but not yet acknowledged, bytes that may be sent
without waiting, and bytes that cannot be sent yet. Remember that TCP will block bytes
into segments, and a TCP segment only carries the sequence number of the first byte in
the segment.
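
To make the byte-oriented window rules above concrete, the following simplified sketch
models a sender that keeps at most one window of unacknowledged bytes outstanding and
slides the window forward on each cumulative ACK. Timers and retransmissions are
deliberately omitted, and the window and segment sizes are arbitrary examples.

    # Byte-oriented sliding window: at most `window` unacknowledged bytes may be
    # outstanding, and each segment carries the sequence number of its first byte.
    class SlidingWindowSender:
        def __init__(self, data, window, mss=536):
            self.data = data
            self.window = window          # advertised window, in bytes
            self.mss = mss                # maximum segment size
            self.snd_una = 0              # oldest unacknowledged byte
            self.snd_nxt = 0              # next byte to send

        def segments_to_send(self):
            """Yield (sequence_number, payload) while the window allows it."""
            while self.snd_nxt < len(self.data) and self.snd_nxt - self.snd_una < self.window:
                usable = self.window - (self.snd_nxt - self.snd_una)
                payload = self.data[self.snd_nxt:self.snd_nxt + min(self.mss, usable)]
                yield self.snd_nxt, payload
                self.snd_nxt += len(payload)

        def on_ack(self, ack_number):
            # cumulative ACK: every byte below ack_number is acknowledged,
            # so the left edge of the window slides forward
            self.snd_una = max(self.snd_una, ack_number)

    sender = SlidingWindowSender(b"x" * 4000, window=2000)
    print([seq for seq, _ in sender.segments_to_send()])   # [0, 536, 1072, 1608]
    sender.on_ack(1072)                                    # first two segments ACKed
    print([seq for seq, _ in sender.segments_to_send()])   # window slides: [2000, 2536]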

4.2 TCP segment format

The TCP segment format is as shown in following figure 4.2:

TCP Segment format

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-------------------------------+-------------------------------+
    |          Source Port          |       Destination Port        |
    +-------------------------------+-------------------------------+
    |                        Sequence Number                        |
    +---------------------------------------------------------------+
    |                    Acknowledgement Number                     |
    +-------+-----------+-+-+-+-+-+-+-------------------------------+
    | Data  |           |U|A|P|R|S|F|                               |
    | Offset| Reserved  |R|C|S|S|Y|I|            Window             |
    |       |           |G|K|H|T|N|N|                               |
    +-------+-----------+-+-+-+-+-+-+-------------------------------+
    |           Checksum            |        Urgent Pointer         |
    +-------------------------------+-------------------------------+
    |                    Options ...               |    Padding     |
    +---------------------------------------------------------------+
    |                           Data bytes                          |
    +---------------------------------------------------------------+

FIGURE 4.2 - TCP Segment format

Where:
- Source Port: The 16-bit source port number, used by the receiver to reply.
- Destination Port: The 16-bit destination port number.
- Sequence Number: The sequence number of the first data byte in this
segment. If the SYN control bit is set, the sequence number is the initial
sequence number (n) and the first data byte is n+1.


- Acknowledgment Number: If the ACK control bit is set, this field contains
the value of the next sequence number that the receiver is expecting to receive.
- Data Offset: The number of 32-bit words in the TCP header. It indicates where the
data begins.
- Reserved: Six bits reserved for future use; must be zero.
- URG: Indicates that the urgent pointer field is significant in this segment.
- ACK: Indicates that the acknowledgment field is significant in this
segment.
- PSH: Push function.
- RST: Resets the connection.
- SYN: Synchronizes the sequence numbers.
- FIN: No more data from sender.
- Window: Used in ACK segments. It specifies the number of data bytes, beginning
with the one indicated in the acknowledgment number field that the receiver (= the
sender of this segment) is willing to accept.
- Checksum: The 16-bit one's complement of the one's complement sum of all 16-bit
words in a pseudo-header, the TCP header, and the TCP data. While computing the
checksum, the checksum field itself is considered zero.
- Urgent Pointer: Points to the first data octet following the urgent data. Only
significant when the URG control bit is set.
- Options: Just as in the case of IP datagram options, options can be either:
* A single byte containing the option number
* A variable length option in the format: (Option - Length - Option data... )

There are currently seven options defined:

Options Kind Length


a) End of option list 0 -
b) No-Operation 1 -
c) Maximum segment size 2 4
d) Window scale 3 3
e) Sack-Permitted 4 2
f) Sack 5 X
g) Timestamps 8 10

a) and b) are self-explanatory.


c) Maximum Segment Size option: This option is only used during the
establishment of the connection (SYN control bit set) and is sent from the side
that is to receive data to indicate the maximum segment length it can handle. If
this option is not used, any segment size is allowed.
d) Window Scale option: This option is not mandatory. Both sides must send
the Window Scale option in their SYN segments to enable window scaling in
their direction. The Window Scale option expands the definition of the TCP window to
32 bits. It defines the 32-bit window size by applying a scale factor, carried in the
SYN segment, to the standard 16-bit window size. The receiver rebuilds the 32-bit
window size by using the 16-bit window size and the scale factor. This option is
negotiated during the handshake; there is no way to change it after the
connection has been established.
e) SACK-Permitted Option: This option is set when selective acknowledgment
is used in that TCP connection.
f) SACK option: Selective Acknowledgment (SACK) allows the receiver to
inform the sender about all the segments that are received successfully. Thus,
the sender will only send the segments that actually got lost. If the number of
the segments that have been lost since the last SACK is too large, the SACK
option will be too large. As a result, the number of blocks that can be reported
by the SACK option is limited to four. To mitigate this, the SACK option should
be used for the most recently received data.
g) Timestamps option: The timestamps option sends a timestamp value that
indicates the current value of the timestamp clock of the TCP sending the
option. Timestamp Echo Value can only be used if the ACK bit is set in the
TCP header.
- Padding: All zero bytes are used to fill up the TCP header to a total length that is a
multiple of 32 bits.
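
As a practical illustration of the layout in figure 4.2, the short Python example below packs
the fixed 20-byte TCP header with the struct module. The field values are arbitrary, and the
checksum is left at zero, since computing it would require the IP pseudo-header mentioned
above.

    import struct

    # Pack the fixed 20-byte TCP header of figure 4.2 (no options).
    def build_tcp_header(src_port, dst_port, seq, ack, flags, window,
                         checksum=0, urgent_ptr=0):
        data_offset = 5                       # 5 x 32-bit words = 20 bytes, no options
        offset_reserved = data_offset << 4    # data offset in the high nibble, reserved = 0
        return struct.pack(
            "!HHIIBBHHH",                     # network byte order, fields as in figure 4.2
            src_port, dst_port,               # 16-bit source and destination ports
            seq, ack,                         # 32-bit sequence and acknowledgment numbers
            offset_reserved, flags,           # offset/reserved byte, then the six flag bits
            window, checksum, urgent_ptr,
        )

    SYN = 0x02                                # flag bits, low to high: FIN SYN RST PSH ACK URG
    header = build_tcp_header(1025, 80, seq=1000, ack=0, flags=SYN, window=65535)
    print(len(header), header.hex())          # 20 bytes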

4.3 Acknowledgements and retransmissions

TCP sends data in variable-length segments. Sequence numbers are based on a byte
count. ACKs specify the sequence number of the next byte that the receiver expects to
receive.

Now consider the case in which a segment gets lost or corrupted. The receiver will
acknowledge all further well-received segments with an acknowledgment referring to the
first byte of the missing segment. The sender will stop transmitting when it has sent all the
bytes in the window. Eventually, a timeout will occur and the missing segment will be
retransmitted.
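
The sketch below illustrates this cumulative acknowledgment behaviour: once a segment is
missing, every later segment triggers an ACK for the first missing byte, producing the
duplicate ACKs that fast retransmit (section 4.6.3) relies on. The segment sizes are
arbitrary examples.

    # Cumulative ACKs: the receiver always acknowledges the next byte it expects,
    # so segments arriving after a hole all produce ACKs for the first missing byte.
    class CumulativeAckReceiver:
        def __init__(self):
            self.rcv_nxt = 0              # next in-order byte expected
            self.out_of_order = {}        # seq -> length of buffered out-of-order segments

        def on_segment(self, seq, length):
            if seq == self.rcv_nxt:
                self.rcv_nxt += length
                # absorb any buffered segments that are now contiguous
                while self.rcv_nxt in self.out_of_order:
                    self.rcv_nxt += self.out_of_order.pop(self.rcv_nxt)
            elif seq > self.rcv_nxt:
                self.out_of_order[seq] = length     # hole: buffer and re-ACK the hole
            return self.rcv_nxt                     # cumulative ACK value

    rx = CumulativeAckReceiver()
    print(rx.on_segment(0, 500))      # 500  - in order
    print(rx.on_segment(1000, 500))   # 500  - bytes 500..999 missing: duplicate ACK
    print(rx.on_segment(1500, 500))   # 500  - another duplicate ACK
    print(rx.on_segment(500, 500))    # 2000 - hole filled, the ACK jumps forward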

4.4 Establishing a TCP connection

Before any data can be transferred, a connection has to be established between the
two processes. One of the processes (usually the server) issues a passive OPEN call, the
other an active OPEN call. The passive OPEN call remains dormant until another process
tries to connect to it by an active OPEN.

On the network, three TCP segments are exchanged: This whole process is known
as a three-way handshake. Note that the exchanged TCP segments include the initial
sequence numbers from both sides, to be used on subsequent data transfers. Closing the
connection is done implicitly by sending a TCP segment with the FIN bit (no more data)
set. Since the connection is full-duplex (that is, there are two independent data streams, one
in each direction), the FIN segment only closes the data transfer in one direction. The other
process will now send the remaining data it still has to transmit and ends with a TCP
segment where the FIN bit is set. The connection is deleted (status information on both
sides) once the data stream is closed in both directions.

TCP – Connection establishment

One process (usually the server) issues a passive OPEN and waits for an active request.
The other process issues an active OPEN and sends SYN, seq=n.
The passive side receives the SYN and answers with SYN, seq=m, ACK n+1.
The active side receives the SYN+ACK and sends ACK m+1.

The connection is now established and the two data streams (one in
each direction) have been initialized (sequence numbers).

FIGURE 4.3 - TCP Connections establishment
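
The tiny script below only walks through the sequence and acknowledgment numbers of
figure 4.3, with n and m standing for the arbitrary initial sequence numbers chosen by each
side; everything else about TCP is omitted.

    import random

    n = random.randrange(2**32)   # initial sequence number of the active opener
    m = random.randrange(2**32)   # initial sequence number of the passive opener

    print(f"active  -> passive : SYN, seq={n}")
    print(f"passive -> active  : SYN, seq={m}, ACK={n + 1}")   # acknowledges the SYN
    print(f"active  -> passive : ACK={m + 1}")                 # connection established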

4.5 TCP application programming interface

The TCP application-programming interface is not fully defined. Only some base
functions it should provide are described in RFC 793 Transmission Control Protocol. As is
the case with most RFCs in the TCP/IP protocol suite, a great degree of freedom is left to
the implementers, thereby allowing for optimal (operating system-dependent)
implementations, resulting in better efficiency (greater throughput).

The following function calls are described in the RFC:

- Open: Establishes a connection; it takes several parameters, such as:


a) Active/passive
b) Foreign socket
c) Local port number
d) Timeout value (optional)

This returns a local connection name, which is used to reference this particular
connection in all other functions.

- Send: Causes data in a referenced user buffer to be sent over the connection. Can
optionally set the URGENT flag or the PUSH flag.

- Receive: Copies incoming TCP data to a user buffer.

- Close: Closes the connection; causes a push of all remaining data and a TCP
segment with FIN flag set.

- Status: An implementation-dependent call that could return information,
such as:
a) Local and foreign socket
b) Send and receive window sizes
c) Connection state
d) Local connection name

- Abort: Causes pending Send and Receive operations to be aborted, and a RESET to
be sent to the foreign TCP.
Full details can be found in RFC 793 Transmission Control Protocol.
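
These calls map closely onto the Berkeley socket interface available in most operating
systems. The small Python example below, given only as an illustration, performs a passive
open (bind/listen/accept) and an active open (connect) on the loopback interface and then
exercises Send, Receive, and Close; the port number is an arbitrary choice.

    import socket
    import threading

    PORT = 5050   # arbitrary example port

    # Passive OPEN: bind()/listen() correspond to the passive OPEN call;
    # accept() returns once an active OPEN (connect) completes the handshake.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", PORT))
    server.listen(1)

    def serve_once():
        conn, _ = server.accept()
        with conn:
            data = conn.recv(1024)              # Receive
            conn.sendall(b"echo: " + data)      # Send

    threading.Thread(target=serve_once, daemon=True).start()

    # Active OPEN: connect() triggers the three-way handshake of figure 4.3.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client:
        client.connect(("127.0.0.1", PORT))
        client.sendall(b"hello")                # Send
        print(client.recv(1024))                # b'echo: hello'

    server.close()                              # Close: FIN flows in each direction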

4.6 TCP congestion control algorithms

The TCP congestion algorithm prevents a sender from overrunning the capacity of
the network (for example, slower WAN links). TCP can adapt the sender's rate to network
capacity and attempt to avoid potential congestion situations. Several congestion control
enhancements have been added and suggested to TCP over the years. This is still an active
and ongoing research area, but modern implementations of TCP contain four intertwined
algorithms as basic Internet standards:

- Slow start.
- Congestion avoidance.
- Fast retransmit.
- Fast recovery.

4.6.1 Slow Start

Old implementations of TCP would start a connection with the sender injecting
multiple segments into the network, up to the window size advertised by the receiver.
While this is OK when the two hosts are on the same LAN, if there are routers and slower
links between the sender and the receiver, problems can arise. Some intermediate routers
cannot handle the load, packets are dropped, retransmissions result, and performance is degraded.
The algorithm to avoid this is called slow start. It operates by observing that the rate at
which new packets should be injected into the network is the rate at which the
acknowledgments are returned by the other end. Slow start adds another window to the
sender's TCP: the congestion window, called CWND. When a new connection is
established with a host on another network, the congestion window is initialized to one
segment (that is, the segment size announced by the other end, or the default,
typically 536 or 512 bytes).
Each time an ACK is received, the congestion window is increased by one segment. The
sender can transmit up to the lower of the congestion window and the advertised window.
The congestion window is flow control imposed by the sender, while the advertised
window is flow control imposed by the receiver. The former is based on the sender's
assessment of perceived network congestion; the latter is related to the amount of available
buffer space at the receiver for this connection.

The sender starts by transmitting one segment and waiting for its ACK. When that
ACK is received, the congestion window is incremented from one to two, and two
segments can be sent. When each of those two segments is acknowledged, the congestion
window is increased to four. This provides an exponential growth, although it is not exactly
exponential, because the receiver may delay its ACKs, typically sending one ACK for
every two segments that it receives.

At some point, the capacity of the IP network (for example, slower WAN links) can
be reached, and an intermediate router will start discarding packets. This tells the sender
that its congestion window has gotten too large.
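
As a rough illustration of this growth, a minimal sketch (Python, assuming a fixed 512-byte segment size and ignoring delayed ACKs and the advertised window) counts how many segments can be sent per round trip:

    SEGMENT_SIZE = 512                       # assumed segment size in bytes

    cwnd = SEGMENT_SIZE                      # new connection: one segment
    for rtt in range(4):
        in_flight = cwnd // SEGMENT_SIZE
        print(f"round trip {rtt}: {in_flight} segment(s) sent")
        cwnd += in_flight * SEGMENT_SIZE     # one increment per ACK received
    # prints 1, 2, 4, 8 segments per round trip: exponential growth until a loss
    # occurs or the receiver's advertised window limits the sender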

FIGURE 4.4 - TCP slow start in action



4.6.2 Congestion avoidance

The assumption of the algorithm is that packet loss caused by damage is very small
(much less than 1 percent). Therefore, the loss of a packet signals congestion somewhere in
the network between the source and destination. There are two indications of packet loss:

- A timeout occurs.
- Duplicate ACKs are received.

Congestion avoidance and slow start are independent algorithms with different
objectives. Nevertheless, when congestion occurs TCP must slow down its transmission
rate of packets into the network, and invoke slow start to get things going again. In practice,
they are implemented together. Congestion avoidance and slow start require that two
variables be maintained for each connection:

- A congestion window, CWND
- A slow start threshold size, SSTHRESH

The combined algorithm operates as follows:

1. Initialization for a given connection sets CWND to one segment and SSTHRESH to
65535 bytes.

2. The TCP output routine never sends more than the lower value of CWND or the
receiver's advertised window.

3. When congestion occurs (timeout or duplicate ACK), one-half of the current
window size is saved in SSTHRESH. Additionally, if the congestion is indicated by a
timeout, CWND is set to one segment.

4. When new data is acknowledged by the other end, increase CWND, but the way it
increases depends on whether TCP is performing slow start or congestion avoidance.
If CWND is less than or equal to SSTHRESH, TCP is in slow start; otherwise, TCP is
performing congestion avoidance.

Slow start continues until TCP is halfway to where it was when congestion occurred
(since it recorded half of the window size that caused the problem in step 3), and then
congestion avoidance takes over. Slow start has CWND begin at one segment and be
incremented by one segment every time an ACK is received. As mentioned earlier, this
opens the window exponentially: send one segment, then two, then four, and so on.
Congestion avoidance dictates that CWND be incremented by
SEGSIZE*SEGSIZE/CWND each time an ACK is received, where SEGSIZE is the
segment size and CWND is maintained in bytes. This is a linear growth of CWND,
compared to slow start's exponential growth. The increase in CWND should be at most one
segment each round-trip time, regardless of how many ACKs are received in that RTT.
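
The four steps above can be condensed into a small sketch (Python; the segment size and the 65535-byte initial threshold come from the text, the rest is a simplification rather than a complete TCP implementation):

    SEGSIZE = 512                                 # assumed segment size in bytes

    class CongestionControl:
        def __init__(self):
            self.cwnd = SEGSIZE                   # step 1: one segment
            self.ssthresh = 65535                 # step 1: initial threshold in bytes

        def send_window(self, advertised):
            # step 2: never send more than the lower of CWND and the advertised window
            return min(self.cwnd, advertised)

        def on_congestion(self, timeout):
            # step 3: remember half the current window; a timeout also resets CWND
            self.ssthresh = self.cwnd // 2
            if timeout:
                self.cwnd = SEGSIZE

        def on_ack(self):
            # step 4: slow start below SSTHRESH, congestion avoidance above it
            if self.cwnd <= self.ssthresh:
                self.cwnd += SEGSIZE                          # exponential growth
            else:
                self.cwnd += SEGSIZE * SEGSIZE // self.cwnd   # roughly linear growth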

FIGURE 4.5 - TCP slow start and congestion avoidance behavior in action

4.6.3 Fast retransmit

Fast retransmit avoids having TCP wait for a timeout to resend lost segments.
Modifications to the congestion avoidance algorithm were proposed in 1990. Before
describing the change, realize that TCP may generate an immediate acknowledgment (a
duplicate ACK) when an out-of-order segment is received. The purpose of this duplicate
ACK is to let the other end know that a segment was received out of order, and to tell it
what sequence number is expected. Since TCP does not know whether a lost segment or
just a reordering of segments causes a duplicate ACK, it waits for a small number of
duplicate ACKs to be received. It is assumed that if there is just a reordering of the
segments, there will be only one or two duplicate ACKs before the reordered segment is
processed, which will then generate a new ACK. If three or more duplicate ACKs are
received in a row, it is a strong indication that a segment has been lost. TCP then performs
a retransmission of what appears to be the missing segment, without waiting for a
retransmission timer to expire.
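
A minimal sketch of this duplicate-ACK counting, assuming the usual threshold of three and an ACK trace similar to the one in figure 4.6 (the state dictionary and the retransmit callback are illustrative, not part of any real TCP implementation):

    DUP_ACK_THRESHOLD = 3

    def on_ack(ack_seq, state, retransmit):
        # Count consecutive duplicate ACKs and fast-retransmit on the third one.
        if ack_seq == state["last_ack"]:
            state["dup_count"] += 1
            if state["dup_count"] == DUP_ACK_THRESHOLD:
                # The missing segment is the one after the duplicated ACK
                # (packet 3 in figure 4.6); resend it without waiting for the timer.
                retransmit(ack_seq + 1)
        else:
            state["last_ack"] = ack_seq      # new data acknowledged
            state["dup_count"] = 0

    state = {"last_ack": 0, "dup_count": 0}
    for ack in [1, 2, 2, 2, 2, 6]:           # packet 3 lost, packets 4-6 trigger dup ACKs
        on_ack(ack, state, lambda seq: print("fast retransmit of segment", seq))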

TCP fast retransmit in action (figure 4.6): the sender transmits packets 1 to 6 and packet 3 is lost. The receiver returns ACK 1 and ACK 2, then one duplicate ACK 2 for each of packets 4, 5 and 6. On the third duplicate ACK the sender retransmits packet 3 without waiting for the retransmission timer, and the receiver answers with ACK 6.

FIGURE 4.6 - TCP fast retransmit in action

4.6.4 Fast recovery

After fast retransmit sends what appears to be the missing segment, congestion
avoidance, but not slow start, is performed. This is the fast recovery algorithm. It is an
improvement that allows high throughput under moderate congestion, especially for large
windows. The reason for not performing slow start in this case is that the receipt of the
duplicate ACKs tells TCP more than just that a packet has been lost. Since the receiver can only
generate the duplicate ACK when another segment is received, that segment has left the
network and is in the receiver's buffer.

That is, there is still data flowing between the two ends, and TCP does not want to
reduce the flow abruptly by going into slow start. The fast retransmit and fast recovery
algorithms are usually implemented together as follows:

1. When the third duplicate ACK in a row is received, set SSTHRESH to one-half the
current congestion window, CWND, but no less than two segments. Retransmit the
missing segment. Set CWND to SSTHRESH plus three times the segment size. This
inflates the congestion window by the number of segments that have left the network
and the other end has cached (3).

2. Each time another duplicate ACK arrives, increment CWND by the segment size.
This inflates the congestion window for the additional segment that has left the
network. Transmit a packet, if allowed by the new value of CWND.

3. When the next ACK arrives that acknowledges new data, set CWND to
SSTHRESH (the value set in step 1). This ACK should be the acknowledgment of the
retransmission from step 1, one round-trip time after the retransmission. Additionally,
this ACK should acknowledge all the intermediate segments sent between the lost
packet and the receipt of the first duplicate ACK. This step is congestion avoidance,
since TCP is down to one-half the rate it was at when the packet was lost.
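
The window inflation and deflation in these three steps can be sketched as follows (Python; the 512-byte segment size and the starting window of eight segments are only example values):

    SEGSIZE = 512                        # assumed segment size in bytes

    def enter_fast_recovery(cwnd):
        # Step 1: halve the window (at least two segments), then inflate it by the
        # three segments that generated the duplicate ACKs.
        ssthresh = max(cwnd // 2, 2 * SEGSIZE)
        return ssthresh + 3 * SEGSIZE, ssthresh

    def on_additional_dup_ack(cwnd):
        # Step 2: one more segment has left the network, so inflate by one segment.
        return cwnd + SEGSIZE

    def on_new_ack(ssthresh):
        # Step 3: deflate back to SSTHRESH and continue in congestion avoidance.
        return ssthresh

    cwnd, ssthresh = enter_fast_recovery(8 * SEGSIZE)
    cwnd = on_additional_dup_ack(cwnd)
    cwnd = on_new_ack(ssthresh)
    print(cwnd // SEGSIZE)               # 4 segments: half the rate at which the loss occurred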

4.7 Current TCP versions

This section introduces some of the most popular current TCP versions, such as TCP
Tahoe, TCP Reno and TCP New Reno.

Historically, TCP Tahoe was the first modification to TCP. The newer TCP Reno
included the fast recovery algorithm. This was followed by New Reno and its partial
acknowledgment mechanism for multiple losses in a single window of data. As noted,
TCP Tahoe, Reno and New Reno all use the same algorithm at the receiver, but implement
different variations of the transmission process at the sender. The receiver advertises a
window size, and the sender ensures that the number of unacknowledged bytes does not
exceed this size. For each segment correctly received, the receiver sends an
acknowledgment, which includes the sequence number identifying the next in-sequence
segment (byte). The sender implements a congestion window that defines the maximum
number of transmitted-but-unacknowledged bytes permitted. This adaptive window can
increase and decrease, but never exceeds the receiver's advertised window. TCP applies
graduated multiplicative and additive increases to the sender's congestion window. The
protocol versions differ from each other essentially in the way that the congestion
window is manipulated in response to acknowledgments and timeouts [TSA 2000].

TCP's error-control mechanism is primarily oriented towards congestion control, which
can also be beneficial for the flow that experiences it, since avoiding unnecessary
retransmissions can lead to better throughput. The basic idea is for each source to determine
how much capacity is available in the network, so that it knows how many segments it can
have in transit. TCP utilizes acknowledgments to pace the transmission of segments and
interprets timeout events as indicating congestion. In response, the TCP sender reduces the
transmission rate by shrinking its window. TCP Tahoe and Reno are the two most common
reference implementations for TCP. TCP New Reno is a modified version of Reno that
attempts to solve some of Reno's performance problems when multiple packets are dropped
from a single window of data [TSA 2000].

4.7.1 TCP Tahoe

The congestion-control algorithm includes Slow Start, Congestion Avoidance, and
Fast Retransmit. It also implements a round-trip-time-based (RTT-based) estimation of the
retransmission timeout. In the Fast Retransmit mechanism, a number of successive
duplicate acknowledgments (DACKs) carrying the same sequence number (the threshold is
usually set at three) triggers a retransmission without waiting for the associated timeout
event to occur. The window adjustment strategy for this “early timeout”
is the same as for a regular timeout: Slow Start is applied. The problem, however, is that
Slow Start is not always efficient, especially if the error was purely transient or random in
nature, and not persistent. In such a case, the shrinkage of the congestion window is, in fact,
unnecessary, and renders the protocol unable to utilize the available bandwidth of the
communication channel during the subsequent phase of window re-expansion [TSA 2000].

4.7.2 TCP Reno

TCP Reno introduces Fast Recovery in conjunction with Fast retransmit. The idea
behind fast Recovery is that a DACK is an indication of available channel bandwidth since
a segment has been successfully delivered. This, in turn, implies that the congestion
window (CWND) should actually be incremented. Receiving the threshold number of
DACKS triggers Fast Recovery: the sender retransmits the missing segment and then, instead of
entering Slow Start as in TCP Tahoe, increases CWND by the DACK threshold number.
Thereafter, for as long as the sender remains in Fast Recovery, CWND is increased by
one for each additional DACK received. This procedure is called “inflating” CWND. The
Fast Recovery stage is completed when an acknowledgment (ACK) for new data is
received. The sender then halves CWND (“deflating” the window), sets the congestion
threshold to CWND, and resets the DACK counter. In Fast Recovery, CWND is thus
effectively set to half its previous value in the presence of DACKS, rather than performing
Slow Start as for a general retransmission timeout. TCP Reno, however, is not optimized
for multiple segment drops from a single window [TSA 2000].

4.7.3 TCP New Reno

TCP New Reno addresses the problem of multiple segment drops. In effect, it can
avoid many of the multiple retransmit timeouts of TCP Reno. The New Reno modification
introduces a partial acknowledgement strategy in Fast Recovery. A partial acknowledgment
is defined as an ACK for new data which does not acknowledge all the segments that were in
flight at the point when Fast Recovery was initiated. It is thus an indication that not all data
sent before entering Fast Recovery has been received. In TCP Reno, a partial ACK causes
an exit from Fast Recovery. In TCP New Reno, it is an indication that (at least) one
segment is missing and needs to be retransmitted. This retransmission is performed and
Fast Recovery continues. In this way, when multiple segments are lost from a window of
data, TCP New Reno can recover without waiting for a retransmission timeout. However,
the retransmission triggered off by a partial ACK might be for a delayed rather than lost
segment; thus, the strategy risks making multiple successful transmissions for the segment,
which can seriously impact its energy efficiency with no compensatory gain in goodput.
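
The partial-acknowledgment test that separates New Reno from Reno can be sketched as follows (Python; recovery_point and the retransmit callback are illustrative names, not part of any standard API):

    def on_ack_in_fast_recovery(ack_seq, recovery_point, retransmit):
        # recovery_point: highest sequence number in flight when Fast Recovery began.
        if ack_seq >= recovery_point:
            return "exit fast recovery"      # every outstanding segment is acknowledged
        # Partial ACK: at least one more segment is missing. New Reno retransmits it
        # and stays in Fast Recovery instead of waiting for a retransmission timeout.
        retransmit(ack_seq)
        return "stay in fast recovery"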

4.7.4 TCP SACK

TCP SACK was defined in RFC 2018 by Mathis et al. in 1996, and later extended in
RFC2883 by Floyd et al. in 2000. TCP SACK further improves TCP performance by
allowing the sender to retransmit packets based on the selective ACKs provided by the
receiver. The implementation uses a SACK option field that contains a number of SACK
blocks. The first block reports the most recently received packets; the additional blocks
repeat the most recently reported SACK blocks. TCP SACK uses the basic congestion
control algorithms and uses retransmit timeouts as a last option for recovery. The main
difference is the way it handles the loss of multiple packets from the same window, in fast
recovery. Like Reno, SACK enters fast recovery upon receiving duplicate ACKs. It then
retransmits a packet and cuts its congestion window in half. In addition to that, SACK has a
new variable called the pipe, and a data structure called the scoreboard. The pipe is
incremented when the sender sends a new or a retransmitted packet. It is decremented when
the receiver receives a new packet. This is indicated when the sender receives a duplicate
ACK with a SACK option. The scoreboard stores ACKs from previous SACK options,
allowing the sender to retransmit packets that are implied to be missing at the receiver. Like
New-Reno, the sender exits fast recovery when all of the data outstanding at the start of fast
recovery has been acknowledged [ELA 2002].

4.7.5 TCP Vegas

TCP Vegas was introduced in 1994 as an alternative to TCP Reno. It improves upon
each of the three mechanisms of TCP Reno. The first enhancement is a more prudent way
to grow the window size during the initial use of slow start, which leads to fewer losses. The
second enhancement is an improved retransmission mechanism where time-out is checked
on receiving the first duplicate acknowledgment, rather than waiting for the third duplicate
acknowledgment (as Reno would), and leads to a more timely detection of loss. The third
enhancement is a new congestion avoidance mechanism that corrects the oscillatory
behavior of Reno. In contrast to the Reno algorithm, which induces congestion to learn the
available network capacity, a Vegas source anticipates the onset of congestion by
monitoring the difference between the rate it is expecting to see and the rate it is actually
realizing. Vegas’ strategy is to adjust the source’s sending rate (window size) in an attempt
to keep a small number of packets buffered in the routers along the path. Although
experimental results presented by Brakmo and Peterson in 1995, and duplicated by Ahn et
al. in 1995, show that TCP Vegas achieves better throughput and fewer losses than TCP
Reno under many scenarios, at least two concerns remained: whether Vegas is stable, and if
so, whether it stabilizes to a fair distribution of resources; and whether Vegas results in
persistent congestion. In short, Vegas has lacked a theoretical explanation of why it works
[LOW 2000].
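
The rate-difference test used by Vegas can be sketched as follows (Python; ALPHA and BETA are the usual per-connection thresholds, assumed here to be one and three packets, and the window is measured in packets):

    ALPHA, BETA = 1, 3          # assumed thresholds: packets allowed to sit in router queues

    def vegas_adjust(cwnd, base_rtt, current_rtt):
        # Compare the rate the source expects with the rate it actually achieves.
        expected = cwnd / base_rtt               # rate if no packets were queued
        actual = cwnd / current_rtt              # rate currently being realized
        queued = (expected - actual) * base_rtt  # estimated packets buffered in the path
        if queued < ALPHA:
            return cwnd + 1                      # little queueing: grow the window linearly
        if queued > BETA:
            return cwnd - 1                      # queues building: back off before a loss
        return cwnd                              # keep the window unchanged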

4.7.6 TCP Santa Cruz

This is a new implementation of TCP that not only detects the initial stages of
congestion in the network but also identifies the direction of congestion, i.e., it determines
whether congestion is developing in the forward or reverse path of the connection. TCP
Santa Cruz is able to isolate the forward throughput from events such as congestion that
may occur in the reverse path. Congestion is determined by calculating the relative delay
that one packet experiences with respect to another as it traverses the network. This
relative delay is the foundation of their congestion control algorithm. The relative delay is
used to estimate the number of packets residing in the bottleneck queue; the congestion
control algorithm keeps the number of packets in the bottleneck queue at a minimum level
by adjusting the TCP source’s congestion window. The congestion window is reduced if
the bottleneck queue length increases (in response to increasing congestion in the network)
beyond a desired number of packets. The window is increased when the source detects
additional bandwidth available in the network (i.e., after a decrease in the bottleneck queue
length). TCP Santa Cruz can be implemented as a TCP option by utilizing the extra 40
bytes available in the options field of the TCP header [PAR 2000].

Besides the TCP implementations mentioned above, others exist, such as TCP Net Reno
(which deals with the small congestion window issue using Limited Transmit [RFC 3042]);
nevertheless, TCP Reno is currently the de facto standard on the Internet. It is worth noting
that the TCP versions presented here are intended for wired networks and are used as a
standard of comparison for new TCP protocols arising for purely wireless networks and
heterogeneous networks.

4.8 TCP in Ad Hoc Networks

TCP is a reliable, stream-oriented transport layer protocol, which was designed for
use over fixed low-error networks like the Internet. Route failures and disruptions are very
infrequent since the network is fixed. Therefore, packet loss, which is detected by TCP as a
timeout, can be reliably interpreted as a symptom of congestion in the network. In response,
TCP invokes congestion control mechanisms. Thus, TCP does not distinguish between
congestion and packet loss due to transmission errors or route failures. This inability of
TCP to distinguish between two distinct problems exhibiting the same symptom results in
performance degradation in ad hoc networks [CHA 2001].

In an ad hoc network, packet losses are frequent in the error-prone wireless medium,
but the effect of these losses can be reduced using reliable link layer protocols. One of the
more serious problems is that of route failures, which can occur very frequently and
unpredictably during the lifetime of a transport session, depending on the relative motion of
MHs in the network [CHA 2001].

In general, whenever the mobility of a mobile host (MH) invalidates one or more routes,
the re-establishment of the route by the underlying routing protocol takes a finite amount of
time. During this period, no packets can reach the destination through the existing route.
This results in the queuing and possible loss of packets and acknowledgments. This in turn
leads to timeouts at the source, which are interpreted by the transport protocol as
congestion. Consequently the source:

- Retransmits unacknowledged packets upon timing out.

- Invokes congestion control mechanisms that include exponential back off of the
retransmission timers and immediate shrinking of the window size, thus resulting in
a reduction of the transmission rate.

- Enters a slow start recovery phase to ensure that the congestion is reduced before
resuming transmission at the normal rate [CHA 2001].

This is undesirable for the following reasons:

- When there is no route available, there is no need to retransmit packets that will not reach
the destination. Packet retransmission wastes precious MH battery power and scarce bandwidth.

- In the period immediately following the restoration of the route, the throughput will be
unnecessarily low because of the slow start recovery mechanism, even though there
may be no congestion in the network [CHA 2001].

The works in [XUS 2000], [SUN 2001] and [JIC 2001] study the performance of TCP Reno, New
Reno, SACK and Vegas in mobile ad hoc networks.

Early research on cellular wireless systems showed that TCP suffers poor
performance in wireless networks because of packet losses due to high link bit error rates
[BAL 97]. Besides the link error issue, the high mobility of ad hoc networks also has a
significant impact on TCP performance.

In [SUN 2001], four TCP versions (Tahoe, Reno, New-Reno and SACK) were simulated in NS-
2 [NET 02] in a MANET environment using the IEEE 802.11 standard, even though they
were created for wired networks. Besides the IEEE 802.11 standard, they chose the Ad hoc
On-Demand Distance Vector routing protocol (AODV). The simulation consisted of 50
mobile ad hoc nodes moving around a 1500x300 m2 flat rectangular area, using four
different average speeds (2, 10, 20 and 30 m/s) with a pause time of zero.

In the figure 4.7 below four key performance metrics were evaluated:

- Throughput (only the packets for which the sender has received the ACKs are counted in the
throughput).

- Goodput (it is the ratio between the amount of data arrived at the destination and the
amount of data generated by the TCP source).

- End-to-End Delay (defined as the average delay incurred by a packet from the time it
is deposited into the sender's buffer until it is successfully acknowledged, which includes
all possible delay caused by buffering during route discovery latency, queuing at the
interface queue, retransmission delay in the MAC, propagation and transfer times).

- Transfer Time (the time taken by the destination to receive a fixed number of packets from the
sender).

FIGURE 4.7 - Comparison of performance of four TCPs under different mobility metric

Figure 4.7 shows that, for TCP New-Reno, Reno, SACK and Tahoe, when mobility
increases the goodput and throughput decrease while the delay and transfer time increase,
showing that new solutions to improve TCP over wireless are required. Several factors may
have affected these results. First, when the relative speed increases, route breakage and
re-formation become more frequent, which causes a larger link-down probability over longer
periods. This increases the number of retransmissions and the overhead, and reduces the
effective transfer time. Secondly, after the route is broken, if the routing protocol cannot recover and
discover a new route before the retransmission timeout occurs, TCP will trigger congestion
control (slow start, by setting CWND = 1) followed by the congestion avoidance mechanism at the sender.
This is because current TCP cannot distinguish packet loss due to route breakage from
congestion. In a stationary multi-hop network using an ad hoc routing algorithm and topology, besides
the TCP congestion control algorithm, the distance between source and destination also
affects the performance [SUN 2001].

5 Problems with TCP in Ad Hoc Networks


The TCP used in the Internet has been mainly designed assuming a relatively
reliable wire-line network. TCP assumes that any loss is due to congestion and
consequently invokes congestion control measures. This has been shown to yield poor
performance in the presence of wireless links as a large number of segment losses will
occur more often because of wireless channel errors or host mobility [RAT 98].

With the exception of the Fast Retransmit and Recovery algorithms, Transmission
Control Protocol (TCP) assumes congestion to be the only source of packet loss. When
wireless networks experience packet loss due to interference or any other error, congestion
control algorithms in TCP are triggered [MAR 98].

Unnecessary and incorrect usage of congestion control algorithms results in a high
performance penalty [MAR 98].

TCP performs at an acceptable efficiency over the traditional wired networks where
packet losses are usually caused by network congestion. However, in networks with
wireless links in addition to wired segments, this assumption would be insufficient, as the
high wireless bit error rate could become the dominant cause of packet loss and thus TCP
performs sub-optimally under these new conditions [WEN 2001a].

The main reason for this poor TCP performance is that TCP cannot distinguish
packet losses due to wireless errors from those due to congestion. Moreover, the TCP sender
cannot keep the size of its congestion window at the optimum level and always has to
retransmit packets after waiting for a timeout, which significantly degrades the end-to-end
delay performance of TCP [WEN 2001a].

Transport connections set up in wireless ad hoc networks have many problems such
as high bit error rates, frequent route changes, and partitions. If we run transmission control
protocol (TCP) over such connections, the throughput of the connection is extremely poor
because TCP treats lost or delayed acknowledgments as congestion [JIA 2001].

If we use standard TCP without any modification in mobile ad hoc networks, we
experience a serious drop in the throughput of the connection. There are several reasons for
such a drastic drop in TCP throughput.

5.1 Why does TCP throughput perform poorly in wireless networks?

The most important reasons are:

- Effect of a High BER.
- Effect of Route Re-computations.
- Effect of Network Partitions.
- Effect of Multi path Routing.

5.1.1 Effect of a High BER

Bit errors cause packets to get corrupted, which results in lost TCP data segments or
acknowledgments. When acknowledgments do not arrive at the TCP sender within a short
amount of time [the retransmit timeout (RTO)], the sender retransmits the segment,
exponentially backs off its retransmit timer for the next retransmission, reduces its
congestion control window threshold, and closes its congestion window to one segment.
Repeated errors will ensure that the congestion window at the sender remains small
resulting in low throughput [JIA 2001].

It is important to note that error correction may be used to combat high BER but it
will waste valuable wireless bandwidth when correction is not necessary [JIA 2001].

5.1.2 Effect of Route Re-computations

When an old route is no longer available (as in figure 5.1), the network layer at the
sender attempts to find a new route to the destination: in dynamic source routing (DSR)
this is done via route discovery messages, while in destination-sequenced distance-vector
(DSDV) routing, table exchanges are triggered that eventually result in a new route being found
[JIA 2001].

Discovering a new route may take significantly longer than the retransmit time out
at the sender. As a result, the TCP sender times out, retransmits a packet, and invokes
congestion control. Thus, when a new route is discovered, the throughput will continue to
be small for some time because TCP at the sender grows its congestion window using the
slow start and congestion avoidance algorithm [JIA 2001].

This is clearly undesirable behavior because the TCP connection will be very
inefficient. If we imagine a network in which route computations are done frequently (due
to high node mobility), the TCP connection will never get an opportunity to transmit at the
maximum negotiated rate (i.e., the congestion window will always be significantly smaller
than the advertised window size from the receiver) [JIA 2001].

In figure 5.1, the source node (s) needs to re-compute its route to the destination (d) for an
ongoing TCP connection because node “a” moved out of range of node “d”.

FIGURE 5.1 - Route change forced by mobility

5.1.3 Effect of Network Partitions

It is likely that the ad hoc network may periodically be partitioned for several
seconds at a time (see figure 5.2). If the sender and the receiver of a TCP connection lie in
different partitions, all the sender’s packets get dropped by the network resulting in the
sender invoking congestion control. If the partition lasts for a significant amount of time
(several times longer than the retransmission timeout, RTO), the situation gets even worse
because of a phenomenon called serial timeouts [JIA 2001].

A serial timeout is a condition wherein multiple consecutive retransmissions of the
same segment are transmitted to the receiver while it is disconnected from the sender. All
these retransmissions are, thus, lost. Since the retransmission timer at the sender is doubled
with each unsuccessful retransmission attempt (until it reaches 64 s), several consecutive
failures can lead to inactivity lasting one or two minutes even when the sender and receiver
get reconnected [JIA 2001].
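
The cost of a serial timeout follows directly from this doubling. A small sketch (Python, assuming an illustrative initial RTO of 1.5 s) shows how quickly the retransmission intervals grow:

    def backoff_schedule(initial_rto=1.5, attempts=8, cap=64.0):
        # Retransmission timer values used on successive failed retransmissions.
        rto, schedule = initial_rto, []
        for _ in range(attempts):
            schedule.append(rto)
            rto = min(rto * 2, cap)     # doubled after each failure, capped at 64 s
        return schedule

    print(backoff_schedule())
    # [1.5, 3.0, 6.0, 12.0, 24.0, 48.0, 64.0, 64.0]: after a few failures the sender
    # waits up to 64 s before trying again, even if the partition has already healed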

Figure 5.2 shows that the ad hoc network may be temporarily partitioned due to
node mobility. The source node (s) has an open TCP connection to the destination node (d). The
network gets partitioned at time T+5, causing “s” and “d” to lie in different partitions. The
network eventually reconnects 15 seconds later, allowing “s” and “d” to continue
communicating. Unfortunately, this change in node connectivity has disastrous
consequences for TCP's throughput, which can drop to a very low level [JIA 2001].

FIGURE 5.2 - Partitions formed and recombined by mobility

5.1.4 Effect of Multi path Routing

Some routing protocols, such as the temporally-ordered routing algorithm (TORA),
maintain multiple routes between source-destination pairs, the purpose of which is to
minimize the frequency of route re-computation. Unfortunately, this sometimes results in a
significant number of out-of-sequence packets arriving at the receiver. The effect of this is
that the receiver generates duplicate acknowledgments (ACKs) which cause the sender (on
receipt of three duplicate ACKs) to invoke congestion control [JIA 2001].

5.2 What Does Congestion Window Really Mean in Ad Hoc Networks?

The congestion window in TCP imposes an acceptable data rate for a particular
connection based on congestion information that is derived from timeout events as well as
from duplicate ACKs. In an ad hoc network, since routes change during the lifetime of a
connection, we lose the relationship between the congestion window size and the tolerable
data rate for the route. In other words, the congestion window (CWND) as computed for
one route may be too large for a newer route, resulting in network congestion when the
sender transmits at the full rate allowed by the old CWND [JIA 2001].

5.3 Other factors

5.3.1 Multi-hop problem factor in MANET

It is important to notice that the “half duplex” transmission implemented in IEEE
802.11 brings low performance on wireless links when we have multi-hop networks, as
shown in [HOL 2002]. There, all the results were based on a network configuration
consisting of TCP-Reno over IP on an IEEE 802.11 wireless network, with routing
provided by the Dynamic Source Routing (DSR) protocol.

Figure 5.3 presents the measured TCP throughput as a function of the
number of hops, averaged over ten runs. Observe that the throughput decreases rapidly
when the number of hops is increased from one, and then stabilizes once the number of
hops becomes large. The primary reason for this trend is the
characteristics of 802.11. Consider the simple four-hop network shown in figure 5.3. In IEEE 802.11, when
link 1–2 is active only link 4–5 may also be active. Link 2–3 cannot be active because node
2 cannot transmit and receive simultaneously, and link 3–4 may not be active because
communication by node 3 may interfere with node 2. Thus, throughput on an i-hop
802.11 network with link capacity C is bounded by C/i for 1 ≤ i ≤ 3, and by C/3 otherwise.
The decline in figure 5.3 for i ≥ 4 is due to contention caused by the backward flow of
TCP ACKs.
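
This spatial-reuse bound can be written down directly (Python sketch; the 2 Mbit/s link capacity is only an example value):

    def chain_throughput_bound(capacity, hops):
        # Upper bound on 802.11 chain throughput from the interference argument above.
        return capacity / hops if hops <= 3 else capacity / 3

    for i in range(1, 7):
        print(f"{i} hop(s): at most {chain_throughput_bound(2.0, i):.2f} Mbit/s")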

FIGURE 5.3 - TCP-Reno throughput over an 802.11 fixed, linear, multi-hop network

The same degradation of throughput performance was observed in [TOH 2002]
while using the associativity-based routing (ABR) protocol, in a real test bed with the same
structure as figure 3.2 but with four nodes (laptops) and three hops.

The experiment in [TOH 2002] also showed that the Route Discovery (RD) time of
the ABR protocol and the end-to-end delay increase with the number of hops, as shown in
tables 5.1 and 5.2.

Route Length     Avg. RD time
One hop          6.64 ms
Two hops         20.94 ms
Three hops       32.98 ms

TABLE 5.1 – Route Discovery (RD) time at a 1-second beaconing interval for different hop counts [TOH 2002].

Hop Count     Minimum packet size    1000-byte packets    Maximum packet size
One hop       3.25 ms                10.40 ms             340.00 ms
Two hops      6.20 ms                19.70 ms             26.20 ms
Three hops    38.30 ms               38.30 ms             38.30 ms

TABLE 5.2 – Average end-to-end delay at different packet sizes [TOH 2002].

In [XUS 2002], three problems are found with TCP Reno in a multi-hop MANET:
instability, unfairness and incompatibility. The authors used the NS-2 network simulator. Their
model includes the IEEE 802.11 MAC layer protocol, the DSR routing protocol, and a string
topology of eight nodes (0 to 7) forming 7 hops with 200 meters of separation between
neighbors; all nodes communicate with identical, half-duplex wireless radios that are
modeled after commercially available 802.11-based WaveLAN wireless radios with a
bandwidth of 2 Mbit/s and a nominal transmission radius of 250 m.

5.3.1.1 TCP instability problem in a wireless mobile multi-hop ad hoc network

In figure 5.4 we may see the instability problem when deploying TCP Reno in a
multi-hop mobile ad hoc wireless network; we may also observe that when the window size
is reduced from 32 to 4, the instability is also reduced. According to [XUS 2002], this is because, in
a carrier sense wireless network, the interfering range (and sensing range) is typically larger
than the range at which receivers are willing to accept a packet from the same transmitter.
WaveLAN wireless systems are engineered in such a way. According to the IEEE 802.11
protocol implementation in the NS-2 simulator software [NET 2000], which is modeled
after the WaveLAN wireless radio, the interfering range and the sensing range are more
than two times the size of the communication range. This is the reason why a collision
occurs at node 2 when node 1 and node 4 are sending at the same time, even though node 4
cannot directly communicate with node 2. Node 2 is within the interfering range of node 4.
This is a typical “hidden node problem” in wireless packet networks. Node 4 is the hidden node
in this case: it is within the interference range of the intended destination (node 2) but out
of the sensing range of the sender (node 1). Since the nominal communication range is 250
m, which is smaller than the interfering range, node 1 cannot hear the Clear to Send (CTS)
packet from node 4. Thus, the virtual carrier sense mechanism cannot function in this case
either. Now we can explain why node 4 keeps sending even if node 2 successfully receives
the Request to Send (RTS) from node 1. Note that node 2 can sense node 4. This is a typical
“exposed node problem” in wireless packet networks [XUS 2002].

FIGURE 5.4 - Instability problem in the four hop TCP Reno connection

Now it is clear that the exposed-station problem and collisions are preventing the
intermediate node from reaching its next hop. The random back-off scheme used in the
MAC layer makes this worse, since it always favors the latest successful node. Since bigger
data packet sizes and sending back-to-back packets both increase the chance of the
intermediate nodes failing to obtain the channel, the node has to back off a random time
and try again. This increases the delay of the ACKs if it finally succeeds. If it still fails after
several tries, a link breakage will be declared; the result is the report of a route failure
[XUS 2002].

5.3.1.2 TCP unfairness problem in a wireless mobile multi-hop ad hoc network

To demonstrate unfairness, S. Xu and T. Saadawi [XUS 2002] set up two TCP connections in
the network shown in figure 5.5. The first one starts at 10.0 s and the second one begins 20.0 s
later; they call them the “first session” and the “second session”, and the whole experiment
lasts 130.0 s. The first session is from node 6 to node 4; the second session is from node 2 to
node 3. The first session is a two-hop TCP connection with a throughput of around 450 kbps
after starting at 10.0 s. However, it is completely forced down after the second session starts
at 30.0 s. For most of its lifetime after 30.0 s, the throughput of the first session is zero; there
is not even a chance for it to restart. The aggregate throughput of these two TCP connections
belongs completely to the second session, around 920 kbps in the 30.0–130.0 s interval, as
shown in figure 5.5. This is serious unfairness: the losing session is completely shut down
even though it starts much earlier. Even if the window size, which is equal to 4 in the
experiment, is changed to 1, the results are much the same.

FIGURE 5.5 - Throughput of two TCP connections with different sender and receiver

5.3.1.3 TCP incompatibility problem in a wireless mobile multi-hop ad hoc network

The authors showed that with the IEEE 802.11 MAC layer, two simultaneous TCP
flows cannot coexist in the network at the same time: once one session develops, the
other one is shut down, and the overturn can happen at any random time.

In figure 5.6.a we can see two TCP connections with two hops each. They cannot
stay alive at the same time and, in this experiment, the overturn happens three times.
The turnovers are very random; several trials with different simulation seeds were run and
the authors could not predict when a turnover would happen, which is why they called it the
incompatibility problem. In figure 5.6.a the TCP sources are neighboring nodes (nodes 4 and 3),
but in figure 5.6.b the TCP sources are not direct neighbors (nodes 1 and 6): they are five hops
apart while the TCP receivers are neighbors, and three turnovers occur.

According to [XUS 2002], even if we do not use TCP, the three problems previously
mentioned still exist in the MAC layer when IEEE 802.11 is used in multi-hop networks;
TCP traffic merely makes the problems in the MAC evident. In fact, these problems always
appear when the traffic load becomes large enough, even if the traffic is not from TCP.

FIGURE 5.6 - Throughput of two TCP connections with the same hop number

5.3.2 Beaconing interval factor in MANET


C. K. Toh et al. [TOH 2002] changed the beacon intervals (see figure 5.7) of all
the ad hoc mobile computers concerned in their real test bed, revealing that periodic
beaconing by ad hoc mobile hosts has very little impact on the route discovery of the
associativity-based routing (ABR) protocol, on the end-to-end delay and on the
communication throughput, with the exception of very low beaconing intervals (below
100 ms), because of the transmission delay caused by channel contention over the
wireless medium. Periodic high-frequency beaconing can lengthen RD time due to the
presence of congestion in the transmission paths. High-frequency beaconing is, therefore,
not recommended for practical wireless ad hoc networks with single-band radios.

FIGURE 5.7 - RD time vs. beaconing interval



5.3.3 Packet sizes factor in MANET

Varying the transmission packet size has a direct influence on the end-to-end delay of
ad hoc wireless routes: the larger the packet size, the longer the data transmission,
propagation and processing times. The use of a large packet size can increase the
performance of ad hoc networks in terms of throughput. However, at very large packet sizes
there is a high probability that a packet is corrupted. This behavior is likely to occur in a
wireless environment due to its high bit-error rate compared with a wired medium.
Moreover, contention can be a problem when the traffic load is high [TOH 2002].

It is also important to consider the power supply because mobile computers are
battery-operated and as a result, the power resource is limited. Therefore, it will be helpful
if the transmitting and receiving time in a MANET is minimized.

In a MANET, the dynamic network topology changes rapidly due to the movement
of the wireless stations at different speeds, which is another point to take into consideration.

In wireless networks, the bandwidth is limited and varies depending on the
location and on the number of users sharing the channels.

The obstacles in the environment that may create shadowing also have to be
considered.

6 Proposed solutions in TCP over wireless


With the aim of giving a general overview, we present here not only solutions applied
exclusively to mobile ad hoc networks but also solutions applied to cellular networks, because
both kinds of network present similar problems of mobility and a noisy wireless environment.

There are several excellent papers comparing the different TCP solutions. We start this
chapter from Balakrishnan's proposal [BAL 97] and the works of George Xylomenos et al.
[XYL 2001] and of Wen-Tsuen Chen and Jyh-Shin Lee [WEN 2000].

In this work, we present some of the more important proposals to solve the wireless
problems. As a natural evolution of [BAL 97]'s schemes, we suggest adding a New Layer
Schemes category, based on the proposals of [JIA 2001] and [CHE 2001], and also an
Emergent Schemes category, which gathers the proposals that did not fit well into
[BAL 97]'s schemes or appeared later with original and interesting characteristics. Our
proposed classification is as follows:

- Link Layer Schemes
- Split-Connection Schemes
- End-to-End Schemes
- New Layer Schemes
- Emergent Schemes

6.1 Link Layer Schemes

Unlike TCP at the transport layer, there is no de facto standard for link-layer
protocols. Existing link-layer protocols choose from techniques such as stop-and-wait,
go-back-N, selective repeat, and forward error correction to provide reliability [BAL 97].

There have been several proposals for reliable link-layer protocols. The two main
classes of techniques employed by these protocols are: error correction, using techniques
such as forward error correction (FEC), and retransmission of lost packets in response to
automatic repeat request (ARQ) messages [BAL 97].

The main advantage of employing a link-layer protocol for loss recovery is that it
fits naturally into the layered structure of network protocols. The link-layer protocol
operates independently of higher layer protocols, and does not maintain any per-connection
state. The main concern about link-layer protocols is the possibility of an adverse effect on
certain transport-layer protocols such as TCP [BAL 97].

Now we present some Link layer solutions:

6.1.1 SNOOP Protocol

This link-layer protocol takes advantage of the knowledge of the higher layer
transport protocol (TCP) and is deployed in cellular wireless networks. The snoop protocol
introduces a module, called the snoop agent, at the base station. The agent monitors every
packet that passes through the TCP connection in both directions, and maintains a cache of
TCP segments sent across the link that have not yet been acknowledged by the receiver. A
packet loss is detected by the arrival of a small number of duplicate acknowledgments from
the receiver or by a local timeout. The snoop agent retransmits the lost packet if it has it
cached, and suppresses the duplicate acknowledgments.

This protocol modifies network-layer software mainly at a base station and
preserves end-to-end TCP semantics [BAL 95].

The particular goal here, improving the end-to-end performance on networks with
wireless links without changing existing TCP implementations at hosts in the fixed network
and without recompiling or re-linking existing applications, is achieved by a simple set of
modifications to the network-layer (IP) software at the base station. These modifications
consist mainly of caching packets and performing local retransmissions across the wireless
link by monitoring the acknowledgments to TCP packets generated by the receiver [BAL
95].

We first describe the protocol for the transfer of data from a fixed host
(FH) to a mobile host (MH) through a base station (BS). The base station routing code is
modified by adding a module, called the snoop, that monitors every packet that passes
through the connection in either direction. No transport-layer code runs at the base station.
The snoop module maintains a cache of TCP packets sent from the FH that have not yet
been acknowledged by the MH. This is easy to do since TCP has a cumulative
acknowledgment policy for received packets. When a new packet arrives from the FH,
snoop adds it to its cache and passes the packet on to the routing code which performs the
normal routing functions. The snoop module also keeps track of all the acknowledgments
sent from the mobile host. When a packet loss is detected (either by the arrival of a
duplicate acknowledgment or by a local timeout), it retransmits the lost packet to the MH if
it has the packet cached. Thus, the base station (snoop) hides the packet loss from the FH
by not propagating duplicate acknowledgments, thereby preventing unnecessary congestion
control mechanism invocations [BAL 95].

The snoop module has two linked procedures, snoop_data() and snoop_ack(). The
snoop_data() procedure caches packets intended for the MH, while snoop_ack() processes
acknowledgments (ACKs) coming from the MH and drives local retransmissions from the
base station to the mobile host. The flowcharts summarizing the algorithms for
snoop_data() and snoop_ack() are shown in figure 6.1.

FIGURE 6.1 - Flowchart for snoop Data/Ack [BAL 95]

The Snoop_data() processes packets from the fixed host. TCP implements a sliding
window scheme to transmit packets based on its congestion window (estimated from local
computations at the sender) and the flow control window (advertised by the receiver). TCP
is a byte-stream protocol and each byte of data has an associated sequence number. A TCP
packet (or segment) is uniquely identified by the sequence number of its first byte of data
and its size. At the BS, snoop keeps track of the last sequence number seen for the connection.
One of several kinds of packets can arrive at the BS from the FH, and snoop_data()
processes them in different ways [BAL 95]:

1. A new packet in the normal TCP sequence: This is the common case, when a new packet
in the normal increasing sequence arrives at the BS. In this case the packet is added to the
snoop cache and forwarded on to the MH. We do not perform any extra copying of data
while doing this. We also place a timestamp on one packet per transmitted window in order
to estimate the round-trip time of the wireless link.

2. An out-of-sequence packet that has been cached earlier: This is a less common case, but
it happens when dropped packets cause timeouts at the sender. It could also happen when a
stream of data following a TCP sender fast retransmission arrives at the base station.
Different actions are taken depending on whether this packet is greater or less than the last
acknowledged packet seen so far. If the sequence number is greater than the last
acknowledgment seen, it is very likely that this packet did not reach the MH earlier, and so
it is forwarded on. If, on the other hand, the sequence number is less than the last
acknowledgment, the MH has already received this packet. At this point, one possibility
would be to discard this packet and continue, but this is not always the best thing to do. The
reason for this is that the original ACK with the same sequence number could have been
lost due to congestion while going back to the FH. In order to facilitate the sender getting to
the current state of the connection as fast as possible, a TCP acknowledgment
corresponding to the last ACK seen at the BS is generated by the snoop module (with the
source address and port corresponding to the MH) and sent to the FH.

3. An out-of-sequence packet that has not been cached earlier: In this case, the packet was
either lost earlier due to congestion on the wired network or has been delivered out of order
by the network. The former is more likely, especially if the sequence number of the packet
(i.e., the sequence number of its first data byte) is more than one or two packets away from
the last one seen so far by the snoop module. This packet is forwarded to the MH, and
marked as having been retransmitted by the sender. Snoop_ack() uses this information to
process acknowledgments (for this packet) from the MH.

The Snoop_ack() monitors and processes the acknowledgments (ACKs) sent back
by the MH and performs various operations depending on the type and number of
acknowledgments it receives. These ACKs fall into one of three categories [BAL 95] :

1. A new ACK: This is the common case (when the connection is fairly error-free and there
is little user movement), and signifies an increase in the packet sequence received at the
MH. This acknowledgment initiates the cleaning of the snoop cache and all acknowledged
packets are freed. The round-trip time estimate for the wireless link is also updated at this
time. This estimate is not done for every packet, but only for one packet in each window of
transmission, and only if no retransmissions happened in that window. The last condition is
needed because it is impossible in general to determine if the arrival of an acknowledgment
for a retransmitted packet was for the original packet or for the retransmission. Finally, the
acknowledgment is forwarded to the FH.

2. A spurious ACK: This is an acknowledgment less than the last acknowledgment seen by
the snoop module and is a situation that rarely happens. It is discarded and the protocol
continues.

3. A duplicate ACK (DUPACK): This ACK is identical to a previously received one; in
particular, it is the same as the last ACK seen so far. In this case, the MH has not received
the next in-sequence packet indicated by the DUPACK. However, some subsequent packets in the
sequence have been received, since the MH generates a DUPACK for each TCP segment
received out of sequence. One of several actions is taken depending on the type of duplicate
acknowledgment and the current state of snoop [BAL 95]:

- The first case occurs when we receive a DUPACK for a packet that is either not in the
snoop cache or has been marked as having been retransmitted by the sender. If the
packet is not in the cache, it needs to be resent from the FH, perhaps after invoking the
necessary congestion control mechanisms at the sender. If the packet was marked as a
sender-retransmitted packet, the DUPACK needs to be routed to the FH because the
TCP stack there maintains state based on the number of duplicate acknowledgments it
receives when it retransmits a packet. Therefore, both these situations require the
DUPACK to be routed to the FH.

- The second case occurs when snoop gets a DUPACK that it doesn’t expect to receive
for the packet. This typically happens when the first DUPACK arrives for the packet,
after a subsequent packet in the stream reaches the MH. The arrival of each successive
packet in the window causes a DUPACK to be generated for the lost packet. In order to
make the number of such DUPACKs as small as possible, the lost packet is
retransmitted as soon as the loss is detected, and at a higher priority than normal
packets. This is done by maintaining two queues at the link layer for high and normal
priority packets. In addition, snoop also estimates the maximum number of duplicate
acknowledgments that can arrive for this packet. This is done by counting the number of
packets that were transmitted after the lost packet prior to its retransmission.

- The third case occurs when an “expected” DUPACK arrives, based on the above
maximum estimate. The missing packet would have already been retransmitted when the
first DUPACK arrived (and the estimate was zero), so this acknowledgment is
discarded. In practice, the retransmitted packet reaches the MH before most of the later
packets do and the BS sees an increase in the ACK sequence before all the expected
DUPACKs arrive.

Snoop keeps track of the number of local retransmissions for a packet, but resets
this number to zero if the packet arrives again from the sender following a timeout or a fast
retransmission. In addition to retransmitting packets depending on the number and type of
acknowledgments, the snoop protocol also performs retransmissions driven by timeouts.
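
A much-simplified sketch of the two procedures is given below (Python). It covers only the common cases described above (caching new data, cleaning the cache and forwarding new ACKs, and retransmitting locally while suppressing duplicate ACKs); the special cases such as sender retransmissions, expected-DUPACK counting and timestamping are omitted:

    class SnoopAgent:
        # Runs at the base station; caches unacknowledged FH-to-MH segments.

        def __init__(self, forward_to_mh, forward_to_fh):
            self.cache = {}                   # first sequence number -> packet
            self.last_ack = 0
            self.forward_to_mh = forward_to_mh
            self.forward_to_fh = forward_to_fh

        def snoop_data(self, seq, packet):
            # Common case: cache the new packet and forward it to the mobile host.
            self.cache.setdefault(seq, packet)
            self.forward_to_mh(packet)

        def snoop_ack(self, ack):
            if ack > self.last_ack:
                # New ACK: free acknowledged packets and propagate the ACK to the FH.
                for seq in [s for s in self.cache if s < ack]:
                    del self.cache[seq]
                self.last_ack = ack
                self.forward_to_fh(ack)
            elif ack == self.last_ack and ack in self.cache:
                # Duplicate ACK for a cached packet: retransmit it locally and
                # suppress the DUPACK so the fixed host never sees the loss.
                self.forward_to_mh(self.cache[ack])
            # older (spurious) ACKs are simply discarded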

Their design involves a slight modification to the TCP code at the mobile host. At
the base station, they keep track of the packets that were lost in any transmitted window,
and generate negative acknowledgments (NACKs) for those packets back to the mobile.
This is especially useful if several packets are lost in a single transmission window, a
situation that happens often under high interference or in fades where the strength and
quality of the signal are low. These NACKs are sent either when a threshold number of
packets (from a single window) have reached the base station or when a certain amount of
time has expired without any new packets from the mobile. Encoding these NACKs as a bit
vector can ensure that the relative fraction of the sparse wireless bandwidth consumed by
NACKs is relatively low. Their implementation of NACKs is based on using the Selective
Acknowledgment (SACK) option in TCP. Selective acknowledgments, currently
unsupported in most TCP implementations, were introduced to improve TCP performance
for connections on “long fat networks”, or LFNs. These are networks where the capacity of
the network (the product of bandwidth and round-trip time) is large. SACKs were proposed
to handle multiple dropped packets in a window, but the current TCP specification does not
include this feature. The basic idea here is that in addition to the normal cumulative ACKs
the receiver can inform the sender which specific packets it did not receive [BAL 95].

The snoop protocol uses SACKs to cause the mobile host to retransmit missing packets
quickly (relative to the round-trip time of the connection). The only change
required at the mobile host will be to enable SACK processing. No changes of any sort are
required in any of the fixed hosts. They have implemented the ability to generate SACKs at
the base station and process them at the mobile hosts to retransmit lost packets and are
currently measuring the performance of transfers from the mobile host.

Their experiments for moderate to high error rates are very encouraging. For bit
error rates greater than 5x10^-7, they show an increase in throughput by a factor of up to 20
compared to regular TCP (Reno), depending on the bit error rate. For error rates lower than
this, there is little difference between the performance of snoop and regular TCP, showing
that the overhead caused by snoop is negligible [BAL 95].

They have also found that their protocol is significantly more robust at dealing with
multiple packet losses in a single window compared to regular TCP.

The main advantage of this approach is that it suppresses duplicate
acknowledgments for TCP segments lost and retransmitted locally, thereby avoiding
unnecessary fast retransmissions and congestion control invocations by the sender. Like
other link-layer solutions, the snoop approach could also suffer from not being able to
shield the sender from wireless losses [BAL 97], [WEN 2000].

The main drawbacks of the snoop agent are that it does not consider packet loss and
delay due to handoff, and that interference of the link-layer retransmissions with the
transport-layer retransmissions is still present [WEN 2000].

6.1.2 TULIP

The Transport Unaware Link Improvement Protocol (TULIP) improves the
performance of TCP over noisy wireless links without competing with, or modifying, the
transport or network layer protocols. TULIP allows TCP to operate efficiently over wireless
networks, with no changes to the hosts or to TCP's semantics, and without requiring proxies
between the TCP sender and receiver.

TULIP is exceptionally robust when bit error rates are high; it maintains high
goodput, i.e., only those packets which are in fact dropped on the wireless link are
retransmitted and then only when necessary.
69

TULIP provides reliable service for packets carrying TCP data traffic, and
unreliable service for other packet types, such as UDP traffic (e.g., routing table updates
and DNS packets) and TCP acknowledgments (ACKs).

TULIP eliminates the need for a transport-layer proxy, which must keep per-session
state to actively monitor the TCP packets and suppress any duplicate ACKs it encounters.

An important feature of TULIP is its ability to maintain local recovery of all lost
packets at the wireless link in order to prevent the unnecessary and delayed retransmission
of packets over the entire path and a subsequent reduction in the TCP congestion window.
Flow control across the link is maintained by a sliding window, and the sending side’s link
layer accomplishes automatic retransmission of lost packets. Lost packets are detected at
the sender via a bit vector returned by the receiver as a part of every ACK packet. This
allows for quick and efficient recovery of packets over the link and helps to keep delay and
delay variance low.
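
A minimal sketch of this idea, using hypothetical names rather than the actual TULIP code, is shown below: the sending link layer compares the bit vector returned in an ACK with the packets it still holds in its retransmission window, and locally resends only those marked as missing.

# Minimal sketch (hypothetical names, not the TULIP implementation): the sending
# link layer uses the bit vector carried in each link-layer ACK to decide which
# packets in its window must be retransmitted locally.

def packets_to_retransmit(window_base, window, ack_bitvector):
    """window maps sequence number -> packet payload still held by the sender.
    ack_bitvector[i] is True if packet window_base + i was received."""
    missing = []
    for i, received in enumerate(ack_bitvector):
        seq = window_base + i
        if not received and seq in window:
            missing.append(seq)
    return missing

# Example: sender holds packets 10..13; the receiver reports 11 as missing.
window = {10: b"a", 11: b"b", 12: b"c", 13: b"d"}
print(packets_to_retransmit(10, window, [True, False, True, True]))  # [11]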

TULIP is designed for efficient operation over the half-duplex radio channels
available in today’s commercial radios by strobing packets onto the link in a turn-taking
manner.

TULIP's timers rely on a maximum propagation delay over the link, rather than
performing a round-trip time estimate of the channel delay.

The authors introduce a new feature, MAC acceleration, in which TULIP interacts
with the MAC protocol to accelerate the return of link-layer ACKs (which are most
often piggybacked with returning TCP ACKs) without renegotiating access to the channel.
This feature is applicable to collision avoidance MAC protocols (e.g., IEEE 802.11) to
improve throughput.

TULIP causes no modifications of the network or transport layer software, and the
link layer is not required to know any details regarding TCP or the algorithm it uses.

TULIP maintains no TCP state and makes no decisions on a TCP-session basis, but
rather solely on a per-destination basis. This approach greatly reduces the overhead of
maintaining state information when multiple TCP sessions are active for a given
destination (as is common with web traffic). From the transport layer's point of view, the
path to the destination through a lossy wireless link simply appears to be a slow link
without losses, and TCP simply adjusts accordingly.

Precisely because TULIP keeps no TCP state, and therefore does not need to look into
the TCP packet header, it works correctly with any current or future version of TCP
(e.g., TCP-SACK), even if TCP headers are encrypted. TULIP works with both IPv4 and
IPv6; in the latter case, TCP data packets can be identified as requiring reliable service
from the NextHeader field in the IPv6 header. In addition, because this approach does not
restrict the network to the presence of a base station, it can be applied to multi-hop wireless
networks. Furthermore, by controlling the MAC layer, TULIP conserves wireless
bandwidth by piggybacking TCP ACKs with link-layer ACKs and returning them
immediately across the channel through MAC acceleration.

Finally, the performance of TULIP is compared against the performance of the
Snoop protocol (a TCP-aware approach) and TCP without link-level retransmission
support. The results of the simulation experiments show that TULIP achieves up to three
times higher throughput, lower packet delay, and smaller delay variance. More details
about TULIP can be found in [PAR 99].

There are other solutions in this category, which we summarize briefly below:

6.1.3 ADAPTIVE Link Layer

In [CHI 2001] the performance of TCP is studied when the last hop of the
end-to-end connection is wireless and link-layer retransmissions are used to shield the TCP
sender from losses over the wireless channel; that is, losses over the wireless link are
hidden from TCP in spite of the time-varying transmission quality.

The authors of [CHI 2001] also focus on link-layer retransmission mechanisms and
determine their parameter settings in such a way that a reliable communication link is
provided. In particular, they choose a significant QoS metric at the transport layer and fix
its target value, adapting the maximum number of link-layer transmissions to the
characteristics of the wireless link so that the desired QoS at the transport layer is provided.
Finally, this approach provides a reliable wireless link in spite of the heterogeneous
environments mobile terminals may incur, and it enables an efficient use of TCP over
wireless connections without any modification to the transport protocol [CHI 2001].

6.1.4 ELN-ACK Protocol

The Explicit Loss Notification with Acknowledgment (ELN-ACK) protocol is based on the
Snoop protocol and improves transport performance from the fixed host to the mobile host.
The key idea is to let the TCP sender at the fixed host know clearly the reason for a packet
loss in a wireless network, that is, whether it is a congestion-related loss or a
wireless-related loss. More details about ELN-ACK can be found in [WEN 2001] and [WEN
2001a].

6.2 Split-Connection Schemes

The Split-Connection schemes use an intermediate host to divide a TCP connection
into two separate TCP connections. The implementation avoids data copying at the
intermediate host by passing pointers to the same buffer between the two TCP
connections. Split-connection protocols split each TCP connection between a sender and a
receiver into two separate connections at the base station: one TCP connection between the
sender and the base station, and the other between the base station and the receiver. Over
the wireless hop, a specialized protocol tuned to the wireless environment may be used
[BAL 97].

6.2.1 Indirect TCP

Cellular networks deploy this protocol. Even though cellular networks and ad
hoc networks based on the IEEE 802.11 standard are different technologies, they share the
same problems of mobility and the unreliable nature of the wireless link, which is why its
study is of interest in this TI.

Indirect TCP (I-TCP) is a split-connection solution that uses standard TCP for its
connection over the wireless link. Like other split-connection proposals, it attempts to
separate loss recovery over the wireless link from that across the wire line network, thereby
shielding the original TCP sender from the wireless link. I-TCP utilizes the resources of
Mobility Support Routers (MSRs) to provide transport layer communications between
mobile hosts and hosts on the fixed networks. With I-TCP, the problems related to mobility
and unreliability of wireless link are handled entirely within the wireless link. I-TCP is
particularly suited for applications that are throughput intensive [BAK 95].

I-TCP is a reliable stream-oriented transport layer protocol for mobile hosts. I-TCP
is fully compatible with TCP/IP on the fixed network and is built around the following
simple concepts [BAK 97]:

1) A transport layer connection between a Mobile Host (MH) and a Fixed Host (FH) is
established as two separate connections, one over the wireless medium and another over the
fixed network with the current mobility support router (MSR) being the intermediate point.

2) If the MH switches cells during the lifetime of an I-TCP connection, the center point of
the connection moves to the new MSR.

3) The FH is completely unaware of the indirection and is not affected even when the MH
hands off, i.e., when the intermediate point of the I-TCP connection moves from one MSR
to another.

When a mobile host (MH) wishes to communicate with some fixed host (FH) using
I-TCP, a request is sent to the current MSR (which is also attached to the fixed network) to
open a TCP connection with the FH on behalf of the MH. The MH communicates with its
MSR on a separate connection using a variation of TCP that is tuned for wireless links and
is aware of mobility. See figure 6.2 below.

[Figure omitted: the indirect transport layer - a mobile host (MH) communicates with a fixed host over the fixed network through transport-layer intermediaries at MSR-1 and MSR-2, with handoffs between cells; existing transport protocols over IP run on the fixed side and a wireless protocol over Mobile-IP on the wireless side] [BAK 97]

FIGURE 6.2 - Indirect Transport Layer

Figure 6.2 above shows a mobile host (MH), which first established a connection
with a fixed host (FH) through MSR-1, moving to another cell under MSR-2. When the MH
requests an I-TCP connection with the FH while located in the cell of MSR-1, MSR-1
establishes a socket with the MH address and MH port number to handle the connection
with the fixed host. It also opens another socket with its own address and some suitable port
number for the wireless side of the I-TCP connection to communicate with the MH.

If the MH switches cells (hand off or hand over), the state associated with two
sockets of the I-TCP connection at MSR-1 is handed over to the new MSR (MSR-2). MSR-
2 then creates two sockets corresponding to the I-TCP connection with the same endpoint
parameters that the sockets at MSR-1 had associated with them. Since the connection
endpoints for both wireless and the fixed parts of the I-TCP connection do not change after
a move, there is no need to re-establish the connection at the new MSR. This also ensures
that the indirection in the transport layer connection is completely hidden from the FH.
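
The handoff described above can be pictured as a transfer of per-connection socket state between MSRs. The sketch below uses illustrative data structures only (not the I-TCP implementation) to show why the endpoints never change and therefore no re-establishment is visible to the FH or the MH.

# Minimal sketch (hypothetical structures, not the I-TCP code): on a handoff the
# old MSR ships the state of the two I-TCP sockets to the new MSR, which recreates
# them with the SAME endpoint parameters.

from dataclasses import dataclass

@dataclass
class ITCPState:
    wired_endpoints: tuple      # (MSR address/port, FH address/port)
    wireless_endpoints: tuple   # (MH address/port, MSR address/port)
    send_buffer: bytes          # data received from the FH, not yet ACKed by the MH
    recv_buffer: bytes          # data received from the MH, not yet sent to the FH

def handoff(old_msr_state: ITCPState) -> ITCPState:
    # The new MSR reuses the same endpoint parameters, so no connection
    # re-establishment is visible to the FH or the MH.
    return ITCPState(
        wired_endpoints=old_msr_state.wired_endpoints,
        wireless_endpoints=old_msr_state.wireless_endpoints,
        send_buffer=old_msr_state.send_buffer,
        recv_buffer=old_msr_state.recv_buffer,
    )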

We present some advantages of I-TCP according to Bakre and Badrinath [BAK 97]:

1) It separates the flow control and congestion control functionality on the wireless link
from that on the fixed network. This is desirable because of the vastly different error and
bandwidth characteristics of the two kinds of links.

2) A separate transport protocol for the wireless link can support notification of events such
as disconnections, moves, and other features of the wireless link such as the available
bandwidth, etc., to the higher layers, which can be used by link-aware and location-aware
mobile applications.

3) Indirection at the MSR allows faster reaction to mobility and wireless related events
compared to a scheme in which the remote communicating host tries to react to such
events.

4) An indirect transport protocol can provide some measure of reliability over the wireless
link for those applications that prefer to use unreliable transport to the fixed network.

5) Indirect transport protocols provide backward compatibility with the existing wired
network protocols thus obviating modifications at fixed hosts for accommodating mobile
hosts.
6) Indirection allows an MSR to manage much of the communication overhead for a mobile
host. Thus, a mobile host (e.g., a small palmtop) that only runs a very simple wireless
protocol to communicate with the MSR can still access fixed network services such as
WWW which may otherwise require a full TCP/IP stack running on the mobile.

7) Indirect transport protocols allow the use of different MTUs over the wired and the
wireless part of the connection. Since the wireless links have lower bandwidth and higher
error rate, the optimal MTU for the wireless medium may be smaller than the smallest
MTU supported by the wired network.

Some important drawbacks of this approach are:

1. I-TCP violates the semantics of TCP [PAR 99]. I-TCP acknowledgments and semantics
are not end-to-end. Since the TCP connection is explicitly split into two distinct ones,
acknowledgments of TCP packets can arrive at the sender even before the packet actually
reaches the intended recipient. I-TCP derives its good performance from this splitting of
connections. However, as argued in [BAL 95], there is no need to sacrifice the semantics of
acknowledgments in order to achieve good performance.

2. Applications running on the mobile host have to be re-linked with the I-TCP library and
need to use special I-TCP socket system calls in the current implementation [BAL 95].

3. Every packet needs to go through the TCP protocol stack and incur the associated
overhead four times (once at the sender, twice at the base station, and once at the receiver).
This also involves copying data at the base station to move the packet from the incoming
TCP connection to the outgoing one. This overhead is lessened if a more lightweight,
wireless specific reliable protocol is used on the last link [BAL 95].

6.2.2 WTCP

The efficient transmission control scheme (WTCP) is a new reliable transport-level
scheme that requires the base station to buffer data packets destined for the mobile host
and to retransmit lost packets.

WTCP maintains end-to-end TCP semantics and requires no modification to the
TCP code running in the fixed host or the mobile host. Also, WTCP effectively shields
wireless link errors and attempts to hide the time spent by the base station on local recovery,
so that TCP's round trip time estimation at the source is not affected. This is critical
since otherwise the ability of the source to detect congestion in the fixed wire line network
will be hindered.

They propose an efficient mechanism, where the base station is involved in the TCP
connection. The conceptual view of the transport connection is shown in figure 6.3.

[Figure omitted: protocol stacks of the transport connection - fixed host: TCP over IP; base station: TCP and WTCP over M-IP (Mobile IP); mobile host: TCP over IP] [RAT 98]

FIGURE 6.3 - Transport connection

WTCP receives data segment from source: The network layer protocol (M-IP)
running in the base station detects any TCP segment that arrives for a mobile host and
sends it to the WTCP input buffer. If this segment is the next segment expected from the
fixed host, it is stored in the WTCP buffer along with its arrival time, and the receive array
is updated. The receive array maintains the sequence numbers of the segments received by
WTCP. The sequence number of the next segment expected from the fixed host is increased
by the number of bytes received. If the newly arriving segment has a larger sequence number
than what is expected, the segment is buffered, the arrival time is recorded and the receive
array is updated, but the sequence number of the next packet expected is not changed. If the
sequence number of the segment is smaller than what is expected, the segment is dropped.
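
A minimal sketch of this buffering rule, with hypothetical names rather than the WTCP source code, could look as follows:

# Minimal sketch (hypothetical names, not the WTCP code) of the buffering rule
# described above: in-order segments advance the expected sequence number,
# out-of-order segments are only buffered, and old segments are dropped.

import time

class WTCPBuffer:
    def __init__(self, initial_expected_seq):
        self.expected = initial_expected_seq   # next byte expected from the FH
        self.buffer = {}                       # seq -> (segment, arrival time)
        self.receive_array = set()             # sequence numbers seen so far

    def on_segment_from_fixed_host(self, seq, segment):
        if seq < self.expected:
            return "dropped"                   # duplicate of old data
        self.buffer[seq] = (segment, time.time())
        self.receive_array.add(seq)
        if seq == self.expected:
            self.expected += len(segment)      # advance by the bytes received
            return "in-order"
        return "out-of-order"                  # buffered, expected unchanged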

WTCP sends data segments to mobile host: On the wireless link, WTCP tries to
send the segments that are stored in its buffer. WTCP independently performs flow and
error control for the wireless connection. It also maintains state information for the wireless
connection, such as sequence number of last acknowledgment received from mobile host,
and the sequence number of the last segment that was sent to the mobile host. From its
buffer, WTCP transmits all segments that fall within the wireless link transmission
window. Each time a segment is sent to the mobile host (including a retransmission), the
timestamp of the segment is incremented by the amount of time that segment spent in the
WTCP buffer (residence time). When a segment is sent to the mobile host, the base station
schedules a new timeout if there is no other timeout pending.

WTCP receives acknowledgement from mobile host: The base station
acknowledges a segment to the fixed host only after the mobile host actually receives and
acknowledges that segment. Hence, TCP end-to-end semantics is maintained throughout
the lifetime of the connection. Also, the round trip time seen by the source is the actual
round trip time taken by a segment to reach and return from the mobile host, i.e. it does not
include residence time at the base station needed for local recovery. Based on duplicate
acknowledgment or timeout, the base station locally retransmits lost segments. In case of
timeout, the transmission window for the wireless connection is reduced to just one
segment assuming a typical burst loss on the wireless link is going to follow. If only one
segment is lost and the following segment(s) pass(es) through, the loss would likely be
indicated by a duplicate acknowledgment (see below). In case of time-out, by quickly
reducing the transmission window, potentially wasteful wireless transmission is avoided
and the interference with other channels is reduced. The opening of the transmission
window in regular TCP is based on whether the connection is in the slow start or
congestion avoidance phase. In the slow start phase, each time an acknowledgment is
received, the congestion window (CWND) is incremented by one segment size. In the
congestion avoidance phase, each time an acknowledgment is received, the congestion
window is incremented by a factor of 1/CWND. WTCP however opens the wireless
transmission window completely assuming that an acknowledgement indicates that the
wireless link is in good state. That is, when an acknowledgment is received, the
transmission window size is set to the window size advertised by the receiver [RAT 98].
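
The contrast between WTCP's window opening and regular TCP's gradual growth can be summarized in the following sketch (illustrative only; the function names are not from [RAT 98]):

# Minimal sketch contrasting the window growth rules described above. Standard
# TCP grows cwnd gradually; WTCP reopens its wireless transmission window fully
# on any (non-duplicate) acknowledgment.

def tcp_window_on_ack(cwnd, ssthresh, segment_size):
    """Regular TCP: slow start below ssthresh, congestion avoidance above it."""
    if cwnd < ssthresh:
        return cwnd + segment_size                       # slow start: +1 MSS per ACK
    return cwnd + segment_size * segment_size / cwnd     # cong. avoidance: +MSS/CWND

def wtcp_window_on_ack(advertised_window):
    """WTCP: an ACK is taken as a sign that the wireless link is in good state."""
    return advertised_window                             # open the window completely

def wtcp_window_on_timeout():
    """WTCP: a timeout suggests a burst loss is likely to follow."""
    return 1                                             # shrink to one segment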

For duplicate acknowledgment, WTCP opens the wireless transmission window in
full, assuming that the reception of the duplicate acknowledgement is an indication that the
wireless link is in good state, and immediately retransmits the lost segment. The base
station continues to transmit the remaining segments that are within the transmission
window if they have not already been transmitted. Until the mobile host receives the lost
segment, the reception of each out-of-order segment will generate a duplicate
acknowledgment. The number of these additional duplicate acknowledgments can be
determined by the base station, which ceases to retransmit during their reception [RAT 98].

By avoiding more than one duplicate acknowledgment based retransmission for a
segment, WTCP improves the utilization of the wireless channel [RAT 98].

According to their experiments, WTCP achieves throughput values 4-8 times higher
than TCP-Tahoe [RAT 98].

A drawback of WTCP is that the residence time a TCP segment spends in the base station
buffer can affect the RTT value estimated at the TCP source [RAT 98].

A disadvantage of split connections is that the end-to-end semantics of TCP
acknowledgments is violated since acknowledgments to packets can now reach the source
even before the packets actually reach the mobile host [BAL 97].

6.3 End-to-End (E2E) Schemes

Although a wide variety of TCP versions are used on the Internet, the current de
facto standard for TCP implementations is TCP Reno [BAL 97].

Now we present some End-to-End solutions:

6.3.1 TCP-Probing

In order to enhance TCP throughput and energy efficiency, TCP-Probing is proposed,
grafting a "probing" scheme onto the basic TCP error-control mechanism. In this scheme, a
"probe cycle" consists of a structured exchange of "probe" segments between the sender
and the receiver. These segments carry no payload and are implemented using an option
extension to the TCP header. When a data segment is excessively delayed and possibly lost,
the sender, rather than immediately retransmitting the segment and adjusting the congestion
window and threshold downwards, suspends data transmission and initiates a probe cycle
instead. Since the probe segments are composed of only segment headers, the sender can
monitor the network on an end-to-end basis at much lower cost in transmission effort (and
hence energy) than would otherwise be expended on the (re)transmission of data segments
that might not have a good chance of getting through during periods of degraded network
conditions [TSA 2000].

TCP-Probing also contributes more effectively to alleviating congestion. The
probe cycle terminates when network conditions have improved sufficiently that the sender
can make two successive round-trip time measurements from the network, at which point it
will have more information on which to base its error-correction response than does
standard TCP [TSA 2000].

In the event of persistent error conditions (e.g. congestion, burst link errors), the
duration of the probe cycle will be naturally extended and is likely to be commensurate
with that of the error conditions, since probe segments will also be lost. The data transmission
process is thus effectively "sitting out" these error conditions, awaiting successful
completion of the probe cycle. In the case of random loss, however, the probe cycle will
complete much more quickly, in proportion to the prevailing density of occurrence of the
random errors [TSA 2000].

The sender enters a probe cycle when either of two situations applies [TSA 2000]:

1. A timeout event occurs. If network conditions detected when the probe cycle
completes are sufficiently good, then instead of entering Slow Start, TCP-probing
simply picks up from the point where the timeout event occurred. In other words,
neither congestion window nor threshold is adjusted downwards. They call this
“Immediate Recovery”. Otherwise, slow start is entered.
2. Three duplicate acknowledgements (DACKs) are received. Again, if prevailing
network conditions at the end of the probe cycle are sufficiently good, Immediate
Recovery is executed. Note, however, that here Immediate Recovery will also
expand the congestion window in response to all DACKs received by the time the
probe cycle terminates. This is analogous to the window inflation phase of Fast
Retransmit in Reno and New Reno. Alternatively, if deteriorated network conditions
are detected at the end of the probe cycle, the sender enters Slow Start. This is in
marked distinction to Reno and New Reno behavior at the end of Fast Retransmit.
The logic here is that, having sat out the error condition during the probe cycle and
finding that network throughput is nevertheless still poor at the end of the cycle, a
conservative transmission strategy is more clearly indicated.

Implementation: A probe cycle uses two segments (PROBE1, PROBE2) and their
corresponding acknowledgements (PR1_ACK and PR2_ACK), implemented as option
extensions to the TCP header. As noted before, the segments carry no payload. The option
header extension consists of the following fields:
(i) type: in order to distinguish between the four probe segments (this is
effectively the option code field).
(ii) (options) length
(iii) id number: used to identify an exchange of probe segment.

The sender initiates a probe cycle by transmitting a PROBE1 segment, to which the
receiver immediately responds with a PR1_ACK, upon receipt of which the sender
transmits a PROBE2. The receiver acknowledges this second probe with a PR2_ACK
and returns to the ESTAB state, as shown in figure 6.4.

[Figure omitted: probing state transition diagram with states ESTAB, PR1_SENT, PR1_RCVD and PR2_SENT; transitions are labelled with PROBE1/PR1_ACK and PROBE2/PR2_ACK exchanges, probe timeouts, and the timeout or 3 DACKs events that trigger a probe cycle] [TSA 2000]

FIGURE 6.4 - Probing State Transition Diagram

The sender makes a round-trip time (RTT) measurement based on the time delay
between sending the PROBE1 and receiving the PR1_ACK, and another based on the
exchange of PROBE2 and PR2_ACK. The sender makes use of two timers during probing.
The first is a Probe timer, used to determine if a PROBE1 or its corresponding PR1_ACK
segment is missing, and the same again for the PROBE2/PR2_ACK segments. The second
is a Measurement timer, used to measure each of the two RTTs from the probe cycle, in
turn. The probe timer is set to the estimated RTT value current at the time the probe cycle is
triggered.

The value in the option extension id number identifies a full exchange of PROBE1,
PR1_ACK, PROBE2 and PR2_ACK segments, rather than individual segments within that
exchange. Thus, in the event that the PROBE1 or its PR1_ACK is lost (i.e. the probe timer
expires), the sender reinitializes the probe and measurement timers and retransmits
PROBE1 with a new id number. Similarly, if a PROBE2 or its PR2_ACK is lost, the sender
reinitiates the exchange of probe segments from the beginning by retransmitting a PROBE1
with a new id number. A PR1_ACK carries the same id number as the corresponding
PROBE1 that it is acknowledging; this is also the id number used by the subsequent
PROBE2 and PR2_ACK segments. The receiver moves to the ESTAB state after sending
the PR2_ACK that should terminate the probe cycle. In this state, and should the
PR2_ACK be lost, the receiver would receive - instead of data segments - a retransmitted
PROBE1 that is reinitiating the exchange of probe segments since the sender’s probe timer,
in the meantime, will have expired.
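
A minimal sketch of the sender side of a probe cycle, under the assumption of caller-supplied send and wait primitives (this is not the X-kernel code), could look as follows:

# Minimal sketch (hypothetical primitives): two probe/ACK exchanges under the same
# id number, restarted with a new id whenever a probe or its ACK is lost, yielding
# two successive RTT measurements.

import random, time

def probe_cycle(send_probe, wait_for_ack, probe_timeout):
    """send_probe(kind, probe_id) transmits PROBE1 or PROBE2;
    wait_for_ack(kind, probe_id, timeout) returns True if the matching
    PR1_ACK/PR2_ACK arrived in time. Both are supplied by the caller."""
    while True:
        probe_id = random.getrandbits(16)          # identifies the whole exchange
        rtts = []
        start = time.time()
        send_probe("PROBE1", probe_id)
        if not wait_for_ack("PR1_ACK", probe_id, probe_timeout):
            continue                               # lost: restart with a new id
        rtts.append(time.time() - start)
        start = time.time()
        send_probe("PROBE2", probe_id)
        if not wait_for_ack("PR2_ACK", probe_id, probe_timeout):
            continue                               # lost: restart with a new id
        rtts.append(time.time() - start)
        return rtts                                # two successive RTT samples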

Their experiments were implemented (not simulated) as fully developed, functioning
protocol code using the X-kernel framework (http://www.cs.princeton.edu/xkernel).
Their results do not show a great performance gain compared with TCP Tahoe, Reno and
New Reno; in general, the protocols exhibit somewhat similar behavior, and TCP-Probing's
goodput is quite uniformly no worse than the best of the three standard TCP versions.

According to the authors, Tsaoussidis and Badr [TSA 2000], TCP-Probing can be a
protocol of choice for heterogeneous wired/wireless communications with respect to energy
and throughput performance, so it is a protocol that can be used in both environments.

6.3.2 TCP with NACK

This protocol is a modification to TCP, which uses a Negative Acknowledgement
(NACK) as an explicit notification of packet corruption. This modified TCP may be
implemented in End-to-End schemes as well as in Split-Connection schemes, according to
its authors [CHA 97].

In order to have the advantage of "time diversity", a cumulative acknowledgement is
used to acknowledge the last in-sequence, correctly received packet in the NACK scheme
[CHA 97].

The additional negative acknowledgement (NACK) is added in the option field
(figure 6.5) to explicitly indicate which packet was received in error, so that retransmission of
that packet can be initiated quickly, particularly in the case of multiple corruptions in a
single window [CHA 97].

Option field for Negative Acknowledgement in the TCP header:

Option = A (1 byte) | Length = 7 (1 byte) | Sequence # of the first byte being nacked (4 bytes) | # of bytes nacked (1 byte)

FIGURE 6.5 - Option field for Negative Acknowledgement in TCP header
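
Assuming the layout of figure 6.5, the 7-byte option could be packed and unpacked as in the following sketch (the option kind value and the function names are illustrative only):

# Minimal sketch of packing/unpacking the NACK option of figure 6.5.
# Fields: kind (1 byte), length = 7 (1 byte), sequence number of the first
# nacked byte (4 bytes), number of bytes nacked (1 byte).

import struct

NACK_OPTION_FORMAT = "!BBIB"   # network byte order, total size 7 bytes

def pack_nack_option(first_nacked_seq, nacked_bytes, kind=0x41):
    # kind=0x41 ('A') is taken from the "Option = A" label in the figure
    return struct.pack(NACK_OPTION_FORMAT, kind, 7, first_nacked_seq, nacked_bytes)

def unpack_nack_option(raw):
    kind, length, first_seq, count = struct.unpack(NACK_OPTION_FORMAT, raw)
    assert length == 7
    return first_seq, count

option = pack_nack_option(first_nacked_seq=123456, nacked_bytes=200)
print(unpack_nack_option(option))   # (123456, 200)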

Under the assumptions that a corrupted packet can still reach the destination and the
source address of a corrupted packet is still known, whenever a corrupted packet is
received, a NACK is sent. Upon the detection of the NACK, only the corrupted packet is
retransmitted by the source and no window size adjustments are performed. After the
retransmission, the source resumes normal packet transmission. To avoid inflation of the
round-trip time estimate, the round-trip time measurements from all the packets which have
been sent before the retransmission of the corrupted packet are ignored [CHA 97].

An advantage of NACK is that it produces less acknowledgement traffic load in the
return path [CHA 97].

A drawback of the NACK scheme is that it cannot cope with the degradation caused
by handoff [CHA 97].

6.3.3 TCP with SIF

A modified TCP sender is proposed, which incorporates a heuristic Segment In
Flight (SIF) estimation algorithm to improve TCP throughput.

SIF Estimation: if the TCP sender has accumulated a certain number of duplicate
acknowledgements (DUPACKS) and has the confidence that the next unacknowledged
packet is lost, it can invoke the Fast retransmit algorithm immediately. The number of
DUPACKs that should be accumulated is a function of the number of the current
unacknowledged packets through the Segment In Flight (SIF) Estimation.

A new variable sifest_, which is the sender’s estimation of the amount of in-flight
segments and is zero initially, is proposed to be added in the sender for every active TCP
session. When a new data packet is sent, sifest_ is increased by one. When the TCP
sender gets a new acknowledgement, sifest_ is decremented by the number of
segments acknowledged. If a timeout occurs, or if the sender has been idle for more than
one round-trip time (RTT), sifest_ is reset to zero, since the sender is going to re-probe the
network. If the TCP sender retransmits a sent data packet triggered by schemes other than
a timeout, sifest_ remains untouched, since it is assumed that the previous copy of this
packet has already left the network. The receiver and intermediate systems are unmodified,
and the modified TCP can be deployed incrementally and interact with ordinary TCP
compatibly over the Internet.

When the TCP sender already has (sifest_ - 1) DUPACKs, or the DUPACKs have
exceeded the fixed threshold, and the first unacknowledged packet has not been resent
in the last RTT, this packet is resent immediately according to Fast Retransmit.
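
A minimal sketch of the sifest_ bookkeeping and the resulting early Fast Retransmit trigger, with hypothetical class and method names, is given below:

# Minimal sketch (hypothetical names) of the SIF estimation described above.

class SIFSender:
    DUPACK_THRESHOLD = 3          # the usual fixed threshold

    def __init__(self):
        self.sifest = 0           # estimated segments in flight
        self.dupacks = 0

    def on_send(self):
        self.sifest += 1

    def on_new_ack(self, segments_acked):
        self.sifest = max(0, self.sifest - segments_acked)
        self.dupacks = 0

    def on_timeout_or_idle(self):
        self.sifest = 0           # the sender will re-probe the network

    def on_dupack(self, resent_in_last_rtt):
        self.dupacks += 1
        # Fast Retransmit fires either at the usual threshold or as soon as
        # (sifest_ - 1) DUPACKs have arrived, whichever comes first.
        trigger = min(self.DUPACK_THRESHOLD, max(1, self.sifest - 1))
        return self.dupacks >= trigger and not resent_in_last_rtt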

If more new data packets are allowed by CWND and the CWND inflating
algorithm, they can be piped into the network to trigger more ACKs and to help TCP
endpoints regain the self-clocking earlier. Integrating the SIF estimation with the ordinary
TCP variants is quite straightforward and conflict-free with the original algorithms. With
the SIF enhancement, the TCP sender might become less tolerant of re-ordered packets,
and the estimation error might accumulate for a while. However, since SIF takes effect
when there are only a few in-flight segments, and the estimation is re-initialized periodically,
the probability of DUPACK occurrence due to packet reordering is negligible.

For their experiments, the TCP modules in the ns-2 simulator were modified
accordingly to incorporate the Segment In Flight estimation algorithm, and it can be seen
that the SIF-enhanced TCP improves on ordinary TCP in terms of higher throughput and
fewer idle periods. Their simulations also showed that TCP with SIF

achieves better end-to-end performance, and still keeps the fairness and the compatibility
with ordinary TCP variants [PAN 2000].

Some End-to-End solutions are mentioned in [BAL 97] and are presented here in
summarized form as follows:

6.3.4 E2E-ELN Protocol

This protocol adds an explicit loss notification (ELN) option to TCP
acknowledgments. When a packet is dropped on the wireless link, future cumulative
acknowledgments corresponding to the lost packet are marked to identify that a non-
congestion related loss has occurred. Upon receiving this information with duplicate
acknowledgments, the sender may perform retransmissions without invoking the associated
congestion-control procedures. This option allows us to identify what percentage of the
end-to-end performance degradation is associated with TCP’s incorrect invocation of
congestion control algorithms when it does a fast retransmission of a packet lost on the
wireless hop [BAL 97].

6.3.5 E2E-ELN-RXMT Protocol

This protocol is an enhancement of the E2E-ELN protocol, where the sender
retransmits the packet on receiving the first duplicate acknowledgment with the ELN option
set (as opposed to the third duplicate acknowledgment in the case of TCP Reno), in
addition to not shrinking its window size in response to wireless losses [BAL 97].

6.4 New Layer Schemes

We found two interesting approaches, presented by Ma Jian and Wu Jing [JIA 2001]
and by Li Chengzhou and Symeon Papavassiliou [CHE 2001].

6.4.1 ATCP

In [JIA 2001] a solution is presented to the problem of running TCP in ad hoc
wireless networks. The solution is to implement a thin layer (see figure 6.6) between IP
and TCP (called ATCP) that ensures correct TCP behavior while maintaining high
throughput. This is done by putting TCP into persist mode (like a snooze state) when the
network is disconnected or when there are losses due to a high bit error rate.

[Figure omitted: data flow through the TCP/ATCP/IP stack - data passes through TCP input()/TCP output(), ATCP input()/ATCP output() and ipintr()/ip output()] [JIA 2001]

FIGURE 6.6 - Data flow through the TCP/ATCP/IP stack

The highlights of ATCP are as follows:

1) End-to-End TCP semantics are maintained.

2) ATCP is transparent, which means that nodes with and without ATCP can set up TCP
connections normally.

3) ATCP's performance is almost ideal, as measured by the time to transfer large files.

4) ATCP does not interfere with TCP’s congestion control behavior when there is network
congestion.

The ATCP layer is only active at the TCP sender (in a duplex communication, the
ATCP layer at both participating nodes will be active). This layer monitors TCP state and
the state of the network based on ECN (Explicit Congestion Notification) and ICMP
(Internet Control Message Protocol) messages and takes appropriate action. To understand
ATCP's behavior, consider figure 6.7, which illustrates ATCP's four possible states –
Normal, Congested, Loss, and Disconnected. When the TCP connection is initially
established, ATCP at the sender is in the normal state. In this state, ATCP does nothing and
is invisible [JIA 2001].

[Figure omitted: state transition diagram with states Normal, Congested, Loss and Disconnected; transitions are triggered by receiving an ECN, an ICMP Source Quench or an ICMP Destination Unreachable message (TCP sender put in persist state, CWND set to 1), by three duplicate ACKs or an RTO about to expire (ATCP retransmits segments in TCP's buffer), by a new ACK, by a duplicate ACK or packet from the receiver, and by TCP transmitting a packet]
CWND: Congestion Window; ECN: Explicit Congestion Notification; ICMP: Internet Control Message Protocol; RTO: Retransmission Time Out [JIA 2001]

FIGURE 6.7 - State transition diagram for ATCP at the sender

Let us now examine ATCP’s behavior under four circumstances:

• Lossy Channel: When the connection from the sender to the receiver is lossy, it is likely
that some segments will not arrive at the receiver or may arrive out-of-order. Thus, the
receiver may generate duplicate acknowledgment (ACKs) in response to out of sequence
segments. When TCP receives three consecutive duplicate ACKs, it retransmits the
offending segment and shrinks the congestion window. It is also possible that, due to lost
ACKs, the TCP sender’s retransmission time out (RTO), may expire causing it to
retransmit one segment and invoke congestion control. ATCP in its normal state counts the
number of duplicate ACKs received for any segment. When it sees that three duplicate
ACKs have been received, it does not forward the third duplicate ACK but puts TCP in
persist mode (like snooze state in [CHA 88]). Similarly, when ATCP sees that TCP’s RTO
is about to expire, it again puts TCP in persist mode. By doing this, they ensure that the TCP
sender does not invoke congestion control, because that is the wrong thing to do under these
circumstances. After ATCP puts TCP in persist mode, ATCP enters the loss state. In the
loss state, ATCP transmits the unacknowledged segments from TCP's send buffer. It
maintains its own separate timers to retransmit these segments in the event that ACKs are
not forthcoming. Eventually, when a new ACK arrives (i.e., an ACK for a previously

unacknowledged segment) ATCP forwards that ACK to TCP, which also removes TCP from
persist mode. ATCP then returns to its normal state [JIA 2001].

• Congested: they assume that when the network detects congestion, the ECN flag is set in
ACK and data packets. They also assume that ATCP receives this message when in its
normal state. ATCP moves into its congested state and does nothing. It ignores any
duplicate ACKs that arrive and it ignores imminent RTO expiration events. In other words,
ATCP does not interfere with TCP’s normal congestion behavior. After TCP transmits a
new segment, ATCP returns to its normal state [JIA 2001].

• Disconnected: Node mobility in ad hoc networks causes route re-computation or even
temporary network partition. When this happens, they assume that the network generates an
ICMP Destination Unreachable message in response to a packet transmission. When
ATCP receives this message, it puts the TCP sender into persist mode and itself enters the
disconnected state. TCP periodically generates probe packets while in persist mode. When,
eventually, the receiver is connected to the sender, it responds to these probe packets with a
duplicate ACK (or a data packet). This removes TCP from persist mode and moves ATCP
back into normal state. In order to ensure that TCP does not continue using the old CWND
value, ATCP sets TCP’s CWND to one segment at the time it puts TCP in persist state. The
reason for doing this is to force TCP to probe the correct value of CWND to use for the
new route [JIA 2001].

• Other Transitions: Finally, when ATCP is in the loss state, reception of an ECN or an
ICMP Source Quench message will move ATCP into congested state and ATCP removes
TCP from its persist state. Similarly, reception of an ICMP Destination Unreachable
message moves ATCP from either the loss state or the congested state into the
disconnected state and ATCP moves TCP into persist mode (if it was not already in that
state) [JIA 2001].

• Effect of Lost Messages: Note that due to the lossy environment, it is possible that an
ECN may not arrive at the sender or, similarly, a “Destination Unreachable” message may
be lost. If an ECN message is lost, the TCP sender will continue transmitting packets.
However, every subsequent ACK will contain the ECN, thus ensuring that the sender will
eventually receive the ECN causing it to enter the congestion control state as it is supposed
to. Likewise, if there is no route to the destination, the sender will eventually receive a
retransmission of the “Destination Unreachable” message causing TCP to be put into the
persist state by ATCP. Thus, in all cases of lost messages, ATCP performs correctly [JIA
2001].

ATCP changes TCP's behavior under lossy conditions (due to high BER): ATCP
retransmits unacknowledged segments while TCP is put into persist state. Thus,
TCP does not invoke congestion control. In the event that the source and the destination get
disconnected (either for short periods while a new route is computed or for longer periods
due to partition), TCP is again put into persist mode for the duration of the disconnection
and no segments are transmitted by ATCP. When the network is reconnected, TCP
automatically comes out of persist mode because the receiver responds to the sender’s
probe packets. However, the congestion window used in this case is one segment initially.

TCP’s congestion behavior is unchanged ensuring that TCP appropriately throttles back its
transmission rate when the network is congested [JIA 2001].
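
The sender-side behavior described above can be condensed into the following sketch of the ATCP state machine (illustrative only, not the authors' FreeBSD implementation):

# Minimal sketch of the ATCP sender-side states of figure 6.7, reduced to the
# events discussed in the text.

class ATCPSender:
    def __init__(self):
        self.state = "NORMAL"

    def on_event(self, event):
        s = self.state
        if event == "ICMP_DEST_UNREACHABLE":
            self.state = "DISCONNECTED"        # TCP put into persist, CWND set to 1
        elif event in ("ECN", "ICMP_SOURCE_QUENCH"):
            self.state = "CONGESTED"           # do nothing, let TCP handle congestion
        elif event in ("THIRD_DUPACK", "RTO_ABOUT_TO_EXPIRE") and s == "NORMAL":
            self.state = "LOSS"                # TCP put into persist, ATCP retransmits
        elif event == "NEW_ACK" and s == "LOSS":
            self.state = "NORMAL"              # forward the ACK, TCP leaves persist
        elif event == "PACKET_OR_DUPACK_FROM_RECEIVER" and s == "DISCONNECTED":
            self.state = "NORMAL"              # probe answered, TCP leaves persist
        elif event == "TCP_TRANSMITS" and s == "CONGESTED":
            self.state = "NORMAL"
        return self.state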

Some drawbacks of ATCP are:

- If ATCP is used in a fixed internet implementing ECN, ATCP will operate correctly, but if
the fixed internet does not implement ECN, then it is necessary to split the connection at
the node that connects the wireless network with the wired internet. Thus, there will be two
concatenated TCP connections; this is similar to I-TCP [BAK 95] for cellular networks.

- They have not considered energy consumption in [JIA 2001], but minimizing the
processing involved is important to increase the life of the battery.

- The test bed deployed in [JIA 2001] is an emulation using a wired network with five
Pentium computers, each of which had two Ethernet (CSMA/CD) IEEE 802.3 interfaces,
instead of a real wireless ad hoc network using the IEEE 802.11 (CSMA/CA) standard.

Finally, about performance, they have implemented their protocol in FreeBSD, and they
show that their solution improves TCP’s throughput by a factor of 2–3 [JIA 2001].

6.4.2 LSSA

In wireless mobile ad hoc networks, each Mobile Host emits a beacon signal that is
used to identify itself and notify its neighbors about its existence. In this environment, the
Link Signal Strength Agent (LSSA) is introduced; it is a new layer that overcomes the
problems associated with the nature of the wireless links and the mobility of nodes, and is
therefore applicable in dynamic wireless networks [CHE 2001].

The LSSA resides between the TCP and IP layers, in a position similar to that of the
Internet Control Message Protocol (ICMP) in the Internet. When receiving the signal
strength indication from the lower layers, LSSA encapsulates it into a Link Signal Strength
Indication (LSSI) message, and sends it to the TCP source and destination. By analyzing
this information, the TCP source is able to monitor the state of current TCP connections
(e.g. good (strong) condition, bad (weak) condition or down). According to the link
condition, the TCP source may freeze its congestion window, invoke congestion avoidance
or request new route reconstruction. Each node can detect the strength of a signal coming
from all its neighbors [CHE 2001].

LSSA also receives LSSI messages from its neighbors. It may append its
own connection information to the message and relay it to the next hop. In order to support
this new layer, a new protocol value is needed in the protocol field of the IP header.
Thus, when receiving this message, the IP layer will pass it directly to the LSSA layer. Each
node receiving an LSSI message checks its own link status and appends its information to
it. In order to reduce the overhead caused by the LSSI messages, those nodes receiving a
message from their neighbors do not add more information to the message if they find that
the connection is in good condition [CHE 2001].

To simulate the impact of the high bit error rate of the wireless link and frequent topology
changes, they introduce the link state machine shown in figure 6.8. When a connection
becomes weak but the topology does not change, they assume that the link is in the Weak
state. The Loss state is entered only when a network topology change occurs. The periods
during which a connection stays in the Good state and the Weak state are exponentially
distributed, with means mean_good_period and mean_weak_period respectively. Once the
link is in the Good state, it may transition with probability "p" to the Weak state and
"(1 - p)" to the Loss state. The system will invoke the route re-construction procedure to
find a new route when it detects that a route is invalid.

[Figure omitted: link state machine with states Good, Weak and Loss] [CHE 2001]

FIGURE 6.8 - Link State Machine
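
A minimal sketch of this link state machine, assuming (for illustration only, since the text does not specify it) that a Weak link returns to the Good state and that a Loss triggers route re-construction, could be simulated as follows:

# Minimal sketch of the Good/Weak/Loss link model of figure 6.8.

import random

def simulate_link(p, mean_good_period, mean_weak_period, steps=10):
    state, t = "Good", 0.0
    history = []
    for _ in range(steps):
        if state == "Good":
            t += random.expovariate(1.0 / mean_good_period)
            state = "Weak" if random.random() < p else "Loss"
        elif state == "Weak":
            t += random.expovariate(1.0 / mean_weak_period)
            state = "Good"                 # assumption: weak links recover
        else:                              # Loss: topology changed
            state = "Good"                 # route re-construction finds a new route
        history.append((t, state))
    return history

print(simulate_link(p=0.8, mean_good_period=5.0, mean_weak_period=1.0))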

The LSSA was evaluated in the Optimized Network Engineering Tool (OPNET)
[OPT 2003], and its initial results indicate that this proposal overcomes the problems
associated with the nature of the wireless links and the mobility of nodes, and therefore is
applicable in dynamic wireless networks [CHE 2001].

6.5 Emergent Schemes

We call this group emergent because it consists of new schemes and proposals that share
characteristics of some or all of the schemes mentioned before, and/or original new solutions:

6.5.1 Feedback-Based Scheme: TCP-F

The feedback-based scheme was first introduced in [CHA 88] and upgraded in
[CHA 2001].

Standard TCP does not distinguish between congestion and packet loss due to
transmission error or route failure. So treating route failure as congestion (and invoking
congestion control) is not advisable, because congestion and route failure are disparate
phenomena which have to be handled independently and separately. In this scheme the
source is informed of the route failure so that it does not unnecessarily invoke congestion
control and can refrain from sending any further packets until the route is restored.

The feedback-based scheme, called TCP-Feedback (TCP-F), is described below:

Consider, for simplicity, a single bulk data transfer session, where a source MH
(mobile host) is sending packets to a destination MH. Every MH behaves in a cooperative
fashion by acting as a router, allowing packets destined to other MHs to pass through it.
As soon as the network layer at an intermediate MH (henceforth referred to as the failure
point, FP) detects the disruption of a route due to the mobility of the next MH along that
route, it explicitly sends a route failure notification (RFN) packet to the source and records
this event. Each intermediate node that receives the RFN packet invalidates the particular
route and prevents incoming packets intended for the destination from passing through that
route. If the intermediate node knows of an alternate route to the destination, this alternate
route can now be used to support further communication, and the RFN is discarded.
Otherwise, the intermediate node simply propagates the RFN toward the source. On
receiving the RFN, the source goes into a snooze state (see figure 6.9) and performs the
following:

[Figure omitted: the TCP-F state machine - the connection moves from the Established state to the Snooze state on receiving an RFN, and back to Established on receiving an RRN or on route failure timeout; the usual transitions from SYN-SENT/SYN-RECVD and to FIN-WAIT-1/CLOSE-WAIT are preserved]
RFN: Route Failure Notification; RRN: Route Reestablishment Notification [CHA 2001]

FIGURE 6.9 - The TCP-F state machine

• It completely stops sending further packets (new or retransmissions).

• It then [CHA 2001]:

– Marks all of its existing timers as invalid.
– Freezes the send window of packets.
– Freezes the values of other state variables such as the retransmit timer value and window size.
– Starts a route failure timer, which corresponds to a worst-case route reestablishment
time. The timeout value of this timer can be a parameter whose value depends on
the underlying routing protocol. The source remains in this snooze state until it is
notified of the restoration of the route through a route reestablishment notification
(RRN) packet, as explained below.

Let one of the intermediate nodes that has previously forwarded an RFN to the
source learn about a new route to the destination (through a routing update). This
intermediate node then sends an RRN packet to the source (whose identity it previously
stored). All further RRNs received by this intermediate node for the same source-
destination connection are discarded. Any other node that receives the RRN simply
forwards it toward the source [CHA 2001].

As soon as the source receives the RRN, it changes to an active state from the
snooze state. It then flushes out all unacknowledged packets in its current window. Since
most packets in transit during the failure period would have been affected, packets can be
flushed out without waiting for acknowledgments from the receiver. The number of
retransmitted packets directly depends on the current window size. These steps in effect
reduce the effect of TCP’s congestion control mechanism when transmission restarts.
Communication now resumes at the same rate as before the route failure occurred,
ensuring that there is no unnecessary loss of throughput in this period. TCP's congestion
control mechanism can now take over and adjust to the existing load in the system. The
route failure timer ensures that the source does not indefinitely remain in the snooze state
waiting for an RRN, which may be delayed or lost [CHA 2001].
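
A minimal sketch of this source-side behavior, using hypothetical names rather than the TCP-F implementation, is given below:

# Minimal sketch (hypothetical names) of the TCP-F source on route failure and
# re-establishment: freeze on RFN, resume and flush the window on RRN, and fall
# back to normal operation if the route failure timer expires.

class TCPFSource:
    def __init__(self, worst_case_route_reestablishment_time):
        self.state = "ESTABLISHED"
        self.route_failure_timeout = worst_case_route_reestablishment_time
        self.frozen = None                     # saved timers / window variables

    def on_rfn(self, timers, send_window, state_vars, start_timer):
        """Route Failure Notification: stop sending and freeze everything."""
        self.state = "SNOOZE"
        self.frozen = (timers, send_window, state_vars)
        start_timer(self.route_failure_timeout, self.on_route_failure_timeout)

    def on_rrn(self, retransmit_window):
        """Route Re-establishment Notification: resume at the previous rate."""
        self.state = "ESTABLISHED"
        timers, send_window, state_vars = self.frozen
        retransmit_window(send_window)         # flush unacknowledged packets
        return timers, state_vars              # restored unchanged

    def on_route_failure_timeout(self):
        self.state = "ESTABLISHED"             # do not wait forever for an RRN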

They also assume that packets reaching the failure point are lost when the next link
from the failure point is down. However, this is not so if the intermediate nodes can buffer
these packets to a limited capacity. In this case, we could do the following. As the RFN
propagates to the source from the failure point, all the intermediate nodes can temporarily
buffer subsequent packets. If there is a substantial overlap between the newly established
route and the old one, the RRN message can be used to flush out the buffers (i.e., the
buffered packets may be sent to the destination along the newly established route).
Similarly, intermediate nodes may forward the buffered packets to the destination without
waiting for an RRN on learning of new routes. This buffering scheme has the following
advantages:

• It will save packet retransmissions, and packet flow can resume even before the source
learns about the route reestablishment.

• Since buffering is staggered across the intermediate hops, the buffering overhead at each
node is expected to be low.

Reference [CHA 2001] also considers the problem of maintaining reliable end-to-end
communication in ad hoc networks, similar to that provided by TCP over the Internet. It is
desirable to use TCP directly even in ad hoc networks in order to provide seamless
portability to applications like file transfer, e-mail, and Web browsers written using

standard TCP libraries. Hence, it is of interest to study the behavior of TCP in the context
of ad hoc networks and evaluate the effect of dynamic topology on TCP performance.

The studies and simulations of [CHA 2001] indicate that, because of frequent and
unpredictable route disruptions, TCP's performance is substantially degraded.

The simulation experiments they conducted to compare the performance of TCP-F and
basic TCP also suggest that this approach is indeed beneficial in preventing performance
degradation.

We present some drawbacks to this scheme:

- In [CHA 88]’s method, the source continues using the old congestion window for the new
route. This is a problem because the congestion window size is route specific (since it seeks
to approximate the available bandwidth) [JIA 2001].

- Reference [CHA 88] also does not consider the effect of congestion, out-of-order packets,
and bit error [JIA 2001].

- In [CHA 88] a feedback scheme has been proposed in order to improve the TCP
performance in mobile ad hoc networks by notifying the source about route failures.
However, this scheme fails to distinguish between route failures due to topology changes
and temporary bad link quality, and therefore the TCP performance is degraded both by
topology changes and by the bad quality of the wireless connection [CHE 2001].

6.5.2 TCP-BUS

In [KIM 2000] a new mechanism, called TCP-BUS, is proposed that improves TCP
performance in a wireless ad hoc network by letting each node buffer packets during
route disconnection and re-establishment.

This proposal deploys Associativity-Based Routing (ABR) [TOH 96], a source-initiated
on-demand protocol, as the underlying routing protocol. ABR advocates stable and
long-lived routes. It also takes advantage of the feedback information for detecting route
disconnection, which is also used in TCP-F [CHA 88], [KIM 2000].

In this proposal the modifications to TCP are:

1. Explicit notifications: Two control messages related to route maintenance are
introduced to notify the source of route failures and route re-establishments. They are the
Explicit Route Disconnection Notification (ERDN) and the Explicit Route Successful
Notification (ERSN). These indicators are used to differentiate between network congestion
and route failure caused by node movement. An ERDN is generated at an intermediate node
(pivoting node, PN) upon detection of a route disconnection, and is propagated towards the
source. After receiving the ERDN message, the source stops transmission. Similarly, after
discovering a new partial path from the PN to the destination, the PN sends an ERSN
message to the source. On receiving the ERSN message, the source resumes transmission
[KIM 2000].

2. Extending timeout values: The timeout values for buffered packets at the source
and nodes along the path to the PN are doubled, mainly because it takes time to recover the
route in case of route loss and re-establishment [KIM 2000].

3. Selective retransmission of lost packets at the receiver node: The retransmission of
lost packets on the path due to congestion relies on the timeout mechanism. Therefore, if the
timeout values for buffered packets at the source and nodes along the path to the PN are
doubled, the lost packets are not retransmitted until the adjusted timeout values expire. To
cope early with packet losses along the path from the source to the PN, a request is
generated that requires the source to retransmit the lost packets selectively before their
timeout values expire [KIM 2000].

4. Avoiding unnecessary requests for fast retransmission: Using the ABR protocol,
packets along the path from the PN to the destination may be discarded by intermediate
nodes after receiving a Route Notification (RN) message [KIM 2000].

5. Reliable transmission of control messages: After a PN detects a route
disconnection, the node notifies the source of the route failure by using an ERDN message.
However, the source can take action on the route failure only if it receives the ERDN
message reliably. In addition, each intermediate node receiving the ERDN message stops
transmission of its buffered packets. The reliable transmission of ERDN depends on the
link layer and network layer. One way to achieve reliable transmission is to have the source
generate Probe messages periodically, to check whether the PN has found a new partial
route successfully, until it receives the ERSN message or it times out [KIM 2000].

TCP-BUS at the source: The source transmits its segments in the same manner as general TCP
when there are no feedback messages (such as ERDN and ERSN messages). The slow start
and congestion avoidance mechanisms function as normal; however, when the source
receives the ERDN feedback message from the network, it stops sending data packets. In
addition, it freezes all timers and window sizes in a manner similar to TCP-F [CHA 88], [KIM
2000].
TCP-BUS at intermediate nodes: After a node (the PN) detects a route failure, it sends
the ERDN message to notify the source of the route failure and initiates partial route
discovery. While the ERDN message is propagated towards the source, each intermediate
node stops further transmission of data packets and buffers all pending packets to defer
transmission. After receiving a reply message, the PN notifies the source of successful route
re-establishment via an ERSN message. At each intermediate node receiving the ERSN
message, transmission of buffered packets resumes.
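
A minimal sketch of how an intermediate node could handle the ERDN/ERSN messages and its packet buffer (hypothetical names, not the SMPL simulation code) is shown below:

# Minimal sketch of the pause/buffer/resume behavior of an intermediate node.

class TCPBusIntermediateNode:
    def __init__(self):
        self.buffered = []          # pending data packets awaiting transmission
        self.paused = False

    def on_data_packet(self, packet, forward):
        if self.paused:
            self.buffered.append(packet)   # defer transmission during route failure
        else:
            forward(packet)

    def on_erdn(self, forward_towards_source):
        self.paused = True                 # stop sending, keep buffering
        forward_towards_source("ERDN")     # propagate the notification

    def on_ersn(self, forward):
        self.paused = False                # partial route re-established
        for packet in self.buffered:       # resume transmission of buffered data
            forward(packet)
        self.buffered.clear()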

TCP-BUS at the destination node: The receiver performs the normal TCP end-to-end
procedure on the acquired path when there is no route disconnection. Also, a selective
retransmission mechanism as in TCP-SACK can be applied to the source's and receiver's
procedures. They propose an additional selective retransmission scheme to cope with the
packets lost due to congestion on the partial path from the source to the receiver. A request
for selective retransmission of lost packets is generated when the receiver detects a hole in
the consecutive segment sequence. It requires the source to react to the congestion.

Their simulation was written using a discrete-event simulation language, SMPL
(Simulation Model Programming Language). Finally, comparing their results against TCP
Reno and TCP-F, TCP-BUS showed better performance, thanks to the selective
retransmission mechanism for the lost packets after route re-establishment and the buffering
of data packets that it uses. The longer congestion lasts, the better the performance shown by
TCP-BUS, due to early selective retransmission instead of relying on the timeout mechanism
used in other schemes [KIM 2000].

TCP-F and TCP-BUS are not the only solutions; briefly, we introduce some more in this
category.

6.5.3 MAITE

The Mobility Awareness Incorporated as TCP Enhancement (MAITE) is a new
scheme that consists of an implementation of link-layer messages that inform TCP of high
BER and disconnection conditions [ARA 2001].

MAITE shares features of the split-connection and link-layer schemes. Figure 6.10
below shows the topology of MAITE, in which two mobile hosts (MHs) use wireless links to
communicate with each other. Since both links could be experiencing very difficult
conditions, individual control over each of the wireless links is desirable (split scheme). The
supervisory host allows the implementation of specific controls over the links. As shown in
the figure, an intermediate access point and a supervisory host are needed. A wired
communication link exists between the supervisory host and the access point.

FIGURE 6.10 - Topology used in MAITE

At a supervisory host, disconnections can be detected by noticing the lack of
reception of data from a mobile host over a period of time. Losses due to high BER
can be detected if the link layer at an access point informs the supervisory host when high
BER occurs.

At the link layer, it is necessary for an MH to distinguish between disconnection and
high BER conditions. A disconnection at the MH can be detected by sensing the lack of the
beacon that is periodically sent by the access point in the wireless local area network
standard implementations (IEEE 802.11).

A high BER condition can be detected when the CRC check on received frames
continuously fails. If the link layer is able to distinguish between these two conditions, the
upper layers can be informed. After that, appropriate actions at the transport level can be
taken. MAITE incorporates into the MH the ability to sense disconnections via messages
sent from the hardware to the upper layers.

In order to illustrate how MAITE works we will present the state transition
diagrams at both a sending MH and at the supervisory host.

Figure 6.11 below shows that, at the mobile host, high BER conditions are
reported to the transport layer by the link layer via a HighBER Notification message. Upon
receiving this message, a sending MH will freeze its TCP timers until it receives a
HighBER Over message from the link layer. A TCP sender will not attempt any
transmissions during high BER periods. Disconnections are handled in a similar way.

TCP at the MH is informed of disconnections by the link layer via a Disconnection
Notification message. When disconnected, a sending mobile will freeze its TCP timers until
it receives a Disconnection Over notification message. As with high BER, a TCP sender
will not attempt any transmissions while disconnected.
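
A minimal Python sketch of this sender-side behaviour (our own simplification, not the
authors' implementation): link layer notifications freeze the TCP timers, and the
corresponding "over" notifications resume normal operation.

    class MaiteSenderState:
        """Simplified TCP-sender reaction to MAITE link-layer notifications."""

        def __init__(self):
            self.timers_frozen = False
            self.state = "NORMAL"

        def on_notification(self, message):
            if message in ("HIGH_BER_NOTIFICATION", "DISCONNECTION_NOTIFICATION"):
                self.state = "HIGH_BER" if message.startswith("HIGH_BER") else "DISCONNECTED"
                self.timers_frozen = True          # stop RTO timers, send nothing
            elif message in ("HIGH_BER_OVER", "DISCONNECTION_OVER"):
                self.state = "NORMAL"
                self.timers_frozen = False         # resume timers and transmissions

        def on_timeout(self):
            # With BER and disconnection handled above, a timeout that still fires
            # is attributed to congestion and handled by standard TCP procedures.
            if not self.timers_frozen:
                return "INVOKE_CONGESTION_CONTROL"
            return "IGNORED"

    sender = MaiteSenderState()
    sender.on_notification("HIGH_BER_NOTIFICATION")
    print(sender.state, sender.on_timeout())       # HIGH_BER IGNORED
    sender.on_notification("HIGH_BER_OVER")
    print(sender.state, sender.on_timeout())       # NORMAL INVOKE_CONGESTION_CONTROL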

It is interesting to note that if high BER and disconnection conditions are
distinguished, any timeout that occurs at the transport level must have been caused by
congestion at intermediate nodes. A timeout at the TCP level is then treated in the standard
way, initiating the appropriate congestion control procedures. The diagram also shows that
the TCP sender can be forced into persist mode when the supervisory host closes the
advertised window of the connection.

This is an indication of problems over the remote wireless link with the receiving
mobile. Normal conditions are reestablished upon receiving an acknowledgement via the
supervisory host, which reopens the window.

FIGURE 6.11 - State transition of MAITE’s features at a mobile host that acts as a TCP
Sender.

Figure 6.12 below shows the state diagram at a supervisory host. MAITE allows the
supervisory host to receive link layer notifications from the access point. These
notifications inform it of high BER conditions.

When a high BER condition occurs, a link down message is received and those
TCP senders communicating with a mobile receiver are forced into persist mode. No
segments are sent until this condition is over; in the same way, no segments are sent to TCP
receivers until the high BER condition is over.

When the supervisory host detects that a mobile is disconnected, it freezes the
transmission timers of the connection towards the disconnected sender or receiver. No
segments are sent until a reconnect message is received from the sending TCP.

Any timeouts that occur in the supervisory host are treated as an indication of loss,
and therefore retransmission of locally cached data occurs.

FIGURE 6.12 - State transition diagram at a supervisory host showing MAITE features

The authors in [ARA 2001] do not show the state transition diagram for a receiving
TCP mobile, since the transitions between states are simpler in this case. A receiving mobile
will not freeze any timers during high BER or disconnection conditions, but will refrain from
transmitting acknowledgements. TCP receivers are not forced into persist mode, and it is the
responsibility of the supervisory host to handle bad conditions on the first wireless link
by storing acknowledgements until good channel conditions exist. In the same way,
congestion detection is not applicable at the receiving mobile. After being reconnected, a
receiver restarts communications by sending a reconnect message.

The features of MAITE were simulated using the simulation package CSIM. To
keep the model simple, the authors selected a frequently used wireless channel model and a
common TCP implementation, TCP Tahoe. They followed the IEEE 802.11b (at 11 Mbps)
channel access guidelines, and a stop-and-wait protocol similar to that of the standard is
present at the link layer. They assumed a Rayleigh fading channel. Their results show that,
in general, pure end-to-end TCP has the lowest performance, since pure TCP constantly
reduces the sending rate of a transmitter whenever the channel degrades or a disconnection
occurs. The introduction of the link layer messages gave an average improvement of 30%
in the end-to-end goodput. The improvements obtained by splitting the connection are of
75% and 100% with respect to pure TCP. Total improvements of 105% and 130% over
pure TCP are obtained with MAITE. MAITE increases the goodput because it keeps TCP
from timing out whenever the channel is in a bad state or the mobile gets disconnected.

Under high mobility (45 km/h), the improvements in end-to-end goodput introduced
by the link layer messages and by MAITE are significantly lower. In other words, the
improvements are only noticeable at low mobility speeds (4 km/h); as the speed increases,
the performance improvements decrease significantly.

According to [ARA 2001], a drawback of MAITE is that it does not handle handoffs
between supervisory hosts.

TCP performance over wireless networks is a growing area of interest in the
scientific community; accordingly, we briefly introduce some emerging schemes below.

6.5.4 IR-TCP

This transport layer protocol improves performance compared with TCP in the
presence of noisy links such as those in wireless networks. IR-TCP is interference aware
and uses interference information from the link layer in its recovery procedure. IR-TCP is
backward compatible and does not affect performance during normal operation or
congestion, while providing a significant performance improvement during interference.
IR-TCP addresses specific problems of TCP with regard to performance during
interference. It can be defined as an interference-aware transport layer that detects the
presence of interference while also being congestion aware. Some researchers do not
believe that this is good practice; however, the authors strongly believe that without such
mechanisms good performance cannot be achieved. IR-TCP employs algorithms that
improve recovery from interference and overall performance during interference. It also
prevents the inappropriate use of congestion control algorithms when there is interference,
and not congestion, in the path. SNR is a good measure for detecting interference
[MAR 98].
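
As a rough, hypothetical illustration of this idea (the concrete IR-TCP algorithms are
described in [MAR 98]; the threshold and names below are our own assumptions), a loss
event could be gated on the reported SNR before congestion control is invoked:

    SNR_INTERFERENCE_THRESHOLD_DB = 10.0   # assumed value, not from [MAR 98]

    def handle_loss(link_snr_db, cwnd, ssthresh):
        """Decide how to react to a detected loss, given the current SNR report."""
        if link_snr_db < SNR_INTERFERENCE_THRESHOLD_DB:
            # Loss attributed to interference: recover without shrinking the window.
            return cwnd, ssthresh, "RETRANSMIT_ONLY"
        # Loss attributed to congestion: apply standard multiplicative decrease.
        ssthresh = max(cwnd // 2, 2)
        return ssthresh, ssthresh, "CONGESTION_CONTROL"

    print(handle_loss(link_snr_db=6.0, cwnd=20, ssthresh=16))   # interference path
    print(handle_loss(link_snr_db=25.0, cwnd=20, ssthresh=16))  # congestion path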

6.5.5 FAST-TCP

Fast-TCP, proposed by Nokia Research Center, aims at more accurate control of the
TCP transmission rate and better TCP traffic shaping. The basic idea of the method is that a
network element, such as a router, delays the IP packets carrying TCP ACKs when
congestion tends to occur. Since the TCP source does not receive an ACK, it keeps its
current transmission window until the delayed ACKs are received.

The authors have shown that Fast-TCP can reduce the TCP flow control feedback
time, reduce buffer oscillation, increase bandwidth utilization, increase throughput, and
reduce packet losses in IP networks with wired links. Fast-TCP is implemented at the
router, which eliminates the need to change either the sender's or the receiver's TCP
implementation. The approach also supports wireless links [MAJ 99].
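
A minimal sketch of the ACK-delaying idea as it might look inside a router's forwarding
path is given below; the queue-occupancy trigger and the delay value are hypothetical, and
the actual Fast-TCP rules are those of [MAJ 99]:

    import collections

    class AckDelayingRouter:
        """Delays IP packets carrying TCP ACKs when the data queue builds up."""

        def __init__(self, queue_capacity=100, congestion_fraction=0.8, ack_delay=0.05):
            self.queue_capacity = queue_capacity
            self.congestion_fraction = congestion_fraction
            self.ack_delay = ack_delay             # seconds to hold back an ACK
            self.data_queue = collections.deque()
            self.delayed_acks = collections.deque()

        def congested(self):
            return len(self.data_queue) > self.congestion_fraction * self.queue_capacity

        def forward(self, packet):
            """Return (packet, extra_delay_seconds) as a scheduling decision."""
            if packet.get("is_tcp_ack") and self.congested():
                self.delayed_acks.append(packet)   # source keeps its current window
                return packet, self.ack_delay
            self.data_queue.append(packet)
            return packet, 0.0

    router = AckDelayingRouter(queue_capacity=10)
    router.data_queue.extend({"is_tcp_ack": False} for _ in range(9))  # queue nearly full
    print(router.forward({"is_tcp_ack": True}))    # ACK held back for 0.05 s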

6.5.6 TCP with SPACK

TCP with SPACK is a new acknowledgement scheme. When the base station detects
packet losses, SPACK splits the newly arrived ACK packet into several ACKs and transfers
them to the fixed host. The fixed host receives several ACK packets and increases its
window size rapidly, so TCP performance recovers quickly. SPACK has several advantages
compared with other protocols: no modifications to the TCP source code, maintenance of
end-to-end TCP semantics, and less complexity at the base station [JIN 99]. We consider
SPACK in this classification because, although it has characteristics of split schemes, it
keeps the end-to-end semantics that split schemes do not preserve according to [BAL 97];
see figure 6.13.
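
A minimal Python sketch of the ACK-splitting step at the base station (the split factor and
names are assumptions of ours; see [JIN 99] for the actual scheme): one cumulative ACK
arriving from the mobile host is replaced by several ACKs with increasing acknowledgment
numbers, so the fixed host opens its window faster.

    def split_ack(prev_ack_no, new_ack_no, n_splits):
        """Split one cumulative ACK into n_splits ACKs with increasing ack numbers."""
        if n_splits < 2 or new_ack_no <= prev_ack_no:
            return [new_ack_no]
        step = (new_ack_no - prev_ack_no) / n_splits
        acks = [prev_ack_no + round(step * i) for i in range(1, n_splits)]
        acks.append(new_ack_no)                    # last split carries the real ack number
        return acks

    # One ACK acknowledging bytes up to 8000 becomes four ACKs sent to the fixed host.
    print(split_ack(prev_ack_no=4000, new_ack_no=8000, n_splits=4))
    # [5000, 6000, 7000, 8000]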

FIGURE 6.13 - SPACK transmission between the fixed host, base station and mobile host:
(1) a packet is lost, (2) the retransmission timer expires, (3) congestion control is invoked
(window size = 1) and the packet is retransmitted, (4) the packet is received, and (5) the
base station transfers the split ACKs, letting the window grow again



7 Proposed solutions in IEEE 802.11 MAC to improve TCP performance

7.1 A Novel MAC with fast Collision Resolution (FCR) in WLAN

According to [KWO 2002], the major deficiency of the IEEE 802.11 MAC protocol
comes from its slow collision resolution as the number of active stations increases. An
active station can be in one of two modes in each contention period: the transmitting mode,
when it wins a contention, and the deferring mode, when it loses a contention. In the
proposed FCR algorithm, the authors change the contention window size for the deferring
stations and regenerate the backoff timers for all potential transmitting stations to avoid
"future" potential collisions; in this way, possible packet collisions can be resolved quickly.
More importantly, the proposed algorithm preserves the implementation simplicity of the
IEEE 802.11 MAC.

The FCR algorithm has the following characteristics [KWO 2002]:

1. Use much smaller initial minimum contention window size (minCW) than the IEEE
802.11 MAC.

2. Use much larger maximum contention window size (maxCW) than the IEEE 802.11
MAC.

3. Increase the contention window size of a station both when it is in the collision state and
when it is in the deferring state.

4. Reduce the back off timers exponentially fast when a prefixed number of consecutive
idle slots are detected.

5. Assign the maximum successive packet transmission limit to keep fairness in serving
users.

In the FCR algorithm, the contention window size of a station will increase not only
when it experiences a collision but also when it is in the deferring mode and senses the start
of a busy period [KWO 2002].

The detailed rules of the FCR algorithm, taken from [KWO 2002], are as follows:

1. Backoff Procedure: All active stations monitor the medium. If a station senses the
medium idle for a slot, it decrements its backoff time (BT) by a slot time, i.e.,
BTnew = BTold - aSlotTime (the backoff timer is decreased by one slot). When its backoff
timer reaches zero, the station transmits a packet. If [(minCW+1) x 2 - 1] consecutive idle
slots are detected, the backoff timer is decreased much faster (exponentially fast), i.e.,
BTnew = BTold - BTold/2 = BTold/2 (if BTnew < aSlotTime, then BTnew = 0); in other
words, the backoff timer is halved. For example, if a station has the backoff timer 2047, its
backoff time is BT = 2047 x aSlotTime, which is decreased by a slot time at each idle slot
until the backoff timer reaches 2040 (we assume [(minCW+1) x 2 - 1] = 7, i.e., minCW = 3).
After that, if the idle slots continue, the backoff timer is halved, i.e., BTnew = BTold/2, at
each additional idle slot until either it reaches zero or the station senses a non-idle slot,
whichever comes first. As an illustration, after 7 idle slots we have BT = 1020 x aSlotTime
on the 8th idle slot, BT = 510 x aSlotTime on the 9th idle slot, BT = 255 x aSlotTime on the
10th idle slot, and so on until the timer either reaches zero or a non-idle slot is detected.
Therefore, the wasted idle backoff time is guaranteed to be less than or equal to 18 x
aSlotTime in the above scenario. The net effect is that unnecessary idle backoff time is
reduced when a station that has just performed a successful packet transmission runs out of
packets for transmission or reaches its maximum successive packet transmission limit
[KWO 2002].

2. Transmission Failure (Packet Collision): If a station notices that its packet transmission
has failed, possibly due to a packet collision (i.e., it fails to receive an acknowledgment from
the intended receiving station), the contention window size of the station is increased
and a random backoff time (BT) is chosen, i.e., CW = min(maxCW, CW x 2), BT =
uniform(0, CW - 1) x aSlotTime, where uniform(a, b) indicates a number randomly drawn
from the uniform distribution between a and b and CW is the current contention window
size [KWO 2002].

3. Successful Packet Transmission: If a station has finished a successful packet
transmission, its contention window size is reduced to the initial (minimum) contention
window size minCW and a random backoff time (BT) is chosen accordingly, i.e., CW =
minCW, BT = uniform(0, CW - 1) x aSlotTime. If a station has performed a number of
successive packet transmissions that reaches (or exceeds) the maximum successive
transmission limit, then its contention window size is increased to the maximum contention
window size maxCW and a random backoff time (BT) is chosen as follows: CW = maxCW,
BT = uniform(0, CW - 1) x aSlotTime [KWO 2002].

4. Deferring State: For a station in the deferring state, whenever it detects the start of a
new busy period, which indicates either a collision or a packet transmission in the medium,
the station increases its contention window size and picks a new random backoff time (BT)
as follows: CW = min(maxCW, CW x 2), BT = uniform(0, CW - 1) x aSlotTime [KWO
2002]. These four rules are summarized in the sketch below.
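
The following is a small Python model of the rules above, a simplified sketch assuming
minCW = 3 (as in the example of [KWO 2002]) and maxCW = 2047 (an assumption for
illustration); carrier sensing, real slot timing, and the successive-transmission fairness
limit are deliberately left out, so only the window and timer updates are mirrored:

    import random

    MIN_CW, MAX_CW = 3, 2047
    SLOT = 1                                     # one slot time, in arbitrary units
    FAST_DECREASE_AFTER = (MIN_CW + 1) * 2 - 1   # consecutive idle slots before halving

    class FcrStation:
        def __init__(self):
            self.cw = MIN_CW
            self.idle_slots = 0
            self.new_backoff()

        def new_backoff(self):
            self.bt = random.randint(0, self.cw - 1) * SLOT

        def on_idle_slot(self):
            self.idle_slots += 1
            if self.idle_slots > FAST_DECREASE_AFTER:
                self.bt //= 2                    # exponentially fast decrease
            else:
                self.bt = max(self.bt - SLOT, 0) # normal decrement by one slot
            return self.bt == 0                  # True -> transmit now

        def on_busy_slot_while_deferring(self):
            self.idle_slots = 0
            self.cw = min(MAX_CW, self.cw * 2)   # grow CW even without a collision
            self.new_backoff()

        def on_collision(self):
            self.idle_slots = 0
            self.cw = min(MAX_CW, self.cw * 2)
            self.new_backoff()

        def on_success(self, reached_tx_limit=False):
            self.idle_slots = 0
            self.cw = MAX_CW if reached_tx_limit else MIN_CW
            self.new_backoff()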

Finally, in the FCR algorithm, the station that has successfully transmitted a packet
will have the minimum contention window size and a smaller backoff timer, and hence a
higher probability of gaining access to the medium, while the other stations have relatively
larger contention window sizes and larger backoff timers. After a number of successful
packet transmissions by one station, another station may win a contention, and this new
station will then have a higher probability of gaining access to the medium for a period of
time [KWO 2002].

Figure 7.1 below shows the throughput results of the IEEE 802.11 MAC and of the
FCR algorithm for 100 contending stations, with FCR performing better. The scenario was
simulated using the GloMoSim network simulator [GLO 2002].

FIGURE 7.1 - Throughput for various numbers of stations

7.2 Receiver-Based Auto Rate (RBAR) protocol

The Receiver-Based Auto Rate (RBAR) protocol is a rate adaptive MAC protocol.
The novelty of RBAR is that its rate adaptation mechanism is in the receiver instead of in
the sender.

Rate adaptation is the process of dynamically switching data rates to match the
channel conditions, with the goal of selecting the rate that will give the optimum throughput
for the given channel conditions. The Lucent WaveLAN II and Aironet PC4800 devices
contain proprietary rate adaptation mechanisms. There are two aspects to rate adaptation:
channel quality estimation and rate selection. Channel quality estimation involves
measuring the time-varying state of the wireless channel for generating predictions of
future quality. Issues include: which metrics should be used as indicators of channel quality
(e.g., signal-to-noise ratio, signal strength, symbol error rate, bit error rate), which
predictors should be used, whether predictions should be short-term or long-term, etc. Rate
selection involves using the channel quality predictions to select an appropriate rate.
Techniques vary, but a common technique is threshold selection, where the value of an
indicator is compared against a list of threshold values representing boundaries between the
data rates [HOL 2001].
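
A small sketch of threshold-based rate selection as described above; the SNR thresholds
and the rate set below are hypothetical and are not the values used in [HOL 2001]:

    # (threshold_in_dB, rate_in_Mbps) boundaries, highest rate first.
    RATE_THRESHOLDS = [(18.0, 11.0), (12.0, 5.5), (7.0, 2.0), (0.0, 1.0)]

    def select_rate(snr_db):
        """Pick the highest data rate whose SNR threshold the channel estimate meets."""
        for threshold, rate in RATE_THRESHOLDS:
            if snr_db >= threshold:
                return rate
        return RATE_THRESHOLDS[-1][1]            # fall back to the lowest rate

    print(select_rate(20.0))   # 11.0 Mbps
    print(select_rate(9.0))    # 2.0 Mbps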

The central idea of RBAR is to allow the receiver to select the appropriate rate for
the data packet during the RTS/CTS packet exchange.

Advantages of this approach, according to its authors, include [HOL 2001]:

1. - Both channel quality estimation and rate selection mechanisms are now on the receiver.
This allows the channel quality estimation mechanism to directly access all of the
information made available to it by the receiving hardware (such as the number of multi
path components, the symbol error rate, the received signal strength, etc.), for more
accurate rate selection.

2. - Since the rate selection is done during the RTS/CTS exchange, the channel quality
estimates are nearer to the actual transmission time of the data packet than in existing
sender-based approaches.

3. - It can be implemented in IEEE 802.11 with only minor changes, as we show below.

In RBAR, instead of carrying the duration of the reservation (the duration is part of
the RTS/CTS control frames), the packets carry the modulation rate and the size of the data
packet. This modification serves the dual purpose of providing a mechanism by which the
receiver can communicate the chosen rate to the sender, while still providing neighboring
nodes with enough information to calculate the duration of the requested reservation. The
protocol is as follows:

Referring to figure 7.2 below, node A is in range of Src but not of Dst, and node B is
in range of Dst but not of Src. The sender Src chooses a data rate based on some heuristic
(such as the most recent rate that was successful for transmission to the destination Dst)
and then stores the rate and the size of the data packet in the RTS. Node A, overhearing the
RTS, calculates the duration of the requested reservation DRTS using the rate and packet
size carried in the RTS. This is possible because all of the information required to calculate
DRTS is known to A. A then updates its NAV to reflect the reservation. While receiving the
RTS, the receiver Dst uses the information available to it about the channel conditions to
generate an estimate of the conditions for the impending data packet transmission. Dst then
selects the appropriate rate based on that estimate and transmits it, together with the packet
size, in the CTS back to the sender. Node B, overhearing the CTS, calculates the duration
of the reservation DCTS similarly to the procedure used by A, and then updates its NAV to
reflect the reservation. Finally, Src responds to the receipt of the CTS by transmitting the
data packet at the rate chosen by Dst [HOL 2001]. The short interframe spaces (SIFS)
between RTS, CTS, and DATA are not shown in the figure.

FIGURE 7.2 - Timeline showing the changes to the DCF protocol needed for the proposed
Receiver-Based Auto Rate protocol [HOL 2001]

If the rates chosen by the sender and the receiver are different, then the reservation
DRTS calculated by A will no longer be valid. Thus, we refer to DRTS as a tentative
reservation. A tentative reservation serves only to inform neighboring nodes that a
reservation has been requested but that the duration of the final reservation may differ. Any
node that receives a tentative reservation is required to treat it the same as a final
reservation with regard to later transmission requests; that is, if a node overhears a tentative
reservation, it must update its NAV so that any later requests it receives that would conflict
with the tentative reservation are denied. Thus, a tentative reservation effectively serves as
a placeholder until either a new reservation is received or the tentative reservation is
confirmed as the final reservation. Final reservations are confirmed by the presence or
absence of a special sub-header, called the Reservation Sub-Header (RSH), in the MAC
header of the data packet. The reservation sub-header consists of a subset of the header
fields that are already present in the 802.11 data packet frame, plus a check sequence that
protects the sub-header. The fields in the reservation sub-header are only those needed to
update the NAV, and essentially amount to the same fields present in an RTS. Furthermore,
the fields (minus the check sequence) retain the same functionality that they have in a
standard 802.11 header [HOL 2001].

The format of the Reservation Sub-Header is illustrated in [HOL 2001].

Referring again to figure 7.2 above, if the tentative reservation DRTS is incorrect,
Src will send the data packet with the special MAC header containing the RSH sub-header.
A, overhearing the RSH, will immediately calculate the final reservation DRSH and then
update its NAV to account for the difference between DRTS and DRSH. Note that, for A
to update its NAV correctly, it must know what contribution DRTS has made to its NAV.
One way this can be done is to maintain a list of the end times of each tentative reservation,
indexed by the <sender, receiver> pair. Thus, when an update is required, a node can use
the list to determine whether the difference in the reservations requires a change in the
NAV [HOL 2001].
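
The bookkeeping suggested above might look like the following minimal sketch (our own
illustration, not code from [HOL 2001]): tentative reservation end times are stored per
<sender, receiver> pair, and the NAV is corrected when the final reservation carried in the
RSH differs from the tentative one.

    class NavTracker:
        """Tracks tentative RTS reservations and corrects the NAV on a final RSH."""

        def __init__(self):
            self.nav_end = 0.0                   # time until which the medium is reserved
            self.tentative = {}                  # (sender, receiver) -> tentative end time

        def on_rts_overheard(self, sender, receiver, end_time):
            self.tentative[(sender, receiver)] = end_time
            self.nav_end = max(self.nav_end, end_time)

        def on_rsh_overheard(self, sender, receiver, final_end_time):
            tentative_end = self.tentative.pop((sender, receiver), None)
            if tentative_end is not None and final_end_time != tentative_end:
                # Recompute the NAV: drop the tentative contribution, add the final one.
                others = list(self.tentative.values()) + [final_end_time]
                self.nav_end = max(others)
            else:
                self.nav_end = max(self.nav_end, final_end_time)

    nav = NavTracker()
    nav.on_rts_overheard("Src", "Dst", end_time=10.0)       # DRTS at the tentative rate
    nav.on_rsh_overheard("Src", "Dst", final_end_time=7.5)  # DRSH at the receiver's rate
    print(nav.nav_end)   # 7.5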

A throughput comparison between ARF (the rate adaptation scheme used in
Lucent's IEEE 802.11 WaveLAN II network devices) and RBAR is shown in figure 7.3
below. The test bed was built in NS-2 [NET 2002] with 20 nodes in continuous motion
within a 1500x300 meter arena. Even as the mean node speed increases, the mean
throughput of RBAR remains better than that of ARF.

FIGURE 7.3 - RBAR vs. ARF



7.3 Data-driven Cut-through Multiple Access (DCMA) Protocol

In [ACH 2002], to implement the proposed Data-driven Cut-through Multiple
Access (DCMA), the authors presented an architecture for a "wireless router", i.e. a
forwarding node with a single wireless NIC in a multi-hop wireless network, that allows a
packet to be forwarded entirely within the network interface card of the forwarding node
without requiring per-packet intervention by the node's CPU. This is made possible by
enhancing the IEEE 802.11 DCF channel access scheme and by carrying a label in the
RTS/ACK packet, which allows the NIC (Network Interface Card) to determine the
packet's next hop. The NIC is augmented with a label-switching table mapping incoming
labels and MAC addresses to outgoing labels and MAC addresses.
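
A minimal sketch of such a label-switching table (our own illustration; field names are
assumptions): the pair (incoming label, incoming MAC address) is mapped to the outgoing
label and next-hop MAC address, so a DATA packet can be forwarded without involving
the host CPU, while a table miss falls back to normal host processing:

    class LabelSwitchingNic:
        """Maps (incoming label, incoming MAC) to (outgoing label, next-hop MAC)."""

        def __init__(self):
            self.table = {}

        def add_entry(self, in_label, in_mac, out_label, out_mac):
            self.table[(in_label, in_mac)] = (out_label, out_mac)

        def cut_through(self, in_label, in_mac):
            """Return (out_label, out_mac) for fast forwarding, or None to pass the
            packet up to the host (reverting to plain 802.11 handling)."""
            return self.table.get((in_label, in_mac))

    # Node B forwards packets from A towards D via C.
    nic_b = LabelSwitchingNic()
    nic_b.add_entry(in_label="L_AB", in_mac="MAC_A", out_label="L_BC", out_mac="MAC_C")
    print(nic_b.cut_through("L_AB", "MAC_A"))   # ('L_BC', 'MAC_C')
    print(nic_b.cut_through("L_XY", "MAC_X"))   # None -> no cut-through, normal path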

The DCMA scheme is based on enhancements to the basic IEEE 802.11 4-way handshake,
involving the exchange of RTS/CTS/DATA/ACK packets.

DCMA does not require any modifications or enhancements to the 802.11 NAV. A
node simply stays quiet as long as it is aware of (contiguous) activity involving one or more
of its neighbors.

The timing diagram in figure 7.4 is useful for understanding the operation of DCMA.
Assume that node A has a packet to send to node D. A sends an RTS to B, which includes a
label LAB associated with the route to D. Assuming that its NAV is not busy for the
proposed transmission duration, B replies with a CTS. B receives the DATA packet and
then sends an RTS/ACK control packet, with the ACK part addressed to A and the RTS part
addressed to C, along with a label LBC. C's actions are analogous to B's, except that it uses
the label LCD in its RTS/ACK message [ACH 2002].

Label lookup: In DCMA, the RTS/ACK (or RTS) packet bears the label. In principle, the
label could be carried in the DATA packet, since the label lookup is not strictly necessary
until after the DATA has been received [ACH 2002].

However, by providing the label information in the RTS, we give the forwarding
node additional time to complete the lookup. This should not be a problem, since the
DATA duration is at least tens of microseconds (e.g., a 500-byte packet on a 2 Mbit/s
channel takes 2 ms). Due to the competition among different flows, DCMA can fail to set
up the "fast-path" (cut-through) forwarding at different points in the traffic path. Upon the
failure of a cut-through attempt, DCMA reverts to the base 802.11 specification, aborting
the cut-through attempt and using the exponential backoff to regulate subsequent access to
the shared channel. The channel contention resolution of DCMA is the same as that of
IEEE 802.11, with a node remaining silent as long as any of its one-hop neighbors is either
receiving or transmitting a data packet. Accordingly, this protocol does not suffer from any
additional penalties beyond those present in IEEE 802.11. Like the base IEEE 802.11
protocol, DCMA can suffer from contention for channel access between consecutive hops
on the same path (e.g., in figure 7.4, A may try to send another packet to B while C is
engaged in forwarding the previous packet to D). A more detailed explanation can be found
in [ACH 2002].

FIGURE 7.4 - Fast forwarding in DCMA: the RTS/CTS/DATA/RTS-ACK exchange along
nodes A, B, C and D, with a SIFS separating consecutive frames; the RTS/ACK carries the
outgoing MAC address, a flag and the label [ACH 2002]

The simulation was implemented using the NS-2 network simulator [NET 2002].
The parameters were tuned to model the Lucent WaveLAN card at a 2 Mbps data rate. The
effective transmission range was 250 meters and the interfering range about 550 meters.
Figure 7.5 shows, for DCMA, a throughput improvement of about 20% and latency
improvements ranging from 100% for small packets (256 bytes) to 63% for bigger ones
(1546 bytes).

FIGURE 7.5 - Comparative Performance IEEE 802.11 vs. DCMA



8 Conclusions
- Wired TCP cannot distinguish packet losses due to wireless errors from those due to
congestion [WEN 2001a].

- In TCP, if the source is not aware of the route failure, the source continues to transmit (or
retransmit) packets even when the network is down. This leads to packet loss and
performance degradation. Since packet loss is interpreted as congestion, TCP invokes
congestion recovery algorithms when the route is reestablished, leading to throttling of
transmission [CHA 2001].

- In practice, it might be difficult to identify which packets are lost due to errors on a noisy
link [BAL 97].

- The signal-to-noise ratio (SNR) is a good measure for detecting interference [MAR 98].

- The authors of [MAR 98] studied and showed that window size variations in response to
the occurrence of interference are the reason for performance degradation.

- The main difference between MANETs (Mobile Ad hoc Networks) and cellular networks
is that MANET stations communicate using identical radio transceivers without the aid of a
fixed infrastructure such as cellular base stations and fixed routers.

- In wireless multi-hop networks, the hidden node problem still exists, although the
standard has paid much attention to this problem. The protocol has defined several schemes
to deal with this, such as physical carrier sensing and the RTS/CTS handshake. These
schemes work well to prevent the hidden node problem in a wireless LAN where all nodes
can sense each other’s transmissions. The sufficient condition for not having hidden nodes
is: any station that can possibly interfere with the reception of a packet from node A to B is
within the sensing range of A. This might be true in an 802.11 basic service set. Obviously,
however, this condition cannot be true in a multi-hop network [XUS 2002].

- There is no scheme in IEEE 802.11 standard to deal with the exposed node problem,
which will be more harmful in a multi-hop network [XUS 2002].

- The 802.11 MAC is based on carrier sensing, including the physical layer sensing
function (CCA). As we know, carrier sensed wireless networks are usually engineered in
such a way that the sensing range (and interfering range) is typically larger than the
communication range. According to the IEEE 802.11 protocol implementation in the NS-2
simulation software, which is modeled after the Wavelan wireless radio, the interfering
range and the sensing range are more than two times the size of the communication range.
The larger sensing and interfering ranges will degrade the network performance severely in
the multi-hop case. The larger interfering range makes the hidden node problem worse; the
larger sensing range intensifies the exposed node problem [XUS 2002].

- The binary exponential back-off scheme always favors the latest successful node. This
causes unfairness even when the protocol is not used in multi-hop networks, i.e., in the
typical wireless LAN defined in the IEEE 802.11 standard [XUS 2002].

- In [HOL 2002], the Dynamic Source Routing (DSR) protocol is analyzed; the authors
observed different characteristics that affect TCP performance and suggested that, instead
of augmenting TCP/IP, it would be better to improve the routing protocols so that mobility
is more effectively masked. Clearly, extensive modifications to upper layer protocols are
less desirable than a routing protocol that can react quickly and efficiently so that TCP is
not disturbed. However, regardless of the efficiency and accuracy of the routing protocol,
network partitioning and delays will still occur because of mobility, which cannot be
hidden.

- In general, wireless technology does not come to compete with wired technology but to
offer an extra service to the user through the concept of ubiquity: using the Internet in any
place, at any moment, and with any data.

- Finally, based on [TOH 2002], we consider that the implementation of an ad hoc wireless
network using current mobile computer technology, wireless adapters, and ad hoc routing
software is feasible, and that the resulting communication performance is acceptable for
most existing data applications. It is feasible to augment existing wireless computers with
ad hoc networking capability.

References
[ACH 2002] ACHARYA, A.; MISRA, A.; BANSAL, S. A Label-switching Packet
Forwarding Architecture for Multi-hop Wireless LANs. In:
INTERNATIONAL WORKSHOP ON WIRELESS MOBILE
MULTIMEDIA. Proceedings… Atlanta, Georgia, USA:[s.n.], 2002. p.33-40
Available at.: <http://doi.acm.org/10.1145/570790.570797> Visited on:
Dec. 01, 2002.
[ADH 2002] AD-HOC Wireless Multicasting. Available at:
<http://www.online.kth.se/courses/common/adhoc/newcontent/7_2.html>.
Visited on: Oct. 30, 2002.
[ARA 2001] ARÁUZ, J.; BANERJEE, S.; KRISHNAMURTHY, Prashant. MAITE: A
Scheme for Improving the Performance of TCP over Wireless Channels.
VEHICULAR TECHNOLOGY CONFERENCE, VTC, 2001.
Proceedings… [S.l.:s.n.], 2001. v.1, p. 252- 256.
[BAK 95] BAKRE, A.; BADRINATH, B.R. I-TCP: Indirect TCP for Mobile Hosts.
In: THE INTERNATIONAL CONFERENCE ON DISTRIBUTED
COMPUTING SYSTEMS, 15., 1995. Proceedings… Vancouver,
BC, Canada:[s.n.], 1995. p. 136-143. Available at:
<http://rictec.capes.gov.br/login.asp>. Visited on: Aug. 27, 2002.
[BAK 97] BAKRE, A.V.; BADRINATH, B.R. Implementation and Performance
Evaluation of Indirect TCP. IEEE Transactions on Computers, New York,
v. 46, n. 3, p. 260-278, Mar 1997.
[BAL 95] BALAKRISHNAN, H. et al. Improving TCP/IP Performance over
Wireless Networks. In: ACM INTERNATIONAL CONFERENCE ON
MOBILE COMPUTING AND NETWORKING, MOBICOM, 1., 1995.
Proceedings… [S.l.:s.n.], 1995.
[BAL 97] BALAKRISHNAN, H. et al. A Comparison of Mechanisms for Improving
TCP Performance over Wireless Links. IEEE/ACM Transactions on
Networking, Atlanta, v. 5, n. 6, p. 756-769, Dec. 1997.

[CHA 97] CHANT, A.; TSANG, D.; GUPTA S. TCP (Transmission Control Protocol)
over wireless Links. VEHICULAR TECHNOLOGY CONFERENCE, IEEE
47., 1997. Proceedings… Phoenix:[s.n.],1997. v.3, p.1326-1330
[CHA 2001] CHANDRAN, K. et al. A Feedback-Based Scheme for Improving TCP
Performance in Ad Hoc Wireless Networks. IEEE Personal
Communications, New York, v. 8, n. 1, p. 34-39, Feb. 2001
[CHA 88] CHANDRAN, K.; RAGHUNATHAN, S.; VENKATESAN, S.; PRAKASH,
R. A Feedback Based Scheme for Improving TCP performance in ad-hoc
wireless networks. In: INTERNATIONAL CONFERENCE ON
DISTRIBUTED COMPUTING SYSTEMS, 1998. Proceedings…
Amsterdam, Netherlands:[s.n.], 1998. p. 472-479.

[CHE 98] CHEN T.; GERLA, M. Global State Routing: A New Routing Scheme for
Ad-hoc Wireless Networks. In: IEEE INTERNATIONAL CONFERENCE ON
COMMUNICATIONS, ICC, 1998. Proceedings… Atlanta, GA, USA:[s.n.],
1998. v.1, p. 171-175.
[CHE 2001] CHENGZHOU, L.; PAPAVASSILIOU, S. The Link Signal Strength Agent
(LSSA) Protocol for TCP Implementation in Wireless Mobile Ad Hoc
Networks. In: VEHICULAR TECHNOLOGY CONFERENCE, VTC,
54., 2001. Proceedings... Piscataway, NJ: IEEE, 2001. v.4, p.2528-2532.
[CHI 97] CHIANG, C.-C. Routing in Clustered Multihop, Mobile Wireless
Networks with Fading Channel. In: IEEE SINGAPORE INTERNATIONAL
CONFERENCE ON NETWORKS, SICON, 1997. Proceedings…
Singapore:[s.n.],1997. Available at:
<http://www.ics.uci.edu/~atm/adhoc/paper-collection/gerla-routing-
clustered-sicon97.pdf>. Visited on: Aug. 28, 2002.
[CHI 2001] CHIASSERINI, C.; MEO, Michela. Improving TCP over Wireless
through Adaptive Link Layer Setting. In: GLOBAL
TELECOMMUNICATIONS CONFERENCE, GLOBECOM, 2001.
Proceedings… San Antonio, TX, USA:[s.n.], 2001. v.3, p.1766-1770.
Available at <http://www.tlc-
networks.polito.it/carla/papers/globecom01.pdf>. Visited on: Aug. 28, 2002.
[COR 2002] CORDEIRO, C.; AGRAWAL, D. Mobile Ad hoc Networking. Minicurso
SBRC 2002
[DUB 97] DUBE, R. et al. Signal Stability based adaptive routing for Ad Hoc
Mobile network. IEEE Personal Communications, New York, p. 36-45,
Feb. 1997. Available at:
<http://www.cs.umd.edu/projects/mcml/papers/pcm97.ps>. Visited on:
Aug.28, 2002.
[ELA 2002] ELAARAG, H. Improving TCP Performance over Mobile Networks. ACM
Computing Surveys. New York, v.3, n.3, p.357-374, Sept. 2002. Available
at: <http://www.acm.org/>. Visited on: Aug. 28, 2002.
[GLO 2002] GLOMOSIM - Global Mobile Information Systems Simulation Library.
University of California (UCLA). Available at:
<http://pcl.cs.ucla.edu/projects/glomosim/>. Visited on: Sept. 2002
[HOL 2001] HOLLAND, G.; VAIDYA, N.; BAHL, P. A rate-adaptive MAC protocol for
multi-Hop wireless networks. In: INTERNACIONAL CONFERENCE ON
MOBILE COMPUTING AND NETWORKING, MOBICOM, 7., 2001
Proceedings… Rome, Italy:[s.n.], 2001. p.236-251. Available at:
<http://doi.acm.org/10.1145/381677.381700>. Visited on: Aug. 29, 2002
[HOL 2002] HOLLAND, G.; VAIDYA, N. Analysis of TCP Performance over Mobile
Ad Hoc Networks. Wireless Networks, Hingham, v. 8, n. 2/3, p. 275-288,
Mar. 2002. Available at: <http://www.acm.org/>. Visited on: Dec. 10,
2002.
[HUS 2001] HUSTON, G. TCP in a Wireless World. IEEE Internet Computing, New
York, v. 5, n. 2, p. 82-84, Mar./Apr. 2001.
[IET 99] IETF DRAFT. Draft-ietf-manet-cbrp-spec-01.txt: Cluster Based Routing
Protocol. [S.l.], Aug. 1999. Available at:
<http://www.eecs.wsu.edu/~rgriswol/Drafts-RFCs/draft-ietf-manet-cbrp-
spec-01.txt > Visited on: Jan. 10, 2003.
[IET 99a] IETF DRAFT. Draft-ietf-manet-dsr-03.txt: The Dynamic Source Routing
Protocol for Mobile Ad Hoc Networks. [S.l.], Oct. 1999. Available at:
http://www.eecs.wsu.edu/~rgriswol/Drafts-RFCs/draft-ietf-manet-dsr-03.txt.
Visited on: July 02, 2002.
[IET 99b] IETF DRAFT. Draft-ietf-manet-aodv-04.txt. Ad Hoc On demand
Distance Vector Routing.[S.l.], Oct. 1999. Available at:
http://www.ietf.org/proceedings/99nov/I-D/draft-ietf-manet-aodv-04.txt.
Visited on: July. 02, 2002.
[IET 2002] IETF. The Internet Engineering Task Force. Available at:
<http://www.ietf.org/html.charters/manet-charter.html>. Visited on: July 12,
2002.
[ISO 99] ISO/IEC 8802-11: 1999(E) ANSI/IEEE. Std 802.11. Part 11: Wireless LAN
Medium Access Control (MAC) and Physical Layer (PHY) specifications:
1999.
[IWA 99] IWATA,A. et al. Scalable Routing Strategies for Ad Hoc Wireless
Networks. IEEE Journal on Selected Areas in Communications, [S.l.],
v.17, n.8, p.1369-1379, Aug. 1999. Available at:
<http://www.cs.ucla.edu/NRL/wireless/PAPER/jsac99.ps.gz>. Visited on
Aug. 20, 2002.
[JIA 2001] JIAN, Liu; SINGH, Suresh. ATCP: TCP for Mobile Ad Hoc Networks.
IEEE Journal on Selected Areas in Communications, New York, v. 19,
n. 7, p.1300-1315, July. 2001.
[JIC 2001] JIAN, H.; CHENG, S; CHEN, X. TCP Reno and Vegas performance in
wireless ad hoc networks. In: IEEE INTERNATIONAL CONFERENCE ON
COMMUNICATIONS, ICC, 2001. Proceedings… [S.l.:s.n.], 2001, v.1,
p.132-136.
[JIN 99] JIN, K.; KIM, K.; LEE, J. SPACK: Rapid Recovery of the TCP
Performance using SPLIT-ACK in Mobile Communication Environments.
IEEE REGION 10 CONFERENCE ,TENCON 1999. Proceedings… Cheju
Island, South Korea:[s.n.], 1999. v.1, p.761-764.
[JOA 99] JOA-NG, M; LU, I.-T. A Peer-to-Peer zone-based two-level link state
routing for mobile Ad Hoc Networks. IEEE Journal on Selected Areas in
Communications, New York, v.17, n.8, p. 1415-1425, Aug. 1999.

[KIM 2000] KIM, D.; TOH C.-K; CHOI, Y. TCP-BuS: Improving TCP Performance in
Wireless Ad Hoc Networks. In: IEEE INTERNATIONAL CONFERENCE
ON COMMUNICATIONS, ICC, 2000. Proceedings... New Orleans, L.A.,
USA:[s.n.], 2000. v. 3, p.1707-1713.
[KWO 2002] KWON, Y.; FANG, Y.; LATCHMAN, H. Improving Transport Layer
Performance by Using A Novel Medium Access Control Protocol with Fast
Collision Resolution in Wireless LANs. In: ACM INTERNATIONAL
WORKSHOP ON MODELING ANALYSIS AND SIMULATION OF
WIRELESS AND MOBILE SYSTEMS, 5., 2002. Proceedings… Atlanta,
Georgia USA:[s.n.], 2002, p.112-119.
[LOW 2000] LOW, S. H.; PETERSON, L. L.; WANG, L. Understanding TCP Vegas: a
duality model. Journal of the ACM, New York, v. 49, n. 2, p. 207-235,
Mar. 2002.
[MAR 2003] BLESSED VIRGIN MARY. Queen of Peace.- Messages of Our Lady.
Medjugorje. Available at: <http://www.medjugorje.hr>,
<http://www.medjugorje.org>. and Argüera–Bahia. Available at:
<http://www.apelosurgentes.com.br> Visited on: May 25, 2003.
[MAR 98] MARUTHI, B.; ARUN, K.; AZIZOGLU, M. Interference Robust TCP. In:
INTERNATIONAL SYMPOSIUM ON FAULT-TOLERANT
COMPUTING, 29.,1999. Proceedings… Madison, WI, USA:[s.n.], 1999.
p.102-109.
[MAJ 99] MA, J.; WU, J. Improving TCP Performance in IP Networks with Wireless
links. In: IEEE INTERNATIONAL CONFERENCE ON PERSONAL
WIRELESS COMMUNICATION, 1999. Proceedings... Jaipur,
India:[s.n.], 1999. p. 211-215.
[MUR 96] MURTHY, S.; GARCIA-LUNA-ACEVES, J. An Efficient Routing
Protocol for Wireless Networks. Mobile Networks and Applications, [S.l.],
v.1, n.2, p.183-197, Oct. 1996.

[NET 2002] NETWORK SIMULATOR. NS 2. Available at:


<http://www.isi.edu/nsnam/ns/>. Visited on: Dec. 20, 2002.
[OFD 2002] FLARION. OFDM for Mobile Data Communications Web Pro forum
Tutorial. Available at: <http://www.iec.org/>. Visited on: July 25, 2002.
[OPT 2003] OPNET TECHNOLOGIES, Inc. Available at: <http://www.opnet.com/>.
Visited on: Jan. 30, 2003.
[PAD 2000] PADMINI, M. Routing Protocols for Ad Hoc Mobile Wireless Networks.
[S.l.], 2000. Available at: <http://www.cis.ohio-state.edu/~jain/cis788-
99/adhoc_routing/index.html>. Visited on: July. 02, 2002.
[PAN 2000] PAN, J.; MARK, J. W.; SHEN X. TCP Performance and its Improvement
over Wireless Links. In: IEEE GLOBAL TELECOMMUNICATIONS
CONFERENCE, GLOBECOM, 2000. Proceedings... San Francisco, CA,
USA:[s.n.], 2000. v.1, p.62-66.

[PAR 97] PARK, V.D.; CORSON, M.S. A highly adaptive distributed routing
algorithm for mobile wireless networks. In: CONFERENCE OF THE IEEE
COMPUTER AND COMMUNICATIONS SOCIETIES, INFOCOM, 16.,
1997. Proceedings… Kobe, Japan: [s.n.], 1997. v. 3, p. 1405-1413.
[PAR 99] PARSA, C.; GARCIA-LUNA-ACEVES, J. TULIP: A Link-Level
Protocol for Improving TCP over Wireless Links. In: IEEE WIRELESS
COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC,
1999. Proceedings… New Orleans, LA, USA:[s.n.], 1999. v.3, p.1253-1257.
[PAR 2000] PARSA, C.; GARCIA-LUNA-ACEVES, J. J. Differentiating
congestion vs. Random Loss: A Method for Improving TCP Performance
over Wireless links. In: WIRELESS COMMUNICATIONS AND
NETWORKING CONFERENCE, WCNC, 2000. Proceedings… Chicago,
IL, USA:[s.n.], 2000. v. 1, p. 90-93.
[PER 94] PERKINS, C.E.; BHAGWAT, P. Highly Dynamic Destination-Sequenced
Distance-Vector Routing (DSDV) for Mobile Computers. In: ACM
CONFERENCE ON COMMUNICATIONS ARCHITECTURES,
PROTOCOLS AND APPLICATIONS, 1994. Proceedings… London,
United Kingdom:[s.n.], p. 234-244.
[PRE 2002] PREM, E. C. Wireless Local Area Networks. Available at:
<http://www.cis.ohio-state.edu/~jain/cis788-97/wireless_lans/index.htm>.
Visited on: July 02, 2002.
[RAT 98] RATNAM, K.; MATTA, I. WTCP: An Efficient Mechanism for Improving
TCP Performance over Wireless Links. In: IEEE SYMPOSIUM ON
COMPUTERS AND COMMUNICATIONS, ISCC, 3., 1998.
Proceedings… Athens, Greece:[s.n.], 1998. p. 74-78.
[ROD 2001] RODRIGUEZ, A. et al. TCP/IP Tutorial and Technical Overview. Available
at: <http://www.ibm.com/redbooks>. Visited on: Aug. 25, 2002.
[STA 2000] STALLINGS, W. Data & Computer Communications. 6th ed. Upper
Saddle River: Prentice Hall, 2000.
[SUN 2001] SUN, D.; MAN, H. Performance Comparison of Transport Control
Protocols over Mobile Ad Hoc Networks. In: IEEE INTERNATIONAL
SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO
COMMUNICATIONS, 12., 2001. Proceedings… San Diego, CA,
USA:[s.n.], 2001. v.2, p. G-83-G-87.
[TOH 96] TOH, C.-K. A novel distributed routing protocol to support Ad hoc
mobile computing. In: IEEE INTERNATIONAL CONFERENCE ON
COMPUTERS AND COMMUNICATIONS, 15., 1996. Proceeding…
Scottsdale, AZ, USA:[s.n.], 1996. p.480-486.
[TOH 2002] TOH, C.-K.; DELWAR, M.; ALLEN, D. Evaluating the
Communication Performance of an Ad Hoc Wireless Network. IEEE
Transactions on Wireless Communications, [S.l.], v.1, n.3, p. 402-414,
July 2002.
[TSA 2000] TSAOUSSIDIS, V.; BADR, H. TCP-Probing: Towards an Error Control
Schema with Energy and Throughput Performance gains. In: IEEE
INTERNATIONAL CONFERENCE ON NETWORK PROTOCOLS, 2000.
Proceedings… Osaka , Japan:[s.n.], 2000. p.12-21.
[WEN 2000] WEN-TSUEN, C.; JYH-SHIN, L. Some Mechanisms to Improve TCP/IP
Performance over Wireless and Mobile Computing Environment. In: IEEE
INTERNATIONAL CONFERENCE ON PARALLEL AND
DISTRIBUTED SYSTEMS, 7., 2000. Proceedings... Iwate, Japan:[s.n.],
2000. p.437-444.
[WEN 2001] WENQING, D.; JAMALIPOUR, A. A New Explicit Loss Notification
with Acknowledgment for Wireless TCP. In: IEEE INTERNATIONAL
SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO
COMMUNICATIONS, 12., 2001. Proceedings… San Diego, CA,
USA:[s.n.], 2001. v.1, p. B-65 - B-69.
[WEN 2001a] WENQING, D.; JAMALIPOUR A. Delay Performance of the New
Explicit Loss Notification TCP Technique for Wireless Networks. In:
IEEE GLOBAL TELECOMMUNICATIONS CONFERENCE,
GLOBECOM, 2001. Proceedings… San Antonio, TX, USA:[s.n.], 2001.
v.6, p.3483-3487.
[XUS 2000] XU, S.; SAADAWI, T.; LEE, M. Comparison of TCP Reno and Vegas in
wireless Mobile Ad hoc Networks. In: IEEE CONFERENCE ON LOCAL
COMPUTER NETWORKS, LCN, 25., 2000. Proceedings… Tampa,
FL, USA:[s.n.], 2000. p. 42-43.
[XUS 2002] XU, S. G.; SAADAWI, T. Revealing the Problems with 802.11 Medium
Access Control protocol in multi-hop wireless ad hoc networks. Computer
Networks, [S.l.], v. 38, n. 4 , p. 531-548, Mar. 2002.
[XYL 2001] XYLOMENOS, G. et al. TCP Performance Issues over Wireless Links.
IEEE Communications Magazine, New York, v.39, n. 4, p. 52-58, Apr.
2001.
[ZHE 2002] ZHENGPING, Z. In Building Wireless LAN. Available at:
<http://www.cis.ohio-state.edu/~jain/cis788-99/wireless_lans/index.html>.
Visited on: July 01, 2002.
