INSTITUTO DE INFORMATICA
PROGRAMA DE PÓS-GRADUAÇÃO EM COMPUTAÇÃO
Acknowledgments
Special thanks
Table of Contents
List of Abbreviations .................................................................................... 7
List of Figures ............................................................................................. 9
List of Tables ............................................................................................. 11
Abstract ..................................................................................................... 12
Resumo ..................................................................................................... 13
1 Introduction ............................................................................................ 14
1.1 Motivation ........................................................................................... 15
List of Abbreviations
AP Access Point
ACK Acknowledgement
DS Direct Sequence
FH Frequency Hopping
MH Mobile Host
STA Station
List of Figures
FIGURE 2.1 - General overview of IEEE 802 ............................................................. 16
FIGURE 3.3 - Existing multicast routing protocols for ad hoc wireless networks ........ 32
FIGURE 4.5 - TCP slow start and congestion avoidance behavior in action ............... 42
FIGURE 4.7 - Comparison of performance of four TCPs under different mobility metrics ... 49
FIGURE 5.3 - TCP-Reno throughput over an 802.11 fixed, linear, multi-hop network .. 55
FIGURE 5.4 - Instability problem in the four-hop TCP Reno connection .................... 57
FIGURE 5.5 - Throughput of two TCP connections with different sender and receiver . 58
FIGURE 5.6 - Throughput of two TCP connections with the same hop number .......... 60
FIGURE 6.5 - Option field for Negative Acknowledgement in TCP header ................. 79
FIGURE 6.7 - State transition diagram for ATCP at the sender ................................... 83
FIGURE 6.11 - State transition of MAITE's features at a mobile host that acts as a TCP sender ... 93
FIGURE 7.5 - Comparative performance of IEEE 802.11 vs. DCMA .......................... 105
List of Tables
TABLE 5.1 - Route discovery (RD) time at 1-second beaconing interval for different hop counts ... 56
Abstract
Wireless networks are one of the most challenging environments for the Internet
protocols, and for TCP in particular. TCP implementations designed for wired networks
perform poorly in wireless environments. The main reason for this poor performance is
that TCP cannot distinguish packet losses due to wireless errors from those due to
congestion. Traditional transport connections set up without any modification in wireless
ad hoc networks are plagued by problems such as high bit error rates, frequent route
changes, and network partitions. In this paper, we present a general overview of the
IEEE 802.11 standard, TCP, wireless ad hoc networks, and different solutions to
congestion in TCP over wireless links.
Resumo
As redes sem fio (“wireless”) são um dos ambientes mais desafiadores para os
protocolos da Internet, em particular o TCP. Os protocolos TCP projetados para redes
com cabeamento não apresentam bom desempenho em ambientes sem fio. A razão
principal deste baixo desempenho é o fato de o TCP não distinguir entre pacotes perdidos
devido a erros do meio sem fio e pacotes perdidos devido a congestionamento na rede.
As conexões de transporte tradicionais, sem modificações, em redes ad hoc sem fio são
atingidas por problemas tais como taxas de erro elevadas, mudanças frequentes de rota e
partições. Neste trabalho apresenta-se uma visão geral do padrão IEEE 802.11, do TCP,
das redes ad hoc sem fio e de diferentes soluções para congestionamento em redes sem fio.
1 Introduction
Wireless networks are one of the most challenging environments for the Internet
protocols, and for TCP in particular. Two broad approaches can be identified.
The first allows mobile wireless devices to function like any other Internet-connected
device, providing seamless interworking between the wired and wireless worlds. The
second approach, called the "walled garden", puts a Web client into the wireless device
and uses some form of proxy server at the boundary between the wireless network and the
Internet. This is the approach adopted by the Wireless Application Protocol (WAP)
Forum [HUS 2001].
Wireless communication faces challenges different from those present in the wired
world, and if mobility is present, even more challenges appear. Mobility is the origin of
many problems to be solved by wireless communication systems, but other physical
challenges exist as well, such as signal attenuation, reflection, refraction, and multi-path
propagation. Furthermore, all of these problems are reflected in the upper layers of the
communications protocol stack [ARA 2001].
TCP communications over wireless channels must address two main problems. The
first problem refers to the high bit error rate (BER) that a wireless channel experiences.
High BER could cause the corruption of data transmitted over a link, which may result in a
loss of TCP data segments or acknowledgments (ACKs). The second problem refers to the
effect of disconnections that could occur while mobile hosts are “handed off” from one cell
to another or when physical obstacles impede the signals from reaching the receivers either
at the base stations or at a mobile host [ARA 2001].
All of this results in a waste of bandwidth and battery power, which are
unnecessarily used to retransmit and process information.
Due to the strong drive toward wireless Internet access through mobile terminals, these
problems must be carefully studied in order to build improved systems.
WLANs were standardized as IEEE 802.11 in 1999 and use either direct sequence (DS)
or frequency hopping (FH) spread spectrum radios, in the 900 MHz or 2.4 GHz frequency
bands. While the original bit rate was 2 Mbit/s, more recent WLANs offer 5.5 Mbit/s and
11 Mbit/s bit rates, with 54 Mbit/s in IEEE 802.11a.
It is important to remark that WLANs deploy carrier sense multiple access with collision
avoidance (CSMA/CA) to share the channel, instead of IEEE 802.3 Ethernet's CSMA with
collision detection (CSMA/CD).
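As a rough illustration of the collision-avoidance part, the following Python sketch (the function name and slot model are our own simplification; the CWmin=31 and CWmax=1023 values are the 802.11 DSSS defaults) shows how the contention window doubles after each failed transmission attempt, and how a station picks a random number of idle slots to wait:

```python
import random

def backoff_slots(retry, cw_min=31, cw_max=1023, rng=random):
    """Pick a random backoff count for the given retry attempt.

    CSMA/CA doubles the contention window (CW) after each failed
    attempt, up to cw_max; the station then waits that many idle
    slots before transmitting again.
    """
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return rng.randint(0, cw)

# The contention window grows 31, 63, 127, ... and is capped at 1023:
cws = [min((31 + 1) * 2**r - 1, 1023) for r in range(7)]
print(cws)  # [31, 63, 127, 255, 511, 1023, 1023]
```

Randomizing the wait makes it unlikely that two deferring stations restart at the same instant, which is how collisions are avoided rather than detected.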
Meanwhile, two new standardization projects were initiated to provide higher
speeds. IEEE 802.11a uses a high-speed Orthogonal Frequency Division Multiplexing
(OFDM) physical layer deployed in the 5 GHz frequency band, providing bit rates
ranging between 6 and 54 Mbit/s. For an increased bit rate over the existing physical
layer, IEEE 802.11b was developed. Commercial 802.11b solutions provide either
5.5 Mbit/s or 11 Mbit/s rates, using the 2.4 GHz frequency band [XYL 2001].
In this TI, we present different proposed solutions to increase TCP throughput in the
presence of noisy wireless environments and mobility problems.
1.1 Motivation
We started this work because applications such as e-mail, Web browsing, chat, and
general access services over wireless networks are designed to be used with TCP
(Transmission Control Protocol). However, upper-layer protocols like TCP are designed
to operate over relatively error-free links. Consequently, when TCP is used over a
wireless channel, the overall performance can be greatly degraded.
On the other hand, a very interesting application is in emergencies, for example after
natural disasters where the entire communications infrastructure has collapsed and
restoring communications quickly is essential. Using ad hoc wireless networks, an
infrastructure could be set up in hours instead of the days or weeks required for wire-line
communications [COR 2002].
The access standards define seven types of medium access technologies and associated
physical media, each appropriate for particular applications or system objectives. Other
types are under investigation [INT 99].
IEEE Std 802.6 Distributed Queue Dual Bus Access Method and Physical
Layer Specifications.
IEEE Std 802.9 Integrated Services (IS) LAN Interface at the Medium
Access.
IEEE Std 802.10 Interoperable LAN/MAN Security.
IEEE Std 802.11 Wireless LAN Medium Access Control (MAC) and Physical
Layer Specifications.
IEEE Std 802.12 Demand Priority Access Method, Physical Layer and
Repeater Specifications.
- Ubiquity: mobile ad hoc networks may be deployed in any place and at any moment
to exchange any information.
- In IEEE 802.11, the addressable unit is a station (STA). The STA is a message
destination, but not (in general) a fixed location.
- The IEEE 802.11 physical layer uses a medium that has neither absolute nor
readily observable boundaries outside of which stations with conformant physical
layer transceivers are known to be unable to receive network frames.
- The IEEE 802.11 physical layer communicates over a medium significantly less
reliable than wired physical layers.
- The IEEE 802.11 physical layer lacks full connectivity, and therefore the
assumption normally made that every STA can hear every other STA is invalid
(i.e., STAs may be "hidden" from each other).
- The IEEE 802.11 physical layer has time-varying and asymmetric propagation
properties.
- Well-defined coverage areas in the wireless physical layer simply do not exist;
propagation characteristics are dynamic and unpredictable, and small changes in
position or direction may result in dramatic differences in signal strength. Similar
effects occur whether a STA is stationary or mobile.
The IEEE 802.11 architecture consists of several components that interact to provide a
wireless LAN that supports station mobility transparently to upper layers.
The basic service set (BSS) is the basic building block of an IEEE 802.11 LAN. Fig. 2
shows two BSSs, each of which has two stations (STAs) that are members of the BSS.
It is useful to think of the ovals used to depict a BSS as the coverage area within which
the member stations of the BSS may remain in communication. (The concept of area, while
not precise, is often good enough.) If a station moves out of its BSS, it can no longer
directly communicate with other members of the BSS.
The association between a station (STA) and a BSS is dynamic (STAs turn on, turn off,
come within range, and go out of range). To become a member of an infrastructure BSS, a
station must become associated. These associations are dynamic and involve the use of the
distribution system service (DSS).
The DS enables mobile device support by providing the logical services necessary to
handle address to destination mapping and seamless integration of multiple BSSs. A DS
may be created from many different technologies including current IEEE 802 wired LANs.
IEEE 802.11 does not constrain the DS to be either data link or network layer based, nor
does it constrain a DS to be either centralized or distributed in nature. IEEE 802.11
explicitly does not specify the details of DS implementations; instead, it specifies
services.
An access point (AP) is a STA that provides access to the DS by providing DS services
in addition to acting as a STA. Note that all APs are also STAs; thus they are addressable
entities.
The DS and BSSs allow IEEE 802.11 to create a wireless network of arbitrary size and
complexity. IEEE 802.11 refers to this type of network as the extended service set (ESS)
network, which appears the same to the LLC layer as an independent BSS network;
stations within an ESS may communicate, and mobile stations may move from one BSS
to another (within the same ESS) transparently to LLC. In IEEE 802.11, the ESS
architecture (APs and the DS) provides traffic segmentation and range extension.
To integrate the IEEE 802.11 architecture with a traditional wired LAN, a final logical
architectural component is introduced: the portal. A portal is the logical point at which
MSDUs from an integrated non-IEEE 802.11 LAN (e.g., an IEEE 802.3 Ethernet LAN)
enter the IEEE 802.11 DS. For example, a portal is shown in figure 2.2 connecting to a
wired IEEE 802 LAN; all data from non-IEEE 802.11 LANs enter the IEEE 802.11
architecture via a portal. It is possible for one device to offer both the functions of an AP
and a portal; this could be the case when a DS is implemented from IEEE 802 LAN
components. For a better comprehension, see figure 2.2.
A reference model is presented in figure 2.3 that shows the scope of the physical
layer, composed of the Physical Layer Convergence Protocol (PLCP) and Physical
Medium Dependent (PMD) sub layers, as well as the data link layer, composed of the
MAC sub layer [ISO 99].
The Physical Medium Dependent (PMD) sub layer provides the different transmission
techniques (FHSS, DSSS, OFDM, and diffuse infrared) and the modulation and encoding
of the signal between two or more stations, each using the same modulation system:
b) DSSS: Direct Sequence Spread Spectrum. With this technique the transmitted
signal is spread over an allowed band (for example 25 MHz). A pseudo-random binary
string (the spreading code) is used to modulate the transmitted signal. The data bits are
mapped into a pattern of "chips" and mapped back into bits at the destination. The
number of chips that represents a bit is the spreading ratio: the higher the spreading
ratio, the more resistant the signal is to interference; the lower the spreading ratio, the
more bandwidth is available to the user. The IEEE 802.11 standard requires a
spreading ratio of 11. The transmitter and the receiver must be synchronized with the
same spreading code. If orthogonal spreading codes are used, then more than one LAN
can share the same band. This technique is used by Std. IEEE 802.11b [PRE 2002].
c) FHSS: Frequency Hopping Spread Spectrum. This technique splits the band into
many small sub-channels (1 MHz). The signal then hops from sub-channel to
sub-channel, transmitting short bursts of data on each channel for a set period called
the dwell time. The hopping sequence must be synchronized at the sender and receiver,
or information is lost. The FCC requires that the band be split into at least 75
sub-channels and that the dwell time be no longer than 400 ms. In order to jam an
FHSS transmission, the whole band must be jammed. The sub-channels are smaller
than in DSSS. If orthogonal hopping sequences are used, many FHSS LANs can be
co-located [PRE 2002].
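To make the DSSS spreading ratio of 11 concrete, here is a minimal Python sketch (our own simplification, not the standard's modulator) that spreads data bits with the 11-chip Barker code used by 802.11 and recovers them by correlation, even when a few chips are corrupted by interference:

```python
# 11-chip Barker sequence used by 802.11 DSSS (spreading ratio of 11).
BARKER = [1, -1, 1, 1, -1, 1, 1, 1, -1, -1, -1]

def spread(bits):
    """Map each data bit to the 11-chip Barker code (bit 0 inverts it)."""
    chips = []
    for b in bits:
        sign = 1 if b else -1
        chips.extend(sign * c for c in BARKER)
    return chips

def despread(chips):
    """Correlate each 11-chip block against the code to recover the bits."""
    bits = []
    for i in range(0, len(chips), 11):
        corr = sum(c * r for c, r in zip(chips[i:i + 11], BARKER))
        bits.append(1 if corr > 0 else 0)
    return bits

data = [1, 0, 1, 1]
chips = spread(data)
# Flip two chips to model interference; correlation still recovers the bits:
chips[0] = -chips[0]
chips[12] = -chips[12]
print(despread(chips))  # [1, 0, 1, 1]
```

This is why a higher spreading ratio buys interference resistance: several chips per bit must be corrupted before the correlation changes sign.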
The Physical Layer Convergence Procedure (PLCP) sub layer (see figure 2.4)
provides common service access points to its layer and defines a method of mapping the
802.11 PHY sub layer service data units (PSDUs) into a framing format suitable for
sending and receiving user data and management information between two or more
stations using the associated physical medium dependent system. This allows the 802.11
MAC to operate with minimum dependence on the PMD sub layer [ZHE 2002]. Figure 2.4
shows a generic PLCP frame format; the payload can actually be transmitted at more than
1.0 Mbit/s (IEEE 802.11a transmits at up to 54.0 Mbit/s), and future standards may go
higher.
The Medium Access Control (MAC) Layer defines two different access methods,
the Distributed Coordination Function (DCF) and the Point Coordination Function (PCF) as
shown in figure 2.5.
[Figure 2.5: MAC architecture, showing the Point Coordination Function (PCF) built on
top of the Distributed Coordination Function (DCF). Accompanying timing diagrams show
DCF channel access (busy medium, SIFS, PIFS, DIFS, contention window, and backoff
slots selected and decremented while the medium is idle) and the RTS/CTS exchange: the
source sends RTS, the destination answers with CTS and later acknowledges the data with
an ACK, while other stations set their NAV on hearing the RTS or CTS and defer for the
indicated duration.]
It is remarkable that all the stations in an ad hoc wireless network periodically
transmit beacon signals to each other to announce their presence; for example, in
figure 2.8 STAs 15, 22, and 31 are sending beacon transmissions.
[Figure: beacon transmissions in an IBSS (independent BSS, i.e., ad hoc network),
showing the beacon interval, the busy medium, the random delay D1 preceding each
beacon, and the awake period.]
b) PCF: the Point Coordination Function (PCF) is an optional access method which
is only usable on infrastructure network configurations (those that include a DS
medium, APs, and optional portal entities). This access method uses a point coordinator
(PC), which shall operate at the access point of the BSS, to determine which STA
currently has the right to transmit. The operation is essentially that of polling (checking
the status of STAs to detect changes), with the PC performing the role of the polling
master. The operation of the PCF requires additional coordination, not included in
Std 802.11, to permit efficient operation in cases where multiple point-coordinated
BSSs are operating in the same channel, in overlapping physical space. The PCF uses a
virtual carrier-sense mechanism aided by an access priority mechanism. The PCF
distributes information within Beacon management frames to gain control of the
medium by setting the network allocation vector (NAV) (think of the NAV as a
counter) in STAs. In addition, all frame transmissions under the PCF may use an
inter-frame space (IFS) that is smaller than the IFS for frames transmitted via the DCF;
the use of a smaller IFS implies that point-coordinated traffic shall have priority access
to the medium over STAs in overlapping BSSs operating under the DCF access
method. The access priority provided by a PCF may be utilized to create a
contention-free (CF) access method. The PC controls the frame transmissions of the
STAs so as to eliminate contention for a limited period of time.
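As an illustration of how the NAV acts as a counter for virtual carrier sensing, the following toy Python model (our own simplification; a real NAV counts microseconds taken from frame duration fields, not abstract slots) shows a station deferring until an overheard duration expires:

```python
class Station:
    """Toy model of 802.11 virtual carrier sensing via the NAV counter.

    A station that overhears a frame (e.g. an RTS, a CTS, or a Beacon
    starting a contention-free period) sets its network allocation
    vector (NAV) to the duration announced in that frame and treats
    the medium as busy until the NAV counts down to zero.
    """
    def __init__(self):
        self.nav = 0

    def overhear(self, duration):
        # The NAV is only ever extended by a new frame, never shortened.
        self.nav = max(self.nav, duration)

    def tick(self):
        self.nav = max(0, self.nav - 1)

    def medium_free(self):
        return self.nav == 0

sta = Station()
sta.overhear(3)           # e.g. a beacon announcing a 3-slot busy period
print(sta.medium_free())  # False
for _ in range(3):
    sta.tick()
print(sta.medium_free())  # True
```

This is why point-coordinated traffic wins the medium: the PC's beacon sets every other station's NAV before they are allowed to contend.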
The DCF and the PCF shall coexist in a manner that permits both to operate
concurrently within the same BSS. When a point coordinator (PC) is operating in a BSS,
the two access methods alternate, with a contention-free period (CFP) followed by a
contention period (CP). The next figure shows the difference between an infrastructure
and an ad hoc network.
3 Ad Hoc Networks
[Figure: an independent BSS (ad hoc network), with STA 1 and STA 2 communicating
directly over the 802.11 MAC/PHY.]
It is important to see the difference between mobile and portable STAs: portable
STAs move from point to point but are only used while at a fixed location, while mobile
stations can access the WLAN while in motion.
A mobile ad hoc wireless network becomes a multi-hop wireless network when it
needs to send a message through three or more nodes acting as routers between a
transmitting node and a receiving node, extending the communication range of the
network. These types of networks are useful in any situation where temporary network
connectivity is needed over larger areas. Recent work has concentrated on developing
MAC layer protocols and routing protocols for these types of networks. A good example
of a simple multi-hop network can be seen in figure 3.2.
Ad hoc networks are wireless networks of mobile hosts, in which the topology
rapidly changes due to the movement of mobile hosts. This frequent topology change may
lead to sudden packet losses and delays. Transport Protocols like TCP, which have been
designed for reliable fixed networks, misinterpret this packet loss as congestion and invoke
congestion control, leading to unnecessary retransmissions and loss of throughput [CHA
2001].
The topology of an ad hoc network changes every time an MH’s movement results
in the establishment of new wireless links (an MH moves within range of another) or link
disconnections (an MH moves out of range of another which was within its range). The rate
of topology change is dependent on the extent of mobility and transmission range of the
hosts. Routes are heavily dependent on the relative location of MHs. Hence, routes may be
repeatedly invalidated in an unpredictable and arbitrary fashion due to the mobility of hosts.
The mobility of a single node may affect several routes that pass through it [CHA 2001].
- Dynamic topologies: nodes are free to move arbitrarily at different speeds, so the
network topology may change randomly and at unpredictable times.
- Energy-constrained operation: some or all of the nodes may depend on batteries or
other exhaustible means for their energy.
- Limited bandwidth: wireless links offer less bandwidth than wired networks; in
addition, the realized throughput, after accounting for the effects of multi-path,
fading, noise, etc., is often much less than a radio's maximum transmission rate.
- Security menaces: MANETs are more exposed than fixed-cable networks; the
increased possibility of eavesdropping, spoofing, and hacking should be considered.
Conventional protocols like link state and distance vector do not match these requirements
because they do not converge quickly enough or scale well as mobility increases [CHA
2001].
Routing protocols are also mentioned in this work because they are associated with
poor throughput performance in wireless networks as shown in chapter 5 of this TI.
The traditional routing protocols deployed for wired networks cannot be used for
mobile ad hoc networks because of the mobility of the nodes.
All nodes of ad hoc networks behave as routers and take part in discovery and
maintenance of routes to other nodes in the network. The ad hoc routing protocols can be
divided into two classes: Table-driven and On-demand.
In Table-driven routing protocols, each node maintains one or more tables containing
routing information to every other node in the network. All nodes update these tables to
maintain a consistent up-to-date view of the network. When the network topology changes
the nodes propagate update messages throughout the network in order to maintain
consistent and up-to-date routing information about the whole network. In the following,
we present some of the more important characteristics of the Table-driven and On-demand
approaches:
Table-driven:
- Table-driven or proactive approach uses periodic route updates, and can either be
link-state based or distance-vector based.
- The word proactive means that this approach always maintains routes in advance,
reacting to link changes as they occur rather than waiting for a route to be requested.
- Drawbacks of this approach are that it is inefficient if there is little demand for
routes and that it tends to be unstable at high mobility.
On-Demand:
- This approach is termed reactive, meaning that it reacts specifically to link changes
only when needed.
- The advantage of this approach is that it is both power and bandwidth efficient.
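To contrast the two classes, the sketch below (a hypothetical simplification; real protocols exchange routing messages between nodes rather than calling a shared function) shows the kind of all-destinations route computation a table-driven node keeps up to date after every topology change, whereas an on-demand node would run a route discovery only when a packet for an unknown destination appears:

```python
def distance_vector(links, source):
    """Bellman-Ford style route costs over a list of bidirectional links.

    A table-driven (proactive) node would rerun something like this on
    every topology update so that a route to every destination is ready
    before any packet needs it. `links` is a list of (a, b, cost) tuples.
    """
    nodes = {n for a, b, _ in links for n in (a, b)}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    # Relax every link repeatedly until costs stabilize.
    for _ in range(len(nodes) - 1):
        for a, b, cost in links:
            dist[b] = min(dist[b], dist[a] + cost)
            dist[a] = min(dist[a], dist[b] + cost)
    return dist

topo = [("A", "B", 1), ("B", "C", 1), ("A", "C", 3)]
d = distance_vector(topo, "A")
print(d["C"])  # 2: the two-hop route A-B-C beats the direct A-C link
```

The trade-off in the text follows directly: the proactive node pays this computation and the update traffic even for destinations it never talks to, while the reactive node pays a route-discovery delay on the first packet instead.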
For wireless networks, the most natural communication type is broadcasting, since
traditional radios are based on omni-directional antennas. However, problems arise in
multicasting over ad hoc wireless networks due to the mobility of sources, destinations,
and intermediate nodes in the distribution tree. In addition, there are hidden terminal
problems and multicast group dynamics [ADH 2002].
FIGURE 3.3 - Existing multicast routing protocols for ad hoc wireless networks.
- Stream Data Transfer: From the application's viewpoint, TCP transfers a contiguous
stream of bytes through the network. The application does not have to bother with
chopping the data into basic blocks or datagrams. TCP does this by grouping the bytes
in TCP segments, which are passed to IP for transmission to the destination. Also,
TCP itself decides how to segment the data and it can forward the data at its own
convenience. Sometimes, an application needs to be sure that all the data passed to
TCP has actually been transmitted to the destination. For that reason, a push function
is defined. It will push all remaining TCP segments still in storage to the destination
host. The normal close connection function also pushes the data to the destination.
- Reliability: TCP assigns a sequence number to each byte transmitted and expects a
positive acknowledgment (ACK) from the receiving TCP. If the ACK is not received
within a timeout interval, the data is retransmitted. Since the data is transmitted in
blocks (TCP segments), only the sequence number of the first data byte in the segment
is sent to the destination host. The receiving TCP uses the sequence numbers to
rearrange the segments when they arrive out of order, and to eliminate duplicate
segments.
- Flow Control: The receiving TCP, when sending an ACK back to the sender, also
indicates to the sender the number of bytes it can receive beyond the last received TCP
segment, without causing overrun and overflow in its internal buffers. This is sent in
the ACK in the form of the highest sequence number it can receive without problems.
This mechanism is also referred to as a window-mechanism, and we discuss it in more
detail later in this chapter.
- Logical Connections: The reliability and flow control mechanisms described above
require that TCP initializes and maintains certain status information for each data
stream. The combination of this status, including sockets, sequence numbers and
window sizes, is called a logical connection. The pair of sockets used by the sending
and receiving processes uniquely identifies each connection.
- Full Duplex: TCP provides for concurrent data streams in both directions.
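The flow-control rule described above can be sketched as follows (a hypothetical, much simplified model of the window mechanism; the function names are our own):

```python
def advertised_window(buffer_size, buffered_bytes):
    """Window the receiver advertises in each ACK: its free buffer space.

    The sender may have at most this many bytes outstanding beyond the
    last acknowledged byte without risking receiver buffer overflow.
    """
    return buffer_size - buffered_bytes

def sender_can_send(next_seq, last_acked, window):
    """True if the next byte still falls inside the advertised window."""
    return next_seq < last_acked + window

# Receiver has a 4096-byte buffer with 1024 bytes not yet consumed:
win = advertised_window(4096, 1024)
print(win)                               # 3072
print(sender_can_send(5000, 2000, win))  # True: 5000 < 2000 + 3072
```

As the application at the receiver consumes data, the advertised window reopens and the sender may continue, which is exactly the overrun protection the text describes.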
A simple transport protocol might use the following principle: send a packet and then
wait for an acknowledgment from the receiver before sending the next packet. If the
ACK is not received within a certain amount of time, retransmit the packet.
While this mechanism ensures reliability, it uses only a part of the available network
bandwidth. A sliding-window mechanism, in which several packets may be outstanding at
once, gives better use of the network bandwidth (better throughput).
The above window principle is used in TCP, but with a few differences:
Source Port | Destination Port
Sequence Number
Acknowledgement Number
Data Offset | Reserved | URG ACK PSH RST SYN FIN | Window
Checksum | Urgent Pointer
Options | Padding
Data bytes
Where:
- Source Port: The 16-bit source port number, used by the receiver to reply.
- Destination Port: The 16-bit destination port number.
- Sequence Number: The sequence number of the first data byte in this
segment. If the SYN control bit is set, the sequence number is the initial
sequence number (ISN) and the first data byte is ISN+1.
TCP sends data in variable-length segments. Sequence numbers are based on a byte
count. ACKs specify the sequence number of the next byte that the receiver expects to
receive.
Consider the case in which a segment is lost or corrupted. In this case, the receiver will
acknowledge all further well-received segments with an acknowledgment referring to the
first byte of the missing packet. The sender will stop transmitting when it has sent all the
bytes in the window. Eventually, a timeout will occur and the missing segment will be
retransmitted.
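The cumulative acknowledgment behavior just described can be sketched as follows (a toy model with one byte per segment; the function name is our own):

```python
def receiver_ack(received, expected=0):
    """Cumulative acknowledgment: ACK the next byte expected in order.

    `received` is the set of segment offsets that have arrived (one byte
    per segment to keep the sketch small). Segments beyond the first gap
    are held by the receiver but do not advance the ACK number.
    """
    while expected in received:
        expected += 1
    return expected

# Segments 0, 1, 2 arrive, 3 is lost, then 4 and 5 arrive:
got = {0, 1, 2, 4, 5}
print(receiver_ack(got))  # 3: every later segment is still ACKed with 3

# After the timeout the sender retransmits segment 3:
got.add(3)
print(receiver_ack(got))  # 6: the ACK jumps past the buffered segments
```

The jump from 3 to 6 after a single retransmission shows why one retransmitted segment can acknowledge a whole run of buffered data.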
Before any data can be transferred, a connection has to be established between the
two processes. One of the processes (usually the server) issues a passive OPEN call, the
other an active OPEN call. The passive OPEN call remains dormant until another process
tries to connect to it by an active OPEN.
On the network, three TCP segments are exchanged; this whole process is known
as a three-way handshake. Note that the exchanged TCP segments include the initial
sequence numbers from both sides, to be used on subsequent data transfers. Closing the
connection is done implicitly by sending a TCP segment with the FIN bit (no more data)
set. Since the connection is full-duplex (that is, there are two independent data streams, one
in each direction), the FIN segment only closes the data transfer in one direction. The other
process will now send the remaining data it still has to transmit and ends with a TCP
segment where the FIN bit is set. The connection is deleted (status information on both
sides) once the data stream is closed in both directions.
Process 1 Process 2
Passive OPEN
Waits for active request
Active OPEN
Send SYN, seq = n
Receive SYN
Send SYN, seq=m, ACK n+1
Receive SYN+ACK
Send ACK m+1
The connection is now established and the two data streams (one in
each direction) have been initialized (sequence numbers)
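The exchange above can be expressed compactly as follows (a sketch with hypothetical initial sequence numbers; the tuple layout is our own):

```python
def three_way_handshake(client_isn, server_isn):
    """Return the three segments of a TCP open as (flags, seq, ack) tuples.

    Each side picks its own initial sequence number (ISN) and the peer
    acknowledges ISN+1, exactly as in the exchange sketched above.
    """
    syn = ("SYN", client_isn, None)
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)
    ack = ("ACK", client_isn + 1, server_isn + 1)
    return [syn, syn_ack, ack]

for seg in three_way_handshake(client_isn=100, server_isn=300):
    print(seg)
# ('SYN', 100, None)
# ('SYN+ACK', 300, 101)
# ('ACK', 101, 301)
```

After the third segment, both sides know the other's ISN and the two byte streams are initialized for data transfer.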
The TCP application-programming interface is not fully defined. Only some base
functions it should provide are described in RFC 793 Transmission Control Protocol. As is
the case with most RFCs in the TCP/IP protocol suite, a great degree of freedom is left to
the implementers, thereby allowing for optimal (operating system-dependent)
implementations, resulting in better efficiency (greater throughput).
- Open: Opens a connection, either passively or actively. This returns a local
connection name, which is used to reference this particular connection in all other
functions.
- Send: Causes data in a referenced user buffer to be sent over the connection. Can
optionally set the URGENT flag or the PUSH flag.
- Close: Closes the connection; causes a push of all remaining data and a TCP
segment with FIN flag set.
- Abort: Causes pending Send and Receive operations to be aborted, and a RESET to
be sent to the foreign TCP.
Full details can be found in RFC 793 Transmission Control Protocol.
The TCP congestion algorithm prevents a sender from overrunning the capacity of
the network (for example, slower WAN links). TCP can adapt the sender's rate to network
capacity and attempt to avoid potential congestion situations. Several congestion control
enhancements have been added and suggested to TCP over the years. This is still an active
and ongoing research area, but modern implementations of TCP contain four intertwined
algorithms as basic Internet standards:
- Slow start.
- Congestion avoidance.
- Fast retransmit.
- Fast recovery.
Old implementations of TCP would start a connection with the sender injecting
multiple segments into the network, up to the window size advertised by the receiver.
While this is OK when the two hosts are on the same LAN, if there are routers and slower
links between the sender and the receiver, problems can arise. Some intermediate routers
cannot handle it: packets are dropped, retransmissions result, and performance is degraded.
The algorithm to avoid this is called slow start. It operates by observing that the rate at
which new packets should be injected into the network is the rate at which the
acknowledgments are returned by the other end. Slow start adds another window to the
sender's TCP: the congestion window, called CWND. When a new connection is
established with a host on another network, the congestion window is initialized to one
segment (for example, the segment size announced by the other end, or the default,
typically 536 or 512).
Each time an ACK is received, the congestion window is increased by one segment. The
sender can transmit up to the minimum of the congestion window and the advertised window.
The congestion window is flow control imposed by the sender, while the advertised
window is flow control imposed by the receiver. The former is based on the sender's
assessment of perceived network congestion; the latter is related to the amount of available
buffer space at the receiver for this connection.
The sender starts by transmitting one segment and waiting for its ACK. When that
ACK is received, the congestion window is incremented from one to two, and two
segments can be sent. When each of those two segments is acknowledged, the congestion
window is increased to four. This provides an exponential growth, although it is not exactly
exponential, because the receiver may delay its ACKs, typically sending one ACK for
every two segments that it receives.
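The growth pattern just described can be simulated in a few lines (our own simplification: it counts whole segments rather than bytes, and models delayed ACKs as one ACK per two segments):

```python
def slow_start(rtts, delayed_acks=False):
    """CWND (in segments) at the start of each RTT during slow start.

    Without delayed ACKs every segment is acknowledged and CWND doubles
    each round trip; with one ACK per two segments the growth is slower,
    which is why the growth is 'not exactly exponential'.
    """
    cwnd = 1
    trace = []
    for _ in range(rtts):
        trace.append(cwnd)
        acks = max(1, cwnd // 2) if delayed_acks else cwnd
        cwnd += acks  # one segment added per ACK received
    return trace

print(slow_start(5))                     # [1, 2, 4, 8, 16]
print(slow_start(5, delayed_acks=True))  # [1, 2, 3, 4, 6]
```

The second trace shows how a receiver that ACKs every other segment noticeably flattens the "exponential" curve.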
At some point, the capacity of the IP network (for example, slower WAN links) can
be reached, and an intermediate router will start discarding packets. This tells the sender
that its congestion window has gotten too large.
[Figure: segment exchange between sender and receiver illustrating slow start.]
The assumption of the algorithm is that packet loss caused by damage is very small
(much less than 1 percent). Therefore, the loss of a packet signals congestion somewhere in
the network between the source and destination. There are two indications of packet loss:
- A timeout occurs.
- Duplicate ACKs are received.
Congestion avoidance and slow start are independent algorithms with different
objectives. Nevertheless, when congestion occurs TCP must slow down its transmission
rate of packets into the network, and invoke slow start to get things going again. In practice,
they are implemented together. Congestion avoidance and slow start require that two
variables be maintained for each connection: a congestion window (CWND) and a slow
start threshold (SSTHRESH).
1. Initialization for a given connection sets CWND to one segment and SSTHRESH to
65535 bytes.
2. The TCP output routine never sends more than the minimum of CWND and the
receiver's advertised window.
3. When congestion occurs (indicated by a timeout or the reception of duplicate ACKs),
one-half of the current window size is saved in SSTHRESH. Additionally, if the
congestion is indicated by a timeout, CWND is set to one segment.
4. When new data is acknowledged by the other end, increase CWND, but the way it
increases depends on whether TCP is performing slow start or congestion avoidance.
If CWND is less than or equal to SSTHRESH, TCP is in slow start; otherwise, TCP is
performing congestion avoidance.
Slow start continues until TCP is halfway to where it was when congestion occurred
(since it recorded half of the window size that caused the problem in step 2), and then
congestion avoidance takes over. Slow start has CWND begin at one segment, and
incremented by one segment every time an ACK is received. As mentioned earlier, this
opens the window exponentially: send one segment, then two, then four, and so on.
Congestion avoidance dictates that CWND be incremented by
SEGSIZE*SEGSIZE/CWND each time an ACK is received, where SEGSIZE is the
segment size and CWND is maintained in bytes. This is a linear growth of CWND,
compared to slow start's exponential growth. The increase in CWND should be at most one
segment each round-trip time, regardless of how many ACKs are received in that RTT.
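The per-ACK window growth described above can be sketched as follows (a minimal sketch; the SEGSIZE value and the function name are illustrative assumptions, not taken from the cited sources):

```python
SEGSIZE = 1460          # assumed maximum segment size, in bytes

def on_ack(cwnd, ssthresh):
    """Grow the congestion window (in bytes) on each newly acknowledged segment."""
    if cwnd <= ssthresh:
        # Slow start: one segment per ACK -> exponential growth per RTT.
        return cwnd + SEGSIZE
    # Congestion avoidance: SEGSIZE*SEGSIZE/CWND per ACK, i.e. at most
    # about one segment of growth per round-trip time.
    return cwnd + SEGSIZE * SEGSIZE // cwnd

# Starting from one segment with the initial 65535-byte threshold,
# six ACKs keep the connection in slow start:
cwnd, ssthresh = SEGSIZE, 65535
for _ in range(6):
    cwnd = on_ack(cwnd, ssthresh)
print(cwnd)   # -> 10220, i.e. seven segments
```

The integer division mirrors the byte-based bookkeeping described in the text; a real implementation also caps CWND at the receiver's advertised window.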
FIGURE 4.5 - TCP slow start and congestion avoidance behavior in action
Fast retransmit avoids having TCP wait for a timeout to resend lost segments.
Modifications to the congestion avoidance algorithm were proposed in 1990. Before
describing the change, realize that TCP may generate an immediate acknowledgment (a
duplicate ACK) when an out-of-order segment is received. The purpose of this duplicate
ACK is to let the other end know that a segment was received out of order, and to tell it
what sequence number is expected. Since TCP does not know whether a lost segment or
just a reordering of segments causes a duplicate ACK, it waits for a small number of
duplicate ACKs to be received. It is assumed that if there is just a reordering of the
segments, there will be only one or two duplicate ACKs before the reordered segment is
processed, which will then generate a new ACK. If three or more duplicate ACKs are
received in a row, it is a strong indication that a segment has been lost. TCP then performs
a retransmission of what appears to be the missing segment, without waiting for a
retransmission timer to expire.
[Figure: sender-receiver timeline of fast retransmit. Packets 1 and 2 are acknowledged
(ACK 1, ACK 2); packet 3 is lost; packets 4, 5 and 6 each elicit a duplicate ACK 2; after
the third duplicate ACK 2, the sender retransmits packet 3 and receives ACK 6.]
After fast retransmit sends what appears to be the missing segment, congestion
avoidance, but not slow start, is performed. This is the fast recovery algorithm. It is an
improvement that allows high throughput under moderate congestion, especially for large
windows. The reason for not performing slow start in this case is that the receipt of the
duplicate ACKs tells TCP more than just that a packet has been lost. Since the receiver can only
generate the duplicate ACK when another segment is received, that segment has left the
network and is in the receiver's buffer.
That is, there is still data flowing between the two ends, and TCP does not want to
reduce the flow abruptly by going into slow start. The fast retransmit and fast recovery
algorithms are usually implemented together as follows:
1. When the third duplicate ACK in a row is received, set SSTHRESH to one-half the
current congestion window, CWND, but no less than two segments. Retransmit the
missing segment. Set CWND to SSTHRESH plus three times the segment size. This
inflates the congestion window by the number of segments that have left the network
and the other end has cached (3).
2. Each time another duplicate ACK arrives, increment CWND by the segment size.
This inflates the congestion window for the additional segment that has left the
network. Transmit a packet, if allowed by the new value of CWND.
3. When the next ACK arrives that acknowledges new data, set CWND to
SSTHRESH (the value set in step 1). This ACK should be the acknowledgment of the
retransmission from step 1, one round-trip time after the retransmission. Additionally,
this ACK should acknowledge all the intermediate segments sent between the lost
packet and the receipt of the first duplicate ACK. This step is congestion avoidance,
since TCP is down to one-half the rate it was at when the packet was lost.
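The three steps above can be sketched as sender-side state updates (a minimal, hypothetical sketch; SEGSIZE, the class name, and the method names are illustrative, not from the cited sources):

```python
SEGSIZE = 1460   # assumed segment size in bytes

class FastRecovery:
    """Sketch of the fast retransmit / fast recovery steps described above."""
    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.ssthresh = 65535
        self.dup_acks = 0
        self.in_recovery = False

    def on_dup_ack(self):
        self.dup_acks += 1
        if self.dup_acks == 3 and not self.in_recovery:
            # Step 1: halve the window (but at least two segments), retransmit
            # the missing segment, then inflate CWND by the three segments
            # known to have left the network.
            self.ssthresh = max(self.cwnd // 2, 2 * SEGSIZE)
            self.cwnd = self.ssthresh + 3 * SEGSIZE
            self.in_recovery = True
        elif self.in_recovery:
            # Step 2: each further duplicate ACK inflates the window by one
            # segment, since one more segment has left the network.
            self.cwnd += SEGSIZE

    def on_new_ack(self):
        if self.in_recovery:
            # Step 3: deflate back to SSTHRESH and resume congestion avoidance.
            self.cwnd = self.ssthresh
            self.in_recovery = False
        self.dup_acks = 0
```

For example, with a 16-segment window, three duplicate ACKs leave SSTHRESH at 8 segments and the inflated CWND at 11 segments; the next new ACK deflates CWND back to 8 segments.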
This section introduces some of the most popular current TCP versions, such as TCP
Tahoe, TCP Reno, and TCP New Reno.
Historically, TCP Tahoe was the first modification to TCP. The newer TCP Reno
included the fast recovery algorithm. This was followed by New Reno, which adds a partial
acknowledgment mechanism for multiple losses in a single window of data. As noted,
TCP Tahoe, Reno, and New Reno all use the same algorithm at the receiver, but implement
different variations of the transmission process at the sender. The receiver advertises a
window size, and the sender ensures that the number of unacknowledged bytes does not
exceed this size. For each segment correctly received, the receiver sends an
acknowledgment, which includes the sequence number identifying the next in-sequence
segment (byte). The sender implements a congestion window that defines the maximum
number of transmitted-but-unacknowledged bytes permitted. This adaptive window can
increase and decrease, but never exceeds the receiver’s advertised window. TCP applies
graduated multiplicative and additive increases to the sender's congestion window. The
versions of the protocol differ essentially in the way that the congestion
window is manipulated in response to acknowledgments and timeouts [TSA 2000].
TCP Reno introduces Fast Recovery in conjunction with Fast retransmit. The idea
behind fast Recovery is that a DACK is an indication of available channel bandwidth since
a segment has been successfully delivered. This, in turn, implies that the congestion
window (CWND) should actually be incremented. Receiving the threshold number of
DACKS triggers Fast Recovery: the sender retransmits the missing segment then, instead of
entering Slow Start as in TCP Tahoe, increases CWND by the DACK threshold number.
Thereafter, and for as long as the sender remains in Fast Recovery, CWND is increased by
one for each additional DACK received. This procedure is called "inflating" CWND. The
Fast Recovery stage is completed when an acknowledgment (ACK) for new data is
received. The sender then halves CWND (“deflating” the window), sets the congestion
threshold to CWND, and resets the DACK counter. In Fast Recovery, CWND is thus
effectively set to half its previous value in the presence of DACKS, rather than performing
Slow Start as for a general retransmission timeout. TCP Reno, however, is not optimized
for multiple segment drops from a single window [TSA 2000].
TCP New Reno addresses the problem of multiple segment drops. In effect, it can
avoid many of the multiple retransmit timeouts of TCP Reno. The New Reno modification
introduces a partial acknowledgment strategy in Fast Recovery. A partial acknowledgment
is defined as an ACK for new data that does not acknowledge all segments that were in
flight at the point when Fast Recovery was initiated. It is thus an indication that not all data
sent before entering Fast Recovery has been received. In TCP Reno, a partial ACK causes
exit from Fast Recovery. In TCP New Reno it is an indication that (at least) one
segment is missing and needs to be retransmitted. This retransmission is performed and
Fast Recovery continues. In this way, when multiple segments are lost from a window of
data, TCP New Reno can recover without waiting for a retransmission timeout. However,
the retransmission triggered off by a partial ACK might be for a delayed rather than lost
segment; thus, the strategy risks making multiple successful transmissions for the segment,
which can seriously impact its energy efficiency with no compensatory gain in goodput.
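The partial-acknowledgment test can be sketched as follows (a hypothetical simplification; the recovery point is the highest sequence number in flight when Fast Recovery began, and all names are illustrative):

```python
def on_ack_in_recovery(ack_seq, recovery_point):
    """New Reno sketch: an ACK below the recovery point is 'partial' -- the
    sender retransmits the next missing segment and stays in Fast Recovery;
    a full ACK (covering everything in flight) ends recovery."""
    if ack_seq < recovery_point:
        return ("retransmit", ack_seq)     # partial ACK: resend missing segment
    return ("exit_recovery", None)         # full ACK: all in-flight data covered
```

This is how New Reno recovers from several losses in one window with a single Fast Recovery episode instead of waiting for a retransmission timeout per loss.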
TCP SACK was defined in RFC 2018 by Mathis et al. in 1996, and later extended in
RFC2883 by Floyd et al. in 2000. TCP SACK further improves TCP performance by
allowing the sender to retransmit packets based on the selective ACKs provided by the
receiver. The implementation constitutes a SACK field that contains a number of SACK
blocks. The first block reports the most recently received packets. The additional blocks
repeat the most recently reported SACK blocks. SACK uses the basic congestion
control algorithms and uses retransmit timeouts as a last resort for recovery. The main
difference is the way it handles the loss of multiple packets from the same window, in fast
recovery. Like Reno, SACK enters fast recovery upon receiving duplicate ACKs. It then
retransmits a packet and cuts its congestion window in half. In addition to that, SACK has a
new variable called the pipe, and a data structure called the scoreboard. The pipe is
incremented when the sender sends a new or a retransmitted packet. It is decremented when
the receiver receives a new packet. This is indicated when the sender receives a duplicate
ACK with a SACK option. The scoreboard stores ACKs from previous SACK options,
allowing the sender to retransmit packets that are inferred to be missing at the receiver. Like
New-Reno, the sender exits fast recovery when all the data outstanding at the start of fast
recovery has been acknowledged [ELA 2002].
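The pipe/scoreboard bookkeeping can be sketched as follows (a simplified, hypothetical model; a real SACK sender tracks byte ranges rather than packet numbers, and all names are illustrative):

```python
class SackSender:
    """Sketch of SACK-style loss accounting: 'pipe' estimates packets still in
    the network; the 'scoreboard' records packets the receiver has reported."""
    def __init__(self):
        self.pipe = 0
        self.scoreboard = set()   # packet numbers reported in SACK blocks

    def on_send(self, seq):
        self.pipe += 1            # one more packet enters the network

    def on_sack(self, sack_blocks):
        self.pipe -= 1            # the duplicate ACK implies a packet arrived
        self.scoreboard.update(sack_blocks)

    def missing(self, highest_sent, cum_ack):
        # Packets above the cumulative ACK and absent from the scoreboard are
        # inferred lost and become candidates for retransmission.
        return [s for s in range(cum_ack + 1, highest_sent + 1)
                if s not in self.scoreboard]
```

For instance, after sending packets 1-5 and receiving a SACK reporting 4 and 5 with a cumulative ACK of 2, the scoreboard implies that only packet 3 needs retransmission.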
TCP Vegas was introduced in 1994 as an alternative to TCP Reno. It improves upon
each of the three mechanisms of TCP Reno. The first enhancement is a more prudent way
to grow the window size during the initial use of slow-start and leads to fewer losses. The
second enhancement is an improved retransmission mechanism where time-out is checked
on receiving the first duplicate acknowledgment, rather than waiting for the third duplicate
acknowledgment (as Reno would), and leads to a more timely detection of loss. The third
enhancement is a new congestion avoidance mechanism that corrects the oscillatory
behavior of Reno. In contrast to the Reno algorithm, which induces congestion to learn the
available network capacity, a Vegas source anticipates the onset of congestion by
monitoring the difference between the rate it is expecting to see and the rate it is actually
realizing. Vegas’ strategy is to adjust the source’s sending rate (window size) in an attempt
to keep a small number of packets buffered in the routers along the path. Although
experimental results presented by Brakmo and Peterson in 1995, and duplicated in Ahn et
al. in 1995, show that TCP Vegas achieves better throughput and fewer losses than TCP
Reno under many scenarios, at least two concerns remained: whether Vegas is stable, and if
so, whether it stabilizes to a fair distribution of resources; and whether Vegas results in
persistent congestion. In short, Vegas has lacked a theoretical explanation of why it works
[LOW 2000].
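The expected-versus-actual rate comparison described above can be sketched as follows (a minimal sketch; the alpha and beta thresholds are illustrative defaults, and the function name is an assumption):

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    """Vegas sketch: the gap between expected and actual rates, scaled by the
    base RTT, estimates how many packets are queued along the path."""
    expected = cwnd / base_rtt               # rate if nothing were queued
    actual = cwnd / rtt                      # rate actually observed
    diff = (expected - actual) * base_rtt    # estimated backlog, in packets
    if diff < alpha:
        return cwnd + 1    # too little backlog: probe for more bandwidth
    if diff > beta:
        return cwnd - 1    # too much backlog: back off before loss occurs
    return cwnd            # backlog in the target band: hold steady
```

Unlike Reno, which must induce a loss to find the capacity, this rule reacts to rising queueing delay before any packet is dropped.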
TCP Santa Cruz is a new implementation of TCP that detects not only the initial stages of
congestion in the network but also identifies the direction of congestion, i.e., it determines
whether congestion is developing in the forward or reverse path of the connection. It is
thus able to isolate the forward throughput from events such as congestion that
may occur in the reverse path. Congestion is determined by calculating the relative delay
that one packet experiences with respect to another as it traverses the network. This
relative delay is the foundation of its congestion control algorithm. The relative delay is
used to estimate the number of packets residing in the bottleneck queue; the congestion
control algorithm keeps the number of packets in the bottleneck queue at a minimum level
by adjusting the TCP source’s congestion window. The congestion window is reduced if
the bottleneck queue length increases (in response to increasing congestion in the network)
beyond a desired number of packets. The window is increased when the source detects
additional bandwidth available in the network (i.e., after a decrease in the bottleneck queue
length). TCP Santa Cruz can be implemented as a TCP option by utilizing the extra 40
bytes available in the options field of the TCP header [PAR 2000].
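The relative-delay computation and the resulting window adjustment can be sketched as follows (a hypothetical simplification of the mechanism described above; names and the one-packet target are illustrative):

```python
def relative_delay(send_times, recv_times, i, j):
    """Santa Cruz sketch: the relative delay of packet j with respect to
    packet i is the change in one-way transit time between them. It needs no
    clock synchronization, since each clock's offset cancels in the subtraction."""
    return (recv_times[j] - recv_times[i]) - (send_times[j] - send_times[i])

def adjust_window(cwnd, queue_estimate, target=1):
    """Keep the estimated bottleneck backlog near a small target."""
    if queue_estimate > target:
        return cwnd - 1   # backlog growing: congestion building, shrink window
    if queue_estimate < target:
        return cwnd + 1   # backlog shrinking: spare bandwidth, grow window
    return cwnd
```

A positive relative delay means the later packet spent longer in queues, i.e. the bottleneck backlog is growing; summing these increments yields the queue estimate fed to the window adjustment.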
Besides the TCP implementations mentioned above, others exist, such as TCP Net Reno,
which deals with the small congestion window issue through Limited Retransmit
[RFC 3042]; however, TCP Reno is currently the de facto standard on the Internet. It
should be noted that the TCP versions presented here are intended for wired networks and
are used as a standard of comparison with new TCP protocols arising for pure wireless
networks and heterogeneous networks.
TCP is a reliable, stream-oriented transport layer protocol, which was designed for
use over fixed low-error networks like the Internet. Route failures and disruptions are very
infrequent since the network is fixed. Therefore, packet loss, which is detected by TCP as a
timeout, can be reliably interpreted as a symptom of congestion in the network. In response,
TCP invokes congestion control mechanisms. Thus, TCP does not distinguish between
congestion and packet loss due to transmission errors or route failures. This inability of
TCP to distinguish between two distinct problems exhibiting the same symptom results in
performance degradation in ad hoc networks [CHA 2001].
In an ad hoc network, packet losses are frequent in the error-prone wireless medium,
but the effect of these losses can be reduced using reliable link layer protocols. One of the
more serious problems is that of route failures, which can occur very frequently and
unpredictably during the lifetime of a transport session, depending on the relative motion of
MHs in the network [CHA 2001].
In general, whenever the mobility of a mobile host (MH) invalidates a route, the
reestablishment of the route by the underlying routing protocol takes a finite amount of
time. During this period, no packets can reach the destination through the existing route.
This results in the queuing and possible loss of packets and acknowledgments. This in turn
leads to timeouts at the source, which are interpreted by the transport protocol as
congestion. Consequently the source:
- Invokes congestion control mechanisms that include exponential back off of the
retransmission timers and immediate shrinking of the window size, thus resulting in
reduction of the transmission rate
- Enters a slow start recovery phase to ensure that the congestion is reduced before
resuming transmission at the normal rate [CHA 2001].
- When there is no route available, there is no need to retransmit packets that will not reach
the destination.
- In the period immediately following the restoration of the route, the throughput will be
unnecessarily low because of the slow start recovery mechanism, even though there
may be no congestion in the network [CHA 2001].
The works [XUS 2000], [SUN 2001], and [JIC 2001] study the performance of TCP Reno,
New Reno, Sack and Vegas in mobile ad hoc networks.
Early research on cellular wireless systems showed that TCP suffers poor
performance in wireless networks because of packet losses due to high link bit error rates
[BAL 97]. Besides the link error issue, the high mobility of ad hoc networks also has a
significant impact on TCP performance.
In [SUN 2001], four TCP versions (Tahoe, Reno, New-Reno and Sack) were simulated in
NS-2 [NET 02] in a MANET environment, even though they were created for wired
networks. The authors used the IEEE 802.11 standard and chose the Ad Hoc On-Demand
Distance Vector (AODV) routing protocol. The simulation consisted of 50 mobile ad hoc
nodes moving around a 1500x300 m² flat rectangular area at four different average speeds
(2, 10, 20 and 30 m/s) with a pause time of zero.
In figure 4.7 below, four key performance metrics are evaluated:
- Throughput (only the packets for which the sender has received ACKs are counted in the
throughput).
- Goodput (the ratio between the amount of data arriving at the destination and the amount
of data generated by the TCP source).
- End-to-End Delay (the average delay incurred by a packet from the time it is deposited
into the sender's buffer until it is successfully acknowledged, including all possible delays
caused by buffering during route discovery, queuing at the interface queue, retransmission
at the MAC, and propagation and transfer times).
- Transfer Time (the time for the destination to receive a fixed number of packets from the
sender).
FIGURE 4.7 - Comparison of performance of four TCPs under different mobility metric
Figure 4.7 shows that, for TCP New-Reno, Reno, Sack and Tahoe, as mobility increases
the goodput and throughput decrease while the delay and transfer time increase, proving
that new solutions to improve TCP over wireless are required. Several factors may
have affected these results. First, when the relative speed increases, route breakage and
formation become more frequent, which causes a higher link-down probability over longer
periods. This increases the number of retransmissions and the overhead, and reduces the
effective transfer time. Secondly, after a route is broken, if the routing protocol cannot recover and
discover a new route before the retransmission timeout occurs, TCP will trigger congestion
control (slow start, by setting CWND = 1) followed by the congestion avoidance
mechanism at the sender. This is because current TCP cannot distinguish packet loss due to
route breakage from congestion. In a stationary multi-hop network using an ad hoc routing
algorithm and topology, besides the TCP congestion control algorithm, the distance
between source and destination also affects the performance [SUN 2001].
With the exception of the Fast Retransmit and Recovery algorithms, Transmission
Control Protocol (TCP) assumes congestion to be the only source of packet loss. When
wireless networks experience packet loss due to interference or any other error, congestion
control algorithms in TCP are triggered [MAR 98].
TCP performs at an acceptable efficiency over the traditional wired networks where
packet losses are usually caused by network congestion. However, in networks with
wireless links in addition to wired segments, this assumption would be insufficient, as the
high wireless bit error rate could become the dominant cause of packet loss, and thus TCP
performs sub-optimally under these new conditions [WEN 2001a].
The main reason for this poor TCP performance is that TCP cannot distinguish
packet losses due to wireless errors from those due to congestion. Moreover, the TCP
sender cannot keep the size of its congestion window at an optimum level and always has
to retransmit packets after waiting for a timeout, which significantly degrades the
end-to-end delay performance of TCP [WEN 2001a].
Transport connections set up in wireless ad hoc networks have many problems such
as high bit error rates, frequent route changes, and partitions. If we run transmission control
protocol (TCP) over such connections, the throughput of the connection is extremely poor
because TCP treats lost or delayed acknowledgments as congestion [JIA 2001].
Bit errors cause packets to get corrupted, which results in lost TCP data segments or
acknowledgments. When acknowledgments do not arrive at the TCP sender within a short
amount of time [the retransmit timeout (RTO)], the sender retransmits the segment,
exponentially backs off its retransmit timer for the next retransmission, reduces its
congestion control window threshold, and closes its congestion window to one segment.
Repeated errors will ensure that the congestion window at the sender remains small,
resulting in low throughput [JIA 2001].
It is important to note that error correction may be used to combat high BER but it
will waste valuable wireless bandwidth when correction is not necessary [JIA 2001].
When an old route is no longer available (as in figure 5.1), the network layer at the
sender attempts to find a new route to the destination [in dynamic source routing (DSR),
this is done via route discovery messages, while in destination-sequenced distance-vector
(DSDV) routing, table exchanges are triggered that eventually result in a new route being
found] [JIA 2001].
Discovering a new route may take significantly longer than the retransmit timeout
at the sender. As a result, the TCP sender times out, retransmits a packet, and invokes
congestion control. Thus, when a new route is discovered, the throughput will continue to
be small for some time because TCP at the sender grows its congestion window using the
slow start and congestion avoidance algorithm [JIA 2001].
This is clearly undesirable behavior because the TCP connection will be very
inefficient. If we imagine a network in which route computations are done frequently (due
to high node mobility), the TCP connection will never get an opportunity to transmit at the
maximum negotiated rate (i.e., the congestion window will always be significantly smaller
than the advertised window size from the receiver) [JIA 2001].
In figure 5.1, the source node (s) needs to re-compute its route to the destination (d) for an
ongoing TCP connection because node "a" moved out of range of node "d".
It is likely that the ad hoc network may periodically be partitioned for several
seconds at a time (see figure 5.2). If the sender and the receiver of a TCP connection lie in
different partitions, all the sender’s packets get dropped by the network resulting in the
sender invoking congestion control. If the partition lasts for a significant amount of time
(several times longer than the retransmit timeout, RTO), the situation gets even worse
because of a phenomenon called serial timeouts [JIA 2001].
Figure 5.2 illustrates how the ad hoc network may be temporarily partitioned due to
node mobility. The source node (s) has an open TCP connection to the destination node (d).
The network gets partitioned at time T+5, causing "s" and "d" to lie in different partitions.
The network eventually reconnects 15 seconds later, allowing "s" and "d" to continue
communicating. Unfortunately, this change in node connectivity has disastrous
consequences for TCP's throughput, which can drop to a very low level [JIA 2001].
The congestion window in TCP imposes an acceptable data rate for a particular
connection based on congestion information that is derived from timeout events as well as
from duplicate ACKs. In an ad hoc network, since routes change during the lifetime of a
connection, we lose the relationship between the congestion window size and the tolerable
data rate for the route. In other words, the congestion window (CWND) as computed for
one route may be too large for a newer route, resulting in network congestion when the
sender transmits at the full rate allowed by the old CWND [JIA 2001].
Figure 5.3 presents the measured TCP throughput as a function of the number of
hops, averaged over ten runs. Observe that the throughput decreases rapidly when the
number of hops is increased from one, and then stabilizes once the number of hops
becomes large. The primary reason for this trend is the characteristics of 802.11. Consider
the simple four-hop network shown in figure 5.3. In IEEE 802.11, when link 1–2 is active,
only link 4–5 may also be active. Link 2–3 cannot be active because node 2 cannot transmit
and receive simultaneously, and link 3–4 may not be active because communication by
node 3 may interfere with node 2. Thus, the throughput on an i-hop 802.11 network with
link capacity C is bounded by C/i for 1 ≤ i ≤ 3, and C/3 otherwise. The decline in
figure 5.3 for i ≥ 4 is due to contention caused by the backward flow of TCP ACKs.
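The C/i bound described above can be expressed directly (a trivial sketch; the function name is illustrative):

```python
def chain_throughput_bound(capacity, hops):
    """Upper bound on the end-to-end throughput of an i-hop 802.11 chain.

    In a chain, at most every third hop can transmit concurrently (a node
    cannot send and receive at once, and transmitters interfere with their
    neighbors' receivers), so the bound is C/i for i <= 3 and C/3 beyond."""
    return capacity / min(hops, 3)
```

For example, a 2 Mbps link yields a bound of 2 Mbps over one hop, 1 Mbps over two hops, and about 0.67 Mbps for any chain of three or more hops; the measured values in figure 5.3 fall below this bound because of ACK contention.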
FIGURE 5.3 - TCP-Reno throughput over an 802.11 fixed, linear, multi-hop network
The experiment in [TOH 2002] also shows that the Route Discovery (RD) time of
the ABR protocol and the end-to-end delay increase with the number of hops, as shown in
tables 5.1 and 5.2.
TABLE 5.1 – Route Discovery (RD) time at 1 second beaconing interval for different hop
Counts [TOH 2002].
TABLE 5.2 – Average End-to-End Delay at Different Packet Sizes [TOH 2002].
Figure 5.4 shows the instability problem when deploying TCP Reno in a
multi-hop mobile ad hoc wireless network; it also shows that when the window size is
reduced from 32 to 4, the instability is also reduced. According to [XUS 2002], this is
because in a carrier-sense wireless network, the interfering range (and sensing range) is
typically larger than the range at which receivers are willing to accept a packet from the
same transmitter. WaveLAN wireless systems are engineered in such a way. According to
the IEEE 802.11 protocol implementation in the NS-2 simulator [NET 2000], which is
modeled after the WaveLAN wireless radio, the interfering range and the sensing range are
more than two times the size of the communication range. This is the reason why a collision
occurs at node 2 when node 1 and node 4 are sending at the same time, even though node 4
cannot directly communicate with node 2. Node 2 is within the interfering range of node 4.
This is a typical "hidden node problem" in wireless packet networks; node 4 is the hidden
node in this case. It is within the interference range of the intended destination (node 2) but
out of the sensing range of the sender (node 1). Since the nominal communication range is
250 m, which is smaller than the interfering range, node 1 cannot hear the Clear to Send
(CTS) packet from node 4. Thus, the virtual carrier sense mechanism cannot function either
in this case. Now we can explain why node 4 is sending even if node 2 successfully
receives the Request to Send (RTS) from node 1. Note that node 2 can sense node 4. This is
a typical "exposed node problem" in wireless packet networks [XUS 2002].
FIGURE 5.4 - Instability problem in the four hop TCP Reno connection
Now it is clear that the exposed station problems and collisions are preventing the
intermediate node from reaching its next hop. The random back-off scheme used in the
MAC layer makes this worse, since it always favors the latest successful node. Since larger
data packet sizes and sending back-to-back packets both increase the chance of the
intermediate node failing to obtain the channel, the node has to back off for a random time
and try again. This will increase the delay of ACKs if it finally succeeds. If it still fails after
several tries, a link breakage will be declared. The result is the report of a route failure
[XUS 2002].
To demonstrate unfairness, S. Xu and T. Saadawi [XUS 2002] set up two TCP
connections in the network shown in figure 5.5. The first one started at 10.0 s; the second
one began 20.0 s later. They called the first one the "first session" and the second one the
"second session". The whole experiment lasts 130.0 s. The first session is from node 6 to
node 4; the second session is from node 2 to node 3. The first session is a two-hop TCP
connection. It has a throughput of around 450 kbps after starting from 10.0 s. However, it
is completely forced down after the second session starts at 30.0 s. For most of its lifetime
after 30.0 s, the throughput of the first session is zero; there is not even a chance for it to
restart. The aggregate throughput of these two TCP connections belongs completely to the
second session (around 920 kbps over the 30.0-130.0 s lifetime), as shown in figure 5.5.
This is serious unfairness: the loser session is completely shut down even though it started
much earlier. Even if the window size, equal to 4 in the experiment, is changed to 1, the
results are much the same.
FIGURE 5.5 - Throughput of two TCP connections with different sender and receiver
The authors showed that with the IEEE 802.11 MAC layer, two simultaneous TCP
flows cannot coexist in the network at the same time. Once one session develops, the
other one will be shut down. The overturn can happen at any random time.
Figure 5.6.a shows two TCP connections with two hops each. They cannot stay
alive at the same time and, in this experiment, the overturn happens three times. The
turnovers are very random; several trials with different simulation seeds were done, and the
authors could not predict when a turnover might happen, which is why they called it the
incompatibility problem. In figure 5.6.a, the TCP sources are neighboring nodes (nodes 4
and 3), but in figure 5.6.b the TCP sources are not direct neighbors (nodes 1 and 6). In
figure 5.6.b the TCP sources are five hops apart while the TCP receivers are neighbors, and
three turnovers occur.
According to [XUS 2002], even if we do not use TCP, the three problems previously
mentioned still exist in the MAC layer when IEEE 802.11 is used in multi-hop networks.
TCP traffic clearly exposes the problems existing in the MAC. In fact, these problems
always appear when the traffic load becomes large enough, even if the traffic is not from TCP.
FIGURE 5.6 - Throughput of two TCP connections with the same hop number
Varying the transmission packet size has a direct influence on the end-to-end delay of
ad hoc wireless routes. This is because the larger the packet size, the longer the data
transmission, propagation, and processing times. The use of a large packet size can increase
the performance of ad hoc networks in terms of throughput. However, at very large packet
sizes, there is a high probability that a packet will be corrupted. This behavior is likely to
occur in a wireless environment due to its high bit-error rate compared with a wired
medium. Moreover, contention can be a problem when the traffic load is high [TOH 2002].
It is also important to consider the power supply because mobile computers are
battery-operated and as a result, the power resource is limited. Therefore, it will be helpful
if the transmitting and receiving time in a MANET is minimized.
In a MANET, the dynamic network topology changes rapidly due to the movement of the
wireless stations at different speeds, which is another point to take into consideration.
Obstacles in the environment that may create shadowing also have to be considered.
There are several excellent papers comparing the different TCP solutions. We start
this chapter with the proposals of Balakrishnan [BAL 97], George Xylomenos et al.
[XYL 2001], and Wen-Tsuen Chen and Jyh-Shin Lee [WEN 2000].
In this work, we present some of the more important proposals to solve wireless
problems, organized as a natural evolution of the schemes in [BAL 97]. We suggest adding
a New Layer Scheme, proposed by [JIA 2001] and [CHE 2001], and also Emergent
Schemes, which gather the proposals that did not fit well in the schemes of [BAL 97] or
that appeared later with original and interesting characteristics. Our proposed classification
is as follows:
Unlike TCP for the transport layer, there is no de facto standard for link-layer
protocols. Existing link-layer protocols choose from techniques such as stop-and-wait,
go-back-N, selective repeat, and forward error correction to provide reliability [BAL 97].
There have been several proposals for reliable link-layer protocols. The two main
classes of techniques employed by these protocols are: error correction, using techniques
such as forward error correction (FEC), and retransmission of lost packets in response to
automatic repeat request (ARQ) messages [BAL 97].
The main advantage of employing a link-layer protocol for loss recovery is that it
fits naturally into the layered structure of network protocols. The link-layer protocol
operates independently of higher layer protocols, and does not maintain any per-connection
state. The main concern about link-layer protocols is the possibility of an adverse effect on
certain transport-layer protocols such as TCP [BAL 97].
This link-layer protocol takes advantage of the knowledge of the higher layer
transport protocol (TCP) and is deployed in cellular wireless networks. The snoop protocol
introduces a module, called the snoop agent, at the base station. The agent monitors every
packet that passes through the TCP connection in both directions, and maintains a cache of
TCP segments sent across the link that have not yet been acknowledged by the receiver. A
packet loss is detected by the arrival of a small number of duplicate acknowledgments from
the receiver or by a local timeout. The snoop agent retransmits the lost packet if it has it
cached, and suppresses the duplicate acknowledgments.
The particular goal here, improving end-to-end performance on networks with
wireless links without changing existing TCP implementations at hosts in the fixed network
and without recompiling or re-linking existing applications, is achieved by a simple set of
modifications to the network-layer (IP) software at the base station. These modifications
consist mainly of caching packets and performing local retransmissions across the wireless
link by monitoring the acknowledgments to TCP packets generated by the receiver [BAL
95].
We first describe the protocol for transfer of data from a fixed host
(FH) to a mobile host (MH) through a base station (BS). The base station routing code is
modified by adding a module, called snoop, that monitors every packet that passes
through the connection in either direction. No transport-layer code runs at the base station.
The snoop module maintains a cache of TCP packets sent from the FH that have not yet
been acknowledged by the MH. This is easy to do since TCP has a cumulative
acknowledgment policy for received packets. When a new packet arrives from the FH,
snoop adds it to its cache and passes the packet on to the routing code which performs the
normal routing functions. The snoop module also keeps track of all the acknowledgments
sent from the mobile host. When a packet loss is detected (either by the arrival of a
duplicate acknowledgment or by a local timeout), it retransmits the lost packet to the MH if
it has the packet cached. Thus, the base station (snoop) hides the packet loss from the FH
by not propagating duplicate acknowledgments, thereby preventing unnecessary congestion
control mechanism invocations [BAL 95].
The snoop module has two linked procedures, snoop_data() and snoop_ack().
Snoop_data() caches packets intended for the MH, while snoop_ack() processes
acknowledgments (ACKs) coming from the MH and drives local retransmissions from the
base station to the mobile host. Flowcharts summarizing the algorithms for
snoop_data() and snoop_ack() are shown in figure 6.1.
FIGURE 6.1 - Flowcharts of the snoop_data() and snoop_ack() algorithms: in the common case a new packet is forwarded and the local retransmission counter is reset, while a new ACK frees buffers, updates the RTT estimate, and is propagated to the sender; duplicate and spurious ACKs are discarded
The Snoop_data() processes packets from the fixed host. TCP implements a sliding
window scheme to transmit packets based on its congestion window (estimated from local
computations at the sender) and the flow control window (advertised by the receiver). TCP
is a byte-stream protocol and each byte of data has an associated sequence number. A TCP
packet (or segment) is uniquely identified by the sequence number of its first byte of data
and its size. At the BS, snoop keeps track of the last sequence number seen for the connection.
One of several kinds of packets can arrive at the BS from the FH, and snoop_data()
processes them in different ways [BAL 95]:
1. A new packet in the normal TCP sequence: This is the common case, when a new packet
in the normal increasing sequence arrives at the BS. In this case the packet is added to the
snoop cache and forwarded on to the MH. We do not perform any extra copying of data
while doing this. We also place a timestamp on one packet per transmitted window in order
to estimate the round-trip time of the wireless link.
2. An out-of-sequence packet that has been cached earlier: This is a less common case, but
it happens when dropped packets cause timeouts at the sender. It could also happen when a
stream of data following a TCP sender fast retransmission arrives at the base station.
Different actions are taken depending on whether this packet is greater or less than the last
acknowledged packet seen so far. If the sequence number is greater than the last
acknowledgment seen, it is very likely that this packet did not reach the MH earlier, and so
it is forwarded on. If, on the other hand, the sequence number is less than the last
acknowledgment, the MH has already received this packet. At this point, one possibility
would be to discard this packet and continue, but this is not always the best thing to do. The
reason for this is that the original ACK with the same sequence number could have been
lost due to congestion while going back to the FH. In order to facilitate the sender getting to
the current state of the connection as fast as possible, a TCP acknowledgment
corresponding to the last ACK seen at the BS is generated by the snoop module (with the
source address and port corresponding to the MH) and sent to the FH.
3. An out-of-sequence packet that has not been cached earlier: In this case, the packet was
either lost earlier due to congestion on the wired network or has been delivered out of order
by the network. The former is more likely, especially if the sequence number of the packet
(i.e, the sequence number of its first data byte) is more than one or two packets away from
the last one seen so far by the snoop module. This packet is forwarded to the MH, and
marked as having been retransmitted by the sender. Snoop_ack() uses this information to
process acknowledgments (for this packet) from the MH.
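The three snoop_data() cases above can be summarized in the following illustrative Python sketch; the class name SnoopCache, its fields, and the returned action strings are our own hypothetical naming, not the code of [BAL 95]:

```python
# Illustrative sketch of the snoop_data() dispatch described above.
# SnoopCache and all names here are hypothetical, not the authors' code.

class SnoopCache:
    """Per-connection state kept by the snoop module at the base station."""

    def __init__(self):
        self.cache = {}             # seq -> payload, awaiting the MH's ACK
        self.last_seq_seen = -1     # highest sequence number seen from the FH
        self.last_ack_seen = -1     # last cumulative ACK seen from the MH
        self.sender_rexmit = set()  # packets marked as sender-retransmitted

    def snoop_data(self, seq, packet):
        """Process a packet arriving from the fixed host; return an action."""
        if seq > self.last_seq_seen:
            # Case 1: new packet in normal sequence -- cache and forward.
            self.cache[seq] = packet
            self.last_seq_seen = seq
            return "forward"
        if seq in self.cache:
            # Case 2: out-of-sequence packet cached earlier.
            if seq > self.last_ack_seen:
                return "forward"      # probably never reached the MH
            # The MH already has it: echo the last ACK back to the FH.
            return "send_ack_to_fh"
        # Case 3: out-of-sequence, never cached -- likely lost earlier on the
        # wired side; forward and mark as retransmitted by the sender.
        self.cache[seq] = packet
        self.sender_rexmit.add(seq)
        return "forward_marked"
```

The marking done in case 3 is what snoop_ack() later consults when it decides whether a DUPACK must be routed to the FH.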
Snoop_ack() monitors and processes the acknowledgments (ACKs) sent back
by the MH and performs various operations depending on the type and number of
acknowledgments it receives. These ACKs fall into one of three categories [BAL 95]:
1. A new ACK: This is the common case (when the connection is fairly error-free and there
is little user movement), and signifies an increase in the packet sequence received at the
MH. This acknowledgment initiates the cleaning of the snoop cache and all acknowledged
packets are freed. The round-trip time estimate for the wireless link is also updated at this
time. This estimate is not done for every packet, but only for one packet in each window of
transmission, and only if no retransmissions happened in that window. The last condition is
needed because it is impossible in general to determine if the arrival of an acknowledgment
for a retransmitted packet was for the original packet or for the retransmission. Finally, the
acknowledgment is forwarded to the FH.
2. A spurious ACK: This is an acknowledgment less than the last acknowledgment seen by
the snoop module and is a situation that rarely happens. It is discarded and the protocol
continues.
3. A duplicate ACK (DUPACK): This signifies that segments after the lost one in the
sequence have been received, since the MH generates a DUPACK for each TCP segment
received out of sequence. One of several actions is taken depending on the type of duplicate
acknowledgment and the current state of snoop [BAL 95]:
- The first case occurs when we receive a DUPACK for a packet that is either not in the
snoop cache or has been marked as having been retransmitted by the sender. If the
packet is not in the cache, it needs to be resent from the FH, perhaps after invoking the
necessary congestion control mechanisms at the sender. If the packet was marked as a
sender-retransmitted packet, the DUPACK needs to be routed to the FH because the
TCP stack there maintains state based on the number of duplicate acknowledgments it
receives when it retransmits a packet. Therefore, both these situations require the
DUPACK to be routed to the FH.
- The second case occurs when snoop gets a DUPACK that it doesn’t expect to receive
for the packet. This typically happens when the first DUPACK arrives for the packet,
after a subsequent packet in the stream reaches the MH. The arrival of each successive
packet in the window causes a DUPACK to be generated for the lost packet. In order to
keep the number of such DUPACKs as small as possible, the lost packet is
retransmitted as soon as the loss is detected, and at a higher priority than normal
packets. This is done by maintaining two queues at the link layer for high and normal
priority packets. In addition, snoop also estimates the maximum number of duplicate
acknowledgments that can arrive for this packet. This is done by counting the number of
packets that were transmitted after the lost packet prior to its retransmission.
- The third case occurs when an “expected” DUPACK arrives, based on the above
maximum estimate. The missing packet would have already been retransmitted when the
first DUPACK arrived (and the estimate was zero), so this acknowledgment is
discarded. In practice, the retransmitted packet reaches the MH before most of the later
packets do and the BS sees an increase in the ACK sequence before all the expected
DUPACKs arrive.
Snoop keeps track of the number of local retransmissions for a packet, but resets
this number to zero if the packet arrives again from the sender following a timeout or a fast
retransmission. In addition to retransmitting packets depending on the number and type of
acknowledgments, the snoop protocol also performs retransmissions driven by timeouts.
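The snoop_ack() behavior, covering the new-ACK, spurious-ACK, and three DUPACK cases, can be sketched in the same illustrative style (again with hypothetical names, not the authors' code):

```python
# Illustrative sketch of snoop_ack(): new ACK, spurious ACK, and the three
# DUPACK sub-cases. All names and action strings are hypothetical.

class SnoopAckState:
    def __init__(self):
        self.last_ack = -1
        self.cache = {}             # seq -> cached packet
        self.sender_rexmit = set()  # marked as retransmitted by the sender
        self.expected_dupacks = {}  # seq -> remaining "expected" DUPACKs

    def snoop_ack(self, ack):
        if ack > self.last_ack:
            # New ACK: free acknowledged packets, forward the ACK to the FH.
            for seq in [s for s in self.cache if s <= ack]:
                del self.cache[seq]
            self.last_ack = ack
            return "forward_to_fh"
        if ack < self.last_ack:
            return "discard"  # spurious ACK, a rare situation
        # Duplicate ACK; the missing packet is the one right after `ack`.
        lost = ack + 1
        if lost not in self.cache or lost in self.sender_rexmit:
            # Case 1: snoop cannot help locally; route the DUPACK to the FH.
            return "route_to_fh"
        if lost not in self.expected_dupacks:
            # Case 2: first, unexpected DUPACK -- retransmit locally at high
            # priority and estimate how many more DUPACKs can still arrive
            # (the packets sent after the lost one).
            self.expected_dupacks[lost] = len(
                [s for s in self.cache if s > lost])
            return "local_retransmit"
        # Case 3: an "expected" DUPACK -- silently discarded.
        self.expected_dupacks[lost] -= 1
        return "discard"
```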
For data transfer in the reverse direction, from the MH to the FH, their design involves
a slight modification to the TCP code at the mobile host. At
the base station, they keep track of the packets that were lost in any transmitted window,
and generate negative acknowledgments (NACKs) for those packets back to the mobile.
This is especially useful if several packets are lost in a single transmission window, a
situation that happens often under high interference or in fades where the strength and
quality of the signal are low. These NACKs are sent either when a threshold number of
packets (from a single window) have reached the base station or when a certain amount of
time has expired without any new packets from the mobile. Encoding these NACKs as a bit
vector ensures that the fraction of the scarce wireless bandwidth consumed by
NACKs is low. Their implementation of NACKs is based on using the Selective
Acknowledgment (SACK) option in TCP.
The snoop protocol uses SACKs to make the mobile host retransmit missing
packets quickly (relative to the round-trip time of the connection). The only change
required at the mobile host will be to enable SACK processing. No changes of any sort are
required in any of the fixed hosts. They have implemented the ability to generate SACKs at
the base station and process them at the mobile hosts to retransmit lost packets and are
currently measuring the performance of transfers from the mobile host.
Their experimental results for moderate to high error rates are very encouraging. For bit
error rates greater than 5x10^-7, they show an increase in throughput by a factor of up to 20
compared to regular TCP (Reno), depending on the bit error rate. For error rates that are
lower than this, there is little difference between the performance of snoop and regular TCP
showing that the overhead caused by snoop is negligible [BAL 95].
They have also found that their protocol is significantly more robust at dealing with
multiple packet losses in a single window as compared to regular TCP.
The main drawbacks of the snoop agent are that it does not consider packet loss and
delay due to handoff, and that interference of the link-layer retransmissions with
transport-layer retransmissions is still present [WEN 2000].
6.1.2 TULIP
TULIP is exceptionally robust when bit error rates are high; it maintains high
goodput, i.e., only those packets which are in fact dropped on the wireless link are
retransmitted and then only when necessary.
TULIP provides reliable service for packets carrying TCP data traffic, and
unreliable service for other packet types, such as UDP traffic (e.g., routing-table updates
and DNS packets) and TCP acknowledgments (ACKs).
TULIP eliminates the need for a transport-layer proxy, which must keep per-session
state to actively monitor the TCP packets and suppress any duplicate ACKs it encounters.
An important feature of TULIP is its ability to maintain local recovery of all lost
packets at the wireless link in order to prevent the unnecessary and delayed retransmission
of packets over the entire path and a subsequent reduction in TCP congestion window.
Flow control across the link is maintained by a sliding window, and the sending side’s link
layer accomplishes automatic retransmission of lost packets. Lost packets are detected at
the sender via a bit vector returned by the receiver as a part of every ACK packet. This
allows for quick and efficient recovery of packets over the link and helps to keep delay and
delay variance low.
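The bit-vector loss detection described above can be sketched as follows. This is a minimal sketch under our own assumptions about the vector's representation; the function name and the rule of treating trailing gaps as packets still in flight are ours, not taken from the TULIP specification:

```python
# Illustrative sketch: detecting lost packets from a cumulative ACK plus a
# bit vector of received packets, as returned by the receiver's link layer.
# Representation and heuristics are our own assumptions.

def lost_packets(cum_ack, bit_vector):
    """Return the sequence numbers to retransmit locally.

    cum_ack    -- highest in-order sequence number received
    bit_vector -- bit_vector[i] is True if packet cum_ack + 1 + i arrived
    """
    # Find the last position actually received; any unreceived packet before
    # it is a hole on the link and is retransmitted by the sending side.
    last_rcvd = max((i for i, r in enumerate(bit_vector) if r), default=-1)
    return [cum_ack + 1 + i
            for i in range(last_rcvd)
            if not bit_vector[i]]
```

Because the vector travels in every ACK, the sender learns of holes within one link round-trip, which is what keeps delay and delay variance low.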
TULIP is designed for efficient operation over the half-duplex radio channels
available in today’s commercial radios by strobing packets onto the link in a turn-taking
manner.
TULIP’s timers rely on a maximum propagation delay over the link, rather than
on a round-trip time estimate of the channel delay.
The authors introduce a new feature, MAC acceleration, in which TULIP interacts
with the MAC protocol to accelerate the return of link-layer ACKs (which are most
often piggybacked with returning TCP ACKs) without renegotiating access to the channel.
This feature is applicable to collision avoidance MAC protocols (e.g., IEEE 802.11) to
improve throughput.
TULIP causes no modifications of the network or transport layer software, and the
link layer is not required to know any details regarding TCP or the algorithm it uses.
TULIP maintains no TCP state and makes no decisions on a TCP-session basis, but
rather solely on a per-destination basis. This approach greatly reduces the overhead of
maintaining state information when multiple TCP sessions are active for a given
destination (as is common with web traffic). From the transport layer’s point of view, the
path to the destination through a lossy wireless link simply appears to be a slow link
without losses and TCP simply adjusts accordingly.
Precisely because TULIP keeps no TCP state and does not need to look into
the TCP packet header, it works correctly with any current or future version of TCP
(e.g., TCP-SACK), even if TCP headers are encrypted. TULIP works with both IPv4 and
IPv6; in the latter case, TCP data packets can be identified as requiring reliable service
from the NextHeader field in the IPv6 header. In addition, because this approach does not
restrict the network to the presence of a base station, it can be applied to multi-hop wireless
networks. Furthermore, by controlling the MAC layer, TULIP conserves wireless
bandwidth by piggybacking TCP ACKs with link-layer ACKs and returning them
immediately across the channel through MAC acceleration.
There are other solutions in this scheme, which we mention only in summarized form.
In [CHI 2001], the performance of TCP is studied when the last hop of the
end-to-end connection is wireless and link-layer retransmissions are used to shield the TCP
sender from losses over the wireless channel, that is, to hide losses over the wireless link
from TCP in spite of the time-varying transmission quality.
The authors of [CHI 2001] also focus on link-layer retransmission mechanisms and determine
their parameter settings in such a way that a reliable communication link is provided. In
particular, they have chosen a significant QoS metric at the transport layer and fixed its
target value, adapting the maximum number of link-layer transmissions to the
characteristics of the wireless link so that the desired QoS at the transport layer is provided.
Finally, this approach provides a reliable wireless link in spite of the heterogeneous
environments mobile terminals may encounter, and it enables an efficient use of TCP over
wireless connections without any modification to the transport protocol [CHI 2001].
This protocol is deployed in cellular networks. Even though cellular networks and ad
hoc networks using standard IEEE 802.11 are different technologies, they share the same
problems of mobility and the unreliable nature of the wireless link, which is why its
study is of interest in this TI.
6.2.1 I-TCP
Indirect TCP (I-TCP) is a split-connection solution that uses standard TCP for its
connection over the wireless link. Like other split-connection proposals, it attempts to
separate loss recovery over the wireless link from that across the wireline network, thereby
shielding the original TCP sender from the wireless link. I-TCP utilizes the resources of
Mobility Support Routers (MSRs) to provide transport-layer communications between
mobile hosts and hosts on the fixed networks. With I-TCP, the problems related to mobility
and the unreliability of the wireless link are handled entirely within the wireless link.
I-TCP is particularly suited for throughput-intensive applications [BAK 95].
I-TCP is a reliable stream-oriented transport layer protocol for mobile hosts. I-TCP
is fully compatible with TCP/IP on the fixed network and is built around the following
simple concepts [BAK 97]:
1) A transport-layer connection between a Mobile Host (MH) and a Fixed Host (FH) is
established as two separate connections, one over the wireless medium and another over the
fixed network with the current mobility support router (MSR) being the intermediate point.
2) If the MH switches cells during the lifetime of an I-TCP connection, the center point of
the connection moves to the new MSR.
3) The FH is completely unaware of the indirection and is not affected even when the MH
hands off, i.e., when the intermediate point of the I-TCP connection moves from one MSR
to another.
When a mobile host (MH) wishes to communicate with some fixed host (FH) using
I-TCP, a request is sent to the current MSR (which is also attached to the fixed network) to
open a TCP connection with the FH on behalf of the MH. The MH communicates with its
MSR on a separate connection using a variation of TCP that is tuned for wireless links and
is aware of mobility, as shown in figure 6.2 below.
FIGURE 6.2 - Indirection with I-TCP: the fixed host uses existing transport protocols over IP across the fixed network, while a transport-layer intermediary at the MSR handles the wireless side; on handoff, the intermediary moves between MSR-1 (CELL-1) and MSR-2 (CELL-2)
Figure 6.2 above shows a mobile host (MH) that first established a connection
with a fixed host (FH) through MSR-1 and then moves to another cell under MSR-2. When the MH
requests an I-TCP connection with the FH while located in the cell of MSR-1, MSR-1
establishes a socket with the MH address and MH port number to handle the connection
with the fixed host. It also opens another socket with its own address and some suitable port
number for the wireless side of the I-TCP connection to communicate with the MH.
If the MH switches cells (hand off or hand over), the state associated with two
sockets of the I-TCP connection at MSR-1 is handed over to the new MSR (MSR-2). MSR-
2 then creates two sockets corresponding to the I-TCP connection with the same endpoint
parameters that the sockets at MSR-1 had associated with them. Since the connection
endpoints for both wireless and the fixed parts of the I-TCP connection do not change after
a move, there is no need to re-establish the connection at the new MSR. This also ensures
that the indirection in the transport layer connection is completely hidden from the FH.
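The per-connection state kept at an MSR and its transfer on handoff can be sketched as follows. This is an illustrative model with hypothetical names (MSR, open_itcp, handoff_to), not the I-TCP implementation:

```python
# Illustrative sketch of I-TCP socket state at an MSR and its handoff.
# The new MSR recreates both sockets with the same endpoint parameters,
# so the FH never observes the move. All names are hypothetical.

class MSR:
    """Mobility Support Router holding I-TCP per-connection socket state."""

    def __init__(self, name):
        self.name = name
        self.connections = {}

    def open_itcp(self, conn_id, mh_end, fh_end, msr_wireless_end):
        # Wired half: a socket with the MH's address/port toward the FH.
        # Wireless half: the MSR's own endpoint toward the MH.
        self.connections[conn_id] = {
            "wired": {"local": mh_end, "remote": fh_end},
            "wireless": {"local": msr_wireless_end, "remote": mh_end},
        }

    def handoff_to(self, conn_id, new_msr):
        # Hand the state of both sockets to the new MSR unchanged; since
        # the endpoint parameters do not change, no re-establishment of the
        # connection is needed and the indirection stays hidden from the FH.
        new_msr.connections[conn_id] = self.connections.pop(conn_id)
```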
We present some advantages of I-TCP according to Bakre and Badrinath [BAK 97]:
1) It separates the flow control and congestion control functionality on the wireless link
from that on the fixed network. This is desirable because of the vastly different error and
bandwidth characteristics of the two kinds of links.
2) A separate transport protocol for the wireless link can support notification of events such
as disconnections and moves, and of other features of the wireless link such as the available
bandwidth, to the higher layers, where it can be used by link-aware and location-aware
mobile applications.
3) Indirection at the MSR allows faster reaction to mobility and wireless related events
compared to a scheme in which the remote communicating host tries to react to such
events.
4) An indirect transport protocol can provide some measure of reliability over the wireless
link for those applications that prefer to use unreliable transport over the fixed network.
5) Indirect transport protocols provide backward compatibility with the existing wired
network protocols thus obviating modifications at fixed hosts for accommodating mobile
hosts.
6) Indirection allows an MSR to manage much of the communication overhead for a mobile
host. Thus, a mobile host (e.g., a small palmtop) that only runs a very simple wireless
protocol to communicate with the MSR can still access fixed network services such as
WWW which may otherwise require a full TCP/IP stack running on the mobile.
7) Indirect transport protocols allow the use of different MTUs over the wired and the
wireless part of the connection. Since the wireless links have lower bandwidth and higher
error rate, the optimal MTU for the wireless medium may be smaller than the smallest
MTU supported by the wired network.
I-TCP also has some drawbacks:
1. I-TCP violates the semantics of TCP [PAR 99]. I-TCP acknowledgments and semantics
are not end-to-end. Since the TCP connection is explicitly split into two distinct ones,
acknowledgments of TCP packets can arrive at the sender even before the packet actually
reaches the intended recipient. I-TCP derives its good performance from this splitting of
connections. However, as we shall show, there is no need to sacrifice the semantics of
acknowledgments in order to achieve good performance [BAL 95].
2. Applications running on the mobile host have to be re-linked with the I-TCP library and
need to use special I-TCP socket system calls in the current implementation [BAL 95].
3. Every packet needs to go through the TCP protocol stack and incur the associated
overhead four times (once at the sender, twice at the base station, and once at the receiver).
This also involves copying data at the base station to move the packet from the incoming
TCP connection to the outgoing one. This overhead is lessened if a more lightweight,
wireless specific reliable protocol is used on the last link [BAL 95].
6.2.2 WTCP
The authors of WTCP [RAT 98] propose an efficient mechanism in which the base station is
involved in the TCP connection. The conceptual view of the transport connection is shown in figure 6.3.
FIGURE 6.3 - Conceptual view of the transport connection in WTCP: TCP runs between the end hosts, with WTCP at the base station over the wireless link and M-IP at the network layer
WTCP receives data segment from source: The network layer protocol (M-IP)
running in the base station detects any TCP segment that arrives for a mobile host and
sends it to the WTCP input buffer. If this segment is the next segment expected from the
fixed host, it is stored in the WTCP buffer along with its arrival time, and the receive array
is updated. The receive array maintains the sequence numbers of the segments received by
WTCP. The sequence number of the next segment expected from the fixed host is advanced
by the number of bytes received. If the newly arriving segment has a larger sequence number
than what is expected, the segment is buffered, the arrival time is recorded and the receive
array is updated, but the sequence number of the next packet expected is not changed. If the
sequence number of the segment is smaller than what is expected, the segment is dropped.
WTCP sends data segments to mobile host: On the wireless link, WTCP tries to
send the segments that are stored in its buffer. WTCP independently performs flow and
error control for the wireless connection. It also maintains state information for the wireless
connection, such as sequence number of last acknowledgment received from mobile host,
and the sequence number of the last segment that was sent to the mobile host. From its
buffer, WTCP transmits all segments that fall within the wireless link transmission
window. Each time a segment is sent to the mobile host (including a retransmission), the
timestamp of the segment is incremented by the amount of time that segment spent in the
WTCP buffer (residence time). When a segment is sent to the mobile host, the base station
schedules a new timeout if there is no other timeout pending.
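The residence-time adjustment can be sketched as follows. This is a minimal sketch under our own assumptions (class and field names, and the use of seconds as the time unit, are ours): before each transmission or retransmission to the mobile host, the segment's timestamp is advanced by the time it sat in the WTCP buffer, so buffering at the base station is hidden from the source's RTT estimate:

```python
# Illustrative sketch of WTCP's residence-time timestamp adjustment.
# Names and time units are our own assumptions.

class WTCPBuffer:
    def __init__(self):
        self.buf = {}  # seq -> [timestamp, time of arrival/last send]

    def receive_from_fh(self, seq, timestamp, now):
        # Store the segment with its original timestamp and arrival time.
        self.buf[seq] = [timestamp, now]

    def send_to_mh(self, seq, now):
        # Advance the timestamp by the residence time in the WTCP buffer,
        # so the TCP source's RTT estimate excludes base-station buffering.
        timestamp, since = self.buf[seq]
        adjusted = timestamp + (now - since)
        self.buf[seq] = [adjusted, now]  # reset for a later retransmission
        return adjusted
```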
According to their experiments, WTCP achieves throughput values 4-8 times higher
than TCP-Tahoe [RAT 98].
A drawback of WTCP is that the residence time a TCP segment spends in the base station
buffer can affect the RTT value estimated at the TCP source [RAT 98].
Although a wide variety of TCP versions are used on the Internet, the current de
facto standard for TCP implementations is TCP Reno [BAL 97].
6.3.1 TCP-Probing
In the event of persistent error conditions (e.g., congestion, burst link errors), the
duration of the probe cycle will be naturally extended and is likely to be commensurate
with that of the error conditions, since probe segments will be lost. The data transmission
process is thus effectively “sitting out” these error conditions while awaiting successful
completion of the probe cycle. In the case of random loss, however, the probe cycle will
complete much more quickly, in proportion to the prevailing density of occurrence of the
random errors [TSA 2000].
The sender enters a probe cycle when either of two situations applies [TSA 2000]:
1. A timeout event occurs. If network conditions detected when the probe cycle
completes are sufficiently good, then instead of entering Slow Start, TCP-probing
simply picks up from the point where the timeout event occurred. In other words,
neither congestion window nor threshold is adjusted downwards. They call this
“Immediate Recovery”. Otherwise, slow start is entered.
2. Three duplicate acknowledgements (DACKs) are received. Again, if prevailing
network conditions at the end of the probe cycle are sufficiently good, Immediate
Recovery is executed. Note, however, that here Immediate Recovery will also
expand the congestion window in response to all DACKs received by the time the
probe cycle terminates. This is analogous to the window inflation phase of Fast
Retransmit in Reno and New Reno. Alternatively, if deteriorated network conditions
are detected at the end of the probe cycle, the sender enters Slow Start. This is in
marked distinction to Reno and New Reno behavior at the end of Fast Retransmit.
The logic here is that, having sat out the error condition during the probe cycle and
found that network throughput is nevertheless still poor at the end of the cycle, a
conservative transmission strategy is more clearly indicated.
Implementation: A probe cycle uses two segments (PROBE1, PROBE2) and their
corresponding acknowledgements (PR1_ACK and PR2_ACK), implemented as option
extensions to the TCP header. The segments carry no payload, as we said before. The option
header extension consists of the fields:
(i) type: in order to distinguish between the four probe segments (this is
effectively the option code field).
(ii) (options) length
(iii) id number: used to identify an exchange of probe segment.
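A possible encoding of this option extension is sketched below. The field widths (one byte each for type and length, two bytes for the id number) are our assumption for illustration; [TSA 2000] lists the fields but the exact sizes here are not given in the text:

```python
# Illustrative encoding of the probe option extension (type, length,
# id number). Field widths are our own assumption.
import struct

# The four probe segment types distinguished by the option code field.
PROBE1, PR1_ACK, PROBE2, PR2_ACK = range(4)

def pack_probe_option(kind, id_number):
    # 1 byte type, 1 byte option length (4), 2 bytes id number,
    # in network byte order.
    return struct.pack("!BBH", kind, 4, id_number)

def unpack_probe_option(raw):
    kind, _length, id_number = struct.unpack("!BBH", raw)
    return kind, id_number
```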
The sender initiates a probe cycle by transmitting a PROBE1 segment, to which the
receiver immediately responds with a PR1_ACK, upon receipt of which the sender
transmits a PROBE2. The receiver acknowledges this second probe with a PR2_ACK
and returns to the ESTAB state, as shown in figure 6.4.
FIGURE 6.4 - State transitions of the probe cycle (PROBE2, PR2_ACK, PR2_SENT) [TSA 2000]
The sender makes a round-trip time (RTT) measurement based on the time delay
between sending the PROBE1 and receiving the PR1_ACK, and another based on the
exchange of PROBE2 and PR2_ACK. The sender makes use of two timers during probing.
The first is a Probe timer, used to determine if a PROBE1 or its corresponding PR1_ACK
segment is missing, and the same again for the PROBE2/PR2_ACK segments. The second
is a Measurement timer, used to measure each of the two RTTs from the probe cycle, in
turn. The probe timer is set to the estimated RTT value current at the time the probe cycle is
triggered.
The value in the option extension id number identifies a full exchange of PROBE1,
PR1_ACK, PROBE2 and PR2_ACK segments, rather than individual segments within that
exchange. Thus, in the event that the PROBE1 or its PR1_ACK is lost (i.e., the probe timer
expires), the sender reinitializes the probe and measurement timers, and retransmits
PROBE1 with a new id number. Similarly, if a PROBE2 or its PR2_ACK is lost, the sender
reinitiates the exchange of probe segments from the beginning by retransmitting a PROBE1
with a new id number. A PR1_ACK carries the same id number as the corresponding
PROBE1 that it is acknowledging; this is also the id number used by the subsequent
PROBE2 and PR2_ACK segments. The receiver moves to the ESTAB state after sending
the PR2_ACK that should terminate the probe cycle. In this state, and should the
PR2_ACK be lost, the receiver would receive - instead of data segments - a retransmitted
PROBE1 that is reinitiating the exchange of probe segments since the sender’s probe timer,
in the meantime, will have expired.
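The sender-side handling of probe losses and id numbers can be sketched as a small state machine; this is our own reading of the description above (class and method names are hypothetical), not the authors' code:

```python
# Illustrative sender state machine for the probe cycle: any loss in the
# PROBE1/PR1_ACK or PROBE2/PR2_ACK exchange restarts the whole cycle with
# a fresh id number. All names here are hypothetical.

class ProbeSender:
    def __init__(self):
        self.id_number = 0
        self.state = "IDLE"

    def start_cycle(self):
        self.id_number += 1          # one id for the whole exchange
        self.state = "PROBE1_SENT"
        return ("PROBE1", self.id_number)

    def on_probe_timeout(self):
        # A probe or its ACK was lost: reinitialize and restart from PROBE1.
        return self.start_cycle()

    def on_pr1_ack(self, id_number):
        if self.state == "PROBE1_SENT" and id_number == self.id_number:
            self.state = "PROBE2_SENT"
            return ("PROBE2", self.id_number)  # same id as the PR1_ACK
        return None

    def on_pr2_ack(self, id_number):
        if self.state == "PROBE2_SENT" and id_number == self.id_number:
            self.state = "IDLE"      # probe cycle completed
            return "DONE"
        return None
```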
According to the authors, Tsaoussidis and Badr [TSA 2000], TCP-Probing can be a
protocol of choice for heterogeneous wired/wireless communications with respect to energy
and throughput performance, so it is a protocol that can be used in both environments.
Under the assumptions that a corrupted packet can still reach the destination and the
source address of a corrupted packet is still known, whenever a corrupted packet is
received, a NACK is sent. Upon the detection of the NACK, only the corrupted packet is
retransmitted by the source and no window size adjustments are performed. After the
retransmission, the source resumes normal packet transmission. To avoid inflation of the
round-trip time estimate, the round-trip time measurements from all the packets which have
been sent before the retransmission of the corrupted packet are ignored [CHA 97].
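The source's reaction to a NACK can be sketched as follows, under the stated assumptions; this is an illustrative model (class and method names are hypothetical), not the code of [CHA 97]:

```python
# Illustrative sketch of NACK handling at the source: retransmit only the
# corrupted packet, keep the congestion window untouched, and discard RTT
# samples for packets sent before the retransmission. Names are hypothetical.

class NackSender:
    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.nack_time = float("-inf")  # time of the last NACK-triggered send

    def on_nack(self, seq, now):
        # Retransmit only the corrupted packet; note that self.cwnd is
        # deliberately not reduced, then normal transmission resumes.
        self.nack_time = now
        return ("retransmit", seq)

    def rtt_sample(self, sent_at, now):
        # Ignore measurements from packets sent before the retransmission
        # of the corrupted packet, to avoid inflating the RTT estimate.
        if sent_at < self.nack_time:
            return None
        return now - sent_at
```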
A drawback of the NACK scheme is that it cannot cope with the degradation caused
by handoff [CHA 97].
SIF Estimation: if the TCP sender has accumulated a certain number of duplicate
acknowledgements (DUPACKs) and is confident that the next unacknowledged
packet is lost, it can invoke the Fast Retransmit algorithm immediately. The number of
DUPACKs that should be accumulated is a function of the number of currently
unacknowledged packets, computed through the Segment In Flight (SIF) estimation.
A new variable sifest_, which is the sender’s estimation of the amount of in-flight
segments and is zero initially, is proposed to be added in the sender for every active TCP
session. When a new data packet is sent, sifest_ is increased by one. When the TCP
sender gets a new acknowledgement, sifest_ is decremented by the number of
acknowledged segments. If a timeout occurs, or if the sender has been idle for more than
one round-trip time (RTT), sifest_ is reset to zero, since the sender is going to reprobe
the network. If the TCP sender retransmits a sent data packet triggered by schemes other than
a time out, sifest_ remains untouched, since it is assumed that the previous copy of this
packet has already left the network. The receiver and intermediate systems are unmodified,
and the modified TCP can be deployed incrementally and interact with ordinary TCP
compatibly over the Internet.
When the TCP sender already has (sifest_ - 1) DUPACKs, or the DUPACKs have
exceeded the fixed threshold, and the oldest unacknowledged packet has not been resent
in the last RTT, this packet is resent immediately according to Fast Retransmit.
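The sifest_ bookkeeping and the resulting Fast Retransmit trigger can be sketched as follows; this is our own sketch of the description in [PAN 2000], with hypothetical class and method names:

```python
# Illustrative sketch of the sifest_ variable and the SIF-based Fast
# Retransmit trigger. All names are hypothetical.

class SifSender:
    def __init__(self, dupack_threshold=3):
        self.sifest = 0            # sifest_: estimated in-flight segments
        self.dupacks = 0
        self.threshold = dupack_threshold

    def on_send_new(self):
        self.sifest += 1           # a new data packet enters the network

    def on_new_ack(self, acked_segments):
        self.sifest = max(0, self.sifest - acked_segments)
        self.dupacks = 0

    def on_timeout_or_idle(self):
        self.sifest = 0            # the sender will reprobe the network

    def on_dupack(self):
        """Return True when Fast Retransmit should fire."""
        self.dupacks += 1
        # Fire at (sifest_ - 1) DUPACKs, or at the ordinary fixed
        # threshold, whichever is reached first.
        return (self.dupacks >= self.threshold
                or (self.sifest > 1 and self.dupacks >= self.sifest - 1))
```

With few segments in flight, the trigger fires well before the ordinary three-DUPACK threshold, which is precisely the situation SIF targets.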
If more new data packets are allowed by CWND and the CWND inflating
algorithm, they can be piped into the network to trigger more ACKs and to help TCP
endpoints regain the self-clocking earlier. Integrating the SIF estimation with the ordinary
TCP variants is quite straightforward and conflict-free with the original algorithms. With
the SIF enhancement, the TCP sender might become less tolerant on re-ordered packets,
and the estimation error might be accumulated for a while. However, when SIF is effective
since there are only few in-flight segments and the estimation is re-initialized periodically,
the probability of DUPACK occurrence due to packet recording is negligible.
For their experiments, the TCP modules in the ns-2 simulator were modified
to incorporate the Segment In Flight estimation algorithm, and it can be seen
that the SIF-enhanced TCP improves application performance, with higher throughput
and fewer idle periods than ordinary TCP. Their simulations also showed that TCP with SIF
achieves better end-to-end performance, while still keeping fairness and compatibility
with ordinary TCP variants [PAN 2000].
Some end-to-end solutions are mentioned in [BAL 97] and are summarized as follows:
6.4.1 ATCP
(Figure omitted: the ATCP layer sits between TCP and IP, intercepting DATA packets on the ipintr() and ip_output() paths [JIA 2001].)
2) ATCP is transparent, which means that nodes with and without ATCP can set up TCP
connections normally.
4) ATCP does not interfere with TCP’s congestion control behavior when there is network
congestion.
The ATCP layer is only active at the TCP sender (in a duplex communication, the
ATCP layer at both participating nodes will be active). This layer monitors TCP state and
the state of the network based on ECN (Explicit Congestion Notification) and ICMP
(Internet Control Message Protocol) messages and takes appropriate action. To understand
ATCP’s behavior, consider figure 6.7, which illustrates ATCP’s four possible states:
Normal, Congested, Loss, and Disconnected. When the TCP connection is initially
established, ATCP at the sender is in the Normal state. In this state, ATCP does nothing
and is invisible [JIA 2001].
• Lossy Channel: When the connection from the sender to the receiver is lossy, it is likely
that some segments will not arrive at the receiver or may arrive out-of-order. Thus, the
receiver may generate duplicate acknowledgments (ACKs) in response to out-of-sequence
segments. When TCP receives three consecutive duplicate ACKs, it retransmits the
offending segment and shrinks the congestion window. It is also possible that, due to lost
ACKs, the TCP sender’s retransmission timeout (RTO) may expire, causing it to
retransmit one segment and invoke congestion control. ATCP in its normal state counts the
number of duplicate ACKs received for any segment. When it sees that three duplicate
ACKs have been received, it does not forward the third duplicate ACK but puts TCP in
persist mode (like the snooze state in [CHA 88]). Similarly, when ATCP sees that TCP’s
RTO is about to expire, it again puts TCP in persist mode. By doing this, the authors
ensure that the TCP sender does not invoke congestion control, because that is the wrong
thing to do under these circumstances. After ATCP puts TCP in persist mode, ATCP enters
the loss state. In the loss state, ATCP transmits the unacknowledged segments from TCP’s
send buffer. It maintains its own separate timers to retransmit these segments in the event that ACKs are
not forthcoming. Eventually, when a new ACK arrives (i.e., an ACK for a previously
unacknowledged segment), ATCP forwards that ACK to TCP, which also removes TCP from
persist mode. ATCP then returns to its normal state [JIA 2001].
• Congested: The authors assume that when the network detects congestion, the ECN flag is set in
ACK and data packets. They also assume that ATCP receives this message when in its
normal state. ATCP moves into its congested state and does nothing. It ignores any
duplicate ACKs that arrive and it ignores imminent RTO expiration events. In other words,
ATCP does not interfere with TCP’s normal congestion behavior. After TCP transmits a
new segment, ATCP returns to its normal state [JIA 2001].
• Other Transitions: Finally, when ATCP is in the loss state, reception of an ECN or an
ICMP Source Quench message will move ATCP into congested state and ATCP removes
TCP from its persist state. Similarly, reception of an ICMP Destination Unreachable
message moves ATCP from either the loss state or the congested state into the
disconnected state and ATCP moves TCP into persist mode (if it was not already in that
state) [JIA 2001].
• Effect of Lost Messages: Note that due to the lossy environment, it is possible that an
ECN may not arrive at the sender or, similarly, a “Destination Unreachable” message may
be lost. If an ECN message is lost, the TCP sender will continue transmitting packets.
However, every subsequent ACK will contain the ECN, thus ensuring that the sender will
eventually receive the ECN causing it to enter the congestion control state as it is supposed
to. Likewise, if there is no route to the destination, the sender will eventually receive a
retransmission of the “Destination Unreachable” message causing TCP to be put into the
persist state by ATCP. Thus, in all cases of lost messages, ATCP performs correctly [JIA
2001].
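The four states and the transitions described above can be captured in a small sketch. The event names are illustrative; the Disconnected-to-Normal transition on a new ACK is an assumption based on the description of the receiver answering the sender's probe packets:

```python
# The four ATCP sender states from [JIA 2001].
NORMAL, CONGESTED, LOSS, DISCONNECTED = (
    "normal", "congested", "loss", "disconnected")

def atcp_transition(state, event):
    """Return the next ATCP state for one of the events in the text."""
    if event == "icmp_dest_unreachable":
        return DISCONNECTED              # TCP is also put into persist mode
    if state == NORMAL:
        if event in ("third_dupack", "rto_about_to_expire"):
            return LOSS                  # TCP put into persist mode
        if event in ("ecn", "icmp_source_quench"):
            return CONGESTED             # let TCP's congestion control act
    elif state == LOSS:
        if event == "new_ack":
            return NORMAL                # ACK forwarded; persist removed
        if event in ("ecn", "icmp_source_quench"):
            return CONGESTED             # persist also removed
    elif state == CONGESTED:
        if event == "tcp_sends_new_segment":
            return NORMAL
    elif state == DISCONNECTED:
        if event == "new_ack":           # assumption: receiver answered a probe
            return NORMAL
    return state                         # everything else: no change
```

Note that the sketch ignores events that the text says ATCP ignores in a given state (e.g., duplicate ACKs while Congested), which is exactly the "does nothing" behavior described above.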
ATCP changes TCP’s behavior under lossy conditions (due to high BER):
ATCP retransmits unacknowledged segments while TCP is put into persist state. Thus,
TCP does not invoke congestion control. In the event that the source and the destination get
disconnected (either for short periods while a new route is computed or for longer periods
due to partition), TCP is again put into persist mode for the duration of the disconnection
and no segments are transmitted by ATCP. When the network is reconnected, TCP
automatically comes out of persist mode because the receiver responds to the sender’s
probe packets. However, the congestion window used in this case is one segment initially.
TCP’s congestion behavior is unchanged ensuring that TCP appropriately throttles back its
transmission rate when the network is congested [JIA 2001].
- If ATCP is used in a fixed internet implementing ECN, ATCP will operate correctly, but if
the fixed internet does not implement ECN, then it is necessary to split the connection at
the node that connects the wireless network to the wired internet. Thus, there will be two
conjugated TCP connections; this is similar to I-TCP [BAK 95] for cellular networks.
- Energy consumption is not considered in [JIA 2001], but minimizing the processing
involved is important to increase battery life.
- The test bed deployed in [JIA 2001] is an emulation using a wired network of five
Pentium computers, each with two Ethernet (CSMA/CD) IEEE 802.3 interfaces, instead of
a real wireless ad hoc network with standard IEEE 802.11 (CSMA/CA).
Finally, regarding performance, the authors implemented their protocol in FreeBSD and
showed that their solution improves TCP’s throughput by a factor of 2–3 [JIA 2001].
6.4.2 LSSA
In wireless mobile ad hoc networks, each Mobile Host emits a beacon signal that is
used to identify itself and notify its neighbors about its existence. In this environment, the
Link Signal Strength Agent (LSSA) is introduced and it is a new layer that overcomes the
problems associated with the nature of the wireless links and the mobility of nodes and
therefore is applicable in dynamic wireless networks [CHE 2001].
The LSSA resides between the TCP and IP layers, in a position similar to the
Internet Control Message Protocol (ICMP) in the Internet. When receiving the signal
strength indication from lower layers, LSSA encapsulates it into a Link Signal Strength
Indication (LSSI) message and sends it to the TCP source and destination. By analyzing
this information, the TCP source is able to monitor the state of current TCP connections
(e.g., good (strong) condition, bad (weak) condition, or down). According to the link
condition, the TCP source may freeze its congestion window, invoke congestion avoidance,
or request new route reconstruction. Each node can detect the strength of a signal coming
from all its neighbors [CHE 2001].
LSSA also receives the LSSI messages from its neighbors. It may append its
own connection information to the message and relay it to the next hop. In order to support
this new layer, a new protocol value is needed in the protocol field of the IP header;
thus, when receiving such a message, the IP layer will pass it directly to the LSSA layer.
Each node receiving an LSSI message checks its own link status and appends its information
to it. In order to reduce the overhead caused by the LSSI messages, nodes receiving a message
from their neighbors do not add more information to the message if they find that the
connection is in good condition [CHE 2001].
To simulate the impact of the high bit error rate of wireless links and frequent topology
changes, they introduce the link state machine shown in figure 6.8. When a connection
becomes weak but the topology does not change, the link is assumed to be in the Weak
state. The Loss state is entered only when a network topology change occurs. The periods of a
connection staying in the Good state and Weak state are exponentially distributed with
means mean_good_period and mean_weak_period respectively. Once the link is in Good
state, it may transition with probability “p” to the Weak state and “(1- p)” to the Loss state.
The system will invoke the route re-construction procedure to find a new route when it
detects that a route is invalid.
FIGURE 6.8 - Link state machine with Good, Weak, and Loss states [CHE 2001]
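A minimal sketch of this link model follows. The parameter values, and the behavior of the Weak state when its period expires, are illustrative assumptions, not values from [CHE 2001]:

```python
import random

def simulate_link(steps, p=0.8, mean_good_period=10.0,
                  mean_weak_period=2.0, seed=42):
    """Yield (state, dwell_time) pairs for the Good/Weak/Loss link model."""
    rng = random.Random(seed)
    state = "Good"
    for _ in range(steps):
        if state == "Good":
            # Good periods are exponentially distributed
            yield state, rng.expovariate(1.0 / mean_good_period)
            # from Good: Weak with probability p, Loss with probability 1 - p
            state = "Weak" if rng.random() < p else "Loss"
        elif state == "Weak":
            # Weak periods are exponentially distributed
            yield state, rng.expovariate(1.0 / mean_weak_period)
            state = "Good"   # assumption: the link recovers if topology is stable
        else:
            # Loss: route re-construction is invoked to find a new route
            yield state, 0.0
            state = "Good"
```

Driving a TCP simulation with such a trace lets the Weak state model temporary bad link quality separately from the topology changes that produce the Loss state, which is the distinction LSSA exploits.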
The LSSA was evaluated in the Optimized Network Engineering Tool (OPNET)
[OPT 2003], and its initial results indicate that this proposal overcomes the problems
associated with the nature of the wireless links and the mobility of nodes, and therefore is
applicable in dynamic wireless networks [CHE 2001].
We call this group emergent because it contains new schemes and proposals that share
characteristics of some or all of the schemes mentioned above, and/or original new
solutions. The feedback-based scheme was first introduced in [CHA 88] and upgraded in
[CHA 2001].
Standard TCP does not distinguish between congestion and packet loss due to
transmission error or route failure. So treating route failure as congestion (and invoking
congestion control) is not advisable, because congestion and route failure are disparate
phenomena that have to be handled independently and separately. In this scheme the
source is informed of the route failure so that it does not unnecessarily invoke congestion
control and can refrain from sending any further packets until the route is restored.
Consider, for simplicity, a single bulk data transfer session, where a source MH
(mobile host) is sending packets to a destination MH. Every MH behaves in a cooperative
fashion by acting as a router, allowing packets destined to other MHs to pass through it.
As soon as the network layer at an intermediate MH (henceforth referred to as the failure
point, FP) detects the disruption of a route due to the mobility of the next MH along that
route, it explicitly sends a route failure notification (RFN) packet to the source and records
this event. Each intermediate node that receives the RFN packet invalidates the particular
route and prevents incoming packets intended for the destination from passing through that
route. If the intermediate node knows of an alternate route to the destination, this alternate
route can now be used to support further communication, and the RFN is discarded.
Otherwise, the intermediate node simply propagates the RFN toward the source. On
receiving the RFN, the source goes into a snooze state (see figure 6.9) and performs the
following:
FIGURE 6.9 - The snooze state added to the TCP state diagram: an RFN moves the
connection from the Established state to the Snooze state, and an RRN or the route
failure timeout moves it back to Established. RFN: Route Failure Notification;
RRN: Route Re-establishment Notification [CHA 2001]
Let one of the intermediate nodes that has previously forwarded an RFN to the
source learn about a new route to the destination (through a routing update). This
intermediate node then sends an RRN packet to the source (whose identity it previously
stored). All further RRNs received by this intermediate node for the same source-
destination connection are discarded. Any other node that receives the RRN simply
forwards it toward the source [CHA 2001].
As soon as the source receives the RRN, it changes to an active state from the
snooze state. It then flushes out all unacknowledged packets in its current window. Since
most packets in transit during the failure period would have been affected, packets can be
flushed out without waiting for acknowledgments from the receiver. The number of
retransmitted packets directly depends on the current window size. These steps in effect
reduce the effect of TCP’s congestion control mechanism when transmission restarts.
Communication now resumes at the same rate as before the route failure occurred,
ensuring that there is no unnecessary loss of throughput in this period. TCP’s congestion
control mechanism can now take over and adjust to the existing load in the system. The
route failure timer ensures that the source does not indefinitely remain in the snooze state
waiting for an RRN, which may be delayed or lost [CHA 2001].
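The source side of this feedback scheme can be sketched as a two-state machine; the class and method names are illustrative, not from [CHA 88] or [CHA 2001]:

```python
class FeedbackSource:
    """TCP-F-style source: active normally, snoozing after an RFN."""

    def __init__(self, route_failure_timeout=30.0):
        self.state = "active"
        self.route_failure_timeout = route_failure_timeout  # seconds (assumed)
        self.unacked_window = []       # packets sent but not yet ACKed

    def on_rfn(self):
        # route failure notification: stop sending, freeze timers and window
        self.state = "snooze"

    def on_rrn(self):
        # route re-established: resume at the pre-failure rate and flush
        # (retransmit) every unacknowledged packet in the current window
        self.state = "active"
        return list(self.unacked_window)

    def on_route_failure_timer(self):
        # avoid waiting indefinitely for a delayed or lost RRN
        if self.state == "snooze":
            return self.on_rrn()
        return []
```

The number of packets returned by `on_rrn()` is exactly the current window size, matching the observation above that the retransmission cost depends directly on the window at failure time.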
They also assume that packets reaching the failure point are lost when the next link
from the failure point is down. However, this is not so if the intermediate nodes can buffer
these packets to a limited capacity. In this case, we could do the following. As the RFN
propagates to the source from the failure point, all the intermediate nodes can temporarily
buffer subsequent packets. If there is a substantial overlap between the newly established
route and the old one, the RRN message can be used to flush out the buffers (i.e., the
buffered packets may be sent to the destination along the newly established route).
Similarly, intermediate nodes may forward the buffered packets to the destination without
waiting for an RRN on learning of new routes. This buffering scheme has the following
advantages:
• It will save packet retransmissions, and packet flow can resume even before the source
learns about the route reestablishment.
• Since buffering is staggered across the intermediate hops, the buffering overhead at each
node is expected to be low.
standard TCP libraries. Hence, it is of interest to study the behavior of TCP in the context
of ad hoc networks and evaluate the effect of dynamic topology on TCP performance.
The studies and simulations of [CHA 2001] indicate that, because of frequent and
unpredictable route disruptions, TCP’s performance is substantially degraded.
- In [CHA 88]’s method, the source continues using the old congestion window for the new
route. This is a problem because the congestion window size is route-specific (since it seeks
to approximate the available bandwidth) [JIA 2001].
- Reference [CHA 88] also does not consider the effect of congestion, out-of-order packets,
and bit error [JIA 2001].
- In [CHA 88] a feedback scheme has been proposed in order to improve TCP
performance in mobile ad hoc networks by notifying the source about route failures.
However, this scheme fails to distinguish between route failures due to topology changes
and temporary bad link quality, and therefore TCP performance is degraded both by
topology changes and by the bad quality of the wireless connection [CHE 2001].
6.5.2 TCP-BUS
This proposal deploys Associativity-Based Routing (ABR) [TOH 96], a
source-initiated on-demand protocol, as the underlying routing protocol. ABR advocates
stable and long-lived routes. It also takes advantage of feedback information for detecting
route disconnection, which is also used in TCP-F [CHA 88], [KIM 2000].
2. Extending timeout values: The timeout values for buffered packets at the source
and at nodes along the path to the PN are doubled, mainly because it takes time to recover
the route in the case of route loss and re-establishment [KIM 2000].
4. Avoiding unnecessary requests for fast retransmission: Using the ABR protocol,
packets along the path from the PN to the destination may be discarded by intermediate
nodes after receiving a Route Notification (RN) message [KIM 2000].
TCP-BUS at the source: the source transmits its segments in the same manner as general
TCP when there are no feedback messages (such as ERDN and ERSN messages). The slow
start and congestion avoidance mechanisms function as normal; however, when the source
receives the ERDN feedback message from the network, it stops sending data packets. In
addition, it freezes all timers and window sizes in a manner similar to TCP-F [CHA 88],
[KIM 2000].
TCP-BUS at intermediate nodes: after a node (the PN) detects a route failure, it sends
the ERDN message to notify the source of the route failure and initiates partial route
discovery. While the ERDN message is propagated towards the source, each intermediate
node stops further transmission of data packets and buffers all pending packets to defer
transmission. After receiving a reply message, the PN notifies the source of successful
route re-establishment via an ERSN message. At each intermediate node receiving the
ERSN message, transmission of buffered packets resumes. A request for selective
retransmission of lost packets is generated when the receiver detects a hole in the sequence
of consecutive segments. It requires the source to react to the congestion.
TCP-F and TCP-BUS are not the only solutions in this category; briefly, we introduce
some more below.
6.5.3 MAITE
MAITE shares features of split-connection and link-layer schemes. Figure 6.10
below shows the topology of MAITE, in which two mobile hosts (MHs) use wireless links
to communicate with each other. Since both links could be experiencing very difficult
conditions, individual control over each wireless link is desirable (split scheme). The
supervisory host allows the implementation of specific controls over the links. As shown in
the figure, an intermediate access point and a supervisory host are needed. A wired
communication link exists between the supervisory host and the access point.
can be detected if the link layer at an access point informs the supervisory host when a
high BER condition occurs.
A high BER condition can be detected when CRC check on received frames
continuously fails. If the link layer is able to distinguish between these two conditions,
upper layers can be informed. After that, appropriate actions at the transport level can be
taken. MAITE incorporates into the MH the ability to sense disconnections via messages
sent from the hardware to the upper layers.
In order to illustrate how MAITE works we will present the state transition
diagrams at both a sending MH and at the supervisory host.
Figure 6.11 below shows that, at the mobile host, high BER conditions are reported
to the transport layer by the link layer via a HighBER Notification message. Upon
receiving this message, a sending MH will freeze its TCP timers until it receives a
HighBER Over message from the link layer. A TCP sender will not attempt any
transmissions during high BER periods. Disconnections are handled in a similar way.
This is an indication of problems over the remote wireless link with the receiving
mobile. Normal conditions are re-established upon receiving an acknowledgement via the
supervisory host, which reopens the window.
FIGURE 6.11 - State transition of MAITE’s features at a mobile host that acts as a TCP
Sender.
Figure 6.12 below shows the state diagram at a supervisory host. MAITE allows
the supervisory host to receive link layer notifications from the access point. These
notifications inform the supervisory host of high BER conditions.
When a high BER condition occurs, a link down message is received and those
TCP senders communicating with a mobile receiver are forced into persist mode. No
segments are sent until this condition is over. In the same way, no segments are sent to
TCP receivers until a high BER condition is over.
Any timeouts that occur in the supervisory host are treated as an indication of loss
and therefore retransmissions of local cached data occurs.
FIGURE 6.12 - State transition diagram at a supervisory host showing MAITE features
The authors in [ARA 2001] do not show the transition state diagram for a receiving
TCP mobile. The transitions between states are simpler in this case. A receiving mobile will
not freeze any timers during high BER or disconnections conditions but will refrain from
transmitting acknowledgements. TCP receivers are not forced into persist mode and it is the
responsibility of the supervisory host to handle bad conditions of the first wireless link
used by storing acknowledgements until good channel conditions exist. In the same way,
congestion detection is not applicable at the receiving mobile. After being reconnected, a
receiver restarts communications by sending a reconnect message.
Under high-speed or high-mobility conditions (45 km/h), the improvements in end-to-end
goodput introduced by the link layer messages and MAITE are significantly lower. In
other words, the improvements are only noticeable at low mobility speed (4 km/h); as
speed increases, the performance improvement decreases significantly.
According to [ARA 2001], a drawback of MAITE is that it does not handle handoffs
between supervisory hosts.
6.5.4 IR-TCP
This transport-layer protocol improves transport performance, as compared
to TCP, in the presence of noisy links such as those in wireless networks. IR-TCP is
interference aware and uses the interference information from the link layer for its recovery
procedure. IR-TCP is backward compatible and does not affect performance during normal
operation or congestion, while providing significant performance improvement during
interference. IR-TCP addresses specific problems in TCP, with regard to performance
during interference. IR-TCP can be defined as an interference aware transport layer that
detects the presence of interference while also being congestion aware. Some researchers
do not believe that this is a good practice. However, it is their strong belief that without
such mechanisms this performance cannot be achieved. IR-TCP employs algorithms that
improve recovery from interference and overall performance during interference. It also
prevents the inappropriate usage of congestion control algorithms when there is
interference and not congestion in the path. SNR is a good measure for detecting
interference [MAR 98].
6.5.5 FAST-TCP
FAST-TCP is proposed by Nokia Research Center for more accurate control of the TCP
transmission rate and better TCP traffic shaping. The basic idea of the method is that a
network element, such as a router, delays the IP packets carrying TCP ACKs when
congestion tends to occur. Since the TCP source does not receive an ACK, it keeps its
current transmission window until the delayed ACKs are received.
The authors have shown that Fast-TCP can reduce TCP flow-control feedback
time, reduce buffer oscillation, increase bandwidth utilization, increase throughput, and
reduce packet losses in IP networks with wired links. Fast-TCP is implemented at the
router, which eliminates the need to change either the sender’s or the receiver’s TCP
implementation. The approach also supports wireless links [MAJ 99].
TCP with SPACK is a new acknowledgement scheme. When the base station detects
packet losses, SPACK splits the newly arrived ACK packet into several ones and transfers
them to the fixed host. The fixed host receives several ACK packets, increasing its window
size rapidly, and thus TCP performance recovers quickly. SPACK has several advantages
compared with other protocols, such as: no modification of TCP source code, maintenance
of end-to-end TCP semantics, and less complexity at the base station [JIN 99]. We include
SPACK in this classification because, although it has characteristics of split schemes, it
keeps end-to-end semantics, which split schemes do not, according to [BAL 97]; see
figure 6.13.
FIGURE 6.13 - SPACK transmission: a retransmission timer expires on a packet loss,
congestion control is invoked (window size = 1), the packet is retransmitted, and the
split ACKs transferred by the base station grow the window size rapidly
(2, 3, ..., n) [JIN 99]
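The ACK-splitting idea can be sketched as follows; the function and its parameters are illustrative, not taken from [JIN 99]:

```python
def split_ack(last_acked, new_ack, pieces):
    """Split the byte range (last_acked, new_ack] into roughly `pieces`
    rising cumulative ACK numbers, always ending at the real ACK number."""
    if new_ack <= last_acked or pieces < 1:
        return []
    step = max(1, (new_ack - last_acked) // pieces)
    # intermediate ACK numbers between the old and new cumulative ACK
    acks = list(range(last_acked + step, new_ack, step))
    acks.append(new_ack)       # the final ACK preserves end-to-end semantics
    return acks
```

Because the sender's congestion window grows once per ACK received during slow start, delivering several rising ACKs instead of one lets the window recover quickly, while the last ACK in the list carries the true cumulative acknowledgement.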
According to [KWO 2002], the major deficiency of the IEEE 802.11 MAC protocol
comes from slow collision resolution as the number of active stations increases. An
active station can be in one of two modes in each contention period, namely, the transmitting
mode when it wins a contention and the deferring mode when it loses a contention. In the
proposed FCR algorithm, the authors change the contention window size for the deferring
stations and regenerate the backoff timers for all potential transmitting stations to avoid
“future” potential collisions; in this way, possible packet collisions can be resolved quickly.
More importantly, the proposed algorithm preserves the implementation simplicity of
the IEEE 802.11 MAC.
1. Use much smaller initial minimum contention window size (minCW) than the IEEE
802.11 MAC.
2. Use much larger maximum contention window size (maxCW) than the IEEE 802.11
MAC.
3. Increase the contention window size of a station when it is in both collision state and
deferring state.
4. Reduce the back off timers exponentially fast when a prefixed number of consecutive
idle slots are detected.
5. Assign the maximum successive packet transmission limit to keep fairness in serving
users.
In the FCR algorithm, the contention window size of a station will increase not only
when it experiences a collision but also when it is in the deferring mode and senses the start
of a busy period [KWO 2002].
1. Backoff Procedure: All active stations monitor the medium. If a station senses the
medium idle for a slot, it decrements its backoff time (BT) by a slot time, i.e.,
BTnew = BTold - aSlotTime (the backoff timer is decreased by one slot). When its
backoff timer reaches zero, the station transmits a packet. If [(minCW + 1) x 2 - 1]
consecutive idle slots are detected, the backoff timer is decreased much faster
(exponentially fast), i.e., BTnew = BTold - BTold/2 = BTold/2
(if BTnew < aSlotTime, then BTnew = 0); that is, the backoff timer is halved. For
example, if a station has backoff timer 2047, its backoff time is BT = 2047 x
aSlotTime, which is decreased by a slot time at each idle slot until the backoff timer
reaches 2040 (assuming [(minCW + 1) x 2 - 1] = 7, or minCW = 3). Thereafter, if the
idle slots continue, the backoff timer is halved, i.e., BTnew = BTold/2, at each
additional idle slot until either it reaches zero or the station senses a non-idle slot,
whichever comes first. As an illustration, after 7 idle slots we have BT = 1020 x
aSlotTime on the 8th idle slot, BT = 510 x aSlotTime on the 9th idle slot, BT = 255 x
aSlotTime on the 10th idle slot, and so on, until BT either reaches zero or a non-idle
slot is detected. Therefore, the wasted idle backoff time is guaranteed to be less than or
equal to 18 x aSlotTime in the above scenario. The net effect is that unnecessary idle
backoff time is reduced when a station that has just performed a successful packet
transmission runs out of packets for transmission or reaches its maximum successive
packet transmission limit [KWO 2002].
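The decrement rule above, with minCW = 3, reproduces the worked example (2047 decreasing by one slot down to 2040, then 1020, 510, 255, ...) and the bound of 18 idle slots. A minimal sketch, assuming integer halving with any value below one slot rounded down to zero:

```python
def idle_slots_to_zero(bt, min_cw=3):
    """Count idle slots until a backoff timer of `bt` slots reaches zero
    under the FCR rule: linear decrement for the first (minCW+1)*2 - 1
    idle slots, then halving on each additional idle slot."""
    linear_phase = (min_cw + 1) * 2 - 1   # 7 idle slots for minCW = 3
    slots = 0
    while bt > 0:
        slots += 1
        if slots <= linear_phase:
            bt -= 1        # normal decrement: one slot per idle slot
        else:
            bt //= 2       # exponential decrement; < 1 slot rounds to 0
    return slots
```

Running this on the example confirms the text: a timer of 2047 slots drains in 7 linear steps plus 11 halvings, i.e., 18 idle slots in total.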
2. Transmission Failure (Packet Collision): If a station notices that its packet transmission
has failed, possibly due to a packet collision (i.e., it fails to receive an acknowledgment
from the intended receiving station), the contention window size of the station is increased
and a random backoff time (BT) is chosen, i.e., CW = min(maxCW, CW x 2),
BT = uniform(0, CW - 1) x aSlotTime, where uniform(a, b) indicates a number randomly
drawn from the uniform distribution between a and b, and CW is the current contention
window size [KWO 2002].
4. Deferring State: When a station in the deferring state detects the start of a
new busy period, which indicates either a collision or a packet transmission in the medium,
the station will increase its contention window size and pick a new random backoff time
(BT) as follows: CW = min(maxCW, CW x 2), BT = uniform(0, CW - 1) x aSlotTime
[KWO 2002].
Finally, in the FCR algorithm, the station that has successfully transmitted a packet
will have the minimum contention window size and a smaller backoff timer; hence it will
have a higher probability of gaining access to the medium, while other stations have
relatively larger contention window sizes and larger backoff timers. After a number of
successful packet transmissions by one station, another station may win a contention, and
this new station will then have a higher probability of gaining access to the medium for a
period of time [KWO 2002].
Figure 7.1 below shows the throughput results of the IEEE 802.11 MAC and
FCR algorithms for 100 contending stations, with better performance for FCR. It was
simulated using the GloMoSim network simulator [GLO 2002].
The Receiver-Based Auto Rate (RBAR) protocol is a rate adaptive MAC protocol.
The novelty of RBAR is that its rate adaptation mechanism is in the receiver instead of in
the sender.
Rate adaptation is the process of dynamically switching data rates to match the
channel conditions, with the goal of selecting the rate that will give the optimum throughput
for the given channel conditions. The Lucent WaveLAN II and Aironet PC4800 devices
contain proprietary rate adaptation mechanisms. There are two aspects to rate adaptation:
channel quality estimation and rate selection. Channel quality estimation involves
measuring the time-varying state of the wireless channel for generating predictions of
future quality. Issues include: which metrics should be used as indicators of channel quality
(e.g., signal-to-noise ratio, signal strength, symbol error rate, bit error rate), which
predictors should be used, whether predictions should be short-term or long-term, etc. Rate
selection involves using the channel quality predictions to select an appropriate rate.
Techniques vary, but a common technique is threshold selection, where the value of an
indicator is compared against a list of threshold values representing boundaries between the
data rates [HOL 2001].
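Threshold-based rate selection can be sketched as follows; the SNR thresholds and the 802.11b-style rate set are illustrative values, not thresholds from [HOL 2001] or from the WaveLAN II / PC4800 devices:

```python
import bisect

# Illustrative boundaries (dB) between consecutive data rates: below 6 dB
# use 1 Mbps, 6-12 dB use 2 Mbps, 12-18 dB use 5.5 Mbps, above 18 dB use 11.
RATE_THRESHOLDS_DB = [6.0, 12.0, 18.0]
RATES_MBPS = [1, 2, 5.5, 11]

def select_rate(snr_db):
    """Pick the highest rate whose SNR threshold the channel estimate meets."""
    return RATES_MBPS[bisect.bisect_right(RATE_THRESHOLDS_DB, snr_db)]
```

The quality of the outcome then hinges entirely on the channel quality estimation step: the rate is only as good as the SNR prediction fed into the threshold comparison.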
The central idea of RBAR is to allow the receiver to select the appropriate rate for
the data packet during the RTS/CTS packet exchange.
1. - Both channel quality estimation and rate selection mechanisms are now on the receiver.
This allows the channel quality estimation mechanism to directly access all of the
information made available to it by the receiving hardware (such as the number of multi
path components, the symbol error rate, the received signal strength, etc.), for more
accurate rate selection.
2. - Since the rate selection is done during the RTS/CTS exchange, the channel quality
estimates are nearer to the actual transmission time of the data packet than in existing
sender-based approaches.
3. - It can be implemented in IEEE 802.11 with minor changes, as shown in [HOL 2001].
Referring to figure 7.2 below, node A is in range of Src but not Dst, and node B is
in range of Dst but not Src. The sender Src chooses a data rate based on some heuristic
(such as the most recent rate that was successful for transmission to the destination Dst),
and then stores the rate and the size of the data packet into the RTS. Node A, overhearing
the RTS, calculates the duration of the requested reservation DRTS using the rate and packet
size carried in the RTS. This is possible because all of the information required to calculate
DRTS is known to A. A then updates its NAV to reflect the reservation. While receiving the
RTS, the receiver Dst uses information available to it about the channel conditions to
generate an estimate of the conditions for the impending data packet transmission. Dst then
selects the appropriate rate based on that estimate, and transmits it and the packet size in the
CTS back to the sender. Node B, overhearing the CTS, calculates the duration of the
reservation DCTS similar to the procedure used by A, and then updates its NAV to reflect
the reservation. Finally, Src responds to the receipt of the CTS by transmitting the data
packet at the rate chosen by Dst [HOL 2001]. The short interframe spaces (SIFS) between
the RTS, CTS, and DATA are not shown in the figure.
FIGURE 7.2 - RBAR reservations: the tentative reservation DRTS, the final reservation
DRSH observed by node A, and the reservation DCTS observed by node B [HOL 2001]
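The duration computations that A and B perform from the rate and packet size carried in the RTS/CTS, and the NAV correction applied when the final reservation differs from the tentative one, can be sketched as follows. The fixed overhead term is an assumption; real 802.11 timing (SIFS, PLCP preambles, the ACK) is omitted:

```python
def reservation_duration_us(packet_bytes, rate_mbps, overhead_us=100.0):
    """Time to send `packet_bytes` at `rate_mbps`, plus a fixed overhead.
    Bits divided by Mbit/s gives microseconds."""
    return overhead_us + (packet_bytes * 8) / rate_mbps

# Tentative reservation DRTS from the sender-chosen rate in the RTS...
d_rts = reservation_duration_us(1500, 2)
# ...and the final reservation DRSH from the receiver-chosen rate.
d_rsh = reservation_duration_us(1500, 11)

def nav_correction(nav_end, d_tentative, d_final):
    """Adjust a NAV end time by the difference between DRTS and DRSH."""
    return nav_end + (d_final - d_tentative)
```

This is why A must remember what DRTS contributed to its NAV: the correction is the difference between the two durations, which can be large when the receiver picks a much higher rate than the sender guessed.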
If the rates chosen by the sender and receiver differ, the reservation DRTS
calculated by A will no longer be valid. Thus, we refer to DRTS as a
tentative reservation. A tentative reservation serves only to inform neighboring nodes that a
reservation has been requested but that the duration of the final reservation may differ. Any
node that receives a tentative reservation is required to treat it the same as a final
reservation with regard to later transmission requests; that is, if a node overhears a tentative
reservation it must update its NAV so that any later requests it receives that would conflict
with the tentative reservation must be denied. Thus, a tentative reservation effectively
serves as a placeholder until either a new reservation is received or the tentative reservation
is confirmed as the final reservation. Final reservations are confirmed by the presence or
absence of a special sub-header, called the Reservation Sub-Header (RSH), in the MAC
header of the data packet. The reservation sub-header consists of a subset of the header
fields that are already present in the 802.11 data packet frame, plus a check sequence that
serves to protect the sub-header. The fields in the reservation sub-header consist of only
those fields needed to update the NAV, and essentially amount to the same fields present in
an RTS. Furthermore, the fields (minus the check sequence) still retain the same
functionality that they have in a standard 802.11 header [HOL 2001].
Referring again to Figure 7.2 above, in the instance that the tentative reservation
DRTS is incorrect, Src will send the data packet with the special MAC header containing
the RSH sub-header. A, overhearing the RSH, will immediately calculate the final
reservation DRSH, and then update its NAV to account for the difference between DRTS
and DRSH. Note that, for A to update its NAV correctly, it must know what contribution
DRTS has made to its NAV. One way this can be done is to maintain a list of the end times
of each tentative reservation, indexed according to the < sender; receiver > pair. Thus,
when an update is required, a node can use the list to determine if the difference in the
reservations will require a change in the NAV [HOL 2001].
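One way to realize this bookkeeping is sketched below; the data structure is our assumption based on the description above, not code from [HOL 2001]:

```python
# Sketch: NAV bookkeeping with tentative reservations indexed by the
# <sender, receiver> pair, so an overheard RSH can correct the NAV by the
# difference between the tentative (DRTS) and final (DRSH) durations.

class Nav:
    def __init__(self):
        self.nav_end = 0.0          # time until which the node must defer
        self.tentative = {}         # (sender, receiver) -> reservation end time

    def on_rts(self, sender, receiver, now, d_rts):
        """Tentative reservation: honored exactly like a final one for now."""
        end = now + d_rts
        self.tentative[(sender, receiver)] = end
        self.nav_end = max(self.nav_end, end)

    def on_rsh(self, sender, receiver, now, d_rsh):
        """Final reservation overheard in the data packet's RSH sub-header."""
        old_end = self.tentative.pop((sender, receiver), None)
        new_end = now + d_rsh
        if old_end is not None and self.nav_end == old_end:
            self.nav_end = new_end  # our NAV was set by this reservation: correct it
        else:
            self.nav_end = max(self.nav_end, new_end)

    def must_deny(self, now):
        """Any request that would conflict with the reservation is denied."""
        return now < self.nav_end

nav = Nav()
nav.on_rts("Src", "Dst", now=0.0, d_rts=5.0)   # tentative reservation, ends at t=5
nav.on_rsh("Src", "Dst", now=1.0, d_rsh=2.0)   # final reservation, ends at t=3
```

The `must_deny` check captures the rule in the text: between the RTS and the RSH, a tentative reservation is treated exactly like a final one.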
The DCMA scheme is based on enhancements to the basic IEEE 802.11 4-way handshake,
involving the exchange of RTS/CTS/DATA/ACK packets.
DCMA does not require any modifications or enhancements to the 802.11 NAV. A
node simply stays quiet as long as it is aware of (contiguous) activity involving one or more
of its neighbors.
The timing diagram in Figure 7.4 is useful for understanding the operation of DCMA.
Assume that node A has a packet to send to node D. A sends an RTS to B, which includes a
label LAB associated with the route to D. Assuming that its NAV is not busy for the
proposed transmission duration, B replies with a CTS. B receives the DATA packet, and
then sends an RTS/ACK control packet, with the ACK part addressed to A and the RTS part
addressed to C, along with a label LBC. C's actions are analogous to B's, except that it
uses the label LCD in its RTS/ACK message [ACH 2002].
Label lookup: In DCMA, the RTS/ACK (or RTS) bears the label. In principle, the label
could instead be carried in the DATA packet, since the label lookup is not strictly
necessary until the DATA has been received [ACH 2002].
However, by providing the label information in the RTS, we provide the forwarding
node additional time to complete the lookup. This should not be a problem, since the
DATA duration is at least tens of µsecs (e.g., a 500-byte packet on a 2 Mbit/s channel takes 2
msecs). Due to the competition among different flows, it is possible that DCMA can fail to
set up the “fast-path” (cut-through) forwarding at different points in the traffic path. Upon
the failure of a cut-through attempt, DCMA reverts to the base 802.11 specification,
aborting the cut-through attempt and using the exponential back-off to regulate subsequent
access to the shared channel. The channel contention resolution of DCMA is the same as
that of IEEE 802.11, with a node remaining silent as long as any of its one-hop neighbors is
either receiving or transmitting a data packet. Accordingly, this protocol does not suffer
from any additional penalties over and above those present in IEEE 802.11. Like the base
IEEE 802.11 protocol, DCMA can suffer from possible contention for channel access by
successive packets on consecutive hops of the same path (e.g., in Fig. 7.4, A may try to send
another packet to B while C is engaged in forwarding the previous packet to D). A more
detailed explanation is in [ACH 2002].
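The cut-through forwarding at an intermediate node can be sketched as follows; the label-table layout and frame fields are assumptions for illustration, not the [ACH 2002] wire format:

```python
# Sketch: label switching at a DCMA forwarding node. The label in the
# incoming RTS lets the node start its next-hop lookup while the DATA is
# still in the air; on receiving the DATA it emits a combined RTS/ACK.

LABEL_TABLE = {
    # incoming label -> (next-hop address, outgoing label); illustrative values
    "LAB": ("C", "LBC"),
    "LBC": ("D", "LCD"),
}

def on_rts(label):
    """Begin the lookup as soon as the RTS is heard."""
    return LABEL_TABLE.get(label)   # None means no cut-through entry

def on_data(prev_hop, lookup):
    if lookup is None:
        # Cut-through failed: fall back to plain 802.11 and just ACK.
        return {"type": "ACK", "ack_to": prev_hop}
    next_hop, out_label = lookup
    return {"type": "RTS/ACK", "ack_to": prev_hop,
            "rts_to": next_hop, "label": out_label}

# Node B hears A's RTS carrying label LAB, then receives the DATA:
frame = on_data("A", on_rts("LAB"))   # ACK to A, RTS with label LBC to C
```

The fallback branch mirrors the text: when no cut-through entry exists (or the attempt fails), the node behaves exactly like a base 802.11 station and simply acknowledges the data.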
[Figure 7.4 - DCMA timing diagram: A sends RTS (label LAB) to B; B returns a CTS, receives the DATA, and sends a combined RTS/ACK (ACK to A, RTS with label LBC to C); C does the same toward D, which answers with a final ACK. The RTS carries the outgoing MAC address, a flag, and the label [ACH 2002].]
The simulation was implemented using the NS-2 network simulator [NET 2002]. The
parameters were tuned to model the Lucent WaveLAN card at a 2 Mbps data rate. The
effective transmission range was 250 meters and the interfering range about 550 meters.
Figure 7.5 shows a throughput improvement of about 20% for DCMA, and latency
improvements ranging from 100% for small packets (256 bytes) to 63% for bigger ones (1546 bytes).
8 Conclusions
- Wired TCP cannot distinguish packet losses due to wireless errors from those due to
congestion [WEN 2001a].
- In TCP, if the source is not aware of the route failure, the source continues to transmit (or
retransmit) packets even when the network is down. This leads to packet loss and
performance degradation. Since packet loss is interpreted as congestion, TCP invokes
congestion recovery algorithms when the route is reestablished, leading to throttling of
transmission [CHA 2001].
- In practice, it might be difficult to identify which packets are lost due to errors on a noisy
link [BAL 97].
- The signal-to-noise ratio (SNR) is a good measure for detecting interference [MAR 98].
- The study in [MAR 98] showed that window size variations in response to the
occurrence of interference are the reason for performance degradation.
- The main difference between MANETs (mobile ad hoc networks) and cellular networks
is that MANET stations communicate using identical radio transceivers, without the aid of
a fixed infrastructure such as cellular base stations and fixed routers.
- In wireless multi-hop networks, the hidden node problem still exists, although the
standard has paid much attention to this problem. The protocol has defined several schemes
to deal with this, such as physical carrier sensing and the RTS/CTS handshake. These
schemes work well to prevent the hidden node problem in a wireless LAN where all nodes
can sense each other’s transmissions. The sufficient condition for not having hidden nodes
is: any station that can possibly interfere with node B’s reception of a packet from node A
is within the sensing range of A. This might be true in an 802.11 basic service set. Obviously,
however, this condition cannot be true in a multi-hop network [XUS 2002].
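With the usual simplification of circular ranges, this condition can be checked directly; the positions and range values below are illustrative assumptions:

```python
# Sketch: testing the sufficient no-hidden-node condition. A node is hidden
# with respect to the transmission A -> B if it can interfere with reception
# at B but lies outside A's sensing range. Circular ranges are assumed.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def hidden_nodes(a, b, others, interfering_range, sensing_range):
    return [name for name, pos in others.items()
            if dist(pos, b) <= interfering_range and dist(pos, a) > sensing_range]

a, b = (0.0, 0.0), (200.0, 0.0)
# Single-cell layout: every node within A's sensing range -> no hidden nodes.
assert hidden_nodes(a, b, {"n1": (100.0, 50.0)}, 550.0, 550.0) == []
# Multi-hop layout: a node 700 m from A can interfere at B (500 m away from
# it) yet cannot be sensed by A -> the condition fails.
assert hidden_nodes(a, b, {"far": (700.0, 0.0)}, 550.0, 550.0) == ["far"]
```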
- There is no scheme in IEEE 802.11 standard to deal with the exposed node problem,
which will be more harmful in a multi-hop network [XUS 2002].
- The 802.11 MAC is based on carrier sensing, including the physical layer sensing
function (CCA). As we know, carrier sensed wireless networks are usually engineered in
such a way that the sensing range (and interfering range) is typically larger than the
communication range. According to the IEEE 802.11 protocol implementation in the NS-2
simulation software, which is modeled after the Wavelan wireless radio, the interfering
range and the sensing range are more than two times the size of the communication range.
The larger sensing and interfering ranges will degrade the network performance severely in
the multi-hop case. The larger interfering range makes the hidden node problem worse; the
larger sensing range intensifies the exposed node problem [XUS 2002].
- The binary exponential back-off scheme always favors the latest successful node. This
causes unfairness even when the protocol is not used in multi-hop networks, as in the
typical wireless LAN defined in the IEEE 802.11 standard [XUS 2002].
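A deliberately crude contention model illustrates the effect (it treats a lost round as a failed transmission that doubles the loser's window, which simplifies the real DCF):

```python
# Sketch: why binary exponential back-off favors the latest winner. After a
# success the winner resets its contention window to CW_MIN, while a node
# whose transmission failed doubles its window -- so the recent winner tends
# to draw the smaller back-off and win again. This is a deliberately crude
# model of the 802.11 DCF, not a faithful simulation.

import random

CW_MIN, CW_MAX = 31, 1023

def contend(cw_a, cw_b, rng):
    back_a, back_b = rng.randint(0, cw_a), rng.randint(0, cw_b)
    if back_a < back_b:   # A transmits first: A resets, B doubles
        return "A", CW_MIN, min(2 * (cw_b + 1) - 1, CW_MAX)
    return "B", min(2 * (cw_a + 1) - 1, CW_MAX), CW_MIN

rng = random.Random(1)
cw_a = cw_b = CW_MIN
prev, repeats, rounds = None, 0, 20000
for _ in range(rounds):
    winner, cw_a, cw_b = contend(cw_a, cw_b, rng)
    repeats += (winner == prev)
    prev = winner

repeat_fraction = repeats / rounds   # well above 0.5: the last winner is favored
```

Because the winner always draws from [0, CW_MIN] while the loser draws from a doubled window, the fraction of rounds won by the previous winner stays far above the 50% a fair scheme would give.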
- In [HOL 2002], the Dynamic Source Routing (DSR) protocol is analyzed; the authors
observed different characteristics that affect TCP performance and suggested that, instead
of augmenting TCP/IP, it would be better to improve the routing protocols so that mobility
is masked more effectively. Clearly, extensive modifications to upper-layer protocols
are less desirable than a routing protocol that can react quickly and efficiently, such that
TCP is not disturbed. However, regardless of the efficiency and accuracy of the routing
protocol, network partitioning and delays will still occur because of mobility, and these
cannot be hidden.
- In general, wireless technology did not come to compete with the wired one, but to offer
the user extra service with the concept of ubiquity: using the Internet in any place, at any
moment, and with any data.
References
[ACH 2002] ACHARYA, A.; MISRA, A.; BANSAL, S. A Label-switching Packet
Forwarding Architecture for Multi-hop Wireless LANs. In:
INTERNATIONAL WORKSHOP ON WIRELESS MOBILE
MULTIMEDIA. Proceedings… Atlanta, Georgia, USA: [s.n.], 2002. p. 33-40.
Available at: <http://doi.acm.org/10.1145/570790.570797>. Visited on:
Dec. 01, 2002.
[ADH 2002] AD-HOC Wireless Multicasting. Available at:
<http://www.online.kth.se/courses/common/adhoc/newcontent/7_2.html>.
Visited on: Oct. 30, 2002.
[ARA 2001] ARÁUZ, J.; BANERJEE, S.; KRISHNAMURTHY, Prashant. MAITE: A
Scheme for Improving the Performance of TCP over Wireless Channels.
VEHICULAR TECHNOLOGY CONFERENCE, VTC, 2001.
Proceedings… [S.l.:s.n.], 2001. v.1, p. 252- 256.
[BAK 95] BAKRE, A.; BADRINATH, B.R. I-TCP: Indirect TCP for Mobile Hosts.
In: THE INTERNATIONAL CONFERENCE ON DISTRIBUTED
COMPUTING SYSTEMS, 15., 1995. Proceedings… Vancouver,
BC, Canada: [s.n.], 1995. p. 136-143. Available at:
<http://rictec.capes.gov.br/login.asp>. Visited on: Aug. 27, 2002.
[BAK 97] BAKRE, A.V.; BADRINATH, B.R. Implementation and Performance
Evaluation of Indirect TCP. IEEE Transactions on Computers, New York,
v. 46, n. 3, p. 260-278, Mar 1997.
[BAL 95] BALAKRISHNAN, H. et al. Improving TCP/IP Performance over
Wireless Networks. In: ACM INTERNATIONAL CONFERENCE ON
MOBILE COMPUTING AND NETWORKING, MOBICOM, 1., 1995.
Proceeding… [S.l.:s.n], 1995.
[BAL 97] BALAKRISHNAN, H. et al. A Comparison of Mechanisms for Improving
TCP Performance over Wireless Links. IEEE/ACM Transactions on
Networking, Atlanta, v. 5, n. 6, p. 756-769, Dec. 1997.
[CHA 97] CHAN, A.; TSANG, D.; GUPTA, S. TCP (Transmission Control Protocol)
over wireless Links. VEHICULAR TECHNOLOGY CONFERENCE, IEEE
47., 1997. Proceedings… Phoenix:[s.n.],1997. v.3, p.1326-1330
[CHA 2001] CHANDRAN, K. et al. A Feedback-Based Scheme for Improving TCP
Performance in Ad Hoc Wireless Networks. IEEE Personal
Communications, New York, v. 8, n. 1, p. 34-39, Feb. 2001
[CHA 88] CHANDRAN, K.; RAGHUNATHAN, S.; VENKATESAN, S.; PRAKASH,
R. A Feedback Based Scheme for Improving TCP performance in ad-hoc
wireless networks. In: INTERNATIONAL CONFERENCE ON
DISTRIBUTED COMPUTING SYSTEMS, 1998. Proceedings…
Amsterdam, Netherlands:[s.n.], 1998. p. 472-479.
[CHE 98] CHEN T.; GERLA, M. Global State Routing: A New Routing Scheme for
Ad-hoc Wireless Networks. In: IEEE INTERNATIONAL CONFERENCE ON
COMMUNICATIONS, ICC, 1998. Proceedings… Atlanta, GA, USA:[s.n.],
1998. v.1, p. 171-175.
[CHE 2001] CHENGZHOU, L.; PAPAVASSILIOU, S. The Link Signal Strength Agent
(LSSA) Protocol for TCP Implementation in Wireless Mobile Ad Hoc
Networks. In: VEHICULAR TECHNOLOGY CONFERENCE, VTC,
54., 2001. Proceedings... Piscataway, NJ: IEEE, 2001. v.4, p.2528-2532.
[CHI 97] CHIANG, C.-C. Routing in Clustered Multihop, Mobile Wireless
Networks with Fading Channel. In: IEEE SINGAPORE INTERNATIONAL
CONFERENCE ON NETWORKS, SICON, 1997. Proceedings…
Singapore:[s.n.],1997. Available at:
<http://www.ics.uci.edu/~atm/adhoc/paper-collection/gerla-routing-
clustered-sicon97.pdf>. Visited on: Aug. 28, 2002.
[CHI 2001] CHIASSERINI, C.; MEO, Michela. Improving TCP over Wireless
through Adaptive Link Layer Setting. In: GLOBAL
TELECOMMUNICATIONS CONFERENCE, GLOBECOM, 2001.
Proceedings… San Antonio, TX, USA:[s.n.], 2001. v.3, p.1766-1770.
Available at <http://www.tlc-
networks.polito.it/carla/papers/globecom01.pdf>. Visited on: Aug. 28, 2002.
[COR 2002] CORDEIRO, C.; AGRAWAL, D. Mobile Ad hoc Networking. Short course
(minicurso), SBRC, 2002.
[DUB 97] DUBE, R. et al. Signal Stability based adaptive routing for Ad Hoc
Mobile network. IEEE Personal Communications, New York, p. 36-45,
Feb. 1997. Available at:
<http://www.cs.umd.edu/projects/mcml/papers/pcm97.ps>. Visited on:
Aug.28, 2002.
[ELA 2002] ELAARAG, H. Improving TCP Performance over Mobile Networks. ACM
Computing Surveys. New York, v.3, n.3, p.357-374, Sept. 2002. Available
at: <http://www.acm.org/>. Visited on: Aug. 28, 2002.
[GLO 2002] GLOMOSIM - Global Mobile Information Systems Simulation Library.
University of California (UCLA). Available at:
<http://pcl.cs.ucla.edu/projects/glomosim/>. Visited on: Sept. 2002
[HOL 2001] HOLLAND, G.; VAIDYA, N.; BAHL, P. A rate-adaptive MAC protocol for
multi-hop wireless networks. In: INTERNATIONAL CONFERENCE ON
MOBILE COMPUTING AND NETWORKING, MOBICOM, 7., 2001.
Proceedings… Rome, Italy:[s.n.], 2001. p.236-251. Available at:
<http://doi.acm.org/10.1145/381677.381700>. Visited on: Aug. 29, 2002
[HOL 2002] HOLLAND, G.; VAIDYA, N. Analysis of TCP Performance over Mobile
Ad Hoc Networks. Wireless Networks, Hingham, v. 8, n. 2/3, p. 275-288,
Mar. 2002. Available at: <http://www.acm.org/>. Visited on: Dec. 10,
2002.
[HUS 2001] HUSTON, G. TCP in a Wireless World. IEEE Internet Computing, New
York, v. 5, n. 2, p. 82-84, Mar./Apr. 2001.
[IET 99] IETF DRAFT. Draft-ietf-manet-cbrp-spec-01.txt: Cluster Based Routing
Protocol. [S.l.], Aug. 1999. Available at:
<http://www.eecs.wsu.edu/~rgriswol/Drafts-RFCs/draft-ietf-manet-cbrp-
spec-01.txt > Visited on: Jan. 10, 2003.
[IET 99a] IETF DRAFT. Draft-ietf-manet-dsr-03.txt: The Dynamic Source Routing
Protocol for Mobile Ad Hoc Networks. [S.l.], Oct. 1999. Available at:
<http://www.eecs.wsu.edu/~rgriswol/Drafts-RFCs/draft-ietf-manet-dsr-03.txt>.
Visited on: July 02, 2002.
[IET 99b] IETF DRAFT. Draft-ietf-manet-aodv-04.txt. Ad Hoc On demand
Distance Vector Routing.[S.l.], Oct. 1999. Available at:
<http://www.ietf.org/proceedings/99nov/I-D/draft-ietf-manet-aodv-04.txt>.
Visited on: July. 02, 2002.
[IET 2002] IETF. The Internet Engineering Task Force. Available at:
<http://www.ietf.org/html.charters/manet-charter.html>. Visited on: July 12,
2002.
[ISO 99] ISO/IEC 8802-11: 1999(E) ANSI/IEEE. Std 802.11. Part 11: Wireless LAN
Medium Access Control (MAC) and Physical Layer (PHY) specifications:
1999.
[IWA 99] IWATA,A. et al. Scalable Routing Strategies for Ad Hoc Wireless
Networks. IEEE Journal on Selected Areas in Communications, [S.l.],
v.17, n.8, p.1369-1379, Aug. 1999. Available at:
<http://www.cs.ucla.edu/NRL/wireless/PAPER/jsac99.ps.gz>. Visited on:
Aug. 20, 2002.
[JIA 2001] LIU, Jian; SINGH, Suresh. ATCP: TCP for Mobile Ad Hoc Networks.
IEEE Journal on Selected Areas in Communications, New York, v. 19,
n. 7, p. 1300-1315, July 2001.
[JIC 2001] JIAN, H.; CHENG, S; CHEN, X. TCP Reno and Vegas performance in
wireless ad hoc networks. In: IEEE INTERNATIONAL CONFERENCE ON
COMMUNICATIONS, ICC, 2001. Proceedings… [S.l.:s.n.], 2001, v.1,
p.132-136.
[JIN 99] JIN, K.; KIM, K.; LEE, J. SPACK: Rapid Recovery of the TCP
Performance using SPLIT-ACK in Mobile Communication Environments.
In: IEEE REGION 10 CONFERENCE, TENCON, 1999. Proceedings… Cheju
Island, South Korea:[s.n.], 1999. v.1, p.761-764.
[JOA 99] JOA-NG, M; LU, I.-T. A Peer-to-Peer zone-based two-level link state
routing for mobile Ad Hoc Networks. IEEE Journal on Selected Areas in
Communications, New York, v.17, n.8, p. 1415-1425, Aug. 1999.
[KIM 2000] KIM, D.; TOH C.-K; CHOI, Y. TCP-BuS: Improving TCP Performance in
Wireless Ad Hoc Networks. In: IEEE INTERNATIONAL CONFERENCE
ON COMMUNICATIONS, ICC, 2000. Proceedings... New Orleans, L.A.,
USA:[s.n.], 2000. v. 3, p.1707-1713.
[KWO 2002] KWON, Y.; FANG, Y.; LATCHMAN, H. Improving Transport Layer
Performance by Using A Novel Medium Access Control Protocol with Fast
Collision Resolution in Wireless LANs. In: ACM INTERNATIONAL
WORKSHOP ON MODELING ANALYSIS AND SIMULATION OF
WIRELESS AND MOBILE SYSTEMS, 5., 2002. Proceedings… Atlanta,
Georgia USA:[s.n.], 2002, p.112-119.
[LOW 2000] LOW, S. H.; PETERSON, L. L.; WANG, L. Understanding TCP Vegas: a
duality model. Journal of the ACM, New York, v. 49, n. 2, p. 207-235,
Mar. 2002.
[MAR 2003] BLESSED VIRGIN MARY, Queen of Peace. Messages of Our Lady.
Medjugorje. Available at: <http://www.medjugorje.hr>,
<http://www.medjugorje.org>; and Argüera-Bahia. Available at:
<http://www.apelosurgentes.com.br>. Visited on: May 25, 2003.
[MAR 98] MARUTHI, B.; ARUN, K.; AZIZOGLU, M. Interference Robust TCP. In:
INTERNATIONAL SYMPOSIUM ON FAULT-TOLERANT
COMPUTING, 29.,1999. Proceedings… Madison, WI, USA:[s.n.], 1999.
p.102-109.
[MAJ 99] MA, J.; WU, J. Improving TCP Performance in IP Networks with Wireless
links. In: IEEE INTERNATIONAL CONFERENCE ON PERSONAL
WIRELESS COMMUNICATION, 1999. Proceedings... Jaipur,
India:[s.n.], 1999. p. 211-215.
[MUR 96] MURTHY, S.; GARCIA-LUNA-ACEVES, J. An Efficient Routing
Protocol for Wireless Networks. Mobile Networks and Applications, [S.l.],
v.1, n.2, p.183-197, Oct. 1996.
[PAR 97] PARK, V.D.; CORSON, M.S. A highly adaptive distributed routing
algorithm for mobile wireless networks. In: CONFERENCE OF THE IEEE
COMPUTER AND COMMUNICATIONS SOCIETIES, INFOCOM, 16.,
1997. Proceedings… Kobe, Japan: [s.n.], 1997. v. 3, p. 1405-1413.
[PAR 99] PARSA, C.; GARCIA-LUNA-ACEVES, J. TULIP: A Link-Level
Protocol for Improving TCP over Wireless Links. In: IEEE WIRELESS
COMMUNICATIONS AND NETWORKING CONFERENCE, WCNC,
1999. Proceedings… New Orleans, LA, USA:[s.n.], 1999. v.3, p.1253-1257.
[PAR 2000] PARSA, C.; GARCIA-LUNA-ACEVES, J. J. Differentiating
Congestion vs. Random Loss: A Method for Improving TCP Performance
over Wireless links. In: WIRELESS COMMUNICATIONS AND
NETWORKING CONFERENCE, WCNC, 2000. Proceedings… Chicago,
IL, USA:[s.n.], 2000. v. 1, p. 90-93.
[PER 94] PERKINS, C.E.; BHAGWAT, P. Highly Dynamic Destination-Sequenced
Distance-Vector Routing (DSDV) for Mobile Computers. In: ACM
CONFERENCE ON COMMUNICATIONS ARCHITECTURES,
PROTOCOLS AND APPLICATIONS, 1994. Proceedings… London,
United Kingdom:[s.n.], p. 234-244.
[PRE 2002] PREM, E. C. Wireless Local Area Networks. Available at:
<http://www.cis.ohio-state.edu/~jain/cis788-97/wireless_lans/index.htm>.
Visited on: July 02, 2002.
[RAT 98] RATNAM, K.; MATTA, I. WTCP: An Efficient Mechanism for Improving
TCP Performance over Wireless Links. In: IEEE SYMPOSIUM ON
COMPUTERS AND COMMUNICATIONS, ISCC, 3., 1998.
Proceedings… Athens, Greece:[s.n.], 1998. p. 74-78.
[ROD 2001] RODRIGUEZ, A. et al. TCP/IP Tutorial and Technical Overview. Available
at: <http://www.ibm.com/redbooks>. Visited on: Aug. 25, 2002.
[STA 2000] STALLINGS, W. Data & Computer Communications. 6th ed. Upper
Saddle River: Prentice Hall, 2000.
[SUN 2001] SUN, D.; MAN, H. Performance Comparison of Transport Control
Protocols over Mobile Ad Hoc Networks. In: IEEE INTERNATIONAL
SYMPOSIUM ON PERSONAL, INDOOR AND MOBILE RADIO
COMMUNICATIONS, 12., 2001. Proceedings… San Diego, CA,
USA:[s.n.], 2001. v.2, p. G-83-G-87.
[TOH 96] TOH, C.-K. A novel distributed routing protocol to support Ad hoc
mobile computing. In: IEEE INTERNATIONAL CONFERENCE ON
COMPUTERS AND COMMUNICATIONS, 15., 1996. Proceeding…
Scottsdale, AZ, USA:[s.n.], 1996. p.480-486.
[TOH 2002] TOH, C.-K.; DELWAR, M.; ALLEN, D. Evaluating the
Communication Performance of an Ad Hoc Wireless Network. IEEE