
ADWSN

Module 2
Routing Protocols and TCP Over Ad Hoc Wireless Networks

Routing Protocol Basics
An ad hoc wireless network consists of a set of mobile nodes (hosts) connected by wireless links. The physical connectivity among the nodes (also known as the communication link topology) may keep changing randomly.

Routing protocols that find the path to be followed by data packets from a source node to a destination node are different for ad hoc wireless networks compared to traditional infrastructure-based wireless networks. The reasons are given below:

i) An ADW N/W has mobile nodes and a dynamic, changing topology
ii) An ADW N/W has no established infrastructure for centralized administration (no base station or central access point is present)
iii) An ADW N/W has bandwidth-constrained wireless links
iv) An ADW N/W has resource- and energy-constrained nodes
v) An ADW N/W has an error-prone channel state
vi) An ADW N/W has the hidden and exposed terminal problems

Issues in Designing Routing Protocol for Ad Hoc Wireless Networks


NODE MOBILITY: High mobility of the transmitting, intermediate, or receiving nodes causes random topology changes, and an on-going session may suffer from path breaks. Wired-network protocols can find alternate routes after a path break, but their convergence is slow; hence wired protocols are not used for wireless routing. Routing protocols for ad hoc wireless networks must be able to perform efficient and effective node mobility management.

BANDWIDTH CONSTRAINT: A wired N/W has abundant bandwidth because of fiber optics and wavelength division multiplexing (WDM). In a wireless N/W the radio band is limited, which results in lower data rates than in a wired N/W; hence wireless routing protocols should use the available bandwidth optimally. Mobility in a wireless network also generates more control overhead, which wastes the limited bandwidth. Unlike wired routing, a wireless routing protocol should therefore not require complete network topology information.

ERROR-PRONE SHARED BROADCAST RADIO CHANNEL: Wireless links have time-varying link capacity and link error probability. A more reliable link should be chosen for better route quality (a job of the MAC layer). Routing data through less congested paths reduces collisions among data packets and control packets.

HIDDEN AND EXPOSED TERMINAL PROBLEM: MACA, and its improved version MACAW, solve this problem to a great extent through the RTS-CTS-DS-DATA-ACK packet exchange. An exposed terminal may transmit simultaneously with the main sender node on a different frequency range; this is called reuse of the frequency spectrum.

RESOURCE CONSTRAINTS: Battery life and processing power are two important resources that are limited in wireless networks. There are also size and weight constraints for the nodes of wireless networks: increasing battery capacity and processing ability makes a node bulky, which is unwanted. Optimal design and optimal management of resources are therefore essential for wireless N/Ws.

Wired N/W Routing Protocols can’t be used in Wireless N/Ws

Features of Ideal Routing Protocol


Special Routing Protocols designed for Wireless N/Ws should be

Wireless routing should be fully distributed, with low control overhead; distributed routing is more scalable than centralized routing, is more fault tolerant, and has no single point of failure.

 It must be adaptive to frequent topology changes caused by node mobility

Route computation and maintenance must involve a minimum number of nodes. Nodes should have quick access to routes, and connection setup time should be minimal. Routing must be localized to avoid the huge control overhead of global routing.

 Route must be loop free and also free from stale routes

Packet collisions must be minimized by limiting the broadcasts made by each node. Transmission should be reliable; message loss should ideally be nil.

After the N/W topology becomes stable, the protocol must converge quickly to optimal routes. Routes should make optimal use of N/W resources such as bandwidth, computing power, memory, and battery power.

Every node in the N/W should maintain stable local topology information; frequent topology changes in a remote part of the N/W should not affect routing performance.

It should be able to provide a certain level of QoS as demanded by the application, and should also offer support for time-sensitive and real-time data traffic.

Classification of Routing Protocols


The classification is not mutually exclusive; some routing protocols may fall into more than one class because of shared properties.
Wireless N/W routing protocols differ from traditional wired routing protocols, and their path-finding process deserves closer study. Ad hoc wireless routing protocols are classified into four categories:

Based on routing information update mechanism
1. Proactive or table-driven routing protocols
2. Reactive or on-demand routing protocols
3. Hybrid routing protocols

Based on use of temporal information for routing
1. Routing protocols using past temporal information
2. Routing protocols using future temporal information

Based on routing topology
1. Flat topology routing protocols
2. Hierarchical topology routing protocols

Based on utilization of specific resources
1. Power-aware routing protocols
2. Geographical information assisted routing

Classification of Routing Protocol for Ad Hoc wireless N/W

Routing, Based on Information Update Mechanism

Proactive or Table-Driven routing protocols - An extension of wired routing protocols. Every node maintains topology information in the form of a routing table. These tables are updated frequently to maintain accurate network state information, which is flooded to every node in the N/W. When a node needs a path to a destination node, it runs a path-finding algorithm on this topology information.
Destination Sequenced Distance-Vector Routing Protocol (DSDV) - One of the first protocols proposed for ad hoc wireless networks. Each node maintains a table that contains the shortest distance and the immediate (next-hop) node on the shortest path to every other node in the N/W. Destination sequence numbers help prevent loops, counter the count-to-infinity problem, and give fast convergence from sender to receiver.

DSDV Table Based Routing Protocol

Route Maintenance in DSDV

It is an extension of the Bellman-Ford algorithm. Each node maintains a table, and routes to all destinations are readily available at every node at all times.
The tables are exchanged between neighbors at regular intervals to keep an up-to-date view of the network topology.

 Tables are also forwarded if a node observes a significant change in local topology. The table
updates are of two types: incremental updates and full dumps. An incremental update takes a
single network data packet unit (NDPU), while a full dump may take multiple NDPUs.
Here, node 1 is the source node and node 15 is the destination. As all the nodes maintain global topology information, the route is already available; the routing table of node 1 indicates that the shortest route to the destination node (node 15) is through node 5 and is 4 hops long, as depicted.
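The DSDV table logic described above can be sketched in a few lines of Python. This is only an illustrative sketch, not part of the protocol specification; the entry fields and the handle_update() helper are assumed names, and the example numbers mirror the node 1 to node 15 route mentioned above.

```python
# Minimal sketch of a DSDV-style routing table entry and its update rule (illustrative only).
from dataclasses import dataclass

@dataclass
class RouteEntry:
    destination: int   # destination node id
    next_hop: int      # immediate neighbour on the shortest path
    hop_count: int     # distance to the destination in hops
    seq_no: int        # sequence number originated by the destination

routing_table = {}     # destination id -> RouteEntry

def handle_update(via_neighbour: int, adv: RouteEntry) -> None:
    """DSDV rule: prefer a higher sequence number; on a tie, prefer fewer hops.
    Sequence numbers are what prevent loops and the count-to-infinity problem."""
    candidate = RouteEntry(adv.destination, via_neighbour, adv.hop_count + 1, adv.seq_no)
    current = routing_table.get(candidate.destination)
    if (current is None
            or candidate.seq_no > current.seq_no
            or (candidate.seq_no == current.seq_no and candidate.hop_count < current.hop_count)):
        routing_table[candidate.destination] = candidate

# Node 1 hears neighbour 5 advertise node 15 at 3 hops, so node 15 is 4 hops away via node 5.
handle_update(via_neighbour=5, adv=RouteEntry(destination=15, next_hop=5, hop_count=3, seq_no=102))
print(routing_table[15])
```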
Advantages of DSDV Protocol

i) Routes are always available from all nodes to all destinations, so much less delay is involved in the route setup process. ii) The mechanism of incremental updates makes existing wired network protocols adaptable to ad hoc wireless networks with minimal modification. iii) The updates are propagated throughout the network in order to maintain an up-to-date view of the network topology at all the nodes.

Limitations of DSDV Protocol

i) The updates due to broken links lead to heavy control overhead during high mobility. ii) Even a small N/W with high mobility, or a large N/W with low mobility, can completely choke the available bandwidth. iii) The excessive control overhead is proportional to the number of nodes; hence this protocol is not scalable for ad hoc wireless networks, which have limited bandwidth and highly dynamic topologies. iv) In DSDV, to obtain information about a particular destination node, a node has to wait for a table update message initiated by that destination node. This delay can result in stale routing information at nodes.

Hierarchical State Routing Protocol (HSR)


 The use of routing hierarchy has several advantages, the most important one being reduction
in the size of routing tables and better scalability.

 The hierarchical state routing (HSR) protocol is a distributed multi-level hierarchical routing
protocol that employs clustering at different levels with efficient membership management at
every level of clustering.

 The use of clustering enhances resource allocation and management. For example, the
allocation of different frequencies or spreading codes to different clusters can improve the
overall spectrum reuse.

HSR operates by defining different levels of clusters. Elected leaders, or cluster heads (CH), at each level become members of the immediately higher level. Different clustering algorithms may be employed for electing leaders at every level.

 In addition to the physical clustering, a logical clustering scheme also exists for HSR. This
logical scheme is based on certain relations among the nodes rather than on their
geographical positions, as in the case of physical clustering.
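As a small illustration of the hierarchical addressing idea (an assumption-based sketch, not taken from the HSR description in these notes), a node's hierarchical address can be viewed as the sequence of elected cluster heads from the top level down to the node; the dictionaries used below are made-up cluster memberships.

```python
# Illustrative sketch: build a hierarchical address from per-level cluster-head maps.
def build_hierarchical_address(node_id: int, cluster_head_of: list) -> list:
    """cluster_head_of[k] maps a node id at level k to its elected cluster head
    at level k+1. The maps here are assumed, for illustration only."""
    path = [node_id]
    current = node_id
    for level_map in cluster_head_of:       # walk up the hierarchy, level by level
        current = level_map[current]
        path.append(current)
    return list(reversed(path))             # top-level cluster head comes first

# Two-level example: node 7's level-1 cluster head is 3, whose level-2 head is 1.
print(build_hierarchical_address(7, [{7: 3}, {3: 1}]))   # [1, 3, 7]
```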

A Type of Hierarchical Routing - Fisheye State Routing Protocol


Table-driven routing protocols generate routing overhead that depends on the network size and node mobility, while on-demand routing protocols generate routing overhead proportional to the number of connections present in the system in addition to the network size and node mobility. The Zone Routing Protocol (ZRP) uses an intra-zone proactive approach and an inter-zone reactive approach to reduce control overhead. The Fisheye State Routing (FSR) protocol is a generalization of the Global State Routing (GSR) protocol.

FSR uses the fisheye technique to reduce the amount of information required to represent graphical data, and hence the routing overhead. The basic principle behind this technique is the property of a fish's eye, which captures pixel information with greater accuracy near the focal point; the accuracy decreases as one moves from the focal point toward the periphery. Translated to routing in an ADW N/W, a node keeps accurate information about the nodes in its local topology, and the accuracy of information about nodes far from the local topology center decreases with increasing distance.

 FSR maintains the topology of the network at every node, but does not flood the entire
network with the information. Instead of flooding, a node exchanges topology information
only with its neighbors.

A sequence numbering scheme is used to identify the most recent topology changes. This constitutes a hybrid approach comprising the link-level information exchange of distance vector protocols and the complete topology information exchange of link state protocols. The complete topology information of the network is maintained at every node, and the desired shortest paths are computed as required.

 The topology information exchange takes place periodically rather than being driven by an
event. This is because instability of the wireless links may cause excessive control overhead
when event-driven updates are employed.

 FSR defines routing scope which is the set of nodes that are reachable in a specific number
of hops. The scope of a node at two hops is the set of nodes that can be reached in two hops.
Figure shows the scope of node 5 with one hop and two hops. The routing overhead is
significantly reduced by adopting different frequencies of updates for nodes belonging to
different scopes.
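The scope idea can be sketched as follows. This is an illustrative sketch only: the adjacency map, the update intervals, and the function names are assumptions, but the logic (group nodes by hop distance and refresh nearer scopes more often) follows the description above.

```python
# Illustrative sketch of FSR-style scopes: hop distances via BFS, then per-scope update periods.
from collections import deque

def hop_distances(adjacency: dict, source: int) -> dict:
    """Breadth-first search giving the hop count from `source` to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
    return dist

def update_period(hops: int) -> float:
    """Nearer scope -> more frequent link-state exchange (periods in seconds, assumed values)."""
    return 1.0 if hops <= 1 else 3.0 if hops <= 2 else 10.0

adjacency = {5: [2, 6], 2: [5, 1], 6: [5, 9], 1: [2], 9: [6]}
for node, hops in sorted(hop_distances(adjacency, source=5).items()):
    print(f"node {node}: {hops} hop(s), update every {update_period(hops)} s")
```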

Illustration of FSR Routing

Illustration of FSR Routing Table


Example - The figure depicts the network topology information maintained at the nodes. Routing information about nodes one hop away from the centre is exchanged more frequently than routing information about nodes that are more than one hop away. Information about nodes more than one hop away from the current node is listed below the dotted line in the topology table.

The link state information for the nodes belonging to the ONE-HOP SCOPE is exchanged at the highest frequency, and the frequency of exchange decreases as the HOP-COUNT SCOPE increases. Thus the immediate neighborhood topology information maintained at a node is more precise than the information about nodes further away from it.
 Message size for a typical topology information update packet is thus significantly reduced
by removing topology information about far-away nodes.
 The path information for a distant node may be inaccurate as there can be staleness in the
information. But this is compensated by the fact that the route gets more and more accurate as
the packet nears its destination.
 FSR scales well for large ad hoc wireless networks because of the reduction in routing
overhead due to the use of the described mechanism, where varying frequencies of updates
are used.

Advantages of the Fisheye State Routing Protocol - The multi-level scopes of FSR significantly reduce the bandwidth consumed by link state update packets; hence FSR is suitable for large and highly mobile networks. The choice of hop count associated with each scope level has a high impact on the performance of the protocol at different mobility levels, and hence must be chosen carefully.

Power Aware Routing Protocol


In a deviation from traditional wired network routing and cellular wireless network routing, power consumption at the nodes is a serious factor to be taken into consideration by routing protocols for ad hoc wireless networks. This is because, in ad hoc wireless networks, the routers are just as power constrained as the end nodes. Some of the important routing metrics that take this energy factor into consideration are discussed below.

Power Aware Routing Metrics –

The limited availability of power for operation is a significant bottleneck, given the portability, weight, and size requirements of commercial hand-held devices. Hence, the use of routing metrics that consider the capabilities of the power sources of the network nodes contributes to the efficient utilization of energy and helps increase the network lifetime. Many routing metrics that support conservation of battery power have been proposed in research papers. Routing protocols that select paths so as to conserve power must be aware of the battery states (i) at the given node and (ii) at the other intermediate nodes in the path.

1. Minimal Energy Consumption per Packet - This metric minimizes the power consumed by a packet in traversing from source to destination. The energy consumed by a packet traversing a path is the sum of the energies required at every intermediate hop in that path; the energy consumed at an intermediate hop is a function of the distance between the nodes that form the link and the load on that link (a small computation sketch is given after metric 5 below). This metric does not balance the load, so uniform consumption of power is not maintained throughout the network. The disadvantages of this metric include (i) selection of paths with large hop length, (ii) inability to measure the power consumption at a link in advance when the load varies, and (iii) inability to prevent the fast discharging of batteries at some nodes.

2. Maximize Network Connectivity – This metric attempts to balance the routing load among the
cut-set (the subset of the nodes in the network, the removal of which results in network partitions).
This assumes significance in environments where network connectivity is to be ensured by uniformly
distributing the routing load among the cut-set. With a variable traffic origination rate and
unbounded contention in the network, it is difficult to achieve a uniform battery draining rate for the
cut-set.

3. Minimum Variance in Load Power Levels – This metric proposes to distribute the load among
all nodes in the N/W so that the power consumption pattern remains uniform across them. This
problem is very complex when the rate & size of data packets vary. Nearly optimal performance is
achieved by routing packets to the least-loaded next-hop node.

4. Minimum Cost per Packet - To maximize the life of every node in the N/W, this routing metric is defined as a function of the state of the node's battery: the cost of a node grows as its remaining battery charge decreases. The remaining battery charge is translated to a cost factor that is used for routing; with the availability of a battery discharge pattern, the cost of a node can be computed. This metric has the advantage that the cost of a node is easy to calculate, and congestion handling is obtained at the same time.

5. Minimize Maximum Node Cost - This metric minimizes the maximum cost per node after routing a number of packets or after a specific period. This delays the failure of a node that would otherwise occur due to higher battery discharge caused by packet forwarding.
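The sketch below illustrates metrics 1 and 4 from the list above with made-up numbers: the per-hop energies and the reciprocal battery-cost model are assumptions chosen only to show why a minimum-energy path and a minimum-cost path need not be the same.

```python
# Illustrative sketch of metric 1 (energy per packet) and metric 4 (battery-based node cost).
def path_energy(path, link_energy):
    """Metric 1: total transmission energy of a path = sum of per-hop energies."""
    return sum(link_energy[(a, b)] for a, b in zip(path, path[1:]))

def node_cost(remaining_charge, full_charge=100.0):
    """Metric 4: cost grows as the battery drains (simple reciprocal model, assumed)."""
    return full_charge / max(remaining_charge, 1e-6)

def path_cost(path, battery):
    """Sum the battery cost of the intermediate nodes only."""
    return sum(node_cost(battery[n]) for n in path[1:-1])

link_energy = {(1, 2): 2.0, (2, 4): 2.5, (1, 3): 1.0, (3, 4): 2.0}
battery = {2: 80.0, 3: 20.0}          # node 3 is nearly drained
for path in ([1, 2, 4], [1, 3, 4]):
    print(path, "energy:", path_energy(path, link_energy), "cost:", round(path_cost(path, battery), 2))
# The lower-energy path (1-3-4) goes through the weak node 3, so the cost metric prefers 1-2-4.
```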

Different N/W Clustering Architectures

Clustered Network Tree Architecture

Sensor nodes autonomously form groups called clusters.


 The clustering process is applied recursively to form a hierarchy of clusters.

Data Delivery Modes – Unicast, Broadcast, Multicast


Unicast is one-to-one communication and multicast is one-to-many communication.
In multicast, the sender transfers information to a group of interested receiver stations.

Multicast Routing in Ad Hoc Wireless N/W
Ad hoc wireless networks have applications in civilian operations (collaborative and distributed computing), emergency search-and-rescue, law enforcement, and warfare situations. In all these cases, setting up and maintaining a communication infrastructure may be difficult and costly, yet communication and coordination among all nodes are necessary. Multicast routing protocols play an important role in ad hoc wireless networks in providing this communication. It is always advantageous to use multicast rather than multiple unicast transmissions, especially in the ad hoc environment, where bandwidth comes at a premium. Because of the dynamic nature of the network topology, the Internet protocol (IP) based multicast routing protocols of conventional wired networks do not perform well in ad hoc wireless networks: the i) dynamically changing topology, ii) relatively low bandwidth, and iii) less reliable wireless links cause a) long convergence times, b) formation of transient routing loops, and c) consumption of scarce bandwidth. A wired network uses an established routing tree for a multicast session; a packet sent to all nodes in the tree passes once through each node and each link in the tree. Such a multicast structure is not appropriate for ad hoc networks because the tree could easily break due to the highly dynamic topology.

Why Wired Multicast Routing Protocol is not directly Applicable for Ad Hoc Wireless N/W ?

 Multicast tree structures are not stable and need to be reconstructed continuously as
connectivity changes due to mobility in ADW N/W.

 Maintaining a routing tree for multicast packet forwarding, under frequent N/W topology
change, generates huge control packet traffic, hence B/W loss. The frequent exchange of
communication link state tables, triggered by continuous topology changes, yields excessive
control and processing overhead.

 Longer periods used for routing table update lead to instability of the multicast tree, which in
turn results in increased buffering time for packets, higher packet losses, and an increase in
the number of retransmissions.

 Therefore, Multicast protocols used in static wired networks are not suitable for ad hoc
wireless networks.

The Multicast Routing Problem - The problem is to determine which nodes in the network should participate in forwarding multicast data packets transmitted from a source to a selected set of receivers. Several multicast routing protocols for ad hoc wireless networks are presented below.


Multicast Routing Protocols-Design Issues
Limited available bandwidth, an error-prone shared broadcast channel, node mobility, limited energy resources, hidden and exposed terminals, and limited security make the design of a multicast routing protocol for ad hoc networks challenging. The main issues involved are the following:

1. Robustness - Due to node mobility, link failures are very common in ADW N/Ws, so data packets sent by the source may be lost, resulting in a low packet delivery ratio. Hence, a multicast routing protocol should be robust enough to sustain the mobility of the nodes and achieve a high packet delivery ratio.

2. Efficiency - Given the scarce bandwidth of an ad hoc wireless network, the efficiency of the multicast protocol must be maintained. Multicast efficiency is defined as:

Total number of data packets received at the receivers / Total number of (data + control) packets transmitted in the network

(a small computation sketch is given after issue 6 below).

3. Control overhead – To keep track of members in a multicast group, exchange of control packets
is required. This consumes high B/W which is scarce in ADW N/W. Multicast protocol must ensure
that total number of control packets transmitted for maintaining the multicast group is kept to
minimum.

4. Quality of service - Military and strategic applications are among the main uses of ad hoc networks, so providing QoS is a must for ad hoc multicast routing protocols. The main parameters taken into consideration for providing the required QoS are throughput, delay, reliability, and delay jitter.

5. Resource management - An ad hoc network consists of a group of mobile nodes, each with limited battery power and memory. Hence, an ad hoc multicast routing protocol should use minimum power, which may be achieved by reducing the number of packet transmissions; to reduce memory usage, it should maintain minimal routing table / routing state information.

6. Dependency on the Unicast Routing Protocol - If a multicast routing protocol needs the support of a particular unicast routing protocol, it is difficult for the multicast protocol to work in a heterogeneous network environment. Hence, it is desirable that the multicast routing protocol be independent of any specific unicast routing protocol.
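A small computation of the efficiency ratio defined in issue 2 above; the packet counts below are made-up numbers, used only to show the arithmetic.

```python
# Multicast efficiency = data packets received at receivers / (data + control) packets transmitted.
def multicast_efficiency(data_received_at_receivers, data_transmitted, control_transmitted):
    return data_received_at_receivers / (data_transmitted + control_transmitted)

# E.g. 900 receptions for 400 data and 200 control transmissions gives 1.5; values above 1 are
# possible because a single broadcast transmission may be received by several group members.
print(multicast_efficiency(900, 400, 200))
```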


Architecture Reference Model for Multicast Routing Protocol (Very Important)
The transport layer is ignored for the sake of simplicity

i) MAC Layer for Multicast Routing

The most important jobs of the MAC layer are the transmission and reception of packets, and it acts as a facilitator for access to the channel. Three other jobs related to multicast are:
1. Detecting all neighbour nodes within a particular hop distance (one hop, two hops, etc.)
2. Continuously observing the link characteristics for proper data routing
3. Performing broadcast transmission and reception
There are three principal modules in the MAC layer that perform the above three jobs (a neighbour-list sketch follows this list):
(a) Transmission module - This module facilitates and schedules transmissions on the channel. Its behaviour is guided by the MAC protocol, which maintains multicast state information based on past transmissions observed on the channel and then schedules the channel depending on its state.
(b) Receiver module
(c) Neighbour list handler - This module informs the higher layers whether a particular node is a neighbour node or not. It maintains a list of all neighbour nodes. This functionality can be implemented by means of beacons or by overhearing all packets on the channel.
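A rough sketch of the neighbour list handler in (c) is given below; the class name, the beacon expiry value, and the methods are assumptions used only to illustrate beacon-based neighbour tracking.

```python
# Illustrative neighbour list handler: neighbours learned from beacons expire after a timeout.
import time

class NeighbourListHandler:
    def __init__(self, expiry_seconds: float = 5.0):
        self.expiry = expiry_seconds
        self.last_heard = {}                       # neighbour id -> time of last beacon

    def on_beacon(self, neighbour_id: int) -> None:
        """Called when a beacon (or any overheard packet) from a neighbour is received."""
        self.last_heard[neighbour_id] = time.monotonic()

    def is_neighbour(self, node_id: int) -> bool:
        """Answers the query made by the higher (routing) layer."""
        heard = self.last_heard.get(node_id)
        return heard is not None and time.monotonic() - heard < self.expiry

    def neighbours(self) -> list:
        now = time.monotonic()
        return [n for n, t in self.last_heard.items() if now - t < self.expiry]

handler = NeighbourListHandler()
handler.on_beacon(7)
print(handler.is_neighbour(7), handler.neighbours())
```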

Complete Architecture Reference Model Diagram for Multicast Routing Protocol


ii) Routing Layer
This layer is responsible for forming and maintaining tables for unicast sessions and multicast groups. It uses a set of tables, timers, and route caches. Most multicast routing protocols operate at the routing layer.
The multicast services it provides to the upper application layer are i) joining/leaving a multicast group and ii) transmitting or receiving multicast packets. The other two layers interact closely with the routing layer.

The Routing Layer


(a) Unicast routing information handler - This serves to discover unicast routes by an on-demand or a table-driven mechanism.

(b) Multicast information handler – It maintains all relevant information related to the state of
the current node with respect to the multicast groups of which it is a part, in the form of a
table. This state might include a list of downstream nodes, the address of upstream nodes,
sequence number information etc. This table might be maintained per group or per source per
group.

(c) Forwarding module - It uses the information provided by the multicast information handler to decide whether a received multicast packet is to be broadcast, forwarded to a neighbour node, or passed up to the application layer (a decision sketch follows this list of modules).

(d) Tree/Mesh construction module – It constructs multicast topology. It receives request from
the application layer & flood requests across N/W to join a group. When application layer
sends termination message to this module, this module sends a proper message to all other
N/W nodes to terminate the multicast session.

(e) Session maintenance module – It initiates route repair when a lower link informs a link
break. It uses information from unicast/multicast routing tables & performs local search for
upstream or downstream nodes to restore the multicast topology.

(f) Route cache maintenance module - It gathers information from routing packets overheard on the channel for later use. Such information might include the addresses of nodes that have requested multicast routing. The route cache is updated as newer information is obtained from the most recent packets on the channel. This module is mostly optional; it increases efficiency by reducing the control overhead.
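The forwarding decision in module (c) can be sketched as a simple lookup on the state kept by module (b). The table layout, group names, and return strings below are assumptions for illustration only.

```python
# Illustrative forwarding-module decision driven by per-group multicast state.
multicast_table = {
    # group address -> state this node keeps for the group
    "G1": {"member": True,  "forwarding_node": True,  "downstream": [4, 9]},
    "G2": {"member": False, "forwarding_node": True,  "downstream": [2]},
}

def forward_decision(group: str, already_seen: bool) -> str:
    state = multicast_table.get(group)
    if state is None or already_seen:
        return "drop"                        # not part of this group, or a duplicate packet
    if state["member"] and state["forwarding_node"]:
        return "deliver_and_rebroadcast"     # pass to the application layer and keep forwarding
    if state["member"]:
        return "deliver"                     # leaf receiver: application layer only
    return "rebroadcast"                     # pure forwarding node

print(forward_decision("G1", already_seen=False))   # deliver_and_rebroadcast
print(forward_decision("G2", already_seen=False))   # rebroadcast
```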

iii) Application Layer

The application Layer Block Schematic

This layer utilizes the services of the routing layer below it to satisfy the multicast session requirements of the application. It has two modules, as shown in the figure at left.

Interaction Among All Three Layers During the Multicast Session –

1. Joining a Group - Module 10 of the application layer makes a request to join a group to module 5 of the routing layer. Module 5 uses cached information from module 4 and unicast routing information from module 9, and initiates flooding of a JoinReq packet using module 2 of the MAC layer. JoinReq messages from other nodes are picked up and forwarded by module 3 to modules 7, 8, and 9 of the routing layer, which update the multicast table and propagate the message to the upper layer. During the reply phase, the forwarding states in the multicast tables of the intermediate nodes are established.

2. Data Packet Propagation – Data packets are handled by module 11 of application layer &
passed to module 8 which decides whether to broadcast the packets or not, after consulting with
module 7. Similar process occurs in all nodes under multicast topology until eventually the data
packets are sent by the forwarding module of the receivers to the application layer.

3. Route Repair – It is handled by module 6 on being informed by module 1 about link breaks. It
uses unicast & multicast routing tables to graft the node back into the multicast topology.

All modules don’t operate in all nodes at a given time. Table indicates different modules in
operation at different nodes.

Multicast Routing – Classification (Very Important)
Multicast routing protocols for ad hoc wireless networks can be broadly classified into two types:
application-independent or generic and application-dependent. While application-independent
multicast protocols are used for conventional multicasting, application-dependent multicast protocols
are meant only for specific applications for which they are designed.

Application-independent multicast protocols can be classified along three different dimensions.

1) Based on topology – 2 types: Tree-Based and Mesh-Based Multicast Protocols.

2) Based on initialization of a multicast session – The multicast group formation can be initiated
by the source as well as by the receivers. In a multicast protocol, if the group formation is initiated
only by the source node, then it is called a source-initiated multicast routing protocol, and if it is
initiated by the receivers of the multicast group, then it is called a receiver-initiated multicast
routing protocol. Some multicast protocols do not distinguish between source and receiver for
initialization of the multicast group. We call these source-or-receiver-initiated multicast routing
protocols.

3) Based on topology maintenance mechanism - Maintenance of the multicast topology can be done either by the soft state approach or by the hard state approach. In the soft state approach, control packets are flooded periodically to refresh the route, which leads to a high packet delivery ratio at the cost of more control overhead. In the hard state approach, control packets are transmitted (to maintain the routes) only when a link breaks, resulting in lower control overhead but at the cost of a lower packet delivery ratio.

Tree Based Multicast Routing Protocol
Tree-based multicasting is a well-established concept used in several wired multicast
protocols to achieve high multicast efficiency. In tree-based multicast protocols, there is only one
path between a source-receiver pair. The main drawback of these protocols is that they are not
robust enough to operate in highly mobile environments.

Tree-based multicast protocols can be classified into two types: Source-tree-based multicast
routing protocols and Shared-tree based multicast routing protocols. In a source-tree-based protocol,
a single multicast tree is maintained per source, whereas in a shared tree-based protocol, a single tree
is shared by all the sources in the multicast group. In source-tree-based multicast routing protocols,
an increase in the number of sources gives rise to a proportional increase in the number of source
trees. This results in significant increase in B/W consumption. But in a shared-tree-based multicast
protocol, this increase in bandwidth usage is not as high as in source-tree-based protocols because,
even when the number of sources for multicast sessions increases, the number of trees remains the
same.

Bandwidth Efficient Multicast Routing (Type of Tree Based Multicast Routing Protocol )- BEMRP

B/W efficiency is the key design issue for multicast protocols. BEMRP tries to find the nearest forwarding node, rather than the shortest path, between source and receiver; hence the number of data packet transmissions is smaller. To maintain the multicast tree it uses the hard state approach: only after a link breaks does a node transmit the required control packets, if it wants to rejoin the group. This avoids the periodic transmission of control packets (as in the soft state approach) and hence saves a lot of bandwidth. To remove unwanted forwarding nodes, route optimization is performed, which helps further reduce the number of data packet transmissions. BEMRP has three phases: i) tree initialization, ii) tree maintenance, and iii) tree optimization.

1) Multicast Tree Initialization in BEMRP - Here, multicast tree construction is initiated by the receivers. When a receiver wants to join the group, it initiates flooding of Join control packets. The tree nodes, on receiving these packets, respond with Reply packets. When many such Reply packets reach the requesting node, it chooses one of them and sends a Reserve packet along the path taken by the chosen Reply packet. In the example, when a new receiver R3 wants to join the multicast group, it floods the Join control packet. The tree nodes S, I1, and R2 may receive more than one Join control packet. After waiting for a specific time, each of these tree nodes chooses the Join packet with the smallest hop count traversed and sends back a Reply packet along the reverse path which the selected Join packet had traversed. When tree node I1 receives Join packets from the previous nodes I9 and I2, it sends a Reply packet to receiver R3 through node I2. The receiver may receive more than one Reply packet; in this case, it selects the Reply packet with the lowest hop count and sends a Reserve packet along the reverse path that the selected Reply packet had traversed. In the figure shown below, receiver R3 receives Reply packets from source S, receiver R2, and intermediate node I1. Since the Reply packet sent by intermediate node I1 has the lowest hop count (3 hops only), R3 sends a Reserve packet to node I3, and thus joins the multicast group.
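The receiver-side choice in the example above (pick the Reply with the smallest hop count and reserve its reverse path) can be sketched as follows; the packet fields are assumptions, and only I1's 3-hop count comes from the text, the other hop counts being made up for illustration.

```python
# Illustrative sketch of the BEMRP join decision at a new receiver.
from dataclasses import dataclass

@dataclass
class Reply:
    from_tree_node: str
    hop_count: int
    reverse_path: list        # path the Reply traversed, starting at the receiver

def choose_reply(replies):
    """Pick the Reply with the lowest hop count; a Reserve packet follows its reverse path."""
    return min(replies, key=lambda r: r.hop_count)

replies = [
    Reply("S",  5, ["R3", "I7", "I8", "I9", "S"]),     # from the source (hop count assumed)
    Reply("R2", 4, ["R3", "I6", "I5", "R2"]),          # from another receiver (hop count assumed)
    Reply("I1", 3, ["R3", "I3", "I2", "I1"]),          # from intermediate tree node I1 (3 hops, as in the text)
]
best = choose_reply(replies)
print("send Reserve along:", best.reverse_path)        # R3 joins the tree through I3 -> I2 -> I1
```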

Multicast Tree Initialization Step

2) Multicast Tree Maintenance Phase in BEMRP - To reduce the control overhead, in BEMRP,
tree reconfiguration is done only when a link break is detected. There are two schemes to recover
from link failures.
Scheme 1 - Broadcast Multicast scheme: In this scheme, the upstream node is responsible for
finding a new route to the previous downstream node. When receiver R3 moves from A to B, it
gets isolated from the remaining part of the tree. The upstream node I3 now floods broadcast-
multicast packets (with limited TTL). After receiving this packet, receiver R3 sends a Reserve
packet and joins the group again.

Scheme 2 – Local Rejoin scheme: In this scheme, the downstream node of the broken link tries to
rejoin the multicast group by means of limited flooding of the Join packets. In figure, when the link
between receiver R3 and its upstream node I3 fails (due to movement of node R3), then R3 floods
the Join control packet with a certain TTL value (depending on the topology, this value can be
tuned). When tree nodes receive the Join control packet, they send back the Reply packet. After
receiving the Reply packet, the downstream node R3 rejoins the group by sending a Reserve
packet to the new upstream node I4.

Scheme 1 - Broadcast Multicast Scheme / Scheme 2 - Local Rejoin Scheme
The two schemes shown above are for multicast route maintenance in BEMRP.

The scheme shown above is for multicast route optimization in BEMRP.


3) Multicast Tree Optimization Phase in BEMRP - When a tree node or a receiver node comes
within the transmission range of other tree nodes, then unwanted tree nodes are cropped by sending
the Quit message. In Figure above, when receiver R3 comes within the transmission range of the
intermediate node I2, it will receive a multicast packet from node I2 sooner than that from node I5.
After R3 receives a multicast packet from node I2, it sends a Reserve packet to node I2 to set up a new route directly to node I2. R3 also sends a Quit packet to node I5. Since node R3 is no longer a downstream node of I5, node I5 sends a Quit packet to node I4. Node I4 sends a Quit packet
to node I3, and node I3 in turn sends a Quit packet to node I2. Thus unnecessary forwarding nodes
are pruned (cut off). This mechanism helps to reduce the number of data packet transmissions.

BEMRP Advantages and Limitations -

Advantages - The main advantage of this multicast protocol is that it saves bandwidth due to the
reduction in the number of data packet transmissions and the hard state approach being adopted for
tree maintenance.

Limitations – Since a node joins the multicast group through its nearest forwarding node, the
distance between source and receiver increases. This increase in distance increases the probability of
path breaks, which in turn gives rise to an increase in delay and reduction in the packet delivery ratio.
Also, since the protocol uses the hard state approach for route repair, a considerable amount of time
is spent by the node in reconnecting to the multicast session, which adds to the delay in packet
delivery.

Transport Layer Protocols for Ad Hoc Wireless Networks


The objectives of the transport layer are i) setting up an end-to-end connection, ii) end-to-end delivery of data packets, iii) flow control, and iv) congestion control. TCP is a reliable, byte-stream-based, connection-oriented transport layer protocol designed for wired networks. TCP for wired networks is not directly suitable for ad hoc wireless networks because of the mobility, scarce bandwidth, and scarce battery power of ad hoc networks. The routing protocols discussed previously did not address a very important issue, SECURITY, which makes ad hoc networks highly vulnerable to external threats compared to secured wired and cellular networks. The transport layer also takes care of security issues in ad hoc wireless networks, and special protocols are designed for that.

Features of the Transport Layer - it is an interface layer between the software and hardware layers

It provides the higher layers a network-independent interface to the lower layers
Segmentation and re-assembly of messages occur here
End-to-end error recovery and end-to-end error control
 Monitoring of Quality of Service (QoS)

Applications of Transport Layer

• Class of service - the transport layer specifies the service but does not provide it itself.
• Multiplexing of several transport connections onto the same network connection, and splitting of one transport connection across several network connections, occur here
• Ensuring that the application data reaches the destination application with integrity

Transport Layer is at the heart of OSI N/W model

Issues in Designing Transport Layer Protocol for Ad Hoc Wireless N/Ws


i) Induced traffic - An ADW N/W uses multihop radio relaying and multicast routing. If the packet traffic at a link (or path) is affected by the packet traffic through a neighboring link, this is referred to as induced traffic. It happens due to the broadcast nature of the channel and location-dependent contention on the channel. Induced traffic affects the throughput of the transport layer.
ii) Induced throughput unfairness - Throughput unfairness at the lower layers (for example, the MAC layer) induces throughput unfairness at the transport layer. For example, an ADW N/W using the IEEE 802.11 MAC protocol experiences unfairness in throughput at the transport layer. A transport layer protocol should consider these factors while providing a fair share of throughput across contending flows.
iii) Separation of congestion control, reliability, and flow control - A transport layer protocol can provide better performance if a) end-to-end reliability, b) flow control, and c) congestion control are handled separately. Reliability and flow control are end-to-end activities, whereas congestion is a local activity; the transport layer experiences congestion even if just one intermediate link is congested. If the transport layer of the ADW N/W handles these three issues separately, the control overhead is greatly reduced, which in turn improves transport layer performance.
iv) Power and bandwidth constraints – Nodes in ad hoc wireless networks are a) Power Source
and b) Bandwidth constrained. The performance of transport layer protocol is significantly affected
by these constraints.
v) Misinterpretation of congestion - Packet loss and Retransmission TimeOut (RTO) expiry are the standard indications of network congestion, but this interpretation is not applicable to ADW N/Ws, because a) the high error rate of the wireless channel, b) location-dependent contention, c) the hidden terminal problem, d) packet collisions in the network, e) path breaks due to node mobility, and f) node failure due to drained battery are also causes of packet loss in ad hoc wireless networks.
vi) Completely decoupled transport layer - Interaction with the lower layers is a bigger challenge faced by the transport layer. Wired network transport layer protocols are almost completely decoupled from the lower layers. For ADW N/Ws, cross-layer interaction between the transport layer and the lower (MAC and network) layers is important so that the transport layer can adapt to the changing network environment.
vii) Dynamic topology – Node mobility causes rapidly changing ADW N/W topology. It may lead
to frequent path breaks, partitioning and remerging of networks, and high delay in re-establishment
of paths. Transport layer protocol performance gets hugely affected due to these rapid N/W topology
changes.

Design Goals of Transport Layer Protocol for Ad Hoc Wireless Networks


 The protocol should maximize the throughput per connection.
 It should provide throughput fairness across contending flows.
 It should incur minimum connection setup and minimum connection maintenance overheads.
It should minimize the resource requirements for setting up and maintaining the connection to
make the protocol scalable in large networks.
 The transport layer protocol should have mechanisms for congestion control and flow
control within the Ad Hoc wireless networks.
 It should be able to provide both reliable and unreliable connections as per the requirements
of the application layer.
 It should be able to adapt to the dynamics of the N/W such as the rapid change in topology
and changes in the nature of wireless links from uni-directional to bidirectional or vice versa.
 The most important resource, the ‘available bandwidth’ must be used efficiently.
 It should be aware of resource constraints such as battery power and buffer sizes and make
efficient use of them.
 The transport layer protocol should make use of information from the lower layers (such as
MAC layer) in the protocol stack for improving the network throughput.
 It should have a well-defined cross-layer interaction framework for effective, scalable, and
protocol-independent interaction with lower layers.
 The protocol should maintain end-to-end semantics.

Classification of Transport Layer Solutions

Why does TCP NOT Perform Well in Ad Hoc Wireless N/W? (Very Important)
The major issues behind throughput degradation that Transmission Control Protocol (TCP) faces
when used for Ad Hoc Wireless Networks are the following –

i) Misinterpretation of packet loss - TCP is mainly designed for wired N/Ws, where packet loss is a measure of network congestion and is detected via the sender's Retransmission TimeOut (RTO) period. When packet loss is detected, the sender initiates the congestion control algorithm. In an ADW N/W, however, a high degree of packet loss occurs due to the high bit error rate (BER) of the wireless channel, increased collisions due to the presence of hidden terminals, interference, location-dependent contention, uni-directional links, path breaks due to node mobility, and inherent wireless channel fading.

ii) Frequent path breaks - Due to unrestricted node mobility, dynamic topology changes and route changes occur in an ADW N/W, so the route to a destination has to be recalculated very frequently. Re-establishment of a broken route is the job of the network layer; it is a time-consuming process which depends on a) the number of nodes, b) the transmission ranges of the nodes, c) the current topology, d) the channel bandwidth, e) the traffic load, and f) the nature of the routing protocol. If the route re-establishment time is greater than the RTO of the TCP sender, the sender assumes link congestion, retransmits the packet, and initiates the congestion control algorithm. These retransmissions waste bandwidth and battery power. Even after a new route is found, the throughput remains low for some time because the congestion window has to be built up again as TCP undergoes a slow start.

iii) Effect of path length - TCP throughput degrades rapidly with increasing path length in an ADW N/W, because the probability of a path break increases with path length. Thus the longer the path, the higher the probability of a path break and the lower the network throughput.

Effect of path length on N/W Throughput in TCP

iv) Misinterpretation of congestion window - TCP treats the congestion window as a measure of the transmission rate that is acceptable to the network and the receiver. In an ADW N/W, the congestion control mechanism is invoked when a path break occurs or the network gets partitioned, and the RTO period increases. When the route is reconfigured, the congestion window may not reflect the transmission rate acceptable on the new route, as the new route might accept a much higher transmission rate. Hence, with frequent topology changes and path breaks, the congestion window may not reflect the maximum transmission rate acceptable to the network and the receiver.

v) Asymmetric link behavior – Radio channel used in ADW N/W has properties such as location dependent
contention, environmental effects on propagation, and directional properties leading to asymmetric links. The
directional links can result in delivery of a packet to a node, but failure in the delivery of the ACK back
to the sender. It is possible for a bidirectional link to become uni-directional for a while. It leads to TCP
initiating congestion control algorithm & several retransmissions.

vi) Uni-directional path - Traditional TCP relies on end-to-end ACKs for ensuring reliability. An ACK packet is very small compared to a data packet and hence consumes much less bandwidth; however, in an ADW N/W every TCP ACK packet requires an RTS-CTS-Data-ACK exchange at the MAC layer, which adds around 70 bytes of overhead, and this overhead is incurred again for every retransmission. This leads to disproportionately high bandwidth consumption on the reverse path compared to the forward path. In a few routing protocols the forward data path and the reverse ACK path are the same, but for many protocols these two paths are entirely or partially different, and a path break on an entirely different reverse path affects performance as much as a path break in the forward path.

vii) Multipath routing – There exists a set of QoS routing and best-effort routing protocols that use multiple
paths between a source-destination pair. There are several advantages in using multipath routing, like – a)
reduction in route computing time b) high resilience to path breaks c) high call acceptance ratio and d) better
security. For TCP, these advantages may add to throughput degradation. These can lead to a significant
amount of out-of-order packets, which in turn generates a set of duplicate acknowledgments (DUPACKs)
which cause additional power consumption and initiation of congestion control.

viii) Network partitioning and remerging - Randomly moving nodes in an ad hoc N/W can partition the network. As long as the TCP sender, the TCP receiver, and all the intermediate nodes in the path remain in the same partition, the TCP connection remains intact. When partitioning causes the sender and the receiver to fall into different partitions, the performance of the session is severely affected.

Figures (a-c) show the effect of network partitions. Due to dynamic topological changes, the network gets partitioned into two at time t2 (figure (b)); now the sender and receiver of TCP session A are in two different partitions, and TCP session B experiences a path break. These partitions merge back into a single network at time t3 (figure (c)).

TCP Solutions for Ad Hoc Wireless Networks

1. Feedback-Based TCP (TCP-F) - Feedback-based TCP (TCP-F) is a modified version of traditional TCP that improves performance in ADW N/Ws. It uses a feedback-based approach and requires a reliable link layer and a routing protocol that can provide a feedback signal to the TCP sender about path breaks; the routing protocol is expected to repair the broken path as soon as possible. TCP-F minimizes the throughput degradation resulting from frequent path breaks in an ADW N/W. During a TCP session, path breaks cause heavy packet loss and path re-establishment delay, and upon detection of packet loss the sender would normally initiate the congestion control algorithm, leading to exponential back-off of the RTO timers and a decrease in congestion window size. In TCP-F, an intermediate node that detects a path break originates a Route Failure Notification (RFN) packet, which is routed to the sender of the TCP session; the sender's address is expected to be obtained from the TCP packets being forwarded by the node. The intermediate node that originates the RFN packet is called the Failure Point (FP), and it maintains information about all the RFNs it has originated so far. The FP node, knowing about the route failure, forwards the RFN, updates its routing table accordingly, and avoids forwarding any more packets on that route.

If an intermediate node that receives the RFN has an alternate route to the same destination, it discards the RFN packet and uses the alternate path for forwarding further data packets, thus reducing the control overhead involved in the route reconfiguration process. Otherwise, it forwards the RFN toward the source node. When the TCP sender receives an RFN packet, it goes into a snooze state: it stops sending any more packets to the destination, cancels all the timers, freezes its congestion window, freezes the retransmission timer, and sets up a route failure timer. The route failure timer depends on the routing protocol, the network size, and the network dynamics, and is to be taken as the worst-case route reconfiguration time. When the route failure timer expires, the TCP sender changes from the snooze state back to the connected state.

The three figures (a, b, and c) above show the TCP-F protocol operation. In figure (a), a TCP session is set up between node A and node D over the path A-B-C-D. When the intermediate link between node C and node D fails, node C (the FP) originates an RFN packet and forwards it on the reverse path to the source node (figure (b)). The sender's TCP state is changed to the snooze state upon receipt of the RFN packet. When the link C-D is restored, or if any of the intermediate nodes obtains a path to destination D, a Route Re-establishment Notification (RRN) packet is sent to node A, and the TCP state is changed back to the connected state (figure (c)).
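The sender-side behaviour described above can be summarised as a tiny state machine; this is an illustrative sketch only, with an assumed worst-case route reconfiguration time, not an implementation of TCP-F.

```python
# Illustrative TCP-F sender state machine: connected <-> snooze on RFN / RRN / timer expiry.
class TcpFSender:
    def __init__(self, worst_case_reconfig_time: float = 2.0):
        self.state = "connected"
        self.route_failure_deadline = None
        self.worst_case = worst_case_reconfig_time     # assumed value (seconds)

    def on_rfn(self, now: float) -> None:
        """Route Failure Notification: stop sending, freeze window/timers, start route failure timer."""
        self.state = "snooze"
        self.route_failure_deadline = now + self.worst_case

    def on_rrn(self) -> None:
        """Route Re-establishment Notification: resume with the frozen congestion window."""
        self.state = "connected"
        self.route_failure_deadline = None

    def on_tick(self, now: float) -> None:
        """If the route failure timer expires without an RRN, fall back to the connected state."""
        if self.state == "snooze" and now >= self.route_failure_deadline:
            self.on_rrn()

sender = TcpFSender()
sender.on_rfn(now=10.0); print(sender.state)    # snooze
sender.on_tick(now=13.0); print(sender.state)   # connected again
```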

Advantages – i) TCP-F provides a simple feedback-based solution to minimize the problems arising
out of frequent path breaks in ADW N/W. ii) It also permits the TCP congestion control mechanism
to respond to congestion in the network. iii) TCP-F depends on the intermediate nodes' ability to
detect route failures and the routing protocols' capability to re-establish a broken path within a
reasonably short duration. iv) The Failure Point should be able to obtain the correct path (the path
which the packet traversed) to the TCP-F sender for sending the RFN packet. This is simple with a
routing protocol that uses Dynamic Source Routing (DSR).

Limitations – i) If a route to the sender is not available at the FP, then additional control packets
may need to be generated for routing the RFN packet. It uses some of the scarce bandwidth. ii) TCP-
F has an additional state compared to the traditional TCP state machine, and hence its
implementation requires modifications to the existing TCP libraries. iii) The congestion window
used after a new route is obtained may not reflect the achievable transmission rate acceptable to the
network and the TCP-F receiver.

2. TCP with Explicit Link Failure Notification (TCP-ELFN) - Holland and Vaidya proposed TCP with Explicit Link Failure Notification (TCP-ELFN) for improving TCP performance in ADW N/Ws. It is similar to TCP-F, except for i) the explicit link failure notification (ELFN) and ii) the use of TCP probe packets for detecting route re-establishment. The ELFN is originated, upon detection of a link failure, by the node that detects the path break, and is sent to the TCP sender. This can be implemented in two ways: i) by sending a Destination Unreachable (DUR) message to the sender, or ii) by piggy-backing this information on the Route Error message that is sent to the sender. Once the TCP sender receives the ELFN packet, it disables its retransmission timers and enters a standby state. In this state, it periodically generates probe packets to check whether a new route has been re-established. When the sender receives an ACK for a probe packet, it comes out of the standby state, restores the retransmission timers, and continues to function as normal.
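The standby/probe behaviour can be sketched as below; the probe period, the give-up limit, and the callback names are assumptions, and a real implementation would wait for the probe period between attempts rather than just counting.

```python
# Illustrative sketch of TCP-ELFN standby probing: probe until an ACK confirms a new route.
def elfn_standby(send_probe, ack_received, probe_period: float = 2.0, max_wait: float = 60.0) -> bool:
    """Return True once an ACK for a probe arrives (route re-established), False if we give up."""
    waited = 0.0
    while waited < max_wait:
        send_probe()                 # small TCP packet sent toward the destination
        if ack_received():           # ACK for the probe => leave standby, restore timers
            return True
        waited += probe_period       # a real sender would sleep for probe_period here
    return False

# Stubbed example: the third probe is acknowledged.
answers = iter([False, False, True])
print(elfn_standby(send_probe=lambda: None, ack_received=lambda: next(answers)))
```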

Advantages - i) TCP-ELFN improves TCP performance by decoupling path break information from congestion information through the use of ELFN. ii) It is less dependent on the routing protocol and requires only a link failure notification about the path break.
Limitations - i) When the network is temporarily partitioned, the path failure may last longer, and the periodic probe packets that are generated consume bandwidth and power. ii) The congestion window used after a new route is obtained may not reflect the achievable transmission rate acceptable to the network and the receiver.

3. TCP with Buffering Capability and Sequence Information (TCP-BuS) - TCP-BuS is similar to TCP-F and TCP-ELFN in its use of feedback information from an intermediate node on detection of a path break, but it is more dependent on the routing protocol than TCP-F and TCP-ELFN. TCP-BuS uses the Associativity Based Routing (ABR) protocol as the routing scheme and uses some of its special messages, such as Localized Query (LQ) and REPLY, for finding a partial path; these messages are modified to carry TCP connection and segment information. Upon detection of a path break, the upstream intermediate node, called the Pivot Node (PN), generates an Explicit Route Disconnection Notification (ERDN) message. This ERDN packet is propagated to the TCP-BuS sender and, on receiving it, the TCP-BuS sender stops transmission and freezes all timers and windows as in TCP-F. The packets in transit from the TCP-BuS sender to the PN are buffered at the sender and at the intermediate nodes until a new partial path from the PN to the TCP-BuS receiver is found by the PN. In order to avoid unnecessary retransmissions, the timers for the buffered packets at the TCP-BuS sender and at the intermediate nodes up to the PN use timeout values proportional to the Round-Trip Time (RTT). The
intermediate nodes between the TCP-BuS sender and the PN can request the TCP-BuS sender to
selectively retransmit any of the lost packets. Upon detection of a path break, the downstream node
originates a Route Notification (RN) packet to the TCP-BuS receiver, which is forwarded by all the
downstream nodes in the path. An intermediate node that receives an RN packet discards all packets
belonging to that flow. The ERDN packet is propagated to the TCP-BuS sender in a reliable way by
using an implicit acknowledgment and retransmission mechanism. The PN includes the sequence
number of the TCP segment belonging to the flow that is currently at the head of its queue in the
ERDN packet. The PN also attempts to find a new partial route to the TCP-BuS receiver, and the
availability of such a partial path to destination is intimated to the TCP-BuS sender through an
Explicit Route Successful Notification (ERSN) packet. TCP-BuS utilizes the route reconfiguration
mechanism of ABR to obtain the partial route to the destination. Due to this, other routing protocols
may require changes to support TCP-BuS. The LQ and REPLY messages are modified to carry TCP
segment information, including the last successfully received segment at the destination. The LQ
packet carries the sequence number of the segment at the head of the queue buffered at the PN
and the REPLY carries the sequence number of the last successful segment the TCP-BuS
receiver received. It enables the TCP-BuS receiver to understand the packets lost in transition
and those buffered at the intermediate nodes. This is used to avoid fast retransmission requests
usually generated by TCP-BuS receiver when it notices an out-of-order packet delivery. Upon a
successful LQ-REPLY process to obtain a new route to the TCP-BuS receiver, PN informs the TCP-
BuS sender of the new partial path using the ERSN packet. When the TCP-BuS sender receives an
ERSN packet, it resumes the data transmission. Since there is a chance for ERSN packet loss due to
congestion in the network, it needs to be sent reliably. The TCP-BuS sender also periodically
originates probe packets to check the availability of a path to the destination. Figure shows
propagation of ERDN and RN messages when a link between nodes 4 and 12 fails. When a TCP-
BuS sender receives the ERSN message, it understands, from the sequence number of the last
successfully received packet at destination and the sequence number of the packet at the head of the
queue at PN, the packets lost in transition. The TCP-BuS receiver understands that the lost packets
will be delayed further and hence uses a selective acknowledgment strategy instead of fast
retransmission. These lost packets are retransmitted by the TCP-BuS sender. During the
retransmission of these lost packets, the network congestion between the TCP-BuS sender and PN is
handled in a way similar to that in traditional TCP.

TCP-BuS Operational Diagram

Advantages - i) The advantages of TCP-BuS include performance improvement and avoidance of fast retransmission due to the use of buffering, sequence numbering, and selective acknowledgment. ii) TCP-BuS also takes advantage of the underlying routing protocol, especially on-demand routing protocols such as ABR.
Limitations - i) TCP-BuS has an increased dependency on the routing protocol and on buffering at the intermediate nodes. ii) The failure of intermediate nodes that buffer packets may lead to loss of packets and performance degradation. iii) The dependency of TCP-BuS on the routing protocol may degrade its performance with other routing protocols that do not have control messages similar to those of ABR.

Other Transport Layer Protocol for Ad Hoc Wireless Networks
Performance of a transport layer protocol can be enhanced if it takes into account the nature of the
network environment in which it is applied. Especially in wireless environments, it is important to
consider the properties of the physical layer and the interaction of the transport layer with the lower
layers. The Application Controlled Transport Protocol discussed here is designed specifically
for Ad Hoc Wireless Networks. Even though interworking with TCP is very important, there exist
several application scenarios such as military communication where a radically new transport layer
protocol can be used.

Application Controlled Transport Protocol (ACTP)

Unlike the TCP solutions discussed earlier, ACTP is a light-weight transport layer protocol; it is not an extension to TCP. ACTP assigns the responsibility for ensuring reliability to the application layer, and is more like UDP (User Datagram Protocol) with delivery feedback and state maintenance added. ACTP stands in between TCP and UDP: in ad hoc wireless networks TCP gives low performance with high reliability, while UDP gives better performance but with high packet loss. The key design philosophy of ACTP is to leave the provisioning of reliability to the application layer and to provide simple feedback information about the delivery status of packets to the application layer. ACTP supports per-packet priority, but it is the responsibility of the lower layers to actually provide a differentiated service based on this priority.

The ACTP layer and the API functions used by the application layer to interact with it are shown above. Each API call to send a packet [SendTo()] carries the additional information required by ACTP, such as the maximum delay the packet can tolerate, the message number of the packet, and the priority of the packet. The message number is assigned by the application layer and need not be in sequence. The priority level is assigned for every packet by the application; it can
be varied across packets in the same flow with increasing numbers referring to higher priority
packets. The non-zero value in the message number field implicitly conveys that the application
layer expects a delivery status information about the packet to be sent. This delivery status is
maintained at the ACTP layer, and is available to the application layer for verification through
another API function [IsACKed<message number>]. The delivery status returned by IsACKed
function can reflect (i) a successful delivery of the packet (ACK received), (ii) a possible loss of the
packet (no ACK received and the deadline has expired), (iii) remaining time for the packet (no ACK
received but the deadline has not expired), and (iv) no state information exists at the ACTP layer
regarding the message under consideration. A zero in the delay field refers to the highest priority
packet, which requires immediate transmission with minimum possible delay. Any other value in the
delay field refers to the delay that the message can experience. On getting the information about the
delivery status, the application layer can decide on retransmission of a packet with the same old
priority or with an updated priority. Well after the packet's lifetime expires, ACTP clears the packet's
state information and delivery status. The packet's lifetime is calculated as 4 × Retransmit TimeOut
(RTO) and is set as the lifetime when the packet is sent to the network layer. A node estimates the
RTO interval by using the round-trip time between the transmission time of a message and the time
of reception of the corresponding ACK. Hence, the RTO value may not be available if there are no
existing reliable connections to a destination. A packet without any message number (i.e., no
delivery status required) is handled exactly the same way as in UDP without maintaining any state
information.
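A hedged sketch of how an application might use the two ACTP primitives named above follows. The ActpSocket class is hypothetical; only the SendTo()/IsACKed() parameters (maximum delay, message number, priority) and the four delivery-status outcomes come from the description above.

```python
# Hypothetical ACTP-style API usage; the class is a stand-in, not a real ACTP implementation.
from enum import Enum

class Status(Enum):
    DELIVERED = 1        # ACK received
    PROBABLY_LOST = 2    # no ACK and the deadline has expired
    PENDING = 3          # no ACK yet, but the deadline has not expired
    NO_STATE = 4         # no state at the ACTP layer (message number 0, or lifetime of 4 x RTO elapsed)

class ActpSocket:
    def __init__(self):
        self.delivery_state = {}                       # message number -> Status

    def send_to(self, payload: bytes, dest, max_delay: float, message_no: int, priority: int) -> None:
        if message_no != 0:                            # non-zero number => delivery status is wanted
            self.delivery_state[message_no] = Status.PENDING
        # ... hand the packet, with its delay bound and priority, to the network layer ...

    def is_acked(self, message_no: int) -> Status:
        return self.delivery_state.get(message_no, Status.NO_STATE)

sock = ActpSocket()
sock.send_to(b"frame-1", ("10.0.0.2", 9000), max_delay=0.5, message_no=17, priority=2)
if sock.is_acked(17) is Status.PROBABLY_LOST:
    # the application, not ACTP, decides to retransmit, possibly with an updated priority
    sock.send_to(b"frame-1", ("10.0.0.2", 9000), max_delay=0.5, message_no=17, priority=3)
```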

Advantages - i) One of the most important advantages of ACTP is that it gives the application layer the freedom to choose the required reliability level. ii) Since ACTP is a light-weight transport layer protocol, it is scalable for large networks. iii) Throughput is not affected by path breaks as much as in TCP, since there is no congestion window to be manipulated as part of path break recovery.

Limitations – i) It is not compatible with TCP. ii) Use of ACTP in a very large ad hoc wireless
network can lead to heavy congestion in the network as it does not have any congestion control
mechanism.

Notes for Module 2 – Routing Protocol and TCP over Ad Hoc Wireless Networks - Over

