
UNIT 4

TRANSPORT LAYER & NETWORK LAYER

The network layer is concerned with getting packets from the source all the way to the
destination. The network layer must know about the topology of the communication
subnet (i.e., the set of all routers) and choose appropriate paths through it. The network
layer takes responsibility for routing packets from source to destination, within or
outside a subnet.

NETWORK LAYER DESIGN ISSUES / FUNCTIONS


Store-and-Forward Packet Switching
The major components of the system are the carrier's equipment (routers connected
by transmission lines), shown inside the shaded oval, and the customers' equipment,
shown outside the oval. Host H1 is directly connected to one of the carrier's routers, A, by
a leased line. In contrast, H2 is on a LAN with a router, F, owned and operated by the
customer. This router also has a leased line to the carrier's equipment.

Fig. The environment of the network layer protocols

A host with a packet to send transmits it to the nearest router, either on its own LAN or
over a point-to-point link to the carrier. The packet is stored there until it has fully arrived
so the checksum can be verified. Then it is forwarded to the next router along the path
until it reaches the destination host, where it is delivered. This mechanism is store-and-
forward packet switching.

IV BCA – Data Communication & Networks - Unit 4 | 1


Services Provided to the Transport Layer
The network layer provides services to the transport layer at the network layer /
transport layer interface. The network layer services have been designed with the
following goals:
1. The services should be independent of the router technology.
2. The transport layer should be shielded from the number, type, and topology of
the routers present.
3. The network addresses made available to the transport layer should use a
uniform numbering plan, even across LANs and WANs.

Routing is the decision taken by the network layer when there are several possible
routes available to send the data from the source to the destination. Networks are
interconnected through devices called routers (or gateways) that route each packet to
its final destination. The network layer thus concentrates on providing a route for the
packets to travel.

Addressing: The data link layer provides physical addressing, but the network layer
handles another kind of addressing, called logical addressing, which deals with the
transfer of data across networks. When data is transferred outside the network, this
layer adds a logical address to each packet before it is transmitted across the network.
These logical addresses are termed IP addresses.

PACKET SWITCHING CONCEPTS


The subnets are organized by using two different approaches:
1. Connection oriented &
2. Connectionless.
In connection-oriented service, a connection is established before any data transfer
takes place. This connection is known as a virtual circuit. A circuit, or connection, is
established along the path between the end points before the data is forwarded from
source to destination. The connection is torn down once the data transmission is
completed. This is also called dedicated path establishment.

In connectionless service, the individual packets are called datagrams. No dedicated
path is established before the data transmission. The data is forwarded on the basis of
forwarding tables. No prior handshaking is required, and acknowledgements are
optional.



Datagram
Datagram packet switching is similar to message switching, in that each packet is a self-
contained unit with complete addressing information attached. Even if a packet is a part
of multi-packet transmission the network treats it as an individual unit called datagram.
This fact allows packets to travel in all possible paths through the network to reach the
destination.
The packets, each with the same destination address, do not follow the same route, and
they may arrive out of sequence at the exit point node (or the destination). Reordering is
done at the destination based on the sequence numbers of the packets. It is possible for
packets to be destroyed or lost if one of the nodes on the way crashes momentarily; all
the packets queued at that node may then be lost.

Datagram packet switching networks are referred to as connectionless, since no prior
connection is established before the packets are transmitted. A datagram network is
implemented in the network layer.

The advantages of datagram approach are:


 Tables on routers consume less memory, as they need not include virtual circuit
information.
 Establishment and release of a network or transport layer connection do not
require any special work on the part of routers.
 If any datagram router crashes or fails, only those users whose packets were
queued up in the router at that time will suffer. Thus the crash of one router does
not affect the entire system.
 Datagrams also allow the routers to balance traffic throughout the subnet, since
routes can be changed halfway through a connection.

The disadvantages of datagram approach are:


 Every packet has to include the full addresses of the source and destination,
which leads to a significant amount of overhead; as a result, a lot of bandwidth is
wasted.
 Address parsing is difficult, as a more complicated procedure is required to
determine the path of the packet.
 With a datagram approach, congestion avoidance is more difficult.

Virtual Circuit
In the virtual circuit approach, a preplanned route is established before any data packets
are sent. A logical connection is established when
 A sender sends a "call request packet" to the receiver and
 The receiver sends back an acknowledgement, the "call accepted packet", to the
sender if it agrees to the data communication.

In the virtual circuit approach the connection is set up, data is transferred, and after the
transmission the connection is torn down. Resources are allocated either during the
setup phase, as in circuit-switched networks, or on demand, as in datagram networks.
All the packets of a message follow the same path established during the connection. A
virtual circuit network is implemented in the data link layer.

In a VC, each packet carries a short VC number, a sequence number, and a checksum in
its header. Since packets flowing over a given VC always take the same route through
the subnet, each router must remember where to forward packets for each of the
currently open VCs passing through it: each router maintains a table with one entry for
each open VC passing through it.
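As a sketch, the per-router VC table described above can be modeled as a mapping from (incoming line, incoming VC number) to (outgoing line, outgoing VC number); the line names and circuit numbers here are hypothetical.

```python
# Sketch of a per-router virtual-circuit table (hypothetical values).
# Each entry maps (incoming line, incoming VC number) to
# (outgoing line, outgoing VC number).

def make_vc_table():
    return {}

def setup_vc(table, in_line, in_vc, out_line, out_vc):
    # Filled in during the connection setup phase.
    table[(in_line, in_vc)] = (out_line, out_vc)

def forward(table, in_line, in_vc):
    # A router just indexes into its table with the circuit number.
    return table[(in_line, in_vc)]

table = make_vc_table()
setup_vc(table, "H1", 1, "C", 1)   # VC from host H1 routed toward router C
setup_vc(table, "H3", 1, "C", 2)   # a second VC gets a fresh outgoing number
print(forward(table, "H1", 1))     # ('C', 1)
```

Because lookups are a plain table index, forwarding per packet is very cheap once the circuit is set up.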



The advantages of virtual circuit approach are:
 Each packet contains a short VC number rather than the full destination address.
This reduces the overhead significantly, and a lot of bandwidth is saved.
 It provides congestion control within the subnet, since enough buffers can be
reserved in advance for each VC when the connection is established.
 Address parsing is easy and does not consume much time, as each router just
uses the circuit number to index into a table to find out where the packet should
go.

The disadvantages of virtual circuit approach are:


 The setup phase of a VC takes a lot of time and consumes resources.
 It requires a lot of table space within routers, so a large amount of router
memory is used by the VC tables.
 If any router crashes, all the VCs that passed through the failed router are
terminated.

CONNECTION ORIENTED Vs CONNECTIONLESS SERVICES


The layers can offer two different types of service to the layers above them: connection-
oriented and connectionless.

Connection-oriented Service is modeled after the telephone system. To talk to


someone, you pick up the phone, dial the number, talk, and then hang up. Similarly, to use
a connection-oriented network service, the service user first establishes a connection,
uses the connection, and then releases the connection. In this service, the order in which
the bits are received is the same as the order in which they are sent.

The service users of connection oriented service undergo 3 different phases:


1. Connection establishment phase
2. Data transfer phase
3. Connection release phase.
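The three phases correspond directly to the familiar TCP socket calls; a minimal loopback sketch (the port is chosen by the OS, and the echo-in-uppercase server is purely illustrative):

```python
# Minimal loopback demo of the three phases of a connection-oriented
# service, using TCP sockets: establish, transfer, release.
import socket
import threading

def server(sock):
    conn, _ = sock.accept()          # accept the incoming connection
    data = conn.recv(1024)           # data transfer
    conn.sendall(data.upper())
    conn.close()                     # connection release

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))           # OS picks a free port
srv.listen(1)
threading.Thread(target=server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())       # 1. connection establishment phase
cli.sendall(b"hello")                # 2. data transfer phase
reply = cli.recv(1024)
cli.close()                          # 3. connection release phase
print(reply)                         # b'HELLO'
```

A connectionless (UDP, `SOCK_DGRAM`) socket would skip the `connect`/`accept` steps entirely and just send each datagram with `sendto`.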

Connectionless Service is modeled after the postal system. Each message (letter) carries
the full destination address, and each one is routed through the system independent of
all the others.

It is a self-contained action and does not include establishment, maintenance and


releasing a connection. It does not preserve the order of delivery of messages.

Each service can be characterized by a quality of service. Some services are reliable in the
sense that they never lose data. Usually, a reliable service is implemented by having the
receiver acknowledge the receipt of each message so the sender is sure that it arrived.
The acknowledgement process introduces overhead and delays, which are often worth it
but are sometimes undesirable.
Fig. Six different types of service.

The service is said to be reliable when the receiver acknowledges the receipt of each
message, so that the sender is sure the data was received. Reliable connection-oriented
service is appropriate for file transfer. It has two minor variations: message sequences
and byte streams.



Reliable message stream service is used for transferring the sequence of pages where
it is important to preserve the message boundaries.

Reliable byte stream service is used for remote login. When a user logs into a remote
server, a byte stream from the user's computer is transferred to the remote server. The
connection is simply a stream of bytes, with no message boundaries.

The service is said to be unreliable when the receiver does not acknowledge the receipt
of the message. Unreliable datagram service is commonly used for junk e-mail. The
sender of such junk mail neither establishes nor releases a connection to send the mail,
nor requires reliable delivery. It is similar to the telegram service, which also does not
return an acknowledgement to the sender.

Reliable connectionless service is called acknowledged datagram service. Such a
service may not require a connection to be established for sending one short message,
but it essentially requires reliability. It is similar to sending a registered letter and
requiring a return receipt: when the receipt comes back, the sender is sure that the
letter was delivered to the destination.

Another service is the request-reply service; the sender transmits a single datagram
containing a request; the reply contains the answer. Request-reply is commonly used to
implement communication in the client-server model: the client issues a request and the
server responds to it.

POINT TO POINT PROTOCOL (PPP)


PPP was devised by the IETF (Internet Engineering Task Force) and is a commonly used
data link protocol. It is used to connect a home PC to the server of an ISP (Internet
Service Provider) via a modem.

PPP defines the format of the frame to be exchanged between the devices. It defines the
link control protocol (LCP) for:
1. Establishing the link between 2 devices
2. Maintaining the established link
3. Configuring the link
4. Terminating the link after the transfer



It defines how network layer data are encapsulated in the data link frame. It provides
error detection and supports multiple protocols. PPP allows an IP address to be
assigned at connection time, that is, dynamically: a temporary IP address can be
assigned to each host.

PPP provides multiple network layer services, supporting a variety of network layer
protocols. It uses a protocol called NCP (Network Control Protocol). It also defines how
two devices can authenticate each other.

PPP Frame format

Flag      Address   Control   Protocol        Data        FCS             Flag
(1 byte)  (1 byte)  (1 byte)  (1 or 2 bytes)  (Variable)  (2 or 4 bytes)  (1 byte)

Flag field (01111110) marks the beginning and end of the PPP frame.
Address field is 1 byte, always 11111111. This address is the broadcast address; all
the stations accept the frame.
Control field is 1 byte. It uses the format of the U (unnumbered) frame. The value is
always 00000011 to show that the frame does not contain any sequence numbers and
there is no flow control and error control.
Protocol field, specifies the kind of packet in the data field, that is what is being carried
in the data field.
Data field, whose length is variable; a default length of 1500 bytes is used. It carries
user data or other information.
FCS (Frame Check Sequence) field is either 2 or 4 bytes. It contains the checksum.
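A hedged sketch of assembling such a frame follows. The 16-bit sum used for the FCS here is only a placeholder for PPP's real CRC, and byte stuffing of flag bytes inside the payload is omitted; the protocol value 0x0021 identifies IP.

```python
# Illustrative assembly of a PPP frame from the fields above.
# The FCS is a simple 16-bit sum, a stand-in for PPP's real CRC,
# and byte stuffing is omitted for brevity.

FLAG, ADDRESS, CONTROL = 0x7E, 0xFF, 0x03   # 01111110, 11111111, 00000011

def ppp_frame(protocol: int, payload: bytes) -> bytes:
    body = bytes([ADDRESS, CONTROL]) + protocol.to_bytes(2, "big") + payload
    fcs = sum(body) & 0xFFFF                 # placeholder checksum
    return bytes([FLAG]) + body + fcs.to_bytes(2, "big") + bytes([FLAG])

frame = ppp_frame(0x0021, b"IP packet bytes")   # 0x0021 = IP over PPP
print(frame[0] == 0x7E and frame[-1] == 0x7E)   # True: flag-delimited
```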

Transition Phase in PPP


 Dead, the link is not used. There is no active carrier and the line is quiet.
 Establish, the connection goes into this phase when one of the nodes starts the
communication. The 2 parties negotiate the options. If negotiation is successful,
the system goes into the authentication phase or directly to the networking
phase. LCP packets are used for this purpose.
 Authenticate, this phase is optional. The 2 nodes may decide, during the
establishment phase, whether to proceed with authentication. If they do, they send
several authentication packets. If the result is successful, the connection goes to
the networking phase, otherwise to the termination phase.
 Network, in this phase negotiation for the network layer protocols takes place. PPP
specifies that 2 nodes establish a network layer agreement before data at the
network layer can be exchanged. It supports several protocols at network layer. If
a node is running multiple protocols simultaneously at the network layer, the
receiving node needs to know which protocol will receive the data.
 Open, data transfer takes place. The connection remains in this phase until one of
the endpoints wants to end the connection.
 Terminate, connection is terminated.

ROUTING
Routing Algorithm
The main function of the network layer is routing packets from the source machine to the
destination machine. The routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should be transmitted on.
Routing is the process of forwarding a packet in the network, so that it reaches its
intended destination.

The desirable properties of a routing algorithm are:


1. Correctness, routing should be done correctly
2. Simplicity, routing should be done in a simple manner (shortest path)
3. Robustness, the ability to cope with failures and topology changes while the
network runs continuously for years
4. Stability, routing algorithms should be stable under all circumstances.
5. Fairness, every node connected to the network should get a fair chance of
transmitting the packets
6. Optimality, in terms of throughput & minimizing mean packet delays.

Routing algorithms can be grouped into two major classes: nonadaptive and adaptive.
Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well.



Adaptive algorithms:
 They are dynamic: the routes are changed dynamically.
 Information is gathered at runtime: locally, from adjacent routers, or from all
routers.
 Routes change every delta-T seconds, when the load changes, or when the
topology changes.
 Examples: distance vector routing and link state routing algorithms.

Nonadaptive algorithms:
 They are static: the choice of route is computed in advance, off-line, and
downloaded to the routers when the network is booted. This procedure is
sometimes called static routing.
 Examples: shortest path routing, flooding, and flow-based routing algorithms.

Routing Algorithms are:


1. The Optimality Principle
2. Shortest Path Routing
3. Flooding
4. Distance Vector Routing
5. Link State Routing
6. Hierarchical Routing
7. Broadcast Routing
8. Multicast Routing
9. Routing for Mobile Hosts

1. The Optimality Principle



It states that if router J is on the optimal path from router I to router K, then the
optimal path from J to K also falls along the same route. Call the part of the route from
I to J r1 and the rest of the route r2. If a route better than r2 existed from J to K, it could
be concatenated with r1 to improve the route from I to K, contradicting our statement
that r1r2 is optimal. This is known as the optimality principle.

The set of optimal routes from all sources to a given destination form a tree rooted at the
destination. Such a tree is called a sink tree, where the distance metric is the number of
hops. A sink tree is not necessarily unique; other trees with the same path lengths may
exist. The goal of all routing algorithms is to discover and use the sink trees for all routers.

Since a sink tree is indeed a tree, it does not contain any loops, so each packet will be delivered
within a finite and bounded number of hops. Links and routers can go down and come
back up during operation, so different routers may have different ideas about the current
topology. The optimality principle and the sink tree provide a benchmark against which
other routing algorithms can be measured.

2. Shortest Path Routing


The idea is to build a graph of the subnet, with each node of the graph representing a
router and each arc of the graph representing a communication line (often called a link).
To choose a route between a given pair of routers, the algorithm just finds the shortest
path between them on the graph.

Example: using Dijkstra's algorithm, find the shortest path between nodes A and D. The
weight between the nodes is known as the distance. Each node is labeled (in
parentheses) with its distance from the source node along the best known path. Initially,
no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and
paths are found, the labels may change, reflecting better paths. A label may be either
tentative or permanent. Initially, all labels are tentative. When it is discovered that a label
represents the shortest possible path from the source to that node, it is made permanent
and never changed thereafter.



Step 1: Mark the distance of every node as infinity (∞)

Step 2: Calculate the shortest path from node A to B and from A to G. The distance from
node A to B is 2, and from node A to G is 6. The shortest path is the one to B, with
distance 2.

Step 3: Calculate the shortest path from B to C and from B to E. The distance from B to E
is 2 and from B to C is 7. The cumulative distance from A to C via B is 9 (2+7) and to E
via B is 4 (2+2). So the shortest path found is to E, with distance 4.



Step 4: Similarly, the distances are calculated from each newly reached node to the
nodes connected to it.

Step 5: Calculate the shortest path from E to F, C to F, F to H, G to H, C to D and H to D

Step 6: Hence the shortest path from A to D is,
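The procedure of steps 1-6 can be sketched with Dijkstra's algorithm. Only the weights A-B=2, A-G=6, B-C=7 and B-E=2 come from the steps above; since the figure is not reproduced here, the remaining edge weights below are assumed for illustration.

```python
import heapq

# Dijkstra's algorithm on a graph like the example's. Weights A-B=2,
# A-G=6, B-C=7, B-E=2 come from the text; the rest are assumed.
graph = {
    "A": {"B": 2, "G": 6},
    "B": {"A": 2, "C": 7, "E": 2},
    "C": {"B": 7, "D": 3, "F": 3},
    "D": {"C": 3, "H": 2},
    "E": {"B": 2, "F": 2},
    "F": {"C": 3, "E": 2, "H": 2},
    "G": {"A": 6, "H": 4},
    "H": {"D": 2, "F": 2, "G": 4},
}

def dijkstra(graph, source):
    # Tentative labels start at infinity; a label becomes permanent
    # when its node is popped from the priority queue.
    dist = {node: float("inf") for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale entry: already permanent
        for v, w in graph[u].items():
            if d + w < dist[v]:
                dist[v] = d + w           # better tentative label found
                heapq.heappush(heap, (d + w, v))
    return dist

print(dijkstra(graph, "A")["D"])          # shortest A-to-D distance
```

With the assumed weights, the best route runs A-B-E-F-H-D with total cost 10.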

3. Flooding
Another static algorithm is flooding, in which every incoming packet is sent out on
every outgoing line except the one it arrived on. Flooding generates vast numbers of
duplicate packets, so some measure is needed to damp the process. One such measure
is to have a hop counter contained in the header of each packet, which is decremented
at each hop, with the packet being discarded when the counter reaches zero. Ideally,
the hop counter should be initialized to the length of the path from source to
destination.

An alternative technique for damming the flood is to keep track of which packets have
been flooded, to avoid sending them out a second time. One way to achieve this goal is
to have the source router put a sequence number in each packet it receives from its
hosts. Each router then needs a list per source router telling which sequence numbers
originating at that source have already been seen. If an incoming packet is on the list, it
is not flooded.
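A minimal sketch of this sequence-number technique, using a toy three-router topology (names and the single injected packet are hypothetical):

```python
# Sketch of damming a flood with per-source sequence numbers: each
# router keeps the set of (source, seq) pairs it has already flooded.

class Router:
    def __init__(self, name):
        self.name = name
        self.seen = set()            # (source, sequence number) pairs
        self.neighbors = []
        self.delivered = 0

    def receive(self, source, seq, from_line):
        if (source, seq) in self.seen:
            return                   # already flooded: drop the duplicate
        self.seen.add((source, seq))
        self.delivered += 1
        for n in self.neighbors:
            if n is not from_line:   # every line except the arrival line
                n.receive(source, seq, self)

a, b, c = Router("A"), Router("B"), Router("C")
a.neighbors = [b, c]
b.neighbors = [a, c]
c.neighbors = [a, b]
a.receive("H1", 1, None)             # host H1 injects packet number 1
print(a.delivered, b.delivered, c.delivered)   # 1 1 1: each sees it once
```

Without the `seen` set, the packet would circulate around the A-B-C triangle forever.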



A variation of flooding that is slightly more practical is selective flooding. In this
algorithm the routers do not send every incoming packet out on every line, only on those
lines that are going approximately in the right direction.

4. Distance Vector Routing


Distance vector routing algorithms operate by having each router maintain a table (i.e.,
a vector) giving the best known distance to each destination and which line to use to get
there. These tables are updated by exchanging information with the neighbors.

The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson
algorithm.

The Bellman-Ford routing algorithm defines the distance at each node:

dx(y) = cost of the least-cost path from x to y

where x is the source and y is the destination. The distance is updated based on the
neighbors:

dx(y) = min over all neighbors v { cost(x, v) + dv(y) }

where x is the source, y is the destination, and v is an intermediate router.
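One round of this update rule can be sketched as follows; the router names and cost values are hypothetical.

```python
# One round of the distance-vector update
#   d_x(y) = min over v { cost(x, v) + d_v(y) }
# for router x, given its link costs and the vectors its neighbors sent.

def dv_update(cost_to_neighbor, neighbor_vectors, destinations):
    d = {}
    for y in destinations:
        d[y] = min(cost_to_neighbor[v] + neighbor_vectors[v].get(y, float("inf"))
                   for v in cost_to_neighbor)
    return d

# Hypothetical router x with neighbors v1 (link cost 1) and v2 (cost 4).
cost = {"v1": 1, "v2": 4}
vectors = {"v1": {"y": 5, "z": 2}, "v2": {"y": 1, "z": 9}}
print(dv_update(cost, vectors, ["y", "z"]))   # {'y': 5, 'z': 3}
```

Note that the best route to y goes via v2 (4+1=5) even though v1 is the cheaper neighbor, exactly as the min over all neighbors dictates.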

In distance vector routing, each router maintains a routing table indexed by, and
containing one entry for, each router in the subnet. This entry contains two parts: the
preferred outgoing line to use for that destination and an estimate of the time or distance
to that destination. The metric used might be number of hops, time delay in milliseconds
and total number of packets queued along the path.
Example:



To compute the least cost route from B to A:
(i) B to A directly is 1 (least cost)
(ii) From B through C (intermediate node) to A, i.e., B - C - A, the cost is 8 (3+5)
Comparing the distances, B to A directly, with cost 1, is the least-cost route. The routing
table entry at B for destination A is,

Destination Cost Next Hop

A 1 A

To compute the least cost route from B to D:

(i) B - C - D is 7 (3+4)
(ii) B - E - D is 11 (9+2)
(iii) B - A - C - D is 10 (1+5+4)
The least cost distance from B to D is 7. The routing table entry at B for destination D is,

Next Hop
Destination Cost
(intermediate node)

D 7 C

The Count-to-Infinity Problem


1. One of the important issues in distance vector routing is the count-to-infinity
problem.
2. Counting to infinity is just another name for a routing loop.
3. In distance vector routing, routing loops usually occur when an interface goes
down.
4. They can also occur when two routers send updates to each other at the same time.



Example:

Now imagine that the link between A and B is cut.
 At this time, B corrects its table.
 After a specific amount of time, routers exchange their tables, and so B receives
C's routing table.
 Since C doesn't know what has happened to the link between A and B, it says that
it has a link to A with the weight of 2 (1 for C to B, and 1 for B to A -- it doesn't
know B has no link to A).
 B receives this table and thinks there is a separate link between C and A, so it
corrects its table and changes infinity to 3 (1 for B to C, and 2 for C to A, as C said).
 Once again, routers exchange their tables.
 When C receives B's routing table, it sees that B has changed the weight of its link
to A from 1 to 3, so C updates its table and changes the weight of the link to A to 4
(1 for C to B, and 3 for B to A, as B said).
 This process loops until all nodes find out that the weight of link to A is infinity.
In this way, Distance Vector Algorithms have a slow convergence rate.
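The exchange above can be simulated directly. With unit link weights on the chain A-B-C, B and C push each other's estimated cost to A up by one on every exchange:

```python
# Simulation of the count-to-infinity exchange described above:
# chain A - B - C with unit link weights, then the A-B link is cut.
INF = float("inf")

b_to_a, c_to_a = INF, 2   # B corrects its table; C still advertises cost 2

history = []
for _ in range(4):        # routers exchange their tables alternately
    b_to_a = 1 + c_to_a   # B believes C's route to A (cost 1 to reach C)
    history.append(b_to_a)
    c_to_a = 1 + b_to_a   # C in turn believes B's new estimate
    history.append(c_to_a)

print(history)            # [3, 4, 5, 6, 7, 8, 9, 10] -- counting up slowly
```

The estimates climb by one per exchange and would continue until some value standing for "infinity" is reached, which is why convergence is slow.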



One way to mitigate this problem is the split horizon technique: a router does not
advertise a route to a destination back over the line from which it learned that route.

5. Link State Routing


Distance vector routing was used in the ARPANET until it was replaced by link state
routing. It had two primary problems:
1. Since the delay metric was queue length, it did not take line bandwidth into
account when choosing routes.
2. The algorithm often took too long to converge (the count-to-infinity problem).

The idea behind link state routing is simple and can be stated as five parts. Each router
must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.

The complete topology and all delays are experimentally measured and distributed to
every router. Then Dijkstra's algorithm can be run to find the shortest path to every other
router.
 A routing algorithm for creating the least cost tree and forwarding table is Link
State Routing (LSR).
 The collection of states for all links is called the link state database (LSDB)

Example:

Routing table A   Routing table B   Routing table C   Routing table D
D 3               A 2               B 5               A 3
B 2               E 4               F 4               E 5
                  C 5               G 3

Routing table E   Routing table F   Routing table G
D 5               C 4               C 3
B 4               E 2               F 1
F 2               G 1

Steps:
1. A link state database is created for the network.
2. Using the database, each node computes the least-cost path to every other node.
3. The next hop is chosen along that least-cost path.
Link State Database (LSDB)

A B C D E F G

A 0 2 ∞ 3 ∞ ∞ ∞

B 2 0 5 ∞ 4 ∞ ∞

C ∞ 5 0 ∞ ∞ 4 3

D 3 ∞ ∞ 0 5 ∞ ∞

E ∞ 4 ∞ 5 0 2 ∞

F ∞ ∞ 4 ∞ 2 0 1

G ∞ ∞ 3 ∞ ∞ 1 0

Fig. The least cost distances computed using LSR.
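Running Dijkstra's algorithm over the LSDB above yields the least-cost distances from node A; a sketch:

```python
import heapq

# Least-cost distances from node A, computed directly from the LSDB
# table above (symmetric links, so each edge is listed twice).
INF = float("inf")
lsdb = {
    "A": {"B": 2, "D": 3},
    "B": {"A": 2, "C": 5, "E": 4},
    "C": {"B": 5, "F": 4, "G": 3},
    "D": {"A": 3, "E": 5},
    "E": {"B": 4, "D": 5, "F": 2},
    "F": {"C": 4, "E": 2, "G": 1},
    "G": {"C": 3, "F": 1},
}

def least_cost_from(lsdb, source):
    dist = {n: INF for n in lsdb}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                  # stale entry
        for v, w in lsdb[u].items():
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(least_cost_from(lsdb, "A"))
# {'A': 0, 'B': 2, 'C': 7, 'D': 3, 'E': 6, 'F': 8, 'G': 9}
```

For example, F is reached at cost 8 via A-B-E-F (2+4+2), cheaper than going through C.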

6. Hierarchical Routing
When hierarchical routing is used, the routers are divided into regions. Each router
knows all the details about how to route packets to destinations within its own region
but knows nothing about the internal structure of other regions. When different
networks are interconnected, it is natural to regard each one as a separate region in
order to free the routers in one network from having to know the topological structure
of the other ones.



Example:

A two-level hierarchy with five regions. The full routing table for router 1A has 17
entries. When routing is done hierarchically, there are entries for all the local routers
as before, but all other regions have been condensed into a single router, so all traffic
for region 2 goes via the 1B-2A line, and the rest of the remote traffic goes via the
1C-3B line. Hierarchical routing has reduced the table from 17 to 7 entries. As the ratio
of the number of regions to the number of routers per region grows, the savings in
table space increase.
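The table-size arithmetic can be checked directly, assuming region 1 contains three routers (1A, 1B, 1C), as the 17-to-7 reduction implies:

```python
# Table sizes from the example: full routing needs one entry per router
# in the whole subnet; hierarchical routing needs one entry per local
# router (including the router itself) plus one per remote region.

def full_entries(routers_total):
    return routers_total

def hierarchical_entries(local_routers, regions):
    return local_routers + (regions - 1)

print(full_entries(17))                 # 17: one entry per router
print(hierarchical_entries(3, 5))       # 7: router 1A's condensed table
```

The formula makes the trade-off visible: table size grows with local routers plus regions, not with the total number of routers.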



7. Broadcast Routing
In some applications, hosts need to send messages to many or all other hosts. Sending a
packet to all destinations simultaneously is called broadcasting. One broadcasting
method that requires no special features from the subnet is for the source to simply send
a distinct packet to each destination. Not only is the method wasteful of bandwidth, but
it also requires the source to have a complete list of all destinations.

Flooding is another candidate. The problem with flooding as a broadcast technique is
that it generates too many packets and consumes too much bandwidth.

A third algorithm is multidestination routing. When this method is used, each packet contains
either a list of destinations or a bit map indicating the desired destinations. When a packet
arrives at a router, the router checks all the destinations to determine the set of output
lines that will be needed. The router generates a new copy of the packet for each output
line to be used and includes in each packet only those destinations that are to use the line.
In effect, the destination set is partitioned among the output lines. After a sufficient
number of hops, each packet will carry only one destination and can be treated as a
normal packet. Multidestination routing is like separately addressed packets, except that
when several packets must follow the same route, one of them pays full fare and the rest
ride free.

A spanning tree is a subset of the subnet that includes all the routers but contains no
loops. If each router knows which of its lines belong to the spanning tree, it can copy an
incoming broadcast packet onto all the spanning tree lines except the one it arrived on.
This method makes excellent use of bandwidth, generating the absolute minimum
number of packets necessary to do the job. The only problem is that each router must
have knowledge of some spanning tree for the method to be applicable.

A fourth algorithm is reverse path forwarding, which is remarkably simple once it has
been pointed out. When a broadcast packet arrives at a router, the router checks to see
if the packet arrived on the line that is normally used for sending packets to the source
of the broadcast. If so, there is an excellent chance that the broadcast packet itself
followed the best route from the router and is therefore the first copy to arrive at the
router. In this case, the router forwards copies of it onto all lines except the one it
arrived on. If, however, the packet arrived on a line other than the preferred one, it is
discarded as a likely duplicate.

Reverse path forwarding.


(a) A subnet. (b) A sink tree. (c) The tree built by reverse path forwarding.

The principal advantages of reverse path forwarding are:


 It is both reasonably efficient and easy to implement.
 It does not require routers to know about spanning trees, nor does it have the
overhead of a destination list or bit map in each broadcast packet as does
multidestination addressing. Nor does it require any special mechanism to stop
the process, as flooding does.



8. Multicast Routing
Sending a message to such a group is called multicasting, and its routing algorithm is
called multicast routing. Multicasting requires group management. It is needed to create
and destroy groups, and to allow processes to join and leave groups. To do multicast
routing, each router computes a spanning tree covering all other routers.

A network

Some routers are attached to hosts that belong to one or both of these groups, as
indicated in the figure.

A spanning tree for the leftmost router.

When a process sends a multicast packet to a group, the first router examines its spanning
tree and prunes it, removing all lines that do not lead to hosts that are members of the
group.



A multicast tree for group 1

Similarly, the spanning tree is pruned for group 2. Multicast packets are forwarded only
along the appropriate spanning tree.

A multicast tree for group 2.

The simplest way of pruning the spanning tree can be used if link state routing is used
and each router is aware of the
complete topology, including which hosts belong to which groups. Then the spanning tree
can be pruned, starting at the end of each path, working toward the root, and removing
all routers that do not belong to the group.
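Pruning from the leaves toward the root can be sketched recursively; the tree shape and the group membership below are hypothetical.

```python
# Sketch of pruning a spanning tree for multicast: a subtree is kept
# only if it contains at least one member of the group.

def prune(tree, node, members):
    # Returns the pruned subtree rooted at node as (name, children),
    # or None if no member of the group lies in that subtree.
    kept = [p for c in tree.get(node, [])
            if (p := prune(tree, c, members)) is not None]
    if kept or node in members:
        return (node, kept)
    return None

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
members = {"D", "F"}                 # hosts on D and F joined the group
print(prune(tree, "A", members))
# ('A', [('B', [('D', [])]), ('C', [('F', [])])])
```

Node E carries no members below it, so its whole branch is dropped, while the paths to D and F survive.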

The basic algorithm is reverse path forwarding. However, whenever a router with no
hosts interested in a particular group and no connections to other routers receives a
multicast message for that group, it responds with a PRUNE message, telling the sender
not to send it any more multicasts for that group. When a router with no group members
among its own hosts has received such messages on all its lines, it, too, can respond with
a PRUNE message.

9. Routing for Mobile Hosts


Hosts that never move are said to be stationary. They are connected to the network by
copper wires or fiber optics. Migratory hosts are basically stationary hosts who move
from one fixed site to another from time to time but use the network only when they are
physically connected to it. Roaming hosts actually compute on the run and want to
maintain their connections as they move around.

All hosts are assumed to have a permanent home location that never changes. Hosts also
have a permanent home address that can be used to determine their home locations. The
routing goal in systems with mobile hosts is to make it possible to send packets to mobile
hosts using their home addresses and have the packets efficiently reach them.

A WAN to which LANs, MANs, and wireless cells are attached

The world is divided (geographically) into small units, typically a LAN or a wireless
cell. Each area has one or more foreign agents, which are processes that keep
track of all mobile hosts visiting the area. In addition, each area has a home agent, which
keeps track of hosts whose home is in the area, but who are currently visiting another
area.

When a new host enters an area, either by connecting to it or by wandering into the cell,
it must register itself with the foreign agent there. The registration procedure works as
follows:
1. Each foreign agent broadcasts a packet announcing its existence and address.
2. The mobile host registers with the foreign agent, giving its home address, current
data link layer address, and some security information.

3. The foreign agent contacts the mobile host's home agent. The message from the
foreign agent to the home agent contains the foreign agent's network address. It
also includes the security information to convince the home agent that the mobile
host is really there.
4. The home agent examines the security information, which contains a timestamp to
prove that it was generated recently. If it is satisfied, it tells the foreign agent to
proceed.
5. When the foreign agent gets the acknowledgement from the home agent, it makes
an entry in its tables and informs the mobile host that it is now registered.

Packet routing for mobile hosts.


When a packet is sent to the mobile host's home address, it is routed to the host's home
LAN, where the home agent intercepts it. The home agent then does two things:
1. It encapsulates the packet in the payload field of an outer packet and sends the
latter to the foreign agent. This mechanism is called tunneling. After getting the
encapsulated packet, the foreign agent removes the original packet from the
payload field and sends it to the mobile host as a data link frame.

2. Home agent tells the sender to henceforth send packets to the mobile host by
encapsulating them in the payload of packets explicitly addressed to the foreign
agent instead of just sending them to the mobile host's home address (step 3).
Subsequent packets can now be routed directly to the host via the foreign agent
(step 4), bypassing the home location entirely.
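Tunneling itself is simple: the original packet rides as opaque payload inside an outer packet addressed to the foreign agent. A minimal sketch, with packets modelled as dictionaries (an assumption of this sketch):

```python
def tunnel(packet, foreign_agent):
    """Home agent: encapsulate the original packet in the payload
    field of an outer packet addressed to the foreign agent."""
    return {"dst": foreign_agent, "payload": packet}

def detunnel(outer):
    """Foreign agent: extract the original packet for local delivery."""
    return outer["payload"]

original = {"dst": "mobile-home-addr", "data": "hello"}
outer = tunnel(original, "foreign-agent-addr")
delivered = detunnel(outer)   # identical to the original packet
```

The original packet is never modified; only the outer wrapper steers it to the foreign agent, which unwraps it and hands it to the mobile host.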

Many variations on this basic scheme have been proposed. They differ in several respects.
In some schemes, routers along the way record mapped addresses so they can intercept and
redirect traffic even before it gets to the home location. In some schemes, each visitor
is given a unique temporary address; in others, the temporary address refers to an agent
that handles traffic for all visitors.

The schemes also differ in how they actually manage to arrange for packets that are
addressed to one destination to be delivered to a different one. One choice is changing
the destination address and just retransmitting the modified packet. Alternatively, the
whole packet, home address and all, can be encapsulated inside the payload of another
packet sent to the temporary address. Finally, the schemes differ in their security
aspects.

THE INTERNET TRANSPORT PROTOCOLS: UDP


The Internet protocol suite supports a connectionless transport protocol, UDP (User
Datagram Protocol). UDP provides a way for applications to send encapsulated IP
datagrams without having to establish a connection.

UDP transmits segments consisting of an 8-byte header followed by the payload. The two
ports serve to identify the end points within the source and destination machines. When
a UDP packet arrives, its payload is handed to the process attached to the destination
port.

The UDP header

The source port is primarily needed when a reply must be sent back to the source. By
copying the source port field from the incoming segment into the destination port field
of the outgoing segment, the process sending the reply can specify which process on the
sending machine is to get it.

The UDP length field includes the 8-byte header and the data. The UDP checksum is
optional and stored as 0 if not computed (a true computed 0 is stored as all 1s).
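The 8-byte header described above can be packed and unpacked as four 16-bit big-endian fields. This is an illustrative sketch; the port numbers and payload are made up:

```python
import struct

def build_udp_segment(src_port, dst_port, payload, checksum=0):
    """Pack the 8-byte UDP header: source port, destination port,
    length (header + data), checksum -- four 16-bit big-endian fields."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

segment = build_udp_segment(5000, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", segment[:8])
# length is 13: the 8-byte header plus the 5 data bytes.
```

A checksum of 0 here means "not computed", matching the convention above.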

Remote Procedure Call
When a process on machine 1 calls a procedure on machine 2, the calling process on 1 is
suspended and execution of the called procedure takes place on 2. Information can be
transported from the caller to the callee in the parameters and can come back in the
procedure result. No message passing is visible to the programmer. This technique is
known as RPC (Remote Procedure Call) and has become the basis for many networking
applications. The calling procedure is known as the client and the called procedure is
known as the server.

In the simplest form, to call a remote procedure, the client program must be bound with
a small library procedure, called the client stub, that represents the server procedure in
the client's address space. Similarly, the server is bound with a procedure called the
server stub. These procedures hide the fact that the procedure call from the client to the
server is not local.

The actual steps in making an RPC are:


Step 1 is the client calling the client stub. This call is a local procedure call, with the
parameters pushed onto the stack in the normal way.

Step 2 is the client stub packing the parameters into a message and making a system call
to send the message. Packing the parameters is called marshaling.
Step 3 is the kernel sending the message from the client machine to the server machine.
Step 4 is the kernel passing the incoming packet to the server stub.
Step 5 is the server stub calling the server procedure with the unmarshaled parameters.
The reply traces the same path in the other direction.
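The steps above can be demonstrated with Python's standard-library XML-RPC support, where the proxy object plays the role of the client stub (marshaling the parameters into a message) and the server dispatcher plays the role of the server stub. The `add` procedure and the ephemeral port choice are illustrative assumptions:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Server side: register a procedure and serve requests in the background.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the proxy acts as the client stub -- calling proxy.add()
# marshals the parameters, sends them, and unmarshals the result.
proxy = ServerProxy("http://localhost:%d" % port)
result = proxy.add(2, 3)   # looks like a local call, executes remotely
```

To the caller, `proxy.add(2, 3)` is indistinguishable from a local procedure call; all message passing is hidden in the stubs, exactly as described above.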

The Real-Time Transport Protocol
As Internet radio, Internet telephony, music-on-demand, videoconferencing, video-on-
demand, and other multimedia applications became more commonplace, people
discovered that each application was reinventing more or less the same real-time
transport protocol. A generic real-time transport protocol for multiple applications is
RTP (Real-time Transport Protocol).

The position of RTP in the protocol stack.


The multimedia application consists of multiple audio, video, text, and possibly other
streams. These are fed into the RTP library, which is in user space along with the
application. This library then multiplexes the streams and encodes them in RTP packets,
which it then stuffs into a socket. At the other end of the socket (in the operating system
kernel), UDP packets are generated and embedded in IP packets. If the computer is on an
Ethernet, the IP packets are then put in Ethernet frames for transmission.

The basic function of RTP is to multiplex several real-time data streams onto a single
stream of UDP packets. The UDP stream can be sent to a single destination (unicasting)
or to multiple destinations (multicasting). Because RTP just uses normal UDP, its packets
are not treated specially by the routers unless some normal IP quality-of-service features
are enabled.

Packet nesting

Each packet sent in an RTP stream is given a number one higher than its predecessor.
This numbering allows the destination to determine if any packets are missing. If a packet
is missing, the best action for the destination to take is to approximate the missing value
by interpolation. Retransmission is not a practical option since the retransmitted packet
would probably arrive too late to be useful. As a consequence, RTP has no flow control,
no error control, no acknowledgements, and no mechanism to request retransmissions.

RTP Header

It consists of three 32-bit words and potentially some extensions. The first word contains
the Version field, which is already at 2.
The P bit indicates that the packet has been padded to a multiple of 4 bytes. The last
padding byte tells how many bytes were added. The X bit indicates that an extension
header is present.
The CC field tells how many contributing sources are present, from 0 to 15 (see below).
The M bit is an application-specific marker bit. It can be used to mark the start of a video
frame or the start of a word in an audio channel.
Payload type field tells which encoding algorithm has been used (e.g., uncompressed 8-
bit audio, MP3, etc.).
The Sequence number is just a counter that is incremented on each RTP packet
sent. It is used to detect lost packets.
The timestamp is produced by the stream's source to note when the first sample in the
packet was made.
The Synchronization source identifier tells which stream the packet belongs to. It is
the method used to multiplex and demultiplex multiple data streams onto a single stream
of UDP packets.

Finally, the Contributing source identifiers, if any, are used when mixers are present in
the studio.
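The fixed 12-byte header described above can be decoded from its three 32-bit words with bit masks. The sample values below (payload type 14, SSRC 0x1234) are purely illustrative:

```python
import struct

def parse_rtp_header(data):
    """Decode the fixed 12-byte RTP header (three 32-bit words)."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "cc": b0 & 0x0F,             # number of contributing sources
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }

# Version 2, no padding/extension/CSRCs, illustrative payload type 14,
# sequence 1, timestamp 160, SSRC 0x1234.
hdr = struct.pack("!BBHII", 0x80, 14, 1, 160, 0x1234)
fields = parse_rtp_header(hdr)
```

Note how the first byte (0x80) encodes Version = 2 in its top two bits, with P, X, and CC all zero.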

THE INTERNET TRANSPORT PROTOCOLS: TCP


TCP (Transmission Control Protocol) was specifically designed to provide a reliable
end-to end byte stream. An internetwork differs from a single network because different
parts may have wildly different topologies, bandwidths, delays, packet sizes, and other
parameters. TCP was designed to dynamically adapt to properties of the internetwork
and to be robust in the face of many kinds of failures.

Each machine supporting TCP has a TCP transport entity, either a library procedure, a
user process, or part of the kernel. A TCP entity accepts user data streams from local
processes, breaks them up into pieces not exceeding 64 KB (in practice, often 1460 data
bytes in order to fit in a single Ethernet frame with the IP and TCP headers), and sends
each piece as a separate IP datagram. When datagrams containing TCP data arrive at a
machine, they are given to the TCP entity, which reconstructs the original byte streams.

The TCP Protocol


A key feature of TCP, and one which dominates the protocol design, is that every byte on
a TCP connection has its own 32-bit sequence number.
The sending and receiving TCP entities exchange data in the form of segments. A TCP
segment consists of a fixed 20-byte header (plus an optional part) followed by zero or
more data bytes. It can accumulate data from several writes into one segment or can split
data from one write over multiple segments. Two limits restrict the segment size. First,
each segment, including the TCP header, must fit in the 65,515-byte IP payload. Second,
each network has a maximum transfer unit, or MTU, and each segment must fit in the
MTU. In practice, the MTU is generally 1500 bytes (the Ethernet payload size) and thus
defines the upper bound on segment size.
The basic protocol used by TCP entities is the sliding window protocol. When a sender
transmits a segment, it also starts a timer. When the segment arrives at the destination,
the receiving TCP entity sends back a segment (with data if any exist, otherwise without
data) bearing an acknowledgement number equal to the next sequence number it expects
to receive. If the sender's timer goes off before the acknowledgement is received, the
sender transmits the segment again.
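The timer-and-retransmit loop can be sketched as follows. The `channel` callback, which stands in for both transmission and timeout detection (returning the acknowledgement number, or None if the segment or its ACK was lost), is an assumption of this sketch:

```python
def send_with_retransmit(segment, channel, max_tries=5):
    """Each try models one timer interval: send the segment, and if the
    expected acknowledgement does not come back, send it again."""
    expected_ack = segment["seq"] + len(segment["data"])
    for attempt in range(1, max_tries + 1):
        ack = channel(segment)
        if ack == expected_ack:       # receiver expects the next byte
            return attempt
    raise TimeoutError("segment never acknowledged")

# A channel that loses the first two transmissions, then succeeds.
losses = iter([None, None])
def lossy(seg):
    return next(losses, seg["seq"] + len(seg["data"]))

tries = send_with_retransmit({"seq": 100, "data": b"abcde"}, lossy)
```

Because every byte is numbered, the acknowledgement for sequence number 100 with 5 data bytes is 105, the next byte the receiver expects.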

The TCP Segment Header
Every segment begins with a fixed-format, 20-byte header. The fixed header may be
followed by header options. Segments without any data are legal and are commonly used
for acknowledgements and control messages.

The Source port and Destination port fields identify the local end points of the
connection. The source and destination end points together identify the connection. The
Sequence number and Acknowledgement number fields perform their usual
functions. The TCP header length tells how many 32-bit words are contained in the TCP
header. This field really indicates the start of the data within the segment, measured in
32- bit words.
Next come six 1-bit flags. URG is set to 1 if the Urgent pointer is in use. The Urgent pointer
is used to indicate a byte offset from the current sequence number at which urgent data
are to be found.
The ACK bit is set to 1 to indicate that the Acknowledgement number is valid. If ACK is 0,
the segment does not contain an acknowledgement so the Acknowledgement number
field is ignored.
The PSH bit indicates PUSHed data. The receiver is hereby kindly requested to deliver
the data to the application upon arrival and not buffer it until a full buffer has been
received.

The RST bit is used to reset a connection that has become confused due to a host crash
or some other reason. It is also used to reject an invalid segment or refuse an attempt to
open a connection.
The SYN bit is used to establish connections. The connection request has SYN = 1 and
ACK = 0 to indicate that the piggyback acknowledgement field is not in use. The
connection reply does bear an acknowledgement, so it has SYN = 1 and ACK = 1. In
essence the SYN bit is used to denote CONNECTION REQUEST and CONNECTION
ACCEPTED, with the ACK bit used to distinguish between those two possibilities.
The FIN bit is used to release a connection. It specifies that the sender has no more data
to transmit. However, after closing a connection, the closing process may continue to
receive data indefinitely. Both SYN and FIN segments have sequence numbers and are
thus guaranteed to be processed in the correct order.
The Window size field tells how many bytes may be sent starting at the byte
acknowledged. A Window size field of 0 is legal and says that the bytes up to and including
Acknowledgement number - 1 have been received. The receiver can later grant
permission to send by transmitting a segment with the same Acknowledgement number
and a nonzero Window size field.
A Checksum is also provided for extra reliability. It checksums the header, the data, and
the conceptual pseudoheader. When performing this computation, the TCP Checksum
field is set to zero and the data field is padded out with an additional zero byte if its
length is an odd number.

The pseudoheader included in the TCP checksum


The pseudoheader contains the 32-bit IP addresses of the source and destination
machines, the protocol number for TCP (6), and the byte count for the TCP segment
(including the header). Including the pseudoheader in the TCP checksum computation
helps detect misdelivered packets, but including it also violates the protocol hierarchy
since the IP addresses in it belong to the IP layer, not to the TCP layer. UDP uses the same
pseudoheader for its checksum.
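The checksum computation over the pseudoheader plus segment can be sketched in Python. The one's-complement arithmetic and zero-byte padding match the description above; the IP addresses are illustrative:

```python
import socket
import struct

def internet_checksum(data):
    """One's-complement sum of 16-bit words, padding with a zero byte
    if the length is odd, as used by TCP and UDP."""
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

def tcp_checksum(src_ip, dst_ip, segment):
    """Checksum over the pseudoheader (source and destination addresses,
    protocol number 6, segment length) followed by the segment itself."""
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, 6, len(segment))
    return internet_checksum(pseudo + segment)

csum = tcp_checksum("10.0.0.1", "10.0.0.2", b"\x00" * 20)
```

A useful property of this checksum: if the computed value is stored in the segment's Checksum field and the computation is repeated, the result is 0, which is how receivers verify incoming segments.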

The Options field provides a way to add extra facilities not covered by the regular header.
The most important option is the one that allows each host to specify the maximum TCP
payload it is willing to accept.

CONGESTION CONTROL
Congestion Control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion, after it has happened.
Congestion control mechanisms fall into two categories: open loop and closed loop.
In open-loop congestion control, policies are used to prevent congestion before it
happens; congestion control is handled either by the source or by the destination.
In closed-loop congestion control, mechanisms remove the congestion after it happens.
The various methods of open loop congestion control are:
1. Retransmission policy, the sender retransmits a packet that may have been lost or
corrupted. The retransmission policy and retransmission timers need to be designed
to optimize efficiency and at the same time prevent congestion.
2. Window policy, the type of window used at the sender may also affect congestion.
The selective reject window is preferred over the Go-Back-N window: in Go-Back-N,
when the timer for a packet times out, several packets are resent, although some
may have arrived safely at the receiver, and this duplication may make congestion
worse. The selective reject method resends only the specific lost or damaged
packets.
3. Acknowledgement policy, imposed by the receiver, may also affect congestion. If
the receiver does not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion. To implement it,
 A receiver may send an acknowledgement only if it has a packet to be sent.
 A receiver may send an acknowledgement when a timer expires.
 A receiver may decide to acknowledge only N packets at a time.
4. Discarding policy, a router may discard less sensitive packets when congestion
is likely to happen. Such a discarding policy may prevent congestion and at the
same time not harm the integrity of the transmission.
5. Admission policy, a quality-of-service (QoS) mechanism, can also prevent
congestion in virtual-circuit networks. A router can deny establishing a virtual
circuit connection if there is congestion in the network or if there is a possibility
of future congestion.
The various methods of closed loop congestion control are:
1. Backpressure is a node-to-node congestion control technique that starts at a
congested node and propagates in the direction opposite to the data flow, toward
the source. It can be applied only to virtual-circuit networks, where each node
knows its upstream node (from which the data flow is coming); the receiver of the
data is downstream.
In this method, the congested node stops receiving data from the immediate
upstream node. This may cause the upstream node or nodes to become congested,
and they in turn reject data from their upstream node or nodes.

Example: Node 3 is congested; it stops receiving packets and informs its
upstream node 2 to slow down. Node 2 in turn may become congested and inform node
1 to slow down. Node 1 may then become congested and inform the source node
to slow down. Thus the pressure is moved backward from node 3 to the source to
remove the congestion.
2. Choke packet, in this method a congested router or node sends a special type of
packet, called a choke packet, to the source to inform it about the congestion.
Unlike in the backpressure method, the congested node does not inform its upstream
node about the congestion; it sends a warning directly to the source station.

3. Implicit signaling, there is no communication between the congested node or
nodes and the source. The source infers that there is congestion in the network
when it does not receive any acknowledgement; the delay in receiving an
acknowledgement is interpreted as congestion. On sensing the congestion, the
source slows down.
4. Explicit Signaling, in this method the congested node explicitly sends a signal to
the source or destination to inform about the congestion. Explicit signaling can
occur either in the forward or the backward direction.
Backward signaling, a bit is set in a packet moving in the direction opposite to
the congestion. This bit warns the source about the congestion and informs the
source to slow down.
Forward signaling, a bit is set in a packet moving in the direction of the
congestion. This bit warns the destination about the congestion. The receiver then
slows down the acknowledgements to relieve the congestion.

Congestion control policies at the OSI layers:

Data link layer: retransmission policy; out-of-order caching policy;
acknowledgement policy; flow control policy.

Network layer: virtual circuits versus datagrams inside the subnet; packet
queueing and service policy; packet discard policy.

Transport layer: retransmission policy; out-of-order caching policy;
acknowledgement policy; flow control policy; timeout determination.

Congestion Control Algorithms
1. Leaky Bucket Algorithm
It is a traffic-shaping mechanism that controls the amount and the rate of the traffic sent
to the network. A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by
averaging the data rate.

Imagine a bucket with a small hole in the bottom. No matter the rate at which water
enters the bucket, the outflow is at a constant rate, ρ, when there is any water in the
bucket and zero when the bucket is empty. Also, once the bucket is full, any additional
water entering it spills over the sides and is lost (i.e., does not appear in the output stream
under the hole).
Implementation of leaky bucket algorithm
Each host is connected to the network by an interface containing a leaky bucket, that is, a
finite internal queue. If a packet arrives at the queue when it is full, the packet is
discarded.
At every clock tick, one packet is removed from the FIFO queue and transmitted.
In other words, if one or more processes within the host try to send a packet when the
maximum number is already queued, the new packet is unceremoniously discarded. This
arrangement can be built into the hardware interface or simulated by the host operating
system.

A leaky bucket algorithm turns an uneven flow of packets from the user processes inside
the host into an even flow of packets onto the network.
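The smoothing behaviour can be illustrated with a small tick-based simulation. The arrival pattern, queue capacity, and drain rate below are illustrative assumptions:

```python
from collections import deque

def leaky_bucket(arrivals, capacity, rate):
    """Simulate a leaky bucket queue.
    arrivals[t] = packets arriving at tick t; capacity = queue size;
    rate = packets drained per tick. Returns (output per tick, drops)."""
    queue, out, dropped = deque(), [], 0
    for n in arrivals:
        for _ in range(n):
            if len(queue) < capacity:
                queue.append(1)
            else:
                dropped += 1          # bucket full: the packet spills over
        sent = min(rate, len(queue))  # constant outflow while non-empty
        for _ in range(sent):
            queue.popleft()
        out.append(sent)
    return out, dropped

# A burst of 5 packets is smoothed to 2 per tick; one packet overflows.
out, dropped = leaky_bucket([5, 0, 0], capacity=4, rate=2)
```

The bursty input (5, 0, 0) leaves the bucket as the even flow (2, 2, 0), with the packet that found the bucket full being lost, just as in the water analogy.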

2. Token Bucket Algorithm


The leaky bucket algorithm enforces a rigid output pattern at the average rate, no matter
how bursty the traffic is. For many applications, it is better to allow the output to speed
up somewhat when large bursts arrive, so a more flexible algorithm is needed, preferably
one that never loses data.
One such algorithm is the token bucket algorithm. In this algorithm, the leaky bucket
holds tokens, generated by a clock at the rate of one token every ΔT sec.

The token bucket algorithm. (a) Before. (b) After.

The token bucket algorithm provides a different kind of traffic shaping than that of the
leaky bucket algorithm. The leaky bucket algorithm does not allow idle hosts to save up
permission to send large bursts later. The token bucket algorithm does allow saving, up
to the maximum size of the bucket, n. This property means that bursts of up to n packets
can be sent at once, allowing some burstiness in the output stream and giving faster
response to sudden bursts of input.
This algorithm makes use of a variable or counter that counts tokens. The counter is
incremented by one each time a token is generated and decremented by one whenever a
packet is sent. When the counter is zero, no packets can be sent.
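The counter version can be sketched per clock tick. The bucket size and arrival pattern below are illustrative, and packets that find no token (which a real shaper would queue) are simply held over:

```python
def token_bucket(arrivals, bucket_size, tokens_per_tick):
    """Counter-based token bucket: tokens accumulate each tick up to
    bucket_size; each packet sent consumes one token. Packets without
    a token would be queued in a real shaper (omitted here)."""
    tokens, out = 0, []
    for n in arrivals:
        tokens = min(bucket_size, tokens + tokens_per_tick)  # new tokens
        sent = min(n, tokens)      # send only as many packets as tokens
        tokens -= sent
        out.append(sent)
    return out

# Three idle ticks let tokens accumulate up to the bucket size, so a
# later burst of 4 packets can be sent in a single tick.
out = token_bucket([0, 0, 0, 4, 4], bucket_size=4, tokens_per_tick=1)
```

Unlike the leaky bucket, the idle host here saves up permission to send: the first burst of 4 goes out at once, while the second burst, arriving with only one fresh token, is throttled.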
