The network layer is concerned with getting packets from the source all the way to the
destination. To do so, it must know the topology of the communication subnet (i.e., the
set of all routers) and choose appropriate paths through it. The network layer takes
responsibility for routing packets from source to destination, within or outside a subnet.
A host with a packet to send transmits it to the nearest router, either on its own LAN or
over a point-to-point link to the carrier. The packet is stored there until it has fully arrived
so the checksum can be verified. Then it is forwarded to the next router along the path
until it reaches the destination host, where it is delivered. This mechanism is store-and-
forward packet switching.
Routing is the decision the network layer makes when several possible routes are
available from the source to the destination. Networks are interconnected through
devices called routers (or gateways) that route packets to their final destination. The
network layer concentrates on providing a route for the packets to travel.
Addressing: The data link layer provides physical addressing, but the network layer
handles another kind of addressing, called logical addressing, which deals with the
transfer of data across networks. When data is transferred outside the network, this
layer adds a logical address to each packet before it is transmitted across the network.
These logical addresses are termed IP addresses.
Virtual Circuit
In the virtual circuit approach, a preplanned route is established before any data packets
are sent. A logical connection is established when:
A sender sends a "call request packet" to the receiver, and
The receiver sends back an acknowledgement, a "call accepted packet", to the
sender if it agrees to the data communication.
In the virtual circuit approach the connection is set up, data is transferred, and after the
transmission the connection is torn down. Resources may be allocated during the setup
phase, as in circuit-switched networks, or on demand, as in datagram networks. All the
packets of a message follow the same path established during the connection. A virtual
circuit network is implemented in the data link layer.
In a VC, each packet carries a short VC number, a sequence number, and a checksum in its
header. Because packets flowing over a given VC always take the same route through the
subnet, each router must remember where to forward packets for each of the currently
open VCs passing through it. Each router therefore maintains a table with one entry per
open VC passing through it.
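As a sketch of the per-VC state just described, the following (a hypothetical table layout, not one prescribed by the text) maps an (incoming line, VC number) pair to the outgoing line and VC number chosen during the setup phase:

```python
# Sketch of a per-router virtual-circuit table; the
# (incoming_line, vc_number) -> (outgoing_line, vc_number)
# layout is an assumption for illustration.
class VCRouter:
    def __init__(self):
        self.table = {}

    def open_vc(self, in_line, in_vc, out_line, out_vc):
        # Entry created once, during the call-setup phase.
        self.table[(in_line, in_vc)] = (out_line, out_vc)

    def forward(self, in_line, in_vc):
        # Every data packet on an open VC follows the same entry.
        return self.table[(in_line, in_vc)]

r = VCRouter()
r.open_vc(in_line=0, in_vc=5, out_line=2, out_vc=7)
print(r.forward(0, 5))   # -> (2, 7)
```

Storing an outgoing VC number as well reflects the fact that the short VC number in each packet header is meaningful only per link, so a router rewrites it as the packet leaves.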
Connectionless Service is modeled after the postal system. Each message (letter) carries
the full destination address, and each one is routed through the system independent of
all the others.
Each service can be characterized by a quality of service. Some services are reliable in the
sense that they never lose data. Usually, a reliable service is implemented by having the
receiver acknowledge the receipt of each message so the sender is sure that it arrived.
The acknowledgement process introduces overhead and delays, which are often worth it
but are sometimes undesirable.
Six different types of service.
The service is said to be reliable when the receiver acknowledges the receipt of each
message, so that the sender is sure the data was received. Reliable connection-oriented
service is appropriate for file transfer. It has two minor variations: message sequences
and byte streams.
Reliable byte stream service is used for remote login: when a user logs into a remote
server, a byte stream flows from the user's computer to the remote server. The
connection is simply a stream of bytes, with no message boundaries.
The service is said to be unreliable when the receiver does not acknowledge the receipt
of the message. Unreliable datagram service is commonly used for junk e-mail. The
sender of such mail neither establishes and releases a connection nor requires reliable
delivery. It is similar to telegram service, which also does not return an
acknowledgement to the sender.
Another service is the request-reply service; the sender transmits a single datagram
containing a request; the reply contains the answer. Request-reply is commonly used to
implement communication in the client-server model: the client issues a request and the
server responds to it.
PPP defines the format of the frame to be exchanged between the devices. It defines the
link control protocol (LCP) for:
1. Establishing the link between 2 devices
2. Maintaining the established link
3. Configuring the link
4. Terminating the link after the transfer
PPP provides multiple network layer services, supporting a variety of network layer
protocols through a protocol called NCP (Network Control Protocol). It also defines how
two devices can authenticate each other.
PPP frame format (fields in order):
Flag (1 byte) | Address (1 byte) | Control (1 byte) | Protocol (1 or 2 bytes) | Data (variable) | FCS (2 or 4 bytes) | Flag (1 byte)
Flag field (01111110) marks the beginning and end of the PPP frame.
Address field, 1 byte, always 11111111. This is the broadcast address; all stations
accept the frame.
Control field, is of 1 byte. It uses the format of the U (unnumbered) frame. The value is
always 00000011 to show that the frame does not contain any sequence numbers and
there is no flow control and error control.
Protocol field, specifies the kind of packet in the data field, that is what is being carried
in the data field.
Data field, of variable length; a default length of 1500 bytes is used. It carries user data
or other information.
FCS (Frame Check Sequence) field, it is either 2 or 4 bytes. It contains the checksum.
ROUTING
Routing Algorithm
The main function of the network layer is routing packets from the source machine to the
destination machine. The routing algorithm is that part of the network layer software
responsible for deciding which output line an incoming packet should be transmitted on.
Routing is the process of forwarding a packet in the network, so that it reaches its
intended destination.
Routing algorithms can be grouped into two major classes: nonadaptive and adaptive.
Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well.
Nonadaptive routing is static. Examples: shortest path routing, flooding, and flow-based
routing algorithms.
Adaptive routing is dynamic. Examples: distance vector routing and link state routing
algorithms.
The set of optimal routes from all sources to a given destination form a tree rooted at the
destination. Such a tree is called a sink tree, where the distance metric is the number of
hops. A sink tree is not necessarily unique; other trees with the same path lengths may
exist. The goal of all routing algorithms is to discover and use the sink trees for all routers.
Since a sink tree is indeed a tree, it contains no loops, so each packet is delivered within
a finite and bounded number of hops. Links and routers can go down and come back up
during operation, so different routers may have different ideas about the current
topology. The optimality principle and the sink tree provide a benchmark against which
other routing algorithms can be measured.
Example: using Dijkstra's algorithm to find the shortest path between nodes A and D.
The weight between two nodes is known as the distance. Each node is labeled (in
parentheses) with its distance from the source node along the best known path. Initially,
no paths are known, so all nodes are labeled with infinity. As the algorithm proceeds and
paths are found, the labels may change, reflecting better paths. A label may be either
tentative or permanent. Initially, all labels are tentative. When it is discovered that a label
represents the shortest possible path from the source to that node, it is made permanent
and never changed thereafter.
Step 1: The source node A is marked permanent with distance 0; every other node is
tentative at infinity.
Step 2: Calculate the shortest paths from node A to B and from A to G. The distance from
A to B is 2 and from A to G is 6, so the shorter path, of length 2, is A to B.
Step 3: Calculate the shortest paths from B to C and from B to E. The distance from B to
E is 2 and from B to C is 7, so the cumulative distance from A via B to C is 9 (2+7) and
via B to E is 4 (2+2). The shorter path is to E, with cumulative length 4.
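The labeling procedure can be sketched in Python. The edge weights below are illustrative assumptions chosen to agree with the distances quoted in Steps 2 and 3, since the full figure is not reproduced here:

```python
import heapq

# Illustrative graph (assumed weights consistent with the worked steps).
graph = {
    'A': {'B': 2, 'G': 6},
    'B': {'A': 2, 'C': 7, 'E': 2},
    'C': {'B': 7, 'D': 3},
    'D': {'C': 3, 'F': 2},
    'E': {'B': 2, 'F': 2},
    'F': {'E': 2, 'D': 2},
    'G': {'A': 6, 'E': 1},
}

def dijkstra(src):
    dist = {n: float('inf') for n in graph}   # all labels start tentative at infinity
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)              # smallest tentative label becomes permanent
        if d > dist[u]:
            continue                          # stale entry, already improved
        for v, w in graph[u].items():         # relax each neighbour's label
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

print(dijkstra('A')['D'])   # -> 8, via A-B-E-F-D
```

Popping the minimum-distance node from the heap corresponds to making its label permanent: no shorter path to it can be found later.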
3. Flooding
Another static algorithm is flooding, in which every incoming packet is sent out on every
outgoing line except the one it arrived on. Flooding generates vast numbers of duplicate
packets, so some measure is needed to damp the process. One such measure is a hop
counter in the header of each packet, decremented at each hop, with the packet being
discarded when the counter reaches zero. Ideally, the hop counter should be initialized
to the length of the path from source to destination.
An alternative technique for damming the flood is to keep track of which packets have
been flooded, to avoid sending them out a second time. One way to achieve this is to
have the source router put a sequence number in each packet it receives from its hosts.
Each router then keeps a list per source router telling which sequence numbers
originating at that source have already been seen. If an incoming packet is on the list, it
is not flooded.
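A minimal sketch of this sequence-number bookkeeping (class and method names are hypothetical):

```python
# Duplicate suppression for flooding: each router remembers, per source,
# the set of sequence numbers it has already seen.
class FloodRouter:
    def __init__(self, lines):
        self.lines = lines            # outgoing line identifiers
        self.seen = {}                # source -> set of seen sequence numbers

    def receive(self, source, seq, in_line):
        if seq in self.seen.setdefault(source, set()):
            return []                 # duplicate: not flooded again
        self.seen[source].add(seq)
        # Flood on every line except the one the packet arrived on.
        return [l for l in self.lines if l != in_line]

r = FloodRouter(lines=[1, 2, 3])
print(r.receive('A', seq=0, in_line=1))  # first copy  -> [2, 3]
print(r.receive('A', seq=0, in_line=2))  # duplicate   -> []
```

In practice the per-source list is kept bounded, e.g. by remembering only a counter k such that all sequence numbers up to k have been seen.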
The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson
algorithm.
In distance vector routing, each router maintains a routing table indexed by, and
containing one entry for, each router in the subnet. This entry contains two parts: the
preferred outgoing line to use for that destination and an estimate of the time or distance
to that destination. The metric used might be the number of hops, the time delay in
milliseconds, or the total number of packets queued along the path.
Example (routing table entries at a router):
Destination | Cost | Next Hop (intermediate node)
A           | 1    | A
D           | 7    | C
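One round of the distance-vector computation can be sketched as follows. The link costs and the neighbours' advertised vectors are made-up illustrative values, not figures from the text:

```python
# One distance-vector update at a router (all costs are illustrative).
link_cost = {'B': 1, 'C': 4}           # this router's direct cost to each neighbour
vectors = {                            # the vector each neighbour last advertised
    'B': {'B': 0, 'C': 2, 'D': 6},
    'C': {'B': 2, 'C': 0, 'D': 5},
}

def update(destinations):
    table = {}
    for dest in destinations:
        # Pick the neighbour minimising cost-to-neighbour plus the
        # neighbour's advertised distance to the destination.
        cost, via = min((link_cost[n] + vectors[n][dest], n) for n in link_cost)
        table[dest] = (cost, via)      # (estimated distance, preferred next hop)
    return table

print(update(['C', 'D']))   # -> {'C': (3, 'B'), 'D': (7, 'B')}
```

This per-destination minimisation is exactly the Bellman-Ford relaxation step, which is why the algorithm is also called distributed Bellman-Ford.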
The idea behind link state routing is simple and can be stated as five parts. Each router
must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
The complete topology and all delays are experimentally measured and distributed to
every router, after which the shortest path to every other router can be computed.
Example:
Steps:
1. A link state database (LSDB) is created for the network.
2. Using the database, each node chooses the next node along the lowest-cost path to
the destination.
Link State Database (LSDB)
    A  B  C  D  E  F  G
A   0  2  ∞  3  ∞  ∞  ∞
B   2  0  5  ∞  4  ∞  ∞
C   ∞  5  0  ∞  ∞  4  3
D   3  ∞  ∞  0  5  ∞  ∞
E   ∞  4  ∞  5  0  2  ∞
F   ∞  ∞  4  ∞  2  0  1
G   ∞  ∞  3  ∞  ∞  1  0
6. Hierarchical Routing
When hierarchical routing is used, the routers are divided into regions. Each router
knows all the details about how to route packets to destinations within its own region
but knows nothing about the internal structure of other regions. When different
networks are interconnected, it is natural to regard each one as a separate region in
order to free the routers in one network from having to know the topological structure
of the other ones.
Consider a two-level hierarchy with five regions. The full routing table for router 1A has
17 entries. When routing is done hierarchically, there are entries for all the local routers
as before, but all other regions are condensed into a single entry each, so all traffic for
region 2 goes via the 1B-2A line, and the rest of the remote traffic goes via the 1C-3B
line. Hierarchical routing reduces the table from 17 to 7 entries. As the ratio of the
number of regions to the number of routers per region grows, the savings in table space
increase.
A third algorithm is multidestination routing. In this method, each packet contains
either a list of destinations or a bit map indicating the desired destinations. When a
packet arrives at a router, the router checks all the destinations to determine the set of
output lines that will be needed. The router generates a new copy of the packet for each
output line to be used and includes in each packet only those destinations that are to
use the line. In effect, the destination set is partitioned among the output lines. After a
sufficient number of hops, each packet will carry only one destination and can be
treated as a normal packet. Multidestination routing is like using separately addressed
packets, except that when several packets must follow the same route, one of them pays
full fare and the rest ride free.
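The partitioning step can be sketched as follows, using a hypothetical next-hop table that maps each destination to an output line:

```python
# Partition a multidestination packet's destination list among output
# lines. The next_hop table is an assumed example, not from the text.
next_hop = {'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 3}

def partition(destinations):
    copies = {}
    for d in destinations:
        # Each output line gets one packet copy carrying only the
        # destinations reachable through that line.
        copies.setdefault(next_hop[d], []).append(d)
    return copies

print(partition(['B', 'D', 'E', 'F']))
# -> {1: ['B'], 2: ['D', 'E'], 3: ['F']}
```

Here one incoming packet with four destinations becomes three outgoing copies, each carrying only its own share of the destination set.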
A spanning tree is a subset of the subnet that includes all the routers but contains no
loops. If each router knows which of its lines belong to the spanning tree, it can copy an
incoming broadcast packet onto all the spanning tree lines except the one it arrived on.
This method makes excellent use of bandwidth, generating the absolute minimum
number of packets necessary to do the job. The only problem is that each router must
have knowledge of some spanning tree for the method to be applicable.
Reverse path forwarding is remarkably simple once it has been pointed out. When a
broadcast packet arrives at a router, the router checks whether the packet arrived on
the line that is normally used for sending packets to the source of the broadcast. If so,
there is an excellent chance that the broadcast packet itself followed the best route from
the router and is therefore the first copy to arrive, so the router forwards copies of it
onto all lines except the one it arrived on; otherwise, the packet is discarded as a likely
duplicate.
Consider a network in which some routers are attached to hosts that belong to one or
both of two groups, as indicated in the figure, together with a spanning tree for the
leftmost router.
When a process sends a multicast packet to a group, the first router examines its spanning
tree and prunes it, removing all lines that do not lead to hosts that are members of the
group.
The spanning tree for group 2. Multicast packets are forwarded only along the
appropriate spanning tree.
The simplest one can be used if link state routing is used and each router is aware of the
complete topology, including which hosts belong to which groups. Then the spanning tree
can be pruned, starting at the end of each path, working toward the root, and removing
all routers that do not belong to the group.
The basic algorithm is reverse path forwarding. However, whenever a router with no
hosts interested in a particular group and no connections to other routers receives a
multicast message for that group, it responds with a PRUNE message, telling the sender
not to send it any more multicasts for that group. When a router with no group members
among its own hosts has received such messages on all its lines, it, too, can respond with
a PRUNE message.
All hosts are assumed to have a permanent home location that never changes. Hosts also
have a permanent home address that can be used to determine their home locations. The
routing goal in systems with mobile hosts is to make it possible to send packets to mobile
hosts using their home addresses and have the packets efficiently reach them.
The world is divided up geographically into small areas, each typically a LAN or
wireless cell. Each area has one or more foreign agents, which are processes that keep
track of all mobile hosts visiting the area. In addition, each area has a home agent,
which keeps track of hosts whose home is in the area but who are currently visiting
another area.
When a new host enters an area, either by connecting to it or by wandering into it, the
host must register itself with the foreign agent there. The registration procedure
follows these steps:
1. Each foreign agent broadcasts a packet announcing its existence and address.
2. The mobile host registers with the foreign agent, giving its home address, current
data link layer address, and some security information.
3. The home agent tells the sender to henceforth send packets to the mobile host by
encapsulating them in the payload of packets explicitly addressed to the foreign
agent, instead of just sending them to the mobile host's home address (step 3).
Subsequent packets can now be routed directly to the host via the foreign agent
(step 4), bypassing the home location entirely.
The various schemes differ in how they actually arrange for packets addressed to one
destination to be delivered to a different one. One choice is changing the destination
address and simply retransmitting the modified packet. Alternatively, the whole packet,
home address and all, can be encapsulated inside the payload of another packet sent to
the temporary address. Finally, the schemes differ in their security aspects.
UDP transmits segments consisting of an 8-byte header followed by the payload. The two
ports serve to identify the end points within the source and destination machines. When
a UDP packet arrives, its payload is handed to the process attached to the destination
port.
The source port is primarily needed when a reply must be sent back to the source. By
copying the source port field from the incoming segment into the destination port field
of the outgoing segment, the process sending the reply can specify which process on the
sending machine is to get it.
The UDP length field includes the 8-byte header and the data. The UDP checksum is
optional and stored as 0 if not computed (a true computed 0 is stored as all 1s).
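The 8-byte header layout described above (source port, destination port, length, checksum, each a big-endian 16-bit field) can be packed and unpacked with Python's struct module; the port numbers here are arbitrary examples:

```python
import struct

def build(src_port, dst_port, payload, checksum=0):
    # The UDP length field covers the 8-byte header plus the data;
    # checksum 0 means "not computed", as described in the text.
    length = 8 + len(payload)
    return struct.pack('!HHHH', src_port, dst_port, length, checksum) + payload

def parse(segment):
    src, dst, length, csum = struct.unpack('!HHHH', segment[:8])
    return src, dst, length, csum, segment[8:]

seg = build(5000, 53, b'query')
print(parse(seg))   # -> (5000, 53, 13, 0, b'query')
```

Swapping the source and destination ports of an incoming segment, as the text describes, is all a server needs to do to address its reply.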
In the simplest form, to call a remote procedure, the client program must be bound with
a small library procedure, called the client stub, that represents the server procedure in
the client's address space. Similarly, the server is bound with a procedure called the
server stub. These procedures hide the fact that the procedure call from the client to the
server is not local.
Step 1 is the client calling the client stub, which looks like an ordinary local call. Step 2
is the client stub packing the parameters into a message and making a system call to
send the message. Packing the parameters is called marshaling.
Step 3 is the kernel sending the message from the client machine to the server machine.
Step 4 is the kernel passing the incoming packet to the server stub.
Step 5 is the server stub calling the server procedure with the unmarshaled parameters.
The reply traces the same path in the other direction.
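The five steps can be mimicked in a toy sketch in which pickle stands in for the real marshaling format and a direct function call stands in for the kernel's message transport; all names here are hypothetical:

```python
import pickle

def server_add(a, b):
    # The actual "remote" procedure living on the server.
    return a + b

def server_stub(message):
    name, args = pickle.loads(message)           # step 4-5: unmarshal, then call
    result = {'add': server_add}[name](*args)
    return pickle.dumps(result)                  # marshal the reply

def client_stub(proc_name, *args):
    message = pickle.dumps((proc_name, args))    # step 2: marshal the parameters
    reply = server_stub(message)                 # step 3: "send" over the network
    return pickle.loads(reply)                   # unmarshal the result

print(client_stub('add', 2, 3))   # -> 5
```

The point of the stubs is visible even in this sketch: the caller writes `client_stub('add', 2, 3)` as if it were a local call, with all marshaling hidden.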
The basic function of RTP is to multiplex several real-time data streams onto a single
stream of UDP packets. The UDP stream can be sent to a single destination (unicasting)
or to multiple destinations (multicasting). Because RTP just uses normal UDP, its packets
are not treated specially by the routers unless some normal IP quality-of-service features
are enabled.
(Figure: packet nesting.)
RTP Header
It consists of three 32-bit words and potentially some extensions. The first word
contains the Version field, which is currently at 2.
The P bit indicates that the packet has been padded to a multiple of 4 bytes. The last
padding byte tells how many bytes were added. The X bit indicates that an extension
header is present.
The CC field tells how many contributing sources are present, from 0 to 15 (see below).
The M bit is an application-specific marker bit. It can be used to mark the start of a
video frame or the start of a word in an audio channel.
The Payload type field tells which encoding algorithm has been used (e.g.,
uncompressed 8-bit audio, MP3, etc.).
The Sequence number is just a counter that is incremented on each RTP packet
sent. It is used to detect lost packets.
The timestamp is produced by the stream's source to note when the first sample in the
packet was made.
The Synchronization source identifier tells which stream the packet belongs to. It is
the method used to multiplex and demultiplex multiple data streams onto a single stream
of UDP packets.
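The three fixed 32-bit words can be unpacked by shifting and masking the first word; the sample values below (payload type 14, sequence number 7) are arbitrary illustrations:

```python
import struct

def parse_rtp(header):
    # Unpack the three fixed big-endian 32-bit words of an RTP header.
    w1, timestamp, ssrc = struct.unpack('!III', header[:12])
    return {
        'version':      (w1 >> 30) & 0x3,    # bits 31-30
        'padding':      (w1 >> 29) & 0x1,    # P bit
        'extension':    (w1 >> 28) & 0x1,    # X bit
        'cc':           (w1 >> 24) & 0xF,    # contributing source count
        'marker':       (w1 >> 23) & 0x1,    # M bit
        'payload_type': (w1 >> 16) & 0x7F,   # encoding in use
        'sequence':     w1 & 0xFFFF,         # per-packet counter
        'timestamp':    timestamp,
        'ssrc':         ssrc,                # synchronization source id
    }

# Build a sample first word: version 2, payload type 14, sequence 7.
w1 = (2 << 30) | (14 << 16) | 7
hdr = struct.pack('!III', w1, 1000, 0xABCD)
p = parse_rtp(hdr)
print(p['version'], p['payload_type'], p['sequence'])   # -> 2 14 7
```

The SSRC field is what lets a receiver demultiplex several RTP streams arriving on the same UDP port.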
Each machine supporting TCP has a TCP transport entity, either a library procedure, a
user process, or part of the kernel. A TCP entity accepts user data streams from local
processes, breaks them up into pieces not exceeding 64 KB (in practice, often 1460 data
bytes in order to fit in a single Ethernet frame with the IP and TCP headers), and sends
each piece as a separate IP datagram. When datagrams containing TCP data arrive at a
machine, they are given to the TCP entity, which reconstructs the original byte streams.
The Source port and Destination port fields identify the local end points of the
connection. The source and destination end points together identify the connection. The
Sequence number and Acknowledgement number fields perform their usual
functions. The TCP header length tells how many 32-bit words are contained in the TCP
header. This field really indicates the start of the data within the segment, measured in
32- bit words.
Six 1-bit flags come next. URG is set to 1 if the Urgent pointer is in use. The Urgent
pointer indicates a byte offset from the current sequence number at which urgent data
are to be found.
The ACK bit is set to 1 to indicate that the Acknowledgement number is valid. If ACK is 0,
the segment does not contain an acknowledgement so the Acknowledgement number
field is ignored.
The PSH bit indicates PUSHed data. The receiver is hereby kindly requested to deliver
the data to the application upon arrival and not buffer it until a full buffer has been
received.
The Options field provides a way to add extra facilities not covered by the regular header.
The most important option is the one that allows each host to specify the maximum TCP
payload it is willing to accept.
CONGESTION CONTROL
Congestion control refers to techniques and mechanisms that can either prevent
congestion before it happens or remove congestion after it has happened.
The two categories of congestion control are open loop and closed loop.
Open loop: policies are used to prevent congestion before it happens. Congestion
control is handled either by the source or by the destination.
Closed loop: these methods remove congestion after it happens.
The various methods of open loop congestion control are:
1. Retransmission policy: the sender retransmits a packet that may have been lost or
corrupted. The retransmission policy and retransmission timers must be designed
to optimize efficiency while preventing congestion.
2. Window policy: the selective reject window method is used for congestion
control. Selective reject is preferred over Go-Back-N because, in Go-Back-N, when
the timer for a packet times out several packets are resent, although some may
have arrived safely at the receiver; this duplication can make congestion worse.
The selective reject method resends only the specific lost or damaged packets.
3. Acknowledgement policy: the acknowledgement policy imposed by the receiver
may also affect congestion. If the receiver does not acknowledge every packet it
receives, it may slow down the sender and help prevent congestion. To implement
this:
A receiver may send an acknowledgement only if it has a packet to be sent.
A receiver may send an acknowledgement when a timer expires.
A receiver may also decide to acknowledge only N packets at a time.
4. Discarding policy: a router may discard less sensitive packets when congestion is
likely to happen. Such a discarding policy may prevent congestion and at the
same time may not harm the integrity of the transmission.
IV BCA – Data Communication & Networks - Unit 4 | 35
5. Admission policy: a quality-of-service (QoS) mechanism that can also prevent
congestion in virtual circuit networks. A router can deny establishing a virtual
circuit connection if there is congestion in the network or a possibility of future
congestion.
The various methods of closed loop congestion control are:
1. Backpressure: a node-to-node congestion control that starts at a congested node
and propagates, in the direction opposite to the data flow, toward the source.
This technique can be applied to virtual circuit networks. The upstream node is
the one from which the data flow is coming; the receiver of the data is
downstream.
In this method, the congested node stops receiving data from the immediate
upstream node. This may cause the upstream node or nodes to become
congested, and they in turn reject data from their upstream node or nodes.
Example: node 3 is congested, so it stops receiving packets and informs its
upstream node 2 to slow down. Node 2 may in turn become congested and
inform node 1 to slow down. Node 1 may then become congested and inform the
source to slow down. Thus the pressure is moved backward, node by node, to the
source to remove the congestion.
2. Choke packet: in this method, the congested router or node sends a special
packet called a choke packet to the source to inform it about the congestion.
Unlike the backpressure method, the congested node does not inform its
upstream node; it sends a warning directly to the source station.
Leaky bucket algorithm
Imagine a bucket with a small hole in the bottom. No matter the rate at which water
enters the bucket, the outflow is at a constant rate, ρ, when there is any water in the
bucket and zero when the bucket is empty. Also, once the bucket is full, any additional
water entering it spills over the sides and is lost (i.e., does not appear in the output stream
under the hole).
Implementation of leaky bucket algorithm
Each host is connected to the network by an interface containing a leaky bucket, that is, a
finite internal queue. If a packet arrives at the queue when it is full, the packet is
discarded.
It makes use of a clock tick to remove packets from the FIFO queue.
In other words, if one or more processes within the host try to send a packet when the
maximum number is already queued, the new packet is unceremoniously discarded.
This arrangement turns an uneven flow of packets from the user processes inside the
host into an even flow of packets onto the network.
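The host-interface queue described above can be sketched as follows; the capacity and the one-packet-per-tick drain rate are illustrative assumptions:

```python
from collections import deque

# Leaky-bucket interface: a finite FIFO queue drained at a constant
# rate of one packet per clock tick (capacity is an assumed parameter).
class LeakyBucket:
    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False              # bucket full: packet is discarded
        self.queue.append(packet)
        return True

    def tick(self):
        # One packet leaves per clock tick, regardless of the input rate.
        return self.queue.popleft() if self.queue else None

b = LeakyBucket(capacity=2)
print([b.arrive(p) for p in ('p1', 'p2', 'p3')])   # -> [True, True, False]
print(b.tick())                                    # -> 'p1'
```

However bursty the arrivals, `tick` releases at most one packet per interval, which is exactly the smoothing property the analogy with the leaking bucket is meant to convey.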