
Data Link Layer

Bit stuffing:

 Allows a frame to contain an arbitrary number of bits and an arbitrary character size. The frames
are separated by a delimiting flag sequence.
 Each frame begins and ends with a special bit pattern, 01111110, called a flag byte. Whenever
five consecutive 1s are encountered in the data, the sender's data link layer automatically stuffs a '0' bit into the
outgoing bit stream.
 In this method, frames contain an arbitrary number of bits and allow character codes with
an arbitrary number of bits per character. In this case, each frame starts and ends with the
special bit pattern 01111110.
 A 0 bit is automatically stuffed into the outgoing bit stream whenever the
sender's data link layer finds five consecutive 1s in the data.
 This bit stuffing is similar to byte stuffing, in which an escape byte is stuffed into the
outgoing character stream before a flag byte in the data.
 When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically destuffs (i.e., deletes) the 0 bit. Like byte stuffing, bit stuffing is completely transparent to the
network layer. Figure 1 below gives an example of bit stuffing.
 This method of framing is used in networks in which the encoding of data on the
physical medium contains some redundancy. For example,
some LANs encode each bit of data using 2 physical bits.
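The stuffing and destuffing rules above can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and the use of '0'/'1' character strings are assumptions made here for clarity, not part of the original text:

FLAG = "01111110"

def bit_stuff(bits):
    # Insert a 0 after every run of five consecutive 1s in the payload.
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits):
    # Delete the 0 that follows every run of five consecutive 1s.
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

payload = "011011111111111111110010"
frame = FLAG + bit_stuff(payload) + FLAG     # what actually goes on the wire
assert bit_destuff(bit_stuff(payload)) == payload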
Byte stuffing:

 In this method, the start and end of a frame are recognized with the help of flag bytes. Each
frame starts and ends with a flag byte; two consecutive flag bytes indicate the end
of one frame and the start of the next. When a flag byte occurs in the data itself, an escape
byte ("ESC", shown in Figure 2) is stuffed before it so that it is not mistaken for a frame boundary.
 A frame is delimited by flag bytes. This framing method is applicable only to 8-bit
character codes, which is a major disadvantage, as not all character codes
use 8-bit characters (e.g., Unicode).
 Four examples of byte sequences before and after stuffing are sketched below:
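Since the stuffing figure is not reproduced here, the following Python sketch illustrates the idea using the textbook convention of a FLAG delimiter and an ESC escape byte. The concrete byte values and the four sample sequences are assumptions chosen for illustration, not taken from the original figure:

FLAG, ESC = 0x7E, 0x7D        # assumed values; any two reserved bytes work

def byte_stuff(data):
    # Precede every accidental FLAG or ESC byte in the payload with ESC.
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

A, B = 0x41, 0x42             # arbitrary data bytes
for payload in ([A, FLAG, B], [A, ESC, B], [A, ESC, FLAG, B], [A, ESC, ESC, B]):
    print(bytes(payload).hex(" "), "->", byte_stuff(bytes(payload)).hex(" "))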
Circuit Switching

In a circuit-switched network, a dedicated channel has to be established before a call can be made
between users. The channel is reserved between the users for as long as the connection is active. For half-
duplex communication one channel is allocated, and for full-duplex communication two
channels are allocated. Circuit switching is mainly used for voice communication, which requires real-time
service without much delay.

As shown in figure 1, if user-A wants to use the network, it first needs to request a channel;
once a channel is obtained, user-A can communicate with user-C. During the connection phase, if
user-B tries to call/communicate with user-D or any other user, it will get a busy signal from the
network.

Packet Switching

In a packet-switched network, unlike a circuit-switched (CS) network, it is not necessary to establish a
connection initially. The channel is shared by many users. But when the traffic or the number
of users increases, this can lead to congestion in the network. Packet-switched networks are
mainly used for data and voice applications in non-real-time scenarios.

As shown in figure 2, if user-A wants to send data/information to user-C and user-B wants
to send data to user-D, both transfers are possible simultaneously. Here the information is prefixed with a header
which contains the addresses of the source and destination. This header is examined by intermediate
switching nodes to determine the packet's route to its destination.
In packet switching, a station breaks a long message into packets. Packets are sent one at a time to
the network. Packets are handled in two ways, viz. datagram and virtual circuit.

In the datagram approach, each packet is treated independently. Packets can take any practical route;
packets may arrive out of order and may go missing.

In the virtual-circuit approach, a preplanned route is established before any packets are transmitted. The
handshake is established using call-request and call-accept messages. Here each packet contains a
virtual circuit identifier (VCI) instead of the destination address, so routing decisions
for each packet are not needed.
Comparison between CS vs. PS networks

As shown above, in packet-switched (PS) networks quality of service (QoS) is not guaranteed,
while in circuit-switched (CS) networks quality is guaranteed.
PS is used for delay-tolerant applications such as internet access, email, SMS and MMS, and also carries VoIP.
In CS, even if a user is not talking, the channel cannot be used by any other user, which wastes
the resource capacity during those intervals.

An example of a circuit-switched network is the PSTN; an example of a packet-switched network is
GPRS/EDGE.
The following table summarizes the differences between circuit switching and packet switching of the
datagram and virtual-circuit types.
Circuit Switching | Packet Switching (Datagram type) | Packet Switching (Virtual Circuit type)
Dedicated path | No dedicated path | No dedicated path
Path is established for the entire conversation | Route is established for each packet | Route is established for the entire conversation
Call setup delay | Packet transmission delay | Call setup delay as well as packet transmission delay
Overload may block call setup | Overload increases packet delay | Overload may block call setup and increases packet delay
Fixed bandwidth | Dynamic bandwidth | Dynamic bandwidth
No overhead bits after call setup | Overhead bits in each packet | Overhead bits in each packet

BGP (Border Gateway Protocol)

BGP (Border Gateway Protocol) is a protocol that manages how packets are routed across the
internet through the exchange of routing and reachability information between edge routers. BGP
directs packets between autonomous systems (AS) -- networks managed by a single enterprise or
service provider. Traffic that is routed within a single AS is referred to as internal BGP,
or iBGP. More often, BGP is used to connect one AS to other autonomous systems, and it is then
referred to as external BGP, or eBGP.
What is BGP used for?

BGP offers network stability by ensuring that routers can quickly adapt to send packets through
another connection if one internet path goes down. BGP makes routing decisions based on
paths, rules or network policies configured by a network administrator. Each BGP router
maintains a standard routing table used to direct packets in transit. This table is used in
conjunction with a separate table, known as the routing information base (RIB), which is
a data table stored on the BGP router. The RIB contains route information from both
directly connected external peers and internal peers, and continually updates the routing
table as changes occur. BGP is based on TCP/IP and uses a client-server topology to communicate
routing information, with the client initiating a BGP session by sending a request to the
server.

BGP routing basics

BGP sends updated routing table information only when something changes -- and even then, it
sends only the affected information. BGP has no automatic discovery mechanism, which means
connections between peers have to be set up manually, with peer addresses programmed in at
both ends.

BGP makes best-path decisions based on current reachability, hop counts and other path
characteristics. In situations where multiple paths are available -- as within a major hosting
facility -- BGP can be used to communicate an organization's own preferences in terms of what
path traffic should follow in and out of its networks. BGP even has a mechanism for defining
arbitrary tags, called communities, which can be used to control route advertisement behavior by
mutual agreement among peers.
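As a rough illustration of path selection, the sketch below prefers the route with the highest local preference and then the shortest AS path. This is a deliberately simplified model written for this text; the real BGP decision process has several further tie-breakers (origin, MED, router ID and so on), and the Route class and example values are assumptions:

from dataclasses import dataclass

@dataclass
class Route:
    prefix: str          # e.g. "203.0.113.0/24"
    as_path: list        # sequence of AS numbers toward the origin
    local_pref: int      # policy preference set by the administrator

def best_path(candidates):
    # Higher local preference wins; ties are broken by the shorter AS path.
    return max(candidates, key=lambda r: (r.local_pref, -len(r.as_path)))

routes = [
    Route("203.0.113.0/24", [64501, 64510, 64520], local_pref=100),
    Route("203.0.113.0/24", [64502, 64520], local_pref=100),
]
print(best_path(routes).as_path)   # the shorter path via AS64502 is chosen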

Ratified in 2006, BGP-4, the current version of BGP, supports both IPv6 and classless
interdomain routing (CIDR), which enables the continued viability of IPv4. CIDR allows
more addresses to be used within the network than the older classful IP address assignment
scheme.
Static routing
Static routing is a form of routing that occurs when a router uses a manually configured routing
entry, rather than information from dynamic routing traffic.[1] In many cases, static routes are
manually configured by a network administrator by adding entries into a routing table, though
this may not always be the case.[2] Unlike dynamic routing, static routes are fixed and do not
change if the network is changed or reconfigured. Static routing and dynamic routing are not
mutually exclusive. Both dynamic routing and static routing are usually used on a router to
maximise routing efficiency and to provide backups in the event that dynamic routing
information fails to be exchanged. Static routing can also be used in stub networks, or to provide
a gateway of last resort.

Uses

Static routing may have the following uses:

Static routing can be used to define an exit point from a router when no other routes are available
or necessary. This is called a default route.

Static routing can be used for small networks that require only one or two routes. This is often
more efficient since a link is not being wasted by exchanging dynamic routing information.

Static routing is often used as a complement to dynamic routing to provide a failsafe backup in
the event that a dynamic route is unavailable.

Static routing is often used to help transfer routing information from one routing protocol to
another (routing redistribution).

Disadvantages

Static routing has some potential disadvantages:

Human error: In many cases, static routes are manually configured. This increases the potential
for input mistakes. Administrators can mistype network information or
configure incorrect routing paths by mistake.
Fault tolerance: Static routing is not fault tolerant. This means that when there is a change in the
network or a failure occurs between two statically defined devices, traffic will not be re-routed.
As a result, the network is unusable until the failure is repaired or the static route is manually
reconfigured by an administrator.

Administrative distance: Static routes typically take precedence over routes configured with a
dynamic routing protocol. This means that static routes may prevent routing protocols from
working as intended. A solution is to manually modify the administrative distance.

Administrative overhead: Static routes must be configured on each router in the network(s). This
configuration can take a long time if there are many routers. It also means that reconfiguration
can be slow and inefficient. Dynamic routing on the other hand automatically propagates routing
changes, reducing the need for manual reconfiguration.

Example

To route IP traffic destined for the network 10.10.20.0/24 via the next-hop router with the IPv4
address 192.168.100.1, the following configuration commands or steps can be used:

Linux

In most Linux distributions, a static route can be added using the ip command from the iproute2 suite. The
following is typed at a terminal:

root@router:~# ip route add 10.10.20.0/24 via 192.168.100.1

Cisco

Enterprise-level Cisco routers are configurable using the Cisco IOS command line, rather than a
web management interface.
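For comparison with the Linux example above, the equivalent static route on Cisco IOS would look roughly like the following (a sketch assuming a /24 mask and global configuration mode):

Router(config)# ip route 10.10.20.0 255.255.255.0 192.168.100.1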

Dynamic routing

Dynamic routing, also called adaptive routing, describes the capability of a system, through
which routes are characterized by their destination, to alter the path that the route takes through
the system in response to a change in conditions. [3] The adaptation is intended to allow as many
routes as possible to remain valid (that is, have destinations that can be reached) in response to
the change.

People using a transport system can display dynamic routing. For example, if a local railway
station is closed, people can alight from a train at a different station and use another method,
such as a bus, to reach their destination. Another example of dynamic routing can be seen
within financial markets. For example, ASOR or Adaptive Smart Order Router (developed
by Quod Financial), makes routing decisions dynamically based on real-time market events.

The term is commonly used in data networking to describe the capability of a network to 'route
around' damage, such as loss of a node or a connection between nodes, so long as other path
choices are available. There are several protocols used to achieve this:

 RIP
 OSPF
 IS-IS
 IGRP/EIGRP

Systems that do not implement dynamic routing are described as using static routing, where
routes through a network are described by fixed paths (statically). A change, such as the loss of a
node, or loss of a connection between nodes, is not compensated for. This means that anything
that wishes to take an affected path will either have to wait for the failure to be repaired before
restarting its journey, or will have to fail to reach its destination and give up the journey.
Alternate paths

Many systems use some next-hop forwarding protocol—when a packet arrives at some node, that
node decides on-the-fly which link to use to push the packet one hop closer to its final
destination.

Routers that use some adaptive protocols, such as the Spanning Tree Protocol, in order to
"avoid bridge loops and routing loops", calculate a tree that indicates the one "best" link for a
packet to get to its destination. Alternate "redundant" links not on the tree are temporarily
disabled—until one of the links on the main tree fails, and the routers calculate a new tree using
those links to route around the broken link.

Routers that use other adaptive protocols, such as grouped adaptive routing, find a group of
*all* the links that could be used to get the packet one hop closer to its final destination. The
router sends the packet out any link of that group which is idle. The link aggregation of that
group of links effectively becomes a single high-bandwidth connection.[4]

In dynamic routing, routing protocols running in routers continuously exchange network
status updates between each other as broadcast or multicast. With the help of the routing update
messages sent by the routing protocols, routers can continuously update the routing table
whenever a network topology change happens.

Examples of Routing Protocols are Routing Information Protocol (RIP), Enhanced Interior
Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF).

There are three basic types of routing protocols.

Distance-vector Routing Protocols: Distance-vector routing protocols use simple algorithms
that calculate a cumulative distance value between routers based on hop count.

Example: Routing Information Protocol Version 1 (RIPv1) and Interior Gateway Routing
Protocol (IGRP)
Link-state Routing Protocols: Link-state Routing Protocols use sophisticated algorithms that
maintain a complex database of internetwork topology.

Example: Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-
IS)

Hybrid Routing Protocols: Hybrid Routing Protocols use a combination of distance-vector and
link-state methods that tries to incorporate the advantages of both and minimize their
disadvantages.

Example: Enhanced Interior Gateway Routing Protocol (EIGRP), Routing Information Protocol
Version 2 (RIPv2)

Go-Back-N ARQ

Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol, in which
the sending process continues to send a number of frames specified by a window size even
without receiving an acknowledgement (ACK) packet from the receiver. It is a special case of
the general sliding window protocol with the transmit window size of N and receive window size
of 1. It can transmit N frames to the peer before requiring an ACK.

The receiver process keeps track of the sequence number of the next frame it expects to receive,
and sends that number with every ACK it sends. The receiver will discard any frame that does
not have the exact sequence number it expects (either a duplicate frame it already acknowledged,
or an out-of-order frame it expects to receive later) and will resend an ACK for the last correct
in-order frame.[1] Once the sender has sent all of the frames in its window, it will detect that all of
the frames since the first lost frame are outstanding, and will go back to the sequence number of
the last ACK it received from the receiver process and fill its window starting with that frame
and continue the process over again.

Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since unlike
waiting for an acknowledgement for each packet, the connection is still being utilized as packets
are being sent. In other words, during the time that would otherwise be spent waiting, more
packets are being sent. However, this method also results in sending frames multiple times – if
any frame was lost or damaged, or the ACK acknowledging them was lost or damaged, then that
frame and all following frames in the window (even if they were received without error) will be
re-sent. To avoid this, Selective Repeat ARQ can be used.[2]

There are a few things to keep in mind when choosing a value for N:

1. The sender must not transmit too fast. N should be bounded by the receiver’s ability to
process packets.
2. N must be smaller than the number of sequence numbers (if they are numbered from zero
to N) to verify transmission in cases of any packet (any data or ACK packet) being
dropped.[2]
3. Given the bounds presented in (1) and (2), choose N to be the largest number possible.
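A small Python simulation of the sender-side window logic may make the retransmission behaviour above concrete. It is a sketch under simplifying assumptions (ACK loss is ignored and the whole window is re-sent from the cumulative acknowledgement point); the function and variable names are invented for illustration:

import random

def go_back_n(frames, N, loss_rate=0.2, seed=1):
    # Simulate the sender side of Go-Back-N over a link that loses frames.
    rng = random.Random(seed)
    base = 0                     # oldest unacknowledged frame
    delivered = []               # frames the receiver accepted, in order
    while base < len(frames):
        expected = base          # receiver's next expected sequence number
        for seq in range(base, min(base + N, len(frames))):
            if rng.random() < loss_rate:
                continue                      # frame lost in transit
            if seq == expected:               # in order: accept and ACK
                delivered.append(frames[seq])
                expected += 1
            # out-of-order frames are discarded by the receiver
        base = expected          # cumulative ACK slides the window forward
    return delivered

assert go_back_n(list("HELLO WORLD"), N=4) == list("HELLO WORLD")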
OVERVIEW

"Distance Vector" and "Link State" are terms used to describe routing protocols which are used
by routers to forward packets between networks. The purpose of any routing protocol is to
dynamically communicate information about all network paths used to reach a destination and to
select, from those paths, the best path to reach a destination network. The terms distance
vector and link state are used to group routing protocols into two broad categories based on
whether the routing protocol selects the best routing path based on a distance metric (the distance)
and an interface (the vector), or selects the best routing path by calculating the state of each link
in a path and finding the path that has the lowest total metric to reach the destination.
DISTANCE VECTOR
Distance

Distance is the cost of reaching a destination, usually based on the number of hops the
path passes through, or the total of all the administrative metrics assigned to the links in
the path.

Vector

From the standpoint of routing protocols, the vector is the interface out of which traffic will be
forwarded in order to reach a given destination network along a route or path
selected by the routing protocol as the best path to the destination network.

Distance vector protocols use a distance calculation plus an outgoing network interface (a vector)
to choose the best path to a destination network. The network protocol (IPX, SPX, IP, Appletalk,
DECnet etc.) will forward data using the best paths selected.
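The distance-plus-vector bookkeeping can be sketched as a Bellman-Ford style table update, as below. The table layout and names are assumptions made for illustration and do not correspond to any particular protocol's packet format:

def dv_update(my_table, neighbor, neighbor_table, link_cost):
    # Tables map destination -> (distance, outgoing interface / next hop).
    # Adopt the neighbor's route whenever going via that neighbor is cheaper.
    changed = False
    for dest, (n_dist, _) in neighbor_table.items():
        candidate = n_dist + link_cost
        if dest not in my_table or candidate < my_table[dest][0]:
            my_table[dest] = (candidate, neighbor)   # distance + vector
            changed = True
    return changed

table_A = {"NetX": (0, "local")}
table_B = {"NetX": (1, "if0"), "NetY": (2, "if1")}
dv_update(table_A, "link-to-B", table_B, link_cost=1)
print(table_A)   # NetY becomes reachable via link-to-B at distance 3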

COMMON DISTANCE VECTOR ROUTING PROTOCOLS INCLUDE:

 Appletalk RTMP
 IPX RIP
 IP RIP
 IGRP

ADVANTAGES OF DISTANCE VECTOR PROTOCOLS

Well Supported

Protocols such as RIP have been around a long time and most, if not all devices that
perform routing will understand RIP.
LINK STATE

Link state protocols track the status and connection type of each link and produce a calculated
metric based on these and other factors, including some set by the network administrator. Link
state protocols know whether a link is up or down and how fast it is, and calculate a cost to 'get
there'. Since routers run routing protocols to figure out how to get to a destination, you can think
of the 'link states' as being the status of the interfaces on the router. Link state protocols will take
a path which has more hops but uses a faster medium in preference to a path using a slower medium
with fewer hops.

Because of their awareness of media types and other factors, link state protocols require more
processing power (more circuit logic in the case of ASICs) and memory. Distance vector
algorithms, being simpler, require simpler hardware.

A COMPARISON: LINK STATE VS. DISTANCE VECTOR

See Fig. 1-1 below. If all routers were running a distance vector protocol, the path or 'route'
chosen would be from A to B directly over the ISDN serial link, even though that link is about 10
times slower than the indirect route from A to C to D to B.

A link state protocol would choose the A-C-D-B path because it uses a faster medium (100
Mb Ethernet). In this example, it would be better to run a link state routing protocol, but if all
the links in the network were the same speed, then a distance vector protocol would be better.
FIG. 1-1
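The trade-off in this comparison can be made concrete with a short Dijkstra-style sketch over per-link costs. The topology mirrors the A-B-C-D example above, but the link speeds and the bandwidth-based cost formula are assumptions introduced here, not values taken from the figure:

import heapq

def link_state_path(graph, src, dst):
    # Dijkstra's algorithm: lowest total cost wins (lower cost = faster link).
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue                         # stale heap entry
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

cost = lambda bps: 10**8 / bps               # OSPF-like cost convention, assumed here
isdn, fast_eth = cost(128_000), cost(100_000_000)
graph = {
    "A": {"B": isdn, "C": fast_eth},
    "B": {"A": isdn, "D": fast_eth},
    "C": {"A": fast_eth, "D": fast_eth},
    "D": {"C": fast_eth, "B": fast_eth},
}
print(link_state_path(graph, "A", "B"))      # prefers A-C-D-B despite more hops

A hop-count metric, by contrast, would pick the direct A-B link, which is the distance vector behaviour described above.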
Default route

In computer networking, the default route is a setting on a computer that defines
the packet forwarding rule to use when no specific route can be determined for a given Internet
Protocol (IP) destination address. All packets for destinations not established in the routing
table are sent via the default route.

The default route generally points to another router, which treats the packet the same way: if a
route matches, the packet is forwarded accordingly, otherwise the packet is forwarded to the
default route of that router. The route evaluation process in each router uses the longest prefix
match method to obtain the most specific route. The network with the longest subnet mask that
matches the destination IP address is the next-hop network gateway. The process repeats until a
packet is delivered to the destination. Each router traversal counts as one hop in the distance
calculation for the transmission path.

The device to which the default route points is often called the default gateway, and it often
carries out other functions such as packet filtering, firewalling, or proxy server operations.

The default route in Internet Protocol Version 4 (IPv4) is designated as the zero-
address 0.0.0.0/0 in CIDR notation,[1] often called the quad-zero route. The subnet
mask is given as /0, which effectively specifies all networks and is the shortest match possible.
A route lookup that does not match any other route falls back to this route. Similarly, in IPv6,
the default route is specified by ::/0.

In the highest-level segment of a network, administrators generally point the default route for a
given host towards the router that has a connection to a network service provider. Therefore,
packets with destinations outside the organization's local area network, typically destinations on
the Internet or a wide area network, are forwarded to the router with the connection to that
provider.
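A minimal sketch of this lookup, using Python's ipaddress module, is shown below. The routing-table entries and next-hop addresses are illustrative assumptions; the point is that the /0 default route matches everything but loses to any more specific prefix:

import ipaddress

routing_table = {
    "0.0.0.0/0":     "192.0.2.1",    # default route: shortest possible match
    "10.10.0.0/16":  "192.0.2.2",
    "10.10.20.0/24": "192.0.2.3",
}

def lookup(dest):
    # Longest prefix match: among all matching prefixes, pick the longest mask.
    addr = ipaddress.ip_address(dest)
    matches = [p for p in routing_table
               if addr in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routing_table[best]

print(lookup("10.10.20.7"))    # 192.0.2.3 via the /24, the most specific route
print(lookup("203.0.113.9"))   # 192.0.2.1 via the default route 0.0.0.0/0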
What is a Route Leak?

Route leaks involve the illegitimate advertisement of prefixes, blocks of IP addresses, which
propagate across networks and lead to incorrect or suboptimal routing. Route leaks are similar in
structure and effect to route hijacks, BGP hijacks and BGP man-in-the-middle attacks. However,
while hijacks typically connote malicious attacks, route leaks instead are usually inadvertent and
due to filter misconfigurations.

To understand a route leak, we first need to understand how routes are propagated across the
Internet. Routes are defined between networks with common routing policies, known as
Autonomous Systems (ASes). An AS originates prefixes for IP address ranges that it owns and
communicates the AS Path, or sequence of ASes to reach the origin, to other ASes using Border
Gateway Protocol (BGP). An AS also advertises prefixes for traffic that can be delivered by that
AS. As in Figure 1, AS100 will announce its own prefixes to its downstreams, upstream and
peers. AS100 will also announce certain prefixes that it learns and will prepend its AS number to
the path, so AS100 will announce [100 300] to its peers.

Figure 1: Typical routing advertisements for example AS100.

Route leaks can happen from an AS originating a prefix that it does not actually own or an AS
announcing that it can deliver traffic through a route that should not exist. Route leaks are
particularly prone to propagation when a more specific prefix is advertised (as BGP prefers the
most specific block of addresses) or when a path is advertised that is shorter than the currently
available paths (as BGP prefers the shortest AS Path). Practically, route leaks occur when BGP
advertisements are not properly filtered using the no-export community. ASes typically advertise
routes to providers and peers, filtering which routes are sent to which ASes. In Figure 2, AS100
improperly announces the path of its peer AS400 to its upstream transit provider.

Stop-and-Wait Protocol

If data frames arrive at the receiver site faster than they can be processed, the frames must be
stored until their use. Normally, the receiver does not have enough storage space, especially if it
is receiving data from many sources. This may result in either the discarding of frames or denial
of service. To prevent the receiver from becoming overwhelmed with frames, we somehow need
to tell the sender to slow down. There must be feedback from the receiver to the sender.

In Stop-and-Wait Protocol, the sender sends one frame, stops until it receives confirmation from
the receiver (okay to go ahead), and then sends the next frame. We still have unidirectional
communication for data frames, but auxiliary ACK frames (simple tokens of acknowledgment)
travel in the other direction. We have added flow control to the Simplest Protocol.

Design

The following figure illustrates the mechanism. We can see the traffic on the forward channel
(from sender to receiver) and the reverse channel. At any time, there is either one data frame on
the forward channel or one ACK frame on the reverse channel. We therefore need a half-duplex
link.
In this protocol, the sender sends one frame and waits for feedback from the receiver. When the
ACK arrives, the sender sends the next frame. Note that sending two frames in the protocol
involves the sender in four events and the receiver in two events.
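The send-one-frame-then-wait behaviour can be sketched with two cooperating threads and a pair of queues standing in for the forward and reverse channels. This is an illustrative sketch only; the queue-based channels and the names used are assumptions made here, not part of the original design:

import queue, threading

data_ch, ack_ch = queue.Queue(), queue.Queue()   # forward and reverse channels

def sender(frames):
    for frame in frames:
        data_ch.put(frame)     # send one data frame on the forward channel
        ack_ch.get()           # stop and wait: block until the ACK arrives
        print("sender: ACK received, sending next frame")

def receiver(n):
    for _ in range(n):
        frame = data_ch.get()  # data frame arrives
        print("receiver: got", frame)
        ack_ch.put("ACK")      # send the acknowledgment on the reverse channel

frames = ["frame 0", "frame 1", "frame 2"]
t = threading.Thread(target=receiver, args=(len(frames),))
t.start()
sender(frames)
t.join()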
Simplest Protocol:

The Simplest Protocol is one that has no flow or error control. It is a unidirectional protocol in
which data frames travel in only one direction, from the sender to the receiver. We assume
that the receiver can immediately handle any frame it receives with a processing time that is
small enough to be negligible. The data link layer of the receiver immediately removes the
header from the frame and hands the data packet to its network layer, which can also accept the
packet immediately.
Design:

There is no need for flow control in this scheme. The data link layer at the sender site gets data
from its network layer, makes a frame out of the data, and sends it. The data link layer at the
receiver site receives a frame from its physical layer, extracts data from the frame, and delivers
the data to its network layer. The data link layers of the sender and receiver provide transmission
services for their network layers. The data link layers use the services provided by their physical
layers (such as signaling, multiplexing, and so on) for the physical transmission of bits. The
following figure shows a design.

The sender site cannot send a frame until its network layer has a data packet to send. The
receiver site cannot deliver a data packet to its network layer until a frame arrives. If the protocol
is implemented as a procedure, we need to introduce the idea of events in the protocol.

The procedure at the sender site is constantly running; there is no action until there is a request
from the network layer. The procedure at the receiver site is also constantly running, but there is
no action until notification from the physical layer arrives. Both procedures are constantly
running because they do not know when the corresponding events will occur.
The following figure shows an example of communication using this protocol. It is very simple.
The sender sends a sequence of frames without even thinking about the receiver. To send three
frames, three events occur at the sender site and three events at the receiver site. Note that the
data frames are shown by tilted boxes; the height of the box defines the transmission time
difference between the first bit and the last bit in the frame.
