
Unit-2

Data Link Layer


The data link layer transforms the physical layer, a raw transmission facility, into a link responsible for node-to-node (hop-to-hop) communication within a LAN. The data link layer uses the services of the physical layer to send and receive bits over communication channels.

It has the following functions:


1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.

To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into frames for transmission. Each frame contains a frame header, a payload field for holding the packet, and a frame trailer, as given in the following figure.

In other words, the DLL is responsible for the following:


● Access Control
When two or more devices are connected to the same link, data link layer protocols are
necessary to determine which device has control over the link at any given time.

● Flow Control
If the rate at which the data is absorbed by the receiver is less than the rate at which
data is produced in the sender, the data link layer imposes a flow control mechanism to
avoid overwhelming/overflowing the receiver.

● Error Control
The data link layer adds reliability to the physical layer by adding mechanisms to detect
and retransmit damaged, duplicate, or lost frames.

● Physical Addressing
After creating frames, the Data link layer adds physical addresses (MAC
address) of the sender and/or receiver in the header of each frame.
● Framing
The data link layer divides the stream of bits received from the network layer into
manageable data units called frames.
Framing
Framing in the data link layer separates a message from one source to a destination, or from
other messages to other destinations, by adding a sender address and a destination address.
The destination address defines where the Frame is to go; the sender address helps the
recipient acknowledge the receipt.
Although the whole message could be packed in one frame, that is not normally done. One
reason is that a frame can be very large, making flow and error control very inefficient. When
a message is carried in one very large frame, even a single-bit error would require the
retransmission of the whole message. When a message is divided into smaller frames, a
single-bit error affects only that small frame.

Types of Framing

● In fixed-size framing, there is no need for defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of framing is the ATM wide-area network, which uses fixed-size frames called cells. Its main disadvantage is that if the data is shorter than the defined frame length, padding must be used.
● In variable-size framing, we need a way to define the end of one frame and the beginning of the next. Historically, two approaches were used for this purpose:
Length field - in this approach, if the length value is changed by an error, data loss can occur. For example, if we set the length to 15 bytes but a problem in the network changes it to 10 bytes, the destination station reads only 10 bytes and then stops reading, which makes this approach unreliable on its own.
The other approach is an end delimiter, which is further divided into two techniques:
1.Character Stuffing
● In byte stuffing (or character stuffing), a special byte is added to the data
section of the frame when there is a character with the same pattern as
the flag.
● The data section is stuffed with an extra byte. This byte is usually called
the escape character (ESC), which has a predefined bit pattern.
Whenever the receiver encounters the ESC character, it removes it from
the data section and treats the next character as data, not a delimiting
flag.
● Byte stuffing with the escape character allows the flag to appear in the data section of the frame, but it creates another problem: what if the text contains one or more escape characters followed by a flag? The receiver would remove the escape character but keep the flag, which would be incorrectly interpreted as the end of the frame. To solve this problem, any escape character that is part of the text must itself be marked by another escape character. In other words, if the escape character is part of the text, an extra one is added to show that the second one is data.
As shown in the following figure:
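The stuffing and unstuffing rules above can be sketched in Python. The FLAG and ESC byte values below (0x7E and 0x7D, HDLC-style) are assumptions for illustration, not values given in these notes:

```python
FLAG = 0x7E  # assumed flag byte (HDLC-style); value is illustrative
ESC = 0x7D   # assumed escape byte

def byte_stuff(data: bytes) -> bytes:
    """Insert an ESC before every FLAG or ESC byte in the payload."""
    out = bytearray()
    for b in data:
        if b in (FLAG, ESC):
            out.append(ESC)  # mark the special byte as ordinary data
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Reverse the stuffing: the byte after an ESC is always data."""
    out = bytearray()
    it = iter(stuffed)
    for b in it:
        if b == ESC:
            b = next(it)  # take the escaped byte literally
        out.append(b)
    return bytes(out)
```

A payload containing the flag or escape pattern survives the round trip unchanged, which is exactly the property the text describes.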

2. Bit Stuffing:-
● Bit stuffing is the process of adding one extra 0 whenever five consecutive 1s follow a 0 in the data, so that the receiver does not mistake the data for the flag pattern 01111110.
● In general, if the end delimiter contains n consecutive 1s following a 0 and the same pattern is encountered in the message bits, we stuff a 0 after every (n-1) consecutive 1s.
As shown in the figure.
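The bit-stuffing rule (a 0 after every five consecutive 1s) can be sketched in Python; for readability this sketch works on strings of '0'/'1' characters rather than raw bits:

```python
def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')  # the stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the '0' that follows every run of five '1's."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False  # discard the stuffed '0'
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
            run = 0
    return ''.join(out)
```

Because the stuffed data can never contain six 1s in a row, the flag 01111110 is unambiguous on the wire.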
Access Control
● In the data link layer there are two sublayers. The upper sublayer is responsible for data
link control, and the lower sublayer is responsible for resolving access to the shared
media. If the channel is dedicated, we do not need the lower sublayer.
● The IEEE has actually made this division for LANs. The upper sublayer that is
responsible for flow and error control is called the logical link control (LLC) layer; the
lower sublayer that is mostly responsible for multiple access resolution is called the
media access control (MAC) layer.
● When nodes or stations are connected and use a common link, called a multipoint or
broadcast link, we need a multiple-access protocol to coordinate access to the link.

CHANNEL ALLOCATION PROBLEM


● The central focus of MAC is how to allocate a single broadcast channel among
competing users.
● The channel might be a portion of the wireless spectrum in a geographic region,
or a single wire or optical fiber to which multiple nodes are connected.
● In both cases, the channel connects each user to all other users, and any user who makes full use of the channel interferes with other users who also wish to use it.
Static Channel Allocation
FDMA, TDMA, CDMA (Code Division Multiple Access) - refer to the first unit and also to the figures created in class.
Efficiency = Tt /(Tt +Tp)

Polling
“A poll is conducted in which all the stations willing to send data participate.”
● Polling works with topologies in which one device is designated as a primary
station and the other devices are secondary stations.
● All data exchanges must be made through the primary device even when the
ultimate destination is a secondary device.
● The primary device controls the link; the secondary devices follow its instructions.
It is up to the primary device to determine which device is allowed to use the
channel at a given time.
● The primary device, therefore, is always the initiator of a session
● The poll function is used by the primary device to solicit transmissions from the
secondary devices.
● When the primary is ready to receive data, it must ask (poll) each device in turn if
it has anything to send.
● When the first secondary is approached, it responds either with a NAK frame if it
has nothing to send or with data (in the form of a data frame) if it does.
● If the response is negative (a NAK frame), then the primary polls the next
secondary in the same manner until it finds one with data to send.
● When the response is positive (a data frame), the primary reads the frame and
returns an acknowledgment (ACK frame), verifying its receipt.
Disadvantages:-
● A station not chosen for a long time suffers starvation.
● Polling itself adds overhead.

Efficiency =Tt /(Tt +Tpoll+Tp)

Token passing(IEEE 802.5):


In token passing the following properties hold:-
● There is a token that keeps circulating in the channel.
● The token circulates in one direction (unidirectional).
● If a station wants to send data, it must hold the token and then send.
● Because there is only one token, there is no chance of collision.
● It uses piggybacked acknowledgements.
● Each station can send at most one frame per rotation of the token.
Ring Latency- Ring latency is the time required for a signal to propagate once around the ring.
Ring latency may be measured in seconds or in bits at the data transmission rate.

If d = total distance of the ring, v = signal velocity, Bw = bandwidth, N = number of stations in the LAN, and b = bit delay per station, then

Ring Latency = (d/v) + (N*b)/Bw

Bit delay defines how long a station holds a bit before delivering it to the other end.
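The ring-latency formula can be checked numerically. The values below (a 2 km ring, signal speed 2×10^8 m/s, 10 stations with a 1-bit delay each, 4 Mbps bandwidth) are assumed for illustration:

```python
def ring_latency(d, v, N, b, Bw):
    """Ring latency = propagation time around the ring (d/v)
    plus the total per-station bit delay (N*b)/Bw, in seconds."""
    return d / v + (N * b) / Bw

latency = ring_latency(d=2000, v=2e8, N=10, b=1, Bw=4e6)
# 1e-5 s of propagation + 2.5e-6 s of station delay = 12.5 microseconds
```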

Efficiency = Useful Time / Total Time

Total time = one complete cycle.
Suppose there are N stations and each station has data to send.
Transmission delay = Tt for each station.
For N stations, useful time = N*Tt.
Total time = d/v + N*THT = Tp + N*THT
THT = token holding time
The token holding time (THT) is the time a given node is allowed to hold the token.
It depends on which approach we follow, i.e., early token reinsertion or delayed token reinsertion.
Efficiency = N*Tt / (Tp + N*THT)
1. Delayed token reinsertion (DTR) –
● The sender transmits the data packet and waits until the whole packet completes a round trip of the ring and returns to it. Only when the whole packet has been received back does the sender release the token.
● There is only one packet in the ring at any instant.
● More reliable than ETR.

THT = Tp + Tt
Efficiency = N*Tt / (Tp + N*(Tp + Tt)) - for delayed token reinsertion
2. Early token reinsertion (ETR) –

● Sender does not wait for the data packet to complete revolution before
releasing the token. Token is released as soon as the data is
transmitted
● Multiple packets present in the ring
● Less reliable than DTR

In this case, THT = Tt

So, cycle time = Tp + N*(Tt)


Efficiency e = (N*Tt)/(Tp + N*(Tt))
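The two reinsertion strategies can be compared with a small sketch of the efficiency formulas above (the N, Tt, Tp values are assumed for illustration):

```python
def token_ring_efficiency(N, Tt, Tp, delayed):
    """Efficiency = N*Tt / (Tp + N*THT), where THT = Tp + Tt for
    delayed token reinsertion (DTR) and THT = Tt for early (ETR)."""
    THT = Tp + Tt if delayed else Tt
    return (N * Tt) / (Tp + N * THT)

# Assumed values: 10 stations, Tt = 1 ms, Tp = 0.2 ms.
e_dtr = token_ring_efficiency(10, 1e-3, 2e-4, delayed=True)
e_etr = token_ring_efficiency(10, 1e-3, 2e-4, delayed=False)
# ETR comes out more efficient because the token is released as
# soon as transmission ends, so the ring is idle for less time.
```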
RESERVATION

● A station needs to make a reservation before sending data.
● In each interval, a reservation frame precedes the data frames sent in that interval.
● If there are N stations in the system, there are exactly N reservation minislots in the reservation frame.
● Each minislot belongs to a station.
● When a station needs to send a data frame, it makes a reservation in its own minislot.
● The stations that have made reservations can send their data frames after the reservation frame.

Aloha
Pure ALOHA and Slotted ALOHA are both Random Access Protocols, implemented at the Medium Access Control (MAC) sublayer of the Data Link Layer. The purpose of the ALOHA protocol is to determine which competing station gets the next chance to access the multi-access channel at the MAC layer.

Pure ALOHA

The version of the protocol "Pure ALOHA" is quite simple:

● If you have data to send, send the data


This first step implies that Pure ALOHA does not check whether the channel is busy before transmitting. Since collisions can occur and data may have to be sent again, ALOHA cannot use 100% of the capacity of the communications channel. How long a station waits before it retransmits, and the likelihood that a collision occurs, are interrelated and affect how efficiently the channel can be used.
● If, while you are transmitting data, you receive any data from another station, there has
been a message collision. All transmitting stations will need to try resending "later".

The concept of "transmit later" is a critical aspect: the quality of the backoff scheme chosen
significantly influences the efficiency of the protocol, the ultimate channel capacity, and the
predictability of its behavior.

Efficiency of Pure Aloha (η) = G × e^(−2G)


Maximum Efficiency-

For maximum efficiency,

● We put dη/dG = 0
● The maximum value of η occurs at G = 1/2
● Substituting G = 1/2 in the above expression, we get-

Maximum efficiency of Pure Aloha
= 1/2 × e^(−2 × 1/2)
= 1/(2e)
≈ 0.184
= 18.4%

Thus,

Maximum Efficiency of Pure Aloha (η) = 18.4%

The maximum efficiency of Pure Aloha is very low, due to the large number of collisions.
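The maximum can be verified numerically with a short sketch of η(G) = G·e^(−2G):

```python
import math

def pure_aloha_eff(G):
    """Throughput of Pure ALOHA as a function of offered load G."""
    return G * math.exp(-2 * G)

# Scan G over (0, 2] in steps of 0.001 to locate the maximum numerically.
best_G = max((g / 1000 for g in range(1, 2001)), key=pure_aloha_eff)
# The scan lands on G = 0.5, where the throughput is 1/(2e) ≈ 0.184.
```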

2. Slotted Aloha-

● Slotted Aloha divides the time of the shared channel into discrete intervals called time slots.
● Any station can transmit its data in any time slot.
● The only condition is that the station must start its transmission from the beginning of the
time slot.
● If the beginning of the slot is missed, then the station has to wait until the beginning of
the next time slot.
● A collision may occur if two or more stations try to transmit data at the beginning of the
same time slot.

Efficiency-

Efficiency of Slotted Aloha (η) = G × e^(−G)

where G = the number of stations willing to transmit data at the beginning of the same time slot

Maximum Efficiency-

For maximum efficiency,

We put dη/dG = 0

The maximum value of η occurs at G = 1

Substituting G = 1 in the above expression, we get-

Maximum efficiency of Slotted Aloha
= 1 × e^(−1)
= 1/e
≈ 0.368
= 36.8%

Thus,

Maximum Efficiency of Slotted Aloha (η) = 36.8%


CSMA/CD
CSMA/CD (Carrier Sense Multiple Access / Collision Detection) is a media access control method that was widely used in early Ethernet LANs, when a shared bus topology was common and each node (computer) was connected by coaxial cable.

Consider a scenario where there are ‘n’ stations on a link, all waiting to transfer data through that channel. In this case, all ‘n’ stations would want to access the link/channel to transfer their own data. The problem arises when more than one station transmits at the same moment, causing collisions between the data from different stations.

CSMA/CD is one such technique, in which the stations that follow the protocol agree on certain rules and collision-detection measures for effective transmission, so that data reaches the destination without corruption.

Working of CSMA/CD
● Step 1: Check if the sender is ready to transmit data packets.
● Step 2: Check if the transmission link is idle.
The sender has to keep checking whether the transmission link/medium is idle. For this, it continuously senses transmissions from other nodes. If it senses that the carrier is free and there are no collisions, it sends the data; otherwise, it refrains from sending.
● Step 3: Transmit the data and check for collisions.
The sender transmits its data on the link. CSMA/CD does not use an acknowledgment system; it checks for successful and unsuccessful transmissions through collision signals. During transmission, if a collision signal is received by the node, transmission is stopped. The station then transmits a jam signal onto the link, waits a random time interval, and then attempts to transfer the data again, repeating the above process.
● Step 4: If no collision was detected during propagation, the sender completes its frame transmission and resets the counters.
e
How does a station know if its data collided?

CASE-1

Consider the above situation. Two stations, A & B.


Propagation Time: Tp = 1 hr ( Signal takes 1 hr to go from A to B)
At time t=0, A transmits its data.

t= 30 mins : Collision occurs.


After the collision occurs, a collision signal is generated and sent to both A & B to inform the
stations about the collision. Since the collision happened midway, the collision signal also takes
30 minutes to reach A & B.
Therefore, t=1 hr: A & B receive collision signals.
This collision signal is received by all the stations on that link. Then, how do we ensure that it is our own station’s data that collided?

For this, Transmission time (Tt) > Propagation time (Tp).
This is because before we transmit the last bit of our data, the first bits should have had time to reach the other end, so that if a collision happens, its signal can get back to us while we are still transmitting and we can detect it.

For this, consider the worst-case scenario.

CASE-2
Consider the above system again.


At time t=0, A transmits its data.



t= 59:59 mins : Collision occurs


This collision occurs just before the data reaches B. Now the collision signal takes another 59:59 minutes to reach A. Hence, A receives the collision information after approximately 2 hours, that is, after 2 * Tp.
Hence, to ensure a tight bound and detect the collision completely,

Tt >= 2 * Tp
This is the maximum collision time that a system can take to detect if the collision was of its own
data.
What should be the minimum length of the packet to be transmitted?
Transmission Time = Tt = Length of the packet/ Bandwidth of the link

[Number of bits transmitted by sender per second]

Substituting above, we get,

Length of the packet / Bandwidth of the link >= 2 * Tp

Length of the packet >= 2 * Tp * Bandwidth of the link

Padding helps in cases where we do not have such long packets: we can pad extra characters to the end of our data to satisfy the above condition.

Collision detection in CSMA/CD involves the following features:

● Carrier sense: Before transmitting data, a device listens to the network to check if the transmission medium is free. If the medium is busy, the device waits until it becomes free before transmitting data.
● Multiple access: In a CSMA/CD network, multiple devices share the same transmission medium. Each device has equal access to the medium, and any device can transmit data when the medium is free.
● Collision detection: If two or more devices transmit data simultaneously, a collision occurs. When a device detects a collision, it immediately stops transmitting and sends a jam signal to inform all other devices on the network of the collision. The devices then wait for a random time before attempting to transmit again, to reduce the chances of another collision.
● Backoff algorithm: In CSMA/CD, a backoff algorithm is used to determine when a device can retransmit data after a collision. The algorithm uses a random delay before a device retransmits data, to reduce the likelihood of another collision occurring.
● Minimum frame size: CSMA/CD requires a minimum frame size to ensure that all devices have enough time to detect a collision before the transmission ends. If a frame is too short, a device may not detect a collision and may continue transmitting, leading to data corruption on the network.

The CSMA access mode versions are:

● 1-persistent CSMA
● non-persistent CSMA
● p-persistent CSMA

1-persistent CSMA: In 1-persistent CSMA, the station continuously senses the channel to check whether it is idle or busy. When the channel is busy, the station waits for it to become idle. As soon as the station finds the channel idle, it transmits the frame without any delay.

p-persistent CSMA: When the station is ready to send frames, it senses the channel. If the channel is found busy, the station waits for the next slot. If the channel is found idle, it transmits the frame with probability p; with the remaining probability q = 1 - p, the station waits for the beginning of the next time slot.

Non-persistent CSMA: In this method, a station that has frames to send senses the channel. If the channel is idle, it sends the frame immediately. If the channel is found busy, it waits for a random time and then senses the channel again.
Efficiency of CSMA/CD

Efficiency = Tt / (C*2*Tp + Tt + Tp)

Tt - transmission time
Tp - propagation time
C - number of collisions before the first successful data transfer

In CSMA/CD, for success, only one station should transmit while the others do not. Let p be the probability of transmitting data successfully.

P(success) = nC1 * p * (1-p)^(n-1) (by using the Binomial distribution)

For maximum P(success), differentiate with respect to p and equate to zero (to find the maximum). As n grows large, we get P(max) = 1/e.

The expected number of attempts before the first success (Geometric distribution) is

1/P(max) = 1/(1/e) = e

So the number of attempts C = e. Putting a = Tp/Tt in Efficiency = Tt / (C*2*Tp + Tt + Tp) and dividing through by Tt, we get

Efficiency = 1/(2*e*a + 1 + a)

a = Tp/Tt
e = 2.72

Now

Efficiency = 1/(1 + 6.44a)

Further Analysis of Efficiency:

Efficiency = 1/(1 + 6.44a)
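The closed form can be checked against the original expression numerically; the value of a used below is assumed for illustration:

```python
import math

def csma_cd_efficiency(Tt, Tp):
    """Efficiency = Tt / (e*2*Tp + Tt + Tp), taking C = e collisions
    before the first success."""
    return Tt / (math.e * 2 * Tp + Tt + Tp)

a = 0.1  # assumed ratio Tp/Tt
exact = csma_cd_efficiency(Tt=1.0, Tp=a)  # with Tt = 1, Tp equals a
approx = 1 / (1 + 6.44 * a)               # textbook form: 2e + 1 ≈ 6.44
# exact and approx agree to about three decimal places
```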

Advantages of CSMA/CD:

● Simple and widely used: CSMA/CD is a widely used protocol for Ethernet networks, and its simplicity makes it easy to implement and use.
● Fairness: In a CSMA/CD network, all devices have equal access to the transmission medium, which ensures fairness in data transmission.
● Efficiency: CSMA/CD allows for efficient use of the transmission medium by preventing unnecessary collisions and reducing network congestion.

Disadvantages of CSMA/CD:

● Limited scalability: CSMA/CD has limitations in terms of scalability, and it may not be suitable for large networks with a high number of devices.
● Vulnerability to collisions: While CSMA/CD can detect collisions, it cannot prevent them from occurring. Collisions can lead to data corruption, retransmission delays, and reduced network performance.
● Inefficient use of bandwidth: CSMA/CD uses a random backoff algorithm that can result in inefficient use of network bandwidth if a device continually experiences collisions.
● Susceptibility to security attacks: CSMA/CD does not provide any security features, and the protocol is vulnerable to attacks such as packet sniffing and spoofing.
Flow Control Method
● Flow control coordinates the amount of data that can be sent before receiving an
acknowledgment and is one of the most important duties of the data link layer.
● In most protocols, flow control is a set of procedures that tells the sender how much data
it can transmit before it must wait for an acknowledgment from the receiver.
● The flow of data must not be allowed to overwhelm/overflow the receiver.
● Any receiving device has a limited speed at which it can process incoming data and a
limited amount of memory in which to store incoming data.
● The receiving device must be able to inform the sending device before those limits are
reached and to request that the transmitting device send fewer frames or stop
temporarily.
● The rate of such processing is often slower than the rate of transmission. For this
reason, each receiving device has a block of memory, called a buffer, reserved for
storing incoming data until they are processed. If the buffer begins to fill up, the receiver
must be able to tell the sender to halt transmission until it is once again able to receive.
“Flow control refers to a set of procedures used to restrict the amount of data that the
sender can send before waiting for acknowledgment.”

1. Stop-and-Wait
Protocols in which the sender sends one frame and then waits for an acknowledgement before proceeding are called stop-and-wait. It is simple to implement, but its efficiency is low.
The protocol is called the Stop-and-Wait Protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame.
Add figures which we created in class.

Efficiency = 1/(1 + 2a)
a = Tp/Tt

Tp = Propagation Delay, Tt = Transmission Delay
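As a quick numeric sketch of this formula (the delay values are assumed):

```python
def stop_and_wait_efficiency(Tt, Tp):
    """Efficiency = Tt / (Tt + 2*Tp) = 1 / (1 + 2a), with a = Tp/Tt."""
    return Tt / (Tt + 2 * Tp)

# Assumed values: Tt = 1 ms, Tp = 0.5 ms, so a = 0.5.
eff = stop_and_wait_efficiency(Tt=1e-3, Tp=0.5e-3)
# The link sits idle for a full round trip per frame, so efficiency
# drops quickly as a = Tp/Tt grows; here it is 1/(1 + 1) = 0.5.
```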

The Stop-and-Wait Protocol shows how to add flow control, but its basic form assumes a noiseless channel, and noiseless channels are nonexistent. We can either ignore errors (as we sometimes do) or add error control to our protocols. Now we discuss ARQ with Stop-and-Wait. Protocols in which the sender waits for a positive acknowledgement before advancing to the next data item are often called ARQ (Automatic Repeat reQuest) or PAR (Positive Acknowledgement with Retransmission).
***For NOISY CHANNELS we have to use
● S & W ARQ
● GBN ARQ
● SR ARQ
Stop and Wait
Use those figure in which we add Time out timer in class
● Error correction in Stop-and-Wait ARQ is done by keeping a copy of the sent frame and retransmitting it when the timer expires.
● In Stop-and-Wait ARQ, we use sequence numbers to number the frames.

Sliding Window
● In this protocol the sliding window is an abstract concept that defines the range of
sequence numbers that is the concern of the sender and receiver.
● In other words, the sender and receiver need to deal with only part of the possible
sequence numbers.
● The range which is the concern of the sender is called the send sliding window, the
range that is the concern of the receiver is called the receive sliding window.
● The send window is an imaginary box covering the sequence numbers of the data
frames which can be in transit. In each window position, some of these sequence
numbers define the frames that have been sent; others define those that can be sent.
● The maximum size of the send window is 2^m - 1, where m is the number of bits in the sequence number.
● The send window can slide one or more slots when a valid acknowledgment arrives.
Go-Back-N Automatic Repeat Request
● To improve the efficiency of transmission (filling the pipe), multiple frames must be in
transition while waiting for acknowledgment.
● In other words, we need to let more than one frame be outstanding to keep the channel
busy while the sender is waiting for acknowledgement.
● The first protocol able to achieve this goal is called Go-Back-N Automatic Repeat Request.
● In this protocol we can send several frames before receiving acknowledgments; we keep a copy of these frames until the acknowledgments arrive.
● The send window can slide one or more slots when a valid acknowledgment arrives.
● The receive window is an abstract concept defining an imaginary box of size 1.
● In Go-Back-N ARQ, the size of the send window must be less than 2^m; the size of the receiver window is always 1.
● Stop-and-Wait ARQ is a special case of Go-Back-N ARQ in which the size of the send window is 1.
● GBN generally uses cumulative acknowledgement.

Efficiency =N/(1+2a)
Use your class notes also to understand these concepts.
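The efficiency formula and the window-size constraint above can be sketched together (the values of a, N, and m below are assumed for illustration):

```python
def gbn_efficiency(N, a, m):
    """Efficiency = N/(1 + 2a), capped at 1 once the pipe is full;
    Go-Back-N requires the send window N to be less than 2^m."""
    assert N <= 2 ** m - 1, "send window must be less than 2^m"
    return min(1.0, N / (1 + 2 * a))

# Assumed: a = Tp/Tt = 3, so the pipe is full once N >= 1 + 2a = 7.
full = gbn_efficiency(N=7, a=3.0, m=3)     # 7 <= 2^3 - 1; efficiency 1.0
partial = gbn_efficiency(N=4, a=3.0, m=3)  # 4/7, the pipe is not yet full
```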
Selective Repeat Automatic Repeat Request
Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps track of only one variable, and there is no need to buffer out-of-order frames; they are simply discarded. But this protocol is very inefficient for a noisy link. On a noisy link a frame has a higher probability of damage, which means the resending of multiple frames. This resending uses up bandwidth and slows down the transmission. For noisy links, there is another mechanism that does not resend N frames when just one frame is damaged; only the damaged frame is resent. This mechanism is called Selective Repeat ARQ. It is more efficient for noisy links, but the processing at the receiver is more complex.
● The Selective Repeat Protocol also uses two windows: a send window and a receive window. But there are differences between the windows in this protocol and those in Go-Back-N. First, the size of the send window is much smaller: it is 2^(m-1), which is also equal to the size of the receive window.
● In Selective Repeat ARQ, the sizes of the sender and receiver windows must be at most one-half of 2^m.
● It uses independent acknowledgements.
● It can send negative acknowledgements (NAK).

Efficiency =N/(1+2a)
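The window-size rules of the two protocols can be summarized as a small sketch:

```python
def sr_max_window(m):
    """Selective Repeat: send and receive windows are both 2^(m-1),
    i.e. at most one-half of 2^m."""
    return 2 ** (m - 1)

def gbn_max_window(m):
    """Go-Back-N: the send window may be as large as 2^m - 1."""
    return 2 ** m - 1

# With m = 3 sequence-number bits:
# SR allows a window of 4 (half of 2^3), while GBN allows 7.
```

The smaller SR window is the price of accepting out-of-order frames: with a larger window, a retransmitted old frame could be mistaken for a new frame with the same sequence number.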

Acknowledgement
In data communications, when a receiver receives a message, it sends an acknowledgement
back to the sender to notify it about correct receipt of the message.
Types of acknowledgements
1. Cumulative acknowledgement
2. Independent acknowledgement
3. Piggybacking acknowledgement

Cumulative acknowledgement
● Cumulative acknowledgement is a process in which the receiver sends a single
acknowledgement in response to a finite number of frames received.
● Through this, the receiver acknowledges that it has correctly received all
previous frames or packets.
● When the sender receives an acknowledgement for frame n, it understands
correct delivery of frames n – 1, n – 2 and so on.
● Cumulative acknowledgement is used along with sliding window protocols. It
reduces the time and bandwidth wasted for sending acknowledgement.

Independent acknowledgement-
● Every packet is acknowledged independently.
● Reliability is high here, but a disadvantage is that traffic is also high, since we receive an independent ACK for every packet.
● SR uses this approach.
***Piggybacking acknowledgement
● The technique of temporarily delaying outgoing acknowledgements so that they can be
hooked onto the next outgoing data frame is known as piggybacking.
● The principal advantage of using piggybacking over having distinct acknowledgement
frames is a better use of the available channel bandwidth.
● The ack field in the frame header costs only a few bits, whereas a separate frame would
need a header, the acknowledgement, and a checksum.
● In addition, fewer frames sent generally means a lighter processing load at the receiver

Advantages of piggybacking:
1. The major advantage of piggybacking is better use of the available channel bandwidth, because an acknowledgment frame need not be sent separately.
2. Reduced usage cost.
3. Improved latency of data transfer.

Disadvantages of piggybacking:
1. The main disadvantage of piggybacking is the additional complexity.
2. If the data link layer waits too long before transmitting the acknowledgment (holding the ACK for some time), the sender may time out and retransmit the frame.
