
Unit – 3

DATA LINK LAYER


The data link layer is the second layer of the OSI model, sitting just above the physical layer. It is responsible for
maintaining the data link between two directly connected hosts or nodes.

Functionality of Data-link Layer

The data link layer performs many tasks on behalf of the upper layers. These are:
• Framing: The data-link layer takes packets from the network layer and encapsulates them
into frames. It then sends each frame bit-by-bit over the hardware. At the receiver's end,
the data link layer picks up the signals from the hardware and assembles them into frames.
• Addressing: The data-link layer provides a layer-2 hardware addressing mechanism. The
hardware address is assumed to be unique on the link. It is encoded into the hardware at
the time of manufacturing.
• Synchronization: When data frames are sent on the link, both machines must be
synchronized for the transfer to take place.
• Error Control: Signals may encounter problems in transit and bits may be flipped. Such
errors are detected, and an attempt is made to recover the actual data bits. An error
reporting mechanism is also provided to the sender.
• Flow Control: Stations on the same link may have different speeds or capacities. The
data-link layer provides flow control so that both machines exchange data at a rate each can handle.
• Multi-Access: When hosts on a shared link try to transfer data, there is a high
probability of collision. The data-link layer provides mechanisms such as CSMA/CD to
share the medium among multiple systems.

Framing
The first service provided by the data link layer is framing. The DLL at each node needs to encapsulate
the datagram (the packet received from the network layer) in a frame before sending it to the next node.
The node also needs to decapsulate the datagram from the frame received on the logical channel.
“A packet at the data link layer is normally called a frame”.

The process of dividing the data into frames and reassembling it is transparent to the user and is
handled by the data link layer.
Framing is an important aspect of data link layer protocol design because it allows the transmission
of data to be organized and controlled, ensuring that the data is delivered accurately and
efficiently.
Error Detection and Correction
Data may get corrupted during transmission for many reasons, such as noise and cross-talk.
The upper layers work on a generalized view of the network architecture and are not aware
of the actual hardware-level data processing. Hence, the upper layers expect error-free
transmission between systems. Most applications would not function as expected if they
received erroneous data. Applications such as voice and video are less affected and may
still work well with some errors.
The data-link layer uses error control mechanisms to ensure that frames (data bit streams) are
transmitted with a certain level of accuracy. But to understand how errors are controlled, it is
essential to know what types of errors may occur.
Types of Errors
Errors can be of three types, namely single-bit errors, multiple-bit errors, and burst errors.
• Single-bit error − In the received frame, only one bit has been corrupted, i.e. it has either
changed from 0 to 1 or from 1 to 0.

• Multiple-bit error − In the received frame, more than one bit has been corrupted.

• Burst error − In the received frame, two or more consecutive bits are corrupted.
Error Control
Error control can be done in two ways
• Error detection − Error detection involves checking whether any error has occurred
or not. The number of error bits and the type of error do not matter.
• Error correction − Error correction involves ascertaining the exact number of bits that
have been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits
along with the data bits. The receiver performs necessary checks based upon the additional
redundant bits. If it finds that the data is free from errors, it removes the redundant bits before
passing the message to the upper layers.
Error Detection Techniques
There are four main techniques for detecting errors in frames: Parity Check, Two-dimensional
Parity Check, Checksum and Cyclic Redundancy Check (CRC).
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make the
total number of 1s either even (even parity) or odd (odd parity).
While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the
following way:
In case of even parity: if the number of 1s is even, the parity bit value is 0; if the number of 1s
is odd, the parity bit value is 1.
In case of odd parity: if the number of 1s is odd, the parity bit value is 0; if the number of 1s is
even, the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of an even parity check,
if the count of 1s is even, the frame is accepted; otherwise, it is rejected. A similar rule is
adopted for the odd parity check.
The parity check is suitable for single bit error detection only.
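The parity computation can be sketched in a few lines of Python (a minimal illustration only; the sample frame value below is made up and not tied to any particular frame format):

def parity_bit(data_bits, even=True):
    """Return the parity bit for a string of '0'/'1' characters.

    even=True  -> even parity: total number of 1s (data + parity) becomes even
    even=False -> odd parity:  total number of 1s (data + parity) becomes odd
    """
    ones = data_bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

# Sender appends the parity bit; the receiver recomputes it and compares.
frame = "1011001"                           # example data, chosen arbitrarily
sent = frame + parity_bit(frame)            # even parity -> '10110010'
ok = parity_bit(sent[:-1]) == sent[-1]      # True when no single-bit error is detected
print(sent, ok)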
Two-dimensional Parity check
Parity check bits are calculated for each row, which is equivalent to a simple parity check.
Parity check bits are also calculated for all columns; both are then sent along with the data. At
the receiving end, these are compared with the parity bits calculated on the received data.

Checksum
• In the checksum error detection scheme, the data is divided into k segments, each of m bits.
• At the sender's end, the segments are added using 1's complement arithmetic to get the
sum. The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
• At the receiver’s end, all received segments are added using 1’s complement arithmetic to
get the sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise discarded.
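A sketch of this procedure in Python, assuming 8-bit segments; the four segment values are illustrative and not taken from the text above:

def ones_complement_sum(segments, m):
    """Add m-bit segments using 1's complement arithmetic (wrap-around carry)."""
    total = 0
    mask = (1 << m) - 1
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # fold any carry back into the sum
    return total

def make_checksum(segments, m=8):
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

def verify(segments, checksum, m=8):
    # The sum of all segments plus the checksum should be all 1s; its complement is 0.
    s = ones_complement_sum(segments + [checksum], m)
    return ((~s) & ((1 << m) - 1)) == 0

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]   # k = 4 segments, m = 8 bits
csum = make_checksum(data)
print(bin(csum), verify(data, csum))   # verify() is True when no error is detected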
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a
predetermined divisor agreed upon by the communicating systems. The divisor is generated
using polynomials.
• Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes the
resulting data unit exactly divisible by the divisor.
• The receiver divides the incoming data unit by the divisor. If there is no remainder, the
data unit is assumed to be correct and is accepted. Otherwise, it is understood that the
data is corrupted and is therefore rejected.
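The modulo-2 (XOR) long division can be sketched as follows. The data word 100100 and the divisor 1101 (x^3 + x^2 + 1) are chosen only for illustration:

def mod2_div(bits, divisor_bits):
    """Modulo-2 (XOR) long division; returns the remainder as a bit string."""
    bits = [int(b) for b in bits]
    divisor = [int(b) for b in divisor_bits]
    r = len(divisor) - 1
    for i in range(len(bits) - r):
        if bits[i] == 1:                      # XOR the divisor in at this position
            for j in range(len(divisor)):
                bits[i + j] ^= divisor[j]
    return "".join(str(b) for b in bits[-r:])

def crc_encode(data, divisor):
    # Sender: append r zero bits, divide, and replace the zeros with the remainder.
    r = len(divisor) - 1
    remainder = mod2_div(data + "0" * r, divisor)
    return data + remainder

def crc_check(codeword, divisor):
    # Receiver: a zero remainder means the frame is accepted.
    return set(mod2_div(codeword, divisor)) == {"0"}

data, divisor = "100100", "1101"
codeword = crc_encode(data, divisor)
print(codeword, crc_check(codeword, divisor))   # -> 100100001 True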
Longitudinal Redundancy Check (LRC)/2-D Parity Check
In this method, the data that the user wants to send is organised into a table of rows and
columns, i.e. the block of bits is arranged as a matrix. To detect an error, a redundant row
of bits is added to the whole block, and the block is transmitted to the receiver. The receiver
uses this redundant row to detect errors. After checking the data for errors, the receiver
accepts the data and discards the redundant row of bits.

Example:
If a block of 32 bits is to be transmitted, it is divided into a matrix of four rows and
eight columns, as shown in the following figure.
In this matrix of bits, a parity bit (odd or even) is calculated for each column. This means
that 32 data bits plus 8 redundant bits are transmitted to the receiver. When the data
reaches the destination, the receiver uses the LRC row to detect errors in the data.

Advantage :
LRC is used to detect burst errors.

Example: Suppose the 32-bit block plus its LRC that was being transmitted is hit by a
burst error of length 5, and some bits are corrupted as shown in the following figure:

Figure: Burst error & LRC

The LRC recomputed at the destination does not match the (now corrupted) LRC that was received.
The destination therefore knows that the data is erroneous and discards it.
Disadvantage:
The main problem with LRC is that it is not able to detect the error if two bits in a
data unit are damaged and two bits in exactly the same positions in another data unit
are also damaged.

Example: If the data 110011 010101 is changed to 010010 110100.

Figure: Two bits at the same bit position damaged in 2 data units

In this example, the 1st and 6th bits of the first data unit are changed, and the 1st and 6th
bits of the second data unit are changed as well, so the column parities remain the same and
the error goes undetected.
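A small Python sketch of the 2-D parity / LRC idea, using an assumed 32-bit block arranged as four 8-bit rows with even column parity:

def two_d_parity(block, cols):
    """Arrange a '0'/'1' string into rows of `cols` bits and compute the even
    column-parity row (the LRC) that is appended to the block."""
    rows = [block[i:i + cols] for i in range(0, len(block), cols)]
    lrc = "".join("1" if sum(int(r[c]) for r in rows) % 2 else "0" for c in range(cols))
    return rows, lrc

data = "11100111" "11011101" "00111001" "10101001"   # 32 illustrative bits, four 8-bit rows
rows, lrc = two_d_parity(data, 8)
print(rows, lrc)   # the receiver recomputes the LRC row and compares it with the one received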
Error Correction Techniques
Error correction techniques find out the exact number of bits that have been corrupted as
well as their locations. There are two principal ways:

• Backward Error Correction (Retransmission) − If the receiver detects an error in the
incoming frame, it requests the sender to retransmit the frame. It is a relatively simple
technique. But it can be used efficiently only where retransmission is not expensive, as in
fiber optics, and the time for retransmission is low relative to the requirements of the
application.
• Forward Error Correction − If the receiver detects an error in the incoming frame, it
executes an error-correcting code that recovers the actual frame. This saves the bandwidth
required for retransmission and is indispensable in real-time systems. However, if there are too
many errors, the frames still need to be retransmitted.
A widely used error correction code is the Hamming code.
Hamming Codes:
Hamming code is a set of error-correction codes that can be used to detect and correct errors
that can occur when data is moved or stored from the sender to the receiver. It is a
technique developed by R.W. Hamming for error correction.
Redundant bits –
Redundant bits are extra binary bits that are generated and added to the information-carrying
bits of a transfer so that errors introduced during the transfer can be detected and corrected.
The number of redundant bits can be calculated using the following formula:
2^r ≥ m + r + 1
where, r = redundant bit, m = data bit
Suppose the number of data bits is 7. Then the number of redundant bits is found by trying r = 4:
2^4 = 16 ≥ 7 + 4 + 1 = 12, whereas r = 3 gives 2^3 = 8 < 11.
Thus, the number of redundant bits = 4
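The relation 2^r ≥ m + r + 1 can be evaluated directly; a tiny sketch:

def redundant_bits(m):
    """Smallest r satisfying 2^r >= m + r + 1, where m is the number of data bits."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundant_bits(7))   # -> 4, matching the example above
print(redundant_bits(4))   # -> 3 (the 7-bit Hamming code used later in this unit)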
Parity bits –
A parity bit is a bit appended to a group of binary bits to make the total number of 1's
in the data either even or odd. Parity bits are used for error detection. There are two types of
parity bits:

1. Even parity bit:


In the case of even parity, the number of 1's in the given set of bits is counted. If that
count is odd, the parity bit value is set to 1, making the total count of 1's an even number.
If the total number of 1's in the given set of bits is already even, the parity bit's value is 0.
2. Odd parity bit –
In the case of odd parity, the number of 1's in the given set of bits is counted. If that
count is even, the parity bit value is set to 1, making the total count of 1's an odd number.
If the total number of 1's in the given set of bits is already odd, the parity bit's value is 0.
General Algorithm of Hamming code –
The Hamming code is simply the use of extra parity bits to allow the identification of an
error; a code sketch of the full procedure follows the steps below.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc.).
2. All the bit positions that are powers of 2 are marked as parity bits (1, 2, 4, 8, etc.).
3. All the other bit positions are marked as data bits.
4. Each data bit is included in a unique set of parity bits, as determined by its bit position in
binary form:
a. Parity bit 1 covers all bit positions whose binary representation includes a 1 in the
least significant position (1, 3, 5, 7, 9, 11, etc.).
b. Parity bit 2 covers all bit positions whose binary representation includes a 1 in the
second position from the least significant bit (2, 3, 6, 7, 10, 11, etc.).
c. Parity bit 4 covers all bit positions whose binary representation includes a 1 in the
third position from the least significant bit (4–7, 12–15, 20–23, etc.).
d. Parity bit 8 covers all bit positions whose binary representation includes a 1 in the
fourth position from the least significant bit (8–15, 24–31, 40–47, etc.).
e. In general, each parity bit covers all bits where the bitwise AND of the parity
position and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the
positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
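A sketch of this encoding procedure in Python. It assumes even parity and that the data bits are placed into the non-power-of-two positions starting from the highest position, which reproduces the parity values worked out in the next subsection:

def hamming_encode(data_bits):
    """Encode a '0'/'1' string with an even-parity Hamming code.

    Positions are numbered from 1; positions that are powers of two
    (1, 2, 4, 8, ...) hold parity bits, the rest hold data bits.
    """
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:                  # same relation as above
        r += 1
    n = m + r
    code = [0] * (n + 1)                       # index 0 unused; positions 1..n
    data = [int(b) for b in data_bits]
    di = 0
    for pos in range(n, 0, -1):                # fill non-power-of-two positions from the top
        if pos & (pos - 1) != 0:               # not a power of two -> data position
            code[pos] = data[di]
            di += 1
    for p in range(r):                         # compute each parity bit
        parity_pos = 2 ** p
        ones = sum(code[i] for i in range(1, n + 1)
                   if i & parity_pos and i != parity_pos)
        code[parity_pos] = ones % 2            # even parity
    return "".join(str(code[i]) for i in range(n, 0, -1))   # printed from position n down to 1

print(hamming_encode("1011001"))   # -> '10101001110' (R8=0, R4=1, R2=1, R1=0, as derived below)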

Determining the position of redundant bits –


These redundant bits are placed at the positions that correspond to powers of 2.
As in the above example:
1. The number of data bits = 7
2. The number of redundant bits = 4
3. The total number of bits = 11
4. The redundant bits are placed at the positions that are powers of 2, i.e. positions 1, 2, 4 and 8
Suppose the data to be transmitted is 1011001. The bits are placed as follows (positions 11
down to 1, with the redundant bits R8, R4, R2, R1 at positions 8, 4, 2 and 1):
1 0 1 R8 1 0 0 R4 1 R2 R1

Determining the Parity bits –

1. R1 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11

To find the redundant bit R1, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R1 is an even number the value of R1 (parity bit’s
value) = 0
2. R2 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the second position from the least significant bit.
R2: bits 2,3,6,7,10,11
To find the redundant bit R2, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R2 is odd the value of R2(parity bit’s value)=1
3. R4 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7

To find the redundant bit R4, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R4 is odd the value of R4(parity bit’s value) = 1
4. R8 bit is calculated using parity check at all the bits positions whose binary
representation includes a 1 in the fourth position from the least significant bit.
R8: bit 8,9,10,11

To find the redundant bit R8, we check for even parity. Since the total number of 1’s in
all the bit positions corresponding to R8 is an even number the value of R8(parity bit’s
value)=0.
Thus, the data transferred is 1 0 1 0 1 0 0 1 1 1 0 (positions 11 down to 1).

Error detection and correction –


Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission.
Rechecking the parity groups then gives the check values R8 R4 R2 R1 = 0 1 1 0.

These bits form the binary number 0110, whose decimal value is 6. Thus, bit 6 contains the
error. To correct the error, the 6th bit is changed back from 1 to 0.
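The receiver-side check can be sketched the same way: recompute each parity group over the received word (including the parity bit itself) and read the failing checks as a binary error position. The received word below is the example codeword with bit 6 flipped:

def hamming_syndrome(codeword):
    """Locate a single-bit error in an even-parity Hamming codeword.

    codeword is a '0'/'1' string written from the highest position down to
    position 1 (as in the tables above). Returns 0 if every parity check
    passes, otherwise the position of the flipped bit.
    """
    n = len(codeword)
    bit = {i: int(codeword[n - i]) for i in range(1, n + 1)}   # bit[position]
    syndrome, p = 0, 1
    while p <= n:
        ones = sum(bit[i] for i in range(1, n + 1) if i & p)   # parity bit included
        if ones % 2:                                           # this check fails
            syndrome += p
        p *= 2
    return syndrome

received = "10101101110"          # the codeword above with bit 6 flipped from 0 to 1
print(hamming_syndrome(received)) # -> 6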
Hamming Codes: Example:

D7 D6 D5 P4 D3 P2 P1

1 1 0 0 1 1 0

7 6 5 4 3 2 1

• 7 bit hamming code


• Data bits = 4
• Parity bits = 3, placed at positions 2^N for N = 0, 1, 2 (2^0 = 1, 2^1 = 2, 2^2 = 4); total code length n = 7
• P1=>D3, D5, D7
• P2=>D3, D6, D7
• P4=>D5, D6, D7

example: 1101=data bits


On the sender side, the even parity bits P1, P2 and P4 are calculated, added to the data bits,
and sent to the receiver:
P1 = parity over D3, D5, D7 = 1, 0, 1 (two 1's, even) = 0
P2 = parity over D3, D6, D7 = 1, 1, 1 (three 1's, odd) = 1
P4 = parity over D5, D6, D7 = 0, 1, 1 (two 1's, even) = 0
1100110 = data sent
Due to noise in the channel, the data received = 1000110
On the receiver side = 1000110

D7 D6 D5 P4 D3 P2 P1

1 0 0 0 1 1 0

7 6 5 4 3 2 1

Check C1 over P1, D3, D5, D7 = 0, 1, 0, 1 = even number of 1's → C1 = 0
Check C2 over P2, D3, D6, D7 = 1, 1, 0, 1 = odd number of 1's → C2 = 1
Check C4 over P4, D5, D6, D7 = 0, 0, 0, 1 = odd number of 1's → C4 = 1
Error Correction using Hamming codes:
4 2 1
C4 C2 C1 = 1 1 0 → (1*4) + (1*2) + (0*1) = 6, so the 6th bit is in error and is flipped back to correct the frame.

Example: The 7-bit Hamming code word received by a receiver is 1011011. Assuming even
parity, state whether the received code is correct or wrong. If wrong, locate the erroneous bit
and correct it.

D7 D6 D5 P4 D3 P2 P1

1 0 1 1 0 1 1

7 6 5 4 3 2 1

• P1=>D3, D5, D7
• P2=>D3, D6, D7
• P4=>D5, D6, D7

Check C1 over P1, D3, D5, D7 = 1, 0, 1, 1 = odd number of 1's → C1 = 1
Check C2 over P2, D3, D6, D7 = 1, 0, 0, 1 = even number of 1's → C2 = 0
Check C4 over P4, D5, D6, D7 = 1, 1, 0, 1 = odd number of 1's → C4 = 1
4 2 1
C4 C2 C1 = 1 0 1 → (1*4) + (0*2) + (1*1) = 5, so the 5th bit is in error.
Corrected data = 1001011
Data Flow Control:
• It is a technique that ensures the proper flow of data from sender to receiver. It is
essential because the sender may transmit data at a very high rate while the receiver
cannot receive and process it at that rate, for example when the receiver carries a heavy
traffic load or has less processing power than the sender.
• Flow control is basically a technique that allows two stations working and processing at
different speeds to communicate with one another. Flow control in the data link layer
restricts and coordinates the number of frames or the amount of data the sender can send
before it must wait for an acknowledgment from the receiver.

Feedback – based Flow Control:


• In this technique, the sender transmits a frame to the receiver; the receiver then sends
feedback to the sender, permitting it to transmit more data or telling it how the receiver
is coping. This simply means that the sender transmits further frames only after it has
received acknowledgments from the receiver.
Rate – based Flow Control:
• In this technique, when the sender transmits data faster than the receiver can accept it,
a built-in mechanism in the protocol limits the overall rate at which the sender may
transmit, without any feedback or acknowledgment from the receiver.
Techniques of Flow Control in Data Link Layer:
1. Stop-and-Wait Flow Control:

This method is the easiest and simplest form of flow control. The message or data is broken
down into multiple frames, and the receiver indicates its readiness to receive each frame of
data. Only when an acknowledgment is received does the sender transmit the next frame.
This process continues until the sender transmits an EOT (End of Transmission) frame. In this
method, only one frame can be in transit at a time. This leads to inefficiency, i.e. low
productivity, if the propagation delay is much longer than the transmission delay.
Advantages –
• This method is very simple, and each frame is checked and acknowledged.
• It can also be used for noisy channels.
• This method is also very accurate.
Disadvantages –
• This method is fairly slow.
• In this, only one packet or frame can be sent at a time.
• It is very inefficient and makes the transmission process very slow.

1.1. Error control- Stop and wait ARQ (automatic repeat request)

Stop-and-Wait ARQ is also known as the alternating bit protocol. It is one of the simplest flow
and error control mechanisms and is widely used in telecommunications to transmit data
between two connected devices. The receiver indicates its readiness to receive data for each
frame. The sender sends a data packet to the receiver, then stops and waits for an ACK
(acknowledgment) from the receiver. If the ACK does not arrive within a given time period
(the time-out), the sender resends the frame and waits for the ACK again. If the sender receives
the ACK, it transmits the next data packet and again waits for the ACK from the receiver. This
stop-and-wait process continues until the sender has no more frames to send.
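A toy simulation of Stop-and-Wait ARQ is sketched below. The loss probability, retry limit and frame contents are made up for illustration; a real implementation would use actual timers and a channel instead of random draws:

import random

def stop_and_wait(frames, loss_prob=0.3, max_tries=10):
    """Toy Stop-and-Wait ARQ: send one frame, wait for ACK, retransmit on timeout.

    loss_prob models a frame or ACK being lost; the sequence number alternates
    between 0 and 1 (the alternating-bit idea)."""
    seq = 0
    for data in frames:
        for attempt in range(1, max_tries + 1):
            delivered = random.random() > loss_prob            # did the frame survive the channel?
            ack_received = delivered and random.random() > loss_prob
            if ack_received:
                print(f"frame {seq} '{data}' acknowledged after {attempt} attempt(s)")
                break
            print(f"frame {seq} '{data}' timed out, retransmitting")
        seq ^= 1                                               # alternate 0/1
    # a real receiver would also use the sequence number to discard duplicate frames

stop_and_wait(["pkt-A", "pkt-B", "pkt-C"])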
2. Sliding Window Flow Control:
This protocol improves the efficiency of stop and wait protocol by allowing multiple frames to
be transmitted before receiving an acknowledgment.

The working principle of this protocol can be described as follows −


• Both the sender and the receiver have finite-sized buffers called windows. The sender
and the receiver agree upon the number of frames to be sent based upon the buffer
size.
• The sender sends multiple frames in a sequence, without waiting for acknowledgment.
When its sending window is filled, it waits for acknowledgment. On receiving
acknowledgment, it advances the window and transmits the next frames, according to
the number of acknowledgments received.
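The sender-side window bookkeeping described above can be sketched as follows. A Go-Back-N-style cumulative acknowledgment is assumed, and the class and method names are only illustrative:

from collections import deque

class SlidingWindowSender:
    """Minimal sender-side window bookkeeping, for illustration only."""

    def __init__(self, window_size):
        self.window_size = window_size
        self.base = 0                 # oldest unacknowledged frame
        self.next_seq = 0             # next frame number to send
        self.buffer = deque()         # copies kept for possible retransmission

    def can_send(self):
        return self.next_seq < self.base + self.window_size

    def send(self, frame):
        assert self.can_send(), "window full, must wait for an acknowledgment"
        self.buffer.append((self.next_seq, frame))
        print(f"sending frame {self.next_seq}")
        self.next_seq += 1

    def ack(self, ack_no):
        # cumulative ACK: everything up to ack_no is delivered, so the window slides forward
        while self.buffer and self.buffer[0][0] <= ack_no:
            self.buffer.popleft()
        self.base = ack_no + 1

sender = SlidingWindowSender(window_size=3)
while sender.can_send():
    sender.send("data")        # frames 0, 1, 2 go out back-to-back
sender.ack(1)                  # frames 0 and 1 acknowledged; the window slides
print(sender.can_send())       # True: more frames may now be sent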
Advantages –
• It performs much better than stop-and-wait flow control.
• This method increases efficiency.
• Multiple frames can be sent one after another.
Disadvantages –
• The main issue is complexity at the sender and receiver due to the transferring of
multiple frames.
• The receiver might receive data frames or packets out of sequence.
Multiple Access Protocol:

1. Random Access Protocol: In this, all stations have equal priority, that is, no station
has higher priority than another. Any station can send data depending on the medium's
state (idle or busy). It has two features:
1. There is no fixed time for sending data
2. There is no fixed sequence of stations sending data

ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used on any shared
medium to transmit data. Using this method, any station can transmit data across the network
whenever it has a data frame available for transmission.

Aloha Rules

1. Any station can transmit data to a channel at any time.


2. It does not require any carrier sensing.
3. Collision of data can take place and data frames may be lost during the transmission
of data through multiple stations.
4. There is no collision detection; the sender relies on acknowledgment of the frames to learn whether a transmission succeeded.
5. It requires retransmission of data after some random amount of time.
Pure Aloha

Pure Aloha is used whenever data is available for sending over the channel at a station. In pure
Aloha, each station transmits data onto the channel without checking whether the channel is
idle or busy, so collisions may occur and data frames may be lost. After transmitting a data
frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not
arrive within the specified time, the station assumes the frame has been lost or destroyed,
waits for a random amount of time, called the back-off time (Tb), and then retransmits the
frame. This continues until the data are successfully delivered to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.


2. Maximum throughput occurs when G = 1/ 2 that is 18.4%.
3. Successful transmission of data frame is S = G * e ^ - 2 G.
Slotted Aloha

Slotted Aloha was designed to overcome pure Aloha's poor efficiency, since pure Aloha has
a very high probability of frame collision. In slotted Aloha, the shared channel is divided into
fixed time intervals called slots. If a station wants to send a frame on the shared channel,
the frame can only be sent at the beginning of a slot, and only one frame may be sent in each
slot. If a station misses the beginning of a slot, it must wait until the beginning of the next
slot. However, a collision can still occur if two or more stations try to send a frame at the
beginning of the same time slot.

1. Maximum throughput occurs in the slotted Aloha when G = 1 that is 37%.


2. The probability of successfully transmitting a data frame in slotted Aloha is S =
G * e ^ - G.
3. The total vulnerable time required in slotted Aloha is Tfr.
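The two throughput formulas can be checked numerically with a small sketch (G is the average number of frames generated per frame time):

import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)          # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)              # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))     # ~0.184 -> 18.4 % at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3))  # ~0.368 -> about 37 % at G = 1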

CSMA (Carrier Sense Multiple Access)

Carrier Sense Multiple Access is a media access protocol in which a station senses the traffic
on the channel (idle or busy) before transmitting data. If the channel is idle, the station can
send data onto the channel; otherwise, it must wait until the channel becomes idle. This
reduces the chance of a collision on the transmission medium.

CSMA Access MODES:


• 1-Persistent: Each node first senses the shared channel; if the channel is idle, it
immediately sends the data. Otherwise, it keeps monitoring the channel and transmits
the frame unconditionally (with probability 1) as soon as the channel becomes idle.
• Non-Persistent: Before transmitting, each node senses the channel; if the channel is
idle, it immediately sends the data. Otherwise, the station waits for a random time (it
does not sense continuously) and then senses the channel again, transmitting the frame
when the channel is found idle.
• P-Persistent: A combination of the 1-persistent and non-persistent modes, used on
slotted channels. Each node senses the channel; if the channel is idle, it sends a frame
with probability p. With probability q = 1 - p it defers to the next time slot and repeats
the process.
• O-Persistent: Each station is assigned a transmission order (priority) before frames are
sent on the shared channel. When the channel is found idle, each station waits for its
assigned turn to transmit.
CSMA/ CD

It is a carrier sense multiple access/ collision detection network protocol to transmit data
frames. The CSMA/CD protocol works with a medium access control layer. Therefore, it first
senses the shared channel before broadcasting the frames, and if the channel is idle, it transmits
a frame to check whether the transmission was successful. If the frame is successfully received,
the station sends another frame. If any collision is detected in the CSMA/CD, the station sends
a jam/ stop signal to the shared channel to terminate data transmission. After that, it waits for
a random time before sending a frame to a channel.

CSMA/ CA

It is a carrier sense multiple access/collision avoidance network protocol for carrier


transmission of data frames. It works with the medium access control layer. When a station
sends a data frame onto the channel, it waits for an acknowledgment to confirm whether the
channel was clear. If the station receives only a single signal (its own), the data frame has been
successfully transmitted to the receiver. But if it gets two signals (its own and one from another
station with which it collided), a collision of frames has occurred on the shared channel. The
sender thus detects a collision of its frame when the expected acknowledgment signal does not arrive.

Controlled Access Protocol

It is a method of reducing data frame collision on a shared channel. In controlled access, the
stations seek information from one another to find which station has the right to send. It
allows only one node to send at a time, to avoid collision of messages on shared medium.
The three controlled-access methods are:

1. Reservation
2. Polling
3. Token Passing

Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• The time line has two kinds of periods:
1. Reservation interval of fixed time length
2. Data transmission period of variable frames.
• If there are N stations, the reservation interval is divided into N slots, and each station
has one slot.
• Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
• In general, the i-th station may announce that it has a frame to send by inserting a 1 bit
into the i-th slot. After all N slots have been checked, each station knows which stations
wish to transmit.
• The stations that have reserved their slots transfer their frames in that order.
• After the data transmission period, the next reservation interval begins.
• Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five slot reservation frame.
In the first interval, only stations 1, 3, and 4 have made reservations. In the second
interval, only station 1 has made a reservation.
Polling
• Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
• In this, one acts as a primary station(controller) and the others are secondary stations.
All data exchanges must be made through the controller.
• The message sent by the controller contains the address of the node being selected for
granting access.
• Although all nodes receive the message, only the addressed one responds to it and sends
data, if any. If there is no data, a “poll reject” (NAK) message is usually sent back.
• Problems include high overhead of the polling messages and high dependence on the
reliability of the controller.
Efficiency
Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt/(Tt + Tpoll)
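A one-line sketch of the efficiency formula, with illustrative timing values:

def polling_efficiency(t_t, t_poll):
    """Efficiency = Tt / (Tt + Tpoll); both times in the same unit."""
    return t_t / (t_t + t_poll)

print(polling_efficiency(t_t=8.0, t_poll=2.0))   # -> 0.8, i.e. 80 % of the time carries data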

Token Passing
• In the token passing scheme, the stations are logically connected to each other in the
form of a ring, and access to the medium is governed by a token.
• A token is a special bit pattern or a small message which circulates from one station to
the next in some predefined order.
• In a token ring, the token is passed from one station to the adjacent station in the ring,
whereas in a token bus each station uses the bus to send the token to the next station
in some predefined order.
• In both cases, the token represents permission to send. If a station has a frame queued
for transmission when it receives the token, it can send that frame before passing the
token to the next station. If it has no queued frame, it simply passes the token on.
• After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbours and for the other N - 1 stations to send a frame, if
they have one.
• Problems such as duplication of the token, loss of the token, and the insertion or
removal of a station need to be tackled for correct and reliable operation of this scheme.
Performance
Performance of a token ring can be characterised by 2 parameters:
1. Delay, which is a measure of the time between when a packet is ready and when it is
delivered. The average time (delay) required to send the token to the next station =
a/N.
2. Throughput, which is a measure of the successful traffic.
Throughput, S = 1/(1 + a/N) for a<1
and
S = 1/{a(1 + 1/N)} for a>1.
where N = number of stations
a = Tp/Tt
(Tp = propagation delay and Tt = transmission delay)
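The throughput formulas can be evaluated as follows (the values of a and N are illustrative only):

def token_ring_throughput(a, N):
    """Throughput of token passing as given above; a = Tp/Tt, N = number of stations."""
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

print(round(token_ring_throughput(a=0.5, N=10), 3))   # a < 1 case
print(round(token_ring_throughput(a=2.0, N=10), 3))   # a > 1 case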

Channelization Protocols:
Channelization is a method that provides multiple access, in which the available bandwidth
of the link is shared in time, frequency, or code among the different stations.

Channelization Protocols are broadly classified as follows:

• FDMA (Frequency-Division Multiple Access)


• TDMA (Time-Division Multiple Access)
• CDMA (Code-Division Multiple Access)
1. Frequency-Division Multiple Access

With the help of this technique, the available bandwidth is divided into frequency bands.
Each station is allocated a band in order to send its data. Or in other words, we can say that
each band is reserved for a specific station and it belongs to the station all the time.

• Each station uses a bandpass filter to confine the transmitter's frequencies to its band.
• To prevent interference between stations, the allocated bands are separated from one
another by small guard bands.
• FDMA specifies a predetermined frequency band for the entire period of communication.
• Stream (continuous) data can easily be carried using FDMA.

Figure: Frequency-Division media access.

Advantages of FDMA

Given below are some of the benefits of using the FDMA technique:

• This technique is efficient when the traffic is uniformly constant.
• If a channel is not in use, it sits idle.
• FDMA is algorithmically simple and its complexity is low.
• FDMA places no restriction on the type of baseband signal or the type of modulation.

Disadvantages of FDMA

• With FDMA, the maximum flow rate per channel is fixed and small.
2. Time-Division Multiple Access

Time-Division Multiple access is another method to access the channel for shared medium
networks.

• With this technique, the stations share the bandwidth of the channel in time.
• Each station is allocated a time slot during which it can send data.
• Each station transmits its data in its assigned time slot.
• A difficulty with TDMA is achieving synchronization between the different stations.
• Each station needs to know the beginning of its slot and the location of its slot.
• If the stations are spread over a large area, propagation delays occur; guard times are
used to compensate for them.
• The data link layer in each station tells its physical layer to use the allocated time slot.

Figure: Time-Division media access.

Some examples of TDMA are as follows:

• Personal Digital Cellular (PDC)
• Integrated Digital Enhanced Network
• Universal Terrestrial Radio Access (UTRA)
3. Code-Division Multiple Access

CDMA (code-division multiple access) is another technique used for channelization.

• The CDMA technique differs from FDMA because only one channel occupies the
entire bandwidth of the link.
• The CDMA technique differs from TDMA because all stations can send data
simultaneously; there is no timesharing.
• The CDMA technique simply means communication with different codes.
• In CDMA, a single channel carries all the transmissions simultaneously.
• CDMA is based on coding theory: each station is assigned a code, which is a sequence
of numbers called chips.
• Data from the different stations can be transmitted simultaneously because they use
different codes, as illustrated below.
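The chip-code idea can be illustrated with a small sketch using 4-chip Walsh codes for four stations; the station names and bit values are made up. Each station's data bit is spread by its code, the channel adds the signals, and a receiver recovers a station's bit by taking the inner product with that station's code:

# Walsh codes for four stations (the rows are mutually orthogonal chip sequences)
CODES = {
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}

def cdma_transmit(bits):
    """Each station multiplies its data bit (+1 for 1, -1 for 0, 0 if silent)
    by its chip sequence; the channel carries the element-wise sum."""
    signal = [0, 0, 0, 0]
    for station, bit in bits.items():
        level = {1: +1, 0: -1, None: 0}[bit]
        for i, chip in enumerate(CODES[station]):
            signal[i] += level * chip
    return signal

def cdma_receive(signal, station):
    """The inner product with the station's code recovers that station's bit."""
    inner = sum(s * c for s, c in zip(signal, CODES[station]))
    if inner == 0:
        return None                     # the station did not transmit
    return 1 if inner > 0 else 0

channel = cdma_transmit({"A": 1, "B": 0, "C": None, "D": 1})
print([cdma_receive(channel, s) for s in "ABCD"])   # -> [1, 0, None, 1]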

Advantages of CDMA

Given below are some of the advantages of using the CDMA technique:

• Provides high voice quality.
• CDMA operates at low power levels.
• The capacity of the system is higher than that of TDMA and FDMA.
• CDMA is more cost-effective.

DATA LINK LAYER PROTOCOL

Noiseless Channels
For noiseless channels there are two protocols, which are as follows −

• Simplest Protocol
• Stop-and-Wait Protocol
Simplest Protocol
Step 1 − The Simplest Protocol is one that has neither flow control nor error control.
Step 2 − It is a unidirectional protocol where data frames are traveling in one direction that is from
the sender to receiver.
Step 3 − Assume that the receiver can handle any frame it receives with a processing time that
is small enough to be negligible. The data link layer of the receiver immediately removes the header
from the frame and hands the data packet to its network layer, which can also accept the packet
immediately.

Stop-and-Wait Protocol
Step 1 − If data frames arrive at the receiver side faster than they can be processed, the
frames must be stored until they can be used.
Step 2 − Generally, the receiver does not have enough storage space, especially if it is receiving data
from many sources. This may result in either discarding of frames or denial of service.
Step 3 − To prevent the receiver from becoming overwhelmed with frames, the sender must slow
down. There must be ACK from the receiver to the sender.
Step 4 − In this protocol the sender sends one frame, stops until it receives confirmation from the
receiver, and then sends the next frame.
Step 5 − We still have unidirectional communication for data frames, but auxiliary ACK frames travel
in the other direction. Thus, we add flow control to the previous protocol.
Noisy Channels
There are three types of Automatic Repeat Request (ARQ) for noisy channels, which are as follows −

• Stop & wait Automatic Repeat Request.


• Go-Back-N Automatic Repeat Request.
• Selective Repeat Automatic Repeat Request.
Stop and Wait Automatic Repeat Request
Step 1 − In a noisy channel, if a frame is damaged during transmission, the receiver will detect this with
the help of the checksum.
Step 2 − If a damaged frame is received, it will be discarded, and the transmitter will retransmit the
same frame after receiving a proper acknowledgement.
Step 3 − If the acknowledgement frame gets lost, the data link layer on 'A' eventually times out. Not
having received an ACK, it assumes that its data frame was lost or damaged and sends the frame
containing packet 1 again. This duplicate frame also arrives at the data link layer on 'B'; thus part of
the file would be duplicated and the protocol is said to have failed.
Step 4 − To solve this problem, a sequence number is assigned in the header of the message.
Step 5 − The receiver checks the sequence number to determine whether the message is a duplicate,
since only one message is outstanding at any time.
Step 6 − The sending and receiving station needs only a 1-bit alternating sequence of '0' or '1' to
maintain the relationship of the transmitted message and its ACK/ NAK.
Step 7 − A modulo-2 numbering scheme is used where the frames are alternatively labelled with '0'
or '1' and positive acknowledgements are of the form ACK 0 and ACK 1.
Normal operation of Stop & Wait ARQ is given below −
Stop & Wait ARQ with Lost frame is as follows –

Go-Back-N ARQ
To improve the transmission efficiency, we need more than one frame to be outstanding to keep the
channel busy while the sender is waiting for acknowledgement.
There are two protocols developed for achieving this goal, and they are as follows −

• Go-Back-N Automatic Repeat Request
• Selective Repeat Automatic Repeat Request

Both are sliding window protocols.
Go-Back-N ARQ
Step 1 − In this protocol we can send several frames before receiving acknowledgements.
Step 2 − we keep a copy of these frames until the acknowledgment arrives.
Step 3 − Frames from a sending station are numbered sequentially. However, because the sequence
number of each frame must be included in the header, we need to set a limit on its size.
Step 4 − If the header of the frame allows m bits for the sequence number, the sequence numbers
range from 0 to 2^m - 1, after which they repeat.
Example
For m = 2, the range of sequence numbers is: 0 to 3, i.e.
0,1,2,3, 0,1,2,3,…
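A tiny sketch of this wrap-around numbering (the count of 10 frames is arbitrary):

def sequence_numbers(m, count):
    """Frame numbers with an m-bit sequence field wrap around modulo 2**m."""
    return [i % (2 ** m) for i in range(count)]

print(sequence_numbers(m=2, count=10))   # -> [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]
# In Go-Back-N the send window can hold at most 2**m - 1 outstanding frames.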
The Go-Back-N ARQ is shown below in diagram format –

Fiber Distributed Data Interface (FDDI)


Fiber Distributed Data Interface (FDDI) is a set of ANSI and ISO standards for transmission of data
in local area network (LAN) over fiber optic cables. It is applicable in large LANs that can extend up
to 200 kilometers in diameter.
Features
• FDDI uses optical fiber as its physical medium.
• It operates in the physical and medium access control (MAC layer) of the Open Systems
Interconnection (OSI) network model.
• It provides a high data rate of 100 Mbps and can support thousands of users.
• It is used in LANs up to 200 kilometers for long distance voice and multimedia
communication.
• It uses a ring-based token passing mechanism and is derived from the IEEE 802.4 token bus
standard.
• It contains two token rings, a primary ring for data and token transmission and a
secondary ring that provides backup if the primary ring fails.
• FDDI technology can also be used as a backbone for a wide area network (WAN).

The following diagram shows FDDI –


Frame Format
The frame format of FDDI is similar to that of token bus as shown in the following diagram −

The fields of an FDDI frame are −


• Preamble: 1 byte for synchronization.
• Start Delimiter: 1 byte that marks the beginning of the frame.
• Frame Control: 1 byte that specifies whether this is a data frame or control frame.
• Destination Address: 2-6 bytes that specifies address of destination station.
• Source Address: 2-6 bytes that specifies address of source station.
• Payload: A variable length field that carries the data from the network layer.
• Checksum: 4 bytes frame check sequence for error detection.
• End Delimiter: 1 byte that marks the end of the frame.

What is Polynomial Code?


A polynomial code is a linear code whose set of valid code words consists of polynomials that are
divisible by a shorter fixed polynomial, known as the generator polynomial.
Polynomial codes are used for error detection and correction during the transmission of data as well
as for the storage of data.
Types of Polynomial Codes
The types of polynomial codes are:

• Cyclic Redundancy Code


• Bose–Chaudhuri–Hocquenghem (BCH) Codes
• Reed–Solomon Codes
Representation of Bit Strings with Polynomials
The code words, which are essentially bit strings, are represented by polynomials whose coefficients
are either 0 or 1. A k-bit word is represented by a polynomial with terms ranging from x^0 to x^(k-1).
The order of this polynomial is the power of its highest-order term, i.e. (k - 1).
For example, the 8-bit word 11001101 is represented by the following polynomial of order 7:
1·x^7 + 1·x^6 + 0·x^5 + 0·x^4 + 1·x^3 + 1·x^2 + 0·x^1 + 1·x^0 = x^7 + x^6 + x^3 + x^2 + 1
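A short sketch that prints the polynomial form of a bit string (MSB first, as in the example above):

def bits_to_polynomial(bits):
    """Write a '0'/'1' word as a polynomial in x, most significant bit first."""
    k = len(bits)
    terms = []
    for i, b in enumerate(bits):
        if b == "1":
            power = k - 1 - i
            terms.append("1" if power == 0 else ("x" if power == 1 else f"x^{power}"))
    return " + ".join(terms) if terms else "0"

print(bits_to_polynomial("11001101"))   # -> x^7 + x^6 + x^3 + x^2 + 1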
