
Unit 2

Data-link Layer
Introduction

The Data Link Layer is the second layer of the OSI layered model. It is one of the most
complicated layers, with complex functionalities and responsibilities. The data link layer hides
the details of the underlying hardware and presents itself to the upper layer as the medium of
communication.

The data link layer works between two hosts that are directly connected in some sense.
This direct connection could be point-to-point or broadcast. Systems on a broadcast
network are said to be on the same link. The work of the data link layer becomes more
complex when it deals with multiple hosts on a single collision domain.

The data link layer is responsible for converting the data stream into signals, bit by bit, and
sending it over the underlying hardware. At the receiving end, the data link layer picks up data
from the hardware in the form of electrical signals, assembles it into a
recognizable frame format, and hands it over to the upper layer.

Data link layer has two sub-layers:

 Logical Link Control: It deals with protocols, flow-control, and error control
 Media Access Control: It deals with actual control of media

Functionality of Data-link Layer

Data link layer does many tasks on behalf of upper layer. These are:

 Framing
The data-link layer takes packets from the Network Layer and encapsulates them into
frames. Then, it sends each frame bit-by-bit on the hardware. At the receiver's end,
the data link layer picks up signals from the hardware and assembles them into frames.

 Addressing
Data-link layer provides layer-2 hardware addressing mechanism. Hardware
address is assumed to be unique on the link. It is encoded into hardware at the
time of manufacturing.
 Synchronization
When data frames are sent on the link, both machines must be synchronized in
order for the transfer to take place.
 Error Control
Sometimes signals may encounter problems in transit and bits get flipped. Such errors
are detected, and an attempt is made to recover the actual data bits. The layer also
provides an error-reporting mechanism to the sender.
 Flow Control
Stations on the same link may have different speeds or capacities. The data-link layer
provides flow control that enables both machines to exchange data at the same speed.
 Multi-Access
When hosts on a shared link try to transfer data, there is a high probability of
collision. The data-link layer provides mechanisms such as CSMA/CD that give
multiple systems the capability of accessing a shared medium.
Data Link Layer Design Issues
The data link layer in the OSI (Open Systems Interconnection) model lies between the
physical layer and the network layer. This layer converts the raw transmission facility
provided by the physical layer into a reliable and error-free link.

The main functions and the design issues of this layer are

 Providing services to the network layer
 Framing
 Error Control
 Flow Control

Services to the Network Layer


In the OSI model, each layer uses the services of the layer below it and provides
services to the layer above it. The data link layer uses the services offered by the
physical layer. The primary function of this layer is to provide a well-defined service
interface to the network layer above it.

The services provided can be of three types −

 Unacknowledged connectionless service
 Acknowledged connectionless service
 Acknowledged connection-oriented service

Framing
The data link layer encapsulates each data packet from the network layer into frames
that are then transmitted.

A frame has three parts, namely −

 Frame Header
 Payload field that contains the data packet from network layer
 Trailer

Error Control
The data link layer ensures error free link for data transmission. The issues it caters to
with respect to error control are −

 Dealing with transmission errors
 Sending acknowledgement frames in reliable connections
 Retransmitting lost frames
 Identifying duplicate frames and deleting them
 Controlling access to shared channels in case of broadcasting

Flow Control
The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not be
able to handle them. There will be frame losses even if the transmission is error-free. The
two common approaches for flow control are −

 Feedback-based flow control
 Rate-based flow control

What are the Data Link Layer services provided to the Network Layer?
In the OSI (Open Systems Interconnection) model, each layer uses the services of the
layer below it and provides services to the layer above it. The primary function of the
data link layer is to provide a well-defined service interface to the network layer above
it.

Virtual Communication versus Actual Communication
The main service provided is to transfer data packets from the network layer on the
sending machine to the network layer on the receiving machine. The data link layer of the
sending machine accepts data from the network layer and sends it to the
data link layer of the destination machine, which hands it to the network layer there.

In actual communication, the data link layer transmits bits via the physical layers and
physical medium. However virtually, this can be visualized as the two data link layers
communicating with each other using a data link protocol.

The processes are depicted in the following diagram −

Types of Services
The data link layer offers three types of services.

 Unacknowledged connectionless service − Here, the data link layer of the
sending machine sends independent frames to the data link layer of the receiving
machine. The receiving machine does not acknowledge receiving the frames. No
logical connection is set up between the host machines. Errors and data loss are not
handled in this service. This is applicable to Ethernet services and voice
communications.
 Acknowledged connectionless service − Here, no logical connection is set up
between the host machines, but each frame sent by the source machine is
acknowledged by the destination machine on receiving it. If the source does not
receive the acknowledgment within a stipulated time, it resends the frame.
This is used in Wi-Fi (IEEE 802.11) services.
 Acknowledged connection-oriented service − This is the best service that the
data link layer can offer to the network layer. A logical connection is set up
between the two machines and the data is transmitted along this logical path. The
frames are numbered, which keeps track of lost frames and also ensures that
frames are received in the correct order. The service has three distinct phases −
o Set up of connection – A logical path is set up between the source and the
destination machines. Buffers and counters are initialised to keep track of
frames.
o Sending frames – The frames are transmitted.
o Release connection – The connection is released, buffers and other
resources are released.
It is appropriate for satellite communications and long-distance telephone
circuits.
Various kind of Framing in Data link layer

Framing is a function of the Data Link Layer that is used to separate a message from a
particular source or sender to a particular destination or receiver, distinguishing it from
all other messages to all other destinations, simply by adding a sender address and a
destination address. The destination or receiver address indicates where the message or
packet is to go, and the sender or source address helps the recipient to acknowledge receipt.

A frame is the data unit of the data link layer that is transmitted or transferred among
various network points. It includes complete addressing, the essential protocol
information, and control information. The physical layer only accepts and transfers a
stream of bits without any regard to meaning or structure. Therefore, it is up to the data
link layer to create and recognize frame boundaries.

This can be achieved by attaching special bit patterns to the start and end of the
frame. Since these bit patterns might accidentally occur in the data, special care needs
to be taken to make sure that they are not incorrectly interpreted
as frame delimiters.

Consider a simple point-to-point connection between two computers or devices
consisting of a wire in which data is transferred as a stream of bits. All of
these bits must be framed into discernible blocks of information.
Methods of Framing :
There are basically four methods of framing as given below –

1. Character Count

2. Flag Byte with Character Stuffing

3. Starting and Ending Flags, with Bit Stuffing

4. Encoding Violations

These are explained as following below.

1. Character Count:
This method is rarely used. It counts the total number of
characters that are present in the frame, which is done by using a field in the header.
The character count method tells the data link layer at the receiver or destination the
total number of characters that follow, and hence where the frame ends.

There is also a disadvantage of using this method: if the character count is
disturbed or distorted by an error occurring during transmission, the destination or
receiver might lose synchronization. The receiver might then not be able
to locate or identify the beginning of the next frame.

2. Character Stuffing:
Character stuffing is also known as byte stuffing or character-oriented framing. It is
similar to bit stuffing, but byte stuffing operates on bytes whereas bit
stuffing operates on bits. In byte stuffing, a special byte known as ESC
(Escape Character), which has a predefined pattern, is added to the data section of the
data stream or frame whenever a character in the data has the same pattern as the
flag byte.

The receiver removes this ESC and keeps the data part, so the byte that would otherwise
be mistaken for a flag causes no problem. In simple words, we can say that character
stuffing is the addition of one extra byte whenever an ESC or flag byte is present in the text.
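The idea can be illustrated with a short sketch. This is a minimal, assumed example; the FLAG and ESC values and the function names are illustrative, not mandated by the text:

```python
FLAG = 0x7E   # frame delimiter (illustrative value)
ESC = 0x7D    # escape character (illustrative value)

def byte_stuff(payload: bytes) -> bytes:
    """Insert an ESC byte before any FLAG or ESC byte occurring in the data."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)      # the one extra byte added by character stuffing
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    """Receiver removes each ESC and keeps the byte that follows it."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)

data = bytes([0x41, FLAG, 0x42])
assert byte_unstuff(byte_stuff(data)) == data
```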

3. Bit Stuffing:
Bit stuffing is also known as bit-oriented framing or the bit-oriented approach. In bit
stuffing, extra bits are added to the data stream by the network protocol.
It is the insertion of extra bits into a transmission unit or message
as a simple way to provide signaling information and
data to the receiver and to avoid the appearance of unintended
control sequences in the data.

It is a type of protocol management performed to break up bit patterns that would cause
the transmission to go out of synchronization. Bit stuffing is an essential part of the
transmission process in network and communication protocols. It is also required in USB.
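A minimal sketch of the idea, assuming the HDLC-style rule of inserting a 0 after every five consecutive 1s (bits are shown as a string of '0'/'1' characters for readability; the function names are assumptions):

```python
def bit_stuff(bits: str) -> str:
    """After every run of five consecutive 1s, insert an extra 0."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this bit is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
            run = 0
    return "".join(out)

frame = "01111110111110"
assert bit_unstuff(bit_stuff(frame)) == frame
```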

4. Physical Layer Coding Violations:


Encoding violation is a method that is used only for networks in which the encoding on
the physical medium includes some redundancy, i.e., more than one
physical signal element is used to encode or represent one unit of data.

Error Detection and Correction in Data link Layer
Data-link layer uses error control techniques to ensure that frames, i.e. bit streams
of data, are transmitted from the source to the destination with a certain extent of
accuracy.

Errors
When bits are transmitted over the computer network, they are liable to get
corrupted due to interference and network problems. The corrupted bits lead to
spurious data being received by the destination; such corruptions are called errors.

Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst
errors.
 Single bit error − In the received frame, only one bit has been corrupted,
i.e. either changed from 0 to 1 or from 1 to 0.

 Multiple bits error − In the received frame, more than one bit is corrupted.

 Burst error − In the received frame, more than one consecutive bit is corrupted.

Error Control
Error control can be done in two ways −
 Error detection − Error detection involves checking whether any error has
occurred or not. The number of error bits and the type of error do not
matter.
 Error correction − Error correction involves ascertaining the exact number
of bits that has been corrupted and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some
additional bits along with the data bits. The receiver performs necessary checks
based upon the additional redundant bits. If it finds that the data is free from
errors, it removes the redundant bits before passing the message to the upper
layers.

Error Detection Techniques


There are three main techniques for detecting errors in frames: Parity Check,
Checksum and Cyclic Redundancy Check (CRC).

Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to
make the number of 1s either even (in case of even parity) or odd (in case of odd
parity).
While creating a frame, the sender counts the number of 1s in it and adds the
parity bit in the following way −
 In case of even parity: If the number of 1s is even, the parity bit value is 0. If
the number of 1s is odd, the parity bit value is 1.
 In case of odd parity: If the number of 1s is odd, the parity bit value is 0. If the
number of 1s is even, the parity bit value is 1.

On receiving a frame, the receiver counts the number of 1s in it. In case of even
parity check, if the count of 1s is even, the frame is accepted, otherwise, it is
rejected. A similar rule is adopted for odd parity check.
The parity check is suitable for single bit error detection only.
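A minimal sketch of even-parity generation and checking, with the data word given as a string of bits (the function names are assumptions for illustration):

```python
def add_even_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s becomes even."""
    parity = "1" if bits.count("1") % 2 else "0"
    return bits + parity

def check_even_parity(frame: str) -> bool:
    """Accept the frame only if the total count of 1s (data + parity) is even."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("1011001")              # four 1s -> parity bit 0
assert check_even_parity(frame)                 # no error: frame accepted
assert not check_even_parity("0" + frame[1:])   # one flipped bit is detected
```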

Checksum
In this error detection scheme, the following procedure is applied
 Data is divided into fixed sized frames or segments.
 The sender adds the segments using 1’s complement arithmetic to get the
sum. It then complements the sum to get the checksum and sends it along
with the data frames.
 The receiver adds the incoming segments along with the checksum using 1’s
complement arithmetic to get the sum and then complements it.
 If the result is zero, the received frames are accepted; otherwise, they are
discarded.
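A minimal sketch of the procedure described above, assuming 8-bit segments (the segment width and function names are assumptions for illustration):

```python
MASK = 0xFF                     # 8-bit segments (width chosen for illustration)

def ones_complement_sum(segments):
    """Add segments with end-around carry (1's complement arithmetic)."""
    total = 0
    for s in segments:
        total += s
        total = (total & MASK) + (total >> 8)   # wrap the carry back in
    return total

def make_checksum(segments):
    return ones_complement_sum(segments) ^ MASK  # complement of the sum

def verify(segments, checksum):
    return (ones_complement_sum(segments + [checksum]) ^ MASK) == 0

data = [0b10011001, 0b11100010, 0b00100100]
csum = make_checksum(data)
assert verify(data, csum)                        # clean data: accepted
assert not verify([d ^ 1 for d in data], csum)   # corrupted data: rejected
```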

Cyclic Redundancy Check (CRC)


Cyclic Redundancy Check (CRC) involves binary division of the data bits being
sent by a predetermined divisor agreed upon by the communicating system. The
divisor is generated using polynomials.
 Here, the sender performs binary division of the data segment by the divisor.
It then appends the remainder called CRC bits to the end of the data
segment. This makes the resulting data unit exactly divisible by the divisor.
 The receiver divides the incoming data unit by the divisor. If there is no
remainder, the data unit is assumed to be correct and is accepted.
Otherwise, it is understood that the data is corrupted and is therefore
rejected.

Error Correction Techniques


Error correction techniques find out the exact number of bits that have been
corrupted, as well as their locations. There are two principal ways −
 Backward Error Correction (Retransmission) −  If the receiver detects an
error in the incoming frame, it requests the sender to retransmit the frame.
It is a relatively simple technique. But it can be efficiently used only where
retransmission is not expensive, as in fiber optics, and the time for
retransmission is low relative to the requirements of the application.
 Forward Error Correction − If the receiver detects some error in the
incoming frame, it executes an error-correcting code to reconstruct the actual
frame. This saves the bandwidth required for retransmission. It is indispensable in
real-time systems. However, if there are too many errors, the frames need
to be retransmitted.
The four main error correction codes are

 Hamming Codes
 Binary Convolution Code
 Reed – Solomon Code
 Low-Density Parity-Check Code

What is a Hamming code?


Hamming code is a linear code that can detect up to two-bit errors and correct
single-bit errors.
In Hamming code, the source encodes the message by adding redundant bits to the
message. These redundant bits are inserted at certain
positions in the message to accomplish the error detection and correction process.

Application of Hamming code

Here are some common applications of using Hamming code:

 Satellites

 Computer Memory

 Modems

 PlasmaCAM

 Open connectors

 Shielding wire

 Embedded Processor

Advantages of Hamming code

 The Hamming code method is effective on channels where single-bit errors
are the expected type of error.

 Hamming code not only provides detection of a bit error but also helps
you to identify the bit containing the error so that it can be corrected.

 The ease of use of Hamming codes makes them suitable for use in
computer memory and for single-error correction.

Disadvantages of Hamming code

 It is a single-bit error detection and correction code. However, if multiple bits
are in error, the correction process may change another bit that was actually
correct. This can cause the data to be corrupted further.

 The Hamming code algorithm can correct only single-bit errors.

How to Encode a message in Hamming Code

The process used by the sender to encode the message includes the following three
steps:

 Calculation of total numbers of redundant bits.

 Checking the position of the redundant bits.

 Lastly, calculating the values of these redundant bits.

When the above redundant bits are embedded within the message, it is sent to the
user.

Step 1) Calculation of the total number of redundant bits.

Let us assume that the message contains:

 n – number of data bits

 p – number of redundant bits which are added so that (n + p) can indicate at
least (n + p + 1) different states.

Here, the (n + p) states depict the location of an error in each of the (n + p) bit positions, and
one extra state indicates no error. As p bits can indicate 2^p states, 2^p has to be at least
equal to (n + p + 1).
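For instance, the smallest p satisfying 2^p ≥ n + p + 1 can be found with a quick sketch (the function name is an assumption for illustration):

```python
def redundant_bits(n: int) -> int:
    """Smallest p such that 2**p >= n + p + 1."""
    p = 0
    while 2 ** p < n + p + 1:
        p += 1
    return p

assert redundant_bits(4) == 3    # 4 data bits need 3 parity bits (Hamming (7,4))
assert redundant_bits(7) == 4    # 7 data bits need 4 parity bits
```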

Step 2) Placing the redundant bits in their correct position.

The p redundant bits should be placed at bit positions that are powers of 2, for example
1, 2, 4, 8, 16, etc. They are referred to as p1 (at position 1), p2 (at position 2), p3 (at
position 4), etc.

Step 3) Calculation of the values of the redundant bit.

The redundant bits are parity bits that make the number of 1s either even or
odd.

The two types of parity are −

 When the total number of 1s in the message is made even, it is called even parity.

 When the total number of 1s in the message is made odd, it is called odd parity.

Here, each redundant bit is calculated as a parity bit. For example, p1 covers all
the bit positions whose binary representation includes a 1 in the 1st (least significant)
position, excluding the position of p1 itself.

P1 is the parity bit for every data bit in positions whose binary representation
includes a 1 in the least significant position, not including position 1, like (3, 5, 7, 9, ….)

P2 is the parity bit for every data bit in positions whose binary representation
includes a 1 in position 2 from the right, not including position 2, like (3, 6, 7, 10, 11, …)

P3 is the parity bit for every bit in positions whose binary representation includes a
1 in position 3 from the right, not including position 4, like (5–7, 12–15, …)
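A compact sketch of the encoding just described, assuming even parity (the function name and the exact bit ordering are assumptions for illustration):

```python
def hamming_encode(data_bits: str) -> str:
    """Place data bits around parity positions 1, 2, 4, 8, ... and set each
    parity bit so that the positions it covers contain an even number of 1s."""
    n = len(data_bits)
    p = 0
    while 2 ** p < n + p + 1:          # smallest p with 2^p >= n + p + 1
        p += 1
    total = n + p
    code = [0] * (total + 1)           # 1-indexed positions; index 0 unused
    it = iter(data_bits)
    for pos in range(1, total + 1):
        if pos & (pos - 1) != 0:       # not a power of two -> data position
            code[pos] = int(next(it))
    for i in range(p):
        mask = 1 << i                  # parity position 1, 2, 4, ...
        ones = sum(code[pos] for pos in range(1, total + 1)
                   if pos & mask and pos != mask)
        code[mask] = ones % 2          # even parity over the covered positions
    return "".join(str(b) for b in code[1:])

# Four data bits 1011 give the 7-bit codeword 0110011
# (parity bits occupy positions 1, 2 and 4).
print(hamming_encode("1011"))
```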

Explain the Cyclic Redundancy Checks (CRCs)

The Cyclic Redundancy Check (CRC) is the most powerful method for error
detection. Given a k-bit message, the transmitter creates
an (n – k)-bit sequence called the frame check sequence (FCS). The outgoing frame
of n bits is then exactly divisible by some predetermined number. Modulo-2
arithmetic is used: binary addition with no carries, just like the XOR
operation.
Redundancy means duplication. The redundant bits used by CRC are derived by
dividing the data unit by a fixed divisor. The remainder is the CRC.
Qualities of CRC
 It should have exactly one bit less than the divisor.

 Appending it to the end of the data unit should make the resulting bit sequence
exactly divisible by the divisor.
CRC generator and checker

Process
 A string of n 0s is appended to the data unit, where n is one less than
the number of bits in the fixed divisor.
 The new data unit is divided by the divisor using a procedure known as
binary (modulo-2) division; the remainder resulting from the division is the CRC.
 The CRC of n bits obtained in step 2 replaces the appended 0s at the end of
the data unit.
Example

Message D = 1010001101 (10 bits)
    Predetermined P = 110101 (6 bits)
    FCS R = to be calculated (5 bits)
    Hence, n = 15, k = 10 and (n – k) = 5
The message is multiplied by 2^5, giving 101000110100000.
This product is divided by P.

The remainder R = 01110 is added to 2^5 · D to give T = 101000110101110, which is sent.


Suppose that there are no errors, and the receiver gets T perfect. The received
frame is divided by P.

Because of no remainder, there are no errors.
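The worked example above can be reproduced with a small modulo-2 division routine (a sketch; bit strings and the function name are assumptions for illustration):

```python
def mod2div(dividend: str, divisor: str) -> str:
    """Plain modulo-2 (XOR) long division; returns the remainder bits."""
    buf = list(dividend)
    for i in range(len(dividend) - len(divisor) + 1):
        if buf[i] == "1":                          # divisor "goes into" this position
            for j, d in enumerate(divisor):
                buf[i + j] = "0" if buf[i + j] == d else "1"   # XOR step
    return "".join(buf[-(len(divisor) - 1):])

D, P = "1010001101", "110101"
R = mod2div(D + "0" * (len(P) - 1), P)    # sender: append 5 zeros, then divide
T = D + R
print(R, T)                               # 01110  101000110101110
assert set(mod2div(T, P)) == {"0"}        # receiver: zero remainder, frame accepted
```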

Flow Control
o It is a set of procedures that tells the sender how much data it can transmit
before the data overwhelms the receiver.
o The receiving device has limited speed and limited memory to store the data.
Therefore, the receiving device must be able to inform the sending device to
stop the transmission temporarily before the limits are reached.
o It requires a buffer, a block of memory for storing the information until they
are processed.

Two methods have been developed to control the flow of data:

o Stop-and-wait
o Sliding window

Stop-and-wait

o In the Stop-and-wait method, the sender waits for an acknowledgement after
every frame it sends.
o Only when the acknowledgement is received is the next frame sent. The
process of alternately sending a frame and waiting continues until the
sender transmits the EOT (End of Transmission) frame.

Advantage of Stop-and-wait

The Stop-and-wait method is simple, as each frame is checked and acknowledged
before the next frame is sent.

Disadvantage of Stop-and-wait

The Stop-and-wait technique is inefficient because each frame must travel all
the way to the receiver, and an acknowledgement must travel all the way back before the
next frame is sent. Each frame sent and received uses the entire time needed to
traverse the link.

Sliding Window

o The Sliding Window is a method of flow control in which a sender can
transmit several frames before getting an acknowledgement.
o In Sliding Window control, multiple frames can be sent one after
another, due to which the capacity of the communication channel can be utilized
efficiently.
o A single ACK can acknowledge multiple frames.
o Sliding Window refers to imaginary boxes at both the sender and receiver
end.
o The window can hold the frames at either end, and it provides the upper
limit on the number of frames that can be transmitted before the
acknowledgement.

o Frames can be acknowledged even when the window is not completely
filled.
o The window has a specific size, and the frames are numbered modulo n, which
means that they are numbered from 0 to n-1. For example, if n = 8, the
frames are numbered 0,1,2,3,4,5,6,7,0,1,2,3,4,5,6,7,0,1........
o The size of the window is n-1. Therefore, a maximum of n-1
frames can be sent before an acknowledgement.
o When the receiver sends the ACK, it includes the number of the next frame
that it wants to receive. For example, to acknowledge the string of frames
ending with frame number 4, the receiver will send the ACK containing the
number 5. When the sender sees the ACK with the number 5, it comes to know
that the frames from 0 through 4 have been received.

Sender Window

o At the beginning of a transmission, the sender window contains n-1 frames,
and as they are sent out, the left boundary moves inward, shrinking the
size of the window. For example, if the size of the window is w and three
frames are sent out, then the number of frames left in the sender window
is w-3.
o Once the ACK has arrived, then the sender window expands to the number
which will be equal to the number of frames acknowledged by ACK.
o For example, suppose the size of the window is 7, frames 0 through 4 have
been sent out, and no acknowledgement has arrived; then the sender window
contains only two frames, i.e., 5 and 6. Now, if an ACK arrives with the
number 4, it means that frames 0 through 3 have arrived undamaged, and
the sender window is expanded to include the next four frames. Therefore,
the sender window contains six frames (5,6,7,0,1,2).
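The window arithmetic of this example can be sketched in a few lines of bookkeeping (window size 7, modulo-8 numbering; the class and method names here are illustrative assumptions):

```python
WINDOW_SIZE = 7                 # n - 1 with frames numbered modulo n = 8

class SenderWindow:
    """Tracks how many frames may still be sent before an ACK must arrive."""
    def __init__(self):
        self.next_seq = 0       # sequence number of the next new frame
        self.unacked = 0        # frames sent but not yet acknowledged

    def send_frame(self):
        assert self.unacked < WINDOW_SIZE, "window closed: wait for ACK"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % 8
        self.unacked += 1       # window shrinks by one
        return seq

    def receive_ack(self, ack):
        """ACK carries the number of the next frame the receiver expects."""
        oldest = (self.next_seq - self.unacked) % 8
        self.unacked -= (ack - oldest) % 8      # window expands by frames acked

w = SenderWindow()
for _ in range(5):
    w.send_frame()              # frames 0..4 sent, only 2 slots left in window
w.receive_ack(4)                # frames 0..3 acknowledged
print(WINDOW_SIZE - w.unacked)  # 6 -> window again holds frames 5,6,7,0,1,2
```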

Receiver Window
o At the beginning of transmission, the receiver window does not contain n
frames, but it contains n-1 spaces for frames.
o When the new frame arrives, the size of the window shrinks.
o The receiver window does not represent the number of frames received, but
it represents the number of frames that can be received before an ACK is
sent. For example, the size of the window is w, if three frames are received
then the number of spaces available in the window is (w-3).
o Once the acknowledgement is sent, the receiver window expands by the
number equal to the number of frames acknowledged.
o Suppose the size of the window is 7, which means that the receiver window
contains seven spaces for seven frames. If one frame is received, the
receiver window shrinks, moving the boundary from 0 to 1. In this way, the
window shrinks one space at a time, so the window now contains six spaces. If
frames 0 through 4 have been received, then the window contains two spaces
before an acknowledgement is sent.

Error Control
Error Control is a technique of error detection and retransmission.

Categories of Error Control:

Stop-and-wait ARQ

Stop-and-wait ARQ is a technique used to retransmit the data in case of damaged
or lost frames.

This technique works on the principle that the sender will not transmit the next

Four features are required for the retransmission:

o The sending device keeps a copy of the last transmitted frame until the
acknowledgement is received. Keeping the copy allows the sender to
retransmit the data if the frame is not received correctly.
o Both the data frames and the ACK frames are numbered alternately 0 and 1
so that they can be identified individually. For example, an ACK 1 frame
acknowledges the data 0 frame, meaning that the data 0 frame has arrived
correctly and the receiver expects to receive the data 1 frame next.
o If an error occurs in the last transmitted frame, the receiver sends a
NAK frame, which is not numbered. On receiving the NAK frame, the sender
retransmits the data.
o It works with the timer. If the acknowledgement is not received within the
allotted time, then the sender assumes that the frame is lost during the
transmission, so it will retransmit the frame.

Two possibilities of the retransmission:

o Damaged Frame: When the receiver receives a damaged frame, i.e., the
frame contains an error, it returns a NAK frame. For example, when the
data 0 frame is sent, the receiver returns the ACK 1 frame, meaning that
data 0 has arrived correctly. The sender then transmits the next frame:
data 1. It reaches undamaged, and the receiver returns ACK 0. The sender
transmits the next frame: data 0. This time the receiver detects an error and
returns a NAK frame, so the sender retransmits the data 0 frame.

o Lost Frame: The sender is equipped with a timer that starts when the frame is
transmitted. Sometimes the frame does not arrive at the receiving end, so it
can be acknowledged neither positively nor negatively. The sender waits
for an acknowledgement until the timer goes off; if the timer goes off, it
retransmits the last transmitted frame.
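A minimal sketch of the sender side of Stop-and-wait ARQ; the callback names and the always-ACK usage below are assumptions for illustration:

```python
def stop_and_wait_send(frames, send, recv_reply):
    """Sender side sketch: send(seq, frame) transmits a copy of the frame;
    recv_reply() returns 'ACK', 'NAK', or None (timer expired)."""
    seq = 0
    for frame in frames:
        while True:
            send(seq, frame)            # the frame is kept until it is ACKed
            reply = recv_reply()        # wait for ACK/NAK or timeout
            if reply == "ACK":
                break                   # acknowledged: move on to the next frame
            # NAK received or timer expired: retransmit the same frame
        seq ^= 1                        # data/ACK frames alternate between 0 and 1

# Toy usage with callbacks that always acknowledge immediately.
log = []
stop_and_wait_send(["data-A", "data-B"],
                   send=lambda s, f: log.append((s, f)),
                   recv_reply=lambda: "ACK")
print(log)                              # [(0, 'data-A'), (1, 'data-B')]
```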

Sliding Window ARQ

Sliding Window ARQ is a technique used for continuous transmission error
control.

Three Features used for retransmission:

o In this case, the sender keeps the copies of all the transmitted frames until
they have been acknowledged. Suppose the frames from 0 through 4 have
been transmitted, and the last acknowledgement was for frame 2; the sender
has to keep copies of frames 3 and 4 until they are received correctly.
o The receiver can send either NAK or ACK depending on the conditions. The
NAK frame tells the sender that the data have been received damaged. Since
the sliding window is a continuous transmission mechanism, both ACK and
NAK must be numbered for the identification of a frame. The ACK frame
consists of a number that represents the next frame which the receiver
expects to receive. The NAK frame consists of a number that represents the
damaged frame.
o Sliding window ARQ is equipped with a timer to handle lost
acknowledgements. Suppose that n-1 frames have been sent before
receiving any acknowledgement. The sender then starts the timer and waits
for the acknowledgement before sending any more frames.
If the allotted time runs out, the sender retransmits one or all of the frames,
depending upon the protocol used.

Two protocols used in sliding window ARQ:

o Go-Back-N ARQ: In the Go-Back-N ARQ protocol, if one frame is lost or
damaged, the sender retransmits all the frames sent after the last frame for
which it received a positive ACK.

Three possibilities can occur for retransmission:

o Damaged Frame: When the frame is damaged, then the receiver sends a
NAK frame.

In the above figure, three frames have been transmitted before an error is discovered
in the third frame. In this case, ACK 2 has been returned, indicating that frames 0 and 1
have been received successfully without any error. The receiver discovers the error
in the data 2 frame, so it returns the NAK 2 frame. Frame 3 is also discarded, as it
was transmitted after the damaged frame. Therefore, the sender retransmits
frames 2 and 3.

o Lost Data Frame: In sliding window protocols, data frames are sent
sequentially. If any of the frames is lost, the next frame to arrive at the
receiver is out of sequence. The receiver checks the sequence number of
each frame, discovers the frame that has been skipped, and returns a
NAK for the missing frame. The sending device retransmits the frame
indicated by the NAK as well as the frames transmitted after the lost frame.
o Lost Acknowledgement: The sender can send as many frames as the
window allows before waiting for any acknowledgement. Once the limit of
the window is reached, the sender has no more frames to send; it must wait
for the acknowledgement. If the acknowledgement is lost, the sender
could wait forever. To avoid such a situation, the sender is equipped with a
timer that starts counting whenever the window capacity is reached. If the
acknowledgement has not been received within the time limit, the
sender retransmits the frames sent since the last ACK.
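A minimal sketch of the go-back behaviour: on a NAK or timeout the sender rewinds and resends everything from the affected frame (the names and the feedback interface are assumptions; sequence-number wrap-around is omitted):

```python
def go_back_n_send(frames, window, transmit, get_feedback):
    """Sender side sketch. transmit(i, frame) sends frame number i;
    get_feedback() returns ('ACK', n), ('NAK', n) or ('TIMEOUT', None)."""
    base = 0                       # oldest unacknowledged frame
    next_i = 0                     # next new frame to send
    while base < len(frames):
        # Fill the window with outstanding frames.
        while next_i < len(frames) and next_i < base + window:
            transmit(next_i, frames[next_i])
            next_i += 1
        kind, n = get_feedback()
        if kind == "ACK":
            base = n               # receiver expects frame n next: slide window
        elif kind == "NAK":
            base = n               # frames before n arrived correctly
            next_i = n             # go back: resend frame n and everything after
        else:                      # timeout: resend all outstanding frames
            next_i = base

# Toy usage: five frames, window of 3, receiver acknowledges everything.
sent, acks = [], iter([("ACK", 3), ("ACK", 5)])
go_back_n_send(list("ABCDE"), 3,
               transmit=lambda i, f: sent.append((i, f)),
               get_feedback=lambda: next(acks))
print(sent)                        # each frame sent exactly once in this run
```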

Selective-Reject ARQ

o Selective-Reject ARQ technique is more efficient than Go-Back-n ARQ.


o In this technique, only those frames are retransmitted for which negative
acknowledgement (NAK) has been received.
o The receiver's storage buffer keeps the frames received after a damaged frame
on hold until the frame in error is correctly received.
o The receiver must have appropriate logic for reinserting the frames in the
correct order.
o The sender must have a searching mechanism that selects only the
requested frame for retransmission.

What is HDLC (High-level Data Link Control)?

HDLC (High-level Data Link Control) is a group of protocols or rules for
transmitting data between network points (sometimes called nodes).

In more technical terms, HDLC is a bit-oriented, synchronous data link layer
protocol created by the International Organization for Standardization (ISO). The
standard for HDLC is ISO/IEC 13239:2002. IEC stands for the International
Electrotechnical Commission, an international electrical and electronic standards
body that often works with the ISO.

How does HDLC work?

In HDLC, data is organized into a unit (called a frame) and sent across a network
to a destination that verifies its successful arrival. The HDLC protocol also
manages the flow or pacing at which data is sent.

How do HDLC frames work?

HDLC frames can be transmitted over either asynchronous or synchronous
communication links:

Synchronous framing. In synchronous frames, data is NRZI (non-return-to-zero
inverted) encoded, meaning that a 0-bit is sent as a change in the signal,
whereas a 1-bit is transmitted as no change. In other words, every 0 tells the
receiving modem that it must synchronize its clock. In this way, frames can be
sent via a full-duplex or half-duplex link.

Asynchronous framing. On the other hand, in asynchronous communications, it
is not necessary to worry about the bit pattern. Instead, control-octet transparency
is used (otherwise known as byte stuffing). A control escape octet uses the value 0x7D
(for example, a '10111110' bit sequence with the least significant bit first).

When sending HDLC frames, the frame check sequence of an HDLC frame is
either a 16-bit CRC-CCITT or a 32-bit CRC-32 transmitted over Information,
Control or Address fields (CRC stands for cyclic redundancy check). HDLC
frames provide the medium for a receiver to use algorithms for error detection of
any errors that may have been created during transmission. If the recipient's FCS
algorithm doesn't match the sender's, this is an indication of an error.

(NOTE: CCITT stands for Consultative Committee for International Telegraphy
and Telephony. Also known as ITU-T (Telecommunication Standardization
Sector of the International Telecommunication Union), CCITT is an international
body that fosters cooperative standards for telecommunications systems and
equipment.)

HDLC frame types

There are three types of commonly used HDLC frame structures. They are as
follows:

 Information frames (I-frames), which transmit user data from the computer
network layer and incorporate error control information with the data. I-frames also
contain control fields used to define data functions.

 Supervisory frames (S-frames), which transmit error and flow control data
whenever it becomes impossible to "piggyback" on transmitted data. For this reason,
S-frames don't contain information fields.

 Unnumbered frames (U-frames), which are for all other miscellaneous purposes,
including link management. Some of these contain information fields, and others do
not.

Point-to-Point Protocol (PPP) Encapsulation

Point-to-Point Protocol (PPP) is basically a Wide Area Network
(WAN) protocol that works at layer 2 by encapsulating frames
for transmission over different physical links or connections such as
serial cables, cell phones, fiber optic cable, among others.

Encapsulation is basically a process in which a lower-layer protocol
receives data from a higher-layer protocol and then places this data in the data
portion of its frame.

In simple words, we can say that encapsulation is the process of enclosing one
type of packet with the help of another type of packet. PPP provides
encapsulation so that various protocols at the network layer are supported
simultaneously. PPP connections also deliver or transmit packets in
sequence and provide full-duplex, simultaneous, bi-directional operation. PPP
can encapsulate any network layer packet in its frame, which makes
PPP independent of the layer-three protocol and even
capable of carrying multiple layer-three packets through a single link or
connection.

PPP encapsulation is also required to disambiguate multiprotocol
datagrams, i.e., to remove ambiguity so that the multiprotocol datagrams are
clear and easy to understand. PPP puts data in a frame and transmits it
through the PPP connection or link. A frame is basically defined as a unit of
transmission in the Data Link Layer (DLL) of the OSI protocol stack. To form the
encapsulation, a total of 8 bytes is required. Data is usually transmitted
from left to right in frames. The general structure of PPP encapsulation is shown
below:

PPP encapsulation frame basically contains three types of fields as given
below :

1. Protocol Field –
This field is 1 or 2 bytes (i.e., 8 or 16 bits) and is used to identify the datagram that
is being encapsulated in the information field of the packet.

It simply indicates the protocol that is used in the frame. The least significant bit of the
lower byte is usually set to 1 and, on the other hand, the most significant bit is
usually set to 0. Types of protocols that can be present are Link Control
Protocol (LCP), Password Authentication Protocol (PAP), Challenge
Handshake Authentication Protocol (CHAP), etc.

Protocol Number – Protocol Name

0001 – Padding Protocol
0003 to 001f – Reserved (transparency inefficient)
00cf – Reserved (PPP NLPID)
8001 to 801 – Unused
807d – Unused
0021 – IP (Internet Protocol)
8021 – IPCP (Internet Protocol Control Protocol) (NCP for IP)
002d – Van Jacobson TCP/IP header compression (RFC 1144)
002f – Van Jacobson IP header compression
c021 – Link Control Protocol
c023 – Password Authentication Protocol
c025 – Link Quality Report
c223 – Challenge Handshake Authentication Protocol

2. Information Field –
This field is 0 or more bytes. It has a maximum length of 1500 bytes,
including padding and excluding the protocol field. It usually contains the datagram for
the protocol that is specified in the protocol field. A datagram is
basically a transfer unit that is associated with networking.

3. Padding Field –
This field is optional. On transmission, the Information field may be padded up to the
Maximum Receive Unit (MRU). Both peers must be capable of recognizing and
distinguishing padding bytes from true data or real information.

Difference between HDLC and PPP

1. Basics – HDLC is an ISO-developed, bit-oriented, code-transparent synchronous data link layer protocol. PPP is a data link layer communication protocol for establishing a direct link between two nodes.

2. Stands for – HDLC stands for High-level Data Link Control. PPP stands for Point-to-Point Protocol.

3. Protocol type – HDLC is a bit-oriented protocol. PPP is a byte-oriented protocol.

4. Configuration type – HDLC is implemented by point-to-point link configurations and also multi-point link configurations. PPP is implemented by point-to-point configuration only.

5. Layer – HDLC works at layer 2 (Data Link Layer). PPP works at layer 2 and layer 3 (Network Layer).

6. Dynamic Addressing – Dynamic addressing is not offered by HDLC. In PPP, dynamic addressing is offered.

7. Media Type – HDLC is used in synchronous media. PPP is used in synchronous as well as asynchronous media.

8. Error detection – HDLC does not offer error detection. PPP provides error detection using an FCS (Frame Check Sequence) while transmitting data.

9. Compatibility – HDLC is not compatible with non-Cisco devices. PPP is compatible with non-Cisco devices.

10. Format – HDLC protocols have two types: ISO HDLC and Cisco HDLC. PPP uses a defined format of HDLC.

11. Authentication – HDLC does not provide link authentication. PPP provides link authentication using protocols like PAP (Password Authentication Protocol) and CHAP (Challenge Handshake Authentication Protocol).

12. Cost – HDLC is comparatively more costly. PPP is comparatively less costly.

13. Link Quality – HDLC does not offer quality checking of established links. PPP offers quality checking of the established links using LCP (Link Control Protocol).

What is a multiple access protocol?

When a sender and receiver have a dedicated link to transmit data packets,
the data link control is enough to handle the channel. Suppose there is no
dedicated path to communicate or transfer the data between two devices. In
that case, multiple stations access the channel and simultaneously transmit
data over it. This may create collisions and crosstalk. Hence, a
multiple access protocol is required to reduce collisions and avoid
crosstalk between the channels.

For example, suppose that there is a classroom full of students. When a
teacher asks a question, all the students (small channels) in the class start
answering the question at the same time (transferring data
simultaneously). Because all the students respond at the same time, the data
overlaps or is lost. Therefore, it is the responsibility of the teacher (the multiple
access protocol) to manage the students and make them answer one at a time.

Following are the types of multiple access protocols, which are subdivided into
different categories:

A. Random Access Protocol

In this protocol, all stations have equal priority to send data over the
channel. In a random access protocol, no station depends on
another station, and no station controls another station. Depending on the
channel's state (idle or busy), each station transmits its data frame.
However, if more than one station sends data over the channel, there may
be a collision or data conflict. Due to the collision, the data frame packets
may be lost or changed, and hence they are not received by the receiver end.

Following are the different methods of random-access protocols for
broadcasting frames on the channel:

Aloha

CSMA

CSMA/CD

CSMA/CA

ALOHA Random Access Protocol

It is designed for wireless LANs (Local Area Networks) but can also be used
on a shared medium to transmit data. Using this method, any station can
transmit data across the network whenever a data frame is
available for transmission.

Aloha Rules

1. Any station can transmit data to the channel at any time.

2. It does not require any carrier sensing.

3. Collisions may occur and data frames may be lost when multiple stations
transmit data at the same time.

4. Acknowledgment of the frames exists in Aloha; hence, there is no
collision detection.

5. It requires retransmission of data after some random amount of time.

Pure Aloha

Whenever data is available for sending over the channel at a station, we use
Pure Aloha. In pure Aloha, each station transmits data to the channel
without checking whether the channel is idle or not, so collisions
may occur and the data frame may be lost. When a station transmits a
data frame to the channel, it waits for the receiver's
acknowledgment. If the acknowledgment does not arrive within the
specified time, the station assumes the frame has been lost or
destroyed; it then waits for a random amount of time, called the
backoff time (Tb), and retransmits the frame. This continues until all the data are
successfully transmitted to the receiver.

1. The total vulnerable time of pure Aloha is 2 * Tfr.

2. Maximum throughput occurs when G = 1/2, i.e., 18.4%.

3. The probability of successful transmission of a data frame is S = G * e^(-2G).

As we can see in the figure above, there are four stations accessing a
shared channel and transmitting data frames. Some frames collide because
most stations send their frames at the same time. Only two frames, frame 1.1
and frame 2.2, are successfully transmitted to the receiver end; the
other frames are lost or destroyed. Whenever two frames fall on a
shared channel simultaneously, a collision occurs, and both frames suffer
damage. Even if only the new frame's first bit enters the channel before the
last bit of the other frame has finished, both frames are completely destroyed, and both
stations must retransmit their data frames.

Slotted Aloha

Slotted Aloha is designed to overcome the poor efficiency of pure Aloha,
which has a very high possibility of frame collisions. In slotted
Aloha, the shared channel is divided into fixed time intervals called slots.
If a station wants to send a frame on the shared channel, the frame can
only be sent at the beginning of a slot, and only one frame is allowed to be
sent in each slot. If a station is unable to send its data at the beginning
of a slot, it has to wait until the beginning of the
next slot. However, the possibility of a collision remains when two or more
stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1, i.e., about 37%.

2. The probability of successfully transmitting a data frame in slotted
Aloha is S = G * e^(-G).

3. The total vulnerable time required in slotted Aloha is Tfr.
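The two throughput formulas can be compared numerically with a quick sketch (the function names are assumptions for illustration):

```python
import math

def pure_aloha_throughput(G):
    return G * math.exp(-2 * G)        # S = G * e^(-2G)

def slotted_aloha_throughput(G):
    return G * math.exp(-G)            # S = G * e^(-G)

print(round(pure_aloha_throughput(0.5), 3))    # 0.184 -> about 18.4% at G = 1/2
print(round(slotted_aloha_throughput(1.0), 3)) # 0.368 -> about 36.8% at G = 1
```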

CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access is a media access protocol in which a station
senses the traffic on the channel (idle or busy) before transmitting data. If
the channel is idle, the station can send data on the channel;
otherwise, it must wait until the channel becomes idle. Hence, it reduces the
chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-persistent mode of CSMA, each node first senses
the shared channel, and if the channel is idle, it immediately sends the
data. Otherwise, it keeps monitoring the status of the channel and
broadcasts the frame unconditionally as soon as the channel becomes idle.

Non-Persistent: In this access mode of CSMA, before
transmitting the data each node must sense the channel, and if the channel is
inactive, it immediately sends the data. Otherwise, the station waits for
a random time (it does not sense the channel continuously), and when the channel is
found to be idle, it transmits the frame.

P-Persistent: It is a combination of the 1-persistent and non-persistent
modes. In the p-persistent mode, each node senses the channel,
and if the channel is inactive, it sends a frame with probability p. With
probability q = 1 - p, it defers to the next time slot and
repeats the process.

O-Persistent: In the o-persistent method, a transmission order (priority) is
assigned to each station before frames are transmitted on the shared channel. If the
channel is found to be inactive, each station waits for its turn to transmit the
data.

CSMA/ CD

It is a carrier sense multiple access / collision detection network protocol
used to transmit data frames. The CSMA/CD protocol works within the medium
access control layer. A station first senses the shared channel before
broadcasting a frame, and if the channel is idle, it transmits the frame while
checking whether the transmission is successful. If the frame is successfully
received, the station can send the next frame. If a collision is detected in
CSMA/CD, the station sends a jam/stop signal on the shared channel to
terminate the data transmission. After that, it waits for a random time before
attempting to send the frame again.

CSMA/ CA

It is a carrier sense multiple access / collision avoidance network protocol
for carrier transmission of data frames. It is a protocol that works with the
medium access control layer. When a data frame is sent on the channel, the
sender listens to the channel to check whether it is clear. If the
station receives only a single signal (its own), it means the data
frame has been successfully transmitted to the receiver. But if it gets two
signals (its own and one from another station, i.e., colliding frames), a collision
of frames has occurred on the shared channel. Thus, the sender detects a
collision from the signal it receives back.

Following are the methods used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel to
become idle, and if it finds the channel idle, it does not immediately send
the data. Instead, it waits for some time; this time period is called
the Interframe Space or IFS. The IFS time is also used to define
the priority of a station.

Contention window: In the contention window method, the total time is divided
into slots. When the station/sender is ready to transmit the data
frame, it chooses a random number of slots as its wait time. If the channel
is still busy, it does not restart the entire process; it only resumes the
timer when the channel becomes idle again, and sends the data packets when the
timer runs out.

Acknowledgment: In the acknowledgment method, the sender station
retransmits the data frame on the shared channel if the acknowledgment is not
received within the expected time.

B. Controlled Access Protocol

It is a method of reducing data frame collisions on a shared channel. In the
controlled access method, the stations consult one another, and a particular
station sends a data frame only when it is approved by all the other stations. This
means that a single station cannot send data frames unless it is
approved by all the other stations. There are three types of controlled access:
Reservation, Polling, and Token Passing.

C. Channelization Protocols

A channelization protocol allows the total usable bandwidth of a
shared channel to be shared across multiple stations based on time,
frequency and codes. All the stations can thereby access the channel at the
same time to send their data frames.

Following are the various methods to access the channel based on time,
frequency and codes:

1. FDMA (Frequency Division Multiple Access)

2. TDMA (Time Division Multiple Access)

3. CDMA (Code Division Multiple Access)

FDMA

Frequency division multiple access (FDMA) is a method used to divide the
available bandwidth into equal bands so that multiple users can send data,
each through a different frequency subchannel. A particular band is reserved
for each station to prevent crosstalk between the channels and
interference between stations.

TDMA

Time Division Multiple Access (TDMA) is a channel access method. It
allows the same frequency bandwidth to be shared across multiple stations.
To avoid collisions on the shared channel, it divides the channel into
different time slots that are allocated to the stations for transmitting their data
frames. In other words, the stations share the same frequency bandwidth by
dividing the signal into various time slots for transmission. However, TDMA has an
overhead of synchronization, which specifies each station's time slot by adding
synchronization bits to each slot.

CDMA

Code division multiple access (CDMA) is a channel access method. In
CDMA, all stations can simultaneously send data over the same channel.
It allows each station to transmit data frames using the full
bandwidth of the shared channel at all times. It does not require the division
of bandwidth on the shared channel based on time or frequency slots. If multiple
stations send data over the channel simultaneously, their data frames are separated by
unique code sequences: each station has a different unique code for
transmitting data over the shared channel. For example, suppose there are multiple
people in a room who are all speaking at the same time. Two people can still
understand each other if they interact using a language that the others do not share.
Similarly, in the network, different stations can communicate with each other
simultaneously, each pair using a different code language, and the data can still be
separated at the receiver.

Carrier Sense Multiple Access with Collision Detection (CSMA/CD) is a


network protocol for carrier transmission that operates in the Medium
Access Control (MAC) layer. It senses or listens whether the shared channel
for transmission is busy or not, and defers transmissions until the channel is
free.

When more than one station sends frames simultaneously, a collision
occurs. The back-off algorithm is a collision resolution mechanism which is
commonly used to schedule retransmissions after collisions in Ethernet. The
waiting time that a station waits before attempting retransmission of the
frame is called as back off time.

Algorithm of CSMA/CD

Step 1) When a frame is ready, the transmitting station checks whether the
channel is idle or busy.

Step 2) If the channel is busy, the station waits until the channel becomes
idle.

Step 3) If the channel is idle, the station starts transmitting and continually
monitors the channel to detect collision.

Step 4) If a collision is detected, the station starts the binary exponential
backoff algorithm.

Step 5) If no collision is detected, the station completes the frame
transmission and resets the retransmission counter.

Binary Exponential Backoff Algorithm in case of Collision

Step 1) The station continues transmission of the current frame for a


specified time along with a jam signal, to ensure that all the other stations
detect collision.

Step 2) The station increments the retransmission counter, c, which denotes the
number of collisions.

Step 3) The station selects a random number of slot times in the range 0 to 2^c – 1.
For example, after the first collision (i.e. c = 1), the station will wait
for either 0 or 1 slot times. After the second collision (i.e. c = 2), the station
will wait anything between 0 and 3 slot times. After the third collision (i.e. c =
3), the station will wait anything between 0 and 7 slot times, and so forth.

Step 4) If the station selects a number k in the range 0 to 2^c – 1, then

Back_off_time = k × slot time,

where a time slot is equal to round trip time (RTT).

Step 5) At the end of the backoff time, the station attempts retransmission
by continuing with the CSMA/CD algorithm.

Step 6) If the maximum number of retransmission attempts is reached, then


the station aborts transmission.
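A sketch of the back-off time selection described in steps 2–4; the slot-time value and retransmission limit used here are illustrative assumptions:

```python
import random

SLOT_TIME = 51.2e-6     # seconds; classic 10 Mbps Ethernet slot time (illustrative)
MAX_ATTEMPTS = 16       # assumed limit after which the transmission is aborted

def backoff_time(collisions: int) -> float:
    """Pick k uniformly in [0, 2^c - 1] and wait k slot times."""
    if collisions > MAX_ATTEMPTS:
        raise RuntimeError("too many collisions: abort transmission")
    k = random.randint(0, 2 ** collisions - 1)
    return k * SLOT_TIME

# After collision 1 the wait is 0 or 1 slots, after collision 2 it is 0..3 slots,
# after collision 3 it is 0..7 slots, and so on.
for c in (1, 2, 3):
    print(c, backoff_time(c))
```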

Advantage –

Collision probability decreases exponentially.

Disadvantages –

Capture effect: a station that wins once keeps on winning.

Works only for 2 stations or hosts.

IEEE 802.3 and Ethernet

Ethernet is a set of technologies and protocols that are used primarily in
LANs. It was first standardized in the 1980s by the IEEE 802.3 standard. IEEE
802.3 defines the physical layer and the medium access control (MAC) sub-
layer of the data link layer for wired Ethernet networks. Ethernet is classified
into two categories: classic Ethernet and switched Ethernet.

Classic Ethernet is the original form of Ethernet that provides data rates
between 3 and 10 Mbps. The varieties are commonly referred to as 10BASE-X.
Here, 10 is the maximum throughput, i.e. 10 Mbps, BASE denotes the use of
baseband transmission, and X is the type of medium used. Most varieties of
classic Ethernet have become obsolete in present-day communication.

A switched Ethernet uses switches to connect to the stations in the LAN. It
replaces the repeaters used in classic Ethernet and allows full bandwidth
utilization.

IEEE 802.3 Popular Versions

There are a number of versions of IEEE 802.3 protocol. The most popular
ones are

IEEE 802.3: This was the original standard given for 10BASE-5. It used a
thick single coaxial cable into which a connection can be tapped by drilling
into the cable to the core. Here, 10 is the maximum throughput, i.e. 10
Mbps, BASE denoted use of baseband transmission, and 5 refers to the
maximum segment length of 500m.

IEEE 802.3a: This gave the standard for thin coax (10BASE-2), which is a
thinner variety where the segments of coaxial cables are connected by BNC
connectors. The 2 refers to the maximum segment length of about 200 m
(185 m to be precise).

IEEE 802.3i: This gave the standard for twisted pair (10BASE-T), which uses
unshielded twisted pair (UTP) copper wires as the physical layer medium.
Further variations were given by IEEE 802.3u for 100BASE-TX, 100BASE-
T4 and 100BASE-FX.

IEEE 802.3j: This gave the standard for Ethernet over fiber (10BASE-F),
which uses fiber optic cables as the medium of transmission.

What are the IEEE 802.11 Wireless LAN Standards?

IEEE 802.11 standard, popularly known as WiFi, lays down the architecture
and specifications of wireless LANs (WLANs). WiFi or WLAN uses high
frequency radio waves for connecting the nodes.

There are several standards of IEEE 802.11 WLANs. The prominent among
them are 802.11, 802.11a, 802.11b, 802.11g, 802.11n and 802.11p. All the
standards use carrier-sense multiple access with collision avoidance
(CSMA/CA). Also, they have support for both centralised base station based
as well as ad hoc networks.

IEEE 802.11

IEEE 802.11 was the original version released in 1997. It provided 1 Mbps
or 2 Mbps data rate in the 2.4 GHz band and used either frequency-hopping
spread spectrum (FHSS) or direct-sequence spread spectrum (DSSS). It is
obsolete now.

IEEE 802.11a

802.11a was published in 1999 as a modification to 802.11, with an orthogonal
frequency division multiplexing (OFDM) based air interface in the physical
layer instead of the FHSS or DSSS of 802.11. It provides a maximum data rate
of 54 Mbps, operating in the 5 GHz band. Besides, it provides an error-correcting
code. As the 2.4 GHz band is crowded, the relatively sparsely used 5 GHz band
gives 802.11a an additional advantage.

Further amendments to 802.11a are 802.11ac, 802.11ad, 802.11af, 802.11ah,
802.11ai, 802.11aj etc.

IEEE 802.11b

802.11b is a direct extension of the original 802.11 standard that appeared in
early 2000. It uses the same modulation technique as 802.11, i.e. DSSS, and
operates in the 2.4 GHz band. It has a higher data rate of 11 Mbps as
compared to the 2 Mbps of 802.11, due to which it was rapidly adopted in
wireless LANs. However, since the 2.4 GHz band is pretty crowded, 802.11b
devices face interference from other devices.

Further amendments to 802.11b are 802.11ba, 802.11bb, 802.11bc,
802.11bd and 802.11be.

IEEE 802.11g

802.11g was endorsed in 2003. It operates in the 2.4 GHz band (as in
802.11b) and provides an average throughput of 22 Mbps. It uses the OFDM
technique (as in 802.11a). It is fully backward compatible with 802.11b.
802.11g devices also face interference from other devices operating in the 2.4
GHz band.

IEEE 802.11n

802.11n was approved and published in 2009. It operates on both the 2.4
GHz and the 5 GHz bands. It has a variable data rate ranging from 54 Mbps to
600 Mbps. It provides a marked improvement over previous 802.11
standards by incorporating multiple-input multiple-output (MIMO)
antennas.

IEEE 802.11p

802.11p is an amendment for including wireless access in vehicular
environments (WAVE) to support Intelligent Transportation Systems (ITS).
It covers network communications between vehicles moving at high
speed and the roadside environment. It has a data rate of 27 Mbps and operates
in the 5.9 GHz band.

