
UNIT-II

Data link layer:

• Design issues
• Framing
• Error detection and correction.

Elementary data link protocols:


• simplex protocol
• A simplex stop and wait protocol for an error-free channel
• A simplex stop and wait protocol for noisy channel.

Sliding Window protocols:


• A one-bit sliding window protocol
• A protocol using Go-Back-N
• A protocol using Selective Repeat

Example data link protocols.


Medium Access sub layer:
• The channel allocation problem

Multiple access protocols:


• ALOHA (Additive Links On-line Hawaii Area)
• Carrier Sense Multiple Access Protocols
• Collision Free Protocols.
• Wireless LANs
• Data Link Layer Switching

2.1 DATA LINK LAYER DESIGN ISSUES

The data link layer uses the services of the physical layer to send and receive bits over
communication channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.
4. The data link layer takes the packets it gets from the network layer and encapsulates them
into frames for transmission. Each frame contains a frame header, a payload field for
holding the packet, and a frame trailer.
Figure: Relationship between packets and frames.

The main functions and the design issues of this layer are:
1. Services Provided to the Network Layer
2. Framing
3. Error Control
4. Flow Control

2.1.1 Services Provided to the Network Layer

• The function of the data link layer is to provide services to the network layer.
• The principal service is transferring data from the network layer on the source machine to
the network layer on the destination machine.

The data link layer can be designed to offer various services. The actual services that are offered
vary from protocol to protocol. Three reasonable possibilities that we will consider in turn are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
Unacknowledged connectionless service
 No logical connection is established beforehand or released afterward.
 Unacknowledged connectionless service consists of having the source machine send
independent frames to the destination machine without having the destination machine
acknowledge them.
 If a frame is lost due to noise on the line, no attempt is made to detect the loss or recover
from it in the data link layer.
Acknowledged connectionless service
 When this service is offered, there are still no logical connections used, but each frame sent
is individually acknowledged. In this way, the sender knows whether a frame has arrived
correctly or not.
 If it has not arrived within a specified time interval, it can be sent again.
 This service is useful over unreliable channels, such as wireless systems. 802.11 (Wi-Fi)
is a good example.
Acknowledged connection-oriented service
 When connection-oriented service is used, transfers go through three distinct phases.
 In the first phase, the connection is established by having both sides initialize variables
and counters needed to keep track of which frames have been received and which ones
have not.
 In the second phase, one or more frames are actually transmitted.
 In the third and final phase, the connection is released, freeing up the variables, buffers,
and other resources used to maintain the connection.

2.1.2 FRAMING

• To provide service to the network layer, the data link layer must use the service provided
by the physical layer.
• If the channel is noisy, as it is for most wireless and some wired links, the physical layer
will add some redundancy to its signals to reduce the bit error rate to a tolerable level.
• The usual approach is for the data link layer to break up the bit stream into discrete
frames, compute a short token called a checksum for each frame, and include the
checksum in the frame when it is transmitted.
• When a frame arrives at the destination, the checksum is recomputed. If the newly
computed checksum is different from the one contained in the frame, the data link layer
knows that an error has occurred and takes steps to deal with it.
• To break the bit stream up into frames, we will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
Figure : A byte stream. (a) Without errors. (b) With one error.
• The first framing method uses a field in the header to specify the number of bytes in the
frame. When the data link layer at the destination sees the byte count, it knows how
many bytes follow and hence where the end of the frame is.
• The second framing method gets around the problem of resynchronization after an error
by having each frame start and end with a special byte, called a flag byte, which is used as
both the starting and ending delimiter. If a flag byte occurs in the data, it is preceded by a
special escape byte (ESC). The data link layer on the receiving end removes the
escape bytes before giving the data to the network layer. This technique is called byte
stuffing.

Figure : (a) A frame delimited by flag bytes. (b) Four examples of byte sequences
before and after byte stuffing.
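To make the idea concrete, here is a minimal Python sketch of byte stuffing (not from the original text). It assumes PPP-style FLAG (0x7E) and ESC (0x7D) values and the simple rule in which the escaped byte follows ESC unchanged; real protocols may use different delimiter values or escaping rules.

FLAG, ESC = 0x7E, 0x7D   # assumed PPP-style delimiter and escape bytes

def byte_stuff(payload: bytes) -> bytes:
    """Escape any FLAG or ESC byte in the payload, then add framing flags."""
    stuffed = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            stuffed += bytes([ESC, b])   # insert an escape byte before the special byte
        else:
            stuffed.append(b)
    stuffed.append(FLAG)
    return bytes(stuffed)

def byte_unstuff(frame: bytes) -> bytes:
    """Remove framing flags and escape bytes at the receiving data link layer."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                       # skip the escape byte, keep the byte that follows
        out.append(body[i])
        i += 1
    return bytes(out)

assert byte_unstuff(byte_stuff(bytes([0x41, FLAG, ESC, 0x42]))) == bytes([0x41, FLAG, ESC, 0x42])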
• The third method of delimiting the bit stream gets around a disadvantage of byte
stuffing, which is that it is tied to the use of 8-bit bytes. Framing can also be done at the bit
level, so frames can contain an arbitrary number of bits made up of units of any size.
• It was developed for the once very popular HDLC (High-level Data Link Control)
protocol. Each frame begins and ends with a special bit pattern, 01111110.
• Whenever the sender's data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream. When the receiver sees five
consecutive incoming 1 bits, followed by a 0 bit, it automatically de-stuffs (i.e., deletes) the
0 bit.
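A small illustrative sketch of HDLC-style bit stuffing follows (an assumption-laden example, not part of the original notes); it works on strings of '0'/'1' characters for readability.

def bit_stuff(bits: str) -> str:
    """After five consecutive 1s, insert a 0 so the data never mimics the 01111110 flag."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')   # stuffed bit
            run = 0
    return ''.join(out)

def bit_unstuff(bits: str) -> str:
    """Delete the 0 that follows any run of five 1s."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:              # this is the stuffed 0; drop it
            skip = False
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True
    return ''.join(out)

assert bit_unstuff(bit_stuff('011111101111110')) == '011111101111110'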

• The last method of framing uses physical layer ‘‘coding violations’’ to delimit frames. The
beauty of this scheme is that it is easy to find the start and end of frames and there is no
need to stuff the data.
• Many data link protocols use a combination of these methods for safety. A common
pattern used for Ethernet and 802.11 is to have a frame begin with a well-defined pattern
called a preamble.
• The preamble is then followed by a length (i.e., count) field in the header that is used to
locate the end of the frame.

2.1.3 ERROR CONTROL

The data link layer ensures error free link for data transmission. The issues it caters to with
respect to error control are −
• Dealing with transmission errors
• Sending acknowledgement frames in reliable connections
• Retransmitting lost frames
• Identifying duplicate frames and deleting them
• Controlling access to shared channels in case of broadcasting
• The data link layer regulates flow control so that a fast sender does not drown a slow
receiver. When the sender sends frames at very high speeds, a slow receiver may not be
able to handle it. There will be frame losses even if the transmission is error-free.

The two common approaches for flow control are –

1. Feedback based flow control


2. Rate based flow control
• In the first one, feedback-based flow control, the receiver sends back information to the
sender giving it permission to send more data, or at least telling the sender how the
receiver is doing.
• In the second one, rate-based flow control, the protocol has a built-in mechanism that
limits the rate at which senders may transmit data, without using feedback from the
receiver.
2.2 FRAMING

• In the physical layer, data transmission involves synchronized transmission of bits from
the source to the destination.
• The data link layer packs these bits into frames. Data-link layer takes the packets from
the Network Layer and encapsulates them into frames.
• If the frame size becomes too large, then the packet may be divided into small sized
frames.
• At the receiver’s end, the data link layer picks up signals from the hardware and assembles
them into frames.
General Structure of a Frame
• Frame Header: It contains the source and the destination addresses of the frame and the
control bytes.
• Payload field: It contains the message to be delivered.
• Trailer: It contains the error detection and error correction bits. It is also called
Frame Check Sequence (FCS).
• Flag: Two flag at the two ends mark the beginning and the end of the frame.


Frame Header
A frame header contains the destination address, the source address and three control
fields kind, seq, and ack serving the following purposes:
1. Kind: This field states whether the frame is a data frame or it is used for control
functions like error and flow control or link management etc.
2. seq: This contains the sequence number of the frame for rearrangement of out – of –
sequence frames and sending acknowledgements by the receiver.
3. ack: This contains the acknowledgement number of some frame, particularly when
piggybacking is used.
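Purely as an illustration, the frame fields described above could be modelled with a small hypothetical Python data structure; the field names mirror the kind, seq, and ack fields of the header plus the payload and the FCS trailer.

from dataclasses import dataclass

@dataclass
class Frame:
    kind: str        # "data" or "control" (error/flow control, link management)
    seq: int         # sequence number of this frame
    ack: int         # acknowledgement number, used when piggybacking
    payload: bytes   # packet handed down by the network layer
    fcs: int = 0     # frame check sequence computed over header + payload

# Example: a data frame carrying packet b"hello", acknowledging frame 3
f = Frame(kind="data", seq=4, ack=3, payload=b"hello")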
Types of Frames
Frames can be of two types, fixed sized frames and variable sized frames.
• Fixed-sized Frames: The size of the frame is fixed and so the frame length acts as
delimiter of the frame. It does not require delimiting bits to identify the start and end of
the frame. Example : ATM cells.
• Variable – Sized Frames: Here, the size of each frame to be transmitted may be
different. So additional mechanisms are kept to mark the end of one frame and the
beginning of the next frame. It is used in local area networks.

2.3 ERROR DETECTION AND CORRECTION

• Data-link layer uses error control techniques to ensure that frames, i.e. bit streams of data,
are transmitted from the source to the destination with a certain extent of accuracy.

Errors
• When bits are transmitted over the computer network, they are subject to corruption
due to interference and network problems. The corrupted bits lead to spurious data being
received by the destination and are called errors.

Types of Errors
• Single Bit Errors
• Multiple Bit Errors
• Burst Errors
• Single bit error − In the received frame, only one bit has been corrupted, i.e. either
changed from 0 to 1 or from 1 to 0.

• Multiple bits error − In the received frame, more than one bit is corrupted.
• Burst error − In the received frame, more than one consecutive bit is corrupted.

Error Detection Techniques


The most popular Error Detecting Techniques are:
1. Single Parity check
2. Two-dimensional parity check
3. Checksum
4. Cyclic redundancy check

Error Detection Techniques : Single parity check


• Single parity checking is the simplest and least expensive mechanism for detecting errors.
• In this technique, a redundant bit, known as a parity bit, is appended at the end of the data
unit.

Error Detection Techniques : Single parity check


• If the number of 1s bits is odd, then parity bit 1 is appended at the end of the data
unit.
• If the number of 1s bits is even, then parity bit 0 is appended at the end of the data
unit.
• At the receiving end, the parity bit is calculated from the received data bits and compared
with the received parity bit.
• If the technique makes the total number of 1s even, it is known as even-parity
checking.
• If the technique makes the total number of 1s odd, it is known as odd-parity
checking.
Error Detection Techniques : Single parity check
• It can only detect single-bit errors, which are very rare.
• If two bits are interchanged, then it cannot detect the errors.

• Performance can be improved by using a Two-Dimensional Parity Check, which organizes
the data in the form of a table.
• Parity check bits are computed for each row, which is equivalent to the single-parity
check.
• In Two-Dimensional Parity check, a block of bits is divided into rows, columns and the
redundant row & column of bits is added to the whole block.
• At the receiving end, the parity bits are compared with the parity bits computed from the
received data.
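A short sketch of even-parity computation, first for a single data unit and then for a two-dimensional block, may help; the data values below are made up for the example.

def even_parity_bit(bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def two_d_parity(block):
    """Append a parity bit to each row, then append a parity row over the columns."""
    with_row_parity = [row + [even_parity_bit(row)] for row in block]
    parity_row = [even_parity_bit([row[i] for row in with_row_parity])
                  for i in range(len(with_row_parity[0]))]
    return with_row_parity + [parity_row]

data = [[1, 1, 0, 0], [1, 0, 1, 0]]
print(two_d_parity(data))   # every row and every column now has even parity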

Error Detection Techniques :

Drawbacks Of Two-Dimensional Parity Check


• If two bits in one data unit are corrupted and two bits in exactly the same positions in
another data unit are also corrupted, then the 2D parity checker will not be able to detect
the error.
• This technique cannot be used to detect 4-bit errors or more in some cases.

Error Detection : Checksum
• This is a block code method where a checksum is created based on the data values in the
data blocks to be transmitted using some algorithm and appended to the data.
• When the receiver gets this data, a new checksum is calculated and compared with the
existing checksum. A mismatch indicates an error.

Error Detection by Checksums
• For error detection by checksums, data is divided into fixed sized frames or segments.
• Sender’s End − The sender adds the segments using 1’s complement arithmetic to get
the sum. It then complements the sum to get the checksum and sends it along with the
data frames.
• Receiver’s End − The receiver adds the incoming segments along with the checksum
using 1’s complement arithmetic to get the sum and then complements it.
• If the result is zero, the received frames are accepted; otherwise they are discarded.
Example
• Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are
11001100, 10101010, 11110000 and 11000011.
• The sender adds the bits using 1s complement arithmetic. While adding two numbers
using 1s complement arithmetic, if there is a carry over, it is added to the sum.
Example
• After adding all the 4 frames, the sender complements the sum to get the checksum,
11010011, and sends it along with the data frames.
• The receiver performs 1s complement arithmetic sum of all the frames including the
checksum. The result is complemented and found to be 0. Hence, the receiver assumes
that no error has occurred.
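The worked example above can be reproduced with a short sketch of 8-bit 1's complement addition; the frame values are the same four 8-bit frames used in the text.

def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        if total > mask:                 # carry out: wrap it around
            total = (total & mask) + 1
    return total

frames = [0b11001100, 0b10101010, 0b11110000, 0b11000011]
s = ones_complement_sum(frames)
checksum = (~s) & 0xFF                   # complement of the sum
print(f"{checksum:08b}")                 # 11010011, as in the example

# Receiver: the sum of all frames plus the checksum, complemented, should be 0
assert (~ones_complement_sum(frames + [checksum])) & 0xFF == 0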

Cyclic Redundancy Check (CRC)


• Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by
a predetermined divisor agreed upon by the communicating system. The divisor is
generated using polynomials.
• Here, the sender performs binary division of the data segment by the divisor. It then
appends the remainder called CRC bits to the end of the data segment. This makes
the resulting data unit exactly divisible by the divisor.

Cyclic Redundancy Check (CRC)
• The receiver divides the incoming data unit by the same divisor of sender. If there is
no remainder, the data unit is assumed to be correct and is accepted. Otherwise, it is
understood that the data is corrupted and is therefore rejected.
• CRC is a redundancy error technique used to determine the error.

Following are the steps used in CRC for error detection:


• In the CRC technique, firstly, a string of n 0s is appended to the data unit, where n is one
less than the number of bits in a predetermined binary number, known as the divisor,
which is n+1 bits long.
• Secondly, the newly extended data is divided by the divisor using a process known as
binary division. The remainder generated from this division is known as the CRC
remainder.
• Thirdly, the CRC remainder replaces the appended 0s at the end of the original data.
This newly generated unit is sent to the receiver.
• The receiver receives the data followed by the CRC remainder. The receiver will treat
this whole unit as a single unit, and it is divided by the same divisor that was used to
find the CRC remainder.
• If the result of this division is zero, it means there is no error, and the data is accepted.
• If the result of this division is not zero, it means the data contains an error; therefore,
the data is discarded.
Cyclic Redundancy Check (CRC)

Example: Suppose the original data is 11100 and divisor is 1001.


CRC Generator
• A CRC generator uses modulo-2 division. Firstly, three zeroes are appended at the
end of the data, as the length of the divisor is 4, and we know that the length of the string
of 0s to be appended is always one less than the length of the divisor.
• Now, the string becomes 11100000, and the resultant string is divided by the divisor
1001.

CRC Generator
• The remainder generated from the binary division is known as CRC remainder. The
generated value of the CRC remainder is 111.
• CRC remainder replaces the appended string of 0s at the end of the data unit, and the
final string would be 11100111 which is sent across the network.
CRC Checker
• The functionality of the CRC checker is similar to the CRC generator.
• When the string 11100111 is received at the receiving end, then CRC checker performs
the modulo-2 division.
• A string is divided by the same divisor, i.e., 1001.
• In this case, CRC checker generates the remainder of zero. Therefore, the data is
accepted.
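A minimal sketch of the modulo-2 division described above; with data 11100 and divisor 1001 it reproduces the CRC remainder 111, the transmitted string 11100111, and the zero remainder at the checker.

def crc_remainder(data: str, divisor: str) -> str:
    """Append len(divisor)-1 zeros and perform modulo-2 (XOR) long division."""
    n = len(divisor) - 1
    bits = list(data + '0' * n)
    for i in range(len(data)):                       # slide the divisor across the dividend
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return ''.join(bits[-n:])                        # the last n bits are the CRC remainder

def crc_check(codeword: str, divisor: str) -> bool:
    """Divide the received unit by the divisor; a zero remainder means no error detected."""
    bits = list(codeword)
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return all(b == '0' for b in bits[-(len(divisor) - 1):])

data, divisor = "11100", "1001"
crc = crc_remainder(data, divisor)                   # '111'
codeword = data + crc                                # '11100111' is sent across the network
assert crc_check(codeword, divisor)                  # receiver: remainder 0, data accepted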

Error Correction Techniques


• Error correction techniques find out the exact number of bits that have been corrupted
as well as their locations. There are two principal ways:
• Backward Error Correction (Retransmission) − If the receiver detects an error in the
incoming frame, it requests the sender to retransmit the frame. It is a relatively simple
technique, but it can be used efficiently only where retransmission is not expensive, as in
fiber optics, and the time for retransmission is low relative to the requirements of the
application.
• Forward Error Correction − If the receiver detects some error in the incoming frame,
it executes error-correcting code that generates the actual frame. This saves bandwidth
required for retransmission. It is inevitable in real-time systems. However, if there are too
many errors, the frames need to be retransmitted.

Error Correction Techniques:


Types of Error Correcting Codes
• Block codes − The message is divided into fixed-sized blocks of bits, to which redundant
bits are added for error detection or correction.
• Convolutional codes − The message comprises data streams of arbitrary length, and
parity symbols are generated by the sliding application of a Boolean function to the data
stream.
The four main error correction codes are
• Hamming Codes
• Binary Convolution Code
• Reed – Solomon Code
• Low-Density Parity-Check Code
Hamming Codes
• Hamming code is a set of error-correction codes that can be used to detect and correct
the errors that can occur when data is moved or stored from the sender to the
receiver. It is a technique developed by R.W. Hamming for error correction.
Redundant bits –
• Redundant bits are extra binary bits that are generated and added to the information-
carrying bits of a data transfer so that errors introduced during the transfer can be
detected and corrected.
Hamming Codes
• The number of redundant bits can be calculated using the following formula: 2^r ≥ m
+r+1
where,
m = Message (data) bit
r = redundant bit
Parity bits –
• A parity bit is a bit appended to the data bits which ensures that the total number of 1s
is even (even parity) or odd (odd parity).
• While checking even parity at the receiver, if the total number of 1s is odd, the value of
the check bit is written as 1 (which means an error is present); if it is even, the value of
the check bit is written as 0.
Hamming Codes : At Sender Side (Error Detection)
• In 2^r ≥ m + r + 1, the r redundant bits are placed at the positions that are powers of 2.
When r = 4:
r1 = 2^0 = 1, r2 = 2^1 = 2, r3 = 2^2 = 4, r4 = 2^3 = 8

• Parity bits when r = 4, and the positions used to find the value of each parity bit:
P8 = 2^3, P4 = 2^2, P2 = 2^1, P1 = 2^0
Hamming Codes : At Sender Side
• Suppose the number of data bits is 4; the number of redundant bits is 3, calculated using
the formula 2^r ≥ m + r + 1:
• m=4
• r=3
 2^0 ≥ 4 + 0 + 1  Not Satisfied
 2^1 ≥ 4 + 1 + 1  Not Satisfied
 2^2 ≥ 4 + 2 + 1  Not Satisfied
 2^3 ≥ 4 +3 + 1  Satisfied
Hamming Codes : At Sender Side
Example 1:
• Data (m) =1001
• Redundancy = 3
• So, when r->3 the position of parity bits : P4 P2 P1
Hamming Codes : At Sender Side
• The length of the codeword is m + r = 4 + 3 = 7, so the seven bits are:
• Data bits (m) = 1001
• Parity bits : P4 P2 P1

Position: 7 6 5 4  3 2  1
Bit:      1 0 0 P4 1 P2 P1
Hamming Codes : At Sender Side
All possible values of P4 P2 P1:
1 = 0 0 1
2 = 0 1 0
3 = 0 1 1
4 = 1 0 0
5 = 1 0 1
6 = 1 1 0
7 = 1 1 1
Hamming Codes : At Sender Side
• Calculation of parity (even parity over the positions each parity bit covers):
P1 covers positions 1, 3, 5, 7 -> P1 1 0 1 -> P1 = 0
P2 covers positions 2, 3, 6, 7 -> P2 1 0 1 -> P2 = 0
P4 covers positions 4, 5, 6, 7 -> P4 0 0 1 -> P4 = 1
• Now the parity bits are P1, P2, P4 -> 0, 0, 1, so the covered groups become:
P1 1 0 1 -> 0 1 0 1
P2 1 0 1 -> 0 1 0 1
P4 0 0 1 -> 1 0 0 1
Hamming Codes : At Sender Side
Here:
P1 = 0
P2 = 0
P4 = 1

Position: 7 6 5 4  3 2  1
Bit:      1 0 0 P4 1 P2 P1

• Now we replace the parity bit placeholders with these values:

Position: 7 6 5 4 3 2 1
Bit:      1 0 0 1 1 0 0
Hamming Codes : At Receiver Side
• The parity bit values sent by the sender to the receiver are:
P1 -> 0
P2 -> 0
P4 -> 1

Position: 7 6 5 4 3 2 1
Bit:      1 0 0 1 1 0 0
Hamming Codes : At Receiver Side (correcting errors)
The receiver may receive the data with an error:

Position: 7 6 5 4 3 2 1
Bit:      1 1 0 1 1 0 0

• At the receiver side the parities are recomputed:
P1 covers positions 1, 3, 5, 7 -> 0 1 0 1 -> P1 = 0
P2 covers positions 2, 3, 6, 7 -> 0 1 1 1 -> P2 = 1
P4 covers positions 4, 5, 6, 7 -> 1 0 1 1 -> P4 = 1
Hamming Codes : At Receiver Side
• The recomputed parity values are:
P4 P2 P1 = 1 1 0
The received data bits are:

Position: 7 6 5 4 3 2 1
Bit:      1 1 0 1 1 0 0

• The position at which the error occurred: 1 1 0 (binary) = 6
• The receiver corrects the data bits: the bit at position 6 is flipped back to zero:

Position: 7 6 5 4 3 2 1
Bit:      1 0 0 1 1 0 0
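The whole procedure of Example 1 can be sketched in a few lines of Python: even parity bits P1, P2, P4 are computed at the sender, and the receiver recomputes them to locate and flip the single corrupted bit. Bit positions are numbered 1 to 7 as in the tables above.

def hamming74_encode(d7, d6, d5, d3):
    """Place data bits at positions 7, 6, 5, 3 and compute even parity bits P1, P2, P4."""
    code = {7: d7, 6: d6, 5: d5, 3: d3}
    code[1] = (code[3] + code[5] + code[7]) % 2     # P1 covers positions 1, 3, 5, 7
    code[2] = (code[3] + code[6] + code[7]) % 2     # P2 covers positions 2, 3, 6, 7
    code[4] = (code[5] + code[6] + code[7]) % 2     # P4 covers positions 4, 5, 6, 7
    return [code[p] for p in range(7, 0, -1)]       # listed from position 7 down to 1

def hamming74_correct(bits):
    """Recompute the parities at the receiver; the syndrome gives the error position."""
    code = {7 - i: b for i, b in enumerate(bits)}   # map the list back to positions 7..1
    c1 = (code[1] + code[3] + code[5] + code[7]) % 2
    c2 = (code[2] + code[3] + code[6] + code[7]) % 2
    c4 = (code[4] + code[5] + code[6] + code[7]) % 2
    error_pos = c4 * 4 + c2 * 2 + c1                # 0 means no error detected
    if error_pos:
        code[error_pos] ^= 1                        # flip the corrupted bit
    return [code[p] for p in range(7, 0, -1)]

sent = hamming74_encode(1, 0, 0, 1)                 # data 1001 -> [1, 0, 0, 1, 1, 0, 0]
received = [1, 1, 0, 1, 1, 0, 0]                    # bit at position 6 corrupted in transit
print(hamming74_correct(received))                  # [1, 0, 0, 1, 1, 0, 0] restored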

Example 2: Hamming Code: Error Detection


• The first step is to identify the bit positions of the data; all the bit positions which are
powers of 2 are marked as parity bits (e.g. 1, 2, 4, 8, etc.). The following image will help
in visualizing the received hamming code of 7 bits.

• First, we need to detect whether there are any errors in this received hamming code.
• Step 1: For checking parity bit P1, use check one and skip one method, which means,
starting from P1 and then skip P2, take D3 then skip P4 then take D5, and then skip D6
and take D7, this way we will have the following bits,
Hamming Codes
• As we can observe, the total number of 1s is odd, so we will write the value of the parity
bit as P1 = 1. This means an error is present.
• Step 2: Check for P2 but while checking for P2, we will use check two and skip
two method, which will give us the following data bits. But remember since we are
checking for P2, so we have to start our count from P2 (P1 should not be considered).

Hamming Codes
As we can observe, the number of 1s is even, so we will write the value of P2 = 0. This
means there is no error in this group.
Step 3: Check for P4. While checking for P4, we will use the check four and skip four method,
which will give us the following data bits. But remember, since we are checking for P4, we
start our count from P4 (P1 & P2 should not be considered).

Hamming Codes
• As we can observe, the number of 1s is odd, so we will write the value of P4 = 1.
This means an error is present.
• So, from the above parity analysis, P1 & P4 are not equal to 0, so we can clearly say that
the received hamming code has errors.
Hamming Code: Error Correction
• Since we found that the received code has an error, we must now correct it. To correct
the errors, use the following steps:

• Now the error word E will be:

• Now we have to determine the decimal value of this error word, 101, which is 5.
• We get E = 5, which states that the error is in the fifth bit position. To correct it, just
invert the fifth bit. So the corrected data will be:

ELEMENTARY DATA LINK PROTOCOLS

• Protocols in the data link layer are designed so that this layer can perform its basic
functions: framing, error control and flow control.
• Framing is the process of dividing bit - streams from physical layer into data frames
whose size ranges from a few hundred to a few thousand bytes.
• Error control mechanisms deals with transmission errors and retransmission of
corrupted and lost frames.
• Flow control regulates the speed of delivery so that a fast sender does not drown a slow
receiver.
Types of Data Link Protocols
• Data link protocols can be broadly divided into two categories, depending on whether
the transmission channel is noiseless or noisy.

Elementary Data Link Protocols Are:


1. Simplex Protocol
2. A Simplex Stop-and-Wait Protocol for an Error-Free Channel
3. A Simplex Stop-and-Wait Protocol for a Noisy Channel

Simplex Protocol

• It is a unidirectional protocol in which data frames are traveling in only one direction-
from the sender to receiver.
• We assume that the receiver can immediately handle any frame it receives, with
negligible processing time.
• The data link layer of the receiver immediately removes the header from the frame and
hands the data packet to its network layer, which can also accept the packet immediately.
Simplex Protocol : Design
• Sender Site: The data link layer at the sender site
gets data from its network layer, makes a frame out of the data, and sends it.
• Receiver Site: The data link layer at the receiver site receives a frame from its physical
layer, extracts data from the frame, and delivers the data to its network layer.
Simplex Protocol : Design
• The data link layers of the sender and receiver provide transmission services for their
network layers. The data link layers use the services provided by their physical layers
(such as signaling, multiplexing, and so on) for the physical transmission of bits
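Since the algorithm figures are not reproduced here, the following is a minimal Python simulation of the sender and receiver loops described above, with a plain list standing in for the error-free physical channel (all names are illustrative).

channel = []                                # the "physical layer": an error-free channel
packets_to_send = [b"pkt1", b"pkt2", b"pkt3"]
delivered = []

def simplex_sender():
    for packet in packets_to_send:          # get data from the network layer
        frame = {"info": packet}            # make a frame out of the data
        channel.append(frame)               # send it; no ACK, no flow control

def simplex_receiver():
    while channel:
        frame = channel.pop(0)              # receive a frame from the physical layer
        delivered.append(frame["info"])     # strip the header, deliver to the network layer

simplex_sender()
simplex_receiver()
assert delivered == packets_to_send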
Algorithm of Simplex Protocol for Sender Site

Algorithm of Simplex Protocol for Receiver Site


Simplex Protocol- Flow Diagram

Simplex Stop – and – Wait Protocol


• The sender sends one frame, stops until it receives confirmation from the receiver (okay
to go ahead), and then sends the next frame.
• We still have unidirectional communication for data frames, but ACK frames (simple
tokens of acknowledgment) travel from the other direction. We add flow control to our
previous protocol.
Simplex Stop and Wait protocol for an error-free channel: Design
• Sender Site: The data link layer in the sender site waits for the network layer for a data
packet.
• It then checks whether it can send the frame. If it receives a positive notification from the
physical layer, it makes frames out of the data and sends it.
• It then waits for an acknowledgement before sending the next frame.
Simplex Stop and Wait protocol for an error-free channel: Design
• Receiver Site: The data link layer in the receiver site waits for a frame to arrive.
• When it arrives, the receiver processes it and delivers it to the network layer. It then sends
an acknowledgement back to the sender.
Sender Site: Algorithm of Simplex Stop – and – Wait Protocol for Noiseless Channel

Receiver Site : Algorithm of Simplex Stop – and – Wait Protocol for Noiseless Channel
Simplex Stop – and – Wait Protocol for Noiseless Channel

Simplex Stop – and – Wait Protocol for Noisy Channel


• Simplex Stop – and – Wait protocol for noisy channel is data link layer protocol for data
communications with error control and flow control mechanisms.
• It is popularly known as Stop – and –Wait Automatic Repeat Request (Stop – and –
Wait ARQ) protocol. It adds error control facilities to Stop – and – Wait protocol.

STOP-AND-WAIT AUTOMATIC REPEAT REQUEST

• Stop-and-Wait Automatic Repeat Request (Stop-and Wait ARQ), adds a simple error
control mechanism to the Stop-and-Wait Protocol.
• To detect and correct corrupted frames, we need to add redundancy bits to our data
frame. When the frame arrives at the receiver site, it is checked and if it is corrupted, it is
silently discarded. The detection of errors in this protocol is manifested by the silence of
the receiver.
• The received frame could be the correct one, or a duplicate, or a frame out of order. The
solution is to number the frames.
• When the receiver receives a data frame that is out of order, this means that frames were
either lost or duplicated.
• The corrupted and lost frames need to be resent in this protocol.
• When the sender sends a frame, it keeps a copy and, at the same time, starts a timer. If the
timer expires and there is no ACK for the sent frame, the frame is resent, the copy is held,
and the timer is restarted.
• Since the protocol uses the stop-and-wait mechanism, there is only one specific frame
that needs an ACK even though several copies of the same frame can be in the network.
• Since an ACK frame can also be corrupted or lost, the ACK frame for this protocol has
a sequence number field.
• The protocol specifies that frames need to be numbered. This is done by using sequence
numbers. A field is added to the data frame to hold the sequence number of that frame.
• The sequence numbers must be suitable for both data frames and ACK frames, we use
this convention: The acknowledgment numbers always announce the sequence number of
the next frame expected by the receiver.

ELEMENTARY DATA LINK PROTOCOLS

• Simplex Stop – and – Wait Protocol for Noisy Channel : Design


• Sender Site − At the sender site, a field is added to the frame to hold a sequence number.
If data is available, the data link layer makes a frame with the certain sequence number
and sends it.
• The sender then waits for arrival of acknowledgment for a certain amount of time.
• If it receives a positive acknowledgment for the frame with that sequence number within
the stipulated time, it sends the frame with the next sequence number. Otherwise, it resends
the same frame.

Simplex Stop – and – Wait Protocol for Noisy Channel : Design

Receiver Site − The receiver also keeps a sequence number of the frames expected for arrival.
• When a frame arrives, the receiver processes it and checks whether it is valid or not.
• If it is valid and its sequence number matches the sequence number of the expected
frame, it extracts the data and delivers it to the network layer.
• It then sends an acknowledgement for that frame back to the sender along with its
sequence number.
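A compact, self-contained sketch of Stop-and-Wait ARQ follows; it folds sender and receiver into one loop and models lost frames and lost ACKs with a random drop function, so the sequence-number and retransmission logic described above can be seen end to end. The helper names and the loss probability are made up for the example.

import random

random.seed(1)

def unreliable(frame, loss_prob=0.3):
    """Return None to model a lost frame or a lost ACK."""
    return None if random.random() < loss_prob else frame

def stop_and_wait_arq(packets):
    delivered = []
    send_seq = 0          # sender's 1-bit sequence number
    expected = 0          # receiver's expected sequence number
    for packet in packets:
        while True:
            # Sender: transmit the frame (the copy is kept by staying in this loop)
            frame = unreliable({"seq": send_seq, "info": packet})
            ack = None
            if frame is not None:
                # Receiver: accept the in-order frame, silently discard duplicates, always ACK
                if frame["seq"] == expected:
                    delivered.append(frame["info"])
                    expected = (expected + 1) % 2
                ack = unreliable({"ack": expected})   # ACK names the next frame expected
            # Sender: a missing or wrong ACK behaves like a timeout -> retransmit the copy
            if ack is not None and ack["ack"] == (send_seq + 1) % 2:
                send_seq = (send_seq + 1) % 2
                break
    return delivered

print(stop_and_wait_arq([b"a", b"b", b"c"]))   # [b'a', b'b', b'c'] despite the losses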
Sender Site Algorithm of Simplex Stop – and – Wait Protocol for Noisy Channel

Simplex stop – and – wait protocol for noisy channel – Flow diagram
Simplex stop – and – wait protocol for noisy channel − Flow diagram
SLIDING WINDOW PROTOCOLS
Sliding window protocols are data link layer protocols for reliable and sequential delivery of data
frames.
• The sliding window is also used in Transmission Control Protocol.
Types of Sliding Window Protocols
1. A one-bit sliding window protocol
2. A protocol using Go-Back-N
3. A protocol using Selective Repeat
A one-bit sliding window protocol
• In one – bit sliding window protocol, the size of the window is 1.
• So the sender transmits a frame, waits for its acknowledgment, then transmits the next
frame.
• Thus it uses the concept of the stop-and-wait protocol. This protocol provides for
full-duplex communication.
• Hence, the acknowledgment is attached along with the next data frame to be sent, by
piggybacking.
A one-bit sliding window protocol: Working Principle
• The data frames to be transmitted additionally have an acknowledgment field, ack field
that is of a few bits length.
• The ack field contains the sequence number of the last frame received without error.
A one-bit sliding window protocol: Working Principle
• If this sequence number matches with the sequence number of the frame to be sent, then
it is inferred that there is no error and the frame is transmitted.
• Otherwise, it is inferred that there is an error in the frame and the previous frame is
retransmitted.
A one-bit sliding window protocol : Example
• The following diagram depicts a scenario with sequence numbers 0, 1, 2, 3, 0, 1, 2 and so
on.
• It depicts the sliding windows in the sending and the receiving stations during frame
transmission.
The algorithm of One – bit Sliding Window Protocol
A protocol using Go-Back-N
• In the Go-Back-N ARQ protocol, if one frame is lost or damaged, the sender retransmits
all the frames sent since the last frame for which it received a positive ACK.
• It is a case of sliding window protocol having a sending window size of N and a receiving
window size of 1.
A protocol using Go-Back-N: Working Principle
• Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgment for the first frame. The frames are sequentially numbered, and a finite
number of frames can be outstanding at any time.
• The maximum number of frames that can be sent depends upon the size of the sending
window.
• If the acknowledgment of a frame is not received within an agreed upon time period, all
frames starting from that frame are retransmitted.
A protocol using Go-Back-N: Working Principle
• The size of the sending window determines the sequence number of the outbound frames.
• If the sequence number of the frames is an n-bit field, then the range of sequence
numbers that can be assigned is 0 to 2^n − 1.
• Consequently, the size of the sending window is 2^n − 1. Thus, in order to accommodate a
sending window size of 2^n − 1, an n-bit sequence number is chosen.
A protocol using Go-Back-N: Working Principle
• The sequence numbers are numbered modulo 2^n. For example, if the sending window
size is 4, then the sequence numbers will be 0, 1, 2, 3, 0, 1, 2, 3, 0, 1, and so on. The
number of bits in the sequence number is 2, generating the binary sequence 00, 01, 10,
11.
• The size of the receiving window is 1.
• Sender Site Algorithm of Go-Back-N Protocol
A Protocol using Go-Back-N
• In the above figure, three frames have been transmitted before an error is discovered in the
third frame.
• In this case, ACK 2 has been returned telling that the frames 0,1 have been received
successfully without any error.
• The receiver discovers the error in data 2 frame, so it returns the NAK 2 frame.
• The frame 3 is also discarded as it is transmitted after the damaged frame. Therefore, the
sender retransmits the frames 2,3.
Protocol Go-Back-N

A Protocol using Go-Back-N


• Lost Data Frame: In sliding window protocols, data frames are sent sequentially. If any
of the frames is lost, then the next frame to arrive at the receiver is out of sequence.
• The receiver checks the sequence number of each of the frame, discovers the frame that
has been skipped, and returns the NAK for the missing frame.
• The sending device retransmits the frame indicated by NAK as well as the frames
transmitted after the lost frame.
A Protocol using Go-Back-N
• Lost Acknowledgement: The sender can send as many frames as the windows allow
before waiting for any acknowledgement.
• Once the limit of the window is reached, the sender has no more frames to send; it must
wait for the acknowledgement.
• If the acknowledgement is lost, then the sender could wait forever. To avoid such
situation, the sender is equipped with the timer that starts counting whenever the window
capacity is reached.
• If the acknowledgement has not been received within the time limit, then the sender
retransmits all the frames sent since the last acknowledgement.
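A simplified, round-based simulation of Go-Back-N is sketched below: the sender keeps up to 2^n − 1 frames outstanding, the receiver (window size 1) discards out-of-order frames, acknowledgements are cumulative, and on a timeout the sender goes back and resends everything from the oldest unacknowledged frame. It illustrates the bookkeeping only, not a full protocol implementation; the loss probability and frame data are made up.

import random

random.seed(3)
SEQ_BITS = 3
MAX_SEQ = 2 ** SEQ_BITS          # sequence numbers run 0 .. 2^n - 1
WINDOW = MAX_SEQ - 1             # sending window size is 2^n - 1

def go_back_n(packets, loss_prob=0.2):
    delivered = []
    base = 0                     # index of the oldest unacknowledged packet
    next_to_send = 0
    expected = 0                 # receiver: sequence number of the next frame expected
    while base < len(packets):
        # Send (or resend) every packet that fits in the outstanding window
        for i in range(next_to_send, min(base + WINDOW, len(packets))):
            if random.random() >= loss_prob and i % MAX_SEQ == expected:
                # Receiver accepts only the in-order frame and discards the rest
                delivered.append(packets[i])
                expected = (expected + 1) % MAX_SEQ
        acked = len(delivered)   # cumulative ACK covers everything delivered so far
        if acked == base:
            next_to_send = base  # timeout: go back N, resend from the oldest frame
        else:
            base = acked         # slide the window past the acknowledged frames
            next_to_send = base
    return delivered

data = [f"frame-{i}" for i in range(10)]
assert go_back_n(data) == data   # all frames eventually delivered, in order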
A protocol using Selective Repeat
• Selective-Repeat ARQ technique is more efficient than Go-Back-n ARQ.
• In this technique, only those frames are retransmitted for which negative
acknowledgement (NAK) has been received.
• The receiver storage buffer holds the correctly received frames that follow a damaged or
lost frame until the missing frame is received.
• The sender provides a searching mechanism that selects only the requested frame for
retransmission.
A protocol using Selective Repeat
• Selective repeat protocol, also called Selective Repeat ARQ (Automatic Repeat reQuest),
is a data link layer protocol that uses sliding window method for reliable delivery of data
frames. Here, only the erroneous or lost frames are retransmitted.
• It uses two windows of equal size: a sending window that stores the frames to be sent and
a receiving window that stores the frames received by the receiver.
A protocol using Selective Repeat : Working Principle
• Selective Repeat protocol provides for sending multiple frames depending upon the
availability of frames in the sending window, even if it does not receive
acknowledgement for any frame.
• The maximum number of frames that can be sent depends upon the size of the sending
window.
A protocol using Selective Repeat : Working Principle
• The sender continues to send frames from sending window. Once, it has sent all the
frames from the window, it retransmits the frame whose sequence number is given by the
Negative acknowledgements.
• The receiver records the sequence number of the earliest incorrect or un-received frame.
It sends the sequence number of the missing frame along with every acknowledgement
frame.
A protocol using Selective Repeat
Stop and Wait protocol Vs Sliding Window protocol
Stop and Wait protocol
• Stop and Wait protocol is a flow control protocol. In this protocol, the sender sends one
frame at a time and waits for an acknowledgment from the receiver. Once acknowledged,
the sender sends another frame to the receiver.
Sliding Window protocol
• Sliding Window protocol is also a flow control protocol. In this protocol, the sender sends
multiple frames at a time and retransmits the frames which are found to be corrupted or
damaged.

Comparison of Stop and Wait protocol and Sliding Window protocol:

1. Mechanism: In Stop and Wait protocol, the sender sends a single frame and waits for an
acknowledgment from the receiver. In Sliding Window protocol, the sender sends multiple
frames at a time and retransmits the damaged frames.

2. Efficiency: Stop and Wait protocol is less efficient. Sliding Window protocol is more
efficient than Stop and Wait protocol.

3. Window Size: The sender's window size in Stop and Wait protocol is 1. The sender's
window size in Sliding Window protocol varies from 1 to n.

4. Sorting: Sorting of frames is not needed in Stop and Wait protocol. In Sliding Window
protocol, sorting of frames helps increase the efficiency of the protocol.

5. Efficiency formula: Stop and Wait protocol efficiency is 1/(1+2a), where a is the ratio of
propagation delay to transmission delay. Sliding Window protocol efficiency is N/(1+2a),
where N is the number of window frames.

6. Duplex: Stop and Wait protocol is half duplex in nature. Sliding Window protocol is full
duplex in nature.
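The efficiency entries in the comparison above can be checked numerically; here a is the ratio of propagation delay to transmission delay and N is the sender window size (the numbers below are illustrative).

def stop_and_wait_efficiency(a: float) -> float:
    return 1 / (1 + 2 * a)

def sliding_window_efficiency(N: int, a: float) -> float:
    return min(1.0, N / (1 + 2 * a))     # utilization cannot exceed 100%

a = 2.0                                  # e.g. propagation delay = 2 x transmission delay
print(stop_and_wait_efficiency(a))       # 0.2 -> the link is idle 80% of the time
print(sliding_window_efficiency(4, a))   # 0.8 -> pipelining keeps the link much busier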
Stop and Wait, GoBack-N Vs Selective Repeat protocols

EXAMPLE DATA LINK PROTOCOLS


• Here we will examine the data link protocols found on point-to-point lines in the Internet
in two common situations.
• The first situation is when packets are sent over SONET optical fiber links in wide-area
networks.
• These links are widely used, for example, to connect routers in the different locations of
an ISP’s network.
• The second situation is for ADSL links running on the local loop of the telephone
network at the edge of the Internet. These links connect millions of individuals and
businesses to the Internet.
• The Internet needs point-to-point links for these uses, as well as for dial-up modems,
leased lines, cable modems, and so on.
• A standard protocol called PPP (Point-to-Point Protocol) is used to send packets over
these links.

HDLC (High-Level Data Link Control)

HDLC (High-Level Data Link Control) is a bit-oriented protocol that is used for communication
over point-to-point and multipoint links. This protocol implements the mechanism of
ARQ (Automatic Repeat Request). With the help of the HDLC protocol, full-duplex
communication is possible.
HDLC is the most widely used protocol and offers reliability, efficiency, and a high level of
flexibility.
In order to make the HDLC protocol applicable for various network configurations, there are
three types of stations and these are as follows:
 Primary Station This station mainly looks after data link management. In the case of
communication between the primary and a secondary station, it is the responsibility of the
primary station to connect and disconnect the data link. The frames issued by the primary
station are commonly known as commands.
 Secondary Station The secondary station operates under the control of the primary
station. The Frames issued by the secondary stations are commonly known as responses.
 Combined Station The combined station acts as both Primary stations as well as
Secondary stations. The combined station issues both commands as well as responses.

Transfer Modes in HDLC

The HDLC protocol offers two modes of transfer that mainly can be used in different
configurations. These are as follows:
 Normal Response Mode(NRM)
 Asynchronous Balance Mode(ABM)

Let us now discuss both these modes one by one:

1. Normal Response Mode(NRM)


In this mode, the configuration of the stations is unbalanced. There is one primary station and
multiple secondary stations. The primary station can send commands and the
secondary stations can only respond. This mode is used for both point-to-point as well
as multipoint links.

2. Asynchronous Balance Mode(ABM)


In this mode, the configuration of the stations is balanced. In this mode, the link is point-to-point,
and each station can function both as a primary and as a secondary.
Asynchronous Balance mode(ABM) is a commonly used mode today.
HDLC Frames
In order to provide the flexibility necessary to support all the options possible in the
modes and configurations just described, there are three types of frames defined
in HDLC:
 Information Frames(I-frames) These frames are used to transport the user data and the
control information that is related to the user data. If the first bit of the control field is 0,
then it is identified as an I-frame.
 Supervisory Frames(S-frames) These frames are only used to transport the control
information. If the first two bits of the control field are 1 and 0, then the frame is
identified as an S-frame.
 Unnumbered Frames(U-Frames) These frames are mainly reserved for system
management. These frames are used for exchanging control information between the
communicating devices.
Each type of frame mainly serves as an envelope for the transmission of a different type of
message.
Frame Format
There are up to six fields in each HDLC frame: a beginning flag field, an address field,
a control field, an information field, a frame check sequence (FCS) field, and an ending
flag field.
In the case of the multiple-frame transmission, the ending flag of the one frame acts as the
beginning flag of the next frame.
Let us take a look at different HDLC frames:
1. Flag Field
This field of the HDLC frame is an 8-bit sequence with the bit pattern 01111110, and
it is used to identify the beginning and end of the frame. The flag field also serves as a
synchronization pattern for the receiver.
2. Address Field
It is the second field of the HDLC frame and it mainly contains the address of the secondary
station. This field can be 1 byte or several bytes long which mainly depends upon the need of the
network. In case if the frame is sent by the primary station, then this field contains the
address(es) of the secondary stations. If the frame is sent by the secondary station, then this field
contains the address of the primary station.
3. Control Field
This is the third field of the HDLC frame and it is a 1 or 2-byte segment of the frame and is
mainly used for flow control and error control. Bits interpretation in this field mainly depends
upon the type of the frame.
4. Information Field
This field of the HDLC frame contains the user's data from the network layer or the management
information. The length of this field varies from one network to another.
5. FCS Field
FCS means Frame check sequence and it is the error detection field in the HDLC protocol. There
is a 16 bit CRC code for error detection.
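As a small illustration of the control-field rules given above (first bit 0 for I-frames, first two bits 1 and 0 for S-frames, 1 and 1 for U-frames), the following sketch classifies a one-byte control field, assuming the first transmitted bit is the least significant bit of the byte.

def hdlc_frame_type(control: int) -> str:
    """Classify a 1-byte HDLC control field; bit 0 is taken as the first transmitted bit."""
    if control & 0b1 == 0:          # first bit 0 -> Information frame
        return "I-frame"
    if control & 0b11 == 0b01:      # first two bits 1, 0 -> Supervisory frame
        return "S-frame"
    return "U-frame"                # first two bits 1, 1 -> Unnumbered frame

print(hdlc_frame_type(0b00000000))  # I-frame
print(hdlc_frame_type(0b00000001))  # S-frame
print(hdlc_frame_type(0b00000011))  # U-frame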

MULTIPLE-ACCESS PROTOCOLS
• The data link layer can be considered as two sub-layers. The upper sub-layer is responsible
for data link control, and the lower sub-layer is responsible for resolving access to the
shared media. If the channel is dedicated, the lower sub-layer is not needed.
• The data link layer is thus divided into two functionality-oriented sub-layers.

• The upper sub-layer that is responsible for flow and error control is called the logical link
control (LLC) layer
• The lower sub-layer that is mostly responsible for multiple-access resolution is called the
media access control (MAC) layer.
• When nodes or stations are connected and use a common link, called a multi-point or
broadcast link.
• We need a multiple-access protocol to coordinate access to the link.

• In a random access method, each station has the right to the medium without being
controlled by any other station.
• However, if more than one station tries to send, there is an access conflict-collision-and
the frames will be either destroyed or modified.
ALOHA – ALOHA stands for Additive Links On-line Hawaii Area. It was developed at the
University of Hawaii in 1971. It was designed for wireless LANs but is also applicable to shared
media. In ALOHA, multiple stations can transmit data at the same time, and the protocol can
handle the resulting collisions.
Pure Aloha:
• When a station sends data it waits for an acknowledgement. If the acknowledgement
doesn’t come within the allotted time then the station waits for a random amount of time
called back-off time and re-sends the data.
• Since different stations wait for different amounts of time, the probability of the same
frames colliding again is reduced.

Pure Aloha: Flow diagram


Slotted Aloha:
• It is similar to pure aloha, except that we divide time into slots and sending of data is
allowed only at the beginning of these slots.
• If a station misses out the allowed time, it must wait for the next slot. This reduces the
probability of collision.
Comparison of PURE ALOHA and SLOTTED ALOHA:

1. Data transmission: In pure ALOHA, stations can transmit the data randomly, i.e. any
number of stations can transmit data at any time. In slotted ALOHA, a station can transmit
data only at the beginning of a time slot.

2. Time: In pure ALOHA, time is continuous and not globally synchronized with any other
station. In slotted ALOHA, time is discrete and globally synchronized.

3. Vulnerable time: 2 * frame transmission time for pure ALOHA; 1 * frame transmission
time for slotted ALOHA.

4. Probability of successful transmission of a data packet: G*e^(-2G) for pure ALOHA and
G*e^(-G) for slotted ALOHA, where G = number of stations willing to transmit data.

5. Maximum efficiency: 18.4% for pure ALOHA; 36.8% for slotted ALOHA.

6. Collision status: Pure ALOHA does not reduce the total number of collisions to half.
Slotted ALOHA reduces the total number of collisions to half and doubles the efficiency of
pure ALOHA.
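The throughput and maximum-efficiency figures above follow from S = G*e^(-2G) for pure ALOHA and S = G*e^(-G) for slotted ALOHA, where G is the offered load (the "number of stations willing to transmit data" in the comparison). A quick numerical check:

import math

def pure_aloha_throughput(G: float) -> float:
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    return G * math.exp(-G)

# The maxima occur at G = 0.5 and G = 1.0 respectively
print(round(pure_aloha_throughput(0.5), 3))    # ~0.184 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 3)) # ~0.368 -> 36.8%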

2. CSMA – Carrier Sense Multiple Access ensures fewer collisions as the station is required to
first sense the medium (for idle or busy) before transmitting data.
• If it is idle, then it sends data; otherwise it waits till the channel becomes idle. However,
there is still a chance of collision in CSMA due to propagation delay.
2. CSMA – For example, if station A wants to send data, it will first sense the medium.
• If it finds the channel idle, it will start sending data. However, by the time the first bit of
data from station A is transmitted (delayed due to propagation delay),
• station B may also want to send data; when it senses the medium, it will also find it idle and
will also send data. This will result in a collision of the data from stations A and B.
CSMA access modes-
• 1-persistent method is simple and straightforward. In this method, after the station finds
the line idle, it sends its frame immediately (with probability 1). This method has the
highest chance of collision because two or more stations may find the line idle and send
their frames immediately.
• Non-Persistent: In the non-persistent method, a station that has a frame to send senses
the line. If the line is idle, it sends immediately. If the line is not idle, it waits a random
amount of time and then senses the line again. The non-persistent approach reduces the
chance of collision.
CSMA access modes-
• P-persistent: The p-persistent method is used if the channel has time slots with a slot
duration equal to or greater than the maximum propagation time.
• The p-persistent approach combines the advantages of the other two strategies.
• It reduces the chance of collision and improves efficiency.
• In this method, after the station finds the line idle it follows these steps:
1. With probability p, the station sends its frame.
2. With probability q = 1 - p, the station waits for the beginning of the next time slot and
checks the line again.
a. If the line is idle, it goes to step 1.
b. If the line is busy, it acts as though a collision has occurred and uses the back-off
procedure.
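A sketch of the p-persistent decision loop described above; channel sensing is abstracted behind a caller-supplied channel_idle() callable, and the back-off procedure itself is outside the sketch.

import random

def p_persistent_attempt(p, channel_idle, rng=random.random):
    """Decide, slot by slot, when to transmit once the line has been found idle.

    channel_idle() is any callable that reports whether the line is idle in the
    current slot; the function returns "transmit" when the station should send,
    or "backoff" when it must behave as if a collision occurred.
    """
    while channel_idle():
        if rng() < p:          # step 1: with probability p, send the frame now
            return "transmit"
        # step 2: with probability 1 - p, wait for the start of the next time slot
        # and check the line again (the while condition re-senses it)
    return "backoff"           # line became busy: run the back-off procedure

# Example: a line that stays idle for 5 slots, then becomes busy
slots = iter([True, True, True, True, True, False])
print(p_persistent_attempt(0.3, lambda: next(slots)))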
CSMA/CD : Carrier sense multiple access with collision detection (CSMA/CD) augments the
algorithm to handle the collision.
• In this method, a station monitors the medium after it sends a frame to see if the
transmission was successful. If so, the station is finished. If, however, there is a collision,
the frame is sent again.
The algorithm of CSMA/CD is:
• When a frame is ready, the transmitting station checks whether the channel is idle or
busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station starts transmitting and continually monitors the channel
to detect collision.
• If a collision is detected, the station starts the collision resolution algorithm.
• If no collision is detected, the station resets the retransmission counters and completes
frame transmission.
The algorithm of Collision Resolution is:
• The station continues transmission of the current frame for a specified time along with a
jam signal, to ensure that all the other stations detect collision.
• The station increments the retransmission counter.
• If the maximum number of retransmission attempts is reached, then the station aborts
transmission.
• Otherwise, the station waits for a back-off period, which is generally a function of the
number of collisions, and restarts the main algorithm.
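A sketch of the CSMA/CD retransmission loop with binary exponential back-off; the actual transmission, collision detection, and jam signal are abstracted behind a caller-supplied callable, and the contention-slot wait is only indicated in a comment.

import random

def csma_cd_send(transmit_attempt, max_attempts=16):
    """Binary exponential back-off loop around the transmission of one frame.

    transmit_attempt() is any callable that sends the frame while monitoring the
    medium and returns True if a collision was detected, False on success.
    """
    for attempts in range(1, max_attempts + 1):
        collided = transmit_attempt()
        if not collided:
            return True                         # success: reset counters, transmission done
        # The jam signal is assumed to be sent inside transmit_attempt() on collision.
        slots = random.randint(0, 2 ** min(attempts, 10) - 1)
        # A real adapter would now wait `slots` contention slots before restarting.
    return False                                # maximum attempts reached: abort transmission

# Example: the first two attempts collide, the third succeeds
outcomes = iter([True, True, False])
print(csma_cd_send(lambda: next(outcomes)))     # True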
CSMA/CA : In CSMA/CA, if the station finds the channel busy, it does not restart the timer of
the contention window; it stops the timer and restarts it when the channel becomes idle.
MULTIPLE-ACCESS PROTOCOLS- Random Access
The algorithm of CSMA/CA is:
• When a frame is ready, the transmitting station checks whether the channel is idle or
busy.
• If the channel is busy, the station waits until the channel becomes idle.
• If the channel is idle, the station waits for an Inter-frame slot (IFS) amount of time and
then sends the frame.
• After sending the frame, it sets a timer.
• The station then waits for acknowledgement from the receiver. If it receives the
acknowledgement before expiry of timer, it marks a successful transmission.
• Otherwise, it waits for a back-off time period and restarts the algorithm.

Advantages of CSMA/CA
• CSMA/CA prevents collisions.
• Due to acknowledgements, data is not lost unnecessarily.
• It avoids wasteful transmission.
• It is very much suited for wireless transmissions.
Disadvantages of CSMA/CA
• The algorithm calls for long waiting times.
• It has high power consumption.
MULTIPLE-ACCESS PROTOCOLS- CONTROLLED ACCESS
• In controlled access, the stations consult one another to find which station has the right to
send.
• A station cannot send unless it has been authorized by other stations.
• There are three popular controlled-access methods.
1. Reservation
2. Polling
3. Token Passing
Reservation
• In the reservation method, a station needs to make a reservation before sending data.
• Time is divided into intervals. In each interval, a reservation frame precedes the data
frames sent in that interval.
• If there are N stations in the system, there are exactly N reservation minislots in the
reservation frame. Each minislot belongs to a station.
• When a station needs to send a data frame, it makes a reservation in its own minislot. The
stations that have made reservations can send their data frames after the reservation
frame.

Polling
• Polling works with topologies in which one device is designated as a primary station and
the other devices are secondary stations.
• All data exchanges must be made through the primary device even when the ultimate
destination is a secondary device.
• The primary device controls the link; the secondary devices follow its instructions.
• It is up to the primary device to determine which device is allowed to use the channel at a
given time.

Token Passing
• In the token-passing method, the stations in a network are organized in a logical ring.
• For each station, there is a predecessor and a successor.
• The predecessor is the station which is logically before the station in the ring.
• The successor is the station which is after the station in the ring. The current station is the
one that is accessing the channel now.
• The right to this access has been passed from the predecessor to the current station.
• The right will be passed to the successor when the current station has no more data to
send.
Token Passing
• When a station has some data to send, it waits until it receives the token from its
predecessor.
• It then holds the token and sends its data. When the station has no more data to send, it
releases the token, passing it to the next logical station in the ring.
• The station cannot send data until it receives the token again in the next round.
• In this process, when a station receives the token and has no data to send, it just passes the
token to the next station.
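A toy simulation of one circulation of the token around the logical ring; real token-passing networks also bound how long a station may hold the token, which is omitted here, and the station names and frames are made up.

from collections import deque

def token_ring_round(stations):
    """One full circulation of the token around a logical ring.

    `stations` maps a station name to a deque of frames waiting to be sent;
    a station may transmit only while it holds the token.
    """
    sent = []
    for name in stations:                 # the token passes from predecessor to successor
        queue = stations[name]
        while queue:                      # holding the token: send the queued frames
            sent.append((name, queue.popleft()))
        # queue empty (or no data at all): release the token to the next station
    return sent

ring = {"A": deque(["a1", "a2"]), "B": deque(), "C": deque(["c1"])}
print(token_ring_round(ring))   # [('A', 'a1'), ('A', 'a2'), ('C', 'c1')]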
Token Passing : Token passing accessing methods are
1. Physical ring
2. Bus ring
3. Dual ring
4. Star ring

Token Passing
• In the physical ring topology, when a station sends the token to its successor, the token
cannot be seen by other stations; the successor is the next one in line.
• This means that the token does not have to carry the address of the next successor. The
problem with this topology is that if one of the links (the medium between two adjacent
stations) fails, the whole system fails.
Token Passing
• The dual ring topology uses a second (auxiliary) ring which operates in the reverse
direction compared with the main ring.
• The second ring is for emergencies only (such as a spare tire for a car). If one of the links
in the main ring fails, the system automatically combines the two rings to form a
temporary ring.
• After the failed link is restored, the auxiliary ring becomes idle again.

Token Passing
• In the bus ring topology, also called a token bus, the stations are connected to a single
cable called a bus.
• They, however, make a logical ring, because each station knows the address of its
successor (and also predecessor for token management purposes).
• When a station has finished sending its data, it releases the token and inserts the address
of its successor in the token.
• Only the station with the address matching the destination address of the token gets the
token to access the shared media.
• The Token Bus LAN, standardized by IEEE, uses this topology.
Token Passing
• In a star ring topology, the physical topology is a star. There is a hub, however, that acts
as the connector.
• The wiring inside the hub makes the ring; the stations are connected to this ring through
the two wire connections.
• This topology makes the network less prone to failure because if a link goes down, it will
be bypassed by the hub and the rest of the stations can operate.
• Also adding and removing stations from the ring is easier.
• This topology is still used in the Token Ring LAN designed by IBM.

MULTIPLE-ACCESS PROTOCOLS-CHANNELIZATION
• Channelization is a multiple-access method in which the available bandwidth of a link is
shared in time, frequency, or through code, between different stations.
• There are three channelization protocols: FDMA, TDMA, and CDMA
• Frequency Division Multiple Access (FDMA) – In frequency-division multiple access
(FDMA), the available bandwidth is divided into frequency bands.
• Each station is allocated a band to send its data. In other words, each band is reserved for
a specific station, and it belongs to the station all the time.
• Each station also uses a band-pass filter to confine the transmitter frequencies.
• To prevent station interferences, the allocated bands are separated from one another by
small guard bands.


• Time Division Multiple Access (TDMA) – In TDMA, the stations share the bandwidth of
the channel in time. To avoid collisions, time is divided into slots and each station is
assigned the slots in which it may transmit.
• However, there is a synchronization overhead, as each station needs to know where its
time slot begins. This is resolved by adding synchronization bits to each slot.
• Another issue with TDMA is propagation delay, which is compensated for by inserting
guard times between slots.
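A small Python sketch of the TDMA idea: time is divided into fixed slots assigned round-robin, and a station may transmit only during its own slot. The slot length and station count below are illustrative values, not taken from any real system.

# Sketch: round-robin TDMA slot ownership. Each station may transmit
# only during its own slot.
SLOT = 0.005          # seconds per slot (made-up value)
STATIONS = 4

def owner_at(t):
    """Return which station owns the channel at time t (seconds)."""
    slot_index = int(t // SLOT)
    return slot_index % STATIONS

for t in (0.000, 0.006, 0.012, 0.021):
    print(f"t={t:.3f}s -> station {owner_at(t)}")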
• Code Division Multiple Access (CDMA) – One channel carries all transmissions
simultaneously. There is neither division of bandwidth nor division of time.
• For example, if many people in a room are all speaking at the same time, a pair of people
can still understand each other perfectly, provided they are the only two speaking the
same language; the other conversations simply appear as background noise.
• Similarly, data from different stations can be transmitted simultaneously, each encoded
with a different code ("language").
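The "different languages" idea can be made concrete with orthogonal chip sequences (Walsh codes). In the sketch below, each station spreads its data bit over its own code, all transmissions are added on the shared channel, and a receiver recovers one station's bit by correlating the received signal with that station's code. The codes and data bits are chosen only for illustration.

# Sketch of the CDMA idea with orthogonal (Walsh) chip sequences.
codes = {                       # 4-chip Walsh codes, mutually orthogonal
    "A": [+1, +1, +1, +1],
    "B": [+1, -1, +1, -1],
    "C": [+1, +1, -1, -1],
    "D": [+1, -1, -1, +1],
}
bits = {"A": +1, "B": -1, "C": +1, "D": -1}   # +1 = bit 1, -1 = bit 0

# Every station transmits at the same time: the channel carries the sum.
channel = [sum(bits[s] * codes[s][i] for s in codes) for i in range(4)]

def decode(station):
    # Correlate the channel signal with the station's code and normalize.
    return sum(c * k for c, k in zip(channel, codes[station])) // len(codes[station])

for s in codes:
    print(s, "sent", bits[s], "decoded", decode(s))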
Wireless LAN ( IEEE 802.11)
802.11 Architecture

The 802.11 architecture defines two types of services and three different types of
stations.
802.11 Services
The two types of services are
1. Basic Service Set (BSS)
2. Extended Service Set (ESS)
1. Basic Service Set (BSS)
• The basic service set contains stationary or mobile wireless stations and a central
base station called the access point (AP).
• The use of an access point is optional.
• If the access point is not present, the BSS is known as a stand-alone network. Such a
BSS cannot send data to other BSSs. This type of architecture is known as ad hoc
architecture.
• The BSS in which an access point is present is known as an infrastructure network.
2. Extended Service Set (ESS)
• An extended service set is created by joining two or more basic service sets (BSS)
having access points (APs).
• These extended networks are created by joining the access points of the basic service
sets through a wired LAN known as the distribution system.
• The distribution system can be any IEEE LAN.
• There are two types of stations in ESS:
(i) Mobile stations: These are normal stations inside a BSS.
(ii) Stationary stations: These are AP stations that are part of a wired LAN.
• Communication between two stations in two different BSSs usually occurs via two
APs.
• A mobile station can belong to more than one BSS at the same time.
802.11 Station Types
IEEE 802.11 defines three types of stations on the basis of their mobility in wireless
LAN. These are:
1. No-transition Mobility
2. BSS-transition Mobility
3. ESS-transition Mobility
1. No-transition mobility: These stations are either stationary (immovable) or move
only inside a BSS.
2. BSS-transition mobility: These stations can move from one BSS to another, but the
movement is limited to the inside of an ESS.
3. ESS-transition mobility: These stations can move from one ESS to another. The
communication may or may not be continuous when a station moves from one ESS to
another.
MAC sublayer Functions
802.11 supports two different modes of operation. These are:
1. Distributed Coordination Function (DCF)
2. Point Coordination Function (PCF)
1. Distributed Coordination Function
• The DCF is used in a BSS having no access point.
• DCF uses the CSMA/CA protocol for transmission.
• The following steps are followed in this method:
1. When a station wants to transmit, it senses the channel to see whether it is free or
not.
2. If the channel is not free, the station waits for a backoff time.
3. If the station finds the channel to be idle, it waits for a period of time called the
distributed interframe space (DIFS).
4. The station then sends a control frame called request to send (RTS).
5. The destination station receives the frame and waits for a short period of time called
short interframe space (SIFS).
6. The destination station then sends a control frame called clear to send (CTS) to the
source station. This frame indicates that the destination station is ready to receive data.
7. The sender then waits for SIFS time and sends data.
8. The destination waits for SIFS time and sends acknowledgement for the received
frame.
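The sender side of this exchange can be summarized in the following Python-style sketch. The helper functions channel_idle(), wait(), send(), and receive() are hypothetical placeholders, the DIFS/SIFS values are illustrative rather than the actual 802.11 timings, and the exponential backoff is simplified to a random wait.

# Sketch of the DCF sender side as described in the steps above.
import random

DIFS, SIFS = 0.000050, 0.000010      # illustrative durations in seconds

def dcf_send(frame, channel_idle, wait, send, receive):
    while True:
        if not channel_idle():
            wait(random.uniform(0, 0.001))   # channel busy: back off
            continue
        wait(DIFS)                           # idle channel: wait DIFS
        send("RTS")
        cts = receive(timeout=0.001)         # destination answers after SIFS
        if cts != "CTS":
            wait(random.uniform(0, 0.001))   # RTS collided: back off, retry
            continue
        wait(SIFS)
        send(frame)                          # send the data frame
        ack = receive(timeout=0.001)         # destination ACKs after SIFS
        return ack == "ACK"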
Collision avoidance
• The 802.11 standard uses a Network Allocation Vector (NAV) for collision avoidance.
• The procedure used in NAV is explained below:
1. Whenever a station sends an RTS frame, it includes the duration of time for which
it will occupy the channel.
2. All other stations that are affected by the transmission create a timer called the
network allocation vector (NAV).
3. This NAV (created by the other stations) specifies for how much time these stations
must not check the channel.
4. Each station, before sensing the channel, checks its NAV to see whether it has expired.
5. If its NAV has expired, the station can send data; otherwise it has to wait.
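Below is a minimal sketch of how a station might maintain its NAV, assuming the duration value overheard in an RTS/CTS frame is given in seconds; the helper names are invented for illustration.

# Sketch: maintaining a Network Allocation Vector (NAV) from overheard frames.
import time

nav_expires_at = 0.0   # channel is "virtually busy" until this instant

def on_overheard_frame(duration):
    """Called when a station overhears an RTS/CTS carrying a duration field."""
    global nav_expires_at
    nav_expires_at = max(nav_expires_at, time.monotonic() + duration)

def may_sense_channel():
    """A station checks its NAV before physically sensing the channel."""
    return time.monotonic() >= nav_expires_at

on_overheard_frame(0.002)        # another station reserved the medium for 2 ms
print(may_sense_channel())       # False until the NAV expires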
• There can also be a collision during handshaking, i.e. when the RTS or CTS control
frames are exchanged between the sender and receiver. In this case the following
procedure is used for collision avoidance:
1. When two or more stations send an RTS to a station at the same time, their control
frames collide.
2. If the CTS frame is not received by the sender, it assumes that there has been a
collision.
3. In such a case the sender waits for a backoff time and retransmits the RTS.
Frame Format of 802.11

The MAC layer frame consists of nine fields.
• The main fields of a frame of wireless LANs as laid down by IEEE 802.11 are:
• Frame Control − It is a 2-byte field composed of 11 subfields. It contains control
information for the frame.
• Duration − It is a 2-byte field that specifies the time period for which the frame and its
acknowledgment will occupy the channel.
• Address fields − There are four 6-byte address fields. The first three carry the addresses
of the source, the immediate destination, and the final endpoint; the fourth is used only
when the frame travels between two APs over a wireless distribution system.
• Sequence − It is a 2-byte field that stores the frame's sequence number.
• Data − This is a variable-sized field that carries the data from the upper layers. The
maximum size of the data field is 2312 bytes.
• Check Sequence − It is a 4-byte field containing error-detection information (CRC).
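Using the field sizes listed above, the fixed 24-byte portion of the MAC header (without the fourth address, the frame body, and the FCS) could be laid out as in the following Python sketch; the numeric values and addresses are dummies chosen only to show the layout.

# Sketch: packing the fixed part of an 802.11 MAC header with the field
# sizes given above (2 B frame control, 2 B duration, three 6 B addresses,
# 2 B sequence). All values are dummies.
import struct

frame_control = 0x0801           # dummy flag bits
duration      = 0x002C           # time the exchange occupies the medium
addr1 = bytes.fromhex("aabbccddeeff")   # receiver
addr2 = bytes.fromhex("112233445566")   # transmitter
addr3 = bytes.fromhex("665544332211")   # e.g. final destination / original source
sequence      = 0x0010

header = struct.pack("<HH6s6s6sH",
                     frame_control, duration, addr1, addr2, addr3, sequence)
print(len(header), "byte header:", header.hex())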
Sub Fields: The Frame Control field is divided into subfields such as protocol version,
type, subtype, To DS, From DS, more fragments, retry, power management, more data,
WEP, and reserved.
IEEE 802.11 Frame types
There are three different types of frames:
1. Management frame
2. Control frame
3. Data frame
1. Management frame. These are used for initial communication between stations and
access points.
2. Control frame. These are used for accessing the channel and acknowledging
frames. The control frames include RTS, CTS, and ACK.
3. Data frame. These are used for carrying data and control information.
802.11 Addressing
• There are four different addressing cases depending upon the values of the To DS and
From DS subfields of the FC field.
• Each flag can be 0 or 1, resulting in 4 different situations.
1. If To DS = 0 and From DS = 0, it indicates that the frame is not going to a distribution
system and is not coming from a distribution system. The frame is going from one
station in a BSS to another.
2. If To DS = 0 and From DS = 1, it indicates that the frame is coming from a
distribution system. The frame is coming from an AP and is going to a station. The
address 3 field contains the original sender of the frame (in another BSS).
3. If To DS = 1 and From DS = 0, it indicates that the frame is going to a distribution
system. The frame is going from a station to an AP. The address 3 field contains the
final destination of the frame.
4. If To DS = 1 and From DS = 1, it indicates that the frame is going from one AP to
another AP in a wireless distribution system.
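These four cases can be summarized in a small lookup, sketched in Python below; the descriptions are paraphrases of the cases above rather than text taken from the standard.

# Sketch: interpreting the To DS / From DS flag combinations listed above.
def addressing_case(to_ds, from_ds):
    cases = {
        (0, 0): "station to station inside one BSS (no distribution system)",
        (0, 1): "AP to station: frame is coming from the distribution system",
        (1, 0): "station to AP: frame is going to the distribution system",
        (1, 1): "AP to AP over a wireless distribution system",
    }
    return cases[(to_ds, from_ds)]

for flags in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(flags, "->", addressing_case(*flags))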