
Module 2

The Data Link Layer

ANUSHA J
Assistant Professor
Data Link Layer
• The Data Link Layer (DLL) is the second layer from the bottom in the OSI (Open Systems Interconnection) model.
• The PDU (Protocol Data Unit) of the DLL is the frame.
• The PDU of the network layer is the packet.

Relationship between packets and frames

• Frame management is an important function of the DLL.


DATA LINK LAYER DESIGN ISSUES
• The data link layer uses the services of the physical layer to send and
receive bits over communication channels.
DLL Functions

1. Providing a well-defined service to the network layer.

2. Error control - it deals with transmission errors.

3. Flow control - regulating the flow of data so that slow receivers are not flooded by fast senders.


DLL Functions
Services provided to Network Layer
The DLL transfers data received from the network layer on the source machine to the network layer on the destination machine.

On the source machine, the network layer sends some bits to the data link layer for transmission to the destination.

The DLL transmits the bits to the destination machine so that they can be handed over to the network layer there, as shown in the figure below.

a) Virtual communication b) Actual Communication


DLL Functions
Services provided by DLL
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.

Unacknowledged connectionless service


• The source machine sends independent frames to the destination machine without having
the destination machine acknowledge them.
• Ethernet is a good example of a DLL that provides this class of service.
• No logical connection is established beforehand or released afterward.
• If a frame is lost due to noise, no attempt is made to detect the loss or recover from it in
DLL.
• It can be used when the error rate is very low, so recovery is left to higher layers.
• It is appropriate for real time traffic such as voice in which late data are worse than bad
data.
DLL Functions
Acknowledged connectionless service
• No logical connection is established beforehand or released afterward.
• Each frame sent is individually acknowledged. From the acknowledgement the sender
knows if the frame has reached the receiver or been lost.
• If the acknowledgement has not arrived within a specified time interval, the frame can be
sent again.
• This service is useful over unreliable channels, such as wireless systems. 802.11 (WiFi) is
a good example of this class of service.
DLL Functions
Acknowledged connection-oriented service
• With this service, the source and destination machines establish a connection before any data is
transferred.
• Each frame sent over the connection is numbered, and the data link layer guarantees that each
frame sent is indeed received.
• It guarantees that each frame is received exactly once and that all frames are received in the right order.
• Connection-oriented service provides the network layer processes with the equivalent of a reliable bit stream.
• It is appropriate over long, unreliable links such as a satellite channel or a long-distance telephone circuit.
• Connection-oriented service consists of three distinct phases. In the first phase the connection is established between sender and receiver. In the second phase the frames are transmitted. In the third phase the connection is terminated.
DLL Functions
Framing
The DLL breaks up the bit stream into discrete frames.
Breaking the bit stream into frames can be done using the following methods.
1. Byte Count: In this method a field in the header specifies the number of bytes in the frame; at the destination, the DLL uses this field to find the end of the frame.
Fig. (a) shows an example of the byte-count method. Frames 1 and 2 are 5 bytes long, whereas frames 3 and 4 are 8 bytes long.
Fig. (b) shows an error condition: frame 2 is 5 bytes long, but due to an error the count 5 has become 7, so the destination will not be able to recognize the start of the next frame.
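A minimal sketch of byte-count framing at the receiver is shown below. It assumes, as in the figure, that the one-byte count field includes itself in the frame length; the function name and the stream layout are illustrative only, not part of any real protocol.

#include <stdio.h>

/* Sketch: split a byte stream into frames using a byte-count field.
 * Assumption: the first byte of each frame holds the total frame length,
 * including the count byte itself (as in the figure's 5- and 8-byte frames). */
void split_frames(const unsigned char *stream, int len) {
    int pos = 0;
    while (pos < len) {
        int count = stream[pos];              /* byte count taken from the header */
        if (count < 1 || pos + count > len)
            break;                            /* a corrupted count desynchronizes framing */
        printf("frame of %d bytes starting at offset %d\n", count, pos);
        pos += count;                         /* jump to the start of the next frame */
    }
}

int main(void) {
    /* A byte stream laid out like the figure's example: frames of 5, 5, 8 and 8 bytes. */
    unsigned char stream[] = {5,1,2,3,4, 5,6,7,8,9, 8,0,1,2,3,4,5,6, 8,7,8,9,0,1,2,3};
    split_frames(stream, (int)sizeof(stream));
    return 0;
}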
Framing
2. Flag bytes with byte stuffing: The DLL at the source places a special byte called the flag byte at the start and at the end of the frame. Two consecutive flag bytes indicate the end of one frame and the start of the next. If the data in the frame contains a flag byte, it is preceded (stuffed) with an escape byte. If the data contains an escape byte, it too is preceded with an escape byte. The figure shows all cases of byte stuffing. The DLL on the receiver side removes the escape bytes and flag bytes before passing the data up.

(a) A frame delimited by flag bytes. (b) Four examples of byte sequences before and after byte stuffing.
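A minimal sketch of byte stuffing at the sender follows. The FLAG and ESC byte values (0x7E and 0x7D) are assumptions chosen for illustration; the protocol in the figure may use different values.

#include <stdio.h>

#define FLAG 0x7E   /* assumed flag byte value, for illustration */
#define ESC  0x7D   /* assumed escape byte value, for illustration */

/* Sketch of byte stuffing: every FLAG or ESC inside the payload is preceded
 * by an ESC, and the whole frame is delimited by FLAG bytes.
 * Returns the number of bytes written to out. */
int byte_stuff(const unsigned char *data, int len, unsigned char *out) {
    int n = 0;
    out[n++] = FLAG;                          /* opening flag */
    for (int i = 0; i < len; i++) {
        if (data[i] == FLAG || data[i] == ESC)
            out[n++] = ESC;                   /* stuff an escape byte */
        out[n++] = data[i];
    }
    out[n++] = FLAG;                          /* closing flag */
    return n;
}

int main(void) {
    unsigned char data[] = {'A', FLAG, 'B', ESC, 'C'};   /* payload containing FLAG and ESC */
    unsigned char out[2 * sizeof(data) + 2];
    int n = byte_stuff(data, (int)sizeof(data), out);
    for (int i = 0; i < n; i++) printf("%02X ", out[i]);
    printf("\n");
    return 0;
}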
Framing
3. Flag bits with bit stuffing: Each frame begins and ends with a special bit pattern, 01111110 (0x7E in hexadecimal). Whenever the sender's data link layer encounters five consecutive 1s in the data, it stuffs a 0 bit into the outgoing bit stream. When the receiver sees five consecutive incoming 1 bits followed by a 0 bit, it automatically destuffs (deletes) the 0 bit. The figure shows an example of bit stuffing.

(a) Bit stuffing (b) The data as they appear on the line (c) The data as they are stored in the
receiver’s memory after destuffing
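A minimal sketch of bit stuffing at the sender, holding one bit per array element purely for readability; a real implementation would pack the bits.

#include <stdio.h>

/* Sketch of bit stuffing: after five consecutive 1 bits a 0 bit is inserted
 * into the outgoing stream, so the payload can never imitate the 01111110 flag.
 * Returns the number of bits after stuffing. */
int bit_stuff(const unsigned char *bits, int len, unsigned char *out) {
    int n = 0, ones = 0;
    for (int i = 0; i < len; i++) {
        out[n++] = bits[i];
        if (bits[i] == 1) {
            if (++ones == 5) {                /* five consecutive 1s seen */
                out[n++] = 0;                 /* stuff a 0 bit */
                ones = 0;
            }
        } else {
            ones = 0;
        }
    }
    return n;
}

int main(void) {
    unsigned char bits[] = {0,1,1,1,1,1,1,1,1,0};   /* contains more than five 1s in a row */
    unsigned char out[2 * sizeof(bits)];
    int n = bit_stuff(bits, (int)sizeof(bits), out);
    for (int i = 0; i < n; i++) printf("%d", out[i]);
    printf("\n");                             /* prints 01111101110 */
    return 0;
}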
Framing
4. Physical layer coding violations:
• This framing method is used only in those networks in which encoding on the physical
medium contain some redundancy.
• Some LANs encode each bit of data by using two physical bits, as Manchester coding does.
• The scheme means that every data bit has a transition in the middle, making it easy for
the receiver to locate the bit boundaries.
• The combinations high-high and low-low are not used for data but are used for delimiting
frames in some protocols.
• We can use some reserved signals to indicate the start and end of frames.
• In effect, we are using ‘‘coding violations’’ to delimit frames. The beauty of this scheme is
that, because they are reserved signals, it is easy to find the start and end of frames and
there is no need to stuff the data.
Error Control
• An important function of the data link layer is to make sure that all frames are delivered to the network layer at the destination in the proper order.
• For reliable delivery, the receiver sends back special control frames bearing positive or negative acknowledgements about the incoming frames.
• If the acknowledgement is lost or the frame itself is lost due to hardware problems, the sender cannot wait indefinitely for an acknowledgement from the receiver. To overcome this, the sender starts a timer when it transmits a frame; if the timer expires before an acknowledgement arrives, the frame is transmitted again.
Flow Control
• When the sender is running on a fast, powerful computer and the receiver is running on a
slow, low-end machine, the slow receiver will not be in a position to consume the fast
arriving data.
• Example, a common situation is when a smart phone requests a Web page from a far
more powerful server, which then turns on the fire hose and blasts the data at the poor
helpless phone until it is completely swamped.
• Even if the transmission is error free, the receiver may be unable to handle the frames as
fast as they arrive and will lose some.
• There are two ways in which the flow of data can be controlled.
• Feedback-based flow control: the receiver sends back information to the sender giving it permission to send more data, or at least telling the sender how the receiver is doing.
• Rate-based flow control: the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.
Error Detection and Correction
• Network designers have developed two basic strategies for dealing with errors.
• One strategy is to include enough redundant information to enable the receiver to deduce
what the transmitted data must have been.
• This strategy uses error-correcting codes, an approach also known as Forward Error Correction (FEC).
• The other is to include only enough redundancy to allow the receiver to deduce that an
error has occurred (but not which error) and have it request a retransmission. This
strategy uses error detecting codes.
Error Correcting Codes
Hamming Codes:
• Block Code: Data is divided into fixed-size blocks, each encoded with redundant bits.
• Detection and Correction: Can detect up to two-bit errors and correct single-bit errors within a block.
• Redundant Bits: Positions are determined by powers of 2 (1, 2, 4, 8, ...).
• Encoding Example (Hamming (7,4) code):
• Data bits: 1011
• Parity bits (P1, P2, P4): 0, 1, 0
• Encoded codeword: 0110011 (positions 1-7 are P1 P2 D1 P4 D2 D3 D4)
Decoding:
• Calculate parity bits based on received codeword.
• Compare calculated parity with received parity.
• Mismatch indicates error position, allowing correction.
• Note - Parity bits are extra binary bits that are generated and added to the information-carrying (message) bits so that errors introduced during the data transfer can be detected.
Hamming codes
Generate the Hamming code for the message 1000001.
• Here, number of message bits m = 7.
• To calculate the number of parity bits p, we use the formula 2^p ≥ p + m + 1, hence p = 4.
• The parity bits are p1, p2, p4 and p8.

• The Hamming code layout is p1 p2 m3 p4 m5 m6 m7 p8 m9 m10 m11, i.e.

  p1 p2 1 p4 0 0 0 p8 0 0 1
• Now we have to find the values of the parity bits.
Computation of p1 depends on bits 1, 3, 5, 7, 9, 11.

• Bits 1, 3, 5, 7, 9, 11 in the above message, from the left, are p1 1 0 0 0 1; to make these bits even parity, p1 should be 0, hence p1 = 0.


Computation of p2 depends on bits 2, 3, 6, 7, 10, 11.
• Bits 2, 3, 6, 7, 10, 11 in the above message, from the left, are p2 1 0 0 0 1; to make these bits even parity, p2 should be 0, hence p2 = 0.
Hamming codes
Computation of p4 depends on bits 4, 5, 6, 7.

• Bits 4, 5, 6, 7 in the above message, from the left, are p4 0 0 0; to make these bits even parity, p4 should be 0, hence p4 = 0.

Computation of p8 depends on bits 8, 9, 10, 11.

• Bits 8, 9, 10, 11 in the above message, from the left, are p8 0 0 1; to make these bits even parity, p8 should be 1, hence p8 = 1.

The final message sent along with the parity bits is

0 0 1 0 0 0 0 1 0 0 1

Suppose the receiver receives the message as

0 0 1 0 1 0 0 1 0 0 1
Hamming codes
Error Correction in Hamming coding
The error position depends on the number of parity bits, so here the error position = E4 E3 E2 E1.

E1 -> bits 1, 3, 5, 7, 9, 11 -> 0 1 1 0 0 1: the parity is not even, so E1 = 1

E2 -> bits 2, 3, 6, 7, 10, 11 -> 0 1 0 0 0 1: the parity is even, so E2 = 0

E3 -> bits 4, 5, 6, 7 -> 0 1 0 0: the parity is not even, so E3 = 1

E4 -> bits 8, 9, 10, 11 -> 1 0 0 1: the parity is even, so E4 = 0

Here, the error position E4 E3 E2 E1 = 0 1 0 1, whose decimal equivalent is 5.

So there is an error in the 5th bit. Since '1' is received in the 5th bit, the corrected value of the 5th bit is 0.
Hence, the corrected Hamming code is 0 0 1 0 0 0 0 1 0 0 1
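The encoding and syndrome computation of this worked example can be expressed compactly in code. The sketch below is illustrative: it builds the (11, 7) even-parity code p1 p2 m3 p4 m5 m6 m7 p8 m9 m10 m11, flips bit 5 as in the example, and recovers the error position 5.

#include <stdio.h>

/* The parity bit at position 2^k covers every position whose binary index has bit k set. */
static void hamming_encode(const int msg[7], int code[12]) {   /* code[1..11] used */
    code[3] = msg[0]; code[5] = msg[1]; code[6] = msg[2]; code[7] = msg[3];
    code[9] = msg[4]; code[10] = msg[5]; code[11] = msg[6];
    for (int p = 1; p <= 8; p <<= 1) {            /* p = 1, 2, 4, 8 */
        int parity = 0;
        for (int i = 1; i <= 11; i++)
            if ((i & p) && i != p) parity ^= code[i];
        code[p] = parity;                         /* even parity over covered bits */
    }
}

/* Returns the position of a single-bit error (0 if the syndrome is zero). */
static int hamming_syndrome(const int code[12]) {
    int pos = 0;
    for (int p = 1; p <= 8; p <<= 1) {
        int parity = 0;
        for (int i = 1; i <= 11; i++)
            if (i & p) parity ^= code[i];
        if (parity) pos += p;                     /* builds E4 E3 E2 E1 */
    }
    return pos;
}

int main(void) {
    int msg[7] = {1, 0, 0, 0, 0, 0, 1};           /* message 1000001 */
    int code[12];
    hamming_encode(msg, code);                    /* gives 0 0 1 0 0 0 0 1 0 0 1 */
    code[5] ^= 1;                                 /* flip bit 5, as in the example */
    printf("error at position %d\n", hamming_syndrome(code));   /* prints 5 */
    return 0;
}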
Binary convolutional codes.
• Encoder Structure:
• Input bits are fed into the shift registers.
• Boolean function generators (XOR gates) combine
current and previous input bits to produce redundant bits.
• Redundant bits are interleaved with input bits to create the encoded codeword stream
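A minimal sketch of a rate-1/2 binary convolutional encoder is given below. The constraint length (3) and the generator taps (111 and 101) are assumptions chosen for illustration; the encoder shown in the example figure may use different taps.

#include <stdio.h>

/* Two output bits per input bit are XOR combinations of the current bit
 * and the two previous bits held in the shift register. */
void conv_encode(const int *in, int len, int *out) {   /* out needs 2*len entries */
    int s1 = 0, s2 = 0;                      /* shift register contents */
    for (int i = 0; i < len; i++) {
        int b = in[i];
        out[2 * i]     = b ^ s1 ^ s2;        /* taps 111 (assumed) */
        out[2 * i + 1] = b ^ s2;             /* taps 101 (assumed) */
        s2 = s1;                             /* shift the register */
        s1 = b;
    }
}

int main(void) {
    int in[] = {1, 0, 1, 1};
    int out[8];
    conv_encode(in, 4, out);
    for (int i = 0; i < 8; i++) printf("%d", out[i]);
    printf("\n");                            /* encoded stream: 11 10 00 01 */
    return 0;
}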
Binary convolutional codes Example
Cont..
Decoding (Viterbi Algorithm):
– The decoder uses the received codeword and channel information to determine the most
likely transmitted path through the trellis.
– It calculates path metrics (distances between received bits and possible codeword bits)
and selects the path with the minimum cumulative metric.
– The decoded bits are extracted from the most likely path.
Reed-Solomon codes
• Reed-Solomon codes are linear block codes, and they are often systematic too.
• They are widely used in practice because of their strong error-correction properties, particularly for burst errors.
• Properties of a Reed-Solomon code over m-bit symbols:
  • Block length: n = 2^m - 1 symbols
  • Message size: k symbols
  • Parity-check size: n - k = 2t symbols
  • Minimum distance: d = 2t + 1
• For m-bit symbols, the codewords are 2^m - 1 symbols long. The (255, 223) code is widely used; it adds 32 redundant symbols to 223 data symbols.
Reed-Solomon codes
• When 2t redundant symbols are added, a Reed-Solomon code is able to correct up to t
errors in any of the transmitted symbols.
• This means, for example, that the (255, 223) code, which has 32 redundant symbols, can
correct up to 16 symbol errors.
ERRORS
• Single Bit Error
– only one bit in the data unit
has changed.

• Burst Error -
  – It means that two or more bits in the data unit have changed.
REDUNDANCY
ERROR DETECTION CODES
• PARITY CHECK - a redundant bit called the parity bit is appended to every data unit so that the total number of 1s in the unit (including the parity bit) becomes even.
• CHECKSUM
– Calculates a checksum value based on the data bits.
– Appends the checksum to the data.
– Recalculates the checksum at the receiver to detect errors.
• CRC (CYCLIC REDUNDANCY CHECK)
– Uses polynomial division to generate a unique remainder (CRC
code).
– Appends the CRC code to the data.
– Recalculates the CRC at the receiver to detect errors.
Low-Density Parity Check codes
• The parity checks used are agreed between sender and receiver.
• Workflow as shown in the figure.
BURST ERROR with PARITY CHECK
• If there is a burst error in a block, a simple parity check detects it with probability only 0.5. By using the concept of interleaving, the probability of error detection can be increased.
Error Detection Codes - Checksum
• The word ‘‘checksum’’ is often used to mean a group of check bits associated with a message, regardless of how they are calculated.
• Checksum is usually placed at the end of the
message, as the complement of the sum
• Errors may be detected by summing the entire
received codeword, both data bits and checksum.
• If the result comes out to be zero, no error has
been detected.
CHECKSUM CALCULATION AND VERIFICATION
• Checksum Calculation (at the sender):
  – Divide the data into equal-sized blocks.
  – Add the blocks together, wrapping any carry out of the most significant bit back into the sum.
  – Take the complement of the sum; this is the checksum.
  – Append the checksum to the data.

  Original data: 10101001 00111001
      10101001
    + 00111001
    ----------
      11100010   Sum
      00011101   Checksum
  Transmitted:   10101001 00111001 00011101

• Checksum Verification (at the receiver):
  – Recalculate the sum over the received data blocks and the received checksum.
  – If the complement of the sum is zero, assume no errors; otherwise, errors are detected.

  Received data: 10101001 00111001 00011101
      10101001
    + 00111001
    + 00011101
    ----------
      11111111   Sum
      00000000   Complement, so no error is detected
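A minimal sketch of the 8-bit one's complement checksum from the example above. The one-byte block size is taken from the example and is an assumption rather than a general rule (the Internet checksum, for instance, works on 16-bit words).

#include <stdio.h>

/* Blocks are added, any carry out of 8 bits is wrapped back in, and the
 * complement of the sum is returned as the checksum. */
static unsigned char checksum8(const unsigned char *blocks, int n) {
    unsigned int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += blocks[i];
        if (sum > 0xFF)
            sum = (sum & 0xFF) + (sum >> 8);  /* wrap the carry back in */
    }
    return (unsigned char)(~sum & 0xFF);      /* one's complement of the sum */
}

int main(void) {
    unsigned char data[] = {0xA9, 0x39};              /* 10101001 00111001 */
    unsigned char cks = checksum8(data, 2);           /* 00011101 */
    printf("checksum = %02X\n", cks);

    /* Verification: data plus checksum should complement to zero. */
    unsigned char rcvd[] = {0xA9, 0x39, cks};
    printf("verify   = %02X (00 means no error)\n", checksum8(rcvd, 3));
    return 0;
}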
Error Detection Codes - CRC
• Sequence of redundant bits, called the CRC or the CRC remainder, is appended to the end of
the data unit so that the resulting data unit becomes exactly divisible by a second,
predetermined binary number.
• At its destination, the incoming data unit is divided by the same divisor. If there is no remainder, the data unit is assumed to be intact and is therefore accepted.
• A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
CRC
CRC - CRC GENERATOR and CHECKER
DIVISOR
• The divisor is determined from an algebraic (generator) polynomial.
• Example:

• Conditions: the generator polynomial
  - should not be divisible by x, and
  - should be divisible by (x + 1).
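A minimal sketch of CRC generation by modulo-2 (XOR) division follows. The 6-bit data word and the small divisor 1011 (x^3 + x + 1) are assumptions used only to show the mechanics; real links use standardized generator polynomials such as CRC-32, chosen with the properties listed above in mind. Bits are stored one per array element for readability.

#include <stdio.h>

/* The data bits are followed by (divisor length - 1) zero bits, divided by the
 * divisor using XOR, and the remainder becomes the CRC. */
static void crc_remainder(const int *data, int dlen, const int *div, int glen, int *rem) {
    int buf[64];
    for (int i = 0; i < dlen; i++) buf[i] = data[i];
    for (int i = 0; i < glen - 1; i++) buf[dlen + i] = 0;   /* append zeros */

    for (int i = 0; i < dlen; i++) {
        if (buf[i] == 1)                       /* leading bit is 1: subtract (XOR) the divisor */
            for (int j = 0; j < glen; j++)
                buf[i + j] ^= div[j];
    }
    for (int i = 0; i < glen - 1; i++)         /* the last glen-1 bits are the remainder */
        rem[i] = buf[dlen + i];
}

int main(void) {
    int data[] = {1, 0, 0, 1, 0, 0};           /* example data bits (assumed) */
    int div[]  = {1, 0, 1, 1};                 /* divisor x^3 + x + 1 (assumed) */
    int rem[3];
    crc_remainder(data, 6, div, 4, rem);
    printf("CRC = %d%d%d\n", rem[0], rem[1], rem[2]);   /* prints 101 */
    return 0;
}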
Elementary Data Link Protocols
• The protocols are normally implemented in software using a programming language.
• A Utopian Simplex Protocol
• A Simplex Stop and wait protocol
• A Simplex protocol for a noisy channel
Elementary Data Link Protocols
A Utopian Simplex Protocol
• The protocol consists of two distinct procedures, a sender and a receiver.
• The sender runs in the data link layer of the source machine, and the receiver runs in the
data link layer of the destination machine.
• The sender is in an infinite while loop just pumping data out onto the line as fast as it
can.
• The body of the loop consists of three actions: go fetch a packet from the network
layer, construct an outbound frame using the variable s, and send the frame on
its way.
• The receiver waits for something to happen. When a frame arrives, it passes the frame's contents to the network layer and waits for the next frame to arrive.
A Utopian Simplex Protocol
void sender1(void) {
    frame s;
    packet buffer;
    while (true) {
        from_network_layer(&buffer);   /* go get something to send */
        s.info = buffer;               /* copy it into s for transmission */
        to_physical_layer(&s);         /* send the frame on its way */
    }
}

void receiver1(void) {
    frame r;
    event_type event;
    while (true) {
        wait_for_event(&event);        /* only possibility is frame_arrival */
        from_physical_layer(&r);       /* go get the inbound frame */
        to_network_layer(&r.info);     /* pass the data to the network layer */
    }
}

It is unrealistic because it does not handle either flow control or error correction.
A Simplex Stop-and-Wait Protocol for an error free channel
• After having passed a packet to its network layer, the receiver sends a little dummy frame, called an acknowledgement, back to the sender, which in effect gives the sender permission to transmit the next frame. The sender has to wait for the arrival of the acknowledgement.

• The sender sends one frame and waits for feedback from the receiver.
• When the ACK arrives, the sender sends the next frame.
• It is called a Stop-and-Wait protocol because the sender sends one frame, stops until it receives confirmation from the receiver (okay to go ahead), and then sends the next frame.
A Simplex Stop-and-Wait Protocol for an error free channel
typedef enum {frame_arrival} event_type;
#include "protocol.h"

void sender2(void) {
    frame s;
    packet buffer;
    event_type event;
    while (true) {
        from_network_layer(&buffer);   /* go get something to send */
        s.info = buffer;               /* copy it into s for transmission */
        to_physical_layer(&s);         /* send the frame */
        wait_for_event(&event);        /* do not proceed until the ack arrives */
    }
}

void receiver2(void) {
    frame r, s;                        /* s is the dummy acknowledgement frame */
    event_type event;
    while (true) {
        wait_for_event(&event);        /* only possibility is frame_arrival */
        from_physical_layer(&r);       /* go get the inbound frame */
        to_network_layer(&r.info);     /* pass the data to the network layer */
        to_physical_layer(&s);         /* send the acknowledgement */
    }
}
A Simplex Stop-and-Wait Protocol for a Noisy Channel

The sender keeps a copy of the sent frame and starts a timer.
If the timer expires and there is no ACK for the sent frame, the frame is resent and the timer is restarted.
A Simplex Stop-and-Wait Protocol for a Noisy Channel
#define MAX_SEQ 1
typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"

void sender3(void) {
    seq_nr next_frame_to_send;
    frame s;
    packet buffer;
    event_type event;

    next_frame_to_send = 0;
    from_network_layer(&buffer);
    while (true) {
        s.info = buffer;
        s.seq = next_frame_to_send;
        to_physical_layer(&s);
        start_timer(s.seq);                     /* if the timer expires, the frame is resent */
        wait_for_event(&event);
        if (event == frame_arrival) {
            from_physical_layer(&s);            /* get the acknowledgement */
            if (s.ack == next_frame_to_send) {
                stop_timer(s.ack);
                from_network_layer(&buffer);    /* fetch the next packet */
                inc(next_frame_to_send);
            }
        }
    }
}

void receiver3(void) {
    seq_nr frame_expected;
    frame r, s;
    event_type event;

    frame_expected = 0;
    while (true) {
        wait_for_event(&event);
        if (event == frame_arrival) {
            from_physical_layer(&r);
            if (r.seq == frame_expected) {
                to_network_layer(&r.info);
                inc(frame_expected);
            }
            s.ack = 1 - frame_expected;         /* tell which frame is being acked */
            to_physical_layer(&s);              /* send acknowledgement */
        }
    }
}
Sliding Window Protocols

• Sliding window protocols are bidirectional or full duplex.


• The Sliding Window mainly provides the upper limit on the number of frames that can
be transmitted before the requirement of an acknowledgment.
• At any instant of time, the sender maintains a set of sequence numbers corresponding to
frames it is permitted to send. These frames are said to fall within the sending window.
• The receiver also maintains a receiving window corresponding to the set of frames it is
permitted to accept.
• Piggybacking: the acknowledgement is attached to an outgoing data frame, so the acknowledgement gets a free ride on the next outgoing data frame. The technique of temporarily delaying outgoing acknowledgements so that they can be hooked onto the next outgoing data frame is known as piggybacking.
Sliding Window Protocols

Depending on the sizes of sender window and receiver window we have different types of sliding
window protocols
• 1 Bit sliding window protocol: Sender Window size = Receiver Window size = 1
• Go Back N : Sender Window size > 1 and Receiver Window size = 1
• Selective Repeat : Sender Window size > 1 and Receiver Window size > 1
A One-Bit Sliding Window Protocol
In the one-bit sliding window protocol the sender and receiver window sizes are both 1. The sender sends one frame, starts a timer, buffers the sent frame, and waits for an acknowledgement from the receiver.
The receiver, on receiving the expected frame, slides its window and sends an acknowledgement.
Upon receiving the acknowledgement, the sender advances its window and sends a new frame with the next sequence number.
If the timer expires before the acknowledgement is received, the sender resends the buffered frame.
A One-Bit Sliding Window Protocol

a) Initial state of sender and receiver; the receiver is expecting the frame with sequence number 0.
b) The sender sends the frame with sequence number 0 and buffers it.
c) The receiver receives frame 0, sends an acknowledgement, and advances its window.
d) The sender receives the acknowledgement for frame 0 and advances its window.
A One-Bit Sliding Window Protocol

#define MAX_SEQ 1

typedef enum {frame_arrival, cksum_err, timeout} event_type;
#include "protocol.h"
void protocol4(void) {
    seq_nr next_frame_to_send;        /* 0 or 1 only */
    seq_nr frame_expected;            /* 0 or 1 only */
    frame r, s;
    packet buffer;                    /* current packet being sent */
    event_type event;
    next_frame_to_send = 0;           /* next frame on the outbound stream */
    frame_expected = 0;               /* frame expected next */
    from_network_layer(&buffer);      /* fetch a packet from the network layer */
    s.info = buffer;                  /* prepare to send the initial frame */
    s.seq = next_frame_to_send;       /* insert sequence number into frame */
    s.ack = 1 - frame_expected;       /* piggybacked ack */
    to_physical_layer(&s);            /* transmit the frame */
    start_timer(s.seq);               /* start the timer running */
A One-Bit Sliding Window Protocol

while (true) {
wait_for_event(&event); /* frame arrival, cksum err, or timeout */
if (event == frame_arrival) { /* a frame has arrived undamaged */
from_physical_layer(&r); /* go get it */
if (r.seq == frame_expected) { /* handle inbound frame stream */
to_network_layer(&r.info); /* pass packet to network layer */
inc(frame_expected); /* invert seq number expected next */
}
if (r.ack == next_frame_to_send) { /* handle outbound frame stream */
stop_timer(r.ack); /* turn the timer off */
from_network_layer(&buffer); /* fetch new pkt from network layer */
inc(next_frame_to_send); /* invert sender’s sequence number */
}
}
s.info = buffer; /* construct outbound frame */
s.seq = next_frame_to_send; /* insert sequence number into it */
s.ack = 1 - frame_expected; /* seq number of last received frame */
to_physical_layer(&s); /* transmit a frame */
start_timer(s.seq); /* start the timer running */
}
}
A One-Bit Sliding Window Protocol

The above figure shows (a) the normal case and (b) the abnormal case.

The notation is (seq, ack, packet number). An asterisk indicates where a network layer accepts a packet.
A One-Bit Sliding Window Protocol
Normal Case

A side:
• Initially next_frame_to_send = 0, frame_expected = 0.
• A builds s.seq = 0, s.ack = 1 and sends s = (0, 1, A0).
• A receives r = (0, 0, B0). The event is frame_arrival. Since r.seq == frame_expected, B0 is passed to the network layer and frame_expected = 1. Since r.ack == next_frame_to_send, next_frame_to_send = 1.
• A builds s.seq = 1, s.ack = 0 and sends (1, 0, A1).

B side:
• Initially next_frame_to_send = 0, frame_expected = 0.
• B receives r = (0, 1, A0). The event is frame_arrival. Since r.seq == frame_expected, A0 is passed to the network layer and frame_expected = 1.
• B builds s.seq = 0, s.ack = 1 - 1 = 0 and sends s = (0, 0, B0).
• B receives r = (1, 0, A1). The event is frame_arrival. Since r.seq == frame_expected, A1 is passed to the network layer and frame_expected = (1 + 1) % 2 = 0. Since r.ack == next_frame_to_send, next_frame_to_send = 1.
• B builds s.seq = 1, s.ack = 1 and sends (1, 1, B1).
A One-Bit Sliding Window Protocol
Abnormal Case (A and B start simultaneously)

A side:
• Initially next_frame_to_send = 0, frame_expected = 0.
• A builds s.seq = 0, s.ack = 1 and sends s = (0, 1, A0).
• A receives r = (0, 1, B0). The event is frame_arrival. Since r.seq == frame_expected, B0 is passed to the network layer and frame_expected = 1.
• A builds s.seq = 0, s.ack = 0 and sends s = (0, 0, A0).
• A receives r = (0, 0, B0). The event is frame_arrival. Since r.ack == next_frame_to_send, next_frame_to_send = 1.
• A builds s.seq = 1, s.ack = 0 and sends s = (1, 0, A1).
• A receives r = (1, 0, B1). The event is frame_arrival. Since r.seq == frame_expected, B1 is passed to the network layer and frame_expected = 0.
• A builds s.seq = 1, s.ack = 1 and sends s = (1, 1, A1).

B side:
• Initially next_frame_to_send = 0, frame_expected = 0.
• B builds s.seq = 0, s.ack = 1 and sends s = (0, 1, B0).
• B receives r = (0, 1, A0). The event is frame_arrival. Since r.seq == frame_expected, A0 is passed to the network layer and frame_expected = 1.
• B builds s.seq = 0, s.ack = 0 and sends s = (0, 0, B0).
• B receives r = (0, 0, A0). The event is frame_arrival. Since r.ack == next_frame_to_send, next_frame_to_send = 1.
• B builds s.seq = 1, s.ack = 0 and sends s = (1, 0, B1).
• B receives r = (1, 0, A1). The event is frame_arrival. Since r.seq == frame_expected, A1 is passed to the network layer and frame_expected = 0.
• B builds s.seq = 1, s.ack = 1 and sends s = (1, 1, B1).
• B receives r = (1, 1, A1). The event is frame_arrival. Since r.ack == next_frame_to_send, next_frame_to_send = 0.
• B builds s.seq = 0, s.ack = 1 and sends (0, 1, B2).
Go Back N Protocol

• The sender window size is N > 1 and the receiver window size is 1.
• It uses protocol pipelining: the sender can send multiple frames before receiving the acknowledgement for the first frame.
• There is a finite set of sequence numbers, and the frames are numbered sequentially from that set.
• The number of frames that can be outstanding at once depends on the sender's window size.
• If the acknowledgement of a frame is not received within an agreed-upon time period, all frames in the current window are retransmitted.
• The size of the sequence-number field determines how the outbound frames are numbered (in Go-Back-N the window size can be at most one less than the number of sequence numbers).
• For example, with 10 frames and 2-bit sequence numbers, the frames are not numbered 1, 2, 3, ..., 10; the sequence numbers are 0, 1, 2, 3, 0, 1, 2, 3, 0, 1.
Go Back N Protocol

#define MAX_SEQ 7

typedef enum {frame_arrival, cksum_err, timeout, network_layer_ready} event_type;
#include "protocol.h"

static boolean between(seq_nr a, seq_nr b, seq_nr c) {
    /* Return true if a <= b < c circularly; false otherwise. */
    if (((a <= b) && (b < c)) || ((c < a) && (a <= b)) || ((b < c) && (c < a)))
        return(true);
    else
        return(false);
}

static void send_data(seq_nr frame_nr, seq_nr frame_expected, packet buffer[]) {
    frame s;                                              /* scratch variable */
    s.info = buffer[frame_nr];                            /* insert packet into frame */
    s.seq = frame_nr;                                     /* insert sequence number into frame */
    s.ack = (frame_expected + MAX_SEQ) % (MAX_SEQ + 1);   /* piggyback ack */
    to_physical_layer(&s);                                /* transmit the frame */
    start_timer(frame_nr);                                /* start the timer running */
}
Go Back N Protocol

void protocol5(void) {
    seq_nr next_frame_to_send;      /* MAX_SEQ > 1; used for outbound stream */
    seq_nr ack_expected;            /* oldest frame as yet unacknowledged */
    seq_nr frame_expected;          /* next frame expected on inbound stream */
    frame r;                        /* scratch variable */
    packet buffer[MAX_SEQ + 1];     /* buffers for the outbound stream */
    seq_nr nbuffered;               /* number of output buffers currently in use */
    seq_nr i;                       /* used to index into the buffer array */
    event_type event;
    enable_network_layer();         /* allow network_layer_ready events */
    ack_expected = 0;               /* next ack expected inbound */
    next_frame_to_send = 0;         /* next frame going out */
    frame_expected = 0;             /* number of frame expected inbound */
    nbuffered = 0;                  /* initially no packets are buffered */
Go Back N Protocol
    while (true) {
        wait_for_event(&event);     /* four possibilities: see event_type above */
        switch (event) {
        case network_layer_ready:   /* the network layer has a packet to send */
            from_network_layer(&buffer[next_frame_to_send]);       /* fetch new packet */
            nbuffered = nbuffered + 1;                              /* expand the sender's window */
            send_data(next_frame_to_send, frame_expected, buffer); /* transmit the frame */
            inc(next_frame_to_send);                                /* advance sender's upper window edge */
            break;
        case frame_arrival:         /* a data or control frame has arrived */
            from_physical_layer(&r);                                /* get incoming frame from physical layer */
            if (r.seq == frame_expected) {
                /* Frames are accepted only in order. */
                to_network_layer(&r.info);                          /* pass packet to network layer */
                inc(frame_expected);                                /* advance lower edge of receiver's window */
            }
            while (between(ack_expected, r.ack, next_frame_to_send)) {
                nbuffered = nbuffered - 1;                          /* one frame fewer buffered */
                stop_timer(ack_expected);                           /* frame arrived intact; stop timer */
                inc(ack_expected);                                  /* contract sender's window */
            }
            break;
Go Back N Protocol

        case cksum_err:
            break;                  /* just ignore bad frames */

        case timeout:               /* trouble; retransmit all outstanding frames */
            next_frame_to_send = ack_expected;                      /* start retransmitting here */
            for (i = 1; i <= nbuffered; i++) {
                send_data(next_frame_to_send, frame_expected, buffer); /* resend frame */
                inc(next_frame_to_send);                            /* prepare to send the next one */
            }
        }
        if (nbuffered < MAX_SEQ)
            enable_network_layer();
        else
            disable_network_layer();
    }
}
Selective Repeat Protocol

• The sender window size is N > 1 and the receiver window size is also greater than 1.

• In selective repeat ARQ, only the erroneous or lost frames are retransmitted, while correct frames are received and buffered.
• The receiver keeps track of the sequence numbers, buffers out-of-order frames in memory, and sends a NACK only for the frame that is missing or damaged.
• The sender retransmits only the frame for which a NACK is received.
The MAC (medium access control) sublayer

• Sublayer of the data link layer.

• The MAC sublayer emulates full-duplex logical communication on a multipoint (shared) network.
• The MAC sublayer uses MAC protocols to determine who transmits data next, much as two people speaking at the same time need a rule for deciding who goes next.
• There are two types of networks:
  • Point-to-point network: a communication network in which there is a dedicated connection between two devices or endpoints.
  • Broadcast network: any form of communication in which a single sender transmits messages to many receivers at once.
• The key issue in a broadcast network is how to determine who gets to use the channel when there is competition for it.
• Broadcast channels are sometimes referred to as multiaccess channels or random access channels.
The Channel Allocation Problem

Static Channel Allocation

• With N users, the bandwidth is divided into N equal-sized portions, one per user.
• Since each user has a private frequency band, there is no interference among users.
• Problems:
  • When a user is temporarily silent, that user's share of the bandwidth is wasted.
  • Underutilization
  • Blocking
  • Interference

Assumptions for dynamic channel allocation


• Independent traffic: The model consists of N independent stations (computers, telephones) each
generating frames for transmission. Once a frame has been generated, the station is blocked and does
nothing until the frame has been successfully transmitted.
• Single Channel: A single channel is available for all communication.
Assumptions for dynamic channel allocation

• Observable Collisions: if two frames are transmitted simultaneously, they overlap in time and the resulting signal is distorted. This event is known as a collision.
• Continuous or Slotted Time: time may be continuous or slotted (divided into discrete intervals). With slotted time, frame transmissions must begin at the start of a slot.
• Carrier Sense or No Carrier Sense:
  • With the carrier sense assumption, stations can tell whether the channel is in use before trying to use it.
  • If there is no carrier sense, stations cannot sense the channel before trying to use it; they just go ahead and transmit.
Multiple Access Protocols
ALOHA

• ALOHA is a type of random access protocol.

• It was developed at the University of Hawaii in the early 1970s.
• In this kind of LAN protocol there is a significant chance of collisions during the transmission of data from a source to a destination.
• ALOHA has two variants: Pure ALOHA and Slotted ALOHA.
PURE ALOHA

Pure ALOHA
• It allows the stations to transmit data at any time, whenever they want.
• After transmitting a data packet, the station waits for an acknowledgement for some time.

• Case 01:
  • The transmitting station receives an acknowledgement from the receiving station.
  • In this case, the transmitting station assumes that the transmission is successful.

• Case 02:
  • The transmitting station does not receive any acknowledgement within the specified time from the receiving station.
  • In this case, the transmitting station assumes that the transmission is unsuccessful. Then,
  • the transmitting station uses a back-off strategy and waits for a random amount of time;
  • after the back-off time, it transmits the data packet again;
  • it keeps trying until the back-off limit is reached, after which it aborts the transmission.
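A hypothetical sketch of the retry logic just described. The acknowledgement is simulated here with a fixed success probability; in a real station it would come from the receiver. The back-off distribution and the retry limit are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define MAX_RETRIES 15                       /* assumed back-off limit */

static int ack_received(void) {              /* simulated: roughly 60% of frames succeed */
    return (rand() % 100) < 60;
}

static int aloha_send(void) {
    for (int attempt = 0; attempt <= MAX_RETRIES; attempt++) {
        /* Transmit the frame at any time (pure ALOHA), then wait for the ACK. */
        if (ack_received())
            return 1;                                    /* case 01: success */
        int backoff = rand() % (1 << (attempt + 1));     /* case 02: random back-off */
        printf("attempt %d failed, backing off %d slots\n", attempt, backoff);
    }
    return 0;                                            /* back-off limit reached: abort */
}

int main(void) {
    srand((unsigned)time(NULL));
    printf(aloha_send() ? "frame delivered\n" : "transmission aborted\n");
    return 0;
}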
PURE ALOHA
SLOTTED ALOHA
• Slotted ALOHA divides the time of the shared channel into discrete intervals called time slots.
• Any station can transmit its data in any
time slot.
• The only condition is that station must
start its transmission from the beginning
of the time slot.
• If the beginning of the slot is missed, then
station has to wait until the beginning of
the next time slot.
• A collision may occur if two or more
stations try to transmit data at the
beginning of the same time slot.
CSMA ( Carrier Sense Multiple Access Protocol)

• The goal is to minimize the chance of collision and increase the performance.

• The principle of CSMA: sense before transmit, or listen before talk.
• Carrier busy: a transmission is currently in progress.
• Carrier idle: no transmission is currently in progress.
• CSMA can reduce the chance of collision but cannot eliminate it.
Collision in CSMA ( Carrier Sense Multiple Access Protocol)

• At time t1, station B senses the medium, finds it idle, and starts sending its frame.

• At time t2 (t2 > t1), station C senses the medium and finds it idle, because at this time the first bits from station B have not yet reached station C.

• Station C also sends its frame.

• The two signals collide and both frames are destroyed.
What next ????
• What should a station do if the channel is busy?
• What should a station do if the channel is idle?

• Three methods have been devised to answer these questions:
  – 1-persistent method
  – Non-persistent method
  – p-persistent method
1 Persistant
• Simple and straightforward.
• If the channel is idle, the station sends the frame immediately (with probability 1).
• This method has a high chance of collision, because two or more stations may find the channel idle at the same time and transmit immediately.
Non Persistant
• In the non-persistent method, a station that has a frame to send senses the line
• If the line is idle, it sends immediately.
• If the line is not idle, it waits a random amount of time and then senses the line again.
• The nonpersistent approach reduces the chance of collision
P Persistant
• The p-persistent method is used when the channel has time slots.
• After the station finds the line idle, it sends its frame with probability p, and with probability (1 - p) it defers to the next time slot and senses the line again.
• If the line is busy, it keeps sensing until the line becomes idle and then applies the same procedure.
• This method reduces the chance of collision compared with 1-persistent CSMA and improves efficiency. A sketch of the decision rule follows below.
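A hypothetical sketch of the p-persistent decision rule. The carrier sensing is simulated, and the value p = 0.3 and the slot handling are assumptions for illustration.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int channel_idle(void) {              /* simulated carrier sense */
    return (rand() % 100) < 70;              /* channel idle roughly 70% of the time */
}

static void p_persistent_send(double p) {
    for (;;) {
        if (!channel_idle())
            continue;                        /* busy: keep sensing until the channel is idle */
        if ((double)rand() / RAND_MAX < p) {
            printf("channel idle: transmitting frame\n");
            return;                          /* send with probability p */
        }
        /* with probability 1 - p, wait for the next time slot and sense again */
        printf("channel idle: deferring to next slot\n");
    }
}

int main(void) {
    srand((unsigned)time(NULL));
    p_persistent_send(0.3);
    return 0;
}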
CSMA CD

• Carrier Sense Multiple Access with Collision Detection.

• The station monitors the channel while sending a frame.
• If a collision is detected, the station aborts the transmission, waits a random back-off time, and then sends the frame again.
• Example: collision of the first bit in CSMA/CD, where stations A and C are involved in the collision.
CSMA CD
Collision free protocols

1. A Bit-Map protocol
Collision free protocols
2. Binary Countdown
If stations 0010, 0100, 1001, and 1010 are all trying to get the channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively. These are ORed together to form a 1.
Stations 0010 and 0100 see the 1 and know that a higher-numbered station is competing for the channel, so they give up for the current round.
Stations 1001 and 1010 continue. The next bit is 0, and both stations continue. The next bit is 1, so station 1001 gives up.
The winner is station 1010 because it has the highest address. After winning the bidding, it may now transmit a frame, after which another bidding cycle starts.
The channel efficiency of this method is d / (d + log2 N).
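A minimal sketch of binary countdown arbitration using the 4-bit addresses from the example; the broadcast channel's wired OR is simulated with a bitwise OR.

#include <stdio.h>

/* In each bit time every still-competing station asserts its next address bit
 * (high-order first); the channel ORs the bits, and a station that sent a 0
 * while the OR was 1 drops out. Handles up to 8 stations. */
int binary_countdown(const unsigned addr[], int n, int bits) {
    int competing[8];
    for (int i = 0; i < n; i++) competing[i] = 1;

    for (int b = bits - 1; b >= 0; b--) {
        int channel = 0;
        for (int i = 0; i < n; i++)                      /* wired OR of asserted bits */
            if (competing[i])
                channel |= (addr[i] >> b) & 1;
        for (int i = 0; i < n; i++)                      /* lower-numbered stations give up */
            if (competing[i] && channel == 1 && ((addr[i] >> b) & 1) == 0)
                competing[i] = 0;
    }
    for (int i = 0; i < n; i++)
        if (competing[i]) return i;                      /* the highest address wins */
    return -1;
}

int main(void) {
    unsigned addr[] = {0x2, 0x4, 0x9, 0xA};              /* 0010, 0100, 1001, 1010 */
    int w = binary_countdown(addr, 4, 4);
    printf("winner: station %X\n", addr[w]);             /* prints A, i.e. 1010 */
    return 0;
}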
Collision free protocols
Token Passing
• The token represents permission to send.
• If a station has a frame queued for transmission when
it receives the token, it can send that frame before it
passes the token to the next station.
• If it has no queued frame, it simply passes the token.
Limited-Contention Protocols

The Adaptive Tree Walk Protocol

• It combines the advantages of contention protocols and collision-free protocols.
• Under low load it behaves like slotted ALOHA, and under heavy load it behaves like the bit-map protocol.
Limited-Contention Protocols

The Adaptive Tree Walk Protocol

Of the 8 stations in the figure, stations A, D, F and G want to transmit data. Then:
Slot 0: a collision occurs as A, D, F and G transmit the data simultaneously.
Slot 1: the stations under node 1 (left part of the tree) are allowed to transmit; A and D transmit and a collision occurs.
Slot 2: station A transmits.
Slot 3: the stations under node 2 (right part of the tree) are allowed to transmit; F and G transmit and a collision occurs.
Slot 4: the stations under node 5 are allowed to transmit; station F transmits.
Slot 5: the stations under node 4 are allowed to transmit; station D transmits.
Slot 6: the stations under node 6 are allowed to transmit; station G transmits.
Wireless LAN Protocols

If CSMA is used in a wireless LAN, a few problems arise.

(a) A and C are hidden terminals when transmitting to B. (b) B and C are exposed terminals when transmitting to A and D.

As depicted in fig. (a), suppose A and C both want to transmit to B. A senses the medium, finds it free, and transmits to B. C also senses the medium and finds it free, because it cannot hear A (A is out of range for C), and it also transmits to B, resulting in a collision. This is referred to as the hidden terminal problem.

As depicted in fig. (b), suppose B wants to transmit to A and C wants to transmit to D. When C senses the medium, it hears B's transmission and falsely concludes that it may not send to D (shown as a dashed line); as a result, bandwidth is wasted. This is known as the exposed terminal problem.
MACA (Multiple Access with Collision Avoidance)
To overcome hidden terminals problem and exposed terminal problem, MACA was introduced.
MACA (Multiple Access with Collision Avoidance)
A starts by sending an RTS (Request To Send) frame to B, as in fig. (a). The RTS is a short frame that contains the length of the data frame that A wants to send to B next. B replies with a CTS (Clear To Send) frame; the CTS also contains the length of the data frame, copied from the RTS.
Any station hearing the RTS is clearly close to A and must remain silent long enough for the CTS to be transmitted back to A.
Any station hearing the CTS is clearly close to B and must remain silent during the upcoming data transmission.
C can hear the RTS from A but not the CTS from B. As long as it does not interfere with the CTS, it is free to transmit while the data frame is being sent.
D is within range of B but not of A. It does not hear the RTS but does hear the CTS, so it defers sending anything until the data frame is expected to be finished.
E hears both control messages and, like D, must be silent until the data frame is complete.
