
Computer Networks – 21CS52

UNIT 2

Syllabus:
The Data link layer: Design issues of DLL, Error detection and correction,
Elementary data link protocols, Sliding window protocols.
The medium access control sublayer: The channel allocation problem,
Multiple access protocols.

DATA LINK LAYER


The data link layer is the second layer from the bottom in the OSI (Open Systems
Interconnection) network architecture model. It is responsible for the node-to-
node delivery of data, and its major role is to ensure error-free transmission of
information. The DLL is also responsible for encoding, decoding, and organizing
the outgoing and incoming data. It is considered the most complex layer of the
OSI model because it hides all the underlying complexities of the hardware from
the layers above.
Sub-layers of the Data Link Layer
The data link layer is further divided into two sub-layers, which are as follows:
Logical Link Control (LLC)
This sublayer of the data link layer deals with multiplexing and the flow of data
among applications and other services; the LLC is also responsible for providing
error messages and acknowledgements.
Media Access Control (MAC)
The MAC sublayer manages the device's interaction with the medium, is
responsible for addressing frames, and controls access to the physical media.
The data link layer receives information in the form of packets from the network
layer, divides the packets into frames, and sends those frames bit by bit to the
underlying physical layer.
2.1 DATA LINK LAYER DESIGN ISSUES
The data link layer uses the services of the physical layer to send and receive bits
over communication channels. It has a number of functions, including:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast
senders.

Karthika S, Asst. Prof, SJCIT


To accomplish these goals, the data link layer takes the packets it gets from the
network layer and encapsulates them into frames for transmission. Each frame
contains a frame header, a payload field for holding the packet, and a frame trailer,
as illustrated in Fig. 3-1. Frame management forms the heart of what the data link
layer does. In the following sections we will examine all the above mentioned
issues in detail.

2.1.1 Services Provided to Network layer


The function of the data link layer is to provide services to the network layer. The
principal service is transferring data from the network layer on the source
machine to the network layer on the destination machine. On the source machine
is an entity, call it a process, in the network layer that hands some bits to the data
link layer for transmission to the destination. The job of the data link layer is to
transmit the bits to the destination machine so they can be handed over to the
network layer there, as shown in Fig. 3-2(a). The actual transmission follows the
path of Fig. 3-2(b), but it is easier to think in terms of two data link layer processes
communicating using a data link protocol. For this reason, we will implicitly use
the model of Fig. 3-2(a) throughout this chapter.

The data link layer can be designed to offer various services. The actual services
that are offered vary from protocol to protocol. Three reasonable possibilities that
we will consider in turn are:

1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection-oriented service.
Unacknowledged connectionless service consists of having the source machine
send independent frames to the destination machine without having the
destination machine acknowledge them. Ethernet is a good example of a data link
layer that provides this class of service. No logical connection is established
beforehand or released afterward. If a frame is lost due to noise on the line, no
attempt is made to detect the loss or recover from it in the data link layer. This
class of service is appropriate when the error rate is very low, so recovery is left
to higher layers. It is also appropriate for real-time traffic, such as voice, in which
late data are worse than bad data.
The next step up in terms of reliability is acknowledged connectionless service.
When this service is offered, there are still no logical connections used, but each
frame sent is individually acknowledged. In this way, the sender knows whether
a frame has arrived correctly or been lost. If it has not arrived within a specified
time interval, it can be sent again. This service is useful over unreliable channels,
such as wireless systems. 802.11 (WiFi) is a good example of this class of service.
Getting back to our services, the most sophisticated service the data link layer can
provide to the network layer is connection-oriented service. With this service, the
source and destination machines establish a connection before any data are
transferred. Each frame sent over the connection is numbered, and the data link
layer guarantees that each frame sent is indeed received. Furthermore, it
guarantees that each frame is received exactly once and that all frames are
received in the right order. Connection-oriented service thus provides the network
layer processes with the equivalent of a reliable bit stream. It is appropriate over
long, unreliable links such as a satellite channel or a long-distance telephone
circuit. If acknowledged connectionless service were used, it is conceivable that
lost acknowledgements could cause a frame to be sent and received several times,
wasting bandwidth.
When connection-oriented service is used, transfers go through three distinct
phases. In the first phase, the connection is established by having both sides
initialize variables and counters needed to keep track of which frames have been
received and which ones have not. In the second phase, one or more frames are
actually transmitted. In the third and final phase, the connection is released,
freeing up the variables, buffers, and other resources used to maintain the
connection.
2.1.2 Framing

To provide service to the network layer, the data link layer must use the service
provided to it by the physical layer. What the physical layer does is accept a raw
bit stream and attempt to deliver it to the destination. If the channel is noisy, as it
is for most wireless and some wired links, the physical layer will add some
redundancy to its signals to reduce the bit error rate to a tolerable level. However,
the bit stream received by the data link layer is not guaranteed to be error free.
Some bits may have different values and the number of bits received may be less
than, equal to, or more than the number of bits transmitted. It is up to the data link
layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break up the bit stream into discrete
frames, compute a short token called a checksum for each frame, and include the
checksum in the frame when it is transmitted. When a frame arrives at the
destination, the checksum is recomputed. If the newly computed checksum is
different from the one contained in the frame, the data link layer knows that an
error has occurred and takes steps to deal with it (e.g., discarding the bad frame
and possibly also sending back an error report).
Breaking up the bit stream into frames is more difficult than it at first appears. A
good design must make it easy for a receiver to find the start of new frames while
using little of the channel bandwidth. We will look at four methods:
1. Byte count.
2. Flag bytes with byte stuffing.
3. Flag bits with bit stuffing.
4. Physical layer coding violations.
Byte count
The first framing method uses a field in the header to specify the number of bytes
in the frame. When the data link layer at the destination sees the byte count, it
knows how many bytes follow and hence where the end of the frame is. This
technique is shown in Fig. 3-3(a) for four small example frames of sizes 5, 5, 8,
and 8 bytes, respectively.
The trouble with this algorithm is that the count can be garbled by a transmission
error. For example, if the byte count of 5 in the second frame of Fig. 3-3(b)
becomes a 7 due to a single bit flip, the destination will get out of synchronization.
It will then be unable to locate the correct start of the next frame. Even if the
checksum is incorrect so the destination knows that the frame is bad, it still has
no way of telling where the next frame starts. Sending a frame back to the source
asking for a retransmission does not help either, since the destination does not
know how many bytes to skip over to get to the start of the retransmission. For
this reason, the byte count method is rarely used by itself.
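As an illustrative sketch (not from the source), a receiver using byte counts could split a stream like this; the frame sizes match the 5, 5, 8, 8 example of Fig. 3-3(a):

```python
def split_frames(stream: bytes) -> list[bytes]:
    """Split a byte stream into frames using the byte-count method.

    The first byte of each frame is its total length, count byte included.
    """
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                       # header: bytes in this frame
        frames.append(stream[i + 1:i + count])  # payload after the count byte
        i += count                              # jump to the next count byte
    return frames

# Four frames of total sizes 5, 5, 8, and 8 bytes, as in Fig. 3-3(a).
stream = bytes([5, 1, 2, 3, 4,
                5, 5, 6, 7, 8,
                8, 9, 0, 1, 2, 3, 4, 5,
                8, 6, 7, 8, 9, 0, 1, 2])
assert split_frames(stream)[1] == bytes([5, 6, 7, 8])
# If a bit flip turns the second count byte from 5 into 7, every frame
# boundary after it is lost, which is why byte count is rarely used alone.
```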

Flag bytes with byte stuffing


The second framing method gets around the problem of resynchronization after
an error by having each frame start and end with special bytes. Often the same
byte, called a flag byte, is used as both the starting and ending delimiter. This byte
is shown in Fig. 3-4(a) as FLAG. Two consecutive flag bytes indicate the end of
one frame and the start of the next. Thus, if the receiver ever loses synchronization
it can just search for two flag bytes to find the end of the current frame and the
start of the next frame.
However, there is still a problem we have to solve. It may happen that the flag
byte occurs in the data, especially when binary data such as photographs or songs
are being transmitted. This situation would interfere with the framing. One way
to solve this problem is to have the sender’s data link layer insert a special escape
byte (ESC) just before each ‘‘accidental’’ flag byte in the data. Thus, a framing
flag byte can be distinguished from one in the data by the absence or presence of
an escape byte before it. The data link layer on the receiving end removes the
escape bytes before giving the data to the network layer. This technique is called
byte stuffing.
Of course, the next question is: what happens if an escape byte occurs in the
middle of the data? The answer is that it, too, is stuffed with an escape byte. At
the receiver, the first escape byte is removed, leaving the data byte that follows it
(which might be another escape byte or the flag byte). Some examples are shown
in Fig. 3-4(b). In all cases, the byte sequence delivered after destuffing is exactly
the same as the original byte sequence. We can still search for a frame boundary
by looking for two flag bytes in a row, without bothering to undo escapes.
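A minimal sketch of byte stuffing and destuffing, using the plain escape-prefix scheme described above; the FLAG and ESC values here are illustrative (real protocols such as PPP apply further transformations to escaped bytes):

```python
FLAG, ESC = 0x7E, 0x7D   # illustrative choices for the two reserved bytes

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload with FLAG delimiters, escaping accidental FLAG/ESC bytes."""
    body = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            body.append(ESC)          # insert the escape byte first
        body.append(b)
    body.append(FLAG)
    return bytes(body)

def byte_destuff(frame: bytes) -> bytes:
    """Undo the stuffing; assumes the frame starts and ends with FLAG."""
    out, i = bytearray(), 1           # skip the opening FLAG
    while i < len(frame) - 1:         # stop before the closing FLAG
        if frame[i] == ESC:
            i += 1                    # drop the escape, keep the byte after it
        out.append(frame[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42, ESC, 0x43])   # data containing FLAG and ESC
assert byte_destuff(byte_stuff(data)) == data
```

Note that an escaped FLAG inside the data never appears on the wire as a bare FLAG, so scanning for the delimiter stays unambiguous.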

Flag bits with bit stuffing.


The third method of delimiting the bit stream gets around a disadvantage of byte
stuffing, which is that it is tied to the use of 8-bit bytes. Framing can also be
done at the bit level, so frames can contain an arbitrary number of bits made up
of units of any size. It was developed for the once very popular HDLC (High-level
Data Link Control) protocol. Each frame begins and ends with a special bit
pattern, 01111110 or 0x7E in hexadecimal. This pattern is a flag byte. Whenever
the sender’s data link layer encounters five consecutive 1s in the data, it
automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is
analogous to byte stuffing, in which an escape byte is stuffed into the outgoing
character stream before a flag byte in the data.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it
automatically destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely
transparent to the network layer in both computers, so is bit stuffing. If the user
data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but
stored in the receiver’s memory as 01111110. Figure 3-5 gives an example of bit
stuffing.
With bit stuffing, the boundary between two frames can be unambiguously
recognized by the flag pattern. Thus, if the receiver loses track of where it is, all
it has to do is scan the input for flag sequences, since they can only occur at frame
boundaries and never within the data. With both bit and byte stuffing, a side effect
is that the length of a frame now depends on the contents of the data it carries.
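The stuffing rule can be sketched over bit strings (illustrative Python; real implementations work on bits in hardware shift registers):

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s (sender side)."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # the stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows five consecutive 1s (receiver side)."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

# The flag pattern in user data is transmitted as 011111010, as in the text.
assert bit_stuff("01111110") == "011111010"
assert bit_destuff("011111010") == "01111110"
```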

Physical layer coding violations


The last method of framing is to use a shortcut from the physical layer. We can
use some reserved signals to indicate the start and end of frames. In effect, we are
using ‘‘coding violations’’ to delimit frames. The beauty of this scheme is that,
because they are reserved signals, it is easy to find the start and end of frames and
there is no need to stuff the data.
2.1.3 Error control
Assume for the moment that the receiver can tell whether a frame that it receives
contains correct or faulty information. For unacknowledged connectionless
service it might be fine if the sender just kept outputting frames without regard to
whether they were arriving properly. But for reliable, connection-oriented service
it would not be fine at all.
The usual way to ensure reliable delivery is to provide the sender with some
feedback about what is happening at the other end of the line. Typically, the
protocol calls for the receiver to send back special control frames bearing positive
or negative acknowledgements about the incoming frames. If the sender receives
a positive acknowledgement about a frame, it knows the frame has arrived safely.
On the other hand, a negative acknowledgement means that something has gone
wrong and the frame must be transmitted again.
Similarly, if the acknowledgement frame is lost, the sender will not know how to
proceed. It should be clear that a protocol in which the sender transmits a frame
and then waits for an acknowledgement, positive or negative, will hang forever
if a frame is ever lost due to, for example, malfunctioning hardware or a faulty
communication channel. This possibility is dealt with by introducing timers into
the data link layer. When the sender transmits a frame, it generally also starts a
timer. The timer is set to expire after an interval long enough for the frame to
reach the destination, be processed there, and have the acknowledgement
propagate back to the sender. Normally, the frame will be correctly received and
the acknowledgement will get back before the timer runs out, in which case the
timer will be cancelled.

However, if either the frame or the acknowledgement is lost, the timer will go off,
alerting the sender to a potential problem. The obvious solution is to just transmit
the frame again. However, when frames may be transmitted multiple times there
is a danger that the receiver will accept the same frame two or more times and
pass it to the network layer more than once. To prevent this from happening, it is
generally necessary to assign sequence numbers to outgoing frames, so that the
receiver can distinguish retransmissions from originals.
The whole issue of managing the timers and sequence numbers so as to ensure
that each frame is ultimately passed to the network layer at the destination exactly
once, no more and no less, is an important part of the duties of the data link layer.
2.1.4 Flow Control
Another important design issue that occurs in the data link layer (and higher
layers as well) is what to do with a sender that systematically wants to transmit
frames faster than the receiver can accept them. This situation can occur when the
sender is running on a fast, powerful computer and the receiver is running on a
slow, low-end machine.
Clearly, something has to be done to prevent this situation. Two approaches are
commonly used. In the first one, feedback-based flow control, the receiver sends
back information to the sender giving it permission to send more data, or at least
telling the sender how the receiver is doing. In the second one, rate-based flow
control, the protocol has a built-in mechanism that limits the rate at which senders
may transmit data, without using feedback from the receiver.

2.2 Error Detection and Correction

Transmission errors may appear as isolated single-bit errors or as bursts in which
many consecutive bits are corrupted. Other types of errors also exist. Sometimes,
the location of an error will be known, perhaps because the physical layer received
an analog signal that was far from the expected value for a 0 or a 1 and declared
the bit to be lost. This situation is called an erasure channel. It is easier to correct
errors in erasure channels than in channels that flip bits because, even if the value
of the bit has been lost, at least we know which bit is in error. However, we often
do not have the benefit of erasures.
Error Control
Error control can be done in two ways:
Error detection − Error detection involves checking whether any error has
occurred or not. The number of error bits and the type of error do not matter.
Error correction − Error correction involves ascertaining the exact number of
bits that have been corrupted and the location of the corrupted bits. The use of
error-correcting codes is often referred to as forward error correction.
For both error detection and error correction, the sender needs to send some
additional bits along with the data bits, called redundant bits. The receiver
performs the necessary checks based upon these redundant bits. If it finds
that the data is free from errors, it removes the redundant bits before passing the
message to the upper layers.
2.2.1 Error Correcting Codes
Types of Error Correcting Codes

ECCs can be broadly categorized into two types: block codes and convolutional
codes.

 Block codes − The message is divided into fixed-sized blocks of bits, to
which redundant bits are added for error detection or correction.
 Convolutional codes − The message comprises data streams of arbitrary
length, and parity symbols are generated by the sliding application of a
Boolean function to the data stream.
Four different error-correcting codes:
1. Hamming codes.
2. Binary convolutional codes.
3. Reed-Solomon codes.
4. Low-Density Parity Check codes.
 Hamming Codes − It is a block code that is capable of detecting up to two
simultaneous bit errors and correcting single-bit errors.
 Binary Convolutional Code − Here, an encoder processes an input
sequence of bits of arbitrary length and generates a sequence of output bits.
 Reed - Solomon Code − They are block codes that are capable of
correcting burst errors in the received data block.
 Low-Density Parity Check Code − It is a block code specified by a
parity-check matrix containing a low density of 1s. They are suitable for
large block sizes in very noisy channels.
Hamming Code

Hamming code is a block code that is capable of detecting up to two simultaneous
bit errors and correcting single-bit errors. It was developed by R. W. Hamming
for error correction. In this coding method, the source encodes the message by
inserting redundant bits within the message. These redundant bits are extra bits
that are generated and inserted at specific positions in the message itself to enable
error detection and correction. When the destination receives this message, it
performs recalculations to detect errors and find the bit position that is in error.

Encoding a message by Hamming Code

The procedure used by the sender to encode the message encompasses the
following steps −

 Step 1 − Calculation of the number of redundant bits.
 Step 2 − Positioning the redundant bits.
 Step 3 − Calculating the values of each redundant bit.

Once the redundant bits are embedded within the message, it is sent to the receiver.

Step 1 − Calculation of the number of redundant bits.

If the message contains m data bits, r redundant bits are added to it so that
(m + r + 1) different states can be indicated. Here, (m + r) states indicate the
location of an error in each of the (m + r) bit positions, and one additional state
indicates no error. Since r bits can indicate 2^r states, 2^r must be at least equal
to (m + r + 1). Thus the following relation should hold:

2^r ≥ m + r + 1
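The relation 2^r ≥ m + r + 1 can be solved for r by a simple search (an illustrative sketch):

```python
def redundant_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1."""
    r = 0
    while 2 ** r < m + r + 1:   # try increasing r until the relation holds
        r += 1
    return r

assert redundant_bits(4) == 3   # 4 data bits need 3 parity bits: Hamming(7,4)
assert redundant_bits(7) == 4   # 7 data bits need 4 parity bits: Hamming(11,7)
```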

Step 2 − Positioning the redundant bits.

The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4,
8, 16, etc. They are referred to in the rest of this text as r1 (at position 1), r2 (at
position 2), r3 (at position 4), r4 (at position 8), and so on.

Step 3 − Calculating the values of each redundant bit.

The redundant bits are parity bits. A parity bit is an extra bit that makes the
number of 1s either even or odd. The two types of parity are −

 Even Parity − Here the total number of 1s in the message is made even.
 Odd Parity − Here the total number of 1s in the message is made odd.

Each redundant bit, ri, is calculated as the parity, generally even parity, based
upon its bit position. It covers all bit positions whose binary representation
includes a 1 in the ith position except the position of ri. Thus −

 r1 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the least significant position excluding 1 (3, 5, 7, 9, 11 and
so on)
 r2 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 2 from right except 2 (3, 6, 7, 10, 11 and so on)
 r3 is the parity bit for all data bits in positions whose binary representation
includes a 1 in the position 3 from right except 4 (5-7, 12-15, 20-23 and so
on)
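The three encoding steps above can be sketched as follows (an illustrative even-parity implementation, not from the source):

```python
def hamming_encode(data: str) -> str:
    """Encode a bit string with even-parity Hamming redundant bits.

    Bit positions are numbered from 1; powers of two hold the parity bits.
    """
    m = len(data)
    r = 0
    while 2 ** r < m + r + 1:       # Step 1: smallest r with 2^r >= m + r + 1
        r += 1
    code = [0] * (m + r + 1)        # index 0 unused; positions 1 .. m + r
    it = iter(data)
    for pos in range(1, m + r + 1):
        if pos & (pos - 1):         # Step 2: non-powers of two hold data bits
            code[pos] = int(next(it))
    for i in range(r):              # Step 3: fill each parity bit
        p = 2 ** i
        # even parity over positions whose binary form has a 1 at this place
        code[p] = sum(code[pos] for pos in range(1, m + r + 1)
                      if pos & p and pos != p) % 2
    return "".join(map(str, code[1:]))

assert hamming_encode("1011") == "0110011"   # the classic Hamming(7,4) codeword
```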
Decoding a message in Hamming Code

Once the receiver gets an incoming message, it performs recalculations to detect
errors and correct them. The steps for recalculation are −

 Step 1 − Calculation of the number of redundant bits.
 Step 2 − Positioning the redundant bits.
 Step 3 − Parity checking.
 Step 4 − Error detection and correction.
Step 1 − Calculation of the number of redundant bits

Using the same formula as in encoding, the number of redundant bits is
ascertained:

2^r ≥ m + r + 1, where m is the number of data bits and r is the number of
redundant bits.

Step 2 − Positioning the redundant bits

The r redundant bits are placed at bit positions that are powers of 2, i.e. 1, 2, 4, 8, 16, etc.

Step 3 − Parity checking

Parity bits are calculated based upon the data bits and the redundant bits, using
the same rule as during generation of p1, p2, p3, p4, etc. Thus:

p1 = parity(1, 3, 5, 7, 9, 11 and so on)

p2 = parity(2, 3, 6, 7, 10, 11 and so on)

p3 = parity(4-7, 12-15, 20-23 and so on)

Step 4 − Error detection and correction

The decimal equivalent of the parity bits' binary values is calculated. If it is 0,
there is no error. Otherwise, the decimal value gives the bit position that is in
error. For example, if the parity checks evaluate to 1001, the bit at position 9,
the decimal equivalent of 1001, is in error. The bit is flipped to get the correct
message.
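The receiver's side can be sketched as recomputing the parity checks to form a syndrome and flipping the indicated bit (illustrative Python, companion to the encoding sketch; even parity assumed):

```python
def hamming_correct(code: str) -> str:
    """Recompute the parity checks, locate a single-bit error, and fix it.

    Positions are numbered from 1; a zero syndrome means no error.
    """
    n = len(code)
    bits = [0] + [int(b) for b in code]        # index 0 unused
    syndrome = 0
    p = 1
    while p <= n:
        # even-parity check over every position whose binary form has this bit
        if sum(bits[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p                      # this parity check failed
        p *= 2
    if syndrome:                               # syndrome = erroneous position
        bits[syndrome] ^= 1
    return "".join(map(str, bits[1:]))

codeword = "0110011"                           # Hamming(7,4) codeword for 1011
damaged = codeword[:4] + "1" + codeword[5:]    # flip bit position 5
assert hamming_correct(damaged) == codeword
```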

2.2.2 Error Detecting Codes


We will look at three different error-detecting codes. They are all linear, systematic block codes:
1. Parity.
2. Checksums.
3. Cyclic Redundancy Checks (CRCs).

Parity
The parity check is done by adding an extra bit, called a parity bit, to the data to
make the number of 1s either even or odd depending upon the type of parity. The
parity check is suitable for single-bit error detection only.
The two types of parity checking are
 Even Parity − Here the total number of 1s in the message is made even.
 Odd Parity − Here the total number of 1s in the message is made odd.
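A minimal sketch of parity generation and checking (illustrative; any single flipped bit changes the count of 1s by one and so is detected):

```python
def add_parity(bits: str, even: bool = True) -> str:
    """Append a parity bit that makes the count of 1s even (or odd)."""
    ones = bits.count("1")
    parity = ones % 2 if even else (ones + 1) % 2
    return bits + str(parity)

def parity_ok(frame: str, even: bool = True) -> bool:
    """Check that the count of 1s has the agreed parity."""
    want = 0 if even else 1
    return frame.count("1") % 2 == want

frame = add_parity("1011001")                  # four 1s, so parity bit is 0
assert parity_ok(frame)
assert not parity_ok(frame[:2] + "0" + frame[3:])   # single-bit error detected
```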

Checksums

This is a block code method where a checksum is created based on the data values
in the data blocks to be transmitted using some algorithm and appended to the
data. When the receiver gets this data, a new checksum is calculated and
compared with the existing checksum. A non-match indicates an error.

Error Detection by Checksums

For error detection by checksums, data is divided into fixed sized frames or
segments.

 Sender's End − The sender adds the segments using 1’s complement
arithmetic to get the sum. It then complements the sum to get the checksum
and sends it along with the data frames.
 Receiver's End − The receiver adds the incoming segments along with the
checksum using 1’s complement arithmetic to get the sum and then
complements it.

If the result is zero, the received frames are accepted; otherwise they are
discarded.
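The sender and receiver procedures can be sketched with 8-bit segments (illustrative; the Internet checksum uses the same end-around-carry idea with 16-bit words):

```python
def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (1's complement arithmetic)."""
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # fold the carry back in
    return total

def make_checksum(segments, bits=8):
    """Sender: complement of the 1's complement sum of the segments."""
    return ones_complement_sum(segments, bits) ^ ((1 << bits) - 1)

def verify(segments, checksum, bits=8):
    """Receiver: sum everything including the checksum, complement, expect 0."""
    total = ones_complement_sum(list(segments) + [checksum], bits)
    return total ^ ((1 << bits) - 1) == 0

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
ck = make_checksum(data)
assert verify(data, ck)
assert not verify([data[0] ^ 0b100] + data[1:], ck)   # corrupted segment fails
```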

Cyclic Redundancy Check (CRC)

Cyclic Redundancy Check (CRC) is a block code invented by W. Wesley
Peterson in 1961. It is commonly used to detect accidental changes to data
transmitted via telecommunication networks and storage devices.

CRC involves binary division of the data bits being sent by a predetermined
divisor agreed upon by the communicating system. The divisor is generated using
polynomials. So, CRC is also called polynomial code checksum.

When the polynomial code method is employed, the sender and receiver must
agree upon a generator polynomial, G(x), in advance. Both the high- and low-
order bits of the generator must be 1. To compute the CRC for some frame with
m bits corresponding to the polynomial M(x), the frame must be longer than the
generator polynomial. The idea is to append a CRC to the end of the frame in
such a way that the polynomial represented by the checksummed frame is
divisible by G(x). When the receiver gets the checksummed frame, it tries
dividing it by G(x). If there is a remainder, there has been a transmission error.

The algorithm for computing the CRC is as follows:

1. Let r be the degree of G(x). Append r zero bits to the low-order end of the
frame so it now contains m + r bits and corresponds to the polynomial x^r M(x).

2. Divide the bit string corresponding to G(x) into the bit string corresponding to
x^r M(x), using modulo-2 division.

3. Subtract the remainder (which is always r or fewer bits) from the bit string
corresponding to x^r M(x) using modulo-2 subtraction. The result is the
checksummed frame to be transmitted. Call its polynomial T(x).

Figure 3-9 illustrates the calculation for the frame 1101011111 using the generator
G(x) = x^4 + x + 1.
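The three-step algorithm can be sketched as modulo-2 long division (illustrative Python); with the frame and generator above it reproduces the transmitted frame of Fig. 3-9:

```python
def crc_remainder(frame: str, generator: str) -> str:
    """Modulo-2 division: remainder of x^r * M(x) divided by G(x)."""
    r = len(generator) - 1                       # r = degree of G(x)
    bits = [int(b) for b in frame] + [0] * r     # Step 1: append r zero bits
    g = [int(b) for b in generator]
    for i in range(len(frame)):                  # Step 2: long division
        if bits[i]:                              # XOR in G wherever a 1 leads
            for j, gb in enumerate(g):
                bits[i + j] ^= gb
    return "".join(map(str, bits[-r:]))          # Step 3: remainder = CRC

# Frame 1101011111 with G(x) = x^4 + x + 1, i.e. bit pattern 10011.
rem = crc_remainder("1101011111", "10011")
transmitted = "1101011111" + rem
assert transmitted == "11010111110010"
```

The receiver runs the same division on the transmitted frame; a nonzero remainder signals a transmission error.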

2.3 Types of Data Link Protocols

Data link protocols can be broadly divided into two categories, depending on
whether the transmission channel is noiseless or noisy.

Simplex Protocol
The Simplex protocol is a hypothetical protocol designed for unidirectional data
transmission over an ideal channel, i.e. a channel through which transmission can
never go wrong. It has distinct procedures for the sender and receiver. The sender
simply sends all its data onto the channel as soon as it is available in its buffer.
The receiver is assumed to process all incoming data instantly. It is
hypothetical since it does not handle flow control or error control.
Stop – and – Wait Protocol
Stop – and – Wait protocol is also for a noiseless channel. It provides
unidirectional data transmission without any error control facilities. However, it
provides for flow control so that a fast sender does not drown a slow receiver.
The receiver has a finite buffer size and finite processing speed. The sender can
send a frame only when it has received an indication from the receiver that it is
available for further data processing.
Stop – and – Wait ARQ
Stop – and – wait Automatic Repeat Request (Stop – and – Wait ARQ) is a
variation of the above protocol with added error control mechanisms, appropriate
for noisy channels. The sender keeps a copy of the sent frame. It then waits for a
finite time to receive a positive acknowledgement from the receiver. If the timer
expires or a negative acknowledgement is received, the frame is retransmitted. If
a positive acknowledgement is received then the next frame is sent.
Go – Back – N ARQ
Go – Back – N ARQ provides for sending multiple frames before receiving the
acknowledgement for the first frame. It uses the concept of sliding window, and
so is also called sliding window protocol. The frames are sequentially numbered
and a finite number of frames are sent. If the acknowledgement of a frame is not
received within the time period, all frames starting from that frame are
retransmitted.
Selective Repeat ARQ
This protocol also provides for sending multiple frames before receiving the
acknowledgement for the first frame. However, here only the erroneous or lost
frames are retransmitted, while the good frames are received and buffered.
2.3.1 Elementary Data Link Protocols
1. An Unrestricted (Utopian) Simplex Protocol
2. A Simplex Stop-and-Wait Protocol
3. A Simplex Protocol for a Noisy Channel (Stop and wait ARQ)

NOISELESS CHANNELS
Let us first assume we have an ideal channel in which no frames are lost,
duplicated, or corrupted. We introduce two protocols for this type of channel. The
first is a protocol that does not use flow control; the second is the one that does.

Of course, neither has error control because we have assumed that the channel is
a perfect noiseless channel.
Protocol 1 : An Unrestricted Simplex Protocol
Assumptions
 The transmission channel is completely noiseless and ideal: no frames are
lost, corrupted, or duplicated.
 There is no error control or flow control mechanism.
 The buffer space for storing the frames at the sender's end and the receiver's
end is infinite.

Important Points related to the simplest protocol:

 The processing time of the simplest protocol is very short. Hence it can be
neglected.
 The sender and the receiver are always ready to send and receive data.
 The sender sends a sequence of data frames without thinking about the
receiver.
 There is no data loss, hence no ACK or NACK is needed.
 The DLL at the receiving end immediately removes the frame header and
transfers the data to the subsequent layer.
Design
The design of the simplest protocol is very simple, as there is no error or flow
control mechanism. The sender end (present at the data link layer) gets the data
from the network layer and converts it into frames so that it can be easily
transmitted.

Now on the receiver's end, the data frame is taken from the physical layer and
then the data link layer extracts the actual data (by removing the header from the
data frame) from the data frame.

Algorithms
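The protocol procedures can be sketched as two simple loops, with an in-memory queue standing in for the physical layer (all names here are illustrative, not from the source):

```python
from collections import deque

channel = deque()                 # stands in for the physical layer

def sender(packets):
    """Utopian sender: frame each packet and transmit, never waiting."""
    for packet in packets:
        frame = {"info": packet}  # no sequence numbers or ACK fields needed
        channel.append(frame)     # hand the frame to the physical layer

def receiver():
    """Utopian receiver: take each frame, strip the header, deliver the data."""
    delivered = []
    while channel:
        frame = channel.popleft()         # frame arrives from physical layer
        delivered.append(frame["info"])   # pass the payload to network layer
    return delivered

sender(["p1", "p2", "p3"])
assert receiver() == ["p1", "p2", "p3"]
```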

Flow diagram
Let us now look at the flow chart of the data transfer from the sender to the
receiver. Suppose that the sender is A and the receiver is B.
The basic flow of the data frame is depicted in the diagram below.

The idea is very simple: the sender sends the sequence of data frames without
regard to the receiver. Whenever a sending request comes from the network
layer, the sender sends the data frames. Similarly, whenever a receiving request
comes from the physical layer, the receiver receives the data frames.
Protocol 2 : A Simplex Stop-and-Wait Protocol (noiseless channel)
Assumptions
 The transmission channel is completely noiseless and ideal: no frames are
lost, corrupted, or duplicated.
 There is no error control mechanism.
 The buffer space for storing the frames at the sender's end and the receiver's
end is finite.
 While transmitting data from sender to receiver, the flow of data is required
to be controlled.
Design
Sender Side
Rule 1: The sender sends one packet at a time.
Rule 2: The sender sends the next packet only after it receives the
acknowledgment of the previous packet from the receiver.
So, in the stop-and-wait protocol, the sender-side process is very simple.
Receiver Side
Rule 1: The receiver receives a data packet and then consumes it.
Rule 2: The receiver sends an acknowledgment once the data packet is
consumed.
So, in this protocol, the receiver-side process is also very simple.


Working of Stop and Wait Protocol


 When there is communication between sender and receiver, the sender
sends a packet (called a data packet) to the receiver.
 The sender will not send the second packet until the acknowledgment of
the first packet is received.
 The receiver sends an acknowledgment for each packet it receives.
 When the sender receives the acknowledgment, it sends the next packet.
 This process of sending data and receiving acknowledgments continues
until all the packets are sent.
 The main benefit of the stop-and-wait protocol is its simplicity.

Advantages of Stop and Wait Protocol


 This protocol provides flow control.
 One of the main advantages of the stop-and-wait protocol is its simplicity.
 The stop-and-wait protocol is useful in LANs, as its efficiency decreases
with increasing distance between sender and receiver.
 If we increase the data packet size, the efficiency increases, since the
fixed waiting time per packet is amortized over more data. Hence, it is
suitable for transmitting big data packets.


 The stop-and-wait protocol is accurate: the sender sends the next frame
only when the acknowledgment of the previous packet is received, so
there is less chance of a frame being lost.

Conclusion
 Stop and wait protocol is a simple and reliable protocol for flow control.
 Stop and wait protocol is a data link layer protocol.
 In this protocol, the sender will not send the next packet to the receiver
until the acknowledgment of the previous packet is received.
 One of the disadvantages of stop and wait protocol is that its efficiency is
low.

NOISY CHANNELS
Although the Stop-and-Wait Protocol shows how to add flow control to its
predecessor, noiseless channels are nonexistent. We can either ignore errors
(as we sometimes do) or add error control to our protocols. We discuss three
protocols in this section that use error control.
Protocol 3 :A Simplex Protocol for Noisy Channel (Stop and Wait ARQ)


Stop & Wait ARQ is a sliding window protocol (with window size 1) for flow
and error control. It overcomes the limitations of Stop & Wait, so we can say
that it is the improved or modified version of the Stop & Wait protocol.
Assumptions
• Stop & Wait ARQ assumes that the communication channel is noisy
(the original Stop & Wait assumed a noiseless channel).
• Stop & Wait ARQ also assumes that errors may occur in the data during
transmission.
• The buffer space for storing frames at the sender's end and the
receiver's end is finite.
• While transmitting data from sender to receiver, the flow of data must
be controlled.

Problems with Stop & Wait:

1. Lost Data
2. Lost Acknowledgement
3. Delayed Acknowledgement/Data: after a timeout on the sender side, a
long-delayed acknowledgement might be wrongly taken as the
acknowledgement of some other recent packet.

The above three problems are resolved by Stop-and-Wait ARQ (Automatic
Repeat reQuest), which provides both error control and flow control.


Solutions:

1. Time Out: a lost data frame is resolved by retransmitting the frame
after a timer expires.
2. Sequence Number (Data): a lost or duplicated data frame is resolved by
numbering the frames, so duplicates can be detected and discarded.
3. Delayed Acknowledgement: this is resolved by introducing sequence
numbers for acknowledgements as well.

Working of Stop and Wait for ARQ:


1) Sender A sends a data frame or packet with sequence number 0.
2) Receiver B, after receiving the data frame, sends an acknowledgement with
sequence number 1 (the sequence number of the next expected data frame
or packet). The one-bit sequence number implies that both sender and
receiver have a buffer for only one frame or packet.


2.3.2 Sliding Window


Sliding window protocols are data link layer protocols for reliable and sequential
delivery of data frames. The sliding window is also used in Transmission Control
Protocol.
In this protocol, multiple frames can be sent by a sender at a time before receiving
an acknowledgment from the receiver. The term sliding window refers to the
imaginary boxes to hold frames. Sliding window method is also known as
windowing.
Piggybacking


Piggybacking is the technique of delaying outgoing acknowledgment and


attaching it to the next data packet.
When a data frame arrives, the receiver does not send the control frame
(acknowledgment) back immediately. Instead, it waits until its network layer
hands over the next data packet, and the acknowledgment is attached to this
outgoing data frame. Thus the acknowledgment travels along with the next data
frame. This technique of temporarily delaying the outgoing acknowledgment is
called piggybacking.
Working of the Sliding Window Protocol
The working of the sliding window protocol can be divided into sender-side
steps and receiver-side steps. The values needed in a network model for
smooth transmission of the data frames are:

 Sender and the receiver side


 Window Size
 The total data frames to be transmitted
 Proper sequencing of the frames

Steps for the Sender Side

 To begin with, the sender side will share data frames with the receiver
side per the window size assigned to the model.
 The sliding window will appear on the frames transmitted over to the
receiver side.


 Then the sender will wait for an acknowledgment from the receiver side
for the shared frames, as mentioned in figure1.

 When the receiver transmits the acknowledgment of the first transmitted


frame, the sliding window will shift from the acknowledged frame.
Now, let’s move on to the steps involved on the receiver side.

Steps for the Receiver Side

 On receiving the data frames from the sender side, the receiver will use
the frames in the network model.
 After the receiver uses the frame, it will transmit the acknowledgement
to the sender side for that data frame.
 Then, the receiver side will receive the next data frame from the sender
side, as mentioned in figure2.
One -bit Sliding Window

The one-bit sliding window protocol is based on the sliding window concept,
but with a window size of 1.


Assume that computer A is trying to send its frame 0 to computer B and that B is
trying to send its frame 0 to A. Suppose that A sends a frame to B, but A’s timeout
interval is a little too short. Consequently, A may time out repeatedly, sending a
series of identical frames, all with seq = 0 and ack = 1.

When the first valid frame arrives at computer B, it will be accepted and frame
expected will be set to a value of 1. All the subsequent frames received will be
rejected because B is now expecting frames with sequence number 1, not 0.
Furthermore, since all the duplicates will have ack = 1 and B is still waiting for
an acknowledgement of 0, B will not go and fetch a new packet from its network
layer.

After every rejected duplicate comes in, B will send A a frame containing seq =
0 and ack = 0. Eventually, one of these will arrive correctly at A, causing A to
begin sending the next packet. No combination of lost frames or premature
timeouts can cause the protocol to deliver duplicate packets to either network
layer, to skip a packet, or to deadlock. The protocol is correct.

However, to show how subtle protocol interactions can be, we note that a peculiar
situation arises if both sides simultaneously send an initial packet. This
synchronization difficulty is illustrated by Fig. 3-17. In part (a), the normal
operation of the protocol is shown. In (b) the peculiarity is illustrated. If B waits
for A’s first frame before sending one of its own, the sequence is as shown in (a),
and every frame is accepted.

However, if A and B simultaneously initiate communication, their first frames


cross, and the data link layers then get into situation (b). In (a) each frame arrival
brings a new packet for the network layer; there are no duplicates. In (b) half of
the frames contain duplicates, even though there are no transmission errors.
Similar situations can occur as a result of premature timeouts, even when one side
clearly starts first. In fact, if multiple premature timeouts occur, frames may be
sent three or more times, wasting valuable bandwidth.


Go-Back-N ARQ

It is a sliding window protocol in which multiple frames are sent from sender to
receiver at once. The number of frames that are sent at once depends upon the
size of the window that is taken.

Pipelining

Pipelining is the process of sending multiple data packets one after another
without waiting for the acknowledgement of the previous packet. Pipelining
ensures efficient utilization of network resources and enhances the speed of
data delivery, supporting timely transmission of data.

Design of Go-Back-N ARQ

With Go-Back-N ARQ, multiple frames can be in transit in the forward
direction and multiple acknowledgements can be in transit in the reverse
direction. The idea is similar to Stop-and-Wait ARQ, with one difference:
the send window of Go-Back-N ARQ allows several frames to be in transit at
once, as there are many slots in the send window.


Window size for Go-Back-N ARQ

In Go-Back-N ARQ, the size of the send window must always be less than 2^m
(where m is the number of bits in the sequence number), and the size of the
receiver window is always 1.
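The behaviour can be sketched as a toy simulation. The parameters are made up for illustration: a window of 3 frames (legal for m = 2, since 3 < 2^2), cumulative acknowledgements, and one scripted loss showing how a single lost frame forces every later in-flight frame to be resent.

```python
def go_back_n(num_frames, window, drop_first_copy_of=None):
    """Toy Go-Back-N: cumulative acks, the receiver accepts only in-order
    frames, and the channel drops the first copy of one chosen frame."""
    base = 0          # oldest unacknowledged frame
    next_seq = 0      # next frame the sender will transmit
    delivered = []    # what the receiver passes to its network layer
    sent_log = []     # every transmission, including retransmissions
    dropped_once = False
    while base < num_frames:
        # sender: fill the send window
        while next_seq < base + window and next_seq < num_frames:
            sent_log.append(next_seq)
            next_seq += 1
        # channel + receiver: deliver the outstanding frames in order
        loss = False
        for seq in range(base, next_seq):
            if seq == drop_first_copy_of and not dropped_once:
                dropped_once = True
                loss = True
                break             # frame lost; receiver discards the rest
            delivered.append(seq)
            base = seq + 1        # cumulative ack slides the window
        if loss:
            next_seq = base       # timeout: go back and resend from base
    return delivered, sent_log

delivered, sent = go_back_n(6, window=3, drop_first_copy_of=1)
print(delivered)  # [0, 1, 2, 3, 4, 5]
print(sent)       # [0, 1, 2, 1, 2, 3, 4, 5] - frames 1 and 2 go twice
```

Frame 2 arrived intact the first time but was discarded by the receiver because it was out of order; that discarded-and-resent frame is exactly the inefficiency Selective Repeat removes.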


Algorithm:


Selective repeat ARQ:

Go-Back-N ARQ simplifies the process at the receiver site. The receiver keeps
track of only one variable, and there is no need to buffer out-of-order frames; they
are simply discarded. However, this protocol is very inefficient for a noisy link.
In a noisy link a frame has a higher probability of damage, which means the
resending of multiple frames. This resending uses up the bandwidth and slows
down the transmission. For noisy links, there is another mechanism that does not
resend N frames when just one frame is damaged; only the damaged frame is
resent. This mechanism is called Selective Repeat ARQ. It is more efficient for
noisy links, but the processing at the receiver is more complex.
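The receiver-side difference from Go-Back-N can be sketched as follows; the arrival order below is a made-up example in which frame 1 is lost and selectively resent after frames 2 and 3:

```python
def selective_repeat_receiver(arrivals):
    """Toy Selective Repeat receiver: accepts frames in any order, buffers
    out-of-order ones, and delivers data to the network layer in sequence.
    `arrivals` lists (sequence_number, data) in actual arrival order."""
    buffer = {}
    next_expected = 0
    delivered = []
    for seq, data in arrivals:
        buffer[seq] = data                 # out-of-order frames are kept
        while next_expected in buffer:     # flush any in-order run
            delivered.append(buffer.pop(next_expected))
            next_expected += 1
    return delivered

# frame 1 is lost and selectively resent after frames 2 and 3 arrive:
arrivals = [(0, "f0"), (2, "f2"), (3, "f3"), (1, "f1")]
inorder = selective_repeat_receiver(arrivals)
print(inorder)  # ['f0', 'f1', 'f2', 'f3']
```

Frames 2 and 3 are never retransmitted: they wait in the buffer until the resent frame 1 arrives, which is exactly why Selective Repeat saves bandwidth on noisy links at the cost of a more complex receiver.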

Design

The design in this case is to some extent similar to the one we described
for Go-Back-N, but more complicated, as shown in Figure 11.20.


2.4 MEDIUM ACCESS SUB LAYER

To coordinate access to the channel, multiple access protocols are required.
All these protocols belong to the MAC sublayer. The data link layer is
divided into two sublayers:

1. Logical Link Control (LLC) - responsible for error control and flow
control.

2. Medium Access Control (MAC) - responsible for resolving access to the
shared medium (multiple access resolution).


2.4.1 THE CHANNEL ALLOCATION PROBLEM

In broadcast networks, a single channel is shared by several stations. This
channel can be allocated to only one transmitting user at a time. There are
two different methods of channel allocation:

Static channel allocation in LANs and MANs

Step 1 − If there are N users, the bandwidth is divided into N equal-sized
portions, with each user assigned one portion. Each user thus has a private
frequency band.

Step 2 − When there is only a small and constant number of users, each with
a heavy load of traffic, this division is a simple and efficient allocation
mechanism.

Step 3 − Let us take a wireless example of FM radio stations, each station gets a
portion of FM band and uses it most of the time to broadcast its signal.

Step 4 − When the number of senders is large and varying or traffic is suddenly
changing, FDM faces some problems.

Step 5 − If the spectrum is cut up into N regions and fewer than N users are
currently interested in communicating, a large piece of valuable spectrum will be
wasted. And if more than N users want to communicate, some of them will be
denied permission for lack of bandwidth, even if some of the users who have been
assigned a frequency band hardly ever transmit or receive anything.

Step 6 − A static allocation is a poor fit for most computer systems, in
which data traffic is extremely bursty, often with peak-to-mean traffic
ratios of 1000:1. Consequently, most of the channels will be idle most of
the time.

The poor performance of static FDM can be seen from queueing theory. For a
channel of capacity C bps, an arrival rate of λ frames/sec, and a mean frame
length of 1/μ bits, the mean time delay is T = 1/(μC − λ). Now let us divide
the single channel into N independent subchannels, each with capacity C/N
bps. The mean input rate on each of the subchannels will now be λ/N.
Recomputing T, we get

    T(FDM) = 1/(μ(C/N) − λ/N) = N/(μC − λ) = N·T

so the mean delay using FDM is N times worse than if all the frames were
somehow queued up on the single big channel.
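The delay penalty of static FDM can be checked numerically. The figures below (a 100 Mbps channel, 10,000-bit frames, 5000 frames/sec, N = 10 subchannels) are illustrative values, not requirements of the model:

```python
def mean_delay(mu_frames_per_bit, capacity_bps, arrival_rate):
    """Queueing-theory mean delay T = 1 / (mu*C - lambda)."""
    return 1.0 / (mu_frames_per_bit * capacity_bps - arrival_rate)

# Illustrative numbers: C = 100 Mbps, mean frame length 10,000 bits
# (so mu = 1/10000 frames per bit), lambda = 5000 frames/sec.
C, mu, lam, N = 100e6, 1 / 10000, 5000, 10

t_single = mean_delay(mu, C, lam)          # one big shared channel
t_fdm = mean_delay(mu, C / N, lam / N)     # one of N static subchannels

print(f"single channel: {t_single * 1e6:.0f} microseconds")  # 200
print(f"static FDM:     {t_fdm * 1e6:.0f} microseconds")     # 2000, N times worse
```

The ratio of the two delays comes out to exactly N, matching the algebra above.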

Dynamic channel allocation in LANs and MANs


Five key assumptions:

1. Independent Traffic. The model consists of N independent stations (e.g.,


computers, telephones), each with a program or user that generates frames for
transmission. The expected number of frames generated in an interval of length
Δt is λΔt, where λ is a constant (the arrival rate of new frames). Once a frame has
been generated, the station is blocked and does nothing until the frame has been
successfully transmitted.

2. Single Channel. A single channel is available for all communication. All


stations can transmit on it and all can receive from it. The stations are assumed to
be equally capable, though protocols may assign them different roles (e.g.,
priorities).

3. Observable Collisions. If two frames are transmitted simultaneously, they


overlap in time and the resulting signal is garbled. This event is called a collision.
All stations can detect that a collision has occurred. A collided frame must be
transmitted again later. No errors other than those generated by collisions occur.

4. Continuous or Slotted Time. Time may be assumed continuous, in which case


frame transmission can begin at any instant. Alternatively, time may be slotted or
divided into discrete intervals (called slots). Frame transmissions must then begin
at the start of a slot. A slot may contain 0, 1, or more frames, corresponding to an
idle slot, a successful transmission, or a collision, respectively.

5. Carrier Sense or No Carrier Sense. With the carrier sense assumption,


stations can tell if the channel is in use before trying to use it. No station will
attempt to use the channel while it is sensed as busy. If there is no carrier sense,
stations cannot sense the channel before trying to use it. They just go ahead and
transmit. Only later can they determine whether the transmission was successful.

2.5 MULTIPLE ACCESS PROTOCOLS

Many protocols have been defined to handle access to the shared link. These
protocols are organized into three different groups:


Random Access protocol

In this group, all stations have the same priority: no station has a higher
priority than another. Any station can send data depending on the medium's
state (idle or busy). Random access has two features:

1.There is no fixed time for sending data

2.There is no fixed sequence of stations sending data

ALOHA

ALOHA was designed for wireless LANs (Local Area Networks) but can also be
used on any shared medium. With this method, any station can transmit data
over the network whenever it has a data frame ready, even if other stations
are transmitting at the same time.

Aloha Rules

1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur, and data frames may be lost, when multiple
stations transmit at the same time.
4. ALOHA relies on acknowledgment of frames; there is no collision
detection.
5. It requires retransmission of data after a random amount of time.


PURE ALOHA

Pure ALOHA is used whenever data is available for sending over a channel. In
pure ALOHA, each station transmits data on the channel without checking
whether the channel is idle or not, so collisions may occur and data frames
may be lost. After transmitting a frame, the station waits for the
receiver's acknowledgment. If the acknowledgment does not arrive within the
specified time, the station assumes the frame has been lost or destroyed,
waits for a random amount of time, called the backoff time (Tb), and then
retransmits the frame. It repeats this until the frame is successfully
delivered to the receiver.

As we can see in the figure above, there are four stations accessing a
shared channel and transmitting data frames. Some frames collide because
several stations send their frames at the same time. Only two frames,
frame 1.1 and frame 2.2, are successfully transmitted to the receiver; the
other frames are lost or destroyed. Whenever two frames overlap on the
shared channel, a collision occurs and both frames are damaged: even if
only the first bit of a new frame overlaps with the last bit of a frame
that is almost finished, both frames are destroyed and both stations must
retransmit.

1. The total vulnerable time of pure ALOHA is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2, and is 18.4%.
3. The throughput (probability of successful transmission) is
   S = G × e^(−2G).

SLOTTED ALOHA

Slotted ALOHA was designed to improve on pure ALOHA's efficiency, because
pure ALOHA has a very high probability of frame collision. In slotted ALOHA,
time on the shared channel is divided into fixed intervals called slots. If
a station wants to send a frame on the shared channel, it can send it only
at the beginning of a slot, and only one frame may be sent in each slot. If
a station misses the beginning of a slot, it must wait until the beginning
of the next slot. However, a collision is still possible when two or more
stations try to send a frame at the beginning of the same time slot.


1. Maximum throughput occurs in slotted ALOHA when G = 1, and is about
   37%.
2. The probability of successfully transmitting a data frame in slotted
   ALOHA is S = G × e^(−G).
3. The total vulnerable time required in slotted ALOHA is Tfr.
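The standard throughput expressions, S = G·e^(−2G) for pure ALOHA and S = G·e^(−G) for slotted ALOHA, are easy to evaluate and confirm the quoted maxima:

```python
import math

def pure_aloha_throughput(G):
    """Throughput of pure ALOHA at offered load G frames per frame time."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """Throughput of slotted ALOHA at offered load G frames per slot."""
    return G * math.exp(-G)

print(f"pure ALOHA    max S = {pure_aloha_throughput(0.5):.4f}")   # 0.1839
print(f"slotted ALOHA max S = {slotted_aloha_throughput(1.0):.4f}")  # 0.3679
```

Slotting the channel halves the vulnerable period (Tfr instead of 2·Tfr), which is exactly why the peak throughput doubles from about 18.4% to about 36.8%.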


Carrier Sense Multiple Access (CSMA)

In the Carrier Sense Multiple Access (CSMA) protocol, a station senses the
channel before transmitting data. CSMA reduces the chance of collision in
the network, but it does not eliminate collisions from the channel.
1-Persistent, Non-Persistent, and P-Persistent are the three access methods
of CSMA.

 CSMA works on the principle of "Listen before Talking" or "Sense


before Transmit". When the device on the shared medium wants to
transmit a data frame, then the device first detects the channel to check
the presence of any carrier signal from other connected devices on the
network.
 In this situation, if the device senses any carrier signal on the shared
medium, then this means that there is another transmission on the
channel. The device will wait until the channel becomes idle and the
transmission that is in progress is currently completed.
 When the channel becomes idle the station starts its transmission. All
other stations connected in the network receive the transmission of the
station.
 In CSMA, the station senses or detects the channel before the
transmission of data so it reduces the chance of collision in the
transmission.
 But there may be a situation where two stations detect the channel as
idle at the same time and both start transmitting simultaneously; in
this case a collision can occur.
 So CSMA reduces the chance of collision in data transmission, but it
does not eliminate collisions.

Types of CSMA Access Modes

1-Persistent

This is considered the most straightforward method of CSMA. In this method,
if the station finds the medium idle, it immediately sends the data frame
with probability 1.

 If the station wants to transmit data, it first senses the medium.
 If the medium is busy, the station continuously senses the channel until
it becomes idle.
 If the station detects the channel as idle, it immediately sends the
data frame with probability 1, which is why this method is called
1-persistent.

In this method there is a high possibility of collision, as two or more
stations may sense the channel as idle at the same time and transmit
simultaneously.

Non-Persistent

In this method of CSMA, if the station finds the channel busy, it waits for
a random amount of time before sensing the channel again.

 If the station wants to transmit data, it first senses the medium.
 If the medium is idle, the station immediately sends the data.
 Otherwise, if the medium is busy, the station waits for a random amount
of time and then senses the channel again.
 In non-persistent CSMA there is less chance of collision than in the
1-persistent method, because the station does not sense the channel
continuously but only after waiting a random amount of time.


P-Persistent

The p-persistent method of CSMA is used when the channel is divided into
time slots whose duration is greater than or equal to the maximum
propagation time. It combines the advantages of 1-persistent and
non-persistent CSMA, reducing the chance of collision and increasing the
efficiency of the network. When a station wants to transmit data, it first
senses the channel. If the channel is busy, the station continuously senses
it until it becomes idle. If the channel is idle, the station performs the
following steps:

 The station transmits its data frame with probability p.
 With probability q = 1 − p, the station waits for the start of the next
time slot and then senses the channel again.
 If the channel is again idle, it repeats step 1. If the channel is busy,
the station assumes that a collision has occurred and follows the
back-off procedure.

CSMA with Collision Detection (CSMA/CD)

The Carrier Sense Multiple Access / Collision Detection protocol is used to
detect collisions in the media access control (MAC) layer. Once a collision
is detected, CSMA/CD immediately stops the transmission by sending a jam
signal, so that the sender does not waste time sending the rest of the data
packet. After detecting a collision, the station waits for a random time
interval before attempting to transmit the data packet again; if the
channel is then found free, it immediately sends the data.

CSMA/CD, as well as many other LAN protocols, uses the conceptual model of
Fig. 4-5. At the point marked t0, a station has finished transmitting its
frame.


Any other station having a frame to send may now attempt to do so. If two or
more stations decide to transmit simultaneously, there will be a collision. If a
station detects a collision, it aborts its transmission, waits a random period of
time, and then tries again (assuming that no other station has started transmitting
in the meantime). Therefore, our model for CSMA/CD will consist of alternating
contention and transmission periods, with idle periods occurring when all stations
are quiet (e.g., for lack of work).
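The "waits a random period of time" step is commonly implemented as binary exponential backoff; here is a sketch using the classic Ethernet parameters (51.2 microsecond slot time, exponent capped at 10), offered as an illustration rather than the only possible scheme:

```python
import random

def backoff_delay(attempt, slot_time=51.2e-6, max_exp=10):
    """Binary exponential backoff, as in classic Ethernet: after the n-th
    collision, wait a random number of slot times drawn from
    [0, 2^n - 1], with the exponent capped at max_exp. (Classic Ethernet
    also gives up after 16 attempts; that is not modelled here.)"""
    k = min(attempt, max_exp)
    return random.randint(0, 2 ** k - 1) * slot_time

random.seed(7)
delays = [backoff_delay(n) for n in range(1, 6)]
print([f"{d * 1e6:.1f}us" for d in delays])  # possible range widens each time
```

Doubling the randomization interval after each collision spreads out the retries, so repeated collisions between the same stations become rapidly less likely.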

In CSMA/CD, collisions can still occur during the contention period.
Collisions during the contention period adversely affect system
performance, especially when the cable is long and the frames are short.
This problem became more serious as fiber-optic networks came into use.
Here we discuss some protocols that resolve the contention for the channel
without any collisions at all.

Collision free Protocols

 Bit-map Protocol
 Token Passing
 Binary Countdown

Limited Contention Protocols

 The Adaptive Tree Walk Protocol

Pure and slotted ALOHA, CSMA, and CSMA/CD are contention-based protocols:

 Try; if a collision occurs, retry.
 No guarantee of performance.
 What happens if the network load is high?

Collision Free Protocols:

 Pay constant overhead to achieve performance guarantee


 Good when network load is high

1. Bit-map Protocol:

In this method, a station needs to make a reservation before sending data.

The timeline has two kinds of periods:

1. A reservation interval of fixed length.

2. A data transmission period of variable-length frames.

 If there are N stations, the reservation interval is divided into N
slots, and each station has one slot.
 Suppose station 1 has a frame to send; it transmits a 1 bit during slot
1. No other station is allowed to transmit during this slot.
 In general, the ith station may announce that it has a frame to send by
inserting a 1 bit into the ith slot. After all N slots have passed, each
station knows which stations wish to transmit.
 The stations that have reserved their slots transmit their frames in
that order.
 After the data transmission period, the next reservation interval
begins.
 Since everyone agrees on who goes next, there will never be any
collisions.

The following figure shows a situation with five stations and a five-slot
reservation frame. In the first interval, only stations 1, 3, and 4 have made
reservations. In the second interval, only station 1 has made a reservation.
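The reservation mechanism can be sketched in a few lines; the queued-frame pattern below mirrors the five-station example above:

```python
def bitmap_round(wants_to_send):
    """One reservation round of the bit-map protocol. `wants_to_send` has
    one boolean per station; the reservation interval has one contention
    slot per station, so reservations can never collide."""
    reservations = [1 if w else 0 for w in wants_to_send]  # contention slots
    transmit_order = [i for i, bit in enumerate(reservations) if bit]
    return reservations, transmit_order

# Five stations; stations 1, 3 and 4 have frames queued (as in the figure):
bits, order = bitmap_round([False, True, False, True, True])
print(bits)   # [0, 1, 0, 1, 1]
print(order)  # stations transmit in this order: [1, 3, 4]
```

Note the fixed cost: the N reservation slots are spent every round whether one station has a frame or all of them do, which is why the protocol shines under heavy load but wastes capacity under light load.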

Disadvantages

1. The scheme depends heavily on every station reliably observing the
reservation discipline.
2. Under light loads, the capacity and channel data rate decrease and the
turnaround time increases, because reservation slots are consumed even
when few stations have frames to send.


2. Token passing

In the token passing scheme, the stations are logically connected to each
other in the form of a ring, and access to the channel is governed by a
token.

A token is a special bit pattern or small message that circulates from one
station to the next in some predefined order.

 In a token ring, the token is passed from one station to the next
adjacent station in the ring, whereas in a token bus, each station uses
the bus to send the token to the next station in some predefined order.
 In both cases, the token represents permission to send. If a station has
a frame queued for transmission when it receives the token, it can send
that frame before passing the token to the next station. If it has no
queued frame, it simply passes the token along.
 After sending a frame, each station must wait for all N stations
(including itself) to pass the token to their neighbours and for the
other N − 1 stations to send a frame, if they have one.
 Problems such as duplication or loss of the token, and the insertion or
removal of stations, need to be handled for correct and reliable
operation of this scheme.

3. Binary Countdown

The binary countdown protocol is used to overcome the overhead of one bit
per station in the bit-map protocol. In binary countdown, binary station
addresses are used. A station wanting to use the channel broadcasts its
address as a binary bit string, starting with the high-order bit. All
addresses are assumed to be the same length. The example below illustrates
the working of binary countdown.


For example, if stations 0010, 0100, 1001, and 1010 are all trying to get the
channel, in the first bit time the stations transmit 0, 0, 1, and 1, respectively. These
are ORed together to form a 1. Stations 0010 and 0100 see the 1 and know that a
higher-numbered station is competing for the channel, so they give up for the
current round. Stations 1001 and 1010 continue. The next bit is 0, and both
stations continue. The next bit is 1, so station 1001 gives up. The winner is station
1010 because it has the highest address. After winning the bidding, it may now
transmit a frame, after which another bidding cycle starts. The protocol is
illustrated in Fig. 4-8. It has the property that higher-numbered stations have a
higher priority than lower-numbered stations, which may be either good or bad,
depending on the context.

The channel efficiency of this method is d/(d + log2 N), where d is the
number of data bits per frame. If, however, the frame format has been
cleverly chosen so that the sender's address is the first field in the
frame, even these log2 N bits are not wasted, and the efficiency is 100%.

Limited Contention Protocols:

 Collision based protocols (pure and slotted ALOHA, CSMA/CD) are good
when the network load is low.
 Collision free protocols (bitmap, binary Countdown) are good when load
is high.

How about combining their advantages :

 Behave like the ALOHA scheme under light load


 Behave like the bitmap scheme under heavy load.

Adaptive Tree Walk Protocol:

 Partition the group of stations and limit the contention for each slot.


 Under light load, every station can try for each slot, as in ALOHA.
 Under heavy load, only a subgroup of stations can try for each slot.

How do we do it :

 Treat every station as a leaf of a binary tree.
 In the first slot (after a successful transmission), all stations can
try to get the slot (all stations under the root node).
 If there is no conflict, fine.
 Otherwise, in case of conflict, only the stations under one subtree get
to try for the next slot (depth-first search).

Slot-0 : C*, E*, F*, H* (all nodes under node 0 can try which are going to
send), conflict

Slot-1 : C* (all nodes under node 1 can try}, C sends

Slot-2 : E*, F*, H*(all nodes under node 2 can try}, conflict

Slot-3 : E*, F* (all nodes under node 5 can try to send), conflict

Slot-4 : E* (all nodes under E can try), E sends

Slot-5 : F* (all nodes under F can try), F sends

Slot-6 : H* (all nodes under node 6 can try to send), H sends.
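A small depth-first simulation reproduces the seven slots above. The tree layout (node 0 as the root, nodes 1 and 2 as its children, and nodes 3–6 over the leaf pairs of stations A–H) follows the usual textbook figure; the code itself is only a sketch.

```python
# Binary tree over 8 stations A..H: node 0 is the root, nodes 1 and 2
# cover A-D and E-H, and nodes 3-6 each cover a pair of stations.
TREE = {0: [1, 2], 1: [3, 4], 2: [5, 6],
        3: ["A", "B"], 4: ["C", "D"], 5: ["E", "F"], 6: ["G", "H"]}

def stations_under(node):
    """All leaf stations in the subtree rooted at `node`."""
    if isinstance(node, str):
        return [node]
    out = []
    for child in TREE[node]:
        out += stations_under(child)
    return out

def adaptive_tree_walk(ready):
    """Run the depth-first probe sequence; one log line per slot."""
    log, agenda = [], [0]            # start by probing the root
    while agenda:
        node = agenda.pop(0)         # children were prepended: DFS order
        contenders = [s for s in stations_under(node) if s in ready]
        if len(contenders) >= 2:     # collision: descend into both subtrees
            log.append(f"node {node}: collision {contenders}")
            agenda = TREE[node] + agenda
        elif contenders:
            log.append(f"node {node}: {contenders[0]} sends")
        else:
            log.append(f"node {node}: idle")
    return log

for line in adaptive_tree_walk({"C", "E", "F", "H"}):
    print(line)
```

With C, E, F, and H ready, the simulation emits exactly the seven slots listed above: collisions at nodes 0, 2, and 5, then successful transmissions by C, E, F, and H.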

Wireless LAN Protocols

We have already remarked that wireless systems cannot normally detect a
collision while it is occurring. The received signal at a station may be tiny,
perhaps a million times fainter than the signal that is being transmitted.


There is an even more important difference between wireless LANs and wired
LANs. A station on a wireless LAN may not be able to transmit frames to or
receive frames from all other stations because of the limited radio range of the
stations. In wired LANs, when one station sends a frame, all other stations receive
it. The absence of this property in wireless LANs causes a variety of
complications.

A naive approach to using a wireless LAN might be to try CSMA: just listen for
other transmissions and only transmit if no one else is doing so. The trouble is,
this protocol is not really a good way to think about wireless because what matters
for reception is interference at the receiver, not at the sender. To see the nature of
the problem, consider Fig. 4-11, where four wireless stations are illustrated. For
our purposes, it does not matter which are APs and which are laptops. The radio
range is such that A and B are within each other’s range and can potentially
interfere with one another. C can also potentially interfere with both B and D, but
not with A.

First consider what happens when A and C transmit to B, as depicted in Fig. 4-11(a).
If A sends and then C immediately senses the medium, it will not hear A
because A is out of range. Thus C will falsely conclude that it can transmit to B.
If C does start transmitting, it will interfere at B, wiping out the frame from A.
(We assume here that no CDMA-type scheme is used to provide multiple
channels, so collisions garble the signal and destroy both frames.) We want a
MAC protocol that will prevent this kind of collision from happening because it
wastes bandwidth. The problem of a station not being able to detect a potential
competitor for the medium because the competitor is too far away is called the
hidden terminal problem.

Now let us look at a different situation: B transmitting to A at the same time that
C wants to transmit to D, as shown in Fig. 4-11(b). If C senses the medium, it will
hear a transmission and falsely conclude that it may not send to D (shown as a
dashed line). In fact, such a transmission would cause bad reception only in the
zone between B and C, where neither of the intended receivers is located. We
want a MAC protocol that prevents this kind of deferral from happening because
it wastes bandwidth. The problem is called the exposed terminal problem.
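Both problems can be illustrated with a toy topology model (the chain A, B, C, D from the figure; the range table and function names are hypothetical): carrier sense looks at what the sender hears, but collisions happen at the receiver.

```python
# Toy radio topology from the text: A-B, B-C, and C-D are in range
# of each other; nothing else is.
IN_RANGE = {("A", "B"), ("B", "C"), ("C", "D")}

def hears(x, y):
    """True if stations x and y are within radio range (symmetric)."""
    return (x, y) in IN_RANGE or (y, x) in IN_RANGE

def carrier_sense_verdict(sender, receiver, other_src):
    """Compare what plain CSMA decides at the sender with what actually
    matters: interference at the receiver from the already-transmitting
    station other_src."""
    senses_busy = hears(sender, other_src)      # what the sender hears
    would_collide = hears(receiver, other_src)  # what the receiver suffers
    if would_collide and not senses_busy:
        return "hidden terminal: sender transmits and ruins reception"
    if senses_busy and not would_collide:
        return "exposed terminal: sender defers needlessly"
    return "carrier sense gives the right answer"

# Fig. 4-11(a): A is sending to B, and C also wants to send to B.
print(carrier_sense_verdict("C", "B", "A"))   # hidden terminal: ...
# Fig. 4-11(b): B is sending to A, and C wants to send to D.
print(carrier_sense_verdict("C", "D", "B"))   # exposed terminal: ...
```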

An early and influential protocol that tackles these problems for wireless LANs
is MACA (Multiple Access with Collision Avoidance) (Karn, 1990). The basic
idea behind it is for the sender to stimulate the receiver into outputting a short
frame, so stations nearby can detect this transmission and avoid transmitting for
the duration of the upcoming (large) data frame. This technique is used instead of
carrier sense.

MACA is illustrated in Fig. 4-12. Let us see how A sends a frame to B. A starts
by sending an RTS (Request To Send) frame to B, as shown in Fig. 4-12(a). This
short frame (30 bytes) contains the length of the data frame that will eventually
follow. Then B replies with a CTS (Clear To Send) frame, as shown in Fig. 4-
12(b). The CTS frame contains the data length (copied from the RTS frame).
Upon receipt of the CTS frame, A begins transmission.

Now let us see how stations overhearing either of these frames react. Any station
hearing the RTS is clearly close to A and must remain silent long enough for the
CTS to be transmitted back to A without conflict. Any station hearing the CTS is
clearly close to B and must remain silent during the upcoming data transmission,
whose length it can tell by examining the CTS frame.

In Fig. 4-12, C is within range of A but not within range of B. Therefore, it hears
the RTS from A but not the CTS from B. As long as it does not interfere with the
CTS, it is free to transmit while the data frame is being sent. In contrast, D is
within range of B but not A. It does not hear the RTS but does hear the CTS.
Hearing the CTS tips it off that it is close to a station that is about to receive a
frame, so it defers sending anything until that frame is expected to be finished.
Station E hears both control messages and, like D, must be silent until the data
frame is complete.

Despite these precautions, collisions can still occur. For example, B and C could
both send RTS frames to A at the same time. These will collide and be lost. In the
event of a collision, an unsuccessful transmitter (i.e., one that does not hear a CTS
within the expected time interval) waits a random amount of time and tries again
later.
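The deferral rules of MACA can be summarized in a few lines (a sketch using the Fig. 4-12 topology; the dictionary and function names are hypothetical): a bystander that hears the CTS must stay silent for the whole data frame, one that hears only the RTS need stay silent only long enough for the CTS to come back.

```python
# Who hears whom in Fig. 4-12: C is near A only, D is near B only,
# and E is near both A and B.
NEIGHBOURS = {"A": {"B", "C", "E"}, "B": {"A", "D", "E"},
              "C": {"A"}, "D": {"B"}, "E": {"A", "B"}}

def maca_deferrals(sender, receiver):
    """For an RTS/CTS exchange sender -> receiver, report how each
    bystander must behave under MACA."""
    verdicts = {}
    for station, heard in NEIGHBOURS.items():
        if station in (sender, receiver):
            continue
        if receiver in heard:      # heard the CTS
            verdicts[station] = "silent for the whole data frame"
        elif sender in heard:      # heard only the RTS
            verdicts[station] = "silent only until the CTS slot has passed"
        else:
            verdicts[station] = "free to transmit"
    return verdicts

for station, rule in sorted(maca_deferrals("A", "B").items()):
    print(station, "->", rule)
```

For A sending to B this reports exactly the behaviour described above: C defers only through the CTS slot, while D and E stay silent for the entire data frame.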

Stop-and-Wait Protocol vs Sliding Window Protocol:

1. Stop-and-Wait: the sender sends one frame and waits for an acknowledgment
   from the receiver.
   Sliding window: the sender sends more than one frame and retransmits the
   frame(s) that are damaged or suspected.

2. Stop-and-Wait: efficiency is worse.
   Sliding window: efficiency is better.

3. Stop-and-Wait: sender window size is 1.
   Sliding window: sender window size is N.

4. Stop-and-Wait: receiver window size is 1.
   Sliding window: receiver window size may be 1 or N.

5. Stop-and-Wait: sorting is not necessary.
   Sliding window: sorting may or may not be necessary.

6. Stop-and-Wait: efficiency is 1/(1+2*a).
   Sliding window: efficiency is N/(1+2*a).

7. Stop-and-Wait: half duplex.
   Sliding window: full duplex.

8. Stop-and-Wait: mostly used in low-speed, error-free networks.
   Sliding window: mostly used in high-speed, error-prone networks.

9. Stop-and-Wait: the sender cannot send any new frame until it receives an
   acknowledgment for the previous frame.
   Sliding window: the sender can continue to send new frames even if some
   earlier frames have not yet been acknowledged.

10. Stop-and-Wait: lower throughput, since the sender sits idle while waiting
    for each acknowledgment.
    Sliding window: higher throughput, since it allows continuous transmission
    of frames.
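The two efficiency formulas in row 6 can be compared numerically. This is a sketch: a is the ratio of propagation delay to frame transmission time, and capping the sliding-window figure at 100% once the window keeps the link busy is a standard convention, not something stated in the table.

```python
def stop_and_wait_eff(a):
    """Stop-and-Wait efficiency 1 / (1 + 2a), where a is the ratio of
    propagation delay to frame transmission time."""
    return 1 / (1 + 2 * a)

def sliding_window_eff(N, a):
    """Sliding-window efficiency N / (1 + 2a), conventionally capped
    at 1 once the window is big enough to keep the link busy."""
    return min(1.0, N / (1 + 2 * a))

a = 4.5   # illustrative: propagation takes 4.5x the transmission time
print(stop_and_wait_eff(a))          # 0.1
print(sliding_window_eff(7, a))      # 0.7
print(sliding_window_eff(16, a))     # 1.0
```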

Go-Back-N Protocol vs Selective Repeat Protocol:

1. Go-Back-N: if a sent frame is found damaged or lost, all frames from that
   frame up to the last frame transmitted are retransmitted.
   Selective Repeat: only the frames found damaged or lost are retransmitted.

2. Go-Back-N: sender window size is N.
   Selective Repeat: sender window size is also N.

3. Go-Back-N: receiver window size is 1.
   Selective Repeat: receiver window size is N.

4. Go-Back-N: less complex.
   Selective Repeat: more complex.

5. Go-Back-N: neither the sender nor the receiver needs sorting.
   Selective Repeat: the receiver needs sorting to reorder the frames.

6. Go-Back-N: acknowledgements are cumulative.
   Selective Repeat: acknowledgements are individual.

7. Go-Back-N: out-of-order packets are not accepted (they are discarded) and
   the entire window is retransmitted.
   Selective Repeat: out-of-order packets are accepted.

8. Go-Back-N: if the receiver gets a corrupt packet, the entire window is
   retransmitted.
   Selective Repeat: if the receiver gets a corrupt packet, it immediately
   sends a negative acknowledgement, so only that packet is retransmitted.

9. Go-Back-N: efficiency is N/(1+2*a).
   Selective Repeat: efficiency is also N/(1+2*a).
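The difference in retransmission cost from row 1 can be made concrete with a toy count (a hypothetical helper: it ignores window sizes and timers, and assumes each damaged frame is damaged only once and its retransmission succeeds).

```python
def retransmissions(n_frames, lost, scheme):
    """Count data-frame transmissions needed to deliver frames
    0..n_frames-1 when the frames in `lost` are damaged once each.

    Go-Back-N resends everything from a damaged frame onward;
    Selective Repeat resends only the damaged frame itself.
    """
    total = n_frames                       # every frame is sent once
    for f in sorted(lost):
        if scheme == "go-back-n":
            total += n_frames - f          # frame f and all successors again
        else:                              # selective repeat
            total += 1
    return total

# 10 frames, frame 3 damaged in transit:
print(retransmissions(10, {3}, "go-back-n"))         # 17 (frames 3..9 resent)
print(retransmissions(10, {3}, "selective-repeat"))  # 11
```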

CSMA/CD vs CSMA/CA:

1. CSMA/CD is effective after a collision, whereas CSMA/CA is effective before
   a collision.

2. CSMA/CD is used in wired networks, whereas CSMA/CA is commonly used in
   wireless networks.

3. CSMA/CD only reduces the recovery time, whereas CSMA/CA minimizes the
   possibility of a collision.

4. CSMA/CD resends the data frame whenever a conflict occurs, whereas CSMA/CA
   first transmits its intent to send before the data transmission.

5. CSMA/CD is used in the 802.3 standard, while CSMA/CA is used in the 802.11
   standard.

6. CSMA/CD is more efficient than simple CSMA (Carrier Sense Multiple Access),
   while CSMA/CA is similar to simple CSMA.

7. CSMA/CD is the type of CSMA that detects collisions on a shared channel,
   while CSMA/CA is the type that avoids collisions on a shared channel.

8. Both work in the MAC layer.
