
Computer Network Unit-2 NMCA-E25

Data Link Layer:-


The data link layer, or layer 2, is the second layer of the seven-layer OSI model of computer networking. It provides the functional and procedural means to transfer data between network entities and may provide the means to detect and possibly correct errors that occur in the physical layer. The data link layer is responsible for reliable node-to-node delivery of data: it forms frames from the packets received from the network layer and hands them to the physical layer, and it synchronizes the information to be transmitted. Error control is performed at this layer; error-detection bits are added by the data link layer, and errors may also be corrected. Outgoing messages are assembled into frames, and the system then waits for acknowledgements to be received after transmission, which makes message delivery reliable.

The data link layer is responsible for converting the data stream into signals, bit by bit, and sending them over the underlying hardware. At the receiving end, the data link layer picks up data from the hardware in the form of electrical signals, assembles them into a recognizable frame format, and hands them over to the upper layer. The responsibilities of the data link layer include framing, addressing, flow control, error control, and media access control. The data link layer divides the stream of bits received from the network layer into manageable data units called frames, and adds a header to each frame to define the addresses of the sender and receiver. If the rate at which data are absorbed by the receiver is less than the rate at which they are produced by the sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver. The data link layer also adds reliability to the physical layer by adding mechanisms to detect and retransmit damaged, duplicate, or lost frames. Finally, when two or more devices are connected to the same link, data link layer protocols determine which device has control over the link at any given time.

Functions of Data Link Layer:-


The data link layer performs many tasks on behalf of the upper layers. These include:
Framing:-
The data link layer takes packets from the network layer and encapsulates them into frames. It then sends
each frame bit by bit over the hardware. At the receiving end, the data link layer picks up signals from the hardware
and assembles them into frames.

Addressing:-
The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be
unique on the link; it is encoded into the hardware at the time of manufacturing.

Error Control:-
Signals may encounter problems in transit, causing bits to be flipped. These
errors are detected, and an attempt is made to recover the actual data bits. The layer also provides an error-reporting
mechanism to the sender.

Flow Control:-
Stations on the same link may have different speeds or capacities. The data link layer provides flow control,
which enables both machines to exchange data at a matched speed.

TRIPTI PAL (CS Faculty, RATM) Page 1



Multi-access Control:-
When a host on a shared link tries to transfer data, there is a high probability of collision.
The data link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing a shared
medium. Protocols of this layer determine which of the devices has control over the link at any given time,
when two or more devices are connected to the same link.


Data Link Layer Design Issue:-


The data link layer is expected to carry out many specified functions. For effective data
communication between two directly connected transmitting and receiving stations, the data link layer has to carry out a
number of specific functions such as:
1. Services Provided to the Network Layer: A well-defined service interface to the network layer. The principal service
is transferring data from the network layer on the source machine to the network layer on the destination machine.
2. Frame Synchronization: The source machine sends data in blocks, called frames, to the destination machine. The
starting and ending of each frame must be recognizable by the destination machine.
3. Flow Control: The source machine must not send data frames at a rate faster than the destination machine
can accept them.
4. Error Control: Errors made in bits during transmission from the source machine to the destination machine must be
detected and corrected.
5. Addressing: On a multipoint line, such as a LAN where many machines are connected together, the identity of the
individual machine must be specified while transmitting the data frames.

OR


Data Link Layer Design Issues:-


The data link layer has a number of specific functions it can carry out. These functions include:
1. Providing a well-defined service interface to the network layer.
2. Dealing with transmission errors.
3. Regulating the flow of data so that slow receivers are not swamped by fast senders.

To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into
frames for transmission. Each frame contains a frame header, a payload field for holding the packet, and a frame trailer,
as illustrated in fig. Frame management forms the heart of what the data link layer does.

In fact, in many networks, these functions are found only in the upper layers and not in the data link layer. However, no
matter where they are found, the principles are pretty much the same, so it does not really matter where we study them.
In the data link layer they often show up in their simplest and purest forms, making this a good place to examine them.

Services provided to the network layer:-


Network layer is the layer 3 of OSI model and lies above the data link layer.
The data link layer provides several services to the network layer. The function of the data link layer is to provide services to
the network layer. The principal service is transferring data from the network layer on the source machine to the
network layer on the destination machine. On the source machine there is an
entity, call it a process, in the network layer that hands some bits to the data link layer for transmission to the
destination. The job of the data link layer is to transmit the bits to the destination machine so they can be handed over


to the network layer there, as shown in fig a. The actual transmission follows the path of fig b, but it is easier to think in
terms of two data link layer processes communicating using a data link protocol.

The three major types of services offered by data link layer are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection oriented service.

1. Unacknowledged connectionless service:-


(a) In this type of service, the source machine sends frames to the destination machine but the destination machine does
not send any acknowledgement of these frames back to the source. Hence it is called an unacknowledged service.
(b) There is no connection establishment between source and destination machine before data transfer or release
after data transfer. Therefore it is known as connectionless service.
(c) There is no error control i.e. if any frame is lost due to noise on the line, no attempt is made to recover it.
(d) This type of service is used when error rate is low.
(e) It is suitable for real time traffic such as speech.

2. Acknowledged connectionless service:-


(a) In this service, neither the connection is established before the data transfer nor is it released after the data
transfer between source and destination.
(b) When the sender sends data frames to the destination, the destination machine sends back acknowledgements of
these frames.


(c) This type of service provides additional reliability because the source machine retransmits a frame if it does not
receive the acknowledgement of that frame within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.

3. Acknowledged connection oriented service:


(a) This service is the most sophisticated service provided by data link layer to network layer.
(b) It is connection-oriented. This means that a connection is established between source and destination before any data
is transferred.
(c) In this service, data transfer has three distinct phases:-
(i) Connection establishment
(ii) Actual data transfer
(iii) Connection release
(d) Here, each frame being transmitted from source to destination is given a specific number and is acknowledged by
the destination machine.
(e) All the frames are received by the destination in the same order in which they are sent by the source.

Framing:-
A point-to-point connection between two computers or devices consists of a wire in which data is
transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a
function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the
receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures.
Frames have headers that contain information such as error-checking codes. The Data Link layer prepares a packet for
transport across the local media by encapsulating it with a header and a trailer to create a frame.
The Data Link layer frame includes:
Data:- The packet from the Network layer.
Header:- Contains control information, such as addressing, and is located at the beginning of the PDU.
Trailer:- Contains control information added to the end of the PDU.


To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What
the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not
guaranteed to be error free. The number of bits received may be less than, equal to, or more than the number of bits
transmitted, and they may have different values. It is up to the data link layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break the bit stream up into discrete frames and compute the checksum for
each frame. When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is
different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal
with it (e.g., discarding the bad frame and possibly also sending an error report).
Breaking the bit stream up into frames is more difficult than it at first appears. One way to
achieve this framing is to insert time gaps between frames, much like the spaces between words in ordinary text.
However, networks rarely make any guarantees about timing, so it is possible these gaps might be squeezed out or other
gaps might be inserted during transmission. Since it is too risky to count on timing to mark the start and end of each
frame, other methods have been devised. We will look at four methods:
1. Character count.
2. Flag bytes with byte stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.

1. Character count:-
The first framing method uses a field in the header to specify the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how many characters follow and hence
where the end of the frame is. This technique is shown in fig for four frames of sizes 5, 5, 8 and 8 characters, respectively.

Fig. A character stream. (a) Without errors. (b) With one error


The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the character
count of 5 in the second frame of fig (b) becomes a 7, the destination will get out of synchronization and will be unable to
locate the start of the next frame. Even if the checksum is incorrect, so the destination knows that the frame is bad, it still
has no way of telling where the next frame starts. Sending a frame back to the source asking for a retransmission does not
help either, since the destination does not know how many characters to skip over to get to the start of the
retransmission. For this reason, the character count method is rarely used anymore.
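The count-based parsing above, and the way a single garbled count desynchronizes every following frame, can be sketched in Python. This is a toy illustration: the one-byte count field that includes itself follows the convention of the figure's example (frame sizes 5, 5, 8, 8), and `parse_frames` is an invented name, not a standard function.

```python
def parse_frames(stream: bytes):
    """Split a byte stream into frames using a 1-byte count header.

    The count includes the header byte itself, as in the text's example.
    """
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]                        # header: total frame length
        frames.append(stream[i + 1:i + count])   # payload bytes only
        i += count                               # jump to the next header
    return frames

# Four frames of sizes 5, 5, 8, 8 (header byte + 4 or 7 payload bytes).
good = bytes([5, 1, 2, 3, 4,
              5, 6, 7, 8, 9,
              8, 0, 1, 2, 3, 4, 5, 6,
              8, 7, 8, 9, 0, 1, 2, 3])
print(parse_frames(good))        # four correctly delimited payloads

# A single garbled count (5 becomes 7) desynchronizes all later frames.
bad = bytearray(good)
bad[5] = 7
print(parse_frames(bytes(bad)))  # frames after the error are wrong
```

Note that the parser has no way to resynchronize after the bad count: it keeps jumping to wherever the corrupted count points, which is exactly the weakness described above.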

2. Flag bytes with byte stuffing:-


The second framing method gets around the problem of resynchronization after an
error by having each frame start and end with special bytes. In the past, the starting and ending bytes were different, but
in recent years most protocols have used the same byte, called a flag byte, as both the starting and ending delimiter, as
shown in fig as FLAG. In this way, if the receiver ever loses synchronization, it can just search for the flag byte to find the
end of the current frame. Two consecutive flag bytes indicate the end of one frame and start of the next one.

Fig (a) A Frame delimited by flag bytes (b) Four examples of byte sequences before and after byte stuffing.

A serious problem occurs with this method when binary data, such as object programs or floating-point numbers, are
being transmitted. It may easily happen that the flag byte's bit pattern occurs in the data. This situation will usually
interfere with the framing. One way to solve this problem is to have the sender's data link layer insert a special escape
byte (ESC) just before each accidental flag byte in the data. The data link layer on the receiving end removes the escape
byte before the data are given to the network layer. This technique is called byte stuffing or character stuffing. Thus, a
framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.


Of course, the next question is: What happens if an escape byte occurs in the middle of the data? The answer is that it,
too, is stuffed with an escape byte. Thus, any single escape byte is part of an escape sequence, whereas a doubled one
indicates that a single escape occurred naturally in the data. Some examples are shown in fig. In all cases, the byte
sequence delivered after destuffing is exactly the same as the original byte sequence.
The byte-stuffing scheme depicted in fig is a slight simplification of the one used in the
PPP protocol that most home computers use to communicate with their Internet service provider.

A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit characters. Not all character
codes use 8-bit characters; for example, UNICODE uses 16-bit characters. As networks developed, the disadvantages of
embedding the character code length in the framing mechanism became more and more obvious, so a new technique had
to be developed to allow arbitrary-sized characters.
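The stuffing and destuffing rules above can be sketched in Python. The FLAG and ESC values here (0x7E and 0x7D) are borrowed from PPP for concreteness, but this is a simplified sketch like the one in the figure, not the full PPP procedure (PPP additionally XORs the escaped byte, which is omitted here):

```python
FLAG, ESC = 0x7E, 0x7D

def stuff(payload: bytes) -> bytes:
    out = bytearray([FLAG])               # opening flag delimiter
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)               # escape an accidental FLAG or ESC
        out.append(b)
    out.append(FLAG)                      # closing flag delimiter
    return bytes(out)

def destuff(frame: bytes) -> bytes:
    out = bytearray()
    escaped = False
    for b in frame[1:-1]:                 # strip the delimiting flags
        if escaped:
            out.append(b)                 # byte after ESC is taken literally
            escaped = False
        elif b == ESC:
            escaped = True                # drop the escape byte itself
        else:
            out.append(b)
    return bytes(out)

# Payload containing both an accidental FLAG and an accidental ESC:
data = bytes([0x41, FLAG, 0x42, ESC, 0x43])
assert destuff(stuff(data)) == data       # round-trips transparently
```

As the text says, the scheme is transparent to the network layer: whatever bytes appear in the payload, the receiver recovers exactly the original sequence.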

3. Starting and ending flags, with bit stuffing:-


The new technique allows data frames to contain an arbitrary number of bits
and allows character codes with an arbitrary number of bits per character. It works like this. Each frame begins and ends
with a special bit pattern, 01111110 (in fact, a flag byte). Whenever the sender's data link layer encounters five
consecutive 1s in the data, it automatically stuffs a 0 bit into the outgoing bit stream. This bit stuffing is analogous to byte
stuffing, in which an escape byte is stuffed into the outgoing character stream before a flag byte in the data.

When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically
destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so
is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the
receiver's memory as 01111110.

Fig. Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as they are stored in the
receiver's memory after destuffing.
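The bit-stuffing rule (stuff a 0 after five consecutive 1s; delete the 0 that follows five consecutive 1s) can be sketched in Python over strings of '0'/'1' characters. The string representation and function names are illustrative choices, not part of any protocol:

```python
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            out.append('0')               # insert the stuffed 0 bit
            run = 0
    return ''.join(out)

def bit_destuff(bits: str) -> str:
    out, run, skip = [], 0, False
    for b in bits:
        if skip:
            skip = False                  # delete the stuffed 0 bit
            run = 0
            continue
        out.append(b)
        run = run + 1 if b == '1' else 0
        if run == 5:
            skip = True                   # next bit must be the stuffed 0
            run = 0
    return ''.join(out)

# The flag pattern appearing in user data, as in the text's example:
assert bit_stuff('01111110') == '011111010'
assert bit_destuff('011111010') == '01111110'
```

The round trip matches the figure: the flag pattern 01111110 goes on the line as 011111010 and is restored to 01111110 in the receiver's memory.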


With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. Thus, if the
receiver loses track of where it is, all it has to do is scan the input for flag sequences, since they can only occur at frame
boundaries and never within the data.

4. Physical layer coding violations:-

The last method of framing is only applicable to networks in which the encoding on
the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits.
Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The scheme means that every data bit has a transition in
the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not
used for data but are used for delimiting frames in some protocols.

As a final note on framing, many data link protocols use a combination of a character count with one of the other
methods for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the
appropriate delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise,
the input stream is scanned for the next delimiter.

Flow Control:-
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is
required that the sender and receiver work at the same speed; that is, the sender sends at a rate at which the
receiver can process and accept the data. What if the speed (hardware/software) of the sender or receiver differs? If the
sender sends too fast, the receiver may be overloaded (swamped) and data may be lost.
Flow control coordinates the amount of data that can be sent before receiving an acknowledgment and
is one of the most important duties of the data link layer. In most protocols, flow control is a set of procedures that tells
the sender how much data it can transmit before it must wait for an acknowledgment from the receiver. The flow of data
must not be allowed to overwhelm the receiver. Any receiving device has a limited speed at which it can process incoming
data and a limited amount of memory in which to store incoming data. The receiving device must be able to inform the
sending device before those limits are reached and to request that the transmitting device send fewer frames or stop
temporarily. Incoming data must be checked and processed before they can be used. The rate of such processing is often
slower than the rate of transmission. For this reason, each receiving device has a block of memory, called a buffer,
reserved for storing incoming data until they are processed. If the buffer begins to fill up, the receiver must be able to tell
the sender to halt transmission until it is once again able to receive.

Two types of mechanisms can be deployed to control the flow:


Stop & Wait:-
This flow control mechanism forces the sender, after transmitting a data frame, to stop and wait until the
acknowledgement of the data frame sent is received. In this method of flow control, the sender sends a single frame to
the receiver and waits for an acknowledgment. The next frame is sent by the sender only when the acknowledgment of
the previous frame is received. This process of sending a frame and waiting for an acknowledgment continues as long as
the sender has data to send. To end the transmission, the sender transmits an end-of-transmission (EOT) frame. The main
advantage of stop-and-wait protocols is their accuracy: the next frame is transmitted only when the first frame is
acknowledged, so there is no chance of a frame being lost. The main disadvantage of this method is that it is inefficient
and makes the transmission process slow. In this method a single frame travels from source to destination and a single
acknowledgment travels from destination to source. As a result, each frame sent and received uses the entire time
needed to traverse the link. Moreover, if the two devices are a distance apart, a lot of time is wasted waiting for ACKs,
which increases the total transmission time.

Sliding Window Protocol:-


In this flow control mechanism, both sender and receiver agree on the number of data
frames after which an acknowledgement should be sent. As we learnt, the stop-and-wait flow control mechanism wastes
resources; this protocol tries to make use of the underlying resources as much as possible. In the sliding window method,
multiple frames are sent by the sender at a time before an acknowledgment is needed. Multiple frames sent by the
source are acknowledged by the receiver using a single ACK frame.

Sliding window refers to imaginary boxes that hold the frames on both the sender and receiver side. It provides an upper
limit on the number of frames that can be transmitted before an acknowledgment is required. Frames may be
acknowledged by the receiver at any point, even when the window is not full on the receiver side, and frames may be
transmitted by the source even when the window is not yet full on the sender side. The windows have a specific size in
which the frames are numbered modulo n, which means they are numbered from 0 to n-1. For example, if n = 8, the
frames are numbered 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ....
The size of the window is n-1; in this case it is 7. Therefore, a maximum of n-1 frames may be sent before an
acknowledgment. When the receiver sends an ACK, it includes the number of the next frame it expects to receive. For
example, to acknowledge the group of frames ending in frame 4, the receiver sends an ACK containing the
number 5. When the sender sees an ACK with number 5, it knows that all the frames up to number 4 have been
received.


Sliding Window on Sender Side:-


At the beginning of a transmission, the sender's window contains n-1 frames.
As the frames are sent by the source, the left boundary of the window moves inward, shrinking the size of the window.
This means that if the window size is w and four frames have been sent by the source since the last acknowledgment,
then the number of frames left in the window is w-4.
When the receiver sends an ACK, the source's window expands (the right boundary moves outward) to allow in a
number of new frames equal to the number of frames acknowledged by that ACK.
For example, let the window size be 7 (see diagram (a)). If frames 0 through 3 have been sent and no
acknowledgment has been received, then the sender's window contains three frames: 4, 5, 6.
Now, if an ACK numbered 3 is received by the source, it means three frames (0, 1, 2) have been received by the receiver
and are undamaged.
The sender's window will now expand to include the next three frames in its buffer. At this point the sender's
window will contain six frames (4, 5, 6, 7, 0, 1). (See diagram (b)).


Sliding Window on Receiver Side:-


At the beginning of transmission, the receiver's window contains n-1 spaces for frames, but not the frames themselves.
As new frames come in, the size of the window shrinks.
The receiver window therefore represents not the number of frames received, but the number of frames that
may still be received before an ACK must be sent.
Given a window of size w, if three frames are received without an ACK being returned, the number of spaces in the
window is w-3.
As soon as an acknowledgment is sent, the window expands to include a number of spaces equal to the number of
frames acknowledged.
For example, let the size of the receiver's window be 7, as shown in the diagram. This means the window contains
spaces for 7 frames.
With the arrival of the first frame, the receiving window shrinks, moving the boundary from space 0 to 1. The
window has now shrunk by one, so the receiver may accept six more frames before it is required to send an ACK.
If frames 0 through 3 have arrived but have not been acknowledged, the window will contain three frame
spaces.
As the receiver sends an ACK, the window of the receiver expands to include as many new placeholders as newly
acknowledged frames.
The window expands by a number of frame spaces equal to the number of the most recently
acknowledged frame minus the number of the previously acknowledged frame. For example, if the window size is 7 and
the prior ACK was for frame 2 while the current ACK is for frame 5, the window expands by three (5-2).


Therefore, the sliding window of the sender shrinks from the left as data frames are sent, and expands to the right as
acknowledgments are received. The sliding window of the receiver shrinks from the left as data frames are received,
and expands to the right as acknowledgements are sent.
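The sender-side bookkeeping described above can be sketched in Python: frames numbered modulo n, a window of size n-1, and an ACK that carries the number of the next frame the receiver expects. The class name and interface are invented for illustration:

```python
class SenderWindow:
    """Toy model of the sender's sliding window (sequence numbers mod n)."""

    def __init__(self, n: int = 8):
        self.n = n                        # sequence numbers run 0 .. n-1
        self.size = n - 1                 # window size is n-1
        self.base = 0                     # oldest unacknowledged frame
        self.next_seq = 0                 # next frame number to send

    def outstanding(self) -> int:
        """How many sent frames are still unacknowledged."""
        return (self.next_seq - self.base) % self.n

    def can_send(self) -> bool:
        return self.outstanding() < self.size

    def send(self) -> int:
        assert self.can_send(), "window full; must wait for an ACK"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % self.n
        return seq

    def ack(self, num: int):
        """ACK carries the number of the NEXT frame the receiver expects."""
        self.base = num                   # everything before num is confirmed

w = SenderWindow(8)
sent = [w.send() for _ in range(4)]       # frames 0, 1, 2, 3 go out
print(sent, w.outstanding())              # [0, 1, 2, 3] 4
w.ack(3)                                  # receiver expects frame 3 next
print(w.outstanding())                    # 1 (only frame 3 unacknowledged)
```

This mirrors the text's example: after sending frames 0-3 with n = 8, four slots are used; an ACK numbered 3 confirms frames 0-2 and the window expands again.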

Error Control:-
When a data frame is transmitted, there is a probability that it may be lost in transit or
received corrupted. In both cases, the receiver does not receive the correct data frame and the sender knows nothing
about the loss. In such cases, both sender and receiver are equipped with protocols that help them detect transit errors
such as the loss of a data frame. Then, either the sender retransmits the data frame or the receiver requests that the
previous data frame be resent.
Requirements for error control mechanism:
Error Detection:-
The sender and receiver, either both or one of them, must be able to ascertain that there is an error in transit.
Positive ACK:-
When the receiver receives a correct frame, it should acknowledge it.
Negative ACK:-
When the receiver receives a damaged frame or a duplicate frame, it sends a NACK back to the
sender and the sender must retransmit the correct frame.
Retransmission:-
The sender maintains a clock and sets a timeout period. If the acknowledgement of a previously
transmitted data frame does not arrive before the timeout, the sender retransmits the frame, assuming that the
frame or its acknowledgement was lost in transit.

There are three techniques that the data link layer may deploy to control errors using Automatic
Repeat Request (ARQ):

1. Stop-and-Wait ARQ
2. Go-Back-N ARQ
3. Selective Repeat ARQ


1. Stop-and-wait ARQ:-

The following transitions may occur in Stop-and-Wait ARQ:

The sender maintains a timeout counter.


When a frame is sent, the sender starts the timeout counter.
If acknowledgement of frame comes in time, the sender transmits the next frame in queue.
If acknowledgement does not come in time, the sender assumes that either the frame or its acknowledgement is
lost in transit. Sender retransmits the frame and starts the timeout counter.
If a negative acknowledgement is received, the sender retransmits the frame.
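The transitions above can be traced with a toy simulation in Python. The channel model (a set of frame/attempt pairs that get "lost") is invented for illustration; a real implementation would use sockets and wall-clock timeouts rather than this abstraction:

```python
def stop_and_wait(frames, lost_on_attempt):
    """Deliver frames one at a time; (frame_index, attempt) pairs listed in
    lost_on_attempt are dropped, triggering a timeout and retransmission."""
    delivered, log = [], []
    for i, frame in enumerate(frames):
        attempt = 0
        while True:
            attempt += 1
            if (i, attempt) in lost_on_attempt:
                # No ACK arrives in time: the timeout counter fires.
                log.append(f"frame {i} attempt {attempt}: timeout, retransmit")
                continue                  # send the same frame again
            delivered.append(frame)       # frame arrives; ACK comes back
            log.append(f"frame {i} attempt {attempt}: ACK")
            break                         # move on to the next frame in queue
    return delivered, log

frames = ["A", "B", "C"]
# Frame B is lost on its first transmission only:
delivered, log = stop_and_wait(frames, lost_on_attempt={(1, 1)})
print(delivered)                          # ['A', 'B', 'C']
```

Every frame is eventually delivered exactly once here; the log shows that frame 1 needed a retransmission after its timeout.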


2. Go-Back-N ARQ:-
The Stop-and-Wait ARQ mechanism does not utilize the resources at their best: while waiting for the
acknowledgement, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both sender and receiver
maintain a window.

The sending-window size enables the sender to send multiple frames without receiving the acknowledgement of the
previous ones. The receiving window enables the receiver to receive multiple frames and acknowledge them. The
receiver keeps track of incoming frames' sequence numbers.

When the sender has sent all the frames in its window, it checks up to what sequence number it has received positive
acknowledgements. If all frames are positively acknowledged, the sender sends the next set of frames. If the sender finds
that it has received a NACK, or has not received any ACK for a particular frame, it retransmits all the frames starting from
the first one for which it did not receive a positive ACK.
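The go-back behaviour can be traced with a small Python sketch. This is a simplified model, not a full protocol: frames listed in `lost` are dropped on their first transmission only, every retransmission succeeds, and ACK/timeout timing is abstracted away.

```python
def go_back_n_trace(num_frames, lost, window=4):
    """Return the order in which frame numbers are (re)transmitted."""
    sent_order = []
    base = 0                              # first unacknowledged frame
    lost = set(lost)                      # dropped on first transmission only
    while base < num_frames:
        end = min(base + window, num_frames)
        batch = list(range(base, end))
        sent_order.extend(batch)          # sender fills its window
        dropped = [s for s in batch if s in lost]
        if dropped:
            lost.discard(dropped[0])      # the retransmission will get through
            base = dropped[0]             # receiver junked dropped[0]..end-1,
        else:                             # so go back and resend from there
            base = end                    # whole window positively ACKed
    return sent_order

# Six frames, window 4, frame 2 lost once: the sender transmits 0-3,
# then goes back to 2 and resends 2, 3 along with the new frames 4, 5.
print(go_back_n_trace(6, lost={2}))       # [0, 1, 2, 3, 2, 3, 4, 5]
```

The trace makes the cost visible: frame 3 arrived intact the first time but is retransmitted anyway, because the Go-Back-N receiver discards everything after the missing frame.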


3. Selective Repeat ARQ:-


In Go-Back-N ARQ, it is assumed that the receiver has no buffer space beyond its
window and has to process each frame as it comes. This forces the sender to retransmit all the frames which are
not acknowledged.

In Selective Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the frames in memory and
sends a NACK only for the frame which is missing or damaged.
The sender, in this case, resends only the packet for which the NACK is received.


Error Detection and Correction:-


A network must be able to transfer data from one device to another with acceptable
accuracy. For most applications, a system must guarantee that the data received are identical to the data transmitted.
Any time data are transmitted from one node to the next, they can become corrupted in passage. Many factors can alter
one or more bits of a message, so some applications require a mechanism for detecting and correcting errors.
Some applications can tolerate a small level of error. For example, random errors in audio or video transmission may be
tolerable, but when we transfer text, we expect a very high level of accuracy.
The data link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain
level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.
Types of Error:-
There may be three types of errors:
1. Single Bit Error:-

In a frame, only a single bit, anywhere in the frame, is corrupted.

2. Multiple Bits Error:-

The frame is received with more than one bit in a corrupted state.

3. Burst Error:-

The frame contains two or more corrupted bits within a contiguous span of bits.


Note that a burst error does not necessarily mean that the errors occur in consecutive bits. The length of the burst is
measured from the first corrupted bit to the last corrupted bit. Some bits in between may not have been corrupted.

A burst error is more likely to occur than a single-bit error. The duration of noise is normally longer than the duration of 1
bit, which means that when noise affects data, it affects a set of bits. The number of bits affected depends on the data
rate and the duration of the noise.


Redundancy:-
The central concept in detecting or correcting errors is redundancy. To be able to detect or correct
errors, we need to send some extra bits with our data. These redundant bits are added by the sender and removed by the
receiver. Their presence allows the receiver to detect or correct corrupted bits.

Error Detection methods:-


The basic approach used for error detection is the use of redundancy bits, where additional
bits are added to facilitate detection of errors.

Some popular techniques for error detection are:

1. VRC (Vertical Redundancy Check)


2. LRC (Longitudinal Redundancy Check)
3. Checksum
4. CRC (Cyclical Redundancy Check)

1. VRC (Vertical Redundancy Check):-


Short for vertical redundancy check, it is a method of error checking that attaches
a parity bit to each byte of data to be transmitted, which is then tested to determine whether the transmission is correct. This
method is considered somewhat unreliable: if an even number of bits is distorted, the check will not detect the
error. It is also known as a parity check and is the least expensive mechanism for error detection. In this technique, a
redundant bit called the parity bit is appended to every data unit so that the total number of 1s in the unit (including
the parity bit) becomes even.
Blocks of data from the source are passed through a parity bit generator, where a parity bit of:
1 is added to the block if it contains an odd number of 1s, and
0 is added if it contains an even number of 1s.
This scheme makes the total number of 1s even, which is why it is called even parity checking.
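As a sketch, the even-parity mechanism above can be written in a few lines of Python (the function names are illustrative, not from any standard library):

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s becomes even."""
    parity = sum(data_bits) % 2        # 1 iff the data holds an odd number of 1s
    return data_bits + [parity]

def check_even_parity(unit):
    """Accept only if the number of 1s, parity bit included, is even."""
    return sum(unit) % 2 == 0

sent = add_even_parity([1, 1, 0, 0, 0, 0, 1])   # three 1s -> parity bit 1
assert sent == [1, 1, 0, 0, 0, 0, 1, 1]
assert check_even_parity(sent)

corrupted = sent.copy()
corrupted[2] ^= 1                  # one distorted bit: detected
assert not check_even_parity(corrupted)

corrupted[3] ^= 1                  # a second distorted bit: goes undetected
assert check_even_parity(corrupted)
```

The last assertion illustrates why VRC is unreliable: any even number of distorted bits preserves the parity and slips through.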


2. LRC (Longitudinal Redundancy Check):-


In this method, a block of bits is organized into a table (rows and columns). A parity bit is
calculated for each column, and this set of parity bits is sent along with the original data. From the block of
parity bits we can check for errors. An LRC of n bits can easily detect a burst error of n bits. However, if two bits in one data unit are
damaged and two bits in exactly the same positions in another data unit are also damaged, the LRC checker will not detect
the error. Parity check bits are calculated for each row, which is equivalent to a simple parity check; parity bits
are also calculated for all columns, and both are sent along with the data. At the receiving end these are compared with
the parity bits calculated on the received data.
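The column-parity computation can be sketched as follows (the helper name is illustrative), including the two-row failure case described above:

```python
def lrc_column_parity(rows):
    """Even-parity bit for each column of a block of data units."""
    return [sum(col) % 2 for col in zip(*rows)]

block = [
    [1, 1, 0, 0, 1, 0, 1, 1],
    [1, 0, 1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 1, 0],
]
parity_row = lrc_column_parity(block)      # sent along with the data

# Receiver recomputes the column parities on the received block and compares.
assert lrc_column_parity(block) == parity_row

# Damaging the same bit position in two different rows defeats the check:
block[0][2] ^= 1
block[1][2] ^= 1
assert lrc_column_parity(block) == parity_row   # the error goes undetected
```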

3. Checksum:-

In the checksum error detection scheme, the data is divided into k segments, each of m bits.
At the sender's end, the segments are added using 1's complement arithmetic to get the sum. The sum is
complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the receiver's end, all received segments (including the checksum) are added using 1's complement arithmetic to get the sum. The sum is
complemented.
If the result is zero, the received data is accepted; otherwise it is discarded.
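The sender/receiver arithmetic can be sketched with 8-bit segments (the helper names are our own; the key detail is the end-around carry of 1's complement addition):

```python
def ones_complement_sum(segments, m=8):
    """Add m-bit segments in 1's complement (wrap any carry back around)."""
    total = 0
    for seg in segments:
        total += seg
        while total >> m:                           # fold the end-around carry
            total = (total & ((1 << m) - 1)) + (total >> m)
    return total

def make_checksum(segments, m=8):
    """Complement of the 1's complement sum."""
    return ((1 << m) - 1) ^ ones_complement_sum(segments, m)

data = [0b10011001, 0b11100010, 0b00100100, 0b10000100]
checksum = make_checksum(data)

# Receiver adds every received segment, checksum included, then complements:
result = ((1 << 8) - 1) ^ ones_complement_sum(data + [checksum])
assert result == 0      # zero -> accept; anything else -> discard
```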


4. CRC (Cyclical Redundancy Check):-

Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so
that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the
data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and must therefore be rejected.
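The modulo-2 (XOR, no carries) division can be sketched as follows; the divisor shown is just an example generator, not a mandated polynomial:

```python
def crc_remainder(bits, divisor):
    """Modulo-2 (XOR, no carries) division; returns the remainder."""
    bits = bits[:]                       # work on a copy
    n = len(divisor)
    for i in range(len(bits) - n + 1):
        if bits[i]:                      # divide only where the leading bit is 1
            for j in range(n):
                bits[i + j] ^= divisor[j]
    return bits[-(n - 1):]               # remainder: one bit shorter than divisor

data = [1, 0, 0, 1, 0, 0]
divisor = [1, 0, 1, 1]                   # an example generator (x^3 + x + 1)

# Sender: append len(divisor) - 1 zeros, divide, append the remainder (CRC).
remainder = crc_remainder(data + [0, 0, 0], divisor)
codeword = data + remainder

# Destination: divide the whole unit; a zero remainder means accept.
assert crc_remainder(codeword, divisor) == [0, 0, 0]
```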



Data Link Protocols:-


Often called layer 2 protocols, data link protocols exist in the protocol layer just above the physical layer
relative to the OSI protocol model. Data link protocols provide communication between two devices. Because there are many
different ways to connect devices, there are many different data link protocols. The defining factors are:
Dedicated point-to-point links between two devices, such as modems, bridges, or routers

Shared media links in which multiple devices share the same cable (i.e., Ethernet LAN)
The PPP (Point-to-Point Protocol) that people use to connect to the Internet via a dial-up modem is an example of a data
link protocol. Because the link between two systems is point to point, the bits are always delivered from sender to
receiver in order. Also, unlike shared-media LANs in which multiple stations attempt to use the network, there is no
contention.
Common Data Link Protocols:-
The most common data link level protocols are listed here with a short description.
Note that most of these data link protocols are used for WAN and modem connections.
SDLC (Synchronous Data Link Protocol):-
This protocol was originally developed by IBM as part of IBM's SNA
(Systems Network Architecture). It was used to connect remote devices to mainframe computers at central locations
in either point-to-point (one-to-one) or point-to-multipoint (one-to-many) connections.
HDLC (High-Level Data Link Control):-
This protocol is based on SDLC and provides both a best-effort unreliable
service and a reliable service. It is used with various serial interface protocols defined in the physical layer, such as
EIA/TIA-232, V.24, V.35, and others.
SLIP (Serial Line Internet Protocol):-
SLIP is a data link control facility for transmitting IP packets, usually between
an ISP (Internet service provider) and a home user over a dial-up link. SLIP has some limitations, including a lack of
any error-detection and correction mechanisms; it is up to higher-layer protocols to perform these checks. SLIP is used
over much of the same serial interfaces as HDLC.
PPP (Point-to-Point Protocol):-
PPP provides the same functionality as SLIP (i.e., it is commonly used for Internet
connections over dial-up lines); but it is a more robust protocol that can transport not only IP, but also other types of
packets. Frames contain a field that identifies the type of protocol being carried (IP, IPX, and so on). It is used over
much of the same serial interfaces as HDLC.
Frame Relay:-
LAPB, the link access procedure used with X.25, is highly reliable, but it also has high overhead. Frame relay does away with the
reliability services (i.e., error-correction mechanisms are removed) to improve throughput.
LLC (Logical Link Control):-
The IEEE (Institute of Electrical and Electronic Engineers) defines this protocol in its 802.x
family of networks standards. The ANSI FDDI standard also uses this protocol. LLC is discussed further in the next
section.


Synchronous and Asynchronous Protocol:-


A protocol is a set of rules which governs how data is sent from one point to
another. In data communications, there are widely accepted protocols for sending data. Both the sender and receiver
must use the same protocol when communicating.
In data transmission protocol, it can be categorized into two main parts as:
i) Synchronous Protocols
ii) Asynchronous Protocols
1. Synchronous Protocol:-
In synchronous transmission, greater efficiency is achieved by grouping characters together,
and doing away with the start and stop bits for each character. We still envelop the information in a similar way as
before, but this time we send more characters between the start and end sequences. In addition, the start and stop bits
are replaced with a new format that permits greater flexibility. An extra ending sequence is added to perform error
checking.

A start type sequence, called a header, prefixes each block of characters, and a stop type sequence, called a tail, suffixes
each block of characters. The tail is expanded to include a check code, inserted by the transmitter, and used by the
receiver to determine if the data block of characters was received without errors. In this way, synchronous transmission
overcomes the two main deficiencies of the asynchronous method, that of inefficiency and lack of error detection.
2. Asynchronous Protocol:-
Asynchronous systems send data bytes between the sender and receiver by packaging the
data in an envelope. This envelope helps transport the character across the transmission link that separates the sender
and receiver. The transmitter creates the envelope, and the receiver uses the envelope to extract the data. Each
character (data byte) the sender transmits is preceded with a start bit, and suffixed with a stop bit. These extra bits serve
to synchronize the receiver with the sender.

In asynchronous serial transmission, each character is packaged in an envelope, and sent across a single wire, bit by bit,
to a receiver. Because no signal lines are used to convey clock (timing) information, this method groups data together
into a sequence of bits (five to eight), then prefixes them with a start bit and appends a stop bit.


Elementary Data Link Protocols:-

1. An unrestricted simplex protocol:-


In order to appreciate the step-by-step development of efficient and complex
protocols such as SDLC, HDLC, etc., we will begin with a simple but unrealistic protocol. In this protocol:
Data are transmitted in one direction only
The transmitting (Tx) and receiving (Rx) hosts are always ready
Processing time can be ignored
Infinite buffer space is available
No errors occur; i.e. no damaged frames and no lost frames (perfect channel)

2. A simplex stop-and-wait protocol:-


In this protocol we assume that
Data are transmitted in one direction only
No errors occur (perfect channel)
The receiver can only process the received information at a finite rate
These assumptions imply that the transmitter cannot send frames at a rate faster than the receiver can process
them.
The problem here is how to prevent the sender from flooding the receiver.
A general solution to this problem is to have the receiver provide some sort of feedback to the sender. The process
could be as follows: the receiver sends an acknowledgement frame back to the sender telling the sender that the last
received frame has been processed and passed to the host; permission to send the next frame is granted. The
sender, after having sent a frame, must wait for the acknowledge frame from the receiver before sending another
frame. This protocol is known as stop-and-wait.
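The alternating send/wait rhythm described above can be modelled with two in-memory queues standing in for the (perfect) channel. This is a toy model, not a real network stack; all names are illustrative:

```python
from collections import deque

# Toy perfect channel: one queue per direction, no loss or damage.
data_channel, ack_channel = deque(), deque()

def sender(frames):
    for frame in frames:
        data_channel.append(frame)   # send one frame...
        yield                        # ...then stop and wait
        assert ack_channel.popleft() == "ACK"   # permission to continue

def receiver(delivered):
    while True:
        yield
        delivered.append(data_channel.popleft())  # process and pass to host
        ack_channel.append("ACK")                 # grant permission for next frame

delivered = []
tx, rx = sender(["f1", "f2", "f3"]), receiver(delivered)
next(rx)                 # prime the receiver
for _ in range(3):
    next(tx)             # sender transmits, then pauses (stop-and-wait)
    next(rx)             # receiver consumes the frame and acknowledges
assert delivered == ["f1", "f2", "f3"]
```

The generators make the lock-step nature explicit: the sender cannot advance past its `yield` until the receiver has produced an ACK.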

3. A simplex protocol for a noisy channel:-


In this protocol the unrealistic "error free" assumption in protocol 2 is
dropped. Frames may be either damaged or lost completely. We assume that transmission errors in the frame are
detected by the hardware checksum.
One suggestion is that the sender would send a frame, the receiver would send an ACK frame only if the frame is
received correctly. If the frame is in error the receiver simply ignores it; the transmitter would time out and would
retransmit it.
One fatal flaw with the above scheme is that if the ACK frame is lost or damaged, duplicate frames are accepted at
the receiver without the receiver knowing it.
Imagine a situation where the receiver has just sent an ACK frame back to the sender saying that it correctly
received and already passed a frame to its host. However, the ACK frame gets lost completely, the sender times out
and retransmits the frame. There is no way for the receiver to tell whether this frame is a retransmitted frame or a
new frame, so the receiver accepts this duplicate happily and transfers it to the host. The protocol thus fails in this
aspect.
To overcome this problem it is required that the receiver be able to distinguish a frame that it is seeing for the first
time from a retransmission. One way to achieve this is to have the sender put a sequence number in the header of
each frame it sends. The receiver then can check the sequence number of each arriving frame to see if it is a new
frame or a duplicate to be discarded.
The receiver needs to distinguish only 2 possibilities: a new frame or a duplicate; a 1-bit sequence number is
sufficient. At any instant the receiver expects a particular sequence number. Any wrong sequence numbered frame
arriving at the receiver is rejected as a duplicate. A correctly numbered frame arriving at the receiver is accepted,
passed to the host, and the expected sequence number is incremented by 1 (modulo 2).
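The receiver's side of this 1-bit sequence number scheme (the alternating-bit idea) can be sketched as follows; the frame layout here is a simplified `(seq, payload)` pair:

```python
expected = 0              # 1-bit sequence number the receiver expects next
delivered = []

def receive(frame):
    """Accept only frames carrying the expected sequence number."""
    global expected
    seq, payload = frame
    if seq == expected:
        delivered.append(payload)       # new frame: pass it to the host
        expected = (expected + 1) % 2   # flip the expected number: 0, 1, 0, ...
    # else: a duplicate (e.g. retransmitted after a lost ACK) -- discard it

receive((0, "A"))    # new frame: accepted
receive((0, "A"))    # retransmission after a lost ACK: rejected as duplicate
receive((1, "B"))    # next new frame: accepted
assert delivered == ["A", "B"]
```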

Medium Access Control (MAC) Sublayer:-


We can consider the data link layer as two sublayers. The upper sublayer is
responsible for data link control, and the lower sublayer is responsible for resolving access to the shared media. If the
channel is dedicated, we do not need the lower sublayer.

IEEE (Institute of Electrical and Electronics Engineers) has actually made this division for LANs. The upper sublayer that is
responsible for flow control and error control is called the logical link control (LLC) layer; the lower sublayer that is mostly
responsible for multiple access resolution is called the media access control (MAC) layer.
When nodes or stations are connected and use a common link, called a multipoint or broadcast link, we need a multiple
access protocol to coordinate access to the link. The problem of controlling the access to the medium is similar to the
rules of speaking in an assembly. The procedures guarantee that the right to speak is upheld and ensure that two people
do not speak at the same time, do not interrupt each other, do not monopolize the discussion, and so on.
The situation is similar for a multipoint network. Many formal protocols have been devised to handle access to a shared
link. We categorize them into three groups. Protocols belonging to each group are shown in fig.


Features/Functions of Medium Access Sublayer:-


The MAC sub-layer interacts with the physical layer and is primarily
responsible for framing/de-framing and collision resolution.
(i) Framing/De-Framing and interaction with PHY: -
On the sending side, the MAC sub-layer is responsible for creation
of frames from network layer packets, by adding the frame header and the frame trailer. While the frame header consists
of layer2 addresses (known as MAC address) and a few other fields for control purposes, the frame trailer consists of the
CRC/checksum of the whole frame. After creating a frame, the MAC layer is responsible for interacting with the physical
layer processor (PHY) to transmit the frame.

On the receiving side, the MAC sub-layer receives frames from the PHY and is responsible for accepting each frame, by
examining the frame header. It is also responsible for verifying the checksum to conclude whether the frame has come
uncorrupted through the link without bit errors. Since checksum computation and verification are compute intensive
tasks, the framing/de-framing functionality is done by a dedicated piece of hardware (e.g. the NIC card on PCs).


(ii) Collision Resolution :-


On shared or broadcast links, where multiple end nodes are connected to the same link, there has to
be a collision resolution protocol running on each node, so that the link is used cooperatively. The MAC sub-layer is
responsible for this task and it is the MAC sub-block that implements standard collision resolution protocols like
CSMA/CD, CSMA etc. For half-duplex links, it is the MAC sub-layer that makes sure that a node sends data on the link only
during its turn. For full-duplex point-to-point links, the collision resolution functionality of MAC sub-layer is not required.

The primary functions performed by the MAC layer are:

Frame delimiting and recognition


Addressing of destination stations (both as individual stations and as groups of stations)
Conveyance of source-station addressing information
Transparent data transfer of LLC PDUs, or of equivalent information in the Ethernet sublayer
Protection against errors, generally by means of generating and checking frame check sequences
Control of access to the physical transmission medium

In the case of Ethernet, according to 802.3-2002 section 4.1.4, the functions required of a MAC are:

receive/transmit normal frames


half-duplex retransmission and backoff functions
append/check FCS (frame check sequence)
interframe gap enforcement
discard malformed frames
prepend(tx)/remove(rx) preamble, SFD (start frame delimiter), and padding
half-duplex compatibility: append(tx)/remove(rx) MAC address

Random Access Protocols:-


In random access or contention methods, no station is superior to another station and
none is assigned the control over another. No station permits, or does not permit, another station to send. At each
instance, a station that has data to send uses a procedure defined by the protocol to make a decision on whether or not
to send. This decision depends on the state of the medium (idle or busy). In other words, each station can transmit when
it desires on the condition that it follows the predefined procedure, including the testing of the state of the medium.

Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods are called random access. Second, no
rules specify which station should send next. Stations compete with one another to access the medium. That is why
these methods are also called contention methods.

In a random access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision)
and the frames will be either destroyed or modified. To avoid an access conflict, or to resolve it when it happens, each station
follows a procedure that answers the following questions:
When can the station access the medium?
What can the station do if the medium is busy?

How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?

The random access methods we study in this chapter have evolved from a very interesting protocol known as ALOHA,
which used a very simple procedure called multiple access (MA). The method was improved with the addition of a
procedure that forces the station to sense the medium before transmitting. This was called carrier sense multiple access.
This method later evolved into two parallel methods: carrier sense multiple access with collision detection (CSMA/CD)
and carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CD tells the station what to do when a
collision is detected. CSMA/CA tries to avoid the collision.

ALOHA:-
ALOHA, the earliest random access method, was developed at the University of Hawaii in the early 1970s. It was
designed for a radio (wireless) LAN, but it can be used on any shared medium.
It is obvious that there are potential collisions in this arrangement. The medium is shared between the stations.
When a station sends data, another station may attempt to do so at the same time. The data from the two stations
collide and become garbled.

Pure ALOHA:-
The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant protocol. The idea is
that each station sends a frame whenever it has a frame to send. However, since there is only one channel to share,
there is the possibility of collision between frames from different stations. Fig shows an example of frame collisions in
pure ALOHA.

PURE ALOHA


There are four stations (unrealistic assumption) that contend with one another for access to the shared channel. The fig.
shows that each station sends two frames; there are a total of eight frames on the shared medium. Some of these frames
collide because multiple frames are in contention for the shared channel. Fig shows that only two frames survive: frame
1.1 from station 1 and frame 3.2 from station 3. We need to mention that even if one bit of a frame coexists on the
channel with one bit from another frame, there is a collision and both will be destroyed.

It is obvious that we need to resend the frames that have been destroyed during transmission. The pure ALOHA protocol
relies on acknowledgment from the receiver. When a station sends a frame, it expects the receiver to send an
acknowledgment. If the acknowledgment does not arrive after a time-out period, the station assumes that the frame (or
the acknowledgment) has been destroyed and resends the frame.

A collision involves two or more stations. If all these stations try to resend their frames after the time-out, the frames will
collide again. Pure ALOHA dictates that when the time-out period passes, each station waits a random amount of time
before resending its frame. The randomness will help avoid more collisions. We call this time the back-off time TB.

Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames. After a maximum
number of retransmission attempts Kmax, a station must give up and try later. Fig shows the procedure for pure ALOHA
based on the above strategy.

Procedure for Pure ALOHA protocol


The time-out period is equal to the maximum possible round-trip propagation delay, which is twice the amount of time
required to send a frame between the two most widely separated stations (2 × Tp). The back-off time TB is a random value
that normally depends on K (the number of attempted unsuccessful transmissions). The formula for TB depends on the
implementation. One common formula is the binary exponential back-off. In this method, for each retransmission, a
multiplier in the range 0 to 2^K − 1 is randomly chosen and multiplied by Tp (the maximum propagation time) or Tfr (the average
time required to send out a frame) to find TB. Note that in this procedure, the range of the random numbers increases
after each collision. The value of Kmax is usually chosen as 15.
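The back-off computation can be sketched in a few lines; the sketch uses Tfr as the multiplier base (the text allows either Tp or Tfr), and the 1 ms frame time is an assumed illustration value:

```python
import random

def backoff_time(k, t_fr):
    """TB = R * Tfr, with R drawn uniformly from 0 .. 2^K - 1."""
    r = random.randint(0, 2**k - 1)   # the range doubles after every collision
    return r * t_fr

t_fr = 1e-3                            # assumed 1 ms average frame time
assert backoff_time(1, t_fr) in (0.0, t_fr)        # K = 1: R is 0 or 1
assert 0.0 <= backoff_time(3, t_fr) <= 7 * t_fr    # K = 3: R is 0 .. 7
```

Because the range grows with every unsuccessful attempt, repeated collisions spread the retransmissions further and further apart.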

Slotted ALOHA:-
Pure ALOHA has a vulnerable time of 2 × Tfr. This is so because there is no rule that defines when a
station can send. A station may send soon after another station has started or just before another station has finished.
Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send only at the beginning
of a time slot. Fig shows an example of frame collisions in slotted ALOHA.

Frames in a slotted ALOHA network

Because a station is allowed to send only at the beginning of the synchronized time slot, if a station misses this moment,
it must wait until the beginning of the next time slot. This means that the station which started at the beginning of this
slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at
the beginning of the same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr.
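Though not derived in these notes, the standard throughput formulas make the effect of halving the vulnerable time concrete (S is the throughput, G the offered load in frames per frame time):

```python
import math

# Throughput S as a function of offered load G (average frames per frame time):
pure    = lambda g: g * math.exp(-2 * g)   # vulnerable time 2 * Tfr
slotted = lambda g: g * math.exp(-g)       # vulnerable time Tfr

# Pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1 -- twice the throughput.
assert abs(pure(0.5) - 1 / (2 * math.e)) < 1e-12    # about 18.4 %
assert abs(slotted(1.0) - 1 / math.e) < 1e-12       # about 36.8 %
```

Halving the vulnerable time doubles the maximum achievable throughput, from roughly 18.4 percent to roughly 36.8 percent of the channel capacity.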

Carrier Sense Multiple Access (CSMA):-


To minimize the chance of collision and, therefore, increase the performance,
the CSMA method was developed. The chance of collision can be reduced if a station senses the medium before trying to
use it. Carrier sense multiple access (CSMA) requires that each station first listen to the medium (or check the state of
the medium) before sending. In other words, CSMA is based on the principle "sense before transmit" or "listen before
talk".


CSMA can reduce the possibility of collision, but it can not eliminate it. The reason for this is shown in fig, a space and
time model of a CSMA network. Stations are connected to a shared channel (usually a dedicated medium).

The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time
(although very short) for the first bit to reach every station and for every station to sense it. In other words, a station may
sense the medium and find it idle, because the first bit sent by another station has not yet been received.

CSMA


At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C senses the
medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also
sends a frame. The two signals collide and both frames are destroyed.

There are three different types of CSMA protocols:

1-Persistent CSMA
Non-Persistent CSMA
P-Persistent CSMA

1-Persistent CSMA:-

In this method, a station that wants to transmit data continuously senses the channel to check whether the channel is
idle or busy.
If the channel is busy, station waits until it becomes idle.
When the station detects an idle channel, it immediately transmits the frame.
This method has the highest chance of collision because two or more stations may find channel to be idle at the
same time and transmit their frames.


Non-Persistent CSMA:-
A station that has a frame to send, senses the channel.
If the channel is idle, it sends immediately.
If the channel is busy, it waits a random amount of time and then senses the channel again.
It reduces the chance of collision because the stations wait for a random amount of time .
It is unlikely that two or more stations will wait for the same amount of time and will retransmit at the same time.

P-Persistent CSMA:-
In this method, the channel has time slots such that the time slot duration is equal to or greater than the maximum
propagation delay time.
When a station is ready to send, it senses the channel.
If the channel is busy, the station waits until the next slot.
If the channel is idle, it transmits the frame with probability p; with probability 1 − p it defers to the next slot and senses again.
It reduces the chance of collision and improves the efficiency of the network.
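The per-slot decision of p-persistent CSMA can be sketched as follows (a toy model; channel sensing is passed in as a flag, and the default p = 0.1 is an illustration):

```python
import random

def p_persistent_decision(channel_idle, p=0.1):
    """One time slot of p-persistent CSMA."""
    if not channel_idle:
        return "wait for next slot"     # busy: sense again at the next slot
    if random.random() < p:
        return "transmit"               # idle: send with probability p
    return "defer to next slot"         # idle: defer with probability 1 - p

assert p_persistent_decision(channel_idle=False) == "wait for next slot"
assert p_persistent_decision(channel_idle=True, p=1.0) == "transmit"
assert p_persistent_decision(channel_idle=True, p=0.0) == "defer to next slot"
```

Setting p = 1 recovers 1-persistent behaviour; small values of p trade delay for a lower collision probability.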


Carrier Sense Multiple Access with Collision Detection (CSMA/CD):-


The CSMA method does not specify the procedure
following a collision. Carrier sense multiple access with collision detection (CSMA/CD) augments the algorithm to handle
the collision.

In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If so, the
station is finished. If, however, there is a collision, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the collision.
Although each station continues to send bits in the frame until it detects the collision, we show what happens as the
first bits collide. In the fig, stations A and C are involved in the collision.

CSMA/CD


At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C
has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts sending the bits in its
frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects the
collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume
immediately) aborts transmission. Station A detects the collision at time t4 when it receives the first bit of C's frame; it also
immediately aborts transmission. Looking at the fig, we see that A transmits for the duration t4 − t1; C transmits for the
duration t3 − t2. Later we show that, for the protocol to work, the length of any frame divided by the bit rate in this
protocol must be more than either of these durations. At time t4, the transmission of A's frame, though incomplete, is
aborted; at time t3, the transmission of C's frame, though incomplete, is aborted. Now that we know the time durations
for the two transmissions, we can show a more complete graph in fig.

Procedure:-

Now let us look at the flow diagram for CSMA/CD. It is similar to the one for the ALOHA protocol, but there are differences.

The first difference is the addition of the persistence process. We need to sense the channel before we start sending the
frame by using one of the persistence processes we discussed previously (nonpersistent, 1-persistent, or p-persistent).
The corresponding box can be replaced by one of the persistence processes.

The second difference is the frame transmission. In ALOHA, we first transmit the entire frame and then wait for an
acknowledgment. In CSMA/CD, transmission and collision detection is a continuous process. We do not send the entire
frame and then look for a collision. The station transmits and receives continuously and simultaneously (using two
different ports). We use a loop to show that transmission is a continuous process. We constantly monitor in order to
detect one of two conditions: either transmission is finished or a collision is detected. Either event stops transmission.
When we come out of the loop, if a collision has not been detected, it means that transmission is complete; the entire
frame is transmitted. Otherwise, a collision has occurred.

The third difference is the sending of a short jamming signal that enforces the collision in case other stations have not yet
sensed the collision.
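The three differences combine into the flow sketched below. The channel and the collisions are simulated here (a `collisions` list marks which attempts collide); this is an illustration of the control flow, not a driver implementation:

```python
import random

def csma_cd_send(collisions, k_max=15):
    """Sketch of the CSMA/CD flow; `collisions` simulates which attempts collide."""
    for k in range(k_max + 1):
        # 1. Persistence process: sense the channel until it is idle (elided).
        # 2. Transmit and monitor simultaneously; either frame-finished or
        #    collision-detected stops the transmission loop.
        collided = k < len(collisions) and collisions[k]
        if not collided:
            return "success"                    # entire frame was transmitted
        # 3. Send a short jamming signal to enforce the collision (simulated),
        #    then back off a random number of slot times and try again.
        random.randint(0, 2**k - 1)
    return "abort"                              # Kmax exceeded: give up

assert csma_cd_send([True, True, False]) == "success"   # succeeds on 3rd attempt
assert csma_cd_send([True] * 16) == "abort"             # 16 collisions: give up
```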


Flow Diagram for CSMA/CD

Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA):-


The basic idea behind CSMA/CD is that a station
needs to be able to receive while transmitting to detect a collision. When there is no collision, the station receives one
signal: its own signal. When there is a collision, the station receives two signals: its own signal and the signal transmitted
by a second station. To distinguish between these two cases, the received signals in these two cases must be
significantly different. In other words, the signal from the second station needs to add a significant amount of energy to
the one created by the first station.
In a wired network, the received signal has almost the same energy as the sent signal because either the length
of the cable is short or there are repeaters that amplify the energy between the sender and the receiver. This means
that in a collision , the detected energy almost doubles.
However, in a wireless network, much of the sent energy is lost in transmission. The received signal has very
little energy. Therefore, a collision may add only 5 to 10 percent additional energy. This is not useful for effective
collision detection.
We need to avoid collisions on wireless network because they cannot be detected. Carrier sense multiple access
with collision avoidance (CSMA/CA) was invented for these networks. Collisions are avoided through the use of CSMA/CA's
three strategies: the interframe space, the contention window, and acknowledgment, as shown in fig.


Timing in CSMA/CA

Interframe Space:-
First, collisions are avoided by deferring transmission even if the channel is found idle. When an idle
channel is found, the station does not send immediately. It waits for a period of time called the interframe space or IFS.
Even though the channel may appear idle when it is sensed, a distant station may have already started transmitting. The
distant station's signal has not yet reached this station. The IFS time allows the front of the signal transmitted by the
distant station to reach this station. If after the IFS time the channel is still idle, the station can send, but it still needs to
wait a time equal to the contention time (described next). The IFS variable can also be used to prioritize stations or frame
types. For example, a station that is assigned a shorter IFS has a higher priority.

Contention Window:-
The contention window is an amount of time divided into slots. A station that is ready to send
chooses a random number of slots as its wait time. The number of slots in the window changes according to the binary
exponential back-off strategy. This means that it is set to one slot the first time and then doubles each time the station
cannot detect an idle channel after the IFS time. This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station. One interesting point about the contention window is
that the station needs to sense the channel after each time slot. However, if the station finds the channel busy, it does
not restart the process; it just stops the timer and restarts it when the channel is sensed as idle. This gives priority to the
station with the longest waiting time.
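The doubling contention window can be sketched in slot counts only (timer pause/resume when the channel turns busy is noted in the comment rather than modelled):

```python
import random

def contention_slots(attempt):
    """Window starts at 1 slot and doubles after each failed attempt."""
    window = 2 ** attempt            # 1, 2, 4, 8, ... slots
    return random.randrange(window)  # random number of slots to wait

# If the channel turns busy mid-wait, the station pauses (not resets) its
# slot timer and resumes when the channel is idle again, so accumulated
# waiting is never lost.
assert contention_slots(0) == 0          # first attempt: window of one slot
assert contention_slots(3) in range(8)   # fourth attempt: wait 0..7 slots
```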

Acknowledgment:-
With all these precautions, there still may be a collision resulting in destroyed data. In addition, the
data may be corrupted during the transmission. The positive acknowledgment and the time-out timer can help guarantee
that the receiver has received the frame.
