Unit 2
Addressing:-
The data-link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be
unique on the link. It is encoded into the hardware at the time of manufacturing.
Error Control:-
Sometimes signals encounter problems in transit and bits get flipped. These errors are detected, and an
attempt is made to recover the actual data bits. The layer also provides an error reporting mechanism to the
sender.
Flow Control:-
Stations on the same link may have different speeds or capacities. The data-link layer provides flow control,
which enables both machines to exchange data at the same speed.
Multi-access Control:-
When hosts on a shared link try to transfer data, there is a high probability of collision.
The data-link layer provides mechanisms such as CSMA/CD to give multiple systems the capability of accessing a
shared medium. Protocols of this layer determine which of the devices has control over the link at any given time,
when two or more devices are connected to the same link.
OR
To accomplish these goals, the data link layer takes the packets it gets from the network layer and encapsulates them into
frames for transmission. Each frame contains a frame header, a payload field for holding the packet, and a frame trailer,
as illustrated in fig. Frame management forms the heart of what the data link layer does.
In fact, in many networks, these functions are found only in the upper layers and not in the data link layer. However, no
matter where they are found, the principles are pretty much the same, so it does not really matter where we study them.
In the data link layer they often show up in their simplest and purest forms, making this a good place to examine them.
The data link layer on one machine can be viewed as sending frames to the data link layer on the other machine, which
passes them to the network layer there, as shown in fig a. The actual transmission follows the path of fig b, but it is easier
to think in terms of two data link layer processes communicating using a data link protocol.
The three major types of services offered by data link layer are:
1. Unacknowledged connectionless service.
2. Acknowledged connectionless service.
3. Acknowledged connection oriented service.
(c) This type of service provides additional reliability because the source machine retransmits a frame if it does not
receive an acknowledgement for it within the specified time.
(d) This service is useful over unreliable channels, such as wireless systems.
Framing:-
A point-to-point connection between two computers or devices consists of a wire in which data is
transmitted as a stream of bits. However, these bits must be framed into discernible blocks of information. Framing is a
function of the data link layer. It provides a way for a sender to transmit a set of bits that are meaningful to the
receiver. Ethernet, token ring, frame relay, and other data link layer technologies have their own frame structures.
Frames have headers that contain information such as error-checking codes. The Data Link layer prepares a packet for
transport across the local media by encapsulating it with a header and a trailer to create a frame.
The Data Link layer frame includes:
Data:- The packet from the Network layer.
Header:- Contains control information, such as addressing, and is located at the beginning of the PDU.
Trailer:- Contains control information added to the end of the PDU.
To provide service to the network layer, the data link layer must use the service provided to it by the physical layer. What
the physical layer does is accept a raw bit stream and attempt to deliver it to the destination. This bit stream is not
guaranteed to be error free. The number of bits received may be less than, equal to, or more than the number of bits
transmitted, and they may have different values. It is up to the data link layer to detect and, if necessary, correct errors.
The usual approach is for the data link layer to break the bit stream up into discrete frames and compute the checksum for
each frame. When a frame arrives at the destination, the checksum is recomputed. If the newly computed checksum is
different from the one contained in the frame, the data link layer knows that an error has occurred and takes steps to deal
with it (e.g., discarding the bad frame and possibly also sending an error report).
Breaking the bit stream up into frames is more difficult than it at first appears. One way to
achieve this framing is to insert time gaps between frames, much like the spaces between words in ordinary text.
However, networks rarely make any guarantees about timing, so it is possible these gaps might be squeezed out or other
gaps might be inserted during transmission. Since it is too risky to count on timing to mark the start and end of each
frame, other methods have been devised. We will look at four methods:
1. Character count.
2. Flag bytes with byte stuffing.
3. Starting and ending flags, with bit stuffing.
4. Physical layer coding violations.
1. Character count:-
The first framing method uses a field in the header to specify the number of characters in the frame.
When the data link layer at the destination sees the character count, it knows how many characters follow and hence
where the end of the frame is. This technique is shown in fig for four frames of sizes 5, 5, 8 and 8 characters, respectively.
Fig. A character stream. (a) Without errors. (b) With one error
The trouble with this algorithm is that the count can be garbled by a transmission error. For example, if the character
count of 5 in the second frame of fig (b) becomes a 7, the destination will get out of synchronization and will be unable to
locate the start of the next frame. Even if the checksum is incorrect so the destination knows that the frame is bad, it still
has no way of telling where the next frame starts. Sending a frame back to the source asking for a retransmission does not
help either, since the destination does not know how many characters to skip over to get to the start of the
retransmission. For this reason, the character count method is rarely used anymore.
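The character-count parsing described above can be sketched in Python. As a simplifying assumption for illustration, the count is a single byte at the start of each frame and covers the whole frame, including the count byte itself:

```python
def split_frames(stream: bytes) -> list[bytes]:
    """Split a byte stream into frames whose first byte is the
    character count (here the count includes the count byte itself)."""
    frames = []
    i = 0
    while i < len(stream):
        count = stream[i]              # header field: total frame length
        frames.append(stream[i:i + count])
        i += count                     # jump to the start of the next frame
    return frames

# Four frames of sizes 5, 5, 8 and 8 characters, as in the figure:
stream = bytes([5, 1, 2, 3, 4,
                5, 6, 7, 8, 9,
                8, 0, 1, 2, 3, 4, 5, 6,
                8, 7, 8, 9, 0, 1, 2, 3])
frames = split_frames(stream)          # four frames of lengths 5, 5, 8, 8
```

A single garbled count byte desynchronizes every frame that follows, which is exactly the weakness described above.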
Fig (a) A Frame delimited by flag bytes (b) Four examples of byte sequences before and after byte stuffing.
A serious problem occurs with this method when binary data, such as object programs or floating-point numbers, are
being transmitted. It may easily happen that the flag byte's bit pattern occurs in the data. This situation will usually
interfere with the framing. One way to solve this problem is to have the sender's data link layer insert a special escape
byte (ESC) just before each accidental flag byte in the data. The data link layer on the receiving end removes the escape
byte before the data are given to the network layer. This technique is called byte stuffing or character stuffing. Thus, a
framing flag byte can be distinguished from one in the data by the absence or presence of an escape byte before it.
Of course, the next question is: What happens if an escape byte occurs in the middle of the data? The answer is that it,
too, is stuffed with an escape byte. Thus, any single escape byte is part of an escape sequence, whereas a doubled one
indicates that a single escape occurred naturally in the data. Some examples are shown in fig. In all cases, the byte
sequence delivered after destuffing is exactly the same as the original byte sequence.
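As a sketch of the stuffing and destuffing rules just described (the concrete FLAG and ESC byte values below are illustrative assumptions, not taken from the text):

```python
FLAG = 0x7E  # assumed flag byte value; the principle holds for any choice
ESC = 0x7D   # assumed escape byte value

def byte_stuff(payload: bytes) -> bytes:
    """Delimit the payload with flags, escaping accidental FLAG/ESC bytes."""
    out = bytearray([FLAG])            # opening flag
    for b in payload:
        if b in (FLAG, ESC):           # accidental flag or escape in the data
            out.append(ESC)            # precede it with an escape byte
        out.append(b)
    out.append(FLAG)                   # closing flag
    return bytes(out)

def byte_destuff(frame: bytes) -> bytes:
    """Strip the delimiting flags and remove the stuffed escape bytes."""
    body = frame[1:-1]
    out = bytearray()
    i = 0
    while i < len(body):
        if body[i] == ESC:             # drop the escape, keep the next byte
            i += 1
        out.append(body[i])
        i += 1
    return bytes(out)
```

Round-tripping any payload through byte_stuff and byte_destuff returns the original bytes, even when the payload contains flag or escape bytes.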
The byte-stuffing scheme depicted in fig is a slight simplification of the one used in the
PPP protocol that most home computers use to communicate with their Internet service provider.
A major disadvantage of using this framing method is that it is closely tied to the use of 8-bit characters. Not all character
codes use 8-bit characters. For example, UNICODE uses 16-bit characters. As networks developed, the disadvantages of
embedding the character code length in the framing mechanism became more and more obvious, so a new technique had
to be developed to allow arbitrary-sized characters.
When the receiver sees five consecutive incoming 1 bits, followed by a 0 bit, it automatically
destuffs (i.e., deletes) the 0 bit. Just as byte stuffing is completely transparent to the network layer in both computers, so
is bit stuffing. If the user data contain the flag pattern, 01111110, this flag is transmitted as 011111010 but stored in the
receiver's memory as 01111110.
Fig. Bit stuffing. (a) The original data. (b) The data as they appear on the line. (c) The data as they are stored in the
receiver's memory after destuffing.
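The stuffing and destuffing rules can be sketched over a bit string. This is a minimal sketch under the assumption that bits are represented as a string of "0"/"1" characters; real hardware operates on individual bits:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Delete the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

# The flag pattern 01111110 in user data goes out as 011111010,
# matching the example in the text.
```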
With bit stuffing, the boundary between two frames can be unambiguously recognized by the flag pattern. Thus, if the
receiver loses track of where it is, all it has to do is scan the input for flag sequences, since they can only occur at frame
boundaries and never within the data.
The last method of framing is only applicable to networks in which the encoding on
the physical medium contains some redundancy. For example, some LANs encode 1 bit of data by using 2 physical bits.
Normally, a 1 bit is a high-low pair and a 0 bit is a low-high pair. The scheme means that every data bit has a transition in
the middle, making it easy for the receiver to locate the bit boundaries. The combinations high-high and low-low are not
used for data but are used for delimiting frames in some protocols.
As a final note on framing, many data link protocols use a combination of a character count with one of the other methods
for extra safety. When a frame arrives, the count field is used to locate the end of the frame. Only if the appropriate
delimiter is present at that position and the checksum is correct is the frame accepted as valid. Otherwise, the input
stream is scanned for the next delimiter.
Flow Control:-
When a data frame (Layer-2 data) is sent from one host to another over a single medium, it is
required that the sender and receiver work at the same speed. That is, the sender sends at a speed at which the
receiver can process and accept the data. What if the speed (hardware/software) of the sender or receiver differs? If the
sender sends too fast, the receiver may be overloaded (swamped) and data may be lost.
Flow control coordinates the amount of data that can be sent before receiving an acknowledgment and
is one of the most important duties of the data link layer. In most protocols, flow control is a set of procedures that tells
the sender how much data it can transmit before it must wait for an acknowledgment from the receiver. The flow of data
must not be allowed to overwhelm the receiver. Any receiving device has a limited speed at which it can process incoming
data and a limited amount of memory in which to store incoming data. The receiving device must be able to inform the
sending device before those limits are reached and to request that the transmitting device send fewer frames or stop
temporarily. Incoming data must be checked and processed before they can be used. The rate of such processing is often
slower than the rate of transmission. For this reason, each receiving device has a block of memory, called a buffer,
reserved for storing incoming data until they are processed. If the buffer begins to fill up, the receiver must be able to tell
the sender to halt transmission until it is once again able to receive.
Sliding window refers to imaginary boxes that hold the frames on both the sender and receiver side. It provides the upper
limit on the number of frames that can be transmitted before requiring an acknowledgment. Frames may be
acknowledged by the receiver at any point even when the window is not full on the receiver side. Frames may be
transmitted by the source even when the window is not yet full on the sender side. The windows have a specific size in
which the frames are numbered modulo n, which means they are numbered from 0 to n-1. For example, if n = 8, the
frames are numbered 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ...
The size of the window is n-1; in this case it is 7. Therefore, a maximum of n-1 frames may be sent before an
acknowledgment. When the receiver sends an ACK, it includes the number of the next frame it expects to receive. For
example, in order to acknowledge the group of frames ending in frame 4, the receiver sends an ACK containing the
number 5. When the sender sees an ACK with number 5, it knows that all the frames up to number 4 have been
received.
Therefore, the sliding window of the sender shrinks from the left when frames of data are sent. The sliding window of
the sender expands to the right when acknowledgments are received.
The sliding window of the receiver shrinks from the left when frames of data are received. The sliding window of the
receiver expands to the right when an acknowledgement is sent.
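The sender-side bookkeeping described above (n = 8, window size 7, an ACK carrying the number of the next expected frame) can be sketched as follows; the class and method names are assumptions for illustration:

```python
WINDOW_BITS = 3        # n = 8 sequence numbers, window size n - 1 = 7
N = 1 << WINDOW_BITS

class SenderWindow:
    """Minimal sketch of the sender's sliding window."""

    def __init__(self):
        self.base = 0              # oldest unacknowledged sequence number
        self.next_seq = 0          # sequence number for the next new frame

    def outstanding(self) -> int:
        """Frames sent but not yet acknowledged (modulo-n arithmetic)."""
        return (self.next_seq - self.base) % N

    def send(self) -> int:
        assert self.outstanding() < N - 1, "window full: must wait for ACK"
        seq = self.next_seq
        self.next_seq = (self.next_seq + 1) % N
        return seq

    def ack(self, expected: int) -> None:
        # The ACK carries the number of the NEXT frame the receiver
        # expects; e.g. ACK 5 confirms every frame up to number 4.
        self.base = expected

w = SenderWindow()
sent = [w.send() for _ in range(5)]    # frames 0..4 go out
w.ack(5)                               # receiver expects frame 5 next
# the window slides: base is now 5 and nothing is outstanding
```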
Error Control:-
When a data-frame is transmitted, there is a probability that the data-frame may be lost in transit or
received corrupted. In both cases, the receiver does not receive the correct data-frame and the sender does not know
anything about the loss. In such cases, both sender and receiver are equipped with protocols which help them
detect transit errors such as loss of a data-frame. Hence, either the sender retransmits the data-frame or the receiver may
request resending of the previous data-frame.
Requirements for error control mechanism:
Error Detection:-
The sender and receiver, either both or one of them, must ascertain whether there is an error in transit.
Positive ACK:-
When the receiver receives a correct frame, it should acknowledge it.
Negative ACK:-
When the receiver receives a damaged frame or a duplicate frame, it sends a NACK back to the
sender and the sender must retransmit the correct frame.
Retransmission:-
The sender maintains a clock and sets a timeout period. If an acknowledgement of a previously transmitted
data-frame does not arrive before the timeout, the sender retransmits the frame, assuming that the
frame or its acknowledgement was lost in transit.
There are three types of techniques available which Data-link layer may deploy to control the errors by Automatic
Repeat Requests (ARQ):
1. Stop-and-Wait ARQ
2. Go-Back-N ARQ
3. Selective Repeat ARQ
1. Stop-and-wait ARQ:-
2. Go-Back-N ARQ:-
The stop-and-wait ARQ mechanism does not utilize the resources at their best: until the
acknowledgement is received, the sender sits idle and does nothing. In the Go-Back-N ARQ method, both sender and
receiver maintain a window.
The sending window enables the sender to send multiple frames without receiving the acknowledgement of the
previous ones. The receiving window enables the receiver to receive multiple frames and acknowledge them. The
receiver keeps track of incoming frames' sequence numbers.
When the sender has sent all the frames in the window, it checks up to what sequence number it has received positive
acknowledgement. If all frames are positively acknowledged, the sender sends the next set of frames. If the sender finds
that it has received a NACK, or has not received any ACK for a particular frame, it retransmits all the frames beginning
with the one for which it did not receive a positive ACK.
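The Go-Back-N retransmission rule, resend everything from the first frame lacking a positive ACK, can be sketched as follows (the function name is an assumption for illustration):

```python
def frames_to_resend(first_unacked: int, next_seq: int, n: int) -> list[int]:
    """Go-Back-N: on a NACK or timeout, every frame from the first
    unacknowledged one up to (but not including) next_seq is resent,
    with sequence numbers taken modulo n."""
    out, seq = [], first_unacked
    while seq != next_seq:
        out.append(seq)
        seq = (seq + 1) % n
    return out

# Example: the sender has sent frames 3..6 (mod 8) and frame 3 was lost,
# so positive ACKs stop before 3; frames 3, 4, 5 and 6 all go back out.
resend = frames_to_resend(3, 7, 8)     # [3, 4, 5, 6]
```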
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers, buffers the frames in memory and
sends a NACK only for the frame which is missing or damaged.
The sender, in this case, resends only the packet for which the NACK is received.
3. Burst Error:-
A burst error is more likely to occur than a single-bit error. The duration of noise is normally longer than the duration of 1
bit, which means that when noise affects data, it affects a set of bits. The number of bits affected depends on the data
rate and the duration of the noise.
Redundancy:-
The central concept in detecting or correcting errors is redundancy. To be able to detect or correct
errors, we need to send some extra bits with our data. These redundant bits are added by the sender and removed by the
receiver. Their presence allows the receiver to detect or correct corrupted bits.
3. Checksum:-
In the checksum error detection scheme, the data is divided into k segments each of m bits.
At the sender's end the segments are added using 1's complement arithmetic to get the sum. The sum is
complemented to get the checksum.
The checksum segment is sent along with the data segments.
At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is
complemented.
If the result is zero, the received data is accepted; otherwise it is discarded.
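The sender and receiver steps above can be sketched directly; the segment values below are made-up examples:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments with end-around carry (1's complement sum)."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        while total > mask:                  # wrap the carry back in
            total = (total & mask) + (total >> m)
    return total

def make_checksum(segments, m):
    """Sender: complement of the 1's complement sum of the segments."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments, checksum, m):
    """Receiver: the sum of data + checksum must be all 1s, so its
    complement is zero and the data is accepted."""
    return ones_complement_sum(segments + [checksum], m) == (1 << m) - 1

data = [0b1001, 0b1010, 0b0110]              # k = 3 segments of m = 4 bits
c = make_checksum(data, 4)                   # c == 0b0101
```

A single flipped bit in any segment makes the receiver's complemented sum nonzero, so the data is discarded.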
Unlike the checksum scheme, which is based on addition, CRC is based on binary division.
In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is appended to the end of the data unit so
that the resulting data unit becomes exactly divisible by a second, predetermined binary number.
At the destination, the incoming data unit is divided by the same number. If at this step there is no remainder, the
data unit is assumed to be correct and is therefore accepted.
A remainder indicates that the data unit has been damaged in transit and therefore must be rejected.
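The modulo-2 (XOR) division performed at both ends can be sketched as follows; the 4-bit divisor 1011 is an illustrative choice, not one the text specifies:

```python
def mod2_div(dividend: str, divisor: str) -> str:
    """Modulo-2 long division (subtraction is XOR); returns the remainder."""
    bits = [int(b) for b in dividend]
    div = [int(b) for b in divisor]
    for i in range(len(bits) - len(div) + 1):
        if bits[i]:                        # leading bit is 1: "subtract" (XOR)
            for j in range(len(div)):
                bits[i + j] ^= div[j]
    rem = bits[-(len(div) - 1):]           # last r bits are the remainder
    return "".join(map(str, rem))

def crc_encode(data: str, divisor: str) -> str:
    """Sender: append r zero bits, divide, and append the remainder."""
    rem = mod2_div(data + "0" * (len(divisor) - 1), divisor)
    return data + rem                      # codeword is exactly divisible

def crc_check(codeword: str, divisor: str) -> bool:
    """Receiver: divide by the same number; zero remainder means accept."""
    return int(mod2_div(codeword, divisor), 2) == 0

# Data 1001 with divisor 1011 gives remainder 110, codeword 1001110.
codeword = crc_encode("1001", "1011")
```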
EX.
Shared media links in which multiple devices share the same cable (e.g., an Ethernet LAN)
The PPP (Point-to-Point Protocol) that people use to connect to the Internet via a dial-up modem is an example of a data
link protocol. Because the link between two systems is point to point, the bits are always delivered from sender to
receiver in order. Also, unlike shared-media LANs in which multiple stations attempt to use the network, there is no
contention.
Common Data Link Protocols:-
The most common data link level protocols are listed here with a short description.
Note that most of these data link protocols are used for WAN and modem connections.
SDLC (Synchronous Data Link Protocol):-
This protocol was originally developed by IBM as part of IBM's SNA
(Systems Network Architecture). It was used to connect remote devices to mainframe computers at central locations
in either point-to-point (one-to-one) or point-to-multipoint (one-to-many) connections.
HDLC (High-level Data Link Control):-
This protocol is based on SDLC and provides both a best-effort unreliable
service and a reliable service. It is used with various serial interface protocols defined in the physical layer, such as
EIA/TIA-232, V.24, V.35, and others.
SLIP (Serial Line Internet Protocol):-
SLIP is a data link control facility for transmitting IP packets, usually between
an ISP (Internet service provider) and a home user over a dial-up link. SLIP has some limitations, including a lack of
any error-detection and correction mechanisms. It is up to higher-layer protocols to perform these checks. It is used
over many of the same serial interfaces as HDLC.
PPP (Point-to-Point Protocol):-
PPP provides the same functionality as SLIP (i.e., it is commonly used for Internet
connections over dial-up lines); but it is a more robust protocol that can transport not only IP, but also other types of
packets. Frames contain a field that identifies the type of protocol being carried (IP, IPX, and so on). It is used over
much of the same serial interfaces as HDLC.
Frame Relay:-
LAPB, the link access protocol used with X.25, is highly reliable, but it also has high overhead. Frame relay does away
with the reliability services (i.e., error-correction mechanisms are removed) to improve throughput.
LLC (Logical Link Control):-
The IEEE (Institute of Electrical and Electronics Engineers) defines this protocol in its 802.x
family of network standards. The ANSI FDDI standard also uses this protocol. LLC is discussed further in the next
section.
A start type sequence, called a header, prefixes each block of characters, and a stop type sequence, called a tail, suffixes
each block of characters. The tail is expanded to include a check code, inserted by the transmitter, and used by the
receiver to determine if the data block of characters was received without errors. In this way, synchronous transmission
overcomes the two main deficiencies of the asynchronous method, that of inefficiency and lack of error detection.
2. Asynchronous Protocol:-
Asynchronous systems send data bytes between the sender and receiver by packaging the
data in an envelope. This envelope helps transport the character across the transmission link that separates the sender
and receiver. The transmitter creates the envelope, and the receiver uses the envelope to extract the data. Each
character (data byte) the sender transmits is preceded with a start bit, and suffixed with a stop bit. These extra bits serve
to synchronize the receiver with the sender.
In asynchronous serial transmission, each character is packaged in an envelope, and sent across a single wire, bit by bit,
to a receiver. Because no signal lines are used to convey clock (timing) information, this method groups data together
into a sequence of bits (five to eight), then prefixes them with a start bit and appends a stop bit.
The sender puts a sequence number in the header of each frame it sends. The receiver then can check the sequence
number of each arriving frame to see if it is a new frame or a duplicate to be discarded.
The receiver needs to distinguish only 2 possibilities: a new frame or a duplicate; a 1-bit sequence number is
sufficient. At any instant the receiver expects a particular sequence number. A frame arriving with the wrong sequence
number is rejected as a duplicate. A correctly numbered frame arriving at the receiver is accepted,
passed to the host, and the expected sequence number is incremented by 1 (modulo 2).
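The receiver logic with a 1-bit sequence number can be sketched as follows (the frame representation as (sequence number, data) pairs is an assumption for illustration):

```python
def receiver(frames):
    """Accept a frame only if it carries the expected 1-bit sequence
    number; anything else is a duplicate and is discarded."""
    expected = 0
    delivered = []
    for seq, data in frames:
        if seq == expected:
            delivered.append(data)          # pass to the host
            expected = (expected + 1) % 2   # increment modulo 2
        # else: duplicate, e.g. a retransmission whose ACK was lost
    return delivered

# Frame 1's ACK was lost, so the sender retransmits it; the receiver
# recognizes the repeat and discards it.
delivered = receiver([(0, "a"), (1, "b"), (1, "b"), (0, "c")])
```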
IEEE (Institute of Electrical and Electronics Engineers) has actually made this division for LANs. The upper sublayer that is
responsible for flow control and error control is called the logical link control (LLC) layer; the lower sublayer that is mostly
responsible for multiple access resolution is called the media access control (MAC) layer.
When nodes or stations are connected and use a common link, called a multipoint or broadcast link, we need a multiple
access protocol to coordinate access to the link. The problem of controlling the access to the medium is similar to the
rules of speaking in an assembly. The procedures guarantee that the right to speak is upheld and ensure that two people
do not speak at the same time, do not interrupt each other, do not monopolize the discussion, and so on.
The situation is similar for a multipoint network. Many formal protocols have been devised to handle access to a shared
link. We categorize them into three groups. Protocols belonging to each group are shown in fig.
On the receiving side, the MAC sub-layer receives frames from the PHY and is responsible for accepting each frame by
examining the frame header. It is also responsible for verifying the checksum to conclude whether the frame has come
through the link uncorrupted, without bit errors. Since checksum computation and verification are compute-intensive
tasks, the framing/de-framing functionality is done by a dedicated piece of hardware (e.g., the NIC on a PC).
In the case of Ethernet, according to 802.3-2002 section 4.1.4, the functions required of a MAC are:
Two features give this method its name. First, there is no scheduled time for a station to
transmit. Transmission is random among the stations. That is why these methods are called random access. Second, no
rules specify which station should send next. Stations compete with one another to access the medium. That is why
these methods are also called contention methods.
In a random access method, each station has the right to the medium without being
controlled by any other station. However, if more than one station tries to send, there is an access conflict (a collision)
and the frames will be either destroyed or modified. To avoid an access conflict, or to resolve it when it happens, each
station follows a procedure that answers the following questions:
When can the station access the medium?
What can the station do if the medium is busy?
How can the station determine the success or failure of the transmission?
What can the station do if there is an access conflict?
The random access methods we study in this chapter have evolved from a very interesting protocol known as ALOHA,
which used a very simple procedure called multiple access (MA). The method was improved with the addition of a
procedure that forces the station to sense the medium before transmitting. This was called carrier sense multiple access.
This method later evolved into two parallel methods: carrier sense multiple access with collision detection (CSMA/CD)
and carrier sense multiple access with collision avoidance (CSMA/CA). CSMA/CD tells the station what to do when a
collision is detected. CSMA/CA tries to avoid the collision.
ALOHA:-
ALOHA, the earliest random access method, was developed at the University of Hawaii in the early 1970s. It was
designed for a radio (wireless) LAN, but it can be used on any shared medium.
It is obvious that there are potential collisions in this arrangement. The medium is shared among the stations.
When a station sends data, another station may attempt to do so at the same time. The data from the two stations
collide and become garbled.
Pure ALOHA:-
The original ALOHA protocol is called pure ALOHA. This is a simple, but elegant protocol. The idea is
that each station sends a frame whenever it has a frame to send. However, since there is only one channel to share,
there is the possibility of collision between frames from different stations. Fig shows an example of frame collisions in
pure ALOHA.
PURE ALOHA
There are four stations (unrealistic assumption) that contend with one another for access to the shared channel. The fig.
shows that each station sends two frames; there are a total of eight frames on the shared medium. Some of these frames
collide because multiple frames are in contention for the shared channel. Fig shows that only two frames survive: frame
1.1 from station 1 and frame 3.2 from station 3. We need to mention that even if one bit of a frame coexists on the
channel with one bit from another frame, there is a collision and both will be destroyed.
It is obvious that we need to resend the frames that have been destroyed during transmission. The pure ALOHA protocol
relies on acknowledgment from the receiver. When a station sends a frame, it expects the receiver to send an
acknowledgment. If the acknowledgment does not arrive after a time-out period, the station assumes that the frame (or
the acknowledgment) has been destroyed and resends the frame.
A collision involves two or more stations. If all these stations try to resend their frames after the time-out, the frames will
collide again. Pure ALOHA dictates that when the time-out period passes, each station waits a random amount of time
before resending its frame. The randomness will help avoid more collisions. We call this time the back-off time TB.
Pure ALOHA has a second method to prevent congesting the channel with retransmitted frames. After a maximum
number of retransmission attempts Kmax, a station must give up and try later. Fig shows the procedure for pure ALOHA
based on the above strategy.
The time-out period is equal to the maximum possible round-trip propagation delay, which is twice the amount of time
required to send a frame between the two most widely separated stations (2 x Tp). The back-off time TB is a random value
that normally depends on K (the number of attempted unsuccessful transmissions). The formula for TB depends on the
implementation. One common formula is the binary exponential back-off. In this method, for each retransmission, a
multiplier in the range 0 to 2^K - 1 is randomly chosen and multiplied by Tp (maximum propagation time) or Tfr (the
average time required to send out a frame) to find TB. Note that in this procedure, the range of the random numbers
increases after each collision. The value of Kmax is usually chosen as 15.
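The binary exponential back-off computation can be sketched as follows; as the text notes, whether the slot time is Tp or Tfr depends on the implementation, so it is left to the caller here:

```python
import random

K_MAX = 15   # give up after this many unsuccessful attempts

def backoff_time(k: int, t_slot: float) -> float:
    """After the k-th unsuccessful attempt, choose a random multiplier
    in the range 0 .. 2**k - 1 and multiply it by the slot time
    (Tp or Tfr) to get the back-off time TB."""
    if k > K_MAX:
        raise RuntimeError("too many attempts: abort and try later")
    r = random.randint(0, 2 ** k - 1)
    return r * t_slot

# The range of possible waits doubles after every collision:
# k = 1 -> 0..1 slots, k = 2 -> 0..3 slots, k = 3 -> 0..7 slots, ...
```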
Slotted ALOHA:-
Pure ALOHA has a vulnerable time of 2*Tfr. This is so because there is no rule that defines when the
station can send. A station may send soon after another station has started or soon before another station has finished.
Slotted ALOHA was invented to improve the efficiency of pure ALOHA.
In slotted ALOHA we divide the time into slots of Tfr seconds and force the station to send only at the beginning
of the time slot. Fig shows an example of frame collisions in slotted ALOHA.
Because a station is allowed to send only at the beginning of the synchronized time slot, if a station misses this moment,
it must wait until the beginning of the next time slot. This means that the station which started at the beginning of this
slot has already finished sending its frame. Of course, there is still the possibility of collision if two stations try to send at
the beginning of the same time slot. However, the vulnerable time is now reduced to one-half, equal to Tfr.
CSMA can reduce the possibility of collision, but it cannot eliminate it. The reason for this is shown in fig, a space and
time model of a CSMA network. Stations are connected to a shared channel (usually a dedicated medium).
The possibility of collision still exists because of propagation delay; when a station sends a frame, it still takes time
(although very short) for the first bit to reach every station and for every station to sense it. In other words, a station may
sense the medium and find it idle because the first bit sent by another station has not yet arrived.
CSMA
TRIPTI PAL (CS Faculty, RATM) Page 31
Computer Network Unit-2 NMCA-E25
At time t1, station B senses the medium and finds it idle, so it sends a frame. At time t2 (t2 > t1), station C senses the
medium and finds it idle because, at this time, the first bits from station B have not reached station C. Station C also
sends a frame. The two signals collide and both frames are destroyed.
1-Persistent CSMA
Non-Persistent CSMA
P-Persistent CSMA
1-Persistent CSMA:-
In this method, a station that wants to transmit data continuously senses the channel to check whether the channel is
idle or busy.
If the channel is busy, station waits until it becomes idle.
When the station detects an idle channel, it immediately transmits the frame.
This method has the highest chance of collision because two or more stations may find the channel to be idle at the
same time and transmit their frames simultaneously.
Non-Persistent CSMA:-
A station that has a frame to send, senses the channel.
If the channel is idle, it sends immediately.
If the channel is busy, it waits a random amount of time and then senses the channel again.
It reduces the chance of collision because the stations wait for a random amount of time .
It is unlikely that two or more stations will wait for the same amount of time and will retransmit at the same time.
P-Persistent CSMA:-
In this method, the channel has time slots such that the time slot duration is equal to or greater than the maximum
propagation delay time.
When a station is ready to send, it senses the channel.
If the channel is busy, the station waits until the next slot.
If the channel is idle, it transmits the frame with probability p; with probability q = 1 - p, it waits for the next slot and
repeats the process.
It reduces the chance of collision and improves the efficiency of the network.
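The three persistence strategies can be contrasted in a short sketch; the idle, transmit and wait callbacks are assumed stand-ins for real channel sensing and timing:

```python
import random

def one_persistent(idle, transmit):
    """1-persistent: sense continuously, send the moment the channel is idle."""
    while not idle():
        pass                      # keep sensing the busy channel
    transmit()

def non_persistent(idle, transmit, random_wait):
    """Non-persistent: if busy, wait a random time before sensing again."""
    while not idle():
        random_wait()
    transmit()

def p_persistent(idle, transmit, p, wait_slot):
    """p-persistent: on an idle slot, transmit with probability p,
    otherwise defer to the next slot and repeat."""
    while True:
        if idle() and random.random() < p:
            transmit()
            return
        wait_slot()
```

With p close to 1 this behaves like the 1-persistent method; smaller p spreads transmissions over more slots and lowers the collision chance further.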
In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If so, the
station is finished. If, however, there is a collision, the frame is sent again.
To better understand CSMA/CD, let us look at the first bits transmitted by the two stations involved in the collision.
Although each station continues to send bits in the frame until it detects the collision, we show what happens as the
first bits collide. In fig, stations A and C are involved in the collision.
CSMA/CD
At time t1, station A has executed its persistence procedure and starts sending the bits of its frame. At time t2, station C
has not yet sensed the first bit sent by A. Station C executes its persistence procedure and starts sending the bits in its
frame, which propagate both to the left and to the right. The collision occurs sometime after time t2. Station C detects a
collision at time t3 when it receives the first bit of A's frame. Station C immediately (or after a short time, but we assume
immediately) aborts transmission. Station A detects the collision at time t4 when it receives the first bit of C's frame; it
also immediately aborts transmission. Looking at the fig, we see that A transmits for the duration t4 - t1; C transmits for
the duration t3 - t2. Later we show that, for the protocol to work, the length of any frame divided by the bit rate in this
protocol must be more than either of these durations. At time t4, the transmission of A's frame, though incomplete, is
aborted; at time t3, the transmission of C's frame, though incomplete, is aborted. Now that we know the time durations
for the two transmissions, we can show a more complete graph in fig.
Procedure:-
Now let us look at the flow diagram for CSMA/CD. It is similar to the one for the ALOHA protocol, but there are differences.
The first difference is the addition of the persistence process. We need to sense the channel before we start sending the
frame by using one of the persistence processes we discussed previously (nonpersistent, 1-persistent, or p-persistent).
The corresponding box can be replaced by one of the persistence processes.
The second difference is the frame transmission. In ALOHA, we first transmit the entire frame and then wait for an
acknowledgment. In CSMA/CD, transmission and collision detection is a continuous process. We do not send the entire
frame and then look for a collision. The station transmits and receives continuously and simultaneously (using two
different ports). We use a loop to show that transmission is a continuous process. We constantly monitor in order to
detect one of two conditions: either transmission is finished or a collision is detected. Either event stops transmission.
When we come out of the loop, if a collision has not been detected, it means that transmission is complete; the entire
frame is transmitted. Otherwise, a collision has occurred.
The third difference is the sending of a short jamming signal that enforces the collision in case other stations have not yet
sensed the collision.
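The three differences above, a persistence step, continuous transmit-and-monitor, and a jamming signal, can be combined into one sending procedure; the callback names below are assumptions for illustration:

```python
def csma_cd_send(sense_idle, start_tx, collision_detected, tx_done, send_jam):
    """Sketch of the CSMA/CD sending procedure: persist until the channel
    is idle, then transmit while simultaneously listening; on collision,
    jam and report failure so the caller can back off and retry."""
    while not sense_idle():        # persistence process (1-persistent here)
        pass
    start_tx()
    while not tx_done():           # transmit and monitor continuously
        if collision_detected():
            send_jam()             # enforce the collision for other stations
            return False           # caller applies back-off and retries
    return True                    # entire frame transmitted, no collision
```

Unlike ALOHA, the procedure never sends a whole frame and then checks: the two loop conditions model the simultaneous transmit and receive paths.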
Timing in CSMA/CA
Interframe Space:-
First, collisions are avoided by deferring transmission even if the channel is found idle. When an idle
channel is found, the station does not send immediately. It waits for a period of time called the interframe space or IFS.
Even though the channel may appear idle when it is sensed, a distant station may have already started transmitting. The
distant station's signal has not yet reached this station. The IFS time allows the front of the signal transmitted by the
distant station to reach this station. If after the IFS time the channel is still idle, the station can send, but it still needs to
wait a time equal to the contention time (described next). The IFS variable can also be used to prioritize stations or frame
types. For example, a station that is assigned a shorter IFS has a higher priority.
Contention Window:-
The contention window is an amount of time divided into slots. A station that is ready to send
chooses a random number of slots as its wait time. The number of slots in the window changes according to the binary
exponential back-off strategy. This means that it is set to one slot the first time and then doubles each time the station
cannot detect an idle channel after the IFS time. This is very similar to the p-persistent method except that a random
outcome defines the number of slots taken by the waiting station. One interesting point about the contention window is
that the station needs to sense the channel after each time slot. However, if the station finds the channel busy, it does
not restart the process; it just stops the timer and restarts it when the channel is sensed as idle. This gives priority to the
station with the longest waiting time.
Acknowledgment:-
With all these precautions, there still may be a collision resulting in destroyed data. In addition, the
data may be corrupted during the transmission. The positive acknowledgment and the time-out timer can help guarantee
that the receiver has received the frame.