
Computer Networks

Course Code: 20ITC20

UNIT-1
Concept of Layering
● Consider how to provide
communication to the top layer of the
five-layer network.
● A message, M, is produced by an
application process running in layer 5
and given to layer 4 for transmission.
● Layer 4 puts a header in front of the
message to identify the message and
passes the result to layer 3.

2
Concept of Layering
● The header includes control
information, such as addresses, to
allow layer 4 on the destination
machine to deliver the message.
● There is no limit placed on the size of
messages transmitted in the layer 4
protocol but there is a limit imposed
by the layer 3 protocol.
● Layer 3 breaks up the incoming
messages into smaller units

3
Concept of Layering
● Layer 2 adds to each piece a header
and a trailer, and gives the resulting
unit to layer 1 for physical
transmission.
● At the receiving machine the message
moves upward, from layer to layer,
with headers being stripped off as it
progresses.

4
Design Issues in Layering
● Reliability
○ Error Detection
○ Error Correction
○ Routing
● Scale to Network Evolution
○ Addressing or Naming
○ Internetworking
● Resource Allocation
○ Statistical Multiplexing
○ Flow Control
○ Congestion
○ Quality of Service
● Securing the Network against Threats
○ Confidentiality
○ Integrity
○ Authentication

5
OSI Reference Model
 The model is called the ISO OSI (Open
Systems Interconnection) Reference
Model because it deals with
connecting open systems—that is,
systems that are open for
communication with other systems.
 Developed by the International
Standards Organization (ISO).
 It consists of 7 layers

6
OSI Reference Model
The principles that were applied to arrive at the seven layers can be briefly summarized as follows:
1. A layer should be created where a different abstraction is needed.
2. Each layer should perform a well-defined function.
3. The function of each layer should be chosen with an eye toward defining internationally
standardized protocols.
4. The layer boundaries should be chosen to minimize the information flow across the interfaces.
5. The number of layers should be large enough that distinct functions need not be thrown
together in the same layer out of necessity and small enough that the architecture does not
become unwieldy

7
OSI Reference Model: Physical Layer
●Bottom-most layer of the OSI model.
●Responsible for transmitting individual bits over a medium.
●Converts the bits into signals.
●The bit rate or data rate control is done at the physical layer.
●Defines how physical devices are connected.

Some of the functionalities of the Physical layer are:
●Physical topology - It defines the arrangement of the devices in a network.
●Data rate - The data rate is controlled by the physical layer.
●Transmission mode - It is the data flow between two devices. It is of three types:
1. Simplex - Data flows in one direction only.
2. Half-duplex - Data flows in both directions but not at the same time.
3. Full-duplex - Data flows in both directions at the same time.

8
OSI Reference Model: Data Link Layer
●The data link layer acts as a link between two nodes and transfers data or frames from one node to another node.
●Transfers the error-free data from one node to another node.
●It is divided into two sub-layers:
1. Media Access Control (MAC) - Responsible for controlling devices' access to a medium.
2. Logical Link Control (LLC) - Responsible for frame synchronization and error control.

Some of the functionalities of the Data link layer are:
Framing - The data in the form of bits are grouped into units called frames.
Physical Addressing - The data link layer adds the MAC address (physical address) of the sender and the receiver for the smooth transmission of the data.
Flow Control - It is done by maintaining a constant data rate on both the sender and receiver ends.
Error Control - To transfer non-erroneous data, erroneous data are traced and retransmitted by the data link layer.
Access Control - The data link layer helps to determine which device has control over a link shared by multiple devices.
9
OSI Reference Model: Network Layer
●The Network layer is responsible for transferring the data from the source to the destination by routing it through the intermediate nodes.
●Among the different possible paths, it chooses the best possible path to transfer the data from source to destination.

Some of the functionalities of the Network layer are:
Packetizing - If the message to be transmitted is too large, it is split into several fragments that are delivered independently and reassembled at the destination node.
Logical Addressing - The network layer adds the IP address of the source and the destination to the header of the packets for identification among all the devices.
Routing - The network layer chooses the best possible path for the transfer of the data from source to destination.

10
OSI Reference Model: Transport Layer
●The transport layer creates various smaller units called segments out of the message received from the application layer.
●It adds source and destination port numbers in the header for the right transfer of the data.
●The main responsibility of the transport layer is the end-to-end delivery of the message and to ensure flow and error control.

Some of the functionalities of the Transport layer are:
Segmentation - The message received from the upper layer is divided into smaller units called segments and is reassembled at the destination by the transport layer.
Port Addressing - The source and destination port numbers are added to the header for the correct handover of the data.
Connection Control - There can be two types of services between two devices:
• Connection-Oriented - In this, the connection is established for the data transmission and is disconnected after the transmission.
• Connectionless - It is less reliable and faster and doesn't require establishing a connection before data transmission.
Error Control - It checks for erroneous data and retransmits the data on a failed delivery.

11
OSI Reference Model: Session Layer
●The main responsibility of the session layer is to establish, maintain and synchronize the communication among the devices.

Some of the functionalities of the Session layer are:
Dialog Control - It allows communication between two systems in half-duplex or full-duplex mode.
Synchronization - It allows a process to add checkpoints to avoid data loss during a crash.

12
OSI Reference Model: Presentation Layer
●The presentation layer establishes
context between application layer
entities.

Some of the functionalities of the Presentation layer are:


Translation - It is the conversion of the data into a commonly
acceptable format.
Encryption - Encryption is done to secure the data from unauthorized
access.
Compression - Compression means reducing the number of bits that need to be transmitted.

13
OSI Reference Model: Application Layer
●This is the closest layer to the end-user.
●It interacts directly with the software application.
●It acts as a window for the user and the software applications to access network services.

Some of the functionalities of the Application layer are:
File transfer and access management - This allows the user to access the files on a remote computer.
Mail services - It provides access to send or receive email.

14
TCP/IP Reference Model
●TCP/IP, which stands for Transmission Control Protocol/Internet Protocol, is the set of communication protocols used in the Internet and similar computer networks.

15
TCP/IP Reference Model

16
Comparison of OSI and TCP/IP Reference Models
S. No. | OSI | TCP/IP
1. | Stands for: Open System Interconnection. | Stands for: Transmission Control Protocol/Internet Protocol.
2. | Developed by ISO. | Developed by ARPANET.
3. | Only a reference model; no actual implementation is done. | This model is used for the development of the internet.
4. | There exist 7 layers. | There exist 5 layers.
5. | Presentation and Session layers are separate layers. | Both are merged and included under the application layer.
6. | Can be used to build models like TCP/IP. | Cannot be used for model building.
7. | Used very little compared to TCP/IP. | Highly used.

17
Layering Concept - Quiz
1. OSI stands for __________.

2. The protocol data unit (PDU) for the application layer in the Internet stack is____________.

3. Which address is used to identify a process on a host by the transport layer?

4. Which layer provides the services to user?

5. Transmission data rate is decided by ____________.

6. The data link layer takes the packets from _________ and encapsulates them into frames for transmission.

18
The Data Link Layer

19
Contents
 Framing
 Error Control
 Flow Control
 Error Detection and Correction
 Error-Correcting Codes
 Error Detecting Codes
 Sliding Window Protocols.

20
Framing
● In the physical layer, Data transmission involves synchronized transmission of bits from
the source to the destination. The data link layer packs these bits into frames.

● Data-link layer takes the packets from the Network Layer and encapsulates them into
frames.

● If the packet is too large for a single frame, it may be divided into smaller frames. Smaller frames make flow control and error control more efficient.

● Then, it sends each frame bit-by-bit on the hardware. At receiver’s end, data link layer
picks up signals from hardware and assembles them into frames.

21
Framing Methods
● A good design must make it easy for a receiver to find the start of new frames while using little of
the channel bandwidth. The four different methods used for framing are:

○ Byte count.

○ Flag bytes with byte stuffing.

○ Flag bits with bit stuffing.

○ Physical layer coding violations

22
Byte Count
● It uses a byte count field in the header to specify the number of bytes in the frame.
● When the data link layer at the destination sees the byte count, it knows how many bytes follow and hence where the end of the frame is.
● The trouble with this algorithm is that the count can be garbled by a transmission error (a sketch of the receiver side follows).
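To make the idea concrete, here is a minimal Python sketch of the receiver side of byte-count framing. It assumes the count field gives the number of payload bytes that follow the count byte (conventions differ between textbooks); the byte values in the example are illustrative.

def split_frames(stream: bytes) -> list:
    """Split a byte stream into frames using a 1-byte count field.
    Sketch only: assumes the count gives the number of payload bytes that follow it."""
    frames, i = [], 0
    while i < len(stream):
        count = stream[i]                        # length field
        payload = stream[i + 1:i + 1 + count]
        if len(payload) < count:                 # stream ended early or the count was garbled
            raise ValueError("incomplete frame")
        frames.append(payload)
        i += 1 + count
    return frames

# Example: two frames of 3 and 2 payload bytes.
print(split_frames(bytes([3, 0x41, 0x42, 0x43, 2, 0x44, 0x45])))   # [b'ABC', b'DE']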

23
Byte Stuffing
● In this method, the start and end of a frame are recognized with the help of flag bytes.
● Two consecutive flag bytes indicate the end of one frame and the start of the next one.
● If a flag byte occurs in the data, a special escape byte (ESC) is inserted just before each ‘‘accidental’’ flag byte in the data.
● Even if an escape byte occurs in the middle of the data, it too is preceded by an inserted escape byte.
● This framing method is only applicable to 8-bit character codes, which is a major disadvantage of this method. A sketch of the stuffing and unstuffing logic follows.
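A minimal Python sketch of byte stuffing and unstuffing. The FLAG and ESC values (0x7E and 0x7D) are assumed for illustration; the slide does not prescribe particular byte values.

FLAG, ESC = 0x7E, 0x7D   # illustrative flag / escape byte values

def byte_stuff(payload: bytes) -> bytes:
    """Frame a payload with flag bytes, escaping any FLAG or ESC inside the data."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)          # insert an escape byte before the accidental flag/escape
        out.append(b)
    out.append(FLAG)
    return bytes(out)

def byte_unstuff(frame: bytes) -> bytes:
    """Remove the framing flags and the inserted escape bytes at the receiver."""
    body, out, i = frame[1:-1], bytearray(), 0
    while i < len(body):
        if body[i] == ESC:
            i += 1                   # skip the escape, keep the byte it protects
        out.append(body[i])
        i += 1
    return bytes(out)

data = bytes([0x41, FLAG, 0x42])
assert byte_unstuff(byte_stuff(data)) == data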

24
Bit Stuffing
● Allows a frame to contain an arbitrary number of bits and an arbitrary character size.
● Each frame begins and ends with a special bit pattern, 01111110, called a flag byte.
● When five consecutive 1s are encountered in the data, the sender automatically stuffs a '0' bit into the outgoing bit stream (see the sketch below).
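A minimal Python sketch of the stuffing rule, operating on a string of '0'/'1' characters for readability; it only models the payload bits, not the 01111110 flags themselves.

def bit_stuff(bits: str) -> str:
    """Insert a '0' after every run of five consecutive '1's in the payload bits."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")          # stuffed bit
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    """Drop the '0' that follows every run of five '1's at the receiver."""
    out, run, skip = [], 0, False
    for b in bits:
        if skip:                     # this is the stuffed 0: discard it
            skip, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "0111111011111100"
assert bit_unstuff(bit_stuff(payload)) == payload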

25
Bit Stuffing

26
Physical layer coding violations

● It is applicable to networks in which the encoding on the physical medium contains some redundancy.
● In such cases, normally a 1 bit is a high-low pair and a 0 bit is a low-high pair.
● The combinations of low-low and high-high, which are not used for data, may be used for marking frame boundaries.

27
Error Control
● To ensure reliable delivery, the receiver sends back positive or negative acknowledgements for every frame received.
● If the sender receives a positive acknowledgement about a frame, it knows the frame has arrived safely.
● On the other hand, the receiver sends a negative acknowledgement to indicate that the sender should retransmit the frame.
● To avoid indefinite waiting, a timer is set at the sender which expires after an interval long enough for the frame to reach the destination, be processed there, and have the acknowledgement propagate back to the sender.
● When frames are transmitted multiple times there is a danger that the receiver may receive duplicate frames.
● To prevent this from happening, sequence numbers are assigned to frames.

28
Flow Control
● Flow control is needed when a sender wants to transmit frames faster than the receiver can accept them. This situation can occur when the sender is running on a fast, powerful computer and the receiver is running on a slow, low-end machine.
● Even if the transmission is error free, the receiver may be unable to handle the frames as fast as they arrive and will lose some.
● Two approaches are commonly used:
1. Feedback-based flow control - the receiver sends back information to the sender giving it permission to send more data.
2. Rate-based flow control - the protocol has a built-in mechanism that limits the rate at which senders may transmit data, without using feedback from the receiver.

29
Error Detection and Correction
● Two strategies for dealing with errors. Both add redundant information to the data that is
sent.
● Error Correction:
● To include enough redundant information to enable the receiver to deduce what the
transmitted data must have been.
● uses error-correcting codes (Forward Error Correction).
● Error Detection:
○ To include only enough redundancy to allow the receiver to deduce that an error
has occurred and to request for a retransmission.
○ Uses error-detecting codes.
● Which one to use When?
30
Types of Errors

31
Error Detecting Codes
1. Parity
2. Checksum
3. Cyclic Redundancy Checks

32
Error Detection: Parity
 The parity check is done by adding an extra bit, called parity bit, to the data to make the
number of 1s either even or odd depending upon the type of parity.
 The parity check is suitable for single bit error detection only.
 The two types of parity checking are
 Even Parity − Here the total number of 1s in the message is made even.
 Odd Parity − Here the total number of 1s in the message is made odd.

33
Error Detection: Parity
 Suppose that a sender wants to send the data 1001101 using the even parity check method. It will add the parity bit as shown below.
 The receiver will decide whether an error has occurred by counting whether the total number of 1s is even.
 When the above frame is received, three cases may occur, namely:
 No error
 Single-bit error detected
 Failure to detect multiple-bit errors

What if the receiver receives the data as 10011100? (A sketch of the parity check follows.)
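A short Python sketch of the even-parity check (assuming the parity bit is appended after the data). It shows why the receiver accepts 10011100: that pattern still contains an even number of 1s, so the multiple-bit error goes undetected.

def add_even_parity(bits: str) -> str:
    """Append a parity bit so that the total number of 1s (data + parity) is even."""
    parity = bits.count("1") % 2       # 1 if the number of ones is odd
    return bits + str(parity)

def check_even_parity(frame: str) -> bool:
    """Receiver side: accept the frame if it contains an even number of 1s."""
    return frame.count("1") % 2 == 0

frame = add_even_parity("1001101")     # the data from the slide has four 1s, so the parity bit is 0
print(frame)                           # transmitted frame
print(check_even_parity(frame))        # True: no error
print(check_even_parity("10011100"))   # also True: the multiple-bit error is not detected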


34
Error Detection: Checksum
 This is a block code method where a checksum is created based on the data values in the
data blocks to be transmitted using some algorithm and appended to the data.
 When the receiver gets this data, a new checksum is calculated and compared with the
existing checksum. A non-match indicates an error.

Error Detection by Checksums


 For error detection by checksums, data is divided into fixed sized frames or segments.
 Sender’s End − The sender adds the segments using 1’s complement arithmetic to get the
sum. It then complements the sum to get the checksum and sends it along with the data
frames.
 Receiver’s End − The receiver adds the incoming segments along with the checksum using
1’s complement arithmetic to get the sum and then complements it.

35
Error Detection: Checksum
Suppose that the sender wants to send 4 frames each of 8 bits, where the frames are
11001100, 10101010, 11110000 and 11000011.
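A minimal Python sketch of the sender and receiver sides of the checksum computation, using the four 8-bit frames listed above (the helper names are illustrative).

def ones_complement_sum(words, bits=8):
    """Add words with end-around carry (1's complement arithmetic)."""
    mask, total = (1 << bits) - 1, 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap the carry back in
    return total

def make_checksum(words, bits=8):
    """Sender: complement of the 1's-complement sum of the data segments."""
    return ~ones_complement_sum(words, bits) & ((1 << bits) - 1)

def verify(words, checksum, bits=8):
    """Receiver: the sum of all segments plus the checksum complements to zero if error-free."""
    return (~ones_complement_sum(list(words) + [checksum], bits) & ((1 << bits) - 1)) == 0

segments = [0b11001100, 0b10101010, 0b11110000, 0b11000011]   # frames from the slide
cs = make_checksum(segments)
print(f"checksum = {cs:08b}")
print(verify(segments, cs))    # True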

36
Error Detection: CRC
 Cyclic Redundancy Check (CRC) is a block code invented by W. Wesley Peterson in 1961.
 CRC involves binary division of the data bits being sent by a predetermined divisor
agreed upon by the communicating system.
 The divisor is generated using polynomials. So, CRC is also called polynomial code
checksum.
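A minimal Python sketch of the modulo-2 (XOR) division behind CRC. The data bits and generator used below (100100 with generator 1101, i.e. x^3 + x^2 + 1) are illustrative and deliberately different from the exercise problems that follow.

def mod2_div(bits: str, generator: str) -> str:
    """Modulo-2 (XOR) long division; returns the remainder of bits divided by the generator."""
    rem = list(bits)
    for i in range(len(bits) - len(generator) + 1):
        if rem[i] == "1":                          # reduce whenever the leading bit is 1
            for j, g in enumerate(generator):
                rem[i + j] = str(int(rem[i + j]) ^ int(g))
    return "".join(rem[-(len(generator) - 1):])

def crc_frame(data: str, generator: str) -> str:
    """Sender: append r zero bits (r = degree of the generator), divide, and append the remainder."""
    r = len(generator) - 1
    return data + mod2_div(data + "0" * r, generator)

def crc_ok(frame: str, generator: str) -> bool:
    """Receiver: an error-free frame divides evenly, i.e. leaves a zero remainder."""
    return set(mod2_div(frame, generator)) <= {"0"}

gen = "1101"                                       # illustrative generator x^3 + x^2 + 1
frame = crc_frame("100100", gen)
print(frame)                                       # data with the 3-bit CRC appended
corrupted = frame[:2] + ("1" if frame[2] == "0" else "0") + frame[3:]   # flip one bit
print(crc_ok(frame), crc_ok(corrupted))            # True False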

37
Error Detection: CRC-Process

38
Error Detection: CRC-Example

39
Error Detection: CRC-Exercise Problems
1. A bit stream 1101011011 is transmitted using the standard CRC method. The generator polynomial is x^4 + x + 1. What is the actual bit string transmitted?

2. A bit stream 10011101 is transmitted using the standard CRC method. The generator polynomial is x^3 + 1.
a) What is the actual bit string transmitted?
b) Suppose the third bit from the left is inverted during transmission. How will the receiver detect this error?

40
Hamming Code
 It was developed by R.W. Hamming.
 It is a block code that is capable of detecting up to 2 simultaneous bit errors and
correcting single-bit errors.
 In this method, source encodes the message by inserting redundant bits within the
message.
 These redundant bits are extra bits that are generated and inserted at specific
positions in the message.
 When the destination receives this message, it performs recalculations to detect
errors and find the bit position that has error.

41
Hamming Code-Encoding a Message
 The redundant bits are some extra binary bits that are not part of the original data.
 How do we determine the number of redundant bits to be added?
 We use the formula 2^R ≥ M + R + 1, where R = number of redundant bits and M = number of data bits.
 For 4 data bits, 3 redundant bits are needed.
 Redundant bits are inserted at the bit positions which are powers of 2, i.e. positions 1, 2, 4, 8, 16, etc.
 These bit positions are indicated as p1 (position 1), p2 (position 2), p3 (position 4), etc.

Determining the value of the redundant bits using even parity for Data = 1000 (see the sketch below).
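A Python sketch of the encoding procedure: it computes R from 2^R ≥ M + R + 1, reserves the power-of-two positions for parity bits, and fills them using even parity. The mapping of data bits to positions (first data bit to position 3, and so on) is an assumed convention; textbook figures sometimes number the bits the other way round.

def hamming_encode(data_bits: str) -> str:
    """Hamming encoding with even parity (sketch; bit-ordering conventions vary between textbooks)."""
    m, r = len(data_bits), 0
    while 2 ** r < m + r + 1:                  # smallest R satisfying 2^R >= M + R + 1
        r += 1
    n = m + r
    code = [None] * (n + 1)                    # index 0 unused; codeword occupies positions 1..n
    data = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                    # positions that are NOT powers of two carry data bits
            code[pos] = next(data)
    for i in range(r):
        p = 2 ** i                             # parity bit at position 2^i
        ones = sum(code[pos] == "1" for pos in range(1, n + 1) if pos & p and pos != p)
        code[p] = "1" if ones % 2 else "0"     # even parity over the positions this bit covers
    return "".join(code[1:])

print(hamming_encode("1000"))                  # 4 data bits -> 3 redundant bits -> 7-bit codeword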
42
Hamming Code- Problem

 Find the hamming code for the data: 1001101? Assume Even parity for
transmission.
a. 10011100101
b. 11010000101
c. 10001100101
d. 11111001011

43
Hamming Code - Error Detection & Correction
 The receiver recomputes each parity check over the received 7-bit Hamming code; if there is a mismatch, mark 1, else mark 0.
 Step 1: For checking parity bit P1, use the check-one, skip-one method. In this example the check gives 1.
 Step 2: For checking P2, use the check-two, skip-two method, which covers the corresponding data bits. Here the check gives 0.
 Step 3: For checking P4, use the check-four, skip-four method. Here the check gives 1.
 Reading the results as P4 P2 P1 = 1 0 1 = 5, the 5th bit is received in error; flipping that bit gives the corrected data.
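A matching Python sketch of the receiver side. The received word below is the even-parity codeword for the data 1000 (under the same position convention as the encoding sketch) with its 5th bit flipped, reproducing the P4 P2 P1 = 1 0 1 case above.

def hamming_syndrome(code: str) -> int:
    """Recompute each even-parity check; the mismatches, read as P4 P2 P1 in binary, give the error position (0 = no error)."""
    n = len(code)
    bits = [None] + list(code)                 # positions 1..n, index 0 unused
    syndrome, p = 0, 1
    while p <= n:
        ones = sum(bits[pos] == "1" for pos in range(1, n + 1) if pos & p)
        if ones % 2 != 0:
            syndrome += p                      # this parity check failed
        p *= 2
    return syndrome

def hamming_correct(code: str) -> str:
    """Flip the bit at the syndrome position to repair a single-bit error."""
    pos = hamming_syndrome(code)
    if pos == 0:
        return code
    bits = list(code)
    bits[pos - 1] = "1" if bits[pos - 1] == "0" else "0"
    return "".join(bits)

received = "1110100"                 # codeword for data 1000 with its 5th bit flipped (assumed convention)
print(hamming_syndrome(received))    # 5 -> 5th bit is in error
print(hamming_correct(received))     # corrected codeword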
44
Flow Control Protocols

45
Stop and Wait Flow Control
Sender:
 Send one data packet at a time.
 Send the next packet only after receiving
acknowledgement for the previous.

Receiver:
 Send acknowledgement after receiving and
consuming a data packet.

Problems
 Lost Data or lost Acknowledgement causes the
protocol to fail
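A toy Python simulation of the stop-and-wait behaviour described above; the loss probability, the seed and the duplicate handling via sequence numbers are illustrative assumptions, not part of the slide.

import random

def stop_and_wait(frames, loss_prob=0.3, seed=1):
    """Toy simulation: send one frame, wait for its ACK, and retransmit on timeout
    when either the data frame or the acknowledgement is lost."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    for seq, frame in enumerate(frames):
        acked = False
        while not acked:
            transmissions += 1
            if rng.random() < loss_prob:                    # data frame lost -> sender times out
                continue
            if not delivered or delivered[-1][0] != seq:    # receiver drops duplicates by sequence number
                delivered.append((seq, frame))
            acked = rng.random() >= loss_prob               # if the ACK is lost, the sender times out too
    return [f for _, f in delivered], transmissions

print(stop_and_wait(["F0", "F1", "F2"]))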

46
Stop and Wait Flow Control

47
Sliding Window Protocols
 Sliding window protocols are data link layer protocols for reliable and sequential
delivery of data frames.
 In this protocol, multiple frames can be sent by a sender at a time before receiving an
acknowledgment from the receiver.
 The term sliding window refers to the imaginary boxes to hold frames. Sliding window
method is also known as windowing.

48
Sliding Window Protocol
 Sliding window protocol is a flow
control protocol.
 It allows the sender to send
multiple frames before receiving
the acknowledgements.
 Sender slides its window on
receiving the acknowledgements
for the sent frames.
● This allows the sender to send more frames.

Comparison of transmission using (a) stop-and-wait, and (b) sliding window.

Tw = W × Ts
Ts - throughput with the stop-and-wait protocol
Tw - throughput with the sliding window protocol
W - window size (number of frames sent before waiting for an acknowledgement)
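As an illustration with made-up numbers: if stop-and-wait achieves Ts = 100 kbps on a given link, a window of W = 4 outstanding frames yields roughly Tw = W × Ts = 4 × 100 kbps = 400 kbps, provided this does not exceed the capacity of the link.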
49
Sliding Window Protocol

An illustration of a sliding window in (a) initial, (b) intermediate, and (c) final positions.

50
Sliding Window Protocol-Working Principle

 Both protocols allow transmission of multiple frames before receiving the acknowledgment for the first frame.
 Automatic Repeat Request (ARQ) is an error-control mechanism for data transmission which uses acknowledgements (or negative acknowledgements) and timeouts to achieve reliable data transmission over an unreliable communication link.

Go-Back-N ARQ
 It uses the concept of the sliding window, and so is also called a sliding window protocol.
 The frames are sequentially numbered and a finite number of frames are sent.
 If the acknowledgment of a frame is not received within the time period, all frames starting from that frame are retransmitted.

Selective Repeat ARQ
 Here only the erroneous or lost frames are retransmitted, while the good frames are received and buffered.

51
Sliding Window Protocol-Working Principle

52
Channel Allocation Methods
 When multiple users use a shared network and access the communication channel
simultaneously, it leads to collisions.
 In order to avoid collisions channel allocation methods are required to allocate the same
channel among multiple users.

 There are three types of channel allocation techniques:


 Static channel allocation
 Dynamic channel allocation
 Hybrid channel allocation

53
Static channel allocation
 Static channel allocation is also called fixed channel allocation.
 The frequency division multiplexing (FDM) and time-division multiplexing (TDM) are two
examples of static channel allocation.
 In these methods, either a fixed frequency or fixed time slot is allotted to each user.

54
Dynamic channel allocation
 In dynamic channel allocation channels are allotted to users dynamically as per their
requirements from a central pool.
 This technique optimizes bandwidth usage and provides fast data transmission.
 Eg: Statistical TDM
 Statistical time division multiplexing (STDM) is a technique for transmitting several types of data at the same time across a single transmission cable or line. It is often used for managing data being transmitted via a local area network (LAN) or a wide area network (WAN). In these situations, the data is often transmitted at the same time from any number of input devices attached to the network, including computers, printers, or fax machines. It can also be used in telephone switchboard settings to manage the simultaneous calls going to or coming from multiple internal telephone lines.
 Disadvantage: More computational complexity

55
Hybrid channel allocation
 The mixture of fixed channel allocation and dynamic channel allocation is called hybrid
channel allocation.
 The total available channels are divided into two sets, fixed and dynamic sets.
 First, a fixed set of channels is used for the requested host machines. If all fixed sets are
busy, then dynamic sets are used.
 When there is heavy traffic in a network, then hybrid channel allocation is useful.

56
Assumptions for Dynamic Channel Allocation
● Independent Traffic: It is assumed that the users are independent of each other.
● Single Channel: The algorithms assume that all contending stations request for
transmission via a single channel.
● Detectable Collisions: If two frames from different stations are simultaneously transmitted,
the resulting signal is distorted, and a collision is said to occur. If a collision occurs, all
stations should be able to detect the collision.
● Continuous Time or Slotted Time: In continuous time, frame transmission can start at any
instant. In slotted time, time is divided into discrete slots.
● Carrier Sense or No Carrier Sense: The stations may or may not be capable of sensing
whether the channel is in use.
 In carrier sense protocols, a station sends a frame only when it senses that the channel is idle.
 In no carrier sense protocols, a station transmits a frame as soon as one is available.

57
Carrier Sense Multiple Access Protocols

● Protocols in which stations listen for a carrier and act accordingly are
called Carrier Sense Protocols.
● Multiple Access refers to the fact that multiple nodes send and receive on the medium.
● The 3 types of CSMA protocols are
 Non- Persistent CSMA
 1-persistent CSMA
 p-persistent CSMA

58
Non-persistent CSMA
1. A node that has a frame to send first senses the channel.
2. If the channel is idle, then it transmits immediately.
3. If the channel is busy, then it waits for a random amount of time and then
senses the channel again.
● Advantages:
 Better Channel Utilization
 Reduced number of collisions.

59
Non-persistent CSMA
● In this method, a station that has frames to send senses the channel. If the channel is idle, it sends the frame immediately. If the channel is found busy, it waits for a random time and then senses the channel again.
● The station does not keep sensing the channel just to seize it the moment the previous transmission ends. The main advantage of this method is that it reduces the chances of collision; the problem is that it reduces the efficiency of the network, because the channel may stay idle after a transmission ends while the waiting stations are still backing off.

60
1-persistent CSMA
1. When a node has data to send, it first listens to the channel to see if
anyone is transmitting.
2. If the channel is busy, the station waits until it becomes idle.
3. When the station identifies an idle channel, it transmits a frame.
4. If a collision occurs, the station waits a random amount of time and starts
retransmission.

61
1-persistent CSMA
1. In 1-persistent CSMA, the station continuously senses the channel to check whether it is idle or busy. When the channel is busy, the station waits for the channel to become idle. When the station finds the channel idle, it transmits the frame without any delay, i.e. with probability 1. Because of this probability of 1, it is called 1-persistent CSMA.
2. The problem with this method is that there is a large chance of collision, because two or more stations may find the channel idle at the same time and transmit their frames simultaneously. When a collision occurs, each station has to wait a random time before sensing the channel and starting all over again.
3. Another source of collisions is the following: while two stations are exchanging data frames, all the intermediate stations sense this and do not transmit; when that transmission ends, they all sense the idle channel and send their frames at the same time, which makes the collision rate even worse.

62
p-persistent CSMA
1. This method is applicable for slotted channels so that the time slot
duration is equal to or greater than maximum propagation delay time.
2. When a station becomes ready to send, it senses the channel.
3. If it is idle, it transmits with probability p. With probability q = 1 - p, it defers until the next slot. If that slot is also idle, it again either transmits or defers, with probabilities p and q.
4. This process is repeated until either the frame has been transmitted or another node has begun transmitting.
● Advantages (see the sketch below):
 p-persistent CSMA reduces the chances of collision.
 It improves the efficiency of the network.
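A toy Python sketch of the p-persistent decision rule described above. The probability p, the idle/busy slot pattern and the seed are illustrative, and the sketch does not model another station seizing the channel.

import random

def p_persistent_send(p=0.3, idle_pattern=(False, False, True, True, True), seed=7):
    """On each idle slot transmit with probability p, otherwise defer to the next slot;
    on a busy slot simply wait (toy sketch with an assumed slot pattern)."""
    rng = random.Random(seed)
    slot = 0
    while True:
        idle = idle_pattern[slot % len(idle_pattern)]
        if not idle:
            print(f"slot {slot}: channel busy, wait for the next slot")
        elif rng.random() < p:
            print(f"slot {slot}: channel idle, transmit")
            return slot
        else:
            print(f"slot {slot}: channel idle, defer with probability 1 - p")
        slot += 1

p_persistent_send()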
63
p-persistent CSMA
● This method is used when the channel has time slots whose duration is equal to or greater than the maximum propagation delay time. When a station is ready to send frames, it senses the channel. If the channel is found busy, the station waits for the next slot.
● If the channel is found idle, it transmits the frame with probability p; with the remaining probability q = 1 - p, the station waits for the beginning of the next time slot. If the next slot is also found idle, it again transmits or waits with probabilities p and q.
● This process is repeated until either the frame gets transmitted or another station has started transmitting.

64
CSMA Protocols:
Overview

65
Collision Free Protocols
 CSMA protocols nullify the possibility of collisions once the transmission channel is acquired by a station.
 However, a collision can still occur during the contention period if more than one station starts to transmit at the same time.
 Collision-free protocols ensure that collisions do not occur.
 They resolve the contention for the channel without collisions during the contention period.

66
Token Passing Protocol
● Token passing involves passing a small
message called a token from one station
to the next in a predefined order.
● The token represents permission to
send.
● If a station has a frame queued for
transmission when it receives the token,
it can send that frame before it passes
the token to the next station.
● If it has no queued frame, it simply
passes the token.

67
Bit – Map Protocol
● In bit map protocol, the contention period is divided into N slots, where N
is the total number of stations sharing the channel.
● If a station has a frame to send, it sets the corresponding bit in the slot.
● Collisions are avoided by mutual agreement among the contending
stations on who gets the channel.

68
Bit – Map Protocol: Disadvantages
● A problem with the basic bit-map protocol is that the overhead is 1 bit per
station
● It does not scale well to networks with thousands of stations.

69
Binary Count Down Protocol
● Overcomes the shortcomings of Bitmap protocol.
● A station that wants to use the channel broadcasts its address as a binary bit string,
starting with the high-order bit.
● The bits in each address position from different stations are BOOLEAN ORed together.
Drawbacks:
● It implicitly assumes that the transmission delays are negligible so that all stations see
asserted bits essentially instantaneously.

70
Binary Count Down Protocol
● Example
● Suppose that six stations contend for channel access which have the
addresses: 1011, 0010, 0101, 1100, 1001 and 1101.
● The iterative steps are −
• All stations broadcast their most significant bit, i.e. 1, 0, 0, 1, 1, 1. Stations 0010 and 0101 see a 1 bit from other stations, and so they give up competing for the channel.
• The stations 1011, 1100, 1001 and 1101 continue. They broadcast their next bit, i.e. 0, 1, 0, 1. Stations 1011 and 1001 see a 1 bit from other stations, and so they give up competing for the channel.
• The stations 1100 and 1101 continue. They broadcast their next bit, i.e. 0, 0. Since both of them have the same bit value, both of them broadcast their next bit.
• The stations 1100 and 1101 broadcast their least significant bit, i.e. 0 and 1. Since station 1101 has a 1 while the other has a 0, station 1101 gets access to the channel.
• After station 1101 has completed frame transmission, or there is a time-out, the next contention cycle starts. A code sketch of this arbitration follows.
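A small Python sketch of the arbitration logic, reproducing the example above: the channel is modelled as a wired-OR of the bits sent in each round, and a station that sent a 0 while the OR is 1 drops out.

def binary_countdown(addresses, width=4):
    """Stations send their addresses MSB-first; the highest address wins the channel."""
    contenders = list(addresses)
    for bit in range(width - 1, -1, -1):            # high-order bit first
        bus = 0
        for a in contenders:
            bus |= (a >> bit) & 1                   # wired-OR of all bits asserted in this round
        contenders = [a for a in contenders if bus == 0 or (a >> bit) & 1]
    return contenders[0]

stations = [0b1011, 0b0010, 0b0101, 0b1100, 0b1001, 0b1101]   # the six addresses from the example
print(f"{binary_countdown(stations):04b}")                      # 1101 gets the channel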

71
Binary Count Down Protocol
● Example
● Suppose that six stations contend for channel access which have the
addresses: 1011, 0010, 0101, 1100, 1001 and 1101.
● The procedure is illustrated as follows −

72
Ethernet (Wired LAN/IEEE 802.3)
 The original form of Ethernet supports data rates from 3 to 10 Mbps.
 It operates both in the physical layer and in the MAC sublayer of the OSI
model.
 In the physical layer, the features of the cables and networks are
considered.
 In MAC sublayer, the frame formats for the Ethernet data frame are
defined.
 Classic Ethernet was first standardized in the 1980s as the IEEE 802.3 standard.

73
Ethernet Frame Structure
The main fields of a frame of classic Ethernet (IEEE 802.3) are as follows:

74
Ethernet
The main fields of a frame of classic Ethernet (IEEE 802.3) are as follows:
 Preamble − It provides alert and timing pulse for transmission (7 Bytes)
 Start of Frame Delimiter (SOF) − 1-Byte field
 Destination Address − physical address of destination station (6-Bytes)
 Source Address − physical address of the sending station (6-Bytes)
 Type/Length − Indicates the type of the payload or the length of the data field in bytes (2 Bytes)
 Data − This is a variable sized field (46 to 1500 Bytes)
 CRC − CRC contains the error detection information.

Padding − This is added to the data to bring its length up to the minimum requirement of 46 bytes.
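A minimal Python sketch that lays out the destination address, source address, Type/Length field, padded data and CRC field described above. The MAC addresses and the type value 0x0800 are illustrative, the CRC is left as a zero placeholder since real adapters compute CRC-32 in hardware, and the preamble and SFD are added on the wire.

import struct

def build_frame(dst_mac: bytes, src_mac: bytes, eth_type: int, payload: bytes) -> bytes:
    """Sketch of the classic Ethernet frame layout (CRC shown only as a placeholder field)."""
    if len(payload) < 46:
        payload = payload + bytes(46 - len(payload))           # pad the data field up to the 46-byte minimum
    header = dst_mac + src_mac + struct.pack("!H", eth_type)   # 6 + 6 + 2 bytes
    crc_placeholder = struct.pack("!I", 0)                     # 4-byte CRC field
    return header + payload + crc_placeholder

# Illustrative addresses and type value (0x0800 = IPv4).
frame = build_frame(b"\xaa\xbb\xcc\xdd\xee\xff", b"\x11\x22\x33\x44\x55\x66", 0x0800, b"hello")
print(len(frame))   # 64: header + padded data + CRC, excluding preamble/SFD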

75
Fast Ethernet: Variations

76
Ethernet Variations and Characteristics

77
Ethernet: Physical Media

Twisted Pair
Cables

Fiber Optic
Cable
78
Thank You

79
