Lecture-2.1.1
The data link layer is the second layer of the OSI model, sitting directly above the physical
layer. The data link layer is responsible for maintaining the data link between two hosts or nodes.
Before going through the design issues in the data link layer, it is worth looking at its
sub-layers and their functions, which are described below.
There are many causes, such as noise and cross-talk, which may corrupt data during
transmission. The upper layers work on a generalized view of the network
architecture and are not aware of actual hardware data processing. Hence, the upper layers
expect error-free transmission between the systems. Most applications would not
function as expected if they received erroneous data. Applications such as voice and video may
not be as badly affected and may still function well despite some errors.
The data link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are controlled, it
is essential to know what types of errors may occur.
Types of Errors
o Single-bit error − only one bit in the data unit has changed.
o Burst error − two or more bits in the data unit have changed.
Error control has two aspects:
o Error detection
o Error correction
Error Detection
Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy
Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm that
the bits received at the other end are the same as those sent. If the counter-check at the
receiver's end fails, the bits are considered corrupted.
Parity Check
One extra bit is sent along with the original bits to make number of 1s either even in case of
even parity, or odd in case of odd parity.
While creating a frame, the sender counts the number of 1s in it. For example, if even parity is
used and the number of 1s is even, then one bit with value 0 is added. This way the number of
1s remains even. If the number of 1s is odd, a bit with value 1 is added to make it even.
Basic approach used for error detection is the use of redundancy bits, where additional bits
are added to facilitate detection of errors.
Blocks of data from the source are passed through a parity bit generator, which appends the
parity bit before transmission. This scheme makes the total number of 1s even, which is why
it is called even parity checking.
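The even-parity scheme described above can be sketched in a few lines of Python; the function names here are illustrative, not from any standard library:

```python
def add_even_parity(bits):
    """Append a parity bit so that the total number of 1s is even."""
    parity = sum(bits) % 2        # 1 if the count of 1s is odd, else 0
    return bits + [parity]

def check_even_parity(bits):
    """Receiver side: the frame is assumed valid if the count of 1s is even."""
    return sum(bits) % 2 == 0

frame = add_even_parity([1, 0, 1, 1])   # three 1s -> parity bit 1 is appended
print(frame)                            # [1, 0, 1, 1, 1]
print(check_even_parity(frame))         # True
```

Note that even parity detects any odd number of flipped bits but misses an even number of errors in the same frame.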
Checksum
o In checksum error detection scheme, the data is divided into k segments each of m
bits.
o In the sender’s end the segments are added using 1’s complement arithmetic to get
the sum. The sum is complemented to get the checksum.
o The checksum segment is sent along with the data segments.
o At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
o If the result is zero, the received data is accepted; otherwise discarded.
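The four checksum steps above can be sketched directly; the helper names and the 4-bit segment size are assumptions chosen for illustration:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments using 1's complement arithmetic (wrap-around carry)."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        total = (total & mask) + (total >> m)   # fold any carry back in
    return total

def make_checksum(segments, m):
    """Sender: the checksum is the complement of the 1's-complement sum."""
    return (~ones_complement_sum(segments, m)) & ((1 << m) - 1)

def accept(segments_plus_checksum, m):
    """Receiver: accept only if the complemented sum of everything is zero."""
    total = ones_complement_sum(segments_plus_checksum, m)
    return (~total) & ((1 << m) - 1) == 0

data = [0b1001, 0b1010]          # two 4-bit segments
csum = make_checksum(data, 4)    # 0b1011
print(accept(data + [csum], 4))  # True: received data is accepted
```

Flipping any segment bit makes the receiver's complemented sum non-zero, so the frame is discarded.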
Lecture-2.1.2
Flow control and Error control are the two main responsibilities of the Data link
layer. Let us understand what these two terms specify. For the node-to-node delivery
of the data, the flow and error control are done at the data link layer.
Flow Control mainly coordinates the amount of data that can be sent before
receiving an acknowledgment from the receiver, and it is one of the major duties of
the data link layer.
o For most of the protocols, flow control is a set of procedures that mainly
tells the sender how much data the sender can send before it must wait for
an acknowledgment from the receiver.
o The data flow must not be allowed to overwhelm the receiver, because any
receiving device has a limited speed at which it can process incoming data
and a limited amount of memory in which to store it.
o The processing rate is often slower than the transmission rate; for this reason
each receiving device has a block of memory, commonly known as a buffer,
that is used to store incoming data until it is processed. If the buffer begins
to fill up, the receiver must be able to tell the sender to halt transmission
until it is once again able to receive.
Thus flow control makes the sender wait for an acknowledgment from the
receiver before continuing to send more data.
Some of the common flow control techniques are: Stop-and-Wait and sliding window
technique.
Error Control contains both error detection and error correction. It mainly allows
the receiver to inform the sender about any damaged or lost frames during the
transmission and then it coordinates with the retransmission of those frames by the
sender.
The term error control in the data link layer mainly refers to the methods of error
detection and retransmission. Error control is implemented in a simple way:
whenever an error is detected during the exchange, the specified frames are
retransmitted. This process is also referred to as Automatic Repeat
reQuest (ARQ).
Protocols
Protocols can be designed for noiseless channels (that is, error-free) and for
noisy channels (that is, error-creating). The protocols for noiseless channels
cannot really be used in real life; they mainly serve as the basis for the
protocols used for noisy channels.
Flow Control
When a data frame (Layer-2 data) is sent from one host to another over a single
medium, it is required that the sender and receiver should work at the same speed.
That is, the sender sends at a speed at which the receiver can process and accept the
data. What if the speed (hardware/software) of the sender or receiver differs? If the sender
sends too fast, the receiver may be overloaded (swamped) and data may be lost.
The sliding window is a technique for sending multiple frames at a time. It controls the
data packets between the two devices where reliable and gradual delivery of data
frames is needed. It is also used in TCP (Transmission Control Protocol).
In this technique, each frame is assigned a sequence number. The sequence
numbers are used to find missing data at the receiver's end. The sliding window
technique also uses the sequence numbers to avoid duplicate data.
The sliding window protocols are of two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ
The size of the sender window is N in this protocol. For example, Go-Back-8, the size
of the sender window, will be 8. The receiver window size is always 1.
If the receiver receives a corrupted frame, it discards it; the receiver does not accept
corrupted frames. When the sender's timer expires, the sender retransmits all frames
beginning with the damaged or lost one.
The design of the Go-Back-N ARQ protocol is shown below.
The example of Go-Back-N ARQ is shown below in the figure.
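The window behaviour can be illustrated with a small simulation. This is a sketch under simplifying assumptions (frames listed in `lost_once` are lost only on their first transmission, and acknowledgments are instantaneous), not a full protocol implementation:

```python
def go_back_n(n_frames, window, lost_once):
    """Return the order in which frames are (re)transmitted."""
    base, sent_count, log = 0, {}, []
    while base < n_frames:
        upto = min(base + window, n_frames)
        for seq in range(base, upto):          # send the whole window
            log.append(seq)
            sent_count[seq] = sent_count.get(seq, 0) + 1
        # the receiver delivers in order, stopping at the first frame
        # that was lost on its first transmission
        acked = base
        while acked < upto and not (acked in lost_once and sent_count[acked] == 1):
            acked += 1
        base = acked   # cumulative ACK slides the window; a timeout resends the rest
    return log

print(go_back_n(4, 2, {1}))   # [0, 1, 1, 2, 3] -> frame 1 is retransmitted
```

Because the receiver window is 1, everything from the lost frame onward is sent again, which is exactly the bandwidth cost Selective Repeat ARQ avoids.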
Selective Repeat ARQ
Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request.
It is a data link layer protocol that uses a sliding window method. The Go-Back-N ARQ
protocol works well if there are few errors. But if the frames contain many errors, a lot
of bandwidth is lost in sending the frames again, so we use the Selective Repeat ARQ
protocol instead. In this protocol, the size of the sender window is always equal to the size of
the receiver window, and the size of the sliding window is always greater than 1.
If the receiver receives a corrupt frame, it does not simply discard it. It sends a
negative acknowledgment to the sender, and the sender retransmits that frame as soon
as it receives the negative acknowledgment; there is no waiting for a time-out to
send that frame. The design of the Selective Repeat ARQ protocol is shown below.
The example of the Selective Repeat ARQ protocol is shown below in the figure.
Difference between Go-Back-N ARQ and Selective Repeat ARQ
Lecture-2.1.3
• High-level Data Link Control (HDLC)
High-level Data Link Control (HDLC) is a group of communication protocols of the data link
layer for transmitting data between network points or nodes. Since it is a data link protocol,
data is organized into frames. A frame is transmitted via the network to the destination that
verifies its successful arrival. It is a bit-oriented protocol that is applicable for both
point-to-point and multipoint communications.
Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.
o Normal Response Mode (NRM) − Here, there are two types of stations: a primary
station that sends commands and secondary stations that respond to received
commands. It is used for both point-to-point and multipoint communications.
o Asynchronous Balanced Mode (ABM) − Here, the configuration is balanced, i.e. each
station can both send commands and respond to commands. It is used only for
point-to-point communications.
HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame. The fields of an HDLC frame are −
o Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The
bit pattern of the flag is 01111110.
o Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field
may be from 1 byte to several bytes.
o Control − It is 1 or 2 bytes containing flow and error control information.
o Payload − This carries the data from the network layer. Its length may vary from one
network to another.
o FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code).
There are three types of HDLC frames. The type of frame is determined by the control field of
the frame −
o I-frame − I-frames or Information frames carry user data from the network layer.
They also include flow and error control information that is piggybacked on user data.
The first bit of control field of I-frame is 0.
o S-frame − S-frames or Supervisory frames do not contain information field. They are
used for flow and error control when piggybacking is not required. The first two bits
of control field of S-frame is 10.
o U-frame − U-frames or Un-numbered frames are used for myriad miscellaneous
functions, like link management. It may contain an information field, if required. The
first two bits of control field of U-frame is 11.
Point-to-Point Protocol (PPP)
Point-to-Point Protocol (PPP) is a communication protocol of the data link layer that is used
to transmit multiprotocol data between two directly connected (point-to-point) computers. It
is a byte-oriented protocol that is widely used in broadband communications with heavy
loads and high speeds. Since it is a data link layer protocol, data is transmitted in frames. It is
defined in RFC 1661.
PPP Frame
PPP is a byte-oriented protocol where each field of the frame is composed of one or more
bytes. The fields of a PPP frame are −
o Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the
flag is 01111110.
o Address − 1 byte which is set to 11111111 in case of broadcast.
o Control − 1 byte set to a constant value of 11000000.
o Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
o Payload − This carries the data from the network layer. The maximum length of the
payload field is 1500 bytes. However, this may be negotiated between the endpoints
of communication.
o FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code).
Byte Stuffing in PPP Frame − Byte stuffing is used in the PPP payload field whenever the flag
sequence appears in the message, so that the receiver does not consider it the end of the
frame. The escape byte, 01111101, is stuffed before every byte that is the same as
the flag byte or the escape byte. On receiving the message, the receiver removes the
escape bytes before passing the payload on to the network layer.
Example :
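A minimal sketch of the simple stuffing scheme described above (note: real PPP additionally XORs each escaped byte with 0x20; that detail is omitted here to match the text):

```python
FLAG = 0x7E   # 01111110, the frame delimiter
ESC  = 0x7D   # 01111101, the escape byte

def byte_stuff(payload):
    """Sender: insert ESC before any byte equal to the flag or escape byte."""
    out = []
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed):
    """Receiver: drop each ESC and keep the byte that follows it."""
    out, i = [], 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1               # skip the escape itself
        out.append(stuffed[i])
        i += 1
    return bytes(out)

msg = bytes([0x41, FLAG, 0x42, ESC])
print(byte_stuff(msg).hex())                 # 417d7e427d7d
print(byte_unstuff(byte_stuff(msg)) == msg)  # True
```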
Chapter-2.2.1
• Medium Access Control Sublayer (MAC sublayer)
The medium access control (MAC) is a sublayer of the data link layer of the Open Systems
Interconnection (OSI) reference model for data transmission. It is responsible for flow
control and multiplexing of the transmission medium. It controls the transmission of data
packets via remotely shared channels and sends data over the network interface card.
The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The
data link layer is the second lowest layer. It is divided into two sublayers −
Channel allocation is a process in which a single channel is divided and allotted to multiple
users in order to carry user-specific tasks. The number of users may vary every time the
process takes place. If there are N users and the channel is divided into N equal-sized
sub-channels, each user is assigned one portion. If the number of users is small and does not
vary over time, then Frequency Division Multiplexing can be used, as it is a simple and
efficient channel bandwidth allocation technique.
Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs
and MANs, and Dynamic Channel Allocation.
T = 1/(μC − λ)
T(FDM) = 1/(μ(C/N) − λ/N) = N/(μC − λ) = N·T
Where,
C = capacity of the channel (bps),
1/μ = mean frame length (bits/frame),
λ = mean frame arrival rate (frames/sec),
T = mean delay, and N = number of sub-channels in FDM.
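A worked example of the delay formula T = 1/(μC − λ) and its FDM counterpart; the specific numbers below are assumptions chosen for easy arithmetic:

```python
C   = 100e6    # channel capacity, bits/sec (assumed)
L   = 10000    # mean frame length, bits, i.e. 1/mu (assumed)
lam = 5000     # mean arrival rate, frames/sec (assumed)
N   = 10       # number of equal FDM subchannels (assumed)

T     = 1 / (C / L - lam)            # mu*C = C/L frames/sec of service
T_fdm = 1 / ((C / N) / L - lam / N)  # each subchannel gets C/N and lam/N

print(T)      # 0.0002 -> 200 microseconds mean delay on one shared channel
print(T_fdm)  # 0.002  -> ten times worse under FDM, i.e. T_fdm = N * T
```

This shows why statically splitting a channel into N pieces multiplies the mean delay by N.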
Station Model: Assumes that each of the N stations independently produces frames. The
probability of a frame being generated in an interval of length Δt is λΔt, where λ is the
constant arrival rate of new frames.
Single Channel Assumption: A single channel is available for all communication; all stations
are equivalent and can send and receive on that channel.
Collision Assumption: If two frames overlap in time, the result is a collision. Any
collision is an error, and both frames must be retransmitted. Collisions are the only possible
errors.
Protocol Assumption:
o N independent stations.
o A station is blocked until its generated frame is transmitted.
o The probability of a frame being generated in a period of length Δt is λΔt, where λ is
the arrival rate of frames.
o Only a single channel is available.
o Time can be either continuous or slotted.
o Carrier Sense: A station can sense if a channel is already busy before transmission.
o No Carrier Sense: A time-out is used to detect lost data.
Random Access Protocols is a Multiple access protocol that is divided into four categories
which are ALOHA, CSMA, CSMA/CD, and CSMA/CA. In this article, we will cover all of these
Random Access Protocols in detail.
Have you ever been to a railway station? And noticed the ticket counter over there?
Above are the scenarios for approaching a ticket counter. Which one do you think is more
productive? The ordered one, right? And we all know the reason why. Just to get things
working and avoid problems we have some rules or protocols, like "please stand in the
queue", "do not push each other", "wait for your turn", etc. in the same way computer
network channels also have protocols like multiple access protocols, random access
protocols, etc.
Let's say you are talking to your friend using a mobile phone. This means there is a link
established between you and him. But the point to be remembered is that the communication
channel between you and him (the sender & the receiver or vice-versa) is not always a
dedicated link, which means the channels are not only providing service to you at that time
but to others as well. This means multiple users might be communicating through the same
channel.
How is that possible? The reason behind this is the multiple access protocols. If you refer
to the OSI model you will come across the data link layer. Now divide the layer into 2 parts:
the upper part of the layer takes care of data link control, and the lower half takes
care of resolving access to the shared media, as shown in the above diagram.
The following diagram classifies the multiple-access protocol. In this article, we are going to
cover Random Access Protocol.
In a random access protocol:
1. There is no time restriction for sending the data (you can talk to your friend without a
time restriction).
2. There is no fixed sequence of stations transmitting the data.
As in the above diagram you might have observed that the random-access protocol is further
divided into four categories, which are:
1. ALOHA
2. CSMA
3. CSMA/CD
4. CSMA/CA
The ALOHA protocol, also known as the ALOHA method, is a simple communication
scheme in which every transmitting station or source in a network sends data
whenever a frame is available for transmission. If the frame successfully reaches its
destination, the next frame is lined up for transmission. But remember, if a data
frame is not received by the receiver (perhaps due to collision), the frame is sent again
until it successfully reaches the receiver's end.
The ALOHA method works efficiently for a wireless broadcast system or a half-duplex
two-way link. But it struggles as the network becomes more complex, e.g. in Ethernet,
where the system involves multiple sources and destinations
sharing a common data path or channel. There, conflicts occur because data frames
collide and the information is lost. Following is the flow chart of Pure ALOHA.
So, to minimize these collisions and to optimize network efficiency as well as to increase the
number of subscribers that can use a given network, the slotted ALOHA was developed. This
system consists of the signals termed as beacons which are sent at precise time intervals and
inform each source when the channel is clear to send the frame.
Now, as we came to know about ALOHA's 2 types i.e. Pure & Slotted ALOHA, the following is
the difference between both.
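The standard throughput formulas make the difference concrete: pure ALOHA gives S = G·e^(−2G) (peaking at 18.4% for G = 0.5), while slotted ALOHA gives S = G·e^(−G) (peaking at 36.8% for G = 1):

```python
import math

def pure_aloha_throughput(G):
    """S = G * e^(-2G): the vulnerable period is two frame times."""
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G):
    """S = G * e^(-G): slotting halves the vulnerable period."""
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368
```

Slotting doubles the peak throughput because a frame can only collide with frames sent in its own slot, not with frames started up to one frame time earlier.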
CSMA stands for Carrier Sense Multiple Access. Till now we have understood that when 2
or more stations start sending data, a collision occurs, so this CSMA method was
developed to decrease the chance of collision when 2 or more stations start sending their
signals over the data link layer. But how do they do it? CSMA makes each station first
check the medium (whether it is busy or not) before sending any data packet.
Here, Vulnerable time = Propagation Time
But, what to do if the channels are busy? Now, here the persistence methods can be applied
to help the station act when the channel is busy or idle.
o 1-persistent mode: first the node checks the channel; if the channel is idle, the
station transmits immediately, otherwise it keeps sensing continuously and transmits
the data frame as soon as the channel becomes idle.
o Non-persistent mode: In this, the station checks the channel similarly as 1-
persistent mode, but the only difference is that when the channel is busy it checks it
again after a random amount of time, unlike the 1-persistent where the stations
keep on checking continuously.
o P-persistent mode: the station checks the channel and, if it is found idle, transmits
the data frame with probability P. If the data is not transmitted (with probability
1−P), the station waits for a random amount of time and again attempts transmission
with probability P; this cycle continues until the data frame is successfully sent.
o O-persistent mode: here transmission occurs in an order of station priority decided
beforehand. If the channel is idle, each station waits for its turn to send the
data frame.
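One decision step of these persistence methods can be sketched as follows; this is an illustrative fragment (the function and return-value names are assumptions), not a full MAC implementation:

```python
import random

def persistent_decision(mode, channel_idle, p=0.5, rng=random.random):
    """What a station does in one sensing step under each persistence mode."""
    if not channel_idle:
        # non-persistent re-senses after a random wait; the others keep sensing
        return "wait_random" if mode == "non-persistent" else "keep_sensing"
    if mode == "p-persistent":
        # idle channel: transmit with probability p, else defer one slot
        return "transmit" if rng() < p else "defer_one_slot"
    return "transmit"   # 1-persistent and non-persistent send on an idle channel

print(persistent_decision("1-persistent", channel_idle=True))     # transmit
print(persistent_decision("non-persistent", channel_idle=False))  # wait_random
```

The `rng` parameter is injected only so the probabilistic branch can be tested deterministically.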
Throughput & Efficiency of CSMA:
It is comparatively much greater than the throughput of pure and slotted ALOHA. Here, for
the 1-persistent mode, the throughput is 50% when G=1 and for Non-persistent mode, the
throughput can reach up to 90%.
In CSMA/CD (Carrier Sense Multiple Access with Collision Detection), whenever a station
transmits a data frame it monitors the channel or the medium to learn the state of the
transmission, i.e. successfully transmitted or failed. If the transmission succeeds, it
prepares for the next frame; otherwise it resends the previously failed data frame. The
point to remember here is that the frame transmission time should be at least twice the
maximum propagation time, which corresponds to the case when the distance between the
two stations involved in a collision is maximum.
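The "transmission time at least twice the propagation time" condition fixes a minimum frame size. The numbers below (2.5 km cable, 2×10^8 m/s propagation speed, 10 Mbps link) are assumptions chosen for illustration:

```python
distance   = 2500      # metres between the two farthest stations (assumed)
prop_speed = 2e8       # signal propagation speed in the medium, m/s (assumed)
bandwidth  = 10e6      # link rate, bits/sec (assumed)

Tp = distance / prop_speed            # maximum one-way propagation time
min_frame_bits = 2 * Tp * bandwidth   # from Tt = L/bandwidth >= 2*Tp

print(Tp)              # 1.25e-05 s
print(min_frame_bits)  # 250.0 -> any shorter frame could finish sending
                       # before the collision signal returns to the sender
```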
To detect possible collisions, the sender receives the acknowledgment, and if there is
only one acknowledgment present (its own), the data frame has been
sent successfully. But if there are 2 or more acknowledgment signals, this indicates that
a collision has occurred.
CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) avoids collisions using
three strategies:
o Interframe space: assume that a station waits for the channel to become idle and
finds it idle. It does not send the data frame immediately (in order to avoid a
collision due to propagation delay); rather it waits for some time called the
interframe space, or IFS, and after this time the station again checks whether the
medium is idle. Keep in mind that the IFS duration depends on the priority of the
station.
o Contention window: here, time is divided into slots. When the sender is ready to
transmit the data, it chooses a random number of slots as its waiting time, and this
number doubles every time the channel is found busy. If the channel is busy, the
station does not restart the entire process; it merely restarts the timer when the
channel is found idle again.
o Acknowledgment: as discussed above, the sender station retransmits the data if an
acknowledgment is not received before the timer expires.
Lecture-2.2.2
Controlled Access
In controlled access, the stations seek information from one another to find which station
has the right to send. It allows only one node to send at a time, to avoid collision of messages
on shared medium.
1. Reservation
2. Polling
3. Token Passing
Reservation
o In the reservation method, a station needs to make a reservation before sending
data.
o The time line has two kinds of periods:
1. Reservation intervals of fixed time length
2. Data transmission periods of variable-length frames.
o If there are N stations, the reservation interval is divided into N slots, and each
station has one slot.
o Suppose station 1 has a frame to send. It transmits a 1 bit during slot 1; no other
station is allowed to transmit during this slot.
o In general, the i-th station may announce that it has a frame to send by inserting a 1
bit into the i-th slot. After all N slots have been checked, each station knows which
stations wish to transmit.
o The stations which have reserved their slots transfer their frames in that order.
o After data transmission period, next reservation interval begins.
o Since everyone agrees on who goes next, there will never be any collisions.
The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval,
only station 1 has made a reservation.
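The five-station situation above can be sketched as a bitmap; the helper name is illustrative:

```python
def reservation_order(requests, n_stations):
    """Build the reservation bitmap and return the resulting send order."""
    bitmap = [1 if s in requests else 0 for s in range(1, n_stations + 1)]
    return [s + 1 for s, bit in enumerate(bitmap) if bit]

# First interval: stations 1, 3 and 4 set their bits, so they send in order.
print(reservation_order({1, 3, 4}, 5))   # [1, 3, 4]
# Second interval: only station 1 reserves.
print(reservation_order({1}, 5))         # [1]
```

Since every station sees the same bitmap, all stations agree on the transmission order and no collision is possible during the data period.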
Polling
o Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
o In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
o The message sent by the controller contains the address of the node being selected
for granting access.
o Although all nodes receive the message, only the addressed one responds to it and
sends data, if any. If there is no data, usually a "poll reject" (NAK) message is sent
back.
o Problems include high overhead of the polling messages and high dependence on
the reliability of the controller.
Efficiency
Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,
Efficiency = Tt/(Tpoll + Tt)
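Assuming the usual polling efficiency formula Efficiency = Tt/(Tpoll + Tt), a quick numeric check with assumed values:

```python
T_poll = 0.001   # time spent on polling messages per cycle, seconds (assumed)
T_t    = 0.009   # time spent transmitting useful data per cycle, seconds (assumed)

efficiency = T_t / (T_poll + T_t)
print(efficiency)   # 0.9 -> 90% of the channel time carries data;
                    # the polling overhead consumes the remaining 10%
```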
Token Passing
o In the token passing scheme, the stations are logically connected to one another in
the form of a ring, and access by the stations is governed by tokens.
o A token is a special bit pattern or a small message which circulates from one station
to the next in some predefined order.
o In a token ring, the token is passed from one station to the adjacent station in the
ring, whereas in a token bus, each station uses the bus to send the token to the next
station in some predefined order.
o In both cases, the token represents permission to send. If a station has a frame queued
for transmission when it receives the token, it can send that frame before it passes
the token to the next station. If it has no queued frame, it simply passes the token
along.
o After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbors and the other N – 1 stations to send a frame, if they
have one.
o Problems such as duplication or loss of the token, insertion of a new station, and
removal of a station need to be tackled for correct and reliable operation of this
scheme.
Performance
1. Delay, which is a measure of the time between when a packet is ready and when it is
delivered. The average time (delay) required to send a token to the next station =
a/N.
2. Throughput, which is a measure of the successful traffic:
S = 1/(1 + a/N) for a < 1
S = 1/{a(1 + 1/N)} for a > 1
where N = number of stations and a = Tp/Tt, with Tp = propagation delay and
Tt = transmission delay.
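A hedged sketch of the throughput expressions commonly quoted for token passing, S = 1/(1 + a/N) for a < 1 and S = 1/{a(1 + 1/N)} for a > 1, with a = Tp/Tt:

```python
def token_passing_throughput(a, N):
    """Throughput S as a function of a = Tp/Tt and the number of stations N."""
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

print(round(token_passing_throughput(0.5, 10), 3))  # 0.952
print(round(token_passing_throughput(2.0, 10), 3))  # 0.455
```

As the propagation delay grows relative to the transmission time (a > 1), more of each cycle is spent passing the token rather than sending data, so throughput drops.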
•Essentially, the IEEE 802 standards help make sure internet services and technologies follow
a set of recommended practices so network devices can all work together smoothly.
•IEEE 802 is divided into 22 parts that cover the physical and data-link aspects of networking.
The family of standards is developed and maintained by the IEEE 802 LAN/MAN Standards
Committee, also called the LMSC. IEEE stands for Institute of Electrical and Electronics
Engineers.
IEEE 802
•The set of standards started in 1979 with a "local network for computer interconnection"
standard, which was approved a year later. The LMSC has made more than 70 standards for
IEEE 802.
•Some commonly used standards include those for Ethernet, bridging and virtual bridged
LANs, wireless LAN, wireless PAN, MAN and radio access networks as well as media
independent handover services. The better-known specifications include 802.3 Ethernet,
802.11 Wi-Fi and 802.15 Bluetooth/ZigBee. However, some of these standards have been
labeled as disbanded or hibernating and are either superseded by newer standards or are
being reworked. Using an open process, the LMSC advocates for these standards globally.
•Individual "working groups" are decided on and assigned to each area in order to provide
each area with an acceptable amount of focus. IEEE 802 specifications also split the data link
layer into two different layers -- an LLC layer and a MAC layer.
Standards can be found in a PDF provided by the LMSC for up to six months after they have
been published. All standards stay in place until they are replaced with another document or
withdrawn.
•LMSC was formed in 1980 in order to standardize network protocols and provide a path to
make compatible devices across numerous industries.
•Without these standards, equipment suppliers could manufacture network hardware that
would only connect to certain computers. It would be much more difficult to connect to
systems not using the same set of networking equipment. Standardizing protocols helps
ensure that multiple types of devices can connect to multiple network types. It also helps
make sure network management isn't the challenge it would be without standards in place.
•IEEE 802 also coordinates with other international standards bodies, such as ISO, to help
maintain international standards.
•In addition, the "802" in IEEE 802 does not stand for anything with high significance. 802 was
just the next numbered project.
•The IEEE 802 specifications can be used by commercial organizations to ensure their
products maintain any newly specified standards. So, for example, the 802.11 specification
that applies to Wi-Fi could be used to make sure Wi-Fi devices work together under one
standard. In the same way, IEEE 802 can help maintain local area network standards.
•These specifications can also define what connectivity infrastructure will be used for --
individual networks, or those at a larger organizational scale.
•The IEEE 802 specifications apply to hardware and software products. So, to ensure
manufacturers don't have any input on the standards, there is a voting protocol in place. This
makes sure that one organization does not influence the standards too much.
Lecture-2.3.1
Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further
forwards the service request to the data link layer.
o The network layer translates logical addresses into physical addresses.
o It determines the route from the source to the destination and also
manages the traffic problems such as switching, routing and controls the
congestion of data packets.
o The main role of the network layer is to move the packets from sending
host to the receiving host.
The main functions performed by the network layer are:
o Routing: When a packet reaches the router's input link, the router moves
the packet to the appropriate output link. For example, a packet from source S1
must be forwarded to the next router on the path to its destination S2.
o Logical Addressing: The data link layer implements the physical
addressing and network layer implements the logical addressing. Logical
addressing is also used to distinguish between source and destination
system. The network layer adds a header to the packet which includes the
logical addresses of both the sender and the receiver.
o Internetworking: This is the main role of the network layer that it
provides the logical connection between different types of networks.
o Fragmentation: The fragmentation is a process of breaking the packets
into the smallest individual data units that travel through different
networks.
Forwarding & Routing
In Network layer, a router is used to forward the packets. Every router has a
forwarding table. A router forwards a packet by examining a packet's header field
and then using the header field value to index into the forwarding table. The value
stored in the forwarding table corresponding to the header field value indicates
the router's outgoing interface link to which the packet is to be forwarded.
For example, when a packet with a header field value of 0111 arrives at a router, the
router indexes this header value into the forwarding table and determines that the
output link interface is 2. The router then forwards the packet to interface 2.
The routing algorithm determines the values that are inserted in the forwarding
table. The routing algorithm can be centralized or decentralized.
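The 0111 example can be mirrored with a tiny table; the prefixes and interface numbers below are assumptions for illustration:

```python
# header-prefix -> outgoing link interface (contents are illustrative)
forwarding_table = {
    "00": 1,
    "01": 2,
    "10": 2,
    "11": 3,
}

def forward(header_bits):
    """Index into the forwarding table using the leading header bits."""
    return forwarding_table[header_bits[:2]]

print(forward("0111"))   # 2 -> the packet is sent out on interface 2
```

In a real router the routing algorithm, not a hand-written dictionary, populates this table.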
Services Provided by the Network Layer
o Guaranteed delivery: This layer provides the service which guarantees
that the packet will arrive at its destination.
o Guaranteed delivery with bounded delay: This service guarantees that
the packet will be delivered within a specified host-to-host delay bound.
o In-order packets: This service ensures that packets arrive at the
destination in the order in which they were sent.
o Guaranteed max jitter: This service ensures that the amount of time
taken between two successive transmissions at the sender is equal to the
time between their receipt at the destination.
o Security services: The network layer provides security by using a session
key between the source and destination host. The network layer in the
source host encrypts the payloads of datagrams being sent to the
destination host. The network layer in the destination host would then
decrypt the payload. In such a way, the network layer maintains the data
integrity and source authentication services.
Network Addressing
o Network Addressing is one of the major responsibilities of the network
layer.
o Network addresses are always logical, i.e., software-based addresses.
o A host, also known as an end system, has one link to the network. The
boundary between the host and the link is known as an interface. Therefore,
the host has only one interface.
o A router is different from the host in that it has two or more links that
connect to it. When a router forwards the datagram, then it forwards the
packet to one of the links. The boundary between the router and link is
known as an interface, and the router can have multiple interfaces, one for
each of its links. Each interface is capable of sending and receiving the IP
packets, so IP requires each interface to have an address.
o Each IP address is 32 bits long, and they are represented in "dot-decimal
notation", where each byte is written in decimal form and the bytes are
separated by periods. An IP address looks like 193.32.216.9, where 193 is the
decimal value of the first 8 bits of the address, 32 of the second 8 bits, 216 of
the third 8 bits, and 9 of the last 8 bits.
o In the above figure, a router has three interfaces labeled as 1, 2 & 3 and
each router interface contains its own IP address.
o Each host contains its own interface and IP address.
o All the interfaces attached to LAN 1 have an IP address of the form
223.1.1.xxx, and the interfaces attached to LAN 2 and LAN 3 have IP addresses
of the form 223.1.2.xxx and 223.1.3.xxx respectively.
o Each IP address consists of two parts. The first part (first three bytes in IP
address) specifies the network and second part (last byte of an IP address)
specifies the host in the network.
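The two-part structure described above can be sketched in Python. This is a minimal illustration that assumes the three-bytes-network / one-byte-host split used in the example, not a general parser:

```python
# Sketch: split a dotted-decimal IPv4 address into the network part
# (first three bytes) and the host part (last byte), per the text.
def split_address(addr: str):
    octets = [int(x) for x in addr.split(".")]
    assert len(octets) == 4 and all(0 <= o <= 255 for o in octets)
    network = ".".join(str(o) for o in octets[:3])  # e.g. '223.1.1'
    host = octets[3]                                # e.g. 9
    return network, host

print(split_address("223.1.1.9"))  # -> ('223.1.1', 9)
```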
Classful Addressing
o Class A
o Class B
o Class C
o Class D
o Class E
An IP address is divided into two parts: the network ID and the host ID.
In the above diagram, we observe that each class has a specific range of IP
addresses. The class of an IP address determines the number of bits used for the
network and host portions, and hence the number of networks and hosts available
in that class.
Class A
In Class A, the highest-order bit of the first octet is always set to 0, and the
remaining 7 bits determine the network ID. The remaining 24 bits determine the
host ID in any network.
Class B
In Class B, the two higher-order bits of the first octet are always set to 10, and
the remaining 14 bits determine the network ID. The other 16 bits determine the
host ID.
Class C
In Class C, the three higher-order bits of the first octet are always set to 110,
and the remaining 21 bits determine the network ID. The 8 bits of the host ID
identify the host within a network.
Class D
In Class D, the four higher-order bits of the first octet are always set to 1110,
and the remaining bits identify a multicast group address. Class D is used for
multicasting and has no network/host split.
Class E
In Class E, addresses are reserved for future use or for research and
development purposes. Class E does not support subnetting. The four higher-order
bits of the first octet are always set to 1111, and the remaining bits are not
divided into network and host IDs.
The host ID identifies a host within a network. A host ID must be unique within
its network, and the all-0s and all-1s host IDs are reserved (they denote the
network address and the broadcast address respectively). If hosts are located
within the same local network, they are assigned the same network ID.
Class  Higher-order bits  Net ID bits  Host ID bits  No. of networks  No. of hosts per network  Range starts at
A      0                  8            24            2^7              2^24                      0.0.0.0
B      10                 16           16            2^14             2^16                      128.0.0.0
C      110                24           8             2^21             2^8                       192.0.0.0
D      1110               Not defined  Not defined   Not defined      Not defined               224.0.0.0
E      1111               Not defined  Not defined   Not defined      Not defined               240.0.0.0
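The class of an address can be determined from the higher-order bits of its first octet. A small sketch, with the ranges following directly from the leading-bit patterns described above:

```python
# Sketch: classify an IPv4 address by the leading bits of its first
# octet, following the classful addressing rules above.
def ip_class(addr: str) -> str:
    first = int(addr.split(".")[0])
    if first < 128:    # leading bit 0     (from 0.0.0.0)
        return "A"
    if first < 192:    # leading bits 10   (from 128.0.0.0)
        return "B"
    if first < 224:    # leading bits 110  (from 192.0.0.0)
        return "C"
    if first < 240:    # leading bits 1110 (from 224.0.0.0)
        return "D"
    return "E"         # leading bits 1111 (from 240.0.0.0)

print(ip_class("193.32.216.9"))  # -> C
```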
IPv6
•IPv4 provides about 4 billion addresses. Its developers believed these would
be enough, but they were wrong. IPv6 is the next generation of IP addressing.
The main difference between IPv4 and IPv6 is the address size: IPv4 uses a
32-bit address, whereas IPv6 uses a 128-bit hexadecimal address. IPv6 provides
a much larger address space and has a simpler header than IPv4.
•There are transition strategies for moving from IPv4 to IPv6, which are as
follows:
•Dual stacking: It allows both versions, IPv4 and IPv6, to run on the same
device.
•Tunneling: IPv6 packets are encapsulated inside IPv4 packets, so that IPv6
hosts can communicate across an IPv4 network.
•This hexadecimal address contains both numbers and letters. Because of this
format, IPv6 is capable of producing over 340 undecillion (3.4×10^38)
addresses.
Address format
•The above diagram shows the address formats of IPv4 and IPv6. An IPv4 address
is a 32-bit decimal address. It contains 4 octets (fields) separated by dots,
each field is 8 bits in size, and the number in each field must be in the range
0-255. An IPv6 address, by contrast, is a 128-bit hexadecimal address. It
contains 8 fields separated by colons, and each field is 16 bits in size.
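Python's standard ipaddress module can illustrate the two formats side by side; the IPv6 address below is a documentation-style example address, used here purely for illustration:

```python
# Contrast the two address sizes using the stdlib ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("193.32.216.9")                  # dotted decimal
v6 = ipaddress.ip_address("2001:db8:85a3::8a2e:370:7334")  # hexadecimal

print(v4.version, v4.max_prefixlen)  # 4 32  -> 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128 -> 128-bit address
```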
IPv4 vs IPv6
o Address length: IPv4 is a 32-bit address. IPv6 is a 128-bit address.
o Fields: IPv4 is a numeric address consisting of 4 fields separated by dots
(.). IPv6 is an alphanumeric address consisting of 8 fields separated by
colons (:).
o Classes: IPv4 has 5 classes of IP address (Class A, Class B, Class C, Class
D, and Class E). IPv6 does not have classes of IP addresses.
o Number of IP addresses: IPv4 has a limited number of IP addresses. IPv6 has
a very large number of IP addresses.
o VLSM: IPv4 supports VLSM (Variable Length Subnet Mask), which allows IPv4
address space to be divided into subnets of different sizes. IPv6 does not use
VLSM.
o Address configuration: IPv4 supports manual and DHCP configuration. IPv6
supports manual, DHCP, auto-configuration, and renumbering.
o Address space: IPv4 generates about 4 billion unique addresses. IPv6
generates about 340 undecillion unique addresses.
o End-to-end connection integrity: In IPv4, end-to-end connection integrity is
unachievable. In IPv6, it is achievable.
o Security features: In IPv4, security depends on the application; the
protocol was not designed with security in mind. In IPv6, IPSec is built in
for security purposes.
o Address representation: IPv4 addresses are represented in decimal. IPv6
addresses are represented in hexadecimal.
o Fragmentation: In IPv4, fragmentation is done by the sender and the
forwarding routers. In IPv6, fragmentation is done by the sender only.
o Packet flow identification: IPv4 provides no mechanism for packet flow
identification. IPv6 uses the flow label field in the header for packet flow
identification.
o Checksum field: The checksum field is present in IPv4 but not in IPv6.
o Transmission scheme: IPv4 uses broadcasting. IPv6 uses multicasting, which
provides more efficient network operations.
o Encryption and authentication: IPv4 does not provide encryption and
authentication. IPv6 does.
o Number of octets: IPv4 consists of 4 octets. IPv6 consists of 8 fields of 2
octets each, for a total of 16 octets.
Lecture-2.3.2
Routing
o Routing is the process of selecting a path along which data can be transferred from
source to destination. Routing is performed by a special device known as a router.
o A Router works at the network layer in the OSI model and internet layer in TCP/IP
model
o A router is a networking device that forwards the packet based on the information
available in the packet header and forwarding table.
o Routing algorithms are used for routing the packets. A routing algorithm is simply
the software responsible for deciding the optimal path through which a packet can be
transmitted.
o Routing protocols use metrics to determine the best path for packet delivery. A
metric is a standard of measurement, such as hop count, bandwidth, delay, or current
load on the path, used by the routing algorithm to determine the optimal path to the
destination.
o The routing algorithm initializes and maintains the routing table for the process of
path determination.
Distance Vector Routing Algorithm
o The Distance vector algorithm is iterative, asynchronous and distributed.
o Distributed: It is distributed in that each node receives information from one or more
of its directly attached neighbors, performs calculation and then distributes the result
back to its neighbors.
o Iterative: It is iterative in that its process continues until no more information is
available to be exchanged between neighbors.
o Asynchronous: It does not require that all of its nodes operate in the lock step with
each other.
o The Distance vector algorithm is a dynamic algorithm.
o It was used in the ARPANET and is used in RIP.
o Each router maintains a distance table known as Vector.
Three Keys to understand the working of Distance Vector Routing
Algorithm:
o Knowledge about the whole network: Each router shares its knowledge about
the entire network. The router sends its collected knowledge about the network to
its neighbors.
o Routing only to neighbors: The router sends its knowledge about the network
only to those routers to which it has direct links. It sends whatever it knows about
the network through its ports. The receiving routers use this information to update
their own routing tables.
o Information sharing at regular intervals: Every 30 seconds (typically), the router
sends its information to the neighboring routers.
Distance Vector Routing Algorithm
Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related
by the Bellman-Ford equation:

dx(y) = minv{ c(x,v) + dv(y) }

where minv is taken over all of x's neighbors v. After traveling from x to a neighbor v, if we
take the least-cost path from v to y, the total path cost is c(x,v) + dv(y). The least cost from
x to y is therefore the minimum of c(x,v) + dv(y) taken over all neighbors.
With the Distance Vector Routing algorithm, the node x contains the following routing
information:
o For each neighbor v, the cost c(x,v) is the path cost from x to directly attached
neighbor, v.
o The distance vector x, i.e., Dx = [ Dx(y) : y in N ], containing its cost to all destinations,
y, in N.
o The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ] for each neighbor
v of x.
Distance vector routing is an asynchronous algorithm in which node x sends a copy of its
distance vector to all its neighbors. When node x receives a new distance vector from one of
its neighbors, v, it saves v's distance vector and uses the Bellman-Ford equation to update its
own distance vector:

Dx(y) = minv{ c(x,v) + Dv(y) }   for each node y in N

If this update changes node x's distance vector, x sends the updated vector to all its
neighbors so that they can update their own distance vectors.
Algorithm
At each node x:

Initialization:
  for all destinations y in N:
    Dx(y) = c(x,y)            /* = ∞ if y is not a neighbor of x */
  send distance vector Dx = [Dx(y) : y in N] to each neighbor

loop
  wait (until a link cost changes or a distance vector arrives from a neighbor v)
  for each y in N:
    Dx(y) = minv{ c(x,v) + Dv(y) }
  if Dx(y) changed for any y, send Dx to all neighbors
forever
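The update step can be sketched minimally in Python; the node names, link costs, and vectors below are illustrative, not taken from any figure:

```python
# Distance-vector update: Dx(y) = minv{ c(x,v) + Dv(y) }.
def dv_update(costs, neighbor_vectors, destinations):
    """costs: {v: c(x,v)} for each neighbor v of x.
    neighbor_vectors: {v: {y: Dv(y)}} as last advertised by each v."""
    INF = float("inf")
    return {
        y: min(costs[v] + neighbor_vectors[v].get(y, INF) for v in costs)
        for y in destinations
    }

costs = {"v": 2, "w": 7}               # c(x,v)=2, c(x,w)=7
vectors = {"v": {"y": 5, "z": 4},      # v's advertised distance vector
           "w": {"y": 1, "z": 3}}      # w's advertised distance vector
print(dv_update(costs, vectors, ["y", "z"]))  # -> {'y': 7, 'z': 6}
```

Here Dx(y) = min(2+5, 7+1) = 7 and Dx(z) = min(2+4, 7+3) = 6, both achieved via neighbor v.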
Sharing Information
o In the above figure, each cloud represents the network, and the number inside the
cloud represents the network ID.
o All the LANs are connected by routers, and they are represented in boxes labeled as
A, B, C, D, E, F.
o Distance vector routing algorithm simplifies the routing process by assuming the cost
of every link is one unit. Therefore, the efficiency of transmission can be measured by
the number of links to reach the destination.
o In Distance vector routing, the cost is based on hop count.
In the above figure, we observe that each router sends its knowledge to its immediate
neighbors. The neighbors add this knowledge to their own and send the updated tables to
their own neighbors. In this way, each router ends up with its own information plus new
information about its neighbors.
Routing Table
Initially, a routing table is created for each router, containing at least three types of
information: the network ID, the cost, and the next hop.
o NET ID: The Network ID defines the final destination of the packet.
o Cost: The cost is the number of hops that packet must take to get there.
o Next hop: It is the router to which the packet must be delivered.
o In the above figure, the original routing tables of all the routers are shown. In a
routing table, the first column represents the network ID, the second column
represents the cost of the link, and the third column (next hop) is empty.
o These routing tables are sent to all the neighbors.
For Example:
o After adjustment, A then combines this table with its own table to create a combined
table.
o The combined table may contain duplicate entries. In the above figure, the
combined table of router A contains duplicates, so it keeps only the entry with the
lowest cost for each network. For example, A can reach network 1 in two ways: the
first, directly with no next router, costs one hop; the second (A to B, then B to
network 1) requires two hops. The first option has the lower cost, therefore it is
kept and the second one is dropped.
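The combining step can be sketched as follows; the network IDs and costs are illustrative, mirroring the network-1 example above:

```python
# Sketch: merge a neighbor's advertised routing table into our own,
# adding one hop per link and keeping the lowest-cost entry per network.
def combine(own, advertised, neighbor):
    """own / advertised: {net_id: (cost, next_hop)} tables."""
    merged = dict(own)
    for net, (cost, _) in advertised.items():
        adjusted = cost + 1  # one extra hop to go via the neighbor
        if net not in merged or adjusted < merged[net][0]:
            merged[net] = (adjusted, neighbor)
    return merged

a = {1: (1, None), 2: (1, None)}  # A reaches networks 1 and 2 directly
b = {1: (1, None), 3: (1, None)}  # B's advertised table
print(combine(a, b, "B"))
# network 1 stays at cost 1 (direct); network 3 is reached via B at cost 2
```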
o The process of building the routing tables continues for all routers. Every router
receives information from its neighbors and updates its routing table accordingly.
Link State Routing
Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.
o Knowledge about the neighborhood: Instead of sending its routing table, a router
sends information about its neighborhood only. A router broadcasts the identities
and costs of its directly attached links to the other routers.
o Flooding: Each router sends its information to every other router on the
internetwork, not only to its neighbors. This process is known as flooding. Every
router that receives the packet sends copies to all its neighbors (except the one
from which it arrived). Finally, each and every router receives a copy of the same
information.
o Information sharing: A router sends the information to every other router only
when the change occurs in the information.
Link State Routing has two phases:
Reliable Flooding
o Initial state: Each node knows the cost of its neighbors.
o Final state: Each node knows the entire graph.
Route Calculation
Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.
o The link state routing algorithm uses Dijkstra's algorithm, which finds the
shortest path from one node to every other node in the network.
o Dijkstra's algorithm is iterative, and it has the property that after the kth
iteration, the least-cost paths are known for k destination nodes.
Let's describe some notations:
o c(i, j): The link cost from node i to node j. If nodes i and j are not directly
linked, then c(i, j) = ∞.
o D(v): The cost of the currently known least-cost path from the source node to
destination v.
o P(v): The previous node (a neighbor of v) along the current least-cost path from
the source to v.
o N: The set of nodes whose least-cost path from the source is definitively known.
Algorithm
Initialization:
  N = {A}                      /* A is the source node */
  for all nodes v:
    if v adjacent to A then D(v) = c(A,v) else D(v) = ∞
loop
  find w not in N such that D(w) is a minimum
  add w to N
  update D(v) for all v adjacent to w and not in N:
    D(v) = min( D(v), D(w) + c(w,v) )
until all nodes in N
In the above algorithm, an initialization step is followed by a loop. The loop executes once
for each node other than the source, until every node is in N.
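The loop above can be sketched in Python. The link costs below follow the worked example that comes next (A-B=2, A-C=5, A-D=1, D-C=3, D-E=1); F is left unconnected because its links are not given in the text:

```python
# Dijkstra's algorithm: repeatedly finalize the node w with minimum D(w)
# and relax the costs of its still-unfinalized neighbors.
import heapq

def dijkstra(graph, source):
    dist = {v: float("inf") for v in graph}
    dist[source] = 0
    done = set()                      # the set N of finalized nodes
    heap = [(0, source)]
    while heap:
        d, w = heapq.heappop(heap)
        if w in done:
            continue
        done.add(w)                   # add w to N
        for v, cost in graph[w].items():
            if d + cost < dist[v]:    # D(v) = min(D(v), D(w) + c(w,v))
                dist[v] = d + cost
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {
    "A": {"B": 2, "C": 5, "D": 1},
    "B": {"A": 2},
    "C": {"A": 5, "D": 3},
    "D": {"A": 1, "C": 3, "E": 1},
    "E": {"D": 1},
    "F": {},  # F's links are not recoverable from the text
}
print(dijkstra(graph, "A"))  # D(B)=2, D(C)=4, D(D)=1, D(E)=2, D(F)=inf
```

These final costs match the steps worked through below: D(C) drops from 5 to 4 via D, and D(E) becomes 2 via D.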
Step 1:
The first step is an initialization step. The currently known least cost path from A to its directly
attached neighbors, B, C, D are 2,5,1 respectively. The cost from A to B is set to 2, from A to D
is set to 1 and from A to C is set to 5. The cost from A to E and F are set to infinity as they are
not directly linked to A.
Step 2:
In the above table, we observe that vertex D has the least cost in step 1. Therefore, it is
added to N. Now, we need to determine the least-cost paths through vertex D.
v = C, w = D
D(C) = min( D(C), D(D) + c(D,C) )
     = min( 5, 1 + 3 )
     = min( 5, 4 )
The minimum value is 4. Therefore, the currently shortest path from A to C is 4.
v = E, w = D
D(E) = min( D(E), D(D) + c(D,E) )
     = min( ∞, 1 + 1 )
     = min( ∞, 2 )
The minimum value is 2. Therefore, the currently shortest path from A to E is 2.
Note: Vertex D has no direct link to vertex F. Therefore, the value of D(F)
remains infinity.
Step 3:
In the above table, we observe that both E and B have the least cost in step 2. Let's
consider vertex E. Now, we determine the least-cost paths of the remaining vertices through E.