
COMPUTER NETWORKS

Lecture-2.1.1

The data link layer is the second layer of the OSI model, sitting directly above the physical
layer. It is responsible for maintaining the data link between two hosts or nodes.

Before going through the design issues of the data link layer, it helps to look at its
sub-layers and their functions, described below.

The data link layer is divided into two sub-layers :

1. Logical Link Control Sub-layer (LLC) –

   It provides the logic for the data link; thus it controls the synchronization, flow
   control, and error-checking functions of the data link layer. Its functions are:
   (i) Error recovery.
   (ii) Flow control.
   (iii) User addressing.

2. Media Access Control Sub-layer (MAC) –

   It is the second sub-layer of the data link layer. It controls the flow and multiplexing
   for the transmission medium, governs the transmission of data packets, and is responsible
   for sending the data over the network interface card. Its functions are:
   (i) Controlling access to the media.
   (ii) Unique addressing of stations directly connected to the LAN.
   (iii) Error detection.

Design issues with data link layer are :

1. Services provided to the network layer –
   The data link layer acts as a service interface to the network layer. The principal
   service is transferring data from the network layer on the sending machine to the
   network layer on the destination machine. This transfer takes place through the
   data link layer (DLL).
2. Frame synchronization –
   The source machine sends data in blocks called frames to the destination machine.
   The start and end of each frame must be marked so that the destination machine can
   recognize frame boundaries.
3. Flow control –
   Flow control prevents the sender from overwhelming the receiver. The source machine
   must not send data frames at a rate faster than the destination machine can accept
   them.
4. Error control –
   Error control detects and corrects errors introduced during transmission, and also
   prevents duplication of frames. Errors introduced between the source and destination
   machines must be detected and corrected at the destination machine.

Error Detection and Correction

There are many causes, such as noise and cross-talk, that may corrupt data during
transmission. The upper layers work on a generalized view of the network architecture and
are not aware of the actual hardware data processing; hence, the upper layers expect
error-free transmission between the systems. Most applications would not function as
expected if they received erroneous data. Applications such as voice and video are less
affected and may still function well despite some errors.

The data link layer uses error control mechanisms to ensure that frames (data bit streams)
are transmitted with a certain level of accuracy. But to understand how errors are
controlled, it is essential to know what types of errors may occur.

Types of Errors

There may be three types of errors:

Single bit error:

o Only one bit in the frame, at any position, is corrupted.

Multiple bits error:

o The frame is received with more than one bit in a corrupted state.

Burst error:

o The frame contains two or more consecutive corrupted bits.

Error control mechanism may involve two possible ways:

o Error detection
o Error correction

Error Detection

Errors in the received frames are detected by means of Parity Check and Cyclic Redundancy
Check (CRC). In both cases, a few extra bits are sent along with the actual data to confirm
that the bits received at the other end are the same as those sent. If the counter-check at
the receiver's end fails, the bits are considered corrupted.

Parity Check
One extra bit is sent along with the original bits to make the number of 1s either even
(even parity) or odd (odd parity).

While creating a frame, the sender counts the number of 1s in it. For example, if even
parity is used and the number of 1s is even, then a bit with value 0 is added; this way the
number of 1s remains even. If the number of 1s is odd, a bit with value 1 is added to make
it even.

Basic approach used for error detection is the use of redundancy bits, where additional bits
are added to facilitate detection of errors.

Some popular techniques for error detection are:

1. Simple Parity check

2. Two-dimensional Parity check

3. Checksum

4. Cyclic redundancy check

1. Simple Parity check

Blocks of data from the source are passed through a parity bit generator, which appends a
parity bit of:

o 1 if the block contains an odd number of 1s, and

o 0 if it contains an even number of 1s.

This scheme makes the total number of 1’s even, that is why it is called even parity checking.
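The even-parity scheme above can be sketched in a few lines of Python (a minimal illustration working on lists of bits rather than real frames):

```python
def even_parity_bit(bits):
    """Return the parity bit that makes the total number of 1s even."""
    return sum(bits) % 2

def add_parity(bits):
    """Append the even-parity bit to a block of data bits."""
    return bits + [even_parity_bit(bits)]

def check_even_parity(frame):
    """A received frame passes if its total number of 1s is even."""
    return sum(frame) % 2 == 0
```

For example, `add_parity([1, 0, 1, 1])` appends a 1 (the block has three 1s), and flipping any single bit of the resulting frame makes `check_even_parity` fail.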

2. Two-dimensional Parity check


Parity check bits are calculated for each row, which is equivalent to a simple parity check bit.
Parity check bits are also calculated for all columns, then both are sent along with the data.
At the receiving end these are compared with the parity bits calculated on the received data.
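A sketch of the two-dimensional scheme, assuming the data is already arranged as rows of bits:

```python
def two_d_parity(rows):
    """Append an even-parity bit to each row, then append a column-parity row."""
    with_row_parity = [r + [sum(r) % 2] for r in rows]
    col_parity = [sum(col) % 2 for col in zip(*with_row_parity)]
    return with_row_parity + [col_parity]

def check_two_d(block):
    """Every row and every column of a valid block has an even number of 1s."""
    return (all(sum(r) % 2 == 0 for r in block)
            and all(sum(c) % 2 == 0 for c in zip(*block)))
```

Flipping a single bit anywhere in the block breaks both its row parity and its column parity, which is what lets the receiver locate (and therefore correct) single-bit errors.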

3. Checksum

o In checksum error detection scheme, the data is divided into k segments each of m
bits.
o At the sender's end the segments are added using 1's complement arithmetic to get
the sum. The sum is complemented to get the checksum.
o The checksum segment is sent along with the data segments.
o At the receiver’s end, all received segments are added using 1’s complement
arithmetic to get the sum. The sum is complemented.
o If the result is zero, the received data is accepted; otherwise discarded.
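The sender- and receiver-side steps above can be sketched as follows (a simplified illustration using 8-bit segments; real checksums such as the Internet checksum use 16-bit words):

```python
def ones_complement_sum(segments, m=8):
    """Add m-bit segments in 1's complement arithmetic (wrap the carry)."""
    mask = (1 << m) - 1
    total = 0
    for s in segments:
        total += s
        total = (total & mask) + (total >> m)  # fold carry back in
    return total

def make_checksum(segments, m=8):
    """Complement of the 1's complement sum of the data segments."""
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments_with_checksum, m=8):
    """At the receiver, the sum of data plus checksum must be all 1s
    (so its complement is zero)."""
    return ones_complement_sum(segments_with_checksum, m) == (1 << m) - 1
```

For instance, for segments 0x12 and 0x34 the sum is 0x46, so the checksum is 0xB9, and the receiver's sum over all three segments is 0xFF (all 1s), which complements to zero and is accepted.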

Lecture-2.1.2

Flow and Error Control

Flow control and Error control are the two main responsibilities of the Data link
layer. Let us understand what these two terms specify. For the node-to-node delivery
of the data, the flow and error control are done at the data link layer.

Flow Control mainly coordinates the amount of data that can be sent before receiving
an acknowledgment from the receiver, and it is one of the major duties of the data
link layer.

o For most protocols, flow control is a set of procedures that tells the sender how
much data it can send before it must wait for an acknowledgment from the receiver.
o The data flow must not be allowed to overwhelm the receiver; any receiving device
has a limited speed at which it can process incoming data and a limited amount of
memory to store it.
o The processing rate is slower than the transmission rate; for this reason each
receiving device has a block of memory, commonly known as a buffer, that is used to
store incoming data until it is processed. If the buffer begins to fill up, the
receiver must be able to tell the sender to halt transmission until it is once again
able to receive.

Thus flow control makes the sender wait for an acknowledgment from the receiver
before continuing to send more data.

Some common flow control techniques are Stop-and-Wait and the sliding window
technique.

Error Control contains both error detection and error correction. It mainly allows
the receiver to inform the sender about any damaged or lost frames during the
transmission and then it coordinates with the retransmission of those frames by the
sender.

The term error control in the data link layer mainly refers to methods of error
detection and retransmission. Error control is implemented in a simple way: whenever
an error is detected during an exchange, the specified frames are retransmitted. This
process is also referred to as Automatic Repeat Request (ARQ).

Protocols

Protocols are implemented in software using one of the common programming languages.
Protocols can be classified on the basis of where they are used.

Protocols can be designed for noiseless channels (error-free) and for noisy channels
(error-creating). The protocols for noiseless channels cannot be used in real life;
they mainly serve as the basis for the protocols used for noisy channels.

Flow Control

When a data frame (Layer-2 data) is sent from one host to another over a single
medium, the sender and receiver must work at compatible speeds: the sender sends at a
rate at which the receiver can process and accept the data. What if the speeds
(hardware/software) of the sender and receiver differ? If the sender sends too fast,
the receiver may be overloaded (swamped) and data may be lost.

Two types of mechanisms can be deployed to control the flow:


o Stop and Wait – this mechanism forces the sender, after transmitting a data frame,
to stop and wait until the acknowledgement of that frame is received.
o Sliding Window – multiple frames may be in flight at once, as described next.

Sliding Window Protocol

The sliding window is a technique for sending multiple frames at a time. It controls the
data packets between the two devices where reliable and gradual delivery of data
frames is needed. It is also used in TCP (Transmission Control Protocol).

In this technique, each frame is assigned a sequence number. The sequence numbers are
used to find missing data at the receiver end. The sliding window technique also uses
sequence numbers to avoid duplicate data.

Types of Sliding Window Protocol

Sliding window protocol has two types:

1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ

Go-Back-N ARQ protocol is also known as Go-Back-N Automatic Repeat Request. It is


a data link layer protocol that uses a sliding window method. In this, if any frame is
corrupted or lost, all subsequent frames have to be sent again.

The size of the sender window is N in this protocol. For example, in Go-Back-8 the
sender window size is 8. The receiver window size is always 1.

If the receiver receives a corrupted frame, it discards it; the receiver does not accept a
corrupted frame. When the sender's timer expires, the sender sends the correct frame again.
The design of the Go-Back-N ARQ protocol is shown below.
The example of Go-Back-N ARQ is shown below in the figure.
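The resending behaviour can be illustrated with a toy simulation (a hypothetical helper, not a real protocol implementation) that counts how many frame transmissions are needed when the channel drops one frame on its first attempt:

```python
def go_back_n(n_frames, window, drop_first_tx_of=None):
    """Count total frame transmissions for Go-Back-N when the channel
    drops the frame numbered `drop_first_tx_of` on its first transmission."""
    base, transmissions, dropped = 0, 0, False
    while base < n_frames:
        end = min(base + window, n_frames)
        lost_at = None
        for seq in range(base, end):
            transmissions += 1
            if seq == drop_first_tx_of and not dropped:
                dropped, lost_at = True, seq
        # the receiver delivers in-order frames up to the lost one (if any);
        # frames sent after the loss are discarded and must be resent
        base = end if lost_at is None else lost_at
    return transmissions
```

With 6 frames and a window of 3, a loss-free run takes 6 transmissions; dropping frame 0 once forces the whole first window to be resent, costing 9 transmissions in total.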
Selective Repeat ARQ

Selective Repeat ARQ is also known as the Selective Repeat Automatic Repeat Request.
It is a data link layer protocol that uses a sliding window method. The Go-Back-N ARQ
protocol works well if errors are rare, but when the error rate is high, a lot of
bandwidth is wasted resending frames. In that case we use the Selective Repeat ARQ
protocol. In this protocol, the size of the sender window is always equal to the size of
the receiver window, and the window size is always greater than 1.

If the receiver receives a corrupt frame, it does not simply discard it; it sends a
negative acknowledgment to the sender. The sender resends that frame as soon as it
receives the negative acknowledgment; there is no waiting for a time-out to send that
frame. The design of the Selective Repeat ARQ protocol is shown below.

The example of the Selective Repeat ARQ protocol is shown below in the figure.
Difference between Go-Back-N ARQ and Selective Repeat ARQ

Go-Back-N ARQ:
o If a frame is corrupted or lost, all subsequent frames have to be sent again.
o At a high error rate, it wastes a lot of bandwidth.
o It is less complex.
o It does not require sorting.
o It does not require searching.
o It is used more.

Selective Repeat ARQ:
o Only the frame that is corrupted or lost is sent again.
o Bandwidth loss is low.
o It is more complex: it has to do sorting and searching, and it requires more storage.
o Sorting is done to get the frames in the correct order.
o A search operation is performed.
o It is used less because it is more complex.

Lecture-2.1.3
• High-level Data Link Control (HDLC)

High-level Data Link Control (HDLC) is a group of data link layer communication protocols
for transmitting data between network points or nodes. Since it is a data link protocol,
data is organized into frames. A frame is transmitted via the network to the destination,
which verifies its successful arrival. It is a bit-oriented protocol applicable to both
point-to-point and multipoint communications.

Transfer Modes
HDLC supports two types of transfer modes, normal response mode and asynchronous
balanced mode.

o Normal Response Mode (NRM) − Here there are two types of stations: a primary
station that sends commands and secondary stations that respond to received
commands. It is used for both point-to-point and multipoint communications.

o Asynchronous Balanced Mode (ABM) − Here the configuration is balanced, i.e. each
station can both send commands and respond to commands. It is used only for
point-to-point communications.

HDLC Frame
HDLC is a bit-oriented protocol where each frame contains up to six fields. The structure
varies according to the type of frame. The fields of an HDLC frame are −

o Flag − It is an 8-bit sequence that marks the beginning and the end of the frame. The
bit pattern of the flag is 01111110.
o Address − It contains the address of the receiver. If the frame is sent by the primary
station, it contains the address(es) of the secondary station(s). If it is sent by the
secondary station, it contains the address of the primary station. The address field
may be from 1 byte to several bytes.
o Control − It is 1 or 2 bytes containing flow and error control information.
o Payload − This carries the data from the network layer. Its length may vary from one
network to another.
o FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code).

Types of HDLC Frames

There are three types of HDLC frames. The type of frame is determined by the control field of
the frame −

o I-frame − I-frames or Information frames carry user data from the network layer.
They also include flow and error control information piggybacked on the user data.
The first bit of the control field of an I-frame is 0.
o S-frame − S-frames or Supervisory frames do not contain an information field. They
are used for flow and error control when piggybacking is not required. The first two
bits of the control field of an S-frame are 10.
o U-frame − U-frames or Unnumbered frames are used for myriad miscellaneous
functions, such as link management. They may contain an information field if
required. The first two bits of the control field of a U-frame are 11.
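The control-field rules above can be expressed as a small classifier. Note this sketch treats the "first bit" as the most significant bit of the control byte, as drawn in textbook frame diagrams; HDLC's on-the-wire bit ordering may differ:

```python
def hdlc_frame_type(control_byte):
    """Classify an HDLC frame from the leading bits of its control field:
    0... -> I-frame, 10.. -> S-frame, 11.. -> U-frame."""
    if control_byte & 0x80 == 0:   # first bit 0
        return "I"
    if control_byte & 0x40 == 0:   # first two bits 10
        return "S"
    return "U"                     # first two bits 11
```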
Point-to-Point Protocol (PPP)

Point-to-Point Protocol (PPP) is a communication protocol of the data link layer that is
used to transmit multiprotocol data between two directly connected (point-to-point)
computers. It is a byte-oriented protocol that is widely used in broadband communications
having heavy loads and high speeds. Since it is a data link layer protocol, data is
transmitted in frames. It is defined in RFC 1661.

Services Provided by PPP

The main services provided by Point-to-Point Protocol are −

o Defining the frame format of the data to be transmitted.


o Defining the procedure of establishing link between two points and exchange of data.
o Stating the method of encapsulation of network layer data in the frame.
o Stating authentication rules of the communicating devices.
o Providing address for network communication.
o Providing connections over multiple links.
o Supporting a variety of network layer protocols by providing a range of services.
Components of PPP

Point-to-Point Protocol is a layered protocol having three components −

o Encapsulation Component − It encapsulates the datagram so that it can be
transmitted over the specified physical layer.
o Link Control Protocol (LCP) − It is responsible for establishing, configuring, testing,
maintaining and terminating links for transmission. It also handles negotiation of
options and the use of features by the two endpoints of the links.
o Authentication Protocols (AP) − These protocols authenticate endpoints for use of
services. The two authentication protocols of PPP are −
o Password Authentication Protocol (PAP)
o Challenge Handshake Authentication Protocol (CHAP)
o Network Control Protocols (NCPs) − These protocols are used for negotiating the
parameters and facilities for the network layer. There is one NCP for every
higher-layer protocol supported by PPP. Some of the NCPs of PPP are −
o Internet Protocol Control Protocol (IPCP)
o OSI Network Layer Control Protocol (OSINLCP)
o Internetwork Packet Exchange Control Protocol (IPXCP)
o DECnet Phase IV Control Protocol (DNCP)
o NetBIOS Frames Control Protocol (NBFCP)
o IPv6 Control Protocol (IPV6CP)

PPP Frame

PPP is a byte-oriented protocol where each field of the frame is composed of one or more
bytes. The fields of a PPP frame are −

o Flag − 1 byte that marks the beginning and the end of the frame. The bit pattern of the
flag is 01111110.
o Address − 1 byte which is set to 11111111 in case of broadcast.
o Control − 1 byte set to a constant value of 11000000.
o Protocol − 1 or 2 bytes that define the type of data contained in the payload field.
o Payload − This carries the data from the network layer. The maximum length of the
payload field is 1500 bytes. However, this may be negotiated between the endpoints
of communication.
o FCS − It is a 2-byte or 4-byte frame check sequence for error detection. The standard
code used is CRC (cyclic redundancy code).
Byte Stuffing in PPP Frame − Byte stuffing is used in the PPP payload field whenever the
flag sequence appears in the message, so that the receiver does not mistake it for the end
of the frame. The escape byte, 01111101, is stuffed before every byte that has the same
value as the flag byte or the escape byte. On receiving the message, the receiver removes
the escape bytes before passing the payload on to the network layer.
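The stuffing rule described above can be sketched as follows. (This follows the simplified scheme in the text; real PPP additionally XORs each escaped byte with 0x20.)

```python
FLAG, ESC = 0x7E, 0x7D  # 01111110 and 01111101

def byte_stuff(payload):
    """Insert the escape byte before any payload byte equal to FLAG or ESC."""
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed):
    """Remove escape bytes, keeping the byte that follows each one."""
    out, escaped = bytearray(), False
    for b in stuffed:
        if escaped:
            out.append(b)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

After stuffing, no unescaped FLAG byte can appear inside the payload, so the receiver can safely use FLAG to find frame boundaries.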

4. Cyclic redundancy check (CRC)

o Unlike the checksum scheme, which is based on addition, CRC is based on binary
division.
o In CRC, a sequence of redundant bits, called cyclic redundancy check bits, is
appended to the end of the data unit so that the resulting data unit becomes exactly
divisible by a second, predetermined binary number.
o At the destination, the incoming data unit is divided by the same number. If at this
step there is no remainder, the data unit is assumed to be correct and is therefore
accepted.
o A remainder indicates that the data unit has been damaged in transit and therefore
must be rejected.
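The modulo-2 division described above can be sketched directly on bit lists. The test below uses the classic dataword 1101011011 with divisor 10011, which yields the remainder 1110:

```python
def crc_remainder(data_bits, divisor_bits):
    """Modulo-2 (XOR) division; returns the remainder to append as CRC bits."""
    n = len(divisor_bits) - 1
    bits = data_bits + [0] * n          # append n zero bits to the dataword
    for i in range(len(data_bits)):
        if bits[i] == 1:                # divisor "goes into" the current bits
            for j in range(len(divisor_bits)):
                bits[i + j] ^= divisor_bits[j]
    return bits[-n:]

def crc_check(codeword_bits, divisor_bits):
    """At the destination: divide the whole codeword; zero remainder = accept."""
    n = len(divisor_bits) - 1
    bits = codeword_bits[:]
    for i in range(len(codeword_bits) - n):
        if bits[i] == 1:
            for j in range(len(divisor_bits)):
                bits[i + j] ^= divisor_bits[j]
    return all(b == 0 for b in bits[-n:])
```

Real implementations use standard generator polynomials (e.g. CRC-16, CRC-32) and table-driven byte-at-a-time division, but the arithmetic is the same XOR division shown here.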

Chapter-2.2.1
• Medium Access Control Sublayer (MAC sublayer)

The medium access control (MAC) layer is a sublayer of the data link layer of the Open
Systems Interconnection (OSI) reference model for data transmission. It is responsible for
flow control and multiplexing for the transmission medium. It controls the transmission of
data packets via shared channels and sends data over the network interface card.

MAC Layer in the OSI Model

The Open System Interconnections (OSI) model is a layered networking framework that
conceptualizes how communications should be done between heterogeneous systems. The
data link layer is the second lowest layer. It is divided into two sublayers −

o The logical link control (LLC) sublayer


o The medium access control (MAC) sublayer

The following diagram depicts the position of the MAC layer −

Functions of MAC Layer


o It provides an abstraction of the physical layer to the LLC and upper layers of the OSI
network.
o It is responsible for encapsulating frames so that they are suitable for transmission
via the physical medium.
o It resolves the addressing of source station as well as the destination station, or
groups of destination stations.
o It performs multiple access resolutions when more than one data frame is to be
transmitted. It determines the channel access methods for transmission.
o It also performs collision resolution and initiates retransmission in case of collisions.
o It generates the frame check sequences and thus contributes to protection against
transmission errors.
MAC Addresses
MAC address or media access control address is a unique identifier allotted to a network
interface controller (NIC) of a device. It is used as a network address for data transmission
within a network segment like Ethernet, Wi-Fi, and Bluetooth.

A MAC address is assigned to a network adapter at the time of manufacturing; it is
hardwired or hard-coded into the network interface card (NIC). A MAC address comprises six
groups of two hexadecimal digits, separated by hyphens, colons, or no separators. An
example of a MAC address is 00:0A:89:5B:F0:11.
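Normalizing the separator styles mentioned above (hyphens, colons, or none) can be done with a small helper (an illustrative utility, not a standard library function):

```python
import re

def normalize_mac(mac):
    """Accept hyphen-, colon-, or separator-free MAC strings and return
    the canonical colon-separated lowercase form."""
    digits = re.sub(r"[^0-9a-fA-F]", "", mac)
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()
```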

Channel allocation is the process of dividing a single channel and allotting it to multiple
users to carry user-specific tasks. The number of users may vary every time the process
takes place. If there are N users and the channel is divided into N equal-sized
subchannels, each user is assigned one portion. If the number of users is small and does
not vary over time, then Frequency Division Multiplexing can be used, as it is a simple and
efficient channel bandwidth allocation technique.

Channel allocation problem can be solved by two schemes: Static Channel Allocation in LANs
and MANs, and Dynamic Channel Allocation.

These are explained as following below.

1. Static Channel Allocation in LANs and MANs:

It is the classical or traditional approach of allocating a single channel among multiple
competing users using Frequency Division Multiplexing (FDM). If there are N users, the
bandwidth is divided into N equal-sized portions, each user being assigned one portion.
Since each user has a private frequency band, there is no interference between users.

However, dividing the channel into a fixed number of chunks is inefficient when traffic is
bursty or the number of users varies.

T = 1/(U*C − L)

T(FDM) = 1/(U*(C/N) − L/N) = N*T

Where,

T = mean time delay,

C = capacity of the channel,

L = arrival rate of frames,

1/U = bits/frame,

N = number of subchannels,

T(FDM) = mean time delay under Frequency Division Multiplexing
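Plugging numbers into the delay formulas above (with an assumed example of a 100 Mbps channel, 10,000-bit frames, and an arrival rate of 5,000 frames/sec) shows that splitting the channel into N subchannels multiplies the mean delay by N:

```python
def mean_delay(capacity_bps, frame_bits, arrival_rate):
    """T = 1/(U*C - L), where U = 1/frame_bits (frames per bit)."""
    u = 1.0 / frame_bits
    return 1.0 / (u * capacity_bps - arrival_rate)

def fdm_delay(capacity_bps, frame_bits, arrival_rate, n):
    """Each of the N subchannels gets C/N capacity and L/N of the traffic."""
    u = 1.0 / frame_bits
    return 1.0 / (u * (capacity_bps / n) - arrival_rate / n)
```

With the example numbers, the single shared channel gives T = 200 microseconds, while 10-way FDM gives 2 milliseconds: 10 times worse, which is why static allocation performs poorly for bursty traffic.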


2. Dynamic Channel Allocation:

Possible assumptions include:

Station Model: Each of the N stations independently produces frames. The probability of a
frame being generated in an interval of length Δt is λΔt, where λ is the constant arrival
rate of new frames.

Single Channel Assumption: A single channel is available; all stations are equivalent and
can send and receive on that channel.

Collision Assumption: If two frames overlap in time, that is a collision. Any collision is
an error, and both frames must be retransmitted. Collisions are the only possible errors.

Time can be divided into slots (slotted) or treated as continuous.

Stations may or may not be able to sense whether the channel is busy before transmitting.

Protocol Assumptions:

o N independent stations.
o A station is blocked until its generated frame is transmitted.
o The probability of a frame being generated in a period of length Δt is λΔt, where λ is
the arrival rate of frames.
o Only a single channel is available.
o Time can be either continuous or slotted.
o Carrier Sense: a station can sense whether the channel is busy before transmitting.
o No Carrier Sense: a timeout is used to detect lost data.

Random Access Protocols

Random access protocols form a category of multiple access protocols, divided into four
types: ALOHA, CSMA, CSMA/CD, and CSMA/CA. In this article, we will cover all of these
random access protocols in detail.

Have you ever been to a railway station and noticed the ticket counter there? Compare a
disorderly crowd with an ordered queue: which is more productive? The ordered one, of
course. Just to keep things working and avoid problems we have rules or protocols, like
"please stand in the queue", "do not push each other", "wait for your turn", etc. In the
same way, computer network channels also have protocols such as multiple access protocols,
random access protocols, etc.

Let's say you are talking to your friend using a mobile phone. This means there is a link
established between you and him. But the point to be remembered is that the communication
channel between you and him (the sender & the receiver or vice-versa) is not always a
dedicated link, which means the channels are not only providing service to you at that time
but to others as well. This means multiple users might be communicating through the same
channel.

How is that possible? The reason is multiple access protocols. If you refer to the OSI
model you will come across the data link layer. Divide the layer into two parts: the upper
half takes care of data link control, and the lower half takes care of resolving access to
the shared media, as shown in the above diagram.

The following diagram classifies the multiple-access protocol. In this article, we are going to
cover Random Access Protocol.

Random Access Protocols


Once again, let's use the example of mobile phone communication. Whenever you call
someone, a connection between you and the desired person is established, and anyone can
call anyone. So here all the users (stations) have equal priority, where any station can
send data depending on the medium's state (idle or busy). If your friend is talking to
someone else on the phone, then the status is busy and you cannot establish a connection;
and since all users have equal priority, you cannot disconnect your friend's ongoing call
to connect yours.

The random access protocols consist of the following characteristics:

1. There is no time restriction for sending the data (you can talk to your friend without a
time restriction).
2. There is no fixed sequence of stations transmitting the data.

As in the above diagram you might have observed that the random-access protocol is further
divided into four categories, which are:

1. ALOHA
2. CSMA
3. CSMA/CD
4. CSMA/CA

Let's cover each one of them, one by one.

ALOHA Random Access Protocol

The ALOHA protocol, also known as the ALOHA method, is a simple communication scheme in
which every transmitting station or source in a network sends data whenever a frame is
available for transmission. If the frame reaches its destination successfully, the next
frame is lined up for transmission. But if the data frame is not received by the receiver
(perhaps due to a collision), the frame is sent again until it successfully reaches the
receiver's end.

Whenever we talk about a wireless broadcast system or a half-duplex two-way link, the
ALOHA method works efficiently. But the method breaks down as the network becomes more
complex, e.g. in Ethernet, where the system involves multiple sources and destinations
sharing a common data path or channel. There, conflicts occur because data frames collide
and the information is lost. Following is the flow chart of Pure ALOHA.

So, to minimize these collisions, optimize network efficiency, and increase the number of
subscribers that can use a given network, slotted ALOHA was developed. This system uses
signals termed beacons, which are sent at precise time intervals and inform each source
when the channel is clear to send a frame.

Now that we know ALOHA's two types, Pure and Slotted ALOHA, the following are the
differences between them.
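The textbook throughput formulas for the two variants can be evaluated directly, where G is the offered load (average number of transmission attempts per frame time) and S is the throughput:

```python
import math

def pure_aloha_throughput(g):
    """S = G * e^(-2G); peaks at about 0.184 when G = 0.5."""
    return g * math.exp(-2 * g)

def slotted_aloha_throughput(g):
    """S = G * e^(-G); peaks at about 0.368 when G = 1."""
    return g * math.exp(-g)
```

Slotted ALOHA halves the vulnerable period (one slot instead of two frame times), which is why its peak throughput is exactly double that of pure ALOHA.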

CSMA Random Access Protocol

CSMA stands for Carrier Sense Multiple Access. We have seen that when two or more
stations start sending data, a collision occurs. The CSMA method was developed to decrease
the chance of collision when two or more stations start sending their signals over the
data link layer. How? CSMA makes each station first check the medium (whether it is busy
or not) before sending any data packet.
Here, Vulnerable time = Propagation Time

But, what to do if the channels are busy? Now, here the persistence methods can be applied
to help the station act when the channel is busy or idle.

The CSMA has 4 access modes:

o 1-persistent mode: the node first checks the channel; if the channel is idle the
station transmits immediately, otherwise it keeps sensing continuously and transmits
the data frame as soon as the channel becomes idle.
o Non-persistent mode: the station checks the channel as in 1-persistent mode, but
when the channel is busy it checks again only after a random amount of time, unlike
1-persistent where stations keep checking continuously.
o P-persistent mode: the station checks the channel and, if it is idle, transmits the
data frame with probability P; with probability 1−P it defers to the next slot and
repeats this cycle until the data frame is successfully sent.
o O-persistent mode: transmission occurs based on a priority order of stations decided
beforehand. If the channel is idle, each station waits for its turn to send the
data frame.
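The decision a p-persistent station makes in a single slot can be sketched as follows (a toy model of one decision step, not a full MAC simulation):

```python
import random

def p_persistent_decision(channel_idle, p, rng=random):
    """One slot of p-persistent CSMA: on an idle channel transmit with
    probability p, otherwise defer to the next slot; always wait on busy."""
    if not channel_idle:
        return "wait"
    return "transmit" if rng.random() < p else "defer"
```

In a real station the "defer" branch would be followed by sensing the channel again at the next slot, repeating until the frame is sent.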
Throughput & Efficiency of CSMA:

It is comparatively much greater than the throughput of pure and slotted ALOHA. Here, for
the 1-persistent mode, the throughput is 50% when G=1 and for Non-persistent mode, the
throughput can reach up to 90%.

CSMA/CD Random Access Protocol

CSMA/CD means CSMA with Collision Detection.

In this, whenever a station transmits a data frame, it monitors the channel (medium) to
learn the fate of the transmission, i.e. whether it succeeded or failed. If the
transmission succeeds, the station prepares the next frame; otherwise it resends the
failed data frame. The point to remember is that the frame transmission time should be at
least twice the maximum propagation time, the worst case occurring when the distance
between the two stations involved in a collision is maximum.
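The "transmission time at least twice the propagation time" condition fixes a minimum frame length: L/B ≥ 2d/v, so L ≥ 2dB/v. A quick sketch, with example numbers assumed for illustration:

```python
def min_frame_bits(distance_m, prop_speed_mps, bandwidth_bps):
    """Frame transmission time L/B must be >= 2 * d/v, so L >= 2*d*B/v."""
    return 2 * distance_m * bandwidth_bps / prop_speed_mps

# e.g. a 2500 m cable, signal speed 2e8 m/s, 10 Mbps link
# requires frames of at least 250 bits for collision detection to work
```

This is why CSMA/CD networks impose a minimum frame size: a frame shorter than this bound could finish transmitting before a collision at the far end is detected.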

CSMA/CA Random Access Protocol

CSMA/CA means CSMA with collision avoidance.

To detect possible collisions, the sender listens for acknowledgements: if there is only
one acknowledgment present (its own), the data frame has been sent successfully. But if
there are two or more acknowledgment signals, a collision has occurred.

This method avoids collisions by:

o Interframe space: suppose a station waits for the channel to become idle and finds
it idle; it does not send the data frame immediately (to avoid a collision due to
propagation delay) but waits for a period called the interframe space, or IFS, after
which it checks the medium again. The IFS duration depends on the priority of the
station.
o Contention window: here, time is divided into slots. When the sender is ready to
transmit data, it chooses a random number of slots as waiting time, and this number
doubles every time the channel is found busy. If the channel is busy, the station
does not restart the entire process; it merely restarts the timer when the channel
is found idle again.
o Acknowledgment: as discussed above, the sender retransmits the data if an
acknowledgment is not received before the timer expires.

Lecture-2.2.2

Controlled Access

In controlled access, the stations seek information from one another to find which station
has the right to send. It allows only one node to send at a time, to avoid collision of messages
on shared medium.

The three controlled-access methods are:

1. Reservation
2. Polling
3. Token Passing
Reservation
o In the reservation method, a station needs to make a reservation before sending
data.
o The time line has two kinds of periods:
5. Reservation interval of fixed time length
6. Data transmission period of variable frames.
o If there are N stations, the reservation interval is divided into N slots, and each
station has one slot.
o If station 1 has a frame to send, it transmits a 1 bit during slot 1. No other
station is allowed to transmit during this slot.
o In general, the ith station announces that it has a frame to send by inserting a 1 bit
into the ith slot. After all N slots have been checked, each station knows which stations
wish to transmit.
o The stations which have reserved their slots transfer their frames in that order.
o After data transmission period, next reservation interval begins.
o Since everyone agrees on who goes next, there will never be any collisions.

The following figure shows a situation with five stations and a five-slot reservation frame. In
the first interval, only stations 1, 3, and 4 have made reservations. In the second interval,
only station 1 has made a reservation.
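The reservation interval is essentially a bitmap: each station sets its own bit, and the set bits fix the transmission order. A minimal sketch (the function name is ours):

```python
def reservation_order(requests):
    """Stations that set their bit in the reservation interval transmit
    in slot order: station i owns slot i."""
    return [i for i, wants in enumerate(requests, start=1) if wants]

# Five-slot reservation frame: stations 1, 3 and 4 reserve, as in the figure.
print(reservation_order([True, False, True, True, False]))  # [1, 3, 4]
```

Because every station sees the same bitmap, all of them agree on who transmits next, which is why no collisions can occur.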

Polling
o Polling process is similar to the roll-call performed in class. Just like the teacher, a
controller sends a message to each node in turn.
o In this, one acts as a primary station(controller) and the others are secondary
stations. All data exchanges must be made through the controller.
o The message sent by the controller contains the address of the node being selected
for granting access.
o Although all nodes receive the message, only the addressed one responds to it and
sends data, if any. If there is no data, usually a “poll reject” (NAK) message is sent
back.
o Problems include high overhead of the polling messages and high dependence on
the reliability of the controller.
Efficiency

Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then,

Efficiency = Tt/(Tt + Tpoll)
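The efficiency formula above is straightforward to express in code. The numbers in the example are assumed for illustration:

```python
def polling_efficiency(t_t, t_poll):
    """Efficiency = Tt / (Tt + Tpoll): the fraction of time spent on
    useful data transmission rather than polling overhead."""
    return t_t / (t_t + t_poll)

# Assumed example: 8 ms of data transmission per 2 ms of polling.
print(polling_efficiency(8.0, 2.0))  # 0.8
```

As Tpoll grows relative to Tt, efficiency drops, which is the "high overhead" problem noted above.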

Token Passing
o In token passing scheme, the stations are connected logically to each other in form of
ring and access of stations is governed by tokens.
o A token is a special bit pattern or a small message that circulates from one station
to the next in some predefined order.
o In a token ring, the token is passed from one station to the adjacent station in the
ring, whereas in a token bus, each station uses the bus to send the token to the next
station in some predefined order.
o In both cases, the token represents permission to send. If a station has a frame queued
for transmission when it receives the token, it can send that frame before it passes
the token to the next station. If it has no queued frame, it simply passes the token along.
o After sending a frame, each station must wait for all N stations (including itself) to
send the token to their neighbors and the other N – 1 stations to send a frame, if they
have one.
o Problems such as duplication of the token, loss of the token, insertion of a new
station, and removal of a station need to be tackled for correct and reliable
operation of this scheme.
Performance

Performance of token ring can be concluded by 2 parameters:-

1. Delay, which is a measure of the time between when a packet is ready and when it is
delivered. So, the average time (delay) required to send a token to the next station =
a/N.
2. Throughput, which is a measure of the successful traffic.

Throughput, S = 1/(1 + a/N) for a<1

and

S = 1/{a(1 + 1/N)} for a>1.

where N = number of stations

a = Tp/Tt

(Tp = propagation delay and Tt = transmission delay)
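The two throughput formulas can be combined into one helper; the example values of a and N are assumed for illustration:

```python
def token_ring_throughput(a, n):
    """Throughput S from the formulas above, where a = Tp/Tt
    (propagation delay over transmission delay) and n = number of stations."""
    if a < 1:
        return 1 / (1 + a / n)
    return 1 / (a * (1 + 1 / n))

# Assumed example: short links (a = 0.2) shared by 10 stations.
print(token_ring_throughput(0.2, 10))  # close to 1, i.e. nearly full utilization
```

The a < 1 case models rings where transmission time dominates propagation time, so the token overhead per frame is small and throughput stays near 1.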

IEEE 802 wireless standards

•By Alexander S. Gillis, Technical Writer and Editor


•IEEE 802 is a collection of networking standards that cover the physical and data-link layer
specifications for technologies such as Ethernet and wireless. These specifications apply to
local area networks (LANs) and metropolitan area networks (MANs). IEEE 802 also aids in
ensuring multi-vendor interoperability by promoting standards for vendors to follow.

•Essentially, the IEEE 802 standards help make sure internet services and technologies follow
a set of recommended practices so network devices can all work together smoothly.

•IEEE 802 is divided into 22 parts that cover the physical and data-link aspects of networking.
The family of standards is developed and maintained by the IEEE 802 LAN/MAN Standards
Committee, also called the LMSC. IEEE stands for Institute of Electrical and Electronics
Engineers.

IEEE 802

•The set of standards started in 1979 with a "local network for computer interconnection"
standard, which was approved a year later. The LMSC has made more than 70 standards for
IEEE 802.

•Some commonly used standards include those for Ethernet, bridging and virtual bridged
LANs, wireless LAN, wireless PAN, MAN and radio access networks as well as media
independent handover services. The better-known specifications include 802.3 Ethernet,
802.11 Wi-Fi and 802.15 Bluetooth/ZigBee. However, some of these standards have been
labeled as disbanded or hibernating and are either superseded by newer standards or are
being reworked. Using an open process, the LMSC advocates for these standards globally.

•Individual "working groups" are decided on and assigned to each area in order to provide
each area with an acceptable amount of focus. IEEE 802 specifications also split the data link
layer into two different layers -- an LLC layer and a MAC layer.

Standards can be found in a PDF provided by the LMSC for up to six months after they have
been published. All standards stay in place until they are replaced with another document or
withdrawn.

Why IEEE 802 standards are important

•LMSC was formed in 1980 in order to standardize network protocols and provide a path to
make compatible devices across numerous industries.

•Without these standards, equipment suppliers could manufacture network hardware that
would only connect to certain computers. It would be much more difficult to connect to
systems not using the same set of networking equipment. Standardizing protocols helps ensure
that multiple types of devices can connect to multiple network types. It also keeps network
management from becoming the challenge it would be without such standards in place.

•IEEE 802 also coordinates with other international standards bodies, such as ISO, to help
maintain international standards.

•In addition, the "802" in IEEE 802 does not stand for anything with high significance. 802 was
just the next numbered project.

Examples of IEEE 802 uses

•The IEEE 802 specifications can be used by commercial organizations to ensure their
products maintain any newly specified standards. So, for example, the 802.11 specification
that applies to Wi-Fi could be used to make sure Wi-Fi devices work together under one
standard. In the same way, IEEE 802 can help maintain local area network standards.
•These specifications can also define what connectivity infrastructure will be used for --
individual networks, or those at a larger organizational scale.

•The IEEE 802 specifications apply to hardware and software products. To ensure that no
single manufacturer has undue input on the standards, there is a voting protocol in place. This
makes sure that one organization does not influence the standards too much.

Lecture-2.3.1

Network Layer
o The Network Layer is the third layer of the OSI model.
o It handles the service requests from the transport layer and further
forwards the service request to the data link layer.
o The network layer translates the logical addresses into physical addresses.
o It determines the route from the source to the destination and also
manages the traffic problems such as switching, routing and controls the
congestion of data packets.
o The main role of the network layer is to move the packets from sending
host to the receiving host.
The main functions performed by the network layer are:
o Routing: When a packet reaches the router's input link, the router moves the
packet to the appropriate output link. For example, a packet from S1
destined for S2 must be forwarded to the next router on the path to S2.
o Logical Addressing: The data link layer implements the physical
addressing and network layer implements the logical addressing. Logical
addressing is also used to distinguish between source and destination
system. The network layer adds a header to the packet which includes the
logical addresses of both the sender and the receiver.
o Internetworking: This is the main role of the network layer that it
provides the logical connection between different types of networks.
o Fragmentation: The fragmentation is a process of breaking the packets
into the smallest individual data units that travel through different
networks.
Forwarding & Routing

In Network layer, a router is used to forward the packets. Every router has a
forwarding table. A router forwards a packet by examining a packet's header field
and then using the header field value to index into the forwarding table. The value
stored in the forwarding table corresponding to the header field value indicates
the router's outgoing interface link to which the packet is to be forwarded.

For example, suppose a packet with a header field value of 0111 arrives at a router. The
router indexes this header value into the forwarding table, which determines that
the output link interface is 2, and the router forwards the packet to interface 2.
The routing algorithm determines the values that are inserted in the forwarding
table. The routing algorithm can be centralized or decentralized.
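The lookup described above is just an index into a table. A minimal sketch, using the 0111 value from the example plus two hypothetical entries of our own:

```python
# Hypothetical forwarding table: header field value -> output link interface.
# Only the 0111 -> 2 mapping comes from the example in the text.
forwarding_table = {"0111": 2, "1001": 1, "1100": 3}

def forward(header_bits):
    """Index the packet's header field value into the forwarding table
    to find the router's outgoing interface."""
    return forwarding_table[header_bits]

print(forward("0111"))  # 2
```

The routing algorithm's job is to populate `forwarding_table`; the per-packet work is only this lookup.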
Services Provided by the Network Layer
o Guaranteed delivery: This layer provides the service which guarantees
that the packet will arrive at its destination.
o Guaranteed delivery with bounded delay: This service guarantees that
the packet will be delivered within a specified host-to-host delay bound.
o In-Order packets: This service ensures that the packet arrives at the
destination in the order in which they are sent.
o Guaranteed max jitter: This service ensures that the amount of time
taken between two successive transmissions at the sender is equal to the
time between their receipt at the destination.
o Security services: The network layer provides security by using a session
key between the source and destination host. The network layer in the
source host encrypts the payloads of datagrams being sent to the
destination host. The network layer in the destination host would then
decrypt the payload. In such a way, the network layer maintains the data
integrity and source authentication services.
Network Addressing
o Network Addressing is one of the major responsibilities of the network
layer.
o Network addresses are always logical, i.e., software-based addresses.
o A host, also known as an end system, has one link to the network. The
boundary between the host and the link is known as an interface. Therefore,
the host has only one interface.
o A router is different from the host in that it has two or more links that
connect to it. When a router forwards the datagram, then it forwards the
packet to one of the links. The boundary between the router and link is
known as an interface, and the router can have multiple interfaces, one for
each of its links. Each interface is capable of sending and receiving the IP
packets, so IP requires each interface to have an address.
o Each IP address is 32 bits long, and they are represented in the form of
"dot-decimal notation" where each byte is written in the decimal form, and
they are separated by the period. An IP address would look like
193.32.216.9 where 193 represents the decimal notation of first 8 bits of
an address, 32 represents the decimal notation of second 8 bits of an
address.

Let's understand through a simple example.

o In the above figure, a router has three interfaces labeled as 1, 2 & 3 and
each router interface contains its own IP address.
o Each host contains its own interface and IP address.
o All the interfaces attached to LAN 1 have an IP address of the form
223.1.1.xxx, and the interfaces attached to LAN 2 and LAN 3 have
IP addresses of the form 223.1.2.xxx and 223.1.3.xxx respectively.
o Each IP address consists of two parts. The first part (first three bytes in IP
address) specifies the network and second part (last byte of an IP address)
specifies the host in the network.
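The network/host split described above is a simple string operation on the dotted-decimal form. A sketch for the three-byte-network case used in the LAN example (the function name is ours):

```python
def split_address(ip, net_bytes=3):
    """Split a dotted-decimal IPv4 address into its network part (the
    first three bytes here, as in the LAN example) and its host part."""
    octets = ip.split(".")
    return ".".join(octets[:net_bytes]), octets[net_bytes]

print(split_address("223.1.1.7"))  # ('223.1.1', '7')
```

All hosts on LAN 1 share the network part 223.1.1 and differ only in the final host byte.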
Classful Addressing

An IP address is 32-bit long. An IP address is divided into sub-classes:

o Class A
o Class B
o Class C
o Class D
o Class E
An IP address is divided into two parts:

o Network ID: It identifies the network.


o Host ID: It identifies the host within that network.

In the above diagram, we observe that each class has a specific range of IP
addresses. The class of an IP address determines the number of bits used in
the class and the number of networks and hosts available in the class.

Class A

In Class A, an IP address is assigned to those networks that contain a large number
of hosts.

o The network ID is 8 bits long.


o The host ID is 24 bits long.

In Class A, the first bit in higher order bits of the first octet is always set to 0 and
the remaining 7 bits determine the network ID. The 24 bits determine the host ID
in any network.

The total number of networks in Class A = 2^7 = 128 network addresses

The total number of hosts in Class A = 2^24 - 2 = 16,777,214 host addresses

Class B

In Class B, an IP address is assigned to those networks that range from small-sized
to large-sized networks.
o The Network ID is 16 bits long.
o The Host ID is 16 bits long.

In Class B, the higher order bits of the first octet are always set to 10, and the
remaining 14 bits determine the network ID. The other 16 bits determine the host
ID.

The total number of networks in Class B = 2^14 = 16384 network addresses

The total number of hosts in Class B = 2^16 - 2 = 65534 host addresses

Class C

In Class C, an IP address is assigned to only small-sized networks.

o The Network ID is 24 bits long.


o The host ID is 8 bits long.

In Class C, the higher order bits of the first octet are always set to 110, and the
remaining 21 bits determine the network ID. The 8 bits of the host ID determine
the host in a network.

The total number of networks = 2^21 = 2097152 network addresses

The total number of hosts = 2^8 - 2 = 254 host addresses

Class D

In Class D, an IP address is reserved for multicast addresses. It does not possess
subnetting. The higher order bits of the first octet are always set to 1110, and the
remaining bits determine the host ID in any network.

Class E

In Class E, an IP address is reserved for future use or for research and
development purposes. It does not possess any subnetting. The higher order bits
of the first octet are always set to 1111, and the remaining bits determine the host
ID in any network.

Rules for assigning Host ID:

The Host ID is used to determine the host within any network. The Host ID is
assigned based on the following rules:

o The Host ID must be unique within any network.


o The Host ID in which all the bits are set to 0 cannot be assigned as it is used
to represent the network ID of the IP address.
o The Host ID in which all the bits are set to 1 cannot be assigned as it is
reserved for the broadcast address of the network.
Rules for assigning Network ID:

If the hosts are located within the same local network, then they are assigned with
the same network ID. The following are the rules for assigning Network ID:

o The network ID cannot start with 127, as the 127 block of Class A is reserved for loopback addresses.


o The Network ID in which all the bits are set to 0 cannot be assigned as it is
used to specify a particular host on the local network.
o The Network ID in which all the bits are set to 1 cannot be assigned as it is
reserved for the broadcast address.
Classful Network Architecture

Class  Higher bits  NET ID bits  HOST ID bits  No. of networks  No. of hosts per network  Address range
A      0            8            24            2^7              2^24                      0.0.0.0 to 127.255.255.255
B      10           16           16            2^14             2^16                      128.0.0.0 to 191.255.255.255
C      110          24           8             2^21             2^8                       192.0.0.0 to 223.255.255.255
D      1110         Not Defined  Not Defined   Not Defined      Not Defined               224.0.0.0 to 239.255.255.255
E      1111         Not Defined  Not Defined   Not Defined      Not Defined               240.0.0.0 to 255.255.255.255
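Because each class is identified by the high-order bits of the first octet, the class of an address can be determined from the first octet alone. A small sketch (the function name is ours):

```python
def ip_class(ip):
    """Determine the classful address class from the first octet,
    following the higher-order-bit ranges in the table above."""
    first = int(ip.split(".")[0])
    if first < 128:
        return "A"   # leading bit 0
    if first < 192:
        return "B"   # leading bits 10
    if first < 224:
        return "C"   # leading bits 110
    if first < 240:
        return "D"   # leading bits 1110
    return "E"       # leading bits 1111

print(ip_class("193.32.216.9"))  # C
```

The example address 193.32.216.9 from earlier in the section falls in the 192-223 range, so it is a Class C address.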

IPv6

•IPv4 produces about 4 billion addresses, and its developers thought these
addresses would be enough, but they were wrong. IPv6 is the next generation of IP
addresses. The main difference between IPv4 and IPv6 is the address size:
IPv4 is a 32-bit address, whereas IPv6 is a 128-bit hexadecimal
address. IPv6 provides a larger address space, and it has a simpler header than
IPv4.
•It provides transition strategies that convert IPv4 into IPv6, and these strategies
are as follows:

•Dual stacking: It allows us to have both the versions, i.e., IPv4 and IPv6, on the
same device.

•Tunneling: In this approach, IPv6 hosts communicate across an intervening IPv4
network by carrying IPv6 packets inside IPv4 packets.

•Network Address Translation: The translation allows communication
between hosts having different versions of IP.

•The hexadecimal address contains both numbers and letters. Due to the
use of both, IPv6 is capable of producing over 340
undecillion (3.4 x 10^38) addresses.

•IPv6 is a 128-bit hexadecimal address made up of 8 sets of 16 bits each, and
these 8 sets are separated by colons. In IPv6, each hexadecimal character
represents 4 bits, so each 16-bit set is written as 4 hexadecimal digits.
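The 4-bits-per-hex-digit rule can be checked in one line; the 16-bit value below is an assumed example:

```python
# Each hexadecimal character represents 4 bits, so one 16-bit set
# is written as 4 hex digits; 8 such sets form the 128-bit address.
field = 0b0010000000000001      # an assumed 16-bit set
print(format(field, "04x"))     # 2001
```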

Address format

•The above diagram shows the address format of IPv4 and IPv6. An IPv4 is a 32-
bit decimal address. It contains 4 octets or fields separated by 'dot', and each
field is 8-bit in size. The number that each field contains should be in the range of
0-255. Whereas an IPv6 is a 128-bit hexadecimal address. It contains 8 fields
separated by a colon, and each field is 16-bit in size.

IPv4 vs IPv6

o Address length: IPv4 is a 32-bit address; IPv6 is a 128-bit address.
o Fields: IPv4 is a numeric address consisting of 4 fields separated by dots (.); IPv6 is an alphanumeric address consisting of 8 fields separated by colons (:).
o Classes: IPv4 has 5 different classes of IP address (Class A, Class B, Class C, Class D, and Class E); IPv6 does not contain classes of IP addresses.
o Number of IP addresses: IPv4 has a limited number of IP addresses; IPv6 has a large number of IP addresses.
o VLSM: IPv4 supports VLSM (Variable Length Subnet Mask), which divides IP addresses into subnets of different sizes; IPv6 does not support VLSM.
o Address configuration: IPv4 supports manual and DHCP configuration; IPv6 supports manual, DHCP, auto-configuration, and renumbering.
o Address space: IPv4 generates 4 billion unique addresses; IPv6 generates 340 undecillion unique addresses.
o End-to-end connection integrity: unachievable in IPv4; achievable in IPv6.
o Security features: in IPv4, security depends on the application, as the protocol was not designed with security in mind; in IPv6, IPsec is built in for security purposes.
o Address representation: the IPv4 address is represented in decimal; the IPv6 address is represented in hexadecimal.
o Fragmentation: in IPv4, fragmentation is done by the senders and the forwarding routers; in IPv6, fragmentation is done by the senders only.
o Packet flow identification: IPv4 provides no mechanism for packet flow identification; IPv6 uses the flow label field in the header.
o Checksum field: available in IPv4; not available in IPv6.
o Transmission scheme: IPv4 uses broadcasting; IPv6 uses multicasting, which provides more efficient network operations.
o Encryption and authentication: not provided by IPv4; provided by IPv6.
o Number of octets: IPv4 consists of 4 octets; IPv6 consists of 8 fields of 2 octets each, for a total of 16 octets.

Lecture-2.3.2

Routing
o Routing is the process of selecting a path along which data can be transferred from
the source to the destination. Routing is performed by a special device known as a router.
o A router works at the network layer in the OSI model and at the internet layer in the
TCP/IP model.
o A router is a networking device that forwards the packet based on the information
available in the packet header and forwarding table.
o The routing algorithms are used for routing the packets. The routing algorithm is
nothing but software responsible for deciding the optimal path through which a
packet can be transmitted.
o The routing protocols use the metric to determine the best path for the packet
delivery. The metric is the standard of measurement such as hop count, bandwidth,
delay, current load on the path, etc. used by the routing algorithm to determine the
optimal path to the destination.
o The routing algorithm initializes and maintains the routing table for the process of
path determination.
Distance Vector Routing Algorithm
o The Distance vector algorithm is iterative, asynchronous and distributed.
o Distributed: It is distributed in that each node receives information from one or more
of its directly attached neighbors, performs calculation and then distributes the result
back to its neighbors.
o Iterative: It is iterative in that its process continues until no more information is
available to be exchanged between neighbors.
o Asynchronous: It does not require that all of its nodes operate in the lock step with
each other.
o The Distance vector algorithm is a dynamic algorithm.
o It was used in the ARPANET, and it is used in RIP.
o Each router maintains a distance table known as Vector.
Three Keys to understand the working of Distance Vector Routing
Algorithm:
o Knowledge about the whole network: Each router shares its knowledge about
the entire network. The router sends its collected knowledge about the network to
its neighbors.
o Routing only to neighbors: The router sends its knowledge about the network
only to those routers to which it has direct links. It sends whatever it has learned
about the network through its ports. The receiving router uses this
information to update its own routing table.
o Information sharing at regular intervals: Every 30 seconds, the router sends the
information to the neighboring routers.
Distance Vector Routing Algorithm

Let dx(y) be the cost of the least-cost path from node x to node y. The least costs are related by
Bellman-Ford equation,

dx(y) = minv{c(x,v) + dv(y)}

where the minimum is taken over all neighbors v of x. After traveling from x to a neighbor
v, if we take the least-cost path from v to y, the total path cost is c(x,v) + dv(y). The least
cost from x to y is the minimum of c(x,v) + dv(y) taken over all neighbors v.

With the Distance Vector Routing algorithm, the node x contains the following routing
information:

o For each neighbor v, the cost c(x,v) is the path cost from x to directly attached
neighbor, v.
o The distance vector x, i.e., Dx = [ Dx(y) : y in N ], containing its cost to all destinations,
y, in N.
o The distance vector of each of its neighbors, i.e., Dv = [ Dv(y) : y in N ] for each neighbor
v of x.

Distance vector routing is an asynchronous algorithm in which node x sends a copy of its
distance vector to all its neighbors. When node x receives a new distance vector from one of
its neighbors, v, it saves the distance vector of v and uses the Bellman-Ford equation
to update its own distance vector. The equation is given below:

dx(y) = minv{ c(x,v) + dv(y)} for each node y in N

The node x has updated its own distance vector table by using the above equation and sends
its updated table to all its neighbors so that they can update their own distance vectors.

Algorithm
At each node x,

Initialization

for all destinations y in N:

Dx(y) = c(x,y) // If y is not a neighbor then c(x,y) = ∞

for each neighbor w

Dw(y) = ? for all destination y in N.

for each neighbor w

send distance vector Dx = [ Dx(y) : y in N ] to w

loop

wait(until I receive any distance vector from some neighbor w)

for each y in N:

Dx(y) = minv{c(x,v)+Dv(y)}

If Dx(y) is changed for any destination y

Send distance vector Dx = [ Dx(y) : y in N ] to all neighbors

forever
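The pseudocode above can be condensed into a runnable sketch. For simplicity this version iterates all nodes synchronously until no vector changes, whereas the real protocol is asynchronous; the three-node topology and link costs are assumed for illustration:

```python
# Assumed example topology: symmetric link costs between neighbors.
costs = {("x", "y"): 1, ("y", "x"): 1,
         ("y", "z"): 2, ("z", "y"): 2,
         ("x", "z"): 7, ("z", "x"): 7}
nodes = ["x", "y", "z"]
INF = float("inf")

def c(u, v):
    """Direct link cost; infinity if u and v are not neighbors."""
    return 0 if u == v else costs.get((u, v), INF)

# Initialization: Dx(y) = c(x, y) for every destination y.
D = {u: {v: c(u, v) for v in nodes} for u in nodes}

# Repeat the Bellman-Ford update Dx(y) = min_v { c(x,v) + Dv(y) }
# until no distance vector changes (the pseudocode's loop).
changed = True
while changed:
    changed = False
    for x in nodes:
        for y in nodes:
            best = min(c(x, v) + D[v][y] for v in nodes if c(x, v) < INF)
            if best < D[x][y]:
                D[x][y] = best
                changed = True

print(D["x"]["z"])  # 3: the path x -> y -> z (1 + 2) beats the direct link (7)
```

Each pass corresponds to every node receiving its neighbors' vectors and applying the Bellman-Ford equation; the loop terminates once the vectors converge.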

Sharing Information

o In the above figure, each cloud represents the network, and the number inside the
cloud represents the network ID.
o All the LANs are connected by routers, and they are represented in boxes labeled as
A, B, C, D, E, F.
o Distance vector routing algorithm simplifies the routing process by assuming the cost
of every link is one unit. Therefore, the efficiency of transmission can be measured by
the number of links to reach the destination.
o In Distance vector routing, the cost is based on hop count.

In the above figure, we observe that each router sends its knowledge to its immediate
neighbors. The neighbors add this knowledge to their own and send the updated
table to their own neighbors. In this way, each router gets its own information plus new
information about its neighbors.

Routing Table

Two processes occur:

o Creating the Table


o Updating the Table
Creating the Table

Initially, a routing table is created for each router that contains at least three types of
information: the network ID, the cost, and the next hop.

o NET ID: The Network ID defines the final destination of the packet.
o Cost: The cost is the number of hops that packet must take to get there.
o Next hop: It is the router to which the packet must be delivered.
o In the above figure, the original routing tables of all the routers are shown. In a routing
table, the first column represents the network ID, the second column represents the
cost of the link, and the third column is empty.
o These routing tables are sent to all the neighbors.

For Example:

1. A sends its routing table to B, F & E.
2. B sends its routing table to A & C.
3. C sends its routing table to B & D.
4. D sends its routing table to E & C.
5. E sends its routing table to A & D.
6. F sends its routing table to A.
Updating the Table
o When A receives a routing table from B, then it uses its information to update the
table.
o The routing table of B shows how the packets can move to the networks 1 and 4.
o Since B is a neighbor of A, packets from A can reach B in one hop. So, 1 is added
to all the costs given in B's table, and the sum is the cost to reach a particular
network.

o After adjustment, A then combines this table with its own table to create a combined
table.
o The combined table may contain some duplicate data. In the above figure, the
combined table of router A contains the duplicate data, so it keeps only those data
which has the lowest cost. For example, A can send the data to network 1 in two ways.
The first, which uses no next router, so it costs one hop. The second requires two hops
(A to B, then B to Network 1). The first option has the lowest cost, therefore it is kept
and the second one is dropped.

o The process of creating the routing table continues for all routers. Every router
receives information from its neighbors and updates its routing table.

Final routing tables of all the routers are given below:


Link State Routing

Link state routing is a technique in which each router shares the knowledge of its
neighborhood with every other router in the internetwork.

The three keys to understand the Link State Routing algorithm:

o Knowledge about the neighborhood: Instead of sending its entire routing table, a router
sends information about its neighborhood only. A router broadcasts the identities
and costs of its directly attached links to other routers.
o Flooding: Each router sends its link information to all of its neighbors; this process
is known as flooding. Every router that receives the packet sends copies to all its
other neighbors. Finally, each and every router receives a copy of the same
information.
o Information sharing: A router sends the information to every other router only
when the change occurs in the information.
Link State Routing has two phases:
Reliable Flooding
o Initial state: Each node knows the cost of its neighbors.
o Final state: Each node knows the entire graph.
Route Calculation

Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.

o The Link state routing algorithm is also known as Dijkstra's algorithm which is used
to find the shortest path from one node to every other node in the network.
o Dijkstra's algorithm is iterative, and it has the property that after the kth iteration
of the algorithm, the least-cost paths are known for k destination nodes.
Let's describe some notations:
o c( i , j): Link cost from node i to node j. If i and j nodes are not directly linked, then c(i
, j) = ∞.
o D(v): It defines the cost of the path from the source node to destination v that
currently has the least cost.
o P(v): It defines the previous node (neighbor of v) along the current least-cost path
from source to v.
o N: It is the total number of nodes available in the network.
Algorithm

Initialization

N = {A} // A is a root node.

for all nodes v

if v adjacent to A

then D(v) = c(A,v)

else D(v) = infinity

loop

find w not in N such that D(w) is a minimum.

Add w to N
Update D(v) for all v adjacent to w and not in N:

D(v) = min(D(v) , D(w) + c(w,v))

Until all nodes in N

In the above algorithm, an initialization step is followed by the loop. The number of times the
loop is executed is equal to the total number of nodes available in the network.
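The algorithm above can be sketched with a priority queue. The graph encodes the link costs from the worked example that follows (A's neighbors B, C, D and D's links to B, C, E); the E-F link cost of 2 is an assumed value, since F's links are not fully specified in the text:

```python
import heapq

# Link costs taken from the worked example; the E-F cost is assumed.
graph = {
    "A": {"B": 2, "C": 5, "D": 1},
    "B": {"A": 2, "D": 2},
    "C": {"A": 5, "D": 3},
    "D": {"A": 1, "B": 2, "C": 3, "E": 1},
    "E": {"D": 1, "F": 2},
    "F": {"E": 2},
}

def dijkstra(source):
    """Dijkstra's algorithm: repeatedly move the cheapest unfinished
    node w into N and relax D(v) = min(D(v), D(w) + c(w, v))."""
    dist = {source: 0}
    done = set()
    pq = [(0, source)]           # (current least cost, node)
    while pq:
        d, w = heapq.heappop(pq)  # the "find w not in N with minimum D(w)" step
        if w in done:
            continue
        done.add(w)               # add w to N
        for v, cost in graph[w].items():
            if d + cost < dist.get(v, float("inf")):
                dist[v] = d + cost
                heapq.heappush(pq, (d + cost, v))
    return dist

d = dijkstra("A")
print(d["B"], d["C"], d["D"], d["E"])  # 2 4 1 2, matching the step tables below
```

The heap replaces the linear "find the minimum D(w)" scan, but the iterations visit nodes in exactly the order the worked example does: D first (cost 1), then B and E (cost 2), and so on.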

Let's understand through an example:

In the above figure, source vertex is A.

Step 1:

The first step is an initialization step. The currently known least cost path from A to its directly
attached neighbors, B, C, D are 2,5,1 respectively. The cost from A to B is set to 2, from A to D
is set to 1 and from A to C is set to 5. The cost from A to E and F are set to infinity as they are
not directly linked to A.

Step  N  D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1     A  2,A        5,A        1,A        ∞          ∞

Step 2:

In the above table, we observe that vertex D contains the least cost path in step 1. Therefore,
it is added in N. Now, we need to determine a least-cost path through D vertex.

a) Calculating shortest path from A to B


v = B, w = D
D(B) = min( D(B), D(D) + c(D,B) )
     = min( 2, 1+2 )
     = min( 2, 3 ) = 2
The minimum value is 2. Therefore, the currently shortest path from A to B is 2.

b) Calculating shortest path from A to C

v = C, w = D
D(C) = min( D(C), D(D) + c(D,C) )
     = min( 5, 1+3 )
     = min( 5, 4 ) = 4
The minimum value is 4. Therefore, the currently shortest path from A to C is 4.

c) Calculating shortest path from A to E

v = E, w = D
D(E) = min( D(E), D(D) + c(D,E) )
     = min( ∞, 1+1 )
     = min( ∞, 2 ) = 2
The minimum value is 2. Therefore, the currently shortest path from A to E is 2.
Note: The vertex D has no direct link to vertex F. Therefore, the value
of D(F) remains infinity.

Step  N   D(B),P(B)  D(C),P(C)  D(D),P(D)  D(E),P(E)  D(F),P(F)

1     A   2,A        5,A        1,A        ∞          ∞
2     AD  2,A        4,D                   2,D        ∞

Step 3:

In the above table, we observe that both E and B have the least cost path in step 2. Let's
consider the E vertex. Now, we determine the least cost path of remaining vertices through E.
