
CN PROF DEVANSHI DAVE

Unit-1

Q.1 Define computer network & List out applications of Computer Network

A computer network is a system that connects multiple independent computers so that they can share information (data) and resources. The integration of computers and other devices allows users to communicate more easily.

Some of the network applications in different fields are the following:

 Marketing and sales: – Computer networks are widely used in both marketing and sales
firms. Marketing professionals use them to collect, exchange, and analyze data
relating to customer requirements and product development cycles. Teleshopping is also an
important part of sales applications; it uses order-entry computers or telephones
connected to an order-processing network, along with on-line reservation services for hotels,
airlines, and so on.

 Manufacturing: – Nowadays, computer networks are used in several aspects of
manufacturing, including the manufacturing process itself. Two applications that use a
network to provide necessary services are computer-assisted manufacturing (CAM) and
computer-assisted design (CAD), both of which permit multiple users to work on a
project simultaneously.
 Financial Services: – At present, financial services are completely dependent on
computer networks. The main applications are credit history searches, foreign exchange and
investment services, and Electronic Funds Transfer (EFT), which permits a user to transfer
money without going into a bank.
 Teleconferencing: – With the help of teleconferencing, conferences can take place
without the participants being in the same place. Applications include simple text
conferencing, voice conferencing, and video conferencing.
 Cable Television: – Future services provided by cable television networks can include video
on demand, as well as the same information, financial, and communications services
currently provided by telephone companies and computer networks.
 Information Services: – Network information services include bulletin boards and data
banks. A World Wide Web site offering the technical specifications for a new product is an
information service.

 Electronic Messaging:– Electronic mail (e-mail) is the most widely used network
application.
 Electronic Data Interchange (EDI):– EDI permits business information to be
transferred without using paper.
 Directory services: – By using directory services, it is possible to store the list of files in
a central location to speed up worldwide search operations.
 Cellular Telephone: – In the past, two parties wishing to use the services of the
telephone company had to be linked by a fixed physical connection. Today, however,
cellular networks make it possible to maintain wireless phone connections even while
travelling over large distances.

Q.2 Define LAN, MAN, WAN. Define connection-oriented and connectionless protocols.

LAN - Local Area Network

A Local Area Network (LAN) is a private network that connects computers and devices within a limited area like
a residence, an office, a building or a campus. On a small scale, LANs are used to connect personal computers to
printers. However, LANs can also extend to a few kilometers when used by companies, where a large number of
computers share a variety of resources like hardware (e.g. printers, scanners, audiovisual devices etc), software
(e.g. application programs) and data.

MAN - Metropolitan Area Network

A Metropolitan Area Network (MAN) is a larger network than a LAN. It often covers an entire city or a group of
nearby towns. It is quite expensive, and a single organization may not own it.

WAN - Wide Area Network

A Wide Area Network (WAN) is a much larger network than a LAN or MAN. It often covers multiple countries or
continents. It is quite expensive, and a single organization may not own it. Satellite links are often used to build WANs.

Data communication is the use of a telecommunication network to send and receive data between two or more computers
over the same or different networks. There are two ways to handle a connection when sending data from one
device to another: connection-oriented and connectionless service. A connection-oriented service
involves creating and terminating a connection for sending the data between two or more devices. In
contrast, a connectionless service does not require any connection establishment or termination process for
transferring the data over a network.

Connection-Oriented Service

A connection-oriented service is a network service that was designed and developed after the model of the telephone
system. It is used to create an end-to-end connection between the sender and the receiver
before transmitting data over the same or different networks. In a connection-oriented service, packets are
delivered to the receiver in the same order the sender sent them. It uses a handshake method to create
a connection between the sender and the receiver before data is transmitted over the network. Hence it is also known as a
reliable network service.

Suppose a sender wants to send data to a receiver. First, the sender sends a request to the receiver
in the form of a SYN packet. The receiver then responds to the sender's request with a SYN-ACK
packet, confirming that it is ready to start communication. The sender acknowledges this, and can then
send the message or data to the receiver.

Similarly, the receiver can respond or send data to the sender in the form of packets. After successfully exchanging
or transmitting the data, the sender can terminate the connection by sending a signal to the receiver. In this way, it
provides a reliable network service.
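The setup, transfer, and teardown sequence described above can be sketched with Python's standard socket API over the loopback interface. This is a minimal sketch, not a full protocol demonstration: the port is chosen by the operating system, and the message contents are illustrative.

```python
import socket
import threading

# Receiver side: bind to an ephemeral loopback port and wait for one connection.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

received = []
def serve():
    conn, _ = srv.accept()            # connection established (handshake done)
    received.append(conn.recv(1024))  # bytes arrive reliably and in order
    conn.close()                      # termination

t = threading.Thread(target=serve)
t.start()

# Sender side: connect() performs the SYN / SYN-ACK / ACK exchange.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.sendall(b"hello over a connection")
cli.close()

t.join()
srv.close()
print(received[0])   # b'hello over a connection'
```

Note that the handshake is done entirely by the operating system inside `connect()` and `accept()`; the application only sees a reliable, ordered byte stream.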

Connectionless Service

A connectionless service is similar to a postal system, in which each letter may take a different route from the source to the
destination address. A connectionless service transfers data from one end to the other
without creating any connection, so no connection needs to be established before sending the data from the sender
to the receiver. It is not a reliable network service, because it does not guarantee the delivery of data packets to the
receiver, and packets can arrive at the receiver in any order. Therefore, the data packets do
not follow a defined path. In a connectionless service, a transmitted data packet may not reach the receiver due to
network congestion, and data may be lost.

For example, a sender can directly send data to the receiver without establishing any connection, because it is a
connectionless service. The data sent will be in packets or data streams containing the receiver's address. In a
connectionless service, the data can travel and arrive in any order, and delivery of the packets to the right destination
is not guaranteed.
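By contrast, a connectionless exchange can be sketched with a UDP socket: the sender needs only the receiver's address, and nothing is set up or torn down. The addresses and payload below are illustrative; on the loopback interface delivery happens to succeed, but UDP itself gives no such guarantee.

```python
import socket

# Receiver: a UDP socket bound to a port; no listen(), no accept(), no handshake.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
port = rx.getsockname()[1]

# Sender: sendto() needs only the destination address; nothing is set up first,
# and (on a real network) nothing guarantees delivery or ordering.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"a self-contained datagram", ("127.0.0.1", port))

data, addr = rx.recvfrom(1024)
print(data)   # b'a self-contained datagram'
tx.close()
rx.close()
```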

Q.3 Draw OSI Reference Model and explain it in detail.

The OSI or Open System Interconnection model was developed by the International Organization for Standardization (ISO). It
gives a layered networking framework that conceptualizes how communication should be done between
heterogeneous systems. It has seven interconnected layers: the physical
layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer, as
shown in the following diagram −

The physical layer, data link layer, and network layer are the network support layers; they manage the
physical transfer of data from one device to another. The session layer, presentation layer, and application layer are
the user support layers; they allow communication among unrelated software in dissimilar
environments. The transport layer links the two groups.

The main functions of each of the layers are as follows −

 Physical Layer − Its function is to transmit individual bits from one node to another over a physical
medium.
 Data Link Layer − It is responsible for the reliable transfer of data frames from one node to another
connected by the physical layer.
 Network Layer − It manages the delivery of individual data packets from source to destination through
appropriate addressing and routing.
 Transport Layer − It is responsible for delivery of the entire message from the source host to the destination
host.
 Session Layer − It establishes sessions between users and offers services like dialog control and
synchronization.
 Presentation Layer − It monitors syntax and semantics of transmitted information through translation,
compression, and encryption.
 Application Layer − It provides high-level APIs (application program interface) to the users.

4. Compare OSI & TCP-IP reference model.



OSI: OSI stands for Open System Interconnection.
TCP/IP: TCP/IP stands for Transmission Control Protocol / Internet Protocol.

OSI: It is a generic, protocol-independent standard, acting as a communication gateway between the network and the end user.
TCP/IP: It depends on the standard protocols around which the computer network was built. It is a connection protocol that interconnects hosts over the Internet.

OSI: The OSI model was developed first, and then protocols were created to fit the network architecture's needs.
TCP/IP: The protocols were created first, and then the TCP/IP model was built around them.

OSI: It provides quality of service.
TCP/IP: It does not provide quality of service.

OSI: The OSI model clearly defines services, interfaces, and protocols, and describes which layer provides which services.
TCP/IP: It does not clearly distinguish between services, interfaces, and protocols.

OSI: The protocols of the OSI model are well hidden and can easily be replaced with another suitable protocol.
TCP/IP: The TCP/IP protocols are not hidden, and a new protocol stack cannot easily be fitted into it.

OSI: It is complex compared to TCP/IP.
TCP/IP: It is simpler than OSI.

OSI: It provides both connection-oriented and connectionless transmission in the network layer, but only connection-oriented transmission in the transport layer.
TCP/IP: It provides only connectionless transmission in the network layer, and supports both connection-oriented and connectionless transmission in the transport layer.

OSI: It uses a horizontal approach.
TCP/IP: It uses a vertical approach.

OSI: The smallest size of the OSI header is 5 bytes.
TCP/IP: The smallest size of the TCP/IP header is 20 bytes.

OSI: Protocols are hidden in the OSI model and are easily replaced as the technology changes.
TCP/IP: In TCP/IP, replacing a protocol is not easy.

5. Compare circuit and packet switching.

Circuit Switching

Circuit switching is a connection-oriented service. It provides a dedicated path from the sender to the receiver. In circuit
switching, a connection setup is required before data can be sent and received. There is very little chance of data loss or error thanks to
the dedicated circuit, but a lot of bandwidth is wasted, because the same path cannot be used by other senders while the
connection lasts. Circuit switching is completely transparent; the sender and receiver can use any bit rate, format, or framing
method.

Advantages of Circuit Switching

 It uses a fixed bandwidth.

 A dedicated communication channel increases the quality of communication.

 Data is transmitted with a fixed data rate.

 No waiting time at switches.

 Suitable for long continuous communication.

Disadvantages of circuit switching

 A dedicated connection makes it impossible to transmit other data even if the channel is free.

 Resources are not utilized fully.

 The time required to establish the physical link between the two stations is too long.

 A dedicated path has to be established for each connection.



 Circuit switching is more expensive.

 Even if there is no transfer of data, the link is still maintained until it is terminated by users.

 Dedicated channels require more bandwidth.

Packet Switching

Packet switching is a connectionless service. It does not require any dedicated path between the sender and receiver. It
places an upper limit on block size. In packet switching, bandwidth is used efficiently, because packets from unrelated
sources can share the same links. It has a higher chance of data loss and error, and the packets may arrive in the wrong order.

Advantages of Packet switching

 It reduces access delay.

 Costs are minimized to a great extent. Hence packet switching is a very cost-effective technique.

 Packets are rerouted in case of any problems. This ensures reliable communication.

 It is more efficient for data transmission because there is no need to establish a path first.

 Several users can share the same channel simultaneously. Therefore packet switching makes use of available
bandwidth efficiently.

Disadvantages of Packet switching

 In packet switching, the network cannot be used in applications requiring very low delay and high quality of
service.

 Protocols used in the packet switching are complex.

 If the network becomes overloaded, packets are delayed or discarded, or dropped. This leads to the retransmission
of lost packets by the sender.

 It is not secured if security protocols are not used during packet transmission.

Difference between circuit switching and packet switching

The following table highlights the major differences between circuit switching and packet switching −

Circuit switching: It requires a dedicated path before sending data from source to destination.
Packet switching: It does not require any dedicated path to send data from source to destination.

Circuit switching: It reserves the entire bandwidth in advance.
Packet switching: It does not reserve bandwidth in advance.

Circuit switching: There is no store-and-forward transmission.
Packet switching: It supports store-and-forward transmission.

Circuit switching: Each packet follows the same route.
Packet switching: A packet can follow any route.

Circuit switching: Call setup is required.
Packet switching: No call setup is required.

Circuit switching: Bandwidth is wasted.
Packet switching: No bandwidth is wasted.

6. Discuss different types of nodal delay in detail.

Delay, here, means the time taken for a particular packet to be handled at each step of its journey. We have the following types
of delays in computer networks:

1. Transmission Delay:
The time taken to transmit a packet from the host to the transmission medium is called Transmission delay.

For example, if the bandwidth is 1 bps (every second, 1 bit can be transmitted onto the transmission medium) and the data size is
20 bits, what is the transmission delay? If 1 bit can be transmitted in one second, then transmitting 20 bits requires
20 seconds.

Let B bps be the bandwidth and L bits the size of the data; then the transmission delay is

Tt = L/B

This delay depends upon the following factors:

 If there are multiple active sessions, the delay will become significant.

 Increasing bandwidth decreases transmission delay.

 MAC protocol largely influences the delay if the link is shared among multiple devices.

 Sending and receiving a packet involves a context switch in the operating system, which takes a finite time.
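The formula Tt = L/B can be checked with a small helper (the function name is illustrative), reproducing the 20-bit, 1 bps example from the text:

```python
def transmission_delay(packet_bits, bandwidth_bps):
    """Tt = L / B: time to push all L bits of the packet onto the link."""
    return packet_bits / bandwidth_bps

# The worked example above: 20 bits over a 1 bps link takes 20 seconds.
print(transmission_delay(20, 1))        # 20.0

# Increasing the bandwidth decreases the transmission delay:
print(transmission_delay(20, 1000))     # 0.02
```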

2. Propagation delay:
After the packet is transmitted onto the transmission medium, it has to travel through the medium to reach the destination.
The time taken by a bit of the packet to travel from the sender to the destination is called propagation delay.

Factors affecting propagation delay:

1. Distance – It takes more time to reach the destination if the distance of the medium is longer.

2. Velocity – If the velocity(speed) of the signal is higher, the packet will be received faster.

Tp = Distance / Velocity

Note:

Velocity =3 X 10^8 m/s (for air)

Velocity= 2.1 X 10^8 m/s (for optical fibre)
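Tp = Distance / Velocity can be sketched the same way, using the propagation speeds quoted in the note above; the 21,000 km distance is purely illustrative:

```python
# Propagation speeds quoted in the note above.
VELOCITY_AIR = 3e8      # m/s
VELOCITY_FIBRE = 2.1e8  # m/s

def propagation_delay(distance_m, velocity_mps):
    """Tp = distance / velocity: time for a bit to travel through the medium."""
    return distance_m / velocity_mps

# e.g. a 21,000 km optical-fibre path (illustrative distance):
print(propagation_delay(21_000_000, VELOCITY_FIBRE))   # ~0.1 s
```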

3. Queueing delay:
When the packet is received by a node, it is not processed immediately. It has to
wait in a queue, in something called a buffer. The amount of time it waits in the queue before being processed is called
queueing delay.

In general, we can’t calculate queueing delay because we don’t have any formula for that.

This delay depends upon the following factors:

 If the size of the queue is large, the queueing delay will be huge. If the queue is empty, there will be little or no delay.

 If more packets are arriving in a short or no time interval, queuing delay will be large.

 The fewer the servers/links, the greater the queueing delay.

4. Processing delay:
When the packet reaches the head of the queue, it is taken up for processing; the time this takes is called processing delay.

It is the time taken by the processor to handle the data packet, i.e., the time required by intermediate routers to decide
where to forward the packet, update the TTL, and perform header checksum calculations.

It also doesn't have a formula, since it depends on the speed of the processor, which varies
from computer to computer.

Note: Both queueing delay and processing delay have no formula, because they depend on the speed of the
processor of the node concerned.

The total nodal delay is the sum of the four components:

Ttotal = Tt + Tp + Tq + Tpro

Ttotal = Tt + Tp

(when Tq and Tpro are taken as 0)
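Putting the pieces together, a small helper (the name is illustrative) computes the total nodal delay; leaving Tq and Tpro at their default of zero gives the simplified two-term formula:

```python
def nodal_delay(Tt, Tp, Tq=0.0, Tpro=0.0):
    """Ttotal = Tt + Tp + Tq + Tpro, all in seconds.

    Tq (queueing) and Tpro (processing) default to zero because they have
    no closed-form formula and must be measured or estimated."""
    return Tt + Tp + Tq + Tpro

# Illustrative numbers: Tt = 20 s, Tp = 0.1 s, queueing and processing ignored.
print(nodal_delay(20.0, 0.1))   # ~20.1 s
```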



Unit - 2
1. Define persistent and Non persistent HTTP in detail.

Non-persistent and persistent are the two types of HTTP connections used to connect the client with the web server.
Non-persistent connections are the default in HTTP/1.0, while persistent connections are the default in HTTP/1.1.

Non-persistent

The non-persistent connection takes a total time of 2RTT + file transmission time. It takes the first RTT (round-trip time) to
establish the connection between the server and the client. The second RTT is taken to request and return the object. This
case stands for a single object transmission.

After the client receives the object in a non-persistent connection, the connection is immediately closed. This is the basic difference
between persistent and non-persistent: the persistent connection allows the transfer of multiple objects over a single
connection.

Persistent

A persistent connection takes 1 RTT for the connection and then transfers as many objects as wanted over this single
connection.

RTT stands for the round-trip time taken for an object request and then its retrieval. In other words, it is the time taken to
request the object from the client to the server and then retrieve it from the server back to the client.

Sample problem

Suppose 10 images need to be downloaded from the HTTP server. The total time taken to request and download 10
images in a non-persistent and persistent connection is:

Non-persistent

2 RTT (connection time) + 2 × 10 RTT = 22 RTT

Persistent

2 RTT (connection time) + 10 RTT = 12 RTT
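The same arithmetic can be written as two tiny helpers (the names are illustrative), counted the same way as in the example above:

```python
def non_persistent_rtts(n_objects):
    """2 RTT of connection time plus 2 RTT per object (fresh connection each time)."""
    return 2 + 2 * n_objects

def persistent_rtts(n_objects):
    """2 RTT of connection time plus 1 RTT per object (one shared connection)."""
    return 2 + n_objects

print(non_persistent_rtts(10))  # 22
print(persistent_rtts(10))      # 12
```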

2. Describe need of conditional GET in HTTP Messages.

Web caching is done by a proxy server – an intermediate entity between the original server and the client. When a client
requests some information (through an HTTP message), the request goes through the proxy server, which –

 First checks if it has the copy locally stored.

 If it has, then it forwards the result directly to the client.

 Otherwise, it queries on behalf of the end host, stores a copy of the result locally, and forwards the result back to
the end host.

Web caches (or) Proxy Servers are usually installed by ISPs (Internet service providers), Universities, or even Corporate
Offices, wherein multiple end hosts are connected to the proxy server.

Installing a proxy server has multiple advantages –



1. It can substantially reduce the response time for repeated requests (especially if the bottleneck bandwidth between the
original server and the client is much less than that between the proxy server and the client).

2. It reduces the access link bandwidth (of the university or the office), thereby reducing the cost.

3. It reduces traffic on the Internet as a whole.

But there is one problem.


What if the content was modified on the original server, rendering the copy on the proxy server to be an outdated one?

This is where the Conditional GET kicks in. When a proxy server receives an HTTP request and has the result
stored locally, it still queries the original server, asking whether that particular object has been modified since the last
time the proxy requested it.

The “Conditional GET” request has one additional header field compared with a normal GET request, called “If-Modified-Since”,
which specifies the last time the same request was made. The original server either –

 Tells the proxy server that the content was not modified – HTTP 304 status code, or

 Sends the updated content (in case there was some modification done) – HTTP 200 response-message code

If the proxy server gets a 304 “Not Modified” message, it forwards its local copy to the client. If there was a modification,
the cache forwards the new object, while storing it locally along with the date and time it received the new object
(so that it can ask the original server later about modifications).
For obvious reasons, an HTTP 304 message does not contain a message body.
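The server-side decision can be sketched as below. The If-Modified-Since header and the 304/200 status codes are the real HTTP mechanism; the handler name and the stored modification date are illustrative:

```python
from email.utils import parsedate_to_datetime

# When the origin server last changed the object (illustrative date).
LAST_MODIFIED = "Mon, 01 Jan 2024 00:00:00 GMT"

def handle_get(headers):
    """Return the status code the origin server would send for a GET.

    If the request carries If-Modified-Since and the object has not changed
    since then, answer 304 (no body); otherwise answer 200 with the object."""
    since = headers.get("If-Modified-Since")
    if since and parsedate_to_datetime(since) >= parsedate_to_datetime(LAST_MODIFIED):
        return 304   # Not Modified: the proxy may serve its cached copy
    return 200       # OK: the (possibly updated) object is sent in the body

print(handle_get({"If-Modified-Since": "Tue, 02 Jan 2024 00:00:00 GMT"}))  # 304
print(handle_get({}))                                                      # 200
```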

3. Explain SMTP Protocol. Give comparison of SMTP, POP3 and IMAP.



You must have heard about SMTP, POP3, and IMAP quite often. They are TCP/IP protocols that have been in use for email
delivery since 1981. Popular Services like Gmail, Outlook, and Yahoo use this underlying technology to transact over 300
Billion emails every day.

Since you are reading this online, chances are you’ve already used these three one way or another. Now before we set out
to learn what SMTP, IMAP, and POP3 are, let’s start with some basics.


What Is A Protocol?

A protocol is a set of standard rules that let electronic devices communicate with each other. Two devices supporting the
same protocol can communicate effectively, no matter who their manufacturer is and also what type of devices they are.

TCP/IP stands for “Transmission Control Protocol/Internet Protocol”. TCP/IP protocols aim to allow computers to
communicate with each other over long-distance networks.

How Are Emails Transferred?

The following three parties are involved in transferring an email:

1. The sender

2. The recipient

3. A mail server

The email goes to the mail server from the sender, which allows the recipient to receive the email.

Now, to make a connection between the three parties, you need email protocols. SMTP, IMAP, and POP3 are precisely
that. They are also the most commonly used TCP/IP protocols.

Journey Of An Email From A Sender To A Recipient

If the sender is jeff@amazon.com and the recipient is elon@spacex.com, following is how an email from the sender
reaches the recipient,

1. The email client of the sender connects to the SMTP server (for example, smtp.gmail.com)

2. The SMTP server authenticates the email address of the recipient.

3. After the authorization of the recipient, the Gmail SMTP server sends the email to spacex.com’s SMTP server.

4. The SMTP server of spacex.com checks whether elon@spacex.com is valid or not.

5. After that, the SMTP server sends the email to the IMAP/POP3 server.

What Are SMTP, IMAP, And POP3?

What Is SMTP?

SMTP stands for Simple Mail Transfer Protocol. It is the standard protocol for sending email messages.

Your email client and internet servers use SMTP after you finish typing your email, upon hitting send. It moves your
message over the internet and also makes it land into the recipient’s mailbox.

SMTP servers are of two types: Relays and Receivers.

Relays accept the user’s emails and route them to the recipient, while receivers deliver them to the mailbox after
accepting the email from the relay servers.

Working of SMTP – It involves the SMTP client sending commands and the SMTP server replying to them. The
conversation has the following three stages –

1. SMTP handshake – Here, the SMTP client connects to the SMTP server

2. Email transfer – Launches the email transfer

3. Termination – The client and server bid goodbye to each other.
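The three stages can be sketched as the sequence of commands an SMTP client issues. The verbs (HELO, MAIL FROM, RCPT TO, DATA, QUIT) are real SMTP commands; the host name and addresses are illustrative, and real sessions also process a server reply after each command:

```python
def smtp_session(sender, recipient, body):
    """Yield the commands an SMTP client issues across the three stages.

    The verbs are real SMTP commands; the host name and addresses used
    when calling this are illustrative."""
    yield "HELO client.example.com"   # 1. SMTP handshake
    yield f"MAIL FROM:<{sender}>"     # 2. email transfer begins
    yield f"RCPT TO:<{recipient}>"
    yield "DATA"
    yield body + "\r\n."              #    message body, terminated by a lone dot
    yield "QUIT"                      # 3. termination

commands = list(smtp_session("jeff@amazon.com", "elon@spacex.com", "Hello!"))
print(commands[0])    # HELO client.example.com
print(commands[-1])   # QUIT
```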

What Is IMAP?

IMAP is an abbreviation for Internet Message Access Protocol. It is a popular protocol for receiving email messages. The
most significant advantage of the IMAP protocol is that it lets you receive email on more than one computer (or device).
That is because after getting delivered, the email stays on the mail server. In short, you can read your email on your office
computer, the desktop at home, your tablet and your smartphone.

Also, it doesn’t download the entire message until you open it. That helps in a faster initial connection and startup.

However, it won’t perform well when you have a slow internet connection, and without a connection you won’t be able to read your emails.

Working of IMAP

The basic IMAP client and server interaction is as mentioned below,

1. The email client of the recipient connects with the server where the message is stored.

2. The recipient can view the message headers on the server.

3. If the recipient selects a message to read it, IMAP downloads that one for the recipient.
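The header-first interaction above can be sketched with a toy in-memory mailbox (all names and messages are illustrative, not the real IMAP wire protocol); note that the message stays on the server after it is read:

```python
# A toy in-memory "IMAP server" (names and messages are illustrative).
# Messages live on the server; a client first sees headers and downloads a
# body only when the message is opened.
SERVER_MAILBOX = {
    1: {"header": "From: a@example.com / Subject: Hello", "body": "Hello there!"},
    2: {"header": "From: b@example.com / Subject: Report", "body": "Q3 numbers..."},
}

def list_headers():
    """Step 2: the client views only the message headers on the server."""
    return {uid: msg["header"] for uid, msg in SERVER_MAILBOX.items()}

def open_message(uid):
    """Step 3: the body is downloaded only when the message is selected."""
    return SERVER_MAILBOX[uid]["body"]

print(sorted(list_headers()))   # [1, 2]
print(open_message(1))          # Hello there!
print(1 in SERVER_MAILBOX)      # True -- the message stays on the server
```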

What Is POP3?

Like IMAP, POP3 is another protocol for receiving emails. POP3 is an abbreviation for Post Office Protocol, whereas ‘3’
refers to the version. Version 3 is the latest and most widely used one.

It is a simple protocol not having much to offer apart from the download.

POP3 downloads an email from the server to one computer (or device) and ends up removing it from the server once it
gets downloaded on your computer. That means it won’t be possible to read your email messages from multiple locations,
which is not ideal.

The advantages of this protocol are that you can read your emails when you are offline. And the use of the email server’s
storage is less.

However, as it works with one device and doesn’t store the messages on the server, you need to back up your
computer. Failing to do so will result in losing all your emails if your computer dies.

Working of POP3

There are four stages in a POP3 connection,

1. Authorization stage – The client connects with the server.

2. Transaction stage – The client retrieves the email

3. Update Stage – The server deletes the message stored

4. The client and server disconnect from each other
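The transaction and update stages can be sketched with a toy in-memory mailbox (names and messages are illustrative, not the real POP3 wire protocol); in delete mode, retrieval removes the server copy, which is why the email then lives on one device only:

```python
# A toy in-memory POP3 mailbox (names and messages are illustrative).
mailbox = {1: "Hello there!", 2: "Q3 numbers..."}

def pop3_retrieve(uid, delete_mode=True):
    """Transaction stage: fetch the message.
    Update stage: in delete mode the server copy is removed."""
    message = mailbox[uid]
    if delete_mode:
        del mailbox[uid]      # update stage: server copy is gone
    return message

local_copy = pop3_retrieve(1)
print(local_copy)             # Hello there!
print(1 in mailbox)           # False -- deleted from the server
print(2 in mailbox)           # True  -- not yet retrieved
```

Calling `pop3_retrieve(2, delete_mode=False)` would model keep mode, where the email stays in the mailbox after retrieval.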

Difference Between SMTP, IMAP, And POP3



SMTP VS IMAP

IMAP is used to retrieve messages, while SMTP is for sending them.

IMAP works between the server and the client for communication, whereas SMTP works between servers to transfer
information. The former is a message transfer agent between user and server, and the latter is a message transfer agent
between servers.

With IMAP, users can also organize emails on the server. With SMTP, emails are organized on client storage.

SMTP vs POP3

SMTP is used to send messages, and POP3 is used to access them.

SMTP is an MTA or Message Transfer Agent for sending the message to the recipient. There are two MTAs, namely client
MTA and Server MTA. On the other hand, POP3 is an MAA or Message Access Agent to access messages from the mailbox.
There are two MAAs as well, namely client MAA and server MAA.

SMTP is referred to as a push protocol, and POP3 is called a pop protocol.

SMTP sends the email from the sender's device to the mailbox on the receiver's mail server. POP3 lets you
retrieve and organize emails from the mailbox on the receiver's mail server onto the receiver's computer.

SMTP functions between the mail servers of the sender and receiver, while POP3 functions between the receiver and the
mail server of the receiver.

IMAP VS POP3

Both IMAP and POP3 are for receiving emails. So, what is the difference between the two protocols?

As already seen above, POP3 downloads an email from the server to one computer and then deletes it from the server. On
the contrary, IMAP stores the email on the server and syncs it across several devices. This way, you can access emails
across multiple devices.

Both IMAP and POP3 have spam and virus filters. However, IMAP is more advanced when compared to POP3.

With POP3, you can’t organize emails in the mailbox of the mail server, whereas IMAP allows you to organize them.

With POP3, you can’t create, delete, or rename mailboxes on the mail server, but it is possible with IMAP. With POP3 you download
all the emails at once, while with IMAP you can see the message header before you download the email.

There are two modes of POP3 – delete mode and keep mode.

In the delete mode, your email gets deleted from the mailbox after you retrieve it.

In the keep mode, the email stays in the mailbox even after you retrieve it.

With IMAP, there are several copies of the email on the mail server. That means you can easily retrieve an email message
if you lose it.

Which Protocol Is Better To Use – IMAP Or POP3?

Choosing between IMAP and POP3 requires you to weigh the advantages and disadvantages of both protocols and
consider your requirements.

You should choose IMAP if you want to access mail from multiple devices and organize your email in folders. Also, with
IMAP, you will have no local storage space crunch. This protocol is well-suited to a stable internet connection and works
well when you want to access your email fast.

On the other hand, POP3 is the right choice for you when you have an unstable internet connection. If you are concerned
about your privacy, POP3 is the best protocol for you as there will be no copies of the email you received on the server.
The protocol is well-suited if you are the only one accessing it and also are using just one device for the purpose.


Do You Know The 3 Main Protocols Better Now?

We have seen that the three main TCP/IP protocols for email delivery are SMTP, POP3, and IMAP. While SMTP is for
sending emails, IMAP and POP3 are for receiving them. Basically, you can choose between IMAP and POP3 based on your
needs. Both have their advantages; however, usually, IMAP suits most people.

FAQs

What is SMTP?

SMTP stands for Simple Mail Transfer Protocol. It is the standard protocol for sending email messages. Your email client and mail servers use SMTP to relay your message once you hit send.
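As a rough illustration of the SMTP sending path, here is a minimal Python sketch using the standard library. The addresses, server name, port, and credentials are hypothetical placeholders, not values from this document; the actual send is left commented out since it needs a real server.

```python
from email.message import EmailMessage

# Compose a message; From/To/Subject are made-up example values.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Meeting notes"
msg.set_content("Hi Bob, the notes are attached below.")

# Handing the message to an SMTP server would look like this
# (not executed here; host, port, and login are assumptions):
# import smtplib
# with smtplib.SMTP("smtp.example.com", 587) as server:
#     server.starttls()
#     server.login("alice@example.com", "app-password")
#     server.send_message(msg)
```

The `EmailMessage` object serializes to the RFC 5322 text that SMTP actually carries between mail servers.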

What Is IMAP?

IMAP is an abbreviation for Internet Message Access Protocol. It is a popular protocol for receiving email messages. The
biggest advantage of the IMAP protocol is that it lets you receive email on more than one computer.

What Is POP3?

POP3 is an abbreviation for Post Office Protocol, and the ‘3’ refers to the version. Version 3 is the latest and most widely used one. It is a simple protocol that offers little beyond downloading messages.

4. Classify different types of DNS services with proper examples.



DNS servers play a wide variety of roles—a single name server may be a master for some zones, a slave for others, and
provide caching or forwarding services for still others.

The role of the name server is controlled by its configuration file, which in the case of BIND is called named.conf. The
combination of global parameters in the named.conf file (defined in an options clause) and the zones being serviced
(defined in one or more zone clauses) determine the complete functionality of the name server. Depending on the
requirements, such configurations can become very complex.
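As an illustration, a single BIND named.conf can combine several of the roles described below. The zone names and addresses in this sketch are hypothetical, and the syntax is abbreviated:

```
options {
    directory "/var/named";
    forwarders { 192.0.2.1; };   // off-site queries are tried here first
};

zone "example.com" {
    type master;                 // primary (master) for this zone
    file "db.example.com";
};

zone "example.net" {
    type slave;                  // secondary (slave) for this zone
    masters { 192.0.2.53; };
    file "db.example.net";
};
```

One server is thus simultaneously a master for example.com, a slave for example.net, and a forwarding/caching server for everything else.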

1. Root Servers

Root servers are positioned at the top or root of the DNS hierarchy and maintain data about each of the top-level zones.
The root servers are maintained by the NIC and have been moved to a common domain for consistent naming purposes.
The root servers are named as A.root-servers.net., B.root-servers.net., and so on.

2. Primary (Master) Servers

 Each domain must have a primary server. A primary server has the following features.

 There is generally only one primary server per domain.

 They are the system where all the changes are made to the domain.

 They are authoritative for all the domains they serve.

 They periodically update and synchronize secondary servers of the domain.

 In current versions of BIND, they are defined by the type master argument to the zone statement in the
configuration file /etc/named.conf.

3. Secondary servers

Each domain should have at least one secondary server. In fact, the NIC will not allow a domain to become officially
registered as a subdomain of a top-level domain until a site demonstrates two working DNS servers. Secondary servers
have the following features.

 There are one or more secondary servers per domain.

 They obtain a copy of the domain information for all the domains they serve from the appropriate primary server or
another secondary server for the domain.

 They are authoritative for all the domains they serve.

 They periodically receive updates from the primary servers of the domain.

 They provide load sharing with the primary servers and other servers of the domain.

 They provide redundancy in case one or more other servers are temporarily unavailable.

 They provide more local access to name resolution if placed appropriately.

 In current versions of BIND, they are defined by the type slave argument to the zone statement in
the /etc/named.conf file.

4. Caching-Only servers

These servers cache information for any DNS domain but are not authoritative for any domain. Caching-only
servers provide the following features.

 They provide local cache of looked up names.

 They have lower administrative overhead.

 They are never authoritative for any domain.

 They reduce overhead associated with secondary servers performing zone transfers from primary servers.

 They allow DNS client access to local cached naming information without the expense of setting up a DNS primary
or secondary server.

5. Forwarding servers

Forwarding servers are a variation on a primary or secondary server and act as focal points for all off-site DNS queries.
Designating a server as a forwarding server causes all off-site requests to go through that server first. Forwarding servers
have the following features.

 They are used to centralize off-site requests.

 The server being used as a forwarder builds up a rich cache of information.

 All off-site queries go through forwarders first.

 They reduce the number of redundant off-site requests.

 No special setup on forwarders is required.

 If forwarders fail to respond to queries, the local server can still contact the remote site's DNS servers itself.

5. Prepare a short note on DNS Database.


An application layer protocol defines how the application processes running on different systems pass messages to
each other.

o DNS stands for Domain Name System.

o DNS is a directory service that provides a mapping between the name of a host on the network and its numerical
address.

o DNS is required for the functioning of the internet.

o Each node in a tree has a domain name, and a full domain name is a sequence of symbols specified by dots.

o DNS is a service that translates the domain name into IP addresses. This allows the users of networks to utilize user-
friendly names when looking for other hosts instead of remembering the IP addresses.

o For example, suppose the FTP site at EduSoft had the IP address 132.147.165.50; most people would reach this
site by specifying ftp.EduSoft.com. The domain name is therefore easier to remember and use than the IP address.

DNS is a TCP/IP protocol used on different platforms. The domain name space is divided into three different sections:
generic domains, country domains, and inverse domain.

Generic Domains

o It defines the registered hosts according to their generic behavior.

o Each node in a tree defines the domain name, which is an index to the DNS database.

o It uses short labels (mostly three characters) that describe the organization type.

Label Description

aero Airlines and aerospace companies

biz Businesses or firms

com Commercial Organizations

coop Cooperative business Organizations

edu Educational institutions

gov Government institutions

info Information service providers

int International Organizations

mil Military groups



museum Museum & other nonprofit organizations

name Personal names

net Network Support centers

org Nonprofit Organizations

pro Professional individual Organizations



Country Domain

The format of country domain is same as a generic domain, but it uses two-character country abbreviations (e.g., us for
the United States) in place of three character organizational abbreviations.

Inverse Domain

The inverse domain is used for mapping an address to a name. For example, when a server receives a request from a client
and the server holds files for authorized clients only, it can determine whether the client is on the authorized list by
sending an inverse query to the DNS server, asking it to map the address to a name.
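Inverse-domain queries use special names under in-addr.arpa, built by reversing the octets of the IP address. A small sketch of that transformation (the IP is the EduSoft example address used earlier in these notes):

```python
def reverse_pointer(ip: str) -> str:
    """Build the inverse-domain (PTR) query name for an IPv4 address:
    the octets are reversed and '.in-addr.arpa' is appended."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"
```

For example, `reverse_pointer("132.147.165.50")` yields `"50.165.147.132.in-addr.arpa"`, which is the name a server would look up to map that address back to a hostname.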

Working of DNS

o DNS is a client/server network communication protocol. DNS clients send requests to the server while DNS servers
send responses to the client.

o A client request containing a name that is converted into an IP address is known as a forward DNS lookup, while a
request containing an IP address that is converted into a name is known as a reverse DNS lookup.

o DNS implements a distributed database to store the name of all the hosts available on the internet.

o If a client like a web browser sends a request containing a hostname, then a piece of software such as a DNS
resolver sends a request to the DNS server to obtain the IP address for that hostname. If the DNS server does not
contain the IP address associated with the hostname, it forwards the request to another DNS server. Once the IP
address arrives at the resolver, the resolver returns it to the client, which then completes its request over the
Internet Protocol.

6. Discuss the concept of Cookies and its components with suitable example.

Cookies are small files of information that a web server generates and sends to a web browser. Web browsers store the
cookies they receive for a predetermined period of time, or for the length of a user's session on a website. They attach the
relevant cookies to any future requests the user makes of the web server.
Cookies help inform websites about the user, enabling the websites to personalize the user experience. For example,
ecommerce websites use cookies to know what merchandise users have placed in their shopping carts. In addition, some
cookies are necessary for security purposes, such as authentication cookies (see below).
The cookies that are used on the Internet are also called "HTTP cookies." Like much of the web, cookies are sent using
the HTTP protocol.
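Since cookies travel in HTTP headers, the mechanics can be sketched with Python's standard http.cookies module. The cookie name, value, and attributes below are made-up examples, not from any particular website:

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie response header.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["path"] = "/"
cookie["session_id"]["httponly"] = True   # hide the cookie from JavaScript
header = cookie.output(header="Set-Cookie:")

# Browser side: parse the Cookie header sent back on later requests.
incoming = SimpleCookie()
incoming.load("session_id=abc123")
```

`header` contains a line like `Set-Cookie: session_id=abc123; HttpOnly; Path=/`, which is exactly what the server would emit and what the browser stores until the session ends or the cookie expires.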
Where are cookies stored?
Web browsers store cookies in a designated file on users' devices. The Google Chrome web browser, for instance, stores
all cookies in a file labeled "Cookies." Chrome users can view the cookies stored by the browser by opening developer
tools, clicking the "Application" tab, and clicking on "Cookies" in the left side menu.
What are cookies used for?
User sessions: Cookies help associate website activity with a specific user. A session cookie contains a unique string (a
combination of letters and numbers) that matches a user session with relevant data and content for that user.
Suppose Alice has an account on a shopping website. She logs into her account from the website's homepage. When she
logs in, the website's server generates a session cookie and sends the cookie to Alice's browser. This cookie tells the
website to load Alice's account content, so that the homepage now reads, "Welcome, Alice."
Alice then clicks to a product page displaying a pair of jeans. When Alice's web browser sends an HTTP request to the
website for the jeans product page, it includes Alice's session cookie with the request. Because the website has this
cookie, it recognizes the user as Alice, and she does not have to log in again when the new page loads.
Personalization: Cookies help a website "remember" user actions or user preferences, enabling the website to customize
the user's experience.
If Alice logs out of the shopping website, her username can be stored in a cookie and sent to her web browser. Next time
she loads that website, the web browser sends this cookie to the web server, which then prompts Alice to log in with the
username she used last time.

Tracking: Some cookies record what websites users visit. This information is sent to the server that originated the cookie
the next time the browser has to load content from that server. With third-party tracking cookies, this process takes place
anytime the browser loads a website that uses that tracking service.
If Alice has previously visited a website that sent her browser a tracking cookie, this cookie may record that Alice is now
viewing a product page for jeans. The next time Alice loads a website that uses this tracking service, she may see ads for
jeans.
However, advertising is not the only use for tracking cookies. Many analytics services also use tracking cookies to
anonymously record user activity. (Cloudflare Web Analytics is one of the few services that does not use cookies to
provide analytics, helping to protect user privacy.)
What are the different types of cookies?
Some of the most important types of cookies to know include:
Session cookies
A session cookie helps a website track a user's session. Session cookies are deleted after a user's session ends — once they
log out of their account on a website or exit the website. Session cookies have no expiration date, which signifies to the
browser that they should be deleted once the session is over.
Persistent cookies
Unlike session cookies, persistent cookies remain in a user's browser for a predetermined length of time, which could be a
day, a week, several months, or even years. Persistent cookies always contain an expiration date.
Authentication cookies
Authentication cookies help manage user sessions; they are generated when a user logs into an account via their browser.
They ensure that sensitive information is delivered to the correct user sessions by associating user account information
with a cookie identifier string.
Tracking cookies
Tracking cookies are generated by tracking services. They record user activity, and browsers send this record to the
associated tracking service the next time they load a website that uses that tracking service.
Zombie cookies

Like the "zombies" of popular fiction, zombie cookies regenerate after they are deleted. Zombie cookies create backup
versions of themselves outside of a browser's typical cookie storage location. They use these backups to reappear within a
browser after they are deleted. Zombie cookies are sometimes used by unscrupulous ad networks, and even by cyber
attackers.
What is a third-party cookie?
A third-party cookie is a cookie that belongs to a domain other than the one displayed in the browser. Third-party cookies
are most often used for tracking purposes. They contrast with first-party cookies, which are associated with the same
domain that appears in the user's browser.
When Alice does her shopping at jeans.example.com, the jeans.example.com origin server uses a session cookie to
remember that she has logged into her account. This is an example of a first-party cookie. However, Alice may not be
aware that a cookie from example.ad-network.com is also stored in her browser and is tracking her activity on
jeans.example.com, even though she is not currently accessing example.ad-network.com. This is an example of a third-
party cookie.
How do cookies affect user privacy?
As described above, cookies can be used to record browsing activity, including for advertising purposes. However, many
users do not want their online behavior to be tracked. Users also lack visibility or control over what tracking services do
with the data they collect.
Even when cookie-based tracking is not tied to a specific user's name or device, with some types of tracking it could still be
possible to link a record of a user's browsing activity with their real identity. This information could be used in any number
of ways, from unwanted advertising to the monitoring, stalking, or harassment of users. (This is not the case with all
cookie usage.)
Some privacy laws, like the EU's ePrivacy Directive, address and govern the use of cookies. Under this directive, users have
to provide "informed consent" — they have to be notified of how the website uses cookies and agree to this usage —
before the website can use cookies. (The exception to this is cookies that are "strictly necessary" for the website to
function.) The EU's General Data Protection Regulation (GDPR) considers cookie identifiers to be personal data, so its rules
apply to cookie usage in the EU as well. Also, any personal data collected by cookies falls under the GDPR's jurisdiction.

Largely because of these laws, many websites now display cookie banners that allow users to review and control the
cookies those websites use.

7. Discuss the high-level view of Internet e-mail system and its major components

Electronic Mail (e-mail) is one of the most widely used services of the Internet. This service allows an Internet user to
send a formatted message (mail) to other Internet users in any part of the world. A mail message not only contains text
but can also contain images, audio and video data. The person who sends mail is called the sender and the person who
receives it is called the recipient. It is just like the postal mail service.
Components of E-Mail System :
The basic components of an email system are : User Agent (UA), Message Transfer Agent (MTA), Mail Box, and Spool file.
These are explained as following below.
1. User Agent (UA) :
The UA is normally a program which is used to send and receive mail. Sometimes, it is called a mail reader. It
accepts a variety of commands for composing, receiving and replying to messages as well as for manipulation of the
mailboxes.

2. Message Transfer Agent (MTA) :


The MTA is actually responsible for the transfer of mail from one system to another. To send mail, a system must have
a client MTA and a server MTA. The MTA transfers mail to the mailboxes of recipients if they are connected to the same
machine, and delivers mail to a peer MTA if the destination mailbox is on another machine. The delivery from one MTA
to another MTA is done by the Simple Mail Transfer Protocol (SMTP).

3. Mailbox :
It is a file on the local hard drive that collects mails. Delivered mails are stored in this file, and the user can read or
delete them according to his/her requirement. To use the e-mail system, each user must have a mailbox. Access to the
mailbox is restricted to its owner.

4. Spool file :
This file contains mails that are to be sent. The user agent appends outgoing mails to this file, and the MTA extracts
pending mail from the spool file for delivery. E-mail allows one name, an alias, to represent several different e-mail
addresses; this is known as a mailing list. Whenever a user sends a message, the system checks the recipient’s name
against the alias database. If a mailing list is present for the given alias, separate messages, one for each entry in the
list, must be prepared and handed to the MTA. If there is no mailing list for the alias, the name itself becomes the
naming address and a single message is delivered to the mail transfer entity.
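The alias-expansion step can be sketched in a few lines of Python. The alias database and addresses are hypothetical placeholders:

```python
# Hypothetical alias database mapping a mailing-list alias to its members.
aliases = {"staff": ["alice@example.com", "bob@example.com"]}

def expand(recipient):
    """Return the delivery addresses for a recipient: every entry of the
    mailing list if the name is a defined alias, else the name itself."""
    return aliases.get(recipient, [recipient])
```

`expand("staff")` yields one address per list entry (each gets its own message handed to the MTA), while `expand("carol@example.com")` falls through to a single delivery.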
Services provided by E-mail system :
 Composition –
Composition refers to the process of creating messages and answers. Any kind of text editor can be used for
composition.
 Transfer –
Transfer means the sending procedure of mail, i.e., from the sender to the recipient.
 Reporting –
Reporting refers to confirmation of the delivery of mail. It helps the user check whether the mail was delivered, lost
or rejected.
 Displaying –
It refers to presenting the mail in a form that is understood by the user.
 Disposition –
This step concerns what the recipient does after receiving the mail, i.e., save it, delete it before reading, or delete it
after reading.

Unit - 3
1. Describe rdt 2.0, rdt2.1, rdt2.2 and rdt3.0.

Reliable Data Transfer (RDT) 2.0 is a protocol for reliable data transfer over a channel with bit errors. It is a more realistic
model that accounts for the bit errors which may corrupt the bits of a packet while it is being transferred. Such bit errors
occur in the physical components of a network when a packet is transmitted, propagated, or buffered. Here we assume
that all transmitted packets are received in the order in which they were sent (though some of their bits may be
corrupted).

In this condition the receiver replies with an ACK (acknowledgement, i.e., the packet received is correct and not
corrupted) or a NAK (negative acknowledgement, i.e., the packet received is corrupted). In this protocol we detect errors
using the checksum field; a checksum is a value computed over the bits of a transmission message. If the checksum value
calculated by the receiver differs even slightly from the original checksum value, the packet is corrupted. The mechanism
that allows the receiver to detect bit errors in a packet using a checksum is called error detection.

These techniques allow the receiver to detect, and possibly correct, packet bit errors. For now we only need to know that
they require extra bits (beyond the bits of the original data to be transferred) to be sent from the sender to the receiver;
these bits are gathered into the packet checksum field of the RDT 2.0 data packet.
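One common way to compute such a checksum is the 16-bit one's-complement sum used by the Internet checksum (RFC 1071). This sketch illustrates the idea; RDT 2.0 itself does not mandate a particular checksum algorithm:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, Internet-checksum style."""
    if len(data) % 2:                                # pad odd-length data
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold carry back in
    return ~total & 0xFFFF

def packet_ok(payload: bytes, checksum: int) -> bool:
    """Receiver-side check: recomputing over payload + checksum gives 0
    exactly when no detectable bit error occurred."""
    return internet_checksum(payload + checksum.to_bytes(2, "big")) == 0
```

The sender transmits `payload` together with `internet_checksum(payload)`; if any checked bit flips in transit, `packet_ok` returns False and the receiver sends a NAK.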

Another technique is receiver feedback. Since the sender and receiver are executing on different end systems, the only
way for the sender to learn whether or not a packet was received correctly is for the receiver to provide explicit feedback
to the sender. The positive (ACK) and negative (NAK) acknowledgement replies in the message-dictation scenario are an
example of such feedback. A value of 0 indicates a NAK and a value of 1 indicates an ACK.

Sending Side:

The send side of RDT 2.0 has two states. In one state, the send-side protocol is waiting for data to be passed down from
the upper layer. In the other state, the sender protocol is waiting for an ACK or a NAK packet from the receiver
(feedback). If an ACK packet is received, i.e., rdt_rcv(rcvpkt) && isACK(rcvpkt), the sender knows that the most recently
transmitted packet has been received correctly and thus the protocol returns to the state of waiting for data from the
upper layer.

If a NAK is received, the protocol re-transmits the last packet and waits for an ACK or NAK to be returned by the receiver
in response to the re-transmitted data packet. It is important to note that when the sender is in the wait-for-ACK-or-NAK
state, it cannot get more data from the upper layer; that will only happen after the sender receives an ACK and leaves this
state. Thus, the sender will not send a new piece of data until it is sure that the receiver has correctly received the current
packet. Due to this behavior, the protocol is also known as a Stop-and-Wait protocol.

Receiving Side:

The receiver side has a single state. As soon as a packet arrives, the receiver replies with either an ACK or a NAK,
depending on whether or not the received packet is corrupted, i.e., rdt_rcv(rcvpkt) && corrupt(rcvpkt) when a packet is
received and found to be in error, or rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) when the packet received is correct.

RDT 2.0 may look as if it works, but it has a fatal flaw: it does not consider that the ACK or NAK packet itself could be
corrupted, and it has no way to recover from such errors. The difficulty is that if an ACK or NAK is corrupted, the sender
has no way of knowing whether or not the receiver correctly received the last piece of transmitted data.

2. Discuss Stop and wait protocol, GBN Protocol, SR Protocol.

Reliable data transfer is one of the primary concerns in computer networking, and this service largely lies in the hands of
TCP. The major flow control protocols are Stop and Wait, Go Back N, and Selective Repeat.

1. Stop and Wait –


The sender sends the packet and waits for the ACK (acknowledgement) of the packet. Once the ACK reaches the
sender, it transmits the next packet in a row. If the ACK is not received, it re-transmits the previous packet again.

2. Go Back N –
The sender sends N packets which are equal to the window size. Once the entire window is sent, the sender then
waits for a cumulative ACK to send more packets. On the receiver end, it receives only in-order packets and discards
out-of-order packets. As in case of packet loss, the entire window would be re-transmitted.

3. Selective Repeat –
The sender sends packets of window size N and the receiver acknowledges all packets whether they were received
in order or not. In this case, the receiver maintains a buffer to contain out-of-order packets and sorts them. The
sender selectively re-transmits the lost packet and moves the window forward.

Differences:

Properties                                Stop and Wait    Go Back N                 Selective Repeat
Sender window size                        1                N                         N
Receiver window size                      1                1                         N
Minimum sequence numbers                  2                N+1                       2N
Efficiency                                1/(1+2*a)        N/(1+2*a)                 N/(1+2*a)
Type of acknowledgement                   Individual       Cumulative                Individual
Supported order at the receiving end      –                In-order delivery only    Out-of-order delivery as well
Retransmissions in case of packet drop    1                N                         1
Transmission type                         Half duplex      Full duplex               Full duplex
Implementation difficulty                 Low              Moderate                  Complex

Where,

 a = Ratio of Propagation delay and Transmission delay,

 At N=1, Go Back N is effectively reduced to Stop and Wait,

 As Go Back N acknowledges the packets cumulatively, it rejects out-of-order packets,



 As Selective Repeat supports receiving out-of-order packets (it sorts the window after receiving the packets), it uses
individual acknowledgements to acknowledge the packets
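The efficiency formulas in the table above can be sketched as one function; with N = 1 the sliding-window expression reduces to the Stop-and-Wait case. The cap at 1 reflects that utilization cannot exceed 100% once the window covers the full round trip:

```python
def utilization(N: int, a: float) -> float:
    """Sliding-window utilization N/(1+2a), capped at 1.
    a = propagation delay / transmission delay; N = sender window size.
    N = 1 gives the Stop-and-Wait formula 1/(1+2a)."""
    return min(1.0, N / (1 + 2 * a))
```

For example, with a = 4.5 a Stop-and-Wait sender achieves only 10% utilization, while a window of 20 packets keeps the link fully busy.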

3. Discover TCP Connection Management.


Prerequisite – TCP 3-Way Handshake Process
TCP is a connection-oriented protocol and every connection-oriented protocol needs to establish a connection in order to
reserve resources at both the communicating ends.

Connection Establishment –

1. Sender starts the process with the following:

 Sequence number (Seq=521): contains the random initial sequence number generated at the sender side.

 Syn flag (Syn=1): request the receiver to synchronize its sequence number with the above-provided sequence
number.

 Maximum segment size (MSS=1460 B): the sender tells its maximum segment size, so that the receiver sends datagrams
which won’t require any fragmentation. The MSS field is present inside the Options field of the TCP header.

 Window size (window=14600 B): sender tells about his buffer capacity in which he has to store messages from the
receiver.

2. TCP is a full-duplex protocol so both sender and receiver require a window for receiving messages from one another.

 Sequence number (Seq=2000): contains the random initial sequence number generated at the receiver side.

 Syn flag (Syn=1): request the sender to synchronize its sequence number with the above-provided sequence
number.

 Maximum segment size (MSS=500 B): the receiver tells its maximum segment size, so that the sender sends datagrams
which won’t require any fragmentation. The MSS field is present inside the Options field of the TCP header.
Since MSSreceiver < MSSsender, both parties agree on the minimum MSS, i.e., 500 B, to avoid fragmentation of packets
at both ends.

Therefore, receiver can send maximum of 14600/500 = 29 packets.

This is the receiver's sending window size.

 Window size (window=10000 B): receiver tells about his buffer capacity in which he has to store messages from the
sender.

Therefore, sender can send a maximum of 10000/500 = 20 packets.

This is the sender's sending window size.

 Acknowledgement Number (Ack no.=522): Since sequence number 521 is received by the receiver so, it makes a
request for the next sequence number with Ack no.=522 which is the next packet expected by the receiver since Syn
flag consumes 1 sequence no.

 ACK flag (ACk=1): tells that the acknowledgement number field contains the next sequence expected by the
receiver.

3. Sender makes the final reply for connection establishment in the following way:

 Sequence number (Seq=522): since sequence number = 521 in 1st step and SYN flag consumes one sequence
number hence, the next sequence number will be 522.

 Acknowledgement Number (Ack no.=2001): since the sender is acknowledging SYN=1 packet from the receiver with
sequence number 2000 so, the next sequence number expected is 2001.

 ACK flag (ACK=1): tells that the acknowledgement number field contains the next sequence expected by the sender.

Since the connection establishment phase of TCP makes use of 3 packets, it is also known as 3-way Handshaking (SYN, SYN
+ ACK, ACK)
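The arithmetic of the handshake example above can be reproduced as a short sketch (the numbers are exactly those used in the example, not general TCP constants):

```python
# Values from the handshake example above.
sender_isn, receiver_isn = 521, 2000      # initial sequence numbers
mss = min(1460, 500)                      # both sides agree on the smaller MSS

# Each side's sending window in packets = peer's advertised window / MSS.
sender_can_send = 10000 // mss            # receiver advertised 10000 B
receiver_can_send = 14600 // mss          # sender advertised 14600 B

# A SYN consumes one sequence number, so each ACK asks for ISN + 1.
ack_for_sender_syn = sender_isn + 1
ack_for_receiver_syn = receiver_isn + 1
```

This recovers the figures in the text: MSS = 500 B, sending windows of 20 and 29 packets, and acknowledgement numbers 522 and 2001.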

4. Explain TCP Congestion control techniques.



TCP uses a congestion window and a congestion policy that avoid congestion. Previously, we assumed that only the
receiver can dictate the sender’s window size. We ignored another entity here, the network. If the network cannot deliver
the data as fast as it is created by the sender, it must tell the sender to slow down. In other words, in addition to the
receiver, the network is a second entity that determines the size of the sender’s window.

Congestion policy in TCP –

1. Slow Start Phase: starts slowly; the increment is exponential up to the threshold.

2. Congestion Avoidance Phase: after reaching the threshold, the increment is by 1 per RTT.

3. Congestion Detection Phase: the sender goes back to the Slow Start phase or the Congestion Avoidance phase.

Slow Start Phase : exponential increment – In this phase, after every RTT the congestion window size increases
exponentially.

Initially cwnd = 1

After 1 RTT, cwnd = 2^(1) = 2

2 RTT, cwnd = 2^(2) = 4

3 RTT, cwnd = 2^(3) = 8

Congestion Avoidance Phase : additive increment – This phase starts after the threshold value, also denoted as ssthresh,
is reached. The size of cwnd (congestion window) increases additively: after each RTT, cwnd = cwnd + 1.

Initially cwnd = i

After 1 RTT, cwnd = i+1

2 RTT, cwnd = i+2

3 RTT, cwnd = i+3

Congestion Detection Phase : multiplicative decrement – If congestion occurs, the congestion window size is decreased.
The only way a sender can guess that congestion has occurred is the need to retransmit a segment. Retransmission is
needed to recover a missing packet that is assumed to have been dropped by a router due to congestion. Retransmission
can occur in one of two cases: when the RTO timer times out or when three duplicate ACKs are received.

 Case 1 : Retransmission due to Timeout – In this case congestion possibility is high.

(a) ssthresh is reduced to half of the current window size.


(b) set cwnd = 1
(c) start with slow start phase again.

 Case 2 : Retransmission due to 3 Acknowledgement Duplicates – In this case congestion possibility is less.

(a) ssthresh value reduces to half of the current window size.


(b) set cwnd= ssthresh
(c) start with congestion avoidance phase

Example – Assume a TCP protocol experiencing the behavior of slow start. At 5th transmission round with a threshold
(ssthresh) value of 32 goes into congestion avoidance phase and continues till 10th transmission. At 10th transmission
round, 3 duplicate ACKs are received by the receiver and enter into additive increase mode. Timeout occurs at 16th
transmission round. Plot the transmission round (time) vs congestion window size of TCP segments.
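The three phases can be sketched as a simplified round-by-round simulator. This is a toy AIMD model (cwnd counted in whole MSS units, one event per round, ssthresh floored at 2), not a faithful TCP implementation:

```python
def tcp_cwnd_trace(rounds, ssthresh, dup_ack_rounds=(), timeout_rounds=()):
    """Return cwnd (in MSS) at the start of each transmission round.
    Slow start doubles cwnd up to ssthresh; congestion avoidance adds 1;
    3 duplicate ACKs set ssthresh = cwnd/2 and cwnd = ssthresh;
    a timeout sets ssthresh = cwnd/2 and resets cwnd to 1."""
    cwnd, trace = 1, []
    for r in range(1, rounds + 1):
        trace.append(cwnd)
        if r in timeout_rounds:                    # case 1: RTO timeout
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1
        elif r in dup_ack_rounds:                  # case 2: 3 duplicate ACKs
            ssthresh = max(cwnd // 2, 2)
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)         # slow start
        else:
            cwnd += 1                              # congestion avoidance
    return trace
```

For instance, `tcp_cwnd_trace(8, 8)` gives `[1, 2, 4, 8, 9, 10, 11, 12]`: exponential growth until ssthresh = 8, then additive increase. Adding a timeout at round 3 restarts slow start from cwnd = 1 with the halved threshold.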

5. Draw and explain Router architecture in detail.

Router architecture is designed in a way that the routers are equipped to perform two main functions. These functions are
as follows:

 Process routable protocols.

 Use routing protocols to determine the best path.

Let us try to understand the router with the help of architecture:

Architecture of Router

Given below is a diagram which explains the architecture of router:



The different factors which help in successful functioning of router are explained below:

Input Port

The input port performs many functions. It performs the physical-layer function of terminating an incoming physical link
at the router.

It performs the data link layer functionality needed to interoperate with the data link layer functionality on the other side
of the incoming link.

It also performs a lookup and forwarding function so that a datagram forwarded into the switching fabric of the router
emerges at the appropriate output port.
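The lookup step is typically a longest-prefix match against a forwarding table. A minimal sketch using Python's ipaddress module, with a hypothetical three-entry table:

```python
import ipaddress

# Hypothetical forwarding table: prefix -> output port number.
forwarding_table = {
    "192.168.0.0/16": 1,
    "192.168.1.0/24": 2,
    "0.0.0.0/0": 0,       # default route
}

def lookup(dst: str) -> int:
    """Longest-prefix match: choose the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(p) for p in forwarding_table
         if addr in ipaddress.ip_network(p)),
        key=lambda net: net.prefixlen,
    )
    return forwarding_table[str(best)]
```

Here `lookup("192.168.1.7")` selects port 2 via the /24 entry even though the /16 also matches, while an address with no specific match falls back to the default route. Real routers implement this in hardware (e.g., TCAMs or tries) for line-rate forwarding.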

The diagram given below depicts the functioning of an input port in a router:

Output Port

It stores packets received from the switching fabric and transmits those packets on the outgoing link by performing the
link-layer and physical-layer functions. Therefore, the output port performs the reverse data link and physical layer
functionality as the input port.

The diagram given below depicts the functioning of an output port in a router:

Switching Fabric

It is the combination of hardware and software that moves data coming into a network node out through the correct port
to the next node in the network.

Routing Processor

Routing processor executes routing protocols. It maintains routing information and forwarding tables. It also performs
network management functions within the router.

Components of Router

Let us see the internal and external components of routers.

Internal components

The internal components in a router are as follows:



 Read-only memory (ROM) − It is used to store the router’s bootstrap program.

 Flash memory − It holds the operating system images.

 Random-access memory (RAM) − It is used to store the Routing table and buffered data.

 Nonvolatile random-access memory (NVRAM) − It stores the router’s start-up configuration files. Here the stored
data is non-volatile.

 Network interfaces − It is used to connect routers to networks.

External components

The external components in a router are as follows:

 Virtual terminals − For accessing routers.

 Network management stations.

The router’s input ports, output ports, and switching fabric together implement the forwarding function in
hardware.

The Router's control functions operate at the millisecond or second timescale. These control plane functions are
implemented in software and execute on the routing processor.

6. Explain Virtual Circuit Network and Datagram Network.



Virtual circuit and datagram networks are computer networks that provide connection oriented and connectionless
services respectively. Let’s try to understand what they mean.

A transport layer can offer applications connectionless service or connection-oriented service between two processes. For
example, the internet’s transport layer provides each application a choice between two services : UDP, a connectionless
service; or TCP, a connection-oriented service between two hosts. Network-layer connection and connectionless service in
many ways parallel transport layer connection-oriented and connectionless services.

For example, a network layer connection service begins with handshaking between the source and destination host; and a
network layer connectionless service does not have any handshaking preliminaries.

Although the network layer connection and connectionless services have some parallel with transport-layer connection-
oriented and connectionless services, there are crucial differences:

 In the network layer, these services are host-to-host services provided by the network layer for the transport layer.
In the transport layer these services are process-to-process services provided by the transport layer for the
application layer.

 In all major computer network architectures to date (Internet, ATM, frame relay and so on), the network layer
provides either host-to-host connectionless service or host-to-host connection service, but not both. Computer
networks that provide only a connection service at the network layer are called virtual circuit (VC) networks ;
computer networks that provide only a connectionless service at the network layer are called datagram networks .

 The implementations of connection oriented service in the transport layer and the connection service in the
network layer are fundamentally different. We already know that the transport-layer connection-oriented service is
implemented at the edge of the network in the end systems; we’ll see shortly that the network-layer connection
service is implemented in the routers in the network core as well as in the end systems.

Virtual circuit and datagram networks are two fundamental classes of computer networks. They use very different
information in making their forwarding decision. Let’s now take a closer look at their implementations.

Virtual Circuit Networks

While the internet is a datagram network, many alternative network architectures – including those of ATM
(Asynchronous Transfer Mode) and frame relay – are virtual circuit networks and, therefore, use connections at the
network layer. These network layer connections are called virtual circuits (VCs). Let’s now consider how a VC service can
be implemented in a computer network.

A VC consists of :

1. a path (that is , a series of links and routers) between the source and destination hosts,

2. VC numbers, one number for each link along the path, and

3. entries in the forwarding table in each router along the path.

A packet belonging to a virtual circuit will carry a VC number in its header. Because a virtual circuit may have a different VC
number on each link, each intervening router must replace the VC number of each traversing packet with a new VC
number. The new VC number is obtained from the forwarding table.

To illustrate the concept, consider the network shown in the figure below:

The numbers next to links of R1 in the above figure are the link interface numbers. Suppose now that Host A requests that
the network establish a VC between itself and Host B. Suppose also that the network chooses the path A-R1-R2-B and
assigns VC numbers 12, 22, and 32 to the three links in this path for this virtual circuit. In this case, when a packet in this
VC leaves Host A, the value in the VC number field in the packet header is 12; when it leaves R1, the value is 22; and when
it leaves R2, the value is 32.

How does the router determine the replacement VC number for a packet traversing the router? For a VC network, each
router’s forwarding table includes VC number translation; for example the forwarding table in R1 might look something
like the table below:

Whenever a new VC is established across a router, an entry is added to the forwarding table. Similarly, whenever a VC
terminates, the appropriate entries in each table along its path are removed.

You might be wondering why a packet doesn’t just keep the same VC number on each of the links along its route. The
answer is twofold. First, replacing the number from link to link reduces the length of the VC field in the packet header.
Second, and more importantly, VC setup is considerably simplified by permitting a different VC number at each link along
the path of the VC. Specifically, with multiple VC numbers, each link in the path can choose a VC number independently of
the VC numbers chosen at other links along the path. If a common VC number were required for all links along the path,
the routers would have to exchange and process a substantial number of messages to agree on a common VC number
(e.g. one that is not being used by any other existing VC at these routers) to be used for a connection.
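The VC-number translation described above can be sketched as a small table lookup. Only the (interface 1, VC 12) → (interface 2, VC 22) entry for R1 comes from the A-R1-R2-B example; the other entries and interface numbers are made up for illustration:

```python
# Sketch of a per-router VC forwarding table:
# (incoming interface, incoming VC number) -> (outgoing interface, outgoing VC number).
# The first entry mirrors the A-R1-R2-B example (VC 12 in, VC 22 out);
# the remaining entries are hypothetical.
r1_table = {
    (1, 12): (2, 22),
    (2, 63): (1, 18),
    (3, 7):  (2, 17),
}

def forward(table, in_interface, in_vc):
    """Rewrite the packet's VC number and choose the output interface."""
    out_interface, out_vc = table[(in_interface, in_vc)]
    return out_interface, out_vc

print(forward(r1_table, 1, 12))   # a packet from Host A leaves R1 carrying VC 22
```

Setting up a VC adds an entry to this dictionary; tearing it down removes the entry, which is exactly the per-connection state the text says routers must maintain.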

In a VC network, the network’s routers must maintain connection state information for the ongoing connections.
Specifically, each time a new connection is established across a router, a new connection entry must be added to the
router’s forwarding table; and each time a connection is released an entry must be removed from the table. Note that
even if there is no VC number translation, it is still necessary to maintain connection state information that associates VC
numbers with output interface numbers. The issue of whether or not a router maintains connection state information for
each ongoing connection is a crucial one.

There are three identifiable phases in a virtual circuit:

 VC Setup : During this setup phase, the sending transport layer contacts the network layer, specifies the receiver’s
address, and waits for the network to set up the VC. The network layer determines the path between sender and
receiver, that is, the series of links and routers through which all packets of the VC will travel. The network layer also
determines the VC number for each link along the path. Finally, the network layer adds an entry in the forwarding
table in each router along the path. During VC setup, the network layer may also reserve resources (for example,
bandwidth) along the path of the VC.

 Data Transfer : As shown in the figure below, once the VC has been established, packets can begin to flow along the
VC.

 VC Teardown : This is initiated when the sender (or receiver) informs the network layer of its desire to terminate
the VC. The network layer will then typically inform the end system on the other side of the network of the call
termination and update the forwarding tables in each of the packet routers on the path to indicate that the VC no
longer exists.

There is a subtle but important distinction between VC setup at the network layer and connection setup at the transport
layer (for example, the TCP three-way handshake). Connection setup at the transport layer involves only the two end
systems. During transport-layer connection setup, the two end systems alone determine the parameters (for example,
initial sequence number and flow-control window size) of their transport-layer connection. Although the two end systems
are aware of the transport-layer connection, the routers within the network are completely oblivious to it. On the other
hand, with a VC network layer, routers along the path between the two end systems are involved in VC setup, and each
router is fully aware of all the VCs passing through it.

The messages that the end systems send into the network to initiate or terminate a VC, and the messages passed between
the routers to set up the VC (that is, to modify connection state in router tables) are known as signalling messages, and
the protocols used to exchange these messages are often referred to as signalling protocols. VC setup is shown in the
figure above.

Datagram Networks

In a datagram network, each time an end system wants to send a packet, it stamps the packet with the address of the
destination end system and then pops the packet into the network. As shown in the figure below , there is no VC setup
and routers do not maintain any VC state information (because there are no VCs).

As a packet is transmitted from source to destination, it passes through a series of routers. Each of these routers uses the
packet’s destination address to forward the packet. Specifically, each router has a forwarding table that maps destination
address to link interfaces; when a packet arrives at the router, the router uses the packet’s destination address to look up
the appropriate output link interface in the forwarding table. The router then forwards the packet to that output link
interface.

To get some further insight into the lookup operation, let’s look at a specific example. Suppose that all destination
addresses are 32 bits (which just happens to be the length of the destination address in an IP datagram). A brute-force
implementation of the forwarding table would have one entry for every possible destination address. Since there are
more than 4 billion possible addresses, this option is totally out of the question.

Now, let’s further suppose that our router has four links, numbered 0 through 3, and the packets are to be forwarded to
the link interfaces as follows:

Prefix Match Link Interface

11001000 00010111 00010 0

11001000 00010111 00011000 1

11001000 00010111 00011 2

Otherwise 3

With this style of forwarding table, the router matches a prefix of the packet’s destination address with the entries in the
table; if there’s a match, the router forwards the packet to the link interface associated with the match.

For example, suppose the packet’s destination address is 11001000 00010111 00010110 10100001; because the 21-bit prefix of
this address matches the first entry in the table, the router forwards the packet to link interface 0. If a prefix doesn’t match
any of the first three entries, then the router forwards the packet to interface 3.

Although this sounds simple enough, there’s an important subtlety here. You may have noticed that it is possible for a
destination address to match more than one entry. For example, the first 24 bits of the address 11001000 00010111
00011000 10101010 match the second entry in the table, and the first 21 bits of the address match the third entry in the
table. When there are multiple matches, the router uses the longest prefix matching rule; that is, it finds the longest
matching entry in the table and forwards the packet to the link interface associated with the longest prefix match.
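The longest-prefix-match rule just described can be sketched as a lookup over prefixes written as bit strings (the 21-bit and 24-bit prefixes and interface numbers mirror the worked examples in the text):

```python
# Forwarding table as (bit-string prefix, link interface) pairs.
FORWARDING_TABLE = [
    ("110010000001011100010", 0),       # 21-bit prefix -> interface 0
    ("110010000001011100011000", 1),    # 24-bit prefix -> interface 1
    ("110010000001011100011", 2),       # 21-bit prefix -> interface 2
]
DEFAULT_INTERFACE = 3                   # "otherwise" entry

def lookup(dest_bits: str) -> int:
    """Return the interface of the longest matching prefix (default if none)."""
    best_len, best_if = -1, DEFAULT_INTERFACE
    for prefix, interface in FORWARDING_TABLE:
        if dest_bits.startswith(prefix) and len(prefix) > best_len:
            best_len, best_if = len(prefix), interface
    return best_if

# 11001000 00010111 00011000 10101010 matches both the 24-bit and a 21-bit
# entry; the longest (24-bit) match wins, so the packet goes to interface 1.
print(lookup("11001000000101110001100010101010"))
```

Real routers use specialised data structures (tries, TCAMs) instead of a linear scan, but the matching rule is the same.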

Although routers in datagram networks maintain no connection state information, they nevertheless maintain forwarding
state information in their forwarding tables. However, the time scale at which this forwarding information changes is
relatively slow. Indeed, in a datagram network the forwarding tables are modified by routing algorithms, which typically
update a forwarding table every one-to-five minutes or so. In a VC network, a forwarding table in a router is modified
whenever a new connection is set up through the router or whenever an existing connection through the router is torn
down. This could easily happen at microsecond timescale in a backbone, tier-1 router.

Because forwarding tables in datagram networks can be modified at any time, a series of packets sent from one end
system to another may follow different paths through the network and may arrive out of order.

7. Draw and Explain TCP Header.

Every TCP segment consists of a 20-byte fixed-format header, which may be followed by header options. A segment can
carry up to 65,535 − 20 (IP header) − 20 (TCP header) = 65,495 data bytes.

The TCP header format is shown in the figure below −



Source Port

It is a 16-bit source port number used by the receiver to reply.

Destination Port

It is a 16-bit destination port number.

Sequence Number

The sequence number of the first data byte in this segment. If the SYN control bit is set, the sequence number is the initial
sequence number n, and the first data byte is numbered n + 1.

Acknowledgement Number

If the ACK control bit is set, this field contains the next sequence number that the receiver expects to receive.

Data Offset

It specifies the number of 32-bit words in the TCP header, indicating where the user data begins.

Reserved (6 bits)

It is reserved for future use.

URG

It indicates that the urgent pointer field is significant, i.e., the segment carries urgent data.

ACK

It indicates that the acknowledgement field in the segment is significant, as discussed earlier.

PUSH

When set, the PUSH flag asks the receiver to deliver the data to the application immediately rather than buffering it.

RST

It resets the connection.

SYN

It synchronizes the sequence number.

FIN

This indicates no more data from the sender.

Window

It is used in acknowledgement segments. It specifies the number of data bytes, beginning with the one indicated in the
acknowledgement number field, that the receiver is ready to accept.

Checksum

It is used for error detection.

Options

Options provide additional facilities not covered by the regular header; several optional parameters can be used between
a TCP sender and receiver. The length of this field varies depending on the options used, but it cannot be larger than 40
bytes, because the 4-bit data offset field limits the entire header to 15 × 4 = 60 bytes.

The most typical option is the maximum segment size (MSS) option, with which a TCP receiver communicates to the TCP
sender the largest segment it can accept. The other options are used for flow control and congestion control, as listed in
the table below.

Table of Options

The table of options in TCP segment header is as follows −

Kind Length Meaning

0 - End of option list

1 - No operation

2 4 Maximum segment size

3 3 Window scale

4 2 SACK permitted

5 Variable SACK

8 10 Timestamps

Padding

Options may vary in size, and it may be necessary to “pad” the TCP header with zeros so that the header ends on a 32-bit
word boundary, as the standard requires.

Data

This variable-length field carries the application data from sender to receiver; it may be absent, as in acknowledgement
segments that carry no data in the reverse direction. This field, together with the TCP header fields, constitutes a TCP
segment.
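As a sketch of how these fields sit on the wire, the fixed 20-byte header can be parsed with Python’s struct module (the layout follows the standard TCP header; the sample segment values are made up):

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the 20-byte fixed-format TCP header."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF      # header length in 32-bit words
    flags = offset_flags & 0x3F                   # URG ACK PSH RST SYN FIN bits
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_bytes": data_offset * 4,          # where the user data begins
        "urg": bool(flags & 0x20), "ack_flag": bool(flags & 0x10),
        "psh": bool(flags & 0x08), "rst": bool(flags & 0x04),
        "syn": bool(flags & 0x02), "fin": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }

# A hand-built SYN segment: ports 1234 -> 80, seq 100, data offset 5 words.
raw = struct.pack("!HHIIHHHH", 1234, 80, 100, 0, (5 << 12) | 0x02, 65535, 0, 0)
print(parse_tcp_header(raw))
```

The `!` in the format string selects network (big-endian) byte order, which is how all TCP header fields are transmitted.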

Unit - 4
1. Define packet switching and circuit switching.

Circuit switching and packet switching are the two different methods of switching that are used to connect multiple
communicating devices with one another. The key difference between circuit switching and packet switching is that
packet switching is connectionless, whereas circuit switching is connection-oriented.

These are the two most common switching methods. We will understand how these processes affect the transfer of data
from sender to receiver and differ from one another.

What Is Circuit Switching?

Circuit switching is defined as the method of switching used to establish a dedicated communication path between the
sender and the receiver. The link established between the sender and the receiver is a physical one. An analog telephone
network is a well-known example of circuit switching. Bandwidth is fixed in this type of switching. Let us look in detail at
the advantages and disadvantages of circuit switching.

Advantages and Disadvantages of Circuit Switching

Advantages

 The bandwidth used is fixed.

 The quality of communication is increased as a dedicated communication channel is used.

 The rate at which the data is transmitted is fixed.



 While switching, no time is wasted in waiting.

 It is preferred when communication is long and continuous.

Disadvantages

 Since dedicated channels are used, the bandwidth required is more.

 Resources are not fully utilized.

 Since a dedicated channel has been used, the transmission of other data becomes impossible.

 The time taken by the two stations for the establishment of the physical link is too long.

 Circuit switching is expensive because every connection requires establishing a dedicated path.

 The link between the sender and the receiver will be maintained until and unless the user terminates the link. This
will also continue if there is no transfer of data taking place.

What is Packet Switching?

Packet switching is defined as a connectionless method of switching in which messages are divided into units known as
packets. Each packet is routed from the source to the destination individually. The actual data in these packets is carried
by the payload. When the packets arrive at the destination, it is the responsibility of the destination to put them back in
the right order. Let us look in detail at the advantages and disadvantages of packet switching.

Advantages and Disadvantages of Packet Switching



Advantages

 There is little delay in delivery, as packets are sent toward the destination as soon as they are available.

 There is no requirement for massive storage space, as the information is passed on to the destination as soon as
it is received.

 Failure in the links does not stop the delivery of the data as these packets can be routed from other paths too.

 Multiple users can use the same channel while transferring their packets.

 Bandwidth usage is better in packet switching, as multiple sources can transfer packets over the same
link.

Disadvantages

 Installation costs of packet switching are expensive.

 Reliable delivery of packets requires complicated protocols.

 High-quality voice calls cannot use packet switching as there is a lot of delay in this type of communication.

 Connectivity issues may lead to loss of information and delay in the delivery of the information.

Let us understand the difference between circuit switching and packet switching.

Circuit Switching Vs Packet Switching



Circuit switching is a data-transfer technology in which messages are sent from one point to another over a dedicated
circuit: a physical connection is established between sender and receiver, and this dedicated circuit handles all data
transmission for the session, in both directions. Packet switching can be used as an alternative to circuit switching: in
packet-switched networks, data is sent in discrete units of variable length.

Difference Between Circuit Switching and Packet Switching

Circuit Switching Packet Switching

A circuit needs to be established to make sure that data transmission takes place. Each packet containing the
information that needs to be processed goes through a dynamic route.

A uniform path is followed throughout the session. There is no uniform path that is followed end to end through the
session.

It is ideal for voice communication, while also keeping the delay uniform. It is used mainly for data transmission, as the
delay is not uniform.

Without a connection it cannot exist, as the connection needs to be present on the physical layer. A connection is not
necessary, as it can exist without one too; it needs to be present on the network layer.

Data to be transmitted is processed at the source itself. Data is processed and transmitted at the source as well as at
each switching station.

2. Define Router & Routing Table.

Routers:
A router is a networking device that forwards data packets between computer networks. This device is usually connected
to two or more different networks. When a data packet arrives at a router port, the router reads the address information
in the packet to determine to which port the packet should be sent. For example, a router provides you with internet
access by connecting your LAN to the Internet.

When a packet arrives at a router, it examines the destination IP address of the received packet and makes routing
decisions accordingly. Routers use routing tables to determine through which interface the packet will be sent. A routing
table lists all networks for which routes are known. Each router’s routing table is unique and is stored in the RAM of the
device.

Routing Table:
A routing table is a set of rules, often viewed in table format, that is used to determine where data packets traveling over
an Internet Protocol (IP) network will be directed. All IP-enabled devices, including routers and switches, use routing
tables. See below a Routing Table:

Destination Subnet mask Interface

128.75.43.0 255.255.255.0 Eth0

128.75.43.0 255.255.255.128 Eth1

192.12.17.5 255.255.255.255 Eth3

0.0.0.0 (default) 0.0.0.0 Eth2

The entry corresponding to the default gateway configuration is a network destination of 0.0.0.0 with a network mask
(netmask) of 0.0.0.0; in other words, the subnet mask of the default route is always 0.0.0.0.

Entries of an IP Routing Table:


A routing table contains the information necessary to forward a packet along the best path toward its destination. Each
packet contains information about its origin and destination. Routing Table provides the device with instructions for
sending the packet to the next hop on its route across the network.

Each entry in the routing table consists of the following entries:

1. Network ID:
The network ID or destination corresponding to the route.

2. Subnet Mask:
The mask that is used to match a destination IP address to the network ID.

3. Next Hop:
The IP address to which the packet is forwarded

4. Outgoing Interface:
Outgoing interface the packet should go out to reach the destination network.

5. Metric:
A common use of the metric is to indicate the minimum number of hops (routers crossed) to the network ID.

Routing table entries can be used to store the following types of routes:

 Directly Attached Network IDs

 Remote Network IDs

 Host Routes

 Default Route


When a router receives a packet, it examines the destination IP address and looks it up in its routing table to figure out
which interface the packet will be sent out of.

How are Routing Tables populated?


There are three ways to populate a routing table:

 Directly connected networks are added automatically.

 Using Static Routing.

 Using Dynamic Routing.



These Routing tables can be maintained manually or dynamically. In dynamic routing, devices build and maintain their
routing tables automatically by using routing protocols to exchange information about the surrounding network topology.
Dynamic routing tables allow devices to “listen” to the network and respond to occurrences like device failures and
network congestion. Tables for static network devices do not change unless a network administrator manually changes
them.

Route Determination Process (finding Subnet ID using Routing Table):


Consider a network subnetted into 4 subnets, as shown in the figure above. The IP addresses of the 4 subnets are:

200.1.2.0 (Subnet a)

200.1.2.64 (Subnet b)

200.1.2.128 (Subnet c)

200.1.2.192 (Subnet d)

Then, Routing table maintained by the internal router looks like:

Destination Subnet Mask Interface

200.1.2.0 255.255.255.192 a

200.1.2.64 255.255.255.192 b

200.1.2.128 255.255.255.192 c

200.1.2.192 255.255.255.192 d

Default 0.0.0.0 e

To find the right subnet (subnet ID), the router performs a bitwise AND of the destination IP address in the data packet
with each of the subnet masks, one by one.

 If only one match occurs, the router forwards the data packet on the corresponding interface.

 If more than one match occurs, the router forwards the data packet on the interface corresponding to the longest
subnet mask.

 If no match occurs, the router forwards the data packet on the interface corresponding to the default entry.
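The route-determination steps above can be sketched in Python using the standard `ipaddress` module; the subnet table and interface names a–e come from the example above:

```python
import ipaddress

# Routing table from the subnetting example (four /26 subnets plus a default).
ROUTES = [
    ("200.1.2.0",   "255.255.255.192", "a"),
    ("200.1.2.64",  "255.255.255.192", "b"),
    ("200.1.2.128", "255.255.255.192", "c"),
    ("200.1.2.192", "255.255.255.192", "d"),
]
DEFAULT_IF = "e"

def route(dest: str) -> str:
    """Bitwise-AND the destination with each mask; prefer the longest mask."""
    dest_int = int(ipaddress.IPv4Address(dest))
    best = None  # (mask length, interface)
    for network, mask, interface in ROUTES:
        mask_int = int(ipaddress.IPv4Address(mask))
        if dest_int & mask_int == int(ipaddress.IPv4Address(network)):
            length = bin(mask_int).count("1")
            if best is None or length > best[0]:
                best = (length, interface)
    return best[1] if best else DEFAULT_IF

print(route("200.1.2.70"))   # 70 & 192 = 64, so it falls in 200.1.2.64 -> "b"
print(route("10.0.0.1"))     # no match -> default interface "e"
```

Because all four subnet masks here are the same length, at most one subnet can match; the longest-mask tie-break only matters in tables that mix prefix lengths.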

3. Compare connection oriented and connectionless service with example.



Both connection-oriented service and connection-less service are used for connection establishment between two or
more devices. These types of services are offered by the network layer.

Connection-oriented service is related to the telephone system. It includes connection establishment and connection
termination. In a connection-oriented service, the Handshake method is used to establish the connection between sender
and receiver.

Connection-less service is related to the postal system. It does not include any connection establishment and connection
termination. Connection-less Service does not give a guarantee of reliability. In this, Packets do not follow the same path
to reach their destination.

Difference between Connection-oriented and Connection-less Services:

S.NO Connection-oriented Service Connection-less Service

1. Connection-oriented service is related to the telephone system. Connection-less service is related to the postal
system.

2. Connection-oriented service is preferred by long and steady communication. Connection-less service is preferred by
bursty communication.

3. Connection-oriented service is necessary. Connection-less service is not compulsory.

4. Connection-oriented service is feasible. Connection-less service is not feasible.

5. In connection-oriented service, congestion is not possible. In connection-less service, congestion is possible.

6. Connection-oriented service gives the guarantee of reliability. Connection-less service does not give a guarantee of
reliability.

7. In connection-oriented service, packets follow the same route. In connection-less service, packets do not follow the
same route.

8. Connection-oriented services require a bandwidth of a high range. Connection-less services require a bandwidth of a
low range.

9. Ex: TCP (Transmission Control Protocol). Ex: UDP (User Datagram Protocol).

10. Connection-oriented service requires authentication. Connection-less service does not require authentication.

4. Explain Distance Vector Routing algorithm.

A distance-vector routing (DVR) protocol requires that a router inform its neighbors of topology changes periodically. It is
historically known as the old ARPANET routing algorithm (also known as the Bellman-Ford algorithm).

Bellman-Ford basics – Each router maintains a distance vector table containing the distance between itself and ALL
possible destination nodes. Distances, based on a chosen metric, are computed using information from the neighbors’
distance vectors.

Information kept by a DV router –

 Each router has an ID.

 Associated with each link connected to a router, there is a link cost (static or dynamic).

 Intermediate hops.

Distance Vector Table Initialization -

 Distance to itself = 0

 Distance to ALL other routers = infinity.

Distance Vector Algorithm –

1. A router transmits its distance vector to each of its neighbors in a routing packet.

2. Each router receives and saves the most recently received distance vector from each of its neighbors.

3. A router recalculates its distance vector when:

 It receives a distance vector from a neighbor containing different information than before.

 It discovers that a link to a neighbor has gone down.

The DV calculation is based on minimizing the cost to each destination

Dx(y) = Estimate of least cost from x to y

C(x,v) = Node x knows cost to each neighbor v

Dx = [Dx(y): y ∈ N ] = Node x maintains distance vector

Node x also maintains its neighbors' distance vectors

– For each neighbor v, x maintains Dv = [Dv(y): y ∈ N ]

Note –

 From time-to-time, each node sends its own distance vector estimate to neighbors.

 When a node x receives new DV estimate from any neighbor v, it saves v’s distance vector and it updates its own DV
using B-F equation:

 Dx(y) = min { C(x,v) + Dv(y), Dx(y) } for each node y ∈ N
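The update in the note above can be sketched as a single Bellman-Ford relaxation step. The node names and link costs below are illustrative, not taken from the figure:

```python
import math

def bf_update(x, link_cost, neighbor_dv, nodes):
    """One Bellman-Ford relaxation at node x:
    Dx(y) = min over neighbors v of { c(x,v) + Dv(y) }, with Dx(x) = 0."""
    dx = {}
    for y in nodes:
        if y == x:
            dx[y] = 0
        else:
            dx[y] = min((link_cost[v] + neighbor_dv[v][y]
                         for v in link_cost), default=math.inf)
    return dx

# Three routers X, Y, Z with hypothetical link costs.
nodes = ["X", "Y", "Z"]
c_x = {"Y": 2, "Z": 7}                      # X's direct link costs
dv = {                                      # distance vectors advertised to X
    "Y": {"X": 2, "Y": 0, "Z": 1},
    "Z": {"X": 7, "Y": 1, "Z": 0},
}
dx = bf_update("X", c_x, dv, nodes)
print(dx)   # X reaches Z more cheaply via Y: 2 + 1 = 3 < 7
```

Running this step at every node whenever a neighbor’s vector changes, until no entry changes, is exactly the distributed computation the algorithm describes.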

Example – Consider 3-routers X, Y and Z as shown in figure. Each router have their routing table. Every routing table will
contain distance to the destination nodes.

Consider router X. X will share its routing table with its neighbors, and its neighbors will share their routing tables with X;
the distance from node X to each destination is then calculated using the Bellman-Ford equation.

Dx(y) = min { C(x,v) + Dv(y)} for each node y ∈ N



As we can see, the distance from X to Z is less when Y is the intermediate node (hop), so it will be updated in X’s routing
table.

Similarly for Z also –



Finally the routing table for all –



Advantages of Distance Vector routing –

 It is simpler to configure and maintain than link state routing.

Disadvantages of Distance Vector routing –

 It is slower to converge than link state.

 It is at risk from the count-to-infinity problem.

 It creates more traffic than link state since a hop count change must be propagated to all routers and
processed on each router. Hop count updates take place on a periodic basis, even if there are no changes in
the network topology, so bandwidth-wasting broadcasts still occur.

 For larger networks, distance vector routing results in larger routing tables than link state since each router
must know about all other routers. This can also lead to congestion on WAN links.

5. Define Link State Routing & show the process of generating Link State Packets.

Link state routing is a technique in which each router shares the knowledge of its neighborhood with every other router in
the internetwork.

The three keys to understand the Link State Routing algorithm:



o Knowledge about the neighborhood: Instead of sending its entire routing table, a router sends information about its
neighborhood only. A router broadcasts the identities and costs of its directly attached links to the other routers.

o Flooding: Each router sends the information to every other router on the internetwork except its neighbors. This
process is known as Flooding. Every router that receives the packet sends the copies to all its neighbors. Finally,
each and every router receives a copy of the same information.

o Information sharing: A router sends the information to every other router only when the change occurs in the
information.

Link State Routing has two phases:

Reliable Flooding

o Initial state: Each node knows the cost of its neighbors.

o Final state: Each node knows the entire graph.

Route Calculation

Each node uses Dijkstra's algorithm on the graph to calculate the optimal routes to all nodes.

o The link state routing algorithm uses Dijkstra's algorithm, which finds the shortest path from one node to every
other node in the network.

o Dijkstra's algorithm is iterative and has the property that after the kth iteration, the least-cost paths are known for
k destination nodes.

Let's describe some notations:

o c(i, j): Link cost from node i to node j. If nodes i and j are not directly linked, then c(i, j) = ∞.

o D(v): The cost of the current least-cost path from the source node to destination v.

o P(v): The previous node (a neighbor of v) along the current least-cost path from the source to v.

o N: The set of nodes whose least-cost path from the source is definitively known.

Algorithm

Initialization:

N = {A}                          // A is the source node

for all nodes v
    if v adjacent to A
        then D(v) = c(A,v)
    else D(v) = infinity

Loop:

    find w not in N such that D(w) is a minimum
    add w to N
    update D(v) for all v adjacent to w and not in N:
        D(v) = min( D(v), D(w) + c(w,v) )

until all nodes are in N

In the above algorithm, an initialization step is followed by the loop. The number of times the loop is executed is equal to
the total number of nodes available in the network.
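The algorithm above can be sketched in Python. The edge costs below are inferred from the step tables of the worked example that follows; they are an assumption, since the figure itself is not reproduced here:

```python
# Dijkstra's algorithm: D[v] is the current least cost from the source,
# P[v] the predecessor on that path, N the set of finalized nodes.

INF = float("inf")

def dijkstra(cost, source):
    """cost[i][j] = link cost c(i,j); returns (D, P)."""
    D = {v: cost[source].get(v, INF) for v in cost}
    D[source] = 0
    P = {v: source for v in cost if v in cost[source]}
    N = {source}
    while len(N) < len(cost):
        # find w not in N such that D(w) is a minimum
        w = min((v for v in cost if v not in N), key=lambda v: D[v])
        N.add(w)
        # update D(v) for all v adjacent to w and not in N
        for v, c_wv in cost[w].items():
            if v not in N and D[w] + c_wv < D[v]:
                D[v] = D[w] + c_wv
                P[v] = w
    return D, P

# Graph assumed from the worked example's tables (c(A,B)=2, c(A,C)=5, ...)
graph = {
    "A": {"B": 2, "C": 5, "D": 1},
    "B": {"A": 2, "C": 3, "D": 2},
    "C": {"A": 5, "B": 3, "D": 3, "E": 1, "F": 5},
    "D": {"A": 1, "B": 2, "C": 3, "E": 1},
    "E": {"C": 1, "D": 1, "F": 2},
    "F": {"C": 5, "E": 2},
}
D, P = dijkstra(graph, "A")
print(D)  # matches the final table: B=2, C=3, D=1, E=2, F=4
```

Running this reproduces the final table of the example below, including D(F) = 4 with predecessor E.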

Let's understand through an example:

In the figure for this example, the source vertex is A.



Step 1:

The first step is an initialization step. The currently known least-cost paths from A to its directly attached neighbors B, C,
and D are 2, 5, and 1 respectively. The cost from A to B is set to 2, from A to D to 1, and from A to C to 5. The costs from A
to E and F are set to infinity, as they are not directly linked to A.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

Step 2:

In the above table, we observe that vertex D contains the least cost path in step 1. Therefore, it is added in N. Now, we
need to determine a least-cost path through D vertex.

a) Calculating shortest path from A to B

1. v = B, w = D

2. D(B) = min( D(B) , D(D) + c(D,B) )

3. = min( 2, 1+2)

4. = min( 2, 3)

5. The minimum value is 2. Therefore, the currently shortest path from A to B is 2.

b) Calculating shortest path from A to C

1. v = C, w = D

2. D(C) = min( D(C) , D(D) + c(D,C) )

3. = min( 5, 1+3)

4. = min( 5, 4)

5. The minimum value is 4. Therefore, the currently shortest path from A to C is 4.

c) Calculating shortest path from A to E

1. v = E, w = D

2. D(E) = min( D(E) , D(D) + c(D,E) )

3. = min( ∞, 1+1)

4. = min(∞, 2)

5. The minimum value is 2. Therefore, the currently shortest path from A to E is 2.

Note: The vertex D has no direct link to vertex F. Therefore, the value of D(F) remains infinity.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞

Step 3:

In the above table, we observe that both E and B have the least cost path in step 2. Let's consider the E vertex. Now, we
determine the least cost path of remaining vertices through E.

a) Calculating the shortest path from A to B.

1. v = B, w = E

2. D(B) = min( D(B) , D(E) + c(E,B) )

3. = min( 2 , 2+ ∞ )

4. = min( 2, ∞)

5. The minimum value is 2. Therefore, the currently shortest path from A to B is 2.

b) Calculating the shortest path from A to C.

1. v = C, w = E

2. D(C) = min( D(C) , D(E) + c(E,C) )

3. = min( 4 , 2+1 )

4. = min( 4,3)

5. The minimum value is 3. Therefore, the currently shortest path from A to C is 3.

c) Calculating the shortest path from A to F.

1. v = F, w = E

2. D(F) = min( D(F) , D(E) + c(E,F) )

3. = min( ∞ , 2+2 )

4. = min(∞ ,4)

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞



3 ADE 2,A 3,E 4,E

Step 4:

In the above table, we observe that B vertex has the least cost path in step 3. Therefore, it is added in N. Now, we
determine the least cost path of remaining vertices through B.

a) Calculating the shortest path from A to C.

1. v = C, w = B

2. D(C) = min( D(C) , D(B) + c(B,C) )

3. = min( 3 , 2+3 )

4. = min( 3,5)

5. The minimum value is 3. Therefore, the currently shortest path from A to C is 3.

b) Calculating the shortest path from A to F.

1. v = F, w = B

2. D(F) = min( D(F) , D(B) + c(B,F) )

3. = min( 4, 2 + ∞ )

4. = min( 4, ∞ )

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞

3 ADE 2,A 3,E 4,E

4 ADEB 3,E 4,E

Step 5:

In the above table, we observe that C vertex has the least cost path in step 4. Therefore, it is added in N. Now, we
determine the least cost path of remaining vertices through C.

a) Calculating the shortest path from A to F.

1. v = F, w = C

2. D(F) = min( D(F) , D(C) + c(C,F) )



3. = min( 4, 3+5)

4. = min(4,8)

5. The minimum value is 4. Therefore, the currently shortest path from A to F is 4.

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞

3 ADE 2,A 3,E 4,E

4 ADEB 3,E 4,E

5 ADEBC 4,E

Final table:

Step N D(B),P(B) D(C),P(C) D(D),P(D) D(E),P(E) D(F),P(F)

1 A 2,A 5,A 1,A ∞ ∞

2 AD 2,A 4,D 2,D ∞

3 ADE 2,A 3,E 4,E

4 ADEB 3,E 4,E

5 ADEBC 4,E

6 ADEBCF

Disadvantage:

Heavy traffic is created in link state routing due to flooding. Flooding can cause infinite looping; this problem can be
solved by using the Time-to-live (TTL) field.

6. Compare IPv4 and IPv6.



IPv4 and IPv6 are Internet Protocol version 4 and Internet Protocol version 6. IPv6 is the newer version of the Internet
Protocol, which improves on IPv4 in terms of address space, header efficiency, and built-in security.

Difference Between IPv4 and IPv6:

IPv4 | IPv6
---- | ----
IPv4 has a 32-bit address length | IPv6 has a 128-bit address length
It supports manual and DHCP address configuration | It supports auto and renumbering address configuration
End-to-end connection integrity is unachievable | End-to-end connection integrity is achievable
It can generate about 4.29×10^9 addresses | Its address space is far larger: about 3.4×10^38 addresses
The security feature is dependent on the application | IPsec is an inbuilt security feature of the IPv6 protocol
Address representation is in decimal | Address representation is in hexadecimal
Fragmentation is performed by the sender and forwarding routers | Fragmentation is performed only by the sender
Packet flow identification is not available | Packet flow identification is available, using the flow label field in the header
A checksum field is available | No checksum field is available
It has a broadcast message transmission scheme | Multicast and anycast message transmission schemes are available
Encryption and authentication facilities are not provided | Encryption and authentication are provided
IPv4 has a variable header of 20–60 bytes | IPv6 has a fixed header of 40 bytes
An address consists of 4 fields separated by dots (.) | An address consists of 8 fields separated by colons (:)
IP addresses are divided into five classes: A, B, C, D, E | IPv6 does not have classes of IP addresses
Supports VLSM (Variable Length Subnet Mask) | Does not support VLSM
Example of IPv4: 66.94.29.13 | Example of IPv6: 2001:0000:3238:DFE1:0063:0000:0000:FEFB
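Several rows of the comparison can be checked with Python's standard ipaddress module, applied to the two example addresses:

```python
# Inspect the example IPv4 and IPv6 addresses: address length in bits,
# and IPv6's hexadecimal colon-separated representation with zero runs
# collapsed in the compressed form.
import ipaddress

v4 = ipaddress.ip_address("66.94.29.13")
v6 = ipaddress.ip_address("2001:0000:3238:DFE1:0063:0000:0000:FEFB")

print(v4.version, v4.max_prefixlen)  # 4 32  -> a 32-bit address
print(v6.version, v6.max_prefixlen)  # 6 128 -> a 128-bit address
print(v6.compressed)                 # 2001:0:3238:dfe1:63::fefb
```

The compressed form shows how IPv6 abbreviates a run of all-zero fields with "::".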



7. Define ARP and justify why ARP Query sent within broadcast frame and ARP

Response sent within a frame with specific destination MAC Address.

Most computer programs/applications use a logical address (IP address) to send/receive messages; however, the actual
communication happens over the physical address (MAC address), i.e., at layer 2 of the OSI model. So our mission is to get
the destination MAC address, which enables communication with other devices. This is where ARP comes into the
picture: its function is to translate IP addresses to physical addresses.

The acronym ARP stands for Address Resolution Protocol which is one of the most important protocols of the Network
layer in the OSI model.
Note: ARP finds the hardware address, also known as Media Access Control (MAC) address, of a host from its known IP
address.

Let’s look at how ARP works.

Imagine a device that wants to communicate with another over the internet. What does ARP do? It broadcasts a packet
to all the devices of the source network.

Each device on the network peels the data link layer header from the protocol data unit (PDU), called a frame, and
transfers the packet to the network layer (layer 3 of OSI), where the packet’s network ID is validated against the network
ID of the destination IP. If they are equal, the device responds to the source with the MAC address of the destination;
otherwise, the packet reaches the gateway of the network, which broadcasts the packet to the devices it is connected to
and validates their network IDs.

The above process continues until the second-to-last network device in the path reaches the destination, where it gets
validated and ARP, in turn, responds with the destination MAC address.

ARP: ARP stands for Address Resolution Protocol. It is responsible for finding the hardware address of a host from a
known IP address. There are three variants of ARP:

(i) Reverse ARP

(ii) Proxy ARP

(iii) Inverse ARP

1. ARP Cache: After resolving the MAC address, ARP sends it to the source, where it is stored in a table for future
reference. Subsequent communications can use the MAC address from the table.

2. ARP Cache Timeout: It indicates the time for which a MAC address can reside in the ARP cache.

3. ARP request: This is the broadcasting of a packet over the network to discover the destination MAC address.
An ARP request packet contains:

1. The physical address of the sender.

2. The IP address of the sender.

3. The physical address of the receiver, which is FF:FF:FF:FF:FF:FF (all 1s, the broadcast address).

4. The IP address of the receiver.

4. ARP response/reply: It is the MAC address response that the source receives from the destination which aids in
further communication of the data.

 CASE-1: The sender is a host and wants to send a packet to another host on the same network.

 Use ARP to find another host’s physical address

 CASE-2: The sender is a host and wants to send a packet to another host on another network.

 The sender looks at its routing table.

 Find the IP address of the next-hop (router) for this destination.

 Use ARP to find the router’s physical address



 CASE-3: the sender is a router and received a datagram destined for a host on another network.

 The router checks its routing table.

 Find the IP address of the next router.

 Use ARP to find the next router’s physical address.

 CASE-4: The sender is a router that has received a datagram destined for a host in the same network.

 Use ARP to find this host’s physical address.

NOTE: An ARP request is a broadcast, and an ARP response is a Unicast.
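The broadcast-request/unicast-reply behavior in the note above can be simulated with a toy sketch; the IP and MAC addresses below are made up for illustration:

```python
# Toy simulation of ARP on one LAN segment: the request frame is broadcast
# to every host, but only the owner of the target IP replies, and the reply
# is unicast back to the requester.

BROADCAST = "FF:FF:FF:FF:FF:FF"

hosts = {  # ip -> mac (illustrative values)
    "192.168.1.1": "AA:AA:AA:AA:AA:01",
    "192.168.1.2": "AA:AA:AA:AA:AA:02",
}

def arp_request(sender_ip, target_ip):
    frame = {"dst_mac": BROADCAST, "src_mac": hosts[sender_ip],
             "target_ip": target_ip}
    # Every host on the segment receives the broadcast frame...
    for ip, mac in hosts.items():
        if ip == frame["target_ip"]:
            # ...but only the owner of the target IP replies, unicast.
            return {"dst_mac": frame["src_mac"], "src_mac": mac}
    return None  # nobody owns the target IP on this segment

reply = arp_request("192.168.1.1", "192.168.1.2")
print(reply["src_mac"])  # the resolved MAC of 192.168.1.2
```

A real host would then store the resolved MAC in its ARP cache, as described above.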

Test Yourself :

Connect two PC, say A and B with a cross cable. Now you can see the working of ARP by typing these commands:

1. A > arp -a

There will be no entry at the table because they never communicated with each other.

2. A > ping 192.168.1.2

IP address of destination is 192.168.1.2

Reply comes from destination but one packet is lost because of ARP processing.

Now, the entries of the ARP table can be seen by typing the arp -a command again.
This is how the ARP table looks:

8. What is the main difference between forwarding and routing? Explain the forwarding

technique used by the router to switch packets from an input port to an output port of

the router.



Forwarding and Routing in Network Layer

The role of the network layer is thus deceptively simple – to move packets from a sending host to a receiving host. To do
so, two important network-layer functions can be identified:

Forwarding

When a packet arrives at a router’s input link, the router must move the packet to the appropriate output link. For
example, a packet arriving from Host H1 to Router R1 must be forwarded to the next router on a path to H2.

Routing

The network layer must determine the route or path taken by packets as they flow from a sender to a receiver. The
algorithms that calculate these paths are referred to as routing algorithms. A routing algorithm would determine, for
example, the path along which packets flow from H1 to H2.

The terms forwarding and routing are often used interchangeably by writers discussing network layers. We’ll use these
terms more precisely in this book.

Forwarding refers to the router-local action of transferring packet from an input link interface to the appropriate output
link interface.

Routing refers to the network-wide process that determines the end-to-end paths that packets take from source to
destination.

Using a driving analogy, consider the trip from Pennsylvania to Florida undertaken by our traveller discussed earlier.
During this trip, our driver passes through many interchanges en route to Florida. We can think of forwarding as the
process of getting through a single interchange: A car enters the interchange from one road and determines which road it
should take to leave the interchange. We think of routing as the process of planning the trip from Pennsylvania to Florida:
Before embarking on the trip, the driver has consulted a map and chosen one of many paths possible, with each path
consisting of a series of road segments connected at interchanges.

Every router has a forwarding table. A router forwards a packet by examining the value of a field in the arriving packet’s
header, and then using this header value to index into the router’s forwarding table. The value stored in the forwarding
table entry for that header indicates the router’s outgoing link interface to which the packet is to be forwarded.

Depending on the network layer protocol, the header value could be the destination address of the packet or an indication
of the connection to which the packet belongs.

Figure below provides an example.



In the above figure, a packet with a header field value of 0111 arrives to a router. The router indexes into its forwarding
table and determines that the output link interface for this packet is interface 2. The router then internally forwards the
packet to interface 2.
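This table lookup can be sketched with a longest-prefix match. The 0111 → interface 2 mapping follows the figure's example; the other prefixes are assumptions for illustration:

```python
# Router-local forwarding step: index an arriving packet's header value
# into the forwarding table and return the output link interface.

forwarding_table = [  # (header prefix, output interface) - illustrative
    ("00", 0),
    ("010", 1),
    ("011", 2),
    ("10", 2),
    ("11", 3),
]

def forward(header_bits):
    """Return the output interface via longest-prefix match, or None."""
    matches = [(p, iface) for p, iface in forwarding_table
               if header_bits.startswith(p)]
    if not matches:
        return None  # no matching entry: the packet is dropped
    # the longest matching prefix wins
    return max(matches, key=lambda m: len(m[0]))[1]

print(forward("0111"))  # 2, as in the figure's example
```

Note that forwarding is purely local: the function only consults this router's table, while routing is the network-wide process that filled the table in.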

You might be wondering how the forwarding tables in the routers are configured. This is a critical issue, one that exposes
the important interplay between routing and forwarding.

As shown in the figure above, the routing algorithm determines the values that are inserted into the routers’ forwarding
tables. The routing algorithm may be centralized (e.g. with an algorithm executing on a central site and downloading
routing information to each of the routers) or decentralized (i.e. with a piece of the distributed routing algorithm running
in each router). In either case, a router receives routing protocol messages, which are used to configure its forwarding
table. The distinct and different purposes of the forwarding and routing functions can be further illustrated by considering
the hypothetical (and unrealistic, but technically feasible) case of a network in which all forwarding tables are configured
directly by human operators physically present at the routers. In this case, no routing protocols would be required! Of
course, the human operators would need to interact with each other to ensure that the forwarding tables were configured
in such a way that packets reached their intended destinations. It’s also likely that human configuration would be more
error-prone and much slower to respond to changes in the network topology than a routing protocol. We’re thus
fortunate that all networks have both a forwarding and a routing function!

Connection Setup in Network Layer

We just said that the network layer has two important functions, forwarding and routing. But we’ll soon see that in some
computer networks there is actually a third important network-layer function, namely, connection setup.

We know that in case of TCP, a three-way handshake is required before data can flow from sender to receiver. This allows
the sender and receiver to set up the needed state information (for example, sequence number and initial flow-control
window size). In an analogous manner, some network-layer architectures – for example, ATM, frame relay, and MPLS
require the routers along the chosen path from source to destination to handshake with each other in order to set up

state before network-layer data packets within a given source-to-destination connection can begin to flow. In the network
layer, this process is referred to as connection setup.

Unit - 5.
1. Describe slotted ALOHA channel access techniques.

ALOHA is a random access technique for packet switching. The time interval required to transmit one packet is called a
slot.

There are two ALOHA protocols as follows −

 Pure ALOHA

 Slotted ALOHA

Now let us see what Slotted ALOHA is −

Slotted ALOHA

The slotted ALOHA is explained below in stepwise manner −

Step 1 − Slotted ALOHA was introduced to improve the efficiency of pure ALOHA, because in pure ALOHA there is a high
chance of collision.

Step 2 − In this protocol, the time of the shared channel is divided into discrete intervals called as slots.

Step 3 − The stations can send a frame only at the beginning of the slot and only one frame is sent in each slot.

Step 4 − In slotted ALOHA, if any station is not able to place the frame onto the channel at the beginning of the slot i.e. it
misses the time slot then the station has to wait until the beginning of the next time slot.

Step 5 − In slotted ALOHA, there is still a possibility of collision if two stations try to send at the beginning of the same time
slot.

Step 6 − The users are restricted to transmit only from the instant corresponding to the slot boundary.

The vulnerable period is only one time slot. It is shown below −



The throughput of slotted ALOHA is given by S = G × e^(−G), where G is the average number of frames generated per slot
time. The vulnerable time period is one slot.

The throughput is maximum at G = 1:

Smax = 1/e ≈ 0.368

Therefore, in slotted ALOHA, the maximum channel utilization is about 36.8 percent.
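These throughput figures can be checked numerically; the sketch below also includes pure ALOHA's S = G·e^(−2G) for comparison with the next question:

```python
# Throughput S as a function of offered load G (frames per frame time):
# pure ALOHA peaks at G = 0.5, slotted ALOHA at G = 1.
import math

def pure_aloha(G):
    return G * math.exp(-2 * G)

def slotted_aloha(G):
    return G * math.exp(-G)

print(round(pure_aloha(0.5), 3))     # 0.184 -> 18.4% maximum efficiency
print(round(slotted_aloha(1.0), 3))  # 0.368 -> 36.8% maximum efficiency
```

Evaluating at the peak loads reproduces the 18.4% and 36.8% efficiencies quoted in the comparison table below.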

2. Compare various pure ALOHA and Slotted ALOHA.

Pure ALOHA | Slotted ALOHA
---- | ----
Any station can transmit the data at any time | Any station can transmit the data only at the beginning of a time slot
Time is continuous and not globally synchronized | Time is discrete and globally synchronized
Vulnerable time = 2 × Tt | Vulnerable time = Tt
Probability of successful transmission of a data packet = G × e^(−2G) | Probability of successful transmission of a data packet = G × e^(−G)
Maximum efficiency = 18.4% | Maximum efficiency = 36.8%
Does not reduce the number of collisions to half | Reduces the number of collisions to half, doubling the efficiency of pure ALOHA

3. What is Ethernet? Classify Types of Ethernet In detail.

Ethernet is a communication protocol created at Xerox PARC in 1973 by Robert Metcalfe and others, which connects
computers on a network over a wired connection. It is a widely used LAN protocol, which was originally known as the Alto
Aloha Network. It connects computers within local area networks and wide area networks. Numerous devices like
printers and laptops can be connected by LAN and WAN within buildings, homes, and even small neighborhoods.

It offers a simple user interface that helps to connect various devices easily, such as switches, routers, and computers. A
local area network (LAN) can be created with a single router and a few Ethernet cables, which enable communication
between all linked devices: one end of a cable is plugged into a computer's Ethernet port and the other into the router.
Ethernet ports are slightly wider than the telephone jacks they otherwise resemble.

Most Ethernet devices are backward compatible with lower-speed Ethernet cables and devices; however, the speed of
the connection will be that of the lowest common denominator. For instance, the computer will only be able to send and
receive data at 10 Mbps if you attach a computer with a 10BASE-T NIC to a 100BASE-T network. Likewise, the maximum
data transfer rate will be 100 Mbps if you connect a 100 Mbps device to a Gigabit Ethernet router.

Wireless networks have replaced Ethernet in many areas; however, Ethernet is still more common for wired networking.
Wi-Fi reduces the need for cabling, as it allows users to connect smartphones or laptops to a network without a cable.
Compared with Gigabit Ethernet, the 802.11ac Wi-Fi standard provides faster maximum data transfer rates. Still, wired
connections are more secure and less prone to interference than wireless networks. This is the main reason many
businesses and organizations still use Ethernet.

Different Types of Ethernet Networks



An Ethernet device with CAT5/CAT6 copper cables is connected to a fiber optic cable through fiber optic media converters.
This extension significantly increases the distance covered by the network. There are several kinds of Ethernet networks,
which are discussed below:

o Fast Ethernet: This type of Ethernet is usually supported by a twisted pair or CAT5 cable and can transfer or receive
data at around 100 Mbps. If a device such as a camera or laptop is connected to the network, it functions at 100Base
or 10/100Base Ethernet on the fiber side of the link. Fast Ethernet uses both fiber optic cable and twisted pair cable
to create communication. 100BASE-TX, 100BASE-FX, and 100BASE-T4 are the three categories of Fast Ethernet.

o Gigabit Ethernet: This type of Ethernet network is an upgrade from Fast Ethernet, which uses fiber optic cable and
twisted pair cable to create communication. It can transfer data at a rate of 1000 Mbps or 1Gbps. In modern times,
gigabit Ethernet is more common. This network type also uses CAT5e or other advanced cables, which can transfer
data at a rate of 10 Gbps.

The primary intention of developing gigabit Ethernet was to fulfill user requirements such as faster data transfer and
faster communication networks.

o 10-Gigabit Ethernet: This type of network can transmit data at a rate of 10 Gbit/s and is considered a more
advanced, high-speed network. It makes use of CAT6a or CAT7 twisted-pair cables as well as fiber optic cables. Using
a fiber optic cable, this network can be extended up to nearly 10,000 meters.

o Switch Ethernet: This type of network involves adding switches or hubs, which helps to improve network
throughput, as each workstation in this network can have its own dedicated 10 Mbps connection instead of sharing
the medium. Instead of a crossover cable, a regular network cable is used when a switch is used in a network. The
latest Ethernet switches support 1000 Mbps to 10 Gbps, and fast Ethernet supports 10 Mbps to 100 Mbps.

Advantages of Ethernet

o It is not much costly to form an Ethernet network. As compared to other systems of connecting computers, it is
relatively inexpensive.

o Ethernet network provides high security for data as it uses firewalls in terms of data security.

o Also, the Gigabit network allows the users to transmit data at a speed of 1-100Gbps.

o In this network, the quality of the data transfer is maintained.

o In this network, administration and maintenance are easier.

o The latest version of gigabit ethernet and wireless ethernet have the potential to transmit data at the speed of 1-
100Gbps.

Disadvantages of Ethernet

o Real-time applications need deterministic, low-latency service, which shared Ethernet does not guarantee;
therefore, it is not considered the best for real-time applications.

o The wired Ethernet network restricts you in terms of distance; it is best used over short distances.

o Creating a wired Ethernet network requires cables, hubs, switches, and routers, which increase the cost of
installation.

o It is not well suited for interactive applications that need to transfer very small amounts of data quickly.

o In an Ethernet network, no acknowledgement is sent by the receiver after accepting a packet.

o If you are planning to set up a wireless Ethernet network, it can be difficult if you have no experience in the network
field.

o Compared with a wired Ethernet network, a wireless network is less secure.

o The full-duplex data communication mode is not supported by the 100Base-T4 version.

o Additionally, finding a problem in an Ethernet network (if there is one) is very difficult, as it is not easy to determine
which node or cable is causing it.

History of Ethernet

At the beginning of the 1970s, Ethernet was developed over several years from ALOHAnet of the University of Hawaii.
Testing culminated in a scientific paper published in 1976 by Metcalfe together with David Boggs. Late in 1977, Xerox
Corporation filed a patent on this technology.

The Ethernet standard was established by Xerox, Intel, and Digital Equipment Corporation (DEC); these companies
combined to improve Ethernet in 1979 and published the first standard in 1980. Other technologies, including the
CSMA/CD protocol, were also developed through this process, which later became known as IEEE 802.3. This process also
led to the creation of token bus (802.4) and token ring (802.5).

In 1983, the IEEE technology became a standard: 802.3 was born, before 802.11. Many modern PCs started to include
Ethernet cards on the motherboard, as the invention of single-chip Ethernet controllers made the Ethernet card very
inexpensive. Consequently, some small companies began using Ethernet networks in the workplace, still with the help of
telephone-based four-wire lines.

Ethernet connections through twisted pair and fiber optic cables were not established until the early 1990s. That led to
the development of the 100 Mbps standard in 1995.

Ethernet standards

There are different standards of Ethernet, which are discussed below with additional information about each of them.

Ethernet II / DIX / 802.3

Ethernet II, a revised edition of Ethernet, is also called DIX; DIX stands for Digital, Intel, and Xerox. 802.3 is the version
rewritten as a standard from the work of Digital Equipment Corp., Xerox, and Intel.

Fast Ethernet / 100BASE-T / 802.3u

Fast Ethernet (100BASE-T or 802.3u) is a communications protocol, which is usually supported by a twisted pair or CAT5
cable.

100BASE-T is the first Fast Ethernet standard that makes use of CSMA/CD.

Three different kinds of cable technology are available with 100BASE-T:

1. 100BASE-T4: It is utilized for a network that requires a low-quality twisted-pair on a 100-Mbps Ethernet.

2. 100BASE-TX: It makes use of two-pair data-grade twisted-pair wire, developed by ANSI; it is also called 100BASE-X.

3. 100BASE-FX: It uses two strands of fiber cable and was developed by ANSI.

Gigabit Ethernet / 1000BASE-T / 802.3z / 802.3ab

Gigabit Ethernet, also called 1000BASE-T or 802.3z/802.3ab, can transmit data at up to 1 Gbps and makes use of all four
copper wire pairs in Category 5 cable.

10 Gigabit Ethernet / 802.3ae

10 Gigabit Ethernet (10GE or 10 GbE or 10 GigE) is a newer standard that defines only full-duplex point-to-point links. It
supports transmission rates of up to 10 Gb/s and was published in 2002; it is also known as 802.3ae. Hubs, CSMA/CD,
and half-duplex operation do not exist in 10 GbE.

How to connect or plug in an Ethernet cable

The process is the same whether you are connecting an Ethernet cable to your computer or setting up a home network.
The Ethernet port looks like a large telephone cord jack. Once you have located it, push the cable connector into the port
until you hear a click. If the connection is properly established on the other end, a green light indicates that a signal is
found.

Why is Ethernet used?

Ethernet is still a common form of network connection, which is used for its high speed, security, and reliability. It is used
to connect devices in a network that is used by specific organizations for local networks, organizations such as school
campuses and hospitals, company offices, etc.

Ethernet initially grew popular due to its low price compared with competing technologies such as IBM's Token Ring. As
network technology gradually advanced, Ethernet ensured its sustained popularity by delivering higher levels of
performance while maintaining backward compatibility. In the mid-1990s, Ethernet's original ten megabits per second
increased to 100 Mbps. Furthermore, current versions of Ethernet can support up to 400 gigabits per second.

How Ethernet Works

In the OSI model, Ethernet resides in the lower layers, facilitating the operation of the physical and data link layers. The
Open Systems Interconnection model has seven layers, which are as follows:

o Physical layer

o Data link layer

o Network layer

o Transport layer

o Session layer

o Presentation layer

o Application layer

The application layer is the topmost layer; it enables users to download and access data from a mail client or a web
browser. Users enter their queries via the application; the request is then sent to the next layer, where it is known as a
"packet." The packet contains information about the sender and the destination web address. The packet is transmitted
down from the application layer until it reaches the bottom layer, where it is placed in an Ethernet frame. The layer
closest to your device is the first, or bottom, layer.

4. Define framing and show how framing works.

Frames are the units of digital transmission, particularly in computer networks and telecommunications. Frames are
comparable to the packets of energy called photons in the case of light energy. Frames are also used continuously in the
Time Division Multiplexing process.

In a point-to-point connection between two computers or devices, data is transmitted over a wire as a stream of bits.
However, these bits must be framed into discernible blocks of information. Framing is a function of the data link
layer: it provides a way for a sender to transmit a set of bits that are meaningful to the receiver. Ethernet, Token
Ring, Frame Relay, and other data link layer technologies each have their own frame structures. Frames have headers
that contain information such as error-checking codes.

The data link layer takes the message from the sender and delivers it to the receiver by attaching the sender's and
receiver's addresses. The advantage of using frames is that data is broken up into recoverable chunks that can easily
be checked for corruption.

Problems in Framing –

 Detecting start of the frame: When a frame is transmitted, every station must be able to detect it. Station detects
frames by looking out for a special sequence of bits that marks the beginning of the frame i.e. SFD (Starting Frame
Delimiter).

 How does the station detect a frame: Every station listens to the link for the SFD pattern through a sequential
circuit. If the SFD is detected, the sequential circuit alerts the station, which then checks the destination address
to accept or reject the frame.

 Detecting end of frame: When to stop reading the frame.

Types of framing – There are two types of framing:

1. Fixed size – The frame is of fixed size, so there is no need to provide boundaries to the frame; the length of the
frame itself acts as a delimiter.

 Drawback: It suffers from internal fragmentation if the data size is less than the frame size.

 Solution: Padding

2. Variable size – In this case, the end of one frame and the beginning of the next must be defined so that frames can
be distinguished. This can be done in two ways:

1. Length field – We can introduce a length field in the frame to indicate the length of the frame. Used
in Ethernet(802.3). The problem with this is that sometimes the length field might get corrupted.

2. End Delimiter (ED) – We can introduce an ED(pattern) to indicate the end of the frame. Used in Token Ring. The
problem with this is that ED can occur in the data. This can be solved by:

1. Character/Byte Stuffing: Used when frames consist of characters. If the data contains the ED, a byte is stuffed
into the data to differentiate it from the ED.

Let ED = “$” –> if the data contains ‘$’ anywhere, it can be escaped using the ‘\O’ character.
–> if the data contains ‘\O$’, then use ‘\O\O\O$’ ($ is escaped using \O and \O is escaped using \O).

Disadvantage – This method is costly and is now obsolete.

2. Bit Stuffing: Let ED = 01111 and data = 01111

–> The sender stuffs a bit to break the pattern, i.e. here inserts a 0 in the data: 011101.

–> The receiver receives the frame.

–> If the data contains 011101, the receiver removes the stuffed 0 and reads the original data.

Examples –

 If Data –> 011100011110 and ED –> 0111 then, find data after bit stuffing?

–> 011010001101100

 If Data –> 110001001 and ED –> 1000 then, find data after bit stuffing?

–> 11001010011
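The worked examples above can be checked with a short sketch. This is an illustrative implementation, not taken from the source: it assumes the stuffing rule implied by the examples, namely that the sender appends the complement of the ED's last bit whenever the first len(ED)−1 bits of the ED appear in the outgoing stream.

```python
def bit_stuff(data: str, ed: str) -> str:
    """Bit-stuff `data` so the end delimiter `ed` can never appear in it.

    Assumed rule (matching the examples above): whenever the output ends
    with the first len(ed)-1 bits of the ED, append the complement of the
    ED's final bit to break the pattern.
    """
    prefix = ed[:-1]
    stuff_bit = "0" if ed[-1] == "1" else "1"
    out = ""
    for bit in data:
        out += bit
        if out.endswith(prefix):
            out += stuff_bit  # stuffed bit breaks the delimiter pattern
    return out

print(bit_stuff("011100011110", "0111"))  # 011010001101100
print(bit_stuff("110001001", "1000"))     # 11001010011
```

Running the sketch reproduces both answers given above, as well as the 01111 → 011101 mini example.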

5. Explain Ethernet frame structure.

The basic frame format required for all MAC implementations is defined in the IEEE 802.3 standard, though several
optional formats are used to extend the protocol's basic capability.
An Ethernet frame starts with the Preamble and SFD, both of which work at the physical layer. The Ethernet header
contains the Source and Destination MAC addresses, after which the payload of the frame is present. The last field is
the CRC, which is used to detect errors. Now, let's study each field of the basic frame format.

Ethernet (IEEE 802.3) Frame Format –

 PREAMBLE – An Ethernet frame starts with a 7-byte Preamble. This is a pattern of alternating 0s and 1s that
indicates the start of the frame and allows the sender and receiver to establish bit synchronization. The Preamble
(PRE) was originally introduced to allow for the loss of a few bits due to signal delays, though today's high-speed
Ethernet no longer needs the Preamble to protect the frame bits.
The PRE tells the receiver that a frame is coming and allows the receiver to lock onto the data stream before the
actual frame begins.

 Start of Frame Delimiter (SFD) – This is a 1-byte field that is always set to 10101011. The SFD indicates that the
upcoming bits are the start of the frame, beginning with the destination address. The SFD is sometimes considered part
of the PRE, which is why the Preamble is described as 8 bytes in many places. The SFD warns the station or stations
that this is the last chance for synchronization.

 Destination Address – This is a 6-byte field that contains the MAC address of the machine for which the data is
destined.

 Source Address – This is a 6-byte field that contains the MAC address of the source machine. Because the Source
Address is always an individual (unicast) address, the least significant bit of its first byte is always 0.

 Length – Length is a 2-byte field that indicates the length of the entire Ethernet frame. This 16-bit field can hold
values from 0 to 65535, but the length cannot be larger than 1500 because of limitations of Ethernet itself.

 Data – This is where the actual data, also known as the Payload, is inserted. If the Internet Protocol is used over
Ethernet, both the IP header and the data are placed here. The data may be as long as 1500 bytes. If the data length
is less than the minimum of 46 bytes, padding 0s are added to reach the minimum length.

 Cyclic Redundancy Check (CRC) – CRC is a 4-byte field containing a 32-bit hash code of the data, generated over the
Destination Address, Source Address, Length, and Data fields. If the checksum computed by the destination does not
match the checksum value sent, the received data is corrupted.

Note – The size of an Ethernet IEEE 802.3 frame varies from 64 bytes to 1518 bytes, including a data length of 46 to 1500 bytes.
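The field layout above can be made concrete with a short Python sketch that packs a simplified 802.3-style frame (omitting the physical-layer Preamble and SFD) and pads short payloads to the 46-byte minimum. This is an illustration only: it uses `zlib.crc32` as the FCS, whereas real hardware computes and transmits the CRC with a specific bit ordering that this sketch does not reproduce.

```python
import struct
import zlib

def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
    """Pack a simplified IEEE 802.3 frame: DA(6) SA(6) Length(2) Data FCS(4)."""
    if len(payload) < 46:                       # pad to the 46-byte minimum
        payload = payload + b"\x00" * (46 - len(payload))
    header = dst + src + struct.pack("!H", len(payload))
    fcs = struct.pack("!I", zlib.crc32(header + payload))
    return header + payload + fcs

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello")
print(len(frame))  # 64 -- the minimum Ethernet frame size
```

A 5-byte payload is padded to 46 bytes, so the whole frame comes to 6 + 6 + 2 + 46 + 4 = 64 bytes, the minimum size quoted in the note above.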

Brief overview on Extended Ethernet Frame (Ethernet II Frame) :

The standard IEEE 802.3 basic frame format is discussed above in detail. Now let's look at the extended Ethernet frame
header, with which the payload can be even larger than 1500 bytes.

DA   [Destination MAC Address]                : 6 bytes
SA   [Source MAC Address]                     : 6 bytes
Type [0x8870 (Ethertype)]                     : 2 bytes
DSAP [802.2 Destination Service Access Point] : 1 byte
SSAP [802.2 Source Service Access Point]      : 1 byte
Ctrl [802.2 Control Field]                    : 1 byte
Data [Protocol Data]                          : > 46 bytes
FCS  [Frame Checksum]                         : 4 bytes

Although the length field is missing in an Ethernet II frame, the frame length is known by virtue of the frame being
accepted by the network interface.

6. Explain bit stuffing and byte stuffing.

What are byte stuffing and bit stuffing?

Byte stuffing is a mechanism to convert a message formed of a sequence of bytes that may contain reserved values such
as frame delimiter, into another byte sequence that does not contain the reserved values.

Bit stuffing is the mechanism of inserting one or more non-information bits into a message to be transmitted, to break up
the message sequence, for synchronization purpose.

Purposes of byte stuffing and bit stuffing

In the data link layer, the stream of bits from the physical layer is divided into data frames. Data frames can be of
fixed or variable length. In variable-length framing, the size of each frame to be transmitted may differ, so a
pattern of bits is used as a delimiter to mark the end of one frame and the beginning of the next. However, if that
pattern occurs in the message itself, a mechanism needs to be incorporated to avoid this situation.

The two common approaches are −

 Byte - Stuffing − A byte is stuffed in the message to differentiate from the delimiter. This is also called character-
oriented framing.

 Bit - Stuffing − One or more bits are stuffed into the message to differentiate it from the delimiter. This is also
called bit-oriented framing.

Data link layer frames in byte stuffing and bit stuffing

A data link frame has the following parts −

 Frame Header − It contains the source and the destination addresses of the frame.

 Payload field − It contains the message to be delivered. In bit stuffing it is a variable sequence of bits, while in byte
stuffing it is a variable sequence of data bytes.

 Trailer − It contains the error detection and error correction bits.

 Flags − Flags are the frame delimiters signalling the start and end of the frame. In bit stuffing, the flag is a bit
pattern that defines the beginning and end of the frame; it is generally 8 bits long and comprises six consecutive 1s.
In byte stuffing, the flag is a 1-byte, protocol-dependent special character.

Mechanisms of byte stuffing versus bit stuffing

Byte Stuffing Mechanism

If the pattern of the flag byte is present in the message byte sequence, there must be a strategy so that the receiver
does not take that pattern as the end of the frame. Here, a special byte called the escape character (ESC) is stuffed
before every byte in the message that has the same pattern as the flag byte. If the ESC byte itself occurs in the
message, another ESC byte is stuffed before it.
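A minimal sketch of this escape mechanism follows. The HDLC-style FLAG (0x7E) and ESC (0x7D) byte values are assumed here purely for illustration; the mechanism works with any protocol-defined pair.

```python
FLAG, ESC = b"\x7e", b"\x7d"

def byte_stuff(payload: bytes) -> bytes:
    """Stuff ESC before every FLAG or ESC byte, then add framing flags."""
    out = bytearray()
    for b in payload:
        if bytes([b]) in (FLAG, ESC):
            out += ESC           # escape the reserved byte
        out.append(b)
    return FLAG + bytes(out) + FLAG

def byte_unstuff(frame: bytes) -> bytes:
    """Strip the flags and drop each stuffed ESC byte."""
    body, out, skip = frame[1:-1], bytearray(), False
    for b in body:
        if not skip and bytes([b]) == ESC:
            skip = True          # next byte is data, not a delimiter
            continue
        out.append(b)
        skip = False
    return bytes(out)

msg = b"data \x7e with \x7d specials"
print(byte_unstuff(byte_stuff(msg)) == msg)  # True
```

The round trip restores the original bytes, which is the whole point: the receiver never mistakes a data byte for the frame delimiter.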

Bit Stuffing Mechanism

Here, the delimiting flag sequence contains six consecutive 1s; most protocols use the 8-bit pattern 01111110 as the
flag. To differentiate the message from the flag when the same sequence occurs, a single bit is stuffed into the
message: whenever five consecutive 1 bits follow a 0 in the message, an extra 0 bit is stuffed after the five 1s. When
the receiver receives the message, it removes the stuffed 0 after each sequence of five 1s. The un-stuffed message is
then passed to the upper layers.
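The five-1s rule can be sketched as a sender/receiver pair (illustrative code, not taken from any particular protocol stack):

```python
def stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, ones = "", 0
    for b in bits:
        out += b
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out += "0"   # stuffed bit breaks the run
            ones = 0
    return out

def unstuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, ones, i = "", 0, 0
    while i < len(bits):
        out += bits[i]
        ones = ones + 1 if bits[i] == "1" else 0
        i += 1
        if ones == 5:
            i += 1       # skip the stuffed 0
            ones = 0
    return out

print(stuff("0111110"))  # 01111100
```

Because the stuffed 0 is inserted after every fifth 1, the flag pattern 01111110 (six 1s in a row) can never appear inside the stuffed payload, and the receiver's unstuffing step recovers the original bits exactly.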

7. What is CSMA? Discuss the types of CSMA.

This method was developed to decrease the chance of collisions when two or more stations start sending their signals
over the shared medium. Carrier Sense Multiple Access requires that each station first check the state of the medium
before sending.

Vulnerable Time:

Vulnerable time = Propagation time (Tp)

The persistence methods can be applied to help the station take action when the channel is busy/idle.

1. Carrier Sense Multiple Access with Collision Detection (CSMA/CD):

In this method, a station monitors the medium after it sends a frame to see if the transmission was successful. If
successful, the transmission is finished, if not, the frame is sent again.

In the diagram, A starts sending the first bit of its frame at t1, and since C sees the channel idle at t2, it starts
sending its frame at t2. C detects A's frame at t3 and aborts its transmission; A detects C's frame at t4 and aborts
its transmission. The transmission time for C's frame is therefore t3 − t2, and for A's frame it is t4 − t1.

So, the frame transmission time (Tfr) should be at least twice the maximum propagation time (Tp). This can be deduced
by considering the case when the two stations involved in a collision are the maximum distance apart.
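The condition Tfr ≥ 2·Tp fixes a minimum frame size for a given bandwidth and cable length. A small illustrative calculation follows; the 10 Mbps rate, 2500 m distance, and 2×10⁸ m/s propagation speed are assumed example values, not Ethernet's exact standardized parameters.

```python
def min_frame_bits(bandwidth_bps: float, distance_m: float,
                   prop_speed_mps: float = 2e8) -> float:
    """Smallest frame (in bits) whose transmission time is >= 2 * Tp."""
    tp = distance_m / prop_speed_mps   # one-way propagation time
    return bandwidth_bps * 2 * tp      # Tfr = bits / bandwidth >= 2 * Tp

# e.g. 10 Mbps over a 2500 m segment:
print(min_frame_bits(10e6, 2500))  # ~250 bits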

Process: The entire process of collision detection follows the CSMA/CD flowchart: sense the medium, transmit while
monitoring for a collision, and on collision send a jam signal, back off, and retry.

Throughput and Efficiency: The throughput of CSMA/CD is much greater than that of pure or slotted ALOHA.

 For the 1-persistent method, throughput is 50% when G=1.

 For the non-persistent method, throughput can go up to 90%.

2. Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) –

Collision detection requires that a station be able to receive while transmitting, so it can detect a collision caused
by other stations. In wired networks, when a collision occurs the energy of the received signal almost doubles, so the
station can sense the possibility of a collision. In wireless networks, however, most of the energy is used for
transmission, and the energy of the received signal increases by only 5-10% if a collision occurs, which a station
cannot reliably use to sense a collision. Therefore CSMA/CA has been specially designed for wireless networks.

These are three types of strategies:

1. InterFrame Space (IFS): When a station finds the channel busy, it keeps sensing the channel; when it finds the
channel idle, it waits for a period of time called the IFS before transmitting. The IFS can also be used to define
the priority of a station or a frame: the higher the IFS, the lower the priority.

2. Contention Window: This is an amount of time divided into slots. A station that is ready to send frames chooses a
random number of slots as its wait time.

3. Acknowledgments: Positive acknowledgments and a time-out timer help guarantee successful transmission of the
frame.
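The contention-window idea is usually paired with binary exponential backoff, in which the window of slots doubles after each failed attempt. A hedged sketch (the cw_min and cw_max values here are assumed for illustration, not taken from any standard):

```python
import random

def backoff_slots(attempt: int, cw_min: int = 4, cw_max: int = 1024) -> int:
    """Pick a random wait (in slots) from a contention window that
    doubles with every failed attempt, capped at cw_max."""
    cw = min(cw_min * (2 ** attempt), cw_max)
    return random.randrange(cw)   # uniform over [0, cw - 1]

# Successive failed attempts draw from ever-larger windows:
print([backoff_slots(a) for a in range(5)])
```

Doubling the window spreads retries of repeatedly colliding stations over more slots, which lowers the chance they collide again.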

Process: The entire process of collision avoidance follows the CSMA/CA flowchart: sense the medium, wait the IFS,
count down a random backoff within the contention window, transmit, and wait for an acknowledgment.

Types of CSMA Access Modes:

There are 4 types of access modes available in CSMA. These are also referred to as 4 different types of CSMA
protocols, which decide when to start sending data across a shared medium.

1. 1-Persistent: The station senses the shared channel first and delivers the data right away if the channel is idle.
If not, it continuously monitors the channel until it becomes idle and then transmits the frame unconditionally as
soon as it does. It is an aggressive transmission algorithm.

2. Non-Persistent: The station first senses the channel before transmitting data; if the channel is idle, it transmits
right away. If not, the station waits for a random amount of time (rather than monitoring continuously), and when it
then finds the channel idle, it sends the frame.

3. P-Persistent: This combines the 1-Persistent and Non-Persistent modes. Each node observes the channel, and if the
channel is idle, it sends a frame with probability p. If the frame is not transmitted (which happens with probability
q = 1 − p), the node waits for the next time slot and repeats the process.

4. O-Persistent: A supervisory node gives each node a transmission order. Nodes wait for their time slot according to
their allocated transmission sequence when the transmission medium is idle.
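The p-persistent decision loop from mode 3 can be sketched like this. It is illustrative only: `is_idle` is a hypothetical callback standing in for carrier sensing, and the slot-by-slot model ignores collisions.

```python
import random

def p_persistent_send_slot(is_idle, p: float, rng=None) -> int:
    """Return the slot index in which a p-persistent station transmits.

    In each slot: if the channel is busy, keep waiting; if it is idle,
    transmit with probability p, otherwise defer to the next slot.
    """
    rng = rng or random.Random()
    slot = 0
    while True:
        if is_idle(slot) and rng.random() < p:
            return slot
        slot += 1

# With p = 1 and an always-idle channel, the station sends immediately:
print(p_persistent_send_slot(lambda s: True, 1.0))  # 0
```

Lowering p makes each station defer more often, which reduces the chance that several ready stations all transmit in the same slot.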
