SNMP:
o SNMP stands for Simple Network Management Protocol.
o SNMP is a framework used for managing devices on an internet (a TCP/IP
network).
o It provides a set of operations for monitoring and managing internet devices.
SNMP Concept
Management Components
o Management is not achieved through the SNMP protocol alone but also through
other protocols that cooperate with it. Management is achieved through the
use of two other protocols: SMI (Structure of Management Information) and
MIB (Management Information Base).
o Management is a combination of SMI, MIB, and SNMP. All three rely on other
standards such as Abstract Syntax Notation One (ASN.1) and the Basic
Encoding Rules (BER).
SMI
MIB
o The MIB (Management Information Base) is the second component of network
management.
o Each agent has its own MIB, which is a collection of all the objects that the
manager can manage. The MIB is categorized into eight groups: system,
interface, address translation, ip, icmp, tcp, udp, and egp. These groups are
under the mib object.
SNMP
Trap: The Trap message is sent from an agent to the manager to report an event.
For example, if the agent is rebooted, then it informs the manager as well as sends
the time of rebooting.
EMAIL:
E-mail is defined as the transmission of messages on the Internet. It is one of the
most commonly used features over communications networks that may contain
text, files, images, or other attachments. Generally, it is information that is
stored on a computer sent through a network to a specified individual or group
of individuals.
Email messages are conveyed through email servers using multiple protocols
within the TCP/IP suite. For example, SMTP (Simple Mail Transfer Protocol) is
used to send messages, whereas other protocols such as IMAP or POP are used
to retrieve messages from a mail server. To log in to your mail account, you just
need to enter a valid email address and password; the mail servers are then used
to send and receive messages.
Most webmail services configure your mail account automatically, so you are
only required to enter your email address and password. However, you may need
to configure each account manually if you use an email client like Microsoft
Outlook or Apple Mail. In addition to entering the email address and password,
you may also need to enter the incoming and outgoing mail servers and the
correct port numbers for each one.
The original email standard was only capable of supporting plain-text messages;
email was later extended to support rich text with custom formatting. In
modern times, email supports HTML (Hypertext Markup Language), which
allows emails to use the same formatting as websites. An
email that supports HTML can contain links, images, CSS layouts, and also can
send files or "email attachments" along with messages. Most of the mail servers
enable users to send several attachments with each message. The attachments
were typically limited to one megabyte in the early days of email. Still,
nowadays, many mail servers are able to support email attachments of 20
megabytes or more in size.
TFTP:
TFTP stands for the Trivial File Transfer Protocol.
Some applications do not need the full functionality of TCP, nor can they
afford its complexity.
TFTP provides an inexpensive mechanism that does not require complex
interactions between the client and the server.
TFTP confines operations to simple file transfer and does not support
authentication.
The benefit of using TFTP is that it allows bootstrapping code to use the
same underlying TCP/IP protocols that the operating system uses once it
starts execution.
Thus it is possible for a device to bootstrap from a server on another
physical network.
TFTP does not rely on a reliable stream transport service.
It runs on top of UDP or any other unreliable packet delivery system,
using timeout and retransmission to ensure that data arrives.
The sending side transmits a file in fixed-size blocks and awaits each
block's acknowledgement before sending the next.
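The stop-and-wait block transfer described above can be sketched as follows. This is a minimal in-memory simulation, not a real TFTP implementation: actual TFTP runs over UDP with numbered DATA/ACK packets, timeouts, and retransmission. The 512-byte block size matches the protocol; the helper names are illustrative.

```python
# Sketch of TFTP-style stop-and-wait transfer (illustrative, in-memory).
BLOCK_SIZE = 512  # TFTP's fixed data block size

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Split a file into fixed-size blocks; a final short (or empty)
    block signals the end of the transfer, as in TFTP."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    if not blocks or len(blocks[-1]) == block_size:
        blocks.append(b"")  # zero-length block marks the end
    return blocks

def transfer(data: bytes) -> bytes:
    """Sender transmits one block, waits for its ACK, then sends the next."""
    received = b""
    for block_num, block in enumerate(split_into_blocks(data), start=1):
        received += block          # receiver stores the block...
        ack = block_num            # ...and acknowledges its number
        assert ack == block_num    # sender proceeds only after the ACK
    return received
```

Because each block must be acknowledged before the next is sent, only one block is ever "in flight", which keeps the protocol trivially simple at the cost of throughput.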
Features of TFTP
WWW
World Wide Web, which is also known as a Web, is a collection of
websites or web pages stored in web servers and connected to local
computers through the internet.
These websites contain text pages, digital images, audios, videos, etc.
Users can access the content of these sites from any part of the world over
the internet using their devices such as computers, laptops, cell phones,
etc.
The WWW, along with the Internet, enables the retrieval and display of text
and media on your device.
The browser displays a web page on the client machine; when the user
clicks on a line of text that is linked to a page on abd.com, the browser
follows the hyperlink by sending a message to the abd.com server asking
for that page.
Working of WWW:
The World Wide Web is based on several different technologies: Web
browsers, Hypertext Markup Language (HTML) and Hypertext Transfer
Protocol (HTTP).
A Web browser is used to access web pages.
Web browsers can be defined as programs which display text, data,
pictures, animation and video on the Internet.
Hyperlinked resources on the World Wide Web can be accessed using
software interfaces provided by Web browsers.
Initially, Web browsers were used only for surfing the Web but now
they have become more universal.
Web browsers can be used for several tasks including conducting
searches, mailing, transferring files, and much more.
Some of the commonly used browsers are Internet Explorer, Opera
Mini, and Google Chrome.
Features of WWW:
HyperText Information System
Cross-Platform
Distributed
Open Standards and Open Source
Uses Web Browsers to provide a single interface for many services
Dynamic, Interactive and Evolving.
FIREWALLS(APPLICATION FIREWALLS)
An application firewall is a type of firewall that governs traffic to, from, or
by an application or service.
QOS Concepts
The QOS concepts are explained below−
Congestion Management
The bursty nature of data traffic sometimes causes traffic to exceed the speed
of a connection. QoS allows a router to put packets into different queues.
Service-specific queues are then served according to priority, rather than
buffering all traffic in a single queue and letting the first packet in be the
first packet out.
Queue Management
The queues in a buffer can fill and overflow. A packet is dropped if it arrives
at a queue that is full, and the router cannot prevent it from being dropped
even if it is a high-priority packet. This is referred to as tail drop.
Link Efficiency
Low-speed links are bottlenecks for small packets. The serialization delay
caused by large packets forces the small packets to wait longer. The
serialization delay is the time taken to put a packet onto the connection.
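Serialization delay follows directly from the definition above: packet size in bits divided by the link rate. A minimal sketch, using assumed example link speeds:

```python
def serialization_delay(packet_bytes: int, link_bps: float) -> float:
    """Time to clock a packet onto the link: size in bits / link rate."""
    return packet_bytes * 8 / link_bps

# A 1500-byte packet on a 64 kbps link vs. a 100 Mbps link:
slow = serialization_delay(1500, 64_000)       # 0.1875 s
fast = serialization_delay(1500, 100_000_000)  # 0.00012 s
```

On the slow link, a small voice packet queued behind one 1500-byte packet waits almost 0.19 s, which is why low-speed links need link-efficiency mechanisms.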
Elimination of overhead bits
It can also increase efficiency by removing too many overhead bits.
Traffic shaping and policing
Shaping can prevent the overflow problem in buffers by limiting the full
bandwidth potential of an application's packets. In many network topologies,
a high-bandwidth link connected to a low-bandwidth link at a remote site can
overflow the low-bandwidth connection.
Therefore, shaping is used to provide the traffic flow from the high bandwidth
link closer to the low bandwidth link to avoid the low bandwidth link's overflow.
Policing discards traffic that exceeds the configured rate, whereas shaping
buffers it.
Performance of a Network
Performance of a network pertains to the measure of service quality of a
network as perceived by the user. There are different ways to measure the
performance of a network, depending upon the nature and design of the
network. The characteristics that measure the performance of a network are:
Bandwidth
Throughput
Latency (Delay)
Bandwidth – Delay Product
Jitter
BANDWIDTH
One of the most essential factors in a website's performance is the
amount of bandwidth allocated to the network. Bandwidth determines
how rapidly the web server is able to upload the requested information.
While there are different factors to consider with respect to a site’s
performance, bandwidth is every now and again the restricting element.
Bandwidth is characterized as the measure of data or information that
can be transmitted in a fixed measure of time. The term can be used in
two different contexts with two distinctive estimating values. In the case
of digital devices, the bandwidth is measured in bits per second(bps) or
bytes per second. In the case of analogue devices, the bandwidth is
measured in cycles per second, or Hertz (Hz).
Bandwidth is only one component of what an individual perceives as the
speed of a network. People frequently mistake bandwidth for internet speed
because internet service providers (ISPs) tend to claim that they have a fast
“40Mbps connection” in their advertising campaigns. True internet speed is
actually the amount of data you receive every second, and that has a lot to do
with latency too.
“Bandwidth” means “Capacity” and “Speed” means “Transfer rate”.
THROUGHPUT
Throughput is the number of messages successfully transmitted per unit time.
It is controlled by available bandwidth, the available signal-to-noise ratio and
hardware limitations. The theoretical maximum throughput of a network is
often higher than the actual throughput achieved in everyday use. The terms
‘throughput’ and ‘bandwidth’ are often thought of as
the same, yet they are different. Bandwidth is the potential measurement of a
link, whereas throughput is an actual measurement of how fast we can send
data.
LATENCY
In a network, during the process of data communication, latency(also known
as delay) is defined as the total time taken for a complete message to arrive at
the destination, starting with the time when the first bit of the message is sent
out from the source and ending with the time when the last bit of the message
is delivered at the destination. The network connections where small delays
occur are called “Low-Latency-Networks” and the network connections which
suffer from long delays are known as “High-Latency-Networks”.
High latency leads to the creation of bottlenecks in any network
communication. It stops the data from taking full advantage of the network
pipe and conclusively decreases the bandwidth of the communicating network.
The effect of the latency on a network’s bandwidth can be temporary or never-
ending depending on the source of the delays. Latency is also known as a ping
rate and is measured in milliseconds(ms).
In simpler terms: latency may be defined as the time required to successfully
send a packet across a network.
JITTER
Jitter is another performance issue related to delay. In technical terms, jitter
is “packet delay variation”. Simply put, jitter is considered a problem when
different packets of data face different delays in a network and the data at the
receiver application is time-sensitive, i.e. audio or video data.
Jitter is measured in milliseconds(ms). It is defined as an interference in the
normal order of sending data packets. For example: if the delay for the first
packet is 10 ms, for the second is 35 ms, and for the third is 50 ms, then the
real-time destination application that uses the packets experiences jitter.
Simply, jitter is any deviation in, or displacement of, the signal pulses in a
high-frequency digital signal. The deviation can be in connection with the
amplitude, the width of the signal pulse or the phase timing. The major causes
of jitter are electromagnetic interference(EMI) and crosstalk between signals.
Jitter can lead to flickering of a display screen, affects the capability of a
processor in a desktop or server to proceed as expected, introducing clicks or
other undesired impacts in audio signals, and loss of transmitted data between
network devices.
Jitter is undesirable; it often results from network congestion and can lead to
packet loss.
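One simple way to quantify jitter for the example above (delays of 10 ms, 35 ms, and 50 ms) is the mean absolute variation between consecutive packet delays. This is one common definition, not the only one (RFC 3550, for instance, uses a smoothed estimator):

```python
def jitter(delays_ms):
    """Mean absolute variation between consecutive packet delays (ms)."""
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

jitter([10, 35, 50])  # (25 + 15) / 2 = 20.0 ms
jitter([12, 12, 12])  # 0.0 -> constant delay means no jitter
```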
Applications of UDP:
Used for simple request-response communication when the amount of
data is small and there is less concern about flow and error
control.
It is a suitable protocol for multicasting as UDP supports packet
switching.
UDP is used for some routing update protocols like RIP(Routing
Information Protocol).
Normally used for real-time applications which can not tolerate
uneven delays between sections of a received message.
Following implementations uses UDP as a transport layer protocol:
NTP (Network Time Protocol)
DNS (Domain Name System)
BOOTP, DHCP.
NNP (Network News Protocol)
Quote of the day protocol
TFTP, RTSP, RIP.
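Part of why UDP suits these lightweight applications is its tiny 8-byte header: just source port, destination port, length, and checksum, each a 16-bit big-endian field. A sketch of building such a header with the standard library (checksum left at 0, meaning "not computed", which IPv4 permits):

```python
import struct

def build_udp_header(src_port: int, dst_port: int,
                     payload: bytes, checksum: int = 0) -> bytes:
    """Pack the four 16-bit big-endian UDP header fields.
    The length field covers header (8 bytes) plus payload."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = build_udp_header(12345, 53, b"query")  # e.g. a DNS query datagram
```

Compare this with TCP's 20-60 byte header below: the absence of sequence numbers, ACK fields, and flags is exactly what makes UDP cheap but unreliable.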
The services provided by the transport layer are similar to those of the data link
layer. The data link layer provides the services within a single network while the
transport layer provides the services across an internetwork made up of many
networks. The data link layer controls the physical layer while the transport layer
controls all the lower layers.
The services provided by the transport layer protocols can be divided into
five categories:
o End-to-end delivery
o Addressing
o Reliable delivery
o Flow control
o Multiplexing
End-to-end delivery:
The transport layer transmits the entire message to the destination. Therefore, it
ensures the end-to-end delivery of an entire message from a source to the
destination.
Reliable delivery:
The transport layer provides reliability services by retransmitting the lost and
damaged packets.
o Error control
o Sequence control
o Loss control
o Duplication control
Error Control
o The primary role of reliability is error control. In reality, no transmission
is ever 100 percent error-free. Therefore, transport layer protocols are
designed to provide error-free delivery.
o The data link layer also provides the error handling mechanism, but it
ensures only node-to-node error-free delivery. However, node-to-node
reliability does not ensure the end-to-end reliability.
o The data link layer checks for the error between each network. If an error
is introduced inside one of the routers, then this error will not be caught by
the data link layer. It only detects those errors that have been introduced
between the beginning and end of the link. Therefore, the transport layer
performs the checking for the errors end-to-end to ensure that the packet
has arrived correctly.
Sequence Control
Flow Control
Flow control is used to prevent the sender from overwhelming the receiver. If the
receiver is overloaded with too much data, then the receiver discards the packets
and asks for their retransmission. This increases network congestion and thus
reduces the system performance. The transport layer is responsible for
flow control. It uses the sliding window protocol that makes the data transmission
more efficient as well as it controls the flow of data so that the receiver does not
become overwhelmed. Sliding window protocol is byte oriented rather than frame
oriented.
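The byte-oriented sliding window described above can be sketched with a toy sender. This is a deliberately simplified model under ideal assumptions: the receiver acknowledges each burst immediately and nothing is lost, so the window simply slides forward after every burst.

```python
def sliding_window_send(data: bytes, window: int):
    """Sender may have at most `window` unacknowledged bytes in flight.
    Returns the size of each burst sent (assumes instant ACKs, no loss)."""
    sent, bursts = 0, []
    while sent < len(data):
        burst = data[sent:sent + window]   # limited by the advertised window
        bursts.append(len(burst))
        sent += len(burst)                 # ACK received; window slides on
    return bursts

sliding_window_send(b"x" * 10, window=4)  # bursts of [4, 4, 2] bytes
```

In real TCP, the receiver advertises its window in every ACK, so the window size can shrink or grow mid-transfer as the receiver's buffer fills and drains.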
Addressing
o According to the layered model, the transport layer interacts with the
functions of the session layer. Many protocols combine session,
presentation, and application layer protocols into a single layer known as
the application layer. In these cases, delivery to the session layer means the
delivery to the application layer. Data generated by an application on one
machine must be transmitted to the correct application on another machine.
In this case, addressing is provided by the transport layer.
o The transport layer provides the user address which is specified as a station
or port. The port variable represents a particular TS user of a specified
station known as a Transport Service access point (TSAP). Each station
has only one transport entity.
o The transport layer protocols need to know which upper-layer protocols
are communicating.
ELEMENTS OF TRANSPORT LAYER
Types of Service
The transport layer also determines the type of service provided to the users from
the session layer. An error-free point-to-point communication to deliver
messages in the order in which they were transmitted is one of the key functions
of the transport layer.
Error Control
Error detection and error recovery are an integral part of reliable service, and
therefore it is necessary to perform error control mechanisms on an end-to-
end basis. To control errors from lost or duplicate segments, the transport layer
assigns unique sequence numbers to the different segments of the message,
creating virtual circuits and allowing only one virtual circuit per session.
Flow Control
Connection Establishment/Release
The transport layer creates and releases the connection across the network. This
includes a naming mechanism so that a process on one machine can indicate with
whom it wishes to communicate. The transport layer enables us to establish and
delete connections across the network to multiplex several message streams onto
one communication channel.
Multiplexing/De multiplexing
The transport layer establishes a separate network connection for each transport
connection required by the session layer. To improve throughput, the transport
layer establishes multiple network connections. When the issue of throughput is
not important, it multiplexes several transport connections onto the same network
connection, thus reducing the cost of establishing and maintaining the network
connections.
When several connections are multiplexed, they call for demultiplexing at the
receiving end. In the case of the transport layer, the communication takes place
only between two processes and not between two machines. Hence,
communication at the transport layer is also known as peer-to-peer or process-
to-process communication.
When the transport layer receives a large message from the session layer, it
breaks the message into smaller units depending upon the requirement. This
process is called fragmentation. Thereafter, it is passed to the network layer.
Conversely, when the transport layer acts as the receiving process, it reorders the
pieces of a message before reassembling them into a message.
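The fragmentation and reordered reassembly just described can be sketched with sequence-numbered pieces. A minimal illustration (the helper names are made up for this example):

```python
def fragment(message: bytes, max_size: int):
    """Break a large message into (sequence_number, piece) tuples."""
    return [(seq, message[i:i + max_size])
            for seq, i in enumerate(range(0, len(message), max_size))]

def reassemble(pieces):
    """Sort the pieces by sequence number before rebuilding the message,
    so arrival order does not matter."""
    return b"".join(data for _, data in sorted(pieces))

parts = fragment(b"hello transport layer", max_size=8)
restored = reassemble(reversed(parts))  # arrives out of order, still correct
```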
Addressing
HTTP
Features of HTTP:
HTTP Transactions
The above figure shows the HTTP transaction between client and server. The
client initiates a transaction by sending a request message to the server. The
server replies to the request message by sending a response message.
Messages
HTTP messages are of two types: request and response. Both the message types
follow the same message format.
Request Message: The request message is sent by the client that consists
of a request line, headers, and sometimes a body.
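The request-line/headers/body structure can be made concrete by assembling a raw HTTP/1.1 message as text. A minimal sketch (the host `example.com` is just a placeholder; real clients add more headers):

```python
def build_request(method: str, path: str, host: str, body: str = "") -> str:
    """Request line, then headers, then a blank line, then the body.
    Lines are separated by CRLF as required by HTTP."""
    lines = [f"{method} {path} HTTP/1.1", f"Host: {host}"]
    if body:
        lines.append(f"Content-Length: {len(body)}")
    return "\r\n".join(lines) + "\r\n\r\n" + body

msg = build_request("GET", "/index.html", "example.com")
# "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"
```

A response message has the same shape, except the first line is a status line (e.g. `HTTP/1.1 200 OK`) instead of a request line.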
FTP
Objectives of FTP
Why FTP?
Although transferring files from one system to another seems simple and
straightforward, it can sometimes cause problems. For example, two systems
may have different file conventions. Two systems may have different ways to
represent text and data. Two systems may have different directory structures. FTP
protocol overcomes these problems by establishing two connections between
hosts. One connection is used for data transfer, and another connection is used
for the control connection.
Mechanism of FTP
The above figure shows the basic model of the FTP. The FTP client has three
components: the user interface, control process, and data transfer process. The
server has two components: the server control process and the server data
transfer process.
o Control Connection: The control connection uses very simple rules
for communication. Through control connection, we can transfer a line of
command or line of response at a time. The control connection is made
between the control processes. The control connection remains connected
during the entire interactive FTP session.
o Data Connection: The Data Connection uses very complex rules as data
types may vary. The data connection is made between data transfer
processes. The data connection opens when a command comes for
transferring the files and closes when the file is transferred.
FTP Clients
Disadvantages of FTP:
o The standard requirement of the industry is that all FTP transmissions
should be encrypted. However, not all FTP providers are equal and not
all of them offer encryption. So, we have to look out for FTP providers
that provide encryption.
o FTP serves two operations, i.e., to send and receive large files on a network.
However, the size limit of the file is 2GB that can be sent. It also doesn't
allow you to run simultaneous transfers to multiple receivers.
o Passwords and file contents are sent in clear text that allows unwanted
eavesdropping. So, it is quite possible that attackers can carry out the brute
force attack by trying to guess the FTP password.
o It is not compatible with every system.
Features
TCP is a reliable protocol. That is, the receiver always sends either a
positive or negative acknowledgement for each data packet to the
sender, so the sender always has a clear indication of whether a data
packet reached the destination or needs to be resent.
TCP ensures that the data reaches the intended destination in the same
order it was sent.
TCP is connection-oriented. TCP requires that a connection between
two remote points be established before sending actual data.
TCP provides error-checking and recovery mechanisms.
TCP provides end-to-end communication.
TCP provides flow control and quality of service.
TCP operates in Client/Server point-to-point mode. TCP provides a
full-duplex service, i.e. it can perform the roles of both receiver and
sender.
Header
The length of TCP header is minimum 20 bytes long and maximum 60 bytes.
Addressing
TCP communication between two remote hosts is done by means of port numbers
(TSAPs). Port numbers can range from 0 to 65535 and are divided as:
System Ports (0 – 1023)
User Ports ( 1024 – 49151)
Private/Dynamic Ports (49152 – 65535)
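The three port ranges above translate directly into a small classifier. A minimal sketch (the function name is illustrative):

```python
def port_class(port: int) -> str:
    """Classify a TCP/UDP port into the three IANA-defined ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("ports are 16-bit values: 0-65535")
    if port <= 1023:
        return "system"           # well-known services (HTTP 80, SSH 22, ...)
    if port <= 49151:
        return "user"             # registered / user ports
    return "private/dynamic"      # ephemeral ports picked by clients

port_class(80)     # "system"
port_class(8080)   # "user"
port_class(50000)  # "private/dynamic"
```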
Connection Management
TCP communication works in Server/Client model. The client initiates the
connection and the server either accepts or rejects it. Three-way handshaking is
used for connection management.
Establishment
Client initiates the connection and sends the segment with a Sequence number.
Server acknowledges it back with its own Sequence number and ACK of client’s
segment which is one more than client’s Sequence number. Client after receiving
ACK of its segment sends an acknowledgement of Server’s response.
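The three segments of the handshake, with the sequence/acknowledgement arithmetic described above, can be sketched as plain tuples. This is a model of the number exchange only, not a socket implementation; the initial sequence numbers (ISNs) are arbitrary example values.

```python
def three_way_handshake(client_isn: int, server_isn: int):
    """Return the three segments as (flags, seq, ack) tuples."""
    syn     = ("SYN", client_isn, None)                   # client opens
    syn_ack = ("SYN+ACK", server_isn, client_isn + 1)     # ACK = client seq + 1
    ack     = ("ACK", client_isn + 1, server_isn + 1)     # client confirms
    return [syn, syn_ack, ack]

three_way_handshake(100, 300)
# [("SYN", 100, None), ("SYN+ACK", 300, 101), ("ACK", 101, 301)]
```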
Release
Either the server or the client can send a TCP segment with the FIN flag set
to 1. When the receiving end responds by acknowledging the FIN, that direction
of TCP communication is closed and the connection is released.
Bandwidth Management
TCP uses the concept of window size to accommodate the need of Bandwidth
management. Window size tells the sender at the remote end, the number of data
byte segments the receiver at this end can receive. TCP uses slow start phase by
using window size 1 and increases the window size exponentially after each
successful communication.
For example, the client uses a window size of 2 and sends 2 bytes of data. When
the acknowledgement of this segment is received, the window size is doubled to
4 and the next segment sent will be 4 data bytes long. When the acknowledgement
of the 4-byte data segment is received, the client sets the window size to 8, and
so on.
If an acknowledgement is missed, i.e. data is lost in the transit network or a
NACK is received, then the window size is reduced to half and the slow start
phase begins again.
TCP uses port numbers to know what application process it needs to handover
the data segment. Along with that, it uses sequence numbers to synchronize itself
with the remote host. All data segments are sent and received with sequence
numbers. The Sender knows which last data segment was received by the
Receiver when it gets ACK. The Receiver knows about the last segment sent by
the Sender by referring to the sequence number of recently received packet.
If the sequence number of a segment recently received does not match with the
sequence number the receiver was expecting, then it is discarded and NACK is
sent back. If two segments arrive with the same sequence number, the TCP
timestamp value is compared to make a decision.
Multiplexing
The technique to combine two or more data streams in one session is called
Multiplexing. When a TCP client initializes a connection with Server, it always
refers to a well-defined port number which indicates the application process. The
client itself uses a randomly generated port number from private port number
pools.
Using TCP multiplexing, a client can communicate with a number of different
application processes in a single session. For example, when a client requests a
web page which in turn contains different types of data (HTTP, SMTP, FTP,
etc.), the TCP session timeout is increased and the session is kept open for a
longer time so that the three-way handshake overhead can be avoided.
This enables the client system to receive multiple connections over a single
virtual connection. These virtual connections are not good for servers if the
timeout is too long.
Congestion Control
When a large amount of data is fed to a system that is not capable of handling
it, congestion occurs. TCP controls congestion by means of a window mechanism.
TCP sets a window size telling the other end how much data to send.
TCP may use three algorithms for congestion control:
Additive increase, Multiplicative Decrease
Slow Start
Timeout React
Timer Management
TCP uses different types of timers to control and manage various tasks:
Keep-alive timer:
This timer is used to check the integrity and validity of a connection.
When keep-alive time expires, the host sends a probe to check if the
connection still exists.
Retransmission timer:
This timer maintains stateful session of data sent.
If the acknowledgement of sent data is not received within the
retransmission time, the data segment is sent again.
Crash Recovery
TCP is a very reliable protocol. It provides a sequence number for each byte
sent in a segment. It provides a feedback mechanism, i.e. when a host receives a
packet, it is bound to ACK that packet with the next sequence number expected
(if it is not the last segment).
When a TCP server crashes mid-way through communication and restarts its
process, it sends a TPDU broadcast to all its hosts. The hosts can then resend
the last data segment that was never acknowledged and carry on.
Discuss about sub netting with an example? Mention Pros and Cons of sub
netting.
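As a concrete subnetting example, Python's standard `ipaddress` module can split a network by borrowing host bits. Here a /24 is divided into four /26 subnets by borrowing 2 bits (the address block is an arbitrary private example):

```python
import ipaddress

# Borrow 2 host bits: split 192.168.1.0/24 into four /26 subnets.
net = ipaddress.ip_network("192.168.1.0/24")
subnets = list(net.subnets(prefixlen_diff=2))
# 192.168.1.0/26, 192.168.1.64/26, 192.168.1.128/26, 192.168.1.192/26

# Each /26 has 64 addresses, minus the network and broadcast addresses:
usable = subnets[0].num_addresses - 2  # 62 usable hosts per subnet
```

Note the trade-off visible in the numbers: the original /24 offered 254 usable hosts, while the four /26 subnets offer 4 × 62 = 248, because each subnet spends two addresses on its own network and broadcast identifiers.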
Advantages of Subnetting
Disadvantages of Subnetting
What is DNS? Recognize the services provided by DNS and explain how it
works.
DNS is a TCP/IP protocol used on different platforms. The domain name space
is divided into three different sections: generic domains, country domains, and
inverse domain.
Generic Domains
Country Domains
The format of a country domain is the same as that of a generic domain, but it
uses two-character country abbreviations (e.g., us for the United States) in
place of three-character organizational abbreviations.
Inverse Domain
The inverse domain is used for mapping an address to a name. Suppose the
server has received a request from a client, and the server holds the files of
only authorized clients. To determine whether the client is on the authorized
list, it sends a query to the DNS server asking for a mapping of the address to
a name.
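The inverse-domain query name is built by reversing the octets of the IP address and appending the `in-addr.arpa` suffix, as used for PTR (address-to-name) lookups. A minimal sketch for IPv4:

```python
def inverse_domain(ipv4: str) -> str:
    """Build the in-addr.arpa name used for an address-to-name query:
    octets are reversed so the most specific label comes first."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + ".in-addr.arpa"

inverse_domain("192.0.2.1")  # "1.2.0.192.in-addr.arpa"
```

The octets are reversed because DNS names read from most specific to least specific, whereas IP addresses read the other way around.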
Working of DNS
a) Explain Internet Protocol with the neat block diagram of IPv4 header
format
b) Explain IP addressing method
Explain Internet Protocol with the neat block diagram of IPv4 header
format
IPv4
short for Internet Protocol Version 4 is the fourth version of
the Internet Protocol (IP).
IP is responsible to deliver data packets from the source host to the
destination host.
This delivery is solely based on the IP Addresses in the packet headers.
IPv4 is the first major version of IP.
IPv4 is a connectionless protocol for use on packet-switched networks.
IPv4 Header-
The following diagram represents the IPv4 header-
Version-
Header Length-
Header length is a 4-bit field that contains the length of the IP header.
It helps in knowing where the actual data begins.
The initial 5 rows of the IP header are always used.
So, minimum length of IP header = 5 x 4 bytes = 20 bytes.
The size of the 6th row, representing the Options field, varies.
The size of the Options field can go up to 40 bytes.
So, maximum length of IP header = 20 bytes + 40 bytes = 60 bytes.
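The header-length arithmetic above can be checked in code: the IHL field is the low 4 bits of the header's first byte and counts 4-byte rows, so IHL 5 gives the 20-byte minimum and IHL 15 the 60-byte maximum. A minimal sketch with a hand-built dummy header:

```python
def header_length_bytes(ip_header: bytes) -> int:
    """IHL is the low nibble of byte 0, counted in 4-byte rows."""
    ihl = ip_header[0] & 0x0F
    return ihl * 4

# Minimal header: version 4, IHL 5 -> first byte 0x45, length 5*4 = 20 bytes.
minimal = bytes([0x45]) + bytes(19)
header_length_bytes(minimal)  # 20

# Maximal header: version 4, IHL 15 -> first byte 0x4F, length 60 bytes.
maximal = bytes([0x4F]) + bytes(59)
header_length_bytes(maximal)  # 60
```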
Type Of Service-
Type of service is an 8-bit field that is used for Quality of Service (QoS).
The datagram is marked for giving a certain treatment using this field.
Total Length-
Total length is a 16-bit field that contains the total length of the datagram
(in bytes).
Identification-
DF Bit-
MF Bit-
Fragment Offset-
Time To Live-
Protocol-
Header Checksum-
At each hop, the header checksum is recomputed and compared with the
value contained in this field.
If the header checksum is found to be mismatched, then the datagram is
discarded.
A router updates the checksum field whenever it modifies the datagram
header.
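The checksum comparison performed at each hop uses the standard Internet checksum (RFC 1071): the one's-complement of the one's-complement sum of the header's 16-bit words, with the checksum field zeroed before computing. A minimal sketch:

```python
def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words, then complemented.
    The checksum field itself must hold zero when computing it;
    recomputing over a header with a valid checksum yields 0."""
    if len(header) % 2:
        header += b"\x00"                      # pad odd-length input
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF
```

A router that modifies the TTL (for example) must zero the checksum field, recompute with this algorithm, and write the new value back; the next hop then verifies by summing over the whole header and checking for 0.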
Destination IP Address-
For example, a sender wants to send a message to some destination, but the
router could not deliver it. In this case, the router sends a message back to
the sender saying that it could not deliver the message to that destination.
When someone in between reports the error, the sender can resend the
message quickly.
Messages
o Error-reporting messages
An error-reporting message means that when a router encounters a problem
while processing an IP packet, it reports a message.
o Query messages
The query messages are those messages that help the host to get the specific
information of another host. For example, suppose there are a client and a server,
and the client wants to know whether the server is live or not, then it sends the
ICMP message to the server.
The message format has two things; one is a category that tells us which type of
message it is. If the message is of error type, the error message contains the type
and the code. The type defines the type of message while the code defines the
subtype of the message.
o Type: It is an 8-bit field. It defines the ICMP message type. In ICMPv6,
the values from 0 to 127 are error messages, and the values from 128 to
255 are informational messages.
o Code: It is an 8-bit field that defines the subtype of the ICMP message
o Checksum: It is a 16-bit field to detect whether the error exists in the
message or not.
The error reporting messages are broadly classified into the following
categories:
o Destination unreachable
The destination unreachable error occurs when the packet does not reach the
destination. Suppose the sender sends the message, but the message does not
reach the destination, then the intermediate router reports to the sender that the
destination is unreachable.
Type: It defines the type of message. The number 3 specifies that the destination
is unreachable.
Code (0 to 15): The code takes values from 0 to 15 and identifies whether the
message comes from some intermediate router or from the destination itself.
Sometimes the destination does not want to process the request, so it sends the
destination unreachable message to the source. A router does not detect all the
problems that prevent the delivery of a packet.
o Source quench
o Time exceeded
Sometimes there are many routers between the sender and the receiver, and
when the sender sends a packet, it can end up moving in a routing loop. The
time-exceeded message is based on the time-to-live value. As the packet
traverses the network, each router decreases the value of TTL by one.
Whenever a router decrements a datagram's time-to-live value to zero, the
router discards the datagram and sends a time-exceeded message to the
original source.
The above message format shows that the type of time-exceeded is 11, and the
code can be either 0 or 1. The code 0 represents TTL, while code 1 represents
fragmentation. In a time-exceeded message, the code 0 is used by the routers to
show that the time-to-live value is reached to zero.
The code 1 is used by the destination to show that all the fragments do not reach
within a set time.
Parameter problems
The router and the destination host can send a parameter problem message. This
message conveys that some parameters are not properly set.
Redirection
When the packet is sent, then the routing table is gradually augmented and
updated. The tool used to achieve this is the redirection message. For example,
A wants to send the packet to B, and there are two routers exist between A and
B. First, A sends the data to the router 1. The router 1 sends the IP packet to
router 2 and redirection message to A so that A can update its routing table.
ICMP Query Messages
The ICMP query message is used for error handling or debugging the internet.
It is most commonly used by the ping utility.
Echo-request and echo-reply message
LEAKY BUCKET
When too many packets are present in the network it causes packet delay and
loss of packet which degrades the performance of the system. This situation is
called congestion.
The network layer and transport layer share the responsibility for handling
congestions. One of the most effective ways to control congestion is trying to
reduce the load that transport layer is placing on the network. To maintain this,
the network and transport layers have to work together.
Leaky Bucket Algorithm mainly controls the total amount and the rate of
the traffic sent to the network.
Step 1 − Imagine a bucket with a small hole at the bottom. The rate at which
water is poured into the bucket is not constant and can vary, but it leaks from
the bucket at a constant rate.
Step 2 − So, as long as water is present in the bucket, the rate at which the
water leaks does not depend on the rate at which water is poured into the
bucket.
Step 3 − If the bucket is full, additional water entering the bucket spills over
the sides and is lost.
Step 4 − The same concept applies to packets in a network. Consider that
data is coming from the source at variable speeds. Suppose a source sends
data at 10 Mbps for 4 seconds, then no data for 3 seconds, then data at a rate
of 8 Mbps for 2 seconds. Thus, in a time span of 9 seconds, 56 Mb of data
has been transmitted.
If a leaky bucket algorithm is used, the same 56 Mb can flow out at a constant
rate, for example 7 Mbps for 8 seconds. Thus, a constant flow is maintained.
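The steps above can be sketched as a per-second simulation. This is a simplified model with assumed parameters (capacity 30 Mb, leak rate 8 Mb per second); arrivals and outputs are in Mb per one-second step:

```python
def leaky_bucket(arrivals, capacity, leak_rate):
    """Per time step: add arriving data (excess over capacity spills over
    and is lost), then drain at most `leak_rate` of what is buffered."""
    level, output, lost = 0, [], 0
    for a in arrivals:
        if level + a > capacity:
            lost += level + a - capacity   # bucket overflows: data is lost
            level = capacity
        else:
            level += a
        sent = min(level, leak_rate)       # constant-rate leak
        output.append(sent)
        level -= sent
    return output, lost

# The bursty source from the text: 10 Mbps for 4 s, idle 3 s, 8 Mbps for 2 s.
out, lost = leaky_bucket([10, 10, 10, 10, 0, 0, 0, 8, 8],
                         capacity=30, leak_rate=8)
```

The output never exceeds 8 Mb per second regardless of how bursty the input is, which is exactly the smoothing the text describes.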
TOKEN BUCKET
The leaky bucket algorithm enforces output patterns at the average rate, no matter
how busy the traffic is. So, to deal with the more traffic, we need a flexible
algorithm so that the data is not lost. One such approach is the token bucket
algorithm.
Let us understand this algorithm step wise as given below −
Step 1 − At regular intervals, tokens are thrown into the bucket.
Step 2 − The bucket has a maximum capacity.
Step 3 − If the packet is ready, then a token is removed from the
bucket, and the packet is sent.
Step 4 − Suppose, if there is no token in the bucket, the packet cannot
be sent.
Compared to the leaky bucket, the token bucket algorithm is less restrictive,
which means it allows more traffic. The limit on burstiness is set by the
number of tokens available in the bucket at a particular instant of time.
The implementation of the token bucket algorithm is easy − a variable is used
to count the tokens. Every t seconds the counter is incremented, and it is
decremented whenever a packet is sent. When the counter reaches zero, no
further packets are sent out.