
OSI & TCP/IP Notes

Network:-
-Network is a combination of hardware and software that sends data from one location to another.
OR
-A group of communication devices (PCs, routers, switches) connected via media (wired/wireless) to exchange information.
Protocol:-
-A protocol is a set of rules that governs how data is transmitted from one system to another anywhere in the network. It defines what
is communicated, how it is communicated, and when it is communicated.
OR
-Protocols are rules that govern how devices communicate and share information across a network.
Network Components:-
-Devices (PC, Server, Router, Switches, Firewall)
-Media
-Protocol
-Messages
LAN:-
-Local Area Network
-A group of communication devices running under common administration that covers a small geographic area, usually a
small building, campus, office, home, or school.
MAN:-
-Metropolitan Area Network.
-It is a network that connects LANs across a city-wide geographic area.
-Ex: Metro Ethernet connection within a city.
WAN:-
-Wide Area Network
-It is a network that covers a large geographic area, usually to connect multiple LANs in different cities.
Or
- A wide area network (WAN) is a network that connects LANs over large geographic distances.
Internetwork:
-A term describing multiple networks connected together.
Intranetwork:
-A network contained within a single organization or administrative domain; communication stays inside that network.
Topology:
-How the network is physically connected.
-The physical structure or layout of your network.
*2 types of topology:
A. Physical Topology
-What the network looks like: how all the cables and devices are connected to each other.
B. Logical Topology
-It defines how data takes its path through the physical topology.
*Types of Physical Topology:
1. Bus
-In a bus topology, all hosts share a physical segment. A terminator is placed at each end of the cable.
-A frame sent by one host is received by all other hosts on the bus. However, a host will only process a frame if it
matches the destination hardware address in the datalink header.
-The bus represents a single point of failure. A break in the bus will affect all hosts on the segment.
Or
Shared Bus
-A shared bus is a physical network topology or layout in which multiple devices are connected to the same physical
wire or cable. When one device transmits data, all other devices on the shared bus receive it.
2. Ring
-All computers and network devices are connected to a cable in such a way that they form a ring.
-If the cable breaks, the network is down.
3. Star
-All end devices are connected to a central device, creating a star model.
-This is what we use nowadays on LANs, with a switch in the middle.
-When the switch goes down, the network is down as well.
4. Mesh
-Multiple connections exist between your devices.
*2 types of Mesh topology:
A. Partial Mesh
-Some devices are interconnected by multiple links, but not every device has a direct link to every other device.
B. Full Mesh
-Each and every device is connected to every other device via an individual link.
5. Tree
-Combination of bus and star topology.
6. Hybrid
-Combination of all topology together.
Reference Models:
-The TCP/IP model is a protocol model because it describes the functions that occur at each layer of protocols
within the TCP/IP suite.
-The OSI is used for data network design, operation specifications, and troubleshooting.
-A networking model is only a representation of network operation. The model is not the actual network.
>The Benefits of using a Layered Model:
-There are benefits to using a layered model to describe network protocols and operations. Using a layered model:
-Assists in protocol design, because protocols that operate at a specific layer have defined information that they act
upon and a defined interface to the layers above and below.
-products from different vendors can work together.
-Provides a common language to describe networking functions and capabilities.
-Individual parts of the system can be designed independently, but still work together seamlessly.
OSI Model (Open System Interconnection Model):-
-The OSI Model is a layered framework that allows communication between devices. Each layer defines a set of
functions.
-Provides guidelines on how computers communicate over a network.
-Does not define specific procedures or protocols.
-It is theoretical in nature and does not define protocols.
- Network communication models are generally organized into layers. The OSI model specifically consists of seven
layers, with each layer representing a specific networking function. These functions are controlled by protocols,
which govern end-to-end communication between devices. As data is passed from the user application down the
virtual layers of the OSI model, each of the lower layers adds a header (and sometimes a trailer) containing
protocol information specific to that layer. The data together with the added headers forms a Protocol Data Unit (PDU), and the
process of adding these headers is referred to as encapsulation.

* 7 Layers OSI model:-


1. Application
–It provides network services to applications and a user interface between the application and the network.
-TCP/IP application layer protocols provide services to the application software running on a computer. The
application layer does not define the application itself, but it defines the services that applications need.
-Ex: the application protocol HTTP defines how a web browser can pull the content of a web page from a web server. In
short, the application layer provides an interface between software running on a computer and the network itself.
-It's important to note that this is not the user interface itself. When a user chooses to read email, transfer a file, or
surf the network, the user's software program, such as a Web browser, interacts with the Application Layer.
-Data/ SMTP, FTP, HTTP, DNS, SNMP, TELNET, TFTP, RIP, BGP
2. Presentation
–It defines a standard format for the data and ensures that the data is readable by the receiving device.
-It performs functions such as encoding & decoding, encryption & decryption, and compression & decompression.
-Ex: Voice (MIDI, WAV), Video (MPEG, AVI), Text (ASCII), Graphics (JPEG, GIF, TIFF).
-Data/ SMTP, FTP, HTTP, DNS, SNMP, TELNET, TFTP
3. Session
-It is responsible for establishing, managing, and terminating sessions between devices.
-Data/ SMTP, FTP, HTTP, DNS, SNMP, TELNET, TFTP
4. Transport
–It is responsible for providing a logical connection and reliable transfer of data between two hosts, and also provides
other functions such as:
*Flow Control (through the use of Windowing)
*Reliable Connection (through the use of Sequence Number and Acknowledge Number)
*Session multiplexing (through the use of Port Number)
*Segmentation
*Error recovery
-Segment / TCP, UDP, Port Number.
5. Network
-It provides internetwork communication.
-Routes data packets
-Selects best path to deliver data
-Provides logical addressing and path selection
-Packets /IP, ICMP, IGMP, RARP, ARP, EIGRP, OSPF/Router
6. Data Link
-It prepares network layer packets for transmission and controls access to the physical media.
-Defines how data is formatted for transmission and how the network is controlled, along with error detection.
*It performs two functions:
-Allows the upper layers to access the media using techniques such as framing.
-Controls how data is placed onto the media and received from the media, using techniques such as media access
control and error detection.
*Two Sub-Layer:
1. LLC (Logical Link Control)
-It places the information in a frame and identifies which network layer protocol is being used for the frame (IP, IPX,
AppleTalk).
2. MAC (Media Access Control)
-It provides data link addressing and controls access to the physical medium using CSMA/CD.
-Frames/ Switch, Bridge, Ethernet, Token Ring, FDDI (Fiber Distributed Data Interface), 802.11 Wireless, Frame
Relay, ATM.
7. Physical
–It transmits bits over a medium and provides mechanical and electrical specifications.
- Bits / Cable, Connector, NIC, Wireless Radio, Hub, Modem, Repeater.
TCP/IP Protocol Suite:-
-A simpler, 4-layer model.
-Defines specific protocols at each layer.

-4 layers TCP/IP Protocol Suite:


Application Layer: Includes the OSI Application, Presentation, and Session layers, which are responsible for
representation, encoding, and dialog control.
Transport Layer: Similar to OSI, with Transmission Control Protocol (TCP) and User Datagram Protocol (UDP)
operating at this layer.
Internet Layer: Similar to OSI Network layer. IP resides at this layer.
Network Access Layer: Combines all functionality of physical and Data Link layers of OSI model. It is also called
the host-to-network layer.
3. Five Layer Model:
-Combination of the TCP/IP model and OSI model.
1. Application
2. Transport
3. Network
4. Datalink layer
5. Physical

*Encapsulation and PDU:


-As application data is passed down the protocol stack on its way to be transmitted across the network media,
various protocols add information to it at each level. This is commonly known as the Encapsulation Process.
-The form that a piece of data takes at each layer is called a Protocol Data Unit (PDU). During encapsulation, each
layer encapsulates the PDU that it receives from the upper layer in accordance with the protocol being used.
*DOWN (data encapsulation) — The process of adding headers to data as it moves down the OSI layers. In addition
to a header, the data link layer also adds a trailer, the frame check sequence (FCS), used by the receiver to
detect whether the data is in error.
*UP (data decapsulation) — The process of removing headers from data as it moves up OSI layers.
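The idea can be sketched in a few lines of Python. This is only an illustration: the "headers" below are simplified placeholder strings, not real protocol formats.

# Minimal sketch of encapsulation (headers added on the way down) and
# decapsulation (headers stripped on the way up). Placeholder header names,
# not real protocol formats. Requires Python 3.9+ for removeprefix/removesuffix.
def encapsulate(app_data: bytes) -> bytes:
    segment = b"TCP-HDR|" + app_data            # Transport layer adds its header
    packet = b"IP-HDR|" + segment               # Network layer adds its header
    frame = b"ETH-HDR|" + packet + b"|FCS"      # Data link layer adds header and trailer (FCS)
    return frame                                 # Physical layer then transmits the bits

def decapsulate(frame: bytes) -> bytes:
    packet = frame.removeprefix(b"ETH-HDR|").removesuffix(b"|FCS")  # strip frame header/trailer
    segment = packet.removeprefix(b"IP-HDR|")                        # strip IP header
    app_data = segment.removeprefix(b"TCP-HDR|")                     # strip TCP header
    return app_data

assert decapsulate(encapsulate(b"GET / HTTP/1.1")) == b"GET / HTTP/1.1"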

Encapsulation & Decapsulation Process:


>>TCP/IP Model Example:
Protocol Operation of Sending and Receiving a message.
-When sending messages on a network, the protocol stack on a host operates from top to bottom. In the web server
example, we can use the TCP/IP model to illustrate the process of sending an HTML web page to a client.
-The Application layer protocol, HTTP, begins the process by delivering the HTML formatted web page data to the
Transport layer. There the application data is broken into TCP segments. Each TCP segment is given a label, called
a header, containing information about which process running on the destination computer should receive the
message. It also contains the information to enable the destination process to reassemble the data back to its original
format.
-The Transport layer encapsulates the web page HTML data within the segment and sends it to the Internet layer,
where the IP protocol is implemented. Here the entire TCP segment is encapsulated within an IP packet, which adds
another label, called the IP header. The IP header contains source and destination host IP addresses, as well as
information necessary to deliver the packet to its corresponding destination process.
-Next, the IP packet is sent to the Network Access layer Ethernet protocol where it is encapsulated within a frame
header and trailer. Each frame header contains a source and destination physical address. The physical address
uniquely identifies the devices on the local network. The trailer contains error checking information. Finally the
bits are encoded onto the Ethernet media by the server NIC.
-This process is reversed at the receiving host. The data is decapsulated as it moves up the stack toward the end user
application.
Or
*During encapsulation on the sending host:
• Data from the user application is handed off to the Transport layer.
• The Transport layer adds a header containing protocol-specific information, and then hands the segment to the
Network layer.
• The Network layer adds a header containing source and destination logical addressing, and then hands the packet
to the Data-Link layer.
• The Data-Link layer adds a header containing source and destination physical addressing and other hardware-
specific information.
• The Data-Link frame is then handed off to the Physical layer to be transmitted on the network medium as bits.
*During decapsulation on the receiving host, the reverse occurs:
• The frame is received from the physical medium.
• The Data-Link layer processes its header, strips it off, and then hands it off to the Network layer.
• The Network layer processes its header, strips it off, and then hands it off to the Transport layer.
• The Transport layer processes its header, strips it off, and then hands the data to the user application.
>>OSI Reference Model Example
A web browser serves as a good practical illustration of the OSI model and
the TCP/IP protocol suite:
• The web browser serves as the user interface for accessing a website. The browser itself does not function at the
Application layer. Instead, the web browser invokes the Hyper Text Transfer Protocol (HTTP) to interface with the
remote web server, which is why http:// precedes every web address.
• The Internet can provide data in a wide variety of formats, a function of the Presentation layer. Common formats
on the Internet include HTML,XML, PHP, GIF, and JPEG. Any encryption or compression mechanisms used on a
website are also considered a Presentation layer function.
• The Session layer is responsible for establishing, maintaining, and terminating the session between devices, and
determining whether the communication is half-duplex or full-duplex. However, the TCP/IP stackgenerally does not
include session-layer protocols, and is reliant on lower-layer protocols to perform these functions.
• HTTP utilizes the TCP Transport layer protocol to ensure the reliable delivery of data. TCP establishes and
maintains a connection from the client to the web server, and packages the higher-layer data into segments. A
sequence number is assigned to each segment so that data can be reassembled upon arrival.
• The best path to route the data between the client and the web server is determined by IP, a Network layer
protocol. IP is also responsible for the assigned logical addresses on the client and server, and for encapsulating
segments into packets.
• Data cannot be sent directly to a logical address. As packets travel from network to network, IP addresses are
translated to hardware addresses, which are a function of the Data-Link layer. The packets are encapsulated into
frames to be placed onto the physical medium.
• The data is finally transferred onto the network medium at the Physical layer, in the form of raw bits. Signaling
and encoding mechanisms aredefined at this layer, as is the hardware that forms the physicalconnection between the
client and the web server.

or

The picture below is an example of a simple data transfer between 2 computers and shows how the data is
encapsulated and decapsulated:

EXPLANATION:

-The computer in the above picture needs to send some data to another computer. The Application layer is where the
user interface exists; here the user interacts with the application he or she is using. This data is then passed to the
Presentation layer and then to the Session layer. These three layers add some extra information to the original data
that came from the user and then pass it to the Transport layer. Here the data is broken into smaller pieces (one
piece at a time transmitted) and the TCP header is added. At this point, the data at the Transport layer is called
a segment.

-Each segment is sequenced so the data stream can be put back together on the receiving side exactly as transmitted.
Each segment is then handed to the Network layer for network addressing (logical addressing) and routing through
the internet network. At the Network layer, we call the data (which includes at this point the transport header and the
upper layer information) a packet.

-The Network layer adds its IP header and then sends it off to the Datalink layer. Here we call the data (which
includes the Network layer header, Transport layer header and upper layer information) a frame. The Datalink layer
is responsible for taking packets from the Network layer and placing them on the network medium (cable). The
Datalink layer encapsulates each packet in a frame which contains the hardware address (MAC) of the source and
destination computer (host) and the LLC information which identifies to which protocol in the previous layer
(Network layer) the packet should be passed when it arrives at its destination. Also, at the end, you will notice the
FCS field, which is the Frame Check Sequence. This is used for error checking and is added at the end by the
Datalink layer.

-If the destination computer is on a remote network, then the frame is sent to the router or gateway to be routed to
the destination. To put this frame on the network, it must be put into a digital signal. Since a frame is really a logical
group of 1's and 0's, the Physical layer is responsible for encapsulating these digits into a digital signal which is read
by devices on the same local network.

-There are also a few 1's and 0's put at the beginning of the frame, only so the receiving end can synchronize with the
digital signal it will be receiving.

*Below is a picture of what happens when the data is received at the destination computer.

EXPLANATION

-The receiving computer will first synchronize with the digital signal by reading the few extra 1's and 0's
mentioned above. Once synchronization is complete, it receives the whole frame and passes it to the layer
above it, which is the Datalink layer.

-The Datalink layer will do a Cyclic Redundancy Check (CRC) on the frame. This is a computation which the
computer does, and if the result it gets matches the value in the FCS field, then it assumes that the frame has been
received without any errors. Once that's out of the way, the Datalink layer will strip off any information or header
which was put on by the remote system's Datalink layer and pass the rest (now we are moving from the Datalink
layer to the Network layer, so we call the data a packet) to the layer above, which is the Network layer.

-At the Network layer the IP address is checked and if it matches (with the machine's own IP address) then the
Network layer header, or IP header if you like, is stripped off from the packet and the rest is passed to the above
layer which is the Transport layer. Here the rest of the data is now called a segment.

-The segment is processed at the Transport layer, which rebuilds the data stream (at this level on the sender's
computer it was actually split into pieces so they can be transferred) and acknowledges to the transmitting computer
that it received each piece. It is obvious that since we are sending an ACK back to the sender from this layer that we
are using TCP and not UDP. Please refer to the Protocols section for more clarification. After all that, it then happily
hands the data stream to the upper-layer application.
-You will find that when analysing the way data travels from one computer to another most people never analyse in
detail any layers above the Transport layer. This is because the whole process of getting data from one computer to
another involves usually layers 1 to 4 (Physical to Transport) or layer 5 (Session) at the most, depending on the type
of data.

Header:
- Contains control information, such as addressing, and is located at the beginning of the PDU
Trailer:
- Contains control information added to the end of the PDU
- Data Link layer protocols add a trailer to the end of each frame. The trailer is used to determine if the frame arrived
without error. This process is called error detection.
Error Detection:
-Error detection is the detection of errors caused by noise or other impairments during transmission from the
transmitter to the receiver.
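As an illustration of trailer-based error detection, the short Python sketch below appends a CRC-32 value to a payload and re-checks it on receipt. Ethernet's FCS is also a CRC-32, but this is only an analogy, not a real frame format.

import zlib

# Sender computes a CRC over the payload and appends it as a trailer.
payload = b"hello, data link layer"
frame = payload + zlib.crc32(payload).to_bytes(4, "big")

# Receiver recomputes the CRC and compares it with the received trailer.
received_payload, received_fcs = frame[:-4], int.from_bytes(frame[-4:], "big")
if zlib.crc32(received_payload) == received_fcs:
    print("frame accepted (no error detected)")
else:
    print("frame discarded (error detected)")   # detection only, no correction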
Error Correction:
-Error correction is the detection of errors and reconstruction of the original, error-free data.
PDU (Protocol Data Unit):
-The name given to data at different layers of the OSI models.
*Transport: Segment (TCP), Datagram (UDP)
*Network: Packet (IP Datagram)
*Data Link: Frame
*Physical: Bits
Protocol Suites/Stacks:
-Multiple protocols often work together to facilitate end-to-end network communication, forming protocol suites or
stacks.
TCP/IP (Transmission Control Protocol/Internet Protocol):
-TCP/IP is the communication protocol suite that defines the rules computers must follow to communicate with each other
over the internet.
-Ex.: Browser and server, Email, Your Internet Address uses TCP/IP
Connection Oriented:
-A connection has to be established before data is sent.
-Connection-oriented protocols provide several services:
1. Segmentation and sequencing:
-Data is segmented into smaller pieces for transport.
-Each segment is assigned a sequence number so that the receiving device can reassemble the data on arrival.
2. Connection Establishment:
-Connections are established, maintained, and terminated between devices.
3. Acknowledgement:
-Receipt of data is confirmed through the use of acknowledgements; otherwise, data is retransmitted, guaranteeing
delivery.
4. Flow Control:
-The data transfer rate is negotiated to prevent congestion.
-Ex: TCP
Connection Less:
-Requires no connection setup before data is sent.
-Ex: UDP

TCP:
-TCP is connection oriented, which means a connection has to be established before data is sent.
-TCP/IP transport layer or TCP is responsible for providing a logical connection between two hosts and can provide
these functions:
>Flow control (through the use of windowing)
>Reliable connections/Transmission (through the use of sequence numbers and acknowledgments)
>Session multiplexing (through the use of port numbers and IP addresses)
>Segmentation (through the use of segment protocol data units, or PDUs)
-TCP is full duplex. In other words, each TCP host has access to two logical channels, an incoming and an outgoing
channel.
-TCP discards duplicate packets and resequences any packets that arrive out of sequence.
-TCP supports flow control at both the source and the destination to avoid too much data being sent at a time and
overloading the network at the router. TCP flow control only allows the sender to gradually increase the data transmission
rate. To prevent the sender from transmitting data that the receiver cannot buffer, the receiver also uses flow control,
indicating the size of its free buffer.
-TCP segments the data received from the application layer so that it fits into an IP datagram.
-TCP connections are logical point-to-point connections between two application layer protocols. This type of
communication is also referred to as a "direct transmission".
-TCP segments are transmitted in IP datagrams. A TCP segment, consisting of the TCP header and payload, is
encapsulated within an IP header. The resulting IP datagram is then given a header and trailer by the data link layer
and physical layer.
*TCP provides a reliable, connection-oriented, logical service through the use of sequence and acknowledgment
numbers, windowing for flow control, error detection and correction through checksums, reordering of packets, and
dropping of duplicate packets.
*A TCP connection goes through three phases:
1. Connection setup [Three-way Handshake]
2. Data Transfer [Established]
3. Connection Close [Modified Three-way Handshake]
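The three phases are easiest to see from an application's point of view. In the minimal Python sketch below, the operating system performs the three-way handshake when connect() is called and the connection close when the socket is closed; "example.com" on port 80 is just an illustrative endpoint.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 80))      # 1. connection setup (SYN, SYN/ACK, ACK)
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")   # 2. data transfer
    print(s.recv(4096).decode(errors="replace"))
# 3. leaving the "with" block closes the socket (FIN/ACK exchange)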

IP:
-IP is a connectionless protocol. Connectionless means data is transmitted as independent packets; there is no logical
end-to-end connection between the two devices.
-It is responsible for addressing and routing of packets between hosts.
-IP is an unreliable protocol, as it does not guarantee the delivery of packets.
-IP functions according to the principle of best effort. This means it will in any case do its best to deliver the packet
correctly. On its way to the receiver, a packet might get lost, be delivered out of sequence, or be duplicated.
-IP is also responsible for data fragmentation, which means splitting larger data packets into smaller ones. The
process of putting the small packets back together at the receiver is called reassembly.
-No acknowledgement is sent back when a data packet reaches the destination; neither the sender nor the receiver is
informed if a packet gets lost or is transmitted out of sequence. This is the responsibility of higher-layer protocols like TCP.
-IP is a datagram switching protocol. This means that each packet is an unnumbered message requiring no
acknowledgement, which is routed across the network based on the unique IP address.
OR
Internet Protocol (IP)
-The Internet Protocol (IP) is the most common Layer 3 protocol and is used within the Internet to route packets to
their final destination. IP provides connectionless, best-effort delivery of packets through a network and
fragmentation and reassembly of packets going across Layer 2 networks with different maximum transmission units
(MTUs). Each computer or host has at least one IP address that uniquely identifies it from all other computers on the
Internet. The IP addressing scheme is fundamental to the process of routing packets through a network.
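For reference, the fixed part of an IPv4 header is 20 bytes. The Python sketch below packs such a header with struct; the field values are illustrative, and a real stack would also compute the header checksum.

import socket
import struct

version_ihl = (4 << 4) | 5              # version 4, header length 5 x 32-bit words
header = struct.pack(
    "!BBHHHBBH4s4s",
    version_ihl,                        # Version + IHL
    0,                                  # DSCP/ECN
    20,                                 # Total length (header only, no payload here)
    0x1234,                             # Identification (example value)
    0,                                  # Flags + fragment offset
    64,                                 # TTL
    6,                                  # Protocol (6 = TCP, 17 = UDP)
    0,                                  # Header checksum (left as 0 in this sketch)
    socket.inet_aton("192.0.2.1"),      # Source address (documentation range)
    socket.inet_aton("192.0.2.2"),      # Destination address
)
print(len(header), "byte IPv4 header:", header.hex())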

Difference between TCP & UDP:-


TCP (Transmission Control Protocol):
-Connection-oriented and reliable protocol.
-TCP uses a 3-way handshake to ensure reliable transmission.
-Provides sequencing of data.
-TCP is slower.
-Multiplexes data using port numbers.
-Provides retransmission of lost packets.
-Data is called Segment.
-Flow Control.
-TCP Protocol number# 6
-Ex:FTP(21), SSH(22), Telnet(23), Mail(25),DNS(53),HTTP(80), HTTPS(443).
UDP (User Datagram Protocol):
-Connectionless and unreliable. [There is no acknowledgement; that is why it is referred to as an unreliable protocol.]
-Requires no connection setup before data is sent.
-Has only a basic error-checking mechanism using checksums.
-UDP is efficient for broadcast/multicast transmission.
-Does not provide sequencing of data.
-UDP is faster.
-Multiplexes data using port numbers.
-No retransmission of lost packets.
-Data is called datagram.
-No flow control.
-UDP Protocol Number# 17
-Ex: Real time applications, voice over IP media flow, TFTP, Online multiplayer games, and Streaming media
application such as IPTV or movies.
-Ex:
Domain Name System (DNS) [Works on Query and Response]
Simple Network Management Protocol (SNMP) [Request/Response/Trap]
Dynamic Host Configuration Protocol (DHCP)
Routing Information Protocol (RIP) [Request/Response]
Trivial File Transfer Protocol (TFTP) [Request/Response]
Online games

Comparison chart (TCP vs. UDP)

*Acronym for:
-TCP: Transmission Control Protocol
-UDP: User Datagram Protocol (also expanded as Universal Datagram Protocol)
*Connection:
-TCP is a connection-oriented protocol.
-UDP is a connectionless protocol.
*Function:
-TCP: Used as a message makes its way across the internet from one computer to another. This is connection based.
-UDP: Also a protocol used in message transport or transfer. It is not connection based, which means that one program
can send a load of packets to another and that would be the end of the relationship.
*Usage:
-TCP is suited for applications that require high reliability, where transmission time is relatively less critical.
-UDP is suitable for applications that need fast, efficient transmission, such as games. UDP's stateless nature is also
useful for servers that answer small queries from huge numbers of clients.
*Use by other protocols:
-TCP: HTTP, HTTPS, FTP, SMTP, Telnet
-UDP: DNS, DHCP, TFTP, SNMP, RIP, VoIP
*Ordering of data packets:
-TCP rearranges data packets in the order specified.
-UDP has no inherent order, as all packets are independent of each other. If ordering is required, it has to be managed
by the application layer.
*Speed of transfer:
-The speed for TCP is slower than UDP.
-UDP is faster because there is no error-checking for packets.
*Reliability:
-TCP: There is an absolute guarantee that the data transferred remains intact and arrives in the same order in which it
was sent.
-UDP: There is no guarantee that the messages or packets sent will arrive at all.
*Header size:
-TCP header size is 20 bytes.
-UDP header size is 8 bytes.
*Common header fields:
-Both: Source port, Destination port, Checksum
*Streaming of data:
-TCP: Data is read as a byte stream; no distinguishing indications are transmitted to signal message (segment) boundaries.
-UDP: Packets are sent individually and are checked for integrity only if they arrive. Packets have definite boundaries
which are honored upon receipt, meaning a read operation at the receiver socket will yield an entire message as it was
originally sent.
*Weight:
-TCP is heavy-weight. TCP requires three packets to set up a socket connection before any user data can be sent. TCP
handles reliability and congestion control.
-UDP is lightweight. There is no ordering of messages, no tracking of connections, etc. It is a small transport layer
designed on top of IP.
*Data flow control:
-TCP does flow control.
-UDP does not have an option for flow control.
*Error checking:
-TCP does error checking.
-UDP does error checking, but has no recovery options.
*Header fields:
-TCP: 1. Sequence Number, 2. ACK Number, 3. Data Offset, 4. Reserved, 5. Control Bits, 6. Window, 7. Urgent Pointer,
8. Options, 9. Padding, 10. Checksum, 11. Source Port, 12. Destination Port
-UDP: 1. Length, 2. Source Port, 3. Destination Port, 4. Checksum
*Acknowledgement:
-TCP: Acknowledgement segments.
-UDP: No acknowledgement.
*Handshake:
-TCP: SYN, SYN-ACK, ACK.
-UDP: No handshake (connectionless protocol).
*Checksum:
-TCP: Uses a checksum to detect errors.
-UDP: Also carries a checksum (optional for IPv4).

UDP

The User Datagram Protocol (UDP) is a transport layer protocol defined for use with the IP network layer protocol.
It is defined by RFC 768 written by John Postel. It provides a best-effort datagram service to an End System (IP
host).

The service provided by UDP is an unreliable service that provides no guarantees for delivery and no protection
from duplication (e.g. if this arises due to software errors within an Intermediate System (IS)). The simplicity of
UDP reduces the overhead from using the protocol, and the services may be adequate in many cases.

UDP provides a minimal, unreliable, best-effort, message-passing transport to applications and upper-layer
protocols. Compared to other transport protocols, UDP and its UDP-Lite variant are unique in that they do not
establish end-to-end connections between communicating end systems. UDP communication consequently does not
incur connection establishment and teardown overheads, and there is minimal associated end system state. Because
of these characteristics, UDP can offer a very efficient communication transport to some applications, but it has no
inherent congestion control or reliability. On many platforms, applications can send UDP datagrams at the line rate
of the link interface, which is often much greater than the available path capacity; doing so would contribute to
congestion along the path, so applications need to be designed responsibly.

A computer may send UDP packets without first establishing a connection to the recipient. A UDP datagram is
carried in a single IP packet and is hence limited to a maximum payload of 65,507 bytes for IPv4 and 65,527 bytes
for IPv6. The transmission of large IP packets usually requires IP fragmentation. Fragmentation decreases
communication reliability and efficiency and should therefore be avoided.

To transmit a UDP datagram, a computer completes the appropriate fields in the UDP header (PCI) and forwards the
data together with the header for transmission by the IP network layer.
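A rough Python sketch of that header follows: the UDP header is just four 16-bit fields (source port, destination port, length, checksum). The port numbers are arbitrary examples, and the checksum is left as 0, which IPv4 permits.

import struct

payload = b"hello"
udp_header = struct.pack("!HHHH", 49152, 53, 8 + len(payload), 0)  # src port, dst port, length, checksum
datagram = udp_header + payload
print(datagram.hex())

# Maximum UDP payload over IPv4: 65,535 (max IP packet) - 20 (IP header) - 8 (UDP header)
print(65535 - 20 - 8)   # 65507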

Q. Who is responsible for the reliability of UDP packet?


A: Application

Or

-Application developers can use UDP in place of TCP. 

- UDP assumes that the application will use its own reliability method; UDP itself does not provide any, which makes
transfers faster.

Or

-Application programs utilizing UDP accept full responsibility for packet reliability, including handling message loss,
duplication, delay, out-of-sequence delivery, multiplexing, and connectivity loss.
-Application designers are generally aware that UDP does not provide any reliability, e.g., it does not retransmit any
lost packets. Often, this is a main reason to consider UDP as a transport. Applications that do require reliable
message delivery therefore need to implement appropriate protocol mechanisms in their applications (e.g. tftp).
-UDP's best effort service does not protect against datagram duplication, i.e., an application may receive multiple
copies of the same UDP datagram. Application designers therefore need to verify that their application gracefully
handles datagram duplication and may need to implement mechanisms to detect duplicates.
-The Internet may also significantly delay some packets with respect to others, e.g., due to routing transients,
intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at the receiver in an
order different from the transmission order. Applications that require ordered delivery must restore datagram
ordering themselves.
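One common way an application restores ordering is to carry its own sequence number in each datagram. The Python sketch below is a simplified illustration of that idea (retransmission and duplicate detection would be built on the same field).

import struct

def make_datagram(seq: int, data: bytes) -> bytes:
    # Prepend a 4-byte application-level sequence number to the payload.
    return struct.pack("!I", seq) + data

def reorder(received: list[bytes]) -> bytes:
    # Parse the sequence number back out and sort the payloads by it.
    parsed = [(struct.unpack("!I", d[:4])[0], d[4:]) for d in received]
    return b"".join(data for _, data in sorted(parsed))

# Datagrams may arrive out of order; the receiver restores the original order.
out_of_order = [make_datagram(2, b"world"), make_datagram(1, b"hello ")]
print(reorder(out_of_order))    # b'hello world'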

UDP Datagram Reassembly


Because UDP is connectionless, sessions are not established before communication takes place as they are with
TCP. UDP is said to be transaction-based. In other words, when an application has data to send, it simply sends the
data.

Many applications that use UDP send small amounts of data that can fit in one segment. However, some
applications will send larger amounts of data that must be split into multiple segments. The UDP PDU is referred to
as a datagram, although the terms segment and datagram are sometimes used interchangeably to describe a
Transport layer PDU.
When multiple datagrams are sent to a destination, they may take different paths and arrive in the wrong order. UDP
does not keep track of sequence numbers the way TCP does. UDP has no way to reorder the datagrams into their
transmission order.

Therefore, UDP simply reassembles the data in the order that it was received and forwards it to the application. If
the sequence of the data is important to the application, the application will have to identify the proper sequence of
the data and determine how the data should be processed.

*UDP has no ability to do reassembly. It basically only has fields for source port, destination port, length and
checksum. Fragmentation and reassembly would be done in the application itself when UDP is used. Lower layer
fragmentation is still possible, for example with IP. For anything more intelligent at the transport layer, TCP would
typically be used.

Key Concept: UDP was developed for use by application protocols that do not require reliability,
acknowledgment, or flow control features at the transport layer. It is designed to be simple and fast, providing only
transport layer addressing in the form of UDP ports and an optional checksum capability, and little else.

Session Establishment & Termination:


-TCP and UDP use completely different processes when establishing a session with a remote peer.
-When two hosts communicate using TCP, a connection is established before data can be exchanged. After the
communication is completed, the sessions are closed and the connection is terminated.
UDP:
-As you probably already have guessed, UDP uses a fairly simple process. With UDP, one of two situations will
occur that indicate that the session is established:
*The source sends a UDP segment to the destination and receives a response
*The source sends a UDP segment to the destination
-As to which of the two are used, that depends on the application. And as to when a UDP session is over, that is also
application-specific:
*The application can send a message, indicating that the session is now over, which could be part of the data
payload
*An idle timeout is used, so if no segments are encountered over a predefined period, the application assumes the
session is over
TCP:
- TCP, on the other hand, is much more complicated. It uses what is called a defined state machine.
-A defined state machine defines the actual mechanics of the beginning of the state (building the TCP session),
maintaining the state (maintaining the TCP session), and ending the state (tearing down the TCP session).
*The following fields are involved in the 3-way handshake:
1. Source Port
2. Destination Port
3. Seq. Number
4. ACK Number
5. ACK Flag
6. SYN Flag
TCP three-way handshake:-
-With reliable TCP sessions, before a host can send information to another host, a handshake process must take
place to establish the connection.
*Example: Host-A wants to send data reliably to Host-B via TCP. Before this can take place, Host-A must
establish the session to Host-B. The two hosts go through a three-way handshake to establish the reliable
session.
-It is the method used to establish TCP  connections
*Ex. Of 3-way handshake:
-Host A sends a TCP SYN packet to Host B.
-Host B receives A’s SYN packet.
-Host B sends a SYN-ACK packet.
-Host A receives B’s SYN-ACK packet.
-Host A sends ACK packet.
-Host B receives ACK.
TCP socket connection is ESTABLISHED
OR
*The following three steps occur during the three-way handshake:
1. The source sends SYN segment (where the SYN control flag is set in the TCP header) to the destination,
indicating that the source wants to establish a reliable session.
2. The destination responds with SYN/ACK segment. The ACK indicates receipt of the source’s SYN segment, and
the destination’s SYN flag indicates that a session can be set up. (where the SYN/ACK control flag is set in the TCP
header)
3. Source receives the SYN/ACK, the source responds with an ACK segment (where the ACK flag is set in the TCP
header). This indicates to the destination that its SYN was received by the source and that the session is now fully
established.
-Once the three-way handshake has occurred, data can be transferred across the session.
-TCP uses a three-step, three-way handshake process to set up a reliable connection: SYN, SYN/ACK and ACK
*A handshake or three-way handshake is the three-step process two devices go through to establish a connection
before they can communicate.

Or

TCP 3 –way Handshake:

Before the sending device and receiving device start the exchange of data, both devices need to be synchronized.
During the TCP initialization process, the sending device and the receiving device exchange a few control packets
for synchronization purposes. This exchange is known as a 3-way handshake.

The 3-way handshake begins with the initiator sending a TCP segment with the SYN control bit flag set.

TCP allows one side to establish a connection. The other side may either accept the connection or refuse it. If we
consider this from the application layer point of view, the side that establishes the connection is the client and the side
waiting for a connection is the server.

TCP identifies 2 types of open calls:

Active Open:
-In an Active Open call, a device (client process) using TCP takes the active role and initiates the connection by
sending a TCP SYN message to start the connection.

Passive Open:
-A Passive Open specifies that a device (server process) is waiting for an Active Open from a specific client. It
does not generate any TCP message segment. The server processes listening for clients are in passive open
mode.

Step 1:
Device A (Client) sends a TCP segment with SYN=1, ACK=0 flags set and ISN (Initial Sequence Number) = 2000. The
Active Open device (Device A) sends a segment with the SYN flag set to 1, the ACK flag set to 0, and an initial sequence
number of 2000 (for example), which marks the beginning of the sequence numbers for the data that Device A will transmit.
SYN is short for synchronize; the SYN flag announces an attempt to open a connection. The first byte transmitted to
Device B will have the sequence number ISN+1.

Step 2:
Device B (Server) receives Device A's TCP segment and returns a TCP segment with SYN=1, ACK=1, ISN=5000
(Device B's Initial Sequence Number), and Acknowledgement Number = 2001 (2000+1, the next sequence number
Device B is expecting from Device A).

Step 3:
Device A sends a TCP segment to Device B that acknowledges receipt of Device B's ISN, with flags set to
SYN=0, ACK=1, Sequence Number = 2001, and Acknowledgement Number = 5001 (5000+1, the next sequence number
Device A is expecting from Device B).

This handshake technique is referred to as the 3-way handshake, or SYN, SYN-ACK, ACK.

After the 3-way handshake, the connection is open and the participating computers start sending data using the
sequence and acknowledgment numbers.

-Here is a simple example of a three-way handshake with sequence and acknowledgment numbers:

1. Source sends a SYN: sequence number = 1


2. Destination responds with a SYN/ACK: sequence number = 10, acknowledgment = 2
3. Source responds with an ACK segment: sequence number = 2, acknowledgment = 11

- In this example, the destination’s acknowledgment (step 2) number is one greater than the source’s sequence
number, indicating to the source that the next segment expected is 2. In the third step, the source sends the second
segment, and, within the same segment in the Acknowledgment field, indicates the receipt of the destination’s
segment with an acknowledgment of 11—one greater than the sequence number in the destination’s SYN/ACK
segment.

“When acknowledging a received segment, the destination returns a segment with a number in the
acknowledgment field that is one number higher than the received sequence number.”

SYN Parameters (the initiator's first segment):
SYN: Seq.no. 17768656 (next seq.no. 17768657), Ack.no. 0, Window 8192, LEN = 0 bytes

The value of LEN is the length of the TCP data, which is calculated by subtracting the IP and TCP header sizes from
the IP datagram size.

Segments exchanged between 160.221.172.250 and 160.221.73.26:
1. SYN (from 160.221.172.250): Seq.no. 17768656 (next seq.no. 17768657), Ack.no. 0, Window 8192, LEN = 0 bytes
2. SYN-ACK (from 160.221.73.26): Seq.no. 82980009 (next seq.no. 82980010), Ack.no. 17768657, Window 8760, LEN = 0 bytes
3. ACK (from 160.221.172.250): Seq.no. 17768657 (next seq.no. 17768657), Ack.no. 82980010, Window 8760, LEN = 0 bytes
4. Data (from 160.221.172.250): Seq.no. 17768657 (next seq.no. 17768729), Ack.no. 82980010, Window 8760, LEN = 72 bytes of data
5. Data (from 160.221.73.26): Seq.no. 82980010 (next seq.no. 82980070), Ack.no. 17768729, Window 8688, LEN = 60 bytes of data
6. Data (from 160.221.172.250): Seq.no. 17768729 (next seq.no. 17768885), Ack.no. 82980070, Window 8700, LEN = 156 bytes of data
7. Data (from 160.221.73.26): Seq.no. 82980070 (next seq.no. 82980222), Ack.no. 17768885, Window 8532, LEN = 152 bytes of data
8. FIN (from 160.221.172.250): Seq.no. 17768885 (next seq.no. 17768886), Ack.no. 82980222, Window 8548, LEN = 0 bytes
9. FIN-ACK (from 160.221.73.26): Seq.no. 82980222 (next seq.no. 82980223), Ack.no. 17768886, Window 8532, LEN = 0 bytes
10. ACK (from 160.221.172.250): Seq.no. 17768886 (next seq.no. 17768886), Ack.no. 82980223, Window 8548, LEN = 0 bytes

1. The session begins with station 160.221.172.250 initiating a SYN containing the sequence number 17768656, which is the ISS. In addition, the first octet of data will carry the next sequence number, 17768657. There are only zeros in the Acknowledgement number field, as this field is not used in the SYN segment. The window size of the sender starts off as 8192 octets, assumed to be acceptable to the receiver.

2. The receiving station sends both its own ISS (82980009) in the sequence number field and acknowledges the sender's sequence number by incrementing it by 1 (17768657), expecting this to be the starting sequence number of the data bytes that will be sent next by the sender. This is called the SYN-ACK segment. The receiver's window size starts off as 8760.

3. Once the SYN-ACK has been received, the sender issues an ACK that acknowledges the receiver's ISS by incrementing it by 1 and placing it in the acknowledgement field (82980010). The sender also sends the same sequence number that it sent previously (17768657). This segment is empty of data, and we don't want the session to keep ramping up the sequence numbers unnecessarily. The window size of 8760 is acknowledged by the sender.

4. From now on, ACKs are used until just before the end of the session. The sender now starts sending data by stating the sequence number 17768657 again, since this is the sequence number of the first byte of the data that it is sending. Again the acknowledgement number 82980010 is sent, which is the expected sequence number of the first byte of data that the receiver will send. In the above scenario, the sender is initially sending 72 bytes of data in one segment. The network analyser may indicate the next expected sequence number in the trace; in this case it will be 17768657 + 72 = 17768729. The sender has now agreed the window size of 8760 and uses it itself.

5. The receiver acknowledges the receipt of the data by sending back the number 17768729 in the acknowledgement number field, thereby acknowledging that the next byte of data to be sent will begin with sequence number 17768729 (implicit in this is the understanding that sequence numbers up to and including 17768728 have been successfully received). Notice that not every byte needs to be acknowledged. The receiver also sends back the sequence number of the first byte of data in its own segment (82980010) that is to be sent. The receiver is sending 60 bytes of data. The receiver subtracts 72 bytes from its previous window size of 8760 and sends 8688 as its new window size.

6. The sender acknowledges the receipt of the data with the number 82980070 (82980010 + 60) in the acknowledgement number field, this being the sequence number of the next data byte expected to be received from the receiver. The sender sends 156 bytes of data starting at sequence number 17768729. The sender subtracts 60 bytes from its previous window size of 8760 and sends the new size of 8700.

7. The receiver acknowledges receipt of this data with the number 17768885 (17768729 + 156), since it was expecting it, and sends 152 bytes of data beginning with the sequence number 82980070. The receiver subtracts 156 bytes from the previous window size of 8688 and sends the new window size of 8532.

8. The sender acknowledges this with the next expected sequence number 82980070 + 152 = 82980222 and sends the expected sequence number 17768885 in a FIN, because at this point the application wants to close the session. The sender subtracts 152 bytes from its previous window size of 8700 and sends the new size of 8548.

9. The receiver sends a FIN-ACK acknowledging the FIN and increments the acknowledgement sequence number by 1 to 17768886, which is the number it will expect on the final ACK. In addition, the receiver sends the expected sequence number 82980223. The window size remains at 8532 as no data was received with the sender's FIN.

10. The final ACK is sent by the sender, confirming the sequence number 17768886 and acknowledging receipt of 1 byte with the acknowledgement number 82980223. The window size finishes at 8548 and the TCP connection is now closed.
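The sequence and acknowledgement arithmetic used in the trace above can be checked with a small Python sketch: an acknowledgement is the received sequence number plus the number of data bytes, and a SYN or FIN flag consumes one extra sequence number.

def next_ack(seq: int, data_len: int, syn_or_fin: bool = False) -> int:
    # Acknowledgement = received sequence number + data bytes (+1 for SYN/FIN).
    return seq + data_len + (1 if syn_or_fin else 0)

print(next_ack(17768656, 0, syn_or_fin=True))   # 17768657  (ACK of the SYN)
print(next_ack(82980009, 0, syn_or_fin=True))   # 82980010  (ACK of the SYN-ACK)
print(next_ack(17768657, 72))                   # 17768729  (ACK of 72 data bytes)
print(next_ack(82980010, 60))                   # 82980070  (ACK of 60 data bytes)
print(next_ack(17768885, 0, syn_or_fin=True))   # 17768886  (ACK of the FIN)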

TCP Sequence Number:

The Sequence and Acknowledgement fields are two of the many features that help us classify TCP as a connection
oriented protocol. As such, when data is sent through a TCP connection, they help the remote hosts keep track of the
connection and ensure that no packet has been lost on the way to its destination.

TCP utilizes positive acknowledgments, timeouts and retransmissions to ensure error-free, sequenced delivery of
user data. If the retransmission timer expires before an acknowledgment is received, data is retransmitted starting at
the byte after the last acknowledged byte in the stream.

A further point worth mentioning is the fact that sequence numbers are generated differently on each operating
system. Using special algorithms (and sometimes weak ones), an operating system will generate these
numbers, which are used to track the packets sent or received, and since both the Sequence and
Acknowledgement fields are 32-bit, there are 2^32 = 4,294,967,296 possible values.

TCP Session Termination:


-To close a connection, the FIN (Finish) control flag in the segment header must be set. To end each one-way TCP
session, a two-way handshake is used, consisting of a FIN segment and an ACK segment. Therefore, to terminate a
single conversation supported by TCP, four exchanges are needed to end both sessions.
* The following 4 steps occur during the session termination:
1. When the client has no more data to send, it sends a FIN segment with the FIN flag set.
2. The server sends an ACK segment to terminate the session from client to server.
3. The server sends a FIN to the client, to terminate the server to client session.
4. The client responds with an ACK of the FIN from the server.
-When the client end of the session has no more data to transfer, it sets the FIN flag in the header of a segment.
Next, the server end of the connection will send a normal segment containing data with the ACK flag set using the
acknowledgment number, confirming that all the bytes of data have been received. When all segments have been
acknowledged, the session is closed.
-The session in the other direction is closed using the same process. The receiver indicates that there is no more data
to send by setting the FIN flag in the header of a segment sent to the source. A return acknowledgement confirms
that all bytes of data have been received and that session is, in turn, closed.
-It is also possible to terminate the connection by a three-way handshake. When the client has no more data to send,
it sends a FIN to the server. If the server also has no more data to send, it can reply with both the FIN and ACK flags
set, combining two steps into one. The client replies with an ACK.
Or

* The following 4 steps occur during the session termination:


-When the application program on the client tells TCP it has no more data to send, TCP closes the connection in one direction
only.
1. The client sends a segment with the FIN flag set to 1.
2. The server ACKs the client's FIN but doesn't immediately close its side of the connection. In fact, the server might still
have data to send to the client.
3. When the server is done sending data to the client, it sends a segment with the FIN flag set.
4. The connection is finally closed when the client ACKs the server's FIN segment by sending an ACK segment.
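From the client side, this teardown can be sketched with Python sockets: shutdown(SHUT_WR) sends the client's FIN (a half-close), reading continues until the server has sent its remaining data and its own FIN, and close() releases the socket. "example.com" on port 80 is only an illustrative endpoint.

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))
s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
s.shutdown(socket.SHUT_WR)          # client -> server FIN (no more data to send)
while chunk := s.recv(4096):        # keep reading until the server closes (its FIN)
    print(chunk.decode(errors="replace"), end="")
s.close()                           # release the socket; the final ACK is handled by the OS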

Q: How do I know which application should receive this data?/How do I know if my data arrived
successfully?
-Data gets routed to the correct computer with the help of IP and other lower-layer protocols such as Ethernet.
-One of the jobs of a transport layer protocol is getting the data from an application on one computer to the
correct application on another.
*Multiplexing:
-Multiplexing is where multiple sources of data, such as phone, fax, and computer data, combine into a single stream over a
single line.
OR
- Multiplexing is the ability of a single host to have multiple sessions open to one or many other hosts.
-TCP and UDP provide a way of combining, or multiplexing, data from many application layer protocols into a single stream
sharing the same IP address.
-TCP and UDP use software ports to route data to the appropriate application.

Q: How the transport layer multiplex the data?


>>>Three different types of data arrive at the transport layer from different application layer protocols [HTTP, DNS,
SMTP application data]. The transport layer multiplexes, or combines, all the different types of data into a single stream
down to the IP interface.
-At the transport layer, a protocol like UDP or TCP adds a header to the application data that includes the source and
destination port numbers.
-The transport layer then sends this segment or datagram down to the network layer. At the network layer, the IP interface
adds the source and destination IP addresses.
-The network layer then sends the packet to the data link layer, where Layer 2 framing information is added, and the data is
transmitted.
-Notice that the transport layer allows multiple application layer protocols to share a single IP interface or IP address.
-At the physical layer, each of our frames is converted into electrical signals and transmitted.
-Each of these frames is going to the same computer, but ultimately each is destined for a different application layer protocol.
>>>On the receiving device, the process is reversed. The data link layer strips off the Layer 2 frame and sends the IP packet
to the IP interface, where the Layer 3 header is removed.
-The network layer sends the datagram to the appropriate transport layer protocol, which examines the destination port
number, removes the header, and sends the data to the correct application layer protocol.
>>>At this point, the single data stream is demultiplexed into multiple streams going to the appropriate application layer
protocols. A sketch of this port-based demultiplexing follows.
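The sketch below illustrates the receiving side of this in Python: two UDP sockets bound to different ports on the same IP address each receive only the datagrams addressed to their own port. Ports 5001 and 5002 are arbitrary choices for the example.

import socket

app_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_a.bind(("127.0.0.1", 5001))       # "application A" listens on port 5001
app_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
app_b.bind(("127.0.0.1", 5002))       # "application B" listens on port 5002

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for application A", ("127.0.0.1", 5001))
sender.sendto(b"for application B", ("127.0.0.1", 5002))

print(app_a.recvfrom(1024)[0])   # b'for application A'
print(app_b.recvfrom(1024)[0])   # b'for application B'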

Port Number / Multiplexing / Demultiplexing:


-Transport layer protocols (TCP and UDP) are responsible for supporting multiple network applications at the same
instant, and these applications can send and receive network data simultaneously. Transport layer protocols are
capable of doing this by making use of application-level addressing known as port numbers. The data from different
applications operating on a network device is multiplexed at the sending device using port numbers and demultiplexed at
the receiving device, again using port numbers.

The source and destination ports identify the port numbers on which the applications are listening at the sending and
receiving devices.

Port Number:
-The Transport layer uses an addressing scheme called a port number. Port numbers identify applications and
Application layer services that are the source and destination of data.
-Both TCP and UDP can send data from multiple upper-layer applications at the same time using port numbers.
-Port numbers keep track of different conversations crossing the network at any time.
*Example: FTP: 20-21, Telnet: 23, SMTP: 25, DNS: 53, TFTP: 69, SSH: 22, TACACS: 49, DHCP: 67-68, HTTP:
80, SNMP: 161-162, BGP: 179, RIP: 520

-A port is an internal address reserved for a specific application on a computer.
-A port can be either a TCP or a UDP port, depending on whether it links to the TCP or UDP protocol at the transport
layer.
-A port can be any number between 0 and 65,535.
-Frequently used TCP/IP applications are assigned port numbers under 1024.
-These are also called well-known ports (0-1023).
#UDP and TCP Port Number:
Software Port Number
Port 0 – Reserved
Port 1-1023 – Well-Known Port
Port 1024- 49151 – Registered Port
Port 49152-65535 – Dynamic or Private Ports.
*Well-Known Ports: (1-1023)
-They are used only for the most common TCP/IP applications.
*Registered Ports (1024-49151)
-IANA manages and registers them. Less common TCP/IP applications use these port numbers.
*Dynamic or Private Ports (49152-65535)
-IANA does not manage them.
-Randomly chosen port numbers in this range are referred to as "ephemeral ports".
-These ports are not permanently assigned to any publicly defined application and are commonly used as the
source port number for the client side of a connection. This allocation is temporary and is valid for the duration of
the connection opened by the application using the protocol. A short sketch of an ephemeral source port follows.
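The sketch below shows an ephemeral port in practice: the client never chooses its own source port, the operating system assigns one when the connection is made, and getsockname() reveals which one was picked. "example.com" on port 80 is only an illustrative endpoint.

import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.connect(("example.com", 80))
    ip, ephemeral_port = s.getsockname()    # local address and port chosen by the OS
    print("local source port chosen by the OS:", ephemeral_port)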
*Software ports:
-Software ports are specific to the transport layer and are used to route data to the appropriate application layer
protocol and ultimately the correct application program.
*Hardware Ports:
-Also known as NICs (physical interfaces).
-They exist only at Layer 1.
*IP interface:
-It exists at Layer 3.
*FTP (TCP Port: 20 and 21)
-File Transfer Protocol
-Transfer files with a remote host (typically requires authentication of user credentials).
Or
- File Transfer Protocol (FTP) is a network protocol used to transfer data from one computer to another through a
network, such as over the Internet.
*SSH (TCP Port: 22)
-Secure Shell
-Securely connect to a remote host (typically via a terminal emulator)
*SFTP (TCP Port 22)
-Secure FTP
-Provides FTP file-transfer service over a SSH connection.
*SCP (TCP Port: 22)
-Secure Copy
-Provides a secure file transfer service over an SSH connection and preserves a file's original date and time information,
which is not available with FTP.
*Telnet (TCP Port: 23)
-Used to connect to a remote host (typically via a terminal emulator)
*SMTP (TCP Port: 25)
-Simple Mail Transfer Protocol
-Used for sending Email.
*DNS (TCP Port: 53, UDP Port: 53)
-Domain Name System.
-Resolves domain names to corresponding IP addresses.
*TFTP (UDP Port: 69)
-Trivial File Transfer Protocol
-Transfer files with remote host (does not require authentication of user credentials)
Or
-The Trivial File Transfer Protocol (TFTP) is a network protocol used to transfer data from one computer to another
through a network, such as over the Internet.
*DHCP (UDP Port: 67)
-Dynamic Host Configuration Protocol
-Dynamically assigns IP address information (for example, IP address, subnet mask, DNS servers, and default
gateway IP address) to network devices.
*HTTP (TCP Port: 80)
-Hypertext Transfer Protocol.
-Retrieves content from a Web Server.
Or
-It is a communications protocol for the transfer of information on the Internet and the World Wide Web.
*Hypertext Markup Language (HTML)
-Hypertext Markup Language (HTML) is the language the Web browser and Web server use to create and display
Web pages. 
* URL
A Uniform Resource Locator (URL) is the address where a file can be accessed on the Internet, for example
http://www.juniper.net/training/index.html. In this example, the protocol is HTTP, the domain is juniper.net, and the
path to the file is training/index.html.
*POP3 (TCP Port: 110)
-Post Office Protocol version 3.
- Retrieves Email from an Email Server.
*NNTP (TCP Port: 119)
-Network News Transport Protocol
-Supports the posting and reading of articles on Usenet news servers.
*NTP (UDP Port: 123)
-Network Time Protocol
-Used by a network device to synchronize its clock with a time server (NTP Server).
*SNTP (UDP Port: 123)
-Simple Network Time Protocol
-Supports time synchronization among network devices, similar to Network Time Protocol (NTP), although SNTP
uses a less complex algorithm in its calculation and is slightly less accurate than NTP.
IMAP4 (TCP Port: 143)
-Internet Message Access Protocol version 4
-- Retrieves Email from an Email Server.
*LDAP (TCP Port: 389)
-Lightweight Directory Access Protocol
-Provides directory services (for example, a user directory – including username, password, e-mail, and phone
number information) to network clients.
*HTTPS (TCP Port: 443)
-Hypertext Transfer Protocol Secure
- Used to securely retrieve content from a Web Server.
*rsh (TCP Port: 514)
-Remote Shell
-Allows commands to be executed on a computer by a remote user.
*RTSP (TCP Port: 554, UDP Port: 554)
-Real Time Streaming Protocol
-Communicates with a media server (for example, a video server) and controls the playback of the server’s media
files.
*RDP (TCP Port: 3389)
-Remote Desktop Protocol
-A Microsoft Protocol that allows a user to view and control the desktop of a remote computer.
*SNMP (UDP Ports: 161-162)
-Simple Network Management Protocol
-SNMP is a network management protocol used in TCP/IP networks to monitor, configure, and troubleshoot network
resources from a centrally located SNMP management system.
*RIP (UDP Port 520)
*BGP (TCP Port 179)
Socket:
-A socket is an IP address combined with the respective TCP or UDP port. [Combination of IP Address + TCP/UDP Port]
-An application generates a socket by specifying the computer's IP address, the type of data transmission (TCP or UDP),
and the port controlled by the application.
-The IP address identifies the destination computer and the port determines which application protocol is used.
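A minimal sketch (Python) of how an application forms a socket: it names the transport protocol (TCP here), the
destination IP address, and the destination port, and the operating system supplies the ephemeral source port. The host
name example.com and port 80 are illustrative values only.

# Minimal sketch (Python): a socket = IP address + transport protocol + port.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # SOCK_STREAM -> TCP
sock.connect(("example.com", 80))                          # destination socket: (IP, 80)

local_ip, local_port = sock.getsockname()     # source socket chosen by the OS
remote_ip, remote_port = sock.getpeername()   # resolved destination socket
print("local socket :", local_ip, local_port)     # local_port is an ephemeral port
print("remote socket:", remote_ip, remote_port)
sock.close()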

The Client/Server model:


-In the client/server model, the device requesting the information is called a client and the device responding
to the request is called a server.

-Data transfer from a client to a server is referred to as an upload and data from a server to a client as a download.

*Server:
-In a general networking context, any device that responds to requests from client applications is functioning as
a server. A server is usually a computer that contains information to be shared with many client systems. For
example, web pages, documents, databases, pictures, video, and audio files can all be stored on a server and
delivered to requesting clients. In other cases, such as a network printer, the print server delivers the client
print requests to the specified printer.
-Different types of server applications may have different requirements for client access. Some servers may require
authentication of user account information to verify if the user has permission to access the requested data or to use
a particular operation.
-When using an FTP client, for example, if you request to upload data to the FTP server, you may have permission
to write to your individual folder but not to read other files on the site.
-In a client/server network, the server runs a service, or process, sometimes called a server daemon. Like
most services, daemons typically run in the background and are not under an end user's direct control.
Daemons are described as "listening" for a request from a client, because they are programmed to respond
whenever the server receives a request for the service provided by the daemon. When a daemon "hears" a
request from a client, it exchanges appropriate messages with the client, as required by its protocol, and
proceeds to send the requested data to the client in the proper format.

*Peer-to-Peer Applications
-A peer-to-peer application (P2P), unlike a peer-to-peer network, allows a device to act as both a client and a
server within the same communication. In this model, every client is a server and every server a client. Both
can initiate a communication and are considered equal in the communication process. However, peer-to-peer
applications require that each end device provide a user interface and run a background service. When you launch a
specific peer-to-peer application it invokes the required user interface and background services. After that the
devices can communicate directly.
-Peer-to-peer applications can be used on peer-to-peer networks, client/server networks, and across the Internet.
-Example: Instant Messaging
Both clients: initiate a message / receive a message
Both clients simultaneously: send / receive

End-to-End Communication:
-End to end is source to destination communication.
or
-An end-to-end connection refers to a connection between two systems across a switched network. For example, the
Internet is made up of a mesh of routers. Packets follow a hop-by-hop path from one router to the next to reach their
destinations. Each hop consists of a physical point-to-point link between routers. Therefore, a routed path consists
of multiple point-to-point links. In the ATM and Frame Relay environment, the end-to-end path is called a virtual
circuit that crosses a predefined set of point-to-point links.

Hop-to-Hop Communication:
-Communication between transit devices is hop-by-hop.

or

-With hop-by-hop transport, chunks of data are forwarded from node to node in a store-and-forward manner.

As hop-by-hop transport involves not only the source and destination node, but rather some or all of the intermediate
nodes as well, it allows data to be forwarded even if the path between source and destination is not permanently
connected during communication.

Flow Control:
-Allows a receiver to tell the sender to slow down its transmission rate.
-The data transfer rate is negotiated to prevent congestion.
Error Control:
-Allows a receiver to detect an error in a received frame and request the sender to retransmit the frame.
Mode of Transmission:
*Simplex
-One way communication
*Half Duplex
-Two way communication, but not simultaneously
OR
- Half-duplex data transmission allows for communication in two directions, but only in one direction at a time. That
is, a device cannot receive and transmit data simultaneously. This functionality is similar to using a walkie-talkie:
if you are speaking, you cannot hear the person on the other end.
*Full Duplex
-Simultaneous two-way communication
OR
-- Full-duplex data transmission allows for communication in two directions at the same time. That is, a device can
receive and transmit data simultaneously. This functionality is similar to using a telephone where you can talk and
listen at the same time.
Segmentation:
-Dividing the data stream into smaller pieces is called segmentation.
-Segmenting messages has two primary benefits:
1.First, by sending smaller individual pieces from source to destination, many different conversations can be
interleaved on the network.
2. Second, segmentation can increase the reliability of network communications. The separate pieces of each message need
not travel the same path across the network from source to destination. If a particular path becomes congested with
data or fails, individual pieces of the message can still be directed to the destination using an alternate path. If part of
the message fails to make it to the destination, only the missing parts need to be retransmitted.
Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks; data is fragmented when it is transmitted
over a data link technology with a smaller MTU.
Maximum Transmission Unit (MTU):
-The Maximum Transmission Unit (MTU) is the fixed upper limit on the size of packet that can be sent in a single
frame.
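A small sketch (Python) of the arithmetic behind fragmentation: given an outgoing MTU, the IP payload is split into
pieces whose data lengths are multiples of 8 bytes (the fragment-offset unit), with the last fragment carrying the
remainder. The packet and MTU sizes are example values only, and a 20-byte IP header with no options is assumed.

# Minimal sketch (Python): fragmenting a 4000-byte IP packet for a 1500-byte MTU.
IP_HEADER = 20
packet_payload = 4000 - IP_HEADER            # 3980 bytes of data to carry
mtu = 1500

max_data = (mtu - IP_HEADER) // 8 * 8        # data per fragment must be a multiple of 8 -> 1480

offset = 0
fragments = []
while offset < packet_payload:
    data_len = min(max_data, packet_payload - offset)
    more_fragments = (offset + data_len) < packet_payload
    fragments.append((offset // 8, data_len, more_fragments))   # (fragment offset field, bytes, MF flag)
    offset += data_len

for frag in fragments:
    print("offset=%d (x8 bytes), data=%d bytes, MF=%s" % frag)
# Expected: three fragments carrying 1480, 1480 and 1020 data bytes.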
Host:
-It refers to any device that is connected to a network.
Server:
-Servers are hosts that have software installed that enables them to provide information and services, like e-mail or
web pages, to other hosts on the network.
Client:
-Clients are hosts that have software installed that enables them to request and display the information obtained
from the server.
Network Media or Medium:
-Communication across a network is carried on a medium. The medium provides the channel over which the
message travels from source to destination.
Encoding/Decoding:
-The process of transforming data from one form to another.
-On wire, the data is encoded into electrical signals. On fiber optic, the data is encoded into pulses of light. In wireless
communication, the data is encoded into electromagnetic waves.
Or
-It is the process of converting the binary data in to signals based on the type of the media.
-Ex:
*Copper media- Electrical Signal
*Wireless Media- Radio Frequency Waves
*Fiber Media – Light Pulses
Internet:
-The Internet is created by the interconnection of networks belonging to Internet Service Providers (ISPs). These ISP
networks connect to each other to provide access for millions of users all over the world.
Intranet:
-A system internal to an organization, such as a website, that is used only by internal employees or students. It can
be accessed internally or remotely.
OR
-It is often used to refer to a private connection of LANs and WANs that belongs to an organization, and is designed
to be accessible only by the organization's members, employees, or others with authorization.
OR
- An intranet is a private, internal network that uses the same IP-based protocols used in the Internet. Intranets often
use IP addresses from the private IP address space.
Channel:
-The medium used to transport information from a sender to a receiver.
Checksum/CRC
-A checksum, also known as a Cyclic Redundancy Check or CRC, is a simple mathematical calculation performed
on each frame to ensure it hasn't been corrupted in transit.
Frame Check Sequence (FCS)
-Frame Check Sequence (FCS) is a 2-byte or 4-byte checksum computed over the frame to provide basic protection
against errors in transmission.
Or
- The Frame Check Sequence (FCS) field is used to determine if errors occurred in the transmission and reception of
the frame.
Or
- The Frame Check Sequence (FCS) field is used to determine if errors occurred in the transmission and
reception of the frame. Error detection is added at the Data Link layer because this is where data is transferred
across the media. The media is a potentially unsafe environment for data. The signals on the media could be subject
to interference, distortion, or loss that would substantially change the bit values that those signals represent. The
error detection mechanism provided by the use of the FCS field discovers most errors caused on the media.
To ensure that the content of the received frame at the destination matches that of the frame that left the
source node, a transmitting node creates a logical summary of the contents of the frame. This is known as the
cyclic redundancy check (CRC) value. This value is placed in the Frame Check Sequence (FCS) field of the
frame to represent the contents of the frame.

When the frame arrives at the destination node, the receiving node calculates its own logical summary, or
CRC, of the frame. The receiving node compares the two CRC values. If the two values are the same, the
frame is considered to have arrived as transmitted. If the CRC value in the FCS differs from the CRC
calculated at the receiving node, the frame is discarded.

CSMA/CD:
-CSMA/CD stands for Carrier Sense Multiple Access with Collision Detection. CSMA/CD is the MAC protocol
used by Ethernet to control access to the physical cable segment. If a device has data to transmit, it listens on the
wire to see if any other device is transmitting. If the wire is idle, the device sends the data. All other devices on the
segment receive the transmission. CSMA/CD allows a network device to either transmit data or receive data, but not
both simultaneously. In some cases, two devices may begin transmitting at the same time, and a data collision may
occur. When a data collision occurs, CSMA/CD provides a way for devices to detect the collision and provides a
protocol for re-transmitting the data until the frame is successfully transmitted.
- If two hosts transmit a frame simultaneously, a collision will occur. This renders the collided frames unreadable.
Once a collision is detected, both hosts will send a 32-bit jam sequence to ensure all transmitting hosts are aware of
the collision. The collided frames are also discarded. Both devices will then wait a random amount of time before
resending their respective frames, to reduce the likelihood of another collision.
Or
-On a half-duplex connection, Ethernet utilizes Carrier Sense Multiple Access with Collision Detect (CSMA/CD)
to control media access. Carrier sense specifies that a host will monitor the physical link to determine whether a
carrier (or signal) is currently being transmitted. The host will only transmit a frame if the link is idle and the
Interframe Gap has expired. If two hosts transmit a frame simultaneously, a collision will occur. This renders the
collided frames unreadable. Once a collision is detected, both hosts will send a 32-bit jam sequence to ensure all
transmitting hosts are aware of the collision. The collided frames are also discarded.
-Both devices will then wait a random amount of time before resending their respective frames, to reduce the
likelihood of another collision. This is controlled by a backoff timer process.
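The random wait is governed by truncated binary exponential backoff. The sketch below (Python) shows one common
formulation, where after the n-th collision a host waits a random number of slot times between 0 and 2^n - 1, with the
exponent capped at 10 and transmission abandoned after 16 attempts; these constants are standard Ethernet values
included here for illustration rather than quoted from these notes.

# Minimal sketch (Python): truncated binary exponential backoff used by CSMA/CD.
import random

SLOT_TIME_BITS = 512      # slot time for 10/100 Mbps Ethernet, in bit times
MAX_ATTEMPTS = 16

def backoff_slots(collision_count):
    # After the n-th collision, wait a random number of slot times in [0, 2**min(n,10) - 1].
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions, frame dropped")
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

for n in range(1, 6):
    slots = backoff_slots(n)
    print("collision", n, "-> wait", slots, "slot times =", slots * SLOT_TIME_BITS, "bit times")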
*Slot time:
-Hosts must detect a collision before a frame is finished transmitting, otherwise CSMA/CD cannot function reliably.
This is accomplished using a consistent slot time, the time required to send a specific amount of data from one end
of the network and then back, measured in bits.

*Late Collision:
-A host must continue to transmit a frame for a minimum of the slot time. In a properly configured environment, a
collision should always occur within this slot time, as enough time has elapsed for the frame to have reached the far
end of the network and back, and thus all devices should be aware of the transmission. The slot time effectively
limits the physical length of the network – if a network segment is too long, a host may not detect a collision within
the slot time period. A collision that occurs after the slot time is referred to as a late collision.
-For 10 and 100Mbps Ethernet, the slot time was defined as 512 bits, or 64 bytes. Note that this is the equivalent of
the minimum Ethernet frame size of 64 bytes. The slot time actually defines this minimum. For Gigabit Ethernet,
the slot time was defined as 4096 bits.

CSMA/CA:
-In CSMA/Collision Avoidance (CSMA/CA), the device examines the media for the presence of a data signal. If the
media is free, the device sends a notification across the media of its intent to use it. The device then sends the data.
This method is used by 802.11 wireless networking technologies.
Router:
-A router is a Layer 3 device that allows communication between separate broadcast domains or networks. In order
to forward data from one network to another, routers must know how to reach other networks. A router stores
network location information in a routing table. Each entry in the routing table includes the destination network
number and indicates how the destination network may be reached by specifying which port or interface on the
router should be used and what “Next Hop” address should be used. When a router receives a packet, the router uses
the data's Layer 3 destination address and the routing table to make intelligent decisions on where to send the packet
next. Routers can read, but cannot modify, Layer 3 addresses. Routers change Layer 2 addresses in data whenever
they route data.
Routing Table:
-The routing table is where a router stores network location information including all possible destination network
numbers and how to reach them. Each entry in the routing table includes the destination network number, the next
hop along the way to the destination network, and which port or interface on the router should be used to reach the
next hop.
Ethernet:
-Ethernet is the most common set of rules controlling network communications for local area networks. It is a set of
standards that define rules such as frame format as well as how computers communicate with each other over a
single wire shared by all devices on the network. These rules give any new device attached to the wire the ability to
communicate with any other attached device.
Or
* Ethernet is a LAN technology that provides data-link and physical specifications for controlling access to a
shared network medium.
-This allowed two or more hosts to use the same physical network medium.
OR
*Ethernet is a LAN technology that functions at the data link layer. Ethernet uses the Carrier Sense Multiple
Access/Collision Detection (CSMA/CD) mechanism to send information in a shared environment.
* CSMA/CD defines how sending stations can recognize collisions and retransmit the frame.
*Ethernet standards define both the Layer 2 protocols and the Layer 1 technologies.
*Ethernet operates in the lower two layers of the OSI model: the Data Link layer and the Physical layer.
-Ethernet at Layer 1 involves signals, bit streams that travel on the media, physical components that put signals on
media, and various topologies. Ethernet Layer 1 performs a key role in the communication that takes place between
devices.
-Ethernet at Layer 2 prepares the data for transmission over the media.
Media Access Control:
-Regulating the placement of data frames onto the media is known as media access control.
Forwarding Table/MAC Address Table:
-A forwarding table or MAC address table is where a switch stores address and location information for all devices
connected directly to its ports.

Frame:
-A frame is one unit of data encapsulated at Layer 2, or the Data Link Layer. Each frame is divided into three parts:
the header, the data, and the trailer. The frame header contains the data's destination and source Layer 2 addresses. It
also indicates which Layer 3 protocol should be used to process the data on the receiving computer. (In the examples
in this course, the IP protocol is used.) The frame trailer is a checksum, which is used to verify data integrity.
Packet:
-A packet is one unit of data encapsulated at Layer 3 (also known as the Network Layer in the OSI and Five-Layer
models, or the Internet Layer in the TCP/IP Model). Each packet contains a header followed by the data. The
packet's header specifies the data's source and destination IP addresses. Each packet header also specifies the IP
protocol number, which indicates whether the data should be processed with the UDP or TCP protocol on the
receiving computer.
Segment:
-A segment is one unit of data encapsulated at Layer 4, or the Transport Layer. Each segment is divided into two
parts, a header followed by data. The segment header contains the data's destination port number, which indicates
which application layer protocol should be used to process the data on the receiving computer. It also specifies a
source port number, which uniquely identifies the connection on the sending side, allowing the receiving computer
to carry on multiple sessions with the sending computer without intermixing the data.
Switch:
-A switch is a Layer 2 network device that enables full-duplex data transmission. Because switches dedicate a single
port to each end-user device, collision domains have only two devices—the end-user device and the switch. When
connected to a switch, an end-user device can send and receive data simultaneously. A switch builds a MAC address
table that it uses to manage traffic flow. Switches operate based on reading Layer 2 frame information only. They
cannot change Layer 2 addresses, and they do not have any access to Layer 3 data. In addition to basic Ethernet
connectivity, switches make possible virtual LANs.
Bridge:
-A bridge is a Layer 2 network device that connects two or more physical cable segments to create one larger
network. Each side of the bridge becomes a separate collision domain or network segment. So, a bridge can be used
to break up a large network into separate collision domains. A bridge builds a MAC address table that it uses to
manage traffic flow. When a bridge receives data from an unknown MAC address, it adds that address to its MAC
address table and notes the port associated with that address. Then, if a bridge later receives data for that address, it
will know on which port it should forward the data. If a bridge receives data for an unknown destination address, it
will forward the data on all ports, which is known as flooding. Bridges operate based on reading Layer 2 frame
information only. They cannot change Layer 2 addresses, and they do not have any access to Layer 3 data.
Hub:
-A hub is a Layer 1 device that takes a signal that it receives from one connected device and passes it along or
repeats it to all other connected devices. A hub allows each device to use its own twisted-pair cable to connect to a
port on the hub. If a cable fails, it will impact only one device, and if one device is causing trouble on the network,
that individual device can easily be unplugged. A hub is not an intelligent network device. It does not look at the
MAC addresses or data in the Ethernet frame and does not perform any type of filtering or routing of the data. It is
simply a junction that joins all the different devices together. Even though each device has its own cable connecting
it to the hub, access to the network still operates by CSMA/CD, and collisions can occur on the shared bus inside the
hub.
Repeater:
-A repeater is a physical layer device used to connect two or more separate physical cable segments together,
making it act like one long cable. A repeater is a simple hardware device that regenerates electrical signals, sending
all frames from one physical cable segment to another.

Urgent Flag & Urgent Pointer:


- The URG flag is used to inform a receiving station that certain data within a segment is urgent and should be
prioritized. If the URG flag is set, the receiving station evaluates the urgent pointer, a 16-bit field in the TCP header.
This pointer indicates how much of the data in the segment, counting from the first byte, is urgent.

PSH/PUSH Flag:

-When you send data, your TCP buffers it. So if you send a character, it won't send it immediately but will wait to see if
you've got more. But maybe you want it to go straight on the wire: this is where the PUSH function comes in. If you
PUSH data, your TCP will immediately create a segment (or a few segments) and push them.
But the story doesn't stop here. When the peer TCP receives the data, it will naturally buffer them; it won't disturb
the application for each and every byte. Here's where the PSH flag kicks in. If a receiving TCP sees the PSH flag,
it will immediately push the data to the application.

TCP State:

*To keep track of all the different events happening during connection establishment, connection termination, and
data transfer, TCP is specified as a finite state machine.

TCP states and their descriptions:

CLOSED: No connection exists
LISTEN: Passive open received; waiting for SYN
SYN-SENT: SYN sent; waiting for ACK
SYN-RCVD: SYN+ACK sent; waiting for ACK
ESTABLISHED: Connection established; data transfer in progress
FIN-WAIT-1: First FIN sent; waiting for ACK
FIN-WAIT-2: ACK to first FIN received; waiting for second FIN
CLOSE-WAIT: First FIN received, ACK sent; waiting for application to close
TIME-WAIT: Second FIN received, ACK sent; waiting for 2MSL time-out
LAST-ACK: Second FIN sent; waiting for ACK
CLOSING: Both sides decided to close simultaneously

or

TCP State machine:

CLOSE-WAIT: Waits for a connection termination request from the local user.
CLOSED: Represents no connection state at all.
CLOSING: Waits for a connection termination request acknowledgment from the remote host.
ESTABLISHED: Represents an open connection; data received can be delivered to the user. The normal state for the
data transfer phase of the connection.
FIN-WAIT-1: Waits for a connection termination request from the remote host or an acknowledgment of the
connection termination request previously sent.
FIN-WAIT-2: Waits for a connection termination request from the remote host.
LAST-ACK: Waits for an acknowledgment of the connection termination request previously sent to the remote
host (which includes an acknowledgment of its connection termination request).
LISTEN: Waits for a connection request from any remote TCP and port.
SYN-RECEIVED: Waits for a confirming connection request acknowledgment after having both received and sent a
connection request.
SYN-SENT: Waits for a matching connection request after having sent a connection request.
TIME-WAIT: Waits for enough time to pass to be sure the remote host received the acknowledgment of its
connection termination request.

Or

LISTEN 
-(server) represents waiting for a connection request from any remote TCP and port.
SYN-SENT 
-(client) represents waiting for a matching connection request after having sent a connection request.
SYN-RECEIVED 
-(server) represents waiting for a confirming connection request acknowledgment after having both received and
sent a connection request.
ESTABLISHED 
(both server and client) represents an open connection, data received can be delivered to the user. The normal state
for the data transfer phase of the connection.
FIN-WAIT-1 
-(both server and client) represents waiting for a connection termination request from the remote TCP, or an
acknowledgment of the connection termination request previously sent.
FIN-WAIT-2 
-(both server and client) represents waiting for a connection termination request from the remote TCP.
CLOSE-WAIT 
-(both server and client) represents waiting for a connection termination request from the local user.
CLOSING 
-(both server and client) represents waiting for a connection termination request acknowledgment from the remote
TCP.
LAST-ACK 
-(both server and client) represents waiting for an acknowledgment of the connection termination request previously
sent to the remote TCP (which includes an acknowledgment of its connection termination request).
TIME-WAIT 
(either server or client) represents waiting for enough time to pass to be sure the remote TCP received the
acknowledgment of its connection termination request. [According to RFC 793 a connection can stay in TIME-
WAIT for a maximum of four minutes known as a MSL (maximum segment lifetime).]
CLOSED 
-(both server and client) represents no connection state at all.

Or
TCP Header:

*The TCP header varies in length. Its minimum length, when options are not used, is 20 bytes; with options the
header can grow to a maximum of 60 bytes (the options field can add up to 40 bytes).
1. Source Port [16 bits]:
- Identifies which application is sending the information.
2. Destination Port [16 bits]:
-Identifies which application is to receive the information.
3. Sequence Number [32 bits]:
-Maintains reliability and sequencing.
4. Acknowledgement Number [32 bits]:
-Used to acknowledge received information and identifies the sequence number the source next expects to receive
from the destination.
5. Header Length/Data Offset [4 bits]:
-Indicates the length of the TCP header in 32-bit words, i.e., where the data begins.
6. Reserved Field [3 bits]:
-These bits are always set to zero.
7. Code Bits [8 bits]/ Flags:
- Eight 1-bit flags that are used for data flow and connection control.
-To understand the three-way handshake process, it is important to look at the various values that the two hosts
exchange. Within the TCP segment header, there are 1-bit fields (six in the original specification, eight including the
ECN flags) that contain control information used to manage the TCP processes.
-These fields are referred to as flags, because each field is only 1 bit and, therefore, has only
two values: 1 or 0. When a bit value is set to 1, it indicates what control information is contained in the segment.
1. Congestion Window Reduced (CWR)
-The sender reduced its sending rate.
2. ECN-Echo (ECE)
-The sender received an earlier congestion notification.
3. Urgent (URG)
- The URG flag is used to inform a receiving station that certain data within a segment is urgent and should be
prioritized. If the URG flag is set, the receiving station evaluates the urgent pointer. This pointer indicates how much
of the data in the segment, counting from the first byte, is urgent.
4. Acknowledgment (ACK)
-When set to 1, indicates that this segment is carrying an Acknowledgment, and the value of the Acknowledgement
Number field is valid and carrying the next sequence expected from the destination of this segment.
5. Push (PSH)
-The receiver should pass the data to the application as soon as possible.
6. Reset (RST)
-The sender has encountered a problem and wants to reset the connection.
7. Synchronize (SYN)
-Synchronize sequence number to initiate a connection.
8. Final (FIN)
-The sender of the segment is requesting that the connection be closed.

8. Window Size [16 bits]:

-Used for flow control; indicates the amount of data (in bytes) the sender may transmit before waiting for an
acknowledgement from the destination.
9. Checksum [16 bits]:
- Ensures that the data contained in the TCP segment reaches the correct destination and is error-free.
10. Urgent Pointer [16 bits]:
- Used only when the URG flag is set; indicates where exactly the urgent data ends.
11. Options / Padding (variable, 0-320 bits, divisible by 32, i.e., up to 40 bytes):
-As the name implies, specifies options required by the sender's TCP process. The most commonly used option is
Maximum Segment Size, which informs the receiver of the largest segment the sender is willing to accept. The
remainder of the field is padded with zeros to ensure that the header length is a multiple of 32 bits.
Padding:
-The TCP header padding is used to ensure that the TCP header ends and data begins on a 32-bit boundary. The
padding is composed of zeros.

>The length of this field is determined by the data offset field.
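To tie the fixed fields above together before looking at individual options, here is a minimal sketch (Python) that
unpacks the 20-byte fixed TCP header from raw bytes; the sample header bytes are fabricated purely for illustration.

# Minimal sketch (Python): unpacking the 20-byte fixed TCP header.
import struct

raw = struct.pack("!HHIIBBHHH",
                  49152, 80,          # source port, destination port
                  1000, 0,            # sequence number, acknowledgement number
                  5 << 4,             # data offset = 5 (x 32-bit words = 20 bytes), reserved = 0
                  0x02,               # flags: SYN
                  65535, 0, 0)        # window, checksum, urgent pointer

(src, dst, seq, ack, offset_res, flags, window, checksum, urg) = struct.unpack("!HHIIBBHHH", raw)
header_len = (offset_res >> 4) * 4    # data offset field, converted to bytes
print("src=%d dst=%d seq=%d ack=%d header=%d bytes SYN=%d window=%d"
      % (src, dst, seq, ack, header_len, bool(flags & 0x02), window))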

*Options have up to three fields:


1. Option-Kind (1 byte)
2. Option-Length (1 byte)
3. Option-Data (variable).

>The Option-Kind field indicates the type of option, and is the only field that is not optional.
>Depending on what kind of option we are dealing with, the next two fields may be set:
-The Option-Length field indicates the total length of the option
-The Option-Data field contains the value of the option, if applicable.

>For example:
-Option-Kind byte of 0x01 indicates that this is a No-Op option used only for padding, and does not have an Option-
Length or Option-Data byte following it.
-An Option-Kind byte of 0 is the End Of Options option, and is also only one byte.
-An Option-Kind byte of 0x02 indicates that this is the Maximum Segment Size option, and will be followed by a
byte specifying the length of the MSS field (should be 0x04). Note that this length is the total length of the given
options field, including Option-Kind and Option-Length bytes. So while the MSS value is typically expressed in two
bytes, the length of the field will be 4 bytes (+2 bytes of kind and length). In short, an MSS option field with a value
of 0x05B4 will show up as (0x02 0x04 0x05B4) in the TCP options section.
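A minimal sketch (Python) of walking a TCP options field as described above: kind 0 ends the list, kind 1 is a
one-byte NOP, and everything else is kind/length/value. The option bytes reproduce the MSS example (0x02 0x04 0x05B4),
padded with a NOP and End-of-Options, purely for illustration.

# Minimal sketch (Python): parsing TCP options as kind / length / value.
options = bytes([0x02, 0x04, 0x05, 0xB4, 0x01, 0x00, 0x00, 0x00])

i = 0
while i < len(options):
    kind = options[i]
    if kind == 0:                        # End of Option list
        print("EOL"); break
    if kind == 1:                        # No-Operation (padding)
        print("NOP"); i += 1; continue
    length = options[i + 1]              # total length, including the kind and length bytes
    value = options[i + 2:i + length]
    if kind == 2:
        print("MSS =", int.from_bytes(value, "big"))   # 0x05B4 = 1460
    else:
        print("option kind", kind, "value", value.hex())
    i += length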

-Some options may only be sent when SYN is set; they are indicated below as [SYN].
Kind 0 (1 byte) – End of options list.
Kind 1 (1 byte) – No operation (NOP, padding); may be used to align option fields on 32-bit boundaries for better performance.
Kind 2 (4 bytes total; 2-byte value) – Maximum segment size. [SYN]
Kind 3 (3 bytes) – Window scale. [SYN]
Kind 4 (2 bytes) – Selective Acknowledgement permitted. [SYN]
Kind 8 (10 bytes) – Timestamp and echo of previous timestamp.
Kind 14 (3 bytes) – TCP Alternate Checksum Request. [SYN]
Kind 15 (variable) – TCP Alternate Checksum Data.

*-The TCP header can have up to 40 bytes of optional information.


-Options convey additional information to the destination or align other options.
- We can define two categories of options: 1-byte options and multiple-byte options.
-The first category contains two types of options: End of Option list and No Operation.
-The second category, in most implementations, contains five types of options: maximum segment size, window
scale factor, timestamp, SACK-permitted, and SACK.

End of Option (EOP) (1 byte)


The end-of-option (EOP) option is a 1-byte option used for padding at the end of the option section. It can only be
used as the last option. Only one occurrence of this option is allowed. After this option, the receiver looks for the
payload data.
-The EOP option imparts two pieces of information to the destination:
1. There are no more options in the header.
2. Data from the application program starts at the beginning of the next 32-bit word.

Or
-It is used as padding that marks the end of the options.

No Operation (NOP) (1 Byte)


-The no-operation (NOP) option is also a 1-byte option used as a filler. However, it normally comes before another
option to help align it in a four-word slot. For example, in Figure 15.43 it is used to align one 3-byte option such as
the window scale factor and one 10-byte option such as the timestamp.
-It is a byte added between options to align them on 4-byte word boundaries. Multiple NOPs can be used to fill out a
word.

End of Option and No Operation:

#When padding bytes are used in between options, they are called NOP bytes. When padding bytes are used at the
end of the options, where the data begins, they are called EOP bytes.

Maximum Segment Size (MSS) (2 byte)


-The maximum-segment-size option defines the size of the biggest unit of data that can be received by the
destination of the TCP segment. In spite of its name, it defines the maximum size of the data, not the maximum size
of the segment. Since the field is 16 bits long, the value can be 0 to 65,535 bytes. Figure 15.44 shows the format of
this option.

-The MSS is determined during connection establishment. Each party defines the MSS for the segments it will receive
during the connection. If a party does not define this, the default value is 536 bytes.
-The value of MSS is determined during connection establishment and does not change during the connection.
Window Scale Factor (2 byte)
-The window size field in the header defines the size of the sliding window. This field is 16 bits long, which means
that the window can range from 0 to 65,535 bytes. Although this seems like a very large window size, it still may
not be sufficient, especially if the data are traveling through a long fat pipe, a long channel with a wide bandwidth.
-To increase the window size, a window scale factor is used. The new window size is found by first raising 2 to the
number specified in the window scale factor. Then this result is multiplied by the value of the window size in the
header.

-The value of the window scale factor can be determined only during connection establishment; it does not change
during the connection.
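A tiny sketch (Python) of the calculation described above: the advertised 16-bit window is multiplied by 2 raised to the
scale factor exchanged at connection setup. The scale value of 7 is only an example.

# Minimal sketch (Python): effective receive window with the window scale option.
window_field = 65535        # 16-bit window size from the TCP header
scale_factor = 7            # example value from the Window Scale option (SYN only)

effective_window = window_field * (2 ** scale_factor)
print(effective_window, "bytes")    # 65535 * 128 = 8,388,480 bytes (about 8 MB)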

Timestamp (10 byte)


-This is a 10-byte option with the format shown in Figure 15.46. Note that the end with the active open announces a
timestamp in the connection request segment (SYN segment). If it receives a timestamp in the next segment (SYN +
ACK) from the other end, it is allowed to use the timestamp; otherwise, it does not use it any more. The
timestamp option has two applications: it measures the round-trip time and protects against wrapped sequence numbers.
-One application of the timestamp option is the calculation of round-trip-time (RTT)

PAWS
-The timestamp option has another application, protection against wrapped sequence numbers (PAWS)

SACK-Permitted Option (2 byte)


-The SACK-permitted option of two bytes is used only during connection establishment. The host that sends the
SYN segment adds this option to show that it can support the SACK option. If the other end, in its SYN + ACK
segment, also includes this option, then the two ends can use the SACK option during data transfer. Note that the
SACK-permitted option is not allowed during the data transfer phase.
- The SACK option, of variable length, is used during data transfer only if both ends agree (if they have exchanged
SACK-permitted options during connection establishment). The option includes a list for blocks arriving out of
order.

Example 1:
-Let us see how the SACK option is used to list out-of-order blocks. In Figure 15.49 an end has received five
segments of data.

-The first and second segments are in consecutive order. An accumulative acknowledgment can be sent to report the
reception of these two segments.
-Segments 3, 4, and 5, however, are out of order with a gap between the second and third and a gap between the
fourth and the fifth.
-An ACK and a SACK together can easily clear the situation for the sender.
-The value of ACK is 2001, which means that the sender need not worry about bytes 1 to 2000.
-The SACK has two blocks. The first block announces that bytes 4001 to 6000 have arrived out of order. -The
second block shows that bytes 8001 to 9000 have also arrived out of order.
-This means that bytes 2001 to 4000 and bytes 6001 to 8000 are lost or discarded. The sender can resend only these
bytes.
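Sticking with the figures in Example 1, the sketch below (Python) shows how a sender could combine the cumulative
ACK (2001) with the SACK blocks (4001-6000 and 8001-9000) to work out which byte ranges still need retransmission;
it illustrates the bookkeeping only and is not an actual TCP implementation.

# Minimal sketch (Python): deciding what to retransmit from ACK + SACK blocks (Example 1 figures).
ack = 2001                                   # everything below 2001 is acknowledged
sack_blocks = [(4001, 6000), (8001, 9000)]   # out-of-order bytes reported by the receiver
highest_sent = 9000                          # last byte the sender has transmitted

missing = []
start = ack
for left, right in sorted(sack_blocks):
    if start < left:
        missing.append((start, left - 1))
    start = right + 1
if start <= highest_sent:
    missing.append((start, highest_sent))

print("retransmit byte ranges:", missing)    # [(2001, 4000), (6001, 8000)]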

Example 2:
-Figure shows how a duplicate segment can be detected with a combination of ACK and SACK. In this case, we
have some out-of-order segments (in one block) and one duplicate segment.
-To show both out-of-order and duplicate data, SACK uses the first block, in this case, to show the duplicate data
and other blocks to show out-of-order data.
-Note that only the first block can be used for duplicate data. The natural question is how the sender, when it
receives these ACK and SACK values, knows that the first block is for duplicate data (compare this example with
the previous example).
-The answer is that the bytes in the first block are already acknowledged in the ACK field; therefore, this block must
be a duplicate.
Example 3:

Figure 15.51 shows what happens if one of the segments in the out-of-order section is also duplicated.
In this example, one of the segments (4001:5000) is duplicated.

-The SACK option announces this duplicate data first and then the out-of-order block. This time, however, the
duplicated block is not yet acknowledged by ACK, but because it is part of the out-of-order block (4001:5000 is part
of 4001:6000), it is understood by the sender that it defines the duplicate data.

Options. 0 to 40 bytes.
Options occupy space at the end of the TCP header. All options are included in the checksum. An option may begin
on any byte boundary. The TCP header must be padded with zeros to make the header length a multiple of 32 bits.

Kind | Length | Description | References
0 | 1 | End of option list. | RFC 793
1 | 1 | No operation. | RFC 793
2 | 4 | MSS, Maximum Segment Size. | RFC 793
3 | 3 | WSOPT, Window scale factor. | RFC 1323
4 | 2 | SACK permitted. | RFC 2018
5 | Variable | SACK. | RFC 2018, RFC 2883

Data. Variable length.

MSS, Maximum Segment Size.


When IPv4 is used as the network protocol, the MSS is calculated as the maximum size of an IPv4 datagram minus
40 bytes.

When IPv6 is used as the network protocol, the MSS is calculated as the maximum packet size minus 60 bytes. An
MSS of 65535 should be interpreted as infinity.

SMSS, Sender Maximum Segment Size.


The size of the largest segment that the sender can transmit. This value can be based on the maximum transmission
unit of the network, the path MTU discovery algorithm, RMSS, or other factors. The size does not include the TCP
headers and options.

SACK, Selective Acknowledgement. Algorithm.


This technique allows the data receiver to inform the sender about all segments that have arrived
successfully, so the sender need retransmit only the segments that have actually been lost. This extension uses
two TCP options. The first is an enabling option, SACK permitted, which may be sent in a SYN segment to indicate
that the SACK option can be used once the connection is established. The other is the SACK option itself, which
may be sent over an established connection once permission has been given.

SACK Permitted

The TCP SACK permitted option may be sent in a SYN by a TCP that has been extended to receive the SACK
option once the connection has opened. It MUST NOT be sent on non-SYN segments.

End of Option List

The TCP End of Option List option is used to indicate the last option in the list has been reached.

MAC header | IP header | TCP header | TCP option 0 | Data

No Operation

The TCP No Operation option is used to pad the option list.

MAC header | IP header | TCP header | TCP option 1 | Data

Maximum Segment Size


The TCP Maximum Segment Size option can be used to specify the maximum segment size that the receiver should
use.

MAC header | IP header | TCP header | TCP option 2 | Data

Maximum Segment Size. 16 bits.


This field must only be sent in the initial connection request (i.e., in segments with the SYN control bit set). If this
option is not used, any segment size is allowed.

Window Scale (3 byte – 24 bit)

RFC 1323, pg 8:

The window scale extension expands the definition of the TCP window to 32 bits and then uses a scale factor to
carry this 32 bit value in the 16 bit Window field of the TCP header (SEG.WND in RFC-793). The scale factor is
carried in a new TCP option, Window Scale. This option is sent only in a SYN segment (a segment with the SYN bit
on), hence the window scale is fixed in each direction when a connection is opened. (Another design choice would
be to specify the window scale in every TCP segment. It would be incorrect to send a window scale option only
when the scale factor changed, since a TCP option in an acknowledgement segment will not be delivered reliably
(unless the ACK happens to be piggy-backed on data in the other direction). Fixing the scale when the connection is
opened has the advantage of lower overhead but the disadvantage that the scale factor cannot be changed during the
connection.)

RFC 1323, pg 9:

The three-byte Window Scale option may be sent in a SYN segment by a TCP. It has two purposes: (1) indicate that
the TCP is prepared to do both send and receive window scaling, and (2) communicate a scale factor to be applied to
its receive window. Thus, a TCP that is prepared to scale windows should send the option, even if its own scale
factor is 1. The scale factor is limited to a power of two and encoded logarithmically, so it may be implemented by
binary shift operations.

MAC header | IP header | TCP header | TCP option 3 | Data

Timestamps

The TCP Timestamp option obsoletes the TCP Echo request and Echo reply options.

The timestamps are used for two distinct mechanisms: RTTM (Round Trip Time Measurement) and PAWS (Protect
Against Wrapped Sequences).

MAC header | IP header | TCP header | TCP option 8 | Data


Timestamp Value (TSval). 32 bits.
This field contains the current value of the timestamp clock of the TCP sending the option.

Timestamp Echo Reply (TSecr). 32 bits.


This field is only valid if the ACK bit is set in the TCP header. If it is valid, it echoes a timestamp value that was sent
by the remote TCP in the TSval field of a Timestamps option. When TSecr is not valid, its value must be zero. The
TSecr value will generally be from the most recent Timestamp option that was received; however, there are
exceptions that are explained below. A TCP may send the Timestamp option in an initial SYN segment (i.e.,
segment containing a SYN bit and no ACK bit), and may send a TSopt in other segments only if it received a TSopt
in the initial SYN segment for the connection.

#TCP Checksum Calculation and the TCP "Pseudo Header" 

Detecting Transmission Errors Using Checksums

TCP Pseudo Header is 12 Bytes

*1. A 12-byte TCP pseudo header is created before checksum calculation. This pseudo header contains information
from both the TCP header and the IP header into which the TCP segment will be encapsulated.

2. Once this 96-bit [12 byte] header has been formed, it is placed in a buffer, following which the TCP
segment itself is placed. Then, the checksum is computed over the entire set of data (pseudo header plus TCP
segment). The value of the checksum is placed into the Checksum field of the TCP header, and the pseudo
header is discarded—it is not an actual part of the TCP segment and is not transmitted. 

3. When the TCP segment arrives at its destination, the receiving TCP software performs the same calculation. It
forms the pseudo header, prepends it to the actual TCP segment, and then performs the checksum (setting
the Checksum field to zero for the calculation as before). If there is a mismatch between its calculation and the value
the source device put in the Checksum field, this indicates that an error of some sort occurred and the segment is
normally discarded.

Table 158: TCP “Pseudo Header” For Checksum Calculation

Field Name | Size (bytes) | Description
Source Address | 4 | The 32-bit IP address of the originator of the datagram, taken from the IP header.
Destination Address | 4 | The 32-bit IP address of the intended recipient of the datagram, also from the IP header.
Reserved | 1 | 8 bits of zeroes.
Protocol | 1 | The Protocol field from the IP header. This indicates what higher-layer protocol is carried in the IP
datagram. Of course, we already know what this protocol is, it's TCP! So this field will normally have the value 6.
TCP Length | 2 | The length of the TCP segment, including both header and data. Note that this is not a specific field
in the TCP header; it is computed.

Or

To provide basic protection against errors in transmission, TCP includes a 16-bit Checksum field in its header.

The idea behind a checksum is very straight-forward: take a string of data bytes and add them all together.
Then send this sum with the data stream and have the receiver check the sum.

In TCP, a special algorithm is used to calculate this checksum by the device sending the segment; the same
algorithm is then employed by the recipient to check the data it received and ensure that there were no
errors.

Instead of computing the checksum over only the actual data fields of the TCP segment, a 12-byte TCP pseudo
header is created prior to checksum calculation. This header contains important information taken from
fields in both the TCP header and the IP datagram into which the TCP segment will be encapsulated. The
TCP pseudo header has the format shown

Once this 96-bit [12 byte] header has been formed, it is placed in a buffer, following which the TCP segment
itself is placed. Then, the checksum is computed over the entire set of data (pseudo header plus TCP
segment). The value of the checksum is placed into the Checksum field of the TCP header, and the pseudo
header is discarded—it is not an actual part of the TCP segment and is not transmitted.

[Note:To calculate the TCP segment header’s Checksum field, the TCP pseudo header is first constructed and
placed, logically, before the TCP segment. The checksum is then calculated over both the pseudo header and the
TCP segment. The pseudo header is then discarded.]

When the TCP segment arrives at its destination, the receiving TCP software performs the same calculation. It forms
the pseudo header, prepends it to the actual TCP segment, and then performs the checksum (setting
the Checksum field to zero for the calculation as before). If there is a mismatch between its calculation and the value
the source device put in the Checksum field, this indicates that an error of some sort occurred and the segment is
normally discarded.
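As a concrete illustration of the calculation just described, here is a minimal sketch (Python) of the Internet checksum
computed over the 12-byte pseudo header plus a TCP segment whose Checksum field is zeroed; the addresses and
segment bytes are fabricated example values.

# Minimal sketch (Python): TCP checksum over pseudo header + segment (16-bit one's complement sum).
import socket, struct

def internet_checksum(data):
    if len(data) % 2:                        # pad to an even number of bytes
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) + data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back into the low 16 bits
    return ~total & 0xFFFF

src_ip = socket.inet_aton("192.0.2.1")       # example addresses
dst_ip = socket.inet_aton("192.0.2.2")
tcp_segment = bytes(20) + b"hello"           # 20-byte header (checksum field = 0) + data

pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(tcp_segment))  # reserved, protocol = 6, TCP length
print("checksum = 0x%04x" % internet_checksum(pseudo + tcp_segment))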

*********************************************************************************************
**

Sequence Number:
-32 bit number used for byte level numbering of TCP segments. If you are using TCP, each byte of data is assigned
a sequence number. If SYN flag is set (during the initial 3 way handshake connection initiation) then this is the
initial sequence number. The sequence of the actual first data byte will then be this sequence number plus one.
-For example, suppose the first byte of data sent by a device in a particular TCP segment has sequence number 50000.
If this segment carries 500 bytes of data, then the next segment sent by this device will have the sequence number
50000 + 500 = 50500.
Or
-The data bytes received from the upper layer are given sequence numbers. The number of the first byte becomes the
sequence number of the segment.
Or
-It identifies data within a segment rather than the segment itself.
-It allows the receiving host to reassemble the data from multiple segments in the correct order, upon arrival.
-It allows receipt of data in a segment to be acknowledged.
-When establishing a connection, a host will choose a 32-bit Initial Sequence Number (ISN).
-Receiver responds to this sequence number with acknowledgement number, set to sequence number +1.
Or
-TCP provides a reliable session between devices by using sequence numbers and acknowledgments. Every TCP
segment sent has a sequence number in it. This not only helps the destination reorder any incoming segments that
arrived out of order, but it also provides a method of verifying whether all the sent segments were received. The
destination responds to the source with an acknowledgment indicating receipt of the sent segments . Before TCP can
provide a reliable session, it has to go through a synchronization phase—the three-way handshake.
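A small worked sketch (Python) of the numbering rules above: during the handshake the SYN consumes one sequence
number, and afterwards the acknowledgement number is simply the next byte expected (sequence number plus data
length). The ISNs of 50000 and 90000 are arbitrary example values.

# Minimal sketch (Python): sequence/acknowledgement numbers for a handshake plus one data segment.
client_isn, server_isn = 50000, 90000

print("SYN     : client seq =", client_isn)
print("SYN+ACK : server seq =", server_isn, " ack =", client_isn + 1)    # SYN consumes one number
print("ACK     : client seq =", client_isn + 1, " ack =", server_isn + 1)

data_len = 500
print("DATA    : client seq =", client_isn + 1, " carries", data_len, "bytes")
print("ACK     : server ack =", client_isn + 1 + data_len, "(next byte expected)")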

Acknowledgement Number:

-32 bit number field which indicates the next sequence number that the sending device is expecting from the other
device.
Or
It indicates the next sequence number of the segment that the sending device is expecting from the other device.

Transmission Control Block (TCB):

-TCP keeps track of different information about each connection. To do this, TCP sets up a data structure known as the
Transmission Control Block (TCB), which maintains information about the local and remote socket
numbers, the send and receive buffers, security and priority values, and the current segment in the queue. The
Transmission Control Block also manages the send and receive sequence numbers.

TCP Window:

A TCP window is the amount of unacknowledged data a sender can send on a particular connection before it gets an
acknowledgment back from the receiver indicating that it has received some of the data.

- The larger the window size for a session, the fewer acknowledgments are sent, making the session more
efficient. Too small a window size can affect throughput, since a host has to send a small number of segments, wait
for an acknowledgment, send another bunch of small segments, and wait again.

[Note: Reducing the window size increases reliability but reduces throughput.]

- TCP allows the regulation of the flow of segments, ensuring that one host doesn’t flood another host with too many
segments, overflowing its receiving buffer. TCP uses a sliding windowing mechanism to assist with flow control.
-For example, if the window size is 1, a host can send only one segment and must then wait for a corresponding
acknowledgment before sending the next segment. If the window size is 20, a host can send 20 segments and must
wait for the single acknowledgment of the sent 20 segments before sending 20 additional segments.

-Window size is negotiated dynamically during the TCP session.

TCP Sliding Window:

-The window size is increased as long as the receiver can receive the total number of segments in the window.

The working of the TCP sliding window mechanism can be explained as below:

The sending device can send all segments within the TCP window size (as specified in the TCP header) without
waiting for an ACK, and should start a timeout timer for each of them.

The receiving device should acknowledge each segment it receives, indicating the sequence number of the last
well-received packet. After receiving the ACK from the receiving device, the sending device slides the window to the
right.

Example:

In this case, the sending device can send up to 5 TCP segments without receiving an acknowledgement from the
receiving device. After receiving the acknowledgement for segment 1 from the receiving device, the sending device
can slide its window one TCP segment to the right and transmit segment 6 as well.

If a TCP segment is lost on its journey to the destination, the receiving device cannot acknowledge it to the sender.
Suppose that, during transmission, all segments except segment 3 reach the destination. The receiving device can
acknowledge only up to segment 2. At the sending device, a timeout will occur and it will retransmit the lost segment
3. Once the retransmitted segment 3 arrives, the receiving device has received all of the segments, since only
segment 3 was lost. The receiving device will then send the ACK for segment 5, because it has now received all
segments up to 5.

The acknowledgement (ACK) for segment 5 assures the sender that the receiver has successfully received all
segments up to 5.
TCP uses a byte-level numbering system for communication. If the sequence number for a TCP segment at
any instance was 5000 and the segment carried 500 bytes, the sequence number for the next segment will be
5000 + 500 = 5500. That means a TCP segment only carries the sequence number of the first byte in the segment.

The window size is expressed in number of bytes and is determined by the receiving device when the connection is
established, and it can vary later. You might have noticed, when transferring big files from one Windows machine to
another, that initially the time-remaining calculation shows a large value and then comes down later.
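The sketch below (Python) mimics the example above: a window of 5 segments, segment 3 lost on the first pass, a
timeout-driven retransmission, and a cumulative acknowledgement once everything has arrived. It is a toy model of
the bookkeeping only, with all numbers taken from the example.

# Minimal toy sketch (Python): window of 5 segments, segment 3 lost, then retransmitted.
window_size = 5
segments = [1, 2, 3, 4, 5]
lost_on_first_try = {3}

received = set()
for seg in segments:                      # first transmission of the window
    if seg not in lost_on_first_try:
        received.add(seg)

def cumulative_ack(received):
    # Highest segment such that all segments up to it have arrived.
    ack = 0
    while ack + 1 in received:
        ack += 1
    return ack

print("after first pass, receiver ACKs up to segment", cumulative_ack(received))        # 2

received.add(3)                           # timeout expires, sender retransmits segment 3
print("after retransmission, receiver ACKs up to segment", cumulative_ack(received))    # 5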

Windowing:

-TCP provide a Windowing system to regulate the flow of data between computers. With this system of flow
control, the receiving computer can notify the sending computer when to speed up or slow down the
transmission of data.

Or

-Windowing is used to control flow of the data in TCP.

Sliding Window:

-The window size is increased as long as the receiver can receive the total number of segments in the window.
-In this case, the sending device can send up to 5 TCP segments without receiving an acknowledgement from the
receiving device. After receiving the acknowledgement for segment 1 from the receiving device, the sending device
can slide its window one TCP segment to the right and transmit segment 6 as well.

Or

-In the sliding window mechanism, the sending host keeps increasing its window size as long as the receiver is
capable of receiving the window. For example, if host A sends 3 segments in a window and the
receiver sends a cumulative acknowledgment indicating all three segments have been received, host A slides the
window to the right; that is, it increases the size of the window by one segment. In this way it keeps increasing the
window size as long as the receiver is acknowledging all the segments in the window. If the receiver sends a negative
acknowledgment indicating it has not received particular segments, then the sender reduces the window size; that is,
it slides the window towards the left and sends a reduced window.

Window Size:

-It is used for flow control and indicates the amount of data (in bytes) allowed to be sent before waiting for an
acknowledgement from the destination.

-The window size is not limited by the Maximum Segment Size (MSS); an individual segment, however, cannot carry
more data than the MSS (default 536 bytes).

-The window size can be dynamically changed for flow control, preventing buffer congestion on the receiving host.

-Window size 0 – the receiver can accept no further data – indicating congestion on the receiving host.

-Maximum Window Size (without window scaling): 65,535 bytes

Or
This field is used by the receiver to indicate to the sender the amount of data that it is able to accept.
The Window size field is the key to efficient data transfers and flow control.

Bandwidth-Delay Product:

-The maximum number of bits that can be present on a network segment at any one time.

Example: We have a FastEthernet link = 100 Mbps / Latency = 100 msec = 0.1 sec

Formula:

Bandwidth-Delay Product= A Segment’s BW (in bits/sec) X Latency a packet experience on the segment (in
sec) [PING – Round trip time response gives delay/latency]

= 100,000,000 bits/sec x 0.1 sec

= 10,000,000 bits

-A maximum of 10,000,000 bits (10 million bits) can be on this network segment at any one time.
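Repeating the arithmetic above in code, the sketch below (Python) computes the bandwidth-delay product, converts it
to bytes, and compares it with the unscaled 65,535-byte window to show why window scaling (next section) becomes
necessary on such a link. The link numbers are the ones used in the example.

# Minimal sketch (Python): bandwidth-delay product for the example FastEthernet link.
bandwidth_bps = 100_000_000      # 100 Mbps
latency_s = 0.1                  # 100 ms

bdp_bits = bandwidth_bps * latency_s         # 10,000,000 bits
bdp_bytes = bdp_bits / 8                     # 1,250,000 bytes

print("BDP =", int(bdp_bits), "bits =", int(bdp_bytes), "bytes")
print("unscaled max window = 65535 bytes ->",
      "window scaling needed" if bdp_bytes > 65535 else "no scaling needed")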

Window Scaling:

- The TCP window scale option is an option to increase the receive window size allowed in Transmission Control
Protocol above its former maximum value of 65,535 bytes.
- The window scaling option may be sent only once during a connection by each host, in its SYN packet.

Or

For more efficient use of high bandwidth networks, a larger TCP window size may be used. The TCP window size
field controls the flow of data and its value is limited to between 2 and 65,535 bytes.

The TCP window scale option is an option used to increase the maximum window size from 65,535 bytes to 1
gigabyte. Scaling up to larger window sizes is a part of what is necessary for TCP tuning.
The window scale option is used only during the TCP 3-way handshake.

Lastly, for those who deal with Cisco routers, you might be interested to know that you are able to configure the
Window size on Cisco routers running the Cisco IOS v9 and greater. Routers with versions 12.2(8)T and above
support Window Scaling, a feature that's automatically enabled for Window sizes above 65,535, with a maximum
value of 1,073,741,823 bytes!
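
A minimal Python sketch of what the window scale option does (illustration only): the option carries a shift count,
and the effective receive window is the advertised 16-bit value shifted left by the agreed scale factor (0 to 14).

def effective_window(advertised: int, scale: int) -> int:
    # Advertised window field (max 65,535) scaled by the negotiated shift count.
    return advertised << scale

print(effective_window(65_535, 0))    # 65,535 bytes – no scaling
print(effective_window(65_535, 14))   # 1,073,725,440 bytes – roughly 1 GB
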
Sequence Number / How Sequence Number is calculated:

-This allows segments to be put back in the correct order if they arrive out of order.

Or

-It identifies data within a segment rather than the segment itself.
-It allows the receiving host to reassemble the data from multiple segments in the correct order upon arrival.

or

*Sequence numbers are used to:

-Acknowledge which data has been received.

-Determine if data has been lost or damaged.

-Put data into the correct order.

Acknowledgment Number:

-It indicates the next sequence number that the receiver expects to receive.

Direct Transmission:

TCP connections are logical point-to-point connections between two application layer protocols. This type of
communication is also referred to as a direct transmission.

TCP Slow Start:

-If the sender transmits too aggressively and a segment does get dropped, the receiver’s acknowledgements will keep
asking for the missing segment. The sender then sees that it has already sent that segment, concludes that it must have
been dropped because it was sending too aggressively, and reduces its window size. That behaviour is what these
notes refer to as TCP slow start.

Or

TCP reduces its window size if a segment gets dropped.


Retransmission Timer:

-Every time a segment is sent, the sending host starts a retransmission timer, dynamically determined (and adjusted)
based on the round-trip time between the two hosts.

-If an ACK is not received before the retransmission timer expires, the segment is resent, ensuring guaranteed
delivery even when segments are lost.

Or

-If the sending host doesn’t receive an ACK for the remaining packets within the interval set by the retransmission
timer, the packets are retransmitted. Retransmission, however, increases the network load.
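
A minimal Python sketch of how such a timer can be derived from round-trip-time samples (illustration only; the
smoothing constants and the 1-second minimum follow the classic RFC 6298 estimator, which is not spelled out in
these notes):

class RtoEstimator:
    ALPHA, BETA, K = 1 / 8, 1 / 4, 4

    def __init__(self) -> None:
        self.srtt = None     # smoothed round-trip time (seconds)
        self.rttvar = None   # round-trip time variation (seconds)
        self.rto = 1.0       # initial retransmission timeout (seconds)

    def update(self, rtt_sample: float) -> float:
        if self.srtt is None:                      # first measurement
            self.srtt = rtt_sample
            self.rttvar = rtt_sample / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - rtt_sample)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt_sample
        self.rto = max(1.0, self.srtt + self.K * self.rttvar)
        return self.rto

est = RtoEstimator()
for sample in (0.8, 1.2, 2.0):        # round-trip-time samples in seconds
    print(round(est.update(sample), 3))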

Positive ACK Retransmission:

-TCP employs a positive acknowledgment with retransmission (PAR) mechanism to recover from lost segments.
The same segment will be repeatedly re-sent, with a delay between each segment, until an acknowledgment is
received from the destination. The acknowledgment contains the sequence number of the segment received
and verifies receipt of all segments sent prior to the retransmission process. This eliminates the need for
multiple acknowledgments and resending acknowledgments.

Or

-This means that correct receipt of data is acknowledged to the sender.

-For reasons of efficiency, ACKs are sent only for correctly received sequences and not for each individual packet.

Or

-TCP utilizes PAR to control data flow and confirm data delivery.

*The source sends a packet, starts a timer, and waits for an ACK.

-If the timer expires before the source receives an ACK, the source retransmits the packet and restarts the timer.

Positive Acknowledgement:

-The receiver explicitly notifies the sender which segments were received correctly. Positive Acknowledgement
therefore also implicitly informs the sender which packets were not received and provides detail on packets which
need to be retransmitted.

-Positive Acknowledgment with Re-Transmission (PAR), is a method used by TCP to verify receipt of transmitted
data. PAR operates by re-transmitting data at an established period of time until the receiving host acknowledges
reception of the data.

Negative Acknowledgement:
-The receiver explicitly notifies the sender which segments were received incorrectly and thus may need to be
retransmitted.

Selective Acknowledgment:

-TCP may experience poor performance when multiple packets are lost from one window of data. When the receiver
sends only cumulative acknowledgements, the sender has limited information, because it can learn about only a
single lost packet per round-trip time. In that case the sender could choose to retransmit lost packets early, before it
has received acknowledgments for the remaining packets. However, the retransmitted segments may have already
been successfully received.

To resolve this problem, with selective acknowledgments the data receiver can inform the sender about all segments
that have arrived successfully, so the sender needs to retransmit only the segments that have actually been lost.

For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during
transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to
9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to
resend all 10,000 bytes.

To solve this problem TCP employs the selective acknowledgment (SACK) option which allows the receiver to
acknowledge discontinuous blocks of packets which were received correctly, in addition to the sequence number
of the last contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgement can
specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence
numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send
SACK with sequence numbers 1000 and 9999. The sender thus retransmits only the first packet, bytes 0 to 999.

-The SACK-permitted option of two bytes is used only during connection establishment. The host that sends the
SYN segment adds this option to show that it can support the SACK option. If the other end, in its SYN + ACK
segment, also includes this option, then the two ends can use the SACK option during data transfer. Note that the
SACK-permitted option is not allowed during the data transfer phase.
- The SACK option, of variable length, is used during data transfer only if both ends agree (if they have exchanged
SACK-permitted options during connection establishment). The option includes a list for blocks arriving out of
order.

Example 1:
-Let us see how the SACK option is used to list out-of-order blocks. In Figure 15.49 an end has received five
segments of data.
-The first and second segments are in consecutive order. A cumulative acknowledgment can be sent to report the
reception of these two segments.
-Segments 3, 4, and 5, however, are out of order with a gap between the second and third and a gap between the
fourth and the fifth.
-An ACK and a SACK together can easily clear the situation for the sender.
-The value of ACK is 2001, which means that the sender need not worry about bytes 1 to 2000.
-The SACK has two blocks. The first block announces that bytes 4001 to 6000 have arrived out of order.
-The second block shows that bytes 8001 to 9000 have also arrived out of order.
-This means that bytes 2001 to 4000 and bytes 6001 to 8000 are lost or discarded. The sender can resend only these
bytes.
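
A minimal Python sketch of the sender-side logic in this example (illustration only): given the cumulative ACK and
the list of SACK blocks, compute which byte ranges still need to be retransmitted.

def missing_ranges(ack, sack_blocks):
    # Everything below `ack` is acknowledged; SACK blocks are (first, last) byte
    # ranges that arrived out of order. The gaps between them must be resent.
    gaps = []
    last_received = ack - 1
    for start, end in sorted(sack_blocks):
        if start > last_received + 1:
            gaps.append((last_received + 1, start - 1))
        last_received = max(last_received, end)
    return gaps

print(missing_ranges(2001, [(4001, 6000), (8001, 9000)]))
# -> [(2001, 4000), (6001, 8000)], matching the byte ranges the sender must resend above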

Example 2:

-Figure shows how a duplicate segment can be detected with a combination of ACK and SACK. In this case, we
have some out-of-order segments (in one block) and one duplicate segment.
-To show both out-of-order and duplicate data, SACK uses the first block, in this case, to show the duplicate data
and other blocks to show out-of-order data.
-Note that only the first block can be used for duplicate data. The natural question is how the sender, when it
receives these ACK and SACK values, knows that the first block is for duplicate data (compare this example with
the previous example).
-The answer is that the bytes in the first block are already acknowledged in the ACK field; therefore, this block must
be a duplicate.
Example 3:

Figure 15.51 shows what happens if one of the segments in the out-of-order section is also duplicated.
In this example, one of the segments (4001:5000) is duplicated.
-The SACK option announces this duplicate data first and then the out-of-order block. This time, however, the
duplicated block is not yet acknowledged by ACK, but because it is part of the out-of-order block (4001:5000 is part
of 4001:6000), it is understood by the sender that it defines the duplicate data.

Cumulative Acknowledgement:

-The receiver acknowledges the segments it has correctly received in a stream, which implicitly informs the sender
that the previous packets were received correctly. TCP uses cumulative acknowledgment with its sliding window.

Or

- TCP was originally designed to acknowledge receipt of segments cumulatively. The receiver advertises the next
byte it expects to receive, ignoring all segments received and stored out of order. This is sometimes referred to as
positive cumulative acknowledgment or ACK.
-The word “positive” indicates that no feedback is provided for discarded, lost, or duplicate segments. The 32-bit
ACK field in the TCP header is used for cumulative acknowledgments and its value is valid only when the ACK
flag bit is set to 1.

Or

Cumulative acknowledgment for the total window size.

Delayed Acknowledgement:

-This means that when a segment arrives, it is not acknowledged immediately. The receiver waits until there is a
decent amount of space in its incoming buffer before acknowledging the arrived segments. The delayed
acknowledgment prevents the sending TCP from sliding its window. After the sending TCP has sent the data in the
window, it stops. This prevents the silly window syndrome.
-Delayed acknowledgment also has another advantage: it reduces traffic. The receiver does not have to acknowledge
each segment. However, there also is a disadvantage in that the delayed acknowledgment may result in the sender
unnecessarily retransmitting the unacknowledged segments. TCP balances the advantages and disadvantages. It now
defines that the acknowledgment should not be delayed by more than 500 ms.

Or

-The receiver need not ACK a received segment immediately. It can wait for further segments as long as there is
space in the receive buffer. This is called delayed acknowledgement. A so-called delayed acknowledgement timer is
running; when it reaches 0, all segments in the receive buffer must be acknowledged.

TCP Zero Window:

What does TCP Zero Window mean?

Zero Window is something to investigate.

TCP Zero Window is when the Window size in a machine remains at zero for a specified amount of time.

This means that a client is not able to receive further information at the moment, and the TCP transmission is halted
until it can process the information in its receive buffer.

TCP Window size is the amount of information that a machine can receive during a TCP session and still be able to
process the data. Think of it like a TCP receive buffer. When a machine initiates a TCP connection to a server, it will
let the server know how much data it can receive by the Window Size.

In many Windows machines, this value is around 64512 bytes. As the TCP session is initiated and the server begins
sending data, the client will decrement its Window Size as this buffer fills. At the same time, the client is processing
the data in the buffer, and is emptying it, making room for more data. Through TCP ACK frames, the client informs
the server of how much room is in this buffer. If the TCP Window Size goes down to 0, the client will not be able to
receive any more data until it processes and opens the buffer up again. In this case, Protocol Expert will alert a "Zero
Window" in Expert View.

Troubleshooting a Zero Window:
For one reason or another, the machine alerting the Zero Window will not receive any more data from the host.
Reason: It could be that the machine is running too many processes at that moment and its processor is maxed out. Or
it could be that there is an error in the TCP receiver, like a Windows registry misconfiguration. Try to determine what
the client was doing when the TCP Zero Window happened.

Purpose of RST bit:

-To forcefully terminate an improper connection.

-To reset a half-open connection.

-When a host receives a TCP segment from a host that it does not have a connection with.

-When a host receives a segment with an incorrect Sequence Number or Acknowledgement Number.

-When a host receives a SYN request on a port it is not listening on.

TCP Half Open Connection:

-A TCP connection can become half-open, meaning that one host is in an established state while the other is not.
Half-open connections can result from interruption by an intermediary device (such as a firewall), or from a software
or hardware issue.

TCP Half Close:

-In TCP, one end can stop sending data while still receiving data. This is called a Half-Close.

TCP timestamps

TCP timestamps can help TCP determine in which order packets were sent. 

Q. How do we measure the latency that packet experience on a segment?

A. Using PING: we can ping from one side of the segment to the other, take the average, and divide by 2, because
PING reports the Round Trip Time (RTT).

Q. Why to segment data at TCP? Why Segmentation?


-TCP is a stream-oriented protocol, because it treats data coming from the application as a stream.
-This is an advantage for a wide variety of applications, because they don’t have to worry about data packaging and
can send files or messages of any size.
-IP is a message-oriented protocol, so TCP divides the stream of data into discrete messages for IP, called TCP
segments.

Q. What is MSS? Why we need MSS?


>Maximum Segment Size (MSS): It is the maximum size of the data in a segment that a receiver can receive.
-This limit is applied to prevent unnecessary fragmentation at the IP layer.
-The MSS is advertised by the operating system in the SYN message, using the TCP option called MSS.
-Each device can use a different MSS in a TCP connection.

-UDP: No segmentation, only fragmentation.


-TCP: No fragmentation, only Segmentation.

-Minimum MTU is 576 for IP datagram.


MSS = minimum MTU – IP header – TCP header = 576 – 20 – 20 = 536 bytes.
-Default MSS = 536 bytes.
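
A minimal Python sketch of that arithmetic (illustration only), deriving the MSS from an MTU by subtracting the
basic 20-byte IP header and 20-byte TCP header:

IP_HEADER, TCP_HEADER = 20, 20

def mss_from_mtu(mtu: int) -> int:
    return mtu - IP_HEADER - TCP_HEADER

print(mss_from_mtu(576))    # 536  – default MSS
print(mss_from_mtu(1500))   # 1460 – typical MSS on an Ethernet link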

Q. What is Fragmentation? Why Fragmentation? What is MTU?


>Breaking up an IP datagram into several fragments is called fragmentation.
-The IP datagram has to traverse multiple networks with lower MTU values, so the datagram is fragmented at the
network boundaries.
[Fragmentation is the process of breaking an IP packet into smaller chunks; data is fragmented when transmitted
over a data link technology with a smaller MTU.
-The Maximum Transmission Unit (MTU) is the fixed upper limit on the size of the packets that can be sent in a
single frame.]

>Maximum Transmission Unit (MTU): The size of the largest datagram that can be transmitted over a physical
network is called MTU. Minimum MTU: 576 Bytes

-Different technologies have different MTU sizes.

Media MTU
Ethernet 1500
Ethernet Jumbo Frame 1500-9000
802.11 2272
802.5 4464
FDDI 4500

Q. What is the difference between segmentation and fragmentation?

Segmentation:
-Dividing the data stream into smaller pieces is called segmentation.
-Segmenting messages has two primary benefits:
1.First, by sending smaller individual pieces from source to destination, many different conversations can be
interleaved on the network.
2. Segmentation can increase the reliability of network communications. The separate pieces of each message need
not travel the same path across the network from source to destination. If a particular path becomes congested with
data or fails, individual pieces of the message can still be directed to the destination using an alternate path. If part of
the message fails to make it to the destination, only the missing parts need to be retransmitted.

Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks or data is fragmented when transmitted
over data link technology with a smaller MTU.

Q. What is the difference between MSS and MTU?


-In a single line: to overcome the MTU problem we do fragmentation, and to prevent fragmentation we use the MSS.
-To put a limit on how much data we can put in a TCP segment, we use the MSS.
-To put a limit on how much data payload we can put in a frame, we use the MTU.

Or

-The MSS decides how much data we can put in a TCP segment; the MTU decides how much data we can put in a frame.

Q. What TCP MSS Does and How It Works

- The TCP Maximum Segment Size (MSS) defines the maximum amount of data that a host is willing to accept in a
single TCP/IP datagram. This TCP/IP datagram may be fragmented at the IP layer. The MSS value is sent as a TCP
header option only in TCP SYN segments. Each side of a TCP connection reports its MSS value to the other side.
Contrary to popular belief, the MSS value is not negotiated between hosts. The sending host is required to limit the
size of data in a single TCP segment to a value less than or equal to the MSS reported by the receiving host.

- The way MSS now works is that each host will first compare its outgoing interface MTU with its own buffer and
choose the lowest value as the MSS to send. The hosts will then compare the MSS size received against their own
interface MTU and again choose the lower of the two values.

-Example: illustrates this additional step taken by the sender to avoid fragmentation on the local and remote wires.
Notice how the MTU of the outgoing interface is taken into account by each host (before the hosts send each other
their MSS values) and how this helps to avoid fragmentation.

Scenario 2

1. Host A compares its MSS buffer (16K) and its MTU (1500 - 40 = 1460) and uses the lower value as the
MSS (1460) to send to Host B.

2. Host B receives Host A's send MSS (1460) and compares it to the value of its outbound interface MTU - 40
(4422).

3. Host B sets the lower value (1460) as the MSS for sending IP datagrams to Host A.
4. Host B compares its MSS buffer (8K) and its MTU (4462-40 = 4422) and uses 4422 as the MSS to send to
Host A.

5. Host A receives Host B's send MSS (4422) and compares it to the value of its outbound interface MTU -40
(1460).

6. Host A sets the lower value (1460) as the MSS for sending IP datagrams to Host B.

1460 is the value chosen by both hosts as the send MSS for each other. Often the send MSS value will be the same
on each end of a TCP connection.
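
A minimal Python sketch of the selection logic in Scenario 2 (illustration only, using the host values quoted above):
each host advertises min(buffer, MTU − 40) and then actually uses the lower of the peer's advertised MSS and its own
MTU − 40.

def advertised_mss(buffer_bytes: int, mtu: int) -> int:
    return min(buffer_bytes, mtu - 40)        # 40 = 20-byte IP header + 20-byte TCP header

def effective_send_mss(peer_advertised: int, own_mtu: int) -> int:
    return min(peer_advertised, own_mtu - 40)

a_adv = advertised_mss(16 * 1024, 1500)       # Host A: 16K buffer, 1500-byte MTU -> 1460
b_adv = advertised_mss(8 * 1024, 4462)        # Host B: 8K buffer, 4462-byte MTU -> 4422
print(effective_send_mss(b_adv, 1500))        # Host A sends to B with MSS 1460
print(effective_send_mss(a_adv, 4462))        # Host B sends to A with MSS 1460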

MSS and Path MTU Discovery:

-The maximum segment size (MSS) is the largest amount of data, specified in bytes, that TCP is willing to receive
in a single segment. For best performance, the MSS should be set small enough to avoid IP fragmentation, which
can lead to packet loss and excessive retransmissions. To try to accomplish this, typically the MSS is announced by
each side using the MSS option when the TCP connection is established, in which case it is derived from
the maximum transmission unit (MTU) size of the data link layer of the networks to which the sender and receiver
are directly attached.

-Furthermore, TCP senders can use path MTU discovery to infer the minimum MTU along the network path
between the sender and receiver, and use this to dynamically adjust the MSS to avoid IP fragmentation within the
network.

MTU Path Discovery: [PMTUD]

-To find the lowest MTU along the network path so that frames won’t be fragmented or dropped (in case the DF
bit is set), or simply to avoid fragmentation.

-When a datagram is sent that is too large to be forwarded by a router over a physical link and it has the DF (Don’t
Fragment) bit set to prevent fragmentation, a Destination Unreachable message is sent back and the packet is discarded.

*Path MTU Discovery exploits this capability.


-The source will send a datagram that has the MTU of its own physical link. If this goes through without any error,
the largest MTU that can be used on the path has been found.
-If it gets back a Destination Unreachable – Fragmentation Needed and DF Set message, this means some other link
between it and the destination has a smaller MTU. It tries again using a smaller datagram size until the largest MTU
usable on the path is determined.

Or

- One of the message types defined in ICMPv4 is the Destination Unreachable message, which is returned under
various conditions where an IP datagram cannot be delivered. One of these situations is when a datagram is sent that
is too large to be forwarded by a router over a physical link but which has its Don’t Fragment (DF) flag set to
prevent fragmentation. In this case, the datagram must be discarded and a Destination Unreachable message sent
back to the source. A device can exploit this capability by testing the path with datagrams of different sizes, to see
how large they must be before they are rejected.

-The source node typically sends a datagram that has the MTU of its local physical link, since that represents an
upper bound on the MTU of any path to or from that device. If this goes through without any errors, it knows it can
use that value for future datagrams to that destination. If it gets back any Destination Unreachable - Fragmentation
Needed and DF Set messages, this means some other link between it and the destination has a smaller MTU. It tries
again using a smaller datagram size, and continues until it finds the largest MTU that can be used on the path.
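
A minimal Python simulation of this probing idea (illustration only; `path_mtus` is a made-up list of link MTUs, and
real implementations typically use the Next-Hop MTU value carried in the ICMP message rather than stepping down
blindly):

def discover_path_mtu(local_mtu: int, path_mtus: list, step: int = 100) -> int:
    size = local_mtu                          # start with the local link MTU
    while size > 0:
        if size <= min(path_mtus):            # no router had to fragment: this size works
            return size
        size -= step                          # "Fragmentation Needed" came back: probe smaller
    raise ValueError("no usable MTU found")

print(discover_path_mtu(1500, [1500, 1400, 1300]))   # -> 1300 with 100-byte probe steps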

UDP Header:

* UDP provides an unreliable connection. UDP doesn’t go through a three-way handshake to set up a connection—it
simply begins sending the data. Likewise, UDP doesn’t check to see whether sent segments were received by a
destination; in other words, it doesn’t use an acknowledgment process.
* UDP Header has fixed length of only 8 bytes and consisting of 4 fields.
Header Details:
1. Source Port [16 bits]:
- Identifies the sending application.
2. Destination Port [16 bits]:
-Identifies the receiving application.
3. Length [16 bits]:
-Defines the size of the UDP segment.
4. Checksum [16 bits]:
-Used for error checking; it is an Internet checksum computed over the complete UDP segment plus a pseudo-header (a checksum, not a CRC).
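
A minimal Python sketch of this fixed 8-byte layout (illustration only, with made-up port numbers):

import struct

def parse_udp_header(data: bytes) -> dict:
    src_port, dst_port, length, checksum = struct.unpack("!HHHH", data[:8])
    return {"src_port": src_port, "dst_port": dst_port, "length": length, "checksum": checksum}

# Build a sample header: source port 53, destination port 33000, length 36, checksum 0.
sample = struct.pack("!HHHH", 53, 33000, 36, 0)
print(parse_udp_header(sample))
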
IP Header

* The IP header is between 20 and 60 bytes long. The last 40 bytes can be filled with IP options. These are not vital;
they are sometimes required for control processing and can provide functions which are not normally contained in
the IP header.

1. Version (4 bits) (1/2 byte):


- Identifies the version of IP used to generate the datagram. For IPv4, this is of course the number 4.

-IPv4: 0100

-IPv6: 0110

2. Internet Header Length (IHL) (4 bits) (1/2 byte):


-It indicates the size of the IP header in 32-bit words. With no options, the header is 20 bytes long.

3. Differentiated Service/ Service Type/ Type Of Service (TOS) (1 bytes)


-This 8-bit field was originally called the Type of Service, or simply the TOS byte. The DiffServ (Differentiated
Services) byte can be used as a method of adding QoS to an IP network. In other words, the DiffServ byte can be used
for traffic prioritization where devices support it.
- A field designed to carry information to provide quality of service features, such as prioritized delivery, for IP
datagrams.
-This field now defines a set of differentiated services. The new interpretation is shown in Figure 7.3.
-In this interpretation, the first 6 bits make up the codepoint subfield and the last 2 bits are not used. The codepoint
subfield can be used in two different ways.
A. When the 3 right-most bits are 0s, the 3 left-most bits are interpreted the same as the precedence bits in the
service type interpretation. In other words, it is compatible with the old interpretation.
-The precedence defines the eight-level priority of the datagram (0 to 7) (2^3=8) in issues such as congestion. If a
router is congested and needs to discard some datagrams, those datagrams with lowest precedence are discarded
first. Some datagrams in the Internet are more important than the others.
-For example, a datagram used for network management is much more urgent and important than a datagram
containing optional information for a group.

Precedence & Differential Service Interpretation

B. When the 3 right-most bits are not all 0s, the 6 bits define 56 (64 − 8) (2^6 = 64) services based on the priority
assignment by the Internet or local authorities, according to Table 7.1.
-The first category contains 24 service types; the second and the third each contain 16.
-The first category is assigned by the Internet authorities (IETF).
-The second category can be used by local authorities (organizations).
- The third category is temporary and can be used for experimental purposes.
Note that these assignments have not yet been finalized.

4. Total Length (TL) (2 bytes):


-Specifies the total length of the IP datagram that is header and payload, in bytes. Since this field is 16 bits wide, the
maximum length of an IP datagram is 65,535 bytes.
Or
-After fragmenting, this field indicates the length of each fragment, not the length of the overall message.

5. Identification (2 bytes):
-It is used to identify and reassemble the fragments belonging to a particular IP datagram.
-When a datagram is fragmented, the value of this field, set by the sending host, is copied into each fragment.
-This field is used by the receiver to reassemble fragments without accidentally mixing fragments from different
datagrams. This is needed because fragments of multiple datagrams may arrive mixed together, since IP datagrams
can be received out of order from any device.
Or
-While the function of the Fragment Offset field is to identify the relative position of each fragment, it is the
Identification field that allows the receiving device to sort out which fragments comprise which block of data.
6. Flags (3 bits):
-Three control flags, two of which are used to manage fragmentation.
1. Reserved: Not used.
2. DF [Don’t Fragment]
-When set to 1, specifies that the datagram should not be fragmented. Since the fragmentation process is generally
“Invisible” to higher layers, most protocols don’t care about this and don’t set this flag. It is, however, used for
testing the maximum transmission unit (MTU) of a link.
3. MF [More Fragments]:
-When set to 0, indicates the last fragment in a datagram; when set to 1, indicates that more fragments are yet to
come. If no fragmentation is used for a datagram, then of course there is only one “fragment” (the whole datagram),
and this flag is 0. If fragmentation is used, all fragments but the last set this flag to 1 so the receiver knows when all
fragments have been sent.
Or
-3 bit field contains the flags that specify the function of the frame in terms of whether fragmentation has been
employed, additional fragments are coming, or this is the final fragment.

Bit Indicator RFC 791 Definition

0xx Reserved

x0x May Fragment

x1x Do Not Fragment

xx0 Last Fragment

xx1 More Fragments

Or

Reserved (1 bit): Not used

DF (Don’t Fragment): 1= Don’t Fragment

0= May Fragment

MF (More Fragment): 1= More fragments are coming

0= This is last Fragments

7. Fragment Offset (13 bits):


-It indicates the position (measured in 8-byte units) that the fragment’s data takes in the payload of the original IP
datagram.
Or
-The Fragment Offset field identifies the relative position of each fragment.

8. Time To Live (TTL) (1 byte):


-Short version: Specifies how long the datagram is allowed to “live” on the network, in terms of router hops. Each
router decrements the value of the TTL field (reduces it by one) prior to transmitting it. If the TTL field drops to
zero, the datagram is assumed to have taken too long a route and is discarded.
Or
-It determines how many hops an IP datagram can cover before being deleted by a router. At each router, the TTL
value is decreased by 1. As soon as the value becomes [0], the datagram is deleted. This prevents packets from
circulating around the network forever, causing unnecessary traffic.

9.Protocol (1 byte):
-Identifies the higher-layer protocol contained in the payload (e.g., TCP or UDP).

10. Header Checksum (2 byte):


-A checksum computed over the header to provide basic protection against corruption in transmission. It is
calculated by dividing the header bytes into words (a word is two bytes) and then adding them together.
-The data is not checksummed, only the header. At each hop, the device receiving the datagram does the same
checksum calculation and, on a mismatch, discards the datagram as damaged.
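
A minimal Python sketch of this calculation (illustration only; it follows the standard Internet-checksum procedure of
summing 16-bit words with end-around carry and taking the one's complement, and the sample header bytes are
made up):

def ip_header_checksum(header: bytes) -> int:
    if len(header) % 2:
        header += b"\x00"                         # pad to a whole number of 16-bit words
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back into the low 16 bits
    return ~total & 0xFFFF

# A 20-byte header with the checksum field (bytes 10-11) zeroed before computing.
sample = bytes.fromhex("4500003c1c4640004006" + "0000" + "ac10000aac100014")
print(hex(ip_header_checksum(sample)))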

11. Source Address (4 byte):


-It contains the IP address of the source host.

12. Destination Address (4 byte):


-It contains the IP address of the destination host.

13. Options (Variable):


-The IP header can be extended by up to 40 bytes of options for security and network management tasks.
-If the option field does not fill a multiple of 32 bits completely, it must be filled up with padding bits so that the
length of the IP header remains a multiple of 32 bits and can be expressed in the header length field.

Or
-Can have fields used for special purposes like loose source routing, strict source routing, record route, timestamping
etc. But IP options are rarely used these days.
or
-One or more of several types of options may be included after the standard headers in certain IP datagrams. 

IP Options

There may or may not be an option field. If there is one, it can vary in length.

The option field contains an Option-Type octet, an Option-Length octet and a variable number of Option-Data octets.

 Option-Type
o Copied Flag - 0 indicates that the option is NOT to be copied to each fragment if the datagram is fragmented. A 1
indicates that the option is to be copied.
o Option Class - 0 is used for Control (used normally) and 2 is used for debugging and measurement, e.g. for the
Internet Timestamp option.
o Option Number
 0 - End of Option List: a special case indicating the end of the option list; here the option field is just one octet, as
no length or data fields are present.
 1 - No Operation: again the option field is just one octet with no length or data fields.
 2 - Security: the length is 11 octets, and the various security codes can be found in RFC 791.
 3 - Loose Source Routing: IP routing based on information supplied by the source station, where the routers can
forward the datagram to any number of intermediate routers in order to get to the destination.
 4 - Internet Timestamp.
 7 - Record Route: records the route that a datagram takes.
 8 - Stream ID: has a length of 4 octets.
 9 - Strict Source Routing: IP routing based on information supplied by the source station, where the routers can
only forward the datagram to a directly connected router in order to get to the next hop indicated in the source route
path.
 Option-Length - variable; not present for the NOP and the End of Option List.
 Option-Data - variable; not present for the NOP and the End of Option List. See RFC 791 for the detail on the data
content for each of the options.

IP Options are not often used today; you may come across IP source routing (loose or strict) on Unix machines and
the like, perhaps for load balancing traffic where modern routing protocols are not being used.

14. Padding (variable):


-If one or more options are included, and the number of bits used for them is not a multiple of 32, enough zero bits
are added to “pad out” the header to a multiple of 32 bits (4 bytes).

15. Data (Variable):


-The data to be transmitted in the datagram, either an entire higher-layer message or a fragment of one.
Fragmentation:
-Fragmentation is the process of breaking an IP packet into smaller chunks. Data is fragmented when transmitted
over data link technology with a smaller MTU.

Example: Since some physical networks on the path between devices may have a smaller MTU than others, it may
be necessary to fragment more than once. For example, suppose the source device wants to send an IP message
12,000 bytes long. Its local connection has an MTU of 3,300 bytes. It will have to divide this message into four
fragments for transmission: three that are about 3,300 bytes long and a fourth remnant about 2,100 bytes long. 

-In this simple example, Device A is sending to Device B over a small internetwork consisting of one router and two
physical links. The link from A to the router has an MTU of 3,300 bytes, but from the router to B it is only 1,300
bytes. Thus, any IP datagrams over 1,300 bytes will need to be fragmented.

Q. Fragmentation is Good or Bad?

Fragmentation Issues and Concerns

Fragmentation is necessary to implement a network-layer internet that is independent of lower layer details, but
introduces significant complexity to IP.

*Remember that IP is an unreliable, connectionless protocol. IP datagrams can take any of several routes on their
way from the source to the destination, and some may not even make it to the destination at all. When we fragment a
message we make a single datagram into many, which introduces several new issues to be concerned with:

o Sequencing and Placement: The fragments will typically be sent in sequential order from the beginning of
the message to the end, but they won't necessarily show up in the order in which they were sent. The
receiving device must be able to determine the sequence of the fragments to reassemble them in the correct
order. In fact, some implementations send the last fragment first, so the receiving device will immediately
know the full size of the original complete datagram. This makes keeping track of the order of segments
even more essential. 

o Separation of Fragmented Messages: A source device may need to send more than one fragmented
message at a time; or, it may send multiple datagrams that are fragmented en route. This means the
destination may be receiving multiple sets of fragments that must be put back together. Imagine a box into
which the pieces from two, three or more jigsaw puzzles have been mixed and you understand this issue. 

o Completion: The destination device has to be able to tell when it has received all of the fragments so it
knows when to start reassembly (or when to give up if it didn't get all the pieces).

To address these concerns and allow the proper reassembly of the fragmented message, IP includes several fields in
the IP format header that convey information from the source to the destination about the fragments.

Bit Indicator RFC 791 Definition


0xx Reserved

x0x May Fragment

x1x Do Not Fragment

xx0 Last Fragment

xx1 More Fragments

IP Fragmentation

Regardless of what situation occurs that requires IP Fragmentation, the procedure followed by the device performing
the fragmentation must be as follows:

1. The device attempting to transmit the block of data will first examine the Flag field to see if the field is set
to the value of (x0x or x1x) (May Fragment or Do not Fragment). If the value is equal to (x1x) this
indicates that the data may not be fragmented, forcing the transmitting device to discard that data.
Depending on the specific configuration of the device, an Internet Control Message Protocol (ICMP)
Destination Unreachable -> Fragmentation required and Do Not Fragment Bit Set message may be
generated.
2. Assuming the flag field is set to (x0x), the device computes the number of fragments required to transmit
the data by dividing the amount of data by the MTU. This will result in "X" number of frames,
with all but the final frame being equal to the MTU for that network.
3. It will then create the required number of IP packets and copies the IP header into each of these packets so
that each packet will have the same identifying information, including the Identification Field.
4. The Flag field in the first packet, and all subsequent packets except the final packet, will be set to "More
Fragments." The final packet's Flag field will instead be set to "Last Fragment."
5. The Fragment Offset will be set for each packet to record the relative position of the data contained within
that packet.
6. The packets will then be transmitted according to the rules for that network architecture.
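
A minimal Python sketch of steps 2 and 5 (illustration only): split a payload for a given MTU into
(offset-in-8-byte-units, more-fragments, data-length) tuples, keeping every fragment's data length except the last a
multiple of 8 bytes.

IP_HEADER = 20

def fragment(payload_len: int, mtu: int) -> list:
    max_data = (mtu - IP_HEADER) // 8 * 8     # largest multiple of 8 that fits in one frame
    fragments, offset = [], 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = (offset + size) < payload_len
        fragments.append((offset // 8, more, size))
        offset += size
    return fragments

# The 12,000-byte message over a 3,300-byte-MTU link from the earlier example:
for frag in fragment(12_000, 3_300):
    print(frag)    # three 3,280-byte data fragments plus a final 2,160-byte fragment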

IP Fragment Reassembly

If a receiving device detects that IP Fragmentation has been employed, the procedure followed by the device
performing the Reassembly must be as follows:

1. The device receiving the data detects the Flag Field set to "More Fragments."
2. It will then examine all incoming packets for the same Identification number contained in the packet.
3. It will store all of these identified fragments in a buffer in the sequence specified by the Fragment Offset
Field.
4. Once the final fragment arrives, as indicated by the Flag field being set to "Last Fragment," the device will
attempt to reassemble the data in offset order.
5. If reassembly is successful, the packet is then sent to the ULP in accordance with the rules for that device.
6. If reassembly is unsuccessful, perhaps due to one or more lost fragments, the device will eventually time
out and all of the fragments will be discarded.
7. The transmitting device will then have to attempt to retransmit the data in accordance with its own
procedures.

Q. Which device can reassemble the packet?


-End devices reassemble the packet.

-It may be a host, a server, or a router (when the router itself is the final destination).

******************************************************************************************
Fragmentation-Related IP Datagram Header Fields

-When a sending device or router fragments a datagram, it must provide information that will allow the receiving
device to be able to identify the fragments and reassemble them into the datagram that was originally sent. This
information is recorded by the fragmenting device in a number of fields in the IP datagram header.

Total Length
After fragmenting, this field indicates the length of each fragment, not the length of the overall message.

Identification
A unique identifier is assigned to each message being fragmented. 

More Fragments
This flag is set to a 1 for all fragments except the last one, which has it set to 0. When the fragment with a value of 0
in the More Fragments flag is seen, the destination knows it has received the last fragment of the message.

Fragment Offset
This field solves the problem of sequencing fragments by indicating to the recipient device where in the overall
message each particular fragment should be placed.

*********************************************************************************************
***
Frame : Ethernet and IEEE 802.3 Header:-

1. Preamble:(7 byte) [Preamble + Start Of Frame = 7 +1 = 8 byte]

-56 bits of alternating 1’s and 0’s that synchronize communication on an Ethernet network.

- The preamble tells receiving stations that a frame is coming.

Start of Frame: (1 byte)

-Indicates the beginning of the frame

Or

-It indicates a valid frame is about to begin.

Or

--The Preamble (7 bytes) and Start Frame Delimiter (SFD) (1 byte) are used for synchronization between the
sending and receiving devices. The first 8 bytes of the frame are used to get the attention of the receiving nodes.
Essentially, the first few bytes tell the receivers to get ready to receive a new frame.

*The preamble and the start of frame are not considered part of the actual frame, or calculated as part of the
total frame size.

2. Destination Address: (6 byte)

-48 bit MAC address for the destination host.

3. Source Address: (6 byte)


-48 bit MAC address for the source host.

4. EtherType / Length: (2 byte)

-Value to indicate which upper layer protocol will receive the data after the Ethernet process is complete

Or

-It identifies the type of payload in the frame.

Length: (2 byte)

-In IEEE 802.3 frames, this field indicates the length of the payload (the data field), not the total frame length.

-When the field instead holds an EtherType value (Ethernet II), it indicates which protocol is encapsulated in the
payload of the Ethernet frame.

Ethertype Protocol
0x0800 IPv4
0x0806 ARP
0x86DD IPv6
0x8808 Ethernet Flow Control
0x8847 MPLS Unicast
0x8848 MPLS Multicast
0x88CC Link Layer Discovery Protocol (LLDP)
0x8906 Fibre Channel over Ethernet (FCoE)
0x9100 Q-in-Q

5. Data or Payload: (46-1500 byte)

-This is the PDU, typically an IPv4 packet, that is to be transported over the media.

6. Frame Check Sequence (FCS): (4 byte)

-A value used to check for damaged frames.

Or

-The absolute minimum frame size for Ethernet is 64 bytes (or 512 bits)(46 Data + 18 layer 2 header) including
headers.
*Runt:
-A frame that is smaller than 64 bytes will be discarded as a runt.
-The required fields in an Ethernet header add up to 18 bytes – thus, the frame payload must be a minimum of 46
bytes, to equal the minimum 64-byte frame size. If the payload does not meet this minimum, the payload is padded
with 0 bits until the minimum is met.
Note: If the optional 4-byte 802.1Q tag is used, the Ethernet header size will total 22 bytes, requiring a minimum
payload of 42 bytes.
-By default, the maximum frame size for Ethernet is 1518 bytes – 18 bytes of header fields, and 1500 bytes of
payload or 1522 bytes with the 802.1Qtag.
*Giant:
-A frame that is larger than the maximum will be discarded as a giant.
-With both runts and giants, the receiving host will not notify the sender that the frame was dropped. Ethernet relies
on higher-layer protocols, such as TCP, to provide retransmission of discarded frames.
-Some Ethernet devices support jumbo frames of 9216 bytes, which provide less overhead due to fewer frames.
Jumbo frames must be explicitly enabled on all devices in the traffic path to prevent the frames from being dropped.
-The 32-bit Cyclic Redundancy Check (CRC) field is used for error detection. A frame with an invalid CRC will be
discarded by the receiving device. This field is a trailer, and not a header, as it follows the payload.
-The 96-bit Interframe Gap is a required idle period between frame transmissions, allowing hosts time to prepare
for the next frame.
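
A minimal Python sketch of the size rules quoted above (illustration only), using the 18-byte header-plus-FCS
overhead and the 46/1500-byte payload limits:

HEADER_AND_FCS, MIN_PAYLOAD, MAX_PAYLOAD = 18, 46, 1500

def frame_size(payload_len: int) -> int:
    if payload_len > MAX_PAYLOAD:
        raise ValueError("payload too large: the frame would be a giant")
    padded = max(payload_len, MIN_PAYLOAD)    # small payloads are padded with 0 bits
    return padded + HEADER_AND_FCS

print(frame_size(1))      # 64   – the minimum Ethernet frame size
print(frame_size(1500))   # 1518 – the default maximum frame size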

What is the Minimum & Maximum size of Ethernet frame ?

Ethernet Frame:
The minimum size of an Ethernet frame is 64 bytes. The breakup of this size between the fields is: Destination
Address (6 bytes) + Source Address (6 bytes) + Frame Type (2 bytes) + Data (46 bytes) + CRC Checksum (4 bytes).
The minimum number of bytes passed as data in a frame must be 46 bytes. If the size of the data to be passed is less
than this, then padding bytes are added.
-The maximum size of an Ethernet frame is 1518 bytes. The breakup of this size between the fields is:
Destination Address (6 bytes) + Source Address (6 bytes) + Frame Type (2 bytes) + Data (1500 bytes) +
CRCChecksum(4bytes).

The maximum number of bytes of data that can be passed in a single frame is 1500 bytes.


ARP

ARP Header:
* ARP Packet Size/Header Length: 28 bytes

An analyzer capture of the ARP Request with its encapsulating frame.


Ethernet II, Src: 00:30:65:2c:09:a6, Dst: ff:ff:ff:ff:ff:ff
Destination: ff:ff:ff:ff:ff:ff (Broadcast)
Source: 00:30:65:2c:09:a6 (AppleCom_2c:09:a6)
Type: ARP (0x0806)
Address Resolution Protocol (request)
Hardware type: Ethernet (0x0001)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: request (0x0001)
Sender MAC address: 00:30:65:2c:09:a6 (AppleCom_2c:09:a6)
Sender IP address: 172.16.1.21 (172.16.1.21)
Target MAC address: 00:00:00:00:00:00 (00:00:00_00:00:00)
Target IP address: 172.16.1.33 (172.16.1.33)

- The broadcast address means that all devices on the data link will receive the frame and examine the
encapsulated packet. All devices except the target will recognize that the packet is not for them and will drop the
packet. The target will send an ARP Reply to the source address, supplying its MAC address.

An analyzer capture of the ARP Reply


Ethernet II, Src: 00:10:5a:e5:0e:e3, Dst: 00:30:65:2c:09:a6
Destination: 00:30:65:2c:09:a6 (AppleCom_2c:09:a6)
Source: 00:10:5a:e5:0e:e3 (3com_e5:0e:e3)
Type: ARP (0x0806)
Trailer: 15151515151515151515151515151515...
Address Resolution Protocol (reply)
Hardware type: Ethernet (0x0001)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: reply (0x0002)
Sender MAC address: 00:10:5a:e5:0e:e3 (3com_e5:0e:e3)
Sender IP address: 172.16.1.33 (172.16.1.33)
Target MAC address: 00:30:65:2c:09:a6 (AppleCom_2c:09:a6)
Target IP address: 172.16.1.21 (172.16.1.21)

* Devices on a LAN need a way to discover their neighbors so that frames might be transmitted to the correct
destination.

Q. What is the target IP address in ARP request and ARP reply packet?
A. Target IP Address in an ARP Request = IP address of the destination device.
- Target IP Address in an ARP Reply = IP address of the device that generated the ARP request.

Q. What is the target MAC address in ARP request and ARP reply packet?
A. Target MAC Address in an ARP Request = 00:00:00:00:00:00
- Target MAC Address in an ARP Reply = MAC address of the device that generated the ARP request.

* In the Ethernet frame carrying the ARP Request, the destination MAC address is ff:ff:ff:ff:ff:ff.

1. Hardware Type:(16 bit) (2 byte)

-Identify the type of hardware used.

Number Hardware Type


1 Ethernet
15 Frame Relay
16 ATM
17 HDLC
18 Fibre Channel
19 ATM
20 Serial Link

2. Protocol Type:(16 bit) (2 byte)

-Identifies the type of layer 3 address used. IPv4 is 2048 (0x0800 in hex).

-It identifies which upper-layer protocol is being carried.

For reference, common IP header Protocol field values (note: the ARP Protocol Type field itself carries an EtherType value such as 0x0800 for IPv4):

Dec Protocol Hex
1 ICMP 0x01
2 IGMP 0x02
4 IPv4 0x04
6 TCP 0x06
8 EGP 0x08
9 IGP 0x09
17 UDP 0x11
47 GRE/Generic Routing Encapsulation 0x2F
50 ESP/Encap Security Payload 0x32
51 AH/Authentication Header 0x33
88 EIGRP 0x58
89 OSPF 0x59
103 PIM/Protocol Independent Multicast 0x67
112 VRRP 0x70
115 L2TP/Layer Two Tunneling Protocol 0x73
137 MPLS in IP 0x89

3. Hardware Address Length: (8 bit)

-Specifies the length of the data link identifiers. MAC addresses would be [6].

4. Protocol Address Length:(8 bit)

-Specifies the length of the network address. IPv4 would be 4.

5. Operation/Opcode: (16 bit)

-Specifies whether the packet is an ARP Request (1) or an ARP Reply (2). Other values might also be found here,
indicating other uses for the ARP packet. Examples are Reverse ARP Request (3), Reverse ARP Reply (4),
Inverse ARP Request (8), and Inverse ARP Reply (9).

6. Sender Hardware/MAC Address: (6 byte)

-Layer 2 (MAC Address) address of the device sending the message.

7. Sender Protocol/IP Address: (4 byte)

-The protocol address (IPv4 address) of the device sending the message

8. Target Hardware/MAC Address: (6 byte)


-Layer 2 (MAC) address of the intended receiver. This field is ignored in requests.
[0000.0000.0000 / 00:00:00:00:00:00]

9. Target Protocol/IP Address: (4 byte)

-The protocol address (IPv4 Address) of the intended receiver
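
A minimal Python sketch that packs these nine fields into the 28-byte ARP request shown in the analyzer capture
above (illustration only; the MAC and IP addresses are the ones from that capture):

import struct

def build_arp_request(sender_mac: bytes, sender_ip: bytes, target_ip: bytes) -> bytes:
    return struct.pack(
        "!HHBBH6s4s6s4s",
        1,                    # hardware type: Ethernet
        0x0800,               # protocol type: IPv4
        6, 4,                 # hardware address length, protocol address length
        1,                    # opcode: 1 = ARP Request
        sender_mac, sender_ip,
        b"\x00" * 6,          # target MAC is unknown in a request
        target_ip,
    )

pkt = build_arp_request(bytes.fromhex("0030652c09a6"),      # 00:30:65:2c:09:a6
                        bytes([172, 16, 1, 21]),
                        bytes([172, 16, 1, 33]))
print(len(pkt))   # 28 bytes – the ARP packet size noted above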

ARP:

- Address Resolution Protocol (ARP) resolves an IPv4 address (32-bit logical address) to the physical address
(48-bit MAC address).
- The purpose of ARP is to find out the MAC address of a device in your Local Area Network (LAN) for the
corresponding IPv4 address with which a network application is trying to communicate.

-ARP uses a local broadcast (255.255.255.255) at layer 3 and FF:FF:FF:FF:FF:FF at layer 2 to discover
neighboring devices.
- Basically stated, you have the IP address you want to reach, but you need a physical (MAC) address to send the
frame to the destination at layer 2. ARP resolves an IP address of a destination to the MAC address of the
destination on the same data link layer medium, such as Ethernet.
-Remember that for two devices to talk to each other in Ethernet (as with most layer 2 technologies), the data
link layer uses a physical address (MAC) to differentiate the machines on the segment. When Ethernet
devices talk to each other at the data link layer, they need to know each other’s MAC addresses.

*Why ARP?
-Internetworked devices communicate logically using layer 3 addresses, but the actual transmission between
devices takes place using layer 2 (hardware) addresses. This is why address resolution is required.
-There are two types of address mapping: 1. Static, 2. Dynamic.
-An IP packet may pass through different physical networks; that is why we need both an IP and a MAC address.
-Static mapping entries have some drawbacks: a machine could change its NIC, or the machine could be moved
from one physical network to another, which requires the static mapping table to be updated periodically.
-ARP is a combination of Request/Reply.
-In a Wireshark capture we see that there is no IP header after the Ethernet header; the ARP fields follow directly.
-The ARP packet is encapsulated in an Ethernet frame.
-An ARP Request is a broadcast; an ARP Reply is a unicast.

Q. ARP works at which layer and Why?


A. - ARP is layer 2. The reason is that the broadcast is sent at layer 2 (the data link layer) and ARP will normally
not traverse to layer 3 (the network layer). However, it can provide extra features to the layer 3 protocol.
The truth is that not all protocols fit the OSI model exactly, because after all it's just a model. If you really want to
push it into a spot I'd say ARP is a layer 2.5 protocol. It fits layer 2, but doesn't fit layer 3 completely.

Or

-ARP is a layer 2 protocol. Why? Because it doesn’t define any layer 3 properties; for example, an ARP packet is
never routed across internetwork nodes.

-If you check a Wireshark trace, you can see that the ARP packet directly follows the Ethernet header.

Q. Is ARP part of the Ethernet frame?


A. No.
-It is encapsulated in an Ethernet frame; it is not part of the Ethernet header itself.

Proxy ARP:
-When two devices are in the same layer 3 network/subnet but separated into two physical networks by a router,
both devices will think they are on the same local network.
-So when device A wants to send an IP datagram to device B, it will send an ARP broadcast, thinking B is on the same
network. However, the router stops the broadcast, so B will not receive A’s request.
[In normal case, when both devices are in different network they first check the IP address, they found the
destination device is in different network. So they will send IP datagram to its default gateway/MAC address of
default gateway]
-To overcome this problem, a router is configured as a “Proxy ARP” device, which responds to device A on behalf of
device B with its own interface MAC address, and vice versa for device B.
-Enabled by default on Cisco routers.
[In the case of a static route with an exit interface, the next router will do Proxy ARP whenever it receives an ARP
request for the network of the other end’s interface]

Or

-Proxy ARP allows the router to respond with its own MAC address in an ARP reply for a device on a different
network segment. Proxy ARP is used when you need to move a device from one segment to another but cannot
change its current IP addressing information.

Advantage/Disadvantage:
-The main advantage of proxying is that it is transparent to the hosts on the different physical network segment.
-There is a serious downside to using Proxy ARP. Using Proxy ARP will definitely increase the amount of traffic on
your network segment, and hosts will have a larger ARP table than usual in order to handle all the IP-to-MAC-address
mappings. Proxy ARP is configured on all Cisco routers by default; you should disable it if you don’t
think you are going to use it.

Or

Advantages of Proxy ARP

The main advantage of proxy ARP is that it can be added to a single router on a network and does not disturb the
routing tables of the other routers on the network.

Proxy ARP must be used on the network where IP hosts are not configured with a default gateway or do not have
any routing intelligence.
Disadvantages of Proxy ARP

These are some of the disadvantages:

 It increases the amount of ARP traffic on your segment.

 Hosts need larger ARP tables in order to handle IP-to-MAC address mappings.

 It does not work for networks that do not use ARP for address resolution.

Or

*The ARP table for three devices connected to the same network: a Cisco router, a Microsoft Windows host,
and a Linux host.

Martha#show arp
Protocol Address Age (min) Hardware Addr Type Interface
Internet 10.158.43.34 2 0002.6779.0f4c ARPA Ethernet0
Internet 10.158.43.1 - 0000.0c0a.2aa9 ARPA Ethernet0
Internet 10.158.43.25 18 00a0.24a8.a1a5 ARPA Ethernet0
Internet 10.158.43.100 6 0000.0c0a.2c51 ARPA Ethernet0
Martha#

AGE:

-Notice the Age column. As this column would indicate, ARP information is removed from the table after a certain
time to prevent the table from becoming congested with old information. Cisco routers hold ARP entries for four
hours (14,400 seconds); this default can be changed. The following example changes the ARP timeout to 30
minutes (1800 seconds):

-Cisco switches hold ARP entries for 5 minutes.

Martha(config)# interface ethernet 0


Martha(config-if)# arp timeout 1800
________________________________________________________________________
C:\WINDOWS>arp -a

Interface: 148.158.43.25
Internet Address Physical Address Type
10.158.43.1 00-00-0c-0a-2a-a9 dynamic
10.158.43.34 00-02-67-79-0f-4c dynamic
10.158.43.100 00-00-0c-0a-2c-51 dynamic
_________________________________________________________________________
Linux:~# arp -a
Address HW type HW address Flags Mask
10.158.43.1 10Mbps Ethernet 00:00:0C:0A:2A:A9 C *
10.158.43.100 10Mbps Ethernet 00:00:0C:0A:2C:51 C *
10.158.43.25 10Mbps Ethernet 00:A0:24:A8:A1:A5 C *
Linux:~#
ARP entries might also be permanently placed in the table. To statically map 172.21.5.131 to hardware address
0000.00a4.b74c, with a SNAP (Subnetwork Access Protocol) encapsulation type, use the following:

Martha(config)# arp 172.21.5.131 0000.00a4.b74c snap

The command clear arp-cache forces a deletion of all dynamic entries from the ARP table. It also clears the fast-
switching cache and the IP route cache.

Gratuitous ARP:

Q. What is GARP and how it will be useful?

--A device can generate what is called a gratuitous ARP. A gratuitous ARP is an ARP reply that is generated
without a corresponding ARP request. This is commonly used when a device might change its IP address or MAC
address and wants to notify all other devices on the segment about the change so that the other devices have the
correct information in their local ARP tables.

-It is an ARP request with the same source and destination IP, used to find IP conflicts in a network.
-Gratuitous ARP is useful for:
1. Detecting IP conflicts:
-When a device receives an “ARP request” containing a source IP that matches its own, it knows there is an IP
conflict.
2. Updating neighbours after an address move: when an IP moves from one machine to another, the other machines
still hold an ARP entry mapping that IP to the old MAC. When the IP becomes associated with the new MAC, a
broadcast gratuitous “ARP Reply” informs the neighbouring machines about the MAC address change for that IP.
3. Informing switches that a particular MAC is reachable on a particular switch port.

-A host might occasionally issue an ARP Request with its own IPv4 address as the target address. These ARP
Requests, known as gratuitous ARPs, have several uses:

 A gratuitous ARP might be used for duplicate address checks. A device that issues an ARP Request
with its own IPv4 address as the target and receives an ARP Reply from another device will know
that the address is a duplicate.
 A router running Hot Standby Router Protocol (HSRP) that has just taken over as the active router
from another router on a subnet issues a gratuitous ARP to update the ARP caches of the subnet's
hosts.
 A gratuitous ARP might be used to advertise a new data-link identifier. This use takes advantage of the fact
that when a device receives an ARP Request for an IPv4 address that is already in its ARP cache, the cache
will be updated with the sender's new hardware address.

- It is disabled by default in IOS but can be enabled with the command ip gratuitous-arps.

Or

In more advanced networking situations you may run across something known as Gratuitous ARP (GARP). A
gratuitous ARP is something that is often performed by a computer when it is first booted up. When a NIC is first
powered on, it will do what’s known as a gratuitous ARP and automatically ARP out its MAC address to the entire
network. This allows any switches to know the location of the physical device, and DHCP servers to know where
to send an IP address if needed and requested.
Gratuitous ARP is also used by many high availability routing and load balancing devices.  Routers or load
balancers are often configured in an HA (high availability) pair to provide optimum reliability and maximum
uptime.   Usually these devices will be configured in an Active/Standby pair.  One device will be active while the
second will be sleeping waiting for the active device to fail. Think of it as an understudy for the lead role in a
movie.  If the leading lady gets sick, the understudy will gladly and quickly take her place in the lime light.

When a failure occurs, the standby device will assert itself as the new active device and issue a gratuitous ARP out
to the network, instructing all other devices to send traffic to its MAC address instead of the failed device's.
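
Below is a minimal Python/Scapy sketch of the kind of gratuitous ARP a standby device could send when it takes over
a shared IP. This is an addition to these notes, not taken from the quoted sources; the interface name, IP, and MAC
are made-up example values, and Scapy plus root privileges are assumed.

# Hedged sketch: announce that SHARED_IP now lives behind NEW_MAC.
from scapy.all import Ether, ARP, sendp      # requires the scapy package; run as root

SHARED_IP = "192.168.1.1"            # the IP address being taken over (example value)
NEW_MAC   = "02:02:02:02:02:02"      # MAC of the node that just became active (example value)

garp = (Ether(src=NEW_MAC, dst="ff:ff:ff:ff:ff:ff") /        # L2 broadcast so every host and switch sees it
        ARP(op=1,                                            # opcode 1 = request (gratuitous ARP request form)
            hwsrc=NEW_MAC, psrc=SHARED_IP,                   # sender hardware/protocol address = ourselves
            hwdst="ff:ff:ff:ff:ff:ff", pdst=SHARED_IP))      # target IP = our own IP; no reply is expected

sendp(garp, iface="eth0", verbose=False)                     # "eth0" is an assumption; pick the correct NIC

Hosts that already have an entry for 192.168.1.1 refresh it with the new MAC, and switches re-learn which port the
MAC sits behind, which is exactly the failover behaviour described above.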

As you can see, ARP and its variants play a vital role in helping the network run smoothly and in helping packets
find their way across the network.


Or

Gratuitous ARPs (GARP) are useful for four reasons:

 They can help detect IP conflicts. When a machine receives an ARP request containing a source IP that
matches its own, then it knows there is an IP conflict.
 They assist in the updating of other machines' ARP tables. Clustering solutions utilize this when they move
an IP from one NIC to another, or from one machine to another. Other machines maintain an ARP table that
contains the MAC associated with an IP. When the cluster needs to move the IP to a different NIC, be it on the same
machine or a different one, it reconfigures the NICs appropriately then broadcasts a gratuitous ARP reply to inform
the neighboring machines about the change in MAC for the IP. Machines receiving the ARP packet then update their
ARP tables with the new MAC.
 They inform switches of the MAC address of the machine on a given switch port, so that the switch knows
that it should transmit packets sent to that MAC address on that switch port.
 Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous ARP
to preload the ARP tables of all other local hosts. Thus, a gratuitous ARP will tell us that that host just has had a link
up event, such as a link bounce, a machine just being rebooted or the user/sysadmin on that host just configuring the
interface up. If we see multiple gratuitous ARPs from the same host frequently, it can be an indication of bad
Ethernet hardware/cabling resulting in frequent link bounces. 

Example traffic flow:

 Two nodes in a cluster are configured to share a common IP address 192.168.1.1. Node A has a hardware
address of 01:01:01:01:01:01 and node B has a hardware address of 02:02:02:02:02:02.
 Assume that node A currently has IP address 192.168.1.1 already configured on its NIC. At this point,
neighboring devices know to contact 192.168.1.1 using the MAC 01:01:01:01:01:01.
 Using the heartbeat protocol, node B determines that node A has died.
 Node B configures a secondary IP on an interface with ifconfig eth0:1 192.168.1.1.
 Node B issues a gratuitous ARP with send_arp eth0 192.168.1.1 02:02:02:02:02:02 192.168.1.255. All
devices receiving this ARP update their table to point to 02:02:02:02:02:02 for the IP address 192.168.1.1.

In conclusion, GARP is mainly used to avoid IP conflicts and to keep ARP cache entries pointing at the proper MAC
address (an ARP announcement) whenever an interface, server, or port starts answering for the same IP address with
a new hardware (MAC) address.

Or

Transmission of GARP Packets Overview


Gratuitous Address Resolution Protocol (GARP) requests provide duplicate IP address detection.
A GARP request is a broadcast request for a router’s own IP address. If a router sends an
Address Resolution Protocol (ARP) request for its own IP address and no ARP replies are
received, the router’s assigned IP address is not being used by other nodes. If a router sends an
ARP request for its own IP address and an ARP reply is received, the router’s assigned IP
address is already being used by another node.
A GARP is an ARP broadcast in which the source and destination MAC addresses are the same.
It is used primarily by a host to inform the network about its IP address. A spoofed gratuitous
ARP message can cause network mapping information to be stored incorrectly, causing a
network malfunction.
GARP is a method of establishing an association between a logical IP address and a hardware
address whenever an interface is created or the state of the interface shifts to the operationally up
state. On the other hand, ARP dynamically binds the IP address (the logical address) to the
correct MAC address. The device that transmits a GARP populates both the source and
destination fields with its own information. The devices that receive the GARP requests might
update the ARP caches with the new information contained in the GARP packets.
By default, updating the ARP cache on GARP replies is disabled on the router. On Ethernet
interfaces, you can enable transmission of GARP packets on a specific interface by using the ip
gratuitous-arps command in Interface Configuration mode and specify the number of GARP
packets to be sent, depending on the changes to IP interface settings. If an IP address is
configured directly on the physical Ethernet interface and a VLAN major interface is not
configured on the Ethernet interface for VLAN encapsulation, transmission of GARP packets
does not take place.
When you create an IP interface or the administrative status of the interface transitions to the up
state, three GARP packets are transmitted for each IP address. Each GARP packet is sent at an
interval of 10 seconds. By default, the router generates GARP requests. An IP interface can
support up to a maximum of 16 secondary IP addresses. Therefore, with the maximum number
of secondary IP addresses configured, a total of 48 GARP messages for each IP interface are
sent. In a fully scaled environment, such a transmission of a large number of GARP messages
creates a storm of GARP packets in the entire broadcast domain, which contains dynamic
subscriber line access multiplexers (DSLAMs) and other BRAS devices within the same Metro
Ethernet network. In such a network, reducing the number of GARP packets transmitted for
interface changes reduces performance impact on the router and improves the processing
efficiency of the router.
GARP Packets Transmission Scenarios

The following scenarios describe the manner in which GARP packets are generated, based on the
default configuration settings for transmission of GARP packets and the network topology:
 Three GARP packets are sent when you configure a new primary or secondary IP address on an IP interface.

 Three GARP packets are transmitted when an IP interface state transitions from the down state to the up state.

 Three GARP packets are sent for each IP address of the numbered interface when a new unnumbered interface associated with the numbered interface is
created.
 Three GARP packets are sent for all the unnumbered interfaces whenever any secondary IP address on the numbered interface that it is associated with is
modified.
 Three GARP packets are sent for all the unnumbered interfaces for all the IP addresses whenever the primary IP address of the numbered interface that it is
associated with is modified.
In all of these scenarios, you can modify the number of GARP packets to be transmitted to be
less than three by using the ip gratuitous-arps command.
The following two scenarios describe the method of transmission of GARP packets, regardless of
whether the sending of GARP packets is disabled. In such cases, even if you configure the no ip
gratuitous-arps command to disable sending GARPs, these packets are sent to denote the changes
in system and interface conditions.
 One GARP packet is always sent for each virtual address of a VRRP interface. If you configure VRRP on a virtual router and associate the IP address with
the VRRP instance ID (VRID) using the ip vrrp command in Interface Configuration mode, one GARP packet is always transmitted for each virtual
address of the interface enabled for VRRP.
 Three GARP packets are always sent when a failover occurs to the secondary link of the redundant port on GE-2 and GE-HDE line modules that are paired
with GE-2 SFP I/O modules, 2xGE APS I/O SFP modules, and GE-2 APS I/O SFP modules, with physical link redundancy.

Or

Gratuitous ARP is a sort of "advance notification": it updates the ARP cache of other systems before they ask
for it (no ARP request) or updates outdated information.

When talking about gratuitous ARP, the packets are actually special ARP request packets, not ARP reply
packets as one would perhaps expect. Some reasons for this are explained in RFC 5227.
The gratuitous ARP packet has the following characteristics:

 Both source and destination IP in the packet are the IP of the host issuing the gratuitous ARP

 The destination MAC address is the broadcast MAC address (ff:ff:ff:ff:ff:ff)


 This means the packet will be flooded to all ports on a switch

 No reply is expected
Gratuitous ARP is used for some reasons:

 Update ARP tables after a MAC address for an IP changes (failover, new NIC, etc.)

 Update MAC address tables on L2 devices (switches) that a MAC address is now on a different port

 Send gratuitous ARP when interface goes up to notify other hosts about new MAC/IP bindings in
advance so that they don't have to use ARP requests to find out
 When a reply to a gratuitous ARP request is received you know that you have an IP address conflict in
your network
HSRP, VRRP, etc. use gratuitous ARP to update the MAC address tables on L2 devices (switches). There is also
the option to use the burned-in MAC address for HSRP instead of the "virtual" one; in that case the gratuitous ARP
would also update the ARP tables on L3 devices/hosts.

Or

-Most hosts on a network will send out a Gratuitous ARP when they are initialising their IP stack. This Gratuitous
ARP is an ARP request for their own IP address and is used to check for a duplicate IP address. If there is a
duplicate address then the stack does not complete initialisation.
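
As a rough illustration of that duplicate-address check (a hedged sketch added to these notes, not from the quoted
text), a host could broadcast an ARP request for the address it intends to use and treat any answer as a conflict.
Scapy and root privileges are assumed; a stricter pre-configuration probe per RFC 5227 would use 0.0.0.0 as the
sender IP.

from scapy.all import Ether, ARP, srp        # requires scapy; run as root

def ip_already_in_use(candidate_ip, iface="eth0", wait=2):
    # Broadcast "who has candidate_ip?"; the target IP is the address we want to claim.
    probe = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(op=1, pdst=candidate_ip)
    answered, _ = srp(probe, iface=iface, timeout=wait, verbose=False)
    return len(answered) > 0                 # any reply means another host already owns the IP

# Example: refuse to finish configuring the address if somebody answered.
if ip_already_in_use("192.168.10.2"):        # example address only
    print("Duplicate address detected - do not complete initialisation")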

Or

Gratuitous ARP

Gratuitous ARP could mean both gratuitous ARP request or gratuitous ARP reply. Gratuitous in this case
means a request/reply that is not normally needed according to the ARP specification (RFC 826) but could be
used in some cases.

-A gratuitous ARP request is an Address Resolution Protocol request packet where the source and destination IP
are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast
address ff:ff:ff:ff:ff:ff. Ordinarily, no reply packet will occur. A gratuitous ARP reply is a reply to which no
request has been made.

Gratuitous ARPs are useful for four reasons:

 They can help detect IP conflicts. When a machine receives an ARP request containing a source IP
that matches its own, then it knows there is an IP conflict.
 They assist in the updating of other machines' ARP tables. Clustering solutions utilize this when they
move an IP from one NIC to another, or from one machine to another. Other machines maintain an
ARP table that contains the MAC associated with an IP. When the cluster needs to move the IP to a
different NIC, be it on the same machine or a different one, it reconfigures the NICs appropriately
then broadcasts a gratuitous ARP reply to inform the neighboring machines about the change in MAC
for the IP. Machines receiving the ARP packet then update their ARP tables with the new MAC.
 They inform switches of the MAC address of the machine on a given switch port, so that the switch
knows that it should transmit packets sent to that MAC address on that switch port.
 Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous
ARP to preload the ARP tables of all other local hosts. Thus, a gratuitous ARP will tell us that that
host just has had a link up event, such as a link bounce, a machine just being rebooted or the
user/sysadmin on that host just configuring the interface up. If we see multiple gratuitous ARPs from
the same host frequently, it can be an indication of bad Ethernet hardware/cabling resulting in frequent
link bounces.
Examples

 The networking stack in many operating systems will issue a gratuitous ARP if the IP or MAC address
of a network interface changes, to inform other machines on the network of the change so they can
report IP address conflicts, to let other machines update their ARP tables, and to inform switches of
the MAC address of the machine. The networking stack in many operating systems will also issue a
gratuitous ARP on an interface every time the link to that interface has been brought to the up state.
The gratuitous ARP then is used to preload the ARP table on all local hosts of the possibly new
mapping between MAC and IP address (for failover clusters that do not take over the MAC address)
or to let the switch relearn behind which port a certain MAC address resides (for failover clusters
where you do pull the MAC address over as well or when you simply just move the network cable
from one port to another on a normal nonclustered host)
 The High-Availability Linux Project utilizes a command-line tool called send_arp to perform the
gratuitous ARP needed in their failover process. A typical clustering scenario might play out like the
following:
o Two nodes in a cluster are configured to share a common IP address 192.168.1.1. Node A has
a hardware address of 01:01:01:01:01:01 and node B has a hardware address
of 02:02:02:02:02:02.
o Assume that node A currently has IP address 192.168.1.1 already configured on its NIC. At
this point, neighboring devices know to contact 192.168.1.1 using the
MAC 01:01:01:01:01:01.
o Using the heartbeat protocol, node B determines that node A has died.
o Node B configures a secondary IP on an interface with ifconfig eth0:1 192.168.1.1.
o Node B issues a gratuitous ARP
with send_arp eth0 192.168.1.1 02:02:02:02:02:02 192.168.1.255. All devices receiving this
ARP update their table to point to 02:02:02:02:02:02 for the IP address 192.168.1.1.
Example Traffic
Ethernet II, Src: 02:02:02:02:02:02, Dst: ff:ff:ff:ff:ff:ff
Destination: ff:ff:ff:ff:ff:ff (Broadcast)
Source: 02:02:02:02:02:02 (02:02:02:02:02:02)
Type: ARP (0x0806)
Trailer: 000000000000000000000000000000000000
Address Resolution Protocol (request/gratuitous ARP)
Hardware type: Ethernet (0x0001)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: request (0x0001)
Sender MAC address: 02:02:02:02:02:02 (02:02:02:02:02:02)
Sender IP address: 192.168.1.1 (192.168.1.1)
Target MAC address: ff:ff:ff:ff:ff:ff (Broadcast)
Target IP address: 192.168.1.1 (192.168.1.1)
0000 ff ff ff ff ff ff 02 02 02 02 02 02 08 06 00 01 ................
0010 08 00 06 04 00 01 02 02 02 02 02 02 c0 a8 01 01 ................
0020 ff ff ff ff ff ff c0 a8 01 01 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 00 00 00 00 ............
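
The frame above can be rebuilt byte for byte with a few lines of standard-library Python, which makes the field
layout easy to see. This sketch is an addition to these notes, not part of the original capture.

import socket, struct

def mac(text):                                   # "02:02:02:02:02:02" -> 6 raw bytes
    return bytes(int(part, 16) for part in text.split(":"))

src_mac   = mac("02:02:02:02:02:02")
broadcast = mac("ff:ff:ff:ff:ff:ff")
own_ip    = socket.inet_aton("192.168.1.1")      # c0 a8 01 01

frame  = broadcast + src_mac + struct.pack("!H", 0x0806)    # Ethernet: dst, src, EtherType = ARP
frame += struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)          # htype, ptype, hlen, plen, opcode 1 = request
frame += src_mac + own_ip                                   # sender MAC / sender IP
frame += broadcast + own_ip                                 # target MAC = broadcast, target IP = own IP
frame += b"\x00" * (60 - len(frame))                        # zero padding = the trailer in the dump

print(frame.hex(" "))   # same bytes as the capture above, minus the offset column (Python 3.8+)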

Discussion

What's a good choice for example MACs? 02:02:02:02:02:02 was picked above. Is there a better one?

-The '02' byte at the start of the MAC indicates that this is a 'locally administered address' which has been set
by the local user or system. Most normal ethernet devices are allocated a MAC with 00 as the most significant
byte.


Note that some devices will respond to the gratuitous request and some will respond to the gratuitous reply. If
one is trying to write software for moving IP addresses around that works with all routers, switches and IP
stacks, it is best to send both the request and the reply. These are documented by RFC 2002 and RFC 826.
Software implementing the gratuitous ARP function can be found in the Linux-HA source tree. A request may
be preceded by a probe to avoid polluting the address space. For an ARP Probe the Sender IP address field is
0.0.0.0. ARP probes were not considered by the original ARP RFC.

-Does the target MAC address ever matter in requests? I gather Solaris uses ff:ff:ff:ff:ff:ff in its standard ARP
requests and most other OSes use 00:00:00:00:00:00 instead. Is the use of the ff:ff:ff:ff:ff:ff MAC in the target
address above significant in any way? Obviously having a destination address of ff:ff:ff:ff:ff:ff is critical.

RFC 3927, which is based on Gratuitous ARP, specifies 00:00:00:00:00:00 for the target MAC. However many
simple TCP/IP stacks have an API which permits the specification of only one MAC value, and when the
Ethernet Destination field is set to 'broadcast', the ARP target is also set 'broadcast'. Note: Normal ARP
requests have the same value in the ARP Packet Target MAC address as in the Ethernet Destination field.

- How can we explain if the source Ethernet MAC address is different from sender's MAC address in a GARP
packet? The ARP packet value is for the ARP machine, the Ethernet value is for the Ethernet machine.
Originally, they were intended to be redundant information, targeted at different layers. It is possible to
consider a hypothetical network appliance that routes ARP packets, where the source Ethernet MAC address
changes as the packet is routed, but normally ARP packets are not routed.

Q. What is the target IP address in a GARP request and a GARP reply packet?
A. Target IP address in a GARP Request: the IP of the machine issuing the packet.
Target IP address in a GARP Reply: also the IP of the machine issuing the packet (source and destination IP are the same).
*A gratuitous ARP request is an Address Resolution Protocol request packet where the source and destination IP
are both set to the IP of the machine issuing the packet and the destination MAC is the broadcast
address ff:ff:ff:ff:ff:ff. Ordinarily, no reply packet will occur. A gratuitous ARP reply is a reply to which no
request has been made.
RFC 3927, which is based on Gratuitous ARP, specifies 00:00:00:00:00:00 for the target MAC.
Or

Target MAC address: ff:ff:ff:ff:ff:ff (Broadcast)

Q. Packet structure of GARP?

Ethernet II, Src: 02:02:02:02:02:02, Dst: ff:ff:ff:ff:ff:ff


Destination: ff:ff:ff:ff:ff:ff (Broadcast)
Source: 02:02:02:02:02:02 (02:02:02:02:02:02)
Type: ARP (0x0806)
Trailer: 000000000000000000000000000000000000
Address Resolution Protocol (request/gratuitous ARP)
Hardware type: Ethernet (0x0001)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: request (0x0001)
Sender MAC address: 02:02:02:02:02:02 (02:02:02:02:02:02)
Sender IP address: 192.168.1.1 (192.168.1.1)
Target MAC address: ff:ff:ff:ff:ff:ff (Broadcast)
Target IP address: 192.168.1.1 (192.168.1.1)
0000 ff ff ff ff ff ff 02 02 02 02 02 02 08 06 00 01 ................
0010 08 00 06 04 00 01 02 02 02 02 02 02 c0 a8 01 01 ................
0020 ff ff ff ff ff ff c0 a8 01 01 00 00 00 00 00 00 ................
0030 00 00 00 00 00 00 00 00 00 00 00 00 ............

Or

Gratuitous ARP could indicate either a gratuitous ARP request or gratuitous ARP (GARP) reply. A gratuitous ARP request is an
ARP request packet, in which the source and destination IP are both set to the IP of the machine, which is issuing the packet and
the destination MAC is the ff:ff:ff:ff:ff:ff broadcast address. Ordinarily, the reply packet will not occur.
Example of GARP request traffic:
Frame 1: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
Ethernet II, Src: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
Address Resolution Protocol (request/gratuitous ARP)
Hardware type: Ethernet (1)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: request (1)
[Is gratuitous: True]
Sender MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Sender IP address: 192.168.10.2 (192.168.10.2)
Target MAC address: Broadcast (ff:ff:ff:ff:ff:ff)
Target IP address: 192.168.10.2 (192.168.10.2)
0000 ff ff ff ff ff ff 00 c0 9f 38 a3 d5 08 06 00 01 .........8......
0010 08 00 06 04 00 01 00 c0 9f 38 a3 d5 c0 a8 0a 02 .........8......
0020 ff ff ff ff ff ff c0 a8 0a 02 ..........
A gratuitous ARP reply is an ARP reply packet, in which the source and destination IP are both set to the IP of the machine,
which is issuing the packet and the target MAC is the sender MAC. A gratuitous ARP reply is a reply, to which no request has
been made.
Example of GARP reply traffic:
Frame 1: 42 bytes on wire (336 bits), 42 bytes captured (336 bits)
Ethernet II, Src: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5), Dst: Broadcast (ff:ff:ff:ff:ff:ff)
Address Resolution Protocol (reply/gratuitous ARP)
Hardware type: Ethernet (1)
Protocol type: IP (0x0800)
Hardware size: 6
Protocol size: 4
Opcode: reply (2)
[Is gratuitous: True]
Sender MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Sender IP address: 192.168.10.2 (192.168.10.2)
Target MAC address: QuantaCo_38:a3:d5 (00:c0:9f:38:a3:d5)
Target IP address: 192.168.10.2 (192.168.10.2)

0000 ff ff ff ff ff ff 00 c0 9f 38 a3 d5 08 06 00 01 .........8......
0010 08 00 06 04 00 02 00 c0 9f 38 a3 d5 c0 a8 0a 02 .........8......
0020 00 c0 9f 38 a3 d5 c0 a8 0a 02 ...8......
Gratuitous ARPs are useful for the following reasons:

 They can help to detect IP conflicts.

 They assist in the updating of ARP tables of other machines. Clustering solutions utilize this when they move an IP from one
NIC to another or from one machine to another. Other machines maintain an ARP table, which contains the MAC address
associated with an IP address. 

When the cluster needs to move the IP to a different NIC, either on the same machine or a different one, it re-configures the NICs
appropriately and then broadcasts a gratuitous ARP reply to inform the neighboring machines about the change in the MAC
address for the IP address. Machines that receive the ARP packet then update their ARP tables with the new MAC address.

 They inform the switches of the MAC address of the machine on a given switch port; so that the switch knows that it should
transmit packets that are sent to the MAC address on the switch port.

 Every time an IP interface or link goes up, the driver for that interface will typically send a gratuitous ARP to preload the ARP
tables of all the other local hosts.
CAUSE:
By default, the received gratuitous ARP reply on SRX devices will not update the ARP cache.

SOLUTION:
To enable the updating of the ARP cache for received gratuitous ARP replies, configure gratuitous-arp-reply under the
interfaces hierarchy level. For example:
[edit]
root@FW_GL_QH_SRX1400# show interfaces 
ge-0/0/0 {
    gratuitous-arp-reply;
        unit 0 {
            family inet {
                address 192.168.10.1/24;
            }

        }
}

Or

ARP, RARP, Proxy ARP, Gratuitous ARP and IP Redirect

This overview of ARP and its variations draws on Jeff Doyle's Routing TCP/IP books (Vol. 1 and 2), which are worth
reading cover to cover. ARP may sound too basic, but people who are supposed to know it ask how it works often
enough that it is worth revisiting, so here you'll find ARP and some variations of it.

ARP

Address Resolution Protocol (ARP) is used to map a known IP Address to an unknown data-link identifier (for example

MAC Address). The ARP Request will contain:

 Source IPv4 Address;

 Source data-link identifier address (MAC Address for example);

 Destination IPv4 Address;

 Destination data-link identifier (MAC Address in our example) will be set to 00:00:00:00:00:00.

Check this ARP Request capture:

Ethernet II, Src: 00:30:b8:83:cb:40, Dst: ff:ff:ff:ff:ff:ff 


    Destination: ff:ff:ff:ff:ff:ff (Broadcast) 
  
    Source: 00:30:b8:83:cb:40 (00:30:b8:83:cb:40 )   
    Type: ARP (0x0806) 
    Trailer: FFE000200020003035800000FFE000100030
Address Resolution Protocol (request)
    Hardware type: Ethernet (0x0001) 
    Protocol type: IP (0x0800) 
    Hardware size: 6 
    Protocol size: 4 
    Opcode: request (0x0001) 
    Sender MAC address: 00:30:b8:83:cb:40 (00:30:b8:83:cb:40) 
    Sender IP address: 201.6.115.1 (201.6.115.1) 
    Target MAC address: 00:00:00_00:00:00 (00:00:00:00:00:00) 
    Target IP address: 201.6.115.254 (201.6.115.254)

By default Cisco routers hold ARP entries for 4 hours. You can change this value on a per-interface basis with the
command: arp timeout <value in seconds>. Example:

interface fastethernet 0/0
 arp timeout 3600

RARP

RARP is the opposite of ARP: it maps an IPv4 Address to a known MAC Address. For example, old workstations (dumb
terminals) could have their firmware programmed to send a RARP request as soon as they were powered up, and a RARP
Server would answer this RARP request with the workstation's IP Address (airline companies used this a lot in the past).

Hmmm... looks like DHCP, right?! Yeah, it looks like it, but it ISN'T, ok?! ;)

RARP Request will contain:

 Source and Destination data-link identifier (MAC Address in this example) will be the local host MAC Address;

 Source and Destination IP Address will be set to 0.0.0.0.

Check this example capture of a RARP Traffic:

Ethernet II, Src: Marquett_12:dd:88, Dst: ff:ff:ff:ff:ff:ff 


    Destination: ff:ff:ff:ff:ff:ff (Broadcast) 
  
    Source: Marquett_12:dd:88 (00:00:a1:12:dd:88) 
    Type: ARP (0x0806) 
    Trailer: FFE000200020003035800000FFE000100030
Address Resolution Protocol (reverse request)
    Hardware type: Ethernet (0x0001) 
    Protocol type: IP (0x0800) 
    Hardware size: 6 
    Protocol size: 4 
    Opcode: reverse request (0x0003) 
  
    Sender MAC address: Marquett_12:dd:88 (00:00:a1:12:dd:88)  
    Sender IP address: 0.0.0.0 (0.0.0.0) 
  
    Target MAC address: Marquett_12:dd:88 (00:00:a1:12:dd:88)  
    Target IP address: 0.0.0.0 (0.0.0.0)

---> EXAMPLE TAKEN FROM Wireshark Wiki <---


Proxy ARP

A Proxy ARP enabled Router answers ARP requests intended for another machine. It does that by making the local host
believe that the Router is the "owner" of that IP Address; the local host then forwards the traffic to the Router,
and the Router is responsible for "routing" the packets to the real destination.

For example, Host A wants to send traffic to Host B. The two hosts believe they are on the same subnet, but they sit
in different broadcast domains. Host A sends an ARP Request for Host B's IP Address, and the Router connected to both
segments answers Host A's request using its own MAC Address instead of Host B's MAC Address.

Now when Host A wants to transmit traffic to Host B, it will send it to the Router's MAC Address and the Router will
just forward the traffic to Host B. That's why it is called "Proxy ARP".

It's used on networks where the hosts are not configured with a default gateway.

Oh yeah... it's enabled by default in Cisco IOS, and you can disable it on a per-interface basis with the command:
no ip proxy-arp

Gratuitous ARP

In some circumstances a Host (Router, Switch, Computer, etc.) might send an ARP Request with its own address as the
target address... But, to its own address?! Why would a host do that?!

Well... there are some reasons... for example:

 It's used to update other devices' ARP Tables (when a device receives an ARP Request with an IP that is already
in its cache, the cache will be updated with the new information);

 An HSRP Router that takes over the active role will send a Gratuitous ARP out on the network to update the cache
tables of other devices;

 To check for duplicate addresses (if the host receives a response, it will know that somebody is using the same IP
Address).

You can check this Gratuitous ARP traffic captured with Wireshark (the best opensource sniffer out there):

Ethernet II, Src: 02:02:02:02:02:02, Dst: ff:ff:ff:ff:ff:ff 


    Destination: ff:ff:ff:ff:ff:ff (Broadcast) 
    Source: 02:02:02:02:02:02 (02:02:02:02:02:02) 
    Type: ARP (0x0806) 
    Trailer: 000000000000000000000000000000000000 
Address Resolution Protocol (request/gratuitous ARP) 
    Hardware type: Ethernet (0x0001) 
    Protocol type: IP (0x0800) 
    Hardware size: 6 
    Protocol size: 4 
    Opcode: request (0x0001) 
    Sender MAC address: 02:02:02:02:02:02 (02:02:02:02:02:02) 
    Sender IP address: 192.168.1.1 (192.168.1.1) 
    Target MAC address: ff:ff:ff:ff:ff:ff (Broadcast) 
    Target IP address: 192.168.1.1 (192.168.1.1)

---> EXAMPLE TAKEN FROM Wireshark Wiki <---

IP Redirect: 

IP Redirect is used by routers to notify hosts of another router on the data link that should be used for a particular

destination.

For example, Router A and Router B are connected to the same Ethernet segment, as is Host C. Host C has Router A set
as its default gateway, so Host C sends its packets to Router A. Router A sees that the destination address of the
packet is reachable via Router B, so Router A must forward the packet back out the same interface on which it was
received, towards Router B. Router A does that, and it also sends an ICMP Redirect to Host C informing it to use
Router B to reach this particular destination next time.

IP Redirect is enabled by default on IOS Routers and can be disabled on a per-interface basis with the command:
no ip redirects.


Q. Difference between ARP, RARP, and Inverse ARP?

RARP:
-It resolves a MAC address to an IP address.
-RARP is sort of the reverse of ARP. In ARP, the device knows the layer 3 address, but not the data link layer
address. With RARP, the device doesn't have an IP address and wants to acquire one. The only address that this
device has is a MAC address. Related protocols are BOOTP and DHCP; DHCP is a replacement for RARP.
-In this example, PC-D doesn’t have an IP address and wants to acquire one. It generates a data link layer broadcast
(FF:FF:FF:FF:FF:FF) with an encapsulated RARP request. This example assumes that the RARP is associated with
BOOTP. If there is a BOOTP server on the segment, and if it has an IP address for this machine, it will respond. In
this example, the BOOTP server, 10.1.1.5, has an address (10.1.1.4) and assigns this to PC-D, sending this address
as a response to PC-D.

Inverse-ARP:
-It resolves an IP address from a DLCI, as used in Frame Relay.
-Inverse ARP allows a router to send a Frame Relay frame across a VC with its layer 3 addressing information. The
destination can then use this, along with the incoming DLCI number, to reach the advertiser.
OR
-Inverse ARP obtains the layer 3 address of another station from a layer 2 address, such as the DLCI in a Frame Relay
network. It is primarily used in Frame Relay and ATM networks. Whereas ARP translates layer 3 addresses to layer
2 addresses, Inverse ARP does the opposite.

Dynamic Mapping:
-Dynamic address mapping relies on Inverse ARP to resolve a next-hop network protocol address to a local DLCI
value. The Frame Relay router sends out Inverse ARP requests on its PVCs to discover the protocol address of the
remote device connected to the Frame Relay network.

or

RARP (Reverse Address Resolution Protocol):


It is a protocol used by a machine to find its own logical address (IP address). Now you might be wondering
why a machine wouldn't know its own IP address. To answer your question, some machines have just enough
hardware to perform their primary function, and don't have enough ROM or are diskless, and hence have nowhere
to save a configuration file with information about their IP address.

So how does RARP do what it does?


 A machine broadcasts a RARP request with its physical address (usually obtained from its NIC) over
the network.
 A machine receiving the request, if it knows the logical address of the sender machine, sends a
unicast RARP reply to the sender machine with that logical address. This receiving machine usually has
the IP addresses of all the machines on the network.

Now you might be wondering why the receiver machine has all the IP addresses of all the machines on the
network. The reason is, you need a RARP client program to make a request and a RARP server program to
respond to the requests.

Now for a small shocker – RARP is almost obsolete :O What!? So, I did all this typing and you did all that
reading for nothing!? Not at all. It’s nice to know about RARP because we get to understand
what DHCP (which is one of the protocols replacing RARP) does even better.
ICMP Header:

 All ICMP packets have an 8-byte header and variable-sized data section. The first 4 bytes of the header have
fixed format, while the last 4 bytes depend on the type/code of that ICMP packet.
Header Details:

1. Type Field (8 bits): identifies the particular message.
2. Code Field (8 bits): specifies the meaning of the message within that type.
3. Checksum (16 bits): provides error detection coverage for the entire ICMP message.
4. Message Body/Data/Variable Field (up to 128 bits): contains the specific fields used to implement each
message type.
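
To make the header layout concrete, here is a small standard-library Python sketch (added to these notes, not from
the quoted source) that computes the ICMP checksum and builds an Echo Request; the identifier and sequence values
simply mirror the ping capture shown later in this section.

import struct

def icmp_checksum(message: bytes) -> int:
    # Internet checksum: one's-complement sum of all 16-bit words, then complemented.
    if len(message) % 2:
        message += b"\x00"                                    # pad odd-length data
    total = sum(struct.unpack("!%dH" % (len(message) // 2), message))
    total = (total & 0xFFFF) + (total >> 16)                  # fold the carries back in
    total += total >> 16
    return ~total & 0xFFFF

def build_echo_request(ident=0x0a40, seq=0, payload=b"\x00" * 56):
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)       # Type=8, Code=0, checksum placeholder
    csum   = icmp_checksum(header + payload)                  # checksum covers the whole ICMP message
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

packet = build_echo_request()
print(len(packet), "bytes, verify:", icmp_checksum(packet))   # a correctly checksummed packet verifies to 0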

Table 1-6. ICMP packet types and code fields.

Type  Code  Name
0     0     ECHO REPLY
3           DESTINATION UNREACHABLE
      0       Network Unreachable
      1       Host Unreachable
      2       Protocol Unreachable
      3       Port Unreachable
      4       Fragmentation Needed and Don't Fragment Flag Set
      5       Source Route Failed
      6       Destination Network Unknown
      7       Destination Host Unknown
      8       Source Host Isolated
      9       Destination Network Administratively Prohibited
      10      Destination Host Administratively Prohibited
      11      Destination Network Unreachable for Type of Service
      12      Destination Host Unreachable for Type of Service
4     0     SOURCE QUENCH (deprecated)
5           REDIRECT
      0       Redirect Datagram for the Network (or Subnet)
      1       Redirect Datagram for the Host
      2       Redirect Datagram for the Network and Type of Service
      3       Redirect Datagram for the Host and Type of Service
6     0     ALTERNATE HOST ADDRESS
8     0     ECHO
9     0     ROUTER ADVERTISEMENT
10    0     ROUTER SELECTION
11          TIME EXCEEDED
      0       Time to Live Exceeded in Transit
      1       Fragment Reassembly Time Exceeded
12          PARAMETER PROBLEM
      0       Pointer Indicates the Error
      1       Missing a Required Option
      2       Bad Length
13    0     TIMESTAMP
14    0     TIMESTAMP REPLY
15    0     INFORMATION REQUEST (Obsolete)
16    0     INFORMATION REPLY (Obsolete)
17    0     ADDRESS MASK REQUEST (Near-obsolete)
18    0     ADDRESS MASK REPLY (Near-obsolete)
30    -     TRACEROUTE

*Analyzer captures of two of the most well-known ICMP messages, Echo Request and Echo Reply, which are
used by the ping function.

Example 1-11. ICMP Echo message, shown with its IPv4 header.
Internet Protocol, Src Addr: 172.16.1.21 (172.16.1.21),
Dst Addr: 198.133.219.25 (198.133.219.25)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 84
Identification: 0xabc3 (43971)
Flags: 0x00
Fragment offset: 0
Time to live: 64
Protocol: ICMP (0x01)
Header checksum: 0x8021 (correct)
Source: 172.16.1.21 (172.16.1.21)
Destination: 198.133.219.25 (198.133.219.25)
Internet Control Message Protocol
Type: 8 (Echo (ping) request)
Code: 0
Checksum: 0xa297 (correct)
Identifier: 0x0a40
Sequence number: 0x0000
Data (56 bytes)
Example 1-12. ICMP Echo Reply.
Internet Protocol, Src Addr: 198.133.219.25 (198.133.219.25),
Dst Addr: 172.16.1.21 (172.16.1.21)
Version: 4
Header length: 20 bytes
Differentiated Services Field: 0x00 (DSCP 0x00: Default; ECN: 0x00)
Total Length: 84
Identification: 0xabc3 (43971)
Flags: 0x00
Fragment offset: 0
Time to live: 242
Protocol: ICMP (0x01)
Header checksum: 0xce20 (correct)
Source: 198.133.219.25 (198.133.219.25)
Destination: 172.16.1.21 (172.16.1.21)
Internet Control Message Protocol
Type: 0 (Echo (ping) reply)
Code: 0
Checksum: 0xaa97 (correct)
Identifier: 0x0a40
Sequence number: 0x0000
Data (56 bytes)

Or

The Type field is used to identify the type of message, and each type uses the Code field differently. The Variable
field may contain an Identifier and a Sequence Number, plus information such as subnet masks, IP addresses etc.,
again depending on the type of message.

Message Types

All the ICMP messages are listed below (notice the gaps, this does not mean that some are missing!) along with any
additions within the Variable field:

 Type 0 - Echo Reply - this is the Echo reply from the end station which is sent as a result of the Type 8

Echo. The Variable field is made up of a 2-octet Identifier and a 2-octet Sequence Number. The Identifier

matches the Echo with the Echo Reply and the sequence number normally increments by one for each Echo

sent. These two numbers are sent back to the Echo issuer in the Echo Reply.

 Type 3 - Destination Unreachable - the source is told that a problem has occurred when delivering a
packet. The most common codes are as follows:

o Code 0 - Net Unreachable - sent by a router to a host if the router does not know a route to a

requested network.

o Code 1 - Host Unreachable - sent by a router to a host if the router can see the requested network

but not the destination node.


o Code 2 - Protocol Unreachable - this would only occur if the destination host was reached but

was not running UDP or TCP.

o Code 3 - Port Unreachable - this can happen if the destination host was up and the TCP/IP was

running but a particular service such as a web server that uses a specific port was not running.

o Code 4 - Cannot Fragment - sent by a router if the router needed to fragment a packet but the Do

not fragment (DF) bit was set in the IP header.

o Code 5 - Source Route Failed - IP Source Routing is one of the IP Options.

 Type 4 - Source Quench - the source is sending data too fast for the receiver (Code 0), the buffer has filled

up, slow down!

 Type 5 - Redirect - the source is told that there is another router with a better route for a particular packet

i.e. this gateway checks its routing table and sees that another router exists on the same network with a

more direct route. The Codes are assigned as follows:

o Code 0 - Redirect datagrams for the network

o Code 1 - Redirect datagrams for the host

o Code 2 - Redirect datagrams for the Type of Service and the network

o Code 3 - Redirect datagrams for the Type of Service and the host

All 4 octets of the Variable Field are used for the gateway IP address where this better router resides and

packets should therefore be sent.

 Type 8 - Echo Request - this is sent by Ping (Packet Internet Groper) to a destination in order to check

connectivity. The Variable field is made up of a 2-octet Identifier and a 2-octet Sequence Number. The

Identifier matches the Echo with the Echo Reply and the sequence number normally increments by one for

each Echo sent. These two numbers are sent back to the Echo issuer in the Echo Reply.

 Type 11 - Time Exceeded - the packet has been discarded as it has taken too long to be delivered. This
examines the TTL field in the IP header, and the TTL-exceeded code is one of the two codes used for this
type. Traceroute over UDP uses the TTL field to good effect. A Code value of 0 means that the Time to
Live was exceeded whilst the datagram was in transit. A value of 1 means that the Fragment Reassembly
Time was exceeded.
 Type 12 - Parameter Problem - identifies an incorrect parameter on the datagram (Code 0). There is then

a 1 octet Pointer field created in the Variable part of the ICMP packet. This pointer indicates the octet

within the IP header where an error occurred. The numbering starts at 1 for the TOS field.

 Type 13 - Timestamp request - this gives the round trip time to a particular destination. The Variable

Field is made up of two 16-bit fields and three 32-bit fields:

o Identifier - as with the Echo/Echo Reply

o Sequence Number - as with the Echo/Echo Reply

o Originate Timestamp - Time in milliseconds since midnight within the request as it was sent out.

o Receive Timestamp - Time in milliseconds since midnight as the receiver receives the message.

o Transmit Timestamp - Time in milliseconds since midnight within the reply as it was sent out.

The Identifier and Sequence Number field are used to match timestamp requests with replies.

 Type 14 - Timestamp reply - this gives the round trip time to a particular destination.

 Type 15 - Information Request - this allows a host to learn the network part of an IP address on its subnet

by sending a message with the source address in the IP header filled and all zeros in the destination address

field. Uses the two 16-bit Identifier and Sequence Number fields.

 Type 16 - Information Reply - this is the reply containing the network portion. These two are an

alternative to RARP. Uses the two 16-bit Identifier and Sequence Number fields.

 Type 17 - Address mask request - request for the correct subnet mask to be used.

 Type 18 - Address mask response - reply with the correct subnet mask to be used.

You can ping an IP broadcast address e.g. for the 10.1.1.0/24 subnet the broadcast address would be 10.1.1.255. You
will then receive replies from any stations that are live on that subnet.

ICMP:
-ICMP is used to send error and control information between TCP/IP devices at the Internet layer.
-ICMP includes many different messages that devices can generate or respond to. Here is a brief list of these
messages: Address Reply, Address Request, Destination Unreachable, Echo, Echo Reply, Information Reply,
Information Request, Parameter Problem, Redirect, Subnet Mask Request, Time Exceeded, Timestamp, and
Timestamp Reply.
-Two common applications that use ICMP are ping and traceroute.
-Ping uses an ICMP echo message to test connectivity to a remote device.

ICMP Message Classes, Types and Codes 


ICMP messages are used to allow the communication of different types of information between IP devices on an
internetwork. The messages themselves are used for a wide variety of purposes.
ICMP messages are divided into two classes:
1. Error Messages
-Error messages that are used to report problem conditions.
2. Informational Messages
-Informational messages that are used for diagnostics, testing and other purposes.
#

Q. Explain various ICMP messages?

1. Error Messages
-Destination Unreachable Messages
-Source Quench Messages
-Time Exceeded Messages
-Redirect Messages
-Parameter Problem Messages
2. Informational Messages
-Echo (Request) and Echo Reply Messages
-Timestamp (Request) and Timestamp Reply Messages
-Router Advertisement and Router Solicitation Messages
-Address Mask Request and Reply Messages
-Traceroute Messages

Host Confirmation
An ICMP Echo Message can be used to determine if a host is operational. The local host sends an ICMP Echo
Request to a host. The host receiving the echo message replies with the ICMP Echo Reply, as shown in the figure.
This use of the ICMP Echo messages is the basis of the ping utility.
Unreachable Destination or Service
The ICMP Destination Unreachable message can be used to notify a host that the destination or service is unreachable. When a
host or gateway receives a packet that it cannot deliver, it may send an ICMP Destination Unreachable packet to the
host originating the packet. The Destination Unreachable packet will contain codes that indicate why the packet
could not be delivered.
Among the Destination Unreachable codes are:
0 = net unreachable
1 = host unreachable
2 = protocol unreachable
3 = port unreachable
>Codes for net unreachable and host unreachable are responses from a router when it cannot forward a packet.

Q. Difference between destination host unreachable and destination network unreachable?

Network Unreachable:
-If a router receives a packet for which it does not have a route, it may respond with an ICMP Destination
Unreachable with a code = 0, indicating net unreachable.

Host Unreachable:
- If a router receives a packet for which it has an attached route but is unable to deliver the packet to the host on the
attached network, the router may respond with an ICMP Destination Unreachable with a code = 1, indicating that
the network is known but the host is unreachable.

Protocol and Port Unreachable:


>The codes 2 and 3 (protocol unreachable and port unreachable) are used by an end host to indicate that the TCP
segment or UDP datagram contained in a packet could not be delivered to the upper layer service.
-When the end host receives a packet with a Layer 4 PDU that is to be delivered to an unavailable service, the host
may respond to the source host with an ICMP Destination Unreachable with a code = 2 or code = 3, indicating that
the service is not available.
-The service may not be available because no daemon is running providing the service or because security on the
host is not allowing access to the service.
Time Exceeded
-An ICMP Time Exceeded message is used by a router to indicate that a packet cannot be forwarded because the
TTL field of the packet has expired.
-If a router receives a packet and decrements the TTL field in the packet to zero, it discards the packet. The router
may also send an ICMP Time Exceeded message to the source host to inform the host of the reason the packet was
dropped.
-It is also sent when the fragment reassembly time is exceeded.

Route Redirection
-A router may use the ICMP Redirect Message to notify the hosts on a network that a better route is available for a
particular destination. This message may only be used when the source host is on the same physical network as both
gateways. If a router receives a packet for which it has a route and for which the next hop is attached to the same
interface as the packet arrived, the router may send an ICMP Redirect Message to the source host. This message will
inform the source host of the next hop contained in a route in the routing table.
Source Quench
The ICMP Source Quench message can be used to tell the source to temporarily stop sending packets. If a router
does not have enough buffer space to receive incoming packets, a router will discard the packets. If the router has to
do so, it may also send an ICMP Source Quench message to source hosts for every message that it discards.
A destination host may also send a source quench message if datagrams arrive too fast to be processed.
When a host receives an ICMP Source Quench message, it reports it to the Transport layer. The source host can then
use the TCP flow control mechanisms to adjust the transmission.

Q. What are the various instances of getting “Request timed out”?


-1. Request timeout when the router's interface is shut down (see Scenario 1 below).
-2. Request timeout when an access-list blocks the ICMP traffic (see Scenario 4 below).

Q. When we get destination unreachable message and when request time out?

1. Request Timeout when Router’s interface is shutdown.


*Scenario 1: R2’s fa0/0 shutdown
-ping 70.1.1.2 >> ..... >> Since the interface is shut down, R1 does not get any response and times out; it displays
dots.
2. Destination Unreachable when router does not have route for particular network.
*Scenario 2: R2 does not have route to 70.1.1.0/24 network
-ping 70.1.1.2 >> UUUUU >> host unreachable generated by R2 (type 3, code 1) if the route to the destination host on
the directly connected network is not available (as well as when the host is not there and therefore does not respond
to the ARP request).

3. Destination Unreachable when Access list is configured


*Scenario 3: ICMP ping to 70.1.1.2 is denied on R3 interface fa1/0 outbound.
#access-list 107 deny icmp host 70.1.1.1 host 70.1.1.2
Ping 70.1.1.2 >> UUUUU >> (Type 3, Code 13) administratively prohibited unreachable from 60.1.1.1 or 2

4. Request Timeout when Access-list blocking ICMP traffic.


*Scenario 4: Blocking ICMP administratively – prohibited reply to R1 and R2 outbound int fa0/0
#access-list 105 deny icmp any any administratively-prohibited
Ping 70.1.1.2 >> ..... >> This time we get dots instead of U's. If the router receives an ICMP destination unreachable
within the default timeout of 2 seconds, it will display UUUUU. Here, R1 does not receive it within the timeout
because of the access list, so it displays dots [.....].

PING (Packet Internet Groper):-


-Ping uses ICMP echo messages to determine:
>whether remote host is active or inactive
>the round trip delay
>packet loss

Or

-One of the most common applications that uses ICMP is ping. Ping uses a few ICMP messages, including echo,
echo request, destination unreachable, and others.
-Ping is used to test whether or not a destination is available. A source generates an ICMP echo packet.
-If the destination is available, it will respond with an echo reply packet.
-If an intermediate router doesn’t know how to reach the destination, it will respond with a destination
unreachable message.
-However, if the router knows how to reach the destination, but the destination host doesn’t respond to the
echo packets, you’ll see a request timed out message.
Or

* Ping uses multiple sets of Echo and Echo Reply messages, along with considerable internal logic, to allow an
administrator to determine all of the following, and more:
- Whether or not the two devices can communicate; 
- Whether congestion or other problems exist that might allow communication to succeed sometimes but
cause it to fail in others, seen as packet loss—if so, how bad the loss is; 
- How much time it takes to send a simple ICMP message between devices, which gives an indication of the
overall latency between the hosts, and also indicates if there are certain types of problems.

When the utility is invoked with no additional options, default values are used for parameters such as what
size message to send, how many messages to be sent, how long to wait for a reply, and so on. The utility will
transmit a series of Echo messages to the host and report back whether or not a reply was received for each;
if a reply is seen, it will also indicate how long it took for the response to be received. When the program is
done, it will provide a statistical summary showing what percentage of the Echo messages received a reply,
and the average amount of time for them to be received.

-Below shows an example using the ping command on a Windows XP computer (mine!), which by default sends
four 32-byte Echo messages and allows four seconds (4 sec) before considering an Echo message lost.
#M – could not fragment
? - Unknown packet type
& - Packet lifetime exceeded.

* Why can’t I ping ?


1. Routing Issue
2. Interface down
3. Packet Filter (ACL’s)
4. ARP issue – if the corresponding host is down and we don't receive an ARP reply, the ARP entry in the ARP table
shows "Incomplete"; if there is no usable ARP entry for a particular host, we can't ping it.
-Pinged device should also know, how to send reply to source host. Ex: both device should have a route to reach
each other.
-Delay – If reply (Echo Reply) did not come within default timeout of 2 seconds we get request timeout message.
(Note: the average round-trip time is more than two seconds.)
-High input queue drops – when a packet enters the router, the router attempts to forward it at interrupt level; if a
match can't be found in the appropriate cache table, the packet is queued in the input queue. If the rate of packet
processing is too low, the input queue gets full and packets are dropped.

Methods of Diagnosing Connectivity Problems Using ping

*Some examples of how ping can be used in this way:


-Internal Device TCP/IP Stack Operation: By performing a ping on the device’s own address, you can verify that
its internal TCP/IP stack is working. This can also be done using the standard IP loopback address, 127.0.0.1. 
-Local Network Connectivity: If the internal test succeeds, it’s a good idea to do a ping on another device on the
local network, to verify that local communication is possible. 
-Local Router Operation: If there is no problem on the local network, it makes sense to ping whatever local router
the device is using to make sure it is operating and reachable. 
-Domain Name Resolution Functionality: If a ping performed on a DNS domain name fails, you should try it with
the device’s IP address instead. If that works, this implies either a problem with domain name configuration or
resolution. 
-Remote Host Operation: If all the preceding checks succeed, you can try pinging a remote host to see if it
responds. If it does not, you can try a different remote host; if that one works, it is possible that the problem is
actually with the first remote device itself and not with your local device.

Round Trip Time (RTT)


Using traceroute provides round trip time (RTT) for each hop along the path and indicates if a hop fails to respond.
The round trip time (RTT) is the time a packet takes to reach the remote host and for the response from the
host to return. An asterisk (*) is used to indicate a lost packet.
This information can be used to locate a problematic router in the path. If we get high response times or data losses
from a particular hop, this is an indication that the resources of the router or its connections may be stressed.

Ping Options and Parameters


-They allow ping to be used for more extensive or specific types of testing.
- For example, ping can be set in a mode where it sends Echo messages continually, to check for an intermittent
problem over a long period of time. You can also increase the size of the messages sent or the frequency with which
they are transmitted, to test the ability of the local network to handle large amounts of traffic.

Table 286: Common Windows ping Utility Options and Parameters

Option / Parameter   Description
-a                   If the target device is specified as an IP address, force the address to be resolved to a DNS host name and displayed.
-f                   Sets the Don't Fragment bit in the outgoing datagram.
-i <ttl-value>       Specifies the TTL value to be used for outgoing Echo messages.
-j <host-list>       Sends the outgoing messages using the specified loose source route.
-k <host-list>       Sends the outgoing messages using the indicated strict source route.
-l <buffer-size>     Specifies the size of the data field in the transmitted Echo messages.
-n <count>           Tells the utility how many Echo messages to send.
-r <count>           Specifies the use of the Record Route IP option and the number of hops to be recorded. As with the corresponding UNIX "-R" option, the traceroute utility is usually preferable.
-s <count>           Specifies the use of the IP Timestamp option to record the arrival time of the Echo and Echo Reply messages.
-t                   Sends Echo messages continuously until the program is interrupted.
-w <timeout>         Specifies how long the program should wait for each Echo Reply before giving up, in milliseconds (default is 4000, for 4 seconds).

Traceroute:
-Traceroute, sometimes called trace, is an application that will list the IP addresses of the routers along the
way to the destination, displaying the path the packet took to reach the destination.

-Some traceroute applications (Windows OS) use ICMP messages , while others (Linux) use UDP to transport their
messages.

Or

Ping is used to indicate the connectivity between two hosts. Traceroute (tracert) is a utility that allows us to observe
the path between these hosts. The trace generates a list of hops that were successfully reached along the path.

*This list can provide us with important verification and troubleshooting information.
- If the data reaches the destination, then the trace lists the interface on every router in the path.
-If the data fails at some hop along the way, we have the address of the last router that responded to the trace. This is
an indication of where the problem or security restrictions are.

TTL (Time To Live):


-Protects against routing loops.
-Since IP datagrams travel router to router, it is possible that routing loops are created; the TTL field is used to
overcome this.
-Nowadays this field is filled with a "Max Hop Count", so each time a router processes the datagram, it reduces the
TTL value by one.
-When the TTL value becomes zero, the router drops the datagram and sends an ICMP Time Exceeded message to inform
the originator that the datagram has expired.

How Traceroute takes advantage of TTL.


The first sequence of messages sent from traceroute will have a TTL field of one. This causes the TTL to time out
the packet at the first router. This router then responds with an ICMP Message. Traceroute now has the address of
the first hop.
Traceroute then progressively increments the TTL field (2, 3, 4...) for each sequence of messages. This provides the
trace with the address of each hop as the packets timeout further down the path. The TTL field continues to be
increased until the destination is reached or it is incremented to a predefined maximum.
Once the final destination is reached, the host responds with either an ICMP Port Unreachable message or an ICMP
Echo Reply message instead of the ICMP Time Exceeded message.

Or

-Traceroute discovers the routes that packets actually take when travelling to their destination, by sending a
sequence of UDP datagrams to invalid port addresses (from 33434 to 33534) at the remote host.
-The first 3 datagrams are sent with TTL=1; when they hit the first router the TTL is decremented to 0, and the
router then responds with an ICMP Time Exceeded message (type 11, code 0).
-In this way the source keeps sending 3 datagrams at a time, with the TTL value increased to 2, 3, 4 and so on.
-Since these datagrams are sent to an invalid port, when a packet reaches the destination an ICMP Port
Unreachable message is returned (type 3, code 3).

Or

Note: 
-Not all traceroute implementations use the technique described above.
-Microsoft’s tracert works not by sending UDP packets but by sending ICMP Echo messages with increasing TTL values. It knows it has reached the final host when it gets back an Echo Reply message.

*Operation of the traceroute Utility

-What traceroute does is to force each router in a route to report back to it by intentionally setting the TTL value in
test datagrams to a value too low to allow them to reach their destination.

-Suppose we have device A and device B, which are separated by routers R1 and R2—three hops total. If you
do a traceroute from device A to device B, here’s what happens:

1. The traceroute utility sends a dummy UDP message (sometimes called a probe) to a port number
(from 33434 to 33534) that is intentionally selected to be invalid. The TTL field of the IP datagram is
set to 1. When R1 receives the message, it decrements the field, which will make its value 0. That
router discards the probe and sends an ICMP Time Exceeded message back to device A. 
2. Device A then sends a second UDP message with the TTL field set to 2. This time, R1 reduces
the TTL value to 1 and sends it to R2, which reduces the TTL field to 0 and sends a Time
Exceeded message back to A. 
3. Device A sends a third UDP message, with the TTL field set to 3. This time, the message will pass
through both routers and be received by device B. However, since the port number was invalid, the
message is rejected by device B, which sends back a Destination Unreachable/Port Unreachable
message to device A.
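
The three steps above can be sketched in code. The following is a minimal, illustrative Python version of a UDP-based traceroute, not the exact algorithm of any particular utility: it assumes a Unix-like host with root privileges (needed for the raw ICMP socket), sends one probe per TTL instead of the usual three, and uses example.com purely as a placeholder target.

import socket

def traceroute(dest_name, max_hops=30, port=33434, timeout=2.0):
    # UDP-based traceroute sketch: one probe per TTL, raw ICMP socket for replies.
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        # Raw socket to receive the ICMP reply
        # (Time Exceeded along the path, Port Unreachable from the destination).
        recv_sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        recv_sock.settimeout(timeout)
        # UDP socket for the outgoing probe, with the TTL set to the current hop count.
        send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
        send_sock.sendto(b"", (dest_addr, port))
        try:
            _, addr = recv_sock.recvfrom(512)   # address of the replying router or host
            hop_addr = addr[0]
        except socket.timeout:
            hop_addr = None                     # no reply: real traceroute prints "*"
        finally:
            send_sock.close()
            recv_sock.close()
        print(ttl, hop_addr or "*")
        if hop_addr == dest_addr:               # the reply came from the destination itself
            break

traceroute("example.com")                       # placeholder host name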

traceroute Options and Parameters


As is the case with ping, traceroute can be used with an IP address or a host name. If no parameters are supplied, default values are used for key parameters; on the system I used, the defaults are three “probes” for each TTL value, a maximum of 64 hops tested, and packets 40 bytes in size. A number of options and parameters are supported to give an administrator more control over how the utility functions; the Windows tracert version supports a smaller set of options, shown in the table below.

Table 289: Common Windows tracert Utility Options and Parameters


Option / Parameters   Description
-d                    Displays the route using numeric addresses only rather than showing both IP addresses and host names, for faster display. This is the same as the “-n” option on UNIX systems.
-h <maximum-hops>     Specifies the maximum number of hops to use for tracing; default is 30.
-j <host-list>        Sends the outgoing probes using the specified loose source route.
-w <wait-time>        Specifies how long to wait for a reply to each probe, in milliseconds (default is 4000, for 4 seconds).
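
For example, a hypothetical trace that limits the path to 20 hops, skips name resolution, and waits 2000 ms per probe (example.com is only a placeholder host name):

tracert -d -h 20 -w 2000 example.com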

IP Traceroute Text Characters


Character   Description
nn msec     For each node, the round-trip time in milliseconds for the specified number of probes
*           The probe timed out
A           Administratively prohibited (for example, by an access-list)
Q           Source quench (destination too busy)
I           User interrupted test
U           Port unreachable
H           Host unreachable
N           Network unreachable
P           Protocol unreachable
T           Timeout
?           Unknown packet type
How does traceroute work?

What is the difference between traceroute and tracert?

Is traceroute a reliable tool to identify network issues?

Why are there three columns in traceroute results?


-It sends three packets at a time, so there are three columns.
-Traceroute sends out three packets per TTL increment. Each column corresponds to the time it took to get one packet back (the round-trip time).

Or

In Windows, the traceroute tool will give you the hop number, three columns showing the network latency
between you and the hop (so you can average them if you like), as well as the IP address (or hostname if it
has a reverse DNS entry) of the hop. 

Which ICMP message confirms that the traceroute is complete?


- Destination Unreachable/Port Unreachable (for UDP-based traceroute; Windows tracert instead receives an ICMP Echo Reply from the final host).

What does * indicate in a traceroute result?


* - The probe timed out.
-It may also mean that an ACL is blocking the probes or that the identity of the node is being hidden.

How to Interpret the Traceroute?


As part of application performance troubleshooting, we ask the customer to run “tracert” and share the output with us.
The big question is: “How do you interpret the tracert log?”
First and foremost, since the Windows tracert command is only a single snapshot of the network at a point in time, run the command multiple times to ensure that a fair sampling of data is collected.

Terminology support engineers should be familiar with:

Hop number: The specific hop number in the path from the sender to the destination.

Round Trip Time (RTT): The time it takes for a packet to get to a hop and back, displayed in milliseconds (ms). By default, tracert
sends three packets to each hop, so the output lists three roundtrip times per hop. RTT is sometimes also referred to as latency. An
important factor that may impact RTT is the physical distance between hops.

Name: The fully qualified domain name (FQDN) of the system. Many times the FQDN provides an indication of where the hop is physically located. If the Name doesn’t appear in the output, the FQDN wasn’t found; that is not necessarily indicative of a problem.

IP Address: The Internet Protocol (IP) address of that specific router or host associated with the Name.
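
Putting these fields together, a single (made-up) hop line from a Windows tracert might look like the following, showing the hop number, the three RTT columns, the Name, and the IP address (the host name and address here are placeholder documentation values):

  5    24 ms    26 ms    23 ms   core1.example.net [203.0.113.17]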

Interpretation of the Tracert


The first line of the tracert output describes what the command is doing.  It lists the destination system (salesforce.com), destination IP
address, and the maximum number of hops that will be used in the traceroute (30).
If an asterisk (*) appears for RTT, then a packet was not returned within the expected timeframe.
One or two asterisks for a hop do not necessarily indicate packet loss at the final destination.

Three asterisks followed by the “Request timed out” message may appear for several reasons:
•The destination’s firewall or other security device is blocking the request.
•There could be a problem on the return path from the target system. Remember the round trip time measures the time it takes
for a packet to travel from your system to a destination system and back. The forward route and the return route often follow
different paths. If there is a problem on the return route, it may not be evident in the command output.
•There may be a connection problem at that particular system or the next system.
Traceroute results that show increased latency on a middle hop, which remains similar all the way through to the destination, do not
indicate a network problem.
A traceroute that shows dramatically increased latency on a middle hop, which then increases steadily through to the destination, can
indicate a potential network issue. Packet loss or asterisks (*) on many of the middle hops may also indicate a possible network level
issue.

This is the type of trend that you will want to report.

A steady trend of increasing latency is typically an indication of congestion or a problem between two points in the network and it
requires one or more parties to correct the problem.
 

HTTP/S:
-What is the difference between HTTP and HTTPS?

DHCP:
-How does DHCP work?
-What is the reason for getting an APIPA address?
-How do you troubleshoot an APIPA issue?
-What is the purpose of a relay agent?
-Is the DHCP Decline message sent by the client or the server?
-Is the DHCPNACK message sent by the client or the server?
-How is the DHCP Discover message forwarded by a router when it is a broadcast message?

DNS:
-Explain zone transfer.
-What are the types of records?
-What are forward lookup and reverse lookup?
-When will DNS use TCP?
-When will DNS use UDP?
-Explain the DNS query process.

FTP (Active & Passive):
-What is the difference between Active and Passive FTP?
-What is the importance of the PORT command?
-Which FTP type is preferred if a firewall is blocking the connection?
-How does active FTP work?
-How does passive FTP work?

TFTP:
-How does TFTP work? Explain the protocols involved.

SNMP (Query & Response, MIB, Communities):
-What is SNMP? What are the SNMP versions?
-What are the components of SNMP?
-Which ports are used in SNMP?
-Explain the MIB.
-Explain how to implement SNMP on a network.
-Explain the difference between an SNMP query/response and an SNMP trap.
