An IPv4 address is a 32-bit address that uniquely and universally defines the connection of a host or a
router to the Internet. IPv4 uses 32-bit addresses, which means that the address space is 2^32, or
4,294,967,296 (more than four billion) addresses.
Notation
There are three common notations for showing an IPv4 address: binary notation (base 2), dotted-decimal
notation (base 256), and hexadecimal notation (base 16).
Hierarchy in Addressing
A 32-bit IPv4 address is also hierarchical, but divided into only two parts. The first part of the address,
called the prefix, defines the network; the second part of the address, called the suffix, defines the
node.
Classful Addressing
An IPv4 address was designed with a fixed-length prefix, but to accommodate both small and large
networks, three fixed-length prefixes were designed instead of one (n = 8, n = 16, and n = 24). The
whole address space was divided into five classes (classes A, B, C, D, and E), as shown in the figure. This
scheme is referred to as classful addressing.
Classless Addressing
In classless addressing, the whole address space is divided into variable length blocks. The prefix in an
address defines the block (network); the suffix defines the node (device). Theoretically, we can have a
block of 2^0, 2^1, 2^2, ..., 2^32 addresses. An organization can be granted one block of addresses.
Example
A classless address is given as 167.199.170.82/27. We can find three pieces of information (the number
of addresses, the first address, and the last address) as follows. The number of addresses in the
network is 2^(32 − n) = 2^5 = 32 addresses.
The first address can be found by keeping the first 27 bits and changing the rest of the bits to 0s
Address: 167.199.170.82/27 10100111 11000111 10101010 01010010
First address: 167.199.170.64/27 10100111 11000111 10101010 01000000
The last address can be found by keeping the first 27 bits and changing the rest of the bits to 1s
Last address: 167.199.170.95/27 10100111 11000111 10101010 01011111
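The three computations above can be sketched in Python; the helper name `block_info` is hypothetical, and the bit manipulation mirrors the keep-the-prefix / zero-or-set-the-suffix rules described above.

```python
# Sketch: recover block information from a classless (CIDR) address.
def block_info(address: str):
    ip, prefix = address.split("/")
    n = int(prefix)
    # Pack the four dotted-decimal bytes into one 32-bit integer.
    value = 0
    for part in ip.split("."):
        value = (value << 8) | int(part)
    mask = (0xFFFFFFFF << (32 - n)) & 0xFFFFFFFF
    first = value & mask                  # keep the first n bits, zero the rest
    last = first | (~mask & 0xFFFFFFFF)   # set the suffix bits to 1s
    count = 2 ** (32 - n)                 # number of addresses in the block
    to_dotted = lambda v: ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    return to_dotted(first), to_dotted(last), count

print(block_info("167.199.170.82/27"))
# → ('167.199.170.64', '167.199.170.95', 32)
```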
1. The joining host creates a DHCPDISCOVER message in which only the transactionID field is set to a
random number. No other field can be set because the host has no knowledge with which to do so.
This message is encapsulated in a UDP user datagram with the source port set to 68 and the
destination port set to 67. The user datagram is encapsulated in an IP datagram with the source
address set to 0.0.0.0 (“this host”) and the destination address set to 255.255.255.255 (broadcast
address). The reason is that the joining host knows neither its own address nor the server
address.
2. The DHCP server (or servers, if more than one) responds with a DHCPOFFER message in which the your
address field defines the offered IP address for the joining host and the server address field includes
the IP address of the server. The message also includes the lease time for which the host can keep the
IP address. This message is encapsulated in a user datagram with the same port numbers, but in the
reverse order. The user datagram in turn is encapsulated in a datagram with the server address as the
source IP address, but the destination address is a broadcast address, in which the server allows other
DHCP servers to receive the offer and give a better offer if they can.
3. The joining host receives one or more offers and selects the best of them. The joining host then sends a
DHCPREQUEST message to the server that has given the best offer. The fields with known value are
set. The message is encapsulated in a user datagram with the same port numbers as the first message. The user
datagram is encapsulated in an IP datagram with the source address set to the new client address, but
the destination address still is set to the broadcast address to let the other servers know that their
offer was not accepted.
4. Finally, the selected server responds with a DHCPACK message to the client if the offered IP address is
valid. If the server cannot keep its offer, the server sends a DHCPNACK message and the client needs to
repeat the process. This message is also broadcast to let other servers know that the request is
accepted or rejected.
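The four-message exchange above can be sketched as data, showing only the addressing described in the steps (ports 68/67 and the broadcast destination). The transaction ID and the 192.168.1.x addresses below are hypothetical example values.

```python
# Sketch of the four DHCP messages and their UDP/IP addressing.
DHCP_CLIENT_PORT, DHCP_SERVER_PORT = 68, 67

def make_message(kind, transaction_id, src_ip, dst_ip, from_client):
    # Client-to-server messages use ports 68 -> 67; replies reverse them.
    src, dst = ((DHCP_CLIENT_PORT, DHCP_SERVER_PORT) if from_client
                else (DHCP_SERVER_PORT, DHCP_CLIENT_PORT))
    return {"type": kind, "xid": transaction_id,
            "udp": (src, dst), "ip": (src_ip, dst_ip)}

xid = 0x3903F326  # random transaction ID chosen by the joining host (example)
exchange = [
    # Host knows neither its own address nor the server's.
    make_message("DHCPDISCOVER", xid, "0.0.0.0", "255.255.255.255", True),
    # Offer is broadcast so other servers can see it and possibly do better.
    make_message("DHCPOFFER", xid, "192.168.1.1", "255.255.255.255", False),
    # Request is broadcast so rejected servers learn the outcome.
    make_message("DHCPREQUEST", xid, "192.168.1.50", "255.255.255.255", True),
    make_message("DHCPACK", xid, "192.168.1.1", "255.255.255.255", False),
]
for m in exchange:
    print(m["type"], m["udp"], m["ip"])
```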
Q2. (a) IP Protocol Service Model, Packet format, fragmentation and reassembly
Internet Protocol version 4 (IPv4) is responsible for packetizing, forwarding, and delivering a packet at
the network layer.
Datagram Format
Version Number. The 4-bit version number (VER) field defines the version of the IPv4 protocol.
Header Length. The 4-bit header length (HLEN) field defines the total length of the datagram header in 4-
byte words.
Service Type. In the original design of the IP header, this field was referred to as type of service (TOS),
which defined how the datagram should be handled.
Total Length. This 16-bit field defines the total length (header plus data) of the IP datagram in bytes. A 16-
bit number can define a total length of up to 65,535 bytes.
Identification, Flags, and Fragmentation Offset. These three fields are related to the fragmentation of the
IP datagram when the size of the datagram is larger than the underlying network can carry.
Time-to-live. The time-to-live (TTL) field is used to control the maximum number of hops (routers) visited
by the datagram. When a source host sends the datagram, it stores a number in this field.
Protocol. Any protocol that uses the service of IP is assigned a unique 8-bit number, which is inserted in
the protocol field.
Header Checksum. IP adds a header checksum field to check the header, but not the payload.
Source and Destination Addresses. These 32-bit source and destination address fields define the IP
addresses of the source and destination respectively.
Options. A datagram header can have up to 40 bytes of options. Options can be used for network testing
and debugging.
Payload. Payload, or data, is the main reason for creating a datagram. The payload is the packet coming from
other protocols that use the service of IP.
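A minimal sketch of unpacking the fixed 20-byte header can make the field layout concrete. The example header bytes below are hypothetical (two private 10.0.0.x addresses, UDP payload); only a subset of fields is returned.

```python
import struct

# Sketch: unpack the fixed 20-byte IPv4 header fields described above.
def parse_ipv4_header(raw: bytes):
    (ver_hlen, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_hlen >> 4,             # 4-bit version number (VER)
        "hlen_bytes": (ver_hlen & 0x0F) * 4,  # HLEN is counted in 4-byte words
        "total_length": total_len,            # header plus data, in bytes
        "identification": ident,
        "ttl": ttl,
        "protocol": proto,                    # e.g. 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# Example header: version 4, HLEN 5 (20 bytes), TTL 64, protocol 17 (UDP).
header = bytes([0x45, 0, 0, 28, 0, 1, 0, 0, 64, 17, 0, 0,
                10, 0, 0, 1, 10, 0, 0, 2])
print(parse_ipv4_header(header))
```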
Fragmentation
To make the IP protocol independent of the physical network, the designers decided to make the maximum
length of the IP datagram equal to 65,535 bytes. This makes transmission more efficient if one day we use
a link-layer protocol with an MTU of this size. However, for other physical networks, we must divide the
datagram to make it possible for it to pass through these networks. This is called fragmentation. When a
datagram is fragmented, each fragment has its own header with most of the fields repeated, but some
have been changed. A fragmented datagram may itself be fragmented if it encounters a network with an
even smaller MTU. In other words, a datagram may be fragmented several times before it reaches the
final destination.
A datagram can be fragmented by the source host or any router in the path. The reassembly of the
datagram, however, is done only by the destination host, because each fragment becomes an
independent datagram. Whereas the fragmented datagram can travel through different routes, and we
can never control or guarantee which route a fragmented datagram may take, all of the fragments
belonging to the same datagram should finally arrive at the destination host. So it is logical to do the
reassembly at the final destination.
The 13-bit fragmentation offset field shows the relative position of this fragment with respect to the whole
datagram. It is the offset of the data in the original datagram measured in units of 8 bytes. Figure 19.6
shows a datagram with a data size of 4000 bytes fragmented into three fragments. The bytes in the
original datagram are numbered 0 to 3999. The first fragment carries bytes 0 to 1399. The offset for this
datagram is 0/8 = 0. The second fragment carries bytes 1400 to 2799; the offset value for this fragment is
1400/8 = 175. Finally, the third fragment carries bytes 2800 to 3999. The offset value for this fragment is
2800/8 = 350.
The original packet starts at the client; the fragments are reassembled at the server. The value of the
identification field is the same in all fragments, as is the value of the flags field with the more bit set for all
fragments except the last. Also, the value of the offset field for each fragment is shown. Note that
although the fragments arrived out of order at the destination, they can be correctly reassembled
The figure also shows what happens if a fragment itself is fragmented. In this case the value of the offset
field is always relative to the original datagram. For example, in the figure, the second fragment is itself
fragmented later into two fragments of 800 bytes and 600 bytes, but the offset shows the relative position
of the fragments to the original data.
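The offset arithmetic above can be sketched as a small fragmentation routine; `fragment` is a hypothetical helper that reproduces the 4000-byte example with 1400-byte fragments.

```python
# Sketch: compute fragment byte ranges, 8-byte offsets, and the M flag.
def fragment(data_len: int, max_payload: int):
    # Every fragment except the last must carry a multiple of 8 data bytes,
    # because the offset field counts in units of 8 bytes.
    assert max_payload % 8 == 0
    fragments, start = [], 0
    while start < data_len:
        end = min(start + max_payload, data_len)
        more = end < data_len            # M (more fragments) flag
        fragments.append((start, end - 1, start // 8, more))
        start = end
    return fragments

for first, last, offset, more in fragment(4000, 1400):
    print(f"bytes {first}-{last}, offset {offset}, more={more}")
# → bytes 0-1399, offset 0, more=True
#   bytes 1400-2799, offset 175, more=True
#   bytes 2800-3999, offset 350, more=False
```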
(b) how IP packets are forwarded in the network
When IP is used as a connectionless protocol, forwarding is based on the destination address of the IP
datagram; when IP is used as a connection-oriented protocol, forwarding is based on the label attached to
an IP datagram.
Forwarding Based on Destination Address
In classless addressing, the whole address space is one entity; there are no classes. This means that
forwarding requires one row of information for each block involved. The table needs to be searched based
on the network address. Unfortunately, the destination address in the packet gives no clue about the
network address. To solve the problem, we need to include the mask (/n) in the table. In other words, a
classless forwarding table needs to include four pieces of information: the mask, the network address, the
interface number, and the IP address of the next router. However, we often see in the literature that the
first two pieces are combined. For example, if n is 26 and the network address is 180.70.65.192, then one
can combine the two as one piece of information: 180.70.65.192/26. The figure shows a simple forwarding
module and forwarding table for a router with only three interfaces.
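A classless lookup of this kind can be sketched with Python's standard `ipaddress` module. The table entries below (interfaces m0–m2, next-hop 180.70.65.200) are hypothetical example rows; among all matching rows, the most specific (longest) prefix wins.

```python
import ipaddress

# Hypothetical classless forwarding table: (network/mask, interface, next hop).
# next_hop = None means direct delivery on that interface.
table = [
    (ipaddress.ip_network("180.70.65.192/26"), "m2", None),
    (ipaddress.ip_network("180.70.65.128/25"), "m0", None),
    (ipaddress.ip_network("201.4.22.0/24"), "m1", None),
    (ipaddress.ip_network("0.0.0.0/0"), "m2", "180.70.65.200"),  # default route
]

def forward(destination: str):
    dest = ipaddress.ip_address(destination)
    # Collect all rows whose block contains the destination, then pick the
    # longest prefix (most specific block).
    matches = [row for row in table if dest in row[0]]
    network, interface, next_hop = max(matches, key=lambda r: r[0].prefixlen)
    return interface, next_hop

print(forward("180.70.65.140"))   # matches /25 → ('m0', None), direct delivery
print(forward("18.24.32.78"))     # only the default route matches
```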
UDP Services
(i)Process-to-Process Communication
UDP provides process-to-process communication using socket addresses, a combination of IP addresses
and port numbers.
(ii) Connectionless Services
UDP provides a connectionless service. This means that each user datagram sent by UDP is an
independent datagram. There is no relationship between the different user datagrams even if they are
coming from the same source process and going to the same destination program. The user datagrams are
not numbered. There is no connection establishment and no connection termination. This means that
each user datagram can travel on a different path.
(iii) Flow Control
UDP is a very simple protocol. There is no flow control, and hence no window mechanism. The receiver
may overflow with incoming messages. The lack of flow control means that the process using UDP should
provide for this service, if needed.
(iv) Error Control
There is no error control mechanism in UDP except for the checksum. This means that the sender does not
know if a message has been lost or duplicated. When the receiver detects an error through the checksum,
the user datagram is silently discarded. The lack of error control means that the process using UDP should
provide for this service, if needed.
(v) Checksum
UDP checksum calculation includes three sections: a pseudoheader, the UDP header, and the data coming
from the application layer. The pseudoheader is the part of the header of the IP packet in which the user
datagram is to be encapsulated, with some fields filled with 0s.
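The checksum over pseudoheader + header + data can be sketched with the standard 16-bit ones'-complement sum. The port numbers, length, and "TESTING" payload below are illustrative example values.

```python
import struct

# 16-bit ones'-complement sum with end-around carry folding.
def ones_complement_sum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"               # pad to a 16-bit boundary
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    while total >> 16:                # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, zero byte, protocol 17 (UDP),
    # and the UDP length; these IP-header fields guard against misdelivery.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return (~ones_complement_sum(pseudo + udp_segment)) & 0xFFFF

src = bytes([152, 14, 90, 12])
dst = bytes([153, 18, 8, 105])
# UDP header (source port 1087, dest port 13, length 15, checksum field 0),
# followed by 7 data bytes.
segment = struct.pack("!HHHH", 1087, 13, 15, 0) + b"TESTING"
print(hex(udp_checksum(src, dst, segment)))
```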
Slow Start: Exponential Increase
We assume that rwnd is much larger than cwnd, so that the sender window size always equals cwnd. We
also assume that each segment is of the same size and carries MSS bytes. For simplicity, we also ignore
the delayed-ACK policy and assume that each segment is acknowledged individually. The sender starts
with cwnd = 1. This means that the sender can send only one segment. After the first ACK arrives, the
acknowledged segment is purged from the window, which means there is now one empty segment slot in
the window. The size of the congestion window is also increased by 1 because the arrival of the
acknowledgment is a good sign that there is no congestion in the network. The size of the window is now
2. After sending two segments and receiving two individual acknowledgments for them, the size of the
congestion window now becomes 4, and so on. The size of the congestion window in this algorithm is a
function of the number of ACKs that have arrived and can be determined as follows:
If an ACK arrives, cwnd = cwnd + 1.
If we look at the size of cwnd in terms of round-trip times (RTTs), we find that the growth rate is
exponential in terms of each round-trip time, which is a very aggressive approach: after each RTT the
congestion window doubles (1, 2, 4, 8, ...).
A slow start cannot continue indefinitely. There must be a threshold to stop this phase. The sender keeps
track of a variable named ssthresh. When the size of the window in bytes reaches this threshold, slow
start stops and the next phase starts.
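The slow-start phase above can be sketched per RTT; `slow_start` is a hypothetical helper, and doubling per round trip follows from one ACK (and hence +1) arriving for every outstanding segment.

```python
# Sketch: slow-start growth of cwnd, recorded once per RTT.
def slow_start(ssthresh: int, rtts: int):
    cwnd, history = 1, []
    for _ in range(rtts):
        history.append(cwnd)
        if cwnd >= ssthresh:
            break                      # threshold reached: leave slow start
        cwnd += cwnd                   # one ACK per segment in flight → doubling
    return history

print(slow_start(ssthresh=16, rtts=10))
# → [1, 2, 4, 8, 16]
```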
Congestion Avoidance: Additive Increase
If we continue with the slow-start algorithm, the size of the congestion window increases exponentially.
To avoid congestion before it happens, we must slow down this exponential growth. TCP defines another
algorithm called congestion avoidance, which increases the cwnd additively instead of exponentially.
When the size of the congestion window reaches the slow-start threshold (cwnd = ssthresh), the
slow-start phase stops and the additive phase begins. In this algorithm, each time the whole
“window” of segments is acknowledged, the size of the congestion window is increased by one. A window
is the number of segments transmitted during one RTT.
The sender starts with cwnd = 4. This means that the sender can send only four segments. After four ACKs
arrive, the acknowledged segments are purged from the window, which means there is now one extra
empty segment slot in the window. The size of the congestion window is also increased by 1. The size of
the window is now 5. After sending five segments and receiving five acknowledgments for them, the size of
the congestion window now becomes 6, and so on. In other words, the size of the congestion window in
this algorithm is also a function of the number of ACKs that have arrived and can be determined as
follows:
If an ACK arrives, cwnd = cwnd + (1/cwnd).
The size of the window increases by only a 1/cwnd portion of one MSS (in bytes). In other words, all
segments in the previous window must be acknowledged to increase the window by 1 MSS.
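The additive phase can be sketched by applying cwnd = cwnd + (1/cwnd) once per ACK; `congestion_avoidance` is a hypothetical helper, and summing 1/cwnd over a full window of ACKs yields one extra segment per RTT.

```python
# Sketch: additive increase, one ACK per segment in the current window.
def congestion_avoidance(cwnd: int, rtts: int):
    history = [cwnd]
    for _ in range(rtts):
        window = cwnd                  # segments sent during this round trip
        increase = 0.0
        for _ in range(window):        # one ACK arrives per segment
            increase += 1 / cwnd       # each ACK contributes 1/cwnd of one MSS
        cwnd += round(increase)        # a fully acked window adds exactly 1
        history.append(cwnd)
    return history

print(congestion_avoidance(4, 3))
# → [4, 5, 6, 7]
```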
Fast Recovery
The fast-recovery algorithm is optional in TCP. Older versions of TCP did not use it, but newer versions
do. It starts when three duplicate ACKs arrive, which is interpreted as light congestion in the
network. Like congestion avoidance, this algorithm is also an additive increase, but it increases the size
of the congestion window whenever a duplicate ACK arrives.
1. The client sends the first packet, which contains an INIT chunk. The verification tag (VT) of this packet
(defined in the general header) is 0 because no verification tag has yet been defined for this direction
(client to server). The INIT chunk includes an initiation tag to be used for packets from the other direction
(server to client). The chunk also defines the initial TSN for this direction and advertises a value for rwnd.
The value of rwnd is normally advertised in a SACK chunk; it is done here because SCTP allows the
inclusion of a DATA chunk in the third and fourth packets; the server must be aware of the available client
buffer size. Note that no other chunks can be sent with the first packet.
2. The server sends the second packet, which contains an INIT ACK chunk. The verification tag is the value
of the initial tag field in the INIT chunk. This chunk initiates the tag to be used in the other direction,
defines the initial TSN, for data flow from server to client, and sets the server’s rwnd. The value of rwnd is
defined to allow the client to send a DATA chunk with the third packet. The INIT ACK also sends a cookie
that defines the state of the server at this moment.
3. The client sends the third packet, which includes a COOKIE ECHO chunk. This is a very simple chunk that
echoes, without change, the cookie sent by the server. SCTP allows the inclusion of data chunks in this
packet.
4. The server sends the fourth packet, which includes the COOKIE ACK chunk that acknowledges the
receipt of the COOKIE ECHO chunk. SCTP allows the inclusion of data chunks with this packet.
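The verification-tag bookkeeping in the four steps above can be sketched as data. The initiation tags `tag_c` and `tag_s` below are hypothetical values; each side's chosen tag becomes the verification tag (VT) of packets flowing toward it.

```python
# Sketch of the SCTP four-way association setup.
# tag_c: initiation tag chosen by the client (used as VT on server->client packets)
# tag_s: initiation tag chosen by the server (used as VT on client->server packets)
def four_way_setup(tag_c: int, tag_s: int):
    return [
        {"chunk": "INIT",        "vt": 0,     "carries_tag": tag_c},  # no VT known yet
        {"chunk": "INIT ACK",    "vt": tag_c, "carries_tag": tag_s},  # also sends cookie
        {"chunk": "COOKIE ECHO", "vt": tag_s, "carries_tag": None},   # may carry DATA
        {"chunk": "COOKIE ACK",  "vt": tag_c, "carries_tag": None},   # may carry DATA
    ]

for pkt in four_way_setup(tag_c=1001, tag_s=2002):
    print(pkt)
```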
1. In this situation, the client TCP, after receiving a close command from the client process, sends the
first segment, a FIN segment in which the FIN flag is set. Note that a FIN segment can include the last
chunk of data sent by the client or it can be just a control segment as shown in the figure. If it is only a
control segment, it consumes only one sequence number because it needs to be acknowledged.
2. The server TCP, after receiving the FIN segment, informs its process of the situation and sends the
second segment, a FIN + ACK segment, to confirm the receipt of the FIN segment from the client and
at the same time to announce the closing of the connection in the other direction. This segment can
also contain the last chunk of data from the server. If it does not carry data, it consumes only one
sequence number because it needs to be acknowledged.
3. The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN segment
from the TCP server. This segment contains the acknowledgment number, which is one plus the
sequence number received in the FIN segment from the server. This segment cannot carry data and
consumes no sequence numbers.
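The sequence-number accounting of the three-segment close can be sketched as follows, with hypothetical sequence numbers x (client side) and y (server side); a FIN that carries no data still consumes one sequence number.

```python
# Sketch of the three-segment TCP connection termination.
def three_way_close(x: int, y: int):
    fin      = {"from": "client", "flags": "FIN",     "seq": x,     "ack": y}
    # FIN+ACK acknowledges the client's FIN (x + 1) and closes the other direction.
    fin_ack  = {"from": "server", "flags": "FIN+ACK", "seq": y,     "ack": x + 1}
    # Final ACK: one plus the sequence number of the server's FIN; carries no
    # data and consumes no sequence number.
    last_ack = {"from": "client", "flags": "ACK",     "seq": x + 1, "ack": y + 1}
    return [fin, fin_ack, last_ack]

for seg in three_way_close(x=2500, y=7000):
    print(seg)
```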