
TCP/IP model

o The TCP/IP model was developed prior to the OSI model.
o The TCP/IP model is not exactly similar to the OSI model.
o The TCP/IP model consists of five layers: the application layer, transport layer, network layer, data link layer and physical layer.
o The first four layers provide physical standards, network interface, internetworking, and transport functions that correspond to the first four layers of the OSI model, while the top three OSI layers (session, presentation and application) are represented in the TCP/IP model by a single layer called the application layer.
o TCP/IP is a hierarchical protocol made up of interactive modules, each of which provides specific functionality. Here, hierarchical means that each upper-layer protocol is supported by one or more lower-level protocols.

Functions of TCP/IP layers:


Network Access Layer

o The network access layer is the lowest layer of the TCP/IP model.
o It combines the Physical layer and the Data Link layer defined in the OSI reference model.
o It defines how the data should be sent physically through the network.
o This layer is mainly responsible for the transmission of data between two devices on the same network.
o The functions carried out by this layer are encapsulating the IP datagram into frames transmitted by the network and mapping IP addresses onto physical addresses.
o Protocols used by this layer include Ethernet.

Internet Layer

o The internet layer is the second layer of the TCP/IP model.
o The internet layer is also known as the network layer.
o The main responsibility of the internet layer is to send packets from any network so that they arrive at the destination irrespective of the route they take.

The following protocols are used in this layer:

IP Protocol: The IP protocol is used in this layer, and it is the most significant part of the entire TCP/IP suite.

Following are the responsibilities of this protocol:

o IP Addressing: This protocol implements logical host addresses known as IP addresses. The IP addresses are used by the internet and higher layers to identify the device and to provide internetwork routing.
o Host-to-host communication: It determines the path through which the data is to be transmitted.
o Data Encapsulation and Formatting: The IP protocol accepts data from the transport layer protocol and encapsulates it into a message known as an IP datagram.
o Fragmentation and Reassembly: The limit imposed on the size of the IP datagram by the data link layer protocol is known as the Maximum Transmission Unit (MTU). If the size of an IP datagram is greater than the MTU, the IP protocol splits the datagram into smaller units so that they can travel over the local network. Fragmentation can be done by the sender or by an intermediate router. At the receiver side, all the fragments are reassembled to form the original message.
o Routing: When an IP datagram is sent within the same local network, such as a LAN, MAN or WAN, it is known as direct delivery. When source and destination are on distant networks, the IP datagram is sent indirectly, by routing it through various devices such as routers.
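The fragmentation and reassembly steps described above can be sketched in Python. This is a simplified illustration, not a real IP implementation (real fragments also carry offsets and flags in their headers); the function names and the MTU value are assumptions for the example:

```python
def fragment(datagram: bytes, mtu: int) -> list[bytes]:
    # Split a datagram into pieces no larger than the MTU,
    # as a sender or intermediate router would.
    return [datagram[i:i + mtu] for i in range(0, len(datagram), mtu)]

def reassemble(fragments: list[bytes]) -> bytes:
    # The receiver joins the fragments back into the original message.
    return b"".join(fragments)

payload = b"x" * 3000
frags = fragment(payload, 1500)   # 1500 bytes is a common Ethernet MTU
```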

ARP Protocol

o ARP stands for Address Resolution Protocol.
o ARP is a network layer protocol used to find the physical address corresponding to a known IP address.
o Two terms are mainly associated with the ARP protocol:
o ARP request: When a sender wants to know the physical address of a device, it broadcasts an ARP request to the network.
o ARP reply: Every device attached to the network receives and processes the ARP request, but only the device that recognizes its own IP address sends back its physical address in the form of an ARP reply. The sender then adds the physical address both to its cache memory and to the datagram header.

ICMP Protocol

o ICMP stands for Internet Control Message Protocol.
o It is a mechanism used by hosts or routers to send notifications regarding datagram problems back to the sender.
o A datagram travels from router to router until it reaches its destination. If a router is unable to route the data because of unusual conditions such as disabled links or network congestion, the ICMP protocol is used to inform the sender that the datagram is undeliverable.
o The ICMP protocol mainly uses two message types:
o ICMP Echo Request: used to test whether the destination is reachable or not.
o ICMP Echo Reply: sent back to indicate whether the destination device is responding or not.
o The core responsibility of the ICMP protocol is to report problems, not correct them. The responsibility for correction lies with the sender.
o ICMP can send messages only to the source, not to intermediate routers, because the IP datagram carries the addresses of the source and destination but not of the routers it passes through.

Transport Layer

The transport layer is responsible for the reliability, flow control, and error control of data being sent over the network.

The two protocols used in the transport layer are the User Datagram Protocol and the Transmission Control Protocol.

o User Datagram Protocol (UDP)

o It provides connectionless service and end-to-end delivery of transmission.
o It is an unreliable protocol: it detects errors but does not report which error occurred or recover from it.
o When the User Datagram Protocol detects an error, the ICMP protocol reports to the sender that the user datagram has been damaged.
o A UDP header consists of the following fields:
Source port address: the address of the application program that created the message.
Destination port address: the address of the application program that receives the message.
Total length: the total length of the user datagram in bytes.
Checksum: a 16-bit field used in error detection.
o UDP does not specify which packet is lost. UDP contains only a checksum; it does not contain any ID of a data segment.
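The four 16-bit header fields listed above can be packed and unpacked with Python's struct module. This sketch builds the 8-byte header only and leaves the checksum as 0 rather than computing the real Internet checksum:

```python
import struct

def build_udp_header(src_port: int, dst_port: int, payload_len: int) -> bytes:
    # UDP header: source port, destination port, total length, checksum,
    # each a 16-bit big-endian field. Total length includes the 8-byte header.
    total_length = 8 + payload_len
    return struct.pack("!HHHH", src_port, dst_port, total_length, 0)

def parse_udp_header(header: bytes) -> dict:
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"src_port": src, "dst_port": dst,
            "total_length": length, "checksum": checksum}
```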

o Transmission Control Protocol (TCP)

o It provides full transport-layer services to applications.
o It creates a virtual circuit between the sender and receiver, which remains active for the duration of the transmission.
o TCP is a reliable protocol as it detects errors and retransmits the damaged frames. It ensures that all segments are received and acknowledged before the transmission is considered complete and the virtual circuit is discarded.
o At the sending end, TCP divides the whole message into smaller units known as segments, and each segment carries a sequence number, which is required for reordering the segments to reconstruct the original message.
o At the receiving end, TCP collects all the segments and reorders them based on their sequence numbers.
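The receive-side reordering can be sketched as follows: each segment carries a sequence number, and the receiver sorts on it before rejoining the message. This is a simplification for illustration (real TCP sequence numbers count bytes and wrap around):

```python
def reassemble_segments(segments: list[tuple[int, bytes]]) -> bytes:
    # Segments arrive as (sequence_number, payload) pairs, possibly out
    # of order; sort by sequence number and rejoin the original message.
    return b"".join(payload for _, payload in sorted(segments))

arrived = [(2, b"world"), (0, b"hello "), (1, b"there ")]
```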

Application Layer

o The application layer is the topmost layer in the TCP/IP model.
o It is responsible for handling high-level protocols and issues of representation.
o This layer allows the user to interact with the application.
o When one application layer protocol wants to communicate with another, it forwards its data to the transport layer.
o There is some ambiguity in the application layer: not every application belongs in it, only those that interact with the communication system. For example, a text editor is not considered part of the application layer, while a web browser that uses the HTTP protocol to interact with the network is, since HTTP is an application layer protocol.

Following are the main protocols used in the application layer:

o HTTP: HTTP stands for Hypertext Transfer Protocol. This protocol allows us to access data over the world wide web. It transfers data in the form of plain text, audio and video. It is known as the Hypertext Transfer Protocol because it is efficient in a hypertext environment where there are rapid jumps from one document to another.
o SNMP: SNMP stands for Simple Network Management Protocol. It is a framework used for managing devices on the internet by using the TCP/IP protocol suite.
o SMTP: SMTP stands for Simple Mail Transfer Protocol. It is the TCP/IP protocol that supports e-mail and is used to send messages to another e-mail address.
o DNS: DNS stands for Domain Name System. An IP address is used to uniquely identify the connection of a host to the internet, but people prefer to use names instead of addresses. The system that maps a name to an address is known as the Domain Name System.
o TELNET: It is an abbreviation for Terminal Network. It establishes a connection between a local computer and a remote computer in such a way that the local terminal appears to be a terminal at the remote system.
o FTP: FTP stands for File Transfer Protocol. FTP is a standard internet protocol used for transmitting files from one computer to another.
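Of the protocols above, the name-to-address mapping that DNS provides is exposed directly in Python's standard library through the socket module. The example resolves `localhost`, which is normally answered from the local hosts file rather than a DNS server, so it works offline:

```python
import socket

# Map a host name to its IPv4 address, as DNS (or the hosts file) does.
address = socket.gethostbyname("localhost")
print(address)  # typically "127.0.0.1"
```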
Design Issues in Network Layer
The network layer is mainly focused on getting packets from the source to the destination, routing, error handling and congestion control.
Before learning about design issues in the network layer, let's look at its various functions.
 Addressing:
Maintains the addresses of both source and destination in the frame header and uses addressing to identify the various devices in the network.
 Packetizing:
This is performed by the Internet Protocol. The network layer converts the data received from its upper layer into packets.
 Routing:
This is the most important functionality. The network layer chooses the most suitable path for data transmission from source to destination.
 Inter-networking:
It delivers a logical connection across multiple devices.
Network layer design issues:
The network layer comes with some design issues, described as follows:
1. Store and Forward packet switching:
The host sends the packet to the nearest router, where it is stored until it has fully arrived and its checksum has been verified. It is then forwarded to the next router, and so on, until it reaches the destination. This mechanism is called "Store and Forward packet switching."
2. Services provided to Transport Layer:
Through the network/transport layer interface, the network layer provides its services to the transport layer.
Before providing these services to the transport layer, the following goals must be kept in mind:
 The services offered must not depend on router technology.
 The transport layer needs to be shielded from the type, number and topology of the available routers.
 The network addresses made available to the transport layer should use a uniform numbering pattern, even across LAN and WAN connections.
Based on the type of connection, there are 2 types of services provided:
 Connectionless – The routing and insertion of packets into the subnet is done individually for each packet. No additional setup is required.
 Connection-Oriented – The subnet must offer reliable service, and all the packets are transmitted over a single route.
3. Implementation of Connectionless Service:
Packets are termed "datagrams" and the corresponding subnet a "datagram subnet". When the message to be transmitted is, say, 4 times the size of a packet, the network layer divides it into 4 packets and transmits each packet to a router via some protocol. Each data packet has a destination address and is routed independently of the other packets.
4. Implementation of Connection Oriented service:
To use a connection-oriented service, we first establish a connection, use it, and then release it. In connection-oriented services, the data packets are delivered to the receiver in the same order in which they were sent by the sender.
This can be done in either of two ways:
 Circuit Switched Connection – A dedicated physical path or circuit is established between the communicating nodes, and then the data stream is transferred.
 Virtual Circuit Switched Connection – The data stream is transferred over a packet-switched network in such a way that it seems to the user that there is a dedicated path from the sender to the receiver. A virtual path is established, while other connections may also be using the same path.
Virtual circuit switching
Virtual circuit switching is a packet switching methodology whereby a path is established between the source and the final destination through which all the packets will be routed during a call. This path is called a virtual circuit because, to the user, the connection appears to be a dedicated physical circuit. However, other communications may also be sharing parts of the same path.
Before the data transfer begins, the source and destination identify a suitable path for the virtual circuit. All intermediate nodes between the two points put a routing entry in their routing table for the call. Additional parameters, such as the maximum packet size, are also exchanged between the source and the destination during call setup. The virtual circuit is cleared after the data transfer is completed.
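The per-call routing entries made during setup can be sketched as a table on each intermediate node mapping an incoming (port, VC number) pair to an outgoing one. The structure and function names are illustrative, not a real switch implementation:

```python
# Each intermediate node keeps per-call state: (in_port, in_vc) -> (out_port, out_vc).
def setup_entry(table: dict, in_port: int, in_vc: int,
                out_port: int, out_vc: int) -> None:
    # Added during call setup, before any data packets flow.
    table[(in_port, in_vc)] = (out_port, out_vc)

def forward(table: dict, in_port: int, in_vc: int) -> tuple[int, int]:
    # Every packet of the call is switched using the same entry,
    # which is why all packets follow the same path.
    return table[(in_port, in_vc)]
```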
Virtual circuit packet switching is connection-oriented. This is in contrast to datagram switching, which is a connectionless packet switching methodology. Advantages of virtual circuit switching are:

 Packets are delivered in order, since they all take the same route;
 The overhead in the packets is smaller, since there is no need for each packet to contain the full address;
 The connection is more reliable: network resources are allocated at call setup, so that even during times of congestion, provided that a call has been set up, the subsequent packets should get through;
 Billing is easier, since billing records need only be generated per call and not per packet.
Disadvantages of a virtual circuit switched network are:

 The switching equipment needs to be more powerful, since each switch needs to store details of all the calls that are passing through it and to allocate capacity for any traffic that each call could generate;
 Resilience to the loss of a trunk is more difficult, since if there is a failure all the calls must be dynamically re-established over a different route.

Framing in Data Link layer With Types

In this tutorial, you will learn what framing in the data link layer is and the types of framing in computer networks, with the help of suitable examples and diagrams.


What is Framing in Data link layer?

Framing is a primary function of the data link layer: it provides a way to transmit data between connected devices over a point-to-point connection between sender and receiver.

Framing uses frames to send or receive data. The data link layer receives packets from the network layer and converts them into frames.

If a packet is too large, it can be divided into smaller frames. Small frames are more efficient for flow control and error control.
The data link layer needs to pack bits into frames so that each frame is distinguishable from another, much as inserting a letter into an envelope separates one piece of information from another.

Parts of a Frame
Each frame in the data link layer consists of four parts: header, payload field, trailer and flag.

Header – The source and destination addresses are placed in the header part of the frame.

Payload field – It contains the actual message or information that the sender wants to transmit to the destination machine.

Trailer – The trailer comprises error detection and error correction bits.

Flag – It marks the beginning and end of a particular frame.

Types of Framing in Computer Networks

There are two types of framing used by the data link layer in computer networks; a frame can be of fixed or variable size. Based on the size, the following are the types of framing:

 Fixed Size Framing
 Variable Size Framing

Fixed Size Framing
The frame has a fixed size. In fixed-size framing, there is no need to define the boundaries of the frames to mark the beginning and end of a frame.
For example, this type of framing is used in ATM (Asynchronous Transfer Mode) wide area networks, which use frames of fixed size called cells.

Variable Size Framing

The size of the frame is variable in this type of framing. In variable-size framing, we need a way to define the end of one frame and the beginning of the next. This is used in local area networks.

There are two methods to define the frame boundaries: a length field and an end delimiter.

Length field – A length field is used to determine the length of the frame. It is used in Ethernet (IEEE 802.3).

End Delimiter – A pattern is used as a delimiter to identify the end of the frame; in short, it is called an ED. This method is used in the token ring. If the delimiter pattern occurs inside the message itself, two methods are used to avoid confusion:

 Character Oriented Approach (Byte Stuffing)
 Bit Oriented Approach (Bit Stuffing)
What is Byte Stuffing?
Byte stuffing is the process of adding an extra byte whenever a flag or escape character appears in the text, as in the example in the diagram below.

The sender sends the frame after adding extra ESC bytes; the destination machine receives the frame and removes the extra bytes to convert the frame back into the original message.
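A minimal sketch of byte stuffing and unstuffing, assuming HDLC-like byte values (flag 0x7E, escape 0x7D); the exact byte values are an assumption for the example:

```python
FLAG, ESC = 0x7E, 0x7D

def byte_stuff(payload: bytes) -> bytes:
    out = bytearray()
    for b in payload:
        if b in (FLAG, ESC):
            out.append(ESC)   # insert an escape byte before any flag/escape
        out.append(b)
    return bytes(out)

def byte_unstuff(stuffed: bytes) -> bytes:
    out, escaped = bytearray(), False
    for b in stuffed:
        if not escaped and b == ESC:
            escaped = True    # drop the stuffed escape byte, keep the next byte
            continue
        out.append(b)
        escaped = False
    return bytes(out)
```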

What is Bit Stuffing?

Most protocols use the special 8-bit flag pattern 01111110 as the delimiter to define the beginning and the end of the frame. Bit stuffing is done at the sender end and bit removal at the receiver end.
Even if a 0 already follows five consecutive 1s, the sender still stuffs a 0, and the receiver removes it. Bit stuffing is also called bit-oriented framing.
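The stuffing rule can be sketched on a string of bits. Strings of '0'/'1' characters are used here for clarity; real implementations operate on bit streams:

```python
def bit_stuff(bits: str) -> str:
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuff a 0 after five consecutive 1s
            run = 0
    return "".join(out)

def bit_unstuff(bits: str) -> str:
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0 that follows five 1s
            run = 0
        i += 1
    return "".join(out)
```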

What is a multiple access protocol?

When a sender and receiver have a dedicated link to transmit data packets, data link control is enough to handle the channel. Suppose there is no dedicated path to communicate or transfer data between two devices. In that case, multiple stations access the channel and may transmit data over it simultaneously, which can create collisions and crosstalk. Hence, a multiple access protocol is required to reduce collisions and avoid crosstalk between the channels.

For example, suppose there is a classroom full of students. When a teacher asks a question, all the students (small channels) start answering at the same time (transferring data simultaneously). Because they all respond at once, the answers overlap or are lost. It is therefore the responsibility of the teacher (the multiple access protocol) to manage the students and have them answer one at a time.

The types of multiple access protocol, subdivided into their different processes, are as follows:

A. Random Access Protocol

In this protocol, all stations have equal priority to send data over the channel. In a random access protocol, no station depends on another station, nor does any station control another. Depending on the channel's state (idle or busy), each station transmits its data frame. However, if more than one station sends data over the channel at the same time, there may be a collision or data conflict. Due to the collision, data frame packets may be lost or changed, and hence not received correctly by the receiver.

Following are the different random-access methods for broadcasting frames on the channel:

o Aloha
o CSMA
o CSMA/CD
o CSMA/CA
ALOHA Random Access Protocol

It was designed for wireless LANs (Local Area Networks) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.

Aloha Rules

1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions may occur and data frames may be lost when multiple stations transmit at the same time.
4. Aloha relies on acknowledgment of frames; there is no collision detection.
5. It requires retransmission of data after some random amount of time.

Pure Aloha

Pure Aloha is used whenever data is available for sending over the channel at a station. In pure Aloha, each station transmits data on the channel without checking whether the channel is idle or not, so collisions may occur and data frames can be lost. When a station transmits a data frame on the channel, it waits for the receiver's acknowledgment. If no acknowledgment arrives within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and then retransmits the frame until all the data are successfully transmitted to the receiver.

1. The total vulnerable time of pure Aloha is 2 × Tfr.
2. Maximum throughput occurs when G = 1/2 and is 18.4%.
3. The probability of successful transmission of a data frame is S = G × e^(-2G).

As we can see in the figure above, there are four stations accessing a shared channel and transmitting data frames. Some frames collide because most stations send their frames at the same time. Only two frames, frame 1.1 and frame 2.2, are successfully transmitted to the receiver end, while the other frames are lost or destroyed. Whenever two frames occupy the shared channel simultaneously, a collision occurs and both frames suffer damage: if the first bit of a new frame enters the channel before the last bit of another frame has left it, both frames are destroyed and both stations must retransmit their data frames.

Slotted Aloha

Slotted Aloha was designed to overcome the poor efficiency of pure Aloha, which has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed per slot. If a station is unable to send its data at the beginning of a slot, it has to wait until the beginning of the next slot. However, the possibility of a collision remains when two or more stations try to send a frame at the beginning of the same time slot.

1. Maximum throughput occurs in slotted Aloha when G = 1 and is about 37%.
2. The probability of successfully transmitting a data frame in slotted Aloha is S = G × e^(-G).
3. The total vulnerable time required in slotted Aloha is Tfr.
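The two throughput formulas above can be checked numerically, where G is the average number of frames attempted per frame time:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    # S = G * e^(-2G); maximum at G = 1/2.
    return G * math.exp(-2 * G)

def slotted_aloha_throughput(G: float) -> float:
    # S = G * e^(-G); maximum at G = 1.
    return G * math.exp(-G)

print(round(pure_aloha_throughput(0.5), 3))     # 0.184 -> 18.4%
print(round(slotted_aloha_throughput(1.0), 3))  # 0.368 -> about 37%
```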
CSMA (Carrier Sense Multiple Access)

Carrier sense multiple access is a media access protocol that senses the traffic on a channel (idle or busy) before transmitting data. If the channel is idle, the station can send data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chances of a collision on the transmission medium.

CSMA Access Modes

1-Persistent: In the 1-Persistent mode of CSMA, each node first senses the shared channel and, if the channel is idle, immediately sends the data. Otherwise, it keeps monitoring the channel's status and transmits the frame unconditionally as soon as the channel becomes idle.
Non-Persistent: In this access mode of CSMA, each node must sense the channel before transmitting data, and if the channel is inactive, it immediately sends the data. Otherwise, the station waits for a random time (rather than sensing continuously), and when the channel is then found to be idle, it transmits the frames.

P-Persistent: It is a combination of the 1-Persistent and Non-Persistent modes. In P-Persistent mode, each node senses the channel, and if the channel is inactive, it sends a frame with probability p. If the frame is not transmitted (with probability q = 1 - p), it waits for a random time and tries again in the next time slot.

O-Persistent: In the O-Persistent method, a transmission order is assigned to the stations before frames are sent on the shared channel. If the channel is found to be inactive, each station waits for its turn to transmit the data.
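One decision step of the P-Persistent mode can be sketched as follows. The function name and its return strings are invented for illustration, and the random source is injectable so the probabilistic choice can be tested:

```python
import random

def p_persistent_step(channel_idle: bool, p: float, rng=random.random) -> str:
    # If the channel is busy, keep waiting; if idle, transmit with
    # probability p, otherwise defer to the next time slot.
    if not channel_idle:
        return "wait"
    return "transmit" if rng() < p else "defer"
```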
CSMA/CD

Carrier sense multiple access with collision detection is a network protocol for transmitting data frames. The CSMA/CD protocol works with a medium access control layer. It first senses the shared channel before broadcasting frames, and if the channel is idle, it transmits a frame while checking whether the transmission was successful. If the frame is successfully received, the station sends the next frame. If any collision is detected in CSMA/CD, the station sends a jam/stop signal on the shared channel to terminate the data transmission. After that, it waits for a random time before sending the frame again.

CSMA/CA

Carrier sense multiple access with collision avoidance is a network protocol for carrier transmission of data frames that works with a medium access control layer. When a data frame is sent on the channel, the station waits for an acknowledgment to check whether the channel is clear. If the station receives only a single (its own) signal, the data frame has been successfully transmitted to the receiver. But if it gets two signals (its own and one from a colliding frame), a collision has occurred on the shared channel. The sender detects the collision when it fails to receive an acknowledgment signal.

Following are the methods used in CSMA/CA to avoid collisions:

Interframe space: In this method, the station waits for the channel to become idle, and when it finds the channel idle, it does not send the data immediately. Instead, it waits for some time; this period is called the interframe space, or IFS. The IFS time is often used to define the priority of the station.

Contention window: In the contention window method, the total time is divided into slots. When the station/sender is ready to transmit the data frame, it chooses a random number of slots as its wait time. If the channel is still busy, it does not restart the entire process but only restarts the timer when the channel becomes inactive again.
Acknowledgment: In the acknowledgment method, the sender station retransmits the data frame if an acknowledgment is not received before the timer expires.

B. Controlled Access Protocol

It is a method of reducing data frame collisions on a shared channel. In the controlled access method, the stations interact and decide which station may send a data frame, with each transmission approved by all the other stations. A single station cannot send data frames unless it is approved by all other stations. There are three types of controlled access: Reservation, Polling, and Token Passing.

Differences between Virtual Circuits and Datagram Networks

o Virtual circuits are connection-oriented: resources such as buffers and bandwidth are reserved for the duration of the data transfer session that will use the newly set up VC. Datagram networks are connectionless: no reservation of resources is needed, as there is no dedicated path for a connection session.
o A virtual circuit network uses a fixed path for a particular session, after which the connection is torn down and another path has to be set up for the next session. A datagram-based network is a true packet-switched network; there is no fixed path for transmitting data.
o In a virtual circuit, all packets follow the same path, so a global header is required only for the first packet of the connection; the other packets do not require it. In a datagram network, every packet is free to choose any path, so every packet must carry a header containing information about the source and the upper-layer data.
o In a virtual circuit, packets reach the destination in order, since the data follows the same path. In a datagram network, packets reach the destination in random order; they need not arrive in the order in which they were sent.
o Virtual circuits are highly reliable; datagram networks are not as reliable as virtual circuits.
o Implementation of virtual circuits is costly, as each new connection has to be set up with reservation of resources and extra information handling at routers. Datagram networks are easier and more cost-efficient to implement, as there is no need to reserve resources and make a dedicated path each time an application has to communicate.

Services provided by the Transport Layer

Address Mapping
This means mapping the transport address onto the network address. Whenever a session entity requests to send a transport service data unit (TSDU) to another session entity, it sends its transport service access point address as its identification. The transport entity then determines the network service access point (NSAP) address. This is known as address mapping.

Assignment of Network Connection

The transport entity assigns a network connection for carrying the transport protocol data units (TPDUs) and establishes this assigned network connection. Some transport protocols allow recovery from network disconnection: in such protocols, whenever a disconnection occurs, the transport entity reassigns the transport of TPDUs to a different network connection.

Multiplexing of Transport Connections

For optimum use of the network link, the transport entity can map multiple end-to-end transport connections onto a single network connection; this is referred to as multiplexing.
The multiplexed TSDUs are identified by the receiving transport entity using the transport connection endpoint identifier (TCEPI) attached to each TSDU by the sending transport entity.
The TCEP identifier is unique for each connection, as shown in the figure below.

Splitting of Transport Connection

When the quality of service offered by the network service is less than the required quality of service, or when greater resilience is required against network connection failures, splitting is done by the transport entity. Splitting means that TPDUs belonging to one transport connection may be sent over different network connections.
Splitting requires re-sequencing, because it results in the reordering of TSDUs, as shown in the figure below.

Establishment of Transport Connection
The transport layer establishes a transport connection by sending a request. For establishing a connection, it uses the T-CONNECT service primitives. During establishment, the transport entity negotiates the quality of service requirements and collects the addresses involved.

Data Transfer
The transport layer provides two types of data transfer: normal data transfer and expedited data transfer. In normal data transfer, the user can request the transfer of user data with any integral number of octets.
This transfer is transparent, i.e., user data boundaries are preserved during the transfer, and there are no constraints on the content or number of octets. Data transfer can take place in both directions simultaneously. Expedited data transfer has a distinct control flow and can bypass the normal data queue, being delivered with maximum priority. It is an optional service of the user or the provider. The number of user data octets is restricted to 16.

Segmentation and Concatenation of TPDUs

The transport entity divides the transport service data unit into several transport protocol data units, each with a separate header containing a PCI (Protocol Control Information). This function is known as segmenting.
Segmenting is used when the network service cannot support the size of the transport protocol data unit containing an unsegmented TSDU. A reassembly process is performed at the receiving end for such TPDUs.

The reverse of segmenting is known as concatenation. Concatenation enables the mapping of several TPDUs onto a single NSDU (Network Service Data Unit). These TPDUs may belong to the same or to several transport connections. If they belong to different transport connections, they must be travelling in the same direction. At the receiving end, a separation function is performed by the transport entity, which identifies the boundaries of the different TPDUs.
Concatenation is done to improve the efficiency of utilization of the network service. There are some restrictions on which types of TPDUs can be concatenated, so that their boundaries can be identified by the transport entity, as shown in the figure below.
Flow Control
The transport entity uses a modified form of the sliding window protocol for flow control. This flow control is required because the transport layer may experience back pressure from the network layer.
In this mechanism, the window size is variable and controlled by the receiver. A credit allocation is sent by the receiver to the sender, indicating how many TPDUs the receiver can accept.

Error Recovery
Errors at this level can be introduced by TPDU errors, protocol errors, or signal failure conditions of network connections, i.e., reset or release of network connections. Errors occurring at layer 3 are reported to the transport layer. TPDU errors can take the form of lost TPDUs, duplicated TPDUs, re-ordered sequences, or content errors.
Duplicate TPDUs are discarded; lost TPDUs are reported through acknowledgments so that they are resent. Re-ordered TPDUs are re-sequenced, and content errors are detected by the error detection bytes that the transport entity incorporates in TPDUs.
TPDUs with content errors are discarded and treated as lost, and hence they too are handled through acknowledgments. In the case of protocol errors, the connection is released; in the case of signal failure errors, reassignment of the network connection and resynchronization are performed.

Sequence Numbering
Each TPDU is given a sequence number by the transport entity; in the normal operating mode the sequence number is seven bits long. Sequence numbering is done to provide flow control and error recovery. In extended mode, the sequence number can be 31 bits long.
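The effect of the field width is just modular arithmetic: the sequence number wraps around once the field is exhausted. A sketch:

```python
NORMAL_MODE_BITS = 7      # sequence numbers 0..127
EXTENDED_MODE_BITS = 31   # sequence numbers 0..2**31 - 1

def next_seq(seq: int, bits: int) -> int:
    # Sequence numbers wrap around when the field is exhausted.
    return (seq + 1) % (1 << bits)
```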

What is TCP Half-Close?

TCP allows you to close each direction of the connection independently. A TCP connection is considered half-closed when it is closed in one direction and still open in the other. It allows an application to say: "I am done sending data, so send a FIN to the other end, but I still want to receive data from the other end, until it sends me a FIN."

TCP half-close is sometimes used to emulate Unix-style I/O redirection on TCP streams by using the FIN message as an EOF marker. The book TCP/IP Illustrated, Vol. 1 provides a nice example of this:
ssh hostname sort < datafile
Here the "ssh" program starts the "sort" program on the remote host and reads standard input from a local file named "datafile". Each line of standard input is sent to the sort program over the TCP stream. The final EOF marker from datafile is sent over the TCP stream as a FIN message and then delivered as EOF on the standard input of the sort program. Note that sort cannot generate any output until it has received all of its input, so it relies on TCP half-close to work.
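The half-close behaviour can be demonstrated locally with Python's socket module, using a connected socket pair instead of a real network. One side plays the ssh client (send everything, half-close, then read the result); the other plays the remote sort (read until EOF, sort, reply):

```python
import socket

# a plays the client: sends all its input, half-closes, then reads the result.
# b plays the remote "sort": reads until EOF, sorts the lines, and replies.
a, b = socket.socketpair()

a.sendall(b"3\n1\n2\n")
a.shutdown(socket.SHUT_WR)      # send FIN: "done sending", but keep reading

data = b""
while True:
    chunk = b.recv(4096)
    if not chunk:               # EOF: the peer's half-close arrived
        break
    data += chunk

b.sendall(b"".join(sorted(data.splitlines(keepends=True))))
b.close()

result = a.recv(4096)           # the reply still flows in the open direction
a.close()
print(result)                   # b"1\n2\n3\n"
```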
