
CHAPTER -5

Transport Layer

By: Asst. Prof. Sanjivan Satyal


Transport Layer
• Offers a peer-to-peer, end-to-end connection between two
processes on remote hosts.

• Process-to-process delivery.

• Takes data from the upper layer (i.e. the Application layer),
breaks it into smaller segments, numbers each byte, and hands
the segments over to the lower layer (the Network layer) for delivery.
Functions of Transport Layer:
• Process-to-process delivery

• Multiplexing and demultiplexing – simultaneous use of different
applications

• Segmentation and reassembly

• Congestion control: a state occurring in the network layer when the
message traffic is so heavy that it slows down network response time.
Effects of congestion:
As delay increases, performance decreases.
If delay increases, retransmission occurs, making the situation worse.

• Connection control – TCP (connection-oriented) or UDP (connectionless)

• Flow control: TCP prevents data loss due to a fast sender
and a slow receiver by imposing flow control techniques.
It uses the sliding window protocol, in which the receiver
sends a window back to the sender indicating the amount of
data it can receive.

• Error control: already covered (checksum, cyclic redundancy check,
and parity checking).
Process to Process communication
• A process is an application-layer entity that uses the
services of the transport layer.

• The network layer is responsible for communication at the
computer level.

• The network layer can deliver a message only to the
destination computer.

• The transport-layer protocol is responsible for delivery of
the message to the appropriate process.
Logical connection at the transport layer
Multiplexing and Demultiplexing
• At the sending end, several processes want to send packets.

• But there is only one transport-layer protocol. This is a
many-to-one relationship and hence requires multiplexing.

• At the receiving end, the relationship is one-to-many,
therefore demultiplexing is needed.

• The transport layer receives datagrams from the network
layer, checks for errors, strips the header to obtain the
messages, and delivers them to the appropriate process
based on the port number.
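Demultiplexing by destination port can be sketched in Python (an illustrative simulation, not a real protocol stack; the segment tuples and port list are assumptions for the example):

```python
# Sketch of transport-layer demultiplexing: each arriving segment is
# delivered to the process listening on its destination port.

def demultiplex(segments, listening_ports):
    """Deliver (dst_port, payload) segments to per-port inboxes;
    segments with no listening process are dropped."""
    inboxes = {port: [] for port in listening_ports}
    for dst_port, payload in segments:
        if dst_port in inboxes:
            inboxes[dst_port].append(payload)
    return inboxes

# Two processes listening on ports 53 (DNS) and 80 (HTTP);
# the segment to port 9999 has no listener and is dropped.
inbox = demultiplex(
    [(53, "query"), (80, "GET /"), (9999, "noise")],
    listening_ports=[53, 80],
)
```

This mirrors the one-to-many relationship at the receiver: one transport layer, many application processes, selected by port number.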
Elements of Transport Protocols:
Addressing:

Transport layer must figure out which process to send the data to. To make
this possible, an additional addressing element is necessary. This address
allows a more specific location—a software process—to be
identified within a particular IP address.

In TCP/IP, this transport layer address is called a port address. Source and
destination addresses are found in the IP packet, belonging to the network
layer. A transport layer datagram or segment that uses port numbers is
wrapped into an IP packet and transported by it.

The network layer uses the IP packet information to transport the packet
across the network (routing). At the transport layer, we need a transport
layer address, called a port number, to choose among multiple processes
running on the destination host. The destination port number is needed
for delivery; the source port number is needed for the reply.
Port Address

• At the transport layer, we need a transport-layer address, called a port
number, to choose among multiple processes running on the destination host.

• The destination port number is needed for delivery; the source port number
is needed for the reply.

• In the Internet model, port numbers are 16-bit integers between 0 and
65,535.

• The client program defines itself with a port number, chosen randomly by the
transport-layer software running on the client host. This is the ephemeral
port number.
• The server process must also define itself with a port
number, but this is a well-known port number.
IANA Ranges

• The IANA (Internet Assigned Numbers Authority) has
divided the port numbers into three ranges: well-known,
registered, and dynamic (or private).

• Well-known ports: the ports ranging from 0 to 1023 are
assigned and controlled by IANA. These are the well-known
ports.
• Registered ports: the ports ranging from 1024 to 49,151
are not assigned or controlled by IANA. They can only be
registered with IANA to prevent duplication.
• Dynamic ports: the ports ranging from 49,152 to 65,535
are neither controlled nor registered. They can be used by
any process. These are the ephemeral ports.
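The three ranges can be captured in a small classifier (a sketch; the function name is illustrative):

```python
def port_range(port):
    """Classify a 16-bit port number into the three IANA ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port numbers are 16-bit integers (0-65535)")
    if port <= 1023:
        return "well-known"     # assigned and controlled by IANA
    if port <= 49151:
        return "registered"     # registered with IANA to avoid duplication
    return "dynamic"            # neither controlled nor registered

# HTTP sits in the well-known range; a typical client port is dynamic.
```

For example, `port_range(80)` returns `"well-known"` while `port_range(52311)` returns `"dynamic"`.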
• Some of the well-known port addresses associated with network
applications are:
Socket Address
• Process-to-process delivery needs two identifiers, the IP address
and the port number, at each end to make a connection.

• The combination of an IP address and a port number is called a
socket address.

• The client socket address defines the client process uniquely,
just as the server socket address defines the server process
uniquely.

• A transport-layer protocol needs a pair of socket addresses: the
client socket address and the server socket address.
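A socket address pair can be modelled directly (the IP addresses and port numbers below are illustrative examples):

```python
from collections import namedtuple

# A socket address is the combination of an IP address and a port number.
SocketAddress = namedtuple("SocketAddress", ["ip", "port"])

client = SocketAddress("192.168.1.10", 52311)   # ephemeral client port
server = SocketAddress("93.184.216.34", 80)     # well-known server port

# A transport-layer connection is identified by the pair of socket addresses.
connection = (client, server)
```

Two simultaneous connections from the same host to the same server differ only in the client's ephemeral port, which is what keeps them distinguishable.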
Establishing and Releasing Connection:
• A transport-layer protocol can be either connectionless or
connection-oriented.

• In a connectionless service, the packets are sent from one party to
another with no need for connection establishment or connection
release. The packets are not numbered; they may be delayed or lost or
may arrive out of sequence. There is no acknowledgment either.

• In a connection-oriented service, a connection is first established
between the sender and the receiver. Data are transferred. At the end,
the connection is released. This constitutes three phases of
communication for the connection-oriented mechanism:

 Connection is established.
 Information is sent.
 Connection is released (terminated).
User Datagram Protocol
• The simplest transport-layer communication protocol
in the TCP/IP protocol suite.

• Involves a minimum amount of communication mechanism.

• An unreliable transport protocol, but it uses IP services,
which provide a best-effort delivery mechanism.

• The receiver does not generate an acknowledgement of a
received packet and, in turn, the sender does not wait for
any acknowledgement of a sent packet. This shortcoming
makes the protocol unreliable but easier on
processing.
Features
• UDP is used when acknowledgement of data does not hold any
significance.

• UDP is a good protocol for data flowing in one direction.

• UDP is simple and suitable for query-based communications.

• UDP is not connection-oriented.

• UDP does not provide a congestion control mechanism.

• UDP does not guarantee ordered delivery of data.

• UDP is a suitable protocol for streaming applications such as VoIP
and multimedia streaming.
UDP Header
• Source port number (16 bits): the port number used by the
process running on the source host. Range 0–65,535.

• Destination port number (16 bits): the port number used by
the process running on the destination host. Range 0–65,535.

• Length (16 bits): defines the total length of the user datagram,
header plus data. Length ranges between 8 and 65,535
bytes.

• Checksum (16 bits): for error checking.
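The fixed 8-byte header (four 16-bit fields, per RFC 768) can be packed in Python; the port numbers and payload here are illustrative:

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    """Build the 8-byte UDP header: source port, destination port,
    length, checksum, each 16 bits, in network byte order.
    The length field covers header plus data."""
    length = 8 + len(payload)
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

header = udp_header(52311, 53, b"query")
# 8 header bytes + 5 payload bytes -> length field of 13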
Uses of UDP:
• UDP allows very simple data transmission without any error recovery, so
it can be used in networking applications like VoIP and streaming media,
where the loss of a few packets can be tolerated and the application still
functions appropriately.

• UDP is suitable for multicasting, since multicasting capability is embedded
in UDP software but not in TCP software.

• UDP is used for some route-updating protocols such as RIP.

• UDP is used by the Dynamic Host Configuration Protocol (DHCP) to assign IP
addresses to systems dynamically.

• UDP is an ideal protocol for network applications where latency is critical,
such as gaming and voice and video communications.
Remote Procedural Call (RPC)

• When a process on machine 1 calls a procedure on machine 2,
the calling process on machine 1 is suspended and execution of the
called procedure takes place on machine 2.

• Information can be transported from the caller to the
callee in the parameters and can come back in the procedure
result.

• No message passing is visible to the application programmer.
This technique is known as RPC (Remote Procedure Call) and
has become the basis for many networking applications.
Traditionally, the calling procedure is known as the client and
the called procedure is known as the server.
• The idea behind RPC is to make a remote procedure call look as much as
possible like a local one.
• In the simplest form, to call a remote procedure, the client program must
be bound with a small library procedure, called the client stub, that
represents the server procedure in the client’s address space.
• Similarly, the server is bound with a procedure called the server stub.
These procedures hide the fact that the procedure call from the client to
the server is not local.
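The stub idea can be sketched in a single process (an illustrative simulation: the procedure `add`, the stub names, and the JSON wire format are all assumptions; in a real RPC system the message would cross the network):

```python
import json

def add(a, b):                       # the "remote" procedure on the server
    return a + b

PROCEDURES = {"add": add}

def server_stub(message):
    """Unmarshal the request, invoke the real procedure, marshal the reply."""
    request = json.loads(message)
    result = PROCEDURES[request["proc"]](*request["args"])
    return json.dumps({"result": result})

def client_stub(proc, *args):
    """Marshal the call into a message; the caller sees a local call."""
    message = json.dumps({"proc": proc, "args": args})
    reply = server_stub(message)     # in reality this crosses the network
    return json.loads(reply)["result"]

# To the application programmer this looks like an ordinary call.
total = client_stub("add", 2, 3)
```

The stubs hide all marshalling, so swapping `server_stub(message)` for an actual network send would not change the caller's code.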
Transmission Control Protocol
• The Transmission Control Protocol (TCP) is one of the
most important protocols of the Internet protocol suite.

• It is the most widely used protocol for data transmission
in communication networks such as the Internet.

• Well-known TCP ports:

• FTP – 20 (data), 21 (control)
• TELNET – 23
• SMTP – 25
• DNS – 53
• HTTP – 80
• RPC – 111
Definitions
• The domain name system (i.e., “DNS”) is responsible for
translating domain names into a specific IP address so that
the initiating client can load the requested Internet
resources. The domain name system works much like a phone
book where users can search for a requested person and
retrieve their phone number.

• Telnet: Port 23 is typically used by the Telnet protocol.
Telnet commonly provides remote access to a variety of
communications systems. Telnet is also often used for remote
maintenance of many networking communications devices,
including routers and switches.

• Port 111 is generally called unsecured, or a security vulnerability,
as it provides direct and easy access to RPC services. Remote
Procedure Call is a software communication protocol that one
program can use to request a service from a program located on
another computer on a network without having to understand the
network's details.
Features
• TCP is a reliable protocol: it detects whether a data packet has reached
the destination or needs to be resent.

• TCP ensures that the data reaches the intended destination in the same
order it was sent.

• TCP is connection-oriented: it requires that a connection between
two remote points be established before actual data are sent.

• TCP provides an error-checking and recovery mechanism.

• TCP provides end-to-end communication.

• TCP provides flow control and quality of service.

• TCP operates in client/server point-to-point mode.

• TCP provides full-duplex service, i.e. each endpoint can perform the
roles of both receiver and sender.
Header
3-Way Handshaking

• TCP is connection-oriented, which means a connection
should be established before data transfer.

• Steps in TCP communication:

1. Connection establishment
2. Data transfer
3. Connection termination

• The connection is established with the help of the 3-way
handshaking protocol.

• Connection termination is also done with a 3-way
handshaking exchange.
Connection Establishment (Handshaking)

• SYN – synchronization message used for connection establishment

• ACK – acknowledgement

Steps
1. First, the client sends a SYN message to the server.

2. The server responds with an ACK to the SYN, together with its own SYN, to the client.

3. The client responds with an ACK to the server's SYN, and the connection is established.
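The server side of this handshake can be sketched as a tiny state machine (illustrative Python, far simpler than a real TCP stack; the state names follow TCP convention):

```python
def server_handshake(messages):
    """Advance a minimal server-side state machine through the
    three-way handshake: LISTEN -> SYN_RCVD -> ESTABLISHED.
    On receiving SYN the server would reply SYN+ACK (not modelled)."""
    state = "LISTEN"
    for msg in messages:
        if state == "LISTEN" and msg == "SYN":
            state = "SYN_RCVD"        # server has sent SYN+ACK, awaits ACK
        elif state == "SYN_RCVD" and msg == "ACK":
            state = "ESTABLISHED"     # handshake complete
    return state
```

Feeding it the client's two messages in order reaches `ESTABLISHED`; a lone SYN leaves the server half-open in `SYN_RCVD`.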


Data Transfer (ACK number + SEQ number)

PSH: request for push


Connection Termination (Handshaking)
• FIN (Finish) – connection termination

• ACK – acknowledgement

Steps
1. First, the client sends a FIN message to the server.
2. The server responds with an ACK to the client's FIN and sends its
own FIN to the client.
3. The client responds with an ACK to the server's FIN, and the connection is terminated.
Reliable vs. Unreliable
Traffic Shaping Algorithms
• Traffic shaping is a mechanism to control the amount and
the rate of the traffic sent to the network.

• A burst transmission, or data burst, is the transmission of
relatively high-bandwidth traffic over a short period.

• Two techniques can shape traffic:

– Leaky bucket and
– Token bucket.
The Leaky Bucket Algorithm
• If a bucket has a small hole at the bottom, water leaks from
the bucket at a constant rate as long as there is water in the
bucket. The rate at which the water leaks does not depend on
the rate at which water is poured into the bucket, unless the
bucket is empty.

• NB: in the network, the bucket is the router and the water is the data packets.

• In the figure, we assume that the network has committed a
bandwidth of 3 Mbps for a host. The use of the leaky bucket
shapes the input traffic to make it conform to this commitment.
The host sends a burst of data at a rate of 12 Mbps for 2 s, for
a total of 24 Mbits of data. The host is silent for 5 s and then
sends data at a rate of 2 Mbps for 3 s, for a total of 6 Mbits of
data. In all, the host has sent 30 Mbits of data in 10 s.

• The input rate can vary, but the output rate remains constant.

• Similarly, in networking, the leaky bucket technique can
smooth out bursty traffic: bursty chunks are stored in the router
and sent out at an average rate.

• It may also drop packets if the bucket is full.

Implementation:
a. The process removes a fixed number of packets from the queue at
each tick of the clock.
b. If the traffic consists of variable-length packets, the fixed
output rate must be based on the number of bytes or bits.
c. The following is an algorithm for variable-length packets:

1. Initialize a counter to n at the tick of the clock.

2. If n is greater than the size of the packet at the front of the
queue, send the packet and decrement the counter by the packet size.
Repeat this step until n is smaller than the packet size.
3. Reset the counter and go to step 1.
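The variable-length algorithm above can be sketched in Python (a simplified simulation; the byte budget n and the packet sizes are illustrative):

```python
from collections import deque

def leaky_bucket(packet_sizes, n, ticks):
    """Variable-length leaky bucket: at each clock tick, initialize a
    counter to n (bytes) and send queued packets while the counter
    covers the next packet's size. Output per tick is bounded by n,
    regardless of how bursty the input was."""
    queue = deque(packet_sizes)
    sent_per_tick = []
    for _ in range(ticks):
        counter = n                       # step 1: reset counter each tick
        sent = []
        while queue and queue[0] <= counter:
            size = queue.popleft()        # step 2: send while budget remains
            counter -= size
            sent.append(size)
        sent_per_tick.append(sent)        # step 3: wait for the next tick
    return sent_per_tick

# A burst of four 400-byte packets drained at most 1000 bytes per tick:
out = leaky_bucket([400, 400, 400, 400], n=1000, ticks=3)
```

The burst arrives all at once, but the output is spread over two ticks of at most 1000 bytes each, which is exactly the smoothing the algorithm is meant to provide.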
Token Bucket Algorithm
• In contrast to the leaky bucket (LB), the Token Bucket (TB)
algorithm allows the output rate to vary, depending on
the size of the burst.

• In the TB algorithm, the bucket holds tokens. To
transmit a packet, the host must capture and destroy
one token.

• Tokens are generated by a clock at the rate of one
token every ∆t seconds.

• Idle hosts can capture and save up tokens (up to the
maximum size of the bucket) in order to send larger bursts
later.
• The host can send bursty data as long as the bucket is
not empty.

• The token bucket algorithm thus allows idle hosts to accumulate
credit for the future in the form of tokens.

• For each tick of the clock, the system adds n tokens to
the bucket and removes one token for every cell (or byte) of
data sent. For example, if n is 100 and the host is idle for
100 ticks, the bucket collects 10,000 tokens. The host
can then consume all these tokens in one tick with 10,000 cells,
or take 1000 ticks sending 10 cells per tick.
Implementation:

• The token bucket is easy to implement. The bucket holds tokens,
and to transmit a packet the host must capture and destroy tokens.
Tokens are generated by a clock at a fixed rate.

1. A token is added every ∆t time.

2. The bucket can hold at most b tokens. If a token arrives when
the bucket is full, it is discarded.
3. When a packet of m bytes arrives, m tokens are removed
from the bucket and the packet is sent to the network.
4. If fewer than m tokens are available, no tokens are removed
from the bucket and the packet is considered non-conformant.
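The four steps above can be sketched in Python (a simplified simulation; the rate, capacity, and packet sizes are illustrative, and non-conformant packets are simply counted rather than queued or marked):

```python
def token_bucket(arrivals, rate, capacity, ticks):
    """Token bucket sketch: `rate` tokens are added per tick, capped at
    `capacity` (step 2); a packet of m bytes consumes m tokens (step 3);
    a packet that finds fewer than m tokens is non-conformant (step 4).
    `arrivals[t]` lists the packet sizes arriving at tick t."""
    tokens = 0
    sent, non_conformant = [], []
    for t in range(ticks):
        tokens = min(capacity, tokens + rate)   # steps 1-2: add, cap at b
        for m in (arrivals[t] if t < len(arrivals) else []):
            if m <= tokens:
                tokens -= m                     # step 3: spend m tokens
                sent.append(m)
            else:
                non_conformant.append(m)        # step 4: not enough credit
    return sent, non_conformant

# Idle for 2 ticks (credit accumulates), then a burst arrives at tick 2:
sent, dropped = token_bucket([[], [], [250, 250, 250]],
                             rate=100, capacity=1000, ticks=3)
```

After two idle ticks the bucket holds 300 tokens, so one 250-byte packet passes even though the per-tick rate is only 100; the rest of the burst exceeds the saved credit. This is the key contrast with the leaky bucket, whose output never exceeds the fixed rate.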
Congestion
• Load of network: the number of packets sent to the network.

• Capacity of network: the number of packets the network can
handle.

• Congestion occurs when the load on the network is greater than
the capacity of the network.

• Congestion occurs because of the following factors:

1. Processing capacity of the router
2. Number of packets at the input and output interfaces
Congestion Control
• Congestion control refers to techniques and mechanisms that
can either prevent congestion, before it happens, or remove
congestion, after it has happened.

• Congestion control mechanisms fall into two broad categories:

1. Open-loop congestion control (prevention)
2. Closed-loop congestion control (removal)
Open Loop Congestion

• Congestion prevention mechanism.

• Policies are applied to prevent congestion before it happens.

• Congestion control is handled by either the source or the
destination.

1. Retransmission policy
2. Window policy
3. Acknowledgment policy
4. Discard policy
5. Admission policy
• Retransmission policy: retransmission is sometimes unavoidable.
If the sender feels that a sent packet is lost or corrupted, the
packet needs to be retransmitted.

• Window policy: when the timer for a packet times out, several
packets may be resent, although some may have arrived safe
and sound at the receiver. This duplication may make the
congestion worse.

• Acknowledgment policy: the acknowledgment policy imposed by
the receiver may also affect congestion. If the receiver does
not acknowledge every packet it receives, it may slow down the
sender and help prevent congestion.
• Discarding policy: a good discarding policy by the
routers may prevent congestion and at the same time
not harm the integrity of the transmission.

• Admission policy: a router can deny establishing a virtual-circuit
connection if there is congestion in the network or
a possibility of future congestion.
Closed-Loop Congestion Control

• Closed-loop congestion control mechanisms try to
alleviate congestion after it happens.

• Several mechanisms have been used by different
protocols:

1. Backpressure
2. Choke packet
3. Implicit signalling
4. Explicit signalling
1. Back Pressure
• Backpressure refers to a congestion control mechanism in
which a congested node stops receiving data from the
immediate upstream node or nodes.

• Backpressure is a node-to-node congestion control.

• It starts from the congested node and propagates, in the opposite
direction of the data flow, towards the source.

• The backpressure technique can be applied only to virtual-circuit
networks.

2. Choke Packet :

• A choke packet is a packet sent by a node to the source to
inform it of congestion.

• The warning goes from the congested router directly to
the source station.

• Intermediate nodes do not know about the congestion.

3. Implicit Signalling

• There is no communication between the congested nodes and the source.

• The source guesses that there is congestion somewhere in the
network from other symptoms.
• Signals like acknowledgements are used.
• If an ACK is delayed, the source assumes there is congestion towards the
destination and slows down the data transfer.
• Used mainly in TCP networks.

4. Explicit Signalling

• The node that experiences congestion can explicitly send a signal
to the source or destination.
• The only difference from the choke packet method is that no separate
packet is used, whereas the choke packet method uses a separate packet.
• It is used in Frame Relay.
