
Chapter 3:

Transport Layer
our goals:
 understand principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
 learn about Internet transport layer protocols:
• UDP (User Datagram Protocol): connectionless transport
• TCP (Transmission Control Protocol): connection-oriented
reliable transport
Why Do We Need a Transport Layer?
Since several processes may be running on each host machine, a layer is
needed to be responsible for process-to-process delivery.

The transport layer works as a liaison between the application layer and
the lower layers (network layer, data link layer, and physical layer).

Application-layer programs use the services of the transport layer to
communicate without being aware of the lower layers (i.e., they do not
depend on the physical network itself).

The transport layer uses the network-layer services and implements its own
internal functions to hide the imperfections and heterogeneous
nature of the physical network.
Transport services and protocols

 provide logical communication between app processes
running on different hosts
 transport protocols run in end systems
• send side: breaks app messages into segments, passes to
network layer
• receive side: reassembles segments into messages,
passes to app layer
 more than one transport protocol available to apps
• Internet: TCP and UDP
Transport vs. network layer

 network layer: logical communication between hosts
 transport layer: logical communication between processes

Example: cousins on the west and east coasts exchanging letters
application messages = letters in envelopes
processes = cousins
hosts (also called end systems) = houses
transport-layer protocol = Ann and Bill
network-layer protocol = postal service (including mail carriers)
Internet transport-layer protocols

 reliable, in-order delivery: TCP
• connection setup
• congestion control
• flow control
 unreliable, unordered delivery: UDP
• no-frills extension of “best-effort” IP
 services not available:
• delay guarantees
• bandwidth guarantees
• security
Transport Layer Functionalities

1. Isolating the application layer from the technology,
design, and imperfections of the network (hiding
transmission details from applications)
2. Segmentation of large messages into segments and
reassembly of segments into messages.
3. Addressing
4. Multiplexing / Demultiplexing
5. Reliable data transfer (provide a reliable service on
top of an unreliable network)
2. Segmentation

[Figure: a message from the app layer is divided by the transport layer into
segments, each consisting of a transport header (TH) plus payload, and
passed to the network layer]

The transport layer receives the message from the application
process, divides it into smaller units, then encapsulates
each unit in a segment (also called a packet) by adding header
information.
3. Addressing

For two application processes to communicate, they need to
uniquely identify each other using a socket address:
• an IP address, which uniquely distinguishes a computer from
all other computers (responsibility of the network layer)
• a port number (a local number) to uniquely identify a
process in the computer (i.e., uniquely identify a process at
the transport layer)

A segment at the transport layer must carry
• the destination port, to specify the remote process
• as well as the source port, to specify the sender process
Addressing (example)
4. Multiplexing / Demultiplexing

Multiplexing (many-to-one relationship) – at the sender, the transport-layer
protocol accepts messages from several processes, identified by their port
numbers, then passes the packets (segments) to the network layer after adding
the header. The job of gathering data chunks at the source host from
different sockets, encapsulating each data chunk with header information
(that will later be used in demultiplexing) to create segments, and passing
the segments to the network layer is called multiplexing.

Demultiplexing (one-to-many relationship) – at the receiver, the transport
layer delivers each message to the appropriate process (socket) based on the
port number contained in the header. The job of delivering the data in a
transport-layer segment to the correct socket is called demultiplexing.
Multiplexing/Demultiplexing

multiplexing at sender: handle data from multiple sockets, add transport
header (later used for demultiplexing)
demultiplexing at receiver: use header info to deliver received segments to
correct socket

[Figure: three hosts' protocol stacks (application, transport, network,
link, physical); processes P1, P2 on one host and P3, P4 on the others, each
attached to the transport layer through a socket]
A. Connectionless Demux

 A UDP socket is fully identified by a destination IP
address and a destination port number.

 If two UDP segments have different source IP addresses
and/or source port numbers, but have the same destination
IP address and destination port number,
then the two segments will be directed to the same
destination process via the same destination socket.
How Demultiplexing works

 host receives IP datagrams
• each datagram has source IP address, destination IP address
• each datagram carries one transport-layer segment
• each segment has source, destination port numbers
 host uses IP addresses & port numbers to direct
segment to appropriate socket.
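The demultiplexing rule above can be observed directly with Python's socket API. This is an illustrative sketch: the ports (9157, 5775) and messages are arbitrary examples, and the "processes" are modeled as two sockets in one script.

```python
import socket

# Two receiving "processes", modeled as two UDP sockets bound to
# different ports on the same host (port numbers are arbitrary examples).
sock_a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_a.bind(("127.0.0.1", 9157))
sock_b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock_b.bind(("127.0.0.1", 5775))
sock_a.settimeout(5)
sock_b.settimeout(5)

# A single sender socket; the OS demultiplexes each datagram to the
# socket bound to the destination port carried in the UDP header.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"for A", ("127.0.0.1", 9157))
sender.sendto(b"for B", ("127.0.0.1", 5775))

msg_a, _ = sock_a.recvfrom(1024)
msg_b, _ = sock_b.recvfrom(1024)
# Each socket sees only the datagram addressed to its own port.
print(msg_a, msg_b)

for s in (sock_a, sock_b, sender):
    s.close()
```

Note that the sender's source port plays no role in the delivery decision here, which is exactly the connectionless-demux property described on the previous slide.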
Connectionless Demux: example

Three hosts create UDP sockets:

    DatagramSocket mySocket2 = new DatagramSocket(9157);
    DatagramSocket serverSocket = new DatagramSocket(6428);
    DatagramSocket mySocket1 = new DatagramSocket(5775);

[Figure: the host running mySocket2 sends a segment with source port 9157,
dest port 6428; the server replies with source port 6428, dest port 9157.
The segments exchanged between the server and the host running mySocket1
are labeled "source port: ?, dest port: ?" as an exercise.]
B. Connection-oriented Demux

 TCP socket identified by a 4-tuple:
• source IP address
• source port number
• dest IP address
• dest port number
 demux: receiver uses all four values to direct segment
to appropriate socket
In contrast with UDP, two arriving TCP segments
with different source IP addresses or source port
numbers will be directed to two different sockets
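This UDP/TCP contrast can be demonstrated with a short Python sketch (illustrative only; port 0 asks the OS for a free port, and the client addresses are whatever ephemeral ports the OS assigns):

```python
import socket

# A listening TCP socket on one (IP, port); each accepted connection gets
# its OWN socket, distinguished by the client's (source IP, source port).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(2)
server_addr = listener.getsockname()

# Two clients connect to the SAME destination (IP, port).
c1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c1.connect(server_addr)
c2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c2.connect(server_addr)

s1, peer1 = listener.accept()
s2, peer2 = listener.accept()
# Same destination, but different source ports -> two different
# server-side sockets, unlike the UDP case.
print(peer1, peer2)

for s in (s1, s2, c1, c2, listener):
    s.close()
```

The two accepted sockets differ only in the source half of the 4-tuple, which is enough for the receiver to direct each arriving segment to the right socket.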
B. Connection-oriented Demux: example

[Figure: a server at IP address B runs three processes/sockets behind port
80. Host A (IP address A) sends a segment with source IP,port = A,9157 and
dest IP,port = B,80. Host C (IP address C) sends two segments, with source
IP,port = C,5775 and source IP,port = C,9157, both with dest IP,port = B,80.]

three segments, all destined to IP address B, dest port 80,
are demultiplexed to different sockets
5. Reliable Data Transfer

Very important network topic (for the transport, application, and
link layers).

The characteristics of the unreliable channel determine the
complexity of the reliable data transfer protocol.

How do we provide reliable transmission over a non-reliable
channel or a non-reliable network layer?

Reliable channel: no transferred data bits are corrupted or
lost, and all are delivered in the order in which they were sent.
UDP: User Datagram Protocol
[RFC 768]
Internet transport protocol
“best effort” service; UDP segments may be:
• lost
• delivered out-of-order to app
connectionless:
• no handshaking between UDP sender, receiver
• each UDP segment handled independently of others
UDP use:
 DNS is an application-layer protocol that uses UDP, avoiding TCP’s
connection establishment delays.
reliable transfer over UDP:
 add reliability at application layer
 application-specific error recovery!
Why UDP?
No connection establishment (which can add delay)

Simple: no connection state at sender, receiver (no buffers, no
congestion-control parameters, no sequence and acknowledgment
number parameters)

Small header size: TCP 20 bytes, UDP only 8 bytes.

No congestion control: UDP will package the data inside a UDP
segment and immediately pass the segment to the network layer. TCP
throttles the transport-layer TCP sender when one or more links
between the source and destination hosts become excessively
congested.
UDP: segment header

[Figure: UDP segment format — four 16-bit header fields in two 32-bit rows:
source port #, dest port #, length, checksum — followed by the application
data (payload). The length field gives the length, in bytes, of the UDP
segment, including the header.]
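The 8-byte header can be built and parsed with Python's `struct` module as a quick sketch; the port numbers and 5-byte payload length here are made-up example values, not anything mandated by the protocol.

```python
import struct

# UDP header: four unsigned 16-bit fields in network byte order ("!HHHH"):
# source port, dest port, length (header + payload), checksum.
payload_len = 5
header = struct.pack("!HHHH", 9157, 6428, 8 + payload_len, 0)

src, dst, length, cksum = struct.unpack("!HHHH", header)
print(src, dst, length)  # 9157 6428 13
```

A checksum of 0 is shown only as a placeholder; computing the real value is covered on the next slides.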
UDP checksum
Goal: detect “errors” (e.g., flipped bits) in the transmitted segment

sender:
• treat segment contents, including header fields, as a sequence of
16-bit integers
• checksum: addition (one’s complement sum) of segment contents
• sender puts checksum value into UDP checksum field

receiver:
• compute checksum of received segment
• check if computed checksum equals checksum field value:
NO - error detected
YES - no error detected. But maybe errors nonetheless?
Internet checksum: example

example: add two 16-bit integers

                 1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
                 1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
raw sum        1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1   (carry out of the top bit)
wraparound sum   1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0   (add the carry back in)
checksum         0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1   (one’s complement of sum)
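The slide's arithmetic can be reproduced in a few lines of Python; this is a sketch of the one's-complement addition step only, not a full Internet-checksum implementation over a whole segment.

```python
def ones_complement_add16(a: int, b: int) -> int:
    """One's-complement 16-bit addition: wrap the carry back into the sum."""
    s = a + b
    return (s & 0xFFFF) + (s >> 16)

a = 0b1110011001100110   # first 16-bit integer from the slide
b = 0b1101010101010101   # second 16-bit integer from the slide
s = ones_complement_add16(a, b)
checksum = ~s & 0xFFFF   # invert all 16 bits of the wraparound sum

print(format(s, "016b"))         # 1011101110111100
print(format(checksum, "016b"))  # 0100010001000011
```

Because the checksum is the complement of the sum, the receiver can re-add everything including the checksum field and expect all ones if no error was detected.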
Uses of UDP?
 Used by a process that requires simple request-response
communication with little concern for flow and error control.
Not used for processes that send bulk data, such as FTP.
 Used by a process with internal flow- and error-control
mechanisms, e.g., Trivial File Transfer Protocol (TFTP).
 Multicasting: multicasting capability is embedded in the UDP
software but not in the TCP software.
 Management processes, e.g., SNMP. Network management
applications must often run when the network is in a stressed
state, when reliable, congestion-controlled data transfer is
difficult to achieve.
 Routing table update protocols, e.g., RIP.
TCP: Overview
point-to-point:
• one sender, one receiver
• multicasting (the transfer of data from one sender to many receivers
in a single send operation) is not possible with TCP
reliable, in-order byte stream
full duplex data:
• bi-directional data flow in same connection
connection-oriented:
• handshaking (exchange of control msgs) initiates sender, receiver
state before data exchange
congestion controlled:
• throttle sender when network overloaded
flow controlled:
• sender will not overwhelm receiver
Outline
TCP Connection Management
TCP Error Control
TCP Flow & Congestion Control
TCP Segment Structure & Security
TCP Connection Management
The TCP sender and receiver establish a “connection”
with each other before exchanging data segments.
As part of TCP connection establishment, both sides
will initialize many TCP state variables, for example:
 sequence numbers
 flow control info
TCP connection consists of buffers, variables, and a
socket connection to a process in one host, and another
set of buffers, variables, and a socket connection to a
process in another host.
TCP Connection Management
Process running in (client) wants to initiate a connection with
process in (server).
1. The client application process informs the client TCP that it
wants to establish a connection to a process in the server.
2. The TCP in the client proceeds to establish a TCP connection
with the TCP in the server
3. The client and server will allocate the TCP buffers and variables
to the connection
4. Either of the two processes participating in a TCP connection
can end the connection.
5. When a connection ends, the “resources” (buffers and variables)
in the hosts are deallocated.
TCP Connection Management

Three-way handshake:

Step 1: client host initiates by sending a TCP SYN segment to the server
 specifies the client's initial seq #
 no data

Step 2: server host receives SYN, replies with a SYNACK segment
 server allocates buffers
 specifies the server's initial seq #

Step 3: client receives SYNACK, replies with an ACK segment, which may
contain data

[Figure: client sends SYN (specifying its seq #); server allocates its
buffer and replies SYN+ACK (specifying its own seq #); client replies ACK]
TCP Connection Management

Closing a connection (example: client closes socket):

Step 1: client end system sends a TCP FIN control segment to the server
Step 2: server receives FIN, replies with ACK; closes connection, sends FIN
Step 3: client receives FIN, replies with ACK; enters “timed wait”
Step 4: server receives ACK; connection closed

[Figure: client sends FIN; server ACKs, then sends its own FIN while
closing; client ACKs and remains in timed wait before fully closing]
Outline
TCP Connection Management
TCP Error Control
TCP Flow & Congestion Control
TCP Segment Structure & Security
TCP Reliable Data Transfer

 Aim: no bit will be received corrupted and no packet will be lost

 Approach:
1. Reactive: resend lost or corrupted packets (error control)
2. Proactive: prevent packet loss at the receiver buffer (flow
control) and at the router buffers (congestion control)
Error Control
Why do we need error control?
 The underlying channel may flip bits in packets (bit errors).
 The network layer does not provide a guaranteed service.
 Packets may be lost if routed to a wrong address or
dropped because of congestion.
Q. How can we provide reliable transmission between end users?
Error Control Solution
 Procedure
o A method for detecting bit errors
o A way of detecting lost packets
o Feedback from the receiver
o Retransmission of lost or corrupted packets

 A mechanism based on retransmission upon
detecting an error is called ARQ: Automatic Repeat
reQuest
ARQ: Automatic Repeat reQuest
Bit-error detection: e.g., checksum
Packet-loss detection: sequence numbers
Receiver feedback: acknowledgement (ACK) or negative ack. (NAK)
Retransmission:
1. stop-and-wait
2. pipelined protocols using a sliding window
1. Stop-and-Wait ARQ
The transmitter sends a frame, then stops and waits for an
acknowledgment.

Stop-and-Wait ARQ has the following features:
 The sending device keeps a copy of the transmitted frame
until it receives an acknowledgment (ACK)
 The sender starts a timer when it sends a frame. If an
ACK is not received within the allocated time period, the
sender resends the frame
 Both frames and acknowledgments (ACKs) are numbered
alternately 0 and 1 (two sequence numbers only)
1. Stop-and-Wait ARQ
 The sender will not send the next frame
until it is sure that the current one has been
correctly received

 A sequence number is necessary to check for
duplicated frames
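The rules above (keep a copy, retransmit on timeout, alternate seq numbers 0/1, discard duplicates) can be sketched as a small simulation. This is an illustrative model, not real network code: loss is simulated with a random draw, and a lost frame or ACK simply repeats the loop, standing in for a timeout.

```python
import random

def stop_and_wait_send(frames, loss_prob=0.3, seed=42):
    """Simulated Stop-and-Wait ARQ over a lossy channel (sketch)."""
    rng = random.Random(seed)
    delivered = []          # what the receiver hands to its app layer
    seq = 0
    for frame in frames:
        while True:         # sender keeps a copy until the frame is ACKed
            frame_lost = rng.random() < loss_prob
            if not frame_lost:
                # Receiver: accept only if seq differs from last accepted
                # frame, i.e. discard duplicates of an already-ACKed frame.
                if not delivered or delivered[-1][0] != seq:
                    delivered.append((seq, frame))
                ack_lost = rng.random() < loss_prob
                if not ack_lost:
                    break   # ACK arrived: move on to the next frame
            # else: "timeout" fires; resend the same frame, same seq number
        seq ^= 1            # alternate sequence numbers 0 and 1
    return [f for _, f in delivered]

print(stop_and_wait_send(["f1", "f2", "f3"]))  # ['f1', 'f2', 'f3']
```

However lossy the channel, every frame is eventually delivered exactly once and in order, which is the whole point of the two-sequence-number scheme.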
2. Pipelining
 Pipelining: a task is begun before the previous task has ended

 There is no pipelining in Stop-and-Wait ARQ
because we must wait for a frame to reach the
destination and be acknowledged before the next
frame can be sent

 Pipelining improves the efficiency of the transmission
TCP reliable data transfer

TCP’s reliable data transfer service ensures that the data
stream a process reads out of its TCP receive buffer is
uncorrupted, without gaps, without duplication, and in
sequence; that is, the byte stream is exactly the same byte
stream that was sent by the end system on the other side of
the connection.
TCP reliable data transfer
TCP creates a reliable data transfer service on top of IP’s
unreliable service:
• pipelined segments
• cumulative acknowledgments: an ACK of n acknowledges
the receipt of all bytes before byte number n
• single retransmission timer

Retransmissions triggered by:
• timeout events
• duplicate ACKs
TCP reliable data transfer
Simplified description of a TCP sender (three major events):
1. data received from the application above: if the timer is not
already running for some other segment, TCP starts the timer when
the segment is passed to IP
2. timer timeout
3. ACK receipt

Sequence number:
byte-stream “number” of the first byte in the segment’s data
Acknowledgement:
seq # of the next byte expected from the other side
TCP sender events

data rcvd from app:
• create segment with seq #
• seq # is byte-stream number of first data byte in segment
• start timer if not already running (think of timer as for
oldest unacked segment)
• expiration interval: TimeOutInterval

timeout:
• retransmit segment that caused timeout
• restart timer

ack rcvd:
• if ack acknowledges previously unacked segments
• update what is known to be ACKed
• restart timer if there are still unacked segments
TCP: retransmission scenarios

lost ACK scenario: Host A sends Seq=92, 8 bytes of data; Host B's ACK=100
is lost; A times out and retransmits Seq=92, 8 bytes of data; B replies
again with ACK=100.

premature timeout: A sends Seq=92 (8 bytes) and Seq=100 (20 bytes); B
replies ACK=100 and ACK=120, but A's timer for Seq=92 expires before the
ACKs arrive, so A retransmits Seq=92, 8 bytes of data; B replies with
ACK=120.
TCP: retransmission scenarios

cumulative ACK: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes);
ACK=100 is lost, but ACK=120 arrives before the timeout. Because ACK=120
cumulatively acknowledges everything up to byte 120, A does not retransmit
and continues with Seq=120, 15 bytes of data.
Fast Retransmit
One of the problems with timeout-triggered retransmissions is
that the timeout period can be relatively long.
When a segment is lost, this long timeout period forces the
sender to delay resending the lost packet, thereby increasing the
end-to-end delay.
The sender can often detect packet loss well before the timeout
event occurs by noting so-called duplicate ACKs.
A duplicate ACK is an ACK that reacknowledges a segment for
which the sender has already received an earlier acknowledgment.
To understand the sender’s response to a duplicate ACK, we must
look at why the receiver sends a duplicate ACK in the first place
(next slide)
TCP receiver’s ACK generation policy

Event at receiver: arrival of in-order segment with expected seq #; all
data up to expected seq # already acknowledged.
TCP receiver action: delayed ACK. Wait up to 500 ms for next segment; if no
next segment, send ACK.

Event: arrival of in-order segment with expected seq #; one other in-order
segment waiting for ACK transmission.
Action: immediately send single cumulative ACK, ACKing both in-order
segments.

Event: arrival of out-of-order segment with higher-than-expected seq #;
gap detected.
Action: immediately send duplicate ACK, indicating seq # of next expected
byte (which is the lower end of the gap).

Event: arrival of segment that partially or completely fills gap in
received data.
Action: immediately send ACK, provided that the segment starts at the lower
end of the gap.
TCP Fast Retransmit
 The timeout period is often relatively long:
 long delay before resending a lost packet
 Detect lost segments via duplicate ACKs (before that segment’s
timer expires).
 The sender often sends many segments back-to-back:
 if a segment is lost, there will likely be many duplicate ACKs.

TCP fast retransmit:
If the sender receives 3 ACKs for the same data in addition to the original
ACK (“triple duplicate ACKs”), resend the unacknowledged segment with the
smallest seq #
 It is likely that the unacknowledged segment is lost, so don’t wait
for the timeout.
 Retransmit the missing segment before that segment’s timer expires.
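The triple-duplicate-ACK trigger can be sketched as a simple counter over the stream of arriving ACK numbers. This is an illustrative model of the detection rule only; a real sender would also manage timers, windows, and the retransmission itself.

```python
def fast_retransmit_check(acks, dup_threshold=3):
    """Scan cumulative ACK numbers; report each seq # whose ACK arrives
    dup_threshold extra times (original + 3 duplicates), i.e. the seq #
    that fast retransmit would resend. Sketch only."""
    counts = {}
    retransmits = []
    for ack in acks:
        counts[ack] = counts.get(ack, 0) + 1
        if counts[ack] == dup_threshold + 1:   # original ACK + 3 duplicates
            retransmits.append(ack)            # resend segment starting here
    return retransmits

# ACK=100 arrives once, then three duplicates -> retransmit segment at 100
print(fast_retransmit_check([100, 100, 100, 100]))  # [100]
# Only two duplicates -> no fast retransmit yet
print(fast_retransmit_check([100, 100, 100]))       # []
```

The ACK number itself names the retransmission target because a duplicate ACK carries the seq # of the next byte the receiver expects, i.e. the lower end of the gap.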
TCP Fast Retransmit

[Figure: Host A sends Seq=92 (8 bytes), Seq=100 (20 bytes), Seq=120 (15
bytes), Seq=135 (6 bytes), and Seq=141 (16 bytes). The Seq=100 segment is
lost, so Host B returns ACK=100 four times. On receipt of the triple
duplicate ACK, and before that segment's timer expires, A performs a fast
retransmit of Seq=100, 20 bytes of data.]

Example
A TCP connection has been established between two TCP
parties. The sender will send 350 bytes of data to the
receiver starting from the Sequence Number (SN) 51. Each
segment consists of 50 bytes. Given the data transfer phase
below, continue the diagram by applying the fast
retransmission technique to resend the lost messages.
Illustrate the buffers at the receiver site. Each cell in the
receiver buffer fits 50 bytes. Assume that the received data
is not read (sent to the application layer) until the end of the
transmission.
[Diagram: sender–receiver exchange over time]

Sender: SN=50  ->
Sender: SN=80  ->
                  <- ACK=110
Sender: SN=110 -> (LOST)
Sender: SN=140 ->
Sender: SN=170 ->
                  <- ACK=110 (duplicate)
Sender: SN=110 -> (retransmitted)
                  <- ACK=200
Sender: SN=200 ->
                  <- ACK=230
Sender: SN=230 ->
                  <- ACK=260
[Solution diagram: sender–receiver exchange over time]

1. SN=51-100  ->
2. SN=101-150 ->
                  <- ACK=151
3. SN=151-200 -> (LOST)
4. SN=201-250 -> (LOST)
5. SN=251-300 ->
                  <- ACK=151 (duplicate, timer running)
6. SN=301-350 ->
                  <- ACK=151 (duplicate)
7. SN=351-400 ->
                  <- ACK=151 (duplicate)
   SN=151-200 -> (fast retransmit after triple duplicate ACK)
                  <- ACK=201
   SN=201-250 -> (retransmitted)
                  <- ACK=401

Outline
TCP Connection Management
TCP Error Control
TCP Flow & Congestion Control
TCP Segment Structure & Security
TCP Flow Control
 To prevent packet loss at the receiver buffer
 The sender won’t overflow the receiver’s buffer by
transmitting too much, too fast
 A speed-matching service: matching the rate at
which the sender is sending against the rate at which
the receiving application is reading
 Approach: control the number of sent packets
(sender window) based upon the receiver buffer.
TCP Flow Control: How It Works?
TCP provides flow control by having the sender maintain a
variable called the receive window (rwnd).
The receive window is used to give the sender an idea of
how much free buffer space is available at the receiver.
Suppose that Host A is sending a large file to Host B over a
TCP connection. Host B allocates a receive buffer to this
connection; denote its size by RcvBuffer. From time to
time, the application process in Host B reads from the
buffer.
Host B tells Host A how much spare room it has in the
connection buffer by placing its current value of rwnd in
the receive window field of every segment it sends to A.
TCP Flow Control: How It Works?

[Figure: receiver-side buffering — data from IP fills the RcvBuffer; the
application process reads TCP data out of the buffer; rwnd is the spare
room.]

• Initially, Host B sets rwnd = RcvBuffer.
 The sender limits unACKed data to rwnd; this
guarantees the receiver buffer doesn’t overflow.
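The spare room the receiver advertises follows directly from the RcvBuffer description above: rwnd = RcvBuffer − (bytes received but not yet read). A one-line sketch (the LastByteRcvd/LastByteRead names follow the usual textbook notation, not any real kernel state):

```python
def receive_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room in the receive buffer, advertised to the sender as rwnd.
    rwnd = RcvBuffer - (LastByteRcvd - LastByteRead). Sketch only."""
    return rcv_buffer - (last_byte_rcvd - last_byte_read)

# Example: 4096-byte buffer, 3000 bytes received so far, application has
# read 1000 of them -> 2096 bytes of spare room advertised as rwnd.
print(receive_window(4096, 3000, 1000))  # 2096
```

As the application reads, last_byte_read catches up and rwnd grows again, which is exactly the speed-matching behavior described above.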
Congestion Control
Informally: “too many sources sending too much data too
fast for the network to handle”

Consequences:
 lost packets (buffer overflow at routers)
 long delays (queuing in router buffers)
Different from flow control!
Approaches towards congestion control
Two broad approaches towards congestion control:

end-end congestion control:
• no explicit feedback from the network
• congestion inferred by the end system based on observed loss, delay
• approach taken by TCP

network-assisted congestion control:
• routers provide feedback to end systems
• a single bit indicating congestion, or
• an explicit transmission rate for the sender to send at
TCP Congestion Control
End-to-end congestion control:
 Basic idea: ask the sender to slow down (or stop altogether) when
there is congestion
Procedure
 The sender notes the receiver's advertised window (rwnd)
 AND a second window is defined, the congestion window (cwnd)
 The sender can send up to the lowest of the two:
actual window size = minimum (rwnd, cwnd)
 The sender alters the congestion window according to the way the
network is currently performing
 The congestion window will keep increasing until segments
time out, and then it starts decreasing
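The min(rwnd, cwnd) rule can be sketched as follows; the in-flight accounting via LastByteSent/LastByteAcked is the usual textbook bookkeeping, added here for illustration.

```python
def usable_window(rwnd, cwnd, last_byte_sent, last_byte_acked):
    """The sender may have at most min(rwnd, cwnd) unacknowledged bytes;
    the usable window is what's left of that allowance. Sketch only."""
    in_flight = last_byte_sent - last_byte_acked
    return min(rwnd, cwnd) - in_flight

# Receiver advertises 8000 bytes, but the congestion window is only 4000;
# 2000 bytes are already in flight -> the sender may send 2000 more.
print(usable_window(rwnd=8000, cwnd=4000,
                    last_byte_sent=3000, last_byte_acked=1000))  # 2000
```

Taking the minimum means whichever constraint is tighter at the moment, receiver buffer space or inferred network capacity, governs the sending rate.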
TCP Slow Start Mechanism
 When a TCP connection begins, the value of cwnd is typically
initialized to a small value, then the rate increases
exponentially until the first loss event:
 Initially cwnd = 1 MSS (maximum segment size)
 Double cwnd every RTT (round-trip time)
 Done by incrementing cwnd for every ACK received
 Summary: initial rate is slow but ramps up exponentially fast

[Figure: Host A sends one segment (cwnd = 1); one RTT later, two segments
(cwnd = 2); then four segments (cwnd = 4); then eight (cwnd = 8).]
TCP Slow Start Mechanism
Q: when should this exponential growth end?
One solution provided by Slow start:
• If there is a loss event (i.e., congestion)
indicated by a timeout, the TCP sender sets the
value of cwnd to 1 and begins the slow start
process again.
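The doubling-then-reset behavior of the two slides above can be simulated in a few lines. This sketch models only the basic rule stated here (double per RTT, reset to 1 MSS on a timeout loss); real TCP slow start also involves a threshold (ssthresh) not covered on these slides.

```python
def slow_start(rtts_until_loss, mss=1):
    """cwnd (in MSS units) doubles each RTT until a timeout loss event,
    then resets to 1 MSS and slow start begins again. Sketch only."""
    cwnd = mss
    history = []
    for _ in range(rtts_until_loss):
        history.append(cwnd)
        cwnd *= 2          # one extra MSS per ACK -> doubling per RTT
    cwnd = mss             # loss event (timeout): back to 1 MSS
    history.append(cwnd)
    return history

# Four RTTs of exponential growth, then a loss resets cwnd to 1 MSS.
print(slow_start(4))  # [1, 2, 4, 8, 1]
```

The exponential ramp explains the name: the rate starts slow, but reaches the available capacity within a handful of round trips.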
Outline
TCP Connection Management
TCP Error Control
TCP Flow & Congestion Control
TCP Segment Structure & Security
TCP Segment Structure

[Figure: TCP segment format — source and dest port numbers, 32-bit sequence
number, 32-bit acknowledgment number, header length, flags, receive window,
checksum, urgent data pointer, options, and application data]
The 32-bit sequence number field and the 32-bit
acknowledgment number field are used by the TCP sender
and receiver in implementing a reliable data transfer service.

The 16-bit receive window field is used for flow control. It
indicates the number of bytes that a receiver is willing to
accept.

The 4-bit header length field specifies the length of the
TCP header in 32-bit words. The TCP header can be of
variable length due to the TCP options field.
TCP Segment Structure
The flag field contains 6 bits:
 The ACK bit is used to indicate that the value carried in the
acknowledgment field is valid; that is, the segment contains an
acknowledgment for a segment that has been successfully received.
The RST, SYN, and FIN bits are used for connection setup and
teardown.
Setting the PSH bit indicates that the receiver should pass the data
to the upper layer immediately.
The URG bit is used to indicate that there is data in this segment
that the sending-side upper-layer entity has marked as “urgent.”
In practice, the PSH, URG, and the urgent data pointer are not used.
Securing TCP: Secure Sockets Layer (SSL)

TCP & UDP
 no encryption
 clear-text passwords sent into a socket traverse the Internet in
clear text

SSL
 provides an encrypted TCP connection
 data integrity
 end-point authentication

SSL is at the application layer:
 applications use SSL libraries, which “talk” to TCP

When an application uses SSL:
• The sending process passes clear-text data to the SSL socket.
• SSL in the sending host then encrypts the data and passes the
encrypted data to the TCP socket.
• The encrypted data travels over the Internet to the TCP socket in
the receiving process.
• The receiving socket passes the encrypted data to SSL, which
decrypts the data.
• Finally, SSL passes the clear-text data through its SSL socket to
the receiving process.
Chapter 3: summary
 principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
 instantiation, implementation in the Internet
• UDP
• TCP
next:
leaving the network “edge” (application, transport layers)
into the network “core”
two network layer chapters:
data plane
control plane
