
Application Layer: Overview

▪ Principles of network applications
▪ Web and HTTP
▪ E-mail, SMTP, IMAP
▪ The Domain Name System: DNS
▪ P2P applications
▪ video streaming and content distribution networks
▪ socket programming with UDP and TCP

Application Layer: 2-1


DNS: Domain Name System

people: many identifiers:
• SSN, name, passport #

Internet hosts, routers:
• IP address (32 bit) - used for addressing datagrams
• "name", e.g., cs.umass.edu - used by humans

Q: how to map between IP address and name, and vice versa?

Domain Name System:
▪ distributed database implemented in hierarchy of many name servers
▪ application-layer protocol: hosts, name servers communicate to resolve names (address/name translation)
• note: core Internet function, implemented as application-layer protocol
• complexity at network's "edge"

Application Layer: 2-2


DNS: services, structure

DNS services:
▪ hostname-to-IP-address translation
▪ host aliasing
• canonical, alias names
▪ mail server aliasing
▪ load distribution
• replicated Web servers: many IP addresses correspond to one name

Q: Why not centralize DNS?
▪ single point of failure
▪ traffic volume
▪ distant centralized database
▪ maintenance
A: doesn't scale!
▪ Comcast DNS servers alone: 600B DNS queries per day

Application Layer: 2-3


DNS: a distributed, hierarchical database

Root:             Root DNS servers …
Top-Level Domain: .com DNS servers, .org DNS servers, .edu DNS servers, …
Authoritative:    yahoo.com, amazon.com, pbs.org, nyu.edu, umass.edu DNS servers, …

Client wants IP address for www.amazon.com; 1st approximation:
▪ client queries root server to find .com DNS server
▪ client queries .com DNS server to get amazon.com DNS server
▪ client queries amazon.com DNS server to get IP address for www.amazon.com

Application Layer: 2-4
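The three-step "1st approximation" above can be sketched as a toy lookup over a hard-coded hierarchy. The zone contents below (server names, IP) are invented for illustration; a real resolver would send DNS queries over UDP rather than read dictionaries:

```python
# Toy model of the root -> TLD -> authoritative resolution chain.
# All zone data here is invented for illustration.
ROOT = {"com": "a.gtld-servers.example"}            # root knows the .com TLD server
TLD = {"amazon.com": "dns.amazon.example"}          # .com knows the authoritative server
AUTHORITATIVE = {"www.amazon.com": "192.0.2.44"}    # authoritative knows host addresses

def resolve(hostname):
    """1st approximation of resolution: ask root, then TLD, then authoritative."""
    tld = hostname.rsplit(".", 1)[-1]                 # e.g. "com"
    domain = ".".join(hostname.split(".")[-2:])       # e.g. "amazon.com"
    tld_server = ROOT[tld]                            # step 1: who serves .com?
    auth_server = TLD[domain]                         # step 2: who is authoritative?
    ip = AUTHORITATIVE[hostname]                      # step 3: what is the IP?
    return tld_server, auth_server, ip

print(resolve("www.amazon.com"))
```

Each step corresponds to one query in the figure; caching (next slides) exists precisely to avoid repeating the first two steps for every lookup.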
DNS: root name servers

▪ official, contact-of-last-resort by name servers that can not resolve name
▪ 13 logical root name "servers" worldwide; each "server" replicated many times (~200 servers in US)
▪ incredibly important Internet function
• Internet couldn't function without it!
• DNSSEC - provides security (authentication and message integrity)
▪ ICANN (Internet Corporation for Assigned Names and Numbers) manages root DNS domain

Application Layer: 2-5
TLD, authoritative servers

Top-Level Domain (TLD) servers:
▪ responsible for .com, .org, .net, .edu, .aero, .jobs, .museum, and all top-level country domains, e.g.: .cn, .uk, .fr, .ca, .jp
▪ Network Solutions: authoritative registry for .com, .net TLD
▪ Educause: .edu TLD

Authoritative DNS servers:
▪ organization's own DNS server(s), providing authoritative hostname-to-IP mappings for organization's named hosts
▪ can be maintained by organization or service provider

Application Layer: 2-6


Local DNS name servers
▪ does not strictly belong to hierarchy
▪ each ISP (residential ISP, company, university) has one
• also called “default name server”
▪ when host makes DNS query, query is sent to its local DNS
server
• has local cache of recent name-to-address translation pairs (but may
be out of date!)
• acts as proxy, forwards query into hierarchy

Application Layer: 2-7


DNS name resolution: iterated query

Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Iterated query:
▪ contacted server replies with name of server to contact
▪ "I don't know this name, but ask this server"

(figure: requesting host at engineering.nyu.edu asks its local DNS server dns.nyu.edu (1); the local server queries the root DNS server (2,3), a TLD DNS server (4,5), and the authoritative DNS server dns.cs.umass.edu (6,7), then returns the answer to the host (8))

Application Layer: 2-8


DNS name resolution: recursive query

Example: host at engineering.nyu.edu wants IP address for gaia.cs.umass.edu

Recursive query:
▪ puts burden of name resolution on contacted name server
▪ heavy load at upper levels of hierarchy?

(figure: requesting host at engineering.nyu.edu asks its local DNS server dns.nyu.edu (1); the query is forwarded recursively: local server → root DNS server (2), root → TLD DNS server (3), TLD → authoritative DNS server dns.cs.umass.edu (4); the answer returns along the same path (5,6,7,8))

Application Layer: 2-9


Caching, Updating DNS Records

▪ once (any) name server learns mapping, it caches mapping
• cache entries timeout (disappear) after some time (TTL)
• TLD servers typically cached in local name servers
• thus root name servers not often visited
▪ cached entries may be out-of-date (best-effort name-to-address translation!)
• if a host changes IP address, the change may not be known Internet-wide until all TTLs expire!
▪ update/notify mechanisms proposed IETF standard
• RFC 2136

Application Layer: 2-10
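TTL-based expiry as described above can be sketched in a few lines. The hostname, address, and TTL below are illustrative, and a manually advanced fake clock is used so the expiry is deterministic:

```python
# Minimal DNS-style cache: entries "disappear" once their TTL has elapsed.
class DnsCache:
    def __init__(self, clock):
        self.clock = clock          # callable returning current time in seconds
        self.entries = {}           # name -> (value, expiry_time)

    def put(self, name, value, ttl):
        self.entries[name] = (value, self.clock() + ttl)

    def get(self, name):
        item = self.entries.get(name)
        if item is None:
            return None
        value, expiry = item
        if self.clock() >= expiry:  # TTL expired: drop the stale entry
            del self.entries[name]
            return None
        return value

now = [0]                                    # fake clock, advanced by hand
cache = DnsCache(lambda: now[0])
cache.put("gaia.cs.umass.edu", "128.119.245.12", ttl=60)
print(cache.get("gaia.cs.umass.edu"))        # fresh: cached answer returned
now[0] += 61                                 # 61 "seconds" later...
print(cache.get("gaia.cs.umass.edu"))        # TTL expired: entry gone
```

Note the best-effort nature: between the host changing its address and the TTL expiring, the cache happily returns the stale mapping.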
DNS records

DNS: distributed database storing resource records (RR)

RR format: (name, value, type, ttl)

type=A
▪ name is hostname
▪ value is IP address

type=NS
▪ name is domain (e.g., foo.com)
▪ value is hostname of authoritative name server for this domain

type=CNAME
▪ name is alias name for some "canonical" (the real) name
▪ www.ibm.com is really servereast.backup2.ibm.com
▪ value is canonical name

type=MX
▪ value is name of mailserver associated with name

Application Layer: 2-11
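The RR types above can be exercised with a tiny in-memory table of (name, value, type, ttl) tuples. The records are modeled on the slide's www.ibm.com example, but the IP address and the TTLs are invented; resolving an A record follows any CNAME alias first:

```python
# Resource records as (name, value, type, ttl) tuples, as in the slide.
RRS = [
    ("www.ibm.com", "servereast.backup2.ibm.com", "CNAME", 3600),
    ("servereast.backup2.ibm.com", "192.0.2.7", "A", 3600),   # invented IP
    ("foo.com", "dns.foo.com", "NS", 86400),
]

def lookup(name, rtype):
    """Return the value of the first RR matching (name, type), or None."""
    for rr_name, value, rr_type, _ttl in RRS:
        if rr_name == name and rr_type == rtype:
            return value
    return None

def resolve_a(name):
    """Follow CNAME records (alias -> canonical name) until an A record is found."""
    while True:
        ip = lookup(name, "A")
        if ip is not None:
            return ip
        cname = lookup(name, "CNAME")
        if cname is None:
            return None
        name = cname          # retry the lookup with the canonical name

print(resolve_a("www.ibm.com"))   # follows the CNAME, then the A record
```

This is exactly the host-aliasing service from the earlier slide: the client asks for the alias, DNS supplies the canonical name and its address.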


DNS protocol messages

DNS query and reply messages, both have same format:

message layout (2-byte fields): identification | flags; # questions | # answer RRs; # authority RRs | # additional RRs; followed by variable-length sections: questions, answers, authority, additional info

message header:
▪ identification: 16 bit # for query; reply to query uses same #
▪ flags:
• query or reply
• recursion desired
• recursion available
• reply is authoritative

Application Layer: 2-12


DNS protocol messages

DNS query and reply messages, both have same format:

▪ questions: name, type fields for a query (variable # of questions)
▪ answers: RRs in response to query (variable # of RRs)
▪ authority: records for authoritative servers (variable # of RRs)
▪ additional info: additional "helpful" info that may be used (variable # of RRs)

Application Layer: 2-13
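The fixed 12-byte header described above is six 16-bit big-endian fields and can be packed with Python's struct module. This sketch sets only two flag bits (QR and RD, per the flag list above); the identification value is an arbitrary example:

```python
import struct

def make_dns_header(ident, qr, rd, qdcount):
    """Pack the 12-byte DNS header: identification, flags, and four counts.
    In the 16-bit flags field, QR (0=query, 1=reply) is bit 15 and
    RD (recursion desired) is bit 8; other flag bits are left 0 here."""
    flags = (qr << 15) | (rd << 8)
    # identification | flags | #questions | #answer RRs | #authority RRs | #additional RRs
    return struct.pack("!HHHHHH", ident, flags, qdcount, 0, 0, 0)

# A recursive query carrying one question, with (arbitrary) identification 0x1234:
header = make_dns_header(ident=0x1234, qr=0, rd=1, qdcount=1)
print(len(header), header.hex())
```

The matching reply would echo the same identification number with the QR bit set, which is how a host pairs replies with outstanding queries.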
Inserting records into DNS

Example: new startup "Network Utopia"
▪ register name networkutopia.com at DNS registrar (e.g., Network Solutions)
• provide names, IP addresses of authoritative name server (primary and secondary)
• registrar inserts NS, A RRs into .com TLD server:
(networkutopia.com, dns1.networkutopia.com, NS)
(dns1.networkutopia.com, 212.212.212.1, A)
▪ create authoritative server locally with IP address 212.212.212.1
• type A record for www.networkutopia.com
• type MX record for networkutopia.com

Application Layer: 2-14


DNS

Application Layer: 2-15


Dynamic DNS

Application Layer: 2-16


Example

Application Layer: 2-17


DNS security

DDoS attacks
▪ bombard root servers with traffic
• not successful to date
• traffic filtering
• local DNS servers cache IPs of TLD servers, allowing root server bypass
▪ bombard TLD servers
• potentially more dangerous

Redirect attacks
▪ man-in-middle
• intercept DNS queries
▪ DNS poisoning
• send bogus replies to DNS server, which caches

Exploit DNS for DDoS
▪ send queries with spoofed source address: target IP
▪ requires amplification

DNSSEC [RFC 4033]

Application Layer: 2-22


SNMP & SSH

Network Management (SNMP)
• Enables the "administrator" to monitor remote devices and analyze their data to ensure that they are operational.
• Benefits:
• Detecting failure of an interface card at a host or a router.
• Host monitoring: periodically check to see if all network hosts are up and operational.
• Monitoring traffic to aid in resource deployment (link bandwidth, etc.).
• Detecting rapid changes in routing tables. Route flapping (frequent changes in the routing tables) may indicate instabilities in the routing or a misconfigured router.
• Monitoring for SLAs. Service Level Agreements (SLAs) are contracts that define specific performance metrics and acceptable levels of network-provider performance.
• Intrusion detection. A network administrator may want to be notified when network traffic arrives from, or is destined for, a suspicious source.
Five areas of network management
• Performance management. The goal of performance management is to quantify,
measure, report, analyze, and control the performance (for example, utilization
and throughput) of different network components.
• Fault management. The goal of fault management is to log, detect, and respond
to fault conditions in the network.
• Configuration management. Configuration management allows a network
manager to track which devices are on the managed network and the hardware
and software configurations of these devices.
• Accounting management. Accounting management allows the network manager
to specify, log, and control user and device access to network resources.
• Security management. The goal of security management is to control access to
network resources according to some well-defined policy.
Network Management Architecture
• Three principal components of a network
management architecture:
• A managing entity
• The managed devices
• Network management protocol.
• The managing entity
• Application running in a centralized network management station.
• Controls the collection, processing, analysis, and/or display of network management
information.
• A managed device
• Network equipment that resides on a managed network.
• A managed device might be a host, router, bridge, hub, printer, or modem.
• Within a managed device, there may be several so-called managed objects.
• Network management agent
• Process running in the managed device that communicates with the managing entity.
• Network management protocol
• The protocol runs between the managing entity and the managed devices, allowing
the managing entity to query the status of managed devices and indirectly take
actions at these devices via its agents.
SSH - Secure Shell
• "Secure shell is a de facto standard for remote logins and encrypted file transfers." [SSH Communications Inc.]
• Created in 1995 by Tatu Ylönen, a researcher at Helsinki University of Technology, Finland
• It provides authentication and encryption for business-critical applications to work securely over the internet.
• Originally introduced for UNIX terminals as a replacement for the insecure remote-access "Berkeley services", viz. rsh, rlogin, rcp, telnet, etc.
• It can also be used for port forwarding of arbitrary TCP/IP or X11 connections
• It is layered over TCP/IP and runs on port 22.
Why SSH?
• The three core security requirements for a remote-access technology: confidentiality, integrity and authentication
• Most of the earlier technologies lack confidentiality and integrity. For example, Telnet and FTP transmit usernames and passwords in cleartext.
• They are vulnerable to attacks such as IP spoofing, DoS, MITM and eavesdropping.
• Secure shell satisfies all three requirements by using:
❖ Data encryption to provide confidentiality
❖ Host-based and/or client-based authentication
❖ Data integrity using MACs and hashes
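The "MACs and hashes" integrity mechanism can be illustrated with Python's stdlib hmac module. The key and messages below are invented stand-ins (an SSH session would derive the MAC key during key exchange); the point is that any tampering with the message makes the recomputed MAC mismatch:

```python
import hmac
import hashlib

key = b"shared-session-key"      # invented; normally negotiated per session
msg = b"ls -l /home"             # the data whose integrity we want to protect

# Sender: compute a MAC over the message and attach it.
tag = hmac.new(key, msg, hashlib.sha256).digest()

# Receiver: recompute the MAC and compare in constant time.
ok = hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest())
tampered_ok = hmac.compare_digest(
    tag, hmac.new(key, b"ls -l /etc", hashlib.sha256).digest()
)
print(ok, tampered_ok)
```

This is the cryptographic replacement for SSH-1's weak CRC-32 check mentioned later: a CRC detects random bit errors but can be forged by an attacker, whereas forging an HMAC requires the secret key.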
Flavours of SSH
• There are two incompatible versions of the SSH protocol: SSH-1 and SSH-2
• SSH-2 was introduced in order to fix several bugs in SSH-1 and also includes several new features
• Both versions have technical glitches and are vulnerable to known attacks.
• SSH-1 remained popular for a time due to licensing issues with SSH-2; today SSH-2 is the standard and SSH-1 is deprecated
• OpenSSH is a free version of SSH with open-source code development. It supports both versions and is mainly used in UNIX environments.
The SSH-1 protocol exchange
• Once a secure connection is established, the client attempts to authenticate itself to the server.
• Some of the authentication methods used are:
• Kerberos
• RHosts and RHostsRSA
• Public key
• Password-based authentication (OTP)
• Integrity checking is provided by means of a weak CRC-32
• Compression is provided using the "deflate" algorithm of the GNU gzip utility. It is beneficial for file transfer utilities such as ftp, scp, etc.
SSH-2
• SSH-1 is monolithic, encompassing multiple functions into a single protocol
whereas SSH-2 has been separated into modules and consists of 3 protocols:
• SSH Transport layer protocol (SSH-TRANS) : Provides the initial connection,
packet protocol, server authentication and basic encryption and integrity
services.
• SSH Authentication protocol (SSH-AUTH) : Verifies the client’s identity at the
beginning of an SSH-2 session, by three authentication methods: Public key,
host based and password.
• SSH Connection protocol (SSH-CONN) : It permits a number of different
services to exchange data through the secure channel provided by
SSH-TRANS.
SSH-2 vs SSH-1

SSH-2                                      SSH-1
Separate transport, authentication         One monolithic protocol
and connection protocols
Strong cryptographic integrity check       Weak CRC-32 integrity check
No server keys needed due to               Server key used for forward
Diffie-Hellman key exchange                secrecy on the session key
Supports public key certificates           N/A
Algorithms used: DSA, DH, RSA,             Algorithms used: MD5, CRC-32,
SHA-1, MD5, 3DES, Blowfish,                3DES, IDEA, ARCFOUR, DES
Twofish, CAST-128, IDEA, ARCFOUR
Periodic replacement of session keys       N/A
Is SSH really secure?
• Secure shell does counter certain types of attacks such as:
• Eavesdropping
• Name service and IP spoofing
• Connection hijacking
• MITM
• Insertion attack
• However it fails against these attacks:
• Password cracking
• IP and TCP attacks
• SYN flooding
• TCP RST, bogus ICMP
• TCP desynchronization and hijacking
• Traffic analysis
• Covert channels
Chapter 3
Transport Layer

A note on the use of these PowerPoint slides:
We're making these slides freely available to all (faculty, students, readers). They're in PowerPoint form so you see the animations; and can add, modify, and delete slides (including this one) and slide content to suit your needs. They obviously represent a lot of work on our part. In return for use, we only ask the following:
▪ If you use these slides (e.g., in a class) that you mention their source (after all, we'd like people to use our book!)
▪ If you post any slides on a www site, that you note that they are adapted from (or perhaps identical to) our slides, and note our copyright of this material.
For a revision history, see the slide note for this page.
Thanks and enjoy! JFK/KWR

Computer Networking: A Top-Down Approach, 8th edition
J.F. Kurose and K.W. Ross
Pearson, 2020
All material copyright 1996-2020 Jim Kurose, Keith Ross, All Rights Reserved

Transport Layer: 3-1
Transport layer: overview

Our goal:
▪ understand principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
▪ learn about Internet transport layer protocols:
• UDP: connectionless transport
• TCP: connection-oriented reliable transport
• TCP congestion control

Transport Layer: 3-2


Transport layer: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-3
Transport services and protocols

▪ provide logical communication between application processes running on different hosts
▪ transport protocols actions in end systems:
• sender: breaks application messages into segments, passes to network layer
• receiver: reassembles segments into messages, passes to application layer
▪ two transport protocols available to Internet applications
• TCP, UDP

(figure: logical end-end transport between the application/transport layers of two end hosts, across a mobile network, national or global ISP, local or regional ISP, home network, enterprise network, and content provider network/datacenter)

Transport Layer: 3-4
Transport vs. network layer services and protocols
household analogy:
12 kids in Ann’s house sending
letters to 12 kids in Bill’s
house:
▪ hosts = houses
▪ processes = kids
▪ app messages = letters in
envelopes
▪ transport protocol = Ann and Bill
who demux to in-house siblings
▪ network-layer protocol = postal
service

Transport Layer: 3-5


Transport vs. network layer services and protocols

▪ network layer: logical communication between hosts
▪ transport layer: logical communication between processes
• relies on, enhances, network layer services

household analogy:
12 kids in Ann's house sending letters to 12 kids in Bill's house:
▪ hosts = houses
▪ processes = kids
▪ app messages = letters in envelopes
▪ transport protocol = Ann and Bill who demux to in-house siblings
▪ network-layer protocol = postal service

Transport Layer: 3-6


Transport Layer Actions

Sender:
▪ is passed an application-layer message
▪ determines segment header fields values
▪ creates segment
▪ passes segment to IP

(figure: in the sending host's stack, transport prepends header Th to the app. msg and hands the segment down to network (IP), link, and physical)

Transport Layer: 3-7


Transport Layer Actions

Receiver:
▪ receives segment from IP
▪ checks header values
▪ extracts application-layer message
▪ demultiplexes message up to application via socket

(figure: in the receiving host's stack, the segment Th | app. msg rises from physical, link, and network (IP) to transport, which delivers the app. msg to the application)

Transport Layer: 3-8


Two principal Internet transport protocols

▪ TCP: Transmission Control Protocol
• reliable, in-order delivery
• congestion control
• flow control
• connection setup
▪ UDP: User Datagram Protocol
• unreliable, unordered delivery
• no-frills extension of "best-effort" IP
▪ services not available:
• delay guarantees
• bandwidth guarantees

(figure: logical end-end transport between two end hosts across mobile, ISP, home, enterprise, and content provider networks)

Transport Layer: 3-9


Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-10
(figure sequence, slides 3-11 to 3-15: an HTTP msg is passed from the client application to transport, which adds header Ht; the network layer adds header Hn; the resulting segment Hn | Ht | HTTP msg crosses the network to the HTTP server, where Hn and Ht are stripped and the HTTP msg is delivered up to the server application; with two clients (P-client1, P-client2), the server must demultiplex each arriving segment to the correct client's process)

Transport Layer: 3-11 to 3-15
Multiplexing/demultiplexing

multiplexing at sender:
handle data from multiple sockets, add transport header (later used for demultiplexing)

demultiplexing at receiver:
use header info to deliver received segments to correct socket

(figure: processes P1-P4 on three hosts, each process attached to one or more sockets; the transport layer multiplexes outgoing data from sockets and demultiplexes incoming segments to sockets)

Transport Layer: 3-16


How demultiplexing works

▪ host receives IP datagrams
• each datagram has source IP address, destination IP address
• each datagram carries one transport-layer segment
• each segment has source, destination port number
▪ host uses IP addresses & port numbers to direct segment to appropriate socket

TCP/UDP segment format (32 bits wide): source port # | dest port #; other header fields; application data (payload)

Transport Layer: 3-17


Connectionless demultiplexing

Recall:
▪ when creating socket, must specify host-local port #:
DatagramSocket mySocket1 = new DatagramSocket(12534);
▪ when creating datagram to send into UDP socket, must specify
• destination IP address
• destination port #

when receiving host receives UDP segment:
• checks destination port # in segment
• directs UDP segment to socket with that port #

IP/UDP datagrams with same dest. port #, but different source IP addresses and/or source port numbers will be directed to same socket at receiving host

Transport Layer: 3-18
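The same create-bind-send-receive sequence can be sketched in Python rather than the Java DatagramSocket shown above. This runs entirely over the loopback interface; binding to port 0 lets the OS pick a free port (the payload is an arbitrary example):

```python
import socket

# Receiver: bind a UDP socket to an OS-chosen loopback port
# (the Python analogue of "new DatagramSocket(12534)").
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
port = recv_sock.getsockname()[1]

# Sender: each datagram must carry the destination IP address and port.
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"hello UDP", ("127.0.0.1", port))

# The receiving host demultiplexes by destination port alone.
data, addr = recv_sock.recvfrom(2048)
print(data, addr[0])

send_sock.close()
recv_sock.close()
```

Note there is no connect/accept step: any sender that addresses this port reaches the same socket, exactly the "same dest. port, different sources, same socket" behavior described above.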
Connectionless demultiplexing: an example

DatagramSocket mySocket2 = new DatagramSocket(9157);    // process P3, left host
DatagramSocket serverSocket = new DatagramSocket(6428); // process P1, server
DatagramSocket mySocket1 = new DatagramSocket(5775);    // process P4, right host

(figure: left host → server: source port 9157, dest port 6428; server → left host: source port 6428, dest port 9157; right host ↔ server: source port ?, dest port ?)

Transport Layer: 3-19
Connection-oriented demultiplexing

▪ TCP socket identified by 4-tuple:
• source IP address
• source port number
• dest IP address
• dest port number
▪ demux: receiver uses all four values (4-tuple) to direct segment to appropriate socket
▪ server may support many simultaneous TCP sockets:
• each socket identified by its own 4-tuple
• each socket associated with a different connecting client

Transport Layer: 3-20
Connection-oriented demultiplexing: example

(figure: host A, server B, host C; server B runs processes P4, P5, P6, each with its own connection socket; arriving segments carry source IP,port: A,9157 dest IP,port: B,80; source IP,port: C,5775 dest IP,port: B,80; source IP,port: C,9157 dest IP,port: B,80)

Three segments, all destined to IP address B, dest port 80, are demultiplexed to different sockets

Transport Layer: 3-21
Summary
▪ Multiplexing, demultiplexing: based on segment, datagram
header field values
▪ UDP: demultiplexing using destination port number (only)
▪ TCP: demultiplexing using 4-tuple: source and destination IP
addresses, and port numbers
▪ Multiplexing/demultiplexing happen at all layers

Transport Layer: 3-22
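The 4-tuple demultiplexing in the summary can be sketched as a dictionary keyed by (source IP, source port, dest IP, dest port). The addresses, ports, and socket labels below mirror the example figure but are otherwise invented:

```python
# Demultiplexing table at server B: each TCP socket is keyed by its full 4-tuple.
# (A UDP demux table, by contrast, would be keyed by destination port alone.)
tcp_sockets = {
    ("A", 9157, "B", 80): "socket-for-P4",
    ("C", 5775, "B", 80): "socket-for-P5",
    ("C", 9157, "B", 80): "socket-for-P6",
}

def tcp_demux(src_ip, src_port, dst_ip, dst_port):
    """All four header/address values together select the socket."""
    return tcp_sockets.get((src_ip, src_port, dst_ip, dst_port))

# Three segments, all destined to (B, 80), land in three different sockets:
print(tcp_demux("A", 9157, "B", 80))
print(tcp_demux("C", 5775, "B", 80))
print(tcp_demux("C", 9157, "B", 80))
```

This is why one web server can serve many clients on the same port 80: the client's IP and port disambiguate the connections.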


Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-23
UDP: User Datagram Protocol

▪ "no frills," "bare bones" Internet transport protocol
▪ "best effort" service, UDP segments may be:
• lost
• delivered out-of-order to app
▪ connectionless:
• no handshaking between UDP sender, receiver
• each UDP segment handled independently of others

Why is there a UDP?
▪ no connection establishment (which can add RTT delay)
▪ simple: no connection state at sender, receiver
▪ small header size
▪ no congestion control
▪ UDP can blast away as fast as desired!
▪ can function in the face of congestion

Transport Layer: 3-24
UDP: User Datagram Protocol
▪ UDP use:
▪ streaming multimedia apps (loss tolerant, rate sensitive)
▪ DNS
▪ SNMP
▪ HTTP/3
▪ if reliable transfer needed over UDP (e.g., HTTP/3):
▪ add needed reliability at application layer
▪ add congestion control at application layer

Transport Layer: 3-25


UDP: User Datagram Protocol [RFC 768]

Transport Layer: 3-26


UDP: Transport Layer Actions

(figure: SNMP client and SNMP server protocol stacks: application, transport (UDP), network (IP), link, physical)

Transport Layer: 3-27


UDP: Transport Layer Actions

UDP sender actions:
▪ is passed an application-layer message
▪ determines UDP segment header fields values
▪ creates UDP segment
▪ passes segment to IP

(figure: the SNMP client passes an SNMP msg down to UDP, which prepends the UDP header (UDPh | SNMP msg) and hands the segment to IP)

Transport Layer: 3-28


UDP: Transport Layer Actions

UDP receiver actions:
▪ receives segment from IP
▪ checks UDP checksum header value
▪ extracts application-layer message
▪ demultiplexes message up to application via socket

(figure: the SNMP server receives UDPh | SNMP msg from IP, strips the UDP header, and delivers the SNMP msg to the application)

Transport Layer: 3-29


UDP segment header

UDP segment format (32 bits wide):
source port #  |  dest port #
length         |  checksum
application data (payload)

▪ length: in bytes of UDP segment, including header
▪ payload: data to/from application layer

Transport Layer: 3-30
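The four 16-bit header fields above can be packed and unpacked with struct. The ports and payload are arbitrary examples, and the checksum is left 0 here (which RFC 768 permits over IPv4 to mean "no checksum"); note the length field covers header and payload:

```python
import struct

def make_udp_segment(src_port, dst_port, payload, checksum=0):
    """Build a UDP segment: 8-byte header (four 16-bit fields) + payload.
    The length field counts the whole segment, header included."""
    length = 8 + len(payload)
    header = struct.pack("!HHHH", src_port, dst_port, length, checksum)
    return header + payload

seg = make_udp_segment(9157, 6428, b"hi!")
src, dst, length, csum = struct.unpack("!HHHH", seg[:8])
print(src, dst, length, seg[8:])
```

A 3-byte payload thus yields a length field of 11: the 8 header bytes plus the payload.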


UDP checksum

Goal: detect errors (i.e., flipped bits) in transmitted segment

              1st number   2nd number   sum
Transmitted:      5             6        11
Received:         4             6        11

receiver-computed checksum of the received numbers (4 + 6 = 10) ≠ sender-computed checksum as received (11), so the receiver detects the error

Transport Layer: 3-31


UDP checksum

Goal: detect errors (i.e., flipped bits) in transmitted segment

sender:
▪ treat contents of UDP segment (including UDP header fields and IP addresses) as sequence of 16-bit integers
▪ checksum: addition (one's complement sum) of segment content
▪ checksum value put into UDP checksum field

receiver:
▪ compute checksum of received segment
▪ check if computed checksum equals checksum field value:
• Not equal - error detected
• Equal - no error detected. But maybe errors nonetheless? More later ….

Transport Layer: 3-32
Internet checksum: an example

example: add two 16-bit integers

               1 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0
               1 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1
wraparound   1 1 0 1 1 1 0 1 1 1 0 1 1 1 0 1 1

sum            1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
checksum       0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

Note: when adding numbers, a carryout from the most significant bit needs to be added to the result

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-33
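The wraparound-and-complement procedure can be checked in code against the slide's numbers. This sketch computes the one's-complement sum of 16-bit words, folding any carry out of the most significant bit back in, then complements the result:

```python
def ones_complement_sum16(words):
    """One's-complement sum of 16-bit words: a carryout of the MSB wraps around."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # add the carry back in
    return total

def internet_checksum(words):
    """The checksum field value: the bitwise complement of the sum."""
    return ~ones_complement_sum16(words) & 0xFFFF

# The slide's two 16-bit integers:
a = 0b1110011001100110
b = 0b1101010101010101
print(format(ones_complement_sum16([a, b]), "016b"))   # the "sum" row
print(format(internet_checksum([a, b]), "016b"))       # the "checksum" row
```

Running this reproduces the slide's sum row (1011101110111100) and checksum row (0100010001000011), confirming the worked example.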
Internet checksum: weak protection!

example: add the same two 16-bit integers, but with two bits of the first number flipped (one 0→1 and one 1→0 flip whose effects cancel in the sum)

sum            1 0 1 1 1 0 1 1 1 0 1 1 1 1 0 0
checksum       0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1

Even though the numbers have changed (bit flips), there is no change in the checksum!

Transport Layer: 3-34


Summary: UDP
▪ “no frills” protocol:
• segments may be lost, delivered out of order
• best effort service: “send and hope for the best”
▪ UDP has its plusses:
• no setup/handshaking needed (no RTT incurred)
• can function when network service is compromised
• helps with reliability (checksum)
▪ build additional functionality on top of UDP in application layer
(e.g., HTTP/3)
Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-36
Principles of reliable data transfer

(figure: the reliable service abstraction: a sending process passes data over a reliable channel to a receiving process)

Transport Layer: 3-37


Principles of reliable data transfer

(figure: left, the reliable service abstraction: sending process → reliable channel → receiving process; right, the reliable service implementation: sender-side and receiver-side of a reliable data transfer protocol in the transport layer, communicating over an unreliable channel in the network layer)

Transport Layer: 3-38


Principles of reliable data transfer

Complexity of reliable data transfer protocol will depend (strongly) on characteristics of unreliable channel (lose, corrupt, reorder data?)

(figure: sender-side and receiver-side of the reliable data transfer protocol over an unreliable channel)

Transport Layer: 3-39


Principles of reliable data transfer

Sender, receiver do not know the "state" of each other, e.g., was a message received?
▪ unless communicated via a message

(figure: sender-side and receiver-side of the reliable data transfer protocol over an unreliable channel)

Transport Layer: 3-40


Reliable data transfer protocol (rdt): interfaces

rdt_send(): called from above (e.g., by app.); passed data to deliver to receiver upper layer
deliver_data(): called by rdt to deliver data to upper layer
udt_send(): called by rdt to transfer packet over unreliable channel to receiver
rdt_rcv(): called when packet arrives on receiver side of channel

(figure: sending process → rdt_send() → sender-side rdt implementation → udt_send() → packets (Header | data) over the unreliable channel → rdt_rcv() → receiver-side rdt implementation → deliver_data() → receiving process; communication over the unreliable channel is bi-directional)

Transport Layer: 3-41
Reliable data transfer: getting started

We will:
▪ incrementally develop sender, receiver sides of reliable data transfer protocol (rdt)
▪ consider only unidirectional data transfer
• but control info will flow in both directions!
▪ use finite state machines (FSM) to specify sender, receiver

state: when in this "state", next state is uniquely determined by next event

(FSM notation: a transition from state 1 to state 2 is labeled "event causing state transition / actions taken on state transition")

Transport Layer: 3-42


rdt1.0: reliable transfer over a reliable channel

▪ underlying channel perfectly reliable
• no bit errors
• no loss of packets
▪ separate FSMs for sender, receiver:
• sender sends data into underlying channel
• receiver reads data from underlying channel

sender FSM (one state, Wait for call from above): on rdt_send(data): packet = make_pkt(data); udt_send(packet)
receiver FSM (one state, Wait for call from below): on rdt_rcv(packet): extract(packet,data); deliver_data(data)

Transport Layer: 3-43


rdt2.0: channel with bit errors
▪ underlying channel may flip bits in packet
• checksum (e.g., Internet checksum) to detect bit errors
▪ the question: how to recover from errors?

How do humans recover from “errors” during conversation?

Transport Layer: 3-44


rdt2.0: channel with bit errors

▪ underlying channel may flip bits in packet
• checksum to detect bit errors
▪ the question: how to recover from errors?
• acknowledgements (ACKs): receiver explicitly tells sender that pkt received OK
• negative acknowledgements (NAKs): receiver explicitly tells sender that pkt had errors
• sender retransmits pkt on receipt of NAK

stop and wait: sender sends one packet, then waits for receiver response

Transport Layer: 3-45
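The ACK/NAK retransmission loop can be simulated with a scripted channel that corrupts the first deliveries. All function names below paraphrase the FSM's make_pkt/corrupt/udt_send vocabulary but are invented Python stand-ins, and the trivial checksum is for illustration only:

```python
# Simulate rdt2.0 stop-and-wait: sender retransmits until the receiver ACKs.
def checksum(data):
    return sum(data) & 0xFFFF

def make_pkt(data):
    return {"data": data, "checksum": checksum(data)}

def corrupt(pkt):
    """Receiver-side check: recomputed checksum vs. the one carried in the pkt."""
    return checksum(pkt["data"]) != pkt["checksum"]

def channel(pkt, flip):
    """Deliver pkt; if flip, damage the payload without fixing the checksum."""
    if flip:
        pkt = {"data": pkt["data"] + b"X", "checksum": pkt["checksum"]}
    return pkt

def send_stop_and_wait(data, corruption_script):
    """Send one packet; on NAK (corruption detected), retransmit.
    Returns (number of transmissions, data the receiver delivered up)."""
    sends = 0
    for flip in corruption_script + [False]:   # eventually the channel behaves
        sends += 1
        rcvpkt = channel(make_pkt(data), flip)
        if corrupt(rcvpkt):
            continue                  # receiver replies NAK -> retransmit
        return sends, rcvpkt["data"]  # receiver replies ACK -> done

print(send_stop_and_wait(b"msg", corruption_script=[True, True]))
```

With two corrupted deliveries the packet is transmitted three times; note this sketch assumes the ACK/NAK itself is never garbled, which is exactly the fatal flaw the next slides address with sequence numbers.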
rdt2.0: FSM specifications

sender FSM:
state Wait for call from above:
  rdt_send(data): sndpkt = make_pkt(data, checksum); udt_send(sndpkt) → Wait for ACK or NAK
state Wait for ACK or NAK:
  rdt_rcv(rcvpkt) && isNAK(rcvpkt): udt_send(sndpkt) → stay
  rdt_rcv(rcvpkt) && isACK(rcvpkt): Λ → Wait for call from above

receiver FSM:
state Wait for call from below:
  rdt_rcv(rcvpkt) && corrupt(rcvpkt): udt_send(NAK) → stay
  rdt_rcv(rcvpkt) && notcorrupt(rcvpkt): extract(rcvpkt,data); deliver_data(data); udt_send(ACK) → stay

Transport Layer: 3-46


rdt2.0: FSM specification

(same sender and receiver FSMs as on the previous slide)

Note: "state" of receiver (did the receiver get my message correctly?) isn't known to sender unless somehow communicated from receiver to sender
▪ that's why we need a protocol!

Transport Layer: 3-47


rdt2.0: operation with no errors
(FSMs as on slide 3-46; with no errors, the path taken is: sender
sends sndpkt; receiver finds rcvpkt not corrupt, extracts and
delivers the data, and returns an ACK; sender receives the ACK and
returns to “Wait for call from above”)

Transport Layer: 3-48


rdt2.0: corrupted packet scenario
(FSMs as on slide 3-46; with a corrupted packet, the path taken is:
receiver finds rcvpkt corrupt and returns a NAK; sender, in “Wait
for ACK or NAK,” receives the NAK and retransmits sndpkt)

Transport Layer: 3-49


rdt2.0 has a fatal flaw!
what happens if ACK/NAK corrupted?
▪ sender doesn’t know what happened at receiver!
▪ can’t just retransmit: possible duplicate

handling duplicates:
▪ sender retransmits current pkt if ACK/NAK corrupted
▪ sender adds sequence number to each pkt
▪ receiver discards (doesn’t deliver up) duplicate pkt

stop and wait: sender sends one packet, then waits for receiver response
Transport Layer: 3-50
rdt2.1: sender, handling garbled ACK/NAKs
sender FSM:
  state “Wait for call 0 from above”
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt) → “Wait for ACK or NAK 0”
  state “Wait for ACK or NAK 0”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ → “Wait for call 1 from above”
  state “Wait for call 1 from above”
    rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt) → “Wait for ACK or NAK 1”
  state “Wait for ACK or NAK 1”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isNAK(rcvpkt)): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt): Λ → “Wait for call 0 from above”
Transport Layer: 3-51


rdt2.1: receiver, handling garbled ACK/NAKs
receiver FSM:
  state “Wait for 0 from below”
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq0(rcvpkt):
      extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt) → “Wait for 1 from below”
    rdt_rcv(rcvpkt) && corrupt(rcvpkt): sndpkt = make_pkt(NAK, chksum); udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt): (duplicate) sndpkt = make_pkt(ACK, chksum); udt_send(sndpkt)
  state “Wait for 1 from below”: symmetric, with seq #s 0 and 1 swapped
Transport Layer: 3-52


rdt2.1: discussion
sender:
▪ seq # added to pkt
▪ two seq. #s (0,1) will suffice. Why?
▪ must check if received ACK/NAK corrupted
▪ twice as many states
  • state must “remember” whether “expected” pkt should have seq # of 0 or 1

receiver:
▪ must check if received packet is duplicate
  • state indicates whether 0 or 1 is expected pkt seq #
▪ note: receiver can not know if its last ACK/NAK received OK

Transport Layer: 3-53
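The rdt2.1 receiver's duplicate handling can be sketched in a few lines. A toy model (function names are illustrative): ACK good packets, NAK corrupt ones, and re-ACK, but don't re-deliver, a duplicate of the last packet:

```python
def make_receiver():
    """rdt2.1-style receiver with a single alternating-bit state."""
    state = {"expected": 0, "delivered": []}

    def receive(seq, data, corrupt=False):
        if corrupt:
            return "NAK"
        if seq != state["expected"]:       # duplicate of already-delivered pkt
            return "ACK"                   # re-ACK, do NOT deliver again
        state["delivered"].append(data)    # in-order: deliver up
        state["expected"] ^= 1             # flip expected seq #: 0 <-> 1
        return "ACK"

    return receive, state

recv, st = make_receiver()
recv(0, "a")     # delivered, ACK
recv(0, "a")     # duplicate (sender's ACK was garbled): ACK again, no delivery
recv(1, "b")     # delivered, ACK
```

After this sequence `st["delivered"]` holds `["a", "b"]`: the retransmitted pkt0 was ACKed but discarded, exactly the behavior the slide describes.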


rdt2.2: a NAK-free protocol
▪ same functionality as rdt2.1, using ACKs only
▪ instead of NAK, receiver sends ACK for last pkt received OK
• receiver must explicitly include seq # of pkt being ACKed
▪ duplicate ACK at sender results in same action as NAK:
retransmit current pkt

As we will see, TCP uses this approach to be NAK-free

Transport Layer: 3-54


rdt2.2: sender, receiver fragments
sender FSM fragment:
  state “Wait for call 0 from above”
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt) → “Wait for ACK 0”
  state “Wait for ACK 0”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): udt_send(sndpkt)
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): Λ → …

receiver FSM fragment:
  state “Wait for 0 from below”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || has_seq1(rcvpkt)): udt_send(sndpkt)  (re-ACK last pkt received OK)
  entering “Wait for 0 from below” (from “Wait for 1”):
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && has_seq1(rcvpkt):
      extract(rcvpkt,data); deliver_data(data); sndpkt = make_pkt(ACK1, chksum); udt_send(sndpkt)

Transport Layer: 3-55
rdt3.0: channels with errors and loss
New channel assumption: underlying channel can also lose
packets (data, ACKs)
• checksum, sequence #s, ACKs, retransmissions will be of help …
but not quite enough

Q: How do humans handle lost


sender-to-receiver words in
conversation?
Transport Layer: 3-56
rdt3.0: channels with errors and loss
Approach: sender waits “reasonable” amount of time for ACK
▪ retransmits if no ACK received in this time (a timeout)
▪ if pkt (or ACK) just delayed (not lost):
  • retransmission will be duplicate, but seq #s already handle this!
  • receiver must specify seq # of packet being ACKed
▪ use countdown timer to interrupt after “reasonable” amount of time
Transport Layer: 3-57
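The timeout idea can be illustrated with a toy stop-and-wait simulation over logical time (no real sockets or timers; the parameters and loss model are assumptions for illustration):

```python
def stop_and_wait(packets, losses, timeout=2.0, rtt=1.0):
    """Simulated stop-and-wait sender with a countdown timer.
    losses: set of transmission indices the channel drops.
    Returns (elapsed logical time, total transmissions)."""
    t, tx = 0.0, 0
    for _pkt in packets:
        while True:
            dropped = tx in losses
            tx += 1
            if not dropped:
                t += rtt          # pkt and its ACK return within one RTT
                break
            t += timeout          # no ACK: timer fires, retransmit
    return t, tx

# pkt0 delivered; first copy of pkt1 lost; retransmission succeeds
t, tx = stop_and_wait(["p0", "p1"], losses={1})
# tx == 3 transmissions; t == 1.0 + 2.0 + 1.0 == 4.0
```

The lost packet costs one full timeout interval before the sender recovers, which is why choosing the timeout value matters (too short causes duplicates, too long wastes time).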


rdt3.0 sender
sender FSM (ACK transitions; timeout transitions on next slide):
  state “Wait for call 0 from above”
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer → “Wait for ACK0”
  state “Wait for ACK0”
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer → “Wait for call 1 from above”
  state “Wait for call 1 from above”
    rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer → “Wait for ACK1”
  state “Wait for ACK1”
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer → “Wait for call 0 from above”
Transport Layer: 3-58


rdt3.0 sender
sender FSM (complete):
  state “Wait for call 0 from above”
    rdt_send(data): sndpkt = make_pkt(0, data, checksum); udt_send(sndpkt); start_timer → “Wait for ACK0”
    rdt_rcv(rcvpkt): Λ (ignore)
  state “Wait for ACK0”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,1)): Λ
    timeout: udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,0): stop_timer → “Wait for call 1 from above”
  state “Wait for call 1 from above”
    rdt_send(data): sndpkt = make_pkt(1, data, checksum); udt_send(sndpkt); start_timer → “Wait for ACK1”
    rdt_rcv(rcvpkt): Λ (ignore)
  state “Wait for ACK1”
    rdt_rcv(rcvpkt) && (corrupt(rcvpkt) || isACK(rcvpkt,0)): Λ
    timeout: udt_send(sndpkt); start_timer
    rdt_rcv(rcvpkt) && notcorrupt(rcvpkt) && isACK(rcvpkt,1): stop_timer → “Wait for call 0 from above”
Transport Layer: 3-59


rdt3.0 in action
(a) no loss:
  sender: send pkt0 → rcv ack0, send pkt1 → rcv ack1, send pkt0 …
  receiver: rcv pkt0, send ack0 → rcv pkt1, send ack1 → rcv pkt0, send ack0

(b) packet loss:
  sender: send pkt0 → rcv ack0, send pkt1 (lost: X)
  timeout: resend pkt1 → rcv ack1, send pkt0
  receiver: rcv pkt0, send ack0 → rcv (resent) pkt1, send ack1 →
  rcv pkt0, send ack0
Transport Layer: 3-60
rdt3.0 in action
(c) ACK loss:
  sender: send pkt0 → rcv ack0, send pkt1; ack1 lost (X)
  timeout: resend pkt1 → rcv ack1, send pkt0
  receiver: rcv pkt0, send ack0 → rcv pkt1, send ack1 →
  rcv duplicate pkt1 (detect duplicate), send ack1 → rcv pkt0, send ack0

(d) premature timeout / delayed ACK:
  sender: send pkt0 → rcv ack0, send pkt1; ack1 delayed
  timeout: resend pkt1 → rcv ack1, send pkt0 → rcv (second) ack1, ignore
  receiver: rcv pkt0, send ack0 → rcv pkt1, send ack1 →
  rcv duplicate pkt1 (detect duplicate), send ack1 → rcv pkt0, send ack0
Transport Layer: 3-61
Performance of rdt3.0 (stop-and-wait)
▪ Usender: utilization – fraction of time sender busy sending

▪ example: 1 Gbps link, 15 ms prop. delay, 8000 bit packet
  • time to transmit packet into channel:
    Dtrans = L / R = 8000 bits / (10^9 bits/sec) = 8 microsecs
Transport Layer: 3-62


rdt3.0: stop-and-wait operation
sender receiver
sender: first packet bit transmitted, t = 0
receiver: first packet bit arrives
receiver: last packet bit arrives, send ACK  (one RTT elapses)
sender: ACK arrives, send next packet, t = RTT + L / R

Transport Layer: 3-63


rdt3.0: stop-and-wait operation
sender receiver

Usender = (L / R) / (RTT + L / R) = 0.008 / 30.008 = 0.00027
(numerator and denominator in milliseconds: L/R = 0.008 ms, RTT = 30 ms)

▪ rdt 3.0 protocol performance stinks!


▪ Protocol limits performance of underlying infrastructure (channel)

Transport Layer: 3-64
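The utilization on this slide follows directly from the slide's numbers; a quick check in Python:

```python
# stop-and-wait utilization for the slide's example:
# 1 Gbps link, 15 ms propagation delay (RTT = 30 ms), 8000-bit packet
L = 8000          # bits
R = 1e9           # bits/sec
RTT = 0.030       # sec

D_trans = L / R                        # time to push packet onto the link
U_sender = D_trans / (RTT + D_trans)   # fraction of time sender is busy

print(round(D_trans * 1e6))            # 8 (microseconds)
print(round(U_sender, 5))              # 0.00027
```

The sender transmits for 8 microseconds, then idles for a full 30 ms round trip: utilization of roughly 0.027%.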


rdt3.0: pipelined protocols operation
pipelining: sender allows multiple, “in-flight”, yet-to-be-acknowledged
packets
• range of sequence numbers must be increased
• buffering at sender and/or receiver

Transport Layer: 3-65


Pipelining: increased utilization
sender: first packet bit transmitted, t = 0
sender: last bit transmitted, t = L / R
receiver: first packet bit arrives
receiver: last packet bit arrives, send ACK
receiver: last bit of 2nd packet arrives, send ACK
receiver: last bit of 3rd packet arrives, send ACK
sender: ACK arrives, send next packet, t = RTT + L / R

3-packet pipelining increases utilization by a factor of 3!

Transport Layer: 3-66
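Generalizing the slide's formula, with N packets in flight utilization becomes N·(L/R)/(RTT + L/R), valid while the window of N packets doesn't fill the whole round trip. A sketch using the same link parameters as the stop-and-wait example:

```python
# utilization with N-packet pipelining (same 1 Gbps / 30 ms RTT example)
def utilization(N, L=8000, R=1e9, RTT=0.030):
    """Fraction of time the sender is busy with N packets in flight;
    valid while N*L/R < RTT + L/R (window doesn't cover the RTT)."""
    return (N * L / R) / (RTT + L / R)

u1, u3 = utilization(1), utilization(3)
# 3-packet pipelining triples utilization, exactly as the slide states
assert abs(u3 / u1 - 3.0) < 1e-9
```

Even with N = 3 the link is still badly underused here; TCP's window sizes grow far larger to fill fast, long paths.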


Go-Back-N: sender
▪ sender: “window” of up to N, consecutive transmitted but unACKed pkts
• k-bit seq # in pkt header

▪ cumulative ACK: ACK(n): ACKs all packets up to, including seq # n


• on receiving ACK(n): move window forward to begin at n+1
▪ timer for oldest in-flight packet
▪ timeout(n): retransmit packet n and all higher seq # packets in window
Transport Layer: 3-67
Go-Back-N: receiver
▪ ACK-only: always send ACK for correctly-received packet so far, with
highest in-order seq #
• may generate duplicate ACKs
• need only remember rcv_base
▪ on receipt of out-of-order packet:
• can discard (don’t buffer) or buffer: an implementation decision
• re-ACK pkt with highest in-order seq #

Receiver view of sequence number space:
  • received and ACKed
  • out-of-order: received but not ACKed
  • not received
  (rcv_base: seq # of next expected, in-order packet)
Transport Layer: 3-68
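The receiver's cumulative-ACK rule above can be sketched as a toy model (names and the "ACK −1 means nothing received yet" convention are illustrative):

```python
def gbn_receiver():
    """Toy Go-Back-N receiver: delivers only in-order packets,
    discards out-of-order ones, always ACKs highest in-order seq #."""
    expected = [0]

    def on_pkt(seq):
        if seq == expected[0]:        # in-order: accept, advance
            expected[0] += 1
        # out-of-order packets are discarded (not buffered)
        return expected[0] - 1        # cumulative ACK (-1: nothing yet)

    return on_pkt

rx = gbn_receiver()
# pkt2 lost on first transmission; sender later goes back and resends 2,3,4
acks = [rx(s) for s in [0, 1, 3, 4, 2, 3, 4]]
# acks == [0, 1, 1, 1, 2, 3, 4]: the repeated ACK 1 is what tells the
# sender that everything after pkt1 must be retransmitted
```

The duplicate ACKs for seq 1 are exactly the "(re)send ack1" events in the in-action slide that follows.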
Go-Back-N in action
sender window (N=4):
  send pkt0, pkt1, pkt2, pkt3   (pkt2 lost: X)
  rcv ack0, send pkt4
  rcv ack1, send pkt5
  ignore duplicate ACKs
  pkt2 timeout: (re)send pkt2, pkt3, pkt4, pkt5

receiver:
  receive pkt0, send ack0
  receive pkt1, send ack1
  receive pkt3, discard, (re)send ack1
  receive pkt4, discard, (re)send ack1
  receive pkt5, discard, (re)send ack1
  rcv pkt2, deliver, send ack2
  rcv pkt3, deliver, send ack3
  rcv pkt4, deliver, send ack4
  rcv pkt5, deliver, send ack5
Transport Layer: 3-69


Selective repeat
▪receiver individually acknowledges all correctly received packets
• buffers packets, as needed, for eventual in-order delivery to upper
layer
▪sender times-out/retransmits individually for unACKed packets
• sender maintains timer for each unACKed pkt
▪sender window
• N consecutive seq #s
• limits seq #s of sent, unACKed packets

Transport Layer: 3-70


Selective repeat: sender, receiver windows

Transport Layer: 3-71


Selective repeat: sender and receiver
sender:
  data from above: if next available seq # in window, send packet
  timeout(n): resend packet n, restart timer
  ACK(n) in [sendbase, sendbase+N]:
    mark packet n as received
    if n smallest unACKed packet, advance window base to next unACKed seq #

receiver:
  packet n in [rcvbase, rcvbase+N-1]:
    send ACK(n)
    out-of-order: buffer
    in-order: deliver (also deliver buffered, in-order packets),
    advance window to next not-yet-received packet
  packet n in [rcvbase-N, rcvbase-1]: ACK(n)
  otherwise: ignore
Transport Layer: 3-72
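The receiver actions above, individual ACKs plus buffering for in-order delivery, can be sketched as a toy model (names are illustrative):

```python
def sr_receiver(N=4):
    """Toy Selective Repeat receiver: buffers out-of-order packets
    inside the window, delivers in-order runs, ACKs individually."""
    base, buf, delivered = [0], {}, []

    def on_pkt(seq, data):
        if base[0] <= seq < base[0] + N:      # inside receive window
            buf[seq] = data
            while base[0] in buf:             # deliver any in-order run
                delivered.append(buf.pop(base[0]))
                base[0] += 1
        return seq            # individual (not cumulative) ACK, also
                              # re-ACKs packets below the window

    return on_pkt, delivered

rx, out = sr_receiver()
for seq, d in [(0, "a"), (1, "b"), (3, "d"), (2, "c")]:   # pkt2 arrives late
    rx(seq, d)
# out == ["a", "b", "c", "d"]: pkt3 sat in the buffer until pkt2 filled the gap
```

Unlike Go-Back-N, the late pkt2 triggers delivery of the buffered pkt3 rather than a retransmission of everything after the gap.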


Selective Repeat in action
sender window (N=4):
  send pkt0, pkt1, pkt2, pkt3   (pkt2 lost: X)
  rcv ack0, send pkt4
  rcv ack1, send pkt5
  record ack3 arrived
  pkt2 timeout: send pkt2 (but not 3,4,5)

receiver:
  receive pkt0, send ack0
  receive pkt1, send ack1
  receive pkt3, buffer, send ack3
  receive pkt4, buffer, send ack4
  receive pkt5, buffer, send ack5
  rcv pkt2; deliver pkt2, pkt3, pkt4, pkt5; send ack2

Q: what happens when ack2 arrives?
Transport Layer: 3-73


Selective repeat: a dilemma!

example:
▪ seq #s: 0, 1, 2, 3 (base 4 counting)
▪ window size = 3

(a) no problem:
  sender sends pkt0, pkt1, pkt2; all are ACKed and the windows
  advance; sender sends pkt3, then a new pkt0. The receiver window
  has advanced to {3, 0, 1}, so it correctly accepts the new packet
  with seq number 0.

(b) oops!
  sender sends pkt0, pkt1, pkt2; all three ACKs are lost (X).
  On timeout the sender retransmits the old pkt0, but the receiver
  window has advanced to {3, 0, 1}: it will accept the packet with
  seq number 0 as if it were new data!
Transport Layer: 3-74
(same scenarios (a) and (b) as the previous slide)

▪ receiver can’t see sender side: receiver behavior is identical in
  both cases!
▪ in (b), something’s (very) wrong: the retransmitted old pkt0 is
  accepted as new data

Q: what relationship is needed between sequence # size and window
size to avoid the problem in scenario (b)?
Transport Layer: 3-75
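The answer to the question on the previous slide is that the Selective Repeat window must be no larger than half the sequence-number space; otherwise, after the receiver advances by a full window, old and new packets can share a sequence number. A small check (function name is illustrative):

```python
def sr_window_safe(seq_space, window):
    """Selective Repeat is unambiguous iff window <= seq_space / 2:
    after the receiver advances by `window`, its new window
    [window, 2*window-1] mod seq_space must not overlap the old
    window [0, window-1], which requires 2*window <= seq_space."""
    return window <= seq_space // 2

assert not sr_window_safe(4, 3)   # scenario (b): seq #s 0..3, window 3
assert sr_window_safe(4, 2)       # window 2 removes the ambiguity
```

With seq space 4 and window 3, the receiver's advanced window {3, 0, 1} overlaps the sender's old window {0, 1, 2} at seq 0, which is exactly the (b) dilemma.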
Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
• segment structure
• reliable data transfer
• flow control
• connection management
▪ Principles of congestion control
▪ TCP congestion control
Transport Layer: 3-76
TCP: overview RFCs: 793,1122, 2018, 5681, 7323
▪ point-to-point:
  • one sender, one receiver
▪ reliable, in-order byte stream:
  • no “message boundaries”
▪ full duplex data:
  • bi-directional data flow in same connection
  • MSS: maximum segment size
▪ cumulative ACKs
▪ pipelining:
  • TCP congestion and flow control set window size
▪ connection-oriented:
  • handshaking (exchange of control messages) initializes sender,
    receiver state before data exchange
▪ flow controlled:
  • sender will not overwhelm receiver
Transport Layer: 3-77
TCP segment structure
fields (32-bit words):
▪ source port #, dest port #
▪ sequence number: counting bytes of data into bytestream (not segments!)
▪ acknowledgement number: seq # of next expected byte; A bit: this is an ACK
▪ head len (length of TCP header); flag bits C E U A P R S F
  • C, E: congestion notification
  • RST, SYN, FIN: connection management
▪ receive window: flow control, # bytes receiver willing to accept
▪ checksum (Internet checksum); Urg data pointer
▪ options (variable length): TCP options
▪ application data (variable length): data sent by application into TCP socket
Transport Layer: 3-78


TCP sequence numbers, ACKs
Sequence numbers:
  • byte stream “number” of first byte in segment’s data

Acknowledgements:
  • seq # of next byte expected from other side
  • cumulative ACK

sender sequence number space:
  sent and ACKed | sent, not-yet ACKed (“in-flight”) | usable but
  not yet sent | not usable
  (window size N spans the in-flight and usable-but-not-yet-sent bytes)

Q: how receiver handles out-of-order segments?
  • A: TCP spec doesn’t say; up to implementor
Transport Layer: 3-79
TCP sequence numbers, ACKs
Host A Host B

user types ‘C’:
  Host A → Host B: Seq=42, ACK=79, data = ‘C’
host B ACKs receipt of ‘C’, echoes back ‘C’:
  Host B → Host A: Seq=79, ACK=43, data = ‘C’
host A ACKs receipt of echoed ‘C’:
  Host A → Host B: Seq=43, ACK=80

simple telnet scenario


Transport Layer: 3-80
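The numbers in the telnet scenario follow one rule: the ACK carries the sequence number of the next byte expected, i.e. received seq plus payload length. A minimal check:

```python
# TCP numbers bytes, not segments: ACK = seq of next byte expected
def next_ack(seq, payload_len):
    return seq + payload_len

a_seq, b_seq = 42, 79            # initial seq #s from the slide
ack_from_b = next_ack(a_seq, 1)  # B received byte 42, expects 43
ack_from_a = next_ack(b_seq, 1)  # A received byte 79, expects 80
assert (ack_from_b, ack_from_a) == (43, 80)
```

With a 1460-byte segment instead of one character, the same rule gives ACK = seq + 1460, which is why ACK numbers jump by the segment size in bulk transfers.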
TCP round trip time, timeout
Q: how to set TCP timeout value?
▪ longer than RTT, but RTT varies!
▪ too short: premature timeout, unnecessary retransmissions
▪ too long: slow reaction to segment loss

Q: how to estimate RTT?
▪ SampleRTT: measured time from segment transmission until ACK receipt
  • ignore retransmissions
▪ SampleRTT will vary, want estimated RTT “smoother”
  • average several recent measurements, not just current SampleRTT

Transport Layer: 3-81


TCP round trip time, timeout
EstimatedRTT = (1- α)*EstimatedRTT + α*SampleRTT
▪ exponential weighted moving average (EWMA)
▪ influence of past sample decreases exponentially fast
▪ typical value: α = 0.125
[plot: RTT (milliseconds) vs. time (seconds), gaia.cs.umass.edu to
fantasia.eurecom.fr, showing SampleRTT and EstimatedRTT]
Transport Layer: 3-82
TCP round trip time, timeout
▪ timeout interval: EstimatedRTT plus “safety margin”
• large variation in EstimatedRTT: want a larger safety margin
TimeoutInterval = EstimatedRTT + 4*DevRTT

estimated RTT “safety margin”

▪ DevRTT: EWMA of SampleRTT deviation from EstimatedRTT:


DevRTT = (1-β)*DevRTT + β*|SampleRTT-EstimatedRTT|
(typically, β = 0.25)

* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-83
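The two EWMA formulas combine into the timeout computation; a sketch with the slide's typical α and β (the starting values and SampleRTT measurements below are assumed for illustration):

```python
# EstimatedRTT, DevRTT, and TimeoutInterval per the slide's formulas
alpha, beta = 0.125, 0.25

def update(est_rtt, dev_rtt, sample):
    est = (1 - alpha) * est_rtt + alpha * sample
    dev = (1 - beta) * dev_rtt + beta * abs(sample - est)
    return est, dev, est + 4 * dev        # TimeoutInterval

est, dev = 0.100, 0.005                    # assumed starting values (sec)
for s in [0.106, 0.120, 0.095]:            # hypothetical SampleRTTs
    est, dev, timeout = update(est, dev, s)
# the timeout sits above the estimate by a 4*DevRTT safety margin
```

When the samples vary a lot, DevRTT grows and the safety margin widens; steady samples shrink it back toward EstimatedRTT.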
TCP Sender (simplified)
event: data received from application
▪ create segment with seq #
  • seq # is byte-stream number of first data byte in segment
▪ start timer if not already running
  • think of timer as for oldest unACKed segment
  • expiration interval: TimeOutInterval

event: timeout
▪ retransmit segment that caused timeout
▪ restart timer

event: ACK received
▪ if ACK acknowledges previously unACKed segments
  • update what is known to be ACKed
  • start timer if there are still unACKed segments
Transport Layer: 3-84
TCP Receiver: ACK generation [RFC 5681]
Event at receiver → TCP receiver action:

▪ arrival of in-order segment with expected seq #; all data up to
  expected seq # already ACKed
  → delayed ACK: wait up to 500ms for next segment; if no next
  segment, send ACK

▪ arrival of in-order segment with expected seq #; one other
  segment has ACK pending
  → immediately send single cumulative ACK, ACKing both in-order
  segments

▪ arrival of out-of-order segment with higher-than-expected seq #;
  gap detected
  → immediately send duplicate ACK, indicating seq # of next
  expected byte

▪ arrival of segment that partially or completely fills gap
  → immediately send ACK, provided that segment starts at lower
  end of gap
Transport Layer: 3-85


TCP: retransmission scenarios
lost ACK scenario:
  Host A (SendBase=92) sends Seq=92, 8 bytes of data
  Host B sends ACK=100, which is lost (X)
  timeout: A resends Seq=92, 8 bytes of data
  B sends ACK=100 again; A sets SendBase=100

premature timeout:
  Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes)
  Host B sends ACK=100 and ACK=120
  A’s timer fires before ACK=100 arrives: A resends Seq=92, 8 bytes
  B sends cumulative ACK for 120; A sets SendBase=120
Transport Layer: 3-86


TCP: retransmission scenarios
cumulative ACK scenario:
  Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes)
  Host B’s ACK=100 is lost (X), but ACK=120 arrives before timeout
  cumulative ACK=120 covers for the earlier lost ACK: A moves on,
  sending Seq=120, 15 bytes of data, with no retransmission
Transport Layer: 3-87


TCP fast retransmit
TCP fast retransmit:
if sender receives 3 additional ACKs for same data (“triple
duplicate ACKs”), resend unACKed segment with smallest seq #
▪ likely that unACKed segment lost, so don’t wait for timeout

scenario: Host A sends Seq=92 (8 bytes) and Seq=100 (20 bytes);
the Seq=100 segment is lost (X). Host B sends ACK=100 for each
later segment it receives. On the third duplicate ACK=100, A
retransmits Seq=100, 20 bytes of data, before its timeout fires.

Receipt of three duplicate ACKs indicates 3 segments received
after a missing segment: lost segment is likely. So retransmit!
Transport Layer: 3-88
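The triple-duplicate-ACK rule is a simple counter on the sender side; a toy sketch (names are illustrative):

```python
def fast_retransmit_monitor():
    """Track duplicate ACKs; signal a retransmit on the 3rd duplicate."""
    last_ack, dup = [None], [0]

    def on_ack(ack):
        if ack == last_ack[0]:
            dup[0] += 1
            if dup[0] == 3:            # triple duplicate ACK
                dup[0] = 0
                return "retransmit"    # resend smallest unACKed segment
        else:
            last_ack[0], dup[0] = ack, 0
        return None

    return on_ack

on_ack = fast_retransmit_monitor()
events = [on_ack(a) for a in [100, 100, 100, 100]]
# the 4th ACK=100 is the 3rd *duplicate*: events[3] == "retransmit"
```

Note the first ACK=100 is the original; only the three that repeat it count as duplicates, matching the slide's "3 additional ACKs".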


Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
• segment structure
• reliable data transfer
• flow control
• connection management
▪ Principles of congestion control
▪ TCP congestion control
Transport Layer: 3-89
TCP flow control
Q: What happens if network layer delivers data faster than
application layer removes data from socket buffers?

receiver protocol stack:
  application process removes data from TCP socket buffers
  TCP code: receiver buffers (TCP socket receiver buffers)
  IP code: network layer delivers IP datagram payload into TCP
  socket buffers
  (data arriving from sender)
Transport Layer: 3-90




TCP flow control
Q: What happens if network layer delivers data faster than
application layer removes data from socket buffers?

receive window: flow control field telling the sender the # bytes
the receiver is willing to accept
Transport Layer: 3-92


TCP flow control
Q: What happens if network layer delivers data faster than
application layer removes data from socket buffers?

flow control: receiver controls sender, so sender won’t overflow
receiver’s buffer by transmitting too much, too fast
Transport Layer: 3-93


TCP flow control
▪ TCP receiver “advertises” free buffer space in rwnd field in
  TCP header
  • RcvBuffer size set via socket options (typical default is 4096 bytes)
  • many operating systems autoadjust RcvBuffer
▪ sender limits amount of unACKed (“in-flight”) data to received rwnd
▪ guarantees receive buffer will not overflow

TCP receiver-side buffering: RcvBuffer = buffered data + rwnd
(free buffer space); TCP segment payloads fill the buffer, the
application process drains it
Transport Layer: 3-94
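The sender-side rule, keep in-flight bytes within the advertised rwnd, is a one-line check; a sketch with assumed byte counts:

```python
# sender limits unACKed ("in-flight") data to the receiver's rwnd
def can_send(last_byte_sent, last_byte_acked, rwnd, nbytes):
    in_flight = last_byte_sent - last_byte_acked
    return in_flight + nbytes <= rwnd

# 400 bytes already in flight, rwnd = 500: 100 more fit, 200 don't
assert can_send(last_byte_sent=1000, last_byte_acked=600, rwnd=500, nbytes=100)
assert not can_send(1000, 600, 500, 200)
```

Because rwnd is re-advertised in every segment, the sender's budget shrinks and grows as the application drains the receive buffer.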




TCP connection management
before exchanging data, sender/receiver “handshake”:
▪ agree to establish connection (each knowing the other willing to establish connection)
▪ agree on connection parameters (e.g., starting seq #s)

connection state (each side): ESTAB
connection variables (each side):
  seq #s: client-to-server, server-to-client
  rcvBuffer size at server, client

client: Socket clientSocket = newSocket("hostname","port number");
server: Socket connectionSocket = welcomeSocket.accept();
Transport Layer: 3-96
Agreeing to establish a connection
2-way handshake:
  “Let’s talk” → “OK”: both sides ESTAB
  choose x; req_conn(x) → acc_conn(x): both sides ESTAB

Q: will 2-way handshake always work in network?
▪ variable delays
▪ retransmitted messages (e.g. req_conn(x)) due to message loss
▪ message reordering
▪ can’t “see” other side
Transport Layer: 3-97


2-way handshake scenarios
client chooses x, sends req_conn(x) → ESTAB
server sends acc_conn(x) → ESTAB
client sends data(x+1); server accepts data(x+1), sends ACK(x+1)
connection x completes

No problem!
Transport Layer: 3-98


2-way handshake scenarios

client chooses x, sends req_conn(x) → ESTAB
acc_conn(x) delayed; client retransmits req_conn(x)
server → ESTAB, sends acc_conn(x)
connection x completes; client terminates; server forgets x
delayed req_conn(x) arrives: server → ESTAB, sends acc_conn(x)

Problem: half open connection! (no client)
Transport Layer: 3-99
2-way handshake scenarios
client chooses x, sends req_conn(x); retransmits req_conn(x)
server → ESTAB, sends acc_conn(x)
client sends data(x+1); server accepts; client retransmits data(x+1)
connection x completes; client terminates; server forgets x
delayed req_conn(x) arrives: server → ESTAB
delayed data(x+1) arrives: server accepts data(x+1)

Problem: dup data accepted!
TCP 3-way handshake
server:
  serverSocket = socket(AF_INET,SOCK_STREAM)
  serverSocket.bind((‘’,serverPort))
  serverSocket.listen(1)
  connectionSocket, addr = serverSocket.accept()    → LISTEN

client:
  clientSocket = socket(AF_INET, SOCK_STREAM)       → LISTEN
  clientSocket.connect((serverName,serverPort))

client: choose init seq num x; send TCP SYN msg (SYNbit=1, Seq=x)
  → SYNSENT
server: choose init seq num y; send TCP SYNACK msg, acking SYN
  (SYNbit=1, Seq=y, ACKbit=1, ACKnum=x+1) → SYN RCVD
client: received SYNACK(x) indicates server is live; send ACK for
  SYNACK (ACKbit=1, ACKnum=y+1); this segment may contain
  client-to-server data → ESTAB
server: received ACK(y) indicates client is live → ESTAB
Transport Layer: 3-101


A human 3-way handshake protocol

1. On belay?

2. Belay on.
3. Climbing.

Transport Layer: 3-102


Closing a TCP connection
▪ client, server each close their side of connection
• send TCP segment with FIN bit = 1
▪ respond to received FIN with ACK
• on receiving FIN, ACK can be combined with own FIN
▪ simultaneous FIN exchanges can be handled

Transport Layer: 3-103


Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-104
Principles of congestion control
Congestion:
▪ informally: “too many sources sending too much data too fast for
network to handle”
▪ manifestations:
• long delays (queueing in router buffers)
• packet loss (buffer overflow at routers)
▪ different from flow control!
  • congestion control: too many senders, sending too fast
  • flow control: one sender too fast for one receiver
▪ a top-10 problem!
Transport Layer: 3-105
Causes/costs of congestion: scenario 1
Simplest scenario:
▪ one router, infinite buffers
▪ input, output link capacity: R
▪ two flows (Hosts A and B share the infinite-buffer output link)
▪ no retransmissions needed
original data rate: λin; throughput: λout

Q: What happens as arrival rate λin approaches R/2?
[plots: throughput λout rises with λin up to the maximum
per-connection throughput R/2; delay grows without bound as
arrival rate λin approaches capacity]
Transport Layer: 3-106
Causes/costs of congestion: scenario 2
▪ one router, finite buffers
▪ sender retransmits lost, timed-out packet
  • application-layer input = application-layer output: λin = λout
  • transport-layer input includes retransmissions: λ'in ≥ λin

[Host A offers original data at λin, and original plus
retransmitted data at λ'in, through finite shared output link
buffers (capacity R) toward Host B; throughput λout]
Transport Layer: 3-107
Causes/costs of congestion: scenario 2
Idealization: perfect knowledge
▪ sender sends only when router buffers available (free buffer space!)
[plot: λout = λin, up to the maximum R/2]
Transport Layer: 3-108
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
▪ packets can be lost (dropped at router) due to full buffers
▪ sender knows when packet has been dropped: only resends if
  packet known to be lost
Transport Layer: 3-109
Causes/costs of congestion: scenario 2
Idealization: some perfect knowledge
▪ packets can be lost (dropped at router) due to full buffers
▪ sender knows when packet has been dropped: only resends if
  packet known to be lost
[plot: “wasted” capacity due to retransmissions; when sending at
R/2, some packets are needed retransmissions, so λout < R/2]
Transport Layer: 3-110
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
▪ packets can be lost, dropped at router due to full buffers,
  requiring retransmissions
▪ but sender can time out prematurely, sending two copies, both of
  which are delivered
[plot: “wasted” capacity due to un-needed retransmissions; when
sending at R/2, some packets are retransmissions, including needed
and un-needed duplicates, that are delivered!]
Transport Layer: 3-111
Causes/costs of congestion: scenario 2
Realistic scenario: un-needed duplicates
▪ packets can be lost, dropped at router due to full buffers,
  requiring retransmissions
▪ but sender can time out prematurely, sending two copies, both of
  which are delivered
“costs” of congestion:
▪ more work (retransmission) for given receiver throughput
▪ unneeded retransmissions: link carries multiple copies of a packet
• decreasing maximum achievable throughput

Transport Layer: 3-112


Causes/costs of congestion: scenario 3
▪ four senders
▪ multi-hop paths
▪ timeout/retransmit

Q: what happens as λin and λin’ increase?
A: as red λin’ increases, all arriving blue pkts at upper queue
are dropped, blue throughput → 0

[Hosts A, B, C, D offer original data λin (plus retransmitted
data, λ'in) over multi-hop paths through finite shared output
link buffers]
Transport Layer: 3-113


Causes/costs of congestion: scenario 3
[plot: λout collapses toward 0 as λin’ approaches R/2]

another “cost” of congestion:
▪ when packet dropped, any upstream transmission capacity and
  buffering used for that packet was wasted!
Transport Layer: 3-114


Causes/costs of congestion: insights
▪ throughput can never exceed capacity
▪ delay increases as capacity approached
▪ loss/retransmission decreases effective throughput
▪ un-needed duplicates further decrease effective throughput
▪ upstream transmission capacity / buffering wasted for packets
  lost downstream
Transport Layer: 3-115
Approaches towards congestion control
End-end congestion control:
▪ no explicit feedback from network
▪ congestion inferred from observed loss, delay
▪ approach taken by TCP

Transport Layer: 3-116


Approaches towards congestion control
Network-assisted congestion control:
▪ routers provide direct feedback (explicit congestion info) to
  sending/receiving hosts with flows passing through congested router
▪ may indicate congestion level or explicitly set sending rate
▪ TCP ECN, ATM, DECbit protocols
Transport Layer: 3-117
Chapter 3: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-118
TCP congestion control: AIMD
▪ approach: senders can increase sending rate until packet loss
  (congestion) occurs, then decrease sending rate on loss event

Additive Increase: increase sending rate by 1 maximum segment size
every RTT until loss detected
Multiplicative Decrease: cut sending rate in half at each loss event

[TCP sender sending rate over time: AIMD sawtooth behavior,
probing for bandwidth]
time Transport Layer: 3-119
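The sawtooth can be reproduced with a toy rate trace (units of MSS per RTT; the loss round below is an assumed example):

```python
def aimd(rounds, loss_rounds, mss=1, start=1):
    """Toy AIMD: +1 MSS per RTT (additive increase),
    halve the rate on a loss event (multiplicative decrease)."""
    rate, trace = start, []
    for r in range(rounds):
        trace.append(rate)
        rate = rate / 2 if r in loss_rounds else rate + mss
    return trace

trace = aimd(8, loss_rounds={4})
# additive climb 1..5, loss at round 4, cut to 2.5, climb again
assert trace == [1, 2, 3, 4, 5, 2.5, 3.5, 4.5]
```

Repeating this loss/recovery cycle produces exactly the sawtooth in the figure: the sender keeps probing upward for bandwidth until loss pushes it back down.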


TCP AIMD: more
Multiplicative decrease detail: sending rate is
▪ Cut in half on loss detected by triple duplicate ACK (TCP Reno)
▪ Cut to 1 MSS (maximum segment size) when loss detected by
timeout (TCP Tahoe)

Why AIMD?
▪ AIMD – a distributed, asynchronous algorithm – has been
shown to:
• optimize congested flow rates network wide!
• have desirable stability properties

Transport Layer: 3-120


TCP congestion control: details
TCP sending behavior:
▪ roughly: send cwnd bytes, wait RTT for ACKs, then send more bytes
▪ TCP rate ~ cwnd/RTT bytes/sec

sender sequence number space:
  last byte ACKed | sent, not-yet ACKed (“in-flight”) | available
  but not used | last byte sent
  (cwnd spans the in-flight and available-but-not-used bytes)

▪ TCP sender limits transmission: LastByteSent - LastByteAcked < cwnd
▪ cwnd is dynamically adjusted in response to observed network
  congestion (implementing TCP congestion control)
Transport Layer: 3-121
TCP slow start
▪ when connection begins, increase rate exponentially until first
  loss event:
  • initially cwnd = 1 MSS
  • double cwnd every RTT
  • done by incrementing cwnd for every ACK received

[timeline, Host A → Host B: one segment in the first RTT, two
segments in the second, four segments in the third, …]

▪ summary: initial rate is slow, but ramps up exponentially fast
Transport Layer: 3-122
TCP: from slow start to congestion avoidance
Q: when should the exponential increase switch to linear?
A: when cwnd gets to 1/2 of its value before timeout.

Implementation:
▪ variable ssthresh
▪ on loss event, ssthresh is set to 1/2 of cwnd just before loss event
* Check out the online interactive exercises for more examples: http://gaia.cs.umass.edu/kurose_ross/interactive/
Transport Layer: 3-123
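The switch from exponential to linear growth can be sketched per RTT (units of MSS; the ssthresh value is an assumed example):

```python
def cwnd_per_rtt(rtts, ssthresh=8, mss=1):
    """Toy cwnd trace: slow start doubles cwnd each RTT until it
    reaches ssthresh, then congestion avoidance adds 1 MSS per RTT."""
    cwnd, trace = mss, []
    for _ in range(rtts):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + mss
    return trace

# doubles 1 -> 2 -> 4 -> 8, then grows linearly: 9, 10, 11
assert cwnd_per_rtt(7) == [1, 2, 4, 8, 9, 10, 11]
```

In a real sender the same effect comes from per-ACK updates (cwnd += MSS in slow start, cwnd += MSS·MSS/cwnd in congestion avoidance); the per-RTT view here just makes the exponential-to-linear knee visible.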
Summary: TCP congestion control
[FSM diagram, summarized:]
▪ slow start (initial state: cwnd = 1 MSS, ssthresh = 64 KB, dupACKcount = 0)
• new ACK: cwnd = cwnd + MSS; dupACKcount = 0; transmit new segment(s), as allowed
• cwnd > ssthresh: move to congestion avoidance
▪ congestion avoidance
• new ACK: cwnd = cwnd + MSS·(MSS/cwnd); dupACKcount = 0; transmit new segment(s), as allowed
▪ in slow start or congestion avoidance:
• duplicate ACK: dupACKcount++
• dupACKcount == 3: ssthresh = cwnd/2; cwnd = ssthresh + 3; retransmit missing segment; move to fast recovery
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to slow start
▪ fast recovery
• duplicate ACK: cwnd = cwnd + MSS; transmit new segment(s), as allowed
• new ACK: cwnd = ssthresh; dupACKcount = 0; move to congestion avoidance
• timeout: ssthresh = cwnd/2; cwnd = 1 MSS; dupACKcount = 0; retransmit missing segment; move to slow start

Transport Layer: 3-124
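The FSM above can be sketched as an event-driven class. This is a simplified illustration of TCP Reno's transitions in MSS units; real implementations count bytes and inflate cwnd per duplicate ACK during fast recovery, which this sketch omits.

```python
# Simplified event-driven sketch of the TCP Reno congestion-control FSM:
# states slow start, congestion avoidance, fast recovery. Units are MSS.

class RenoSketch:
    def __init__(self):
        self.cwnd = 1.0
        self.ssthresh = 64.0
        self.dup = 0
        self.state = "slow_start"

    def on_new_ack(self):
        self.dup = 0
        if self.state == "fast_recovery":
            self.cwnd = self.ssthresh            # deflate window
            self.state = "congestion_avoidance"
        elif self.state == "slow_start":
            self.cwnd += 1                       # exponential growth
            if self.cwnd >= self.ssthresh:
                self.state = "congestion_avoidance"
        else:                                    # congestion avoidance
            self.cwnd += 1 / self.cwnd           # ~ +1 MSS per RTT

    def on_dup_ack(self):
        self.dup += 1
        if self.dup == 3 and self.state != "fast_recovery":
            self.ssthresh = self.cwnd / 2        # triple duplicate ACK
            self.cwnd = self.ssthresh + 3
            self.state = "fast_recovery"

    def on_timeout(self):
        self.ssthresh = self.cwnd / 2
        self.cwnd = 1.0
        self.dup = 0
        self.state = "slow_start"
```

Feeding it a sequence of new ACKs, three duplicate ACKs, then a new ACK walks through all three states exactly as in the diagram.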


TCP CUBIC
▪ Is there a better way than AIMD to “probe” for usable bandwidth?
▪ Insight/intuition:
• Wmax: sending rate at which congestion loss was detected
• congestion state of bottleneck link probably (?) hasn’t changed much
• after cutting rate/window in half on loss, initially ramp to Wmax
faster, but then approach Wmax more slowly
[figure: classic TCP’s linear climb from Wmax/2 vs. TCP CUBIC –
higher throughput in this example]


TCP CUBIC
▪ K: point in time when TCP window size will reach Wmax
• K itself is tuneable
▪ increase W as a function of the cube of the distance between current
time and K
• larger increases when further away from K
• smaller increases (cautious) when nearer K
▪ TCP CUBIC default in Linux, most popular TCP for popular Web servers
[figure: TCP sending rate vs. time – TCP Reno’s linear climb vs. TCP
CUBIC’s cubic approach toward Wmax]
Transport Layer: 3-126
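The cubic growth rule can be written out explicitly. RFC 8312 gives W(t) = C·(t − K)³ + Wmax, with K = ((Wmax·(1 − β))/C)^(1/3); the constants C = 0.4 and β = 0.7 are the RFC defaults (the Wmax value below is just an example).

```python
# The CUBIC window growth function (per RFC 8312):
#   W(t) = C*(t - K)**3 + Wmax,  K = ((Wmax * (1 - beta)) / C) ** (1/3)
# The window ramps quickly after a loss, flattens near Wmax (at t = K),
# then cautiously probes beyond it.

def cubic_window(t, wmax, c=0.4, beta=0.7):
    k = ((wmax * (1 - beta)) / c) ** (1 / 3)
    return c * (t - k) ** 3 + wmax

wmax = 100.0
k = ((wmax * 0.3) / 0.4) ** (1 / 3)        # time (s) to climb back to Wmax
print(round(cubic_window(0.0, wmax), 3))   # just after loss: beta*Wmax = 70.0
print(round(cubic_window(k, wmax), 3))     # at t = K: exactly Wmax = 100.0
```

Evaluating `cubic_window` over a range of `t` reproduces the flattening-then-probing curve on the slide.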
TCP and the congested “bottleneck link”
▪ TCP (classic, CUBIC) increases TCP’s sending rate until packet loss
occurs at some router’s output: the bottleneck link
[figure: source and destination protocol stacks; at the bottleneck link
(almost always busy), the packet queue is almost never empty and
sometimes overflows – packet loss]
Transport Layer: 3-127
TCP and the congested “bottleneck link”
▪ TCP (classic, CUBIC) increases TCP’s sending rate until packet loss
occurs at some router’s output: the bottleneck link
▪ understanding congestion: useful to focus on congested bottleneck link

insight: increasing TCP sending rate will not increase end-end
throughput with congested bottleneck

insight: increasing TCP sending rate will increase measured RTT

Goal: “keep the end-end pipe just full, but not fuller”
Transport Layer: 3-128
Delay-based TCP congestion control
Keeping sender-to-receiver pipe “just full enough, but no fuller”: keep
bottleneck link busy transmitting, but avoid high delays/buffering

   measured throughput = (# bytes sent in last RTT interval) / RTTmeasured

Delay-based approach:
▪ RTTmin – minimum observed RTT (uncongested path)
▪ uncongested throughput with congestion window cwnd is cwnd/RTTmin

if measured throughput “very close” to uncongested throughput
    increase cwnd linearly /* since path not congested */
else if measured throughput “far below” uncongested throughput
    decrease cwnd linearly /* since path is congested */
Transport Layer: 3-129
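The pseudocode above can be sketched concretely. This is an illustrative simplification (Vegas/BBR-flavored, not any deployed algorithm): the "very close" and "far below" thresholds are assumed values chosen for the example.

```python
# Sketch of the delay-based rule: compare measured throughput against
# the uncongested throughput cwnd/RTTmin and adjust cwnd linearly.
# The close/far thresholds are assumptions for illustration.

def adjust_cwnd(cwnd, rtt_min, rtt_measured, mss=1, close=0.9, far=0.5):
    uncongested = cwnd / rtt_min        # best-case throughput
    measured = cwnd / rtt_measured      # bytes sent in last RTT / RTT
    ratio = measured / uncongested
    if ratio >= close:                  # path not congested
        return cwnd + mss               # increase cwnd linearly
    elif ratio <= far:                  # path congested
        return max(cwnd - mss, mss)     # decrease cwnd linearly
    return cwnd                         # in between: hold

print(adjust_cwnd(cwnd=10, rtt_min=0.1, rtt_measured=0.105))  # 11
print(adjust_cwnd(cwnd=10, rtt_min=0.1, rtt_measured=0.25))   # 9
```

Note that congestion is inferred purely from delay inflation (RTTmeasured vs. RTTmin) – no loss needs to occur.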
Delay-based TCP congestion control
▪ congestion control without inducing/forcing loss
▪ maximizing throughput (“keeping the pipe just full… ”) while keeping
delay low (“…but not fuller”)
▪ a number of deployed TCPs take a delay-based approach
▪ BBR deployed on Google’s (internal) backbone network

Transport Layer: 3-130


Explicit congestion notification (ECN)
TCP deployments often implement network-assisted congestion control:
▪ two bits in IP header (ToS field) marked by network router to indicate congestion
• policy to determine marking chosen by network operator
▪ congestion indication carried to destination
▪ destination sets ECE bit on ACK segment to notify sender of congestion
▪ involves both IP (IP header ECN bit marking) and TCP (TCP header C,E bit marking)
[figure: router marks IP datagram ECN=10 → ECN=11 on the way to the
destination; destination returns a TCP ACK segment with ECE=1 to the source]
Transport Layer: 3-131
TCP fairness
Fairness goal: if K TCP sessions share same bottleneck link of
bandwidth R, each should have average rate of R/K
[figure: TCP connections 1 and 2 share a bottleneck router of capacity R]
Transport Layer: 3-132


Q: is TCP Fair?
Example: two competing TCP sessions:
▪ additive increase gives slope of 1, as throughput increases
▪ multiplicative decrease decreases throughput proportionally
[figure: connection 1 throughput vs. connection 2 throughput, oscillating
toward the equal-bandwidth-share line: loss decreases window by factor
of 2; congestion avoidance gives additive increase]

Is TCP fair?
A: Yes, under idealized assumptions:
▪ same RTT
▪ fixed number of sessions only in congestion avoidance
Transport Layer: 3-133
Fairness: must all network apps be “fair”?
Fairness and UDP
▪ multimedia apps often do not use TCP
• do not want rate throttled by congestion control
▪ instead use UDP:
• send audio/video at constant rate, tolerate packet loss
▪ there is no “Internet police” policing use of congestion control

Fairness, parallel TCP connections
▪ application can open multiple parallel connections between two hosts
▪ web browsers do this, e.g., link of rate R with 9 existing connections:
• new app asks for 1 TCP, gets rate R/10
• new app asks for 11 TCPs, gets R/2
Transport Layer: 3-134
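The parallel-connection arithmetic above follows from TCP's per-connection sharing: on a link of rate R already carrying n connections, an application opening k parallel connections gets roughly k/(n+k) of the link. A tiny sketch:

```python
# Fairness arithmetic: each TCP connection gets ~1/total of the link,
# so an app with k parallel connections gets k/(existing + k) of R.

def share(R, existing, parallel):
    return R * parallel / (existing + parallel)

R = 1.0
print(share(R, existing=9, parallel=1))    # 0.1  -> R/10
print(share(R, existing=9, parallel=11))   # 0.55 -> ~R/2
```

This is why "more parallel connections" is effectively a way to claim more than a fair share.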


Transport layer: roadmap
▪ Transport-layer services
▪ Multiplexing and demultiplexing
▪ Connectionless transport: UDP
▪ Principles of reliable data transfer
▪ Connection-oriented transport: TCP
▪ Principles of congestion control
▪ TCP congestion control
▪ Evolution of transport-layer
functionality
Transport Layer: 3-135
Evolving transport-layer functionality
▪ TCP, UDP: principal transport protocols for 40 years
▪ different “flavors” of TCP developed, for specific scenarios:
Scenario Challenges
Long, fat pipes (large data Many packets “in flight”; loss shuts down
transfers) pipeline
Wireless networks Loss due to noisy wireless links, mobility;
TCP treats this as congestion loss
Long-delay links Extremely long RTTs
Data center networks Latency sensitive
Background traffic flows Low priority, “background” TCP flows

▪ moving transport–layer functions to application layer, on top of UDP


• HTTP/3: QUIC
Transport Layer: 3-136
QUIC: Quick UDP Internet Connections
▪ application-layer protocol, on top of UDP
• increase performance of HTTP
• deployed on many Google servers, apps (Chrome, mobile YouTube app)
[figure: HTTP/2 over TCP stacks Application (HTTP/2, TLS) over
Transport (TCP) over Network (IP); HTTP/2 over QUIC over UDP stacks
Application (HTTP/3 = slimmed HTTP/2, QUIC) over Transport (UDP)
over Network (IP)]
Transport Layer: 3-137


QUIC: Quick UDP Internet Connections
adopts approaches we’ve studied in this chapter for
connection establishment, error control, congestion control
• error and congestion control: “Readers familiar with TCP’s loss
detection and congestion control will find algorithms here that parallel
well-known TCP ones.” [from QUIC specification]
• connection establishment: reliability, congestion control,
authentication, encryption, state established in one RTT

▪ multiple application-level “streams” multiplexed over single QUIC


connection
• separate reliable data transfer, security
• common congestion control
Transport Layer: 3-138
QUIC: Connection establishment
[figure: TCP handshake (transport layer) followed by TLS handshake
(security), then data – versus a single QUIC handshake, then data]
▪ TCP (reliability, congestion control state) + TLS (authentication,
crypto state): 2 serial handshakes
▪ QUIC: reliability, congestion control, authentication, crypto state
established in 1 handshake
Transport Layer: 3-139


QUIC: streams: parallelism, no HOL blocking
[figure: (a) HTTP 1.1: multiple HTTP GETs share one connection – TLS
encryption, TCP RDT, TCP congestion control; an error on one object
stalls the others (HOL blocking); (b) HTTP/2 with QUIC: each GET gets
its own QUIC stream with separate encryption and RDT, over common
QUIC congestion control and UDP – an error on one stream does not
block the others: no HOL blocking]
Transport Layer: 3-140
Chapter 3: summary
▪ principles behind transport layer services:
• multiplexing, demultiplexing
• reliable data transfer
• flow control
• congestion control
▪ instantiation, implementation in the Internet
• UDP
• TCP

Up next:
▪ leaving the network “edge” (application, transport layers)
▪ into the network “core”
▪ two network-layer chapters:
• data plane
• control plane
Transport Layer: 3-141


Additional Chapter 3 slides

Transport Layer: 3-142


Go-Back-N: sender extended FSM

initially: base=1, nextseqnum=1

rdt_send(data):
  if (nextseqnum < base+N) {
    sndpkt[nextseqnum] = make_pkt(nextseqnum,data,chksum)
    udt_send(sndpkt[nextseqnum])
    if (base == nextseqnum)
      start_timer
    nextseqnum++
  }
  else
    refuse_data(data)

timeout:
  start_timer
  udt_send(sndpkt[base])
  udt_send(sndpkt[base+1])
  …
  udt_send(sndpkt[nextseqnum-1])

rdt_rcv(rcvpkt) && corrupt(rcvpkt): (do nothing)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt):
  base = getacknum(rcvpkt)+1
  if (base == nextseqnum)
    stop_timer
  else
    start_timer
Transport Layer: 3-143
Go-Back-N: receiver extended FSM

initially: expectedseqnum=1
           sndpkt = make_pkt(expectedseqnum,ACK,chksum)

rdt_rcv(rcvpkt) && notcorrupt(rcvpkt)
  && hasseqnum(rcvpkt,expectedseqnum):
  extract(rcvpkt,data)
  deliver_data(data)
  sndpkt = make_pkt(expectedseqnum,ACK,chksum)
  udt_send(sndpkt)
  expectedseqnum++

any other event:
  udt_send(sndpkt)

▪ ACK-only: always send ACK for correctly-received packet with highest
in-order seq #
• may generate duplicate ACKs
• need only remember expectedseqnum
▪ out-of-order packet:
• discard (don’t buffer): no receiver buffering!
• re-ACK pkt with highest in-order seq #
Transport Layer: 3-144
TCP sender (simplified)

initially: NextSeqNum = InitialSeqNum; SendBase = InitialSeqNum

data received from application above:
  create segment, seq. #: NextSeqNum
  pass segment to IP (i.e., “send”)
  NextSeqNum = NextSeqNum + length(data)
  if (timer currently not running)
    start timer

timeout:
  retransmit not-yet-acked segment with smallest seq. #
  start timer

ACK received, with ACK field value y:
  if (y > SendBase) {
    SendBase = y
    /* SendBase–1: last cumulatively ACKed byte */
    if (there are currently not-yet-acked segments)
      start timer
    else stop timer
  }
Transport Layer: 3-145
TCP 3-way handshake FSM
[FSM, summarized:]
▪ server: closed → (Socket connectionSocket = welcomeSocket.accept())
→ listen; on receiving SYN(seq=x), create new socket for communication
back to client and send SYNACK(seq=y, ACKnum=x+1) → SYN rcvd; on
receiving ACK(ACKnum=y+1) → ESTAB
▪ client: closed → (Socket clientSocket = newSocket("hostname","port
number")), send SYN(seq=x) → SYN sent; on receiving
SYNACK(seq=y, ACKnum=x+1), send ACK(ACKnum=y+1) → ESTAB
Transport Layer: 3-146


Closing a TCP connection
[sequence diagram, summarized – client and server states:]
▪ client (ESTAB): clientSocket.close() – sends FINbit=1, seq=x →
FIN_WAIT_1 (can no longer send, but can receive data)
▪ server (ESTAB): on FIN, sends ACKbit=1, ACKnum=x+1 →
CLOSE_WAIT (can still send data)
▪ client: on ACK → FIN_WAIT_2 (wait for server close)
▪ server: sends FINbit=1, seq=y → LAST_ACK (can no longer send data)
▪ client: on FIN, sends ACKbit=1, ACKnum=y+1 → TIMED_WAIT
(timed wait for 2*max segment lifetime) → CLOSED
▪ server: on ACK → CLOSED
Transport Layer: 3-147


TCP throughput
▪ avg. TCP thruput as function of window size, RTT?
• ignore slow start, assume there is always data to send
▪ W: window size (measured in bytes) where loss occurs
• window oscillates between W/2 and W, so avg. window size
(# in-flight bytes) is 3/4 W
• avg. thruput is 3/4 W per RTT

        avg TCP thruput = (3/4) · W/RTT bytes/sec

TCP over “long, fat pipes”
▪ example: 1500 byte segments, 100ms RTT, want 10 Gbps throughput
▪ requires W = 83,333 in-flight segments
▪ throughput in terms of segment loss probability, L [Mathis 1997]:

        TCP throughput = 1.22 · MSS / (RTT · √L)

➜ to achieve 10 Gbps throughput, need a loss rate of L = 2·10⁻¹⁰ – a
very small loss rate!
▪ versions of TCP for long, high-speed scenarios
Transport Layer: 3-149
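The Mathis bound above can be checked numerically; solving it backwards for the loss rate reproduces the slide's 10 Gbps example.

```python
# Mathis et al. throughput bound: throughput ~ 1.22 * MSS / (RTT * sqrt(L)),
# solved forward (rate from loss rate) and backward (loss rate needed
# for a target rate).
import math

def mathis_throughput(mss_bits, rtt, loss):
    return 1.22 * mss_bits / (rtt * math.sqrt(loss))

def loss_for_rate(mss_bits, rtt, rate):
    return (1.22 * mss_bits / (rtt * rate)) ** 2

mss_bits = 1500 * 8        # 1500-byte segments
rtt = 0.100                # 100 ms
L = loss_for_rate(mss_bits, rtt, 10e9)
print(f"loss rate needed for 10 Gbps: {L:.2e}")   # ~2e-10
```

A loss rate of roughly 2·10⁻¹⁰ means fewer than one loss per ~5 billion segments – motivating the high-speed TCP variants the slide mentions.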


Chapter 4
Network Layer

Computer Networking: A Top-Down Approach
6th edition
Jim Kurose, Keith Ross
Addison-Wesley
March 2012

A note on the use of these ppt slides:
We’re making these slides freely available to all (faculty, students,
readers). They’re in PowerPoint form so you see the animations; and can
add, modify, and delete slides (including this one) and slide content to
suit your needs. They obviously represent a lot of work on our part. In
return for use, we only ask the following:
 If you use these slides (e.g., in a class), mention their source
(after all, we’d like people to use our book!)
 If you post any slides on a www site, note that they are adapted
from (or perhaps identical to) our slides, and note our copyright of this
material.
Thanks and enjoy! JFK/KWR

All material copyright 1996-2013
J.F Kurose and K.W. Ross, All Rights Reserved

Network Layer 4-1


Chapter 4: network layer
chapter goals:
 understand principles behind network layer
services:
 network layer service models
 forwarding versus routing
 how a router works
 routing (path selection)
 broadcast, multicast
 instantiation, implementation in the Internet

Network Layer 4-2


Network layer
 transport segment from sending to receiving host
 on sending side, encapsulates segments into datagrams
 on receiving side, delivers segments to transport layer
 network layer protocols in every host, router
 router examines header fields in all IP datagrams passing through it
[figure: application/transport/network/data link/physical stacks at the
end hosts; network/data link/physical at every router along the path]
Network Layer 4-3
Two key network-layer functions
 forwarding: move packets analogy:
from router’s input to
appropriate router  routing: process of
output planning trip from source
to dest
 routing: determine route
taken by packets from  forwarding: process of
source to dest. getting through single
interchange
 routing algorithms

Network Layer 4-4


Interplay between routing and forwarding
 routing algorithm determines end-end-path through network
 local forwarding table determines local forwarding at this router

local forwarding table:
  header value | output link
  0100         | 3
  0101         | 2
  0111         | 2
  1001         | 1

[figure: a packet arrives with header value 0111; the forwarding table
sends it out link 2]
Network Layer 4-5


Network service model
Q: What service model for “channel” transporting
datagrams from sender to receiver?
example services for individual datagrams:
 guaranteed delivery
 guaranteed delivery with less than 40 msec delay
example services for a flow of datagrams:
 in-order datagram delivery
 guaranteed minimum bandwidth to flow
 restrictions on changes in inter-packet spacing
Network Layer 4-6


Network layer service models:
                                     Guarantees?
Architecture | Service Model | Bandwidth          | Loss | Order | Timing | Congestion feedback
Internet     | best effort   | none               | no   | no    | no     | no (inferred via loss)
ATM          | CBR           | constant rate      | yes  | yes   | yes    | no congestion
ATM          | VBR           | guaranteed rate    | yes  | yes   | yes    | no congestion
ATM          | ABR           | guaranteed minimum | no   | yes   | no     | yes
ATM          | UBR           | none               | no   | yes   | no     | no
Network Layer 4-7


Router architecture overview
two key router functions:
 run routing algorithms/protocol (RIP, OSPF, BGP)
 forwarding datagrams from incoming to outgoing link
[figure: routing processor (routing, management – control plane,
software) computes forwarding tables, pushed to input ports; input
ports connect through a high-speed switching fabric to output ports
(forwarding – data plane, hardware)]
Network Layer 4-8
Input port functions
[figure: line termination (physical layer: bit-level reception) → link
layer protocol, e.g. Ethernet, see chapter 5 (receive) → lookup,
forwarding, queueing → switch fabric]
decentralized switching:
 given datagram dest., lookup output port using forwarding table in
input port memory (“match plus action”)
 goal: complete input port processing at ‘line speed’
 queuing: if datagrams arrive faster than forwarding rate into switch fabric
Network Layer 4-9
Switching fabrics
 transfer packet from input buffer to appropriate output buffer
 switching rate: rate at which packets can be transferred from
inputs to outputs
 often measured as multiple of input/output line rate
 N inputs: switching rate N times line rate desirable
 three types of switching fabrics: memory, bus, crossbar
Network Layer 4-10


Switching via memory
first generation routers:
 traditional computers with switching under direct control of CPU
 packet copied to system’s memory
 speed limited by memory bandwidth (2 bus crossings per datagram)
[figure: input port (e.g., Ethernet) → memory → output port (e.g.,
Ethernet), over the system bus]
Network Layer 4-11


Switching via a bus
 datagram from input port memory
to output port memory via a
shared bus
 bus contention: switching speed
limited by bus bandwidth
 32 Gbps bus, Cisco 5600: sufficient bus
speed for access and enterprise
routers

Network Layer 4-12


Switching via interconnection network
 overcome bus bandwidth limitations
 banyan networks, crossbar, other
interconnection nets initially
developed to connect processors in
multiprocessor
 advanced design: fragmenting
datagram into fixed length cells, crossbar
switch cells through the fabric.
 Cisco 12000: switches 60 Gbps
through the interconnection
network

Network Layer 4-13


Output ports (this slide is HUGELY important!)
[figure: switch fabric → datagram buffer (queueing) → link layer
protocol → line termination (send)]
 buffering required when datagrams arrive from fabric faster than the
transmission rate
 datagrams (packets) can be lost due to congestion, lack of buffers
 scheduling discipline chooses among queued datagrams for transmission
 priority scheduling – who gets best performance, network neutrality
Network Layer 4-14
Output port queueing
[figure: at time t, packets move from inputs to outputs via the switch
fabric; one packet time later, multiple arrivals queue at one output]
 buffering when arrival rate via switch exceeds output line speed
 queueing (delay) and loss due to output port buffer overflow!
Network Layer 4-15
Input port queuing
 fabric slower than input ports combined -> queueing may occur at
input queues
 queueing delay and loss due to input buffer overflow!
 Head-of-the-Line (HOL) blocking: queued datagram at front of queue
prevents others in queue from moving forward
[figure: output port contention – only one red datagram can be
transferred, the lower red packet is blocked; one packet time later,
the green packet experiences HOL blocking]
Network Layer 4-16


The Internet network layer
host, router network layer functions:
 transport layer: TCP, UDP
 network layer:
 routing protocols – path selection (RIP, OSPF, BGP) → forwarding table
 IP protocol – addressing conventions, datagram format, packet
handling conventions
 ICMP protocol – error reporting, router “signaling”
 link layer
 physical layer
Network Layer 4-17


IP datagram format (32-bit words)
 ver: IP protocol version number
 head. len: header length (bytes)
 type of service: “type” of data
 length: total datagram length (bytes)
 16-bit identifier, flgs, fragment offset: for fragmentation/reassembly
 time to live: max number remaining hops (decremented at each router)
 upper layer: upper layer protocol to deliver payload to
 header checksum
 32 bit source IP address
 32 bit destination IP address
 options (if any): e.g. timestamp, record route taken, specify list of
routers to visit
 data (variable length, typically a TCP or UDP segment)

how much overhead?
 20 bytes of TCP
 20 bytes of IP
 = 40 bytes + app layer overhead
Network Layer 4-18


IP fragmentation, reassembly
 network links have MTU (max. transfer size) – largest possible
link-level frame
 different link types, different MTUs
 large IP datagram divided (“fragmented”) within net
 one datagram becomes several datagrams
 “reassembled” only at final destination
 IP header bits used to identify, order related fragments
[figure: fragmentation – in: one large datagram, out: 3 smaller
datagrams; reassembly at the destination]
Network Layer 4-19
IP fragmentation, reassembly
example:
 4000 byte datagram
 MTU = 1500 bytes
 1480 bytes in data field
 offset = 1480/8 = 185

one large datagram becomes several smaller datagrams:
  length=4000  ID=x  fragflag=0  offset=0
becomes
  length=1500  ID=x  fragflag=1  offset=0
  length=1500  ID=x  fragflag=1  offset=185
  length=1040  ID=x  fragflag=0  offset=370
Network Layer 4-20
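The fragmentation arithmetic above can be sketched in code: each fragment carries a multiple of 8 data bytes, and offsets count 8-byte units. This is a simplified model (fixed 20-byte header, no options), not a full IP implementation.

```python
# Reproduces the slide's example: a 4000-byte datagram (20-byte header
# + 3980 data bytes) over an MTU-1500 link. Data per fragment must be
# a multiple of 8 bytes; the offset field counts 8-byte units.

def fragment(total_len, mtu, header=20):
    data = total_len - header
    max_data = (mtu - header) // 8 * 8     # 1480 for MTU 1500
    frags, offset = [], 0
    while data > 0:
        chunk = min(data, max_data)
        more = data > chunk                # do more fragments follow?
        frags.append((chunk + header, int(more), offset // 8))
        offset += chunk
        data -= chunk
    return frags                           # (length, fragflag, offset)

print(fragment(4000, 1500))
# [(1500, 1, 0), (1500, 1, 185), (1040, 0, 370)] – as on the slide
```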


IP addressing: introduction
 IP address: 32-bit identifier for host, router interface
 interface: connection between host/router and physical link
 routers typically have multiple interfaces
 host typically has one or two interfaces (e.g., wired Ethernet,
wireless 802.11)
 IP addresses associated with each interface
[figure: routers and hosts with interface addresses 223.1.1.1 through
223.1.3.27]

  223.1.1.1 = 11011111 00000001 00000001 00000001
                   223        1        1        1
Network Layer 4-21


IP addressing: introduction
Q: how are interfaces actually connected?
A: we’ll learn about that in chapter 5, 6.
A: wired Ethernet interfaces connected by Ethernet switches
A: wireless WiFi interfaces connected by WiFi base station

For now: don’t need to worry about how one interface is connected
to another (with no intervening router)
Network Layer 4-22


Subnets
 IP address:
 subnet part – high order bits
 host part – low order bits
 what’s a subnet?
 device interfaces with same subnet part of IP address
 can physically reach each other without intervening router
[figure: network consisting of 3 subnets, with interface addresses
223.1.1.x, 223.1.2.x, 223.1.3.x]
Network Layer 4-23


Subnets
recipe
 to determine the subnets, detach each interface from its host or
router, creating islands of isolated networks
 each isolated network is called a subnet
[figure: subnets 223.1.1.0/24, 223.1.2.0/24, 223.1.3.0/24]
subnet mask: /24
Network Layer 4-24
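The recipe above has a stdlib counterpart: Python's ipaddress module computes an interface's network (subnet) address directly, and two interfaces are on the same subnet exactly when those networks match. The addresses below are the slide's examples.

```python
# Computing a subnet's network address with the stdlib ipaddress module,
# using the /24 subnets from the slide.
import ipaddress

iface = ipaddress.ip_interface("223.1.1.1/24")
print(iface.network)                         # 223.1.1.0/24

# Same subnet iff the network addresses match:
a = ipaddress.ip_interface("223.1.1.2/24").network
b = ipaddress.ip_interface("223.1.2.1/24").network
print(a == b)                                # False: separated by a router
```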
Subnets
how many?
[figure: topology with interfaces on 223.1.1.x, 223.1.2.x, 223.1.3.x,
223.1.7.x, 223.1.8.x, 223.1.9.x – six subnets]
Network Layer 4-25


IP addressing: CIDR
CIDR: Classless InterDomain Routing
 subnet portion of address of arbitrary length
 address format: a.b.c.d/x, where x is # bits in subnet portion
of address
 example: 200.23.16.0/23 = 11001000 00010111 0001000 | 0 00000000
(first 23 bits: subnet part; remaining 9 bits: host part)
Network Layer 4-26


Route Aggregation

Network Layer 4-27


IPv4 Addressing
Topics
• IP addressing
• Subnetting
• Variable length subnet masking (VLSM)
• Classless interdomain routing (CIDR)
IPv4 Address Space
The address space of IPv4 is 2^32, or 4,294,967,296 addresses.
IP Address Notation
Convert Binary to Decimal
Classful IP Addressing
IPv4 Address Classes
Private Address
Find Usable and Broadcast Address
• NW 172.168.0.0
• SM 255.255.224.0
Find Network Address
Subnetting
VLSM-Variable length Subnet Mask
Class C network, 201.45.222.0/24
• Requirement
• LAN 1 – 126 hosts
• LAN 2 – 60 hosts
• LAN 3 – 20 hosts
• LAN 4 – 14 hosts
• LAN 5 – 10 hosts
CIDR (Classless Inter-Domain Routing)
• By using a prefix address to summarize routes, administrators can keep routing table
entries manageable, which means the following
• More efficient routing
• A reduced number of CPU cycles when recalculating a routing table, or when
sorting through the routing table entries to find a match
• Reduced router memory requirements
• Route summarization is also known as:
• Route aggregation
• Supernetting
• Supernetting is essentially the inverse of subnetting.
• ISPs can be assigned blocks of address space, which they can then parcel out to
customers.
Without CIDR, a router must maintain individual routing table
entries for these class B networks.

With CIDR, a router can summarize these routes using a single
network address by using a 13-bit prefix: 172.24.0.0 /13
Steps:
1. Count the number of left-most matching bits, /13 (255.248.0.0)
2. Add all zeros after the last matching bit:
172.24.0.0 = 10101100 00011000 00000000 00000000
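The two steps above can be done in code: count the left-most bits shared by a block of networks, then zero everything after them. A small sketch, shown on the 172.24.0.0/13 example (the eight /16 networks 172.24.0.0 through 172.31.0.0):

```python
# Route summarization: find the longest common prefix of a block of
# networks, then zero the remaining bits to get the summary address.

def summarize(addrs):
    ints = [int.from_bytes(bytes(map(int, a.split("."))), "big")
            for a in addrs]
    prefix = 32
    while prefix > 0 and len({i >> (32 - prefix) for i in ints}) > 1:
        prefix -= 1                        # shrink until all nets match
    net = (ints[0] >> (32 - prefix)) << (32 - prefix)
    dotted = ".".join(str((net >> s) & 0xFF) for s in (24, 16, 8, 0))
    return f"{dotted}/{prefix}"

print(summarize([f"172.{n}.0.0" for n in range(24, 32)]))  # 172.24.0.0/13
print(summarize(["207.21.54.0", "207.21.55.0"]))           # 207.21.54.0/23
```

The second call reproduces the Company XYZ supernet on the next slide: 54 = 00110110 and 55 = 00110111 share 23 left-most bits.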
Supernetting Example
• Company XYZ needs to address 400 hosts.
• Its ISP gives them two contiguous Class C addresses:
  • 207.21.54.0/24
  • 207.21.55.0/24
• Company XYZ can use a prefix of 207.21.54.0 /23 to supernet these two
contiguous networks. (Yielding 510 hosts)
  • 207.21.54.0/24 and 207.21.55.0/24 have 23 bits in common.

Supernetting Example
• With the ISP acting as the addressing authority for a CIDR block of addresses, the
ISP’s customer networks, which include XYZ, can be advertised among Internet
routers as a single supernet.
Example
Revision 2
Services provided by DNS
• Name Resolution
• To translate user-supplied hostnames to IP addresses.
• Host aliasing
• A host with a complicated hostname can have one or more alias names.
• DNS can be invoked by an application to obtain the canonical hostname for a
supplied alias hostname.
• Mail server aliasing.
• The hostname of the Hotmail mail server is more complicated and much less
mnemonic than simply hotmail.com (for example, relay1.west-coast.hotmail.com).
DNS helps to obtain the canonical hostname for a supplied alias hostname as well as
the IP address of the host.
• Load distribution.
• DNS is also used to perform load distribution among replicated servers, such as
replicated Web servers.
Problems with centralized DNS design
• A single point of failure: If the DNS server crashes, so does the entire
Internet!
• Traffic volume: A single DNS server would have to handle all DNS queries
• Distant centralized database: A single DNS server cannot be “close to” all
the querying clients. If we put the single DNS server in New York City, then
all queries from Australia must travel to the other side of the globe,
perhaps over slow and congested links. This can lead to significant delays.
• Maintenance: The single DNS server would have to keep records for all
Internet hosts. This centralized database will be huge.
• A centralized database in a single DNS server simply doesn’t scale.
Methods of interaction / resolution of various DNS servers
• Iterative DNS queries
1. host cis.poly.edu first sends a DNS query message to its local
DNS server, dns.poly.edu. The query message contains the
hostname to be translated, namely, gaia.cs.umass.edu.
2. The local DNS server forwards the query message to a root DNS
server.
3. The root DNS server takes note of the edu suffix and returns to the
local DNS server a list of IP addresses for TLD servers responsible
for edu.
4. The local DNS server then resends the query message to one of these
TLD servers.
5. The TLD server takes note of the umass.edu suffix and responds
with the IP address of the authoritative DNS server for the University
of Massachusetts, namely, dns.umass.edu.
6. Finally, the local DNS server resends the query message directly to
dns.umass.edu, which responds with the IP address of
gaia.cs.umass.edu.
• Recursive DNS queries
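From an application's point of view, this whole iterative or recursive machinery is hidden behind a single stub-resolver call. Python exposes it through the socket module; a minimal sketch (the hostname here is just localhost so it resolves without network access):

```python
# Name resolution as an application sees it: hand a hostname to the
# stub resolver (socket.getaddrinfo), which triggers the DNS lookup
# process described above and returns the mapped addresses.
import socket

def resolve(hostname):
    """Return the set of IPv4 addresses that hostname maps to."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    return {sockaddr[0] for *_, sockaddr in infos}

print(resolve("localhost"))   # typically {'127.0.0.1'}
```

Replacing "localhost" with a public hostname (e.g. gaia.cs.umass.edu from the example) performs a real DNS query via the host's configured local DNS server.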
DNS records and messages
• A resource record is a four-tuple that contains the following fields: (Name, Value,
Type, TTL)
• TTL is the time to live of the resource record.
• If Type=A, then Name is a hostname and Value is the IP address for the hostname.
Thus, a Type A record provides the standard hostname-to-IP address mapping.
• If Type=NS, then Name is a domain (such as foo.com) and Value is the hostname of an
authoritative DNS server that knows how to obtain the IP addresses for hosts in the
domain.
• If Type=CNAME, then Value is a canonical hostname for the alias hostname Name.
This record can provide querying hosts the canonical name for a hostname.
• If Type=MX, then Value is the canonical name of a mail server that has an alias
hostname Name. MX records allow the hostnames of mail servers to have simple
aliases.
• DNS Messages
• The first 12 bytes is the
header section
• The question section
contains information
about the query that is
being made.
• In a reply from a DNS
server, the answer section
contains the resource
records for the name that
was originally queried
• The authority section contains records of other authoritative servers.

UDP segment structure
• The UDP header has only four fields,
each consisting of two bytes.
• The application data occupies the data
field of the UDP segment.
• The port numbers allow the destination
host to pass the application data to the
correct process running on the
destination end system.
• The length field specifies the number of
bytes in the UDP segment (header plus
data).
• The checksum is used by the receiving
host to check whether errors have been
introduced into the segment.
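The four 2-byte fields described above can be packed and unpacked with the struct module. A sketch: network byte order is big-endian ("!"), and the checksum is left at 0 here, which IPv4 permits to mean "no checksum computed".

```python
# Building and parsing the 8-byte UDP header: source port, destination
# port, length (header + data), checksum - four 16-bit fields.
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)            # 8-byte header plus data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(12000, 53, b"query")
src, dst, length, csum = struct.unpack("!HHHH", hdr)
print(src, dst, length, csum)            # 12000 53 13 0
```

The destination port (53, the well-known DNS port in this example) is what lets the receiving host demultiplex the data to the right process.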
Checksum for the given words
The three 16-bit words:
  0110011001100000
  0101010101010101
  1000111100001100

The sum of the first two 16-bit words is
  0110011001100000
+ 0101010101010101
---------------------------
  1011101110110101

Adding the third word to the above sum gives
  1011101110110101
+ 1000111100001100
----------------------------
1 0100101011000001   (overflow)

After wrapping the carry around:
  0100101011000010

Taking the 1’s complement gives the checksum:
  1011010100111101
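The same computation in code: add the 16-bit words with wraparound carry, then take the one's complement. Running it on the three example words reproduces the result above.

```python
# 16-bit one's-complement Internet checksum: sum the words, fold any
# carry out of bit 16 back into the low bits, then complement.

def internet_checksum(words):
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)   # wrap carry around
    return ~total & 0xFFFF                         # one's complement

words = [0b0110011001100000, 0b0101010101010101, 0b1000111100001100]
print(format(internet_checksum(words), "016b"))    # 1011010100111101
```

The receiver adds all words plus the checksum the same way; an all-ones result (0xFFFF) indicates no detected error.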
TCP segment structure
• The 32-bit sequence number and the 32-bit acknowledgment number are used in
implementing a reliable data transfer service.
• The 16-bit receive window field is used for flow control.
• The 4-bit header length field specifies the length of the TCP header in 32-bit words.
• The optional and variable-length options field is used when a sender and receiver
negotiate the maximum segment size (MSS)
• The flag field contains 6 bits.
• The ACK bit is used to indicate that the value carried in the acknowledgment field is
valid;
• The RST, SYN, and FIN bits are used for connection setup and teardown
• Setting the PSH bit indicates that the receiver should pass the data to the upper layer
immediately.
• URG bit is used to indicate that there is data in this segment that the sending-side
upper-layer entity has marked as “urgent.”
• Source and destination port numbers are used for multiplexing/demultiplexing data
from/to upper-layer applications.
TCP connection management

Connection Establishment:
Three-way handshake protocol
Step 1.
• The client-side TCP sends a TCP segment to the server-side TCP in which the SYN bit is set to 1.
• This is referred to as a SYN segment.
• The client chooses an initial sequence number (client_isn) and places it in the sequence number field of
the initial TCP SYN segment.
Step 2.
• Once the TCP SYN segment arrives at the server, the server extracts it, allocates the TCP buffers and
variables to the connection, and sends a connection-granted segment to the client TCP.
• This connection-granted segment contains three important pieces of information in the segment header.
• First, the SYN bit is set to 1.
• Second, the acknowledgment field of the TCP segment header is set to client_isn+1.
• Finally, the server chooses its own initial sequence number (server_isn) and puts it in the sequence number field.
• The connection-granted segment is referred to as a SYNACK segment.
Step 3.
• Upon receiving the SYNACK segment, the client also allocates buffers and variables to the connection.
• The client then sends the server yet another segment.
• This last segment acknowledges the server’s connection-granted segment.
• The SYN bit is set to zero.
Connection Termination:
• The client application process issues a close command.
• This causes the client TCP to send a special TCP segment to the server process.
• This special segment has a flag bit in the segment's header, the FIN bit, set to 1.
• When the server receives this segment, it sends the client an acknowledgment segment in return.
• The server then sends its own shutdown segment, which has the FIN bit set to 1.
• Finally, the client acknowledges the server's shutdown segment.
Estimation of Round Trip Time
Reliable Data Transfer: operation with no loss; operation with packet loss;
operation with ACK loss; premature timeout
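The round-trip-time estimation named above is the standard exponential weighted moving average used by TCP; a sketch with the usual recommended constants (α = 0.125, β = 0.25), with RTTs in milliseconds:

```python
def update_rtt(estimated_rtt, dev_rtt, sample_rtt, alpha=0.125, beta=0.25):
    """One update of TCP's RTT estimate, deviation, and timeout:
    EstimatedRTT    = (1 - alpha) * EstimatedRTT + alpha * SampleRTT
    DevRTT          = (1 - beta)  * DevRTT + beta * |SampleRTT - EstimatedRTT|
    TimeoutInterval = EstimatedRTT + 4 * DevRTT
    """
    estimated_rtt = (1 - alpha) * estimated_rtt + alpha * sample_rtt
    dev_rtt = (1 - beta) * dev_rtt + beta * abs(sample_rtt - estimated_rtt)
    timeout = estimated_rtt + 4 * dev_rtt
    return estimated_rtt, dev_rtt, timeout

est, dev = 100.0, 5.0   # current estimate and deviation, in ms
est, dev, timeout = update_rtt(est, dev, sample_rtt=120.0)
print(round(est, 1), round(timeout, 1))  # 102.5 135.0
```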
TCP flow control mechanism
• TCP provides a flow-control service to its applications to eliminate the
possibility of the sender overflowing the receiver’s buffer.
• TCP provides flow control by having the sender maintain a variable called the
receive window (rwnd).
• The receive window gives the sender an idea of how much free buffer
space is available at the receiver.
• Because TCP is full-duplex, the sender at each side of the connection maintains
a distinct receive window.
• Suppose that Host A is sending a large file to Host B over a TCP connection.
• Host B allocates a receive buffer to this connection; denote its size by RcvBuffer.
• From time to time, the application process in Host B reads from the buffer.
• Because the spare room changes with time, rwnd is dynamic.
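A minimal sketch of how the receiver computes rwnd, using the usual textbook bookkeeping variables LastByteRcvd (last byte that has arrived) and LastByteRead (last byte the application has read from the buffer):

```python
def receive_window(rcv_buffer, last_byte_rcvd, last_byte_read):
    """Spare room in the receive buffer:
    rwnd = RcvBuffer - (LastByteRcvd - LastByteRead)."""
    rwnd = rcv_buffer - (last_byte_rcvd - last_byte_read)
    assert rwnd >= 0, "receiver must never let the buffer overflow"
    return rwnd

# 4096-byte buffer; 1000 bytes received, 200 already read by the application:
# 800 bytes sit unread, leaving 3296 bytes of spare room.
print(receive_window(4096, 1000, 200))  # 3296
```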
Congestion Control
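TCP's classic approach to congestion control is additive-increase, multiplicative-decrease (AIMD) of the congestion window; a toy sketch, with the window measured in MSS units:

```python
def aimd_step(cwnd, loss, mss=1):
    """One AIMD update of the congestion window (in MSS units):
    grow by one MSS per RTT without loss, halve on loss."""
    if loss:
        return max(cwnd / 2, mss)   # multiplicative decrease
    return cwnd + mss               # additive increase

cwnd = 8
trace = []
for loss in [False, False, True, False]:
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)
print(trace)  # [9, 10, 5.0, 6.0]
```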
Functionalities of network layer
Two functions
• Forwarding. When a packet arrives
at a router’s input link, the router
must move the packet to the
appropriate output link.
• Routing. The network layer must
determine the route or path taken
by packets as they flow from a
sender to a receiver. The algorithms
that calculate these paths are
referred to as routing algorithms.
Router Architecture
• Input port. It terminates an incoming physical link at a router and lookup function is also
performed at the input port
• Switching fabric. The switching fabric connects the router’s input ports to its output ports.
• Output ports. An output port stores packets received from the switching fabric and transmits
these packets on the outgoing link by performing the necessary link-layer and physical-layer
functions.
• Routing processor. The routing processor executes the routing protocols, maintains routing
tables and attached link state information, and computes the forwarding table for the router.
It also performs the network management functions.
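The input-port lookup mentioned above is typically a longest-prefix match against the forwarding table computed by the routing processor. A sketch; the prefixes and link numbers in FORWARDING_TABLE are hypothetical:

```python
import ipaddress

# Hypothetical forwarding table: destination prefix -> output link.
FORWARDING_TABLE = {
    "200.23.16.0/21": 0,
    "200.23.24.0/24": 1,
    "200.23.24.0/21": 2,
}

def lookup(dst):
    """Longest-prefix match: among all prefixes matching the destination,
    choose the longest; fall back to link 3 (default route)."""
    addr = ipaddress.ip_address(dst)
    best_len, best_link = -1, 3
    for prefix, link in FORWARDING_TABLE.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_len, best_link = net.prefixlen, link
    return best_link

print(lookup("200.23.24.17"))  # 1  (the /24 wins over the /21)
```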
IPv4 Datagram Format
• Version number. These 4 bits specify the IP protocol version of the
datagram.
• Header length. These 4 bits are needed to determine where in the IP
datagram the data actually begins. Most IP datagrams do not contain
options, so the typical IP datagram has a 20-byte header.
• Type of service. The type of service (TOS) bits were included in the
IPv4 header to allow different types of IP datagrams (for example,
datagrams particularly requiring low delay, high throughput, or
reliability) to be distinguished from each other.
• Datagram length. This is the total length of the IP datagram (header
plus data), measured in bytes.
• Identifier, flags, fragmentation offset. These three fields have to do with so-called
IP fragmentation.
• Time-to-live. The time-to-live (TTL) field is included to ensure that datagrams do
not circulate forever in the network.
• Protocol. This field is used only when an IP datagram reaches its final destination;
its value indicates the transport-layer protocol to which the data portion should be
passed (for example, 6 for TCP and 17 for UDP).
• Header checksum. The header checksum aids a router in detecting bit errors in a
received IP datagram.
• Source and destination IP addresses. When a source creates a datagram, it inserts
its IP address into the source IP address field and inserts the address of the
ultimate destination into the destination IP address field.
• Options. The options fields allow an IP header to be extended.
• Data (payload). The data field of the IP datagram contains the transport-layer
segment (TCP or UDP) to be delivered to the destination.
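The fixed fields above can be pulled out of the 20-byte header with struct; a sketch in which the sample header bytes are fabricated for illustration:

```python
import struct

def parse_ipv4_header(raw):
    """Decode the fixed 20-byte IPv4 header into its fields."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,               # upper 4 bits
        "header_len": (ver_ihl & 0x0F) * 4,    # IHL counts 32-bit words
        "total_len": total_len,
        "ttl": ttl,
        "protocol": proto,                     # 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A fabricated header: version 4, IHL 5 (20 bytes, no options), total
# length 40, TTL 64, protocol 6 (TCP), 10.0.0.1 -> 10.0.0.2.
hdr = bytes([0x45, 0, 0, 40, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 10, 0, 0, 2])
fields = parse_ipv4_header(hdr)
print(fields["version"], fields["header_len"], fields["src"])  # 4 20 10.0.0.1
```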
23-4 SCTP
Stream Control Transmission Protocol (SCTP) is a new reliable, message-oriented
transport-layer protocol. SCTP, however, is mostly designed for Internet applications
that have recently been introduced. These new applications need a more sophisticated
service than TCP can provide.
Topics discussed in this section:
SCTP Services and Features
Packet Format
An SCTP Association
Flow Control and Error Control
Note: SCTP is a message-oriented, reliable protocol that combines the best features of UDP and TCP.
Table 23.4 Some SCTP applications
Figure 23.27 Multiple-stream concept
Note: An association in SCTP can involve multiple streams.
Figure 23.28 Multihoming concept
Note: An SCTP association allows multiple IP addresses for each end.
■ Full-Duplex Communication
■ Like TCP, SCTP offers full-duplex service, in which data can flow in both
directions at the same time. Each SCTP endpoint has a sending and a receiving
buffer, and packets are sent in both directions.
■ Connection-Oriented Service
■ Like TCP, SCTP is a connection-oriented protocol. However, in SCTP, a
connection is called an association. When a process at site A wants to send
and receive data from another process at site B, the following occurs:
■ 1. The two SCTPs establish an association between each other.
■ 2. Data are exchanged in both directions.
■ 3. The association is terminated.
■ Reliable Service
■ SCTP, like TCP, is a reliable transport protocol. It uses an acknowledgment
mechanism to check the safe and sound arrival of data.
SCTP Features
■ Transmission Sequence Number
■ SCTP uses a transmission sequence number (TSN) to number the data chunks.
■ Stream Identifier
■ Each stream in SCTP is identified by a stream identifier (SI).
■ Stream Sequence Number
■ When a data chunk arrives at the destination SCTP, its stream sequence
number (SSN) is used to deliver it to the appropriate stream in the proper order.
■ Packets
■ The design of SCTP is totally different from TCP's: data are carried as
data chunks, and control information is carried as control chunks. Several
control chunks and data chunks can be packed together in a packet.
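The TSN/SI/SSN machinery above can be sketched as a receiver that delivers each data chunk to its stream in SSN order; this is a toy model of the idea, not SCTP's wire format, and the class name is our own:

```python
from collections import defaultdict

class StreamReceiver:
    """Toy SCTP-style receiver: delivers each data chunk to its stream (SI)
    in stream-sequence-number (SSN) order, buffering out-of-order chunks."""
    def __init__(self):
        self.next_ssn = defaultdict(int)    # per-stream expected SSN
        self.pending = defaultdict(dict)    # si -> {ssn: data}
        self.delivered = defaultdict(list)  # si -> data delivered in order

    def receive(self, si, ssn, data):
        self.pending[si][ssn] = data
        # Deliver as many consecutive chunks as possible on this stream.
        while self.next_ssn[si] in self.pending[si]:
            self.delivered[si].append(self.pending[si].pop(self.next_ssn[si]))
            self.next_ssn[si] += 1

rx = StreamReceiver()
rx.receive(si=0, ssn=1, data="b")   # out of order: buffered, not delivered
rx.receive(si=1, ssn=0, data="x")   # another stream is independent
rx.receive(si=0, ssn=0, data="a")   # fills the gap; "a" then "b" delivered
print(rx.delivered[0], rx.delivered[1])  # ['a', 'b'] ['x']
```

Note how a gap on one stream (SI 0) does not block delivery on another (SI 1); this is the head-of-line-blocking advantage of multiple streams.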
Note: In SCTP, a data chunk is numbered using a TSN.
Note: To distinguish between different streams, SCTP uses an SI.
Note: To distinguish between different data chunks belonging to the same stream, SCTP uses SSNs.
Note: TCP has segments; SCTP has packets.
Figure 23.29 Comparison between a TCP segment and an SCTP packet

The verification tag in SCTP is an association identifier, which does not exist in TCP.
In TCP, the combination of IP addresses and port numbers defines a connection; an SCTP
association may use multihoming with different IP addresses, so a unique verification
tag is needed to define each association.
Note: In SCTP, control information and data information are carried in separate chunks.
Figure 23.30 Packet, data chunks, and streams
Note: Data chunks are identified by three items: TSN, SI, and SSN. The TSN is a
cumulative number identifying the association; the SI defines the stream; the SSN
defines the chunk in a stream.
Note: In SCTP, acknowledgment numbers are used to acknowledge only data chunks;
control chunks are acknowledged by other control chunks if necessary.
Figure 23.31 SCTP packet format
Note: In an SCTP packet, control chunks come before data chunks.
Figure 23.32 General header
Table 23.5 Chunks
Note: A connection in SCTP is called an association.
Note: No other chunk is allowed in a packet carrying an INIT or INIT ACK chunk.
A COOKIE ECHO or a COOKIE ACK chunk can carry data chunks.
Figure 23.33 Four-way handshaking
Note: In SCTP, only DATA chunks consume TSNs; DATA chunks are the only chunks
that are acknowledged.
Figure 23.34 Simple data transfer
Note: The acknowledgment in SCTP defines the cumulative TSN, the TSN of the last
data chunk received in order.
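The cumulative-TSN idea in the note above can be sketched as follows; a toy model of how the acknowledged TSN is derived from the set of chunks that have arrived:

```python
def cumulative_tsn(received, initial_tsn=0):
    """TSN of the last data chunk received in order: the highest TSN such
    that every TSN from initial_tsn up to it has arrived."""
    tsn = initial_tsn - 1          # nothing acknowledged yet
    received = set(received)
    while tsn + 1 in received:
        tsn += 1
    return tsn

# Chunks 0, 1, 2, and 4 arrived; 3 is missing, so the cumulative TSN is 2.
print(cumulative_tsn({0, 1, 2, 4}))  # 2
```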
Figure 23.35 Association termination
Figure 23.36 Flow control, receiver site
Figure 23.37 Flow control, sender site
Figure 23.38 Flow control scenario
Figure 23.39 Error control, receiver site
Figure 23.40 Error control, sender site