ioanu@eed.usv.ro
Chapter I – Recap
vers. 1.0
How is the grade computed?
NF = 50%·NE + 40%·NL + 10%·PC
where
NF – final grade,
NE – exam grade,
NL – lab grade,
PC – course attendance.
Bibliography
• James F. Kurose and Keith W. Ross, Computer Networking: A Top-Down Approach Featuring the Internet, Addison Wesley, ISBN-13: 978-0-321-49770-3 (the reference book)
• Vasile Găitan, Reţele de calculatoare. Note de curs, 2006
• Andrew S. Tanenbaum, Reţele de calculatoare, 4th edition, Editura BYBLOS, 2003, ISBN 973-0-03000-6
• W. Richard Stevens, TCP/IP Illustrated, Volume 1: The Protocols, Addison Wesley
• Adolfo Rodriguez, John Gatrell, John Karas, Roland Peschke, TCP/IP Tutorial and Technical Overview, IBM, August 2001
• Douglas E. Comer, Internetworking with TCP/IP, Vol. I: Principles, Protocols, and Architecture, 4th edition, Prentice Hall, ISBN 0-13-018380-6, 2000
• www.cisco.com
What is the Internet: a service view
• communication infrastructure enables distributed applications: Web, VoIP, email, games, e-commerce, file sharing
• communication services provided to apps: reliable data delivery from source to destination; "best effort" (unreliable) data delivery
Unusual "hosts": Web-enabled toaster + weather forecaster; IP picture frame (http://www.ceiva.com/)
What's a protocol?
human protocols: "what's the time?"
network protocols: machines rather than humans
Human protocol exchange:       Network protocol exchange:
  Hi →                           TCP connection request →
  ← Hi                           ← TCP connection response
  Got the time? →                GET http://www.awl.com/kurose-ross →
  ← 2:00                         ← <file>
(time runs downward in both exchanges)
Access networks
• residential access networks
• institutional access networks (school, company)
• mobile access networks
Keep in mind:
• bandwidth (bits per second) of access network?
• shared or dedicated?
Residential access: point to point access
Dialup via modem:
• up to 56 Kbps direct access to router (often less)
• can't surf and phone at same time: can't be "always on"
Cable Network Architecture: Overview
Diagram: http://www.cabledatacomnews.com/cmic/diagram.html
(figure, simplified: the cable headend, with its server(s), feeds a cable distribution network that reaches the homes; FDM divides the cable into channels 1–9)
Company access: local area networks
(figure: an Ethernet LAN with a router/firewall/NAT, a cable modem connecting to the cable headend, and wireless access points carrying traffic to/from laptops)
Physical Media
• Bit: propagates between transmitter/rcvr pairs
• physical link: what lies between transmitter & receiver
• guided media: signals propagate in solid media: copper, fiber, coax
• unguided media: signals propagate freely, e.g., radio
Twisted Pair (TP)
• two insulated copper wires
• Category 3: traditional phone wires, 10 Mbps Ethernet
• Category 5: 100 Mbps Ethernet
Physical Media: coax, fiber
Coaxial cable:
• two concentric copper conductors
• bidirectional
• baseband: single channel on cable; legacy Ethernet
• broadband: multiple channels on cable; HFC
Fiber optic cable:
• glass fiber carrying light pulses, each pulse a bit
• high-speed operation: high-speed point-to-point transmission (e.g., 10's-100's Gbps)
• low error rate: repeaters spaced far apart; immune to electromagnetic noise
Physical media: radio
• signal carried in electromagnetic spectrum
• no physical "wire"
• bidirectional
• propagation environment effects: reflection, obstruction by objects, interference
Radio link types:
• terrestrial microwave: e.g., up to 45 Mbps channels
• LAN (e.g., WiFi): 11 Mbps, 54 Mbps
• wide-area (e.g., cellular): 3G cellular ~ 1 Mbps
• satellite: Kbps to 45 Mbps channel (or multiple smaller channels); 270 msec end-end delay; geosynchronous versus low altitude
Chapter 1: roadmap
1.1 What is the Internet?
1.2 Network edge
end systems, access networks, links
1.3 Network core
circuit switching, packet switching, network structure
1.4 Delay, loss and throughput in packet-switched
networks
1.5 Protocol layers, service models
1.6 Networks under attack: security
1.7 History
The Network Core
mesh of interconnected
routers
the fundamental question:
how is data transferred
through net?
circuit switching:
dedicated circuit per call:
telephone net
packet-switching: data
sent thru net in discrete
“chunks”
Network Core: Circuit Switching
End-end resources
reserved for “call”
link bandwidth, switch
capacity
dedicated resources: no
sharing
circuit-like (guaranteed)
performance
call setup required
Network Core: Circuit Switching
Example: 4 users sharing a link
• FDM: the frequency band is divided into 4 sub-bands, one per user, each held for all time
• TDM: time is divided into slots; each user gets every 4th slot across the whole frequency band
Numerical example
How long does it take to send a file of 640,000
bits from host A to host B over a circuit-switched
network?
All links are 1.536 Mbps
Each link uses TDM with 24 slots/sec
500 msec to establish end-to-end circuit
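A worked answer to the example above, as a small sketch (the link rate, slot count, and setup delay are the values given on the slide):

```python
# Circuit-switched file transfer: each circuit gets one TDM slot,
# i.e. 1/24 of the 1.536 Mbps link.

link_rate_bps = 1.536e6      # all links are 1.536 Mbps
slots_per_frame = 24         # TDM with 24 slots
setup_s = 0.5                # 500 msec to establish the circuit
file_bits = 640_000

circuit_rate_bps = link_rate_bps / slots_per_frame   # 64 kbps per circuit
transfer_s = file_bits / circuit_rate_bps            # 10 s
total_s = setup_s + transfer_s                       # 10.5 s total
print(total_s)
```

So the answer is 10.5 seconds: 10 s of transmission plus 0.5 s of circuit setup.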
(figure, packet switching: hosts send packets of length L through routers R over a shared 1.5 Mb/s link; a queue of packets waits for the output link)
Tier-1 ISPs
Tier-1 providers interconnect (peer) privately.
Tier-1 ISP: e.g., Sprint
• POP: point-of-presence
• links to/from the backbone, peering links, links to/from customers
Internet structure: network of networks
"Tier-2" ISPs: smaller (often regional) ISPs
• connect to one or more tier-1 ISPs, possibly other tier-2 ISPs
• a tier-2 ISP pays a tier-1 ISP for connectivity to the rest of the Internet: the tier-2 ISP is a customer of the tier-1 provider
• tier-2 ISPs also peer privately with each other
Local and "Tier-3" ISPs
• local and tier-3 ISPs are customers of higher-tier ISPs connecting them to the rest of the Internet
Internet structure: network of networks
a packet passes through many networks!
(figure: a path from a local ISP up through tier-2 and tier-1 ISPs and back down)
(figure: packets queueing in router buffers (delay); free (available) buffers: arriving packets dropped (loss) if no free buffers)
Four sources of packet delay
1. nodal processing: check bit errors; determine output link
2. queueing: time waiting at output link for transmission; depends on congestion level of router
3. transmission delay: R = link bandwidth (bps); L = packet length (bits); time to send bits into link = L/R
4. propagation delay: d = length of physical link; s = propagation speed in medium (~2·10^8 m/sec); propagation delay = d/s
(figure: a packet from A to B incurs nodal processing, queueing, transmission, and propagation delay at each hop)
Caravan analogy
A ten-car caravan travels between two toll booths 100 km apart.
• cars "propagate" at 100 km/hr
• toll booth takes 12 sec to service a car (transmission time)
• car ~ bit; caravan ~ packet
Q: How long until the caravan is lined up before the 2nd toll booth?
• Time to "push" entire caravan through toll booth onto highway = 12·10 = 120 sec
• Time for last car to propagate from 1st to 2nd toll booth: 100 km/(100 km/hr) = 1 hr
A: 62 minutes
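The caravan arithmetic above can be checked in a few lines (all numbers are the slide's):

```python
# Caravan analogy: "transmission" time at the booth plus propagation
# time on the highway.

cars = 10
service_s = 12                 # toll booth: 12 sec per car
speed_kmh = 100                # cars "propagate" at 100 km/hr
distance_km = 100              # booths are 100 km apart

push_through_s = cars * service_s                 # 120 s = 2 min
propagate_min = distance_km / speed_kmh * 60      # 60 min
total_min = push_through_s / 60 + propagate_min   # 62 minutes
print(total_min)
```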
Caravan analogy (more)
• Cars now "propagate" at 1000 km/hr
• Toll booth now takes 1 min to service a car
Q: Will cars arrive at the 2nd booth before all cars are serviced at the 1st booth?
A: Yes! After 7 min, the 1st car is at the 2nd booth while 3 cars are still at the 1st booth.
The 1st bit of a packet can arrive at the 2nd router before the packet is fully transmitted at the 1st router!
See Ethernet applet at AWL Web site.
Nodal delay
d_nodal = d_proc + d_queue + d_trans + d_prop
traceroute measures delay along a path by sending 3 probes to each router on the path; each router returns the probes to the sender, which measures the round-trip interval.
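The nodal delay sum can be made concrete with illustrative (assumed) numbers: a 1500-byte packet on a 10 Mbps, 1000 km link, with a nominal processing delay and an empty queue:

```python
# d_nodal = d_proc + d_queue + d_trans + d_prop
# All concrete values below are assumptions for illustration.

L_bits = 1500 * 8            # packet length (bits)
R_bps = 10e6                 # link bandwidth
d_m = 1_000_000              # physical link length (1000 km)
s_mps = 2e8                  # propagation speed in the medium

d_proc = 1e-6                # assumed processing delay (1 microsecond)
d_queue = 0.0                # assume an empty queue
d_trans = L_bits / R_bps     # 1.2 ms
d_prop = d_m / s_mps         # 5 ms
d_nodal = d_proc + d_queue + d_trans + d_prop
print(d_nodal)
```

With these numbers, propagation dominates transmission; on a short LAN link the balance reverses.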
“Real” Internet delays and routes
traceroute: gaia.cs.umass.edu to www.eurecom.fr
Three delay measurements from
gaia.cs.umass.edu to cs-gw.cs.umass.edu
1 cs-gw (128.119.240.254) 1 ms 1 ms 2 ms
2 border1-rt-fa5-1-0.gw.umass.edu (128.119.3.145) 1 ms 1 ms 2 ms
3 cht-vbns.gw.umass.edu (128.119.3.130) 6 ms 5 ms 5 ms
4 jn1-at1-0-0-19.wor.vbns.net (204.147.132.129) 16 ms 11 ms 13 ms
5 jn1-so7-0-0-0.wae.vbns.net (204.147.136.136) 21 ms 18 ms 18 ms
6 abilene-vbns.abilene.ucaid.edu (198.32.11.9) 22 ms 18 ms 22 ms
7 nycm-wash.abilene.ucaid.edu (198.32.8.46) 22 ms 22 ms 22 ms
8 62.40.103.253 (62.40.103.253) 104 ms 109 ms 106 ms   ← trans-oceanic link
9 de2-1.de1.de.geant.net (62.40.96.129) 109 ms 102 ms 104 ms
10 de.fr1.fr.geant.net (62.40.96.50) 113 ms 121 ms 114 ms
11 renater-gw.fr1.fr.geant.net (62.40.103.54) 112 ms 114 ms 112 ms
12 nio-n2.cssi.renater.fr (193.51.206.13) 111 ms 114 ms 116 ms
13 nice.cssi.renater.fr (195.220.98.102) 123 ms 125 ms 124 ms
14 r3t2-nice.cssi.renater.fr (195.220.98.110) 126 ms 126 ms 124 ms
15 eurecom-valbonne.r3t2.ft.net (193.48.50.54) 135 ms 128 ms 133 ms
16 194.214.211.25 (194.214.211.25) 126 ms 128 ms 126 ms
17 * * *
18 * * *   (* means no response: probe lost, or router not replying)
19 fantasia.eurecom.fr (193.55.113.142) 132 ms 128 ms 136 ms
Packet loss
queue (aka buffer) preceding link in buffer has
finite capacity
packet arriving to full queue dropped (aka lost)
lost packet may be retransmitted by previous
node, by source end system, or not at all
(figure: while one packet is being transmitted on the link from A to B, others wait in the buffer (waiting area); a packet arriving to a full buffer is lost)
Throughput
• server sends bits (fluid) into pipe: a file of F bits to send to the client
• server-side link capacity Rs bits/sec (a pipe that can carry fluid at rate Rs bits/sec)
• client-side link capacity Rc bits/sec (a pipe that can carry fluid at rate Rc bits/sec)
Throughput (more)
• Rs < Rc: average end-end throughput is Rs bits/sec
• Rs > Rc: average end-end throughput is Rc bits/sec
bottleneck link: link on end-end path that constrains end-end throughput
Throughput: Internet scenario
• per-connection end-end throughput: min(Rc, Rs, R/10)
• in practice: Rc or Rs is often the bottleneck
(figure: 10 connections, each with access links Rs and Rc, fairly share a backbone link of rate R)
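The min() rule above is easy to state in code; the example rates at the end are assumptions, not slide values:

```python
# End-end throughput is governed by the bottleneck link. With n
# connections fairly sharing a backbone link of rate R, each gets R/n.

def end_end_throughput(Rs, Rc, R, n_connections=10):
    """Per-connection throughput: the slowest of the server access link,
    the client access link, and a fair share of the shared backbone."""
    return min(Rs, Rc, R / n_connections)

# Assumed example: 2 Mbps server link, 1 Mbps client link, 5 Mbps
# shared backbone -> the 0.5 Mbps backbone share is the bottleneck.
print(end_end_throughput(Rs=2e6, Rc=1e6, R=5e6))
```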
Protocol "layers"
Networks are complex, with many "pieces": hosts, routers, links of various media, applications, protocols, hardware, software.
Question: Is there any hope of organizing the structure of the network? Or at least our discussion of networks?
Organization of air travel: a series of steps
Layering of airline functionality (figure): each function is carried out at every site along the path, e.g., airplane routing at each stop.
In object-oriented design, a layer is a group of classes that have the same set of (link-time) module dependences to other modules, i.e., a collection of reusable components that are available for reuse in similar circumstances. (http://en.wikipedia.org/wiki/Layer)
Why layering?
(figure: an Application layer on top, intermediate layers in between, and a Physical layer at the bottom, running over optical fiber, coax, or air)
Disadvantages of layering
Are there any?
YES!
Inefficiency of the software
Transport layer – might be very inefficient if not
aware of the lower layers
What is a protocol?
A protocol is an agreement between the communicating parties on
how the communication is to proceed
“A protocol defines format & the order of messages exchanged as
well as the actions taken on the transmission/reception of a
message.” (Kurose, Ross)
Analogy: politician meeting, PhD defense ceremony
A protocol is a set of rules that specify
the format of the messages exchanged
a number of different protocol states and what messages are
allowed to be sent in each state;
these states determine, among others, the order of the messages,
timing constraints and other non-functional properties, if any
Example: HTTP, FTP, TCP…
Example: Protocol stacks
Peer protocol
Layered network architecture and
services
Layer N uses the services of layer N-1 (service user).
Layer N-1 provides services to layer N (service provider).
From the example: service = post delivery
(figure, Post example: the sender issues a delivery Request, the network performs the delivery, and a delivery Indicate is given to the receiver)
Unreliable services
No guaranteed delivery (no acknowledgments)
An example: a basic service of datagram networks
Reliable services
Guaranteed delivery
Implementation of this service through combination of
timers, acknowledgment and retransmission
An example: FTP, E-mail
(figure: two N+1 layer entities (service users) communicate via the N+1 layer protocol; each accesses its N layer entity (service provider) through a SAP)
Analogy
Question
Define service, service access point and quality of
service at the gasoline station
Answer
Service: fuel distribution, car-washing
Service Access Point: Fuel machine, washing room
Quality of service: varies
Data exchange
(figure: the service users' N layer entities exchange data via the N layer peer-to-peer protocol)
Design issues:
• Identify senders/receivers? → Addressing
• Unreliable physical communication medium? → Error detection; error control; message reordering
• Sender can swamp the receiver? → Flow control
• Multiplexing/Demultiplexing
Internet protocol stack
• application: supporting network applications (FTP, SMTP, HTTP)
• transport: process-process data transfer (TCP, UDP)
• network: routing of datagrams from source to destination (IP, routing protocols)
• link: data transfer between neighboring network elements (PPP, Ethernet)
• physical: bits "on the wire"
Encapsulation
(figure: at the source, the application message M is wrapped on the way down the stack: transport adds Ht (segment), network adds Hn (datagram), link adds Hl (frame); a switch processes frames at the link and physical layers; a router processes datagrams up to the network layer; at the destination the headers are stripped in reverse order)
Protocol layering and data
Each layer takes data from above, adds header information, and passes the new unit down; the destination strips headers in reverse:
• message: M (application)
• segment: Ht | M (transport)
• datagram: Hn | Ht | M (network)
• frame: Hl | Hn | Ht | M (link)
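The header-wrapping above can be sketched as a toy program; the header strings are illustrative placeholders, not real protocol formats:

```python
# Toy encapsulation: each layer prepends its header on the way down
# and strips it on the way up.

def encapsulate(message: str) -> str:
    segment = "Ht|" + message      # transport header -> segment
    datagram = "Hn|" + segment     # network header  -> datagram
    frame = "Hl|" + datagram       # link header     -> frame
    return frame

def decapsulate(frame: str) -> str:
    for _ in range(3):             # strip Hl, Hn, Ht in turn
        _, _, frame = frame.partition("|")
    return frame

wire = encapsulate("M")
print(wire)                # Hl|Hn|Ht|M
print(decapsulate(wire))   # M
```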
ISO/OSI reference model
• End nodes (hosts) implement all 7 layers: Application, Presentation, Session, Transport, Network, Data Link, Physical.
• A communication node (router) in the subnet implements only the lower 3: Network, Data Link, Physical.
OSI versus TCP/IP
The TCP/IP stack has five layers: Application, Transport, Network, Data link, Physical.
Physical layer
Provides the physical connection: moves raw bits between directly connected nodes.
Data Link layer
Service: attach frame separators; send data between peers; arbitrate access to the common media; flow control.
Network layer (example)
Service: packet delivery to the destination; fragmentation; reassembly.
Transport layer
• Process-to-process delivery of the entire message, from the original source to the destination.
• Addressing: port addresses (transport layer), network addresses (network layer), physical addresses (link layer).
Packet sniffing:
• broadcast media (shared Ethernet, wireless)
• promiscuous network interface reads/records all packets (e.g., including passwords!) passing by
(figure: host C sniffs the packets exchanged between A and B)
The bad guys can record and playback
• record-and-playback: sniff sensitive info (e.g., password), and use later
• password holder is that user from the system's point of view
(figure: C later replays the recorded packets)
Network Security
more throughout this course
chapter 8: focus on security
cryptographic techniques: obvious uses and
not so obvious uses
Internet History
1961-1972: Early packet-switching principles
2007:
~500 million hosts
(figure, LAN addressing: each adapter on the wired or wireless LAN has a MAC address, e.g., 71-65-F7-2B-08-53, 58-23-D7-FA-20-B0, 0C-C4-11-6F-E3-98; the datagram is passed between datagram controllers inside link-layer frames)
LAN Address (more)
MAC address allocation administered by IEEE
manufacturer buys portion of MAC address space (to
assure uniqueness)
analogy:
(a) MAC address: like Social Security Number
(b) IP address: like postal address
MAC flat address ➜ portability
can move LAN card from one LAN to another
IP hierarchical address NOT portable
address depends on IP subnet to which node is attached
ARP: Address Resolution Protocol
A E6-E9-00-17-BB-4B
222.222.222.221
1A-23-F9-CD-06-9B
111.111.111.111
222.222.222.220 222.222.222.222
111.111.111.110
B
111.111.111.112
R 49-BD-D2-C7-56-2A
CC-49-DE-D0-AB-7D
Link Layer
5.1 Introduction and services
5.2 Error detection and correction
5.3 Multiple access protocols
5.4 Link-Layer Addressing
5.5 Ethernet
5.6 Link-layer switches
5.7 PPP
5.8 Link Virtualization: ATM and MPLS
Ethernet
“dominant” wired LAN technology:
cheap $20 for NIC
Metcalfe’s Ethernet
sketch
Star topology
• bus topology popular through mid 90s: all nodes in same collision domain (can collide with each other)
• today: star topology prevails: active switch in center; each "spoke" runs a (separate) Ethernet protocol (nodes do not collide with each other)
Preamble:
7 bytes with pattern 10101010 followed by one byte
with pattern 10101011
used to synchronize receiver, sender clock rates
Ethernet Frame Structure (more)
Addresses: 6 bytes
if adapter receives frame with matching destination
address, or with broadcast address (eg ARP packet), it
passes data in frame to network layer protocol
otherwise, adapter discards frame
Type: indicates higher layer protocol (mostly IP but
others possible, e.g., Novell IPX, AppleTalk)
CRC: checked at receiver, if error is detected, frame
is dropped
Ethernet: Unreliable, connectionless
connectionless: No handshaking between sending and
receiving NICs
unreliable: receiving NIC doesn’t send acks or nacks to
sending NIC
stream of datagrams passed to network layer can have
gaps (missing datagrams)
gaps will be filled if app is using TCP
otherwise, app will see gaps
Ethernet's MAC protocol: unslotted CSMA/CD
Ethernet CSMA/CD algorithm
1. NIC receives datagram from network layer, creates frame
2. If NIC senses channel idle, starts frame transmission. If NIC senses channel busy, waits until channel idle, then transmits
3. If NIC transmits entire frame without detecting another transmission, NIC is done with frame!
4. If NIC detects another transmission while transmitting, aborts and sends jam signal
5. After aborting, NIC enters exponential backoff: after the mth collision, NIC chooses K at random from {0, 1, 2, …, 2^m - 1}. NIC waits K·512 bit times, returns to Step 2
Ethernet's CSMA/CD (more)
Jam Signal: make sure all other transmitters are aware of collision; 48 bits
Bit time: 0.1 microsec for 10 Mbps Ethernet; for K=1023, wait time is about 50 msec
Exponential Backoff:
• Goal: adapt retransmission attempts to estimated current load (heavy load: random wait will be longer)
• first collision: choose K from {0,1}; delay is K·512 bit transmission times
• after second collision: choose K from {0,1,2,3} …
• after ten collisions, choose K from {0,1,2,3,4,…,1023}
See/interact with Java applet on AWL Web site: highly recommended!
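The backoff rule above can be sketched directly (the cap at 2^10 - 1 = 1023 and the 0.1 µs bit time are the slide's figures):

```python
import random

# Binary exponential backoff sketch: after the m-th collision choose
# K from {0, ..., 2^min(m,10) - 1} and wait K * 512 bit times.

BIT_TIME_S = 0.1e-6          # 10 Mbps Ethernet: 0.1 microsec per bit

def backoff_wait_s(m: int) -> float:
    """Random wait after the m-th collision (m >= 1)."""
    k = random.randrange(2 ** min(m, 10))   # K capped at 1023
    return k * 512 * BIT_TIME_S

# Worst case after 10+ collisions: K = 1023 -> roughly 52 ms,
# matching the "about 50 msec" figure on the slide.
print(1023 * 512 * BIT_TIME_S)
```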
CSMA/CD efficiency
t_prop = max prop delay between 2 nodes in LAN
t_trans = time to transmit max-size frame

efficiency = 1 / (1 + 5·t_prop/t_trans)

• efficiency goes to 1 as t_prop goes to 0, and as t_trans goes to infinity
• better performance than ALOHA: and simple, cheap, decentralized!
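The efficiency formula is a one-liner to evaluate; the sample delay values below are assumptions for illustration, not slide numbers:

```python
# CSMA/CD efficiency = 1 / (1 + 5 * t_prop / t_trans).
# As t_prop -> 0 or t_trans -> infinity, efficiency -> 1.

def csma_cd_efficiency(t_prop: float, t_trans: float) -> float:
    return 1.0 / (1.0 + 5.0 * t_prop / t_trans)

# Assumed numbers: 25.6 us max propagation delay and a 1.2 ms
# max-size frame time give roughly 90% efficiency.
print(csma_cd_efficiency(25.6e-6, 1.2e-3))
print(csma_cd_efficiency(0.0, 1.2e-3))   # t_prop = 0 -> exactly 1.0
```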
802.3 Ethernet Standards: Link & Physical Layers
many different Ethernet standards:
• common MAC protocol and frame format
• different speeds: 2 Mbps, 10 Mbps, 100 Mbps, 1 Gbps, 10 Gbps
• different physical layer media: fiber, cable
(figure: the common MAC protocol and frame format sit at the link layer; physical-layer variants include 100BASE-TX, 100BASE-T2, 100BASE-FX, 100BASE-T4, 100BASE-SX, 100BASE-BX)
(figure, network layer: the sending host passes a transport segment down to its network layer; the datagram travels hop by hop through routers, which implement the network, data link, and physical layers; the receiving host passes it back up to the transport layer and application)
Two Key Network-Layer Functions
• forwarding: move packets from a router's input to the appropriate router output
• routing: determine the route taken by packets from source to dest. (routing algorithms)
analogy:
• routing: process of planning a trip from source to dest
• forwarding: process of getting through a single interchange
Interplay between routing and forwarding
(figure: the routing algorithm fills in the router's forwarding table; the value in an arriving packet's header, e.g., 0111, is looked up in the table to pick the output link)
Connection setup
3rd important function in some network architectures:
ATM, frame relay, X.25
before datagrams flow, two end hosts and intervening
routers establish virtual connection
routers get involved
network vs transport layer connection service:
network: between two hosts (may also involve
intervening routers in case of VCs)
transport: between two processes
Network service model
Q: What service model for “channel” transporting
datagrams from sender to receiver?
Example services for individual datagrams:
• guaranteed delivery
Example services for a flow of datagrams:
• in-order datagram delivery
IP addressing (1)
Reserved IP addresses
Address components with all bits set to 1 or to 0 have special meanings:
All bits 0:
• an address with all bits 0 in the host-number portion is interpreted as "this host" (IP address with <host address> = 0).
• an address with all bits 0 in the network-number portion is interpreted as "this network" (IP address with <network address> = 0). When a host wants to communicate on the network but does not know the network address, it can send a packet with <network address> = 0. The other hosts on the network interpret the address as meaning this network. Their reply contains the correct network address, which the sender can store for future use.
All bits 1:
• an address with all bits 1 is interpreted as all networks or all hosts. For example, the address 128.2.255.255 means all hosts on network 128.2 (a class B address).
Loopback. The class A network 127.0.0.0 is used for loopback. Addresses on this network are assigned to interfaces that process data within the local system. These loopback interfaces do not have access to the physical network.
Chapter 3: Transport Layer
• transport protocols run in end systems
• send side: breaks app messages into segments, passes to network layer
• rcv side: reassembles segments into messages, passes to app layer
(figure: logical end-end transport between the two end systems' application/transport layers, carried over the network, data link, and physical layers of every hop in between)
Internet transport-layer protocols
• reliable, in-order delivery (TCP): flow control, connection setup, congestion control
• unreliable, unordered delivery (UDP): no-frills extension of "best-effort" IP
• services not available: delay guarantees, bandwidth guarantees
(figure: the logical end-end transport spans the hosts' stacks, over routers that implement only the network, data link, and physical layers)
Chapter 3 outline
3.1 Transport-layer services
3.2 Multiplexing and demultiplexing
3.3 Connectionless transport: UDP
3.4 Principles of reliable data transfer
3.5 Connection-oriented transport: TCP (segment structure, reliable data transfer, flow control, connection management)
3.6 Principles of congestion control
3.7 TCP congestion control
UDP: User Datagram Protocol [RFC 768]
• "no frills," "bare bones" Internet transport protocol
• "best effort" service, UDP segments may be: lost; delivered out of order to app
• connectionless: no handshaking between UDP sender, receiver; each UDP segment handled independently of others
Why is there a UDP?
• no connection establishment (which can add delay)
• simple: no connection state at sender, receiver
• small segment header
• no congestion control: UDP can blast away as fast as desired
UDP: more
• often used for streaming multimedia apps
(figure: the 32-bit-wide UDP segment format)
UDP checksum
Sender:
• treat segment contents as sequence of 16-bit integers
• checksum: addition (1's complement sum) of segment contents
• sender puts checksum value into UDP checksum field
Receiver:
• compute checksum of received segment
• check if computed checksum equals checksum field value:
  NO: error detected
  YES: no error detected. But maybe errors nonetheless? More later ….
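A minimal sketch of the 1's complement checksum described above (the sample bytes are arbitrary; a real UDP checksum also covers a pseudo-header, which is omitted here):

```python
# Internet checksum: 16-bit one's complement sum with end-around
# carry, then complemented.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                 # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # wrap the carry around
    return ~total & 0xFFFF

segment = b"\x1c\x46\x4f\x00"         # arbitrary example bytes
csum = internet_checksum(segment)

# Receiver check: recomputing over data plus checksum yields 0
# (the one's complement sum comes out all ones).
assert internet_checksum(segment + csum.to_bytes(2, "big")) == 0
```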
TCP: Overview RFCs: 793, 1122, 1323, 2018, 2581
(figure: sample RTT measurements, RTT in milliseconds on the vertical axis from 100 to 350, plotted against time in seconds for 106 samples)
DevRTT = (1-β)·DevRTT + β·|SampleRTT - EstimatedRTT|
(typically, β = 0.25)
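The EWMA updates above are a few lines of code. The gains are the standard RFC values (α = 0.125 for EstimatedRTT, β = 0.25 for DevRTT, timeout = EstimatedRTT + 4·DevRTT); the starting values and samples below are assumed for illustration:

```python
# RTT estimation: exponential weighted moving averages of the sample
# RTT and its deviation, plus the derived timeout interval.

ALPHA, BETA = 0.125, 0.25

def update_rtt(estimated, dev, sample):
    dev = (1 - BETA) * dev + BETA * abs(sample - estimated)
    estimated = (1 - ALPHA) * estimated + ALPHA * sample
    timeout = estimated + 4 * dev
    return estimated, dev, timeout

est, dev = 0.100, 0.005           # assumed starting values (seconds)
for sample in (0.106, 0.120, 0.095):
    est, dev, timeout = update_rtt(est, dev, sample)
print(est, dev, timeout)
```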
TCP retransmission scenarios (figures; seq # is the byte-stream number of the first byte in the segment; the timer is restarted on timeout)
• lost ACK scenario: A sends "Seq=92, 8 bytes data"; the ACK=100 is lost (X); A's timeout expires and it resends "Seq=92, 8 bytes data"; B replies ACK=100 again; SendBase advances to 100.
• premature timeout: A sends "Seq=92, 8 bytes data" then "Seq=100, 20 bytes data"; the Seq=92 timer expires before ACK=100 arrives, so A resends "Seq=92, 8 bytes data"; the cumulative ACK=120 then acknowledges everything, and SendBase moves from 100 to 120.
TCP retransmission scenarios (more)
Cumulative ACK scenario (figure): Host A sends "Seq=92, 8 bytes data" then "Seq=100, 20 bytes data"; the ACK=100 is lost (X), but the cumulative ACK=120 arrives before the timeout, so nothing is retransmitted and SendBase = 120.
TCP ACK generation [RFC 1122, RFC 2581]
(figure, fast retransmit: after a segment is lost (X), the resulting duplicate ACKs let the sender resend the 2nd segment before its timeout expires)
TCP flow control
• speed-matching service: matching the send rate to the receiving app's drain rate
• app process may be slow at reading from buffer
TCP Flow control: how it works
• (Suppose TCP receiver discards out-of-order segments)
• spare room in buffer = RcvWindow = RcvBuffer - [LastByteRcvd - LastByteRead]
• Rcvr advertises spare room by including value of RcvWindow in segments
• Sender limits unACKed data to RcvWindow: guarantees receive buffer doesn't overflow
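The RcvWindow formula above is directly computable; the buffer size and byte counters in the example are assumed values:

```python
# Receiver-side bookkeeping:
# RcvWindow = RcvBuffer - (LastByteRcvd - LastByteRead)

def rcv_window(rcv_buffer: int, last_byte_rcvd: int, last_byte_read: int) -> int:
    spare = rcv_buffer - (last_byte_rcvd - last_byte_read)
    return max(spare, 0)      # never advertise a negative window

# Assumed example: a 64 KB buffer with 24 KB received but not yet
# read by the application leaves a 40 KB advertised window.
print(rcv_window(64_000, last_byte_rcvd=100_000, last_byte_read=76_000))
```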
TCP Connection Management
Recall: TCP sender, receiver establish "connection" before exchanging data segments
• initialize TCP variables: seq. #s; buffers, flow control info (e.g. RcvWindow)
• client: connection initiator: Socket clientSocket = new Socket("hostname","port number");
• server: contacted by client: Socket connectionSocket = welcomeSocket.accept();
Three way handshake:
Step 1: client host sends TCP SYN segment to server: specifies initial seq #; no data
Step 2: server host receives SYN, replies with SYNACK segment: server allocates buffers; specifies server initial seq. #
Step 3: client receives SYNACK, replies with ACK segment, which may contain data
TCP Connection Management (cont.)
Closing a connection (figure):
client closes socket: clientSocket.close();
Step 1: client end system sends TCP FIN control segment to server.
Step 2: server receives FIN, replies with ACK; closes connection, sends FIN.
TCP Connection Management (cont.)
Step 3: client receives FIN, replies with ACK.
• Enters "timed wait": will respond with ACK to received FINs.
Step 4: server receives ACK. Connection closed.
Note: with a small modification, the exchange can handle simultaneous FINs.
(figure: FIN/ACK exchange; the client waits in timed wait before moving to closed)
TCP Connection Management (cont)
(figures: the TCP server lifecycle and TCP client lifecycle state machines)
Principles of Congestion Control
Congestion:
informally: “too many sources sending too much data
too fast for network to handle”
different from flow control!
manifestations:
lost packets (buffer overflow at routers)
long delays (queueing in router buffers)
a top-10 problem!
Causes/costs of congestion: scenario 1
• two senders, two receivers
• one router, infinite buffers
• no retransmission
(figure: Hosts A and B send original data at rate λin into a router with unlimited shared output link buffers; λout is the delivered rate)
• large delays when congested
• maximum achievable throughput
Causes/costs of congestion: scenario 2
• one router, finite buffers
• sender retransmission of lost packet
(figure: Hosts A and B offer original data λin plus retransmissions; the goodput λout levels off near R/3 and R/4 in panels a., b., c. as offered load grows)
"costs" of congestion:
• more work (retrans) for given "goodput"
Causes/costs of congestion: scenario 3
(figure: Hosts A and B; λout collapses as offered load grows)
TCP congestion window (figure): "saw tooth" behavior, probing for bandwidth; the congestion window size oscillates between roughly 8, 16, and 24 Kbytes over time.
TCP Congestion Control: details
• sender limits transmission: LastByteSent - LastByteAcked ≤ CongWin
How does sender perceive congestion?
• loss event = timeout or 3 duplicate ACKs
Slow start:
• double CongWin every RTT
• done by incrementing CongWin for every ACK received
(figure: one segment in the first RTT, two segments in the next, four segments after that)
Summary: initial rate is slow but ramps up exponentially fast
Refinement: inferring loss
• After 3 dup ACKs: CongWin is cut in half; window then grows linearly
• But after timeout event: CongWin instead set to 1 MSS; window then grows exponentially to a threshold, then grows linearly
Philosophy:
• 3 dup ACKs indicates network capable of delivering some segments
• timeout indicates a "more alarming" congestion scenario
Refinement
Q: When should the
exponential increase
switch to linear?
A: When CongWin gets to
1/2 of its value before
timeout.
Implementation:
Variable Threshold
At loss event, Threshold is set
to 1/2 of CongWin just before
loss event
Summary: TCP Congestion Control
When CongWin is below Threshold, sender in
slow-start phase, window grows exponentially.
When CongWin is above Threshold, sender is in
congestion-avoidance phase, window grows linearly.
When a triple duplicate ACK occurs, Threshold set
to CongWin/2 and CongWin set to Threshold.
When timeout occurs, Threshold set to
CongWin/2 and CongWin is set to 1 MSS.
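The four rules above can be sketched as a tiny per-RTT simulation (real TCP updates per ACK; stepping once per RTT and the event sequence below are simplifying assumptions):

```python
# Slow start / congestion avoidance / AIMD, driven once per RTT.

MSS = 1

def tcp_rtt_step(congwin, threshold, event):
    """One RTT of TCP congestion control.
    event is None, '3dup' (triple duplicate ACK), or 'timeout'."""
    if event == "3dup":
        threshold = congwin / 2
        congwin = threshold            # multiplicative decrease
    elif event == "timeout":
        threshold = congwin / 2
        congwin = 1 * MSS              # back to slow start
    elif congwin < threshold:
        congwin *= 2                   # slow start: double every RTT
    else:
        congwin += MSS                 # congestion avoidance: +1 MSS per RTT
    return congwin, threshold

cw, th = 1, 8
trace = []
for ev in [None, None, None, None, None, "3dup", None, "timeout", None]:
    cw, th = tcp_rtt_step(cw, th, ev)
    trace.append(cw)
print(trace)   # exponential growth, linear growth, halving, restart
```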
TCP sender congestion control

State: Slow Start (SS)
Event: ACK receipt for previously unacked data
Action: CongWin = CongWin + MSS; if (CongWin > Threshold) set state to "Congestion Avoidance"
Commentary: resulting in a doubling of CongWin every RTT

State: Congestion Avoidance (CA)
Event: ACK receipt for previously unacked data
Action: CongWin = CongWin + MSS·(MSS/CongWin)
Commentary: additive increase, resulting in increase of CongWin by 1 MSS every RTT

State: SS or CA
Event: loss event detected by triple duplicate ACK
Action: Threshold = CongWin/2; CongWin = Threshold; set state to "Congestion Avoidance"
Commentary: fast recovery, implementing multiplicative decrease; CongWin will not drop below 1 MSS

State: SS or CA
Event: timeout
Action: Threshold = CongWin/2; CongWin = 1 MSS; set state to "Slow Start"
Commentary: enter slow start

State: SS or CA
Event: duplicate ACK
Action: increment duplicate ACK count for segment being acked
Commentary: CongWin and Threshold not changed
TCP throughput
What’s the average throughout of TCP as a
function of window size and RTT?
Ignore slow start
Let W be the window size when loss occurs.
When window is W, throughput is W/RTT
Just after loss, window drops to W/2,
throughput to W/2RTT.
Average throughput: 0.75 W/RTT
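Since the window oscillates between W/2 and W, the average is 0.75·W/RTT; a tiny sketch (the W and RTT values are assumed for illustration):

```python
# Average TCP throughput given the loss-time window W and the RTT.

def avg_tcp_throughput(W_bytes: float, rtt_s: float) -> float:
    """Window oscillates between W/2 and W, so the mean is 0.75*W/RTT."""
    return 0.75 * W_bytes / rtt_s

# Assumed example: W = 90,000 bytes and RTT = 100 ms give about
# 675,000 bytes/sec (roughly 5.4 Mbps).
print(avg_tcp_throughput(90_000, 0.100))
```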
TCP Futures: TCP over "long, fat pipes"
• throughput in terms of loss rate: throughput = (1.22·MSS) / (RTT·√L)
• to sustain 10 Gbps with 1500-byte segments and 100 ms RTT, this requires a loss rate of L = 2·10^-10. Wow!
• new versions of TCP for high-speed
TCP Fairness
Fairness goal: if K TCP sessions share same bottleneck
link of bandwidth R, each should have average rate
of R/K
TCP connection 1
bottleneck
TCP
router
connection 2
capacity R
Why is TCP fair?
Two competing sessions:
• additive increase gives slope of 1, as throughput increases
• multiplicative decrease decreases throughput proportionally
(figure: the two connections' throughputs converge to an equal share of the bottleneck capacity R)
Fairness (more)
Fairness and UDP:
• multimedia apps often do not use TCP: they do not want their rate throttled by congestion control
• instead they use UDP
Fairness and parallel TCP connections:
• nothing prevents an app from opening parallel connections between 2 hosts
• Web browsers do this
Creating a network app
Write programs that run on (different) end systems and communicate over the network, e.g., web server software communicates with browser software.
No need to write software for network-core devices:
• network-core devices do not run user applications
• running applications only on end systems allows for rapid app development and propagation
Chapter 2: Application layer
2.1 Principles of network applications
2.2 Web and HTTP
2.3 FTP
2.4 Electronic Mail: SMTP, POP3, IMAP
2.5 DNS
2.6 P2P applications
2.7 Socket programming with TCP
2.8 Socket programming with UDP
2.9 Building a Web server
Application architectures
Client-server
Peer-to-peer (P2P)
Hybrid of client-server and P2P
Client-server architecture
server:
• always-on host
• permanent IP address
• server farms for scaling
clients:
• communicate with server
• may be intermittently connected
• may have dynamic IP addresses
• do not communicate directly with each other
Pure P2P architecture
• no always-on server
• arbitrary end systems directly communicate (peer-peer)
• peers are intermittently connected and change IP addresses
API: (1) choice of transport protocol; (2) ability to fix a few parameters (lots more on this later)
Addressing processes
• to receive messages, process must have identifier
• host device has unique 32-bit IP address
• Q: does IP address of host on which process runs suffice for identifying the process?
• A: No, many processes can be running on same host
• identifier includes both IP address and port numbers associated with process on host
• Example port numbers: HTTP server: 80; Mail server: 25
• to send HTTP message to gaia.cs.umass.edu web server: IP address 128.119.245.12, port number 80
more shortly…
App-layer protocol defines
• Types of messages exchanged, e.g., request, response
• Message syntax: what fields in messages & how fields are delineated
• Message semantics: meaning of information in fields
• Rules for when and how processes send & respond to messages
Public-domain protocols:
• defined in RFCs
• allows for interoperability
• e.g., HTTP, SMTP
Proprietary protocols:
• e.g., Skype
What transport service does an app need?
Data loss:
• some apps (e.g., audio) can tolerate some loss
• other apps (e.g., file transfer, telnet) require 100% reliable data transfer
Timing:
• some apps (e.g., Internet telephony, interactive games) require low delay to be "effective"
Throughput:
• some apps (e.g., multimedia) require minimum amount of throughput to be "effective"
• other apps ("elastic apps") make use of whatever throughput they get
Security:
• encryption, data integrity, …
Transport service requirements of common apps