Position of Transport Layer
Duties of Transport Layer
PROCESS-TO-PROCESS DELIVERY
Client/Server Paradigm
Multiplexing and Demultiplexing
Connectionless Versus Connection-Oriented Service
Reliable Versus Unreliable
Figure 1 Types of data deliveries
Figure 2 Port numbers
Figure 3 IP addresses versus port numbers
Figure 4 IANA ranges
Figure 5 Socket address
Figure 6 Multiplexing and demultiplexing
Figure 7 Error control
Figure 8 Position of UDP, TCP, and SCTP in TCP/IP suite
USER DATAGRAM PROTOCOL (UDP)
Figure 9 User datagram format
Note
UDP length
= IP length – IP header’s length
17
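Since the UDP length can be recovered from the IP header alone, the receiver never has to trust a separate size field. A minimal illustration in Python, with hypothetical header field values:

```python
# Hypothetical IP header fields for illustration.
ip_total_length = 1500         # total length of the IP datagram, in bytes
ip_ihl = 5                     # IP header length field, in 32-bit words
ip_header_length = ip_ihl * 4  # 20 bytes for a header without options

# UDP length = IP length - IP header's length
udp_length = ip_total_length - ip_header_length
print(udp_length)  # 1480
```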
Figure 10 Pseudoheader for checksum calculation
Figure 11 Checksum calculation of a simple UDP user datagram
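The checksum in Figure 11 is the standard Internet checksum (RFC 768 / RFC 1071) computed over the pseudoheader followed by the UDP segment. A sketch in Python; the function names are my own, not from the slides:

```python
import struct

def checksum16(data):
    """Internet checksum (RFC 1071): one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                 # pad odd-length data with a zero byte
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                  # fold the carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_checksum(src_ip, dst_ip, udp_segment):
    """Checksum over the pseudoheader (source/destination IP, zero byte,
    protocol 17, UDP length) followed by the UDP segment itself."""
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    return checksum16(pseudo + udp_segment)
```

The sender computes the checksum with the checksum field set to zero; re-checksumming a segment that carries the correct checksum yields zero, which is how the receiver verifies it.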
Figure 12 Queues in UDP
Transport Layer: TCP & Connection Management
TCP
Table 1 Well-known ports used by TCP
Figure 1 Stream delivery
Figure 2 Sending and receiving buffers
Figure 3 TCP segments
Note
Example 1
Note
Note
Figure 4 TCP segment format
Figure 5 Control field
Table 2 Description of flags in the control field
TCP connection
Connection Establishment
Connection Termination
Connection Resetting
Figure 6 Connection establishment using three-way handshaking
Figure 7 Connection termination using three-way handshaking
Figure 8 Half-close
Figure 9 Sliding window
Note
Figure 10 Example 2
Silly Window Syndrome
Note
Figure 11 Normal operation
Figure 12 Lost segment
TCP Timers
Retransmission
Persistence
Keep-alive
Time-wait
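As general background on the retransmission timer (not taken from the slides), TCP commonly derives its retransmission timeout from smoothed round-trip-time estimates as specified in RFC 6298. A sketch of one update step:

```python
def rto_update(srtt, rttvar, rtt_sample, alpha=1/8, beta=1/4):
    """One RTO update step per RFC 6298: exponentially weighted
    smoothed RTT (srtt) and RTT variance (rttvar), with the
    recommended 1-second lower bound on the timeout."""
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - rtt_sample)
    srtt = (1 - alpha) * srtt + alpha * rtt_sample
    rto = max(1.0, srtt + 4 * rttvar)
    return srtt, rttvar, rto
```

Each new RTT measurement feeds the estimators; a sample far from the smoothed mean inflates the variance term and hence the timeout.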
Congestion Control & QoS
DATA TRAFFIC
Traffic Descriptor
Traffic Profiles
Figure 24.3 Data traffic
Figure 1 Traffic descriptors
Figure 2 Three traffic profiles
CONGESTION
Network Performance
Figure 3 Network performance (Queues in a router)
Figure 4 Packet delay and throughput as functions of load
CONGESTION CONTROL
Figure 5 Congestion control categories
Open-Loop Congestion Control
Closed-Loop Congestion Control
Figure 6 Backpressure method for alleviating congestion
Figure 7 Choke packet
TWO EXAMPLES
Figure 8 Slow start, exponential increase
In the slow-start algorithm, the size of the congestion window increases
exponentially until it reaches a threshold.
Figure 9 Congestion avoidance, additive increase
In the congestion avoidance algorithm, the size of the congestion window
increases additively until congestion is detected.
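The two policies above can be combined in a tiny simulation. A simplified sketch with illustrative parameters (window sizes in MSS units, losses ignored):

```python
def cwnd_trace(ssthresh, rounds):
    """Congestion window per RTT round: slow start doubles cwnd
    until it reaches ssthresh, then congestion avoidance adds
    one MSS per round. Function name and parameters are
    illustrative, not from the slides."""
    cwnd, trace = 1, []
    for _ in range(rounds):
        trace.append(cwnd)
        if cwnd < ssthresh:
            cwnd *= 2      # exponential increase (slow start)
        else:
            cwnd += 1      # additive increase (congestion avoidance)
    return trace

print(cwnd_trace(8, 8))  # [1, 2, 4, 8, 9, 10, 11, 12]
```

The trace shows the characteristic shape: doubling up to the threshold of 8, then linear growth.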
Note
Figure 10 TCP congestion policy summary
Figure 11 Congestion example
Figure 12 BECN
Figure 13 FECN
Figure 14 Four cases of congestion
QUALITY OF SERVICE
Flow Characteristics
Flow Classes
Figure 15 Flow characteristics
TECHNIQUES TO IMPROVE QoS
Scheduling
Figure 17 Priority queuing
Figure 18 Weighted fair queuing
Traffic Shaping
Leaky bucket
The host is allowed to put one packet per clock tick onto the network; this can be enforced by the network interface card or by the operating system. This mechanism turns an uneven flow of packets from the user processes inside the host into an even flow of packets onto the network, smoothing out bursts and greatly reducing the chances of congestion.
The leaky bucket consists of a finite queue. When a packet arrives, if there is room on
the queue it is appended to the queue; otherwise, it is discarded. At every clock tick, one
packet is transmitted (unless the queue is empty).
The byte-counting leaky bucket is implemented almost the same way. At each tick, a
counter is initialized to n. If the first packet on the queue has fewer bytes than the current
value of the counter, it is transmitted, and the counter is decremented by that number of
bytes. Additional packets may also be sent, as long as the counter is high enough. When
the counter drops below the length of the next packet on the queue, transmission stops
until the next tick, at which time the counter is reset to n and the flow can continue.
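The byte-counting variant described above can be sketched as a short simulation; the function name and parameters are illustrative:

```python
from collections import deque

def leaky_bucket(arrivals, n, ticks, queue_limit):
    """Byte-counting leaky bucket sketch. `arrivals` is a list of
    (arrival_tick, size_in_bytes) pairs; up to n bytes leave per tick.
    Returns the (tick, size) pairs actually transmitted."""
    queue, sent = deque(), []
    for tick in range(ticks):
        # Enqueue this tick's arrivals; drop packets when the queue is full.
        for t, size in arrivals:
            if t == tick and len(queue) < queue_limit:
                queue.append(size)
        counter = n                        # byte budget reset at each tick
        while queue and queue[0] <= counter:
            counter -= queue[0]            # spend budget on the head packet
            sent.append((tick, queue.popleft()))
    return sent
```

For example, three 400-byte packets arriving in a burst at tick 0 with n = 1000 leave as two packets in the first tick and one in the next, so the burst is spread over time.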
Figure 19 Leaky bucket
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the data rate. It may drop packets if the bucket is full.
Figure 20 Leaky bucket implementation
Figure 21 Token bucket
The token bucket allows bursty traffic at a regulated maximum rate.
The implementation of the basic token bucket algorithm is just a variable that counts
tokens. The counter is incremented by one every ΔT and decremented by one
whenever a packet is sent. When the counter hits zero, no packets may be sent. In
the byte-count variant, the counter is incremented by k bytes every ΔT and
decremented by the length of each packet sent.
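The counter described above can be sketched directly as a per-tick simulation; the function name and parameters are illustrative:

```python
def token_bucket(demand, rate, capacity, ticks):
    """Token-bucket counter sketch: `rate` tokens are added per tick
    (capped at `capacity`) and each transmitted packet spends one token.
    demand[t] is how many packets want to leave at tick t; returns how
    many actually leave each tick."""
    tokens, sent = 0, []
    for tick in range(ticks):
        tokens = min(capacity, tokens + rate)  # tokens accumulate while idle
        go = min(demand[tick], tokens)         # a burst can spend saved tokens
        tokens -= go
        sent.append(go)
    return sent
```

With rate = 1 and a burst of five packets at tick 3, the three tokens saved during the idle ticks plus the new one let four packets leave at once, which is exactly the regulated burstiness the leaky bucket disallows.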
Resource Reservation & Admission Control