
Transport Layer:

Duties, UDP, and TCP

2
Position of Transport Layer

3
Duties of Transport Layer

4
PROCESS-TO-PROCESS DELIVERY

The transport layer is responsible for process-to-process delivery: the delivery of a packet, part of a message, from one process to another.

Client/Server Paradigm
Multiplexing and Demultiplexing
Connectionless Versus Connection-Oriented Service
Reliable Versus Unreliable

5
Figure 1 Types of data deliveries

6
Figure 2 Port numbers

7
Figure 3 IP addresses versus port numbers

8
Figure 4 IANA ranges

9
Figure 5 Socket address

10
Figure 6 Multiplexing and demultiplexing

11
Figure 7 Error control

12
Figure 8 Position of UDP, TCP, and SCTP in the TCP/IP suite

13
USER DATAGRAM PROTOCOL (UDP)

The User Datagram Protocol (UDP) is called a connectionless, unreliable transport protocol. It does not add anything to the services of IP except to provide process-to-process communication instead of host-to-host communication.

Well-Known Ports for UDP
User Datagram
Checksum
UDP Operation
Use of UDP
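As an aside, a minimal sketch of UDP's process-to-process delivery through the socket API (Python here; the loopback address 127.0.0.1 and port 5005 are arbitrary choices for this illustration):

import socket

# Receiving process: bind a socket to (IP address, port) -- the socket address.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 5005))
receiver.settimeout(1.0)

# Sending process: no connection setup; each sendto() carries one user datagram.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", 5005))

# recvfrom() returns the datagram and the sender's socket address.
data, addr = receiver.recvfrom(1024)
print(data, addr)

sender.close()
receiver.close()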
14
Table 1 Well-known ports used with UDP

15
Figure 9 User datagram format

16
Note

UDP length
= IP length – IP header’s length
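For example, with illustrative numbers: if the IP datagram's total length is 1500 bytes and its header is 20 bytes, the UDP length is 1500 - 20 = 1480 bytes.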

17
Figure 10 Pseudoheader for checksum calculation

18
Figure 11 Checksum calculation of a simple UDP user datagram
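A minimal sketch of the calculation the figure illustrates, assuming IPv4, that the addresses are supplied as 4-byte values, and that the checksum field inside udp_segment is zero before the computation:

import struct

def ones_complement_sum16(data: bytes) -> int:
    # Pad to an even number of bytes, then add 16-bit words, folding carries back in.
    if len(data) % 2:
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)
    return total

def udp_checksum(src_ip: bytes, dst_ip: bytes, udp_segment: bytes) -> int:
    # Pseudoheader: source IP, destination IP, a zero byte, protocol 17, UDP length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 17, len(udp_segment))
    csum = (~ones_complement_sum16(pseudo + udp_segment)) & 0xFFFF
    return csum or 0xFFFF  # a computed 0 is transmitted as 0xFFFF (0 means "no checksum")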

19
Figure 12 Queues in UDP

20
Transport Layer:
TCP & Connection Management

2
TCP

TCP is a connection-oriented protocol; it creates a virtual connection between two TCPs to send data. In addition, TCP uses flow and error control mechanisms at the transport level.

3
Table 1 Well-known ports used by TCP

4
Figure 1 Stream delivery

5
Figure 2 Sending and receiving buffers

6
Figure 3 TCP segments

7
Note

The bytes of data being transferred in each connection are numbered by TCP. The numbering starts with a randomly generated number.

8
Example 1

The following shows the sequence number for each segment:
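As an illustration (the numbers here are chosen for this example): suppose 5000 bytes are sent in five segments of 1000 bytes each and the first byte is numbered 10001; the segments then carry the sequence numbers 10001, 11001, 12001, 13001, and 14001.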

9
Note

The value in the sequence number field of a segment defines the number of the first data byte contained in that segment.

10
Note

The value of the acknowledgment field in a segment defines the number of the next byte a party expects to receive. The acknowledgment number is cumulative.
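For example, if a party has received all bytes up through number 5000 in order, it acknowledges with 5001, the number of the next byte it expects.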

11
Figure 4 TCP segment format

12
Figure 5 Control field

13
Table 2 Description of flags in the control field

14
TCP connection

TCP is a connection-oriented protocol. It establishes a virtual path between the source and the destination.

Connection Establishment
Connection Termination
Connection Resetting

15
Figure 6 Connection establishment using three-way handshaking

16
Figure 7 Connection termination using three-way handshaking

17
Figure 8 Half-close

18
Figure 9 Sliding window

19
Note

A sliding window is used to make transmission more efficient as well as to control the flow of data so that the destination does not become overwhelmed with data. TCP sliding windows are byte-oriented.

20
Figure 10 Example 2

21
Silly Window Syndrome

Syndrome Created by Sender – Nagle's algorithm
Syndrome Created by Receiver – Clark's solution & delayed acknowledgment
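In brief: Nagle's algorithm has the sender transmit the first piece of data immediately but accumulate further data until either a maximum-size segment can be filled or an acknowledgment arrives; Clark's solution has the receiver refrain from advertising a small window until a reasonable amount of buffer space is free, and delayed acknowledgment holds back the ACK itself for the same purpose.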

22
Note

Some points about TCP sliding windows:
❏ The size of the window is the lesser of rwnd and cwnd.
❏ The source does not have to send a full window's worth of data.
❏ The window can be opened or closed by the receiver, but should not be shrunk.
❏ The destination can send an acknowledgment at any time as long as it does not result in a shrinking window.
❏ The receiver can temporarily shut down the window; the sender, however, can always send a segment of 1 byte after the window is shut down.
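For example, if the receiver advertises rwnd = 3000 bytes while the sender's cwnd is 2000 bytes, the usable window is 2000 bytes.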

23
Figure 11 Normal operation

24
Figure 12 Lost segment

25
TCP Timers

To perform its operations smoothly, TCP uses four timers:

Retransmission
Persistence
Keep-alive
Time-waited
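Briefly: the retransmission timer triggers resending of a segment that has not been acknowledged in time; the persistence timer probes a receiver that has advertised a zero window; the keep-alive timer checks whether a long-idle connection is still alive; and the time-waited timer keeps a closing connection around long enough for duplicate segments to die out.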

26
Congestion Control & QoS

2
DATA TRAFFIC

The main focus of congestion control and quality of service is data traffic. In congestion control we try to avoid traffic congestion. In quality of service, we try to create an appropriate environment for the traffic.

Traffic Descriptor
Traffic Profiles

3
Figure 24.3 Data traffic

4
Figure 1 Traffic descriptors

5
Figure 2 Three traffic profiles

6
CONGESTION

Congestion in a network may occur if the load on the network (the number of packets sent to the network) is greater than the capacity of the network (the number of packets the network can handle). Congestion control refers to the mechanisms and techniques to control the congestion and keep the load below the capacity.

Network Performance

7
Figure 3 Network performance (Queues in a router)

8
Figure 4 Packet delay and throughput as functions of load

9
CONGESTION CONTROL

Congestion control refers to techniques and mechanisms that can either prevent congestion, before it happens, or remove congestion, after it has happened. In general, we can divide congestion control mechanisms into two broad categories: open-loop congestion control (prevention) and closed-loop congestion control (removal).

10
Figure 5 Congestion control categories

11
Open-Loop Congestion Control

12
Closed-Loop Congestion Control

13
Figure 6 Backpressure method for alleviating congestion

14
Figure 7 Choke packet

15
TWO EXAMPLES

To better understand the concept of congestion control, let us give two examples: one in TCP and the other in Frame Relay.

Congestion Control in TCP
Congestion Control in Frame Relay

16
Figure 8 Slow start, exponential increase
In the slow-start algorithm, the size of the congestion window increases
exponentially until it reaches a threshold.
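In slow start, the congestion window (cwnd) begins at one maximum segment size (MSS) and roughly doubles every round-trip time, growing 1, 2, 4, 8, ... MSS until it reaches the threshold (ssthresh).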

17
Figure 9 Congestion avoidance, additive increase
In the congestion avoidance algorithm, the size of the congestion window
increases additively until congestion is detected.
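Once the window has reached the threshold, congestion avoidance takes over and cwnd grows by roughly one MSS per round-trip time.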

18
Note

An implementation reacts to congestion detection in one of the following ways:
❏ If detection is by time-out, a new slow start phase starts.
❏ If detection is by three duplicate ACKs, a new congestion avoidance phase starts.
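A compact sketch of this policy in Python (the per-round-trip granularity, the initial threshold of 16 segments, and measuring cwnd in segments rather than bytes are simplifying assumptions of this illustration):

MSS = 1  # measure cwnd in segments for simplicity

class TcpCongestionPolicy:
    """Illustrative model of the congestion policy summarized above."""

    def __init__(self, ssthresh=16 * MSS):
        self.cwnd = 1 * MSS
        self.ssthresh = ssthresh

    def on_window_acked(self):
        """Called once per round trip when the current window has been acknowledged."""
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2            # slow start: exponential increase
        else:
            self.cwnd += 1 * MSS      # congestion avoidance: additive increase

    def on_timeout(self):
        """Time-out: strong sign of congestion, so a new slow-start phase starts."""
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)  # 2-MSS floor follows common practice
        self.cwnd = 1 * MSS

    def on_three_dup_acks(self):
        """Three duplicate ACKs: weaker sign, so a new congestion-avoidance phase starts."""
        self.ssthresh = max(self.cwnd // 2, 2 * MSS)
        self.cwnd = self.ssthresh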

19
Figure 10 TCP congestion policy summary

20
Figure 11 Congestion example

21
Figure 12 BECN

22
Figure 13 FECN

23
Figure 14 Four cases of congestion

24
QUALITY OF SERVICE

Quality of service (QoS) is an internetworking issue that has been discussed more than defined. We can informally define quality of service as something a flow seeks to attain.

Flow Characteristics
Flow Classes

25
Figure 15 Flow characteristics

26
TECHNIQUES TO IMPROVE QoS

Four common techniques can be used to improve the quality of service: scheduling, traffic shaping, admission control, and resource reservation.

27
Scheduling

A good scheduling technique treats the different flows in a fair and appropriate manner.

Figure 16 FIFO Queuing

28
Figure 17 Priority Queuing

29
Figure 18 Weighted Fair Queuing
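In FIFO queuing, packets wait in a single queue and are dropped when it is full; in priority queuing, each priority class has its own queue and higher-priority queues are served first (which can starve lower priorities); in weighted fair queuing, the queues are assigned weights and are served round-robin in proportion to those weights.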

30
Traffic Shaping

31
Leaky bucket

The host is allowed to put one packet per clock tick onto the network; this can be enforced by the network interface card or by the operating system. This mechanism turns an uneven flow of packets from the user processes inside the host into an even flow of packets onto the network, smoothing out bursts and greatly reducing the chances of congestion.
The leaky bucket consists of a finite queue. When a packet arrives, if there is room on
the queue it is appended to the queue; otherwise, it is discarded. At every clock tick, one
packet is transmitted (unless the queue is empty).
The byte-counting leaky bucket is implemented almost the same way. At each tick, a
counter is initialized to n. If the first packet on the queue has fewer bytes than the current
value of the counter, it is transmitted, and the counter is decremented by that number of
bytes. Additional packets may also be sent, as long as the counter is high enough. When
the counter drops below the length of the next packet on the queue, transmission stops
until the next tick, at which time the residual byte count is reset and the flow can
continue.
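A minimal sketch of the byte-counting leaky bucket just described (the class and parameter names are chosen for this illustration):

from collections import deque

class ByteCountingLeakyBucket:
    """Finite queue drained at a fixed byte rate, one clock tick at a time."""

    def __init__(self, bytes_per_tick, queue_limit):
        self.n = bytes_per_tick          # bytes allowed onto the network per clock tick
        self.queue_limit = queue_limit   # finite queue; packets beyond it are discarded
        self.queue = deque()             # holds packet lengths, in arrival order

    def arrive(self, packet_len):
        """Packet arrival: append it if there is room on the queue, otherwise drop it."""
        if len(self.queue) < self.queue_limit:
            self.queue.append(packet_len)
            return True
        return False                     # discarded

    def tick(self):
        """Clock tick: reset the counter to n and send packets while it is high enough."""
        counter = self.n
        sent = []
        while self.queue and self.queue[0] <= counter:
            pkt = self.queue.popleft()
            counter -= pkt               # decrement by the number of bytes sent
            sent.append(pkt)
        return sent                      # lengths of the packets transmitted this tick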

32
Figure 19 Leaky bucket
A leaky bucket algorithm shapes bursty traffic into fixed-rate traffic by averaging the
data rate. It may drop the packets if the bucket is full.

33
Figure 20 Leaky bucket implementation

34
Figure 21 Token bucket
The token bucket allows bursty traffic at a regulated maximum rate.

35
Figure 21 Token bucket
The implementation of the basic token bucket algorithm is just a variable that counts
tokens. The counter is incremented by one every ΔT and decremented by one
whenever a packet is sent. When the counter hits zero, no packets may be sent. In
the byte-count variant, the counter is incremented by k bytes every ΔT and
decremented by the length of each packet sent.
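A minimal sketch of the byte-count token bucket variant described above (the explicit capacity bound, which is what limits the maximum burst size, and the names used here are assumptions of this illustration):

class TokenBucketCounter:
    """Counter-based token bucket: tokens accumulate over time and are spent by packets."""

    def __init__(self, k_bytes_per_tick, capacity):
        self.k = k_bytes_per_tick   # tokens (bytes) added every Delta-T
        self.capacity = capacity    # maximum tokens the bucket can hold
        self.tokens = capacity      # start full, so an initial burst is allowed

    def tick(self):
        """Every Delta-T, add k bytes' worth of tokens, up to the bucket capacity."""
        self.tokens = min(self.tokens + self.k, self.capacity)

    def try_send(self, packet_len):
        """A packet may go out only if enough tokens have accumulated for its length."""
        if packet_len <= self.tokens:
            self.tokens -= packet_len
            return True
        return False                # the packet must wait until more tokens arrive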

36
Resource Reservation & Admission Control

37
