Lê Ngọc Sơn
TPHCM, 9-2021
fit@hcmus
Agenda
£ Queue Management
£ Drop Policy
£ Scheduling Discipline
£ Quality of Service
£ IntServ and DiffServ
Queue Management
Router
(Diagram: control plane = routing processor; data plane = line cards interconnected through a switching fabric.)
Line Cards (Interface Cards, Adaptors)
q Transmit and receive packets to/from the switching fabric
ü Rate limiting, lookup
ü Packet marking
Buffer Size
q Why not use infinite buffers?
ü no packet drops!
q Small buffers:
ü often drop packets due to bursts
ü but have small delays
q Large buffers:
ü reduce the number of packet drops (due to bursts)
ü but increase delays
q Can we have the best of both worlds?
Queue Management
q Each router must implement some queuing discipline
q Queuing allocates both bandwidth and buffer space:
ü Bandwidth: which packet to serve (transmit) next
ü Buffer space: which packet to drop next (when required)
q Queuing also affects latency
Queue Management Issues
q Drop policy
ü When should you discard a packet?
ü Which packet to discard?
q Scheduling discipline
ü Which packet to send next?
ü Some notion of fairness? Priority?
q Goal: balance throughput and delay
ü Huge buffers minimize drops, but add to queuing delay (thus higher RTT, longer slow start, …)
Drop Policies
FIFO Scheduling and Drop-Tail
q Access to the bandwidth: first-in first-out queue
ü Packets are differentiated only when they arrive
q Access to the buffer space: drop-tail queuing
ü When the queue is full, drop the arriving packet
Bursty Loss From Drop-Tail Queuing
q TCP depends on packet loss
ü Packet loss is an indication of congestion
ü TCP additive increase drives the network into loss
q Drop-tail leads to bursty loss
ü Congested link: many packets encounter a full queue
ü Synchronization: many connections lose packets at once
Slow Feedback From Drop-Tail
q Feedback comes only when the buffer is completely full
ü … even though the buffer has been filling for a while
q Plus, the filling buffer is increasing the RTT
ü … making detection even slower
q Better to give early feedback
ü Get 1-2 connections to slow down before it's too late!
Random Early Detection (RED)
q Router notices that the queue is getting full
ü … and randomly drops packets to signal congestion
q Packet drop probability
ü Drop probability increases as the average queue length increases
ü Below a minimum threshold, drop nothing; above it, set the drop probability as f(average queue length)
RED Dropping Curve
(Figure: drop probability vs. average queue length — zero below minth, rising linearly to maxp at maxth, then jumping to 1.)
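A minimal sketch of the dropping curve, assuming the classic piecewise-linear form; the names min_th, max_th, and max_p follow the figure's labels:

```python
def red_drop_probability(avg_qlen, min_th, max_th, max_p):
    """Piecewise-linear RED drop curve over the *average* queue length."""
    if avg_qlen < min_th:
        return 0.0                 # below min_th: never drop
    if avg_qlen >= max_th:
        return 1.0                 # at or above max_th: always drop
    # linear ramp from 0 up to max_p between the two thresholds
    return max_p * (avg_qlen - min_th) / (max_th - min_th)
```

RED applies this curve to a moving average of the queue length (e.g. `avg = (1 - w) * avg + w * qlen`), so short bursts do not trigger drops.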
Properties of RED
q Drops packets before the queue is full
ü In the hope of reducing the rates of some flows
q Drops packets in proportion to each flow's rate
ü High-rate flows are selected more often
q Drops are spaced out in time
ü Helps desynchronize the TCP senders
Problems With RED
q Hard to get the tunable parameters just right
ü How early to start dropping packets?
ü What slope for the increase in drop probability?
ü What time scale for averaging the queue length?
q RED has seen mixed adoption in practice
ü If the parameters aren't set right, RED doesn't help
q Many other variations in the research community
ü Names like "Blue" (self-tuning), "FRED", …
Feedback: From Loss to Notification
q Early dropping of packets
ü Good: gives early feedback
ü Bad: has to drop the packet to give the feedback
Explicit Congestion Notification (ECN)
q Must be supported by the router, sender, AND receiver
ü End hosts determine if they are ECN-capable during the TCP handshake
q ECN involves all three parties (and 4 header bits)
ü Sender marks the packet "ECN-capable" when sending
ü If a router sees "ECN-capable" and is experiencing congestion, it marks the packet "ECN congestion experienced"
ü If the receiver sees "congestion experienced", it sets the "ECN echo" flag in responses until the congestion is ACK'd
ü If the sender sees "ECN echo", it reduces cwnd and sets the "congestion window reduced" flag in the next TCP packet
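The exchange above can be sketched as one simplified round trip. The dict fields are illustrative stand-ins for the real header bits (ECT/CE in the IP header, ECE/CWR in the TCP header), not the actual packet layout:

```python
def ecn_round(cwnd, router_congested):
    """One simplified sender -> router -> receiver -> sender round.

    Returns (new_cwnd, flags). Field names are illustrative only."""
    pkt = {"ect": True, "ce": False}        # sender marks ECN-capable
    if router_congested and pkt["ect"]:
        pkt["ce"] = True                    # router: congestion experienced
    ece = pkt["ce"]                         # receiver echoes CE as "ECN echo"
    cwr = False
    if ece:
        cwnd = max(1, cwnd // 2)            # sender reduces cwnd ...
        cwr = True                          # ... and sets "cwnd reduced"
    return cwnd, {"ce": pkt["ce"], "ece": ece, "cwr": cwr}
```

The point of the exchange: the sender reacts as it would to a loss, but no packet had to be dropped to deliver the signal.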
Scheduling Discipline
First-In First-Out (FIFO)
(Figure: a single queue; packets are transmitted in arrival order.)
Fair Queuing: Round-Robin Service
(Figure: one queue per flow, served in round-robin order.)
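A minimal sketch of round-robin service over per-flow queues; with equal-sized packets this approximates fair queuing (flow names and packet values are illustrative):

```python
from collections import deque

def round_robin(flows):
    """Serve one packet from each non-empty flow queue per cycle.

    `flows` maps flow id -> deque of queued packets; returns the
    transmission order as (flow id, packet) pairs."""
    order = []
    while any(flows.values()):
        for fid, q in flows.items():
            if q:                          # skip flows with nothing queued
                order.append((fid, q.popleft()))
    return order
```

A heavy flow gets at most one packet per cycle, so a flow with a long backlog cannot starve the others; weighted fair queuing generalizes this by giving each flow a share proportional to its weight.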
Quality of Service
Traditional Non-converged Network
What Is Quality of Service?
q The user perspective
ü Users perceive that their applications are performing properly
ü Voice, video, and data
q Real-time applications are especially sensitive to QoS metrics
(Table: application sensitivity to QoS metrics.)
Building Blocks
q Scheduling
ü Active buffer management
q Traffic conditioning
ü Traffic policing
ü Traffic shaping
q Traffic-shaping algorithms
ü Leaky bucket
ü Token bucket
Scheduling: How Can Routers Help?
q Scheduling: choosing the next packet for transmission
ü FIFO / priority queuing
ü Round-robin / DRR
ü Weighted fair queuing
q Packet dropping:
ü not drop-tail
ü not only when the buffer is full
ü Active Queue Management
q Congestion signaling
ü Explicit Congestion Notification (ECN)
Traffic Conditioners
q Policing
ü Limits bandwidth by discarding excess traffic.
ü Can re-mark excess traffic and attempt to send it.
ü Should be used on higher-speed interfaces.
ü Can be applied inbound or outbound.
q Shaping
ü Limits excess traffic by buffering it.
ü Buffering can lead to delay.
ü Recommended for slower-speed interfaces.
ü Cannot re-mark traffic.
ü Can only be applied in the outbound direction.
Traffic Policing and Shaping
(Figure: policing drops traffic above the configured rate; shaping buffers it and smooths the output.)

Discussion
The Leaky Bucket
q The leaky bucket algorithm
ü is used to control the rate in a network.
ü It is implemented as a single-server queue with constant service time.
ü If the bucket (buffer) overflows, then packets are discarded.
q Leaky bucket (parameters r and B):
ü Every r time units: send a packet.
ü For an arriving packet:
¡ If the queue is not full, then enqueue it; otherwise discard it.
q Note that the output is a "perfect" constant rate.
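The procedure above can be sketched in discrete time; this assumes r = 1 tick between departures for simplicity:

```python
from collections import deque

def leaky_bucket(arrivals, B, ticks):
    """Single-server queue with constant service: one packet per tick.

    `arrivals` maps tick -> number of packets arriving then; the bucket
    holds at most B packets. Returns (departure_ticks, dropped_count)."""
    queue, departures, dropped = deque(), [], 0
    for t in range(ticks):
        for _ in range(arrivals.get(t, 0)):
            if len(queue) < B:
                queue.append(t)            # room in the bucket: enqueue
            else:
                dropped += 1               # bucket overflow: discard
        if queue:
            queue.popleft()
            departures.append(t)           # at most one departure per tick
    return departures, dropped
```

For example, a burst of 5 packets into a 3-packet bucket loses 2, and the remaining 3 trickle out one per tick at the constant service rate.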
The Leaky Bucket Algorithm
(Figure: (a) a leaky bucket with water; (b) a leaky bucket with packets.)
Discussion
q Drawbacks of the leaky bucket?
Token Bucket Algorithm
q Highlights:
ü The bucket holds tokens.
ü To transmit a packet, we "use" one token.
q Allows the output rate to vary,
ü depending on the size of the burst,
ü in contrast to the leaky bucket.
q Granularity: bits or packets.
q Token bucket (parameters r and MaxTokens):
ü Generate r tokens every time unit.
¡ If the number of tokens exceeds MaxTokens, reset to MaxTokens.
ü For an arriving packet: enqueue it.
ü While the buffer is not empty and there are tokens:
¡ send a packet and discard a token.
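A discrete-time sketch of the procedure above (r tokens per tick, capped at MaxTokens). Unlike the leaky bucket, a stored-up burst can leave back-to-back:

```python
def token_bucket(arrivals, r, max_tokens, ticks):
    """Each tick: add r tokens (capped at max_tokens), enqueue arrivals,
    then send one packet per available token.

    Returns a list of (send_tick, arrival_tick) pairs."""
    tokens, queue, sent = 0, [], []
    for t in range(ticks):
        tokens = min(max_tokens, tokens + r)    # generate tokens, cap at max
        queue.extend([t] * arrivals.get(t, 0))  # enqueue this tick's arrivals
        while queue and tokens > 0:             # spend tokens on queued packets
            sent.append((t, queue.pop(0)))
            tokens -= 1
    return sent
```

With r = 1 and max_tokens = 3, an idle period lets 3 tokens accumulate, so a 4-packet burst sends 3 packets immediately and the fourth one tick later: the output rate varies with the burst size, bounded by the saved-up tokens.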
The Token Bucket Algorithm
(Figure: tokens accumulate in the bucket at rate r; arriving packets consume tokens to depart.)
QoS Models
Three QoS Models
q Best effort: No QoS is applied to packets. If it is not important when or how packets arrive, the best-effort model is appropriate.
q Integrated Services (IntServ): Applications signal to the network that they require certain QoS parameters.
q Differentiated Services (DiffServ): The network recognizes classes that require QoS.
Best-Effort Model
q The Internet was initially based on a best-effort packet delivery service.
q Best effort is the default mode for all traffic.
q There is no differentiation among types of traffic.
q The best-effort model is similar to using standard mail: "The mail will arrive when the mail arrives."
q Benefits:
ü Highly scalable
ü No special mechanisms required
q Drawbacks:
ü No service guarantees
ü No service differentiation
IntServ and DiffServ
Integrated Services:
• Network-wide control
• Admission control
• Absolute guarantees
• Traffic shaping
• Reservations (RSVP)
Differentiated Services:
• Router-based control (per-hop behavior)
• Resolves contentions (hot spots)
• Relative guarantees
• Traffic policing (at entry to the network)
IETF Integrated Services
IntServ: QoS Guarantee Scenario
q Resource reservation
ü call setup, signaling (RSVP)
ü traffic, QoS declaration
ü per-element admission control
ü QoS-sensitive scheduling (e.g., WFQ)
(Figure: a reservation request/reply exchanged hop by hop along the path.)
Call Admission
An arriving session must:
q declare its QoS requirement
ü R-spec: defines the QoS being requested
q characterize the traffic it will send into the network
ü T-spec: defines the traffic characteristics
q use a signaling protocol to carry the R-spec and T-spec to the routers (where reservation is required)
ü RSVP
Resource Reservation Protocol (RSVP)
q Is carried in IP: protocol ID 46
q Can use both TCP and UDP port 3455
(Figure: RSVP signaling from the sending host, possibly through a tunnel.)
DiffServ approach:
£ Simple functions in the network core, relatively complex functions at edge routers (or hosts)
£ Don't define service classes; provide functional components to build service classes
DiffServ Model
q Describes services associated with traffic classes, rather than with traffic flows.
q Complex traffic classification and conditioning are performed at the network edge.
q No per-flow state in the core.
q The goal of the DiffServ model is scalability.
q Interoperability with non-DiffServ-compliant nodes.
IntServ vs. DiffServ
(Figure: an IntServ network interconnected with a DiffServ network. At the DiffServ edge, the router extracts the DSCP, selects the per-hop behavior (PHB), and applies the local packet treatment; core routers apply the PHB.)
Classification
q Classification is the process of identifying and categorizing traffic into classes, typically based upon:
ü Incoming interface
ü IP precedence
ü DSCP
ü Source or destination address
ü Application
q Without classification, all packets are treated the same.
q Classification should take place as close to the source as possible.
Marking
£ Marking is the QoS feature component that "colors" a packet (frame) so it can be identified and distinguished from other packets (frames) in QoS treatment.
£ Commonly used markers:
ü Link layer:
- CoS (ISL, 802.1p)
- MPLS EXP bits
- Frame Relay
ü Network layer:
- DSCP
- IP precedence
Classification Tools
IP Precedence and DiffServ Code Points
(Figure: IPv4 packet header — Version, Len, ToS byte, Length, ID, Offset, TTL, Proto, FCS, IP SA, IP DA, Data. Within the ToS byte, bits 7-0: standard IPv4 uses bits 7-5 as IP precedence with the remaining bits unused; DiffServ redefines bits 7-2 as the DiffServ Code Point (DSCP) and bits 1-0 as IP ECN.)
IP ToS Byte and DS Field Inside the IP Header
IP Precedence and DSCP Compatibility
Expedited Forwarding (EF) PHB
£ EF PHB:
ü Ensures a minimum departure rate
ü Guarantees bandwidth—the class is guaranteed an amount of bandwidth with prioritized forwarding
ü Polices bandwidth—the class is not allowed to exceed the guaranteed amount (excess traffic is dropped)
£ DSCP value of 101110: looks like IP precedence 5 to non-DiffServ-compliant devices:
ü Bits 7 to 5: 101 = 5 (the same 3 bits are used for IP precedence)
ü Bits 4 and 3: 11 = no drop probability
ü Bit 2: just 0
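The bit layout can be checked directly: the top three bits of the 6-bit DSCP are exactly what a precedence-only device reads.

```python
EF_DSCP = 0b101110          # Expedited Forwarding code point (decimal 46)

def ip_precedence(dscp):
    """Legacy IP precedence = the top 3 bits of the 6-bit DSCP."""
    return dscp >> 3
```

`ip_precedence(EF_DSCP)` is 5, which is why a non-DiffServ-compliant device treats EF traffic as precedence-5 traffic.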
Assured Forwarding (AF) PHB
£ AF PHB:
ü Guarantees bandwidth
ü Allows access to extra bandwidth, if available
£ Four standard classes: AF1, AF2, AF3, and AF4
£ DSCP value range of aaadd0:
ü aaa is the binary value of the class
ü dd is the drop probability
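The aaadd0 pattern can be sketched as a small helper; AFxy conventionally means class x with drop precedence y:

```python
def af_dscp(af_class, drop_prec):
    """DSCP for AF class 1-4 and drop precedence 1-3 (pattern aaadd0):
    class in bits 5-3, drop precedence in bits 2-1, bit 0 always zero."""
    if not (1 <= af_class <= 4 and 1 <= drop_prec <= 3):
        raise ValueError("AF class is 1-4, drop precedence is 1-3")
    return (af_class << 3) | (drop_prec << 1)
```

For example, `af_dscp(1, 1)` gives 10 (AF11) and `af_dscp(4, 1)` gives 34 (AF41), matching the standard AF code points.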
AF PHB Values
Additional Slides