
Lecture 23

Congestion Control Techniques, QoS and Traffic Shaping

Network provisioning takes time to do (6 months - 1 year)


so we need a faster form of congestion control than the one discussed in the last video

Perils of traffic aware routing


say there are 2 parallel roads between 2 cities, where one is shorter and the other is longer
during weekends the toll on the shorter road can be increased so as to decrease the rush on that
road during weekends
we do a similar thing here
basically packets tend to take the lower-cost edge in the network
we make the weight of a link a function of its properties: link_weight = f(bandwidth,
propagation_delay, load, avg_queuing_delay)
Least-weight paths will then favor paths that are more lightly loaded, all else being equal.
EI will initially be low cost; after some time it develops heavier queuing delays, and traffic
suddenly shifts to CF
the problem is that flows will oscillate between the two paths
ensure the change from EI to CF is smoother instead of a sudden change, maybe:
say you jump with a probability, i.e., split the traffic probabilistically
each individual is interested in maximising their own utility: selfish behaviour (game theory)
everyone would want to send through the least-cost path, but since they are not collaborating with
others, everyone should expect a bad delay
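The idea above can be sketched in code. This is a minimal illustration, not a real routing protocol: the weight formula and the path metrics are made-up assumptions, and `pick_path` just shows how probabilistic splitting over link weights damps the oscillation between EI and CF.

```python
import random

def link_weight(bandwidth, propagation_delay, load, avg_queuing_delay):
    """Illustrative weight function (the exact combination f(...) is a design
    choice): delays add directly, and load relative to bandwidth adds more."""
    return propagation_delay + avg_queuing_delay + 1000.0 * load / bandwidth

def pick_path(paths):
    """Instead of always taking the least-weight path (which causes the
    EI/CF oscillation), split traffic: lower weight -> higher probability."""
    inv = [1.0 / link_weight(*p["metrics"]) for p in paths]
    return random.choices(paths, weights=inv)[0]

# made-up metrics: (bandwidth, propagation_delay, load, avg_queuing_delay)
paths = [
    {"name": "EI", "metrics": (100, 5, 30, 2)},
    {"name": "CF", "metrics": (100, 8, 10, 1)},
]
chosen = pick_path(paths)   # usually the lighter-loaded CF, but sometimes EI
```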

Traffic Throttling
basically when there is congestion in the network, the network tells the senders to slow down their
packet rate
slides
Senders adjust their transmissions to send as much traffic as the network can readily deliver.
The network aims to operate just before the onset of congestion.
When congestion is imminent, it must tell the senders to throttle back their transmissions and slow
down. This is known as congestion avoidance.
using queuing delays inside the router
internet traffic is often bursty (a large amount in a short time, then inactivity)
how do we estimate the queuing delay?
given the old (previous) estimate we can compute the current one by

d_new = α · d_old + (1 − α) · s

description: To maintain a good estimate of the queueing delay d, a sample of the instantaneous
queue length, s, can be made periodically and d updated according to the rule above
this is called an exponentially weighted moving average (EWMA)
why is it called this way ?? SIR Q
routers can compute the delay; when it exceeds a threshold they can inform the transmitters about
the congestion
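The EWMA update and the threshold check can be sketched as follows. This is a minimal illustration: the sample values, α = 0.8, and the threshold are made-up assumptions.

```python
def update_delay_estimate(d_old, sample, alpha=0.8):
    """EWMA of the queueing delay: d_new = alpha*d_old + (1 - alpha)*sample.
    A sample from k updates ago is weighted by alpha**k, which is why it is
    called an *exponentially* weighted moving average."""
    return alpha * d_old + (1 - alpha) * sample

d = 0.0
for s in [10, 10, 10, 10]:        # hypothetical instantaneous queue samples
    d = update_delay_estimate(d, s)
# d converges geometrically toward 10: here d = 10 * (1 - 0.8**4) = 5.904

THRESHOLD = 5.0                   # made-up threshold
congested = d > THRESHOLD         # router would now notify the transmitters
```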
Another way to do the same thing is via Explicit Congestion Notification

Explicit Congestion Notification


a host is sending to another host
the packet goes through several routers
say one router on this path is facing congestion
the router will mark the ECN flag in the IP header
ecn - explicit congestion notification
and send it onward
now the destination host will see this flag and ask the sender to reduce the rate of transmission
this feedback travels via some route, possibly the same one
the routers are not communicating with each other (like before) but just marking packets
when the sender gets this feedback it throttles its transmission
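The marking behaviour can be sketched like this. It is a toy model: the `Packet` class, the queue-length threshold, and the string feedback are illustrative stand-ins for the real ECN bits in the IP header and the transport-level echo back to the sender.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    payload: bytes
    ecn_marked: bool = False      # stand-in for the ECN bits in the IP header

def router_forward(pkt, queue_len, threshold=10):
    """A congested router does not talk to other routers; it just marks the
    packet, and the mark travels with it to the destination."""
    if queue_len > threshold:
        pkt.ecn_marked = True
    return pkt

def receiver_feedback(pkt):
    """The destination sees the mark and asks the sender to slow down."""
    return "slow down" if pkt.ecn_marked else "ok"
```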

Load Shedding
fastest form of congestion control
packet dropping in a router
which packet will you throw out??
File transfer
an old packet is worth more than a new one
packets are buffered before being delivered upwards
if you keep new packets but chunks are missing from before, they have to be stored
in memory
if you get all the old packets we can process that chunk and send it upwards
Real-time media: a new packet is worth more than an old one
people prefer new milk and old wine
slides
More intelligent load shedding requires cooperation from the senders. E.g., packets carrying
routing information, or the algorithms used for compressing video
To implement an intelligent discard policy, applications must mark their packets to indicate to the
network how important they are.
the network layer then has to peek inside the payload to make this decision
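The "old wine vs new milk" drop policy above can be sketched as a toy illustration; the buffer is just a Python list ordered oldest-first, and the type names are made up.

```python
def drop_one(buffer, traffic_type):
    """buffer is ordered oldest-first.
    File transfer ('wine'): old packets are worth more, so drop the newest.
    Real-time media ('milk'): new packets are worth more, so drop the oldest."""
    if traffic_type == "wine":
        return buffer[:-1]        # discard the newest packet
    return buffer[1:]             # discard the oldest packet
```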

admission control : makes more sense in virtual-circuit networks

Quality of Service (QoS)


a way for routers/networks to guarantee some level of quality
need : There are applications (and customers) that demand stronger performance guarantees
from the network than “the best that could be done under the circumstances.”
An easy solution to provide good quality of service is to build a network with enough capacity for
whatever traffic will be thrown at it. The name for this solution is overprovisioning
but this means spending money that is not really required
the challenge:
Quality of service mechanisms let a network with less capacity meet application requirements just
as well at a lower cost.
"high" means the application needs that metric to be strong, i.e., how strict its QoS requirement is
for that metric
jitter : Jitter is the variation in delay
a constant delay in audio is manageable, but a highly variable delay is not preferred
audio can tolerate a fairly large loss

Traffic Shaping
before QoS we need to know this
traffic is usually bursty (non-uniform)
Bursts of traffic are more difficult to handle than constant-rate traffic because they can fill buffers
and cause packets to be lost.
Traffic shaping is a technique for regulating the average rate and burstiness of a flow of data that
enters the network.
Packets in excess of the agreed pattern might be dropped by the network, or they might be
marked as having lower priority.
monitoring a traffic flow to check that it follows the agreed shaping pattern is called traffic policing
2 techniques to make bursty traffic non-bursty : the leaky bucket and the token bucket

Leaky bucket
a bucket of capacity at most b
there is a hole through which the packets drain out at rate r
the rate of going out will never exceed r
excess packets have to be queued outside; once there is space inside the bucket you can
pour them in
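A minimal sketch of the leaky bucket, at packet granularity (one `tick` = one drain interval; the class interface and parameters are illustrative, not a standard API):

```python
from collections import deque

class LeakyBucket:
    """Bucket holds at most b packets and drains at most r packets per tick,
    so the output rate never exceeds r no matter how bursty the input is."""
    def __init__(self, b, r):
        self.b, self.r = b, r
        self.queue = deque()

    def arrive(self, packet):
        """Pour a packet in if there is space; otherwise it waits outside."""
        if len(self.queue) < self.b:
            self.queue.append(packet)
            return True
        return False

    def tick(self):
        """One drain interval: up to r packets leave through the hole."""
        return [self.queue.popleft()
                for _ in range(min(self.r, len(self.queue)))]
```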

Token bucket
a bucket of capacity at most b
a tap pours tokens in at rate r
if it overflows that's fine, the extra tokens are simply lost
if we need to transmit x bits, we take x tokens out and transmit that packet
if we do not have enough tokens in the bucket we have to wait till the bucket has enough
tokens
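A minimal sketch of the token bucket, with tokens counted in bits and the bucket starting full as in the lecture's first example (the interface is an assumption, not a standard API):

```python
class TokenBucket:
    """Tokens (in bits) fall in at rate r up to capacity b. Sending x bits
    consumes x tokens, so bursts up to b bits are allowed while the long-run
    average rate stays bounded by r."""
    def __init__(self, b, r):
        self.capacity = b
        self.rate = r
        self.tokens = b                      # start with a full bucket

    def refill(self, dt):
        """Tokens that would overflow the bucket are simply lost."""
        self.tokens = min(self.capacity, self.tokens + self.rate * dt)

    def send(self, x):
        """Transmit x bits if enough tokens exist; otherwise wait (False)."""
        if self.tokens >= x:
            self.tokens -= x
            return True
        return False
```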

An Illustration of Token Bucket


bursty traffic in
average traffic over 1 sec = 200 Mbps
consider a token bucket with r = 200 Mbps and bucket capacity = 120 Mb
initially the bucket is full
so in the first 150 ms: tokens fall in at 200 Mbps, data goes out at 1000 Mbps, so the net token
drain is 800 Mbps
800 Mbps × 0.15 s = 120 Mb, which leaves the bucket empty at 0
150 to 450 ms: no traffic; tokens fall in: 0.3 s × 200 Mbps = 60 Mb of tokens in the bucket
450 to 700 ms: out = in, net flow = 0; (although both in and out are nonzero) the tokens in the
bucket do not change
from 700 ms onwards: 0.3 s × 200 Mbps = 60 Mb more, which makes the bucket full at 120 Mb
if we continued further, the line would be horizontal as the bucket stays full
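The timeline can be replayed as straight-line arithmetic. One assumption is made explicit here: the burst rate of 1000 Mbps is inferred from the 800 Mbps net drain plus the 200 Mbps fill rate.

```python
r, b = 200.0, 120.0      # fill rate (Mbps) and bucket capacity (Mb)
level = b                # bucket starts full

# 0-150 ms: data out at 1000 Mbps, net token drain = 1000 - 200 = 800 Mbps
level -= (1000 - r) * 0.150          # 800 * 0.15 = 120 Mb -> bucket empty

# 150-450 ms: no traffic, tokens accumulate at r
level = min(b, level + r * 0.300)    # + 60 Mb of tokens

# 450-700 ms: out rate equals fill rate, so the level does not change
level = min(b, level + (r - 200.0) * 0.250)

# 700-1000 ms: no traffic again
level = min(b, level + r * 0.300)    # + 60 Mb -> bucket full at 120 Mb
```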

Another example

same traffic in
we can only sustain an out-rate of 800 Mbps for 100 ms before the bucket becomes empty
after that you can only take tokens out at the rate at which they fall in, which is 200 Mbps
the areas under both figures are the same; we are just changing the peak value

Make it more uniform

make the bucket initially empty


hence the out-rate is the rate at which tokens fall in = 200 Mbps
note that the first figure is the incoming-packet graph
the figures below it show what is output
the area in each is the same
we managed to kill the burstiness and make the traffic smooth
What is the drawback ?? SIR Q
I think I made a mistake: the bucket is not merely initially empty; its capacity is 0.
