Chapter 2: Performance and QoS

ECE 610 – Winter 2013
Dr. Mohamed Mahmoud
Department of Electrical and Computer Engineering
University of Waterloo
http://ece.uwaterloo.ca/~mmabdels/
mmabdels@bbcr.uwaterloo.ca
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
What goes wrong in a data network?
- Bit-level errors (electrical interference)
- Packet losses
- Link and node failures
- Packets are delayed
- Packets are delivered out-of-order
- Routers are congested (high delay + packet drop)
Quality of Service (QoS)
- Measures the service quality provided by a network
- QoS metrics: delay - jitter (the variation in delay between
packets) - bandwidth - reliability
- When the network is overloaded:
- Delay increases due to increased queuing delay
- Jitter increases due to chaotic load patterns
- Bandwidth decreases due to increased competition for
access
- Reliability degrades due to queue overflow, causing
packet loss
Why Quality of Service?
- The concept of QoS is based on the following observations:
- Applications need a minimum level of performance to
function properly
- Not all applications need the same QoS - applications
may request their specific requirements from the network
- Certain applications require minimum level of QoS
- Web must display a page in 2 seconds
- VoIP call must be understandable, e.g., delay < 500 ms
- A file should be exchanged without errors
The ultimate goal of a network is to provide
satisfactory services to users
Communication networks support many applications with
different traffic characteristics and QoS requirements
Internet today
- Provides “best effort service” data delivery
- Send packet and hope performance is OK
- Packet switching: statistically share resources in hope that
sessions' peak demands don't coincide
- No guarantees (unpredictable) on bandwidth, delay, etc
- The network core is kept simple
- As demand exceeds capacity, service degrades
- All packets are treated equally (generally, FIFO queuing) –
does not distinguish between delay sensitive and best
effort traffic
- If there were infinite network resources, QoS would not be
necessary
- QoS is about deciding which traffic gets more (or less)
resources, so that all users can get satisfactory service
- Since multimedia applications require a minimum level of
QoS to be effective, today's Internet uses application-level
techniques to mitigate (as best as possible) the effects of
delay and packet loss
- Next generation Internet needs to provide application-
specific QoS guarantees
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
Principles for QoS Guarantees
1- Packet classification/scheduling
2- Traffic shaping and policing
3- High resource utilization
4- Call admission
QoS principle (1): Packet classification/scheduling
- Packet marking is needed for routers to distinguish between
different classes of traffic
- Packet scheduling mechanism (new queuing policy) is
needed to allocate different bandwidth and transmission
priority to flows
Example: 1Mbps IP phone and FTP share 1.5 Mbps link.
- Bursts of FTP can congest router, cause audio loss
- Want to give priority to audio over FTP
QoS principle (2): Traffic Shaping and policing
- Prevents applications from misbehaving (e.g., multimedia
application sends higher than declared rate)
- Traffic shaping and policing: hold or drop packets of a
misbehaving traffic flow to protect the network;
done at the network edge (in the host or edge router)
- Policing: force source adherence to bandwidth allocations
- Example: Leaky bucket mechanism
QoS principle (3): High resource utilization
- Allocating fixed (non-sharable) bandwidth to a flow is an
inefficient use of bandwidth if the flow doesn't use its allocation
- While providing isolation, it is desirable to use resources as
efficiently as possible
QoS principle (4): Call admission
- Basic fact of life: cannot support traffic demands beyond link
capacity
- Call admission: can a newly arriving flow be admitted with
performance guarantees without violating the QoS guarantees
made to already admitted flows?
- Session is guaranteed QoS or blocked (denied admission to
network)
- Current networks: all calls accepted, performance degrades
as more calls carried
Summary of QoS Principles
- Packet classification and scheduling mechanism
- Traffic shaping and policing mechanism
- High resource utilization
- Call admission mechanism
1- Scheduling (Queuing) Algorithms
1- First In First Out (FIFO)
2- Priority (Absolute)
3- Weighted Fair Queuing (WFQ)
1- FIFO (first in first out) scheduling:
- Packets are sent in order of arrival to queue
- No special treatment is given to delay sensitive packets
- A greedy flow will adversely affect other flows
- Currently used in the Internet
2- Priority scheduling algorithm
- Multiple priority classes, each with its own queue and a
different priority
- Class may depend on marking or other header information,
e.g. IP source/destination, port numbers, etc..
- Transmit a packet from the highest priority class that has a
nonempty queue
3- Weighted Fair Queuing (WFQ)
- Allows different connections to have different service shares
- A separate FIFO queue for each connection sharing the same
link
- Each class of packets gets a weighted share of the service
bandwidth in each cycle
- Priority scheduling is impractical because only one
connection can be served at a time
- Fair share in WFQ: every session gets a minimum amount of
guaranteed bandwidth
- Portion of bandwidth allocated to flow i = W_i / (W_1 + W_2 + W_3)
- In each cycle, serve W_1 packets of flow 1, W_2 packets of
flow 2, and W_3 packets of flow 3
- The service rate for nonempty queue i = C ∙ W_i / (W_1 + W_2 + W_3),
where C is the output link speed
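The per-cycle service described above can be sketched as a discrete weighted round robin, a common approximation of WFQ. This is an illustrative sketch, not from the slides; the flow contents, weights, and capacity below are made-up values:

```python
from collections import deque

def weighted_round_robin(queues, weights, capacity):
    """Serve packets from per-flow FIFO queues in cycles.

    In each cycle, a nonempty queue i may send up to weights[i]
    packets, approximating WFQ's share W_i / sum(W).
    `capacity` limits the total number of packets served.
    """
    order = []
    served = 0
    while served < capacity and any(queues):
        for q, w in zip(queues, weights):
            for _ in range(w):          # up to w packets per cycle
                if q and served < capacity:
                    order.append(q.popleft())
                    served += 1
    return order

# Three flows with weights 2, 1, 1: flow A gets ~half the service.
flows = [deque(["A0", "A1", "A2", "A3"]),
         deque(["B0", "B1"]),
         deque(["C0", "C1"])]
print(weighted_round_robin(flows, [2, 1, 1], 8))
# → ['A0', 'A1', 'B0', 'C0', 'A2', 'A3', 'B1', 'C1']
```

Note that real WFQ schedules by virtual finish times rather than fixed per-cycle counts; the round-robin form above only approximates the bandwidth shares.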
2- Traffic shaping and policing
- Traffic Specifications: a flow must describe the traffic it will
inject into the network, as follows:
- Average Rate: how many packets can be sent per unit
time (in the long run)
- Peak Rate: max. number of packets that can be sent
over a short period of time, e.g., 6000 packets per min.
- Max. Burst Size: max. number of packets sent
consecutively
- The leaky bucket is a “traffic shaper”: It changes the
characteristics of packet stream
- Leaky bucket mechanism limits burst size and average rate
of traffic entering the network
- Bucket can hold up to b tokens
- Tokens are generated at a constant rate r tokens/sec
- Tokens are added to bucket if bucket is not full, otherwise
the excess tokens are discarded
- A packet must remove a token from the token bucket
before it is transmitted into the network
- When a packet arrives, it is transmitted if there is a token
available. Otherwise it is buffered until a token becomes
available.
- The bucket must have n tokens to send n packets
- b controls the maximum burst size of traffic
- r tokens/sec controls the average rate of traffic
- The amount of traffic entering over any interval of length t is
less than b + rt
- If several packets arrive back-to-back and there are
sufficient tokens to serve them all, they are accepted at
peak rate (usually physical link speed).
- Possible token bucket use: shaping, policing, marking
- Delay packets from entering network (shaping)
- Drop packets that arrive without tokens (policing function)
- Let all packets pass through, mark packets: those with
tokens, those without
- Network drops packets without tokens in time of
congestion (marking)
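The token-bucket rules above can be sketched in a few lines. This is an illustrative discrete-time simulation (not from the slides) of the policing variant, which drops packets that arrive without tokens; the arrival pattern and the values r = 2 and b = 5 are made up:

```python
def token_bucket(arrivals, r, b, horizon):
    """Simulate a token-bucket policer in discrete time steps.

    arrivals[t] packets arrive at step t; r tokens are added per
    step; the bucket holds at most b tokens; a packet passes only
    if a token is available, otherwise it is dropped (policing).
    """
    tokens = b                            # start with a full bucket
    sent, dropped = [], 0
    for t in range(horizon):
        tokens = min(b, tokens + r)       # replenish, capped at b
        a = arrivals[t] if t < len(arrivals) else 0
        ok = min(a, int(tokens))          # packets with tokens pass
        tokens -= ok
        dropped += a - ok
        sent.append(ok)
    return sent, dropped

# A burst of 10 packets at t=0 against r=2, b=5: at most b + r*t
# packets can pass over any window of t steps.
print(token_bucket([10, 0, 4, 0], r=2, b=5, horizon=4))
# → ([5, 0, 4, 0], 5)
```

A shaper would buffer the 5 excess packets instead of dropping them, releasing one per arriving token.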
Token Bucket Traffic Shaper
(Figure: a regulator with bucket depth b bits feeds a link of
capacity ≤ p bps. The maximum number of bits sent over time must
stay within the region bounded by a line of slope p (the maximum
link capacity or peak rate) and a line of slope r offset by b.)
A leaky bucket mechanism shapes bursty traffic into fixed-
rate traffic by averaging the data rate.
3- Stochastic Admission Control
Consider that:
1. The server has a capacity to serve C packets per second
(Poisson distribution) and cache N packets in the buffer
2. Each flow has a mean arrival rate of λ packets per second
(Poisson distribution)
(Figure: flows 1 through k feed a buffer of size N served at
rate C.)
Question: How many flows can be admitted so that the
buffer overflow probability is smaller than ε, 0 < ε << 1?
(hint: M/M/1/N queue)
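Following the M/M/1/N hint, the question can be answered numerically. The sketch below (not from the slides) treats N as the total system capacity and uses the standard M/M/1/N full-system probability P_N = (1 − ρ)ρ^N / (1 − ρ^(N+1)); the merged arrivals of k Poisson flows form a Poisson process of rate kλ. The values λ = 10, C = 100, N = 20, ε = 10^-3 are made up:

```python
def blocking_prob(rho, N):
    """P[an M/M/1/N system holding at most N packets is full]."""
    if abs(rho - 1.0) < 1e-12:
        return 1.0 / (N + 1)              # limit as rho -> 1
    return (1 - rho) * rho**N / (1 - rho**(N + 1))

def max_admissible_flows(lam, C, N, eps):
    """Largest k such that k flows (aggregate rate k*lam, i.e.
    offered load rho = k*lam/C) keep the overflow probability
    below eps."""
    k = 0
    while blocking_prob((k + 1) * lam / C, N) < eps:
        k += 1
    return k

# lam = 10 pkt/s per flow, C = 100 pkt/s, capacity N = 20, eps = 1e-3
print(max_admissible_flows(10.0, 100.0, 20, 1e-3))   # → 7
```

At k = 7 the load is ρ = 0.7 and the overflow probability is about 2.4 × 10^-4; at k = 8 (ρ = 0.8) it exceeds 10^-3, so the eighth flow is rejected.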
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
QoS Protocols
- Focus on individual packet flows
- Each flow requests a specific level of service from the network
- The network grants or rejects the flow requests, based on the
availability of resources and the guarantees provided to
other flows
- Resource reservation is fundamental for reliable enforcement
of QoS guarantees
Next generation Internet protocols:
1- Integrated Services (IntServ): a flow-based QoS model
2- Differentiated Services (DiffServ): a class-based QoS model
1- Integrated Services (IntServ)
IntServ Signalling
- Sender sends a PATH message to the receiver specifying
the characteristics of the traffic it will transmit on the
network (bit rate, peak rate, etc.)
- The receiver responds with a RESV message specifying the
reservation specification (guaranteed or controlled) and the
QoS level (to request resources for the flow)
(Figure: PATH messages travel hop by hop from sender through
the routers to the receiver; RESV messages return along the
reverse path. Each router keeps per-flow state and contains a
classifier, buffer management, and a scheduler serving flows
1 through n.)
Each router implements:
- Admission Control Routine
- Classifier
- Packet Scheduler
1- Admission control:
- Routers along reverse path reserve resources needed to
satisfy receiver's QoS
- Each router performs per-flow admission control -
allocates resources or rejects request
2- Classifier: classifies packets and puts them in a specific
queue based on the classification results
3- Packet scheduler: schedules packets accordingly to meet
their QoS requirements
- Control messages need to be sent periodically - a flow's state
(soft state) will disappear if not refreshed
- Advantages: no need to clean up state after failure - can
tolerate lost signaling packets (never fails unless there is a
major failure)
- IntServ introduces two new services enhancing the
Internet’s traditional best effort:
1- Guaranteed service
- Guaranteed bounds on delay and bandwidth using
Leaky bucket policed source + WFQ
- For applications with real-time requirements
2- Controlled-load service
- “a QoS close to the QoS the same flow would
receive from an unloaded network element”, i.e.,
similar to best-effort in networks with limited load
- No quantified guarantees;
for applications that can adapt to moderate losses,
e.g., real-time multimedia applications
IntServ Problems
- Not scalable
- Huge storage and processing overhead on the routers
- The amount of state information increases
proportionally with the number of flows
- Large message signaling overhead
- Requirement on routers is high
- All routers must implement admission control,
classification, and packet scheduling
- Routers should maintain per-flow state information
(allocated resources, QoS requests)
2- Differentiated Services (DiffServ)
Edge router:
- Shape & Police traffic
- Marks “class” of traffic in packets’ header field (e.g., gold
service)
- Per-flow traffic management
Core router: (scheduling)
- Per class traffic management
- Buffering and scheduling
based on marking at edge
- It forwards packets according to their per-hop behavior
(PHB) – e.g., drop lower-class traffic first when congested
- Doesn't try to distinguish among individual data flows;
instead, uses simpler methods to classify packets into one of
a few categories
- All packets within a particular category are then handled in
the same way, with the same quality parameters.
- The PHB determines how the router resources are used and
shared among the competing service classes
- A PHB can result in different service classes receiving
different performance
- Packets are marked in the Type of Service (TOS) field in IPv4,
and the Traffic Class field in IPv6
- PHB examples: class A gets x% of the link bandwidth over a
certain time interval - class A packets leave before packets
from class B
Currently, two PHBs are under active discussion:
1- Expedited forwarding
- Specifies a minimum packet departure rate of a class, i.e.,
a guaranteed bandwidth
- The guarantee is independent of other classes, i.e.,
enough resources must be available regardless of
competing traffic
- Non-conformant traffic is dropped or shaped.
2- Assured forwarding
- Divide traffic into four classes
- Each class is guaranteed a minimum amount of bandwidth
and buffer
- Within each class, there are three drop priorities,
which affect which packets will get dropped first if there
is congestion
- In case of congestion best-effort packets are dropped
first
- If a user sends more assured traffic than its profile,
the excess traffic is converted to best-effort
- Assured forwarding: Provides reliable service even in
time of network congestion
Summary IntServ
- On-line negotiation of per-flow requirements
- End-to-end per-router negotiation of resources
- Complicated and not scalable
- Fundamental changes in Internet so that applications can
reserve end-to-end bandwidth
- Requires new, complex software in hosts & routers
- Non-scalability: signaling, maintaining per-flow router
state difficult with large number of flows
- Good quality guarantee
Summary: DiffServ
- Per-class traffic management, no per-flow state
- No end-to-end service guarantee
- Simple and scalable
- Fewer changes to Internet infrastructure
- Easy to implement, even in the network core
- Proper classification can lead to efficient resource allocation
and improved QoS
- No per-flow state information to be maintained by routers
- Simple functions in the network core, relatively complex
functions at edge routers (or hosts)
- May provide only weak quality guarantees
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
2.4.1 What is modeling and why?
2.4.2 Probability Review
2.4.3 Queuing theory
Network Design and Queuing
- Packet switching relies on queues
- Queues are everywhere: at the sender, at the receiver, and
in the network core
Communication network is a network of queues!
- At an access point to a network: a computer transmits at
peak rates higher than a LAN can support, or many
computers simultaneously transmit on a LAN.
- At switches or routers: several input ports may access one
output port.
Two approaches to network design
1- “Build first, worry later” approach
- More ad hoc, less systematic
2- “Analyze first, build later” approach - was used extensively
for telephone networks
- More systematic, optimal, etc.
- A model is a mathematical abstraction: keep only the details
that are relevant
- Mathematical model can be used for:
1- Evaluate the system performance
- Average queue length
- Average waiting time
- Loss probability due to buffer overflow
2- Improve the system performance
- Determine the service rate that gives a tolerable waiting
time (upgrading the system with more capacity incurs
investment cost, but long waiting times would be
annoying to users)
- Provide guaranteed packet loss probability with
large enough buffer (How large is enough?)
Mathematical modeling is one step that can provide
useful approximations
We need to validate the given results with
simulations and experiments
Randomness inherently exists
- Packet arrival times occur randomly.
- Call holding times are random, similarly packet sizes
could be random.
- Transmission facilities (in packet switching networks) are
shared and could lead to random delays.
- Number of users varies with time in an unpredictable
manner.
- Because of this randomness, probabilistic methods such as
queuing theory need to be used - the results are probabilistic
(not deterministic)
Can we design a network based on deterministic quantities
(e.g., worst-case behavior)?
- Suppose we would like to design a network that would
guarantee no packet losses or delays.
- How would we guarantee this?
Source 1: peak rate 10 Mb/s, mean rate 1 Mb/s
Source 2: peak rate 20 Mb/s, mean rate 5 Mb/s
(Figure: the two sources feed links of capacities C1 and C2,
which merge into a shared link of capacity C3.)
C1 ≥ 10 Mb/s, C2 ≥ 20 Mb/s, C3 ≥ 30 Mb/s
Supports few users + wastes resources
Deterministic QoS can lead to underutilization
- We ensure that the link capacity of each link on the
network is larger than the peak (maximum) rate at which
packets can be transmitted on this link.
- But the peak and mean rates of the traffic could be quite
different. For example, a video source could be transmitting
on average 5 Mbps, but its peak rate could be 50 Mbps.
- Moreover the different sources may transmit at their peak
rates at very different times.
- So, in reality, we expect that aggregate arrivals will
exceed capacity (hence the possibility of loss) very rarely.
1- Single Server Queuing Model
Queuing Representation
- The server is usually the transmission facility
- The arrivals and the service times are random
- The service time: how long a packet will remain in service -
directly related to the length of the packet
- The buffer is the available space in the system
2- Multi-Server Queue
- Arriving packets first go to empty servers. If more than
one server is empty, any server is chosen randomly.
- We will study a single queue and then talk about networks
of queues
- But let's first give a brief probability review
Video: “Dogs queue system” -
http://www.youtube.com/watch?v=IPxBKxU8GIQ&feature=related
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
2.4.1 What is modeling and why?
2.4.2 Probability Review
2.4.3 Queuing theory
- Random experiment: an experiment with a non-
deterministic outcome, e.g., in rolling a die, the outcome
can be one of {1, 2, ..., 6} but we do not know the exact
outcome before doing the experiment.
- Sample Space (S): the set of all possible outcomes of an
experiment
- Event (E): a subset of the sample space
Example: in rolling a die, S = {1, 2, ..., 6} and E = {2, 4, 6}
is the event that the outcome is an even number
- The probability of an event E can be defined in terms of its
relative frequency as:
P(E) = lim_{n→∞} n(E)/n
where n(E) is the number of times event E happens in n trials
Probability is how frequently we expect different outcomes to
occur if we repeat the experiment over and over.
- Ex. tossing a coin 10,000 times: n(head) = n(tail) = 5,000, so
P(head) = P(tail) = 5,000/10,000 = 1/2. This means:
each time I toss a coin, I do not know the outcome, but I
know that it is equally likely to get head or tail
- In many cases, we do not care about the outcome, but how
likely an outcome will happen
For example, after designing a communication system, I
found: 95% of the packets experienced a delay < 100 ms
This does not mean the packets will not experience a delay
more than 100 ms but it will happen rarely.
Guaranteeing that 95% of the packets experience no more than
a given maximum delay is a satisfactory service
Facts:
Consider an experiment with sample space S, and
probability P:
(i) for all events E, 0 ≤ P(E) ≤ 1
(ii) P(S) = 1, ex. E is the outcome of tossing a die:
P(E = 1) + P(E = 2) + ∙∙∙ + P(E = 6) = 1
(iii) P(not A) = 1 − P(A), ex. P(not getting number 4) =
1 − P(getting number 4) = 1 − 1/6 = 5/6
The probability that event A will not happen = 1 - the
probability that event A will happen
P(A U B) for non-disjoint events:
P(A U B) = P(A) + P(B) − P(A ∩ B)
If A, B are disjoint events, then
- P(A ∩ B) = 0
- P(A U B) = P(A) + P(B)
(iv) For any sequence of events E1, E2, ... which are
mutually disjoint:
P(E1 U E2 U ...) = P(E1) + P(E2) + ...
- Conditional Probability
- The conditional probability of an event E given that an
event F occurs, denoted by P(E|F), is defined as:
P(E|F) = P(E ∩ F) / P(F)
- Independent Events
A and B are independent events if knowing that A happens
does not say anything about whether B happens (does not
change the likelihood that B may happen).
If A and B are independent events, then P(A ∩ B) = P(A) P(B)
Proof: P(A|B) = P(A), and P(A|B) = P(A ∩ B)/P(B),
so P(A ∩ B) = P(A) P(B)
- Example: Suppose that Alice and Bob share one cable to
download data and have no relations
A = {Alice downloads}, B = {Bob downloads}, and C =
{The cable is in use}.
Then A, B are independent events, but A, C and B, C are
dependent events.
- Total Probability Theorem
E1, E2, ..., En are mutually disjoint events whose union is S,
and A is an arbitrary event in S. The total probability
theorem is given as:
P(A) = Σ_i P(A ∩ Ei) = Σ_i P(A | Ei) P(Ei)
(Figure: a Venn diagram of S partitioned into E1, ..., E7,
with A overlapping several of the partitions.)
- Bayes' relation:
P(A|B) ∙ P(B) = P(B|A) ∙ P(A)
since P(A ∩ B) = P(B ∩ A) and P(A U B) = P(B U A)
Examples:
Experiment: flipping a coin two times
S = {(H, H), (H, T), (T, H), (T, T)}
E1 = getting (H, H)
P(E1) = 1/4, or P(H ∩ H) = P(H) P(H) = 1/2 x 1/2 = 1/4
Getting H in the first and second trials are independent events
because the result of the first trial has no effect on the
result of the second trial
Example of disjoint events: P(even number in tossing a die)
= P(2 or 4 or 6) = P(2) + P(4) + P(6) = 3/6
P(2 and 4) = 0
Rolling two dice:
B = the sum is 6 = {(1, 5), (5, 1), (3,3), (2, 4), (4,2)}
A = first die is 4 = {(4, 1), (4, 2), … (4, 6)}
P(B) = 5/36
P(A) = 6/36
P(B|A) = P(A and B)/P(A) = (1/36)/(6/36) = 1/6
Example: A fair coin is tossed 3 times.
S = {(TTT),(TTH),(THT),(HTT),(HHT),(HTH),(THH),(HHH)}
r. v. X: the number of heads obtained in 3 trials
P(X=0)=1/8, P(X=1)=3/8, P(X=2)=3/8, P(X=3)=1/8
X can have a value from {0, 1, 2, 3}
P(X = 0) + P(X = 1) + P(X = 2)+ P(X = 3) =1
1- Discrete Random Variables
- Random variables which may take only a countable number
of distinct values - finite sample space
- The probability distribution for a discrete random variable is
called the Probability Mass Function (pmf): P(x_i) = P(X = x_i)
- Facts about the pmf:
- 0 ≤ P(x_i) ≤ 1, Σ_i (all values) P(X = x_i) = 1
- P(X = x_i ∩ X = x_j) = 0 if i ≠ j (disjoint)
- P(X = x_i U X = x_j) = P(X = x_i) + P(X = x_j) if i ≠ j
- P(X = x_1 U X = x_2 U ... U X = x_k) = 1
- Cumulative Distribution Function (CDF) F(x):
F(x) = P(X ≤ x) = Σ_{x_i ≤ x} P(X = x_i)
P(X = x_i) = F(x_i) − F(x_{i−1})
Suppose we throw two dice and the random variable, X, is the
sum of the two dice
Possible values of X are {2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
P(X=2) = P(X=12) = 1/36, P(X=3) = P(X=11) = 2/36
P(X=4) = P(X=10) = 3/36, P(X=5) = P(X=9) = 4/36
P(X=6) = P(X=8) = 5/36, P(X=7) = 6/36
Note: Σ_{i=2}^{12} P(X = i) = 1
(Figure: the pmf of X, peaking at 6/36 ≈ 0.167 for X = 7, and
the CDF rising from 0 to 1 over the values 2 to 12.)
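The pmf and CDF above can be reproduced by enumerating the 36 equally likely outcomes; this is a small illustrative sketch, not part of the slides:

```python
from fractions import Fraction
from collections import Counter

# pmf of X = sum of two fair dice, by enumerating all 36 outcomes
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
pmf = {x: Fraction(c, 36) for x, c in sorted(counts.items())}

assert pmf[2] == pmf[12] == Fraction(1, 36)
assert pmf[7] == Fraction(6, 36)
assert sum(pmf.values()) == 1          # total probability is 1

# CDF: F(x) = P(X <= x), a running sum of the pmf
cdf, acc = {}, Fraction(0)
for x, p in pmf.items():
    acc += p
    cdf[x] = acc

print(cdf[7])   # P(X <= 7) = 21/36
```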
2- Continuous Random Variables
- Take continuous values - infinite sample space
- The probability distribution is called the Probability Density
Function (PDF) f(x)
- The probability of a given value is always 0: P(X = x_i) = 0.
Why? Infinite sample space
- Instead, we compute:
P(a ≤ X ≤ b) = ∫_a^b f(x) dx
- Properties of the PDF:
1. f(x) ≥ 0 for all x in R_X
2. ∫_{R_X} f(x) dx = 1
3. f(x) = 0 if x is not in R_X
F(b) − F(a) = P(a ≤ X ≤ b) = ∫_a^b f(x) dx
Cumulative Probability Distribution (CDF):
F(v) = P(X ≤ v) = ∫_{−∞}^{v} f(x) dx
d/dx F(x) = f(x)
(Figure: the area under f(x) between a and b equals
F(b) − F(a) = P(a ≤ X ≤ b).)
Statistical Characterizations
- Mean characterizes the long term average value of the r. v.
- Variance characterizes how dynamic a r. v. is
Discrete versus Continuous Random Variables

Discrete Random Variable:
- Finite/countable sample space, e.g. {0, 1, 2, 3}
- Probability Mass Function (PMF): p(x_i) = P(X = x_i)
1. p(x_i) ≥ 0 for all i
2. Σ_i p(x_i) = 1
- CDF: P(X ≤ x) = Σ_{x_i ≤ x} p(x_i)

Continuous Random Variable:
- Infinite sample space, e.g. [0, 1], [2.1, 5.3]
- Probability Density Function (PDF) f(x):
1. f(x) ≥ 0 for all x in R_X
2. ∫_{R_X} f(x) dx = 1
3. f(x) = 0 if x is not in R_X
- CDF: P(X ≤ x) = ∫_{−∞}^{x} f(t) dt
- P(a ≤ X ≤ b) = ∫_a^b f(x) dx
Examples of distributions: Binomial - Geometric - Poisson
(discrete) - Exponential (continuous)

The number of ways to choose k items out of n:
(n choose k) = n! / (k! (n − k)!)
1- Binomial distribution (n, p)
- A fixed number of trials, n, e.g., 15 tosses of a coin;
- A binary outcome, called “success” and “failure”, e.g., head
or tail in each toss of a coin
Probability of success is p, probability of failure q = 1 – p
- Constant probability for each observation (independent
trials), e.g., probability of getting a tail is the same each
time we toss the coin
X is a r. v. that refers to the number of successes in n trials:
P(X = k) = (n choose k) p^k (1 − p)^(n−k), with mean value np
Example 1:
- Every packet has n bits. There is a probability p_B that a bit
gets corrupted.
- What is the probability that a packet has exactly 1
corrupted bit?
P(X = 1) = (n!/(1!(n−1)!)) p_B (1 − p_B)^(n−1) = n p_B (1 − p_B)^(n−1)
- What is the probability that a packet is not corrupted?
P(X = 0) = (n!/(0!n!)) p_B^0 (1 − p_B)^n = (1 − p_B)^n
- What is the probability that a packet is corrupted (at least
one bit is corrupted)?
P = 1 − P(X = 0) = 1 − (1 − p_B)^n
One way to get exactly 3 heads: HHHTT. What's the
probability of this exact arrangement?
P(H) x P(H) x P(H) x P(T) x P(T) = (1/2)^3 x (1/2)^2
Another way to get exactly 3 heads: THHHT
P(T) x P(H) x P(H) x P(H) x P(T) = (1/2)^1 x (1/2)^3 x (1/2)^1
= (1/2)^3 x (1/2)^2
The same as HHHTT
For any arrangement, the probability is (1/2)^3 x (1/2)^2.
Why? Independent events
What's the probability that you flip exactly 3 heads in 5 coin
tosses?
Example 2:
Each of the following outcomes has the same probability,
(1/2)^3 x (1/2)^2:
THHHT, HHHTT, TTHHH, HTTHH, HHTTH,
HTHHT, THTHH, HTHTH, HHTHT, THHTH
There are (5 choose 3) = 10 ways to arrange 3 heads in 5
trials, so:
P(exactly 3 heads) = 10 x (1/2)^3 x (1/2)^2
(number of arrangements x the probability of one
arrangement)
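The counting argument above is exactly the binomial pmf. A short sketch (illustrative, not from the slides) confirms the answer numerically:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 10 arrangements, each with probability (1/2)^3 * (1/2)^2 = 1/32
assert comb(5, 3) == 10
print(binom_pmf(3, 5, 0.5))   # → 0.3125, i.e. 10/32
```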
2- Discrete Distributions: Geometric Distribution
Example: what is the probability that we need k coin tosses in
order to obtain a head?
- Independent trials - a binary outcome, called “success” and
“failure”, e.g., head or tail in each toss of a coin
- Probability of success is p, probability of failure is q = 1 − p
- Constant probability for each observation (independent
trials), e.g., the probability of getting a tail is the same each
time we toss the coin
X is a r. v. that refers to the number of trials needed to
obtain the first success:
P(X = k) = (1 − p)^(k−1) p, with mean E[X] = 1/p
Example:
A wireless network protocol uses a stop-and-wait
transmission policy. Each packet has a probability p_E of
being corrupted or lost.
What is the probability that the protocol will need 3
transmissions to send a packet successfully?
Solution: 3 transmissions is 2 failures and 1 success,
therefore:
P(X = 3) = p_E^2 (1 − p_E)
What is the average number of transmissions needed per
packet?
E[X] = Σ_{i=1}^{∞} i P(X = i) = Σ_{i=1}^{∞} i (1 − p_E) p_E^(i−1)
= (1 − p_E) ∙ 1/(1 − p_E)^2 = 1/(1 − p_E)
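The stop-and-wait example can be checked by simulation; the sketch below (illustrative, not from the slides, with a made-up error probability p_E = 0.2 and a fixed seed) estimates both P(X = 3) and the mean number of transmissions:

```python
import random

def transmissions_until_success(p_err, rng):
    """Number of stop-and-wait transmissions until one succeeds,
    when each transmission fails with probability p_err."""
    n = 1
    while rng.random() < p_err:   # failure -> retransmit
        n += 1
    return n

rng = random.Random(1)
p_err = 0.2
samples = [transmissions_until_success(p_err, rng)
           for _ in range(200_000)]

# Geometric: P(X = 3) = p_err^2 (1 - p_err), E[X] = 1/(1 - p_err)
print(sum(s == 3 for s in samples) / len(samples))  # ~ 0.04 * 0.8 = 0.032
print(sum(samples) / len(samples))                  # ~ 1/0.8 = 1.25
```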
3- Discrete Distributions: Poisson Distribution
1. In a time interval of short duration ∆, the probability of
one occurrence is λ∆, where λ is the expected number of
occurrences per unit time.
The system can be regarded as a sequence of independent
Binomial trials with success probability p = λ∆
2. The probability of more than one success in any
subinterval is zero;
3. The occurrences in non-overlapping intervals are
independent
4. The probability of one success in a subinterval is constant
for all subintervals;
- X is the number of successes in time t and takes the values
0, 1, 2, 3, …:
P(X = k) = (λt)^k e^(−λt) / k!     (1)
- The Poisson distribution is good for expressing the probability
of a given number of events occurring in a fixed interval of
time, e.g., the number of packets that arrive at a router in
one second.
Poisson Distribution vs. Binomial Distribution
- The binomial distribution describes the distribution of random
events in discrete time slots, i.e., the probability of arrival in
time slot 1, 2, ... etc.
- The Poisson distribution describes the distribution of random
events in a continuous time system, i.e., the number of
arrivals within the interval [t_i, t_{i+1}]
- Both are memoryless: the probability of arrival (or success)
is independent of the probability of arrival in the previous
time slot or interval
- The binomial distribution is controlled by parameter p, but
the Poisson distribution is controlled by parameter λ
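The relationship between the two distributions can be seen numerically: with many slots (large n) and a small per-slot probability p, the binomial pmf approaches the Poisson pmf with λ = np. This comparison sketch is illustrative and not from the slides; n = 1000 and p = 0.005 are made-up values:

```python
from math import comb, exp, factorial

def binom_pmf(k, n, p):
    """Binomial pmf: C(n,k) p^k (1-p)^(n-k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """Poisson pmf: lam^k e^(-lam) / k!."""
    return lam**k * exp(-lam) / factorial(k)

# With n large and p small (lam = n*p held fixed at 5),
# the binomial probabilities approach the Poisson ones.
n, p = 1000, 0.005
for k in (0, 5, 10):
    print(k, round(binom_pmf(k, n, p), 4),
             round(poisson_pmf(k, n * p), 4))
```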
4- Continuous Distributions: Exponential Distribution
- A r. v. X that is exponentially distributed with parameter λ
has density function:
f(x) = λ e^(−λx), x ≥ 0
- Solely determined by λ:
∫_0^∞ f(x) dx = 1
E(X) = 1/λ
F(x) = ∫_0^x f(t) dt = 1 − e^(−λx)
(Figure: f(x) starts at λ and decays toward 0; F(x) rises from
0 toward 1.)
Example:
We assume that the average waiting time of one customer
is 2 minutes, so λ = 1/2 and:
f(x) = (1/2) e^(−x/2) for x ≥ 0, and 0 otherwise
- The probability that the customer waits exactly 3 minutes is:
P(x = 3) = P(3 ≤ x ≤ 3) = ∫_3^3 (1/2) e^(−x/2) dx = 0
- The probability that the customer waits between 2 and 3
minutes is:
P(2 ≤ X ≤ 3) = ∫_2^3 (1/2) e^(−x/2) dx = F(3) − F(2)
= (1 − e^(−3/2)) − (1 − e^(−1)) = e^(−1) − e^(−3/2) ≈ 0.145
- The probability that the customer waits less than 2
minutes is:
P(0 ≤ X ≤ 2) = F(2) − F(0) = F(2) = 1 − e^(−1) ≈ 0.632
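The two probabilities above follow directly from the exponential CDF; a short check (illustrative, using λ = 1/2 from the example):

```python
from math import exp

def exp_cdf(x, lam):
    """CDF of an exponential r.v.: F(x) = 1 - e^(-lam*x), x >= 0."""
    return 1 - exp(-lam * x) if x >= 0 else 0.0

lam = 0.5                      # mean waiting time 1/lam = 2 minutes
p_2_to_3 = exp_cdf(3, lam) - exp_cdf(2, lam)   # F(3) - F(2)
p_under_2 = exp_cdf(2, lam)                    # F(2)

print(round(p_2_to_3, 3))      # → 0.145  (e^-1 - e^-1.5)
print(round(p_under_2, 3))     # → 0.632  (1 - e^-1)
```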
Markov (Memoryless) Property
P(X > a + b | X > a) = P(X > b)
- Time a has passed and the (n+1)th event has not occurred yet
- X' is the remaining time until the (n+1)th event occurs
The remaining time X' follows an exponential distribution with
the same mean 1/λ as that of the inter-arrival time X
Proof:
P(X > a + b | X > a) = P(X > a + b ∩ X > a) / P(X > a)
Since (X > a + b) ⊂ (X > a):
= P(X > a + b) / P(X > a) = (1 − F(a + b)) / (1 − F(a))
= e^(−λ(a+b)) / e^(−λa) = e^(−λb) = P(X > b)
More about memoryless property
- The future is independent of the past
- The fact that it hasn’t happened yet, tells us nothing
about how much longer it will take
- Previous history does not help in predicting the future.
-It does not matter when the last customer arrived, the
distribution of the time until the next one arrives is always
the same.
- No matter how long it has been since the last arrival
happened, we would still expect to wait an average of 1/λ
until the arrival happens.
Poisson Process with rate λ
A counting process where the number of arrivals in any time
interval t follows a Poisson distribution with parameter λt,
and the inter-arrival times are independent, identically
distributed, and follow an exponential distribution with
parameter λ
If T is the waiting time until the first occurrence, then
P(T > t) = P(X = 0) = e^(−λt). From P(0) in the Poisson
distribution we can prove that T is exponentially distributed
with parameter λ.
Merging & Splitting Poisson Processes
- A_1, …, A_k are independent Poisson processes with rates
λ_1, …, λ_k
- They merge into a single Poisson process with rate
λ = λ_1 + … + λ_k
- A Poisson process with rate λ can be split into processes
A_1 and A_2 independently, with rates λp and λ(1 − p)
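Both properties can be observed in simulation. The sketch below (illustrative, not from the slides; rates 2 and 3, split probability 0.4, and the seed are made-up values) builds Poisson processes from exponential inter-arrival times:

```python
import random

def poisson_arrivals(lam, horizon, rng):
    """Arrival times of a Poisson process on [0, horizon):
    exponential(lam) inter-arrival times."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(lam)
        if t >= horizon:
            return times
        times.append(t)

rng = random.Random(7)
horizon = 10_000.0

# Merging: rates 2 + 3 should give a process of rate ~5
merged = sorted(poisson_arrivals(2.0, horizon, rng) +
                poisson_arrivals(3.0, horizon, rng))
print(len(merged) / horizon)   # close to 5 arrivals per unit time

# Splitting the merged process with p = 0.4 gives rate ~5*0.4 = 2
a1 = [t for t in merged if rng.random() < 0.4]
print(len(a1) / horizon)       # close to 2 arrivals per unit time
```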
Outline
2.1 What is QoS and Why?
2.2 Principles for QoS guarantees
2.3 QoS Protocols
2.4 Queuing theory
2.4.1 What is modeling and why?
2.4.2 Probability Review
2.4.3 Queuing theory
Queues are represented via the notation: A/S/C/K
- A: The arrival process of packets. M stands for Markovian
(Poisson) process; λ is the arrival rate (the average number
of packets arriving by unit time). The interarrival time is
exponentially distributed with mean 1/λ.
- S: The packet departure process. In case of M, the
interdeparture time (the service time) is exponentially
distributed with average service time 1/µ, where µ is the
service rate.
- C: the number of parallel servers in the system.
- K: max. number of packets in the queue (the max. number
of packets that can be accommodated in the buffer plus the
number of servers). If K is missing, K = infinity.
Ex: M/M/1 or M/M/1/N queue (Poisson arrivals, Exponential
service time)
2-78
M/M/1 Queue
- Packets arrival and departure follow Poisson distribution
- Inter-arrival and inter-departure times follow exponential
distribution
- Why Poisson?
- It is a good model for many real-world scenarios in which
arrivals occur independently of one another
2-79
M/M/1: State Transition Diagram
State k: k packets in the system (k-1 in buffer and 1 in server)
- The times spent in states are independent and exponentially
distributed
- The probability of the next state depends only upon the
current state and not upon any previous states.
- State transitions take place between neighboring states only
2-80
λ: packet arrival rate (packets per unit time); the average
inter-arrival time is 1/λ. In a stable queue, λ also equals the throughput.
µ: departure (service) rate; the average service time is 1/µ.
2-81
M/M/1: State Balance Equation
- P_n is the steady-state probability that the system is in state
n (n packets in the system): P_n = lim_{t→∞} P_n(t), where P_n(t)
is the probability of n packets in the system at time t.
- State balance equation: at each state, the rate at which
the process leaves = the rate at which it enters.

State balance equations:
State 0:  λ_0 P_0 = µ_1 P_1
State 1:  (λ_1 + µ_1) P_1 = λ_0 P_0 + µ_2 P_2
…
State n:  (λ_n + µ_n) P_n = λ_{n-1} P_{n-1} + µ_{n+1} P_{n+1}

Solving successively:
P_1 = (λ_0/µ_1) P_0
P_2 = (λ_1/µ_2) P_1
…
P_n = (λ_{n-1}/µ_n) P_{n-1}

When λ_0 = λ_1 = λ_2 = … = λ and µ_0 = µ_1 = µ_2 = … = µ:
P_n = (λ/µ) P_{n-1}
2-82
M/M/1: Steady State Probability
Utilization factor ρ (fraction of time the server is busy):
ρ = Capacity Demand / Capacity Available = λ/µ
ρ = P[system is busy], 1 - ρ = P[system is idle]
2-83
M/M/1: Steady State Probability
From P_n = ρ P_{n-1} and Σ_n P_n = 1 (valid when ρ < 1):
P_n = (1 - ρ) ρ^n
The probability that more than N packets are in the system:
P(Q > N) = 1 - P(Q ≤ N) = 1 - Σ_{i=0}^{N} (1 - ρ) ρ^i = ρ^(N+1)
2-84
- Little's formula: the average number of packets in the system =
the packet arrival rate × the average time a packet spends in the
system, i.e., E(N) = λ E(T).
- For M/M/1: E(N) = ρ/(1 - ρ) and E(T) = E(N)/λ = 1/(µ - λ); both
grow without bound as ρ → 1.
2-85
[Figure: E(N) and E(T) plotted versus ρ]
2-86
- Increasing µ or reducing λ reduces both the delay and the
number of packets in the system.
- E(T) = E(T_q) + 1/µ, where E(T_q) is the average waiting time
in the queue.
- Buffer (queue) length = (the state of the system) - (the number of
customers being served)
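The M/M/1 results above can be collected into a small calculator; λ = 3 and µ = 5 are assumed example values:

```python
lam, mu = 3.0, 5.0            # assumed example rates, lam < mu for stability
rho = lam / mu                # utilization factor

# Steady-state probability of n packets in the system
def p(n):
    return (1 - rho) * rho ** n

EN  = rho / (1 - rho)         # E(N):  average number in the system
ET  = 1 / (mu - lam)          # E(T):  average time in the system (Little: EN = lam*ET)
ETq = ET - 1 / mu             # E(Tq): average waiting time in the queue
ENq = lam * ETq               # E(Nq): average queue length (Little again)

# P(Q > N): probability that more than N packets are in the system
def p_more_than(N):
    return 1 - sum(p(i) for i in range(N + 1))   # equals rho**(N+1)

print(EN, ET, p_more_than(2))
```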
2-87
Statistical Multiplexing versus Circuit-like Reservation
- Assume m flows, each of rate λ/m, are to be transmitted
over a communication line of capacity C. The packet lengths for
all flows are exponentially distributed with mean L, so the full
line serves packets at rate µ = C/L.
1- Statistical multiplexing: the flows are merged into a single
buffer served at the full rate, i.e., an M/M/1 queue with arrival
rate λ = m(λ/m) and service rate µ. The average delay per
packet is:
T = 1/(µ - λ)
2- Circuit-like Reservation: the transmission capacity is
divided into m equal portions, one per packet stream. Each
portion behaves like an M/M/1 queue with arrival rate λ/m
and average service rate C/(mL) = µ/m. The average delay per packet
is:
T = 1/(µ/m - λ/m) = m/(µ - λ)
m times larger than that of statistical multiplexing.
2-88
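A quick numeric check of the m-fold gap, with assumed example values λ = 60, µ = 100, m = 4:

```python
lam, mu, m = 60.0, 100.0, 4   # assumed example values; mu = C/L is the full-line rate

# Statistical multiplexing: one M/M/1 queue served at the full rate
T_mux = 1 / (mu - lam)

# Circuit-like reservation: m M/M/1 queues, each with arrival rate lam/m
# and service rate mu/m
T_circuit = 1 / (mu / m - lam / m)

print(T_mux, T_circuit, T_circuit / T_mux)  # the ratio is exactly m
```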
M/M/1/N: Finite Buffer Queue
2-89
M/M/1/N: Blocking Probability P_B
2-90
- For ρ ≠ 1, the steady-state probabilities are
P_n = (1 - ρ) ρ^n / (1 - ρ^(N+1)), n = 0, …, N, so the blocking
probability (the probability that an arriving packet finds the
system full and is dropped) is:
P_B = P_N = (1 - ρ) ρ^N / (1 - ρ^(N+1))
2-91
- Rate at which packets enter the queue (throughput) = (1 - P_B) λ
- Utilization factor (fraction of time the server is occupied) =
(1 - P_B) λ/µ = (1 - P_B) ρ = P(server busy) = 1 - P_0
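These quantities are straightforward to compute; λ = 8, µ = 10, N = 5 are assumed example values:

```python
lam, mu, N = 8.0, 10.0, 5     # assumed example values; N = system capacity
rho = lam / mu

# Blocking probability: steady-state probability of state N
P_B = (1 - rho) * rho ** N / (1 - rho ** (N + 1))

throughput = (1 - P_B) * lam          # rate of packets actually admitted
utilization = (1 - P_B) * rho         # fraction of time the server is busy

print(round(P_B, 4), round(throughput, 3), round(utilization, 4))
```

Note that even with λ close to µ, the finite buffer keeps the system stable: excess traffic is shed as blocked packets rather than accumulating in an unbounded queue.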
M/M/m/m: Multiple Server - Finite Buffer Queue
2-92
Given λ and µ, how many servers do we need to guarantee
P_B < 10^-3?
M/M/m/m: Blocking Probability P_B
2-93
Utilization factor = Capacity Demand / Capacity Available = λ/(m·µ)
The blocking probability is given by the Erlang-B formula:
P_B = ((λ/µ)^m / m!) / (Σ_{k=0}^{m} (λ/µ)^k / k!)
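The server-sizing question can be answered with the Erlang-B formula computed by its standard numerically stable recursion; λ = 30 and µ = 10 are assumed example values (offered load a = λ/µ = 3 Erlangs):

```python
lam, mu = 30.0, 10.0          # assumed example rates
a = lam / mu                  # offered load in Erlangs (here 3.0)

def erlang_b(m, a):
    """Blocking probability of M/M/m/m via the stable Erlang-B recursion."""
    B = 1.0                   # B(0, a) = 1
    for k in range(1, m + 1):
        B = a * B / (k + a * B)
    return B

# Smallest m with P_B < 1e-3
m = 1
while erlang_b(m, a) >= 1e-3:
    m += 1
print(m, erlang_b(m, a))
```

The recursion avoids computing large factorials directly, which matters once m grows into the hundreds.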
M/M/m: m Parallel Servers with Infinite Buffer
- Packets join a single queue.
- Whenever any of the servers is idle, it serves the first
packet in the single queue.
- All of the servers are identical; any packet can be served by
any server.
- The expected service capacity per time unit is then m·µ.
2-94
2-95
2-96
E(T) = E(T_q) + 1/µ
Little's Formula ⇒ E(T_q) = E(N_q)/λ
E(N_q) = Σ_{n=m}^{∞} (n - m) P_n = … = P_0 (λ/µ)^m ρ / (m! (1 - ρ)^2),
where ρ = λ/(m·µ)
Little's Formula ⇒ E(N) = λ E(T) = λ(E(T_q) + 1/µ) = E(N_q) + λ/µ
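Putting the M/M/m formulas above together; λ = 30, µ = 10, m = 5 (so ρ = 0.6) are assumed example values:

```python
from math import factorial

lam, mu, m = 30.0, 10.0, 5    # assumed example values; rho = lam/(m*mu) < 1
a = lam / mu                  # offered load (= m * rho)
rho = a / m

# P0: probability of an empty system
P0 = 1 / (sum(a**k / factorial(k) for k in range(m))
          + a**m / (factorial(m) * (1 - rho)))

# E(Nq) = P0 * (lam/mu)^m * rho / (m! * (1-rho)^2), then Little's formula
ENq = P0 * a**m * rho / (factorial(m) * (1 - rho) ** 2)
ETq = ENq / lam               # average wait in the queue
ET  = ETq + 1 / mu            # total time in the system
EN  = lam * ET                # = ENq + lam/mu

print(round(ENq, 4), round(ET, 4))
```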
105
Discouraged Arrivals
- Basic flow control: packet arrivals depend on the state of the
queue.
- Arrivals tend to get discouraged when more and more
packets are present in the system, so the arrival rate λ_n
decreases as the number of packets n grows.
2-97
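One common discouraged-arrivals model (an assumption here, not necessarily the exact model on the original slide) takes λ_n = λ/(n+1) and µ_n = µ; the balance equations then give P_n = P_0 (λ/µ)^n / n!, a Poisson distribution with parameter λ/µ. A sketch with assumed rates λ = 4, µ = 2:

```python
from math import factorial, exp

lam, mu = 4.0, 2.0            # assumed example rates

# Assumed model: lambda_n = lam/(n+1), mu_n = mu. Balance equations give
# P_n = P_0 * (lam/mu)^n / n!, so the steady state is Poisson(lam/mu).
r = lam / mu

def p(n):
    return exp(-r) * r**n / factorial(n)

# The probabilities sum to 1 and the mean number in the system is r
total = sum(p(n) for n in range(60))
mean = sum(n * p(n) for n in range(60))
print(round(total, 6), round(mean, 4))
```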
2-98
Network of M/M/1 Queues
2-99
- External flows with rates γ_1, γ_2, γ_3 traverse three links with
service rates µ_1, µ_2, µ_3. The arrival rate at each link is the
sum of the rates of the flows that traverse it:
λ_1 = γ_1 + γ_2
λ_2 = γ_1 + γ_2 + γ_3
λ_3 = γ_1 + γ_3
- Each link is then analyzed as an independent M/M/1 queue with
arrival rate λ_i and service rate µ_i.
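With assumed example flow rates γ_i and service rates µ_i, the per-link arrival rates and M/M/1 delays follow directly:

```python
# Assumed example flow rates (gamma_i) and per-link service rates (mu_i),
# matching the link equations above
g1, g2, g3 = 2.0, 1.0, 1.5
mu1, mu2, mu3 = 10.0, 10.0, 10.0

lam1 = g1 + g2        # flows 1 and 2 traverse link 1
lam2 = g1 + g2 + g3   # all three flows traverse link 2
lam3 = g1 + g3        # flows 1 and 3 traverse link 3

# Treating each link as an independent M/M/1 queue, the per-link delays:
delays = [1 / (mu - lam) for mu, lam in [(mu1, lam1), (mu2, lam2), (mu3, lam3)]]
print([round(d, 4) for d in delays])
```

Link 2 carries the most traffic and therefore has the largest delay; adding the per-link delays along a flow's path gives that flow's end-to-end delay.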
2-100
2-101
Questions
Mohamed M E A Mahmoud