
Lecture 02: Queuing Theory and Analysis

1
Introduction

 How to analyze changes in network workloads?
(i.e., queuing theory is a helpful tool to use)
 Analysis of system (network) load and performance
characteristics
◼ response time
◼ throughput
 Performance tradeoffs are often not intuitive
 Queuing theory, although mathematically complex,
often makes analysis very straightforward

2
Single-Server Queuing System

[Diagram: items (messages, packets, cells) arrive at the queuing system (delay box), depart after service, or are lost]

3
Notation

[Diagram: customer population feeding a queue served by one or more servers]

 Imagine waiting for a PC in the computer lab
(or checking out at a grocery store, or …)
◼ Resources are “servers”
◼ People are “customers”
 If all servers are busy, customers wait in a “queue”
 For queuing analysis, need to specify:
◼ Population size
◼ Number of servers
◼ System capacity
◼ Arrival process
◼ Service time distribution
◼ Service discipline

4
Notation…
 Population size
◼ Potential customers who can enter the queue
◼ Most real systems are finite, but easier to analyze if infinite
 Number of servers
◼ Can be one or more
◼ Assume identical; if not, a separate queuing system for each
 System capacity
◼ Number that can wait plus number being served
◼ Most systems have finite queue length, but easier to analyze if infinite

5
Notation…

 Arrival process
◼ Students arrive at times t1, t2, …, tj
◼ Interarrival times are τj = tj - tj-1
◼ Usually assume independent, identically
distributed (IID)
◼ Most common are Poisson arrivals
 IID and exponentially distributed (f(x) = λe^(-λx))
 Service time distribution
◼ Amount of time each customer spends at the server
◼ Again, usually IID
◼ Most common is exponential

6
Notation…

 Service discipline
◼ Order in which customers are called for servicing
◼ Most common is FCFS (first-come, first-served)
 Kendall notation
◼ A/S/m/B/K/SD
◼ A is the arrival time distribution
◼ S is the service time distribution
◼ m is the number of servers
◼ B is the number of buffers
◼ K is the population size
◼ SD is the service discipline
 Some typical distributions used:
◼ M - Exponential
 M means “memoryless” in that the current arrival is
independent of the past
◼ D - Deterministic
◼ G - General
 Valid for all distributions
7
Notation Example
 M/M/3/20/1500/FCFS – single queue system
with:
◼ Exponentially distributed arrivals
◼ Exponentially distributed service times
◼ Three servers
◼ Capacity 20 (queue size is 20 – 3 = 17)
◼ Population is 1500 total
◼ Service discipline is FCFS
 Often, assume infinite queue and infinite
population and FCFS, so just → M/M/3

8
Variables for Queues

[Diagram: timeline from the previous arrival to the next arrival (interarrival time τ), waiting time w in the queue, then service time s at the servers]

 τ = interarrival time
 λ = mean arrival rate
◼ λ = 1/E[τ]
◼ Can sometimes depend upon jobs in system
 s = service time per job
 μ = mean service rate per server
◼ μ = 1/E[s], total rate mμ for m servers
 nq = number of jobs waiting in queue
 ns = number of jobs receiving service
 n = number of jobs in system
◼ n = nq + ns
 r = response time
 w = waiting time
Note, all except λ and μ are random
9
Rules
 Stability Condition
◼ If the number of jobs becomes infinite, system is
unstable. For stability, mean arrival rate less than
mean service rate
λ < mμ
◼ Does not apply to finite queue or finite population
systems
 Finite population cannot have infinite queue
 Finite queue drops if too many arrive so never has
infinite queue

10
Rules…
 Number in System versus Number in Queue
◼ Number of jobs in system equals number waiting plus number in service
n = nq + ns
◼ Also means:
E[n] = E[nq]+E[ns]
◼ So mean number of jobs is equal to mean number in queue
plus mean number being serviced
Var[n] = Var[nq]+Var[ns]
◼ Variance of jobs in system equals variance in queue plus variance in service
 Holds when the service rate of the servers is independent of the jobs in the queue, so
Cov(nq,ns) = 0

11
Rules…

 Number versus Time


◼ If jobs are not lost due to buffer overflow, the mean
number of jobs is related to response time as:
mean jobs in system = arrival rate x mean response time
◼ Similarly
mean jobs in queue = arrival rate x mean waiting time
◼ Above equations are known as “Little’s Law”. For
finite buffers, can use the effective arrival rate
(ignoring drops)

12
Rules…
 Time in System versus Time in Queue
◼ Time spent in system is sum of queue and service
time
r = w + s
◼ In particular:
E[r] = E[w] + E[s]
◼ If service rate independent of jobs in queue
Cov(w,s) = 0
Var[r] = Var[w] + Var[s]

13
Little’s Law

Mean jobs in system = arrival rate x mean response time

 Very commonly used in theorems


 Applies if jobs entering equals jobs serviced
◼ No new jobs created, no new jobs lost
◼ If lost, can adjust arrival rate to mean only those not lost
 Intuition: suppose we monitor the system and keep a log of
arrivals and departures. Over a long enough time, the number of
arrivals is about the same as the number of departures.
◼ Let there be N arrivals in long time T. Then:
arrival rate = total arrivals / total time = N/T

14
Applying Little’s Law

 Can be applied to subsystem, too


◼ mean jobs in queue = arrival rate x mean waiting time
◼ mean jobs being serviced = arrival rate x mean service time
 Example:
◼ A server satisfies each I/O request in an average of 100 msec. The I/O
rate is about 100 requests/sec. What is the mean number of requests
at the server?
◼ Mean number at server = arrival rate x response time
= (100 requests/sec) x (0.1 sec)
= 10 requests

15
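As a quick check of the arithmetic above, here is a minimal Python sketch (the function name littles_law is ours, not part of the lecture):

```python
def littles_law(arrival_rate, mean_response_time):
    """Little's Law: mean jobs in system = arrival rate x mean response time."""
    return arrival_rate * mean_response_time

# I/O server example from this slide: 100 requests/sec, 100 msec per request
print(littles_law(arrival_rate=100, mean_response_time=0.1))  # -> 10.0 requests
```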
Types of Stochastic Processes

 Number of jobs at CPU of computer system at


time t is a random variable (n(t))
 To specify such random variables, need
probability distribution function for each t
◼ Same with waiting time (w(t))
 These random functions of time or sequences
are called stochastic processes
 Useful for describing state of queuing systems

16
Types…

 Discrete-State and Continuous-State Process


◼ Depends upon values its state can take
◼ If finite or countable → discrete
◼ Ex: jobs in system n(t) can only take values 0, 1, 2 …
countable, so discrete-state process
◼ Ex: waiting time w(t) can take any real value, so continuous-
state process
 Markov Process
◼ If future states depend only on the present and are
independent of the past, then it is called a Markov process
◼ Makes it easier to analyze, since we do not need the past
trajectory, only the present state
◼ Also memoryless in that we do not need the length of time
spent in the current state
17
Types…
 Birth-Death Process
◼ A Markov process in which transitions are restricted to neighboring
states is called a birth-death process
◼ Can represent states by integers, s.t. process in state n can
only go to state n+1 or n-1
◼ Ex: jobs in queue with single server can be represented by
birth-death process
 Arrival (birth) causes state to change by +1 and departure after
service (death) causes state to change by –1
 Only if arrive individually, not in batch

18
Types…

 Poisson Processes
◼ If interarrival times are IID and exponentially
distributed, then number of arrivals over interval
[t,t+x] has a Poisson distribution → Poisson
Process
◼ Popular because arrivals are memoryless
◼ Also:
 Merging k Poisson streams with mean rates λi gives
another Poisson stream with mean rate:
λ = Σ λi

19
Types…
 Poisson Processes (continued)
◼ Also
 If a Poisson stream is split into k substreams with
probabilities pi, each substream is Poisson with mean
rate pi λ
 If arrivals to a single server with exponential service
times are Poisson with mean λ, departures are also
Poisson with mean λ, provided λ < μ
 Same relationship holds for m servers as long as the total
arrival rate is less than the total service rate

20
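To illustrate the merging property, a small simulation sketch, assuming numpy is available (the rates and window below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
rates = [2.0, 3.0, 5.0]   # per-stream mean rates (illustrative values)
T = 10_000.0              # length of observation window

# Count arrivals of each Poisson stream over [0, T], then merge the streams.
counts = [rng.poisson(lam * T) for lam in rates]
merged_rate = sum(counts) / T

print(merged_rate)        # close to sum(rates) = 10.0
```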
Utilization Law
 Given average arrival rate λ.
 Average utilization of a system is time busy over total time
U = b/T
where b is the busy time during total time T
 Factor into:
U = b/T = (b/d) (d/T)
where d is the number of departures (and arrivals) during time T
 Notice, (b/d) is the average time spent servicing each of the d
jobs. Call it s (s = b/d)
 Since balanced (in == out), λ = d/T
 So:
U = λs (Utilization Law)

21
Applying Utilization Law
 Consider I/O system with one disk and one
controller. If average time required to service
each request is 6 msec, what is the maximum
request rate it can tolerate?
 Maximum will occur when 100% utilized, so
U=1
 Substituting U = λs, we get:
1 = λmax s
 So, λmax = 1 / (6 x 10^-3) = 167 requests/sec

22
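The same computation in a couple of lines of Python (variable names are ours):

```python
service_time = 6e-3             # seconds per request (s)
max_rate = 1.0 / service_time   # at saturation U = lambda * s = 1
print(max_rate)                 # ~166.7 requests/sec
```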
Utilization Law
 Notice, the utilization law U = λs can be written as:
U = λ/μ
where μ is the average service rate
 Ratio λ/μ is often called the traffic intensity
◼ Given its own symbol ρ = λ/μ
 If (ρ > 1) then λ > μ (arrival rate greater than service rate)
◼ Jobs arrive faster than they can be processed
◼ Queue grows to infinity
◼ Unstable
 Must have (ρ < 1) for stability (so U is never > 100%)

23
Operational Analysis
 Using Little’s Law and Utilization Law, we can say things
about average behavior
◼ Requires no assumptions about the distributions of arrival
or service times
◼ High-level view
 But cannot say anything about, say, the maximum or worst case
 Will use stochastic distributions and queuing theory to get
more detailed analysis

24
Single Queue, Single Server - M/M/1 Queue
 Only one queue, exponentially distributed
arrivals and service time
◼ Ex: CPU in a system, processes in queue
 No buffer or population limitations
 Can often be modeled as birth-death process
◼ Jobs arrive individually (not batch)
◼ Changes state to n+1 (birth), n-1 (death)
 Transitions depend only on current state
[State diagram: states 0, 1, 2, … with arrival (birth) rate λ and service (death) rate μ between neighboring states]
 Notation: the probability of being in state n is Pn

25
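As an aside (not part of the lecture), this birth-death view can be simulated directly; the sketch below assumes Poisson arrivals and exponential service, and its output can be compared with the closed-form results derived on the next slides:

```python
import random

def simulate_mm1(lam, mu, num_events=200_000, seed=1):
    """Simulate the birth-death chain of an M/M/1 queue and return
    the time-averaged number of jobs in the system."""
    random.seed(seed)
    n, clock, area = 0, 0.0, 0.0
    for _ in range(num_events):
        # An arrival (birth) is always possible; a departure (death) only if n > 0.
        rate = lam + (mu if n > 0 else 0.0)
        dt = random.expovariate(rate)
        area += n * dt
        clock += dt
        if random.random() < lam / rate:
            n += 1   # birth: job arrives
        else:
            n -= 1   # death: job completes service
    return area / clock

# With lam=125, mu=500 (rho=0.25) this should be close to rho/(1-rho) = 0.33
print(simulate_mm1(125, 500))
```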
M/M/1…
 At any state, probability of going up same as
probability of coming down (balanced)
λPn-1 = μPn, or
Pn = (λ/μ)Pn-1 = ρPn-1
 We have: P1 = ρP0, P2 = ρP1, …
 In general, probability of exactly n jobs in the
system is:
Pn = ρ^n P0
 We want a closed form for Pn (with no P0)

26
M/M/1…
 All probabilities add to 1, so:
Σ Pn = Σ ρ^n P0 = 1, n = 0, 1, …, ∞
 Expanding:
ρ^0 P0 + ρ^1 P0 + ρ^2 P0 + … = 1
P0 = 1 / (ρ^0 + ρ^1 + ρ^2 + …) = 1 / Σ ρ^n
 Since ρ < 1 for stability, it can be shown that the sum
converges:
P0 = 1-ρ and Pn = (1-ρ)ρ^n
 Can now derive many useful performance
parameters for M/M/1 queue

27
M/M/1…

 Mean jobs in system
E[n] = Σ n Pn = Σ n(1-ρ)ρ^n = ρ/(1-ρ), n = 0,…,∞
 Variance of jobs in system
Var[n] = E[(n - E[n])^2] = E[n^2] - (E[n])^2
= Σ n^2(1-ρ)ρ^n - (Σ n(1-ρ)ρ^n)^2 = ρ/(1-ρ)^2
 Probability of n or more jobs
Pr(≥ n jobs in system) = Σ Pj, j = n,…,∞
= Σ (1-ρ)ρ^j = ρ^n
 Mean response time
◼ Using Little’s law
 mean jobs = mean arrival rate x mean response time
E[n] = λE[r]
E[r] = E[n]/λ = (ρ/(1-ρ))(1/λ) = (1/μ) / (1-ρ)
28
M/M/1…

 Mean jobs in queue (use n-1 since at most one is being
serviced)
E[nq] = Σ (n-1)Pn = Σ (n-1)(1-ρ)ρ^n = ρ^2/(1-ρ)
 Utilization
◼ When no jobs in system, idle
◼ When jobs in system, busy
◼ Server is busy when 1 or more jobs in system
◼ Average load, or average utilization
U = 1 - P0 = 1 - (1-ρ) = ρ

29
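A minimal Python sketch that collects these M/M/1 results (the helper names are ours, not from the lecture):

```python
def mm1_metrics(lam, mu):
    """Steady-state M/M/1 metrics; requires rho = lam/mu < 1 for stability."""
    rho = lam / mu
    assert rho < 1, "unstable: arrival rate must be less than service rate"
    return {
        "utilization": rho,                            # U = rho
        "mean_jobs": rho / (1 - rho),                  # E[n]
        "mean_jobs_in_queue": rho ** 2 / (1 - rho),    # E[nq]
        "mean_response_time": (1 / mu) / (1 - rho),    # E[r]
    }

def prob_n_or_more(lam, mu, n):
    """Probability of n or more jobs in an M/M/1 system: rho^n."""
    return (lam / mu) ** n

# Values used in Example 1 on the next slides: 125 pps arrivals, 500 pps service
print(mm1_metrics(125, 500))
print(prob_n_or_more(125, 500, 13))   # ~1.49e-8
```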
Example 1
 Given the following parameters for a network
gateway:
➢ Speed 4 Mbps
➢ Packet size 1000 bytes
➢ Arrival rate of 125 packets/sec
◼ What is the probability of overflow with only 12
buffers?
◼ How many buffers are needed to keep packet loss to
1 in 1,000,000?

30
Example…
 Arrival rate λ = 125 pps
 Service rate:
4,000,000 bps / 8 = 500,000 bytes/sec
500,000 / 1000 = 500 pps
So, μ = 500 pps
 Utilization (traffic intensity):
ρ = λ/μ = 125/500 = .25
 Mean packets in gateway:
ρ/(1-ρ) = .25/.75 = .33
 Probability of n packets in gateway:
Pr(n) = (1-ρ)ρ^n = .75(.25)^n
 Mean time in gateway:
(1/μ) / (1-ρ) = (1/500)/(1-.25) = 2.66 ms
 Prob of overflow = Pr(13 or more)
= ρ^13 = .25^13 = 1.49x10^-8
 ≈ 15 packets per billion
 To limit loss to less than 10^-6:
ρ^n ≤ 10^-6
n > log(10^-6)/log(.25) > 9.96
◼ So, 10 buffers
31
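The same numbers can be checked with a short script (a sketch, using the M/M/1 formulas above):

```python
import math

lam = 125.0                         # packets/sec
mu = 4_000_000 / 8 / 1000           # 4 Mbps, 1000-byte packets -> 500 packets/sec
rho = lam / mu                      # 0.25

print(rho / (1 - rho))              # mean packets in gateway ~0.33
print((1 / mu) / (1 - rho) * 1000)  # mean time in gateway ~2.67 ms
print(rho ** 13)                    # P(13 or more packets) ~1.49e-8
print(math.ceil(math.log(1e-6) / math.log(rho)))  # buffers for loss < 1e-6 -> 10
```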
Example 2
 Given a web server
◼ Time between requests is exponential, with a mean
of 8 ms
◼ Time to process is exponential, with an average service
time of 5 ms
i) What is the average response time?
ii) How much faster must the server be to halve this
average response time?
iii) How big a buffer is needed so that only 1 in 1,000,000,000
requests is lost?

32
Example 2
 Request rate
λ = 1/8 = 0.125 requests per ms
 Service rate
μ = 1/5 = 0.2 requests per ms
 Utilization
ρ = λ/μ = .125/.2 = .625
◼ So, 62.5% of capacity
A) Average response time
(1/μ) / (1-ρ) = (1/.2)/(1-.625)
= 13.33 ms
B) To halve, want
(1/μ) / (1-ρ) = 6.665 ms
◼ Assume λ fixed, so change μ
μ = 1/6.665 + 0.125 = .275
So, (.275-.2)/.2 x 100
= 37.5% faster
C) 1 in 1 billion requests lost
◼ Buffer size k:
Pr(k or more) ≤ 10^-9
ρ^k ≤ 10^-9
k > log(10^-9) / log(.625) ≈ 44.1
So, k = 45 buffers
33
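A short Python check of parts A-C (a sketch; variable names are ours):

```python
import math

lam, mu = 1 / 8, 1 / 5                 # requests per ms
rho = lam / mu                          # 0.625

r = (1 / mu) / (1 - rho)                # A) average response time ~13.33 ms
print(r)

mu2 = 1 / (r / 2) + lam                 # B) service rate needed to halve E[r]
print((mu2 - mu) / mu * 100)            #    ~37.5% faster

k = math.ceil(math.log(1e-9) / math.log(rho))   # C) smallest k with rho^k <= 1e-9
print(k)                                #    45
```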
Single Queue, Multiple Servers - M/M/c
 ‘c’ is the number of servers
 Assume arrival rate λ is the same
 Each server can now serve μ jobs per unit time
◼ Mean service rate is cμ when all c servers are busy
◼ Note, assumes no “cost” for choosing a server
 If any server is idle (fewer than c jobs in system, say n), the job is
serviced immediately
 If all c servers are busy, the job waits in queue

34
M/M/c Queue…

[State diagram: states 0, 1, 2, …, c, c+1, … with arrival rate λ between all neighboring states and service rates μ, 2μ, 3μ, …, (c-1)μ, cμ, cμ, … going down]

 From above:
λn = λ, n = 0,…,∞
μn = nμ, n = 1,…,c-1
μn = cμ, n = c,…,∞
 Using balance equations:
Pn = [(cρ)^n/n!] P0, n = 1,…,c
Pn = [(cρ)^n/(c! c^(n-c))] P0, n > c
 Where ρ = λ/(cμ) is the traffic intensity
◼ Also the utilization of each server
 Find P0 since the probabilities must sum to 1:
Σ Pn = 1
 Solving for P0:
P0 = 1 / [ Σ (cρ)^n/n! (n = 0 to c-1) + (cρ)^c/(c!(1-ρ)) ]

35
M/M/c Queue…

 A newly arriving job will wait if all servers are
busy, which happens if there are c or more jobs in the system.
Pr(≥ c jobs) = Pc + Pc+1 + Pc+2 + … = Σ Pn (n from c to ∞)
= P0 (cρ)^c/c! x Σ ρ^(n-c) (n from c to ∞)
= [(cρ)^c / (c!(1-ρ))] P0
◼ Known as Erlang’s C formula (ς)
 Mean jobs in system
E[n] = Σ n Pn = ρ[P0 (cρ)^c]/[c!(1-ρ)^2] + cρ
= cρ + ρς/(1-ρ)

36
M/M/c Queue…
 Mean jobs in queue
E[nq] = Σ (n-c)Pn = P0 (cρ)^c/c! x Σ (n-c)ρ^(n-c)
= ρ[P0 (cρ)^c]/[c!(1-ρ)^2] = ρς/(1-ρ)
 Mean response time
◼ Using Little’s law
 mean jobs = mean arrival rate x mean response time
E[n] = λE[r]
E[r] = E[n]/λ
E[r] = 1/μ + ς/[cμ(1-ρ)]
 Mean waiting time E[w] = E[nq]/λ
= [ρς/(1-ρ)]/λ = ς/[cμ(1-ρ)]

37
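A compact Python sketch of the M/M/c formulas above (the function name mmc_metrics is ours):

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Steady-state M/M/c metrics; requires rho = lam/(c*mu) < 1."""
    rho = lam / (c * mu)
    assert rho < 1, "unstable"
    # P0 from the normalization on the previous slide
    p0 = 1.0 / (sum((c * rho) ** n / factorial(n) for n in range(c))
                + (c * rho) ** c / (factorial(c) * (1 - rho)))
    erlang_c = (c * rho) ** c / (factorial(c) * (1 - rho)) * p0   # Pr(job must wait)
    mean_jobs = c * rho + rho * erlang_c / (1 - rho)              # E[n]
    resp_time = 1 / mu + erlang_c / (c * mu * (1 - rho))          # E[r]
    wait_time = erlang_c / (c * mu * (1 - rho))                   # E[w]
    return p0, erlang_c, mean_jobs, resp_time, wait_time

# Web server example from the next slides: lam = 0.125/ms, mu = 0.2/ms, c = 4
print(mmc_metrics(0.125, 0.2, 4))
```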
Example
 How does response time for previous M/M/1
Web server change if number of servers
increased to 4?
◼ Can model as M/M/4

38
Example…
 Request rate =0.125
 Service rate =0.2  Erlang’s C formula
 Traffic intensity ◼  = (4x0.1563)4 (0.5352)
◼  =  /(c ) = 0.1563 4!(1-.01563)
 Probability of idle = .0040326
 P0 = ____1_________  So, average response time:
[(c)n/n!) + E[r] = 1/ + /c(1-)
(c)c/(c!(1-))] = 1/.2 + .004326/(4)(.2)(1-
.1563)
P0= 0.532
= 5.01 ms
 Thus, increasing servers by
4 reduces response time by
appx 62%

39
Jackson’s Theorem

⚫ Assumptions:
– the queuing network has m nodes, each providing exponential
service
– items arriving from outside the system at any node arrive with a
Poisson rate
– once served at a node, an item moves immediately to another
with a fixed probability, or leaves the network
⚫ Jackson’s Theorem states:
– each node is an independent queuing system with Poisson inputs
determined by partitioning, merging and tandem queuing
principles
– each node can be analyzed separately using the M/M/1 or M/M/N
models
– mean delays at each node can be added to determine mean
system (network) delays

40
Jackson’s Theorem - Application in Packet Switched Networks

[Diagram: external traffic offered to a packet switched network of L links]

Internal load:
λ = Σ λi (i = 1 to L)
where:
λ = total load on all links in the network
λi = load on link i
L = total number of links

External load, offered to the network:
γ = Σ Σ γjk (over all source-destination pairs j, k)
where:
γ = total workload in packets/sec
γjk = workload between source j and destination k
N = total number of (external) sources and destinations

Note:
• Internal load > offered load
• Average length for all paths: E[number of links in path] = λ/γ
• Average number of items waiting and being served on link i: ri = λi Tri
• Average delay of packets sent through the network:
T = (1/γ) Σ λiM / (Ri - λiM) (i = 1 to L)
where M is the average packet length and Ri is the data rate on link i

41
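A small Python sketch of the delay formula above (the link loads, rates, and packet length below are invented for illustration):

```python
def network_delay(link_loads, link_rates, gamma, mean_pkt_bits):
    """Average network delay T = (1/gamma) * sum(lam_i*M / (R_i - lam_i*M))."""
    total = 0.0
    for lam_i, r_i in zip(link_loads, link_rates):
        assert lam_i * mean_pkt_bits < r_i, "link overloaded"
        total += lam_i * mean_pkt_bits / (r_i - lam_i * mean_pkt_bits)
    return total / gamma

# Illustrative 3-link network: loads in packets/sec, rates in bps, 1000-byte packets
lam = [200.0, 150.0, 100.0]
rates = [4e6, 4e6, 2e6]
gamma = 250.0                    # externally offered load, packets/sec
print(network_delay(lam, rates, gamma, mean_pkt_bits=8000))   # mean delay in sec
print(sum(lam) / gamma)          # average number of links per path
```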
