
Queueing Theory Basics

Model of a Queue

[Diagram: arrivals enter a queue consisting of waiting positions and one or more servers, and then depart]
A “Queue” is basically a model of a node providing service, where arrivals come at possibly random times and ask for service of possibly random length from the one or more servers present in the queue. A computer network may then be modelled as a set of interconnected queues, i.e. a “Queueing Network”.
Input Specifications for a Queue

• Arrival Process Description

• Service Process Description


• Number of Servers
• Number of Waiting Positions
• Special Queueing Rules, e.g. -
• order of service (FCFS, LCFS, SIRO, etc.)
• balking, reneging, jockeying for queue position
Additional Input Specifications for a Queueing Network

For networks of queues, one must provide additional information, such as -
• Interconnections between the queues
• Routing Strategy - deterministic, class based or
probabilistic with given routing probabilities
• Strategy followed to handle blocking if the
destination queue is one of finite capacity (i.e. with
finite number of waiting positions)
An Open Queueing Network

Jobs (Arrivals) enter the network from outside and eventually leave the network after obtaining service from one or more queues in the queueing network.
A Closed Queueing Network

A fixed number of jobs perpetually circulate in the network, moving to the next queue after obtaining the required service at the earlier one.
No additional jobs enter the system, nor do jobs leave the network.
A Queue or a Queueing Network may be studied in different ways (Analysis and/or Simulation; we consider analytical approaches here), and the results may be provided from different points of view -

• That of a customer entering the system for service
• That of a service provider who provides the resources (servers, buffers etc.)
Parameters of interest for a customer arriving to the queue for service
(Service Parameters)

• Queueing delay
• Total delay
• Number waiting in queue
• Number in the system
• Blocking probability (for finite capacity queues)
• Probability that the customer has to wait for service

These may be obtained through Transient Analysis or Equilibrium Analysis, and as Mean Results or Probability Distributions.
Parameters of interest for the Service Provider
(Service Parameters)

• Server Utilization/Occupancy
• Buffer Utilization/Occupancy
• Total Revenue obtained
• Total Revenue lost
• Customer Satisfaction (Grade of Service)

These may be obtained through Transient Analysis or Equilibrium Analysis, and as Mean Results or Probability Distributions.
General approach to the study of queues and queueing networks

“Subject to appropriate modelling assumptions, obtain exact analytical results for the mean performance parameters under equilibrium conditions”

In some special cases, we may also obtain results on higher moments (variance etc.) or probability distributions and/or their transforms.
Transient analysis is generally not feasible, except for some very simple cases. In such cases, simulation methods are preferred.
In some cases, especially for queueing networks, exact analysis is not feasible but good approximate analytical methods are available.
Analysis of a Simple Queue
(with some simplifying assumptions)

[Diagram: arrivals with an average arrival rate of λ enter an infinite buffer (infinite number of waiting positions) and are served by a single server with service rate μ]

Assume that, as Δt→0 -

Arrival Process
P{one arrival in time Δt} = λΔt
P{no arrival in time Δt} = 1 − λΔt
P{more than one arrival in time Δt} = O((Δt)²) ≈ 0
Mean Inter-arrival time = 1/λ

Service Process
P{one departure in time Δt} = μΔt
P{no departure in time Δt} = 1 − μΔt
P{more than one departure in time Δt} = O((Δt)²) ≈ 0
Mean Service time = 1/μ

P{one or more arrivals and one or more departures in time Δt} = O((Δt)²) ≈ 0
We have not really explicitly said it, but the implication of our earlier description of the arrivals and departures as Δt→0 is that -

• The arrival process is a Poisson process with exponentially distributed random inter-arrival times
• The service time is an exponentially distributed random variable
• The arrival process and the service process are independent of each other
The state of the queue is specified by defining an appropriate system state variable

System State at time t = N(t) = Number in the system at time t (waiting and in service)

Let pN(t) = P{system in state N at time t}

Note that, given the initial system state at t=0 (which is typically assumed to be zero), if we can find pN(t) then we can actually describe probabilistically how the system will evolve with time.
By ignoring terms with (Δt)² and higher order terms, the probability of the system state at time t+Δt may then be found as -

p0(t+Δt) = p0(t)[1 − λΔt] + p1(t)μΔt        N=0    (1.1)

pN(t+Δt) = pN(t)[1 − λΔt − μΔt] + pN−1(t)λΔt + pN+1(t)μΔt        N>0    (1.2)

subject to the normalisation condition that Σ_i pi(t) = 1 for all t ≥ 0
Taking the limit as Δt→0, and subject to the same normalisation, we get

dp0(t)/dt = −λp0(t) + μp1(t)        N=0    (1.3)

dpN(t)/dt = −(λ+μ)pN(t) + λpN−1(t) + μpN+1(t)        N>0    (1.4)

These equations may be solved with the proper initial conditions to get the Transient Solution.
If the queue starts with N in the system, then the corresponding initial conditions will be
pi(0) = 0 for i ≠ N
pN(0) = 1
For the equilibrium solution, the conditions invoked are -

dpi(t)/dt = 0

and

pi(t) = pi for i = 0, 1, 2, …..

For this, defining ρ = λ/μ erlangs, with ρ < 1 for stability, we get

p1 = ρp0
pN+1 = (1+ρ)pN − ρpN−1 = ρpN = ρ^(N+1) p0        N ≥ 1    (1.5)

Applying the Normalization Condition Σ_{i=0}^{∞} pi = 1, we get

pi = ρ^i (1−ρ)        i = 0, 1, ……    (1.6)

as the equilibrium solution for the state distribution when the arrival and service rates are such that ρ = λ/μ < 1

Note that the equilibrium solution does not depend on the initial condition but requires that the average arrival rate must be less than the average service rate
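As a quick numerical illustration (not part of the original slides, with arbitrarily chosen rates, truncation level and step size), the differential-difference equations (1.3)-(1.4) can be integrated forward in time from an empty system; the transient probabilities pN(t) then approach the geometric equilibrium distribution of (1.6), independently of the initial condition.

```python
# Euler integration of (1.3)-(1.4) for an M/M/1 queue, truncated at state K.
lam, mu = 0.5, 1.0            # arrival rate lambda and service rate mu (rho = 0.5)
K = 60                        # truncation level for the state space
dt, T = 0.01, 100.0           # step size and total integration time

p = [1.0] + [0.0] * K         # p_N(0): system initially empty
for _ in range(int(T / dt)):
    dp = [0.0] * (K + 1)
    dp[0] = -lam * p[0] + mu * p[1]                                    # equation (1.3)
    for n in range(1, K):
        dp[n] = -(lam + mu) * p[n] + lam * p[n - 1] + mu * p[n + 1]    # equation (1.4)
    dp[K] = -mu * p[K] + lam * p[K - 1]                                # truncation boundary
    p = [pn + dpn * dt for pn, dpn in zip(p, dp)]

rho = lam / mu
print([round(x, 4) for x in p[:5]])                                   # p_N(T), N = 0..4
print([round((rho ** n) * (1 - rho), 4) for n in range(5)])            # equilibrium (1.6)
```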
Mean Performance Parameters of the Queue

(a) Mean Number in System, N

N = Σ_{i=0}^{∞} i pi = Σ_{i=0}^{∞} i ρ^i (1−ρ) = ρ/(1−ρ)        (1.7)

(b) Mean Number Waiting in Queue, Nq

Nq = Σ_{i=1}^{∞} (i−1) pi = N − (1−p0) = ρ/(1−ρ) − ρ = ρ²/(1−ρ)        (1.8)
Mean Performance Parameters of the Queue

(c) Mean Time Spent in System, W

This would require the following additional assumptions -
• FCFS system, though the mean results will hold for any queue where the server does not idle while there are customers in the system
• The equilibrium state probability pk will also be the same as the probability distribution for the number in the system as seen by an arriving customer (PASTA: Poisson Arrivals See Time Averages)
• The mean residual service time for the customer currently in service when an arrival occurs will still be 1/μ (the Memoryless Property, satisfied only by the exponential distribution)
Mean Performance Parameters of the Queue (continued)

Using these assumptions, we can write

W = Σ_{k=0}^{∞} [(k+1)/μ] pk = 1/(μ(1−ρ))        (1.9)

(d) Mean Time Spent Waiting in Queue, Wq

This will obviously be one mean service time less than W

Wq = W − 1/μ = ρ/(μ(1−ρ))        (1.10)
Mean Performance Parameters of the Queue (continued)

Alternatively, Wq may be obtained using the same kind of arguments as those used to obtain W earlier. This will give

Wq = Σ_{k=0}^{∞} (k/μ) pk = ρ/(μ(1−ρ))

which is the same result as obtained earlier.

(e) P{Arriving customer has to wait for service} = 1 − p0 = ρ


Mean Performance Parameters of the Queue (continued)

(f) Server Utilization = “Fraction of time the server is busy”
      = P{server is not idle}
      = 1 − p0 = ρ

The queue we have analyzed is the single server M/M/1/∞ queue, with Poisson arrivals, exponentially distributed service times and an infinite number of buffer positions.
The analytical approach given here may actually be applied for simple queueing situations where -

• The arrival process is Poisson, i.e. the inter-arrival times are exponentially distributed
• The service times are exponentially distributed
• The arrival process and the service process are independent of each other
Some other simple queues which may be similarly analyzed,
under the same assumptions -

• Queue with Finite Capacity
• Queue with Multiple Servers
• Queue with Variable Arrival Rates
• Queue with “Balking”
Stochastic Processes
Markov Processes and Markov Chains
Birth-Death Processes

• Summary Review
• Needed for easily obtaining Equilibrium Solutions
Stochastic Process X(t)

The process takes on random values X(t1)=x1, ………., X(tn)=xn at times t1, …….., tn

The random variables x1, …….., xn, …. are specified by specifying their joint distribution.

One can also choose the time points t1, …….., tn, …. (where the process X(t) is examined) in different ways.
Markov Processes

X(t) satisfies the Markov Property (i.e. it has the memoryless property) if

P{X(tn+1)=xn+1 | X(tn)=xn, ……., X(t1)=x1} = P{X(tn+1)=xn+1 | X(tn)=xn}

for any choice of time instants ti, i=1, ……, n, where tj > tk for j > k

This is a memoryless property, as the state of the system at the future time tn+1 is decided only by the system state at the current time tn and does not depend on the state at earlier time instants t1, …….., tn−1.
Restricted versions of the Markov Property lead to -

(a) Markov Chains over a Discrete State Space
(b) Discrete Time and Continuous Time Markov Processes and Markov Chains

Markov Chain: the State Space is discrete (e.g. set of non-negative integers)
Discrete Time: state changes are pre-ordained to occur only at the integer points 0, 1, 2, ......, n (that is, at the time points t0, t1, t2, ......, tn)
Continuous Time: state changes may occur anywhere in time
In the analysis of simple queues, the state of the queue may be represented by a single random variable X(t) which takes on integer values {i, i=0, 1, …..} at any instant of time.

The corresponding process may be treated as a Continuous Time Markov Chain since time is continuous but the state space is discrete.
Homogenous Markov Chain
A Homogenous Markov Chain is one where the
transition probability of going from state i to state j,
i.e.

P{Xn+1=j | Xn=i}

is the same regardless of n.

Therefore, the transition probability pij of going from state i to state j may be written as

pij = P{Xn+1=j | Xn=i}        ∀n

It should be noted that for a Homogenous Markov Chain, the transition probabilities depend only on the terminal states (i.e. the initial state i and the final state j) but do not depend on when the transition (i → j) actually occurs.
[Diagram: state A with self-loop probability p and exit probability 1−p]

Discrete Time Markov Chain (Transition from State A)

P{system stays in state A for N time units | system is currently in state A} = p^N
P{system stays in state A for N time units before exiting from state A} = p^N (1−p)

The distribution is Geometric, which is memoryless in nature.
[Diagram: state A with self-transition probability 1−λΔt and exit probability λΔt]

Continuous Time Markov Chain (Transition from State A)

P{system stays in state A for time T | system currently in state A} = lim_{Δt→0} (1−λΔt)^(T/Δt) = e^(−λT)

This is the (1 − cdf) of an exponential distribution.

The distribution is Exponential, which is memoryless in nature.
Important Observation

"In a homogenous Markov Chain, the distribution of


time spent in a state is (a) Geometric for discrete
time or (b) Exponential for continuous time"

Semi-Markov Processes
In these processes, the distribution of time spent in
a state can have an arbitrary distribution but the
one-step memory feature of the Markovian property
is retained.
Discrete-Time Markov Chains

The sequence of random variables X1, X2, .......... forms a Markov Chain if, for all n (n=1, 2, ..........) and all possible values of the random variables, we have that -

P{Xn=j | X1=i1, ..........., Xn−1=in−1} = P{Xn=j | Xn−1=in−1}

Note that this, once again, illustrates the one-step memory of the process.
Homogenous Discrete-Time Markov Chain

The homogeneity property additionally implies that the state transition probability
pij = P{Xn=j | Xn−1=i}
will also be independent of n, i.e. of the instant when the transition actually occurs.

In this case, the state transition probability will only depend on the value of the initial state and the value of the next state, regardless of when the transition occurs.
Homogenous Discrete-Time Markov Chain

The homogeneity property also implies that we can define an m-step state transition probability as follows -

pij(m) = P{Xn+m=j | Xn=i} = Σ_k pik(m−1) pkj        m = 2, 3, …...

where pij(m) is the probability that the state changes from state i to state j in m steps.
This may also be written in other ways, though the value obtained in each case will be the same. For example, we can also write it as -

pij(m) = P{Xn+m=j | Xn=i} = Σ_k pik pkj(m−1)        m = 2, 3, …...
Irreducible Markov Chain
This is a Markov Chain where every state can be
reached from every other state in a finite number of
steps.

This implies that, for every pair of states i and j, some k exists such that pij(k) > 0.

If a Markov Chain is not irreducible, then -

(a) it may have one or more absorbing states, which will be states from which the process cannot move to any of the other states,
or
(b) it may have a subset of states A from where one cannot move to states outside A, i.e. into A^C
Classification of States for a Markov Chain

fj(n) = P{system returns to state j exactly n steps after leaving state j}

fj = Σ_{n=1}^{∞} fj(n) = P{system returns to state j some time after leaving state j}

Mj = Σ_{n=1}^{∞} n fj(n) = Mean recurrence time for state j
   = Mean number of steps to return to state j after leaving state j

State j is -
• Transient if fj < 1
• Recurrent if fj = 1
   - Recurrent Null if Mj = ∞
   - Recurrent Non-null (or Positive Recurrent) if Mj < ∞
State j is periodic with respect to γ (γ > 1) if the only possible steps in which state j may recur are γ, 2γ, 3γ, ............ In that case, the recurrence time for state j has period γ.
State j is said to be aperiodic if γ = 1.

A recurrent state is said to be ergodic if it is both positive recurrent and aperiodic.
An ergodic Markov chain will have all its states as ergodic.
An aperiodic, irreducible Markov Chain with a finite number of states will always be ergodic.

The states of an Irreducible Markov Chain are either all transient, or all recurrent null, or all recurrent positive. If the chain is periodic, then all states have the same period γ.
In an irreducible, aperiodic, homogenous Markov Chain, the limiting state probabilities pj = P{state j} always exist and these are independent of the initial state probability distribution, and

either    All states are transient, or all states are recurrent null - in this case, the state probabilities pj are zero for all states and no stationary state distribution will exist,

or        All states are recurrent positive - in this case, a stationary distribution giving the equilibrium state probabilities exists and is given by pj = 1/Mj ∀j.
The stationary distribution of the states (i.e. the equilibrium state probabilities) of an irreducible, aperiodic, homogenous Markov Chain (which will also be ergodic) may be found by solving a set of simultaneous linear equations and a normalization condition.

pj = Σ_i pi pij    ∀j        Balance Equations

Σ_i pi = 1        Normalization Condition

If the system has N states, j = 0, 1, ….., (N−1), then we need to solve for p0, p1, ….., pN−1 using the Normalization Condition and any (N−1) equations from the N Balance Equations.
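For a small finite chain this procedure is a few lines of linear algebra. The sketch below (not from the slides) replaces one balance equation by the normalization condition and solves the resulting system; the 3-state transition matrix is an arbitrary example.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
N = P.shape[0]

A = (np.eye(N) - P).T     # balance equations p_j = sum_i p_i p_ij, i.e. (I - P)^T p = 0
A[-1, :] = 1.0            # replace the last balance equation by the normalization condition
b = np.zeros(N)
b[-1] = 1.0

p = np.linalg.solve(A, b)
print(p)                  # equilibrium state probabilities
print(p @ P)              # equals p, confirming the balance equations
```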
Birth-Death Processes
Homogenous, aperiodic, irreducible (discrete-time or
continuous-time) Markov Chain where state changes can
only happen between neighbouring states.

If the current state (at time instant n) is Xn=i, then the state
at the next instant can only be Xn+1 = (i+1), i or (i-1).
Note that states are represented as integer-valued without
loss of generality.

Pure Birth Process: no decrements, only increments
Pure Death Process: no increments, only decrements
Continuous Time Birth-Death Markov Chain

Let λk be the birth rate in state k
and μk be the death rate in state k

Then, as Δt→0,
P{state k to state k+1 in time Δt} = λkΔt
P{state k to state k−1 in time Δt} = μkΔt
P{state k to state k in time Δt} = 1 − λkΔt − μkΔt
P{other transitions in time Δt} = 0

System State X(t) = Number in the system at time t = Total Births − Total Deaths in (0, t)

The initial condition will not matter when we are only interested in the equilibrium state distribution.
pk(t) = P{X(t)=k} = Probability {system in state k at time t}

[State Transition Diagram for a Birth-Death Process: states 0, 1, 2, ......, with a transition from state k to k+1 at rate λk and from state k to k−1 at rate μk]


The state transitions from time t to t+Δt will then be governed by the following equations -

p0(t+Δt) = p0(t)[1 − λ0Δt] + p1(t)μ1Δt
pk(t+Δt) = pk(t)[1 − (λk+μk)Δt] + pk−1(t)λk−1Δt + pk+1(t)μk+1Δt

Σ_{k≥0} pk(t) = 1

Taking the limit as Δt→0,

dp0(t)/dt = −λ0 p0(t) + μ1 p1(t)
dpk(t)/dt = −(λk+μk) pk(t) + λk−1 pk−1(t) + μk+1 pk+1(t)        (2.6)

Σ_{k≥0} pk(t) = 1
We can now obtain the equilibrium solutions by setting

dpi(t)/dt = 0    ∀i

and obtaining the state distribution pi ∀i such that the normalization condition

Σ_{i≥0} pi = 1

is satisfied.
This yields the following equations to be solved for the state probabilities under equilibrium conditions -

λ0 p0 = μ1 p1        k=0
λk−1 pk−1 + μk+1 pk+1 = (λk+μk) pk        k=1,2,3,……

Σ_{i≥0} pi = 1

The solution has a Product Form -

pk = p0 Π_{i=0}^{k−1} (λi/μi+1)        (2.7)

p0 = 1 / [1 + Σ_{k=1}^{∞} Π_{i=0}^{k−1} (λi/μi+1)]        (2.8)
Instead of writing differential equations, one can obtain the solution in a simpler fashion by directly considering flow balance for each state.

(a) Draw the state transition diagram
(b) Draw closed boundaries and equate flows across each boundary. Any closed boundary may be chosen for this.
    If the closed boundary encloses only state k, then we get the Global Balance Equation for state k -
    Flow entering state k = λk−1 pk−1 + μk+1 pk+1 = (λk+μk) pk = Flow leaving state k
    as the desired equation for state k.
(c) Solve the equations in (b) along with the normalization condition to get the equilibrium state distribution.
It would be even simpler in this case to consider a closed boundary between states k−1 and k which is actually closed at infinity.

This would lead to the following Detailed Balance Equation -
Flow from state k−1 to k = Flow from state k to k−1
λk−1 pk−1 = μk pk

The solution for this will be the same as that obtained earlier.
In general, the equations expressing flow balance in a Birth-Death Chain of this type will be -

Global Balance Equations
Σ_{i≠j} pi pij = pj Σ_{i≠j} pji        (closed boundary encircling each state j)

Detailed Balance Equations
pi pij = pj pji        (equates flows between states i and j in a pair-wise fashion; boundary between states i and j, closed at +∞ and −∞)
Conditions for Existence of Solution for a Birth-Death Chain

Define
S1 = Σ_{k=0}^{∞} Π_{i=0}^{k−1} (λi/μi+1)
S2 = Σ_{k=0}^{∞} 1 / [λk Π_{i=0}^{k−1} (λi/μi+1)]

(a) All states are transient, if and only if S1 = ∞, S2 < ∞
(b) All states are recurrent null, if and only if S1 = ∞, S2 = ∞
(c) All states are ergodic, if and only if S1 < ∞, S2 = ∞

The Equilibrium State Distribution will exist only for the case where all the states are ergodic and hence the chain itself is also ergodic.
Analyzing M/M/-/- Type Queues
(under equilibrium conditions)

Kendall’s Notation for Queues A/B/C/D/E

Shorthand notation where A, B, C, D, E describe the queue. Applicable to a large number of simple queueing scenarios.

Kendall’s Notation for Queues A/B/C/D/E

A: Inter-arrival time distribution
B: Service time distribution
   (M - exponential, D - deterministic, Ek - Erlangian (order k), G - general)
C: Number of servers
D: Maximum number of jobs that can be there in the system (waiting and in service); default ∞ for an infinite number of waiting positions
E: Queueing Discipline (FCFS, LCFS, SIRO etc.); default is FCFS

M/M/1 or M/M/1/∞: Single server queue with Poisson arrivals, exponentially distributed service times and an infinite number of waiting positions
Little’s Result

N = λW        (2.9)

Nq = λWq        (2.10)

The result holds in general for virtually all types of queueing situations, where
λ = Mean arrival rate of jobs that actually enter the system

Jobs blocked and refused entry into the system will not be counted in λ.
Little’s Result

[Graphical Illustration/Verification of Little’s Result: plot of the number of customers versus time t, showing α(t), the arrivals in (0,t), and δ(t), the departures in (0,t); N(t) increments by one at each arrival and decrements by one at each departure]
Little’s Result

Consider the time interval (0,t) where t is large, i.e. t→∞

Area(t) = area between α(t) and δ(t) up to time t = ∫_0^t [α(τ) − δ(τ)] dτ

Average time W spent in system = lim_{t→∞} Area(t)/α(t)

Average number N in system = lim_{t→∞} Area(t)/t = lim_{t→∞} [α(t)/t]·[Area(t)/α(t)]

Since λ = lim_{t→∞} α(t)/t, therefore N = λW
t
The PASTA Property
“Poisson Arrivals See Time Averages”

Let pk(t) = P{system is in state k at time t}
qk(t) = P{an arrival at time t finds the system in state k}
N(t) be the actual number in the system at time t
A(t, t+Δt) be the event of an arrival in the time interval (t, t+Δt)

Then qk(t) = lim_{Δt→0} P{N(t)=k | A(t, t+Δt)}
           = lim_{Δt→0} P{A(t, t+Δt) | N(t)=k} P{N(t)=k} / P{A(t, t+Δt)}
           = pk(t)

because P{A(t, t+Δt) | N(t)=k} = P{A(t, t+Δt)} for Poisson arrivals.
Equilibrium Solutions for M/M/-/- Queues

Method 1: Obtain the differential-difference equations. Solve these under equilibrium conditions along with the normalization condition.

Method 2: Directly write the flow balance equations for a proper choice of closed boundaries, as illustrated earlier, and solve these along with the normalization condition.

Method 3: Identify the parameters of the birth-death Markov chain for the queue and directly use equations (2.7) and (2.8).

In the following, we have used this approach. However, Method 2 is also interesting and should be tried as it allows various complicated models to be easily analyzed.
M/M/1 (or M/M/1/∞) Queue
(Single Server Queue with Infinite Buffer)

λk = λ ∀k        μk = μ ∀k

For ρ = λ/μ < 1:

pk = p0 Π_{i=0}^{k−1} (λ/μ) = p0 ρ^k        k = 1, 2, 3, .......
p0 = 1 − ρ

N = Σ_{i=0}^{∞} i pi = Σ_{i=0}^{∞} i ρ^i (1−ρ) = ρ/(1−ρ)

W = N/λ = 1/(μ(1−ρ))        (using Little’s Result)

Wq = W − 1/μ = ρ/(μ(1−ρ))        Nq = λWq = ρ²/(1−ρ)        (using Little’s Result)
M/M/1/∞ Queue with Discouraged Arrivals

λk = λ/(k+1)        μk = μ        For λ/μ < ∞

pk = p0 Π_{i=0}^{k−1} [λ/((i+1)μ)] = p0 (λ/μ)^k / k!        k = 1, 2, 3, .......    (2.14)

p0 = exp(−λ/μ)        (2.15)

N = Σ_{k≥0} k pk = λ/μ

λeff = Σ_{k≥0} λk pk = μ[1 − exp(−λ/μ)]        (Effective Arrival Rate)

W = N/λeff = λ / {μ²[1 − exp(−λ/μ)]}        (using Little’s Result)
M/M/1/∞ Queue with Discouraged Arrivals

In this case, PASTA is not applicable as the overall arrival process is not Poisson.

Let E be the event of an arrival in (t, t+Δt), and Ei be the event of the system being in state i. Then
P{Ei} = pi = (λ/μ)^i e^(−λ/μ) / i!
P{E | Ei} = λΔt/(i+1)

πr = P{arriving customer sees r in system (before joining the system)}
   = P{Er | E} = P{Er} P{E | Er} / P{E} = P{Er} P{E | Er} / Σ_{i≥0} P{Ei} P{E | Ei}

which gives

πr = [(λ/μ)^(r+1) / (r+1)!] · [e^(−λ/μ) / (1 − e^(−λ/μ))]

W = Σ_{k≥0} [(k+1)/μ] πk = λ / [μ²(1 − e^(−λ/μ))]
M/M/m/∞ Queue
(m servers, infinite number of waiting positions)

λk = λ ∀k        μk = kμ for 0 ≤ k ≤ (m−1),    μk = mμ for k ≥ m

For ρ = λ/μ < m:

pk = p0 ρ^k / k!        for k ≤ m
pk = p0 ρ^k / (m! m^(k−m))        for k ≥ m        (2.16)

p0 = [ Σ_{k=0}^{m−1} ρ^k/k! + ρ^m m/(m!(m−ρ)) ]^(−1)        (2.17)

P{queueing} = Σ_{k≥m} pk = C(m, ρ) = p0 ρ^m m / (m!(m−ρ))        Erlang’s C-Formula    (2.18)
M/M/m/m Queue
(m server loss system, no waiting)

λk = λ for 0 ≤ k < m,    0 otherwise (Blocking or Loss Condition)
μk = kμ for 0 < k ≤ m,    0 otherwise

For ρ = λ/μ:

pk = p0 ρ^k / k!        for k ≤ m    (2.19)
   = 0                  otherwise

p0 = 1 / Σ_{k=0}^{m} ρ^k/k!        (2.20)
M/M/m/m Queue
(m server loss system, no waiting)

Simple model for a telephone exchange where a line is given only if one is available; otherwise the call is lost.

Blocking Probability B(m, ρ) = P{an arrival finds all servers busy and leaves without service}

B(m, ρ) = p0 ρ^m / m!        Erlang’s B-Formula    (2.21)

B(m, ρ) = (ρ/m) B(m−1, ρ) / [1 + (ρ/m) B(m−1, ρ)],    B(0, ρ) = 1        (2.22)
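The recursion (2.22) is the usual way to compute Erlang's B-Formula numerically, since it avoids the large factorials of (2.21). A small sketch (not from the slides), with arbitrary example values:

```python
def erlang_b(m, rho):
    """Blocking probability of the M/M/m/m loss system via the recursion (2.22)."""
    B = 1.0                                       # B(0, rho) = 1
    for k in range(1, m + 1):
        B = (rho * B / k) / (1 + rho * B / k)     # B(k, rho) from B(k-1, rho)
    return B

print(erlang_b(10, 7.0))    # e.g. m = 10 servers offered 7 erlangs
```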
M/M/1/K Queue
(single server queue with K−1 waiting positions)

λk = λ for 0 ≤ k < K,    0 otherwise (Blocking or Loss Condition)
μk = μ for 0 < k ≤ K,    0 otherwise

For ρ = λ/μ:

pk = p0 ρ^k        for k ≤ K    (2.23)
   = 0             otherwise

p0 = (1−ρ) / (1−ρ^(K+1))        (2.24)
M/M/1/-/K Queue
(single server, infinite number of waiting positions, finite customer population K)

λk = (K−k)λ for 0 ≤ k ≤ K,    0 otherwise
μk = μ for 0 < k ≤ K,    0 otherwise

For ρ = λ/μ:

pk = p0 [K!/(K−k)!] ρ^k        k = 1, .….., K    (2.25)

p0 = 1 / Σ_{k=0}^{K} [K!/(K−k)!] ρ^k        (2.26)
Departure Process from an M/M/m/∞ Queue

[Diagram: the output of queue Q1 (arrival rate λ) is split with probabilities p and 1−p and fed to queues Q2 and Q3, whose outputs feed Q4, etc.]

Knowledge of the nature of the departure process from a queue would be useful, as we can then use it to analyze simple cases of queueing networks as shown.

The key result here is that the departure process from an M/M/m/∞ queue is also Poisson, with the same rate as the arrival rate entering the queue. (Burke’s Theorem)

It should also be noted that randomly splitting or combining independent Poisson processes also yields Poisson processes. (Useful in analyzing queueing networks)
How can we handle queues where the service
time distribution is not exponential?

[A] If we can express the actual service time as combinations of exponentially distributed time intervals, then the Method of Stages may be used. (Section 2.9)

[B] The M/G/1 queue and its variations may be analyzed.

[C] Approximation methods may be used if the mean and variance of the service time are given (e.g. a GI/G/m approximation).
Method of Stages
Consider an M/-/1/∞ example where the actual service time is the sum of two random variables, each of which is exponentially distributed (Stage 1 with mean 1/μ1 and Stage 2 with mean 1/μ2); arrivals come at rate λ.

The state of the system is represented as (n, j), where n is the total number of customers in the system and the customer currently being served is at Stage j, n = 0, 1, ......, ∞, j = 1, 2.
State (0,0) represents the state when the system is empty.

[State Transition Diagram of the System: states (0,0), (1,1), (1,2), (2,1), (2,2), ......, with arrival transitions at rate λ and service-stage transitions at rates μ1 and μ2]
Method of Stages

Balance Equations for the System:

λ p00 = μ2 p12
(λ + μ1) p11 = λ p00 + μ2 p22
(λ + μ2) p12 = μ1 p11                    (2.38)
(λ + μ1) p21 = λ p11 + μ2 p32
(λ + μ2) p22 = λ p12 + μ1 p21
etc. .......

These Balance Equations may be solved along with the appropriate Normalization Condition to obtain the state probabilities of the system.
Once these are known, performance parameters of the queue may be appropriately evaluated.
Method of Stages

The method illustrated for the M/-/1/∞ example may be extended to the following types of systems.

1. k stages of service times - more rows in the state transition diagram

2. Finite Number of Waiting Positions in the Queue - make the arrival rate a function of the number in the system and make it go to zero once all the waiting positions have been filled

3. Multiple Servers - approximate this by allowing more than one job to enter service at a time

4. More General Service Time Distributions - see next slide
Method of Stages
For more general service time distributions, the Method of Stages may be used if the Laplace Transform of the pdf of the service time may be represented as a rational function of s, LB(s) = N(s)/D(s), with simple roots.

[Diagram: Entry → Stage 1 (rate μ1); after Stage i the customer proceeds to Stage i+1 with probability αi or Exits with probability 1−αi]

This leads to -
LB(s) = α0 + Σ_i αi μi/(s + μi)

With multiple stages like this, the L.T. of the service time pdf will be of the form -

LB(s) = Σ_j α1 α2 ...... αj−1 (1−αj) Π_{i=1}^{j} μi/(s + μi)
Method of Stages

Given a service time pdf with L.T. LB(s) = N(s)/D(s) with simple roots -

1. Obtain the multiple stage representation in the form shown earlier
2. Draw the corresponding state transition diagram and identify the flows between the various states
3. Write and solve the flow balance equations along with the normalization condition to obtain the state probabilities
4. Use the state probabilities to obtain the required performance parameters
Some Examples of Service Time Distributions

[Diagram: two example service facilities built from exponential stages, fed from a buffer with arrival rate λ; branch probabilities of 0.5 select between the stages]

Example 1:    LB(s) = [0.5 s(μ1 + μ2) + μ1 μ2] / [(s + μ1)(s + μ2)]

Example 2:    LB(s) = 0.5 μ/(s + μ) + 0.5 μ²/(s + μ)²
Queues with Bulk (or Batch) Arrivals

M[X] Poisson Batch Arrival Process

• Batches arrive as a Poisson process, with exponentially distributed inter-arrival times between batches
• Batch size = Number of jobs in a batch (a random variable)

λ = Average Batch Arrival Rate
βr = P{r jobs in a batch}        r = 1, 2, ….

β(z) = Σ_{r≥1} βr z^r        (batch-size generating function)
β̄ = Σ_{r≥1} r βr        (mean batch size)
The M[X]/M/1 Queue

Balance Equations:
λ p0 = μ p1        for k = 0
(λ + μ) pk = μ pk+1 + λ Σ_{i=0}^{k−1} βk−i pi        for k ≥ 1

Though these may be solved in the standard fashion, we will consider a solution approach for directly obtaining P(z), the Generating Function for the number in the system:
P(z) = Σ_{n≥0} pn z^n
For this, we would need to multiply the kth equation above by z^k and sum from k=1 to k=∞.

(λ + μ) Σ_{k≥1} pk z^k = (μ/z) Σ_{k≥1} pk+1 z^(k+1) + λ Σ_{k≥1} Σ_{i=0}^{k−1} βk−i pi z^k

Simplifying, we get
(λ + μ)[P(z) − p0] = (μ/z)[P(z) − p0 − p1 z] + λ P(z) β(z)

P(z) = μ p0 (1 − z) / {μ(1 − z) − λz[1 − β(z)]}

Define ρ = λβ̄/μ as the offered traffic.

Note that P(1) = 1 is effectively the same as the Normalization Condition. Using this, we get -
p0 = 1 − ρ

Therefore
P(z) = μ(1 − ρ)(1 − z) / {μ(1 − z) − λz[1 − β(z)]}        (2.42)

We can invert P(z) or expand it as a power series in z^i, i=0,1,… to get the state probability distribution. The mean number N in the system may be directly calculated from P(z) as -

N = dP(z)/dz |_{z=1} = λ(β̄ + β̄⁽²⁾) / [2μ(1 − ρ)]        (2.43)

where β̄⁽²⁾ = Σ_{r≥1} r² βr is the second moment of the batch size.
The M[X]/-/-/K Queue
Batch Arrival Queue with Finite Capacity

For operating queues of this type, one must also specify the batch acceptance strategy to be followed if a batch of size k or more arrives in a system where the number of waiting positions available is less than k.

Partial Batch Acceptance Strategy (PBAS): Randomly choose as many jobs from the batch as may be accommodated in the buffer.

Whole Batch Acceptance Strategy (WBAS): Accept the batch only if all its jobs may be accommodated; otherwise, reject all jobs of the batch.
Priority Operation of an M/G/1 Queue

[Diagram: P separate queues holding Class 1, ….., Class P jobs, all served by a single server]

P Priority Classes
Class P - Highest Priority Class
Class 1 - Lowest Priority Class

Head of Line (HOL) Priority Operation of M/G/1 Queue
Open and Closed Networks of M/M/m Type Queues
(Jackson’s Theorem for Open and Closed Networks)

[Diagram: a Poisson process of average rate λ is split probabilistically into N branches with probabilities p1, ….., pN (Σ_{i=1}^{N} pi = 1), yielding Poisson processes of rates λp1, ….., λpN]

Splitting a Poisson process probabilistically (as in random, probabilistic routing) leads to processes which are also Poisson in nature.
[Diagram: queue Q1 with Poisson arrivals of rate λ; departures from Q1 are routed to Q2 with probability p and to Q3 with probability 1−p, giving flows λp and λ(1−p)]

Under equilibrium conditions, the average flow leaving the queue will equal the average flow entering the queue.
Routing Probabilities are p and (1−p)

For M/M/m/∞ queues at equilibrium, Burke’s Theorem assures us that the departure process of jobs from the queue will also be Poisson. From flow balance, the average flow rate leaving the queue will also be the same as the average flow rate entering the queue.
[Diagram: independent Poisson processes with average rates λ1 and λ2 are merged into a single process of average rate λ1+λ2]

Combining independent Poisson processes leads to a process which will also be Poisson in nature.
Example: An Acyclic (Feedforward) Network of M/M/m Queues

[Diagram: queues Q1, Q2, Q3 and Q4 connected in a feedforward fashion]

External arrivals with rates λ1 and λ2 from Poisson processes
Probabilistic routing with the routing probabilities as shown
• Applying flow balance to each queue, we get

λQ1 = Average job arrival rate for Q1 = λ1
λQ2 = Average job arrival rate for Q2 = 0.4λ1 + λ2
λQ3 = Average job arrival rate for Q3 = 0.4λ1
λQ4 = Average job arrival rate for Q4 = 0.84λ1 + λ2

• Burke’s Theorem and the earlier quoted results on splitting and combining of Poisson processes imply that, under equilibrium conditions, the arrival process to each queue will be Poisson.
• Given the mean service times at each queue and using the standard results for M/M/m queues, we can then find the individual state probability distribution for each of the queues
• This process may be done for any system of M/M/m queues as long as there are no feedback connections between the queues
• It should be noted that this analysis can only give us the state distributions for each of the individual queues but cannot really say what will be the joint state distribution of the number of jobs in all the queues of the network.

• Jackson’s Theorem, presented subsequently, is needed to get the joint state distribution. This gives the simple and elegant result that -

P(n1, n2, n3, n4) = pQ1(n1) pQ2(n2) pQ3(n3) pQ4(n4)

i.e., a Product-Form Solution for the Joint State Distribution of the Queueing Network
Jackson’s Theorem for Open Networks

• Jackson’s Theorem is applicable to a Jackson Network. This is an arbitrary open network of M/M/m queues where jobs arrive from a Poisson process to one or more nodes and are probabilistically routed from one queue to another until they eventually depart from the system.
  The departures may also happen from one or more queues.
  The M/M/m nodes are sometimes referred to as Jackson Servers.
• Jackson’s Theorem states that provided the arrival rate at each queue is such that equilibrium exists, the probability of the overall system state (n1, ……, nK) for K queues will be given by the product-form expression

P(n1, ......, nK) = Π_{i=1}^{K} pQi(ni)
Jackson Network: Network of K (M/M/m) queues,
arbitrarily connected

External Arrivals to Qi: Poisson process with average rate γi

At least one queue Qi must be such that γi ≠ 0. Note that γj = 0 if there are no external arrivals to Qj. This is because we are considering an Open Network. (Closed Networks are considered later.)

Routing Probabilities: pij = P{a job served at Qi is routed to Qj}

1 − Σ_{j=1}^{K} pij = P{a job served at Qi exits from the network}

Arrival Process of Jobs to Qi
= [External Arrivals, if any, to Qi] + Σ_{j=1}^{K} [Jobs which finish service at Qj and are then routed to Qi for the next stage of service]

Let λi = Average Arrival Rate of Jobs to Qi {external and rerouted}

Given the external arrival rates to each of the K queues in the system and the routing probabilities from each queue to another, the effective job arrival rate to each queue (at equilibrium) may be obtained by solving the flow balance equations for the network.
Flow Balance Conditions at Equilibrium imply that -

λj = γj + Σ_{i=1}^{K} λi pij        for j=1,….., K    (5.2)

• For an Open Network, at least one of the γj’s will be non-zero (positive)
• The set of K equations in (5.2) can therefore be solved to find the effective job arrival rate to each of the K queues, under equilibrium conditions.
• The network will be at equilibrium if each of the K queues is at equilibrium. This can happen only if the effective traffic offered to each queue is less than the number of servers in the queue, i.e. ρj = λj/μj < mj, j=1,….., K, where mj is the number of servers in Qj.
For a network of this type with M/M/m/∞ queues (i.e. Jackson Servers) at each node, Jackson's Theorem states that provided the arrival rate at each queue is such that equilibrium exists, the probability of the overall system state (n1, ......., nK) will be -

P(ñ) = P(n1, ......, nK) = Π_{j=1}^{K} pj(nj)        (5.4)

with pj(nj) = P{nj customers in Qj}

This individual queue state probability may be found by considering the M/M/m/∞ queue at node j in isolation, with its total average arrival rate λj, its mean service time 1/μj and the corresponding results for the steady-state M/M/m/∞ queue.
The stability requirement for the existence of the solution of (5.4) is that -

For each queue Qj, j=1,….., K, in the network, the traffic offered should be such that

ρj = λj/μj < mj

where mj is the number of servers in the M/M/m/∞ queue at Qj.
Implications of Jackson’s Theorem

• Once flow balance has been solved, the individual queues may be
considered in isolation.
• The queues behave as if they are independent of each other (even
though they really are not independent of each other) and the joint
state distribution may be obtained as the continued product of the
individual state distributions (product-form solution)
• The flows entering the individual queues behave as if they are Poisson, even though they may not really be Poisson in nature (e.g. if there is feedback in the network).

Note that Jackson’s Theorem does require the external arrival processes to be Poisson processes and the service times at each queue to be exponentially distributed in nature with their respective, individual means.
Performance Measures

Total Throughput = γ = Σ_{j=1}^{K} γj        (5.5)

Average traffic load at node j (i.e. Qj) = ρj = λj/μj        (5.6)

Visit Count to node j = Vj = λj/γ        (5.7)

These may also be obtained by directly solving the following K linear equations -

Vj = γj/γ + Σ_{i=1}^{K} Vi pij        j =1,……, K    (5.8)        (Scaled Flow Balance Equations)

Interpretation of the Visit Ratio Vj: Average number of times a job will visit Qj every time it actually enters the (open) queueing network.

Average number of jobs at node j = Nj = Σ_{k=0}^{∞} k pj(k)        (5.9)

Average number of jobs in system = N = Σ_{j=1}^{K} Nj        (5.10)

Mean Sojourn Time (W): The mean total time spent in the system by a job before it leaves the network.

W = N/γ = Σ_{j=1}^{K} Nj/γ        (5.11)
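Putting the pieces together, the sketch below (not from the slides) solves the flow balance equations (5.2) for a small example network of single-server Jackson nodes, checks the stability condition, and then evaluates the measures (5.5)-(5.11). The external arrival rates, routing matrix and service rates are arbitrary example values.

```python
import numpy as np

gamma = np.array([1.0, 0.5, 0.0])          # external Poisson arrival rates gamma_j
P = np.array([[0.0, 0.6, 0.2],             # routing probabilities p_ij (row i -> column j);
              [0.0, 0.0, 0.8],             # any remaining probability leaves the network
              [0.1, 0.0, 0.0]])
mu = np.array([4.0, 3.0, 3.0])             # service rates of the M/M/1 nodes

lam = np.linalg.solve(np.eye(3) - P.T, gamma)   # flow balance (5.2): lam = gamma + P^T lam
rho = lam / mu                                  # traffic loads (5.6)
assert np.all(rho < 1), "each node needs rho_j < 1 for equilibrium"

throughput = gamma.sum()                   # (5.5)
V = lam / throughput                       # visit ratios (5.7)
N_j = rho / (1 - rho)                      # mean number in each M/M/1 node (5.9)
N = N_j.sum()                              # (5.10)
W = N / throughput                         # mean sojourn time (5.11)
print(lam, rho, V, N, W)
```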
Example: Open Jackson Network

[Diagram: external Poisson arrivals of rate λ enter Q1 (M/M/1, service rate μ1); a job leaving Q1 exits the network with probability p1 and goes to Q2 (M/M/1, service rate μ2) with probability p2, where p1 + p2 = 1; jobs leaving Q2 return to Q1]

Flow balance:
λ1 = λ + λ2        λ2 = p2 λ1

which gives
λ1 = λ/p1        λ2 = λ(1 − p1)/p1

Mean Number in the Queues (with ρ1 = λ1/μ1, ρ2 = λ2/μ2):
N1 = ρ1/(1 − ρ1)        N2 = ρ2/(1 − ρ2)

P(n1, n2) = ρ1^(n1) (1 − ρ1) · ρ2^(n2) (1 − ρ2)

Mean Sojourn Time W = N/λ = [ρ1/(1 − ρ1) + ρ2/(1 − ρ2)] / λ



Example

[Diagram: two M/M/1 queues Q1 and Q2 with external Poisson arrivals of rates r (to Q1) and 2r (to Q2); probabilistic routing between the queues and out of the network with probabilities p, 1−p, p and 1−2p as shown]

Flow balance:
λ1 = r + (1 − 2p)λ2        λ2 = 2r + (1 − p)λ1 + pλ2

which gives
λ1 = r(3 − 5p)/[2p(1 − p)]        λ2 = r(3 − p)/[2p(1 − p)]
ρ1 = r(3 − 5p)/[4p(1 − p)]        ρ2 = r(3 − p)/[2p(1 − p)]

Joint Probability
P(n1, n2) = (1 − ρ1)(1 − ρ2) ρ1^(n1) ρ2^(n2)

Visit Ratios
V1 = (3 − 5p)/[6p(1 − p)]        V2 = (3 − p)/[6p(1 − p)]

Mean Number in Q1, Q2 and in the system
N1 = ρ1/(1 − ρ1)        N2 = ρ2/(1 − ρ2)        N = ρ1/(1 − ρ1) + ρ2/(1 − ρ2)

Mean Sojourn Time W = N/(3r)



Example

[Diagram: open network of four M/M/1 queues Q1, Q2, Q3, Q4 with external Poisson arrivals of rate λ to Q1 and probabilistic routing with the probabilities shown (0.2, 0.4, 0.6, ......)]

Flow balance:
λ1 = λ + 0.2λ2 + 0.2λ3
λ2 = λ3 = 0.4λ1
λ4 = 0.2λ1 + 0.6λ2 + 0.6λ3

λ̃ = (λ1, λ2, λ3, λ4) = (1.1905, 0.4762, 0.4762, 0.8095)λ

ρ̃ = (ρ1, ρ2, ρ3, ρ4) = (1.1905ρ0, 0.9524ρ0, 0.9524ρ0, 1.6190ρ0),        where ρ0 = λ/μ

P(n1, n2, n3, n4) = (1 − ρ1)(1 − ρ2)²(1 − ρ4) ρ1^(n1) ρ2^(n2+n3) ρ4^(n4)        for 0 ≤ n1, n2, n3, n4 < ∞

N1 = ρ1/(1 − ρ1)        N2 = N3 = ρ2/(1 − ρ2)        N4 = ρ4/(1 − ρ4)

Note that as λ increases, the system eventually becomes unstable because ρ4 → 1 first. This, therefore, implies that ρ0 < 0.6176, or λ < 0.6176μ, for the queueing network to be stable.



When does the Product-Form Solution hold?

The product-form expression for the joint state probabilities holds for any open or closed queueing network where local balance conditions are satisfied.

Specifically, open or closed networks with the following types of queues will have a product-form solution -

1. FCFS queues with exponential service times
2. LCFS queues with Coxian service times
3. Processor Sharing (PS) queues with Coxian service times
4. Infinite Server (IS) queues with Coxian service times

A Coxian service time has a distribution of the following type -

LB(s) = Σ_{i=1}^{L} α1 α2 ..... αi−1 (1 − αi) Π_{j=1}^{i} μj/(s + μj)

with 0 ≤ αi ≤ 1 for 1 ≤ i < L and αL = 0, so that service ends after at most L stages.
