ABDELKADER BENHARI
Markov Chains, Poisson Processes and Queueing Theory
Summary
Markov Chains
Discrete Time Markov Chains
Homogeneous and nonhomogeneous Markov chains
Transient and steady state Markov chains
Continuous Time Markov Chains
Homogeneous and nonhomogeneous Markov chains
Transient and steady state Markov chains
A.BENHARI
The Poisson Process
Properties of the Poisson Process
Interarrival times
Memoryless property and the residual lifetime paradox
Superposition of Poisson processes
Random selection of Poisson Points
Bulk Arrivals and Compound Poisson Processes
Queueing Theory
Little's Law
Queueing System Notation
Stationary Analysis of Elementary Queueing Systems
M/M/1
M/M/m
M/M/1/K
…
Markov Processes
The definition of a Markov Process
The future of process X(t) does not depend on its past, only on its
present
Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_0) = x_0}
    = Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k}
Since we are dealing with “chains”, X(t) can take discrete values from a finite or a countably infinite set.
For a discrete-time Markov chain, the notation is also simplified to
Pr{X_{k+1} = x_{k+1} | X_k = x_k, ..., X_0 = x_0} = Pr{X_{k+1} = x_{k+1} | X_k = x_k}
where x_k is the value of the state at the kth step.
Transition Probability
Define the one-step transition probabilities
p_ij(k) ≜ Pr{X_{k+1} = j | X_k = i}
Clearly, for all i, k, and all feasible transitions from state i,
Σ_j p_ij(k) = 1
Define the n-step transition probabilities
p_ij(k, k+n) ≜ Pr{X_{k+n} = j | X_k = i}
(Diagram: paths from state x_i at step k, through intermediate states x_1, ..., x_R at step u, to state x_j at step k+n.)
Chapman-Kolmogorov Equations
(Diagram: from state x_i at step k, the chain passes through one of the intermediate states x_1, ..., x_R at step u, and reaches state x_j at step k+n.)
Using total probability,
p_ij(k, k+n) = Σ_{r=1..R} Pr{X_{k+n} = j | X_u = r, X_k = i} · Pr{X_u = r | X_k = i},   k ≤ u ≤ k+n
Using the memoryless property of Markov chains,
Pr{X_{k+n} = j | X_u = r, X_k = i} = Pr{X_{k+n} = j | X_u = r}
Therefore, we obtain the Chapman-Kolmogorov Equation
p_ij(k, k+n) = Σ_{r=1..R} p_ir(k, u) · p_rj(u, k+n),   k ≤ u ≤ k+n
Matrix Form
Define the matrix
H(k, k+n) ≜ [p_ij(k, k+n)]
We can rewrite the Chapman-Kolmogorov Equation as
H(k, k+n) = H(k, u) H(u, k+n)
Choose u = k+n-1; then
H(k, k+n) = H(k, k+n-1) H(k+n-1, k+n) = H(k, k+n-1) P(k+n-1)
where P(k+n-1) is the one-step transition probability matrix. This is the forward Chapman-Kolmogorov equation.
Matrix Form
Choose u = k+1; then
H(k, k+n) = H(k, k+1) H(k+1, k+n) = P(k) H(k+1, k+n)
where P(k) is the one-step transition probability matrix. This is the backward Chapman-Kolmogorov equation.
Homogeneous Markov Chains
The one-step transition probabilities are independent of time k:
P ≜ [p_ij],   where   p_ij = Pr{X_{k+1} = j | X_k = i}   for all k.
Even though the one-step transition probabilities are independent of k, this does not mean that the joint probability of X_{k+1} and X_k is also independent of k. Note that
Pr{X_{k+1} = j, X_k = i} = Pr{X_{k+1} = j | X_k = i} Pr{X_k = i} = p_ij Pr{X_k = i}
Example
Consider a two-processor computer system where time is divided into time slots and that operates as follows:
At most one job can arrive during any time slot, and this happens with probability α.
Jobs are served by whichever processor is available; if both are available, the job is given to processor 1.
If both processors are busy, then the job is lost.
When a processor is busy, it can complete the job with probability β during any one time slot.
If a job is submitted during a slot when both processors are busy but at least one processor completes a job, then the job is accepted (departures occur before arrivals).
Describe the Markov Chain that describes this model.
Example: Markov Chain
For the State Transition Diagram of the Markov Chain, each transition is simply marked with the transition probability.
(Diagram: states 0, 1, 2 with arcs labeled p00, p01, p10, p11, p12, p20, p21, p22.)
With the state defined as the number of busy processors:
p00 = 1 - α        p01 = α                          p02 = 0
p10 = β(1 - α)     p11 = αβ + (1 - α)(1 - β)        p12 = α(1 - β)
p20 = β²(1 - α)    p21 = αβ² + 2β(1 - β)(1 - α)     p22 = 2αβ(1 - β) + (1 - β)²
Example: Markov Chain
(Diagram: states 0, 1, 2 with arcs labeled p00, p01, p10, p11, p12, p20, p21, p22.)
Suppose that α = 0.5 and β = 0.7; then
P = [p_ij] = [ 0.5    0.5    0
               0.35   0.5    0.15
               0.245  0.455  0.3 ]
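The transition probabilities above can be checked numerically. Below is a minimal Python sketch (the helper name two_processor_P is mine, not from the slides) that builds the matrix from α and β and verifies that each row sums to one:

```python
# Hypothetical helper: builds the 3x3 transition matrix of the two-processor
# example from the arrival probability a (alpha) and completion probability b (beta).
def two_processor_P(a, b):
    return [
        [1 - a, a, 0.0],
        [b * (1 - a), a * b + (1 - a) * (1 - b), a * (1 - b)],
        [b**2 * (1 - a), a * b**2 + 2 * b * (1 - b) * (1 - a),
         2 * a * b * (1 - b) + (1 - b)**2],
    ]

P = two_processor_P(0.5, 0.7)
# Each row is a probability distribution over next states, so it must sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
print(P)
```

For α = 0.5, β = 0.7 this reproduces the matrix on the slide ([0.5, 0.5, 0], [0.35, 0.5, 0.15], [0.245, 0.455, 0.3] up to rounding).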
State Holding Times
Suppose that at step k, the Markov Chain has transitioned into state X_k = i. An interesting question is how long it will stay in state i.
Let V(i) be the random variable that represents the number of time slots that X_k = i.
We are interested in the quantity Pr{V(i) = n}:
Pr{V(i) = n} = Pr{X_{k+n} ≠ i, X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i}
             = Pr{X_{k+n} ≠ i | X_{k+n-1} = i, ..., X_{k+1} = i, X_k = i}
               × Pr{X_{k+n-1} = i, ..., X_{k+1} = i | X_k = i}
State Holding Times
Applying the memoryless property at every step,
Pr{V(i) = n} = Pr{X_{k+n} ≠ i | X_{k+n-1} = i} · Pr{X_{k+n-1} = i | X_{k+n-2} = i} ··· Pr{X_{k+1} = i | X_k = i}
             = (1 - p_ii) p_ii^{n-1}
This is the Geometric Distribution with parameter p_ii:
Pr{V(i) = n} = (1 - p_ii) p_ii^{n-1}
Clearly, V(i) has the memoryless property.
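The geometric holding time can be checked by simulation. A small Python sketch (not from the slides; p_ii = 0.5 is an assumed illustrative value): in each slot the chain stays with probability p_ii, so the number of slots spent in the state should have mean 1/(1 - p_ii).

```python
import random

# Sample one holding time: the chain occupies the state for at least one slot,
# and stays for one more slot with probability p_ii.
def holding_time(p_ii, rng):
    n = 1
    while rng.random() < p_ii:  # stay one more slot with probability p_ii
        n += 1
    return n

rng = random.Random(0)
samples = [holding_time(0.5, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # for p_ii = 0.5 the geometric mean is 1/(1 - 0.5) = 2
```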
State Probabilities
An interesting quantity we are usually interested in is the probability of finding the chain at various states, i.e., we define
π_i(k) ≜ Pr{X_k = i}
For all possible states, we define the vector
π(k) = [π_0(k), π_1(k), ...]
Using total probability we can write
π_j(k) = Σ_i Pr{X_k = j | X_{k-1} = i} Pr{X_{k-1} = i} = Σ_i p_ij(k-1) π_i(k-1)
In vector form, one can write
π(k) = π(k-1) P(k-1),   or, if the Markov Chain is homogeneous,   π(k) = π(k-1) P
State Probabilities Example
Suppose that π(0) = [1  0  0] with
P = [ 0.5    0.5    0
      0.35   0.5    0.15
      0.245  0.455  0.3 ]
Find π(k) for k = 1, 2, ...
π(1) = [1  0  0] P = [0.5  0.5  0]
Transient behavior of the system: MCTransient.m
In general, the transient behavior is obtained by solving the difference equation
π(k) = π(k-1) P
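A Python analogue of the iteration (the course script MCTransient.m is MATLAB; this sketch assumes only the recursion π(k) = π(k-1)P and the matrix above):

```python
# One step of the recursion pi(k) = pi(k-1) P, for P given as a list of rows.
def step(pi, P):
    n = len(P[0])
    return [sum(pi[i] * P[i][j] for i in range(len(pi))) for j in range(n)]

P = [[0.5, 0.5, 0.0],
     [0.35, 0.5, 0.15],
     [0.245, 0.455, 0.3]]
pi = [1.0, 0.0, 0.0]     # pi(0)
history = [pi]
for _ in range(20):       # transient behavior over 20 steps
    pi = step(pi, P)
    history.append(pi)

print(history[1])  # pi(1) = [0.5, 0.5, 0.0]
```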
Classification of States
Definitions
State j is reachable from state i if the probability to go from i to j in n > 0 steps is greater than zero (i.e., in the state transition diagram there is a path from i to j).
A subset S of the state space X is closed if p_ij = 0 for every i ∈ S and j ∉ S.
A state i is said to be absorbing if it is a single-element closed set.
A closed set S of states is irreducible if any state j ∈ S is reachable from every state i ∈ S.
A Markov chain is said to be irreducible if the state space X is irreducible.
Example
Irreducible Markov Chain:
(Diagram: states 0, 1, 2 with arcs p00, p01, p10, p12, p21, p22; every state is reachable from every other state.)
Reducible Markov Chain:
(Diagram: states 0, 1, 2, 3, 4 with arcs p00, p01, p10, p12, p14, p22, p23, p32, p33; state 4 is an absorbing state and {2, 3} is a closed irreducible set.)
Transient and Recurrent States
Hitting Time:
T_ij ≜ min{k > 0 : X_0 = i, X_k = j}
Recurrence Time: T_ii is the first time that the MC returns to state i.
Let ρ_i be the probability that the state will return back to i given that it starts from i. Then,
ρ_i = Σ_{k=1..∞} Pr{T_ii = k}
The event that the MC will return to state i given it started from i is equivalent to T_ii < ∞; therefore we can write
ρ_i = Σ_{k=1..∞} Pr{T_ii = k} = Pr{T_ii < ∞}
A state is recurrent if ρ_i = 1 and transient if ρ_i < 1.
Theorems
If a Markov Chain has a finite state space, then at least one of the states is recurrent.
If state i is recurrent and state j is reachable from state i, then state j is also recurrent.
If S is a finite closed irreducible set of states, then every state in S is recurrent.
Positive and Null Recurrent States
Let M_i be the mean recurrence time of state i:
M_i ≜ E[T_ii] = Σ_{k=1..∞} k Pr{T_ii = k}
A state is said to be positive recurrent if M_i < ∞. If M_i = ∞ then the state is said to be null recurrent.
Theorems
If state i is positive recurrent and state j is reachable from state i, then state j is also positive recurrent.
If S is a closed irreducible set of states, then every state in S is positive recurrent, or every state in S is null recurrent, or every state in S is transient.
If S is a finite closed irreducible set of states, then every state in S is positive recurrent.
Example
(Diagram: the reducible chain from before; states 0 and 1 are transient states, the closed irreducible set {2, 3} consists of positive recurrent states, and the absorbing state 4 is a recurrent state.)
Periodic and Aperiodic States
Suppose that the structure of the Markov Chain is such that state i is visited after a number of steps that is an integer multiple of an integer d > 1. Then the state is called periodic with period d.
If no such integer exists (i.e., d = 1), then the state is called aperiodic.
Example
(Diagram: states 0, 1, 2; 0 → 1 with probability 1, 1 → 0 and 1 → 2 with probability 0.5 each, 2 → 1 with probability 1.)
P = [ 0    1    0
      0.5  0    0.5
      0    1    0 ]
Periodic states with d = 2.
Steady State Analysis
Recall that the probability of finding the MC at state i after the kth step is given by
π_i(k) ≜ Pr{X_k = i},   π(k) = [π_0(k), π_1(k), ...]
An interesting question is what happens in the “long run”, i.e.,
π_i ≜ lim_{k→∞} π_i(k)
This is referred to as the steady state, equilibrium, or stationary state probability.
Questions:
Do these limits exist?
If they exist, do they converge to a legitimate probability distribution, i.e., Σ_i π_i = 1?
How do we evaluate π_j, for all j?
Steady State Analysis
Recall the recursive probability
π(k+1) = π(k) P
If a steady state exists, then π(k+1) ≈ π(k), and therefore the steady state probabilities are given by the solution to the equations
π = π P   and   Σ_i π_i = 1
In an irreducible Markov Chain, the presence of periodic states prevents the existence of a steady state probability.
Example: periodic.m
P = [ 0    1    0
      0.5  0    0.5
      0    1    0 ],    π(0) = [1  0  0]
Steady State Analysis
THEOREM: In an irreducible aperiodic Markov chain consisting of positive recurrent states, a unique stationary state probability vector π exists such that π_j > 0 and
π_j = lim_{k→∞} π_j(k) = 1/M_j
where M_j is the mean recurrence time of state j.
The steady state vector π is determined by solving
π = π P   and   Σ_i π_i = 1
Such a chain is called an ergodic Markov chain.
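For an ergodic chain the equations π = πP can be solved numerically; a minimal sketch (my choice of method, not from the slides) uses power iteration, i.e., repeated application of the recursion until it stops changing:

```python
# Power iteration: starting from the uniform vector, apply pi <- pi P until
# convergence; for an ergodic chain this converges to the stationary vector.
def stationary(P, tol=1e-12, max_iter=100_000):
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(nxt, pi)) < tol:
            return nxt
        pi = nxt
    return pi

P = [[0.5, 0.5, 0.0],
     [0.35, 0.5, 0.15],
     [0.245, 0.455, 0.3]]
pi = stationary(P)
print(pi)
assert abs(sum(pi) - 1.0) < 1e-9  # a legitimate probability distribution
```

For the two-processor example this gives approximately [0.399, 0.495, 0.106].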
Birth-Death Example
(Diagram: states 0, 1, ..., i, ...; from each state the chain moves right with probability 1-p and left with probability p; state 0 returns to itself with probability p.)
P = [ p   1-p  0    0    ...
      p   0    1-p  0    ...
      0   p    0    1-p  ...
      ...                    ]
Thus, to find the steady state vector π we need to solve
π = π P   and   Σ_i π_i = 1
Birth-Death Example
In other words,
π_0 = p π_0 + p π_1
π_j = (1-p) π_{j-1} + p π_{j+1},   j = 1, 2, ...
Solving these equations we get
π_1 = ((1-p)/p) π_0
π_2 = ((1-p)/p)² π_0
In general,
π_j = ((1-p)/p)^j π_0
Summing all terms we get
Σ_{i=0..∞} π_0 ((1-p)/p)^i = 1   ⇒   π_0 = [ Σ_{i=0..∞} ((1-p)/p)^i ]^{-1}
Birth-Death Example
Therefore, for all states j we get
π_j = ((1-p)/p)^j / Σ_{i=0..∞} ((1-p)/p)^i
If p < 1/2, then
Σ_{i=0..∞} ((1-p)/p)^i = ∞   ⇒   π_j = 0 for all j
All states are transient.
If p > 1/2, then
Σ_{i=0..∞} ((1-p)/p)^i = p/(2p-1)   ⇒   π_0 = (2p-1)/p > 0
⇒   π_j = ((1-p)/p)^j (2p-1)/p,   for all j
All states are positive recurrent.
Birth-Death Example
If p = 1/2, then
Σ_{i=0..∞} ((1-p)/p)^i = ∞   ⇒   π_j = 0 for all j
All states are null recurrent.
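The closed form for p > 1/2 can be checked numerically; a short sketch with an assumed value p = 0.7 (truncating the geometric tail, which is negligible beyond a few hundred states):

```python
# Closed-form stationary probabilities pi_j = ((1-p)/p)^j * (2p-1)/p for p > 1/2.
p = 0.7
r = (1 - p) / p                      # ratio (1-p)/p < 1
pi = [(r ** j) * (2 * p - 1) / p for j in range(200)]

print(sum(pi))  # should be ~1 (tail beyond j = 200 is negligible)
# Balance check at state 0: pi_0 = p*pi_0 + p*pi_1
assert abs(pi[0] - (p * pi[0] + p * pi[1])) < 1e-12
```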
Reducible Markov Chains
(Diagram: a transient set T with paths leading into two irreducible sets S_1 and S_2.)
In steady state, we know that the Markov chain will eventually end in an irreducible set (where the previous analysis still holds) or in an absorbing state.
The only question that arises, in case there are two or more irreducible sets, is the probability that it will end in each set.
Reducible Markov Chains
(Diagram: a transient set T containing states i and r, with transitions into an irreducible set S.)
Suppose we start from state i. Then, there are two ways to go to S:
In one step, or
Go to some r ∈ T after k steps, and then to S.
Define
ρ_i(S) ≜ Pr{X_k ∈ S for some k = 1, 2, ... | X_0 = i}
Reducible Markov Chains
First consider the one-step transition:
Pr{X_1 ∈ S | X_0 = i} = Σ_{j∈S} p_ij
Next consider the general case for k = 2, 3, ...: the chain stays in T for the first k-1 steps and enters S at step k. Conditioning on the first transition and using the memoryless property,
Pr{X_k ∈ S, X_{k-1} ∈ T, ..., X_1 = r ∈ T | X_0 = i}
  = Pr{X_k ∈ S, X_{k-1} ∈ T, ..., X_2 ∈ T | X_1 = r} · Pr{X_1 = r | X_0 = i}
Summing over all first steps (into S directly, or into some r ∈ T),
ρ_i(S) = Σ_{j∈S} p_ij + Σ_{r∈T} ρ_r(S) p_ir
Continuous-Time Markov Chains
In this case, transitions can occur at any time.
Recall the Markov (memoryless) property
Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k, ..., X(t_0) = x_0} = Pr{X(t_{k+1}) = x_{k+1} | X(t_k) = x_k}
where t_1 < t_2 < ... < t_k.
Recall that the Markov property implies that
X(t_{k+1}) depends only on X(t_k) (state memory).
It does not matter how long the chain has been in state X(t_k) (age memory).
The transition probabilities now need to be defined for every time instant as p_ij(t), i.e., the probability that the MC transitions from state i to j at time t.
Transition Function
Define the transition function
p_ij(s, t) ≜ Pr{X(t) = j | X(s) = i},   s ≤ t
The continuous-time analogue of the Chapman-Kolmogorov equation is
p_ij(s, t) = Σ_r Pr{X(t) = j | X(u) = r, X(s) = i} Pr{X(u) = r | X(s) = i},   s ≤ u ≤ t
Using the memoryless property,
p_ij(s, t) = Σ_r Pr{X(t) = j | X(u) = r} Pr{X(u) = r | X(s) = i}
Define H(s, t) = [p_ij(s, t)], i, j = 1, 2, ...; then
H(s, t) = H(s, u) H(u, t),   s ≤ u ≤ t
Note that H(s, s) = I.
Transition Rate Matrix
Consider the Chapman-Kolmogorov equation for s ≤ t ≤ t+Δt:
H(s, t+Δt) = H(s, t) H(t, t+Δt)
Subtracting H(s, t) from both sides and dividing by Δt,
[H(s, t+Δt) - H(s, t)]/Δt = H(s, t) [H(t, t+Δt) - I]/Δt
Taking the limit as Δt → 0,
∂H(s, t)/∂t = H(s, t) Q(t)
where the transition rate matrix Q(t) is given by
Q(t) = lim_{Δt→0} [H(t, t+Δt) - I]/Δt
Homogeneous Case
In the homogeneous case, the transition functions do not depend on s and t separately, but only on the difference t - s; thus
p_ij(s, t) = p_ij(t - s)
It follows that
H(s, t) = P(t - s)
and the transition rate matrix
Q = lim_{Δt→0} [H(t, t+Δt) - I]/Δt = lim_{Δt→0} [P(Δt) - I]/Δt = constant
Thus
∂P(t)/∂t = P(t) Q,   with   p_ij(0) = 1 if i = j, 0 if i ≠ j   ⇒   P(t) = e^{Qt}
State Holding Time
(Figure-only slides; no text was extracted.)
Transition Rate Matrix Q
Recall that
∂P(t)/∂t = P(t) Q
Evaluating this at t = 0, where P(0) = I, we have
Q = ∂P(t)/∂t |_{t=0}   ⇒   q_ij = ∂p_ij(t)/∂t |_{t=0}
If i ≠ j (τ: exponential residual lifetime),
p_ij(t) = Pr{τ < t} = 1 - e^{-λ_ij t}   ⇒   q_ij = ∂p_ij(t)/∂t |_{t=0} = λ_ij e^{-λ_ij t} |_{t=0} = λ_ij
In other words, q_ij is the rate of the Poisson process that activates the event that makes the transition from i to j.
Transition Rate Matrix Q
If i = j (τ: exponential residual lifetime),
p_ii(t) = Pr{τ > t} = e^{-λ_ii t}   ⇒   q_ii = ∂p_ii(t)/∂t |_{t=0} = -λ_ii e^{-λ_ii t} |_{t=0} = -λ_ii
so 1 - p_ii(∂t) ≈ λ_ii ∂t is the probability that the chain leaves state i during a short interval ∂t.
Note that for each row i, the sum
Σ_j p_ij(t) = 1   ⇒   Σ_j q_ij = 0
Transition Probabilities P
Suppose that state transitions occur at random points in time T_1 < T_2 < ... < T_k < ...
Let X_k be the state after the transition at T_k.
Define
P_ij ≜ Pr{X_{k+1} = j | X_k = i}
Recall that in the case of the superposition of two or more Poisson processes, the probability that the next event is from process j is given by λ_j/Λ.
In this case, we have
P_ij = q_ij / (-q_ii),   i ≠ j,   and   P_ii = 0
Example
Assume a computer system where jobs arrive according to a Poisson process with rate λ.
Each job is processed using a First In First Out (FIFO) policy.
The processing time of each job is exponential with rate μ.
The computer has a buffer to store up to two jobs that wait for processing.
Jobs that find the buffer full are lost.
Draw the state transition diagram.
Find the Rate Transition Matrix Q.
Find the State Transition Matrix P.
Example
(Diagram: states 0, 1, 2, 3; arrivals "a" (rate λ) move the chain right, departures "d" (rate μ) move it left.)
The rate transition matrix is given by
Q = [ -λ     λ       0       0
       μ   -(λ+μ)    λ       0
       0      μ    -(λ+μ)    λ
       0      0       μ     -μ ]
The state transition matrix is given by
P = [ 0        1        0        0
      μ/(λ+μ)  0        λ/(λ+μ)  0
      0        μ/(λ+μ)  0        λ/(λ+μ)
      0        0        1        0 ]
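The jump-chain matrix P can be obtained mechanically from Q via P_ij = q_ij/(-q_ii), P_ii = 0. A minimal Python sketch (the numeric rates lam and mu are assumed sample values, not from the slides):

```python
# Build the embedded (jump-chain) transition matrix from a rate matrix Q,
# using P_ij = q_ij / (-q_ii) for i != j and P_ii = 0.
def jump_chain(Q):
    n = len(Q)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        out_rate = -Q[i][i]          # total rate out of state i
        for j in range(n):
            if i != j:
                P[i][j] = Q[i][j] / out_rate
    return P

lam, mu = 1.0, 2.0  # assumed sample rates
Q = [[-lam, lam, 0, 0],
     [mu, -(lam + mu), lam, 0],
     [0, mu, -(lam + mu), lam],
     [0, 0, mu, -mu]]
P = jump_chain(Q)
print(P)
```

The result reproduces the structure on the slide: from state 0 the chain jumps to 1 with probability 1, from state 1 to 0 or 2 with probabilities μ/(λ+μ) and λ/(λ+μ), and so on.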
State Probabilities and Transient Analysis
Similar to the discrete-time case, we define
π_j(t) ≜ Pr{X(t) = j}
In vector form,
π(t) = [π_1(t), π_2(t), ...]
with initial probabilities π(0) = [π_1(0), π_2(0), ...].
Using our previous notation (for a homogeneous MC),
π(t) = π(0) P(t) = π(0) e^{Qt}
Obtaining a general solution is not easy!
Differentiating with respect to t gives us more insight:
dπ(t)/dt = π(t) Q   ⇔   dπ_j(t)/dt = q_jj π_j(t) + Σ_{i≠j} q_ij π_i(t)
Note: (e^{Qt})' = Q e^{Qt} = e^{Qt} Q.
“Probability Fluid” view
We view π_j(t) as the level of a “probability fluid” that is stored at each node j (0 = empty, 1 = full).
dπ_j(t)/dt = q_jj π_j(t) + Σ_{i≠j} q_ij π_i(t) = -(Σ_{r≠j} q_jr) π_j(t) + Σ_{i≠j} q_ij π_i(t)
since q_jj = -Σ_{r≠j} q_jr. The change in the probability fluid at node j is the inflow Σ_{i≠j} q_ij π_i(t) minus the outflow (Σ_{r≠j} q_jr) π_j(t).
(Diagram: node j with inflow arcs q_ij from nodes i and outflow arcs q_jr to nodes r.)
Steady State Analysis
Often we are interested in the “long-run” probabilistic behavior of the Markov chain, i.e.,
π_j = lim_{t→∞} π_j(t)
These are referred to as steady state, equilibrium, or stationary state probabilities.
As with the discrete-time case, we need to address the following questions:
Under what conditions do the limits exist?
If they exist, do they form legitimate probabilities?
How can we evaluate these limits?
Steady State Analysis
Theorem: In an irreducible continuous-time Markov Chain consisting of positive recurrent states, a unique stationary state probability vector π exists with
π_j = lim_{t→∞} π_j(t)
These vectors are independent of the initial state probability and can be obtained by solving
πQ = 0   and   Σ_j π_j = 1
Using the “probability fluid” view: in steady state the fluid level at each node stays constant (dπ_j(t)/dt = 0), i.e., inflow equals outflow:
q_jj π_j + Σ_{i≠j} q_ij π_i = 0
Example
(Diagram: states 0, 1, 2, 3; arrivals (rate λ) move the chain right, departures (rate μ) move it left.)
For the previous example, with the above transition function, what are the steady state probabilities?
Solve
πQ = [π_0  π_1  π_2  π_3] [ -λ     λ       0       0
                              μ   -(λ+μ)    λ       0
                              0      μ    -(λ+μ)    λ
                              0      0       μ     -μ ] = 0
and
π_0 + π_1 + π_2 + π_3 = 1
Example
The solution is obtained as follows:
-λπ_0 + μπ_1 = 0                  ⇒   π_1 = (λ/μ) π_0
λπ_0 - (λ+μ)π_1 + μπ_2 = 0        ⇒   π_2 = (λ/μ)² π_0
λπ_1 - (λ+μ)π_2 + μπ_3 = 0        ⇒   π_3 = (λ/μ)³ π_0
π_0 + π_1 + π_2 + π_3 = 1         ⇒   π_0 = 1 / [1 + (λ/μ) + (λ/μ)² + (λ/μ)³]
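A quick numeric check of this closed form (lam and mu are assumed sample values): build π from the formula and verify that it satisfies πQ = 0.

```python
# Closed form: pi_j = (lam/mu)^j * pi_0, with pi_0 normalizing the sum to 1.
lam, mu = 1.0, 2.0
rho = lam / mu
pi0 = 1.0 / sum(rho ** j for j in range(4))
pi = [pi0 * rho ** j for j in range(4)]
print(pi)

# Verify pi Q = 0 for the Q matrix of the example:
Q = [[-lam, lam, 0, 0],
     [mu, -(lam + mu), lam, 0],
     [0, mu, -(lam + mu), lam],
     [0, 0, mu, -mu]]
residual = [sum(pi[i] * Q[i][j] for i in range(4)) for j in range(4)]
assert all(abs(x) < 1e-12 for x in residual)
```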
Birth-Death Chain
(Diagram: states 0, 1, ..., i, ... with birth rates λ_0, λ_1, ..., λ_{i-1}, λ_i, ... and death rates μ_1, ..., μ_i, μ_{i+1}, ...)
Find the steady state probabilities.
Similarly to the previous example,
Q = [ -λ_0      λ_0         0          ...
       μ_1   -(λ_1+μ_1)     λ_1        ...
       0        μ_2      -(λ_2+μ_2)    ...
       ...                                ]
And we solve
πQ = 0   and   Σ_{i=0..∞} π_i = 1
Example
The solution is obtained as follows:
-λ_0 π_0 + μ_1 π_1 = 0                    ⇒   π_1 = (λ_0/μ_1) π_0
λ_0 π_0 - (λ_1+μ_1) π_1 + μ_2 π_2 = 0     ⇒   π_2 = (λ_0 λ_1)/(μ_1 μ_2) π_0
In general,
λ_{j-1} π_{j-1} - (λ_j+μ_j) π_j + μ_{j+1} π_{j+1} = 0   ⇒   π_{j+1} = (λ_0 ··· λ_j)/(μ_1 ··· μ_{j+1}) π_0
Making the sum equal to 1:
π_0 [ 1 + Σ_{j=1..∞} (λ_0 ··· λ_{j-1})/(μ_1 ··· μ_j) ] = 1
A solution exists if
S = 1 + Σ_{j=1..∞} (λ_0 ··· λ_{j-1})/(μ_1 ··· μ_j) < ∞
Uniformization of Markov Chains
In general discrete time models are easier to work with In general, discretetime models are easier to work with,
and computers (that are needed to solve such models)
operate in discretetime
Thus, we need a way to turn continuoustime to discrete
time Markov Chains
Uniformization Procedure
Recall that the total rate out of state i is –q
ii
=Λ(i). Pick
if t h th t f ll t t a uniform rate γ such that γ ≥ Λ(i) for all states i.
The difference γ  Λ(i) implies a “fictitious” event that
returns the MC back to state i (self loop) returns the MC back to state i (self loop).
A.BENHARI
Uniformization of Markov Chains
Uniformization Procedure
Let P^U_ij be the transition probability from state i to state j for the discrete-time uniformized Markov Chain; then
P^U_ij = q_ij/γ                     if i ≠ j
P^U_ii = 1 - (Σ_{j≠i} q_ij)/γ       if i = j
(Diagram: the rates q_ij, q_ik out of state i become probabilities q_ij/γ, q_ik/γ, plus a self-loop at i with the remaining probability.)
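The uniformization formulas can be sketched directly in Python (the choice γ = max_i Λ(i), the smallest admissible uniform rate, is mine; the rates lam, mu are assumed sample values):

```python
# Uniformize a rate matrix Q: off-diagonal entries become q_ij/gamma and each
# diagonal entry becomes the leftover self-loop probability.
def uniformize(Q, gamma=None):
    n = len(Q)
    if gamma is None:
        gamma = max(-Q[i][i] for i in range(n))  # smallest admissible gamma
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                P[i][j] = Q[i][j] / gamma
        P[i][i] = 1.0 - sum(Q[i][j] for j in range(n) if j != i) / gamma
    return P

lam, mu = 1.0, 2.0  # assumed sample rates
Q = [[-lam, lam, 0, 0],
     [mu, -(lam + mu), lam, 0],
     [0, mu, -(lam + mu), lam],
     [0, 0, mu, -mu]]
P = uniformize(Q)
print(P)
```

Each row of the resulting matrix sums to 1, and states with total rate below γ get a self-loop; the uniformized chain can then be analyzed with the discrete-time tools from earlier sections.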
Poisson Processes
Summary
The Poisson Process
Properties of the Poisson Process
Interarrival times
Memoryless property and the residual lifetime paradox
Superposition of Poisson processes
Random selection of Poisson Points
Bulk Arrivals and Compound Poisson Processes
The Poisson Counting Process
Let the process {N(t)} count the number of events that have occurred in the interval [0, t). For any 0 ≤ t_1 ≤ ... ≤ t_k ≤ ...,
0 = N(0) ≤ N(t_1) ≤ ... ≤ N(t_k) ≤ ...
(Figure: a sample staircase path of N(t), increasing by one at each of t_1, t_2, t_3, ..., t_{k-1}, t_k.)
Process with independent increments: the random variables N(t_1), N(t_1, t_2), ..., N(t_{k-1}, t_k), ... are mutually independent, where
N(t_{k-1}, t_k) = N(t_k) - N(t_{k-1})
Process with stationary independent increments: the random variable N(t_{k-1}, t_k) does not depend on t_{k-1}, t_k but only on the difference t_k - t_{k-1}.
The Poisson Counting Process
Assumptions:
At most one event can occur at any time instant (no two or more events can occur at the same time).
The process has stationary independent increments:
Pr{N(t_{k-1}, t_k) = n} = Pr{N(t_k - t_{k-1}) = n}
Given that a process satisfies the above assumptions, find
P_n(t) ≜ Pr{N(t) = n},   n = 0, 1, 2, ...
The Poisson Process
Step 1: Determine P_0(t) ≜ Pr{N(t) = 0}.
Starting from
Pr{N(t+s) = 0} = Pr{N(t) = 0 and N(t, t+s) = 0}
               = Pr{N(t) = 0} Pr{N(s) = 0}     (stationary independent increments)
⇒   P_0(t+s) = P_0(t) P_0(s)
Lemma: Let g(t) be a differentiable function for all t ≥ 0 such that g(0) = 1 and g(t) ≤ 1 for all t > 0. Then for any t, s ≥ 0,
g(t+s) = g(t) g(s)   ⇔   g(t) = e^{-λt}
for some λ > 0.
The Poisson Process
Therefore
P_0(t) ≜ Pr{N(t) = 0} = e^{-λt}
Step 2: Determine P_0(Δt) for a small Δt:
P_0(Δt) = Pr{N(Δt) = 0} = e^{-λΔt} = 1 - λΔt + (λΔt)²/2! - (λΔt)³/3! + ... = 1 - λΔt + o(Δt)
Step 3: Determine P_n(Δt) for a small Δt.
For n = 2, 3, ..., since by assumption no two events can occur at the same time,
P_n(Δt) ≜ Pr{N(Δt) = n} = o(Δt)
As a result, for n = 1,
P_1(Δt) ≜ Pr{N(Δt) = 1} = λΔt + o(Δt)
The Poisson Process
Step 4: Determine P_n(t+Δt) for any n:
P_n(t+Δt) ≜ Pr{N(t+Δt) = n} = Σ_{k=0..n} P_{n-k}(t) P_k(Δt)
          = P_n(t) P_0(Δt) + P_{n-1}(t) P_1(Δt) + o(Δt)
          = P_n(t) [1 - λΔt] + P_{n-1}(t) λΔt + o(Δt)
Moving terms between sides,
[P_n(t+Δt) - P_n(t)]/Δt = -λ P_n(t) + λ P_{n-1}(t) + o(Δt)/Δt
Taking the limit as Δt → 0,
dP_n(t)/dt = -λ P_n(t) + λ P_{n-1}(t)
The Poisson Process
Step 5: Solve the differential equation to obtain
P_n(t) ≜ Pr{N(t) = n} = ((λt)^n / n!) e^{-λt},   t > 0,   n = 0, 1, 2, ...
This expression is known as the Poisson distribution and it fully characterizes the stochastic process {N(t)} in [0, t) under the assumptions that
No two events can occur at exactly the same time, and
The process has stationary independent increments.
You should verify that
E[N(t)] = λt   and   var[N(t)] = λt
Parameter λ has the interpretation of the “rate” at which events arrive.
Properties of the Poisson Process: Interevent Times
Let t_{k-1} be the time when the (k-1)th event has occurred and let V_k denote the (random variable) interevent time between the (k-1)th and kth events.
What is the cdf of V_k, G_k(t)?
G_k(t) ≜ Pr{V_k ≤ t} = 1 - Pr{V_k > t}
        = 1 - Pr{0 arrivals in the interval [t_{k-1}, t_{k-1}+t)}
        = 1 - Pr{N(t) = 0}     (stationary independent increments)
        = 1 - e^{-λt}
⇒   G(t) = 1 - e^{-λt}
(Figure: timeline with N(t_{k-1}, t_{k-1}+t) = 0 over the interval from t_{k-1} to t_{k-1}+t, with V_k the gap to the next event.)
This is the Exponential Distribution.
Properties of the Poisson Process: Exponential Interevent Times
The process {V_k}, k = 1, 2, ..., that corresponds to the interevent times of a Poisson process is an iid stochastic sequence with cdf
G(t) ≜ Pr{V_k ≤ t} = 1 - e^{-λt}
Therefore, the Poisson process is also a renewal process.
The corresponding pdf is
g(t) = λ e^{-λt},   t ≥ 0
One can easily show that
E[V_k] = 1/λ   and   var[V_k] = 1/λ²
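This characterization gives the standard way to simulate a Poisson process: sum iid exponential interevent times with mean 1/λ. A sketch (the rate and horizon values are assumed for illustration); the count in [0, T) should then have mean λT:

```python
import random

# Generate one realization of a Poisson process on [0, T) by summing iid
# exponential interevent times, and return the number of events.
def poisson_count(lam, T, rng):
    t, n = 0.0, 0
    while True:
        t += rng.expovariate(lam)  # exponential interevent time V_k
        if t >= T:
            return n
        n += 1

rng = random.Random(42)
lam, T = 1.5, 10.0
counts = [poisson_count(lam, T, rng) for _ in range(20_000)]
mean = sum(counts) / len(counts)
print(mean)  # close to lam*T = 15
```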
Properties of the Poisson Process: Memoryless Property
Let t_k be the time when the previous event has occurred and let V denote the time until the next event.
Assuming that we have been at the current state for z time units, let Y = V - z be the remaining time until the next event.
What is the cdf of Y?
F_Y(t) = Pr{Y ≤ t} = Pr{V - z ≤ t | V > z}
       = Pr{V > z and V ≤ z+t} / Pr{V > z}
       = Pr{z < V ≤ z+t} / (1 - Pr{V ≤ z})
       = [G(z+t) - G(z)] / [1 - G(z)]
       = [(1 - e^{-λ(z+t)}) - (1 - e^{-λz})] / e^{-λz}
⇒   F_Y(t) = 1 - e^{-λt} = G(t)
Memoryless! It does not matter that we have already spent z time units at the current state.
(Figure: timeline with the previous event at t_k, elapsed time z up to t_k+z, and remaining time Y = V - z.)
Memoryless Property
This is a unique property of the exponential distribution. If a process has the memoryless property, then it must be exponential, i.e.,
Pr{V ≤ z+t | V > z} = Pr{V ≤ t}   ⇔   Pr{V ≤ t} = 1 - e^{-λt}
Poisson Process (rate λ)  ⇔  Exponential Interevent Times G(t) = 1 - e^{-λt}  ⇔  Memoryless Property
Superposition of Poisson Processes
Consider a DES with m events, each modeled as a Poisson Process with rate λ_i, i = 1, ..., m. What is the resulting process?
Suppose at time t_k we observe event 1. Let Y_1 be the time until the next event 1. Its cdf is G_1(t) = 1 - exp{-λ_1 t}.
Let Y_1, ..., Y_m denote the residual times until the next occurrence of the corresponding events.
By the memoryless property, their cdfs are
G_i(t) = 1 - e^{-λ_i t}
Let Y* be the time until the next event (of any type):
Y* = min_i {Y_i}
Therefore, we need to find
G_{Y*}(t) = Pr{Y* ≤ t}
Superposition of Poisson Processes
G_{Y*}(t) = Pr{Y* ≤ t} = Pr{min_i {Y_i} ≤ t}
          = 1 - Pr{min_i {Y_i} > t}
          = 1 - Pr{Y_1 > t, ..., Y_m > t}
          = 1 - Π_{i=1..m} Pr{Y_i > t}     (independence)
          = 1 - Π_{i=1..m} e^{-λ_i t}
⇒   G_{Y*}(t) = 1 - e^{-Λt},   where Λ = Σ_{i=1..m} λ_i
The superposition of m Poisson processes is also a Poisson process with rate equal to the sum of the rates of the individual processes.
Superposition of Poisson Processes

Suppose that at time t_k an event has occurred. What is the probability that
the next event to occur is event j?

Without loss of generality, let j = 1 and define Y′ = min{Y_i : i = 2,…,m}.
By the superposition result, Y′ is exponential:

    Pr{Y′ ≤ t} = 1 − exp{−∑_{i=2}^m λ_i t} = 1 − e^(−Λ′t),   Λ′ = ∑_{i=2}^m λ_i

Then

    Pr{next event is j = 1} = Pr{Y_1 ≤ Y′}
        = ∫_0^∞ ( ∫_0^{y′} λ_1 e^(−λ_1 y_1) dy_1 ) Λ′ e^(−Λ′y′) dy′
        = ∫_0^∞ (1 − e^(−λ_1 y′)) Λ′ e^(−Λ′y′) dy′
        = λ_1/Λ,   where Λ = ∑_{i=1}^m λ_i
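The λ_1/Λ result can be checked with a short simulation; a sketch (rates chosen arbitrarily for illustration):

```python
import random

# Sketch: among independent exponential residual times with rates λ1, λ2, λ3,
# the probability that type 1 fires first should be λ1/Λ (here 1/6).
random.seed(2)
rates = [1.0, 2.0, 3.0]                 # λ1, λ2, λ3 with Λ = 6
n = 100_000
wins = 0
for _ in range(n):
    ys = [random.expovariate(lam) for lam in rates]
    if ys[0] < min(ys[1:]):             # event 1 occurs first
        wins += 1
p_hat = wins / n
print(round(p_hat, 3))                  # theory: λ1/Λ = 1/6 ≈ 0.167
```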
Residual Lifetime Paradox

Suppose that buses pass by the bus station according to a Poisson process with
rate λ. A passenger arrives at the bus station at some random point inside an
interarrival interval V = b_{k+1} − b_k, splitting it into the age Z (time
since the last bus) and the residual Y (time until the next bus), V = Z + Y.

How long does the passenger have to wait?

Solution 1:
E[V] = 1/λ. Therefore, since the passenger will (on average) arrive in the
middle of the interval, he has to wait E[Y] = E[V]/2 = 1/(2λ).
But using the memoryless property, the time until the next bus is
exponentially distributed with rate λ, therefore E[Y] = 1/λ, not 1/(2λ)!

Solution 2:
Using the memoryless property, the time until the next bus is exponentially
distributed with rate λ, therefore E[Y] = 1/λ.
But note that E[Z] = 1/λ, therefore E[V] = E[Z] + E[Y] = 2/λ, not 1/λ!

The resolution is that the interval containing a randomly arriving passenger
is not a typical interval: longer intervals are more likely to be sampled
(length biasing), so that particular interval indeed has mean 2/λ.
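The paradox shows up directly in simulation; a sketch (plain Monte Carlo, not from the slides):

```python
import bisect
import random

# Sketch: a passenger arriving at a uniform random time sees mean residual
# E[Y] = 1/λ, yet the interval containing the arrival has mean length 2/λ.
random.seed(3)
lam, T = 1.0, 200_000.0

t, buses = 0.0, [0.0]
while t < T:                                 # bus arrival times on [0, T]
    t += random.expovariate(lam)
    buses.append(t)

n = 50_000
wait_sum, interval_sum = 0.0, 0.0
for _ in range(n):
    u = random.uniform(0.0, 0.99 * T)        # passenger arrival time
    i = bisect.bisect_right(buses, u)        # index of the next bus
    wait_sum += buses[i] - u                 # residual Y
    interval_sum += buses[i] - buses[i - 1]  # length of V containing u

print(round(wait_sum / n, 2), round(interval_sum / n, 2))  # ≈ 1/λ and 2/λ
```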
Random Selection of Poisson Points

Let t_1, t_2, …, t_i, … represent the random arrival points of a Poisson
process X(t) with parameter λt. Associated with each point, we define an
independent Bernoulli r.v. N_i, where

    P{N_i = 1} = p,    P{N_i = 0} = 1 − p = q

Define

    Y(t) = ∑_{i=1}^{X(t)} N_i ;    Z(t) = ∑_{i=1}^{X(t)} (1 − N_i) = X(t) − Y(t)

We claim that both Y(t) and Z(t) are independent Poisson processes with
parameters λpt and λqt respectively.
Random Selection of Poisson Points

Proof:

    P{Y(t) = k} = ∑_{n=k}^∞ P{Y(t) = k | X(t) = n} P{X(t) = n}

where

    P{X(t) = n} = e^(−λt) (λt)^n / n!
    P{Y(t) = k | X(t) = n} = C(n,k) p^k q^(n−k),   0 ≤ k ≤ n

so that

    P{Y(t) = k} = ∑_{n=k}^∞ e^(−λt) ((λt)^n/n!) (n!/(k!(n−k)!)) p^k q^(n−k)
                = (p^k e^(−λt) (λt)^k / k!) ∑_{n=k}^∞ (λqt)^(n−k)/(n−k)!
                = (e^(−λt) (λpt)^k / k!) e^(λqt)
                = e^(−λpt) (λpt)^k / k!,   k = 0, 1, 2, …

i.e., Y(t) ~ P(λpt).
Random Selection of Poisson Points

Similarly, P{Z(t) = m} ~ P(λqt). More generally,

    P{Y(t) = k, Z(t) = m} = P{Y(t) = k, X(t) − Y(t) = m}
        = P{Y(t) = k, X(t) = k + m}
        = P{Y(t) = k | X(t) = k + m} P{X(t) = k + m}
        = ((k+m)!/(k! m!)) p^k q^m · e^(−λt) (λt)^(k+m)/(k+m)!
        = (e^(−λpt) (λpt)^k/k!) (e^(−λqt) (λqt)^m/m!)
        = P{Y(t) = k} P{Z(t) = m}

so Y(t) and Z(t) are independent.
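Thinning a simulated stream reproduces the λpt and λqt counts; a sketch (parameter values are arbitrary illustrations):

```python
import random

# Sketch: each arrival of a rate-λ Poisson process is kept (Y) with
# probability p, else dropped (Z); the counts over [0, T] should have means
# λpT and λqT respectively.
random.seed(4)
lam, p, T, trials = 3.0, 0.4, 10.0, 20_000

y_sum = z_sum = 0
for _ in range(trials):
    t = 0.0
    while True:
        t += random.expovariate(lam)
        if t > T:
            break
        if random.random() < p:
            y_sum += 1          # N_i = 1: point goes to Y(t)
        else:
            z_sum += 1          # N_i = 0: point goes to Z(t)

mean_y, mean_z = y_sum / trials, z_sum / trials
print(round(mean_y, 1), round(mean_z, 1))  # theory: λpT = 12, λqT = 18
```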
Bulk Arrivals and Compound Poisson Processes

Consider a random number of events C_i occurring simultaneously at each event
instant of a Poisson process.

(Figure: (a) an ordinary Poisson process with arrival instants t_1, t_2, …,
t_n; (b) a compound Poisson process with batch sizes C_1 = 3, C_2 = 2, …,
C_i = 4 at the same instants.)

Let P{C_i = k} = p_k, k = 0, 1, 2, …. Then

    X(t) = ∑_{i=1}^{N(t)} C_i

is a compound Poisson process, where N(t) is an ordinary Poisson process, and

    P{X(t) = m} = ∑_{n=0}^∞ P{X(t) = m | N(t) = n} P{N(t) = n}
Bulk Arrivals and Compound Poisson Processes

Define the generating functions

    φ_C(z) = E[z^(C_i)] = ∑_{k=0}^∞ P{C_i = k} z^k
    φ_X(z) = E[z^(X(t))] = ∑_{m=0}^∞ P{X(t) = m} z^m

Then

    φ_X(z) = ∑_{m=0}^∞ ∑_{n=0}^∞ P{X(t) = m | N(t) = n} P{N(t) = n} z^m
           = ∑_{n=0}^∞ ( ∑_{m=0}^∞ P{∑_{i=1}^n C_i = m} z^m ) P{N(t) = n}
           = ∑_{n=0}^∞ E[z^(C_1+…+C_n)] P{N(t) = n}
           = ∑_{n=0}^∞ (E[z^(C_i)])^n P{N(t) = n}
           = ∑_{n=0}^∞ (φ_C(z))^n P{N(t) = n}
           = e^(−λt(1 − φ_C(z)))
Bulk Arrivals and Compound Poisson Processes

We can write

    φ_X(z) = e^(−λ_1 t(1−z)) e^(−λ_2 t(1−z²)) ··· e^(−λ_k t(1−z^k)) ···

where λ_k = p_k λ, which shows that the compound Poisson process can be
expressed as the sum of integer-scaled independent Poisson processes
m_1(t), m_2(t), …:

    X(t) = ∑_{k=1}^∞ k m_k(t)

More generally, every linear combination of independent Poisson processes
represents a compound Poisson process.
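A compound Poisson process can be simulated directly from its definition; a sketch (batch distribution chosen for illustration) checking the Wald-identity mean E[X(t)] = λt·E[C]:

```python
import random

# Sketch: X(t) = Σ_{i ≤ N(t)} C_i with batch sizes C_i ∈ {1, 2, 3};
# by Wald's identity, E[X(T)] = λ·T·E[C].
random.seed(5)
lam, T, trials = 2.0, 5.0, 20_000
vals, probs = [1, 2, 3], [0.5, 0.3, 0.2]      # E[C] = 1.7

total = 0
for _ in range(trials):
    t = 0.0
    while True:
        t += random.expovariate(lam)          # next Poisson event
        if t > T:
            break
        total += random.choices(vals, probs)[0]   # bulk arrival of size C_i

mean_x = total / trials
print(round(mean_x, 1))  # theory: λ·T·E[C] = 2·5·1.7 = 17.0
```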
Queueing Theory I

Summary

Little's Law
Queueing System Notation
Stationary Analysis of Elementary Queueing Systems
  M/M/1
  M/M/m
  M/M/1/K
  …
Little's Law

a(t): the process that counts the number of arrivals up to time t.
d(t): the process that counts the number of departures up to time t.
N(t) = a(t) − d(t): the number of customers in the system at time t.
γ(t): the area between the curves a(t) and d(t) up to time t.

Average arrival rate (up to t):                    λ_t = a(t)/t
Average time each customer spends in the system:   T_t = γ(t)/a(t)
Average number in the system:                      N_t = γ(t)/t
Little's Law

Combining the three quantities,

    N_t = γ(t)/t = (a(t)/t)(γ(t)/a(t)) = λ_t T_t

Taking the limit as t goes to infinity,

    E[N] = λ E[T]

where E[N] is the expected number of customers in the system, E[T] is the
expected time in the system, and λ is the arrival rate into the system.
Generality of Little's Law

Little's Law, E[N] = λ E[T], is a pretty general result:
  It does not depend on the arrival process distribution.
  It does not depend on the service process distribution.
  It does not depend on the number of servers and buffers in the system.

It applies to any queueing network viewed as a whole, with λ the aggregate
arrival rate.
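Little's Law can be seen end-to-end on a simulated queue; a sketch (plain Python, not from the slides) on a single-server FIFO system, where the time-average of N(t) must match λ_t·T_t:

```python
import random

# Sketch: single-server FIFO queue with Poisson arrivals (λ = 0.5) and
# exponential service (μ = 1). The integral of N(t) over the horizon equals
# the total system time, so N̄ = λ̂·T̄ holds essentially exactly.
random.seed(6)
lam, mu, n_cust = 0.5, 1.0, 200_000

t, arrivals = 0.0, []
for _ in range(n_cust):
    t += random.expovariate(lam)
    arrivals.append(t)

departures, busy_until, sys_times = [], 0.0, []
for a in arrivals:
    start = max(a, busy_until)            # FIFO: wait for the server
    busy_until = start + random.expovariate(mu)
    departures.append(busy_until)
    sys_times.append(busy_until - a)

horizon = departures[-1]
events = sorted([(a, +1) for a in arrivals] + [(d, -1) for d in departures])
n, last, area = 0, 0.0, 0.0
for time, delta in events:                # integrate N(t) = a(t) − d(t)
    area += n * (time - last)
    n, last = n + delta, time

n_bar = area / horizon                    # time-average number in system
lam_hat = n_cust / horizon                # observed arrival rate
t_bar = sum(sys_times) / n_cust           # mean time in system
print(round(n_bar, 3), round(lam_hat * t_bar, 3))
```

For ρ = 0.5 both sides also sit near the M/M/1 value ρ/(1 − ρ) = 1.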
Specification of Queueing Systems

Customer arrival and service stochastic models
Structural parameters
  Number of servers
  Storage capacity
Operating policies
  Customer class differentiation (are all customers treated the same or do
  some have priority over others?)
  Scheduling/queueing policies (which customer is served next)
  Admission policies (which/when customers are admitted)
Queueing System Notation:  A/B/m/K/N

A — arrival process:  M: Markovian, D: Deterministic, Er: Erlang, G: General
B — service process:  M: Markovian, D: Deterministic, Er: Erlang, G: General
m — number of servers, m = 1, 2, …
K — storage capacity, K = 1, 2, …  (if ∞, it is omitted)
N — number of customers, N = 1, 2, …  (for closed networks; otherwise omitted)
Performance Measures of Interest

We are interested in steady-state behavior: even though it is possible to
pursue transient results, it is a significantly more difficult task.

E[S]  average system time (average time spent in the system)
E[W]  average waiting time (average time spent waiting in queue(s))
E[X]  average queue length
E[U]  average utilization (fraction of time that the resources are being used)
E[R]  average throughput (rate at which customers leave the system)
E[L]  average customer loss (rate at which customers are lost, or the
      probability that a customer is lost)
Recall the Birth–Death Chain Example

States j = 0, 1, 2, … with birth rates λ_j (transition j → j+1) and death
rates μ_j (transition j → j−1).

At steady state we obtain, for state 0,

    −λ_0 π_0 + μ_1 π_1 = 0   ⟹   π_1 = (λ_0/μ_1) π_0

In general, for state j,

    λ_{j−1} π_{j−1} − (λ_j + μ_j) π_j + μ_{j+1} π_{j+1} = 0

    ⟹   π_j = (λ_0 λ_1 ··· λ_{j−1})/(μ_1 μ_2 ··· μ_j) π_0

Making the probabilities sum to 1,

    π_0 ( 1 + ∑_{j=1}^∞ (λ_0 ··· λ_{j−1})/(μ_1 ··· μ_j) ) = 1

A solution exists if

    S = 1 + ∑_{j=1}^∞ (λ_0 ··· λ_{j−1})/(μ_1 ··· μ_j) < ∞
M/M/1 Example

Meaning: Poisson arrivals, exponentially distributed service times, one
server, and an infinite-capacity buffer.

Using the birth–death result with λ_j = λ and μ_j = μ, we obtain

    π_j = (λ/μ)^j π_0,   j = 0, 1, 2, …

Therefore, for λ/μ = ρ < 1,

    π_0 = 1/(1 + ∑_{j=1}^∞ (λ/μ)^j) = 1 − ρ

    π_j = ρ^j (1 − ρ),   j = 1, 2, …
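The geometric form of π_j makes the normalization and the mean easy to sanity-check numerically; a minimal sketch (truncation point is arbitrary):

```python
# Sketch: M/M/1 stationary distribution π_j = (1 − ρ)ρ^j; the truncated
# probabilities sum to 1 and have mean ρ/(1 − ρ).
rho = 0.8
pis = [(1 - rho) * rho**j for j in range(2000)]
total = sum(pis)
mean_n = sum(j * p for j, p in enumerate(pis))
print(round(total, 6), round(mean_n, 4))  # 1.0 and ρ/(1−ρ) = 4.0
```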
M/M/1 Performance Metrics

Server utilization:

    E[U] = ∑_{j=1}^∞ π_j = 1 − π_0 = 1 − (1 − ρ) = ρ

Throughput:

    E[R] = μ ∑_{j=1}^∞ π_j = μ(1 − π_0) = μρ = λ

Expected queue length:

    E[X] = ∑_{j=0}^∞ j π_j = (1 − ρ) ∑_{j=0}^∞ j ρ^j
         = ρ(1 − ρ) d/dρ { ∑_{j=0}^∞ ρ^j }
         = ρ(1 − ρ) d/dρ { 1/(1 − ρ) }
         = ρ(1 − ρ) · 1/(1 − ρ)²
         = ρ/(1 − ρ)
M/M/1 Performance Metrics

Average system time (by Little's Law, E[X] = λ E[S]):

    E[S] = E[X]/λ = (1/λ) ρ/(1 − ρ) = 1/(μ − λ)

Average waiting time in queue (E[S] = E[W] + E[Z], where E[Z] = 1/μ is the
mean service time):

    E[W] = E[S] − 1/μ = 1/(μ − λ) − 1/μ = ρ/(μ − λ)
M/M/1 Performance Metrics Examples

(Figure: E[S], E[W], and E[X] versus ρ for μ = 0.5; the vertical axis is in
customers / delay (time units). All three measures grow without bound as
ρ → 1.)
PASTA Property

PASTA: Poisson Arrivals See Time Averages.

Let π_j(t) = Pr{system state X(t) = j} and a_j(t) = Pr{an arriving customer
at t finds X(t) = j}. In general π_j(t) ≠ a_j(t)!

Example: a D/D/1 system with interarrival times equal to 1 and service times
equal to 0.5. The server alternates between 0.5 time units busy and 0.5 idle,
so π_0(t) = 0.5 and π_1(t) = 0.5; yet every arrival finds an empty system, so
a_0(t) = 1 and a_1(t) = 0!
Theorem

For a queueing system, when the arrival process is Poisson and independent of
the service process, the probability that an arriving customer finds j
customers in the system is equal to the probability that the system is at
state j. In other words,

    a_j(t) = π_j(t) = Pr{X(t) = j},   j = 0, 1, …

Proof: Let A(t, t+Δt) denote an arrival in the interval (t, t+Δt]. Then

    a_j(t) = lim_{Δt→0} Pr{X(t) = j | A(t, t+Δt)}
           = lim_{Δt→0} Pr{X(t) = j, A(t, t+Δt)} / Pr{A(t, t+Δt)}
           = lim_{Δt→0} Pr{X(t) = j} Pr{A(t, t+Δt)} / Pr{A(t, t+Δt)}
           = Pr{X(t) = j} = π_j(t)

where the third equality holds because the Poisson arrival in (t, t+Δt] is
independent of the history up to t, and hence of X(t).
M/M/m Queueing System

Meaning: Poisson arrivals, exponentially distributed service times, m
identical servers, and an infinite-capacity buffer. The birth–death rates are

    λ_j = λ   and   μ_j = { jμ   if 0 ≤ j ≤ m
                            mμ   if j > m
M/M/m Queueing System

Using the general birth–death result,

    π_j = (1/j!) (λ/μ)^j π_0,          if j ≤ m
    π_j = (m^m/m!) (λ/(mμ))^j π_0,     if j > m

Letting ρ = λ/(mμ) we get

    π_j = ((mρ)^j/j!) π_0,             if j ≤ m
    π_j = (m^m ρ^j/m!) π_0,            if j > m

To find π_0, normalize:

    π_0 = [ ∑_{j=0}^{m−1} (mρ)^j/j! + ∑_{j=m}^∞ m^m ρ^j/m! ]^(−1)
        = [ ∑_{j=0}^{m−1} (mρ)^j/j! + (mρ)^m/(m!(1 − ρ)) ]^(−1)
M/M/m Performance Metrics

Server utilization (expected number of busy servers):

    E[busy] = ∑_{j=1}^{m−1} j π_j + m Pr{X ≥ m}
            = ∑_{j=1}^{m−1} j ((mρ)^j/j!) π_0 + m ∑_{j=m}^∞ (m^m ρ^j/m!) π_0
            = mρ π_0 [ ∑_{j=0}^{m−2} (mρ)^j/j! + (mρ)^{m−1}/((m−1)!(1 − ρ)) ]
            = mρ π_0 π_0^(−1)
            = mρ = λ/μ

so each of the m servers is utilized a fraction ρ of the time.
M/M/m Performance Metrics

Throughput:

    E[R] = ∑_{j=1}^{m−1} jμ π_j + ∑_{j=m}^∞ mμ π_j = λ

Expected queue length:

    E[X] = ∑_{j=0}^∞ j π_j = mρ + ρ (mρ)^m/(m!(1 − ρ)²) π_0

Using Little's Law:

    E[S] = E[X]/λ = 1/μ + (1/λ) ρ (mρ)^m/(m!(1 − ρ)²) π_0

Average waiting time in queue:

    E[W] = E[S] − 1/μ
M/M/m Performance Metrics

Queueing probability (Erlang C formula):

    P_Q = Pr{X ≥ m} = ∑_{j=m}^∞ π_j = π_0 (mρ)^m/(m!(1 − ρ))
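The Erlang C formula is straightforward to compute once π_0 is known; a minimal sketch (the helper name `erlang_c` is mine, not from the slides):

```python
from math import factorial

# Sketch: Erlang C — the probability an arrival must queue in M/M/m,
# P_Q = π0·(mρ)^m / (m!(1 − ρ)) with ρ = λ/(mμ).
def erlang_c(lam: float, mu: float, m: int) -> float:
    rho = lam / (m * mu)
    assert rho < 1, "system must be stable"
    a = m * rho                                    # offered load λ/μ
    inv_pi0 = sum(a**j / factorial(j) for j in range(m))
    inv_pi0 += a**m / (factorial(m) * (1 - rho))
    pi0 = 1.0 / inv_pi0
    return pi0 * a**m / (factorial(m) * (1 - rho))

# For m = 1 this reduces to Pr{X ≥ 1} = ρ, the M/M/1 utilization.
print(round(erlang_c(0.8, 1.0, 1), 2))  # 0.8
```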
Example

Suppose that customers arrive according to a Poisson process with rate λ = 1.
You are given the following three options:

  (A) Install a single server with processing capacity μ_1 = 1.5.
  (B) Install two identical servers with processing capacities μ_2 = 0.75 and
      μ_3 = 0.75, fed by a single shared queue.
  (C) Split the incoming traffic into two queues, each with probability 0.5,
      and have μ_2 = 0.75 and μ_3 = 0.75 serve one queue each.
Example

Throughput: it is easy to see that all three systems have the same throughput,
E[R_A] = E[R_B] = E[R_C] = λ.

Server utilization:

    E[U_A] = λ/μ_1 = 1/1.5 = 2/3

    E[U_B] = λ/μ_2 = 1/0.75 = 4/3 busy servers on average; therefore, each of
             the two servers is 2/3 utilized

    E[U_C] = (λ/2)/μ_2 = 0.5/0.75 = 2/3

Therefore, all servers are similarly loaded.
Example

Probability of being idle:

    (A)  π_0 = 1 − λ/μ_1 = 1/3

    (B)  Using the M/M/m result with m = 2, ρ = λ/(2μ_2) = 2/3, mρ = 4/3:

         π_0 = [ 1 + 4/3 + (4/3)²/(2(1 − 2/3)) ]^(−1) = 1/5

    (C)  For each server,  π_0 = 1 − (λ/2)/μ_2 = 1/3
Example

Queue length and delay:

    (A)  E[X_A] = λ/(μ_1 − λ) = 1/(1.5 − 1) = 2
         E[S_A] = E[X_A]/λ = 2

    (B)  E[X_B] = mρ + ρ(mρ)^m/(m!(1 − ρ)²) π_0 = 12/5
         E[S_B] = E[X_B]/λ = 12/5

    (C)  For each queue, E[X] = (λ/2)/(μ_2 − λ/2) = 0.5/(0.75 − 0.5) = 2,
         so E[X_C] = 2 × 2 = 4 and E[S_C] = E[X_C]/λ = 4

So the shared-queue two-server system (B) outperforms the split system (C),
and the single fast server (A) gives the smallest delay of all.
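The three configurations can be compared programmatically with the formulas derived so far; a sketch (helper names are mine) reproducing E[S_A] = 2, E[S_B] = 12/5, E[S_C] = 4:

```python
from math import factorial

# Sketch: mean system times of the three options via the M/M/1 and M/M/m
# closed forms.
def mm1_system_time(lam, mu):
    return 1.0 / (mu - lam)                 # E[S] = 1/(μ − λ)

def mmm_system_time(lam, mu, m):
    rho = lam / (m * mu)
    a = m * rho
    pi0 = 1.0 / (sum(a**j / factorial(j) for j in range(m))
                 + a**m / (factorial(m) * (1 - rho)))
    mean_n = a + rho * a**m / (factorial(m) * (1 - rho) ** 2) * pi0
    return mean_n / lam                     # Little's Law

s_a = mm1_system_time(1.0, 1.5)
s_b = mmm_system_time(1.0, 0.75, 2)
s_c = mm1_system_time(0.5, 0.75)            # each split queue is an M/M/1
print(round(s_a, 2), round(s_b, 2), round(s_c, 2))  # 2.0 2.4 4.0
```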
M/M/∞ Queueing System

Special case of the M/M/m system with m going to ∞:

    λ_j = λ   and   μ_j = jμ   for all j

Let ρ = λ/μ; then the state probabilities are given by

    π_j = (ρ^j/j!) π_0

    π_0 ( 1 + ∑_{j=1}^∞ ρ^j/j! ) = 1   ⟹   π_0 = e^(−ρ)

    ⟹   π_j = e^(−ρ) ρ^j/j!

System utilization and throughput:

    E[U] = 1 − π_0 = 1 − e^(−ρ),     E[R] = λ
M/M/∞ Performance Metrics

Expected number in the system (= number of busy servers):

    E[X] = ∑_{j=0}^∞ j e^(−ρ) ρ^j/j! = ρ e^(−ρ) ∑_{j=1}^∞ ρ^(j−1)/(j−1)!
         = ρ = λ/μ

Using Little's Law:

    E[S] = E[X]/λ = ρ/λ = 1/μ

No queueing!
M/M/1/K – Finite Buffer Capacity

Meaning: Poisson arrivals, exponentially distributed service times, one
server, and a finite-capacity buffer so that at most K customers are in the
system.

Using the birth–death result with λ_j = λ and μ_j = μ, we obtain

    π_j = (λ/μ)^j π_0,   j = 0, 1, …, K

Therefore, for λ/μ = ρ,

    π_0 = 1/∑_{j=0}^K ρ^j = (1 − ρ)/(1 − ρ^(K+1))

    π_j = ρ^j (1 − ρ)/(1 − ρ^(K+1)),   j = 1, 2, …, K

Note that no stability condition ρ < 1 is needed here: the chain is finite, so
the stationary distribution exists for any ρ.
M/M/1/K Performance Metrics

Server utilization:

    E[U] = 1 − π_0 = 1 − (1 − ρ)/(1 − ρ^(K+1)) = ρ(1 − ρ^K)/(1 − ρ^(K+1))

Throughput:

    E[R] = μ(1 − π_0) = λ(1 − ρ^K)/(1 − ρ^(K+1)) < λ

Blocking probability (the probability that an arriving customer finds the
queue full, at state K):

    P_B = π_K = ρ^K (1 − ρ)/(1 − ρ^(K+1))
M/M/1/K Performance Metrics

Expected queue length:

    E[X] = ∑_{j=0}^K j π_j = ((1 − ρ)/(1 − ρ^(K+1))) ∑_{j=0}^K j ρ^j
         = ((1 − ρ)/(1 − ρ^(K+1))) ρ d/dρ { ∑_{j=0}^K ρ^j }
         = ((1 − ρ)/(1 − ρ^(K+1))) ρ d/dρ { (1 − ρ^(K+1))/(1 − ρ) }
         = ρ/(1 − ρ) − (K + 1) ρ^(K+1)/(1 − ρ^(K+1))

System time (by Little's Law with the net arrival rate, i.e., excluding lost
customers):

    E[S] = E[X] / ( λ(1 − π_K) )
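The closed-form mean above can be cross-checked against a direct sum over the stationary distribution; a minimal sketch (parameter values are arbitrary):

```python
# Sketch: M/M/1/K mean queue length — direct sum over
# π_j = ρ^j(1 − ρ)/(1 − ρ^(K+1)) versus the closed form.
rho, K = 0.9, 10
norm = (1 - rho) / (1 - rho ** (K + 1))
pis = [norm * rho**j for j in range(K + 1)]
mean_direct = sum(j * p for j, p in enumerate(pis))
mean_closed = rho / (1 - rho) - (K + 1) * rho ** (K + 1) / (1 - rho ** (K + 1))
print(round(mean_direct, 4), round(mean_closed, 4))  # identical values
```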
M/M/m/m – Queueing System

Meaning: Poisson arrivals, exponentially distributed service times, m servers
and no extra storage capacity (an arrival that finds all m servers busy is
lost).

Using the birth–death result with λ_j = λ and μ_j = jμ, we obtain

    π_j = (1/j!) (λ/μ)^j π_0,   j = 0, 1, …, m

Therefore, for λ/μ = ρ,

    π_0 = [ ∑_{j=0}^m ρ^j/j! ]^(−1)

    π_j = (ρ^j/j!) π_0,   j = 1, 2, …, m
M/M/m/m Performance Metrics

Blocking probability — the Erlang B formula — is the probability that an
arriving customer finds all servers busy (at state m):

    P_B = π_m = (ρ^m/m!) / ∑_{j=0}^m ρ^j/j!

Throughput:

    E[R] = λ(1 − π_m) = λ ( 1 − (ρ^m/m!)/∑_{j=0}^m ρ^j/j! ) < λ
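Erlang B can be evaluated directly from the formula, or — for large m, where the factorials overflow — via the standard recursion B(0) = 1, B(k) = ρB(k−1)/(k + ρB(k−1)); a sketch of both (helper names are mine):

```python
from math import factorial

# Sketch: Erlang B blocking probability for M/M/m/m, two equivalent ways.
def erlang_b_direct(rho: float, m: int) -> float:
    return (rho**m / factorial(m)) / sum(rho**j / factorial(j)
                                         for j in range(m + 1))

def erlang_b_recursive(rho: float, m: int) -> float:
    b = 1.0                          # B(0) = 1
    for k in range(1, m + 1):
        b = rho * b / (k + rho * b)  # numerically stable update
    return b

print(round(erlang_b_direct(5.0, 8), 4), round(erlang_b_recursive(5.0, 8), 4))
```

The recursion avoids computing m! explicitly and is the usual choice in telephony sizing tools.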
M/M/1//N – Closed Queueing System

Meaning: exponentially distributed service times, one server, and a fixed
population of N customers. Each customer outside the queue "thinks" for an
exponential time with rate λ before submitting a new request, so when j
customers are at the server the arrival rate is (N − j)λ; the birth–death
rates are λ_j = (N − j)λ and μ_j = μ.

Using the birth–death result, we obtain

    π_j = (N!/(N − j)!) (λ/μ)^j π_0,   j = 1, 2, …, N

    π_0 = [ ∑_{j=0}^N (N!/(N − j)!) (λ/μ)^j ]^(−1)
M/M/1//N – Closed Queueing System

Response time: the time from the moment the customer enters the queue until it
has received service.

For the queue, using Little's Law with throughput μ(1 − π_0), we get

    E[X] = μ(1 − π_0) E[S]

In the "thinking" part, Little's Law gives the average number of thinking
customers as (throughput) × (mean think time):

    N − E[X] = μ(1 − π_0) (1/λ)

Therefore

    E[S] = (N − μ(1 − π_0)/λ) / (μ(1 − π_0)) = N/(μ(1 − π_0)) − 1/λ
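The closed-system formulas compose into a few lines of code; a sketch (helper name `closed_mm1` is mine) computing π_0, the throughput, and the response time:

```python
from math import factorial

# Sketch: M/M/1//N (machine-repair style) closed system — π0 from the
# birth–death product form, throughput μ(1 − π0), and
# E[S] = N/(μ(1 − π0)) − 1/λ.
def closed_mm1(N: int, lam: float, mu: float):
    weights = [factorial(N) // factorial(N - j) * (lam / mu) ** j
               for j in range(N + 1)]
    pi0 = 1.0 / sum(weights)
    throughput = mu * (1.0 - pi0)
    response = N / throughput - 1.0 / lam
    return pi0, throughput, response

pi0, thr, resp = closed_mm1(5, 0.2, 1.0)
print(round(pi0, 3), round(thr, 3), round(resp, 3))
```

For N = 1 the customer never queues, and the formula collapses to E[S] = 1/μ, a useful sanity check.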