
An Introduction to Markov Chains and Queueing Theory

John M. Noble, Matematiska institutionen, LiTH, Linköpings universitet, 581 83 LINKÖPING, Sweden


Contents

1 Introduction to Queueing Models

2 Generating Functions and Conditional Expectation
   2.1 Conditional Expectation and the Probability Generating Function
   2.2 The Galton Watson Process
   2.3 The Moment Generating Function and Characteristic Function

3 The Poisson Process
   3.1 The Probability Function of a Poisson Process
   3.2 First Arrival Time
   3.3 Breakdown of a Poisson Process into Component Parts
   3.4 Generalizations of the Poisson Process
   3.5 Exercises

4 Discrete Time Markov Processes
   4.1 Discrete Time Markov Chains
   4.2 The n-step Transition Probabilities
   4.3 Classification of States of a Discrete Markov Chain
   4.4 Expected Time Spent in Transient States
   4.5 Steady State Probabilities
   4.6 The Gambler's Ruin Problem
   4.7 Time Reversibility
   4.8 Exercises

5 Continuous Time Markov Chains
   5.1 Transition Rates
   5.2 Time to Reach a State
   5.3 Time Reversed Markov Chains
   5.4 Exercises

6 Markov Queueing Models
   6.1 The M/M/1 Queue
   6.2 Distributions Associated with M/M/1
   6.3 The M/M/1/K Queue - Finite Capacity
   6.4 M/M/c Systems
   6.5 The M/M/c/c Queue
   6.6 The M/M/∞ Queueing System
   6.7 Exercises

7 Markovian Queues in Equilibrium
   7.1 The E_r/M/1 Queue
   7.2 Bulk Arrival Systems
   7.3 Series Parallel Stages: Generalisations
   7.4 Exercises

8 Networks of Queueing Systems
   8.1 Little's Formula
   8.2 Networks of Queueing Systems
      8.2.1 Jackson Networks

9 M/G/1 Queueing Systems
   9.1 Residual Service Time
   9.2 Priority Service Disciplines
   9.3 Exercises

10 The Pollaczek-Khinchin (P-K) Transform Equation
   10.1 Generating Function for the Number of Customers
   10.2 Distribution of the Waiting Time
   10.3 Exercises

Preface
These notes were developed for a course introducing Markov chains and queueing theory, intended for students who have not had courses in measure theory or functional analysis. The course evolved during the years 2004-2008. Many thanks to Jörg-Uwe Löbus for pointing out a substantial number of corrections. All the remaining typographical and more serious errors are entirely my own responsibility.

The selection of material in the book by Sheldon Ross [4] would have been appropriate if it had been approached in the right way. Unfortunately, the introduction to that book contains the false statement: `It is generally felt that there are two approaches to the study of probability theory. One approach is heuristic and non-rigorous and attempts to develop in the student an intuitive feel for the subject which enables him to think probabilistically. The other approach attempts a rigorous development of probability by using the tools of measure theory. It is the first approach that is employed in this text.'

This statement is absolute rubbish on several levels. It may be true that such a false perception has crept into the teaching, but that has nothing to do with preparing the student to think probabilistically; rather, it is an approach taken by universities that require good student numbers to remedy their financial difficulties. The principles are based on applied psychology: give the students something that they think they can understand so that the course gets good numbers and good evaluations; the students believe that the course is of value, regardless of whether or not this is the case. There is also a false assumption that probability is only rigorous if it uses measure theory. One may question whether the work by probabilists such as Gauss, Laplace and Poisson, whose work all pre-dated the construction of measure theory, was sufficiently rigorous for Sheldon Ross. They limited themselves to settings where work could be carried out rigorously without measure theory; there was much rigorous probability before Kolmogorov established the axioms that required countable additivity.

Intuition and mathematical thinking (probabilistic thinking, and also any valid approach to any other mathematical discipline) derive from a full, rigorous understanding of concrete examples. It is this rigorous understanding of more straightforward results and their proofs that develops the intuition and probabilistic thinking that enable more general results and their proofs to be established. The material is therefore presented in a way that only uses convergence in probability and the weak law of large numbers. Almost sure convergence is not required and the definitions have been altered accordingly.

The material on the Poisson process, discrete time Markov chains and continuous time Markov chains is mostly from the book by Sheldon Ross [4]. An alternative (and better) treatment of most of the material is found in the book by Samuel Karlin [1]. The material on queueing models was taken from the two volumes by Leonard Kleinrock [2] and [3], books that still remain classics many years after they were written. The `hippie at the motorway service station' example used to describe residual service times was taken from Leonard Kleinrock.

John M. Noble
Linköping, March 2010


Chapter 1

Introduction to Queueing Models


There are four components of a queueing system, namely:

1. the arrival pattern (for simple models, this will be a Poisson process),

2. the service time distribution (often assumed to be exponential),

3. the number of servers, and

4. the queueing discipline (which jobs get priority).

Examples of queueing disciplines are first come first served (fcfs); last come first served (lcfs); and priority, where the arrivals are ranked according to priority and, when a server becomes available, the first customer in the highest priority queue is dealt with. An example of a priority discipline is shortest job first (sjf), which may be used when the job lengths are known in advance. Often the job lengths are not known in advance. In this case, the round robin is often used. With the round robin, each job is given a quantum of processing time and then put to the back of the queue until it comes up again. Most of the course will be concerned with the first three aspects.

The queueing system has one or more servers. The ith customer arrives at the system seeking a service that will require S_i seconds of service time from one server. If all servers are busy, he will join a queue where he will remain until a server becomes available. Sometimes, only a limited number of waiting spaces are available, so that customers who arrive when there is no room will be turned away. Such customers are called blocked and we will denote the rate at which they are turned away by λ_b. The queueing discipline affects the waiting time W_i which elapses from the arrival time of the ith customer until he enters service. The total delay T_i of the ith customer in the system is the sum of his waiting and service times:

    T_i = W_i + S_i.    (1.1)

From the customer's point of view, the performance of the system is given by the statistics of the waiting time W, the total delay T and the proportion of customers blocked, λ_b/λ. From the point of view of resource allocation, the performance of the system is measured by the proportion of time that each server is utilized and the rate at which customers are serviced by the system, λ_d = λ − λ_b. These quantities are a function of N(t), the number of customers in the system at time t, and N_q(t), the number of customers in the queue at time t.

The Components of a Queueing System


The notation a/b/m/K is used to describe a queueing system. Its meaning is as follows.

Arrival process: a specifies the arrival process. If a is given by M, then the arrival process is Poisson and the interarrival times (lengths of time between arrivals) are i.i.d. exponential random variables.

Service times: b specifies the service times. If b is given by M, then the service times are i.i.d. exponential. If b is given by D, then the service times are constant, i.e. deterministic. If b is given by G, then the service times are i.i.d. with a general distribution.

Servers and capacity: m specifies the number of servers and K specifies the maximum number of customers allowed in the system at any one time. If K is missing, it is assumed that there is no restriction.
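The a/b/m/K notation is easy to manipulate mechanically. The following short sketch (not from the notes; the function name and return convention are invented purely for illustration) splits a descriptor into its components, treating a missing K as unlimited capacity:

```python
# Minimal sketch: splitting Kendall a/b/m/K notation into its components.
def parse_kendall(spec):
    """Split a queue descriptor such as 'M/M/1' or 'M/D/2/10' into
    (arrival process, service distribution, servers, capacity)."""
    parts = spec.split("/")
    if len(parts) not in (3, 4):
        raise ValueError("expected a/b/m or a/b/m/K")
    arrival, service, servers = parts[0], parts[1], parts[2]
    capacity = parts[3] if len(parts) == 4 else None  # None = no restriction
    return arrival, service, servers, capacity

print(parse_kendall("M/M/1"))     # ('M', 'M', '1', None)
print(parse_kendall("M/D/2/10"))  # ('M', 'D', '2', '10')
```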

Chapter 2

Generating Functions and Conditional Expectation


Generating functions, that is, the characteristic function, probability generating function and moment generating function, feature prominently in the mathematical analysis of queueing systems.

2.1 Conditional Expectation and the Probability Generating Function


This section presents revision from the basic probability course. The example of the Galton Watson process is new.

Let (X, Y) be two discrete random variables with joint probability function p_{X,Y}(x, y), defined as p_{X,Y}(x, y) = p(X = x, Y = y). Recall that the conditional probability of X given Y = y is defined as

    p_{X|Y}(x|y) := p(X = x|Y = y) = p_{X,Y}(x, y) / p_Y(y).

The conditional expectation of X given Y = y is defined as

    E[X|Y = y] = Σ_x x p_{X|Y}(x|y).

Conditional expectation satisfies the following important property, provided E[|X|] < +∞:

Theorem 2.1.

    E[E[X|Y]] = E[X].

Proof. The proof, for discrete variables or continuous variables, is very easy. For discrete variables,

    E[E[X|Y]] = Σ_y E[X|Y = y] p_Y(y)
              = Σ_y Σ_x x p_{X|Y}(x|y) p_Y(y)
              = Σ_x x Σ_y (p_{X,Y}(x, y)/p_Y(y)) p_Y(y)
              = Σ_x x Σ_y p_{X,Y}(x, y)
              = Σ_x x p_X(x) = E[X].

Since E[|X|] < +∞, the sum is absolutely convergent, so the order of summation can be exchanged.

This result is extremely useful. The following relatively straightforward, basic example illustrates the way in which it may be used for a very large class of problems.

Example. Let (X_j) be independent, identically distributed Bernoulli trials, each with parameter p. Let Z = inf{n : X_n = 1} (i.e. the number of trials necessary until the first success is obtained). Compute E[Z].

This can be computed using the following straightforward conditional expectation argument:

    E[Z] = E[E[Z|X_1]] = E[Z|X_1 = 1] p_{X_1}(1) + E[Z|X_1 = 0] p_{X_1}(0) = p + (1 + E[Z])(1 − p),

so that pE[Z] = 1, giving

    E[Z] = 1/p.

The point is that if X_1 = 1, then the first success has been obtained, so that Z = 1. If X_1 = 0, then one starts the whole process again, one step later.
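The answer E[Z] = 1/p is easy to check by simulation (an illustrative sketch, not part of the notes; the value of p is arbitrary):

```python
# Simulate the number of Bernoulli(p) trials until the first success and
# compare the sample mean with the theoretical value 1/p.
import random

def trials_until_success(p, rng):
    n = 1
    while rng.random() >= p:  # failure occurs with probability 1 - p
        n += 1
    return n

rng = random.Random(0)
p = 0.25
samples = [trials_until_success(p, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to 1/p = 4
```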

The Probability Generating Function. For non negative integer valued discrete random variables, the probability generating function is defined as

    G_X(s) := Σ_{x=0}^∞ s^x p_X(x).

It is useful to think of s as a complex number. The probability generating function is therefore simply the z-transform, discussed in the course on transform theory, and the series is clearly absolutely convergent for |s| ≤ 1. The probability generating function is so called because it generates the probabilities, in the sense that if one has the generating function, then the probabilities are the corresponding coefficients in the power series expansion. The probability generating function has some obvious properties:

- G_X(1) = Σ_x p_X(x) = 1.

- G_X(0) = p_X(0).

- G'_X(1) = E[X].

- G''_X(1) + G'_X(1) − (G'_X(1))² = Var(X).

- If X_1, ..., X_n are independent random variables, then

      G_{X_1 + ... + X_n}(s) = Π_{j=1}^n G_{X_j}(s).

- For any non negative integer valued random variable, G'_X(s) ≥ 0 and G''_X(s) ≥ 0 for 0 ≤ s ≤ 1.

The formulae for the expected value and variance are easily seen by differentiating:

    G'_X(s) = Σ_{x=1}^∞ x s^{x−1} p_X(x);   G'_X(1) = Σ_{x=1}^∞ x p_X(x) = E[X];

    G''_X(s) = Σ_{x=2}^∞ x(x−1) s^{x−2} p_X(x);   G''_X(1) = Σ_{x=2}^∞ x(x−1) p_X(x) = E[X²] − E[X].

The formula that the generating function of a sum of independent random variables is equal to the product of the generating functions arises simply from the fact that if Y_1, ..., Y_n are independent random variables such that their expectations are well defined, then

    E[Π_{j=1}^n Y_j] = Π_{j=1}^n E[Y_j].

This is used in the following way:

    G_{X_1 + ... + X_n}(s) = E[s^{X_1 + ... + X_n}] = E[Π_{j=1}^n s^{X_j}] = Π_{j=1}^n E[s^{X_j}] = Π_{j=1}^n G_{X_j}(s).
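These identities are easy to verify numerically. The sketch below (not from the notes; the three-point distribution is chosen purely for illustration) checks G(1) = 1, G'(1) = E[X] and the variance formula for a small concrete distribution:

```python
# Check the pgf identities for p_X(0) = 0.2, p_X(1) = 0.5, p_X(2) = 0.3.
p = [0.2, 0.5, 0.3]

G  = lambda s: sum(pk * s**k for k, pk in enumerate(p))
G1 = lambda s: sum(k * pk * s**(k - 1) for k, pk in enumerate(p) if k >= 1)
G2 = lambda s: sum(k * (k - 1) * pk * s**(k - 2) for k, pk in enumerate(p) if k >= 2)

mean = G1(1.0)                         # G'(1) = E[X]
var = G2(1.0) + G1(1.0) - G1(1.0)**2   # G''(1) + G'(1) - G'(1)^2 = Var(X)
print(G(1.0), mean, round(var, 2))     # 1.0, 1.1 and 0.49
```

Computing E[X] directly gives 0·0.2 + 1·0.5 + 2·0.3 = 1.1 and E[X²] − E[X]² = 1.7 − 1.21 = 0.49, in agreement with the pgf formulae.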

Problems involving differentiation are often easier than problems involving sums. In many examples, the probability generating function is difficult to compute, but many queueing systems involve Poisson arrivals, and the generating function for the Poisson distribution has a very convenient form. Recall that if X ∼ Poisson(λ), then X has probability function

    p_X(k) = (λ^k/k!) e^{−λ},   k = 0, 1, 2, ....

From this, it is easy to compute that

    G_X(s) = e^{λ(s−1)},

from which

    E[X] = λ   and   Var(X) = λ.

Furthermore, if X_j ∼ Poisson(λ_j) for j = 1, ..., n and X_1, ..., X_n are independent, then

    Σ_{j=1}^n X_j ∼ Poisson(Σ_{j=1}^n λ_j).

These results are left as an exercise.

The probability generating function is very useful for composition of variables. Consider bulk arrivals at a service station. That is, bus loads of customers arrive according to a Poisson process; each bus contains a random number of customers; what is the overall arrival distribution? Suppose that X is the number of loads that arrive, and that the numbers of customers in the loads may be considered independent and identically distributed, with the same distribution as a random variable Y. Let Z be the total number of customers. Then the generating function of Z has a particularly convenient expression in terms of the generating functions for X and Y.

Theorem 2.2. Let (Y_j)_{j=1}^∞ be independent identically distributed non negative integer valued random variables, with the same distribution as Y, and let X be a non negative integer valued random variable, independent of (Y_j)_{j=1}^∞. Let Z = Σ_{j=1}^X Y_j. Then

    G_Z(s) = G_X(G_Y(s)).

Proof. Let Y_j denote the number of customers in load j. Then

    G_Z(s) = E[s^Z] = Σ_{n=0}^∞ E[s^Z | X = n] p_X(n)
           = Σ_{n=0}^∞ E[s^{Y_1 + ... + Y_n} | X = n] p_X(n)
           = Σ_{n=0}^∞ E[s^{Y_1 + ... + Y_n}] p_X(n)
           = Σ_{n=0}^∞ (G_Y(s))^n p_X(n) = G_X(G_Y(s)).
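Theorem 2.2 can be verified exactly on a small example (a sketch, not from the notes; the two distributions are chosen purely for illustration), by comparing G_X(G_Y(s)) with the pgf of Z obtained by direct enumeration:

```python
# Deterministic check of G_Z(s) = G_X(G_Y(s)) for small illustrative
# distributions: X is the number of bus loads, Y the customers per load.
p_X = [0.3, 0.5, 0.2]      # number of loads: 0, 1 or 2
p_Y = {1: 0.6, 2: 0.4}     # customers per load

def pgf(dist, s):
    return sum(p * s**k for k, p in dist.items())

# Distribution of Z = Y_1 + ... + Y_X by direct enumeration.
p_Z = {0: p_X[0]}
for y1, q1 in p_Y.items():
    p_Z[y1] = p_Z.get(y1, 0.0) + p_X[1] * q1
    for y2, q2 in p_Y.items():
        p_Z[y1 + y2] = p_Z.get(y1 + y2, 0.0) + p_X[2] * q1 * q2

s = 0.7
lhs = pgf(p_Z, s)                                 # pgf of Z, directly
rhs = pgf(dict(enumerate(p_X)), pgf(p_Y, s))      # G_X(G_Y(s))
print(round(lhs, 6), round(rhs, 6))  # the two printed values agree
```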

2.2 The Galton Watson Process


The Galton Watson process is a classic example that illustrates the use of the probability generating function. Consider a population which evolves in a simple way: each individual lives for a single unit of time. At the end of the time unit, it dies and is replaced by k individuals, with probability p_k, where Σ_{k=0}^∞ p_k = 1; these probabilities are the same for each individual, and each population member reproduces independently of the others. Let Z_n denote the total number of individuals in generation n. The problem is to establish whether the population dies out with certainty, or whether it survives for all time with positive probability.

Let G_n(s) = E[s^{Z_n}], the probability generating function for generation n, and let φ(s) = Σ_{k=0}^∞ s^k p_k, the probability generating function for an individual. Note that p(Z_n = 0) = G_n(0). This is the probability that the population has died out at, or before, generation n. Let Z_0 = 1 (start with one individual) and let X^{n,k} denote the number of offspring of individual k from generation n. To obtain an expression for G_{n+1} in terms of G_n, the trick is to condition on the number of offspring of the individual from generation zero:

    G_{n+1}(s) = E[s^{Z_{n+1}}] = E[E[s^{Z_{n+1}} | X^{0,1}]] = Σ_{k=0}^∞ E[s^{Z_{n+1}} | X^{0,1} = k] p_k.

Now note that, conditioned on X^{0,1} = k, Z_{n+1} = Σ_{j=1}^k Z_n^{(j)}, the sum of k independent copies of Z_n. It follows that

    G_{n+1}(s) = Σ_{k=0}^∞ (G_n(s))^k p_k = φ(G_n(s)).    (2.1)

Now note that G_n(0) = p(Z_n = 0) is non decreasing in n (the probability that the population is still alive at generation n + 1 is clearly smaller than the probability that it is alive at generation n). Define

    η := lim_{n→+∞} G_n(0).    (2.2)

This limit exists, because the sequence is non decreasing and bounded above by 1. The value η = lim_{n→+∞} p(Z_n = 0) is known as the extinction probability. It follows from equation (2.1) that

    η = φ(η).

Now consider the function φ(s) for 0 ≤ s ≤ 1. Note that φ(1) = 1 and φ'(1) = Σ_k k p_k, the expected number of offspring of an individual. Furthermore, φ''(s) ≥ 0 for all 0 ≤ s ≤ 1.

It is now clear that if φ'(1) ≤ 1, then φ(s) > s for all 0 ≤ s < 1, because φ is strictly convex and φ(1) = 1. It follows that the only solution to η = φ(η) is η = 1. (Note: φ is always convex, and it is strictly convex if there is an element of `randomness'; i.e. for any distribution such that an individual is not replaced by a deterministic number of offspring.) It follows that if, on average, the expected replacement is less than or equal to 1, the process eventually dies out.

If φ'(1) > 1, then (using the fact that φ(0) = p_0 ≥ 0), together with the convexity of φ, it follows that there exist exactly two solutions to the equation η = φ(η) in the region [0, 1]; namely, η_1 = 1 and η_2 ∈ [0, 1). A little further analysis (left as an exercise) shows that η_2 is the limiting value for the iterative scheme r_{n+1} = φ(r_n) started from r_0 = 0. The population therefore survives for all time, with positive probability, if and only if the expected replacement is strictly greater than 1.
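The iterative scheme r_{n+1} = φ(r_n) is easy to carry out numerically. The following sketch (not from the notes; the offspring distribution is chosen purely for illustration) finds the extinction probability for a supercritical case with mean offspring 1.25:

```python
# Iterate r_{n+1} = phi(r_n) from r_0 = 0 to find the extinction probability.
def phi(s, p):
    # Offspring pgf: phi(s) = sum_k p_k s^k.
    return sum(pk * s**k for k, pk in enumerate(p))

def extinction_probability(p, iterations=200):
    r = 0.0
    for _ in range(iterations):
        r = phi(r, p)
    return r

# Offspring distribution p_0 = 0.25, p_1 = 0.25, p_2 = 0.5:
# mean offspring = 0.25 + 2 * 0.5 = 1.25 > 1, so extinction is not certain.
p = [0.25, 0.25, 0.5]
eta = extinction_probability(p)
print(round(eta, 4))  # → 0.5; the other root of r = phi(r) is 1
```

For this distribution the fixed point equation 0.5η² − 0.75η + 0.25 = 0 has roots 0.5 and 1, and the iteration converges to the smaller root, as the text describes.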

2.3 The Moment Generating Function and Characteristic Function

For a continuous random variable T, with probability density function f_T, the characteristic function is defined as

    φ_T(p) = E[e^{ipT}] = ∫_{−∞}^{+∞} e^{ipx} f_T(x) dx.

The moment generating function is defined as

    M_T(p) = E[e^{pT}] = ∫_{−∞}^{+∞} e^{px} f_T(x) dx.

These should look familiar from previous exposure to transform theory. The characteristic function is a Fourier transform of the density function, while the moment generating function is (up to a change of sign in p) the Laplace transform of the density function. The advantage of the characteristic function is that it always exists for real p; the advantage of the Laplace transform is that no complex numbers are involved.

One of the important transforms will be that of the exponential distribution. Let T ∼ Exp(λ); then

    M_T(p) = ∫_0^∞ λe^{−λx} e^{px} dx = λ/(λ − p) for p ∈ (−∞, λ),   and   M_T(p) = +∞ for p ∈ [λ, +∞).

Properties of the Moment Generating Function. The moment generating function satisfies the following properties, which are straightforward to prove and therefore left as an exercise.

- If X_1, ..., X_n are independent random variables, then

      M_{X_1 + ... + X_n}(p) = Π_{j=1}^n M_{X_j}(p).

- Provided E[|X|^m] < +∞,

      E[X^m] = (d^m/dp^m) M_X(p) |_{p=0},   m ∈ N.
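As a small numerical illustration (not part of the notes; the parameter value is arbitrary), the closed form M_T(p) = λ/(λ − p) for T ∼ Exp(λ) can be differentiated numerically at p = 0 to recover the first two moments E[T] = 1/λ and E[T²] = 2/λ²:

```python
# Recover moments of Exp(lam) from its MGF by finite differences at p = 0.
lam = 2.0
M = lambda p: lam / (lam - p)  # MGF of Exp(lam), valid for p < lam

h = 1e-5
first_moment = (M(h) - M(-h)) / (2 * h)            # ≈ M'(0) = E[T] = 1/lam
second_moment = (M(h) - 2 * M(0) + M(-h)) / h**2   # ≈ M''(0) = E[T^2] = 2/lam^2
print(round(first_moment, 4), round(second_moment, 4))  # 0.5 0.5
```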

Chapter 3

The Poisson Process


The Poisson Process provides the basis for elementary Markov chains and queueing models.

Definition 3.1 (Poisson Process). Consider a non negative integer valued random process {N(t), t ≥ 0}, non decreasing, which satisfies the following:

1. lim_{h→0} (1/h) p(N(t + h) − N(t) = 1) = λ for some λ > 0 which is independent of t,

2. lim_{h→0} (1/h) p(N(t + h) − N(t) ≥ 2) = 0,

3. for all u < v < s < t, N(t) − N(s) and N(v) − N(u) are independent random variables.

A process which satisfies these assumptions is known as a Poisson process with parameter λ.

An important consequence of temporal homogeneity (that λ is independent of t) is that N(t + r) − N(t) and N(s + r) − N(s) have the same distribution for all r ≥ 0, s ≥ 0 and t ≥ 0.

3.1 The Probability Function of a Poisson Process


Suppose that a Poisson process is observed starting at time 0 until a fixed time t. Then the number of events that occur in the interval [0, t] (for fixed t) is a random variable N(t).

Theorem 3.2. Let N be a Poisson process with parameter λ. Then N(t) has a Poisson distribution with parameter λt.

Proof. The most convenient way of proving this is to use the probability generating function. Set

    G(t, s) = E[s^{N(t)}];

then

    ∂G/∂t (t, s) = lim_{h→0} (1/h) E[s^{N(t)} (s^{N(t+h)−N(t)} − 1)]
                 = lim_{h→0} (1/h) E[s^{N(t)}] (E[s^{N(t+h)−N(t)}] − 1),

using the independence of the increments. Now, for 0 ≤ s ≤ 1,

    (1/h) | E[s^{N(t+h)−N(t)}] − p(N(t+h) − N(t) = 0) − s p(N(t+h) − N(t) = 1) |
        = (1/h) | Σ_{k=2}^∞ s^k p(N(t+h) − N(t) = k) |
        ≤ (1/h) Σ_{k=2}^∞ p(N(t+h) − N(t) = k)
        = (1/h) p(N(t+h) − N(t) ≥ 2) → 0 as h → 0.

Furthermore,

    lim_{h→0} (1/h) (1 − p(N(t+h) − N(t) = 0)) = lim_{h→0} (1/h) (p(N(t+h) − N(t) = 1) + p(N(t+h) − N(t) ≥ 2)) = λ.

It follows that, for 0 ≤ s ≤ 1,

    lim_{h→0} (1/h) (E[s^{N(t+h)−N(t)}] − 1) = λ(s − 1),

from which

    ∂G/∂t (t, s) = λ(s − 1) G(t, s),   G(0, s) = 1.

This gives

    G(t, s) = e^{λt(s−1)}.

This is the probability generating function of a Poisson random variable with expectation λt.

Properties of the Poisson Process. Recall that, from the basic properties of a Poisson distribution with parameter λt,

    E[N(t)] = λt   and   Var(N(t)) = λt.

When the arrivals at a queueing system are modelled by a Poisson process, the parameter λ may be thought of as the expected number of arrivals in unit time and is referred to as the arrival rate.
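A simulation sketch (not from the notes; all parameter values are illustrative) of Theorem 3.2: a Poisson process can be generated by accumulating i.i.d. Exp(λ) interarrival times (Section 3.2 makes this connection precise), and the sample mean and variance of N(t) should then both be close to λt:

```python
# Generate N(t) by summing exponential interarrival times and compare the
# sample mean and variance of the counts with the theoretical value lam * t.
import math, random

def count_arrivals(lam, t, rng):
    n, clock = 0, 0.0
    while True:
        clock += -math.log(1.0 - rng.random()) / lam  # Exp(lam) interarrival
        if clock > t:
            return n
        n += 1

rng = random.Random(2)
lam, t = 3.0, 2.0
counts = [count_arrivals(lam, t, rng) for _ in range(50_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean)**2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))  # both close to lam * t = 6
```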

3.2 First Arrival Time

Following the definition of the Poisson process (Definition 3.1), the first two quantities to be examined are N(t), the number of events that occur in the time interval (0, t], and the random variable T, the time until the first event occurs. The distribution of T may be computed quite easily:

    p(T > t) = p(time to the first event is greater than t) = p(no events in an interval of length t) = P_0(t) = exp{−λt}.

That is, T has an exponential distribution with parameter λ:

    p(T ≤ t) = 1 − exp{−λt}.

It follows that the mean value of T is given by

    E[T] = 1/λ.

Memoryless Property of the Exponential Distribution. A useful property of the exponential distribution is the memoryless property. This is very easy to prove.

Proposition 3.3. Let T be distributed according to an exponential law. Then

    p(T > r + a | T > a) = p(T > r).

In other words, conditioned on the fact that the first arrival has not happened by time a, the probability of it occurring in the time interval (a, a + r] is the same as the unconditioned probability of an arrival occurring in the time interval (0, r].

Proof. Let λ be the parameter of the exponential distribution. Then

    p(T > r + a | T > a) = p({T > r + a} ∩ {T > a}) / p(T > a) = p(T > r + a) / p(T > a) = e^{−λ(r+a)} / e^{−λa} = e^{−λr},

which is the desired probability.

The converse of this also holds:

Proposition 3.4 (Converse). If a random variable T satisfies the memoryless property

    p(T > r + a | T > a) = p(T > r),

then T is exponentially distributed.

Proof.

    p(T > r) = p(T > r + a | T > a) = p(T > r + a, T > a) / p(T > a) = p(T > r + a) / p(T > a),

so that

    p(T > r + a) = p(T > r) p(T > a).

Set f(x) = p(T > x). Note that f(0) = 1, f is decreasing, and f satisfies f(x + y) = f(x)f(y). It follows that log f(x + y) = log f(x) + log f(y). Therefore

    (d/dx) log f(x) = lim_{h→0} (log f(x + h) − log f(x))/h = lim_{h→0} log f(h)/h = c,

where c does not depend on x, so that

    log f(x) = log f(0) + cx,

and therefore

    f(x) = f(0) e^{cx}.

Since f(0) = 1 and f is decreasing, it follows that f(x) = exp{−λx} for some parameter λ > 0.
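The memoryless property can also be seen empirically (an illustrative sketch, not part of the notes; parameter values are arbitrary): conditioned on T > a, the overshoot T − a of an Exp(λ) variable is again Exp(λ), so its sample mean should be close to 1/λ:

```python
# Among Exp(lam) samples exceeding a, the overshoot T - a should again be
# Exp(lam) by memorylessness, with mean 1/lam.
import math, random

rng = random.Random(3)
lam, a = 1.5, 1.0
samples = [-math.log(1.0 - rng.random()) / lam for _ in range(500_000)]
overshoots = [t - a for t in samples if t > a]
mean_overshoot = sum(overshoots) / len(overshoots)
print(round(mean_overshoot, 3))  # close to 1/lam ≈ 0.667
```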

Definition 3.5. Let X be a nonnegative random variable with probability function F_X (i.e. F_X(t) = p(X ≤ t)) and density function f_X. The failure (or hazard) rate function r_X(t) is defined as

    r_X(t) = f_X(t) / (1 − F_X(t)).

The following working indicates how the hazard function may be interpreted:

    (1/h) p(t ≤ X ≤ t + h | X > t) = (1/h) p(t ≤ X ≤ t + h, X > t) / p(X > t)
        = (1/h) p(t < X ≤ t + h) / p(X > t)
        = ((1/h) ∫_t^{t+h} f_X(s) ds) / (1 − F_X(t))
        → r_X(t) as h → 0.

That is, r_X(t) represents the conditional probability density for the failure time of a process that is already t years old. For the exponential distribution,

    r_X(t) = f_X(t) / (1 − F_X(t)) = λe^{−λt} / e^{−λt} = λ.

It does not change with t.

Lemma 3.6. Let X be a non negative valued random variable. The probability function F_X (i.e. F_X(t) = p(X ≤ t)) may be expressed as

    F_X(t) = 1 − exp{−∫_0^t r_X(s) ds}.

Proof.

    r_X(t) = (1/(1 − F_X(t))) dF_X(t)/dt,

yielding

    log(1 − F_X(t)) = −∫_0^t r_X(s) ds + k.

Since F_X(0) = 0, it follows that k = 0, so that

    F_X(t) = 1 − exp{−∫_0^t r_X(s) ds}.

Further Properties of the Exponential Distribution. The following will be used later.

Lemma 3.7. Let X_1, ..., X_k be k i.i.d. random variables, each exponentially distributed with parameter λ. Then Y := X_1 + ... + X_k has a Γ(k, λ) distribution. In other words, letting f_Y denote the density function of Y,

    f_Y(x) = λ (λx)^{k−1} e^{−λx} / (k − 1)!.

Proof. This may be proved by induction. For k = 1,

    f_Y(x) = λe^{−λx}.

Assume the result is true for Z = X_1 + ... + X_{k−1}. Then Y = Z + X_k and

    f_Y(x) = ∫_0^x f_Z(y) f_{X_k}(x − y) dy
           = ∫_0^x (λ(λy)^{k−2}/(k − 2)!) e^{−λy} λe^{−λ(x−y)} dy
           = (λ(λx)^{k−1}/(k − 1)!) exp{−λx}.
Another property is the following. Let X_1 ∼ Exp(λ_1) and X_2 ∼ Exp(λ_2) be independent random variables. Then

    p(X_1 < X_2) = λ_1 / (λ_1 + λ_2).    (3.1)

This is left as an exercise and a solution is provided in the exercise set.

Theorem 3.8. Let T_1, ..., T_n be independent exponential random variables with parameters λ_1, ..., λ_n respectively and let T = inf(T_1, ..., T_n). Then T is exponentially distributed with parameter Σ_{j=1}^n λ_j.

Proof. Note that the event {inf(T_1, ..., T_n) > t} is equivalent to ∩_{j=1}^n {T_j > t}, so that

    p(inf(T_1, ..., T_n) > t) = p(∩_{j=1}^n {T_j > t}) = Π_{j=1}^n p(T_j > t) = Π_{j=1}^n exp{−λ_j t} = exp{−(Σ_{j=1}^n λ_j) t},

so that T ∼ Exp(λ_1 + ... + λ_n) as required.
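Equation (3.1) and Theorem 3.8 are easy to check by simulation (an illustrative sketch, not from the notes; parameter values are arbitrary):

```python
# For independent X1 ~ Exp(l1), X2 ~ Exp(l2): p(X1 < X2) should be
# l1/(l1 + l2), and min(X1, X2) should have mean 1/(l1 + l2).
import math, random

def exp_draw(lam, rng):
    return -math.log(1.0 - rng.random()) / lam

rng = random.Random(4)
l1, l2, n = 2.0, 3.0, 200_000
wins, total_min = 0, 0.0
for _ in range(n):
    x1, x2 = exp_draw(l1, rng), exp_draw(l2, rng)
    wins += x1 < x2
    total_min += min(x1, x2)
print(round(wins / n, 3), round(total_min / n, 3))  # ≈ 2/5 = 0.4 and 1/5 = 0.2
```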

Pooled Poisson Process: Waiting Times


Suppose you are outside a bank waiting for the doors to open. Two people have arrived before you, so that you are third in line for service. When the doors eventually open, you find that only two tellers are on duty, so that you will have to wait for service. Two queueing procedures may be adopted. In the first, there are two lines, one for each teller. An arriving customer simply chooses a teller and joins the queue at that desk until the teller is free. The other is to form a central queue; when either teller is free, the first person in the queue goes forward. Suppose it is known from past experience that the service time of teller i is an exponential random variable T_i, with parameter λ_i. In the first situation, you will have to wait for an exponential time with the parameter λ_i of the teller whose line you join. In the second situation, Theorem 3.8 shows that the time you have to wait for service is an exponential random variable with parameter λ_1 + λ_2.

Example. Consider a queue with a single server. The service time is exponential with rate μ; S ∼ Exp(μ). Customers arrive and stand in line either until they are served, or else for an exponential time with parameter θ, whichever is shorter. Different customers have independent `tolerance' times. If they are not served by that time, they leave the system. Suppose that someone is being served and consider the person who is nth in line.

1. Find P_n, the probability that this customer reaches service before he leaves the system.

2. Find W_n, the conditional expected amount of time that a customer spends standing in line given that the customer is served.

Solution. The customer who is nth in line leaves if his `tolerance' time is less than the tolerance times of the others in line and less than the service time of the customer in service. Otherwise, he moves up to (n−1)th in line. Once he is (n−1)th in line, we use the memoryless property of the exponential distributions and start again. Let X_i denote the tolerance time of the ith customer in line and let S denote the service time of the customer in service. Then

    P_n = P_{n−1} p(X_n > inf(X_1, ..., X_{n−1}, S)),   P_0 = 1.

Note that, by the above,

    inf(X_1, ..., X_{n−1}, S) ∼ Exp((n − 1)θ + μ).

Since X_n ∼ Exp(θ) is independent of inf(X_1, ..., X_{n−1}, S), it follows (by equation (3.1)) that

    p(X_n > inf(X_1, ..., X_{n−1}, S)) = ((n − 1)θ + μ) / (nθ + μ).

It follows that

    P_n = P_{n−1} ((n − 1)θ + μ) / (nθ + μ),

and the product telescopes, yielding

    P_n = μ / (nθ + μ).

To compute W_n, again the memoryless property is used. One way is to compute the expected waiting time to become the (n−1)th in line. It is left as an EXERCISE (see examples sheets) to show that if X ∼ Exp(λ_1) and Y ∼ Exp(λ_2) are independent, then

    p(X < t | X < Y) = 1 − exp{−(λ_1 + λ_2)t}.

Using this, let T_j denote the time taken by the customer to move from position j to position j − 1 in the queue, given that he remains in the queue. He moves to position j − 1 if min(X_1, ..., X_{j−1}, S) < X_j. It follows that

    E[T_j] = 1 / (jθ + μ).

Now

    E[W_n] = Σ_{j=1}^n E[T_j] = Σ_{j=1}^n 1/(jθ + μ).
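The formula P_n = μ/(nθ + μ) can be checked by simulation. The sketch below (not from the notes; parameter values are arbitrary) uses the memoryless structure of the argument above: at each step, the clock for the line ahead of the customer is redrawn with rate (j − 1)θ + μ:

```python
# Monte Carlo check of P_n = mu/(n*theta + mu) for the impatient-customer
# example: the nth customer races his tolerance against the line ahead.
import math, random

def exp_draw(lam, rng):
    return -math.log(1.0 - rng.random()) / lam

def reaches_service(n, theta, mu, rng):
    # Position j: the customer survives the step if his Exp(theta) tolerance
    # outlasts the Exp((j-1)*theta + mu) clock for the minimum of the j-1
    # tolerances ahead of him and the residual service time (memorylessness
    # justifies redrawing the clocks at each step).
    for j in range(n, 0, -1):
        if exp_draw(theta, rng) < exp_draw((j - 1) * theta + mu, rng):
            return False  # his patience runs out first; he leaves
    return True

rng = random.Random(5)
n, theta, mu = 3, 1.0, 2.0
trials = 100_000
p_hat = sum(reaches_service(n, theta, mu, rng) for _ in range(trials)) / trials
print(round(p_hat, 3))  # theory: mu/(n*theta + mu) = 2/5
```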

3.3 Breakdown of a Poisson Process into Component Parts

Suppose a Poisson process is occurring and the events are of k different types. For example, the events might record cars passing, classified according to whether each car is German, French, Japanese or Swedish, where different car owners behave independently of each other. Given that a car passes, the probability that it is of type i is p_i, with Σ_{i=1}^k p_i = 1. The full Poisson process is then the sum of k independent Poisson processes occurring simultaneously, with parameters λp_1, ..., λp_k. This is the content of the next theorem.

Theorem 3.9. Let N^1, ..., N^n be independent Poisson processes with parameters λ_1, ..., λ_n respectively. Then Σ_{j=1}^n N^j is a Poisson process with parameter Σ_{j=1}^n λ_j.

Proof. It is necessary to show that Definition 3.1 is satisfied. Let X = Σ_{j=1}^n N^j. For the second condition, the sum can only increase by two or more over (t, t + h] if some component process increases by two or more, or if two distinct component processes each increase by at least one, so that

    p(X(t + h) − X(t) ≥ 2) ≤ Σ_{j=1}^n p(N^j(t + h) − N^j(t) ≥ 2) + Σ_{j≠k} p(N^j(t + h) − N^j(t) ≥ 1) p(N^k(t + h) − N^k(t) ≥ 1).

The terms of the first sum are o(h) by assumption and the terms of the second are O(h²), so that if the N^j are Poisson processes, then lim_{h→0} (1/h) p(X(t + h) − X(t) ≥ 2) = 0. For the first condition,

    p(X(t + h) − X(t) = 1) = Σ_{j=1}^n p(N^j(t + h) − N^j(t) = 1, N^k(t + h) − N^k(t) = 0 for all k ≠ j)
        = Σ_{j=1}^n p(N^j(t + h) − N^j(t) = 1) Π_{k≠j} p(N^k(t + h) − N^k(t) = 0),

so that

    lim_{h→0} (1/h) p(X(t + h) − X(t) = 1) = lim_{h→0} Σ_{j=1}^n λ_j Π_{k≠j} (1 − λ_k h) = Σ_{j=1}^n λ_j.

The independence of X(t) − X(s) and X(v) − X(u) for u < v < s < t follows from the independence of the increments of the original Poisson processes.

Conversely, if there is a Poisson process with parameter λ, and the events fall into n distinct categories, where an event of type j occurs with probability p_j, Σ_{j=1}^n p_j = 1, and the `type' of an event is independent of all events that have gone before, then events of type j occur according to a Poisson process with parameter λp_j. The proof of this is straightforward and is left as an exercise.

Another important property of a Poisson process is given in the following theorem.
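The converse (splitting) statement can be illustrated by simulation (a sketch, not from the notes; parameter values are arbitrary): thinning a rate λ process by independently keeping each event with probability p should give counts whose mean and variance are both close to λpt, as for a Poisson(λpt) variable:

```python
# Thin a rate-lam Poisson process: each event is kept ("of the chosen type")
# independently with probability p; compare the kept counts with Poisson(lam*p*t).
import math, random

def thinned_count(lam, p, t, rng):
    clock, n = 0.0, 0
    while True:
        clock += -math.log(1.0 - rng.random()) / lam
        if clock > t:
            return n
        if rng.random() < p:  # the event is of the chosen type
            n += 1

rng = random.Random(6)
lam, p, t = 5.0, 0.3, 1.0
counts = [thinned_count(lam, p, t, rng) for _ in range(50_000)]
mean = sum(counts) / len(counts)
var = sum((c - mean)**2 for c in counts) / len(counts)
print(round(mean, 2), round(var, 2))  # both close to lam * p * t = 1.5
```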

Theorem 3.10. Let N denote a Poisson process with parameter λ and let T be the time of the first
arrival. Then, given that exactly one arrival has occurred in the time interval [0, t], T ∼ U(0, t). In other words, the conditional distribution of T is uniform.


Proof For any s ∈ (0, t),

p(T < s | N(t) = 1) = p({T < s} ∩ {N(t) = 1}) / p(N(t) = 1)
= p({one arrival in (0, s)} ∩ {no arrivals in (s, t)}) / p(N(t) = 1)
= p(N(s) = 1) p(N(t − s) = 0) / p(N(t) = 1)
= (λs e^{−λs}) e^{−λ(t−s)} / (λt e^{−λt})
= s/t,

which is the cumulative distribution function of a U(0, t) random variable. This result may be generalised.
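The conditional uniformity of Theorem 3.10 is easy to see empirically. The sketch below (rate, horizon and sample size are arbitrary choices) keeps only those runs in which exactly one arrival lands in [0, t] and looks at where that arrival fell.

```python
import random

# Simulation sketch of Theorem 3.10: conditioned on N(t) = 1, the single
# arrival time should be uniform on (0, t).  Parameters are illustrative.
random.seed(1)
lam, t = 1.0, 2.0
times = []
for _ in range(20000):
    s, arrivals = random.expovariate(lam), []
    while s < t:
        arrivals.append(s)
        s += random.expovariate(lam)
    if len(arrivals) == 1:          # condition on exactly one arrival
        times.append(arrivals[0])

mean_T = sum(times) / len(times)                              # U(0, 2) has mean 1
frac_below = sum(1 for x in times if x < t / 2) / len(times)  # should be ~1/2
```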

Definition 3.11 (Order Statistics). Let Y1, . . . , Yn be n random variables. The order statistics corresponding to Y1, . . . , Yn are Y(1), . . . , Y(n), where Y(1) ≤ Y(2) ≤ . . . ≤ Y(n) and (Y(1), . . . , Y(n)) = (Yσ(1), . . . , Yσ(n)) for some permutation σ.

Lemma 3.12. Let Y1, . . . , Yn be i.i.d. random variables, each uniformly distributed on the interval [0, t]. Then the joint density function for (Y(1), . . . , Y(n)) is given by

f_{(Y(1),...,Y(n))}(y1, . . . , yn) = n!/t^n for 0 < y1 < . . . < yn < t, and 0 otherwise.

Proof This is seen by symmetry: each of the n! orderings of (Y1, . . . , Yn) contributes the density 1/t^n on the region 0 < y1 < . . . < yn < t.

Theorem 3.13. Let N(t) be the number of events in a Poisson process by time t. Given that N(t) = n,
the n arrival times S1, . . . , Sn have the same distribution as the order statistics corresponding to n independent random variables uniformly distributed on (0, t).

Proof Let Tj denote the j-th interarrival time. The event that N(t) = n is equivalent to T1 + . . . + Tn < t and T1 + . . . + Tn+1 > t. Let

A_{h1,...,hn} = {S1 ∈ (s1, s1 + h1), S2 ∈ (s2, s2 + h2), . . . , Sn ∈ (sn, sn + hn)}.

The density function for 0 < s1 < . . . < sn < t is given by

f_{S1,...,Sn}(s1, . . . , sn) = lim_{h1→0,...,hn→0} (1/(h1 · · · hn)) p(A_{h1,...,hn} | N(t) = n)
= lim_{h1→0,...,hn→0} (1/(h1 · · · hn)) p(A_{h1,...,hn}, N(t) = n) / p(N(t) = n)
= lim_{h1→0,...,hn→0} (1/(h1 · · · hn)) ∫_{s1}^{s1+h1} ∫_{s2}^{s2+h2} · · · ∫_{sn}^{sn+hn} λ^n e^{−λ Σ_{j=1}^n rj} e^{−λ(t − Σ_{j=1}^n rj)} dr1 . . . drn / ((λt)^n e^{−λt}/n!)
= λ^n e^{−λt} / ((λt)^n e^{−λt}/n!)
= n!/t^n.
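Theorem 3.13 is often used in the reverse direction as a simulation recipe: to generate the arrivals of a Poisson process on (0, t), draw N ∼ Poiss(λt) and then place N independent U(0, t) points. The sketch below (parameters are arbitrary choices) checks one consequence: the number of these points falling in (0, t/2) should be Poisson with mean λt/2.

```python
import random, math

# Theorem 3.13 read constructively: Poiss(lam*t) many uniform points on (0, t)
# reproduce the arrival times of a rate-lam Poisson process.
random.seed(2)

def poisson_sample(mean):
    """Inverse-transform sample of a Poisson(mean) variate."""
    u, k, p = random.random(), 0, math.exp(-mean)
    cum = p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

lam, t, runs = 3.0, 5.0, 4000
half_counts = []
for _ in range(runs):
    n = poisson_sample(lam * t)
    points = [random.uniform(0, t) for _ in range(n)]
    half_counts.append(sum(1 for x in points if x < t / 2))

mean_half = sum(half_counts) / runs   # should be near lam * t / 2 = 7.5
```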


This leads to the following proposition.

Proposition 3.14. Let N(t) be a Poisson process with parameter λ. Suppose that each event which occurs may be
classified as one of k possible types of events and that, conditioned on an event happening at time s, it will be of type i with probability Pi(s), where Σ_{i=1}^k Pi(s) = 1. Let Ni(t) denote the number of events of type i which have occurred by time t. Then the Ni(t) are independent Poisson random variables (although the processes are not Poisson processes) with expectations

E[Ni(t)] = λ ∫_0^t Pi(s) ds.

Proof Let N(t) = Σ_{j=1}^k Nj(t). Computing the joint probabilities, note that (trivially)

p(N1(t) = n1, . . . , Nk(t) = nk) = p(N1(t) = n1, . . . , Nk(t) = nk | N(t) = Σ_{j=1}^k nj) p(N(t) = Σ_{j=1}^k nj).

Set n = Σ_{j=1}^k nj. Then p(N(t) = n) = ((λt)^n/n!) e^{−λt}. Given that an event occurs, it is of type i with probability Pi(s) conditioned on occurrence at time s. One obtains

p(type i | event occurs) = ∫_0^t p(type i | event occurs at time s) p(s | event occurs) ds,

where p(s | event occurs) is the density function for the time of the event, given that the event occurs in [0, t]. By Theorem 3.13, this is the uniform density on [0, t]. It follows that

p(type i | event occurs) = (1/t) ∫_0^t Pi(s) ds =: P̄i,

independently of all the other events. It follows that

p(N1(t) = n1, . . . , Nk(t) = nk) = (n!/(n1! · · · nk!)) P̄1^{n1} · · · P̄k^{nk} ((λt)^n/n!) e^{−λt}
= Π_{j=1}^k ((λt P̄j)^{nj}/nj!) e^{−λt P̄j},

and the proof is complete.

Example (An Infinite Server Queue) Suppose that customers arrive at a service station according
to a Poisson process with rate λ. Upon arrival, the customer is immediately served by one of an infinite number of possible servers (this is often a model for a telecommunications network). Service times are independent with common distribution G.

1. What is the distribution of X(t), the number of customers who have completed service by time t?

2. What is the distribution of Y(t), the number of customers who are being served at time t?

Solution Let customers who complete service by time t be denoted type 1 customers and customers
who do not complete service by time t type 2 customers. Let Tj denote the arrival time of customer j and Sj denote the service time of customer j. Then customer j is of type 1 if Tj + Sj < t and of type 2 if Tj < t < Tj + Sj. The distribution of Sj is G. A customer arriving at time s < t will be of type 1 with probability G(t − s) and of type 2 with probability Ḡ(t − s) = 1 − G(t − s). By the previous proposition, it follows that X(t) and Y(t) are independent Poisson random variables with expected values

E[X(t)] = λ ∫_0^t G(t − s) ds = λ ∫_0^t G(y) dy

and

E[Y(t)] = λ ∫_0^t Ḡ(t − s) ds = λ ∫_0^t Ḡ(y) dy.
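The formula for E[Y(t)] can be checked by simulation for a particular G. The sketch below takes G exponential with rate μ (one convenient choice; λ, μ and t are arbitrary), so that λ ∫_0^t Ḡ(y) dy = (λ/μ)(1 − e^{−μt}); the empirical mean and variance of Y(t) should both be close to this value, since Y(t) is Poisson.

```python
import random, math

# Simulation sketch of the infinite server example with exponential service.
random.seed(3)
lam, mu, t, runs = 4.0, 1.0, 3.0, 3000
in_service = []
for _ in range(runs):
    s, busy = random.expovariate(lam), 0
    while s < t:
        if s + random.expovariate(mu) > t:   # still in service at time t
            busy += 1
        s += random.expovariate(lam)
    in_service.append(busy)

target = (lam / mu) * (1 - math.exp(-mu * t))   # about 3.80
mean_Y = sum(in_service) / runs
var_Y = sum((y - mean_Y) ** 2 for y in in_service) / runs  # Poisson: var = mean
```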

3.4 Generalizations of the Poisson Process


The Nonhomogeneous Poisson Process There are situations where the intensity of a process
may change over time.

Definition 3.15. A counting process {N(t), t ≥ 0} is said to be a non homogeneous Poisson process
with intensity λ(t) if it satisfies

1. p(N(0) = 0) = 1,

2. {N(t) : t ≥ 0} has independent increments,

3. lim_{h→0} (1/h) p(N(t + h) − N(t) ≥ 2) = 0,

4. lim_{h→0} (1/h) p(N(t + h) − N(t) = 1) = λ(t).

Lemma 3.16. If N(t) is a non homogeneous Poisson process, then

p(N(t + s) − N(t) = n) = ((∫_t^{t+s} λ(r) dr)^n / n!) e^{−∫_t^{t+s} λ(r) dr}, n ≥ 0.

Proof This is done by computing the probability generating function and showing that it is the p.g.f.
of a Poisson random variable. Set

G(s; r, t) = E[s^{N(r+t)−N(r)}].

Then

∂G/∂t (s; r, t) = lim_{h→0} (1/h) (G(s; r, t + h) − G(s; r, t))
= E[s^{N(t+r)−N(r)}] lim_{h→0} (1/h) E[s^{N(t+r+h)−N(t+r)} − 1]
= G(s; r, t) lim_{h→0} (1/h) ((p(N(t + r + h) − N(t + r) = 0) − 1)
+ s p(N(t + r + h) − N(t + r) = 1) + Σ_{j=2}^∞ s^j p(N(t + r + h) − N(t + r) = j))
= G(s; r, t)(s − 1)λ(t + r),

because for 0 ≤ s ≤ 1,

(1/h) Σ_{j=2}^∞ s^j p(N(t + r + h) − N(t + r) = j) ≤ (1/h) Σ_{j=2}^∞ p(N(t + r + h) − N(t + r) = j)
= (1/h) p(N(t + r + h) − N(t + r) ≥ 2) → 0 as h → 0.

It follows that

G(s; r, t) = exp((s − 1) ∫_r^{t+r} λ(u) du),

which is the probability generating function of a Poisson random variable with expectation ∫_r^{t+r} λ(u) du.
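Lemma 3.16 also suggests a standard way to simulate a non homogeneous process, thinning: generate a homogeneous process whose rate Λ bounds λ(t), and keep each point at time s with probability λ(s)/Λ. The sketch below uses λ(t) = t on [0, 2] (an arbitrary choice), for which the expected count is ∫_0^2 r dr = 2.

```python
import random

# Simulating a non homogeneous Poisson process by thinning (a standard
# construction; the intensity and its bound LAM are illustrative choices).
random.seed(4)
LAM, T, runs = 2.0, 2.0, 5000

def lam(u):
    return u     # intensity lam(t) = t, bounded above by LAM on [0, T]

counts = []
for _ in range(runs):
    s, n = random.expovariate(LAM), 0
    while s < T:
        if random.random() < lam(s) / LAM:   # accept with prob lam(s)/LAM
            n += 1
        s += random.expovariate(LAM)
    counts.append(n)

mean_count = sum(counts) / runs   # should be near integral of lam = 2
```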

Example Consider an M/G/∞ queue. That is, the arrivals are Poisson with rate λ, the service time has a
general distribution G and there is an infinite number of available servers. The output process is a non homogeneous Poisson process with intensity λG(t).

Proof This has already been discussed: a customer arriving at time s ≤ t has departed by time t with probability G(t − s), so the number of departures by time t is Poisson with mean λ ∫_0^t G(y) dy. Since arrivals are independent and service times are independent, the departure process has independent increments and is therefore a non homogeneous Poisson process with intensity λG(t).

Definition 3.17 (The Compound Poisson Process). A stochastic process {X(t), t ≥ 0} is said to be a
compound Poisson process if it can be represented as

X(t) = Σ_{i=1}^{N(t)} Yi, t ≥ 0,

where {N(t), t ≥ 0} is a Poisson process and {Yi, i ≥ 0} is a family of independent, identically distributed random variables which are also independent of {N(t), t ≥ 0}. The random variable X(t) is said to be a compound Poisson random variable.

The Probability Generating Function of a Compound Poisson Process The probability
generating function of a compound Poisson process has a particularly convenient form, which enables the expectation, variance and other moments to be computed in a reasonably straightforward manner.

Lemma 3.18. Using the notation of the definition, let φ(s) = E[s^Y] denote the probability generating
function of Y and suppose that N is a Poisson process with parameter λ. Let

G(s, t) = E[s^{X(t)}].

Then

G(s, t) = e^{λt(φ(s)−1)}.

Proof Using the convention that Σ_{j=1}^n Yj = 0 for n = 0,

E[s^{X(t)}] = E[s^{Σ_{j=1}^{N(t)} Yj}] = E[E[s^{Σ_{j=1}^{N(t)} Yj} | N(t)]]
= Σ_{n=0}^∞ E[s^{Σ_{j=1}^n Yj} | N(t) = n] ((λt)^n/n!) e^{−λt}
= Σ_{n=0}^∞ E[s^Y]^n ((λt)^n/n!) e^{−λt}
= Σ_{n=0}^∞ ((λφ(s)t)^n/n!) e^{−λt}
= e^{λt(φ(s)−1)}.

It follows that

E[X(t)] = ∂G/∂s (s, t)|_{s=1} = ∂/∂s e^{λt(φ(s)−1)}|_{s=1} = λt φ′(1) = λt E[Y].

Recall that

E[Y²] = φ″(1) + φ′(1).

It follows, using subscripts to denote derivatives,

Var(X(t)) = G_{ss}(1, t) + G_s(1, t) − G_s(1, t)²
= (λt) φ″(1) + (λt)² φ′(1)² + (λt) φ′(1) − (λt)² φ′(1)²
= (λt) E[Y²] = (λt)(Var(Y) + E[Y]²).
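The two moment formulas are easy to verify numerically. In the sketch below, Y is uniform on {1, 2, 3} (an arbitrary choice, with E[Y] = 2 and E[Y²] = 14/3), λ = 2 and t = 3, so the formulas predict E[X(t)] = 12 and Var(X(t)) = 28.

```python
import random, math

# Numeric check of the compound Poisson moments E[X(t)] = lam*t*E[Y] and
# Var(X(t)) = lam*t*E[Y^2]; all parameters are illustrative choices.
random.seed(5)

def poisson_sample(mean):
    """Inverse-transform sample of a Poisson(mean) variate."""
    u, k, p = random.random(), 0, math.exp(-mean)
    cum = p
    while u > cum:
        k += 1
        p *= mean / k
        cum += p
    return k

lam, t, runs = 2.0, 3.0, 20000
xs = []
for _ in range(runs):
    n = poisson_sample(lam * t)
    xs.append(sum(random.choice((1, 2, 3)) for _ in range(n)))

mean_X = sum(xs) / runs                             # lam*t*E[Y]   = 12
var_X = sum((x - mean_X) ** 2 for x in xs) / runs   # lam*t*E[Y^2] = 28
```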

Example: Busy Periods in a Single Server Poisson Arrival Queue Consider an M/G/1 queue:
Poisson arrivals with rate λ, general service time distribution, single server. The system alternates between idle periods and busy periods. A busy period begins when an arrival finds the system empty. The lengths of successive busy periods are independent, identically distributed random variables. Let B denote the length of a busy period. What are E[B] and Var(B), expressed in terms of the mean and variance of S, the service time of a customer?

Solution Let N(S) denote the number of arrivals during a service time S. If N(S) = 0, then the busy
period ends when the initial customer completes service. If one customer arrives during service, then after the first departure, another customer enters service; the additional time from S until the system is empty has the same distribution as a busy period. In general, if N(S) = n, there will be n customers standing in line when the server finishes his initial service. Note that

B = S + Σ_{j=1}^{N(S)} Bj,

where the Bj are i.i.d. random variables with the same distribution as B: Bj is the time it takes to deal with the customers who arrive while customer j is being served, together with everything that they in turn are responsible for, and so on. The expected value and variance are best computed using the moment generating function

M_B(p) = E[e^{pB}].

Note that for positive integer q,

E[B^q] = (d^q/dp^q) E[e^{pB}]|_{p=0}.
Now,


M_B(p) = E[e^{pB}] = E[e^{p(S + Σ_{j=1}^{N(S)} Bj)}]
= E[e^{pS} E[e^{p Σ_{j=1}^{N(S)} Bj} | S]]
= E[e^{pS} Σ_{n=0}^∞ E[e^{p Σ_{j=1}^n Bj} | S, N(S) = n] p(N(S) = n | S)]
= E[e^{pS} Σ_{n=0}^∞ E[e^{pB}]^n ((λS)^n/n!) e^{−λS}]
= E[e^{(p−λ)S} Σ_{n=0}^∞ ((λ M_B(p) S)^n/n!)]
= E[e^{(p+λ(M_B(p)−1))S}].

Differentiating at p = 0, it follows that

E[B] = E[S] + λE[S]E[B],

yielding

E[B] = E[S]/(1 − λE[S]),

and

E[B²] = E[S²](1 + λE[B])² + λE[S]E[B²],

so that

E[B²] = (1 + λE[B])² E[S²]/(1 − λE[S]) = E[S²]/(1 − λE[S])³,

so that

Var(B) = E[S²]/(1 − λE[S])³ − E[S]²/(1 − λE[S])² = (Var(S) + λE[S]³)/(1 − λE[S])³.

If S has exponential distribution, then the moment generating function can be computed explicitly. Continuing the analysis, suppose S ∼ Exp(μ). Then

M_B(p) = ∫_0^∞ μ e^{−μt} e^{(p+λ(M_B(p)−1))t} dt = μ/(μ + λ − λM_B(p) − p).

The moment generating function then solves

λ M_B(p)² − (λ + μ − p) M_B(p) + μ = 0.

In principle, the moment generating function may be inverted to obtain the distribution, but this formula should be adequate for computing moments.
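For exponential service the formula E[B] = E[S]/(1 − λE[S]) = 1/(μ − λ) can be checked directly by simulation. The sketch below (λ = 1 and μ = 2 are arbitrary stable choices) starts a busy period with one customer and races arrivals against service completions until the system empties.

```python
import random

# Simulation sketch of an M/M/1 busy period; E[B] = 1/(mu - lam) = 1 here.
random.seed(6)
lam, mu, runs = 1.0, 2.0, 20000
lengths = []
for _ in range(runs):
    n, clock = 1, 0.0            # the arrival that starts the busy period
    while n > 0:
        clock += random.expovariate(lam + mu)    # time to next event
        if random.random() < lam / (lam + mu):   # arrival vs departure
            n += 1
        else:
            n -= 1
    lengths.append(clock)

mean_B = sum(lengths) / runs     # should be near 1
```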

3.5 Exercises

1. (a) Let X ∼ Poiss(λ). Show that the generating function G_X(s) is given by

G_X(s) = exp{λ(s − 1)}.

(b) Use this to show that E[X] = λ and Var(X) = λ.

(c) Let X1, . . . , Xn be independent random variables, where Xj ∼ Poiss(λj). Show, using the generating function, that

Σ_{j=1}^n Xj ∼ Poiss(Σ_{j=1}^n λj).

2. Suppose that cars arrive at a motorway service station according to a Poisson process, with, on average, 6 cars per hour. Suppose that the number of passengers in each car is independent of the other cars and may be taken to have distribution pX(k) = 1/4, k = 1, 2, 3, 4. Compute the probability generating function for the number of people arriving at the motorway service station in one hour.

3. (a) Consider the equation α = G(α), where G(1) = 1, G′(s) > 0 for 0 ≤ s ≤ 1 and G is convex. Show that if G′(1) > 1, then there are exactly two solutions in [0, 1] to the equation α = G(α): namely, α = 1, while the other solution lies in the interval (0, 1). If G′(1) ≤ 1, show that the only solution in the interval [0, 1] is α = 1. What about other solutions?

(b) Consider the Galton Watson process, with Z0 = 1 (i.e. exactly one ancestor). Let φ(s) denote the probability generating function for the number of offspring produced by a particle. Suppose that φ′(1) > 1. Consider the iterative scheme

πn+1 = φ(πn)

with initial condition π0 = 0. Show that πn ↑ π, where π < 1.

(c) Consider a Galton Watson process, where Z0 = 1 and each particle produces a random number of offspring according to a Bi(4, 1/2) distribution.

i. What is the expected number of particles in the nth generation?

ii. What is the probability that the process dies out?

4. For a series of dependent trials, the probability of success on any trial is (k + 1)/(k + 2), where k is the number of successes in the previous two trials. Compute

s := lim_{n→∞} p({success on trial n}).

5. Recall that the hazard function associated with a continuous random variable X, with c.d.f. F_X(x) := p(X ≤ x) and density function p_X, is defined as

r_X(x) := p_X(x)/(1 − F_X(x)).

If X1 and X2 are two independent, non negative continuous random variables, show that

p(X1 < X2 | min(X1, X2) = t) = r_{X1}(t)/(r_{X1}(t) + r_{X2}(t)).

6. Let X and Y be independent exponential random variables with respective means 1/λ1 and 1/λ2.

(a) What is the conditional distribution of Z := min(X, Y) given that Z = X?

(b) Show that the conditional distribution of Y − Z given that Z = X is exponential with mean 1/λ2.

7. Let X1 and X2 be independent exponential random variables, each having rate μ. Let

Z1 = min(X1, X2) and Z2 = max(X1, X2).

Find E[Z1], Var(Z1), E[Z2]. Show that Z1 and Z2 − Z1 are independent. Find Var(Z2).

8. A transmitter needs two batteries to be operational. Consider such a transmitter and a set of n functional batteries, numbered from 1 to n. Initially, the first two are installed. Whenever a battery fails, it is replaced by the lowest numbered battery not yet in use. Suppose that the lifetimes of the different batteries are independent, exponential random variables, each having rate μ. At a random time T, a battery fails and the stockpile is empty. At that moment, exactly one of the batteries, battery X, will not yet have failed. Compute

(a) p(X = n), (b) p(X = 1), (c) p(X = i), (d) E[T], (e) the distribution of T.

9. Suppose that people arrive at a bus stop according to a Poisson process with rate λ. The bus leaves at time t. Let X denote the total amount of waiting time of all those who get on at time t. Let N(t) denote the number of arrivals by time t.

(a) What is E[X | N(t)]?

(b) Show that Var(X | N(t)) = N(t)t²/12.

(c) What is Var(X)?


10. Let X1, X2, . . . be independent, identically distributed non negative continuous random variables having density function p(x). A record occurs at time n if Xn is larger than max(X1, . . . , Xn−1). If a record occurs at time n, then Xn is called a record value. Let N(t) denote the number of record values that are less than or equal to t. Characterise the process N(t) when

(a) p is an arbitrary continuous density function;

(b) p(x) = λe^{−λx} for x ≥ 0.

11. Let X1, . . . , Xn be i.i.d. exponential random variables. Show that the probability that the largest of them is greater than the sum of the others is n/2^{n−1}. In other words, set M = max_j Xj and show that

p(M > Σ_{j=1}^n Xj − M) = n/2^{n−1}.

Hint Consider p(X1 > Σ_{j=2}^n Xj), together with properties of exponential random variables.

12. Events occur according to a non homogeneous Poisson process, with E[X(t)] = t² + 2t. Compute the probability that n events occur between times t = 4 and t = 5.

13. Consider a single server queueing system where customers arrive according to a Poisson process with rate λ and service is exponential with rate μ. Customers are served on a first come first served basis. Suppose that a customer finds n ≥ 1 others in the system upon arrival. Compute the probability function for X, the number in the system just after the customer has departed.

14. Consider a two server parallel queueing system where customers arrive according to a Poisson process with rate λ and service times are exponentially distributed with rate μ. An arriving customer who finds both servers busy leaves without waiting. Such a customer is said to be lost.

(a) If both servers are busy, find the expected time until the next customer enters service.

(b) Starting from zero customers, find the expected time until both servers are busy.

(c) Find the expected time between two lost customers.

15. Consider an n server parallel queueing system, where customers arrive according to a Poisson process with rate λ and service times are exponentially distributed with rate μ. If an arriving customer finds all servers busy, he departs immediately, without being served. If an arrival finds all servers busy, compute the probability that the next arrival finds exactly k servers free.

16. (difficult) Consider an infinite server queueing system in which customers arrive according to a Poisson process with rate λ and where the service distribution is exponential with rate μ. Let X(t) denote the number of customers in the system at time t. By considering the probability generating function, and using E[X] = G_X′(1) and Var(X) = G_X″(1) + G_X′(1) − (G_X′(1))², compute

(a) E[X(t + r) | X(r) = n] and

(b) Var(X(t + r) | X(r) = n).

17. Consider a two server system, where a customer is first served by server 1 and then by server 2, then departs. All service times are independent exponential. Server 1 has rate μ1, server 2 has rate μ2. When you arrive, you find server 1 free and two customers (A and B) waiting for server 2. Customer A is being served, customer B is standing in line.

(a) Find PA, the probability that A is still in service when you complete your transaction with server 1.

(b) Find PB, the probability that B is still in service when you complete your transaction with server 1.

(c) Find E[T], the expected time in the system.

18. Consider a queueing system where customers are served on a first come first served basis. There are two servers. Server one has exponentially distributed service time with rate μ1 and server two has exponentially distributed service time with rate μ2. All service times are independent. Olof arrives and sees Sven with server 1 and Åke with server 2. What is the probability that he is not the last to leave?

Answers

1. (straightforward)

2. (straightforward)

3. (straightforward)

4. Let Sn denote the event of success on trial n. The pair of outcomes of the two most recent trials forms a Markov chain on the four states 11, 10, 01, 00 (older outcome first), and the success probability from a state containing k successes is (k + 1)/(k + 2): 3/4 from 11, 2/3 from 10 and from 01, and 1/2 from 00. The stationary probabilities satisfy

π11 = (3/4)π11 + (2/3)π01
π10 = (1/4)π11 + (1/3)π01
π01 = (2/3)π10 + (1/2)π00
π00 = (1/3)π10 + (1/2)π00

together with π11 + π10 + π01 + π00 = 1, giving π11 = 1/2, π10 = π01 = 3/16 and π00 = 1/8. Hence

s = lim_{n→∞} p(Sn) = π11 + π01 = 11/16.

5.

p(X1 < X2 | min(X1, X2) = x)
= lim_{h→0} p(X1 < X2, X1 ∈ (x, x + h)) / p({X1 ∈ (x, x + h), X1 < X2} ∪ {X2 ∈ (x, x + h), X2 < X1})
= lim_{h→0} (∫_x^{x+h} ∫_y^∞ f_{X2}(z) f_{X1}(y) dz dy) / (∫_x^{x+h} ∫_y^∞ f_{X2}(z) f_{X1}(y) dz dy + ∫_x^{x+h} ∫_y^∞ f_{X1}(z) f_{X2}(y) dz dy)
= f_{X1}(x)(1 − F_{X2}(x)) / (f_{X1}(x)(1 − F_{X2}(x)) + f_{X2}(x)(1 − F_{X1}(x))).

Now divide numerator and denominator by (1 − F_{X1}(x))(1 − F_{X2}(x)).

6. (a)

p(X ≤ t | X < Y) = p(X ≤ t, X < Y) / p(X < Y).

Now

p(X < Y) = λ1/(λ1 + λ2)

and

p(X ≤ t, X < Y) = λ1 λ2 ∫_0^t (∫_x^∞ e^{−λ2 y} dy) e^{−λ1 x} dx = (λ1/(λ1 + λ2))(1 − e^{−(λ1+λ2)t});

divide through. The conditional distribution is Exp(λ1 + λ2).

(b)

p(Y − X ≥ z | Y ≥ X) = p(Y ≥ X + z) / p(Y ≥ X).

Note that

p(Y ≥ X) = ∫_0^∞ ∫_x^∞ λ1 λ2 e^{−λ1 x − λ2 y} dy dx = λ1/(λ1 + λ2)

and

p(Y ≥ X + z) = ∫_0^∞ ∫_{x+z}^∞ λ1 λ2 e^{−λ1 x − λ2 y} dy dx = (λ1/(λ1 + λ2)) e^{−λ2 z},

and the result follows.

7. From the previous exercise, it follows that

(a) E[Z1] = 1/(2μ), (b) Var(Z1) = 1/(4μ²).

(c) From the previous exercise, one may compute that Z2 − Z1 ∼ Exp(μ) (the previous exercise gives that it is Exp(μ) whichever of X1 or X2 is the minimum). It follows that

E[Z2] = 1/(2μ) + 1/μ.

(d)

p(Z1 ≥ x, Z2 − Z1 ≥ y) = p(x ≤ Z1, Z1 + y ≤ Z2)
= p(x ≤ X1, X1 + y ≤ X2) + p(x ≤ X2, X2 + y ≤ X1)
= 2 ∫_x^∞ ∫_{ξ+y}^∞ μ² e^{−μ(ξ+η)} dη dξ = e^{−2μx} e^{−μy} = p(Z1 ≥ x) p(Z2 − Z1 ≥ y).

(e) It follows directly that

Var(Z2) = 1/(4μ²) + 1/μ².

8. (a) 1/2, (b) (1/2)^{n−1}, (c) p(X = i) = (1/2)^{n−i+1} for 3 ≤ i ≤ n, while p(X = 1) = p(X = 2) = (1/2)^{n−1}, (d) T is the sum of n − 1 independent Exp(2μ) variables, so E[T] = (n − 1)/(2μ), (e) Gamma, with parameters n − 1 and 2μ.

9. (a) Each arrival is distributed uniformly on (0, t), so

E[X | N(t)] = N(t) ∫_0^t (t − s) ds/t = N(t) t/2.

(b) Let (Ui)_{i=1}^n be i.i.d., uniformly distributed on (0, t). Then

Var(X | N(t) = n) = n Var(Ui) = nt²/12.

(c)

Var(X) = E[X²] − E[X]²
= E[Var(X | N(t))] + E[(E[X | N(t)])²] − E[X]²
= (t²/12) E[N(t)] + (t²/4) E[N(t)²] − (t²/4) E[N(t)]²
= λt³/12 + λt³/4
= λt³/3.
10. (a) Let R(t, h) denote the event that the first record value greater than t lies in (t, t + h). Then

lim_{h→0} (1/h) p(R(t, h)) = lim_{h→0} (1/h) p(X1 ∈ (t, t + h))
+ lim_{h→0} (1/h) Σ_{k=2}^∞ p(Xk ∈ (t, t + h), X1 ≤ Xk, . . . , Xk−1 ≤ Xk)
= lim_{h→0} (1/h) ∫_t^{t+h} p_X(y) dy Σ_{k=0}^∞ F_X(t)^k
= p_X(t)/(1 − F_X(t)).

It follows that, independently of the records less than t, there is a record value in (t, t + h) with probability λ(t)h + o(h), where

λ(t) = p_X(t)/(1 − F_X(t)) = r_X(t);

that is, the record values form a non homogeneous Poisson process with rate λ(t) = r_X(t), the hazard function.

(b) If p(t) = λe^{−λt}, then r_X(t) = λ, so the counting process of record values is a Poisson process with rate λ.

11. Firstly,

p(X1 > Σ_{j=2}^n Xj) = p(X1 > X2) p(X1 > X2 + X3 | X1 > X2) · · · p(X1 > X2 + . . . + Xn | X1 > X2 + . . . + Xn−1) = 1/2^{n−1}

by the `memoryless' property (p(X1 > T + H | X1 > T) = p(X1 > H)). This requires a little justification when H, T and X1 are jointly independent random variables: condition on H and T, then remove the conditioning. It follows that

p(M > Σ_{j=1}^n Xj − M) = Σ_{j=1}^n p(Xj > Σ_{k≠j} Xk) = n/2^{n−1}.

12. With m(t) = t² + 2t,

∫_4^5 λ(s) ds = m(5) − m(4) = 25 + 10 − 16 − 8 = 11.

It follows that

p(X(5) − X(4) = n) = (11^n/n!) e^{−11}.

13. Let S be the time to completion. Then S = T1 + . . . + Tn, where the Tj are independent, identically distributed Exp(μ) service times, so that

p_S(t) = μ^n t^{n−1} e^{−μt}/(n − 1)! for t ≥ 0, and 0 for t < 0.

Note also that

p(X = k | S = t) = ((λt)^k/k!) e^{−λt}.

Putting this together,

p(X = k) = ∫_0^∞ p(X = k | S = t) p_S(t) dt
= (λ^k μ^n/((n − 1)! k!)) ∫_0^∞ t^{k+n−1} e^{−(λ+μ)t} dt
= (λ^k μ^n/((n − 1)! k!)) (n + k − 1)!/(λ + μ)^{n+k}
= ((n + k − 1) choose k) μ^n λ^k/(λ + μ)^{n+k}.

Note that this may be interpreted as the probability of k failures before success number n in a Bernoulli process, where the trials have `success' probability p = μ/(λ + μ).

14. (a) The time until a server becomes available is exponential with parameter 2μ. Use the `memoryless' property: first a server becomes available, then the next arrival enters service, so

E[T] = 1/(2μ) + 1/λ.

(b) Let T denote the time until both servers are busy, starting from zero customers. Then

E[T] = 1/λ + (λ/(λ + μ))(1/(λ + μ)) + (μ/(λ + μ))(1/(λ + μ) + E[T]),

so that

E[T] = (μ + 2λ)/λ².

(c) Let Tk denote the expected time until a customer is lost, starting from k customers. Then

T2 = 1/(λ + 2μ) + (2μ/(λ + 2μ)) T1,
T1 = (λ/(λ + μ))(1/(λ + μ) + T2) + (μ/(λ + μ))(1/(λ + μ) + T0) = 1/(λ + μ) + (λT2 + μT0)/(λ + μ),
T0 = 1/λ + T1,

giving

T1 = 1/λ + μ/λ² + T2,

so that

T2 = (1/λ)(1 + 2μ/λ + 2μ²/λ²).

15. Use the memoryless property: for the next arrival to find 0 servers free, it must happen before any service is completed. The time to the next departure is exponential with parameter kμ when k servers are busy. Let T denote the interarrival time and S the time until the next service completion. Then

p(0 servers free) = p(T < S) = λ/(λ + nμ),

p(1 server free) = (nμ/(nμ + λ))(λ/((n − 1)μ + λ)),

and the general formula is

p(k servers free) = (nμ/(nμ + λ)) · · · ((n − k + 1)μ/((n − k + 1)μ + λ))(λ/((n − k)μ + λ)).

16. Let G(t, s) = E[s^{X(t+r)} | X(r) = n]. Then

∂G/∂t = λ(s − 1)G − μ(s − 1) ∂G/∂s.

Differentiating with respect to s,

∂G_s/∂t = λG + λ(s − 1)G_s − μG_s − μ(s − 1)G_{ss},

and setting s = 1,

∂G_s(1)/∂t = λ − μG_s(1),

so that

E[X(r + t) | X(r) = n] = n e^{−μt} + λ ∫_0^t e^{−μ(t−s)} ds = λ/μ + (n − λ/μ) e^{−μt}.

For the variance,

∂G_{ss}/∂t = 2λG_s + λ(s − 1)G_{ss} − 2μG_{ss} − μ(s − 1)G_{sss},

giving

∂G_{ss}(1)/∂t = 2λG_s(1) − 2μG_{ss}(1).

Using Var(X) = G_{ss}(1) + G_s(1) − (G_s(1))²,

∂Var(X)/∂t = −2μ Var(X) + μE[X] + λ,

so that (using Var(X(r) | X(r) = n) = 0)

Var(X) = n(e^{−μt} − e^{−2μt}) + (λ/μ)(1 − e^{−2μt}) − (λ/μ)(e^{−μt} − e^{−2μt}).

17. (a)

PA = μ1/(μ1 + μ2).

(b) Let X ∼ Exp(μ1) (your service time at server 1) and Y1, Y2 ∼ Exp(μ2) (the service times of A and B at server 2), all independent. Then

PB = p(Y1 < X < Y1 + Y2) = ∫_0^∞ ∫_{y1}^∞ ∫_{x−y1}^∞ μ1 μ2² e^{−(μ2 y1 + μ1 x + μ2 y2)} dy2 dx dy1,

giving

PB = μ1 μ2/(μ1 + μ2)².

Alternatively, use the memoryless property:

PB = p(Y1 < X < Y1 + Y2) = p(Y1 < X) p(X < Y1 + Y2 | Y1 < X) = p(Y1 < X) p(X < Y2) = μ1 μ2/(μ1 + μ2)².

(c)

E[T] = 1/μ1 + (1/μ2)(1 + 2PA + PB).

18.

p({not last}) = p({not last} | {Sven leaves first}) p({Sven leaves first})
+ p({not last} | {Åke leaves first}) p({Åke leaves first})
= (μ1/(μ1 + μ2))² + (μ2/(μ1 + μ2))².

Chapter 4

Discrete Time Markov Chains


In 1913, Andrei Andreyevich Markov (1856 - 1922) was reading `Eugene Onegin' by Alexander Pushkin (1799 - 1837). He studied the first 20000 characters, writing a 0 for every vowel and a 1 for every consonant, and examined the sequence. He noted that if one was reading a vowel, then the probability that the next letter would be a consonant was 0.872, whereas if one was reading a consonant, the probability that the next letter would be a consonant was only 0.337. Clearly the sequence could not be understood as a sequence of independent identically distributed random variables. But he noted that where one went to next depended only on where one was now and not on the previous letters. This was the first example of what became known as a Markov chain. The transition probabilities for Eugene Onegin, computed by Markov, were

                vowel    consonant
vowel           0.128    0.872
consonant       0.663    0.337

He also computed that the proportion of consonants was 0.568 and the proportion of vowels was 0.432.
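The quoted proportions are exactly the stationary distribution of Markov's two-state chain, which for two states follows from the balance equation πv p(v→c) = πc p(c→v):

```python
# Stationary distribution of Markov's vowel/consonant chain: the balance
# equation pi_v * p(v->c) = pi_c * p(c->v) determines the proportions.
p_vc = 0.872   # p(consonant | vowel)
p_cv = 0.663   # p(vowel | consonant)

pi_v = p_cv / (p_cv + p_vc)   # proportion of vowels, about 0.432
pi_c = 1 - pi_v               # proportion of consonants, about 0.568
```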

Definition 4.1 (Markov Process). A random process X(t) is said to be a Markov process if the future
of the process, given the present, is independent of the past. That is, given an arbitrary sequence of times t1 < . . . < tk < tk+1, and for any two real numbers a < b,

p(a ≤ X(tk+1) < b | X(tk) = xk, . . . , X(t1) = x1) = p(a ≤ X(tk+1) < b | X(tk) = xk).

This property is referred to as the Markov property. An integer valued Markov process is called a Markov chain.

4.1 Discrete Time Markov Chains


Let Xn be a discrete time, integer valued Markov chain.

Definition 4.2 (Entrance Law). The entrance law of a discrete time integer valued Markov chain that
starts at n = 0 is the law of X0.

Suppose that X has entrance law

pj(0) = p(X0 = j).

The joint probability function for the first n + 1 values of the process is

p(Xn = in, . . . , X0 = i0) = p(Xn = in | Xn−1 = in−1) · · · p(X1 = i1 | X0 = i0) p(X0 = i0).

Thus, the joint probability function for a particular sequence is simply the product of the entrance probability multiplied by the subsequent one step transition probabilities. We will now assume that the one step transition probabilities do not change with time.

Definition 4.3 (Time Homogeneous Markov Chain). The Markov chain is said to be time homogeneous if the one step transition probabilities are fixed; that is, if

p(Xn+1 = j | Xn = i) = pij for all n.
Think of pij as the probability of moving from site i to site j in one step. For a time homogeneous chain, the probability function, or law, of Xn is completely specified by the entrance law and the matrix of one step transition probabilities,

      ( p00  p01  p02  ... )
      ( p10  p11  p12  ... )
P =   ( ...  ...  ...  ... )
      ( pi0  pi1  pi2  ... )
      ( ...  ...  ...  ... )

The matrix P is referred to as the transition matrix. Note that each row of P must add up to 1, since

1 = Σ_j p(Xn+1 = j | Xn = i) = Σ_j pij.

Example Suppose a system has three states, labelled 0, 1 and 2. For example, on day zero, a car has
two spare tyres. The probability that the car will need a single new tyre during day n is p and the probability that it will not need any is q = 1 − p. Let Yn be the number of spare tyres left in the house at the end of day n; then Yn is a Markov chain with transition matrix

      ( 1  0  0 )
P =   ( p  q  0 )
      ( 0  p  q )


[Figure 4.1: circle diagram for the tyre example]

The one step transition probabilities may be represented by a circle diagram, as in figure (4.1).

Example Let Sn be the sum of n Bernoulli trials. In one step, Sn can either stay the same, with probability 1 − p, or increase by 1, with probability p. The transition probability matrix is given by

      ( 1−p   p    0    0   ... )
      (  0   1−p   p    0   ... )
P =   (  0    0   1−p   p   ... )
      (  0    0    0   1−p  ... )
      ( ...  ...  ...  ...  ... )

4.2 The n-step Transition Probabilities

For a time homogeneous discrete time Markov chain, the n step transition matrix P(n) = {pij(n)} is defined as the matrix with entries

pij(n) = p(Xn+k = j | Xk = i).

This does not depend on k, by time homogeneity. Note that

pij(n) = p(Xn = j | X0 = i) = Σ_k p(Xn = j, X1 = k | X0 = i)
= Σ_k p(Xn = j | X1 = k, X0 = i) p(X1 = k | X0 = i)
= Σ_k pkj(n − 1) pik(1) = Σ_k pik(1) pkj(n − 1).

It follows that

P(n) = P P(n − 1),

so that, by induction, P(n) = P^n. Consider the tyre example. It is an easy exercise to verify, by induction, that

         ( 1                      0           0   )
P(n) =   ( 1 − q^n                q^n         0   )
         ( 1 − q^n − npq^{n−1}    npq^{n−1}   q^n )

As n → ∞,

         ( 1  0  0 )
P(n) →   ( 1  0  0 )
         ( 1  0  0 )

so that pj0(n) → 1 for each j. In other words, no matter how many tyres we start out with, the probability of eventually having no spare tyres left is 1: it is certain that we eventually run out of tyres.
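Both the closed form for P(n) and the convergence of its rows can be checked by computing matrix powers directly (p = 0.3 and q = 0.7 below are arbitrary values):

```python
# Matrix-power check of the tyre chain: P(n) = P^n matches the closed form
# above, and the rows converge to (1, 0, 0).
p, q = 0.3, 0.7
P = [[1, 0, 0],
     [p, q, 0],
     [0, p, q]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

n = 50
Pn = [[float(i == j) for j in range(3)] for i in range(3)]   # identity
for _ in range(n):
    Pn = matmul(Pn, P)

# closed form entries: P(n)[1][1] = q^n, P(n)[2][1] = n p q^(n-1)
err_11 = abs(Pn[1][1] - q ** n)
err_21 = abs(Pn[2][1] - n * p * q ** (n - 1))
limit_gap = abs(Pn[2][0] - 1)     # row 2 tends to (1, 0, 0)
```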

Definition 4.4 (Chapman Kolmogorov Equations). For a Markov process, the relation

P(m + n) = P(m) P(n)

is known as the Chapman Kolmogorov equation.

4.3 Classification of States of a Discrete Markov Chain

Definition 4.5 (Accessible). A state j is said to be accessible from state i if Pij(n) > 0 for some
n ≥ 0.

Definition 4.6 (Communication). Two states i and j are said to communicate if they are accessible from
each other. This is written i ↔ j.

Lemma 4.7. Consider three states i, j and k. Then the following hold.

1. State i communicates with state i, for all i.

2. If state i communicates with state j, then state j communicates with state i.

3. If state i communicates with state j and state j communicates with state k, then state i communicates with state k.

Proof Only the third needs to be dealt with. Suppose i communicates with j and j communicates
with k. Then there exist integers m and n such that Pji(n) > 0 and Pkj(m) > 0. It follows that

Pki(m + n) = Σ_{l=0}^∞ Pkl(m) Pli(n) ≥ Pkj(m) Pji(n) > 0.

Hence, state i is accessible from state k. Similarly, state k is accessible from state i.

Definition 4.8 (Class). Two states which communicate are said to be in the same class.

By Lemma 4.7, any two classes of states are either equal or disjoint.

Definition 4.9 (Irreducible). A Markov chain is said to be irreducible if all states communicate with
each other.


Example Consider a Markov chain consisting of four states, 0, 1, 2 and 3, with transition matrix

      ( 1/2  1/2   0    0  )
P =   ( 1/2  1/2   0    0  )
      ( 1/4  1/4  1/4  1/4 )
      (  0    0    0    1  )

The associated circle diagram is given in figure (4.2). From this, the communicating classes are clear.

[Figure 4.2: circle diagram for the classification of states]

The classes of this Markov chain are {0, 1}, {2} and {3}. State 3 is an absorbing state; no other states are accessible from it. For any state i, let

p_{i,k} = p(∪_{j=1}^k {Xj = i} | {X0 = i})

and set

pi = lim_{k→+∞} p_{i,k}.

This is the limit as k → +∞ of the probability that the process will return to state i at least once by time k.

Definition 4.10 (Recurrence). A state i is said to be recurrent if pi = 1 and transient if pi < 1.

Note If a state i is recurrent, then the process starting from that state will return to it infinitely often.

Proposition 4.11. A state i is recurrent if Σ_{n=1}^∞ Pii(n) = +∞ and transient if Σ_{n=1}^∞ Pii(n) < +∞.

Proof Set

fi(k) = p(Xk = i, Xj ≠ i for 1 ≤ j ≤ k − 1 | X0 = i).

Let

In = 1 if Xn = i, and In = 0 if Xn ≠ i.



Set

N_{i,n} = \sum_{k=1}^{n} I_k.

N_{i,n} counts the number of visits of X to i by time n. Note that

E[N_{i,n} \mid X_0 = i] = E\left[ \sum_{j=1}^{n} I_j \,\middle|\, X_0 = i \right] = \sum_{j=1}^{n} E[I_j \mid X_0 = i] = \sum_{j=1}^{n} p(X_j = i \mid X_0 = i) = \sum_{j=1}^{n} P_{ii}(j).

This shows that \sum_{n=1}^{\infty} P_{ii}(n) = +\infty is equivalent to \lim_{n \to +\infty} E[N_{i,n}] = +\infty. It remains to prove that it is equivalent to p_i = 1. By the Markov property,

p(N_{i,n} \ge j) = \sum_{k=1}^{n} p(N_{i,n} \ge j \mid \text{first return at } k)\, p(\text{first return from } i \text{ to } i \text{ at time } k) = \sum_{k=1}^{n} p(N_{i,n-k} \ge j-1) f_i(k).

Let q_i(j) = \lim_{n \to +\infty} p(N_{i,n} \ge j); then

q_i(j) = \sum_{k=1}^{\infty} f_i(k)\, q_i(j-1) = q_i(j-1) \sum_{k=1}^{\infty} f_i(k), \qquad q_i(0) = 1,

so that

q_i(j) = \left( \sum_{k=1}^{\infty} f_i(k) \right)^{j}.

It follows that

\lim_{n \to +\infty} E[N_{i,n}] = \lim_{n \to +\infty} \sum_{k \ge 1} p(N_{i,n} \ge k) = \sum_{k=1}^{\infty} q_i(k) = \frac{\sum_k f_i(k)}{1 - \sum_k f_i(k)}.

It follows that \sum_{n=1}^{\infty} P_{ii}(n) < +\infty is equivalent to \sum_k f_i(k) < 1 and \sum_{n=1}^{\infty} P_{ii}(n) = +\infty is equivalent to \sum_k f_i(k) = 1. But, from the construction, \sum_{k=1}^{n} f_i(k) = p_{i,n} and hence \sum_{k=1}^{\infty} f_i(k) = p_i.

It follows that \sum_{n=1}^{\infty} P_{ii}(n) < +\infty is equivalent to the statement that the process is transient, and the statement \sum_{n=1}^{\infty} P_{ii}(n) = +\infty is equivalent to the statement that, with probability 1, the process makes at least one return to i.


Remark By induction, and using time homogeneity, it follows that this statement is equivalent to the statement that the process returns to i infinitely often.
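The equivalence in Proposition 4.11 can be checked numerically on the four-state example above. This is a sketch, not from the text: state 2 there is transient with return probability p_2 = 1/4 (the only way back to 2 is the immediate self-loop), so the expected number of returns \sum_{n \ge 1} P_{22}(n) should equal p_2/(1 - p_2) = 1/3.

```python
# Sum P_22(n) over n >= 1 by repeated matrix multiplication; for the
# transient state 2 this converges to p_2/(1 - p_2) = (1/4)/(3/4) = 1/3.

def mat_mul(A, B):
    """Multiply two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1/2, 1/2, 0.0, 0.0],
     [1/2, 1/2, 0.0, 0.0],
     [1/4, 1/4, 1/4, 1/4],
     [0.0, 0.0, 0.0, 1.0]]

total = 0.0   # partial sum of P_22(n), n >= 1
Pn = P        # P^1
for _ in range(200):   # terms decay geometrically like (1/4)^n
    total += Pn[2][2]
    Pn = mat_mul(Pn, P)

print(round(total, 6))  # prints 0.333333
```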

Corollary 4.12. If state i is recurrent and state i communicates with state j, then state j is recurrent.

Proof Suppose P_{ij}(k) > 0 and P_{ji}(m) > 0. For any integer n,

P_{jj}(m+n+k) \ge P_{ji}(m) P_{ii}(n) P_{ij}(k).

From this,

\sum_{n=1}^{\infty} P_{jj}(m+n+k) \ge P_{ji}(m) P_{ij}(k) \sum_{n=1}^{\infty} P_{ii}(n) = +\infty.

It follows that state j is recurrent.

Remarks Firstly, the above corollary shows that recurrence, and hence transience, is a class property. Secondly, the above corollary shows that all the states of a finite irreducible Markov chain are recurrent.

Example Consider the Markov chain with states 0, 1, 2 and 3, where

P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.

Which states are transient and which are recurrent?

Solution The circle diagram is given in figure 4.3.

[Figure 4.3: figure for example]

All states communicate. Since the Markov chain is finite, all states are recurrent.

Example Consider the Markov chain with states 0, 1, 2, 3 and 4, where

P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 1/4 & 1/4 & 0 & 0 & 1/2 \end{pmatrix}.

Find the recurrent states.

Solution The chain has three classes: {0, 1}, {2, 3} and {4}. The first two classes are recurrent; the last is transient.

Stirling's Formula The following paragraph states and proves a very useful approximation, due to James Stirling (1692-1770), which first appeared in 'Methodus Differentialis', London, 1730. It is a powerful approximation and comes in extremely handy in many weird and wonderful situations.

Theorem 4.13 (Stirling's Formula). For all n \ge 1, there is a number \theta_n \in (0,1) such that

n! = \sqrt{2\pi}\, n^{n+1/2} e^{-n + \theta_n/12n}.

Proof It is necessary to prove that

e < \left(1 + \frac{1}{n}\right)^{n+1/2} < e^{1 + 1/12n(n+1)}. \qquad (4.1)

Using the expansion

\log(1+x) = \sum_{k \ge 1} (-1)^{k+1} \frac{x^k}{k},

it follows that

\log \frac{1+x}{1-x} = 2x \sum_{j=0}^{\infty} \frac{x^{2j}}{2j+1}.

With x = \frac{1}{2n+1}, \frac{1+x}{1-x} = 1 + \frac{1}{n}. It follows that

\log\left(1 + \frac{1}{n}\right) = \frac{2}{2n+1} \sum_{j=0}^{\infty} \frac{1}{(2j+1)(2n+1)^{2j}} \le \frac{2}{2n+1} \left( 1 + \frac{1}{3} \sum_{j \ge 1} \frac{1}{(2n+1)^{2j}} \right) = \frac{2}{2n+1} \left( 1 + \frac{1}{3} \cdot \frac{1}{(2n+1)^2 - 1} \right) = \frac{2}{2n+1} \left( 1 + \frac{1}{12n(n+1)} \right).

It follows that

1 < \left(n + \frac{1}{2}\right) \log\left(1 + \frac{1}{n}\right) \le 1 + \frac{1}{12n(n+1)}

(the lower bound holds since all terms of the series are positive), proving the inequality (4.1). Set

a_n = \frac{n!\, e^n}{n^{n+1/2}}.

Then

\frac{a_n}{a_{n+1}} = e^{-1} \left(1 + \frac{1}{n}\right)^{n+1/2},

so that, using (4.1),

1 < \frac{a_n}{a_{n+1}} \le e^{1/12n(n+1)} = \frac{e^{1/12n}}{e^{1/12(n+1)}}.

Let a = \lim_{n \to +\infty} a_n. It follows that

n! = a\, n^{n+1/2} e^{-n + \theta_n/12n}, \qquad \text{where } 0 < \theta_n < 1.

It remains to prove that a = \sqrt{2\pi}.


Set

I_n = \int_0^{\pi/2} \sin^n(x)\, dx

and note that for n > m, I_n < I_m.

Set y = \cos(x). By elementary change of variables,

\int_0^{\pi/2} \sin^n(x)\, dx = \int_0^1 (1-y^2)^{(n-1)/2}\, dy = \frac{1}{2} \int_0^1 \frac{x^{(n-1)/2}}{\sqrt{1-x}}\, dx = \frac{\Gamma(\frac{n+1}{2})\, \Gamma(\frac{1}{2})}{2\, \Gamma(\frac{n+2}{2})}.

Look up the beta distribution for details of the computation in, for example, L. Råde and B. Westergren. This is clearly decreasing in n. Setting n = 2m+1, it follows that for all m \ge 2,

\frac{\Gamma(m+1)\, \Gamma(\frac{1}{2})}{\Gamma(m + \frac{3}{2})} < \frac{\Gamma(m + \frac{1}{2})\, \Gamma(\frac{1}{2})}{\Gamma(m+1)} < \frac{\Gamma(m)\, \Gamma(\frac{1}{2})}{\Gamma(m + \frac{1}{2})}.

Recall that the Gamma function satisfies

\Gamma(1) = 1, \qquad \Gamma(\tfrac{1}{2}) = \sqrt{\pi}, \qquad \Gamma(a) = (a-1)\Gamma(a-1).

Recall the notation: for k odd, k!! = k(k-2)(k-4) \cdots 3 \cdot 1, and for k even, k!! = k(k-2) \cdots 4 \cdot 2, so that (2k)!! = 2^k k!. Note that

\Gamma\left(m + \frac{3}{2}\right) = \frac{(2m+1)!!}{2^{m+1}} \sqrt{\pi}.

It follows that, for all n \ge 2,

\frac{(2n)!!}{(2n+1)!!} < \frac{(2n-1)!!}{(2n)!!} \cdot \frac{\pi}{2} < \frac{(2n-2)!!}{(2n-1)!!}.

Rearranging gives

\frac{1}{2n+1} \left( \frac{(2n)!!}{(2n-1)!!} \right)^2 < \frac{\pi}{2} < \frac{1}{2n} \left( \frac{(2n)!!}{(2n-1)!!} \right)^2.

Note that

\frac{(2n)!!}{(2n-1)!!} = \frac{((2n)!!)^2}{(2n)!} = \frac{2^{2n} (n!)^2}{(2n)!}.

Using n! = a_n e^{-n} n^{n+1/2} yields

\frac{n}{2(2n+1)} \frac{a_n^4}{a_{2n}^2} < \frac{\pi}{2} < \frac{1}{4} \frac{a_n^4}{a_{2n}^2}.

Using that a_n \to a, it follows that

\frac{a^2}{4} = \frac{\pi}{2}, \qquad a = \sqrt{2\pi},

and Stirling's formula is proved.

In the proof, the following limit was established, which is due to John Wallis (1616-1703); namely,

\lim_{n \to +\infty} \frac{1}{\sqrt{2n+1}} \frac{(2n)!!}{(2n-1)!!} = \sqrt{\frac{\pi}{2}}.

This was first published in 'Arithmetica Infinitorum', Oxford, 1656.
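Theorem 4.13 can be tested numerically. The following sketch (not from the text) solves n! = \sqrt{2\pi}\, n^{n+1/2} e^{-n + \theta_n/12n} for \theta_n and confirms that \theta_n lies strictly between 0 and 1 for small n.

```python
# Extract theta_n from Stirling's formula and check 0 < theta_n < 1.
import math

def theta(n):
    # log(n!) via lgamma avoids overflow for larger n
    log_fact = math.lgamma(n + 1)
    return 12 * n * (log_fact - 0.5 * math.log(2 * math.pi)
                     - (n + 0.5) * math.log(n) + n)

thetas = [theta(n) for n in range(1, 21)]
print(all(0 < t < 1 for t in thetas))  # prints True
```

In fact the correction term \theta_n/12n is known to lie between 1/(12n+1) and 1/(12n), so \theta_n is close to 1 for every n.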

Example Consider a Markov chain with state space Z, the integers, with transition probabilities given by

P_{n,n+1} = p = 1 - P_{n,n-1}, \qquad n \in \mathbb{Z},

and 0 < p < 1. All states clearly communicate, so they are all either transient or recurrent. Note that

P_{0,0}(2n) = \binom{2n}{n} p^n (1-p)^n.

Using Stirling's formula, it follows that

\lim_{n \to +\infty} \frac{n!}{n^{n+1/2} e^{-n} \sqrt{2\pi}} = 1,

from which it follows that

\lim_{n \to +\infty} \frac{P_{0,0}(2n)}{(4p(1-p))^n / \sqrt{\pi n}} = 1.

It follows that \sum_n P_{0,0}(n) < +\infty if and only if \sum_{n=1}^{\infty} \frac{(4p(1-p))^n}{\sqrt{\pi n}} < +\infty. Note that

4p(1-p) \begin{cases} = 1 & p = \frac{1}{2} \\ < 1 & p \neq \frac{1}{2}. \end{cases}

The process is recurrent if p = \frac{1}{2} and transient if p \neq \frac{1}{2}.
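The transient case can be illustrated numerically (a sketch, not from the text). Using the generating function \sum_{n \ge 0} \binom{2n}{n} x^n = 1/\sqrt{1-4x}, the expected number of returns to 0 for p = 0.3 is 1/\sqrt{1-4pq} - 1 = 1.5, which by the geometric argument of Proposition 4.11 corresponds to return probability 1.5/2.5 = 0.6 = 1 - |p - q|.

```python
# Partial sum of P_00(2n) = C(2n,n)(pq)^n for p = 0.3, computed via the
# term ratio C(2n,n)/C(2n-2,n-1) = 2(2n-1)/n to avoid huge integers.
import math

p = 0.3
q = 1 - p

partial, term = 0.0, 1.0
for n in range(1, 2000):
    term *= 2 * (2 * n - 1) / n * (p * q)
    partial += term

expected_returns = 1 / math.sqrt(1 - 4 * p * q) - 1
return_prob = expected_returns / (1 + expected_returns)
print(round(partial, 6), round(return_prob, 6))  # prints 1.5 0.6
```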

4.4 Expected Time Spent in Transient States


Suppose that states T = {1, 2, \ldots, t} are transient. Set

P_T = \begin{pmatrix} P_{11} & P_{12} & \ldots & P_{1t} \\ \vdots & \vdots & \ddots & \vdots \\ P_{t1} & P_{t2} & \ldots & P_{tt} \end{pmatrix}.

If the set of transient states is only a strict subset of all states, then \sum_{j=1}^{t} P_{ij} < 1 for some i. Let S_{ij} denote the expected number of times that the Markov chain is in state j given that it started in state i. Then, by conditioning on the first transition, it is clear that

S_{ij} = K(i,j) + \sum_{k} P_{ik} S_{kj}.

Here K(i,j) = 1 if i = j and 0 otherwise. Note that the sum is only over transient states, since it is not possible for the chain to move from a recurrent state to a transient state. In matrix notation,

S = I + P_T S,

so that

I = (I - P_T) S

and, if I - P_T is invertible, then

S = (I - P_T)^{-1}.

For such a chain, (I - P_T)S = I has a solution and S_{ij} gives the expected number of times that the chain visits state j given that it started at state i.
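As a sketch (not from the text), the fundamental matrix S = (I - P_T)^{-1} can be computed directly. Here P_T is taken from the gambler's ruin chain with p = 1/2 and N = 4, whose transient states are {1, 2, 3}; starting from fortune 2, the expected number of visits to states 1, 2, 3 is 1, 2, 1 respectively.

```python
# Compute S = (I - P_T)^{-1} by Gaussian elimination (pure Python).

def solve(A, B):
    """Solve A X = B for square A by Gauss-Jordan elimination."""
    n = len(A)
    M = [row_a[:] + row_b[:] for row_a, row_b in zip(A, B)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

P_T = [[0.0, 0.5, 0.0],   # transient states 1, 2, 3 of gambler's ruin
       [0.5, 0.0, 0.5],   # with absorbing barriers at 0 and 4
       [0.0, 0.5, 0.0]]
n = len(P_T)
I = [[float(i == j) for j in range(n)] for i in range(n)]
ImP = [[I[i][j] - P_T[i][j] for j in range(n)] for i in range(n)]
S = solve(ImP, I)
print([round(x, 6) for x in S[1]])  # prints [1.0, 2.0, 1.0]
```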


4.5 Steady State Probabilities

Under certain assumptions, which are stated later, as n \to \infty, the n step transition matrix approaches a matrix in which all the rows are equal to the same probability function. That is,

p_{ij}(n) \to_{n \to +\infty} \pi_j

for all j. The steady state probabilities may be obtained as the solution to the equation

\pi = \pi P.

Example Suppose that the arrival of signals may be modelled by the following. If the nth unit of time contains no signal, then the probability of no signal in the next unit is 1 - \alpha and the probability of a signal arriving is \alpha. Similarly, if the nth unit contains a signal, then the probability of a signal in the next unit is 1 - \beta and the probability of no signal is \beta. The transition matrix is given by

P = \begin{pmatrix} 1 - \alpha & \alpha \\ \beta & 1 - \beta \end{pmatrix}.

The steady state probability satisfies

\pi (P - I) = 0;

so \pi satisfies

(\pi_0, \pi_1) \begin{pmatrix} -\alpha & \alpha \\ \beta & -\beta \end{pmatrix} = 0.

It follows that

-\alpha \pi_0 + \beta \pi_1 = 0.

Since \pi_0 + \pi_1 = 1, it follows that

\pi_0 = \frac{\beta}{\alpha + \beta}, \qquad \pi_1 = \frac{\alpha}{\alpha + \beta}.

Not all Markov chains settle into stationary behaviour. The binomial counting process grows steadily, so that for any fixed j, p_{ij}(n) \to 0 as n \to \infty.
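The two-state signal chain above settles very quickly. The following sketch (not from the text; \alpha = 0.2 and \beta = 0.5 are example values) finds the steady state by repeatedly applying \pi \leftarrow \pi P and compares it with \beta/(\alpha+\beta).

```python
# Power iteration for the steady state of the two-state signal chain.
alpha, beta = 0.2, 0.5
P = [[1 - alpha, alpha],
     [beta, 1 - beta]]

pi = [1.0, 0.0]          # any starting distribution will do
for _ in range(200):     # error shrinks like |1 - alpha - beta|^n
    pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
          pi[0] * P[0][1] + pi[1] * P[1][1]]

print(round(pi[0], 6), round(pi[1], 6))  # prints 0.714286 0.285714
```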

Definition 4.14 (Positive Recurrent, Null Recurrent). A state i of a Markov chain is said to be positive recurrent if, starting in state i, the expected time until the process returns to state i is finite. Recurrent states which are not positive recurrent are known as null recurrent. Positive recurrence is a class property (left as an exercise).


Definition 4.15 (Period of a State). State i is said to have period d if P_{ii}(n) = 0 for n not divisible by d and d is the largest integer with this property. A state with period 1 is said to be aperiodic.

Definition 4.16 (Ergodic). Positive recurrent aperiodic states are said to be ergodic. An ergodic Markov chain is one in which all the states are ergodic.

Theorem 4.17. For an irreducible ergodic Markov chain, \lim_{n \to +\infty} P_{ij}(n) exists and is independent of i. Furthermore, letting \pi_j = \lim_{n \to +\infty} P_{ij}(n), it follows that \pi is the unique nonnegative solution of

\pi = \pi P

satisfying

\sum_{j=0}^{\infty} \pi_j = 1.

The proof of this is not very hard. It is given later, after a couple of examples and preparatory lemmas.

Example Consider a large population of individuals, each of whom possesses a particular pair of genes. The genes are of type A or a. The proportions of individuals with gene pairs AA, aa, Aa are respectively p_0, q_0, r_0, where p_0 + q_0 + r_0 = 1. When two individuals mate, each contributes one gene, chosen at random, to the resulting offspring. Assume that the mating occurs at random. Calculate the proportions of individuals in the next generation.

Solution The probability that a parent passes on gene A is

p(A) = p(A|AA)p_0 + p(A|aa)q_0 + p(A|Aa)r_0 = p_0 + \frac{r_0}{2}.

Similarly, the probability of a gene a being passed on by a parent is

p(a) = q_0 + \frac{r_0}{2}.

Therefore, the probabilities of an individual of the next generation having the various gene combinations are

p = \left(p_0 + \frac{r_0}{2}\right)^2, \qquad q = \left(q_0 + \frac{r_0}{2}\right)^2, \qquad r = 2\left(p_0 + \frac{r_0}{2}\right)\left(q_0 + \frac{r_0}{2}\right).

Now,

p + \frac{r}{2} = \left(p_0 + \frac{r_0}{2}\right)^2 + \left(p_0 + \frac{r_0}{2}\right)\left(q_0 + \frac{r_0}{2}\right) = \left(p_0 + \frac{r_0}{2}\right)(p_0 + r_0 + q_0) = p_0 + \frac{r_0}{2} = p(A).

It follows that after the first generation the percentages of the population having AA, aa and Aa remain fixed at the values

p = \left(p_0 + \frac{r_0}{2}\right)^2 = \left(p + \frac{r}{2}\right)^2, \qquad q = \left(q_0 + \frac{r_0}{2}\right)^2 = \left(q + \frac{r}{2}\right)^2, \qquad r = 2\left(p_0 + \frac{r_0}{2}\right)\left(q_0 + \frac{r_0}{2}\right) = 2\left(p + \frac{r}{2}\right)\left(q + \frac{r}{2}\right).

This is known as the Hardy-Weinberg law. Suppose now that the gene population has stabilized. Consider the genetic history of a single individual. Let X_n denote the genetic state of the nth descendant. Then the transition probability matrix (states AA, aa, Aa in that order) is

P = \begin{pmatrix} p + r/2 & 0 & q + r/2 \\ 0 & q + r/2 & p + r/2 \\ p/2 + r/4 & q/2 + r/4 & p/2 + q/2 + r/2 \end{pmatrix}.

It is easily checked that (p, q, r) is the stationary state.
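The claimed stationarity of (p, q, r) can be checked as follows (a sketch, not from the text; p_0 = 0.3, q_0 = 0.5, r_0 = 0.2 are example starting proportions).

```python
# Verify pi P = pi for the Hardy-Weinberg chain with pi = (p, q, r).
p0, q0, r0 = 0.3, 0.5, 0.2
pA = p0 + r0 / 2
pa = q0 + r0 / 2
p, q, r = pA ** 2, pa ** 2, 2 * pA * pa

# Transition matrix over states (AA, aa, Aa)
P = [[p + r / 2, 0.0, q + r / 2],
     [0.0, q + r / 2, p + r / 2],
     [p / 2 + r / 4, q / 2 + r / 4, p / 2 + q / 2 + r / 2]]

pi = (p, q, r)
piP = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]
print(all(abs(a - b) < 1e-9 for a, b in zip(piP, pi)))  # prints True
```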

Example Suppose a production process with states 1, \ldots, n changes according to a Markov chain with one step transition probability matrix P. Suppose that some states A are 'acceptable' and the others, A^c, are 'unacceptable'. The production is said to be 'up' if it is in an 'acceptable' state and 'down' if it is in an 'unacceptable' state. Assume that production has reached an equilibrium. Compute

1. the probability the production process goes from 'up' to 'down' (the rate of breakdowns),

2. the average length of time the process remains down when it goes down,

3. the average length of time the process remains up when it goes up.

Solution Let \pi_k denote the stationary probability for state k. Then

p(\text{goes from 'up' to 'down'}) = \sum_{j \in A} p(X_{n+1} \in A^c \mid X_n = j)\, p(X_n = j).

Now, because it is in equilibrium, p(X_n = j) = \pi_j. Furthermore,

p(X_{n+1} \in A^c \mid X_n = j) = \sum_{k \in A^c} P_{jk},

so that the probability of going from 'up' to 'down' is

\sum_{j \in A} \sum_{k \in A^c} P_{jk} \pi_j.

Let U_i denote the number of time units the process remains 'up' after failure i-1 until failure number i and let D_i denote the number of time units the process remains 'down' after failure i before recovery i+1. Let \bar{U} = E[U_i] and \bar{D} = E[D_i]. Then

\sum_{j \in A} \sum_{k \in A^c} P_{jk} \pi_j = \lim_{n \to +\infty} \frac{n}{\sum_{j=1}^{n} U_j + \sum_{j=1}^{n} D_j} = \frac{1}{\bar{U} + \bar{D}}

by the law of large numbers. Also, by the law of large numbers, the proportion of time that the system is 'up' is given by

\text{proportion of 'up' time} = \frac{\bar{U}}{\bar{U} + \bar{D}} = \sum_{j \in A} \pi_j.

It follows that

\bar{U} = \frac{\sum_{j \in A} \pi_j}{\sum_{j \in A^c} \sum_{i \in A} P_{ij} \pi_i}

and

\bar{D} = \frac{\sum_{j \in A^c} \pi_j}{\sum_{j \in A^c} \sum_{i \in A} P_{ij} \pi_i}.

Lemma 4.18 (Stationary Probabilities). If the initial state is chosen at random with probability p(X_0 = j) = \pi_j, then for all n \ge 0, p(X_n = j) = \pi_j.

Proof The proof is by induction. Assume that p(X_{n-1} = k) = \pi_k for all k. Then

p(X_n = j) = \sum_k p(X_n = j \mid X_{n-1} = k)\, p(X_{n-1} = k) = \sum_k P_{kj} \pi_k = \pi_j.

Lemma 4.19. For a time homogeneous Markov chain, let m_j denote the average number of transitions until a Markov chain starting in state j returns to state j. For any solution \pi to \pi(I - P) = 0 with \pi_j > 0, it holds that

\pi_j = \frac{1}{m_j}.


Sketch of Proof The result follows by the strong law of large numbers; in m_j time units, the chain spends on average 1 unit of time in state j.

Theorem 4.17 may now be proved.

Proof of theorem 4.17 Firstly, if the Markov chain is irreducible, then all states communicate. Lemma 4.19 together with lemma 4.18 implies that the solution to \pi(P - I) = 0 satisfying \sum_j \pi_j = 1 is unique. The only problem is to show that \lim_{n \to +\infty} P_{ij}(n) = \pi_j. Let Y be an independent copy of the Markov chain, with a different starting point. Then (X, Y) is again a Markov chain (show this). It remains to prove that (X, Y) is ergodic. The following shows that it is aperiodic. Set p_{ij}(n) = p(X(n) = j \mid X(0) = i). Then

p(\{(X(n), Y(n)) = (i, j)\} \mid \{(X(0), Y(0)) = (i, j)\}) = p_{ii}(n)\, p_{jj}(n).

Because X is ergodic, there exist numbers m_1 and m_2 such that p_{ji}(m_1) > 0 and p_{ij}(m_2) > 0. Set a = m_1 + m_2. For n \ge a,

p_{jj}(n) \ge p_{ji}(m_1)\, p_{ii}(n - a)\, p_{ij}(m_2),

so that

p_{ii}(n)\, p_{jj}(n) \ge p_{ji}(m_1) p_{ij}(m_2)\, p_{ii}(n)\, p_{ii}(n - a).

To prove aperiodicity, it is necessary and sufficient to prove that the highest common factor of the set of n such that p_{ii}(n) p_{jj}(n) > 0 is 1. Note that p_{ii}(n + m) \ge p_{ii}(n) p_{ii}(m), so the set \{n : p_{ii}(n) > 0\} is closed under addition; since X is aperiodic, this set has highest common factor 1, and a set of positive integers closed under addition with highest common factor 1 contains all sufficiently large integers. Hence p_{ii}(n) p_{ii}(n-a) > 0 for all n sufficiently large, so that the set of n with p_{ii}(n) p_{jj}(n) > 0 has highest common factor 1. It follows that (X, Y) is aperiodic and hence it follows directly that (X, Y) is ergodic. It follows that the average time of (X, Y) between visits to the state (i, j) is \frac{1}{\pi_i \pi_j} < +\infty. Suppose (X_0, Y_0) = (i, j). Let \tau = \inf\{n : X_n = Y_n\}. Then

P_{ik}(n) - P_{jk}(n) = p(X_n = k) - p(Y_n = k)
= (p(X_n = k \mid \tau < n) - p(Y_n = k \mid \tau < n))\, p(\tau < n) + (p(X_n = k \mid \tau > n) - p(Y_n = k \mid \tau > n))\, p(\tau > n)
= (p(X_n = k \mid \tau > n) - p(Y_n = k \mid \tau > n))\, p(\tau > n),

since, conditioned on \{\tau < n\}, X_n and Y_n have the same distribution, so that

|P_{ik}(n) - P_{jk}(n)| \le 2\, p(\tau > n) \to_{n \to +\infty} 0.

This proves that the limit is independent of the starting point. To finish the proof, for any state k, using \pi = \pi P(n),

\pi_k - P_{jk}(n) = \sum_i \pi_i (P_{ik}(n) - P_{jk}(n)) \to_{n \to +\infty} 0.

Proposition 4.20 (Weak Ergodicity). Let \{X_n, n \ge 1\} be an irreducible Markov chain with stationary probabilities \pi_j, j \in \{0, 1, \ldots, s\}. Let r be a bounded function on the state space. Then, for all \epsilon > 0,

\lim_{N \to +\infty} p\left( \left| \frac{1}{N} \sum_{n=1}^{N} r(X_n) - \sum_{j=0}^{s} r(j) \pi_j \right| > \epsilon \right) = 0.

Sketch of Proof Let a_j(N) = \sum_{n=0}^{N} \delta_j(X_n), where

\delta_j(x) = \begin{cases} 1 & x = j \\ 0 & x \neq j. \end{cases}

a_j(N) is the number of visits to j in the time interval [0, N]. Then

\frac{1}{N} \sum_{n=1}^{N} r(X_n) = \sum_{j=0}^{s} \frac{a_j(N)}{N} r(j).

By the weak law of large numbers, it follows that

p\left( \left| \frac{a_j(N)}{N} - \frac{1}{m_j} \right| > \epsilon \right) \to_{N \to +\infty} 0 \qquad \forall \epsilon > 0

and hence, by the previous lemma, that

p\left( \left| \frac{a_j(N)}{N} - \pi_j \right| > \epsilon \right) \to_{N \to +\infty} 0.

4.6 The Gambler's Ruin Problem


Consider a gambler who at each play of the game has probability p of winning one unit and probability 1 - p = q of losing one unit. Assume that successive plays are independent. Let X_n denote the player's fortune at time n. Then X_n is a Markov chain with transition probabilities

P_{00} = P_{NN} = 1, \qquad P_{i,i+1} = p = 1 - P_{i,i-1}, \quad i = 1, 2, \ldots, N-1.

The Markov chain has three classes, namely \{0\}, \{1, \ldots, N-1\} and \{N\}. The first and third are recurrent and the second is transient.

Let Q_i = \lim_{M \to +\infty} p(\bigcup_{n=1}^{M} \{X_n = N\} \mid X_0 = i); namely, the probability that, starting with fortune i, the gambler will eventually reach N. Then, by conditioning on the outcome of X_1,

p\left(\bigcup_{n=1}^{M} \{X_n = N\} \,\middle|\, X_0 = i\right) = p\left(\bigcup_{n=1}^{M} \{X_n = N\} \,\middle|\, X_1 = i+1\right) p(X_1 = i+1 \mid X_0 = i) + p\left(\bigcup_{n=1}^{M} \{X_n = N\} \,\middle|\, X_1 = i-1\right) p(X_1 = i-1 \mid X_0 = i),

so that, letting M \to +\infty,

Q_i = p Q_{i+1} + q Q_{i-1}.

It follows that

Q_{i+1} - Q_i = \frac{q}{p} (Q_i - Q_{i-1}), \qquad i = 1, 2, \ldots, N-1,

since Q_0 = 0. It follows directly that

Q_i - Q_{i-1} = \left( \frac{q}{p} \right)^{i-1} Q_1, \qquad i = 1, \ldots, N,

so that (adding)

Q_i = Q_1 + \sum_{j=2}^{i} (Q_j - Q_{j-1}) = Q_1 \sum_{j=1}^{i} \left( \frac{q}{p} \right)^{j-1},

yielding

Q_i = \begin{cases} \frac{1 - (q/p)^i}{1 - (q/p)} Q_1 & p \neq q \\ i Q_1 & p = q = \frac{1}{2}. \end{cases}

Now note that Q_N = 1 (if he starts with N and does not gamble, then he finishes with N), giving

Q_1 = \begin{cases} \frac{1 - (q/p)}{1 - (q/p)^N} & p \neq q \\ \frac{1}{N} & p = q = \frac{1}{2}, \end{cases}

so that

Q_i = \begin{cases} \frac{1 - (q/p)^i}{1 - (q/p)^N} & p \neq q \\ \frac{i}{N} & p = q = \frac{1}{2} \end{cases}
\qquad \to_{N \to +\infty} \qquad \begin{cases} 1 - \left( \frac{q}{p} \right)^i & p > \frac{1}{2} \\ 0 & p \le \frac{1}{2}. \end{cases}

Loosely, this may be interpreted as follows: if p \le \frac{1}{2}, then a gambler with finite resources will certainly be ruined eventually playing against an infinitely rich adversary.
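The closed form for Q_i can be cross-checked against a direct solution of the recurrence (a sketch, not from the text; p = 0.6, N = 10 are example values, and the helper names are hypothetical).

```python
# Gambler's ruin: closed form vs repeated sweeps of the recurrence
# Q_i = p Q_{i+1} + q Q_{i-1} with boundary Q_0 = 0, Q_N = 1.

def q_formula(i, p, N):
    q = 1 - p
    if p == 0.5:
        return i / N
    return (1 - (q / p) ** i) / (1 - (q / p) ** N)

def q_sweep(p, N, sweeps=5000):
    q = 1 - p
    Q = [0.0] * (N + 1)
    Q[N] = 1.0
    for _ in range(sweeps):          # Gauss-Seidel style sweeps
        for i in range(1, N):
            Q[i] = p * Q[i + 1] + q * Q[i - 1]
    return Q

p, N = 0.6, 10
Q = q_sweep(p, N)
exact = [q_formula(i, p, N) for i in range(N + 1)]
print(max(abs(a - b) for a, b in zip(Q, exact)) < 1e-9)  # prints True
```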


4.7 Time Reversibility

Consider a Markov chain where the initial probability distribution is given by \pi, the solution of \pi(P - I) = 0. Set Q_{ij} = p(X_m = j \mid X_{m+1} = i). Then

Q_{ij} = p(X_m = j \mid X_{m+1} = i) = \frac{p(X_{m+1} = i, X_m = j)}{p(X_{m+1} = i)} = \frac{p(X_m = j)\, p(X_{m+1} = i \mid X_m = j)}{p(X_{m+1} = i)} = \frac{\pi_j P_{ji}}{\pi_i}.

Lemma 4.21. The time reversed process is again a Markov chain.

Proof It is necessary and sufficient to show that

p\left(X_m = j \,\middle|\, \bigcap_{k=1}^{N} \{X_{m+k} = i_k\}\right) = p(X_m = j \mid X_{m+1} = i_1)

for any sequence i_1, \ldots, i_N and any N \ge 1. Note that

p\left(X_m = j \,\middle|\, \bigcap_{k=1}^{N} \{X_{m+k} = i_k\}\right) = \frac{p\left(\bigcap_{k=1}^{N} \{X_{m+k} = i_k\}, X_m = j\right)}{p\left(\bigcap_{k=1}^{N} \{X_{m+k} = i_k\}\right)}
= \frac{\left( \prod_{k=1}^{N-1} p(X_{m+k+1} = i_{k+1} \mid X_{m+k} = i_k) \right) p(X_{m+1} = i_1 \mid X_m = j)\, p(X_m = j)}{\left( \prod_{k=1}^{N-1} p(X_{m+k+1} = i_{k+1} \mid X_{m+k} = i_k) \right) p(X_{m+1} = i_1)}
= \frac{p(X_{m+1} = i_1, X_m = j)}{p(X_{m+1} = i_1)} = p(X_m = j \mid X_{m+1} = i_1).

Definition 4.22 (Reversibility). Let X be a Markov chain with one step transition matrix P and stationary distribution \pi. X is said to be reversible if the one step transition matrix for the reversed process Q, given by

Q_{ij} = \frac{\pi_j P_{ji}}{\pi_i},

satisfies

Q = P.

Another way of stating this is that

\pi_j P_{ji} = \pi_i P_{ij} \qquad \text{for all} \ i, j.


The following theorem helps establish whether a Markov chain is time reversible or not.

Theorem 4.23. An irreducible, aperiodic, positive recurrent Markov chain, for which P_{ij} = 0 whenever P_{ji} = 0, is time reversible if and only if, starting in state i, any path which returns to i has the same probability as the reversed path. That is, if

P_{i,i_1} \cdots P_{i_k,i} = P_{i,i_k} \cdots P_{i_1,i}

for any path (i, i_1, \ldots, i_k, i).

Proof Reversibility implies this relation:

P_{ij} = \frac{\pi_j}{\pi_i} P_{ji},

and if the chain is irreducible and positive recurrent, then \pi_j > 0 for all j. It follows that

P_{i,i_1} P_{i_1,i_2} \cdots P_{i_k,i} = \frac{\pi_{i_1}}{\pi_i} P_{i_1,i} \cdot \frac{\pi_{i_2}}{\pi_{i_1}} P_{i_2,i_1} \cdots \frac{\pi_i}{\pi_{i_k}} P_{i,i_k} = P_{i,i_k} P_{i_k,i_{k-1}} \cdots P_{i_1,i}.

Now suppose that

P_{i,i_1} \cdots P_{i_k,j} P_{ji} = P_{ij} P_{j,i_k} \cdots P_{i_1,i}

holds for any path (i, i_1, \ldots, i_k, j, i). Then, summing over i_1, \ldots, i_k gives

P_{ij}(k+1) P_{ji} = P_{ij} P_{ji}(k+1).

Letting k \to +\infty gives

\pi_j P_{ji} = P_{ij} \pi_i.

Example Consider a particle moving on the graph in figure 4.4. From any site, it will move to one of its nearest neighbours with equal probability. Find the stationary distribution.

[Figure 4.4: Graph for Time Reversibility Example, with nodes A, B, C, D, E, F, G]

Solution It is clear that any path which returns to its starting point will have the same probability as the reversed path. This is because in both the path and the reversed path, the jumps will be from the same nodes, the same number of times (albeit in a different order). Therefore the process is time reversible. The relation

\pi_A P_{AC} = \pi_C P_{CA}

gives

\pi_A = \frac{\pi_C}{3}.

Similarly,

\pi_D P_{DC} = \pi_C P_{CD}

gives

\frac{\pi_D}{2} = \frac{\pi_C}{3}.

Now note that \pi_A = \pi_B = \pi_F = \pi_G, \pi_C = \pi_E and \sum_i \pi_i = 1. This yields

\pi_A = \frac{1}{12}, \qquad \pi_C = \frac{3}{12}, \qquad \pi_D = \frac{2}{12}.

Note the association between the probabilities and the number of edges.

The initial justification that the chain was time reversible was unnecessary; one could start by searching for a solution to the set of equations

\pi_j P_{ji} = P_{ij} \pi_i.

In general, this is an over determined system of equations, which does not necessarily have a solution. But if a non trivial solution exists to this system, then there is a unique solution satisfying \sum_j \pi_j = 1, which is necessarily the stationary distribution, and the chain is time reversible.
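The "stationary probability proportional to degree" rule can be computed directly. This sketch assumes the graph of figure 4.4 has edges A-C, B-C, C-D, D-E, E-F, E-G, which is consistent with the degrees used in the solution above.

```python
# Stationary distribution of a random walk on a graph: pi_v = deg(v)/sum.
from fractions import Fraction

edges = [("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("E", "G")]
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

total = sum(degree.values())            # twice the number of edges
pi = {v: Fraction(d, total) for v, d in degree.items()}
print(pi["A"], pi["C"], pi["D"])  # prints 1/12 1/4 1/6
```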

4.8 Exercises

1. Three white and three black balls are distributed in two urns in such a way that each contains three balls. The system is in state j, for j = 0, 1, 2, 3, if the first urn contains j white balls. At each step, two balls are drawn at random, one from each urn, and the balls are put into the opposite urns. Let X_n denote the state of the system after the nth step. Show that \{X_n : n = 0, 1, 2, \ldots\} is a Markov chain and calculate its transition probability matrix.

2. Compute the limiting probabilities \pi_j for the model given in the previous exercise.

3. Show that if state i is recurrent and state i does not communicate with state j, then P_{ij} = 0. This implies that once a process enters a recurrent class of states it can never leave that class.

4. Consider the Markov chain \{X_n : n \ge 0\} with states 0, 1 and 2 and with one step transition probability matrix

P = \begin{pmatrix} 0 & 1/2 & 1/2 \\ 1/2 & 1/2 & 0 \\ 1 & 0 & 0 \end{pmatrix}.

Let f be the function such that f(0) = 0, f(1) = f(2) = 1. Let Y_n = f(X_n). Is Y_n a Markov chain?

5. Specify the classes in the following Markov chain and decide which states are transient and which are recurrent.

P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1/2 & 1/2 & 0 \\ 1 & 0 & 0 & 0 & 0 \end{pmatrix}.

6. A transition probability matrix P is said to be doubly stochastic if the sum over each column, as well as over each row, equals 1. That is,

\sum_i P_{ij} = 1 \qquad \text{and} \qquad \sum_j P_{ij} = 1.

Consider a Markov chain with states 0, 1, \ldots, M and suppose that it has a doubly stochastic one step transition matrix. If it is also irreducible, show that

\pi_j = \frac{1}{M+1}.


7. A particle moves on a circle through points which have been marked 0, 1, 2, 3, 4, in clockwise order. At each step, the particle moves clockwise with probability p and anticlockwise with probability 1 - p. Let X(n) denote the location after step n.

(a) Find the transition probability matrix.
(b) Compute the limiting probabilities.
(c) Compute the expected number of steps the particle takes to return to the starting position.
(d) Compute the probability that all other positions are visited before the particle returns to its starting position.

8. Consider a single server queueing system, where the inter arrival times have distribution F and the service times have distribution G. Customers arrive, stand in line until the server becomes available, are served in order of arrival, then leave the system on completion of service. Let \{X_n\} denote the number of customers in the system immediately before the nth arrival and let Y_n denote the number of customers in the system when the nth customer departs.

(a) If F is exponential with rate \lambda, is \{X_n\} a Markov chain? Is \{Y_n\} a Markov chain?
(b) If G is an exponential distribution with rate \mu, is \{X_n\} a Markov chain? Is \{Y_n\} a Markov chain?

If any of the processes described above is a Markov chain, compute the transition probabilities. WARNING: These chains will NOT be time homogeneous. p(X_n = x_n \mid X_{n-1} = x_{n-1}) will depend on the random time between the (n-1)th and nth arrivals.

9. In the example given in lectures concerning busy periods of an M/G/1 system, use a generating function technique to compute the expected number of customers served during a busy period.

10. Let \{X_n, n \ge 0\} be an ergodic Markov chain with limiting probabilities \pi. Let Y_n = (X_n, X_{n-1}). Is \{Y_n\}_{n \ge 1} a Markov chain? If so, compute the transition probabilities and compute

\lim_{n \to +\infty} p(Y_n = (i, j)).

11. Consider a population where each generation has m genes. Each gene is one of two possible genetic types. If a population has exactly i of its m genes of type 1, then the next generation will have j genes of type 1 with probability

p_{ij} = \binom{m}{j} \left( \frac{i}{m} \right)^j \left( \frac{m-i}{m} \right)^{m-j}, \qquad j = 0, 1, \ldots, m.

Let X_n denote the number of type 1 genes in the nth generation. Suppose that X_0 = i.

(a) Compute E[X_n].

(b) Compute the probability that eventually all genes will be of type 1.

12. (a) Consider a random walk on a finite undirected graph, where at each step the particle jumps to one of its neighbours, each with equal probability. Show that the random walk is a time reversible Markov chain, and that the stationary state probabilities are directly proportional to the number of edges from the site.

(b) On a chess board, with only one knight, compute the expected number of knight's moves required to return to the starting position. Is the answer different for different starting positions?

13. A Markov chain is said to be a tree process if

(a) P_{ji} > 0 whenever P_{ij} > 0, and
(b) for every pair of states (i, j), i \neq j, there is a unique way for the process to go from i to j without reentering a state, and this path is the reverse of the path to go from j to i.

Show that an ergodic tree process is time reversible.

14. Let X_j be independent identically distributed random variables taking values 1 or -1, each with probability 1/2. Let S_n = X_1 + \ldots + X_n. Let M_n = \max_{1 \le j \le n} S_j.

(a) Is S_n a Markov chain?
(b) Is M_n a Markov chain?

15. In the gambler's ruin problem (described in the notes), suppose that we know that the gambler's fortune will eventually reach N. Given this information, if his current fortune is i, show that the probability that he wins the next gamble is

p(1 (q/p)i+1 ) 1 , p= i 1 (q/p) 2


and

i+1 1 , p= . 2i 2
Recall from the notes that
1(q/p)i 1(q/p)N i N

lim p(Xm = N |Xn = i) =


m

p= p=

1 2 1 2.

16. A group of 3 processors is arranged in an ordered list. When a job arrives, the first processor in line attempts it. If it is unsuccessful, the job moves to the next in line. If that is unsuccessful, it moves to the third. When a job has been successfully processed, or when all processors have tried it, it leaves the system.

At this point, the processors are reordered, using the rule that the successful processor is moved one position up the line, by interchanging its position with the one directly in front. If all processors were unsuccessful, the ordering remains the same. Then the next job comes.

Processors 1, 2 and 3 are successful with probabilities \frac{1}{2}, \frac{1}{3} and \frac{1}{6} respectively on each job.

(a) Define an appropriate Markov chain to analyse the model.
(b) Show that the Markov chain is time reversible.
(c) Find the stationary distribution.

Answers

1.

P = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 1/9 & 4/9 & 4/9 & 0 \\ 0 & 4/9 & 4/9 & 1/9 \\ 0 & 0 & 1 & 0 \end{pmatrix},

where the entries are (P_{ij})_{i,j=0}^{3}.

2. \pi(I - P) = 0. This gives

\pi_0 = \pi_3 = 1/20, \qquad \pi_1 = \pi_2 = 9/20.

3. Suppose P_{ij} were strictly positive. Since i and j do not communicate, (P^n)_{ji} = 0 for all n. Then the process, starting at i, has positive probability of never returning to i, contradicting recurrence. It follows that P_{ij} = 0.

4. No; counterexample:

p(Y_n = 0 \mid Y_{n-1} = 1, Y_{n-2} = 0) = \frac{p(\{X_n = 0\} \cap \{X_{n-1} = 1 \text{ or } 2\} \mid X_{n-2} = 0)}{p(X_{n-1} = 1 \text{ or } 2 \mid X_{n-2} = 0)} = p(\{X_n = 0\} \cap \{X_{n-1} = 1 \text{ or } 2\} \mid X_{n-2} = 0) = p(X_n = 0 \mid X_{n-2} = 0),

and the probability clearly depends on the history two steps back.

5. Classes: \{0, 1\}, \{2\}, \{3\}, \{4\}. Transient states: 3 and 4. Recurrent: 0, 1 and 2.

6. Plug in \pi_0 = \ldots = \pi_M = \frac{1}{M+1}. It is easy to check that this is a solution to

\pi_j = \sum_i P_{ij} \pi_i = \frac{1}{M+1} \sum_i P_{ij} = \frac{1}{M+1}.

7. (a)

P = \begin{pmatrix} 0 & p & 0 & 0 & 1-p \\ 1-p & 0 & p & 0 & 0 \\ 0 & 1-p & 0 & p & 0 \\ 0 & 0 & 1-p & 0 & p \\ p & 0 & 0 & 1-p & 0 \end{pmatrix}

(b) \pi_j = \frac{1}{5}, j = 0, 1, 2, 3, 4.

(c) m_j = \frac{1}{\pi_j} = 5.

(d) Let E denote the event that all sites are visited. Set

a_1 = p(E \mid X(1) = 1), \quad a_2 = p(E \mid X(1) = 1, X(2) = 2), \quad a_3 = p(E \mid X(1) = 1, X(2) = 2, X(3) = 3),
b_1 = p(E \mid X(1) = 4), \quad b_2 = p(E \mid X(1) = 4, X(2) = 3), \quad b_3 = p(E \mid X(1) = 4, X(2) = 3, X(3) = 2);

then (using q = 1 - p)

p(E) = a_1 p + b_1 q = a_2 p^2 + b_2 q^2 = a_3 p^3 + a_1 p^2 q + b_1 q^2 p + b_3 q^3
= p^4 + 2 a_2 p^3 q + 2 b_2 q^3 p + q^4
= p^4 + 2 a_1 p^3 q^2 + 2 a_3 p^4 q + 2 b_3 q^4 p + 2 b_1 q^3 p^2 + q^4
= p^4 + p^5 q + 4 a_2 p^4 q^2 + 4 b_2 q^4 p^2 + q^4 + q^5 p
= p^4 + p^5 q + 4 a_1 p^4 q^3 + 4 a_3 p^5 q^2 + 4 b_3 q^5 p^2 + 4 b_1 q^4 p^3 + q^5 p + q^4.

Using equations 2, 4 and 6 yields

(1 - 2pq)(p^2 a_2 + q^2 b_2) = p^4 + q^4, \qquad 2p^2(1 - 2p^2 q)a_2 + 2q^2(1 - 2q^2 p)b_2 = p^4 + q^4,

from which a_2 and b_2 may be computed (for p \neq q), hence p(E). The case p = \frac{1}{2} may be computed using the first four equations and a_j = b_j for j = 1, 2, 3.
8. X_n = X_{n-1} + 1 - (number of departures during the (n-1)th interarrival time). The process X_n satisfies the Markov property only if the service time has a memoryless property, so that the number of departures depends only on the number in the system and not on the history. In other words, X is Markov if and only if G is exponential.

Y_n = Y_{n-1} - 1 + (number of arrivals between departure n-1 and n). This satisfies the Markov property if and only if the inter arrival distribution satisfies the memoryless property. In other words, Y is Markov if and only if F is exponential.

Transition probability for p(X_n \mid X_{n-1}): let S_n denote the nth interarrival time and \mu the service rate; then

p_{ij;n} = \begin{cases} \frac{(\mu S_n)^{i+1-j}}{(i+1-j)!} e^{-\mu S_n} & j = 1, \ldots, i+1 \\ \sum_{k=i+1}^{\infty} \frac{(\mu S_n)^{k}}{k!} e^{-\mu S_n} & j = 0 \\ 0 & j \ge i+2. \end{cases}

Transition probability for Y_n: let \tau_n denote the time at which the nth service is completed and \lambda the arrival rate. Then

p_{ij;n} = \begin{cases} 0 & j < i-1 \\ \frac{(\lambda(\tau_n - \tau_{n-1}))^{j-i+1}}{(j-i+1)!} e^{-\lambda(\tau_n - \tau_{n-1})} & j \ge i-1, \ i \ge 1 \\ \frac{(\lambda(\tau_n - S))^{j}}{j!} e^{-\lambda(\tau_n - S)} & i = 0, \ j \ge 0, \end{cases}
where S denotes the time of the first arrival after time \tau_{n-1}.

9. Let S denote the service time. Let N_S denote the number of arrivals during the service time. Let N denote the total number served during a busy period. Then

N = N_S + \sum_{j=1}^{N_S} N_j,

where N_j denotes the number of customers that the jth customer who arrived during time (0, S) is responsible for; namely, those who arrive while he is being served, plus those who arrive while one of these subsequent customers is being served, and so on. The N_j are i.i.d., with the same distribution as N. Let \phi(s) = E[s^N]. Recall that the arrival process is Poisson with parameter \lambda, while the service time has distribution G (that is to say, the cumulative distribution function is G(t) = p(S \le t); the density function is given by \dot{G}(t), where the dot denotes the derivative). Then

\phi(s) = E\left[s^{N_S + \sum_{j=1}^{N_S} N_j}\right] = p(N_S = 0) + \sum_{k=1}^{\infty} s^k \phi^k(s)\, p(N_S = k)
= \int_0^{\infty} p(N_S = 0 \mid S = t)\, \dot{G}(t)\, dt + \sum_{k=1}^{\infty} s^k \phi^k(s) \int_0^{\infty} p(N_S = k \mid S = t)\, \dot{G}(t)\, dt
= \int_0^{\infty} \sum_{k=0}^{\infty} \frac{(\lambda s \phi(s) t)^k}{k!} e^{-\lambda t}\, \dot{G}(t)\, dt = \int_0^{\infty} e^{\lambda(s\phi(s) - 1)t}\, \dot{G}(t)\, dt.

Recall that E[N] = \phi'(1). Differentiating and setting s = 1 (note that \phi(1) = 1) gives

\phi'(1) = \int_0^{\infty} \lambda t (1 + \phi'(1))\, \dot{G}(t)\, dt = \lambda (1 + \phi'(1)) E[S].

It follows that

E[N] = \phi'(1) = \frac{\lambda E[S]}{1 - \lambda E[S]}.

10. Yes.

p_{(k,l),(i,j)} = p(Y_n = (i, j) \mid Y_{n-1} = (k, l)) = p(X_n = i, X_{n-1} = j \mid X_{n-1} = k, X_{n-2} = l).

Clearly p_{(k,l),(i,j)} = 0 for k \neq j. For k = j,

p_{(j,l),(i,j)} = p(X_n = i \mid X_{n-1} = j, X_{n-2} = l) = p_{ji}.

Since p(Y_n = (i, j)) = p_{ji}\, p(X_{n-1} = j),

\lim_{n \to +\infty} p(Y_n = (i, j)) = p_{ji} \pi_j.

11. (a) We use

E[X_n] = E[E[\ldots E[X_n \mid X_{n-1}] \ldots \mid X_0]].

Note that X_n \mid X_{n-1} \sim \text{Bi}(m, \frac{X_{n-1}}{m}), so that E[X_n \mid X_{n-1}] = X_{n-1}. It follows that

E[X_n \mid X_0 = i] = i.

(b) p_{00} = 1 and p_{0j} = 0 for all 1 \le j \le m; p_{mm} = 1 and p_{mj} = 0 for all 0 \le j \le m-1; p_{i0} = (1 - \frac{i}{m})^m > 0, so 0 is accessible from all states, m is accessible from all states, and the states \{1, \ldots, m-1\} communicate with each other. Eventually either all genes will be of type 1 or all genes will be of type 2. Let \bar{p}_{i0} = \lim_{n \to +\infty} p_{i0}(n) and \bar{p}_{im} = \lim_{n \to +\infty} p_{im}(n). Then, since p_{ij}(n) \to 0 for all j \in \{1, \ldots, m-1\},

i = E[X_n \mid X_0 = i] = \sum_{j=0}^{m} j\, p_{ij}(n) \to_{n \to +\infty} m \bar{p}_{im},

so \bar{p}_{im} = \frac{i}{m} (the probability that eventually all genes are of type 1) and \bar{p}_{i0} = \frac{m-i}{m} (the probability that eventually all genes are of type 2).

12. (a) It is clearly a Markov chain (future independent of past conditioned on present). To show that it is time reversible, let n_j denote the number of edges from site j. Then

P_{i_0,i_1} P_{i_1,i_2} \cdots P_{i_n,i_0} = \frac{1}{n_{i_0} n_{i_1} \cdots n_{i_n}} = P_{i_0,i_n} P_{i_n,i_{n-1}} \cdots P_{i_1,i_0},

so it is time reversible. Now, using

\pi_i P_{ij} = \pi_j P_{ji}

gives

\frac{\pi_i}{n_i} = \frac{\pi_j}{n_j},

so that

\pi_i = \frac{n_i}{\sum_j n_j}.

(b) Following the example of a random walk on a graph, the expected time to first return is \frac{1}{\pi_i}, where \pi_i is the stationary probability. From the example of the graph, one simply has to count up the possible ways a knight's move can be made from each square. Take a 4 by 4 corner of the board; the degrees sum to

2 + (3 + 3) + (4 + 4 + 4 + 4) + 4 + (6 + 6) + (6 + 6) + (8 + 8 + 8 + 8) = 84,

so the total for the whole board is 4 \times 84 = 336. For a corner square, it follows that \pi = \frac{2}{336}, so the expected return time is 168. The answer differs between starting positions: it is \frac{336}{n_i} for a square with n_i possible moves.
13. Consider a path from i which returns to i. If the forward path passes from a state k to a state j, then it must also pass from j to k. Hence, for each appearance of p_kj in the expression for the probability of a path returning to its starting point, p_jk also appears, so the probability of the path is the same as that of the reversed path.

14. (a) Yes. (b) No. Reason:

    p(M_n = j + 1 | M_{n−2} = j − 1, M_{n−1} = j) = 1/2,

while

    p(M_n = j + 1 | M_{n−3} = j − 1, M_{n−2} = j, M_{n−1} = j) = 0.

15. We want

    lim_{m→∞} p(X_{n+1} = i + 1 | X_n = i, X_m = N)

    = lim_{m→∞} p(X_n = i, X_{n+1} = i + 1, X_m = N) / p(X_n = i, X_m = N)

    = lim_{m→∞} p(X_m = N | X_{n+1} = i + 1) p(X_{n+1} = i + 1 | X_n = i) / p(X_m = N | X_n = i)

    = p (1 − (q/p)^{i+1}) / (1 − (q/p)^i)

for p ≠ 1/2, and

    (i + 1) / 2i

for p = 1/2, as advertised.


16. (a) X_m = (X_m^1, X_m^2, X_m^3) denotes the positions of the three processors after job m.

(b) Reversibility holds because the reverse of any path returning to its starting point has the same probability as the forward path. This comes from writing out the states (see below) and noting that the direct links are between the state pairs (1, 2), (2, 5), (5, 6), (6, 4), (4, 3), (3, 1) and their reverses. The only point that needs to be checked is that

    P_12 P_25 P_56 P_64 P_43 P_31 = P_13 P_34 P_46 P_65 P_52 P_21.

(c) Label the states

    1 = (1, 2, 3), 2 = (1, 3, 2), 3 = (2, 1, 3), 4 = (2, 3, 1), 5 = (3, 1, 2), 6 = (3, 2, 1);
then

the 6 × 6 one step transition matrix P may be written out; in particular P_12 = 1/18 and P_21 = 5/36. Now, time reversibility gives

    π_i P_ij = π_j P_ji.

For example, (1/18) π_1 = (5/36) π_2, so that π_2 = (2/5) π_1. Working around the cycle of direct links in the same way gives

    π_2 = (2/5) π_1,  π_3 = (1/2) π_1,  π_4 = (2/25) π_1,  π_5 = (1/10) π_1,  π_6 = (1/25) π_1,

and normalising, π_1 (1 + 2/5 + 1/2 + 2/25 + 1/10 + 1/25) = (53/25) π_1 = 1, so that

    π_1 = 25/53,  π_2 = 10/53,  π_3 = 25/106,  π_4 = 2/53,  π_5 = 5/106,  π_6 = 1/53.

Chapter 5

Continuous Time Markov Chains


Definition 5.1 (Continuous Time Markov Chain). A continuous time process {X(t), t ≥ 0} is said to be a continuous time Markov chain if for all s ≥ 0, t ≥ 0, all i and j, all 0 ≤ u_1 ≤ . . . ≤ u_n < s, n ≥ 1, and arbitrary sets A_k,

    p(X(t + s) = j | {X(s) = i} ∩ (∩_{k=1}^n {X(u_k) ∈ A_k})) = p(X(t + s) = j | X(s) = i).

It is said to be homogeneous if in addition

    p(X(t + s) = j | X(s) = i) = p(X(t) = j | X(0) = i)

for all s ≥ 0 and all i, j.

When the process X is a continuous time, time homogeneous process, the transition probabilities depend only on the difference of the two times;

    p(X(t + s) = j | X(s) = i) = p(X(t) = j | X(0) = i) = p_ij(t).

Example: The Poisson Process The Poisson process is an example of a Markov process. The transition probabilities are given by

    p_ij(t) = p({j − i events in t seconds}) = ((λt)^{j−i} / (j − i)!) exp{−λt}

for j ≥ i. It follows that

    P(t) = ( e^{−λt}   λt e^{−λt}   (λt)² e^{−λt}/2!   . . . )
           ( 0         e^{−λt}      λt e^{−λt}         . . . )
           ( 0         0            e^{−λt}            . . . )
           ( ...       ...          ...                     )

As δt → 0, e^{−λδt} ≈ 1 − λδt, so that

    P(δt) ≈ ( 1 − λδt   λδt        0          . . . )
            ( 0         1 − λδt    λδt        . . . )
            ( 0         0          1 − λδt    . . . )
            ( ...       ...        ...             )

where all higher order terms have been neglected.
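The first order approximation above can be checked numerically. The following is a minimal sketch (Python; the truncation size and the choice λ = 2, δt = 10⁻⁴ are illustrative, not from the notes): it builds a truncation of P(t) for the Poisson process and compares it with I + Qδt, where Q has −λ on the diagonal and λ on the superdiagonal.

```python
import math

def poisson_P(lam, t, n):
    """n x n truncation of the Poisson process transition matrix P(t)."""
    return [[(lam * t) ** (j - i) / math.factorial(j - i) * math.exp(-lam * t)
             if j >= i else 0.0
             for j in range(n)] for i in range(n)]

lam, dt, n = 2.0, 1e-4, 5
P = poisson_P(lam, dt, n)

# Compare with the first order approximation I + Q*dt,
# where q_ii = -lam and q_{i,i+1} = lam.
for i in range(n):
    for j in range(n):
        q = -lam if i == j else (lam if j == i + 1 else 0.0)
        approx = (1.0 if i == j else 0.0) + q * dt
        assert abs(P[i][j] - approx) < 1e-6
```

The discrepancy is of order (λδt)², in agreement with the statement that higher order terms are neglected.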

Theorem 5.2 (State Occupation Times). Let X(t) be a continuous time Markov chain. Then the time T_i that X remains at a given state i is an exponentially distributed random variable.

Proof Suppose the process has been at state i for s seconds. Then

    p(T_i > t + s | T_i > s, X(0) = i) = p(T_i > t + s | X(s) = i) = p(T_i > t | X(0) = i).

Only the exponential random variable satisfies the memoryless property, so that

    p(T_i > t) = exp{−ν_i t}

for some ν_i ≥ 0. The mean state occupation time 1/ν_i will in general be different for each state.

This result provides another way of looking at a continuous time Markov chain. Each time a state, say j, is entered, an exponentially distributed occupation time T_j is selected. When this time is up, the next state i is selected according to a discrete time Markov chain with transition probabilities p_ji. These transition probabilities define what is known as an embedded Markov chain.

5.1 Transition Rates

The probability that the process will remain in state i during the very short time interval (t, t + δt] is given by

    p(T_i > δt) = exp{−ν_i δt} = 1 − ν_i δt + O((δt)²),

by Taylor's theorem. The probability of two or more jumps in the time interval (t, t + δt] is O((δt)²). Therefore, ignoring all terms of O((δt)²),

    1 − p_ii(δt) = ν_i δt.

The quantity ν_i is the rate at which the process X(t) leaves state i. Once the process leaves state i, it will enter state j with probability p_ij. Thus, for i ≠ j,

    p_ij(δt) = ν_i p_ij δt = γ_ij δt.    (5.1)


The quantity γ_ij is the rate at which the process X(t) jumps from state i to state j. The notation may be completed by setting

    γ_ii := −ν_i,    (5.2)

so that, formally,

    p(X(t + δt) = i | X(t) = i) = 1 − ν_i δt = 1 + γ_ii δt.

Set p_i(t) = p(X(t) = i). Then, using equations (5.1) and (5.2),

    p_i(t + δt) = p(X(t + δt) = i) = Σ_j p(X(t + δt) = i | X(t) = j) p(X(t) = j)

                = Σ_j p_ji(δt) p_j(t)

                = Σ_{j≠i} p_j(t) γ_ji δt + (1 + γ_ii δt) p_i(t),

so that

    p_i(t + δt) − p_i(t) = Σ_j γ_ji p_j(t) δt,

giving

    (d/dt) p_i(t) = Σ_j γ_ji p_j(t).    (5.3)

Example: Simple Queueing System Suppose a queueing system has just two states: either the system is idle (state 0) or busy (state 1). The idle time is exponential with parameter α, while the busy time is exponential with parameter β. Find p_0(t) and p_1(t).

Solution Note that ν_0 = α and ν_1 = β, so that

    γ_00 = −α,  γ_01 = α,  γ_10 = β,  γ_11 = −β.

For the embedded discrete time process, p_01 = p_10 = 1 and p_00 = p_11 = 0. The equations then become

    (d/dt) p_0(t) = −α p_0(t) + β p_1(t)

    (d/dt) p_1(t) = α p_0(t) − β p_1(t).

Using p_0 + p_1 = 1, this gives

    (d/dt) p_0(t) = β − (α + β) p_0(t),

from which

    p_0(t) = β/(α + β) + C exp{−(α + β)t},

where C = p_0(0) − β/(α + β). Note that

    p_0(t) → β/(α + β)  as t → ∞,

and that

    p_1(t) → α/(α + β).

The relation p_0(t) + p_1(t) = 1 holds for all t ≥ 0.
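The closed form for p_0(t) can be checked against a direct Euler integration of the forward equations (5.3). The sketch below uses arbitrary illustrative rates α = 1.5, β = 0.5 (not taken from the notes) and starts in the idle state.

```python
import math

alpha, beta = 1.5, 0.5   # illustrative idle/busy rates
p0, p1 = 1.0, 0.0        # start in the idle state
dt, T = 1e-4, 4.0

# Euler steps of: p0' = -alpha*p0 + beta*p1, p1' = alpha*p0 - beta*p1.
for _ in range(int(T / dt)):
    d0 = -alpha * p0 + beta * p1
    d1 = alpha * p0 - beta * p1
    p0, p1 = p0 + d0 * dt, p1 + d1 * dt

# Closed form: p0(t) = beta/(alpha+beta) + C exp{-(alpha+beta) t}.
C = 1.0 - beta / (alpha + beta)
exact = beta / (alpha + beta) + C * math.exp(-(alpha + beta) * T)
assert abs(p0 - exact) < 1e-3
assert abs(p0 + p1 - 1.0) < 1e-9
```

Note that the Euler scheme conserves p_0 + p_1 = 1 exactly, mirroring the relation used in the solution.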

Lemma 5.3. Suppose that the embedded discrete time Markov chain has a stationary distribution φ. The stationary distribution π for the continuous time Markov chain is related to the stationary distribution for the discrete time chain by the following:

    π_j = (φ_j/ν_j) / Σ_k (φ_k/ν_k),   or   φ_j = π_j ν_j / Σ_k π_k ν_k.

Proof Firstly, by equation (5.3), since (d/dt) π_i = 0, it follows that 0 = Σ_j γ_ji π_j for each i. Equation (5.2) gives

    ν_i π_i = Σ_{j≠i} γ_ji π_j,    (5.4)

so that, by equation (5.1),

    ν_i π_i = Σ_{j:j≠i} ν_j π_j p_ji,

where p_ji are the one step transition probabilities for the discrete time chain, and the result follows since p_ii = 0 (by definition), so that

    φ_i = Σ_{j:j≠i} φ_j p_ji,

or φ = φP, where P is the one step transition matrix with elements p_ij.

Many systems, like the one above, reach equilibrium as t → ∞.


Theorem 5.4. Let X be a continuous time Markov chain, where the embedded Markov chain is positive recurrent and irreducible. Let φ denote the stationary measure for the embedded discrete time Markov chain. Then, provided Σ_j φ_j/ν_j < +∞, lim_{t→∞} p(t) = π, where π is the solution to

    Σ_j γ_ji π_j = 0,   Σ_j π_j = 1.

Proof The proof employs the same coupling method as in the discrete case. Note that the exponential holding time means that it does not matter whether the embedded discrete time chain is aperiodic or not; the independent copy will meet the original with probability 1. If the hypotheses are satisfied, it follows that

    lim_{t→∞} (d/dt) p_i(t) = 0 = lim_{t→∞} Σ_j γ_ji p_j(t)

and, setting π_j = lim_{t→∞} p_j(t), this becomes

    Σ_j γ_ji π_j = 0.

This may be rewritten as

    ν_i π_i = Σ_{j:j≠i} γ_ji π_j,

which is equation (5.4). These equations are the steady state equations and π is the stationary distribution of the Markov chain. If we start the Markov chain with initial distribution π, then the state probabilities will be given by p_i(t) = π_i for all t.

Example: Single server queue Consider a queueing system in which customers are served by a single server in order of arrival. The time between customer arrivals is exponentially distributed with rate λ and the time required to service a customer is exponentially distributed with rate μ. Find the stationary (or steady state) distribution for the number of customers in the system.

Solution The transition rates are as follows. Customers arrive at rate λ, so that

    γ_{i,i+1} = λ,   i = 0, 1, 2, . . .

This is the rate of hopping from i customers to i + 1 customers and is given by the arrival rate. When the system isn't empty, customers depart at rate μ, so that

    γ_{i,i−1} = μ,   i = 1, 2, 3, . . .

If the system is empty, then nobody can leave. The steady state equations (equation (5.4)) are therefore

    λ π_0 = μ π_1,   i = 0,
    (λ + μ) π_i = λ π_{i−1} + μ π_{i+1},   i ≥ 1,

where, in the notation above, ν_0 = λ and ν_i = λ + μ for i ≥ 1. Here λ = ν_i p_{i,i+1} and μ = ν_i p_{i,i−1} for i ≥ 1 and ν_0 p_{0,1} = λ, so that

    p_{0,1} = 1,   p_{i,i+1} = λ/(λ + μ),   p_{i,i−1} = μ/(λ + μ),   i ≥ 1.

The steady state equations may be rewritten as

    μ π_{i+1} − λ π_i = μ π_i − λ π_{i−1},

so that

    μ π_i − λ π_{i−1} = const = μ π_1 − λ π_0 = 0,

giving

    λ π_{i−1} = μ π_i.

Let ρ = λ/μ. It follows that

    π_j = ρ^j π_0.

Finally, since the probabilities must sum to 1, π_0 is calculated from the identity

    1 = Σ_{i=0}^∞ π_i = (1 + ρ + ρ² + . . .) π_0 = π_0 / (1 − ρ),

so that π_0 = 1 − ρ and

    π_i = (1 − ρ) ρ^i.    (5.5)

The system of equations only has a non-trivial solution if ρ < 1.
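The recursion λπ_{i−1} = μπ_i can be iterated numerically and compared with the closed form (5.5). A minimal sketch, with the illustrative choice λ = 1, μ = 2 and a long truncation of the state space:

```python
lam, mu = 1.0, 2.0
rho = lam / mu

# Build pi from the balance recursion pi_i = rho * pi_{i-1},
# then normalise over a long truncation of the state space.
N = 200
pi = [1.0]
for _ in range(N):
    pi.append(rho * pi[-1])
total = sum(pi)
pi = [p / total for p in pi]

# Compare with the closed form pi_i = (1 - rho) * rho**i.
for i in range(10):
    assert abs(pi[i] - (1 - rho) * rho ** i) < 1e-12
```

The truncation error is of order ρ^N, which is negligible here since ρ < 1.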

Birth and Death Processes Consider a Markov chain in which transitions occur only between adjacent sites. The single server queueing system is an example of this. Suppose that the transition rates now depend on the site, so that the rate of hopping from i to i + 1 depends on i and is given by λ_i, and the rate of hopping from i to i − 1 also depends on the site i and is given by μ_i. Then,

    γ_{i,i+1} = λ_i   (the `birth rate' at site i)

and

    γ_{i,i−1} = μ_i   (the `death rate' at site i).

The steady state equations become

    λ_0 π_0 = μ_1 π_1,
    μ_{i+1} π_{i+1} − λ_i π_i = μ_i π_i − λ_{i−1} π_{i−1} = . . . = μ_1 π_1 − λ_0 π_0 = 0.

It follows that

    π_i = r_i π_{i−1},

where r_i = λ_{i−1}/μ_i, so that

    π_i = r_i r_{i−1} . . . r_1 π_0,    (5.6)

and, letting R_i = Π_{j=1}^i r_j with R_0 = 1, π_0 is found from Σ_{i=0}^∞ π_i = 1, giving

    1 = π_0 Σ_{i=0}^∞ R_i,

so that equation (5.6) may be rewritten as

    π_i = R_i / Σ_{j=0}^∞ R_j.

If the series does not converge, then π_i = 0 for all states i; there is no stationary distribution.
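The formula π_i = R_i / Σ_j R_j translates directly into a small routine. The sketch below (Python; the function name, the truncation level and the M/M/1 test rates are my own choices) computes the stationary distribution of a general birth / death chain.

```python
def birth_death_stationary(lam, mu, n):
    """Stationary probabilities pi_0..pi_n for a birth/death chain,
    where lam(i) is the birth rate in state i and mu(i) the death rate.
    Uses R_i = prod_{j=1}^{i} lam(j-1)/mu(j) and pi_i = R_i / sum_j R_j."""
    R = [1.0]
    for i in range(1, n + 1):
        R.append(R[-1] * lam(i - 1) / mu(i))
    total = sum(R)
    return [r / total for r in R]

# M/M/1 as a sanity check: lam(i) = 1, mu(i) = 2 gives pi_i ~ (1-rho) rho^i.
pi = birth_death_stationary(lambda i: 1.0, lambda i: 2.0, 300)
assert abs(pi[0] - 0.5) < 1e-9
assert abs(pi[1] - 0.25) < 1e-9
```

The same routine covers the M/M/c example below by taking mu(i) = min(i, c)·μ.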

Example: The M/M/c Queue Consider a queueing system where customers arrive according to a Poisson process with rate λ and there are c servers available, each with an exponential service time with rate μ. The notation M/M/c stands for Markov arrivals, Markov service, c servers. This may be regarded as a birth / death process with rates

    μ_n = nμ,   n ≤ c;     μ_n = cμ,   n > c.

This is seen in the following lemma.

Lemma 5.5. If k servers are engaged, then the time until the next departure is exponentially distributed with parameter kμ.

Proof Let X denote the time until the next departure. Then, if k servers are busy,

    X = min(S_1, . . . , S_k),

where (S_i)_{i=1,...,k} are i.i.d. exponential random variables with parameter μ, S_i denoting the service time for server i. Then

    p(X > t) = p(min(S_1, . . . , S_k) > t) = p(∩_{j=1}^k {S_j > t}) = Π_{j=1}^k p(S_j > t) = exp{−kμt}.

The arrival rates are given by λ_n = λ. The process may be represented using the transition diagram given in figure 5.1.

Figure 5.1: Transition diagram for the M/M/c queue: states 0, 1, 2, . . . , c, . . . , with rate λ on each arrow to the right and rates μ, 2μ, 3μ, . . . , cμ, cμ, . . . on the arrows to the left.

5.2 Time to Reach a State

Let S_{j,k} denote the time to arrive at state k starting from state j, for a Markov birth / death process. This may be analysed using the moment generating function

    M_X(p) = E[e^{pX}],

which has a particularly simple form for an exponential random variable. Recall that

    ∫_0^∞ e^{px} λ e^{−λx} dx = λ/(λ − p),   p < λ.

Let

    I_j = 1 if the first jump from j is to j + 1, and 0 otherwise.

Then

    E[e^{pS_{j,j+1}}] = E[e^{pS_{j,j+1}} | I_j = 1] p(I_j = 1) + E[e^{pS_{j,j+1}} | I_j = 0] p(I_j = 0).

Now, p(I_j = 1) = λ_j/(λ_j + μ_j). Furthermore, conditioned on either I_j = 1 or I_j = 0, the time of the first jump is exponential with parameter λ_j + μ_j. Conditioned on I_j = 1, S_{j,j+1} is simply the time of the first jump. Conditioned on I_j = 0, S_{j,j+1} is the time of the first jump plus additionally the time to get from j − 1 to j + 1. It follows that

    E[e^{pS_{j,j+1}}] = (λ_j/(λ_j + μ_j)) ((λ_j + μ_j)/((λ_j + μ_j) − p))
                        + (μ_j/(λ_j + μ_j)) ((λ_j + μ_j)/((λ_j + μ_j) − p)) E[e^{pS_{j−1,j}}] E[e^{pS_{j,j+1}}]

                      = λ_j/((λ_j + μ_j) − p) + (μ_j/((λ_j + μ_j) − p)) E[e^{pS_{j−1,j}}] E[e^{pS_{j,j+1}}].

Firstly,

    E[e^{pS_{0,1}}] = λ/(λ − p).    (5.7)

Setting M_j(p) = E[e^{pS_{j,j+1}}] gives, for constant rates λ_j = λ and μ_j = μ,

    M_j(p) = λ/((λ + μ) − p) + (μ/((λ + μ) − p)) M_{j−1}(p) M_j(p),

giving

    M_j(p) = λ / ((λ + μ) − p − μ M_{j−1}(p)).

It follows that

    M_j′(0) = 1/λ + (μ/λ) M_{j−1}′(0).

Note that M_j′(0) = E[S_{j,j+1}]. It follows that

    E[S_{j,j+1}] = (1/λ) Σ_{k=0}^j (μ/λ)^k = ((μ/λ)^{j+1} − 1)/(μ − λ),

so that

    E[S_{0,j}] = Σ_{k=1}^j E[S_{k−1,k}] = (1/(μ − λ)) ((μ/λ)((μ/λ)^j − 1)/((μ/λ) − 1) − j).
The computation of the variance is reasonably messy. Recall that E[S²_{j,j+1}] = M_j″(0) and that the S_{j,j+1} are independent. Taking second derivatives gives

    M_j″(p) = 2λ/((λ + μ) − p)³ + (2μ/((λ + μ) − p)³) M_{j−1}(p) M_j(p)
              + (2μ/((λ + μ) − p)²)(M_{j−1}′(p) M_j(p) + M_{j−1}(p) M_j′(p))
              + (μ/((λ + μ) − p))(M_{j−1}″(p) M_j(p) + 2 M_{j−1}′(p) M_j′(p) + M_{j−1}(p) M_j″(p)),

so that

    M_j″(0) = 2/(λ + μ)² + (2μ/(λ + μ)²)(M_{j−1}′(0) + M_j′(0))
              + (μ/(λ + μ))(M_{j−1}″(0) + 2 M_j′(0) M_{j−1}′(0) + M_j″(0)).

This leads to a recursive formula of the form

    Var(S_{j,j+1}) = α_j + β Var(S_{j−1,j}),   β = μ/λ,

where α_j collects the terms involving the means computed above. Now use Var(S_{0,1}) = 1/λ², the fact that the sequence

    a_{n+1} = α + β a_n

has solution

    a_n = β^n a_0 + α (1 − β^n)/(1 − β),   n ≥ 1,

and

    Var(S_{0,j}) = Σ_{k=1}^j Var(S_{k−1,k}).
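The expected hitting times are easy to check numerically: iterating the recursion M_j′(0) = 1/λ + (μ/λ)M_{j−1}′(0) should reproduce the closed form. A minimal sketch with the illustrative rates λ = 1, μ = 2:

```python
lam, mu = 1.0, 2.0

# Recursion for m_j = E[S_{j,j+1}]: m_0 = 1/lam, m_j = 1/lam + (mu/lam) m_{j-1}.
m = [1.0 / lam]
for j in range(1, 10):
    m.append(1.0 / lam + (mu / lam) * m[-1])

# Closed form: m_j = ((mu/lam)**(j+1) - 1) / (mu - lam).
for j in range(10):
    closed = ((mu / lam) ** (j + 1) - 1.0) / (mu - lam)
    assert abs(m[j] - closed) < 1e-9
```

Note how the mean hitting time grows geometrically in j when μ > λ, reflecting the drift of the chain towards 0.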

5.3 Time Reversed Markov Chains

Consider a continuous time Markov chain running backwards in time.

Theorem 5.6. Let X be a continuous time Markov process where each site i has an exponential holding time with parameter ν_i. Assume that the process is in equilibrium; that is, p(X(t) = i) = π_i. Then the reversed process also spends an exponentially distributed amount of time, with parameter ν_i, on site i.

Proof If X(t) = i, the probability that the reverse process remains on site i for an additional s seconds may be calculated as follows. Let T_i(r) be the additional holding time for the forward process found at site i at time r. Then

    p(no jump in [t − s, t] | X(t) = i) = p(X(r) = i, t − s < r < t | X(t) = i)
                                        = p(X(t − s) = i, T_i(t − s) > s) / p(X(t) = i)
                                        = p(X(t − s) = i) p(T_i(0) > s) / p(X(t) = i)
                                        = p(T_i > s) = exp{−ν_i s}.

The jumps in the forward process X(t) are determined by the embedded Markov chain with transition matrix (p_ij), so that the jumps of the reversed process are determined by the discrete time Markov chain with transition probabilities

    q_ij = φ_j p_ji / φ_i,

where φ denotes the stationary distribution of the embedded discrete time Markov chain. Time reversibility for a continuous time Markov chain is defined as follows.

Definition 5.7. The continuous time Markov chain is said to be reversible if the embedded discrete time Markov chain is reversible.

Corollary 5.8. An aperiodic positive recurrent birth / death process is time reversible.

A birth / death process is a tree process (see exercises section 4.8, exercise 13). The embedded discrete time Markov chain is time reversible, since any reversed path that returns to its starting point has the same probability as the forward path. Hence the continuous time Markov chain is time reversible.

Example: The M/M/c Queue If λ < cμ, then the output process of departing customers is asymptotically a Poisson process with rate λ.

Proof The M/M/c queue is an ergodic birth / death process and is therefore time reversible. Going forward in time, the points at which N(t), the number in the system, increases by one unit form a Poisson process with parameter λ, as these are the arrival times. It follows that the points at which the reversed process increases by one unit are also a Poisson process with parameter λ; these are exactly the departure points of the forward process.

Proposition 5.9. Consider a probability distribution π. Suppose that

    π_i γ_ij = π_j γ_ji.

Then the continuous time Markov chain is time reversible and π is the stationary probability distribution.

Proof Summing gives

    ν_i π_i = π_i Σ_{j:j≠i} ν_i p_ij = π_i Σ_{j:j≠i} γ_ij = Σ_{j:j≠i} π_i γ_ij = Σ_{j:j≠i} π_j γ_ji,

so that

    ν_i π_i = Σ_{j:j≠i} π_j γ_ji,

where ν_i is the rate at which the process leaves state i. Hence the probabilities π_i satisfy the balance equations (equation (5.4)) and are therefore the stationary probabilities. The chain is time reversible because

    π_i γ_ij = π_j γ_ji.

Proposition 5.10. A time reversible chain with limiting probabilities π_j which is truncated to the set A and remains irreducible is also time reversible, and has limiting probabilities given by

    π_j^A = π_j / Σ_{j∈A} π_j.

Proof It is necessary and sufficient to show that

    π_i^A γ_ij = π_j^A γ_ji,   i, j ∈ A.

This is equivalent to

    π_i γ_ij = π_j γ_ji,

but this follows directly, since the original chain is (by assumption) time reversible.

Example Consider an M/M/1 queue in which arrivals, finding K in the system, do not enter. This is known as an M/M/1/K queue. Recall that the stationary distribution for an M/M/1 queue is

    π_j = (1 − λ/μ)(λ/μ)^j.

By the previous propositions, the truncated chain is time reversible and has probabilities given by

    π_j^(K) = (λ/μ)^j / Σ_{j=0}^K (λ/μ)^j.
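Proposition 5.10 can be illustrated numerically: the truncated probabilities are just the M/M/1 probabilities renormalised over {0, . . . , K}, and they still satisfy detailed balance edge by edge. A minimal sketch, with illustrative λ = 1, μ = 2, K = 5:

```python
lam, mu, K = 1.0, 2.0, 5
rho = lam / mu

# Truncated stationary distribution: pi_j = rho^j / sum_{i<=K} rho^i.
Z = sum(rho ** i for i in range(K + 1))
pi = [rho ** j / Z for j in range(K + 1)]

# Detailed balance on each edge: pi_j * lam = pi_{j+1} * mu.
for j in range(K):
    assert abs(pi[j] * lam - pi[j + 1] * mu) < 1e-12
assert abs(sum(pi) - 1.0) < 1e-12
```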

5.4 Exercises

1. Suppose that a single celled organism can be in one of two states, either state A or state B. An individual in state A will change to state B at an exponential rate α. An individual in state B will change to state A, dividing into two independent identical copies, at an exponential rate β. Define an appropriate continuous time Markov chain for a population of such organisms.

2. Consider a pure birth process, where members of a population can give birth but not die, where each member acts independently of the others and takes an exponentially distributed amount of time to give birth. Start with a single individual; that is, suppose X(0) = 1. Let T_i denote the time it takes the process to go from population size i to size i + 1.

(a) What is the distribution of T_i?

(b) Let X_1, . . . , X_j denote independent exponential random variables each having rate λ and interpret X_i as the lifetime of component i. Show that

    max(X_1, . . . , X_j) = ε_1 + . . . + ε_j

in distribution, where ε_1, . . . , ε_j are independent exponential random variables with rates

    jλ, (j − 1)λ, . . . , 2λ, λ.

(c) Using the first two parts, show that

    p(T_1 + . . . + T_j ≤ t) = (1 − e^{−λt})^j.

(d) Use this to show that

    P_{1j}(t) = (1 − e^{−λt})^{j−1} − (1 − e^{−λt})^j = e^{−λt}(1 − e^{−λt})^{j−1}.

Given X(0) = 1, what is the distribution of X(t)?

(e) Conclude that

    P_{ij}(t) = C(j − 1, i − 1) e^{−λti} (1 − e^{−λt})^{j−i}.

3. A set of 4 cities is to be connected via telecommunication links. The cost to construct a link between cities i and j is C_ij (i ≠ j). Enough links should be constructed so that each pair of cities has exactly one path connecting them. Thus exactly 3 links are needed. It has been decided to connect them in the following way. First one constructs the least expensive link. The other links are established sequentially, by connecting a city without any links to one with links in the cheapest way. Suppose that all costs C_ij are independent identically distributed exponential random variables with expected value 1. Find the expected cost of connecting all the cities.

4. (time reversibility) Consider two M/M/1 queues with respective parameters λ_i, μ_i, i = 1, 2. Suppose they share a common waiting room that can hold at most three customers. Find the limiting probability that there will be n customers from queue 1 and m customers from queue 2 in the system.

5. Consider a taxi rank where taxis and customers arrive according to Poisson processes with rates one and two per minute respectively. A taxi will wait no matter how many taxis are present. An arriving customer who does not find a waiting taxi leaves. Find (a) the average number of taxis waiting and (b) the proportion of arriving customers who do not find a taxi.

6. Consider a system of 3 components such that the working time of component i is exponentially distributed with parameter λ_i. When it fails, the repair rate depends on how many other components are down: if there are k components down, then the instantaneous repair rate for component i is kμ_i. (a) Define a continuous time Markov chain which may be used to analyse the situation, giving the states and parameters of the chain. (b) Show that, in the steady state, the chain is time reversible and compute a set of linear equations which give the stationary distribution.

7. Customers arrive according to a Poisson process with rate λ at a service station manned by a single server whose service times are independent, exponentially distributed with rate μ_1. After completion, the customer then joins a second system where the server has i.i.d. service times exponentially distributed with rate μ_2. Assume λ < μ_1 and λ < μ_2. Find the stationary probabilities π_mn, which denote the probability of m customers in the first system and n customers in the second.

Answers: Continuous Time Markov Chains

1. Let N_A(t) and N_B(t) denote the number of organisms in states A and B respectively. Then {(N_A(t), N_B(t))} is a continuous time Markov chain with

    ν_{n,m} = nα + mβ,

    p_{(n,m),(n−1,m+1)} = nα/(nα + mβ),   p_{(n,m),(n+2,m−1)} = mβ/(nα + mβ).

2. (a) Let λ be the rate at which an individual gives birth; then T_i ∼ Exp(iλ).

(b) By the memoryless property, note that ε_i is the minimum of j − (i − 1) independent exponentials with rate λ and is therefore exponential with rate (j − i + 1)λ.

(c)

    p(T_1 + . . . + T_j ≤ t) = p(max_{1≤i≤j} X_i ≤ t) = (1 − e^{−λt})^j.

(d) Clear (it has gone from 1 to j, but has not yet proceeded to j + 1).

    p(X(t) = j) = p(1 − p)^{j−1}

with p = e^{−λt} (number of trials up to and including the first success, where p is the success probability for independent Bernoulli trials).

(e) Let X_k be independent copies of X(t). Let Z = X_1 + . . . + X_i. Then P_ij(t) = p(Z = j). This may be interpreted as the probability that it takes j trials until success number i. The last trial is a success; therefore i − 1 successes are distributed among the first j − 1 trials. The standard negative binomial argument gives the result.

3. There are six possible links. Firstly, prove that the three cheapest links are used with probability 4/5, while the cheapest, second cheapest and fourth cheapest are used with probability 1/5 (left as an exercise). The expected cost is then

    (4/5)(3/6 + 2/5 + 1/4) + (1/5)(3/6 + 2/5 + 1/4 + 1/3).
4. First consider the simpler problem, where the waiting room is of infinite size. Let X_i(t) denote the number of customers at server i, i = 1, 2. Since each of the M/M/1 processes is time reversible, it follows that (X_1, X_2) is a time reversible Markov chain. The process of interest is this vector process restricted to the set of states A, where

    A = {(0, m) : m ≤ 4} ∪ {(n, 0) : n ≤ 4} ∪ {(n, m) : nm > 0, n + m ≤ 5}.

It follows that the probability that there are n with server 1 and m with server 2 is

    p_nm = C (λ_1/μ_1)^n (λ_2/μ_2)^m,   (n, m) ∈ A,

where C is computed using Σ_{(n,m)∈A} p_nm = 1.

5. The number of taxis waiting is a birth / death process with λ = 1 and μ = 2 for all states. It follows from results in the notes that

(a) the average number waiting is

    ρ/(1 − ρ) = 1,   where ρ = λ/μ = 1/2,

(b) and the proportion who do not get a taxi is the proportion who show up when no taxi is waiting, which is the probability of zero taxis:

    π_0 = 1 − ρ = 1/2.

6. (a) There are 2³ states, given by (a_1, a_2, a_3), where a_i = 1 corresponds to machine i working and a_i = 0 corresponds to machine i broken. If i is working, the chain moves to the corresponding state with i down at instantaneous rate λ_i, and moves from a state with i down and k machines broken back to the state with i repaired at instantaneous rate kμ_i. The chain is time reversible, using the characterisation that time reversibility is equivalent to

    γ_{i_0,i_1} γ_{i_1,i_2} · · · γ_{i_{n−1},i_0} = γ_{i_0,i_{n−1}} · · · γ_{i_1,i_0}

for every cycle of states i_0, i_1, . . . , i_{n−1}, i_0. Each side of this equation is a product of the form

    Π_{j=1}^3 λ_j^{a_j} μ_j^{b_j},

multiplied by integer factors k arising from the number of broken machines at each repair. The exponents a_j and b_j are the same for the reversed and the forward path: on any path that returns to its starting point, each breakdown corresponds to exactly one repair of the same machine and each repair corresponds to exactly one breakdown. The integer factors are also equal for the path and the reversed path, by considering the `number of broken machines' process on (0, 1, 2, 3), which is clearly a birth / death (tree) process.

(b) Label the states

    (0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1),

where 0 signifies `broken'. Use

    π_i γ_ij = π_j γ_ji.

Then

    π(110) = (λ_3/μ_3) π(111)
    π(101) = (λ_2/μ_2) π(111)
    π(011) = (λ_1/μ_1) π(111)
    π(100) = (λ_2/(2μ_2)) π(110)
    π(010) = (λ_1/(2μ_1)) π(110)
    π(001) = (λ_2/(2μ_2)) π(011)
    π(000) = (λ_3/(3μ_3)) π(001)
    π(000) + . . . + π(111) = 1.


7. First, write out the steady state equations. A diagram is useful; the arrows leaving a state correspond to the terms on the left hand side; the arrows entering a state correspond to the terms on the right hand side. 00 = 2 01

( + 1 )j0 = j1,0 + 2 j,1 ( + 2 )0j = 1 1,j1 + 2 0,j+1

j1 j1 j 1,
2

( + 1 + 2 )jk = j1,k + 1 j+1,k1 + 2 j,k+1 ,

k 1.
+1 .

First, try solutions of the form nm = Cn m . This works, with =


It follows that mn = C( +1 )m ( 2 )n for m 0, n 0 where

and =

C = (1

1 ) . 2 + 1


Chapter 6

Markov Queueing Models


This chapter considers queueing systems where the number in the system is modelled by a Markov Process.

6.1 The M/M/1 Queue

From the notation introduced earlier, M/M/1 means that customers arrive according to a Poisson process and the service times are i.i.d. exponential random variables. There is one server and the system can accommodate an infinite number of customers. Let λ denote the rate parameter for the Poisson arrival process and μ denote the rate parameter for the exponential service time distribution. Let ρ = λ/μ. The number of customers N(t) in an M/M/1 system is a continuous time Markov chain. The stationary distribution has already been computed in equation (5.5). The average number of customers is given by

    E[N] = Σ_{j=0}^∞ j p(N = j)

         = (1 − ρ) Σ_{j=0}^∞ j ρ^j = (1 − ρ) ρ Σ_{j=0}^∞ (d/dρ) ρ^j

         = (1 − ρ) ρ (d/dρ) Σ_{j=0}^∞ ρ^j = (1 − ρ) ρ (d/dρ) (1/(1 − ρ)) = ρ/(1 − ρ).
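The geometric-series manipulation can be checked directly: summing j p(N = j) over a long truncation should reproduce ρ/(1 − ρ). A minimal sketch with the illustrative value ρ = 1/2:

```python
rho = 0.5  # lam/mu for an illustrative M/M/1 queue

# Truncated sum of j * p(N = j) against the closed form rho/(1 - rho).
EN = sum(j * (1 - rho) * rho ** j for j in range(1000))
assert abs(EN - rho / (1 - rho)) < 1e-9
```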

6.2 Distributions Associated with M/M/1

Definition 6.1 (Arriving Customer's Distribution). The arriving customer's distribution is the probability function corresponding to the random variable N_a, the number of customers found in the system by a customer on arrival.

Lemma 6.2. If the arrivals are Poisson, and independent of the system state and the customer service times, then the arriving customer's distribution is equal to the steady state distribution for the system.

Proof Let A(t) denote the total number of arrivals up to time t. Since the arrivals form a Poisson process, the random variable A(t + s) − A(t) is independent of N(t), so that

    p(N_a(t) = k) = lim_{δ→0} p(N(t) = k | A(t + δ) − A(t) = 1) = p(N(t) = k).

Theorem 6.3. Let T denote the total amount of time that a customer spends in an M/M/1 system with first come first served service discipline. Let λ denote the arrival rate and let the service time be distributed exponentially with parameter μ. Then T is an exponential random variable with parameter μ − λ.

Proof Suppose that a customer arrives and finds k customers already there; i.e. N_a = k (so he is k-th in line, plus one being served). Then, conditioned on this, T is the residual service time of the customer being served, plus the service times of the remaining k − 1 customers, plus his own service time. By the memoryless property, this is equivalent to the sum of k + 1 independent exponential variables, each with parameter μ. Lemma 3.7, proved earlier, is required to complete the proof. It follows, by lemma 3.7, that

    f_T(x | N_a = k) = ((μx)^k / k!) μ exp{−μx},

the Γ(k + 1, μ) distribution (Gamma distribution with parameters k + 1 and μ), so that, using p(N_a = k) = (1 − ρ)ρ^k,

    f_T(x) = Σ_{k=0}^∞ f_T(x | N_a = k) p(N_a = k)

           = Σ_{k=0}^∞ ((μx)^k / k!) μ e^{−μx} (1 − ρ)ρ^k

           = (1 − ρ) μ e^{−μx} Σ_{k=0}^∞ (ρμx)^k / k!

           = (1 − ρ) μ e^{−μx} e^{ρμx} = (μ − λ) e^{−(μ−λ)x}

for x ≥ 0, and 0 for x < 0, and the proof is complete.
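The mixture-of-gammas identity in the proof can be verified numerically by summing the conditional densities term by term. A minimal sketch, with the illustrative rates λ = 1, μ = 2 (terms are accumulated iteratively to avoid large factorials):

```python
import math

lam, mu = 1.0, 2.0
rho = lam / mu

def f_T(x, terms=200):
    """Mixture density: sum over k of p(Na = k) times the Gamma(k+1, mu)
    density at x, accumulated via term_{k+1} = term_k * rho*mu*x/(k+1)."""
    total, term = 0.0, (1 - rho) * mu * math.exp(-mu * x)  # k = 0 term
    for k in range(terms):
        total += term
        term *= rho * mu * x / (k + 1)
    return total

# Should equal the exponential(mu - lam) density.
for x in (0.1, 1.0, 3.0):
    assert abs(f_T(x) - (mu - lam) * math.exp(-(mu - lam) * x)) < 1e-9
```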

6.3 The M/M/1/K Queue - Finite Capacity

The M/M/1/K queue is identical to the M/M/1 system except that it can only hold a maximum of K customers in the system. Customers who arrive when the system is full are turned away. For this system, N(t) is a continuous time Markov chain taking values in {0, 1, . . . , K}. The steady state distribution has already been computed using time reversibility. Let ρ = λ/μ. Then

    π_j^(K) = ρ^j / Σ_{i=0}^K ρ^i = ρ^j (1 − ρ)/(1 − ρ^{K+1}),   j = 0, . . . , K,   ρ ≠ 1;
    π_j^(K) = 1/(K + 1),   j = 0, . . . , K,   ρ = 1.

While it was necessary that ρ < 1 to have a stationary distribution for the M/M/1 model, it is clear that the M/M/1/K model has a stationary distribution for any ρ > 0. The expected number of customers in the system is given by

    E[N] = Σ_{j=0}^K j π_j = ρ/(1 − ρ) − (K + 1)ρ^{K+1}/(1 − ρ^{K+1}),   ρ ≠ 1;
    E[N] = K/2,   ρ = 1.

The probability that the system turns away customers is π_K, so that the rate at which customers actually enter the system, λ_a, is given by

    λ_a = λ(1 − π_K).    (6.1)

The rate at which customers are blocked is given by

    λ_b = λ π_K.    (6.2)
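Equations (6.1) and (6.2) are easy to package into a small routine. The sketch below (Python; the function name and the test parameters λ = 1, μ = 2, K = 3 are my own choices) returns the stationary distribution, the accepted rate and the blocked rate.

```python
def mm1k(lam, mu, K):
    """Stationary distribution, accepted rate and blocked rate
    for an M/M/1/K queue."""
    rho = lam / mu
    Z = sum(rho ** i for i in range(K + 1))
    pi = [rho ** j / Z for j in range(K + 1)]
    accepted = lam * (1 - pi[K])   # equation (6.1)
    blocked = lam * pi[K]          # equation (6.2)
    return pi, accepted, blocked

pi, acc, blk = mm1k(1.0, 2.0, 3)
assert abs(acc + blk - 1.0) < 1e-12   # the two rates split the arrival rate
assert abs(sum(pi) - 1.0) < 1e-12
```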

6.4 M/M/c Systems

The M/M/c system has Poisson arrivals and c servers, each with exponential service time. Assume that arrivals occur at rate λ and that the service time for a customer is exponentially distributed with parameter μ.

Definition 6.4 (Utilization). The utilization for a queueing system of infinite capacity is defined as

    ρ := λ/(cμ),

where λ is the arrival rate, μ is the service rate for an individual server and c is the number of servers.

The M/M/c system is a Markov chain, a birth and death process. The transition diagram has already been described, giving steady state equations

    λ π_0 = μ π_1,
    (λ + jμ) π_j = λ π_{j−1} + (j + 1)μ π_{j+1},   1 ≤ j ≤ c − 1,
    (λ + cμ) π_j = λ π_{j−1} + cμ π_{j+1},   j ≥ c.

This gives

    π_j = (λ/(jμ)) π_{j−1},   j = 1, . . . , c.

Set a = λ/μ; then

    π_j = (a^j / j!) π_0,   j = 0, . . . , c,

and

    π_j = ρ^{j−c} π_c = ρ^{j−c} (a^c / c!) π_0,   j ≥ c + 1.

Here

    ρ = a/c.

Finally, π_0 is obtained from the normalisation condition

    1 = Σ_{j=0}^∞ π_j = π_0 ( Σ_{j=0}^{c−1} a^j/j! + (a^c/c!) Σ_{j=0}^∞ ρ^j ).

If λ < cμ, this is well defined and

    π_0 = 1 / ( Σ_{i=0}^{c−1} a^i/i! + (a^c/c!) (1/(1 − ρ)) ).    (6.3)

The Erlang C Formula Let W denote the waiting time. The probability that an arriving customer finds all servers engaged and has to wait in a queue is an important parameter of the M/M/c system and is given by

    p(W > 0) = p(N ≥ c) = Σ_{j=c}^∞ ρ^{j−c} π_c = π_c/(1 − ρ).

This probability is called the Erlang C formula and is denoted by C(c, a):

    C(c, a) = (a^c/c!) (1/(1 − ρ)) π_0,    (6.4)

where π_0 is given by equation (6.3). Several quantities of interest may be expressed using the Erlang C formula. For example, the average number of customers in the queue is given by

    E[N_q] = Σ_{j=c}^∞ (j − c) ρ^{j−c} π_c

           = π_c Σ_{j=0}^∞ j ρ^j

           = π_c ρ/(1 − ρ)²

           = (ρ/(1 − ρ)) C(c, a).
Theorem 6.5 (Waiting Time Distribution). Let W be the waiting time for an M/M/c system with arrival rate λ and processing rate μ for each server. Let ρ = λ/(cμ) denote the utilization parameter and a = λ/μ. Then

    p(W ≤ x) = 1 − C(c, a) exp{−cμ(1 − ρ)x}.

Proof Firstly, p(W = 0) is calculated. This is then used to calculate p(W ≤ x | W > 0). Consider a customer arriving when there are already k in the queue (so that there are already k + c in the system). There must be k + 1 service completions before this customer is served. Let N denote the number already in the system, as seen by the arriving customer. Each service completion is exponentially distributed with parameter cμ. It has already been shown that the sum of k + 1 i.i.d. exponentials, each with parameter cμ, is a gamma with parameters k + 1 and cμ, so that the conditional density function is

    f_{W|N}(x | c + k) = ((cμx)^k / k!) cμ exp{−cμx},   x ≥ 0,    (6.5)

and 0 for x < 0. Also,

    p(N = j | N ≥ c) = p(N = j)/p(N ≥ c) = ρ^{j−c} π_c / (π_c/(1 − ρ)) = (1 − ρ) ρ^{j−c}.    (6.6)

Using F_{W|N}(x | c + k) = p(W ≤ x | N = c + k), it follows that F_W(x | W > 0) := p(W ≤ x | W > 0) is given by

    F_W(x | W > 0) = Σ_{k=0}^∞ F_{W|N}(x | c + k) p(N = c + k | N ≥ c)

                   = (1 − ρ) Σ_{k=0}^∞ ρ^k ∫_0^x ((cμy)^k / k!) cμ exp{−cμy} dy

                   = (1 − ρ) ∫_0^x Σ_{k=0}^∞ ((ρcμy)^k / k!) cμ exp{−cμy} dy

                   = (1 − ρ) cμ ∫_0^x exp{−cμ(1 − ρ)y} dy

                   = 1 − exp{−cμ(1 − ρ)x}.

Finally,

    p(W ≤ x) = p(W = 0) + p(0 < W ≤ x)
             = p(W = 0) + F_W(x | W > 0) p(W > 0)
             = (1 − C(c, a)) + (1 − exp{−cμ(1 − ρ)x}) C(c, a)
             = 1 − C(c, a) exp{−cμ(1 − ρ)x}.

Corollary 6.6. The cumulative distribution function for the total time in the system T is given by

F_T(x) = (1 - C(c,a))(1 - e^{-\mu x}) + C(c,a)\,\frac{(1 - e^{-c\mu(1-\rho)x}) - c(1-\rho)(1 - e^{-\mu x})}{1 - c(1-\rho)}.


CHAPTER 6. MARKOV QUEUEING MODELS

Proof Let S denote a service time. Then W and S are independent and T = W + S. Then,

p(T \leq x) = p(W + S \leq x)
 = p(W = 0)(1 - e^{-\mu x}) + \int_0^x f_W(y)(1 - e^{-\mu(x-y)})\,dy
 = (1 - C(c,a))(1 - e^{-\mu x}) + \int_0^x C(c,a)\,c\mu(1-\rho)e^{-c\mu(1-\rho)y}(1 - e^{-\mu(x-y)})\,dy
 = (1 - C(c,a))(1 - e^{-\mu x}) + C(c,a)(1 - e^{-c\mu(1-\rho)x}) - C(c,a)\,\frac{c(1-\rho)}{c(1-\rho)-1}\,e^{-\mu x}(1 - e^{-(c\mu(1-\rho)-\mu)x})
 = (1 - C(c,a))(1 - e^{-\mu x}) + C(c,a)\,\frac{(1 - e^{-c\mu(1-\rho)x}) - c(1-\rho)(1 - e^{-\mu x})}{1 - c(1-\rho)}.
The expected waiting time may be computed quite easily from the c.d.f.;

E[W] = \int_0^{\infty} p(W > x)\,dx = \frac{C(c,a)}{c\mu(1-\rho)}.

Since E[S] = 1/\mu, the average total delay is

E[T] = E[W] + E[S] = \frac{1}{\mu}\left(\frac{C(c,a)}{c(1-\rho)} + 1\right).

Example Suppose a system has four servers and that customers arrive according to a Poisson process at a rate of one every two minutes and suppose that service time durations are exponentially distributed with mean 4 minutes. When all servers are occupied, a queue forms. Find the probability of having to wait in the queue.

Solution \lambda = 1/2, \mu = 1/4, a = \lambda/\mu = 2, \rho = a/c = 1/2. It follows that

\pi_0 = \frac{1}{1 + 2 + 2 + \frac{4}{3} + \frac{16}{24}\cdot\frac{1}{1-1/2}} = \frac{3}{23}

and, using equation (6.4),

p(W > 0) = C(4,2) = \frac{2^4/4!}{1 - 1/2}\cdot\frac{3}{23} = \frac{4}{23}.
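The computation above is easy to check numerically. The following sketch is an illustration, not part of the original notes; the function name `erlang_c` is our own.

```python
from math import factorial

def erlang_c(c, a):
    # C(c, a) from equation (6.4): probability an arrival must wait in M/M/c.
    rho = a / c
    pi0 = 1.0 / (sum(a**j / factorial(j) for j in range(c))
                 + (a**c / factorial(c)) / (1 - rho))
    return (a**c / factorial(c)) / (1 - rho) * pi0

# Four servers, lambda = 1/2, mu = 1/4, so a = 2:
print(erlang_c(4, 2))   # 4/23, approximately 0.1739
```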

Example Consider an M/M/1 system with arrival rate \lambda = 1 and exponential service time with parameter \mu = 2. Then \rho = \lambda/\mu = 1/2. The expected waiting time is

E[W] = \frac{\rho/\mu}{1-\rho} = \frac{1}{2}

and the expected total delay is

E[T] = \frac{1/\mu}{1-\rho} = 1.

Now consider an M/M/2 system, where the arrival rate is \lambda_2 = 1 and each server has parameter \mu_2 = 1, so that the combined processing rate is 2; then a = 1/1 = 1 and \rho = 1/2. The probability of the system being empty is

\pi_0 = \frac{1}{1 + a + \frac{a^2/2}{1 - 1/2}} = \frac{1}{3}

and the Erlang c formula becomes

C(2,1) = \frac{a^2/2}{1-\rho}\,\pi_0 = \frac{1}{3}.

The expected waiting time for this system is therefore

E[W_2] = \frac{C(2,1)}{2\mu_2(1-\rho)} = \frac{1}{3}

and the expected total delay is

E[T_2] = \frac{1}{3} + \frac{1}{1} = \frac{4}{3}.

In general, increasing the number of servers (while keeping the total processing rate constant) will decrease the waiting time, but increase the total delay.
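The comparison in this example can be reproduced with a few lines of code; this is an illustrative sketch (the helper names are our own, not from the notes).

```python
from math import factorial

def erlang_c(c, a):
    rho = a / c
    pi0 = 1.0 / (sum(a**j / factorial(j) for j in range(c))
                 + (a**c / factorial(c)) / (1 - rho))
    return (a**c / factorial(c)) / (1 - rho) * pi0

def delays(lam, mu, c):
    # (E[W], E[T]) for an M/M/c queue with per-server rate mu.
    a, rho = lam / mu, lam / (c * mu)
    w = erlang_c(c, a) / (c * mu * (1 - rho))
    return w, w + 1 / mu

print(delays(1, 2, 1))   # (0.5, 1.0): the M/M/1 system above
print(delays(1, 1, 2))   # (0.333..., 1.333...): the M/M/2 system above
```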

Example Consider again the example with Poisson arrivals; rate one every two minutes and four servers, each with exponentially distributed service time with mean 4 minutes. What is the probability of having to wait more than one minute?

Solution

p(W > 1) = 1 - p(W \leq 1) = C(c,a)\exp\{-c\mu(1-\rho)\cdot 1\} = \frac{4}{23}\exp\left\{-4\cdot\frac{1}{4}\cdot\left(1 - \frac{1}{2}\right)\cdot 1\right\} \approx 0.11.
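As a quick numerical check of this tail probability (an illustration, with the value of C(4, 2) taken from the previous example):

```python
from math import exp

c, mu, rho = 4, 0.25, 0.5
C = 4 / 23                                  # C(4, 2) from the previous example
p_wait_over_1 = C * exp(-c * mu * (1 - rho) * 1)
print(round(p_wait_over_1, 2))              # 0.11
```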

6.5 The M/M/c/c Queue


The M/M/c/c queueing system has c servers, but no waiting room. An arrival which occurs when all servers are engaged is turned away. It is straightforward to show that the steady state probabilities for this system are given by


\pi_j = \frac{a^j}{j!}\,\pi_0, \qquad j = 0, \ldots, c,

where a = \lambda/\mu is the offered load. Since \sum_{j=0}^{c}\pi_j = 1, it follows that

\pi_0 = \left(\sum_{j=0}^{c}\frac{a^j}{j!}\right)^{-1}.

Definition 6.7 (Erlang b formula). The Erlang b formula is defined as the probability that all servers are engaged;

B(c,a) = p(N = c) = \frac{a^c/c!}{1 + a + a^2/2 + \ldots + a^c/c!}.

The actual arrival rate into the system, \lambda_a, is given by

\lambda_a = \lambda(1 - B(c,a))

and the rate at which customers are turned away is

\lambda_b = \lambda B(c,a).

The average number in the system is given by

E[N] = \sum_{j=1}^{c}\frac{a^j}{(j-1)!}\,\pi_0 = a\sum_{j=0}^{c-1}\pi_j = a(1 - \pi_c) = \frac{\lambda}{\mu}(1 - B(c,a)).

Note that, since there is no waiting, T = S (the total time is the service time) and hence

E[N] = \lambda_a E[T].

The system therefore satisfies Little's formula, which will be considered later in chapter 8, section 8.1.

Example For the system with 4 servers, each exponential with mean 4 and arrivals Poisson with rate one every two minutes, suppose customers are turned away when all four servers are engaged. What proportion of customers are turned away? The Erlang b formula gives

B(4,2) = \frac{2^4/4!}{1 + 2 + 4/2 + 8/6 + 16/24} \approx 0.095.
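The blocking probability can be computed either directly from Definition 6.7 or via the recursion of exercise 14 below. A small illustrative sketch (function names are our own):

```python
from math import factorial

def erlang_b(c, a):
    # Blocking probability B(c, a) for an M/M/c/c system with offered load a.
    return (a**c / factorial(c)) / sum(a**j / factorial(j) for j in range(c + 1))

def erlang_b_rec(c, a):
    # Same quantity via the recursion of exercise 14, starting from B(0, a) = 1.
    b = 1.0
    for k in range(1, c + 1):
        b = a * b / (k + a * b)
    return b

print(erlang_b(4, 2))       # 2/21, approximately 0.095
print(erlang_b_rec(4, 2))   # same value
```

The recursive form is numerically preferable for large c, since it avoids large factorials.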


6.6 The M/M/\infty Queueing System

Consider a system with Poisson arrivals and exponential service times and suppose that the number of servers is so large that a server can always be found. This is the M/M/\infty queueing system. The steady state distribution may be found by solving the balance equations

\lambda\pi_0 = \mu\pi_1, \qquad (\lambda + j\mu)\pi_j = \lambda\pi_{j-1} + (j+1)\mu\pi_{j+1}, \quad j \geq 1.

It follows that for all j,

(j+1)\mu\pi_{j+1} - \lambda\pi_j = j\mu\pi_j - \lambda\pi_{j-1} = \ldots = \mu\pi_1 - \lambda\pi_0 = 0.

From this, using a = \lambda/\mu,

\pi_{j+1} = \frac{a}{j+1}\,\pi_j = \frac{a^{j+1}}{(j+1)!}\,\pi_0.

Normalising,

1 = \sum_{j=0}^{\infty}\pi_j = \pi_0\sum_{j=0}^{\infty}\frac{a^j}{j!} = e^a\pi_0

so that

\pi_0 = e^{-a}

and

\pi_j = \frac{a^j}{j!}\,e^{-a}, \qquad j \geq 0.

Note that this is a Poisson distribution and E[N] = a. This is the limiting distribution as c \to +\infty for an M/M/c system or an M/M/c/c system, with arrival rate \lambda and service rate \mu.

Example Consider a computer network where users log on according to a Poisson process of rate \lambda = 1 customer every 3 minutes. Suppose that a user remains logged on for an exponentially distributed amount of time with mean 1/2 hour. Find the probability that, in steady state, there are 4 or fewer people logged on.

Solution The number of users is Poisson with parameter a = \lambda/\mu, where \lambda = 1/3 and \mu = \frac{1}{2\times 60} = \frac{1}{30}. It means that a = \frac{1}{3}\times 30 = 10. Looking up Poisson tables, p(N \leq 4) \approx 0.0293.
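Rather than looking the value up in Poisson tables, the tail can be summed directly; a minimal illustrative check:

```python
from math import exp, factorial

a = 10  # offered load: lambda = 1/3 per minute, mean session 30 minutes
p = exp(-a) * sum(a**k / factorial(k) for k in range(5))
print(round(p, 4))   # 0.0293
```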

6.7 Exercises
1. (a) Find p(N \geq n) for an M/M/1 system. (Use p(N = j) = (1-\rho)\rho^j.)

(b) What is the maximum allowable arrival rate in a system with service rate \mu so that

p(N \geq 10) = 10^{-3}?


2. A decision to purchase one of two machines must be made. The characteristics are as follows.

Machine 1 performs \mu transactions on average per hour, exponentially distributed, and costs B pounds per hour to operate. Machine 2 is able to process transactions exponentially with rate 2\mu, but costs 2B pounds per hour to operate.

The total cost of performing a transaction is the operation cost plus A pounds for each hour the transaction is in the system.

(a) Find expressions for the expected total cost per hour and plot against arrival rate. You may use the following. Let T denote the total time, W the waiting time and S the service time. Then

E[T] = \frac{E[N]}{\lambda} = \frac{1/\mu}{1-\rho}.

(b) Suppose A = B/10. For what range of \lambda is the expected cost using machine 1 less than the expected cost using machine 2?

3. Consider an M/M/1 queue in which each arrival brings a profit of 5 pounds and each unit of time delay costs 1 pound. Find the range of arrival rates for which the system makes a net profit. (Use E[T] = 1/(\mu - \lambda).)

4. Consider an M/M/1 queue with arrival rate \lambda customers per second.

(a) Show that

E[N_q] = \frac{\rho^2}{1-\rho}.

(b) Show that C(1, \rho) = \rho and hence that

E[W] = \frac{\rho}{\mu(1-\rho)}

and hence that

E[N_q] = \lambda E[W].
(This is also a consequence of Little's formula, which will be considered later in the course.)

(c) What service rate would give an average queue of 5 people? Recall that

E[W] = \frac{\rho E[S]}{1-\rho}

where S is the service time.

(d) Find the service rate so that

E[N_q|N_q > 0] = 5.


5. Consider an M/M/1/3 queueing system in which each customer accepted into the system brings in a profit of 5 pounds and each customer rejected results in a loss of 1 pound. Find the arrival rate (or rather the ratio of arrival to service rates) at which the system breaks even.

6. For an M/M/1/K system, show that

p(N = k|N < K) = \frac{p(N = k)}{1 - p(N = K)}.

Why does this probability represent the proportion of arriving customers who actually enter the system and find k customers in the system?

7. Suppose that two types of customers arrive at a queueing system with one server according to independent Poisson processes of rate \lambda/2. Both types of customers require exponential service times of rate \mu. Type 1 customers are always accepted into the system, but type 2 customers are turned away when the total number of customers in the system exceeds K.

(a) Sketch the transition rate diagram for N, the total number of customers in the system.

(b) Find the steady state transition probabilities for N.

8. Find p(N \geq c + k) for an M/M/c queue.

9. Customers arrive at a shop according to a Poisson process of rate 12 customers per hour. The shop has two clerks to attend to the customers. Suppose that it takes a clerk an exponentially distributed amount of time, with mean 5 minutes, to serve one customer.

(a) What is the probability that an arriving customer must wait to be served?

(b) Find the mean number of customers in the system and the mean time spent in the system.

(c) Find the probability that there are more than 4 customers in the system.

10. For an M/M/c system, show that

E[N] = E[N_q] + a.

11. Consider an M/M/c system. Verify, by explicit calculation, that the average number of busy servers is \lambda E[S], where S is a service time.


12. Enquiries arrive at an information centre according to a Poisson process of rate 10 enquiries per second. It takes a server 1/2 a second to answer each query.

(a) How many servers are needed if we require that the mean total delay for each enquiry should not exceed four seconds and 90% of queries should wait less than 8 seconds?

(b) What is the resulting probability that all servers are busy? idle?

13. Consider a queueing system in which the maximum processing rate is c\mu customers per second. Let k be the number of customers in the system. When k \geq c, c customers are served at a rate \mu each. When 0 < k < c, these k customers are served at a rate c\mu/k each. Assume Poisson arrivals of rate \lambda and exponentially distributed service times.

(a) Find the transition rate diagram for the system.

(b) Find the steady state transition probabilities for the system.

(c) Find E[W] and E[T].

(d) For c = 2, compare E[W] and E[T] for this system to those of M/M/1 and M/M/2 systems of the same maximum processing rate.

14. Show that the Erlang B formula satisfies the following recursive equation

B(c,a) = \frac{aB(c-1,a)}{c + aB(c-1,a)},

where a = \lambda E[S].

15. Consider an M/M/5/5 system in which the arrival rate is 10 customers per minute and the mean service time is 1/2 a minute.

(a) Find the probability of blocking a customer.

(b) How many more servers are required to reduce the blocking probability to 10%?

16. A tool rental shop has four floor sanders. Customers for floor sanders arrive according to a Poisson process at a rate of one customer every two days. The average rental time is exponentially distributed with mean two days. If the shop has no floor sanders available, then the customers will go to the shop across the road.

(a) Find the proportion of customers that go to the shop across the road.

(b) What is the mean number of floor sanders rented out?

(c) What is the increase in the probability of turning a customer away if one of the sanders breaks down and is not replaced?

17. Suppose that department A in a certain company has three private telephone lines connecting two sites. Calls arrive according to a Poisson process of rate 1 call per minute, and have an exponentially distributed holding time of two minutes. Calls that arrive when the three lines are busy are automatically redirected to public telephone lines. Suppose that department B also has three private telephone lines connecting the same sites and that it has the same arrival and service statistics.

(a) Find the proportion of all calls that are redirected to public lines.

(b) Suppose we consolidate the telephone traffic from the two departments and allow all calls to share the six lines. What proportion of calls are redirected to public lines?

Answers
1. (a)

p(N \geq n) = \sum_{j=n}^{\infty}(1-\rho)\rho^j = (1-\rho)\frac{\rho^n}{1-\rho} = \rho^n.

(b) \rho = \lambda/\mu, so that

p(N \geq 10) = \rho^{10} = 10^{-3},

so that

\lambda \leq \frac{\mu}{10^{3/10}} \approx \frac{\mu}{1.995}.
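A two-line numerical check of this answer (illustrative only):

```python
mu = 1.0
rho = 10 ** (-0.3)          # solves rho**10 = 1e-3
lam_max = mu * rho
print(lam_max)              # approximately mu / 1.995
```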

2. (a) Let C1 denote the hourly cost for machine 1 and C2 denote the hourly cost for machine 2. Then

E[C_1] = B + AE[N] = B + \frac{\lambda}{\mu-\lambda}A, \qquad E[C_2] = 2B + \frac{\lambda}{2\mu-\lambda}A.

(b) With A = B/10, the condition E[C_1] < E[C_2] becomes

\frac{\lambda}{10(\mu-\lambda)} < 1 + \frac{\lambda}{10(2\mu-\lambda)},

that is, \lambda\mu < 10(\mu-\lambda)(2\mu-\lambda). This reduces to

10\lambda^2 - 31\mu\lambda + 20\mu^2 > 0, \qquad \left(\frac{\lambda}{\mu} - 1.55\right)^2 > 1.55^2 - 2 = 0.4025,

so that

\lambda < (1.55 - 0.6344)\mu = 0.9156\mu \quad \text{or} \quad \lambda > (1.55 + 0.6344)\mu = 2.1844\mu.

The larger root does not give an admissible solution, because in that case \lambda > \mu, so the range is \lambda < 0.9156\mu.

3. A unit time delay for a customer costs on average 1, so the system makes a net profit when the expected delay cost per customer is less than the profit per customer:

\frac{1}{\mu-\lambda} < 5,

so that

\lambda < \mu - \frac{1}{5}.
4. (a)

E[N_q] = \sum_{n=1}^{\infty} n\pi_{n+1} = (1-\rho)\rho^2\sum_{n=1}^{\infty} n\rho^{n-1} = \frac{\rho^2}{1-\rho};

remainder of the computation found in lectures.

(b) Straight from lectures: C(1, \rho) = \rho. For the rest, plug into the M/M/c formula, with c = 1.

(c)

E[N_q] = \lambda E[W] = \frac{\lambda\rho E[S]}{1-\rho} = \frac{\rho^2}{1-\rho},

so that

5 = E[N_q] = \frac{\rho^2}{1-\rho}

or

\rho^2 + 5\rho - 5 = 0,

giving

\rho = \frac{-5 + \sqrt{25 + 20}}{2} = 0.854,

so that

\mu = \frac{\lambda}{0.854}.

(d)

E[N_q|N_q > 0] = \sum_{x=1}^{\infty} x\,p(N_q = x|N_q > 0) = \sum_{x=1}^{\infty} x\,\frac{p(N_q = x)}{p(N_q > 0)}.

Also,

p(N_q = 0) = p(N = 0 \text{ or } 1) = (1-\rho) + \rho(1-\rho) = 1 - \rho^2,

so that

p(N_q > 0) = 1 - (1-\rho^2) = \rho^2,

giving

E[N_q|N_q > 0] = \frac{1}{\rho^2}\sum_{x=1}^{\infty} x\,p(N_q = x) = \frac{E[N_q]}{\rho^2} = \frac{1}{1-\rho} = 5.

This gives

\rho = \frac{4}{5},

which may be rewritten as

\mu = \frac{5\lambda}{4}.

5. Customers are blocked if 3 are in the system. With arrival rate \lambda, service rate \mu and \rho = \lambda/\mu,

\pi_k = \frac{\rho^k(1-\rho)}{1-\rho^4}, \qquad k = 0, 1, 2, 3,

E[\text{profit per unit time}] = 5\lambda\left(1 - \frac{\rho^3(1-\rho)}{1-\rho^4}\right) - \lambda\,\frac{\rho^3(1-\rho)}{1-\rho^4},

so the break-even point is the solution to

5\left(1 - \frac{\rho^3(1-\rho)}{1-\rho^4}\right) = \frac{\rho^3(1-\rho)}{1-\rho^4};

that is, 5(1-\rho^4) - 6\rho^3 + 6\rho^4 = 0, or \rho^4 - 6\rho^3 + 5 = 0. \rho = 1 is a solution (easy: check whether or not there are others).


6. Easy application of conditional probability. The condition means actually entering; the Markov property gives that the distribution in the system is the same as the distribution seen by an arriving customer.

7. With \rho = \lambda/\mu,

\pi_n = \begin{cases} \rho^n\pi_0 & 0 \leq n \leq K \\ \rho^K(\rho/2)^{n-K}\pi_0 & n \geq K+1. \end{cases}

The constraint \sum_{n=0}^{\infty}\pi_n = 1 gives

\pi_0 = \frac{(1-\rho)(2-\rho)}{2 - \rho - \rho^{K+1}}.


8.

p(N \geq c+k) = \frac{\dfrac{(c\rho)^c}{c!(1-\rho)}\,\rho^k}{\sum_{j=0}^{c-1}\dfrac{(c\rho)^j}{j!} + \dfrac{(c\rho)^c}{c!(1-\rho)}}.

9. (a) a = \lambda/\mu = 1, so

\pi_0 = \frac{1}{1+1+1} = \frac{1}{3}, \qquad \pi_1 = a\pi_0 = \frac{1}{3},

p(\{\text{wait}\}) = 1 - \pi_0 - \pi_1 = 1 - \frac{1}{3} - \frac{1}{3} = \frac{1}{3}.

(b)

\pi_0 = \frac{1}{3}, \quad \pi_1 = \frac{1}{3}, \quad \pi_k = \frac{1}{6}\left(\frac{1}{2}\right)^{k-2}, \quad k \geq 2,

so

E[N] = \frac{1}{3} + \frac{1}{3}\sum_{k=2}^{\infty} k\left(\frac{1}{2}\right)^{k-1} = \frac{1}{3} + \frac{1}{3}\times 3 = \frac{4}{3}.

Using E[N] = \lambda E[T] with \lambda = 1/5 per minute, E[T] = \frac{20}{3} minutes.

(c) p(N > 4) = \sum_{k=5}^{\infty}\pi_k = \frac{1}{24}.

10. and 11. Use a = \lambda/\mu.

\pi_k = \begin{cases} \dfrac{a^k}{k!}\,\pi_0 & 0 \leq k \leq c-1 \\ \dfrac{a^c}{c!}\left(\dfrac{a}{c}\right)^{k-c}\pi_0 & k \geq c. \end{cases}

Note that

\pi_0 = \left(\frac{a^c}{c!}\,\frac{c}{c-a} + \sum_{n=0}^{c-1}\frac{a^n}{n!}\right)^{-1}.

Let \beta_k = p(\{k \text{ servers busy}\}); then \beta_k = \pi_k for 0 \leq k \leq c-1 and \beta_c = 1 - \sum_{k=0}^{c-1}\beta_k. It follows that

E[\text{busy servers}] = \pi_0\left(\sum_{n=0}^{c-1} n\,\frac{a^n}{n!} + c\,\frac{a^c}{c!}\,\frac{c}{c-a}\right)
 = \pi_0\,a\left\{\sum_{n=0}^{c-2}\frac{a^n}{n!} + \frac{a^{c-1}}{(c-1)!}\,\frac{c}{c-a}\right\}
 = \pi_0\,a\left\{\sum_{n=0}^{c-1}\frac{a^n}{n!} - \frac{a^{c-1}}{(c-1)!} + \frac{a^{c-1}}{(c-1)!}\,\frac{c}{c-a}\right\}
 = \pi_0\,a\left\{\sum_{n=0}^{c-1}\frac{a^n}{n!} + \frac{a^c}{c!}\,\frac{c}{c-a}\right\}
 = a = \lambda E[S]
as required.

12. Use \lambda = 10, \mu = 2, \rho = \frac{5}{c}, a = 5.

(a)

E[T] = E[W] + E[S] = \frac{1}{\mu}\left(\frac{C(c,a)}{c(1-\rho)} + 1\right) = \frac{1}{2}\left(\frac{C(c,5)}{c-5} + 1\right) \leq 4.

Now compute various values C(1, 5), C(2, 5), C(3, 5), etc., and see which one works. For over 90 percent waiting less than 8 seconds, use the waiting time distribution theorem:

0.9 \leq p(W < 8) = 1 - C(c,5)\exp\{-16(c-5)\}.

Compute (or look up tables) C(1, 5), C(2, 5), C(3, 5), \ldots and find the first c which gives

C(c,5)\exp\{-16(c-5)\} \leq 0.1.


(b) \pi_0 for idle, 1 - \pi_0 for busy.

13. (a) (picture: draw it yourself)

(b) Let \rho = \frac{\lambda}{c\mu}; since the total processing rate is c\mu whenever the system is non-empty, \pi_k = (1-\rho)\rho^k for k \geq 0.

(c)

E[N_q] = \sum_{k=c+1}^{\infty}(k-c)\pi_k = \frac{\rho^{c+1}}{1-\rho},

so that, by Little's formula,

E[W] = \frac{E[N_q]}{\lambda} = \frac{\rho^c}{(1-\rho)M}, \qquad \text{where } M = c\mu \text{ is the maximum processing rate.}

Note that

E[T] = \frac{E[N]}{\lambda} = \frac{1}{(1-\rho)M}, \qquad E[S] = E[T] - E[W] = \frac{1-\rho^c}{(1-\rho)M},

so that E[T] = E[W] + E[S]. For c = 2, plug in values.

(d) For the M/M/1 system of the same maximum processing rate, the service rate is M = c\mu and \rho = \lambda/M, so

E[N_q] = \frac{\rho^2}{1-\rho}, \qquad E[W] = \frac{E[N_q]}{\lambda} = \frac{\lambda}{M(M-\lambda)}, \qquad E[S] = \frac{1}{M},

so that

E[T] = \frac{\lambda}{M(M-\lambda)} + \frac{1}{M} = \frac{1}{M-\lambda} = \frac{1}{M(1-\rho)}.

For M/M/2, M = 2\mu. Set \rho = \lambda/2\mu; then \pi_0 = \frac{1-\rho}{1+\rho} and \pi_k = 2\rho^k\pi_0 for k \geq 1, so

E[N_q] = \sum_{k\geq 0} k\pi_{2+k} = \frac{2\rho^3}{1-\rho^2} = \frac{\lambda^3}{\mu(4\mu^2-\lambda^2)}, \qquad E[W] = \frac{E[N_q]}{\lambda} = \frac{\lambda^2}{\mu(4\mu^2-\lambda^2)},

E[S] = \frac{1}{\mu}, \qquad E[T] = E[W] + E[S].

14. Let Q = \sum_{j=0}^{c-1}\frac{a^j}{j!}; then

B(c,a) = \frac{\frac{a}{c}B(c-1,a)\,Q}{Q + \frac{a^c}{c!}} = \frac{aB(c-1,a)}{c + aB(c-1,a)}.

15. (a) a = 5 and the blocking probability is B(5, 5).

(b) Find the smallest c such that B(c, 5) < 0.1; this gives c = 8, so three more servers are required.

16. (a) The proportion is B(4, 1).

(b) E[N] = 1 - B(4,1) = \frac{64}{65}.

(c) The percentage of current custom lost is \frac{B(3,1) - B(4,1)}{1 - B(4,1)} \times 100.

17. (a) B(3, 2)

(b) B(6, 4)


Chapter 7

Markovian Queues in Equilibrium


This section extends considers models that are not Markov, but may be mapped onto Markov models and analysed using Markov techniques. In one class of models, the interarrival times and service times are modelled by the `Erlang r' distribution, another name for the distribution. In another class of models, the service times are modelled using a hyperexponential distribution. These models are important, because although they are not Markov, they may be mapped onto a Markov model. Markov models are easier to analyse. In modelling, one can often obtain reasonable results if one knows the coecient of variation; namely, the ratio of the variance to the square of the expected value. For a random variable S , if Var(S) < 1, then the distribution may be approximated by a distribution E[S]2 with the same expected value and variance; if Var(S) > 1, then the distribution may be approximated
E[S]2

by a hyperexponential with the same mean and variance.

The Queue M/Er/1

Consider a system where the arrivals are according to a Poisson process with parameter \lambda, but the service requires r stages; each stage requires an exponential service time, independent of the others, with parameter r\mu, so that the expected service time is 1/\mu. This is the M/Er/1 queue. The E denotes `Erlang', who was the first to consider such models. The idea is to take as state variable the number of service stages remaining in the system: an arrival adds r stages (at rate \lambda) and a stage completion removes one stage (at rate r\mu).

[Figure 7.1: figure for M/Er/1; transition diagram over the states 0, 1, 2, \ldots, r, r+1, \ldots counting the number of stages in the system]

The balance equations are


\pi_j = 0, \quad j < 0; \qquad \lambda\pi_0 = r\mu\pi_1; \qquad (\lambda + r\mu)\pi_j = \lambda\pi_{j-r} + r\mu\pi_{j+1}, \quad j \geq 1.

One standard way to solve these is to use the probability generating function. In some books, this is also known as the z-transform. This leads to an expression that can be used to find the steady state probabilities. Unfortunately, recovering the distribution from the p.g.f. is often technically demanding.

Recall that the p.g.f. G(s) is given by

G(s) = \sum_{j\geq 0} s^j\pi_j.

The balance equations yield

\sum_{j=1}^{\infty}(\lambda + r\mu)s^j\pi_j = \lambda\sum_{j=1}^{\infty} s^j\pi_{j-r} + r\mu\sum_{j=1}^{\infty} s^j\pi_{j+1},

so that (since \pi_{j-r} = 0 for j < r)

(\lambda + r\mu)(G(s) - \pi_0) = \lambda s^r G(s) + \frac{r\mu}{s}(G(s) - \pi_0 - \pi_1 s).

This gives

G(s) = \frac{\pi_0\left(\lambda + r\mu - \frac{r\mu}{s}\right) - r\mu\pi_1}{\lambda + r\mu - \lambda s^r - \frac{r\mu}{s}}.

Using \lambda\pi_0 = r\mu\pi_1 gives

G(s) = \frac{r\mu\pi_0(1-s)}{r\mu + \lambda s^{r+1} - (\lambda + r\mu)s} = \frac{r\mu\pi_0(1-s)}{r\mu(1-s) - \lambda s(1-s^r)} = \frac{r\mu\pi_0}{r\mu - \lambda s\,\frac{1-s^r}{1-s}}.

Using G(1) = 1 gives

1 = \frac{r\mu\pi_0}{r\mu - \lambda r}

so that

\pi_0 = 1 - \frac{\lambda}{\mu}.

Setting \rho = \frac{\lambda}{\mu} gives

G(s) = \frac{r\mu(1-\rho)(1-s)}{r\mu + \lambda s^{r+1} - (\lambda + r\mu)s}.

With a little more analysis, it follows that the structure of the steady state distribution is a sum of geometric distributions. Note that

G(s) = \frac{1-\rho}{1 - \frac{\rho}{r}(s + s^2 + \ldots + s^r)}.

Let s_1, \ldots, s_r denote the roots of the function

1 - \frac{\rho}{r}(s + s^2 + \ldots + s^r).

Then,

G(s) = \frac{1-\rho}{\left(1 - \frac{s}{s_1}\right)\cdots\left(1 - \frac{s}{s_r}\right)}.

Using a partial fraction expansion,

G(s) = (1-\rho)\sum_{j=1}^{r}\frac{A_j}{1 - \frac{s}{s_j}},

where

A_j = \prod_{n=1,\,n\neq j}^{r}\frac{1}{1 - \frac{s_j}{s_n}}.

Using G(s) = \sum_{j=0}^{\infty} s^j\pi_j yields

\pi_j = (1-\rho)\sum_{n=1}^{r} A_n\,s_n^{-j}.
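As a numerical sanity check of this expansion (an illustration, not from the original text), one can compute the roots and steady state probabilities for, say, r = 2 and \rho = 1/2:

```python
import numpy as np

r, rho = 2, 0.5
# Roots of 1 - (rho/r)(s + s^2 + ... + s^r); np.roots wants highest power first.
coeffs = [-(rho / r)] * r + [1.0]
s = np.roots(coeffs).astype(complex)
A = np.array([1.0 / np.prod([1 - s[j] / s[n] for n in range(r) if n != j])
              for j in range(r)])

def pi(j):
    # pi_j = (1 - rho) * sum_n A_n * s_n^(-j); take the real part.
    return float(((1 - rho) * np.sum(A * s ** (-j))).real)

print(pi(0))                           # equals 1 - rho = 0.5
print(sum(pi(j) for j in range(200)))  # sums to approximately 1
```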

7.1 The Er/M/1 Queue

Now consider the queueing system Er/M/1 for which the arrival density function is given by

a(t) = \frac{r\lambda(r\lambda t)^{r-1}}{(r-1)!}\,e^{-r\lambda t}, \qquad t \geq 0,

and the service time density function is given by

b(t) = \mu e^{-\mu t}.

Here the roles of the arrival and service times are interchanged from the previous subsection. In many ways, the two queueing systems can be considered as duals of each other. The system operates as follows: an arriving customer is introduced into an r-stage Erlangian facility, given in the diagram for the Er/M/1 queue.

[Figure 7.2: figure for Er/M/1; an r-stage arrival facility, each stage exponential with parameter r\lambda]

This is considered to be an arriving facility rather than a service facility. An arriving customer must pass through r exponential stages, each with parameter r\lambda. When he exits from stage r, he is said to `arrive' at the Er/M/1 queueing facility.

Having arrived (i.e. reaching stage r), the customer joins the queue and waits for service.

The steady state probabilities are to be understood as follows: if there are j customers in the system, with another customer in the process of arriving, at stage k of the arrival process, but who has not yet arrived, then the system is in state rj + k. The steady state probabilities are given by the equations

r\lambda\pi_0 = \mu\pi_r
r\lambda\pi_j = r\lambda\pi_{j-1} + \mu\pi_{j+r}, \quad 1 \leq j \leq r-1
(r\lambda + \mu)\pi_j = r\lambda\pi_{j-1} + \mu\pi_{j+r}, \quad j \geq r.


As is usual for queueing systems, the steady state probabilities are obtained by considering the probability generating function G(s) = \sum_{j\geq 0} s^j\pi_j. Multiplying each balance equation by s^j and summing over j gives

(\mu + r\lambda)\sum_{j=r}^{\infty}\pi_j s^j + r\lambda\sum_{j=0}^{r-1}\pi_j s^j = r\lambda s\,G(s) + \frac{\mu}{s^r}\sum_{j=r}^{\infty}\pi_j s^j.

Writing P(s) = \sum_{j=0}^{r-1}\pi_j s^j, so that \sum_{j=r}^{\infty}\pi_j s^j = G(s) - P(s), this rearranges to

r\lambda(1-s)G(s) = \mu(G(s) - P(s))\,\frac{1-s^r}{s^r}.

Using \rho = \lambda/\mu, it follows that

G(s) = \frac{(1-s^r)\sum_{j=0}^{r-1}\pi_j s^j}{r\rho s^{r+1} - (1+r\rho)s^r + 1}.

This may not seem very useful, but the following trick helps. Consider the extension to the complex plane, where the p.g.f. is the z-transform. Clearly the denominator has r + 1 zeros, one of which is s = 1. In the complex plane, set s = Re^{i\theta}. The aim is to show firstly that any root satisfying |s| \geq 1 is real and positive, and hence that there is exactly one root satisfying |s| = 1 (namely s = 1) and exactly one root satisfying |s| > 1. If, for a > 0,

as^{r+1} - (1+a)s^r + 1 = 0,

then

s^r = \frac{s^r - 1}{a(s-1)} = \frac{(s^r - 1)(\bar{s} - 1)}{a(s-1)(\bar{s} - 1)} = \frac{(s^r - 1)(\bar{s} - 1)}{a(R^2 - 2R\cos\theta + 1)},

where \bar{s} denotes the complex conjugate. Taking the imaginary part of the original equation gives

aR\sin((r+1)\theta) = (1+a)\sin(r\theta),

and an analysis of this constraint together with the real part shows that there is no root with R \geq 1 and \theta \in (0, \pi); a direct check of s = -R with R \geq 1 shows that negative real roots of modulus at least one are also excluded. It therefore follows that any root satisfying |s| \geq 1 is real and positive. Now consider f(s) = as^{r+1} - (1+a)s^r + 1 on the positive real axis, with a = r\rho and \rho < 1. Taking the derivative, f has exactly one turning point in (0, \infty); f(1) = 0, f'(1) = a - r < 0 and f(s) \to +\infty as s \to \infty, so f is decreasing at 1 and then increasing, and there is exactly one root satisfying s > 1. This root is real and positive; we denote the root by s_0. Since G(s) is analytic on |s| < 1, it follows that the r - 1 zeros of the denominator inside |s| < 1 must correspond to r - 1 zeros inside |s| < 1 of the numerator. It follows that

\frac{r\rho s^{r+1} - (1+r\rho)s^r + 1}{\sum_{j=0}^{r-1}\pi_j s^j} = K(1-s)\left(1 - \frac{s}{s_0}\right),

where K is a constant to be evaluated. It follows that

G(s) = \frac{1-s^r}{K(1-s)\left(1 - \frac{s}{s_0}\right)}.

The constant is evaluated using G(1) = 1. This yields

K = \frac{r}{1 - s_0^{-1}},

so that

G(s) = \frac{(1-s^r)\left(1 - \frac{1}{s_0}\right)}{r(1-s)\left(1 - \frac{s}{s_0}\right)}.

The steady state probabilities (\pi_j)_{j=0}^{\infty} may be obtained now, for example, by a partial fraction expansion.

G(s) = (1-s^r)\left(\frac{1}{r(1-s)} - \frac{1}{rs_0\left(1 - \frac{s}{s_0}\right)}\right)
 = \frac{1-s^r}{r}\left(\sum_{j=0}^{\infty} s^j - \frac{1}{s_0}\sum_{j=0}^{\infty}\left(\frac{s}{s_0}\right)^j\right)
 = \frac{1}{r}\sum_{j=0}^{r-1}\left(1 - \frac{1}{s_0^{j+1}}\right)s^j + \frac{s_0^r - 1}{r}\sum_{j=r}^{\infty}\frac{s^j}{s_0^{j+1}}.

It follows that the steady state probabilities are given by

\pi_j = \begin{cases} \dfrac{1}{r}\left(1 - s_0^{-(j+1)}\right) & 0 \leq j \leq r-1 \\[2mm] \dfrac{s_0^r - 1}{r}\,s_0^{-(j+1)} & j \geq r. \end{cases}

This is a very convenient formula. There is still (of course) the problem of finding the root s_0. This may be done numerically.
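For example, the root s_0 may be found with a polynomial root finder. The following illustrative sketch (for r = 2 and \rho = 1/2, values of our own choosing) also verifies that the resulting probabilities sum to one:

```python
import numpy as np

r, rho = 2, 0.5
# r*rho*s^(r+1) - (1 + r*rho)*s^r + 1, coefficients from the highest power down.
coeffs = [r * rho, -(1 + r * rho)] + [0.0] * (r - 1) + [1.0]
roots = np.roots(coeffs)
s0 = max(z.real for z in roots if abs(z.imag) < 1e-9 and z.real > 1)

def pi(j):
    if j <= r - 1:
        return (1 - s0 ** (-(j + 1))) / r
    return (s0 ** r - 1) / r * s0 ** (-(j + 1))

print(s0)                              # (1 + sqrt(5))/2 for these parameters
print(sum(pi(j) for j in range(500)))  # approximately 1
```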

7.2 Bulk Arrival Systems

Attention is restricted to computation of the probability generating function. Consider the M/Er/1 system from a different point of view; each `customer' arrival is, in fact, the arrival of r customers, each of which requires only a single stage of service. A realistic model for bulk arrivals takes into account the possibility that the number of arrivals may vary. Set

g_k = p(\{\text{an `arrival' has } k \text{ customers}\}).

Let \beta denote the p.g.f. for the bulk size. Then

\beta(s) = \sum_{k\geq 0} s^k g_k.

Assume that the arrival rate of `bulks' is \lambda. Then the arrival rate of bulks of size k is \lambda g_k. Let \pi_k denote the equilibrium probability of k customers in the system; then

\lambda\pi_0 = \mu\pi_1
(\lambda + \mu)\pi_k = \mu\pi_{k+1} + \lambda\sum_{j=0}^{k-1}\pi_j g_{k-j}, \quad k \geq 1.

The standard technique to obtain information about the steady state distribution is to use its p.g.f.:

(\lambda + \mu)\sum_{k=1}^{\infty} s^k\pi_k = \frac{\mu}{s}\sum_{k=1}^{\infty} s^{k+1}\pi_{k+1} + \lambda\sum_{k=1}^{\infty} s^k\sum_{j=0}^{k-1}\pi_j g_{k-j}.

Note that

\sum_{k=1}^{\infty}\sum_{j=0}^{k-1}\pi_j g_{k-j}s^k = \sum_{j=0}^{\infty} s^j\pi_j\sum_{k=j+1}^{\infty} s^{k-j}g_{k-j} = \left(\sum_{j=0}^{\infty} s^j\pi_j\right)\left(\sum_{j=1}^{\infty} s^j g_j\right) = G(s)\beta(s).

It follows that

(\lambda + \mu)(G(s) - \pi_0) = \frac{\mu}{s}(G(s) - \pi_0 - \pi_1 s) + \lambda G(s)\beta(s),

yielding

G(s) = \frac{\mu\pi_0(1-s)}{\mu(1-s) - \lambda s(1 - \beta(s))}.

Using G(1) = 1, together with \beta'(1) = E[B], where B denotes bulk size, one obtains \pi_0 = 1 - \frac{\lambda E[B]}{\mu} = 1 - \rho (this is the definition of \rho here). It follows that

G(s) = \frac{(1-\rho)\mu(1-s)}{\mu(1-s) - \lambda s(1 - \beta(s))}.

Bulk Service Systems

Consider the queueing system where, when a server becomes available, he takes a group of r customers, if the number of customers waiting is greater than or equal to r. If there are not r available, he waits until there is a group of size r. Assume the service of the group takes an exponential time, parameter \mu, independent of the other customers. Such a system is an Er/M/1 system. It seems a waste for the server to remain idle until r customers are waiting, so assume instead that when the server becomes free he will take r customers if there are r available, or the total number waiting if it is less. Assume that the service time for a group is always exponential, parameter \mu. The steady state probabilities are given by

(\lambda + \mu)\pi_k = \mu\pi_{k+r} + \lambda\pi_{k-1}, \quad k \geq 1
\lambda\pi_0 = \mu(\pi_1 + \ldots + \pi_r).

The p.g.f. satisfies

(\lambda + \mu)(G(s) - \pi_0) = \frac{\mu}{s^r}\left(G(s) - \sum_{k=0}^{r} s^k\pi_k\right) + \lambda s\,G(s),

yielding

G(s) = \frac{\mu\sum_{k=0}^{r} s^k\pi_k - (\lambda+\mu)\pi_0 s^r}{\lambda s^{r+1} - (\lambda+\mu)s^r + \mu}.

Note that, by the equation for state 0,

s^r(\lambda + \mu)\pi_0 = s^r\mu\sum_{k=0}^{r}\pi_k.

Setting \rho = \lambda/r\mu, this yields

G(s) = \frac{\sum_{k=0}^{r-1}\pi_k(s^k - s^r)}{r\rho s^{r+1} - (1+r\rho)s^r + 1}.

The denominator of this equation is precisely that found in the study of the Er/M/1 system. It has r + 1 roots: s = 1 is a root, there are r - 1 roots in |s| < 1 (complex plane) and 1 root outside |s| \geq 1, which we call s_0. Using the fact that G(s) is analytic in |s| < 1, and that the numerator has r roots, one of which is s = 1, it follows that

G(s) = \frac{1}{K\left(1 - \frac{s}{s_0}\right)},

yielding

G(s) = \frac{1 - \frac{1}{s_0}}{1 - \frac{s}{s_0}}.

It follows that the distribution of the number of customers in the bulk system is geometric, satisfying

\pi_k = \left(1 - \frac{1}{s_0}\right)\left(\frac{1}{s_0}\right)^k.

7.3 Series Parallel Stages: Generalisations

The Erlang distributions provide a class of distributions where Var(X) < E[X]^2. If one is prepared to model a distribution with specified mean and variance by an Erlang distribution, one may find an approximating distribution if this criterion is satisfied. Often, though, this criterion does not hold. It would be useful to find other Markov models for those situations. Consider the random variable X with p.d.f.

f_X(x) = \sum_{j=1}^{R}\alpha_j\mu_j e^{-\mu_j x}, \qquad x \geq 0,

where \alpha_j \geq 0 and \sum_j\alpha_j = 1. This random variable is known as the hyperexponential random variable, denoted H_R.

To model a queueing system where arrivals are according to a Poisson process and service is according to an H_R distribution, one may consider that an arriving customer waits until the service facility is available. The service facility takes only one customer. When the service facility is available, the customer proceeds to service stage j with probability \alpha_j. At service stage j, the service time is exponential with parameter \mu_j. The customer departs after this time and then the facility becomes available. The system constructed in this way is clearly Markovian. Note that

E[X] = \sum_{j=1}^{R}\frac{\alpha_j}{\mu_j}, \qquad \text{Var}(X) = 2\sum_{j=1}^{R}\frac{\alpha_j}{\mu_j^2} - E[X]^2.

Note that (by Hölder's inequality, or Cauchy-Schwarz)

\left(\sum_{j=1}^{R}\frac{\alpha_j}{\mu_j}\right)^2 \leq \sum_{j=1}^{R}\frac{\alpha_j}{\mu_j^2},

so that the coefficient of variation C_X = \sqrt{\text{Var}(X)}/E[X] satisfies

C_X \geq 1,

so this Markovian system has the desired property.
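The moment formulas above are easy to evaluate for a concrete mixture. A brief illustrative check (the branch probabilities and rates below are values of our own choosing):

```python
alphas = [0.25, 0.75]   # branch probabilities (illustrative values)
mus = [0.5, 2.0]        # branch rates (illustrative values)

EX = sum(a / m for a, m in zip(alphas, mus))
VarX = 2 * sum(a / m**2 for a, m in zip(alphas, mus)) - EX**2
print(EX, VarX / EX**2)  # the squared coefficient of variation is at least 1
```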

7.4 Exercises

1. Consider the Markovian queueing system shown below. Branch labels are birth and death rates. Node labels give the number of customers in the system.

[Figure 7.3: Figure for Q1; transition diagram on the states 0, 1, 2]

(a) Find the steady state distribution.

(b) Find the average number of customers in the system.

(c) Write down the transition rate matrix for this problem.

2. Consider an Ek/En/1 queueing system where no queue is permitted to form. A customer who arrives to find the service facility busy is `lost'. Let E_{ij} denote the system state in which the `arriving' customer is at the ith arrival stage and the customer in service is in the jth service stage. Let 1/k\lambda be the average time spent in any arrival stage and 1/n\mu be the average time spent in any service stage.

(a) Draw the state transition diagram showing all the transition rates.

(b) Write down the equilibrium equation for E_{ij}, where 0 \leq i \leq k-1 and 0 \leq j \leq n.

3. Consider an M/Er/1 system, where no queue is permitted to form. Let j denote the number of stages left in the system and let \pi_j denote the equilibrium probability of being in state E_j.

(a) Find \pi_j for j = 0, 1, \ldots, r.

(b) Find the equilibrium probability that the system is busy.

4. Consider an M/M/1 system where customers arrive at an average rate of 2\lambda per minute and the service rate is \mu. Suppose that exactly two customers arrive at each arrival instant.

(a) Draw the state transition diagram.

(b) Write down the steady state equations.

(c) Let \rho = 2\lambda/\mu. Let G(s) denote the probability generating function. Express G(s) in terms of \rho and s.

(d) Find the mean and variance of the number of customers in the system from G(s).

5. Consider an M/M/1 queueing system with parameters \lambda and \mu (that is, the arrival rate of a batch of customers is \lambda, the service rate per customer is \mu). At each arrival instant, one new customer will enter the system with probability 1/2, or two new customers will enter with probability 1/2.

(a) Draw the steady state transition diagram for the system.

(b) Write down the equilibrium equations for \pi_k.

(c) Find G(s) (the probability generating function) and evaluate any constants in the expression so that G(s) is given in terms of s, \lambda and \mu. If possible, eliminate any common factors from the numerator and the denominator. This will make life simpler for you in the next part.

(d) Use the generating function to find the expected number of customers in the system at equilibrium.

6. Consider the M/Er/1 bulk arrival system, where

g_i = p(\{\text{arrival of bulk size } i\}) = \alpha^i(1-\alpha).

Find \pi_k, the equilibrium probability of finding k in the system.

7. Consider an M/H2/1 system in which no queue is permitted to form. Service is of the hyperexponential type, where \mu_1 = 2\alpha_1\mu and \mu_2 = 2(1-\alpha_1)\mu.

(a) Find the equilibrium probability that the system is empty.

(b) Find the probability that the system is busy.

(c) Find the probability that server 1 is occupied.

8. Consider an M/M/1 system with the following variation: Whenever the server becomes free, he accepts two customers into service simultaneously, if at least two are available. Of these two customers, only one receives service. When the service for this one is completed, both depart. The other customer got a `free ride'. In all cases, the service time is exponentially distributed with average value 1/\mu and the average (Poisson) arrival rate is \lambda customers per second.

(a) Draw the appropriate state diagram.

(b) Write down the appropriate difference equations for \pi_k, the equilibrium probability of finding k in the system.

(c) Find G(s), the probability generating function, in terms of \pi_0 and \pi_1.

(d) Express \pi_1 in terms of \pi_0.


Answers
1. (a)

\lambda\pi_0 = \mu\pi_1 + \mu\pi_2, \qquad (\lambda+\mu)\pi_1 = \lambda\pi_0, \qquad \mu\pi_2 = \lambda\pi_1,

giving

\pi_0 = \frac{\mu}{\lambda+\mu}, \qquad \pi_1 = \frac{\lambda\mu}{(\lambda+\mu)^2}, \qquad \pi_2 = \frac{\lambda^2}{(\lambda+\mu)^2}.

(b)

E[N] = \pi_1 + 2\pi_2 = \frac{\lambda(\mu + 2\lambda)}{(\lambda+\mu)^2}.

(c)

Q = \begin{pmatrix} -\lambda & \lambda & 0 \\ \mu & -(\lambda+\mu) & \lambda \\ \mu & 0 & -\mu \end{pmatrix}.

2. (a) With E_{ij} as in the question (arriving customer at arrival stage i, customer in service at service stage j, where j = 0 means nobody is in service), the transition rates are

q_{(i,j),(i+1,j)} = k\lambda, \quad 0 \leq i \leq k-2, \; 0 \leq j \leq n,
q_{(k-1,0),(0,1)} = k\lambda,
q_{(k-1,j),(0,j)} = k\lambda, \quad 1 \leq j \leq n,
q_{(i,j),(i,j+1)} = n\mu, \quad 1 \leq j \leq n-1,
q_{(i,n),(i,0)} = n\mu.

The state (i, 0) means that a customer is in the ith stage of arrival; nobody is in service.

(b)

(n\mu + k\lambda)\pi_{ij} = k\lambda\pi_{i-1,j} + n\mu\pi_{i,j-1}, \quad 1 \leq i \leq k-1, \; 2 \leq j \leq n,
k\lambda\pi_{i,0} = n\mu\pi_{i,n} + k\lambda\pi_{i-1,0}, \quad 1 \leq i \leq k-1, \qquad k\lambda\pi_{0,0} = n\mu\pi_{0,n},
(n\mu + k\lambda)\pi_{0,1} = k\lambda(\pi_{k-1,0} + \pi_{k-1,1}),
(n\mu + k\lambda)\pi_{0,j} = k\lambda\pi_{k-1,j} + n\mu\pi_{0,j-1}, \quad 2 \leq j \leq n,
(n\mu + k\lambda)\pi_{i,1} = k\lambda\pi_{i-1,1}, \quad 1 \leq i \leq k-1.

CHAPTER 7. MARKOVIAN QUEUES IN EQUILIBRIUM

3. (a) Balance equations are

λπ0 = rμπ1
rμπr = λπ0
rμπj = rμπ_{j+1}, 1 ≤ j ≤ r − 1.

It follows that

πr = · · · = π1 = (λ/(rμ)) π0,

so that, after normalising,

π0 = μ/(λ + μ), πj = λ/(r(λ + μ)), 1 ≤ j ≤ r.

(b) p({busy}) = 1 − π0 = λ/(λ + μ).

4. (a) λ_{j,j+2} = λ, μ_{j+1,j} = μ, for all j ≥ 0.

(b)

λπ0 = μπ1
(λ + μ)π1 = μπ2
(λ + μ)πj = λπ_{j−2} + μπ_{j+1}, j ≥ 2.

(c) Multiplying the equations for j ≥ 2 by s^j and summing,

(λ + μ)(G(s) − π1 s − π0) = λs² G(s) + (μ/s)(G(s) − s²π2 − sπ1 − π0).

Using π1 = (λ/μ)π0 = (ρ/2)π0 and π2 = (λ/μ)(1 + λ/μ)π0 = (ρ/2)(1 + ρ/2)π0, where ρ = 2λ/μ, rearranging gives

π0 = G(s)(1 − (ρ/2)s(1 + s)).

Using G(1) = 1 yields

π0 = 1 − ρ,

yielding

G(s) = (1 − ρ)/(1 − (ρ/2)s(1 + s)).

(d)

G′(s) = (1 − ρ)(ρ/2)(1 + 2s)/(1 − (ρ/2)s(1 + s))²

G″(s) = (1 − ρ)ρ/(1 − (ρ/2)s(1 + s))² + (1 − ρ)(ρ²/2)(1 + 2s)²/(1 − (ρ/2)s(1 + s))³,

so that

E[N] = G′(1) = 3ρ/(2(1 − ρ))

Var(N) = G″(1) + G′(1) − G′(1)² = ρ/(1 − ρ) + 9ρ²/(2(1 − ρ)²) + 3ρ/(2(1 − ρ)) − 9ρ²/(4(1 − ρ)²) = ρ(10 − ρ)/(4(1 − ρ)²).

5. (a)

λ_{i,i+1} = λ/2, λ_{i,i+2} = λ/2, μ_{i+1,i} = μ, i ≥ 0.

(b)

λπ0 = μπ1
(λ + μ)π1 = (λ/2)π0 + μπ2
(λ + μ)πj = (λ/2)(π_{j−2} + π_{j−1}) + μπ_{j+1}, j ≥ 2.

(c) Multiplying by s^j and summing over j ≥ 2,

(λ + μ)(G(s) − sπ1 − π0) = (λ/2)(s²G(s) + s(G(s) − π0)) + (μ/s)(G(s) − s²π2 − sπ1 − π0),

so that

G(s){(λ + μ) − (λ/2)s² − (λ/2)s − μ/s} = (λ + μ)(π0 + sπ1) − (λ/2)sπ0 − (μ/s)(π0 + sπ1 + s²π2).

Using π1 = (λ/μ)π0 and π2 = (λ/μ)(1/2 + λ/μ)π0, this simplifies to

G(s){(2μ(s − 1) − λs(s + 2)(s − 1))/(2s)} = μπ0 (s − 1)/s,

whence

G(s) = 2μπ0/(2μ − λs(s + 2)).

Using G(1) = 1 gives π0 = (2μ − 3λ)/(2μ), so that

G(s) = (2μ − 3λ)/(2μ − λs(s + 2)).

(d)

E[N] = G′(1) = (2μ − 3λ)·2λ(s + 1)/(2μ − λs(s + 2))² |_{s=1} = 4λ/(2μ − 3λ).
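As a sanity check, the closed forms for this batch-arrival queue, π0 = 1 − 3λ/(2μ) and E[N] = 4λ/(2μ − 3λ), can be verified numerically. The sketch below is my own illustration (the truncation level, uniformisation constant and the rates λ = 1, μ = 4 are arbitrary choices, not part of the exercise): it truncates the chain and finds the stationary distribution by power iteration on the uniformised transition matrix.

```python
def batch_mm1_pi(lam, mu, n_max=60, iters=6000):
    """Stationary distribution of the M/M/1 queue with batches of size 1 or 2
    (probability 1/2 each), via uniformisation and power iteration on a chain
    truncated at n_max customers."""
    Lam = lam + mu                       # uniformisation rate (>= total exit rate)
    pi = [1.0] + [0.0] * n_max           # start with all mass in state 0
    for _ in range(iters):
        new = [0.0] * (n_max + 1)
        for i, p in enumerate(pi):
            if p == 0.0:
                continue
            stay = 1.0
            if i + 1 <= n_max:           # a batch of one arrives
                new[i + 1] += p * (lam / 2) / Lam
                stay -= (lam / 2) / Lam
            if i + 2 <= n_max:           # a batch of two arrives
                new[i + 2] += p * (lam / 2) / Lam
                stay -= (lam / 2) / Lam
            if i >= 1:                   # a service completes
                new[i - 1] += p * mu / Lam
                stay -= mu / Lam
            new[i] += p * stay           # fictitious self-transition
        pi = new
    return pi

lam, mu = 1.0, 4.0
pi = batch_mm1_pi(lam, mu)
EN = sum(k * p for k, p in enumerate(pi))
print(pi[0], 1 - 3 * lam / (2 * mu))     # both close to 0.625
print(EN, 4 * lam / (2 * mu - 3 * lam))  # both close to 0.8
```

The truncation is harmless here because π_k decays geometrically (like (1/2)^k for these rates).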

6. Recall from lectures the following notation: Φ(s) is the probability generating function for the bulk sizes,

Φ(s) = Σ_{n=0}^∞ s^n β^n (1 − β) = (1 − β)/(1 − βs),

so that the mean bulk size is

E[B] = Φ′(1) = β/(1 − β).

The definition of ρ is

ρ = (λ/μ) E[B],

and the result proved in lectures is

G(s) = (1 − ρ)(1 − s)/((1 − s) − (λ/μ)s(1 − Φ(s))),

yielding

G(s) = (1 − ρ)(1 − s)(1 − βs)/((1 − s)(1 − βs) − (λβ/μ)s(1 − s)) = (1 − ρ)(1 − βs)/(1 − βs(1 + λ/μ)).

Writing γ = β(1 + λ/μ) and expanding,

G(s) = (1 − ρ)(1 − βs) Σ_{n=0}^∞ γ^n s^n,

so that

π0 = 1 − ρ, π_n = (1 − ρ)(γ^n − βγ^{n−1}) = (1 − ρ)(λ/μ)β^n(1 + λ/μ)^{n−1}, n ≥ 1.
7. The service time distribution is

f_S(t) = α·2αμ e^{−2αμt} + (1 − α)·2(1 − α)μ e^{−2(1−α)μt}.

(a)

E[S] = α/(2αμ) + (1 − α)/(2(1 − α)μ) = 1/μ.

Since no queue is permitted to form, 1 − π0 = λπ0 E[S] (customers enter at rate λ only while the system is empty and occupy it for a mean time E[S]), so that

π0 = 1/(1 + λE[S]) = μ/(λ + μ).

(b) p({busy}) = 1 − π0 = λ/(λ + μ).

(c) Given that the system is busy, the service is of type 1 for a fraction α·(1/(2αμ))/E[S] = 1/2 of the time, so

p({server 1 occupied}) = (1/2)·λ/(λ + μ) = λ/(2(λ + μ)).

8. (a) λ_{i,i+1} = λ for i ≥ 0; μ_{1,0} = μ; μ_{j+2,j} = μ for j ≥ 0.

(b)

λπ0 = μ(π1 + π2)
(λ + μ)πj = λπ_{j−1} + μπ_{j+2}, j ≥ 1.

(c)

(λ + μ)(G(s) − π0) = λsG(s) + (μ/s²)(G(s) − s²π2 − sπ1 − π0),

which, using the j = 0 equation, rearranges to

G(s) = μ(π0(s + 1) + π1 s)/(μ(1 + s) − λs²).

(d) π1 = 2 − λ/μ − 2π0 (using G(1) = 1).

Chapter 8

Networks of Queueing Systems


This chapter considers networks of queueing systems, giving Burke's theorem and considering Jackson networks. The analysis of the time spent in such systems uses a well-known and useful result, Little's formula.

8.1 Little's Formula


Little's formula is intuitively clear, but relatively difficult to prove in the generality in which it holds. It is a very useful formula, connecting the average number of customers in the system with the average arrival rate and the average time spent in the system by each customer. For this course, it is stated without proof.

Theorem 8.1 (Little's Formula). Let N(t) be a random variable representing the number of customers in the system at time t and let A(t) be a random variable representing the number of arrivals up to time t. Suppose that the interarrival times may be considered as independent, identically distributed random variables with finite expectation. Let X denote a random variable representing an interarrival time and let λ = 1/E[X]. Then lim_{t→+∞} E[A(t)]/t = λ. Suppose that, as t → +∞, the distribution of N(t) reaches a stationary distribution. Let T be a random variable representing the time that a randomly chosen customer spends in the system, when the system is at equilibrium. Then

E[N] = λE[T].
The following result, connecting the statistical averages, should motivate Little's formula.

Lemma 8.2. Consider a system that has zero customers at time 0. Let s1, s2, ... denote the arrival times of subsequent customers and let t_i denote the time that customer i remains in the system. Let n(t) denote the number of customers in the system at time t, a(t) the number of arrivals in the interval (0, t), λ(t) = a(t)/t (the average arrival rate up to time t), n̄(t) = (1/t) ∫_0^t n(s)ds (the average number in the system up to time t) and

T̄(t) = (1/a(t)) Σ_{i=1}^{a(t)} (min(s_i + t_i, t) − s_i).


Then
n̄(t) = λ(t)T̄(t).

Proof of lemma. Let a(t) denote the number of customer arrivals up to time t and let d(t) denote the number of customers who have departed by time t. The i-th customer spends time t_i in the system and then departs at time d_i = s_i + t_i. The number of customers in the system at time t is the number of arrivals which have not yet left the system:

n(t) = a(t) − d(t).

Consider the statistical average n̄(t) of the number of customers in the system during the time interval (0, t]. This is given by

n̄(t) := (1/t) ∫_0^t n(s)ds.

For s_i ≤ t, arrival i contributes min(s_i + t_i, t) − s_i to the integral. At time t,

n̄(t) = (1/t) Σ_{i=1}^{a(t)} (min(s_i + t_i, t) − s_i),

and the average arrival rate up to time t is given by

λ(t) := a(t)/t.

The average number of customers is therefore

n̄(t) = λ(t) · (1/a(t)) Σ_{i=1}^{a(t)} (min(s_i + t_i, t) − s_i) = λ(t)T̄(t),

as required.

The problem is going from statistical averages to population averages. If the times spent in the system by the various customers were independent variables, then the result would follow directly from the law of large numbers. But the time a customer spends in the system is influenced by the size of the queue he finds on arrival and the time spent in the system by those in front. The variables are not independent. The following result is of interest; the proof uses a standard, but important, trick.
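Lemma 8.2 is a path-by-path identity, so it can be checked exactly on a simulated sample path. The following Python sketch (my own illustration; the rates λ = 1, μ = 2 and the horizon are arbitrary choices) simulates a FIFO M/M/1 queue, computes n̄(t) by integrating the sample path, and compares it with λ(t)T̄(t); it also compares n̄(t) with the equilibrium value ρ/(1 − ρ).

```python
import random

def simulate_mm1(lam, mu, t_end, seed=1):
    """Sample path of a FIFO M/M/1 queue up to time t_end:
    returns arrival times and the matching departure times."""
    rng = random.Random(seed)
    arrivals, departures = [], []
    t, d_prev = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > t_end:
            break
        arrivals.append(t)
        d_prev = max(t, d_prev) + rng.expovariate(mu)  # FIFO: wait for previous departure
        departures.append(d_prev)
    return arrivals, departures

def time_average_n(arrivals, departures, t_end):
    """n_bar(t) = (1/t) * integral of the step function n(s) over (0, t]."""
    events = sorted([(a, 1) for a in arrivals] +
                    [(d, -1) for d in departures if d <= t_end])
    area, n, last = 0.0, 0, 0.0
    for s, step in events:
        area += n * (s - last)
        n += step
        last = s
    area += n * (t_end - last)
    return area / t_end

lam, mu, t_end = 1.0, 2.0, 20000.0
arr, dep = simulate_mm1(lam, mu, t_end)
lam_t = len(arr) / t_end                                         # lambda(t)
T_bar = sum(min(d, t_end) - a for a, d in zip(arr, dep)) / len(arr)
n_bar = time_average_n(arr, dep, t_end)
print(n_bar, lam_t * T_bar)   # equal up to rounding (Lemma 8.2)
print(n_bar)                  # close to rho/(1 - rho) = 1 for rho = 1/2
```

The first comparison holds to floating-point accuracy on every sample path; the second is only a statistical approximation.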

Lemma 8.3. Consider a system where the interarrival times are independent identically distributed variables, with the same distribution as X. Suppose that E[X] = 1/λ < +∞. Let A(t) denote the number of arrivals by time t. Then, for all ε > 0,

lim_{t→+∞} p(|A(t)/t − λ| > ε) = 0.


Proof. This uses the law of large numbers. Let X1, X2, ... denote independent copies of X and let

S(n) = Σ_{j=1}^n X_j,

then

{A(t) ≥ n} = {S(n) ≤ t}.

Let λ = 1/E[X]. Then, since S(n) is the sum of independent identically distributed random variables with finite expectation, the weak law of large numbers gives, for all ε > 0,

lim_{n→+∞} p(|S(n)/n − 1/λ| > ε) = 0.

It follows that, for all ε > 0,

p(A(t)/t ≥ λ + ε) = p(A(t) ≥ (λ + ε)t) = p(S([(λ + ε)t]) ≤ t) = p(S([(λ + ε)t])/[(λ + ε)t] ≤ t/[(λ + ε)t]) → 0 as t → +∞,

by the law of large numbers, because

t/[(λ + ε)t] → 1/(λ + ε) < 1/λ.

Similarly, for all ε > 0,

p(A(t)/t ≤ λ − ε) → 0 as t → +∞.

Example: The M/M/1/K System. Recall the M/M/1/K system from Chapter 6, Section 6.3, and equation (6.1) for the actual arrival rate, λ_a = λ(1 − π_K). Applying Little's formula using the actual rate of entry gives

E[T] = E[N]/λ_a = E[N]/(λ(1 − π_K)).

The identity E[N] = λE[T] leads to the following two definitions for the M/M/1/K system:

Definition 8.4 (Traffic intensity, Carried load). The traffic intensity is defined as the load offered to the system,

λE[T].

The carried load is defined as the demand met by the system,

λ_a E[T].


Extensions of Little's Formula. Little's formula is rather general and may be applied under a variety of circumstances. For instance, it can be applied to N_q(t), the number of customers waiting in the queue alone, and W, the waiting time before service begins. Little's formula, applied to this piece of the system, gives

E[N_q] = λE[W].

Similarly, let N_s(t) denote the number of customers being served at time t and let S denote the service time for a customer. Then, letting the set of servers denote the system, Little's formula gives

E[N_s] = λE[S].

For single server systems, N_s(t) can only be 0 or 1, so that E[N_s] represents the proportion of time that the server is busy. Let π0 = lim_{t→+∞} p(N(t) = 0) denote the steady state probability that the system is empty. Then, since

E[N_s] = 0·π0 + 1·(1 − π0),

it follows that

1 − π0 = E[N_s] = λE[S],

or

π0 = 1 − λE[S],

since 1 − π0 is the proportion of time that the server is busy. This is the motivation for the definition of utilisation.

Definition 8.5 (Utilisation). The utilisation of a single server system is defined by

ρ = λE[S].

The utilisation of a system with c servers, ρ, is defined as

ρ = λE[S]/c.

Example. A communications network receives messages from R sources with mean arrival rates λ1, ..., λR. On average, there are E[N_i] messages from source i in the network.

1. Use Little's formula to find the average time E[T_i] spent by a type i customer in the network.
2. Let λ denote the total arrival rate into the network. Use Little's formula to find an expression for the mean time E[T] spent by customers of all types in the network in terms of the E[T_i].

Solution
1. By Little's formula, E[N_i] = λ_i E[T_i], so E[T_i] = E[N_i]/λ_i.
2. Because there are R independent Poisson streams, the total arrival rate is λ = λ1 + ... + λR. Then

E[T] = E[N]/λ = (E[N1] + ... + E[NR])/(λ1 + ... + λR),

so that

E[T] = (λ1E[T1] + ... + λRE[TR])/λ,

giving

E[T] = Σ_{i=1}^R E[T_i] (λ_i/λ).

The following two examples, connected with M/M/1 systems, do not strictly need Little's formula; nevertheless, it makes some of the computations easier.

Example. A concentrator receives messages from a group of terminals and transmits them over a single transmission line. Suppose that messages arrive according to a Poisson process, at a rate of one message every 4 milliseconds, and suppose that the message transmission times are exponentially distributed with mean 3 ms. Find the mean number of messages in the system and the mean total delay in the system. What percentage increase in arrival rate results in a doubling of the above mean total delay?

Solution. The arrival rate is λ = 1/4 messages per ms; the mean service time is E[S] = 3 ms. Therefore, the utilisation is

ρ = λE[S] = (1/4)·3 = 3/4.

The mean number of customers in the system is

E[N] = ρ/(1 − ρ) = 3.

Using Little's formula, the mean time spent in the system is

E[T] = E[N]/λ = 3/(1/4) = 12 ms.

The mean time in the system will be doubled to 24 ms when

24 = E[S]/(1 − ρ) = 3/(1 − ρ),

so that ρ = 7/8 and the corresponding arrival rate is

λ = ρ/E[S] = 7/24.

The original arrival rate was 1/4 = 6/24, so an increase of 1/24 messages per ms (a relative increase of 1/6 ≈ 17%) in the arrival rate leads to a doubling of the mean system delay.
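The arithmetic of this example fits in a few lines. The sketch below (my own illustration; the function name is mine) computes the mean delay and the arrival rate that doubles it:

```python
def mm1_mean_delay(lam, ES):
    """Mean time in system for an M/M/1 queue, E[T] = E[S]/(1 - rho)."""
    rho = lam * ES
    return ES / (1 - rho)

lam, ES = 0.25, 3.0                      # one message per 4 ms; mean service 3 ms
ET = mm1_mean_delay(lam, ES)             # 12.0 ms
rho2 = 1 - ES / (2 * ET)                 # utilisation at which the delay doubles
lam2 = rho2 / ES                         # corresponding arrival rate, 7/24
print(ET, lam2, (lam2 - lam) / lam)      # 12.0, 0.2916..., 0.1666...
```

Note how sharply delay grows near saturation: a one-sixth increase in load doubles the delay.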

Example. A large processor handles transactions at a rate of Kμ transactions per second. Suppose transactions arrive according to a Poisson process of rate Kλ transactions per second and that transactions require an exponentially distributed amount of processing time. Suppose that it is proposed that the large processor is replaced with K small processors, each with a processing rate of μ transactions per second and an arrival rate of λ transactions per second. Compare the mean delay performance of the two systems.

Solution. The large processor is an M/M/1 queue with arrival rate Kλ and service rate Kμ, so that the utilisation is ρ = λ/μ. The mean delay is given by

E[T] = E[S]/(1 − ρ) = 1/(K(μ − λ)).

Each of the small processors is an M/M/1 queue with arrival rate λ and service rate μ, so the utilisation is again ρ = λ/μ. The mean delay is

E[T′] = E[S′]/(1 − ρ) = 1/(μ − λ) = K·E[T].

Thus, the large processor has a smaller mean delay.
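The factor-of-K speedup can be written out directly. A minimal sketch (my own illustration, with arbitrarily chosen rates):

```python
def delay_large(lam, mu, K):
    """Mean delay of the single fast M/M/1 server: arrivals K*lam, service K*mu."""
    return 1.0 / (K * (mu - lam))

def delay_small(lam, mu):
    """Mean delay of each of the K slow M/M/1 servers: arrivals lam, service mu."""
    return 1.0 / (mu - lam)

lam, mu, K = 0.5, 1.0, 10
ratio = delay_small(lam, mu) / delay_large(lam, mu, K)
print(ratio)   # exactly K = 10
```

This is the usual argument for statistical multiplexing: one fast shared server beats many slow dedicated ones at equal utilisation.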


8.2 Networks of Queueing Systems


This section considers networks of queueing systems. That is, a job is done in several parts. After completing one part, the customer joins another queue for the next part until all parts are completed. To compute the overall waiting time, it is necessary to know about the departure process. In the M/M case, this is the content of Burke's theorem.

Theorem 8.6 (Burke's Theorem). Consider an M/M/c queueing system at steady state, with arrival rate λ. Then
1. the departure process is Poisson with rate λ;
2. at each time t, the number of customers in the system, N(t), is independent of the sequence of departure times prior to t.

Proof. The first part follows from the time reversibility of a time-homogeneous birth and death process. For the second part, fix a time t. In the reversed system, the `arrivals' are Poisson and thus do not depend on N(t). The arrival instants of the reversed process are the departure instants of the forward process.

Consider a network of 2 queues; the state of the system is specified by the number of customers in the two queues, (N1(t), N2(t)). For M/M queues, Burke's theorem does not state that N1 and N2 are independent random processes. This is clearly not the case, as it would imply that N1(t1) and N2(t2) were independent for all pairs (t1, t2). It does imply that, for each t, N1(t) and N2(t) are independent random variables, so that

p(N1 = n, N2 = m) = p(N1 = n)p(N2 = m).    (8.1)

A network with this property is said to have a product form solution.

Example. Consider a network where queue 1 is M/M/1 with Poisson arrivals, parameter λ1, and exponential service, parameter μ1. After leaving the first queue, the departures are randomly routed, each with probability 1/2, to queues 2 and 3, which are M/M/1 with service parameters μ2 and μ3 respectively. Suppose that there is a further independent input to queue 3, which is Poisson with parameter λ3. Find the joint probability function for the number at each node in this network of queues.

Solution. From Burke's theorem, N1(t) and N2(t) are independent; similarly, N1(t) and N3(t) are independent. The inputs to queues 2 and 3 are independent: the input to queue 2 is Poisson with parameter λ1/2 and the input to queue 3 is Poisson with parameter λ3 + λ1/2. It follows that

p(N1(t) = k, N2(t) = m, N3(t) = n) = (1 − ρ1)ρ1^k (1 − ρ2)ρ2^m (1 − ρ3)ρ3^n,

where ρ1 = λ1/μ1, ρ2 = λ1/(2μ2) and ρ3 = (λ1/2 + λ3)/μ3.


8.2.1 Jackson Networks


In many queueing systems, a customer is allowed to visit a particular queue more than once. Burke's theorem does not hold for such systems. Jackson's theorem extends the product form solution for the steady state probability function to a broader class of queueing networks.

Open Queueing Networks. An open queueing network is defined as follows. Consider a network of K queues in which customers arrive from outside the network to queue k according to an independent Poisson process with rate Λ_k. The service time for a queue k server is exponential with parameter μ_k. Suppose that queue k has c_k servers. After completion of service in queue k, a customer proceeds to queue i with probability P_{ki} and exits the network with probability 1 − Σ_{i=1}^K P_{ki}. The total arrival rate λ_k into queue k is then the sum of the external arrival rate and the internal arrival rates:

λ_k = Λ_k + Σ_{j=1}^K λ_j P_{jk}.

Theorem 8.7 (Jackson's Theorem). If λ_k < c_k μ_k for each k, then for each state

n = (n1, ..., nK),

it holds that

p(N(t) = n) = Π_{i=1}^K p(N_i = n_i),

where p(N_k = n_k) is the steady state probability function of an M/M/c_k queueing system with arrival rate λ_k and service rate μ_k.

Jackson's theorem states that the numbers of customers in the queues at time t are independent random variables. In addition, it states that the steady state probabilities of the individual queues are those of an M/M/c_k system. This result is remarkable, because the input process to a queue need not be Poisson.

Proof of Jackson's Theorem. Omitted.

Example. Messages arrive at a concentrator according to a Poisson process with parameter Λ. The time required to transmit a message and receive an acknowledgement is exponential with mean 1/μ. Suppose that a message needs to be retransmitted with probability p. Find the steady state probability function for the number of messages in the concentrator.

Solution. The overall system can be represented by a simple queue with feedback. The net arrival rate into the queue is λ = Λ + pλ, so that

λ = Λ/(1 − p).

It follows that

p(N = n) = (1 − ρ)ρ^n,

where ρ = λ/μ = Λ/((1 − p)μ).
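A short numeric sketch of this feedback queue (my own illustration; the function name and the rates Λ = 1, μ = 4, p = 1/2 are arbitrary choices, not from the text) computes the geometric distribution and checks that it normalises:

```python
def concentrator_dist(Lam, mu, p, n):
    """p(N = n) for the M/M/1 queue with feedback probability p:
    net arrival rate lam solves lam = Lam + p*lam."""
    lam = Lam / (1 - p)
    rho = lam / mu
    return (1 - rho) * rho ** n

Lam, mu, p = 1.0, 4.0, 0.5
probs = [concentrator_dist(Lam, mu, p, n) for n in range(100)]
print(probs[0], sum(probs))   # rho = 0.5 here, so p(N = 0) = 0.5, total ~ 1
```

The feedback multiplies the effective load by 1/(1 − p); the queue remains geometric because, by Jackson's theorem, it behaves like an M/M/1 queue at the aggregated rate.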

Example. New programmes arrive at a CPU according to a Poisson process with rate Λ. A programme spends an exponentially distributed execution time of mean 1/μ1 in the CPU. At the end of the service time, the programme execution is complete with probability p, or requires retrieving additional information from secondary storage with probability 1 − p. Suppose that the retrieval of information from the secondary storage unit requires an exponentially distributed amount of time with mean 1/μ2. Find the mean time each programme spends in the system.

Solution. The net arrival rates into the two queues are

λ1 = Λ + λ2 and λ2 = (1 − p)λ1.

Thus,

λ1 = Λ/p and λ2 = (1 − p)Λ/p.

Each queue behaves like an M/M/1 system, so that

E[N1] = ρ1/(1 − ρ1) and E[N2] = ρ2/(1 − ρ2),

where ρ1 = λ1/μ1 and ρ2 = λ2/μ2. Little's formula then gives the mean for the total time spent in the system,

E[T] = E[N1 + N2]/Λ = (1/Λ)(ρ1/(1 − ρ1) + ρ2/(1 − ρ2)).
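The final formula can be evaluated directly. A minimal sketch (my own illustration; the numeric rates are arbitrary and only serve to exercise the formula):

```python
def cpu_mean_time(Lam, mu1, mu2, p):
    """E[T] = (1/Lam)(rho1/(1-rho1) + rho2/(1-rho2)) for the CPU/storage loop."""
    lam1 = Lam / p               # total arrival rate into the CPU queue
    lam2 = (1 - p) * Lam / p     # total arrival rate into the storage queue
    rho1, rho2 = lam1 / mu1, lam2 / mu2
    return (rho1 / (1 - rho1) + rho2 / (1 - rho2)) / Lam

# With Lam = 1, p = 1/2, mu1 = mu2 = 4: rho1 = 1/2, rho2 = 1/4, E[T] = 1 + 1/3.
ET = cpu_mean_time(1.0, 4.0, 4.0, 0.5)
print(ET)   # 4/3
```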


Chapter 9

M/G/1 Queueing Systems


This chapter considers a single server system with Poisson arrivals, but with a general service time distribution. Let S denote the service time and let f_S denote its probability density function. The number of customers N(t) will no longer be a Markov process; only the exponential distribution has the `memoryless' property.

9.1 Residual Service Time


The residual service time, which we denote by R, is defined as the remaining time until completion of service of the customer currently in service, as seen by a customer just arriving.

Lemma 9.1. Assume that the arriving customer arrives `at random' during the time of service. Let S denote a random variable with the distribution of a service time. The following identity holds for p(R > x):

p(R > x) = (1/E[S]) ∫_x^∞ (1 − F_S(y)) dy.

Proof. Consider an equivalent problem: cars arrive at a motorway service station, where the times between arrivals are independent, identically distributed random variables, with the same distribution as S, the service time. When the system is in equilibrium, a hippie shows up at the service station at a random time. What is the residual time R that he must wait until the next car shows up?

Let F_R(x) = p(R ≤ x) denote the distribution function for R, the residual service time, and let F_S(t) = p(S ≤ t) denote the distribution function for S, the service time. Let A_k denote the arrival time of the k-th car. If the hippie shows up at time t, then

{R ≤ x} = ∪_k {t < A_k ≤ t + x < A_{k+1}},

where {t < A_k ≤ t + x < A_{k+1}} denotes the event that he waits less than time x and is picked up by car k. Using A_{k+1} = A_k + S, where S has the distribution of the service time and is independent of A_k,

F_R(x) = Σ_k p(t < A_k ≤ t + x < A_{k+1})
= Σ_k p(A_k > t, 0 < t + x − A_k < S)
= Σ_k ∫_t^{t+x} p(0 < t + x − z < S | A_k = z) f_{A_k}(z) dz
= Σ_k ∫_t^{t+x} p(S > t + x − z) f_{A_k}(z) dz,

where f_{A_k}(z) denotes the density function of A_k. It follows that

F_R(x) = ∫_t^{t+x} (1 − F_S(t + x − z)) (Σ_k f_{A_k}(z)) dz.

Now, let N(y) denote the number of cars which have arrived by time y. Then

p(A_k ≤ y) = p(N(y) ≥ k).

Using

E[N(y)] = Σ_{k=1}^∞ p(N(y) ≥ k) = Σ_{k=1}^∞ p(A_k ≤ y),

it follows (using f_{A_k}(z) = (d/dz) p(A_k ≤ z)) that

Σ_{k=1}^∞ f_{A_k}(z) = (d/dz) E[N(z)].

As t → +∞, the arrivals settle into equilibrium and (d/dz)E[N(z)] → 1/E[S] (in equilibrium, renewals occur at rate 1/E[S]). This gives

F_R(x) = (1/E[S]) ∫_0^x (1 − F_S(z)) dz,

from which

p(R > x) = 1 − F_R(x) = (1/E[S]) ∫_x^∞ (1 − F_S(z)) dz,

as required. This leads us to the very important identity:

Theorem 9.2. Assume that lim_{x→+∞} x²(1 − F_S(x)) = 0. Then the mean residual time is

E[R] = E[S²]/(2E[S]).


Proof. The density function of R is given by

f_R(x) = (1 − F_S(x))/E[S],

so that, integrating by parts,

E[R] = ∫_0^∞ x (1 − F_S(x))/E[S] dx
= [x²(1 − F_S(x))/(2E[S])]_0^∞ + (1/(2E[S])) ∫_0^∞ x² f_S(x) dx
= E[S²]/(2E[S]),

the boundary term vanishing by assumption.
To show that this formula could have been reasonably expected on intuitive grounds, consider service times distributed exponentially and compare with constant service times. If the service time is distributed exponentially with expected value m, then E[S] = m and E[S²] = 2m², so that E[R_exp] = 2m²/(2m) = m. In other words, the mean residual service time of a customer is the same as the full service time of a customer; this is the memoryless property. If the service times are constant, then E[S] = m and E[S²] = m², so that E[R_const] = m/2. Again, this is the expected result: a customer arriving at some random point during the service time would expect to wait roughly half of the full service time.
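Theorem 9.2 (the `inspection paradox') is easy to check by simulation: generate a long renewal sequence, drop an observer at a uniformly chosen time, and average the wait until the next renewal. The sketch below is my own illustration; the sample sizes are arbitrary choices.

```python
import random
import bisect
from itertools import accumulate

def mc_mean_residual(sampler, n_cycles=50000, n_probes=50000, seed=2):
    """Monte-Carlo estimate of E[R]: drop uniform random observers on a
    renewal process whose gaps are drawn by `sampler`, and average the
    time from each observer to the next renewal."""
    rng = random.Random(seed)
    cum = list(accumulate(sampler(rng) for _ in range(n_cycles)))
    total = cum[-1]
    acc = 0.0
    for _ in range(n_probes):
        u = rng.random() * total
        i = bisect.bisect_right(cum, u)      # cycle containing the observer
        if i == len(cum):                    # guard against landing on the endpoint
            i -= 1
        acc += cum[i] - u                    # wait until the next renewal
    return acc / n_probes

# Exponential gaps with mean 1: E[S^2]/(2 E[S]) = 2/2 = 1 (same as a full gap).
exp_res = mc_mean_residual(lambda r: r.expovariate(1.0))
# Constant gaps of length 1: E[S^2]/(2 E[S]) = 1/2 (half a gap, as expected).
const_res = mc_mean_residual(lambda r: 1.0)
print(exp_res, const_res)
```

The observer is more likely to land in a long gap, which is exactly why E[R] depends on E[S²] and not just on E[S].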

Mean Delay for the M/G/1 System. Let R* denote the residual service time of the customer found in service; that is, R* = 0 if no customer is found in service and R* = R if a customer is found in service. Then the following result holds.

Lemma 9.3.

E[R*] = (λ/2) E[S²].

Proof.

E[R*] = 0·p(N(t) = 0) + E[R](1 − p(N(t) = 0)) = λE[S] · E[S²]/(2E[S]) = (λ/2)E[S²]

(recall that 1 − p(N(t) = 0) = ρ = λE[S]).
Definition 9.4. Let σ_S² = Var(S). The coefficient of variation of a random variable S, denoted C_S, is defined by

C_S² = σ_S²/E[S]².

Theorem 9.5 (Pollaczek-Khinchin). The mean waiting time E[W] is given by

E[W] = ρ(1 + C_S²)E[S]/(2(1 − ρ)).

This formula is known as the Pollaczek-Khinchin mean value formula.


Proof. Using Little's formula (E[N_q] = λE[W]) to go from line 2 to line 3,

E[W] = E[R*] + E[N_q(t)]E[S]
= E[R*] + λE[W]E[S]
= E[R*] + ρE[W],

so that

E[W] = E[R*]/(1 − ρ) = λE[S²]/(2(1 − ρ)).

Using E[S²] = σ_S² + E[S]², this becomes

E[W] = λ(σ_S² + E[S]²)/(2(1 − ρ)) = λE[S]²(1 + C_S²)/(2(1 − ρ)) = ρ(1 + C_S²)E[S]/(2(1 − ρ)).

Example. Let W1 denote the waiting time for an M/M/1 system and let W2 denote the waiting time for an M/D/1 system. For the M/M/1 system, the service times are exponentially distributed, so that C_S² = 1 and

E[W1] = ρE[S]/(1 − ρ).

For the M/D/1 system, the service time is constant, so that C_S² = 0 and

E[W2] = ρE[S]/(2(1 − ρ)).

Thus, the waiting time in an M/M/1 system is twice that of an M/D/1 system.
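The mean value formula, in the equivalent form E[W] = λE[S²]/(2(1 − ρ)), is a one-liner. The sketch below (my own illustration; the rates λ = 1, μ = 2 are arbitrary) reproduces the factor of two between M/M/1 and M/D/1:

```python
def pk_wait(lam, ES, ES2):
    """Pollaczek-Khinchin mean waiting time, E[W] = lam * E[S^2] / (2 (1 - rho))."""
    rho = lam * ES
    return lam * ES2 / (2 * (1 - rho))

lam, mu = 1.0, 2.0
w_mm1 = pk_wait(lam, 1/mu, 2/mu**2)   # exponential service: E[S^2] = 2/mu^2
w_md1 = pk_wait(lam, 1/mu, 1/mu**2)   # constant service:    E[S^2] = E[S]^2
print(w_mm1, w_md1, w_mm1 / w_md1)    # 0.5  0.25  2.0
```

For exponential service, this agrees with the familiar M/M/1 result E[W] = ρE[S]/(1 − ρ).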

9.2 Priority Service Disciplines


In this section, there are K priority classes of customer. Customers of type k arrive according to a Poisson process with parameter λ_k and have service times with p.d.f. f_{S_k}(x). A separate queue is kept for each priority class and, each time the server becomes available, it selects the next customer from the highest priority queue which is not empty. The server utilisation for type k customers is

ρ_k = λ_k E[S_k].

In what follows, it is assumed that the total server utilisation is less than 1:

ρ = ρ1 + ... + ρK < 1.

Otherwise, the lower priority queues will grow without bound. Let R* denote the residual service time of the customer found in service and let N_{qk}(t) denote the number of type k customers found in the queue. Let W_k denote the waiting time for type k customers. Then:


Theorem 9.6. The mean waiting time for type k customers is given by

E[W_k] = Σ_{j=1}^K λ_j E[S_j²] / (2(1 − ρ1 − ... − ρ_{k−1})(1 − ρ1 − ... − ρ_k)).

The mean delay for type k customers is given by

E[T_k] = E[W_k] + E[S_k].

Proof. The proof uses induction. Note that, following the proof of the previous theorem,

E[W1] = E[R*] + E[N_{q1}]E[S1],

so that, using E[N_{q1}] = λ1E[W1],

E[W1] = E[R*]/(1 − ρ1).

If an arriving type 2 customer finds N_{q1}(t) = k1 type 1 customers and N_{q2}(t) = k2 type 2 customers in the queue, then his waiting time is the sum of three components: the residual service time, the service times of the type 1 and type 2 customers already in the queue, and the service times of the type 1 customers who arrive while he is waiting. Letting M1 denote the number of type 1 customers who arrive while he waits,

E[W2] = E[R*] + E[N_{q1}]E[S1] + E[N_{q2}]E[S2] + E[M1]E[S1].

Little's formula gives E[N_{q1}] = λ1E[W1] and E[N_{q2}] = λ2E[W2]. Also, E[M1] = λ1E[W2] (the average number of type 1 arrivals in time E[W2]). Solving for E[W2] gives

E[W2] = (E[R*] + ρ1E[W1])/(1 − ρ1 − ρ2) = E[R*]/((1 − ρ1)(1 − ρ1 − ρ2)).

Now, for the general formula, assume that

E[W_m] = E[R*]/((1 − Σ_{j=1}^{m−1} ρ_j)(1 − Σ_{j=1}^m ρ_j))

for 1 ≤ m ≤ k − 1. Note that

E[W_k] = E[R*] + Σ_{j=1}^k E[S_j]E[N_{qj}] + Σ_{j=1}^{k−1} E[S_j]E[M_j].

The identities E[M_j] = λ_jE[W_k] and E[N_{qj}] = λ_jE[W_j] may be applied to obtain

E[W_k] = E[R*] + Σ_{j=1}^k ρ_jE[W_j] + E[W_k] Σ_{j=1}^{k−1} ρ_j,

giving

E[W_k](1 − Σ_{j=1}^k ρ_j) = E[R*] + Σ_{j=1}^{k−1} ρ_jE[W_j]
= E[R*](1 + Σ_{j=1}^{k−1} ρ_j/((1 − Σ_{l=1}^{j−1} ρ_l)(1 − Σ_{l=1}^j ρ_l))),

where, to simplify notation, Σ_{l=1}^0 ρ_l = 0 is used. Each term of the sum telescopes,

ρ_j/((1 − Σ_{l=1}^{j−1} ρ_l)(1 − Σ_{l=1}^j ρ_l)) = 1/(1 − Σ_{l=1}^j ρ_l) − 1/(1 − Σ_{l=1}^{j−1} ρ_l),

so that

1 + Σ_{j=1}^{k−1} ρ_j/((1 − Σ_{l=1}^{j−1} ρ_l)(1 − Σ_{l=1}^j ρ_l)) = 1/(1 − Σ_{l=1}^{k−1} ρ_l).

From this, it follows that

E[W_k](1 − Σ_{j=1}^k ρ_j) = E[R*]/(1 − Σ_{j=1}^{k−1} ρ_j),

and hence

E[W_k] = E[R*]/((1 − Σ_{j=1}^{k−1} ρ_j)(1 − Σ_{j=1}^k ρ_j)), k = 1, ..., K.    (9.1)

Note that

E[R*] = (λ/2)E[S²], λ = λ1 + ... + λK,    (9.2)

and, assuming independence of the arrivals of each type,

E[S²] = (λ1/λ)E[S1²] + ... + (λK/λ)E[SK²],    (9.3)

giving

E[W_k] = Σ_{j=1}^K λ_jE[S_j²] / (2(1 − Σ_{j=1}^{k−1} ρ_j)(1 − Σ_{j=1}^k ρ_j)).    (9.4)


Example. Consider a queue at a bank, where there are two types of transaction. Let type A denote withdrawals, which require a constant service time of 1 minute, and let type B denote all others, whose service times are distributed exponentially with mean 3 minutes. What is the mean waiting time if this system operates
1. as an ordinary M/G/1 system?
2. as an M/G/1 system with priority given to withdrawals?
Assume that the two types of arrival are independent and Poisson, each with the same rate λ (so that the total arrival rate is 2λ).

Solution.

E[S] = (1/2)E[S1] + (1/2)E[S2] = 1/2 + 3/2 = 2
E[S²] = (1/2)E[S1²] + (1/2)E[S2²] = (1/2)·1 + (1/2)·2·9 = 9.5
ρ1 = λ, ρ2 = 3λ, ρ = 2λE[S] = 4λ
E[R*] = (2λ/2)E[S²] = 9.5λ.

For the M/G/1 system,

E[W] = E[R*]/(1 − ρ) = 9.5λ/(1 − 4λ).

For the system with priority,

E[W1] = E[R*]/(1 − ρ1) = 9.5λ/(1 − λ)

and

E[W2] = E[R*]/((1 − ρ1)(1 − ρ)) = 9.5λ/((1 − λ)(1 − 4λ)).

The overall mean waiting time is

E[W_priority] = (1/2)E[W1] + (1/2)E[W2]
= (1/2)(E[R*]/(1 − λ))(1 + 1/(1 − 4λ))
= ((1 − 2λ)/(1 − λ)) E[R*]/(1 − 4λ)
= ((1 − 2λ)/(1 − λ)) E[W].

The `short job first' discipline has improved the overall waiting time, since (1 − 2λ)/(1 − λ) < 1.
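Equation (9.4) makes this comparison mechanical. The sketch below (my own illustration; the per-class rate λ = 0.1 is an arbitrary choice that keeps ρ = 0.4 < 1) computes the class waits and checks the (1 − 2λ)/(1 − λ) improvement factor:

```python
def priority_waits(lams, ES, ES2):
    """E[W_k] for each class of a non-preemptive priority M/G/1 queue,
    equation (9.4): lams, ES, ES2 list lambda_k, E[S_k], E[S_k^2] by priority."""
    num = sum(l * s2 for l, s2 in zip(lams, ES2))   # sum_j lambda_j E[S_j^2]
    rhos = [l * s for l, s in zip(lams, ES)]
    waits = []
    for k in range(len(lams)):
        a = 1 - sum(rhos[:k])        # 1 - rho_1 - ... - rho_{k-1}
        b = 1 - sum(rhos[:k + 1])    # 1 - rho_1 - ... - rho_k
        waits.append(num / (2 * a * b))
    return waits

lam = 0.1
# Type A: constant 1 minute (E[S^2] = 1); type B: exponential mean 3 (E[S^2] = 18).
w1, w2 = priority_waits([lam, lam], [1.0, 3.0], [1.0, 18.0])
# FCFS = a single class with E[S] = 2, E[S^2] = 9.5 at rate 2*lam.
w_fcfs = priority_waits([2 * lam], [2.0], [9.5])[0]
print(w1, w2, 0.5 * (w1 + w2), w_fcfs)
```

With λ = 0.1, the overall priority wait equals (1 − 2λ)/(1 − λ) ≈ 0.89 times the FCFS wait, as derived above.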

9.3 Exercises

1. Consider an M/G/1 system in which the first customer in a busy period has service distribution G1 and all others have distribution G2. Let C denote the number of customers in a busy period and let S denote the service time of one customer chosen at random. Let a_n denote the proportion of customers that find n already in the system when they arrive. Show that
(a) a0 = p0 = 1 − λE[S];
(b) E[S] = a0E[S1] + (1 − a0)E[S2], where S_i has distribution G_i;
(c) using these results, E[B], the expected length of a busy period, is given by

E[B] = E[S1]/(1 − λE[S2]).

(d) Compute E[C], where C is the number of customers served in a busy period.

2. For an M/G/1 queue, let X_n denote the number in the system left behind by the n-th departure.
(a) If

X_{n+1} = X_n − 1 + Y_n when X_n ≥ 1, and X_{n+1} = Y_n when X_n = 0,

what does Y_n represent?
(b) Rewrite the preceding as

X_{n+1} = X_n − 1 + Y_n + Z_n,

where

Z_n = 1 if X_n = 0, and Z_n = 0 if X_n ≥ 1.

Take expectations and let n → +∞ to obtain

E[Z_∞] = 1 − λE[S].

(c) Square both sides of the rewritten equation and then let n → +∞ to obtain

E[X_∞] = λ²E[S²]/(2(1 − λE[S])) + λE[S].

3. Consider two M/M/1 queues placed in series. The arrival rate is λ, the service rate for the first queue is μ1 and the service rate for the second is μ2. Show that the equations for the steady state probabilities

p(N1 = n1, N2 = n2) = π_{n1,n2}

are given by

λπ_{0,0} = μ2π_{0,1}
(λ + μ1)π_{n1,0} = λπ_{n1−1,0} + μ2π_{n1,1}, n1 ≥ 1
(λ + μ2)π_{0,n2} = μ1π_{1,n2−1} + μ2π_{0,n2+1}, n2 ≥ 1
(λ + μ1 + μ2)π_{n1,n2} = λπ_{n1−1,n2} + μ1π_{n1+1,n2−1} + μ2π_{n1,n2+1}, n1 ≥ 1, n2 ≥ 1.

Verify that π_{n1,n2} = (1 − ρ1)(1 − ρ2)ρ1^{n1}ρ2^{n2} satisfies these equations, where ρ1 = λ/μ1, ρ2 = λ/μ2 and max(ρ1, ρ2) < 1.

4. Consider the following queueing system, with Poisson arrivals and exponential service time in each case. Customers arrive and enter queue 1 with rate λ1. The service rate for the first job is μ1. After this, they join either queue 2 with probability 3/4 or queue 3 with probability 1/4. Additionally, new customers arrive at queue 3 with rate λ2. The service rates for queues 2 and 3 are μ2 and μ3 respectively. Upon finishing whichever of these jobs (2 or 3) the customer has chosen, he then joins queue 4, with service rate μ4 and, upon completion, leaves the system.
(a) What are the conditions that must be satisfied for the steady state to exist?
(b) What is the average time spent in the system for a customer who takes route (1,2,4)?
(c) What is the average time spent in the system for a customer who takes route (3,4)?

5. Consider an M/G/1 queueing system. Let R* denote the residual service time of the customer found in service. This time is 0 if there is no customer in service. Let S denote the service time and λ the arrival rate. Show that

E[R*] = (λ/2)E[S²].

Hint: you may use the following: if R is the residual service time, namely the time remaining to completion seen by a customer just arriving, then

E[R] = E[S²]/(2E[S]).


Consider now a two-class non-preemptive priority queueing system and suppose that the lower priority class is saturated (i.e. λ1E[S1] + λ2E[S2] > 1).

(a) Show that the rate of low priority customers served by the system, λ2′, is given by

λ2′ = (1 − λ1E[S1])/E[S2].

(Hint: let N_{1,s} denote the number of type 1 customers in service. Since N_{1,s} = 0 or 1, it follows that p(N_{1,s} = 1) = E[N_{1,s}] = λ1E[S1]. What proportion of the time is the server busy with class two customers?)

(b) Show that the mean waiting time for class 1 customers is

E[W1] = (1/2)λ1E[S1²]/(1 − λ1E[S1]) + E[S2²]/(2E[S2]).

(c) Consider now an M/G/1 system in which the server goes on holiday whenever it empties the queue. If, upon returning from holiday, the system is still empty, the server takes another holiday, and so on, until it finds customers in the system. Suppose that the holiday times are independent of each other and of the other variables in the system, and identically distributed. Show that the mean waiting time for customers in the system is

E[W] = (1/2)λE[S²]/(1 − λE[S]) + E[H²]/(2E[H]),

where H is the holiday time.


Answers: M/G/1 and Networks


1. (a) a0 = p0, due to Poisson arrivals. Recall: for single servers, the number in service can only be zero or one, so Little's formula gives

1 − p0 = E[N_s] = λE[S].

(b) Since a0 is the proportion of arrivals with service distribution G1 and 1 − a0 the proportion with service distribution G2, the result follows.

(c) Let I denote the length of an `empty period'. It follows that

p0 = E[I]/(E[I] + E[B]).

Since arrivals are Poisson, E[I] = 1/λ (the interarrival times are exponential with parameter λ), so that

E[B] = (1/λ)(1 − p0)/p0 = E[S]/(1 − λE[S]).

The result follows by using

E[S] = (1 − λE[S])E[S1] + λE[S]E[S2]

(where a0 = 1 − λE[S] and 1 − a0 = λE[S]), giving

E[S] = E[S1]/(1 + λE[S1] − λE[S2]),

and hence E[B] = E[S1]/(1 − λE[S2]).

(d) Trick: note that of the arrivals in a busy period, exactly one (the first) finds the system empty, so that, on average, one in every E[C] arrivals finds the system empty. Thus

p0 = 1/E[C],

giving

E[C] = 1/(1 − λE[S]).

2. (a) Y_n is the number of arrivals in the period between successive departures. (b) Clear. (c) Clear.

3. Write out (d/dt) p(t; n1, n2). The derivation and verification are clear.

4. (a) Jackson's theorem:

p(n1, n2, n3, n4) = (1 − ρ1)(1 − ρ2)(1 − ρ3)(1 − ρ4)ρ1^{n1}ρ2^{n2}ρ3^{n3}ρ4^{n4},

where ρ1 = λ1/μ1, ρ2 = (3λ1/4)/μ2, ρ3 = (λ1/4 + λ2)/μ3, ρ4 = (λ1 + λ2)/μ4. The steady state exists when max(ρ1, ρ2, ρ3, ρ4) < 1.

(b) The expected time for route (1,2,4) is

(E[W1] + 1/μ1) + (E[W2] + 1/μ2) + (E[W4] + 1/μ4)
= λ1/(μ1(μ1 − λ1)) + 1/μ1 + (3λ1/4)/(μ2(μ2 − 3λ1/4)) + 1/μ2 + (λ1 + λ2)/(μ4(μ4 − λ1 − λ2)) + 1/μ4.

(c) The expected time for route (3,4) is

(λ2 + λ1/4)/(μ3(μ3 − λ2 − λ1/4)) + 1/μ3 + (λ1 + λ2)/(μ4(μ4 − λ1 − λ2)) + 1/μ4.
5. The first part is straight from lectures.

(a) p(N_{1,s} = 0) = 1 − λ1E[S1] = λ2′E[S2], so that

λ2′ = (1 − λ1E[S1])/E[S2].

(b) Using λ = λ1 + λ2′,

E[W1] = E[S1]E[N_{q1}] + E[R*] = λ1E[S1]E[W1] + (λ/2)E[S²].

Using

E[S²] = (λ1/λ)E[S1²] + (λ2′/λ)E[S2²],

it follows that

E[W1] = λE[S²]/(2(1 − λ1E[S1])) = λ1E[S1²]/(2(1 − λ1E[S1])) + E[S2²]/(2E[S2]).

(c) Take H = S2 in the argument of (b).

Chapter 10

The Pollaczek-Khinchin (P-K) Transform Equation


10.1 Generating Function for the Number of Customers
Consider the M/G/1 queueing system. Let X_n denote the number of customers in the system after the n-th departure. Then

X_{n+1} = X_n − Z_n + Y_n,

where Z_n = 1 if X_n ≥ 1 and Z_n = 0 if X_n = 0, and Y_n denotes the number of arrivals between the n-th and (n+1)-th departures.

Theorem 10.1 (The Generating Function for the Number of Arrivals). Suppose the service time S has density g(t). Let

Ψ(p) = ∫_0^∞ e^{pt} g(t) dt

denote the moment generating function of S. Let Y denote a random variable with the same distribution as the Y_n. Let

Φ(s) = E[s^Y]

denote the probability generating function of the number of arrivals between two successive departure times. Then

Φ(s) = Ψ(λ(s − 1)).

Proof Note that

p(Y = k) = \int_0^\infty p(Y = k \mid S = t)\, g(t)\, dt = \int_0^\infty \frac{(\lambda t)^k}{k!} e^{-\lambda t} g(t)\, dt.

From this,

\Phi(s) = \sum_{k=0}^\infty s^k p(Y = k) = \int_0^\infty \sum_{k=0}^\infty s^k \frac{(\lambda t)^k}{k!} e^{-\lambda t} g(t)\, dt = \int_0^\infty e^{-\lambda(1 - s)t} g(t)\, dt = \beta(\lambda(s - 1)).
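As a numerical sanity check of the theorem (a sketch; the rates `lam` and `mu` and the Erlang-2 service law are illustrative assumptions), note that for Erlang-2 service with stage rate 2\mu the integral for p(Y = k) evaluates in closed form to the negative binomial expression p(Y = k) = (k+1)\lambda^k (2\mu)^2 / (\lambda + 2\mu)^{k+2}, so the series \sum_k s^k p(Y = k) can be compared directly with \beta(\lambda(s-1)):

```python
# Numerical check of Theorem 10.1 for Erlang-2 service (rate 2*mu per stage):
# beta(p) = (2*mu/(2*mu - p))**2, and the number of arrivals during one
# service has pmf  p(Y=k) = (k+1) * lam**k * (2*mu)**2 / (lam + 2*mu)**(k+2).
lam, mu = 1.0, 1.5          # illustrative arrival and service rates (assumed)

def beta(p):
    return (2 * mu / (2 * mu - p)) ** 2

def p_Y(k):
    return (k + 1) * lam ** k * (2 * mu) ** 2 / (lam + 2 * mu) ** (k + 2)

for s in (0.2, 0.5, 0.9):
    phi = sum(s ** k * p_Y(k) for k in range(400))   # Phi(s) = E[s^Y], truncated
    assert abs(phi - beta(lam * (s - 1))) < 1e-10
print("Phi(s) = beta(lambda(s-1)) verified")
```

The truncation at 400 terms is harmless here since the negative binomial tail decays geometrically.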

Theorem 10.2 (The Pollaczek-Khinchin (P-K) Transform Equation). Set

G(s) := \lim_{n \to \infty} E[s^{X_n}].

Then

G(s) = \beta(\lambda(s - 1)) \frac{(1 - \rho)(1 - s)}{\beta(\lambda(s - 1)) - s}.

Proof Set

G_n(s) = \sum_{k=0}^\infty s^k p(X_n = k) = E[s^{X_n}].

Note that

G_{n+1}(s) = E[s^{X_{n+1}}] = E[s^{X_n - Z_n + Y_n}] = E[s^{X_n - Z_n} s^{Y_n}].

Since the arrival process is Poisson, the number of arrivals between departures n and n + 1 is independent of the number in the system at the nth departure. It follows that

G_{n+1}(s) = E[s^{X_{n+1}}] = E[s^{X_n - Z_n}] E[s^{Y_n}].

The distribution of the number of arrivals between departures n and n + 1 does not depend on n. Set

\Phi(s) := E[s^{Y_n}].

Using p(X_n = 0) = G_n(0), note that

E[s^{X_n - Z_n}] = p(X_n = 0) + p(X_n = 1) + \sum_{k=1}^\infty s^k p(X_n = k + 1) = G_n(0) + \frac{1}{s}\big(G_n(s) - G_n(0)\big).

It follows that

G_{n+1}(s) = \Phi(s)\left( G_n(0) + \frac{G_n(s) - G_n(0)}{s} \right).

In the limit,

G(s) = \Phi(s)\left( G(0) + \frac{G(s) - G(0)}{s} \right).

Using G(0) = 1 - \rho gives

G(s) = \Phi(s) \frac{(1 - \rho)(1 - 1/s)}{1 - \Phi(s)/s} = \beta(\lambda(s - 1)) \frac{(1 - \rho)(1 - s)}{\beta(\lambda(s - 1)) - s}.
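The transform can be sanity-checked in the M/M/1 case, where \beta(p) = \mu/(\mu - p) and G(s) collapses algebraically to the geometric generating function (1 - \rho)/(1 - \rho s). A minimal sketch (rates assumed for illustration):

```python
# Sanity check of the P-K transform for M/M/1: with beta(p) = mu/(mu - p),
# G(s) should reduce to the geometric generating function (1-rho)/(1-rho*s).
lam, mu = 0.7, 1.0              # illustrative rates, rho = 0.7
rho = lam / mu

def G(s):
    b = mu / (mu - lam * (s - 1))           # beta(lambda(s-1))
    return b * (1 - rho) * (1 - s) / (b - s)

for s in (0.0, 0.25, 0.5, 0.9):
    assert abs(G(s) - (1 - rho) / (1 - rho * s)) < 1e-12
print("M/M/1: G(s) = (1-rho)/(1-rho*s)")
```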

Example Consider the M/H_2/1 queueing system with service time density

g(t) = \frac{1}{4} \mu e^{-\mu t} + \frac{3}{4} (2\mu) e^{-2\mu t},

and take \lambda = \mu. Here,

\beta(p) = \frac{\mu}{4(\mu - p)} + \frac{3}{4} \cdot \frac{2\mu}{2\mu - p} = \frac{8\mu^2 - 7\mu p}{4(\mu - p)(2\mu - p)},

so that

\beta(\lambda(s - 1)) = \frac{15 - 7s}{4(2 - s)(3 - s)},

giving

G(s) = \frac{(1 - \rho)(1 - s)[8 + 7(1 - s)]}{8 + 7(1 - s) - 4s(2 - s)(3 - s)}.

Cancelling the common factor 1 - s from numerator and denominator gives

G(s) = \frac{(1 - \rho)\left(1 - \frac{7}{15}s\right)}{\left(1 - \frac{2}{5}s\right)\left(1 - \frac{2}{3}s\right)}.

This may be expanded in partial fractions, yielding

G(s) = \frac{1 - \rho}{4} \left( \frac{1}{1 - \frac{2}{5}s} + \frac{3}{1 - \frac{2}{3}s} \right).

Finally, note that E[S] = \frac{5}{8\mu}, so \rho = \lambda E[S] = \frac{5}{8}. It follows that the stationary distribution is given by

\pi_k = \frac{3}{32}\left(\frac{2}{5}\right)^k + \frac{9}{32}\left(\frac{2}{3}\right)^k.
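The final expression can be verified numerically: the series \sum_k \pi_k s^k should reproduce the closed form of G(s), and the \pi_k should sum to 1. A short sketch:

```python
# Check the M/H2/1 stationary distribution derived above: the series
# sum_k pi_k s^k should reproduce the closed form G(s) (here rho = 5/8).
rho = 5 / 8

def pi(k):
    return (3 / 32) * (2 / 5) ** k + (9 / 32) * (2 / 3) ** k

def G(s):
    return (1 - rho) * (1 - 7 * s / 15) / ((1 - 2 * s / 5) * (1 - 2 * s / 3))

assert abs(sum(pi(k) for k in range(300)) - 1) < 1e-12   # probabilities sum to 1
for s in (0.1, 0.5, 0.95):
    assert abs(sum(s ** k * pi(k) for k in range(400)) - G(s)) < 1e-10
print("stationary distribution matches G(s)")
```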


10.2 Distribution of the Waiting Time

Let C_n denote the nth customer. Recall the notation

T_n = W_n + S_n,

where T_n denotes the total time in the system, W_n the waiting time and S_n the service time. Now, the number of customers in the system at departure n is precisely the number that arrived while customer C_n was in the system. Set

M_T(p) := E[e^{pT}],

and let f_T denote the density function of T, where T is the total time in the system for a customer once the system has reached equilibrium. Let X denote the number of arrivals between two successive departures once the system has reached equilibrium. Then

G_X(s) = \sum_{k=0}^\infty s^k p(X = k) = \sum_{k=0}^\infty s^k \int_0^\infty p(X = k \mid T = t) f_T(t)\, dt = \int_0^\infty \sum_{k=0}^\infty s^k \frac{(\lambda t)^k}{k!} e^{-\lambda t} f_T(t)\, dt = \int_0^\infty e^{\lambda(s - 1)t} f_T(t)\, dt = M_T(\lambda(s - 1)).

Recall that

G(s) = \beta(\lambda(s - 1)) \frac{(1 - \rho)(1 - s)}{\beta(\lambda(s - 1)) - s}.

It follows that

M_T(\lambda(s - 1)) = \beta(\lambda(s - 1)) \frac{(1 - \rho)(1 - s)}{\beta(\lambda(s - 1)) - s},

so that

M_T(p) = \beta(p) \frac{p(1 - \rho)}{\lambda + p - \lambda \beta(p)}.

In other words, the moment generating function of the total time in the system is expressed in terms of the arrival rate, the service rate and the moment generating function of the service time distribution. Now, the waiting time and the service time are independent random variables, so

M_T(p) = M_W(p)\, \beta(p)

(recall that \beta(p) = M_S(p)). Consequently,

M_W(p) = \frac{p(1 - \rho)}{\lambda + p - \lambda \beta(p)}.
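Differentiating M_W at p = 0 recovers the familiar mean E[W] = \lambda E[S^2]/(2(1 - \rho)). A simulation sketch (parameters assumed for illustration) checks this against the Lindley recursion W_{n+1} = \max(0, W_n + S_n - A_{n+1}) for M/M/1; with \lambda = 0.5 and \mu = 1 the formula gives E[W] = 1:

```python
import random

# Simulation sketch: estimate E[W] for M/M/1 via the Lindley recursion
# W_{n+1} = max(0, W_n + S_n - A_{n+1}) and compare with the P-K mean
# E[W] = lam * E[S^2] / (2 * (1 - rho)).  Parameters below are illustrative.
random.seed(1)
lam, mu = 0.5, 1.0                                     # rho = 0.5, E[S^2] = 2
pk_mean = lam * (2 / mu ** 2) / (2 * (1 - lam / mu))   # = 1.0 here

n = 400_000
w, total = 0.0, 0.0
for _ in range(n):
    total += w
    s = random.expovariate(mu)          # service time of current customer
    a = random.expovariate(lam)         # interarrival time to next customer
    w = max(0.0, w + s - a)

assert abs(total / n - pk_mean) < 0.2   # sample mean close to the P-K value
print("simulated E[W] ~", round(total / n, 3), "; P-K value:", pk_mean)
```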

10.3 Exercises

1. Consider M/E_2/1.
(a) Compute \beta(p) := E[e^{pS}], where S is the service time.
(b) Hence find M_T(p), the moment generating function for the total time in the system.

2. Let \beta(p) = E[e^{pS}] denote the moment generating function of the service time. Let \tilde{\beta} denote the moment generating function of the residual service time. Show that

\tilde{\beta}(p) = \frac{\beta(p) - 1}{p E[S]},

and hence conclude that

M_W(p) = \frac{1 - \rho}{1 - \rho \tilde{\beta}(p)}.

Consider an M/D/1 system, with service time 2 seconds.
(a) Show that the residual service time is uniformly distributed.
(b) For \rho = 0.25, compute M_W(p).

3. Consider an M/G/1 queue in which bulk arrivals occur at rate \lambda, with probability q_r that r customers arrive together at an arrival instant.
(a) Show that the probability generating function of the number of customers arriving in an interval of length t is e^{-\lambda t(1 - Q(s))}, where Q(s) = \sum_{r \geq 1} s^r q_r.
(b) Show that the probability generating function of the random variables X_n, the number of arrivals during the service of a customer, is \beta(\lambda(Q(s) - 1)), where \beta denotes the moment generating function of the service time distribution.

4. Consider an M/G/1 system with bulk service. Whenever the server becomes free, he accepts two customers from the queue into service simultaneously, or, if only one is in the queue, he accepts that one. In either case, the service time for the group (of size 1 or 2) has density f_S (in other words, the service time does not depend on the number in service). Let X_n denote the number of customers remaining after the nth service instant. Let Y_n denote the number of arrivals during the nth service. Define \Phi(s) = E[s^Y], \beta(p) = E[e^{pS}] and G(s) = E[s^X] as usual. Let \rho = \frac{1}{2}\lambda E[S].
(a) Find E[X] = \lim_{n \to \infty} E[X_n] in terms of \rho, the coefficient of variation C(S) = \frac{\sigma^2(S)}{E[S]^2}, and \pi_0 := p(X = 0).
(b) Find G(s) in terms of \rho, \pi_0 and \pi_1 = p(X = 1).
(c) Express \pi_1 in terms of \pi_0.

Answers
1. (a) Very easy. Assume that E[S] = \frac{1}{\mu}. Then S = X_1 + X_2, where the X_j are independent exponentials with parameter 2\mu. It follows that

\beta(p) := E[e^{pS}] = E[e^{pX_1}]^2 = \left(\frac{2\mu}{2\mu - p}\right)^2, \qquad p < 2\mu.

(b) Very easy. Lift the result that was proved in lectures:

M_T(p) = \beta(p) \frac{p(1 - \rho)}{\lambda + p - \lambda \beta(p)}.

Here \rho = \lambda/\mu, so

M_T(p) = \frac{4\mu(\mu - \lambda)}{p^2 + (\lambda - 4\mu)p + 4\mu(\mu - \lambda)}.

2. Using the result proved in lectures,

p(R > t) = \frac{1}{E[S]} \int_t^\infty (1 - F_S(y))\, dy,

the question is very easy. Note that

f_R(t) = \frac{1}{E[S]}(1 - F_S(t)).

Then

\tilde{\beta}(p) = E[e^{pR}] = \frac{1}{E[S]} \int_0^\infty e^{pt}(1 - F_S(t))\, dt;

integration by parts yields

\tilde{\beta}(p) = \frac{1}{E[S]} \left( \left[ \frac{e^{pt}(1 - F_S(t))}{p} \right]_0^\infty + \frac{1}{p} \int_0^\infty e^{pt} f_S(t)\, dt \right) = \frac{\beta(p) - 1}{p E[S]},

as required. The moment generating function for the waiting time is now a very easy application of the result proved in the lecture:

M_W(p) = \frac{p(1 - \rho)}{\lambda + p - \lambda \beta(p)}.

Note that

\beta(p) = p \tilde{\beta}(p) E[S] + 1.

Then, using \rho = \lambda E[S],

M_W(p) = \frac{p(1 - \rho)}{\lambda + p - \lambda p \tilde{\beta}(p) E[S] - \lambda} = \frac{1 - \rho}{1 - \rho \tilde{\beta}(p)}.

(a) Very easy. The service time is deterministic, of length t say. For y > 0,

p(R > y) = \frac{1}{t} \int_y^\infty \big(1 - \mathbf{1}_{[t,\infty)}(x)\big)\, dx = \begin{cases} 1 - \frac{y}{t} & 0 \leq y \leq t \\ 0 & y > t, \end{cases}

yielding

f_R(y) = \begin{cases} \frac{1}{t} & y \in [0, t] \\ 0 & \text{otherwise.} \end{cases}

(b)

\tilde{\beta}(p) = \frac{1}{t} \int_0^t e^{py}\, dy = \frac{1}{pt}(e^{pt} - 1),

so that (using t = 2 and \rho = 0.25)

M_W(p) = \frac{0.75}{1 - \frac{0.25}{2p}(e^{2p} - 1)} = \frac{6p}{8p - (e^{2p} - 1)}.
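Both forms of M_W(p) can be checked numerically, together with the mean E[W] = M_W'(0) = \lambda E[S^2]/(2(1 - \rho)) = 1/3 (here \lambda = \rho/E[S] = 0.125 and E[S^2] = 4). A sketch:

```python
import math

# Check the M/D/1 answer (t = 2, rho = 0.25): the two forms of M_W(p)
# agree, and the numerical derivative M_W'(0) equals E[W] = 1/3.
def MW1(p):
    return 0.75 / (1 - 0.25 * (math.exp(2 * p) - 1) / (2 * p))

def MW2(p):
    return 6 * p / (8 * p - (math.exp(2 * p) - 1))

for p in (-0.5, -0.1, 0.05):
    assert abs(MW1(p) - MW2(p)) < 1e-12   # the two expressions coincide

h = 1e-5
ew = (MW2(h) - MW2(-h)) / (2 * h)         # central-difference M_W'(0)
assert abs(ew - 1 / 3) < 1e-4
print("E[W] =", round(ew, 5))             # ~ 1/3
```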

3. (a) Let B denote the (random) number of customers in a bulk, and let N denote the number of customers arriving in an interval of length t. Conditioning on the number of bulks,

E[s^N] = \sum_{x \geq 0} E[s^N \mid x \text{ bulks arrive}]\, p(\{x \text{ bulks arrive}\}) = \sum_{x \geq 0} E[s^B]^x \frac{(\lambda t)^x}{x!} e^{-\lambda t} = \sum_{x \geq 0} Q(s)^x \frac{(\lambda t)^x}{x!} e^{-\lambda t} = e^{\lambda t (Q(s) - 1)},

as required.

(b) Let Y denote the number of bulks arriving between two successive departures, and let \beta(p) = E[e^{pS}]. Then G_Y(s) := E[s^Y] = \beta(\lambda(s - 1)) (taken from lectures). The probability generating function of the number of customers N arriving during a service interval is

\Phi(s) = E[s^N] = \sum_{x \geq 0} E[s^N \mid x \text{ bulks arrive}]\, p(\{x \text{ bulks}\}) = \sum_{x \geq 0} Q(s)^x p(\{x \text{ bulks}\}) = G_Y(Q(s)).

It follows that

\Phi(s) = \beta(\lambda(Q(s) - 1)).
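Part (a) can be verified non-circularly with the Panjer recursion for the compound Poisson probability function, p_0 = e^{-\lambda t} and p_n = (\lambda t / n) \sum_j j q_j p_{n-j}. A sketch with an assumed bulk-size distribution q_r:

```python
import math

# Check answer 3(a) numerically: the compound-Poisson pmf from the Panjer
# recursion  p_0 = exp(-lam*t),  p_n = (lam*t/n) * sum_j j*q_j*p_{n-j},
# should have pgf  exp(-lam*t*(1 - Q(s))).  Bulk sizes are illustrative.
lam_t = 2.0                     # lambda * t (assumed)
q = {1: 0.5, 2: 0.3, 3: 0.2}    # assumed bulk-size distribution q_r

N = 200
p = [math.exp(-lam_t)] + [0.0] * N
for n in range(1, N + 1):
    p[n] = (lam_t / n) * sum(j * qj * p[n - j] for j, qj in q.items() if j <= n)

def Q(s):
    return sum(qj * s ** j for j, qj in q.items())

for s in (0.3, 0.7, 1.0):
    pgf = sum(s ** n * p[n] for n in range(N + 1))
    assert abs(pgf - math.exp(-lam_t * (1 - Q(s)))) < 1e-10
print("compound-Poisson pgf matches exp(-lambda t (1 - Q(s)))")
```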


4. (a) Squaring both sides of X_{n+1} = X_n + Y_n - Z_n and taking expectations in equilibrium gives

0 = E[Y^2] + E[Z^2] + 2E[X]E[Y] - 2E[XZ] - 2E[Y]E[Z].

Here

E[Y^2] = \Phi''(1) + \Phi'(1), \qquad \Phi(s) = \beta(\lambda(s - 1)),

so that

E[Y^2] = \lambda^2 E[S^2] + \lambda E[S], \qquad E[Z] = E[Y] = \lambda E[S], \qquad E[XZ] = 2E[X] - \pi_1.

Using

\sum_{k \geq 0} k \pi_k = 0 \cdot \pi_0 + 1 \cdot \pi_1 + \sum_{k \geq 2} (k - 2)\pi_k + E[Y]

gives

2\pi_0 + \pi_1 = 2(1 - \rho), \qquad E[Z^2] = 4 - 4\pi_0 - 3\pi_1,

giving

E[X] = \frac{E[Y^2] + E[Z^2] + 2\pi_1 - 2E[Y]E[Z]}{4 - 2E[Y]},

so that

E[X] = \frac{4\rho^2(1 + C(S)) + 2\rho + 4 - 4\pi_0 - \pi_1 - 8\rho^2}{4 - 4\rho},

giving

E[X] = \frac{4\rho^2(C(S) - 1) + 2(1 + 2\rho) - 2\pi_0}{4(1 - \rho)}.

(b) The equation is

G(s) = \Phi(s)\left\{ \pi_0 + \pi_1 + \frac{1}{s^2}\big(G(s) - s\pi_1 - \pi_0\big) \right\},

so that

G(s) = \Phi(s) \frac{\pi_0(1 - s^2) + \pi_1 s(1 - s)}{\Phi(s) - s^2} = \beta(\lambda(s - 1)) \frac{\pi_0(1 - s^2) + \pi_1 s(1 - s)}{\beta(\lambda(s - 1)) - s^2}.

(c) Already done in the first part:

\pi_1 = 2(1 - \rho - \pi_0).
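The identities in (a) and (c) can be checked by simulating the embedded chain directly. A simulation sketch with assumed parameters \lambda = \mu = 1 and exponential service, so that Y_n is geometric, C(S) = 1 and \rho = 1/2: then (c) reads 2\pi_0 + \pi_1 = 1, and the second-moment balance in (a) reduces to E[X] = 2 - \pi_0.

```python
import random

# Simulation sketch of the bulk-service chain X_{n+1} = X_n + Y_n - Z_n,
# Z_n = min(X_n, 2), with exponential service (C(S) = 1).  For lam = mu = 1,
# rho = lam*E[S]/2 = 1/2, part (c) gives 2*pi_0 + pi_1 = 1 and part (a)
# reduces to E[X] = 2 - pi_0.  Parameters are illustrative.
random.seed(7)
p_arr = 0.5                 # P(arrival before service end) = lam/(lam+mu)

def draw_Y():
    # arrivals during one exp(mu) service: geometric on {0, 1, 2, ...}
    k = 0
    while random.random() < p_arr:
        k += 1
    return k

n, x = 500_000, 0
sum_x = n0 = n1 = 0
for _ in range(n):
    sum_x += x
    n0 += (x == 0)
    n1 += (x == 1)
    x = x + draw_Y() - min(x, 2)

ex, pi0, pi1 = sum_x / n, n0 / n, n1 / n
assert abs(2 * pi0 + pi1 - 1.0) < 0.02      # part (c) identity
assert abs(ex - (2 - pi0)) < 0.1            # part (a) with C(S)=1, rho=1/2
print("E[X] ~", round(ex, 3), " pi0 ~", round(pi0, 3), " pi1 ~", round(pi1, 3))
```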
