
CMI/BVR SP1-HA1 (19-08-2020)

Men, men, these are wanted: everything else will be ready, but strong, vigorous, believing
young men, sincere to the backbone, are wanted. A hundred such and the world becomes
revolutionized. Swami Vivekananda
(Vivekananda considered man and woman as the two wings of a bird. A bird cannot fly with
one wing. When he says men, he means men and women.)

1. For fixed n, consider the trinomial coefficients n!/(k! l! (n − k − l)!) for k, l such that
0 ≤ k, l, k + l ≤ n. Show that the maximum is attained near k, l = n/3. More precisely,
if n = 3m or 3m + 1 then the maximum is attained at k = l = m, and when n = 3m + 2
then the maximum is attained at k = l = m + 1.
2. Consider a chain with three states named AA, Aa and aa respectively, with transition
matrix
P =
[ p    q    0   ]
[ p/2  1/2  q/2 ]
[ 0    p    q   ]
Here 0 < p < 1 and q = 1 − p. Calculate the two step transition matrix and the three
step matrix. Any guess why the states are named so? What happens in the long run?
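As a numerical companion, here is a sketch that computes the two and three step matrices for an illustrative value p = 0.6 (any 0 < p < 1 works the same way; q = 1 − p since the rows must sum to 1):

```python
import numpy as np

# Transition matrix of the AA/Aa/aa chain; p = 0.6 is an illustrative choice.
p = 0.6
q = 1 - p
P = np.array([[p,   q,   0.0],
              [p/2, 0.5, q/2],
              [0.0, p,   q]])

P2 = P @ P        # two-step transition probabilities
P3 = P2 @ P       # three-step transition probabilities

# Each power is again a stochastic matrix: rows sum to 1.
assert np.allclose(P2.sum(axis=1), 1.0)
assert np.allclose(P3.sum(axis=1), 1.0)
```

Inspecting higher powers of P this way gives a good guess at the long-run behaviour.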
3. A store near my house stocks Lyril soap. The owner follows a (2/5) inventory schedule.
This means: if she has ≤ 2 soaps by the closing time today, she will get some more
from the godown so as to start with a total of 5 tomorrow morning. If she has at least
three soaps at closing time today, then she does not replenish. Let us assume that the
demands for this soap on successive days are i.i.d. random variables, taking value i
with probability pi for 0 ≤ i ≤ 6.
Let Xn be her closing stock on the n-th day. Let Yn be the opening stock on the n-th day.
Show that both are Markov chains. Write down their transition probabilities.
4. Consider a three state chain with states 0, 1, 2 and transition matrix
P =
[ 0      1  0 ]
[ 1 − p  0  p ]
[ 0      1  0 ]
Calculate P^2, P^3 and P^4. Calculate all P^n.
5. Suppose that we have a finite state Markov chain with, say, M states. Suppose that
i and j are two states and that for some n ≥ 1, p_ij^(n) > 0. Show that this happens
for some n ≤ M. Thus if there is a chance of reaching j then there is a chance of
reaching j in at most M steps.
6. A two state chain is the simplest. Suppose that we have two states 0 and 1. Think
of zero as ‘off’ state and one as ‘on’ state of a machine. Let p01 = α and p10 = β.
This will specify the transition matrix.
(i) In each of the four cases “α is zero or one” and “β is zero or one”, how does the
motion proceed?
(ii) Suppose that α = 0 and 0 < β < 1. Now 0 is an absorbing state. What is
p_00^(n)? What is p_11^(n)? Calculate f_00^(n). Calculate f_10^(n) and show that these add up to
1. Thus starting at 1, you eventually end up in 0. What is the expected time taken
to reach 0? What if we had 0 < α < 1 and β = 1?
(iii) Now assume that 0 < α, β < 1. Solve for a probability vector: πP = π. There
is only one such solution, given by π = (β/(α + β), α/(α + β)). Show that if X0 ∼ π
then for every n, Xn ∼ π. Thus each Xn looks like X0. More generally, if X0 ∼ π
then show that for any k and n,

(Xn , Xn+1 , · · · , Xn+k ) ∼ (X0 , X1 , · · · , Xk )

Thus the process appears stationary: not moving. The probability vector π is called
the stationary initial distribution for the chain. In fact, it is the equilibrium distribution
too, in the following sense: whatever be the initial distribution η of X0, show that
the distribution of Xn converges to π.
We are very lucky that we could perform all the calculations here. But we are not
always so lucky.
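For a concrete check of (iii), a sketch with the illustrative values α = 0.2, β = 0.5:

```python
import numpy as np

# Two-state chain: p01 = alpha, p10 = beta (illustrative values).
alpha, beta = 0.2, 0.5
P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

# Stationary distribution pi = (beta, alpha)/(alpha + beta)
pi = np.array([beta, alpha]) / (alpha + beta)
assert np.allclose(pi @ P, pi)              # pi P = pi

# Convergence to equilibrium from an arbitrary initial distribution eta
eta = np.array([1.0, 0.0])
dist = eta @ np.linalg.matrix_power(P, 50)
assert np.allclose(dist, pi)
```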
7. I want to convince you that Random Walk is not just a mathematical problem; it is an
‘Idea!’ I toss a coin whose chance of heads in a single toss is p, where 0 < p < 1.
Heads up, I move one step forward; tails up I move one step backward. Write down
the transition probabilities in the following cases. In each of these cases calculate
the distribution of X2 after choosing some initial starting point X0 . Do it for enough
initial points to get an idea.
(a) The state space is all integers. This is RW, or unrestricted RW.
(b) The state space is {0, 1, 2 · · · · · · }. If you are at 0 then move to 1. This is RW
reflected at zero.
(c) The state space is {0, 1, 2, · · · , 99}. If at 0, move to 1. If at 99, move to 98. This is
RW with two reflecting barriers.
(d) The state space is as in (c). If at 0 or at 99, stay there forever. This
is RW with two absorbing barriers.
(e) The state space is as in (c). Fix two numbers 0 < r_i < 1 for i = 0 and i = 99. If at 0,
stay there with probability r_0 and go to 1 with probability 1 − r_0. Do similarly at 99
using r_99. This is RW with two elastic barriers.
(f) The state space is as in (c). If at 0, move to 1. If at 99, stay there. This is RW with
one absorbing barrier and one reflecting barrier.
(g) The state space is as in (c). Treat 0 as the state next to 99 and 99 as the state preceding
0. Equivalently, think of the states arranged in a circular/cyclic order. There is no
need to specify anything more for the motion to continue. This is cyclic RW.
Can also think of two/three dimensions or other lattices/graphs, move from one
vertex to another.
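By way of illustration for case (b), a sketch that computes the exact distribution of X2 for the walk reflected at zero, with the illustrative choices p = 1/2 and X0 = 1:

```python
from collections import defaultdict

def step_reflected(x, heads):
    """One step of the walk reflected at zero (case (b))."""
    if x == 0:
        return 1                 # forced move at the reflecting barrier
    return x + 1 if heads else x - 1

def dist_after_two_steps(x0, p):
    """Exact distribution of X2 by enumerating the four coin outcomes."""
    d = defaultdict(float)
    for h1 in (True, False):
        for h2 in (True, False):
            w = (p if h1 else 1 - p) * (p if h2 else 1 - p)
            d[step_reflected(step_reflected(x0, h1), h2)] += w
    return dict(d)

d = dist_after_two_steps(1, 0.5)   # X0 = 1: X2 is 3 w.p. 1/4 and 1 w.p. 3/4
```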

CMI/BVR SP1-HA2 (26-08-2020)

Abraham De Moivre (1667-1754) throughout his life had to struggle with the hardest
poverty. Here was a man so famous as a mathematician that the Royal Society had
in 1712 put him on one of its committees and yet he had to earn his living at first as
a travelling teacher of mathematics, and later in life sitting daily in a Coffee House in
Long Acre at the beck and call of gamblers, who paid him a small sum for calculating
odds, and of underwriters and annuity brokers who wished their values reckoned.
Karl Pearson.

8. Let (Xn , n ≥ 0) be a Markov chain. Show

P (X0 = i0 | X1 = i1 , X2 = i2 , . . . , Xn = in ) = P (X0 = i0 | X1 = i1 ).
9. Sometimes the Markov property is stated in a symmetric form, namely, that past
and future are conditionally independent given the present. To understand this, fix
any states i0 , i1 , . . . , i54 .
Let A be the event (Xm = im ; 0 ≤ m ≤ 15) and B be the event (Xm = im ; 17 ≤
m ≤ 54). Show that
P (A ∩ B|X16 = i16 ) = P (A|X16 = i16 )P (B|X16 = i16 ).
(Today is 16; before today is past, after today is future. Independence means
P (A ∩ B) = P (A)P (B). Conditional independence given an event C simply means
P (A ∩ B | C) = P (A | C)P (B | C).)
10. Let (Xn , n ≥ 0) be a Markov chain with transition matrix P . You know that
P ∗ = P^7 (seventh power of P ) is also a stochastic matrix. Can you realize the chain
corresponding to this transition matrix P ∗ , taking help of the chain (Xn )?
11. Classify the states of the following chains whose transition matrices are given below.

P1 =
[ 0    1/2  1/2 ]
[ 1/2  0    1/2 ]
[ 1/2  1/2  0   ]

P2 =
[ 0    0    0  1 ]
[ 0    1    0  0 ]
[ 1/2  1/2  0  0 ]
[ 0    0    1  0 ]

P3 =
[ 0  1/2  1/2  0    0    0   ]
[ 0  0    0    1/3  1/3  1/3 ]
[ 0  0    0    1/3  1/3  1/3 ]
[ 1  0    0    0    0    0   ]
[ 1  0    0    0    0    0   ]
[ 1  0    0    0    0    0   ]

P4 =
[ 1/4  3/4  0    0    0 ]
[ 1/2  1/2  0    0    0 ]
[ 0    0    1    0    0 ]
[ 0    0    1/3  2/3  0 ]
[ 1    0    0    0    0 ]

P5 =
[ 1/2  0    1/2  0    0   ]
[ 1/4  1/2  1/4  0    0   ]
[ 1/2  0    1/2  0    0   ]
[ 0    0    0    1/2  1/2 ]
[ 0    0    0    1/2  1/2 ]

P6 =
[ 1     0     0    0    0    0   ]
[ 0     1     0    0    0    0   ]
[ 1/4   0     1/2  0    0    1/4 ]
[ 0     1/4   0    1/2  0    1/4 ]
[ 0     0     0    0    0    1   ]
[ 1/16  1/16  1/4  1/4  1/8  1/4 ]

Incidentally, as far as discussing the nature of the states is concerned, is it necessary
to have the full matrix before you? Suppose that, instead of the exact numbers in the
above matrices, I just put a ∗ wherever there is a strictly positive entry. Would that
have been enough for you?
12. Consider the set Ω = {1, 2, · · · , 2n}. A perfect matching is a partition of Ω into
pairs. For example, for {1, 2, 3, 4, 5, 6} here is a perfect matching: {(1, 6), (2, 4), (3, 5)}.
Let Mn be the set of perfect matchings of Ω. If you are familiar with graph theory,
then this Mn is nothing but the set of perfect matchings of the complete graph on 2n
vertices, K_2n.
Here is a Markov chain with state space S = Mn. If you are at x ∈ Mn, move to
y ∈ Mn obtained as follows: pick two elements (pairs) from x, pick one element
from each of these selected pairs; the two elements so selected form one pair and
the remaining two form another pair; the rest of the pairs of x remain as they are.
This is y.
Describe the transition matrix. Is the chain irreducible? For a state i, the greatest
common divisor of {n ≥ 1 : p_ii^(n) > 0} is called the period of the state i. What is the
period of each state? What do you think will happen in the long run? Take n = 3:
how many elements are there in Mn? Write the transition matrix. Is this related
to the first matrix of the exercise above?
[This chain appears in studying phylogenetic trees. This and many other chains,
we describe in our course, have a large theory behind them, much of which I do
not know and also much of it is of recent origin. We discuss only some models in
detail.]
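A brute-force sketch for the smallest case n = 2 (so Ω = {1, 2, 3, 4} and |M2| = 3); the function names here are of course mine, not part of the exercise:

```python
from itertools import combinations

def matchings(elems):
    """All perfect matchings of a list of elements, as frozensets of pairs."""
    if not elems:
        return [frozenset()]
    a, rest = elems[0], elems[1:]
    out = []
    for b in rest:
        sub = [e for e in rest if e != b]
        for m in matchings(sub):
            out.append(m | {frozenset({a, b})})
    return out

states = matchings([1, 2, 3, 4])
assert len(states) == 3          # |M_2| = 3

def neighbours(x):
    """All moves: pick two pairs of x, pick one element from each, swap."""
    out = []
    for P1, P2 in combinations(x, 2):
        rest = x - {P1, P2}
        for u in P1:
            for v in P2:
                out.append(rest | {frozenset({u, v}),
                                   frozenset((P1 - {u}) | (P2 - {v}))})
    return out
```

For n = 2 every move leads to one of the other two matchings, each in two of the four ways, so the transition matrix has 0 on the diagonal and 1/2 elsewhere.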
13. Consider an integer n > 1. Let the state space S be the set of all graphs on the
vertex set V = {1, 2, · · · , n}. Here is a Markov chain with this state space. Start
with a graph G ∈ S. Pick two vertices one after the other without replacement,
say, u, v in that order. Disconnect v from all its neighbours and connect it to u.
Let F ⊂ S be the set of non-empty forests. A forest is a graph which is a disjoint
union of trees. It is the empty forest if there are no edges at all; in other words, the
empty forest consists of V with no edges.
Show that if you start with an element of F , then you move only to an element of
F . Show that every graph leads to an element of F .
14. (Sliding board) Consider a chain with state space {1, 2, · · · }. If you are at 1, go
to state j with probability pj (j = 1, 2, · · · ), where these are nonnegative numbers
adding to 1. If you are in a state i > 1, then go just one step back, that is, to i − 1.
Discuss the nature of the states and the nature of the stationary distribution.

CMI/BVR SP1-HA3 (02-09-2020)

Negative thoughts weaken men. If you can give them positive ideas, people will grow
up to be men and learn to stand on their own legs. · · · in everything we must point
out not the mistakes that people are making in their thoughts and actions, but the way
in which they will gradually be able to do these things better. Vivekananda.

15. Consider a connected graph (finite, no loops, no multiple edges). From a vertex,
move to one of its neighbours at random. If vertex v has degree d_v and d = Σ_v d_v,
then π(v) = d_v /d is a stationary distribution and is the only stationary distribution.
Show this.
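A quick check of the claim on a small illustrative graph (a 4-vertex graph of my choosing):

```python
# Adjacency lists of a small connected graph: edges 01, 12, 13, 23.
graph = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}

d = sum(len(nbrs) for nbrs in graph.values())       # sum of degrees
pi = {v: len(nbrs) / d for v, nbrs in graph.items()}

# (pi P)(w) = sum over neighbours v of w of pi(v) * 1/d_v
piP = {w: sum(pi[v] / len(graph[v]) for v in graph if w in graph[v])
       for w in graph}
assert all(abs(piP[w] - pi[w]) < 1e-12 for w in graph)
```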
16. Consider the usual chess board, not having any pieces. Start with a knight at one of
the squares. At each stage select one of the possible moves of the knight at random.
Discuss irreducibility, aperiodicity and the invariant distribution. A theorem we prove
soon will even enable us to calculate the mean time taken by the knight to return to
the starting position.
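Anticipating that theorem (the mean return time to v equals 1/π(v), and here π(v) is proportional to the degree of v in the knight-move graph, by the previous exercise), a sketch for the usual 8 × 8 board:

```python
# Knight's random walk on an empty 8x8 board.
moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def degree(r, c):
    """Number of legal knight moves from square (r, c)."""
    return sum(0 <= r + dr < 8 and 0 <= c + dc < 8 for dr, dc in moves)

total = sum(degree(r, c) for r in range(8) for c in range(8))
# Mean return time to a square v is 1/pi(v) = total/degree(v);
# e.g. from a corner square, degree 2:
corner_return = total / degree(0, 0)
```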
17. There is a list of k books. Each day, one of these k possible books is requested
– the i-th one with probability P_i (> 0). The list is revised as follows: the book
selected today is moved to the top of the list, while the relative positions of all
the other books remain unchanged. The state of the system is the ordered list of
the books. What does the transition matrix look like? Discuss irreducibility and
periodicity.
For any state η which is a permutation of 1, 2, · · · , k (understanding: η(1) is the
top name in the list, η(2) is the second name in the list, etc.), let Π(η) denote the
limiting probability (stationary distribution). In order to be in this state η, it is
necessary that the last request was for η(1); the last non-η(1) request be for η(2);
the last non-{η(1), η(2)} request be for η(3); etc. So it appears intuitively that
Π(η) = P_η(1) · P_η(2)/(1 − P_η(1)) · P_η(3)/(1 − P_η(1) − P_η(2)) · · ·
Try with k = 3 and see.
This is called the Tsetlin Library; the Russian mathematician M. L. Tsetlin first
considered it in 1963. The model applies just as well to k datasets instead of k books.
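A Monte Carlo sketch for k = 3 with the illustrative request probabilities (0.5, 0.3, 0.2), comparing empirical long-run frequencies with the guessed product formula:

```python
import random
from itertools import permutations

random.seed(0)
P = {1: 0.5, 2: 0.3, 3: 0.2}     # illustrative request probabilities

def formula(eta):
    """The guessed stationary probability of the ordered list eta."""
    prob, used = 1.0, 0.0
    for book in eta:
        prob *= P[book] / (1 - used)
        used += P[book]
    return prob

# Simulate the move-to-front dynamics.
state = [1, 2, 3]
N = 200000
counts = {s: 0 for s in permutations([1, 2, 3])}
for _ in range(N):
    book = random.choices([1, 2, 3], weights=[0.5, 0.3, 0.2])[0]
    state.remove(book)
    state.insert(0, book)        # move the requested book to the top
    counts[tuple(state)] += 1

emp = {s: c / N for s, c in counts.items()}
```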
18. We have an urn with two compartments CI and CII. There are a > 0 balls in CI
and b > 0 balls in CII. Of these a + b balls, 0 < r < a + b are red and the others
black. At each time instant, we select one ball from each compartment and switch
their compartments. The state of the system is the number of red balls in CI.
Write down explicitly the state space and transition matrix. Show the chain is irre-
ducible. Show that it is aperiodic (except when a = b = r = 1). Show that the
stationary distribution is the hypergeometric distribution:
π(i) = (r choose i) (a + b − r choose a − i) / (a + b choose a).

Find the mean and variance of the invariant distribution.
This is called the Bernoulli-Laplace model of diffusion of incompressible gases. That a
and b are fixed reflects incompressibility. Red and black balls are the two different
kinds of gas molecules. Exchange of balls models the process of diffusion.
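A numerical check of stationarity for the small illustrative parameters a = 3, b = 2, r = 2:

```python
from math import comb

a, b, r = 3, 2, 2
states = list(range(0, r + 1))   # number of red balls in CI

def p(i, j):
    """One-step transition probabilities of the Bernoulli-Laplace chain."""
    up = (a - i) / a * (r - i) / b          # black from CI, red from CII
    down = i / a * (b - r + i) / b          # red from CI, black from CII
    if j == i + 1: return up
    if j == i - 1: return down
    if j == i: return 1 - up - down
    return 0.0

pi = [comb(r, i) * comb(a + b - r, a - i) / comb(a + b, a) for i in states]
piP = [sum(pi[i] * p(i, j) for i in states) for j in states]
assert all(abs(x - y) < 1e-12 for x, y in zip(piP, pi))
```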
19. A die is consecutively turned from one face to any of the four neighbouring faces
with equal probability, independently of the preceding turns. Find lim_n p_66^(n).

20. Let Xn be a Markov chain with transition matrix P . Let τ be the first n such that
Xn ≠ X0 . Calculate E(τ |X0 = i) in terms of p_ii .
21. A state i in a Markov chain is called inessential if there is a state j such that i leads
to j but j does not lead to i.
In a finite state chain, if a state is transient then show that it is inessential. Is this
true for infinite state chains?
Consider an irreducible chain. Suppose that for one state j, we have p_jj > 0. Show
that the chain is aperiodic.
22. Consider an irreducible Markov chain on the state space {1, 2, 3} having stationary
distribution (1/3, 1/3, 1/3). If the transition matrix has diagonal entries zero, show
that p12 = p23 = p31 and p13 = p21 = p32 .
23. Consider the 1-dim RW with a coin whose chance of heads is p. Calculate p_00^(n) for all
n. Show that P_00(s) = (1 − 4pqs^2)^{−1/2} and F_00(s) = 1 − √(1 − 4pqs^2). Discuss
transience and recurrence.
24. I have L boxes and lots of balls with me. I put the balls, one by one, in the boxes by
picking a box at random independently of previous choices. Let Xn be the number of
empty boxes after n placements. Thus X0 = L. Show that (Xn ) is a Markov chain.
Describe the state space and transition matrix. Classify the states. What happens
to the chain ultimately? How long does it take for that to happen? The time it takes
is a random variable. This last question can be interpreted in two ways: find its
distribution or, if that is difficult, we will be happy with its expected value.

CMI/BVR SP1-HA 4 (09/09/2020)

In writing this book I had one more goal in mind: I wanted to stress the practical power
of abstract reasoning. The point is that during the last few years at different computer
science conferences, I heard reiteration of the following claim: Complex theories do not
work, simple algorithms do. One of the goals of this book is to show that, at least in
the problems of statistical inference, this is not true. I would like to demonstrate that
in this area of science a good old principle is valid: Nothing is more practical than a
good theory. Vladimir Vapnik: The Nature of Statistical Learning Theory.

25. We consider only finite graphs in this exercise. A graph (undirected, no multiple
edges, no loops) on n vertices is a tree if it is connected and has no cycles; equiv-
alently, it is connected and has (n − 1) edges; equivalently, it has no cycles and has
(n − 1) edges. Show the equivalence.
Prove that every tree contains a leaf, that is, a vertex of degree one. Show that between
any two vertices there is a unique simple (no edge repeated) path.
Let T be a tree and C be the set of proper three colourings of T . Join two elements
of C, if those colourings differ at exactly one vertex. Show that this makes C a
connected graph.
26. Consider simple random walk on a cycle with an odd number of vertices (from each
vertex go just to its left and right neighbours). Find the smallest n such that for all
i, j, p_ij^(n) is strictly positive.
A particle moves on a circle through points marked (in clockwise order) as 0, 1, 2, 3, 4.
At each step it moves to a neighbouring point – to the right with probability p, to
the left with probability 1 − p. Let Xn denote its location after n steps. This is a
Markov chain. Find its transition matrix and limiting probabilities.
27. (Discrete Torus) Consider the set {0, 1, 2, · · · , a − 1} × {0, 1, 2, · · · , b − 1} with
periodic boundary conditions, which means that when you add two points in this set
you do so coordinatewise, with the first coordinate added modulo a and the second
coordinate added modulo b. This is called the discrete torus. Consider the chain:
from (x, y) we move to (x, y + 1) or (x + 1, y), each with probability 1/2. Show that
the chain is irreducible and aperiodic iff gcd(a, b) = 1. What if I considered the
motion as the usual two dimensional random walk (treat end points as neighbours)?
28. Consider the state space S = {0, 1, 2, · · · }. Assume that we have numbers pn , qn , rn for
each n. If at state n, move to (n − 1) with probability qn , move to (n + 1) with
probability pn , stay at n with probability rn . Here pn + qn + rn = 1. We assume
that q0 = 0. We also assume pn > 0 for all n and qn > 0 for all n ≥ 1. Show that
the chain is irreducible. Show that there is a stationary distribution iff

Σ_{n≥1} (p0 p1 · · · p_{n−1})/(q1 q2 · · · qn) < ∞.

7
When this happens give a formula for the stationary distribution.
This is called ‘birth and death chain’. From n, moving to n + 1 is birth; moving to
(n − 1) is death. When size of population is zero, there is no death, of course we
still allow birth!
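A sketch with constant illustrative rates p_n = 0.3, q_n = 0.5 for n ≥ 1 (so the series above converges); stationarity can be checked through detailed balance, which holds for birth and death chains:

```python
# Birth and death chain with constant rates p = 0.3, q = 0.5 (q0 = 0).
p, q = 0.3, 0.5
N = 200                       # truncation point; the tail is geometrically small
w = [1.0]
for n in range(1, N):
    w.append(w[-1] * p / q)   # w[n] = p0...p_{n-1} / (q1...qn)
Z = sum(w)
pi = [x / Z for x in w]       # normalised stationary weights

# Detailed balance pi(n) p_n = pi(n+1) q_{n+1} characterises stationarity here.
assert all(abs(pi[n] * p - pi[n + 1] * q) < 1e-15 for n in range(N - 1))
```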
29. A chain is said to be doubly stochastic if the transition matrix has column sums
also equal to 1. Suppose that P is a doubly stochastic finite state space chain. If
it is irreducible, then show that the uniform distribution on the state space is the
only stationary distribution. What if it is not irreducible? What if the state space
is infinite?
30. Suppose that (Xn ) is a Markov chain with state space S and transition matrix P .
Define Yn = (Xn , Xn+1 , · · · , Xn+L−1 ). Here L is a fixed integer, L ≥ 1. Show that
the Y process is a Markov chain with state space S ∗ , the set of sequences (s1 , s2 , · · · , sL )
of length L from S such that p_{s1 s2} p_{s2 s3} · · · p_{s_{L−1} s_L} > 0. Calculate its transition
matrix.
If the original chain is irreducible then show that this new chain is also irreducible.
Moreover, if the original chain has stationary distribution π then show that this
chain also has a stationary distribution, and get a formula for it in terms of π.
This Y chain is called the snake chain of length L associated with the X chain.
31. Let π be the stationary distribution for an irreducible finite state chain. Let y and
z be two states such that pxy = 2pxz for all states x. Show that π(y) = 2π(z).
32. Consider an irreducible Markov chain with finite state space S and transition matrix
P = (p_ij ) with p_ii = 0 for all i. Suppose c_i for i ∈ S are numbers with 0 < c_i < 1 for
all i. Define q_ij = c_i p_ij for i ≠ j and q_ii = 1 − c_i . Show that the Q-chain is
irreducible and aperiodic. Give a formula for its stationary distribution using {c_i }
and the stationary distribution π of the P -chain.

CMI/BVR SP1-HA 5 (19/09/2020)
May All Be Happy. May All Be Free From Ailments.
May All See What Is Auspicious. May No One Suffer. Upanishads

33. (Systematic sampling) Let (Xn )n≥0 be a Markov chain with state space S and
transition matrix P . Let Yn = Xkn , where k ≥ 1 is an integer. Show that (Yn ) is a
Markov chain. Explain irreducibility, aperiodicity in terms of the original chain.
Here we sample/observe every k-th term of the original chain.
34. (Sampling changes only) Let (Xn ) be a Markov chain with transition matrix P and
state space S. Define ‘random times’ τn as follows: τ0 = 0, τ1 = inf{m ≥ 1 : Xm ≠
X0 } and, in general, τn+1 = inf{m ≥ τn : Xm ≠ X_τn }. Put Yn = X_τn . Show that
this is a Markov chain. Calculate its transition matrix. Discuss irreducibility. If
the original chain has a stationary distribution, do you think that using it and the
diagonal entries of P we can get the stationary distribution for this new chain?
(Try a two state chain.)
Here we sample/observe the chain only when it changes the state.
35. (Sampling return times) Let (Xn ) be a recurrent Markov chain with state space
S = {0, 1, 2 · · · } and transition matrix P . Let S0 be a non-empty subset of S. Assume
that X0 ∈ S0 . Define return times to S0 as follows; these are random times:
τ0 = 0, τ1 = inf{i ≥ 1 : Xi ∈ S0 } and, in general, τn+1 = inf{i ≥ τn + 1 : Xi ∈ S0 }.
Then (Yn ) = (X_τn ) is a Markov chain; it may be difficult to argue this rigorously,
but make an attempt. Try to guess its transition matrix; there is no closed form.
Here we sample/observe the chain when it visits S0 .
36. (Success run length chain) Consider a chain with state space {0, 1, 2, · · · } and tran-
sition matrix given by p_i0 = a_i and p_{i,i+1} = 1 − a_i . Here, for each i, 0 < a_i < 1.
Show that the chain is irreducible. Show that f_00^(1) = a_0 and, for n > 1,
f_00^(n) = a_{n−1} Π_{i=0}^{n−2} (1 − a_i ).
Show that Σ_{n=1}^{m+1} f_00^(n) = 1 − Π_{n=0}^{m} (1 − a_n ). Deduce that the chain is recurrent iff the series
Σ a_n diverges. [Analysis result: for (a_i ) as above, Π(1 − a_n ) = 0 iff Σ a_n = ∞.]
The special case a_i = q for all i arises as the success run length in coin tosses, as follows.
The state is zero if the current toss results in tails. Otherwise, it is the length of
the current success run. In a sequence of coin tosses, you can expect to see each
possible run length infinitely many times.
37. The same exercise as above, with a different interpretation. (Birth or collapse model)
Let p_{i,i+1} = a_i and p_{i,0} = 1 − a_i for i = 0, 1, 2, . . . . Here 0 < a_i < 1 for all i ≥ 1 and
a_0 = 1.
Show that the chain is irreducible. Show it is recurrent iff Σ (1 − a_i ) = ∞.
Show it has a stationary distribution iff Σ_{k≥1} Π_{j=1}^{k} a_j < ∞.
Calculate the stationary distribution when a_j = 1/(j + 2).

38. Suppose that balls labelled 1, 2, . . . , N are distributed between two boxes labelled I
and II. The state of the system is the number of balls in Box I. Determine the one step
transition matrix for the following experiment.
At each step a number is selected at random from 1, 2, . . . , N . Independently of the
number selected, box I is selected with probability p and box II is selected with
probability 1 − p. The ball with the selected number is placed in the box selected.
39. Sometimes you may have processes where the state of tomorrow depends not only
on today's but also on yesterday's state, and hence the process is not Markov. If you
think carefully you can convert such processes into a Markov chain.
Consider three chairs 0, 1, 2. If I am in chair i today and was also there yesterday, then
I select one of the other two chairs at random and move to it tomorrow. If I am in chair
i today but was not yesterday, then I select one of the three chairs at random and move
to that chair tomorrow. Thus the motion is random movement among the chairs;
however, staying in the same chair for more than two consecutive days is forbidden. Let
Xn be my chair on the n-th day. Formulate it as a Markov chain by enlarging the
state space and discuss its long term behaviour.
40. Sometimes there may be two transition mechanisms operating. Consider (Xn ) with
three states 0, 1, 2 and P (Xn+1 = j | Xn = i, Xn−1 = i_{n−1} , · · · , X0 = i_0 ) = P^1_ij or P^2_ij
according as n is even or odd. Here P^1 and P^2 are two 3 × 3 stochastic matrices.
Enlarge the state space and ‘transform’ it into a Markov chain.
41. Consider a finite set S, finitely many functions (f_i : 1 ≤ i ≤ k) from S to S,
and a probability vector (p_i : 1 ≤ i ≤ k). Here is a Markov chain with state space
S. If you are at s, select one of the functions using the probability vector. The new
state is the value of the selected function at s. This is called an Iterated Function System,
IFS for short. Show that we do indeed have a Markov chain. What is its transition
matrix?
Conversely, show that given any P you can produce an IFS (that is, finitely many
functions etc.) which gives the Markov chain with transition matrix P .
If S is the 256 × 256 grid, run the chain for a long time; when close to the steady state
π you can take empirical frequencies to approximate π, and instruct the computer to
colour pixels depending on the frequency; you may get your picture.
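A minimal IFS sketch on a three-point state space, with two maps of my choosing; the induced transition matrix is p(s, t) = Σ{p_i : f_i(s) = t}:

```python
# Two illustrative functions on S = {0, 1, 2}, chosen with probabilities 0.6, 0.4.
f1 = {0: 1, 1: 2, 2: 0}
f2 = {0: 0, 1: 0, 2: 2}
funcs = [(0.6, f1), (0.4, f2)]

# Transition matrix: p(s, t) = sum of the weights of functions mapping s to t.
P = [[sum(w for w, f in funcs if f[s] == t) for t in range(3)]
     for s in range(3)]
```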
42. Let the transition matrix of a 2-state Markov chain be given by
P =
[ p      1 − p ]
[ 1 − p  p     ]
Show that
P^n =
[ 1/2 + (1/2)(2p − 1)^n    1/2 − (1/2)(2p − 1)^n ]
[ 1/2 − (1/2)(2p − 1)^n    1/2 + (1/2)(2p − 1)^n ]
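The formula is easy to check numerically; p = 0.7 below is an illustrative choice:

```python
import numpy as np

p = 0.7
P = np.array([[p, 1 - p], [1 - p, p]])

def Pn_formula(n):
    """The claimed closed form for P^n."""
    t = (2 * p - 1) ** n
    return np.array([[0.5 + 0.5 * t, 0.5 - 0.5 * t],
                     [0.5 - 0.5 * t, 0.5 + 0.5 * t]])

for n in range(1, 10):
    assert np.allclose(np.linalg.matrix_power(P, n), Pn_formula(n))
```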

CMI/BVR SP1-HA 6 (07/10/2020)
I have been a happy man ever since Jan 1, 1990, when I no longer had an email address.
I had used email since about 1975, and it seems to me that fifteen years of email is
plenty for one lifetime. Email is a wonderful thing for people whose role in life is to be
on top of things. But not for me; my role is to be on the bottom of things. What I do
takes long hours of studying and uninterruptible concentration. Donald Knuth

43. (Gibbs Sampler) We have a strictly positive probability π on the set S = {0, 1}^V, where
V is a non-empty finite set. We want to simulate π. Here is a way. Start from any point
in the state space S and run a Markov chain. If you are at Xn , pick a v ∈ V at random
and set Xn+1 as follows. It agrees with Xn at coordinate places other than v. At v,
pick zero or one according to the π-conditional distribution of the v-th coordinate
given that all other vertices have values as given by Xn . Do NOT proceed till you
have understood this. Show that the chain is reversible with stationary distribution π.
This chain simulates π if it is aperiodic. Is it aperiodic?
(There is a variant of this called the Systematic Sweep Gibbs Sampler, in which the
random choice of v is eliminated. First enumerate the elements of V as {v1 , v2 , · · · , v100 }.
Having started at X0 , you update coordinate v1 by the earlier method to get X1 , then
update coordinate v2 to get X2 , then coordinate v3 , etc.; once you have finished
updating v100 , continue by updating coordinate v1 again to get X101 from X100 , and
so on forever. Of course this is not a Markov chain in our sense.
The ones we are discussing in class are called Homogeneous Markov Chains, or
Markov Chains with Stationary Transition Probabilities. The word stationary here
refers to the fact that the transition probabilities remain stationary, that is, they do not
change from day to day and so are described by one matrix P ; this is not to be confused
with the stationary distributions we discussed.)
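A numerical reversibility check for the random-scan sampler, on |V| = 3 with a randomly chosen strictly positive π (any such π works):

```python
import random
from itertools import product

random.seed(1)
V = 3
states = list(product([0, 1], repeat=V))
w = {x: random.uniform(0.5, 1.5) for x in states}   # strictly positive weights
Z = sum(w.values())
pi = {x: w[x] / Z for x in states}

def gibbs_p(x, y):
    """One-step transition probability of the random-scan Gibbs sampler."""
    total = 0.0
    for v in range(V):                       # v chosen with probability 1/V
        if all(x[u] == y[u] for u in range(V) if u != v):
            # y with coordinate v flipped: the competing value at v
            flip = tuple(1 - y[u] if u == v else y[u] for u in range(V))
            total += (1 / V) * pi[y] / (pi[y] + pi[flip])
    return total

# Reversibility: pi(x) p(x, y) = pi(y) p(y, x) for all x, y.
assert all(abs(pi[x] * gibbs_p(x, y) - pi[y] * gibbs_p(y, x)) < 1e-12
           for x in states for y in states)
```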
44. (Metropolis Chain) We want to simulate a probability π on a finite set S. First
make a connected graph on S; let d(v) be the degree of the vertex v in the graph.
Here is a Markov chain. If you are at v, pick a neighbour w of v at random.
Accept it with probability min{1, π_w d_v /(π_v d_w )}, and with the remaining probability
stay at v itself. Show that this gives a reversible aperiodic chain and simulates π. If
you choose a simple graph, the simulation is easier.
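A minimal sketch: the Metropolis chain on a 5-cycle (all degrees equal, so the degree factors cancel), targeting an illustrative π proportional to (1, 2, 3, 4, 5):

```python
import random

random.seed(2)
n = 5
pi = [i / 15 for i in range(1, 6)]        # target distribution on {0,...,4}

def step(v):
    w = random.choice([(v - 1) % n, (v + 1) % n])   # random neighbour on the cycle
    # accept with probability min(1, pi_w d_v / (pi_v d_w)); all degrees are 2
    if random.random() < min(1.0, pi[w] / pi[v]):
        return w
    return v

N = 300000
counts = [0] * n
v = 0
for _ in range(N):
    v = step(v)
    counts[v] += 1
freq = [c / N for c in counts]            # empirical frequencies approximate pi
```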
45. Here is a nice example (from Haggstrom) that arises in the context of simulated
annealing. Consider four states {a, b, c, d}. Let f be the function with f (a) = 1,
f (b) = f (d) = 2 and f (c) = 0. The graph we take, towards the Metropolis chain, is
just the square, with edges (ab), (bc), (cd), (da). For inverse temperature β, the Metropolis
chain, to simulate the corresponding Gibbs distribution, is given by the transition
matrix
[ 1 − e^{−β}    (1/2)e^{−β}     0             (1/2)e^{−β}  ]
[ 1/2           0               1/2           0            ]
[ 0             (1/2)e^{−2β}    1 − e^{−2β}   (1/2)e^{−2β} ]
[ 1/2           0               1/2           0            ]

We start at a and use this transition matrix at the n-th stage with β = β_n . Show that
the probability of the chain remaining at a forever is Π_n (1 − e^{−β_n}). Thus there is a
nonzero probability of getting stuck at a if Σ e^{−β_n} < ∞. For example, if β_n = n
then this is so. The reason for such a thing happening is firstly that a is a local minimum,
and secondly that the cooling is too fast.
46. (Winning streak) The state space is {0, 1, 2, . . . , 100}. I keep tossing a fair coin inde-
pendently. If it is tails, move to zero; if it is heads, move one step forward. The state
is thus the number of consecutive heads ending with the present head. However, I cannot
count more than 100, so if this streak of heads exceeds 100, I just say 100.
Describe the transition matrix and calculate the stationary distribution.
If the transition matrix is denoted by ((p_ij )) and the stationary distribution by π, then
one defines a reverse chain as follows: same state space, and transition matrix
given by ((q_ij )) where q_ij = π(j)p_ji /π(i). Calculate the transition matrix of the reverse
chain. Start the reverse chain at an arbitrary state, calculate the distribution
after 100 days, and see what happens.
47. (Record value chain) Let 0 < p < 1 and let (Zn )n≥1 be a sequence of independent G(p)
variables, that is, P (Zn = k) = q^k p for k ≥ 0, where q = 1 − p. Suppose that X0 is a
nonnegative integer valued random variable independent of the Z sequence. Set, for
n ≥ 1, Xn = max(X0 , Z1 , · · · , Zn ). Show that (Xn ) is a Markov chain. Calculate its
transition matrix. Can it have a stationary distribution?
48. (Birth and death chain again; see exercise 28) Consider a chain with state space
{0, 1, 2, · · · }. From state n you go to states n − 1, n and n + 1 with probabilities
qn , rn and pn respectively. Here pn + qn + rn = 1. We assume that pn > 0 for all
n; qn > 0 for n ≥ 1; q0 = 0 (why?).
We take a < b in the state space. Start the chain from an integer x ∈ [a, b], the
initial population size. Let Ta be the first time the population size equals a, that is,
Ta = inf{n ≥ 0 : Xn = a}. Similarly, let Tb be the first time the population size equals
b. We are interested in calculating the probability Px (Ta < Tb ); this notation
means P (Ta < Tb |X0 = x). Denote this function by u(x) for a ≤ x ≤ b. Observe:
u(a) = 1; u(b) = 0; and for a < n < b, u(n) = q_n u(n − 1) + r_n u(n) + p_n u(n + 1).
Let γ_0 = 1 and γ_n = (q_1 · · · q_n)/(p_1 · · · p_n) for n ≥ 1.

Argue that u(n + 1) − u(n) = (γ_n /γ_a)[u(a + 1) − u(a)].

Deduce that [u(a) − u(a + 1)]/γ_a = [Σ_{n=a}^{b−1} γ_n]^{−1}.

Conclude that u(n) − u(n + 1) = γ_n [Σ_{k=a}^{b−1} γ_k]^{−1}.

Conclude that Px (Ta < Tb ) = [Σ_{k=x}^{b−1} γ_k]/[Σ_{k=a}^{b−1} γ_k] and
Px (Tb < Ta ) = [Σ_{k=a}^{x−1} γ_k]/[Σ_{k=a}^{b−1} γ_k].

If Σ_{n=0}^{∞} γ_n = ∞, show that f_{n0} = 1 for all n ≥ 1, and hence the chain is recurrent.

If Σ_{n=0}^{∞} γ_n < ∞, then f_{n0} = [Σ_{k=n}^{∞} γ_k]/[Σ_{k=0}^{∞} γ_k], and hence the chain is transient.

In particular, if pn ≤ qn for n ≥ 1, then show that the chain is recurrent.
If qn /pn = [n/(n + 1)]2 for n ≥ 1, show that the chain is transient.
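The γ-formula can be checked against a direct linear solve of the equations for u; the rates below are illustrative:

```python
import numpy as np

a, b = 0, 6
p = [0.4] * 7                  # p_n for n = 0..6 (illustrative)
q = [0.0] + [0.35] * 6         # q_n, with q_0 = 0

gamma = [1.0]                  # gamma_n = (q_1...q_n)/(p_1...p_n)
for n in range(1, b):
    gamma.append(gamma[-1] * q[n] / p[n])

def u_formula(x):
    """u(x) = P_x(T_a < T_b) from the gamma-formula, with a = 0."""
    return sum(gamma[x:b]) / sum(gamma[a:b])

# Direct solve of (p_n + q_n) u(n) = q_n u(n-1) + p_n u(n+1), a < n < b,
# with boundary values u(a) = 1 and u(b) = 0.
A = np.zeros((b - 1, b - 1))
rhs = np.zeros(b - 1)
for n in range(1, b):
    i = n - 1
    A[i, i] = p[n] + q[n]
    if n - 1 >= 1:
        A[i, i - 1] = -q[n]
    else:
        rhs[i] += q[n] * 1.0   # boundary term u(0) = 1
    if n + 1 <= b - 1:
        A[i, i + 1] = -p[n]
u = np.linalg.solve(A, rhs)
assert all(abs(u[n - 1] - u_formula(n)) < 1e-9 for n in range(1, b))
```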
49. (A Queue) During each unit of time either zero or one customer arrives for service
and joins a single line. The probability of one customer arriving is λ and of no
customer arriving is (1 − λ). Also, in each unit of time, independent of arrivals, a
service starts; a single service is completed with probability p or continues to the
next period with probability (1 − p). Let Xn be the number of customers (waiting
in the line or being served) at the beginning of the n-th unit of time.
(i) Show that (Xn ) is a birth-death chain on S = {0, 1, 2 . . .}.
(ii) Discuss transience/recurrence/positive recurrence.
(iii) Calculate the stationary distribution π when λ < p. Calculate Eπ (Xn ).

