
Problem Set 1

April 14, 2023

1 Problem 1
1.1 Problem
In each of the following cases, determine whether the stochastic matrix $P$, which you may assume to be irreducible, is reversible:
 
(a) $P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}$

(b) $P = \begin{pmatrix} 0 & p & 1-p \\ 1-p & 0 & p \\ p & 1-p & 0 \end{pmatrix}$

(c) $I = \{0, 1, \dots, N\}$ and $p_{ij} = 0$ if $|j - i| \ge 2$ and $p_{ij} > 0$ if $|j - i| \le 1$.

(d) $I = \{0, 1, \dots\}$ and $p_{01} = 1$, $p_{i,i+1} = p$, $p_{i,i-1} = 1 - p$ for $i \ge 1$.

(e) $p_{ij} = p_{ji}$ for all $i, j \in I$.

1.2 Solution 1
(a) Note that by Theorem 1.9.1, the time reversal preserves the diagonal, that is, $\hat{p}_{ii} = p_{ii}$ for all $i$. Since $P$ is $2 \times 2$ and each row sums to 1, the off-diagonal entries of $\hat{P}$ are then forced to agree with those of $P$ as well. Thus $\hat{P} = P$ and the chain is reversible.

(b) First, note that the uniform distribution is our invariant distribution (each column sums to 1). Thus from Theorem 1.9.1, we must have $\hat{p}_{ij} = p_{ji}$ for every pair $i, j$, or in other words, $\hat{P} = P^T$. Hence our chain is reversible if and only if $P = P^T$, i.e. if and only if $p = 1/2$.
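As a quick numerical sanity check of (b) (a sketch, not part of the formal argument; the helper names are our own), one can test whether the detailed balance equations $\pi_i p_{ij} = \pi_j p_{ji}$ with $\pi$ uniform, i.e. symmetry of $P$, hold for various values of $p$:

```python
import numpy as np

def cyclic_P(p):
    """Transition matrix from part (b)."""
    return np.array([[0, p, 1 - p],
                     [1 - p, 0, p],
                     [p, 1 - p, 0]])

def is_reversible_uniform(P):
    """Detailed balance pi_i p_ij == pi_j p_ji with pi uniform
    is exactly symmetry of P."""
    return np.allclose(P, P.T)

print(is_reversible_uniform(cyclic_P(0.5)))  # True
print(is_reversible_uniform(cyclic_P(0.3)))  # False
```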

(c) Considering the detailed balance equations, we see that for $i \neq j$ the only nontrivial equations are
$$\lambda_1 = \frac{p_{01}}{p_{10}}\,\lambda_0, \qquad \lambda_2 = \frac{p_{12}}{p_{21}}\,\lambda_1, \qquad \lambda_3 = \frac{p_{23}}{p_{32}}\,\lambda_2, \qquad \dots, \qquad \lambda_N = \frac{p_{N-1,N}}{p_{N,N-1}}\,\lambda_{N-1},$$
and thus we may let $\lambda_0 = \alpha$, and then
$$\lambda_1 = \frac{p_{01}}{p_{10}}\,\alpha \qquad \text{and} \qquad \lambda_k = \frac{p_{k-1,k}\, p_{k-2,k-1} \cdots p_{01}}{p_{k,k-1}\, p_{k-1,k-2} \cdots p_{10}}\,\alpha \quad \forall k \ge 2.$$
Then we may normalize the values by scaling $\alpha$ so that $\sum_{i=0}^{N} \lambda_i = 1$. Since the detailed balance equations admit a solution, the chain is reversible.
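To illustrate (c), here is a small sketch (the random matrix is a hypothetical example, not from the problem) that builds an arbitrary irreducible tridiagonal stochastic matrix on $\{0,\dots,N\}$, forms the measure $\lambda$ by the product formula above, and checks detailed balance numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Random irreducible tridiagonal stochastic matrix on {0, ..., N}:
# positive weights on the diagonal and immediate neighbors only.
P = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    nbrs = [j for j in (i - 1, i, i + 1) if 0 <= j <= N]
    w = rng.uniform(0.1, 1.0, size=len(nbrs))
    P[i, nbrs] = w / w.sum()

# lambda_k = alpha * prod_{j=1}^{k} p_{j-1,j} / p_{j,j-1}, taking alpha = 1.
lam = np.ones(N + 1)
for k in range(1, N + 1):
    lam[k] = lam[k - 1] * P[k - 1, k] / P[k, k - 1]
pi = lam / lam.sum()  # normalize to a distribution

# Detailed balance: the matrix (pi_i * p_ij) should be symmetric.
D = pi[:, None] * P
print(np.allclose(D, D.T))  # True
```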

(d) Considering the detailed balance equations, we see that, letting $\lambda_0 = \alpha$, for $i \ge 1$,
$$\lambda_i = \left(\frac{p}{1-p}\right)^{i-1} \frac{1}{1-p}\,\alpha.$$
Now, this may be scaled down to a distribution if and only if the series $\sum_i \lambda_i$ converges, which occurs if and only if $\frac{p}{1-p} < 1 \iff p < \frac{1}{2}$. Hence the chain is reversible if and only if $p < 1/2$.
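For completeness, when $p < 1/2$ the normalizing constant follows from the geometric series:
$$\sum_{i=0}^{\infty} \lambda_i = \alpha\left[1 + \frac{1}{1-p}\sum_{i=1}^{\infty}\left(\frac{p}{1-p}\right)^{i-1}\right] = \alpha\left[1 + \frac{1}{1-p}\cdot\frac{1}{1 - \frac{p}{1-p}}\right] = \alpha\left[1 + \frac{1}{1-2p}\right] = \alpha\,\frac{2-2p}{1-2p},$$
so taking $\alpha = \frac{1-2p}{2(1-p)}$ yields a distribution.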

(e) If $I$ is finite, then $P = P^T$ implies that the uniform distribution is invariant, and $\hat{P} = P^T = P$, so the chain is reversible.
However, if $I$ is infinite, the uniform measure cannot be normalized into a distribution. If an invariant distribution existed, the chain would be positive recurrent, and the invariant measure would then be unique up to scalar multiples, forcing it to be uniform, a contradiction. Hence there is no invariant distribution, and $P$ is not reversible with respect to any distribution.

2 Problem 2
2.1 Problem
Each morning a student takes 1 of the 3 books he owns from his shelf. The probability that he
chooses book i is αi where 0 < αi < 1 for i = 1, 2, 3 and α1 + α2 + α3 = 1, and choices on
successive days are independent. In the evening he replaces the book at the left-hand end of the
shelf. Let pn denote the probability that on the morning of day n the student finds the books in the
correct order 1,2,3 from left to right.
(a) Show that, irrespective of the initial arrangement of the books, pn converges as n → ∞.
(b) Find this limit.

2.2 Solution 2
Note that there are six possible orderings (or states) for the books on any given day. From the ordering $(a, b, c)$, choosing book $a$ (probability $\alpha_a$) leaves the order unchanged, while choosing $b$ or $c$ (probabilities $\alpha_b$ and $\alpha_c$) moves that book to the left-hand end. In particular $p_{ii} > 0$ for every state $i$, so our chain is aperiodic. Moreover, our chain is irreducible (this is easy to see from drawing all six states).

(a) Since the chain is irreducible and aperiodic on a finite state space, it has a unique invariant distribution $\pi$, and by Theorem 1.8.3, $P(X_n = j) \to \pi_j$ for every state $j$. In particular $p_n \to \pi_{(1,2,3)}$, irrespective of the initial arrangement.

(b) We claim that $\pi_{(a,b,c)} = \frac{\alpha_a \alpha_b}{\alpha_b + \alpha_c}$. Indeed, the state $(a,b,c)$ is reached, each with probability $\alpha_a$, from itself, from $(b,a,c)$ and from $(b,c,a)$. Since
$$\pi_{(b,a,c)} + \pi_{(b,c,a)} = \alpha_b\left[\frac{\alpha_a}{\alpha_a+\alpha_c} + \frac{\alpha_c}{\alpha_a+\alpha_c}\right] = \alpha_b,$$
we have
$$\sum_{\tau} \pi_\tau\, p_{\tau,(a,b,c)} = \alpha_a\left[\frac{\alpha_a\alpha_b}{\alpha_b+\alpha_c} + \alpha_b\right] = \alpha_a\alpha_b\,\frac{\alpha_a+\alpha_b+\alpha_c}{\alpha_b+\alpha_c} = \frac{\alpha_a\alpha_b}{\alpha_b+\alpha_c} = \pi_{(a,b,c)},$$
so $\pi$ is indeed invariant (and it sums to 1 over the six orderings). Therefore
$$p_n \to \pi_{(1,2,3)} = \frac{\alpha_1\alpha_2}{\alpha_2+\alpha_3};$$
note that when $\alpha_1 = \alpha_2 = \alpha_3 = 1/3$ this reduces to $1/6$.
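The limit can be sanity-checked numerically; the following sketch (the weights $\alpha$ are arbitrary example values) builds the six-state move-to-front chain and iterates its transition matrix:

```python
import numpy as np
from itertools import permutations

alpha = {1: 0.5, 2: 0.3, 3: 0.2}  # arbitrary example weights, summing to 1

states = list(permutations([1, 2, 3]))
idx = {s: k for k, s in enumerate(states)}

# Transition matrix: choosing book i (probability alpha_i)
# moves it to the left-hand end of the shelf.
P = np.zeros((6, 6))
for s in states:
    for book, a in alpha.items():
        t = (book,) + tuple(b for b in s if b != book)
        P[idx[s], idx[t]] += a

# The chain is ergodic, so any row of P^n approaches the invariant distribution.
pi = np.linalg.matrix_power(P, 200)[0]

limit = pi[idx[(1, 2, 3)]]
closed_form = alpha[1] * alpha[2] / (alpha[2] + alpha[3])
print(limit, closed_form)  # both approximately 0.3
```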

3 Problem 3
3.1 Problem
John has $N$ umbrellas. He walks to his office in the morning and walks back home in the evening. If it is raining and there is at least one umbrella where he is, he carries an umbrella; if it is not raining, he does not. Suppose that it rains on each journey with probability $p \in (0, 1)$, independently of past weather. Find the long-run proportion of journeys on which John gets wet.

3.2 Solution 3
We consider 2(N +1) states, indicating where John is (at office or at home) and how many umbrellas
there are at home. That is, for 1 ≤ i ≤ N + 1, the state i indicates that John is at home with i − 1
umbrellas at home. For N + 2 ≤ i ≤ 2N + 2, state i indicates John is at the office with i − (N + 2)
umbrellas at home.
The transition matrix therefore looks like the following:

$$
P = \begin{pmatrix}
0 & 0 & \cdots & 0 & 0 & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & p & 1-p & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & 0 & 0 & p & 1-p & \cdots & 0 \\
\vdots & & & & \vdots & \vdots & \ddots & \ddots & \ddots & \vdots \\
0 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & p & 1-p \\
1-p & p & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1-p & p & \cdots & 0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots & \vdots & & & & \vdots \\
0 & \cdots & 0 & 1-p & p & 0 & 0 & \cdots & 0 & 0 \\
0 & \cdots & 0 & 0 & 1 & 0 & 0 & \cdots & 0 & 0
\end{pmatrix}
$$

Now, note that we are in particular concerned with states 1 and $2N+2$: in either of these states there is no umbrella where John is, so he travels to the next location without changing the number of umbrellas at home, and he gets wet precisely when it rains, which happens with probability $p$. Therefore, the long-run proportion of journeys on which John gets wet is
$$p \lim_{n\to\infty} \frac{V_1(n) + V_{2N+2}(n)}{n},$$
where $V_i(n)$ is the number of visits to $i$ before time $n$.

Now, $P$ has invariant measure $\lambda_2 = \lambda_3 = \cdots = \lambda_{2N+1} = \alpha$, $\lambda_1 = \lambda_{2N+2} = (1-p)\alpha$ for some $\alpha > 0$. Normalizing, we see that the invariant distribution is obtained when
$$2N\alpha + 2(1-p)\alpha = 1 \implies \alpha = \frac{1}{2N + 2(1-p)}.$$
Hence the invariant distribution $\pi$ has $\pi_1 = \pi_{2N+2} = \frac{1-p}{2N+2(1-p)}$. Hence by Theorem 1.7.7, $m_i = E_i(T_i) = 1/\pi_i$, so $V_i(n)/n \to 1/m_i = \pi_i$, and thus
$$p \lim_{n\to\infty} \frac{V_1(n) + V_{2N+2}(n)}{n} = p \cdot \frac{2(1-p)}{2N + 2(1-p)} = \frac{p(1-p)}{N + 1 - p}.$$
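The answer can also be checked by direct simulation; the following sketch (the parameter values are arbitrary) simulates the journeys and compares the empirical wet fraction with $\frac{p(1-p)}{N+1-p}$:

```python
import random

def wet_fraction(N, p, trips=1_000_000, seed=0):
    """Simulate the umbrella walk and return the fraction of
    journeys on which John gets wet."""
    rng = random.Random(seed)
    home, office = N, 0       # all umbrellas start at home (arbitrary)
    at_home, wet = True, 0
    for _ in range(trips):
        rain = rng.random() < p
        here = home if at_home else office
        if rain and here == 0:
            wet += 1          # raining, no umbrella available: gets wet
        elif rain:            # carry an umbrella to the other location
            if at_home:
                home, office = home - 1, office + 1
            else:
                home, office = home + 1, office - 1
        at_home = not at_home
    return wet / trips

N, p = 2, 0.4
print(wet_fraction(N, p), p * (1 - p) / (N + 1 - p))
```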
