Example: Two-State Markov Chain

Let $\mathcal{X} = \{0, 1\}$ and
$$
P = \begin{pmatrix} 1-\alpha & \alpha \\ \beta & 1-\beta \end{pmatrix}.
$$
By induction, one can show that
$$
P^n = \begin{pmatrix}
\frac{\beta}{\alpha+\beta} + \frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n &
\frac{\alpha}{\alpha+\beta} - \frac{\alpha}{\alpha+\beta}(1-\alpha-\beta)^n \\[1ex]
\frac{\beta}{\alpha+\beta} - \frac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n &
\frac{\alpha}{\alpha+\beta} + \frac{\beta}{\alpha+\beta}(1-\alpha-\beta)^n
\end{pmatrix}.
$$
Provided $0 < \alpha + \beta < 2$, so that $|1-\alpha-\beta| < 1$, the powers converge:
$$
P^n \to \begin{pmatrix}
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta} \\[1ex]
\frac{\beta}{\alpha+\beta} & \frac{\alpha}{\alpha+\beta}
\end{pmatrix}
\quad \text{as } n \to \infty.
$$
The limiting distribution is $\pi = \bigl(\frac{\beta}{\alpha+\beta}, \frac{\alpha}{\alpha+\beta}\bigr)$.

Not All Markov Chains Have Limiting Distributions I

Consider simple random walk on $\{0, 1, 2, 3, 4\}$ with absorbing boundaries. With rows and columns indexed by the states $0, 1, 2, 3, 4$,
$$
P = \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
1/2 & 0 & 1/2 & 0 & 0 \\
0 & 1/2 & 0 & 1/2 & 0 \\
0 & 0 & 1/2 & 0 & 1/2 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}.
$$
Classes: $\{0\}$, $\{1, 2, 3\}$, $\{4\}$. One can show that
$$
P^n \to \begin{pmatrix}
1 & 0 & 0 & 0 & 0 \\
3/4 & 0 & 0 & 0 & 1/4 \\
2/4 & 0 & 0 & 0 & 2/4 \\
1/4 & 0 & 0 & 0 & 3/4 \\
0 & 0 & 0 & 0 & 1
\end{pmatrix}
\quad \text{as } n \to \infty.
$$
This Markov chain has no limiting distribution, since $\lim_{n\to\infty} P^n_{ij}$ depends on the initial state $i$.
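The behavior of the absorbing random walk can be illustrated numerically; the sketch below (assuming NumPy, which is not mentioned on the slides) raises the transition matrix above to a large power and shows that different starting states yield different limiting rows.

```python
# Absorbing random walk on {0, 1, 2, 3, 4}: the rows of P^n converge,
# but to different vectors, so there is no single limiting distribution.
import numpy as np

P = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0],
    [0.5, 0.0, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.0, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.0, 0.5],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

Pn = np.linalg.matrix_power(P, 100)
print(np.round(Pn, 3))
# Row 1 tends to (3/4, 0, 0, 0, 1/4) while row 3 tends to (1/4, 0, 0, 0, 3/4):
# the limit of P^n[i, j] depends on the starting state i.
```

The limiting rows match the gambler's-ruin absorption probabilities: starting from state $i$, the chain is absorbed at 0 with probability $(4-i)/4$ and at 4 with probability $i/4$.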
Lecture 4, Slides 9–12
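The closed-form expression for $P^n$ in the two-state example can likewise be checked numerically. The sketch below uses illustrative values of $\alpha$ and $\beta$ (any pair with $0 < \alpha + \beta < 2$ works) and compares the formula against direct matrix powers, assuming NumPy.

```python
# Verify the closed-form P^n for the two-state chain and its limit pi.
import numpy as np

alpha, beta = 0.3, 0.6   # illustrative values, not from the slides
s = alpha + beta
r = 1 - s                # the second eigenvalue, (1 - alpha - beta)

P = np.array([[1 - alpha, alpha],
              [beta, 1 - beta]])

def P_closed_form(n):
    """Closed-form P^n: stationary part plus a geometrically decaying part."""
    stationary = np.array([[beta / s, alpha / s],
                           [beta / s, alpha / s]])
    transient = np.array([[alpha / s, -alpha / s],
                          [-beta / s, beta / s]])
    return stationary + (r ** n) * transient

# The formula agrees with direct matrix powers...
assert np.allclose(np.linalg.matrix_power(P, 7), P_closed_form(7))

# ...and since |r| < 1, every row converges to pi = (beta/s, alpha/s).
print(np.linalg.matrix_power(P, 50)[0])  # approximately (2/3, 1/3)
```

Because $|1-\alpha-\beta| < 1$, the transient term vanishes geometrically, which is exactly why both rows of $P^n$ flatten onto $\pi$.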