Markov Chains

Outline
 Stochastic Processes and Markov Property
 Markov Chains
 Chapman-Kolmogorov Equations
 Classification of States
 Invariant Measures, Time Averages, Limiting Probabilities
Stochastic Processes and Markov Property

 Stochastic Process
– Discrete-time: {X_n : n ≥ 0}, random variables indexed by an integer n
– Continuous-time: {X(t) : t ≥ 0}, random variables indexed by a real number t
– Discrete state space if each X_n or X(t) has a countable range
– Continuous state space if each X_n or X(t) has an uncountable range
– Ex: Markov chains have discrete time and a discrete state space

 Markov Property: The future conditioned on the present is independent of the past
  P(X_{n+1} = y | X_n = x, X_{n−1} = x_{n−1}, …, X_0 = x_0) = P(X_{n+1} = y | X_n = x)
[Timeline diagram: Past → Present → Future]
Markov Chains

 Markov Chain: A discrete-time, discrete state space Markovian stochastic process.
– Often described by its transition matrix P

 Ex: Moods {Cooperative, Judgmental, Oppositional} of a person as a Markov chain

 Ex: A random walk process has the state space of integers …, −2, −1, 0, 1, 2, …. For a
fixed probability 0 ≤ p ≤ 1, the process moves either forward or backward:
– P(X_{n+1} = i+1 | X_n = i) = p = 1 − P(X_{n+1} = i−1 | X_n = i)
– The transition matrix has infinite dimensions and is sparse

       …    −2    −1     0     1     2    …
  …    …     …     …     …     …     …    …
 −2    …     0     p     0
 −1         1−p    0     p     0
  0     0   1−p    0     p     0
  1          0    1−p    0     p     0
  2                0    1−p    0     p    …
  …    …     …     …     …     …     …    …
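
A minimal simulation sketch in R (the function name simulate_walk and its parameters are illustrative, not from the slides):

  # Simulate the random walk: step +1 wp p, step -1 wp 1 - p.
  simulate_walk <- function(n_steps, p = 0.5, x0 = 0) {
    steps <- ifelse(runif(n_steps) < p, 1, -1)   # one draw per step
    c(x0, x0 + cumsum(steps))                    # states X_0, X_1, ..., X_n
  }
  set.seed(1)
  simulate_walk(10, p = 0.5)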
Chapman-Kolmogorov Equations

 Probability of going from state x to state y in n steps:
  p_{x,y}^{<n>} = P(X_{k+n} = y | X_k = x)
 To go from x to y in n + m steps, the chain passes through some state z at the nth step:
  p_{x,y}^{<n+m>} = Σ_{z∈𝒳} p_{x,z}^{<n>} p_{z,y}^{<m>}
 Using transition matrices:
  P^{n+m} = P^n P^m
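
The matrix identity invites a quick numerical check in R; the 2-state matrix below is a hypothetical example, not from the slides:

  # Verify P^(n+m) = P^n P^m for n = 2, m = 3 on a small chain.
  P <- rbind(c(0.7, 0.3),
             c(0.4, 0.6))
  matpow <- function(M, k) Reduce(`%*%`, replicate(k, M, simplify = FALSE))
  all.equal(matpow(P, 5), matpow(P, 2) %*% matpow(P, 3))   # TRUE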
Classification of States: Communication

 State y is accessible from state x if p_{x,y}^{<n>} > 0 for some n.
 Contrapositive: If state y is not accessible from x, then p_{x,y}^{<n>} = 0 for all n, so
  P(reaching y ever | starting in x) ≤ Σ_{n=0}^∞ p_{x,y}^{<n>} = 0
 States x and y communicate if y is accessible from x and x is accessible from y
 Communication is a relation on 𝒳 × 𝒳. This relation is reflexive, symmetric,
and transitive. Hence, it is an equivalence relation.
 The communication relation splits 𝒳 into equivalence classes: each class is the
set of states that communicate with each other.
 Ex: The transition matrix below creates the classes {1,4}, {2}, {3,5}. We can
define an aggregate state Markov chain whose states are these classes. The new
chain is likely to end up in {1,4}.
[Tables lost in extraction: the 5×5 transition matrix with + marking positive entries,
and the aggregate 3-state chain on the classes {1,4}, {2}, {3,5}.]
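
Classes can be found mechanically from reachability; a sketch in R, assuming a finite transition matrix P (function and variable names are illustrative):

  # States x, y communicate iff each is reachable from the other.
  comm_classes <- function(P) {
    n <- nrow(P)
    reach <- diag(n) > 0                   # every state reaches itself in 0 steps
    A <- P > 0
    for (k in 1:n) reach <- reach | ((reach %*% A) > 0)  # n steps suffice
    comm <- reach & t(reach)               # accessibility in both directions
    split(1:n, apply(comm, 1, function(r) paste(which(r), collapse = ",")))
  }
  # e.g., comm_classes(rbind(c(0, 1), c(1, 0))) returns one class {1, 2}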
Classification of States: Periodicity

 Ex: The transition matrix below creates the classes {1,2,4} and {3,5}. These classes
are not accessible from each other, so the chain decomposes into two chains, one on
each class.
[Tables lost in extraction: the 5×5 transition matrix with + marking positive entries,
and the transition matrices of the two decomposed chains on {1,2,4} and {3,5}.]

 An irreducible Markov chain has only one class of states. A reducible Markov chain, as the
two examples above illustrate, either eventually moves into a single class or can be decomposed.
In view of this, the limiting probability of a state is considered only in an irreducible chain.
Irreducibility alone does not guarantee the presence of limiting probabilities.
 Ex: Consider a Markov chain with two states 𝒳 = {x, y} such that p_{x,y} = p_{y,x} = 1. Starting in state
x, we can ask for p_{x,x}^{<n>}. This probability has a simple but periodic structure: it is 1 when n is
even and 0 otherwise. The limit of p_{x,x}^{<n>} does not exist as n approaches infinity.
 To talk about limiting probabilities, we need to rule out periodicity. The period d(x) of state x is
the greatest common divisor (gcd) of all the integers in {n ≥ 1 : p_{x,x}^{<n>} > 0}:
  d(x) = gcd{n ≥ 1 : p_{x,x}^{<n>} > 0}
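
The definition translates directly into R; a sketch assuming a finite transition matrix P and a truncation n_max (both names are illustrative):

  gcd2 <- function(a, b) if (b == 0) a else gcd2(b, a %% b)
  period <- function(P, x, n_max = 100) {
    d <- 0; Pn <- diag(nrow(P))
    for (n in 1:n_max) {
      Pn <- Pn %*% P
      if (Pn[x, x] > 0) d <- gcd2(d, n)   # gcd(0, n) = n starts the running gcd
    }
    d
  }
  # The two-state example above: period(rbind(c(0, 1), c(1, 0)), 1) returns 2.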
Markov Chain Examples with Different Periods

[Diagrams of chains with 2, 3, 4, and many states lost in extraction; the surviving captions carry the computations:]
– 2 states: period 2 = gcd{2, 4, …} for the pure cycle; period 1 = gcd{1, 2, …} with a self-loop
– 3 states: period 3 = gcd{3, 6, …} for the pure cycle; period 1 = gcd{2, 3, …} with an extra transition
– 4 states: period 4 = gcd{4, 8, …} for the pure cycle; period 1 = gcd{4, 7, …} with an extra transition
– Many states: period 1 = gcd{1, 2, …} or period 2 = gcd{2, 4, 6, …} depending on the transitions
– All possible transitions with 2 communicating states ⇒ the same period
Period is a Class Property

 Periods of any two states in the same class are the same.
– For classes with only two states, see the last page
– Consider classes with at least three states
– Consider x, y such that p_{x,y}^{<m>} > 0 and p_{y,x}^{<n>} > 0 for some m and n
  » Such m, n exist because x, y are in the same class
  » Period of state x: d(x) = gcd{s ≥ 1 : p_{x,x}^{<s>} > 0}
  » By the definition of m, n and for any s with p_{x,x}^{<s>} > 0:
    p_{y,y}^{<n+m>} ≥ p_{y,x}^{<n>} p_{x,y}^{<m>} > 0 and p_{y,y}^{<n+s+m>} ≥ p_{y,x}^{<n>} p_{x,x}^{<s>} p_{x,y}^{<m>} > 0
    Such an s ≥ 1 exists because x communicates with another (third) state z in its class
  » d(y) divides both n + m and n + s + m, hence d(y) divides their difference s
  » d(y) divides every s with p_{x,x}^{<s>} > 0, so d(y) divides the gcd of such s
  » Hence, d(y) divides d(x).
– Repeat by changing the roles
  » x ↔ y ⇒ d(x) divides d(y).
– Periods d(x) and d(y) divide each other ⇒ they must be equal.
[Diagram lost in extraction: paths y → x of length n, x → y of length m, and a loop x → x of length s through z]
Classification of States: Recurrence

 A state is called recurrent if the chain returns to the state in finitely many steps with probability 1.
– The first time the chain visits state y after starting at state x is a random variable τ_{x,y}:
  τ_{x,y} = min{n ≥ 1 : X_n = y, given X_0 = x}
– This variable is also called the hitting time
– State x is recurrent iff P(τ_{x,x} < ∞) = 1; otherwise, x is transient.
 A recurrent state has a finite hitting time with probability 1.
 A positive recurrent state has E(τ_{x,x}) < ∞. Positive recurrence ⇒ recurrence.
– Ex: Heavy-tailed hitting time distributions, e.g., Pareto, can have infinite expected values,
so recurrence does not imply positive recurrence.

 Ex: Starting with X_0 = x, let N_x be the number of times the chain is in x:
  N_x = 1_{X_0=x} + 1_{X_1=x} + 1_{X_2=x} + ⋯
– We have
  E(N_x | X_0 = x) = E(Σ_{n=0}^∞ 1_{X_n=x} | X_0 = x) = Σ_{n=0}^∞ E(1_{X_n=x} | X_0 = x) = Σ_{n=0}^∞ p_{x,x}^{<n>}
  The last term is more operational as it is based on transition probabilities
Recurrence Related Derivations

 The expected number of times the chain is in x, E(N_x | X_0 = x) = Σ_{n=0}^∞ p_{x,x}^{<n>},
can also be written as
  E(N_x | X_0 = x) = 1 / (1 − P(τ_{x,x} < ∞))
– Note that to be in state x at time n ≥ 1, the chain must come to state x for the first time at some time k,
for k = 1, …, n. This probabilistic reasoning yields
  p_{x,x}^{<n>} = Σ_{k=1}^{n} P(τ_{x,x} = k) p_{x,x}^{<n−k>}
– On the other hand,
  Σ_{n=0}^∞ p_{x,x}^{<n>} − 1 = Σ_{n=1}^∞ p_{x,x}^{<n>} = Σ_{n=1}^∞ Σ_{k=1}^{n} P(τ_{x,x} = k) p_{x,x}^{<n−k>}
    = Σ_{k=1}^∞ P(τ_{x,x} = k) Σ_{n=k}^∞ p_{x,x}^{<n−k>} = Σ_{k=1}^∞ P(τ_{x,x} = k) Σ_{n=0}^∞ p_{x,x}^{<n>}
    = P(τ_{x,x} < ∞) Σ_{n=0}^∞ p_{x,x}^{<n>}
– Hence, solving for the sum,
  E(N_x | X_0 = x) = Σ_{n=0}^∞ p_{x,x}^{<n>} = 1 / (1 − P(τ_{x,x} < ∞))
 If P(τ_{x,x} < ∞) = 1, the state x is recurrent and E(N_x | X_0 = x) = Σ_{n=0}^∞ p_{x,x}^{<n>} = ∞.
 If P(τ_{x,x} < ∞) < 1, the state x is transient and E(N_x | X_0 = x) = Σ_{n=0}^∞ p_{x,x}^{<n>} < ∞.
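
A numerical check of the formula in R on a hypothetical 2-state chain: state 1 returns to itself wp 1/2 and is otherwise absorbed in state 2, so P(τ_{1,1} < ∞) = 1/2 and the expected visit count should be 1/(1 − 1/2) = 2:

  P <- rbind(c(0.5, 0.5),
             c(0,   1))
  Pn <- diag(2); total <- 0
  for (n in 0:200) { total <- total + Pn[1, 1]; Pn <- Pn %*% P }
  total   # the truncated sum of p_{1,1}^<n>, approaches 2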
Infinite Hitting Time

 P(τ_{x,x} < ∞) < 1 ⇔ P(τ_{x,x} = ∞) > 0
 Example: [Diagram lost in extraction: a two-state chain in which each visit to state 1 is followed,
with probability 1/2 each, by a return to 1 in two steps or by never returning]
 P(τ_{1,1} = ∞) = 1/2 and P(τ_{1,1} = 2) = 1/2
 N_1: number of times the chain visits state 1
– N_1 = 1 wp 1/2, N_1 = 2 wp (1/2)^2, …, N_1 = k wp (1/2)^k
 E(N_1) = 2 = 1 / (1 − 1/2) = 1 / (1 − P(τ_{1,1} < ∞))
 What is Σ_{k=0}^∞ P(τ_{1,1} = k)?
– lim_{n→∞} Σ_{k=0}^{n} P(τ_{1,1} = k) = 0 + 0 + 1/2 + 0 + 0 + ⋯ = 1/2
– P(τ_{1,1} = ∞) + lim_{n→∞} Σ_{k=0}^{n} P(τ_{1,1} = k) = 1/2 + 1/2 = 1
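
A simulation sketch in R; the slide's diagram did not survive extraction, so the 3-state chain below is one assumed realization of these hitting probabilities (1 → 2 surely; from 2, back to 1 wp 1/2, else absorbed in 3):

  P <- rbind(c(0,   1, 0),
             c(0.5, 0, 0.5),
             c(0,   0, 1))
  set.seed(3)
  N1 <- replicate(10000, {
    x <- 1; n1 <- 1                       # count the visit at time 0
    for (n in 1:100) {                    # absorption happens fast
      x <- sample(1:3, 1, prob = P[x, ])
      n1 <- n1 + (x == 1)
    }
    n1
  })
  mean(N1)   # close to E(N_1) = 2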
Invariant Measures

 Invariant measure: a possibly infinite-dimensional column vector ρ ≥ 0 satisfying
  ρ^T = ρ^T P
– Viewing the transition matrix P as an operator, an invariant measure is a fixed point of the operator;
successive applications of the operator do not move the invariant measure.
– An invariant measure is not unique: ρ invariant ⇒ 2ρ invariant
– Towards uniqueness, normalize the invariant measure:
  π = ρ / (ρ^T 𝟏) for ρ^T 𝟏 < ∞, where 𝟏 is a column vector of ones
– An invariant probability measure π satisfies
  » Invariance: π^T = π^T P
  » Normalization: π^T 𝟏 = 1
  » Nonnegativity: π ≥ 0
 Ex: Consider a 4-state Markov chain, a cycle 1 → 2 → 3 → 4 → 1, with
  P = [0 1 0 0]
      [0 0 1 0]
      [0 0 0 1]
      [1 0 0 0]
– This chain has invariant measures [1/4, 1/4, 1/4, 1/4], [1, 1, 1, 1], [2, 2, 2, 2], or [a, a, a, a] for a ≥ 0
– Among these, the only invariant probability is [1/4, 1/4, 1/4, 1/4]
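
Invariance is a one-line check in R:

  P <- rbind(c(0, 1, 0, 0),
             c(0, 0, 1, 0),
             c(0, 0, 0, 1),
             c(1, 0, 0, 0))
  pi_vec <- c(1, 1, 1, 1) / 4
  all.equal(as.vector(pi_vec %*% P), pi_vec)   # TRUE: pi^T = pi^T P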
Invariant Measure and Time Averages

 Ex: Consider a 4-state Markov chain, where state 2 branches to 3 or 4 with probability 1/2 each, with
  P = [0  1   0   0 ]
      [0  0  1/2 1/2]
      [0  0   0   1 ]
      [1  0   0   0 ]
– This chain has invariant measures [2/7, 2/7, 1/7, 2/7], [2, 2, 1, 2], [4, 4, 2, 4], or [2a, 2a, a, 2a] for a ≥ 0
– Among these, the only invariant probability is [2/7, 2/7, 1/7, 2/7], as
  [2/7, 2/7, 1/7, 2/7] P = [2/7, 2/7, 1/7, 2/7]

– Consider two cycles, a triangle 1 → 2 → 4 → 1 and a square 1 → 2 → 3 → 4 → 1
– Think of the Markov chain as (1/2) triangle + (1/2) square.
– In the triangle, the chain takes 3 steps to come back to state 1.
– In the square, it takes 4 steps.
– In 7 steps on average over the two cycles, the chain returns to state 1 by visiting each of {1, 2, 4}
twice and {3} once
– E(τ_{1,1}) = 3.5 = (1/2)·3 + (1/2)·4, and per cycle
  E(Σ_{n=1}^{τ_{1,1}} 1_{X_n=1} | X_0 = 1) = E(Σ_{n=1}^{τ_{1,1}} 1_{X_n=2} | X_0 = 1) = E(Σ_{n=1}^{τ_{1,1}} 1_{X_n=4} | X_0 = 1) = 1,
  whereas E(Σ_{n=1}^{τ_{1,1}} 1_{X_n=3} | X_0 = 1) = 0.5.
– An invariant measure turns out to be the expected number of visits per cycle to each state: [1, 1, 1/2, 1]
– The invariant probability is [1/3.5, 1/3.5, 0.5/3.5, 1/3.5] = [2/7, 2/7, 1/7, 2/7]
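
A simulation sketch of the cycle argument in R (variable names are illustrative): estimate E(τ_{1,1}) and the expected visits per cycle by running the chain until it returns to state 1:

  P <- rbind(c(0, 1,   0,   0),
             c(0, 0, 0.5, 0.5),
             c(0, 0,   0,   1),
             c(1, 0,   0,   0))
  set.seed(2)
  cycles <- t(replicate(10000, {
    x <- 1; visits <- c(0, 0, 0, 0); tau <- 0
    repeat {
      x <- sample(1:4, 1, prob = P[x, ])
      tau <- tau + 1; visits[x] <- visits[x] + 1
      if (x == 1) break
    }
    c(tau, visits)
  }))
  colMeans(cycles)   # approx (3.5, 1, 1, 0.5, 1): E(tau), then visits to states 1..4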
Invariant Measure, Time Average & Limiting Probability

 In the previous example, the time averages 1/3.5, 1/3.5, 1/7, 1/3.5 represent the fractions of time the
chain stays in states 1, 2, 3, 4, respectively.
 In general, the time average is a random variable taken not over a single cycle but over N steps as N → ∞:
  lim_{N→∞} (1/N) Σ_{n=0}^{N} 1_{X_n=x}
 Consistency Result: An irreducible and positive recurrent Markov chain X_n has
– a unique invariant probability π, and
– a time average that converges to this invariant probability almost surely:
  (1/N) Σ_{n=0}^{N} 1_{X_n=x} →_{a.s.} π_x
 The consistency result implies that we do not have to separately search for the invariant probability and
the time averages; it suffices to find one of them. But the result is not operational.

 Towards an operational method, let us introduce the limiting probability
  π_y = lim_{n→∞} p_{x,y}^{<n>}
 Note that the limiting probability is independent of the initial state x; this is possible only in an aperiodic chain
 Crude methodology: Keep multiplying the transition matrix by itself to obtain P^n until its rows converge
to each other, so that any one of the rows can be taken as the limiting probability; a sketch follows the list below.
 Issues with the crude methodology:
– No assurance of convergence
– No relation between limiting probability, time average and invariant measure
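
In R, the crude methodology is a few lines; a sketch on the 4-state example used earlier:

  P <- rbind(c(0, 1,   0,   0),
             c(0, 0, 0.5, 0.5),
             c(0, 0,   0,   1),
             c(1, 0,   0,   0))
  Pn <- P
  for (i in 1:6) Pn <- Pn %*% Pn   # squaring six times gives P^64
  round(Pn, 3)                     # every row is close to (2/7, 2/7, 1/7, 2/7)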
These issues are resolved by the Main Result on the next page.
Invariant Measure=Time Average=Limiting Probability

Main Result: For an irreducible Markov chain with a period of 1, if an invariant probability
measure π exists, i.e., a solution to π^T = π^T P, π^T 𝟏 = 1, π ≥ 0, then
– the Markov chain is positive recurrent,
– π is unique,
– π is also the limiting probability,
– for each state x, π_x > 0.
 Since irreducible and positive recurrent chains have time average →_{a.s.} invariant
measure, the π computed above is also the time average
 All we have to check is 1) irreducibility, 2) aperiodicity, and 3) a solution to π^T = π^T P, π^T 𝟏 = 1, π ≥ 0.
 The solution to π^T = π^T P, π^T 𝟏 = 1, π ≥ 0 is π^T = 𝟏^T (I − P + 𝟙)^{−1}, where I is the identity matrix and
𝟙 is the matrix of ones; both of these matrices have the same size as the transition matrix P.
– To obtain this, π^T = π^T P implies π^T (I − P) = 𝟎^T.
– Hence, π^T (I − P + 𝟙) = 𝟎^T + π^T 𝟙 = 𝟏^T, where 𝟎 is the column vector of only 0s; the last equality
holds because 𝟙 = 𝟏 𝟏^T and π^T 𝟏 = 1.
– When the Markov chain is irreducible, (I − P + 𝟙) can be shown to have the inverse (I − P + 𝟙)^{−1}, so
  π^T = 𝟏^T (I − P + 𝟙)^{−1}
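
The formula turns into a one-line solver; a minimal sketch, assuming P is the finite transition matrix of an irreducible chain (the name stationary is illustrative):

  stationary <- function(P) {
    n <- nrow(P)
    # pi^T = 1^T (I - P + ONES)^{-1}, with ONES = matrix(1, n, n)
    as.vector(rep(1, n) %*% solve(diag(n) - P + matrix(1, n, n)))
  }
  # On the next page's example, stationary(P) returns (2/7, 2/7, 1/7, 2/7).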
Limiting Probability Example

 Ex: Consider a 4-state Markov chain with
  P = [0  1   0   0 ]
      [0  0  1/2 1/2]
      [0  0   0   1 ]
      [1  0   0   0 ]
– The chain is irreducible and aperiodic, so the main result applies
– I − P + 𝟙 =
  [2  0   1    1 ]
  [1  2  1/2  1/2]
  [1  1   2    0 ]
  [0  1   1    2 ]
  in R “IP1=rbind(c(2,0,1,1),c(1,2,1/2,1/2),c(1,1,2,0),c(0,1,1,2))”.
– (I − P + 𝟙)^{−1} = (1/14) ×
  [ 6.5   3  −2  −4]
  [−3.5   7   0   0]
  [−1.5  −5   8   2]
  [ 2.5  −1  −4   6]
  in R “solve(IP1)”
– 𝟏^T (I − P + 𝟙)^{−1} = [4/14, 4/14, 2/14, 4/14] = [2/7, 2/7, 1/7, 2/7], in R “c(1,1,1,1) %*% solve(IP1)”

– On the other hand, the rows of P^n converge to [4/14, 4/14, 2/14, 4/14]:
  P^15 = (1/14) ×
  [3.9375  5.2500  1.7500  3.0625]
  [3.0625  3.9375  2.6250  4.3750]
  [3.5000  2.6250  2.6250  5.2500]
  [5.2500  3.5000  1.3125  3.9375]
  P^30 = (1/14) ×
  [3.84  4.05  2.10  4.01]
  [4.02  3.84  2.02  4.12]
  [4.18  3.86  1.91  4.05]
  [4.05  4.18  1.93  3.84]
  P^60 = (1/14) ×
  [4.00  4.00  2.00  4.00]
  [4.00  4.00  2.00  4.00]
  [4.00  4.00  2.00  4.00]
  [4.00  4.00  2.00  4.00]
Summary

 Stochastic Processes and Markov Property
 Markov Chains
 Chapman-Kolmogorov Equations
 Classification of States
 Invariant Measures, Time Averages, Limiting Probabilities
