Section 4.9: Markov Chains

November 21, 2010

Outline

1 Stochastic Matrix
    First Example
    Stochastic Matrix
    The Steady State Vector

2 Solution Using Powers of a Matrix
    Diagonalization
    The Steady State Vector

First Example of a Stochastic Matrix

Consider the following model of population movement between a city and its suburbs: each year 5% of city dwellers move to the suburbs and 3% of suburbanites move to the city. If in 2001 58.2% of the population lived in the city and 41.8% lived in the suburbs, what is the population distribution 20 years later?
Let
$$M = \begin{pmatrix} 0.95 & 0.03 \\ 0.05 & 0.97 \end{pmatrix}.$$

How is this a matrix problem?

Let
$$M = \begin{pmatrix} 0.95 & 0.03 \\ 0.05 & 0.97 \end{pmatrix}$$
and use the vector
$$x_0 = \begin{pmatrix} 0.582 \\ 0.418 \end{pmatrix}$$
to represent the population distribution in 2001.
 
In this case,
$$x_1 = M x_0 = \begin{pmatrix} 0.565 \\ 0.435 \end{pmatrix}$$
gives the population distribution in 2002.

In general, $x_n = M^n x_0$ gives the population distribution $n$ years after 2001.
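
To see the answer to the 20-year question numerically, one can simply apply $M$ twenty times to $x_0$. The sketch below is an illustration in Python with NumPy (not part of the original slides):

```python
import numpy as np

# Transition matrix: each column is a stochastic vector (sums to 1).
M = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# Population distribution in 2001: 58.2% city, 41.8% suburbs.
x0 = np.array([0.582, 0.418])

# x_n = M^n x_0 gives the distribution n years after 2001.
x20 = np.linalg.matrix_power(M, 20) @ x0
print(x20)  # approximately [0.414, 0.586]
```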


Definition

A stochastic vector is one whose entries lie in the interval [0, 1] and sum to 1. The idea is that it can be interpreted as a vector of probabilities.

A stochastic matrix is a square matrix whose columns are all stochastic vectors.
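
As a quick sanity check on the definition, one can test the two conditions numerically. A minimal sketch in Python with NumPy (the function name is just for illustration):

```python
import numpy as np

def is_stochastic_matrix(A, tol=1e-12):
    """Square, entries in [0, 1], and every column sums to 1."""
    A = np.asarray(A, dtype=float)
    return (A.ndim == 2
            and A.shape[0] == A.shape[1]
            and np.all(A >= -tol) and np.all(A <= 1 + tol)
            and np.allclose(A.sum(axis=0), 1.0))

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])
print(is_stochastic_matrix(M))  # True
```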


The Steady State Vector

The steady state vector $x$ satisfies the equation $Mx = x$. That is, it is an eigenvector for the eigenvalue $\lambda = 1$.

Why is $\lambda = 1$ always an eigenvalue of $M$?

Because $M^T$ has the property that every row sums to 1, it follows that $\lambda = 1$ is an eigenvalue of $M^T$ corresponding to the eigenvector
$$v = \begin{pmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{pmatrix}.$$
But $M$ and $M^T$ have the same eigenvalues, because $\det(M - \lambda I) = \det\big((M - \lambda I)^T\big)$, so 1 is an eigenvalue of $M$.
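
This argument can be confirmed numerically: the columns of $M$ sum to 1, and $M$ and $M^T$ share their eigenvalues, one of which is 1. A minimal check in Python with NumPy (not part of the slides):

```python
import numpy as np

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])

print(M.sum(axis=0))           # [1. 1.] -- each column sums to 1
print(np.linalg.eigvals(M))    # contains 1.0 (and 0.92)
print(np.linalg.eigvals(M.T))  # same eigenvalues as M
```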


Population Distribution Example

For
$$M = \begin{pmatrix} 0.95 & 0.03 \\ 0.05 & 0.97 \end{pmatrix},$$
the eigenspace for $\lambda = 1$ is the null space of
$$M - I = \begin{pmatrix} -0.05 & 0.03 \\ 0.05 & -0.03 \end{pmatrix},$$
which is spanned by the vector $\begin{pmatrix} 3/5 \\ 1 \end{pmatrix}$. The only stochastic vector in this eigenspace is $\begin{pmatrix} 0.375 \\ 0.625 \end{pmatrix}$, so this is the steady state vector for this population distribution. That is, over the long term the population will settle into 37.5% city dwellers and 62.5% suburbanites.
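
The same steady state vector can be found numerically by taking an eigenvector for the eigenvalue $\lambda = 1$ and rescaling it so that its entries sum to 1. A minimal sketch in Python with NumPy (not the method used in the slides, which find the null space by hand):

```python
import numpy as np

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# Pick the eigenvector whose eigenvalue is (closest to) 1.
eigvals, eigvecs = np.linalg.eig(M)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))]

# Rescale so the entries sum to 1: the unique stochastic eigenvector.
steady = v / v.sum()
print(steady)  # approximately [0.375, 0.625]
```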


Diagonalization to find $M^k$

Using Mathematica, for
$$M = \begin{pmatrix} 0.95 & 0.03 \\ 0.05 & 0.97 \end{pmatrix},$$
the eigenvalues are $\lambda = 1$ and $\lambda = 0.92$, and the corresponding eigenvectors are
$$v_1 = \begin{pmatrix} -0.514496 \\ -0.857493 \end{pmatrix} \quad \text{and} \quad v_2 = \begin{pmatrix} -0.707107 \\ 0.707107 \end{pmatrix},$$
so we form
$$P = \begin{pmatrix} -0.514496 & -0.707107 \\ -0.857493 & 0.707107 \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} 1 & 0 \\ 0 & 0.92 \end{pmatrix}.$$


Diagonalization to find $M^k$, cont’d

With
$$M = \begin{pmatrix} 0.95 & 0.03 \\ 0.05 & 0.97 \end{pmatrix}$$
and
$$P = \begin{pmatrix} -0.514496 & -0.707107 \\ -0.857493 & 0.707107 \end{pmatrix} \quad \text{and} \quad D = \begin{pmatrix} 1 & 0 \\ 0 & 0.92 \end{pmatrix},$$
we can write
$$M^k = P \begin{pmatrix} 1 & 0 \\ 0 & (0.92)^k \end{pmatrix} P^{-1}.$$
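
As a rough numerical check of this factorization, one can recompute the eigendecomposition, form $P D^k P^{-1}$, and compare it with a direct matrix power. A minimal sketch in Python with NumPy (the eigenvectors are recomputed rather than copied from the slide, so their signs and scaling may differ):

```python
import numpy as np

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])

# Diagonalize M: columns of P are eigenvectors, eigvals fill the diagonal of D.
eigvals, P = np.linalg.eig(M)

k = 20
Mk = P @ np.diag(eigvals ** k) @ np.linalg.inv(P)      # P D^k P^{-1}
print(np.allclose(Mk, np.linalg.matrix_power(M, k)))   # True
```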


Steady state vector

With
$$M^k = P \begin{pmatrix} 1 & 0 \\ 0 & (0.92)^k \end{pmatrix} P^{-1},$$
it follows that
$$\lim_{k \to \infty} M^k = P \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} P^{-1} = \begin{pmatrix} 0.375 & 0.375 \\ 0.625 & 0.625 \end{pmatrix}.$$

So for any initial stochastic vector $v$, we will have
$$\lim_{k \to \infty} M^k v = \begin{pmatrix} 0.375 & 0.375 \\ 0.625 & 0.625 \end{pmatrix} v = \begin{pmatrix} 0.375 v_1 + 0.375 v_2 \\ 0.625 v_1 + 0.625 v_2 \end{pmatrix} = \begin{pmatrix} 0.375 \\ 0.625 \end{pmatrix},$$
since $v_1 + v_2 = 1$.
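
To see this convergence numerically, raise $M$ to a large power and apply it to any stochastic starting vector. A minimal sketch in Python with NumPy:

```python
import numpy as np

M = np.array([[0.95, 0.03],
              [0.05, 0.97]])

M_big = np.linalg.matrix_power(M, 200)
print(M_big)              # both columns approximately [0.375, 0.625]

v = np.array([0.9, 0.1])  # an arbitrary stochastic starting vector
print(M_big @ v)          # approximately [0.375, 0.625]
```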
