# Elec 428: Solving Discrete-Time Markov Chains

When the number of states in an ergodic, discrete-time Markov chain is finite, we can solve for the steady-state probability vector in several ways. Although the computations could be performed by hand for small problems, Matlab provides simple and efficient operators for finding the solution and can be used even when the number of states is relatively large. Define the state probability vector of a discrete-time Markov chain with $m$ states after the $n$th transition, given some initial state probability vector $p(0)$, to be

$$p(n) = (p_1(n) \;\; p_2(n) \;\; \cdots \;\; p_m(n))$$

where $p_i(n)$ is the probability that the system is in state $i$ after transition $n$, given $p(0)$. Recall that $P = [p_{jk}]$ is the single-step transition probability matrix: $p_{jk}$ is the probability that the next state will be $k$ given that the current state is $j$. If we know the state probability vector at time $n$, we can compute the $i$th component of the vector at time $n+1$:

$$p_i(n+1) = p_1(n)\,p_{1i} + p_2(n)\,p_{2i} + \cdots + p_m(n)\,p_{mi}$$

The set of $m$ such equations can be summarized as $p(n+1) = p(n)P$. These are called the forward Chapman-Kolmogorov equations.

As an example, consider the three-state Markov chain described by the following state transition diagram (the figure is omitted here; its arcs carry probability .1 on 1→1, .7 on 1→2, .2 on 1→3, .4 on 2→1, .6 on 2→3, .5 on 3→2, and .5 on 3→3). The single-step transition probability matrix is

$$P = \begin{pmatrix} .1 & .7 & .2 \\ .4 & 0 & .6 \\ 0 & .5 & .5 \end{pmatrix}$$

Starting from the initial state probability vector $p(0) = (.3\;\;.4\;\;.3)$, the first two applications of the Chapman-Kolmogorov equations give

$$\begin{aligned}
p_1(1) &= .3(.1) + .4(.4) + .3(0) = .19 \\
p_2(1) &= .3(.7) + .4(0) + .3(.5) = .36 \\
p_3(1) &= .3(.2) + .4(.6) + .3(.5) = .45 \\
p_1(2) &= .19(.1) + .36(.4) + .45(0) = .163 \\
p_2(2) &= .19(.7) + .36(0) + .45(.5) = .358 \\
p_3(2) &= .19(.2) + .36(.6) + .45(.5) = .479
\end{aligned}$$

This is clearly an ergodic Markov chain. If the initial state probability vector is $p(0) = (.3\;\;.4\;\;.3)$, we can compute the evolution of the state probability vector for as many transitions as we want by repeated application of the Chapman-Kolmogorov equations.
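For readers without Matlab, the first two updates can also be checked with a minimal pure-Python sketch (the helper name `step` is our own, not from the notes):

```python
# One Chapman-Kolmogorov step, p(n+1) = p(n) P, for the example chain.
P = [[0.1, 0.7, 0.2],
     [0.4, 0.0, 0.6],
     [0.0, 0.5, 0.5]]

def step(p, P):
    """Return p(n+1) = p(n) P as a plain list."""
    m = len(P)
    return [sum(p[j] * P[j][i] for j in range(m)) for i in range(m)]

p0 = [0.3, 0.4, 0.3]
p1 = step(p0, P)
p2 = step(p1, P)
print([round(x, 3) for x in p1])   # [0.19, 0.36, 0.45]
print([round(x, 3) for x in p2])   # [0.163, 0.358, 0.479]
```

The sum over $j$ inside the list comprehension is exactly the equation $p_i(n+1) = \sum_j p_j(n)\,p_{ji}$ written out above.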


The resulting sequence of state probability vectors is

    p(0)  = (.3     .4     .3)
    p(1)  = (.19    .36    .45)
    p(2)  = (.163   .358   .479)
    p(3)  = (.1595  .3536  .4869)
    p(4)  = (.1574  .3551  .4875)
    p(5)  = (.1578  .3539  .4883)
    p(6)  = (.1573  .3546  .4881)
    p(7)  = (.1576  .3542  .4883)
    p(8)  = (.1574  .3544  .4881)
    p(9)  = (.1575  .3543  .4882)
    p(10) = (.1575  .3544  .4882)
    p(11) = (.1575  .3543  .4882)
      ⋮
    p(20) = (.1575  .3543  .4882)

As expected, the state probability vectors are converging to the steady-state probability vector $\pi = (.1575\;\;.3543\;\;.4882)$. After the 11th transition, the state probabilities remain unchanged to four decimal places.

The Ergodicity Theorem tells us not only that the state probability vector will converge, but that the steady-state probability vector is unique and does not depend on the initial state. If we start with $p(0) = (1\;\;0\;\;0)$ and apply the same procedure, we approach the same result:

    p(0)  = (1      0      0)
    p(1)  = (.1     .7     .2)
    p(2)  = (.29    .17    .54)
    p(3)  = (.097   .473   .430)
    p(4)  = (.1989  .2829  .5182)
    p(5)  = (.1331  .3983  .4686)
    p(6)  = (.1726  .3274  .4999)
    p(7)  = (.1482  .3708  .4810)
    p(8)  = (.1631  .3442  .4926)
    p(9)  = (.1540  .3605  .4855)
    p(10) = (.1596  .3505  .4898)
    p(11) = (.1562  .3566  .4872)
      ⋮
    p(20) = (.1575  .3543  .4882)

Convergence is a little slower than for the previous initial state probability vector because the new initial vector is "farther" from steady state.
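The two convergence runs above can be reproduced in a few lines; a pure-Python sketch (the helper `iterate` is ours, not from the notes), applying the Chapman-Kolmogorov update 20 times from each starting vector:

```python
# Iterating p <- p P from two different initial vectors; both sequences
# approach the same steady-state vector, as the Ergodicity Theorem says.
P = [[0.1, 0.7, 0.2],
     [0.4, 0.0, 0.6],
     [0.0, 0.5, 0.5]]

def iterate(p, n):
    """Apply the Chapman-Kolmogorov update p <- p P a total of n times."""
    m = len(P)
    for _ in range(n):
        p = [sum(p[j] * P[j][i] for j in range(m)) for i in range(m)]
    return p

a = iterate([0.3, 0.4, 0.3], 20)
b = iterate([1.0, 0.0, 0.0], 20)
print([round(x, 4) for x in a])   # [0.1575, 0.3543, 0.4882]
print([round(x, 4) for x in b])   # [0.1575, 0.3543, 0.4882]
```

To four decimal places both runs agree with the tabulated values at $n = 20$.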

Performing this computation in Matlab is straightforward, if a trifle tedious. First enter the single-step transition probability matrix and the initial state probability vector, and then apply $p(n+1) = p(n)P$ for successive values of $n$ starting at 0.

    >> P = [.1 .7 .2; .4 0 .6; 0 .5 .5]
    P =
        0.1000    0.7000    0.2000
        0.4000         0    0.6000
             0    0.5000    0.5000
    >> p0 = [.3 .4 .3]
    p0 =
        0.3000    0.4000    0.3000
    >> p1 = p0*P
    p1 =
        0.1900    0.3600    0.4500
    >> p2 = p1*P
    p2 =
        0.1630    0.3580    0.4790

Since the initial state probability vector is irrelevant in determining the steady-state probability vector, we can avoid using it altogether (especially if we have something like Matlab to manipulate matrices). For an arbitrary $p(0)$, the sequence of computations outlined above can be written as

$$p(1) = p(0)P, \qquad p(2) = p(1)P = p(0)P^2, \qquad \ldots, \qquad p(n) = p(0)P^n$$

$P^n$ is the $n$-step transition probability matrix. Let's look at this matrix for a few values of $n$:

$$P^2 = \begin{pmatrix} .29 & .17 & .54 \\ .04 & .58 & .38 \\ .2 & .25 & .55 \end{pmatrix} \qquad
P^3 = \begin{pmatrix} .097 & .473 & .430 \\ .236 & .218 & .546 \\ .120 & .415 & .465 \end{pmatrix}$$

$$P^4 = \begin{pmatrix} .1989 & .2829 & .5182 \\ .1108 & .4382 & .4510 \\ .1780 & .3165 & .5055 \end{pmatrix} \qquad
P^{21} = \begin{pmatrix} .1575 & .3543 & .4882 \\ .1575 & .3543 & .4882 \\ .1575 & .3543 & .4882 \end{pmatrix}$$

As you can see, the rows of $P^n$ converge to the steady-state probability vector $\pi$. It is simple to see why this must be the case. We know that $p(n) = p(0)P^n$, so

$$\pi = \lim_{n\to\infty} p(n) = \lim_{n\to\infty} p(0)P^n = p(0)\lim_{n\to\infty} P^n.$$

Let $\lim_{n\to\infty} P^n = P^\infty$. Then $P^\infty$ must satisfy $\pi = p(0)P^\infty$ for every $p(0)$. Writing out the $i$th component, for $1 \le i \le m$ and arbitrary $p(0)$,

$$\pi_i = p_1(0)P^\infty_{1i} + p_2(0)P^\infty_{2i} + \cdots + p_m(0)P^\infty_{mi}$$
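The $n$-step matrices shown above can be checked numerically as well; a pure-Python sketch (helper names `matmul` and `matpow` are ours, not from the notes) that raises $P$ to the 20th power and prints its rows:

```python
# Rows of P^n converge to the steady-state vector; check at n = 20.
P = [[0.1, 0.7, 0.2],
     [0.4, 0.0, 0.6],
     [0.0, 0.5, 0.5]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    m = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

def matpow(A, n):
    """A raised to the n-th power (n >= 1) by repeated multiplication."""
    R = A
    for _ in range(n - 1):
        R = matmul(R, A)
    return R

P20 = matpow(P, 20)
for row in P20:
    print([round(x, 4) for x in row])   # every row: [0.1575, 0.3543, 0.4882]
```

By $n = 20$ all three rows agree to four decimal places, which is the behavior the limit argument predicts.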

Since this must be true for all $p(0)$, all of the entries in the $i$th column of the $P^\infty$ matrix must be equal to $\pi_i$. To see why, use the $m$ vectors $(1\;0\;\cdots\;0)$, $(0\;1\;\cdots\;0)$, $\ldots$, $(0\;0\;\cdots\;1)$ in the equation for $\pi_i$ to find that $P^\infty_{1i} = P^\infty_{2i} = \cdots = P^\infty_{mi} = \pi_i$. Another way to arrive at the same conclusion is to note that the only way the equation for $\pi_i$ can be true for arbitrary $p(0)$ is for all of the $P^\infty_{ji}$s, $1 \le j \le m$, to be equal; factoring this common value out of every product term on the left-hand side of the equation leaves $p_1(0) + p_2(0) + \cdots + p_m(0) = 1$.

This means that a second way to compute the steady-state probability vector is to raise $P$ to higher and higher powers until all of the entries in each column are identical to the other entries in the same column. Matlab has an exponentiation operator (^) which can be applied to matrices as well as scalars.

    >> P^20
    ans =
        0.1575    0.3543    0.4882
        0.1575    0.3543    0.4882
        0.1575    0.3543    0.4882

Choose a value that you think might be large enough but no larger than necessary. For small matrices, either of these methods is fine, and the second is undoubtedly quicker. However, once the size of the matrix starts getting large, the brute-force method of raising the matrix to larger and larger powers becomes computationally unfriendly.

A third method for determining the steady-state probabilities is to treat the problem as one of solving the set of linear equations represented by $\pi P = \pi$. However, $P$ is a stochastic matrix (each of its rows sums to 1) and hence $P - I$ is singular, so we need one other independent equation in the $\pi_i$s to go with any $m - 1$ of the $m$ equations from $\pi P = \pi$. Fortunately, we always have one: the normalization equation $\sum_{i=1}^m \pi_i = 1$.

To implement this method, first replace any one column of $P$, column $i$ (for instance, the last column, representing the equation for $\pi_m$), with 1s, corresponding to the normalization equation. Then create the $m \times m$ identity matrix with the $i$th diagonal element replaced by 0:

    >> P1 = P;
    >> P1(:,3) = [1 1 1]'
    P1 =
        0.1000    0.7000    1.0000
        0.4000         0    1.0000
             0    0.5000    1.0000
    >> J = diag([1 1 0],0)
    J =
         1     0     0
         0     1     0
         0     0     0

Of course, you could just enter P1 directly.
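The same construction can be sketched in pure Python. Solving $\pi(P1 - J) = (0\;0\;1)$ is equivalent to solving the transposed system $(P1 - J)^T \pi^T = (0\;0\;1)^T$; the sketch below does this with a small Gauss-Jordan elimination (all helper code is ours, not from the notes):

```python
# Linear-equation method: A = P1 - J, where P1 is P with its last column
# replaced by ones (the normalization equation) and J is the identity
# with its last diagonal entry zeroed; then solve pi * A = (0, 0, 1).
P = [[0.1, 0.7, 0.2],
     [0.4, 0.0, 0.6],
     [0.0, 0.5, 0.5]]
m = len(P)

A = [row[:] for row in P]
for i in range(m):
    A[i][m - 1] = 1.0        # P1: last column of ones
    if i < m - 1:
        A[i][i] -= 1.0       # subtract J (identity except the last diagonal entry)

# Augmented matrix for A^T x = e_m, solved by Gauss-Jordan elimination.
M = [[A[j][i] for j in range(m)] + [1.0 if i == m - 1 else 0.0]
     for i in range(m)]
for col in range(m):
    piv = max(range(col, m), key=lambda r: abs(M[r][col]))   # partial pivoting
    M[col], M[piv] = M[piv], M[col]
    for r in range(m):
        if r != col:
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
pi = [M[i][m] / M[i][i] for i in range(m)]
print([round(x, 4) for x in pi])   # [0.1575, 0.3543, 0.4882]
```

This is what Matlab's `/` operator does for us in the session that follows; writing it out once makes clear why the replaced column and the zeroed diagonal entry together encode the normalization equation.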

>> ! [v.i) is the eigenvalue corresponding right eigenvector v(:. for the same eigenvalues. another way of finding " is ! the left eigenvector corresponding to an eigenvalue of 1.1575 0.7767 0.d] = eig(P’) v = 0.7831 ! ! d = -0. One.2526 0.5683 0. ! for a 1 in ! >> pie = [0 0 1]/(P1-J) 0.i).2123 0 0 0 1.3)) pie = 0. J is the diagonal matrix with 1s along the diagonal except for a 0 in the i th diagonal element.3543 0. d(i.3)/sum(v(:.d] = eig(A) to find returns a square matrix v whose columns are the right eigenvectors of the square matrix A.6123 0 0 0 0. with eigenvalue 1. plus a diagonal to the ! matrix d of the eigenvalues ! ! of A.1575 0.1703 0. We cannot predict how Matlab will scale the eigenvectors (at least. since the steady-state probabilities ! must sum to one. we can normalize each ! element of the eigenvector corresponding to the eigenvalue 1 by dividing by the sum of the elements in the eigenvector to obtain the steady-state probability vector.4882 keywords: discrete-time Markov chain steady-state probability vector normalization equation single step transition probability matrix Matlab Ergodicity Theorem Page 5 of 5 . However.3571 -0.3543 0. and the vector on the left-hand side is all 0's except !the i th component. not right. This is easy to get around.4574 -0. [v. In Matlab.6064 -0. for any non-zero constant !a . The second minor difficulty is that if " is an eigenvector. then a" is.8144 0.0000 >> pie = v(:. too. eigenvectors of T P . we want left. by noting that a left eigenvector of P is a right eigenvector of P .Elec 428 Solving Discrete Time Markov Chains ! ! Solve the system of linear equations " ( P1 # J ) = [0 0 K 0 1 0 K 0] where P1 is P with the i th column replaced by 1s. the transpose of P .4882 ! Yet another !way to solve for the steady state probability vector is based on the realization that " P = " implies " is a left eigenvector of P . There are two problems. I can't). Hence. That is.