
MARKOV CHAIN + DECISION PROCESS

MARKOV DECISION PROCESS


UNIT 3
What is a Markov Chain?

A Markov chain is a stochastic model. The idea behind it is to assume that Xt captures all the relevant information for predicting the next state Xt+1.

Definition: A stochastic process {Xt, t = 0, 1, . . .} with discrete state space Xt ∈ {0, 1, . . .} is a Markov chain if it has the Markov property.

Our main focus is on discrete-time Markov chains.

In this case we can write the joint probability distribution as:

P(X1, X2, . . . , Xt) = P(X1) × P(X2|X1) × · · · × P(Xt|Xt−1)

This model is called a Markov chain.

If the probability P(Xt|Xt−1) is independent of time, then we call the model homogeneous, stationary, or time-invariant, which is formally written as P(Xt+s|Xt−1+s) = P(Xt|Xt−1). In particular, this means P(Xt+1|Xt) = P(X1|X0).

Discrete time Markov Chains


• The state changes at certain discrete time instants
• At each time step n the Markov chain has a state denoted by Xn, which belongs to a finite set S of possible states (the state space)

How can we model a Markov Chain?

• It is described in terms of its transition probabilities pij

Markov property

• There is no memory
• The probability law of the next state Xn+1 depends on the past only through the value of the present state Xn; it does not matter what happened before Xn

– We do not care about Xn−1 or Xn−2, because only the present state matters for what happens next:

P(Xn+1 = j|Xn = i, Xn−1 = in−1, . . . , X0 = i0) = P(Xn+1 = j|Xn = i) = pij

A stochastic process is a Markov model if the conditional probability distribution of future states depends only on the current state, and not on previous ones. The only probability model we need to keep is this one:

• pij = P(Xn+1 = j|Xn = i); in words, whenever the state happens to be i, there is a probability pij that the next state is equal to j

How to encode a Markov Chain?

• A Markov chain model can be encoded in a transition probability matrix, also called a stochastic matrix.

        [ p1,1  p1,2  p1,3  · · ·  p1,n ]
        [ p2,1  p2,2  p2,3  · · ·  p2,n ]
    P = [   .     .     .     .      .  ]
        [ pn,1  pn,2  pn,3  · · ·  pn,n ]

• Properties of this square matrix P:

– Each element of P is nonnegative, that is, pi,j ≥ 0
– Each row of P sums to one, that is, Σ_{j=1..n} pi,j = 1

Observation: each row can be seen as a probability mass function across a set of n states.

The matrix is called a right transition matrix or right stochastic matrix (each row sums to 1). This matrix denotes the one-step transition probabilities.

Definition: P_{i,j} = P(Xn+1 = j|Xn = i) defines the one-step transition matrix P. Thus P² is the 2-step transition matrix:

P²_{i,j} = P(Xn+2 = j|Xn = i) = Σ_k P(Xn+1 = k|Xn = i) P(Xn+2 = j|Xn+1 = k)

The matrix multiplication identity P^(n+m) = P^n P^m corresponds to the Chapman–Kolmogorov equations, also written as:

P^(n+m)_{i,j} = Σ_k P^n_{i,k} P^m_{k,j},   ∀ n, m ≥ 0, ∀ i, j ≥ 0

That is, the probability of getting from i to j in n + m transitions (steps) is the probability of getting from i to k in n transitions, and then from k to j in m transitions, summed over all k.

It is also helpful to represent the transition matrix as a directed graph, where nodes represent states and arrows possible transitions (when the probability is nonzero), with edge weights equal to the transition probabilities. The usual name for this visual representation is state transition diagram.

Example
• Suppose the following transition matrix Pa

 
     [ 1    0    0    0   ]
Pa = [ 0.3  0.4  0.3  0   ]
     [ 0    0.3  0.4  0.3 ]
     [ 0    0    0    1   ]

• It is easy to generate a graph from the transition matrix (see Figure 1)

[Figure 1: Markov chain graph for the transition matrix Pa]
Reading the Transition Matrix P

The properties of the transition matrix Pa are clear. However, take a look at the second row. If you consider the information provided by the second row, another perspective can be obtained by drawing the transitions out of S2 (see Figure 2).

[Figure 2: Markov chain graph for the transition matrix Pa with a focus on S2]

In other words, what we have is the probability distribution for moving from the current state (S2) to the next state in the following transition. That is, given that I am in S2:

• P(Xn+1 = j|Xn = 2) = {0.3, 0.4, 0.3}

From this point we can ask questions such as: what is the probability that, if we start in S2, we move through the following sequence?

• S2 → S2 → S2 → S3 → S4

How do we write and solve this probability? Because we have the Markov property, the solution is

P (X1 = 2, X2 = 2, X3 = 3, X4 = 4|X0 = 2) = p2,2 p2,2 p2,3 p3,4 = 0.4×0.4×0.3×0.3

Thus:

P (X1 = 2, X2 = 2, X3 = 3, X4 = 4|X0 = 2) = (0.4)2 (0.3)2
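The product above can be checked with a few lines of code. A minimal Python sketch (not part of the original notes), using the matrix Pa from this example and indexing states S1..S4 as 0..3:

```python
# Transition matrix Pa from the example; states S1..S4 are indices 0..3.
Pa = [
    [1.0, 0.0, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.0, 1.0],
]

def path_probability(P, path):
    """Probability of following `path`, conditioned on starting at path[0].

    By the Markov property this is just the product of the one-step
    transition probabilities along the path; no other history enters.
    """
    prob = 1.0
    for i, j in zip(path, path[1:]):
        prob *= P[i][j]
    return prob

# S2 -> S2 -> S2 -> S3 -> S4 is the index path 1, 1, 1, 2, 3
p = path_probability(Pa, [1, 1, 1, 2, 3])
print(p)  # 0.4 * 0.4 * 0.3 * 0.3 = 0.0144 (up to float rounding)
```

The helper multiplies one-step probabilities along consecutive pairs of the path, which is exactly what the Markov property licenses.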


Classification of States

• Definitions: A state j is accessible from i if P^k_{ij} > 0 for some k ≥ 0
• States i and j communicate if they are accessible from each other
• An equivalence relation divides a set into disjoint classes of equivalent states, called communication classes
• A Markov chain is irreducible if all the states communicate with each other (i.e. if there is only one communication class)
• The communication class containing i is absorbing if the class can never be left
• State i has period d if P^n_{i,i} = 0 when n is not a multiple of d, and d is the greatest integer with this property. If d = 1 then i is aperiodic.
• State i is recurrent if P(re-enter i|X0 = i) = 1
  – If i is not recurrent, then it is transient

The usual notation to represent that state j is accessible from state i is i → j, meaning P^n_{i,j} > 0 for some n. In words, it is possible to reach state j from state i in some number of steps n. Thus, if state j is not accessible from state i, then P^n_{i,j} = 0, ∀ n ≥ 0, so a chain starting from state i will never visit state j.

You should realize that if a Markov chain has m states, then j is accessible from state i iff (P + P² + · · · + P^m)_{i,j} > 0.

The notation used to represent communicating states is ↔. That is, if state i is accessible from state j, and state j is accessible from state i, then we say that states i and j communicate. Using this definition it is easy to see how we can divide states into classes. This idea was already seen and basically says that within each class all states communicate with each other, but no pair of states in different classes communicates.

With these concepts the identification of an irreducible Markov chain is straightforward: the chain is irreducible if there is only one class. From a computational point of view, irreducibility means that all entries of the matrix (I + P + P² + · · · + P^m) are positive.
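The computational criterion above can be sketched in code. The following illustrative Python version uses boolean reachability instead of numeric powers (the matrix Pa from the earlier example is reused as test data):

```python
def reachability(P):
    """reach[i][j] is True iff state j is accessible from state i.

    Boolean analogue of checking (I + P + P^2 + ... + P^m) > 0:
    start from one-step accessibility (plus staying put, k = 0) and
    extend paths by one step per iteration, m times.
    """
    m = len(P)
    reach = [[i == j or P[i][j] > 0 for j in range(m)] for i in range(m)]
    for _ in range(m):
        reach = [[any(reach[i][k] and (k == j or P[k][j] > 0)
                      for k in range(m))
                  for j in range(m)] for i in range(m)]
    return reach

def is_irreducible(P):
    # irreducible iff every state is accessible from every state
    return all(all(row) for row in reachability(P))

Pa = [[1.0, 0.0, 0.0, 0.0],
      [0.3, 0.4, 0.3, 0.0],
      [0.0, 0.3, 0.4, 0.3],
      [0.0, 0.0, 0.0, 1.0]]

print(is_irreducible(Pa))  # False: S1 and S4 can never be left
```

Working with booleans avoids any floating-point concerns: only the pattern of nonzero entries matters for accessibility.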
For any state i, define fi = P(ever re-enter i|X0 = i). If fi = 1, state i is a recurrent state. If this condition does not hold, i.e. fi < 1, then state i is a transient state. Another way to see these conditions is that a Markov chain visits a recurrent state infinitely many times, or not at all.

Expected number of visits to state i (optional reading)

The expected number of visits to state i can be a useful measure when it is important to count the number of times, including time 0, that the chain is at state i. To derive it, note that at every visit to state i, the probability of never visiting state i again is 1 − fi. Thus,

P(exactly n visits to i|X0 = i) = fi^(n−1) (1 − fi)

This expression is a geometric random variable for the number of visits to state i. Once the distribution is identified, the expected value follows directly.

E[# of visits to i|X0 = i] = 1/(1 − fi),  and state i is recurrent exactly when Σ_{n=1..∞} P^n_{i,i} = ∞
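A quick numeric sanity check of the geometric mean formula, assuming a hypothetical return probability f_i = 0.3 for a transient state:

```python
f = 0.3  # hypothetical return probability f_i for a transient state

# Truncated mean of the geometric distribution P(n visits) = f^(n-1) (1 - f);
# the tail beyond n = 200 is negligible for f = 0.3.
expected = sum(n * f ** (n - 1) * (1 - f) for n in range(1, 200))

print(expected, 1 / (1 - f))  # the truncated sum matches 1/(1 - f_i)
```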
Stationary Distributions

π = {πi, i = 0, 1, . . .} is a stationary distribution for P if πj = Σ_i πi Pi,j, with πi ≥ 0 and Σ_i πi = 1.

This formulation is easy to follow in matrix notation: πj = Σ_i πi Pi,j is π = πP, where π is a row vector.

Theorem: An irreducible, aperiodic, positive recurrent Markov chain has a unique stationary distribution, which is also the limiting distribution: πj = lim_{n→∞} P^n_{i,j}.

Such Markov chains are called ergodic.
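The fixed-point equation π = πP suggests a simple way to approximate π numerically: just iterate the update. A sketch with a hypothetical 2-state ergodic chain (not one of the examples in these notes):

```python
def step(pi, P):
    """One update of the row vector: (pi P)_j = sum_i pi_i P[i][j]."""
    n = len(P)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

# A hypothetical 2-state ergodic chain
P = [[0.9, 0.1],
     [0.5, 0.5]]

pi = [1.0, 0.0]        # any starting distribution works for an ergodic chain
for _ in range(200):   # iterate pi <- pi P until it settles
    pi = step(pi, P)

print(pi)  # converges to the unique stationary distribution (5/6, 1/6)
```

Because the chain is ergodic, the iteration converges to the same π from any starting distribution, which is exactly the "limiting distribution" statement of the theorem.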

Spectral Analysis

Eigenvalues and eigenvectors are the tools needed to apply spectral analysis to the stochastic matrix. They can provide useful information related to the periodicity of the closed classes. Many have already seen spectral analysis in other disciplines, such as statistics and other computational fields.

• How can we compute eigenvalues and eigenvectors? The basic approach is this one. An eigenvalue, denoted by lambda, λ (a scalar), is a value that, together with a non-zero vector x, satisfies the equation:

Ax = λx ⇒ (A − λI)x = 0

To find solutions other than x = 0, we require |A − λI| = 0, which is called the characteristic equation. For an n × n matrix there are n roots, and thus n eigenvalues.

Using a software system such as R, it is straightforward to obtain the eigenvalues and eigenvectors of a stochastic matrix:

• The eigen() function computes both eigenvalues and eigenvectors
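One fact worth verifying by hand before reaching for eigen(): every right stochastic matrix has λ = 1 as an eigenvalue, because each row sums to 1, so the all-ones vector is a right eigenvector. A small Python check (the example matrix is arbitrary):

```python
# Any right stochastic matrix has eigenvalue 1 (example values below)
P = [[0.1, 0.5, 0.4, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [0.0, 0.8, 0.0, 0.2],
     [0.0, 0.0, 0.3, 0.7]]

ones = [1.0] * len(P)

# P @ ones computes the row sums, so P @ ones = 1 * ones,
# i.e. the all-ones vector is a right eigenvector for lambda = 1.
Pv = [sum(P[i][j] * ones[j] for j in range(len(P))) for i in range(len(P))]
print(Pv)  # each entry equals 1 (up to float rounding)
```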
Examples

Given the following Markov chain transition matrix examples, answer the following questions:

1. Draw the transition diagram (graph)


2. Compute the 2 step transition matrix
3. Compute the 5 step transition matrix
4. Compute the 10 step transition matrix
5. Solve the steady state (if possible)
6. Classify the Markov Chain

Example 1
 
     [ 0.1  0.5  0.4  0   ]
P1 = [ 0    0    0    1   ]
     [ 0    0.8  0    0.2 ]
     [ 0    0    0.3  0.7 ]

• First: graphical representation (see Figure 3)

[Figure 3: State transition diagram P1]

• Markov Chain Analysis Example for P1

Second: 2-step transition matrix for the P1 example

       [ 0.01  0.37  0.04  0.58 ]
P1^2 = [ 0     0     0.3   0.7  ]
       [ 0     0     0.06  0.94 ]
       [ 0     0.24  0.21  0.55 ]

Third: 5-step transition matrix for the P1 example

       [ 0  0.1885  0.1865  0.625  ]
P1^5 = [ 0  0.132   0.2001  0.6679 ]
       [ 0  0.1608  0.172   0.6672 ]
       [ 0  0.1601  0.2004  0.6396 ]

Fourth: 10-step transition matrix for the P1 example

        [ 0  0.1549  0.195   0.65   ]
P1^10 = [ 0  0.1565  0.1947  0.6488 ]
        [ 0  0.1557  0.1954  0.6489 ]
        [ 0  0.1557  0.1946  0.6496 ]

Fifth: solve the steady state for the P1 example

π^SteadyState_P1 = ( 0  0.156  0.195  0.649 )

Sixth: classify the Markov chain for the P1 example

• A) We check for recurrent classes

Table 1: Recurrent Classes
Class_Gr  states
1         {S2,S3,S4}

• B) We check for transient classes

Table 2: Transient Classes
Class_Gr  states
1         {S1}

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 3: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P1:

λ(P1) = ( 1 + 0i,  −0.15 + 0.466i,  −0.15 − 0.466i,  0.1 + 0i )

• Compute the modulus (in case of complex numbers):

|λ(P1)| = ( 1,  0.4899,  0.4899,  0.1 )

Conclusions

The Markov chain P1 is neither regular nor irreducible.

— End Example P1
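The n-step matrices reported above can be reproduced (up to rounding) by plain repeated matrix multiplication. A self-contained Python sketch using the reconstructed P1:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(P, steps):
    """n-step transition matrix P^steps via repeated multiplication."""
    result = P
    for _ in range(steps - 1):
        result = matmul(result, P)
    return result

P1 = [[0.1, 0.5, 0.4, 0.0],
      [0.0, 0.0, 0.0, 1.0],
      [0.0, 0.8, 0.0, 0.2],
      [0.0, 0.0, 0.3, 0.7]]

P1_10 = matpow(P1, 10)
print([round(x, 4) for x in P1_10[0]])
# every row of P1^10 is already close to the steady state (0, 0.156, 0.195, 0.649)
```

Note that each power is still a stochastic matrix: every row of P1^n sums to 1, which is a useful sanity check on any such computation.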

Example 2
 
     [ 0.5  0.4  0.1  0   ]
P2 = [ 0.3  0.3  0.4  0   ]
     [ 0    0    0.2  0.8 ]
     [ 0    0    0.6  0.4 ]

• First: graphical representation (see Figure 4)

[Figure 4: State transition diagram P2]

• Markov Chain Analysis Example for P2

Second: 2-step transition matrix for the P2 example

       [ 0.37  0.32  0.23  0.08 ]
P2^2 = [ 0.24  0.21  0.23  0.32 ]
       [ 0     0     0.52  0.48 ]
       [ 0     0     0.36  0.64 ]

Third: 5-step transition matrix for the P2 example

       [ 0.1625  0.1412  0.3332  0.3631 ]
P2^5 = [ 0.1059  0.0919  0.3709  0.4313 ]
       [ 0       0       0.4227  0.5773 ]
       [ 0       0       0.433   0.567  ]

Fourth: 10-step transition matrix for the P2 example

        [ 0.0414  0.0359  0.4046  0.5181 ]
P2^10 = [ 0.0269  0.0234  0.4129  0.5368 ]
        [ 0       0       0.4286  0.5714 ]
        [ 0       0       0.4285  0.5715 ]

Fifth: solve the steady state for the P2 example

π^SteadyState_P2 = ( 0  0  0.429  0.571 )

Sixth: classify the Markov chain for the P2 example

• A) We check for recurrent classes

Table 4: Recurrent Classes
Class_Gr  states
1         {S3,S4}

• B) We check for transient classes

Table 5: Transient Classes
Class_Gr  states
1         {S1,S2}

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 6: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P2:

λ(P2) = ( 1,  0.761,  −0.4,  0.039 )

Conclusions

The Markov chain P2 is an example of a semi-ergodic Markov chain: it is neither regular nor irreducible.

— End Example P2
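The recurrent/transient split found above follows from the communication classes. A small illustrative Python sketch (not the notes' method) that recovers them for P2 via two-way reachability:

```python
def reaches(P, i, j):
    """True iff state j is accessible from state i (DFS over positive-prob edges)."""
    seen, stack = {i}, [i]
    while stack:
        k = stack.pop()
        if k == j:
            return True
        for nxt, p in enumerate(P[k]):
            if p > 0 and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return False

def communicating_classes(P):
    n = len(P)
    classes, assigned = [], set()
    for i in range(n):
        if i in assigned:
            continue
        # states that are accessible from i AND from which i is accessible
        cls = {j for j in range(n) if reaches(P, i, j) and reaches(P, j, i)}
        classes.append(sorted(cls))
        assigned |= cls
    return classes

P2 = [[0.5, 0.4, 0.1, 0.0],
      [0.3, 0.3, 0.4, 0.0],
      [0.0, 0.0, 0.2, 0.8],
      [0.0, 0.0, 0.6, 0.4]]

print(communicating_classes(P2))  # [[0, 1], [2, 3]]: {S1,S2} and {S3,S4}
```

Here {S3,S4} can never be left, so it is the recurrent class, while {S1,S2} leaks into it and is transient.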

Example 3
 
     [ 0.5  0.4  0.1  0   ]
P3 = [ 0    1    0    0   ]
     [ 0    0    0.2  0.8 ]
     [ 0    0    0.6  0.4 ]

• First: graphical representation (see Figure 5)

[Figure 5: State transition diagram P3]

• Markov Chain Analysis Example for P3

Second: 2-step transition matrix for the P3 example

       [ 0.25  0.16  0.01  0    ]
P3^2 = [ 0     1     0     0    ]
       [ 0     0     0.04  0.64 ]
       [ 0     0     0.36  0.16 ]

Third: 5-step transition matrix for the P3 example

       [ 0.0312  0.0102  0       0      ]
P3^5 = [ 0       1       0       0      ]
       [ 0       0       3e−04   0.3277 ]
       [ 0       0       0.0778  0.0102 ]

Fourth: 10-step transition matrix for the P3 example

        [ 0.001  1e−04  0      0      ]
P3^10 = [ 0      1      0      0      ]
        [ 0      0      0      0.1074 ]
        [ 0      0      0.006  1e−04  ]

Fifth: solve the steady state for the P3 example

π^SteadyState_P3 = [ 0  0  0.429  0.571 ]
                   [ 0  1  0      0     ]

Sixth: classify the Markov chain for the P3 example

• A) We check for recurrent classes

Table 7: Recurrent Classes
Class_Gr  states
1         {S2}
2         {S3,S4}

• B) We check for transient classes

Table 8: Transient Classes
Class_Gr  states
1         {S1}

• C) We check for absorbing states

Table 9: Absorbing States
Class_Gr  states
1         {S2}

• D) We check for a regular and irreducible Markov chain

Table 10: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P3:

λ(P3) = ( 1,  1,  0.5,  −0.4 )

• We can see that the multiplicity of the root 1 is two

Conclusions

The Markov chain P3 is an example of a semiregular chain. Note that the steady-state probabilities depend on the initial state of the chain.

— End Example P3
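Absorbing states such as S2 above can be detected directly from the diagonal of the matrix. A minimal Python check using P3:

```python
def absorbing_states(P):
    """A state i is absorbing when p_ii = 1: once entered, it is never left."""
    return [i for i in range(len(P)) if P[i][i] == 1.0]

P3 = [[0.5, 0.4, 0.1, 0.0],
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 0.2, 0.8],
      [0.0, 0.0, 0.6, 0.4]]

print(absorbing_states(P3))  # [1]: S2 is the absorbing state
```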

Example 4
 
     [ 0.5  0.4  0.1  0   ]
P4 = [ 0    0    1    0   ]
     [ 0    0.2  0    0.8 ]
     [ 0    0    1    0   ]

• First: graphical representation (see Figure 6)

[Figure 6: State transition diagram P4]

• Markov Chain Analysis Example for P4

Second: 2-step transition matrix for the P4 example

       [ 0.25  0.16  0.01  0    ]
P4^2 = [ 0     0     1     0    ]
       [ 0     0.04  0     0.64 ]
       [ 0     0     1     0    ]

Third: 5-step transition matrix for the P4 example

       [ 0.0312  0.0102  0  0      ]
P4^5 = [ 0       0       1  0      ]
       [ 0       3e−04   0  0.3277 ]
       [ 0       0       1  0      ]

Fourth: 10-step transition matrix for the P4 example

        [ 0.001  1e−04  0  0      ]
P4^10 = [ 0      0      1  0      ]
        [ 0      0      0  0.1074 ]
        [ 0      0      1  0      ]

Fifth: solve the steady state for the P4 example

π^SteadyState_P4 = ( 0  0.1  0.5  0.4 )

Sixth: classify the Markov chain for the P4 example

• A) We check for recurrent classes

Table 11: Recurrent Classes
Class_Gr  states
1         {S2,S3,S4}

• B) We check for transient classes

Table 12: Transient Classes
Class_Gr  states
1         {S1}

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 13: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P4:

λ(P4) = ( −1,  1,  0.5,  0 )

• Two eigenvalues (1 and −1) have modulus 1; thus the period of the recurrent class is p = 2

Conclusions

The Markov chain P4 is an example of a cyclic Markov chain.

— End Example P4
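The period p = 2 of the cyclic part can be estimated numerically as the gcd of the step counts at which a state can return to itself. A Python sketch using P4 (the bound of 50 steps is an arbitrary practical choice):

```python
from math import gcd

def period(P, i, max_steps=50):
    """gcd of the step counts n <= max_steps with P^n[i][i] > 0
    (a practical estimate of the period of state i)."""
    def matmul(A, B):
        n = len(A)
        return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
                for r in range(n)]
    d = 0
    Pn = P  # Pn holds P^n at the start of each loop iteration
    for n in range(1, max_steps + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = matmul(Pn, P)
    return d

P4 = [[0.5, 0.4, 0.1, 0.0],
      [0.0, 0.0, 1.0, 0.0],
      [0.0, 0.2, 0.0, 0.8],
      [0.0, 0.0, 1.0, 0.0]]

print(period(P4, 2))  # 2: returns to S3 happen only at even step counts
```

State S1, by contrast, has a self-loop (p11 = 0.5 > 0), so its period is 1.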

Example 5
 
     [ 0.3  0.7  0    0    0   ]
     [ 0.3  0.4  0.1  0.2  0   ]
P5 = [ 0    0    0    1    0   ]
     [ 0    0    0.5  0    0.5 ]
     [ 0    0    0.7  0.3  0   ]

• First: graphical representation (see Figure 7)

[Figure 7: State transition diagram P5]


• Markov Chain Analysis Example for P5

Second: 2-step transition matrix for the P5 example

       [ 0.09  0.49  0     0     0    ]
       [ 0.09  0.16  0.01  0.04  0    ]
P5^2 = [ 0     0     0     1     0    ]
       [ 0     0     0.25  0     0.25 ]
       [ 0     0     0.49  0.09  0    ]

Third: 5-step transition matrix for the P5 example

       [ 0.0024  0.1681  0       0       0      ]
       [ 0.0024  0.0102  0       3e−04   0      ]
P5^5 = [ 0       0       0       1       0      ]
       [ 0       0       0.0312  0       0.0312 ]
       [ 0       0       0.1681  0.0024  0      ]

Fourth: 10-step transition matrix for the P5 example

        [ 0  0.0282  0       0  0     ]
        [ 0  1e−04   0       0  0     ]
P5^10 = [ 0  0       0       1  0     ]
        [ 0  0       0.001   0  0.001 ]
        [ 0  0       0.0282  0  0     ]

Fifth: solve the steady state for the P5 example

π^SteadyState_P5 = ( 0  0  0.362  0.426  0.213 )

Sixth: classify the Markov chain for the P5 example

• A) We check for recurrent classes

Table 14: Recurrent Classes
Class_Gr  states
1         {S3,S4,S5}

• B) We check for transient classes

Table 15: Transient Classes
Class_Gr  states
1         {S1,S2}

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 16: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P5:

λ(P5) = ( 1 + 0i,  0.811 + 0i,  −0.5 + 0.316i,  −0.5 − 0.316i,  −0.111 + 0i )

• Compute the modulus (in case of complex numbers):

|λ(P5)| = ( 1,  0.81098,  0.59161,  0.59161,  0.11098 )

Conclusions

The Markov chain P5 is another example of a chain that is neither regular nor irreducible.
— End Example P5

Example 6
 
     [ 1     0     0    0     0     0    ]
     [ 0     1     0    0     0     0    ]
P6 = [ 0.05  0.05  0.9  0     0     0    ]
     [ 0.1   0.05  0    0.8   0.05  0    ]
     [ 0.2   0.1   0    0.05  0.6   0.05 ]
     [ 0.1   0.2   0    0     0     0.7  ]

• First Graphical Representation (See Figure 8)

• Markov Chain Analysis Example for P6

Second: 2-step transition matrix for the P6 example

       [ 1       0       0     0       0       0      ]
       [ 0       1       0     0       0       0      ]
P6^2 = [ 0.0025  0.0025  0.81  0       0       0      ]
       [ 0.01    0.0025  0     0.64    0.0025  0      ]
       [ 0.04    0.01    0     0.0025  0.36    0.0025 ]
       [ 0.01    0.04    0     0       0       0.49   ]

Third: 5-step transition matrix for the P6 example

       [ 1      0      0       0       0       0      ]
       [ 0      1      0       0       0       0      ]
P6^5 = [ 0      0      0.5905  0       0       0      ]
       [ 0      0      0       0.3277  0       0      ]
       [ 3e−04  0      0       0       0.0778  0      ]
       [ 0      3e−04  0       0       0       0.1681 ]
[Figure 8: State transition diagram P6, example from an airline company]

Fourth: 10-step transition matrix for the P6 example

        [ 1  0  0       0       0      0      ]
        [ 0  1  0       0       0      0      ]
P6^10 = [ 0  0  0.3487  0       0      0      ]
        [ 0  0  0       0.1074  0      0      ]
        [ 0  0  0       0       0.006  0      ]
        [ 0  0  0       0       0      0.0282 ]

Fifth: solve the steady state for the P6 example

π^SteadyState_P6 = [ 0  1  0  0  0  0 ]
                   [ 1  0  0  0  0  0 ]

Sixth: classify the Markov chain for the P6 example

• A) We check for recurrent classes

Table 17: Recurrent Classes
Class_Gr  states
1         {S1}
2         {S2}

• B) We check for transient classes

Table 18: Transient Classes
Class_Gr  states
1         {S3}
2         {S4,S5}
3         {S6}

• C) We check for absorbing states

Table 19: Absorbing States
Class_Gr  states
1         {S1}
2         {S2}

• D) We check for a regular and irreducible Markov chain

Seventh: spectral analysis

• Compute the eigenvalues of P6:

λ(P6) = ( 1,  1,  0.9,  0.812,  0.7,  0.588 )

• The multiplicity of the eigenvalue 1 is two, matching the two recurrent (absorbing) classes {S1} and {S2}

Conclusions

The Markov chain P6 models a complex real situation, with all types of states present.

— End Example P6
Table 20: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Example 7
 
     [ 0.7  0.1  0.1  0    0    0    0    0.1 ]
     [ 0.4  0.3  0.1  0    0.2  0    0    0   ]
     [ 0    0    0.7  0.3  0    0    0    0   ]
P7 = [ 0    0    0.6  0.4  0    0    0    0   ]
     [ 0    0    0.1  0.3  0.3  0.1  0.2  0   ]
     [ 0    0    0    0    0    0.5  0.5  0   ]
     [ 0    0    0    0    0    0.4  0.6  0   ]
     [ 0    0    0    0    0    0    0    1   ]

• First: graphical representation (see Figure 9)

[Figure 9: State transition diagram P7]

• Markov Chain Analysis Example for P7



Second: 2-step transition matrix for the P7 example

       [ 0.49  0.01  0.01  0     0     0     0     0.01 ]
       [ 0.16  0.09  0.01  0     0.04  0     0     0    ]
       [ 0     0     0.49  0.09  0     0     0     0    ]
P7^2 = [ 0     0     0.36  0.16  0     0     0     0    ]
       [ 0     0     0.01  0.09  0.09  0.01  0.04  0    ]
       [ 0     0     0     0     0     0.25  0.25  0    ]
       [ 0     0     0     0     0     0.16  0.36  0    ]
       [ 0     0     0     0     0     0     0     1    ]

Third: 5-step transition matrix for the P7 example

       [ 0.1681  0       0       0       0       0       0       0 ]
       [ 0.0102  0.0024  0       0       3e−04   0       0       0 ]
       [ 0       0       0.1681  0.0024  0       0       0       0 ]
P7^5 = [ 0       0       0.0778  0.0102  0       0       0       0 ]
       [ 0       0       0       0.0024  0.0024  0       3e−04   0 ]
       [ 0       0       0       0       0       0.0312  0.0312  0 ]
       [ 0       0       0       0       0       0.0102  0.0778  0 ]
       [ 0       0       0       0       0       0       0       1 ]

Fourth: 10-step transition matrix for the P7 example

        [ 0.0282  0  0       0      0  0      0      0 ]
        [ 1e−04   0  0       0      0  0      0      0 ]
        [ 0       0  0.0282  0      0  0      0      0 ]
P7^10 = [ 0       0  0.006   1e−04  0  0      0      0 ]
        [ 0       0  0       0      0  0      0      0 ]
        [ 0       0  0       0      0  0.001  0.001  0 ]
        [ 0       0  0       0      0  1e−04  0.006  0 ]
        [ 0       0  0       0      0  0      0      1 ]

Fifth: solve the steady state for the P7 example

π^SteadyState_P7 = [ 0  0  0      0      0  0      0      1 ]
                   [ 0  0  0      0      0  0.444  0.556  0 ]
                   [ 0  0  0.667  0.333  0  0      0      0 ]

Sixth: classify the Markov chain for the P7 example

• A) We check for recurrent classes

Table 21: Recurrent Classes
Class_Gr  states
1         {S3,S4}
2         {S6,S7}
3         {S8}

• B) We check for transient classes

Table 22: Transient Classes
Class_Gr  states
1         {S1,S2}
2         {S5}

• C) We check for absorbing states

Table 23: Absorbing States
Class_Gr  states
1         {S8}

• D) We check for a regular and irreducible Markov chain

Table 24: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P7:

λ(P7) = ( 1,  1,  1,  0.783,  0.3,  0.217,  0.1,  0.1 )

• The multiplicity of the eigenvalue 1 is three, matching the three recurrent classes

Conclusions

The Markov chain P7 is an example of an interesting Markov chain, which can be simulated.

— End Example P7
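As the conclusion notes, P7 can be simulated. A minimal Python sketch that samples a trajectory from the chain (state indices 0..7 stand for S1..S8; the seed is arbitrary):

```python
import random

# Transition matrix P7 from the example (states S1..S8 as indices 0..7)
P7 = [[0.7, 0.1, 0.1, 0, 0, 0, 0, 0.1],
      [0.4, 0.3, 0.1, 0, 0.2, 0, 0, 0],
      [0, 0, 0.7, 0.3, 0, 0, 0, 0],
      [0, 0, 0.6, 0.4, 0, 0, 0, 0],
      [0, 0, 0.1, 0.3, 0.3, 0.1, 0.2, 0],
      [0, 0, 0, 0, 0, 0.5, 0.5, 0],
      [0, 0, 0, 0, 0, 0.4, 0.6, 0],
      [0, 0, 0, 0, 0, 0, 0, 1]]

def simulate(P, start, steps, rng):
    """Sample a trajectory: at each step draw the next state from row P[state]."""
    state, path = start, [start]
    for _ in range(steps):
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append(state)
    return path

rng = random.Random(42)  # arbitrary seed, for reproducibility
path = simulate(P7, start=0, steps=20, rng=rng)
print([s + 1 for s in path])  # the trajectory printed as S-numbers
```

Once the trajectory enters the absorbing state S8 it stays there, and long runs end up trapped in one of the recurrent classes.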

Example 8
 
     [ 0.333  0.667  0      0      0      0      0      0     ]
     [ 0.333  0.333  0.334  0      0      0      0      0     ]
     [ 0      0.333  0.333  0.334  0      0      0      0     ]
P8 = [ 0      0      0.333  0.333  0.334  0      0      0     ]
     [ 0      0      0      0.333  0.333  0.334  0      0     ]
     [ 0      0      0      0      0.333  0.333  0.334  0     ]
     [ 0      0      0      0      0      0.333  0.333  0.334 ]
     [ 0      0      0      0      0      0      0.667  0.333 ]

• First: graphical representation (see Figure 10)

[Figure 10: State transition diagram P8]

• Markov Chain Analysis Example for P8



Second: 2-step transition matrix for the P8 example

       [ 0.110889  0.444889  0         0         0         0         0         0        ]
       [ 0.110889  0.110889  0.111556  0         0         0         0         0        ]
       [ 0         0.110889  0.110889  0.111556  0         0         0         0        ]
P8^2 = [ 0         0         0.110889  0.110889  0.111556  0         0         0        ]
       [ 0         0         0         0.110889  0.110889  0.111556  0         0        ]
       [ 0         0         0         0         0.110889  0.110889  0.111556  0        ]
       [ 0         0         0         0         0         0.110889  0.110889  0.111556 ]
       [ 0         0         0         0         0         0         0.444889  0.110889 ]

Third: 5-step transition matrix for the P8 example

       [ 0.0041  0.132   0       0       0       0       0       0      ]
       [ 0.0041  0.0041  0.0042  0       0       0       0       0      ]
       [ 0       0.0041  0.0041  0.0042  0       0       0       0      ]
P8^5 = [ 0       0       0.0041  0.0041  0.0042  0       0       0      ]
       [ 0       0       0       0.0041  0.0041  0.0042  0       0      ]
       [ 0       0       0       0       0.0041  0.0041  0.0042  0      ]
       [ 0       0       0       0       0       0.0041  0.0041  0.0042 ]
       [ 0       0       0       0       0       0       0.132   0.0041 ]

Fourth: 10-step transition matrix for the P8 example

        [ 0  0.0174  0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0       0 ]
P8^10 = [ 0  0       0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0       0 ]
        [ 0  0       0  0  0  0  0.0174  0 ]

Fifth: solve the steady state for the P8 example

π^SteadyState_P8 = ( 0.071  0.142  0.142  0.143  0.143  0.143  0.144  0.072 )

Sixth: classify the Markov chain for the P8 example

• A) We check for recurrent classes

Table 25: Recurrent Classes
Class_Gr  states
1         {S1,S2,S3,S4,S5,S6,S7,S8}

• B) We check for transient classes

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 26: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  YES

Seventh: spectral analysis

• Compute the eigenvalues of P8:

λ(P8) = ( 1,  0.934,  0.749,  0.481,  −0.334,  −0.268,  0.185,  −0.083 )

Conclusions

The Markov chain P8 is an irreducible chain with a unique stationary distribution.

— End Example P8
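Because P8 is irreducible, the steady state reported above can be verified directly against the defining equation π = πP. A short Python check (the tolerance accounts for the rounded values):

```python
# Stationary check for the irreducible chain P8: pi should satisfy pi = pi P
P8 = [[0.333, 0.667, 0, 0, 0, 0, 0, 0],
      [0.333, 0.333, 0.334, 0, 0, 0, 0, 0],
      [0, 0.333, 0.333, 0.334, 0, 0, 0, 0],
      [0, 0, 0.333, 0.333, 0.334, 0, 0, 0],
      [0, 0, 0, 0.333, 0.333, 0.334, 0, 0],
      [0, 0, 0, 0, 0.333, 0.333, 0.334, 0],
      [0, 0, 0, 0, 0, 0.333, 0.333, 0.334],
      [0, 0, 0, 0, 0, 0, 0.667, 0.333]]

pi = [0.071, 0.142, 0.142, 0.143, 0.143, 0.143, 0.144, 0.072]  # from above

# One application of the update pi <- pi P
pi_next = [sum(pi[i] * P8[i][j] for i in range(8)) for j in range(8)]
err = max(abs(a - b) for a, b in zip(pi, pi_next))
print(err)  # small: pi is, up to rounding, the stationary distribution
```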

Example 9

 
     [ 0    0    1  0    0    0    0    0    0    0   ]
     [ 0    0    0  0.3  0.3  0.4  0    0    0    0   ]
     [ 0    0    0  0    0    1    0    0    0    0   ]
     [ 0    0    0  0    0    0    1    0    0    0   ]
P9 = [ 0    0    0  0    0    0    0.3  0.7  0    0   ]
     [ 0    0    0  0    0    0    0    0.2  0    0.8 ]
     [ 0    0    0  0    0    0    0    0    0.8  0.2 ]
     [ 1    0    0  0    0    0    0    0    0    0   ]
     [ 0    1    0  0    0    0    0    0    0    0   ]
     [ 0.5  0.5  0  0    0    0    0    0    0    0   ]

• First: graphical representation (see Figure 11)

[Figure 11: State transition diagram P9]

• Markov Chain Analysis Example for P9

Second: 2-step transition matrix for the P9 example

       [ 0     0     1  0     0     0     0     0     0     0    ]
       [ 0     0     0  0.09  0.09  0.16  0     0     0     0    ]
       [ 0     0     0  0     0     1     0     0     0     0    ]
       [ 0     0     0  0     0     0     1     0     0     0    ]
P9^2 = [ 0     0     0  0     0     0     0.09  0.49  0     0    ]
       [ 0     0     0  0     0     0     0     0.04  0     0.64 ]
       [ 0     0     0  0     0     0     0     0     0.64  0.04 ]
       [ 1     0     0  0     0     0     0     0     0     0    ]
       [ 0     1     0  0     0     0     0     0     0     0    ]
       [ 0.25  0.25  0  0     0     0     0     0     0     0    ]

Third: 5-step transition matrix for the P9 example

       [ 0       0       1  0       0       0       0       0       0       0      ]
       [ 0       0       0  0.0024  0.0024  0.0102  0       0       0       0      ]
       [ 0       0       0  0       0       1       0       0       0       0      ]
       [ 0       0       0  0       0       0       1       0       0       0      ]
P9^5 = [ 0       0       0  0       0       0       0.0024  0.1681  0       0      ]
       [ 0       0       0  0       0       0       0       3e−04   0       0.3277 ]
       [ 0       0       0  0       0       0       0       0       0.3277  3e−04  ]
       [ 1       0       0  0       0       0       0       0       0       0      ]
       [ 0       1       0  0       0       0       0       0       0       0      ]
       [ 0.0312  0.0312  0  0       0       0       0       0       0       0      ]

Fourth: 10-step transition matrix for the P9 example

        [ 0      0      1  0  0  0      0  0       0       0      ]
        [ 0      0      0  0  0  1e−04  0  0       0       0      ]
        [ 0      0      0  0  0  1      0  0       0       0      ]
        [ 0      0      0  0  0  0      1  0       0       0      ]
P9^10 = [ 0      0      0  0  0  0      0  0.0282  0       0      ]
        [ 0      0      0  0  0  0      0  0       0       0.1074 ]
        [ 0      0      0  0  0  0      0  0       0.1074  0      ]
        [ 1      0      0  0  0  0      0  0       0       0      ]
        [ 0      1      0  0  0  0      0  0       0       0      ]
        [ 0.001  0.001  0  0  0  0      0  0       0       0      ]

Fifth: solve the steady state for the P9 example

π^SteadyState_P9 = ( 0.109  0.141  0.151  0.042  0.056  0.168  0.082  0.034  0.065  0.151 )

Sixth: classify the Markov chain for the P9 example

• A) We check for recurrent classes

Table 27: Recurrent Classes
Class_Gr  states
1         {S1,S2,S3,S4,S5,S6,S7,S8,S9,S10}

• B) We check for transient classes

• C) We check for absorbing states

• D) We check for a regular and irreducible Markov chain

Table 28: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  YES

Seventh: spectral analysis

• Compute the eigenvalues of P9:

λ(P9) = ( 1 + 0i,  0 + 1i,  0 − 1i,  −1 + 0i,  0 + 0.734i,  0 − 0.734i,  −0.734 + 0i,  0.734 + 0i,  0 + 0i,  0 + 0i )

• Compute the modulus (in case of complex numbers):

|λ(P9)| = ( 1,  1,  1,  1,  0.73384,  0.73384,  0.73384,  0.73384,  0,  0 )

We can see that the multiplicity of |λ| = 1 is four; thus the period of the chain is p = 4.

Conclusions

The Markov chain P9 is irreducible but periodic (p = 4), so it is not regular.

— End Example P9

Example 10
 
      [ 0.5  0      0.5  0      0      0  0     0  0     0     ]
      [ 0    0.333  0    0      0      0  0.667 0  0     0     ]
      [ 1    0      0    0      0      0  0     0  0     0     ]
      [ 0    0      0    0      1      0  0     0  0     0     ]
P10 = [ 0    0      0    0.333  0.333  0  0     0  0     0.333 ]
      [ 0    0      0    0      0      1  0     0  0     0     ]
      [ 0    0      0    0      0      0  0.25  0  0.75  0     ]
      [ 0    0.25   0    0.25   0.25   0  0     0  0.25  0     ]
      [ 0    1      0    0      0      0  0     0  0     0     ]
      [ 0    0.333  0    0      0.333  0  0     0  0     0.333 ]

• First: graphical representation (see Figure 12)

• Markov Chain Analysis Example for P10

Second: 2-step transition matrix for the P10 example

        [ 0.25  0       0.25  0       0       0  0       0  0       0      ]
        [ 0     0.1111  0     0       0       0  0.4444  0  0       0      ]
        [ 1     0       0     0       0       0  0       0  0       0      ]
        [ 0     0       0     0       1       0  0       0  0       0      ]
P10^2 = [ 0     0       0     0.1111  0.1111  0  0       0  0       0.1111 ]
        [ 0     0       0     0       0       1  0       0  0       0      ]
        [ 0     0       0     0       0       0  0.0625  0  0.5625  0      ]
        [ 0     0.0625  0     0.0625  0.0625  0  0       0  0.0625  0      ]
        [ 0     1       0     0       0       0  0       0  0       0      ]
        [ 0     0.1111  0     0       0.1111  0  0       0  0       0.1111 ]
[Figure 12: State transition diagram P10]

Third: 5-step transition matrix for the P10 example

        [ 0.0312  0       0.0312  0       0       0  0       0  0       0      ]
        [ 0       0.0041  0       0       0       0  0.1317  0  0       0      ]
        [ 1       0       0       0       0       0  0       0  0       0      ]
        [ 0       0       0       0       1       0  0       0  0       0      ]
P10^5 = [ 0       0       0       0.0041  0.0041  0  0       0  0       0.0041 ]
        [ 0       0       0       0       0       1  0       0  0       0      ]
        [ 0       0       0       0       0       0  0.001   0  0.2373  0      ]
        [ 0       0.001   0       0.001   0.001   0  0       0  0.001   0      ]
        [ 0       1       0       0       0       0  0       0  0       0      ]
        [ 0       0.0041  0       0       0.0041  0  0       0  0       0.0041 ]

Fourth: 10-step transition matrix for the P10 example

         [ 0.001  0  0.001  0  0  0  0       0  0       0 ]
         [ 0      0  0      0  0  0  0.0173  0  0       0 ]
         [ 1      0  0      0  0  0  0       0  0       0 ]
         [ 0      0  0      0  1  0  0       0  0       0 ]
P10^10 = [ 0      0  0      0  0  0  0       0  0       0 ]
         [ 0      0  0      0  0  1  0       0  0       0 ]
         [ 0      0  0      0  0  0  0       0  0.0563  0 ]
         [ 0      0  0      0  0  0  0       0  0       0 ]
         [ 0      1  0      0  0  0  0       0  0       0 ]
         [ 0      0  0      0  0  0  0       0  0       0 ]

Fifth: solve the steady state for the P10 example

π^SteadyState_P10 = [ 0      0      0      0  0  1  0      0  0      0 ]
                    [ 0      0.391  0      0  0  0  0.348  0  0.261  0 ]
                    [ 0.667  0      0.333  0  0  0  0      0  0      0 ]

Sixth: classify the Markov chain for the P10 example

• A) We check for recurrent classes

Table 29: Recurrent Classes
Class_Gr  states
1         {S1,S3}
2         {S2,S7,S9}
3         {S6}

• B) We check for transient classes

Table 30: Transient Classes
Class_Gr  states
1         {S4,S5}
2         {S8}
3         {S10}

• C) We check for absorbing states

Table 31: Absorbing States
Class_Gr  states
1         {S6}

• D) We check for a regular and irreducible Markov chain

Table 32: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO

Seventh: spectral analysis

• Compute the eigenvalues of P10:

λ(P10) = ( 1 + 0i,  1 + 0i,  1 + 0i,  0.768 + 0i,  −0.208 + 0.676i,  −0.208 − 0.676i,  −0.5 + 0i,  −0.434 + 0i,  0.333 + 0i,  . . . )

• Compute the modulus (in case of complex numbers):

|λ(P10)| = ( 1,  1,  1,  0.76759,  0.70711,  0.70711,  0.5,  0.43426,  0.33333,  0.25 )

We can verify that the multiplicity of the eigenvalue 1 is three, matching the three recurrent classes.

Conclusions

The Markov chain P10 is another example of a chain that is neither regular nor irreducible.

— End Example P10
