Markov property
• There is no memory.
• The probability law of the next state Xn+1 depends on the past only through the value of the present state Xn; it does not matter what happened before Xn.
4 markov chain + decision process
P =
  p1,1  p1,2  p1,3  ···  p1,n
  p2,1  p2,2  p2,3  ···  p2,n
  ...   ...   ...   ···  ...
  pn,1  pn,2  pn,3  ···  pn,n
P²(i,j) = P(Xn+2 = j | Xn = i) = ∑k P(Xn+1 = k | Xn = i) · P(Xn+2 = j | Xn+1 = k)
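The Chapman–Kolmogorov relation above is nothing more than matrix multiplication: the two-step matrix is P·P. A minimal sketch in Python (the 2×2 matrix is an illustrative example, not one taken from these notes):

```python
def mat_mul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Hypothetical 2-state transition matrix (illustrative values only).
P = [[0.9, 0.1],
     [0.5, 0.5]]

# Two-step transition probabilities: P2[i][j] = P(Xn+2 = j | Xn = i),
# obtained by summing over the intermediate state k (Chapman-Kolmogorov).
P2 = mat_mul(P, P)
print(P2[0][1])  # 0.9*0.1 + 0.1*0.5 = 0.14
```

Note that each row of P² again sums to 1, so P² is itself a stochastic matrix.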
Example
• Suppose the following transition matrix Pa:

Pa =
  1    0    0    0
  0.3  0.4  0.3  0
  0    0.3  0.4  0.3
  0    0    0    1

• It is easy to generate a GRAPH from the Transition Matrix.
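Generating the graph amounts to listing every positive entry of the matrix as a directed, weighted edge. A small Python sketch, assuming the tridiagonal reading of Pa above:

```python
# Pa as reconstructed above (tridiagonal reading assumed; states S1..S4).
Pa = [[1.0, 0.0, 0.0, 0.0],
      [0.3, 0.4, 0.3, 0.0],
      [0.0, 0.3, 0.4, 0.3],
      [0.0, 0.0, 0.0, 1.0]]

# Each positive entry p_ij defines a directed edge S_i -> S_j
# labelled with its probability.
edges = [(i + 1, j + 1, p)
         for i, row in enumerate(Pa)
         for j, p in enumerate(row) if p > 0]
print(edges)  # e.g. (4, 4, 1.0) is the self-loop on the absorbing state S4
```

Feeding these edges to any graph-drawing tool reproduces the state transition diagram.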
Thus: if this condition does not hold, i.e. fi < 1, then state i is a transient state.
Another way to see these conditions is that, starting from any state i, a Markov chain visits a recurrent state infinitely many times, or not at all.

E[# of visits to i | X0 = i] = 1/(1 − fi) < ∞ when state i is transient, whereas ∑_{n=1}^{∞} Pⁿ(i,i) = ∞ when state i is recurrent.
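The expected-visits formula can be checked by simulation, assuming the tridiagonal reading of the matrix Pa from the example above (S1 and S4 absorbing, S2 and S3 transient). For that chain, 1/(1 − f2), the expected number of visits to S2 starting from S2, works out to about 2.22 (computed separately from the fundamental matrix):

```python
import random

random.seed(0)

# Pa as reconstructed above (tridiagonal reading assumed; states S1..S4).
Pa = [[1.0, 0.0, 0.0, 0.0],
      [0.3, 0.4, 0.3, 0.0],
      [0.0, 0.3, 0.4, 0.3],
      [0.0, 0.0, 0.0, 1.0]]

def step(state):
    """Sample the next state from row `state` of Pa."""
    u, acc = random.random(), 0.0
    for j, p in enumerate(Pa[state]):
        acc += p
        if u < acc:
            return j
    return len(Pa) - 1  # guard against floating-point round-off

def visits_to(start, target, max_steps=10_000):
    """Count visits to `target` (including time 0) before absorption."""
    state, count = start, 0
    for _ in range(max_steps):
        if state == target:
            count += 1
        if Pa[state][state] == 1.0:  # absorbing state reached
            break
        state = step(state)
    return count

trials = 10_000
mean_visits = sum(visits_to(1, 1) for _ in range(trials)) / trials
print(mean_visits)  # close to 1/(1 - f2), about 2.22 for this chain
```

The simulated mean being finite is exactly the signature of a transient state; for a recurrent state the count would grow without bound.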
Stationary Distributions
Spectral Analysis
Eigenvalues and eigenvectors are the tools needed to apply spectral analysis to the stochastic matrix. They can provide useful information related to the periodicity of the closed classes.
Many readers will have already seen spectral analysis in other disciplines, such as statistics and other computational fields.
Ax = λx ⇒ (A − λI)x = 0
To find the solutions other than x = 0, the matrix (A − λI) must be singular, i.e. |A − λI| = 0, which is called the characteristic equation. For an n × n matrix there will be n roots, and thus n eigenvalues.
Using a software system such as R it is straightforward to obtain the eigenvalues and eigenvectors of a stochastic matrix.
• The eigen() function computes both eigenvalues and eigenvectors.
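Short of a full eigendecomposition, one property can be checked without any linear-algebra library: because every row of a stochastic matrix sums to 1, the all-ones vector is always a right eigenvector with eigenvalue λ = 1. A minimal Python sketch (the 3×3 matrix is illustrative only):

```python
# Any stochastic matrix P satisfies P·1 = 1·1, so lambda = 1 is always
# an eigenvalue, with right eigenvector (1, 1, ..., 1).
P = [[0.1, 0.5, 0.4],
     [0.3, 0.3, 0.4],
     [0.0, 0.2, 0.8]]  # illustrative 3x3 stochastic matrix

ones = [1.0] * len(P)
Pv = [sum(row[j] * ones[j] for j in range(len(row))) for row in P]
print(Pv)  # [1.0, 1.0, 1.0] -> confirms the eigenvalue 1
```

This is why the eigenvalue lists in the examples below always contain at least one eigenvalue equal to 1; the remaining eigenvalues all have modulus at most 1.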
Examples
Example 1
P1 =
  0.1  0.5  0.4  0
  0    0    0    1
  0    0.8  0    0.2
  0    0    0.3  0.7
• First Graphical Representation (See Figure 3)
P1^5 =
  0  0.1885  0.1865  0.625
  0  0.132   0.2001  0.6679
  0  0.1608  0.172   0.6672
  0  0.1601  0.2004  0.6396
Class_Gr states
1 {S2,S3,S4}
Class_Gr states
1 {S1}
P1 eigenvalues = (1 + 0i, −0.15 + 0.466i, −0.15 − 0.466i, 0.1 + 0i)
P1 |λ| = (1, 0.4899, 0.4899, 0.1)
Conclusions
The Markov chain P1 is not a regular or irreducible chain.
— End Example P1
Example 2
P2 =
  0.5  0.4  0.1  0
  0.3  0.3  0.4  0
  0    0    0.2  0.8
  0    0    0.6  0.4
• First Graphical Representation (See Figure 4)
Class_Gr states
1 {S3,S4}
Class_Gr states
1 {S1,S2}
Table 6: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO
P2 eigenvalues = (1, 0.761, −0.4, 0.039)
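The communicating classes reported for P2 can also be found programmatically: two states communicate when each is reachable from the other, and the communicating classes are the resulting equivalence classes. A sketch in Python using a simple reachability search (states are 0-indexed here, so S1 is 0):

```python
# P2 from the example above (states S1..S4).
P = [[0.5, 0.4, 0.1, 0.0],
     [0.3, 0.3, 0.4, 0.0],
     [0.0, 0.0, 0.2, 0.8],
     [0.0, 0.0, 0.6, 0.4]]

n = len(P)

def reachable(i):
    """States reachable from i (including i itself), via positive-probability paths."""
    seen, stack = {i}, [i]
    while stack:
        s = stack.pop()
        for j in range(n):
            if P[s][j] > 0 and j not in seen:
                seen.add(j)
                stack.append(j)
    return seen

reach = [reachable(i) for i in range(n)]
# i and j communicate when each is reachable from the other.
classes = {frozenset(k for k in range(n) if k in reach[i] and i in reach[k])
           for i in range(n)}
print(sorted(sorted(c) for c in classes))  # [[0, 1], [2, 3]] -> {S1,S2}, {S3,S4}
```

A class is recurrent when no transition leaves it ({S3,S4} here) and transient otherwise ({S1,S2}).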
Conclusions
— End Example P2
Example 3
P3 =
  0.5  0.4  0.1  0
  0    1    0    0
  0    0    0.2  0.8
  0    0    0.6  0.4
• Markov Chain Analysis Example for P3
Figure 5: State transition diagram P3
Class_Gr states
1 {S2}
2 {S3,S4}
Class_Gr states
1 {S1}
Class_Gr states
1 {S2}
P3 eigenvalues = (1, 1, 0.5, −0.4)
Conclusions
The Markov chain P3 is an example of a semiregular chain. Note that the steady-state probabilities depend on the initial state of the chain.
— End Example P3
Example 4
P4 =
  0.5  0.4  0.1  0
  0    0    1    0
  0    0.2  0    0.8
  0    0    1    0
• First Graphical Representation (See Figure 6)
Class_Gr states
1 {S2,S3,S4}
Table 12: Transient Classes
Class_Gr states
1 {S1}
P4 eigenvalues = (−1, 1, 0.5, 0)
Conclusions
The Markov chain P4 is an example of a cyclic (periodic) Markov chain, as signalled by the eigenvalue −1.
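The cyclic behaviour can be verified numerically: the period of a state i is the gcd of all n with Pⁿ(i,i) > 0. A sketch in Python, assuming the reconstruction of P4 above:

```python
from math import gcd

# P4 as reconstructed above (assumed; states S1..S4, 0-indexed below).
P = [[0.5, 0.4, 0.1, 0.0],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.2, 0.0, 0.8],
     [0.0, 0.0, 1.0, 0.0]]

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def period(i, max_n=20):
    """gcd of all n <= max_n with P^n[i][i] > 0."""
    d, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 1e-12:
            d = gcd(d, n)
        Pn = mat_mul(Pn, P)
    return d

print(period(2))  # state S3 can only return to itself in an even number of steps
```

The result, period 2, matches the eigenvalue −1 on the unit circle: a recurrent class with period d contributes the d-th roots of unity as eigenvalues.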
— End Example P4
Example 5
P5 =
  0.3  0.7  0    0    0
  0.3  0.4  0.1  0.2  0
  0    0    0    1    0
  0    0    0.5  0    0.5
  0    0    0.7  0.3  0
• First Graphical Representation (See Figure 7)
Class_Gr states
1 {S3,S4,S5}
Class_Gr states
1 {S1,S2}
P5 eigenvalues = (1 + 0i, 0.811 + 0i, −0.5 + 0.316i, −0.5 − 0.316i, −0.111 + 0i)
P5 |λ| = (1, 0.81098, 0.59161, 0.59161, 0.11098)
Conclusions
The Markov chain P5 is another example of a chain that is neither regular nor irreducible.
— End Example P5
Example 6
P6 =
  1     0     0    0     0     0
  0     1     0    0     0     0
  0.05  0.05  0.9  0     0     0
  0.1   0.05  0    0.8   0.05  0
  0.2   0.1   0    0.05  0.6   0.05
  0.1   0.2   0    0     0     0.7
P6^2 =
  1       0       0     0       0       0
  0       1       0     0       0       0
  0.0025  0.0025  0.81  0       0       0
  0.01    0.0025  0     0.64    0.0025  0
  0.04    0.01    0     0.0025  0.36    0.0025
  0.01    0.04    0     0       0       0.49
P6^5 =
  1      0      0       0       0       0
  0      1      0       0       0       0
  0      0      0.5905  0       0       0
  0      0      0       0.3277  0       0
  3e-04  0      0       0       0.0778  0
  0      3e-04  0       0       0       0.1681
• Graphical representation of P6 (state transition diagram)
Class_Gr states
1 {S1}
2 {S2}
Table 18: Transient Classes
Class_Gr states
1 {S3}
2 {S4,S5}
3 {S6}
Class_Gr states
1 {S1}
2 {S2}
P6 eigenvalues = (1, 1, 0.9, 0.812, 0.7, 0.588)
Conclusions
The Markov chain P6 is a complex, realistic Markov chain with all types of states present.
— End Example P6
Example 7
P7 =
  0.7  0.1  0.1  0    0    0    0    0.1
  0.4  0.3  0.1  0    0.2  0    0    0
  0    0    0.7  0.3  0    0    0    0
  0    0    0.6  0.4  0    0    0    0
  0    0    0.1  0.3  0.3  0.1  0.2  0
  0    0    0    0    0    0.5  0.5  0
  0    0    0    0    0    0.4  0.6  0
  0    0    0    0    0    0    0    1
• Graphical representation of P7 (state transition diagram)
P7^2 =
  0.49  0.01  0.01  0     0     0     0     0.01
  0.16  0.09  0.01  0     0.04  0     0     0
  0     0     0.49  0.09  0     0     0     0
  0     0     0.36  0.16  0     0     0     0
  0     0     0.01  0.09  0.09  0.01  0.04  0
  0     0     0     0     0     0.25  0.25  0
  0     0     0     0     0     0.16  0.36  0
  0     0     0     0     0     0     0     1
P7^5 =
  0.1681  0       0       0       0       0       0       0
  0.0102  0.0024  0       0       3e-04   0       0       0
  0       0       0.1681  0.0024  0       0       0       0
  0       0       0.0778  0.0102  0       0       0       0
  0       0       0       0.0024  0.0024  0       3e-04   0
  0       0       0       0       0       0.0312  0.0312  0
  0       0       0       0       0       0.0102  0.0778  0
  0       0       0       0       0       0       0       1
P7^10 =
  0.0282  0      0       0      0  0      0      0
  1e-04   0      0       0      0  0      0      0
  0       0      0.0282  0      0  0      0      0
  0       0      0.006   1e-04  0  0      0      0
  0       0      0       0      0  0      0      0
  0       0      0       0      0  0.001  0.001  0
  0       0      0       0      0  1e-04  0.006  0
  0       0      0       0      0  0      0      1
π(P7) steady states (one per recurrent class):
  (0  0  0      0      0  0      0      1)
  (0  0  0      0      0  0.444  0.556  0)
  (0  0  0.667  0.333  0  0      0      0)
Class_Gr states
1 {S3,S4}
2 {S6,S7}
3 {S8}
Class_Gr states
1 {S1,S2}
2 {S5}
Class_Gr states
1 {S8}
P7 eigenvalues = (1, 1, 1, 0.783, 0.3, 0.217, 0.1, 0.1)
Conclusions
— End Example P7
Example 8
P8 =
  0.333  0.667  0      0      0      0      0      0
  0.333  0.333  0.334  0      0      0      0      0
  0      0.333  0.333  0.334  0      0      0      0
  0      0      0.333  0.333  0.334  0      0      0
  0      0      0      0.333  0.333  0.334  0      0
  0      0      0      0      0.333  0.333  0.334  0
  0      0      0      0      0      0.333  0.333  0.334
  0      0      0      0      0      0      0.667  0.333
P8^2 =
  0.110889  0.444889  0         0         0         0         0         0
  0.110889  0.110889  0.111556  0         0         0         0         0
  0         0.110889  0.110889  0.111556  0         0         0         0
  0         0         0.110889  0.110889  0.111556  0         0         0
  0         0         0         0.110889  0.110889  0.111556  0         0
  0         0         0         0         0.110889  0.110889  0.111556  0
  0         0         0         0         0         0.110889  0.110889  0.111556
  0         0         0         0         0         0         0.444889  0.110889
P8^5 =
  0.0041  0.132   0       0       0       0       0       0
  0.0041  0.0041  0.0042  0       0       0       0       0
  0       0.0041  0.0041  0.0042  0       0       0       0
  0       0       0.0041  0.0041  0.0042  0       0       0
  0       0       0       0.0041  0.0041  0.0042  0       0
  0       0       0       0       0.0041  0.0041  0.0042  0
  0       0       0       0       0       0.0041  0.0041  0.0042
  0       0       0       0       0       0       0.132   0.0041
π(P8) steady state = (0.071, 0.142, 0.142, 0.143, 0.143, 0.143, 0.144, 0.072)
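The steady-state vector above can be approximated by power iteration, repeatedly applying π ← πP until convergence. A minimal Python sketch using the P8 matrix from this example:

```python
# P8 from the example above; the steady state is approximated by
# repeatedly applying pi <- pi P (power iteration).
P = [
    [0.333, 0.667, 0,     0,     0,     0,     0,     0],
    [0.333, 0.333, 0.334, 0,     0,     0,     0,     0],
    [0,     0.333, 0.333, 0.334, 0,     0,     0,     0],
    [0,     0,     0.333, 0.333, 0.334, 0,     0,     0],
    [0,     0,     0,     0.333, 0.333, 0.334, 0,     0],
    [0,     0,     0,     0,     0.333, 0.333, 0.334, 0],
    [0,     0,     0,     0,     0,     0.333, 0.333, 0.334],
    [0,     0,     0,     0,     0,     0,     0.667, 0.333],
]
n = len(P)
pi = [1.0 / n] * n   # start from the uniform distribution
for _ in range(500):  # |lambda_2| ~ 0.93, so 500 iterations is plenty
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

print([round(x, 3) for x in pi])  # ~ (0.071, 0.142, ..., 0.144, 0.072)
```

Convergence is geometric at rate |λ2| ≈ 0.934, which is why a few hundred iterations suffice here; the iterate stays a probability vector throughout.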
Class_Gr states
1 {S1,S2,S3,S4,S5,S6,S7,S8}
P8 eigenvalues = (1, 0.934, 0.749, 0.481, −0.334, −0.268, 0.185, −0.083)
Conclusions
The Markov chain P8 is a regular and irreducible chain: all eight states form a single communicating class, and the chain converges to the unique steady-state distribution shown above.
— End Example P8
Example 9
P9 =
  0    0    1    0    0    0    0    0    0    0
  0    0    0.3  0.3  0.4  0    0    0    0    0
  0    0    0    0    0    1    0    0    0    0
  0    0    0    0    0    0    1    0    0    0
  0    0    0    0    0    0.3  0.7  0    0    0
  0    0    0    0    0    0    0    0.2  0    0.8
  0    0    0    0    0    0    0    0    0.8  0.2
  1    0    0    0    0    0    0    0    0    0
  0    1    0    0    0    0    0    0    0    0
  0.5  0.5  0    0    0    0    0    0    0    0
• Graphical representation of P9 (state transition diagram)
P9^2 =
  0     0     1     0     0     0     0     0     0     0
  0     0     0.09  0.09  0.16  0     0     0     0     0
  0     0     0     0     0     1     0     0     0     0
  0     0     0     0     0     0     1     0     0     0
  0     0     0     0     0     0.09  0.49  0     0     0
  0     0     0     0     0     0     0     0.04  0     0.64
  0     0     0     0     0     0     0     0     0.64  0.04
  1     0     0     0     0     0     0     0     0     0
  0     1     0     0     0     0     0     0     0     0
  0.25  0.25  0     0     0     0     0     0     0     0
P9^5 =
  0       0       1       0       0       0       0       0       0       0
  0       0       0.0024  0.0024  0.0102  0       0       0       0       0
  0       0       0       0       0       1       0       0       0       0
  0       0       0       0       0       0       1       0       0       0
  0       0       0       0       0       0.0024  0.1681  0       0       0
  0       0       0       0       0       0       0       3e-04   0       0.3277
  0       0       0       0       0       0       0       0       0.3277  3e-04
  1       0       0       0       0       0       0       0       0       0
  0       1       0       0       0       0       0       0       0       0
  0.0312  0.0312  0       0       0       0       0       0       0       0
P9^10 =
  0      0      1  0  0      0  0       0  0       0
  0      0      0  0  1e-04  0  0       0  0       0
  0      0      0  0  0      1  0       0  0       0
  0      0      0  0  0      0  1       0  0       0
  0      0      0  0  0      0  0.0282  0  0       0
  0      0      0  0  0      0  0       0  0       0.1074
  0      0      0  0  0      0  0       0  0.1074  0
  1      0      0  0  0      0  0       0  0       0
  0      1      0  0  0      0  0       0  0       0
  0.001  0.001  0  0  0      0  0       0  0       0
π(P9) steady state = (0.109, 0.141, 0.151, 0.042, 0.056, 0.168, 0.082, 0.034, 0.065, 0.151)
Class_Gr states
1 {S1,S2,S3,S4,S5,S6,S7,S8,S9,S10}
P9 eigenvalues = (1 + 0i, 0 + 1i, 0 − 1i, −1 + 0i, 0 + 0.734i, 0 − 0.734i, −0.734 + 0i, 0.734 + 0i, 0 + 0i, 0 + 0i)
P9 |λ| = (1, 1, 1, 1, 0.73384, 0.73384, 0.73384, 0.73384, 0, 0)
Conclusions
The Markov chain P9 is irreducible but not regular: the four eigenvalues of modulus 1 indicate a periodic chain.
— End Example P9
Example 10
P10 =
  0.5  0      0.5  0      0      0  0      0     0     0
  0    0.333  0    0      0      0  0.667  0     0     0
  1    0      0    0      0      0  0      0     0     0
  0    0      0    0      1      0  0      0     0     0
  0    0      0    0.333  0.333  0.333  0  0     0     0
  0    0      0    0      0      1  0      0     0     0
  0    0      0    0      0      0  0.25   0     0.75  0
  0    0.25   0    0.25   0.25   0  0      0.25  0     0
  0    1      0    0      0      0  0      0     0     0
  0    0.333  0    0      0.333  0  0      0     0     0.333
P10^2 =
  0.25  0       0.25  0       0       0       0       0       0       0
  0     0.1111  0     0       0       0       0.4444  0       0       0
  1     0       0     0       0       0       0       0       0       0
  0     0       0     0       1       0       0       0       0       0
  0     0       0     0.1111  0.1111  0.1111  0       0       0       0
  0     0       0     0       0       1       0       0       0       0
  0     0       0     0       0       0       0.0625  0       0.5625  0
  0     0.0625  0     0.0625  0.0625  0       0       0.0625  0       0
  0     1       0     0       0       0       0       0       0       0
  0     0.1111  0     0       0.1111  0       0       0       0       0.1111
• Graphical representation of P10 (state transition diagram)
P10^5 =
  0.0312  0       0.0312  0       0       0       0       0       0       0
  0       0.0041  0       0       0       0       0.1317  0       0       0
  1       0       0       0       0       0       0       0       0       0
  0       0       0       0       1       0       0       0       0       0
  0       0       0       0.0041  0.0041  0.0041  0       0       0       0
  0       0       0       0       0       1       0       0       0       0
  0       0       0       0       0       0       0.001   0       0.2373  0
  0       0.001   0       0.001   0.001   0       0       0.001   0       0
  0       1       0       0       0       0       0       0       0       0
  0       0.0041  0       0       0.0041  0       0       0       0       0.0041
P10^10 =
  0.001  0  0.001  0  0  0  0       0  0       0
  0      0  0      0  0  0  0.0173  0  0       0
  1      0  0      0  0  0  0       0  0       0
  0      0  0      0  1  0  0       0  0       0
  0      0  0      0  0  0  0       0  0       0
  0      0  0      0  0  1  0       0  0       0
  0      0  0      0  0  0  0       0  0.0563  0
  0      0  0      0  0  0  0       0  0       0
  0      1  0      0  0  0  0       0  0       0
  0      0  0      0  0  0  0       0  0       0
π(P10) steady states (one per recurrent class):
  (0      0      0      0  0  1  0      0  0      0)
  (0      0.391  0      0  0  0  0.348  0  0.261  0)
  (0.667  0      0.333  0  0  0  0      0  0      0)
Class_Gr states
1 {S1,S3}
2 {S2,S7,S9}
3 {S6}
Class_Gr states
1 {S4,S5}
2 {S8}
3 {S10}
Class_Gr states
1 {S6}
Table 32: Summary Regular + Irreducible Markov Chain
item  Property     Answer
1     Regular      NO
2     Irreducible  NO
P10 eigenvalues = (1 + 0i, 1 + 0i, 1 + 0i, 0.768 + 0i, −0.208 + 0.676i, −0.208 − 0.676i, −0.5 + 0i, −0.434 + 0i, 0.333 + 0i, 0.25 + 0i)
P10 |λ| = (1, 1, 1, 0.76759, 0.70711, 0.70711, 0.5, 0.43426, 0.33333, 0.25)
Conclusions
The Markov chain P10 is not a regular or irreducible chain.