Chapter-4
Introduction
Many random processes that produce time series behave like
chains of events: the transitions are generated step by step,
and the probability distribution of the next value depends
only on the current state, not on the earlier history. These
are the cases where Markov chains can be applied.
Andrey Andreyevich Markov ("Markoff"), 1856–1922
• The one-step transition probability is defined as
$p_{ij} = P\{X_{n+1} = j \mid X_n = i\}$
This is the probability that the process goes from
state i to state j in one transition step.
• The one-step Transition Matrix
$P = [p_{ij}]$, with rows indexed by the current state $X_n$ and columns by the next state $X_{n+1}$:

$P = \begin{bmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{bmatrix}$

$p_{ij} = P\{X_{n+1} = j \mid X_n = i\}, \quad i, j \in S = \{0, 1, 2, \ldots, m\}$
Properties:
(1) $0 \le p_{ij} \le 1$ for all $i, j$ in $S$.
(2) Each row sums to one:
$\sum_{j=0}^{m} p_{ij} = 1$ for all $i$ in $S$.
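These two properties can be checked numerically. A minimal sketch follows; the 3-state matrix is a made-up illustration, not one of the matrices from the text:

```python
import numpy as np

# A hypothetical 3-state one-step transition matrix (values chosen for illustration).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
])

def is_stochastic(P):
    """Property (1): every entry lies in [0, 1].
    Property (2): every row sums to 1."""
    entries_ok = np.all((P >= 0) & (P <= 1))
    rows_ok = np.allclose(P.sum(axis=1), 1.0)
    return bool(entries_ok and rows_ok)
```

A matrix failing either check cannot serve as a one-step transition matrix.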
• The k-step Transition probability is defined as
$p_{ij}^{(k)} = P\{X_{n+k} = j \mid X_n = i\}$
This is the probability that the process goes from state
i to state j in k transition steps.
• The k-step Transition Matrix
$P^{(k)} = [p_{ij}^{(k)}]$, again with rows indexed by $X_n$ and columns by $X_{n+k}$:

$P^{(k)} = \begin{bmatrix} p_{00}^{(k)} & p_{01}^{(k)} & \cdots & p_{0m}^{(k)} \\ p_{10}^{(k)} & p_{11}^{(k)} & \cdots & p_{1m}^{(k)} \\ \vdots & \vdots & & \vdots \\ p_{m0}^{(k)} & p_{m1}^{(k)} & \cdots & p_{mm}^{(k)} \end{bmatrix}$

$p_{ij}^{(k)} = P\{X_{n+k} = j \mid X_n = i\}, \quad i, j \in S = \{0, 1, 2, \ldots, m\}$
Properties:
(1) $0 \le p_{ij}^{(k)} \le 1$ for all $i, j$ in $S$.
(2) Each row sums to one:
$\sum_{j=0}^{m} p_{ij}^{(k)} = 1$ for all $i$ in $S$.
Let $X_n$ and $X_{n+1}$ denote the state of the random process
at times $n$ and $n+1$ respectively. Their probability
distributions are as follows:

X_n:        0     1     …     m
P{X_n}:     p_0   p_1   …     p_m

X_{n+1}:    0     1     …     m
P{X_{n+1}}: p'_0  p'_1  …     p'_m

$\sum_{i=0}^{m} p_i = 1, \qquad \sum_{i=0}^{m} p'_i = 1$
Probability vectors:
$S_n = [p_0 \; p_1 \; \cdots \; p_m]$
$S_{n+1} = [p'_0 \; p'_1 \; \cdots \; p'_m]$

$[p'_0 \; p'_1 \; \cdots \; p'_m] = [p_0 \; p_1 \; \cdots \; p_m] \begin{bmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{bmatrix}$

i.e. $S_{n+1} = S_n P$
Relationship between $P^{(k)}$ and $P$:

$S_{n+k} = S_{n+k-1} P = S_{n+k-2} P^2 = S_{n+k-3} P^3 = \cdots = S_n P^k$

Since also $S_{n+k} = S_n P^{(k)}$, it follows that

$P^{(k)} = P^k$
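The identity $P^{(k)} = P^k$ can be checked numerically: propagating a distribution one step at a time gives the same result as one multiplication by the k-th matrix power. The matrix and starting vector here are illustrative assumptions, not from the text:

```python
import numpy as np

# Illustrative 3-state transition matrix and initial distribution (assumed values).
P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.5, 0.2],
    [0.0, 0.4, 0.6],
])
S0 = np.array([1.0, 0.0, 0.0])

k = 5
# Step by step: apply S_{n+1} = S_n P a total of k times.
S = S0.copy()
for _ in range(k):
    S = S @ P
# One shot: S_k = S_0 P^k, using the k-step matrix P^(k) = P^k.
S_direct = S0 @ np.linalg.matrix_power(P, k)
```

Both routes produce the same probability vector, which still sums to 1.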
Example:
A body moves along a line with four dots, as follows.
• If the body is at dot 0, it moves to dot 1 or stays at
dot 0, with probability 0.5 and 0.5 respectively.
• If the body is at dot 1, it moves to dot 0, stays at
dot 1, or moves to dot 2, with probability 0.3, 0.4, and
0.3 respectively.
• If the body is at dot 2, it moves to dot 1, stays at
dot 2, or moves to dot 3, with probability 0.3, 0.4, and
0.3 respectively.
• If the body is at dot 3, it moves to dot 2 or stays at
dot 3, with probability 0.5 and 0.5 respectively.
Let X(n) denote the position of the body after n moves.
Then {X(n), n = 0, 1, 2, …} is a Markov chain.
(1) Find the state space.
(2) Find the one-step transition probability matrix.
(3) Given that the body starts at dot 0, find the
probability that it is at dot 2 after 2 moves.
Solution:
(1) The state space is {0, 1, 2, 3}.
(2) The one-step transition matrix:

        0.5  0.5  0    0
P  =    0.3  0.4  0.3  0
        0    0.3  0.4  0.3
        0    0    0.5  0.5
(3)
X_0:      0  1  2  3
P{X_0}:   1  0  0  0

$S_0 = [1 \; 0 \; 0 \; 0]$
$S_2 = S_0 P^{(2)} = S_0 P^2$

              0.5  0.5  0    0     0.5  0.5  0    0
P^(2) = P² =  0.3  0.4  0.3  0  ×  0.3  0.4  0.3  0
              0    0.3  0.4  0.3   0    0.3  0.4  0.3
              0    0    0.5  0.5   0    0    0.5  0.5

         0.40  0.45  0.15  0
      =  0.27  0.40  0.24  0.09
         0.09  0.24  0.40  0.27
         0     0.15  0.45  0.40

S_2 = [1 0 0 0] P² = [0.40  0.45  0.15  0]

Hence the probability that the body is at dot 2 after
2 moves is 0.15.
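The hand computation above can be reproduced directly with a matrix power:

```python
import numpy as np

# One-step transition matrix of the four-dot example.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.4, 0.3, 0.0],
    [0.0, 0.3, 0.4, 0.3],
    [0.0, 0.0, 0.5, 0.5],
])
S0 = np.array([1.0, 0.0, 0.0, 0.0])  # the body starts at dot 0

# Distribution after 2 moves: S_2 = S_0 P^2.
S2 = S0 @ np.linalg.matrix_power(P, 2)
# S2 = [0.4, 0.45, 0.15, 0]; the probability of being at dot 2 is S2[2] = 0.15
```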
• Example:
{X(n), n = 0, 1, 2, …} is a homogeneous Markov chain,
S = {i0, i1, i2, i3} is the state space, and X(n) ∈ S for all n.
The one-step transition probability matrix is:

        0.2  0.4  0    0.4
P  =    0.1  0    0.7  0.2
        0.6  0.1  0    0.3
        0.5  0.2  0.1  0.2

(1) The distribution of X(3):

X(3):      i0      i1      i2      i3
P{X(3)}:   0.3776  0.2612  0.2188  0.3424

(2)
P{X(0)=i0, X(1)=i1, X(2)=i2, X(3)=i3}
= P{X(3)=i3 | X(2)=i2, X(1)=i1, X(0)=i0} · P{X(2)=i2, X(1)=i1, X(0)=i0}
= P{X(3)=i3 | X(2)=i2} · P{X(2)=i2 | X(1)=i1, X(0)=i0} · P{X(1)=i1, X(0)=i0}
= P{X(3)=i3 | X(2)=i2} · P{X(2)=i2 | X(1)=i1} · P{X(1)=i1 | X(0)=i0} · P{X(0)=i0}
= 0.3 × 0.7 × 0.4 × 0.2 = 0.0168
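The chain-rule factorization can be evaluated straight from the matrix entries. Note that P{X(0)=i0} = 0.2 is taken from the product above; the full initial distribution itself is not given in the text:

```python
import numpy as np

P = np.array([
    [0.2, 0.4, 0.0, 0.4],
    [0.1, 0.0, 0.7, 0.2],
    [0.6, 0.1, 0.0, 0.3],
    [0.5, 0.2, 0.1, 0.2],
])
i0, i1, i2, i3 = 0, 1, 2, 3  # row/column indices of states i0..i3
p_X0 = 0.2                   # P{X(0)=i0}, as used in the product above

# Markov property: the joint probability is the initial probability
# times the product of the one-step transition probabilities.
joint = p_X0 * P[i0, i1] * P[i1, i2] * P[i2, i3]
# 0.2 * 0.4 * 0.7 * 0.3 = 0.0168
```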
Classification of states
• The behavior of a Markov chain depends on the
structure of the one-step transition matrix P.
• In this section we study various measures for
characterizing states and subsets of states in a
Markov chain.
• State j is said to be accessible from state i if
$p_{ij}^{(n)} > 0$ for some $n \ge 0$. We write $i \to j$
to denote this.
With states 1–4, consider the transition matrix

        0    1    0    0
P  =    0    0    1    0
        0    0    0    1
        0.5  0    0.5  0

The state transition map: (diagram omitted)
• Theorem:
If for some $n$ every element of the $n$-step transition
matrix satisfies
(1) $0 < p_{ij}^{(n)} < 1$,

$P^{(n)} = \begin{bmatrix} p_{00}^{(n)} & p_{01}^{(n)} & \cdots & p_{0m}^{(n)} \\ \vdots & \vdots & & \vdots \\ p_{m0}^{(n)} & p_{m1}^{(n)} & \cdots & p_{mm}^{(n)} \end{bmatrix}$

then all rows of $P^k$ become equal as $k$ tends to infinity:

$\lim_{k \to \infty} P^k = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_m \\ \pi_0 & \pi_1 & \cdots & \pi_m \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_m \end{bmatrix}, \qquad 0 < \pi_i < 1 \text{ for all } i$

A chain with this property is called ergodic.
• Examples:
Example 1: For the periodic chain above,

                        0.3333  0       0.6667  0
lim P^(l) = lim P^l  =  0       0.3333  0       0.6667
l→∞         l→∞         0.3333  0       0.6667  0
                        0       0.3333  0       0.6667

The rows are not all equal, so this chain does not satisfy
the definition of ergodicity.
• Example 2:
The following is a transition probability matrix (states 1–4):

        0.5  0.5  0    0
P  =    0    0.5  0.5  0
        0    0    0.5  0.5
        0.5  0    0    0.5
If the chain is ergodic, the limiting probabilities satisfy

$[\pi_0, \pi_1, \ldots, \pi_M] = [\pi_0, \pi_1, \ldots, \pi_M] \, P, \qquad \sum_{j=0}^{M} \pi_j = 1$

To see why, let the distributions of $X_n$ and $X_{n+1}$ be:

X_n:        i_0   i_1   …   i_M
P{X_n}:     p'_0  p'_1  …   p'_M

X_{n+1}:    i_0    i_1    …   i_M
P{X_{n+1}}: p''_0  p''_1  …   p''_M

with initial distribution

$S_0 = [p_0, p_1, \ldots, p_M], \qquad p_0 + p_1 + \cdots + p_M = 1$

Then

$S_n = [p'_0, p'_1, \ldots, p'_M] = [p_0, p_1, \ldots, p_M] \, P^{(n)}$
$S_{n+1} = [p''_0, p''_1, \ldots, p''_M] = [p_0, p_1, \ldots, p_M] \, P^{(n+1)}$

If the Markov chain is ergodic, then as $n \to \infty$,

$\lim_{n \to \infty} P^{(n)} = \lim_{n \to \infty} P^n = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \\ \pi_0 & \pi_1 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix}, \qquad 0 < \pi_j < 1$

and $\lim_{n \to \infty} P^{(n+1)} = \lim_{n \to \infty} P^{n+1}$ equals the same matrix.

Hence

$S_n = [p_0, p_1, \ldots, p_M] \, P^{(n)} \to [p_0, p_1, \ldots, p_M] \begin{bmatrix} \pi_0 & \cdots & \pi_M \\ \vdots & & \vdots \\ \pi_0 & \cdots & \pi_M \end{bmatrix} = [\pi_0, \pi_1, \ldots, \pi_M]$

and likewise $S_{n+1} \to [\pi_0, \pi_1, \ldots, \pi_M]$.
Since $S_{n+1} = S_n P$, letting $n \to \infty$ gives
$[\pi_0, \ldots, \pi_M] = [\pi_0, \ldots, \pi_M] \, P$.
By contrast, for the periodic chain considered earlier
(states 1–4),

        0    1    0    0
P  =    0    0    1    0
        0    0    0    1
        0.5  0    0.5  0

                        0.3333  0       0.6667  0
lim P^(l) = lim P^l  =  0       0.3333  0       0.6667
l→∞         l→∞         0.3333  0       0.6667  0
                        0       0.3333  0       0.6667

The rows are not equal, so no single limiting distribution
exists: the chain is not ergodic.
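The non-ergodic behaviour can also be seen numerically: even and odd powers of this periodic matrix settle onto two different matrices, so lim P^l does not exist and the rows never become equal. A sketch:

```python
import numpy as np

# The periodic chain from the example (states 1-4).
P = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.5, 0.0, 0.5, 0.0],
])

P_even = np.linalg.matrix_power(P, 100)  # large even power
P_odd = np.linalg.matrix_power(P, 101)   # large odd power
# P_even and P_odd differ, and neither has identical rows: not ergodic.
```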
• Example:
Consider a Markov chain with one-step transition
probability matrix

        0.5  0.5  0    0
P  =    0    0.5  0.5  0
        0    0    0.5  0.5
        0.5  0    0    0.5

Solve

$[\pi_1, \pi_2, \pi_3, \pi_4] = [\pi_1, \pi_2, \pi_3, \pi_4] \, P, \qquad \pi_1 + \pi_2 + \pi_3 + \pi_4 = 1$

Written out:

$\pi_1 = 0.5\pi_1 + 0.5\pi_4$
$\pi_2 = 0.5\pi_1 + 0.5\pi_2$
$\pi_3 = 0.5\pi_2 + 0.5\pi_3$
$\pi_4 = 0.5\pi_3 + 0.5\pi_4$
$\pi_1 + \pi_2 + \pi_3 + \pi_4 = 1$

Solution: $\pi_1 = 0.25, \; \pi_2 = 0.25, \; \pi_3 = 0.25, \; \pi_4 = 0.25$
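The same stationary distribution falls out of solving π = πP together with the normalization constraint as one linear system:

```python
import numpy as np

P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.5, 0.0, 0.0, 0.5],
])
n = P.shape[0]

# pi (P - I) = 0 stacked with sum(pi) = 1, solved by least squares.
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
# pi = [0.25, 0.25, 0.25, 0.25]
```

Since the matrix is doubly stochastic (columns also sum to 1), the uniform distribution is expected.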
• Example:
Consider a Markov chain with transition probability
matrix

        0.1  0.1  0.2  0.2  0.4  0    0
        0    0    0.5  0.5  0    0    0
        0    0    0    1    0    0    0
P  =    0    1    0    0    0    0    0
        0    0    0    0    0.5  0.5  0
        0    0    0    0    0.5  0    0.5
        0    0    0    0    0    0.5  0.5

The state space is {1, 2, 3, 4, 5, 6, 7}.
(1) Classify the states.
(2) Find the limiting distribution.
Solution:
(1) The transition map: (diagram omitted)
From the map, we can classify the state space into
three sets:
C1 = {2, 3, 4}, C2 = {5, 6, 7}, C3 = {1}
Restricted to C1 = {2, 3, 4}, the transition matrix
(rows and columns indexed by states 2, 3, 4) is

           0  0.5  0.5
P_C1  =    0  0    1
           1  0    0
Let $\pi_2, \pi_3, \pi_4$ denote the limiting distribution of
states 2, 3, 4 respectively. The equations are:

$[\pi_2, \pi_3, \pi_4] = [\pi_2, \pi_3, \pi_4] \, P_{C1} = [\pi_2, \pi_3, \pi_4] \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$

$\pi_2 + \pi_3 + \pi_4 = 1$

Written out:

$\pi_2 = \pi_4$
$\pi_3 = 0.5\pi_2$
$\pi_4 = 0.5\pi_2 + \pi_3$
$\pi_2 + \pi_3 + \pi_4 = 1$

Solution: $\pi_2 = \dfrac{2}{5}, \; \pi_3 = \dfrac{1}{5}, \; \pi_4 = \dfrac{2}{5}$
(2) Consider the set C2 = {5, 6, 7}. The restricted
transition matrix (rows and columns indexed by states
5, 6, 7) is:

           0.5  0.5  0
P_C2  =    0.5  0    0.5
           0    0.5  0.5

Let $\pi_5, \pi_6, \pi_7$ denote the limiting distribution of
states 5, 6, 7 respectively. The equations are:

$[\pi_5, \pi_6, \pi_7] = [\pi_5, \pi_6, \pi_7] \, P_{C2} = [\pi_5, \pi_6, \pi_7] \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0.5 \end{bmatrix}$

$\pi_5 + \pi_6 + \pi_7 = 1$

Written out:

$\pi_5 = 0.5\pi_5 + 0.5\pi_6$
$\pi_6 = 0.5\pi_5 + 0.5\pi_7$
$\pi_7 = 0.5\pi_6 + 0.5\pi_7$
$\pi_5 + \pi_6 + \pi_7 = 1$

Solution: $\pi_5 = \dfrac{1}{3}, \; \pi_6 = \dfrac{1}{3}, \; \pi_7 = \dfrac{1}{3}$
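Both class results can be checked by raising the restricted matrices to a high power; for an ergodic chain every row converges to the limiting distribution:

```python
import numpy as np

# Transition matrix restricted to C1 = {2, 3, 4}.
P_C1 = np.array([
    [0.0, 0.5, 0.5],
    [0.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
])
# Transition matrix restricted to C2 = {5, 6, 7}.
P_C2 = np.array([
    [0.5, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.5],
])

# Any row of a high power approximates the limiting distribution.
pi_C1 = np.linalg.matrix_power(P_C1, 200)[0]  # ≈ [2/5, 1/5, 2/5]
pi_C2 = np.linalg.matrix_power(P_C2, 200)[0]  # ≈ [1/3, 1/3, 1/3]
```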
• Example:
{X(n), n = 0, 1, 2, …} is a homogeneous discrete-time
Markov chain; the state space is I = {i0, i1, i2, i3}.
The one-step transition probability matrix is:

        0.3  0.3  0.4  0
P  =    0    0.3  0.2  0.5
        0.5  0    0.1  0.4
        0.7  0    0    0.3

The limiting probabilities are

$\lim_{l \to \infty} p_{ij}^{(l)} = \begin{cases} \pi_0 = 0.3872, & \text{if } j = i_0 \\ \pi_1 = 0.1659, & \text{if } j = i_1 \\ \pi_2 = 0.2090, & \text{if } j = i_2 \\ \pi_3 = 0.2379, & \text{if } j = i_3 \end{cases}$
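These limiting values can be verified by computing a high power of P; every row converges to the same vector:

```python
import numpy as np

P = np.array([
    [0.3, 0.3, 0.4, 0.0],
    [0.0, 0.3, 0.2, 0.5],
    [0.5, 0.0, 0.1, 0.4],
    [0.7, 0.0, 0.0, 0.3],
])

P_limit = np.linalg.matrix_power(P, 200)
pi = P_limit[0]  # ≈ [0.3872, 0.1659, 0.2090, 0.2379]
```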
– Assignment Q.1:
{X(n), n = 0, 1, 2, …} is a homogeneous Markov chain,
S = {i0, i1, i2} is the state space, and X(n) ∈ S for all n.
The one-step transition probability matrix is:

        0.5  0.5  0
P  =    0    0.5  0.5
        0.5  0    0.5