
Discrete-Time Markov Chains

Chapter 4
Introduction
Many random processes that arise in time-series problems behave like chains of events: the probability distribution of the next value does not depend on the whole past of the series, but only on the current value. These are the cases where Markov chains can be applied.
Andrey Andreyevich Markov (“Markoff”)
(1856–1922)

The concept of Markov chains first appeared in 1906, when Markov set out to describe complex chains of dependent events as a probability process in simple terms.
Definitions of Markov Chains
• The outcome of any event is drawn from a finite set of states (the state space).
• The outcome of any event depends only on its present state, not on any past states.
• As the process transits/jumps/steps from one state to another, a transition probability defines each jump.
Introduction and definition
• Consider a stochastic process $X = \{X_n,\ n = 0, 1, 2, \dots\}$, where $X_n$ denotes the state of the system at time $n$. As an example, $X_n$ can be a gambler's fortune after $n$ plays. We assume the random variable $X_n$ takes values in a set $S$, called the state space. $S$ is finite or countably infinite. For convenience, we index the possible values of the state space so that $S = \{0, 1, 2, \dots\}$. If $X_n = i$, we say that the process is in state $i$ at time $n$.
Definition of Markov chain
The stochastic process $X = \{X_n,\ n = 0, 1, 2, \dots\}$ is a discrete-time Markov chain with state space $S$ if
$$P\{X_{n+1} = j \mid X_n = i_n, X_{n-1} = i_{n-1}, \dots, X_1 = i_1, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i_n\}$$
holds for all $j, i_n, i_{n-1}, \dots, i_1, i_0$ in $S$ and all $n = 0, 1, 2, \dots$. It says that the future of the process is independent of the past of the process once the current state of the process is known.
• For all $n$, if
$$P\{X_{n+1} = j \mid X_n = i\} = p_{ij}$$
for all $i, j$ in $S$, then the Markov chain is called time homogeneous.
Transition probability
• The one-step transition probability is defined as
$$p_{ij} = P\{X_{n+1} = j \mid X_n = i\}$$
This is the probability that the process goes from state $i$ to state $j$ in one transition step.
• The one-step transition matrix is $P = [p_{ij}]$. With rows indexed by the current state $X_n = i$ and columns by the next state $X_{n+1} = j$,
$$P = \begin{bmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{bmatrix}, \qquad p_{ij} = P\{X_{n+1} = j \mid X_n = i\}, \quad i, j \in S = \{0, 1, 2, \dots, m\}.$$

Property:
(1) $0 \le p_{ij} \le 1$ for all $i, j$ in $S$.
(2) Each row sums to one:
$$\sum_{j=0}^{m} p_{ij} = 1 \quad \text{for all } i \text{ in } S.$$
• The $k$-step transition probability is defined as
$$p_{ij}^{(k)} = P\{X_{n+k} = j \mid X_n = i\}$$
This is the probability that the process goes from state $i$ to state $j$ in $k$ transition steps.
• The $k$-step transition matrix is $P^{(k)} = [p_{ij}^{(k)}]$. With rows indexed by $X_n = i$ and columns by $X_{n+k} = j$,
$$P^{(k)} = \begin{bmatrix} p_{00}^{(k)} & p_{01}^{(k)} & \cdots & p_{0m}^{(k)} \\ p_{10}^{(k)} & p_{11}^{(k)} & \cdots & p_{1m}^{(k)} \\ \vdots & \vdots & & \vdots \\ p_{m0}^{(k)} & p_{m1}^{(k)} & \cdots & p_{mm}^{(k)} \end{bmatrix}, \qquad p_{ij}^{(k)} = P\{X_{n+k} = j \mid X_n = i\}, \quad i, j \in S = \{0, 1, 2, \dots, m\}.$$
Property:
(1) $0 \le p_{ij}^{(k)} \le 1$ for all $i, j$ in $S$.
(2) Each row sums to one:
$$\sum_{j=0}^{m} p_{ij}^{(k)} = 1 \quad \text{for all } i \text{ in } S.$$
Let X n、X n1 denote state of random processes
at time n、n +1 respectively. Their
probability distribution is as following

Xn 0 1 … m
P{Xn} p0 p1 … pm

Xn+1 0 1 … m
, , ,
P{Xn+1} p
0 p1
… p m

m m

 pi  1
i 0
 i 1
p ,

i 0
Probability vectors:
$$S_n = [p_0\ \ p_1\ \cdots\ p_m], \qquad S_{n+1} = [p'_0\ \ p'_1\ \cdots\ p'_m]$$
$$[p'_0\ \ p'_1\ \cdots\ p'_m] = [p_0\ \ p_1\ \cdots\ p_m] \begin{bmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{bmatrix}$$
$$S_{n+1} = S_n \cdot P$$
(k )
Relationship between P and P

Sn  k  Sn  k 1  P
 Sn  k 2  P 2

 S n  k 3  P 3


 Sn  P k

Sn  k  Sn  P (k )

P (k )
P k
Example:
A body moves on a line with four dots, numbered 0, 1, 2, 3.
• If the body is at dot 0, it can move to dot 1 or stay at dot 0, with probabilities 0.5 and 0.5 respectively.
• If the body is at dot 1, it can move to dot 0, stay at dot 1, or move to dot 2, with probabilities 0.3, 0.4, and 0.3 respectively.
• If the body is at dot 2, it can move to dot 1, stay at dot 2, or move to dot 3, with probabilities 0.3, 0.4, and 0.3 respectively.
• If the body is at dot 3, it can move to dot 2 or stay at dot 3, with probabilities 0.5 and 0.5 respectively.
Let $X(n)$ denote the site of the body after $n$ moves. $\{X(n),\ n = 0, 1, 2, \dots\}$ is a Markov chain.
(1) Find the state space.
(2) Find the one-step transition probability matrix.
(3) Given starting site 0, find the probability that the body is at site 2 after 2 moves.
Solution:
(1) The state space is $\{0, 1, 2, 3\}$.
(2) The one-step transition matrix is
$$P = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.3 & 0.4 & 0.3 & 0 \\ 0 & 0.3 & 0.4 & 0.3 \\ 0 & 0 & 0.5 & 0.5 \end{bmatrix}$$
(3) The starting distribution is

X(0)      0   1   2   3
P{X(0)}   1   0   0   0

$$S_0 = [1\ \ 0\ \ 0\ \ 0], \qquad S_2 = S_0 \cdot P^{(2)} = S_0 \cdot P^2$$
$$P^{(2)} = P^2 = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.3 & 0.4 & 0.3 & 0 \\ 0 & 0.3 & 0.4 & 0.3 \\ 0 & 0 & 0.5 & 0.5 \end{bmatrix} \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0.3 & 0.4 & 0.3 & 0 \\ 0 & 0.3 & 0.4 & 0.3 \\ 0 & 0 & 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.40 & 0.45 & 0.15 & 0 \\ 0.27 & 0.40 & 0.24 & 0.09 \\ 0.09 & 0.24 & 0.40 & 0.27 \\ 0 & 0.15 & 0.45 & 0.40 \end{bmatrix}$$
$$S_2 = [1\ \ 0\ \ 0\ \ 0] \begin{bmatrix} 0.40 & 0.45 & 0.15 & 0 \\ 0.27 & 0.40 & 0.24 & 0.09 \\ 0.09 & 0.24 & 0.40 & 0.27 \\ 0 & 0.15 & 0.45 & 0.40 \end{bmatrix} = [0.40\ \ 0.45\ \ 0.15\ \ 0]$$
The probability that the body is at site 2 after 2 moves is 0.15.
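As a quick numerical check of this example, the sketch below (ours, using NumPy) computes $S_2 = S_0 \cdot P^2$ directly:

```python
import numpy as np

# One-step transition matrix from the example above.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.5, 0.5]])

S0 = np.array([1.0, 0.0, 0.0, 0.0])   # the body starts at site 0

# Distribution after two steps: S_2 = S_0 . P^2.
S2 = S0 @ np.linalg.matrix_power(P, 2)
print(S2)       # [0.4  0.45 0.15 0.  ]
print(S2[2])    # 0.15 = P{X(2) = 2 | X(0) = 0}
```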
• Example:
$\{X(n),\ n = 0, 1, 2, \dots\}$ is a homogeneous Markov chain, $S = \{i_0, i_1, i_2, i_3\}$ is the state space, and $X(n) \in S$ for all $n$. The one-step transition probability matrix is:
$$P = \begin{bmatrix} 0.2 & 0.4 & 0 & 0.4 \\ 0.1 & 0 & 0.7 & 0.2 \\ 0.6 & 0.1 & 0 & 0.3 \\ 0.5 & 0.2 & 0.1 & 0.2 \end{bmatrix}$$

The distribution of $X(0)$ is given by the following table:

X(0)      i0    i1    i2    i3
P{X(0)}   0.2   0.3   0.3   0.2

(1) Find the distribution of $X(3)$;
(2) Find the joint probability $P\{X(0) = i_0, X(1) = i_1, X(2) = i_2, X(3) = i_3\}$.
Solution:
(1)
$$S_0 = [0.2\ \ 0.3\ \ 0.3\ \ 0.2], \qquad S_3 = S_0 \cdot P^{(3)} = S_0 \cdot P^3$$
$$P^{(3)} = P^3 = \begin{bmatrix} 0.2 & 0.4 & 0 & 0.4 \\ 0.1 & 0 & 0.7 & 0.2 \\ 0.6 & 0.1 & 0 & 0.3 \\ 0.5 & 0.2 & 0.1 & 0.2 \end{bmatrix}^3 = \begin{bmatrix} 0.384 & 0.192 & 0.136 & 0.288 \\ 0.280 & 0.276 & 0.134 & 0.310 \\ 0.306 & 0.186 & 0.242 & 0.266 \\ 0.332 & 0.190 & 0.206 & 0.272 \end{bmatrix}$$
$$S_3 = [0.2\ \ 0.3\ \ 0.3\ \ 0.2] \begin{bmatrix} 0.384 & 0.192 & 0.136 & 0.288 \\ 0.280 & 0.276 & 0.134 & 0.310 \\ 0.306 & 0.186 & 0.242 & 0.266 \\ 0.332 & 0.190 & 0.206 & 0.272 \end{bmatrix} = [0.3190\ \ 0.2150\ \ 0.1812\ \ 0.2848]$$

The distribution of $X(3)$ is given by the following table:

X(3)      i0       i1       i2       i3
P{X(3)}   0.3190   0.2150   0.1812   0.2848
(2)
$$\begin{aligned}
&P\{X(0) = i_0, X(1) = i_1, X(2) = i_2, X(3) = i_3\} \\
&= P\{X(3) = i_3 \mid X(2) = i_2, X(1) = i_1, X(0) = i_0\} \cdot P\{X(2) = i_2, X(1) = i_1, X(0) = i_0\} \\
&= P\{X(3) = i_3 \mid X(2) = i_2\} \cdot P\{X(2) = i_2 \mid X(1) = i_1, X(0) = i_0\} \cdot P\{X(1) = i_1, X(0) = i_0\} \\
&= P\{X(3) = i_3 \mid X(2) = i_2\} \cdot P\{X(2) = i_2 \mid X(1) = i_1\} \cdot P\{X(1) = i_1 \mid X(0) = i_0\} \cdot P\{X(0) = i_0\} \\
&= 0.3 \times 0.7 \times 0.4 \times 0.2 = 0.0168
\end{aligned}$$
Classification of states
• The behavior of a Markov chain depends on the structure of the one-step transition matrix $P$.
• In this section we study various measures for characterizing states and subsets of states in a Markov chain.
• State $j$ is said to be accessible from state $i$ if $p_{ij}^{(n)} > 0$ for some $n \ge 0$. We write $i \to j$ to denote it.
• If $i \to j$ and $j \to i$, we say that $i$ and $j$ communicate, and write $i \leftrightarrow j$.
• Communication is an equivalence relation; that is, it is reflexive, symmetric, and transitive:
if $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.
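Communicating classes can also be found mechanically. The sketch below (our own, not from the slides) computes reachability in the directed graph with an edge $i \to j$ whenever $p_{ij} > 0$, then groups mutually reachable states:

```python
import numpy as np

def communicating_classes(P):
    """Group the states of a chain into communicating classes.

    State j is accessible from i iff j is reachable from i in the graph
    with an edge i -> j whenever p_ij > 0 (including the 0-step case i = j).
    """
    n = len(P)
    reach = (np.asarray(P) > 0) | np.eye(n, dtype=bool)
    for k in range(n):  # Floyd-Warshall-style transitive closure
        reach |= reach[:, [k]] & reach[[k], :]
    classes, seen = [], set()
    for i in range(n):
        if i not in seen:
            cls = {j for j in range(n) if reach[i, j] and reach[j, i]}
            classes.append(sorted(cls))
            seen |= cls
    return classes

# The irreducible 4-state example below (indices 0-3 stand for states 1-4):
P = np.array([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0.5, 0, 0.5, 0]])
print(communicating_classes(P))   # [[0, 1, 2, 3]] -- a single class
```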
Example:
Consider a Markov chain with one-step transition probability matrix (states $0, 1, 2, \dots, N-1, N$)
$$P = \begin{bmatrix} 1 & 0 & 0 & \cdots & & 0 \\ q & r & p & 0 & \cdots & 0 \\ 0 & q & r & p & \cdots & 0 \\ & & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & q & r & p \\ 0 & & \cdots & & 0 & 1 \end{bmatrix}, \qquad q + r + p = 1, \quad 0 < q, r, p < 1.$$
From the state transition diagram, we see that the chain has three classes, $\{0\}$, $\{N\}$, and $\{1, 2, \dots, N-1\}$; $\{0\}$ and $\{N\}$ are closed classes.
• Example:
Consider a Markov chain with transition probability matrix (states 1, 2, 3, 4)
$$P = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0.5 & 0 & 0.5 & 0 \end{bmatrix}$$
From the state transition diagram, we see that all states communicate. Therefore it is an irreducible chain.
Ergodic Markov chain
• Definition
If for each state $j$ and for every starting state $i \in S$ the transition probability satisfies
$$\lim_{k \to \infty} p_{ij}^{(k)} = \pi_j > 0,$$
independent of the starting state $i$, then the Markov chain is ergodic.
• Equivalently, as $k \to \infty$,
$$\lim_{k \to \infty} P^{(k)} = \lim_{k \to \infty} \begin{bmatrix} p_{00}^{(k)} & p_{01}^{(k)} & \cdots & p_{0m}^{(k)} \\ p_{10}^{(k)} & p_{11}^{(k)} & \cdots & p_{1m}^{(k)} \\ \vdots & \vdots & & \vdots \\ p_{m0}^{(k)} & p_{m1}^{(k)} & \cdots & p_{mm}^{(k)} \end{bmatrix} = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_m \\ \pi_0 & \pi_1 & \cdots & \pi_m \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_m \end{bmatrix}$$
with $0 < \pi_i < 1$ for all $i$.
• Theorem:
Suppose the matrix $P$ satisfies the following conditions:
(1) All elements satisfy $0 \le p_{ij} \le 1$ for all $i, j$;
(2) The sum of all elements in each row is 1, i.e. $\sum_{j=0}^{m} p_{ij} = 1$ for all $i$;
(3) We can find an integer $n$ such that every element of the matrix $P^n$ is greater than zero, i.e. $p_{ij}^{(n)} > 0$ for all $i, j$.
Then all the rows become equal to each other as the power $k$ tends to infinity:
$$\lim_{k \to \infty} P^k = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_m \\ \pi_0 & \pi_1 & \cdots & \pi_m \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_m \end{bmatrix}, \qquad 0 < \pi_i < 1.$$
• Several examples:
Example 001:
$$P = \begin{bmatrix} 0.10 & 0.30 & 0.60 \\ 0.45 & 0.35 & 0.20 \\ 0.55 & 0.10 & 0.35 \end{bmatrix}, \qquad P^{16} = \begin{bmatrix} 0.3632 & 0.2302 & 0.4063 \\ 0.3632 & 0.2302 & 0.4063 \\ 0.3632 & 0.2302 & 0.4063 \end{bmatrix}$$
• Example 002:
$$P = \begin{bmatrix} 0.35 & 0.23 & 0.17 & 0.25 \\ 0.13 & 0.20 & 0.33 & 0.34 \\ 0.05 & 0.55 & 0.31 & 0.09 \\ 0.41 & 0.19 & 0.02 & 0.38 \end{bmatrix}$$
$$P^{8} = \begin{bmatrix} 0.2463 & 0.2746 & 0.2001 & 0.2790 \\ 0.2462 & 0.2747 & 0.2001 & 0.2790 \\ 0.2461 & 0.2747 & 0.2002 & 0.2789 \\ 0.2464 & 0.2745 & 0.2000 & 0.2790 \end{bmatrix}, \qquad P^{16} = \begin{bmatrix} 0.2463 & 0.2746 & 0.2001 & 0.2790 \\ 0.2463 & 0.2746 & 0.2001 & 0.2790 \\ 0.2463 & 0.2746 & 0.2001 & 0.2790 \\ 0.2463 & 0.2746 & 0.2001 & 0.2790 \end{bmatrix}$$
• Example 003:
$$P = \begin{bmatrix} 0.3 & 0.4 & 0 & 0.3 \\ 0 & 0.2 & 0.6 & 0.2 \\ 0.1 & 0 & 0 & 0.9 \\ 0 & 0.4 & 0.6 & 0 \end{bmatrix}, \qquad P^{2} = \begin{bmatrix} 0.09 & 0.32 & 0.42 & 0.17 \\ 0.06 & 0.12 & 0.24 & 0.58 \\ 0.03 & 0.40 & 0.54 & 0.03 \\ 0.06 & 0.08 & 0.24 & 0.62 \end{bmatrix}$$
$$P^{32} = \begin{bmatrix} 0.0508 & 0.2147 & 0.3559 & 0.3785 \\ 0.0508 & 0.2147 & 0.3559 & 0.3785 \\ 0.0508 & 0.2147 & 0.3559 & 0.3785 \\ 0.0508 & 0.2147 & 0.3559 & 0.3785 \end{bmatrix}$$
• Example 1:
The following is a transition probability matrix (states 1, 2, 3, 4):
$$P = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0.5 & 0 & 0.5 & 0 \end{bmatrix}$$
This Markov chain is not ergodic.
• Showing:
The even powers of $P$ converge to
$$\lim_{l \to \infty} P^{2l} = \begin{bmatrix} 0.3333 & 0 & 0.6667 & 0 \\ 0 & 0.3333 & 0 & 0.6667 \\ 0.3333 & 0 & 0.6667 & 0 \\ 0 & 0.3333 & 0 & 0.6667 \end{bmatrix}$$
whose rows are not all equal (and the odd powers converge to a different matrix), so $\lim_{l \to \infty} p_{ij}^{(l)}$ is not independent of the starting state $i$. It does not satisfy the definition of ergodicity.
• Example 2:
The following is a transition probability matrix (states 1, 2, 3, 4):
$$P = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0 & 0.5 \end{bmatrix}$$
This Markov chain is ergodic.
• Showing:
$$P^{3} = \begin{bmatrix} 0.125 & 0.375 & 0.375 & 0.125 \\ 0.125 & 0.125 & 0.375 & 0.375 \\ 0.375 & 0.125 & 0.125 & 0.375 \\ 0.375 & 0.375 & 0.125 & 0.125 \end{bmatrix}$$
$$\lim_{l \to \infty} P^{(l)} = \lim_{l \to \infty} P^{l} = \begin{bmatrix} 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \end{bmatrix}$$
It satisfies the definition of ergodicity.


• Example 3:
The following is a transition probability matrix (states 1, 2, 3, 4):
$$P = \begin{bmatrix} 0.25 & 0.15 & 0 & 0.6 \\ 0 & 0.5 & 0.5 & 0 \\ 0.3 & 0.2 & 0.1 & 0.4 \\ 0 & 0.3 & 0.7 & 0 \end{bmatrix}$$
This Markov chain is ergodic.
• Showing:
$$P^{2} = \begin{bmatrix} 0.0625 & 0.2925 & 0.4950 & 0.1500 \\ 0.1500 & 0.3500 & 0.3000 & 0.2000 \\ 0.1050 & 0.2850 & 0.3900 & 0.2200 \\ 0.2100 & 0.2900 & 0.2200 & 0.2800 \end{bmatrix}$$
$$\lim_{l \to \infty} P^{(l)} = \lim_{l \to \infty} P^{l} = \begin{bmatrix} 0.13587 & 0.30707 & 0.33967 & 0.21739 \\ 0.13587 & 0.30707 & 0.33967 & 0.21739 \\ 0.13587 & 0.30707 & 0.33967 & 0.21739 \\ 0.13587 & 0.30707 & 0.33967 & 0.21739 \end{bmatrix}$$
It satisfies the definition of ergodicity.


If the Markov chain is ergodic, then
$$\lim_{n \to \infty} p_{ij}^{(n)} = \pi_j.$$
The value $\pi_j$ is determined by the state $j$ alone, independent of the starting state $i$.
The limiting distribution $\{\pi_j,\ j = 0, 1, 2, \dots, M\}$ of an ergodic chain is the unique nonnegative solution of the following equations:
$$[\pi_0, \pi_1, \dots, \pi_M] = [\pi_0, \pi_1, \dots, \pi_M] \cdot P, \qquad \sum_{j=0}^{M} \pi_j = 1,$$
where $P$ is the one-step transition probability matrix.
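In practice the system $\pi = \pi P$, $\sum_j \pi_j = 1$ can be solved as a linear system. A sketch (ours): rewrite $\pi (P - I) = 0$ as $(P^{T} - I)\,\pi^{T} = 0$, replace one redundant equation by the normalization, and solve.

```python
import numpy as np

def limiting_distribution(P):
    """Solve pi = pi @ P together with sum(pi) = 1 for an ergodic chain."""
    m = len(P)
    A = P.T - np.eye(m)   # (P^T - I) pi = 0 has a one-dimensional null space
    A[-1, :] = 1.0        # replace the last (redundant) row by sum(pi) = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# The ergodic 4-state chain solved by hand later in these notes.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.0, 0.0, 0.5]])
print(limiting_distribution(P))   # [0.25 0.25 0.25 0.25]
```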


Showing:

X(0)      i0     i1     …     iM
P{X(0)}   p0     p1     …     pM

Xn        i0     i1     …     iM
P{Xn}     p'0    p'1    …     p'M

Xn+1      i0     i1     …     iM
P{Xn+1}   p''0   p''1   …     p''M

$$S_0 = [p_0, p_1, \dots, p_M], \qquad p_0 + p_1 + \cdots + p_M = 1$$
$$S_n = [p'_0, p'_1, \dots, p'_M] = [p_0, p_1, \dots, p_M] \cdot P^{(n)}$$
$$S_{n+1} = [p''_0, p''_1, \dots, p''_M] = [p_0, p_1, \dots, p_M] \cdot P^{(n+1)}$$
If the Markov chain is ergodic, then as $n$ tends to infinity,
$$\lim_{n \to \infty} P^{(n)} = \lim_{n \to \infty} P^{n} = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \\ \pi_0 & \pi_1 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix}$$
$$\lim_{n \to \infty} P^{(n+1)} = \lim_{n \to \infty} P^{n+1} = \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \\ \pi_0 & \pi_1 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix}, \qquad 0 < \pi_j < 1.$$

Hence, as $n \to \infty$,
$$S_n = [p_0, p_1, \dots, p_M] \cdot P^{(n)} \longrightarrow [p_0, p_1, \dots, p_M] \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \\ \pi_0 & \pi_1 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix} = [\pi_0, \pi_1, \dots, \pi_M],$$
because each column of the limit matrix is constant and $p_0 + p_1 + \cdots + p_M = 1$. Likewise,
$$S_{n+1} = [p_0, p_1, \dots, p_M] \cdot P^{(n+1)} \longrightarrow [p_0, p_1, \dots, p_M] \begin{bmatrix} \pi_0 & \pi_1 & \cdots & \pi_M \\ \pi_0 & \pi_1 & \cdots & \pi_M \\ \vdots & \vdots & & \vdots \\ \pi_0 & \pi_1 & \cdots & \pi_M \end{bmatrix} = [\pi_0, \pi_1, \dots, \pi_M].$$
And since $S_{n+1} = S_n \cdot P$, we obtain the following equation:
$$[\pi_0, \pi_1, \dots, \pi_M] = [\pi_0, \pi_1, \dots, \pi_M] \cdot P,$$
where $P$ is the one-step transition probability matrix.


• Example:
Consider a Markov chain with transition probability matrix (states 1, 2, 3, 4)
$$P = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0.5 & 0 & 0.5 & 0 \end{bmatrix}$$
Find the limiting distribution.
Solution:
This Markov chain is not ergodic (see Example 1 above), so it has no limiting distribution: the even powers of $P$ converge to
$$\lim_{l \to \infty} P^{2l} = \begin{bmatrix} 0.3333 & 0 & 0.6667 & 0 \\ 0 & 0.3333 & 0 & 0.6667 \\ 0.3333 & 0 & 0.6667 & 0 \\ 0 & 0.3333 & 0 & 0.6667 \end{bmatrix}$$
whose rows are not all equal.
• Example:
Consider a Markov chain with one-step transition probability matrix
$$P = \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0 & 0.5 \end{bmatrix}$$
(1) Show that the Markov chain is ergodic;
(2) Find the limiting distribution.
(1) Showing: we can find $n = 3$ such that every element of the matrix $P^3$ is greater than zero, so the Markov chain is ergodic:
$$P^{3} = \begin{bmatrix} 0.125 & 0.375 & 0.375 & 0.125 \\ 0.125 & 0.125 & 0.375 & 0.375 \\ 0.375 & 0.125 & 0.125 & 0.375 \\ 0.375 & 0.375 & 0.125 & 0.125 \end{bmatrix}, \qquad \lim_{l \to \infty} P^{(l)} = \lim_{l \to \infty} P^{l} = \begin{bmatrix} 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \\ 0.25 & 0.25 & 0.25 & 0.25 \end{bmatrix}$$
Solution:
(2) The Markov chain is ergodic, so it has a limiting distribution. Let the limiting distribution be $[\pi_1, \pi_2, \pi_3, \pi_4]$. The equations are:
$$[\pi_1, \pi_2, \pi_3, \pi_4] = [\pi_1, \pi_2, \pi_3, \pi_4] \cdot P = [\pi_1, \pi_2, \pi_3, \pi_4] \begin{bmatrix} 0.5 & 0.5 & 0 & 0 \\ 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0 & 0.5 \end{bmatrix}, \qquad \pi_1 + \pi_2 + \pi_3 + \pi_4 = 1.$$
It follows that:
$$\begin{cases} \pi_1 = 0.5\pi_1 + 0\pi_2 + 0\pi_3 + 0.5\pi_4 \\ \pi_2 = 0.5\pi_1 + 0.5\pi_2 + 0\pi_3 + 0\pi_4 \\ \pi_3 = 0\pi_1 + 0.5\pi_2 + 0.5\pi_3 + 0\pi_4 \\ \pi_4 = 0\pi_1 + 0\pi_2 + 0.5\pi_3 + 0.5\pi_4 \\ \pi_1 + \pi_2 + \pi_3 + \pi_4 = 1 \end{cases} \quad\Rightarrow\quad \pi_1 = 0.25,\ \pi_2 = 0.25,\ \pi_3 = 0.25,\ \pi_4 = 0.25.$$
• Example:
Consider a Markov chain with transition probability matrix
$$P = \begin{bmatrix} 0.1 & 0.1 & 0.2 & 0.2 & 0.4 & 0 & 0 \\ 0 & 0 & 0.5 & 0.5 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0.5 & 0 \\ 0 & 0 & 0 & 0 & 0.5 & 0 & 0.5 \\ 0 & 0 & 0 & 0 & 0 & 0.5 & 0.5 \end{bmatrix}$$
The state space is $\{1, 2, 3, 4, 5, 6, 7\}$.
(1) Classify the states.
(2) Find the limiting distribution.
Solution:
(1) From the state transition diagram, we can classify the state space into three sets:
$$C_1 = \{2, 3, 4\}, \qquad C_2 = \{5, 6, 7\}, \qquad C_3 = \{1\}.$$

(2) Consider the set $C_1 = \{2, 3, 4\}$. The transition probability matrix restricted to $C_1$ (rows and columns 2, 3, 4) is:
$$P_{C_1} = \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}$$
Let $\pi_2, \pi_3, \pi_4$ denote the limiting distribution of states 2, 3, 4 respectively. The equations are:
$$[\pi_2, \pi_3, \pi_4] = [\pi_2, \pi_3, \pi_4] \cdot P_{C_1} = [\pi_2, \pi_3, \pi_4] \begin{bmatrix} 0 & 0.5 & 0.5 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix}, \qquad \pi_2 + \pi_3 + \pi_4 = 1.$$
Continuing:
$$\begin{cases} \pi_2 = 0\pi_2 + 0\pi_3 + 1\pi_4 \\ \pi_3 = 0.5\pi_2 + 0\pi_3 + 0\pi_4 \\ \pi_4 = 0.5\pi_2 + 1\pi_3 + 0\pi_4 \\ \pi_2 + \pi_3 + \pi_4 = 1 \end{cases} \quad\Rightarrow\quad \pi_2 = \frac{2}{5},\ \pi_3 = \frac{1}{5},\ \pi_4 = \frac{2}{5}.$$
Consider the set $C_2 = \{5, 6, 7\}$. The transition probability matrix restricted to $C_2$ is:
$$P_{C_2} = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0.5 \end{bmatrix}$$
Let $\pi_5, \pi_6, \pi_7$ denote the limiting distribution of states 5, 6, 7 respectively. The equations are:
$$[\pi_5, \pi_6, \pi_7] = [\pi_5, \pi_6, \pi_7] \cdot P_{C_2} = [\pi_5, \pi_6, \pi_7] \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 0.5 & 0.5 \end{bmatrix}, \qquad \pi_5 + \pi_6 + \pi_7 = 1.$$
Continuing:
$$\begin{cases} \pi_5 = 0.5\pi_5 + 0.5\pi_6 + 0\pi_7 \\ \pi_6 = 0.5\pi_5 + 0\pi_6 + 0.5\pi_7 \\ \pi_7 = 0\pi_5 + 0.5\pi_6 + 0.5\pi_7 \\ \pi_5 + \pi_6 + \pi_7 = 1 \end{cases} \quad\Rightarrow\quad \pi_5 = \frac{1}{3},\ \pi_6 = \frac{1}{3},\ \pi_7 = \frac{1}{3}.$$
• Example:
$\{X(n),\ n = 0, 1, 2, \dots\}$ is a homogeneous discrete-time Markov chain, the state space is $I = \{i_0, i_1, i_2, i_3\}$, and the one-step transition probability matrix is
$$P = \begin{bmatrix} 0.3 & 0.3 & 0.4 & 0 \\ 0 & 0.3 & 0.2 & 0.5 \\ 0.5 & 0 & 0.1 & 0.4 \\ 0.7 & 0 & 0 & 0.3 \end{bmatrix}$$
The starting state $X(0)$ has the following probability distribution:

X(0)      i0    i1    i2    i3
P{X(0)}   0.1   0.4   0.3   0.2
(1) Show that the Markov chain is ergodic.
(2) As $n \to \infty$, find the probability distribution of $X(n)$ and describe its relationship to $X(0)$.
(3) Find the matrix $\lim_{l \to \infty} P^{(l)}$.
(4) Find the transition probabilities $\lim_{l \to \infty} p_{ij}^{(l)}$.
Solution: (1)
$$P^{(2)} = P^2 = \begin{bmatrix} 0.29 & 0.18 & 0.22 & 0.31 \\ 0.45 & 0.09 & 0.08 & 0.38 \\ 0.45 & 0.15 & 0.21 & 0.16 \\ 0.42 & 0.21 & 0.28 & 0.09 \end{bmatrix}$$
We can find $l = 2$ such that $p_{ij}^{(2)} > 0$ for all $i$ and $j$ in the state space $I$, so the Markov chain is ergodic.
• (2) Since the Markov chain is ergodic, it has a limiting distribution $[\pi_0, \pi_1, \pi_2, \pi_3]$, which satisfies the following equations:
$$[\pi_0, \pi_1, \pi_2, \pi_3] = [\pi_0, \pi_1, \pi_2, \pi_3] \cdot P = [\pi_0, \pi_1, \pi_2, \pi_3] \begin{bmatrix} 0.3 & 0.3 & 0.4 & 0 \\ 0 & 0.3 & 0.2 & 0.5 \\ 0.5 & 0 & 0.1 & 0.4 \\ 0.7 & 0 & 0 & 0.3 \end{bmatrix}, \qquad \pi_0 + \pi_1 + \pi_2 + \pi_3 = 1,$$
where $P$ is the one-step transition probability matrix. Solving gives
$$\pi_0 = 0.3872,\ \pi_1 = 0.1659,\ \pi_2 = 0.2090,\ \pi_3 = 0.2379.$$


Hence the limiting probability distribution of $X(n)$ as $n \to \infty$ is:

X(n), n → ∞   i0       i1       i2       i3
probability    0.3872   0.1659   0.2090   0.2379

Whatever the probability distribution of $X(0)$ is, the distribution of $X(n)$ must approach the limiting distribution
$$[\pi_0, \pi_1, \pi_2, \pi_3] = [0.3872,\ 0.1659,\ 0.2090,\ 0.2379].$$

(3) Since the Markov chain is ergodic,
$$\lim_{l \to \infty} P^{(l)} = \begin{bmatrix} \pi_0 & \pi_1 & \pi_2 & \pi_3 \\ \pi_0 & \pi_1 & \pi_2 & \pi_3 \\ \pi_0 & \pi_1 & \pi_2 & \pi_3 \\ \pi_0 & \pi_1 & \pi_2 & \pi_3 \end{bmatrix} = \begin{bmatrix} 0.3872 & 0.1659 & 0.2090 & 0.2379 \\ 0.3872 & 0.1659 & 0.2090 & 0.2379 \\ 0.3872 & 0.1659 & 0.2090 & 0.2379 \\ 0.3872 & 0.1659 & 0.2090 & 0.2379 \end{bmatrix}$$
(4) For all $i$ in the state space $I = \{i_0, i_1, i_2, i_3\}$,
$$\lim_{l \to \infty} p_{ij}^{(l)} = \begin{cases} \pi_0 = 0.3872, & \text{if } j = i_0 \\ \pi_1 = 0.1659, & \text{if } j = i_1 \\ \pi_2 = 0.2090, & \text{if } j = i_2 \\ \pi_3 = 0.2379, & \text{if } j = i_3 \end{cases}$$
– Assignment Q.1:
$\{X(n),\ n = 0, 1, 2, \dots\}$ is a homogeneous Markov chain, $S = \{i_0, i_1, i_2\}$ is the state space, and $X(n) \in S$ for all $n$. The one-step transition probability matrix is
$$P = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0.5 \end{bmatrix}$$
The distribution of $X(0)$ is given by the following table:

X(0)      i0   i1   i2
P{X(0)}   0    1    0

(1) Find the distribution of $X(2)$;
(2) Find the joint probability $P\{X(2) = i_0, X(3) = i_1, X(4) = i_2\}$.
• Q.2:
$\{X(n),\ n = 0, 1, 2, \dots\}$ is a homogeneous discrete-time Markov chain, the state space is $I = \{i_0, i_1, i_2\}$, and the one-step transition probability matrix is
$$P = \begin{bmatrix} 0.3 & 0.4 & 0.3 \\ 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0.5 \end{bmatrix}$$
The starting state $X(0)$ has the following probability distribution:

X(0)      i0    i1    i2
P{X(0)}   0.4   0.4   0.2

(1) Show that the Markov chain is ergodic.
(2) As $n \to \infty$, find the probability distribution of $X(n)$ and describe its relationship to $X(0)$.
(3) Find the matrix $\lim_{l \to \infty} P^{(l)}$.
(4) Find the transition probabilities $\lim_{l \to \infty} p_{ij}^{(l)}$.
