
Discrete Time Markov Chain

Biman Chakraborty

2 June 2020

1. To simulate the states of a Markov chain whose transition probability matrix is given:

1. Auxiliary function to simulate from a discrete distribution with probability vector p = (p1, p2, ..., pn) and state space S = {1, 2, ..., n}.
sim=function(p){
  s=1:length(p)                         # state space {1, 2, ..., n}
  x=sample(s,1,replace=TRUE, prob=p)    # draw one state with probabilities p
  return(x)
}
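
A quick way to check sim() (a sketch; the probability vector p_test and the 10,000 replications are arbitrary choices) is to draw many values and compare the empirical frequencies with the target probabilities:
# The table of relative frequencies should be close to 0.2, 0.3, 0.5.
p_test=c(0.2,0.3,0.5)
draws=replicate(10000, sim(p_test))
table(draws)/length(draws)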

2. Simulation from a Markov chain whose transition matrix and initial distribution are given
sim_mc=function(P,p0,n){
  # P : transition probability matrix (TPM)
  # p0: initial distribution
  # n : sample size (length of the simulated path)
  x=rep(0,n)
  # Simulate the initial state from p0
  x[1]=sim(p0)
  # Simulate the remaining states: given x[i-1], draw from row x[i-1] of P
  for (i in 2:n)
    x[i]=sim(P[x[i-1],])
  return(x)
}

3. Illustration of the programme


p=matrix(c(0.2,0.5,0.3,0.5,0,0.5,0,1,0),3,3,byrow=T)
p

[,1] [,2] [,3]
[1,] 0.2 0.5 0.3
[2,] 0.5 0.0 0.5
[3,] 0.0 1.0 0.0
p0=c(0.2,0.3,0.5)
p0

[1] 0.2 0.3 0.5


s=sim_mc(p,p0,100)
s

 [1] 1 2 3 2 1 1 3 2 1 2 3 2 3 2 3 2 3 2 1 2 3 2 1 1 2 3 2 3 2 3 2 1 2 1 3 2 1
[38] 2 1 1 2 1 3 2 3 2 1 2 3 2 3 2 1 2 3 2 3 2 1 3 2 3 2 1 3 2 3 2 3 2 1 1 3 2
[75] 1 2 1 2 3 2 3 2 3 2 1 2 1 2 3 2 1 1 1 3 2 1 2 1 2 1

2. To estimate the transition probability matrix when we have a realization of the Markov chain

1. Auxiliary function to estimate probabilities
prob=function(s,n){
  y=rep(0,n)
  for(i in 1:n)
    y[i]=sum(s==i)/length(s)   # relative frequency of state i in s
  return(y)
}
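
For example, applied to the realization s simulated above, prob() gives the empirical frequencies of the three states (a small illustrative call):
# Empirical frequencies of states 1, 2 and 3 in the simulated realization s.
prob(s,3)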

2. To estimate 1-step transition probability matrix


tpm=function(x){
  n=length(unique(x))           # number of distinct states observed
  p=matrix(0,n,n)
  for(i in 1:n){
    y=NULL
    c=0
    for (j in 1:(length(x)-1)){
      if(x[j]==i){              # record the state visited immediately after state i
        c=c+1
        y[c]=x[j+1]
      }
    }
    p[i,]=prob(y,n)             # row i = empirical distribution of the next state
  }
  return(p)
}

tpm(s)

[,1] [,2] [,3]
[1,] 0.2142857 0.5357143 0.2500000
[2,] 0.5116279 0.0000000 0.4883721
[3,] 0.0000000 1.0000000 0.0000000
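
The estimates get closer to the true transition probabilities as the realization gets longer. A rough sketch of this (the name s_long and the length 10000 are arbitrary choices):
# With a longer realization, tpm() should be close to the true matrix p.
s_long=sim_mc(p,p0,10000)
round(tpm(s_long),3)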

3. Function to calculate the n-step transition matrix


mp=function(P,n){
  temp=P
  for (i in 2:n){
    temp=temp%*%P               # repeated multiplication gives P^n
  }
  return(temp)
}

Note: This function works only when n is greater than or equal to 2.
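
If a variant that also handles n = 1 (or even n = 0) is wanted, one possible sketch (the name mp2 is arbitrary) starts from the identity matrix:
# Variant of mp() that also works for n = 0 and n = 1.
mp2=function(P,n){
  temp=diag(nrow(P))            # identity matrix, i.e. the 0-step TPM
  for (i in seq_len(n))
    temp=temp%*%P
  return(temp)
}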

Example 1:
P=matrix(c(0.5,0.5,0.7,0.3), 2,byrow=T)
P

[,1] [,2]
[1,] 0.5 0.5
[2,] 0.7 0.3
mp(P,2)

[,1] [,2]
[1,] 0.60 0.40
[2,] 0.56 0.44
mp(P,3)

[,1] [,2]
[1,] 0.580 0.420
[2,] 0.588 0.412
mp(P,10)

[,1] [,2]
[1,] 0.5833334 0.4166666
[2,] 0.5833333 0.4166667
mp(P,20)

[,1] [,2]
[1,] 0.5833333 0.4166667
[2,] 0.5833333 0.4166667

4. Limiting distribution of a Markov chain


# Function to calculate the limiting distribution:
ld=function(P){
  # Find the left eigenvector of P corresponding to eigenvalue 1.
  ev1=eigen(t(P))         # left eigenvectors of P are eigenvectors of t(P)
  pi1=ev1$vectors[,1]     # first column corresponds to the eigenvalue of largest modulus, here 1
  # For a p.m.f. the entries sum to 1, so divide by the total.
  pi=pi1/sum(pi1)
  # The limit of the unconditional probabilities [P(X_n = i), i in S] is the limiting distribution.
  return(round(pi,4))
}
ld(P)

[1] 0.5833 0.4167


round(mp(P,20),4)

[,1] [,2]
[1,] 0.5833 0.4167
[2,] 0.5833 0.4167
Note that every row of the above matrix is the same as the limiting distribution (π).
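
Since π is a stationary distribution, it must satisfy πP = π. A quick sketch of this check for the P of Example 1 (pi_hat is just a name chosen here):
# Stationarity check: the row vector pi_hat should be reproduced by pi_hat %*% P.
pi_hat=ld(P)
round(pi_hat %*% P, 4)        # should agree with pi_hat up to rounding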

Example 2:
P=matrix(c(0.7, 0.3,
0, 1), 2,byrow=T)
P

[,1] [,2]
[1,] 0.7 0.3
[2,] 0.0 1.0
mp(P,2)

[,1] [,2]
[1,] 0.49 0.51
[2,] 0.00 1.00
mp(P,5)

[,1] [,2]
[1,] 0.16807 0.83193
[2,] 0.00000 1.00000
mp(P,10)

[,1] [,2]
[1,] 0.02824752 0.9717525
[2,] 0.00000000 1.0000000
ld(P)

[1] 0 1
round(mp(P,30),4)

[,1] [,2]
[1,] 0 1
[2,] 0 1

Example 3:
P=matrix(c(0, 1/5, 3/5, 1/5,
1/4, 1/4, 1/4, 1/4,
1, 0, 0, 0,
0, 1/2, 1/2, 0), 4,byrow=T)
mp(P,10)

[,1] [,2] [,3] [,4]
[1,] 0.3855592 0.1782264 0.3201518 0.1160626
[2,] 0.3705777 0.1792873 0.3300468 0.1200882
[3,] 0.3548134 0.1803996 0.3404593 0.1243277
[4,] 0.3885240 0.1780126 0.3181924 0.1152710

ld(P)

[1] 0.3731 0.1791 0.3284 0.1194


round(mp(P,30),4)

[,1] [,2] [,3] [,4]
[1,] 0.3731 0.1791 0.3284 0.1194
[2,] 0.3731 0.1791 0.3284 0.1194
[3,] 0.3731 0.1791 0.3284 0.1194
[4,] 0.3731 0.1791 0.3284 0.1194

Example 4:
P=matrix(c(0, 1, 0,
1/2, 0, 1/2,
0, 1, 0), 3,byrow=T)
mp(P,20)

[,1] [,2] [,3]
[1,] 0.5 0 0.5
[2,] 0.0 1 0.0
[3,] 0.5 0 0.5
mp(P,21)

[,1] [,2] [,3]
[1,] 0.0 1 0.0
[2,] 0.5 0 0.5
[3,] 0.0 1 0.0
ld(P)

[1] 2.451449e+15 -4.902898e+15 2.451449e+15


round(mp(P,30),4)

[,1] [,2] [,3]
[1,] 0.5 0 0.5
[2,] 0.0 1 0.0
[3,] 0.5 0 0.5
round(mp(P,31),4)

[,1] [,2] [,3]
[1,] 0.0 1 0.0
[2,] 0.5 0 0.5
[3,] 0.0 1 0.0
Observations:
• The n-step transition probabilities do not become free of n: they keep alternating as n grows.
• The rows of the n-step transition matrices are not all identical.
• What is the period of this Markov chain? (A quick check is sketched below.)
• The limiting distribution does not exist!
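
A rough way to check the period of state 1 for this P (a sketch; the names Pn and ret are chosen here, and ten steps are enough to see the pattern) is to record the step counts at which a return to state 1 has positive probability and take their greatest common divisor:
# Steps at which a return to state 1 is possible; their gcd is the period of state 1.
Pn=diag(3)
ret=c()
for (n in 1:10){
  Pn=Pn%*%P
  if (Pn[1,1]>0) ret=c(ret,n)
}
ret                           # 2, 4, 6, 8, 10, so the period is 2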

Example 5:
P=matrix(c(1, 0, 0, 0,
1/2, 0, 1/2, 0,
1/3, 0, 0, 2/3,
0, 0, 0, 1), 4,byrow=T)
mp(P,10)

[,1] [,2] [,3] [,4]
[1,] 1.0000000 0 0 0.0000000
[2,] 0.6666667 0 0 0.3333333
[3,] 0.3333333 0 0 0.6666667
[4,] 0.0000000 0 0 1.0000000
ld(P)

[1] 1 0 0 0
round(mp(P,30),4)

[,1] [,2] [,3] [,4]
[1,] 1.0000 0 0 0.0000
[2,] 0.6667 0 0 0.3333
[3,] 0.3333 0 0 0.6667
[4,] 0.0000 0 0 1.0000

Example 6:
P=matrix(c(1/3, 2/3, 0, 0, 0,
3/4, 1/4, 0, 0, 0,
0, 0, 1/8, 1/4, 5/8,
0, 0, 0, 1/2, 1/2,
0, 0, 1/3, 0, 2/3), 5,byrow=T)
mp(P,10)

[,1] [,2] [,3] [,4] [,5]
[1,] 0.5294860 0.4705140 0.0000000 0.0000000 0.0000000
[2,] 0.5293283 0.4706717 0.0000000 0.0000000 0.0000000
[3,] 0.0000000 0.0000000 0.2424192 0.1212205 0.6363602
[4,] 0.0000000 0.0000000 0.2424065 0.1212419 0.6363516
[5,] 0.0000000 0.0000000 0.2424295 0.1212032 0.6363672
ld(P)

[1] 0.5294 0.4706 0.0000 0.0000 0.0000


round(mp(P,30),4)

[,1] [,2] [,3] [,4] [,5]
[1,] 0.5294 0.4706 0.0000 0.0000 0.0000
[2,] 0.5294 0.4706 0.0000 0.0000 0.0000
[3,] 0.0000 0.0000 0.2424 0.1212 0.6364
[4,] 0.0000 0.0000 0.2424 0.1212 0.6364
[5,] 0.0000 0.0000 0.2424 0.1212 0.6364
Observations:

• The first two rows are identical, and the last three rows are identical.
• The limiting transition probabilities are not independent of the initial state.
• The Markov chain is not irreducible (why?). A reachability check is sketched below.
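
One way to see this (a sketch; five steps are enough for five states) is a reachability check: an entry of (I + P)^5 is positive exactly when the column state can be reached from the row state, and here no state in {3, 4, 5} is reachable from {1, 2}, nor the other way round.
# Reachability check: entry (i, j) is TRUE when state j can be reached from state i.
mp(diag(5)+P, 5) > 0
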
Remarks: Consider an irreducible, positive recurrent Markov chain with unique stationary distribution π. If we let N_i(n) = Σ_{k=0}^{n-1} 1{X_k = i} denote the number of visits to state i before time n, then N_i(n)/n converges in probability to π_i.
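
As a small sketch of this remark (reusing sim_mc() and prob() from above; the name P4, the uniform initial distribution and the chain length 10000 are arbitrary choices), consider again the periodic chain of Example 4: although its n-step transition matrices do not converge, the visit frequencies N_i(n)/n settle near its stationary distribution (0.25, 0.5, 0.25).
# Visit frequencies for the periodic chain of Example 4.
P4=matrix(c(0, 1, 0,
            1/2, 0, 1/2,
            0, 1, 0), 3,byrow=T)
x=sim_mc(P4, c(1/3,1/3,1/3), 10000)
prob(x,3)                     # should be close to (0.25, 0.5, 0.25)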

5. Gambler’s Ruin Problem


a=0.5
b=1-a
p=matrix(c(1, 0, 0, 0, 0,
a,0, b, 0, 0,
0,a, 0, b, 0,
0, 0, a, 0, b,
0, 0, 0, 0, 1),5,byrow=T)
p

[,1] [,2] [,3] [,4] [,5]
[1,] 1.0 0.0 0.0 0.0 0.0
[2,] 0.5 0.0 0.5 0.0 0.0
[3,] 0.0 0.5 0.0 0.5 0.0
[4,] 0.0 0.0 0.5 0.0 0.5
[5,] 0.0 0.0 0.0 0.0 1.0
round(mp(p,100),2)

[,1] [,2] [,3] [,4] [,5]
[1,] 1.00 0 0 0 0.00
[2,] 0.75 0 0 0 0.25
[3,] 0.50 0 0 0 0.50
[4,] 0.25 0 0 0 0.75
[5,] 0.00 0 0 0 1.00
eigen(t(p))

eigen() decomposition
$values
[1] 1.0000000 1.0000000 0.7071068 -0.7071068 0.0000000

$vectors
[,1] [,2] [,3] [,4] [,5]
[1,] 1 0 -0.5445261 0.1434034 -0.3162278
[2,] 0 0 0.3189760 -0.4896098 0.6324555
[3,] 0 0 0.4511002 0.6924128 0.0000000
[4,] 0 0 0.3189760 -0.4896098 -0.6324555
[5,] 0 1 -0.5445261 0.1434034 0.3162278
ld(p)

[1] 1 0 0 0 0
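
The absorption probabilities visible in round(mp(p,100),2) can also be obtained directly from the TPM. A small sketch (the names Q, R and B are only introduced here): take the block Q of transitions among the transient states 2-4 and the block R of transitions into the absorbing states 1 and 5, and solve (I - Q)B = R; row i of B then gives the probabilities of ultimate ruin and ultimate win when starting from transient state i.
# Absorption probabilities for the transient states 2, 3 and 4.
Q=p[2:4,2:4]                  # transitions among the transient states
R=p[2:4,c(1,5)]               # transitions into the absorbing states 1 and 5
B=solve(diag(3)-Q, R)         # solves (I - Q) B = R
round(B,2)                    # matches columns 1 and 5, rows 2-4 of mp(p,100)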
