Biman Chakraborty
2 June 2020
2. Simulation from a Markov chain whose transition matrix and initial distribution are given
sim_mc=function(P,p0,n){
# P : TPM (transition probability matrix)
# p0: initial distribution
# n : sample size
x=rep(0,n)
############# to simulate the initial state ########
x[1]=sim(p0)
############ to simulate the other states ###########
for (i in 2:n)
x[i]=sim(P[x[i-1],])
return(x)
}
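The helper `sim()` called above is not defined in this excerpt (it appears in an earlier section). A minimal sketch, under the assumption that it draws a single state, labelled 1, 2, ..., from a probability vector p:

```r
# Hypothetical sketch of the helper sim(): draw one state (labelled
# 1,2,...) from the discrete distribution p via the inverse-CDF method.
sim=function(p){
u=runif(1)                  # uniform random number in (0,1)
min(which(u<=cumsum(p)))    # first state whose cumulative prob. reaches u
}
```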
[1] 1 2 3 2 1 1 3 2 1 2 3 2 3 2 3 2 3 2 1 2 3 2 1 1 2 3 2 3 2 3 2 1 2 1 3 2 1
[38] 2 1 1 2 1 3 2 3 2 1 2 3 2 3 2 1 2 3 2 3 2 1 3 2 3 2 1 3 2 3 2 3 2 1 1 3 2
[75] 1 2 1 2 3 2 3 2 3 2 1 2 1 2 3 2 1 1 1 3 2 1 2 1 2 1
tpm(s) # estimate of the TPM from the simulated sequence s (output not shown in this excerpt)
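The examples below call `mp(P, n)`, the n-step transition matrix Pⁿ; its definition is not included in this excerpt. A minimal sketch by repeated matrix multiplication (assuming n ≥ 1):

```r
# Hypothetical sketch of mp(): the n-step transition probability
# matrix P^n, computed by repeated matrix multiplication.
mp=function(P,n){
Q=P
if (n>1) for (i in 2:n) Q=Q%*%P
return(Q)
}
```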
Example 1:
P=matrix(c(0.5,0.5,0.7,0.3), 2,byrow=T)
P
[,1] [,2]
[1,] 0.5 0.5
[2,] 0.7 0.3
mp(P,2)
[,1] [,2]
[1,] 0.60 0.40
[2,] 0.56 0.44
mp(P,3)
[,1] [,2]
[1,] 0.580 0.420
[2,] 0.588 0.412
mp(P,10)
[,1] [,2]
[1,] 0.5833334 0.4166666
[2,] 0.5833333 0.4166667
mp(P,20)
[,1] [,2]
[1,] 0.5833333 0.4166667
[2,] 0.5833333 0.4166667
ld=function(P){ # head of this function was lost at a page break; reconstructed
ev=eigen(t(P)) # left eigenvectors of P
pi1=Re(ev$vectors[,1]) # eigenvector for eigenvalue 1
pi=pi1/(sum(pi1))
return(round(pi,4))
}
ld(P)
[,1] [,2]
[1,] 0.5833 0.4167
[2,] 0.5833 0.4167
Note that every row of the above matrix is the same as the limiting distribution (π).
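The limit can also be obtained analytically by solving πP = π together with π₁ + π₂ = 1; for Example 1 this gives π = (7/12, 5/12) ≈ (0.5833, 0.4167), matching the rows above. A quick numerical check (`pi0` is used as the variable name to avoid masking R's built-in `pi`):

```r
# Verify that pi0 = (7/12, 5/12) is stationary for Example 1's P
P=matrix(c(0.5,0.5,0.7,0.3), 2, byrow=TRUE)
pi0=c(7/12, 5/12)
max(abs(pi0%*%P - pi0))  # numerically zero
```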
Example 2:
P=matrix(c(0.7, 0.3,
0, 1), 2,byrow=T)
P
[,1] [,2]
[1,] 0.7 0.3
[2,] 0.0 1.0
mp(P,2)
[,1] [,2]
[1,] 0.49 0.51
[2,] 0.00 1.00
mp(P,5)
[,1] [,2]
[1,] 0.16807 0.83193
[2,] 0.00000 1.00000
mp(P,10)
[,1] [,2]
[1,] 0.02824752 0.9717525
[2,] 0.00000000 1.0000000
ld(P)
[1] 0 1
round(mp(P,30),4)
[,1] [,2]
[1,] 0 1
[2,] 0 1
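In Example 2, state 2 is absorbing (P[2,2] = 1), so the chain is eventually trapped there regardless of the starting state; in fact the probability of still being in state 1 after n steps is exactly 0.7ⁿ, which matches the outputs above (e.g. 0.7⁵ = 0.16807). A quick check:

```r
# For Example 2, P^n[1,1] = 0.7^n: the only way to be in state 1
# after n steps is to have stayed there the whole time.
P=matrix(c(0.7,0.3,0,1), 2, byrow=TRUE)
Q=P
for (i in 2:5) Q=Q%*%P  # P^5 by repeated multiplication
Q[1,1]                  # 0.16807 = 0.7^5
```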
Example 3:
P=matrix(c(0, 1/5, 3/5, 1/5,
1/4, 1/4, 1/4, 1/4,
1, 0, 0, 0,
0, 1/2, 1/2, 0), 4,byrow=T)
mp(P,10)
ld(P)
Example 4:
P=matrix(c(0, 1, 0,
1/2, 0, 1/2,
0, 1, 0), 3,byrow=T)
mp(P,20)
Example 5:
P=matrix(c(1, 0, 0, 0,
1/2, 0, 1/2, 0,
1/3, 0, 0, 2/3,
0, 0, 0, 1), 4,byrow=T)
mp(P,10)
[1] 1 0 0 0
round(mp(P,30),4)
Example 6:
P=matrix(c(1/3, 2/3, 0, 0, 0,
3/4, 1/4, 0, 0, 0,
0, 0, 1/8, 1/4, 5/8,
0, 0, 0, 1/2, 1/2,
0, 0, 1/3, 0, 2/3), 5,byrow=T)
mp(P,10)
• The first two rows are identical, and the last three rows are identical.
• The limiting transition probabilities are not independent of the initial state.
• The Markov chain is not irreducible (why?).
Remarks: Consider an irreducible, positive recurrent Markov chain with unique stationary distribution π. If we let N_i(n) = Σ_{k=0}^{n-1} 1_{X_k = i} denote the number of visits to state i before time n, then N_i(n)/n converges in probability to π_i.
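This remark can be checked empirically for the chain of Example 1 by simulating a long path and tabulating visit frequencies. A self-contained sketch, using `sample()` in place of the `sim()` helper:

```r
# Empirical check: the long-run fraction of time Example 1's chain
# spends in each state approaches pi = (7/12, 5/12).
set.seed(1)
P=matrix(c(0.5,0.5,0.7,0.3), 2, byrow=TRUE)
n=100000
x=rep(0,n)
x[1]=1
for (k in 2:n) x[k]=sample(1:2, 1, prob=P[x[k-1],])
table(x)/n  # close to (0.5833, 0.4167)
```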
eigen() decomposition
$values
[1] 1.0000000 1.0000000 0.7071068 -0.7071068 0.0000000
$vectors
[,1] [,2] [,3] [,4] [,5]
[1,] 1 0 -0.5445261 0.1434034 -0.3162278
[2,] 0 0 0.3189760 -0.4896098 0.6324555
[3,] 0 0 0.4511002 0.6924128 0.0000000
[4,] 0 0 0.3189760 -0.4896098 -0.6324555
[5,] 0 1 -0.5445261 0.1434034 0.3162278
ld(p)
[1] 1 0 0 0 0