1 Markov Chain
(a) (Markov property) For all m ≥ 0, n ≥ 1 and all states x0 , x1 , · · · , xm , s ∈ S,
$$P(X_{m+n} = s \mid X_0 = x_0,\, X_1 = x_1,\, \cdots,\, X_m = x_m) = P(X_{m+n} = s \mid X_m = x_m). \tag{3}$$
(b) Without loss of generality, we can denote the elements of S as 1,
2, . . . , k, although in some examples we may use the original labeling
of the states to avoid confusion.
(c) The distribution vector of Xn , πn = (πn (1) , πn (2) , · · · , πn (k)),
is called the distribution at t = n and π0 the initial distribution of
the Markov chain.
From now on, all Markov chains are assumed homogeneous, with state space S = {1, 2, · · · , k}, transition matrix P ∈ R^{k×k} , and probability distributions πn ∈ R^k , n = 0, 1, 2, · · · , where πn (i) = P (Xn = i), for i = 1, 2, · · · , k.
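To make the convention concrete, here is a minimal NumPy sketch; the 3-state matrix is the one used in the example at the end of this section, stored column-stochastically so that column j holds the probabilities P (Xn+1 = i | Xn = j) and πn+1 = Pπn .

```python
import numpy as np

rng = np.random.default_rng(0)

# Column-stochastic convention: P[i, j] = P(X_{n+1} = i | X_n = j),
# so every column of P sums to 1 and pi_{n+1} = P @ pi_n.
P = np.array([[0.2, 0.0, 1.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.6, 0.0]])
assert np.allclose(P.sum(axis=0), 1.0)

pi0 = np.array([0.5, 0.5, 0.0])  # initial distribution pi_0
pi1 = P @ pi0                    # distribution at t = 1
print(pi1)                       # [0.1 0.6 0.3]

# One sample path of the chain (states coded 0, 1, 2 for 1, 2, 3).
x = rng.choice(3, p=pi0)
path = [x]
for _ in range(5):
    x = rng.choice(3, p=P[:, x])  # next state drawn from column x
    path.append(x)
print(path)
```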
Since the chain is homogeneous, for all m, n ≥ 0,
P (Xm+n = i | Xm = j) = P (Xn = i | X0 = j) .
The matrix P^(n) = (p_ij^(n)) ∈ R^{k×k} , where p_ij^(n) = P (Xn = i | X0 = j), is called the n-step transition matrix, and the p_ij^(n) are called the n-step transition probabilities. In particular, P is the 1-step transition matrix of the Markov chain.
In P^(n) = (p_ij^(n)), the column index j is the state at time t = m and the row index i is the state at time t = m + n.
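The definition can be checked empirically: starting the chain at state j and recording where it sits after n steps estimates the j-th column of P^(n). A Monte Carlo sketch, assuming NumPy and reusing the hypothetical matrix from the snippet above:

```python
import numpy as np

rng = np.random.default_rng(1)

P = np.array([[0.2, 0.0, 1.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.6, 0.0]])
k, n, runs = 3, 4, 20_000

# Estimate p_ij^(n) = P(X_n = i | X_0 = j) straight from the definition:
# column j is the empirical distribution of X_n given X_0 = j.
P_n_est = np.zeros((k, k))
for j in range(k):
    for _ in range(runs):
        x = j
        for _ in range(n):
            x = rng.choice(k, p=P[:, x])
        P_n_est[x, j] += 1
P_n_est /= runs

print(np.round(P_n_est, 3))
```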
Theorem. For every n ≥ 1, the n-step transition matrix satisfies P^(n) = P^n.
Proof.
Since
$$P(X_{n+1} = i,\, X_0 = j) = \sum_{l=1}^{k} P(X_{n+1} = i,\, X_n = l,\, X_0 = j),$$
and, for each l = 1, 2, · · · , k, the Markov property gives
$$P(X_{n+1} = i,\, X_n = l,\, X_0 = j) = P(X_{n+1} = i \mid X_n = l)\, P(X_n = l \mid X_0 = j)\, P(X_0 = j),$$
we have
$$P(X_{n+1} = i \mid X_0 = j) = \frac{P(X_{n+1} = i,\, X_0 = j)}{P(X_0 = j)} = \frac{1}{P(X_0 = j)} \sum_{l=1}^{k} P(X_{n+1} = i,\, X_n = l,\, X_0 = j) = \sum_{l=1}^{k} P(X_{n+1} = i \mid X_n = l)\, P(X_n = l \mid X_0 = j), \tag{4}$$
as desired.
Since P (Xn+1 = i | Xn = l) = P (X1 = i | X0 = l) = pil , equation (4) can be rewritten as
$$p^{(n+1)}_{ij} = \sum_{l=1}^{k} p_{il}\, p^{(n)}_{lj},$$
where the right-hand side is the inner product of the i-th row of P with the j-th column of P^(n) , and we arrive at
$$P^{(n+1)} = P\, P^{(n)}.$$
By induction, P^(n) = P^n for all n ≥ 1. □
Consequently, for each i = 1, 2, · · · , k,
$$\pi_n(i) = P(X_n = i) = \sum_{j=1}^{k} P(X_n = i \mid X_0 = j)\, P(X_0 = j) = \sum_{j=1}^{k} p^{(n)}_{ij}\, \pi_0(j).$$
Since the right-hand side is the inner product of the i-th row of P^n with the (column) vector π0 , we conclude that πn = P^n π0 , as desired.
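Both facts are easy to confirm numerically; a small check with the same hypothetical matrix, assuming NumPy:

```python
import numpy as np

P = np.array([[0.2, 0.0, 1.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.6, 0.0]])
pi0 = np.array([0.5, 0.5, 0.0])
n = 4

P_n = np.linalg.matrix_power(P, n)   # P^(n) = P^n
assert np.allclose(P @ P_n,          # recursion P^(n+1) = P P^(n)
                   np.linalg.matrix_power(P, n + 1))

pi_n = P_n @ pi0                     # pi_n = P^n pi_0
print(np.round(pi_n, 4))
```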
Definition 7. Let {Xn } be a Markov chain and let π = (π (1) , π (2) , · · · , π (k)) be a distribution on S, i.e., π (i) ≥ 0 for all i = 1, 2, · · · , k, and $\sum_{i=1}^{k} \pi(i) = 1$.
(i) π is called the limiting distribution of {Xn } if $\lim_{n \to \infty} P^n(i, j) = \pi(i)$, for all i, j = 1, 2, · · · , k.
(ii) π is called a stationary distribution of {Xn } if
π = Pπ.
Remarks. (a) By the uniqueness property of the limit, a Markov
chain has at most one limiting distribution, whereas it may have more
than one stationary distribution.
(b) If π is the limiting distribution, then P (Xn = i | X0 = j) → π (i) as n → ∞, for all j = 1, 2, · · · , k; i.e., regardless of the initial state j, the probability that the chain is in state i at time t = n, P (Xn = i), can be approximated by π (i) for large n.
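Remark (b) can be watched directly: raising P to increasing powers shows every column tending to the same vector. A quick sketch with the same hypothetical matrix, assuming NumPy (for this matrix the columns do converge, and the common limit is the stationary distribution computed at the end of this section):

```python
import numpy as np

P = np.array([[0.2, 0.0, 1.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.6, 0.0]])

# If the limiting distribution exists, every column of P^n approaches pi,
# i.e. P(X_n = i | X_0 = j) -> pi(i) no matter which state j we start in.
for n in (1, 5, 20, 100):
    print(n)
    print(np.round(np.linalg.matrix_power(P, n), 4))
```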
Example. Consider the Markov chain with state space S = {1, 2, 3}, transition matrix
$$P = \begin{pmatrix} 0.2 & 0 & 1 \\ 0.8 & 0.4 & 0 \\ 0 & 0.6 & 0 \end{pmatrix}$$
(rows and columns indexed by the states 1, 2, 3), and initial distribution π0 = (0.5, 0.5, 0).
To find the stationary distributions of this chain, note that
$$\pi = P\pi \iff (I_3 - P)\,\pi = 0. \tag{5}$$
By transforming
$$I_3 - P = \begin{pmatrix} 0.8 & 0 & -1 \\ -0.8 & 0.6 & 0 \\ 0 & -0.6 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 0.8 & 0 & -1 \\ 0 & 0.6 & -1 \\ 0 & -0.6 & 1 \end{pmatrix} \rightarrow \begin{pmatrix} 1 & 0 & -\tfrac{5}{4} \\ 0 & 1 & -\tfrac{5}{3} \\ 0 & 0 & 0 \end{pmatrix}$$
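The reduced matrix says the solutions of (5) satisfy π (1) = (5/4) π (3) and π (2) = (5/3) π (3), with one degree of freedom left for the normalization. As a numerical cross-check, one can append the constraint Σ π (i) = 1 to the rank-deficient system and solve by least squares; a sketch, assuming NumPy:

```python
import numpy as np

P = np.array([[0.2, 0.0, 1.0],
              [0.8, 0.4, 0.0],
              [0.0, 0.6, 0.0]])
k = P.shape[0]

# (I_3 - P) pi = 0 has a one-dimensional solution space (the last row of
# the reduced matrix is zero), so append sum(pi) = 1 to pin down pi.
A = np.vstack([np.eye(k) - P, np.ones((1, k))])
b = np.concatenate([np.zeros(k), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.round(pi, 4))  # [0.3191 0.4255 0.2553] = (15/47, 20/47, 12/47)
```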