Chapter 4

Markov Chains
Definition, Chapman-Kolmogorov Equations,
Classification of States, Limiting Probabilities,
Transient Analysis, Time Reversibility
Stochastic Processes
A stochastic process is a collection of random variables $\{X(t),\, t \in T\}$.
Typically, $T$ is continuous (time) and we have $\{X(t),\, t \ge 0\}$.
Or, $T$ is discrete and we are observing $\{X_n,\, n = 0, 1, 2, \ldots\}$ at discrete time points $n$ that may or may not be evenly spaced.
Refer to $X(t)$ as the state of the process at time $t$.
The state space of the stochastic process is the set of all possible values of $X(t)$: this set may be discrete or continuous as well.
Markov Chains
In this chapter, consider discrete-state, discrete-time processes.
A Markov chain is a stochastic process $\{X_n,\, n = 0, 1, 2, \ldots\}$ where each $X_n$ belongs to the same subset of $\{0, 1, 2, \ldots\}$, and
$$P\{X_{n+1} = j \mid X_n = i,\, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1,\, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i\}$$
for all states $i_0, i_1, \ldots, i_{n-1}$ and all $n \ge 0$.
Say $P_{ij} = P\{X_{n+1} = j \mid X_n = i\}$. Then
$$P_{ij} \ge 0 \text{ for all } i, j; \qquad \text{for any } i, \; \sum_{j=0}^{\infty} P_{ij} = 1.$$
Let $\mathbf{P} = [P_{ij}]$ be the matrix of one-step transition probabilities.
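As a concrete sketch (NumPy assumed; the three-state matrix is made up purely for illustration), a one-step transition matrix is just a nonnegative matrix whose rows sum to 1, and simulating the chain means sampling each next state from the current state's row:

```python
import numpy as np

# Hypothetical 3-state chain; one-step transition matrix P = [P_ij].
P = np.array([
    [0.7, 0.2, 0.1],   # transitions out of state 0
    [0.3, 0.5, 0.2],   # transitions out of state 1
    [0.2, 0.4, 0.4],   # transitions out of state 2
])

# Check the defining properties: P_ij >= 0 and each row sums to 1.
assert (P >= 0).all()
assert np.allclose(P.sum(axis=1), 1.0)

# One step of the chain: given X_n = i, draw X_{n+1} from row i of P.
rng = np.random.default_rng(0)
i = 0
j = rng.choice(len(P), p=P[i])
print(f"X_n = {i} -> X_(n+1) = {j}")
```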
n-step Transition Probabilities
Given the chain is in state i at a given time, what is the
probability it will be in state j after n transitions? Find it by
conditioning on the initial transition(s).
$$
\begin{aligned}
P_{ij}^{n} &= P\{X_{m+n} = j \mid X_m = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{m+n} = j \mid X_{m+1} = k,\, X_m = i\}\, P\{X_{m+1} = k \mid X_m = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{m+n} = j \mid X_{m+1} = k\}\, P\{X_{m+1} = k \mid X_m = i\} \\
&= \sum_{k=0}^{\infty} P_{kj}^{\,n-1} P_{ik}
\end{aligned}
$$
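The first-step conditioning above translates directly into code. A minimal sketch (NumPy assumed; `n_step_prob` is a hypothetical helper, and the recursion is exponential in n, so it is for exposition only — the matrix form on the next slide is the practical route):

```python
import numpy as np

def n_step_prob(P: np.ndarray, i: int, j: int, n: int) -> float:
    """P{X_{m+n} = j | X_m = i}, computed by conditioning on the first transition."""
    if n == 0:
        return 1.0 if i == j else 0.0
    # Sum over the state k reached after one step: P_ik * P_kj^(n-1).
    return float(sum(P[i, k] * n_step_prob(P, k, j, n - 1) for k in range(len(P))))

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
print(n_step_prob(P, 0, 1, 3))   # matches np.linalg.matrix_power(P, 3)[0, 1]
```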
Chapman-Kolmogorov Equations
In general, can find the n-step transition probabilities by
conditioning on the state at any intermediate stage:
$$
\begin{aligned}
P_{ij}^{n+m} &= P\{X_{n+m} = j \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k,\, X_0 = i\}\, P\{X_n = k \mid X_0 = i\} \\
&= \sum_{k=0}^{\infty} P_{kj}^{\,m} P_{ik}^{\,n}
\end{aligned}
$$
Let $\mathbf{P}^{(n)}$ be the matrix of n-step transition probabilities: in matrix form, the identity above reads
$$\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)}\, \mathbf{P}^{(m)}$$
So, by induction, $\mathbf{P}^{(n)} = \mathbf{P}^{n}$.
Classification of States
State j is accessible from state i if $P_{ij}^{\,n} > 0$ for some $n \ge 0$.
If j is accessible from i and i is accessible from j, we say that states i and j communicate ($i \leftrightarrow j$).
Communication is a class property:
(i) State i communicates with itself, for all $i \ge 0$
(ii) If i communicates with j then j communicates with i
(iii) If $i \leftrightarrow j$ and $j \leftrightarrow k$, then $i \leftrightarrow k$.
Therefore, communication divides the state space up into mutually exclusive classes.
If all the states communicate, the Markov chain is irreducible.
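A minimal sketch of how one might compute communicating classes for a finite chain (the 4-state matrix is hypothetical): accessibility is reachability in the directed graph with an edge $i \to j$ whenever $P_{ij} > 0$, and the classes are the mutually reachable groups of states:

```python
import numpy as np

P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.50, 0.50, 0.00, 0.00],
              [0.25, 0.25, 0.25, 0.25],
              [0.00, 0.00, 0.00, 1.00]])

m = len(P)
reach = (P > 0) | np.eye(m, dtype=bool)   # n = 0 is allowed, so i reaches itself
for k in range(m):                        # Floyd-Warshall-style transitive closure
    for i in range(m):
        if reach[i, k]:
            reach[i] |= reach[k]

comm = reach & reach.T                    # i <-> j: accessible in both directions
classes = {tuple(int(j) for j in np.flatnonzero(comm[i])) for i in range(m)}
print(sorted(classes))                    # [(0, 1), (2,), (3,)]
```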
Recurrence vs. Transience
Let $f_i$ be the probability that, starting in state i, the process will ever reenter state i. If $f_i = 1$, the state is recurrent; otherwise it is transient.
If state i is recurrent then, starting from state i, the process will reenter state i infinitely often (w/prob. 1).
If state i is transient then, starting in state i, the number of periods in which the process is in state i has a geometric distribution with parameter $1 - f_i$.
Equivalently, state i is recurrent if $\sum_{n=1}^{\infty} P_{ii}^{\,n} = \infty$ and transient if $\sum_{n=1}^{\infty} P_{ii}^{\,n} < \infty$.
Recurrence (transience) is a class property: If i is recurrent (transient) and $i \leftrightarrow j$ then j is recurrent (transient).
A special case of a recurrent state: if $P_{ii} = 1$ then i is absorbing.
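One way to see $f_i$ concretely is a Monte Carlo sketch (hypothetical helper; runs are truncated at `max_steps`, so for a recurrent state this slightly underestimates $f_i = 1$):

```python
import numpy as np

def estimate_return_prob(P, i, trials=10_000, max_steps=1_000, seed=0):
    """Estimate f_i: the probability a chain started in state i ever returns to i."""
    rng = np.random.default_rng(seed)
    states = np.arange(len(P))
    returns = 0
    for _ in range(trials):
        s = rng.choice(states, p=P[i])      # first transition out of state i
        for _ in range(max_steps):
            if s == i:                      # came back: count a return
                returns += 1
                break
            s = rng.choice(states, p=P[s])
        # if the inner loop ends without hitting i, the run counts as "no return"
    return returns / trials
```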
Recurrence, Transience and Other Properties
Not all states in a finite Markov chain can be transient (why?).
All states of a finite irreducible Markov chain are recurrent.
If $P_{ii}^{\,n} = 0$ whenever n is not divisible by d, and d is the largest integer with this property, then state i is periodic with period d.
If a state has period d = 1, then it is aperiodic.
If state i is recurrent and if, starting in state i, the expected time until the process returns to state i is finite, it is positive recurrent (otherwise it is null recurrent).
A positive recurrent, aperiodic state is called ergodic.
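A sketch of computing a state's period over a finite horizon (the cutoff is a pragmatic approximation, not part of the definition): take the gcd of all $n$ with $(P^n)_{ii} > 0$:

```python
import math
import numpy as np

def period(P, i, horizon=50):
    """gcd of all n <= horizon with (P^n)_ii > 0; math.gcd(0, n) == n starts it off."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, horizon + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            d = math.gcd(d, n)
    return d

# A two-state flip-flop alternates deterministically, so every state has period 2.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(period(P, 0))   # 2
```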
Limiting Probabilities
Theorem: For an irreducible ergodic Markov chain,
$$\pi_j = \lim_{n \to \infty} P_{ij}^{\,n}$$
exists for all j and is independent of i. Furthermore, $\pi_j$ is the unique nonnegative solution of
$$\pi_j = \sum_{i=0}^{\infty} \pi_i P_{ij}, \quad j \ge 0; \qquad \sum_{j=0}^{\infty} \pi_j = 1$$
The probability $\pi_j$ also equals the long run proportion of time that the process is in state j.
If the chain is irreducible and positive recurrent but periodic, the same system of equations can be solved for these long run proportions.
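For a finite chain the theorem's system is a plain linear solve. A sketch (same hypothetical matrix as earlier), replacing one redundant balance equation with the normalization constraint:

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.4, 0.4]])

m = len(P)
# Balance equations (P^T - I) pi = 0; drop one (they are linearly dependent)
# and append the normalization row sum(pi) = 1.
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
b = np.zeros(m)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi, pi @ P)   # pi and pi P agree: pi is stationary
```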
Limiting Probabilities 2
The long run proportions $\pi_j$ are also called stationary probabilities because if $P\{X_0 = j\} = \pi_j$ then $P\{X_n = j\} = \pi_j$ for all $n$, $j \ge 0$.
Let $m_{jj}$ be the expected number of transitions until the Markov chain, starting in state j, returns to state j (finite if state j is positive recurrent). Then $m_{jj} = 1/\pi_j$.
If $\{X_n,\, n \ge 0\}$ is an irreducible Markov chain with stationary probabilities $\pi_j$, and r is a bounded function on the state space, then with probability 1,
$$\lim_{N \to \infty} \frac{\sum_{n=1}^{N} r(X_n)}{N} = \sum_{j=0}^{\infty} r(j)\, \pi_j \qquad \text{(long run average reward)}$$
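A sketch comparing a simulated long-run average reward against $\sum_j r(j)\pi_j$ (the chain and the reward values are illustrative; $\pi$ is approximated here by a high matrix power, since the rows of $P^n$ converge to $\pi$):

```python
import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.3, 0.5, 0.2],
              [0.2, 0.4, 0.4]])
pi = np.linalg.matrix_power(P, 200)[0]   # rows of P^n converge to pi

r = np.array([5.0, 1.0, -2.0])           # illustrative reward r(j) for state j

rng = np.random.default_rng(1)
N, s, total = 200_000, 0, 0.0
for _ in range(N):
    s = rng.choice(3, p=P[s])            # one transition of the chain
    total += r[s]                        # accumulate the reward along the path

print(total / N, r @ pi)                 # the two values should be close
```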
Transient Analysis
Suppose a finite Markov chain with m states has some transient states. Assume the states are numbered so that $T = \{1, 2, \ldots, t\}$ is the set of transient states, and let $\mathbf{P}_T$ be the matrix of transition probabilities among these states.
Let $\mathbf{R}$ be the $t \times (m-t)$ matrix of one-step transition probabilities from transient states to the recurrent states, and $\mathbf{P}_R$ be the $(m-t) \times (m-t)$ matrix of transition probabilities among the recurrent states: the overall one-step transition probability matrix can be written as
$$\mathbf{P} = \begin{bmatrix} \mathbf{P}_T & \mathbf{R} \\ \mathbf{0} & \mathbf{P}_R \end{bmatrix}$$
If the recurrent states are all absorbing then $\mathbf{P}_R = \mathbf{I}$.
Transient Analysis 2
• If the process starts in a transient state, how long does it
spend among the transient states?
• What are the probabilities of eventually entering a given
recurrent state?
Define $\delta_{ij} = 1$ if $i = j$ and 0 otherwise.
For i and j in T, let $s_{ij}$ be the expected number of periods that the Markov chain is in state j given that it started in state i. Conditioning on the first transition, and noting that transitions from recurrent states to transient states are impossible,
$$s_{ij} = \delta_{ij} + \sum_{k=1}^{t} P_{ik}\, s_{kj}$$
Transient Analysis 3
Let $\mathbf{S}$ be the matrix of $s_{ij}$ values. Then $\mathbf{S} = \mathbf{I} + \mathbf{P}_T \mathbf{S}$. Or,
$$(\mathbf{I} - \mathbf{P}_T)\,\mathbf{S} = \mathbf{I}, \qquad \mathbf{S} = (\mathbf{I} - \mathbf{P}_T)^{-1}$$
For i and j in T, let $f_{ij}$ be the probability that the Markov chain ever makes a transition into j, starting from i:
$$f_{ij} = \frac{s_{ij} - \delta_{ij}}{s_{jj}}$$
For i in T and j in $T^c$, the matrix of these probabilities is
$$\mathbf{F} = (\mathbf{I} - \mathbf{P}_T)^{-1}\, \mathbf{R}$$
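A worked sketch on a small gambler's-ruin chain (win or lose one unit with probability 0.4/0.6, absorbing at fortunes 0 and 4; the numbers are illustrative). The transient states are $T = \{1, 2, 3\}$ and the recurrent (absorbing) states are $\{0, 4\}$:

```python
import numpy as np

p = 0.4                            # probability of winning one unit
PT = np.array([[0.0, p,   0.0],    # transitions among transient states 1, 2, 3
               [1-p, 0.0, p  ],
               [0.0, 1-p, 0.0]])
R  = np.array([[1-p, 0.0],         # transient -> recurrent (ruin at 0, goal at 4)
               [0.0, 0.0],
               [0.0, p  ]])

S = np.linalg.inv(np.eye(3) - PT)  # S = (I - P_T)^(-1): expected visits s_ij
F = S @ R                          # absorption probabilities into states 0 and 4
print(S)
print(F)                           # each row of F sums to 1: absorption is certain
```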
Time Reversibility
• One approach to estimate transition probabilities from each
state is by looking at transitions into states and tracking what
the previous state was.
– How do we know this information is reliable?
– How do we use it to estimate the forward transition probabilities?
Consider a stationary ergodic Markov chain.
Trace the sequence of states going backwards: $X_n, X_{n-1}, \ldots, X_0$.
This is a Markov chain with transition probabilities:
$$Q_{ij} = P\{X_m = j \mid X_{m+1} = i\} = \frac{\pi_j P_{ji}}{\pi_i}$$
If $Q_{ij} = P_{ij}$ for all i, j, then the Markov chain is time reversible.
Time Reversibility 2
Another way of writing the reversibility equation is:
$$\pi_i Q_{ij} = \pi_j P_{ji}$$
Proposition: Consider an irreducible Markov chain with transition probabilities $P_{ij}$. If one can find positive numbers $\pi_i$ summing to 1 and a transition probability matrix $\mathbf{Q}$ such that the above equation holds for all i, j, then the $Q_{ij}$ are the transition probabilities for the reversed chain and the $\pi_i$ are the stationary probabilities for both the original and the reversed chain.
Use this, thinking backwards, to guess at transition probabilities of the reversed chain.