
# APPM 4/5560: Markov processes, queues and simulation Handout - WEEK 3

Fall 2005

3.1 More on the Markov Property.
Our definition of a Markov chain was (omitting the time homogeneity part) as follows: (Xn)n≥0 is a first-order Markov chain provided that

$$
P(X_{n+1}=s_{n+1} \mid X_0=s_0,\ldots,X_n=s_n) = P(X_{n+1}=s_{n+1} \mid X_n=s_n),
$$

for all s0, s1, ..., sn+1 ∈ S. Does this imply, for example, that P(Xn+1 = sn+1 | X0 = s0, Xn = sn) = P(Xn+1 = sn+1 | Xn = sn)? Intuitively, the answer should be yes. Mathematically, the argument is:

$$
\begin{aligned}
P(X_{n+1}=s_{n+1}\mid X_0=s_0, X_n=s_n)
&= \frac{P(X_{n+1}=s_{n+1},\, X_n=s_n,\, X_0=s_0)}{P(X_n=s_n,\, X_0=s_0)} \\
&= \frac{\sum_{s_1,\ldots,s_{n-1}\in S} P(X_{n+1}=s_{n+1},\, X_n=s_n,\, X_{n-1}=s_{n-1},\ldots,X_1=s_1,\, X_0=s_0)}{P(X_n=s_n,\, X_0=s_0)} \\
&= \frac{P(X_{n+1}=s_{n+1}\mid X_n=s_n)\,\sum_{s_1,\ldots,s_{n-1}\in S} P(X_n=s_n,\, X_{n-1}=s_{n-1},\ldots,X_1=s_1,\, X_0=s_0)}{P(X_n=s_n,\, X_0=s_0)} \\
&= \frac{P(X_{n+1}=s_{n+1}\mid X_n=s_n)\cdot P(X_n=s_n,\, X_0=s_0)}{P(X_n=s_n,\, X_0=s_0)} \\
&= P(X_{n+1}=s_{n+1}\mid X_n=s_n).
\end{aligned}
$$

In the middle step, each joint probability factors as P(Xn+1 = sn+1 | Xn = sn, ..., X0 = s0) · P(Xn = sn, ..., X0 = s0); by the Markov property the first factor equals P(Xn+1 = sn+1 | Xn = sn), which does not depend on s1, ..., sn−1 and so can be pulled out of the sum.

The above identity is a special case of the following more general property (which holds with A0 := {s0} and A1 = · · · = An−1 = S). Suppose that for each i ∈ {0, ..., n − 1}, Ai is a non-empty subset of S. Then

$$
P(X_{n+1}=s_{n+1} \mid X_0\in A_0,\ldots,X_{n-1}\in A_{n-1},\, X_n=s_n) = P(X_{n+1}=s_{n+1}\mid X_n=s_n).
$$

Exercise 3.1.1 – True or False? If for each i ∈ {0, ..., n + 1}, Ai is a non-empty subset of S, then

$$
P(X_{n+1}\in A_{n+1} \mid X_0\in A_0,\ldots,X_n\in A_n) = P(X_{n+1}\in A_{n+1}\mid X_n\in A_n).
$$
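The identity derived above can be checked numerically by brute-force enumeration. The sketch below sums path probabilities of a hypothetical three-state chain (the matrix `p` and initial distribution `mu0` are made-up numbers, not from the handout) and verifies that conditioning on X0 in addition to Xn does not change the one-step prediction.

```python
# Exact check of P(X_{n+1}=s_{n+1} | X_0=s_0, X_n=s_n) = P(X_{n+1}=s_{n+1} | X_n=s_n)
# by enumerating all paths of a small illustrative 3-state chain.
from itertools import product

# hypothetical transition matrix and initial distribution (assumptions)
p = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
mu0 = [0.2, 0.5, 0.3]
S = range(3)

def path_prob(path):
    """Probability of observing the exact trajectory (s_0, ..., s_m)."""
    pr = mu0[path[0]]
    for a, b in zip(path, path[1:]):
        pr *= p[a][b]
    return pr

n = 3                      # condition on X_0 and X_3, predict X_4
s0, sn, sn1 = 0, 1, 2      # arbitrary choice of states
# sum over the unobserved intermediate states s_1, ..., s_{n-1}
num = sum(path_prob((s0,) + mid + (sn, sn1)) for mid in product(S, repeat=n - 1))
den = sum(path_prob((s0,) + mid + (sn,)) for mid in product(S, repeat=n - 1))
lhs = num / den            # P(X_4 = 2 | X_0 = 0, X_3 = 1)
rhs = p[sn][sn1]           # P(X_4 = 2 | X_3 = 1) = p(1, 2)
assert abs(lhs - rhs) < 1e-12
```

The assertion passes for any choice of states with positive denominator, mirroring the algebraic cancellation in the derivation.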

3.2 Markov chains with random initial states.
Suppose that (Xn)n≥0 is a first-order homogeneous Markov chain on a discrete state space S and with probability transition matrix p. We learned in lecture that P(Xn = j | X0 = i) is the entry in row i and column j of pⁿ. What is the probability that Xn = j when X0 is itself a random variable? In this case things should not be very different. Indeed, letting µn denote the distribution of Xn, i.e. P(Xn = j) = µn(j) for j ∈ S, it follows for n ≥ 1 that

$$
\mu_n(j) = P(X_n=j) = \sum_{i\in S} P(X_0=i,\, X_n=j) = \sum_{i\in S} P(X_0=i)\cdot P(X_n=j\mid X_0=i) = \sum_{i\in S} \mu_0(i)\, p^n(i,j).
$$

Thus, if we think of the µn's as row vectors, the above identity is equivalent to µn = µ0 · pⁿ.
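As a quick numerical sketch of the row-vector identity µn = µ0 · pⁿ, the snippet below propagates a hypothetical two-state chain one step at a time and compares against the matrix power (the numbers in `p` and `mu0` are illustrative assumptions, not from the handout).

```python
# Sketch of mu_n = mu_0 @ p^n, with distributions treated as row vectors.
import numpy as np

# hypothetical two-state transition matrix and random initial state (assumptions)
p = np.array([[0.9, 0.1],
              [0.4, 0.6]])
mu0 = np.array([0.3, 0.7])   # P(X_0 = 0) = 0.3, P(X_0 = 1) = 0.7

n = 5
# one step at a time: mu_{k+1} = mu_k @ p
mu = mu0.copy()
for _ in range(n):
    mu = mu @ p

# all at once: mu_n = mu_0 @ p^n
mu_direct = mu0 @ np.linalg.matrix_power(p, n)
assert np.allclose(mu, mu_direct)
assert abs(mu.sum() - 1.0) < 1e-12   # mu_n is still a probability distribution
```

Note the order of the product: because distributions are rows here, the vector multiplies the matrix from the left.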

3.3 A non-stopping time example.
Recall that a function T : S^ℕ → ℕ ∪ {∞} is said to be a stopping time if T satisfies the following property for all finite n ≥ 0: if T(s0, s1, ...) = n then T(s'0, s'1, ...) = n whenever (s0, ..., sn) = (s'0, ..., s'n). In words, whether T = n can be decided by looking only at the first (n + 1) coordinates of the trajectory. Given x ∈ S, we showed in class that Tx(s0, s1, ...) := min{n ≥ 1 : sn = x} is a stopping time (with the understanding that the minimum of an empty set is infinite). For a Markov chain with state space S, Tx can be interpreted as the time of the first visit to x (after n = 0).
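The stopping time Tx is straightforward to simulate. The sketch below uses a hypothetical two-state chain (the transition probabilities and the cutoff `max_steps` are assumptions for illustration) and estimates E₀(T₁) by Monte Carlo; since from state 0 each step enters state 1 with probability p(0, 1) = 0.5, here T₁ is geometric with mean 2.

```python
# Monte Carlo sketch of the hitting time T_x = min{n >= 1 : X_n = x}.
import random

# hypothetical two-state transition probabilities (assumption)
p = {0: [0.5, 0.5], 1: [0.2, 0.8]}

def first_hit(x, start, rng, max_steps=10_000):
    """Return T_x, the first time after n = 0 the chain visits x (None if never seen)."""
    state = start
    for n in range(1, max_steps + 1):
        state = rng.choices([0, 1], weights=p[state])[0]
        if state == x:
            return n
    return None

rng = random.Random(0)
# Estimate E_0(T_1): starting at 0, T_1 is geometric with success prob 0.5, mean 2.
samples = [first_hit(1, 0, rng) for _ in range(20_000)]
est = sum(samples) / len(samples)
assert abs(est - 2.0) < 0.1
```

The `max_steps` cutoff is a practical guard for simulation; mathematically Tx may be infinite, which is exactly why the codomain of T includes ∞.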

As an example of a non-stopping time, consider the function Nx(s0, s1, ...) := max{1 ≤ n ≤ 8 : sn = x}, with the understanding that the maximum of an empty set is infinite. For a Markov chain with state space S, Nx can be interpreted as the last time the chain visits state x between times n = 1 and n = 8 inclusive. Nx is not a stopping time, however. To see why, consider x, y ∈ S with x ≠ y, and define s := (y, x, y, y, y, ...) and s' := (y, x, y, x, y, ...). Observe that Nx(s) = 1 and (s0, s1) = (s'0, s'1). However, Nx(s') = 3 ≠ 1 = Nx(s). Thus Nx cannot be a stopping time.

Exercise 3.3.1 – True or False? If T and U are both stopping times then min{T, U} is a stopping time.

3.4 More on the strong Markov property.
Suppose that (Un)n≥0 is a Markov chain defined in a state space S and with probability transition matrix p. Let Tx be the first time (after n = 0) that (Un)n≥0 visits state x. Observe that UTx = x whenever Tx < ∞. Since (Un)n≥0 is a first-order homogeneous Markov process, it is intuitive that the probability that UTx+1 = y is p(x, y); after all, at time Tx the chain was located at x. The strong Markov property states that this intuition is actually correct. To be more precise, this property states that

$$
P(U_{T_x+1}=y \mid T_x<\infty) = p(x,y).
$$

Exercise 3.4.1 – True or False? Let (Xn)n≥0 be a Markov chain defined in a certain state space S and let x, y ∈ S be distinct states. Let T be the first time (after n = 0) that the chain visits state x or y (mathematically, this means that T = min{Tx, Ty}). Then, for all z ∈ S,

$$
P(X_{T+1}=z \mid T<\infty) = p(x,z) + p(y,z).
$$

3.5 Transience versus recurrence.
Suppose that (Xn)n≥0 is a first-order homogeneous Markov chain on a discrete state space S and with probability transition matrix p. Let N(x) be the total number of visits that the chain makes to state x at times n ≥ 1. Observe that for all y ∈ S, Py(N(x) ∈ {0, 1, 2, ...} ∪ {∞}) = 1. In particular, N(x) is a discrete random variable and

$$
E_y\big(N(x)\big) = \sum_{k=0}^{\infty} k\cdot P_y\big(N(x)=k\big) + \infty\cdot P_y\big(N(x)=\infty\big),
$$

with the understanding that ∞ · 0 = 0. We proved in lecture that

$$
x \text{ is recurrent} \iff E_x N(x) = \infty.
$$

Equivalently, x is transient ⟺ Ex N(x) < ∞. Using that Ex N(x) = ∞ for a recurrent state x we cannot conclude that Px(N(x) = ∞) = 1 (why?). However, this assertion is in fact true for recurrent states. To see why, recall that by definition x is recurrent if ρxx = 1. Then Px(T_x^k < ∞) = ρ_xx^k = 1 for all k ≥ 0, where T_x^k denotes the time of the k-th visit to x (after n = 0), and therefore

$$
P_x\big(N(x)<\infty\big) = P_x\Big(\bigcup_{k=0}^{\infty}\,[\,T_x^k=\infty\,]\Big) \le \sum_{k=0}^{\infty} P_x\big(T_x^k=\infty\big) = 0.
$$

Equivalently, Px(N(x) = ∞) = 1 for a recurrent state x. In contrast, for a transient state x, Ex N(x) < ∞ implies that Px(N(x) = ∞) = 0, i.e. Px(N(x) < ∞) = 1. So Markov chains visit transient states only a finite number of times.
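The dichotomy above can be illustrated by simulating N(x) for a transient state. In the hypothetical chain below (an assumption for illustration, not from the handout), state 1 is absorbing, so state 0 is transient with return probability ρ₀₀ = 1/2; every run therefore visits 0 only finitely often, and E₀ N(0) = ρ₀₀ / (1 − ρ₀₀) = 1.

```python
# Simulation sketch: a transient state is visited only finitely often.
# Illustrative chain: from state 0 move to 0 or 1 with prob 1/2 each;
# state 1 is absorbing, so state 0 is transient (rho_00 = 1/2).
import random

def visits_to_zero(rng, max_steps=10_000):
    """Count N(0): visits to state 0 at times n >= 1, starting from X_0 = 0."""
    state, count = 0, 0
    for _ in range(max_steps):
        if state == 1:          # absorbed: no further visits to 0 are possible
            break
        state = 0 if rng.random() < 0.5 else 1
        if state == 0:
            count += 1
    return count

rng = random.Random(1)
samples = [visits_to_zero(rng) for _ in range(20_000)]
# Each return to 0 happens with probability 1/2, so E_0 N(0) = 1.
est = sum(samples) / len(samples)
assert abs(est - 1.0) < 0.1
```

For a recurrent state the analogous experiment never terminates on its own, matching Px(N(x) = ∞) = 1; the `max_steps` guard exists only to keep the simulation finite.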