
# Chapter 2: Markov Chains (part 1)

## Stochastic Process

Definition: A stochastic process {X_t, t ∈ T} is a collection of random variables, where t is the time index (or a space index, as in spatial analysis) and T is the index set. The set of possible values of X_t is the state space, denoted by S. If X_t = i ∈ S, then we say that the process is in state i at time t. X_0 is usually called the initial state. For convenience, we write X(t) for X_t.

Example Toss a coin n times and let X_i be the outcome of the ith toss. Then {X_t : t = 1, 2, ..., n} is a stochastic process with T = {1, 2, ..., n} and S = {H, T}.

Example (Gambler's ruin) A gambler starts with \$3; in each game he wins \$1 with probability 1/3 and loses \$1 with probability 2/3. He must leave when he goes broke or when he wins \$N. Let X_t be the money he has after the t-th game. Then {X_t : t = 0, 1, 2, ...} is a stochastic process with T = {0, 1, 2, ...} and S = {0, 1, ..., N}.

Example (Poisson process) X_t counts the number of times that a specified event occurs during the time period from 0 to t. Then {X_t : t ∈ [0, ∞)} is a stochastic process with T = [0, ∞) and S = {0, 1, 2, ...}.

Example (Stock market index) X_t is the S&P 500 index on the t-th day of this year. Then {X_t : t = 1, 2, ..., 365} is a stochastic process with T = {1, 2, ..., 365} and S = (0, ∞).

Classification of stochastic processes:

- If T is a countable set, {X_t : t ∈ T} is a discrete-time stochastic process.
- If T is a continuum, {X_t : t ∈ T} is a continuous-time stochastic process.
- If S is a countable set, {X_t : t ∈ T} is a discrete-state stochastic process.
- If S is a continuum, {X_t : t ∈ T} is a real-state (continuous-state) stochastic process.

## Definition of Markov Chain

Definition (Markov Chain¹): A stochastic process {X_t : t ∈ T} with discrete state space S and discrete time index T satisfying

    P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i)

for every set of states i_0, i_1, ..., i_{n-1}, i, j in S and every n ≥ 0 is called a Markov chain (MC). A process with the property stated in this definition is said to have the Markov property: the conditional distribution of the future state X_{n+1} depends only on the present state and is independent of the past states.

Example (Gambler's ruin, continued, with N = 5) The transition probabilities P(X_{t+1} = j | X_t = i) are:
¹Markov chains are named after A. A. Markov, who developed the theory of finite-state Markov chains in the early 1900s. An interesting example from the 19th century is the Galton-Watson process, which attempts to answer the question of when, and with what probability, a given family name becomes extinct.

    1/3, if j = i + 1 and 0 < i < 5;
    2/3, if j = i - 1 and 0 < i < 5;
    1,   if j = i = 0;
    1,   if j = i = 5;
    0,   otherwise.

[Figure: a sample path of the gambler's fortune, states 0-5 plotted against time 0-10.]

Definition: Let P_ij^{n,n+1} = P(X_{n+1} = j | X_n = i), called the one-step transition probability.

In general, the one-step transition probability may depend on n as well as on i and j.

Definition (Stationary transition probability): A Markov chain {X_n : n = 0, 1, 2, ...} with state space S is said to have stationary transition probabilities if

    P(X_{n+1} = j | X_n = i) = P(X_n = j | X_{n-1} = i) = ... = P(X_1 = j | X_0 = i)

for each i, j ∈ S, i.e. the one-step transition probabilities do not change as n increases.

A counterexample (a non-stationary Markov chain): promotion over a career. The time index T is a person's age and the state space is {junior, senior}. Promotion becomes more likely with age, so

    ... > P(X_50 = senior | X_49 = junior) > ... > P(X_40 = senior | X_39 = junior) > P(X_39 = senior | X_38 = junior) > ...

In this module, we consider only Markov chains with stationary transition probabilities.

One-step transition probability matrix We can use a matrix to denote the transition probabilities:

                    0    1    2   ...
               0   p00  p01  p02  ...
    P = (pij) = 1   p10  p11  p12  ...
               2   p20  p21  p22  ...
               ...

Example Toss a coin (probability of a head 0.6) independently n times; X_i is the outcome of the ith toss. Then the one-step transition matrix is

                     H    T
    P = (pij) = H   0.6  0.4
                T   0.6  0.4

Example (Gambler's ruin, continued, with N = 5) With the transition probabilities

    P(X_{t+1} = j | X_t = i) =
    1/3, if j = i + 1 and 0 < i < 5;
    2/3, if j = i - 1 and 0 < i < 5;
    1,   if j = i = 0 or j = i = 5;
    0,   otherwise,

the one-step transition matrix is

              0    1    2    3    4    5
        0     1    0    0    0    0    0
        1    2/3   0   1/3   0    0    0
    P = 2     0   2/3   0   1/3   0    0
        3     0    0   2/3   0   1/3   0
        4     0    0    0   2/3   0   1/3
        5     0    0    0    0    0    1
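As a quick sanity check, the matrix above can be built programmatically. This is a sketch in Python (the helper name is ours, not from the notes); it uses exact rational arithmetic so the entries come out as 1/3 and 2/3 rather than floats:

```python
from fractions import Fraction

def gamblers_ruin_matrix(N, p_win=Fraction(1, 3)):
    """One-step transition matrix for the gambler's ruin chain on {0, ..., N}."""
    p_lose = 1 - p_win
    P = [[Fraction(0)] * (N + 1) for _ in range(N + 1)]
    P[0][0] = Fraction(1)       # broke: absorbing state
    P[N][N] = Fraction(1)       # reached the target: absorbing state
    for i in range(1, N):
        P[i][i + 1] = p_win     # win $1 with probability 1/3
        P[i][i - 1] = p_lose    # lose $1 with probability 2/3
    return P

P = gamblers_ruin_matrix(5)
```

Every row sums to 1, in line with the properties of P stated below.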

Properties of P:

(1) p_ij ≥ 0 for all i, j ∈ S;

(2) Σ_j p_ij = 1 for all i ∈ S, i.e. each row of P sums to 1.

Proof: Both follow immediately from the definition of probability and the law of total probability.

State diagram associated with transition probabilities A diagram for the Markov chain that represents the possible movements and their probabilities.

Example (Tossing a coin) [State diagram: states H and T; arrows into H from both states with probability 0.6, arrows into T from both states with probability 0.4.]

Example Draw a state diagram associated with the following transition probability matrix.

              1    2    3    4
        1    1.0   0    0    0
    P = 2     0   0.3  0.7   0
        3     0   0.5  0.5   0
        4    0.2   0   0.1  0.7

[State diagram: state 1 has a self-loop with probability 1.0; 2→2 (0.3), 2→3 (0.7); 3→2 (0.5), 3→3 (0.5); 4→1 (0.2), 4→3 (0.1), 4→4 (0.7).]

Example (Gambler's ruin, continued) Draw a state diagram associated with the following transition probability matrix.

              0    1    2    3    4    5
        0     1    0    0    0    0    0
        1    2/3   0   1/3   0    0    0
    P = 2     0   2/3   0   1/3   0    0
        3     0    0   2/3   0   1/3   0
        4     0    0    0   2/3   0   1/3
        5     0    0    0    0    0    1

[State diagram: self-loops at 0 and 5 with probability 1; from each interior state i, an arrow to i + 1 with probability 1/3 and to i - 1 with probability 2/3.]

States 0 and 5 are called absorbing states since once these states are reached, the process can never leave.

Example A salesman lives in Town A and is responsible for a sales territory consisting of Towns A, B and C. Each week he is required to visit a different town. When he is in his own town, it makes no difference which town he visits next, so he flips a coin: if it is heads he goes to B, and if tails he goes to C. However, after spending a week away from home he has a slight preference for going home, so when he is in either Town B or C he flips two coins. If two heads occur, he goes to the other town; otherwise he goes to A. The successive towns that the salesman visits form a Markov chain with state space {A, B, C}, where the random variable X_n equals A, B or C according to his location during week n. The transition probability matrix of this problem is given by

             A     B     C
        A    0    0.50  0.50
    P = B   0.75   0    0.25
        C   0.75  0.25   0

Given a Markov chain, what do we want to know next?

- After t steps, what is the probability that the MC is in state i?
- As t → ∞, what is the probability that the MC is in state i?
- Given that we have taken t steps, what is the probability that we have ever been in state i?
- What is the expected number of steps before we reach state i for the first time?

## Chapman-Kolmogorov Equations

Denote the n-step transition probability of the Markov chain {X_n : n = 0, 1, 2, ...} with state space S by

    p_ij^(n) = P(X_{m+n} = j | X_m = i),  m, n = 0, 1, 2, ...,  i, j ∈ S.

Similarly, we define the n-step transition probability matrix as

                         p00^(n)  p01^(n)  p02^(n)  ...
    P^(n) = (p_ij^(n)) = p10^(n)  p11^(n)  p12^(n)  ...
                         ...
                         pi0^(n)  pi1^(n)  pi2^(n)  ...
                         ...

Example Consider the salesman example where the salesman starts from Town B. The transition probability matrix indicates that the probability of being in Town A after one step (in one week) is 0.75. But what is the probability that he will be in Town A after two steps (or even more steps)? Notice that the 2-step transition probability from Town B to Town A is given by

    P(X_2 = A | X_0 = B)
    = P(X_1 = A | X_0 = B) P(X_2 = A | X_1 = A)
    + P(X_1 = B | X_0 = B) P(X_2 = A | X_1 = B)
    + P(X_1 = C | X_0 = B) P(X_2 = A | X_1 = C)
    = 0.75 × 0 + 0 × 0.75 + 0.25 × 0.75 = 0.1875.

Now consider the (B, A) entry of P × P:

           0    0.50  0.50     0    0.50  0.50     0.75    0.125   0.125
    P P = 0.75   0    0.25 × 0.75    0    0.25 = 0.1875  0.4375  0.3750
          0.75  0.25   0     0.75  0.25   0      0.1875  0.3750  0.4375

so [P P]_{BA} = 0.1875. The n-step transition probabilities can be computed using the Chapman-Kolmogorov equations.

Theorem For a Markov chain {X_n : n = 0, 1, 2, ...}, the (m+n)-step transition probability from state i to state j satisfies the Chapman-Kolmogorov equation

    p_ij^(m+n) = Σ_{k=0}^∞ p_ik^(n) p_kj^(m),

or, in matrix form,

    P^(m+n) = P^(m) P^(n).  In particular, P^(n) = P^n.

Proof:

    p_ij^(m+n) = P(X_{m+n} = j | X_0 = i)
               = Σ_{k=0}^∞ P(X_n = k | X_0 = i) P(X_{m+n} = j | X_n = k)
               = Σ_{k=0}^∞ p_ik^(n) p_kj^(m).

The proof is complete.

Example (salesman example continued) The 5-step transition probability matrix of the salesman example is given by P^(5) = P^5.
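The 2-step computation for the salesman example can be reproduced by plain matrix multiplication. A sketch in Python (the `matmul` helper is ours; the matrix entries are those of the salesman chain, states ordered A, B, C):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Salesman chain, states ordered A, B, C.
P = [[0.00, 0.50, 0.50],
     [0.75, 0.00, 0.25],
     [0.75, 0.25, 0.00]]

P2 = matmul(P, P)                 # two-step transition probabilities
P5 = matmul(matmul(P2, P2), P)    # P^(5) = P^5 by Chapman-Kolmogorov
```

The (B, A) entry of `P2` reproduces the 0.1875 computed by hand above.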

    P^(∞) = lim_{n→∞} P^n.

# Chapter 2: Markov Chains (part 2)

Topics: basic questions; first step analysis.

## Some Markov Chain Models

## An Inventory Model

Let X_n denote the number of consumable items somebody has at the end of week n. In week n, ξ_n items will be consumed, with P(ξ_n = k) = a_k, k = 0, 1, 2, ...

On the weekend, if X_n > s, nothing is bought; if X_n ≤ s, then S - X_n items are bought. Then

    X_{n+1} = X_n - ξ_{n+1}, if s < X_n ≤ S;
    X_{n+1} = S - ξ_{n+1},   if X_n ≤ s.

1. It is clear that {X_n : n = 0, 1, 2, ...} is a MC (why?).

2. Let s = 0 and S = 2, with P(ξ_n = 0) = 0.5, P(ξ_n = 1) = 0.4 and P(ξ_n = 2) = 0.1. What is the transition probability matrix? With states -1, 0, 1, 2 (where -1 means borrowing one item),

              -1    0    1    2
        -1     0   0.1  0.4  0.5
    P =  0     0   0.1  0.4  0.5
         1    0.1  0.4  0.5   0
         2     0   0.1  0.4  0.5

3. (Unsolved problem) How frequently do you need to purchase? Numerically,

               -1      0      1      2
         -1  0.0444 0.2333 0.4444 0.2778
    P^n → 0  0.0444 0.2333 0.4444 0.2778
          1  0.0444 0.2333 0.4444 0.2778
          2  0.0444 0.2333 0.4444 0.2778


## The Ehrenfest Urn Models

Suppose there are 2a balls; among them, k balls are in container (urn) A and 2a - k are in container B. A ball is selected at random (all selections equally likely) from the 2a balls and moved to the other container. Let Y_n be the number of balls in container A at the nth stage. Then Y_n is a Markov chain with S = {0, 1, 2, ..., 2a} and

    P(Y_{n+1} = i + 1 | Y_n = i) = (2a - i)/(2a), if 0 ≤ i ≤ 2a;
    P(Y_{n+1} = i - 1 | Y_n = i) = i/(2a);
    P(Y_{n+1} = j | Y_n = i) = 0, if j ≠ i - 1, i + 1.

For a = 2, the transition probability matrix is

              0    1    2    3    4
        0     0    1    0    0    0
        1    1/4   0   3/4   0    0
    P = 2     0   1/2   0   1/2   0
        3     0    0   3/4   0   1/4
        4     0    0    0    1    0

We have, as k → ∞,

                0     1     2     3     4
          0   0.125   0   0.75    0   0.125
          1     0    0.5    0    0.5    0
    P^2k → 2  0.125   0   0.75    0   0.125
          3     0    0.5    0    0.5    0
          4   0.125   0   0.75    0   0.125

and

                  0     1     2     3     4
            0     0    0.5    0    0.5    0
            1   0.125   0   0.75    0   0.125
    P^2k+1 → 2    0    0.5    0    0.5    0
            3   0.125   0   0.75    0   0.125
            4     0    0.5    0    0.5    0
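The alternation between even and odd powers can be checked numerically. A sketch (the `matmul` helper is ours; the matrix is the a = 2 case above, and a large even power stands in for the even-power limit):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Ehrenfest urn with 2a = 4 balls, states 0..4.
P = [[0,   1,   0,   0,   0],
     [1/4, 0,   3/4, 0,   0],
     [0,   1/2, 0,   1/2, 0],
     [0,   0,   3/4, 0,   1/4],
     [0,   0,   0,   1,   0]]

Pn = P
for _ in range(49):   # Pn = P^50, a large even power
    Pn = matmul(Pn, P)
```

Starting from an even state, an even number of steps can only land on even states, which is why the odd columns of `Pn` vanish in row 0.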


## A discrete queueing Markov Chain

Customers arrive for service and take their place in a waiting line.¹ Suppose that

    P(k customers arrive in a service period) = P(ξ_n = k) = a_k,

where the ξ_n are independent with the same distribution ξ. In each service period, only one customer is served. Let X_n be the number of customers waiting for service. Then

    X_{n+1} = max(X_n - 1, 0) + ξ_n.

Based on this, the transition probability matrix is

              0    1    2    3   ...
        0    a0   a1   a2   a3  ...
        1    a0   a1   a2   a3  ...
    P = 2     0   a0   a1   a2  ...
        3     0    0   a0   a1  ...
        ...

Unsolved questions:

1. What is P^n?

2. Intuitively, (a) if Eξ > 1, then the number of customers waiting for service will increase without bound; (b) if Eξ < 1, what is the probability that there will be k customers waiting for service? [If you are the only hairdresser in a barbershop, how many chairs do you need to provide?]

¹A more advanced topic.
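Given any arrival distribution (a_k), the matrix can be generated row by row. A sketch in Python (the helper name, the truncation size and the example distribution are our own illustrative choices, not from the notes):

```python
def queue_matrix(a, size):
    """Transition matrix of X_{n+1} = max(X_n - 1, 0) + xi_n, truncated to `size` states.

    `a` is the arrival distribution: a[k] = P(xi_n = k).
    """
    def p(i, j):
        base = max(i - 1, 0)    # customers left after one service
        k = j - base            # arrivals needed to reach state j
        return a[k] if 0 <= k < len(a) else 0.0
    return [[p(i, j) for j in range(size)] for i in range(size)]

# Illustrative arrival distribution with E(xi) = 0.6 < 1.
a = [0.5, 0.4, 0.1]
P = queue_matrix(a, 6)
```

Rows 0 and 1 coincide, as in the matrix above, since both leave 0 customers in service before arrivals. (Rows near the truncation boundary do not sum to 1 because the state space is cut off.)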


## First Step Analysis

Motivating Example

In the Gambler's Ruin example (with N = 4 and X_0 = 3):

1. What is the probability that the gambler eventually goes broke (or wins \$N)?
2. On average, how many games can he play before he goes broke?
3. How many times can he have k dollars (0 < k < N) before the game ends?

Recall (for N = 4) that

              0    1    2    3    4
        0     1    0    0    0    0
        1    2/3   0   1/3   0    0
    P = 2     0   2/3   0   1/3   0
        3     0    0   2/3   0   1/3
        4     0    0    0    0    1

States 0 and 4 are absorbing states.

Definition If P_ii = 1, then state i is an absorbing state.

Two important variables:

    T = min{n ≥ 0 : X_n = 0 or X_n = N}, the length of the game;

    {X_T = 0}, the event that the game ends in state 0 (or {X_T = N}, the event that the game ends in state N).

With this notation, we go back to the questions:

* Question 1 is to calculate u_3 = P(X_T = 0 | X_0 = 3); more generally, u_i = P(X_T = 0 | X_0 = i), i = 1, 2, 3.
* Question 2 is to calculate v_3 = E(T | X_0 = 3); more generally, v_i = E(T | X_0 = i).
* Question 3 is to calculate w_{3k} = E(Σ_{n=0}^{T-1} I(X_n = k) | X_0 = 3); more generally, w_{ik} = E(Σ_{n=0}^{T-1} I(X_n = k) | X_0 = i), i = 1, 2, 3.

We have the following important relations:

    P(X_T = 0 | X_1 = i) = P(X_T = 0 | X_0 = i);

    E(T | X_1 = i) = 1 + E(T | X_0 = i);

    E(Σ_{n=1}^{T-1} I(X_n = k) | X_1 = i) = E(Σ_{n=0}^{T-1} I(X_n = k) | X_0 = i),

where in the last relation the visit at time 0 contributes an extra 1 to the total count if i = k, and 0 if i ≠ k.

Calculation of u_i in the gambler's example Let u_i = P(X_T = 0 | X_0 = i), i = 0, 1, 2, 3, 4. It is easy to see from the example (N = 4) that

    u_0 = P(X_T = 0 | X_0 = 0) = 1,  u_4 = P(X_T = 0 | X_0 = 4) = 0.

[Figure: three simulated sample paths of the gambler's fortune, states 0-5 against time 0-10, each ending at absorption.]

By the important relations, we have

    u_3 = P(X_T = 0 | X_0 = 3)
        = Σ_{i=0}^4 P(X_T = 0 | X_0 = 3, X_1 = i) P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 P(X_T = 0 | X_1 = i) P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 u_i P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 u_i p_{3i}.

More generally,

    u_j = Σ_{i=0}^4 u_i p_{ji},  j = 1, 2, 3,

i.e. u_0 = 1,

    u_1 = p_{10} u_0 + p_{11} u_1 + p_{12} u_2 + p_{13} u_3 + p_{14} u_4
    u_2 = p_{20} u_0 + p_{21} u_1 + p_{22} u_2 + p_{23} u_3 + p_{24} u_4
    u_3 = p_{30} u_0 + p_{31} u_1 + p_{32} u_2 + p_{33} u_3 + p_{34} u_4
    u_4 = 0,

or

    u_0 = 1
    u_1 = (2/3) u_0 + 0 u_1 + (1/3) u_2 + 0 u_3 + 0 u_4
    u_2 = 0 u_0 + (2/3) u_1 + 0 u_2 + (1/3) u_3 + 0 u_4
    u_3 = 0 u_0 + 0 u_1 + (2/3) u_2 + 0 u_3 + (1/3) u_4
    u_4 = 0.

We have u_1 = 14/15, u_2 = 4/5, u_3 = 8/15. Interestingly,

             1    0  0  0    0
           14/15  0  0  0   1/15
    P^n →   4/5   0  0  0   1/5
           8/15   0  0  0   7/15
             0    0  0  0    1
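The linear system for the u_i can be checked mechanically, e.g. by iterating the first-step equations until they converge. A sketch (fixed-point iteration converges here because states 0 and 4 are absorbing; 200 rounds are far more than needed):

```python
# Gambler's ruin with N = 4: first-step equations for u_i = P(ruin | X_0 = i).
u = [1.0, 0.0, 0.0, 0.0, 0.0]          # boundary values: u_0 = 1, u_4 = 0
for _ in range(200):
    u = [1.0,
         (2/3) * u[0] + (1/3) * u[2],  # u_1 = (2/3)u_0 + (1/3)u_2
         (2/3) * u[1] + (1/3) * u[3],  # u_2 = (2/3)u_1 + (1/3)u_3
         (2/3) * u[2] + (1/3) * u[4],  # u_3 = (2/3)u_2 + (1/3)u_4
         0.0]
```

The iterates converge to u_1 = 14/15, u_2 = 4/5, u_3 = 8/15, matching the solution above.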

Calculation of v_i = E(T | X_0 = i) in the gambler's example Note that

    v_0 = E(T | X_0 = 0) = 0,  v_4 = E(T | X_0 = 4) = 0.

By the important relations,

    v_3 = E(T | X_0 = 3)
        = Σ_{i=0}^4 E(T | X_0 = 3, X_1 = i) P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 E(T | X_1 = i) P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 (1 + v_i) P(X_1 = i | X_0 = 3)
        = Σ_{i=0}^4 p_{3i} + Σ_{i=0}^4 v_i p_{3i}
        = 1 + Σ_{i=0}^4 v_i p_{3i}.

In general,

    v_j = E(T | X_0 = j) = 1 + Σ_{i=0}^4 v_i p_{ji},  j = 1, 2, 3,

or

    v_0 = 0
    v_1 = 1 + (2/3) v_0 + 0 v_1 + (1/3) v_2 + 0 v_3 + 0 v_4
    v_2 = 1 + 0 v_0 + (2/3) v_1 + 0 v_2 + (1/3) v_3 + 0 v_4
    v_3 = 1 + 0 v_0 + 0 v_1 + (2/3) v_2 + 0 v_3 + (1/3) v_4
    v_4 = 0.

We have v_1 = 11/5, v_2 = 18/5, v_3 = 17/5.

Calculation of w_{i3} = E(Σ_{n=0}^{T-1} I(X_n = 3) | X_0 = i) in the gambler's example Following the same idea,

    w_{33} = 1 + p_{30} w_{03} + p_{31} w_{13} + p_{32} w_{23} + p_{33} w_{33} + p_{34} w_{43},

and for i ≠ 3,

    w_{i3} = p_{i0} w_{03} + p_{i1} w_{13} + p_{i2} w_{23} + p_{i3} w_{33} + p_{i4} w_{43}.

Special cases: w_{03} = 0, w_{43} = 0. Combining these, we have

    w_{03} = 0
    w_{13} = (2/3) w_{03} + 0 w_{13} + (1/3) w_{23} + 0 w_{33} + 0 w_{43}
    w_{23} = 0 w_{03} + (2/3) w_{13} + 0 w_{23} + (1/3) w_{33} + 0 w_{43}
    w_{33} = 1 + 0 w_{03} + 0 w_{13} + (2/3) w_{23} + 0 w_{33} + (1/3) w_{43}
    w_{43} = 0.

We have w_{13} = 0.2, w_{23} = 0.6, w_{33} = 1.4.
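The same fixed-point check works for v_i and w_{i3}. A sketch (the equations iterated are exactly those displayed above):

```python
# Gambler's ruin with N = 4: expected duration v_i and expected visits to state 3, w_{i3}.
v = [0.0] * 5
w = [0.0] * 5
for _ in range(200):
    v = [0.0,
         1 + (2/3) * v[0] + (1/3) * v[2],
         1 + (2/3) * v[1] + (1/3) * v[3],
         1 + (2/3) * v[2] + (1/3) * v[4],
         0.0]
    w = [0.0,
         (2/3) * w[0] + (1/3) * w[2],
         (2/3) * w[1] + (1/3) * w[3],
         1 + (2/3) * w[2] + (1/3) * w[4],   # the "+1" counts the visit at time 0 when i = 3
         0.0]
```

The limits agree with v_1 = 11/5, v_2 = 18/5, v_3 = 17/5 and w_{13} = 0.2, w_{23} = 0.6, w_{33} = 1.4.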

Example (Random walk with 3 states) A person walks randomly between positions 0, 1 and 2. Let X_n be the position (0, 1 or 2) at time n. Writing α, β, γ for the transition probabilities out of state 1 and a, b, c for those out of state 2, the transition probability matrix is

              0   1   2
        0     1   0   0
    P = 1     α   β   γ
        2     a   b   c

Suppose he starts in state 2.

1. What is the probability that he will be trapped in 0?
2. What is the average time until he first reaches 0?
3. What is the expected number of times he visits state 1 before absorption?

Define T = min{n : X_n = 0}, the length of the process, so that {X_T = 0} is the event that the walk is trapped in state 0. Let

    u_i = P(X_T = 0 | X_0 = i),  v_i = E(T | X_0 = i),  w_i = E(Σ_{n=0}^{T-1} I(X_n = 1) | X_0 = i),

where i = 0, 1, 2. We have u_0 = 1 and

    u_j = Σ_{i=0}^2 u_i p_{ji},  j = 1, 2,

i.e.

    u_0 = 1
    u_1 = p_{10} u_0 + p_{11} u_1 + p_{12} u_2
    u_2 = p_{20} u_0 + p_{21} u_1 + p_{22} u_2,

or

    u_0 = 1
    u_1 = α u_0 + β u_1 + γ u_2
    u_2 = a u_0 + b u_1 + c u_2.

The answer to question 1 is u_2 = {bα + (1 - β)a}/{(1 - β)(1 - c) - bγ}. Note that v_0 = 0 and

    v_j = 1 + Σ_{i=0}^2 v_i p_{ji},  j = 1, 2,

or

    v_0 = 0
    v_1 = 1 + β v_1 + γ v_2
    v_2 = 1 + b v_1 + c v_2.

We have

    v_1 = {(1 - c) + γ}/{(1 - β)(1 - c) - bγ},
    v_2 = {b + (1 - β)}/{(1 - β)(1 - c) - bγ}.

The answer to the second question is v_2 = {b + (1 - β)}/{(1 - β)(1 - c) - bγ}. Similarly, w_0 = 0,
    w_1 = 1 + Σ_{i=0}^2 w_i p_{1i},  w_2 = Σ_{i=0}^2 w_i p_{2i},

i.e.

    w_0 = 0
    w_1 = 1 + β w_1 + γ w_2
    w_2 = b w_1 + c w_2.

We have

    w_1 = (1 - c)/{(1 - β)(1 - c) - bγ},
    w_2 = b/{(1 - β)(1 - c) - bγ}.

The answer to the third question is w_2 = b/{(1 - β)(1 - c) - bγ}.
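The closed-form answers can be spot-checked against a direct numerical solution for one concrete choice of the parameters. A sketch (the values α = 0.2, β = 0.3, γ = 0.5 and a = 0.1, b = 0.4, c = 0.5 are our own illustrative choices; each row sums to 1):

```python
alpha, beta, gamma = 0.2, 0.3, 0.5    # row of state 1 (illustrative values)
a, b, c = 0.1, 0.4, 0.5               # row of state 2 (illustrative values)
D = (1 - beta) * (1 - c) - b * gamma  # common denominator of the formulas

# Closed forms derived above.
v1_formula = ((1 - c) + gamma) / D
v2_formula = (b + 1 - beta) / D
w1_formula = (1 - c) / D
w2_formula = b / D

# Fixed-point (Jacobi) iteration of the first-step equations.
v1 = v2 = w1 = w2 = 0.0
for _ in range(500):
    v1, v2 = 1 + beta * v1 + gamma * v2, 1 + b * v1 + c * v2
    w1, w2 = 1 + beta * w1 + gamma * w2, b * w1 + c * w2
```

Both routes give the same v and w, which is a useful check on the algebra.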


Example (A Maze) A white rat is put into the maze.

[Figure: a 3×3 maze of compartments numbered 1-9; compartments 3 and 7 contain shocks and compartment 9 contains food.]

In the absence of learning, one might hypothesize that the rat moves through the maze at random: if there are k ways to leave a compartment, the rat chooses each with probability 1/k. Assume the rat moves to some adjacent compartment at each unit of time. Let X_n be the compartment occupied at stage n. Suppose that compartment 9 contains food and compartments 3 and 7 contain electrical shocking mechanisms.

1. What is the transition probability matrix?

              1    2    3    4    5    6    7    8    9
        1     0   1/2   0   1/2   0    0    0    0    0
        2    1/3   0   1/3   0   1/3   0    0    0    0
        3     0    0    1    0    0    0    0    0    0
        4    1/3   0    0    0   1/3   0   1/3   0    0
    P = 5     0   1/4   0   1/4   0   1/4   0   1/4   0
        6     0    0   1/3   0   1/3   0    0    0   1/3
        7     0    0    0    0    0    0    1    0    0
        8     0    0    0    0   1/3   0   1/3   0   1/3
        9     0    0    0    0    0    0    0    0    1

2. If the rat starts in compartment 1, what is the probability that the rat encounters food before being shocked?

Let T = min{n : X_n = 3, 7 or 9} and let u_i = P(X_T = 9 | X_0 = i), i.e. the probability that the rat is absorbed by the food. Note that there are 3 absorbing states: 3, 7 and 9. It is easy to see that u_3 = 0 and u_7 = 0. For the other states i = 1, 2, 4, 5, 6, 8 we have

    u_i = p_{i1} u_1 + p_{i2} u_2 + p_{i3} u_3 + p_{i4} u_4 + p_{i5} u_5 + p_{i6} u_6 + p_{i7} u_7 + p_{i8} u_8 + p_{i9} u_9,

in detail one such equation for each of u_1, u_2, u_4, u_5, u_6, u_8, together with u_9 = 1.

[PLEASE FILL IN THE COEFFICIENTS] Finally, we have

    u_1 = 0.1429, u_2 = 0.1429, u_3 = 0, u_4 = 0.1429, u_5 = 0.2857, u_6 = 0.4286, u_7 = 0, u_8 = 0.4286, u_9 = 1.

Example (A model of Fecundity) Changes in sociological patterns such as increase in age at marriage, more remarriages after widowhood, and increased divorce rates have profound effects on overall population growth rates. Here we attempt to model the life span of a female in a population in order to provide a framework for analyzing the effect of social changes on average fecundity. For a typical woman, we may categorize her into one of the following states:

    E0: Prepuberty;  E1: Single;  E2: Married;  E3: Divorced;  E4: Widowed;

    E5: Died or emigrated from the population.

Suppose the transition probability matrix is

              E0   E1   E2   E3   E4   E5
        E0     0   0.9    0    0    0   0.1
        E1     0   0.5  0.4    0    0   0.1
    P = E2     0    0   0.6  0.2  0.1  0.1
        E3     0    0   0.4  0.5    0  0.1
        E4     0    0   0.4    0  0.5  0.1
        E5     0    0     0    0    0    1

We are interested in the mean duration spent in state E2, Married, since this corresponds to the state of maximum fecundity. Let w_{i2} be the mean duration in state E2 given that the initial state is E_i. From the first step analysis, we have

    w_{22} = 1 + p_{20} w_{02} + p_{21} w_{12} + p_{22} w_{22} + p_{23} w_{32} + p_{24} w_{42} + p_{25} w_{52}.

For the absorbing state E5, we have w_{52} = 0. If E_i is neither state E2 nor the absorbing state E5,

    w_{i2} = p_{i0} w_{02} + p_{i1} w_{12} + p_{i2} w_{22} + p_{i3} w_{32} + p_{i4} w_{42} + p_{i5} w_{52}.

All together, we have

    w_{02} = 0.9 w_{12}
    w_{12} = 0.5 w_{12} + 0.4 w_{22}
    w_{22} = 1 + 0.6 w_{22} + 0.2 w_{32} + 0.1 w_{42}
    w_{32} = 0.4 w_{22} + 0.5 w_{32}
    w_{42} = 0.4 w_{22} + 0.5 w_{42}
    w_{52} = 0.

The solution is w_{02} = 4.5, w_{12} = 5, w_{22} = 6.25, w_{32} = 5, w_{42} = 5, w_{52} = 0.
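The displayed solution can be verified by iterating the system. A sketch (the coefficients are the rows of the fecundity matrix; the iteration converges because every state reaches the absorbing state E5 with probability 0.1 per step):

```python
# Mean number of periods spent in E2 (Married), starting from each state.
w = [0.0] * 6                   # w[i] = w_{i2}; w[5] = 0 (absorbing)
for _ in range(500):
    w = [0.9 * w[1],
         0.5 * w[1] + 0.4 * w[2],
         1 + 0.6 * w[2] + 0.2 * w[3] + 0.1 * w[4],
         0.4 * w[2] + 0.5 * w[3],
         0.4 * w[2] + 0.5 * w[4],
         0.0]
```

The limit reproduces w_{02} = 4.5, w_{12} = 5 and w_{22} = 6.25.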

Each female, on the average, spends w_{02} = 4.5 periods in the childbearing state E2 during her lifetime.

Example (A process with short-term memory, e.g. the weather depends on the past m days) We constrain the weather to two states, s: sunny and c: cloudy, and consider the weather on consecutive days X_{n-1}, X_n, X_{n+1}.

Suppose that, given the weather in the previous two days, we can predict the weather on the following day:

- sunny (yesterday) + sunny (today) → sunny (tomorrow) with probability 0.8, cloudy (tomorrow) with probability 0.2;
- cloudy (yesterday) + sunny (today) → sunny (tomorrow) with probability 0.6, cloudy (tomorrow) with probability 0.4;
- sunny (yesterday) + cloudy (today) → sunny (tomorrow) with probability 0.4, cloudy (tomorrow) with probability 0.6;
- cloudy (yesterday) + cloudy (today) → sunny (tomorrow) with probability 0.1, cloudy (tomorrow) with probability 0.9.

Let X_n be the weather on the nth day. Then the state space is S = {s, c}. We have

    P(X_{n+1} = s | X_{n-1} = s, X_n = s) = 0.8 ≠ 0.6 = P(X_{n+1} = s | X_{n-1} = c, X_n = s),

so {X_t} is not a MC. Let Y_n = (X_{n-1}, X_n) be the weather of the day and the previous day. Then the state space for {Y_n} is S = {(s,s), (s,c), (c,s), (c,c)}, and {Y_n} is a MC with transition probability matrix

               (s,s)  (s,c)  (c,s)  (c,c)
        (s,s)   0.8    0.2     0      0
    P = (s,c)    0      0     0.4    0.6
        (c,s)   0.6    0.4     0      0
        (c,c)    0      0     0.1    0.9

Suppose that in the past two days the weather was (c, c). In how many days, on average, can we expect to have two successive sunny days? To solve this, we define a new MC by recording the weather on successive days and stopping once two successive sunny days occur. Denote the process by {Z_n}. The transition probability matrix is then

               (s,s)  (s,c)  (c,s)  (c,c)
        (s,s)    1      0      0      0
    P = (s,c)    0      0     0.4    0.6
        (c,s)   0.6    0.4     0      0
        (c,c)    0      0     0.1    0.9

Denote the states (s,s), (s,c), (c,s) and (c,c) by 1, 2, 3 and 4 respectively, and let v_i denote the expected number of days until we first have two successive sunny days, starting from state i. By the first step analysis, we have the equations

    v_1 = 0
    v_2 = 1 + p_{21} v_1 + p_{22} v_2 + p_{23} v_3 + p_{24} v_4
    v_3 = 1 + p_{31} v_1 + p_{32} v_2 + p_{33} v_3 + p_{34} v_4
    v_4 = 1 + p_{41} v_1 + p_{42} v_2 + p_{43} v_3 + p_{44} v_4,

i.e.

    v_1 = 0
    v_2 = 1 + 0.4 v_3 + 0.6 v_4
    v_3 = 1 + 0.6 v_1 + 0.4 v_2
    v_4 = 1 + 0.1 v_3 + 0.9 v_4.

We have v_2 = 13.3333, v_3 = 6.3333, v_4 = 16.3333.
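These values can be confirmed by iterating the four equations. A sketch (states 1-4 are (s,s), (s,c), (c,s), (c,c); the answer to the question is the (c,c) entry):

```python
# Expected number of days until two successive sunny days, by starting state.
v = [0.0] * 4                 # v[0] is state (s,s), where we are already done
for _ in range(2000):
    v = [0.0,
         1 + 0.4 * v[2] + 0.6 * v[3],   # from (s,c)
         1 + 0.6 * v[0] + 0.4 * v[1],   # from (c,s)
         1 + 0.1 * v[2] + 0.9 * v[3]]   # from (c,c)
```

The limits are v_2 = 40/3 ≈ 13.33, v_3 = 19/3 ≈ 6.33 and v_4 = 49/3 ≈ 16.33, matching the solution above.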

## September 21, 2005

Basic questions:

1. Regular MC;
2. How to calculate the limiting distribution;
3. How to interpret the distribution.

## Regular transition probability matrix

Questions:

1. lim_{n→∞} P_ij^(n) = ?
2. If the above limit exists, does it depend on i?

Definition A MC (with transition probability matrix P) is called regular if there exists an integer k > 0 such that all the elements of P^k are strictly positive. We call the corresponding transition probability matrix a regular transition probability matrix.

Example

         0.8  0.2   0    0
    P =  0.8   0   0.2   0
         0.8   0    0   0.2
         0.8   0    0   0.2

We have

           0.8  0.16  0.04   0
    P^2 =  0.8  0.16   0    0.04
           0.8  0.16   0    0.04
           0.8  0.16   0    0.04

           0.8  0.16  0.032  0.008
    P^3 =  0.8  0.16  0.032  0.008
           0.8  0.16  0.032  0.008
           0.8  0.16  0.032  0.008

Thus, P is a regular transition probability matrix. We further have

           0.8  0.16  0.032  0.008
    P^n →  0.8  0.16  0.032  0.008
           0.8  0.16  0.032  0.008
           0.8  0.16  0.032  0.008
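The powers above can be reproduced directly. A sketch (the `matmul` helper is ours):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.8, 0.2, 0.0, 0.0],
     [0.8, 0.0, 0.2, 0.0],
     [0.8, 0.0, 0.0, 0.2],
     [0.8, 0.0, 0.0, 0.2]]

P2 = matmul(P, P)
P3 = matmul(P2, P)
```

`P2` still contains zeros, but `P3` is strictly positive (so P is regular) and already has identical rows, i.e. the limit is reached at n = 3 for this particular chain.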

By contrast, for a non-regular chain (here one with two absorbing states) the powers approach a limit whose rows are not identical, e.g.

              1      0      0                 1       0       0
    P^100 =  0.278  0.027  0.695,  ...,      0.2857  0.000   0.7134  = lim P^n,
              0      0      1                 0       0       1

so the long-run behavior depends on the starting state.

Some methods to check whether a MC is regular:

1. (sufficient and necessary) If a MC is regular, then there is a k such that all elements of P^n are greater than 0 for all n ≥ k. [Instead of checking P^k for k = 1, 2, 3, ..., we may check P^{2k} for k = 1, 2, ...]
2. (sufficient) For every pair of states i, j there is a path k_1, k_2, ..., k_r such that P_{ik_1} P_{k_1 k_2} ... P_{k_r j} > 0, and there is at least one state i for which P_{ii} > 0.
3. (necessary) If a MC has absorbing states, then it is not a regular MC.

For a regular MC, we have the following observations:

1. the limit lim_{n→∞} P^n exists;

2. the starting state does not affect the long-run behavior of the MC.

Theorem Suppose that P is a regular transition probability matrix with states S = {0, 1, 2, ..., N}. Then

(1) lim_{n→∞} P_ij^(n) exists;

(2) the limit does not depend on the starting state i; denote it by π_j = lim_{n→∞} P_ij^(n);

(3) Σ_{k=0}^N π_k = 1 (we call π = (π_0, π_1, ..., π_N) a limiting distribution);

(4) the limits π = (π_0, π_1, ..., π_N)^T are the solution of the equations

    π_j = Σ_{k=0}^N π_k P_kj,  j = 0, 1, 2, ..., N,  with  Σ_{k=0}^N π_k = 1,   (1.1)

or, in matrix form, π P = π, Σ_{k=0}^N π_k = 1;

(5) the limiting distribution is unique.

[Proof of (3) and (4): Note that

    Σ_{j=0}^N P(X_n = j | X_0 = i) = 1  and  P_ij^(n) = Σ_{k=0}^N P_ik^(n-1) P_kj,  j = 0, 1, ..., N;

letting n → ∞ gives Σ_j π_j = 1 and π_j = Σ_{k=0}^N π_k P_kj.

Proof of (5): We need to show that if x_0, x_1, ..., x_N is a solution of (1.1), i.e.

    x_j = Σ_{k=0}^N x_k P_kj,  j = 0, 1, 2, ..., N,   (1.2)

    Σ_{k=0}^N x_k = 1,   (1.3)

then x_j = π_j for all j. Multiplying (1.2) by P_jl and summing over j,

    Σ_{j=0}^N x_j P_jl = Σ_{j=0}^N Σ_{k=0}^N x_k P_kj P_jl = Σ_{k=0}^N x_k Σ_{j=0}^N P_kj P_jl = Σ_{k=0}^N x_k P_kl^(2)  (why?),

while by (1.2) the left-hand side equals x_l; thus

    x_l = Σ_{k=0}^N x_k P_kl^(2),  l = 0, 1, ..., N.

By induction,

    x_j = Σ_{k=0}^N x_k P_kj^(n)  for all n.

Letting n → ∞ and using (1.3), we have

    x_j = Σ_{k=0}^N x_k π_j = π_j Σ_{k=0}^N x_k = π_j  for all j.

Thus x_j = π_j for all j.]

Interpretation of π_j, j = 0, 1, 2, ..., N:

1. π_j is the long-run (unconditional) probability that the MC is in state j;

2. π_j is the limiting distribution: π_j = lim_{n→∞} P(X_n = j | X_0 = i) (note that it is independent of the initial state);

3. π_j is the long-run mean fraction of visits to state j. [Define the indicator function

    I{X_k = j} = 1, if X_k = j;  0, if X_k ≠ j.

By the definition, (a) E(I{X_k = j} | X_0 = i) = P(X_k = j | X_0 = i); (b) I{X_k = j} = 1 means the MC is visiting state j at time k. Then the expected fraction of visits is

    E( (1/m) Σ_{k=0}^{m-1} I{X_k = j} | X_0 = i )
    = (1/m) Σ_{k=0}^{m-1} E(I{X_k = j} | X_0 = i)
    = (1/m) Σ_{k=0}^{m-1} P_ij^(k) → π_j  as m → ∞.

A mathematical fact is used here: if a_n → a, then (1/n) Σ_{k=1}^n a_k → a.]

In matrix form,

            π_0  π_1  π_2  ...  π_N
    P^∞ =   π_0  π_1  π_2  ...  π_N
            ...
            π_0  π_1  π_2  ...  π_N

Calculation of the limiting distribution:

1. (if you have software) Approximate it by the definition: calculate P^n for sufficiently large n. If all the rows are (nearly) the same, then π_i is the common value in column i.

2. Solve the equations (note that there are N+2 equations with N+1 unknowns): delete any one of the first N+1 equations, then solve the remaining system.

Example For each one-step transition probability matrix below, find the limiting distribution.

1. For

              0    1    2    3
        0    0.1  0.5   0   0.4
    P = 1     0    0    1    0
        2     0    0    0    1
        3     1    0    0    0

[Let (π_0, π_1, π_2, π_3) be the limiting distribution. Then

    0.1 π_0 + π_3 = π_0
    0.5 π_0 = π_1
    π_1 = π_2
    0.4 π_0 + π_2 = π_3
    π_0 + π_1 + π_2 + π_3 = 1.

The solution is (π_0, π_1, π_2, π_3) = (0.3448, 0.1724, 0.1724, 0.3103).]

2. If the matrix is not regular, then we cannot use the above method. For example, the Ehrenfest urn model with a = 1:

              0    1    2
        0     0    1    0
    P = 1    1/2   0   1/2
        2     0    1    0
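For the regular matrix in item 1, approach 1 can be carried out by iterating a distribution rather than forming full matrix powers. A sketch:

```python
P = [[0.1, 0.5, 0.0, 0.4],
     [0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0],
     [1.0, 0.0, 0.0, 0.0]]

pi = [0.25] * 4                    # any initial distribution works
for _ in range(2000):              # pi <- pi P, repeated
    pi = [sum(pi[k] * P[k][j] for k in range(4)) for j in range(4)]
```

The exact solution is π_0 = 1/2.9, π_1 = π_2 = 0.5/2.9, π_3 = 0.9/2.9, i.e. (0.3448, 0.1724, 0.1724, 0.3103).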

Example (A process with short-term memory: the weather depends on the past two days, continued) We constrain the weather to two states, s: sunny and c: cloudy. Let Y_n = (X_{n-1}, X_n) be the weather of the day and the previous day. Then the state space for {Y_n} is S = {(s,s), (s,c), (c,s), (c,c)}, and {Y_n} is a MC with transition probability matrix

               (s,s)  (s,c)  (c,s)  (c,c)
        (s,s)   0.8    0.2     0      0
    P = (s,c)    0      0     0.4    0.6
        (c,s)   0.6    0.4     0      0
        (c,c)    0      0     0.1    0.9

To calculate the limiting distribution π = (π_0, π_1, π_2, π_3):

Approach 1 (by definition):

            0.3228  0.1029  0.0893  0.4850             0.2727  0.0909  0.0909  0.5455
    P^10 =  0.2679  0.0898  0.0911  0.5512    P^100 =  0.2727  0.0909  0.0909  0.5455
            0.3087  0.0995  0.0898  0.5019             0.2727  0.0909  0.0909  0.5455
            0.2425  0.0837  0.0919  0.5820             0.2727  0.0909  0.0909  0.5455

so we have π = (0.2727, 0.0909, 0.0909, 0.5455)^T.

Approach 2 (by Theorem 4.1):

    0.8 π_0 + 0.6 π_2 = π_0
    0.2 π_0 + 0.4 π_2 = π_1
    0.4 π_1 + 0.1 π_3 = π_2
    0.6 π_1 + 0.9 π_3 = π_3
    π_0 + π_1 + π_2 + π_3 = 1.

Deleting one of the first 4 equations and solving the rest, we have π_0 = 3/11, π_1 = 1/11, π_2 = 1/11, π_3 = 6/11. Moreover, the long-run proportions of sunny and cloudy days are

    P(s) = π((s,s)) + π((c,s)) = 4/11;  P(c) = π((s,c)) + π((c,c)) = 7/11.
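The answer π = (3/11, 1/11, 1/11, 6/11) can be confirmed by verifying πP = π directly with exact arithmetic. A sketch (states ordered (s,s), (s,c), (c,s), (c,c)):

```python
from fractions import Fraction as F

P = [[F(8, 10), F(2, 10), F(0),     F(0)],
     [F(0),     F(0),     F(4, 10), F(6, 10)],
     [F(6, 10), F(4, 10), F(0),     F(0)],
     [F(0),     F(0),     F(1, 10), F(9, 10)]]

pi = [F(3, 11), F(1, 11), F(1, 11), F(6, 11)]
pi_P = [sum(pi[k] * P[k][j] for k in range(4)) for j in range(4)]

# Long-run proportions of sunny and cloudy days.
p_sunny = pi[0] + pi[2]    # pi((s,s)) + pi((c,s))
p_cloudy = pi[1] + pi[3]   # pi((s,c)) + pi((c,c))
```

Note that p_sunny + p_cloudy = 1, with p_sunny = 4/11 and p_cloudy = 7/11.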

## September 26, 2005

## Classification of States

Definition For a Markov chain {X_n : n = 0, 1, 2, ...} with transition probability matrix P, state j is said to be accessible from state i, denoted by i → j, if P_ij^(n) > 0 for some n ≥ 0. Furthermore, two states i and j which are accessible to each other are said to communicate, and we write i ↔ j.

Example Consider the following transition probability matrices.

          0    0.50  0.50
    P =  0.75   0    0.25      (1 ↔ 2, 1 ↔ 3, 2 ↔ 3)
         0.75  0.25   0

         1.0   0    0    0
    P =   0   0.3  0.7   0
          0   0.5  0.5   0     (2 ↔ 3; 4 → 1, 4 → 3)
         0.2   0   0.1  0.7

         0.55  0.36  0.09
    P =  0.40  0.60   0        (1 ↔ 2, 1 ↔ 3, 2 ↔ 3)
         0.75   0    0.25

         0.3  0.4  0.3
    P =  1.0  0.0  0.0         (1 ↔ 2, 1 ↔ 3, 2 ↔ 3)
         0.0  0.3  0.7

Theorem Communication is an equivalence relation, i.e.

(1) i ↔ i (reflexivity);
(2) i ↔ j implies j ↔ i (symmetry);
(3) i ↔ j and j ↔ k imply i ↔ k (transitivity).

Proof: (1) and (2) are straightforward. Next, we show (3). By the conditions, there exist m > 0 and n > 0 such that P_ij^(m) > 0 and P_jk^(n) > 0. Then

    p_ik^(m+n) = Σ_l p_il^(m) p_lk^(n) ≥ p_ij^(m) p_jk^(n) > 0,

i.e. i → k. Similarly, k → i, and hence i and k communicate. Thus, we can write i ↔ j, j ↔ k and i ↔ k as i ↔ j ↔ k.

Definition A Markov chain is irreducible if all the states communicate with one another.

Example Which of the following Markov chains are irreducible?

          0    0.50  0.50
    P =  0.75   0    0.25
         0.75  0.25   0

         1.0   0    0    0
    P =   0   0.3  0.7   0
          0   0.5  0.5   0
         0.2   0   0.1  0.7

         0.5   0    0   0.5
    P =   0   0.3  0.7   0
          0   0.1  0.9   0
         0.6   0    0   0.4

## Periodicity of a Markov Chain

Definition For a state i, let d(i) be the greatest common divisor of {n ≥ 1 : P_ii^(n) > 0}. If d(i) > 1, then state i is periodic with period d(i); if d(i) = 1, then state i is aperiodic.

Example

         0  1  0
    P =  0  0  1
         1  0  0

It is easy to check that P_00^(1) = 0, P_00^(2) = 0, P_00^(3) = 1, P_00^(4) = 0, P_00^(5) = 0, P_00^(6) = 1, ..., and

    {n ≥ 1 : P_00^(n) > 0} = {3, 6, 9, ...}.

The greatest common divisor is d(0) = 3; thus the period of state 0 is d(0) = 3. What about states 1 and 2?

Example

              0    1    2    3
        0     0    1    0    0
    P = 1     0    0    1    0
        2     0    0    0    1
        3    0.5   0   0.5   0

We have d(0) = 2 because {n ≥ 1 : P_00^(n) > 0} = {4, 6, 8, ...}, and d(3) = 2 because {n ≥ 1 : P_33^(n) > 0} = {2, 4, 6, 8, ...}. What about states 1 and 2?

Example

              0    1    2    3
        0     0    0   0.3  0.7
    P = 1     0    0   0.4  0.6
        2    0.5  0.5   0    0
        3    0.4  0.6   0    0

We have d(i) = 2 for i = 0, 1, 2, 3.

Some useful facts about the period: P_ii^(n) > 0 only when n is a multiple of d(i); P_ii^(N d(i)) > 0 for all sufficiently large N; and if P_ji^(m) > 0, then P_ji^(m + n d(i)) > 0 for all sufficiently large n.

## Recurrent and Transient States

For any state i, define the probability that, starting from state i, the first return to i is at the nth transition:

    f_ii^(n) = P(X_1 ≠ i, X_2 ≠ i, ..., X_{n-1} ≠ i, X_n = i | X_0 = i).

[We define f_ii^(0) = 0.]

Theorem

    P_ii^(n) = P(X_n = i | X_0 = i) = Σ_{k=0}^n f_ii^(k) P_ii^(n-k).

Proof: Let C_k be the event that, starting from i, the first return to i occurs at the kth transition. Then

    P_ii^(n) = P( ∪_{k=1}^n ({X_n = i} ∩ C_k) | X_0 = i)
             = Σ_{k=1}^n P(C_k | X_0 = i) P(X_n = i | X_k = i)
             = Σ_{k=1}^n f_ii^(k) P_ii^(n-k)
             = Σ_{k=0}^n f_ii^(k) P_ii^(n-k).

The proof is now complete. Let

    f_ii = Σ_{n=0}^∞ f_ii^(n) = lim_{N→∞} Σ_{n=0}^N f_ii^(n).

Then f_ii is the probability that, starting from state i, the Markov chain returns to state i (at some time).

Definition If f_ii = 1, then i is recurrent; if f_ii < 1, then i is transient.

If i is a transient state, let N_i be the number of times that the Markov chain visits state i.

Theorem If i is transient, then

    E(N_i | X_0 = i) = f_ii / (1 - f_ii).

[Proof²:

    P(N_i ≥ 1 | X_0 = i) = Σ_{j=1}^∞ P(X_1 ≠ i, ..., X_{j-1} ≠ i, X_j = i | X_0 = i) = f_ii.

By the Markov property,

    P(N_i ≥ 2 | X_0 = i) = P(N_i ≥ 1 | X_0 = i) P(N_i ≥ 1 | X_0 = i) = f_ii².

In general,

    P(N_i ≥ k | X_0 = i) = f_ii^k.

Therefore

    E(N_i | X_0 = i) = Σ_{k=1}^∞ P(N_i ≥ k | X_0 = i) = Σ_{k=1}^∞ f_ii^k = f_ii/(1 - f_ii).]

²The proof might be difficult for you.

Example For a Markov chain with transition probability matrix

              1    2    3    4
        1    1.0   0    0    0
    P = 2     0   0.3  0.7   0
        3     0   0.6  0.4   0
        4    0.2   0   0.1  0.7

we have:

1. f_11 = 1, because f_11^(1) = 1, f_11^(2) = 0, f_11^(3) = 0, ...

2. f_22 = 1, because f_22^(1) = 0.3, f_22^(2) = 0.7 × 0.6, f_22^(3) = 0.7 × 0.4 × 0.6, f_22^(4) = 0.7 × 0.4² × 0.6, ..., so

    f_22 = Σ_{k=1}^∞ f_22^(k) = 0.3 + 0.7 × (1 + 0.4 + 0.4² + ...) × 0.6 = 1.

3. f_33 = 1 (why?)

4. f_44 < 1, because f_44^(1) = 0.7, f_44^(2) = 0, f_44^(3) = 0, ..., so f_44 = 0.7.

The expected numbers of visits are

    E(N_1 | X_0 = 1) = E(N_2 | X_0 = 2) = E(N_3 | X_0 = 3) = ∞,  E(N_4 | X_0 = 4) < ∞.

f44 = 0,

(2)

f44 = 0,

(3)

n=1

n=1

(n)

pii < .

(n)

Ni =
n=1

I (Xn = 1)

## where I (Xn = i) = We have E (Ni |X0 = i)

1, 0,

if Xn = i, if Xn = i

= E
n=0

=
n=0

=
n=0

## P (Xn = i|X0 = i) pii .

n=0 (n)

By the previous theorem, this expectation is finite exactly when f_ii < 1, and the theorem follows.]

Theorem If state i is recurrent (transient) and state i communicates with state j, then state j is recurrent (transient).

[Proof: i ↔ j implies there exist m, n ≥ 0 such that P_ij^(m) > 0 and P_ji^(n) > 0. For every v ≥ 0,

    P_jj^(n+v+m) ≥ P_ji^(n) P_ii^(v) P_ij^(m),

so

    Σ_{v=0}^∞ P_jj^(n+v+m) ≥ P_ji^(n) ( Σ_{v=0}^∞ P_ii^(v) ) P_ij^(m).

If i is recurrent, the right-hand side is infinite, hence Σ_v P_jj^(v) = ∞ and j is recurrent.]

Example (continued) For

         1.0   0    0    0
    P =   0   0.3  0.7   0
          0   0.5  0.5   0
         0.2   0   0.1  0.7

because state 2 is recurrent and 2 ↔ 3, state 3 is recurrent.

Example Consider a Markov chain with state space S = {a, b, c, d, e, f} and one-step transition probability matrix given by

         0.3  0.2  0.2  0.2  0.1   0
          0   0.5   0    0    0   0.5
    P =   0    0   0.4  0.6   0    0
          0   0.3  0.2  0.5   0    0
          0    1    0    0    0    0
         0.8   0    0    0   0.2   0

Which states are recurrent and which are transient?
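Returning to the four-state example above (the one with f_44 = 0.7), the series criterion can be illustrated numerically: the partial sums of P_ii^(n) grow without bound for the recurrent states but stay bounded for the transient state 4. A sketch (the `matmul` helper is ours):

```python
def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# States 1..4 of the example, 0-indexed here; index 3 (state 4) is transient.
P = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 0.3, 0.7, 0.0],
     [0.0, 0.6, 0.4, 0.0],
     [0.2, 0.0, 0.1, 0.7]]

sums = [0.0] * 4        # partial sums of P_ii^(n) for n = 1..400
Pn = P
for _ in range(400):
    for i in range(4):
        sums[i] += Pn[i][i]
    Pn = matmul(Pn, P)
```

For state 4, P_44^(n) = 0.7^n, so the partial sum converges to 0.7/0.3 ≈ 2.33, while the diagonal sums for states 1-3 keep growing with n.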

October 3, 2005

## The Basic Limit theorem of MC

Consider a recurrent state i. Recall that

    f_ii^(n) = P(X_1 ≠ i, X_2 ≠ i, ..., X_{n-1} ≠ i, X_n = i | X_0 = i).

Define the first return time R_i = min{n ≥ 1 : X_n = i}. Then f_ii^(n) = P(R_i = n | X_0 = i) for n = 1, 2, ... Since i is recurrent, f_ii = Σ_{n=1}^∞ f_ii^(n) = 1, and the mean duration between visits to state i is

    m_i = E(R_i | X_0 = i) = Σ_{n=1}^∞ n f_ii^(n).

Theorem (the basic limit theorem of Markov chains)¹ For a recurrent, irreducible and aperiodic Markov chain:

1. lim_{n→∞} P_ii^(n) = 1 / Σ_{n=1}^∞ n f_ii^(n) = 1/m_i;

2. lim_{n→∞} P_ji^(n) = lim_{n→∞} P_ii^(n) for every state j.

Thus lim_{n→∞} P_ii^(n) = π_i = 1/m_i.

¹The proof is beyond the scope of the module.

Example Consider a MC whose transition probability matrix is given by

              0    1    2    3
        0     0    1    0    0
    P = 1    0.1  0.5  0.2  0.2
        2    0.2  0.2  0.5  0.1
        3    0.3  0.3  0.4   0

Find the mean return time for the MC to go from state 0 back to state 0.

[Solution: Let π = (π_0, π_1, π_2, π_3) be the limiting distribution. Then

    π P = π,  π_0 + π_1 + π_2 + π_3 = 1.

The solution is π_0 = 0.1383, π_1 = 0.4609, π_2 = 0.2806, π_3 = 0.1202. Thus, the mean return time is m_0 = 1/π_0 = 7.2319.]
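A numerical check: power iteration gives π, and by the basic limit theorem the mean return time is its reciprocal. A sketch:

```python
P = [[0.0, 1.0, 0.0, 0.0],
     [0.1, 0.5, 0.2, 0.2],
     [0.2, 0.2, 0.5, 0.1],
     [0.3, 0.3, 0.4, 0.0]]

pi = [0.25] * 4
for _ in range(1000):          # pi <- pi P, repeated
    pi = [sum(pi[k] * P[k][j] for k in range(4)) for j in range(4)]

m0 = 1 / pi[0]                 # mean return time to state 0
```

This reproduces π_0 ≈ 0.1383 and m_0 ≈ 7.23.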

## A brief Review of Markov chain

1. Denition (a) Denition: Markov Chain, (stationary Markov Chain) Example[Weather forecasting] Let Xn be the weather, sunny (s) and cloudy (c), in the nth day. Whether {Xn , n > 0} is a MC depending the rule that the weather changes. For example i. If the weather in any day in determined by the weather in the previous day, then Xn is a MC. ii. If the weather in any day in determined by the weather in the past 2 days, then Xn is not a MC. Example[Gamblers Ruin] A fair coin is tossed repeatedly. At each toss a gambler wins 1\$ if a head shows and loses 1\$ if tails. He continues playing until his capital reaches m or he goes broke. (b) One-step transition probability matrix. For a MC, we are interested in pij = P (Xn+1 = j |Xn = i) 2

We can write this in the form of a matrix.
Example [Weather forecasting (continued)] If we know P(s|c) = 0.2, P(s|s) = 0.6, P(c|c) = 0.8, P(c|s) = 0.4, then the transition probability matrix is

P =
     s    c
s  0.6  0.4
c  0.2  0.8

(c) n-step transition probability matrix and Chapman-Kolmogorov equations. For a MC, we are now interested in p_ij^(n) = P(X_n = j | X_0 = i).
Example [Weather forecasting (continued)]

P(3) = P^3 =
     s      c
s  0.376  0.624
c  0.312  0.688

The n-step transition probability matrix is sufficient to analyze short-term properties of a MC.
2. Long-term behavior of a MC: (a) with absorbing states and (b) without absorbing states.
(a) Long-term behavior of a MC with absorbing states: first-step analysis. Derive the set of equations for any given transition probability matrix.
Example [Gambler's Ruin (continued)] Find p_i, the probability that he goes broke if his initial capital is i$. How many games can he play before the game is over? How many times can he have j$ before the game is over?
(b) Long-term behavior of a MC without absorbing states: limiting distribution (or stationary distribution). How to calculate the limiting distribution and how to interpret it.
Example [Weather forecasting (continued)] What is the proportion of sunny days?
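The two weather computations in the review, P(3) and the long-run proportion of sunny days, can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

# Weather chain from the review example; rows/columns ordered (s, c).
P = np.array([[0.6, 0.4],
              [0.2, 0.8]])

P3 = np.linalg.matrix_power(P, 3)   # 3-step transition probabilities
print(P3)                           # [[0.376 0.624] [0.312 0.688]]

# Long-run proportion of sunny days: first row of a high matrix power
# approximates the limiting distribution.
pi = np.linalg.matrix_power(P, 100)[0]
print(pi.round(4))                  # [0.3333 0.6667]
```

So in the long run one third of the days are sunny.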

## 3. Classication of states in a MC: recurrent and transient.

f_ii = sum_{n=1}^∞ f_ii^(n).

State i is recurrent  ⟺  f_ii = 1  ⟺  sum_{n=1}^∞ P_ii^(n) = ∞.
State i is transient  ⟺  f_ii < 1  ⟺  sum_{n=1}^∞ P_ii^(n) < ∞.

4. The basic limit theorem of MC: the mean duration between visits to a recurrent aperiodic state i is

m_i = E(R_i | X_0 = i) = 1 / lim_{n→∞} P_ii^(n).

## More examples

Example 3.1 (A discrete queueing Markov Chain) Suppose that P(k customers arrive in a service period) = P(ξ_n = k) = a_k. In each service period, only one customer is served. Let X_n be the number of customers waiting for service. Then

X_{n+1} = (X_n − 1)^+ + ξ_n.

Based on this, the transition probability matrix is

P =
      0    1    2    3   ...
0    a0   a1   a2   a3  ...
1    a0   a1   a2   a3  ...
2     0   a0   a1   a2  ...
3     0    0   a0   a1  ...
...

1. If E(ξ) > 1, then the number of customers waiting for service will grow without bound.
2. If E(ξ) < 1, what is the probability that there will be k customers waiting for service? [If you are the only hairdresser in a barbershop, how many chairs do you need to provide?]
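A short simulation illustrates the stable case E(ξ) < 1. The Poisson(0.5) arrival distribution and the truncation of the histogram at 20 customers are illustrative assumptions, not part of the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate X_{n+1} = (X_n - 1)^+ + xi_n with Poisson(0.5) arrivals,
# so E(xi) = 0.5 < 1 and the queue is stable.
steps = 200_000
xi = rng.poisson(0.5, size=steps)
x, counts = 0, np.zeros(20, dtype=int)
for k in range(steps):
    x = max(x - 1, 0) + xi[k]
    if x < 20:
        counts[x] += 1

dist = counts / steps   # empirical long-run distribution of the queue length
print(dist[:4].round(3))
```

The empirical distribution concentrates near 0: the queue empties out regularly rather than growing.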

Example 3.2 (Independent random variables) Suppose ξ is a random variable such that P(ξ = i) = a_i ≥ 0, i = 0, 1, ...

ξ_0, ξ_1, ..., ξ_n, ... are independent random samples of ξ. Define X_n = ξ_n for n = 0, 1, 2, ... Then {X_n} is a MC because

P_ij = P(X_{n+1} = j | X_1 = i_1, ..., X_{n-1} = i_{n-1}, X_n = i)
     = P(ξ_{n+1} = j | ξ_1 = i_1, ..., ξ_{n-1} = i_{n-1}, ξ_n = i)
     = P(ξ_{n+1} = j) = a_j.

The transition probability matrix is

P =
      0    1    2    3   ...
0    a0   a1   a2   a3  ...
1    a0   a1   a2   a3  ...
2    a0   a1   a2   a3  ...
3    a0   a1   a2   a3  ...
...

Example 3.3 (Successive maxima) Suppose ξ is a random variable such that P(ξ = i) = a_i ≥ 0, i = 0, 1, ...

ξ_1, ξ_2, ..., ξ_n, ... are independent random samples of ξ. Define

X_n = max{ξ_1, ξ_2, ..., ξ_n} = max{X_{n-1}, ξ_n}.

Then {X_n} is a MC with

p_ij = P(X_{n+1} = j | X_1 = i_1, X_2 = i_2, ..., X_{n-1} = i_{n-1}, X_n = i)
     = P(max{X_n, ξ_{n+1}} = j | X_1 = i_1, ..., X_n = i)
     = P(max{X_n, ξ_{n+1}} = j | X_n = i)
     = P(max{i, ξ} = j).

The transition probability matrix is

P =
      0        1           2              3         ...
0    a0       a1          a2             a3         ...
1     0     a0+a1         a2             a3         ...
2     0       0        a0+a1+a2          a3         ...
3     0       0           0         a0+a1+a2+a3     ...
...
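For a distribution with finite support this matrix can be built directly; the particular vector a below is an illustrative choice:

```python
import numpy as np

# Build the successive-maxima transition matrix for a finitely supported
# distribution a = (a_0, ..., a_{m-1}).
a = np.array([0.2, 0.3, 0.4, 0.1])
m = len(a)

P = np.zeros((m, m))
for i in range(m):
    P[i, i] = a[: i + 1].sum()   # max stays at i when xi <= i
    P[i, i + 1:] = a[i + 1:]     # max jumps to j > i with probability a_j

print(P.round(2))
```

Each row sums to 1, and the diagonal entry at state i is the cumulative probability a_0 + ... + a_i, matching the matrix above.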

Example 3.4 (Partial sums) X_0 = 0, X_n = ξ_1 + ... + ξ_n, n = 1, 2, ... Then {X_n} is a MC with

P =
      0    1    2    3   ...
0    a0   a1   a2   a3  ...
1     0   a0   a1   a2  ...
2     0    0   a0   a1  ...
3     0    0    0   a0  ...
...

Example 3.5 (Successive trials) Consider repeated trials with outcomes S (success) or F (failure), for example

... F  S  S  S  F ...

Let X_n be the length of the current run of successes up to and including trial n. In the sequence above, X_{n-1} = 1, X_n = 2, X_{n+1} = 3, X_{n+2} = 0. Suppose P(S) = α and P(F) = β, with α + β = 1. Then {X_n, n = 1, 2, ...} is a MC with S = {0, 1, 2, ...} and

X_{n+1} = X_n + 1 if trial n+1 is a success; X_{n+1} = 0 if trial n+1 is a failure,

with transition probability matrix

P =
      0    1    2    3   ...
0     β    α    0    0  ...
1     β    0    α    0  ...
2     β    0    0    α  ...
3     β    0    0    0  ...
...

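One can check that the limiting distribution of this chain is geometric, π_k = β α^k (π_0 = β since every failure resets the run, and π_k = α π_{k-1}). A simulation sketch, with α = 0.6 as an illustrative value:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.6                 # P(success); illustrative value

# Simulate the success-run chain and compare its empirical distribution
# with the geometric limit pi_k = (1 - alpha) * alpha**k.
steps = 200_000
x, hits = 0, np.zeros(10)
for s in rng.random(steps) < alpha:
    x = x + 1 if s else 0
    if x < 10:
        hits[x] += 1

emp = hits / steps
theory = (1 - alpha) * alpha ** np.arange(10)
print(np.abs(emp - theory).max())
```

The maximum deviation between the empirical and theoretical probabilities shrinks as the number of steps grows.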
Example 3.6 (Current age in a renewal process) The lifetime of a light bulb is ξ with

P(ξ = k) = a_k > 0, k = 1, 2, 3, ...

Let each bulb be replaced by a new one when it burns out, and let X_n be the age of the bulb in service at time n (set X_0 = 0). Then {X_n : n = 0, 1, 2, ...} is a MC because

X_{n+1} = X_n + 1 if the bulb still works at time n+1; X_{n+1} = 0 if the bulb burns out at time n+1.

We have

P(X_{n+1} = 0 | X_n = k) = P(ξ = k + 1 | ξ > k) = a_{k+1} / (a_{k+1} + a_{k+2} + ...).

Example 3.7 (Gambler's ruin) Player A has i$; player B (e.g. a casino) has N − i $. A wins 1$ with probability p and loses 1$ with probability q (p + q = 1). The game is over when either A or B goes broke. Let X_n be A's fortune after game n. Then {X_n : n = 0, 1, 2, ...} is a MC with state space S = {0, 1, 2, ..., N} and transition probability matrix

P =
      0    1    2    3   ...  N
0     1    0    0    0  ...   0
1     q    0    p    0  ...   0
2     0    q    0    p  ...   0
...
N     0    0    0    0  ...   1

Define

T = min{n ≥ 0 : X_n = 0 or X_n = N},
u_i = P(X_T = 0 | X_0 = i),  w_i = P(X_T = N | X_0 = i).

Questions: 1. What is the probability that A will lose all his money? 2. How many games can they expect to play? The first question amounts to calculating

1. u_k = P(X_T = 0 | X_0 = k). By first-step analysis,

u_k = p u_{k+1} + q u_{k-1}, for k = 1, 2, ..., N − 1, with u_0 = 1, u_N = 0.

Note that p + q = 1. Thus

u_k = p u_{k+1} + q u_{k-1}  ⟹  (p + q) u_k = p u_{k+1} + q u_{k-1}  ⟹  0 = p(u_{k+1} − u_k) − q(u_k − u_{k-1}).

Let x_k = u_k − u_{k-1}. Then 0 = p x_{k+1} − q x_k, i.e. x_{k+1} = (q/p) x_k, so x_k = (q/p)^{k-1} x_1.

Note also that

u_k − u_0 = (u_k − u_{k-1}) + ... + (u_1 − u_0) = [1 + (q/p) + ... + (q/p)^{k-1}] x_1.

In particular,

u_N − u_0 = [1 + (q/p) + ... + (q/p)^{N-1}] x_1.

Since u_0 = 1 and u_N = 0, we have

x_1 = −1 / [1 + (q/p) + ... + (q/p)^{N-1}],

and therefore

u_k = 1 − [1 + (q/p) + ... + (q/p)^{k-1}] / [1 + (q/p) + ... + (q/p)^{N-1}]
    = 1 − k/N                              if p = q = 1/2,
    = 1 − [1 − (q/p)^k] / [1 − (q/p)^N]    if p ≠ q.

The probability that A wins all the N$ is

w_k = 1 − u_k = k/N if p = q = 1/2;  w_k = [1 − (q/p)^k] / [1 − (q/p)^N] if p ≠ q.

Letting N → ∞,

w_k = 1 − u_k → 0 if p ≤ q;  w_k → 1 − (q/p)^k if p > q.

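The closed-form ruin probabilities can be cross-checked against a direct numerical solve of the first-step equations; N = 10 and p = 0.4 below are illustrative values:

```python
import numpy as np

# Check the closed-form ruin probability against a linear solve of the
# first-step equations u_k = p*u_{k+1} + q*u_{k-1}, u_0 = 1, u_N = 0.
N, p = 10, 0.4
q = 1 - p

# Unknowns u_1 .. u_{N-1}; boundary values enter the right-hand side.
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for k in range(1, N):
    A[k - 1, k - 1] = 1.0
    if k + 1 < N:
        A[k - 1, k] = -p          # coefficient of u_{k+1} (u_N = 0 drops out)
    if k - 1 > 0:
        A[k - 1, k - 2] = -q      # coefficient of u_{k-1}
    else:
        b[k - 1] = q * 1.0        # u_0 = 1 moves to the right-hand side
u = np.linalg.solve(A, b)

r = q / p
closed = np.array([1 - (1 - r**k) / (1 - r**N) for k in range(1, N)])
print(np.abs(u - closed).max())   # agreement up to floating-point error
```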
[Even if the game is fair, A will eventually go broke against an infinitely rich adversary.]

Example 3.8 (A continuous sampling plan) Consider a production line where each item has probability p of being defective. Assume that the condition of a particular item (defective or nondefective) does not depend on the conditions of the other items. Consider the following continuous sampling plan. Initially, every item is sampled as it is produced; this procedure continues until i consecutive nondefective items are found. Then the sampling plan calls for sampling only one out of every r items at random until a defective one is found. When this happens, the plan calls for reverting to 100% sampling until i consecutive nondefective items are found, and the process continues in the same way. State E_k (k = 0, 1, ..., i − 1) denotes that k consecutive nondefective items have been found in the 100 percent sampling portion of the plan, while state E_i denotes that the plan is in the second stage (sampling one out of r). Time m is considered to follow the mth item, whether sampled or not.

P_jk = P(in state E_k after m + 1 items | in state E_j after m items)
     = p          for k = 0, 0 ≤ j < i,
     = 1 − p      for k = j + 1, 0 ≤ j < i,
     = p/r        for k = 0, j = i,
     = 1 − p/r    for k = j = i,
     = 0          otherwise.

The transition probability matrix is

P =
          E_0    E_1    E_2   ...  E_{i-1}    E_i
E_0        p    1−p      0   ...     0         0
E_1        p      0    1−p   ...     0         0
...
E_{i-1}    p      0      0   ...     0       1−p
E_i      p/r      0      0   ...     0     1−p/r

Let π_k be the limiting probability that the system is in state E_k, k = 0, 1, ..., i. The equations determining these limiting probabilities are

π_0 = p(π_0 + π_1 + ... + π_{i-1}) + (p/r)π_i,
π_k = (1 − p)π_{k-1},  k = 1, ..., i − 1,
π_i = (1 − p)π_{i-1} + (1 − p/r)π_i,

together with π_0 + π_1 + ... + π_i = 1. From the middle equations it follows that π_k = (1 − p)^k π_0 for k = 0, ..., i − 1, and from the last equation (p/r)π_i = (1 − p)^i π_0, i.e. π_i = (r/p)(1 − p)^i π_0. Together with the normalization we have

{[1 + (1 − p) + ... + (1 − p)^{i-1}] + (r/p)(1 − p)^i} π_0 = 1.

Hence

π_k = p(1 − p)^k / [1 + (r − 1)(1 − p)^i],  k = 0, 1, ..., i − 1,

and

π_i = r(1 − p)^i / [1 + (r − 1)(1 − p)^i].

The average fraction inspected (AFI), the long-run fraction of items that are inspected, is

AFI = (π_0 + ... + π_{i-1}) + (1/r)π_i = 1 / [1 + (r − 1)(1 − p)^i],

because each item is inspected while in states E_0, ..., E_{i-1}, but only one out of r is inspected in state E_i. The average fraction not inspected is

1 − AFI = (r − 1)(1 − p)^i / [1 + (r − 1)(1 − p)^i].

Let us assume that each item found to be defective is replaced by an item known to be good. Then defectives remain only among the uninspected items, which are defective at rate p, so the average outgoing quality (AOQ) is

AOQ = p(r − 1)(1 − p)^i / [1 + (r − 1)(1 − p)^i].
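The stationary probabilities and the AFI formula can be verified numerically; the values of p, r, i below are illustrative:

```python
import numpy as np

# Build the sampling-plan chain for illustrative parameters, solve for the
# stationary distribution, and check the closed-form AFI.
p, r, i = 0.05, 10, 3
n = i + 1                       # states E_0 .. E_i

P = np.zeros((n, n))
for j in range(i):
    P[j, 0] = p                 # defective found: restart 100% sampling
    P[j, j + 1] = 1 - p         # one more consecutive good item
P[i, 0] = p / r                 # sampled item (1 in r) found defective
P[i, i] = 1 - p / r

# Stationary distribution: pi = pi P with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

afi = pi[:i].sum() + pi[i] / r
afi_formula = 1 / (1 + (r - 1) * (1 - p) ** i)
print(round(afi, 6), round(afi_formula, 6))   # the two values agree
```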

Example 3.9 (A Maze) A white rat is put into the maze shown below.

[Figure: a 3×3 maze, compartments numbered 1–9; compartment 3 delivers an electric shock.]

In the absence of learning, one might hypothesize that the rat moves through the maze at random: if there are k ways to leave a compartment, the rat tries each of them with equal probability 1/k. Suppose that the doors between 4 and 7 and between 6 and 9 are not accessible from either side, and the door between 5 and 8 can be passed only from 5 to 8. Assume that the rat makes one try to some adjacent compartment at each unit of time, and let X_n be the compartment occupied at stage n. Compartment 3 contains an electrical shocking mechanism. The transition probability matrix is

P =
      1    2    3    4    5    6    7    8    9
1     0  1/2    0  1/2    0    0    0    0    0
2   1/3    0  1/3    0  1/3    0    0    0    0
3     0    0    1    0    0    0    0    0    0
4   1/3    0    0  1/3  1/3    0    0    0    0
5     0  1/4    0  1/4    0  1/4    0  1/4    0
6     0    0  1/3    0  1/3  1/3    0    0    0
7     0    0    0    0    0    0  1/2  1/2    0
8     0    0    0    0    0    0  1/3  1/3  1/3
9     0    0    0    0    0    0    0  1/2  1/2

1. Starting from 1, what is the probability that the rat will be shocked? Define the MC Y_n with states 1, 2, 3, 4, 5, 6 and A, where {Y_n = k} = {X_n = k} for k = 1, ..., 6, and {Y_n = A} = {X_n ∈ {7, 8, 9}}.

Then its transition probability matrix is

P =
      1    2    3    4    5    6    A
1     0  1/2    0  1/2    0    0    0
2   1/3    0  1/3    0  1/3    0    0
3     0    0    1    0    0    0    0
4   1/3    0    0  1/3  1/3    0    0
5     0  1/4    0  1/4    0  1/4  1/4
6     0    0  1/3    0  1/3  1/3    0
A     0    0    0    0    0    0    1

Let T = min{n : Y_n = 3 or Y_n = A} and u_i = P(Y_T = 3 | Y_0 = i). Then u_A = 0 and u_3 = 1, and by first-step analysis

u_i = p_i1 u_1 + p_i2 u_2 + p_i3 u_3 + p_i4 u_4 + p_i5 u_5 + p_i6 u_6 + p_iA u_A

for i = 1, 2, 4, 5, 6. This gives u_1 = 0.6552.

2. In the long run, what is the probability that the rat will be in 7, starting from 1?

P(X_n = 7 | X_0 = 1) = P(X_n = 7 | Y_T = A) P(Y_T = A | X_0 = 1).

Note that {X_n} restricted to the states 7, 8, 9 is an irreducible MC with transition probability matrix

P_Y =
      7    8    9
7   1/2  1/2    0
8   1/3  1/3  1/3
9     0  1/2  1/2

Let

π_k = lim_{n→∞} P(X_n = k | Y_T = A).

Then

(π_7, π_8, π_9) = (π_7, π_8, π_9) P_Y,  π_7 + π_8 + π_9 = 1.

This gives π_7 = 0.2857. Therefore, starting from 1, the limiting probability that the rat is in 7 is

0.2857 × (1 − 0.6552) = 0.0985.
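The value u_1 = 0.6552 comes from solving the five first-step equations; a sketch of that linear solve (the state indexing below is my own bookkeeping, not from the notes):

```python
import numpy as np

# First-step analysis for the maze: unknowns u_1, u_2, u_4, u_5, u_6,
# with boundary values u_3 = 1 (shock) and u_A = 0 (escaped to {7,8,9}).
idx = {1: 0, 2: 1, 4: 2, 5: 3, 6: 4}
A = np.eye(5)
b = np.zeros(5)

# Rows of the aggregated chain for the transient states.
rows = {
    1: {2: 1/2, 4: 1/2},
    2: {1: 1/3, 3: 1/3, 5: 1/3},
    4: {1: 1/3, 4: 1/3, 5: 1/3},
    5: {2: 1/4, 4: 1/4, 6: 1/4, 'A': 1/4},
    6: {3: 1/3, 5: 1/3, 6: 1/3},
}
for i, row in rows.items():
    for j, pr in row.items():
        if j == 3:
            b[idx[i]] += pr          # u_3 = 1 goes to the right-hand side
        elif j != 'A':
            A[idx[i], idx[j]] -= pr  # u_A = 0 contributes nothing

u = np.linalg.solve(A, b)
print(round(u[idx[1]], 4))           # → 0.6552
```

The exact answer is u_1 = 19/29 ≈ 0.6552.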
