
Markdown | 让排版变 Nice https://editor.mdnice.com/?outId=a8fbd1b6d90446b8...

Stochastic Process 04 - Random Walk Model, Gambler's Ruin Problem and Drug Testing

1. Random Walk Model

1.1. Definition and Interpretation

A Markov Chain whose state space is given by the integers $i = 0, \pm 1, \pm 2, \ldots$ is said to be a Random Walk if, for some number $p$, $0 < p < 1$,

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \pm 2, \ldots$$

• think of it as being a model for an individual walking on a straight line who at each point of time either

• takes one step to the right (+1) with probability $p$, or

• takes one step to the left (−1) with probability $1 - p$

• it could represent the wanderings of a drunken man as he walks along a straight line,

• or the winnings of a gambler who on each play of the game either wins or loses one dollar.

Example



Find the probability that, starting at 0, the chain will be in state $i$ after 5 steps, if $p = 0.6$.

Solution:

• The possible states in which the random walk may be after 5 steps are $-5, -3, -1, 1, 3, 5$

• The transition matrix, restricted to the states $-5, -4, \ldots, 5$ (states beyond $\pm 5$ are unreachable in 5 steps from 0), has $P_{i,i+1} = 0.6$ and $P_{i,i-1} = 0.4$; the first row and first column correspond to state $-5$

• Use an online matrix calculator to compute $P^5$


• We get, reading off the row of $P^5$ corresponding to state 0,

$$P(X_5 = -5) = 0.01024,\quad P(X_5 = -3) = 0.0768,\quad P(X_5 = -1) = 0.2304,$$
$$P(X_5 = 1) = 0.3456,\quad P(X_5 = 3) = 0.2592,\quad P(X_5 = 5) = 0.07776$$

• As a check, these six probabilities sum to 1
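The same distribution can be checked without a matrix calculator: to end in state $i$ after 5 steps, the walk must take $k = (i+5)/2$ up-steps, a binomial event. A minimal Python sketch:

```python
from math import comb

# 5-step distribution of a random walk started at 0 with p = 0.6:
# ending in state i requires k = (i + 5) / 2 up-steps, which happens
# with binomial probability C(5, k) p^k (1-p)^(5-k).
p = 0.6
dist = {2 * k - 5: comb(5, k) * p**k * (1 - p)**(5 - k) for k in range(6)}
for state, prob in sorted(dist.items()):
    print(f"P(X5 = {state:+d}) = {prob:.5f}")
print("sum =", sum(dist.values()))  # sanity check: probabilities sum to 1
```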

1.2. Classification of States

Since $0 < p < 1$, ALL states clearly communicate,


• the chain has ONE Communicating Class

• By the Class Property from 2.4, the states are either ALL Transient or ALL Recurrent

• so let us consider state 0 and determine if

$$\sum_{n=1}^{\infty} P^{n}_{00}$$

is finite or infinite

• Since it is IMPOSSIBLE TO BE EVEN (using the gambling model interpretation) after an ODD NUMBER of plays, we must have that

$$P^{2n-1}_{00} = 0, \qquad n = 1, 2, \ldots$$

• On the other hand, we would be EVEN after $2n$ trials if and only if we won $n$ of these and lost $n$ of these

• because each play of the game results in a win with probability $p$ and a loss with probability $1 - p$,

• the desired probability is thus the binomial probability

$$P^{2n}_{00} = \binom{2n}{n} p^{n} (1-p)^{n} = \frac{(2n)!}{n!\,n!} \big(p(1-p)\big)^{n}$$

For $p = \frac{1}{2}$, the preceding process is called a Symmetric Random Walk. By Stirling's approximation, $n! \sim n^{n+1/2} e^{-n} \sqrt{2\pi}$, we have

$$P^{2n}_{00} \sim \frac{\big(4p(1-p)\big)^{n}}{\sqrt{\pi n}} = \frac{1}{\sqrt{\pi n}} \quad \text{when } p = \tfrac{1}{2}$$

• which shows that

$$\sum_{n=1}^{\infty} P^{2n}_{00} = \infty$$

and thus ALL states are Recurrent
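The divergence for $p = 1/2$ (and convergence for $p \neq 1/2$, which makes the chain transient) can also be seen numerically from the partial sums; for $p = 0.6$ the generating-function identity $\sum_{n \geq 1} \binom{2n}{n} x^{n} = (1-4x)^{-1/2} - 1$ with $x = p(1-p) = 0.24$ gives the finite limit 4. A small Python check:

```python
from math import comb

def partial_sum(p, terms):
    """Partial sum of P_00^{2n} = C(2n, n) (p(1-p))^n over n = 1..terms."""
    return sum(comb(2 * n, n) * (p * (1 - p)) ** n for n in range(1, terms + 1))

# For p = 0.6 the series converges to 1/sqrt(1 - 4*0.24) - 1 = 4 (transient);
# for p = 1/2 the partial sums keep growing, roughly like 2*sqrt(n/pi).
for terms in (100, 400):
    print(terms, round(partial_sum(0.6, terms), 4), round(partial_sum(0.5, terms), 2))
```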


Example


Find the probability that the chain will ever return to state 0.

Solution:

• Let $f_0$ denote the probability that, starting in state 0, the chain ever returns to state 0

• To determine $f_0$, start by conditioning on the initial transition to obtain

$$f_0 = p\,P(\text{return to } 0 \mid X_1 = 1) + (1-p)\,P(\text{return to } 0 \mid X_1 = -1)$$

• Let $\alpha$ denote the probability that the chain ever reaches state 0 given that it starts in state 1

• Because the Markov chain will always increase by 1 with probability $p$ or decrease by 1 with probability $1 - p$ no matter what its current state, note that $\alpha$ is also the probability that the chain ever goes from state $i$ to state $i - 1$, for every $i$

• To obtain an equation for $\alpha$, condition on the next transition to obtain

$$\alpha = (1-p) + p\,P(\text{ever reach } 0 \mid \text{start at } 2) = (1-p) + p\,\alpha^{2}$$

where the final equation follows by noting that in order for the chain to ever go from state 2 to state 0, it must first go to state 1, with probability $\alpha$,

• and then it must still go from state 1 to state 0, with conditional probability $\alpha$, so

$$p\,\alpha^{2} - \alpha + (1-p) = 0$$
• The two roots of this equation are $\alpha = 1$ and $\alpha = \dfrac{1-p}{p}$

• In the case of the Symmetric Random Walk, where $p = 1/2$, both roots coincide and we can conclude that

$$\alpha = 1$$

By symmetry, the probability $\beta$ that the chain ever reaches state 0 from state $-1$ is also 1, so

$$f_0 = p\,\alpha + (1-p)\,\beta = 1$$

proving that ALL states of a Symmetric Random Walk are Recurrent

• In the case of the Nonsymmetric Random Walk where $p > 1/2$, the chain drifts upward, so the probability $\beta$ of ever reaching state 0 from state $-1$ is 1,

and since the random walk is transient in state 0, we must have $\alpha < 1$, hence

$$\alpha = \frac{1-p}{p}$$

Using $f_0 = p\,\alpha + (1-p)\,\beta$, we get

$$f_0 = p \cdot \frac{1-p}{p} + (1-p) = 2(1-p)$$

• In the case of the Nonsymmetric Random Walk where $p < 1/2$, the chain drifts downward, so $\alpha = 1$,

and we need $\beta$; let $\beta$ denote the probability that the chain ever reaches state 0 from state $-1$. Conditioning on the next transition,

$$\beta = p + (1-p)\,\beta^{2}$$

when the chain moves to $-2$, it must first return to $-1$ with probability $\beta$ and then go from $-1$ to 0 with probability $\beta$

• The two roots of this equation are $\beta = 1$ and $\beta = \dfrac{p}{1-p}$

• and since the random walk is transient in state 0, we must have $\beta < 1$, hence

$$\beta = \frac{p}{1-p}$$

Using $f_0 = p\,\alpha + (1-p)\,\beta$, we get

$$f_0 = p + (1-p) \cdot \frac{p}{1-p} = 2p$$

• Finally, combining the three cases, the probability that the chain will return to state 0 is

$$f_0 = 2\min(p,\, 1-p)$$
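The formula $f_0 = 2\min(p, 1-p)$ can be checked by simulation; a Monte Carlo sketch, with the truncation horizon `max_steps` an assumption (paths that have not returned by then are counted as never returning, a small downward bias):

```python
import random

def estimate_return_prob(p, trials=10000, max_steps=1000, seed=1):
    """Monte Carlo estimate of the probability that the random walk,
    started at 0, ever returns to 0.  Walks still away from 0 after
    max_steps steps are counted as never returning."""
    rng = random.Random(seed)
    returns = 0
    for _ in range(trials):
        pos = 1 if rng.random() < p else -1   # the first step leaves 0
        for _ in range(max_steps):
            if pos == 0:
                returns += 1
                break
            pos += 1 if rng.random() < p else -1
    return returns / trials

# theory: f0 = 2 * min(p, 1 - p) = 0.8 for p = 0.6
print(estimate_return_prob(0.6))
```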

2. Gambler's Ruin Problem

2.1. Definition

Consider a Gambler who, at each play of the game, either

• wins 1 dollar with probability $p$, or

• loses 1 dollar with probability $1 - p$.

Suppose that the Gambler quits playing either when

• he goes broke, or
• he attains a fortune of N dollars.

If we let $X_n$ denote the player's fortune at time $n$, then the process $\{X_n,\ n = 0, 1, 2, \ldots\}$ is a Markov Chain with transition probabilities

$$P_{00} = P_{NN} = 1$$

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 1, 2, \ldots, N-1$$

We can visualize this game as a finite-state random walk on the integers between 0 and N

• the game ends when the random walk reaches 0 or N


• States 0 and N are called Absorbing States since once entered they are never
left.

• Therefore, this Markov chain has three classes, namely $\{0\}$, $\{1, 2, \ldots, N-1\}$, and $\{N\}$

• States 0 and N are Recurrent, and ALL other states are Transient

• The chain is reducible because from state 0 it is only possible to go to state 0, and from state N it is only possible to go to state N

• Since each transient state is visited only finitely often, it follows that, after some finite amount of time, the gambler will either attain his goal of N or go broke.

Example


Starting with 5 dollars and $p = 0.45$, the Gambler will stop if his fortune reaches 10 or 0 dollars.

• Find the probability that the game will last fewer than 20 rounds.

Solution:

• The transition matrix for the gambler's fortune (on states $0, 1, \ldots, 10$, with $P_{00} = P_{10,10} = 1$, $P_{i,i+1} = 0.45$ and $P_{i,i-1} = 0.55$ otherwise) is


• Then use an online matrix calculator

https://matrix.reshish.com/power.php

to compute $P^{19}$

• We get the row of $P^{19}$ corresponding to the starting fortune 5

• So, the probability that the game will last fewer than 20 rounds, i.e., that the chain is absorbed within 19 plays, is:

$$P(X_{19} = 0) + P(X_{19} = 10) = (P^{19})_{5,0} + (P^{19})_{5,10}$$
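The computation can be reproduced with NumPy instead of the online calculator; a sketch, under the reading that "fewer than 20 rounds" means absorption within 19 plays:

```python
import numpy as np

# Gambler's ruin transition matrix on states 0..10 with p = 0.45
N, p = 10, 0.45
P = np.zeros((N + 1, N + 1))
P[0, 0] = P[N, N] = 1.0            # 0 and N are absorbing
for i in range(1, N):
    P[i, i + 1] = p                # win a dollar
    P[i, i - 1] = 1 - p            # lose a dollar

P19 = np.linalg.matrix_power(P, 19)
# starting fortune 5: probability of being absorbed within 19 plays
prob = P19[5, 0] + P19[5, N]
print(round(prob, 4))
```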


2.2. Probability to Win


What is the probability that, starting with $i$ dollars, the Gambler's fortune will attain $N$ before reaching 0?

Solution:

• Let $P_i$, $i = 0, 1, \ldots, N$, denote the probability that, starting with $i$ dollars, the gambler's fortune will eventually reach N

• This game has a Recursive Structure:

• after the first play, it is exactly the same game, except that the Gambler's fortune is now either $i + 1$ (win) or $i - 1$ (lose)

• By the Law of Total Probability (LOTP), conditioning on the outcome of the initial play of the game, we have

$$P_i = p\,P_{i+1} + q\,P_{i-1}, \qquad i = 1, 2, \ldots, N-1, \quad q = 1 - p$$

• Since $p + q = 1$, we can write $P_i = p\,P_i + q\,P_i$, and hence

$$P_{i+1} - P_i = \frac{q}{p}\,(P_i - P_{i-1}), \qquad i = 1, 2, \ldots, N-1$$

• Since $P_0 = 0$, we have

$$P_2 - P_1 = \frac{q}{p}\,(P_1 - P_0) = \frac{q}{p}\,P_1$$

$$P_3 - P_2 = \frac{q}{p}\,(P_2 - P_1) = \left(\frac{q}{p}\right)^{2} P_1$$

$$\vdots$$

$$P_{i+1} - P_i = \left(\frac{q}{p}\right)^{i} P_1$$


• Adding the first $i - 1$ of these equations yields

$$P_i - P_1 = P_1\left[\frac{q}{p} + \left(\frac{q}{p}\right)^{2} + \cdots + \left(\frac{q}{p}\right)^{i-1}\right]$$

• Then,

$$P_i = \begin{cases} \dfrac{1 - (q/p)^{i}}{1 - q/p}\,P_1, & \text{if } p \neq \frac{1}{2} \\[2mm] i\,P_1, & \text{if } p = \frac{1}{2} \end{cases}$$

• Using the fact that $P_N = 1$, we obtain

$$P_1 = \begin{cases} \dfrac{1 - q/p}{1 - (q/p)^{N}}, & \text{if } p \neq \frac{1}{2} \\[2mm] \dfrac{1}{N}, & \text{if } p = \frac{1}{2} \end{cases}$$

• Hence,

$$P_i = \begin{cases} \dfrac{1 - (q/p)^{i}}{1 - (q/p)^{N}}, & \text{if } p \neq \frac{1}{2} \\[2mm] \dfrac{i}{N}, & \text{if } p = \frac{1}{2} \end{cases}$$

• As $N \to \infty$,

$$P_i \to \begin{cases} 1 - (q/p)^{i}, & \text{if } p > \frac{1}{2} \\[1mm] 0, & \text{if } p \leq \frac{1}{2} \end{cases}$$

• Thus, if $p > 1/2$, there is a positive probability that the gambler's fortune will reach N;

• while if $p \leq 1/2$, the gambler will, with 0 probability, reach N before going broke.

Example


What is the probability that, starting with $i = 5$ dollars and $p = 0.45$, the Gambler's fortune will attain $N = 10$ dollars before reaching 0?

Solution:

• Let $i = 5$ and $N = 10$; since $p = 0.45 \neq \frac{1}{2}$, we have $q/p = 0.55/0.45 = 11/9$

• Then, the probability that, starting with 5 dollars, the gambler's fortune will reach 10 before reaching 0 is:

$$P_5 = \frac{1 - (11/9)^{5}}{1 - (11/9)^{10}} \approx 0.27$$
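The closed-form answer is easy to evaluate directly; a small Python helper, which also shows the $N \to \infty$ behavior for $p < 1/2$:

```python
def win_prob(i, N, p):
    """P_i: probability the gambler's fortune reaches N before 0,
    starting from i, with win probability p per play."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                      # q / p
    return (1 - r**i) / (1 - r**N)

print(round(win_prob(5, 10, 0.45), 4))   # the example above
# With p < 1/2, the win probability tends to 0 as N grows:
for N in (10, 20, 50):
    print(N, win_prob(5, N, 0.45))
```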

Experiment in R


```r
# The Gambler will stop if he attains N or 0
N <- 10
# The probability of a 1 dollar win at each round
p <- 0.45
# The number of rounds of the game
nround <- 80
# Vector x stores the values of the Markov chain
x <- rep(0, nround)
# The Gambler starts with $5
x[1] <- 5

# Each pass through the loop represents one step of the Markov chain
for (i in 2:nround){
  # First check whether the chain is already at one of the endpoints, 0 or N
  if (x[i-1] == 0 || x[i-1] == N){
    # set its new value equal to its previous value,
    # since the chain is not allowed to escape 0 or N
    x[i] <- x[i-1]
  } else {
    # Use the sample command to move to the right 1 unit or to the
    # left 1 unit, with probabilities p and 1-p, respectively
    x[i] <- x[i-1] + sample(c(1, -1), 1, prob = c(p, 1-p))
  }
}

# The path taken by the Markov chain during simulation
plot(x, type = 'l', ylim = c(0, N))
```


A path that starts at $5 and bounces up and down before being absorbed into state 0 or state N.

3. Application: Drug Testing

3.1. Intro to Problem

One application of the gambler’s ruin problem is Drug Testing,

• suppose that two new drugs have been developed for treating a certain disease

• Drug $i$ has a cure rate $P_i$, $i = 1, 2$,

in the sense that each patient treated with drug $i$ will be cured with probability $P_i$.

• These cure rates, however, are NOT known, and suppose we are interested in a method for deciding whether $P_1 > P_2$ or $P_2 > P_1$

3.2. Steps of Testing

To decide upon one of these alternatives, consider the following test:

• Pairs of patients are treated sequentially

• with one member of the pair receiving drug 1 and the other drug 2

• The results for each pair are determined, and the testing stops when

• the cumulative number of cures using one of the drugs exceeds the cumulative number of cures using the other

• by some fixed predetermined number $M$.

• More formally, let

$$X_j = \begin{cases} 1, & \text{if the patient in the $j$th pair receiving drug 1 is cured} \\ 0, & \text{otherwise} \end{cases}$$

$$Y_j = \begin{cases} 1, & \text{if the patient in the $j$th pair receiving drug 2 is cured} \\ 0, & \text{otherwise} \end{cases}$$

• for a predetermined positive integer M, the test stops after pair N, where N is the first value of n such that either

$$X_1 + \cdots + X_n - (Y_1 + \cdots + Y_n) = M$$

or

$$X_1 + \cdots + X_n - (Y_1 + \cdots + Y_n) = -M$$

• In the former case we then assert that $P_1 > P_2$,

• and in the latter that $P_2 > P_1$

3.3. Probability of Incorrect Decision

In order to help ascertain whether the preceding is a GOOD test,

• one thing we would like to know is the probability of it leading to an incorrect decision.

• That is, for given $P_1$ and $P_2$ where $P_1 > P_2$,

what is the probability that the test will INCORRECTLY assert that $P_2 > P_1$?

To determine this probability, note that after each pair is checked, the cumulative difference of cures using drug 1 versus drug 2 will either

• go up by 1 with probability $P_1(1 - P_2)$

since this is the probability that drug 1 leads to a cure and drug 2 does not, or

• go down by 1 with probability $P_2(1 - P_1)$

• or remain the same with probability $P_1 P_2 + (1 - P_1)(1 - P_2)$

since this is the probability that both drugs lead to a cure or both do not


Hence, if we ONLY consider those pairs in which the cumulative difference changes,

• then the difference will go up 1 with probability

$$p = \frac{P_1(1 - P_2)}{P_1(1 - P_2) + P_2(1 - P_1)}$$

and down 1 with probability

$$1 - p = \frac{P_2(1 - P_1)}{P_1(1 - P_2) + P_2(1 - P_1)}$$

• The probability that the test will assert that $P_2 > P_1$ is

• equal to the probability that

• a Gambler who wins each (one unit) bet with probability $p$ will go down $M$ before going up $M$,

with starting fortune $i = M$ and goal $N = 2M$

• So

$$P\{\text{test asserts } P_2 > P_1\} = 1 - \frac{1 - (q/p)^{M}}{1 - (q/p)^{2M}} = \frac{1}{1 + (p/q)^{M}}, \qquad q = 1 - p$$

• Thus, for instance, if $P_1 = 0.6$ and $P_2 = 0.4$ (so that $p = 9/13$), then the probability of an incorrect decision is 0.017 when M = 5, and it reduces to 0.0003 when M = 10.
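The error probability is easy to tabulate; a short Python sketch, using the cure rates $P_1 = 0.6$ and $P_2 = 0.4$ from the instance above:

```python
def error_prob(P1, P2, M):
    """Probability the sequential test incorrectly asserts P2 > P1,
    given true cure rates P1 > P2 and stopping margin M."""
    # win probability on the pairs where the cumulative difference changes
    p = P1 * (1 - P2) / (P1 * (1 - P2) + P2 * (1 - P1))
    q = 1 - p
    # gambler starting at M goes down M before up M
    return 1 / (1 + (p / q) ** M)

for M in (5, 10):
    print(M, round(error_prob(0.6, 0.4, M), 4))  # 0.017 for M=5, 0.0003 for M=10
```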

Resources:

[1] Sheldon M. Ross, Introduction to Probability Models, Academic Press, 12th edition (2019)

[2] Joseph K. Blitzstein, Jessica Hwang, Introduction to Probability, Chapman and Hall/CRC, 2nd edition (2019)

