3
Random walk
Another way of modelling the gambler’s ruin problem of the last chapter is as a one-dimensional random walk.
Suppose that a + 1 positions are marked out on a straight line and numbered
0, 1, 2, . . . , a.
A person starts at k, where 0 < k < a.
The walk proceeds in such a way that, at each step, there is a probability p that the walker goes ‘forward’ one place to k + 1, and a probability q = 1 − p that the walker goes ‘back’ one place to k − 1.
The walk continues until either 0 or a is reached, and then ends.
The position of the walker after having moved n times is known as the state of the walk after n steps: the walk starts at state k at step 0 and moves to either state k − 1 or state k + 1 after 1 step, and so on.
A random walk is symmetric if p = q = 1/2 .
If the walk is bounded, then the ends of the walk are
known as barriers, and they may have various properties.
In this case the barriers are said to be absorbing, which
implies that the walk must end once a barrier is reached,
since there is no escape. On the other hand, a barrier
could be reflecting, in which case the walk returns to its
previous state.
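As a quick illustration, a walk with absorbing barriers at 0 and a can be simulated directly. This is a minimal sketch with assumed illustrative values k = 3, a = 10 and p = 1/2; for the symmetric walk, the classical gambler’s ruin result says the fraction of walks absorbed at 0 should approach (a − k)/a = 0.7:

```python
import random

random.seed(42)

def walk_until_absorbed(k, a, p):
    """Run one random walk with absorbing barriers at 0 and a.
    Returns the barrier reached (0 or a) and the number of steps taken."""
    x, steps = k, 0
    while 0 < x < a:
        x += 1 if random.random() < p else -1
        steps += 1
    return x, steps

# Estimate the probability of absorption at 0 for the symmetric walk;
# for p = q = 1/2 the theoretical value is (a - k)/a = 7/10.
trials = 20_000
ruined = sum(walk_until_absorbed(3, 10, 0.5)[0] == 0 for _ in range(trials))
print(ruined / trials)  # ~ 0.7
```

Because the barriers are absorbing, every simulated walk terminates once it reaches 0 or a.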
A simple random walk on a line or in one
dimension occurs when a step forward (+1) has
probability p and a step back (−1) has probability
q(= 1 − p).
At the i-th step, the position of the walk changes by the modified Bernoulli random variable Wi, which takes one of two values, either +1 or −1:

P(Wi = +1) = p,    P(Wi = −1) = q.

The position of the walk at the n-th step is the random variable

Xn = X0 + W1 + W2 + · · · + Wn.
In the gambler’s ruin problem, X0 = k, but in the following discussion it is assumed that walks start from the origin, so that
X0 = 0.
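A walk from the origin can be simulated by accumulating ±1 steps. This is a minimal sketch; the helper name `sample_path` and the choice p = 1/2 are ours:

```python
import random

random.seed(0)

def sample_path(n, p):
    """Positions X_0, X_1, ..., X_n of a walk started at the origin."""
    x, path = 0, [0]
    for _ in range(n):
        x += 1 if random.random() < p else -1  # W_i = +1 w.p. p, else -1
        path.append(x)
    return path

path = sample_path(10, 0.5)
print(path)  # successive entries differ by exactly 1
```

Note that the path always starts at 0 and changes by exactly one unit per step.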
We can plot the state of the walk, for example the stakes of Sarah and Bob in the gambler’s ruin problem, in two dimensions: the horizontal axis represents the steps and the vertical axis represents the stake, starting at k, and each step takes the path either one up or one down. To formalise this kind of process in terms of random variables, each step Wi takes the value +1 with probability p, P(Wi = +1) = p, or the value −1 with probability q, P(Wi = −1) = q.
The sequence X0, X1, X2, . . . forms a random walk, and it satisfies the conditional probability

P(Xn = xn | X0 = x0, X1 = x1, . . . , Xn−1 = xn−1) = P(Xn = xn | Xn−1 = xn−1).

This random walk therefore has the Markov property: the current state of the walk depends only on its immediate previous state, not on the history of the walk up to the present state. Furthermore, Xn = Xn−1 ± 1, and the transition probabilities from one position to the next are simply p (one step up) and q (one step down).
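One way to see this numerically (a simulation sketch, with an assumed p = 0.6) is to estimate the probability of an upward step conditional on the previous step; since the Wi are independent, both conditional estimates should be close to p regardless of the history:

```python
import random

random.seed(1)
p = 0.6  # assumed step-up probability, for illustration only

# One long sequence of independent +/-1 steps.
steps = [1 if random.random() < p else -1 for _ in range(100_000)]

# Estimate P(step up | previous step up) and P(step up | previous step down):
# independence of the W_i means both should be close to p.
up_after_up = sum(1 for i in range(1, len(steps)) if steps[i] == 1 and steps[i - 1] == 1)
n_after_up = sum(1 for i in range(1, len(steps)) if steps[i - 1] == 1)
up_after_down = sum(1 for i in range(1, len(steps)) if steps[i] == 1 and steps[i - 1] == -1)
n_after_down = sum(1 for i in range(1, len(steps)) if steps[i - 1] == -1)

print(up_after_up / n_after_up, up_after_down / n_after_down)  # both ~ 0.6
```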
We now find the expectation and variance of Wi. Since we have defined
P(Wi = +1) = p, P(Wi = −1) = q,
it follows that

E(Wi) = (+1)p + (−1)q = p − q,

E(Wi^2) = (+1)^2 p + (−1)^2 q = p + q = 1,

so the variance of Wi is

V(Wi) = E(Wi^2) − [E(Wi)]^2 = 1 − (p − q)^2 = (p + q)^2 − (p − q)^2 = 4pq.
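These identities can be checked with exact rational arithmetic; the value p = 3/5 below is an illustrative choice of ours:

```python
from fractions import Fraction

# Exact check of the moment formulas with an illustrative p = 3/5.
p = Fraction(3, 5)
q = 1 - p

E_W = (+1) * p + (-1) * q            # p - q = 1/5
E_W2 = (+1)**2 * p + (-1)**2 * q     # p + q = 1
V_W = E_W2 - E_W**2                  # 1 - (p - q)^2

print(E_W, E_W2, V_W)  # 1/5 1 24/25
assert V_W == 4 * p * q  # agrees with the 4pq formula
```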
Here, for simplification, we assume that X0 = 0, so that Xn = W1 + W2 + · · · + Wn. Then

E(Xn) = nE(Wi) = n(p − q),    V(Xn) = nV(Wi) = 4npq,

since the Wi are independent and identically distributed random variables.
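A Monte Carlo check of E(Xn) = n(p − q) and V(Xn) = 4npq; the values p = 0.7 and n = 50 are illustrative assumptions, giving theoretical mean 20 and variance 42:

```python
import random

random.seed(7)
p, n = 0.7, 50  # illustrative values; q = 1 - p = 0.3

# Monte Carlo estimates of E(X_n) and V(X_n) with X_0 = 0.
trials = 50_000
samples = []
for _ in range(trials):
    x = sum(1 if random.random() < p else -1 for _ in range(n))
    samples.append(x)

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials

print(mean, var)  # theory: n(p - q) = 20, 4npq = 42
```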
Knowing the mean and standard deviation of a
random variable does not enable us to identify its
probability distribution.
However, for large n we may apply the central limit
theorem, which states:
if W1, W2, . . . is a sequence of independent identically
distributed random variables with mean µ and variance
σ^2, then the standardised random variable
(Xn − nµ)/(σ√n), where Xn = W1 + W2 + · · · + Wn,
has a standard normal N(0, 1) distribution as n → ∞.
Hence, for large n, we can say that Xn ∼ N[n(p − q), 4npq] approximately.
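The normal approximation can be checked by simulation (a sketch for the symmetric walk, with an assumed n = 200): after standardising Xn by µ = n(p − q) and σ² = 4npq, the fraction of values below 1 should be close to Φ(1) ≈ 0.84, up to the discreteness of the walk:

```python
import math
import random

random.seed(3)
p, q, n = 0.5, 0.5, 200  # symmetric walk; n is an illustrative choice

# Standardise X_n using mu = n(p - q) and sigma^2 = 4npq,
# then compare the empirical distribution with N(0, 1).
trials = 10_000
z = []
for _ in range(trials):
    x = sum(1 if random.random() < p else -1 for _ in range(n))
    z.append((x - n * (p - q)) / math.sqrt(4 * n * p * q))

# Fraction of standardised values at or below 1; target is Phi(1) ~ 0.84.
frac = sum(1 for v in z if v <= 1) / trials
print(frac)
```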
The exact probability distribution
of a random walk
We now find the probability distribution of the random
variable Xn, the position after n steps. The
position Xn, after n steps, can be written as
Xn = Rn − Ln,
where Rn is the random variable of the number of right
(positive) steps (+1) and Ln is that of the number
of left (negative) steps (−1). Furthermore, n = Rn
+ Ln.
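Since Rn + Ln = n and Rn − Ln = Xn, we have Rn = (n + Xn)/2, and Rn counts the successes in n independent trials, so Rn has a Binomial(n, p) distribution. A sketch of the resulting exact probability mass function (the helper name `walk_pmf` is ours):

```python
from math import comb

def walk_pmf(n, x, p):
    """Exact P(X_n = x) for a walk from the origin.
    With R_n right steps and L_n left steps, R_n + L_n = n and
    R_n - L_n = x, so R_n = (n + x)/2 and R_n ~ Binomial(n, p)."""
    if abs(x) > n or (n + x) % 2 != 0:
        return 0.0  # position x is unreachable in n steps
    r = (n + x) // 2
    return comb(n, r) * p**r * (1 - p)**(n - r)

# The probabilities sum to 1 over the reachable positions.
n, p = 6, 0.5
total = sum(walk_pmf(n, x, p) for x in range(-n, n + 1))
print(total, walk_pmf(6, 0, 0.5))  # 1.0 0.3125
```

Note the parity constraint: after n steps the walk can only occupy positions x with n + x even.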
Example 3.1
First returns of the symmetric random walk
This time we consider only the first passage through x = 0; previously a return could be the second, the third, or any later passage through zero.
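First returns can be explored by simulation (a sketch for the symmetric walk; the cap of 1000 steps is an assumption of ours). Returns to the origin can only occur at even steps, and the probability that the first return happens at step 2 is 1/2, since the only two-step possibilities returning to 0 are up–down and down–up:

```python
import random

random.seed(5)

def first_return_time(max_steps):
    """Steps until a symmetric walk from the origin first returns to 0,
    or None if it has not returned within max_steps."""
    x = 0
    for step in range(1, max_steps + 1):
        x += 1 if random.random() < 0.5 else -1
        if x == 0:
            return step
    return None

trials = 20_000
times = [first_return_time(1000) for _ in range(trials)]

# Fraction of walks whose FIRST return occurs at step 2; theory gives 1/2.
frac_at_2 = sum(1 for t in times if t == 2) / trials
print(frac_at_2)  # ~ 0.5
```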