
Stochastic Processes and Time Series

Module 1
Markov Chains - I
Dr. Alok Goswami, Professor, Indian Statistical Institute, Kolkata

1 A Coin-Tossing Game
1.1 A Simple Example to Illustrate the Idea:
A coin is tossed repeatedly. At each toss, you either WIN or LOSE 1 rupee, depending on whether
the toss results in a HEAD or a TAIL. Your net/accumulated fortune/capital fluctuates at each toss.

Consider the following question:


Suppose the game has gone on for some time and you know how your net fortune has changed from
the start till that time point (say, the "present" time). Given this information, what can you say
about your possible net fortune after the next toss?
Denote: P(Head) = θ, P(Tail) = 1 − θ, for each toss.

1.2 Coin-Tossing Game: Important Features!


Clearly, all you can say is: if your net capital at present is Rs x, then, after the next toss, your net
capital would be Rs x + 1 or Rs x − 1 with (conditional) probabilities θ and 1 − θ respectively. The
knowledge of what your net capitals were at times prior to the present, even if you have it, does
NOT matter! This is called the MARKOV PROPERTY: in evolving to the future, the process
remembers ONLY the PRESENT and forgets THE PAST.
All the past values from the start till just before the present, even if given, are of no consequence!
Another subtle but important feature: the time period of the known past is also of no relevance.
The transition from the present value to the possible future values follows the same rule, irrespective
of how much time has passed.
This property is called TIME HOMOGENEITY!

1.3 Coin-Tossing Game: Mathematically Speaking
We put our observations on the Coin-Tossing game in more precise mathematical language. Denote
X0 = your initial capital and, for n ≥ 1, Xn = your net capital after the nth toss. Each Xn is
a random variable taking values in the set S = Z = set of all integers. The sequence {Xn }n≥0
represents the fluctuation/evolution of your net fortune with successive tosses and satisfies:

Time-Homogeneous Markov Property: For every n ≥ 0 and every x0 , x1 , . . . , xn−1 , x, y ∈ S,
P(Xn+1 = y | X0 = x0 , . . . , Xn−1 = xn−1 , Xn = x)

= θ, if y = x + 1
= 1 − θ, if y = x − 1
= 0, if y ≠ x ± 1
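The transition rule above can be checked on a small simulation. The sketch below (function name and parameters are illustrative, not from the lecture) generates one sample path of the net capital:

```python
import random

def simulate_fortune(theta, n_tosses, initial=0, seed=0):
    """Sample path X_0, X_1, ..., X_n of the coin-tossing game:
    net capital goes up by 1 with probability theta (HEAD),
    down by 1 with probability 1 - theta (TAIL)."""
    rng = random.Random(seed)
    path = [initial]
    for _ in range(n_tosses):
        step = 1 if rng.random() < theta else -1
        path.append(path[-1] + step)
    return path

path = simulate_fortune(theta=0.5, n_tosses=10)
```

Every step changes the capital by exactly ±1, in agreement with the transition probabilities displayed above.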

2 Markov Chains: Definition


It turns out that in a large number of situations, involving random evolution with time, the same
feature as observed in the Coin-Tossing game holds. This makes Markov Chains one of the most
important and widely used models for understanding and analysing such systems. Like in the case
of the Coin-Tossing game, we only consider systems whose possible values/states at different points
of time belong to a countable set.

Definition 1. Let S be a countable set. A sequence {Xn }n≥0 of random variables taking values in
S is said to be a Markov Chain if, for every n ≥ 0 and for every choice of x0 , x1 , . . . , xn−1 , x, y ∈ S,
P(Xn+1 = y | X0 = x0 , X1 = x1 , . . . , Xn−1 = xn−1 , Xn = x)
= P(Xn+1 = y | Xn = x) = P(X1 = y | X0 = x) = pxy

The definition captures two properties: one, the conditional probability on the left doesn’t
depend on x0 , x1 , . . . , xn−1 (first equality), and, two, it doesn’t depend on n either (second
equality).
Thus, a Markov chain, by definition, satisfies both Markov property and Time-homogeneity.
The set S is called the State Space of the chain and elements of S are called states. The
numbers pxy , x, y ∈ S are called transition probabilities of the chain: given the history of the
chain up to any time point n, pxy is the (conditional) probability that the chain will move from its
(given) present state x to the state y at the next time point. The following properties are obvious:

pxy ≥ 0 ∀ x, y ∈ S and ∑_{y∈S} pxy = 1 ∀ x ∈ S.
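For a finite state space, these two properties are easy to verify programmatically. A minimal sketch, where the dictionary representation and the function name are my own choices for illustration:

```python
def is_valid_transition(p, tol=1e-12):
    """Check that p[x][y] >= 0 for all x, y and that every row of
    transition probabilities sums to 1."""
    for x, row in p.items():
        if any(q < 0 for q in row.values()):
            return False
        if abs(sum(row.values()) - 1.0) > tol:
            return False
    return True

# A generic two-state chain (probabilities chosen arbitrarily)
p = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}
```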

3 Some Examples:
• On-Off chains

Imagine a system which can be in one of two states: ’on’ or ’off’. If it is ’off’ at time n, then the
chance that it will be ’on’ at time n + 1 is α, while the chance of going from an ’on’ state to ’off’ is β,
irrespective of the past history. Denoting the ’on’ and ’off’ states as states 1 and 0 respectively,
we get the simplest of Markov chains with state space S = {0, 1} and transition probabilities
p01 = α, p10 = β; of course, p00 = 1 − α, p11 = 1 − β.
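A short simulation of this chain, assuming illustrative values of α and β (the function name is mine):

```python
import random

def simulate_on_off(alpha, beta, n_steps, start=0, seed=1):
    """On-Off chain on S = {0, 1}: p01 = alpha, p10 = beta."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(n_steps):
        if states[-1] == 0:
            states.append(1 if rng.random() < alpha else 0)
        else:
            states.append(0 if rng.random() < beta else 1)
    return states

states = simulate_on_off(alpha=0.3, beta=0.5, n_steps=50)
```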

• Random Walk on Integers

Let ξn , n ≥ 1, be integer-valued i.i.d. random variables with common distribution:


P(ξn = i) = θi , i ∈ Z.

{Xn }n≥0 integer-valued sequence of random variables defined as:
X0 = any Z-valued random variable, independent of {ξn } and,
X1 = X0 + ξ1 , X2 = X1 + ξ2 , · · ·, Xn = Xn−1 + ξn = X0 + ξ1 +· · · + ξn , · · ·
It is easy to see that: (Xn )n≥0 is a Markov chain with state space S = Z and transition probabilities
pxy = θy−x , x, y ∈ Z. (write conditional probability in definition in terms of X0 , ξ1 , . . . , ξn+1 ). It
is called Random Walk on integers: A particle is moving along integers through random i.i.d.
steps/increments:

Xn is the position of the particle at time n.

The Coin-Tossing game is a special case in which the increments ξi take values ±1 only: this is
called Simple Random Walk.
An important property (space homogeneity):

pxy depends on x, y only through y − x

Exercise: Random Walks are the only Markov chains on integers whose transition probabilities
satisfy space homogeneity.
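The construction Xn = X0 + ξ1 + · · · + ξn translates directly into code. A sketch with an arbitrary step distribution (all names are illustrative):

```python
import random
from itertools import accumulate

def random_walk(step_dist, n_steps, x0=0, seed=2):
    """Random walk X_n = X_0 + xi_1 + ... + xi_n, where the i.i.d. steps
    xi_k take value i with probability step_dist[i] = theta_i."""
    rng = random.Random(seed)
    values, weights = zip(*step_dist.items())
    steps = rng.choices(values, weights=weights, k=n_steps)
    return list(accumulate(steps, initial=x0))

# Simple random walk: steps are +1 or -1 only
walk = random_walk({1: 0.5, -1: 0.5}, n_steps=100)
```

Note that `accumulate(..., initial=x0)` (Python 3.8+) yields X0 followed by the n partial sums, so the returned list has n + 1 entries.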

• Renewal Chain

The running of a process depends on an item that has a random lifetime. When it breaks down, a
new item is installed in its place at the start of the next day.
Assume: Lifetimes of successive replacements are i.i.d. random variables with common distribu-
tion:
P(L = i) = θi , i = 1, 2, . . .. (L: generic notation for lifetime)
Consider: Xn = age of item in use on day n, n = 0, 1, . . ..
It is easy to see that: (Xn )n≥0 is a Markov chain with state space S = N and transition probabilities:

px,x+1 = P(L ≥ x + 1 | L ≥ x), px,1 = P(L = x | L ≥ x)

and pxy = 0, for y ≠ 1, x + 1.


(Xn = x means item in use on day n survived at least x days!)
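The two transition probabilities can be computed directly from the lifetime distribution, since P(L ≥ x + 1 | L ≥ x) = P(L ≥ x + 1)/P(L ≥ x). A sketch, assuming a lifetime uniform on {1, 2, 3} purely for illustration:

```python
def renewal_transitions(theta, x):
    """Transition probabilities out of age x in the renewal chain.
    theta: dict with theta[i] = P(L = i), i = 1, 2, ...
    Returns (p_{x,x+1}, p_{x,1})."""
    tail = sum(q for i, q in theta.items() if i >= x)           # P(L >= x)
    tail_next = sum(q for i, q in theta.items() if i >= x + 1)  # P(L >= x + 1)
    return tail_next / tail, theta.get(x, 0.0) / tail

# Lifetime uniform on {1, 2, 3}: an item aged 2 either survives or is replaced
p_survive, p_renew = renewal_transitions({1: 1/3, 2: 1/3, 3: 1/3}, x=2)
```

The two probabilities always sum to 1, since from age x the chain must move either to age x + 1 or back to age 1.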

• Queuing chain - a toy model

Customers arrive at a service counter and queue up for service. If, at the start of a period, no
customers are waiting for service, there will be no service during that period; otherwise, only
the first customer in queue is served during that period.
Assume: Arrivals during successive periods are independent with common distribution:
P(ξ = i) = θi , i = 0, 1, 2, . . .. Xn = length of the queue at the start of the nth period.
It is easy to see: (Xn )n≥0 is a Markov chain with state space S = Z+ and transition probabilities:
pxy = θy−x+1 , for x ≥ 1, y ≥ x − 1
and p0y = θy , for y ≥ 0.
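Equivalently, the queue evolves as X_{n+1} = max(X_n − 1, 0) + (arrivals during the next period), which matches the transition probabilities above. A simulation sketch with an arbitrary arrival distribution (names are illustrative):

```python
import random

def simulate_queue(arrival_dist, n_periods, x0=0, seed=3):
    """Queue-length chain: one customer is served per period if the
    queue is nonempty, then new arrivals join.
    arrival_dist: dict with arrival_dist[i] = theta_i = P(xi = i)."""
    rng = random.Random(seed)
    values, weights = zip(*arrival_dist.items())
    queue = [x0]
    for _ in range(n_periods):
        served = 1 if queue[-1] >= 1 else 0
        arrivals = rng.choices(values, weights=weights)[0]
        queue.append(queue[-1] - served + arrivals)
    return queue

queue = simulate_queue({0: 0.5, 1: 0.3, 2: 0.2}, n_periods=200)
```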

• Ehrenfest Diffusion chain

Consider d balls distributed between two boxes. One of the d balls is selected at random, taken
out of the box it is in, and transferred to the other box. The same process is repeated. Consider
Xn = number of balls in Box 1 after the nth turn.
Easy to see: (Xn )n≥0 is a Markov chain with state space S = {0, 1, . . . , d} and transition probabil-
ities:
px,x−1 = x/d, px,x+1 = 1 − x/d and pxy = 0 for y ≠ x − 1, x + 1.
Remark: Captures an essential feature of diffusion of particles: Whenever a box carries a very large
share of the balls, the chances of a ball moving out of it into the other box are high!
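A simulation sketch of the Ehrenfest chain (function name and parameters are illustrative): with probability x/d a ball moves out of Box 1, otherwise a ball moves in.

```python
import random

def simulate_ehrenfest(d, n_steps, x0=None, seed=4):
    """Ehrenfest chain: X_n = number of balls in Box 1.
    The randomly chosen ball is in Box 1 with probability x/d,
    so p_{x,x-1} = x/d and p_{x,x+1} = 1 - x/d."""
    rng = random.Random(seed)
    x = d // 2 if x0 is None else x0
    path = [x]
    for _ in range(n_steps):
        x = x - 1 if rng.random() < x / d else x + 1
        path.append(x)
    return path

balls = simulate_ehrenfest(d=10, n_steps=500)
```

When Xn is large, x/d exceeds 1/2 and the chain tends to move down, which is exactly the diffusion effect noted in the remark.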
