
Stochastic Processes

Biman Chakraborty

Aliah University, Kolkata



Outline

1 Deterministic and Stochastic Processes

2 Classification of Stochastic Process

3 Discrete Time Markov Chain

4 Restricted Random Walk Models

5 Period of a state

6 Stationary Distribution



Deterministic vs. Stochastic Process

Definition
A process is deterministic if its future is completely determined by its present and past.

Example
The growth of some biological systems is governed by X(t) = X_0 e^{rt}.

Definition
A stochastic process is a random process evolving in time.
Even with full knowledge of the present state of the system and its entire past, its value at future times cannot be predicted with certainty.

Example
The snakes and ladders game
Definition
A stochastic process, or often random process, is a collection of random variables
{Xt , t ∈ T }, representing the evolution of some system of random values over time.

If T is continuous (discrete), then the stochastic process X_t is called a continuous (discrete) time stochastic process.

Definition
The set of all possible values of X_t is called the state space. The elements of this set are called states. The state space may be discrete or continuous.

If the state space is discrete (continuous), then the stochastic process is called a discrete (continuous) state space stochastic process.



Classification of Stochastic Process

According to state space and time, a stochastic process can be classified into four categories.
1 Discrete time and discrete state space (DTDS) stochastic processes:
Example : Random walk on the set of integers
2 Discrete time and continuous state space (DTCS) stochastic processes:
Example: Autoregressive processes
3 Continuous time and discrete state space (CTDS) stochastic processes:
Example: Population size of any species over time.
4 Continuous time and Continuous state space (CTCS) stochastic processes:
Example: Brownian motion



Definition
A discrete time stochastic process Xn is called a discrete time Markov chain, if the following
holds for all choices of n ≥ 0 and any set of states i0 , i1 , i2 , ..., in+1 in the state space S:

P [Xn+1 = in+1 |X0 = i0 , ..., Xn = in ] = P [Xn+1 = in+1 |Xn = in ]

Example
1 Random walk on the integers
2 Snakes and ladders game

Does every stochastic process have the Markov property?


An urn contains 2 red balls and 1 green ball.
One ball was drawn yesterday, one ball was drawn today, and the final ball will be drawn tomorrow (without replacement).
If X_1 = red but X_0 is not known, then P(X_2 = green | X_1 = red) = 1/2, whereas
P(X_2 = green | X_1 = red, X_0 = red) = 1. Hence this process does not have the Markov property.



Definition
A Markov chain {X_n : n = 0, 1, 2, ...} is said to be homogeneous, or to have stationary transition probabilities, if for all i, j ∈ S, P_{ij}(n, n+1) does not depend on n, i.e.,

P[X_{n+1} = j | X_n = i] = P[X_1 = j | X_0 = i].

Definition
A stochastic matrix (also termed probability matrix, transition matrix, substitution matrix, or Markov matrix) is a matrix used to describe the transitions of a Markov chain. The (i, j)th element of the one-step transition probability matrix (denoted by P) is p_{ij} = P[X_1 = j | X_0 = i].

The (i, j)th element of the n-step transition probability matrix (denoted by P^{(n)}) is p_{ij}^{(n)} = P[X_n = j | X_0 = i].



Example
Suppose a frog can jump between three lily pads, labeled 1, 2, and 3. If the frog is on lily pad 1, it will next jump to lily pad 2 with probability 1. Similarly, if the frog is on lily pad 3, it will next jump to lily pad 2 with probability 1. However, when the frog is on lily pad 2, it will next jump to lily pad 1 with probability 1/4, and to lily pad 3 with probability 3/4. This is a discrete time discrete state space stochastic process with state space S = {1, 2, 3}. The transition probability matrix is given by

P = \begin{pmatrix} 0 & 1 & 0 \\ 1/4 & 0 & 3/4 \\ 0 & 1 & 0 \end{pmatrix}.
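As a quick sanity check, here is a minimal sketch in R (the tool the later slides mention) that builds this transition matrix, verifies that each row sums to 1, and computes the two-step probabilities:

```r
# Frog-on-lily-pads chain: build P and sanity-check it
P <- matrix(c(0,   1,   0,
              1/4, 0,   3/4,
              0,   1,   0),
            nrow = 3, byrow = TRUE)
rowSums(P)   # each row of a stochastic matrix must sum to 1
P %*% P      # two-step transition probabilities P^2
```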

Is there any stochastic process which is non-stationary, i.e., time-inhomogeneous?


A woman may be never married, married for the first time, divorced, widowed, or remarried.
We can draw a state space diagram, but the transition probabilities change with age.



Theorem
Chapman-Kolmogorov equation:
Let {X_n : n = 0, 1, 2, ...} be a homogeneous Markov chain. For i, j ∈ S and m, n ∈ {1, 2, 3, ...}, the following hold:

1 p_{ij}^{(m+n)} = \sum_{k ∈ S} p_{ik}^{(n)} p_{kj}^{(m)} = \sum_{k ∈ S} p_{ik}^{(m)} p_{kj}^{(n)}

2 p_{ij}^{(m+n)} ≥ p_{ik}^{(m)} p_{kj}^{(n)} ∀ k ∈ S
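In matrix form the first identity is just P^{m+n} = P^m P^n, which is easy to check numerically. A small sketch, reusing the frog-chain matrix from the previous slide (mpow is an illustrative helper, not from the slides):

```r
# Chapman-Kolmogorov check: P^(m+n) equals P^m %*% P^n
P <- matrix(c(0, 1, 0,  1/4, 0, 3/4,  0, 1, 0), nrow = 3, byrow = TRUE)
mpow <- function(M, k) Reduce(`%*%`, replicate(k, M, simplify = FALSE))
all.equal(mpow(P, 5), mpow(P, 2) %*% mpow(P, 3))   # TRUE (up to rounding)
```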



Definition
Initial distribution: the probability distribution of the random variable X_0 is called the initial distribution, e.g., P[X_0 = i] = α_i, i ∈ S. We may also represent this as a row vector π^{(0)} = (P(X_0 = 1), P(X_0 = 2), ...).

Time-dependent distribution: defines the probability that X_n takes a value in a particular subset of S at a given time n. Note that we can calculate this distribution from the initial distribution π^{(0)} and the transition probability matrix P. For a state j ∈ S,

P(X_n = j) = \sum_{i ∈ S} P(X_n = j, X_0 = i) = \sum_{i ∈ S} P(X_0 = i) p_{ij}^{(n)} = π^{(0)} P^{(n)}[, j] = π^{(0)} P^n[, j]

(a short numerical sketch follows this list).

Stationary distribution: defines the probability that X_t takes a value in a particular subset of S as t → ∞ (assuming the limit exists).

Hitting probability: the probability that a given state in S will ever be entered.

First passage time: the instant at which the stochastic process first enters a given state or set of states, starting from a given initial state.
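A brief R sketch of the time-dependent distribution, propagating an assumed point-mass initial distribution through the frog chain:

```r
# pi_n = pi_0 %*% P^n, computed by repeated multiplication
P   <- matrix(c(0, 1, 0,  1/4, 0, 3/4,  0, 1, 0), nrow = 3, byrow = TRUE)
pi0 <- c(1, 0, 0)                  # start in state 1 with probability 1
pin <- pi0
for (k in 1:10) pin <- pin %*% P   # distribution of X_10
pin
```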



Path of a Markov Chain
For any process, it is typically of interest to know the values of P(X_0 = i_0, X_1 = i_1, ..., X_n = i_n), where n ∈ {0, 1, 2, 3, ...} and i_0, i_1, ..., i_n ∈ S. The event X_0 = i_0, X_1 = i_1, ..., X_n = i_n is called a path of the Markov chain.

The multiplication rule:

P (E1 E2 E3 ...En ) = P (E1 )P (E2 |E1 )P (E3 |E1 E2 )...P (En |E1 E2 ...En−1 )

For a set of n events such that P (Ei |E1 E2 ...Ei−1 ) = P (Ei |Ei−1 ), we have

P (E1 E2 E3 ...En ) = P (E1 )P (E2 |E1 )P (E3 |E2 )...P (En |En−1 )

Probability of a Path
P (X0 = i0 , X1 = i1 , ...Xn = in ) = P (X0 = i0 )P (X1 = i1 |X0 = i0 )...P (Xn = in |Xn−1 = in−1 )
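The path-probability formula translates directly into a few lines of R; path_prob below is a hypothetical helper with states encoded as matrix indices:

```r
# P(X0=i0, ..., Xn=in) = alpha[i0] * product of one-step probabilities along the path
path_prob <- function(path, alpha, P) {
  p <- alpha[path[1]]
  for (k in seq_along(path)[-1]) p <- p * P[path[k - 1], path[k]]
  p
}
P <- matrix(c(0, 1, 0,  1/4, 0, 3/4,  0, 1, 0), nrow = 3, byrow = TRUE)
path_prob(c(1, 2, 3, 2), alpha = c(1, 0, 0), P = P)   # 1 * 1 * 3/4 * 1 = 0.75
```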



Definition
A state j ∈ S is said to be accessible from state i ∈ S if p_{ij}^{(n)} > 0 for some n ≥ 0 (written as i → j).

Two states i ∈ S and j ∈ S are said to communicate (written as i ↔ j) if i → j and j → i.

Definition
The relation ↔ forms an equivalence relation (how?). The set of equivalence classes in a DTMC are called the communication classes or, more simply, the classes of the Markov chain. If every state in the Markov chain can be reached from every other state, then there is only one communication class (all the states are in the same class).

Definition
A Markov chain is said to be irreducible if there is only one equivalence class, i.e., if all
states communicate with each other.
Example
Consider a Markov chain with state space {1, 2, 3, 4} and transition probability matrix P, where

P = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 \\ 1/2 & 1/2 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 0 & 0 & 0 & 1 \end{pmatrix}.
Find the equivalence classes.

Are all the states in a single communicating class? If so, every state would communicate with state 1, so we check this below.



Case 1: Does state 1 communicate with state 2?
Yes.
Note that p12 > 0. Hence 1 → 2
p21 > 0 =⇒ 2 → 1. Therefore 1 ↔ 2.
Case 2: Does state 1 communicate with state 3?
No. Since p_{31} > 0, 3 → 1. But we cannot find any n ≥ 0 for which p_{13}^{(n)} > 0. Hence 1 and 3 do not communicate.
Case 3: Does state 1 communicate with state 4?
No, because we cannot find any n ≥ 0 for which p_{14}^{(n)} > 0, nor any m ≥ 0 such that p_{41}^{(m)} > 0.
Hence all states cannot be in one communicating class; more precisely, only 1 and 2 are in the same communicating class.
Case 4: Are 3 and 4 in the same class?
No, because 3 is not accessible from 4.
Hence the communication classes are {{1, 2}, {3}, {4}}.
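Communication classes can also be computed mechanically: in a finite chain, i → j if and only if the (i, j) entry of (I + P)^{|S|−1} is positive. A rough R sketch (comm_classes is a hypothetical helper, not from the slides):

```r
# i -> j in at most |S|-1 steps  iff  (I + P)^(|S|-1)[i, j] > 0
comm_classes <- function(P) {
  n <- nrow(P)
  Rk <- diag(n); R <- diag(n) + P
  for (k in 1:(n - 1)) Rk <- Rk %*% R
  acc  <- Rk > 0          # accessibility matrix: acc[i, j] is TRUE iff i -> j
  comm <- acc & t(acc)    # communication: i <-> j
  unique(apply(comm, 1, which))   # returns a list of classes here
}
P <- matrix(c(1/2, 1/2, 0, 0,  1/2, 1/2, 0, 0,
              1/4, 1/4, 1/4, 1/4,  0, 0, 0, 1), nrow = 4, byrow = TRUE)
comm_classes(P)   # {1,2}, {3}, {4}
```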
Definition
A state i ∈ S is said to be absorbing if pii = 1.

Lemma
A state i ∈ S is absorbing if and only if p_{ii}^{(n)} = 1 for all n ≥ 0.

If a state i ∈ S is absorbing, then it does not communicate with any other state (how?).



Definition
Let f_{ii}^{(n)} denote the probability that, starting from state i, the first return to state i occurs at the nth step:

f_{ii}^{(n)} = P[X_n = i, X_m ≠ i, m = 1, 2, 3, ..., n − 1 | X_0 = i], n ≥ 1.

The probabilities f_{ii}^{(n)} are known as first return probabilities. Define f_{ii}^{(0)} = 0. Note that f_{ii}^{(1)} = p_{ii}, but in general f_{ii}^{(n)} ≠ p_{ii}^{(n)}. The first return probabilities concern the first time the chain returns to state i; thus

0 ≤ \sum_{n=1}^{∞} f_{ii}^{(n)} ≤ 1



Definition

State i is said to be transient if \sum_{n=1}^{∞} f_{ii}^{(n)} < 1. State i is said to be recurrent (persistent) if

f_{ii} = \sum_{n=1}^{∞} f_{ii}^{(n)} = 1.

Definition
If state i is recurrent, then the set {f_{ii}^{(n)}}_{n≥1} defines the probability distribution of a random variable T_{ii}, the first return time to state i.
1 The mean of the distribution of T_{ii} is referred to as the mean recurrence time to state i:

µ_{ii} = \sum_{n=1}^{∞} n f_{ii}^{(n)}

2 A recurrent state i is positive recurrent if µ_{ii} < ∞, otherwise null recurrent.



Example
P = \begin{pmatrix} 1/2 & 1/2 \\ 1/3 & 2/3 \end{pmatrix}. Find the first return probabilities of each state, where S = {1, 2}.

f_{11}^{(1)} = P(X_1 = 1 | X_0 = 1) = p_{11} = 1/2

f_{11}^{(2)} = P(X_2 = 1, X_1 ≠ 1 | X_0 = 1) = P(X_2 = 1, X_1 = 2 | X_0 = 1) = p_{12} p_{21} = 1/6

f_{11}^{(3)} = P(X_3 = 1, X_2 ≠ 1, X_1 ≠ 1 | X_0 = 1) = P(X_3 = 1, X_2 = 2, X_1 = 2 | X_0 = 1) = p_{12} p_{22} p_{21} = (1/6) × (2/3)

f_{11}^{(4)} = P(X_4 = 1, X_3 ≠ 1, X_2 ≠ 1, X_1 ≠ 1 | X_0 = 1) = P(X_4 = 1, X_3 = 2, X_2 = 2, X_1 = 2 | X_0 = 1) = p_{12} p_{22}^2 p_{21} = (1/6) × (2/3)^2



Example
Hence

f_{11} = f_{11}^{(1)} + f_{11}^{(2)} + f_{11}^{(3)} + f_{11}^{(4)} + ... = 1/2 + (1/6)(1 + 2/3 + (2/3)^2 + ...) = 1/2 + (1/6)(3) = 1

Therefore state 1 is recurrent. The mean recurrence time for state 1 is

µ_{11} = \sum_{n=1}^{∞} n f_{11}^{(n)} = 1 × (1/2) + 2 × (1/6) + 3 × (1/6)(2/3) + 4 × (1/6)(2/3)^2 + ...
= 1/2 + (1/6) \sum_{n=2}^{∞} n (2/3)^{n−2} = 1/2 + (1/6) × (2 − 2/3)/(1 − 2/3)^2 = 1/2 + 2 = 5/2

Since µ_{11} = 5/2 < ∞, state 1 is positive recurrent.



Lemma
1 p_{ij}^{(n)} = \sum_{r=1}^{n} f_{ij}^{(r)} p_{jj}^{(n−r)}

2 f_{ij}^{(n)} = p_{ij}^{(n)} − \sum_{r=1}^{n−1} f_{ij}^{(r)} p_{jj}^{(n−r)}

Lemma
For i, j ∈ S,

1 \sum_{n=1}^{∞} p_{ij}^{(n)} = f_{ij} \sum_{n=0}^{∞} p_{jj}^{(n)} = f_{ij} \left(1 + \sum_{n=1}^{∞} p_{jj}^{(n)}\right)

2 f_{ij} = \dfrac{\sum_{n=1}^{∞} p_{ij}^{(n)}}{1 + \sum_{n=1}^{∞} p_{jj}^{(n)}}

3 \sup_{n≥1} p_{ij}^{(n)} ≤ f_{ij} ≤ \sum_{n=1}^{∞} p_{ij}^{(n)}
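The second recursion gives a practical way to compute first-return probabilities numerically. A small R sketch, truncating the sums at N = 200 (an arbitrary cutoff) and reusing the two-state matrix from the example a few slides back, approximately recovers f_{11} = 1 and µ_{11} = 5/2:

```r
# f_11^(n) = p_11^(n) - sum_{r=1}^{n-1} f_11^(r) p_11^(n-r)
P <- matrix(c(1/2, 1/2,  1/3, 2/3), nrow = 2, byrow = TRUE)
N <- 200
p11 <- numeric(N); Pk <- diag(2)
for (n in 1:N) { Pk <- Pk %*% P; p11[n] <- Pk[1, 1] }   # n-step probabilities
f11 <- numeric(N)
for (n in 1:N) {
  conv <- if (n > 1) sum(f11[1:(n - 1)] * p11[(n - 1):1]) else 0
  f11[n] <- p11[n] - conv
}
sum(f11)           # ~ 1   : state 1 is recurrent
sum((1:N) * f11)   # ~ 2.5 : the mean recurrence time mu_11
```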



Lemma
A state j is accessible from state i (i → j) if and only if fij > 0.

Two states i and j communicate (i ↔ j) if and only if fij > 0 and fji > 0.



Theorem

A state i is recurrent (transient) if and only if \sum_{n=0}^{∞} p_{ii}^{(n)} diverges (converges), i.e.,

\sum_{n=0}^{∞} p_{ii}^{(n)} = ∞ (< ∞)

Theorem
If i ↔ j then
1 i is recurrent ⇐⇒ j is recurrent.
2 i is transient ⇐⇒ j is transient.

Theorem

If j ∈ S is transient, then for all i ∈ S, \sum_{n=1}^{∞} p_{ij}^{(n)} < ∞ and \lim_{n→∞} p_{ij}^{(n)} = 0.



Theorem
In a finite Markov chain, not all states can be transient.

Theorem
In a finite and irreducible Markov chain, all states are recurrent.

Exercise
Consider a homogeneous Markov chain {X_n : n ≥ 0} with S = {1, 2, 3, 4} and

P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}.

Determine which states are transient and which are recurrent.

Theorem
i ∈ S recurrent ⇐⇒ E[number of returns to i |X0 = i] = ∞



Definition
A restricted random walk is a random walk with at least one boundary.

Example
1 {0, 1, 2, ..., N}: finite, two boundaries, at 0 and N.
2 {0, 1, 2, ...}: semi-infinite, one boundary, at 0.

Example
1 Absorbing boundary: an absorbing boundary at x = 0 assumes the one-step transition probability p_{00} = 1.
2 Reflecting boundary: a reflecting boundary at x = 0 assumes the transition probabilities p_{11} = 1 − p and p_{12} = p.
3 Elastic boundary: an elastic boundary at x = 0 assumes the transition probabilities p_{12} = p, p_{11} = sq, p_{10} = (1 − s)q and p_{00} = 1, where p + q = 1 and 0 < p, s < 1.



Example

A simple random walk on {1, 2, 3, 4} with absorbing boundaries at 1 and 4 has transition probability matrix

P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ q & 0 & p & 0 \\ 0 & q & 0 & p \\ 0 & 0 & 0 & 1 \end{pmatrix}

Consider instead a random walk with reflecting boundary at 5 but elastic boundary at 0:

P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ (1−s)q & sq & p & 0 & 0 & 0 \\ 0 & q & 0 & p & 0 & 0 \\ 0 & 0 & q & 0 & p & 0 \\ 0 & 0 & 0 & q & p & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}



Gambler’s Ruin Problem

Let X_n denote the gambler's fortune at time n; then {X_n, n = 0, 1, 2, ...} is a Markov chain on S = {0, 1, ..., N} with

P = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ q & 0 & p & \cdots & 0 \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & q & 0 & p \\ 0 & 0 & 0 & 0 & 1 \end{pmatrix}

Communication classes: {0}, {1, 2, ..., N − 1}, {N}
Recurrent states: {0}, {N}
Transient states: {1, 2, ..., N − 1}
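A sketch that assembles this matrix for any N and win probability p (gamblers_P is a hypothetical helper; q moves the fortune down and p up, matching the matrix above):

```r
# Gambler's ruin chain on {0, 1, ..., N}: absorbing at 0 and N
gamblers_P <- function(N, p) {
  q <- 1 - p
  P <- matrix(0, N + 1, N + 1)          # row/column i+1 corresponds to fortune i
  P[1, 1] <- 1; P[N + 1, N + 1] <- 1    # absorbing boundaries
  for (i in 2:N) { P[i, i - 1] <- q; P[i, i + 1] <- p }
  P
}
gamblers_P(4, 0.5)
```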



Definition
The period of a state i is defined by

d(i) = g.c.d.{n ≥ 1 : p_{ii}^{(n)} > 0}.

The state i has period d if p_{ii}^{(n)} = 0 unless n = νd is a multiple of d.

If a state i has period d(i) > 1, it is said to be periodic with period d(i).

If d(i) = 1, then i is said to be aperiodic.

If p_{ii}^{(n)} = 0 for all n ≥ 1, define d(i) = 0.



Example
Find the periods of the states of the following Markov chain.

P = \begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0.2 & 0.5 & 0.3 \\ 0.1 & 0.5 & 0.4 \end{pmatrix}

Theorem
Let X_n be a Markov chain with state space S. If i, j ∈ S are in the same communication class, then d(i) = d(j); that is, they have the same period.

Definition
An aperiodic positive recurrent state is called ergodic. If the Markov chain is irreducible and all its states are ergodic, then it is called an ergodic Markov chain.



Example
Find the periods of the following Markov chain with S = {1, 2, 3}.

P = \begin{pmatrix} 0 & 0.5 & 0.5 \\ 0.5 & 0 & 0.5 \\ 0.5 & 0.5 & 0 \end{pmatrix}

If we calculate P^2 and P^3 we get

P^2 = \begin{pmatrix} 0.5 & 0.25 & 0.25 \\ 0.25 & 0.5 & 0.25 \\ 0.25 & 0.25 & 0.5 \end{pmatrix},   P^3 = \begin{pmatrix} 0.25 & 0.375 & 0.375 \\ 0.375 & 0.25 & 0.375 \\ 0.375 & 0.375 & 0.25 \end{pmatrix}

Hence the period of state 1 is g.c.d.(2, 3, ...) = 1. Similarly, the period of each state is 1.
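The gcd in the definition can be evaluated over a finite horizon in R. This is only a heuristic for the true period, assuming the horizon N = 50 is long enough to expose all return lengths (period is an illustrative helper):

```r
# Period of state i: gcd of all n <= N with p_ii^(n) > 0
period <- function(P, i, N = 50, tol = 1e-12) {
  gcd <- function(a, b) if (b == 0) a else gcd(b, a %% b)
  ns <- integer(0); Pk <- diag(nrow(P))
  for (n in 1:N) { Pk <- Pk %*% P; if (Pk[i, i] > tol) ns <- c(ns, n) }
  if (length(ns) == 0) 0 else Reduce(gcd, ns)
}
P <- matrix(c(0, 0.5, 0.5,  0.5, 0, 0.5,  0.5, 0.5, 0), nrow = 3, byrow = TRUE)
sapply(1:3, function(i) period(P, i))   # 1 1 1 : every state is aperiodic
```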



Definition
Two states i, j ∈ S are said to be of the same type if they have the same classification.
That is,
1 i and j have the same period, and
2 either
both i and j are transient, or
both i and j are positive recurrent, or
both i and j are null recurrent.

Definition
A set C of states is said to be closed if once the process enters it, it cannot get out of it, i.e., if f_{ij} = 0 ∀ i ∈ C, j ∈ C^c. If C is closed, then p_{ij}^{(n)} = 0 ∀ n ≥ 1, ∀ i ∈ C, j ∈ C^c.



Theorem
1 If j is positive recurrent, then

\lim_{n→∞} p_{jj}^{(nt)} = t/µ_{jj}, where t is the period of state j.

2 If j is null recurrent (whether periodic or aperiodic), then \lim_{n→∞} p_{jj}^{(nt)} = 0.

3 If k is null recurrent, then for any j ∈ S, \lim_{n→∞} p_{jk}^{(n)} = 0.

4 If k is aperiodic and positive recurrent, then for any j ∈ S,

\lim_{n→∞} p_{jk}^{(n)} = f_{jk}/µ_{kk}.



Stationary Distribution

When does the distribution of states tend to a limiting distribution?

Definition
A non-negative vector π is said to be an invariant measure if

π′P = π′,

which in component form is

π_i = \sum_{j} π_j p_{ji} ∀ i ∈ S.

If π also satisfies \sum_k π_k = 1, then π is called a stationary, equilibrium or steady-state probability distribution.

A stationary distribution is a left eigenvector of the transition matrix corresponding to eigenvalue 1.



Example
Consider the Markov chain with state space S = {1, 2} and transition probability matrix

P = \begin{pmatrix} 0.5 & 0.5 \\ 0.7 & 0.3 \end{pmatrix}.

The eigenvalues are 1 and −0.2.
Let (x, y)′ be the left eigenvector corresponding to the eigenvalue 1. Hence

\begin{pmatrix} −0.5 & 0.7 \\ 0.5 & −0.7 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

Solving, we get 5x = 7y = k (say).

However, this eigenvector will be a probability distribution only if x + y = 1 ⇒ k/5 + k/7 = 1 ⇒ k = 35/12.
Therefore x = 7/12, y = 5/12.

Note that \lim_{n→∞} P(X_n = 1) = 7/12 and \lim_{n→∞} P(X_n = 2) = 5/12.

Again, we can calculate \lim_{n→∞} P^{(n)} = \begin{pmatrix} 7/12 & 5/12 \\ 7/12 & 5/12 \end{pmatrix}.
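The same computation in R, using the fact that a left eigenvector of P is a right eigenvector of t(P); a short sketch:

```r
# Stationary distribution as the normalised left eigenvector for eigenvalue 1
P <- matrix(c(0.5, 0.5,  0.7, 0.3), nrow = 2, byrow = TRUE)
e <- eigen(t(P))                                   # right eigenvectors of t(P)
v <- Re(e$vectors[, which.min(abs(e$values - 1))]) # pick the eigenvalue-1 vector
v / sum(v)                                         # 0.5833 0.4167 = (7/12, 5/12)
```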



Lemma
The distribution of Xn is independent of n ⇐⇒ the initial distribution is a stationary
distribution.

Proof.
Let a_j = P[X_0 = j], j = 1, 2, ....
First suppose that the distribution of X_n does not depend on n. Then

a_j = P[X_1 = j] = \sum_{k=1}^{∞} P[X_1 = j | X_0 = k] P[X_0 = k] = \sum_{k=1}^{∞} p_{kj} a_k

⇒ {a_1, a_2, ...} is a stationary distribution.

Conversely, suppose that {a_1, a_2, ...} is a stationary distribution. Iterating the one-step stationarity equation n times gives a_j = \sum_k a_k p_{kj}^{(n)}. Then,

P[X_0 = j] = a_j = \sum_{k=1}^{∞} a_k p_{kj}^{(n)} = \sum_{k=1}^{∞} P[X_0 = k] P[X_n = j | X_0 = k] = P[X_n = j]



Example
Consider the Markov chain with S = {1, 2, 3, 4} and

P = \begin{pmatrix} 0 & 1/5 & 3/5 & 1/5 \\ 1/4 & 1/4 & 1/4 & 1/4 \\ 1 & 0 & 0 & 0 \\ 0 & 1/2 & 1/2 & 0 \end{pmatrix}.

One can verify (using R) that

\lim_{n→∞} P^{(n)} = \begin{pmatrix} 0.3731 & 0.1791 & 0.3284 & 0.1194 \\ 0.3731 & 0.1791 & 0.3284 & 0.1194 \\ 0.3731 & 0.1791 & 0.3284 & 0.1194 \\ 0.3731 & 0.1791 & 0.3284 & 0.1194 \end{pmatrix}

Hence a unique stationary distribution exists, given by π = (0.3731, 0.1791, 0.3284, 0.1194). Verify this by finding the left eigenvector of P.
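A sketch of the R verification alluded to above: raise P to a high power and watch the rows agree (500 iterations is an arbitrary cutoff):

```r
# Iterate P; for this irreducible aperiodic chain, P^n converges row-wise
P <- matrix(c(0,   1/5, 3/5, 1/5,
              1/4, 1/4, 1/4, 1/4,
              1,   0,   0,   0,
              0,   1/2, 1/2, 0), nrow = 4, byrow = TRUE)
Pn <- diag(4)
for (k in 1:500) Pn <- Pn %*% P
round(Pn, 4)   # every row ~ (0.3731, 0.1791, 0.3284, 0.1194)
```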



Example
Let us consider the following Markov chain with S = {1, 2, 3}.

P = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{pmatrix}

If we calculate the higher order transition probability matrices, we get

P^{2n} = \begin{pmatrix} 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \end{pmatrix},   P^{2n+1} = \begin{pmatrix} 0 & 1 & 0 \\ 0.5 & 0 & 0.5 \\ 0 & 1 & 0 \end{pmatrix}

The transition probabilities are not independent of n, and the rows are not identical across transition matrices: the limiting distribution does not exist! What about the period of the Markov chain?



Example
Let us consider a Markov chain with state space S = {1, 2, 3, 4} and transition probability matrix

P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1/2 & 0 & 1/2 & 0 \\ 1/3 & 0 & 0 & 2/3 \\ 0 & 0 & 0 & 1 \end{pmatrix}.

One can verify that

\lim_{n→∞} P^{(n)} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0.6667 & 0 & 0 & 0.3333 \\ 0.3333 & 0 & 0 & 0.6667 \\ 0 & 0 & 0 & 1 \end{pmatrix}

Hence a unique stationary distribution does not exist.



1 Under what conditions on a Markov chain will a stationary distribution exist?

2 When a stationary distribution exists, when is it unique?

3 Under what conditions can we guarantee convergence to a unique stationary distribution?



Theorem
If a Markov chain is irreducible and recurrent, then there is an invariant measure π, unique up to multiplicative constants, that satisfies 0 < π_j < ∞ for all j ∈ S. Further, if the Markov chain is positive recurrent, then

π_i = 1/µ_{ii}.

Theorem
Suppose a Markov chain is irreducible and that a stationary distribution π exists:

π′ = π′P,   \sum_{j ∈ S} π_j = 1,   π_j > 0.

Then the Markov chain is positive recurrent.

Thus, for an irreducible chain, a necessary and sufficient condition for positive recurrence is simply the existence of a stationary distribution.



Example
Let us consider the Markov chain with state space S = {1, 2} and transition probability matrix

P = \begin{pmatrix} 1/2 & 1/2 \\ 1/3 & 2/3 \end{pmatrix}.

The eigenvalues are 1 and 1/6. The left eigenvector of P corresponding to the eigenvalue 1 is the right eigenvector of P′ corresponding to 1. Let (x, y)′ be that eigenvector. Hence

\begin{pmatrix} −1/2 & 1/3 \\ 1/2 & −1/3 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}

Solving, we get x/2 = y/3 = k (say). However, this eigenvector will be a probability distribution only if x + y = 1 ⇒ 2k + 3k = 1 ⇒ k = 1/5.
Therefore x = 2/5, y = 3/5.
Note that \lim_{n→∞} P(X_n = 1) = 2/5 and \lim_{n→∞} P(X_n = 2) = 3/5.
We can find µ_{11} = 5/2. Hence \lim_{n→∞} P(X_n = 1) = 1/µ_{11}.


Example
Here we consider a Markov chain with state space S = {1, 2, 3, 4, 5} and transition probability matrix

P = \begin{pmatrix} 1/3 & 2/3 & 0 & 0 & 0 \\ 3/4 & 1/4 & 0 & 0 & 0 \\ 0 & 0 & 1/8 & 1/4 & 5/8 \\ 0 & 0 & 0 & 1/2 & 1/2 \\ 0 & 0 & 1/3 & 0 & 2/3 \end{pmatrix}.

It is easy to verify that the Markov chain is not irreducible and its communication classes are {{1, 2}, {3, 4, 5}}. We can find that

\lim_{n→∞} P^{(n)} = \begin{pmatrix} 0.5294 & 0.4706 & 0 & 0 & 0 \\ 0.5294 & 0.4706 & 0 & 0 & 0 \\ 0 & 0 & 0.2424 & 0.1212 & 0.6364 \\ 0 & 0 & 0.2424 & 0.1212 & 0.6364 \\ 0 & 0 & 0.2424 & 0.1212 & 0.6364 \end{pmatrix}.

Does a limiting distribution exist? If not, why?




THANK YOU
