
# PROBABILITY & QUEUEING THEORY

## (As per SRM UNIVERSITY Syllabus)

Dr. P. GodhandaRaman
Assistant Professor
Department of Mathematics
SRM University, Kattankulathur – 603 203
Email : godhanda@gmail.com, Mobile : 9941740168
Dr. P. Godhandaraman, Assistant Professor, Dept of Maths, SRM University, Kattankulathur, Mobile: 9941740168, Email: godhanda@gmail.com
15MA207 – PROBABILITY AND QUEUEING THEORY
UNIT – 5 : MARKOV CHAINS
Syllabus
• Introduction to stochastic processes, Markov processes, Markov chains, one-step & n-step transition probabilities
• Transition probability matrix and applications
• Chapman Kolmogorov theorem (statement only) – applications
• Classification of states of a Markov chain – applications

INTRODUCTION
Random Processes or Stochastic Processes : A random process is a collection of random variables $\{X(s,t)\}$ which are
functions of a real variable $t$ (time). Here $s \in S$ (sample space) and $t \in T$ (index set), and each $X(s,t)$ is a real valued
function. The set of possible values of any individual member $X(t)$ is called the state space.
Classification : Random processes can be classified into 4 types depending on the continuous or discrete nature of the
state space S and index set T.
1. Discrete random sequence : If both S and T are discrete
2. Discrete random process : If S is discrete and T is continuous
3. Continuous random sequence : If S is continuous and T is discrete
4. Continuous random process : If both S and T are continuous.
Markov Process
If, for $t_1 < t_2 < \cdots < t_n$, we have $P[X(t_n) \le x_n \,/\, X(t_1) = x_1, X(t_2) = x_2, \ldots, X(t_{n-1}) = x_{n-1}] = P[X(t_n) \le x_n \,/\, X(t_{n-1}) = x_{n-1}]$,
then the process is called a Markov process. That is, if the future behaviour of the process depends only on the
present state and not on the past, then the random process is called a Markov process.
Markov Chain : If, for all $n$, $P[X_n = a_n \,/\, X_{n-1} = a_{n-1}, X_{n-2} = a_{n-2}, \ldots, X_0 = a_0] = P[X_n = a_n \,/\, X_{n-1} = a_{n-1}]$, then
the process $\{X_n;\ n = 0, 1, 2, \ldots\}$ is called a Markov chain.
One Step Transition Probability : The conditional probability $p_{ij}(n-1, n) = P[X_n = a_j \,/\, X_{n-1} = a_i]$ is called the one step
transition probability from state $a_i$ to state $a_j$ in the $n^{th}$ step.
Homogeneous Markov Chain : If the one step transition probability does not depend on the step, that is, $p_{ij}(n-1, n) = p_{ij}(m-1, m)$, the Markov chain is called a homogeneous Markov chain, or the chain is said to have stationary
transition probabilities.
$n$ - Step Transition Probability : The conditional probability $p_{ij}^{(n)} = P[X_n = a_j \,/\, X_0 = a_i]$ is called the $n$ - step transition
probability.
Chapman Kolmogorov Equations : If $P$ is the tpm of a homogeneous Markov chain, the $n$ - step tpm $P^{(n)}$ is equal to
$P^n$. That is, $\left[p_{ij}^{(n)}\right] = \left[p_{ij}\right]^{n}$.
Regular Matrix : A stochastic matrix $P$ is said to be a regular matrix, if all the entries of $P^m$ are positive for some positive integer $m$. A
homogeneous Markov chain is said to be regular if its tpm is regular.
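The Chapman Kolmogorov relation above can be checked numerically for any small tpm; a minimal sketch with numpy (the 2-state matrix here is just an illustrative example, not one of the problems below):

```python
import numpy as np

# An arbitrary 2-state stochastic matrix (each row sums to 1).
P = np.array([[0.3, 0.7],
              [0.4, 0.6]])

# Chapman-Kolmogorov: the 2-step tpm is the matrix square of P.
P2 = P @ P

# Entry (0, 0) expanded by hand: sum over the intermediate state k.
manual = sum(P[0, k] * P[k, 0] for k in range(2))
assert abs(P2[0, 0] - manual) < 1e-12

# Rows of any power of a stochastic matrix still sum to 1.
assert np.allclose(P2.sum(axis=1), 1.0)
print(P2)
```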
Classification of States of a Markov Chain
Irreducible Chain and Non - Irreducible (or) Reducible : If for every $i, j$ we can find some $n$ such that $p_{ij}^{(n)} > 0$, then
every state can be reached from every other state, and the Markov chain is said to be irreducible. Otherwise the chain is
non - irreducible or reducible.
Return State : State $i$ of a Markov chain is called a return state, if $p_{ii}^{(n)} > 0$ for some $n > 1$.
Periodic State and Aperiodic State : The period $d_i$ of a return state $i$ is the greatest common divisor of all $m$ such that
$p_{ii}^{(m)} > 0$. That is, $d_i = \text{GCD}\{m : p_{ii}^{(m)} > 0\}$. State $i$ is periodic with period $d_i$ if $d_i > 1$ and aperiodic if $d_i = 1$.
Recurrent (Persistent) State and Transient State : If $\sum_{n=1}^{\infty} f_{ii}^{(n)} = 1$, where $f_{ii}^{(n)}$ is the probability of first return to state $i$ in $n$ steps, the return to state $i$ is certain and the state $i$ is said to be
persistent or recurrent. Otherwise, it is said to be transient.
Null Persistent and Non - null Persistent State : $\mu_{ii} = \sum_{n=1}^{\infty} n f_{ii}^{(n)}$ is called the mean recurrence time of the state $i$. If
$\mu_{ii}$ is finite, the state $i$ is non null persistent. If $\mu_{ii} = \infty$, the state $i$ is null persistent.
Ergodic State : A non null persistent and aperiodic state is called ergodic.

Theorem used to classify states
1. If a Markov chain is irreducible, all its states are of the same type: all transient, all null persistent, or all
non null persistent. All its states are either aperiodic or periodic with the same period.
2. If a Markov chain is finite and irreducible, all its states are non null persistent.
PROBLEMS
1. A man either drives a car or catches a train to go to office each day. He never goes 2 days in a row by train but if he
drives one day, then the next day he is just as likely to drive again as he is to travel by train. Now suppose that on the
first day of the week, the man tossed a fair die and drove to work if and only if a 6 appeared. (i) Find the probability
that he takes a train on the 3rd day (ii) Find the probability that he drives to work in the long run.

Solution : Let T - Train and C - Car. If today he goes by train, the next day he will not go by train, so $P(T \to T) = 0$ and $P(T \to C) = 1$. If he drives today, the next day he is equally likely to drive or take the train, so $P(C \to T) = \frac{1}{2}$ and $P(C \to C) = \frac{1}{2}$. The tpm, with states in the order (T, C), is
$$P = \begin{pmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix}.$$
On the first day of the week, the man tossed a fair die and drove to work if and only if a 6 appeared, so the initial
state probability distribution is
$$p^{(0)} = \left(\tfrac{5}{6}, \tfrac{1}{6}\right).$$
(i) The 2nd day state distribution is
$$p^{(1)} = p^{(0)} P = \left(\tfrac{5}{6}, \tfrac{1}{6}\right)\begin{pmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} = \left(\tfrac{1}{12}, \tfrac{11}{12}\right).$$
The 3rd day state distribution is
$$p^{(2)} = p^{(1)} P = \left(\tfrac{1}{12}, \tfrac{11}{12}\right)\begin{pmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} = \left(\tfrac{11}{24}, \tfrac{13}{24}\right).$$
P(he travels by train on the 3rd day) $= \frac{11}{24}$.
(ii) For the limiting form or long run probability distribution, solve $\pi P = \pi$ with $\pi = (\pi_1, \pi_2)$:
$$(\pi_1, \pi_2)\begin{pmatrix} 0 & 1 \\ \frac{1}{2} & \frac{1}{2} \end{pmatrix} = (\pi_1, \pi_2)$$
$$\tfrac{1}{2}\pi_2 = \pi_1 \Rightarrow \pi_2 = 2\pi_1 \qquad (1)$$
$$\pi_1 + \tfrac{1}{2}\pi_2 = \pi_2 \Rightarrow \pi_2 = 2\pi_1 \qquad (2)$$
$$\pi_1 + \pi_2 = 1 \qquad (3)$$
Substituting (1) in (3), $\pi_1 + 2\pi_1 = 1 \Rightarrow \pi_1 = \frac{1}{3}$ and hence $\pi_2 = \frac{2}{3}$.
P(he drives to work in the long run) $= \pi_2 = \frac{2}{3}$.
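Both answers above can be reproduced numerically; a small sketch with numpy, keeping the state order (T, C) used in the solution:

```python
import numpy as np

# Transition probability matrix, states ordered (Train, Car).
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

# Initial distribution from the die toss: drive iff a 6 appears.
p0 = np.array([5/6, 1/6])

# Distribution on day 3 = two steps from day 1.
p2 = p0 @ np.linalg.matrix_power(P, 2)
print(p2[0])            # P(train on day 3) = 11/24

# Long run distribution: solve pi P = pi together with pi_1 + pi_2 = 1.
A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi[1])            # P(drives in the long run) = 2/3
```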
2. A college student X has the following study habits. If he studies one night, he is 70% sure not to study the next night.
If he does not study one night, he is only 60% sure not to study the next night also. Find (i) the transition probability
matrix (ii) how often he studies in the long run.

Solution : Let S - Studying and N - Not Studying. If he studies one night, the next night he is 70% sure not to study: $P(S \to N) = 0.7$ and $P(S \to S) = 0.3$. If he does not study one night, he is 60% sure not to study the next night: $P(N \to N) = 0.6$ and $P(N \to S) = 0.4$.
(i) The tpm, with states in the order (S, N), is
$$P = \begin{pmatrix} 0.3 & 0.7 \\ 0.4 & 0.6 \end{pmatrix}.$$
(ii) For the limiting form or long run probability distribution, solve $\pi P = \pi$:
$$(\pi_1, \pi_2)\begin{pmatrix} 0.3 & 0.7 \\ 0.4 & 0.6 \end{pmatrix} = (\pi_1, \pi_2)$$
$$0.3\pi_1 + 0.4\pi_2 = \pi_1 \Rightarrow 0.4\pi_2 = 0.7\pi_1 \Rightarrow 4\pi_2 = 7\pi_1 \qquad (1)$$
$$0.7\pi_1 + 0.6\pi_2 = \pi_2 \Rightarrow 0.4\pi_2 = 0.7\pi_1 \Rightarrow 4\pi_2 = 7\pi_1 \qquad (2)$$
$$\pi_1 + \pi_2 = 1 \qquad (3)$$
Substituting (1) in (3), $\pi_1 + \frac{7}{4}\pi_1 = 1 \Rightarrow \pi_1 = \frac{4}{11}$ and hence $\pi_2 = \frac{7}{11}$.
P(he studies in the long run) $= \pi_1 = \frac{4}{11}$.
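The long run fraction $\frac{4}{11} \approx 0.364$ can also be approached by simulating the chain night by night; a small sketch (the run length, seed and starting state are arbitrary choices, and the long run fraction does not depend on the starting state):

```python
import random

random.seed(1)

# P(study next night) from each state; states 0 = Study, 1 = Not study.
p_study = {0: 0.3, 1: 0.4}

state, study_nights, n = 0, 0, 200_000
for _ in range(n):
    # Move one step: study with the probability given by the current state.
    state = 0 if random.random() < p_study[state] else 1
    study_nights += (state == 0)

print(study_nights / n)   # close to 4/11 = 0.3636...
```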
3
Dr. P. Godhandaraman, Assistant Professor, Dept of Maths, SRM University, Kattankulathur, Mobile: 9941740168, Email: godhanda@gmail.com
3. Suppose that the probability of a dry day following a rainy day is $\frac{1}{3}$ and that the probability of a rainy day following a
dry day is $\frac{1}{2}$. Given that May 1 is a dry day, find the probability that (i) May 3 is also a dry day (ii) May 5 is also a dry day.

Solution : Let D - Dry day and R - Rainy day. The tpm, with states in the order (D, R), is
$$P = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{3} & \frac{2}{3} \end{pmatrix}.$$
The initial state probability distribution is the distribution on May 1. Since May 1 is a dry day, $p^{(0)} = (1, 0)$.
(i) May 3 is two steps after May 1:
$$p^{(2)} = p^{(0)} P^2 = (1, 0)\begin{pmatrix} \frac{5}{12} & \frac{7}{12} \\ \frac{7}{18} & \frac{11}{18} \end{pmatrix} = \left(\tfrac{5}{12}, \tfrac{7}{12}\right),$$
so P(May 3 is a dry day) $= \frac{5}{12}$.
(ii) May 5 is four steps after May 1:
$$p^{(4)} = p^{(2)} P^2 = \left(\tfrac{5}{12}, \tfrac{7}{12}\right)\begin{pmatrix} \frac{5}{12} & \frac{7}{12} \\ \frac{7}{18} & \frac{11}{18} \end{pmatrix} = \left(\tfrac{173}{432}, \tfrac{259}{432}\right),$$
so P(May 5 is a dry day) $= \frac{173}{432}$.
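The two matrix powers above are easy to get wrong by hand; a sketch that repeats the computation in exact rational arithmetic with the standard library:

```python
from fractions import Fraction as F

# tpm with states ordered (Dry, Rainy); Fraction keeps arithmetic exact.
P = [[F(1, 2), F(1, 2)],
     [F(1, 3), F(2, 3)]]

def step(dist, P):
    """One step of the chain: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

dist = [F(1), F(0)]          # May 1 is a dry day
for _ in range(2):           # two steps -> May 3
    dist = step(dist, P)
print(dist[0])               # 5/12

for _ in range(2):           # two more steps -> May 5
    dist = step(dist, P)
print(dist[0])               # 173/432
```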

4. A salesman's territory consists of 3 cities A, B and C. He never sells in the same city on successive days. If he sells in
city A, then the next day, he sells in city B. However, if he sells in either B or C, the next day he is twice as likely to
sell in city A as in the other city. In the long run, how often does he sell in each of the cities?

Solution : The tpm of the given problem, with rows and columns in the order (A, B, C), is
$$P = \begin{pmatrix} 0 & 1 & 0 \\ \frac{2}{3} & 0 & \frac{1}{3} \\ \frac{2}{3} & \frac{1}{3} & 0 \end{pmatrix}.$$
For the limiting form or long run probability distribution, solve $\pi P = \pi$:
$$(\pi_1, \pi_2, \pi_3)\begin{pmatrix} 0 & 1 & 0 \\ \frac{2}{3} & 0 & \frac{1}{3} \\ \frac{2}{3} & \frac{1}{3} & 0 \end{pmatrix} = (\pi_1, \pi_2, \pi_3)$$
$$\tfrac{2}{3}\pi_2 + \tfrac{2}{3}\pi_3 = \pi_1 \Rightarrow 2\pi_2 + 2\pi_3 = 3\pi_1 \qquad (1)$$
$$\pi_1 + \tfrac{1}{3}\pi_3 = \pi_2 \Rightarrow 3\pi_1 + \pi_3 = 3\pi_2 \qquad (2)$$
$$\tfrac{1}{3}\pi_2 = \pi_3 \Rightarrow \pi_2 = 3\pi_3 \qquad (3)$$
$$\pi_1 + \pi_2 + \pi_3 = 1 \qquad (4)$$
Substituting (3) in (2), $3\pi_1 + \pi_3 = 9\pi_3 \Rightarrow 3\pi_1 = 8\pi_3 \Rightarrow \pi_1 = \tfrac{8}{3}\pi_3 \qquad (5)$
Substituting (3) and (5) in (4), $\tfrac{8}{3}\pi_3 + 3\pi_3 + \pi_3 = 1 \Rightarrow \tfrac{20}{3}\pi_3 = 1 \Rightarrow \pi_3 = \tfrac{3}{20} \qquad (6)$
Substituting (6) in (3) and (5), $\pi_2 = \frac{9}{20}$ and $\pi_1 = \frac{8}{20}$. Hence $\pi_1 = 0.40$, $\pi_2 = 0.45$, $\pi_3 = 0.15$.
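As a cross-check of $\pi = (0.40, 0.45, 0.15)$, the stationary distribution can be extracted as the left eigenvector of the tpm for eigenvalue 1; a sketch with numpy:

```python
import numpy as np

# tpm of the salesman chain, states ordered (A, B, C).
P = np.array([[0.0, 1.0, 0.0],
              [2/3, 0.0, 1/3],
              [2/3, 1/3, 0.0]])

# Left eigenvectors of P are right eigenvectors of P transpose.
vals, vecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(vals - 1.0))   # index of the eigenvalue closest to 1
pi = np.real(vecs[:, k])
pi = pi / pi.sum()                  # normalise to a probability distribution
print(pi)                           # approx. [0.40, 0.45, 0.15]
```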

Thus, in the long run, he sells 40% of the time in city A, 45% of the time in city B and 15% of the time in city C.

5. A fair die is tossed repeatedly. If $X_n$ denotes the maximum of the numbers occurring in the first $n$ tosses, find the

transition probability matrix $P$ of the Markov chain $\{X_n\}$. Find also $P^{(2)}$ and $P(X_2 = 6)$.

Solution : The state space is $\{1, 2, 3, 4, 5, 6\}$. $X_n$ = maximum of the numbers occurring in the first $n$ tosses, so
$X_{n+1} = \max\{X_n,\ \text{number appearing in the } (n+1)^{th} \text{ toss}\}$.
Now, in the $(n+1)^{th}$ toss, each of the numbers 1, 2, 3, 4, 5, 6 occurs with probability $\frac{1}{6}$.
Let us see how the first row of the tpm is filled. If $X_n = 1$, then $X_{n+1} = j$ if $j$ appears in the $(n+1)^{th}$ toss, for each
$j = 1, 2, 3, 4, 5, 6$, so every entry of the first row is $\frac{1}{6}$.
Let us see how the second row of the tpm is filled. If $X_n = 2$, then $X_{n+1} = 2$ if 1 or 2 appears in the $(n+1)^{th}$ toss, and
$X_{n+1} = j$ if $j$ appears, for $j = 3, 4, 5, 6$. Hence $P(X_{n+1} = 2 / X_n = 2) = \frac{2}{6}$ and $P(X_{n+1} = j / X_n = 2) = \frac{1}{6}$, $j = 3, 4, 5, 6$.
Proceeding similarly, the tpm is
$$P = \begin{pmatrix} \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} \\ 0 & \frac{2}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} \\ 0 & 0 & \frac{3}{6} & \frac{1}{6} & \frac{1}{6} & \frac{1}{6} \\ 0 & 0 & 0 & \frac{4}{6} & \frac{1}{6} & \frac{1}{6} \\ 0 & 0 & 0 & 0 & \frac{5}{6} & \frac{1}{6} \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}
\qquad \text{and} \qquad
P^{(2)} = P^2 = \frac{1}{36}\begin{pmatrix} 1 & 3 & 5 & 7 & 9 & 11 \\ 0 & 4 & 5 & 7 & 9 & 11 \\ 0 & 0 & 9 & 7 & 9 & 11 \\ 0 & 0 & 0 & 16 & 9 & 11 \\ 0 & 0 & 0 & 0 & 25 & 11 \\ 0 & 0 & 0 & 0 & 0 & 36 \end{pmatrix}.$$
The initial probability distribution (the distribution of $X_1$, after the first toss) is $p^{(1)} = \left(\frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}, \frac{1}{6}\right)$. Hence
$$P(X_2 = 6) = \sum_{i=1}^{6} P(X_2 = 6 / X_1 = i)\, P(X_1 = i) = \frac{1}{6}\left(\frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + \frac{1}{6} + 1\right) = \frac{11}{36}.$$
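The tpm of this chain has enough structure to be built programmatically rather than typed out; a sketch in exact arithmetic that also recovers $P(X_2 = 6) = \frac{11}{36}$:

```python
from fractions import Fraction as F

# Running-maximum-of-die-tosses chain. Row i (1-based): the maximum stays
# at i with probability i/6 and jumps to each j > i with probability 1/6.
P = [[F(0)] * 6 for _ in range(6)]
for i in range(6):
    P[i][i] = F(i + 1, 6)
    for j in range(i + 1, 6):
        P[i][j] = F(1, 6)

# X1 is uniform on {1,...,6}; one more step gives the distribution of X2.
p1 = [F(1, 6)] * 6
p2 = [sum(p1[i] * P[i][j] for i in range(6)) for j in range(6)]
print(p2[5])    # P(X2 = 6) = 11/36
```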
6. The transition probability matrix of a Markov chain $\{X_n,\ n = 0, 1, 2, \ldots\}$ having 3 states 1, 2, 3 is
$$P = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}$$
and the initial distribution is $p^{(0)} = (0.7, 0.2, 0.1)$.
Find (i) $P(X_2 = 3, X_1 = 3, X_0 = 2)$ (ii) $P(X_2 = 3)$ (iii) $P(X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2)$.

Solution :
(i) By the Markov property,
$$P(X_2 = 3, X_1 = 3, X_0 = 2) = P(X_2 = 3 / X_1 = 3)\, P(X_1 = 3 / X_0 = 2)\, P(X_0 = 2) = (0.3)(0.2)(0.2) = 0.012.$$
(ii) We need the two step tpm:
$$P^2 = \begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix}\begin{pmatrix} 0.1 & 0.5 & 0.4 \\ 0.6 & 0.2 & 0.2 \\ 0.3 & 0.4 & 0.3 \end{pmatrix} = \begin{pmatrix} 0.43 & 0.31 & 0.26 \\ 0.24 & 0.42 & 0.34 \\ 0.36 & 0.35 & 0.29 \end{pmatrix}$$
$$P(X_2 = 3) = \sum_{i=1}^{3} P(X_2 = 3 / X_0 = i)\, P(X_0 = i) = (0.26)(0.7) + (0.34)(0.2) + (0.29)(0.1) = 0.279.$$
(iii) Extending (i) by one more transition,
$$P(X_3 = 2, X_2 = 3, X_1 = 3, X_0 = 2) = P(X_3 = 2 / X_2 = 3)\, P(X_2 = 3, X_1 = 3, X_0 = 2) = (0.4)(0.012) = 0.0048.$$
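All three answers follow the same two patterns, a path probability and a marginal via $P^2$; a compact numpy sketch:

```python
import numpy as np

P = np.array([[0.1, 0.5, 0.4],
              [0.6, 0.2, 0.2],
              [0.3, 0.4, 0.3]])
p0 = np.array([0.7, 0.2, 0.1])

# States are 1, 2, 3 in the problem; subtract 1 for 0-based indexing.
# (i) P(X2=3, X1=3, X0=2) = P(X0=2) * p(2->3) * p(3->3)
ans_i = p0[1] * P[1, 2] * P[2, 2]

# (ii) P(X2=3): push the initial distribution through two steps.
ans_ii = (p0 @ np.linalg.matrix_power(P, 2))[2]

# (iii) extend the path in (i) by one more transition 3 -> 2.
ans_iii = ans_i * P[2, 1]

print(ans_i, ans_ii, ans_iii)   # 0.012, 0.279, 0.0048
```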

7. The transition probability matrix of a Markov chain $\{X_n,\ n = 0, 1, 2, \ldots\}$ having 3 states 1, 2, 3 is
$$P = \begin{pmatrix} \frac{3}{4} & \frac{1}{4} & 0 \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ 0 & \frac{3}{4} & \frac{1}{4} \end{pmatrix}$$
and the initial distribution is $p^{(0)} = \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3}\right)$.
Find (i) $P(X_3 = 2 / X_2 = 1)$ (ii) $P(X_2 = 2)$ (iii) $P(X_2 = 2, X_1 = 1, X_0 = 2)$ (iv) $P(X_3 = 1, X_2 = 2, X_1 = 1, X_0 = 2)$.

Solution :
(i) Since the chain is homogeneous, $P(X_3 = 2 / X_2 = 1) = p_{12} = \frac{1}{4}$.
(ii) First, the distribution of $X_1$ is
$$p^{(1)} = p^{(0)} P = \left(\tfrac{1}{3}, \tfrac{1}{3}, \tfrac{1}{3}\right)\begin{pmatrix} \frac{3}{4} & \frac{1}{4} & 0 \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \\ 0 & \frac{3}{4} & \frac{1}{4} \end{pmatrix} = \left(\tfrac{1}{3}, \tfrac{1}{2}, \tfrac{1}{6}\right).$$
Hence
$$P(X_2 = 2) = \sum_{i=1}^{3} P(X_2 = 2 / X_1 = i)\, P(X_1 = i) = \left(\tfrac{1}{4}\right)\left(\tfrac{1}{3}\right) + \left(\tfrac{1}{2}\right)\left(\tfrac{1}{2}\right) + \left(\tfrac{3}{4}\right)\left(\tfrac{1}{6}\right) = \tfrac{1}{12} + \tfrac{1}{4} + \tfrac{1}{8} = \tfrac{11}{24}.$$
(iii) By the Markov property,
$$P(X_2 = 2, X_1 = 1, X_0 = 2) = P(X_2 = 2 / X_1 = 1)\, P(X_1 = 1 / X_0 = 2)\, P(X_0 = 2) = \left(\tfrac{1}{4}\right)\left(\tfrac{1}{4}\right)\left(\tfrac{1}{3}\right) = \tfrac{1}{48} \approx 0.0208.$$
(iv) Extending (iii) by one more transition,
$$P(X_3 = 1, X_2 = 2, X_1 = 1, X_0 = 2) = P(X_3 = 1 / X_2 = 2)\, P(X_2 = 2, X_1 = 1, X_0 = 2) = \left(\tfrac{1}{4}\right)\left(\tfrac{1}{48}\right) = \tfrac{1}{192} \approx 0.0052.$$
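The fractions in this problem are fiddly by hand; a sketch that repeats the computation exactly with the standard library:

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4), F(0)],
     [F(1, 4), F(1, 2), F(1, 4)],
     [F(0),    F(3, 4), F(1, 4)]]
p0 = [F(1, 3)] * 3

# One step: distribution of X1; two steps: distribution of X2.
p1 = [sum(p0[i] * P[i][j] for i in range(3)) for j in range(3)]
p2 = [sum(p1[i] * P[i][j] for i in range(3)) for j in range(3)]
print(p2[1])                          # P(X2 = 2) = 11/24

# Path probabilities (states are 1-based in the problem, 0-based here).
path_iii = p0[1] * P[1][0] * P[0][1]  # X0=2 -> X1=1 -> X2=2
path_iv = path_iii * P[1][0]          # ... then X3=1
print(path_iii, path_iv)              # 1/48 and 1/192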
8. Find the nature of the states of the Markov chain with the transition probability matrix
$$P = \begin{pmatrix} 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \end{pmatrix}.$$

Solution :
$$P^2 = \begin{pmatrix} 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{2} & 0 & \frac{1}{2} \\ 0 & 1 & 0 \\ \frac{1}{2} & 0 & \frac{1}{2} \end{pmatrix}, \qquad P^3 = P^2 \cdot P = P, \qquad P^4 = P^3 \cdot P = P^2, \ldots$$
Since $p_{12}^{(1)} > 0$, $p_{21}^{(1)} > 0$, $p_{23}^{(1)} > 0$, $p_{32}^{(1)} > 0$, $p_{11}^{(2)} > 0$, $p_{13}^{(2)} > 0$, $p_{31}^{(2)} > 0$, $p_{33}^{(2)} > 0$ and $p_{22}^{(2)} > 0$,
every state can be reached from every other state, so the chain is irreducible. Also, since there are only 3 states, the chain is finite. That is, the chain is finite and irreducible, and hence all the states are non null persistent.
State 1 : $p_{11}^{(2)} > 0, p_{11}^{(4)} > 0, p_{11}^{(6)} > 0, \ldots$, so the period of state 1 $= \text{GCD}\{2, 4, 6, \ldots\} = 2$.
State 2 : $p_{22}^{(2)} > 0, p_{22}^{(4)} > 0, p_{22}^{(6)} > 0, \ldots$, so the period of state 2 $= \text{GCD}\{2, 4, 6, \ldots\} = 2$.
State 3 : $p_{33}^{(2)} > 0, p_{33}^{(4)} > 0, p_{33}^{(6)} > 0, \ldots$, so the period of state 3 $= \text{GCD}\{2, 4, 6, \ldots\} = 2$.
All the states 1, 2, 3 have period 2, that is, they are periodic. All the states are non null persistent, but the states are
periodic. Hence the states are not ergodic.
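The period computation above follows its definition, the GCD of the step counts with a positive return probability, and that definition can be checked mechanically (truncating the GCD to the first few powers of $P$, which is enough for a small chain like this one):

```python
from math import gcd
import numpy as np

P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

def period(P, state, max_n=20):
    """GCD of all n <= max_n with a positive n-step return probability."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[state, state] > 0:
            d = gcd(d, n)
    return d

print([period(P, s) for s in range(3)])   # [2, 2, 2] -> all states periodic
```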

9. Three boys A, B, C are throwing a ball to each other. A always throws the ball to B and B always throws to C, but C
is just as likely to throw the ball to B as to A. Show that the process is Markovian. Find the transition matrix and
classify the states.
Solution : The state at any stage is the boy holding the ball. Since the next throw depends only on who currently holds the ball and not on the earlier throws, the process is Markovian. The tpm, with states in the order (A, B, C), is
$$P = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ \frac{1}{2} & \frac{1}{2} & 0 \end{pmatrix}.$$
$$P^2 = \begin{pmatrix} 0 & 0 & 1 \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \end{pmatrix}, \qquad P^3 = \begin{pmatrix} \frac{1}{2} & \frac{1}{2} & 0 \\ 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \end{pmatrix}, \qquad P^4 = \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{4} & \frac{1}{2} \\ \frac{1}{4} & \frac{1}{2} & \frac{1}{4} \end{pmatrix}$$
Since $p_{AB}^{(1)} > 0$, $p_{BC}^{(1)} > 0$, $p_{CA}^{(1)} > 0$, $p_{CB}^{(1)} > 0$, $p_{AC}^{(2)} > 0$, $p_{BA}^{(2)} > 0$, $p_{BB}^{(2)} > 0$, $p_{CC}^{(2)} > 0$ and $p_{AA}^{(3)} > 0$,
every state can be reached from every other state, so the chain is irreducible. Also, since there are only 3 states, the chain is finite. That is, the chain is finite and irreducible, and hence all the states are non null persistent.
State A : $p_{AA}^{(3)} > 0, p_{AA}^{(5)} > 0, \ldots$, so the period of A $= \text{GCD}\{3, 5, \ldots\} = 1$.
State B : $p_{BB}^{(2)} > 0, p_{BB}^{(3)} > 0, p_{BB}^{(4)} > 0, \ldots$, so the period of B $= \text{GCD}\{2, 3, 4, \ldots\} = 1$.
State C : $p_{CC}^{(2)} > 0, p_{CC}^{(3)} > 0, p_{CC}^{(4)} > 0, \ldots$, so the period of C $= \text{GCD}\{2, 3, 4, \ldots\} = 1$.
All the states A, B, C have period 1, that is, they are aperiodic.
Since all the states are aperiodic and non null persistent, they are ergodic.
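One consequence of ergodicity is that $P^n$ converges to a matrix whose identical rows give the long run fractions of time the ball spends with each boy. A quick numerical illustration with numpy (the limiting vector here is computed, not stated in the original problem):

```python
import numpy as np

# tpm of the ball-throwing chain, states ordered (A, B, C).
P = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.5, 0.5, 0.0]])

# For an ergodic chain, P^n converges to a rank-one matrix whose rows
# are all equal to the stationary distribution.
Pn = np.linalg.matrix_power(P, 100)
print(Pn[0])                     # approx. [0.2, 0.4, 0.4]

# Every row agrees (numerically) with the same limiting distribution.
assert np.allclose(Pn, Pn[0])
```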

## All the Best

Regards!
Dr. P. GodhandaRaman
Assistant Professor
Department of Mathematics
SRM University, Kattankulathur – 603 203
Email : godhanda@gmail.com, Mobile : 9941740168
