
Markov Process: Properties, Analysis and

Applications

AJAY KUMAR

Assistant Professor
Atal Bihari Vajpayee-
Indian Institute of Information Technology & Management, Gwalior
Morena Link Road, Gwalior-474010
MP, India

March 19, 2010

1 / 113
Stochastic Process: A stochastic process is a collection of
random variables, {X (t) : t ∈ T }, where X (t) is a random variable
for each t ∈ T .

I The set T is called the index set of the process and t ∈ T is
called an index or parameter.
I The values taken by X (t) are called the states of the
stochastic process, corresponding to the index t.
I If T is countable, the stochastic process is said to be a discrete
time process
I If T is an interval, the stochastic process is called a
continuous time process

2 / 113
I Chain: If the state space S of a stochastic process
{X (t) : t ∈ T } is countable, the process is said to be a chain.
I Based on the countable or continuous nature of the index set
and the state space, we have the following types of stochastic
processes:
I Discrete time, discrete state space processes
I Discrete time, continuous state space processes
I Continuous time, discrete state space processes
I Continuous time, continuous state space processes

3 / 113
Example: Consider a single machine prone to failures and let us
examine its states at discrete points of time, t0 , t1 , t2 , · · · . Let us
define its state as 0 if the machine is up, and 1 if the machine is
down. The machine will keep changing its state with time. Let Xi
denote the state of the machine at time ti , i = 0, 1, 2, · · · ; then
{Xi : i ∈ N} is a stochastic process.

I Here Xi is a discrete random variable with range {0, 1}, so the
state space is S = {0, 1}.
I Index set T = {t0 , t1 , t2 , · · · }
I The stochastic process defined above is a discrete time chain
I If the state of the machine is examined at any instant of time,
then the index set will be the interval [0, ∞) and the stochastic
process {X (t) : t ≥ 0} will be a continuous time chain.

4 / 113
Markov Process: A Markov process (or chain) is a stochastic
process with the property that the probability of transition
from a given state to any future state depends only on the present
state and not on the manner in which it was reached.
Equivalently, a stochastic process is called a Markov process if
the occurrence of the future state depends on the immediately
preceding state and only on it. Thus, if t0 , t1 , t2 , · · · represent
points on the time scale, then the family of random variables {X (tn )}
is said to be a Markov process if it satisfies the following property:

P {X (tn ) = xn |X (tn−1 ) = xn−1 , X (tn−2 ) = xn−2 , · · · , X (t0 ) = x0 }

= P {X (tn ) = xn |X (tn−1 ) = xn−1 } , for all states x0 , x1 , · · · , xn

5 / 113
Transition Probability: The probability of moving from one state
to another is called a transition probability. Mathematically,

pij = P{X (tn ) = j|X (tn−1 ) = i}

is the transition probability from state i to state j.


Transition Probability Matrix (TPM): The transition
probabilities can be arranged in a matrix form. Such a matrix is
said to be one step-transition probability matrix. It is denoted by P
and is given by:

P = [ p11  p12  · · ·  p1m ]
    [ p21  p22  · · ·  p2m ]
    [  ·    ·   · · ·   ·  ]
    [ pm1  pm2  · · ·  pmm ]

6 / 113
Properties of TPM:

I P is a square matrix
I 0 ≤ pij ≤ 1, ∀ i, j
I The sum along each row is unity, i.e., Σ_{j=1}^{m} pij = 1 for each i

7 / 113
Discrete Time Markov Chain (DTMC): A discrete time Markov
chain is a stochastic process, {Xn : n ∈ N}, with countable state
space S, such that for all n ∈ N, and for all
xi ∈ S, i = 0, 1, 2, · · · , n:

P {X (tn ) = xn |X (tn−1 ) = xn−1 , X (tn−2 ) = xn−2 , · · · , X (t0 ) = x0 }


= P {X (tn ) = xn |X (tn−1 ) = xn−1 }

This implies that, given the current state of the system, the
future is independent of the past.

8 / 113
Let P[sj (0)] be the probability that the system is in state sj at
time t = t0 , and let

pij = P{X (tn ) = j|X (tn−1 ) = i}

be the transition probability from state si to sj . Then the stochastic
matrix,

P = [ p11  p12  · · ·  p1m ]
    [ p21  p22  · · ·  p2m ]
    [  ·    ·   · · ·   ·  ]
    [ pm1  pm2  · · ·  pmm ]

called the TPM, along with the initial probabilities P[sj (0)],
completely defines the DTMC.

9 / 113
Question: What is the probability that the system will occupy
state sj after n transitions, if its state at n = 0 is known?
Let P[sj (n + 1)] = the probability that the system is in state sj at
time t = tn+1 . Then:

P[sj (n + 1)] = P[s1 (n)]p1j + P[s2 (n)]p2j + · · · + P[sm (n)]pmj

Putting j = 1, 2, · · · , m, we get:

[P[s1 (n + 1)], P[s2 (n + 1)], · · · , P[sm (n + 1)]]

    = [P[s1 (n)], P[s2 (n)], · · · , P[sm (n)]] [ p11  p12  · · ·  p1m ]
                                                [ p21  p22  · · ·  p2m ]
                                                [  ·    ·   · · ·   ·  ]
                                                [ pm1  pm2  · · ·  pmm ]

10 / 113
The above system can be written as:

P(n + 1) = P(n) · P

where P(n + 1) is a row vector. Now,

P(1) = P(0) · P
P(2) = P(1) · P = P(0) · P²
P(3) = P(2) · P = P(0) · P³
· · ·
P(k) = P(k − 1) · P = P(0) · Pᵏ

11 / 113
Question: Gwalior city is divided into three zones A, B and C.
From the records of drivers, the following information is available.
Of the passengers picked up in zone A (B) (C), 40% (30%) (30%)
go to a destination in zone A, 40% (40%) (50%) are taken to a
destination in zone B, and 20% (30%) (20%) go to a destination in
zone C. Suppose at the beginning of the day 30% of the taxis are
in zone A, 30% are in zone B and 40% are in zone C. What is the
distribution of taxis after all have had one ride, and after two rides?
Also determine the long run distribution of taxis.
Answer:

P(1) = (0.33, 0.44, 0.23)


P(2) = (0.333, 0.423, 0.244)
π = (11/33, 14/33, 8/33)

12 / 113
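The recursion P(n + 1) = P(n) · P can be checked numerically. Below is a minimal pure-Python sketch (no external libraries) applied to the taxi example; the matrix and the initial vector are taken directly from the question above:

```python
def step(dist, P):
    """One DTMC transition: multiply a row vector by the TPM."""
    m = len(P)
    return [sum(dist[i] * P[i][j] for i in range(m)) for j in range(m)]

# Rows = zone where a taxi picks up (A, B, C); columns = destination zone.
P = [[0.4, 0.4, 0.2],
     [0.3, 0.4, 0.3],
     [0.3, 0.5, 0.2]]

p0 = [0.3, 0.3, 0.4]   # initial distribution of taxis
p1 = step(p0, P)       # after one ride  -> (0.33, 0.44, 0.23)
p2 = step(p1, P)       # after two rides -> (0.333, 0.423, 0.244)

# Long-run distribution: keep iterating until the vector stops changing.
pi = p0
for _ in range(200):
    pi = step(pi, P)
# pi is close to (11/33, 14/33, 8/33)
```

Two hundred iterations is far more than needed here; the subdominant eigenvalues of this TPM are small, so the iteration settles within a handful of steps.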
Limiting State Probabilities: As n → ∞, P(n) approaches a
constant vector. In this situation the limiting state probabilities
exist and are independent of the initial conditions. The process in
this case is said to be an ergodic process.
Let lim_{n→∞} P(n) = π; then:

lim_{n→∞} P(n + 1) = lim_{n→∞} P(n) · P
π = π · P

where π is a 1 × m vector.

13 / 113
I Transient State: If the system can leave the state but never
return to it, e.g., s1
I Trapping or Absorbing State: If the system can enter the
state but cannot leave it, e.g., s2
I Recurrent Chain: A recurrent chain is a collection of all
trapping states.

[Transition diagram: s1 has a self-loop with probability 0.25 and a
transition to s2 with probability 0.75; s2 is absorbing]

14 / 113
Continuous Time Markov Chain (CTMC): A discrete state
space, continuous time process {X (t) : t ≥ 0} with state space S
is called a CTMC, if the following Markov or memoryless property is
satisfied, for all s ≥ 0, u ≥ 0, t ≥ s and i, j, x(u) ∈ S:

P {X (t) = j|X (s) = i, X (u) = x(u), for 0 ≤ u < s}

= P {X (t) = j|X (s) = i}

Transition Rate: aij is the transition rate of the process from state i
to state j, i ≠ j.
Transition Rate Matrix: A = [aij ] is the transition rate matrix.
Transition Probability: Given that the present state is si , the
conditional probability that si → sj during a small time interval Δt is
aij Δt. The unit of aij is sec⁻¹, so 1/aij = expected time between
transitions.

15 / 113
Question: Let P[si (t)] = probability that the system occupies state
i at time t. Find out the probability of being in state j at time
t + Δt.
Note that:

P[sj (t + Δt)] = P[sj (t)] P(no transition from j)
               + P[s1 (t)]a1j Δt + P[s2 (t)]a2j Δt
               + · · · + P[sj−1 (t)]aj−1,j Δt + P[sj+1 (t)]aj+1,j Δt
               + · · · + P[sm (t)]amj Δt

Next, the probability of making no transition from j = 1 − the
probability of making a transition from j:

= 1 − Σ_{i≠j} aji Δt

16 / 113
Substituting,

P[sj (t + Δt)] = P[sj (t)] [1 − Σ_{i≠j} aji Δt] + Σ_{i≠j} P[si (t)] aij Δt,
j = 1, 2, 3, · · · , m

Now define the diagonal elements of the transition rate matrix A by:

ajj = − Σ_{i≠j} aji

Then:

P[sj (t + Δt)] = P[sj (t)](1 + ajj Δt) + Σ_{i≠j} P[si (t)] aij Δt

or

P[sj (t + Δt)] − P[sj (t)] = Σ_{i=1}^{m} P[si (t)] aij Δt
17 / 113
Dividing by Δt and taking the limit Δt → 0 leads to:

(d/dt) P[sj (t)] = Σ_{i=1}^{m} P[si (t)] aij ;  j = 1, 2, · · · , m

Next, define:

P(t) = [P[s1 (t)], P[s2 (t)], · · · , P[sm (t)]]

which results in:

(d/dt) P(t) = P(t) · A

which has the solution:

P(t) = P(0) e^{At}

18 / 113
Steady State Solution: As time t → ∞, P(t) becomes constant,
and so from:

(d/dt) P(t) = P(t) · A

we have:

0 = P(∞) · A = π · A

⟹ rate of leaving each state = rate of entering it

19 / 113
Example: Consider a 2-component system, which has following
four states:

State Component-1 Component-2


1 operating operating
2 failed operating
3 operating failed
4 failed failed

Assumptions:

I Each component will be in one of two states, operating or
failed
I The failures of the components are independent
I Initially the system is in the working state (state 1)
I The failure behavior is exponential

20 / 113
I If the two components are in parallel configuration, only state
4 corresponds to system failure, so reliability is given by:

Rp (t) = P1 (t) + P2 (t) + P3 (t)

I If the two components are in series configuration, states 2, 3,
and 4 correspond to system failure, so reliability is given by:

Rs (t) = P1 (t)

I The system must be in one of the given four states, so

P1 (t) + P2 (t) + P3 (t) + P4 (t) = 1

I The failure rates of the components are λ1 and λ2 , respectively.

21 / 113
The transition diagram is given below (1 → 2 with rate λ1,
1 → 3 with rate λ2, 2 → 4 with rate λ2, 3 → 4 with rate λ1), and
the corresponding equations are:

dP1 (t)/dt + (λ1 + λ2 )P1 (t) = 0
dP2 (t)/dt + λ2 P2 (t) = λ1 P1 (t)
dP3 (t)/dt + λ1 P3 (t) = λ2 P1 (t)
dP4 (t)/dt = λ1 P3 (t) + λ2 P2 (t)
22 / 113
The probabilities are given by:

P1 (t) = e^{−(λ1+λ2)t}
P2 (t) = e^{−λ2 t} − e^{−(λ1+λ2)t}
P3 (t) = e^{−λ1 t} − e^{−(λ1+λ2)t}
P4 (t) = 1 − P1 (t) − P2 (t) − P3 (t)

I Reliability for the series system:

Rs (t) = P1 (t) = e^{−(λ1+λ2)t}

I Reliability for the parallel system:

Rp (t) = P1 (t) + P2 (t) + P3 (t)
       = e^{−λ1 t} + e^{−λ2 t} − e^{−(λ1+λ2)t}

23 / 113
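The closed-form probabilities above are easy to sanity-check numerically. A small Python sketch (the values of λ1, λ2 and t below are illustrative, not from the slides):

```python
import math

lam1, lam2, t = 0.01, 0.02, 50.0   # illustrative failure rates (per hour) and time

# State probabilities from the slide.
P1 = math.exp(-(lam1 + lam2) * t)
P2 = math.exp(-lam2 * t) - math.exp(-(lam1 + lam2) * t)
P3 = math.exp(-lam1 * t) - math.exp(-(lam1 + lam2) * t)
P4 = 1 - P1 - P2 - P3

Rs = P1               # series: only state 1 is operational
Rp = P1 + P2 + P3     # parallel: states 1, 2, 3 are operational

# For two independent components, parallel reliability also equals
# R1 + R2 - R1*R2, which provides an independent cross-check.
R1, R2 = math.exp(-lam1 * t), math.exp(-lam2 * t)
```

The identity Rp = R1 + R2 − R1·R2 follows by expanding the exponentials, so the two routes to the answer must agree.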
Load-Sharing System: A system has two components, working in
parallel configuration. There is a dependency between the
components. If one component fails, the failure rate of the other
component increases as a result of the additional load placed on it.
Assume that λ1⁺ and λ2⁺ are the increased failure rates of
component-1 and component-2, respectively.

24 / 113
The transition diagram is shown below:

dP1 (t)/dt + (λ1 + λ2 )P1 (t) = 0
dP2 (t)/dt + λ2⁺ P2 (t) = λ1 P1 (t)
dP3 (t)/dt + λ1⁺ P3 (t) = λ2 P1 (t)

25 / 113
The solution of the system is given below:

P1 (t) = e^{−(λ1+λ2)t}
P2 (t) = [λ1 /(λ1 + λ2 − λ2⁺)] [e^{−λ2⁺ t} − e^{−(λ1+λ2)t}]
P3 (t) = [λ2 /(λ1 + λ2 − λ1⁺)] [e^{−λ1⁺ t} − e^{−(λ1+λ2)t}]

The reliability is given by:

R(t) = P1 (t) + P2 (t) + P3 (t)

MTTF can be calculated as:

MTTF = ∫₀^∞ R(t) dt

26 / 113
Standby Systems: Two components, A & B, are working in
standby redundancy mode, i.e., if A fails, B starts functioning.
Such systems are said to be standby systems.
The failure rate of the standby unit will depend on the state of the
primary (on-line) unit. We may consider:

I The standby unit will have no failure or a reduced failure rate,
while in its standby mode
I While the standby unit is active, two cases arise:
I it may experience the same failure rate as the online unit, if
they are identical
I it may experience a different failure rate, if they are not identical

The problem is to model the system and to find out its:


(a) reliability, and (b) MTTF

27 / 113
The transition diagram is shown below:

dP1 (t)/dt + (λ1 + λ2⁻)P1 (t) = 0
dP2 (t)/dt + λ2 P2 (t) = λ1 P1 (t)
dP3 (t)/dt + λ1 P3 (t) = λ2⁻ P1 (t)

28 / 113
The solution of the system is given below:

P1 (t) = e^{−(λ1+λ2⁻)t}
P2 (t) = [λ1 /(λ1 + λ2⁻ − λ2 )] [e^{−λ2 t} − e^{−(λ1+λ2⁻)t}]
P3 (t) = e^{−λ1 t} − e^{−(λ1+λ2⁻)t}

The reliability is given by:

R(t) = P1 (t) + P2 (t) + P3 (t)
     = e^{−λ1 t} + [λ1 /(λ1 + λ2⁻ − λ2 )] [e^{−λ2 t} − e^{−(λ1+λ2⁻)t}]

and

MTTF = 1/λ1 + λ1 /(λ2 (λ1 + λ2⁻))

29 / 113
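The MTTF formula above can be verified by numerically integrating R(t). A Python sketch, where the rates are illustrative values and `lam2m` stands for the standby-mode failure rate λ2⁻:

```python
import math

lam1, lam2, lam2m = 0.02, 0.03, 0.005   # illustrative rates; lam2m = standby rate

def R(t):
    """Reliability of the standby system: P1 + P2 + P3 from the slide."""
    e_all = math.exp(-(lam1 + lam2m) * t)
    P1 = e_all
    P2 = lam1 / (lam1 + lam2m - lam2) * (math.exp(-lam2 * t) - e_all)
    P3 = math.exp(-lam1 * t) - e_all
    return P1 + P2 + P3

# Trapezoidal integration over a long horizon approximates MTTF = integral of R.
h, T = 0.05, 2000.0
n = int(T / h)
mttf_num = h * (R(0) / 2 + sum(R(i * h) for i in range(1, n)) + R(T) / 2)

# Closed form from the slide.
mttf_formula = 1 / lam1 + lam1 / (lam2 * (lam1 + lam2m))
```

With these rates the closed form gives 50 + 26.67 ≈ 76.67 hours, and the trapezoidal estimate agrees to several decimal places; the horizon T is chosen long enough that the neglected tail of R(t) is negligible.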
Degraded Systems: Some systems may continue to operate in a
degraded state.
Define the states as fully operational (state 1), degraded (state 2)
and failed (state 3). The transition diagram in this case is shown
below:

30 / 113
The system of ODEs can be generated as:

dP1 (t)/dt + (λ1 + λ2 )P1 (t) = 0
dP2 (t)/dt + λ3 P2 (t) = λ2 P1 (t)

Solution:

P1 (t) = e^{−(λ1+λ2)t}
P2 (t) = [λ2 /(λ1 + λ2 − λ3 )] [e^{−λ3 t} − e^{−(λ1+λ2)t}]

and

MTTF = 1/(λ1 + λ2 ) + [λ2 /(λ1 + λ2 − λ3 )] [1/λ3 − 1/(λ1 + λ2 )]

31 / 113
Availability: Availability is the probability that a system or
component is performing its required function at a given point in
time when operated and maintained in a prescribed manner.
Mathematically,

Availability = uptime / (uptime + downtime)
             = MTBF / (MTBF + MTTR)

Note: Repairs are not considered while observing reliability, but


they are taken into account while calculating availability.

32 / 113
Calculate point availability and steady state availability for:

I A system having one component

I A system having two components, working in parallel

33 / 113
Example: Consider the CTMC, which has three states
0: set up, 1: Processing, and 2: Down
The transition diagram is given below:

[Transition diagram: 0 → 1 with rate s; 1 → 0 with rate p;
1 → 2 with rate f; 2 → 1 with rate r]

Find out the ODEs for the above transition diagram and find out:

I Transient/Instantaneous state probabilities


I Steady state probabilities

34 / 113
For the steady state, the values of π0 , π1 and π2 are given by:

π0 = pr/(pr + rs + fs),  π1 = rs/(pr + rs + fs),  π2 = fs/(pr + rs + fs)

Moreover, if s = 20 per hour, p = 4 per hour, f = 0.05 per hour,
and r = 1 per hour, then:

π0 = 0.16, π1 = 0.80, and π2 = 0.04

35 / 113
Example: Consider a system with two machines, M1 and M2 ,
working in parallel configuration. Machine M1 is fast and M2 is a
slow machine. Raw parts are always available and have to undergo
one operation each, which may be carried out either on M1 or on
M2 . Each machine immediately starts processing a new part after
finishing the previous one. Each machine can fail randomly, and
after repair it will resume processing.
The objective is to find CTMC for the above system, and to
analyze the system performance, with:

I no repair facility
I single repair facility
I two identical repair facilities

36 / 113
The system can be modeled using CTMC with four states given
by:

I 0: M1 and M2 both working


I 1: M1 working and M2 down
I 2: M1 down and M2 working
I 3: M1 and M2 both down

Solution:

I If there is no repair facility, state 3 will be an absorbing state,
and 0, 1, and 2 are transient states.

[Transition diagram: 0 → 1, 0 → 2, 1 → 3, 2 → 3, each with rate f]

37 / 113
I For one repair facility, assume that in state 3, where both the
machines are down, the repair facility will spend its time
equally between M1 and M2 .

[Transition diagram: 0 → 1, 0 → 2, 1 → 3, 2 → 3 with rate f;
1 → 0, 2 → 0 with rate r; 3 → 1, 3 → 2 with rate r/2]

The steady-state probabilities are given by:

π0 = r²/(r² + 2fr + 2f²)
π1 = π2 = fr/(r² + 2fr + 2f²)
π3 = 2f²/(r² + 2fr + 2f²)

38 / 113
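The closed-form steady-state probabilities above can be cross-checked against the global balance equations (rate out = rate in for every state). A Python sketch, with illustrative values of f and r:

```python
f, r = 0.1, 1.0    # illustrative failure and repair rates

# Closed-form steady-state probabilities from the slide.
den = r**2 + 2*f*r + 2*f**2
pi0 = r**2 / den
pi1 = f*r / den          # pi1 = pi2 by symmetry of the two machines
pi2 = pi1
pi3 = 2*f**2 / den

# Global balance residuals (should all be zero):
balance0 = 2*f*pi0 - (r*pi1 + r*pi2)           # state 0: out 2f, in r from 1 and 2
balance1 = (f + r)*pi1 - (f*pi0 + (r/2)*pi3)   # state 1: out f+r, in f from 0, r/2 from 3
balance3 = r*pi3 - (f*pi1 + f*pi2)             # state 3: out r/2 + r/2, in f from 1 and 2
```

All three residuals vanish identically, which confirms that the stated formulas solve π · A = 0 for this chain.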
I When there are two identical repair facilities, the CTMC
model will be similar to that in the last figure, except that the
labels on the directed arcs from state 3 to state 2 and from
state 3 to state 1 become r instead of r/2.

The steady-state probabilities are then:

π0 = r²/(r² + 2fr + f²)
π1 = π2 = fr/(r² + 2fr + f²)
π3 = f²/(r² + 2fr + f²)

39 / 113
Computational Techniques for CTMC
I Transient or Instantaneous Analysis
I Solution of system of linear ODEs
I Steady State Analysis
I Solution of π · A = 0

40 / 113
I Methods for Steady-State Analysis
I Power method
I SOR
I Gaussian elimination method
I Gauss-Seidel method
I Methods for Transient Analysis
I Fully symbolic method (Laplace transformation)
I Euler's method
I Runge-Kutta fourth order method

41 / 113
Methods for Steady State
Analysis

42 / 113
Power Method: The problem is to solve π · A = 0

I Step-1: Choose a ≥ max_i |aii |
I Step-2: The new system will be:

π = π (I + A/a)

Note that A* = I + A/a is the transition probability matrix of a DTMC.

I Do the iteration: π^(i) = π^(i−1) · A*
I Initial value: π^(0) = π(0)

43 / 113
Example: Consider the following transition rate matrix and solve
πA = 0:

A = [ −2λ  2λc    0    2λ(1−c) ]
    [   0   −λ   λc     λ(1−c) ]
    [   0    0    0       0    ]
    [   0    0    0       0    ]

Choosing a = 2λ, we get:

A* = [ 0    c     0      1−c    ]
     [ 0   1/2   c/2   (1−c)/2  ]
     [ 0    0     1       0     ]
     [ 0    0     0       1     ]

Now, take π^(0) = [1, 0, 0, 0] and find the rest.

44 / 113
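The power method above can be sketched in a few lines of Python. Rather than the symbolic matrix, the sketch applies it to the earlier setup/processing/down chain (s = 20, p = 4, f = 0.05, r = 1 per hour), whose steady state was stated to be (0.16, 0.80, 0.04):

```python
def power_method(A, pi0, iters=2000):
    """Solve pi*A = 0 by iterating pi <- pi*(I + A/a), with a >= max|a_ii|."""
    n = len(A)
    a = max(abs(A[i][i]) for i in range(n))
    Astar = [[(1.0 if i == j else 0.0) + A[i][j] / a for j in range(n)]
             for i in range(n)]
    pi = pi0[:]
    for _ in range(iters):
        pi = [sum(pi[i] * Astar[i][j] for i in range(n)) for j in range(n)]
    return pi

s, p, f, r = 20.0, 4.0, 0.05, 1.0
A = [[-s,      s,      0.0],     # 0: set up     -> 1 at rate s
     [ p, -(p + f),    f  ],     # 1: processing -> 0 at rate p, -> 2 at rate f
     [0.0,     r,     -r  ]]     # 2: down       -> 1 at rate r
pi = power_method(A, [1.0, 0.0, 0.0])
# pi approaches (0.16, 0.80, 0.04)
```

Since A* is row-stochastic, the iterate stays a probability vector throughout, and it converges to the left eigenvector of A* for eigenvalue 1, i.e., the steady state of the CTMC.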
SOR (Successive Over-Relaxation) Method: Let A be a
symmetric matrix. Then the linear system Ax = b can be solved
using the following steps:

I Start with an initial guess
I Iterate until some criteria for convergence are satisfied:

x^(k+1) = (D + ρL)⁻¹ [(1 − ρ)D − ρU] x^(k) + ρ(D + ρL)⁻¹ b

I where x^(k+1) is the solution vector at the (k + 1)th iteration,
L is the strictly lower triangular part of A, U is the strictly
upper triangular part, D is the diagonal part (A = L + D + U),
and ρ is the relaxation factor
I Gauss-Seidel is a special case of SOR with ρ = 1, i.e.,

x^(k+1) = −(D + L)⁻¹ U x^(k) + (D + L)⁻¹ b

45 / 113
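The matrix form above is usually implemented componentwise, updating each entry in place so that already-updated components are reused within a sweep. A Python sketch; the 3 × 3 system below is an illustrative diagonally dominant example, and `rho` is the relaxation factor:

```python
def sor(A, b, rho=1.0, iters=100):
    """Componentwise SOR sweeps; rho = 1 reduces to Gauss-Seidel."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            # x[j] for j < i already holds the new value (the (D + rho*L) part).
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (1 - rho) * x[i] + rho * (b[i] - s) / A[i][i]
    return x

# Illustrative symmetric, diagonally dominant system with solution (1, 1, 1).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = sor(A, b, rho=1.1)
```

For symmetric positive definite matrices, SOR converges for any relaxation factor between 0 and 2; with rho = 1 the same routine performs plain Gauss-Seidel.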
Gauss Elimination Method: To solve the system Ax = b

I form the augmented matrix [A|b]
I choose a_jk^(k) = max_{k≤i≤n} |a_ik^(k)| and interchange rows k
and j
I the elements a_ij^(k+1) are given by:

a_ij^(k+1) = a_ij^(k) − (a_ik^(k) / a_kk^(k)) a_kj^(k)
i = k + 1, k + 2, · · · , n
j = k + 1, k + 2, · · · , n, n + 1

where a_ij^(1) = a_ij
I in exact arithmetic, this method gives the exact solution

46 / 113
Example: Solve the system of equations:

[ 2 1 1 −2 ] [x1]   [ −10 ]
[ 4 0 2  1 ] [x2] = [   8 ]
[ 3 2 2  0 ] [x3]   [   7 ]
[ 1 3 2 −1 ] [x4]   [  −5 ]

using the Gauss elimination method.

[A|b] = [ 2 1 1 −2 | −10 ]
        [ 4 0 2  1 |   8 ]   R2 ↔ R1
        [ 3 2 2  0 |   7 ]
        [ 1 3 2 −1 |  −5 ]

      = [ 4 0 2  1 |   8 ]
        [ 2 1 1 −2 | −10 ]   R2 − (1/2)R1 , R3 − (3/4)R1 , R4 − (1/4)R1
        [ 3 2 2  0 |   7 ]
        [ 1 3 2 −1 |  −5 ]

47 / 113
 
      = [ 4 0   2     1  |   8 ]
        [ 0 1   0  −5/2  | −14 ]   R4 ↔ R2
        [ 0 2  1/2 −3/4  |   1 ]
        [ 0 3  3/2 −5/4  |  −7 ]

      = [ 4 0   2     1  |   8 ]
        [ 0 3  3/2 −5/4  |  −7 ]   R3 − (2/3)R2 , R4 − (1/3)R2
        [ 0 2  1/2 −3/4  |   1 ]
        [ 0 1   0  −5/2  | −14 ]

      = [ 4 0    2     1    |    8  ]
        [ 0 3   3/2  −5/4   |   −7  ]   R4 − R3
        [ 0 0  −1/2   1/12  |  17/3 ]
        [ 0 0  −1/2 −25/12  | −35/3 ]

48 / 113
 
      = [ 4 0    2     1   |    8  ]
        [ 0 3   3/2  −5/4  |   −7  ]
        [ 0 0  −1/2  1/12  |  17/3 ]
        [ 0 0    0  −13/6  | −52/3 ]

Finally, we get:

[ 4 0    2     1   ] [x1]   [    8  ]
[ 0 3   3/2  −5/4  ] [x2] = [   −7  ]
[ 0 0  −1/2  1/12  ] [x3]   [  17/3 ]
[ 0 0    0  −13/6  ] [x4]   [ −52/3 ]

and so, by back substitution, x4 = 8, x3 = −10, x2 = 6, x1 = 5.

49 / 113
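The elimination steps worked above can be automated. A Python sketch of Gauss elimination with partial pivoting, applied to the same system:

```python
def gauss_solve(A, b):
    """Gauss elimination with partial pivoting, then back substitution."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A|b]
    for k in range(n):
        # Partial pivoting: bring the largest pivot in column k to row k.
        j = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[j] = M[j], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for col in range(k, n + 1):
                M[i][col] -= factor * M[k][col]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[2.0, 1.0, 1.0, -2.0],
     [4.0, 0.0, 2.0,  1.0],
     [3.0, 2.0, 2.0,  0.0],
     [1.0, 3.0, 2.0, -1.0]]
b = [-10.0, 8.0, 7.0, -5.0]
x = gauss_solve(A, b)   # x is approximately (5, 6, -10, 8)
```

The pivot choice at each step matches the row interchanges performed by hand on the slides, so the intermediate matrices agree up to rounding.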
Methods for Transient Analysis

50 / 113
Fully Symbolic Method

I Taking the Laplace transform on both sides of the
Kolmogorov differential equation and rearranging the terms:

P̄(s) = P(0)(sI − A)⁻¹

I The transient state probability is obtained by computing the
inverse Laplace transform of P̄(s)
I The solutions obtained are in closed form and fully symbolic in
both the system parameters and time t

51 / 113
Using Matrix Theory: The solution of the Kolmogorov differential
equations is:

P(t) = P(0) e^{At}

Now, the problem is to compute e^{At}:

I Input matrix A
I Find the eigenvalues and eigenvectors of matrix A
I A = V · D · V⁻¹ and e^A = V · e^D · V⁻¹ (assuming A is
diagonalizable), where V is the matrix of eigenvectors and D is
the diagonal matrix whose diagonal elements are the eigenvalues of A
I In MATLAB, e^A can be obtained directly using:
I Y = expm(A), or
I [V,D] = eig(A);
expm(A) = V*diag(exp(diag(D)))/V

52 / 113
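Outside MATLAB, e^{At} can also be approximated directly from its Taylor series, Σ (At)ᵏ/k!, which is adequate when ‖At‖ is small. A pure-Python sketch for a two-state repairable component with illustrative failure rate a and repair rate b; for this chain the closed form p00(t) = b/(a+b) + a/(a+b)·e^{−(a+b)t} is known and serves as a check:

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_series(A, t, terms=40):
    """Truncated Taylor series for e^{At}; fine for small ||At||."""
    n = len(A)
    At = [[A[i][j] * t for j in range(n)] for i in range(n)]
    E = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # I
    term = [row[:] for row in E]
    for k in range(1, terms):
        term = mat_mul(term, At)                       # now (At)^k / (k-1)!
        term = [[v / k for v in row] for row in term]  # now (At)^k / k!
        E = [[E[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return E

a, b = 1.0, 2.0              # illustrative failure / repair rates
A = [[-a, a],
     [ b, -b]]               # CTMC generator
t = 0.5
E = expm_series(A, t)        # P(t) = P(0) e^{At}; row 0 = start in state 0

p00_exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * t)
```

Production codes (MATLAB's `expm`, SciPy's `linalg.expm`) use scaling-and-squaring with Padé approximants instead, which remains accurate for large ‖At‖.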
Euler Method: Consider the following differential equation:

u′ = f (t, u), u(0) = u0

then:

uj+1 = uj + h fj , where fj = f (tj , uj ), j = 0, 1, 2, · · · , N − 1

53 / 113
Example: Solve the ODE:

u′ = −2tu², u(0) = 1

We have:

uj+1 = uj − 2h tj uj²

with h = 0.2 and u0 = 1:
u(0.2) = u1 = 1.0
u(0.4) = u2 = 0.92
u(0.6) = u3 = 0.78458
u(0.8) = u4 = 0.63684
u(1) = u5 = 0.50706

54 / 113
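The Euler iteration takes only a few lines in code. A Python sketch reproducing the values listed above:

```python
def euler(f, u0, t0, h, n):
    """Explicit Euler: u_{j+1} = u_j + h * f(t_j, u_j)."""
    t, u, values = t0, u0, [u0]
    for _ in range(n):
        u = u + h * f(t, u)
        t = t + h
        values.append(u)
    return values

f = lambda t, u: -2.0 * t * u * u       # u' = -2 t u^2
vals = euler(f, u0=1.0, t0=0.0, h=0.2, n=5)
# vals[1:] reproduce u(0.2) = 1.0, u(0.4) = 0.92, ..., u(1) = 0.50706
```

The exact solution of this ODE is u(t) = 1/(1 + t²), so at t = 1 the true value is 0.5; the Euler value 0.50706 shows the first-order error at this step size.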
Runge-Kutta Method of Fourth Order (RK4): Consider the
following differential equation:

u′ = f (t, u), u(0) = u0

Then:

ui+1 = ui + (1/6)[k1 + 2k2 + 2k3 + k4 ]

where:

k1 = h f (ti , ui )
k2 = h f (ti + h/2, ui + k1 /2)
k3 = h f (ti + h/2, ui + k2 /2)
k4 = h f (ti + h, ui + k3 )

55 / 113
RK4 Method for a System of ODEs: Consider the system of n
equations:

dU/dt = F (t, u1 , u2 , · · · , un ),  U(t0 ) = U0

where U = [u1 , u2 , · · · , un ]ᵀ,
F = [f1 , f2 , · · · , fn ]ᵀ,
U0 = [u1 (0), u2 (0), · · · , un (0)]ᵀ

56 / 113
Then the solution is given by:

Uj+1 = Uj + (1/6)(K1 + 2K2 + 2K3 + K4 )

where Uj+1 = [u1,j+1 , u2,j+1 , · · · , un,j+1 ]ᵀ, Ki = [k1i , k2i , · · · , kni ]ᵀ,
and:

ki1 = h fi (tj , u1,j , u2,j , · · · , un,j )
ki2 = h fi (tj + h/2, u1,j + (1/2)k11 , u2,j + (1/2)k21 , · · · , un,j + (1/2)kn1 )
ki3 = h fi (tj + h/2, u1,j + (1/2)k12 , u2,j + (1/2)k22 , · · · , un,j + (1/2)kn2 )
ki4 = h fi (tj + h, u1,j + k13 , u2,j + k23 , · · · , un,j + kn3 )

57 / 113
Solve the system of equations:

u 0 = −3u + 2v , u(0) = 0
v 0 = 3u − 4v , v (0) = 0.5

with h = 0.2 on the interval [0, 0.4]. Use RK4 method.

58 / 113
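The exercise above can be checked with a short RK4 implementation. Because the system is linear with eigenvalues −1 and −6, its exact solution can be worked out by eigendecomposition as u(t) = 0.2e^{−t} − 0.2e^{−6t}, v(t) = 0.2e^{−t} + 0.3e^{−6t}, which the sketch uses for comparison:

```python
import math

def rk4_system(F, U0, t0, h, n):
    """Classical RK4 for a system U' = F(t, U), taking n steps of size h."""
    t, U = t0, U0[:]
    for _ in range(n):
        k1 = [h * f for f in F(t, U)]
        k2 = [h * f for f in F(t + h/2, [u + k/2 for u, k in zip(U, k1)])]
        k3 = [h * f for f in F(t + h/2, [u + k/2 for u, k in zip(U, k2)])]
        k4 = [h * f for f in F(t + h,   [u + k   for u, k in zip(U, k3)])]
        U = [u + (a + 2*b + 2*c + d) / 6
             for u, a, b, c, d in zip(U, k1, k2, k3, k4)]
        t += h
    return U

F = lambda t, U: [-3*U[0] + 2*U[1], 3*U[0] - 4*U[1]]
u, v = rk4_system(F, [0.0, 0.5], t0=0.0, h=0.2, n=2)   # reaches t = 0.4

u_exact = 0.2 * math.exp(-0.4) - 0.2 * math.exp(-2.4)
v_exact = 0.2 * math.exp(-0.4) + 0.3 * math.exp(-2.4)
```

With h = 0.2 the RK4 values agree with the exact solution to roughly two to three decimal places; the fast e^{−6t} mode (hλ = −1.2) is what limits accuracy at this step size.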
Euler’s Method for System of ODEs: Consider the system of n
equations:
dU
= F (t, u1 , u2 , · · · , un )
dt
U(t0 ) = U0

Where,
U = [u1 , u2 , · · · , un ]T ,
F = [f1 , f2 , · · · , fn ]T ,
U0 = [u1 (0), u2 (0), · · · , un (0)]
The solution is given by:

Uj+1 = Uj + hUj0 ; j = 0, 1, 2, · · · , N − 1

59 / 113
Computational Techniques for DTMC

1: Given a two-state Markov chain, with the transition probability
matrix:

P = [ 1−a    a  ]     0 ≤ a, b ≤ 1, |1 − a − b| < 1
    [  b    1−b ]

the matrix Pⁿ is given by:

Pⁿ = (1/(a+b)) [ b + a(1−a−b)ⁿ   a − a(1−a−b)ⁿ ]
               [ b − b(1−a−b)ⁿ   a + b(1−a−b)ⁿ ]

60 / 113
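The closed form for Pⁿ can be checked against repeated matrix multiplication. A Python sketch, with illustrative values of a, b and n:

```python
def mat_mul2(X, Y):
    """2x2 matrix product."""
    return [[X[i][0]*Y[0][j] + X[i][1]*Y[1][j] for j in range(2)]
            for i in range(2)]

a, b, n = 0.3, 0.5, 7          # illustrative; |1 - a - b| < 1 holds

P = [[1 - a, a],
     [b, 1 - b]]

# P^n by repeated multiplication, starting from the identity.
Pn = [[1.0, 0.0], [0.0, 1.0]]
for _ in range(n):
    Pn = mat_mul2(Pn, P)

# Closed form from the slide.
r = (1 - a - b) ** n
closed = [[(b + a*r)/(a + b), (a - a*r)/(a + b)],
          [(b - b*r)/(a + b), (a + b*r)/(a + b)]]
```

As n grows, (1 − a − b)ⁿ → 0 and both rows of Pⁿ approach the limiting distribution (b/(a+b), a/(a+b)), which is exactly the ergodic behavior described earlier.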
Assignments

1. Make a computer program to solve a given DTMC having


2-states.
Inputs: TPM, P and initial probability, P(0)
Output: n−step transition probabilities, P(n)
2. Extend the program to an m × m TPM, using
P(n) = P(0) · Pⁿ or P(n + 1) = P(n) · P

61 / 113
Pulping System: The four major actions carried out in the system
are: (i) cooking of chips, (ii) separation of knots, (iii) washing of
pulp, and (iv) opening of fibers. The pulping system consists of four
subsystems, namely:

I Digester (A): One unit, used for cooking the chips, whose
failure causes the complete failure of the cooking system.
I Knotter (B): Two units, one working and the other standby,
used to remove the knots from the cooked chips; complete
failure occurs only if both units fail.

62 / 113
I Decker (C): Three units, arranged in series configuration,
used to remove liquor from the cooked chips. Failure of any one
causes the complete failure of the pulping system.
I Opener (D): Two units, one working and the other standby,
used to separate the fibers; complete failure occurs only if
both units fail.

The interactions among the various components of the pulping system
are shown by the RBD in Fig. 1.

[Figure 1: Reliability Block Diagram of the Pulping System —
Digester A in series with the parallel Knotter units B1/B2, the
series Decker units C1, C2, C3, and the parallel Opener units D1/D2]

63 / 113
Figure: CTMC for the Pulping System

α, λ, σ, θ: failure rates of the digester, knotter, decker and opener,
respectively.
β, µ, ε, φ: repair rates of the digester, knotter, decker and opener,
respectively.

64 / 113
Assignments (Contd...)

Make computer programmes to find out steady state probabilities


using:

3. Power Method
4. SOR/Gauss-Seidel Method
5. Gauss Elimination Method

65 / 113
Assignments (Contd...)

Make computer programmes to find out transient probabilities


using:

6. Matrix Method
7. Euler’s Method
8. RK4 Method

Also, simulate the system behavior for t = 100 hours.

66 / 113
Birth and Death Process (BD Process): A homogeneous
CTMC {X (t) : t ≥ 0}, with state space {0, 1, 2, · · · } is called a
BD process, if there exist constants,
λi ; i = 0, 1, 2, · · · , and
µi ; i = 1, 2, 3, · · · such that the transition rates are given by:

pi,i+1 = λi , i = 0, 1, 2, · · ·
pi,i−1 = µi , i = 1, 2, 3, · · ·
pij = 0, for |i − j| > 1

67 / 113
In the steady state, we get:

λ0 π0 = µ1 π1
(λk + µk )πk = λk−1 πk−1 + µk+1 πk+1 , k ≥ 1

Now,

λk πk − µk+1 πk+1 = λk−1 πk−1 − µk πk


= λk−2 πk−2 − µk−1 πk−1
= λk−3 πk−3 − µk−2 πk−2
= ··· ··· ··· ···
= λ0 π0 − µ1 π1
= 0

68 / 113
Thus, we have:

λk πk = µk+1 πk+1

or

πk+1 = (λk /µk+1 ) πk  ⟹  πk = (λk−1 /µk ) πk−1 , for k ≥ 1

or

πk = (λ0 · λ1 · · · λk−1 )/(µ1 · µ2 · · · µk ) π0 = π0 Π_{i=0}^{k−1} (λi /µi+1 )

69 / 113
Using the fact that Σ_j πj = 1, we get:

π0 + π0 Σ_{k≥1} Π_{i=0}^{k−1} (λi /µi+1 ) = 1

i.e.,

π0 = 1 / [1 + Σ_{k≥1} Π_{i=0}^{k−1} (λi /µi+1 )]

The limiting probabilities will exist provided the series

Σ_{k≥1} Π_{i=0}^{k−1} (λi /µi+1 )

converges.

70 / 113
Queues: A queue is a system into which customers arrive to
receive service. When the servers in the system are busy, incoming
customers wait for their turn. Upon completion of a service, the
customer to be served next is selected according to some queuing
discipline.
Queuing discipline: The queuing discipline is a rule for selecting
the customer for the next service from the set of waiting customers.
This could be either:

I First in first out (FIFO),


I Last in first out (LIFO),
I Service in random order (SIRO),
I Shortest processing time (SPT), etc.

71 / 113
A queuing system can be completely described by:

(a). The input or arrival pattern


(b). The service mechanism or service pattern
(c). The number of service channels
(d). Customer’s behavior
(e). Maximum length of a queue
(f). The size of population

72 / 113
Notation for queues: Generally queuing model may be completely
specified in the following symbolic form: (A/B/C):(D/E/F), where

I A: probability law for the arrival (or inter-arrival) time


I B: probability law according to which the customers are being
served
I C: number of service channels (service stations)
I D: queue/service discipline
I E: capacity of the system, i.e., the maximum number allowed
in the system (in service or waiting)
I F: Population size

D. Kendall (1953) introduced the three characteristics (A/B/C).

Later, A. Lee (1966) added the fourth, fifth and sixth characteristics
to the notation.

73 / 113
Let C1 , C2 , . . . be any arbitrary arrival stream of customers with:

I tj = arrival epoch of Cj , (0 ≤ t1 ≤ t2 ≤ . . . < ∞), and


I Tj = tj+1 − tj is inter-arrival time between Cj and Cj+1

A customer who has arrived but not yet departed is said to be in


the system. While in the system, a customer may be either in
service or in the queue. Let,

I Sj = service time of Cj
I Dj = delay in queue of Cj , (waiting time in queue)
I Wj = Dj + Sj = waiting time in system of Cj

74 / 113
The number of customers in the system at epoch t is given by
N(t), which can be written as

N(t) = Nq (t) + Ns (t)

where Nq (t) = number of customers in the queue at time t, and

Ns (t) = number of customers in service at time t.
The number of arrivals by time t, nA (t) and number of departures
by time t, nD (t), are given by:

nA (t) = max{j : tj ≤ t}
nD (t) = nA (t) − N(t)

The process {nA (t) : t ≥ 0} is called the arrival process and the
process {nD (t) : t ≥ 0} is called the departure process.

75 / 113
Performance Measures: The performance of a queue will be
defined in terms of properties of one or more of the following
stochastic processes: {N(t) : t ≥ 0}, {Nq (t) : t ≥ 0},
{Ns (t) : t ≥ 0}, {Wj : j ∈ N}, and {Dj : j ∈ N}.
Let us define the following parameters first:

λ = arrival rate = lim_{t→∞} nA (t)/t

1/µ = average time in service = lim_{n→∞} (1/n) Σ_{j=1}^{n} Sj ,
where µ is called the service rate.

We now define generic measures of performance
for a queue. Let m be the number of identical servers in the
queuing system.

76 / 113
Average number of customers in the system:

L = lim_{t→∞} (1/t) ∫₀ᵗ N(u) du    (1)

Average number of customers in the queue (mean queue
length):

Q = lim_{t→∞} (1/t) ∫₀ᵗ Nq (u) du    (2)

Average number of customers in service:

B = lim_{t→∞} (1/t) ∫₀ᵗ Ns (u) du    (3)

77 / 113
Average server utilization:

U = B/m    (4)

Average waiting time in the system:

W = lim_{n→∞} (1/n) Σ_{j=1}^{n} Wj    (5)

Average waiting time in the queue:

D = lim_{n→∞} (1/n) Σ_{j=1}^{n} Dj    (6)

The measures L, Q, B and U are averages over time, whereas the
measures W and D are averages over customers. The quantities on
the right hand side are random variables.
78 / 113
Little's Result: The result states that, given a queue, if the limits
λ = lim_{t→∞} nA (t)/t and W = lim_{n→∞} (1/n) Σ_{j=1}^{n} Wj
exist, with λ < ∞ and W < ∞, then the following limit,

L = lim_{t→∞} (1/t) ∫₀ᵗ N(u) du

exists, where L < ∞ and

L = λW    (7)

A consequence of Little's result is that if the limits defining λ and
D exist, with λ < ∞, D < ∞, then the limit defining Q exists,
with Q < ∞ and Q = λD. Note that this enables the result to be
applied to parameters connected with the queue alone.

79 / 113
The M/M/1 Queue: In the M/M/1 queue, arrivals follow a
Poisson process with rate λ, service times are exponentially
distributed with rate µ, and there is a single server.
Let N(t) be the number of customers in the M/M/1 queue at time
t; then {N(t) : t ≥ 0} is clearly a CTMC. It is also a special
case of the birth and death process with:

λk = λ, k ≥ 0
µk = µ, k ≥ 1

The ratio ρ = λ/µ is called the traffic intensity of the system.

80 / 113
From the last equation of the BD process, we have:

π0 = 1 / Σ_{k≥0} ρᵏ = (1 − ρ)    (8)

That is, the probability of an empty system is given by (1 − ρ).
Thus ρ represents the proportion of the time the server is busy.
This is called the utilization U of the server, or the traffic intensity
of the queue.

πk = ρᵏ (1 − ρ), k ≥ 0    (9)

The probability πk can be described in the following ways:

I as the proportion of time, in the steady state, that the system has
exactly k customers,
I in the steady state, a randomly arriving customer finds exactly
k customers in the system,
I a departing customer leaves behind exactly k customers in the
system.
81 / 113
The mean number of customers in the system, L, is found as the
expected value of the random variable N, where

P{N = k} = lim_{t→∞} P{N(t) = k} = πk

Thus we have,

L = E[N] = Σ_{k=0}^{∞} k πk = (1 − ρ) Σ_{k=0}^{∞} k ρᵏ    (10)

But, we have,

Σ_{k=0}^{∞} k ρᵏ = ρ/(1 − ρ)²    (11)

So by (10) and (11), we have,

L = ρ/(1 − ρ) = λ/(µ − λ)    (12)

82 / 113
The mean number of customers in the queue, Q, is similarly given
as

Q = Σ_{k=1}^{∞} (k − 1) πk = Σ_{k=1}^{∞} k πk − Σ_{k=1}^{∞} πk
  = L − (1 − π0 ) = L − ρ
  = ρ²/(1 − ρ) = λ²/(µ(µ − λ))    (13)

We can also rewrite this as,

L = Q + ρ

Now, W, the mean waiting time in the system (or mean response
time), is given by,

W = D + 1/µ    (14)

where 1/µ is the mean service time.

83 / 113
To find the explicit formula for W, we use Little's formula,

L = λW    (15)

Using (15), we obtain the expressions for W and D for the M/M/1
queue. From (12) and (15), we have,

W = L/λ = 1/(µ(1 − ρ)) = 1/(µ − λ)    (16)

and, from (14),

D = 1/(µ − λ) − 1/µ = λ/(µ(µ − λ))    (17)

84 / 113
Example: Consider an NC machine center processing raw parts
one at a time in M/M/1 fashion. Let λ = 8 parts/h and µ = 10
parts/h. Then find out:

a. Machine utilization,
b. Mean number of customers in the system,
c. Mean number of customers in the queue,
d. Mean waiting time in the system, and
e. Mean waiting time in queue.

The answers are (a) 0.8, (b) 4.0, (c) 3.2, (d) 0.5 h, and (e) 0.4 h.

85 / 113
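Formulas (12), (13), (16) and (17) give these answers directly. A Python sketch for the NC machine example:

```python
lam, mu = 8.0, 10.0             # arrival and service rates, parts per hour

rho = lam / mu                  # (a) machine utilization        -> 0.8
L = lam / (mu - lam)            # (b) mean number in system      -> 4.0
Q = lam**2 / (mu * (mu - lam))  # (c) mean number in queue       -> 3.2
W = 1.0 / (mu - lam)            # (d) mean time in system (h)    -> 0.5
D = lam / (mu * (mu - lam))     # (e) mean time in queue (h)     -> 0.4

# Consistency checks: L = Q + rho, and Little's result L = lam * W.
```

The two identities at the end tie the example back to L = Q + ρ and to Little's result (7), so any slip in the formulas would show up immediately.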
Stability of the M/M/1 Queue: From (8), we see that:

π0 = 1 / Σ_{k≥0} ρᵏ

We can solve this equation for π0 if and only if the series in the
denominator is convergent, which holds if and only
if ρ < 1. When ρ ≥ 1, the series diverges, which means that the
queue grows without bound and the values of L, Q, W and D
go to infinity. We say that the queue is stable if ρ < 1 and
unstable if ρ ≥ 1. The stability condition physically means that
the arrival rate must be strictly less than the service rate.
When ρ is close to 1, queues become very long, and as ρ → 1 the
number in the system grows without bound. Transient analysis
reveals system behavior as ρ → 1, or short-term behavior when
ρ > 1.

86 / 113
Waiting Time Distributions: We have seen that W and D are
the mean values of the random variables w and d, whose
distributions can be defined by:

Fw (t) = lim_{n→∞} (1/n) Σ_{j=1}^{n} P{Wj ≤ t}

Fd (t) = lim_{n→∞} (1/n) Σ_{j=1}^{n} P{Dj ≤ t}

Now we will calculate Fw (t) and Fd (t), assuming that the
customers are served in FIFO (or FCFS) order.

87 / 113
First note that d is a continuous random variable, except that there
is a nonzero probability, π0 , that d = 0 and the customer enters
service immediately on arrival. We have:

Fd (0) = P{d ≤ 0} = P{d = 0}

= P{system is empty on arrival}

So,

Fd (0) = π0 = 1 − ρ    (18)

88 / 113
Now we will find Fd (t) for t > 0. If there are k customers in the
system upon arrival, then for the arriving customer to get into
service between 0 and t, all k units must be served by time t.
the service time distribution is memoryless, the distribution of the
time required for k service times is independent of the time of the
current arrival, and is the convolution of k exponential random
variables, which is an Erlang-k distribution. The probability density
function of the Erlang distribution is:

f (x; k, λ) = λ(λx)^{k−1} e^{−λx} / (k − 1)!,  for x, λ ≥ 0

The parameter k is called the shape parameter and the parameter


λ is called the rate parameter.
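A minimal sketch of this density in code (erlang_pdf is an illustrative name, not from the slides):

```python
import math

def erlang_pdf(x, k, rate):
    """f(x; k, rate): density of the sum of k iid exponential(rate) variables."""
    return rate * (rate * x) ** (k - 1) * math.exp(-rate * x) / math.factorial(k - 1)

# For k = 1 this reduces to the exponential density rate * e^{-rate*x}
print(erlang_pdf(0.5, 1, 2.0))  # → 2*e^{-1} ≈ 0.7358
```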

The probability that an arrival finds k units in the system in the
steady state is simply the probability π_k. Thus we have:

F_d(t) = P(d ≤ t) = F_d(0) + Σ_{k=1}^{∞} P(0 < d ≤ t | k units in the system) π_k

where

P(0 < d ≤ t | k units in the system) = P(sum of k service times ≤ t)
                                     = ∫_0^t µ(µx)^{k−1} e^{−µx} / (k − 1)! dx

Therefore, we get for t > 0,

F_d(t) = (1 − ρ) + (1 − ρ) Σ_{k=1}^{∞} ρ^k ∫_0^t µ(µx)^{k−1} e^{−µx} / (k − 1)! dx

       = (1 − ρ) + (1 − ρ)ρ ∫_0^t µe^{−µx} ( Σ_{k=1}^{∞} (µxρ)^{k−1} / (k − 1)! ) dx

       = (1 − ρ) + (1 − ρ)ρ ∫_0^t µe^{−µx(1−ρ)} dx          (19)

Simplifying (19) we get:

F_d(t) = 1 − ρe^{−µ(1−ρ)t},   for t > 0

The distribution for the waiting time in queue is then:

F_d(t) = 1 − ρ,                for t = 0
       = 1 − ρe^{−µ(1−ρ)t},    for t > 0          (20)

From (20), we can compute the mean waiting time in queue as:

D = E[d] = λ / (µ(µ − λ))          (21)
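As a sanity check that (20) and (21) agree, one can integrate the survival function 1 − F_d(t) numerically (a sketch; the step size and horizon are arbitrary choices of mine):

```python
import math

def Fd(t, lam, mu):
    """F_d(t) for t >= 0 from Eq. (20): delay distribution of the FIFO M/M/1 queue."""
    rho = lam / mu
    return 1 - rho * math.exp(-mu * (1 - rho) * t)

# The mean delay is the integral of the survival function 1 - Fd(t);
# a Riemann sum of it should match Eq. (21): D = lam / (mu*(mu - lam)).
lam, mu, dt = 8.0, 10.0, 1e-4
D_numeric = sum((1 - Fd(k * dt, lam, mu)) * dt for k in range(200000))
print(round(D_numeric, 3), lam / (mu * (mu - lam)))  # → 0.4 0.4
```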

Let us now compute the distribution of w, following similar
arguments. We have for t ≥ 0,

F_w(t) = Σ_{k=0}^{∞} P(w ≤ t | k units in the system) π_k

       = Σ_{k=0}^{∞} P(sum of (k + 1) service times ≤ t) π_k

       = Σ_{k=0}^{∞} ( ∫_0^t µ(µx)^k e^{−µx} / k! dx ) ρ^k (1 − ρ)

       = (1 − ρ) ∫_0^t µe^{−µx} ( Σ_{k=0}^{∞} (µxρ)^k / k! ) dx

       = (1 − ρ) ∫_0^t µe^{−µx(1−ρ)} dx

       = 1 − e^{−µ(1−ρ)t} = 1 − e^{−(µ−λ)t}

Thus we have:

F_w(t) = 0,                 for t < 0
       = 1 − e^{−(µ−λ)t},   for t ≥ 0          (22)

Therefore, w is exponentially distributed with mean given by:

W = 1 / (µ − λ)

It is noted that the mean values D and W are independent of the
queuing discipline, whereas the distributions of d and w depend on
the queuing discipline.
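The exponential form of F_w(t) can also be checked by simulation. The sketch below is illustrative (not from the slides): each job departs at max(its arrival, the previous departure) plus an exponential service time, which is valid for a single FIFO server:

```python
import random

def mean_sojourn_mm1(lam, mu, n, seed=1):
    """Average time in system for a simulated FIFO M/M/1 queue."""
    rng = random.Random(seed)
    total, arrival, depart_prev = 0.0, 0.0, 0.0
    for _ in range(n):
        arrival += rng.expovariate(lam)            # Poisson arrival stream
        start = max(arrival, depart_prev)          # wait if the server is busy
        depart_prev = start + rng.expovariate(mu)  # exponential service time
        total += depart_prev - arrival             # sojourn time of this job
    return total / n

print(mean_sojourn_mm1(8, 10, 200000))
```

For λ = 8 and µ = 10 the printed estimate should be close to the theoretical W = 1/(µ − λ) = 0.5 h, up to simulation noise.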

The M/M/1/N Queue: In an M/M/1/N queueing system, the
maximum number of jobs in the system is N, which implies a
maximum queue length of N − 1. An arriving job enters the queue
if it finds fewer than N jobs in the system and is lost otherwise.
This behavior can be modeled by a birth-death process with:

λ_k = λ,   for k = 0, 1, 2, . . . , N − 1
    = 0,   for k ≥ N

The service rate µ_k = µ for all k ≥ 1. The CTMC here is a finite
birth and death process with N + 1 states.

The steady state probabilities are given by (for ρ ≠ 1):

π_k = (1 − ρ)ρ^k / (1 − ρ^{N+1}),   k = 0, 1, 2, . . . , N
    = 0,                            k > N          (23)

Note that a system with finite capacity such as this will always
be stable, for all values of ρ.
The performance measures can be verified to be given by:

U = 1 − π_0 = ρ(1 − ρ^N) / (1 − ρ^{N+1})          (24)

Q = ρ^2 [1 − ρ^N − Nρ^{N−1}(1 − ρ)] / [(1 − ρ)(1 − ρ^{N+1})]          (25)

L = ρ [1 − ρ^N − Nρ^N(1 − ρ)] / [(1 − ρ)(1 − ρ^{N+1})]          (26)

W = [1 − ρ^N − Nρ^N(1 − ρ)] / [µ(1 − ρ)(1 − ρ^N)]          (27)

D = ρ [1 − ρ^N − Nρ^{N−1}(1 − ρ)] / [µ(1 − ρ)(1 − ρ^N)]          (28)

Example: It would be interesting to study the performance
measures of a single-server queuing system for various queue
capacities. First, consider the M/M/1 queue of Example 1, where
we had λ = 8 parts/h and µ = 10 parts/h. We have seen that
U = 0.8, L = 4, Q = 3.2, W = 0.5 h and D = 0.4 h.
At the other extreme, we have the M/M/1/1 system, in which no
queuing is allowed. In this case, there are only two states, 0 and 1,
and

π_0 = µ/(λ + µ) = 0.556;   π_1 = λ/(λ + µ) = 0.444

Verify that U = L = 0.444, Q = 0, W = 0.1 h, and D = 0 (a job
that is accepted never waits in queue).
Can you calculate the performance measures for the M/M/1/10
queue?
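The M/M/1/10 question can be answered mechanically from (24)-(28); the helper below is a sketch with an illustrative name, and with N = 1 it reproduces the no-queue case above:

```python
def mm1n_metrics(lam, mu, N):
    """Measures of an M/M/1/N queue from Eqs. (24)-(28); assumes rho != 1."""
    r = lam / mu
    U = r * (1 - r**N) / (1 - r**(N + 1))
    Q = r**2 * (1 - r**N - N * r**(N - 1) * (1 - r)) / ((1 - r) * (1 - r**(N + 1)))
    L = r * (1 - r**N - N * r**N * (1 - r)) / ((1 - r) * (1 - r**(N + 1)))
    W = (1 - r**N - N * r**N * (1 - r)) / (mu * (1 - r) * (1 - r**N))
    D = r * (1 - r**N - N * r**(N - 1) * (1 - r)) / (mu * (1 - r) * (1 - r**N))
    return U, L, Q, W, D

# N = 1 reproduces the no-queue case:
print([round(x, 3) for x in mm1n_metrics(8, 10, 1)])  # → [0.444, 0.444, 0.0, 0.1, 0.0]
# The M/M/1/10 case asked in the example:
print([round(x, 3) for x in mm1n_metrics(8, 10, 10)])
```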

Example: Consider a machine center modeled as an M/M/1/N
queue with parameters λ and µ. Let Cµ be the cost of operating
the machine at a rate of µ, and let a profit of q accrue for every
finished part produced. Find the service rate that maximizes the
total profit.
Solution: Production rate of the machine center = Uµ

Uµ = [ρ(1 − ρ^N) / (1 − ρ^{N+1})] µ = λµ(µ^N − λ^N) / (µ^{N+1} − λ^{N+1})

The above is the number of parts produced in one time unit (for
example one hour, if µ is expressed in parts per hour).
The corresponding total profit is given by:

P = [λµ(µ^N − λ^N) / (µ^{N+1} − λ^{N+1})] q − Cµ

The service rate µ that maximizes the profit can now be obtained
in the usual way.
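The maximization can also be carried out numerically. The sketch below uses made-up values for λ, N, q and C (none are from the slides) and a crude grid search in place of calculus:

```python
def profit(mu, lam=8.0, N=5, q=10.0, C=2.0):
    """P = throughput * q - C * mu for an M/M/1/N machine center (Umu as above).
    lam, N, q and C are illustrative numbers, not from the slides."""
    return lam * mu * (mu**N - lam**N) / (mu**(N + 1) - lam**(N + 1)) * q - C * mu

# Grid search over service rates just above lam (this closed form has a
# removable singularity at mu == lam, which the grid simply avoids)
candidates = [8.05 + 0.01 * i for i in range(2000)]
best_mu = max(candidates, key=profit)
print(round(best_mu, 2), round(profit(best_mu), 2))
```

For these illustrative parameters the search lands a little above µ = 10; with calculus one would instead set dP/dµ = 0.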

The M/M/m Queue: Consider a queuing system with Poisson
arrival rate λ as before, but in which m (≥ 1) exponential servers,
each with rate µ, share a common queue. This is referred to as the
M/M/m queue and is illustrated in Fig. 3. If the number of
customers in the system is greater than the number of servers,
then a queue forms.

Figure: M/M/m queue and its state diagram

Steady State Analysis: The M/M/m queue is also a special case
of the birth and death model, with rates:

λ_k = λ,    k = 0, 1, 2, . . .          (29)

µ_k = kµ,   k = 0, 1, . . . , m
    = mµ,   k > m          (30)

The state transition diagram is shown in Fig. 3. The steady state
probabilities can be obtained by using (29) and (30) in the
expressions for π_0 and π_k of the BD process. We find that:

π_k = π_0 (mρ)^k / k!,    k = 0, 1, 2, . . . , m
    = π_0 m^m ρ^k / m!,   k > m          (31)

where ρ is defined as:

ρ = λ / (mµ)          (32)

Equation (31) is valid for determining the steady state probabilities
whenever ρ < 1, i.e., whenever the stability condition holds.

The expression for π_0 can now be obtained using (31) and the fact
that the steady state probabilities sum to 1. Thus:

π_0 = [ Σ_{k=0}^{m−1} (mρ)^k / k! + (mρ)^m / (m!(1 − ρ)) ]^{−1}          (33)

To compute the different performance measures, we first consider
N_s, the number of busy servers in the steady state. N_s is a discrete
random variable with range {0, 1, 2, . . . , m} and with pmf given by:

P(N_s = k) = π_k,               k = 0, 1, . . . , m − 1
           = Σ_{j=m}^{∞} π_j,   k = m

The average number of busy servers in the steady state, B, can be
immediately obtained as:

B = E[N_s] = Σ_{k=0}^{m} k P(N_s = k) = λ/µ          (34)

The mean utilization U of each server is simply the ratio of B to
the total number of servers, and hence:

U = λ / (mµ) = ρ          (35)

We now compute Q, the average number of customers waiting in
queue:

Q = Σ_{k=m}^{∞} (k − m) π_k = ρ(mρ)^m π_0 / (m!(1 − ρ)^2)          (36)

The average number L in the system is given by:

L = Q + B = ρ(mρ)^m π_0 / (m!(1 − ρ)^2) + λ/µ          (37)

We now use Little’s law to compute the average waiting time in
the system. We have:

W = L/λ = ρ(mρ)^m π_0 / (m!λ(1 − ρ)^2) + 1/µ          (38)

The average waiting time in queue can now be obtained as:

D = W − 1/µ = ρ(mρ)^m π_0 / (m!λ(1 − ρ)^2)          (39)
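Equations (33)-(39) translate directly into code; the sketch below (the function name is mine) can be cross-checked against the M/M/1 results by setting m = 1:

```python
import math

def mmm_metrics(lam, mu, m):
    """Steady-state measures of an M/M/m queue from Eqs. (33)-(39); needs rho < 1."""
    rho = lam / (m * mu)
    assert rho < 1, "unstable"
    p0 = 1.0 / (sum((m * rho) ** k / math.factorial(k) for k in range(m))
                + (m * rho) ** m / (math.factorial(m) * (1 - rho)))
    Q = rho * (m * rho) ** m * p0 / (math.factorial(m) * (1 - rho) ** 2)
    L = Q + lam / mu          # Eq. (37): L = Q + B
    W = L / lam               # Little's law, Eq. (38)
    D = W - 1 / mu            # Eq. (39)
    return p0, Q, L, W, D

# m = 1 must collapse to the M/M/1 results for lam = 8, mu = 10:
print([round(x, 3) for x in mmm_metrics(8, 10, 1)])  # → [0.2, 3.2, 4.0, 0.5, 0.4]
```

Adding a second server with the same load shared (m = 2) shrinks Q sharply, as expected.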

The M/M/∞ Queue: In an M/M/∞ queueing system we have a
Poisson arrival process with arrival rate λ and an infinite number of
servers with service rate µ each.
If there are k jobs in the system, then the overall service rate is kµ
because each arriving job immediately gets a server and does not
have to wait. Once again, the underlying CTMC is a birth-death
process.

From the expressions for π_0 and π_k of the BD process, we obtain
the steady-state probability of k jobs in the system:

π_k = π_0 Π_{i=0}^{k−1} λ/((i + 1)µ) = π_0 (λ/µ)^k (1/k!)          (40)

The steady state probability of no jobs in the system is:

π_0 = 1 / (1 + Σ_{k=1}^{∞} (λ/µ)^k (1/k!)) = e^{−λ/µ}          (41)

and finally:

π_k = (λ/µ)^k e^{−λ/µ} / k!          (42)

This is the Poisson pmf.

The expected number of jobs in the system is:

L = λ/µ          (43)

With Little’s theorem, the mean waiting time in the system is
given by:

W = 1/µ          (44)
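A quick numerical confirmation of (41)-(43), again with the λ = 8, µ = 10 values used earlier (the truncation at 50 terms is arbitrary; the Poisson tail beyond it is negligible):

```python
import math

lam, mu = 8.0, 10.0
a = lam / mu
pk = [a**k * math.exp(-a) / math.factorial(k) for k in range(50)]  # Eq. (42)
L = sum(k * p for k, p in enumerate(pk))
print(round(sum(pk), 6), round(L, 6))  # → 1.0 0.8  (L = lam/mu, Eq. (43))
```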

Queuing models for self study:

A. (M/M/m):(GD/K /∞)
B. (M/M/m):(GD/K /N)

Thank You
