Continuous Time Markov Chain (Part 2)
MH4702: Probabilistic Methods in Operations Research
Continuous-time process

Consider a continuous-time stochastic process with states $0, 1, \dots, M$, where
- $r \ge 0$ is a past time,
- $s > r$ is the current time, and
- $s + t$, for $t > 0$, is $t$ time units into the future.

A pair of states $i$ and $j$ are said to communicate if there are times $t_1$ and $t_2$ such that $p_{ij}(t_1) > 0$ and $p_{ji}(t_2) > 0$. If all states communicate, the chain is irreducible.
Then, $\lim_{t \to \infty} p_{ij}(t) = \pi_j$ always exists for an irreducible Markov chain and is independent of its initial state $i$.
The stationary probabilities satisfy
$$\pi_j = \sum_{i=0}^{M} \pi_i \, p_{ij}(t) \quad \text{for every } j \text{ and every } t \ge 0, \qquad \text{with } \sum_{j=0}^{M} \pi_j = 1.$$
A key random variable: each time the process enters state $i$, the amount of time it spends in that state before moving to a different state is a random variable $T_i$, where $T_i$ is exponentially distributed.
Therefore,
$$p_{ij}(0) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$
The process moves only one step at a time. When leaving state $i$, the process moves to a state $j$ with probability $p_{ij}$, where the $p_{ij}$ satisfy the conditions
$$p_{ii} = 0 \ \text{for all } i, \qquad \sum_{j=0}^{M} p_{ij} = 1 \ \text{for all } i.$$
The jump probability $p_{ij}$ is not a function of the time $t$, and the next state visited after state $i$ is independent of the time spent in state $i$.
Each time the process enters state $i$, the amount of time it will spend in state $i$ before a transition to state $j$ occurs (if a transition to some other state does not occur first) is a random variable $T_{ij}$. The time $T_i$ spent in state $i$ until a transition occurs is the minimum (over $j \ne i$) of the $T_{ij}$.
Because the chain is stationary and memoryless, it is enough to study the instantaneous transitions, that is, the rates of change of the transition probabilities. The intensity of transitioning out of state $i$, $q_i$, is the expected number of transitions out of state $i$ per unit time. For a small time interval $\Delta t$ the number of transitions is either 1 or 0, hence the expected number is the same as the probability of a transition out of state $i$ in time $\Delta t$.
Now,
$$q_i = -p_{ii}'(0) = \lim_{\Delta t \to 0} \frac{1 - p_{ii}(\Delta t)}{\Delta t},$$
and since $T_i$ is the amount of time the chain spends in state $i$, $T_i$ has an exponential distribution with parameter $q_i$, so $E[T_i] = 1/q_i$.

Note that for $j \ne i$,
$$q_{ij} = p_{ij}'(0) = \lim_{\Delta t \to 0} \frac{p_{ij}(\Delta t)}{\Delta t},$$
so $q_{ij}\,\Delta t$ is approximately the probability of transitioning to state $j$ from $i$ in the interval $\Delta t$.

Now, for $j \ne i$, $p_{ij} = q_{ij}/q_i$ is the probability that the chain transitions to state $j$ when leaving state $i$. That is, at any time $t$,
$$q_{ij} = q_i \, p_{ij}, \qquad q_i = \frac{1}{E[T_i]}, \qquad q_{ij} = \frac{1}{E[T_{ij}]}, \qquad q_i = \sum_{j \ne i} q_{ij}.$$
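These relations can be illustrated by simulation: with a hypothetical set of rates $q_{ij}$ out of one state $i$ (the numbers below are made up), the holding time $T_i = \min_{j} T_{ij}$ of the competing exponential clocks has mean $1/q_i$ with $q_i = \sum_{j \ne i} q_{ij}$, and clock $j$ rings first with probability $p_{ij} = q_{ij}/q_i$. A minimal NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
rates = np.array([0.5, 1.0, 1.5])    # hypothetical rates q_ij out of one state i
n = 200_000                          # number of simulated visits to state i

# Each visit starts an independent exponential clock T_ij per destination j;
# the chain leaves state i when the first clock rings.
clocks = rng.exponential(1 / rates, size=(n, 3))
T_i = clocks.min(axis=1)             # holding time T_i = min_j T_ij
winner = clocks.argmin(axis=1)       # destination actually chosen

q_i = rates.sum()                    # q_i = sum over j of q_ij = 3.0
print(abs(T_i.mean() - 1 / q_i) < 0.01)            # T_i ~ Exp(q_i): mean 1/q_i
print(abs((winner == 2).mean() - rates[2] / q_i) < 0.01)   # p_ij = q_ij / q_i
```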
For modelling purposes, you usually start by identifying the transition rates $q_{ij}$ instead of the transition probability function $p_{ij}(t)$. The purpose is to find the stationary probabilities $\pi_j$.
Let $\mathbf{P}(t) = [\,p_{ij}(t)\,]$ be the matrix of transition probability functions and $\mathbf{Q}$ the matrix of transition rates. From the earlier discussion, you know that $\mathbf{Q}$ is related to the derivative of $\mathbf{P}(t)$: in fact $\mathbf{Q} = \mathbf{P}'(0)$.
$$\mathbf{P}(t) = \begin{bmatrix}
p_{00}(t) & p_{01}(t) & p_{02}(t) & \cdots & p_{0M}(t) \\
p_{10}(t) & p_{11}(t) & p_{12}(t) & \cdots & p_{1M}(t) \\
p_{20}(t) & p_{21}(t) & p_{22}(t) & \cdots & p_{2M}(t) \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
p_{M0}(t) & p_{M1}(t) & p_{M2}(t) & \cdots & p_{MM}(t)
\end{bmatrix}$$
$$\mathbf{Q} = \begin{bmatrix}
-q_0 & q_{01} & q_{02} & \cdots & q_{0M} \\
q_{10} & -q_1 & q_{12} & \cdots & q_{1M} \\
q_{20} & q_{21} & -q_2 & \cdots & q_{2M} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
q_{M0} & q_{M1} & q_{M2} & \cdots & -q_M
\end{bmatrix}$$
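The identity $\mathbf{Q} = \mathbf{P}'(0)$ can be illustrated numerically: for a finite chain the transition function is the matrix exponential $\mathbf{P}(t) = e^{\mathbf{Q}t}$, so the difference quotient at $0$ recovers $\mathbf{Q}$. A minimal NumPy sketch with a made-up 3-state generator (the rates are illustrative only):

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Hypothetical 3-state generator: non-negative off-diagonals, rows sum to zero.
Q = np.array([[-1.0,  0.4,  0.6],
              [ 0.2, -0.5,  0.3],
              [ 0.5,  0.5, -1.0]])

# P(t) = e^{Qt}, so (P(h) - I)/h -> Q as h -> 0, i.e. Q = P'(0).
h = 1e-5
P_h = expm(Q * h)
print(np.allclose((P_h - np.eye(3)) / h, Q, atol=1e-4))  # -> True
```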
How to solve for $\boldsymbol{\pi}$? Differentiating the stationarity condition $\boldsymbol{\pi} = \boldsymbol{\pi}\,\mathbf{P}(t)$ and evaluating at $t = 0$ gives
$$\mathbf{0} = \boldsymbol{\pi}\,\mathbf{P}'(0) = \boldsymbol{\pi}\,\mathbf{Q},$$
which is solved together with $\sum_j \pi_j = 1$.
With states Rain, Snow and Clear, the weather example has generator
$$\mathbf{Q} = \begin{pmatrix} -1/3 & 1/6 & 1/6 \\ 3/24 & -1/6 & 1/24 \\ 1/48 & 3/48 & -1/12 \end{pmatrix}$$
with $q_{ij} = q_i\,p_{ij}$. Hence each off-diagonal entry is the leaving rate $q_i$ times the jump probability $p_{ij}$, each diagonal entry is $-q_i$, and every row sums to zero.
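A quick numerical check of this example: solve $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ together with $\sum_j \pi_j = 1$ by replacing one balance equation with the normalisation condition. A NumPy sketch:

```python
import numpy as np

# Generator of the Rain/Snow/Clear weather example (rates from the notes).
Q = np.array([[-1/3,  1/6,   1/6 ],
              [ 3/24, -1/6,  1/24],
              [ 1/48, 3/48, -1/12]])

# The columns of Q give the balance equations pi Q = 0; the system has
# rank M, so swap the last balance equation for sum(pi) = 1.
A = np.vstack([Q.T[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)  # -> approximately [0.1605, 0.3457, 0.4938], i.e. (13/81, 28/81, 40/81)
```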
States: 0, 1, 2, with $q_{10} = 2$ and $q_{21} = 2$, since the expected repair time is $1/2$ day. The rate diagram gives the off-diagonal entries of the generator; each diagonal entry $-q_i$ is then chosen so that the row sums to zero:
$$\mathbf{Q} = \begin{pmatrix} ? & 2 & 0 \\ 2 & ? & 1 \\ 0 & 2 & ? \end{pmatrix}
\quad\Longrightarrow\quad
\mathbf{Q} = \begin{pmatrix} -2 & 2 & 0 \\ 2 & -3 & 1 \\ 0 & 2 & -2 \end{pmatrix}$$
The steady-state equations are then $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ together with $\sum_j \pi_j = 1$.
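Solving $\boldsymbol{\pi}\mathbf{Q} = \mathbf{0}$ with $\sum_j \pi_j = 1$ for this generator (a NumPy sketch, with one balance equation replaced by the normalisation condition):

```python
import numpy as np

# Generator of the repair example: states 0, 1, 2.
Q = np.array([[-2.0,  2.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])

# pi Q = 0 with sum(pi) = 1: swap one balance equation for normalisation.
A = np.vstack([Q.T[:-1], np.ones(3)])
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))
print(pi)  # -> approximately [0.4, 0.4, 0.2]
```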
A continuous-time Markov chain with transition function $\mathbf{P}(t)$ and generator $\mathbf{Q}$ satisfies the forward equation
$$\mathbf{P}'(t) = \mathbf{P}(t)\,\mathbf{Q}$$
and the backward equation
$$\mathbf{P}'(t) = \mathbf{Q}\,\mathbf{P}(t),$$
with initial condition $\mathbf{P}(0) = \mathbf{I}$.
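Both equations can be checked numerically for a small chain. A sketch using the repair-example generator, a Taylor-series matrix exponential for $\mathbf{P}(t) = e^{\mathbf{Q}t}$, and a central-difference derivative:

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via its Taylor series (adequate for small matrices)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Generator of the repair example (states 0, 1, 2).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 2.0, -3.0,  1.0],
              [ 0.0,  2.0, -2.0]])

P = lambda s: expm(Q * s)              # P(t) = e^{Qt}, so P(0) = I
t, h = 0.7, 1e-6
dP = (P(t + h) - P(t - h)) / (2 * h)   # central-difference estimate of P'(t)
print(np.allclose(dP, P(t) @ Q, atol=1e-4))  # forward equation  -> True
print(np.allclose(dP, Q @ P(t), atol=1e-4))  # backward equation -> True
```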
A two-state continuous-time Markov chain is specified by two holding-time parameters, as depicted in the transition graph. The process stays in state 1 for an exponential length of time with parameter $\lambda$ before moving to state 2. It stays in state 2 for an exponential length of time with parameter $\mu$ before moving to state 1, and so on. The embedded jump chain and the generator are
$$\tilde{\mathbf{P}} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\mathbf{Q} = \begin{pmatrix} -\lambda & \lambda \\ \mu & -\mu \end{pmatrix}.$$
The solution to the linear differential equation $\mathbf{P}'(t) = \mathbf{P}(t)\,\mathbf{Q}$ with $\mathbf{P}(0) = \mathbf{I}$ is
$$\mathbf{P}(t) = \frac{1}{\lambda+\mu} \begin{pmatrix}
\mu + \lambda e^{-(\lambda+\mu)t} & \lambda - \lambda e^{-(\lambda+\mu)t} \\
\mu - \mu e^{-(\lambda+\mu)t} & \lambda + \mu e^{-(\lambda+\mu)t}
\end{pmatrix}.$$
Similarly, letting $t \to \infty$ gives the stationary distribution
$$\boldsymbol{\pi} = \left( \frac{\mu}{\lambda+\mu}, \; \frac{\lambda}{\lambda+\mu} \right).$$
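For the two-state chain, the closed-form solution of the forward equation $\mathbf{P}'(t) = \mathbf{P}(t)\,\mathbf{Q}$, $\mathbf{P}(0) = \mathbf{I}$, can be verified numerically. A minimal sketch with arbitrarily chosen rates, comparing the closed form against crude Euler integration of the differential equation:

```python
import numpy as np

lam, mu = 1.5, 0.5               # arbitrarily chosen illustrative rates
Q = np.array([[-lam, lam],
              [ mu, -mu]])

def P_exact(t):
    """Closed-form solution of P'(t) = P(t) Q with P(0) = I."""
    s = lam + mu
    e = np.exp(-s * t)
    return (1 / s) * np.array([[mu + lam * e, lam - lam * e],
                               [mu - mu * e,  lam + mu * e]])

# Independent check: crude Euler integration of the forward equation.
P, h = np.eye(2), 1e-4
for _ in range(int(2.0 / h)):    # integrate up to t = 2
    P = P + h * (P @ Q)
print(np.allclose(P, P_exact(2.0), atol=1e-3))  # -> True
```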
For a continuous-time stochastic process $\{X(t') : t' \ge 0\}$,
$$p_{ij}(t) = P\{X(t) = j \mid X(0) = i\}$$
is referred to as the continuous-time transition probability function. Assume
$$\lim_{t \to 0} p_{ij}(t) = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}$$