Sonia REBAI
Tunis Business School
University of Tunis
Introduction
• Deterministic and stochastic models are two broad categories of mathematical models that aim at providing quantitative characterizations of a real system under study.
• A deterministic model predicts a single outcome, whereas a stochastic model predicts a set of possible outcomes along with the likelihood of each outcome.
• Stochastic processes are models used to depict the dynamic relationship of a family of random variables evolving in time or space.
Introduction - continued
• A stochastic process {Xt} is a sequence of random variables indexed by a parameter such as time or space.
• The transition probability pij = P(Xt+1 = j | Xt = i) is the probability that, given the system is in state i at period t, it will be in state j at period t+1. The pij are often referred to as the transition probabilities of the stochastic process.
Discrete Time Markov Chain - continued
• A Markov chain is a discrete-time stochastic process that can be in one of a finite number of states satisfying the stationarity and the memory-less properties.
• A Markov process is completely characterized by its transition matrix P = (pij).
    P = ( p11  p12  ...  p1j  ... )
        ( p21  p22  ...  ...  ... )      where pij ≥ 0 and Σj pij = 1 for every state i
        ( ...  ...  ...  ...  ... )
        ( pi1  ...  ...  pij  ... )
        ( ...  ...  ...  ...  ... )
Example 1
Every day, Ali uses one of the paths A or B to go to work. If Ali meets congestion on the selected day along the chosen path, he will change the route the next day. It is assumed that the chance of congestion on path A is equal to 1/3 and on path B to 1/2.
Make sure that the problem may be described by a Markov chain and provide its transition matrix.
• The stochastic process (Xn) satisfies the Markovian property. In fact, the probability of using a given road depends only on the last used one.
• The process is stationary because the transition probability from one state to another does not depend on the specific day of transition.
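As a small sketch, the two-state chain of Example 1 can be encoded and checked against the stochastic-matrix conditions. The congestion probabilities 1/3 (path A) and 1/2 (path B) are assumed here, consistent with the transition probabilities used later in these slides; exact fractions avoid rounding error:

```python
from fractions import Fraction as F

# States: 0 = path A, 1 = path B.
# If Ali meets congestion, he switches paths the next day.
# Assumed congestion probabilities: 1/3 on path A, 1/2 on path B.
P = [[F(2, 3), F(1, 3)],   # from A: stay with prob 2/3, switch with prob 1/3
     [F(1, 2), F(1, 2)]]   # from B: switch with prob 1/2, stay with prob 1/2

# Stochastic-matrix conditions: p_ij >= 0 and each row sums to 1.
assert all(p >= 0 for row in P for p in row)
assert all(sum(row) == 1 for row in P)
print("P is a valid transition matrix")
```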
The state distribution at period n is the row vector (P(Xn = a1), ..., P(Xn = ar)), where P(Xn = ai) is the probability that the process is in state ai at period n.
• The m-step transition matrix P(m) gives, for each origin state i and each destination state j, the probability pij(m) of moving from i to j in m periods.
• Note that pij(m) ≠ (pij)^m in general.
Transition Probabilities over n periods - continued
Properties
P(3) = (1/2, 1/2) × ( 65/108  43/108 ) = (259/432, 173/432)
                    ( 43/72   29/72  )

It follows that P(X3 = A) = 259/432.
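The computation above can be reproduced exactly with `fractions.Fraction`. The transition matrix (pAA = 2/3, pBB = 1/2) is the one from Ali's example; the helper names `mat_mul` and `vec_mat` are just illustrative:

```python
from fractions import Fraction as F

P = [[F(2, 3), F(1, 3)],
     [F(1, 2), F(1, 2)]]

def mat_mul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def vec_mat(v, A):
    """Multiply a row vector by a matrix."""
    return [sum(v[k] * A[k][j] for k in range(len(A))) for j in range(len(A[0]))]

P3 = mat_mul(P, mat_mul(P, P))           # three-step transition matrix P^3
dist3 = vec_mat([F(1, 2), F(1, 2)], P3)  # distribution after 3 days
print(dist3)  # [Fraction(259, 432), Fraction(173, 432)]
```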
First passage probabilities
Let’s denote by fij(n) the probability of 1st passage from state i to state j in n
periods.
Back to the previous example and suppose that the 1st day, Ali uses road A.
What is the probability of using B for the 1st time in 3 days?
1st Approach: fAB(3) = (pAA)² × pAB = (2/3)² × (1/3) = 4/27
2nd Approach: we may obtain the same result by considering the following recurrence formula:

fij(n) = Σ(l ≠ j) pil × flj(n−1)
First passage probabilities - continued
fAB(1) = pAB = 1/3
fAB(2) = fAB(1) × pAA = (1/3) × (2/3) = 2/9
fAB(3) = fAB(2) × pAA = (2/9) × (2/3) = 4/27
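The recurrence can be sketched directly in code; `first_passage` is a hypothetical helper name, and the two-state matrix is the one from Ali's example:

```python
from fractions import Fraction as F

P = [[F(2, 3), F(1, 3)],
     [F(1, 2), F(1, 2)]]

def first_passage(P, i, j, n):
    """f_ij(n): probability of reaching j for the first time in exactly n steps."""
    if n == 1:
        return P[i][j]
    # f_ij(n) = sum over l != j of p_il * f_lj(n-1)
    return sum(P[i][l] * first_passage(P, l, j, n - 1)
               for l in range(len(P)) if l != j)

A, B = 0, 1
print([first_passage(P, A, B, n) for n in (1, 2, 3)])
# [Fraction(1, 3), Fraction(2, 9), Fraction(4, 27)]
```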
3rd Approach: we may as well obtain the same result by considering the following recurrence formula:

fij(n) = pij(n) − Σ(k = 1 .. n−1) fij(k) × pjj(n−k)
First passage probabilities - continued
fAB(3) = pAB(3) − fAB(1) × pBB(2) − fAB(2) × pBB(1)
with pAB(3) = 43/108, pBB(2) = 5/12, and pBB(1) = 1/2:
fAB(3) = 43/108 − (1/3)(5/12) − (2/9)(1/2) = 43/108 − 5/36 − 1/9 = 4/27
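This second recurrence can also be sketched, building the n-step matrices pij(n) on the way; the helper names are again illustrative:

```python
from fractions import Fraction as F

P = [[F(2, 3), F(1, 3)],
     [F(1, 2), F(1, 2)]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def n_step(P, n):
    """P^n, whose (i, j) entry is the n-step transition probability p_ij(n)."""
    R = P
    for _ in range(n - 1):
        R = mat_mul(R, P)
    return R

def first_passage(P, i, j, n):
    """f_ij(n) = p_ij(n) - sum_{k=1}^{n-1} f_ij(k) * p_jj(n-k)."""
    return n_step(P, n)[i][j] - sum(
        first_passage(P, i, j, k) * n_step(P, n - k)[j][j]
        for k in range(1, n))

A, B = 0, 1
print(first_passage(P, A, B, 3))  # 4/27
```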
The mean time of 1st passage from i to j is

µij = Σ(n = 1 .. ∞) n × fij(n)

Time n    Probability
1         fij(1)
2         fij(2)
...       ...
n         fij(n)
...       ...
• If the passage from i to j is not certain (fij = Σn fij(n) < 1), then the mean time of 1st passage from i to j is infinite (µij = ∞).
• If the passage from i to j is certain (fij = 1), then the mean time of 1st passage is finite and satisfies µij = 1 + Σ(l ≠ j) pil × µlj.
In the example, µAB = 1 + pAA × µAB, hence µAB = 1 / (1 − pAA) = 1 / (1 − 2/3) = 3.
On the average, Ali will use road A for 3 consecutive days before moving to road B.
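The equation µAB = 1 + pAA × µAB can be solved directly and cross-checked against the series µij = Σ n × fij(n); a minimal sketch:

```python
from fractions import Fraction as F

p_AA = F(2, 3)

# mu_AB = 1 + p_AA * mu_AB  =>  mu_AB * (1 - p_AA) = 1
mu_AB = 1 / (1 - p_AA)
print(mu_AB)  # 3

# Cross-check with a truncation of the series mu_AB = sum_n n * f_AB(n),
# where f_AB(n) = (2/3)^(n-1) * (1/3):
approx = sum(n * F(2, 3) ** (n - 1) * F(1, 3) for n in range(1, 200))
assert abs(float(approx) - 3) < 1e-10
```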
Classifying states in a Markov chain
• A path from state i to state j is a sequence of transitions starting from i and finishing on j.
• If a Markov chain has only one equivalence class, then all the states are of the same type.
π = lim(n→∞) P(n)

Moreover, as n→∞ all the rows of the matrix P^n converge to the vector π.
lim(n→∞) P(n+1) = lim(n→∞) P(n) × P, which gives
π = π × P
This last expression gives rise to a system of r equations with r unknowns. Because the system π = π × P has rank at most r − 1, we need one more independent equation. Remember that we have π1 + π2 + … + πr = 1. This is exactly what is needed.
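For a two-state chain, the system π = π × P plus the normalization equation collapses to a single balance equation πA × pAB = πB × pBA. A sketch using Ali's chain as an illustration (the two-state shortcut is an assumption of this example, not a slide formula):

```python
from fractions import Fraction as F

P = [[F(2, 3), F(1, 3)],
     [F(1, 2), F(1, 2)]]

# Balance equation pi_A * p_AB = pi_B * p_BA together with pi_A + pi_B = 1
# gives pi_A = p_BA / (p_AB + p_BA) for a 2-state chain.
p_AB, p_BA = P[0][1], P[1][0]
pi_A = p_BA / (p_AB + p_BA)
pi_B = 1 - pi_A
print(pi_A, pi_B)  # 3/5 2/5

# Verify the fixed-point equation pi = pi * P.
assert [pi_A * P[0][0] + pi_B * P[1][0],
        pi_A * P[0][1] + pi_B * P[1][1]] == [pi_A, pi_B]
```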
Example 3
To guard a triangular castle equipped with a guard at every corner, each guard must flip a coin every 5 minutes to determine which of the two neighbouring corners to occupy next. If heads, the guard goes to the left corner; if tails, to the right one. He must stay there for another 5 minutes, then flip the coin again, and so on.
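For this castle chain, each corner's transition row is (0, 1/2, 1/2) up to ordering, so the matrix is doubly stochastic and the uniform distribution (1/3, 1/3, 1/3) is stationary; a quick check:

```python
from fractions import Fraction as F

# Corners of the triangle: 0, 1, 2.  From each corner the guard moves
# to its left or right neighbour with probability 1/2 each.
P = [[F(0), F(1, 2), F(1, 2)],
     [F(1, 2), F(0), F(1, 2)],
     [F(1, 2), F(1, 2), F(0)]]

# The columns also sum to 1 (doubly stochastic matrix), so the uniform
# distribution is a fixed point of pi = pi * P.
pi = [F(1, 3)] * 3
pi_next = [sum(pi[k] * P[k][j] for k in range(3)) for j in range(3)]
assert pi_next == pi
print(pi_next)  # [Fraction(1, 3), Fraction(1, 3), Fraction(1, 3)]
```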
• An absorbing state has zero entries for all other states in the corresponding row of the transition matrix, and a 1 in the entry at the intersection of that state's row and column.
• Besides its absorbing states, an absorbing Markov chain can only contain transient states, which may communicate with each other.
Limiting Probabilities -continued
• We can reorder the states so that the first rows and columns would correspond to the transient states.
• It follows that a Markov chain having k transient states and n − k absorbing states has a transition matrix of the canonical form

P = ( Q  R )
    ( 0  I )

where Q is the k × k matrix of transitions among transient states, R is the k × (n−k) matrix of transitions from transient states to absorbing states, 0 is a zero matrix, and I is the (n−k) × (n−k) identity matrix.
Limiting Probabilities -continued
Two natural questions about absorbing chains:
(1) If the chain begins in a given transient state, what is the expected number of times each state will be entered before reaching an absorbing state? In other words, how many periods do we expect to spend in a given transient state before absorption takes place?
If we are at present in a transient state i, the expected number of periods that will be spent in transient state j before absorption is the ij-th element of the matrix (I − Q)^(−1).
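A sketch of the fundamental-matrix computation. The 4-state chain below (two transient states, two absorbing states, equal left/right moves, in the style of a small gambler's ruin) is a hypothetical example, not taken from the slides:

```python
from fractions import Fraction as F

# Hypothetical absorbing chain: transient states {1, 2}, absorbing
# states {0, 3}; from a transient state move left or right with
# probability 1/2 each.  Q holds the transitions among {1, 2}.
Q = [[F(0), F(1, 2)],
     [F(1, 2), F(0)]]

# Fundamental matrix N = (I - Q)^(-1), inverted directly for 2x2.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det],
     [-c / det, a / det]]
print(N)  # [[Fraction(4, 3), Fraction(2, 3)], [Fraction(2, 3), Fraction(4, 3)]]
```

Here N[0][0] = 4/3 means that, starting from state 1, the chain is expected to visit state 1 a total of 4/3 times before absorption.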
Limiting Probabilities -continued
(2) If a chain begins in a given transient state, what is the probability that we
end up in each absorbing state?
1. Find the expected number of periods that will be spent in each transient state.
2. Find the probability to be absorbed by each absorbing state starting from
each possible transient state.
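The two steps can be combined: the absorption probabilities are the entries of B = N × R = (I − Q)^(−1) × R. The small gambler's-ruin-style chain below is again a hypothetical illustration, not from the slides:

```python
from fractions import Fraction as F

# Hypothetical 4-state absorbing chain: transient states {1, 2},
# absorbing states {0, 3}; from a transient state the process moves
# left or right with probability 1/2 each.
Q = [[F(0), F(1, 2)],   # transient -> transient
     [F(1, 2), F(0)]]
R = [[F(1, 2), F(0)],   # transient -> absorbing (columns: state 0, state 3)
     [F(0), F(1, 2)]]

# Step 1: N = (I - Q)^(-1), expected visits to each transient state.
a, b = 1 - Q[0][0], -Q[0][1]
c, d = -Q[1][0], 1 - Q[1][1]
det = a * d - b * c
N = [[d / det, -b / det], [-c / det, a / det]]

# Step 2: B = N * R, absorption probabilities from each transient state.
B = [[sum(N[i][k] * R[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(B)  # [[Fraction(2, 3), Fraction(1, 3)], [Fraction(1, 3), Fraction(2, 3)]]
```

From state 1 (adjacent to absorbing state 0), absorption happens at state 0 with probability 2/3 and at state 3 with probability 1/3, matching the classical gambler's-ruin result.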
Example 4 - continued