
Kurdistan Regional Government of Iraq

Ministry of Higher Education and Scientific Research


University of Sulaimani
College of Administration and Economics
2019 - 2020
Department of: Statistics and Informatics
Stage: 4
Subject: Stochastic Process
Code number: ( )

Final date to send the report: 14/6/2020

Report about:
(Markov Chain and Markov Process)

Prepared by: Karwan Muhamad Mirza

Lecturer's name: Dr. Mohammad Mahmood

Markov chain and Markov processes
Introduction
[Figure: A diagram representing a two-state Markov process, with the states labelled E and A. Each number represents the probability of the process moving from one state to another, in the direction indicated by the arrow. For example, if the process is in state A, the probability that it changes to state E is 0.4, while the probability that it remains in state A is 0.6.]
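To make this concrete, the sketch below writes the two-state chain as a transition probability matrix and simulates a few steps with NumPy. The probabilities out of state A (0.6 and 0.4) are the ones given above; the probabilities out of state E (here 0.7 and 0.3) are assumed purely for illustration, since they are not given in the figure.

import numpy as np

# Transition matrix, states ordered [A, E].
# Row A (0.6 stay in A, 0.4 to E) comes from the description above;
# row E (0.7 to A, 0.3 stay in E) is an assumed value for illustration.
P = np.array([[0.6, 0.4],
              [0.7, 0.3]])

rng = np.random.default_rng(0)
state, path = 0, []                       # start in state A (index 0)
for _ in range(10):
    state = rng.choice(2, p=P[state])     # next state depends only on the current state
    path.append("AE"[state])
print("".join(path))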
A Markov chain is a stochastic model describing a sequence of possible events in
which the probability of each event depends only on the state attained in the
previous event. In continuous time, it is known as a Markov process. It is named
after the Russian mathematician Andrey Markov.
Markov chains have many applications as statistical models of real-world
processes, such as studying cruise control systems in motor vehicles, queues or
lines of customers arriving at an airport, currency exchange rates and animal
population dynamics.
Markov processes are the basis for general stochastic simulation methods known as
Markov chain Monte Carlo, which are used for simulating sampling from complex
probability distributions, and have found application in Bayesian statistics and
artificial intelligence.
The adjective Markovian is used to describe something that is related to a Markov
process.
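As a rough sketch of the Markov chain Monte Carlo idea (a random-walk Metropolis sampler written from scratch here, not taken from any particular library), the successive states, repeating the current value whenever a proposal is rejected, form a Markov chain whose equilibrium distribution is the target distribution, known only up to a normalizing constant.

import numpy as np

def metropolis(log_target, x0, n_steps=5000, step=1.0, seed=0):
    """Random-walk Metropolis: a Markov chain whose equilibrium is the target density."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + step * rng.normal()
        # Accept with probability min(1, target(proposal) / target(x)).
        if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return np.array(samples)

# Example: sample from a standard normal density (log-density up to a constant).
draws = metropolis(lambda x: -0.5 * x**2, x0=0.0)
print(draws.mean(), draws.std())          # roughly 0 and 1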

Markovian processes
A stochastic process is called Markovian (after the Russian mathematician Andrey
Andreyevich Markov) if at any time t the conditional probability of an arbitrary
future event given the entire past of the process—i.e., given X(s) for all s ≤ t—
equals the conditional probability of that future event given only X(t). Thus, in
order to make a probabilistic statement about the future behaviour of a Markov
process, it is no more helpful to know the entire history of the process than it is to
know only its current state. The conditional distribution of X(t + h) given X(t) is
called the transition probability of the process. If this conditional distribution does
not depend on t, the process is said to have "stationary" transition probabilities. A
Markov process with stationary transition probabilities may or may not be a
stationary process. If Y1, Y2,… are
independent random variables and X(t) = Y1 +⋯+ Yt, the stochastic process X(t) is
a Markov process. Given X(t) = x, the conditional probability that X(t + h) belongs
to an interval (a, b) is just the probability that Y_{t+1} + ⋯ + Y_{t+h} belongs to the
translated interval (a − x, b − x); and because of independence this conditional
probability would be the same if the values of X(1),…, X(t − 1) were also given. If
the Ys are identically distributed as well as independent, this transition probability
does not depend on t, and then X(t) is a Markov process with stationary transition
probabilities. Sometimes X(t) is called a random walk, but this terminology is not
completely standard. Since both the Poisson process and Brownian motion are
created from random walks by simple limiting processes, they, too, are Markov
processes with stationary transition probabilities. The Ornstein-Uhlenbeck process, defined as the solution to a certain stochastic differential equation, is also a Markov process with stationary transition probabilities.
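A small simulation sketch of this construction, with the Ys assumed to be independent ±1 steps purely for illustration, shows the Markov property directly: given X(t) = x, the value X(t + h) is just x plus the next h steps, regardless of how the walk reached x.

import numpy as np

rng = np.random.default_rng(1)

# X(t) = Y1 + ... + Yt with i.i.d. steps (here +1/-1 with equal probability, an assumed choice).
Y = rng.choice([-1, 1], size=1000)
X = np.cumsum(Y)

# The conditional distribution of X(t+h) given X(t) = x is that of x + Y_{t+1} + ... + Y_{t+h}:
# it depends on the past only through the current value x.
t, h = 500, 10
x = X[t - 1]                              # X(t)
increment = Y[t:t + h].sum()              # Y_{t+1} + ... + Y_{t+h}
print(X[t + h - 1] == x + increment)      # True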

Markov chains and Markov processes


Important classes of stochastic processes are Markov chains and Markov
processes. A Markov chain is a discrete-time process for which the future
behaviour, given the past and the present, only depends on the present and
not on the past. A Markov process is the continuous-time version of a
Markov chain. Many queueing models are in fact Markov processes. This
chapter gives a short introduction to Markov chains and Markov processes
focussing on those characteristics that are needed for the modelling and
analysis of queueing problems.

Markov chains
A Markov chain, studied at the discrete time points 0, 1, 2, . . ., is
characterized by a set of states S and the transition probabilities pij
between the states. Here, pij is the probability that the Markov chain is in state j
at the next time point, given that it is in state i at the present time point.
The matrix P with elements pij is called the transition probability
matrix of the Markov chain. Note that the definition of the pij implies that
the row sums of P are equal to 1. Under the conditions that
• all states of the Markov chain communicate with each other (i.e., it is
possible to go from each state, possibly in more than one step, to
every other state),
• the Markov chain is not periodic (a periodic Markov chain is a chain
in which, e.g., you can only return to a state in an even number of
steps),
• the Markov chain does not drift away to infinity,

the probability pi(n) that the system is in state i at time point n converges
to a limit πi as n tends to infinity. These limiting probabilities, or
equilibrium probabilities, can be computed from a set of so-called balance
equations. The balance equations balance the probability of leaving and
entering a state in equilibrium. This leads to the equations
πi Σ_{j≠i} pij = Σ_{j≠i} πj pji,    i ∈ S,

or

πi = Σ_{j∈S} πj pji,    i ∈ S.
In vector-matrix notation this becomes, with π the row vector with
elements πi,
π = πP. (1)

Together with the normalization equation

Σ_{i∈S} πi = 1,
the solution of the set of equations (1) is unique.
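As a numerical sketch (with an assumed 3-state transition matrix, chosen only for illustration), the system π = πP together with the normalization can be solved by replacing one balance equation with the normalization equation:

import numpy as np

# Assumed 3-state transition probability matrix (each row sums to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

# pi = pi P  is equivalent to  (P^T - I) pi^T = 0; replace one equation by sum(pi) = 1.
n = P.shape[0]
A = P.T - np.eye(n)
A[-1, :] = 1.0
b = np.zeros(n); b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)            # equilibrium probabilities
print(pi @ P)        # equals pi again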
Markov processes
In a Markov process we also have a discrete set of states S. However, the
transition behaviour is different from that in a Markov chain. In each state
there are a number of possible events that can cause a transition. The
event that causes a transition from state
i to j, where j ≠ i, takes place after an exponential amount of time, say
with parameter
qij. As a result, in this model transitions take place at random points in
time. According
to the properties of exponential random variables we
have:

• In state i a transition takes place after an exponential amount of time
with parameter Σ_{j≠i} qij.
• The system makes a transition to state j with probability

pij := qij / Σ_{k≠i} qik.

Define
qii := −Σ_{j≠i} qij,    i ∈ S.
The matrix Q with elements qij is called the generator of the Markov
process. Note that the definition of the qii implies that the row sums of Q
are 0. Under the conditions that
• all states of the Markov process communicate with each other,

• the Markov process does not drift away to infinity,

the probability pi(t) that the system is in state i at time t converges to a limit
pi as t tends to infinity. Note that, in contrast to the case of a discrete-time
Markov chain, we do not have to worry about periodicity here. The randomness
of the time the system spends in each state guarantees that the probability
pi(t) converges to the limit pi. The limiting probabilities, or equilibrium
probabilities, can again be computed from the balance equations. The
balance equations now balance the flow out of a state and the flow into
that state. The flow is the mean number of transitions per time unit. If the
system is in state i, then events that cause the system to make a transition
to state j occur with a frequency or rate qij. So the mean number of
transitions per time unit from i to j is equal to piqij. This leads to the
balance equations
pi Σ_{j≠i} qij = Σ_{j≠i} pj qji,    i ∈ S,

or

0 = Σ_{j∈S} pj qji,    i ∈ S.
In vector-matrix notation this becomes, with p the row vector with
elements pi,

0 = pQ. (2)
Together with the normalization equation

Σ_{i∈S} pi = 1,

the solution of the set of equations (2) is unique.
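The continuous-time case works the same way. The sketch below builds an assumed generator Q (off-diagonal rates qij, diagonal entries qii = −Σ_{j≠i} qij so that the row sums are 0) and solves 0 = pQ together with the normalization:

import numpy as np

# Assumed off-diagonal transition rates qij for a 3-state Markov process.
Q = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])
np.fill_diagonal(Q, -Q.sum(axis=1))   # qii = -(sum of the other rates in row i)

# 0 = p Q  is equivalent to  Q^T p^T = 0; replace one equation by sum(p) = 1.
n = Q.shape[0]
A = Q.T.copy()
A[-1, :] = 1.0
b = np.zeros(n); b[-1] = 1.0
p = np.linalg.solve(A, b)
print(p)             # equilibrium probabilities
print(p @ Q)         # numerically the zero vector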

An equivalent Markov chain


The equilibrium distribution p may also be obtained from an equivalent
Markov chain via an elementary transformation. Let ∆ be any real number
satisfying
0 < ∆ ≤ 1 / max_i(−qii).    (3)

Then P defined as

P = I + ∆Q
is a stochastic matrix (verify). The Markov chain with transition
probability matrix P has exactly the same equilibrium distribution p as the
original Markov process, since p satisfies pQ = 0 if and only if pP = p(I +
∆Q) = p. Note that this transformation also works for infinite-state Markov
processes as long as sup_i(−qii) is finite.
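A quick numerical check of this transformation, again with an assumed 3-state generator, takes ∆ = 1/max_i(−qii), forms P = I + ∆Q and verifies that P is a stochastic matrix with the same equilibrium distribution p as the process:

import numpy as np

# Assumed generator of a 3-state Markov process (row sums are 0).
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  1.0, -3.0]])

delta = 1.0 / max(-Q.diagonal())      # satisfies 0 < delta <= 1/max_i(-qii)
P = np.eye(3) + delta * Q             # entries are nonnegative and each row sums to 1

# Equilibrium of the process: solve 0 = pQ with sum(p) = 1.
A = Q.T.copy(); A[-1, :] = 1.0
p = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

print(P.sum(axis=1))                  # all ones, so P is a stochastic matrix
print(np.allclose(p @ P, p))          # True: p is also the equilibrium of the chain P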

Uniformization
The transformation in the previous section also has a probabilistic
interpretation. Consider the jump chain of the process, with transition
probabilities pij = qij/(−qii) for j ≠ i and pii = 0. For this jump chain the
sojourn time in state i is exactly one time unit, whereas for Q it is
exponential with mean 1/(−qii). But this is the only difference; the jump
probabilities are the same. Clearly, if the mean 1/(−qii) is the same for
each i, then Q and the jump chain have exactly the same equilibrium
distribution. Equal mean sojourn times can be enforced by introducing
fictitious transitions in each state i, which is called uniformization.
Let ∆ satisfy (3), and introduce a (fictitious) transition from state i to
itself with rate qii + 1/∆. Then the total outgoing rate from state i becomes
−qii + (qii + 1/∆) = 1/∆, and thus the mean sojourn time in state i is ∆; now it is the
same for all states i. Hence the equilibrium distribution of the Markov
process Q is the same as that of the Markov chain P with jump
probabilities pij = ∆qij + δij, where δij = 1 if i = j and 0 otherwise.

The embedded Markov chain


An interesting way of analyzing a Markov process is through the
embedded Markov chain. If we consider the Markov process only at the
moments at which the state of the system changes, and we number
these instants 0, 1, 2, etc., then we get a Markov chain. This
Markov chain has the transition probabilities pij given by pij = qij / Σ_{k≠i} qik
for j ≠ i and
pii = 0. The equilibrium probabilities πi of this embedded Markov chain
satisfy

πi = Σ_{j∈S} πj pji.

Then the equilibrium probabilities of the Markov process can be


computed by multiplying the equilibrium probabilities of the embedded
chain by the mean times spent in the various states. This leads to

pi = C πi / Σ_{j≠i} qij,
where the constant C is determined by the normalization condition. One
easily verifies that these probabilities indeed satisfy 0 = pQ.
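The sketch below carries out this computation for an assumed 3-state generator: it builds the embedded chain, solves for its equilibrium probabilities πi, multiplies them by the mean sojourn times 1/Σ_{j≠i} qij and normalizes, and checks that the result satisfies 0 = pQ:

import numpy as np

# Assumed generator of a 3-state Markov process.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -4.0,  3.0],
              [ 2.0,  1.0, -3.0]])

rates = -Q.diagonal()                 # total outgoing rate of state i: sum_{j!=i} qij
P_emb = Q / rates[:, None]            # embedded chain: p_ij = q_ij / sum_{k!=i} q_ik for j != i
np.fill_diagonal(P_emb, 0.0)          # p_ii = 0

# Equilibrium of the embedded chain: pi = pi P_emb with sum(pi) = 1.
A = P_emb.T - np.eye(3); A[-1, :] = 1.0
pi = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# Multiply by the mean sojourn times 1/rates and renormalize: p_i = C pi_i / sum_{j!=i} q_ij.
p = pi / rates
p /= p.sum()
print(p, np.allclose(p @ Q, 0.0))     # the resulting p indeed satisfies 0 = p Q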

