MARKOV CHAIN

“A Markov process is a process that describes the evolution of a new state from an old state.”
new state = f(old state, noise)
Lecture outline
 Checkout counter example
 N-step transition probabilities
 Classification of states
Example: Consider a checkout counter where the queue can hold up to 10 persons. In each time slot the following possibilities exist (a transition-matrix sketch for this chain follows the list).

Let p be the probability of a customer arrival and q the probability that the customer in service departs.
 Customer arrival and no departure: p(1 - q)
 Service completion and no customer arrival: q(1 - p)
 Both an arrival and a departure: pq
 Nothing happens: (1 - p)(1 - q)
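As a sketch (the function name, the parameter values, and the boundary rules at an empty or full queue are my own modelling choices, not from the lecture), the transition matrix of this queue chain can be written down in Python:

import numpy as np

def build_queue_matrix(m=10, p=0.3, q=0.5):
    """Transition matrix over queue lengths 0..m.

    p = probability of a customer arrival in a slot
    q = probability the customer in service departs in a slot
    """
    P = np.zeros((m + 1, m + 1))
    for i in range(m + 1):
        up = p * (1 - q)      # arrival and no departure
        down = q * (1 - p)    # departure and no arrival
        if i == 0:
            up, down = p, 0.0   # empty queue: no one in service to depart
        if i == m:
            up = 0.0            # full queue: assume new arrivals are turned away
        P[i, min(i + 1, m)] += up
        P[i, max(i - 1, 0)] += down
        P[i, i] += 1.0 - up - down   # pq and (1-p)(1-q) both leave the length unchanged
    return P

P = build_queue_matrix()
assert np.allclose(P.sum(axis=1), 1.0)   # every row is a probability distribution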


Finite state Markov chains
Let X_n = state after n transitions; it belongs to a finite set {1, ..., m}.
X_0 is either given or random.
X_0 (initial) -> ... -> X_n (final)

Markov transition

The state X_0 can reach the final state X_n after n transitions, but the intermediate transitions along the way are random.
Markov property/assumption
The Markov assumption is that the next state depends only on the current state, not on the past states:
p_ij = P(X_{n+1} = j | X_n = i), where i is the current state
     = P(X_{n+1} = j | X_n = i, X_{n-1}, ..., X_0)
This says that once we know the current state, all past information can be neglected.
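To see the property concretely, one can simulate a small chain and check that conditioning on an extra past state does not change the one-step estimate. This simulation sketch (with made-up transition probabilities) is not part of the lecture:

import random

P = {1: {1: 0.5, 2: 0.5}, 2: {1: 0.2, 2: 0.8}}   # illustrative one-step probabilities

def step(i):
    return 1 if random.random() < P[i][1] else 2

random.seed(0)
path = [1]
for _ in range(200_000):
    path.append(step(path[-1]))

# Estimate P(X_{n+1} = 1 | X_n = 1), then add the extra condition X_{n-1} = 2.
plain = [path[t + 1] for t in range(1, len(path) - 1) if path[t] == 1]
deep  = [path[t + 1] for t in range(1, len(path) - 1) if path[t] == 1 and path[t - 1] == 2]
print(sum(v == 1 for v in plain) / len(plain))   # ~0.5
print(sum(v == 1 for v in deep) / len(deep))     # also ~0.5: the extra history changes nothing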
Model specification:
 Identify the possible states
 Identify the possible transitions
 Identify the transition probabilities
Example:
Consider the case of a projectile. To predict its future position, both its current position and velocity are required. If either piece of information is missing, we need past positions of the projectile to reconstruct the trajectory and find the future position. So when we select a state variable we have to collect all information that is relevant to the future state; the state need not record every transition that led to it.



N-step transition probabilities
r_ij(n) = P(X_n = j | X_0 = i)
For zero transitions:
r_ij(0) = 1 if i = j
        = 0 if i != j
For a single transition:
r_ij(1) = p_ij
N-step transition diagram

Consider the transition diagram (omitted here), in which the chain passes through some intermediate state k on the way from i to j. For the probability of travelling from i to j we can use the following recursive equation, where m is the number of states:

           m
r_ij(n) =  ∑  r_ik(n-1) p_kj
          k=1
For a random initial state:

              m
P(X_n = j) =  ∑  P(X_0 = i) r_ij(n)
             i=1


Equivalently, conditioning on the first transition instead of the last:

           m
r_ij(n) =  ∑  p_ik r_kj(n-1)
          k=1
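Both recursions say the same thing in matrix form: the table of all r_ij(n) is the n-th power of the transition matrix. A direct translation into code (a sketch; the function name n_step is mine):

import numpy as np

def n_step(P, n):
    """All r_ij(n) at once, using the recursion r(n) = r(n-1) · P.

    P is the one-step transition matrix; r(0) is the identity matrix,
    matching r_ij(0) = 1 if i = j and 0 otherwise.
    """
    r = np.eye(P.shape[0])     # r_ij(0)
    for _ in range(n):
        r = r @ P              # r_ij(n) = sum over k of r_ik(n-1) p_kj
    return r

# For a random initial state with distribution pi0,
# P(X_n = j) is component j of pi0 @ n_step(P, n).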


Example:

Consider a two-state chain; from the n = 1 column the one-step probabilities are p11 = 0.5, p12 = 0.5, p21 = 0.2, p22 = 0.8.

          n=0    n=1    n=2    n=100   n=101
r11(n)     1     0.5    0.35    2/7     2/7
r12(n)     0     0.5    0.65    5/7     5/7
r21(n)     0     0.2    0.26    2/7     2/7
r22(n)     1     0.8    0.74    5/7     5/7

For example, r21(2) = p21 r11(1) + p22 r21(1) = 0.2·0.5 + 0.8·0.2 = 0.26.

That r11(n) and r21(n) converge to the same value shows that, in the long run, the probabilities no longer depend on the initial state. What really happens is that the randomness injected during the transitions washes out the information about the initial state.
The long-run probability of being in state 2 is higher because state 2 is "stickier" than state 1: the chain stays there with probability 0.8, versus 0.5 for state 1.
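These numbers can be checked with the recursion above, or directly with matrix powers, reading the one-step matrix off the n = 1 column:

import numpy as np

P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
for n in (1, 2, 100, 101):
    print(n, np.linalg.matrix_power(P, n))
# n = 2 gives [[0.35, 0.65], [0.26, 0.74]]; by n = 100 both rows
# have converged to [2/7, 5/7] ~ [0.2857, 0.7143], whatever the start.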
Exceptions to convergence: periodicity

If n is odd, r22(n) = 0; if n is even, r22(n) = 1. The sequence alternates forever and never converges.
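The simplest such chain flips deterministically between the two states; since the original diagram is missing, the matrix below is an assumption consistent with the values quoted above:

import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])   # always jump to the other state
for n in range(1, 7):
    print(n, np.linalg.matrix_power(P, n)[1, 1])   # r22(n): 0, 1, 0, 1, ... never settles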
Dependence of the limiting probabilities on the initial state

As n → ∞:
r11(n) = 1
r31(n) = 0
r21(n) → 0.5
Here the limits differ according to the starting state, so the initial state does matter.
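These limits are consistent with a chain in which states 1 and 3 are absorbing and state 2 can fall either way; the exact one-step probabilities below are an assumption, since the diagram is missing from the notes:

import numpy as np

P = np.array([[1.00, 0.00, 0.00],    # state 1: absorbing
              [0.25, 0.50, 0.25],    # state 2: may drift to 1 or to 3
              [0.00, 0.00, 1.00]])   # state 3: absorbing
R = np.linalg.matrix_power(P, 200)
print(R[:, 0])   # r11 ~ 1.0, r21 ~ 0.5, r31 ~ 0.0: the limit depends on the start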
Recurrent states and transient states
A recurrent state is a state to which we can always return, no matter where the chain travels after leaving it; i.e. there is always a return path to that state.
All states that are not recurrent are transient states.
For the limiting probabilities not to depend on the initial state, there must not be more than one recurrent class of states.
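For a finite chain this classification can be automated: a state i is recurrent exactly when every state reachable from i can reach i back. A reachability sketch (the function names are mine, not from the lecture):

from collections import deque

def reachable(adj, s):
    """All states reachable from s along possible transitions (adjacency dict)."""
    seen, frontier = {s}, deque([s])
    while frontier:
        u = frontier.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                frontier.append(v)
    return seen

def classify(adj):
    """Label each state 'recurrent' or 'transient'."""
    return {i: 'recurrent' if all(i in reachable(adj, j) for j in reachable(adj, i))
               else 'transient'
            for i in adj}

# The three-state example above: states 1 and 3 are recurrent, state 2 is transient.
print(classify({1: [1], 2: [1, 2, 3], 3: [3]}))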