
Markov Chains

Rasyid Sulaeman

November 30, 2019

1 Introduction to Markov Chains


Example: Some people buy orange juice once a week. Let A denote those who use brand A (Orange Juice A) and A′ those who use any other brand. In the initial state, 20% of people use brand A and 80% use brand A′. One day brand A runs an advertisement and its sales increase: 70% of those who used brand A′ switch to brand A, while the remaining 30% stay with brand A′; meanwhile, 90% of those who used brand A keep using it, and the other 10% switch to brand A′.
Assuming these rates hold steady from week to week, we can represent the situation with the following diagram.

Figure 1: Transition Diagram

This diagram is called a transition diagram, and it can be written mathematically as the transition matrix, with rows and columns ordered (A, A′), where entry (i, j) gives the probability of moving from state i to state j:

$$P = \begin{pmatrix} .9 & .1 \\ .7 & .3 \end{pmatrix} \tag{1}$$

The initial state distribution matrix, with columns ordered (A, A′), is

$$S_0 = \begin{pmatrix} .2 & .8 \end{pmatrix} \tag{2}$$

To find the probabilities that a person uses brand A or A′ after one week, we can deduce them with the tree diagram shown below.

Figure 2: Tree Diagram

The probability that a person uses brand A after one week is

$$\begin{aligned} P(\text{brand } A) &= (.2)(.9) + (.8)(.7) \\ &= .18 + .56 \\ &= .74 \end{aligned}$$

By the same reasoning, $P(\text{brand } A') = (.2)(.1) + (.8)(.3) = .26$, so the state matrix of sales probabilities after one week, with columns ordered (A, A′), is

$$S_1 = \begin{pmatrix} .74 & .26 \end{pmatrix} \tag{3}$$

This model is admittedly idealized, but that is how mathematical modeling works: we capture a simple version of the problem by writing it in mathematical language. The resulting process is called a Markov chain, in which the next state depends on the current state, possibly together with an external perturbation, here the advertisement. Real-world problems are messier, since they are affected by many other perturbations.
We can write the one-week update compactly as

$$[S_1] = [S_0][P]$$

$$[S_1] = \begin{pmatrix} .2 & .8 \end{pmatrix} \begin{pmatrix} .9 & .1 \\ .7 & .3 \end{pmatrix}$$

and we get the same result as in equation (3):

$$[S_1] = \begin{pmatrix} .74 & .26 \end{pmatrix}$$
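As a quick check, here is a minimal Python sketch of this computation (assuming numpy; the variable names are ours, not from the text):

```python
import numpy as np

# Transition matrix P from equation (1), rows/columns ordered (A, A')
P = np.array([[0.9, 0.1],
              [0.7, 0.3]])

# Initial state distribution from equation (2)
S0 = np.array([0.2, 0.8])

# One week later: [S1] = [S0][P]
S1 = S0 @ P
print(S1)  # [0.74 0.26], matching equation (3)
```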

After two weeks,


[S2 ] = [S1 ][P ]
[S3 ] = [S2 ][P ]
..
.

This process continues until it converges, that is, until we reach the stationary probability matrix.
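A minimal sketch of this iteration, under the same numpy assumption (the tolerance and the cap of 100 weeks are arbitrary choices of ours):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.7, 0.3]])
S = np.array([0.2, 0.8])  # initial state matrix [S0]

# Apply [S_{k+1}] = [S_k][P] until the state matrix stops changing
for week in range(1, 101):
    S_next = S @ P
    if np.allclose(S_next, S, atol=1e-10):
        break
    S = S_next

print(week, S)  # settles at the stationary matrix [0.875 0.125]
```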

2 Regular Markov Chains


A transition matrix P is regular if some power of P has only positive entries. A Markov chain is regular if its transition matrix is regular.
Examples:

$$P = \begin{pmatrix} .3 & .7 \\ .1 & .9 \end{pmatrix} \quad \text{(regular: all entries are already positive)}$$

$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \quad \text{(not regular: its powers alternate between this matrix and the identity, so the zero entries never disappear)}$$

$$P = \begin{pmatrix} .2 & .8 \\ 1 & 0 \end{pmatrix} \quad \text{(regular: } P^2 \text{ has only positive entries)}$$
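This check is easy to automate. Below is one possible Python sketch (the function name is_regular and the power cap are our own choices; for small matrices a modest cap suffices):

```python
import numpy as np

def is_regular(P, max_power=50):
    """Return True if some power of P up to max_power has only positive entries."""
    Q = P.copy()
    for _ in range(max_power):
        if (Q > 0).all():
            return True
        Q = Q @ P
    return False

print(is_regular(np.array([[0.3, 0.7], [0.1, 0.9]])))  # True
print(is_regular(np.array([[0.0, 1.0], [1.0, 0.0]])))  # False: powers alternate
print(is_regular(np.array([[0.2, 0.8], [1.0, 0.0]])))  # True: P^2 is positive
```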

3 Properties of Regular Markov Chains


Let P be the transition matrix for a regular Markov chain. Then:
a). There is a unique stationary matrix [S] that can be found by solving the equation [S][P] = [S], subject to the entries of [S] summing to 1.
b). Given any initial state matrix [S0], the state matrices [Sk] approach the stationary matrix [S].
c). The matrices [P^k] approach a limiting matrix P̄, where each row of P̄ is equal to the stationary matrix [S].
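To illustrate property a) with the matrix from equation (1), the sketch below solves [S][P] = [S] together with the normalization condition (casting it as a least-squares problem is just one convenient way to impose both at once):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.7, 0.3]])

# [S][P] = [S] means S(P - I) = 0; transposing gives (P - I)^T S^T = 0.
# Append a row of ones to enforce the normalization sum(S) = 1.
n = P.shape[0]
A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0

S, *_ = np.linalg.lstsq(A, b, rcond=None)
print(S)  # [0.875 0.125] -- the unique stationary matrix for this P
```

Consistent with property b), the state (.74, .26) found after one week in section 1 has already moved from (.2, .8) toward this stationary matrix.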
