
Markov Chains

J. M. Akinpelu
 
  

A stochastic process is a collection of random variables {X_t, t ∈ T}.

- The index t is often interpreted as time.
- X_t is called the state of the process at time t.
- The set T is called the index set of the process.
  - If T is countable, the stochastic process is said to be a discrete-time process.
  - If T is an interval of the real line, the stochastic process is said to be a continuous-time process.
- The state space E is the set of all possible values that the random variables X_t can assume.

Stochastic Processes

Examples:
1. Winnings of a gambler at successive games of blackjack; T = {1, 2, …, 10}, state space = set of integers.
2. Temperatures in Laurel, MD on September 17, 2008; T = [12 a.m. Wed., 12 a.m. Thurs.), state space = set of real numbers.

Markov Chains

A stochastic process {X_n, n = 0, 1, 2, …} is called a Markov chain provided that

$$P\{X_{n+1} = j \mid X_n = i_n, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P\{X_{n+1} = j \mid X_n = i_n\}$$

for all states i_0, i_1, …, i_{n−1}, i_n, j and all n ≥ 0.

Markov Chains

We restrict ourselves to Markov chains such that the conditional probabilities

$$p_{ij} = P\{X_{n+1} = j \mid X_n = i\}$$

are independent of n, and for which E ⊆ {0, 1, 2, …} (which is equivalent to saying that E is finite or countable). Such a Markov chain is called homogeneous.
Markov Chains

Since
- probabilities are non-negative, and
- the process must make a transition into some state at each time in T,

we have

$$p_{ij} \ge 0, \quad i, j \in E; \qquad \sum_{j \in E} p_{ij} = 1, \quad i \in E.$$

We can arrange the probabilities p_ij into a square matrix P = {p_ij} called the transition matrix.

Markov Matrices

Let P be a square matrix of entries p_ij defined for all i, j ∈ E. Then P is a Markov matrix provided that

- for each i, j ∈ E, p_ij ≥ 0;
- for each i ∈ E, $\sum_{j \in E} p_{ij} = 1$.

Thus the transition matrix for a Markov chain is a Markov matrix.
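The two defining properties are easy to check numerically. Below is a minimal sketch (added here, not part of the original slides), using NumPy; the tolerance `tol` is an illustrative choice.

```python
import numpy as np

def is_markov_matrix(P, tol=1e-9):
    """Check the defining properties of a Markov (stochastic) matrix:
    a square matrix with non-negative entries whose rows each sum to 1."""
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        return False                                  # must be square
    if (P < -tol).any():                              # p_ij >= 0
        return False
    return bool(np.allclose(P.sum(axis=1), 1.0, atol=tol))  # rows sum to 1

print(is_markov_matrix([[1/2, 1/4, 1/4],
                        [1/2, 0,   1/2],
                        [1/4, 1/4, 1/2]]))  # True
```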
Markov Matrices

Example:

$$P = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}$$

This Markov matrix corresponds to a 3-state Markov chain.
Transition Probabilities

It follows from the definition of a Markov chain that, for example,

$$P\{X_9 = j, X_8 = k \mid X_7 = i\} = P\{X_9 = j \mid X_8 = k, X_7 = i\}\, P\{X_8 = k \mid X_7 = i\}$$
$$= P\{X_9 = j \mid X_8 = k\}\, P\{X_8 = k \mid X_7 = i\} = p_{ik}\, p_{kj}.$$
Transition Probabilities

Hence we have the following Proposition. For any n = 0, 1, 2, …, m = 1, 2, 3, …, and states i_n, i_{n+1}, …, i_{n+m} ∈ E,

$$P\{X_{n+1} = i_{n+1}, \ldots, X_{n+m} = i_{n+m} \mid X_n = i_n\} = p_{i_n i_{n+1}}\, p_{i_{n+1} i_{n+2}} \cdots p_{i_{n+m-1} i_{n+m}}.$$

Transition Probabilities

Now note that

$$P\{X_2 = j \mid X_0 = i\} = \sum_{k \in E} P\{X_2 = j \mid X_1 = k, X_0 = i\}\, P\{X_1 = k \mid X_0 = i\}$$
$$= \sum_{k \in E} P\{X_2 = j \mid X_1 = k\}\, P\{X_1 = k \mid X_0 = i\} = \sum_{k \in E} p_{ik}\, p_{kj} = (P^2)_{ij}.$$


Transition Probabilities

Example:

$$P = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}, \qquad P^2 = \begin{pmatrix} 7/16 & 3/16 & 3/8 \\ 3/8 & 1/4 & 3/8 \\ 3/8 & 3/16 & 7/16 \end{pmatrix}$$

Hence, P{X_2 = 2 | X_0 = 3} = P²(3, 2) = 3/16.
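As a quick numerical check (a sketch added here, not in the original slides), NumPy reproduces P² and the entry P²(3, 2); note the shift to 0-based indexing.

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])

P2 = P @ P                 # two-step transition matrix
print(P2)                  # [[0.4375 0.1875 0.375 ] ...]
print(P2[2, 1])            # P{X_2 = 2 | X_0 = 3} = 3/16 = 0.1875
```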

Transition Probabilities

In general, for all n, m ≥ 0 and i, j ∈ E,

$$P\{X_{n+m} = j \mid X_m = i\} = (P^n)_{ij} =: p^{(n)}_{ij}.$$

This leads to the Chapman-Kolmogorov equations:

$$p^{(n+m)}_{ij} = \sum_{k \in E} p^{(n)}_{ik}\, p^{(m)}_{kj} \quad \text{for all } n, m \ge 0 \text{ and all } i, j \in E.$$

Transition Probabilities

Also note that if π(0) is the probability distribution of the states at time 0, i.e.,

$$\pi_j(0) = P\{X_0 = j\},$$

then the probability distribution at time n is

$$\pi(n) = \pi(0)\, P^n.$$
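A short sketch (added, not in the original) of propagating an initial distribution as a row vector:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
pi0 = np.array([1.0, 0.0, 0.0])              # start in state 1

pi10 = pi0 @ np.linalg.matrix_power(P, 10)   # pi(10) = pi(0) P^10
print(pi10)                                  # already close to (2/5, 1/5, 2/5)
```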




    



Classification of States

Fix state j, and let τ_j be the time of the first visit to state j.

- State j is called recurrent if P{τ_j < ∞ | X_0 = j} = 1.
- A recurrent state j is called recurrent positive if E[τ_j | X_0 = j] < ∞.
- Otherwise, it is called recurrent null.


    



Classification of States

- A recurrent state j is said to be periodic with period δ if p^(n)_jj = 0 whenever n is not divisible by δ, and δ is the largest integer with this property. A state with period 1 is said to be aperiodic.
- Recurrent positive, aperiodic states are called ergodic. A Markov chain is called ergodic if all of its states are ergodic.




    



Classification of States

- State j is called transient if P{τ_j < ∞ | X_0 = j} < 1.
- There is then a positive probability of never returning to that state.


    



Classification of States

- A state j is accessible from state i if there exists n ≥ 0 such that p^(n)_ij > 0.
- A set of states is said to be closed if no state outside it is accessible from any state in it.
- A state forming a closed set by itself is called an absorbing state.


    




Classification of States

- A closed set is irreducible if no proper subset of it is closed.
- A Markov chain is called irreducible if its only closed set is the set of all states.




    

[Figure: classification tree of states — recurrent vs. transient, with recurrent states subdivided into positive and null, and states also classified as absorbing vs. non-absorbing.]


    

Classification of States

State j is
- absorbing if and only if p_jj = 1;
- recurrent if and only if $\sum_{n=1}^{\infty} p^{(n)}_{jj} = \infty$;
- transient if and only if $\sum_{n=1}^{\infty} p^{(n)}_{jj} < \infty$.
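To see the series criterion in action, here is a small numerical sketch (added; the two-state chain is a toy example chosen for illustration): partial sums of p^(n)_jj grow without bound for a recurrent state but converge for a transient one.

```python
import numpy as np

# State 0 is absorbing (hence recurrent); state 1 is transient.
P = np.array([[1.0, 0.0],
              [0.5, 0.5]])

sums = np.zeros(2)
Pn = np.eye(2)
for n in range(1, 200):
    Pn = Pn @ P              # P^n
    sums += np.diag(Pn)      # accumulate p^(n)_jj

print(sums)  # ~[199, 1]: diverges for state 0, converges to 1 for state 1
```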


    

Classification of States

Theorem. In an irreducible Markov chain, either all states are transient, all states are recurrent null, or all states are recurrent positive.

Theorem. In a finite-state Markov chain, all recurrent states are positive, and it is impossible that all states are transient. If the Markov chain is also irreducible, then it has no transient states.


    

Classification of States

Decomposition Theorem. In a Markov chain, the states can be divided, in a unique manner, into irreducible sets of recurrent states and a set of transient states.


    

Classification of States

$$P = \begin{pmatrix} P_1 & & & 0 \\ & P_2 & & \\ & & \ddots & \\ Q_1 & Q_2 & \cdots & Q \end{pmatrix}$$

Each P_k corresponds to an irreducible set of recurrent states; Q corresponds to the transient states.



    

Example: Consider a Markov chain with state space {1, 2, 3, 4, 5} and transition matrix

$$P = \begin{pmatrix} 1/2 & 0 & 1/2 & 0 & 0 \\ 0 & 1/4 & 0 & 3/4 & 0 \\ 0 & 0 & 1/3 & 0 & 2/3 \\ 1/4 & 1/2 & 0 & 1/4 & 0 \\ 1/3 & 0 & 1/3 & 0 & 1/3 \end{pmatrix}$$

What is the classification of each state?


    

It is useful to draw a graph with the states as vertices and a directed edge from i to j if p(i, j) > 0.

[Figure: transition diagram for the matrix P above, on states 1-5.]
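This graph view can be automated; below is a sketch (added here, not in the original) that finds the set of states accessible from each state by searching over the edges with p(i, j) > 0, from which closed sets can be read off.

```python
import numpy as np

P = np.array([[1/2, 0,   1/2, 0,   0  ],
              [0,   1/4, 0,   3/4, 0  ],
              [0,   0,   1/3, 0,   2/3],
              [1/4, 1/2, 0,   1/4, 0  ],
              [1/3, 0,   1/3, 0,   1/3]])

def accessible_from(P, i):
    """States reachable from i along edges with p(i, j) > 0."""
    seen, stack = {i}, [i]
    while stack:
        u = stack.pop()
        for v in np.nonzero(P[u] > 0)[0]:
            if int(v) not in seen:
                seen.add(int(v)); stack.append(int(v))
    return seen

# 1-based labels as on the slide. {1, 3, 5} is closed: nothing outside
# that set is reachable from inside it.
for i in range(5):
    print(i + 1, sorted(s + 1 for s in accessible_from(P, i)))
```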




    

Classification of States

Note that:
- {1, 3, 5} and {1, 2, 3, 4, 5} are closed.
- {1, 3, 5} is irreducible.

Restricting P to {1, 3, 5} gives

$$A = \begin{pmatrix} 1/2 & 1/2 & 0 \\ 0 & 1/3 & 2/3 \\ 1/3 & 1/3 & 1/3 \end{pmatrix}$$

A is a Markov matrix corresponding to the state space {1, 3, 5}. Since the set is finite, all states in the set are recurrent positive.


    

Classification of States

If the states are reordered

$$\{1, 3, 5, 2, 4\} = \{1', 2', 3', 4', 5'\},$$

then the transition matrix becomes

$$P' = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 1/3 & 2/3 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 1/4 & 3/4 \\ 1/4 & 0 & 0 & 1/2 & 1/4 \end{pmatrix}$$

1′, 2′, and 3′ are recurrent positive; 4′ and 5′ are transient.
   

  ". In an irreducible aperiodic
homogenous Markov chain,

3 0 lim 3 (
) 0 lim 3 (0) 


2
2

always e ists and is independent of the initial


state probability distribution 3 (0) .

   

Moreover, either:
a. all states are transient or all states are recurrent null, in which case π_j = 0 for all j, or
b. all states are recurrent positive (i.e., ergodic), in which case π_j > 0 for all j and the π_j are uniquely determined by the following equations:

$$\pi_j = \sum_i \pi_i\, p_{ij}, \qquad \sum_j \pi_j = 1.$$

This can also be written as π = πP, π·1 = 1.


Limiting Probabilities

- Furthermore, if m_j is the expected time between two returns to j, then
$$\pi_j = \frac{1}{m_j}, \qquad \text{equivalently } m_j = \frac{1}{\pi_j}.$$
- In other words, the limiting probability of being in state j is equal to the rate at which j is visited.
    
Limiting Probabilities

It follows that for recurrent positive states, the limiting probabilities have two interpretations:
- the limiting distribution of the state at time n;
- the long-run proportion of time that the process spends in each state.
    
- Probability that the process is in state j at time t → π_j as t → ∞.
- Proportion of time spent in state j during (0, t) → π_j as t → ∞.
    

If π is a solution of π = πP, then any constant multiple cπ is also a solution. In solving π = πP, π·1 = 1, it is best to solve π = πP first and then normalize the resulting solution to satisfy the second condition.

In the finite state space case, the equations π = πP are linearly dependent; therefore one of them can be dropped.


   

Example:

$$P = \begin{pmatrix} 1/2 & 1/4 & 1/4 \\ 1/2 & 0 & 1/2 \\ 1/4 & 1/4 & 1/2 \end{pmatrix}$$

π = πP, π·1 = 1 implies that π = (2/5, 1/5, 2/5).
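One way to carry out this recipe numerically (a sketch added here): replace one of the dependent equations of π = πP with the normalization Σ_j π_j = 1 and solve the resulting linear system.

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
n = P.shape[0]

# pi (P - I) = 0, transposed to (P - I)^T pi^T = 0; the rows are linearly
# dependent, so overwrite the last equation with sum(pi) = 1.
A = (P - np.eye(n)).T
A[-1, :] = 1.0
b = np.zeros(n); b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(pi)   # [0.4 0.2 0.4] = (2/5, 1/5, 2/5)
```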

Limiting Probabilities

Compare π = (2/5, 1/5, 2/5) with

$$P^3 = \begin{pmatrix} 13/32 & 13/64 & 25/64 \\ 13/32 & 3/16 & 13/32 \\ 25/64 & 13/64 & 13/32 \end{pmatrix}, \qquad P^5 = \begin{pmatrix} 205/512 & 205/1024 & 409/1024 \\ 205/512 & 51/256 & 205/512 \\ 409/1024 & 205/1024 & 205/512 \end{pmatrix}$$

Proposition. In a finite-state irreducible Markov chain, the rows of Pⁿ converge to π.
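A quick illustration of the convergence (an added sketch):

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])

for n in (3, 5, 20):
    # each row of P^n approaches (0.4, 0.2, 0.4)
    print(n, np.linalg.matrix_power(P, n)[0])
```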


    
   

Transient States

- If a Markov chain has infinitely many transient states, then it is possible for the chain to remain in the set of transient states forever.
- However, if there are finitely many transient states, then the chain will eventually leave the set of transient states, never to return.
    
   

Theorem. For a Markov chain with a finite number of transient states, let T be the set of transient states, and let the transition matrix be written as

$$P = \begin{pmatrix} S & 0 \\ R & P_T \end{pmatrix}$$

where S corresponds to the recurrent states, and P_T corresponds to the transient states (ordered as in T).


    
   

Let n_ij be the expected number of visits to state j, starting at state i, and let N be the square matrix of entries n_ij defined for all i, j ∈ T. Then

$$N = (I - P_T)^{-1}.$$

The expected time spent in all transient states, starting at state i, is given by

$$\sum_{j \in T} n_{ij}.$$


    
   

Example:

$$P' = \begin{pmatrix} 1/2 & 1/2 & 0 & 0 & 0 \\ 0 & 1/3 & 2/3 & 0 & 0 \\ 1/3 & 1/3 & 1/3 & 0 & 0 \\ 0 & 0 & 0 & 1/4 & 3/4 \\ 1/4 & 0 & 0 & 1/2 & 1/4 \end{pmatrix}$$

$$P_T = \begin{pmatrix} 1/4 & 3/4 \\ 1/2 & 1/4 \end{pmatrix}, \quad I - P_T = \begin{pmatrix} 3/4 & -3/4 \\ -1/2 & 3/4 \end{pmatrix}, \quad N = (I - P_T)^{-1} = \begin{pmatrix} 4 & 4 \\ 8/3 & 4 \end{pmatrix}$$

So, for example, the expected number of visits to state 4′, starting in state 4′, is 4. The expected total number of visits to the transient states (4′ and 5′), starting in state 4′, is 8.
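The same computation in NumPy (an added sketch):

```python
import numpy as np

PT = np.array([[1/4, 3/4],
               [1/2, 1/4]])            # transient block of P'

N = np.linalg.inv(np.eye(2) - PT)      # fundamental matrix
print(N)                               # [[4.     4.    ] [2.6667 4.    ]]
print(N.sum(axis=1))                   # total expected time in transient states: [8. 6.6667]
```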
Random Walk

Let X be a Markov chain with state space E = {0, 1, 2, …} and transition matrix

$$P = \begin{pmatrix} 0 & 1 & & & \\ q & 0 & p & & \\ & q & 0 & p & \\ & & q & 0 & p \\ & & & \ddots & \ddots \end{pmatrix}$$

(blank entries are 0), where 0 < p < 1, q = 1 − p.
  
Random Walk

All states can be reached from each other, so the Markov chain is irreducible. Consequently, all states are transient, all are recurrent positive, or all are recurrent null.

To identify the states, let us assume that a limiting probability distribution exists, that is, there exists a vector π satisfying

$$\pi = \pi P.$$
  
Any solution is of the form

$$\pi_j = \frac{1}{q} \left(\frac{p}{q}\right)^{j-1} \pi_0, \qquad j = 1, 2, \ldots,$$

for some constant π_0 ≥ 0.
  
Random Walk

If p < q, then p/q < 1 and

$$\sum_{j=0}^{\infty} \pi_j = \pi_0 \left(1 + \frac{1}{q-p}\right) = 1, \qquad \text{so } \pi_0 = \frac{q-p}{1+q-p},$$

and

$$\pi_j = \begin{cases} \dfrac{q-p}{1+q-p} & \text{if } j = 0, \\[2ex] \dfrac{q-p}{1+q-p} \cdot \dfrac{1}{q} \left(\dfrac{p}{q}\right)^{j-1} & \text{if } j \ge 1. \end{cases}$$

In this case, all states are recurrent positive.
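As a numerical sanity check (an added sketch; the truncation size M is an arbitrary choice), the stationary distribution of a large truncated version of this chain matches the formula:

```python
import numpy as np

p, q, M = 0.3, 0.7, 200            # p < q; truncate the infinite chain at M states

P = np.zeros((M, M))
P[0, 1] = 1.0
for i in range(1, M - 1):
    P[i, i - 1], P[i, i + 1] = q, p
P[M - 1, M - 2] = 1.0              # reflecting boundary closes off the truncation

A = (P - np.eye(M)).T              # solve pi = pi P with normalization, as before
A[-1, :] = 1.0
b = np.zeros(M); b[-1] = 1.0
pi = np.linalg.solve(A, b)

print(pi[0], (q - p) / (1 + q - p))    # both ~0.2857
print(pi[1], pi[0] / q)                # pi_1 = pi_0 / q
```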

Random Walk

If p ≥ q, then we can use the following theorem to determine whether the states are transient or recurrent null.

Theorem. Let X be an irreducible Markov chain with transition matrix P, and let Q be the matrix obtained from P by deleting the row and column of some state i ∈ E. Then all states are recurrent if and only if the only solution of

$$y_j = \sum_{k} q_{jk}\, y_k, \qquad 0 \le y_j \le 1 \text{ for all } j \in E - \{i\},$$

is y_j = 0 for all j ∈ E − {i}.

Random Walk

We eliminate state 0 and solve for y, obtaining

$$y_j = \begin{cases} 0 \text{ for all } j & \text{if } p = q, \\[1ex] 1 - (q/p)^j \text{ for all } j & \text{if } p > q. \end{cases}$$

Hence the states are recurrent null if p = q, and transient if p > q.

   

Random Walk with Absorbing Barriers

Let X be a Markov chain with state space E = {0, 1, 2, …, N} and transition matrix

$$P = \begin{pmatrix} 1 & & & & & \\ q & 0 & p & & & \\ & q & 0 & p & & \\ & & \ddots & \ddots & \ddots & \\ & & & q & 0 & p \\ & & & & & 1 \end{pmatrix}$$

where 0 < p < 1, q = 1 − p. This is also called a random walk with absorbing barriers.
   

Random Walk with Absorbing Barriers

If f_i is the probability of reaching N, starting in state i, then

$$f_i = \begin{cases} \dfrac{1 - (q/p)^i}{1 - (q/p)^N} & \text{if } p \ne \tfrac{1}{2}, \\[2ex] \dfrac{i}{N} & \text{if } p = \tfrac{1}{2}. \end{cases}$$

(See Ross, pg. 217.)
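A sketch (added) of the formula, with a quick Monte Carlo cross-check:

```python
import random

def f(i, N, p):
    """Probability of reaching N before 0, starting from state i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                      # q/p
    return (1 - r**i) / (1 - r**N)

def simulate(i, N, p, trials=100_000):
    wins = 0
    for _ in range(trials):
        x = i
        while 0 < x < N:
            x += 1 if random.random() < p else -1
        wins += (x == N)
    return wins / trials

print(f(3, 10, 0.6), simulate(3, 10, 0.6))   # ~0.716 for both
```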

Example: Slotted Aloha

Slotted Aloha is a network random access method.
- It is used in packet radio and satellite communications, in which nodes communicate by radio.
- Time is divided into "slots" of one packet duration.
- When a node has a packet to send, it waits until the start of the next slot to send it.
- If no other node attempts transmission during that time slot, the transmission is successful.
- If a collision occurs, each collided packet is retransmitted with probability p in each successive slot until it is successfully transmitted.

[Figure: timeline of slots 1-6 illustrating packet arrivals, a collision, and retransmissions.]

   
Example: Slotted Aloha

Slotted Aloha can be represented as a Markov chain whose state is the number of backlogged packets (collided packets awaiting successful retransmission). Assume that k packets arrive in a time slot with probability λ_k, and at most one packet arrives at each node in each time slot. For states i ≥ 1, the transition probabilities are

$$p_{i,i-1} = \lambda_0 \left[ i p (1-p)^{i-1} \right]$$
$$p_{i,i} = \lambda_0 \left[1 - i p (1-p)^{i-1}\right] + \lambda_1 (1-p)^i$$
$$p_{i,i+1} = \lambda_1 \left[1 - (1-p)^i\right]$$
$$p_{i,i+j} = \lambda_j, \qquad j \ge 2.$$
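A sketch (added; the Poisson arrival distribution is an illustrative assumption for λ_k) that builds one row of this transition matrix and checks that it sums to 1:

```python
import math

def aloha_row(i, p, lam, jmax=60):
    """Transition probabilities out of state i (i >= 1 backlogged packets).
    lam(k) is the probability of k new arrivals in a slot."""
    one_retx = i * p * (1 - p) ** (i - 1)        # exactly one retransmission
    row = {i - 1: lam(0) * one_retx,
           i:     lam(0) * (1 - one_retx) + lam(1) * (1 - p) ** i,
           i + 1: lam(1) * (1 - (1 - p) ** i)}
    for j in range(2, jmax):                     # two or more arrivals always collide
        row[i + j] = lam(j)
    return row

lam = lambda k, mu=0.3: math.exp(-mu) * mu**k / math.factorial(k)  # Poisson(0.3)
row = aloha_row(4, 0.1, lam)
print(sum(row.values()))    # ~1.0, up to the truncation at jmax
```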

   
Example: Slotted Aloha

Using Markov chain analysis, one can show that if λ_0 + λ_1 < 1, then slotted Aloha is unstable, i.e., all states are transient, and the number of packets that have not been successfully transmitted grows without bound (see Ross, pg. 21).


 
Discussion Problems

Assign discussion problems to two students.
- Ross, Chapter 4: 5, 14, 29, 45
- Prove that for a random walk, all of the states are recurrent null if p = 1/2 and transient if p > 1/2.


  

Review Problem

Let X be a Markov chain with state space {1, 2}, initial distribution π(0) = (1/3, 2/3), and transition matrix

$$P = \begin{pmatrix} 0.5 & 0.5 \\ 0.3 & 0.7 \end{pmatrix}$$

1. Calculate
   a. P{X_1 = 1}
   b. P{X_3 = 2 | X_1 = 1}
   c. the limiting distribution π
2. Classify the states in this Markov chain.