
Limit Theorems

for Metastable Markov Chains

Samarth Tiwari, Zsolt Pajor-Gyulai

Courant Institute of Mathematical Sciences

27 October, 2017



Formulation of the Problem

What is a discrete-time Markov Chain?


What is a continuous-time Markov Process?
What is a continuous-time semi-Markov Process?
How do we analyze families of stochastic processes?
How is their behavior dependent on time-scales?



We only consider stochastic processes on a finite state space S.
For discrete time, the transition probabilities of the Markov chain
Z are given as p_{i,j} := the probability of transitioning to state j if the
process is currently at state i.
The process is Markov: transition probabilities do not depend on
the history. In other words, if at time t the process is at state
i, the path it took to get to this state is irrelevant for future
transitions.
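The transition rule above can be sketched in a short simulation. The 3-state matrix below is a hypothetical example, not one from the talk; note that `step` looks only at the current state, which is exactly the Markov property.

```python
import random

# Hypothetical 3-state chain on S = {0, 1, 2}; the matrix entries are
# illustrative, not taken from the talk. Row i holds p_{i,j}.
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def step(i, rng):
    """One transition from state i: sample j with probability p_{i,j}.
    Only the current state i is used -- the path so far is irrelevant."""
    return rng.choices(range(len(P)), weights=P[i])[0]

def run(i0, n, rng):
    """Simulate n steps of the chain started at Z(0) = i0."""
    path = [i0]
    for _ in range(n):
        path.append(step(path[-1], rng))
    return path

rng = random.Random(0)
path = run(0, 10, rng)
```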



Well-known result: for every state i ∈ S, there exists a measure μ_i : S → [0, 1] such
that

  lim_{n→∞} P(Z(n) = j | Z(0) = i) = μ_i(j)

This μ_i is the ergodic measure of Z starting at i.
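A minimal numerical illustration of this limit, using a hypothetical 3-state transition matrix: row i of the n-step matrix P^n approximates μ_i for large n.

```python
# Illustrative chain (numbers are not from the talk). Iterating the
# transition matrix approximates lim_n P(Z(n) = j | Z(0) = i) = mu_i(j).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Pn = P
for _ in range(200):          # Pn = P^201; its rows converge
    Pn = matmul(Pn, P)

mu = Pn[0]                    # row i of P^n approximates mu_i; here i = 0
```

For an irreducible aperiodic chain like this one all rows converge to the same stationary measure; for a reducible chain the limit μ_i can genuinely depend on the starting state i.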



First generalization: to continuous time.
Now the transitions of the Markov process do not occur at
every integer time. Transitions are modeled through |S| − 1
exponential random variables (one for each possible destination state).



Suppose Z(0) = i is the initial point of the Markov process.
Let z_j, for j ≠ i, be pairwise independent exponential random variables
with parameters p_{i,j}. Their PDFs are:

  f_{z_j}(t) = p_{i,j} exp(−p_{i,j} t)

These can be considered exponential alarm clocks. The
moment an alarm clock goes off, the process transitions to the
respective state, and a new set of alarm clocks is considered.
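One jump of the alarm-clock construction can be sketched as follows; the rate matrix is a hypothetical example, not from the talk.

```python
import random

# Hypothetical rate matrix: p_{i,j} is the parameter of the alarm clock
# for the transition i -> j (entries are illustrative).
P = [[0.0, 2.0, 1.0],
     [1.0, 0.0, 3.0],
     [2.0, 2.0, 0.0]]

def ctmc_step(i, rng):
    """One jump from state i: start an exponential clock z_j with
    parameter p_{i,j} for every j != i; the first clock to ring
    decides the next state and the holding time."""
    clocks = {j: rng.expovariate(P[i][j])
              for j in range(len(P)) if j != i and P[i][j] > 0}
    j = min(clocks, key=clocks.get)   # alarm that rings first
    return j, clocks[j]               # next state, time spent at i

rng = random.Random(0)
nxt, hold = ctmc_step(0, rng)
```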



A property of exponential random variables: the minimum of
finitely many independent exponential r.v.s is itself an
exponential r.v., with parameter equal to the sum of the individual
parameters.
Thus, the amount of time the process spends at a state is an
exponential r.v. There is a similar ergodic theorem for
continuous-time Markov chains.
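This property is easy to check numerically: for independent Exp(a) and Exp(b), the minimum should be Exp(a + b), so its empirical mean should be close to 1/(a + b). The rates below are arbitrary illustrative choices.

```python
import random

rng = random.Random(1)
a, b = 2.0, 3.0
N = 100_000

# min of independent Exp(a) and Exp(b) should be Exp(a + b),
# whose mean is 1 / (a + b) = 0.2 for these rates
samples = [min(rng.expovariate(a), rng.expovariate(b)) for _ in range(N)]
mean = sum(samples) / N
```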



Second generalization: asymptotics.
We consider a family of continuous-time Markov processes Z^ε,
for ε > 0. Each of these processes has transition probabilities
p^ε_{i,j}, and we analyze the limiting behavior as ε → 0.



Notion of time-scales: time-scales are suitably defined
functions f : (0, ∞) → (0, ∞) that satisfy

  lim_{ε→0} f(ε) = ∞



The limiting ergodic measures of Z^ε depend on the time-scale.
Topologically speaking: the ergodic measures are not
continuous at infinity as a function of (1/ε, t = f(ε)).



Our research built upon the result of Freidlin and Koralov:
roughly speaking, there are finitely many time-scales T^r(ε)
that can be considered as thresholds.
If f(ε) satisfies, for every r, either

  lim_{ε→0} f(ε)/T^r(ε) = 0   or   lim_{ε→0} T^r(ε)/f(ε) = 0

then there is a limiting ergodic measure μ_i such that

  lim_{ε→0} P(Z^ε(f(ε)) = j | Z^ε(0) = i) = μ_i(j)

These μ_i's are the metastable distributions of this family of
Markov processes.
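A toy two-state illustration of the time-scale dependence (this is a sketch of the phenomenon, not the general theorem): with switching probability ε per step, the chain observed well below the threshold 1/ε looks frozen at its start, while well above it the chain looks uniform.

```python
# Symmetric two-state chain with P = [[1-eps, eps], [eps, 1-eps]].
# A standard computation gives P(Z(n) = 0 | Z(0) = 0) in closed form.
def p_stay(eps, n):
    """Probability of being back at the starting state after n steps."""
    return (1 + (1 - 2 * eps) ** n) / 2

eps = 1e-4
short = p_stay(eps, 10)        # f(eps) << 1/eps : close to 1 (frozen)
long_ = p_stay(eps, 10 ** 6)   # f(eps) >> 1/eps : close to 1/2 (uniform)
```

The two observation times bracket the single threshold 1/ε of this toy family, which is why the limiting measures differ.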



The proof of this ergodic theorem required the formulation of a
family of skeleton Markov chains, discrete-time analogs of
Z^ε. Further, we have a hierarchy of Markov processes Z^{ε,r} for
each time-scale T^r, so that Z^ε behaves like Z^{ε,r} near the
time-scale T^r.
The use of appropriate and non-confusing notation was a big
challenge.



Third generalization: from families of Markov processes to
families of semi-Markov processes, and an introduction of
travel times.



Semi-Markov processes, aka Markov renewal processes.
Exponential alarm clocks are no longer used. Immediately
following a transition to state i, the process chooses another
state to transition to.
This decision-making is Markov.
However, each transition takes a certain amount of time T_{i,j}.
This travel period is not Markov, hence the term
semi-Markov process.



Now we consider a family Z^ε of semi-Markov processes on S.
Jump-times are 0 < τ_1 < τ_2 < … < τ_k < …. These are the instants
when the process transitions between states. Transition
probabilities:

  P(Z^ε(τ_{k+1}) = j | Z^ε(τ_k) = i) = p^ε_{i,j}

If Z^ε(τ_{k+1}) = j and Z^ε(τ_k) = i, then almost surely

  τ_{k+1} − τ_k = T^ε_{i,j}
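The jump-time construction can be sketched as follows, for a fixed ε; the transition and travel-time matrices are hypothetical illustrations, not from the talk.

```python
import random

# Sketch of a semi-Markov process: Markov decisions with probabilities
# P[i][j], deterministic travel times T[i][j]. Entries are illustrative.
P = [[0.0, 0.7, 0.3],
     [0.5, 0.0, 0.5],
     [0.6, 0.4, 0.0]]
T = [[0.0, 1.0, 2.0],
     [1.0, 0.0, 0.5],
     [2.0, 0.5, 0.0]]

def simulate(i0, jumps, rng):
    """Return the jump times tau_1 < tau_2 < ... and the visited states."""
    times, states = [], [i0]
    t, i = 0.0, i0
    for _ in range(jumps):
        j = rng.choices(range(len(P)), weights=P[i])[0]  # Markov decision
        t += T[i][j]                                     # non-Markov travel time
        times.append(t)
        states.append(j)
        i = j
    return times, states

rng = random.Random(2)
times, states = simulate(0, 5, rng)
```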



Key Idea: to derive ergodic measures, consider an equivalent
process on the Edge Graph of S.



Let E = {(i, j) | i, j ∈ S}. This is the set of directed edges in
the complete graph on S.
It was more helpful to study a corresponding process on this
edge set E.



Let X^ε be a family of semi-Markov processes on E with the
following transition probabilities: if
e1 = (i, j) ∈ E and e2 = (j′, k) ∈ E, then

  p^ε_{e1,e2} = p^ε_{j,k} if j = j′, and 0 otherwise

X^ε stays at state e1 = (i, j) for time T^ε_{i,j}.
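The edge-transition rule is a one-line check: an edge e2 can follow e1 only if e2 starts where e1 ends. A minimal sketch, with a hypothetical state-space transition table:

```python
# Illustrative state-space transition probabilities p_{j,k} on S = {a, b, c}.
P = {('a', 'b'): 0.7, ('a', 'c'): 0.3,
     ('b', 'a'): 0.5, ('b', 'c'): 0.5,
     ('c', 'a'): 1.0}

def edge_prob(e1, e2):
    """p_{e1,e2} = p_{j,k} if e1 = (i, j) ends where e2 = (j', k) starts,
    i.e. j = j'; otherwise 0."""
    (_, j), (j2, k) = e1, e2
    return P.get((j, k), 0.0) if j == j2 else 0.0
```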



Consider a family of skeleton Markov chains Y^ε on E, which
are defined for discrete time, with transition probabilities p^ε_{e1,e2},
the same as those for X^ε.



The same proof techniques of Freidlin and Koralov allow us to
show the existence of ergodic measures for Y^ε between certain
thresholds of travel times.
However, time-scales for Y^ε do not have a straightforward
relationship with time-scales for X^ε.



We have the following relationship between Y^ε and X^ε. For
every ε > 0,

  P(X^ε(τ_{k+1}) = e2 | X^ε(τ_k) = e1) = p^ε_{e1,e2} = P(Y^ε(k + 1) = e2 | Y^ε(k) = e1)

Y^ε is said to be the section of X^ε along the τ_k. In other words,
Y^ε contains no information regarding travel times.
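Since the skeleton discards all timing information, extracting Y^ε from a path of X^ε is just a projection; a minimal sketch with a hypothetical edge path:

```python
# A hypothetical sample path of X: pairs of (edge visited, travel time).
x_path = [(('a', 'b'), 1.0), (('b', 'c'), 0.5), (('c', 'a'), 2.0)]

def section(path):
    """Y(k) = the k-th edge visited by X, with all timing dropped."""
    return [edge for edge, _ in path]

y = section(x_path)
```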



An example: if, for a time-scale f(ε), the ergodic measure of Y^ε
starting at e1 is μ_{e1}, and if there exists e2 ∈ E such that
0 < μ_{e1}(e2) < 1, then for sufficiently small ε,

  inf{k | Y^ε(k) = e2} < f(ε)  almost surely.

In this case, if the travel time T^ε_{e2} is asymptotically greater
than all other travel times, the ergodic measure of X^ε might
be supported only on e2, even though the same does not hold
for Y^ε.



Keeping this possibility in mind, by appropriately handling
edges that take "too long" to travel, we can prove the
existence of limiting ergodic measures, along the lines of
Freidlin and Koralov.



Further work:
Allow transitions from a state back to itself: motivated by
practical applications for my mentor.

Make the travel times T^ε_{i,j} non-deterministic. If they are
exponential, the situation reverts to a continuous-time Markov
process. Is the theory any more interesting for random variables
of higher variance?

