
Abstract

We develop an approach to the construction of Lyapunov functions for the forward equation of a finite state nonlinear Markov process. Nonlinear Markov processes can be obtained as a law of large numbers limit for a system of weakly interacting processes. The approach exploits this connection and the fact that relative entropy defines a Lyapunov function for the solution of the forward equation for the many particle system. Candidate Lyapunov functions for the nonlinear Markov process are constructed via limits, and verified for certain classes of models.

Introduction

Stochastic processes that go under the unusual title of nonlinear Markov processes arise as the limits of large collections of exchangeable and weakly interacting particles. Although these processes originally appeared as models in mathematical physics (Kac, 1956), they now appear in many other contexts, including communication networks (Dawson et al., 2005; McDonald and Reynier, 2006; Antunes et al., 2008; Benaïm and Le Boudec, 2008; Graham and Robert, 2009), economics (Dai Pra et al., 2009), and neural networks (Laughton and Coolen, 1995).

University, Providence, RI, 02912, USA. Research supported in part by the National Science Foundation (DMS-0706003 and DMS-1008331), the Air Force Office of Scientific Research (FA9550-09-1-0378), and the Army Research Office (W911NF-09-1-0155).

35121 Padova, Italy. Research supported in part by the Air Force Office of Scientific Research (FA9550-09-1-0378).

time. One quantity is a particle that corresponds to a typical or canonical particle at the prelimit level, and the other is a probability measure. If the measure component were fixed, then the particle component would evolve as an ordinary Markov process. The role of the measure-valued component is to account for the aggregate effect of all other particles on the evolution of the canonical particle. At the prelimit, each of these particles interacts with the canonical particle through their state values. Under standard scalings this effect can be summarized in terms of the empirical measure on the particle states, and since many particles contribute equally to this empirical measure, the effect of any one is in some sense weak. In the limit as the number of particles tends to infinity, this empirical measure converges to the measure-valued component of the corresponding nonlinear Markov process. Because of exchangeability one can interpret the empirical measure as an approximation to the distribution of the canonical particle. In the limit the measure component is identified with the exact distribution of the canonical particle, and the two components evolve together. Thus the measure component feeds into the dynamics of the particle component, which in turn determines the evolution of the measure since it coincides with the distribution of the particle. For many purposes, one can eliminate the particle from the dynamics and obtain an evolution equation for just the measure component.

An important but difficult issue in the study of nonlinear Markov processes is stability. Here, what is meant by stability is deterministic stability of the measure component. For example, one can ask if there is a unique, globally attracting fixed point for the dynamics of the measure-valued component. When this is not the case, all the usual questions regarding stability of deterministic systems, such as existence of multiple fixed points, their local stability properties, etc., arise here as well. One is also interested in the connection between these sorts of stability properties of the limit model and related stability and meta-stability (in the sense of small noise stochastic systems) questions for the prelimit model.

There are several features which make stability analysis particularly difficult for these models. The state space, the set of probability measures on some Polish space, is not a linear space (although it is a closed, convex subset of a Banach space). Obvious first candidates for Lyapunov functions, such as quadratic functions, are not naturally suited to such state spaces. Related to the structure of the state space is the fact that the dynamics, linearized at any point in the state space, always have a zero eigenvalue, which also complicates standard linearization arguments.

The purpose of the present paper is to introduce and develop a somewhat more systematic approach to the construction of Lyapunov functions for nonlinear Markov processes. The starting point is the observation that given any ergodic Markov process, the relative entropy function in some sense always defines a globally attracting Lyapunov function. Now of course a nonlinear Markov process is not a Markov process in the usual sense. Indeed, as will be shown with examples, there can be multiple fixed points, locally stable and otherwise, that are associated with the measure component. The fact that relative entropy provides a Lyapunov function for ordinary Markov processes is therefore not directly applicable. Instead, we will temporarily lift the problem to the level of the prelimit processes.

Under natural conditions the N-particle process will be ergodic, and thus relative entropy can be used to define a Lyapunov function for the joint distribution of these N particles. The law of large numbers and related limit results for the interacting system then suggest that the limit of suitably normalized relative entropy for the N-particle system should provide a candidate Lyapunov function for the nonlinear Markov process.

To simplify the exposition, in this paper we focus on the relatively simple setting in which each of the interacting particles is modeled by a finite state Markov process. This setting is only relatively simple and, as we will see, even with additional strong assumptions there are many possible behaviors for the corresponding nonlinear process.

An outline of the rest of this paper is as follows. Section 2 provides the notation and basic results on families of weakly interacting Markov processes and the associated limit systems. Section 3 describes the construction of candidate Lyapunov functions for the limit systems based on relative entropy. Section 4 studies a class of limit systems where the effect of the nonlinear part due to interaction is in some sense slow. In Section 5 a class of weakly interacting Markov processes of Gibbs type is studied. The final section contains concluding remarks.

Weakly interacting Markov processes

In this section we present the families of weakly interacting Markov processes in continuous time that will be considered.

To avoid all technical complications, let us assume that the state of any individual particle takes values in a finite set X; to fix notation, we use the particular choice X = {1, . . . , d}. For N ∈ ℕ, the N-particle system is described as an X^N-valued Markov process. We will use boldface to denote objects that take values in spaces that are N-fold products such as X^N, and ordinary typeface for objects taking values in one copy, such as X.

A heuristic description of the N-particle process is as follows. Let P(X) denote the space of probability vectors on X. We are given a family of rate matrices Γ(p) indexed by p ∈ P(X), and assume for simplicity that p ↦ Γ(p) is Lipschitz continuous. Given the states X^{1,N}(t), . . . , X^{N,N}(t) of all N particles at any time t, form the empirical measure

\[
\mu^N(t,\omega) \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{X^{i,N}(t,\omega)}, \qquad t \ge 0,\ \omega \in \Omega, \tag{2.1}
\]

where δ_x denotes the Dirac measure at x ∈ X. Suppose that at time t some particle, say X^{i,N}, is in state x. Then over a short time interval [t, t + Δt] with Δt > 0, the particles evolve approximately independently, with the law of each particle governed by the rate matrix Γ(μ^N(t)). The particles are then recoupled through the evolution of the empirical measure, and so to give a precise description we consider the entire X^N-valued N-particle process. Let x = (x₁, . . . , x_N), y = (y₁, . . . , y_N) ∈ X^N. There is an admissible transition from x to y only if x and y differ in exactly one component. [Without the coupling through the empirical measure, the components would evolve as independent particles.] The rate of an admissible transition is assumed to depend on x and y only through the values of the component that is changing as well as the empirical measure of x, i.e.,

\[
\mu^N_{\mathbf{x}} \;=\; \frac{1}{N}\sum_{j=1}^{N} \delta_{x_j}. \tag{2.2}
\]

Set

\[
\mathcal{P}_N(X) \;=\; \left\{\mu^N_{\mathbf{x}} : \mathbf{x} \in X^N\right\} \subset \mathcal{P}(X).
\]
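To make the prelimit dynamics concrete, here is a minimal Gillespie-type simulation sketch of the N-particle process, assuming single-particle rates of the form Γ_{x_l,y_l}(μ^N_x) as above; the two-state rate family used in the test is hypothetical and only for illustration.

```python
import random

def empirical_measure(x, states):
    """Empirical measure mu^N_x of a configuration x, as a dict state -> mass."""
    N = len(x)
    return {s: sum(1 for xi in x if xi == s) / N for s in states}

def simulate(states, Gamma, x0, T, seed=0):
    """Gillespie simulation of the N-particle process on [0, T].

    Gamma(x, y, p) is the jump rate of a single particle from state x to
    state y when the empirical measure is p; exactly one particle moves
    per jump, matching the admissible-transition structure above."""
    rng = random.Random(seed)
    x = list(x0)
    N = len(x)
    t = 0.0
    while True:
        p = empirical_measure(x, states)
        # rates of all admissible single-particle transitions
        events = [(i, y, Gamma(x[i], y, p))
                  for i in range(N) for y in states if y != x[i]]
        total = sum(r for _, _, r in events)
        if total <= 0.0:
            return x  # absorbing configuration
        t += rng.expovariate(total)
        if t > T:
            return x
        u = rng.random() * total
        acc = 0.0
        i_sel, y_sel = events[-1][0], events[-1][1]
        for i, y, r in events:
            acc += r
            if u <= acc:
                i_sel, y_sel = i, y
                break
        x[i_sel] = y_sel  # one particle jumps; the empirical measure moves by 1/N
```

A usage example with a hypothetical attractive rate family (jumps toward more occupied states are faster): `simulate([1, 2], lambda a, b, p: 1.0 + p[b], [1] * 50, 1.0)`.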

Let Γ^N(x, y) denote the rate of an admissible transition from x to y, x ≠ y. If x and y differ in more than one component, then Γ^N(x, y) = 0. If x and y differ in exactly one component, that is, there is l ∈ {1, . . . , N} such that x_l ≠ y_l and x_j = y_j for all j ≠ l, then

\[
\Gamma^N(\mathbf{x},\mathbf{y}) \;=\; \Gamma^N\!\left(x_l, y_l, \mu^N_{\mathbf{x}}\right) \tag{2.3}
\]

for a function Γ^N : X × X × P_N(X) → [0, ∞). The N-particle system described previously in a heuristic fashion corresponds to Γ^N(x_l, y_l, μ^N_x) = Γ_{x_l,y_l}(μ^N_x). (A situation that requires Γ^N to also depend on N will appear in Section 5.) Set

\[
\Gamma^N(\mathbf{x},\mathbf{x}) \;=\; -\sum_{\mathbf{y}\neq\mathbf{x}} \Gamma^N(\mathbf{x},\mathbf{y}), \qquad \mathbf{x} \in X^N.
\]

The matrix Γ^N = (Γ^N(x, y))_{x,y∈X^N} is then the generator of the N-particle system. Let X^N = (X^{1,N}, . . . , X^{N,N}) be a Markov process with rate matrix Γ^N defined on some probability space (Ω, F, P). Then X^N provides a precise model of the N-particle dynamics, and it has piecewise constant càdlàg trajectories. The assumption on the non-zero off-diagonal entries of Γ^N implies that, with probability one, X^{i,N} and X^{j,N} do not jump simultaneously if i ≠ j.

The empirical measure corresponding to the N-particle state process X^N can be written as in (2.1). Notice that μ^N has P(X)-valued càdlàg trajectories. Actually, μ^N is a Markov process with values in P_N(X). Let L^N denote the infinitesimal generator of μ^N. Then L^N acts on bounded measurable functions f : P_N(X) → ℝ according to

\[
L^N(f)(p) \;=\; N \sum_{x,y\in X} \Gamma^N(x,y,p)\left[ f\!\left(p + \tfrac{1}{N}e_y - \tfrac{1}{N}e_x\right) - f(p) \right] p_x.
\]

For later use we note that, since X^N is a Markov process with generator Γ^N, the law p^N(t) of X^N(t) satisfies the linear Kolmogorov forward equation

\[
\frac{d}{dt}\,p^N(t) \;=\; p^N(t)\,\Gamma^N, \qquad t \ge 0, \tag{2.4}
\]

with initial condition p^N(0). Note that p^N(t) ∈ P(X^N) for all t ≥ 0.

Since the jump rates are unchanged under permutations of the particle indices, X^N enjoys two useful symmetry properties. The first is that the random variables X^{1,N}(t), . . . , X^{N,N}(t) are exchangeable for all t > 0 whenever X^{1,N}(0), . . . , X^{N,N}(0) are exchangeable. In this case we also have the property that the evolution of the i-th particle depends on the other particles only through the empirical measure. More precisely, for all i ∈ {1, . . . , N}, 0 ≤ s ≤ t, and all functions f : X → ℝ,

\[
E\!\left[f(X^{i,N}(t))\,\middle|\,\mathbf{X}^N(s)\right] \;=\; E\!\left[f(X^{i,N}(t))\,\middle|\,X^{i,N}(s),\, \mu^N(s)\right].
\]

This holds because the contribution of any individual particle to the empirical measure is of order 1/N. In particular, (X^{i,N}, μ^N) is an X × P_N(X)-valued Markov process, and by exchangeability the choice of i is not important; below we use the choice i = 1. We have already indicated that one can derive an evolution equation for the limit as N → ∞ of the measure component μ^N alone. Although the prelimit processes are stochastic, the limit turns out to be deterministic, and thus the limit is a form of the law of large numbers.

The stability properties of the limit model are the main object of study,

though ultimately one is also interested in various properties of the prelimit

processes as well.

To simplify the discussion, we make the particular choice Γ^N(x, y, p) = Γ_{x,y}(p) for y ≠ x for the rest of this section. Since Γ(p) is a rate matrix, automatically Γ_{x,x}(p) = −Σ_{y≠x} Γ_{x,y}(p) for all x ∈ X. The result stated below continues to hold if Γ^N(x, y, p) → Γ_{x,y}(p) uniformly in p ∈ P(X) for all x ≠ y, a situation that will be encountered in Section 5.

The limit of μ^N will be characterized via the nonlinear Kolmogorov forward equation

\[
\frac{d}{dt}\,p(t) \;=\; p(t)\,\Gamma(p(t)). \tag{2.5}
\]

Here we identify P(X) with {p ∈ ℝ^d : p_x ≥ 0 and Σ_{x=1}^d p_x = 1}. Thus P(X) is a compact and convex subset of ℝ^d, and since p ↦ Γ(p) is Lipschitz continuous, solutions of (2.5) exist, are unique, and take values in P(X). The law of large numbers limit can be efficiently established by using a martingale problem formulation, see Oelschläger (1984), for instance.

Theorem 2.1. Assume that Γ^N(x, y, ·) = Γ_{x,y}(·) for all x ≠ y, and assume that μ^N(0) converges in probability to q ∈ P(X) as N tends to infinity. Then (μ^N(·))_{N∈ℕ} converges uniformly on compact time intervals in probability to p(·), where p(·) is the unique solution to Eq. (2.5) with p(0) = q.

Proof. In the framework of a standard convergence theorem for pure jump Markov processes, set E = P(X), E_N = P_N(X), N ∈ ℕ, and define

\[
F_N(p) \;=\; \sum_{x,y\in X} N p_x \left(\tfrac{1}{N}e_y - \tfrac{1}{N}e_x\right) \Gamma_{x,y}(p), \qquad p \in E_N,
\]
\[
F(p) \;=\; \sum_{x,y\in X} p_x \left(e_y - e_x\right) \Gamma_{x,y}(p), \qquad p \in E,
\]

so that F_N(p) = L^N(f)(p), p ∈ P_N(X), when f is the identity function f(p) = p ∈ ℝ^d, and F_N coincides with F on E_N. Moreover, the z-th component of the d-dimensional vector F(p) is equal to Σ_{x≠z} p_x Γ_{x,z}(p) − Σ_{y≠z} p_z Γ_{z,y}(p), which in turn is equal to Σ_x p_x Γ_{x,z}(p), the z-th component of the row vector pΓ(p), since Γ_{x,z}(p) = Γ(x, z, p) for x ≠ z and Γ_{z,z}(p) = −Σ_{y≠z} Γ(z, y, p). The ODE (d/dt)p(t) = F(p(t)) is therefore the same as Eq. (2.5).
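The limit dynamics (2.5) form a d-dimensional ODE and can be integrated with any standard scheme. The following sketch uses explicit Euler, with a hypothetical two-state nonlinear rate family (not a model from the text) for illustration.

```python
def solve_forward(Gamma, p0, T, dt=1e-3):
    """Explicit Euler scheme for the nonlinear forward equation (2.5),
    d/dt p(t) = p(t) Gamma(p(t)), with p a probability row vector.

    Gamma(p) must return a d x d rate matrix whose rows sum to zero,
    which makes the scheme conserve total mass exactly."""
    p = list(p0)
    d = len(p)
    for _ in range(int(T / dt)):
        G = Gamma(p)
        p = [p[x] + dt * sum(p[y] * G[y][x] for y in range(d))
             for x in range(d)]
    return p

# Hypothetical nonlinear family: jumping toward a state becomes faster
# the more mass that state already carries (an attractive effect).
Gamma = lambda p: [[-(1.0 + p[1]), 1.0 + p[1]],
                   [1.0 + p[0], -(1.0 + p[0])]]
p = solve_forward(Gamma, [0.9, 0.1], T=10.0)
```

For this symmetric example the uniform vector (1/2, 1/2) is a globally attracting fixed point, so the numerical solution should approach it.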

Under the assumptions of Theorem 2.1, Eq. (2.5) has a unique solution for every initial condition. While we have considered just the convergence μ^N → p, one could also consider the limit of the pair (X^{i,N}, μ^N) and thereby show that Eq. (2.5) describes the evolution of the law of a limit particle. Given a solution of (2.5), one can also directly construct such a process. To see this, let q ∈ P(X) and let p(·) be the unique solution to Eq. (2.5) with p(0) = q. Set Γ_q(t) = Γ(p(t)), t ≥ 0. Then (Γ_q(t))_{t≥0} is a time-dependent rate matrix, and we can find a time-inhomogeneous X-valued Markov process X with Law(X(0)) = q (Stroock, 2005, Section 5.5.2). The evolution of the law of X obeys

\[
\frac{d}{dt}\,\tilde{p}(t) \;=\; \tilde{p}(t)\,\Gamma_q(t), \qquad t \ge 0, \quad \tilde{p}(0) = q. \tag{2.6}
\]

Since p(·) itself solves (2.6), it follows that Law(X(·)) = p(·).

A property closely related to the discussion below is propagation of chaos; see Gottlieb (1998) for an exposition and characterization. Propagation of chaos means that any fixed number of components of the N-particle process become asymptotically i.i.d. as N tends to infinity. More precisely, propagation of chaos for (X^N)_{N∈ℕ} (or (μ^N)_{N∈ℕ}) means the following. If q ∈ P(X) and if for all k ∈ ℕ, Law(X^{1,N}(0), . . . , X^{k,N}(0)) converges to the product measure ⊗^k q weakly as N goes to infinity, then for all k ∈ ℕ and all t ≥ 0,

\[
\mathrm{Law}\!\left(X^{1,N}(t), \ldots, X^{k,N}(t)\right) \;\longrightarrow\; \otimes^k p(t) \quad \text{weakly in } \mathcal{P}(X^k), \tag{2.7}
\]

where p(·) solves (2.5) with p(0) = q. Note that (2.7) is a statement about the marginal distributions at a particular time, not about the full laws p^N of the N-particle systems determined by (Γ^N).

Lyapunov functions based on relative entropy

In this section we outline an approach to the construction of Lyapunov functions for nonlinear Markov processes. As noted in the introduction, various features of the deterministic system (2.5) make standard forms of Lyapunov functions unsuitable. Indeed, one of the most challenging problems in the construction of Lyapunov functions for any system is the identification of natural forms that reflect the particular features and structure of the system. Here the ODE (2.5) is naturally related to a flow of probability measures, and for this reason one might consider constructions based on relative entropy.

The classical descent property of relative entropy as a Lyapunov function is limited to processes that are stationary and ergodic. The ODE (2.5) corresponds to a nonstationary process, though as was discussed in the previous section it arises naturally as a certain limit of stationary processes. The general approach is to identify ways to exploit this connection to suggest good forms for the limit, which in general are obtained as limits of relative entropies for the prelimit model.

Here we recall the descent property of Markov processes with respect to relative entropy. This fact, which is well known, has a straightforward proof in the setting of finite-state continuous-time Markov processes. The earliest proof the authors have been able to locate is in Spitzer (1971, pp. 16-17).

Let G = (G_{x,y})_{x,y∈X} be an irreducible rate matrix over the finite state space X, and denote by π its unique stationary distribution. The forward equation for the family of Markov processes with rate matrix G is the linear ODE

\[
\frac{d}{dt}\,p(t) \;=\; p(t)\,G. \tag{3.1}
\]

Define ℓ : [0, ∞) → [0, ∞) by ℓ(z) = z log(z) − z + 1. Recall that the relative entropy of p ∈ P(X) with respect to q ∈ P(X) is given by

\[
R(p\|q) \;=\; \sum_{x\in X} p_x \log\frac{p_x}{q_x}.
\]

Lemma 3.1. Let p(·), q(·) be solutions of (3.1) with initial conditions p(0), q(0) ∈ P(X). Then for all t ≥ 0,

\[
\frac{d}{dt}\,R(p(t)\|q(t)) \;=\; -\sum_{x,y\in X:\, x\neq y} \ell\!\left(\frac{p_y(t)\,q_x(t)}{p_x(t)\,q_y(t)}\right) p_x(t)\,\frac{q_y(t)}{q_x(t)}\,G_{y,x} \;\le\; 0.
\]

Moreover, (d/dt)R(p(t)‖q(t)) = 0 if and only if p(t) = q(t).

Proof. Observe that ℓ is strictly convex on [0, ∞), with ℓ(0) = 1 and ℓ(z) = 0 if and only if z = 1. Owing to the irreducibility of G, p(t) and q(t) have no zero components and hence are equivalent probability vectors for all t > 0. Assume for the time being that p(0), q(0) are equivalent probability vectors. By assumption p'_x(t) = Σ_{y∈X} p_y(t)G_{y,x} for all x ∈ X and all t ≥ 0, and analogously for q. Since G is a rate matrix, Σ_{x∈X} G_{y,x} = 0 for all y ∈ X. Thus

\[
\begin{aligned}
\frac{d}{dt}\,R(p(t)\|q(t))
&= \frac{d}{dt} \sum_{x\in X} p_x(t) \log\frac{p_x(t)}{q_x(t)} \\
&= \sum_{x\in X} p'_x(t) \log\frac{p_x(t)}{q_x(t)} + \sum_{x\in X} p'_x(t) - \sum_{x\in X} p_x(t)\,\frac{q'_x(t)}{q_x(t)} \\
&= \sum_{x,y\in X} \left[\, p_y(t) \log\frac{p_x(t)}{q_x(t)} + p_y(t) - p_x(t)\,\frac{q_y(t)}{q_x(t)} \,\right] G_{y,x} \\
&= \sum_{x,y\in X} \left[\, p_y(t) \log\frac{p_x(t)\,q_y(t)}{p_y(t)\,q_x(t)} + p_y(t) - p_x(t)\,\frac{q_y(t)}{q_x(t)} \,\right] G_{y,x} \\
&= -\sum_{x,y\in X} \ell\!\left(\frac{p_y(t)\,q_x(t)}{p_x(t)\,q_y(t)}\right) p_x(t)\,\frac{q_y(t)}{q_x(t)}\,G_{y,x} \\
&= -\sum_{x,y\in X:\, x\neq y} \ell\!\left(\frac{p_y(t)\,q_x(t)}{p_x(t)\,q_y(t)}\right) p_x(t)\,\frac{q_y(t)}{q_x(t)}\,G_{y,x},
\end{aligned}
\]

where the fourth equality uses Σ_x G_{y,x} = 0 to subtract the vanishing term Σ_{x,y} p_y(t) log(p_y(t)/q_y(t)) G_{y,x}, and the last equality uses ℓ(1) = 0 for the diagonal terms. Recall that G_{y,x} ≥ 0 for x ≠ y, ℓ ≥ 0, and q_x, p_x ≥ 0 for all x ∈ X. It follows that (d/dt)R(p(t)‖q(t)) ≤ 0. If p(0), q(0) are not equivalent, that is, if there is x ∈ X such that p_x(0) = 0 and q_x(0) > 0, or p_x(0) > 0 and q_x(0) = 0, then (d/dt)R(p(t)‖q(t))|_{t=0} = −∞ and the above equalities continue to hold in the sense that both sides equal −∞.

It remains to show that (d/dt)R(p(t)‖q(t)) = 0 if and only if p(t) = q(t). We claim that this follows from the fact that ℓ ≥ 0 with ℓ(z) = 0 if and only if z = 1, and from the irreducibility of G. Indeed, p(t) = q(t) if and only if p_y(t)q_x(t)/(p_x(t)q_y(t)) = 1 for all x, y ∈ X with x ≠ y. Thus p(t) = q(t) implies (d/dt)R(p(t)‖q(t)) = 0. Conversely, if (d/dt)R(p(t)‖q(t)) = 0, then p_y(t)q_x(t)/(p_x(t)q_y(t)) = 1 for all x, y ∈ X such that G_{y,x} > 0. If y does not directly communicate with x, irreducibility of G yields a finite chain of states leading from y to x with strictly positive one-step rates, and multiplying the corresponding ratios again gives p_y(t)q_x(t)/(p_x(t)q_y(t)) = 1. Hence p(t) = q(t).

If q(0) = π then, by stationarity, q(t) = π for all t ≥ 0.
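The descent property of Lemma 3.1 is easy to observe numerically: integrate two solutions of the linear forward equation and track their relative entropy along the flow. The three-state rate matrix G below is a hypothetical example, and the Euler discretization is only a sketch.

```python
import math

def relative_entropy(p, q):
    """R(p||q) = sum_x p_x log(p_x / q_x), with the convention 0 log 0 = 0."""
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

def euler_step(p, G, dt):
    """One explicit Euler step of the linear forward equation d/dt p = p G."""
    d = len(p)
    return [p[x] + dt * sum(p[y] * G[y][x] for y in range(d)) for x in range(d)]

# Hypothetical irreducible rate matrix on three states (rows sum to zero).
G = [[-1.0, 0.5, 0.5],
     [1.0, -1.5, 0.5],
     [0.25, 0.25, -0.5]]
p, q = [0.8, 0.1, 0.1], [0.2, 0.3, 0.5]
dt, values = 1e-3, []
for _ in range(2000):
    values.append(relative_entropy(p, q))
    p, q = euler_step(p, G, dt), euler_step(q, G, dt)
```

Up to discretization error, the recorded values of R(p(t)‖q(t)) should be non-increasing, in agreement with the lemma.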

Lemma 3.1 then shows that the mapping

\[
p \;\mapsto\; R(p\|\pi) \tag{3.2}
\]

is a strict Lyapunov function for the linear forward equation (3.1). This is, however, just one of many possibilities. Indeed, a second example is

\[
p \;\mapsto\; R(\pi\|p).
\]

Yet a third can be constructed as follows. Fix T > 0 and, for p ∈ P(X), let q_p(·) denote the solution of (3.1) with q_p(0) = p. Consider the mapping

\[
p \;\mapsto\; R\!\left(p\,\|\,q_p(T)\right). \tag{3.3}
\]

Lemma 3.1 implies that the mapping given by (3.3) is a strict Lyapunov function for Eq. (3.1). This is so because R(p(t)‖q_{p(t)}(T)) = R(p(t)‖q(t)), where q(·) solves (3.1) with q(0) = p(T). Note that (3.2) arises as the limit of (3.3) as T goes to infinity.

Since a nonlinear Markov process is not in general equivalent to a time-inhomogeneous Markov process, the descent property for relative entropy does not directly apply: the solution of the nonlinear forward equation cannot be compared with a stationary solution defined in terms of the same rate matrix. However, nonlinear Markov processes can be related to linear Markov processes through the law of large numbers and propagation of chaos, and this suggests an indirect way to obtain natural forms of Lyapunov functions. There are in fact many different ways this connection could be used to suggest forms that are suitable, and this paper will consider two that are arguably the most straightforward.

Consider a fixed point π of (2.5), i.e., πΓ(π) = 0, and suppose for the sake of argument that π is stable for the dynamics p'(t) = p(t)Γ(p(t)). Consider a weakly interacting system as in Section 2. In particular, for N ∈ ℕ, let Γ^N be the infinitesimal generator of a linear Markov process with values in X^N, describing the N-particle system, and suppose that Γ(·) is the nonlinear generator of the corresponding limit model. For simplicity, let us assume that the initial distributions for the N-particle systems are of the product form ⊗^N q for some q ∈ P(X), and let p^N(t) be the law of the N-particle system at time t for this initial condition. As before, let X^{i,N} and μ^N denote the i-th particle and the empirical measure. The discussion here is formal, being meant only to motivate the possible forms for Lyapunov functions, which will later be justified by direct verification.

The first candidate is the mapping

\[
q \;\mapsto\; \lim_{N\to\infty} \frac{1}{N}\, R\!\left(p^N(0)\,\|\,p^N(T)\right), \tag{3.4}
\]

where T ∈ (0, ∞] and, in the case T = ∞, p^N(T) is interpreted as the stationary distribution of the N-particle system. The scaling in N is plausible because the relative entropy between the joint laws of N weakly dependent particles typically grows linearly in N. By Lemma 3.1, the mapping t ↦ R(p^N(t)‖p^N(T + t)) is non-increasing, and it is strictly decreasing at t = 0 (if p^N(0) is not the stationary distribution). Denote the limit in (3.4), assuming it exists, by F(q).

Then one might conjecture that F defines a Lyapunov function for p'(t) = p(t)Γ(p(t)). There are (at least) two potential problems. One is that although R(p^N(0)‖p^N(T)) is strictly decreasing along solutions to (2.4), the strict decrease may be lost in the limit as N → ∞. This can in fact occur: in some of the examples considered below there are multiple fixed points to (2.5), and F still serves as a Lyapunov function (though not a global one), with a strict decrease at all points that are not fixed points. A second (and more significant) potential problem is that in the interchange of differentiation in time and limits in N, even the non-increasing property may be lost. Suppose for simplicity that T = ∞, i.e., that the comparison is with the stationary distribution π^N of the N-particle system. Let p^N(0) = ⊗^N p(0). For Δ > 0, Lemma 3.1 gives

\[
R\!\left(p^N(\Delta)\,\|\,\pi^N\right) \;\le\; R\!\left(\otimes^N p(0)\,\|\,\pi^N\right).
\]

However, p^N(Δ) is no longer of product form, and the desired decrease F(p(Δ)) ≤ F(p(0)) would follow if it turns out that

\[
R\!\left(\otimes^N p(\Delta)\,\|\,\pi^N\right) \;\le\; R\!\left(p^N(\Delta)\,\|\,\pi^N\right) + o(\Delta)N. \tag{3.5}
\]

If (3.5) does not hold, the comparison between R(⊗^N p(Δ)‖π^N) and R(p^N(Δ)‖π^N) gives no information on the behavior of F along the limit dynamics.

The second candidate, again with p^N(0) = ⊗^N q, is the mapping

\[
q \;\mapsto\; \lim_{N\to\infty} \frac{1}{N}\, R\!\left(p^N(T)\,\|\,p^N(0)\right). \tag{3.6}
\]

Thus two issues that should be addressed if this approach is to be used successfully are (i) how to construct approximations to p^N(T) that capture as much of the dynamics of the N-particle system as possible, and (ii) ensuring that a bound such as (3.5) is valid. It turns out that the second issue is not symmetric for the two forms (3.4) and (3.6). In fact, based on large deviation calculations one can show that the bound (3.5) typically holds for the former, and that it usually fails for the latter. However, we will not give this calculation, since for the problems we consider the limit F is obtained in an explicit form for both cases, and we simply test whether or not F has the descent property; see Remark 5.6.

Thus we return to the first issue, the construction of approximations to the distribution of the N-particle process at time T, or to its stationary distribution π^N when T = ∞. One class of models for which such approximations are available are those connected with Gibbs measures. This class of models will be discussed in Section 5. The next simplest approximation may be to assume that the propagation of chaos approximation is valid over a large enough time interval that the marginals of p^N(T) are nearly equal to p(T) for any fixed number of particles as N → ∞, so that p^N(T) is approximated by the product measure ⊗^N p(T), or by ⊗^N π when T = ∞. This crude approximation in (3.4) yields

\[
\frac{1}{N}\, R\!\left(p^N(0)\,\|\,p^N(T)\right) \;\approx\; \frac{1}{N}\, R\!\left(\otimes^N q\,\|\,\otimes^N \pi\right) \;=\; R(q\|\pi) \;=\; F(q). \tag{3.7}
\]

One would expect this to yield a useful Lyapunov function only when the interaction between particles is limited (which is related to, but distinct from, the strength of the interaction as measured by 1/N). Section 4 identifies a class of models for which (3.7) generically gives a global Lyapunov function. The analysis for this case complements results proved in Veretennikov (2006). While this class of models is clearly not as broad as one would like, it is useful in that when compared with the Lyapunov functions obtained for the models of Section 5 (where both methods apply), one can identify important correction terms that are lost by neglecting the correlations between particles.

Systems with slow adaptation

In this section we consider nonlinear Markov processes of the type introduced in Section 2 as limit systems, but with a structure we call slow adaptation. A related situation, in the context of nonlinear diffusions arising as limits of weakly interacting Itô diffusions, is studied in Veretennikov (2006), based on coupling arguments and hitting times (and not in terms of a Lyapunov function).

Recall that X = {1, . . . , d} with d ≥ 2 elements. Let Γ(p) be the rate matrix for a nonlinear Markov process, and assume that p ↦ Γ(p) is Lipschitz continuous for p ∈ P(X). Assume also that π ∈ P(X) is a fixed point of the corresponding ODE. Then for any α ∈ [0, 1], π is also a fixed point for the ODE defined by p' = p Γ(α(p − π) + π). The rate matrix Γ^α(p) = Γ(α(p − π) + π) corresponds to a version of the original system, but with slow adaptation when α > 0 is small. With α ∈ (0, 1] fixed, the rate matrices Γ^α(p), p ∈ P(X), determine a family of nonlinear Markov processes. The corresponding forward equation is

\[
\frac{d}{dt}\,p^\alpha(t) \;=\; p^\alpha(t)\,\Gamma^\alpha(p^\alpha(t)) \tag{4.1}
\]

with initial condition p(0) ∈ P(X). We are interested in the question of when stability of π can be guaranteed through slow adaptation.

Following the discussion of the last section, the mapping

\[
F(p) \;=\; R(p\|\pi) \tag{4.2}
\]

is a natural candidate for a Lyapunov function when α > 0 but small.

Proposition 4.1. There is α₀ > 0 such that if α ∈ [0, α₀], then for all t ≥ 0,

\[
\frac{d}{dt}\,F(p^\alpha(t)) \;\le\; 0,
\]

with equality if and only if p^α(t) = π.

Proof. Since p ↦ Γ(p) is Lipschitz continuous, there is C < ∞ such that for all x, y ∈ X, all α > 0, and all p ∈ P(X),

\[
\left|\Gamma^\alpha_{yx}(p) - \Gamma_{yx}(\pi)\right| \;\le\; \alpha\, C\, \|p - \pi\|,
\]

where ‖p − π‖ = Σ_x |p_x − π_x|. Using calculations similar to those in the proof of Lemma 3.1, the stationarity of π under Γ(π), and the fact that the rows of both Γ^α(p) and Γ(π) sum to zero, we obtain, writing p for p^α(t),

\[
\begin{aligned}
\frac{d}{dt}\,R\!\left(p^\alpha(t)\,\|\,\pi\right)
&= \sum_{x,y\in X} p_y \left[\log\frac{p_x}{\pi_x} + 1\right] \Gamma^\alpha_{yx}(p) \\
&= \sum_{x,y\in X:\,x\neq y} p_y \left[\log\frac{p_x \pi_y}{p_y \pi_x} - \frac{p_x \pi_y}{p_y \pi_x} + 1\right] \Gamma_{yx}(\pi)
\;+\; \sum_{x,y\in X:\,x\neq y} p_y \log\frac{p_x \pi_y}{p_y \pi_x}\left(\Gamma^\alpha_{yx}(p) - \Gamma_{yx}(\pi)\right).
\end{aligned}
\]

For x, y ∈ X with x ≠ y, set z = (p_x π_y)/(p_y π_x) and define

\[
\Phi_{yx}(p) \;=\; p_y \left(\log z - z + 1\right) \Gamma_{yx}(\pi),
\qquad
\Psi_{yx}(p) \;=\; p_y \log(z)\left(\Gamma^\alpha_{yx}(p) - \Gamma_{yx}(\pi)\right).
\]

It then suffices to find α₀ > 0 such that for all α ∈ [0, α₀],

\[
\sum_{x,y:\,x\neq y} \left(\Phi_{yx}(p) + \Psi_{yx}(p)\right) \;\le\; 0, \tag{4.3}
\]

with equality only for p = π. We first use the fact that s ↦ log(s) − s + 1 is strictly concave and nonpositive on [0, ∞), vanishing only at s = 1. Since P(X) is compact, this yields a constant c > 0, not depending on p, such that

\[
\sum_{x,y:\,x\neq y} \Phi_{yx}(p) \;\le\; -c\, \|p - \pi\|^2. \tag{4.4}
\]

Set γ_min = min{Γ_{yx}(π) : Γ_{yx}(π) > 0}; then γ_min > 0. Also π_min = min_{x∈X} π_x > 0, since π has no zero components. Fix x, y ∈ X with x ≠ y; either z ≥ 1/2 or z < 1/2. If z ≥ 1/2 then, since |log(s)| ≤ 2|s − 1| for all s ≥ 1/2,

\[
\left|\Psi_{yx}(p)\right| \;\le\; 2 p_y \left|\frac{p_x \pi_y}{p_y \pi_x} - 1\right| \left|\Gamma^\alpha_{yx}(p) - \Gamma_{yx}(\pi)\right|
\;=\; \frac{2}{\pi_x}\left|p_x \pi_y - p_y \pi_x\right| \left|\Gamma^\alpha_{yx}(p) - \Gamma_{yx}(\pi)\right|
\;\le\; \frac{2\alpha C}{\pi_{\min}}\left(\pi_y\,|p_x - \pi_x| + \pi_x\,|\pi_y - p_y|\right)\|p - \pi\|.
\]

If z < 1/2 and Γ^α_{yx}(p) − Γ_{yx}(π) ≥ 0, then Ψ_{yx}(p) ≤ 0 since log(z) < 0. If z < 1/2 and Γ^α_{yx}(p) − Γ_{yx}(π) < 0, then Γ_{yx}(π) > Γ^α_{yx}(p) ≥ 0, hence Γ_{yx}(π) ≥ γ_min, and, since log(s) − s + 1 ≤ 0 for all s ≥ 0 and ‖p − π‖ ≤ 2,

\[
\tfrac{1}{2}\Phi_{yx}(p) + \Psi_{yx}(p) \;\le\; p_y\left[\tfrac{1}{2}\left(\log z - z + 1\right)\gamma_{\min} + 2\alpha C\left|\log z\right|\right] \;\le\; 0
\]

whenever α ≤ γ_min/(16C), because −(log(z) − z + 1) ≥ (1/4)|log(z)| for z ∈ [0, 1/2). Consequently, writing Φ + Ψ = (1/2)Φ + ((1/2)Φ + Ψ) and recalling inequality (4.4), we have for α ∈ [0, γ_min/(16C)],

\[
\begin{aligned}
\sum_{x,y:\,x\neq y} \left(\Phi_{yx}(p) + \Psi_{yx}(p)\right)
&\le\; -\frac{c}{2}\,\|p - \pi\|^2 + \frac{2\alpha C}{\pi_{\min}}\,\|p - \pi\| \sum_{x,y:\,x\neq y}\left(\pi_y\,|p_x - \pi_x| + \pi_x\,|\pi_y - p_y|\right) \\
&\le\; -\frac{c}{2}\,\|p - \pi\|^2 + \frac{2\alpha C}{\pi_{\min}}\,\|p - \pi\|\left(\sum_{x\in X}|p_x - \pi_x| + \sum_{y\in X}|\pi_y - p_y|\right) \\
&\le\; -\frac{c}{2}\,\|p - \pi\|^2 + \frac{4\alpha C}{\pi_{\min}}\,\|p - \pi\|^2.
\end{aligned}
\]

The right-hand side is negative if α < min{γ_min/(16C), c π_min/(8C)} and p ≠ π, and zero for p = π. Choosing α₀ ∈ (0, min{γ_min/(16C), c π_min/(8C)}), we find that (4.3) holds, with equality if and only if p = π.

The bound on α₀ obtained in the proof is explicit in the sense that, given quantitative information on Γ(·) and π, admissible constants C and c can be found.
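Proposition 4.1 can be illustrated numerically. The sketch below builds the slowly adapting rates Γ^α(p) = Γ(α(p − π) + π) for a hypothetical two-state family with fixed point π = (1/2, 1/2), and checks that R(p^α(t)‖π) decreases along an Euler discretization of (4.1) for a small α.

```python
import math

PI = [0.5, 0.5]

def Gamma(p):
    """Hypothetical Lipschitz rate family on two states; rows sum to zero.
    At p = PI both off-diagonal rates equal 1.5, so PI * Gamma(PI) = 0."""
    return [[-(1.0 + p[1]), 1.0 + p[1]],
            [1.0 + p[0], -(1.0 + p[0])]]

def Gamma_alpha(p, alpha):
    """Slow-adaptation rates: evaluate Gamma at alpha*(p - PI) + PI."""
    q = [alpha * (px - pix) + pix for px, pix in zip(p, PI)]
    return Gamma(q)

def relative_entropy(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q) if px > 0)

alpha, dt = 0.1, 1e-3
p = [0.9, 0.1]
values = []
for _ in range(3000):
    values.append(relative_entropy(p, PI))
    G = Gamma_alpha(p, alpha)
    p = [p[x] + dt * sum(p[y] * G[y][x] for y in range(2)) for x in range(2)]
```

Along the discretized trajectory, R(p^α(t)‖π) should decrease monotonically to zero, consistent with (4.2) being a Lyapunov function.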

Weakly interacting Markov processes of Gibbs type

In this section we carry out the program outlined in Section 3 for a class of problems where the stationary distribution for the N-particle system is known. Subsection 5.1 introduces the class of weakly interacting Markov processes and the corresponding nonlinear Markov processes. The construction starts from the definition of the stationary distribution as a Gibbs measure for the N-particle system. Subsection 5.2 constructs candidate Lyapunov functions for the limit systems as limits of relative entropy. As we will see, (3.4) is suitable while (3.6) is not, and there are interesting terms that are not needed when one restricts to systems with slow adaptation as in the last section. In Subsection 5.3, the functions obtained from (3.4) are shown to be indeed strict Lyapunov functions for the limit systems. Subsection 5.4 contains a discussion of Tamura (1987), where the somewhat analogous situation of nonlinear Itô diffusions of Gibbs type is studied.

Recall that X is a finite set with d ≥ 2 elements. Let V : X → ℝ and W : X × X → ℝ be functions, representing the environment potential and the interaction potential, respectively. Assume throughout that W is zero on the diagonal, that is, W(x, x) = 0 for all x ∈ X. It turns out that the same models at both the prelimit and limit levels are obtained if W is replaced by its symmetrized version W̄, defined by W̄(x, y) = (W(x, y) + W(y, x))/2 for all (x, y) ∈ X × X.

Let β > 0 and N ∈ ℕ. The functions W, V induce a Gibbs distribution π^N with interaction parameter β on the N-particle state space X^N, given by

\[
\pi^N(\mathbf{x}) \;=\; \frac{1}{Z_N}\exp\!\left(-\sum_{i=1}^N V(x_i) \;-\; \frac{\beta}{N}\sum_{i=1}^N\sum_{j=1}^N W(x_i, x_j)\right), \tag{5.1}
\]

17

where

\[
Z_N \;=\; \sum_{\mathbf{x}\in X^N} \exp\!\left(-\sum_{i=1}^N V(x_i) \;-\; \frac{\beta}{N}\sum_{i=1}^N\sum_{j=1}^N W(x_i, x_j)\right).
\]

(The normalization constant Z_N is often called the partition function.) There are standard methods for identifying jump rates for which π^N is the stationary distribution of a process which has the structure of a weakly interacting N-particle system; the resulting dynamics are often called Glauber dynamics; see, for instance, Stroock (2005, Section 5.4). To start, define a function U_N : X^N → ℝ, the N-particle energy function, by

\[
U_N(\mathbf{x}) \;=\; \sum_{i=1}^N V(x_i) \;+\; \frac{\beta}{N}\sum_{i,j=1}^N W(x_i, x_j).
\]

Then for x, y ∈ X^N,

\[
\frac{\pi^N(\mathbf{x})}{\pi^N(\mathbf{y})} \;=\; e^{\,U_N(\mathbf{y}) - U_N(\mathbf{x})}. \tag{5.2}
\]
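For small d and N, the Gibbs distribution (5.1) and the ratio identity (5.2) can be checked by brute-force enumeration; the potentials used in the test are hypothetical.

```python
import itertools
import math

def energy(x, V, W, beta):
    """N-particle energy U_N(x) = sum_i V(x_i) + (beta/N) sum_{i,j} W(x_i, x_j)."""
    N = len(x)
    return (sum(V[s] for s in x)
            + (beta / N) * sum(W.get((a, b), 0.0) for a in x for b in x))

def gibbs(states, V, W, beta, N):
    """Gibbs distribution pi^N from (5.1) on X^N, by brute-force enumeration
    (feasible only for small d and N)."""
    weights = {x: math.exp(-energy(x, V, W, beta))
               for x in itertools.product(states, repeat=N)}
    Z = sum(weights.values())  # the partition function Z_N
    return {x: w / Z for x, w in weights.items()}
```

Because the energy is invariant under permutations of the particle indices, π^N is exchangeable, which the test below also checks.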

Let A = (α(x, y))_{x,y∈X} be a matrix with diagonal entries equal to zero and off-diagonal entries either one or zero; A will identify those states of a single particle that can be reached in one jump. Define matrices A_N, indexed by elements of X^N, according to A_N(x, y) = α(x_l, y_l) if x and y differ in exactly one index l ∈ {1, . . . , N}, and A_N(x, y) = 0 otherwise. Then A_N determines which states of the N-particle system can be reached in one jump, as discussed in Section 2. We assume that A is symmetric and irreducible; then A_N is symmetric and irreducible with values in {0, 1}.

Recall that W(x, x) = 0 for all x ∈ X. Let x, y ∈ X^N be such that A_N(x, y) = 1. Then x_l ≠ y_l for exactly one l ∈ {1, . . . , N}. Using this,

\[
\begin{aligned}
U_N(\mathbf{y}) - U_N(\mathbf{x})
&= \sum_{i=1}^N \left(V(y_i) - V(x_i)\right) + \frac{\beta}{N}\sum_{i,j=1}^N \left(W(y_i, y_j) - W(x_i, x_j)\right) \\
&= V(y_l) - V(x_l) - \frac{\beta}{N}\left(W(y_l, x_l) + W(x_l, y_l)\right) \\
&\qquad + \frac{\beta}{N}\sum_{j=1}^N \left(W(y_l, x_j) - W(x_l, x_j) + W(x_j, y_l) - W(x_j, x_l)\right) \\
&= V(y_l) - V(x_l) + 2\beta \sum_{z\in X} \left(\bar{W}(y_l, z) - \bar{W}(x_l, z)\right) \left(\mu^N_{\mathbf{x}} - \tfrac{1}{N}\delta_{x_l}\right)(\{z\}),
\end{aligned}
\]

where δ_{x_l} is the Dirac measure at x_l and μ^N_x is the empirical measure of x as in (2.2), i.e., μ^N_x({z}) = #{j ∈ {1, . . . , N} : x_j = z}/N. Let M₁(X) denote the set of subprobability measures on X. Define Λ : X × X × M₁(X) → ℝ by

\[
\Lambda(x, y, \mu) \;=\; V(y) - V(x) + 2\beta \sum_{z\in X} \left(\bar{W}(y, z) - \bar{W}(x, z)\right) \mu_z.
\]

Then for x, y ∈ X^N as above,

\[
U_N(\mathbf{y}) - U_N(\mathbf{x}) \;=\; \Lambda\!\left(x_l,\, y_l,\, \mu^N_{\mathbf{x}} - \tfrac{1}{N}\delta_{x_l}\right). \tag{5.3}
\]

We now define a rate matrix Γ^N that corresponds to Glauber dynamics for π^N. For x, y ∈ X^N, x ≠ y, set either

\[
\Gamma^N(\mathbf{x}, \mathbf{y}) \;=\; e^{-\left(U_N(\mathbf{y}) - U_N(\mathbf{x})\right)^+} A_N(\mathbf{x}, \mathbf{y}) \tag{5.4a}
\]

or

\[
\Gamma^N(\mathbf{x}, \mathbf{y}) \;=\; \left(1 + e^{\,U_N(\mathbf{y}) - U_N(\mathbf{x})}\right)^{-1} A_N(\mathbf{x}, \mathbf{y}) \tag{5.4b}
\]

or

\[
\Gamma^N(\mathbf{x}, \mathbf{y}) \;=\; \frac{1}{2}\left(1 + e^{-\left(U_N(\mathbf{y}) - U_N(\mathbf{x})\right)}\right) A_N(\mathbf{x}, \mathbf{y}), \tag{5.4c}
\]

and in each case set Γ^N(x, x) = −Σ_{y≠x} Γ^N(x, y), x ∈ X^N. The first choice is of Metropolis type. Below we consider only the model (5.4a), the analysis for the other dynamics being completely analogous.

The matrix Γ^N is thus a rate matrix on X^N. By (5.3),

\[
\Gamma^N(\mathbf{x}, \mathbf{y}) \;=\; e^{-\Lambda\left(x_l,\, y_l,\, \mu^N_{\mathbf{x}} - \frac{1}{N}\delta_{x_l}\right)^+} A_N(\mathbf{x}, \mathbf{y})
\]

for x, y ∈ X^N such that x_l ≠ y_l for some l ∈ {1, . . . , N} and x_j = y_j for j ≠ l. The rates depend only on the states of the jumping component and the empirical measure μ^N_x, and so for each of the three choices in (5.4), Γ^N generates a weakly interacting N-particle system of the form considered in Section 2. Indeed, in the notation of that section, Γ^N is defined in terms of a function Γ^N : X × X × P_N(X) → [0, ∞) given by

\[
\Gamma^N(x, y, p) \;=\; e^{-\Lambda\left(x,\, y,\, p - \frac{1}{N}\delta_x\right)^+} \alpha(x, y).
\]

By symmetry, A_N(x, y) = A_N(y, x). Taking into account Eq. (5.2), it is easy to see that for any of the three choices of Γ^N according to (5.4) we have π^N(x)Γ^N(x, y) = π^N(y)Γ^N(y, x). Thus Γ^N satisfies the detailed balance condition with respect to π^N, and π^N is its unique stationary distribution.
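The detailed balance property of the three rate choices in (5.4) can be verified mechanically. The sketch below does so for a tiny two-particle system with hypothetical energy values U_N(x).

```python
import math

# Hypothetical energies U_N(x) for a two-particle system on X = {1, 2}.
U = {(1, 1): 0.0, (1, 2): 0.7, (2, 1): 0.7, (2, 2): -0.3}
pi = {x: math.exp(-u) for x, u in U.items()}
Z = sum(pi.values())
pi = {x: w / Z for x, w in pi.items()}

def neighbors(x):
    """Configurations differing from x in exactly one component."""
    return [y for y in U if sum(a != b for a, b in zip(x, y)) == 1]

def rate_a(x, y):  # (5.4a): exp(-(U(y) - U(x))^+)
    return math.exp(-max(U[y] - U[x], 0.0))

def rate_b(x, y):  # (5.4b): (1 + exp(U(y) - U(x)))^{-1}
    return 1.0 / (1.0 + math.exp(U[y] - U[x]))

def rate_c(x, y):  # (5.4c): (1/2)(1 + exp(-(U(y) - U(x))))
    return 0.5 * (1.0 + math.exp(-(U[y] - U[x])))

# Verify pi(x) rate(x, y) = pi(y) rate(y, x) for all admissible pairs.
for rate in (rate_a, rate_b, rate_c):
    for x in U:
        for y in neighbors(x):
            assert abs(pi[x] * rate(x, y) - pi[y] * rate(y, x)) < 1e-12
```

All three choices balance the same Gibbs measure; they differ only in the speed assigned to energetically favorable versus unfavorable jumps.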

To illustrate, we consider a model with three states and near-neighbor transitions. Thus X = {1, 2, 3}, and α(x, y) = 1 if |x − y| = 1 and zero otherwise. Let r₁, r₂ ∈ (0, 1), w₃ ∈ ℝ. Define functions V, W according to

\[
V(1) = 0, \qquad V(2) = -\log(r_1), \qquad V(3) = -\log(r_2) - \log(r_1),
\]
\[
W(3, 2) = W(2, 3) = w_3, \qquad W(x, y) = 0 \text{ otherwise.}
\]

Then Λ(2, 1, p) = −Λ(1, 2, p) = log(r₁) − 2βw₃p₃, and similarly for the other admissible transitions. For the model (5.4a), if w₃ ≥ 0 and r₂ < e^{−2βw₃}, then as N → ∞ the transition rates will asymptotically be

\[
\begin{aligned}
r_1\, e^{-2\beta w_3 p_3} &\qquad \text{for a transition from } x_l = 1 \text{ to } y_l = 2, \\
r_2\, e^{-2\beta w_3 (p_2 - p_3)} &\qquad \text{for a transition from } x_l = 2 \text{ to } y_l = 3, \\
1 &\qquad \text{for a transition from } x_l = 2 \text{ to } y_l = 1, \\
1 &\qquad \text{for a transition from } x_l = 3 \text{ to } y_l = 2.
\end{aligned}
\]

For the model (5.4c), the transition rates will asymptotically be

\[
\begin{aligned}
\tfrac{1}{2}\left(1 + r_1\, e^{-2\beta w_3 p_3}\right) &\qquad \text{for a transition from } x_l = 1 \text{ to } y_l = 2, \\
\tfrac{1}{2}\left(1 + r_2\, e^{-2\beta w_3 (p_2 - p_3)}\right) &\qquad \text{for a transition from } x_l = 2 \text{ to } y_l = 3, \\
\tfrac{1}{2}\left(1 + r_1^{-1}\, e^{2\beta w_3 p_3}\right) &\qquad \text{for a transition from } x_l = 2 \text{ to } y_l = 1, \\
\tfrac{1}{2}\left(1 + r_2^{-1}\, e^{2\beta w_3 (p_2 - p_3)}\right) &\qquad \text{for a transition from } x_l = 3 \text{ to } y_l = 2,
\end{aligned}
\]

where p denotes the current value of the limiting empirical measure.

For p ∈ P(X), let Γ(p) = (Γ_{x,y}(p))_{x,y∈X} be given by

\[
\Gamma_{x,y}(p) \;=\; e^{-\Lambda(x, y, p)^+}\, \alpha(x, y), \qquad x \neq y, \tag{5.5}
\]

and with diagonal entries Γ_{x,x}(p) = −Σ_{y≠x} Γ_{x,y}(p), x ∈ X. If p ∈ P(X) is fixed, then Γ(p) is the generator of an ergodic finite-state Markov process, and its unique invariant distribution on X is given by π(p) with

\[
\pi(p)_x \;=\; \frac{1}{Z(p)}\exp\!\left(-V(x) - 2\beta \sum_{y\in X} \bar{W}(x, y)\, p_y\right), \tag{5.6}
\]

where

\[
Z(p) \;=\; \sum_{x\in X} \exp\!\left(-V(x) - 2\beta \sum_{y\in X} \bar{W}(x, y)\, p_y\right).
\]
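The frozen-measure invariant distribution (5.6) is straightforward to compute and, since a fixed point of (2.5) must satisfy p = π(p) when each Γ(p) is ergodic, iterating p ← π(p) is one simple way to search for fixed points. The parameter values below are hypothetical.

```python
import math

def pi_of_p(V, Wbar, beta, p):
    """Invariant distribution pi(p) from (5.6) for a frozen measure p.

    V: list of potentials; Wbar: symmetric d x d interaction matrix."""
    d = len(V)
    w = [math.exp(-V[x] - 2.0 * beta * sum(Wbar[x][y] * p[y] for y in range(d)))
         for x in range(d)]
    Z = sum(w)  # Z(p)
    return [wx / Z for wx in w]

# Hypothetical two-state example: iterate p <- pi(p) to look for a
# fixed point of the limit dynamics (which satisfies p = pi(p)).
V = [0.0, 0.5]
Wbar = [[0.0, 0.2], [0.2, 0.0]]
p = [0.5, 0.5]
for _ in range(200):
    p = pi_of_p(V, Wbar, beta=1.0, p=p)
```

For weak interaction the iteration is a contraction and converges quickly; for strong interaction the map p ↦ π(p) can have several fixed points, consistent with the multiple-fixed-point phenomena discussed above.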

Let X^N = (X^{1,N}, . . . , X^{N,N}) be an X^N-valued càdlàg Markov process with generator Γ^N. As in Section 2, μ^N is the empirical measure process corresponding to X^N. Theorem 2.1 implies that the sequence (μ^N)_{N∈ℕ} of D([0, ∞), P(X))-valued random variables satisfies a law of large numbers with limit determined by Eq. (2.5), the nonlinear forward equation. More precisely, if μ^N(0) converges in distribution to q ∈ P(X) as N → ∞, then μ^N(·) converges in distribution to p(·), the unique solution of

\[
\frac{d}{dt}\,p(t) \;=\; p(t)\,\Gamma(p(t)), \qquad p(0) = q.
\]

Thus the nonlinear generator Γ(·) of the limit dynamics is given according to (5.5).

We will construct a candidate Lyapunov function for the nonlinear limit model as the limit of normalized relative entropies according to (3.4). To this end, for N ∈ ℕ, define F_N : P(X) → [0, ∞] by

\[
F_N(p) \;=\; \frac{1}{N}\, R\!\left(\otimes^N p \,\|\, \pi^N\right). \tag{5.7}
\]

Theorem 5.2. Let F_N be given by (5.7). Then there is a constant C ∈ R such that for all p ∈ P(X),

    lim_{N→∞} F_N(p) = Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} V(x) p_x + Σ_{x,y∈X} W(x, y) p_x p_y − C.

Proof. Let p ∈ P(X). Recalling that π_N(x) = (1/Z_N) exp(−Σ_{i=1}^N V(x_i) − (1/N) Σ_{i,j=1}^N W(x_i, x_j)) for x ∈ X^N, we have

    F_N(p) = (1/N) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) log((Π_{i=1}^N p_{x_i}) / π_N(x))
           = (1/N) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) (Σ_{i=1}^N log(p_{x_i})) + (1/N) log(Z_N)
             + (1/N) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) (Σ_{i=1}^N V(x_i) + (1/N) Σ_{i,j=1}^N W(x_i, x_j)).

Let (X_i)_{i∈N} be a sequence of i.i.d. X-valued random variables with common distribution p. Then

    (1/N) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) (Σ_{i=1}^N log(p_{x_i})) = E[(1/N) Σ_{i=1}^N log(p_{X_i})] = E[log(p_{X_1})],

    (1/N) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) (Σ_{i=1}^N V(x_i)) = E[(1/N) Σ_{i=1}^N V(X_i)] = E[V(X_1)],

and, since W vanishes on the diagonal,

    (1/N²) Σ_{x∈X^N} (Π_{i=1}^N p_{x_i}) (Σ_{i,j=1}^N W(x_i, x_j)) = E[(1/N²) Σ_{i,j=1}^N W(X_i, X_j)]
        = ((N² − N)/N²) E[W(X_1, X_2)] → E[W(X_1, X_2)]    as N → ∞.

It remains to compute the limit of (1/N) log(Z_N). Define the continuous mapping Φ : P(X) → R by

    Φ(q) := Σ_{x,y∈X} W(x, y) q_x q_y.

Let (Y_i)_{i∈N} be i.i.d. X-valued random variables with common distribution given by ρ_x := e^{−V(x)} / Z, Z := Σ_{x∈X} e^{−V(x)}, and let L^N denote the empirical measure of (Y_1, . . . , Y_N). Then log(Z_N) = log E[exp(−N Φ(L^N))] + N log(Z). By Sanov's theorem and the Laplace principle for exponential integrals (Dupuis and Ellis, 1997), it follows that

    lim_{N→∞} (1/N) log(Z_N) = − inf_{q∈P(X)} {R(q‖ρ) + Φ(q)} + log(Z) =: −C.

Note that C is finite and does not depend on p. Recalling that X_1, X_2 have common distribution p, we find

    lim_{N→∞} F_N(p) = Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} V(x) p_x + Σ_{x,y∈X} W(x, y) p_x p_y − C.

By (5.6),

    R(p‖π(p)) = log(Z(p)) + Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} V(x) p_x + 2 Σ_{x,y∈X} W(x, y) p_x p_y,

and therefore

    lim_{N→∞} F_N(p) = R(p‖π(p)) − log(Z(p)) − Σ_{x,y∈X} W(x, y) p_x p_y − C.
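The convergence in Theorem 5.2 can be observed directly for d = 2, where the sum over X^N collapses to a sum over the number k of particles in state 1, and where differences F_N(p) − F_N(q) make the unknown constant C cancel. The potentials below are illustrative assumptions:

```python
import math

# Numerical illustration of Theorem 5.2 for d = 2 (states {1, 2}).  The
# potentials v and w12 are illustrative; differences F_N(p) - F_N(q) are
# compared so that the constant C cancels.
v = [0.0, 0.5]        # V(1), V(2)
w12 = 0.8             # W(1,2) = W(2,1); W is zero on the diagonal

def F(p):
    """Limit function (5.9): sum p log p + sum V p + sum_{x,y} W(x,y) p_x p_y."""
    p1, p2 = p
    return (p1 * math.log(p1) + p2 * math.log(p2)
            + v[0] * p1 + v[1] * p2 + 2 * w12 * p1 * p2)

def F_N(p, N):
    """(1/N) R(p^{tensor N} || pi_N), summing over the occupation number k of state 1."""
    p1, p2 = p
    E = lambda k: k * v[0] + (N - k) * v[1] + (2.0 / N) * k * (N - k) * w12
    # log Z_N by log-sum-exp over k, with binomial multiplicities
    lw = [math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1) - E(k)
          for k in range(N + 1)]
    m = max(lw)
    logZN = m + math.log(sum(math.exp(t - m) for t in lw))
    s = 0.0
    for k in range(N + 1):
        pmf = math.comb(N, k) * p1 ** k * p2 ** (N - k)
        s += pmf * (k * math.log(p1) + (N - k) * math.log(p2) + E(k))
    return (s + logZN) / N
```

For this two-state case the error in the differences decays like 1/N, since the only N-dependent terms besides (1/N) log Z_N are the diagonal corrections to the interaction sum.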

The constant C in Theorem 5.2 does not depend on p. This suggests dropping the constant and taking the function

    F(p) := R(p‖π(p)) − log(Z(p)) − Σ_{x,y∈X} W(x, y) p_x p_y                 (5.8)

as the candidate Lyapunov function, so that F(p) = lim_{N→∞} F_N(p) + C. By the proof of Theorem 5.2, F also takes the form

    F(p) = Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} V(x) p_x + Σ_{x,y∈X} W(x, y) p_x p_y.    (5.9)

As a function of p, F is the sum of a strictly convex term, a linear term, and a quadratic term. This fact is useful in determining whether the fixed points of the forward equation are local minima of F on P(X).
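A quick numerical consistency check that (5.8) and (5.9) define the same function (with illustrative V and W; the helper names are hypothetical):

```python
import math

# Sketch checking that (5.8) and (5.9) agree; V and W are illustrative
# choices, not taken from the text.
d = 3
V = [0.0, 0.4, 1.1]
W = [[0.0, 0.3, 0.1], [0.3, 0.0, 0.5], [0.1, 0.5, 0.0]]

def pi_and_logZ(p):
    E = [V[x] + 2 * sum(W[x][y] * p[y] for y in range(d)) for x in range(d)]
    Z = sum(math.exp(-e) for e in E)
    return [math.exp(-E[x]) / Z for x in range(d)], math.log(Z)

def relent(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q))

def F_via_58(p):
    """F(p) = R(p || pi(p)) - log Z(p) - sum_{x,y} W(x,y) p_x p_y, as in (5.8)."""
    pip, logZ = pi_and_logZ(p)
    quad = sum(W[x][y] * p[x] * p[y] for x in range(d) for y in range(d))
    return relent(p, pip) - logZ - quad

def F_via_59(p):
    """F(p) = sum p log p + sum V p + sum_{x,y} W(x,y) p_x p_y, as in (5.9)."""
    quad = sum(W[x][y] * p[x] * p[y] for x in range(d) for y in range(d))
    return (sum(px * math.log(px) for px in p)
            + sum(V[x] * p[x] for x in range(d)) + quad)
```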

Identify X with {1, . . . , d} and P(X) with the simplex {p ∈ R^d : p_x ≥ 0 and Σ_{x=1}^d p_x = 1}. Let ∂_x denote ∂/∂p_x. From (5.9) it follows that for p ∈ P(X) such that p_x > 0 for all x ∈ X,

    ∂_x F(p) = log(p_x) + 1 + V(x) + 2 Σ_{y≠x} W(x, y) p_y.                   (5.10)

The gradient determines the directional derivatives of F along the simplex, and we will use it to characterize the fixed points. Recall that the tangent space of P(X) can be identified with

    P_tan(X) := {θ ∈ R^d : Σ_{i=1}^d θ_i = 0}.

The directional derivative of F at p in any direction θ ∈ P_tan(X) is given by

    D_θ F(p) = Σ_{x∈X} θ_x (log(p_x) + V(x) + 2 Σ_{y≠x} W(x, y) p_y);

the constant 1 in (5.10) drops out since Σ_{x∈X} θ_x = 0. We say that the derivative of F vanishes at p in all directions in P_tan(X) if D_θ F(p) = 0 for all θ ∈ P_tan(X).

Theorem 5.3. A distribution p ∈ P(X) is an equilibrium point of Eq. (2.5) if and only if D_θ F(p) = 0 for all θ ∈ P_tan(X).

Proof. First observe that p is an equilibrium point for Eq. (2.5) if and only if p Γ(p) = 0, which can be true if and only if p = π(p). If p = π(p), then p_x > 0 for all x ∈ X ≅ {1, . . . , d}, and so (5.10) holds. Let x, y ∈ X, x ≠ y. Then by (5.10),

    (∂_x − ∂_y) F(p) = log(p_x) + V(x) + 2 Σ_{z≠x} W(x, z) p_z
                       − log(p_y) − V(y) − 2 Σ_{z≠y} W(y, z) p_z.             (5.11)

Recall that (∂_x − ∂_y) is the derivative in direction v ∈ P_tan(X) with v_x = 1, v_y = −1, and v_z = 0 for z ∈ X \ {x, y}. If p = π(p), then, by (5.11) and the definition of π(p) in (5.6), (∂_i − ∂_j) F(p) = 0 for all i ≠ j, and hence D_v F(p) = 0 for all v ∈ P_tan(X).

Conversely, suppose that D_v F(p) = 0 for all v ∈ P_tan(X). In view of (5.1), the definition of π(p) in (5.6), and (5.11), it follows that for all i, j ∈ {1, . . . , d}, i ≠ j,

    log(p_i / p_j) = log(π(p)_i / π(p)_j),

which implies p = π(p) since both p and π(p) are elements of P(X).

By Theorem 5.3, the equilibrium points of Eq. (2.5) are precisely the critical points of F on P(X). If there is no interaction (i.e., if W ≡ 0), then π(p) does not depend on p, and Eq. (2.5) has a unique equilibrium point, namely the unique stationary distribution associated with the rate matrix Γ. In general, Eq. (2.5) can have multiple equilibria, stable and unstable. Let us illustrate this by a simple example.

Example 5.4. Assume that d = 2, V ≡ 0, and W(1, 2) = W(2, 1) = β for some β > 0. Then F(p) = f(p_1) with

    f(x) := x log(x) + (1 − x) log(1 − x) + 2β(1 − x)x,    x ∈ [0, 1].

The critical points of F on P(X) correspond to the critical points of f on [0, 1]. Clearly, f(0) = 0 = f(1), and for x ∈ (0, 1),

    f″(x) = 1/x + 1/(1 − x) − 4β.

If β ≤ 1, then f has exactly one critical point, namely a global minimum at x = 1/2. If β > 1, then there are three critical points, one local maximum at x = 1/2 and two minima at x* and 1 − x*, respectively, for some x* ∈ (0, 1/2), where x* → 1/2 as β ↘ 1 and x* → 0 as β goes to infinity.

As a consequence of Theorem 5.3 and Theorem 5.5 below, if β > 1, then the two minima of f correspond to stable equilibria of the forward equation, while the local maximum at x = 1/2 corresponds to an unstable equilibrium. Moreover, Eq. (2.5) then has exactly three equilibrium points.
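The critical-point structure described in Example 5.4 can be verified numerically, for instance with β = 2; the bisection helper below is a hypothetical name, not from the text:

```python
import math

# Numerical check of Example 5.4 (a sketch; helper names are hypothetical).
# For beta > 1, f has a local maximum at x = 1/2 and two symmetric minima;
# bisection on f' locates the minimum lying in (0, 1/2).
def f(x, beta):
    return x * math.log(x) + (1 - x) * math.log(1 - x) + 2 * beta * (1 - x) * x

def fprime(x, beta):
    return math.log(x / (1 - x)) + 2 * beta * (1 - 2 * x)

def bisect(g, a, b, tol=1e-12):
    """Bisection for a sign change of g on [a, b]."""
    ga = g(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if ga * g(m) <= 0:
            b = m
        else:
            a, ga = m, g(m)
    return 0.5 * (a + b)

beta = 2.0
# f' is negative near 0 and positive just left of 1/2 (since f''(1/2) = 4 - 4 beta < 0)
x_star = bisect(lambda x: fprime(x, beta), 1e-9, 0.5 - 1e-6)
```

By the symmetry f(x) = f(1 − x), the second minimum sits at 1 − x*.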

The function F is indeed a Lyapunov function for the dynamics of the limit model as given by Eq. (2.5), the nonlinear forward equation.

Theorem 5.5. Let p(·) be a solution of Eq. (2.5) with some initial distribution p(0) ∈ P(X). Then for all t ≥ 0,

    d/dt F(p(t)) = (d/dt R(p(t)‖π(q)))|_{q=p(t)} ≤ 0.

Moreover, d/dt F(p(t)) = 0 if and only if p(t) is an equilibrium point of Eq. (2.5).

Proof. We first show that if p(·) solves Eq. (2.5) with p(0) = q, then

    (d/dt F(p(t)))|_{t=0} = (d/dt R(p(t)‖π(q)))|_{t=0}.                       (5.12)

In view of the semigroup property of solutions to the ODE (2.5), it then follows that d/dt F(p(t)) equals d/dt R(p(t)‖π(q)) evaluated at q = p(t), for all t ≥ 0.

Let p(0) = q. By Eq. (5.9) and since Σ_{x∈X} p′_x(0) = 0,

    (d/dt F(p(t)))|_{t=0} = Σ_{x∈X} log(q_x) p′_x(0) + Σ_{x∈X} V(x) p′_x(0)
        + Σ_{x,y∈X} W(x, y) p′_x(0) q_y + Σ_{x,y∈X} W(x, y) p′_y(0) q_x.

On the other hand, by the definition of π(q) in (5.6) and again Σ_{x∈X} p′_x(0) = 0,

    (d/dt R(p(t)‖π(q)))|_{t=0} = Σ_{x∈X} log(q_x) p′_x(0) + Σ_{x∈X} V(x) p′_x(0)
        + Σ_{x,y∈X} W(x, y) p′_x(0) q_y + Σ_{x,y∈X} W(y, x) p′_x(0) q_y.

Since W is symmetric, Σ_{x,y∈X} W(x, y) p′_y(0) q_x = Σ_{x,y∈X} W(y, x) p′_x(0) q_y, and hence (5.12) holds.

The rest of the assertion follows from the observation that π(q) is the unique stationary distribution of the ergodic Markov process with generator Γ(q), and from the strict Lyapunov property of relative entropy in the case of ergodic Markov processes.
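The descent property of Theorem 5.5 can be observed by integrating the forward equation with an explicit Euler scheme and monitoring F from (5.9). The rates (heat-bath form), potentials, and step size below are illustrative assumptions:

```python
import math

# Sketch of the descent property: integrate dp/dt = p Gamma(p) by Euler steps
# and check that F(p(t)) from (5.9) decreases.  V, W, the heat-bath rates,
# and the step size are illustrative assumptions.
d = 3
V = [0.0, 0.6, 1.2]
W = [[0.0, 0.4, 0.1], [0.4, 0.0, 0.3], [0.1, 0.3, 0.0]]

def energy(x, p):
    return V[x] + 2 * sum(W[x][z] * p[z] for z in range(d))

def generator(p):
    G = [[0.0] * d for _ in range(d)]
    for x in range(d):
        for y in range(d):
            if x != y:
                G[x][y] = 1.0 / (1.0 + math.exp(energy(y, p) - energy(x, p)))
        G[x][x] = -sum(G[x][y] for y in range(d) if y != x)
    return G

def F(p):
    return (sum(px * math.log(px) for px in p)
            + sum(V[x] * p[x] for x in range(d))
            + sum(W[x][y] * p[x] * p[y] for x in range(d) for y in range(d)))

def euler_orbit(p, dt=0.005, steps=8000):
    """Euler discretization of the forward equation; returns the endpoint and F-values."""
    values = [F(p)]
    for _ in range(steps):
        G = generator(p)
        p = [p[x] + dt * sum(p[z] * G[z][x] for z in range(d)) for x in range(d)]
        values.append(F(p))
    return p, values
```

Because the rows of the generator sum to zero, the Euler updates preserve total mass exactly, and the trajectory approaches a point p with p = π(p).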

Remark 5.6. It was noted in Subsection 3.2 that the reversed form of relative entropy may also give rise to Lyapunov functions. Set

    F̌(p) := lim_{N→∞} (1/N) R(π_N ‖ ⊗^N p).

Suppose that p ↦ Σ_{x,y∈X} W(x, y) p_x p_y is convex and that p̄ is the unique solution to p Γ(p) = 0 in P(X). Using the Lyapunov function F it is not hard to check that under π_N the empirical measure converges in probability to p̄, and that

    F̌(p) = R(p̄‖p) + log(Z(p̄)).

In contrast to F, the function F̌ thus depends on p only through the relative entropy of the fixed point p̄ with respect to p.

Remark 5.7. Consider the slow adaptation setting of Section 4 for the limit models of this section. With Γ(p), p ∈ P(X), as above, we thus start from a family of rate matrices Γ^α(p), p ∈ P(X), defined as follows. For α ∈ [0, 1], p ∈ P(X), set Γ^α(p) := Γ(p* + α(p − p*)), with p* as in Section 4. The rate matrices Γ^α(p) are again of Gibbs type, that is, with different potentials in place of V, W, namely V^{α,p*}, W^α given by

    V^{α,p*}(x) := V(x) + 2(1 − α) Σ_{z∈X} W(x, z) p*_z,    W^α(x, y) := α W(x, y).

Fix α ∈ [0, 1]. The candidate Lyapunov function associated with Γ^α is

    F^α(p) := Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} V^{α,p*}(x) p_x + Σ_{x,y∈X} W^α(x, y) p_x p_y
            = Σ_{x∈X} p_x log(p_x) + Σ_{x∈X} (V(x) + 2(1 − α) Σ_{z∈X} W(x, z) p*_z) p_x
              + α Σ_{x,y∈X} W(x, y) p_x p_y,

and F^α is a Lyapunov function for

    d/dt p(t) = p(t) Γ^α(p(t)),

the forward equation associated with Γ^α(p), p ∈ P(X). Proposition 4.1, on the other hand, implies that F̃(p) := R(p‖p*) is also a strict Lyapunov function when α is positive but sufficiently small. Note that

    R(p‖p*) = Σ_{x∈X} p_x log(p_x) − Σ_{x∈X} p_x log(p*_x)
            = Σ_{x∈X} p_x log(p_x) + log(Z(p*)) + Σ_{x∈X} (V(x) + 2 Σ_{z∈X} W(x, z) p*_z) p_x,

which is equal to F^α(p) + log(Z(p*)) for α = 0. Observe that the term log(Z(p*)) does not depend on p.
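The identity R(p‖p*) = F^α(p) + log(Z(p*)) at α = 0 can be checked once p* = π(p*) is available; the sketch below computes p* by fixed-point iteration (assumed to converge for the small illustrative W chosen here; all names are hypothetical):

```python
import math

# Sketch of the identity R(p || p*) = F^0(p) + log Z(p*) from Remark 5.7,
# where p* is a fixed point p* = pi(p*).  V and W are illustrative.
d = 3
V = [0.0, 0.5, 1.0]
W = [[0.0, 0.2, 0.1], [0.2, 0.0, 0.3], [0.1, 0.3, 0.0]]

def pi(p):
    E = [V[x] + 2 * sum(W[x][z] * p[z] for z in range(d)) for x in range(d)]
    Z = sum(math.exp(-e) for e in E)
    return [math.exp(-E[x]) / Z for x in range(d)]

# fixed-point iteration p <- pi(p); assumed contractive for this small W
p_star = [1.0 / d] * d
for _ in range(500):
    p_star = pi(p_star)

def logZ(p):
    E = [V[x] + 2 * sum(W[x][z] * p[z] for z in range(d)) for x in range(d)]
    return math.log(sum(math.exp(-e) for e in E))

def F0(p):
    """F^alpha at alpha = 0: potential V(x) + 2 sum_z W(x,z) p*_z, no interaction term."""
    return (sum(p[x] * math.log(p[x]) for x in range(d))
            + sum((V[x] + 2 * sum(W[x][z] * p_star[z] for z in range(d))) * p[x]
                  for x in range(d)))

def relent(p, q):
    return sum(px * math.log(px / qx) for px, qx in zip(p, q))
```

The identity is exact at the fixed point, since −log(p*_x) = log(Z(p*)) + V(x) + 2 Σ_z W(x, z) p*_z.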

A situation analogous to that of this section is considered in Tamura (1987), where the author studies the long-time behavior of nonlinear Itô-McKean diffusions of the form

    dX(t) = −∇V(X(t)) dt − 2 ∫_{R^d} ∇_1 W(X(t), y) μ_t(dy) dt + √2 dB(t),    (5.13)

where μ_t = Law(X(t)), with the function V : R^d → R the environment potential, and W, a symmetric function R^d × R^d → R with zero diagonal, the interaction potential. Signs and constants have been chosen in analogy with the finite-state models considered here. Solutions of Eq. (5.13) arise as weak limits of the empirical measure processes associated with weakly interacting Itô diffusions. The N-particle systems are described by

    dX^{i,N}(t) = −∇V(X^{i,N}(t)) dt − (2/N) Σ_{j=1}^N ∇_1 W(X^{i,N}(t), X^{j,N}(t)) dt + √2 dB^i(t),

where B^1, . . . , B^N are independent standard R^d-valued Brownian motions, and ∇_1 denotes gradient with respect to the first R^d-valued variable.

In Tamura (1987), a functional F : P(R^d) → [0, ∞] is identified that serves as a Lyapunov function. This function, which appears to be a close analogue of the functions derived in this section as the limits of relative entropies, takes the following form: if μ is absolutely continuous with respect to Lebesgue measure and of the form f(x)dx, then

    F(μ) := ∫ log(f(x)) f(x) dx + ∫ V(x) μ(dx) + ∫∫ W(x, y) μ(dx) μ(dy);     (5.14)

otherwise F(μ) := ∞.

There are nonetheless differences between the diffusion setting and the finite-state setting; one of these is the form taken by the derivative of the composition of the Lyapunov function with the solution to the forward equation. In Tamura (1987) the descent property is established by expressing the orbital derivatives of F in terms of the linear dynamics obtained when the measure argument μ_t is frozen at ν ∈ P(R^d). In contrast, in our case the orbital derivative of the Lyapunov function is expressed through the relative entropy with respect to the invariant distribution π(p) with p frozen. The identity (5.12) in fact carries over to the diffusion case in the sense that for all t ≥ 0,

    d/dt F(μ_t) = (d/dt R(μ_t ‖ π(ν)))|_{ν=μ_t},                              (5.15)

where μ_t = Law(X(t)), X a solution of Eq. (5.13), and π(ν) ∈ P(R^d) is given by

    π(ν)(dx) := (1/Z(ν)) exp(−V(x) − 2 ∫_{R^d} W(x, y) ν(dy)) dx,            (5.16)

in analogy with π(p) ∈ P(X) defined in (5.6); this parallels the characterization of equilibria in Theorem 5.3. On the other hand, the relationship between the orbital derivative of the Lyapunov function and the Donsker-Varadhan rate function as established in Tamura (1987) for the diffusion case does not carry over to the Gibbs models studied above.

Concluding remarks

In this paper we have exploited a connection between nonlinear Markov processes and approximating N-particle systems to construct candidate Lyapunov functions for the nonlinear processes as limits of normalized relative entropies. Explicit identification was carried out for two classes of models, based on very different approximations. However, there are other quite natural approximations that should be considered so that the technique can be developed more broadly. The goal of these approximations should be to bring in more information on the correlations between particles when the N-particle system is quasi-stationary, but hopefully without requiring that full information on the stationary distribution be available. For example, as an improvement on the very crude product form approximation used for the case of slow adaptation, one might consider the mapping

    p ↦ lim_{N→∞} (1/N) R(⊗^N p ‖ q_N(T)),

where q_N(·) is the solution of the forward equation of the N-particle process with q_N(0) = ⊗^N p̄, p̄ a fixed point of Γ(·), and T > 0 a fixed time.

References

N. Antunes, C. Fricker, P. Robert, and D. Tibi. Stochastic networks with multiple stable points. Ann. Probab., 36(1):255-278, 2008.

M. Benaïm and J.-Y. Le Boudec. A class of mean field interaction models for computer and communication systems. Performance Evaluation, 65(11-12):823-838, 2008.

P. Dai Pra, W. J. Runggaldier, E. Sartori, and M. Tolotti. Large portfolio losses: A dynamic contagion model. Ann. Appl. Probab., 19(1):347-394, 2009.

D. A. Dawson, J. Tang, and Y. Q. Zhao. Balancing queues by mean field interaction. Queueing Syst., 49(3-4):335-361, 2005.

P. Dupuis and R. S. Ellis. A Weak Convergence Approach to the Theory of Large Deviations. Wiley, New York, 1997.

T. D. Frank. Lyapunov and free energy functionals of generalized Fokker-Planck equations. Phys. Lett. A, 290(1-2):93-100, 2001.

A. D. Gottlieb. Markov Transitions and the Propagation of Chaos. PhD thesis, University of California, Berkeley, 1998.

C. Graham. McKean-Vlasov Itô-Skorohod equations, and nonlinear diffusions with discrete jump sets. Stochastic Process. Appl., 40(1):69-82, 1992.

C. Graham and P. Robert. Interacting multi-class transmissions in large stochastic networks. Ann. Appl. Probab., 19(6):2334-2361, 2009.

M. Kac. Foundations of kinetic theory. In Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, volume 3, pages 171-197, Berkeley, California, 1956.

T. G. Kurtz. Solutions of ordinary differential equations as limits of pure jump Markov processes. J. Appl. Probab., 7(1):49-58, 1970.

S. N. Laughton and A. C. C. Coolen. Macroscopic Lyapunov functions for separable stochastic neural networks with detailed balance. J. Statist. Phys., 80(1-2):375-387, 1995.

F. Martinelli. Lectures on Glauber dynamics for discrete spin models. In P. Bernard, editor, Lectures on probability theory and statistics (Saint-Flour XXVII, 1997), volume 1717 of Lecture Notes in Mathematics, pages 93-191. Springer, Berlin, 1999.

D. R. McDonald and J. Reynier. Mean field convergence of a model of multiple TCP connections through a buffer implementing RED. Ann. Appl. Probab., 16(1):244-294, 2006.

K. Oelschläger. A martingale approach to the law of large numbers for weakly interacting stochastic processes. Ann. Probab., 12(2):458-479, 1984.

F. Spitzer. Random Fields and Interacting Particle Systems. Notes on lectures given at the 1971 MAA Summer Seminar, Williamstown, Mass. Mathematical Association of America, Washington, D.C., 1971.

D. W. Stroock. An Introduction to Markov Processes, volume 230 of Graduate Texts in Mathematics. Springer, Berlin, 2005.

Y. Tamura. Free energy and the convergence of distributions of diffusion processes of McKean type. J. Fac. Sci. Univ. Tokyo Sect. IA Math., 34(2):443-484, 1987.

A. Y. Veretennikov. On ergodic measures for McKean-Vlasov stochastic equations. In Monte Carlo and Quasi-Monte Carlo Methods 2004, pages 471-486. Springer, Berlin, 2006.
