
3. Stochastic processes, master equations and Fokker-Planck equations


✓ There are countless situations in physics where deterministic descriptions (and laws) alone are inadequate, particularly as the number of interacting species becomes small and the individual motion of microscopic bodies has observable consequences.

✓ Drag forces, viscosity, electrical resistance: all are traces of the microscopic dynamics left behind in the macroscopic models when the enormous number of degrees of freedom in the original many-body problem were integrated away to leave a simple deterministic description.

✓ The effects of microscopic motion are often called fluctuations or noise.


As you saw in the previous lectures, the most famous example of observable fluctuations in a physical system is Brownian motion.

✓ For historical and pedagogical reasons, Brownian motion provides a straightforward and concrete illustration of the mathematical formalism used to study stochastic processes.

✓ There is renewed interest in the study of stochastic processes, particularly in the context of microfluidics, nanoscale devices, and the study of cellular processes and the behavior of complex systems.

23/11/22 M. Carmen Miguel López 1


Einstein's paper (1905) contains the seeds of all of the ideas developed in this course. His mathematical approach to the problem provided, using simple assumptions, a partial differential equation governing the probability density of finding the Brownian particles at a given position x after a time t.
A. Einstein (1905) Ann. Physik 17:549.
• A total of n particles is suspended in a liquid, each moving independently of all the other particles.
• The movements of each particle in different time intervals are independent processes, as long as these time intervals are not too small.
• In a time interval 𝜏, the X-coordinates of the individual particles will increase by an amount ∆, which for each particle has a different (positive or negative) value. The number dn of particles which experience a shift between ∆ and ∆ + d∆ will be

dn = n 𝜑(∆) d∆,  with ∫ 𝜑(∆) d∆ = 1 and 𝜑(∆) = 𝜑(−∆)

• If we consider f(x,t) the number of particles per unit volume, the number of particles found at time t+𝜏 in between x and x+dx is

f(x, t+𝜏) dx = dx ∫ f(x+∆, t) 𝜑(∆) d∆

Seed for the Chapman-Kolmogorov eq. for Markov processes


Expanding for small 𝜏 and small ∆ yields Einstein's diffusion equation, ∂f/∂t = D ∂²f/∂x², a particular case of the Fokker-Planck equation. The approximations involved (continuous sample paths for the particles' paths, Kramers-Moyal expansion) also lead to a stochastic differential equation for the paths.
Many physical systems, with an appropriately chosen coarse-graining of the observation time-scale, can be represented by Markov processes. The main dynamical equation for a Markov process is the Chapman-Kolmogorov equation.

Under the same coarse-graining of the observation time-scale that permits the physical process to be described as Markovian, the Chapman-Kolmogorov equation reduces to the simpler master equation.
M. von Smoluchowski (1906) Ann. Physik 21:756.

Smoluchowski's model (1906) of Brownian motion assumes that a particle can move only one step to the right or to the left, and that these steps occur with equal probability. This type of process is called an unbiased random walk, an example of a more general class of one-step processes.

The conditional probability density P(x,t) for an unbiased random walk obeys the equation

P(x, t+1) = ½ P(x−1, t) + ½ P(x+1, t)

which is an example of a master equation with discrete states. Moreover, in a certain limit, the Smoluchowski model reduces to Einstein's diffusion equation.

In problems where the state space is inherently discrete, such as molecule numbers or animal populations, Smoluchowski's methodology is particularly convenient. It is not difficult to imagine various classes of more complicated processes that can be constructed along similar lines by allowing, for example, multiple steps, biased transition rates, state-dependent transition rates, multidimensional or complex lattices, etc.
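The one-step structure above is easy to check numerically. Below is a minimal sketch (illustrative lattice size and step count) that iterates the unbiased random-walk master equation P(x, t+1) = ½P(x−1, t) + ½P(x+1, t) and verifies the diffusive spreading that connects Smoluchowski's model to Einstein's diffusion equation:

```python
import numpy as np

# Iterate the discrete master equation of the unbiased random walk on a
# lattice wide enough that the boundaries are never reached; the variance
# of the distribution should grow linearly with the number of steps
# (diffusive spreading). Lattice size and step count are illustrative.
M = 201
P = np.zeros(M)
P[M // 2] = 1.0                 # walker starts at the centre
steps = 50
for _ in range(steps):
    P = 0.5 * np.roll(P, 1) + 0.5 * np.roll(P, -1)

m = np.arange(M) - M // 2       # site index relative to the start
var = np.sum(m**2 * P)          # variance after `steps` unit-time steps
```

Each unit step adds a variance of 1 (steps of ±1 with equal probability), so after s steps the variance equals s, the discrete counterpart of ⟨x²⟩ ∝ t.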
A few years after Einstein and Smoluchowski's work, Paul Langevin developed a new method for studying Brownian motion. Langevin arrives at the same result for the mean-squared displacement as Einstein, although coming from a very different perspective. P. Langevin (1908) C. R. Acad. Sci. (Paris) 146:530.

For systems governed by linear dynamics in the absence of fluctuations, Langevin's heuristic approach is very intuitive and simple to formulate. For nonlinear systems, however, Langevin's approach exhibits several limitations.
Seeds of stochastic differential equations: Wiener, Ornstein-Uhlenbeck processes.

Soon after Langevin's paper, several outstanding contributions to the theory of Brownian motion appeared. In particular, in 1930 the Ornstein-Uhlenbeck paper sought to strengthen the foundations of Langevin's approach.
G. E. Uhlenbeck and L. S. Ornstein (1930) Phys. Rev. 36:823.

Ornstein and Uhlenbeck provided a mathematical connection between the kinematic approach of Einstein (using a partial differential equation to describe the probability distribution) and the dynamical approach of Langevin.
What is a stochastic process?
Suppose we are given an experiment specified by its outcome 𝜔 ∈ Ω (Ω = the set of all possible outcomes), and by the probability of occurrence of certain subsets of Ω. For example, in the tossing of a die, Ω = {1,2,3,4,5,6}.

To every outcome 𝜔 we now assign a function of time 𝜉(𝜔,t). We have thus created a family of functions, one for each 𝜔. This family is called a stochastic process (or a random function). Usually, t ∈ ℝ, although it could also be that t ∈ [0,T]. Then a stochastic process is a random function of two variables, 𝜔 and t.

There are two possible points of view:

1. Fix 𝜔; then 𝜉(𝜔,t) = 𝜉𝜔(t) is a function of time, depending upon the parameter 𝜔. To each outcome 𝜔, there corresponds a function of t. This function is called a realization, or sample function, of the stochastic process.

2. Fix t; then 𝜉(𝜔,t) = 𝜉t(𝜔) is a family of random variables depending upon the parameter t.

In that way, a stochastic process can be regarded either as a family of realizations 𝜉𝜔(t) or as a family of random variables 𝜉t(𝜔). Notice that Einstein's point of view was to treat Brownian motion as a distribution of a random variable describing position (𝜉t(𝜔)), while Langevin took the point of view that Newton's laws of motion apply to an individual realization (𝜉𝜔(t)).

We can denote the stochastic process simply by ξ(t). (Note that I will also use the notation x(t) in the following lectures.)
Remember: The nth-order joint distributions
Cumulative distribution function:

F(x1, . . . , xn; t1, . . . , tn) = P(ξ(t1) ≤ x1, . . . , ξ(tn) ≤ xn)

Corresponding probability density:

f(x1, t1; . . . ; xn, tn) = ∂ⁿF/∂x1 · · · ∂xn

assuming the time ordering t1 < t2 < ... < tn.

The nth-order distribution function determines all lower-order distribution functions, and in fact, it completely determines the stochastic process.

The conditional probability density:

f(xn, tn | x1, t1; . . . ; xn−1, tn−1) = f(x1, t1; . . . ; xn, tn) / f(x1, t1; . . . ; xn−1, tn−1)

Moreover, one has:

f(x1, t1; . . . ; xn, tn) = f(xn, tn | x1, t1; . . . ; xn−1, tn−1) f(x1, t1; . . . ; xn−1, tn−1)

Moments of a stochastic process:

Mean: ⟨ξ(t)⟩ = ∫ x f(x, t) dx (alternatively, an average over the realizations of the process)

Correlations: ⟨ξ(t1) ξ(t2)⟩ = ∫∫ x1 x2 f(x1, t1; x2, t2) dx1 dx2
Stationary process: The stochastic process ξ(t) is stationary if all finite-dimensional distribution functions defining ξ(t) remain unchanged when the whole group of points is shifted along the time axis:

F(x1, . . . , xn; t1 + τ, . . . , tn + τ) = F(x1, . . . , xn; t1, . . . , tn)

for any n, t1, t2, . . . , tn and τ. In particular, all one-dimensional cumulative distribution functions must be identical (i.e. F(x, t) = F(x) cannot depend on t); all 2-dimensional cumulative distribution functions can only depend upon |t1 − t2|.

Purely random process: Successive values of ξ(t) are statistically independent, i.e.

f(x1, t1; . . . ; xn, tn) = f(x1, t1) f(x2, t2) · · · f(xn, tn)

in other words, all the information about the process is contained in the 1st-order density.

Markov process: Defined by the fact that the conditional probability density enjoys the property

f(xn, tn | x1, t1; . . . ; xn−1, tn−1) = f(xn, tn | xn−1, tn−1)

(In Einstein's study of Brownian motion, this property is approximately satisfied by the process over the coarse-grained time scale.)

That is, the conditional probability density at tn, given the value xn−1 at tn−1, is not affected by the values at earlier times. In this sense, the process is "without memory."
A Markov process is fully determined by the two functions f(x1, t1) and f(x2, t2 | x1, t1).
3.1 Discrete Markov processes
We first deal with stochastic processes in discrete space and time. They are particularly useful to establish the basic mathematical tools that will later be extended to continuous space and time, in the form of stochastic differential equations.
Markov chains
Markov chains are stochastic processes discrete in time and in the state space, where the value assumed by each stochastic variable depends on the value taken by the same variable at the previous instant of time.

✓ We assume that the stochastic variable x(t) takes values at each instant of time t over a set of N states, S = {s1, s2, . . . , sN−1, sN}.
✓ We also assume that x(t) is measured at equal finite time intervals, so that time becomes a discrete variable, equivalent, in some arbitrary time unit, to the ordered sequence of natural numbers, t = 1, 2, . . . , n, . . .
✓ The basic quantity we want to deal with is the probability, p(x(t) = si), that x(t) is in state si at time t.

If p(x(t) = si) does not depend on the previous history of the stochastic process, we are dealing with the simple case of a sequence of independent events, like tossing a coin.
In general, one could expect that, if some time correlation (i.e. memory) is present in the evolution of x(t), then p(x(t) = si) could also depend on the previous history of the stochastic process.

If the conditional probability

p(x(t) = si | x(t−1) = sj , . . . , x(t−n) = sk)

is finite and cannot be reduced (i.e. the dependence on all n previous times cannot be removed), we say that the stochastic process has memory n.
The case n = 1 defines a Markov process, where

Wji = p(x(t) = sj | x(t−1) = si)

is the transition probability in a unit time step from si to sj.

In general, the transition probability in a unit time can be a function of time, but here we limit ourselves to consider stationary Markov processes, where this transition rate is independent of time.
processes, where this transiLon rate is independent of Lme.

Shorthand notation: pi(t) ≡ p(x(t) = si).

The stochastic dynamical rule of the Markov chain can be written as

pj(t+1) = Σi Wji pi(t),  where the Wji satisfy Wji ≥ 0 and Σj Wji = 1,

or as

p(t+1) = W p(t)

in vectorial form, with p(t) = (p1(t), . . . , pN(t)), and where the transition probabilities Wji can be viewed as the entries of an N × N matrix W, called the stochastic matrix.

Note that in the literature one does not usually write the conditioning of the probability by the initial state, preferring instead the shorthand notation pi(t). However, remember that any probability governed by a master equation (or any other dynamical equation) is necessarily conditioned by the initial distribution.
This matrix relation can be generalized to obtain

p(t+n) = Wⁿ p(t)

where Wⁿ, the nth power of W, is also a stochastic matrix, since it satisfies the same properties as W. In turn, this relation also leads to another important relation concerning the stochastic matrix, the Chapman–Kolmogorov equation,

W^(t+n) = Wⁿ W^t

The Chapman-Kolmogorov equation extends to stochastic processes the law valid for deterministic dynamical systems, where the evolution operator from time 0 to time (t + n) can be written as the composition of the evolution operator from time 0 to time t with the evolution operator from time t to time t+n.
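These matrix relations can be verified directly. The sketch below (random 4-state chain with a fixed seed, both illustrative) builds a column-stochastic matrix with the convention p(t+1) = W p(t) used above and checks that Wⁿ is again stochastic and that W⁵ = W² W³:

```python
import numpy as np

# A random column-stochastic matrix: nonnegative entries, columns sum to 1.
rng = np.random.default_rng(seed=1)
W = rng.random((4, 4))
W /= W.sum(axis=0)

# Chapman-Kolmogorov in matrix form: W^(t+n) = W^n W^t (here t = 3, n = 2).
W5 = np.linalg.matrix_power(W, 5)
ck_ok = np.allclose(W5, np.linalg.matrix_power(W, 2) @ np.linalg.matrix_power(W, 3))

# W^5 is itself a stochastic matrix: columns still sum to 1, entries >= 0.
stochastic_ok = np.allclose(W5.sum(axis=0), 1.0) and bool((W5 >= -1e-15).all())
```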

As usual for any N × N matrix, it is useful to solve the eigenvalue problem det(W − λI) = 0, where I is the identity matrix and λ is a scalar quantity, whose values solving the eigenvalue equation are called the spectrum of W.

✓ Since W is not a symmetric matrix, its eigenvalues are not necessarily real numbers.
✓ One should distinguish between right and left eigenvectors of W. We denote the right ones as w(k), with W w(k) = λ(k) w(k).

The spectrum of W has the following properties (see proof in Appendix notes on the Campus Virtual):

1. |λ(k)| ≤ 1 for all k.
2. There is at least one eigenvalue λ(1) = 1.
3. Each w(k) is either an eigenvector with eigenvalue equal to 1, or it satisfies the condition Σi wi(k) = 0.
Definitions
Accessible state: A state sj is accessible from a state si if there is a finite value of time t such that (W^t)ji > 0.

Persistent and transient states: A state sj is persistent if the probability of returning to sj after some finite time t is 1, while it is transient if there is a finite probability of never returning to sj for any finite time t.

Thus, a persistent state will be visited infinitely many times, while a transient state will be discarded by the evolution after a sufficiently long time.

Irreducible Markov chain: A Markov chain is irreducible when all the states are accessible from any other state.

Periodic Markov chain: A Markov chain is periodic when the return times Tj on a state sj are all integer multiples of a given period T.

Ergodic Markov chain: Let us consider a Markov chain with a finite state space, i.e. S = {s1, s2, . . . , sN}: it is ergodic if it is irreducible, nonperiodic and all states are persistent.
The main property of ergodic Markov chains is that they determine a unique stationary probability distribution, which is given by the eigenvector w(1) solving the eigenvalue problem with λ = 1:

W w(1) = w(1)



The stationary probability w(1) of an ergodic matrix will eventually be attained exponentially fast on a time scale τ, independent of the initial conditions, namely

p(t) = w(1) + A e^(−t/τ)

where A is a suitable vector with constant components that sum up to zero to fulfill the 3rd spectral condition. In the limit t → ∞, w(1) is a true dynamical state of the stochastic process, and, accordingly, it has to obey the conditions that all its components are nonnegative and that it is normalized.

Another important result is that the spectral properties of an ergodic Markov chain determine the time scale of convergence to the stationary probability. If we order the eigenvalues such that

λ(1) = 1 > |λ(2)| ≥ |λ(3)| ≥ . . . ≥ |λ(N)|

the relaxation will be dominated by the longest time scale, i.e. the one corresponding to the eigenvalue λ(2).

Any probability on the state space at time t, p(t), can be written as a suitable linear combination of the eigenvectors w(k) of the stochastic matrix, which form an orthonormal basis.

The longest time scale is:

τ = −1 / ln|λ(2)|
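These spectral statements can be illustrated numerically. The sketch below (a hand-picked 3-state ergodic chain; the entries are illustrative) extracts the stationary distribution as the λ = 1 eigenvector and the relaxation time from |λ(2)|, then compares with direct iteration of p(t+1) = W p(t):

```python
import numpy as np

W = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.30],
              [0.05, 0.10, 0.60]])   # column-stochastic, all entries > 0

vals, vecs = np.linalg.eig(W)
order = np.argsort(-np.abs(vals))    # sort eigenvalues by modulus
w1 = np.real(vecs[:, order[0]])      # eigenvector with lambda = 1
w1 /= w1.sum()                       # normalise to a probability vector
tau = -1.0 / np.log(np.abs(vals[order[1]]))   # longest relaxation time

p = np.array([1.0, 0.0, 0.0])        # arbitrary initial condition
for _ in range(500):
    p = W @ p                        # relaxes to w1 on the time scale tau
```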

3.2 The master equation and detailed balance
The dynamical rule of the Markov chain

pi(t+1) = Σj Wij pj(t)

can be rewritten, using Σj Wji = 1, as

pi(t+1) − pi(t) = Σj [Wij pj(t) − Wji pi(t)]

the form of the so-called master equation.

The variation of the probability of being in state si in a unit time step can be obtained from the positive contribution of all transition processes from any state sj to state si and from the negative contribution of all transition processes from state si to any other state sj.
This form is particularly useful to define the conditions under which one can obtain a stationary probability, i.e. all pi are independent of time t (the left-hand side vanishes), and the stationarity condition reads:

Σj [Wij pj − Wji pi] = 0

which is verified if the following stronger condition holds:

Wij pj = Wji pi  for all i, j

The latter is called the detailed balance condition. A Markov chain whose stochastic matrix elements obey this condition is said to be reversible, and it can be shown that it is also ergodic, with the stationary probability p = w(1) representing the so-called equilibrium probability.
The Monte Carlo method that you will use in the following computer lab session is one of the most useful
and widely employed applications of stochastic processes. The method aims at solving the problem of
the effective statistical sampling of suitable observables by a reversible Markov chain.

Remember that in the Metropolis algorithm, one uses the transition rates

W(s → s′) = min{1, e^(−β[E(s′) − E(s)])}

which satisfy the detailed balance condition

peq(s) W(s → s′) = peq(s′) W(s′ → s)

Assuming the probability for each equilibrium state is known and equal to the corresponding Boltzmann factors, the method provides the equilibrium average of an observable making use of a sufficiently long trajectory (s1, s2, . . . , sn) in the state space of the Markov chain.
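As a minimal illustration of the Metropolis rule (not the lab code), the sketch below samples a two-level system with energies 0 and ε at inverse temperature β, all values illustrative, and compares the sampled mean energy with the exact Boltzmann average:

```python
import math
import random

random.seed(42)
beta, eps = 1.0, 1.0
energies = (0.0, eps)          # two-level system: E(0) = 0, E(1) = eps
state = 0
acc_E = 0.0
n_steps = 200_000
for _ in range(n_steps):
    new = 1 - state            # propose a flip to the other level
    dE = energies[new] - energies[state]
    # Metropolis acceptance min(1, exp(-beta * dE)) obeys detailed balance
    if dE <= 0.0 or random.random() < math.exp(-beta * dE):
        state = new
    acc_E += energies[state]
mean_E = acc_E / n_steps

# Exact canonical average for comparison
exact = eps * math.exp(-beta * eps) / (1.0 + math.exp(-beta * eps))
```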



Examples: One-step processes
Many stochastic processes are of a special type called one-step process, birth-and-death process or generation-recombination process. They are Markov processes whose transition probability in a unit time (i.e. Wnm) only permits jumps between adjacent sites,

Wnm = rm δn,m−1 + gm δn,m+1  (n ≠ m)

where rn is the probability per unit time that, being at site n, a jump occurs to site n − 1. Conversely, gn is the probability per unit time that, being at site n, a jump occurs to site n + 1.

Remember that Smoluchowski applied this model to the study of Brownian motion by setting the generation and recombination probabilities to 1/2: gn = rn = 1/2.

Defining the probability density P(m, s | n, 0) as the probability that a random walker beginning at n at time t = 0 will be at site m after s steps, the master equation for this unbounded random walk is

P(m, s+1 | n, 0) = ½ P(m−1, s | n, 0) + ½ P(m+1, s | n, 0)



Example: A system with two states
We consider a Markov chain made of two states, S = {a, b}.

The stochastic matrix has the form

W = [ 1−q   r
       q   1−r ]

where q is the transition probability a → b and r the transition probability b → a in a unit time step.

By applying the stochastic evolution rule one obtains the probability of finding the system in state a at time t+1:

pa(t+1) = (1 − q) pa(t) + r pb(t)

You can find the solution by simple algebra, and also from the analysis of eigenvalues (λ(1) = 1 and λ(2) = 1−q−r) and eigenvectors of W (Homework).

Ansatz: pa(t) = α + (pa(0) − α)(1 − q − r)^t, with α = r/(q + r),

where pa(0) is the initial condition, i.e. the probability of observing the state a at time 0.
There are two limiting cases: (i) r = q = 0, no dynamics occurs; (ii) r = q = 1, the dynamics oscillates between state a and state b. In all other cases, and in the limit t → ∞, pa → α and pb → (1 − α). More precisely, pa(t) converges exponentially fast to α.
This simple Markov chain is irreducible and its states are accessible and persistent.

The dynamics approaches the stationary state exponentially fast with a characteristic time τ = −1/ln|1 − q − r|.

In addition, in the stationary state this stochastic process satisfies a detailed balance condition,

q pa = r pb

that establishes a sort of time reversibility of the stochastic process. Actually, in the stationary state, the probability of being in state a and passing in a unit time step to state b is equal to the probability of being in b and passing in a unit time step to state a.
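A short sketch (using the convention W = [[1−q, r], [q, 1−r]] and illustrative values of q and r) confirms both the ansatz pa(t) = α + (pa(0) − α)(1−q−r)^t with α = r/(q+r) and the detailed balance condition q pa = r pb in the stationary state:

```python
import numpy as np

q, r = 0.3, 0.1
W = np.array([[1 - q, r],
              [q, 1 - r]])            # columns sum to 1
alpha = r / (q + r)                   # stationary probability of state a

p = np.array([1.0, 0.0])              # start in state a: p_a(0) = 1
max_dev = 0.0
for t in range(1, 51):
    p = W @ p
    ansatz = alpha + (1.0 - alpha) * (1 - q - r) ** t
    max_dev = max(max_dev, abs(p[0] - ansatz))   # iteration vs. ansatz

db_gap = abs(q * alpha - r * (1 - alpha))        # detailed balance residual
```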



Example: Random walk on a ring with N states

Stochastic matrix: Wij = r δi,j+1 + (1 − r) δi,j−1 (indices mod N), i.e. the walker hops one site forward with probability r and one site backward with probability 1 − r.

By applying the stochastic evolution rule one obtains the probability of finding the system in state i at time t+1:

pi(t+1) = r pi−1(t) + (1 − r) pi+1(t)

And in the stationary state:

pi = r pi−1 + (1 − r) pi+1

which has a simple constant solution, pi = 1/N.

At long times, the random walker will have lost any memory of its initial state and, since all sites are equivalent, the stationary state will correspond to an equal probability of visiting any site. As in the previous example, the Markov chain is irreducible and all its states are accessible and persistent.

The spectrum of W provides us with more detailed information about this problem. Its eigenvectors satisfy the equation:

r wi−1 + (1 − r) wi+1 = λ wi



The ansatz wk ∝ e^(i 2πjk/N), with periodic boundary conditions wk+N = wk, yields N independent solutions (Fourier modes), j = 0, 1, . . . , N − 1, with the corresponding eigenvalue

λj = r e^(−i 2πj/N) + (1 − r) e^(i 2πj/N)

Thus the components of each eigenvector are:

wk(j) = (1/√N) e^(i 2πjk/N), k = 0, 1, . . . , N − 1

The normalization factor, 1/√N, ensures that wᵀw = 1.

Notice that for r = 1/2 the eigenvalues λj are all real, but in general, apart from λ0 = 1, they are complex.

For j = 0 we have λ0 = 1, independently of the value of r, thus showing that the stationary solution is the same both for the asymmetric and symmetric random walk on a ring. On the other hand, for r ≠ 1/2 there is a bias for the walker to move forward (r > 1/2) or backward (r < 1/2). Its stationary solution is as expected.
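The Fourier-mode spectrum can be checked against direct diagonalization. The sketch below (N = 8 sites and bias r = 0.7 are illustrative) builds the circulant stochastic matrix of the ring and compares numpy's eigenvalues with λj = r e^(−i2πj/N) + (1−r) e^(i2πj/N):

```python
import numpy as np

N, r = 8, 0.7
W = np.zeros((N, N))
for i in range(N):
    W[(i + 1) % N, i] = r        # hop forward with probability r
    W[(i - 1) % N, i] = 1.0 - r  # hop backward with probability 1 - r

modes = np.arange(N)
omega = np.exp(2j * np.pi * modes / N)
analytic = r / omega + (1.0 - r) * omega   # Fourier-mode eigenvalues

num = np.linalg.eigvals(W)
# every analytic eigenvalue should appear in the numerical spectrum
max_gap = max(np.min(np.abs(num - lam)) for lam in analytic)
```

For r = 1/2 the same script yields a purely real spectrum, consistent with the remark above.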
3.3 Continuous stochastic processes
Many physical processes are better represented in continuous space and time. Here we first provide a continuous-time version of stochastic processes, while keeping the discrete nature of the state space.

We derive the continuous-time version of the master equation making use of the Chapman–Kolmogorov relation

W^(t+Δt) = W^(Δt) W^t

for an infinitesimal increment of time Δt. One can imagine that for i ≠ j the matrix element (W^Δt)ij vanishes when Δt → 0 and, at the same time, in the same limit, (W^Δt)ii → 1.

For this reason, one can approximate up to O(Δt):

(W^Δt)ij ≈ Δt wij  (i ≠ j)

Due to the normalization condition, all these transition rates are not independent and satisfy

(W^Δt)ii ≈ 1 − Δt Σk≠i wki

After substituting these last two expressions in the Chapman-Kolmogorov relation, we can write

(W^(t+Δt))ij = (W^t)ij + Δt Σk≠i [wik (W^t)kj − wki (W^t)ij]

Note: Now the transition matrix W^t should be understood as a transition matrix depending on the continuous parameter t.
Dividing by the infinitesimal time increment and taking the limit Δt → 0, one obtains the continuous-time master equation:

d(W^t)ij/dt = Σk≠i [wik (W^t)kj − wki (W^t)ij]

which can also be written in terms of the probability associated to each state as a function of time. Using

pi(t) = Σj (W^t)ij pj(0)

after differentiating and substituting, one can obtain the continuous-time master equation in the form

dpi(t)/dt = Σk≠i [wik pk(t) − wki pi(t)]

The equation tells us that the variation in time of the probability of being in state si is obtained from the positive contribution of all transition processes from any state sk to state si and from the negative contribution of all transition processes from state si to any other state sk.
We finally provide a continuous version, both in the state space and time, for the Chapman-Kolmogorov equation and for the master equation.

Considering the general relation f(x1, t1; x2, t2) = f(x2, t2 | x1, t1) f(x1, t1) and integrating over x1, we obtain the stochastic dynamic equation in terms of the transition (conditional) probability:

f(x2, t2) = ∫ f(x2, t2 | x1, t1) f(x1, t1) dx1     (Eq. I)

Remember: If we integrate the joint density function f(x1, . . . , xn) with respect to certain variables, we obtain the joint density of the remaining variables (called the marginal density); e.g. f(x1) = ∫ f(x1, x2) dx2.

Moreover, from the relation (valid for a Markov process)

f(x1, t1; x2, t2; x3, t3) = f(x3, t3 | x2, t2) f(x2, t2 | x1, t1) f(x1, t1)

with t1 < t2 < t3, integration over x2 gives

f(x1, t1; x3, t3) = f(x1, t1) ∫ f(x3, t3 | x2, t2) f(x2, t2 | x1, t1) dx2

and using f(x1, t1; x3, t3) = f(x3, t3 | x1, t1) f(x1, t1),
We get the continuous Chapman-Kolmogorov equation:

f(x3, t3 | x1, t1) = ∫ f(x3, t3 | x2, t2) f(x2, t2 | x1, t1) dx2     (Eq. II)

which is a functional equation relating all conditional probability densities for a Markov process.

Indeed, Eqs. I and II uniquely define a Markov process.

Schematic representation of the Chapman-Kolmogorov equation: the intermediate variable x2 is integrated over to provide a connection between x1 and x3.

Remarks:
1. The Chapman-Kolmogorov equation is a functional equation for the transition probability f(xi, ti | xj, tj). Its solution would give us a complete description of any Markov process. Unfortunately, no general solution to this equation is known. Keep in mind that it is a non-linear equation in the transition probabilities.
2. From the meaning of f(x, t | x0, t0), it is clear that we must have f(x, t | x0, t0) → δ(x − x0) as t → t0.
For a stationary Markov process, we can introduce the time difference τ = t2 − t1 and redefine

f(x2, t2 | x1, t1) ≡ f(x2 | x1; τ)

to rewrite the Chapman-Kolmogorov equation in the form

f(x3 | x1; τ + τ′) = ∫ f(x3 | x2; τ′) f(x2 | x1; τ) dx2

with τ = t2 − t1 and τ′ = t3 − t2. In this case, we also have f(x, t) = f(x), independent of time.

The Chapman-Kolmogorov equation allows us to build up the conditional probability densities over the "long" time interval (t1, t3) from those over the "short" intervals (t1, t2) and (t2, t3). It turns out this is an incredibly useful property since, from our knowledge of the transition probability at small times, we can build up our knowledge of the transition probability at all times iteratively from the Chapman-Kolmogorov equation.



It is also possible to show, using similar arguments to the ones introduced for the discrete state space, that over very short times the continuous master equation takes the form

∂f(x, t | x1, t1)/∂t = ∫ [w(x | z) f(z, t | x1, t1) − w(z | x) f(x, t | x1, t1)] dz

an integro-differential equation where w(x | z) is the transition probability per unit time, or the transition rate.

Integrating over x1, we can alternatively write it in the more general form

∂P(q, t)/∂t = ∫ [w(q | q′) P(q′, t) − w(q′ | q) P(q, t)] dq′

for any possible state q of the system at time t.

In the master equation, one considers the transition rates, w(xj | xi), as a given function determined by the specific physical system, and the resulting equation is linear in the conditional probability density which determines the (mesoscopic) state of that system.
Although it is more tractable than the Chapman-Kolmogorov equation, it is difficult to find exact solutions. Moreover, the transition rates are often nonlinear, and approximate perturbative methods are usually required. (See Applications chapter.)


Stochastic differential equations: The Wiener process
The mathematical structure of the Langevin equation is quite peculiar. The deterministic part is made of differentiable functions with respect to time, but the random force is intrinsically discontinuous at any time due to

⟨ηi(t) ηj(t′)⟩ ∝ δij δ(t − t′)

with η(t) the random force per unit of mass.

A way out of this problem is the introduction of the Wiener process, which can be considered the continuous-time version of a random walk. The basic idea is quite simple: we use the integrated stochastic force to define a new stochastic process that can be made continuous in time,

Wi(t) = ∫₀ᵗ ηi(t′) dt′

The statistical average has to be thought of as the average over the realizations of the stochastic process. If ⟨ηi(t)⟩ = 0, then ⟨Wi(t)⟩ = 0 and

⟨Wi(t) Wj(t)⟩ ∝ δij t

meaning that the average squared amplitude of the Wiener process diffuses in time.
The Wiener process is not a stationary process, since its correlations do not depend only on the temporal difference (t − t′).
Moreover, it has the following properties:
1. Wi(t) is a stochastic process continuous in time and with zero average, ⟨Wi(t)⟩ = 0.
2. For any t1 < t2 < t3 the increments (Wi(t2) − Wi(t1)) and (Wi(t3) − Wi(t2)) are independent quantities, following the same distribution.
3. For any t1 < t2 the probability distribution of the increments (Wi(t2) − Wi(t1)) is a Gaussian with zero average and variance (t2 − t1), which is a consequence of the central limit theorem.

Notice that Wi(t) is not differentiable, meaning that we cannot define its time derivative, but we can define its infinitesimal increment dWi(t) for an arbitrarily small time interval dt,

dWi(t) = Wi(t + dt) − Wi(t)

which implies ⟨dWi(t)⟩ = 0 and ⟨(dWi(t))²⟩ = dt.

To simplify notation, the infinitesimal increment of the Wiener process is usually redefined as

dWi(t) = ηi √dt

where ηi is a stochastic process with zero average and unit variance. This relation attributes to the amplitude of the infinitesimal increment of the Wiener process a physical scale given by the square root of the infinitesimal time increment dt.


Now we can write the general form of a well-defined stochastic differential equation using the Wiener process:

dXi(t) = ai(X(t), t) dt + bi(X(t), t) dWi(t),  i = 1, . . . , n

with n possible components, where ai is the generalized drift coefficient and bi the generalized diffusion coefficient.
For example, in the Langevin equation of the Brownian particle: n = 3 components and Xi(t) = vi(t) (the i-th component of the velocity of the Brownian particle), while ai(X(t), t) = −γ vi(t) and bi(X(t), t) is proportional to (2γT/m)^(1/2).

Simplifying the notation, we will simply write:

dX(t) = a(X(t), t) dt + b(X(t), t) dW(t)

and will integrate this general stochastic differential equation, continuous in both space and time, to formally obtain:

X(t) = X(0) + ∫₀ᵗ a(X(s), s) ds + ∫₀ᵗ b(X(s), s) dW(s)

If we want to extract any useful information from this formal solution, we have to perform its statistical average with respect to the Wiener process. Here, one has to face another problem: the last integral is not uniquely defined when averaged over the stochastic process. In fact, according to basic analysis, the integral of a standard function f(t) can be estimated by the Euler approximations

∫₀ᵗ f(s) ds ≈ Σi f(ti) ∆ti  or  ∫₀ᵗ f(s) ds ≈ Σi f(ti+1) ∆ti

i.e. sums of the areas of small rectangles,



where the total time t is subdivided into an ordered sequence of N + 1 sampling times (t0 = 0, t1, t2, . . . , tN−1, tN = t), with ∆ti = ti+1 − ti, which can be uniformly sampled, i.e. ∆ti = t/N, ∀i. A theorem guarantees that in the limit ∆t → 0 both Euler approximations converge to the integral. Such a theorem does not hold for a Wiener process. The analogs of the Euler approximations for the stochastic integral are:

Itô discretization: ∫₀ᵗ f(s) dW(s) ≈ Σi f(ti) [W(ti+1) − W(ti)]

Stratonovich discretization: ∫₀ᵗ f(s) dW(s) ≈ Σi f((ti + ti+1)/2) [W(ti+1) − W(ti)]

Itô and Stratonovich formulations are not equivalent. Thus, when dealing with models based on stochastic equations, we have to choose and explicitly declare which formulation we are adopting to integrate the equations.
To see this, we can approximate the midpoint value as

f((ti + ti+1)/2) ≈ f(ti) + (∆W′/2) ∂f/∂W′|ti

where W′ indicates that, in general, it is a different realization of the Wiener process, and ∂f/∂W′|ti is the functional derivative of f(t) with respect to W′ at ti.



By performing the limit ∆W → 0 and averaging over the Wiener process, the two discretizations differ by a term proportional to

⟨(∆W′/2) (∂f/∂W′) ∆W⟩

which does not vanish!

In many applications it is advisable to use the Itô formulation, because the property indicated right above makes calculations simpler. Nevertheless, when noise is multiplicative (it depends on the stochastic process) rather than additive, the two formulations are known to differ and the Stratonovich one is preferable.

For example, if we write the Langevin equation for the kinetic energy of a Brownian particle (a case of multiplicative noise), it is necessary to use the Stratonovich prescription to obtain the simple form of the fluctuation-dissipation relation.
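The non-equivalence of the two discretizations shows up already for the simplest nontrivial integrand, f = W itself: the Itô sum of ∫W dW converges to (W(T)² − T)/2, while the Stratonovich (midpoint) sum gives W(T)²/2, so their difference approaches T/2. A sketch (seed and step number illustrative):

```python
import numpy as np

rng = np.random.default_rng(seed=3)
T, N = 1.0, 200_000
dt = T / N
dW = rng.standard_normal(N) * np.sqrt(dt)
W = np.concatenate(([0.0], np.cumsum(dW)))   # W(0) = 0, ..., W(T)

ito = np.sum(W[:-1] * dW)                    # integrand at the left endpoint
strat = np.sum(0.5 * (W[:-1] + W[1:]) * dW)  # midpoint (Stratonovich) rule
gap = strat - ito                            # = (1/2) sum dW^2 -> T/2
```

The Stratonovich sum telescopes exactly to W(T)²/2, whereas the Itô sum is smaller by ½Σ(dW)², which converges to T/2 rather than to zero.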



Stochastic differential equations: The Ornstein-Uhlenbeck process
Let us now describe the motion of a stochastic particle in the presence of a conservative force and thermal fluctuations. In the case of an elastic force and of a constant diffusion coefficient, b(X(t), t) = √(2D), the process is known as the Ornstein–Uhlenbeck process:

dX(t) = −k X(t) dt + √(2D) dW(t)

where k is the Hooke elastic constant divided by the friction coefficient.

Changing variables we can integrate, obtaining

X(t) = X(0) e^(−kt) + √(2D) ∫₀ᵗ e^(−k(t−s)) dW(s)

where X(0) is the initial position of the particle. By averaging over the Wiener process we finally obtain

⟨X(t)⟩ = X(0) e^(−kt)


And using the statistical properties of the Wiener process, ⟨dW(s)⟩ = 0 and ⟨dW(s) dW(s′)⟩ = δ(s − s′) ds ds′, we can obtain

⟨(X(t) − ⟨X(t)⟩)²⟩ = (D/k)(1 − e^(−2kt))

In the long-time limit t → +∞ the average displacement vanishes, irrespective of the initial condition, while the variance of the stochastic process converges to D/k. The stochastic particle diffuses around the origin and its position is distributed according to a Gaussian with zero average, with a variance that is inversely proportional to the Hooke constant k and proportional to the amplitude of fluctuations, i.e. to the diffusion constant D.

If we want to recover the result for the Brownian particle, we should take the limit k → 0 before the limit t → ∞, obtaining the expected result:

⟨X²(t)⟩ = 2Dt
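A Euler-Maruyama sketch of the Ornstein-Uhlenbeck process (k, D, time step and path count are illustrative) reproduces these limits: the mean decays as X(0)e^(−kt) and the variance saturates at D/k:

```python
import numpy as np

rng = np.random.default_rng(seed=7)
k, D = 1.0, 0.5
dt, n_steps, n_paths = 0.01, 1000, 5_000   # total time t = 10 >> 1/k

X = np.full(n_paths, 2.0)                  # deterministic initial condition
for _ in range(n_steps):
    # Euler-Maruyama step: dX = -k X dt + sqrt(2 D) dW
    X += -k * X * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(n_paths)

mean_X = X.mean()                          # ~ 2 exp(-k t), essentially 0
var_X = X.var()                            # ~ D / k
```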



Stochastic differential equations: General Fokker-Planck equation
We will now obtain the so-called general Fokker-Planck equation. To do that we can use the general stochastic differential equation

dX(t) = a(X(t), t) dt + b(X(t), t) dW(t)     (i)

Moreover, we will consider a generic function f(X(t), t) that is at least twice differentiable with respect to X. Thus, we can write its Taylor series expansion as

df = (∂f/∂t) dt + (∂f/∂X) dX + ½ (∂²f/∂X²) (dX)² + . . .

and assume that X(t) obeys the general stochastic differential equation Eq. (i) to, up to linear order in dt, obtain:

df = [∂f/∂t + a ∂f/∂X + ½ b² ∂²f/∂X²] dt + b (∂f/∂X) dW

where we have also assumed that (dW)² = dt, knowing that it is only true after averaging, i.e. ⟨(dW)²⟩ = dt. Indeed, to obtain a formal solution, one has to integrate and average over the Wiener process.
Notice that any function f(X(t)), at least twice differentiable, obeys the same kind of stochastic differential equation obeyed by the stochastic process X(t).
Let us assume that the function f does not depend explicitly on time, i.e. ∂f/∂t = 0, and let us average over the Wiener process, i.e. over the probability density P(X, t). One obtains

d⟨f⟩/dt = ∫ dX [a(X, t) ∂f/∂X + ½ b²(X, t) ∂²f/∂X²] P(X, t)

where we have used that ⟨dW⟩ = 0.

Integrating by parts the two integrals on the right-hand side, we can write

∫ dX f(X) ∂P(X, t)/∂t = ∫ dX f(X) { −∂/∂X [a(X, t) P(X, t)] + ½ ∂²/∂X² [b²(X, t) P(X, t)] }     (ii)

where we have assumed that the probability density P(X, t) vanishes at the boundary of the state space S. For example, in the case of a Brownian particle, the distribution function of its position in space vanishes at infinity.



Since Eq. (ii) holds for an arbitrary function f(X), we can write the so-called general Fokker–Planck equation for the probability density P(X, t) as

∂P(X, t)/∂t = −∂/∂X [a(X, t) P(X, t)] + ½ ∂²/∂X² [b²(X, t) P(X, t)]     (General Fokker-Planck equation)

The possibility of finding an explicit solution of this equation depends on the functional forms of the generalized drift coefficient a(X, t) and of the generalized diffusion coefficient b²/2. Interesting physical examples correspond to simple forms of these quantities. For a = 0, we get back the diffusion equation for the probability.

Examples
1. Stationary Diffusion with Absorbing Barriers
The Fokker-Planck equation can be expressed in the form of a conservation law, or continuity equation,

∂P(X, t)/∂t = −∂J(X, t)/∂X

with

J(X, t) = a(X, t) P(X, t) − ½ ∂/∂X [b²(X, t) P(X, t)]

a current of probability.



Since X ∈ R, we can consider the interval I = [X1, X2] and define the probability, P(t), that at time t the stochastic process is
within this interval I.

One can impose various boundary conditions on I. For instance, the condition of reflecting barriers, i.e. no flux of
probability through the boundaries of I, amounts to J(X1, t) = J(X2, t) = 0 at any time t. Accordingly, the probability of
finding the walker inside I is conserved.

The condition of absorbing barriers implies that once the walker reaches X1 or X2, it will never come back to I, i.e.
P(X1, t) = P(X2, t) = 0 at any time t.

Moreover, when dealing with stochastic systems defined on a finite interval (a typical situation in numerical simulations),
it may be useful to impose periodic boundary conditions that correspond to P(X1, t) = P(X2, t) and J(X1, t) = J(X2, t). In this
case the probability P is conserved not because the flux vanishes at the borders, but because the incoming and the
outgoing fluxes compensate each other.
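The three boundary conditions can be made concrete with a small explicit finite-difference integration of the pure-diffusion Fokker-Planck equation ∂P/∂t = D ∂²P/∂X² (a sketch written for these notes; the grid, D and time step are arbitrary, chosen so that the stability condition D∆t/∆X² ≤ 1/2 holds):

```python
def evolve(bc, n=101, D=1.0, dx=0.1, dt=0.002, steps=2000):
    """Explicit scheme for dP/dt = D d2P/dx2 on I with boundary condition bc.
    Returns the total probability remaining in I after the integration."""
    P = [0.0] * n
    P[n // 2] = 1.0 / dx                       # normalized peak at the centre of I
    c = D * dt / dx ** 2                       # stability requires c <= 1/2
    for _ in range(steps):
        Pn = [0.0] * n
        for i in range(1, n - 1):
            Pn[i] = P[i] + c * (P[i + 1] - 2 * P[i] + P[i - 1])
        if bc == "reflecting":                 # zero flux through X1, X2
            Pn[0] = P[0] + c * (P[1] - P[0])
            Pn[-1] = P[-1] + c * (P[-2] - P[-1])
        elif bc == "absorbing":                # walkers vanish at the barriers
            Pn[0], Pn[-1] = 0.0, 0.0
        elif bc == "periodic":                 # incoming and outgoing fluxes compensate
            Pn[0] = P[0] + c * (P[1] - 2 * P[0] + P[-1])
            Pn[-1] = P[-1] + c * (P[0] - 2 * P[-1] + P[-2])
        P = Pn
    return sum(P) * dx                         # probability of remaining in I
```

Reflecting and periodic conditions conserve the total probability in I (to rounding error), while with absorbing barriers it decays in time.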



Stationary solutions P∗(X) of the Fokker-Planck equation must fulfill the condition

dJ(X)/dX = 0,  i.e.  J(X) = const.,

where, in addition, the functions a(X) and b(X) do not depend on t. For instance, in the case of
pure diffusion (a = 0, b²/2 = D), the general solution of this stationary equation is of the form:

P∗(X) = C1 + C2 X.

The integration constants C1 and C2 can be obtained from boundary
and normalization conditions.
(a) In the case of reflecting barriers at X1 and X2 in the interval I, there is no flux through the boundaries, so that
the stationary solution must correspond to no flux in I: J = −D dP∗/dX = 0, hence C2 = 0 and, after normalization,
P∗(X) = 1/(X2 − X1).

(b) In the case of absorbing barriers we obtain the trivial solution

P∗(X) = 0,

because for t → ∞ the walker will eventually reach one of the absorbing barriers and disappear.



(c) HOMEWORK 3: Solve the stationary Fokker-Planck equation with absorbing barriers and with a constant flux F
of walkers injected into the interval I, i.e. assuming that at each time unit a new walker starts at site X0 ∈ I.

See Livi and Politi, pp. 43–44.

Solution:

where J− and J+ are the net fluxes of particles flowing through X1 and X2, respectively (and J− + J+ = F, because of matter
conservation).



2. The Fokker–Planck Equation in the Presence of a Mechanical Force

We now consider the Fokker–Planck equation for a particle subject to an external mechanical force F(x) generated
by a conservative potential U(x), i.e., F(x) = −dU(x)/dx,

∂P(x, t)/∂t = −∂/∂x [ (F(x)/γ̃) P(x, t) ] + D ∂²P(x, t)/∂x²,

with drift a(x) = F(x)/γ̃ and b²/2 = D.

Here D is the usual diffusion coefficient and the parameter γ̃ is the viscous drag coefficient.
The stationary solution in the long time limit satisfies

J(x) = (F(x)/γ̃) P∗(x) − D dP∗(x)/dx = 0.

And, in particular, this solution is the only possible equilibrium solution, since
any (macroscopic) current must vanish at equilibrium. Thus we have

P∗(x) = A exp[ −U(x)/T ],

where A is a normalization constant and we have used the Einstein relation D = T/γ̃.
As expected, we obtain the equilibrium Boltzmann distribution, where T is the equilibrium temperature.
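The Boltzmann result can be verified by integrating the overdamped Langevin equation for a harmonic potential U(x) = k x²/2 (a sketch with arbitrary parameters, added for illustration); the Boltzmann distribution P∗(x) ∝ exp(−k x²/2T) then demands ⟨x²⟩ = T/k:

```python
import math
import random

def harmonic_second_moment(k=2.0, T=1.0, gamma=1.0, dt=0.001,
                           n_steps=800000, burn_in=50000, seed=7):
    """Overdamped Langevin dynamics dx = -(k x / gamma) dt + sqrt(2D) dW with
    D = T/gamma (Einstein relation). Returns the time average of x^2 in the
    stationary state; the Boltzmann distribution predicts <x^2> = T/k."""
    rng = random.Random(seed)
    D = T / gamma
    sigma = math.sqrt(2.0 * D * dt)
    x, acc = 0.0, 0.0
    for step in range(n_steps):
        x += -(k * x / gamma) * dt + sigma * rng.gauss(0.0, 1.0)
        if step >= burn_in:                    # discard the initial transient
            acc += x * x
    return acc / (n_steps - burn_in)
```

With these parameters the time average settles near T/k = 0.5, up to statistical error.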



3. Isotropic Diffusion with a Trap
The solution of the Fokker-Planck equation for a Brownian particle diffusing freely in one dimension, with diffusion
coefficient D, if the particle is initially located at x0, is the solution of the diffusion equation:

p_x0(x, t) = (4πDt)^(−1/2) exp[ −(x − x0)²/(4Dt) ].

But if there is a trap at x = 0, the probability p(x, t) must vanish at x = 0 at any t. The correct solution in this case can be
obtained by taking the linear combination of p_x0(x, t) and p_−x0(x, t), because it automatically satisfies the boundary
condition p(0, t) = 0 at all times. The combination is

p(x, t) = p_x0(x, t) − p_−x0(x, t).

The probability that the particle is trapped at x = 0 in the time interval (t, t + dt) is called the first passage probability,
f(t), and it is equal to the current |J| flowing into the origin, where J = −Dp′(x = 0), i.e.,

f(t) = x0 (4πD t³)^(−1/2) exp[ −x0²/(4Dt) ],

which is properly normalized: ∫0^∞ f(t) dt = 1.
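The normalization of f(t) = x0 (4πD t³)^(−1/2) exp[−x0²/(4Dt)] can be confirmed numerically (a sketch added here; the logarithmic grid and cutoffs are arbitrary, chosen because the t^(−3/2) tail decays very slowly):

```python
import math

def first_passage_density(t, x0=1.0, D=1.0):
    """f(t) = x0 (4 pi D t^3)^(-1/2) exp(-x0^2 / (4 D t))."""
    return x0 / math.sqrt(4.0 * math.pi * D * t ** 3) * math.exp(-x0 ** 2 / (4.0 * D * t))

def normalization(x0=1.0, D=1.0, n=200000):
    """Trapezoidal integral of f(t) on a logarithmic grid from 1e-6 to 1e7;
    the tail missed beyond the upper cutoff is O(1e-4)."""
    ts = [10.0 ** (-6.0 + 13.0 * i / n) for i in range(n + 1)]
    fs = [first_passage_density(t, x0, D) for t in ts]
    return sum(0.5 * (fa + fb) * (tb - ta)
               for ta, tb, fa, fb in zip(ts, ts[1:], fs, fs[1:]))
```

The integral comes out equal to 1 up to the truncation error of the finite upper cutoff.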



The average trapping time diverges,

⟨t⟩ = ∫0^∞ t f(t) dt → ∞,

because the integrand goes like t^(−1/2) for large t.

The average position of the particle is easily evaluated from the definition:

⟨x(t)⟩ = ∫0^∞ x [ p_x0(x, t) − p_−x0(x, t) ] dx,

and changing variable x → −x in the second integral,

⟨x(t)⟩ = ∫_{−∞}^{+∞} x p_x0(x, t) dx = x0.

And thus, the presence of a trap does not modify the average position of the walker.



Optional exercise: The Fokker–Planck equation of the Ornstein–Uhlenbeck process is

∂P(x, t)/∂t = k ∂/∂x [ x P(x, t) ] + D ∂²P(x, t)/∂x².

Find its solution: a Gaussian with mean ⟨x(t)⟩ = x0 e^(−kt) and variance σ²(t) = (D/k)(1 − e^(−2kt)),
as we obtained previously with the Langevin approach.

The solution of the Ornstein–Uhlenbeck process is a time-dependent Gaussian distribution.

Nevertheless, in the limit t → ∞, if k ≠ 0 its two cumulants become ⟨x⟩ → 0 and ⟨x²⟩ → D/k (NOTE that in this limit, this
result is a particular case of the previous example with a conservative harmonic potential).

This asymptotic constant value is the result of the balance between diffusion (which
would tend to increase fluctuations) and the elastic restoring force (which suppresses
fluctuations).
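A quick ensemble simulation (a sketch with arbitrary parameters, written for these notes) checks both time-dependent cumulants of the Ornstein–Uhlenbeck process, ⟨x(t)⟩ = x0 e^(−kt) and σ²(t) = (D/k)(1 − e^(−2kt)), the latter tending to D/k as t → ∞:

```python
import math
import random

def ou_cumulants(k=1.0, D=0.5, x0=2.0, t=0.7, dt=0.005, n_walkers=5000, seed=3):
    """Euler-Maruyama ensemble for the OU process dx = -k x dt + sqrt(2D) dW,
    started at x(0) = x0. Returns the sample mean and variance of x(t)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    n_steps = int(round(t / dt))
    xs = []
    for _ in range(n_walkers):
        x = x0
        for _ in range(n_steps):
            x += -k * x * dt + sigma * rng.gauss(0.0, 1.0)
        xs.append(x)
    mean = sum(xs) / n_walkers
    var = sum((x - mean) ** 2 for x in xs) / n_walkers
    return mean, var

# exact OU cumulants for these parameters:
#   mean = x0 exp(-k t) = 0.993...,  variance = (D/k)(1 - exp(-2 k t)) = 0.377...
```

The balance between diffusion and the restoring force is visible directly: rerunning with larger t leaves the variance pinned near D/k.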



Alternative derivation of the Fokker-Planck equation
The states x1, x2, x3 appearing in the master equation may be interpreted as representing the initial, intermediate and
final state, respectively. Let us relabel them as y0, y′, y, so that the master equation reads

∂p(y, t|y0)/∂t = ∫ dy′ [ w(y|y′) p(y′, t|y0) − w(y′|y) p(y, t|y0) ].

If the dependence on the initial state is understood, then it may be rewritten in a short-hand form as

∂p(y, t)/∂t = ∫ dy′ [ w(y|y′) p(y′, t) − w(y′|y) p(y, t) ].

We introduce the jump size r = y − y′, so that we can rewrite the transition probabilities in terms of r. The
transition probability w(y|y′) is the probability per unit time that, starting at y′, there is a jump of size r. We write
this as w(y|y′) = w(y′; r). Similarly, we rewrite the transition probability w(y′|y) = w(y; −r).
With this change in notation, the master equation becomes

∂p(y, t)/∂t = ∫ dr w(y − r; r) p(y − r, t) − p(y, t) ∫ dr w(y; −r).


Next, we make two assumptions:
1. Only small jumps occur. That is, we assume that there exists a δ > 0 such that w(y′; r) ≈ 0 for |r| > δ.
2. p varies slowly with y, i.e. it changes little over distances of order δ.

Under these conditions, one is able to Taylor expand w and p in y′ about y,

w(y − r; r) p(y − r, t) = w(y; r) p(y, t) − r ∂/∂y [ w(y; r) p(y, t) ] + (r²/2) ∂²/∂y² [ w(y; r) p(y, t) ] − …

After which, noting that the zeroth-order term cancels the loss term of the master equation, we can write

∂p(y, t)/∂t = −∂/∂y ∫ dr r w(y; r) p(y, t) + (1/2) ∂²/∂y² ∫ dr r² w(y; r) p(y, t).

Note the dependence of w(y; r) on the second argument is fully maintained; an expansion with respect to r is not
possible since w varies rapidly with r.
Since the jump moments are defined as

aν(y) = ∫ dr r^ν w(y; r),

we finally obtain the Fokker-Planck equation,

∂p(y, t)/∂t = −∂/∂y [ a1(y) p(y, t) ] + (1/2) ∂²/∂y² [ a2(y) p(y, t) ],

where a1(y) plays the role of the drift coefficient and a2(y) that of the diffusion coefficient.

In theory, the coefficients can be computed explicitly if w(y; r) is known
from an underlying microscopic theory. In practice, that is not always the case, and an alternative method is necessary.
The simplest procedure is the derivation based upon the Langevin equation that we introduced before.

We could also include all terms in the Taylor expansion used above and obtain the Kramers-Moyal expansion,

∂p(y, t)/∂t = Σ_{ν=1}^{∞} [(−1)^ν/ν!] (∂^ν/∂y^ν) [ aν(y) p(y, t) ],

with

aν(y) = ∫ dr r^ν w(y; r).

This expression is formally equivalent to the master equation, but to be practical the expansion must be truncated
at a certain point – the Fokker-Planck equation, for example, is the result of truncation after two terms. It is
equivalent to what Einstein used in his derivation of the diffusion equation for a Brownian particle!



Kolmogorov derived the same Fokker-Planck equation, also known as the Kolmogorov equation, by imposing abstract
conditions on the moments of the transition probabilities w in the Chapman-Kolmogorov equation.
A. N. Kolmogorov (1931) Math. Ann. 104: 415.

Consider a stationary, Markov process – one dimensional, for simplicity. Write the Chapman-Kolmogorov equation as

W(y|x; t + ∆t) = ∫ dz W(y|z; ∆t) W(z|x; t),

and define the jump moments by

Mν(z, ∆t) = ∫ dy (y − z)^ν W(y|z; ∆t).

Furthermore, assume that for ∆t → 0 only the 1st and 2nd moments become proportional to ∆t, and assume
all higher moments vanish faster in that limit. Then, the following limits exist:

a(z) = lim_{∆t→0} M1(z, ∆t)/∆t,   b(z) = lim_{∆t→0} M2(z, ∆t)/∆t.

Now, let R(y) be a suitable test function, possessing all properties required for the following operations to be well-defined,
and note that

∫ dy R(y) ∂W(y|x; t)/∂t = lim_{∆t→0} (1/∆t) ∫ dy R(y) [ W(y|x; t + ∆t) − W(y|x; t) ].


And using the Chapman-Kolmogorov equation, one can rewrite it in the form:

∫ dy R(y) ∂W(y|x; t)/∂t = lim_{∆t→0} (1/∆t) ∫ dz W(z|x; t) ∫ dy [ R(y) − R(z) ] W(y|z; ∆t).

We expand R(y) around the point y = z and interchange the order of integration. Since we have assumed all jump
moments beyond the second vanish, we are left with

∫ dy R(y) ∂W(y|x; t)/∂t = ∫ dz W(z|x; t) [ a(z) R′(z) + (1/2) b(z) R″(z) ].

Recall that R(y) is an arbitrary test function, with all the necessary properties: both R(y) and its derivatives vanish
as |y| → ∞ as fast as necessary for the boundary terms of the following integrations by parts to vanish.


Then we can integrate by parts to obtain

∫ dy R(y) { ∂W(y|x; t)/∂t + ∂/∂y [ a(y) W(y|x; t) ] − (1/2) ∂²/∂y² [ b(y) W(y|x; t) ] } = 0.

And since R(y) is an arbitrary test function, we must conclude that

∂W(y|x; t)/∂t = −∂/∂y [ a(y) W(y|x; t) ] + (1/2) ∂²/∂y² [ b(y) W(y|x; t) ]    Forward Kolmogorov Equation

Note that the Kolmogorov equation is an evolution equation for the conditional probability density, and that it must
be solved subject to the initial condition:

W(y|x; 0) = δ(y − x).

The backward Kolmogorov equation describes the evolution of the probability with respect to the initial condition. It is
a useful equation for some applications, such as the one discussed in the following section.

See the derivation of the Kramers–Moyal expansion and the derivation of the backward Kolmogorov equation in
Appendix E of the Livi & Politi book.



3.4 First passage time
An interesting application of the Fokker-Planck equation is the problem of the escape time from an interval I = [X1, X2].
For instance, in the case of a homogeneous diffusion process with absorbing barriers at X1 and X2, one can define
the probability of remaining within the interval I at some time t after having started at X0 ∈ I at some earlier time t0.
To calculate this probability, one can use the transition (conditional) probability W(X0, t0|X, t), where, for simplicity,
we now write first the initial state and second the final state:

P_I(t) = ∫_{X1}^{X2} W(X0, t0|X, t) dX

is, in other words, the probability that the exit time from I of a walker started at X0 is larger than t.

One should remark that this equation will not be valid for arbitrary boundary conditions, because if the "random walk
particle" is allowed to exit the interval and reenter, the relation between these two quantities will no longer be true. In the
following we are considering either reflecting boundary conditions or absorbing boundary conditions: in both cases,
reentry is not allowed and this equation is valid.

One can also define the probability density ρ(t) of the first exit time from the interval I, starting at X0 (although to
simplify the notation in the arguments of this quantity we have eliminated the explicit dependence on I and X0). By
definition we have

ρ(t) = −dP_I(t)/dt.



From this probability density, one can obtain, for instance, the average exit time as

⟨T_I(X0)⟩ = ∫_{t0}^{∞} t ρ(t) dt = ∫_{t0}^{∞} P_I(t) dt,

where the last expression has been obtained integrating by parts and assuming that P_I(t) vanishes sufficiently
rapidly for t → +∞.
Now, we will consider the backward Kolmogorov equation for the transition probability W(X0, t0|Y, t) for the
diffusive case, b²(X, t) = 2D, with a time-independent drift term. The backward Kolmogorov equation reads

∂W(X0, t0|Y, t)/∂t0 = −a(X0) ∂W(X0, t0|Y, t)/∂X0 − D ∂²W(X0, t0|Y, t)/∂X0².

Homework: Revise the Appendix in the CV.

And using that, due to time translational invariance in stationary
conditions, W(X0, t0|Y, t) only depends on the time difference (t − t0), so that ∂W/∂t0 = −∂W/∂t, we can write

∂W(X0, t0|Y, t)/∂t = a(X0) ∂W(X0, t0|Y, t)/∂X0 + D ∂²W(X0, t0|Y, t)/∂X0²,

which we can integrate over Y to obtain the equation:

∂P_I(t)/∂t = a(X0) ∂P_I(t)/∂X0 + D ∂²P_I(t)/∂X0².


We are going to replace the starting point X0 with X, to make the notation simpler. Integrating further both sides of this
equation over t and assuming t0 = 0, we finally obtain an equation for the average exit time

a(X) d⟨T_I(X)⟩/dX + D d²⟨T_I(X)⟩/dX² = −1,

where the right-hand side comes from the conditions P_I(0) = 1 and P_I(t → ∞) = 0.

For the case without drift, a(X) = 0, and for absorbing boundaries, ⟨T_I(X1)⟩ = ⟨T_I(X2)⟩ = 0, the solution is

⟨T_I(X)⟩ = (X − X1)(X2 − X)/(2D).

In this case, one obtains a finite result!
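The mean exit time for pure diffusion with absorbing barriers, ⟨T_I(X)⟩ = (X − X1)(X2 − X)/(2D), is easy to check with a direct Monte Carlo simulation (a sketch added here; parameters, time step and seed are arbitrary, and the finite time step introduces a small positive bias in the exit times):

```python
import math
import random

def mean_exit_time(x0=0.3, x1=0.0, x2=1.0, D=0.5, dt=1e-4, n_walkers=1500, seed=11):
    """Average time for a pure-diffusion walker (a = 0) started at x0 to first
    leave the interval I = [x1, x2] through either absorbing barrier."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    total = 0.0
    for _ in range(n_walkers):
        x, t = x0, 0.0
        while x1 < x < x2:                     # absorbed as soon as it leaves I
            x += sigma * rng.gauss(0.0, 1.0)
            t += dt
        total += t
    return total / n_walkers

# theory: <T_I> = (x0 - x1)(x2 - x0)/(2D) = 0.3 * 0.7 / 1.0 = 0.21 for these parameters
```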

An interesting application of these methods and results is in game theory, such as in the gambler's ruin problem,
a classical problem where a gambler bets repeatedly $1 on the flip of a potentially biased coin until he either loses
all his money or wins the money of his opponent. (Initial position – amount of money when he/she starts playing!)
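For the fair-coin case the classical answers are well known: starting with n0 dollars against an opponent holding N − n0, the ruin probability is 1 − n0/N and the mean number of bets is n0(N − n0). A minimal simulation (a sketch written for these notes; stakes and seed are arbitrary) reproduces both:

```python
import random

def gamblers_ruin(n0=3, N=10, n_games=20000, seed=5):
    """Fair-coin gambler's ruin: start with n0 dollars, stop at 0 (ruin) or N (win).
    Returns (ruin frequency, mean duration); classical results for a fair coin
    are P(ruin) = 1 - n0/N and mean duration = n0 (N - n0)."""
    rng = random.Random(seed)
    ruins, total_steps = 0, 0
    for _ in range(n_games):
        n, steps = n0, 0
        while 0 < n < N:
            n += 1 if rng.random() < 0.5 else -1   # bet $1 on a fair coin flip
            steps += 1
        if n == 0:
            ruins += 1
        total_steps += steps
    return ruins / n_games, total_steps / n_games

# expected for n0 = 3, N = 10: ruin frequency ~ 0.7, mean duration ~ 21 bets
```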
In chapter 4, we will generalize this calculation to the case of non-zero drift, which will allow us to tackle a
more interesting problem, the escape time of a stochastic diffusive process from an asymmetric
potential U(X) which includes an energy barrier. This last situation appears many times in biological applications
and, in general, in the physics of complex systems.



3.5 Generalized random walks. Anomalous diffusion.
A broad class of natural phenomena exhibit characteristics that cannot be appropriately represented by standard diffusive
processes. Examples include physical processes such as the dynamics of ions in an optical lattice, the diffusion of light in
heterogeneous media, hopping processes of molecules along polymers, etc. as well as processes found in other fields of
science such as the spreading of epidemics, the foraging behavior of bacteria and animals, the statistics of earthquakes
and air traffic, or the evolution of the stock market.

Paul Lévy pioneered the statistical description of anomalous diffusive stochastic processes that are usually called Lévy
processes; observed for instance, in the searching trajectories of different animals (see Figures). When animals have
no information about where targets (i.e., resource patches, mates, etc.) are located, different random search strategies
(that can be described by different classes of generalized random walks) may provide better chances to find them.

https://www.youtube.com/watch?v=9kK0meGXj2o



Anomalous diffusion
A stochastic process described by the variable x(t) (one can assume ⟨x(t)⟩ = 0) is said to exhibit anomalous diffusion if its
variance obeys the relation:

⟨x²(t)⟩ ∼ t^α

α = 1: standard diffusive process
0 < α < 1: subdiffusive stochastic processes
1 < α < 2: superdiffusive stochastic processes

The case α = 0 corresponds to stationary cases such as the Ornstein–Uhlenbeck process,
while the case α = 2 corresponds to ballistic motion.



Continuous time random walks and Lévy flights
One can easily generalize the definition of a random walk to the so-called continuous time random walk by assuming
that the usual space jumps are continuous, independent, identically distributed random variables, distributed according
to a probability density η(x) (for example a Gaussian). Similarly, one can also assume that the time intervals,
t, needed to perform a step are also continuous, independent, identically distributed random variables, distributed
according to a probability density ω(t). This implies that in this continuous time random walk (CTRW) model the walker
performs elementary steps with different velocities. Most often, the continuous random variable t is defined as a residence
time of the walker at a given position in space, before performing a step in a vanishing time.

Mathematically, if we assume that the space where the random walker moves is isotropic, we have to assume that a
jump of length x is equally probable to a jump of length −x, that is

η(x) = η(−x),

and both densities are normalized, ∫ η(x) dx = 1 and ∫0^∞ ω(t) dt = 1.

One usually introduces the survival probability

ψ(t) = 1 − ∫0^t ω(τ) dτ,

which represents the probability of the walker not to make a step until time t.



Now we want to characterize the random dynamics of the walker as usual by introducing the probability p(x, t) that a
continuous-time random walker arrives at position x at time t, starting from the origin x = 0 at time t = 0. This probability
can be written as

p(x, t) = δ(x) δ(t) + ∫0^t dτ ω(τ) ∫_{−∞}^{+∞} dy η(y) p(x − y, t − τ),

which means that the position x can be reached at time t by any walker that arrived at any other position x − y at time
t − τ, by making a jump of length y (−∞ < y < +∞) after a waiting time τ (0 ≤ τ ≤ t).
Moreover, we can use p(x, t) and the survival probability ψ(t) to obtain the probability ρ(x, t) that a continuous-time random
walker is at position x at time t,

ρ(x, t) = ∫0^t dτ ψ(τ) p(x, t − τ).

It is the sum over the time intervals τ during which a walker, which arrived
at x at any previous time t − τ, survived until time t.
One can use the Fourier–Laplace transformation to solve these integral equations. If k and s are the dual variables of x
and t, respectively, the first integral equation contains a convolution product in both variables x and t; by Fourier-transforming
in x and by Laplace-transforming in t,

p(k, s) = 1 + η(k) ω(s) p(k, s),

this equation takes the simple form

p(k, s) = 1 / [ 1 − η(k) ω(s) ].


From the definition of the survival probability, ω(t) is equal to minus its derivative; then

ψ(s) = [ 1 − ω(s) ]/s

after integration by parts and using the condition ψ(0) = 1.

Thus the Fourier–Laplace transform of ρ(x, t), which is the density of CTRW walkers at x at time t, is

ρ(k, s) = ψ(s) p(k, s) = [ 1 − ω(s) ] / { s [ 1 − η(k) ω(s) ] },

from which we would be able to obtain the average value of all its moments (remember the characteristic function of
the probability density):

⟨xⁿ(t)⟩ = (−i)ⁿ ∂ⁿρ(k, t)/∂kⁿ |_{k=0}.

And its Laplace transform is

⟨xⁿ(s)⟩ = (−i)ⁿ ∂ⁿρ(k, s)/∂kⁿ |_{k=0}.


In particular, for the second moment

⟨x²(s)⟩ = −∂²ρ(k, s)/∂k² |_{k=0},

where we have used that the first derivative of ρ(k, s) vanishes at k = 0 for a symmetric jump distribution.

It is interesting to investigate the limits k → 0 and s → 0, which provide information on the asymptotic behaviour
of ρ(x, t) at large distances and for long times. In fact, in these limits we can use the approximate expressions

η(k) ≈ 1 − σ²k²/2,  for a symmetric distribution of spatial jumps, and  ω(s) ≈ 1 − θs,

where σ² = ∫ x² η(x) dx is the mean square jump length and θ = ∫0^∞ t ω(t) dt is the mean residence time.
Substituting and inverse-transforming,

⟨x²(t)⟩ ≈ (σ²/θ) t.


In conclusion, if θ and σ² are finite, we obtain standard diffusion, with D = σ²/(2θ). In fact, the probability ρ(k, s) in this
asymptotic limit reads

ρ(k, s) ≈ 1/(s + Dk²),  with D = σ²/(2θ),

which is the Laplace–Fourier transform of the solution of the standard diffusion equation ∂ρ/∂t = D ∂²ρ/∂x².

Thus, in this case CTRWs are equivalent to Brownian motion on large spatio-temporal scales.
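This equivalence can be illustrated by simulating a CTRW with exponential residence times and Gaussian jumps (a sketch with arbitrary parameters and seed, added to these notes); both θ and σ² are finite, so ⟨x²(t)⟩ ≃ 2Dt with D = σ²/(2θ):

```python
import random

def ctrw_msd(t_obs=50.0, lam=2.0, sigma=0.5, n_walkers=4000, seed=13):
    """CTRW with exponential residence times (mean theta = 1/lam) and Gaussian
    jumps of variance sigma^2; returns the ensemble <x^2> at time t_obs.
    For finite theta and sigma^2 the walk is diffusive: <x^2> = 2 D t with
    D = sigma^2 / (2 theta)."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            t += rng.expovariate(lam)          # residence time before the next jump
            if t > t_obs:
                break                          # still waiting at the observation time
            x += rng.gauss(0.0, sigma)         # instantaneous jump drawn from eta(x)
        msd += x * x
    return msd / n_walkers

# here theta = 0.5 and sigma^2 = 0.25, so D = 0.25 and <x^2(50)> should be close to 25
```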

But apart from the standard diffusive behavior, ρ(x, t) may exhibit three other universal behaviours, which
only depend on the asymptotics of η(x) and ω(t), and thus on the behaviour of η(k) and ω(s) for small arguments:

Fractional Brownian motion          Lévy flights          Ambivalent processes

Fractional Brownian motion
With a CTRW model, one can also find anomalous diffusion. For instance, let us consider the case where σ² is finite,
while the average residence time θ diverges. This means that the residence time distribution ω(t) is dominated by a
power-law tail for t → +∞,

ω(t) ∼ t^−(1+α),  with 0 < α < 1.

The average residence time diverges as t^(1−α) in the limit t → +∞ and the CTRW describes a much slower evolution than
standard diffusion, i.e. a subdiffusive stochastic process.
Following a similar procedure, in the limits k → 0 and s → 0 we can use the approximations

η(k) ≈ 1 − σ²k²/2  and  ω(s) ≈ 1 − (τ0 s)^α,  so that  ρ(k, s) ≈ s^(α−1)/(s^α + Dα k²),  with Dα = σ²/(2τ0^α),

where τ0 is a microscopic time scale. This expression can be read as the Fourier–Laplace transform of the following
fractional partial differential equation:

∂ρ(x, t)/∂t = Dα ∂^(1−α)/∂t^(1−α) [ ∂²ρ(x, t)/∂x² ],

where ∂^(1−α)/∂t^(1−α) denotes a fractional time derivative.


The corresponding time-dependent average square displacement, up to leading order in k, gives

⟨x²(s)⟩ = 2Dα/s^(1+α),

and the inverse transform yields subdiffusive behavior,

⟨x²(t)⟩ = [ 2Dα/Γ(1 + α) ] t^α,

where Dα is a generalized diffusion coefficient.

Nonexponential inter-event time distributions have been reported in different contexts, and make evident the
existence of memory effects and, thus, of non-Markovian dynamics.
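A CTRW simulation with Pareto-distributed residence times makes the subdiffusion visible (a sketch for these notes; the minimum waiting time, walker number and seed are arbitrary modelling choices). For α = 1/2, increasing the observation time by a factor of 10 should increase ⟨x²⟩ by roughly 10^α ≈ 3.2 rather than the diffusive factor of 10:

```python
import random

def subdiffusive_msd(alpha=0.5, t_obs=1000.0, n_walkers=3000, seed=17):
    """CTRW with residence times drawn from a Pareto law w(t) ~ t^(-1-alpha),
    t >= 1 (mean residence time divergent for 0 < alpha < 1), and unit +/-1
    jumps. Returns the ensemble <x^2> at time t_obs."""
    rng = random.Random(seed)
    msd = 0.0
    for _ in range(n_walkers):
        t, x = 0.0, 0.0
        while True:
            # inverse-CDF sampling: u in (0, 1] gives waiting time u^(-1/alpha) >= 1
            t += (1.0 - rng.random()) ** (-1.0 / alpha)
            if t > t_obs:
                break                          # stuck in a long waiting period
            x += rng.choice((-1.0, 1.0))
        msd += x * x
    return msd / n_walkers
```

Comparing `subdiffusive_msd(t_obs=100.0)` with `subdiffusive_msd(t_obs=1000.0)` gives a growth factor near 3, consistent with ⟨x²(t)⟩ ∝ t^α.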



Lévy flights
Another interesting example contained in the CTRW model is the case where θ is finite, while η(x) is a normalized
symmetric probability density function, whose asymptotic behavior obeys the power law

η(x) ∼ |x|^−(1+α),  with 0 < α < 2.

These processes are known as Lévy flights. For symmetry reasons the first moment of η(x) is zero, while the second
moment is found to diverge: with an upper cutoff Λ,

∫_{−Λ}^{+Λ} x² η(x) dx ∼ Λ^(2−α) → ∞.

In the limits k → 0 and s → 0, one can approximate

η(k) ≈ 1 − (ℓ0 |k|)^α  and  ω(s) ≈ 1 − θs.

From this result, it is not easy to perform the inverse transforms, but one can calculate the different moments of x(t):

ρ(k, s) ≈ 1/(s + D̃α |k|^α),

where one introduces the quantity D̃α = ℓ0^α/θ, with ℓ0 a microscopic length scale.


In particular, for the second moment (understood as the scaling of the typical displacement, since the true second
moment diverges) one obtains

⟨x²(t)⟩ ∼ t^(2/α).

Again, for α = 2 we recover the standard diffusive behavior, while for α < 2 the mean squared displacement grows in time
faster than linearly. In these conditions we talk about a superdiffusive process.
Notice that α = 1 corresponds to ballistic propagation.
In general, Lévy processes are a general class of stochastic Markov processes with independent residence times and space
increments. The main feature of Lévy flights is that not all of their moments are finite. As a consequence, they cannot be
fully characterized by their first two moments, as is the case for Gaussian probability distributions.
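The divergence of the second moment is easy to see in a simulation (a sketch for these notes; the Pareto form with minimum length 1 and the seed are arbitrary modelling choices). Drawing jump lengths with tail η(x) ∼ |x|^−(1+α) and α = 1.5 < 2, a single extreme jump carries a finite fraction of the total squared displacement, so the sample second moment never converges:

```python
import random

def levy_flight(alpha=1.5, n_steps=100000, seed=23):
    """Symmetric Levy-flight steps with tail eta(x) ~ |x|^(-1-alpha), 0 < alpha < 2:
    draw |x| from a Pareto law (minimum 1) and attach a random sign."""
    rng = random.Random(seed)
    steps = []
    for _ in range(n_steps):
        size = (1.0 - rng.random()) ** (-1.0 / alpha)   # P(|x| > u) = u^(-alpha), u >= 1
        steps.append(size if rng.random() < 0.5 else -size)
    return steps

steps = levy_flight()
largest_sq = max(s * s for s in steps)
total_sq = sum(s * s for s in steps)
# for alpha < 2 the largest single jump contributes a non-vanishing fraction of
# the empirical sum of squares, no matter how many steps are drawn
```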

Their main drawback, though, is that we are implicitly assuming that each step lasts a fixed time interval, independent of
the step length. Since this can be arbitrarily large, the random walker seems to move with an arbitrarily large velocity, a
feature that is difficult to justify physically. Such physical interpretation can be restored by introducing Lévy walks, where
the distribution of lengths is the same as for Lévy flights but the path between the starting and the end points of a jump is
assumed to be run by the walker at constant velocity v; i.e., the time t spent in a jump is proportional to its length x, with
t = x/|v|. Then Lévy walks can also be viewed as processes where a walker moves at constant velocity during a time step t,
whose asymptotic distribution is of the form ω(t) ∼ t^−(1+α). If α < 2 the variance of ω(t) diverges and the distances of
the jumps run by the walker obey the probability distribution

η(x) ∼ |x|^−(1+α).

Thus, with this model one can also describe superdiffusive behavior.



In conclusion, we can formulate the Lévy walk process as a modification of the CTRW model discussed above. The probability
p(x, t) that a Lévy walker arrives at position x at time t, starting from the origin x = 0 at time t = 0, still obeys a similar equation,
namely,

p(x, t) = δ(x) δ(t) + ∫0^t dτ ∫ dy Ψ(y, τ) p(x − y, t − τ).

But now, the probability distribution Ψ(y, τ) of performing a step of length y in a time τ must take into account the relation
between jump size and jump duration, i.e. Ψ(y, τ) = (1/2) δ(|y| − vτ) ω(τ), with ω(τ) ∼ τ^−(1+α) for large values of τ.
Fourier-transforming in x and Laplace-transforming in t now gives the analogous algebraic relation between the transforms.

Moreover, the probability of being at x at time t is

ρ(x, t) = ∫0^t dτ ∫ dx′ Φ(x′, τ) p(x − x′, t − τ),

where Φ(x′, τ) = (1/2) δ(|x′| − vτ) ψ(τ).

Here we have to consider the probability that a Lévy walker moves exactly by a distance x′ in a time τ, or in other words, that
the motion of a walker proceeds at constant speed v under the condition that no other jump event occurs before time t.

The algebra is quite cumbersome in this case, but the final results can be summarized as follows: the motion is ballistic,
⟨x²(t)⟩ ∼ t², for 0 < α < 1, and superdiffusive, ⟨x²(t)⟩ ∼ t^(3−α), for 1 < α < 2, crossing over to normal diffusion for α > 2.



Ambivalent processes
The last and most interesting combination of waiting times and spatial steps is the one in which long waiting times
compete and interfere with long-range spatial steps, i.e. when both η(x) and ω(t) decay asymptotically as power laws,
η(x) ∼ |x|^−(1+β) and ω(t) ∼ t^−(1+α).

The asymptotic pdf of the position of the ambivalent process can again be expressed in terms of a Fourier and a
Laplace inversion. The scaling of the second moment in this case is

⟨x²(t)⟩ ∼ t^(2α/β).

The ratio of the exponents, α/β, captures the interplay between sub- and superdiffusion. For β < 2α the ambivalent
CTRW is effectively superdiffusive, for β > 2α effectively subdiffusive. For β = 2α the process exhibits the same scaling
as ordinary Brownian motion, even though the probability distribution is non-Gaussian.

The various types of asymptotic universal behaviours are depicted in the following figure, which shows a phase
diagram spanned by the temporal exponent α and the spatial exponent β.



Other non-Markovian effects
Non-Markovian processes, which also occur for instance in the case of colored noise, cannot be considered merely as
corrections to the class of Markov processes. They require the development of new methods.
In a discrete stochastic process, where Xt has two or more possible states j, and in each of them it has a probability per unit
time Wij to jump to i, when the Wij are constants, {Xt} is a Markov process with a master equation. On the other hand, when
the transition rates are functions g(t) of the time t elapsed since arrival in j, the process is non-Markovian and no master
equation can be written for that case. Examples: reactant molecules in solution may get stuck temporarily to the
container wall; a bacterium produces offspring after reaching a certain age; patterns of human activity…

Many materials are also not Markovian, and it would be more useful to write the equations of motion using a
memory kernel γ1(t). For example, in the case of a Brownian particle immersed in a deformable surrounding medium:

m dv(t)/dt = −∫0^t γ1(t − t′) v(t′) dt′ + ξ(t).

The thermal noise terms are now not so easy to deal with; the substrate noise term is now coloured, related to the kernel
by the fluctuation-dissipation theorem, ⟨ξ(t) ξ(t′)⟩ = T γ1(t − t′).
Moreover, in the case of non-Markovian problems, it is not possible to write down a Fokker-Planck equation for these kinds
of Langevin equations. Concepts such as first-passage times no longer apply directly and need to be reconsidered.
Non-Markovian stochastic processes are difficult to tackle analytically and, in many cases, their understanding relies
completely on numerical simulations.
