Numerical Simulation in Statistical Physics

Lectures for the Master 2 programs “Theoretical Physics of Complex Systems” and “Modeling, Statistics and Algorithms for Out-of-Equilibrium Systems”.
Pascal Viot
Laboratoire de Physique Théorique de la Matière Condensée, Boîte 121,
4, Place Jussieu, 75252 Paris Cedex 05
Email : viot@lptl.jussieu.fr
December 6, 2010
These lecture notes provide an introduction to the methods of numerical simulation in classical statistical physics.
Using simple models, the first part introduces the basics of the Monte Carlo method and of Molecular Dynamics. A second part is devoted to the basic microscopic quantities available in simulation methods and to the methods for studying phase transitions. In a third part, we consider the study of out-of-equilibrium systems as well as the characterization of their dynamics; in particular, aging phenomena are presented.
Chapter 1
Statistical mechanics and
numerical simulation
Contents
1.1 Brief History of simulation
1.2 Ensemble averages
1.2.1 Microcanonical ensemble
1.2.2 Canonical ensemble
1.2.3 Grand canonical ensemble
1.2.4 Isothermal-isobaric ensemble
1.3 Model systems
1.3.1 Introduction
1.3.2 Simple liquids
1.3.3 Ising model and lattice gas. Equivalence
1.4 Conclusion
1.5 Exercises
1.5.1 ANNNI Model
1.6 Blume-Capel model
1.6.1 Potts model
1.1 Brief History of simulation
Numerical simulation started in the 1950s, when computers were used for the first time for peaceful purposes. In particular, the computer MANIAC (an acronym for “mathematical and numerical integrator and computer”; MANIAC I started on March 15, 1952) began operating in 1952
at Los Alamos. Simulation provides a complementary approach to theoretical methods². Areas of physics where perturbative approaches are efficient (dilute gases, vibrations in quasi-harmonic solids) do not require simulation methods. Conversely, liquid-state physics, where few exact results are known and where the theoretical developments are not always under control, has been developed largely through simulation. The first Monte Carlo simulation of liquids was performed by Metropolis et al. in 1953³.
The first Molecular Dynamics simulation was realized on the hard-disk model by Alder and Wainwright in 1957 [Alder and Wainwright, 1957]. The first Molecular Dynamics of a simple liquid (argon) was performed by Rahman in 1964.
In the last two decades, the increasing power of computers combined with their decreasing cost has made numerical simulation possible on personal computers. Even if supercomputers remain necessary for extensive simulations, it has become possible to perform simulations on low-cost machines. The power of computers is measured in GFlops (billions of floating-point operations per second). Nowadays, a personal computer (for instance, with an Intel Core i7) has a processor with four cores (floating-point units) and offers a power of 24 GFlops. Whereas the clock frequency of processors seems to have plateaued in the last few years, the power of computers continues to increase because the number of cores per processor grows; processors with 6 or 8 cores are now available. In general, however, the performance of a program is not a linear function of the number of cores. For scientific codes, one can exploit the available cores by incorporating libraries which perform parallel computing (OpenMP, MPI). The GNU compilers are free software allowing for parallel computing. For massive parallelism, MPI (Message Passing Interface) is a library which spreads the computing load over many cores.
Table 1.1 gives the characteristics and power of the most powerful computers in the world. The rapid evolution of computer power is worth noting: in 2009, only one computer exceeded one PFlops, whereas four do this year. Over the same period, the IDRIS computer went from rank 9 to rank 38. Note that China owns the second most powerful computer in the world. A last remark concerns the operating systems of the 500 fastest computers: Linux and associated distributions run on 484 of them, Windows on 5, and others on 11.
² Sometimes, theories are in their infancy, and numerical simulation is the sole means of studying models.
³ Nicholas Constantine Metropolis (1915–1999), a mathematician and physicist by training, was hired by J. Robert Oppenheimer at the Los Alamos National Laboratory in April 1943. He was one of the scientists of the Manhattan Project and collaborated with Enrico Fermi and Edward Teller on the first nuclear reactors. After the war, Metropolis went back to Chicago as an assistant professor and returned to Los Alamos in 1948, where he created the Theoretical Division. He built the MANIAC computer in 1952 and, five years later, MANIAC II. From 1957 to 1965 he was again in Chicago, where he founded the Computer Research Division; he finally returned to Los Alamos.
Ranking | Vendor / Cores | Rmax (TFlops) | Rpeak (TFlops) | Site, Country, Year
1 | Cray XT5 / 224162 | 1759 | 2331 | Oak Ridge Nat. Lab., USA, 2009
2 | Dawning / 120640 | 1271 | 2984 | NSCS, China, 2010
3 | Roadrunner / 122400 | 1042 | 1375 | DoE, USA, 2009
4 | Cray XT5 / 98928 | 831 | 1028 | NICS, USA, 2009
5 | Blue Gene/P / 212992 | 825 | 1002 | FZJ, Germany, 2009
... | ... | ... | ... | ...
18 | SGI / 23040 | 237 | 267 | France, 2010

Table 1.1 – June 2010 ranking of supercomputers (Rmax: sustained performance, Rpeak: peak performance).
1.2 Ensemble averages
Knowledge of the partition function of a system allows one to obtain all its thermodynamic quantities. First, we briefly review the main ensembles used in Statistical Mechanics. We assume that all ensembles lead to the same thermodynamic quantities in the thermodynamic limit, a property which holds for systems where the interaction between particles is not long-ranged and in the absence of quenched disorder.
For finite-size systems (which are those studied in computer simulation), there are differences between ensembles that we will analyze in the following.
1.2.1 Microcanonical ensemble
The system is characterized by the set of macroscopic variables: volume V, total energy E and number of particles N. This ensemble is not appropriate for comparison with experimental studies, where one generally has
• a fixed number of particles at fixed pressure P and temperature T; this corresponds to the set of variables (N, P, T) and to the isothermal-isobaric ensemble;
• a fixed chemical potential µ, volume V and temperature T: the (µ, V, T) or grand canonical ensemble;
• a fixed number of particles, a given volume V and temperature T: the (N, V, T) or canonical ensemble.
There is a Monte Carlo method for the microcanonical ensemble, but it is rarely used, in particular for molecular systems. Conversely, the microcanonical ensemble is the natural ensemble for the Molecular Dynamics of Hamiltonian systems, where the total energy is conserved during the simulation.
The variables conjugate to the global quantities defining the ensemble fluctuate in time. For the microcanonical ensemble, these are the pressure P conjugate to the volume V, the temperature T conjugate to the energy E, and the chemical potential µ conjugate to the total number of particles N.
1.2.2 Canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the total number of particles N. If we denote the Hamiltonian of the system by H, the partition function reads

Q(V, β, N) = Σ_α exp(−βH(α))    (1.1)

where β = 1/(k_B T) (k_B is the Boltzmann constant). The sum runs over all configurations of the system, labeled by the index α. If the configurations are continuous, the sum is replaced with an integral. The free energy F(V, β, N) of the system is equal to

βF(V, β, N) = −ln(Q(V, β, N)).    (1.2)

One defines the probability of having a configuration α as

P(V, β, N; α) = exp(−βH(α)) / Q(V, β, N).    (1.3)

One easily checks that the basic properties of a probability are satisfied, i.e. Σ_α P(V, β, N; α) = 1 and P(V, β, N; α) > 0.
The derivatives of the free energy are related to the moments of this probability distribution, which gives a microscopic interpretation of the macroscopic thermodynamic quantities. The mean energy and the specific heat are then given by

• Mean energy

U(V, β, N) = ∂(βF(V, β, N))/∂β    (1.4)
           = Σ_α H(α) P(V, β, N; α)    (1.5)
           = ⟨H(α)⟩    (1.6)

• Specific heat

C_v(V, β, N) = −k_B β² ∂U(V, β, N)/∂β    (1.7)
             = k_B β² [ Σ_α H²(α) P(V, β, N; α) − ( Σ_α H(α) P(V, β, N; α) )² ]    (1.8)
             = k_B β² ( ⟨H(α)²⟩ − ⟨H(α)⟩² )    (1.9)
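As an illustration (an added sketch, not an example from the original notes), the moments (1.6) and (1.9) can be checked by brute-force enumeration on a very small system; the one-dimensional Ising chain below, with J = 1, β = 0.5, k_B = 1 and four spins, is an arbitrary illustrative choice:

```python
import itertools
import math

def canonical_averages(L=4, J=1.0, beta=0.5):
    """Exact <H> and C_v for a periodic 1D Ising chain by full enumeration."""
    energies = []
    for spins in itertools.product([-1, 1], repeat=L):
        # H = -J sum_i S_i S_{i+1}, periodic boundary conditions
        energies.append(-J * sum(spins[i] * spins[(i + 1) % L] for i in range(L)))
    # Q = sum_alpha exp(-beta H(alpha))                      Eq. (1.1)
    Q = sum(math.exp(-beta * E) for E in energies)
    # P(alpha) = exp(-beta H(alpha)) / Q                     Eq. (1.3)
    probs = [math.exp(-beta * E) / Q for E in energies]
    mean_E = sum(E * p for E, p in zip(energies, probs))     # Eq. (1.6)
    mean_E2 = sum(E * E * p for E, p in zip(energies, probs))
    C_v = beta**2 * (mean_E2 - mean_E**2)                    # Eq. (1.9), k_B = 1
    return mean_E, C_v
```

For four spins the enumeration involves only 2⁴ = 16 configurations, so the result can also be checked against the exact transfer-matrix solution of the chain.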
1.2.3 Grand canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the chemical potential µ. Denoting by H_N the Hamiltonian of N particles, the grand partition function Ξ(V, β, µ) reads

Ξ(V, β, µ) = Σ_{N=0}^{∞} Σ_{α_N} exp(−β(H_N(α_N) − µN))    (1.10)

where β = 1/(k_B T) (k_B is the Boltzmann constant); the inner sum runs over all configurations α_N of N particles, and the outer sum over all numbers of particles from 0 to ∞. The grand potential is equal to

βΩ(V, β, µ) = −ln(Ξ(V, β, µ)).    (1.11)

In a similar way, one defines the probability distribution P(V, β, µ; α_N) of having a configuration α_N (with N particles) by the relation

P(V, β, µ; α_N) = exp(−β(H_N(α_N) − µN)) / Ξ(V, β, µ).    (1.12)
The derivatives of the grand potential can be expressed as moments of this probability distribution:

• Mean number of particles

⟨N(V, β, µ)⟩ = −∂(βΩ(V, β, µ))/∂(βµ)    (1.13)
             = Σ_N Σ_{α_N} N P(V, β, µ; α_N)    (1.14)

• Susceptibility

χ(V, β, µ) = β/(⟨N(V, β, µ)⟩ρ) · ∂⟨N(V, β, µ)⟩/∂(βµ)    (1.15)
           = β/(⟨N(V, β, µ)⟩ρ) [ Σ_N Σ_{α_N} N² P(V, β, µ; α_N) − ( Σ_N Σ_{α_N} N P(V, β, µ; α_N) )² ]    (1.16)
           = β/(⟨N(V, β, µ)⟩ρ) ( ⟨N²(V, β, µ)⟩ − ⟨N(V, β, µ)⟩² )    (1.17)
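The same brute-force check can be done in the grand canonical ensemble (again an added sketch; the one-dimensional lattice gas with nearest-neighbor coupling and the values of U_2, µ and β below are illustrative):

```python
import itertools
import math

def grand_canonical(M=6, U2=-1.0, beta=1.0, mu=0.5):
    """<N> and particle-number fluctuations for a small 1D lattice gas,
    by direct enumeration of all occupation patterns (Eqs. 1.10-1.14)."""
    weights, Ns = [], []
    for occ in itertools.product([0, 1], repeat=M):
        N = sum(occ)
        # nearest-neighbour lattice-gas energy, periodic boundaries
        E = U2 * sum(occ[i] * occ[(i + 1) % M] for i in range(M))
        weights.append(math.exp(-beta * (E - mu * N)))   # Eq. (1.10) summand
        Ns.append(N)
    Xi = sum(weights)
    probs = [w / Xi for w in weights]                    # Eq. (1.12)
    mean_N = sum(N * p for N, p in zip(Ns, probs))       # Eq. (1.14)
    var_N = sum(N * N * p for N, p in zip(Ns, probs)) - mean_N**2
    return mean_N, var_N
```

The variance of N then gives, through Eq. (1.17), the susceptibility up to the prefactor β/(⟨N⟩ρ).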
1.2.4 Isothermal-isobaric ensemble

The system is characterized by the following set of variables: the pressure P, the temperature T and the total number of particles N. Because this ensemble is generally devoted to molecular systems and not used for lattice models, we only consider continuous systems here. The partition function reads

Q(P, β, N) = (βP / (Λ^{3N} N!)) ∫₀^∞ dV exp(−βPV) ∫_V dr^N exp(−βU(r^N))    (1.18)

where β = 1/(k_B T) (k_B is the Boltzmann constant) and Λ is the thermal de Broglie length introduced in Eq. (1.25). The Gibbs potential is equal to

βG(P, β, N) = −ln(Q(P, β, N)).    (1.19)

One defines the probability Π(P, β, N; α_V)⁴ of having a configuration α_V ≡ r^N (particle positions r^N in a volume V) at temperature T and pressure P:

Π(P, β, N; α_V) = exp(−βPV) exp(−βU(r^N)) / Q(P, β, N).    (1.20)
The derivatives of the Gibbs potential are expressed as moments of this probability distribution. For instance,

• Mean volume

⟨V(P, β, N)⟩ = ∂(βG(P, β, N))/∂(βP)    (1.21)
             = ∫₀^∞ dV V ∫_V dr^N Π(P, β, N; α_V).    (1.22)
This ensemble is appropriate for simulations which aim to determine the equation of state of a system.
Let us recall that a statistical ensemble cannot be defined from a set of intensive variables only. However, we will see in Chapter 7 that a technique called the Gibbs ensemble method is close in spirit to such an ensemble (with the difference that simulations always consider finite systems).
1.3 Model systems
1.3.1 Introduction
We restrict these lecture notes to classical statistical mechanics, which means that quantum systems are not considered here. In order to provide many illustrations of the successive methods, we introduce several basic models that we will consider several times in the following.
⁴ In order to avoid confusion with the pressure, the probability is denoted Π.
1.3.2 Simple liquids
A simple liquid is a system of N point particles labeled from 1 to N, of identical mass m, interacting with an external potential U_1(r_i) and among themselves through a pairwise potential U_2(r_i, r_j) (i.e. a potential where the particles only interact by pairs). The Hamiltonian of this system reads

H = Σ_{i=1}^{N} [ p_i²/(2m) + U_1(r_i) ] + (1/2) Σ_{i≠j} U_2(r_i, r_j),    (1.23)

where p_i is the momentum of particle i.
For instance, in the grand canonical ensemble, the partition function Ξ(µ, β, V) is given by

Ξ(µ, β, V) = Σ_{N=0}^{∞} (1/N!) ∫ Π_{i=1}^{N} (d^d p_i d^d r_i) / h^{dN} exp(−β(H − µN))    (1.24)

where h is the Planck constant and d the space dimension.
The integral over the momenta can be performed analytically, because the multidimensional integral factorizes over the variables p_i, and the one-dimensional integral over each component of the momentum is a Gaussian integral. Introducing the thermal de Broglie length

Λ_T = h / √(2πmk_B T),    (1.25)

one has

∫_{−∞}^{+∞} (d^d p / h^d) exp(−βp²/(2m)) = 1/Λ_T^d.    (1.26)
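Equation (1.26) can be checked numerically in d = 1 (an added sketch in illustrative units h = m = 1, with an arbitrary β):

```python
import math

def check_momentum_integral(beta=2.0, m=1.0, h=1.0, L=50.0, n=200000):
    """Midpoint-rule evaluation of int dp/h exp(-beta p^2/(2m)), to be
    compared with 1/Lambda_T from Eq. (1.25). Units with h = m = 1 assumed."""
    dp = 2 * L / n
    # the integrand is negligible beyond |p| ~ a few sqrt(m/beta), so [-L, L] suffices
    integral = sum(
        math.exp(-beta * (-L + (i + 0.5) * dp) ** 2 / (2 * m)) for i in range(n)
    ) * dp / h
    lambda_T = h / math.sqrt(2 * math.pi * m / beta)  # Eq. (1.25), with k_B T = 1/beta
    return integral, 1.0 / lambda_T
```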
The partition function can then be re-expressed as

Ξ(µ, β, V) = Σ_{N=0}^{∞} (1/N!) (e^{βµ}/Λ_T^d)^N Z_N(β, N, V)    (1.27)

where Z_N(β, N, V) is called the configuration integral,

Z_N(β, N, V) = ∫ dr^N exp(−βU(r^N)).    (1.28)

One defines z = e^{βµ} as the fugacity.
The thermodynamic potential associated with this partition function, Ω(µ, β, V), is

Ω(µ, β, V) = −(1/β) ln(Ξ(µ, β, V)) = −PV    (1.29)

where P is the pressure.
Note that, for classical systems, only the part of the partition function involving the potential energy is non-trivial. Moreover, the kinetic and potential parts decouple, contrary to quantum statistical mechanics.
1.3.3 Ising model and lattice gas. Equivalence
The Ising model is a lattice model where sites are occupied by particles with very limited degrees of freedom. Indeed, each particle is characterized by a spin, a two-state variable (−1, +1). Each spin interacts with its nearest neighbors and, if present, with an external field H. The Ising model, initially introduced for describing the behavior of para-ferromagnetic systems, can be used in many physical situations. This simple model can be solved analytically in several situations (one and two dimensions), and accurate results can be obtained in higher dimensions. The Hamiltonian reads

H = −J Σ_{<i,j>} S_i S_j    (1.30)

where < i, j > denotes a summation over pairs of nearest-neighbor sites and J is the interaction strength. If J > 0, the interaction is ferromagnetic; conversely, if J < 0, the interaction is antiferromagnetic. The analytical solution in one dimension shows that there is no phase transition at finite temperature. In two dimensions, Onsager (1944) obtained the solution in the absence of an external field. In three dimensions, no analytical solution has been obtained, but theoretical developments and numerical simulations give the properties of the system in the magnetization-temperature phase diagram.
The lattice gas model was introduced by Lee and Yang. The basic idea, greatly extended later, consists of assuming that the macroscopic properties of a system with a large number of particles do not crucially depend on the microscopic details of the interaction. By performing a coarse-graining of the microscopic system, one builds an effective model with a smaller number of degrees of freedom. This idea is often used in statistical physics, because it is often necessary to reduce the complexity of the original system, for two kinds of reasons: 1) practical: in a simpler model, theoretical treatments are more tractable and simulations can be performed with larger system sizes; 2) theoretical: macroscopic properties are almost independent of some microscopic degrees of freedom, and a local average is an efficient method for obtaining an effective model for the physical properties of the system. This approach underlies the existence of a certain universality, which is appealing to many physicists.
To go from the Hamiltonian of a simple liquid to a lattice gas model, we proceed in three steps. The first consists of rewriting the Hamiltonian by introducing a microscopic density variable: this step is exact. The second step consists of performing a local average to define the lattice Hamiltonian; several approximations are performed in this step, and it is essential to determine their validity. In the third step, several changes of variables are performed in order to transform the lattice gas Hamiltonian into a spin model Hamiltonian: this last step is again exact.
Rewriting of the Hamiltonian

First, let us re-express the Hamiltonian of the simple liquid as a function of the microscopic⁵ density

ρ(r) = Σ_{i=1}^{N} δ(r − r_i).    (1.31)

By using the property of the Dirac distribution

∫ f(x)δ(x − a)dx = f(a),    (1.32)

one obtains

Σ_{i=1}^{N} U_1(r_i) = Σ_{i=1}^{N} ∫_V U_1(r)δ(r − r_i)d^d r = ∫_V U_1(r)ρ(r)d^d r    (1.33)
and, in a similar way,

Σ_{i≠j} U_2(r_i, r_j) = Σ_{i≠j} ∫_V U_2(r, r_j)δ(r − r_i)d^d r    (1.34)
                      = Σ_{i≠j} ∫_V ∫_V U_2(r, r′)δ(r − r_i)δ(r′ − r_j)d^d r d^d r′    (1.35)
                      = ∫_V ∫_V U_2(r, r′)ρ(r)ρ(r′)d^d r d^d r′.    (1.36)
Local average

The volume V of the simple liquid is divided into N_c cells such that the probability of finding more than one particle center per cell is negligible⁶ (typically, this means that the diagonal of each cell is slightly smaller than the particle diameter, see Fig. 1.1). Let us denote by a the linear size of a cell; the integral over the volume is then replaced by a sum over cells,

∫_V d^d r = a^d Σ_{α=1}^{N_c} 1,    (1.37)

which gives N_c = V/a^d. The lattice Hamiltonian is

H = Σ_{α=1}^{N_c} U_1(α)n_α + (1/2) Σ_{α,β}^{N_c} U_2(α, β)n_α n_β    (1.38)
⁵ The local density is obtained by performing a local average that leads to a smooth function.
⁶ The particle is an atom or a molecule with its own characteristic length scale.
Figure 1.1 – Particle configuration of a two-dimensional simple liquid. The grid represents the cells used for the local average. Each cell can accommodate zero or one particle center.
where n_α is a Boolean variable, namely n_α = 1 when a particle center is within cell α, and n_α = 0 otherwise. Note that the index α of this new Hamiltonian is associated with cells, whereas the index of the original Hamiltonian is associated with particles. One obviously has U_2(α, α) = 0 (no self-energy), because there is no particle overlap. Since the interaction between particles is short-ranged, U_2(r) is short-ranged too and is replaced with an interaction between nearest-neighbor cells:

H = Σ_{α=1}^{N_c} U_1(α)n_α + U_2 Σ_{<α,β>} n_α n_β.    (1.39)

The factor 1/2 does not appear because the bracket < α, β > only considers distinct pairs.
Equivalence with the Ising model

We consider the previous lattice gas model in the grand canonical ensemble. The relevant quantity is then

H − µN = Σ_α (U_1(α) − µ)n_α + U_2 Σ_{<α,β>} n_α n_β.    (1.40)

Let us introduce the spin variables

S_α = 2n_α − 1.    (1.41)

As expected, the spin variable is equal to +1 when a site is occupied by a particle (n_α = 1) and −1 when the site is unoccupied (n_α = 0). One then obtains
Σ_α (U_1(α) − µ)n_α = (1/2) Σ_α (U_1(α) − µ)S_α + (1/2) Σ_α (U_1(α) − µ)    (1.42)

and

U_2 Σ_{<α,β>} n_α n_β = (U_2/4) Σ_{<α,β>} (1 + S_α)(1 + S_β)    (1.43)
                      = (U_2/4) [ N_c c/2 + c Σ_α S_α + Σ_{<α,β>} S_α S_β ]    (1.44)

where c is the coordination number (number of nearest neighbors) of the lattice.
Therefore, this gives

H − µN = E_0 − Σ_α H_α S_α − J Σ_{<α,β>} S_α S_β    (1.45)

with

E_0 = N_c [ (⟨U_1(α)⟩ − µ)/2 + cU_2/8 ],    (1.46)

where ⟨U_1(α)⟩ corresponds to the average of U_1 over the sites,

H_α = (µ − U_1(α))/2 − cU_2/4    (1.47)

and

J = −U_2/4,    (1.48)

where J is the interaction strength. Finally, one gets

Ξ_gas(µ, V, β) = e^{−βE_0} Q_Ising(H, β, J, N_c).    (1.49)
This analysis confirms that the partition function of the Ising model in the canonical ensemble is in one-to-one correspondence with the partition function of the lattice gas in the grand canonical ensemble. One can easily show that the partition function of the Ising model with the constraint of constant total magnetization corresponds to the canonical partition function of the lattice gas model.
Some comments on these results: first, one checks that if the interaction is attractive, U_2 < 0, one has J > 0, which corresponds to a ferromagnetic interaction. Because the interaction is divided by 4, the critical temperature of the Ising model is four times higher than that of the lattice gas model. Note also that the equivalence concerns the configuration integral and not the initial partition function. This means that, as for the Ising model, the lattice gas does not have a microscopic dynamics, unlike the original liquid model. By performing the coarse-graining, one has added a symmetry between particles and holes which does not exist in the liquid model. This leads, for the lattice gas model, to a symmetric coexistence curve and a critical density equal to 1/2. By comparison, the Lennard-Jones liquid has a critical packing fraction close to 0.3 in three dimensions. In addition, the coexistence curves of real liquids are not symmetric between the liquid and the gas phase, whereas the lattice gas, due to its additional symmetry ρ_liq = 1 − ρ_gas, leads to a symmetric coexistence curve.
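The correspondence (1.49) can be verified numerically on a small system. The sketch below (an added illustration, not from the original notes) uses a one-dimensional ring (coordination number c = 2) with U_1 = 0; the values of U_2, µ and β are arbitrary illustrative choices:

```python
import itertools
import math

def check_mapping(Nc=6, U2=-1.0, mu=0.3, beta=0.7):
    """Check Eq. (1.49) on a 1D ring (c = 2, U_1 = 0): the lattice-gas grand
    partition function equals exp(-beta E_0) times the Ising partition function."""
    c = 2
    # left-hand side: direct lattice-gas enumeration, Eq. (1.40)
    Xi = 0.0
    for n in itertools.product([0, 1], repeat=Nc):
        E = U2 * sum(n[a] * n[(a + 1) % Nc] for a in range(Nc))
        Xi += math.exp(-beta * (E - mu * sum(n)))
    # right-hand side: Ising parameters from Eqs. (1.46)-(1.48) with U_1 = 0
    E0 = Nc * (-mu / 2 + c * U2 / 8)
    Hf = mu / 2 - c * U2 / 4
    J = -U2 / 4
    Q = 0.0
    for S in itertools.product([-1, 1], repeat=Nc):
        E_ising = -Hf * sum(S) - J * sum(S[a] * S[(a + 1) % Nc] for a in range(Nc))
        Q += math.exp(-beta * E_ising)
    return Xi, math.exp(-beta * E0) * Q
```

Both sides agree to machine precision, as the mapping is exact.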
Before starting a simulation, it is useful to have an estimate of the phase diagram in order to choose the simulation parameters correctly. Mean-field theory gives a first approximation, for instance, of the critical temperature. For the Ising model (lattice gas), one obtains (with k_B = 1)

T_c = cJ = c|U_2|/4.    (1.50)

As expected, the mean-field approximation overestimates the value of the critical temperature, because the fluctuations are neglected; neglecting fluctuations allows order to persist up to a temperature higher than the exact critical temperature of the system. Unlike the critical exponents, which do not depend on the lattice, the critical temperature depends on the details of the system. However, the larger the coordination number, the closer the mean-field value is to the exact critical temperature.
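For the two-dimensional square lattice (c = 4), the mean-field estimate (1.50) can be compared with Onsager's exact result, T_c = 2J/ln(1 + √2), in units where k_B = 1 (an added illustration):

```python
import math

J = 1.0  # interaction strength (illustrative units, k_B = 1)
c = 4    # coordination number of the square lattice

Tc_mean_field = c * J                          # Eq. (1.50)
Tc_exact = 2 * J / math.log(1 + math.sqrt(2))  # Onsager's exact 2D result
```

The mean-field value, 4J, indeed overestimates the exact one, approximately 2.27J.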
1.4 Conclusion

In this chapter, we have drawn the preliminary scheme to follow before a simulation: 1) select the appropriate ensemble for studying the model; 2) when the microscopic model is very complicated, perform a coarse-graining procedure which leads to an effective model that is more amenable to theoretical treatments and/or more efficient for simulation. As we will see in the following chapters, refined Monte Carlo methods consist of including more and more knowledge of Statistical Mechanics, which finally leads to more efficient simulations. I therefore recommend reading several textbooks on Statistical Physics [Chandler, 1987; Goldenfeld, 1992; Hansen and McDonald, 1986; Young, 1998]. In addition to these lecture notes, several textbooks or reviews are available on simulation [Binder, 1997; Frenkel and Smit, 1996; Krauth, 2006; Landau and Binder, 2000].
1.5 Exercises
1.5.1 ANNNI Model
Frustrated systems are generally characterized by a large degeneracy of the ground state. This property can be illustrated with a simple model, the ANNNI model (Axial Next-Nearest-Neighbor Ising). One considers a one-dimensional lattice of unit spacing and length N, with periodic boundary conditions and an Ising variable on each lattice site. Spins interact through a ferromagnetic nearest-neighbor interaction J_1 > 0 and an antiferromagnetic next-nearest-neighbor interaction J_2 < 0. The Hamiltonian reads

H = −J_1 Σ_{i=1}^{N} S_i S_{i+1} − J_2 Σ_{i=1}^{N} S_i S_{i+2}    (1.51)

with S_{N+1} = S_1 and S_{N+2} = S_2.
♣ Q. 1.5.1-1 One considers a staggered configuration of spins, S_i = (−1)^i. Calculate the energy per spin of this configuration. Infer that this configuration cannot be the ground state of the ANNNI model, whatever the values of J_1 and J_2. The calculation should be done for a lattice with an even number of sites.

♣ Q. 1.5.1-2 Calculate the energy per spin for a configuration of aligned spins (configuration A) and for a periodic configuration of alternated spins of period 4 (configuration B), corresponding to a sequence of two spins up, two spins down, two spins up, etc. The calculation should be done for a lattice whose length is a multiple of 4.

♣ Q. 1.5.1-3 Show that configuration A has a lower energy than configuration B for a ratio κ = −J_2/J_1 to be determined.

In the case κ = 1/2, one admits that the ground state is strongly degenerate and that the associated configurations are sequences of spin blocks of length k ≥ 2.
♣ Q. 1.5.1-4 Starting from one of these configurations on a lattice of length L − 1, one inserts at the end of the lattice one site carrying a spin S_L. What must the sign of this spin be with respect to S_{L−1} in order that the new configuration belongs to the ground state of a lattice of length L?
♣ Q. 1.5.1-5 Starting from one of these configurations on a lattice of length L − 2, one inserts at the end of the lattice two sites carrying two spins S_{L−1} and S_L of the same sign. What must the sign of these two spins be with respect to S_{L−2} in order that the new configuration belongs to the ground state of a lattice of length L and differs from any configuration generated by the previous process?
♣ Q. 1.5.1-6 Let us denote by D_L the degeneracy of the ground state for a lattice of size L. By using the previous results, justify the relation

D_L = D_{L−1} + D_{L−2}.    (1.52)

♣ Q. 1.5.1-7 Show that, for large values of L, one has

D_L ∼ (a_F)^L    (1.53)

where a_F is a constant to be determined.

The entropy per spin of the ground state in the thermodynamic limit is defined as

S_0 = lim_{L→∞} (k_B/L) ln(D_L).    (1.54)
♣ Q. 1.5.1-8 Calculate this value numerically.
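A possible numerical sketch for this question, assuming only the recursion (1.52); the seed values D2 and D3 below are illustrative small-lattice counts, and the limit (1.54) does not depend on them:

```python
import math

def ground_state_entropy(L=500, D2=1, D3=2):
    """Estimate S_0 = lim (k_B/L) ln D_L (k_B = 1) for the kappa = 1/2 ground
    state, using only the recursion D_L = D_{L-1} + D_{L-2} of Eq. (1.52).
    The seeds D2 and D3 are illustrative; the limit does not depend on them."""
    a, b = D2, D3
    for _ in range(L - 3):
        a, b = b, a + b  # D_L = D_{L-1} + D_{L-2}
    return math.log(b) / L
```

The estimate converges towards ln(a_F), which is indeed smaller than ln 2.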
♣ Q. 1.5.1-9 Justify that S_0 is less than ln(2). What can one say about the degeneracy of the ground state for a value κ ≠ 1/2?
1.6 Blume-Capel model

The Blume-Capel model describes a lattice spin system where the spin variable can take three values: −1, 0 and +1. This model describes metallic ferromagnetism and superfluidity in He³-He⁴ mixtures. Here, we consider a mean-field version, which allows tractable calculations, by assuming that all lattice spins interact with one another (long-ranged interaction). The Hamiltonian is

H = Δ Σ_{i=1}^{N} S_i² − (1/(2N)) ( Σ_{i=1}^{N} S_i )²    (1.55)

where the second term of the Hamiltonian describes a ferromagnetic coupling and Δ > 0 is the energy difference between the magnetic states (S_i = ±1) and the non-magnetic state (S_i = 0).
♣ Q. 1.6.0-10 Determine the ground states and the associated energies. Verify that the ground-state energy is extensive. Infer a phase transition at T = 0 for a value of Δ to be determined.
♣ Q. 1.6.0-11 Let us denote by N_+, N_− and N_0 the numbers of sites occupied by spins +1, −1 and 0, respectively. Show that the total energy of the system can be expressed as a function of Δ, N, M = N_+ − N_− and Q = N_+ + N_−.
♣ Q. 1.6.0-12 Justify that the number of available microstates is given by

Ω = N! / (N_+! N_−! N_0!).    (1.56)

♣ Q. 1.6.0-13 Using Stirling's formula, show that the microcanonical entropy per spin s = S/N can be written as a function of m = M/N and q = Q/N (one sets k_B = 1).
♣ Q. 1.6.0-14 By introducing the energy per site ε = E/N, show that

q = 2Kε + Km²    (1.57)

where K is a parameter to be determined.

♣ Q. 1.6.0-15 Show that the entropy per spin reads

s(m, ε) = −(1 − 2Kε − Km²) ln(1 − 2Kε − Km²)
          − (1/2)(2Kε + Km² + m) ln(2Kε + Km² + m)
          − (1/2)(2Kε + Km² − m) ln(2Kε + Km² − m)
          + (2Kε + Km²) ln 2.    (1.58)
In the following, we determine the transition line in the T−Δ diagram.

♣ Q. 1.6.0-16 Show that the expansion of the entropy per spin to 4th order in m can be written as

s = s_0 + Am² + Bm⁴ + O(m⁶)    (1.59)

with

A = −K ln( Kε / (1 − 2Kε) ) − 1/(4Kε)    (1.60)

and

B = −K / (4ε(1 − 2Kε)) + 1/(8Kε²) − 1/(96K³ε³).    (1.61)
♣ Q. 1.6.0-17 When A and B are negative, what is the value of m for which the entropy is maximal? What is the corresponding phase?
♣ Q. 1.6.0-18 By using the definition of the microcanonical temperature, show that in this phase

β = 2K ln( (1 − 2Kε) / (Kε) ).    (1.62)

♣ Q. 1.6.0-19 The transition line corresponds to A = 0 and B < 0. Infer that the transition temperature is given by the implicit equation

β = exp(β/(2K))/2 + 1.    (1.63)
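The implicit equation (1.63) can be solved numerically, for instance by bisection. In the added sketch below, the value K = 2 and the bracket [1, 2] are illustrative choices for which the residual changes sign; other values of K may need another bracket:

```python
import math

def transition_beta(K=2.0, lo=1.0, hi=2.0, tol=1e-12):
    """Solve beta = exp(beta/(2K))/2 + 1  (Eq. 1.63) by bisection.
    K = 2 and the bracket [1, 2] are illustrative choices."""
    f = lambda b: b - math.exp(b / (2 * K)) / 2 - 1
    assert f(lo) * f(hi) < 0, "bracket does not enclose a root"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) * f(lo) > 0:
            lo = mid  # root lies in the upper half of the bracket
        else:
            hi = mid  # root lies in the lower half of the bracket
    return 0.5 * (lo + hi)
```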
♣ Q. 1.6.0-20 The transition line ends at a tricritical point where the coefficient B vanishes. Show that at this point the temperature is given by

2K² exp(−β/K) − K exp(−β/(2K)) + (1/12)[ 1 + 2 exp(−β/(2K)) ] = 0.    (1.64)
♣ Q. 1.6.0-21 At lower temperature, where it is necessary to push the expansion beyond 4th order, explain why the transition line becomes first order.
1.6.1 Potts model

The Potts model is a lattice model where the spin variable σ_i on site i takes integer values from 1 to q. The interaction between nearest-neighbor sites is given by the Hamiltonian

H = −J Σ_{<i,j>} δ_{σ_i σ_j}    (1.65)

where δ_{ij} denotes the Kronecker symbol, equal to 1 when i = j and 0 when i ≠ j.
♣ Q. 1.6.1-1 For q = 2, show that this model is equivalent to an Ising model (H = −J_I Σ_{<i,j>} S_i S_j) with a coupling constant given by J = 2J_I.

To estimate the phase diagram of this model, one can use a mean-field approach. Let x_i be the fraction of spins in state i, where i = 1, 2, ..., q.
♣ Q. 1.6.1-2 Assuming that the spins are independent, give a simple expression for the entropy per site s as a function of the x_i and the Boltzmann constant.
♣ Q. 1.6.1-3 Assuming that the spins are independent, justify that the energy per site is given by

e = −(cJ/2) Σ_i x_i²    (1.66)

where c is the lattice coordination number.
♣ Q. 1.6.1-4 Infer the dimensionless free energy per site βf.

♣ Q. 1.6.1-5 A natural ansatz is x_1 = (1/q)(1 + (q − 1)m) and x_i = (1/q)(1 − m) for i = 2, ..., q. Infer the difference βf(m) − βf(0).

♣ Q. 1.6.1-6 Show that m = 0 is always a solution of the equation βf(m) − βf(0) = 0.
♣ Q. 1.6.1-7 Consider first the case q = 2. Determine the transition temperature by solving ∂βf(m)/∂m = 0.
♠ Q. 1.6.1-8 For q > 2, show that

cβJm = ln( (1 + (q − 1)m) / (1 − m) ).    (1.67)

Admitting that m_c = (q − 2)/(q − 1), determine the transition “temperature” 1/β_c. Show that βf(m_c) = βf(0). Show that the order of the transition changes for q > q_c, with q_c to be determined.
♠ Q. 1.6.1-9 Calculate the latent heat of the transition when the transition becomes first order.

One wishes to perform a simulation of the two-dimensional Potts model. It has been shown that in two dimensions the critical value of q is 4.

♣ Q. 1.6.1-10 Write a Metropolis algorithm for the simulation of the q-state Potts model.

♣ Q. 1.6.1-11 Describe briefly the convergence problems of the Metropolis algorithm for q ≤ 4 and q > 4.

♣ Q. 1.6.1-12 Give a method adapted to studying the transition in each of the two cases. Justify your answer.
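One possible sketch of an answer to Q. 1.6.1-10 (a single-spin-flip Metropolis scheme, added here as an illustration; the lattice size, temperature and random seed are arbitrary, and J = 1 is assumed):

```python
import math
import random

def metropolis_potts(L=16, q=3, beta=0.5, sweeps=100, seed=1):
    """Single-spin-flip Metropolis dynamics for the 2D q-state Potts model
    of Eq. (1.65), with J = 1 and periodic boundary conditions."""
    rng = random.Random(seed)
    spins = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def local_energy(i, j, s):
        # -J * (number of nearest neighbours of site (i, j) in the same state as s)
        return -sum(s == spins[(i + di) % L][(j + dj) % L]
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        new = rng.randrange(q)
        dE = local_energy(i, j, new) - local_energy(i, j, spins[i][j])
        # Metropolis rule: accept with probability min(1, exp(-beta * dE))
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] = new
    return spins
```

A single-spin-flip scheme of this kind suffers from critical slowing down near the transition; cluster methods, discussed later, are then better suited.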
Chapter 2
Monte Carlo method
Contents
2.1 Introduction
2.2 Uniform and weighted sampling
2.3 Markov chain for sampling an equilibrium system
2.4 Metropolis algorithm
2.5 Applications
2.5.1 Ising model
2.5.2 Simple liquids
2.6 Random number generators
2.6.1 Generating non-uniform random numbers
2.7 Exercises
2.7.1 Inverse transformation
2.7.2 Detailed balance
2.7.3 Acceptance probability
2.7.4 Random number generator
2.1 Introduction
Once a model of a physical system has been chosen, the statistical properties of the model can be determined by performing a simulation. If we are interested in the static properties of the model, we have seen in the previous chapter that the computation of the partition function consists of performing a multidimensional integral or a multivariable summation similar to

Z = Σ_i exp(−βU(i))   (2.1)
where i is an index running over all configurations available to the system¹. If one considers the simple example of a lattice gas in three dimensions with a linear size of 10, the total number of configurations is equal to 2^1000 ≃ 10^301, for which it is impossible to compute the sum, Eq. (2.1), exactly. For a continuous system, calculation of the integral starts with a discretization. Choosing 10 points for each space coordinate and with 100 particles evolving in a three-dimensional space, the number of points is equal to 10^300, which is of the same order of magnitude as the previous lattice system with a larger number of sites. It is therefore necessary to have specific methods for evaluating such multidimensional integrals. The specific method used is the Monte Carlo method with an importance sampling algorithm.
2.2 Uniform and weighted sampling
To understand the interest of a weighted sampling, we first consider a basic example, a one-dimensional integral

I = ∫_a^b dx f(x).   (2.2)

This integral can be recast as

I = (b − a)⟨f(x)⟩   (2.3)

where ⟨f(x)⟩ denotes the average of the function f on the interval [a, b]. By choosing randomly and uniformly N_r points along the interval [a, b] and by evaluating the function at all these points, one gets an estimate of the integral

I_{N_r} = ((b − a)/N_r) Σ_{i=1}^{N_r} f(x_i).   (2.4)
The convergence of this method can be estimated by calculating the variance, σ², of the sum I_{N_r}.² Therefore, one has

σ² = (1/N_r²) Σ_{i=1}^{N_r} Σ_{j=1}^{N_r} ⟨(f(x_i) − ⟨f(x_i)⟩)(f(x_j) − ⟨f(x_j)⟩)⟩.   (2.5)

The points being chosen independently, cross terms vanish, and one obtains

σ² = (1/N_r) (⟨f(x)²⟩ − ⟨f(x)⟩²).   (2.6)
¹ We will use in this chapter a roman index for denoting configurations.
² The points x_i being chosen uniformly on the interval [a, b], the central limit theorem is valid and the integral converges towards the exact value according to a Gaussian distribution.
The 1/N_r dependence gives an a priori slow convergence, but there is no simple modification for obtaining a faster convergence. However, one can modify the variance in a significant manner. It is worth noting that the function f may take significant values only on small regions of the interval [a, b], and that it is useless to calculate the function where its values are small. By using a random but non-uniform distribution with a weight w(x), the integral is given by

I = ∫_a^b dx (f(x)/w(x)) w(x).   (2.7)

If w(x) is always positive, one can define du = w(x)dx with u(a) = a and u(b) = b, and
I = ∫_a^b du f(x(u))/w(x(u)),   (2.8)

and the estimate of the integral is given by

I ≃ ((b − a)/N_r) Σ_{i=1}^{N_r} f(x(u_i))/w(x(u_i)),   (2.9)

with the weight w(x). Similarly, the variance of the estimate then becomes

σ² = (1/N_r) [⟨(f(x(u))/w(x(u)))²⟩ − ⟨f(x(u))/w(x(u))⟩²].   (2.10)
By choosing the weight distribution w proportional to the original function
f, the variance vanishes. This trick is only possible in one dimension. In higher dimensions, the change of variables in a multidimensional integral involves the absolute value of a Jacobian and one cannot find in an intuitive manner the change of variable that yields a good weight function.
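As an illustration of Eqs. (2.7)-(2.10), the following sketch estimates ∫_0^1 e^x dx with the weight w(x) = (1 + x)/1.5. This weight is an arbitrary choice made here for convenience (it roughly follows the integrand and its cumulative distribution can be inverted by hand), not an example from the text:

```python
import math
import random

def sample_w(u):
    # Draw a point with density w(x) = (1 + x)/1.5 on [0, 1] by solving
    # u = (x + x^2/2)/1.5 for x (inverse of the cumulative distribution).
    return -1.0 + math.sqrt(1.0 + 3.0 * u)

def importance_estimate(n_r):
    # Weighted estimate of I = integral of e^x over [0, 1]: average of f/w
    # over points drawn with density w, as in Eq. (2.9).
    total = 0.0
    for _ in range(n_r):
        x = sample_w(random.random())
        total += math.exp(x) / ((1.0 + x) / 1.5)
    return total / n_r

random.seed(7)
estimate = importance_estimate(100_000)   # exact value is e - 1
```

Because f/w varies much less than f itself, the variance of this estimate is smaller than with uniform sampling at the same N_r.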
2.3 Markov chain for sampling an equilibrium
system
Let us return to statistical mechanics: very often, we are interested in the computation of the thermal average of a quantity, but not in the partition function itself:

⟨A⟩ = Σ_i A_i exp(−βU_i) / Z.   (2.11)
Let us recall that

P_i = exp(−βU_i) / Z   (2.12)

defines the probability of having the configuration i (at equilibrium). The basic properties of a probability distribution are satisfied: P_i is strictly positive and Σ_i P_i = 1. If one were able to generate configurations with this weight, the thermal average of A would be given by

⟨A⟩ ≃ (1/N_r) Σ_i A_i   (2.13)

where N_r is the total number of configurations where A is evaluated. In this way, the thermal average becomes an arithmetic average.
The trick proposed by Metropolis, Rosenbluth and Teller in 1953 consists of introducing a stochastic Markovian process between successive configurations, which converges towards the equilibrium distribution P_eq.

First, we introduce some useful definitions. To follow the sequence of configurations, one defines a time t equal to the number of “visited” configurations divided by the system size. This time has no relation with the real time of the system. Let us denote by P(i, t) the probability of having the configuration i at time t.

Let us explain the meaning of the dynamics: “stochastic” means that going from one configuration to another does not obey an ordinary differential equation but is a random process determined by probabilities; “Markovian” means that the probability of having a configuration j at time t + dt (dt = 1/N, where N is the particle number of the system) only depends on the configuration i at time t, and not on previous configurations (the memory is limited to the time t); this conditional probability is denoted by W(i → j)dt. The master equation of the system is then given (in the thermodynamic limit) by

P(i, t + dt) = P(i, t) + Σ_j [W(j → i)P(j, t) − W(i → j)P(i, t)] dt   (2.14)
This equation expresses that, at time t + dt, the probability for the system to be in the configuration i is equal to the probability of being in the same state at time t, plus the probability of arriving from any configuration j, minus the probability of leaving the configuration i towards any configuration j.

At time t = 0, the system is in an initial configuration i_0: the initial probability distribution is P(i) = δ_{i_0,i}, which means that we are far from the equilibrium distribution.
At equilibrium, according to the master equation, Eq. (2.14), one obtains the set of conditions

Σ_j W(j → i)P_eq(j) = P_eq(i) Σ_j W(i → j)   (2.15)

A simple solution of these equations is given by

W(j → i)P_eq(j) = W(i → j)P_eq(i)   (2.16)
This equation, Eq. (2.16), is known as the condition of detailed balance. It expresses that, in a stationary state (or equilibrium state), the probability that the system goes from a state i to a state j is the same as that of the reverse move. Let us add that this condition is not a necessary condition: we have not proved that the solution of the set of equations (2.15) is unique, only that Eq. (2.16) is a solution with a simple physical interpretation. Because it is difficult to prove that a Monte Carlo algorithm converges towards equilibrium, several algorithms use the detailed balance condition. As we will see later, a few algorithms violating detailed balance have recently been proposed that converge asymptotically towards equilibrium (or a stationary state).

Equation (2.16) can be reexpressed as

W(i → j)/W(j → i) = P_eq(j)/P_eq(i)   (2.17)
                  = exp(−β(U(j) − U(i))).   (2.18)
This implies that W(i → j) does not depend on the partition function Z, but
only on the Boltzmann factor.
2.4 Metropolis algorithm
The choice of a Markovian process in which detailed balance is satisfied is a solution of Eqs. (2.16). We will discuss other choices later. In order to obtain solutions of Eqs. (2.16) or, in other words, to obtain the transition matrix W(i → j), note that a stochastic process is the sequence of two elementary steps:

1. From a configuration i, one selects randomly a new configuration j, with an a priori probability α(i → j).

2. This new configuration is accepted with a probability Π(i → j).

Therefore, one has

W(i → j) = α(i → j) Π(i → j).   (2.19)

In the original algorithm developed by Metropolis (and in many other Monte Carlo algorithms), one chooses α(i → j) = α(j → i); we restrict ourselves to this situation in the remainder of this chapter. Eq. (2.18) is then reexpressed as

Π(i → j)/Π(j → i) = exp(−β(U(j) − U(i)))   (2.20)

The choice introduced by Metropolis et al. is

Π(i → j) = exp(−β(U(j) − U(i)))   if U(j) > U(i)   (2.21)
         = 1                      if U(j) ≤ U(i)   (2.22)
As we will see later, this solution is efficient in phases far from transitions. Moreover, the implementation is simple and can be used as a benchmark for more sophisticated methods. In chapter 5, specific methods for studying phase transitions will be discussed.

Additional comments for implementing a Monte Carlo algorithm:

1. Computation of a thermal average only starts when the system reaches equilibrium, namely when P ≃ P_eq. Therefore, in a Monte Carlo simulation, there are generally two time scales: a first one where, starting from an initial configuration, one performs a preliminary dynamics in order to bring the system close to equilibrium, and a second one where the system evolves in the vicinity of equilibrium and where the computation of averages is performed. In the absence of a precise criterion, the duration of the first stage is not easily predictable.

2. A naive approach consists of following the evolution of the instantaneous energy of the system and of considering that equilibrium is reached when the energy has stabilized around a quasi-stationary value.

3. A more precise method estimates the relaxation time of a correlation function, and one chooses a time significantly larger than the relaxation time. For disordered systems, and at temperatures above the phase transition, this criterion is reasonable in a first approach. More sophisticated estimates will be discussed in the lecture.
We now consider how the Metropolis algorithm can be implemented for several
basic models.
2.5 Applications
2.5.1 Ising model
Some known results
Let us consider the Ising model (or equivalently a lattice gas model) defined by the Hamiltonian

H = −J Σ_{<i,j>} S_i S_j − H Σ_{i=1}^N S_i   (2.23)
where the summation < i, j > means that the interaction is restricted to distinct pairs of nearest neighbors and H denotes an external uniform field. If J > 0, the interaction is ferromagnetic; if J < 0, the interaction is antiferromagnetic.

In one dimension, this model can be solved analytically and one shows that the critical temperature is zero.
In two dimensions, Onsager (1944) solved this model in the absence of an external field and showed that there exists a para-ferromagnetic transition at a finite temperature. For the square lattice, this critical temperature is given by

T_c = 2J / ln(1 + √2)   (2.24)

with a numerical value equal to T_c ≃ 2.269185314 . . . J.

In three dimensions, the model cannot be solved analytically, but numerical simulations have been performed with different algorithms, and the values of the critical temperature obtained for various lattices and from different theoretical methods are very accurate (see Table 2.1).
A useful, but crude, estimate of the critical temperature is given by the mean-field theory

T_c = cJ   (2.25)

where c is the coordination number.
D   Lattice      T_c/J (exact)   T_c,MF/J
1                0               2
2   square       2.269185314     4
2   triangular   3.6410          6
2   honeycomb    1.5187          3
3   cubic        4.515           6
3   bcc          6.32            8
3   diamond      2.7040          4
4   hypercube    6.68            8

Table 2.1 – Critical temperature of the Ising model for different lattices in 1 to 4 dimensions.
Table 2.1 illustrates that the critical temperatures given by the mean-field theory are always an upper bound of the exact values, and that the quality of the approximation becomes better when the spatial dimension of the system and/or the coordination number is large.
Metropolis algorithm
Since simulation cells are always of finite size, the fraction of particles located near the boundary of the cell is not negligible. To avoid large boundary effects, simulations are performed by using periodic boundary conditions. In two dimensions, this means that the simulation cell must tile the plane: due to the geometric constraints, two types of cell are possible, an elementary square and a regular hexagon, with the condition that a particle which exits through a given edge returns into the simulation cell through the opposite edge. Note that this condition can bias the simulation when the symmetry of the simulation cell is incompatible with the symmetry of the low-temperature phase, e.g. a crystal phase, a modulated phase, etc.
For starting a Monte Carlo simulation, one needs to define an initial configuration, which can be:

1. The ground state with all spins aligned at +1 or −1,

2. An infinite-temperature configuration: on each site, one chooses randomly and uniformly a number between 0 and 1. If this number is between 0 and 0.5, the site spin is taken equal to +1; if it is between 0.5 and 1, the site spin is taken equal to −1.
As mentioned previously, Monte Carlo dynamics involves two elementary
steps: first, one selects randomly a trial configuration and secondly, one con-
siders acceptance or rejection by using the Metropolis condition.
For the trial configuration to have a chance of being accepted, it needs to be “close” to the previous configuration; indeed, the acceptance probability depends exponentially on the energy difference between the two configurations. If this difference is large and positive, the probability of acceptance becomes very small and the system can stay “trapped” for a long time in a local minimum. For a Metropolis algorithm, the single spin flip is generally used, which leads to a dynamics where energy changes are reasonable. Because the dynamics must remain stochastic, a spin must be chosen randomly for each trial configuration, and not according to a regular sequence.
In summary, an iteration of the Metropolis dynamics for an Ising model con-
sists of the following:
1. A site is selected by choosing at random an integer i between 1 and the
total number of lattice sites.
2. One computes the energy difference between the trial configuration (in
which the spin i is flipped) and the old configuration. For short-range in-
teractions, this energy difference involves a few terms and is ”local” because
only the selected site and its nearest neighbors are considered.
3. If the trial configuration has a lower energy, the trial configuration is ac-
cepted. Otherwise, a uniform random number is chosen between 0 and 1
and if this number is less than exp(β(U(i) −U(j))), the trial configuration
is accepted. If not, the system stays in the old configuration.
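The three steps above can be sketched as follows for the two-dimensional Ising model with H = 0 (a minimal illustration; the lattice size, temperature and number of sweeps are arbitrary choices, not taken from the text):

```python
import math
import random

def metropolis_sweep(spins, L, beta, J=1.0):
    # One Monte Carlo sweep (L*L trial flips) of the 2D Ising model with
    # periodic boundary conditions and H = 0, following steps 1-3 above.
    N = L * L
    for _ in range(N):
        k = random.randrange(N)                  # step 1: random site
        i, j = divmod(k, L)
        s = spins[k]
        # step 2: local energy difference for flipping spin k
        nb = (spins[((i + 1) % L) * L + j] + spins[((i - 1) % L) * L + j]
              + spins[i * L + (j + 1) % L] + spins[i * L + (j - 1) % L])
        delta_e = 2.0 * J * s * nb
        # step 3: accept if the energy decreases, otherwise with
        # probability exp(-beta * delta_e)
        if delta_e <= 0.0 or random.random() < math.exp(-beta * delta_e):
            spins[k] = -s

random.seed(1)
L = 16
spins = [1] * (L * L)                 # ground-state initial configuration
for _ in range(200):
    metropolis_sweep(spins, L, beta=1.0)   # T = 1, well below T_c
m = abs(sum(spins)) / (L * L)         # magnetization per spin stays near 1
```

A magnetization histogram, as in Eq. (2.26) below, could then be filled after each sweep.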
Computation of thermodynamic quantities (mean energy, specific heat, magnetization, susceptibility, . . . ) can be easily performed. Indeed, if a new configuration is accepted, the update of these quantities does not require additional calculation: E_n = E_o + ∆E, where E_n and E_o denote the energies of the new and the old configurations; similarly, the magnetization is given by M_n = M_o + 2 sgn(S_i), where sgn(S_i) is the sign of spin S_i in the new configuration. In the case where the old configuration is kept, instantaneous quantities are unchanged, but time must be incremented.
As we discussed in the previous chapter, thermodynamic quantities are expressed as moments of the energy distribution and/or the magnetization distribution. The thermal average can be performed at the end of the simulation when one builds a histogram along the simulation, and these histograms can eventually be stored for further analysis.

Note that, for the Ising model, the energy spectrum is discrete, and computation of thermal quantities can be performed after the simulation run. For continuous systems, it is not equivalent to perform a thermal average “on the fly” and by using a histogram. However, by choosing a sufficiently small bin size, the average performed by using the histogram converges to the “on the fly” value.
Practically, the implementation of a histogram method consists of defining an array with a dimension corresponding to the number of available values of the quantity; for instance, the magnetization of the Ising model is stored in an array histom[2N+1], where N is the total number of spins. (The size of the array could be divided by two, because only even integers are available for the magnetization, but a state relabeling is then necessary.) Starting with an array whose elements are set to zero at the initial time, the magnetization update is performed at each simulation time by the formal code line

histom[magne] = histom[magne] + 1   (2.26)

where magne is the variable storing the instantaneous magnetization.
2.5.2 Simple liquids
Brief overview
Let us recall that a simple liquid is a system of point particles interacting through a pairwise potential. When the phase diagram is composed of different regions (liquid, gas and solid phases), the interaction between particles must contain a repulsive short-range part and an attractive long-range part. The physical interpretation of these contributions is the following: at short distance, quantum mechanics prevents atom overlap, which explains the repulsive part of the potential. At long distance, even for neutral atoms, the London forces (whose origin is also quantum mechanical) are attractive. A potential satisfying these two criteria is the Lennard-Jones potential

u_ij(r) = 4ε[(σ/r)^12 − (σ/r)^6]   (2.27)

where ε defines a microscopic energy scale and σ denotes the atom diameter.
In a simulation, all quantities are expressed in dimensionless units: the temperature is then T* = k_B T/ε, where k_B is the Boltzmann constant, the distance is r* = r/σ and the energy is u* = u/ε. The three-dimensional Lennard-Jones model has a critical point at T*_c = 1.3 and ρ*_c = 0.3, and a triple point at T*_t = 0.6 and ρ*_t = 0.8.
Metropolis algorithm
For an off-lattice system, a trial configuration is generated by moving a randomly chosen particle at random. In a three-dimensional space, a trial move is given by

x'_i → x_i + ∆(rand − 0.5)   (2.28)
y'_i → y_i + ∆(rand − 0.5)   (2.29)
z'_i → z_i + ∆(rand − 0.5)   (2.30)

with the condition that (x'_i − x_i)² + (y'_i − y_i)² + (z'_i − z_i)² ≤ ∆²/4 (this condition corresponds to considering isotropic moves), where rand denotes a uniform random number between 0 and 1. ∆ is the maximum displacement per step. The value of this quantity must be set at the beginning of the simulation, and it is generally chosen so as to keep a reasonable ratio of accepted configurations over the total number of trial configurations.

Note that x'_i → x_i + ∆ rand would be incorrect, because only positive moves would be allowed and detailed balance would then not be satisfied (see Eq. (2.16)).
Calculation of the energy of a new configuration is more involved than for lattice models. Indeed, because the interaction involves all particles of the system, the energy update contains N terms, where N is the total number of particles of the simulation cell. Moreover, periodic boundary conditions cover space by replicating the original simulation cell. Therefore, with periodic boundary conditions, the interaction involves not only particles inside the simulation cell, but also particles in all the replicas:
U_tot = (1/2) Σ_{i,j,n} u(|r_ij + nL|)   (2.31)

where L is the length of the simulation cell and n is a vector with integer components. For interaction potentials decreasing sufficiently rapidly (practically, for potentials such that ∫ dV u(r) is finite, where dV is the infinitesimal volume), the summation is restricted to the nearest cells: this is the so-called minimum image convention. For calculating the energy between the particle i and the particle j, a test is performed on each spatial coordinate: if |x_i − x_j| (|y_i − y_j| and |z_i − z_j|, respectively) is greater than L/2, one calculates x_i − x_j mod L (y_i − y_j mod L and z_i − z_j mod L, respectively), which amounts to considering the particle in the nearest image cell of the simulation box.
This first step is time consuming, because the energy update requires the calculation of N²/2 terms. When the potential decreases rapidly (for instance, the Lennard-Jones potential), at long distance the density ρ(r) becomes uniform, and one can estimate the long-range contribution to the mean energy by the following formula

u_i = (1/2) ∫_{r_c}^∞ 4πr² dr u(r)ρ(r)   (2.32)
    ≃ (ρ/2) ∫_{r_c}^∞ 4πr² dr u(r)   (2.33)

This means that the potential is replaced with a truncated potential

u_trunc(r) = u(r) if r ≤ r_c, and 0 if r > r_c.   (2.34)

For a finite-range potential, the energy update only involves a summation over a finite number of particles at each time, which becomes independent of the system size. For a Monte Carlo simulation, the computation time then becomes proportional to the number of particles N (because we keep the same number of moves per particle for all system sizes). In the absence of truncation of the potential, the energy update would be proportional to N². Note that the truncated potential introduces a discontinuity of the potential. This correction can be added to the simulation results, which gives an “impulse” contribution to the pressure. For Molecular Dynamics, we will see that this procedure is not sufficient.
In a similar manner to the Ising model (or lattice models), it is useful to store the successive instantaneous energies in a histogram. However, the energy spectrum being continuous, it is necessary to introduce an adapted bin size ∆E. If one denotes by N_h the dimension of the histogram array, the energies are stored from E_min to E_min + ∆E(N_h − 1). If E(t) is the energy of the system at time t, the index of the array element is given by the relation

i = Int((E − E_min)/∆E)   (2.35)

In this case, the numerical values of moments calculated from the histogram are not exact, unlike the Ising model, and it is necessary to choose the bin size in order to minimize this bias.
2.6 Random number generators
Monte Carlo simulation makes extensive use of random numbers, and it is useful to examine the quality of their generators. The basic generator is a procedure that gives sequences of uniform pseudo-random numbers. The quality of a generator depends on a large number of criteria: it is obviously necessary to reproduce all moments of a uniform distribution, but, since the numbers must also be independent, the correlation between successive trials must be as weak as possible.

As the numbers are coded on a finite number of bytes, a generator is characterized by a period, which we expect to be very large, and more precisely much larger than the total number of random numbers required in a simulation run. In the early days of computational physics, random number generators used a one-byte coding leading to very short periods, which biased the first simulations. We are now beyond this time, since the coding is performed with 8- or 16-byte words. As we will see below, there exist nowadays several random number generators of high quality.

A generator relies on an initial number (or several numbers). If the seed(s) of the random number generator is (are) not set, the procedure has a seed by default. However, when one restarts a run without setting the seed, the same sequence of random numbers is generated; while this feature is useful for debugging, production runs require different seeds every time one performs a new simulation. If not, dangerous biases can result from the absence of a correct initialization of the seeds.
Two kinds of algorithms are at the origin of random number generators. The first one is based on a linear congruence relation

x_{n+1} = (a x_n + c) mod m   (2.36)

This relation generates a sequence of pseudo-random integer numbers between 0 and m − 1, and m corresponds to the generator period. Among generators using a linear congruence relation, one finds the functions randu (IBM), ranf (Cray), drand48 on Unix computers, ran (Numerical Recipes, Knuth), etc. The periods of these generators go from 2^29 (randu, IBM) to 2^48 (ranf).
Let us recall that 2^30 ≃ 10^9; if one considers an Ising spin lattice in three dimensions with 100³ sites, only 10³ spin flips per site can be done within the smallest period. For a lattice with 10³ sites, the number of allowed spin flips is multiplied by a thousand, and such a generator can be used for a preliminary study of the phase diagram (outside of the critical region, where a large number of configurations is required).
The generator rng_cmrg (L'Ecuyer)³ provides a sequence of numbers from the relation

z_n = (x_n − y_n) mod m_1,   (2.37)

where x_n and y_n are given by the following relations

x_n = (a_1 x_{n−1} + a_2 x_{n−2} + a_3 x_{n−3}) mod m_1   (2.38)
y_n = (b_1 y_{n−1} + b_2 y_{n−2} + b_3 y_{n−3}) mod m_2.   (2.39)

³ P. L'Ecuyer has contributed several generators based either on linear congruence relations or on shift registers.
The generator period is 2^305 ≃ 10^61.

The second class of random number generators is based on register shifts through the logical operation “exclusive or”. An example is provided by the Kirkpatrick and Stoll generator

x_n = x_{n−103} ⊕ x_{n−250}   (2.40)

Its period is large, 2^250, but it needs to store 250 words. The generator with the largest period is likely that of Matsumoto and Nishimura, known by the name MT19937 (Mersenne Twister generator). Its period is 10^6000! It uses 624 words and it is equidistributed in 623 dimensions!
2.6.1 Generating non uniform random numbers
Introduction
If, as we have seen in the above section, it is possible to obtain a “good” generator of uniform random numbers, there exist some physical situations where it is necessary to generate non-uniform random numbers, namely random numbers defined by a probability distribution f(x) on an interval I such that ∫_I dx f(x) = 1 (condition of probability normalization). We introduce here several methods which generate random numbers with a specific probability distribution.
Inverse transformation
If f(x) denotes the probability distribution on the interval I, one defines the cumulative distribution F as

F(x) = ∫^x f(t) dt   (2.41)

If there exists an inverse function F^{−1}, then x = F^{−1}(u), with u a uniform random number on the interval [0, 1], yields random numbers distributed according to f.

For instance, for an exponential probability distribution of rate λ, usually denoted ℰ(λ), one has

F(x) = ∫_0^x dt λe^{−λt} = 1 − e^{−λx}   (2.42)
Therefore, inverting the relation u = F(x), one obtains

x = −ln(1 − u)/λ   (2.43)

Since u is a uniform random variable on the unit interval, 1 − u is also a uniform random variable on the same interval. Therefore, Eq. (2.43) can be reexpressed as

x = −ln(u)/λ   (2.44)
Box-Muller method
The Gaussian distribution is frequently used in simulations. Unfortunately, its cumulative distribution is an error function: for a Gaussian distribution of unit variance centered on zero, denoted by N(0, 1) (N corresponds to a normal distribution), one has

F(x) = (1/√(2π)) ∫_{−∞}^x dt e^{−t²/2}   (2.45)
This function is not invertible and one cannot apply the previous method. However, if one considers a couple of independent random variables (x, y), the joint probability distribution is given by f(x, y) = exp(−(x² + y²)/2)/(2π). Using the change of variables (x, y) → (r, θ), namely polar coordinates, the joint probability becomes

f(x, y) dx dy = exp(−r²/2) (dr²/2) (dθ/2π).   (2.46)

The variable r² is a random variable with the exponential probability distribution ℰ(1/2), and θ is a random variable with a uniform probability distribution on the interval [0, 2π].
If u and v are two uniform random variables on the interval [0, 1], or U_{[0,1]}, then

x = √(−2 ln(u)) cos(2πv)
y = √(−2 ln(u)) sin(2πv)   (2.47)

are independent random variables with a Gaussian distribution.
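Eq. (2.47) gives the following sketch (sample size and seed are arbitrary choices):

```python
import math
import random

def box_muller():
    # Eq. (2.47): two independent N(0, 1) numbers from two uniforms.
    u = 1.0 - random.random()          # in (0, 1], safe for the logarithm
    v = random.random()
    r = math.sqrt(-2.0 * math.log(u))
    return r * math.cos(2.0 * math.pi * v), r * math.sin(2.0 * math.pi * v)

random.seed(5)
xs = [box_muller()[0] for _ in range(50_000)]
mean = sum(xs) / len(xs)                            # should be near 0
var = sum(x * x for x in xs) / len(xs) - mean ** 2  # should be near 1
```
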
Acceptance-rejection method
The inverse method or the Box-Muller method cannot be used for a general probability distribution. The acceptance-rejection method is a general method which gives random numbers with various probability distributions.

This method uses the following property:

f(x) = ∫_0^{f(x)} dt   (2.48)

Therefore, by considering a couple of random variables with a uniform distribution, one defines the joint probability g(x, u) with the condition 0 < u < f(x);
Figure 2.1 – Computation of the probability distribution B(4, 8) with the acceptance-rejection method with 5000 random points (1624 accepted, 3376 rejected; numerical ratio 0.324, exact ratio 1/3).
the function f(x) is the marginal distribution (over the x variable) of the joint distribution,

f(x) = ∫ du g(x, u)   (2.49)
By using a Monte Carlo method with uniform sampling, one obtains independent random variables with a distribution whose cumulative function is not necessarily invertible. The price to pay is illustrated in Fig. 2.1, where the probability distribution x^α(1 − x)^β (or, more formally, a distribution B(α, β)) is sampled. For a good efficiency of the method, it is necessary to choose for the variable u a uniform distribution on an interval U_{[0,m]}, with m greater than or equal to the maximum of f(x). Therefore, the maximum of f(x) must be determined on the definition interval before selecting the range of the random variables. Choosing the maximum value for u equal to the maximum of the function optimizes the method.
More precisely, the efficiency of the method is asymptotically given by a ratio of areas: the area under the curve (bounded by the curve, the x-axis and the two vertical axes) over that of the rectangle defined by the horizontal and vertical axes (in Fig. 2.1, the area of the rectangle is equal to 3).
For the probability distribution B(3, 7) and by choosing a maximum value for the variable u equal to 3, one obtains a ratio of 1/3 (which corresponds to an infinitely large number of trials) between accepted random numbers and total trial numbers (5000 in Fig. 2.1, with 1624 acceptances and 3376 rejections).

The limitations of this method are easy to identify: only probability distributions with compact support can be well sampled. If not, a truncation of the distribution must be done, and when the distribution decreases slowly, the acceptance ratio goes to a small value, which deteriorates the efficiency of the method.
In order to improve this method, the rectangle can be replaced by a region whose upper boundary is defined by a curve. If g is a simple function to sample and if g > f for all values of the interval, one samples random numbers with the distribution g, and the acceptance-rejection method is applied to the ratio f(x)/g(x). The efficiency of the method then becomes the ratio of the areas defined by the two functions, and the number of rejections decreases.
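A sketch of the method for the distribution of Fig. 2.1, with the density x³(1 − x)⁷ normalized here by hand (factor 1320) and the bound m = 3 quoted in the text:

```python
import random

def f_beta(x):
    # Density of Fig. 2.1: x^3 (1 - x)^7, normalized (factor 1320).
    return 1320.0 * x ** 3 * (1.0 - x) ** 7

def rejection_sample(fmax=3.0):
    # Draw (x, u) uniformly in [0, 1] x [0, fmax] and keep x when the
    # point falls under the curve; about one trial in three is kept.
    while True:
        x = random.random()
        u = fmax * random.random()
        if u < f_beta(x):
            return x

random.seed(11)
samples = [rejection_sample() for _ in range(20_000)]
mean = sum(samples) / len(samples)   # the exact mean of this density is 1/3
```
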
Method of the Ratio of uniform numbers
This method generates random variables for various probability distributions by
using the ratio of two uniform random variables. Let us denote z = a_1 + a_2 y/x,
where x and y are uniform random numbers.
If one considers an integrable function r normalized to one, if x is uniform on
the interval [0, x_+] and y on the interval [y_−, y_+], then, introducing w = x^2 and
z = a_1 + a_2 y/x, if w ≤ r(z), z has the distribution r(z), for −∞ < z < ∞.
In order to show this property, let us consider the joint distribution f_{X,Y}(x, y),
which is uniform in the domain T contained in the rectangle [0, x_+] × [y_−, y_+].
The joint distribution f_{W,Z}(w, z) is given by

f_{W,Z}(w, z) = J f_{X,Y}(√w, (z − a_1)√w/a_2)    (2.50)
where J is the Jacobian of the change of variables. Therefore, one has

J = | ∂x/∂w   ∂x/∂z |
    | ∂y/∂w   ∂y/∂z |    (2.51)

  = | 1/(2√w)               0      |
    | (z − a_1)/(2a_2√w)   √w/a_2  |    (2.52)

  = 1/(2a_2)    (2.53)

By calculating

f_Z(z) = ∫ dw f_{W,Z}(w, z)    (2.54)

one infers f_Z(z) = r(z).
To determine the equations of the boundaries of T, one needs to solve the
following equations

x(z) = √(r(z))    (2.55)
y(z) = (z − a_1) x(z)/a_2    (2.56)
Let us consider the Gaussian distribution. We restrict the interval to positive
values. The bounds of the domain are given by the equations

x_+ = sup(√(r(z)))    (2.57)
y_− = inf((z − a_1)√(r(z))/a_2)    (2.58)
y_+ = sup((z − a_1)√(r(z))/a_2)    (2.59)
One chooses a_1 = 0 and a_2 = 1. Since r(z) = e^{−z²/2}, by using Eqs. (2.57)-(2.59),
one infers that x_+ = 1, y_− = 0 and y_+ = √(2/e). One can show that the ratio
of the area of the domain T to that of the rectangle of available values of x
and y is equal to √(πe)/4 ≃ 0.7306. The test to perform is x² ≤ e^{−(y/x)²/2},
which can be reexpressed as y² ≤ −4x² ln(x). Because this computation involves
logarithms, which are time-consuming to evaluate, the interest of the method
might seem limited. But it is possible to improve the performance by avoiding
the calculation of transcendental functions as follows: on the interval (0, 1],
the logarithm is bounded by the functions

(x − 1)/x ≤ ln(x) ≤ x − 1    (2.60)
To limit the calculation of logarithms, one performs the following pretests:
• y² ≤ 4x²(1 − x). If this test is true, the number is accepted (this region lies
entirely inside the acceptance domain of the logarithm test); otherwise the
second pretest is performed.
• y² ≤ 4x(1 − x). If this test is false, the number is rejected; otherwise the
full logarithm test is performed.
With this procedure, one can show that computing time is saved. A simple
explanation comes from the fact that a significant fraction of the numbers is
settled by the pretests, and only few numbers require the logarithm test. Although
some numbers need all three tests before acceptance, this is compensated by the
fact that the pretests decide the largest fraction of random numbers.
Beyond this efficiency, the ROU method allows the sampling of probability
distributions whose support is not compact (see the Gaussian distribution above).
This represents a significant advantage over the acceptance-rejection method,
where the distribution needs to be truncated.
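The ROU sampling of the Gaussian, with the two pretests derived from Eq. (2.60), can be sketched as follows (an illustrative Python sketch; the final random sign, our addition, extends the positive-z sampler to the full Gaussian):

```python
import math
import random

Y_MAX = math.sqrt(2.0 / math.e)  # y_+ for the Gaussian, with a_1 = 0, a_2 = 1

def gaussian_rou():
    """Ratio-of-uniforms Gaussian sampler with logarithm pretests."""
    while True:
        x = random.random()
        if x == 0.0:
            continue
        y = Y_MAX * random.random()
        if y * y <= 4.0 * x * x * (1.0 - x):     # pretest 1: sure acceptance
            break
        if y * y > 4.0 * x * (1.0 - x):          # pretest 2: sure rejection
            continue
        if y * y <= -4.0 * x * x * math.log(x):  # full logarithm test
            break
    z = y / x
    return z if random.random() < 0.5 else -z
```

Most draws are settled by the two pretests, so `math.log` is evaluated only for the small remaining fraction.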
It is noticeable that nowadays many software libraries implement a large set of
probability distributions: for instance, the GNU Scientific Library provides the
Gaussian, Gamma and Cauchy distributions, among others. This feature is not
restricted to high-level compiled languages; scripting languages like Python and
Perl also implement these functions.
2.7 Exercises
2.7.1 Inverse transformation
Random numbers provided by software libraries are basically uniform random
numbers on the interval [0, 1]. For Monte Carlo simulations, it is necessary to
sample random numbers with non-uniform distributions. There exist several
methods for generating these non-uniform random numbers.
♣ Q. 2.7.1-1 Briefly describe two methods that generate random numbers with a
non-uniform distribution.
In order to simulate a uniform rotation of a vector in three dimensions, one can
use spherical coordinates with the usual angles θ and φ, which are related to
the Cartesian coordinates by (x = cos(φ) sin(θ), y = sin(φ) sin(θ), z = cos(θ)).
The probability distribution is then given by f(θ, φ) = sin(θ)/(4π), with θ
between 0 and π and φ between 0 and 2π.
♣ Q. 2.7.1-2 Using the inverse transformation method, show that one can find
two uniform random variables u and v, with θ = h(u) and φ = g(v), where h and
g are two functions to be determined.
For rotating a solid in three dimensions, unit quaternions are often used.
Quaternions are a four-dimensional generalization of complex numbers and are
expressed as

Q = q_0 1 + q_1 i + q_2 j + q_3 k,    (2.61)

where 1, i, j, k are the elementary quaternions. q_0 is called the real part, and
(q_1, q_2, q_3) are the imaginary components, which form a vector.
Quaternion multiplication is not commutative (like the composition of rotations
in three dimensions) and is given by the following rules: 1.a = a for any
elementary quaternion a, i.i = j.j = k.k = −1, i.j = −j.i = k, j.k = −k.j = i
and k.i = −i.k = j.
One defines the conjugate of Q by

Q* = q_0 1 − q_1 i − q_2 j − q_3 k.    (2.62)

♣ Q. 2.7.1-3 Calculate the square modulus Q.Q* of a quaternion.
♣ Q. 2.7.1-4 Justify that unit quaternions can be interpreted as points belonging
to a hypersphere of a 4-dimensional space.
2.7.2 Detailed balance
To equilibrate a system, the dynamics can be given by a Monte Carlo simulation.
The goal of this exercise is to prove some properties of the algorithms driving
the system to equilibrium.
In the first three questions, one assumes that the algorithm satisfies detailed
balance.
♣ Q. 2.7.2-1 Let us denote by α and β two states of the phase space; write the
detailed balance condition satisfied by this algorithm. P_{α→β} is the transition
probability of the Markovian dynamics, representing the transition from the state
α towards the state β, and ρ_α is the equilibrium density.
♣ Q. 2.7.2-2 Give two Monte Carlo algorithms which satisfy detailed balance.
♣ Q. 2.7.2-3 Consider 3 states, denoted α, β and γ; show that there exists a
simple relation between P_{α→β} P_{β→γ} P_{γ→α} and P_{α→γ} P_{γ→β} P_{β→α} in which the
equilibrium densities are absent.
We now seek to prove the converse of the above question: given an algorithm
satisfying the relation of question 3, one wants to show that it satisfies detailed
balance.
♣ Q. 2.7.2-4 By assuming that there is at least one state α such that P_{α→δ} ≠ 0
for each state δ, and that the equilibrium probability associated with α is ρ_α,
show that one can infer the detailed balance equation between two states β and δ,
by setting the equilibrium density ρ_δ equal to

ρ_δ = [ Σ_α P_{δ→α}/(P_{α→δ} ρ_α) ]^(−1)    (2.63)

♣ Q. 2.7.2-5 Calculate the right-hand side of Eq. (2.63) assuming that detailed
balance is satisfied by the Monte Carlo algorithm.
2.7.3 Acceptance probability
Consider a particle of mass m moving in a one-dimensional potential v(x). In a
Monte Carlo simulation with a Metropolis algorithm, at each step of the simulation,
a uniform random move is proposed for the particle, with an amplitude of
displacement between −δ and δ.
♣ Q. 2.7.3-1 Write the master equation of the Monte Carlo algorithm for the
probability P(x, t) of finding the particle at x at time t, as a function of

W(x → x + h) = min(1, exp(−β[v(x + h) − v(x)]))

and of W(x + h → x). Verify that the system goes to equilibrium.
♣ Q. 2.7.3-2 The acceptance rate is defined as the ratio of new configurations
over the total number of configurations. Justify that this ratio is given by the
expression

P_acc(δ) = (1/(2δR)) ∫_{−∞}^{+∞} dx exp(−βv(x)) ∫_{−δ}^{+δ} dh W(x → x + h)    (2.64)

where R = ∫_{−∞}^{+∞} dx exp(−βv(x)).
♣ Q. 2.7.3-3 Show that the above equation can be reexpressed as

d(δP_acc(δ))/dδ = (1/(2R)) ∫_{−∞}^{+∞} dx exp(−βv(x)) [W(x → x + δ) + W(x → x − δ)]    (2.65)
Let us consider the following potential:

v(x) = { 0     |x| ≤ a,
       { +∞   |x| > a.    (2.66)

One restricts oneself to the case where δ < a. Show that

P_acc(δ) = 1 − Aδ    (2.67)

where A is a constant to be determined.
♣ Q. 2.7.3-4 The potential is now:

v(x) = { −ε    |x| ≤ b,
       { 0     b < |x| ≤ a,
       { +∞    |x| > a.    (2.68)

Let us denote α = exp(βε).
♣ Q. 2.7.3-5 Assuming that a > 2b, show that

P_acc(δ) = { 1 − Bδ    δ ≤ b,
           { C − Dδ    b ≤ δ < 2b,    (2.69)

where B, C and D are quantities to be expressed as functions of a, b and α.
2.7.4 Random number generator
Random number generators available on computers are uniform on the interval
]0, 1]. It is necessary for Monte Carlo simulations to have random numbers with a
non-uniform distribution. One seeks to obtain random numbers between −1 and
+1 such that the probability distribution f(x) is given by the equations
f(x) = 1 + x for −1 < x < 0
f(x) = 1 −x for 0 < x < 1 (2.70)
♣ Q. 2.7.4-1 Determine the cumulative probability distribution F(x) associated
with f(x). You should consider two cases x < 0 and x > 0.
♣ Q. 2.7.4-2 Denoting η as a uniform random variable between −1 and 1, de-
termine the relation between x and η. Consider the two cases again.
We now have two generators of independent uniform random numbers between
−1/2 and +1/2. Let us denote by η_1 and η_2 two numbers given by these generators.
Only the pairs (η_1, η_2) such that η_1² + η_2² ≤ 1/4 are selected.
♣ Q. 2.7.4-3 Considering the joint probability distribution f(η_1, η_2), show that
the sampling of the disc of radius 1/2 is uniform.
♣ Q. 2.7.4-4 Calculate the acceptance probability of pairs of random numbers
for generating this distribution.
♣ Q. 2.7.4-5 One considers the ratio z = η_1/η_2. Determine the probability
distribution f(z). Hint: use polar coordinates.
♣ Q. 2.7.4-6 From the preceding distribution f(z), one calculates the numbers
u = α + βz; determine the probability distribution g(u) as a function of the
distribution f.
Chapter 3
Molecular Dynamics
Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Equations of motion . . . . . . . . . . . . . . . . . . . . 45
3.3 Discretization. Verlet algorithm . . . . . . . . . . . . 45
3.4 Symplectic algorithms . . . . . . . . . . . . . . . . . . . 47
3.4.1 Liouville formalism . . . . . . . . . . . . . . . . . . . . 47
3.4.2 Discretization of the Liouville equation . . . . . . . . . 50
3.5 Hard sphere model . . . . . . . . . . . . . . . . . . . . 51
3.6 Molecular Dynamics in other ensembles . . . . . . . . 53
3.6.1 Andersen algorithm . . . . . . . . . . . . . . . . . . . 54
3.6.2 Nosé-Hoover algorithm . . . . . . . . . . . . . . . . . . 54
3.7 Brownian dynamics . . . . . . . . . . . . . . . . . . . . 57
3.7.1 Different timescales . . . . . . . . . . . . . . . . . . . 57
3.7.2 Smoluchowski equation . . . . . . . . . . . . . . . . . 57
3.7.3 Langevin equation. Discretization . . . . . . . . . . . 57
3.7.4 Consequences . . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9.1 Multi timescale algorithm . . . . . . . . . . . . . . . . 59
3.1 Introduction
Monte Carlo simulation has an intrinsic limitation: its dynamics does not cor-
respond to the “real” dynamics. For continuous systems defined from a classical
Hamiltonian, it is possible to solve the differential equations of motion of all
particles simultaneously. This method makes it possible to obtain precisely the
dynamical properties (temporal correlations) of equilibrium systems, quantities
that are accessible to experiments, which allows one to test, for instance, the
quality of the model.
The efficient use of a new tool requires knowledge of its capabilities: let us first
analyze the largest physical time, as well as the largest particle numbers, that
can be considered for a given system through Molecular Dynamics. For a three-
dimensional system, one can solve the equations of motion for systems of up to
several hundred thousand particles. Note that for a simulation with a moderate
number of particles, namely 10^4, the average number of particles along an edge
of the simulation box is (10^4)^{1/3} ≃ 21. This means that for a thermodynamic
study one needs to use periodic boundary conditions, as in Monte Carlo
simulations.
For atomic systems far from the critical region, the correlation length is smaller
than this dimensionless length of 21, but Molecular Dynamics is not the best
simulation method for a study of phase transitions. The situation becomes worse
if one considers more sophisticated molecular structures, in particular biological
systems.
By using a Lennard-Jones potential, it is possible to find a typical time scale
from an analysis of the microscopic parameters of the system (m the mass of the
particle, σ its diameter, and ε the energy scale of the interaction potential):

τ = σ √(m/ε)    (3.1)

This time corresponds to the duration for an atom to move over a distance equal
to its linear size with a velocity equal to the mean velocity in the liquid. For
instance, for argon one has the numerical values σ = 3 Å, m = 6.63 × 10⁻²⁶ kg
and ε = 1.64 × 10⁻²⁰ J, which gives τ = 2.8 × 10⁻¹⁴ s. For solving the equations of
motion, the integration step must be much smaller than the typical time scale τ,
typically ∆t = 10⁻¹⁵ s or even smaller. The total number of steps performed in a
run is typically of order 10⁵ to 10⁷; therefore, the duration of the simulation for
an atomic system is of order 10⁻⁸ s.
For many atomic systems, relaxation times are much smaller than 10⁻⁸ s, and
Molecular Dynamics is a very good tool for investigating the dynamic and
thermodynamic properties of these systems. For glassformers, in particular
supercooled liquids, the relaxation times observed on approaching the glass
transition increase by several orders of magnitude, and Molecular Dynamics can
then no longer equilibrate the system. For conformational changes of proteins,
for instance at the contact of a solid surface, typical relaxation times are of
order of a millisecond. In these situations, it is necessary to coarse-grain some
of the microscopic degrees of freedom, allowing the typical time scale of the
simulation to be increased.
3.2 Equations of motion
We consider below the typical example of a Lennard-Jones liquid. The equations
of motion of particle i are given by

m (d²r_i/dt²) = −Σ_{j≠i} ∇_{r_i} u(r_{ij}).    (3.2)
For simulating an “infinite” system, periodic boundary conditions are used.
The calculation of the force between two particles i and j can be performed by
using the minimum image convention (provided the potential decreases sufficiently
fast to 0 at large distance). This means that for a given particle i, one needs to
find whether j or one of its images is the nearest neighbor of i.
As in a Monte Carlo simulation, the force calculation on particle i involves
the computation of the (N − 1) elementary forces between each particle and
particle i. By using a truncated potential, one restricts the summation to the
particles within a sphere whose radius corresponds to the potential cutoff.
In order that the forces remain finite whatever the particle distance, the
truncated Lennard-Jones potential used in Molecular Dynamics is the following

u_trunc(r) = { u(r) − u(r_c)    r < r_c
             { 0                r ≥ r_c.    (3.3)

One has to account, in the thermodynamic quantities, for the bias introduced in
the potential compared to the original Lennard-Jones potential.
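A minimal sketch of the pair force for the truncated potential of Eq. (3.3), combined with the minimum image convention in a cubic box (illustrative Python in reduced units ε = σ = 1, our assumption; production codes use neighbour or cell lists):

```python
def lj_force_pair(ri, rj, box, rc, eps=1.0, sigma=1.0):
    """Force on particle i due to j, and the pair energy u_trunc(r),
    using the minimum image convention in a cubic box of side `box`."""
    # minimum image: replace j by its nearest periodic image
    d = [a - b for a, b in zip(ri, rj)]
    d = [dk - box * round(dk / box) for dk in d]
    r2 = sum(dk * dk for dk in d)
    if r2 >= rc * rc:                    # beyond the cutoff: no interaction
        return (0.0, 0.0, 0.0), 0.0
    s6 = (sigma * sigma / r2) ** 3       # (sigma/r)^6
    s12 = s6 * s6                        # (sigma/r)^12
    u_rc = 4.0 * eps * ((sigma / rc) ** 12 - (sigma / rc) ** 6)
    u = 4.0 * eps * (s12 - s6) - u_rc    # u(r) - u(r_c)
    fscal = 24.0 * eps * (2.0 * s12 - s6) / r2   # (-du/dr)/r
    return tuple(fscal * dk for dk in d), u
```

The energy shift −u(r_c) removes the discontinuity at r_c; the force itself is untouched by the shift.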
3.3 Discretization. Verlet algorithm
For obtaining a numerical solution of the equations of motion, it is necessary
to discretize them. Different choices are a priori possible but, as we will see below,
it is crucial that the total energy, which is constant for an isolated Hamiltonian
system, remains conserved along the simulation (let us recall that the ensemble is
microcanonical). Verlet's algorithm is one of the first methods and remains one
of the most used nowadays.
For the sake of simplicity, let us consider a Hamiltonian system of N identical
particles. r is a vector with 3N components: r = (r_1, r_2, ..., r_N), where r_i
denotes the position of particle i. The system formally evolves as

m (d²r/dt²) = f(r(t)).    (3.4)
A time series expansion gives

r(t + ∆t) = r(t) + v(t)∆t + (f(r(t))/(2m)) (∆t)² + (1/3!) (d³r/dt³) (∆t)³ + O((∆t)⁴)    (3.5)
and similarly

r(t − ∆t) = r(t) − v(t)∆t + (f(r(t))/(2m)) (∆t)² − (1/3!) (d³r/dt³) (∆t)³ + O((∆t)⁴).    (3.6)
Adding these two equations, one obtains

r(t + ∆t) + r(t − ∆t) = 2r(t) + (f(r(t))/m) (∆t)² + O((∆t)⁴).    (3.7)

The position updates are performed with an accuracy of order (∆t)⁴. This algorithm
does not use the particle velocities for calculating the new positions. One can
however determine these as follows:

v(t) = [r(t + ∆t) − r(t − ∆t)]/(2∆t) + O((∆t)²)    (3.8)
Let us note that most of the computation time is spent in the force calculation,
and only marginally in the position updates. As we will see below, we can improve
the accuracy of the simulation by using a higher-order time expansion, but this
requires the computation of spatial derivatives of the forces, and the computation
time increases rapidly compared to the same simulation performed with the Verlet
algorithm.
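The update rule of Eq. (3.7) can be sketched as follows (a one-dimensional illustrative Python sketch, not taken from the notes; the start-up step r(−∆t) ≈ r(0) − v(0)∆t + f ∆t²/(2m) is a standard choice):

```python
def verlet(force, r0, v0, dt, nsteps, m=1.0):
    """Verlet integration: r(t+dt) = 2 r(t) - r(t-dt) + f(r(t)) dt^2 / m."""
    r_prev = r0 - v0 * dt + 0.5 * force(r0) / m * dt * dt  # start-up: r(-dt)
    r, traj = r0, [r0]
    for _ in range(nsteps):
        r_next = 2.0 * r - r_prev + force(r) / m * dt * dt
        r_prev, r = r, r_next
        traj.append(r)
    return traj

# Harmonic oscillator f(r) = -r: the numerical trajectory stays close
# to the exact solution cos(t) over many periods.
traj = verlet(lambda r: -r, r0=1.0, v0=0.0, dt=0.01, nsteps=1000)
```

Velocities, if needed, follow from the centred difference of Eq. (3.8).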
For the Verlet algorithm, the accuracy of the trajectories is roughly given by

∆t⁴ N_t    (3.9)

where N_t is the total number of integration steps, and the total simulation time
is given by ∆t N_t.
It is worth noting that the computation accuracy decreases with simulation
time, due to the round-off errors, which are not included in Eq. (3.9).
There are more sophisticated algorithms involving force derivatives; they improve
the short-time accuracy, but the long-time precision may deteriorate faster
than with the Verlet algorithm. In addition, their computational efficiency is
significantly lower.
Time symmetry is an important property of the equations of motion, and the
Verlet algorithm preserves this symmetry: if we change ∆t → −∆t, Eq. (3.7) is
unchanged. As a consequence, if at a given time t of the simulation one inverts
the arrow of time, the Molecular Dynamics trajectories are retraced. The round-off
errors accumulated during the simulation limit this reversibility when the total
number of integration steps becomes very large.
For Hamiltonian systems, the volume of phase space is conserved in time, and
a numerical simulation must preserve this property. Conversely, if an algorithm
does not have this property, the total energy is no longer conserved, even for a
short simulation. In order to keep this property, it is necessary that the Jacobian
of the transformation in phase space, namely between the old and the new
coordinates in phase space, be equal to one. We will see below that the Verlet
algorithm satisfies this property.
An a priori variant of the Verlet algorithm is known as the Leapfrog algorithm
and is based on the following procedure: velocities are calculated at half-integer
times, and positions are obtained at integer times. Let us define the velocities
at t + ∆t/2 and t − ∆t/2

v(t + ∆t/2) = [r(t + ∆t) − r(t)]/∆t,    (3.10)
v(t − ∆t/2) = [r(t) − r(t − ∆t)]/∆t,    (3.11)

one immediately obtains

r(t + ∆t) = r(t) + v(t + ∆t/2)∆t    (3.12)

and similarly

r(t − ∆t) = r(t) − v(t − ∆t/2)∆t.    (3.13)

By using Eq. (3.7), one gets

v(t + ∆t/2) = v(t − ∆t/2) + (f(t)/m) ∆t + O((∆t)³).    (3.14)
Because the Leapfrog algorithm leads to Eq. (3.7), the trajectories are identical
to those calculated by Verlet's algorithm. The velocities at half-integer times are
temporary variables, and the positions are identical. Some care is however necessary
when calculating the thermodynamic quantities, because the mean potential
energy, which is calculated at integer times, and the kinetic energy, which is
calculated at half-integer times, do not add up to a constant total energy.
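A matching sketch of the Leapfrog update, Eqs. (3.10)-(3.14) (illustrative Python; the initial half kick v(∆t/2) = v(0) + f(r(0))∆t/(2m) is the standard start-up, our assumption):

```python
def leapfrog(force, r0, v0, dt, nsteps, m=1.0):
    """Leapfrog: velocities at half-integer times, positions at integer times."""
    v_half = v0 + 0.5 * force(r0) / m * dt   # start-up: v(dt/2)
    r = r0
    for _ in range(nsteps):
        r = r + v_half * dt                  # Eq. (3.12): position update
        v_half = v_half + force(r) / m * dt  # Eq. (3.14): velocity update
    return r, v_half
```

The positions generated this way coincide with those of the Verlet algorithm, as stated above.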
Many attempts to obtain better algorithms have been made and, in order to go
beyond the previous heuristic derivation, a more rigorous approach is necessary.
From the Liouville formalism, we are going to derive symplectic algorithms,
namely algorithms conserving the phase space volume and consequently the total
energy of the system. This method provides algorithms with greater accuracy
than the Verlet algorithm for a fixed time step ∆t; therefore, more precise
trajectories can be obtained.
3.4 Symplectic algorithms
3.4.1 Liouville formalism
In the Gibbs formalism, the phase space distribution is given by an N-particle
probability distribution f^(N)(r^N, p^N, t) (N is the total number of particles of
the system), where r^N denotes the set of coordinates and p^N that of the momenta.
If f^(N)(r^N, p^N, t) is known, the average of any macroscopic quantity can be
calculated Frenkel and Smit [1996], Tsai et al. [2005].
Time evolution of this distribution is given by the Liouville equation

∂f^(N)(r^N, p^N, t)/∂t + Σ_{i=1}^{N} [ (dr_i/dt)·(∂f^(N)/∂r_i) + (dp_i/dt)·(∂f^(N)/∂p_i) ] = 0,    (3.15)

or,

∂f^(N)(r^N, p^N, t)/∂t − {H_N, f^(N)} = 0,    (3.16)

where H_N denotes the Hamiltonian of the system of N particles, and the bracket
{A, B} corresponds to the Poisson bracket¹. The Liouville operator is defined as

L = i{H_N, ·}    (3.18)
  = i Σ_{i=1}^{N} [ (∂H_N/∂r_i)·(∂/∂p_i) − (∂H_N/∂p_i)·(∂/∂r_i) ]    (3.19)
with

∂H_N/∂p_i = v_i = dr_i/dt    (3.20)
∂H_N/∂r_i = −f_i = −dp_i/dt.    (3.21)

Therefore, Eq. (3.16) is reexpressed as

∂f^(N)(r^N, p^N, t)/∂t = −iL f^(N),    (3.23)

and one gets the formal solution

f^(N)(r^N, p^N, t) = exp(−iLt) f^(N)(r^N, p^N, 0).    (3.24)
Similarly, if A is a function of the positions r^N and of the momenta p^N (but
without explicit time dependence), it obeys

dA/dt = Σ_{i=1}^{N} [ (∂A/∂r_i)·(dr_i/dt) + (∂A/∂p_i)·(dp_i/dt) ].    (3.25)
¹If A and B are two functions of r^N and p^N, the Poisson bracket between A and B is defined
as

{A, B} = Σ_{i=1}^{N} [ (∂A/∂r_i)·(∂B/∂p_i) − (∂B/∂r_i)·(∂A/∂p_i) ]    (3.17)
Using the Liouville operator, this equation becomes

dA/dt = iLA,    (3.26)

which gives formally

A(r^N(t), p^N(t)) = exp(iLt) A(r^N(0), p^N(0)).    (3.27)

Unfortunately (or fortunately, because otherwise the world would be too simple),
an exact expression of the exponential operator is not available in the general
case. However, one can obtain it in two important cases. First, the Liouville
operator is expressed as the sum of two operators

L = L_r + L_p    (3.28)

where

iL_r = Σ_i (dr_i/dt)·(∂/∂r_i)    (3.29)

and

iL_p = Σ_i (dp_i/dt)·(∂/∂p_i)    (3.30)
In a first case, one assumes that the operator L_p is equal to zero. Physically,
this corresponds to situations where the particle momenta are conserved during
the evolution of the system, and

iL_r^0 = Σ_i (dr_i/dt)(0)·(∂/∂r_i)    (3.31)
Time evolution of A(t) is given by

A(r(t), p(t)) = exp(iL_r^0 t) A(r(0), p(0))    (3.32)

Expanding the exponential, one obtains

A(r(t), p(t)) = exp( Σ_i (dr_i/dt)(0) t (∂/∂r_i) ) A(r(0), p(0))    (3.33)
 = A(r(0), p(0)) + iL_r^0 t A(r(0), p(0)) + ((iL_r^0 t)²/2!) A(r(0), p(0)) + ...    (3.34)
 = Σ_{n=0}^{∞} (1/n!) [ Σ_i (dr_i/dt)(0) t (∂/∂r_i) ]^n A(r(0), p(0))    (3.35)
 = A( [r_i + (dr_i/dt)(0) t]^N , p^N(0) )    (3.36)
The solution is a simple translation of the spatial coordinates, which corresponds
to a free streaming of the particles without interaction, as expected.
The second case corresponds to a Liouville operator where L_r^0 = 0. Applying
to A the operator L_p^0, defined like L_r^0 in the first case, one obviously obtains
the solution of the Liouville equation, which corresponds to a simple translation
of the momenta.
3.4.2 Discretization of the Liouville equation
In the previous section, we expressed the Liouville operator as the sum of the
two operators L_r and L_p. These two operators do not commute, so that

exp(Lt) ≠ exp(L_r t) exp(L_p t).    (3.37)
The key point of the reasoning uses the Trotter identity

exp(B + C) = lim_{P→∞} [ exp(B/(2P)) exp(C/P) exp(B/(2P)) ]^P.    (3.38)
For a finite number P, one obtains

exp(B + C) = [ exp(B/(2P)) exp(C/P) exp(B/(2P)) ]^P exp(O(1/P²)).    (3.39)
When truncating after P iterations, the approximation is of order 1/P². Therefore,
by using ∆t = t/P, by replacing the formal solution of the Liouville equation
by a discretized version, and by introducing

B/P = iL_p t/P    (3.40)

and

C/P = iL_r t/P,    (3.41)

one obtains for an elementary step

e^{iL_p ∆t/2} e^{iL_r ∆t} e^{iL_p ∆t/2}.    (3.42)

Because the operators L_r and L_p are Hermitian, each exponential operator is
unitary and, by using the Trotter formula, one gets a method from which
symplectic algorithms can be derived, namely algorithms preserving the volume of
the phase space.
One first applies

e^{iL_p ∆t/2} A(r^N(0), p^N(0)) = A( r^N(0), [p(0) + (∆t/2)(dp(0)/dt)]^N )    (3.43)
then

e^{iL_r ∆t} A( r^N(0), [p(0) + (∆t/2)(dp(0)/dt)]^N )
 = A( [r(0) + ∆t (dr(∆t/2)/dt)]^N , [p(0) + (∆t/2)(dp(0)/dt)]^N )    (3.44)

and lastly, applying e^{iL_p ∆t/2}, we have

A( [r(0) + ∆t (dr(∆t/2)/dt)]^N , [p(0) + (∆t/2)(dp(0)/dt) + (∆t/2)(dp(∆t)/dt)]^N )    (3.45)
In summary, one obtains the global transformations

p(∆t) = p(0) + (∆t/2) [ f(r(0)) + f(r(∆t)) ]    (3.46)
r(∆t) = r(0) + ∆t (dr(∆t/2)/dt)    (3.47)

By using Eq. (3.46) with the momenta (defined at half-integer times) and Eq. (3.47)
for the initial and final times 0 and −∆t, one removes the velocity dependence
and recovers the Verlet algorithm at the lowest order of the Trotter expansion.
Note that it is possible to restart the derivation by inverting the roles of the
operators L_r and L_p in the Trotter formula and to obtain new evolution equations;
once again, the discretized equations of the trajectories are those of the Verlet
algorithm.
In summary, the product of the three unitary operators is also a unitary
operator, which leads to a symplectic algorithm. In other words, the Jacobian of
the transformation, Eq. (3.42), is the product of three Jacobians and is equal to 1.
The expansion can be performed to the next order in the Trotter formula: more
accurate algorithms can be derived, which are still symplectic, but they involve
force derivatives. Since the force calculation is always very demanding, the
calculation of force derivatives would add a large penalty to the computing
performance, and such algorithms are only used when short-time accuracy is a
crucial need.
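Read from right to left, the elementary step of Eq. (3.42) is the "half kick, free drift, half kick" update of Eqs. (3.46)-(3.47), often called velocity Verlet; a minimal sketch (illustrative Python, one degree of freedom):

```python
def velocity_verlet_step(r, p, force, dt, m=1.0):
    """One elementary step e^{iLp dt/2} e^{iLr dt} e^{iLp dt/2}."""
    p = p + 0.5 * dt * force(r)   # half impulse update
    r = r + dt * p / m            # free streaming of the position
    p = p + 0.5 * dt * force(r)   # second half impulse update
    return r, p

# For the harmonic oscillator f(r) = -r, the energy p^2/2 + r^2/2 stays
# close to its initial value 1/2: the scheme is symplectic.
r, p = 1.0, 0.0
for _ in range(1000):
    r, p = velocity_verlet_step(r, p, lambda x: -x, 0.01)
```

Each of the three sub-steps is a shear of phase space with unit Jacobian, which is the symplectic property discussed above.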
3.5 Hard sphere model
The hard sphere model is defined by the Hamiltonian

H_N = Σ_{i=1}^{N} (1/2) m v_i² + Σ_{i<j} u(r_i − r_j),    (3.48)
where the interaction potential is

u(r_i − r_j) = { +∞    |r_i − r_j| ≤ σ
               { 0      |r_i − r_j| > σ    (3.49)
From this definition, one sees that the Boltzmann factors exp(−βu(r_i − r_j))
involved in the configurational integral are equal either to 0 (when two spheres
overlap) or to 1 otherwise, which means that this integral does not depend on the
temperature, but only on the density. As for all systems with a purely repulsive
interaction, the hard sphere model does not undergo a liquid-gas transition, but
a liquid-solid transition exists.
This model has been intensively studied both theoretically and numerically. A
renewed interest in the last two decades comes from its non-equilibrium version,
commonly used for granular gases. Indeed, when the packing fraction is small,
the inelastic hard sphere model is a good reference model. The interaction
potential remains the same, but the dissipation occurring during collisions is
accounted for by a modified collision rule (see Chapter 7).
Note that the impulsive character of the forces between particles prevents the
use of the Verlet algorithm and of its variants: all the previous algorithms assume
that the forces are continuous functions of the distance, which is not the case here.
For hard spheres, the particle velocities are only modified during collisions.
Between two collisions, particles follow rectilinear trajectories. Moreover, since a
collision is instantaneous, the probability of having three spheres in contact at
the same time is infinitesimal: the dynamics is a sequence of binary collisions.
Consider two spheres of identical mass: during the collision, the total momentum
is conserved

v_1 + v_2 = v'_1 + v'_2    (3.50)

For an elastic collision, the normal component of the relative velocity at contact
is inverted. Let us denote ∆v_i = v_i − v'_i; one infers

(v'_2 − v'_1)·(r_2 − r_1) = −(v_2 − v_1)·(r_2 − r_1),    (3.51)

whereas the tangential component of the relative velocity is unchanged,

(v'_2 − v'_1)·n_12 = (v_2 − v_1)·n_12,    (3.52)

where n_12 is a vector normal to the collision axis, belonging to the plane defined
by the two incoming velocities.
By using Eqs. (3.51) and (3.52), one obtains the post-collisional velocities

v'_1 = v_1 + [ (v_2 − v_1)·(r_1 − r_2) / (r_1 − r_2)² ] (r_1 − r_2)    (3.53)
v'_2 = v_2 + [ (v_1 − v_2)·(r_1 − r_2) / (r_1 − r_2)² ] (r_1 − r_2)    (3.54)
It is easy to check that the total kinetic energy is conserved during the simulation.
Indeed, since the algorithm is exact (discarding for the moment the round-off
problem), it is obviously symplectic.
Practically, the hard sphere algorithm proceeds as follows: starting from an
initial (non-overlapping) configuration, all possible binary collisions are considered,
namely the collision time is calculated for each pair of particles.
Between collisions, the positions of i and j are given by

r_i = r_i^0 + v_i t    (3.55)
r_j = r_j^0 + v_j t    (3.56)

where r_i^0 and r_j^0 are the positions of particles i and j after the previous
collision.
The contact condition between the two particles is given by the relation

σ² = (r_i − r_j)² = (r_i^0 − r_j^0)² + (v_i − v_j)² t² + 2 (r_i^0 − r_j^0)·(v_i − v_j) t    (3.57)

Because the collision time is a solution of a quadratic equation, several cases must
be considered: if the roots are complex, the collision time is set to a very large
value in the simulation. If the two roots are real and both negative, the collision
time is again set to a very large value. If only one root is positive, the collision
time corresponds to this value. If the two roots are positive, the smallest one is
selected as the collision time.
A necessary condition for having a positive solution for t (a collision in the
future) is that

(r_i^0 − r_j^0)·(v_i − v_j) < 0    (3.58)
Once all pairs of particles have been examined, the shortest time is selected,
namely the first collision. The particle trajectories evolve rectilinearly until this
new collision occurs (the calculation of the trajectories is exact, because the forces
between particles are contact forces only). At the collision time, the velocities of
the two colliding particles are updated, and the next collision is computed again.
The algorithm provides trajectories with a high accuracy, limited only by the
round-off errors. The integration step is not constant, contrary to the Verlet
algorithm. This quasi-exact method has some limitations: when the density
becomes significant, the mean time between two collisions decreases rapidly and
the system then evolves slowly. In addition, rattling effects occur in isolated
regions of the system, which decreases the efficiency of the method.
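The two elementary operations of the event-driven scheme, the pair collision time from Eqs. (3.57)-(3.58) and the collision rule of Eqs. (3.53)-(3.54), can be sketched as follows (illustrative Python for spheres of equal mass and diameter sigma; a production code would keep the pair times in an event queue):

```python
import math

def pair_collision_time(r1, r2, v1, v2, sigma):
    """Earliest future contact time of two spheres; math.inf if none."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    b = sum(x * y for x, y in zip(dr, dv))
    if b >= 0.0:                      # Eq. (3.58): the pair is receding
        return math.inf
    dv2 = sum(x * x for x in dv)
    disc = b * b - dv2 * (sum(x * x for x in dr) - sigma * sigma)
    if disc < 0.0:                    # complex roots: the spheres miss
        return math.inf
    return (-b - math.sqrt(disc)) / dv2   # smallest positive root

def elastic_collision(r1, r2, v1, v2):
    """Post-collision velocities, Eqs. (3.53)-(3.54): the normal component
    of the relative velocity is inverted, the tangential one is unchanged."""
    dr = [a - b for a, b in zip(r1, r2)]
    dv = [a - b for a, b in zip(v1, v2)]
    proj = sum(x * y for x, y in zip(dv, dr)) / sum(x * x for x in dr)
    v1p = [v - proj * d for v, d in zip(v1, dr)]
    v2p = [v + proj * d for v, d in zip(v2, dr)]
    return v1p, v2p
```

For a head-on collision of equal masses, the two spheres simply exchange their velocities, as expected from Eqs. (3.50)-(3.52).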
3.6 Molecular Dynamics in other ensembles
As discussed above, Molecular Dynamics corresponds to a simulation in the
microcanonical ensemble. Generalizations exist for other ensembles, for instance
the canonical ensemble. Let us note that the thermal bath then introduces random
forces which change the dynamics through an ad hoc procedure. Because the
dynamics then depends on an adjustable parameter, the dynamics generated in
this way can differ from the “real” dynamics obtained in a microcanonical ensemble.
The methods consist of modifying the equations of motion so that the velocity
distribution reaches a Boltzmann distribution. However, this condition is not
sufficient for characterizing a canonical ensemble. Therefore, after introducing a
heuristic method, we will see how to build a more systematic method for performing
Molecular Dynamics in generalized ensembles.
3.6.1 Andersen algorithm
The Andersen algorithm is a method where the coupling of the system with
a bath is done as follows: a stochastic process modifies the velocities of the
particles through instantaneous forces. This mechanism can be interpreted as
Monte Carlo-like moves between isoenergy surfaces. Between these stochastic
“collisions”, the system evolves with the usual Newtonian dynamics. The coupling
strength is controlled by the collision frequency, denoted by ν. In addition, one
assumes that the stochastic “collisions” are totally uncorrelated, which leads to an
exponential distribution of the times between collisions,

P(ν, t) = ν e^{−νt}    (3.59)

where P(ν, t) dt denotes the probability that the next collision between a particle
and the bath occurs in the time interval [t, t + dt].
Practically, the algorithm is composed of three steps:
• The system follows Newtonian dynamics over one or several time steps, denoted ∆t.
• One chooses randomly a number of particles for stochastic collisions. The probability of choosing a given particle in a time interval ∆t is ν∆t.
• When a particle is selected, its new velocity is drawn from a Maxwellian distribution at temperature T. The other velocities are not updated.
This procedure ensures that the velocity distribution goes to equilibrium, but obviously the real dynamics of the system is deeply disturbed. Moreover, the full distribution obtained with the Andersen thermostat is not a canonical distribution, as can be shown by computing the fluctuations. In particular, correlation functions relax at a rate which strongly depends on the coupling to the thermostat.
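As a concrete illustration, the collision step of the algorithm can be sketched as follows (a minimal sketch in reduced units with k_B = 1; function and variable names are ours, not from the text):

```python
import numpy as np

def andersen_collisions(vel, masses, temperature, nu, dt, rng):
    """Apply Andersen stochastic collisions to an (N, d) array of velocities.

    Each particle suffers a collision with probability nu*dt during the time
    step; its velocity is then redrawn from the Maxwell distribution at the
    bath temperature (k_B = 1 in these reduced units).
    """
    n = vel.shape[0]
    hit = rng.random(n) < nu * dt               # particles selected for a collision
    sigma = np.sqrt(temperature / masses[hit])  # Maxwellian width per selected particle
    vel = vel.copy()
    vel[hit] = sigma[:, None] * rng.standard_normal((hit.sum(), vel.shape[1]))
    return vel
```

Between two calls, the system is propagated with the ordinary (e.g. Verlet) integrator; only the selected velocities are touched.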
3.6.2 Nos´e-Hoover algorithm
As discussed above, the Andersen algorithm biases the dynamics of the simulation, which is undesirable because the main purpose of Molecular Dynamics is to provide a realistic dynamics. To overcome this problem, a more systematic approach, introduced by Nosé, enlarges the system by introducing additional degrees of freedom. This procedure modifies the Hamiltonian dynamics by adding friction forces which increase or decrease the particle velocities.
Let us consider the Lagrangian

\mathcal{L} = \sum_{i=1}^{N} \frac{m_i s^2}{2}\left(\frac{d\mathbf{r}_i}{dt}\right)^2 - U(\mathbf{r}^N) + \frac{Q}{2}\left(\frac{ds}{dt}\right)^2 - \frac{L}{\beta}\ln(s) \qquad (3.60)

where L is a free parameter and U(\mathbf{r}^N) is the interaction potential between the N particles. Q represents the effective mass of the one-dimensional variable s. The Lagrange equations of this system read

\mathbf{p}_i = \frac{\partial \mathcal{L}}{\partial \dot{\mathbf{r}}_i} = m_i s^2 \dot{\mathbf{r}}_i \qquad (3.61)

p_s = \frac{\partial \mathcal{L}}{\partial \dot{s}} = Q\dot{s} \qquad (3.62)
By using a Legendre transformation, one obtains the Hamiltonian

H = \sum_{i=1}^{N} \frac{\mathbf{p}_i^2}{2 m_i s^2} + U(\mathbf{r}^N) + \frac{p_s^2}{2Q} + \frac{L}{\beta}\ln(s) \qquad (3.63)
Let us express the microcanonical partition function of this system, made of the N interacting particles and of the additional degree of freedom coupled to them:

Q = \frac{1}{N!}\int dp_s\, ds\, d\mathbf{p}^N d\mathbf{r}^N\, \delta(H - E) \qquad (3.64)

This partition function corresponds to N indistinguishable particles plus the additional variable in the microcanonical ensemble, where microstates of equal energy are equiprobable.
Let us introduce \mathbf{p}' = \mathbf{p}/s. One obtains

Q = \frac{1}{N!}\int dp_s\, ds\, s^{3N} d\mathbf{p}'^N d\mathbf{r}^N\, \delta\!\left(\sum_{i=1}^{N}\frac{(\mathbf{p}'_i)^2}{2m_i} + U(\mathbf{r}^N) + \frac{p_s^2}{2Q} + \frac{L}{\beta}\ln(s) - E\right) \qquad (3.65)
One defines H' as

H' = \sum_{i=1}^{N}\frac{(\mathbf{p}'_i)^2}{2m_i} + U(\mathbf{r}^N) \qquad (3.66)
and the partition function becomes

Z = \frac{1}{N!}\int dp_s\, ds\, s^{3N} d\mathbf{p}'^N d\mathbf{r}^N\, \delta\!\left(H' + \frac{p_s^2}{2Q} + \frac{L}{\beta}\ln(s) - E\right) \qquad (3.67)
By using the property

\delta(h(s)) = \frac{\delta(s - s_0)}{|h'(s_0)|} \qquad (3.68)

where h(s) is a function with a single zero on the s-axis, at the value s = s_0.
One integrates over the variable s and obtains

\delta\!\left(H' + \frac{p_s^2}{2Q} + \frac{L}{\beta}\ln(s) - E\right) = \frac{\beta s}{L}\,\delta\!\left(s - \exp\!\left(-\frac{\beta}{L}\left(H' + \frac{p_s^2}{2Q} - E\right)\right)\right) \qquad (3.69)
This gives for the partition function

Z = \frac{\beta \exp(\beta E(3N+1)/L)}{L\, N!}\int dp_s\, d\mathbf{p}'^N d\mathbf{r}^N \exp\!\left(-\frac{\beta(3N+1)}{L}\left(H' + \frac{p_s^2}{2Q}\right)\right) \qquad (3.70)
Setting L = 3N + 1 and integrating over the additional degree of freedom, the partition function Q of the microcanonical ensemble of the full system (N particles plus the additional degree of freedom) becomes a canonical partition function of a system of N particles.

It is worth noting that the natural variables here are r, p' and a rescaled time t'. Discretizing the equations of motion in these variables would give a variable time step, which is not easy to control in Molecular Dynamics. It is possible to return to a constant time step by additional changes of variables; this work was done by Hoover. Finally, the equations of motion are
\dot{\mathbf{r}}_i = \frac{\mathbf{p}_i}{m_i} \qquad (3.71)

\dot{\mathbf{p}}_i = -\frac{\partial U(\mathbf{r}^N)}{\partial \mathbf{r}_i} - \xi\,\mathbf{p}_i \qquad (3.72)

\dot{\xi} = \left(\sum_{i=1}^{N}\frac{\mathbf{p}_i^2}{2m_i} - \frac{L}{\beta}\right)\frac{1}{Q} \qquad (3.73)

\frac{\dot{s}}{s} = \xi \qquad (3.74)
The first two equations, together with Eq. (3.73), form a closed set of equations for the N particles. The last equation controls the evolution of the additional degree of freedom in the simulation.
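For illustration, the closed set (3.71)-(3.73) can be integrated with a naive explicit Euler scheme; production codes use time-reversible splittings, but the structure of the equations is the same (a sketch, with our own function names and the text's convention for the L/β target):

```python
import numpy as np

def nose_hoover_step(r, p, xi, m, beta, L, Q, force, dt):
    """One explicit Euler step of the Nose-Hoover equations (3.71)-(3.73).

    r, p: position and momentum arrays; xi: thermostat variable;
    force(r) returns -dU/dr. A minimal sketch, not a production integrator.
    """
    f = force(r)
    r_new = r + dt * p / m                                    # dr/dt = p/m
    p_new = p + dt * (f - xi * p)                             # dp/dt = -dU/dr - xi*p
    xi_new = xi + dt * (np.sum(p**2 / (2 * m)) - L / beta) / Q  # thermostat feedback
    return r_new, p_new, xi_new
```

The thermostat variable ξ acts as a friction coefficient of fluctuating sign, pumping energy in or out until the kinetic energy fluctuates around the target L/β.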
3.7 Brownian dynamics
3.7.1 Different timescales
Until now, we have considered simple systems with a unique microscopic time.
For a mixture of particles, whose size are very different (an order of magnitude
is sufficient, two different microscopic time scales are present (the smallest one
is associated with the smallest particles). To obtain the statistical properties of
large particles, simulation time exceeds the computer ability. This situation is
not exceptional, because it corresponds, for instance, to the physical situation of
biological molecules in water. Diluted nanoparticles, like ferrofluids or other col-
loids, are examples where the presence of several microscopic timescales prohibits
simulation with standard Molecular Dynamics. Brownian dynamics (which refers
to the Brownian motion) corresponds to a dynamics where smallest particles are
replaced with a continuous media. The basic idea is to perform averages on the
degrees of freedom associated with the smallest relaxation times. Timescales as-
sociated with the largest particle moves are assumed larger than the relaxation
times of the velocities.
3.7.2 Smoluchowski equation
The Liouville equation describes the evolution of the system at the microscopic scale. After averaging over the small particles (the solvent) and over the velocity distribution of the large particles, the probability distribution P(\{\mathbf{r}^N\}) of finding the N particles at positions \{\mathbf{r}^N\} obeys a Smoluchowski equation:

\frac{\partial P(\{\mathbf{r}^N\})}{\partial t} = \frac{\partial}{\partial \mathbf{r}^N}\cdot D\,\frac{\partial P(\{\mathbf{r}^N\})}{\partial \mathbf{r}^N} - \beta\,\frac{\partial}{\partial \mathbf{r}^N}\cdot D\,\mathbf{F}(\{\mathbf{r}^N\})\,P(\{\mathbf{r}^N\}) \qquad (3.75)

where \beta = 1/k_B T and D is a 3N \times 3N matrix which depends on the particle configuration and contains the hydrodynamic interactions.
3.7.3 Langevin equation. Discretization
A Smoluchowski equation corresponds to a description in terms of a probability distribution; it is also possible to describe the same process in terms of trajectories, which corresponds to the Langevin equation

\frac{d\mathbf{r}^N}{dt} = \beta D \mathbf{F} + \frac{\partial}{\partial \mathbf{r}^N}\cdot D + \boldsymbol{\xi}_{\mathbf{r}^N} \qquad (3.76)

where D is the diffusion matrix, which depends on the particle configuration in the solvent, and \boldsymbol{\xi}_{\mathbf{r}^N} is a Gaussian white noise.
By discretizing the stochastic differential equation Eq. (3.76) with a constant time step ∆t, one obtains with a Euler algorithm the discretized equation

\Delta \mathbf{r}^N = \left(\beta D \mathbf{F} + \frac{\partial}{\partial \mathbf{r}^N}\cdot D\right)\Delta t + \mathbf{R} \qquad (3.77)

∆t must be sufficiently small that the forces change little during a time step, but sufficiently large for the velocity distribution to have equilibrated. \mathbf{R} is a random move drawn from a Gaussian distribution with zero mean, \langle \mathbf{R} \rangle = 0, and with variance

\langle \mathbf{R}\,\mathbf{R}^T \rangle = 2 D \Delta t \qquad (3.78)

where \mathbf{R}^T denotes the transpose of the vector \mathbf{R}.
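In the dilute limit where D = D_0 I (so the divergence term vanishes), one Euler step of Eq. (3.77) may be sketched as follows (function names are ours):

```python
import numpy as np

def brownian_step(pos, force, D0, beta, dt, rng):
    """Euler discretization (3.77) of the Langevin equation in the dilute limit,
    where D = D0*I and the divergence term (d/dr).D vanishes.

    The random move R is Gaussian with zero mean and variance 2*D0*dt per
    Cartesian component, as required by Eq. (3.78).
    """
    F = force(pos)
    drift = beta * D0 * F * dt                               # deterministic drift
    R = rng.standard_normal(pos.shape) * np.sqrt(2.0 * D0 * dt)  # random move
    return pos + drift + R
```

For free particles (F = 0) this reduces to a random walk whose mean square displacement grows as 2dD_0 t in d dimensions, a convenient sanity check of an implementation.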
A main difficulty with this dynamics is to obtain an expression of the diffusion matrix as a function of the particle configuration. For an infinitely dilute system, the diffusion matrix becomes diagonal and the nonzero matrix elements are all equal to the diffusion constant in the infinite-dilution limit. In the general case, the solvent plays a more active role by adding hydrodynamic forces. These forces are generally long-ranged and their effects cannot be neglected even at modest packing fractions. In the dilute limit, for spherical particles with only translational degrees of freedom, the diffusion matrix has the asymptotic expression

D = D_0 \mathbb{I} + O\!\left(\frac{1}{r_{ij}^2}\right) \qquad (3.79)

with

D_0 = \frac{1}{3\pi\eta\sigma\beta} \qquad (3.80)

where η is the solvent viscosity, σ is the diameter of the large particles, and β is the inverse temperature. This corresponds to the Stokes law.
3.7.4 Consequences
This stochastic differential equation does not satisfy the time-reversal symmetry of Hamiltonian systems. Therefore, time correlation functions calculated from Brownian dynamics will differ at short times. Nevertheless, glassy behavior is found in many systems studied with Brownian dynamics; this universal behavior is also observed in Molecular Dynamics of molecular systems (orthoterphenyl, salol, glycerol, ...) and of nanoparticle systems (ferrofluids, polymers, ...).

Let us note that Brownian dynamics, which combines aspects of Molecular Dynamics (deterministic forces) and of stochastic processes, is an intermediate method between Molecular Dynamics and Monte Carlo simulation. Optimization techniques and hydrodynamic forces are topics which go beyond these lecture notes, but they must be considered for efficient and accurate simulation of complex systems.
3.8 Conclusion
The Molecular Dynamics method has developed considerably during the last decades. Initially devoted to simulating atomic and molecular systems, it now allows, through generalized ensembles and/or Brownian dynamics, the study of systems containing large molecules, like biological molecules.
3.9 Exercises
3.9.1 Multi timescale algorithm
When implementing a Molecular Dynamics simulation, the standard algorithm is the Verlet algorithm (or leap-frog). The time step of the simulation is chosen such that the variations of the different quantities (forces and velocities) are small over a time step. However, the interaction forces contain a short-range and a long-range part,

\mathbf{F} = \mathbf{F}_{short} + \mathbf{F}_{long} \qquad (3.81)

The Liouville operator iL for a particle reads

iL = iL_r + iL_F \qquad (3.82)
= v\,\frac{\partial}{\partial r} + \frac{F}{m}\,\frac{\partial}{\partial v} \qquad (3.83)
♣ Q. 3.9.1-1 Using the Trotter formula at first order in the time step, recover the equations of motion. One denotes the time step by ∆t.
The Liouville operator can be separated in two parts as follows

iL = (iL_r + iL_{F_{short}}) + iL_{F_{long}} \qquad (3.84)

with

iL_{F_{short}} = \frac{F_{short}}{m}\,\frac{\partial}{\partial v} \qquad (3.85)

iL_{F_{long}} = \frac{F_{long}}{m}\,\frac{\partial}{\partial v} \qquad (3.86)
♣ Q. 3.9.1-2 By taking a shorter time step δt = ∆t/n, a sub-multiple of the original time step ∆t, for the short-range forces, show that the propagator e^{iL\Delta t} can be expressed at first order as

e^{iL_{F_{long}}\Delta t/2}\left(e^{iL_{F_{short}}\delta t/2}\, e^{iL_r \delta t}\, e^{iL_{F_{short}}\delta t/2}\right)^n e^{iL_{F_{long}}\Delta t/2} \qquad (3.87)
♣ Q. 3.9.1-3 Describe the principle of this algorithm. What can one say about the reversibility of the algorithm and about the conservation of volume in phase space?
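For illustration, the factorization (3.87) translates into the following multiple-time-step loop, in the spirit of the r-RESPA algorithm (a sketch; the splitting of the force and the function names are ours):

```python
def respa_step(r, v, m, f_short, f_long, dt, n):
    """One step of the multiple-time-step propagator of Eq. (3.87):
    a half kick with the long-range force, n velocity-Verlet substeps
    with the short-range force, then the final long-range half kick."""
    delta = dt / n
    v = v + 0.5 * dt * f_long(r) / m              # exp(iL_F_long dt/2)
    for _ in range(n):                            # [short-range Verlet]^n
        v = v + 0.5 * delta * f_short(r) / m      # exp(iL_F_short delta/2)
        r = r + delta * v                         # exp(iL_r delta)
        v = v + 0.5 * delta * f_short(r) / m      # exp(iL_F_short delta/2)
    v = v + 0.5 * dt * f_long(r) / m              # exp(iL_F_long dt/2)
    return r, v
```

Each factor is applied from right to left on the phase-space point; the scheme remains time-reversible and volume-preserving, which is the point of Q. 3.9.1-3.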
Chapter 4
Correlation functions
Contents
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Radial distribution function . . . . . . . . . . . . . . . 62
4.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Time correlation functions . . . . . . . . . . . . . . . . 67
4.3.3 Computation of the time correlation function . . . . . 68
4.3.4 Linear response theory: results and transport coefficients 69
4.4 Space-time correlation functions . . . . . . . . . . . . 70
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . 70
4.4.3 Intermediate scattering function . . . . . . . . . . . . 72
4.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . 72
4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . 72
4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.2 4-point correlation function . . . . . . . . . . . . . . . 73
4.5.3 4-point susceptibility and dynamic correlation length . 73
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7.1 “Le facteur de structure dans tous ses états!” . . . . . 74
4.7.2 Van Hove function and intermediate scattering function 75
4.1 Introduction
With Monte Carlo or Molecular Dynamics simulation, one can calculate thermo-
dynamic quantities , as seen in previous chapters, but also the spatial correlation
functions. These functions characterize the structure of the system and provide
more detailed information that thermodynamic quantities. Correlation functions
can be directly compared to light or neutron scattering, and/or to several theo-
retical approaches that have been developed during the last decades (for instance,
the integral equations of liquid state).
Time correlation functions can be compared to experiments if the simulation
dynamics corresponds to a ”real”dynamics, namely a Molecular Dynamics. Monte
Carlo dynamics strongly depends of the chosen rule. However, although the
Metropolis algorithm does not correspond to a real dynamics, global trends of a
real dynamics are generally contained (slowing down for a continuous transition,
glassy behavior in glass formers liquids, ...). Using the linear response theory,
the transport coefficients can be obtained by integrating the time correlation
functions.
In the second part of this chapter, we consider the glassy behavior where
simulation showed recently that dynamics of the system is characterized by dy-
namical heterogeneities. Indeed, while many particles seem to be frozen, isolated
regions contain mobile particles. This phenomenon evolves continuously with
time where mobile particles becomes frozen and conversely. This feature can be
characterized by a high-order correlation function (4-point correlation function).
Therefore, specific behavior can be characterized by a more or less sophisticated
correlation functions.
4.2 Structure
4.2.1 Radial distribution function
We now derive the equilibrium correlation functions as a function of the local density (already defined in Chap. 1). The main interest is to show that these expressions are well suited for simulation. In addition, we pay special attention to finite-size effects; indeed, since the simulation box contains a finite number of particles, quantities of interest display some differences with respect to the same quantities in the thermodynamic limit, and one must account for this.
Let us consider a system of N particles defined by the Hamiltonian

H = \sum_{i=1}^{N} \frac{\mathbf{p}_i^2}{2m} + V(\mathbf{r}^N). \qquad (4.1)
In the canonical ensemble, the density distribution associated with the probability of finding n particles in the elementary volume \prod_{i=1}^{n} d\mathbf{r}_i, denoted in the following as d\mathbf{r}^n, is given by

\rho^{(n)}_N(\mathbf{r}^n) = \frac{N!}{(N-n)!}\, \frac{\int \cdots \int \exp(-\beta V(\mathbf{r}^N))\, d\mathbf{r}^{N-n}}{Z_N(V,T)} \qquad (4.2)

where

Z_N(V,T) = \int \cdots \int \exp(-\beta V(\mathbf{r}^N))\, d\mathbf{r}^N. \qquad (4.3)
The normalization of \rho^{(n)}_N(\mathbf{r}^n) is such that

\int \cdots \int \rho^{(n)}_N(\mathbf{r}^n)\, d\mathbf{r}^n = \frac{N!}{(N-n)!}, \qquad (4.4)

which corresponds to the number of ways of choosing n particles among N. In particular, one has

\int \rho^{(1)}_N(\mathbf{r}_1)\, d\mathbf{r}_1 = N \qquad (4.5)

\int \rho^{(2)}_N(\mathbf{r}^2)\, d\mathbf{r}^2 = N(N-1) \qquad (4.6)

which means, for Eqs. (4.5) and (4.6) respectively, that one finds N particles and N(N-1) ordered pairs of particles in the total volume.
The pair distribution function is defined as

g^{(2)}_N(\mathbf{r}_1, \mathbf{r}_2) = \rho^{(2)}_N(\mathbf{r}_1, \mathbf{r}_2) / \left(\rho^{(1)}_N(\mathbf{r}_1)\,\rho^{(1)}_N(\mathbf{r}_2)\right). \qquad (4.7)
For a homogeneous system (which is not the case of liquids in the vicinity of interfaces), one has \rho^{(1)}_N(\mathbf{r}) = \rho, and the pair distribution function only depends on the relative distance between particles:

g^{(2)}_N(\mathbf{r}_1, \mathbf{r}_2) = g(|\mathbf{r}_1 - \mathbf{r}_2|) = \frac{\rho^{(2)}_N(|\mathbf{r}_1 - \mathbf{r}_2|)}{\rho^2} \qquad (4.8)

where g(r) is called the radial distribution function.
When the distance |\mathbf{r}_1 - \mathbf{r}_2| is large, the radial distribution function goes to 1 - 1/N. In the thermodynamic limit, one recovers that the radial distribution function goes to 1, but at finite size a correction appears which changes the large-distance limit.
Noting that

\langle \delta(\mathbf{r} - \mathbf{r}_1) \rangle = \frac{1}{Z_N(V,T)} \int \cdots \int \delta(\mathbf{r} - \mathbf{r}_1) \exp(-\beta V_N(\mathbf{r}^N))\, d\mathbf{r}^N \qquad (4.9)
= \frac{1}{Z_N(V,T)} \int \cdots \int \exp(-\beta V_N(\mathbf{r}, \mathbf{r}_2, \ldots, \mathbf{r}_N))\, d\mathbf{r}_2 \ldots d\mathbf{r}_N \qquad (4.10)
the probability density is reexpressed as

\rho^{(1)}_N(\mathbf{r}) = \left\langle \sum_{i=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i) \right\rangle. \qquad (4.11)
Similarly, one infers

\rho^{(2)}_N(\mathbf{r}, \mathbf{r}') = \left\langle \sum_{i=1}^{N} \sum_{j=1, j\neq i}^{N} \delta(\mathbf{r} - \mathbf{r}_i)\, \delta(\mathbf{r}' - \mathbf{r}_j) \right\rangle. \qquad (4.12)
By using the microscopic densities, the radial distribution function is expressed as

g^{(2)}(\mathbf{r}, \mathbf{r}')\,\rho^{(1)}(\mathbf{r})\,\rho^{(1)}(\mathbf{r}') = \left\langle \sum_{i=1}^{N} \sum_{j=1, j\neq i}^{N} \delta(\mathbf{r} - \mathbf{r}_i)\, \delta(\mathbf{r}' - \mathbf{r}_j) \right\rangle. \qquad (4.13)
For a homogeneous and isotropic system, the radial distribution function only depends on the modulus of the relative distance |\mathbf{r} - \mathbf{r}'|. Integrating over the volume, one obtains

\rho\, g(|\mathbf{r}|) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \sum_{j=1, j\neq i}^{N} \delta(\mathbf{r} - \mathbf{r}_i + \mathbf{r}_j) \right\rangle. \qquad (4.14)
In a simulation using periodic boundary conditions (with cubic symmetry), one cannot obtain the structure of the system beyond a distance equal to L/2, because of the spatial periodicity in each main direction of the box. Computation of the radial distribution function relies on a discretization of space. Practically, one starts with an empty distance histogram of spatial step ∆r, one calculates the number of pairs whose distance is between r and r + ∆r, and one performs the integration of Eq. (4.14) over the shell labeled by the index j, of thickness ∆r, by using the approximate formula

\rho\, g(\Delta r (j + 0.5)) = \frac{2 N_p(r)}{N (V_{j+1} - V_j)}, \qquad (4.15)

where N_p(r) is the number of distinct pairs whose center-to-center distances are between j∆r and (j+1)∆r, and where V_j = \frac{4\pi}{3} (j \Delta r)^3.
This formula is accurate if the radial distribution function varies smoothly between the distances r and r + ∆r. By decreasing the spatial step, one can check a posteriori that this condition is satisfied. However, by decreasing the spatial step of the histogram, the statistical accuracy in each bin diminishes. Therefore, a compromise must be found for the step, based in part on a rule of thumb.
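The histogram procedure described above may be sketched as follows (a minimal version for a cubic box with the minimum-image convention; function and variable names are ours):

```python
import numpy as np

def radial_distribution(pos, box, dr, rho):
    """Histogram estimate of g(r), Eq. (4.15), for one configuration.

    Distances use the minimum-image convention; the histogram is normalized
    by the ideal-gas count in each shell and is meaningful only up to half
    the box length.
    """
    n = pos.shape[0]
    nbins = int(0.5 * box / dr)
    hist = np.zeros(nbins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.sqrt((d**2).sum(axis=1))
        idx = (r / dr).astype(int)
        np.add.at(hist, idx[idx < nbins], 1)  # count each distinct pair once
    shells = (4.0 / 3.0) * np.pi * ((np.arange(1, nbins + 1) * dr)**3
                                    - (np.arange(nbins) * dr)**3)
    return 2.0 * hist / (n * rho * shells)    # factor 2: pairs counted once
```

For an ideal gas (uncorrelated positions) the estimate fluctuates around 1, which provides a quick check of the normalization.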
Last, note that the computation of this correlation function (for a given configuration) involves all pairs, namely a number of terms of order the square of the particle number, an intrinsic property of the correlation function. Because a correct average needs a sufficient number of different configurations, only configurations in which all particles have been moved are used, instead of every configuration. The computation time spent on the correlation function is then divided by N. In summary, the calculation of the correlation function scales as N and is comparable to that of the simulation dynamics itself. The computation time can be decreased further if only the short-distance structure is significant or of interest.

Figure 4.1 – Principle of the computation of the radial distribution function: starting from a given particle, one counts the pairs whose distances fall within the successive shells built from the discretization of Eq. (4.15).
By expressing the radial distribution function in terms of the local density, the distribution function is defined in a statistical manner. The ensemble average is not restricted to equilibrium. We will see later (Chapter 7) that it is possible to calculate this function for out-of-equilibrium systems (e.g. random sequential addition) where the process is not described by a Gibbs distribution.
4.2.2 Structure factor
The static structure factor, S(k), is a quantity available experimentally from light or neutron scattering; it is related to the Fourier transform of the radial distribution function g(r).

For a simple liquid, the static structure factor is defined as

S(\mathbf{k}) = \frac{1}{N} \langle \rho_{\mathbf{k}}\, \rho_{-\mathbf{k}} \rangle \qquad (4.16)

where \rho_{\mathbf{k}} is the Fourier transform of the microscopic density \rho(\mathbf{r}). This reads
S(\mathbf{k}) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \exp(-i\mathbf{k}\cdot\mathbf{r}_i) \exp(i\mathbf{k}\cdot\mathbf{r}_j) \right\rangle. \qquad (4.17)
Using delta distributions, S(\mathbf{k}) is expressed as

S(\mathbf{k}) = 1 + \frac{1}{N} \left\langle \int\!\!\int \exp(-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}')) \sum_{i=1}^{N} \sum_{j=1, j\neq i}^{N} \delta(\mathbf{r} - \mathbf{r}_i)\, \delta(\mathbf{r}' - \mathbf{r}_j)\, d\mathbf{r}\, d\mathbf{r}' \right\rangle \qquad (4.18)

which gives

S(\mathbf{k}) = 1 + \frac{1}{N} \int\!\!\int \exp(-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}'))\, \rho^{(2)}(\mathbf{r}, \mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}'. \qquad (4.19)
For a homogeneous fluid (isotropic and uniform),

S(\mathbf{k}) = 1 + \frac{\rho^2}{N} \int\!\!\int \exp(-i\mathbf{k}\cdot(\mathbf{r}-\mathbf{r}'))\, g(\mathbf{r}, \mathbf{r}')\, d\mathbf{r}\, d\mathbf{r}' \qquad (4.20)

For an isotropic fluid, the radial distribution function only depends on |\mathbf{r} - \mathbf{r}'|. One then obtains

S(\mathbf{k}) = 1 + \rho \int \exp(-i\mathbf{k}\cdot\mathbf{r})\, g(\mathbf{r})\, d\mathbf{r}. \qquad (4.21)
Because the system is isotropic, the Fourier transform only depends on the modulus k of the wave vector, |\mathbf{k}|, which gives in three dimensions

S(k) = 1 + 2\pi\rho \int r^2 g(r) \int_0^{\pi} \exp(-ikr\cos\theta)\, \sin\theta\, d\theta\, dr \qquad (4.22)

and, after some calculation,

S(k) = 1 + 4\pi\rho \int_0^{\infty} r^2 g(r)\, \frac{\sin(kr)}{kr}\, dr. \qquad (4.23)
Finally, the calculation of the static structure factor requires a one-dimensional sine Fourier transform on the positive axis. Efficient fast-Fourier-transform codes compute S(k) very rapidly.
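As an illustration, the quadrature of Eq. (4.23) can be coded directly. Here we use the practical variant in which g(r) - 1 replaces g(r), which removes the forward-scattering contribution and makes the truncated integral converge (an assumption beyond the text's formula; function names are ours):

```python
import numpy as np

def structure_factor(r, g, rho, k):
    """Static structure factor from the quadrature of Eq. (4.23), written as
    S(k) = 1 + 4*pi*rho * integral of r^2 (g(r)-1) sin(kr)/(kr) dr.

    r: equally spaced radial grid; g: g(r) on that grid; k: array of moduli.
    np.sinc(x) is sin(pi x)/(pi x), so sinc(k r / pi) = sin(kr)/(kr).
    """
    dr = r[1] - r[0]
    integrand = r**2 * (g - 1.0) * np.sinc(k[:, None] * r / np.pi)
    return 1.0 + 4.0 * np.pi * rho * integrand.sum(axis=1) * dr  # rectangle rule
```

For an ideal gas, g(r) = 1 everywhere and the routine returns S(k) = 1 identically, a useful consistency check.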
In a simulation, one can calculate either g(r), obtaining S(k) by a Fourier transform, or S(k) directly from Eq. (4.17), obtaining g(r) by an inverse Fourier transform. In both cases, knowledge of the correlations is limited to a distance equal to half the length of the simulation box.

For the static structure factor, this yields an infrared cut-off (π/L): the simulation cannot provide information for smaller wave vectors. Because the computation of the spatial correlation function runs over pairs of particles for a given configuration, the number of terms of the summation is proportional to N² (times the number of configurations used for the average in the simulation). In the direct calculation of the static structure factor S(k), the number of terms to consider is proportional to N multiplied by the number of wave numbers which are the components of the structure factor (and finally multiplied by the number of configurations used in the simulation). Repeated evaluation of trigonometric functions in Eq. (4.17) can be avoided by tabulating complex exponentials in an array at the beginning of the simulation.

Basically, the computation of g(r), followed by a Fourier transform, should give the same result as the direct computation of the structure factor. However, because of the finite duration of the simulation as well as the finite size of the simulation box, statistical errors and round-off errors can lead to differences between the two routes.
4.3 Dynamics
4.3.1 Introduction
Spatial correlation functions are relevant probes of the presence or absence of order at various lengthscales. Similarly, knowledge of equilibrium fluctuations is provided by time correlation functions. Indeed, let us recall that equilibrium does not mean absence of particle dynamics: the system continuously undergoes fluctuations, and how a fluctuation is able to disappear is an essential characteristic associated with macroscopic changes, observed in particular in the vicinity of a phase transition.
4.3.2 Time correlation functions
Let us consider two examples of time correlation functions using simple models
introduced in the first chapter of these lecture notes.
For the Ising model, one defines the spin-spin time correlation function

C(t) = \frac{1}{N} \sum_{i=1}^{N} \langle S_i(0)\, S_i(t) \rangle \qquad (4.24)
where the brackets denote an equilibrium average and S_i(t) is the spin variable of site i at time t. This function measures how the system loses memory of a configuration after a time t. At time t = 0, this function is equal to 1 (a value which cannot be exceeded, by definition); it then decreases and goes to 0 when the elapsed time is much larger than the equilibrium relaxation time.
For a simple liquid made of point particles, one can monitor the density autocorrelation function

C(t) = \frac{1}{V} \int d\mathbf{r}\, \langle \delta\rho(\mathbf{r}, t)\, \delta\rho(\mathbf{r}, 0) \rangle \qquad (4.25)

where \delta\rho(\mathbf{r}, t) denotes the local density fluctuation. This autocorrelation function measures the evolution of the local density in time. It goes to zero when the elapsed time is much larger than the relaxation times of the system.
4.3.3 Computation of the time correlation function
We now detail the implementation of the time autocorrelation function on a basic example, namely the spin correlation of the Ising model, defined in Eq. (4.24)¹.

The time average of a correlation function (or of other quantities) in an equilibrium simulation (Molecular Dynamics or Monte Carlo) is calculated by using a fundamental property of equilibrium systems, namely time-translational invariance: in other words, if one calculates ⟨S_i(t′)S_i(t′ + t)⟩, the result is independent of t′. In order to have a well-defined statistical average, one must iterate this computation for many different t′:
C(t) = \frac{1}{NM} \sum_{j=1}^{M} \sum_{i=1}^{N} S_i(t_j)\, S_i(t_j + t), \qquad (4.26)

where the initial instants t_j are times at which the system is already at equilibrium. We have assumed that the time step is constant. When the latter is variable, one can use a constant time step for the correlation function (in general, slightly smaller than the mean value of the variable time step) and the above method can then be applied.
For the practical computation of C(t), one first defines N_c vectors of N components (N is the total number of spins), one real vector of N_c components and one integer vector of N_c components. All vectors are initially set to 0.

¹The Ising model does not possess a Hamiltonian dynamics and must be simulated by a Monte Carlo simulation, but the method described here can be applied to all sorts of dynamics, Monte Carlo or Molecular Dynamics.
• At t = 0, the spin configuration is stored in the first N-component vector; the scalar product \sum_i S_i(t_0) S_i(t_0) is computed and added to the first component of the real N_c-component vector.
• At t = 1, the spin configuration is stored in the second N-component vector; the scalar products \sum_i S_i(t_1) S_i(t_1) and \sum_i S_i(t_0) S_i(t_1) are computed and added to the first and second components, respectively, of the real N_c-component vector.
• For t = k with k < N_c, the same procedure is iterated and k + 1 scalar products are computed and added to the first k + 1 components of the real N_c-component vector.
• When t = N_c − 1, the first N-component vector is erased and replaced with the new configuration. N_c scalar products can be computed and added to the N_c-component vector. For t = N_c, the second vector is replaced by the new configuration, and so on.
• This procedure is stopped at time t = T_f. Because the numbers of configurations involved in averaging the different components of the correlation function are not equal, the integer vector of N_c components is used as a histogram of the configuration counts.
• Finally, Eq. (4.26) is used for calculating the correlation function.
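The procedure above amounts to keeping a circular buffer of the last N_c configurations; a compact sketch (function and variable names are ours):

```python
import numpy as np

def spin_autocorrelation(configs, nc):
    """On-the-fly estimate of C(t), Eq. (4.26), with a circular buffer.

    `configs` is an iterable of spin configurations (arrays of +/-1) taken at
    equally spaced times; `nc` is the number of correlation channels. A counter
    per channel plays the role of the integer histogram vector of the text.
    """
    buf = [None] * nc              # circular buffer of the last nc configurations
    acc = np.zeros(nc)             # accumulated scalar products per time lag
    cnt = np.zeros(nc, dtype=int)  # number of contributions per time lag
    for step, s in enumerate(configs):
        buf[step % nc] = s.copy()
        for lag in range(min(step + 1, nc)):
            s_old = buf[(step - lag) % nc]
            acc[lag] += np.dot(s_old, s) / s.size
            cnt[lag] += 1
    return acc / cnt
```

The per-lag counters implement the remark that the different components of C(t) are averaged over unequal numbers of configuration pairs.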
A last remark concerns the delicate choice of the time step as well as the number of components of the correlation function. The correlation function is calculated over a duration equal to (N_c − 1)∆t_c, which is bounded by

\tau_{eq} \ll N_c \Delta t_c \ll T_{sim} \qquad (4.27)

Indeed, the configuration averaging requires that N_c ∆t_c be much smaller than the total simulation time, but much larger than the equilibrium relaxation time. The correlation time step ∆t_c, which is a free parameter, is therefore chosen as a multiple of the time step of the simulation.
4.3.4 Linear response theory: results and transport coefficients
When an equilibrium system is slightly perturbed by an external field, linear response theory provides the following result: the response of the system and its return to equilibrium are directly related to the ability of the equilibrium system (namely, in the absence of the external field) to respond to fluctuations (Onsager's regression hypothesis). The derivation of the relation between response functions and fluctuation correlations is given in Appendix A.

In a simulation, it is easier to calculate a correlation function than a linear response function, for several reasons: i) the computation of a correlation function is done at equilibrium; practically, the correlation function is obtained simultaneously with the other thermodynamic quantities along the simulation. ii) For computing response functions, it is necessary to perform a different simulation for each response function. Moreover, to obtain the linear part of the response function (the response of a quantity B(t) to an external field ∆F), one needs to ensure that the ratio ⟨∆B(t)⟩/∆F is independent of ∆F: this constraint leads one to choose a small value of ∆F, but if the perturbing field is too weak, the function ⟨∆B(t)⟩ will be small and could be of the same order of magnitude as the statistical fluctuations of the simulation. In the last chapter of these lecture notes, we will introduce a recent method allowing one to calculate response functions without perturbing the system; however, this method is restricted to Monte Carlo simulations. For Molecular Dynamics, fluctuations grow very rapidly with time².

In summary, by integrating the correlation function C(t) over time, one obtains the corresponding susceptibility of the system.
4.4 Space-time correlation functions
4.4.1 Introduction
To go further in the study of correlations, one can follow them both in space and time. This information is richer, but the price to pay is the calculation of a correlation function of at least two variables. We introduce these functions because they are also available in experiments, essentially from neutron scattering.
4.4.2 Van Hove function
Let us consider the density correlation function, which depends both on space and time,

\rho G(\mathbf{r}, \mathbf{r}'; t) = \langle \rho(\mathbf{r}' + \mathbf{r}, t)\, \rho(\mathbf{r}', 0) \rangle \qquad (4.28)

This function can be expressed in terms of the microscopic densities as

\rho G(\mathbf{r}, \mathbf{r}'; t) = \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \delta(\mathbf{r}' + \mathbf{r} - \mathbf{r}_i(t))\, \delta(\mathbf{r}' - \mathbf{r}_j(0)) \right\rangle \qquad (4.29)
For a homogeneous system, G(\mathbf{r}, \mathbf{r}'; t) only depends on the relative distance. Integrating over the volume, one obtains

G(\mathbf{r}, t) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \delta(\mathbf{r} - \mathbf{r}_i(t) + \mathbf{r}_j(0)) \right\rangle \qquad (4.30)
²There exist cases where the response function is easier to calculate than the correlation function, for instance the viscosity of liquids.
At t = 0, the Van Hove function G(\mathbf{r}, t) simplifies and one obtains

G(\mathbf{r}, 0) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \sum_{j=1}^{N} \delta(\mathbf{r} + \mathbf{r}_i(0) - \mathbf{r}_j(0)) \right\rangle \qquad (4.31)
= \delta(\mathbf{r}) + \rho g(\mathbf{r}) \qquad (4.32)
Apart from a singularity at the origin, the Van Hove function at t = 0 is proportional to the pair distribution function g(r). One can separate this function into two parts, self and distinct,

G(\mathbf{r}, t) = G_s(\mathbf{r}, t) + G_d(\mathbf{r}, t) \qquad (4.33)

with

G_s(\mathbf{r}, t) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \delta(\mathbf{r} + \mathbf{r}_i(0) - \mathbf{r}_i(t)) \right\rangle \qquad (4.34)

G_d(\mathbf{r}, t) = \frac{1}{N} \left\langle \sum_{i=1}^{N} \sum_{j=1, j\neq i}^{N} \delta(\mathbf{r} + \mathbf{r}_i(0) - \mathbf{r}_j(t)) \right\rangle \qquad (4.35)
These correlation functions have a simple physical interpretation: the Van Hove function is the probability density of finding a particle i in the vicinity of r at time t knowing that a particle j was in the vicinity of the origin at time t = 0. G_s(r, t) is the probability density of finding a particle i at time t knowing that this particle was at the origin at time t = 0; finally, G_d(r, t) corresponds to the probability density of finding a particle j, different from i, at time t knowing that particle i was at the origin at time t = 0.
The normalization of these two functions reads

\int d\mathbf{r}\, G_s(\mathbf{r}, t) = 1 \qquad (4.36)

which corresponds to the certainty that the particle is somewhere in the volume at any time (namely, particle conservation). Similarly, one has

\int d\mathbf{r}\, G_d(\mathbf{r}, t) = N - 1 \qquad (4.37)

with the following physical meaning: by integrating over space, the distinct correlation function counts the remaining N − 1 particles.
In the long-time limit, the system loses memory of the initial configuration and the correlation functions become independent of the distance r:

\lim_{r\to\infty} G_s(\mathbf{r}, t) = \lim_{t\to\infty} G_s(\mathbf{r}, t) \simeq \frac{1}{V} \simeq 0 \qquad (4.38)

\lim_{r\to\infty} G_d(\mathbf{r}, t) = \lim_{t\to\infty} G_d(\mathbf{r}, t) \simeq \frac{N-1}{V} \simeq \rho \qquad (4.39)
4.4.3 Intermediate scattering function
Instead of considering correlations in real space, one can work in reciprocal space, namely with Fourier components. Let us define the so-called intermediate scattering function, which is the spatial Fourier transform of the Van Hove function,

F(\mathbf{k}, t) = \int d\mathbf{r}\, G(\mathbf{r}, t)\, e^{-i\mathbf{k}\cdot\mathbf{r}} \qquad (4.40)

In a similar manner, one defines the self and distinct parts of the function

F_s(\mathbf{k}, t) = \int d\mathbf{r}\, G_s(\mathbf{r}, t)\, e^{-i\mathbf{k}\cdot\mathbf{r}} \qquad (4.41)

F_d(\mathbf{k}, t) = \int d\mathbf{r}\, G_d(\mathbf{r}, t)\, e^{-i\mathbf{k}\cdot\mathbf{r}} \qquad (4.42)
The physical interest of this splitting is that coherent and incoherent neutron scattering give access to the total, F(k, t), and self, F_s(k, t), intermediate scattering functions, respectively.
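The self part F_s(k, t) is straightforward to estimate from a stored trajectory; here is a sketch averaging over particles and over three orthogonal wave vectors of modulus k (function names are ours):

```python
import numpy as np

def self_isf(traj, k):
    """Self intermediate scattering function F_s(k,t), the spatial Fourier
    transform of G_s(r,t), estimated from a stored trajectory.

    `traj` has shape (ntimes, N, 3); the time origin is the first stored
    configuration, and the average runs over particles and over an
    isotropic triplet of wave vectors of modulus k.
    """
    kvecs = k * np.array([[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0],
                          [0.0, 0.0, 1.0]])
    disp = traj - traj[0]                        # displacements r_i(t) - r_i(0)
    phases = np.einsum('tnd,kd->tnk', disp, kvecs)
    return np.cos(phases).mean(axis=(1, 2))      # Re <exp(-i k.(r(t)-r(0)))>
```

By construction F_s(k, 0) = 1, and for diffusing particles the function decays towards zero at long times, the signature of structural relaxation at the lengthscale 2π/k.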
4.4.4 Dynamic structure factor
The dynamic structure factor is defined as the time Fourier transform of the intermediate scattering function,

S(\mathbf{k}, \omega) = \int dt\, F(\mathbf{k}, t)\, e^{i\omega t} \qquad (4.43)

Obviously, one has the following sum rule: integrating the dynamic structure factor over ω gives the static structure factor,

\int d\omega\, S(\mathbf{k}, \omega) = S(\mathbf{k}) \qquad (4.44)
4.5 Dynamic heterogeneities
4.5.1 Introduction
In the previous sections, we have seen that accurate information on how the system evolves is obtained by monitoring space and time correlation functions. However, there exist important physical situations where the relevant information is not provided by the previously defined functions. Indeed, many systems have relaxation times growing spectacularly under a rather small variation of a control parameter, typically the temperature. This corresponds to glassy behavior, where no significant change in the structure appears (spatial correlation functions do not change when the control parameter is changed) and no phase transition occurs. Numerical simulations as well as experimental results have shown that the particles of the system, although identical, behave very differently at the same time: while most of the particles are characterized by an extremely slow evolution, a small fraction evolves more rapidly. In order to know whether these regions have a collective behavior, one needs to build more suitable correlation functions. Indeed, the previous functions are irrelevant for characterizing this heterogeneous dynamics.
4.5.2 4-point correlation function
The heterogeneities cannot be characterized by a two-point correlation
function; one needs a higher-order correlation function. Let us define the
4-point correlation function
C_4(r, t) = ⟨(1/V) ∫ dr′ δρ(r′, 0) δρ(r′, t) δρ(r′ + r, 0) δρ(r′ + r, t)⟩
− ⟨(1/V) ∫ dr′ δρ(r′, 0) δρ(r′, t)⟩ ⟨(1/V) ∫ dr′ δρ(r′ + r, 0) δρ(r′ + r, t)⟩ (4.45)
where δρ(r, t) is the density fluctuation at position r and time t. The
physical meaning of this function is as follows: when a fluctuation occurs at
r′ at time t = 0, how long will this heterogeneity survive and what is its
spatial extension?
4.5.3 4-point susceptibility and dynamic correlation length
In order to measure the strength of the correlation defined above, one
performs the spatial integration of the 4-point correlation function. This
defines the 4-point susceptibility
χ_4(t) = ∫ dr C_4(r, t) (4.46)
This function can be reexpressed as
χ_4(t) = N(⟨C²(t)⟩ − ⟨C(t)⟩²) (4.47)
where one defines
C(t) = (1/V) ∫ dr δρ(r, t) δρ(r, 0) (4.48)
as the (unaveraged) correlation function. For glassy systems, this 4-point
susceptibility displays a maximum at the typical relaxation time. Writing
this quantity as an algebraic function of a length ξ(t), the latter is
interpreted as a dynamic correlation length which characterizes the spatial
heterogeneities.
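Eq. (4.47) makes χ_4(t) easy to estimate in practice: it is N times the variance, across independent runs, of the unaveraged correlation C(t). The sketch below is a minimal illustration with a hypothetical input array, not code from these notes.

```python
import numpy as np

def chi4(C_samples, N):
    """Four-point susceptibility from samples of the unaveraged correlation C(t).

    C_samples: array of shape (n_runs, n_times), one C(t) curve per
    independent run (hypothetical input); N: number of particles.
    Implements chi_4(t) = N ( <C(t)^2> - <C(t)>^2 ), Eq. (4.47).
    """
    C_samples = np.asarray(C_samples, dtype=float)
    return N * (np.mean(C_samples**2, axis=0) - np.mean(C_samples, axis=0)**2)
```

In a glassy system, the curve returned by such an estimator peaks at the typical relaxation time, as stated above.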
4.6 Conclusion
Correlation functions are tools allowing for a detailed investigation of the
spatial and temporal behavior of equilibrium systems. As seen above with the
4-point functions, collective properties of systems can be revealed by
building ad hoc correlation functions, even when thermodynamic quantities as
well as two-point correlation functions do not change.
4.7 Exercises
4.7.1 “The structure factor in all its states!”
The characterization of spatial correlations can be done through the static
structure factor. We propose to study how particle correlations modify the
shape of the static structure factor in different physical situations.
Consider N particles placed within a simulation box of volume V. One defines
the microscopic density as
ρ(r) = Σ_{i=1}^{N} δ(r − r_i) (4.49)
where r_i denotes the position of particle i. One defines the structure
factor as
S(k) = ⟨ρ̃(k) ρ̃(−k)⟩ / N (4.50)
where ρ̃(k) is the Fourier transform of the microscopic density and the
brackets ⟨...⟩ denote the average over the configuration ensemble of the
system.
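As a numerical aside (not part of the exercise), Eq. (4.50) can be evaluated directly for a single configuration; the ensemble average then amounts to averaging such estimates over many configurations. The function below and its inputs (`positions`, `kvecs`) are hypothetical illustrations.

```python
import numpy as np

def structure_factor(positions, kvecs):
    """Direct estimate of S(k) = <rho~(k) rho~(-k)> / N for one configuration.

    positions: (N, d) array of particle coordinates; kvecs: (M, d) array of
    wave vectors (for a cubic box of side L with periodic boundary
    conditions, components must be multiples of 2*pi/L).
    """
    positions = np.asarray(positions, dtype=float)
    N = positions.shape[0]
    # rho~(k) = sum_i exp(-i k . r_i)
    rho_k = np.exp(-1j * positions @ np.asarray(kvecs, float).T).sum(axis=0)
    return (rho_k * rho_k.conj()).real / N
```

For a perfect lattice this estimator gives S(k) = N at the reciprocal-lattice vectors and vanishes inside the Brillouin zone, in line with Q. 4.7.1-5 below.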
♣ Q. 4.7.1-1 Express the structure factor S(k) as a function of terms of the
form ⟨e^{−ik·r_ij}⟩, where r_ij = r_i − r_j with i ≠ j.
♣ Q. 4.7.1-2 Show that S(k) → 1 when k → +∞.
♣ Q. 4.7.1-3 The simulation box is cubic with linear dimension L and periodic
boundary conditions. What is the minimum modulus of the wave vectors k
available in the simulation?
♣ Q. 4.7.1-4 Consider a lattice-gas model defined on a cubic lattice of
spacing a (with periodic boundary conditions), where particles occupy lattice
nodes. Show that there is a maximum modulus of wave vectors. Count all wave
vectors available in the simulation.
One now considers a one-dimensional lattice of spacing a and N nodes, where
each site is occupied by a particle.
♣ Q. 4.7.1-5 Calculate ρ̃(k) and infer the structure factor associated with
this configuration. By taking the thermodynamic limit (N → ∞), show that the
structure factor is equal to zero in the Brillouin zone (−π/a < k < π/a).
Each particle is now split into two identical particles shifted symmetrically
on each side of the lattice node by a random distance u with probability
distribution p(u). The microscopic density of the system then becomes
ρ_d(r) = Σ_{i=1}^{N} (δ(r − r_i − u_i) + δ(r − r_i + u_i)) (4.51)
and the structure factor is given by
S_d(k) = ⟨ρ̃_d(k) ρ̃_d(−k)⟩ / (2N). (4.52)
♣ Q. 4.7.1-6 Show that S_d(k) = 2⟨(cos(ku))²⟩ + 2⟨cos(ku)⟩² (S(k) − 1).
♣ Q. 4.7.1-7 Assuming that the moments ⟨u²⟩ and ⟨u⁴⟩ are finite, calculate
the expansion of S_d(k) to order k⁴. Configurations corresponding to a
structure factor which goes to zero when k → 0 are called “super-uniform”.
Justify this definition.
♣ Q. 4.7.1-8 Recent simulations have shown that the structure factor of dense
granular matter behaves as k at small wave vectors. Why does the simulation
need a large number of particles (10⁶)?
We now consider the structure factor of a simple liquid in the vicinity of
the critical point. The critical correlation function is given by the scaling
law g(r) ∼ r^{−(1+η)} for r > r_c.
♣ Q. 4.7.1-9 By using the relation between the correlation function and the
structure factor, show that the latter diverges at small wave vectors as
k^{−A}, where A is an exponent to be determined (the integral
∫_0^∞ du u^{−η} sin(u) is finite).
4.7.2 Van Hove function and intermediate scattering function
The goal of this problem is to understand how the behavior of the self Van
Hove function G_s(r, t) (or of its spatial Fourier transform, the
intermediate scattering function F_s(k, t)) changes when a system goes from a
usual dense phase to a glassy phase. Without loss of generality, one writes
the self Van Hove correlation function as
G_s(r, t) = ⟨δ(r + r_i(0) − r_i(t))⟩, where i is a tagged particle.
One first considers the long time and long distance behavior of the self Van
Hove function.
♣ Q. 4.7.2-1 Write the conservation of mass of tagged particles, which
relates the local density ρ^{(s)}(r, t) and the particle flux j^{(s)}(r, t).
♣ Q. 4.7.2-2 Assuming that the particles undergo a diffusive motion described
by the relation j^{(s)}(r, t) = −D∇ρ^{(s)}(r, t), what is the time-dependent
equation satisfied by ρ^{(s)}(r, t)?
♣ Q. 4.7.2-3 Defining the Fourier transform of ρ^{(s)}(r, t) as
ρ̂^{(s)}(k, t) = ∫ d³r ρ^{(s)}(r, t) e^{−ik·r}, obtain the differential
equation satisfied by ρ̂^{(s)}(k, t) and solve it. The initial condition is
denoted ρ̂^{(s)}(k, 0).
♣ Q. 4.7.2-4 One admits that F_s(k, t) = ⟨ρ̂^{(s)}(k, t) ρ̂^{(s)}(−k, 0)⟩,
where the bracket denotes the average over initial conditions. Infer from the
above question that F_s(k, t) ∼ exp(−Dk²t).
♣ Q. 4.7.2-5 By using the inverse Fourier transform, obtain the expression of
G_s(r, t).
One now considers the situation where the dynamics is glassy, one of whose
characteristics is the presence of heterogeneities. Experiments and numerical
simulations reveal that the long distance decay of the Van Hove correlation
function is slower than the usual Gaussian and close to an exponential. One
suggests a simple model for the particle dynamics. Neglecting all vibrational
modes, one assumes that a particle undergoes stochastic jumps whose time
distribution is given by φ(t) = τ^{−1} e^{−t/τ} (one also assumes that the
probability distribution associated with the first jump after the initial
time is φ(t) as well). The self Van Hove function then reads
G_s(r, t) = Σ_{n=0}^{∞} p(n, t) f(n, r) (4.53)
where p(n, t) is the probability of having jumped n times at time t and
f(n, r) the probability of moving a distance r after n jumps.
♣ Q. 4.7.2-6 Show that p(0, t) = 1 − ∫_0^t dt′ φ(t′). Calculate p(0, t)
explicitly.
♣ Q. 4.7.2-7 Find the relation between p(n, t) and p(n − 1, t).
♣ Q. 4.7.2-8 Same question for f(n, r) and f(n − 1, r).
♣ Q. 4.7.2-9 Show that f̂(n, k) = f̂(k)ⁿ and that p̂(n, s) = p̂(0, s) φ̂(s)ⁿ.
One defines the Fourier–Laplace transform as
G̃(k, s) = ∫ d³r e^{−ik·r} ∫_0^∞ dt e^{−st} G_s(r, t) (4.54)
♣ Q. 4.7.2-10 Show that
G̃(k, s) = p̂(0, s) / (1 − φ̂(s) f̂(k)) (4.55)
♣ Q. 4.7.2-11 Let us denote G̃_0(k, s) = p̂(0, s). Calculate G_0(r, t). What
is the physical meaning of this quantity?
♠ Q. 4.7.2-12 Show that
G(r, t) − G_0(r, t) = e^{−t/τ} / (2π)³ ∫ d³k (e^{t f̂(k)/τ} − 1) e^{ik·r} (4.56)
♠ Q. 4.7.2-13 If f(r) = (2πd²)^{−3/2} e^{−r²/(2d²)}, calculate f̂(k). One
considers the instant t = τ. Expanding the exponential of Eq. (4.56) and
performing the inverse Fourier transform, show that
G(r, τ) − G_0(r, τ) ∼ e^{−r/d} (4.57)
Give a physical interpretation of this result.
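The jump model above is easy to simulate directly (this is not part of the exercise). The sketch below uses a hypothetical function name and assumes, as in Q. 4.7.2-13, isotropic Gaussian jump lengths of standard deviation d per component; each particle makes a Poisson-distributed number of jumps of mean t/τ. Histogramming the resulting displacements at t = τ illustrates the broad, non-Gaussian tail discussed in the exercise.

```python
import numpy as np

def jump_model_displacements(n_particles, t, tau, d, rng):
    """Sample final displacements for the stochastic jump model:
    each particle makes Poisson(t/tau) jumps, each jump being an isotropic
    Gaussian step of standard deviation d per component (a sketch of the
    exercise's toy model, not of any real liquid)."""
    n_jumps = rng.poisson(t / tau, size=n_particles)
    disp = np.zeros((n_particles, 3))
    for i, n in enumerate(n_jumps):
        if n > 0:
            disp[i] = rng.normal(scale=d, size=(n, 3)).sum(axis=0)
    return disp
```

Note that a fraction e^{−t/τ} of the particles has not jumped at all, which is precisely the singular G_0 contribution of Q. 4.7.2-11.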
Chapter 5
Phase transitions
Contents
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . 81
5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . 87
5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . 87
5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . 88
5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . 90
5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . 90
5.6 Reweighting Method . . . . . . . . . . . . . . . . . . . 93
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.1 Finite size scaling for continuous transitions: logarithm
corrections . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.2 Some aspects of the finite size scaling: first-order tran-
sition . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
5.1 Introduction
A very interesting application of simulations is the study of phase
transitions, even though they can occur only in infinite systems (namely in
the thermodynamic limit). This apparent paradox can be resolved as follows:
in a continuous phase transition (unlike a first-order transition),
fluctuations occur at increasing length scales as the temperature approaches
the critical point. In a simulation box, when the fluctuations are smaller
than the linear dimension of the simulation cell, the system does not realize
that the simulation cell is finite and its behavior (and the thermodynamic
quantities) is close to that of an infinite system. When the fluctuations are
comparable to the system size, the behavior differs from that of the infinite
system close to the critical temperature, but the finite-size behavior is
similar to that of the infinite system when the temperature is far from the
critical point.
Finite size scaling analysis, which involves simulations with different
system sizes¹, provides an efficient tool for determining the critical
exponents of the system.
Many simulations investigating critical phenomena (continuous phase
transitions) have been performed with lattice models; studies of continuous
systems are less common. The theory of critical phenomena shows that phase
transitions are characterized by scaling laws (algebraic laws) whose
exponents are universal quantities independent of the underlying lattice.
This is due to the fact that a continuous phase transition is associated with
fluctuations of all length scales at the critical point. Therefore,
microscopic details are irrelevant to the universal behavior of the system,
but the space dimension of the system as well as the symmetry group of the
order parameter are essential.
Phase transitions are characterized by critical exponents which define
universality classes. Studies can therefore be carried out accurately on
simple models. For example, the liquid-gas transition for liquids interacting
with a pairwise potential belongs to the same universality class as the
para-ferromagnetic transition of the Ising model.
As we will see below, simulations must be performed with different sizes in
order to determine the critical exponents. If we compare lattice and
continuous systems, the former require less simulation time. Indeed, varying
the size of a continuous system over several orders of magnitude is a
challenging problem, whereas it is possible with lattice systems.
Finally, let us note that the critical temperature obtained in simulations
varies weakly with the system size in continuous systems, whereas a strong
dependence can be observed in lattice models. Consequently, with a modest
size, a rather good estimate of the critical temperature can be obtained from
a simulation.
¹ To obtain good accuracy for the critical exponents, one must choose large
simulation cells (the accuracy of the critical exponents is restricted by the
available computer power). Indeed, the scaling laws used for the calculation
of the critical exponents correspond to asymptotic behaviors, valid only for
large simulation cells. To improve the accuracy of the critical exponents,
corrections to scaling can be included, which introduces additional
parameters to be estimated.
5.2 Scaling laws
5.2.1 Critical exponents
Scaling law assumptions were made in the sixties and were subsequently
derived with renormalization group theory, which provides a general framework
for studying critical phenomena. A detailed presentation of this theory goes
beyond the scope of these lecture notes, and we only introduce the basic
concepts useful for studying phase transitions in simulations.
For the sake of simplicity, we consider the Ising model, whose Hamiltonian is
given by
H = −J Σ_{⟨i,j⟩} S_i S_j − H Σ_{i=1}^{N} S_i (5.1)
where ⟨i, j⟩ denotes a summation over nearest-neighbor sites, J is the
ferromagnetic interaction strength, and H a uniform external field.
In the vicinity of the critical point (namely, for the Ising model, H = 0,
T_c/J = 2.2691... in two dimensions), thermodynamic quantities as well as
spatial correlation functions behave according to scaling laws. The
magnetization per spin is defined by
m_N(t, h) = (1/N) Σ_{i=1}^{N} ⟨S_i⟩, (5.2)
where t = (T − T_c)/T_c is the dimensionless temperature and h = H/k_B T the
dimensionless external field.
The magnetization (Eq. (5.2)) in the thermodynamic limit is defined as
m(t, h) = lim_{N→∞} m_N(t, h). (5.3)
In the absence of an external field, the scaling law is
m(t, h = 0) = 0 for t > 0, and m(t, h = 0) = A|t|^β for t < 0, (5.4)
where the exponent β characterizes the spontaneous magnetization in the
ferromagnetic phase.
Similarly, along the critical isotherm, one has
m(t = 0, h) = −B|h|^{1/δ} for h < 0, and m(t = 0, h) = B|h|^{1/δ} for h > 0, (5.5)
where δ is the exponent of the magnetization in the presence of an external field.
The specific heat c_v is given by
c_v(t, h = 0) ∼ C|t|^{−α} for t < 0, and c_v(t, h = 0) ∼ C′|t|^{−α′} for t > 0, (5.6)
where α and α′ are the exponents associated with the specific heat.
Experimentally, one always observes that α = α′. The amplitude ratio C/C′ is
also a universal quantity. Similarly, one can define a pair of exponents for
each quantity for positive or negative dimensionless temperature, but the
exponents are always identical, and we only consider one exponent for each
quantity. The isothermal susceptibility in zero external field diverges at
the critical point as
χ_T(h = 0) ∼ |t|^{−γ}, (5.7)
where γ is the susceptibility exponent.
The spatial correlation function, denoted by g(r), behaves in the vicinity of
the critical point as
g(r) ∼ exp(−r/ξ) / r^{d−2+η}, (5.8)
where ξ is the correlation length, which behaves as
ξ ∼ |t|^{−ν}, (5.9)
where ν is the exponent associated with the correlation length.
This correlation function decays algebraically at the critical point as
g(r) ∼ 1 / r^{d−2+η}, (5.10)
where η is the exponent associated with the correlation function.
These six exponents (α, β, γ, δ, ν, η) are not independent! Assuming that the
free energy per unit volume and the pair correlation function obey scaling
forms, it can be shown that only two exponents are independent.
5.2.2 Scaling laws
Landau theory neglects the fluctuations of the order parameter (which are
relevant in the vicinity of the phase transition) and expresses the free
energy as an analytical function of the order parameter. The scaling theory
of phase transitions assumes that the fluctuations neglected in the
mean-field approach make a non-analytical contribution to the thermodynamic
quantities, e.g. to the free energy density, denoted f_s(t, h):
f_s(t, h) = |t|^{2−α} F_f^±(h / |t|^Δ) (5.11)
where F_f^± are functions defined below and above the critical temperature,
which approach a non-zero value when h → 0 and have an algebraic behavior
when the scaling variable goes to infinity,
F_f^±(x) ∼ x^{λ+1}, x → ∞. (5.12)
Properties of this non-analytical term give relations between the critical
exponents. The magnetization is obtained by taking the derivative of the free
energy density with respect to the external field h,
m(h, t) = −(1/k_B T) ∂f_s/∂h ∼ |t|^{2−α−Δ} F_f^{±′}(h / |t|^Δ). (5.13)
For h → 0, one identifies the exponents of the algebraic dependence in
temperature:
β = 2 − α − Δ. (5.14)
Similarly, by taking the second derivative of the free energy density with
respect to the field h, one obtains the isothermal susceptibility
χ_T(t, h) ∼ |t|^{2−α−2Δ} F_f^{±″}(h / |t|^Δ). (5.15)
For h → 0, one identifies the exponents of the algebraic dependence in
temperature:
−γ = 2 − α − 2Δ. (5.16)
By eliminating Δ, one obtains a first relation between the exponents α, β and
γ, called the Rushbrooke scaling law
α + 2β + γ = 2. (5.17)
This relation does not depend on the space dimension d. Moreover, one has
Δ = β + γ. (5.18)
Let us now consider the limit t → 0 at finite field h. One then obtains for
the magnetization, along the critical isotherm:
m(t, h) ∼ |t|^β (h / |t|^Δ)^λ (5.19)
∼ |t|^{β−Δλ} h^λ. (5.20)
In order to recover Eq. (5.5), the magnetization must stay finite when t → 0,
i.e. neither diverge nor vanish. This yields
β = Δλ. (5.21)
By identifying the exponents of h in Eqs. (5.5) and (5.20), one infers
λ = 1/δ. (5.22)
Eliminating Δ and λ in Eqs. (5.21), (5.22) and (5.18), one infers the
following relation
βδ = β + γ. (5.23)
The next two relations are inferred from the scaling form of the free energy
density and from the spatial correlation function g(r). By considering that
the relevant macroscopic length scale of the system is the correlation
length, the singular free energy density has the following expansion
f_s / (k_B T) ∼ ξ^{−d} (A + B_1 (l_1/ξ) + ...) (5.24)
where l_1 is a microscopic length scale. When t → 0, the subdominant
corrections can be neglected and one has
f_s / (k_B T) ∼ ξ^{−d} ∼ |t|^{νd}. (5.25)
Taking the second derivative of this equation with respect to the
temperature, one obtains for the specific heat
c_v = −T ∂²f_s/∂T² ∼ |t|^{νd−2}. (5.26)
By using Eq. (5.6), one infers the Josephson relation (also called the
hyperscaling relation),
2 − α = dν, (5.27)
which involves the space dimension. Knowing that the space integral of g(r)
is proportional to the susceptibility, one performs the integration of g(r)
over a volume whose linear dimension is ξ, which gives
∫_0^ξ d^d r g(r) = ∫_0^ξ d^d r exp(−r/ξ) / r^{d−2+η}. (5.28)
With the change of variable u = r/ξ in Eq. (5.28), this becomes
∫_0^ξ d^d r g(r) = ξ^{2−η} ∫_0^1 d^d u exp(−u) / u^{d−2+η}. (5.29)
On the right-hand side of Eq. (5.29), the integral has a finite value. By
using the relation between the correlation length ξ and the dimensionless
temperature t (Eq. (5.9)), one obtains
χ_T(h = 0) ∼ |t|^{−(2−η)ν}. (5.30)
Figure 5.1 – Spin configuration of the Ising model in two dimensions at high
temperature.
By considering Eq. (5.7), one infers
γ = (2 − η)ν. (5.31)
Finally, since there are four relations between the six critical exponents,
only two of them are independent².
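These four relations can be checked on the exactly known exponents of the two-dimensional Ising model (α = 0, β = 1/8, γ = 7/4, δ = 15, ν = 1, η = 1/4); the small Python snippet below merely verifies the arithmetic.

```python
# Check the four scaling relations on the exactly known exponents of the
# two-dimensional Ising model, with space dimension d = 2.
alpha, beta, gamma, delta, nu, eta, d = 0.0, 1 / 8, 7 / 4, 15.0, 1.0, 1 / 4, 2

assert alpha + 2 * beta + gamma == 2.0   # Rushbrooke, Eq. (5.17)
assert beta * delta == beta + gamma      # Eq. (5.23)
assert 2 - alpha == d * nu               # Josephson (hyperscaling), Eq. (5.27)
assert gamma == (2 - eta) * nu           # Eq. (5.31)
```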
A significant feature of the theory of critical phenomena is the existence of
upper and lower critical dimensions: at the upper critical dimension d_sup
and above, mean-field theory (up to some logarithmic subdominant corrections)
describes the critical phase transition. Below the lower critical dimension
d_inf, no phase transition can occur. For the Ising model, d_inf = 1 and
d_sup = 4. This implies that in three-dimensional Euclidean space, a
mean-field theory cannot describe accurately the para-ferromagnetic phase
transition, in particular the critical exponents. Because the liquid-gas
transition belongs to the universality class of the Ising model, a similar
conclusion applies to this phase transition.
After this brief survey of critical phenomena, we now apply scaling laws to
finite-size systems in order to develop a method which gives the critical
exponents of the phase transition, as well as non-universal quantities, in
the thermodynamic limit.
² This property does not apply to quenched disordered systems.
Figure 5.2 – Spin configuration of the Ising model in 2 dimensions close to the
critical temperature.
Figure 5.3 – Spin configuration of the Ising model in 2 dimensions below the
critical temperature.
5.3 Finite size scaling analysis
5.3.1 Specific heat
A close examination of the spin configurations in Figs. 5.1–5.3 illustrates
that a system close to the phase transition displays domains of spins of the
same sign. The size of the domains increases as the transition is approached.
Renormalization group theory showed that, close to the critical point, the
thermodynamic quantities of a finite system of linear size L, for a
dimensionless temperature t and a dimensionless field h, ..., are the same as
those of a system of size L/l for a dimensionless temperature t l^{y_t} and a
dimensionless field h l^{y_h}, ..., which gives
f_s(t, h, ..., L^{−1}) = l^{−d} f_s(t l^{y_t}, h l^{y_h}, ..., (L/l)^{−1}) (5.32)
where y_t and y_h are the exponents associated with the fields.
A phase transition occurs when all the arguments of f_s go to 0. In zero
field, one has
f_s(t, ..., L^{−1}) = |t|^{2−α} F_f^±(|t|^{−ν}/L). (5.33)
If the correlation length ξ is smaller than L, the system behaves like an
infinite system. But, in a simulation, the limit t → 0 can be taken before
L → ∞, and ξ can then become larger than L, the linear dimension of the
simulation cell; in other words, the argument of the scaling function F_f
(Eq. (5.33)) goes to infinity, which means that one moves away from the
critical point. Below and above the critical point of the infinite system,
within a vicinity which shrinks as the size of the simulation cell increases,
the system behaves like an infinite system, whereas in the region where the
correlation length is comparable to the size of the simulation cell, one
observes a size dependence (see Fig. 5.4). If we denote by F_c the scaling
function associated with the specific heat, which depends on the scaling
variable |t|^{−ν}/L, one has
c_v(t, L^{−1}) = |t|^{−α} F_c^±(|t|^{−ν}/L). (5.34)
Because |t|^{−α} goes to infinity when t → 0, F_c^±(x) must go to zero when x
goes to zero. Let us reexpress this function as
F_c^±(|t|^{−ν}/L) = (|t|^{−ν}/L)^{−κ} D^±(L|t|^ν) (5.35)
with D^±(0) finite. Since the specific heat does not diverge when |t| goes to
zero, one requires that
κ = α/ν, (5.36)
which gives for the specific heat
c_v(t, L^{−1}) = L^{α/ν} D(L|t|^ν). (5.37)
Figure 5.4 – Specific heat versus temperature of the two-dimensional Ising
model for different system sizes (L = 16, 24, 32, 48), where L is the linear
dimension of the lattice.
The function D goes to zero when the scaling variable is large and is always
finite and positive. D is a continuous function and displays a maximum for a
finite value of the scaling variable, denoted x_0. Therefore, for a
finite-size system, one obtains the following results:
• The maximum of the specific heat occurs at a temperature T_c(L) which is
shifted with respect to that of the infinite system:
T_c(L) − T_c ∼ L^{−1/ν}. (5.38)
• The maximum of the specific heat of a finite-size system of size L is given
by the scaling law
C_v(T_c(L), L^{−1}) ∼ L^{α/ν}. (5.39)
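Eq. (5.37) is the basis of the data-collapse technique: plotting c_v / L^{α/ν} against L|t|^ν should put the curves for all sizes on the single scaling function D. A minimal sketch of the rescaling is given below; the function name and the data arrays T and cv are hypothetical.

```python
import numpy as np

def collapse_specific_heat(T, cv, L, Tc, alpha, nu):
    """Rescale specific-heat data according to Eq. (5.37):
    c_v(t, 1/L) = L^(alpha/nu) D(L |t|^nu), so that plotting
    y = cv / L^(alpha/nu) against x = L * |t|^nu collapses all system
    sizes onto the single scaling function D (T, cv: hypothetical data)."""
    t = (np.asarray(T) - Tc) / Tc          # dimensionless temperature
    x = L * np.abs(t) ** nu                # scaling variable
    y = np.asarray(cv) / L ** (alpha / nu) # rescaled specific heat
    return x, y
```

In practice Tc, α and ν are adjusted until the curves for several L overlap, which is one way of extracting the exponents.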
5.3.2 Other quantities
Similar results can be obtained for the other thermodynamic quantities
available in simulations. For instance, let us consider the absolute value of
the magnetization
⟨|m|⟩ = (1/N) ⟨|Σ_{i=1}^{N} S_i|⟩ (5.40)
and the isothermal susceptibility
k_B T χ = N(⟨m²⟩ − ⟨m⟩²). (5.41)
It is possible to define a second susceptibility
k_B T χ′ = N(⟨m²⟩ − ⟨|m|⟩²). (5.42)
χ increases as N when T → 0, due to the existence of two peaks in the
magnetization distribution, and does not display a maximum at the transition
temperature. Conversely, χ′ goes to 0 when T → 0 and has a maximum at the
critical point. At high temperature, both susceptibilities are related in the
thermodynamic limit by χ′ = χ(1 − 2/π). At the critical point, both
susceptibilities χ and χ′ diverge with the same exponent, but χ′ has a
smaller amplitude.
Another quantity, useful in simulations, is the Binder parameter
U = 1 − ⟨m⁴⟩ / (3⟨m²⟩²). (5.43)
The reasoning leading to the scaling function of the specific heat can be
applied to the other thermodynamic quantities, and the finite-size scaling
laws are given by the relations
⟨|m(t, 0, L^{−1})|⟩ = L^{−β/ν} F_m^±(tL^{1/ν}) (5.44)
k_B T χ(t, 0, L^{−1}) = L^{γ/ν} F_χ^±(tL^{1/ν}) (5.45)
k_B T χ′(t, 0, L^{−1}) = L^{γ/ν} F_{χ′}^±(tL^{1/ν}) (5.46)
U(t, 0, L^{−1}) = F_U^±(tL^{1/ν}) (5.47)
where F_m^±, F_χ^±, F_{χ′}^±, and F_U^± are eight scaling functions (with
different maxima).
In practice, by plotting the Binder parameter as a function of temperature,
all curves U(t, 0, L^{−1}) intersect at the same abscissa (within statistical
errors), which determines the critical temperature of the system in the
thermodynamic limit. Once the transition temperature is obtained, one can
compute β/ν from ⟨|m|⟩, then γ/ν from χ (or χ′). By considering the maximum
of C_v or of other quantities, one derives the value of the exponent 1/ν.
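The Binder parameter of Eq. (5.43) is straightforward to estimate from a time series of magnetization measurements; the sketch below (hypothetical function name and input) shows the computation.

```python
import numpy as np

def binder_cumulant(m_samples):
    """Binder parameter U = 1 - <m^4> / (3 <m^2>^2), Eq. (5.43),
    estimated from a sequence of magnetization measurements of one run
    (hypothetical input array)."""
    m = np.asarray(m_samples, dtype=float)
    m2 = np.mean(m**2)
    m4 = np.mean(m**4)
    return 1.0 - m4 / (3.0 * m2**2)
```

In the ordered phase (m ≈ ±m_0) one finds U → 2/3, while for a Gaussian magnetization distribution (high temperature) ⟨m⁴⟩ = 3⟨m²⟩² and U → 0; the crossing of the curves for different L locates T_c, as described above.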
Note that in a simulation more information is available than the two
independent exponents and the critical temperature. Indeed, because of the
uncertainties and of the sub-dominant corrections to the scaling laws³,
comparing results allows one to determine the critical exponents more
accurately.
Monte Carlo simulation, combined with finite size scaling, is a powerful
method for calculating critical exponents. Moreover, one also obtains
non-universal quantities, like the critical temperature, that no analytical
treatment has so far been able to provide with sufficient accuracy. Indeed,
the renormalization group can only give critical exponents and is unable to
provide a reasonable value of the critical temperature. Only the functional,
or non-perturbative, approach can give quantitative results, but at a high
price in terms of computation time.
³ Scaling laws correspond to asymptotic behaviors for very large system
sizes: in order to increase the accuracy of the simulation, one needs to
incorporate subdominant corrections.
5.4 Critical slowing down
The nice method developed in the previous section relies on the idea that the
Metropolis algorithm remains efficient even in the vicinity of the critical
region. This is not the case: in order to reach equilibrium, fluctuations
must be sampled at all scales. Close to the critical point, large-scale
fluctuations are present and the relaxation times increase accordingly; these
times are related to the growing correlation length by the scaling relation
τ ∼ (ξ(t))^z (5.48)
where z is a new exponent, the so-called dynamical exponent. It typically
varies between 2 and 5 (for a Metropolis algorithm). For an infinite system,
knowing that ξ ∼ |t|^{−ν}, the relaxation time diverges as
τ ∼ |t|^{−νz}. (5.49)
For a finite-size system, the correlation length is bounded by the linear
system size L and the relaxation time increases as
τ ∼ L^z. (5.50)
Simulation times then become prohibitive!
5.5 Cluster algorithm
Let us recall that the Metropolis algorithm is a Markovian dynamics for
particle motion (or spin flips on a lattice). A reasonable acceptance ratio
is obtained by choosing a single particle (or spin) so that the product
|β∆E| remains relatively small. A major advance of the last twenty years
consists of methods where a large number of particles (or spins) are moved
(or flipped) simultaneously, while keeping a high acceptance ratio (or even a
ratio equal to 1) and still reaching an equilibrium state (Swendsen and Wang
[1987]).
We consider below the Ising model, but the method can be generalized to any
Hamiltonian with a local symmetry (for the Ising model, this corresponds to
the up/down symmetry).
Ideally, a global spin flip should have an acceptance ratio equal to 1. The
proposed method adds a new step to the Metropolis algorithm: from a given
spin configuration, one characterizes the configuration by means of bonds
between spins. The detailed balance equation (Eq. (2.18) of Chap. 2) is
modified as follows
P_gc(o) W(o → n) / [P_gc(n) W(n → o)] = exp(−β(U(n) − U(o))) (5.51)
where P_gc(o) is the probability of generating the bond configuration leading
from o to n (o and n denote the old and new configurations, respectively). If
one wishes to accept all new configurations (W(i → j) = 1, whatever the
configurations i and j), one must have the following ratio
P_gc(o) / P_gc(n) = exp(−β(U(n) − U(o))). (5.52)
In order to build such an algorithm, let us note that the energy is simply
expressed as
U = (N_a − N_p) J (5.53)
where N_p is the number of pairs of nearest-neighbor spins of the same sign
and N_a the number of pairs of nearest-neighbor spins of opposite signs.
This formula, beyond its simplicity, has the advantage of being exact
whatever the dimensionality of the system. The method relies on a rewriting
of the partition function given by Fortuin and Kasteleyn (1969), but this
idea was only exploited for simulation almost twenty years later, with the
work of Swendsen and Wang [1987].
For building the clusters, one defines bonds between nearest-neighbor spins
with the following rules:
1. If two nearest-neighbor spins have opposite signs, they are not connected.
2. If two nearest-neighbor spins have the same sign, they are connected with
probability p and disconnected with probability (1 − p).
This rule assumes that J is positive (ferromagnetic system). When J is
negative, spins of the same sign are always disconnected and spins of
opposite signs are connected with probability p and disconnected with
probability (1 − p). Consider a configuration with N_p pairs of spins of the
same sign; the probability of having n_c pairs connected (and consequently
n_b = N_p − n_c pairs disconnected) is given by the relation
P_gc(o) = p^{n_c} (1 − p)^{n_b}. (5.54)
Once the bonds are set, a cluster is a set of spins connected by at least one
bond. If one flips a cluster (o → n), the number of bonds between pairs of
spins of same and opposite signs is changed, and one has
N_p(n) = N_p(o) + ∆ (5.55)
and similarly
N_a(n) = N_a(o) − ∆. (5.56)
The energy of the new configuration U(n) is simply related to the energy of
the old configuration U(o) by the equation
U(n) = U(o) − 2J∆. (5.57)
Let us now consider the probability of the inverse process. One wishes to
generate the same cluster structure, but starting from a configuration with
N_p + ∆ parallel pairs and N_a − ∆ antiparallel pairs. The antiparallel bonds
are assumed broken. One requires the bond number n′_c to be equal to n_c,
because one needs the same number of connected bonds in order to generate the
same clusters. The difference with the previous configuration is the number
of bonds to break. Indeed, one obtains
N_p(n) = n′_c + n′_b (5.58)
N_p(o) = n_c + n_b (5.59)
N_p(n) = N_p(o) + ∆ (5.60)
= n_c + n_b + ∆, (5.61)
which immediately gives
n′_b = n_b + ∆. (5.62)
The probability of generating the new bond configuration from the old one is
given by
P_gc(n) = p^{n_c} (1 − p)^{n_b + ∆}. (5.63)
Inserting Eqs. (5.54) and (5.63) into Eq. (5.52), one obtains
(1 − p)^{−∆} = exp(2βJ∆). (5.64)
One can solve this equation for p, which gives
1 − p = exp(−2βJ), (5.65)
so that the probability p is finally given by
p = 1 − exp(−2βJ). (5.66)
Therefore, if the probability is chosen according to Eq. (5.66), the new configuration is always accepted!
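The bond rules and the cluster flip described above translate almost line by line into code. The following sketch implements one Swendsen-Wang sweep for the two-dimensional ferromagnetic Ising model with periodic boundary conditions; the lattice size, the seed, and the union-find cluster labeling are our own illustrative choices, not prescribed by the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def swendsen_wang_sweep(spins, beta, J=1.0):
    """One cluster update: activate bonds between parallel neighbors with
    p = 1 - exp(-2*beta*J) (Eq. 5.66), then flip each cluster with prob. 1/2."""
    L = spins.shape[0]
    p = 1.0 - np.exp(-2.0 * beta * J)
    parent = np.arange(L * L)

    def find(i):                        # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for x in range(L):                  # bonds to the right and bottom neighbors
        for y in range(L):
            i = x * L + y
            for nx, ny in ((x, (y + 1) % L), ((x + 1) % L, y)):
                if spins[x, y] == spins[nx, ny] and rng.random() < p:
                    ri, rj = find(i), find(nx * L + ny)
                    if ri != rj:
                        parent[ri] = rj

    flip = rng.random(L * L) < 0.5      # each cluster is flipped independently
    for x in range(L):
        for y in range(L):
            if flip[find(x * L + y)]:
                spins[x, y] *= -1
    return spins

L, beta = 16, 0.5                       # beta above the critical value ~0.4407
spins = rng.choice(np.array([-1, 1]), size=(L, L))
for _ in range(50):
    swendsen_wang_sweep(spins, beta)
print("magnetization per spin:", spins.mean())
```

Flipping every cluster with probability 1/2 (rather than flipping a single cluster) is the standard Swendsen-Wang choice; since the bond probability already satisfies detailed balance, no further acceptance test is needed.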
The virtue of this algorithm is not only its ideal acceptance rate: one can also show that, in the vicinity of the critical point, the critical slowing down drastically decreases. For example, the dynamical critical exponent of the two-dimensional Ising model is equal to 2.1 with the Metropolis algorithm, while it is 0.2 with a cluster algorithm. Far from the critical point, however, a cluster algorithm becomes quite similar to the Metropolis algorithm in terms of relaxation time.
More generally, Monte Carlo dynamics is accelerated when one finds a method able to flip large clusters corresponding to the underlying physics. In the Swendsen-Wang method, one selects spin sets which correspond to equilibrium fluctuations; conversely, in a Metropolis method, by choosing a spin randomly, one generally creates a defect in the bulk of a large fluctuation, and this defect must diffuse
from the bulk to the cluster surface in order for the system to reach equilibrium. Knowing that diffusion is like a random walk and that the typical cluster size is comparable to the correlation length, the relaxation time is then given by τ ∼ ξ², which gives for a finite size system τ ∼ L², and explains why the dynamical exponent is close to 2.
Note that the necessary ingredient for defining a cluster algorithm is the existence of a symmetry. C. Dress and W. Krauth Dress and Krauth [1995] generalized these methods to the case where the symmetry of the Hamiltonian has a geometric origin. For lattice Hamiltonians where the up-down symmetry does not exist, J. Heringa and H. Blote generalized the Dress and Krauth method by using the discrete symmetry of the lattice Heringa and Blote [1998a,b].
5.6 Reweighting Method
To obtain a complete phase diagram of a given system, a large number of simulations are required for scanning the entire range of temperatures, or the range of external fields. When one attempts to locate a phase transition precisely, the number of simulation runs can become prohibitive. The reweighting method is a powerful technique that estimates the thermodynamic properties of the system for a set of temperatures in the interval [T − ∆T, T + ∆T] (or for a set of external field values in [H − ∆H, H + ∆H]) by using simulation data obtained at a single temperature T (or field H).
The method relies on the following result. Consider the partition function Ferrenberg and Swendsen [1989, 1988], Ferrenberg et al. [1995],

    Z(β, N) = Σ_{α} exp(−βH({α}))    (5.67)
            = Σ_i g(i) exp(−βE(i))    (5.68)

where {α} denotes all available states, the index i runs over all available energies of the system, and g(i) denotes the density of states of energy E(i). For a given inverse temperature β′ = 1/(k_B T′), the partition function can be expressed as

    Z(β′, N) = Σ_i g(i) exp(−β′E(i))    (5.69)
             = Σ_i g(i) exp(−βE(i)) exp(−(β′ − β)E(i))    (5.70)
             = Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i)),    (5.71)

where ∆β = β′ − β.
Similarly, a thermal average, like the mean energy, is expressed as:

    ⟨E(β′, N)⟩ = [Σ_i g(i) E(i) exp(−β′E(i))] / [Σ_i g(i) exp(−β′E(i))]    (5.72)
               = [Σ_i g(i) E(i) exp(−βE(i)) exp(−(∆β)E(i))] / [Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i))].    (5.73)
In a Monte Carlo simulation, at equilibrium, one can monitor an energy histogram associated with visited configurations. This histogram, denoted D_β(i), is proportional to the density of states g(i) weighted by the Boltzmann factor,

    D_β(i) = C(β) g(i) exp(−βE(i))    (5.74)
where C(β) is a temperature dependent constant. One can easily see that Eq. (5.73) can be reexpressed as

    ⟨E(β′, N)⟩ = [Σ_i D_β(i) E(i) exp(−(∆β)E(i))] / [Σ_i D_β(i) exp(−(∆β)E(i))].    (5.75)
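Equation (5.75) is straightforward to implement: instead of binning a histogram, one can reweight the recorded energy samples directly. A minimal sketch follows, in which synthetic Gaussian samples stand in for the energies recorded during an actual simulation at β; the function name and all parameter values are our own choices.

```python
import numpy as np

def reweight_mean_energy(energies, beta, beta_prime):
    """Estimate <E>(beta') from samples recorded at beta, following Eq. (5.75)."""
    log_w = -(beta_prime - beta) * energies
    log_w -= log_w.max()              # avoid overflow in the exponentials
    w = np.exp(log_w)
    return np.sum(w * energies) / np.sum(w)

rng = np.random.default_rng(1)
# Stand-in for the energies visited at beta = 0.5 (bell-shaped histogram)
samples = rng.normal(loc=-100.0, scale=5.0, size=100_000)

for beta_prime in (0.48, 0.50, 0.52):
    print(beta_prime, reweight_mean_energy(samples, 0.5, beta_prime))
```

For a Gaussian histogram of variance σ², the reweighted mean shifts by −∆β σ², which makes the exponential growth of the statistical error with |∆β|, discussed below, easy to observe on this toy example.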
A naive (or optimistic) view would consist in believing that a single simulation at a given temperature could give the thermodynamic quantities over the entire range of temperatures⁴. As shown in the schematic of Fig. 5.5, when the histogram is reweighted at a temperature lower than the simulation temperature (β₁ > β), the energy histogram is shifted towards lower energies. Conversely, when the energy histogram is reweighted at a higher temperature (β₂ < β), it is shifted towards higher energies.
In a Monte Carlo simulation, the number of steps is finite, which means that
for a given temperature, states of energy significantly lower or larger than the
mean energy are weakly sampled, or not sampled at all. Reweighting leads to an
exponential increase of errors with the temperature difference. Practically, when
the reweighted histogram has a maximum at a distance larger than the standard
deviation of the simulation data, the accuracy of the reweighted histogram, and
of the thermodynamic quantities is poor.
We briefly summarize the results of a more refined analysis of the reweighting method. For a system with a continuous phase transition, the energy histogram D_β(i) is bell-shaped, and can be approximated by a Gaussian far from a phase transition. This is associated with the fact that, at equilibrium, the available energy states are close to the mean energy value. Far from the critical point, the histogram width corresponds to fluctuations and is then proportional to 1/√N, where N is the site number or the particle number. This shows that the efficiency of this reweighting
⁴ By using a system of small dimensions, one can obtain an estimate for a large range of temperatures.
Figure 5.5 – Energy histogram D_β(E) of a Monte Carlo simulation (middle curve) at the inverse temperature β; D_{β₁}(E) and D_{β₂}(E) are obtained by the reweighting method.
method decreases when the system size increases. In the critical region, the width of the energy histogram increases, and, provided the sampling is done correctly, an increasing range of temperatures can be estimated by the reweighting method.
In summary, this method is accurate in the vicinity of the temperature of the
simulation and allows one to build a complete phase diagram rapidly, because
the number of simulations to perform is quite limited. By combining several
energy histograms (multiple reweighting method), one can improve the estimates
of thermodynamic quantities for a large range of temperatures.
The principle of the method, developed here for the energy and the conjugate variable β, can be generalized to any pair of conjugate variables, e.g. the magnetization M and an external uniform field H.
5.7 Conclusion
Phase transitions can be studied by simulation, by using several results of statistical physics. Indeed, the renormalization group theory has provided finite size analysis, which permits the extrapolation of the critical exponents of the system to the thermodynamic limit by using simulation results for different system sizes. Cluster methods are required for decreasing the dynamical exponent associated with the phase transition. By combining simulation results with a reweighting method, one drastically reduces the number of simulations. While the accuracy of simulation results is comparable to that of theoretical methods for simple systems, like the Ising model, it is superior for most more complicated models.
5.8 Exercises
5.8.1 Finite size scaling for continuous transitions: logarithmic corrections
The goal of this problem is to recover the results of the finite size analysis with
a simple method.
♣ Q. 5.8.1-1 Give the scaling laws for an infinite system of the specific heat C(t), of the correlation length ξ(t) and of the susceptibility χ(t) as a function of the usual critical exponents α, ν, γ and of the dimensionless temperature t = (T − T_c)/T_c, where T_c is the critical temperature.
♣ Q. 5.8.1-2 In a simulation, explain why there are no divergences of quantities
previously defined.
Under an assumption of finite size analysis, one has the relation

    C_L(0)/C(t) = T_C(ξ_L(0)/ξ(t))    (5.76)

where T_C is a scaling function and C_L(0) is the maximum value of the specific heat obtained in a simulation of a finite system with a linear dimension L.
♣ Q. 5.8.1-3 By assuming that ξ_L(0) = L and that the ratio ξ_L(0)/ξ(t) is finite and independent of L, infer that t ∼ L^x, where x is an exponent to be determined.
♣ Q. 5.8.1-4 By using the above result and Eq. (5.76), show that

    C_L(0) ∼ L^y    (5.77)

where y is an exponent to calculate.
♣ Q. 5.8.1-5 By assuming that

    χ_L(0)/χ(t) = T_χ(ξ_L(0)/ξ(t))    (5.78)

show that

    χ_L(0) ∼ L^z    (5.79)

where z is an exponent to be determined.
Various physical situations occur where scaling laws must be modified to account for logarithmic corrections. The rest of the problem consists of obtaining relations between exponents associated with these corrections. We now assume that for an infinite system
    ξ(t) ∼ |t|^{−ν} |ln|t||^{ν̂}    (5.80)
    C(t) ∼ |t|^{−α} |ln|t||^{α̂}    (5.81)
    χ(t) ∼ |t|^{−γ} |ln|t||^{γ̂}    (5.82)
♣ Q. 5.8.1-6 By assuming that ξ_L(0) ∼ L(ln L)^{q̂} and that the finite size analysis, Eq. (5.76), is valid for the specific heat, show that

    C_L(0) ∼ L^x (ln L)^{ŷ}    (5.83)

where ŷ is expressed as a function of α, α̂, ν, ν̂ and of q̂.
Hint: Consider the equation y ln(y)^c = x^{−a} |ln(x)|^b; for y > 0 and going to infinity, the asymptotic solution is given by

    x ∼ y^{−1/a} ln(y)^{(b−c)/a}    (5.84)
A finite size analysis of the partition function (details are beyond the scope of this problem) shows that if α ≠ 0 the specific heat of a finite size system behaves as

    C_L(0) ∼ L^{−d+2/ν} (ln L)^{−2(ν̂−q̂)/ν}    (5.85)
♣ Q. 5.8.1-7 By using the results of question 5.8.1-6, show that one recovers the hyperscaling relation and an additional relation between α̂, q̂, ν̂, ν and α.
♣ Q. 5.8.1-8 By using the hyperscaling relation, show that the new relation can be expressed as a function of α̂, q̂, ν̂, and d.
When α = 0, the specific heat of a finite size system behaves as

    C_L(0) ∼ (ln L)^{1−2(ν̂−q̂)/ν}    (5.86)
♣ Q. 5.8.1-9 What is the relation between α̂, q̂, ν̂ and d?
Let us now consider the logarithmic corrections of the correlation function

    g(r, t) = [(ln r)^{η̂} / r^{d−2+η}] D(r/ξ(t))    (5.87)
♣ Q. 5.8.1-10 By calculating the susceptibility χ(t) from the correlation function, recover Fisher's law as well as a new relation between the exponents η̂, γ̂, ν̂ and η.
5.8.2 Some aspects of the finite size scaling: first-order
transition
Let us consider discontinuous (or first-order) transitions. One can show that the partition function of a finite size system of linear size L, with periodic boundary conditions, is expressed as

    Z = Σ_{i=1}^{k} exp(−β_c f_i(β_c) L^d)    (5.88)

where k is the number of coexisting phases (often k = 2, but not necessarily), and f_i(β_c) is the free energy per site of the ith phase in the thermodynamic limit. At the transition temperature, the free energies of each phase are equal.
♣ Q. 5.8.2-1 Assume only two coexisting phases. Expand the free energies to
first order in deviations from the transition temperature.
    βf_i(β) = β_c f_i(β_c) − β_c e_i t + O(t²)    (5.89)
where e_i is the energy per site of phase i and t = 1 − T_c/T. Moreover, one assumes that the partition function expressed as the sum of the free energies of the different phases remains valid in the vicinity of the transition temperature.
Express the partition function by keeping first-order terms in t. Infer that the two phases coexist only if tL^d ≪ 1.
One can show that the probability of finding an energy E per site is given, for large L, by

    P(E) = [K δ(E − e₁) + δ(E − e₂)] / (1 + K)    (5.90)

where K is a function of the temperature of the form K = K(tL^d); K(x) → ∞ when x → −∞ and K(x) → 0 when x → +∞.
♣ Q. 5.8.2-2 Knowing that the specific heat per site C(L, T) for a lattice of size L is given by

    C(L, T) L^{−d} = β² (⟨E²⟩ − ⟨E⟩²)    (5.91)

express C(L, T) as a function of e₁, e₂, β and K.
♣ Q. 5.8.2-3 Show that C(L, T) goes to zero for temperatures much larger and much smaller than the transition temperature.
♣ Q. 5.8.2-4 Determine the value of K where C(L, T) is maximal.
♣ Q. 5.8.2-5 Show that there exists a maximum of C(L, T).
♣ Q. 5.8.2-6 Determine the corresponding value of C(L, T), expressed as a function of e₁, e₂, β_c.
♣ Q. 5.8.2-7 For characterizing a first-order transition, one can consider the Binder parameter

    V₄(L, T) = 1 − ⟨E⁴⟩ / ⟨E²⟩².    (5.92)

Express V₄ as a function of e₁, e₂, β and K.
♣ Q. 5.8.2-8 Determine the value of K where V₄ is maximal. Calculate the corresponding value of V₄.
♣ Q. 5.8.2-9 What can one say about the specific heat and V₄ when e₁ is close to e₂?
Chapter 6
Monte Carlo Algorithms based
on the density of states
Contents
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . 102
6.2.1 Definition and physical meaning . . . . . . . . . . . . 102
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . 105
6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . 106
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6.1 Some properties of the Wang-Landau algorithm . . . . 108
6.6.2 Wang-Landau and statistical temperature algorithms . 109
6.1 Introduction
When a system undergoes a continuous phase transition, Monte Carlo methods
which consists of flipping (or moving) a single spin (or particle) converge more
slowly in the critical region than the cluster algorithm (when it exists). The
origin of this slowing down is associated with real physical phenomena, even
if Monte Carlo dynamics can never to be considered as a true dynamics. The
rapid increase of the relaxation time is related to the existence of fluctuations of
large sizes (or the presence of domains) which are difficult to sample by a local
dynamics: indeed, cluster algorithms accelerate the convergence in the critical
region, but not far away.
For a first-order transition, the convergence of Monte Carlo methods based on a local dynamics is poor because, close to the transition, there exist two or more metastable states: the barriers between metastable states increase with the system size, and the relaxation time has an exponential dependence on the system size which rapidly exceeds a reasonable computing time. Finally, the system is unable to explore the phase space and is trapped in a metastable region. The replica exchange method provides an efficient way of crossing metastable regions, but this method requires a fine tuning of many parameters (temperatures, exchange frequency between replicas).
New approaches have been proposed in the last decade, based on the computation of the density of states. These methods are general in the sense that the details of the dynamics are not imposed: only a condition similar to detailed balance is given. The goal of these new methods is to obtain the key quantity, which is the density of states.
We first see why knowledge of this quantity is essential for obtaining the thermodynamic quantities.
6.2 Density of states
6.2.1 Definition and physical meaning
For a classical system characterized by a Hamiltonian H, the microcanonical partition function reads (see Chapter 1)

    Z(E) = Σ_α δ(H − E)    (6.1)

where E denotes the energy of the system and the index α runs over all available configurations. The symbol δ denotes the Kronecker symbol when the summation is discrete; an analogous formula can be written for the microcanonical partition function when the energy spectrum is continuous: the sum is replaced with an integral and the Kronecker symbol with a δ distribution.
Equation (6.1) means that the microcanonical partition function is a sum over all microstates of total energy E, each state having an identical weight. Introducing the degeneracy of the system for a given energy E, the function g(E) is defined as follows: in the case of a continuous energy spectrum, g(E)dE is the number of states available to the system for a total energy between E and E + dE. The microcanonical partition function is then given as

    Z(E) = ∫ du g(u) δ(u − E)    (6.2)
         = g(E).    (6.3)

Therefore, the density of states is nothing else than the microcanonical partition function of the system.
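Since the density of states is a purely combinatorial object, it can be computed exactly for a very small system by enumerating all 2^N configurations. The brute-force sketch below does this for a 2 × 2 Ising model with periodic boundary conditions; it is only meant to make the definition of g(E) concrete, as the 2^N count limits this approach to tiny lattices.

```python
from itertools import product
from collections import Counter

def ising_energy(spins, L, J=1.0):
    """Energy of an L x L configuration (flat tuple) with periodic boundaries."""
    E = 0.0
    for x in range(L):
        for y in range(L):
            s = spins[x * L + y]
            E -= J * s * spins[x * L + (y + 1) % L]    # right neighbor
            E -= J * s * spins[((x + 1) % L) * L + y]  # bottom neighbor
    return E

L = 2
g = Counter()
for spins in product((-1, 1), repeat=L * L):
    g[ising_energy(spins, L)] += 1

print(sorted(g.items()))   # {E: g(E)} for the 2x2 lattice
print(sum(g.values()))     # total must equal 2^(L*L) = 16
```

Note that with L = 2 each pair of neighboring sites is connected by two bonds because of the periodic boundaries, so the ground states have energy −8 with the expected degeneracy of 2.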
Figure 6.1 – Logarithm of the density of states ln(g(E)) as a function of energy E for a two-dimensional Ising model on a square lattice of linear size L = 32.
6.2.2 Properties
The integral (or the sum) over all energies of the density of states gives the following sum rule

    ∫ dE g(E) = 𝒩    (6.4)

where 𝒩 is the total number of configurations (for a lattice model).
For an Ising model, this number 𝒩 is equal to 2^N, where N is the total number of spins of the system. This value is very large and grows exponentially with the system size. From a numerical point of view, one must use the logarithm of this function, ln(g(E)), to avoid overflow problems.
For simple models, one can obtain analytical expressions for small or moderate system sizes, but the combinatorics rapidly becomes a tedious task. Some results can nevertheless be obtained easily: for the Ising model, there are two ground states, corresponding to the fully ordered states. The first nonzero value of the density of states above the ground state is equal to 2N, which corresponds to the number of possibilities of flipping one spin on a lattice of N sites.
Figure 6.1 shows the logarithm of the density of states of the two-dimensional Ising model on a square lattice with a total number of sites equal to 1024. The
shape of this curve is the consequence of specific and universal features that we now detail: for all systems where the energy spectrum is discrete, the range of energy has a lower bound given by the ground state of the system. In the case of simple liquids, the density of states does not have an upper bound, because overlaps between particles interacting with a Lennard-Jones potential are always possible, which gives very large values of the energy; however, the equilibrium probability of having such configurations is very small.
The shape of the curve, which displays a maximum, is characteristic of all simple systems, but the symmetry about the vertical axis is a consequence of the up-down symmetry of the corresponding Hamiltonian. This symmetry is also present in thermodynamic quantities, like magnetization versus temperature (see Chapter 1).
For Monte Carlo algorithms based on the density of states, one can make a simple remark: with the Metropolis algorithm, the acceptance probability of a new configuration satisfies detailed balance, namely Π(i → j)P_eq(i) = Π(j → i)P_eq(j), where P_eq(j) is the probability of having the configuration j at equilibrium and Π(i → j) is the transition probability of going from state i to state j. One can reexpress detailed balance in the energy variable:

    Π(E → E′) exp(−βE) = Π(E′ → E) exp(−βE′)    (6.5)
Let us denote by H_β(E) the energy histogram associated with the successive visits of the simulation. Consequently, one has

    H_β(E) ∝ g(E) exp(−βE)    (6.6)

Assume now that one imposes the following balance

    Π(E → E′) exp(−βE + w(E)) = Π(E′ → E) exp(−βE′ + w(E′))    (6.7)

The energy histogram then becomes proportional to

    H_β(E) ∝ g(E) exp(−βE + w(E))    (6.8)

It immediately appears that, when w(E) ∼ βE − ln(g(E)), the energy histogram becomes flat and independent of the temperature.
The stochastic process then corresponds to a random walk in energy space. The convergence of this method relies on the fact that the interval available to the walker is bounded. For the Ising model, these bounds exist; for other models, it is possible to restrict the range of the energy interval in a simulation.
This very interesting result is at the origin of multicanonical methods (introduced for the study of first-order transitions). However, it is necessary to know the function w(E), namely to know the function g(E). In the multicanonical method, a guess for w(E) is provided by the density of states obtained (by simulation) with small system sizes. Extrapolating an expression for larger system sizes, the simulation is performed and, by checking the deviations from flatness of the energy histogram, one can correct the weight function in order to flatten the energy histogram in a second simulation run.
This kind of procedure amounts to suppressing the free energy barriers which exist in a Metropolis algorithm; they vanish here through the introduction of an appropriate weight function. The difficulty of this method is to obtain a good extrapolation of the weight function for large system sizes. Indeed, a bias in the weight function can introduce new free energy barriers, and simulations then converge slowly again.
To obtain a simulation method that computes the density of states without knowledge of a good initial guess, one needs to add an ingredient to the multicanonical algorithm. Such a method was proposed in 2001 by F. Wang and D.P. Landau Wang and Landau [2001a,b], which we detail below.
6.3 Wang-Landau algorithm
The basics of the algorithm are:
1. A change of the initial configuration is proposed (generally a local modification).
2. One calculates the energy E′ associated with the modified configuration. This configuration is accepted with a Metropolis rule Π(E → E′) = Min(1, g(E)/g(E′)); if not, the configuration is rejected and the old configuration is kept (and counted again, as in a usual Metropolis algorithm).
3. The accepted configuration modifies the density of states as follows: ln(g(E′)) ← ln(g(E′)) + ln(f).
This iterative scheme is repeated until the energy histogram is sufficiently flat: in practice, the original algorithm considers the histogram flat when each value of the histogram is larger than 80% of the mean value of the histogram.
f is called the modification factor. Because the density of states changes continuously in time, it is not a stationary quantity. The accuracy of the density of states depends on the statistics accumulated for each value of g(E). Because the density of states increases with time, the asymptotic accuracy is given by √(ln(f)). At this first stage, the precision is not sufficient for calculating the thermodynamic quantities.
Wang and Landau completed the algorithm as follows: one resets the energy histogram (but, obviously, not the estimate of the density of states) and restarts a new simulation as described above, but with a new modification factor equal to √f. Once this second stage is achieved, the density of states is refined with respect to the previous iteration, because the accuracy of the density is now given by √(ln(f)/2).
In order to have an accurate density of states, the simulation is continued by diminishing the modification factor through the relation

    f ← √f    (6.9)

When ln(f) ∼ 10⁻⁸, the density of states no longer evolves, the dynamics becomes a random walk in energy space, and the simulation is then stopped.
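The full procedure — the acceptance rule on g(E), the update ln(g(E)) ← ln(g(E)) + ln(f), the flatness check, and the reduction f ← √f — fits in a short program. The sketch below applies it to a small 2D Ising lattice; the lattice size, the sweep length, the relaxed stopping threshold (10⁻³ instead of the 10⁻⁸ quoted in the text) and the simplified flatness test over visited energies only are our own simplifications.

```python
import math
import random

random.seed(2)
L = 4
spins = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]

def neighbor_sum(x, y):
    return (spins[x][(y + 1) % L] + spins[x][(y - 1) % L]
            + spins[(x + 1) % L][y] + spins[(x - 1) % L][y])

# Total energy, counting each bond once (right and bottom neighbors)
E = -sum(spins[x][y] * (spins[x][(y + 1) % L] + spins[(x + 1) % L][y])
         for x in range(L) for y in range(L))

ln_g = {}     # running estimate of ln g(E)
hist = {}     # energy histogram of the current stage
ln_f = 1.0    # initial modification factor f = e

while ln_f > 1e-3:
    for _ in range(5000):
        x, y = random.randrange(L), random.randrange(L)
        E_new = E + 2 * spins[x][y] * neighbor_sum(x, y)
        # Acceptance rule: min(1, g(E)/g(E_new))
        if random.random() < math.exp(min(0.0, ln_g.get(E, 0.0)
                                          - ln_g.get(E_new, 0.0))):
            spins[x][y] *= -1
            E = E_new
        ln_g[E] = ln_g.get(E, 0.0) + ln_f   # update g at the visited energy
        hist[E] = hist.get(E, 0) + 1
    counts = list(hist.values())
    if min(counts) > 0.8 * (sum(counts) / len(counts)):  # 80% flatness rule
        hist = {}
        ln_f /= 2.0                                      # f <- sqrt(f)

print("energies visited:", sorted(ln_g), "final ln(f):", ln_f)
```

Since g(E) is only determined up to a constant (only differences of ln g enter the acceptance rule), the resulting ln_g table can be shifted at will before computing thermodynamic averages.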
Once the density of states is obtained, one can easily derive all thermodynamic quantities from this function alone. An additional virtue of this kind of simulation is that, if the required accuracy is not reached, the simulation can easily be extended, starting from the previous density of states, until a sufficient accuracy is reached. Contrary to other methods based on one or several values of the temperature, one can here calculate the thermodynamic quantities for all temperatures (corresponding to the energy scale used in the simulation).
Considering that the motion in energy space is purely diffusive (which is only true for the last iterations of the algorithm, namely for values of the modification factor close to one), the necessary computer time is proportional to (∆E)², where ∆E represents the energy interval. This means that the computer time increases as the square of the system size. The computer time of this kind of simulation is then larger than that of a Metropolis algorithm, but in the latter case one obtains the thermodynamic quantities for a single temperature (or, by using a reweighting method, a small interval of temperatures), whereas from a single Wang-Landau simulation one obtains these quantities for a large range of temperatures (as shown below), and the accuracy is much better.
To decrease the computing time, it is judicious to reduce the energy interval in the simulation, and possibly to perform several simulations over different (overlapping) energy ranges. In the latter case, the total duration of all simulations is divided by two compared to a unique simulation over the complete energy range. One can then match the densities of states of the different simulations. However, this task is subtle, even impossible, if the boundary between energy intervals falls in the region of a transition: errors occur in the density of states at the ends of each interval, forbidding a complete reconstruction of the full density of states.
Within these limitations, the method is very efficient for lattice systems with discrete energy spectra (additional problems occur with continuous energy spectra). In any case, free energy barriers disappear for first-order transitions, and this allows for an efficient exploration of phase space.
6.4 Thermodynamics recovered!
Once the density of states is obtained (up to a multiplicative constant), one can calculate the mean energy a posteriori by computing the ratio of the following integrals

    ⟨E⟩ = ∫ E exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE    (6.10)
The practical calculation of these integrals requires some caution: the arguments of the exponentials can be very large and lead to overflows. To avoid these problems, note that, for a given temperature (or β), the expression

    −βE + ln(g(E))    (6.11)

is a function which admits one or several (close) maxima and which decreases rapidly toward negative values when the energy is much smaller or much larger than these maxima. Without changing the result, one can multiply each integrand by exp(−Max(−βE + ln(g(E))))¹, so that the exponentials of each integral have an argument which is always negative or zero. One can then truncate the bounds of each integral where the argument of the exponential becomes smaller than, say, −100, and the computation of each integral is safe, without any overflows (or underflows).
The computation of the specific heat is done with the following formula

    C_v = k_B β² (⟨E²⟩ − ⟨E⟩²)    (6.12)

It is easy to see that the procedure sketched for the computation of the energy can be transposed identically to the computation of the specific heat. One can calculate the thermodynamic quantities for any value of the temperature and obtain an accurate estimate of the maximum of the specific heat as well as the associated temperature.
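The rescaling and truncation procedure above is easy to express in code. The sketch below evaluates ⟨E⟩ and C_v from a tabulated ln g(E), subtracting the maximum of −βE + ln g(E) before exponentiating; as illustrative input it uses the exact degeneracies of the 2 × 2 Ising model, and k_B = 1 is assumed.

```python
import numpy as np

# Exact density of states of the 2x2 periodic Ising model
E_levels = np.array([-8.0, 0.0, 8.0])
ln_g = np.log(np.array([2.0, 12.0, 2.0]))

def thermo(beta):
    """Return <E> and C_v at inverse temperature beta (Eqs. 6.10 and 6.12)."""
    arg = -beta * E_levels + ln_g
    arg -= arg.max()                   # all arguments now <= 0: no overflow
    w = np.exp(arg)
    Z = w.sum()
    e1 = (w * E_levels).sum() / Z
    e2 = (w * E_levels**2).sum() / Z
    return e1, beta**2 * (e2 - e1**2)

for beta in (0.1, 0.4, 1.0):
    print(beta, thermo(beta))
```

Note that the additive constant in ln g(E) cancels between numerator and denominator, which is why the density of states obtained only up to a multiplicative constant is sufficient.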
A last main advantage of this method is that one obtains the free energy and the entropy of the system for all temperatures, in contrast to a Monte Carlo simulation in the canonical ensemble.
One can also obtain the magnetic thermodynamic quantities with a modest computational effort. Indeed, once the density of states is obtained, it is possible to run a simulation where the modification factor is equal to 1 (the density of states no longer evolves), so that the simulation corresponds to a random walk in energy space: one stores a magnetization histogram M(E), as well as the squared magnetization M²(E), and even higher moments Mⁿ(E). From these histograms and the density of states g(E), one can calculate the mean magnetization of the system as a function of temperature by using the following formula

    ⟨M(β)⟩ = ∫ (M(E)/H(E)) exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE    (6.13)

where H(E) is the energy histogram of this additional run, and one can obtain the critical exponent associated with the magnetization from finite size scaling.
Similarly, one can calculate the magnetic susceptibility as a function of the temperature.
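A sketch of how Eq. (6.13) is used in practice follows. The three energy levels reuse the exact 2 × 2 Ising degeneracies, but the histograms M(E) and H(E), which would be accumulated during the fixed-g random walk, are made-up illustration data, not simulation results.

```python
import numpy as np

E_levels = np.array([-8.0, 0.0, 8.0])
ln_g = np.log(np.array([2.0, 12.0, 2.0]))   # exact 2x2 Ising degeneracies
M_hist = np.array([4000.0, 3000.0, 500.0])  # hypothetical sum of |M| per bin
H_hist = np.array([1000.0, 1500.0, 500.0])  # hypothetical visits per bin

def mean_magnetization(beta):
    """<M>(beta) from Eq. (6.13); M_hist/H_hist is the microcanonical average."""
    arg = -beta * E_levels + ln_g
    arg -= arg.max()                        # overflow guard, as for Eq. (6.10)
    w = np.exp(arg)
    return ((M_hist / H_hist) * w).sum() / w.sum()

print(mean_magnetization(0.6))
```

At large β the ground-state bin dominates and the estimate tends to the microcanonical average M_hist[0]/H_hist[0] of the lowest energy level, as expected.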
Therefore, one can write a minimal program for studying a system by calculating the quantities involving the moments of the energy, in order to determine the region of the phase diagram where the transition takes place and the order of the transition. In a second step, one can calculate the other thermodynamic quantities by performing an additional simulation in which the modification factor is equal to 1. Conversely, for a simulation in the canonical ensemble, a complete program must be written for calculating all quantities from the beginning.

¹ When there are several maxima, one chooses the maximum which gives the largest value.
6.5 Conclusion
To summarize simply the Monte Carlo simulations based on the density of states, one can say that an efficient method for obtaining the thermodynamics of a system consists in performing a simulation where thermodynamics is not involved.
Indeed, the simulation is focused on the density of states, which is not a thermodynamic quantity but a statistical quantity intrinsic to the system.
It is possible to perform Wang-Landau simulations with a varying number of particles. This allows one to obtain the thermodynamic potential associated with the grand canonical ensemble. One can also perform simulations of simple liquids and build the coexistence curves accurately. Let us note that additional difficulties occur when the energy spectrum is continuous, and some refinements of the original algorithm have been proposed by several authors, correcting in part the drawbacks of the method. Because this method is very recent, improvements are expected in the near future, but it already represents a breakthrough and can allow studies of systems where standard methods fail to obtain reliable thermodynamic quantities. There is an active field bringing modifications to the original method (see e.g. the references Poulain et al. [2006], Zhou and Bhatt [2005]).
6.6 Exercises
6.6.1 Some properties of the Wang-Landau algorithm
♣ Q. 6.6.1-1 The Wang-Landau algorithm allows calculating the density of states g(E) of a system described by a Hamiltonian H. Knowing that the logarithm of the partition function in the microcanonical ensemble is equal to the entropy S(E), where E is the total energy, derive the relation between g(E) and S(E). For most systems, far away from the ground state, the entropy is an extensive quantity; how does the density of states g(E) depend on the particle number?
♣ Q. 6.6.1-2 A Monte Carlo simulation of Wang-Landau type gives the density of states up to a multiplicative constant (Cg(E), where C is a constant). Show that quantities in the canonical ensemble, such as the mean energy and the specific heat, do not depend on C.
♣ Q. 6.6.1-3 Knowing that the canonical equilibrium distribution P_β(E) ∼ g(E) exp(−βE) at an inverse temperature β behaves (far away from a phase transition) as a Gaussian around the mean energy E₀, expand its logarithm around E₀. Show that the first order term is equal to zero. Infer that one obtains a Gaussian approximation for the equilibrium distribution. In a simulation, in a plot of ln(g(E)) versus E, what is the physical meaning of the slope at a given energy E?
One wants to study the convergence of the Wang-Landau algorithm.
♣ Q. 6.6.1-4 Denoting by H_k(E) the energy histogram of the kth simulation run, namely when the modification factor is f_k, show that the density of states is given by the relation

    ln(g_k(E)) = Σ_{i=1}^{k} H_i(E) ln(f_i).    (6.14)
♣ Q. 6.6.1-5 Noting that Ĥ_k(E) = H_k(E) − Min_E H_k(E), explain why

    ln(ĝ_k(E)) = Σ_{i=1}^{k} Ĥ_i(E) ln(f_i)    (6.15)

is equal to the above equation up to an additive constant.
♣ Q. 6.6.1-6 Several simulations have shown that δH_k = Σ_E Ĥ_k(E) behaves as
1/√(ln(f_k)). Knowing that f_k = √(f_{k−1}), infer that the density of states
converges to a finite value.
6.6.2 Wang-Landau and statistical temperature algorithms
The Wang-Landau algorithm is a Monte Carlo method which allows obtaining
the density of states g(E) of a system in a finite interval of energy.
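As a concrete illustration, here is a minimal sketch of the algorithm for a toy model that is not part of the lecture: N non-interacting spins in a field, H = −Σ_i s_i, whose exact density of states is the binomial coefficient. The function name and parameters are chosen for this sketch only:

```python
import math, random

def wang_landau(N=8, lnf_final=1e-5, flatness=0.8):
    """Minimal Wang-Landau sketch for N non-interacting spins in a field,
    H = -sum_i s_i; the exact g(E) is binomial."""
    spins = [1] * N
    E = -N                                    # energies run over -N, -N+2, ..., N
    lng = {e: 0.0 for e in range(-N, N + 1, 2)}   # ln g(E), initialized to 0
    hist = {e: 0 for e in lng}
    lnf = 1.0                                 # ln f, the modification factor
    while lnf > lnf_final:
        for _ in range(1000 * N):
            i = random.randrange(N)
            Enew = E + 2 * spins[i]           # energy change of flipping spin i
            # accept the flip with probability min(1, g(E)/g(Enew))
            if math.log(random.random()) < lng[E] - lng[Enew]:
                spins[i] = -spins[i]
                E = Enew
            lng[E] += lnf                     # update ln g and the visit histogram
            hist[E] += 1
        # flat-histogram test: halve ln f (i.e. f -> sqrt(f)) and reset
        if min(hist.values()) > flatness * sum(hist.values()) / len(hist):
            lnf /= 2.0
            hist = {e: 0 for e in hist}
    return lng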
♣ Q. 6.6.2-1 Why is it necessary to decrease the modification factor f at each
iteration? Justify your answer.
♣ Q. 6.6.2-2 Why is it numerically advantageous to work with the logarithm of
the density of states?
♣ Q. 6.6.2-3 At the beginning of the simulation, the density of states g(E) is
generally set equal to 1. If one takes another initial condition, does one obtain
the same density of states at the end of the simulation?
♣ Q. 6.6.2-4 Wang-Landau dynamics is generally performed by using elementary
moves of a single particle, in a manner similar to a Metropolis algorithm.
If the dynamics involves several particles, why does the efficiency of a Metropolis
algorithm decrease rapidly with system size, while that of the Wang-Landau
algorithm does not?
For models with a continuous energy spectrum, the Wang-Landau algorithm is
generally less efficient and sometimes does not converge towards the equilibrium
density of states. We are going to determine the origin of this problem; to
calculate the density of states, it is necessary to discretize the energy spectrum.
Let ∆E denote the bin size of the histogram of the density of states.
♣ Q. 6.6.2-5 If the mean energy change of a configuration is δE_c << ∆E, and
if the initial configuration has an energy located within the interval of the mean
energy E, what can one say about the acceptance probability of the new configuration?
Why does it imply a certain bias in the convergence of the Wang-Landau
method?
To correct this flaw of the Wang-Landau method, Kim and his collaborators
proposed updating the effective temperature T(E) = ∂E/∂S, where S(E) is the
microcanonical entropy, rather than the density of states.
♣ Q. 6.6.2-6 By discretizing the derivative of the entropy (the logarithm of the
density of states) with respect to the energy (with k_B = 1):

(∂S/∂E)|_{E=E_i} = 1/T_i = β_i ≃ (S_{i+1} − S_{i−1})/(2∆E)    (6.16)
show that for an elementary move whose new configuration has an energy E_i, two
statistical temperatures must be changed according to the formula

β'_{i±1} = β_{i±1} ∓ δf    (6.17)

where δf = ln f / (2∆E) and f is the modification factor. Show that

T'_{i±1} = α_{i±1} T_{i±1}

where α_{i±1} is a parameter to be determined. How do T_{i+1} and T_{i−1} evolve along
the simulation? What can one conclude about the temperature variation with
energy?
♣ Q. 6.6.2-7 One then calculates the entropy of each configuration by using a
linear interpolation of the temperature between two successive intervals i and
i + 1; show that the entropy is then given by the equation

S(E) = S(E_0) + Σ_{j=1}^{i} ∫_{E_{j−1}}^{E_j} dE' / [T_{j−1} + λ_{j−1}(E' − E_{j−1})] + ∫_{E_i}^{E} dE' / [T_i + λ_i(E' − E_i)]    (6.18)

where λ_i is a parameter to be determined as a function of ∆E and of the
temperatures T_i and T_{i+1}.
♣ Q. 6.6.2-8 What can one say about the entropy difference between two
configurations belonging to the same statistical-temperature interval and
separated by an energy δE_c? What can one conclude by comparing this algorithm
to the original Wang-Landau algorithm?
Chapter 7
Monte Carlo simulation in
different ensembles
Contents
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . 113
7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 113
7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . 116
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.3 Liquid-gas transition and coexistence curve . . . . . . 119
7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . 121
7.5 Monte Carlo method with multiple Markov chains . 123
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.1 Isothermal-isobaric ensemble
7.1.1 Introduction
The isothermal-isobaric ensemble (N, P, T) is widely used because it corresponds
to the great majority of experimental situations, in which pressure and
temperature are imposed on the system under study. Moreover, when the
interaction potential between particles is not pairwise additive, the pressure
cannot be obtained from the virial equation, and simulation in an N, P, T
ensemble yields the equation of state of the system directly.
A second use consists in carrying out simulations at zero pressure, starting at
high density and progressively increasing the temperature. This yields a quick
estimate of the liquid-gas coexistence curve.
7.1.2 Principle
The system under consideration consists of an ideal-gas reservoir of M − N
particles occupying a volume V_0 − V, and of N particles occupying a volume V.
The partition function of the total system is given by

Q(N, M, V, V_0, T) = 1/(Λ^{3M} N!(M−N)!) ∫_0^{L'} dr^{M−N} ∫_0^{L} dr^{N} exp(−βU(r^N))    (7.1)

where L^3 = V and L'^3 = V_0 − V. We introduce the following notation:

r_i = s_i L.    (7.2)
After this change of variables, the partition function reads

Q(N, M, V, V_0, T) = V^N (V_0 − V)^{M−N} / (Λ^{3M} N!(M−N)!) ∫ ds^N exp(−βU(s^N; L))    (7.3)

where the notation U(s^N; L) recalls that the potential generally has no simple
expression after the change of variables r^N → s^N.
Let us take the thermodynamic limit of the reservoir (V_0 → ∞, M → ∞ and
(M − N)/V_0 → ρ); one has

(V_0 − V)^{M−N} = V_0^{M−N} (1 − V/V_0)^{M−N} → V_0^{M−N} exp(−ρV).    (7.4)

Noting that the density ρ is that of the ideal gas, one therefore has ρ = βP.
The system one wishes to study by simulation is the subsystem made of the N
particles, independently of the configurations of the ideal-gas particles placed in
the second box. Using equation (7.3), the partition function of the N-particle
system is then given by

Q'(N, P, V, T) = lim_{V_0→∞, (M−N)/V_0→ρ} Q(N, M, V, V_0, T) / Q_id(M − N, V_0, T)    (7.5)

where Q_id(M − N, V_0, T) is the partition function of an ideal gas of M − N
particles occupying a volume V_0, at temperature T.
Using equation (7.4), one obtains

Q'(N, P, V, T) = V^N / (Λ^{3N} N!) exp(−βPV) ∫ ds^N exp(−βU(s^N; L)).    (7.6)
If the system is allowed to change its volume V, the partition function associated
with the subsystem of particles interacting through the potential U(r^N) is given
by summing the partition functions over different volumes V. This function is
then equal to

Q(N, P, T) = ∫_0^{+∞} dV (βP) Q'(N, P, V, T)    (7.7)
           = ∫_0^{+∞} dV βP V^N exp(−βPV) / (Λ^{3N} N!) ∫ ds^N exp(−βU(s^N)).    (7.8)
The probability P(V, s^N) that the N particles occupy a volume V with the
particles located at the points s^N is given by

P(V, s^N) ∼ V^N exp(−βPV − βU(s^N; L))    (7.9)
          ∼ exp(−βPV − βU(s^N; L) + N ln(V)).    (7.10)
Using the detailed balance condition, one can then write down the transition
probabilities for a volume change with the Metropolis rule. For a change of the
volume of the simulation box, the acceptance rate of the Metropolis algorithm
is therefore

Π(o → n) = Min(1, exp(−β[U(s^N, V_n) − U(s^N, V_o) + P(V_n − V_o)] + N ln(V_n/V_o))).    (7.11)
Unless the potential has a simple form (U(r^N) = L^N U(s^N)), the volume change
is an expensive calculation in a numerical simulation. One therefore attempts
this volume change on average once each time every particle has been moved
once on average. For performance reasons, one prefers to perform the volume
change on a logarithmic scale rather than a linear one. In this case, the partition
function (Eq. (7.6)) must be rewritten as

Q(N, P, T) = βP / (Λ^{3N} N!) ∫ d ln(V) V^{N+1} exp(−βPV) ∫ ds^N exp(−βU(s^N; L))    (7.12)
which gives an acceptance rate for the Metropolis algorithm modified as follows:

Π(o → n) = Min(1, exp(−β[U(s^N, V_n) − U(s^N, V_o) + P(V_n − V_o)] + (N + 1) ln(V_n/V_o))).    (7.13)
The simulation algorithm is constructed as follows:
1. Draw a random integer between 0 and the number of particles N.
2. If this number is nonzero, select the particle corresponding to the drawn
random number and attempt a displacement inside the box of volume V.
3. If this number is equal to 0, draw a random number between 0 and 1 and
compute the volume of the new box according to the formula

v_n = v_o exp(ln(v_max)(rand − 0.5)).    (7.14)

Then rescale the center of mass of each molecule (for point particles, this
amounts to rescaling the coordinates) through the relation

r^N_n = r^N_o (v_n/v_o)^{1/3}.    (7.15)

Compute the energy of the new configuration (if the potential is not simple,
this step requires a very large computation time), and perform the Metropolis
test given by relation (7.13). If this quantity is larger than a random number
uniform between 0 and 1, accept the new box; otherwise keep the old box
(and do not forget to count the old configuration again).
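The volume-move part of the algorithm above can be sketched for the trivial case of an ideal gas (U = 0), which is not treated in the lecture but lets one check the acceptance rule (7.13) against the exact result ⟨V⟩ = (N + 1)/(βP); the function name and the parameter v_max (playing the role of the logarithmic step amplitude) are choices of this sketch:

```python
import math, random

def npt_volume_ideal_gas(N=50, beta=1.0, P=1.0, v_max=0.5, steps=200_000):
    """Logarithmic volume moves, Eqs. (7.13)-(7.14), for an ideal gas
    (U = 0); the stationary distribution is p(V) ~ V^N exp(-beta*P*V)."""
    V = N / (beta * P)                      # arbitrary starting volume
    acc = 0.0
    for _ in range(steps):
        Vn = V * math.exp(v_max * (random.random() - 0.5))
        # Metropolis test of Eq. (7.13) with U = 0
        arg = -beta * P * (Vn - V) + (N + 1) * math.log(Vn / V)
        if math.log(random.random()) < arg:
            V = Vn
        acc += V                            # old configuration counted on rejection
    return acc / steps                      # estimate of <V> = (N+1)/(beta*P)
```

With the default parameters the running average converges to (N + 1)/(βP) = 51, which checks the (N + 1) exponent specific to the logarithmic move.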
7.2 Grand canonical ensemble
7.2.1 Introduction
The grand canonical ensemble (µ, V, T) is suited to a system in contact with a
particle reservoir at constant temperature. This ensemble is appropriate for
describing physical situations corresponding to studies of adsorption isotherms
(fluids in zeolites or porous media) and, to some extent, the study of a
coexistence curve for pure systems.
7.2.2 Principle
The system considered here consists of an ideal-gas reservoir of M − N particles
occupying a volume V_0 − V, and of N particles occupying a volume V. Instead
of exchanging volume as in the previous section, particles will be exchanged.
The partition function is

Q(M, N, V, V_0, T) = 1/(Λ^{3M} N!(M−N)!) ∫_0^{L'} dr^{M−N} ∫_0^{L} dr^{N} exp(−βU(r^N)).    (7.16)
One can perform on the coordinates r^N and r^{M−N} the same change of
variables as in the previous section, and one then obtains

Q(M, N, V, V_0, T) = V^N (V_0 − V)^{M−N} / (Λ^{3M} N!(M−N)!) ∫ ds^N exp(−βU(s^N)).    (7.17)
Taking the thermodynamic limit for the ideal-gas reservoir, M → ∞,
(V_0 − V) → ∞ with M/(V_0 − V) → ρ, one obtains

M! / ((M − N)! (V_0 − V)^N) → ρ^N.    (7.18)
One can also express the partition function associated with the subsystem of
the simulation box as the ratio of the partition function of the total system
(Eq. (7.16)) to the ideal-gas contribution Q_id(M, V_0 − V, T) of M particles in
the volume V_0 − V:

Q'(N, V, T) = lim_{M, (V_0−V)→∞} Q(N, M, V, V_0, T) / Q_id(M, V_0 − V, T).    (7.19)
For an ideal gas, the chemical potential is given by the relation

µ = k_B T ln(Λ^3 ρ).    (7.20)
One can then rewrite the partition function of the subsystem defined by the
volume V in the thermodynamic limit as

Q'(µ, N, V, T) = exp(βµN) V^N / (Λ^{3N} N!) ∫ ds^N exp(−βU(s^N)).    (7.21)
It only remains to sum over all configurations with a number N of particles
ranging from 0 to infinity:

Q'(µ, V, T) = lim_{M→∞} Σ_{N=0}^{M} Q'(µ, N, V, T)    (7.22)
that is,

Q(µ, V, T) = Σ_{N=0}^{∞} exp(βµN) V^N / (Λ^{3N} N!) ∫ ds^N exp(−βU(s^N)).    (7.23)
The probability P(µ, s^N) that N particles occupy a volume V with the particles
located at the points s^N is given by

P(µ, s^N) ∼ exp(βµN) V^N / (Λ^{3N} N!) exp(−βU(s^N)).    (7.24)
Using the detailed balance condition, one can then write down the transition
probabilities for a change of the number of particles in the simulation box
according to the Metropolis rule. The acceptance rates of the Metropolis
algorithm are the following:
1. To insert a particle into the system,

Π(N → N + 1) = Min(1, V/(Λ^3 (N + 1)) exp(β(µ − U(N + 1) + U(N)))).    (7.25)

2. To remove a particle from the system,

Π(N → N − 1) = Min(1, Λ^3 N / V exp(−β(µ + U(N − 1) − U(N)))).    (7.26)
The simulation algorithm is constructed as follows:
1. Draw a random integer between 1 and N_1 with N_1 = N_t + n_exc (the ratio
n_exc/⟨N⟩ sets the mean frequency of exchanges with the reservoir), where N_t
is the number of particles in the box at time t.
2. If this number is between 1 and the number of particles N_t, select the
particle corresponding to the drawn random number and attempt a displacement
of this particle according to the standard Metropolis algorithm.
3. If this number is larger than N_t, draw a random number between 0 and 1.
• If this number is smaller than 0.5, attempt the removal of a particle.
To do so, compute the energy with the N_t − 1 remaining particles and
perform the Metropolis test of equation (7.26).
– If the new configuration is accepted, remove the particle by writing

r[removed] = r[N_t]    (7.27)
N_t = N_t − 1.    (7.28)

– Otherwise keep the old configuration.
• If the number is larger than 0.5, attempt to insert a particle: compute
the energy with N_t + 1 particles and perform the Metropolis test of
equation (7.25).
– If the new configuration is accepted, add the particle by writing

N_t = N_t + 1    (7.29)
r[N_t] = r[inserted].    (7.30)

– Otherwise keep the old configuration.
The limitation of this method concerns very dense media, where the insertion
probability becomes very small.
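The insertion and removal rules above can be sketched for the simplest possible case, an ideal gas (U = 0), where equations (7.25) and (7.26) reduce to ratios involving only the activity z = exp(βµ)/Λ^3 and the exact answer ⟨N⟩ = zV is known. This simplification (and the function name) is not from the lecture:

```python
import math, random

def gcmc_ideal_gas(mu=0.0, beta=1.0, V=20.0, Lambda=1.0, steps=200_000):
    """Grand canonical insertions/removals, Eqs. (7.25)-(7.26), with U = 0;
    the particle number then follows a Poisson law of mean z*V."""
    z = math.exp(beta * mu) / Lambda**3     # activity
    N = 0
    total = 0
    for _ in range(steps):
        if random.random() < 0.5:
            # insertion, accepted with probability min(1, z V / (N + 1))
            if random.random() < z * V / (N + 1):
                N += 1
        elif N > 0:
            # removal, accepted with probability min(1, N / (z V))
            if random.random() < N / (z * V):
                N -= 1
        total += N                          # old configuration counted on rejection
    return total / steps                    # estimate of <N> = z*V
```

With µ = 0, Λ = 1 and V = 20, the mean occupation converges to zV = 20; at high density (large zV with a real potential), the insertion acceptance collapses, which is exactly the limitation noted above.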
7.3 Liquid-gas transition and coexistence curve
While for lattice models the study of the phase diagram is often limited to the
vicinity of the critical points, the determination of the phase diagram of
continuous systems involves a region much larger than the critical region(s).
The reason is that even for a simple liquid, the liquid-gas coexistence curve^1
is asymmetric between the liquid and gas regions, reflecting a non-universal
character tied to the system under study, whereas for a lattice model the
particle-hole symmetry leads to a temperature-density coexistence curve that is
perfectly symmetric on both sides of the critical density.
The condition for coexistence of two phases is that the pressures, temperatures
and chemical potentials must be equal. The natural idea would be to use a
µ, P, T ensemble. Unfortunately, such an ensemble does not exist, because an
ensemble cannot be defined by intensive parameters only. To obtain a
well-defined ensemble, at least one extensive variable is needed: either the
volume or the number of particles. The method introduced by Panagiotopoulos
in 1987 makes it possible to study the equilibrium of two phases, because it
removes the interfacial energy that exists when two phases coexist inside the
same box. Such an interface implies an energy barrier that must be crossed to
go from one phase to the other, and requires computation times that grow
exponentially with the interfacial free energy.
1. For a liquid, below the liquid-gas transition temperature, there is a region of
the phase diagram where liquid and gas can coexist; the boundary of this region
defines what is called the coexistence curve.
Figure 7.1 – Liquid-gas coexistence curve (temperature-density diagram) of the
Lennard-Jones model. The critical density is close to 0.3, to be compared with
the lattice gas model, which gives 0.5; the curve is asymmetric below the
critical point.
7.4 Gibbs ensemble
7.4.1 Principle
Consider a system of N particles occupying a volume V = V_1 + V_2, in contact
with a thermal reservoir at temperature T. The partition function of this
system reads

Q(N, V_1, V_2, T) = Σ_{N_1=0}^{N} 1/(Λ^{3N} N_1!(N−N_1)!) ∫_0^{V} dV_1 V_1^{N_1} (V − V_1)^{N−N_1} ∫ ds_1^{N_1} exp(−βU(s_1^{N_1})) ∫ ds_2^{N−N_1} exp(−βU(s_2^{N−N_1})).    (7.31)
Comparing with the system used in the derivation of the grand canonical
ensemble method, one sees that the particles occupying the volume V_2 are not
ideal-gas particles here, but particles interacting through the same potential as
the particles located in the other box.
One can easily deduce the probability P(N_1, V_1, s_1^{N_1}, s_2^{N−N_1}) of finding
a configuration with N_1 particles in box 1 at positions s_1^{N_1} and the
remaining N_2 = N − N_1 particles in box 2 at positions s_2^{N_2}:

P(N_1, V_1, s_1^{N_1}, s_2^{N−N_1}) ∼ V_1^{N_1} (V − V_1)^{N−N_1} / (N_1!(N − N_1)!) exp(−β(U(s_1^{N_1}) + U(s_2^{N−N_1}))).    (7.32)
7.4.2 Acceptance rules
There are three types of moves in this algorithm: individual particle
displacements in each box, volume changes under the constraint that the total
volume is conserved, and box changes for a particle. For displacements inside a
simulation box, the Metropolis rule is the one defined for a set of particles in
the canonical ensemble.
For the volume change, choosing a linear volume move (V_n = V_o + ∆V ∗ rand)
leads to an acceptance rate equal to

Π(o → n) = Min(1, [(V_{n,1})^{N_1} (V − V_{n,1})^{N−N_1} exp(−βU(s_n^N))] / [(V_{o,1})^{N_1} (V − V_{o,1})^{N−N_1} exp(−βU(s_o^N))]).    (7.33)
It is generally more advantageous to perform a logarithmic move of the volume
ratio ln(V_1/V_2), in analogy with what was done in the N, P, T ensemble. To
determine the new acceptance rate, the partition function is rewritten with this
change of variable:
Figure 7.2 – Representation of a Gibbs-ensemble simulation where the global
system is in contact with an isothermal reservoir and where the two subsystems
(volumes V_1 and V − V_1, containing N_1 and N − N_1 particles) can exchange
both particles and volume.
Q(N, V_1, V_2, T) = Σ_{N_1=0}^{N} ∫_0^{V} d ln(V_1/(V − V_1)) [(V − V_1)V_1/V] V_1^{N_1} (V − V_1)^{N−N_1} / (Λ^{3N} N_1!(N−N_1)!) ∫ ds_1^{N_1} exp(−βU(s_1^{N_1})) ∫ ds_2^{N−N_1} exp(−βU(s_2^{N−N_1})).    (7.34)
One then deduces the new probability for the logarithmic move of the ratio of
the box volumes,

P(N_1, V_1, s_1^{N_1}, s_2^{N−N_1}) ∼ V_1^{N_1+1} (V − V_1)^{N−N_1+1} / (V N_1!(N − N_1)!) exp(−β(U(s_1^{N_1}) + U(s_2^{N−N_1})))    (7.35)
which leads to an acceptance rate of the form

Π(o → n) = Min(1, [(V_{n,1})^{N_1+1} (V − V_{n,1})^{N−N_1+1} exp(−βU(s_n^N))] / [(V_{o,1})^{N_1+1} (V − V_{o,1})^{N−N_1+1} exp(−βU(s_o^N))]).    (7.36)
The last possible move is the change of box for a particle. To move a particle
from box 1 to box 2, the ratio of the Boltzmann weights is given by

N(n)/N(o) = [N_1!(N − N_1)! (V_1)^{N_1−1} (V − V_1)^{N−(N_1−1)}] / [(N_1 − 1)!(N − (N_1 − 1))! (V_1)^{N_1} (V − V_1)^{N−N_1}] exp(−β(U(s_n^N) − U(s_o^N)))
          = N_1 (V − V_1) / ((N − (N_1 − 1)) V_1) exp(−β(U(s_n^N) − U(s_o^N)))    (7.37)
which leads to an acceptance rate

Π(o → n) = Min(1, N_1 (V − V_1) / ((N − (N_1 − 1)) V_1) exp(−β(U(s_n^N) − U(s_o^N)))).    (7.38)
The algorithm corresponding to the Gibbs ensemble method is a generalization
of the (µ, V, T)-ensemble algorithm, in which particle insertion and removal are
replaced by exchanges between boxes, and in which the volume change seen in
the (N, P, T) algorithm is replaced by the simultaneous volume change of the
two boxes.
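As a check of the exchange rule (7.38) alone, one can again specialize to an ideal gas (U = 0) at fixed volumes, a simplification not made in the lecture: N_1 should then relax to a binomial distribution of mean N V_1/(V_1 + V_2). The function name and parameters are choices of this sketch:

```python
import random

def gibbs_transfer_ideal(N=100, V1=3.0, V2=7.0, steps=200_000):
    """Gibbs-ensemble particle exchanges, Eq. (7.38), for an ideal gas
    (U = 0) at fixed volumes; N1 then follows a binomial law of mean
    N*V1/(V1+V2)."""
    N1 = N // 2
    total = 0
    for _ in range(steps):
        if random.random() < 0.5:
            # try moving one particle from box 1 to box 2, Eq. (7.38)
            if N1 > 0 and random.random() < N1 * V2 / ((N - N1 + 1) * V1):
                N1 -= 1
        else:
            # same rule with the two boxes interchanged
            N2 = N - N1
            if N2 > 0 and random.random() < N2 * V1 / ((N1 + 1) * V2):
                N1 += 1
        total += N1                         # old configuration counted on rejection
    return total / steps                    # estimate of <N1>
```

With the defaults, ⟨N_1⟩ converges to 100 × 3/10 = 30, i.e. equal densities in the two boxes, as expected for a single-phase ideal system.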
7.5 Monte Carlo method with multiple Markov chains
Free-energy barriers prevent the system from exploring the whole of phase
space; indeed, a Metropolis-type algorithm is strongly slowed down by the
presence of high barriers.
To compute a coexistence curve, or to obtain an equilibrated system below the
critical temperature, one must carry out a succession of simulations for
successive values of the temperature (and possibly simultaneously of the
chemical potential, in the case of a liquid). The "parallel tempering" method
seeks to exploit the following fact: while it is harder and harder to equilibrate
a system as the temperature is lowered, one can exploit the ease with which
the system changes region of phase space at high temperature by "propagating"
configurations towards lower temperatures.
For simplicity, consider the case of a system where only the temperature is
changed (the method is easily generalized to other intensive quantities:
chemical potential, change of Hamiltonian, ...): the partition function of a
system at temperature T_i is given by the relation

Q_i = Σ_α exp(−β_i U(α))    (7.39)
where α is an index running over all configurations accessible to the system and
β_i is the inverse temperature.
If one considers the direct product of this set of systems, each evolving at a
different temperature, the partition function of this product ensemble can be
written as

Q_total = Π_{i=1}^{N} Q_i    (7.40)
where N denotes the total number of temperatures used in the simulations.
A large advantage can be gained by running the simulations at the different
temperatures simultaneously; the "parallel tempering" method introduces an
additional Markov chain on top of those present in each simulation box; this
chain exchanges, while respecting a detailed balance condition, the full set of
particles between two consecutive boxes chosen at random among all of them.
The detailed balance condition for this second chain can be expressed as

Π((i, β_i), (j, β_j) → (j, β_i), (i, β_j)) / Π((i, β_j), (j, β_i) → (i, β_i), (j, β_j)) = [exp(−β_j U(i)) exp(−β_i U(j))] / [exp(−β_i U(i)) exp(−β_j U(j))]    (7.41)
where U(i) and U(j) denote the energies of the particles of boxes i and j.
The simulation algorithm consists of two types of moves:
• Particle displacement in each box (which can be performed in parallel)
according to the Metropolis algorithm. The acceptance probability for each
simulation step inside a given box is given by

min(1, exp(−β_i (U_i(n) − U_i(o))))    (7.42)

where the labels n and o correspond to the new and old configurations,
respectively.
• Exchange of particles between two consecutive boxes. The acceptance
probability is then given by

min(1, exp((β_i − β_j)(U_i − U_j)))    (7.43)
The proportion of these exchange moves remains to be set in order to optimize
the simulation of the whole system.
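The two kinds of moves can be sketched together on a hypothetical double-well potential V(x) = (x² − 1)², which is not the potential studied in the next paragraph; all names and parameter values here are choices of this sketch:

```python
import math, random

def parallel_tempering(temps=(0.05, 0.15, 0.4, 1.0), n_sweeps=20_000, dx=0.5):
    """Parallel tempering for one particle in the double-well potential
    V(x) = (x^2 - 1)^2: local Metropolis moves (Eq. (7.42)) plus
    configuration swaps between neighbouring temperatures (Eq. (7.43))."""
    V = lambda x: (x * x - 1.0) ** 2
    betas = [1.0 / T for T in temps]
    xs = [-1.0] * len(temps)                # every replica starts in the left well
    traj = [[] for _ in temps]
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas):    # one local move per replica
            xn = xs[i] + dx * (random.random() - 0.5)
            if math.log(random.random()) < -beta * (V(xn) - V(xs[i])):
                xs[i] = xn
            traj[i].append(xs[i])
        # one swap attempt between a random pair of neighbouring replicas
        i = random.randrange(len(temps) - 1)
        arg = (betas[i] - betas[i + 1]) * (V(xs[i]) - V(xs[i + 1]))
        if math.log(random.random()) < arg:
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return traj
```

With the swap moves switched off, the coldest replica stays trapped in its initial well; with them, high-temperature configurations propagate down the temperature ladder and the coldest replica visits both wells.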
Figure 7.3 – One-dimensional potential to which the particle is subjected

To illustrate this concept very simply, let us study the following model system:
a particle in one-dimensional space subjected to the external potential V(x)
given by

V(x) = +∞                 if x < −2
       1 + sin(2πx)       if −2 ≤ x ≤ −1.25
       2 + 2 sin(2πx)     if −1.25 ≤ x ≤ −0.25
       3 + 3 sin(2πx)     if −0.25 ≤ x ≤ 0.75
       4 + 4 sin(2πx)     if 0.75 ≤ x ≤ 1.75
       5 + 5 sin(2πx)     if 1.75 ≤ x ≤ 2.0    (7.44)
The equilibrium probability P_eq(x, β) for this system is expressed simply as

P_eq(x, β) = exp(−βV(x)) / ∫_{−2}^{2} dx exp(−βV(x)).    (7.45)
Figure 7.4 shows these equilibrium probabilities. Note that since the minima of
the potential energy are identical, the maxima of the equilibrium probabilities
are the same. When the temperature is lowered, the probability between two
potential-energy minima becomes extremely small.
Figure 7.4 also shows the result of independent Metropolis-type Monte Carlo
simulations, while Figure 7.5 shows the result of a "parallel tempering"
simulation. The total number of configurations generated by the simulation
is unchanged.

Figure 7.4 – Equilibrium probabilities (full curves) and probabilities obtained
by a Metropolis simulation of the particle in the one-dimensional potential,
for three temperatures T = 0.1, 0.3, 2

Figure 7.5 – Same as Figure 7.4, except that the simulation uses parallel
tempering
With a Metropolis method, the system is equilibrated only at the highest
temperature. At T = 0.3, the system can explore only the first two potential
wells (the particle being located in the first well at the initial time), and at
T = 0.1, the particle is unable to cross a single barrier. With the "parallel
tempering" method, the system becomes equilibrated at every temperature, in
particular at very low temperature.
For N-particle systems, the parallel tempering method requires a number of
boxes larger than the one used in this very simple example. One can even show
that, for the same temperature interval, the number of boxes needed to obtain
an equilibrated system increases with system size.
7.6 Conclusion
We have seen that the Monte Carlo method can be used in a wide variety of
situations. This quick overview of a few variants cannot do justice to the very
vast literature on the subject. I hope it provides elements for building, as a
function of the system under study, a more specifically adapted method.
Chapter 8
Out of equilibrium systems
Contents
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 130
8.2 Random sequential addition . . . . . . . . . . . . . . . 131
8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Avalanche model . . . . . . . . . . . . . . . . . . . . . . 137
8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Inelastic hard sphere model . . . . . . . . . . . . . . . 141
8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4.4 Some properties . . . . . . . . . . . . . . . . . . . . . 143
8.5 Exclusion models . . . . . . . . . . . . . . . . . . . . . . 144
8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.2 Random walk on a ring . . . . . . . . . . . . . . . . . 146
8.5.3 Model with open boundaries . . . . . . . . . . . . . . 147
8.6 Kinetic constraint models . . . . . . . . . . . . . . . . 149
8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 149
8.6.2 Facilitated spin models . . . . . . . . . . . . . . . . . 152
8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.1 Haff Law . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.2 Domain growth and kinetically constrained models . . 156
8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . 158
8.8.4 Random sequential addition . . . . . . . . . . . . . . . 159
8.1 Introduction
Nous abordons, dans les deux derniers chapitres, le vaste domaine de la m´ecanique
statistique hors d’´equilibre. Les situations dans lesquelles un syst`eme ´evolue sans
relaxer vers un ´etat d’´equilibre sont de loin les plus fr´equentes dans la nature.
Les raisons en sont diverses: i) un syst`eme n’est pas forc´ement isol´e, en contact
avec un thermostat, ou entour´e d’un ou plusieurs r´eservoirs de particules,... ce
qui empˆeche une ´evolution vers un ´etat d’´equilibre; ii) il peut exister des temps
de relaxation tr`es sup´erieurs aux temps microscopiques (adsorption de prot´eines
aux interfaces, polym`eres, verres structuraux, verres de spins); iii) le ph´enom`ene
peut poss´eder une irr´eversibilit´e intrins`eque (li´ee ` a des ph´enom`enes dissipatifs)
qui n´ecessite l’introduction d’une mod´elisation fondamentalement hors d’´equilibre
(milieux granulaires, fragmentation, ph´enom`enes de croissance, avalanches, trem-
blements de terre)Hinrichsen [2000], Talbot et al. [2000], Turcotte [1999].
On peut distinguer trois grandes classes de syst`emes en ce qui concerne
l’´evolution du syst`eme dans un r´egime hors d’´equilibre: dans un premier cas, les
syst`emes sont le si`ege de temps de relaxation tr`es longs (tr`es sup´erieurs au temps
caract´eristique de l’observation), dans un second cas, les syst`emes qui ´evoluent
vers un ´etat stationnaire loin de l’´equilibre. Dans un dernier cas, les syst`emes
sont soumis ` a une ou plusieurs contraintes ext´erieures et ´evoluent vers un ´etat
stationnaire, la dynamique microscopique ´etant hamiltonienne ou non.
Nous illustrons, sur quatre syst`emes mod`eles, la pertinence d’une approche
statistique des ph´enom`enes hors d’´equilibre et, particuli`erement, l’int´erˆet d’une
´etude par simulation num´erique des mod`eles.
Cette approche num´erique est d’autant plus justifi´ee qu’en l’absence d’une
formulation analogue ` a la thermodynamique, les m´ethodes th´eoriques de la m´e-
canique statistique sont moins puissantes. L’utilisation de la simulation num´erique
est donc ` a la fois un moyen essentiel pour comprendre la physique des mod`eles
et un support pour le d´eveloppement de nouveaux outils th´eoriques.
8.2 Random sequential addition
8.2.1 Motivation
Quand on place une solution contenant des macromol´ecules (prot´eines, collo¨ıdes),
en contact avec une surface solide, on observe une adsorption sur la surface sous
la forme d’une monocouche de macromol´ecules. Ce processus d’adsorption est
lent, l’adsorption n´ecessitant des dur´ees qui peuvent aller de quelques minutes
` a plusieurs semaines. Une fois l’adsorption achev´ee, si on remplace la solution
contenant les macromol´ecules par une solution tampon, on n’observe aucun (ou
peu de) changement dans la monocouche adsorb´ee. Quand l’exp´erience est r´ep´et´ee
avec des solutions de concentrations diff´erentes, la densit´e de particules adsorb´ees
` a la surface est pratiquement toujours la mˆemeTalbot et al. [2000].
Given the time scales involved and the microscopic complexity of the molecules,
a Molecular Dynamics description of the process is not feasible. A first step
consists in modeling on a "mesoscopic" scale. Given the size of the macromolecules
(from 10 nm to a few µm), one may consider that the interactions between particles
have a range much smaller than the particle size. These particles are also hardly
deformable and cannot easily interpenetrate. To a first approximation, the
interparticle potential can be taken as that of hard objects (for colloids, for
example, hard spheres).
When a particle is in the immediate vicinity of the surface, the interaction with
the surface is strongly attractive over a short distance, and strongly repulsive
when the particle penetrates the surface. The interaction potential between a
particle and the surface can thus be taken as a strongly negative contact
potential. Once adsorbed, the particles neither desorb during the time of the
experiment nor diffuse on the surface. Given the Brownian motion of the particles
in solution, one can consider that the particles arrive at random on the surface.
If an incoming particle comes into contact with an already adsorbed particle, it
is pushed back and returns to the solution. If it comes into contact with the
solid surface, without overlapping any of the particles already present, it
sticks to the surface definitively.
8.2.2 Definition
The modeling just described leads naturally to the following model, called
Random Sequential Adsorption (or Addition), RSA.
In a space of dimension d, hard particles are placed sequentially at random
positions, with the following condition: if the introduced particle does not
overlap any previously deposited particle, it is placed definitively at that
position; otherwise, it is removed and a new trial is made.
Such a model contains the two ingredients that have been considered essential
for characterizing the adsorption kinetics: the steric hindrance between
particles and the irreversibility of the process.
If one considers spherical particles, this means that in one dimension segments
are deposited on a line, in two dimensions disks are deposited on a plane, and
in three dimensions balls are deposited in space.
As we saw in chapter 2, the dynamics of the Monte Carlo method is that of a
Markov process converging towards an equilibrium state. The RSA model is also
defined by a Markov process (random insertion of particles), but without any
condition of convergence towards an equilibrium state. We can therefore easily
describe the simulation algorithm adapted to this model.
8.2.3 Algorithm
The principle of the basic algorithm does not depend on the dimension of the
problem. Since the model is physically motivated by problems of adsorption on a
surface, we consider here hard disks (which represent the projection of a sphere
onto the plane). Given the diameter σ of the particles (the simulation box is
chosen as a square of unit side), one array per space dimension is reserved to
store the position coordinates of the particles adsorbed during the process.
Its size is necessarily bounded by 4/(πσ²).
These arrays contain no particle positions at the initial time (the surface is
assumed to be initially empty). An elementary step of the kinetics is the
following:
1. One draws at random, with a uniform distribution, the position of a particle
in the simulation box:

    x_0 = rand
    y_0 = rand                                    (8.1)
The time is incremented by 1: t = t + 1.
2. One tests whether the chosen particle overlaps any particle already present:
one must check that (r_0 − r_i)² > σ², with i running from 1 to the last particle
adsorbed at time t, denoted n_a. As soon as this test fails, the loop over the
indices is stopped and one returns to step 1 for a new trial. If the test is
satisfied for all particles, the index n_a is incremented (n_a = n_a + 1) and
the position of the last particle is added to the arrays of position coordinates:

    x[n_a] = x_0
    y[n_a] = y_0.                                 (8.2)

Figure 8.1 – Disks adsorbed in RSA kinetics: the particle at the center of the
figure is the one that has just been chosen. The cell corresponding to its
center is shown in red; the 24 cells to consider for the overlap test are shown
in orange.

Since the number of particles that can be placed on the surface is not known in
advance, the maximum number of trials, in other words the maximum simulation
time, is fixed at the start of the simulation.
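As a sketch, the elementary step above can be written compactly in Python (the function name and parameters are illustrative; the box is the unit square as in the text):

```python
import random

def rsa_naive(sigma, max_attempts, seed=0):
    """Random sequential adsorption of hard disks of diameter sigma
    in a unit square: a uniform trial position is accepted only if it
    overlaps no previously adsorbed disk, and accepted disks are fixed."""
    rng = random.Random(seed)
    x, y = [], []          # one coordinate array per space dimension
    for t in range(max_attempts):
        x0, y0 = rng.random(), rng.random()   # step 1: uniform trial position
        # step 2: reject as soon as one adsorbed disk overlaps the trial disk
        if all((x0 - xi) ** 2 + (y0 - yi) ** 2 > sigma ** 2
               for xi, yi in zip(x, y)):
            x.append(x0)                      # accepted: placed definitively
            y.append(y0)
    return x, y

x, y = rsa_naive(sigma=0.1, max_attempts=20000)
print(len(x), "disks adsorbed")
```

The number of adsorbed disks is necessarily below the bound 4/(πσ²) mentioned above.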
The algorithm presented above is correct, but very slow. Indeed, at each time
step, n_a tests are needed to determine whether the new particle can be inserted.
When the filling approaches its maximum, much time is spent performing useless
tests, since almost all trial particles are rejected.
Since a hard disk can be surrounded by at most six other disks, it is easy to
see that one should avoid performing overlap tests with particles, most of which
are very far from the particle just chosen.
A solution consists in building a list of cells. Indeed, if one chooses a grid
whose elementary square is strictly smaller than the diameter of a disk, each
square can contain at most one disk center. One therefore creates an additional
array whose two indices, along X and Y, label the elementary squares. This array
is initialized to 0. The algorithm is then modified as follows:
1. The first step is unchanged.
2. One determines the indices of the elementary cell in which the trial particle
falls:

    i_0 = Int(x_0/σ)
    j_0 = Int(y_0/σ).                             (8.3)

If this cell is already occupied (cell[i_0][j_0] ≠ 0), one returns to step 1. If
the cell is empty, the 24 cells surrounding the central cell must be examined to
see whether they are empty or occupied (see Fig. 8.1). As soon as an occupied
cell is found (cell[i][j] ≠ 0), a test must be performed to check for an overlap
between the new particle and the adsorbed one. If there is an overlap, one
returns to step 1; otherwise, one goes on to test the next cell. If every tested
cell is either empty or occupied by a particle that does not overlap the
particle to be inserted, the new particle is added as in the previous algorithm
and the cell of the newly added particle is filled as follows:

    cell[i_0][j_0] = n_a.                         (8.4)
With this new algorithm, the number of tests is at most 24, whatever the number
of adsorbed particles. The cost of the algorithm is now linear in the number of
particles instead of quadratic. Comparable methods are used in Molecular
Dynamics to limit the number of particles to be examined in the computation of
the forces.
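A possible sketch of the cell-list version in Python (illustrative names). One practical detail: for a cell to hold at most one disk center, its side must not exceed σ/√2, so that its diagonal stays below σ; with such a grid, the 5 × 5 block of 24 surrounding cells is sufficient for the overlap test:

```python
import math
import random

def rsa_cells(sigma, max_attempts, seed=0):
    """RSA of hard disks in the unit square, using a cell list.
    The grid is chosen so that sigma/2 <= cell side <= sigma/sqrt(2):
    a cell then holds at most one disk center, and only the 24 cells
    around the central one need to be examined."""
    rng = random.Random(seed)
    m = math.ceil(math.sqrt(2.0) / sigma)     # cells per side
    cell = [[0] * m for _ in range(m)]        # 0 = empty, else particle index
    x, y = [None], [None]                     # 1-based arrays (index 0 unused)
    na = 0
    for t in range(max_attempts):
        x0, y0 = rng.random(), rng.random()
        i0 = min(int(x0 * m), m - 1)
        j0 = min(int(y0 * m), m - 1)
        if cell[i0][j0] != 0:                 # occupied cell: certain overlap
            continue
        ok = True
        for i in range(max(0, i0 - 2), min(m, i0 + 3)):
            for j in range(max(0, j0 - 2), min(m, j0 + 3)):
                k = cell[i][j]
                if k and (x0 - x[k]) ** 2 + (y0 - y[k]) ** 2 <= sigma ** 2:
                    ok = False
                    break
            if not ok:
                break
        if ok:                                # accepted: record disk and cell
            na += 1
            x.append(x0)
            y.append(y0)
            cell[i0][j0] = na
    return x[1:], y[1:]
```

The overlap test is now bounded by a constant per trial, which is what makes the late, rejection-dominated regime affordable.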
8.2.4 Results
In one dimension, a system of particles at equilibrium has no phase transition
at finite temperature. The solution of such models therefore does not provide
very useful information for the study of the same system in higher dimension
(where, generally, phase transitions do exist but analytical solutions do not).
On the contrary, in the case of RSA (as well as of other out-of-equilibrium
models), the solution of the one-dimensional model provides much useful
information for understanding the same model in higher dimension.
Qualitatively, the adsorption process takes place in several stages: at first,
the inserted particles are adsorbed without rejection and the density increases
rapidly. As the surface fills up, more and more trial particles are rejected.
When the number of trials becomes of the order of a few times the number of
adsorbed particles, the coverage is within about 10% of the maximum value that
can be reached. A slow regime then sets in, in which the spaces still available
for insertion are isolated on the surface and represent a very small fraction
of the initial area.
The one-dimensional version of the model, also called the parking model, can be
solved analytically (see Appendix C). Here we simply exploit these results to
infer generic behaviors in higher dimension.
In one dimension, taking the length of the segments equal to unity, one obtains
the following expression for the coverage of the line, when the line is
initially empty:

    ρ(t) = ∫_0^t du exp( −2 ∫_0^u dv (1 − e^{−v})/v ).     (8.5)
When t → ∞, the density tends to 0.7476... In this dynamics, rearrangements of
particles are forbidden, in contrast to an equilibrium situation. At saturation,
empty spaces therefore remain on the line, each of length necessarily smaller
than the diameter of a particle. Note also that the maximum density at
saturation depends strongly on the initial configuration. This is again a
notable difference from an equilibrium process, which never depends on the
chosen thermodynamic path but only on the imposed thermodynamic conditions.
One thus reaches the following apparently paradoxical situation: the stochastic
filling dynamics is stationary Markovian, i.e. the selection of particles is
memoryless, but since the adsorbed particles are fixed, the system keeps the
memory of the initial configuration until the end of the process.
Near saturation, the asymptotic expansion (t → ∞) of equation (8.5) gives

    ρ(∞) − ρ(t) ≃ e^{−2γ}/t,                      (8.6)

where γ is the Euler constant. The approach to the saturated state is therefore
slow and given by an algebraic law. In higher dimension, one can show that for
spherical particles the saturation is approached as ρ(∞) − ρ(t) ≃ 1/t^{1/d},
where d is the dimension of space. This gives an exponent equal to 1/2 for
d = 2.
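These one-dimensional results are easy to check numerically. The sketch below (Python, illustrative names) deposits unit segments at random on a line and estimates the coverage after many trials; it approaches the jamming limit 0.7476... slowly from below, as predicted by Eq. (8.6):

```python
import random

def parking_1d(L, attempts, rng):
    """RSA of unit-length segments on a line of length L: a trial center
    in [0.5, L-0.5] is accepted only if it lies at distance >= 1 from
    every previously accepted center (no overlap of segments)."""
    centers = []
    for _ in range(attempts):
        c = 0.5 + rng.random() * (L - 1.0)
        if all(abs(c - ci) >= 1.0 for ci in centers):
            centers.append(c)
    return len(centers)

rng = random.Random(42)
rho = parking_1d(100.0, 30000, rng) / 100.0
print(f"coverage ≈ {rho:.3f}  (jamming limit: 0.7476...)")
```

With 300 trials per unit length, the measured coverage already lies within about a percent of the limit, consistent with the 1/t approach (plus small edge effects on a finite line).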
Among the further differences from the same system of particles considered at
equilibrium are, for instance, the particle-number fluctuations of a finite-size
system. At equilibrium, in a Monte Carlo simulation one can easily compute the
particle fluctuations from the moments of the particle distribution, ⟨N⟩ and
⟨N²⟩, in the grand canonical ensemble. In an RSA simulation, these fluctuations
are accessible as follows: performing a series of simulations with identical
system size and initial configuration, one records a histogram of the density
at a given time of the process. One then computes the mean values ⟨N⟩ and ⟨N²⟩,
where the brackets denote an average over the different realizations of the
process.
These quantities can thus be defined in the absence of a Gibbs measure for the
system. For a thermodynamic system, such averages can be computed in the same
way, but given the uniqueness of the equilibrium state, a statistical average
over different realizations is equivalent to an equilibrium average over a
single realization. This is obviously why averages over realizations are
generally not performed in the equilibrium case: changing realizations makes
the required computing time considerably larger, since each realization must
be re-equilibrated before the computation of the average can start.
The magnitude of these fluctuations at a given density differs between an RSA
process and a system at equilibrium. In one dimension, the fluctuations can be
computed analytically in both cases, and their amplitude is smaller in RSA.
This feature persists in higher dimension, where a numerical computation is
then required; it provides a practical means of distinguishing configurations
generated by an irreversible process from equilibrium configurations.
As we saw in chapter 4, the correlation function g(r) can be defined from
probabilistic geometry considerations (for hard particles) and not necessarily
from an underlying Gibbs distribution. This implies that pair correlations can
also be computed for an RSA process. Using the statistical average introduced
above for the fluctuations, one can compute the correlation function g(r) at a
given time of the dynamics by considering a large number of realizations.
A significant result concerning this function is that, at the saturation
density, it diverges logarithmically at contact:

    g(r) ∼ ln((r − σ)/σ).                         (8.7)

Such a behavior differs strongly from an equilibrium situation, where the
correlation function remains finite at contact for a density equal to the RSA
saturation density.
8.3 Avalanche model
8.3.1 Introduction
Some fifteen years ago, Bak, Tang and Wiesenfeld proposed a model to explain the
behavior observed in sandpiles. If grains of sand are dropped, at a very low
flux, above a given point of a surface, a pile gradually builds up, whose shape
is roughly that of a cone and whose angle cannot exceed a limiting value. Once
this state is reached, if sand keeps being added at the same rate, avalanches of
grains of very different sizes are observed at different times, the grains
suddenly rushing down the slope. This phenomenon does not seem, a priori, to
depend on the microscopic nature of the interactions between particles.
8.3.2 Definition
Consider a two-dimensional square lattice (N × N). At each lattice site (i, j)
one defines an integer z(i, j), which, by analogy with the sandpile, will be
regarded as the local slope of the pile. The elementary steps of the model are
the following:
1. At time t, a lattice site, denoted (i_0, j_0), is chosen at random and its
value is increased by one unit:

    z(i_0, j_0) → z(i_0, j_0) + 1
    t → t + 1.                                    (8.8)
2. If z(i_0, j_0) ≥ 4, the slope difference has become too large and particles
are redistributed to the nearest-neighbor sites according to the following
rules:

    z(i_0, j_0) → z(i_0, j_0) − 4
    z(i_0 ± 1, j_0) → z(i_0 ± 1, j_0) + 1
    z(i_0, j_0 ± 1) → z(i_0, j_0 ± 1) + 1
    t → t + 1.                                    (8.9)
3. The redistribution to the neighboring sites may push some of them above the
critical threshold. Each site that finds itself in a critical state triggers
step 2 in turn...
For sites located at the boundary of the box, the particles are considered lost
and the update rules are the following. For the corner site (0, 0):

    z(0, 0) → z(0, 0) − 4
    z(1, 0) → z(1, 0) + 1
    z(0, 1) → z(0, 1) + 1
    t → t + 1.                                    (8.10)

In this case, two particles leave the box.
For i between 1 and N − 2:

    z(0, i) → z(0, i) − 4
    z(1, i) → z(1, i) + 1
    z(0, i ± 1) → z(0, i ± 1) + 1
    t → t + 1.                                    (8.11)
In this case, a single particle leaves the box.
Variants of these boundary rules also exist, in which the toppling thresholds
are modified for the sites located at the boundary. For instance, for a corner
site the threshold is then 2 and one has

    z(0, 0) → z(0, 0) − 2
    z(1, 0) → z(1, 0) + 1
    z(0, 1) → z(0, 1) + 1
    t → t + 1.                                    (8.12)

For i between 1 and N − 2, the threshold is 3:

    z(0, i) → z(0, i) − 3
    z(1, i) → z(1, i) + 1
    z(0, i ± 1) → z(0, i ± 1) + 1
    t → t + 1.                                    (8.13)
Similar boundary conditions can be written for the four edges of the lattice.
This cycle is repeated until no site of the lattice exceeds the critical
threshold¹. At that point, one returns to step 1.
¹ Since the sites that become critical at a given time t are not nearest
neighbors of one another, the order in which the updates are performed does not
affect the final state of the cycle; in other words, the model is Abelian.
This algorithm can easily be adapted to different lattices and to different
space dimensions.
The quantities that can be recorded during the simulation are the following:
the distribution of the number of particles involved in an avalanche, i.e. the
number of lattice sites that reached a critical situation (and had to be
updated) before the addition of particles (step 1) could resume; the
distribution of the number of particles lost by the lattice (through its edges)
during the process; and, finally, the periodicity of the avalanches in time.
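The elementary steps 1-3, with the simple open boundaries of Eqs. (8.10)-(8.11), can be sketched as follows (Python, illustrative names; particles pushed outside the lattice are simply lost). The number of topplings per added grain gives direct access to the avalanche-size histogram mentioned above:

```python
import random

def sandpile(n, grains, seed=0, zc=4):
    """Bak-Tang-Wiesenfeld sandpile on an n x n lattice with open
    boundaries: a site reaching the threshold zc topples, sending one
    grain to each neighbour; grains pushed outside the lattice are lost.
    Returns the avalanche size (number of topplings) for each added grain."""
    rng = random.Random(seed)
    z = [[0] * n for _ in range(n)]
    sizes = []
    for _ in range(grains):
        i, j = rng.randrange(n), rng.randrange(n)   # step 1: random site
        z[i][j] += 1
        unstable = [(i, j)] if z[i][j] >= zc else []
        s = 0
        while unstable:                              # steps 2-3: relax
            a, b = unstable.pop()
            if z[a][b] < zc:                         # already stabilized
                continue
            z[a][b] -= zc
            s += 1
            if z[a][b] >= zc:                        # still critical
                unstable.append((a, b))
            for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = a + da, b + db
                if 0 <= u < n and 0 <= v < n:        # edge grains are lost
                    z[u][v] += 1
                    if z[u][v] >= zc:
                        unstable.append((u, v))
        sizes.append(s)
    return sizes

sizes = sandpile(n=20, grains=5000)
```

Because the model is Abelian, the order in which the unstable sites are popped from the stack does not change the final state of each cycle.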
8.3.3 Results
The essential results of this model with very simple rules are the following:
• The number of avalanches involving s particles follows a power law

    N(s) ∼ s^{−τ},                                (8.14)

where τ ≃ 1.03 in two dimensions.
• The avalanches occur at random times, and their frequency of occurrence
depends on their size s through a power law

    D(s) ∼ s^{−2}.                                (8.15)

• If T denotes the duration of an avalanche, measured in number of elementary
steps, the number of avalanches lasting T steps is given by

    N ∼ T^{−1}.                                   (8.16)
It is now interesting to compare these results with those encountered in the
study of equilibrium systems.
One of the first unusual features of this model is that it evolves
spontaneously towards a state that can be called critical: both spatially
(distribution of avalanches) and temporally, the analogy with a usual critical
point of a thermodynamic system is strong. Conversely, whereas reaching a
critical point in a thermodynamic system requires tuning one or several control
parameters (temperature, pressure, ...), there is no control parameter here:
however small the flux of deposited particles, the distribution of avalanches
remains unchanged. The authors of the model, who identified these features,
called this state self-organized criticality (SOC) Bak et al. [1987], Turcotte
[1999].
The specificity of this model has fascinated a scientific community extending
far beyond statistical physics. Many variants of the model have been studied,
and the behaviors observed in the original model turned out to appear in a
large number of physical situations. Seismology provides one example.
Earthquakes result from the accumulation of stresses in the Earth's mantle, on
time scales exceeding hundreds of years; these stresses result from the very
slow motion of the tectonic plates, and earthquakes occur in regions where
stress accumulates. A universal feature of earthquakes is the following:
whatever the region of the Earth's surface where they occur, it is very
difficult to predict the date and intensity of a future earthquake, but
statistically the distribution of intensities obeys a power law with an
exponent α ≃ 2. By comparison, the SOC model predicts an exponent α = 1, which
shows that its description of earthquakes is qualitatively correct, but that
more refined models are needed for a more quantitative agreement.
Forest fires are a second example for which the analogy with the SOC model is
interesting. Over a long period the forest grows, and it may then be partially
or almost totally destroyed by a fire. The models used to describe this
phenomenon involve two time scales: the first is associated with the random
deposition of trees on a given lattice; the second is the inverse of the
frequency with which one attempts to start a fire at a random site². The fire
then propagates by contaminating neighboring sites, provided these are occupied
by a tree. The distribution of burnt forest areas is experimentally described
by a power law with α ≃ 1.3-1.4, close to the SOC prediction.
We end this section by returning to the initial purpose of the model. Are the
avalanches observed in real sandpiles well described by this type of model?
The answer is rather negative. In most of the experimental setups used, the
avalanches do not occur at random times but rather periodically, and the size
distribution is not a power law either. These disagreements are linked to
inertial and cohesive effects, which in general cannot be neglected. The system
that most likely satisfies the conditions of SOC is an assembly of rice grains,
for which a power law is indeed observed³.
² I recall, of course, that the experimental realization, without an invitation
from the competent authorities, is reprehensible both morally and penally.
³ I leave to you the responsibility of setting up the experiment in your
kitchen to check the truth of these results, but I cannot be held liable for
the resulting damage.
8.4 Inelastic hard sphere model
8.4.1 Introduction
Granular media (powders, sand, ...) are characterized by the following
properties: the particle size is much larger than atomic scales (at least
several hundred micrometers); the interactions between particles act over
distances much smaller than the particle dimensions; during collisions between
particles, part of the kinetic energy is converted into local heating at the
contact; and, given the mass of the particles, the thermal agitation energy is
negligible compared with the gravitational energy and the energy injected into
the system (generally in the form of vibrations).
In the absence of a continuous energy supply, a granular medium very quickly
converts the kinetic energy it receives into heat. At the macroscopic level
(that of observation), the system appears dissipative. Given the large
separation between the microscopic and macroscopic time and length scales, the
collision time can be considered negligible compared with the flight time of a
particle, i.e. collisions between particles can be assumed instantaneous. Since
the interactions are very short ranged and the particles hardly deformable (for
collisions that are "not too violent"), the interaction potential can be
assumed to be a hard-core potential.
Momentum is of course conserved during a collision:

    v_1 + v_2 = v'_1 + v'_2.                      (8.17)

During the collision, the relative velocity of the contact point changes as
follows. In the case where the tangential component of the contact velocity is
conserved (which corresponds to a tangential restitution coefficient equal
to 1), one has

    (v'_1 − v'_2)·n = −e (v_1 − v_2)·n,           (8.18)

where e is the normal restitution coefficient and n a unit vector along the
line joining the centers of the two colliding particles. The value of this
coefficient lies between 0 and 1, the latter case corresponding to a system of
perfectly elastic hard spheres.
8.4.2 Definition
The inelastic hard-sphere system is thus defined as an ensemble of spherical
particles interacting through an infinitely hard repulsive potential. Using the
relations above, one obtains the following collision rules:

    v'_{i,j} = v_{i,j} ± (1 + e)/2 [(v_j − v_i)·n̂] n̂     (8.19)
with

    n̂ = (r_1 − r_2)/|r_1 − r_2|.                  (8.20)
One then obtains a model particularly well suited to a Molecular Dynamics
simulation study. We saw in chapter 3 the principle of Molecular Dynamics for
(elastic) hard spheres; the simulation algorithm for inelastic hard spheres is
fundamentally the same. A first difference lies in the collision rules, which
must obviously obey equation (8.19). A second difference appears during the
simulation, since the system steadily loses energy.
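The collision rule (8.19)-(8.20) is straightforward to implement. The sketch below (Python, equal masses, illustrative names) conserves the total momentum and reverses the normal relative velocity multiplied by e:

```python
import math

def collide(r1, r2, v1, v2, e):
    """Inelastic hard-sphere collision rule of Eq. (8.19), equal masses:
    the normal relative velocity is reversed and multiplied by e, while
    the tangential component and the total momentum are unchanged."""
    # unit vector along the line of centres, Eq. (8.20)
    d = [a - b for a, b in zip(r1, r2)]
    norm = math.sqrt(sum(c * c for c in d))
    n = [c / norm for c in d]
    g = sum((a - b) * c for a, b, c in zip(v2, v1, n))   # (v2 - v1).n
    f = 0.5 * (1.0 + e) * g
    v1p = [a + f * c for a, c in zip(v1, n)]             # "+" branch of (8.19)
    v2p = [a - f * c for a, c in zip(v2, n)]             # "-" branch of (8.19)
    return v1p, v2p
```

A head-on test (v_1 = −v_2) shows that the total momentum stays zero while the relative speed is reduced by the factor e, as required by Eq. (8.18).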
8.4.3 Results
This energy loss has several consequences. On the one hand, to keep the energy
constant in Molecular Dynamics, external energy must be supplied to the system;
on the other hand, the energy loss is at the origin of particle aggregation.
Even if one starts from a uniform distribution of particles in space, the
system spontaneously evolves at long times towards a situation where the
particles gather in clusters. The local density of these clusters is much
higher than the initial uniform density; correspondingly, regions of space
appear where the particle density becomes very small.
When the particle density increases, a phenomenon occurs that rapidly blocks
the simulation: inelastic collapse. Since the particles lose energy in
collisions, one may end up in a situation where three (nearly aligned)
particles undergo a sequence of collisions in which the time between collisions
decreases steadily towards zero. Asymptotically, two particles effectively
"stick" together⁴.
The effect that can appear in a simulation for a particle "sandwiched" between
two neighbors can be understood by considering the elementary phenomenon of a
single inelastic ball bouncing on a plane in a gravitational field. Let h be
the height from which the ball is dropped. At the first contact with the plane,
the elapsed time is t_1 = √(2h/g) and the velocity just before the collision is
√(2gh). Just after this first contact, the velocity becomes e√(2gh). At the
second contact, the elapsed time is

    t_2 = t_1 + 2e t_1,                           (8.21)

and at the nth contact one has

    t_n = t_1 (1 + 2 ∑_{i=1}^{n−1} e^i).          (8.22)
⁴ This is not a true sticking, since the simulation has stopped before it is
reached.
Thus, for an infinite number of collisions, one has

    t_∞ = t_1 (1 + 2e/(1 − e))                    (8.23)
        = t_1 (1 + e)/(1 − e).                    (8.24)

A finite time has therefore elapsed when the final situation is reached, in
which the ball is at rest on the plane, and this requires an infinite number
of collisions.
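The geometric accumulation of contact times in Eqs. (8.21)-(8.24) can be checked directly (Python, illustrative names):

```python
import math

def contact_times(h, e, g=9.81, nmax=60):
    """Successive contact times of an inelastic ball dropped from height h
    (Eqs. 8.21-8.22), together with the finite limit t_inf of Eq. (8.24)."""
    t1 = math.sqrt(2.0 * h / g)
    # t_n = t1 * (1 + 2 * sum_{i=1}^{n-1} e^i), for n = 1 .. nmax
    times = [t1 * (1.0 + 2.0 * sum(e ** i for i in range(1, n)))
             for n in range(1, nmax + 1)]
    t_inf = t1 * (1.0 + e) / (1.0 - e)
    return times, t_inf

times, t_inf = contact_times(h=1.0, e=0.8)
print(f"t_60 = {times[-1]:.5f},  t_inf = {t_inf:.5f}")
```

After a few tens of bounces the contact times are already indistinguishable from the finite limit t_∞, even though the number of collisions is infinite.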
For the simulation, such a phenomenon brings the evolution of the dynamics to a
halt. Beyond being a nuisance for the simulation, our physical assumptions
about the collisions cease to be valid at very small velocities. When the
velocity of a particle becomes very small, it drops below the thermal agitation
velocity, which is small but finite in reality. Physically, the restitution
coefficient actually depends on the collision velocity and is constant only for
relative impact velocities that remain macroscopic. As the velocities tend to
zero, the restitution coefficient tends to 1 and the collisions between
particles become elastic again.
Finally, even though one expects the kinetic energy of inelastic spheres to
decay to zero (in the absence of an external source supplying energy to
compensate the collisional dissipation), a simulation of inelastic spheres
cannot reach this zero-energy state, since it is stopped well before, as soon
as one sphere enters a sequence of multiple collisions between two partners.
If one wishes the modeling to be close to experiments, boundary conditions
close to the experimental ones must be used: to supply energy to the system
permanently, there is generally a vibrating wall. The collisions between the
particles and the vibrating wall result in a net positive balance of kinetic
energy injected into the system, which maintains the agitation of the particles
despite the inelastic interparticle collisions.
In conclusion, the inelastic hard-sphere model is a good tool for modeling
granular media, as long as they retain sufficient agitation. In the absence of
energy injection, the model can only describe phenomena taking place within a
limited time window.
8.4.4 Some properties
In the absence of an energy supply, the system can only cool down. Before
clusters form, there exists a scaling regime in which the system remains
globally homogeneous and the velocity distribution obeys a scaling law. To
understand this regime simply, consider the kinetic energy lost in a collision
between two particles. Using equation (8.19), one obtains for the kinetic
energy loss

    ∆E = −(1 − e²)/4 m ((v_j − v_i)·n̂)².          (8.25)
Obviously, if e = 1 (the case of elastic hard spheres), the kinetic energy is
conserved, as we saw in chapter 3. Conversely, the energy loss is maximal when
the restitution coefficient is equal to 0.
Assuming that the medium remains homogeneous, the mean collision frequency is
of order ∆v/l, where l is the mean free path. The mean energy loss per
collision is ∆E = −ε(∆v)², with ε = 1 − e², which leads to the evolution
equation

    dT/dt = −A T^{3/2},                           (8.26)

with A ∝ ε, whose solution is the cooling law

    T(t) ≃ 1/(1 + At)².                           (8.27)

This property is known as Haff's law (1983).
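A quick consistency check of Eqs. (8.26)-(8.27) (Python; with the constants made explicit, the closed-form solution of dT/dt = −A T^{3/2} reads T(t) = T_0/(1 + A√T_0 t/2)²):

```python
def haff_euler(T0, A, dt, steps):
    """Euler integration of the homogeneous cooling law dT/dt = -A T^(3/2)."""
    T, out = T0, [T0]
    for _ in range(steps):
        T -= A * T ** 1.5 * dt
        out.append(T)
    return out

# compare with the closed-form solution T(t) = T0 / (1 + A*sqrt(T0)*t/2)^2
A, T0, dt, steps = 1.0, 1.0, 1e-3, 10000
num = haff_euler(T0, A, dt, steps)
exact = [T0 / (1.0 + 0.5 * A * T0 ** 0.5 * k * dt) ** 2
         for k in range(steps + 1)]
```

The numerical and analytical curves agree to within the Euler discretization error, illustrating the algebraic 1/t² cooling.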
When the granular medium receives energy continuously, the system reaches a
steady state whose statistical properties depart from those of equilibrium. In
particular, the velocity distribution is never Gaussian. This implies, among
other things, that the granular temperature, defined as the mean square
velocity, differs from the temperature associated with the width of the
velocity distribution. It has been shown that the tail of the velocity
distribution does not, in general, decay in a Gaussian manner, and that its
asymptotic form is intimately related to the way energy is injected into the
system.
Among the properties that differ from equilibrium: in a mixture of two species
of granular particles (for example, spheres of different sizes), the granular
temperature of each species is different; there is thus no equipartition of
energy, contrary to the equilibrium case.
8.5 Exclusion models
8.5.1 Introduction
While equilibrium statistical mechanics has the Gibbs-Boltzmann distribution for characterizing the statistical distribution of the configurations of a system at equilibrium, there is no general method for constructing the distribution of states of a system in a steady state. Exclusion models are examples of simple models, introduced over the last fifteen years, on which many theoretical studies have been carried out and which lend themselves rather easily to numerical simulation for testing the various theoretical approaches. They can serve as simple models of car traffic along a road, in particular to describe traffic jams and the flux fluctuations that appear spontaneously as the density of cars increases.
There exist a large number of variants of these models, and we restrict ourselves to presenting the simplest examples in order to bring out their most significant properties. To allow analytical treatment, the one-dimensional versions of these models have been the most widely studied.
Consider a regular one-dimensional lattice of N sites on which n particles are placed. Each particle has a probability p of hopping to the adjacent site on its right if that site is empty, and a probability 1 − p of hopping to the site on its left if that site is empty. Variants of this model differ in the treatment of the boundary sites 1 and N.
For out-of-equilibrium phenomena, the specifics of the stochastic dynamics enter more or less strongly into the observed behavior. For these models, at least three different methods are available for realizing the dynamics.
• Parallel update. At the same instant, the rules are applied to all particles; one also speaks of synchronous update. This type of dynamics generally induces the strongest correlations between particles. The corresponding master equation is discrete in time.^5
• Asynchronous update. The most commonly used mechanism consists in choosing a particle uniformly at random at time t and applying the evolution rules of the model to it. The corresponding master equation is continuous in time.
• There are also situations where the update is sequential and ordered. One may sweep successively through the sites of the lattice or through the particles. The order can be chosen from left to right or the reverse. The corresponding master equation is again discrete in time.

^5 For the ASEP model, to prevent two particles, one coming from the left and the other from the right, from choosing the same site at the same time, the parallel dynamics is carried out first on the sublattice of even sites and then on the sublattice of odd sites.
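The asynchronous (random-sequential) scheme is the easiest to code. The sketch below (a minimal illustration; the function name and the value of p are arbitrary choices) performs one such elementary move of the partially asymmetric exclusion process on a ring:

```python
import random

def asep_step(sites, p=0.75, rng=random):
    """One asynchronous update of a partially asymmetric exclusion
    process on a ring: pick a particle uniformly at random; with
    probability p try to hop right, otherwise try to hop left; the
    move succeeds only if the target site is empty.
    `sites` is a list of 0/1 occupation numbers."""
    N = len(sites)
    occupied = [i for i, s in enumerate(sites) if s == 1]
    i = rng.choice(occupied)
    j = (i + 1) % N if rng.random() < p else (i - 1) % N
    if sites[j] == 0:                 # exclusion constraint
        sites[i], sites[j] = 0, 1
    return sites
```

Repeating this elementary move n times (n being the number of particles) is conventionally counted as one Monte Carlo sweep, i.e. one unit of time.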
8.5.2 Random walk on a ring
Stationarity and detailed balance
When periodic boundary conditions are imposed, a walker arriving at site N has probability p of moving to site 1. It is easy to see that such a system has a stationary state in which the probability of being on any given site is the same for all sites, P_st(i) = 1/N. One thus has

Π(i → i+1) P_st(i) = p/N   (8.28)

Π(i+1 → i) P_st(i+1) = (1 − p)/N   (8.29)

Except when the probabilities of hopping right and left are identical, i.e. p = 1/2, the detailed balance condition is not satisfied even though the stationarity condition is. This is no surprise, but it illustrates the fact that the stationarity conditions of the master equation are much more general than detailed balance. The latter is generally used in simulations of equilibrium statistical mechanics.
Totally asymmetric exclusion process
To go rather far in the analytical treatment while already illustrating part of the phenomenology of this model, one may forbid particles from moving to the left. In this case, p = 1. With a continuous-time algorithm, a particle at site i has probability dt of moving to the right during dt if the site to its right is empty; otherwise it stays on the same site. Consider P particles on a lattice of N sites, with P < N. Introduce occupation variables τ_i(t) for the sites: τ_i(t) = 1 if a particle is present on site i at time t, and τ_i(t) = 0 if the site is empty.
One can write an evolution equation for the variable τ_i(t). If site i is occupied, it can empty if the particle present at time t can move to site i + 1, which must then be vacant. If site i is empty, it can fill if there is a particle on site i − 1 at time t. Thus the state τ_i(t + dt) is the same as τ_i(t) if neither the bond to the right nor the bond to the left of site i has changed:

τ_i(t + dt) =
    τ_i(t)                                               (no jump, probability 1 − 2dt)
    τ_i(t) + τ_{i−1}(t)(1 − τ_i(t))                      (jump in from the left, probability dt)
    τ_i(t) − τ_i(t)(1 − τ_{i+1}(t)) = τ_i(t)τ_{i+1}(t)   (jump out to the right, probability dt)
                                                         (8.30)
Averaging over the history of the dynamics, i.e. from time 0 to time t, one obtains the following evolution equation:

d⟨τ_i(t)⟩/dt = −⟨τ_i(t)(1 − τ_{i+1}(t))⟩ + ⟨τ_{i−1}(t)(1 − τ_i(t))⟩   (8.31)

One sees that the evolution of the state of a site depends on the evolution of the pair formed with its neighboring sites. By reasoning similar to that just given for the evolution of τ_i(t), the equation for a pair of adjacent sites depends on the state of the sites upstream and downstream of the pair considered. One has

d⟨τ_i(t)τ_{i+1}(t)⟩/dt = −⟨τ_i(t)τ_{i+1}(t)(1 − τ_{i+2}(t))⟩ + ⟨τ_{i−1}(t)τ_{i+1}(t)(1 − τ_i(t))⟩   (8.32)
One obtains a hierarchy of equations involving clusters of sites of larger and larger size. A simple solution exists in the stationary state, since one can show that all configurations have equal weight, given by

P_st = P!(N − P)!/N!   (8.33)

which is the inverse of the number of ways of placing P particles on N sites. The stationary averages are

⟨τ_i⟩ = P/N   (8.34)

⟨τ_i τ_j⟩ = P(P − 1)/(N(N − 1))   (8.35)

⟨τ_i τ_j τ_k⟩ = P(P − 1)(P − 2)/(N(N − 1)(N − 2))   (8.36)
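These stationary averages are easy to verify in a quick simulation. The sketch below (a minimal random-sequential realization of the TASEP on a ring; lattice size and run length are arbitrary choices) measures the single-site occupation and the nearest-neighbor pair correlation:

```python
import random

random.seed(1)
N, P = 20, 8                          # ring of N sites carrying P particles
sites = [1] * P + [0] * (N - P)
random.shuffle(sites)

site_avg = pair_avg = 0.0
sweeps = 20000
for _ in range(sweeps):
    for _ in range(N):                # one Monte Carlo sweep
        i = random.randrange(N)
        j = (i + 1) % N
        if sites[i] == 1 and sites[j] == 0:   # TASEP move: hop right only
            sites[i], sites[j] = 0, 1
    site_avg += sites[0]
    pair_avg += sum(sites[k] * sites[(k + 1) % N] for k in range(N)) / N

site_avg /= sweeps
pair_avg /= sweeps
print(site_avg, P / N)                          # Eq. (8.34)
print(pair_avg, P * (P - 1) / (N * (N - 1)))    # Eq. (8.35)
```

Note that the pair correlation P(P−1)/(N(N−1)) is smaller than (P/N)²: exclusion induces weak anticorrelations at fixed particle number.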
8.5.3 Model with open boundaries
With open boundaries, the system can exchange particles with particle "reservoirs". For site 1: if the site is empty at time t, there is a probability (or probability density, for continuous-time dynamics) α that a particle is injected at site 1; if the site is occupied, there is a probability γ that the particle at site 1 leaves the lattice through the left. Similarly, if site N is occupied, there is a probability β that the particle is ejected from the lattice, and if site N is empty, there is a probability δ that a particle is injected at site N.
One can write evolution equations in the general case, but solving them is not possible in general. For simplicity, we restrict ourselves to the totally asymmetric case. The evolution equations for the occupation variables obtained for the periodic system are modified at sites 1 and N as follows: site 1 can, if empty, receive a
Figure 8.2 – Phase diagram of the ASEP model in the (α, β) plane, with the low-density phase (LDP), the high-density phase (HDP) and the maximal-current phase (CP). The dotted line is a first-order transition line and the two solid lines are continuous transition lines.
particle from the reservoir with probability α and, if occupied, lose its particle either because it returns to the reservoir with probability γ or because it has hopped to the site on its right. One then obtains

d⟨τ_1(t)⟩/dt = α(1 − ⟨τ_1(t)⟩) − γ⟨τ_1(t)⟩ − ⟨τ_1(t)(1 − τ_2(t))⟩   (8.37)
Similarly, for site N, one has

d⟨τ_N(t)⟩/dt = ⟨τ_{N−1}(t)(1 − τ_N(t))⟩ + δ⟨1 − τ_N(t)⟩ − β⟨τ_N(t)⟩   (8.38)
These equations can be solved in the stationary case. We merely summarize their essential features by drawing the phase diagram in the case δ = γ = 0, see figure 8.2.
The LDP region (low density phase) corresponds to a stationary density (far from the boundaries) equal to α, for α < 1/2 and α < β. The HDP region (high density phase) corresponds to a stationary density 1 − β and appears for β < 0.5 and α > β. The remaining region of the diagram is called the maximal-current phase and corresponds to values of α and β both strictly greater than 1/2. The dotted line separating the LDP and HDP phases is a first-order transition line: indeed, in the "thermodynamic" limit, crossing the transition line, the density undergoes a discontinuity

∆ρ = ρ_HDP − ρ_LDP = 1 − β − α   (8.39)
                   = 1 − 2α      (8.40)

which is non-zero for all α < 0.5. In a simulation of a finite-size system, one observes a sharp density variation at the site located in the middle of the lattice, more pronounced the larger the lattice.
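The low-density prediction is easy to probe numerically. The sketch below (a minimal random-sequential simulation of the open-boundary TASEP with γ = δ = 0; sizes and rates are illustrative choices) measures the density in the middle of the chain for α < 1/2 < β, where the bulk density should approach α:

```python
import random

random.seed(2)
N = 100
alpha, beta = 0.2, 0.8          # alpha < 1/2 and alpha < beta: LDP expected
sites = [0] * N

bulk, samples = 0.0, 0
for sweep in range(20000):
    for _ in range(N + 1):
        k = random.randrange(N + 1)         # bond 0 = entry, bond N = exit
        if k == 0:
            if sites[0] == 0 and random.random() < alpha:
                sites[0] = 1                # injection at the left boundary
        elif k == N:
            if sites[N - 1] == 1 and random.random() < beta:
                sites[N - 1] = 0            # ejection at the right boundary
        elif sites[k - 1] == 1 and sites[k] == 0:
            sites[k - 1], sites[k] = 0, 1   # bulk hop to the right
    if sweep >= 10000:                      # measure after relaxation
        bulk += sum(sites[40:60]) / 20.0
        samples += 1

rho = bulk / samples
print(rho)      # stationary bulk density, expected close to alpha = 0.2
```

Swapping the values of α and β and measuring 1 − ρ would probe the HDP branch in the same way.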
8.6 Kinetic constraint models
8.6.1 Introduction
The problem posed by the glass transition of structural glasses is to explain the appearance of phenomena such as the extremely strong growth of the times of relaxation toward equilibrium while no feature associated with a phase transition (divergence of a correlation length) is present. To gather data from many structural glasses, it is customary to plot the rapid growth of the relaxation time (or the viscosity) of these systems as a function of the inverse temperature, normalized for each substance by its glass transition temperature.^6
Figure 8.3 shows the Angell plot for several systems. When a straight line is obtained in this plot, the liquid (glass) is said to be "strong", whereas if curvature is observed one speaks of a "fragile" liquid (glass). The fragility is more pronounced the greater the curvature; in this plot the most fragile liquid is thus orthoterphenyl. For a strong glass, the temperature dependence of the relaxation time is evident from figure 8.3 and reads

τ = τ_0 exp(E/T)   (8.41)

where E is an activation energy that does not depend on temperature over the range considered.
For fragile glasses, several fitting formulas have been proposed; since their introduction, they have been associated with the existence or absence of an underlying transition temperature, a transition not observed because it would lie in a region inaccessible experimentally. To mention only

^6 This temperature is defined by the experimental condition that the relaxation time reaches the value of 1000 s. It depends on the quench rate imposed on the system.
Figure 8.3 – (Decimal) logarithm of the viscosity (or of the relaxation time) as a function of the inverse temperature normalized by the glass transition temperature of each substance. Germanium oxide is considered a strong glass, while orthoterphenyl is the most fragile glass shown.
the best-known, there is the Vogel-Fulcher-Tammann (VFT) law, which reads

τ = τ_0 exp(A/(T − T_0)).   (8.42)

The temperature T_0 is often interpreted as a transition temperature that cannot be reached. One can also regard the ratio A/(T − T_0) as an activation energy that therefore grows strongly as the temperature is lowered. Note that a law of the form

τ = τ_0 exp(B/T²)   (8.43)

reproduces the curvature of several fragile glasses quite well while presupposing nothing about the existence of an underlying non-zero transition temperature.
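To see how differently these fits extrapolate, the sketch below evaluates the three laws (8.41)-(8.43) at a few temperatures; all parameter values are arbitrary illustrative choices, not fits to any real glass:

```python
import math

tau0 = 1e-13        # microscopic attempt time (illustrative value)

def arrhenius(T, E=5.0):
    return tau0 * math.exp(E / T)           # strong glass, Eq. (8.41)

def vft(T, A=2.0, T0=0.3):
    return tau0 * math.exp(A / (T - T0))    # VFT law, Eq. (8.42)

def quadratic(T, B=1.5):
    return tau0 * math.exp(B / T**2)        # Eq. (8.43)

for T in (1.0, 0.6, 0.4):
    print(T, arrhenius(T), vft(T), quadratic(T))
# the VFT time blows up as T approaches T0, while the other two
# laws remain finite at every T > 0
```

This is precisely the qualitative distinction drawn in the text: only the VFT form implies an underlying finite transition temperature.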
Another generic feature of glasses is that their correlation functions decay in a form that generally cannot be fitted by a simple exponential. Again, this slow decay is often fitted by a Kohlrausch-Williams-Watts (KWW) law. This law reads

φ(t) = exp(−a t^b)   (8.44)

where φ(t) denotes the correlation function of the measured quantity. Quite generally, b is a decreasing function of temperature, typically starting from 1 at high temperature (the region where the decay of the correlation functions is exponential) and going down to 0.5 to 0.3. One can also write the correlation function as

φ(t) = exp(−(t/τ)^b)   (8.45)

where τ = a^{−1/b}. One often speaks of a stretched exponential, the stretching being stronger the smaller the exponent b.
To determine the characteristic relaxation time of the correlation function, there are three ways of measuring this time:
• Consider the characteristic time to be given by the equation φ(τ) = 0.1 φ(0). This criterion is used both in simulations and experimentally: at that level the value of the correlation function can still be determined precisely, since statistical fluctuations do not yet dominate it, which they would if one chose a criterion φ(τ) = 0.01 φ(0), in simulation data as well as in experimental data.
• More rigorously, one can say that the characteristic time is given by the integral of the correlation function, τ = ∫_0^∞ dt φ(t). This approach is convenient when working with analytic expressions, but remains difficult to implement for numerical or experimental results.
• One can also determine this time from the fitting law, Eq. (8.45). This method suffers from having to determine simultaneously the stretched-exponential exponent and the value of the time τ.
If the correlation function were described by a simple exponential, the characteristic times determined by each of the methods would coincide up to a multiplicative factor. In the general case, the three times are not simply related to one another; however, as long as the slow decay is dominated by a KWW behavior with an exponent b not too close to 0 (which corresponds to the experimental situation of structural glasses), the relation between the three times remains quasi-linear and the behaviors essentially the same. The first method is therefore the one retained in simulation studies.
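For a pure KWW decay (8.45) the first two determinations can be compared directly. The sketch below does this numerically for the illustrative choice τ = 1, b = 0.5, for which the integral is known in closed form, ∫_0^∞ exp(−(t/τ)^b) dt = (τ/b)Γ(1/b):

```python
import math

tau, b = 1.0, 0.5
phi = lambda t: math.exp(-(t / tau) ** b)   # KWW correlation, Eq. (8.45)

# Method 1: solve phi(t1) = 0.1 by bisection
lo, hi = 0.0, 100.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if phi(mid) > 0.1 else (lo, mid)
t1 = 0.5 * (lo + hi)

# Method 2: integrate phi(t) with the trapezoidal rule
dt, t2, t = 1e-3, 0.0, 0.0
while t < 200.0:
    t2 += 0.5 * (phi(t) + phi(t + dt)) * dt
    t += dt

print(t1)   # analytic value: tau * (ln 10)**(1/b) = (ln 10)**2 ~ 5.30
print(t2)   # analytic value: (tau/b) * Gamma(1/b) = 2 * Gamma(2) = 2
```

The two times differ by an O(1) factor but both scale linearly with τ, which is the quasi-linear relation invoked in the text.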
A final feature observed in glasses is the existence of so-called heterogeneous relaxation; in other words, the supercooled liquid would be divided into regions each possessing a specific characteristic time. The observed global relaxation would be the superposition of "individual" relaxations. An interpretation of this phenomenon leading to a KWW law is possible: if each domain relaxes exponentially with a typical relaxation frequency ν, the global relaxation is given by

φ(t) = ∫ dν exp(−tν) D(ν)   (8.46)

Mathematically, this is the Laplace transform of the frequency density of the different domains.
8.6.2 Facilitated spin models
Introduction
Given the universal character of the glassy behavior observed in physical situations ranging from structural glasses to polymers and even granular media, it is tempting to consider approaches that locally average over the microscopic details of the interactions. This is the ingredient on which kinetically constrained models are built. Considering moreover that heterogeneous dynamics is the source of the slow relaxation observed in glassy systems, many lattice models with discrete variables have been proposed. These models have the virtue of showing that, even with a dynamics satisfying detailed balance and a Hamiltonian of non-interacting particles, one can produce relaxation dynamics whose behavior is highly non-trivial and reproduce part of the observed phenomenology (Ritort and Sollich [2003]). As this is still a rapidly evolving field, we take no position on the pertinence of these models, but regard them as significant examples of dynamics drastically altered compared to evolution under a classical Metropolis rule. In the next chapter we will see another example (the adsorption-desorption model) where the absence of the diffusive moves present in a usual dynamics leads to a very slow and likewise non-trivial dynamics.
Fredrickson-Andersen model
The Fredrickson-Andersen model, introduced some twenty years ago by these two authors, is the following: discrete objects live on a lattice (in these lectures we restrict ourselves to a one-dimensional lattice, but generalizations to higher dimensions are possible). The discrete variables are denoted n_i and take the value 0 or 1. The interaction Hamiltonian is given by

H = Σ_{i=1}^N n_i   (8.47)
At a temperature T (with β = 1/k_B T), the density of variables in state 1, denoted n, is given by

n = 1/(1 + exp(β))   (8.48)

Variables in this state are regarded as the mobile regions, and regions in state 0 as weakly mobile regions. The model thus tries to reproduce a heterogeneous dynamics. At low temperature the density of mobile regions is small, n ∼ exp(−β). With a usual Metropolis dynamics, four situations for changing the state of a site must be considered; the transition rule for changing the state of a site depends on the state of its two neighbors. The four situations are thus

...000... ↔ ...010...   (8.49)
...100... ↔ ...110...   (8.50)
...001... ↔ ...011...   (8.51)
...101... ↔ ...111...   (8.52)

The Fredrickson-Andersen model consists in forbidding the first of the four situations, which corresponds to forbidding both the creation and the destruction of a mobile region inside an immobile region. Despite this restriction, the dynamics can explore essentially the whole phase space, as with the usual Metropolis rules; a single configuration can never be reached, namely the configuration containing only sites with the value 0.
In this case, one can show that the characteristic relaxation time in one dimension behaves as

τ = exp(3β)   (8.53)

which corresponds to a strong-glass situation and to dynamics driven by activated processes.
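A minimal simulation sketch of the one-dimensional FA model follows (parameter values are illustrative; spins flip with Metropolis probabilities for the Hamiltonian (8.47), subject to the constraint that at least one neighbor is in state 1, which forbids the moves of Eq. (8.49)):

```python
import math, random

random.seed(3)
beta, N = 1.0, 1000
n = [random.randint(0, 1) for _ in range(N)]     # random initial state

for sweep in range(4000):
    for _ in range(N):
        i = random.randrange(N)
        # kinetic constraint: site i may flip only if a neighbor is mobile
        if n[(i - 1) % N] == 1 or n[(i + 1) % N] == 1:
            if n[i] == 1:
                n[i] = 0                          # 1 -> 0: always accepted
            elif random.random() < math.exp(-beta):
                n[i] = 1                          # 0 -> 1: costs dE = +1

density = sum(n) / N
print(density)   # equilibrium value 1/(1 + exp(beta)) ~ 0.269 at beta = 1
```

Since the constraint involves only the unchanged neighbors, detailed balance is preserved and the stationary density agrees with Eq. (8.48); the interesting physics appears at large β, where relaxation toward this value becomes extremely slow.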
East Model
This model is inspired by the previous one in that the Hamiltonian is identical, but the dynamics is more constrained through the introduction of a breaking of the left-right symmetry of the model. The dynamical rules are those of the previous model minus rule (8.51): the creation of mobile regions located to the left of a mobile region is forbidden. One easily checks that all the configurations of the previous model remain accessible with this additional constraint, but the characteristic relaxation time becomes

τ = exp(1/(T² ln 2))   (8.54)
With a relaxation time growing faster than an Arrhenius law, this model is an example of a so-called fragile glass, while the FA model is an example of a strong glass. With such simple rules, very accurate simulations can be performed and analytical results obtained in a large number of situations.
8.7 Conclusion
These few examples show the diversity, and hence the richness, of the behavior of systems outside an equilibrium region. In the absence of a general theory for out-of-equilibrium systems, numerical simulation remains a powerful tool for studying their behavior, and makes it possible to test the theoretical developments that can be obtained for the simplest models.
8.8 Exercises
8.8.1 Haff Law
Consider a system of inelastic hard spheres of identical mass m, constrained to move along a straight line. The spheres are initially placed at random on the line (without overlapping), with a velocity distribution p(v, t = 0).
First consider a collision between two spheres with velocities v_1 and v_2. At the moment of the inelastic impact, the collision rule is

v'_1 − v'_2 = −α(v_1 − v_2)   (8.55)

where v'_1 and v'_2 are the post-collision velocities and α is called the restitution coefficient (0 ≤ α < 1).
♣ Q. 8.8.1-1 Which conservation law is satisfied during the collision? Give the corresponding equation.
♣ Q. 8.8.1-2 Compute v'_1 and v'_2 as functions of v_1, v_2 and α.
♣ Q. 8.8.1-3 Compute the kinetic energy lost during the collision, denoted ∆e_c, as a function of ε = 1 − α², of ∆v = (v_2 − v_1) and of m.
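A small sketch for checking one's answers numerically (it assumes conservation of momentum, which is the answer to Q. 8.8.1-1; the numerical values are arbitrary):

```python
def collide(v1, v2, alpha):
    """Post-collision velocities of two equal-mass inelastic spheres,
    assuming momentum conservation v1' + v2' = v1 + v2 together with
    the collision rule (8.55): v1' - v2' = -alpha * (v1 - v2)."""
    v1p = 0.5 * ((1 - alpha) * v1 + (1 + alpha) * v2)
    v2p = 0.5 * ((1 + alpha) * v1 + (1 - alpha) * v2)
    return v1p, v2p

v1, v2, alpha, m = 1.0, -0.5, 0.8, 1.0
v1p, v2p = collide(v1, v2, alpha)
# kinetic-energy change, to compare with the answer to Q. 8.8.1-3:
dE = 0.5 * m * (v1p**2 + v2p**2 - v1**2 - v2**2)
print(v1p, v2p, dE)
```

One can verify that dE reproduces −m ε (∆v)²/4, consistent with Eq. (8.25).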
In what follows, assume there are no correlations between colliding particles, which implies that the joint probability p(v_1, v_2, t) (the probability of finding particle 1 and particle 2 with velocities v_1 and v_2) factorizes:

p(v_1, v_2, t) = p(v_1, t) p(v_2, t)   (8.56)
♣ Q. 8.8.1-4 Show that the mean value ⟨(∆v)²⟩ lost per collision,

⟨(∆v)²⟩ = ∫ dv_1 dv_2 p(v_1, v_2, t) (∆v)²   (8.57)

can be expressed in terms of the granular temperature T, defined by the relation T = m⟨v²⟩, and of the mass m.
♣ Q. 8.8.1-5 The system undergoes successive collisions over time, and its total kinetic energy (hence its temperature) decreases. Show that

dT/dτ = −ε T/N   (8.58)

where τ is the mean number of collisions and N the total number of particles in the system.
♣ Q. 8.8.1-6 Defining n = τ/N, the mean number of collisions per particle, show that

T(n) = T(0) e^{−εn}   (8.59)

where T(0) is the initial temperature of the system.
♣ Q. 8.8.1-7 The mean collision frequency ω(T) per sphere is given by

ω(T) = ρ √(T/m)   (8.60)

where ρ is the density of the system. Establish the differential equation relating dT/dt, T(t), m and ρ.
♣ Q. 8.8.1-8 Integrate this differential equation and show that the solution behaves at long times as T(t) ∼ t^β, where β is an exponent to be determined. This decay of the energy is called Haff's law.
♣ Q. 8.8.1-9 Assume that the velocity distribution p(v, t) takes the scaling form

p(v, t) = A(t) P(v/v̄(t))   (8.61)

Show that A(t) is a simple function of v̄(t), called the characteristic velocity.
♣ Q. 8.8.1-10 Using the probability distribution defined by equation (8.61), compute e_p(t) = (m/2) ∫ dv v² p(v, t) as a function of the characteristic velocity and of constants.
♣ Q. 8.8.1-11 Is the scaling law for the velocity distribution compatible with Haff's law? Justify your answer.
♣ Q. 8.8.1-12 For n_c ∼ 1/ε, the kinetic energy no longer decays algebraically and aggregates begin to form, associated with the appearance of inelastic collapse. Describe the phenomenon of inelastic collapse in simple terms.
8.8.2 Domain growth and kinetically constrained models
Many equilibrium and out-of-equilibrium phenomena are characterized by the formation of spatial regions, called domains, in which a local order is present. The characteristic length of these domains is denoted l. Their time evolution proceeds by the motion of the boundaries, called domain walls. The mechanism of this motion depends on the physical system considered. For an out-of-equilibrium phenomenon, the evolution equation for the length l is given by

dl/dt = l/T(l)   (8.62)

where T(l) is the characteristic time of wall motion.
♥ Q. 8.8.2-1 When the wall moves freely under the effect of diffusion, one has T(l) = l². Solve the differential equation and determine the growth law of l.
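A quick numerical check of this diffusive case (a sketch; the initial condition l(0) = 1 is an arbitrary choice) integrates dl/dt = l/T(l) = 1/l and compares it with square-root growth:

```python
dt = 1e-4
l, t = 1.0, 0.0                 # arbitrary initial domain size l(0) = 1
while t < 50.0:
    l += (1.0 / l) * dt         # dl/dt = l / T(l) with T(l) = l**2
    t += dt
print(l, (1.0 + 2.0 * t) ** 0.5)    # numerical vs analytic solution
```

The agreement of the two printed values illustrates the l ∼ t^{1/2} coarsening characteristic of free diffusive wall motion.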
More generally, wall motion requires crossing a (free-)energy barrier, and the characteristic time is then given by

T(l) = l² exp(β∆E(l))   (8.63)

where ∆E(l) is an activation energy.
♣ Q. 8.8.2-2 For ∆E(l) = l^m, what does the time evolution of l become? Is it faster or slower than in the previous case? Interpret this result.
When the domain-growth phenomenon takes place in a dynamics close to equilibrium, the evolution equation is modified as follows:

dl/dt = l/T(l) − l_eq/T(l_eq)   (8.64)

where l_eq denotes the typical length of a domain at equilibrium.
♥ Q. 8.8.2-3 Solve this differential equation for l ∼ l_eq. Deduce the characteristic time of return to equilibrium as a function of T(l_eq) and of (d ln T(l)/d ln l)|_{l=l_eq}.
Kinetically constrained models are examples of systems in which relaxation toward equilibrium is very slow, even though the thermodynamics remains elementary. Consider the example called the East model. The Hamiltonian of this model is given by

H = (1/2) Σ_i S_i   (8.65)
♥ Q. 8.8.2-4 Consider a system of N spins. Compute the partition function Z of this system.
♣ Q. 8.8.2-5 Compute the concentration c of spins with the value +1 at equilibrium. At very low temperature, show that this concentration can be approximated by e^{−β}.
The dynamics of this model is Metropolis dynamics, except that flipping a spin is only possible if its left neighbor is a spin +1.
♥ Q. 8.8.2-6 Explain why the state in which all spins are negative is unreachable starting from an arbitrary configuration. A domain is defined as a sequence of spins −1 bounded on the right by a spin +1; the smallest domain has size 1 and consists of a single spin +1.
♣ Q. 8.8.2-7 Given the imposed dynamics, give the path (i.e. the sequence of configurations) that merges a wall of size 2 located to the right of a spin +1; in other words, find the sequence of configurations going from +1 −1 +1 to +1 −1 −1.
Flipping a spin −1 into a spin +1 is unfavorable at low temperature and costs an activation energy +1. More generally, creating a sequence of n spins +1 located to the right of any wall (i.e. a sequence of n domains of size 1) costs an activation energy n.
One possible path for destroying a wall of size n consists in finding a sequence of consecutive single-spin-flip configurations going from 1 −1 ... −1 1 to 1 −1 ... −1.
♣ Q. 8.8.2-8 Give this path (the sequence of configurations). This path is not the most favorable, since it requires creating a very long sequence of spins +1. We now show that a cheaper path exists.
♣ Q. 8.8.2-9 Consider a wall of size l = 2^n. If h(l) denotes the smallest sequence that must be created to make a wall of size l disappear, show by induction that

h(2^n) = h(2^{n−1}) + 1   (8.66)
♣ Q. 8.8.2-10 Deduce that the activation energy for destroying a wall of size l is ln(l)/ln(2).
♠ Q. 8.8.2-11 Taking l_eq = e^{β} and inserting the previous result into the one obtained in question 8.8.2-3, show that the relaxation time of this model behaves asymptotically at low temperature as exp(β²/ln(2)).
8.8.3 Diffusion-coagulation model
On consid`ere le mod`ele stochastique suivant: sur un r´eseau unidimensionnel de
taille L, `a l’instant t = 0, on place al´etoirement N particules not´ees A, avec N <
L. La dynamique du mod`ele est compos´ee de deux m´ecanismes: une particule
peut sauter al´eatoirement ` a droite ou ` a gauche sur un site voisin; si le site est
vide, la particule occupe ce site et lib`ere le site pr´ec´edent, si le site est occup´e,
la particule “coagule” avec la particule pr´esente pour donner une seule particule.
Les deux m´ecanismes sont repr´esent´es par les r´eactions suivantes
A + . ↔. + A (8.67)
A + A →A (8.68)
o` u le point d´esigne un site vide.
♣ Q. 8.8.3-1 Pour un r´eseau de taille L avec des conditions aux limites p´eri-
odiques, quel est l’´etat final de la dynamique? En d´eduire la densit´e ρ
L
(∞). Que
vaut cette limite quand L →∞.
♣ Q. 8.8.3-2 We want to simulate this stochastic model numerically with a sequential dynamics. Propose an algorithm for simulating this model on a lattice of size L.
♣ Q. 8.8.3-3 Does the proposed algorithm satisfy detailed balance? Justify your answer.
Let P(l, t) denote the probability of finding an interval of length l containing no particle.
♣ Q. 8.8.3-4 Express the probability Q(l, t) of finding an empty interval of length l with at least one particle located on one of the two sites bordering the interval, in terms of P(l, t) and P(l + 1, t).
♣ Q. 8.8.3-5 Show that the evolution of the probability P(l, t) is given by the following differential equation:

∂P(l, t)/∂t = Q(l − 1, t) − Q(l, t)   (8.69)

♣ Q. 8.8.3-6 Express the particle density ρ(t) on the lattice in terms of P(0, t) and P(1, t).
♣ Q. 8.8.3-7 Justify the fact that P(0, t) = 1.
♣ Q. 8.8.3-8 Taking the continuum limit, i.e. writing P(l + 1, t) = P(l, t) + ∂P(l, t)/∂l + (1/2) ∂²P(l, t)/∂l² + ..., show that one obtains the following differential equation:

∂P(l, t)/∂t = ∂²P(l, t)/∂l²   (8.70)

and that

ρ(t) = −∂P(l, t)/∂l |_{l=0}   (8.71)

♣ Q. 8.8.3-9 Verify that 1 − erf(l/(2√t)) is a solution for P(l, t) in the continuum limit, where erf is the error function. (Hint: erf(x) = (2/√π) ∫_0^x dt exp(−t²).) Deduce the density ρ(t).
We now consider a mean-field approach to the evolution of the dynamics. The evolution of the mean local density ρ̄(x, t) is then given by

∂ρ̄(x, t)/∂t = ∂²ρ̄(x, t)/∂x² − ρ̄(x, t)² (8.72)
We look for a homogeneous solution of this equation, ρ̄(x, t) = ρ(t).

♣ Q. 8.8.3-10 Solve the resulting differential equation. Compare this result with the previous exact result. What is the origin of the difference?
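For the homogeneous profile the Laplacian term drops out, and the remaining ordinary differential equation can be integrated numerically; the following Python sketch (an illustration, with an arbitrary initial density) can be used to check the analytic solution of Q. 8.8.3-10:

```python
def mean_field_density(rho0, t_max, dt=1e-3):
    """Euler integration of d rho/dt = -rho^2, the homogeneous form of the
    mean-field equation (8.72)."""
    rho = rho0
    for _ in range(round(t_max / dt)):
        rho -= rho * rho * dt
    return rho

for t in (1.0, 10.0, 100.0):
    print(t, mean_field_density(0.5, t))
```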
8.8.4 Random sequential addition
We consider the one-dimensional parking model, in which hard particles of unit length are inserted sequentially and at random. The positions of the centers of the adsorbed particles on the line are labeled by abscissas x_i, ordered increasingly, where i is an index running from 1 to n(t), n(t) being the number of adsorbed particles at time t (this array obviously does not correspond to the insertion order of the particles). The system is bounded on the left by a wall at x = 0 and on the right by a wall at x = L. We also define an array of gaps h_i; each gap is defined as the uncovered part of the line lying either between two consecutive particles or, for the two gaps at the ends of the system, between a wall and a particle. This array contains n(t) + 1 elements.

♣ Q. 8.8.4-1 Express the density of the line covered by particles in terms of n(t) and L.
♣ Q. 8.8.4-2 Given a gap h, justify the fact that Max(h − 1, 0) is the portion available for inserting a new particle. Deduce the fraction Φ of the line available for inserting a new particle, first in terms of the h_i, then in terms of the x_i.
♣ Q. 8.8.4-3 Each new particle inserted inside a gap of length h creates two gaps, one of length h′ and another of length h′′. Give the relation between h, h′ and h′′. (Hint: a sketch may help.)
To simulate this system numerically, one algorithm, denoted A, consists in choosing the position of a particle at random and testing whether this new particle overlaps one or more previously adsorbed particles. If it overlaps no adsorbed particle, the particle is placed; otherwise it is rejected and a new trial is made. At each trial, the time is incremented by ∆t. At long times, the probability of inserting a new particle becomes very small, and successful events become rare.
♣ Q. 8.8.4-4 By considering the evolution of the system at short times for simulation boxes of different sizes, justify the choice ∆t = 1/L so that the result of a simulation is independent of the box size.
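Algorithm A can be sketched as follows in Python (function and parameter names are illustrative):

```python
import random

def rsa_algorithm_A(L, n_trials, seed=0):
    """Algorithm A: draw a trial center uniformly in [0.5, L - 0.5] and
    accept the unit-length particle if it overlaps no adsorbed particle.
    Every trial, successful or not, increments the time by dt = 1/L."""
    rng = random.Random(seed)
    centers = []
    t = 0.0
    for _ in range(n_trials):
        t += 1.0 / L
        x = rng.uniform(0.5, L - 0.5)
        if all(abs(x - c) >= 1.0 for c in centers):
            centers.append(x)
    return sorted(centers), t

centers, t = rsa_algorithm_A(L=100.0, n_trials=20000)
print(len(centers) / 100.0)   # coverage grows toward the jamming density
```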
We now propose another algorithm, denoted B, defined as follows: at the start of the simulation, a number η is drawn randomly and uniformly between 0.5 and L − 0.5, and a particle is placed at η. The initial gap is replaced by two new gaps. The time is incremented by 1/L, and one then considers the gaps of this new generation. If a gap is smaller than 1, it is considered blocked and is no longer considered in the rest of the simulation. For each gap of length h greater than 1, a number is drawn at random between the lower bound of the gap increased by 0.5 and the upper bound of the gap decreased by 0.5; this creates two new gaps for the next generation, and the time is incremented by ∆τ = 1/L for each inserted particle. Once this operation has been carried out on all the gaps of a given generation, the same procedure is repeated on the gaps of the next generation. The simulation stops when no gap larger than 1 remains.
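Algorithm B lends itself to a short generation-based implementation; here is an illustrative Python sketch (the names are mine, not the text's):

```python
import random

def rsa_algorithm_B(L, seed=0):
    """Algorithm B: treat the uncovered gaps generation by generation; in
    every gap of length h > 1 place exactly one unit particle, with its
    center drawn uniformly among the allowed positions; gaps smaller
    than 1 are blocked and discarded."""
    rng = random.Random(seed)
    generation = [(0.0, L)]   # uncovered segments (lower, upper bound)
    n = 0                     # number of adsorbed particles
    while generation:
        next_generation = []
        for lo, hi in generation:
            if hi - lo <= 1.0:    # blocked gap: no unit particle fits
                continue
            x = rng.uniform(lo + 0.5, hi - 0.5)   # center of the new particle
            n += 1
            next_generation.append((lo, x - 0.5))
            next_generation.append((x + 0.5, hi))
        generation = next_generation
    return n

n_jam = rsa_algorithm_B(L=1000.0)
print(n_jam / 1000.0)   # final density: every remaining gap is below 1
```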
♣ Q. 8.8.4-5 Justify the fact that, for a finite system, the simulation time is then finite.

♣ Q. 8.8.4-6 Give an upper bound on the simulation time.

♣ Q. 8.8.4-7 Is the maximum density reached higher than, lower than, or identical to that obtained with algorithm A? Justify your answer.
♣ Q. 8.8.4-8 How does the density evolve with simulation time?

♣ Q. 8.8.4-9 Establish the relation between ∆t (algorithm A), Φ, and ∆τ (algorithm B).

♣ Q. 8.8.4-10 Propose a perfect (rejection-free) algorithm by defining, for algorithm B, a new ∆τ₁ in terms of a quantity accessible in the simulation, to be determined.
Chapter 9
Slow kinetics, aging.
Contents
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.2.1 Two-time correlation and response functions. . . . . . 164
9.2.2 Aging and scaling laws . . . . . . . . . . . . . . . . . . 165
9.2.3 Interrupted aging . . . . . . . . . . . . . . . . . . . . . 166
9.2.4 “Violation” of the fluctuation-dissipation theorem . . . 166
9.3 Adsorption-desorption model . . . . . . . . . . . . . . 167
9.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 167
9.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 168
9.3.3 Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.3.4 Equilibrium linear response . . . . . . . . . . . . . . . 170
9.3.5 Hard rods age! . . . . . . . . . . . . . . . . . . . . . . 171
9.3.6 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.4 Kovacs effect . . . . . . . . . . . . . . . . . . . . . . . . 173
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 174
9.1 Introduction
We saw in the previous chapter that statistical physics can describe phenomena lying outside thermodynamic equilibrium. The physical situations considered there drove the system toward an out-of-equilibrium state. Another class of physical situations, frequently encountered experimentally, concerns systems whose return to equilibrium is very slow.

When the relaxation time is much larger than the observation time, the quantities that can be measured in an experiment (or a numerical simulation) depend more or less strongly on the state of the system. This means that if one performs two experiments on the same system, a few days or even a few years apart, the measured quantities differ. For a long time, this situation was an obstacle to understanding the behavior of these systems.

Over the past decade or so, thanks to the accumulation of experimental results (spin glasses, aging of plastic polymers, granular media, ...), to the theoretical study of domain-growth phenomena (simulations and exact results), and to the dynamics of spin models (with quenched disorder), it has become apparent that a certain form of universality of out-of-equilibrium properties exists.

Scientific activity in this out-of-equilibrium statistical physics is currently very intense. A large fraction of the proposed laws, coming from theoretical developments, remain conjectures to this day. Numerical simulation, which can determine the precise evolution of these systems, is therefore an important contribution to a better understanding of their physics.
9.2 Formalism
9.2.1 Two-time correlation and response functions.
When the relaxation time of the system exceeds the observation time, the system cannot be considered to be at equilibrium at the beginning of the experiment, nor, often, during its course.

It is therefore necessary to consider not only the time τ elapsed during the experiment (as in an equilibrium situation), but also to keep track of the waiting time t_w elapsed between the initial preparation of the system and the beginning of the experiment. Suppose, for example, that the system is at equilibrium at time t = 0 at temperature T; it then undergoes a rapid quench to a lower temperature T_1. We denote by t_w the time elapsed between this quench and the beginning of the experiment. For a quantity A, a function of the microscopic variables of the system (magnetizations for a spin model, velocities and positions of the particles in a liquid, ...), the two-time correlation function C_A(t, t_w) is defined as

C_A(t, t_w) = ⟨A(t)A(t_w)⟩ − ⟨A(t)⟩⟨A(t_w)⟩. (9.1)

The brackets denote an average over a large number of identically prepared systems¹.
¹For example, one considers a system equilibrated “quickly” at high temperature; a quench is applied at time t = 0 and the system is left to relax for a time t_w, after which the measurement of the correlation function begins.
Similarly, one can define a response function for the variable A introduced above. Let h be the field thermodynamically conjugate to the variable A (δE = −hA); the two-time impulse response function is the functional derivative of the variable A with respect to the field h:

R_A(t, t_w) = δ⟨A(t)⟩/δh(t_w) |_{h=0}. (9.2)

In simulations, as in experiments, it is easier to compute the integrated response function, defined as

χ_A(t, t_w) = ∫_{t_w}^{t} dt′ R_A(t, t′). (9.3)

In other words, the impulse response function can be expressed through

⟨A(t)⟩ = ∫_{t_w}^{t} ds R_A(t, s) h(s) + O(h²). (9.4)

To respect causality, the response function satisfies

R_A(t, t_w) = 0, t < t_w. (9.5)
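In practice, the average in equation (9.1) is taken over independently prepared trajectories. As a sketch, the following Python fragment estimates a two-time correlation function for a toy observable (the position of an unbiased lattice random walk, an illustrative choice, not one of the models discussed here):

```python
import random

def two_time_correlation(n_traj, t, tw, seed=0):
    """Estimate C(t, t_w) = <A(t)A(t_w)> - <A(t)><A(t_w)> over n_traj
    independently prepared systems; A is the walker position."""
    rng = random.Random(seed)
    sum_t = sum_tw = sum_prod = 0.0
    for _ in range(n_traj):
        x = 0.0
        a_tw = 0.0
        for step in range(1, t + 1):
            x += rng.choice((-1.0, 1.0))
            if step == tw:
                a_tw = x              # record the observable at t_w
        sum_t += x
        sum_tw += a_tw
        sum_prod += x * a_tw
    n = float(n_traj)
    return sum_prod / n - (sum_t / n) * (sum_tw / n)

# for this walk <x(t)x(t_w)> = min(t, t_w), so the estimate should be close to t_w
c = two_time_correlation(20000, t=50, tw=20)
print(c)
```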
9.2.2 Aging and scaling laws
For any system, at least two time scales characterize the dynamics: a local reorganization time (the mean time to flip one spin, for spin models), which can be called the microscopic time t_0, and an equilibration time t_eq. When the waiting time t_w and the observation time τ lie in the range

t_0 ≪ t_w ≪ t_eq (9.6)
t_0 ≪ t_w + τ ≪ t_eq, (9.7)

the system is not yet equilibrated and will not be able to reach equilibrium on the time scale of the experiment.

In a large number of experiments, as well as in solvable models (generally within the mean-field approximation), it appears that the correlation function can be written as the sum of two functions,

C_A(t, t_w) = C_ST(t − t_w) + C_AG(ξ(t_w)/ξ(t)) (9.8)
where ξ(t_w) is a non-universal function that depends on the system. This means that, over a short period, the system reacts essentially as if it were already at equilibrium; this regime is described by the function C_ST(τ = t − t_w) and does not depend on the waiting time t_w, as in an already equilibrated system.

Take the example of the growth of a ferromagnetic domain in a spin model quenched to a temperature below the critical temperature. The first term of equation (9.8) corresponds to the relaxation of the spins inside a domain. The second term depends on the waiting time and describes the relaxation of the domain walls (ξ(t) being the characteristic domain size at time t).

The term “aging” is justified by the fact that the longer one waits before making a measurement, the more time the system needs to lose the memory of its initial configuration.
The function ξ(t_w) is not known in general, which means that one must propose simple trial functions and check whether the various correlation functions C_AG(ξ(t_w)/ξ(t)) collapse onto a single master curve.
9.2.3 Interrupted aging
If the second inequality of equation (9.7) is not satisfied, i.e. if the time t becomes much larger than the equilibration time t_eq while t_w remains smaller than it, one finds oneself in the so-called “interrupted aging” situation. Beyond short times, the system relaxes like an out-of-equilibrium system, i.e. with a non-stationary part in the correlation function. At very long times, the system returns to equilibrium; its correlation function becomes time-translation invariant again and once more satisfies the fluctuation-dissipation theorem.
9.2.4 “Violation” of the fluctuation-dissipation theorem
For a system at equilibrium, the response and correlation functions are both time-translation invariant, R_A(t, t_w) = R_A(τ = t − t_w) and C_A(t, t_w) = C_A(τ), and are related to each other by the fluctuation-dissipation theorem (see Appendix A):

R_A(τ) = −β dC_A(τ)/dτ for τ ≥ 0, and R_A(τ) = 0 for τ < 0. (9.9)
Such a relation cannot be established for a system evolving away from equilibrium². One speaks, somewhat abusively, of a violation of the fluctuation-dissipation theorem to emphasize that the relation established at equilibrium no longer holds out of equilibrium; but it is not a genuine violation, since the hypotheses of the theorem are then not fulfilled. A genuine violation would consist in observing a failure of the theorem under equilibrium conditions, which is never observed.

²The key point in the proof of the fluctuation-dissipation theorem is the knowledge of the equilibrium probability distribution.
Given the analytical results obtained for solvable models and the results of numerical simulations, one formally defines a relation between the response function and the associated correlation function,

R_A(t, t_w) = −β X(t, t_w) ∂C_A(t, t_w)/∂t_w (9.10)

where the function X(t, t_w) is defined by this very relation: it is therefore not known a priori.

When the system is at equilibrium, one has

X(t, t_w) = 1. (9.11)

At short times, the decays of the correlation function and of the response function are given by stationary functions that do not depend on the waiting time; one then has

X(t, t_w) = 1, τ = t − t_w ≪ t_w. (9.12)

Beyond this regime, the system ages and the function X differs from 1.

In mean-field-type models, the fluctuation-dissipation ratio depends only on the correlation function, X(t, t_w) = X(C(t, t_w)), at large times. In this case, one may assume that the aging system is characterized by an effective temperature

T_eff(t, t_w) = T / X(t, t_w). (9.13)
9.3 Adsorption-desorption model
9.3.1 Introduction
We now illustrate the concepts introduced above on an adsorption-desorption model. This model has several advantages: it is an extension of the parking model seen in the previous chapter, and at very long times (which may exceed the duration of the numerical simulation) the system returns to an equilibrium state. Consequently, by changing a single control parameter (essentially the chemical potential of the reservoir, which fixes the equilibrium ratio between the adsorption and desorption rates), one can move gradually from observing equilibrium properties to observing out-of-equilibrium properties.

The dynamics of this system corresponds to a form of grand-canonical Monte Carlo simulation in which only particle exchanges with a reservoir are allowed, with no particle displacements within the system. Even though the system converges to the equilibrium state in the long-time limit, its relaxation can become extremely slow, and aging of the system can then be observed.

Finally, this model helps in understanding a large number of experimental results on dense granular media.
9.3.2 Definition
Hard rods are deposited at random on a line with rate constant k₊. If the new particle does not overlap any previously adsorbed particle, it is accepted; otherwise it is rejected and a new trial is made. In addition, all adsorbed particles desorb at random with rate constant k₋. When k₋ = 0, this model reduces to the parking model, whose maximum density is ρ_∞ ≈ 0.7476... (for an initially empty line). When k₋ is nonzero but very small, the system converges very slowly toward an equilibrium state.
Noting that the properties of this model depend only on the ratio K = k₊/k₋, and after rescaling the unit of time, the evolution of the system can be written as

dρ/dt = Φ(t) − ρ/K, (9.14)

where Φ(t) is the probability of inserting a particle at time t.

At equilibrium, one has

Φ(ρ_eq) = ρ_eq/K. (9.15)

The insertion probability is given by

Φ(ρ) = (1 − ρ) exp(−ρ/(1 − ρ)). (9.16)
Inserting equation (9.16) into equation (9.15) yields the expression for the equilibrium density,

ρ_eq = L_W(K)/(1 + L_W(K)), (9.17)

where L_W(x), the Lambert-W function, is the solution of the equation x = y e^y. In the limit of small K, ρ_eq ∼ K/(1 + K), while for very large K,

ρ_eq ∼ 1 − 1/ln(K). (9.18)
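The equilibrium density (9.17) is easy to evaluate numerically; here is a Python sketch that solves y e^y = K by Newton's method rather than relying on a library Lambert-W:

```python
import math

def lambert_w(x):
    """Solve y * exp(y) = x for y >= 0 (with x >= 0) by Newton's method."""
    y = math.log(1.0 + x)                    # reasonable starting point
    for _ in range(100):
        f = y * math.exp(y) - x
        step = f / ((1.0 + y) * math.exp(y))
        y -= step
        if abs(step) < 1e-12:
            break
    return y

def rho_eq(K):
    """Equilibrium density of the adsorption-desorption model, eq. (9.17)."""
    w = lambert_w(K)
    return w / (1.0 + w)

for K in (0.1, 10.0, 1e4):
    print(K, rho_eq(K))
```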
Figure 9.1 – Time evolution of the density of the adsorption-desorption model. Schematically, the kinetics of the process divides into three regimes at high density: first 1/t, then 1/ln(t), and finally an exponential approach exp(−Γt).
The equilibrium density ρ_eq tends to 1 as K → ∞. Note that there is a discontinuity between the limit K → ∞ and the case K = ∞, since the maximum densities are 1 and 0.7476..., respectively.
9.3.3 Kinetics
Unlike that of the parking model, the kinetics of the adsorption-desorption model cannot be obtained analytically. It is nevertheless possible to analyze the dynamics qualitatively by splitting the kinetics into several regimes. In what follows we consider the case where desorption is very weak, 1/K ≪ 1.

1. Until the density approaches the saturation density of the parking model, desorption can be neglected, and up to a time t ∼ 5 the density grows like that of the parking model, i.e. as 1/t.

2. When the time interval between two adsorptions becomes comparable to the characteristic time of an adsorption, the density increases only when a desorption frees a space large enough for two new particles to be adsorbed. In this regime, one can show that the density grows essentially as 1/ln(t) (see figure 9.1).

3. At very long times, the system returns to equilibrium; the number of desorptions balances the number of adsorptions, and the approach to the equilibrium density is exponential, exp(−Γt).

Figure 9.2 – Integrated response function χ(τ) at equilibrium, normalized by its equilibrium value, versus the correlation function C(τ), also normalized, for K = 300 (the waiting time is t_w = 3000 > t_eq).
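These regimes can be observed in a direct simulation. A possible Monte Carlo sketch in Python (the rates and parameters are illustrative: one adsorption attempt per unit length and unit time, and each adsorbed rod desorbing at rate 1/K):

```python
import random

def adsorption_desorption(L, K, t_max, dt=0.1, seed=0):
    """Naive dynamics: during each interval dt, make int(L*dt) insertion
    attempts at random positions (accepted when there is no overlap) and
    remove each adsorbed rod with probability dt/K."""
    rng = random.Random(seed)
    centers = []
    densities = []
    for _ in range(round(t_max / dt)):
        for _ in range(int(L * dt)):          # adsorption attempts
            x = rng.uniform(0.5, L - 0.5)
            if all(abs(x - c) >= 1.0 for c in centers):
                centers.append(x)
        centers = [c for c in centers if rng.random() > dt / K]   # desorption
        densities.append(len(centers) / L)
    return densities

rho = adsorption_desorption(L=200.0, K=100.0, t_max=50.0)
print(rho[-1])   # density in the slow, high-coverage regime
```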
9.3.4 Equilibrium linear response
In the limit where the waiting time is much larger than the time to return to equilibrium, one must recover the fluctuation-dissipation theorem. At equilibrium, the adsorption-desorption model corresponds to a system of hard rods in a grand-canonical ensemble with chemical potential βµ = ln(K).

The integrated response function is easily computed:

χ_eq = ρ_eq(1 − ρ_eq)². (9.19)

Similarly, one can compute the value of the correlation function at equilibrium,

C_eq = ρ_eq(1 − ρ_eq)². (9.20)

The fluctuation-dissipation theorem gives

χ(τ) = C(0) − C(τ). (9.21)

For hard objects, temperature is not an essential parameter, which explains why the fluctuation-dissipation theorem is slightly modified. Figure 9.2 shows χ(τ)/χ_eq plotted against C(τ)/C_eq. The diagonal corresponds to the value given by the fluctuation-dissipation theorem.
9.3.5 Hard rods age!
We now turn to the out-of-equilibrium properties of this model in the 1/ln(t) kinetic regime, i.e. in a time domain where thermodynamic equilibrium has not yet been reached. The relevant quantity for this model is the (normalized) density autocorrelation function, defined as

C(t, t_w) = ⟨ρ(t)ρ(t_w)⟩ − ⟨ρ(t)⟩⟨ρ(t_w)⟩ (9.22)

where the brackets denote an average over a set of independent simulations.

When τ and t_w are large enough, but smaller than the time τ_eq of return to equilibrium, aging is described by a scaling function. It appears empirically that this function describes simple aging; indeed one has

C(t, t_w) = f(t_w/t). (9.23)
9.3.6 Algorithm
The two-time integrated response function is defined as

χ(t, t_w) = δρ(t)/δ ln(K(t_w)). (9.24)

In a simulation, computing the integrated response function a priori requires adding a field from time t_w onward. Such a procedure suffers from the fact that the field must be weak to remain within linear response; but if it is too small, the response function (obtained by subtracting the zero-field simulation results) becomes extremely noisy. The first method used consisted in running three simulations: one at zero field, a second at positive field, and a third at negative field. By a linear combination of the results, the terms quadratic in the field can be canceled, but cubic and higher-order terms remain. The method we present here is quite recent (2007); it was proposed by L. Berthier [2007] and requires a single simulation at zero field.

The basic idea of the method is to express the average in terms of the probabilities of observing the system at time t:

⟨ρ(t)⟩ = (1/N) Σ_{k=1}^{N} ρ(t) P_k(t_w → t) (9.25)

where P_k(t_w → t) is the probability that the k-th trajectory (or simulation) goes from t_w to t, and k is an index running over the N independent trajectories.
Given the Markovian character of the dynamics, the probability P_k(t_w → t) can be written as the product of the transition probabilities of the successive configurations taking the system from t_w to t:

P_k(t_w → t) = Π_{t′=t_w}^{t−1} J(C_{t′}^k → C_{t′+1}^k) (9.26)

where J(C_{t′}^k → C_{t′+1}^k) is the transition probability from the configuration C_{t′}^k of the k-th trajectory at time t′ to the configuration C_{t′+1}^k of the k-th trajectory at time t′ + 1. For standard Monte Carlo dynamics, this transition probability reads

J(C_t^k → C_{t+1}^k) = δ_{C_{t+1}^k, C′_t} Π(C_t^k → C′_t) + δ_{C_{t+1}^k, C_t^k} (1 − Π(C_t^k → C′_t)) (9.27)

where Π(C_t^k → C′_t) is the probability of accepting the move from configuration C_t^k at time t to the trial configuration C′_t at time t + 1. The second term on the right-hand side of equation (9.27) accounts for the fact that, if configuration C′_t is rejected, the trajectory keeps the configuration of the previous time with the complementary probability.
The integrated susceptibility is written as

χ(t, t_w) = ∂ρ(t)/∂h(t_w) (9.28)

where, for the parking model, h = ln(K).

Using the trajectory probabilities, the susceptibility can be expressed as

χ(t, t_w) = (1/N) Σ_{k=1}^{N} ρ(t) ∂P_k(t_w → t)/∂h. (9.29)
If an infinitesimal field is now added, the transition probabilities are modified, and taking the logarithm of equation (9.26) one obtains

∂P_k(t_w → t)/∂h = P_k(t_w → t) Σ_{t′=t_w}^{t−1} ∂ ln J(C_{t′}^k → C_{t′+1}^k)/∂h. (9.30)

The integrated response function can thus be computed as the average

χ(t, t_w) = ⟨ρ(t) H(t_w → t)⟩ (9.31)

where the average is taken over unperturbed trajectories, with the function H_k(t_w → t) given by

H_k(t_w → t) = Σ_{t′} ∂ ln J(C_{t′}^k → C_{t′+1}^k)/∂h. (9.32)
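For the adsorption-desorption model one must work out ∂ ln J/∂h for its specific transition probabilities, which goes beyond a short sketch. The estimator itself, however, can be illustrated on a toy case of my choosing (not the model treated in the text): a single spin refreshed by heat-bath dynamics in a field h, for which J(s_new) ∝ exp(βh·s_new), hence ∂ ln J/∂h = β s_new at h = 0, and for which the equilibrium FDT predicts χ(t, t_w) = β:

```python
import random

def berthier_chi(n_traj, t, tw, beta=1.0, seed=0):
    """Zero-field estimate chi(t, t_w) = <A(t) H(t_w -> t)> for a single
    heat-bath spin: at h = 0 the spin is resampled uniformly at each step,
    and each step after t_w contributes beta * s_new to H."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_traj):
        s = 1.0
        H = 0.0
        for step in range(1, t + 1):
            s = rng.choice((-1.0, 1.0))   # heat-bath refresh at h = 0
            if step > tw:
                H += beta * s             # d ln J / d h along the trajectory
        acc += s * H                      # A(t) * H(t_w -> t)
    return acc / n_traj

chi = berthier_chi(50000, t=30, tw=10)
print(chi)   # should fluctuate around beta = 1 for this toy model
```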
The two-time correlation function can also easily be computed on the same unperturbed trajectories. To estimate the fluctuation-dissipation ratio, one uses the relation

∂χ(t, t_w)/∂t_w = −(X(t, t_w)/T) ∂C(t, t_w)/∂t_w. (9.33)

By plotting, in a parametric diagram, the integrated susceptibility against the two-time correlation function at fixed time t and for different values of t_w, the function X(t, t_w)/T can be obtained as the local slope of this plot. If this function is piecewise constant, X(t, t_w) is interpreted as an effective temperature. Let me stress that the parametric plot must be made at constant t and for different values of t_w; this is a consequence of equation (9.33), since the differentiation is with respect to t_w and not with respect to t. Early studies were very often done incorrectly, and substantial corrections appeared when the calculations were redone following the method described above.
9.4 Kovacs effect
For systems with extremely long relaxation times, the response to a succession of perturbations applied to the system often leads to behavior very different from what is observed for a system that relaxes quickly to equilibrium. There also appears to be a large degree of universality in the response behavior of such systems. The Kovacs effect is an example, which we illustrate here on the parking model. Historically, this phenomenon was discovered in the sixties: a polymer sample is subjected to a rapid quench followed by a waiting period, then rapidly heated to an intermediate temperature. If the volume reached by the system at the moment of reheating equals the volume the system would have at equilibrium, the volume is observed first to increase and then to decrease, asymptotically approaching the equilibrium volume. This out-of-equilibrium phenomenon is in fact very universal and can be observed in the adsorption-desorption model. In figure 9.3, one sees that the maximum of the volume is larger the deeper the “quench” of the system. This phenomenon has also been observed in simulations of fragile glasses such as orthoterphenyl.

Figure 9.3 – Time evolution of the excess volume of the adsorption-desorption model, 1/ρ(t) − 1/ρ_eq(K = 500), as a function of t − t_w. In the top curve K is changed from K = 5000 to K = 500 at t_w = 240, in the middle curve K is changed from K = 2000 to K = 500 at t_w = 169, and in the bottom curve K goes from 1000 to 500 at t_w = 139.
9.5 Conclusion
The study of out-of-equilibrium phenomena is still largely in its infancy compared with that of systems at equilibrium. For systems more complex than those described above, for which there exists a whole hierarchy of time scales, one must envisage an aging sequence of the type

C_A(t_w + τ, t_w) = C_ST(τ) + Σ_i C_AG,i(ξ_i(t_w)/ξ_i(t_w + τ)). (9.34)

Identifying the sequence then becomes more and more delicate, but it reflects the intrinsic complexity of the phenomenon. This is probably the case for experimental spin glasses. In recent years, several protocols have been proposed, applicable both experimentally and numerically, in which the system is perturbed several times without being allowed to return to equilibrium. The diversity of the observed behaviors makes it possible to progressively understand the structure of these systems through the study of their dynamical properties.
Appendix A
Reference models
After the considerable development of statistical physics in the last century, several Hamiltonians have become model systems, as they describe many phenomena on the basis of simplifying assumptions. The purpose of this appendix is to give a fairly broad (though non-exhaustive) list of these models together with a brief description of their essential properties.
A.1 Lattice models
A.1.1 XY model
The XY model is a lattice spin model in which the spins are two-dimensional vectors. The lattice supporting these spins can be of any dimension. The Hamiltonian of this system reads

H = −J Σ_{⟨i,j⟩} S_i · S_j (A.1)

The notation ⟨i, j⟩ denotes a double sum in which the index i runs over the sites and j over the nearest neighbors of i; S_i · S_j is the scalar product of the two vectors S_i and S_j. The upper critical dimension of this model is 4, and the Mermin-Wagner theorem Mermin and Wagner [1966] forbids a finite-temperature transition in two dimensions associated with the order parameter, the magnetization. In fact, a finite-temperature phase transition does appear, associated with the topological defects, which undergo a bound-state/dissociated-state transition. (Concerning topological defects in condensed matter, the somewhat old but remarkably complete review by Mermin [1979] is still worth consulting.) This transition was first described by Kosterlitz and Thouless [1973].

In two dimensions, this transition is marked by no discontinuity in the derivatives of the thermodynamic potential. In simulations, its signature is found in the quantity called the helicity.
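As an elementary illustration (the function and lattice size are mine), the XY Hamiltonian on a two-dimensional periodic lattice can be evaluated directly, since S_i · S_j = cos(θ_i − θ_j):

```python
import math
import random

def xy_energy(theta, J=1.0):
    """Energy of a 2D XY configuration with periodic boundary conditions;
    theta[i][j] is the angle of the spin at site (i, j)."""
    n = len(theta)
    e = 0.0
    for i in range(n):
        for j in range(n):
            # each bond counted once: right and down neighbors
            e -= J * math.cos(theta[i][j] - theta[(i + 1) % n][j])
            e -= J * math.cos(theta[i][j] - theta[i][(j + 1) % n])
    return e

rng = random.Random(0)
n = 8
aligned = [[0.0] * n for _ in range(n)]
disordered = [[rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)] for _ in range(n)]
print(xy_energy(aligned))      # ground state: -2 * J * n**2
print(xy_energy(disordered))   # a random configuration lies higher in energy
```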
In three dimensions, this transition becomes more conventional, with the critical exponents of a continuous transition.
A.1.2 Heisenberg model
The Heisenberg model is a lattice model in which the spins are three-dimensional vectors. Formally, the Hamiltonian is given by the equation

H = −J Σ_{⟨i,j⟩} S_i · S_j (A.2)

with S_i a three-dimensional vector, the other notations remaining as defined in the previous paragraph. The lower critical dimension of this model is two, which means that the critical temperature is then zero: as soon as the temperature differs from zero, the system loses its magnetization. Unlike the XY model, the system has no topological defects, the vectors having access to a third dimension Mermin [1979]. In three dimensions, the system undergoes a continuous transition between a paramagnetic and a ferromagnetic phase at finite temperature.
A.1.3 O(n) model
The generalization of the previous models is straightforward: one can consider spins characterized by a vector of dimension n. Besides the fact that physical realizations exist for n = 4, the generalization to arbitrary n is of theoretical interest: as n tends to infinity, one converges to the spherical model, which can in general be solved analytically and which possesses a nontrivial phase diagram. Moreover, analytical methods exist for working around the spherical model (1/n expansions) which, by extrapolation, give interesting predictions at fixed n.
A.2 Off-lattice models
A.2.1 Introduction
Throughout this course we have used two simple-liquid models, namely the hard-sphere model and the Lennard-Jones model. The first accounts exclusively for geometric packing effects and thus possesses a phase diagram exhibiting a liquid-solid transition (for dimension three or higher). The Lennard-Jones model, equipped with a long-range attractive potential (representing the van der Waals forces), has a richer phase diagram: there is a liquid-gas transition with an associated critical point and a liquid-gas coexistence curve at low temperature. In the dense region one again finds a liquid-solid transition line (first order for a three-dimensional system), ending at a triple point at low temperature.
A.2.2 Stockmayer model
The Lennard-Jones model is a good model as long as the molecules or atoms considered carry no charge and no permanent dipole. This situation is obviously too restrictive to describe the solutions that make up our everyday environment. In particular, water, which consists essentially of H2O molecules (neglecting, to a first approximation, ionic dissociation), carries a large dipole moment. Staying at a very elementary level of description, the Stockmayer Hamiltonian is that of Lennard-Jones supplemented by a dipolar interaction term:

H = \sum_{i=1}^{N} \frac{p_i^2}{2m} + \sum_{i \neq j}^{N} \left[ \frac{\epsilon}{2} \left( \left(\frac{\sigma}{r_{ij}}\right)^{12} - \left(\frac{\sigma}{r_{ij}}\right)^{6} \right) + \frac{1}{r_{ij}^3} \left( \boldsymbol{\mu}_i \cdot \boldsymbol{\mu}_j - 3 (\boldsymbol{\mu}_i \cdot \hat{\mathbf{r}}_{ij}) (\boldsymbol{\mu}_j \cdot \hat{\mathbf{r}}_{ij}) \right) \right]    (A.3)
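By way of illustration, the pair interaction entering (A.3) can be coded directly. The sketch below is written per pair, with the conventional 4ε Lennard-Jones prefactor (how the double counting of the i ≠ j sum is absorbed is a matter of convention); the function name is an arbitrary choice.

```python
import numpy as np

def stockmayer_pair(r_vec, mu_i, mu_j, eps=1.0, sigma=1.0):
    """Lennard-Jones part plus point-dipole interaction for one pair of particles."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    dip = (np.dot(mu_i, mu_j)
           - 3.0 * np.dot(mu_i, rhat) * np.dot(mu_j, rhat)) / r**3
    return lj + dip

# two dipoles along z, separated along x: the dipolar term is repulsive here
ez = np.array([0.0, 0.0, 1.0])
print(stockmayer_pair(np.array([1.5, 0.0, 0.0]), 0.5 * ez, 0.5 * ez))
```

With zero dipole moments the function reduces to the plain Lennard-Jones pair energy.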
A.2.3 Kob-Andersen model
The essential characteristic of supercooled liquids is a spectacular increase of the relaxation time as the temperature decreases. This increase reaches 15 orders of magnitude, up to a point where, failing to crystallize, the system freezes: for temperatures below the so-called glass-transition temperature, it is in an out-of-equilibrium (amorphous) state. This transition temperature is not unique for a given system, since it depends in part on the quench rate, and part of the usual characteristics associated with phase transitions is absent. Since this situation concerns a very large number of systems, it is tempting to think that simple liquids might display this kind of phenomenology. However, in three-dimensional space, pure simple liquids (hard spheres, Lennard-Jones) always undergo a liquid-solid transition, and it is very difficult to observe supercooling of these liquids, even when the quench rate is made as small as possible (in simulation it nevertheless remains large compared with that of experimental protocols). One of the best-known simple-liquid models displaying a large supercooled region (with crystallization avoided) is the Kob-Andersen model Kob and Andersen [1994, 1995a,b]. This model is a mixture of two species of Lennard-Jones-type particles with the interaction potential

v_{\alpha\beta}(r) = 4\epsilon_{\alpha\beta} \left[ \left(\frac{\sigma_{\alpha\beta}}{r}\right)^{12} - \left(\frac{\sigma_{\alpha\beta}}{r}\right)^{6} \right]    (A.4)

The two species are denoted A and B, and the composition of the mixture is 80% A and 20% B. The interaction parameters are \epsilon_{AA} = 1, \epsilon_{AB} = 1.5, and \epsilon_{BB} = 0.5, and the sizes are given by \sigma_{AA} = 1, \sigma_{AB} = 0.8, and \sigma_{BB} = 0.88.
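These parameters are straightforward to put into code. A minimal sketch of the Kob-Andersen pair potential follows; the encoding of the species labels (0 for A, 1 for B) and the function name are arbitrary choices.

```python
import numpy as np

# Kob-Andersen parameters; species index 0 = A (80%), 1 = B (20%)
EPS = np.array([[1.0, 1.5],
                [1.5, 0.5]])      # eps_AA, eps_AB, eps_BB
SIG = np.array([[1.0, 0.8],
                [0.8, 0.88]])     # sigma_AA, sigma_AB, sigma_BB

def ka_pair(r, si, sj):
    """Lennard-Jones pair energy v_ab(r) of eq. (A.4) for species si, sj."""
    eps, sig = EPS[si, sj], SIG[si, sj]
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)

print(ka_pair(2.0 ** (1.0 / 6.0), 0, 0))   # minimum of the AA interaction, about -1.0
```

In production codes the potential is usually truncated (commonly at 2.5 sigma_ab) and shifted; that detail is omitted here.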
Appendix B
Linear response theory
Consider a system at equilibrium described by the Hamiltonian H_0. At time t = 0 this system is perturbed by an external action described by the Hamiltonian H',

H' = -A(r^N) F(t)    (B.1)

where F(t) is a perturbing force that depends only on time and A(r^N) is the variable conjugate to the force F. This force is assumed to decay as t → ∞ in such a way that the system returns to equilibrium. It is possible to consider a space-dependent force, but for simplicity we restrict ourselves here to a uniform force.
The evolution of the system can be described by the Liouville equation

\frac{\partial f^{(N)}(r^N, p^N, t)}{\partial t} = -iL f^{(N)}(r^N, p^N, t)    (B.2)
= \{H_0 + H', f^{(N)}(r^N, p^N, t)\}    (B.3)
= -iL_0 f^{(N)}(r^N, p^N, t) - \{A, f^{(N)}(r^N, p^N, t)\} F(t)    (B.4)

where L_0 denotes the Liouville operator associated with H_0,

L_0 = i\{H_0, \cdot\}    (B.5)
Since the system was initially at equilibrium, one has

f^{(N)}(r^N, p^N, 0) = C \exp(-\beta H_0(r^N, p^N)),    (B.6)

where C is a normalization constant. Since the external field is assumed to be weak, one performs a perturbative expansion of equation (B.4), keeping only the first nonvanishing term. One writes

f^{(N)}(r^N, p^N, t) = f_0^{(N)}(r^N, p^N) + f_1^{(N)}(r^N, p^N, t)    (B.7)
At lowest order one then obtains:

\frac{\partial f_1^{(N)}(r^N, p^N, t)}{\partial t} = -iL_0 f_1^{(N)}(r^N, p^N, t) - \{A(r^N), f_0^{(N)}(r^N, p^N)\} F(t).    (B.8)
Equation (B.8) is solved formally with the initial condition given by equation (B.6). This leads to

f_1^{(N)}(r^N, p^N, t) = -\int_{-\infty}^{t} \exp(-i(t-s)L_0) \{A, f_0^{(N)}\} F(s)\, ds.    (B.9)
Thus, the variable \langle \Delta B(t) \rangle = \langle B(t) \rangle - \langle B(-\infty) \rangle evolves as

\langle \Delta B(t) \rangle = \int dr^N dp^N \left[ f^{(N)}(r^N, p^N, t) - f_0^{(N)}(r^N, p^N) \right] B(r^N).    (B.10)
At lowest order, the difference of the Liouville distributions is given by equation (B.9), and equation (B.10) becomes

\langle \Delta B(t) \rangle = -\int dr^N dp^N \int_{-\infty}^{t} \exp(-i(t-s)L_0) \{A, f_0^{(N)}\} B(r^N) F(s)\, ds    (B.11)
= -\int dr^N dp^N \int_{-\infty}^{t} \{A, f_0^{(N)}\} \exp(i(t-s)L_0) B(r^N) F(s)\, ds    (B.12)

using the Hermiticity of the Liouville operator.
Computing the Poisson bracket, one finds

\{A, f_0^{(N)}\} = \sum_{i=1}^{N} \left( \frac{\partial A}{\partial r_i} \frac{\partial f_0^{(N)}}{\partial p_i} - \frac{\partial A}{\partial p_i} \frac{\partial f_0^{(N)}}{\partial r_i} \right)    (B.13)
= -\beta \sum_{i=1}^{N} \left( \frac{\partial A}{\partial r_i} \frac{\partial H_0}{\partial p_i} - \frac{\partial A}{\partial p_i} \frac{\partial H_0}{\partial r_i} \right) f_0^{(N)}    (B.14)
= -\beta\, iL_0 A\, f_0^{(N)}    (B.15)
= -\beta\, \frac{dA(0)}{dt}\, f_0^{(N)}    (B.16)
Inserting equation (B.16) into equation (B.12), one gets

\langle \Delta B(t) \rangle = \beta \int dr^N dp^N f_0^{(N)} \int_{-\infty}^{t} \frac{dA(0)}{dt} \exp(i(t-s)L_0) B(r^N) F(s)\, ds    (B.17)
Using the fact that

B(r^N(t)) = \exp(itL_0) B(r^N(0))    (B.18)

equation (B.17) becomes

\langle \Delta B(t) \rangle = \beta \int_{-\infty}^{t} ds \left\langle \frac{dA(0)}{dt} B(t-s) \right\rangle F(s)    (B.19)
The linear response function of B with respect to F is defined through

\langle \Delta B(t) \rangle = \int_{-\infty}^{\infty} ds\, \chi(t, s) F(s) + O(F^2)    (B.20)

Given equation (B.19), one finds the following properties:
1. Identifying equations (B.19) and (B.20), one obtains the fluctuation-dissipation theorem (FDT),

\chi(t) = \begin{cases} -\beta \frac{d}{dt} \langle A(0)B(t) \rangle & t > 0 \\ 0 & t < 0 \end{cases}    (B.21)

2. A system cannot respond to a perturbation before it has occurred. This property is called causality:

\chi(t, s) = 0, \quad t - s \leq 0    (B.22)

3. The response function (close to) equilibrium is invariant under time translation:

\chi(t, s) = \chi(t - s)    (B.23)
When A = B, the autocorrelation function is denoted

C_A(t) = \langle A(0)A(t) \rangle,    (B.24)

and one has

\chi(t) = \begin{cases} -\beta \frac{dC_A(t)}{dt} & t > 0 \\ 0 & t < 0 \end{cases}    (B.25)

Defining the integrated response

R(t) = \int_0^t \chi(s)\, ds,    (B.26)

the fluctuation-dissipation theorem then reads

R(t) = \begin{cases} \beta \left( C_A(0) - C_A(t) \right) & t > 0 \\ 0 & t < 0 \end{cases}    (B.27)
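Relations (B.25)-(B.27) are easy to verify numerically for a model autocorrelation function. The sketch below assumes a simple exponential C_A(t) = e^{-t/tau} (as for an Ornstein-Uhlenbeck process; beta and tau are arbitrary choices), builds chi(t) by finite differences, and checks that its integral reproduces beta (C_A(0) - C_A(t)):

```python
import numpy as np

beta, tau = 1.0, 2.0
t = np.linspace(0.0, 10.0, 2001)
C = np.exp(-t / tau)                     # model autocorrelation C_A(t), C_A(0) = 1

# eq. (B.25): chi(t) = -beta dC_A/dt for t > 0, here by finite differences
chi = -beta * np.gradient(C, t)

# eq. (B.26): integrated response, cumulative trapezoidal rule
R = np.concatenate(([0.0], np.cumsum(0.5 * (chi[1:] + chi[:-1]) * np.diff(t))))

# eq. (B.27): R(t) = beta (C_A(0) - C_A(t))
print(np.max(np.abs(R - beta * (C[0] - C))))    # small: discretization error only
```

In an aging system, by contrast, the parametric plot of R against C is precisely the tool used to detect violations of (B.27).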
Appendix C
Ewald sum for the Coulomb
potential
Ewald sum
Consider a set of N charged particles such that the total charge of the system is zero, \sum_i q_i = 0, in a cubic box of length L with periodic boundary conditions. To a particle i located at r_i in the reference box corresponds an infinity of images located in the copies of this initial box and labeled by the coordinates r_i + nL, where n is a vector whose components (n_x, n_y, n_z) are integers. The total energy of the system reads

U_{coul} = \frac{1}{2} \sum_{i=1}^{N} q_i \phi(r_i),    (C.1)

where \phi(r_i) is the electrostatic potential at site i,

\phi(r_i) = \sum_{j,n}^{*} \frac{q_j}{|r_{ij} + nL|}.    (C.2)

The star indicates that the sum runs over all boxes and over all particles, except j = i when n = 0.
For short-range potentials (decaying faster than 1/r^3), the interaction between two particles i and j is, to a very good approximation, computed as the interaction between particle i and the image of j closest to i (minimum-image convention). This means that this image is not necessarily in the initial box. For long-range potentials this approximation is not sufficient, because the interaction energy between two particles decays too slowly for one to restrict oneself to the first image. For Coulomb potentials it is even necessary to take into account the whole set of boxes, as well as the nature of the boundary conditions at infinity.
To overcome this difficulty, the Ewald-sum method consists in splitting (C.1) into several parts: a short-range part, obtained by screening each particle with a charge distribution (often taken Gaussian) of the same intensity but of opposite sign to that of the particle, whose contribution can then be computed with the minimum-image convention; and a long-range part, due to the introduction of a charge distribution symmetric to the previous one, whose contribution is computed in the reciprocal space of the cubic lattice. The convergence of the sum in reciprocal space depends on how diffuse the charge distribution is.
If one introduces a Gaussian charge distribution,

\rho(r) = \sum_{j=1}^{N} \sum_{n} q_j \left(\frac{\alpha}{\pi}\right)^{3/2} \exp\left[-\alpha |r - (r_j + nL)|^2\right],    (C.3)

the short-range part is given by

U_1 = \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{n} q_i q_j \frac{\mathrm{erfc}(\sqrt{\alpha}\, |r_{ij} + nL|)}{|r_{ij} + nL|},    (C.4)
where the sum over n is truncated at the first image and erfc is the complementary error function, and the long-range part by

U_2 = \frac{1}{2\pi L^3} \sum_{i=1}^{N} \sum_{j=1}^{N} \sum_{k \neq 0} q_i q_j \frac{4\pi^2}{k^2} \exp\left(-\frac{k^2}{4\alpha}\right) \cos(\mathbf{k} \cdot \mathbf{r}_{ij}).    (C.5)

From this last expression one must subtract a so-called self-interaction term, due to the interaction of each charge distribution q_j with the point charge located at the center of its Gaussian. The term to subtract is equal to

\left(\frac{\alpha}{\pi}\right)^{1/2} \sum_{i=1}^{N} q_i^2 = -U_3.    (C.6)
The Coulomb interaction energy thus becomes:

U_{coul} = U_1 + U_2 + U_3.    (C.7)

For charges (or spins, or particles) placed on the sites of a lattice, the sums in reciprocal space can be performed once and for all at the beginning of each simulation. It is therefore possible to take into account a very large number of wave vectors, which ensures very good accuracy in the calculation of the Coulomb energy.
For continuous systems, the reciprocal-space calculation must be redone every time the particles are moved, which makes the algorithm costly for large systems. More efficient algorithms then exist, such as those based on a multipole expansion.
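As an illustration, here is a minimal sketch of the three contributions U_1, U_2, U_3 for a small neutral system, using the conventions of (C.3)-(C.6) (Gaussian screening exp(-alpha r^2), hence erfc(sqrt(alpha) r) in real space). The function name and the cutoffs nmax and kmax are arbitrary choices. A classic self-consistency check is that the total energy does not depend on the splitting parameter alpha:

```python
import itertools
from math import erfc, exp, pi, sqrt

import numpy as np

def ewald_energy(pos, q, L, alpha, nmax=2, kmax=6):
    """Total Coulomb energy U1 + U2 + U3 of a neutral periodic system."""
    N = len(q)
    # U1: real-space (screened) part, summed over periodic images n
    U1 = 0.0
    for n in itertools.product(range(-nmax, nmax + 1), repeat=3):
        shift = np.array(n, dtype=float) * L
        for i in range(N):
            for j in range(N):
                if i == j and n == (0, 0, 0):
                    continue
                r = np.linalg.norm(pos[i] - pos[j] + shift)
                U1 += 0.5 * q[i] * q[j] * erfc(sqrt(alpha) * r) / r
    # U2: reciprocal-space part, k = 2 pi m / L with m != 0
    U2 = 0.0
    for m in itertools.product(range(-kmax, kmax + 1), repeat=3):
        if m == (0, 0, 0):
            continue
        k = 2.0 * pi * np.array(m, dtype=float) / L
        k2 = k @ k
        S = np.sum(q * np.exp(1j * pos @ k))        # charge structure factor
        U2 += (2.0 * pi / L**3) * exp(-k2 / (4.0 * alpha)) / k2 * abs(S) ** 2
    # U3: self-interaction correction, eq. (C.6)
    U3 = -sqrt(alpha / pi) * float(np.sum(np.asarray(q) ** 2))
    return U1 + U2 + U3

pos = np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])   # two opposite charges, CsCl-like
q = np.array([1.0, -1.0])
print(ewald_energy(pos, q, L=1.0, alpha=5.0))
print(ewald_energy(pos, q, L=1.0, alpha=8.0))        # should agree with the previous value
```

The double loop over pairs is O(N^2) and only meant to make the structure of (C.4)-(C.6) explicit; production codes vectorize both sums and tune alpha, nmax, and kmax together.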
Remarks
The sum in expression (C.5) is performed only for k ≠ 0. This results from the conditional convergence of the Ewald sums and has important physical consequences. In a periodic Coulomb system, the form of the energy indeed depends on the nature of the boundary conditions at infinity, and neglecting the energy contribution corresponding to k = 0 amounts to considering that the system is embedded in a medium of infinite dielectric constant (that is, a good conductor). This is the convention used in simulations of ionic systems. In the opposite case, that is, if the system is embedded in a dielectric medium, the fluctuations of the dipole moment of the system create surface charges that are responsible for the existence of a depolarizing field. This field adds to the energy a term which is nothing but the contribution corresponding to k = 0.
Appendix D
Hard rod model
D.1 Equilibrium properties
Consider a system of N impenetrable segments of identical length σ, constrained to move on a line. The Hamiltonian of this system reads

H = \sum_{i}^{N} \frac{1}{2} m \left(\frac{dx_i}{dt}\right)^2 + \frac{1}{2} \sum_{i \neq j} v(x_i - x_j)    (D.1)

where

v(x_i - x_j) = \begin{cases} +\infty & |x_i - x_j| < \sigma \\ 0 & |x_i - x_j| \geq \sigma. \end{cases}    (D.2)
As for all classical systems, the calculation of the partition function factorizes into two parts: the first corresponds to the kinetic part and can easily be computed (a Gaussian integral); the second is the configuration integral, which for this one-dimensional system can also be computed exactly. One has

Q_N(L, N) = \int \ldots \int \prod_i dx_i \exp\left(-\frac{\beta}{2} \sum_{i \neq j} v(x_i - x_j)\right).    (D.3)
Since the potential is either zero or infinite, the configuration integral does not depend on temperature. Since two particles cannot overlap, one can rewrite the integral Q_N by ordering the particles:

Q_N(L, N) = \int_0^{L - N\sigma} dx_1 \int_{x_1}^{L - (N-1)\sigma} dx_2 \ldots \int_{x_{N-1}}^{L - \sigma} dx_N.    (D.4)

Successive integration of equation (D.4) gives

Q_N(L, N) = (L - N\sigma)^N.    (D.5)
The canonical partition function of the system is thus determined, and the equation of state is given by the relation

\beta P = \frac{\partial \ln Q_N(L, N)}{\partial L}    (D.6)

which gives

\frac{\beta P}{\rho} = \frac{1}{1 - \sigma\rho}    (D.7)

where ρ = N/L.
The excess chemical potential is given by the relation

\exp(-\beta \mu_{ex}) = (1 - \rho) \exp\left(-\frac{\rho}{1 - \rho}\right).    (D.8)

The correlation functions can also be computed analytically.
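Equations (D.5)-(D.7) can be cross-checked numerically: differentiating ln Q_N with respect to L must reproduce the equation of state. A small sketch (the function name and the step dL are arbitrary choices):

```python
from math import log

def beta_pressure(N, L, sigma, dL=1e-6):
    """beta*P = d(ln Q_N)/dL with Q_N = (L - N sigma)^N, by central difference (eqs. D.5-D.6)."""
    def lnQ(x):
        return N * log(x - N * sigma)
    return (lnQ(L + dL) - lnQ(L - dL)) / (2.0 * dL)

N, L, sigma = 100, 200.0, 1.0
rho = N / L
print(beta_pressure(N, L, sigma), rho / (1.0 - sigma * rho))   # both about 1.0
```

Here rho = 0.5 and the packing fraction sigma*rho = 0.5, so both expressions give beta*P = 1.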
D.2 Parking model
Random sequential addition is a stochastic process in which hard particles are added sequentially, at random positions, in a space of dimension D, subject to the conditions that no new particle may overlap an already inserted one and that, once inserted, the particles are immobile.
The one-dimensional version of the model is known as the parking problem; it was introduced by the Hungarian mathematician A. Rényi in 1963 Rényi [1963].
Hard rods of length σ are dropped randomly and sequentially onto a line, respecting the above conditions. If ρ(t) denotes the density of particles on the line at time t, the kinetics of the process is governed by the following master equation:

\frac{\partial \rho(t)}{\partial t} = k_a \Phi(t),    (D.9)

where k_a is a rate constant per unit length (which can be set to unity by a change of time unit) and Φ(t), the insertion probability at time t, is also the fraction of the line available for the insertion of a new particle at time t. The particle diameter is taken as the unit of length.
It is useful to introduce the gap distribution function G(h, t), defined such that G(h, t)dh is the density of gaps of length between h and h + dh at time t. For a gap of length h, the space available for inserting a new particle is h − 1; consequently, the available fraction of the line Φ(t) is simply the sum of (h − 1) over the set of available gaps, i.e. over G(h, t):

\Phi(t) = \int_1^{\infty} dh\, (h - 1)\, G(h, t).    (D.10)

Since each gap corresponds to one particle, the particle density ρ(t) is expressed as

\rho(t) = \int_0^{\infty} dh\, G(h, t),    (D.11)

while the uncovered fraction of the line is related to G(h, t) by

1 - \rho(t) = \int_0^{\infty} dh\, h\, G(h, t).    (D.12)
The two previous equations represent sum rules for the gap distribution function. During the process, this function G(h, t) evolves as

\frac{\partial G(h, t)}{\partial t} = -H(h - 1)(h - 1)\, G(h, t) + 2 \int_{h+1}^{\infty} dh'\, G(h', t),    (D.13)

where H(x) is the Heaviside function. The first term on the right-hand side of equation (D.13) (destruction term) corresponds to the insertion of a particle inside a gap of length h (for h ≥ 1), while the second term (creation term) corresponds to the insertion of a particle in a gap of length h' > h + 1. The factor 2 accounts for the two ways of creating a gap of length h from a larger gap of length h'. Note that the time evolution of the gap distribution function G(h, t) is entirely determined by the gaps larger than h.
We now have a complete set of equations, which results from the property that the insertion of a particle in a given gap has no effect on the other gaps (screening effect). The equations can be solved using the following ansatz, for h > 1:

G(h, t) = F(t) \exp(-(h - 1)t),    (D.14)

which leads to

F(t) = t^2 \exp\left(-2 \int_0^t du\, \frac{1 - e^{-u}}{u}\right).    (D.15)

Integrating equation (D.13) with this solution for G(h, t) for h > 1, one then obtains G(h, t) for 0 < h < 1:

G(h, t) = 2 \int_0^t du\, \exp(-uh)\, \frac{F(u)}{u}.    (D.16)
The three equations (D.10), (D.11), and (D.12) lead, of course, to the same result for the density ρ(t):

\rho(t) = \int_0^t du\, \exp\left(-2 \int_0^u dv\, \frac{1 - e^{-v}}{v}\right).    (D.17)

This result was first obtained by Rényi.
A nontrivial property of the model is that the process reaches a jamming limit (as t → ∞) at which the density saturates at the value \rho_{\infty}\sigma = 0.7476...; this is significantly smaller than the maximal geometric packing density (\rho_{\infty} = 1), which is obtained when the particles are allowed to diffuse along the line. Moreover, it is easy to see that the jamming limit depends on the initial conditions: here we started from an empty line. By contrast, at equilibrium, the final state of the system is determined only by the chemical potential and keeps no memory of the initial state.
The long-time kinetics can be obtained from equation (D.17):

\rho_{\infty} - \rho(t) \simeq e^{-2\gamma}\, \frac{1}{t}    (D.18)

where γ is the Euler constant, which shows that the approach to saturation is algebraic.
The structure of the configurations generated by this irreversible process has a number of unusual properties. At saturation, the gap distribution has a logarithmic divergence (integrable, of course) at contact, h → 0:

G(h, \infty) \simeq -e^{-2\gamma} \ln(h).    (D.19)

Moreover, the pair correlations between particles are extremely weak at large distance:

g(r) - 1 \propto \frac{1}{\Gamma(r)} \left(\frac{2}{\ln r}\right)^{r}    (D.20)

where Γ(x) is the Gamma function: the decay of g(r) is said to be super-exponential and thus differs from the equilibrium situation, where an exponential decay is obtained.
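The jamming density above is easy to reproduce with a direct simulation of the parking process. The sketch below (the parameters L, n_attempts, and the seed are arbitrary choices) drops unit rods at random positions and rejects overlapping attempts; since the approach to saturation is only algebraic, the coverage slowly approaches Rényi's constant 0.7476... from below:

```python
import bisect
import random

def rsa_line(L, n_attempts, seed=0):
    """Random sequential addition of unit rods on a line of length L; returns the coverage."""
    rng = random.Random(seed)
    lefts = []                       # sorted left ends of accepted rods
    for _ in range(n_attempts):
        x = rng.uniform(0.0, L - 1.0)
        i = bisect.bisect_left(lefts, x)
        # reject if the new rod [x, x+1] overlaps its left or right neighbor
        if (i > 0 and lefts[i - 1] > x - 1.0) or (i < len(lefts) and lefts[i] < x + 1.0):
            continue
        lefts.insert(i, x)
    return len(lefts) / L

rho = rsa_line(L=500.0, n_attempts=500_000, seed=1)
print(rho)    # approaches the jamming density 0.7476... for long runs
```

Note that the time variable of (D.9) corresponds to attempts per unit length, so this run has t = 1000 and, from (D.18), a density deficit of order e^{-2 gamma}/t.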
Bibliography
B. J. Alder and T. E. Wainwright. Phase transition for a hard sphere system. The
Journal of Chemical Physics, 27(5):1208–1209, 1957. doi: 10.1063/1.1743957.
URL http://link.aip.org/link/?JCP/27/1208/1.
Per Bak, Chao Tang, and Kurt Wiesenfeld. Self-organized criticality: An ex-
planation of the 1/f noise. Phys. Rev. Lett., 59(4):381–384, Jul 1987. doi:
10.1103/PhysRevLett.59.381.
Ludovic Berthier. Efficient measurement of linear susceptibilities in molecu-
lar simulations: Application to aging supercooled liquids. Physical Review
Letters, 98(22):220601, 2007. doi: 10.1103/PhysRevLett.98.220601. URL
http://link.aps.org/abstract/PRL/v98/e220601.
K. Binder. Applications of Monte Carlo methods to statistical physics. Reports
on Progress in Physics, 60(5):487–559, 1997. URL http://stacks.iop.org/
0034-4885/60/487.
D. Chandler. Introduction to Modern Statistical Mechanics. Oxford University
Press, New York, USA, 1987.
C. Dress and W. Krauth. Cluster algorithm for hard spheres and related sys-
tems. Journal Of Physics A-Mathematical And General, 28(23):L597–L601,
December 1995.
Alan M. Ferrenberg and Robert H. Swendsen. Optimized Monte Carlo data anal-
ysis. Phys. Rev. Lett., 63(12):1195–1198, 1989. URL http://link.aps.org/
abstract/PRL/v63/p1195.
Alan M. Ferrenberg and Robert H. Swendsen. New Monte Carlo technique for
studying phase transitions. Phys. Rev. Lett., 61:2635, 1988.
Alan M. Ferrenberg, D. P. Landau, and Robert H. Swendsen. Statistical errors
in histogram reweighting. Phys. Rev. E, 51(5):5092–5100, 1995. URL http:
//link.aps.org/abstract/PRE/v51/p5092.
D. Frenkel and B. Smit. Understanding Molecular Simulation: from algorithms
to applications. Academic Press, London, UK, 1996.
Nigel Goldenfeld. Lectures on Phase Transitions and the Renormalization Group.
Addison-Wesley, New York, USA, 1992.
J. P. Hansen and I. R. McDonald. Theory of simple liquids. Academic Press,
London, UK, 1986.
J. R. Heringa and H. W. J. Blote. Geometric cluster Monte Carlo simulation.
Physical Review E, 57(5):4976–4978, May 1998a.
J. R. Heringa and H. W. J. Blote. Geometric symmetries and cluster simulations.
Physica A, 254(1-2):156–163, May 1998b.
Haye Hinrichsen. Non-equilibrium critical phenomena and phase transitions into
absorbing states. Advances in Physics, 49(7):815–958, 2000. URL http://
www.informaworld.com/10.1080/00018730050198152.
W. Kob and H. C. Andersen. Scaling behavior in the beta-relaxation regime of a
supercooled Lennard-Jones mixture. Physical Review Letters, 73(10):1376–1379,
September 1994.
W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled
binary Lennard-Jones mixture - the Van Hove correlation function. Phys. Rev.
E, 51(5):4626–4641, May 1995a.
W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled
binary Lennard-Jones mixture. 2. Intermediate scattering function and dynamic
susceptibility. Phys. Rev. E, 52(4):4134–4153, October 1995b.
J. M. Kosterlitz and D. J. Thouless. Metastability and phase transitions in two-
dimensional systems. J. Phys. C, 6:1181, 1973.
Werner Krauth. Statistical Mechanics: Algorithms and Computations. Oxford
University Press, London, UK, 2006.
David P. Landau and Kurt Binder. A Guide to Monte Carlo Simulations in
Statistical Physics. Cambridge University Press, New York, USA, 2000.
N. D. Mermin. The topological theory of defects in ordered media. Rev. Mod.
Phys., 51:591, 1979.
N. D. Mermin and H. Wagner. Absence of ferromagnetism and antiferromagnetism
in one- or two-dimensional isotropic Heisenberg models. Phys. Rev. Lett., 17:
1133, 1966.
P. Poulain, F. Calvo, R. Antoine, M. Broyer, and Ph. Dugourd. Performances of
Wang-Landau algorithms for continuous systems. Phys. Rev. E, 73(5):056704,
2006. URL http://link.aps.org/abstract/PRE/v73/e056704.
A. R´enyi. Sel. Trans. Math. Stat. Prob., 4:205, 1963.
F. Ritort and P. Sollich. Glassy dynamics of kinetically constrained models.
Advances in Physics, 52(4):219–342, 2003. URL http://www.informaworld.
com/10.1080/0001873031000093582.
Robert H. Swendsen and Jian-Sheng Wang. Nonuniversal critical dynamics in
Monte Carlo simulations. Phys. Rev. Lett., 58(2):86–88, 1987. URL http:
//link.aps.org/abstract/PRL/v58/p86.
J. Talbot, G. Tarjus, P. R. Van Tassel, and P. Viot. From car parking to
protein adsorption: an overview of sequential adsorption processes. Col-
loids and Surfaces A: Physicochemical and Engineering Aspects, 165(1-3):287–
324, May 2000. URL http://www.sciencedirect.com/science/article/
B6TFR-3YF9X8C-M/2/a91f10d760e37c8def4d51f17c548c3e.
S. H. Tsai, H. K. Lee, and D. P. Landau. Molecular and spin dynamics simulations
using modern integration methods. American Journal Of Physics, 73(7):615–
624, July 2005.
Donald L Turcotte. Self-organized criticality. Reports on Progress in Physics, 62
(10):1377–1429, 1999. URL http://stacks.iop.org/0034-4885/62/1377.
Fugao Wang and D. P. Landau. Efficient, multiple-range random walk algorithm
to calculate the density of states. Phys. Rev. Lett., 86(10):2050–2053, 2001a.
URL http://link.aps.org/abstract/PRL/v86/p2050.
Fugao Wang and D. P. Landau. Determining the density of states for classi-
cal statistical models: A random walk algorithm to produce a flat histogram.
Phys. Rev. E, 64(5):056101, 2001b. URL http://link.aps.org/abstract/
PRE/v64/e056101.
A. P. Young. Spin glasses and random fields. World Scientific, Singapore, 1998.
Chenggang Zhou and R. N. Bhatt. Understanding and improving the Wang-
Landau algorithm. Phys. Rev. E, 72(2):025701, 2005. URL http://link.aps.
org/abstract/PRE/v72/e025701.
Contents
1 Statistical mechanics and numerical simulation 3
1.1 Brief History of simulation . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Ensemble averages . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Microcanonical ensemble . . . . . . . . . . . . . . . . . . . 5
1.2.2 Canonical ensemble . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Grand canonical ensemble . . . . . . . . . . . . . . . . . . 7
1.2.4 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . . . 8
1.3 Model systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.3 Ising model and lattice gas. Equivalence . . . . . . . . . . 10
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.1 ANNNI Model . . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Blume-Capel model . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6.1 Potts model . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2 Monte Carlo method 21
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Uniform and weighted sampling . . . . . . . . . . . . . . . . . . . 22
2.3 Markov chain for sampling an equilibrium system . . . . . . . . . 23
2.4 Metropolis algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.1 Ising model . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.6 Random number generators . . . . . . . . . . . . . . . . . . . . . 31
2.6.1 Generating non uniform random numbers . . . . . . . . . 33
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7.1 Inverse transformation . . . . . . . . . . . . . . . . . . . . 38
2.7.2 Detailed balance . . . . . . . . . . . . . . . . . . . . . . . 39
2.7.3 Acceptance probability . . . . . . . . . . . . . . . . . . . . 39
2.7.4 Random number generator . . . . . . . . . . . . . . . . . . 41
3 Molecular Dynamics 43
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Equations of motion . . . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Discretization. Verlet algorithm . . . . . . . . . . . . . . . . . . . 45
3.4 Symplectic algorithms . . . . . . . . . . . . . . . . . . . . . . . . 47
3.4.1 Liouville formalism . . . . . . . . . . . . . . . . . . . . . . 47
3.4.2 Discretization of the Liouville equation . . . . . . . . . . . 50
3.5 Hard sphere model . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6 Molecular Dynamics in other ensembles . . . . . . . . . . . . . . . 53
3.6.1 Andersen algorithm . . . . . . . . . . . . . . . . . . . . . . 54
3.6.2 Nos´e-Hoover algorithm . . . . . . . . . . . . . . . . . . . . 54
3.7 Brownian dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 57
3.7.1 Different timescales . . . . . . . . . . . . . . . . . . . . . . 57
3.7.2 Smoluchowski equation . . . . . . . . . . . . . . . . . . . . 57
3.7.3 Langevin equation. Discretization . . . . . . . . . . . . . . 57
3.7.4 Consequences . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9.1 Multi timescale algorithm . . . . . . . . . . . . . . . . . . 59
4 Correlation functions 61
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Radial distribution function . . . . . . . . . . . . . . . . . 62
4.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . . . . 66
4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Time correlation functions . . . . . . . . . . . . . . . . . . 67
4.3.3 Computation of the time correlation function . . . . . . . 68
4.3.4 Linear response theory: results and transport coefficients . 69
4.4 Space-time correlation functions . . . . . . . . . . . . . . . . . . . 70
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . . . 70
4.4.3 Intermediate scattering function . . . . . . . . . . . . . . . 72
4.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . . . . 72
4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.2 4-point correlation function . . . . . . . . . . . . . . . . . 73
4.5.3 4-point susceptibility and dynamic correlation length . . . 73
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7.1 “The structure factor in all its states!” . . . . . . . . . . . . 74
4.7.2 Van Hove function and intermediate scattering function . . 75
5 Phase transitions 79
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . . . 81
5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . . . . . . . 87
5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.6 Reweighting Method . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.1 Finite size scaling for continuous transitions: logarithmic cor-
rections . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.2 Some aspects of the finite size scaling: first-order transition 98
6 Monte Carlo Algorithms based on the density of states 101
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2.1 Definition and physical meaning . . . . . . . . . . . . . . . 102
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . . . . . . . 105
6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . . . . . . . 106
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6.1 Some properties of the Wang-Landau algorithm . . . . . . 108
6.6.2 Wang-Landau and statistical temperature algorithms . . . 109
7 Monte Carlo simulation in different ensembles 113
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . . . . . . . 113
7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 113
7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . . . . . . . 116
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
7.3 Liquid-gas transition and coexistence curve . . . . . . . . . . . . . 119
7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . . . 121
7.5 Monte Carlo method with multiple Markov chains . . . . . . . . . 123
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
8 Out of equilibrium systems 129
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
8.2 Random sequential addition . . . . . . . . . . . . . . . . . . . . . 131
8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Avalanche model . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Inelastic hard sphere model . . . . . . . . . . . . . . . . . . . . . 141
8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4.4 Some properties . . . . . . . . . . . . . . . . . . . . . . . . 143
8.5 Exclusion models . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.2 Random walk on a ring . . . . . . . . . . . . . . . . . . . . 146
8.5.3 Model with open boundaries . . . . . . . . . . . . . . . . . 147
8.6 Kinetic constraint models . . . . . . . . . . . . . . . . . . . . . . 149
8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 149
8.6.2 Facilitated spin models . . . . . . . . . . . . . . . . . . . . 152
8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.1 Haff Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.2 Domain growth and kinetically constrained model . . . . . . 156
8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . . . 158
8.8.4 Random sequential addition . . . . . . . . . . . . . . . . . 159
9 Slow kinetics, aging. 163
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.2.1 Two-time correlation and response functions. . . . . . . . . 164
9.2.2 Aging and scaling laws . . . . . . . . . . . . . . . . . . . . 165
9.2.3 Interrupted aging . . . . . . . . . . . . . . . . . . . . . . . 166
9.2.4 “Violation” of the fluctuation-dissipation theorem . . . . . 166
9.3 Adsorption-desorption model . . . . . . . . . . . . . . . . . . . . . 167
9.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 167
9.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
9.3.3 Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.3.4 Equilibrium linear response . . . . . . . . . . . . . . . . . 170
9.3.5 Hard rods age! . . . . . . . . . . . . . . . . . . . . . . . . 171
9.3.6 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
9.4 Kovacs effect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
A Reference models 177
A.1 Lattice models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1.1 XY model . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1.2 Heisenberg model . . . . . . . . . . . . . . . . . . . . . . . 178
A.1.3 O(n) model . . . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2 Off-lattice models . . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.2 Stockmayer model . . . . . . . . . . . . . . . . . . . . . 179
A.2.3 Kob-Andersen model . . . . . . . . . . . . . . . . . . . . . 179
B Linear response theory 181
C Ewald sum for the Coulomb potential 185
D Hard rod model 189
D.1 Equilibrium properties . . . . . . . . . . . . . . . . . . . . . . . . 189
D.2 Parking model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190


Chapter 1

Statistical mechanics and numerical simulation

Contents
1.1 Brief History of simulation . . . . . . . . . . . . . . . . 3
1.2 Ensemble averages . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 Microcanonical ensemble . . . . . . . . . . . . . . . . . 5
1.2.2 Canonical ensemble . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Grand canonical ensemble . . . . . . . . . . . . . . . . 7
1.2.4 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . 8
1.3 Model systems . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.3 Ising model and lattice gas. Equivalence . . . . . . . . 10
1.4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.5.1 ANNNI Model . . . . . . . . . . . . . . . . . . . . . . . 15
1.6 Blume-Capel model . . . . . . . . . . . . . . . . . . . . . 16
1.6.1 Potts model . . . . . . . . . . . . . . . . . . . . . . . . 18

1.1 Brief History of simulation

Numerical simulation started in the fifties, when computers were used for the first time for peaceful purposes. In particular, the computer MANIAC1 started in 1952 at Los Alamos. Simulation provides a complementary approach to theoretical methods2. Areas of physics where perturbative approaches are efficient (dilute gases, vibrations in quasi-harmonic solids) do not require simulation methods. Conversely, liquid state physics, where few exact results are known and where theoretical developments are not always under control, has developed largely through simulation. The first Monte Carlo simulation of liquids was performed by Metropolis et al. in 19533. The first Molecular Dynamics simulation was performed on the hard-disk model by Alder and Wainwright in 1957 Alder and Wainwright [1957]. The first Molecular Dynamics simulation of a simple liquid (argon) was performed by Rahman in 1964.

Over the last two decades, the increasing power of computers combined with their decreasing cost has made numerical simulation accessible on personal computers. Even if supercomputers remain necessary for extensive simulations, it has become possible to perform simulations on low-cost computers. The unit used to measure the power of a computer is the GFlops (billion of floating-point operations per second). Nowadays, a personal computer (for instance, with an Intel Core i7 processor) has four cores (floating-point units) and offers a power of 24 GFlops. Whereas the clock frequency of processors seems to have plateaued in the last few years, the power of computers continues to increase, because the number of cores within a processor grows; processors with 6 or 8 cores are now available. In general, the speed of a program is not a linear function of the number of cores. For scientific codes, it is possible to exploit multicore processors by using libraries which perform parallel computing (OpenMP, MPI). The GNU compilers are free software allowing for parallel computing. For massive parallelism, MPI (Message Passing Interface) is a library which spreads the computing load over many cores.

1 MANIAC is an acronym for “Mathematical Analyzer, Numerical Integrator, And Computer”. MANIAC I started on March 15, 1952.
Table 1.1 gives the characteristics and power of the most powerful computers in the world. It is worth noting the rapid evolution of computer power: in 2009, only one computer exceeded the PFlops, while four do this year. In the same period, the IDRIS computer went from rank 9 last year to rank 38 this year. Note also that China owns the second most powerful computer in the world. A last remark concerns the operating systems of the 500 fastest computers: Linux and associated distributions run on 484 of them, Windows on 5, and others on 11.
2 Sometimes, theories are in their infancy and numerical simulation is the sole means of studying models.
3 Nicholas Constantine Metropolis (1915-1999), a mathematician and physicist by training, was hired by J. Robert Oppenheimer at the Los Alamos National Laboratory in April 1943. He was one of the scientists of the Manhattan Project and collaborated with Enrico Fermi and Edward Teller on the first nuclear reactors. After the war, Metropolis went back to Chicago as an assistant professor, and returned to Los Alamos in 1948, where he created the Theoretical Division. He built the MANIAC computer in 1952, and MANIAC II five years later. From 1957 to 1965 he was back in Chicago, where he founded the Computer Research Division, before finally returning to Los Alamos.


Ranking  Machine / Cores       Rmax (TFlops)  Rpeak (TFlops)  Site, Country, Year
1        Cray XT5 / 224162     1759           2331            Oak Ridge Nat. Lab., USA, 2009
2        Dawning / 120640      1271           2984            NSCS, China, 2010
3        Roadrunner / 122400   1042           1375            DoE, USA, 2009
4        Cray XT5 / 98928      831            1028            NICS, USA, 2009
5        Blue Gene/P / 212992  825            1002            FZJ, Germany, 2009
...      ...                   ...            ...             ...
18       SGI / 23040           237            267             France, 2010

Table 1.1 – June 2010 ranking of supercomputers (Rmax is the measured performance, Rpeak the theoretical peak)

1.2 Ensemble averages

Knowledge of the partition function of a system allows one to obtain all its thermodynamic quantities. We first briefly review the main ensembles used in Statistical Mechanics. We assume that, in the thermodynamic limit, the different ensembles lead to the same thermodynamic quantities; this property holds for systems where the interactions between particles are not long-ranged and for systems without quenched disorder. For finite-size systems (which correspond to those studied in computer simulation), there are differences between ensembles, which we will analyze in the following.

1.2.1 Microcanonical ensemble

The system is characterized by the set of macroscopic variables: volume V, total energy E and number of particles N. This ensemble is not appropriate for experimental studies, where one generally has

• a fixed number of particles, at fixed pressure P and temperature T; this corresponds to the set of variables (N, P, T) and to the isothermal-isobaric ensemble;

• a fixed chemical potential µ, volume V and temperature T: the (µ, V, T) or grand canonical ensemble;

• a fixed number of particles N, volume V and temperature T: the (N, V, T) or canonical ensemble.

A Monte Carlo method exists for the microcanonical ensemble, but it is rarely used, in particular for molecular systems. Conversely, the microcanonical ensemble is the natural ensemble for the Molecular Dynamics of Hamiltonian systems, where the total energy is conserved during the simulation.

The variables conjugate to the global quantities defining the ensemble fluctuate in time. For the microcanonical ensemble, these are the pressure P conjugate to the volume V, the temperature T conjugate to the energy E, and the chemical potential µ conjugate to the total number of particles N.

1.2.2 Canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the total number of particles N. Denoting the Hamiltonian of the system H, the partition function reads

Q(V, β, N) = Σ_α exp(−βH(α))    (1.1)

where β = 1/kB T (kB is the Boltzmann constant). The sum runs over all configurations of the system, and α denotes the index of these configurations. If the configurations are continuous, the sum is replaced with an integral.

The free energy F(V, β, N) of the system is given by

βF(V, β, N) = − ln(Q(V, β, N)).    (1.2)

One defines the probability of having a configuration α as

P(V, β, N; α) = exp(−βH(α)) / Q(V, β, N).    (1.3)

One easily checks that the basic properties of a probability distribution are satisfied, i.e. Σ_α P(V, β, N; α) = 1 and P(V, β, N; α) > 0.

The derivatives of the free energy are related to the moments of this probability distribution, which gives a microscopic interpretation of the macroscopic thermodynamic quantities. The mean energy and the specific heat are then given by

• Mean energy

U(V, β, N) = ∂(βF(V, β, N))/∂β    (1.4)
           = Σ_α H(α) P(V, β, N; α)    (1.5)
           = ⟨H(α)⟩    (1.6)

• Specific heat

Cv(V, β, N) = −kB β² ∂U(V, β, N)/∂β    (1.7)
            = kB β² [ Σ_α H²(α) P(V, β, N; α) − ( Σ_α H(α) P(V, β, N; α) )² ]    (1.8)
            = kB β² ( ⟨H(α)²⟩ − ⟨H(α)⟩² )    (1.9)
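As a concrete numerical illustration (not part of the original notes), the averages (1.1)-(1.9) can be checked by brute-force enumeration when the system is small enough that all configurations can be listed. The sketch below assumes a toy model of N independent two-level units with level spacing eps, and sets kB = 1; the model and all parameter values are hypothetical choices made for the test.

```python
import itertools
import math

def canonical_averages(energies, beta):
    """Partition function Q, mean energy U and specific heat Cv from an
    explicit list of configuration energies H(alpha), following
    Eqs. (1.1), (1.3), (1.5) and (1.9), with kB = 1."""
    weights = [math.exp(-beta * e) for e in energies]
    Q = sum(weights)                                    # Eq. (1.1)
    p = [w / Q for w in weights]                        # Eq. (1.3)
    U = sum(e * pe for e, pe in zip(energies, p))       # Eq. (1.5)
    H2 = sum(e * e * pe for e, pe in zip(energies, p))
    Cv = beta ** 2 * (H2 - U * U)                       # Eq. (1.9)
    return Q, U, Cv

# Toy model: N independent two-level units with energies 0 and eps;
# a configuration alpha is the tuple of excitations of the units.
N, eps, beta = 3, 1.0, 0.7
energies = [eps * sum(c) for c in itertools.product((0, 1), repeat=N)]
Q, U, Cv = canonical_averages(energies, beta)
U_exact = N * eps / (math.exp(beta * eps) + 1)  # closed form, for comparison
```

The enumeration reproduces the closed-form results for independent two-level units, which is a useful sanity check before moving to models where no closed form exists.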


1.2.3 Grand canonical ensemble

The system is characterized by the following set of variables: the volume V, the temperature T and the chemical potential µ. Denoting by HN the Hamiltonian of N particles, the grand partition function Ξ(V, β, µ) reads

Ξ(V, β, µ) = Σ_{N=0}^{∞} Σ_{αN} exp(−β(HN(αN) − µN))    (1.10)

where β = 1/kB T (kB is the Boltzmann constant); the sums run over all numbers of particles from 0 to ∞ and, for each N, over all configurations αN of N particles. The grand potential is given by

βΩ(V, β, µ) = − ln(Ξ(V, β, µ)).    (1.11)

In a similar way, one defines the probability distribution P(V, β, µ; αN) of having a configuration αN (with N particles) by the relation

P(V, β, µ; αN) = exp(−β(HN(αN) − µN)) / Ξ(V, β, µ).    (1.12)

The derivatives of the grand potential can be expressed as moments of this probability distribution:

• Mean number of particles

⟨N⟩(V, β, µ) = −∂(βΩ(V, β, µ))/∂(βµ)    (1.13)
             = Σ_N Σ_{αN} N P(V, β, µ; αN)    (1.14)

• Susceptibility

χ(V, β, µ) = (β/(ρ⟨N⟩(V, β, µ))) ∂⟨N⟩(V, β, µ)/∂(βµ)    (1.15)
           = (β/(ρ⟨N⟩(V, β, µ))) [ Σ_N Σ_{αN} N² P(V, β, µ; αN) − ( Σ_N Σ_{αN} N P(V, β, µ; αN) )² ]    (1.16)
           = (β/(ρ⟨N⟩(V, β, µ))) ( ⟨N²⟩(V, β, µ) − ⟨N⟩(V, β, µ)² )    (1.17)


1.2.4 Isothermal-isobaric ensemble

The system is characterized by the following set of variables: the pressure P, the temperature T and the total number of particles N. Because this ensemble is generally devoted to molecular systems and is not used for lattice models, we only consider continuous systems here. The partition function reads

Q(P, β, N) = (βP/(Λ^{3N} N!)) ∫_0^∞ dV exp(−βPV) ∫_V dr^N exp(−βU(r^N))    (1.18)

where β = 1/kB T (kB is the Boltzmann constant) and Λ is the thermal de Broglie length. The Gibbs potential is given by

βG(P, β, N) = − ln(Q(P, β, N)).    (1.19)

One defines the probability Π(P, β, N; αV)4 of having a configuration αV ≡ (V; r^N) (a volume V and particle positions r^N) at temperature T and pressure P:

Π(P, β, N; αV) = exp(−βPV) exp(−βU(r^N)) / Q(P, β, N).    (1.20)

The derivatives of the Gibbs potential are expressed as moments of this probability distribution. Therefore,

• Mean volume

⟨V⟩(P, β, N) = ∂(βG(P, β, N))/∂(βP)    (1.21)
             = ∫_0^∞ dV V ∫_V dr^N Π(P, β, N; αV).    (1.22)

This ensemble is appropriate for simulations which aim at determining the equation of state of a system. Let us recall that a statistical ensemble cannot be defined from a set of intensive variables only. However, we will see in Chapter 7 that a technique called the Gibbs ensemble method is close in spirit to such an ensemble (with the difference that in simulation one always considers finite systems).
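A minimal sketch of Eqs. (1.21)-(1.22), added here as an illustration: for an ideal gas (U = 0) the configurational integral in Eq. (1.18) is proportional to V^N, so the mean volume reduces to a ratio of two one-dimensional integrals over V, with the exact result ⟨V⟩ = (N + 1)/(βP). The cutoff Vmax and the grid size are arbitrary numerical choices.

```python
import numpy as np

def mean_volume_ideal_gas(N, beta, P, Vmax=200.0, nV=200001):
    """Mean volume, Eqs. (1.21)-(1.22), for an ideal gas (U = 0): the
    weight of a volume V is proportional to V**N * exp(-beta P V)."""
    V = np.linspace(0.0, Vmax, nV)
    w = V ** N * np.exp(-beta * P * V)   # unnormalized weight of volume V
    # trapezoidal rule; the common integration step cancels in the ratio
    num = np.sum(0.5 * ((V * w)[1:] + (V * w)[:-1]))
    den = np.sum(0.5 * (w[1:] + w[:-1]))
    return num / den

N, beta, P = 5, 1.0, 0.5
Vbar = mean_volume_ideal_gas(N, beta, P)  # exact value: (N + 1)/(beta P)
```

The same ratio-of-weights structure is what an NPT Monte Carlo simulation samples stochastically when volume moves are included (Chapter 7).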

1.3 Model systems

1.3.1 Introduction

We restrict these lecture notes to classical statistical mechanics, which means that quantum systems are not considered here. In order to provide many illustrations of the successive methods, we introduce several basic models that we will consider several times in the following.

4 In order to avoid confusion with the pressure, the probability is denoted Π.

1.3.2 Simple liquids

A simple liquid is a system of N point particles, labeled from 1 to N, of identical mass m, interacting with an external potential U1(ri) and among themselves by a pairwise potential U2(ri, rj) (i.e. a potential where the particles only interact by pairs). The Hamiltonian of this system reads

H = Σ_{i=1}^{N} [ pi²/(2m) + U1(ri) ] + (1/2) Σ_{i≠j} U2(ri, rj)    (1.23)

where pi is the momentum of particle i. For instance, in the grand canonical ensemble, the partition function Ξ(µ, β, V) is given by

Ξ(µ, β, V) = Σ_{N=0}^{∞} (1/N!) ∫ Π_{i=1}^{N} (d^d pi)(d^d ri)/h^{dN} exp(−β(H − µN))    (1.24)

where h is the Planck constant and d the space dimension. Note that, for classical systems, contrary to quantum statistical mechanics, there is a decoupling between the kinetic part and the potential part: only the part of the partition function containing the potential energy is non-trivial. The integral over the momenta can be obtained analytically, because the multidimensional integral factorizes over the variables pi, and the one-dimensional integral over each component of the momentum is a Gaussian integral. Using the thermal de Broglie length ΛT = h/√(2πmkB T), one has

∫_{−∞}^{+∞} (d^d p/h^d) exp(−βp²/(2m)) = 1/ΛT^d.    (1.25)

The partition function can be reexpressed as

Ξ(µ, β, V) = Σ_{N=0}^{∞} (e^{βµN}/(N! ΛT^{dN})) ZN(β, V)    (1.26)

where

ZN(β, V) = ∫ dr^N exp(−βU(r^N))    (1.27)

is called the configuration integral. One defines the fugacity as

z = e^{βµ}.    (1.28)

The thermodynamic potential associated with the partition function, in the grand canonical ensemble, is

Ω(µ, β, V) = −(1/β) ln(Ξ(µ, β, V)) = −PV    (1.29)

where P is the pressure.
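The Gaussian momentum integral of Eq. (1.25) is easy to verify numerically. The sketch below (an added illustration, in arbitrary units with h = 1 and hypothetical values of m and β) checks the d = 1 case by a trapezoidal quadrature.

```python
import numpy as np

# Numerical check of Eq. (1.25) in d = 1, arbitrary units with h = 1:
# Int dp/h exp(-beta p^2/(2m)) should equal 1/Lambda_T, with
# Lambda_T = h / sqrt(2 pi m kB T) and kB T = 1/beta.
h, m, beta = 1.0, 2.0, 0.5          # hypothetical test values
p = np.linspace(-60.0, 60.0, 400001)
f = np.exp(-beta * p ** 2 / (2.0 * m)) / h
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(p))
lambda_T = h / np.sqrt(2.0 * np.pi * m / beta)
```

The cutoff ±60 covers many widths of the Gaussian, so the truncation error is negligible compared with the quadrature error.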

1.3.3 Ising model and lattice gas. Equivalence

The Ising model is a lattice model where sites are occupied by particles with very limited degrees of freedom: each particle is characterized by a spin, a two-state variable (−1, +1). Each spin interacts with its nearest neighbors and, if it exists, with an external field H. Initially introduced for describing the behavior of para-ferromagnetic systems, the Ising model can be used in many physical situations. The Hamiltonian reads

H = −J Σ_{<i,j>} Si Sj    (1.30)

where <i, j> denotes a summation over nearest-neighbor sites and J is the interaction strength. If J > 0, the interaction is ferromagnetic; conversely, if J < 0, the interaction is antiferromagnetic.

This simple model can be solved analytically in several situations, and accurate results can be obtained in higher dimensions. The analytical solution in one dimension shows that there is no phase transition at finite temperature. In two dimensions, Onsager (1944) obtained the solution in the absence of an external field. In three dimensions, no analytical solution has been obtained, but theoretical developments and numerical simulations give the properties of the system in the magnetization-temperature phase diagram.

The lattice gas model was introduced by Lee and Yang. The basic idea, greatly extended later, consists of assuming that the macroscopic properties of a system with a large number of particles do not crucially depend on the microscopic details of the interaction. By performing a coarse-graining of the microscopic system, one builds an effective model with a smaller number of degrees of freedom. This idea, which is appealing to many physicists, is often used in statistical physics, because it is often necessary to reduce the complexity of the original system, for several reasons: 1) practical: in a simpler model, theoretical treatments are more tractable and simulations can be performed with larger system sizes; 2) theoretical: macroscopic properties are almost independent of some microscopic degrees of freedom, and a local average is an efficient method for obtaining an effective model of the physical properties of the system. This method underlies the existence of a certain universality.

To go from the Hamiltonian of a simple liquid to a lattice gas model, we proceed in three steps. The first consists of rewriting the Hamiltonian by introducing a microscopic variable: this step is exact. The second step consists of performing a local average to define the lattice Hamiltonian; several approximations are performed in this step, and it is essential to determine their validity. In the third step, several changes of variables are performed in order to transform the lattice gas Hamiltonian into a spin model Hamiltonian: this last step is exact again.
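To make the Hamiltonian (1.30) concrete, the sketch below (an added illustration) computes the energy of a spin configuration on a small square lattice with periodic boundary conditions, counting each nearest-neighbor bond once; the lattice size and configurations are arbitrary test choices.

```python
import numpy as np

def ising_energy(spins, J=1.0):
    """Energy of a spin configuration for the Hamiltonian (1.30) on a
    square lattice with periodic boundary conditions; each nearest-neighbor
    pair is counted once (right and down neighbors of every site)."""
    right = np.roll(spins, -1, axis=1)
    down = np.roll(spins, -1, axis=0)
    return -J * np.sum(spins * (right + down))

L = 4
all_up = np.ones((L, L))                       # ferromagnetic configuration
idx = np.add.outer(np.arange(L), np.arange(L))
staggered = np.where(idx % 2 == 0, 1.0, -1.0)  # antiferromagnetic order
E_ferro = ising_energy(all_up)     # every bond satisfied: -2 J L^2
E_stag = ising_energy(staggered)   # every bond frustrated (J > 0): +2 J L^2
```

This energy function is the basic building block of any Monte Carlo simulation of the model.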

Rewriting of the Hamiltonian

First, let us reexpress the Hamiltonian of the simple liquid as a function of the microscopic5 density

ρ(r) = Σ_{i=1}^{N} δ(r − ri).    (1.31)

By using the property of the Dirac distribution,

∫ f(x) δ(x − a) dx = f(a),    (1.32)

one obtains

Σ_{i=1}^{N} U1(ri) = Σ_{i=1}^{N} ∫_V U1(r) δ(r − ri) d^d r = ∫_V U1(r) ρ(r) d^d r    (1.33)

and, in a similar way,

Σ_{i≠j} U2(ri, rj) = Σ_{i≠j} ∫_V ∫_V U2(r, r′) δ(r − ri) δ(r′ − rj) d^d r d^d r′    (1.34)
                  = ∫_V ∫_V U2(r, r′) Σ_{i≠j} δ(r − ri) δ(r′ − rj) d^d r d^d r′    (1.35)
                  = ∫_V ∫_V U2(r, r′) ρ(r) ρ(r′) d^d r d^d r′    (1.36)

Local average

The volume V of the simple liquid is divided into Nc cells, such that the probability of finding more than one particle center per cell is negligible6 (see Fig. 1.1). Let us denote by a the linear length of a cell; one then has

∫_V d^d ri = Σ_{α=1}^{Nc} a^d,    (1.37)

which gives Nc = V/a^d. The local density is obtained by performing a local average, which leads to a smooth function. The lattice Hamiltonian is

H = Σ_{α=1}^{Nc} U1(α) nα + (1/2) Σ_{α,β} U2(α, β) nα nβ    (1.38)

5 The particle is an atom or a molecule with its own characteristic length scale.
6 Typically, this means that the diagonal of each cell is slightly smaller than the particle diameter.

[Figure 1.1 – Particle configuration of a two-dimensional simple liquid. The grid represents the cells used for the local average.]

where nα is a Boolean variable: nα = 1 when a particle center is within cell α, and nα = 0 otherwise. Note that the index α of this new Hamiltonian is associated with cells, whereas the index of the original Hamiltonian is associated with particles. Each cell can accommodate zero or one particle center. One obviously has U2(α, α) = 0, because there is no particle overlap (no self-energy). Since the interaction between particles is short-ranged, U2(r) is also short-ranged, and is replaced with an interaction between nearest-neighbor cells:

H = Σ_{α=1}^{Nc} U1(α) nα + U2 Σ_{<α,β>} nα nβ    (1.39)

The factor 1/2 does not appear any more, because the bracket <α, β> only considers distinct pairs.
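The coarse-graining step can be sketched in a few lines of code. The example below (added for illustration; box size, cell size and particle number are hypothetical) maps particle centers in a square box onto the Boolean occupation numbers nα of Eq. (1.39).

```python
import numpy as np

def occupation_numbers(positions, a, box=1.0):
    """Occupation numbers n_alpha of Eq. (1.39): the square box is divided
    into (box/a)^2 cells of linear size a, and n_alpha = 1 when a particle
    center lies inside cell alpha. The cell size is assumed comparable to
    the particle diameter, so double occupancy is negligible."""
    nc = int(round(box / a))
    n = np.zeros((nc, nc), dtype=int)
    cells = np.minimum((positions / a).astype(int), nc - 1)
    n[cells[:, 0], cells[:, 1]] = 1
    return n

rng = np.random.default_rng(0)
positions = rng.random((10, 2))        # 10 particle centers in a unit box
n = occupation_numbers(positions, a=0.1)
```

With truly hard particles and a cell diagonal smaller than the particle diameter, the sum of the nα would equal the particle number exactly; here, with ideal random points, rare double occupancies can only lower it.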

Equivalence with the Ising model

We consider the previous lattice gas model in the grand canonical ensemble. The relevant quantity is then

H − µN = Σ_α (U1(α) − µ) nα + U2 Σ_{<α,β>} nα nβ.    (1.40)

Let us introduce the following variables:

Si = 2ni − 1,    (1.41)

so that the spin variable is equal to +1 when a site is occupied by a particle (ni = 1) and to −1 when the site is unoccupied (ni = 0). One then obtains

Σ_α (U1(α) − µ) nα = (1/2) Σ_α (U1(α) − µ) Sα + (1/2) Σ_α (U1(α) − µ)    (1.42)

and

U2 Σ_{<α,β>} nα nβ = (U2/4) Σ_{<α,β>} (1 + Sα)(1 + Sβ)    (1.43)
                  = (U2/4) ( Nc c/2 + c Σ_α Sα + Σ_{<α,β>} Sα Sβ )    (1.44)

where c is the coordination number (number of nearest neighbors) of the lattice. Finally, this gives

H − µN = E0 − Σ_α Hα Sα − J Σ_{<α,β>} Sα Sβ    (1.45)

with

E0 = Nc ( (⟨U1(α)⟩ − µ)/2 + U2 c/8 ),    (1.46)

where ⟨U1(α)⟩ corresponds to the average of U1 over the sites,

Hα = (µ − U1(α))/2 − c U2/4    (1.47)

and

J = −U2/4,    (1.48)

where J is the interaction strength. Therefore, one gets

Ξgas(V, β, µ) = e^{−βE0} QIsing(Hα, β, J, Nc).    (1.49)
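The mapping (1.40)-(1.49) can be verified numerically by exhaustive enumeration on a very small system. The sketch below (added for illustration, with hypothetical values of U2, µ and β) uses a ring of Nc cells, so c = 2, and takes a uniform U1 = 0; both sides of Eq. (1.49) are then computed exactly.

```python
import itertools
import math

def lattice_gas_vs_ising(Nc=4, U2=-1.0, mu=0.3, beta=0.8):
    """Check of Eq. (1.49) on a ring of Nc cells (coordination number c = 2,
    uniform U1 = 0): the grand partition function of the lattice gas must
    equal exp(-beta E0) times the Ising partition function with field
    H = mu/2 - c U2/4 and coupling J = -U2/4, Eqs. (1.46)-(1.48)."""
    c = 2
    bonds = [(i, (i + 1) % Nc) for i in range(Nc)]
    Xi = 0.0                                  # lattice gas side, Eq. (1.40)
    for n in itertools.product((0, 1), repeat=Nc):
        E = U2 * sum(n[i] * n[j] for i, j in bonds) - mu * sum(n)
        Xi += math.exp(-beta * E)
    E0 = Nc * (-mu / 2 + U2 * c / 8)          # Eq. (1.46) with U1 = 0
    H, J = mu / 2 - c * U2 / 4, -U2 / 4       # Eqs. (1.47)-(1.48)
    Q = 0.0                                   # Ising side, Eq. (1.45)
    for s in itertools.product((-1, 1), repeat=Nc):
        E = -H * sum(s) - J * sum(s[i] * s[j] for i, j in bonds)
        Q += math.exp(-beta * E)
    return Xi, math.exp(-beta * E0) * Q

Xi, mapped = lattice_gas_vs_ising()
```

Since the change of variables (1.41) is exact configuration by configuration, the two numbers agree to machine precision, whatever the chosen parameters.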

This analysis shows that the partition function of the Ising model in the canonical ensemble has a one-to-one map onto the partition function of the lattice gas in the grand canonical ensemble. One can easily show, similarly, that the partition function of the Ising model with the constraint of a constant total magnetization corresponds to the partition function of the lattice gas model in the canonical ensemble. Note also that the equivalence concerns the configuration integral and not the initial partition function, and that, unlike the original liquid model, the lattice gas does not possess a microscopic dynamics.

Some comments on these results. First, one checks that, as expected, if the interaction is attractive, U2 < 0, one has J > 0, which corresponds to a ferromagnetic interaction. By the change of variables, one has added a symmetry between particles and holes, which does not exist in the liquid model. Like the Ising model, the lattice gas therefore has a symmetric coexistence curve, ρliq = 1 − ρgas, and a critical density equal to 1/2, whereas the coexistence curves of real liquids are not symmetric between the liquid and the gas phases: a comparison with the Lennard-Jones liquid gives a critical packing fraction equal to 0.3 in three dimensions. Because the interaction is divided by 4, the critical temperature of the Ising model is four times higher than that of the lattice gas model.

Unlike the critical exponents, which do not depend on the lattice, the critical temperature depends on the details of the system. The mean-field theory gives a first approximation of the critical temperature: one obtains

Tc = cJ = c|U2|/4.    (1.50)

The mean-field approximation overestimates the value of the critical temperature, because the fluctuations are neglected; this allows order to persist up to a temperature higher than the exact critical temperature of the system. The larger the coordination number, the closer the mean-field value is to the exact critical temperature.

1.4 Conclusion

In this chapter, we have drawn the preliminary scheme to follow before running a simulation: 1) select the appropriate ensemble for studying the model; 2) when the microscopic model is very complicated, perform a coarse-graining procedure, which leads to an effective model that is more amenable to theoretical treatments and/or more efficient for simulation. Before starting a simulation, it is also useful to have an estimate of the phase diagram, in order to choose the simulation parameters correctly. As we will see in the following chapters, refined Monte Carlo methods consist of including more and more knowledge of Statistical Mechanics, which finally leads to more efficient simulations. In addition to these lecture notes, several textbooks or reviews are available on simulation: Binder [1997], Frenkel and Smit [1996], Krauth [2006], Landau and Binder [2000], Young [1998]. I also recommend reading several textbooks on Statistical Physics: Chandler [1987], Goldenfeld [1992], Hansen and McDonald [1986].

1.5 Exercises

1.5.1 ANNNI Model

Frustrated systems are generally characterized by a large degeneracy of the ground state. This property can be illustrated through a simple model, the ANNNI model (Axial Next Nearest Neighbor Ising). One considers a one-dimensional lattice, of unit step and of length N, with periodic boundary conditions, on which an Ising variable exists at each lattice point. Spins interact through a ferromagnetic nearest-neighbor interaction J1 > 0 and an antiferromagnetic next-nearest-neighbor interaction J2 < 0. The Hamiltonian reads

H = −J1 Σ_{i=1}^{N} Si Si+1 − J2 Σ_{i=1}^{N} Si Si+2    (1.51)

with SN+1 = S1 and SN+2 = S2.

♣ Q. 1.5.1-1 One considers a staggered configuration of spins, Si = (−1)^i. Calculate the energy per spin of this configuration. The calculation should be done for a lattice with an even number of sites. Infer that this configuration cannot be the ground state of the ANNNI model, whatever the values of J1 and J2.

♣ Q. 1.5.1-2 Calculate the energy per spin for a configuration of aligned spins (configuration A) and for a periodic configuration of alternated spins of period 4 (configuration B), corresponding to a sequence of two spins up followed by two spins down. The calculation will be done for a lattice whose length is a multiple of 4.

♣ Q. 1.5.1-3 Show that the configuration A has a lower energy than the configuration B for a ratio κ = −J2/J1 to be determined.

♣ Q. 1.5.1-4 In the case where κ = 1/2, one admits that the ground state is strongly degenerate and that the associated configurations are sequences of spin domains of length k ≥ 2. From one of these configurations on a lattice of length L − 1, one inserts at the end of the lattice one site with a spin SL. What sign must this spin have with respect to SL−1 in order that the new configuration belongs to the ground state of a lattice of length L?

♣ Q. 1.5.1-5 From one of these configurations on a lattice of length L − 2, one inserts at the end of the lattice two sites carrying two spins SL−1 and SL of the same sign. What sign must these two spins have with respect to SL−2 in order that the new configuration belongs to the ground state of a lattice of length L and is different from a configuration generated by the previous process?

♣ Q. 1.5.1-6 Let us denote by DL the degeneracy of the ground state for a lattice of size L. By using the previous results, justify the relation

DL = DL−1 + DL−2.    (1.52)

♣ Q. 1.5.1-7 Show that, for large values of L, one has

DL ∼ (aF)^L    (1.53)

where aF is a constant to be determined.

♣ Q. 1.5.1-8 Calculate this value numerically. The entropy per spin of the ground state in the thermodynamic limit is defined as

S0 = lim_{L→∞} kB ln(DL)/L.    (1.54)

♣ Q. 1.5.1-9 Justify that S0 is less than ln(2). What can one say about the degeneracy of the ground state for a value of κ = 1/2?

1.6 Blume-Capel model

The Blume-Capel model describes a lattice spin system where the spin variables can take three values: −1, 0 and +1. This model describes metallic ferromagnetism, as well as superfluidity in a helium He3-He4 mixture. Here, we consider a mean-field version of the model, which allows tractable calculations, by assuming that the lattice spins all interact together (long-ranged interactions). The Hamiltonian is

H = ∆ Σ_{i=1}^{N} Si² − (1/(2N)) ( Σ_{i=1}^{N} Si )²    (1.55)

where the second term describes a ferromagnetic coupling and ∆ > 0 is the energy difference between the ferromagnetic states (Si = ±1) and the paramagnetic state (Si = 0).

♣ Q. 1.6.0-10 Determine the ground states and the associated energies. Verify that the ground-state energy is extensive. Infer a phase transition at T = 0 for a value of ∆ to be determined.

1.6 Blume-Capel model

The Blume-Capel model describes a lattice spin system in which the spin variable can take three values: −1, 0 and +1. This model describes metallic ferromagnetism as well as superfluidity in a helium He3-He4 mixture. Here we consider a mean-field version that allows tractable calculations, obtained by assuming that all lattice spins interact together (long-ranged interactions). The Hamiltonian is

H = ∆ Σ_{i=1}^{N} Si^2 − (1/(2N)) (Σ_{i=1}^{N} Si)^2    (1.55)

where the second term of the Hamiltonian describes a ferromagnetic coupling and ∆ > 0 is the energy difference between the magnetic states (Si = ±1) and the paramagnetic state (Si = 0).

♣ Q. 1.6.0-10 Determine the ground states and the associated energies. Verify that the ground-state energy is extensive. Infer a phase transition at T = 0 for a value of ∆ to be determined.

♣ Q. 1.6.0-11 Let us denote by N+, N− and N0 the number of sites occupied by spins +1, −1 and 0, respectively. Show that the total energy of the system can be expressed as a function of ∆, N, M = N+ − N− and Q = N+ + N−.

♣ Q. 1.6.0-12 Justify that the number of available microstates is given by

Ω = N! / (N+! N−! N0!).    (1.56)

♣ Q. 1.6.0-13 Using Stirling's formula, show that the microcanonical entropy per spin s = S/N can be written as a function of m = M/N and q = Q/N (one sets kB = 1):

s(m, q) = −(1 − q) ln(1 − q) − (1/2)(q + m) ln(q + m) − (1/2)(q − m) ln(q − m) + q ln 2.    (1.57)

♣ Q. 1.6.0-14 By introducing ε = E/N, show that q = 2Kε + Km^2, where K is a parameter to be determined.

In the following, we determine the transition line in the T − ∆ diagram.

♣ Q. 1.6.0-15 Show that the entropy per spin reads

s(m, ε) = −(1 − 2Kε − Km^2) ln(1 − 2Kε − Km^2) − (1/2)(2Kε + Km^2 + m) ln(2Kε + Km^2 + m) − (1/2)(2Kε + Km^2 − m) ln(2Kε + Km^2 − m) + (2Kε + Km^2) ln 2.    (1.58)

♣ Q. 1.6.0-16 Show that the expansion of the entropy per spin to 4th order in m can be written as

s = s0 + A m^2 + B m^4 + O(m^6)    (1.59)

with

A = −K ln[Kε/(1 − 2Kε)] − 1/(4Kε)    (1.60)

and

B = −K/(4ε(1 − 2Kε)) + 1/(8Kε^2) − 1/(96(Kε)^3).    (1.61)

♣ Q. 1.6.0-17 When A and B are negative, what is the value of m for which the entropy is maximum? What is the corresponding phase?
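The coefficients (1.60)-(1.61) can be checked numerically against Eq. (1.58). A quick sketch, assuming the parametrization q = 2Kε + Km^2 introduced in Q. 1.6.0-14 (the values K = 1, ε = 0.2 are arbitrary test values):

```python
import math

def s(m, K=1.0, eps=0.2):
    """Microcanonical entropy per spin, Eq. (1.58)."""
    q = 2 * K * eps + K * m * m
    return (-(1 - q) * math.log(1 - q)
            - 0.5 * (q + m) * math.log(q + m)
            - 0.5 * (q - m) * math.log(q - m)
            + q * math.log(2))

K, eps = 1.0, 0.2
A = -K * math.log(K * eps / (1 - 2 * K * eps)) - 1 / (4 * K * eps)   # Eq. (1.60)
B = (-K / (4 * eps * (1 - 2 * K * eps)) + 1 / (8 * K * eps ** 2)
     - 1 / (96 * (K * eps) ** 3))                                     # Eq. (1.61)

m = 0.05
expansion = s(0.0) + A * m ** 2 + B * m ** 4
print(abs(s(m) - expansion))   # O(m^6), i.e. very small
```

For these values both A and B are negative, so the entropy is maximal at m = 0 (paramagnetic phase), as discussed in Q. 1.6.0-17.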

♣ Q. 1.6.0-18 By using the definition of the microcanonical temperature, show that in this phase

β = 2K ln[(1 − 2Kε)/(Kε)].    (1.62)

♣ Q. 1.6.0-19 The transition line corresponds to A = 0 and B < 0. Infer that the transition temperature is given by the implicit equation

β = exp(β/(2K))/2 + 1.    (1.63)

♣ Q. 1.6.0-20 The transition line ends in a tricritical point when the coefficient B is equal to zero. Show that at this point the temperature is given by

exp(β/K) + 2(1 − 6K) exp(β/(2K)) + 24K^2 = 0.    (1.64)

♣ Q. 1.6.0-21 For lower temperatures, explain why the transition line becomes first order; it is then necessary to perform the expansion beyond the 4th order.

1.6.1 Potts model

The Potts model is a lattice spin model in which the spin variable σi takes integer values from 1 to q. The interaction between nearest-neighbor sites is given by the Hamiltonian

H = −J Σ_{<i,j>} δ_{σi,σj}    (1.65)

where δ_{ab} denotes the Kronecker symbol, equal to 1 when a = b and to 0 when a ≠ b.

♣ Q. 1.6.1-1 For q = 2, show that this model is equivalent to an Ising model (H = −JI Σ_{<i,j>} Si Sj) with a coupling constant equal to J = 2JI.

To estimate the phase diagram of this model, one can use a mean-field approach. Let xi be the fraction of spins in the state i, where i runs from 1 to q.

♣ Q. 1.6.1-2 Assuming that the spins are independent, give a simple expression for the entropy per site s as a function of the xi and of the Boltzmann constant.

♣ Q. 1.6.1-3 Assuming that the spins are independent, justify that the energy per site is given by

e = −(cJ/2) Σ_i xi^2    (1.66)

where c is the lattice coordination number.

♣ Q. 1.6.1-4 Infer the dimensionless free energy per site βf.

♣ Q. 1.6.1-5 A guess is x1 = (1/q)(1 + (q − 1)m) and xi = (1/q)(1 − m) for i = 2, ..., q. Infer the difference βf(m) − βf(0).

♣ Q. 1.6.1-6 Show that the extrema of βf(m) satisfy

cβJm = ln[(1 + (q − 1)m)/(1 − m)]    (1.67)

and that m = 0 is always a solution of this equation.

♣ Q. 1.6.1-7 Consider first the case q = 2. Determine the temperature of the transition.

♠ Q. 1.6.1-8 For q > 2, one admits that mc = (q − 2)/(q − 1). Show that βf(mc) = βf(0), and determine the "temperature" of the transition, 1/βc. Show that the order of the transition changes for q > qc, with qc to be determined.

♠ Q. 1.6.1-9 Calculate the latent heat of the transition when the transition becomes first order.

One wishes to perform a simulation of the two-dimensional Potts model. It was shown that in two dimensions the critical value of q is 4: the transition is continuous for q ≤ 4 and becomes first order for q > 4.

♣ Q. 1.6.1-10 Write a Metropolis algorithm for the simulation of the q-state Potts model.

♣ Q. 1.6.1-11 Describe briefly the convergence problems of the Metropolis algorithm for q ≤ 4 and for q > 4.

♣ Q. 1.6.1-12 Give a method adapted to studying the transition in the two cases. Justify your answer.
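As a starting point for Q. 1.6.1-10, a minimal sketch of a Metropolis simulation of the q-state Potts model on a periodic square lattice (all parameter values are illustrative):

```python
import math
import random

def potts_metropolis(L=8, q=3, beta=1.0, sweeps=50, J=1.0, seed=42):
    """Metropolis single-spin updates for the q-state Potts model
    H = -J sum_<i,j> delta(s_i, s_j) on an L x L periodic lattice."""
    rng = random.Random(seed)
    spin = [[rng.randrange(q) for _ in range(L)] for _ in range(L)]

    def neighbors(i, j):
        return [((i + 1) % L, j), ((i - 1) % L, j),
                (i, (j + 1) % L), (i, (j - 1) % L)]

    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)   # random site
        new = rng.randrange(q)                      # random trial state
        old = spin[i][j]
        same_old = sum(spin[k][l] == old for k, l in neighbors(i, j))
        same_new = sum(spin[k][l] == new for k, l in neighbors(i, j))
        dE = -J * (same_new - same_old)             # local energy change
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spin[i][j] = new                        # accept; else reject
    return spin

def total_energy(spin, J=1.0):
    L = len(spin)
    E = 0.0
    for i in range(L):
        for j in range(L):   # count each bond once (right and down)
            E -= J * (spin[i][j] == spin[i][(j + 1) % L])
            E -= J * (spin[i][j] == spin[(i + 1) % L][j])
    return E
```

For q = 2, this dynamics reduces to the single-spin-flip Ising dynamics with JI = J/2 (cf. Q. 1.6.1-1).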


Chapter 2

Monte Carlo method

Contents
2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Uniform and weighted sampling . . . . . . . . . . . . . . 22
2.3 Markov chain for sampling an equilibrium system . . . . 23
2.4 Metropolis algorithm . . . . . . . . . . . . . . . . . . . . 25
2.5 Applications . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.1 Ising model . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5.2 Simple liquids . . . . . . . . . . . . . . . . . . . . . . . 29
2.6 Random number generators . . . . . . . . . . . . . . . . 31
2.6.1 Generating non uniform random numbers . . . . . . . 33
2.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.7.1 Inverse transformation . . . . . . . . . . . . . . . . . . 38
2.7.2 Detailed balance . . . . . . . . . . . . . . . . . . . . . 39
2.7.3 Acceptance probability . . . . . . . . . . . . . . . . . . 39
2.7.4 Random number generator . . . . . . . . . . . . . . . 41

2.1 Introduction

Once a model of a physical system has been chosen, its statistical properties can be determined by performing a simulation. If we are interested in the static properties of the model, we have seen in the previous chapter that the computation of the partition function amounts to performing a multidimensional integral or a multivariable summation of the form

Z = Σ_i exp(−βU(i))    (2.1)

where i is an index running over all configurations available to the system (we will use in this chapter a roman index for denoting configurations). If one considers the simple example of a lattice gas in three dimensions with a linear size of 10, the total number of configurations is equal to 2^1000 ≃ 10^301, for which it is impossible to compute the sum, Eq. (2.1), exactly. For a continuous system, the calculation of the integral starts with a discretization: choosing 10 points for each space coordinate, for 100 particles evolving in a three-dimensional space, the number of points is equal to 10^300, which is of the same order of magnitude as the previous lattice system with a larger number of sites. Specific methods are therefore necessary for evaluating such multidimensional integrals; the method of choice is the Monte Carlo method with an importance sampling algorithm.

2.2 Uniform and weighted sampling

To understand the interest of weighted sampling, we first consider a basic example, a one-dimensional integral

I = ∫_a^b dx f(x).    (2.2)

This integral can be recast as

I = (b − a) <f(x)>    (2.3)

where <f(x)> denotes the average of the function f on the interval [a, b]. By choosing randomly and uniformly Nr points along the interval [a, b] and by evaluating the function at all these points, one gets an estimate of the integral,

I_Nr = ((b − a)/Nr) Σ_{i=1}^{Nr} f(xi).    (2.4)

The convergence of this method can be estimated by calculating the variance σ^2 of the sum I_Nr; one has

σ^2 = (1/Nr^2) Σ_{i=1}^{Nr} Σ_{j=1}^{Nr} <(f(xi) − <f(xi)>)(f(xj) − <f(xj)>)>.    (2.5)

The points being chosen independently, the crossed terms vanish, and one obtains

σ^2 = (1/Nr) (<f(x)^2> − <f(x)>^2).    (2.6)

The points xi being chosen uniformly on the interval [a, b], the central limit theorem applies and the estimate converges towards the exact value according to a Gaussian distribution.
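Eqs. (2.4) and (2.6) translate into a few lines of code (the integrand below is an arbitrary test case):

```python
import math
import random

def mc_integrate(f, a, b, n, seed=0):
    """Uniform-sampling Monte Carlo estimate of int_a^b f(x) dx,
    together with the statistical error sigma of Eq. (2.6)."""
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    estimate = (b - a) * mean
    sigma = (b - a) * math.sqrt(var / n)   # error decreases as 1/sqrt(n)
    return estimate, sigma

est, err = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
print(est, err)   # est close to 1/3
```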

The resulting 1/√Nr dependence of the statistical error gives an a priori slow convergence, and there is no simple modification leading to a faster convergence law. However, one can reduce the variance itself in a significant manner. It is worth noting that the function f often takes significant values only on small regions of the interval [a, b], and it is useless to evaluate the function accurately where its values are small. By using a random but non-uniform distribution with a weight w(x), the integral is given by

I = ∫_a^b dx (f(x)/w(x)) w(x).    (2.7)

If w(x) is always positive, one can define du = w(x)dx, with u(a) = a and u(b) = b, and

I = ∫_a^b du f(x(u))/w(x(u));    (2.8)

the estimate of the integral is then given by

I ≃ ((b − a)/Nr) Σ_{i=1}^{Nr} f(x(ui))/w(x(ui))    (2.9)

with the weight w(x). Similarly, the variance of the estimate becomes

σ^2 = (1/Nr) (<(f(x(u))/w(x(u)))^2> − <f(x(u))/w(x(u))>^2).    (2.10)

By choosing the weight distribution w proportional to the original function f, the variance vanishes. This trick is only possible in one dimension: in higher dimensions, the change of variables in a multidimensional integral involves the absolute value of a Jacobian, and one cannot find in an intuitive manner the change of variables that would provide a good weight function.
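In one dimension, the variance reduction of Eq. (2.10) is easy to demonstrate numerically. A sketch for I = ∫_0^1 e^x dx, with the illustrative weight w(x) = 2(1 + x)/3, which roughly follows the integrand:

```python
import math
import random

def estimates(n=100_000, seed=1):
    """Estimate I = int_0^1 exp(x) dx by uniform sampling and by
    importance sampling with the weight w(x) = 2(1 + x)/3."""
    rng = random.Random(seed)
    # uniform sampling, Eq. (2.4)
    u_vals = [math.exp(rng.random()) for _ in range(n)]
    # weighted sampling, Eq. (2.9): the cumulative distribution of w is
    # W(x) = (2x + x^2)/3, inverted as x = -1 + sqrt(1 + 3u)
    w_vals = []
    for _ in range(n):
        x = -1.0 + math.sqrt(1.0 + 3.0 * rng.random())
        w_vals.append(math.exp(x) / (2.0 * (1.0 + x) / 3.0))
    def mean_var(vals):
        m = sum(vals) / n
        return m, sum((v - m) ** 2 for v in vals) / n
    return mean_var(u_vals), mean_var(w_vals)

(m_u, v_u), (m_w, v_w) = estimates()
print(m_u, m_w)   # both close to e - 1 = 1.71828...
print(v_w < v_u)  # True: the weighted variance is much smaller
```

Both estimators converge to e − 1, but the variance of the weighted one is roughly an order of magnitude smaller.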

2.3 Markov chain for sampling an equilibrium system

Let us return to statistical mechanics: very often, we are interested in the computation of the thermal average of a quantity, but not in the partition function itself,

<A> = Σ_i Ai exp(−βUi) / Z.    (2.11)

Let us recall that

Pi = exp(−βUi) / Z    (2.12)

defines the probability of having the configuration i at equilibrium. The basic properties of a probability distribution are satisfied: Pi is strictly positive and Σ_i Pi = 1. If one were able to generate configurations with this weight, the thermal average of A would be given by

<A> ≃ (1/Nr) Σ_{i=1}^{Nr} Ai    (2.13)

where Nr is the total number of configurations for which A is evaluated: the thermal average becomes an arithmetic average. The trick proposed by Metropolis, Rosenbluth and Teller in 1953 consists of introducing a stochastic Markovian process between successive configurations, which converges towards the equilibrium distribution Peq. Let us explain the meaning of this dynamics: "stochastic" means that going from one configuration to another does not obey an ordinary differential equation, but is a random process determined by probabilities; "Markovian" means that the probability of having a configuration j at time t + dt only depends on the configuration i at time t, and not on the previous configurations (the memory is limited to the time t). This conditional probability is denoted by W(i → j)dt.

First, we introduce some useful definitions. To follow the sequence of configurations, one defines a time t equal to the number of "visited" configurations divided by the system size (dt = 1/N, where N is the particle number of the system); this time has no relation with the real time of the system. At time t = 0, the system is in an initial configuration i0, and the initial probability distribution is P(i) = δ_{i0,i}, which means that we are far from the equilibrium distribution. Let us denote by P(i, t) the probability of having the configuration i at time t. The master equation of the system is then given (in the thermodynamic limit) by

P(i, t + dt) = P(i, t) + Σ_j (W(j → i)P(j, t) − W(i → j)P(i, t)) dt.    (2.14)

This equation expresses that, at time t + dt, the probability for the system of being in the configuration i is equal to the probability of being in the same state at time t, plus the (algebraic) sum of the probability of arriving in i from any configuration j and the probability of leaving i towards any configuration j. At equilibrium, i.e. in a stationary state, the master equation (2.14) yields the set of conditions

Σ_j W(j → i)Peq(j) = Peq(i) Σ_j W(i → j).    (2.15)

A simple solution of these equations is given by

W(j → i)Peq(j) = W(i → j)Peq(i).    (2.16)

recently a few algorithms violating detailed balance have been proposed that converges asymptotically towards equilibrium (or a stationary state). Equation (2. to obtain the transition matrix (W (i → j)). we restrict ourselves to this situation in the remainder of this chapter Eqs.16) is a solution with a simple physical interpretation. 2. one chooses α(i → j) = α(j → i).4 Metropolis algorithm This equation.22) (2. one has W (i → j) = α(i → j)Π(i → j). (2.18) This implies that W (i → j) does not depend on the partition function Z.16). From a configuration i. 2.4 Metropolis algorithm The choice of a Markovian process in which detailed balance is satisfied is a solution of Eqs. As we will see later. Let us add this condition is not a sufficient condition.16) can be reexpressed as Peq (j) W (i → j) = W (j → i) Peq (j) = exp(−β(U (j) − U (i))). several algorithms use the detailed balance condition.18) are reexpressed as Π(i → j) = exp(−β(U (j) − U (i))) Π(j → i) The choice introduced by Metropolis et al. in other words. (2.17) (2. Because it is difficult to prove that a Monte Carlo algorithm converges towards equilibrium. We will discuss other choices later.15) is unique and that the equation (2.16) or. In order to obtain solutions of Eqs. the probability that the system goes from a stationary state i to a state j is the same that the reverse situation. because we have not proved that the solution of the set of equations (2. (2. one selects randomly a new configuration j. (2.21) (2. (2. This new configuration is accepted with a probability Π(i → j). is known as the condition of detailed balance. is Π(i → j) = exp(−β(U (j) − U (i))) =1 25 if U (j) > U (i) if U (j) ≤ U (i) (2. with a priori probability α(i → j).2.16). Therefore. in a stationary state (or equilibrium state).19) In the original algorithm developed by Metropolis (and many others Monte Carlo algorithms). but only on the Boltzmann factor. 
note that a stochastic process is the sequence of two elementary steps: 1.20) . (2. It expresses that.

As we will see later, in a Monte Carlo simulation there are generally two time scales: a first stage where, starting from an initial configuration, one runs the dynamics in order to lead the system close to equilibrium, and a second stage where the system evolves in the vicinity of equilibrium and where the computation of the averages is performed. The computation of a thermal average only starts when the system has reached equilibrium, namely when P ≃ Peq. Some additional comments on the implementation of a Monte Carlo algorithm:

1. Unfortunately, the duration of the first stage is not easily predictable, in particular for disordered systems. A naive approach consists of following the evolution of the instantaneous energy of the system and of considering that equilibrium is reached when the energy has stabilized around a quasi-stationary value. In the absence of a more precise criterion, this is reasonable in a first approach. A more precise method estimates the relaxation time of a correlation function, and one then chooses an equilibration time significantly larger than the relaxation time. More sophisticated estimates will be discussed in this lecture.

2. The Metropolis algorithm is efficient in phases far from transitions and at temperatures larger than the transition temperature; in chapter 5, specific methods for studying phase transitions will be discussed. Moreover, its implementation is simple, and it can be used as a benchmark for more sophisticated methods.

We now consider how the Metropolis algorithm can be implemented for several basic models.

2.5 Applications

2.5.1 Ising model

Some known results

Let us consider the Ising model (or, equivalently, a lattice gas model) defined by the Hamiltonian

H = −J Σ_{<i,j>} Si Sj − H Σ_{i=1}^{N} Si    (2.23)

where the summation < i, j > means that the interaction is restricted to distinct pairs of nearest neighbors and H denotes an external uniform field. If J > 0, the interaction is ferromagnetic; if J < 0, the interaction is antiferromagnetic. In one dimension, this model can be solved analytically, and one shows that the critical temperature is zero. In two dimensions, Onsager (1944) solved this model in the absence of an external field and showed that there exists a para-ferromagnetic transition at a finite temperature. For the square lattice, this critical temperature is given by

Tc = 2J / ln(1 + √2),    (2.24)

with a numerical value Tc ≃ 2.269185314 J. In three dimensions, the model cannot be solved analytically, but numerical simulations have been performed with different algorithms, and the values of the critical temperature obtained for various lattices and from different theoretical methods are very accurate (see Table 2.1). A useful, but crude, estimate of the critical temperature is given by mean-field theory,

Tc = cJ,    (2.25)

where c is the coordination number.

Table 2.1 – Critical temperature of the Ising model for different lattices in 1 to 4 dimensions.

D   Lattice      Tc/J (exact)    Tc,MF/J
1   —            0               2
2   square       2.269185314     4
2   triangular   3.6410          6
2   honeycomb    1.5187          3
3   cubic        4.515           6
3   bcc          6.32            8
3   diamond      2.7040          4
4   hypercube    6.68            8

Table 2.1 illustrates that the critical temperatures given by mean-field theory are always an upper bound of the exact values, and that the quality of the approximation becomes better when the spatial dimension of the system is large and/or the coordination number is large.

Metropolis algorithm

Since simulation cells are always of finite size, the ratio of the surface times the linear dimension of a particle to the volume of the system is a quite large number. To avoid large boundary effects, simulations are performed by using periodic boundary conditions. In two dimensions, this means that the simulation cell must tile the plane: due to this geometric constraint, two types of cell are possible, an elementary square and a regular hexagon, with the condition that a particle which exits by a given edge returns into the simulation cell by the opposite edge. Note that this condition can bias the simulation when the symmetry of the simulation cell is incompatible with the symmetry of the low-temperature phase, e.g. a crystal phase, a modulated phase, etc.

For starting a Monte Carlo simulation, one needs to define an initial configuration, which can be:

1. The ground state, with all spins aligned (+1 or −1);
2. An infinite-temperature configuration: for each site, one chooses randomly and uniformly a number between 0 and 1; if this number is between 0 and 0.5, the site spin is taken equal to +1, and if it is between 0.5 and 1, the spin is taken equal to −1.

As mentioned previously, the Monte Carlo dynamics involves two elementary steps: first, one selects randomly a trial configuration, and second, one considers acceptance or rejection by using the Metropolis condition. In order that the trial configuration can be accepted, it needs to be "close" to the previous configuration; indeed, the acceptance probability is proportional to the exponential of the energy difference between the two configurations: if this difference is large and positive, the acceptance probability becomes very small and the system can stay "trapped" a long time in a local minimum. For the Ising model, the single spin flip is generally used, which leads to a dynamics where the energy changes are reasonable. Because the dynamics must remain stochastic, the spin must be chosen randomly for each trial configuration, and not according to a regular sequence.

In summary, an iteration of the Metropolis dynamics for the Ising model consists of the following steps:

1. A site is selected by choosing at random an integer i between 1 and the total number of lattice sites.
2. One computes the energy difference ∆E between the trial configuration (in which the spin i is flipped) and the old configuration. For short-range interactions, this difference involves only a few terms and is "local", because only the selected site and its nearest neighbors are considered.
3. If the trial configuration has a lower energy, it is accepted. Otherwise, one chooses a uniform random number between 0 and 1: if this number is less than exp(−β∆E), the trial configuration is accepted; if not, the system stays in the old configuration.
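The iteration described above can be sketched as follows (zero field, square lattice with periodic boundary conditions; a minimal illustration, not optimized):

```python
import math
import random

def metropolis_step(spin, beta, J=1.0, rng=random):
    """One Metropolis iteration for the 2D Ising model (single spin flip)."""
    L = len(spin)
    # 1. select a site at random
    i, j = rng.randrange(L), rng.randrange(L)
    # 2. energy difference for flipping spin (i, j); only the nearest
    #    neighbors contribute (local update)
    nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
          + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
    dE = 2.0 * J * spin[i][j] * nn
    # 3. Metropolis acceptance rule
    if dE <= 0 or rng.random() < math.exp(-beta * dE):
        spin[i][j] = -spin[i][j]
        return dE      # accepted: the energy update is E_n = E_o + dE
    return 0.0         # rejected: the configuration is unchanged
```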

The computation of thermodynamic quantities (mean energy, specific heat, magnetization, susceptibility, ...) can be performed easily. Indeed, if a new configuration is accepted, the update of these quantities does not require additional calculation:

En = Eo + ∆E,    (2.26)

where En and Eo denote the energies of the new and of the old configuration; similarly, the magnetization is given by Mn = Mo + 2 sgn(Si), where sgn(Si) is the sign of the spin Si in the new configuration. In the case where the old configuration is kept, the instantaneous quantities are unchanged, but time must be incremented.

As we discussed in the previous chapter, thermodynamic quantities are expressed as moments of the energy distribution and/or of the magnetization distribution. Practically, the implementation of a histogram method consists of defining an array whose dimension corresponds to the number of available values of the quantity. For the Ising model, the energy spectrum is discrete; for instance, the magnetization is stored in an array histom[2N + 1], where N is the total number of spins (the size of the array could be divided by two, because the available values of the magnetization are separated by two units). Starting with an array whose elements are set to zero at the initial time, the magnetization update is performed at each simulation time by the formal code line

histom[magne] = histom[magne] + 1

where magne is the variable storing the instantaneous magnetization. The thermal averages can thus be computed at the end of the simulation from the histograms accumulated along the run, and these histograms can eventually be stored for further analysis. For a discrete spectrum, performing a thermal average "on the fly" or from the histogram is equivalent; for continuous systems, as we will see below, it is not, although, by choosing a sufficiently small bin size, the average performed by using the histogram converges to the "on the fly" value.

2.5.2 Simple liquids

Brief overview

Let us recall that a simple liquid is a system of point particles interacting by a pairwise potential. When the phase diagram is composed of different regions (liquid phase, gas and solid phases), the interaction between particles must contain a repulsive short-range part and an attractive long-range part. The physical interpretation of these contributions is the following: at short distance, quantum mechanics prevents atom overlap, which explains the repulsive part of the potential; at long distance, even for neutral atoms, the London forces (whose origin is also quantum mechanical) are attractive. A potential satisfying these two criteria is the Lennard-Jones potential,

uij(r) = 4ε[(σ/r)^12 − (σ/r)^6],    (2.27)

where ε defines a microscopic energy scale and σ denotes the diameter of the atoms.

Metropolis algorithm

For an off-lattice system, a trial configuration is generated by moving a particle chosen at random. In a three-dimensional space, a trial move is given by

x′i = xi + ∆(rand − 0.5),    (2.28)
y′i = yi + ∆(rand − 0.5),    (2.29)
z′i = zi + ∆(rand − 0.5),    (2.30)

with the condition (x′i − xi)^2 + (y′i − yi)^2 + (z′i − zi)^2 ≤ ∆^2/4 (this condition corresponds to considering isotropic moves), where rand denotes a uniform random number between 0 and 1 and ∆ is the maximum displacement per step. Note that xi → xi + ∆ rand would be incorrect: only positive moves would be allowed, and detailed balance would not be satisfied (see Eq. (2.16)). The value of ∆ must be set at the beginning of the simulation, and it is generally chosen so as to keep a reasonable ratio of accepted configurations over the total number of trial configurations.

In a simulation, all quantities are expressed in dimensionless units: the temperature is T* = kB T/ε (where kB is the Boltzmann constant), the distance is r* = r/σ, and the energy is u* = u/ε. The three-dimensional Lennard-Jones model has a critical point at T*c = 1.3 and ρ*c = 0.3, and a triple point at T*t = 0.6 and ρ*t = 0.8.

The calculation of the energy of a new configuration is more sophisticated than for lattice models: because the interaction involves all particles of the system, the energy update contains N terms, where N is the total number of particles of the simulation cell. Moreover, the periodic boundary conditions cover space by replicating the original simulation cell, so the interaction involves not only the particles inside the simulation cell, but also the particles in all the replicas:

Utot = (1/2) Σ_{i,j,n} u(|rij + nL|),    (2.31)

where L is the length of the simulation cell and n is a vector with integer components. For interaction potentials decreasing sufficiently rapidly (practically, for potentials such that ∫ dV u(r) is finite, where dV is the infinitesimal volume), the summation is restricted to the nearest cells, the so-called minimum image convention. In practice, for calculating the energy between the particle i and the particle j, a test is performed on each spatial coordinate: if |xi − xj| (|yi − yj| and |zi − zj|, respectively) is less than one-half of the size of the simulation cell, L/2, the coordinate difference is used directly; if it is greater than L/2, one calculates xi − xj mod L (yi − yj mod L and zi − zj mod L, respectively), which amounts to interacting with the nearest periodic image of the particle j.
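The minimum image convention, together with a truncation at a cutoff rc (discussed below), can be sketched as follows for the Lennard-Jones potential (2.27), in reduced units ε = σ = 1 (the value rc = 2.5 is a common but illustrative choice):

```python
def lj_energy(positions, box, rc=2.5):
    """Total truncated Lennard-Jones energy in reduced units, with the
    minimum image convention for a cubic box of side `box`."""
    n = len(positions)
    e = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r2 = 0.0
            for a in range(3):
                d = positions[i][a] - positions[j][a]
                # minimum image: use the nearest periodic image
                if d > box / 2:
                    d -= box
                elif d < -box / 2:
                    d += box
                r2 += d * d
            if r2 < rc * rc:                      # truncated potential
                inv6 = 1.0 / r2 ** 3
                e += 4.0 * (inv6 * inv6 - inv6)   # 4(r^-12 - r^-6)
    return e
```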

When the potential decreases rapidly (for instance, the Lennard-Jones potential), the density ρ(r) around a given particle becomes uniform at long distance, and one can estimate the contribution of the neglected tail to the mean energy by the following formula:

<ui> = (1/2) ∫_{rc}^{∞} 4πr^2 dr u(r) ρ(r)    (2.32)
≃ (ρ/2) ∫_{rc}^{∞} 4πr^2 dr u(r),    (2.33)

which becomes independent of the system size. This correction can be added to the simulation results. This means that the potential is replaced with a truncated potential,

utrunc(r) = u(r) for r ≤ rc,  0 for r > rc.    (2.34)

Note that the truncated potential introduces a discontinuity of the potential at rc, which gives an "impulse" contribution to the pressure. For a finite-range potential, the energy update only involves a summation over a finite number of particles at each time, and the computation time becomes proportional to the number of particles N (because we keep constant the number of moves per particle for all system sizes); in the absence of truncation of the potential, the energy update would be proportional to N^2. The computation of the energy of the initial configuration remains time consuming, because it requires the calculation of N^2/2 terms, but this first step is performed only once. For Molecular Dynamics, we will see that this truncation procedure is not sufficient.

In a similar manner to the Ising model (and lattice models in general), it is useful to store the successive instantaneous energies in a histogram. However, the energy spectrum being continuous, it is necessary to introduce an adapted bin size ∆E. If E(t) is the energy of the system at time t, the index of the corresponding array element is given by the relation

i = Int((E − Emin)/∆E).    (2.35)

If one denotes by Nh the dimension of the histogram array, the energies are stored from Emin to Emin + ∆E(Nh − 1). In this case, unlike for the Ising model, the numerical values of the moments calculated from the histogram are not exact, and it is necessary to choose the bin size in order to minimize this bias.

2.6 Random number generators

Monte Carlo simulation uses random numbers extensively, and it is useful to examine the quality of their generators. The basic generator is a procedure that gives sequences of uniform pseudo-random numbers.

A generator is characterized by its period, which we expect to be very large, and more precisely much larger than the total number of random numbers required in a simulation run. The quality of a generator depends on a large number of criteria: it is obviously necessary that all the moments of a uniform distribution be reproduced, but, since the numbers must also be independent, the correlation between successive draws must be as weak as possible. In the early stage of computational physics, random number generators used a one-byte coding, leading to very short periods, which biased the first simulations. We are now beyond this time, since the coding is performed with 8 or 16 byte words, and there exist nowadays several random number generators of high quality.

A generator relies on an initial number (or several numbers), the seed(s). If the seed(s) of the random number generator is (are) not set, the procedure uses a default seed, and when one restarts a run without setting the seed, the same sequence of random numbers is generated. While this feature is useful for debugging, production runs require different seeds; if not, dangerous biases can result from the absence of a correct initialization of the seeds.

Two kinds of algorithms are at the origin of random number generators. The first one is based on a linear congruence relation,

xn+1 = (a xn + c) mod m.    (2.36)

This relation generates a sequence of pseudo-random integers between 0 and m − 1, and m corresponds to the period of the generator. Among the generators using a linear congruence relation, one finds the functions randu (IBM), ranf (Cray), drand48 on Unix computers, ran (Numerical Recipes, Knuth), etc. The periods of these generators go from 2^29 (randu) to 2^48 (ranf). Let us recall that 2^30 ≃ 10^9: if one considers an Ising spin lattice in three dimensions with 100^3 sites, only about 10^3 spin flips per site can be done with the smallest of these periods. With the largest ones, the number of allowed spin flips per site is multiplied by a thousand and more, which can be used for a preliminary study of the phase diagram (outside of the critical region, where a large number of configurations is required).

P. Lecuyer contributed many times to the development of generators based either on a linear congruence relation or on a register shift. The generator rng_cmrg (Lecuyer) provides a sequence of numbers obtained from the relation

zn = (xn − yn) mod m1,    (2.37)

where xn and yn are given by the relations

xn = (a1 xn−1 + a2 xn−2 + a3 xn−3) mod m1,    (2.38)
yn = (b1 yn−1 + b2 yn−2 + b3 yn−3) mod m2.    (2.39)
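A linear congruential generator of the form (2.36) takes a few lines; with deliberately tiny toy parameters, the (short) period is easy to exhibit (these values are for illustration only, not for production use):

```python
def lcg(seed, a=5, c=1, m=16):
    """Toy linear congruential generator x_{n+1} = (a x_n + c) mod m."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

def period(seed=0, **kw):
    """Length of the cycle eventually entered by the sequence."""
    gen = lcg(seed, **kw)
    seen = []
    for x in gen:
        if x in seen:
            return len(seen) - seen.index(x)
        seen.append(x)

print(period())   # 16: the period can never exceed m
```

Here a = 5, c = 1, m = 16 satisfy the classical full-period conditions for a mixed congruential generator, so every seed yields the maximal period m.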

An example is provided by the Kirkpatrick and Stoll generator. then u = F −1 (x) define a cumulative distribution for random numbers with a uniform distribution on the interval [0.41) If there exists an inverse function F −1 . Inverse transformation If f (x) denotes the probability distribution on the interval I. usually denoted E(λ).42) = 1 − e−λx Therefore.43) Considering the cumulative distribution F (x) = 1 − x. one immediately obtains that u is a uniform random variable defined on the unit interval.6. inverting the relation u = F (x). The generator with the largest period is likely of that Matsumoto and Nishimura. Its period is 106000 ! It uses 624 words and it is equidistributed in 623 dimensions! 2. namely random numbers defined by a probability distribution f (x) on an interval I. but it needs to store 250 words. 2250 . known by the name MT19937 (Mersenne Twister generator). one defines the cumulative distribution F as x F (x) = f (t)dt (2. 1]. xn = xn−103 xn−250 (2. For instance. one obtains x=− ln(1 − u) λ (2. as we have seen in the above section. if one has a exponential distribution probability with a average λ.2. there exists some physical situations where it is necessary to generate non uniform random numbers. one has x F (x) = 0 dtλe−λt (2.1 Generating non uniform random numbers Introduction If.40) Its period is large. and 1−u is 33 .6 Random number generators The generator period is 2305 1061 The second class of random number generators is based on the register shift through the logical operation “exclusive or“. such that I dxf (x) = 1 (condition of probability normalization). it is possible to obtain a ”good” generator of uniform random numbers. We here introduce several methods which generated random numbers with a specific probability distribution.

Box-Muller method

The Gaussian distribution is frequently used in simulations. Unfortunately, for a Gaussian distribution of unit variance centered on zero, denoted N(0, 1) (N corresponds to a normal distribution), the cumulative distribution is an error function:

F(x) = (1/√(2π)) ∫_{−∞}^{x} dt e^{−t²/2}.  (2.45)

This function is not invertible and one cannot apply the previous method. However, one can obtain Gaussian random variables by considering a couple of independent random variables (x, y) whose joint probability distribution is f(x, y) = exp(−(x² + y²)/2)/(2π). Using the change of variables (x, y) → (r, θ), namely polar coordinates, the joint probability becomes

f(x, y) dx dy = (1/2) exp(−r²/2) dr² (dθ/2π).  (2.46)

Therefore, r² is a random variable with an exponential probability distribution of parameter 1/2, E(1/2), and θ is a random variable with a uniform probability distribution on the interval [0, 2π]. Consequently, if u and v are two uniform random variables on the interval [0, 1], one obtains

x = √(−2 ln(u)) cos(2πv),
y = √(−2 ln(u)) sin(2πv),  (2.47)

which are independent random variables with a Gaussian distribution N(0, 1).

Acceptance-rejection method

The inverse transformation method or the Box-Muller method cannot be used for a general probability distribution, whose cumulative distribution is not necessarily invertible. The acceptance-rejection method is a general method which gives random numbers with various probability distributions. It uses the following property:

f(x) = ∫_0^{f(x)} dt.  (2.48)

By considering a couple of random variables (x, u) with a uniform distribution and the condition 0 < u < f(x), one defines a joint probability distribution g(x, u); the function f(x) is then the marginal distribution of the x variable:

f (x) = dug(x. not necessary invertible. 2. For a good efficiency of the method. one obtains independent random variables with a distribution. the function f (x) is the marginal distribution (of the x variable) of the joint distribution. u) (2.5 0 0 0. the maximum of f (x) must be determined in the definition interval before selecting the range for the random variables. the efficiency of the method is asymptotically given by the ratio of areas: the area whose external boundaries are given by the curve. the x-axis and the two vertical axis over that of the rectangle defined by the two horizontal and vertical axis (in Fig. β)) is sampled .2 0.1. 35 .5 1 0.1 where the probability distribution xα (1 − x)β (or more formally. The price to pay is illustrated in Fig.1 – Computation of the probability distribution B(4. the area of the rectangle is equal to 1).6 Random number generators 1624 black 3376 red numerical ratio 0. 8) with the acceptance-rejection method with 5000 random points. a distribution B(α.8 1 Figure 2.2. More precisely.4 0.6 0. By choosing a maximum value for u equal to the maximum of the function optimizes the method.49) By using a Monte Carlo method with an uniform sampling.5 2 1.324 (exact ratio 1/3) 3 2. it is necessary to choose for the variable u an uniform distribution whose interval U[0.m] with m greater or equal than the maximum of f (x). Therefore. 2.
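The sampling of Fig. 2.1 can be reproduced with a short sketch; the density below is the normalized B(4, 8) distribution, f(x) = 1320 x³(1 − x)⁷, whose maximum (≈ 2.94, at x = 0.3) is slightly below the bound m = 3 used here. The function names are ours:

```python
import random

def beta48_density(x):
    """Normalized B(4,8) density, f(x) = 1320 x^3 (1-x)^7 on [0, 1]."""
    return 1320.0 * x ** 3 * (1.0 - x) ** 7

def sample_beta48(rng=random.random, fmax=3.0):
    """Acceptance-rejection: draw (x, u) uniform in [0,1] x [0,fmax], keep x if u < f(x)."""
    while True:
        x, u = rng(), fmax * rng()
        if u < beta48_density(x):
            return x

random.seed(2)
accepted = [sample_beta48() for _ in range(20000)]
mean = sum(accepted) / len(accepted)   # the mean of B(4,8) is 4/(4+8) = 1/3
```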

For the probability distribution of Fig. 2.1, by choosing a maximum value equal to 3 for the variable u, one obtains a ratio of 1/3 (the value corresponding to an infinitely large number of trials) between the number of accepted random numbers and the total number of trials (5000 in Fig. 2.1, with 1624 acceptances and 3376 rejections).

The limitations of this method are easy to identify: only probability distributions with a compact support can be well sampled. If not, a truncation of the distribution must be done, and when the distribution decreases slowly, the acceptance ratio goes to a small value, which deteriorates the efficiency of the method. In order to improve the method, the rectangle can be replaced with an area whose upper boundary is defined by a curve g. If g is a simple function to sample and if g > f for all values of the interval, one samples random numbers with the distribution g and the acceptance-rejection test is applied to the ratio f(x)/g(x). The efficiency of the method then becomes the ratio of the areas defined by the two functions, and the number of rejections decreases.

Method of the ratio of uniform numbers

This method (ROU) generates random variables for various probability distributions by using the ratio of two uniform random variables. Let us consider an integrable function r, normalized to one, and let us denote z = a₁ + a₂ y/x, where x and y are uniform random numbers, x on the interval [0, x*] and y on the interval [y*, y*]. One can show the following property: if the couple (x, y) is sampled uniformly in the domain

D = { (x, y) : 0 < x, x² ≤ r(a₁ + a₂ y/x) },  (2.50)

which lies within the rectangle [0, x*] × [y*, y*], then z has the distribution r(z). In order to show this property, let us consider the joint distribution f_{X,Y}(x, y), which is uniform in the domain D, and let us perform the change of variables w = x² and z = a₁ + a₂ y/x. The joint distribution f_{W,Z}(w, z) is given by

f_{W,Z}(w, z) = J f_{X,Y}(√w, (z − a₁)√w/a₂),  (2.51)

where J is the Jacobian of the change of variables:

J = det [ ∂x/∂w  ∂x/∂z ; ∂y/∂w  ∂y/∂z ]
  = det [ 1/(2√w)  0 ; (z − a₁)/(2a₂√w)  √w/a₂ ]
  = 1/(2a₂).  (2.52)

By calculating the marginal distribution

f_Z(z) = ∫ dw f_{W,Z}(w, z),  (2.53)

and noting that, at fixed z, the condition defining D reads w ≤ r(z), one infers that f_Z(z) ∝ r(z); since r is normalized, one has

f_Z(z) = r(z), for −∞ < z < ∞.  (2.54)

Therefore, if (x, y) is uniform in the domain D, then z = a₁ + a₂ y/x has the distribution r(z).

To determine the equations of the boundaries of the domain D, one needs to solve the equations

x(z) = √(r(z)),  (2.55)
y(z) = (z − a₁) x(z)/a₂,  (2.56)

and the bounds of the enclosing rectangle are given by

x* = sup_z(√(r(z))),  (2.57)
y* = inf_z((z − a₁)√(r(z))/a₂),  (2.58)
y* = sup_z((z − a₁)√(r(z))/a₂).  (2.59)

Let us consider the Gaussian distribution, r(z) = e^{−z²/2} (up to a normalization factor); we restrict ourselves to positive values of z. One chooses a₁ = 0 and a₂ = 1. By using Eqs. (2.57)–(2.59), one infers that x* = 1, y* = 0 and y* = √(2/e). The test to perform is x² ≤ e^{−(y/x)²/2}, which can be reexpressed as y² ≤ −4x² ln(x). Because this computation involves logarithms, which are time consuming to compute, the interest of the method could seem limited. But it is possible to improve the performance by avoiding the calculation of transcendental functions as follows: the logarithm is bounded by the functions

(x − 1)/x ≤ ln(x) ≤ x − 1.  (2.60)

To limit the calculation of logarithms, one performs the following pretests:

• y² ≤ 4x²(1 − x): if the test is true, the number is accepted (this corresponds to a domain entirely inside the domain defined by the logarithm); otherwise, the second test is performed.
• y² ≤ 4x(1 − x): if the test is false, the number is rejected (the point lies entirely outside the domain); otherwise, the full logarithm test y² ≤ −4x² ln(x) is performed.

Although some numbers need the three tests before being accepted, one can show that this procedure saves computing time: a simple explanation comes from the fact that a significant fraction of the numbers is decided by the pretests, and only few numbers need the logarithm test. One can also show that the ratio of the domain D over the rectangle of available values of x and y is equal to √(πe)/4 ≈ 0.7306. Beyond this efficiency, the ROU method allows the sampling of probability distributions whose support is not compact (see the Gaussian distribution above), whereas the acceptance-rejection method requires the distribution to be truncated; this represents a significant advantage over the acceptance-rejection method.
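A sketch of this ROU sampling of the half-Gaussian (z ≥ 0), with the two pretests; the function name is ours:

```python
import math
import random

def rou_half_gaussian(rng=random.random):
    """Ratio-of-uniforms sampling of z >= 0 with density prop. to exp(-z^2/2),
    using a1 = 0, a2 = 1, x* = 1, y* = sqrt(2/e) and the logarithm pretests."""
    ymax = math.sqrt(2.0 / math.e)
    while True:
        x = rng()
        if x == 0.0:
            continue
        y = ymax * rng()
        y2 = y * y
        if y2 <= 4.0 * x * x * (1.0 - x):      # quick accept, no logarithm
            return y / x
        if y2 > 4.0 * x * (1.0 - x):           # quick reject, no logarithm
            continue
        if y2 <= -4.0 * x * x * math.log(x):   # full test
            return y / x

random.seed(3)
zs = [rou_half_gaussian() for _ in range(50000)]
mean = sum(zs) / len(zs)   # the mean of |N(0,1)| is sqrt(2/pi), about 0.798
```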

2. etc. One defines the conjugate of Q by Q∗ = q0 1 − q1 i − q2 j − q3 k.. because scripting languages like Python.j = i..7.7. i. ♣ Q. i. q3 ) are imaginary components of a vector.1-2 Using the inverse transformation method. ♣ Q. 2. For Monte Carlo simulations. ♣ Q.k = −k. j. i.1-1 Give briefly two methods that generate random numbers with non uniform distribution. Gamma.i = k. 2. There exists several methods for generating these non uniform random numbers.k = −k. 2.7. Cauchy distributions. it is necessary to sample random numbers with non uniform distribution. Quaternions are a three dimensional generalization of complex numbers and are expressed as Q = q0 1 + q1 i + q2 j + q3 k. q0 is called real part and (q1 . where h and g are two functions to be determined For rotating a solid in three dimension. with θ = h(u) and φ = g(v). which are related to the Cartesian coordinates with (x = cos(φ) sin(θ). 1]. In order to simulate a uniform rotation of a vector in three dimensions. This feature is not restricted to the high-level language. ♣ Q. 2.7 2. k are elementary quaternions. The probability distribution is then given by f (θ.Q∗ of a quaternion. unit quaternions are often used.j = k. show that one can find two uniform random variables u and v.62) .Monte Carlo method owns Gaussian. (2.1-3 Calculate the square module Q. z = cos(θ)).7.1-4 Justify that quaternions can be interpreted as points belonging to a hypersphere of a 4-dimensional space 38 (2. one can use the spherical coordinates with the usual angles θ and φ.i = j.7. φ) = sin(θ) with θ comprised 4π between 0 and π and φ between 0 and 2π.61) where 1.j = −j. i.a = a with a any elementary quaternion. Quaternion multiplication is not commutative (like rotation composition in three dimension) and is given by the following rules 1.1 Exercises Inverse transformation Random numbers are provided by softwares are basically uniform random numbers on the interval [0.i = jandj. Perl also implement these functions. 
y = sin(φ) sin(θ). q2 .k = −1..

2.7.2 Detailed balance

To obtain a system at equilibrium, the dynamics is given by a Monte Carlo simulation. The goal of this exercise is to prove some properties of the algorithms driving the system to equilibrium. In the first three questions, one assumes that the algorithm satisfies detailed balance; Pα→β denotes the transition probability of the Markovian dynamics, representing the transition from a state α towards a state β, and ρα is the equilibrium density.

♣ Q. 2.7.2-1 Let us denote by α and β two states of the phase space; write the detailed balance relation satisfied by this algorithm.

♣ Q. 2.7.2-2 Give two Monte Carlo algorithms which satisfy detailed balance.

♣ Q. 2.7.2-3 Consider three states, denoted α, β and γ; show that there exists a simple relation between Pα→β Pβ→γ Pγ→α and Pα→γ Pγ→β Pβ→α in which the equilibrium densities are absent.

We now seek to prove the converse of the above question: given an algorithm verifying the relation of question 3, one wants to show that it satisfies detailed balance.

♣ Q. 2.7.2-4 By assuming that there is at least one state α such that Pα→δ ≠ 0 for each state δ, and that the equilibrium probability associated with α is ρα, show that one can infer the detailed balance equation between two states β and δ by setting the equilibrium density ρδ equal to

ρδ = ρα Pα→δ (Pδ→α)⁻¹.  (2.63)

♣ Q. 2.7.2-5 Calculate the right-hand side of Eq. (2.63), assuming that detailed balance is satisfied by the Monte Carlo algorithm.

2.7.3 Acceptance probability

Consider a particle of mass m moving in a one-dimensional potential v(x). In a Monte Carlo simulation with a Metropolis algorithm, at each step of the simulation, a random and uniform move is proposed for the particle, with an amplitude displacement comprised between −δ and δ.

♣ Q. 2.7.3-1 Write the master equation of the Monte Carlo algorithm for the probability P(x, t) of finding the particle at x at time t, as a function of W(x → x + h) = min(1, exp(−β(v(x + h) − v(x)))) and of W(x + h → x). Verify that the system goes to equilibrium.

♣ Q. 2.7.3-2 The acceptance rate is defined as the ratio of the number of accepted configurations over the total number of proposed configurations. Justify that this ratio is given by the expression

Pacc(δ) = (1/(2δR)) ∫_{−∞}^{+∞} dx exp(−βv(x)) ∫_{−δ}^{+δ} dh W(x → x + h),  (2.64)

where

R = ∫_{−∞}^{+∞} dx exp(−βv(x)).  (2.65)

♣ Q. 2.7.3-3 Show that the above equation can be reexpressed as

d(δ Pacc(δ))/dδ = (1/(2R)) ∫_{−∞}^{+∞} dx exp(−βv(x)) [ W(x → x + δ) + W(x → x − δ) ].  (2.66)

Let us consider the following potential:

v(x) = 0 for |x| ≤ a, and +∞ for |x| > a.  (2.67)

One restricts oneself to the case where δ < a.

♣ Q. 2.7.3-4 Show that Pacc(δ) = 1 − Aδ, where A is a constant to be determined.

The potential is now:

v(x) = −ε for |x| ≤ b, 0 for b < |x| ≤ a, and +∞ for |x| > a.  (2.68)

Let us denote α = exp(βε).

♣ Q. 2.7.3-5 Assuming that a > 2b, show that

Pacc(δ) = 1 − Bδ for δ ≤ b, and Pacc(δ) = C − Dδ for b ≤ δ < 2b,  (2.69)

where B, C and D are quantities to be expressed as functions of a, b and α.
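As a numerical check of Q. 2.7.3-4, a short Metropolis run in the box potential (2.67) can be compared with the linear law 1 − Aδ. The value A = 1/(4a) used below is our own evaluation of the integral, which the reader should recover as part of the exercise:

```python
import random

def box_acceptance_rate(a=1.0, delta=0.2, n_steps=500000, seed=4):
    """Metropolis walk in the box potential v(x) = 0 for |x| <= a, infinite outside.
    A move x -> x + h, with h uniform in [-delta, delta], is accepted unless it
    leaves the box. Returns the measured acceptance rate."""
    rng = random.Random(seed)
    x = 0.0
    accepted = 0
    for _ in range(n_steps):
        h = rng.uniform(-delta, delta)
        if abs(x + h) <= a:   # W = 1 inside the box, 0 outside
            x += h
            accepted += 1
    return accepted / n_steps

rate = box_acceptance_rate()
predicted = 1.0 - 0.2 / (4.0 * 1.0)   # 1 - A*delta with our value A = 1/(4a)
```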

2.7.4 Random number generator

Random number generators available on computers are uniform on the interval ]0, 1]. For Monte Carlo simulations, it is necessary to have random numbers with a non-uniform distribution. One seeks to obtain random numbers between −1 and +1 such that the probability distribution f(x) is given by

f(x) = 1 + x for −1 < x < 0,
f(x) = 1 − x for 0 < x < 1.  (2.70)

♣ Q. 2.7.4-1 Determine the cumulative probability distribution F(x) associated with f(x). You should consider the two cases x < 0 and x > 0.

♣ Q. 2.7.4-2 Denoting by η a uniform random variable between −1 and 1, determine the relation between x and η. Consider the two cases again.

We now have two generators of uniform, independent random numbers between −1/2 and +1/2. Let us denote by η₁ and η₂ two numbers associated with these distributions. Only pairs (η₁, η₂) such that η₁² + η₂² ≤ 1/4 are selected.

♣ Q. 2.7.4-3 Considering the joint probability distribution f(η₁, η₂), show that the sampling of the disc of radius 1/2 is uniform.

♣ Q. 2.7.4-4 Calculate the acceptance probability of the pairs of random numbers for generating this distribution.

♣ Q. 2.7.4-5 One considers the ratio z = η₁/η₂. Determine the probability distribution f(z). Hint: use polar coordinates.

♣ Q. 2.7.4-6 From the preceding distribution f(z), one calculates the numbers u = α + βz. Determine the probability distribution g(u) as a function of the distribution f.


Chapter 3

Molecular Dynamics

Contents
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Equations of motion . . . . . . . . . . . . . . . . . . . 45
3.3 Discretization. Verlet algorithm . . . . . . . . . . . . 45
3.4 Symplectic algorithms . . . . . . . . . . . . . . . . . . 47
3.4.1 Liouville formalism . . . . . . . . . . . . . . . . . . . . 47
3.4.2 Discretization of the Liouville equation . . . . . . . . 50
3.5 Hard sphere model . . . . . . . . . . . . . . . . . . . . 51
3.6 Molecular Dynamics in other ensembles . . . . . . . 53
3.6.1 Andersen algorithm . . . . . . . . . . . . . . . . . . . 54
3.6.2 Nosé-Hoover algorithm . . . . . . . . . . . . . . . . . 54
3.7 Brownian dynamics . . . . . . . . . . . . . . . . . . . . 57
3.7.1 Different timescales . . . . . . . . . . . . . . . . . . . 57
3.7.2 Smoluchowski equation . . . . . . . . . . . . . . . . . 57
3.7.3 Langevin equation. Discretization . . . . . . . . . . . 57
3.7.4 Consequences . . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9.1 Multi timescale algorithm . . . . . . . . . . . . . . . . 59

3.1 Introduction

Monte Carlo simulation has an intrinsic limitation: its dynamics does not correspond to the "real" dynamics. For continuous systems defined from a classical

Hamiltonian, it is possible to solve the differential equations of all the particles simultaneously. This method provides the possibility of precisely obtaining the dynamical properties (temporal correlations) of equilibrium systems, quantities which are available in experiments, which allows one to test, for instance, the quality of the model.

The efficient use of new tools is associated with a knowledge of their abilities: let us first analyze the largest available physical times, as well as the largest particle numbers, that can be reached for a given system in Molecular Dynamics. For many atomic systems, it is possible to find a typical time scale from an analysis of the microscopic parameters of the system (m mass of the particle, σ its diameter, and ε the energy scale of the interaction potential):

τ = σ √(m/ε).  (3.1)

This time corresponds to the duration for an atom to move over a distance equal to its linear size with a velocity equal to the mean velocity in the liquid. For argon, one has the numerical values σ = 3 Å, m = 6.63×10⁻²⁶ kg and ε = 1.64×10⁻²⁰ J, which gives τ ≈ 2.8×10⁻¹⁴ s. For solving the equations of motion, the integration step must be much smaller than the typical time scale τ, typically ∆t = 10⁻¹⁵ s. The total number of steps performed in a run being typically of order 10⁵ to 10⁷, the duration of the simulation of an atomic system is at most of order 10⁻⁸ s.

For atomic systems far from the critical region, relaxation times are much smaller than 10⁻⁸ s, and Molecular Dynamics is a very good tool for investigating the dynamic and thermodynamic properties of such systems; Molecular Dynamics is, however, not the best simulation method for a study of phase transitions. For glassformers, for instance in supercooled liquids, one observes relaxation times towards the glass transition which increase by several orders of magnitude, and such systems cannot be equilibrated by Molecular Dynamics. The situation is even worse if one considers more sophisticated molecular structures, in particular biological systems: for conformational changes of proteins, for instance at the contact of a solid surface, typical relaxation times are of order of a millisecond, or even larger. In these situations, it is necessary to coarse-grain some of the microscopic degrees of freedom, allowing the typical time scale of the simulation to be increased.

By using a Lennard-Jones potential, one can solve the equations of motion for systems of up to several hundred thousand particles. Note that for a simulation with a moderate number of particles, namely 10⁴, the average number of particles along an edge of the simulation box is (10⁴)^(1/3) ≈ 21; for a three-dimensional system, this means that a thermodynamic study requires periodic boundary conditions, as in Monte Carlo simulations, and that the correlation length must remain smaller than the dimensionless box length of 21.

3.2 Equations of motion

We consider below the typical example of a Lennard-Jones liquid. For the sake of simplicity, let us consider a Hamiltonian system of N identical particles. The equations of motion of particle i are given by

m d²rᵢ/dt² = − Σ_{j≠i} ∇_{rᵢ} u(rᵢⱼ).  (3.2)

For simulating an "infinite" system, periodic boundary conditions are used, similarly to a Monte Carlo simulation. The calculation of the force between two particles i and j can then be performed by using the minimum image convention (if the potential decreases sufficiently fast to 0 at large distance); this means that, for a given particle i, one needs to find whether it is j or one of its images which is the nearest neighbor of i. In practice, one restricts the summation to particles within a sphere whose radius corresponds to a truncation of the potential; the truncated Lennard-Jones potential used in Molecular Dynamics is the following:

u_trunc(r) = u(r) − u(r_c) for r < r_c, and 0 for r ≥ r_c.  (3.3)

One has to account for the bias introduced in the thermodynamic quantities by this truncated potential compared to the original Lennard-Jones potential. With these conventions, the force calculation on particle i involves the computation of (N − 1) elementary forces between each particle and particle i.

3.3 Discretization. Verlet algorithm

For obtaining a numerical solution of the equations of motion, it is necessary to discretize them. Different choices are a priori possible but, as we will see below, it is crucial that the total energy, which is constant for an isolated Hamiltonian system, remains conserved along the simulation (let us recall that the ensemble is microcanonical). The Verlet algorithm is one of the first methods introduced and remains one of the most used nowadays. For the sake of compactness, r denotes a vector with 3N components, r = (r₁, r₂, ..., r_N), where rᵢ denotes the position of particle i. The system formally evolves as

m d²r/dt² = f(r(t)).  (3.4)

A time series expansion gives

r(t + ∆t) = r(t) + v(t)∆t + (f(r(t))/2m)(∆t)² + (1/3!)(d³r/dt³)(∆t)³ + O((∆t)⁴),  (3.5)


and similarly

r(t − ∆t) = r(t) − v(t)∆t + (f(r(t))/2m)(∆t)² − (1/3!)(d³r/dt³)(∆t)³ + O((∆t)⁴).  (3.6)

Adding these two equations, one obtains

r(t + ∆t) + r(t − ∆t) = 2r(t) + (f(r(t))/m)(∆t)² + O((∆t)⁴).  (3.7)

The position updates are thus performed with an accuracy of order (∆t)⁴. This algorithm does not use the particle velocities for calculating the new positions; one can however determine them as follows:

v(t) = (r(t + ∆t) − r(t − ∆t)) / (2∆t) + O((∆t)²).  (3.8)
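As a minimal illustration (our own, not part of the notes), Eq. (3.7) applied to a one-dimensional harmonic oscillator, f(x) = −x with m = 1, whose exact trajectory is x(t) = cos t:

```python
import math

def verlet(x0, v0, dt, n_steps, force):
    """Position-Verlet integration of d2x/dt2 = f(x) with m = 1, Eq. (3.7)."""
    x_prev = x0 - v0 * dt + 0.5 * force(x0) * dt ** 2   # start-up step from Eq. (3.6)
    x = x0
    for _ in range(n_steps):
        # Eq. (3.7): x(t + dt) = 2 x(t) - x(t - dt) + f(x(t)) dt^2
        x_prev, x = x, 2.0 * x - x_prev + force(x) * dt ** 2
        
    return x

x_final = verlet(1.0, 0.0, 1e-3, 10000, lambda x: -x)
err = abs(x_final - math.cos(10.0))   # deviation from the exact solution at t = 10
```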

Let us note that most of the computation time is spent on the force calculation, and only marginally on the position updates. As we will see below, one can improve the accuracy of the simulation by using a higher-order time expansion, but this requires the computation of spatial derivatives of the forces, and the computation time then increases rapidly compared to the same simulation performed with the Verlet algorithm. For the Verlet algorithm, the accuracy of the trajectories is roughly given by

∆t⁴ N_t,  (3.9)

where N_t is the total number of integration steps, the total simulation time being ∆t N_t. It is worth noting that the accuracy decreases with simulation time, due to round-off errors which are not included in Eq. (3.9). There exist more sophisticated algorithms involving force derivatives; they improve the short-time accuracy, but the long-time precision may deteriorate faster than with the Verlet algorithm, and their computational efficiency is significantly lower.

Time-reversal symmetry is an important property of the equations of motion, and the Verlet algorithm preserves this symmetry: if one changes ∆t → −∆t, Eq. (3.7) is unchanged. As a consequence, if at a given time t of the simulation one inverts the arrow of time, the Molecular Dynamics trajectories are retraced back; only the round-off errors accumulating in the simulation limit this reversibility when the total number of integration steps becomes very large.

For Hamiltonian systems, the volume of phase space is conserved with time, and the numerical scheme must preserve this property: if an algorithm does not have it, the total energy is no longer conserved, even for a short simulation. In order to keep this property, it is necessary that the Jacobian of the transformation in phase space, namely between the old and the new coordinates


in phase space, be equal to one. We will see below that the Verlet algorithm satisfies this property.

A variant of the Verlet algorithm, known as the Leapfrog algorithm, is based on the following procedure: velocities are calculated at half-integer times, and positions are obtained at integer times. Defining the velocities at t + ∆t/2 and t − ∆t/2 as

v(t + ∆t/2) = (r(t + ∆t) − r(t)) / ∆t,  (3.10)
v(t − ∆t/2) = (r(t) − r(t − ∆t)) / ∆t,  (3.11)

one immediately obtains

r(t + ∆t) = r(t) + v(t + ∆t/2) ∆t  (3.12)

and similarly

r(t − ∆t) = r(t) − v(t − ∆t/2) ∆t.  (3.13)

By using Eq. (3.7), one gets

v(t + ∆t/2) = v(t − ∆t/2) + (f(t)/m) ∆t + O((∆t)³).  (3.14)

Because the Leapfrog algorithm leads back to Eq. (3.7), the trajectories are identical to those calculated by the Verlet algorithm: the velocities at half-integer times are only temporary variables, and the positions are identical. Some care is nevertheless necessary when calculating thermodynamic quantities, because the mean potential energy, which is calculated at integer times, and the kinetic energy, which is calculated at half-integer times, do not add up to a constant total energy.

Many attempts to obtain better algorithms have been made and, in order to go beyond the previous heuristic derivations, a more rigorous approach is necessary. From the Liouville formalism, we are going to derive symplectic algorithms, namely algorithms conserving the phase-space volume and consequently the total energy of the system. This method provides algorithms with greater accuracy than the Verlet algorithm for a fixed time step ∆t, so that more precise trajectories can be obtained.
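The Leapfrog scheme of Eqs. (3.10)–(3.14) can be sketched for the same oscillator, f(x) = −x with m = 1 (our own illustration); as stated above, it reproduces the Verlet trajectory:

```python
import math

def leapfrog(x0, v0, dt, n_steps, force):
    """Leapfrog scheme: velocities at half-integer times, positions at integer times."""
    v_half = v0 + 0.5 * force(x0) * dt   # start-up half step giving v(dt/2)
    x = x0
    for _ in range(n_steps):
        x += v_half * dt                 # Eq. (3.12)
        v_half += force(x) * dt          # Eq. (3.14)
    return x

x_final = leapfrog(1.0, 0.0, 1e-3, 10000, lambda x: -x)
err = abs(x_final - math.cos(10.0))      # same accuracy as the Verlet algorithm
```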

3.4 Symplectic algorithms

3.4.1 Liouville formalism

In the Gibbs formalism, the phase space distribution is given by an N-particle probability distribution f^(N)(r^N, p^N, t) (N is the total number of particles of

the system), where r^N denotes the set of coordinates and p^N that of the momenta. If f^(N)(r^N, p^N, t) is known, the average of any macroscopic quantity can be calculated (Frenkel and Smit [1996], Tsai et al. [2005]). The time evolution of this distribution is given by the Liouville equation

∂f^(N)(r^N, p^N, t)/∂t + Σ_{i=1}^{N} [ (drᵢ/dt)·(∂f^(N)/∂rᵢ) + (dpᵢ/dt)·(∂f^(N)/∂pᵢ) ] = 0,  (3.15)

or

∂f^(N)(r^N, p^N, t)/∂t − {H_N, f^(N)} = 0,  (3.16)

where H_N denotes the Hamiltonian of the system of N particles, and the bracket {A, B} corresponds to the Poisson bracket¹. The Liouville operator is defined as

L = i {H_N, · }  (3.18)
  = i Σ_{i=1}^{N} [ (∂H_N/∂pᵢ)·(∂/∂rᵢ) − (∂H_N/∂rᵢ)·(∂/∂pᵢ) ],  (3.19)

with

∂H_N/∂pᵢ = vᵢ = drᵢ/dt,   −∂H_N/∂rᵢ = fᵢ = dpᵢ/dt.  (3.20)–(3.22)

Therefore, Eq. (3.16) is reexpressed as

∂f^(N)(r^N, p^N, t)/∂t = −iL f^(N),  (3.23)

and one gets the formal solution

f^(N)(r^N, p^N, t) = exp(−iLt) f^(N)(r^N, p^N, 0).  (3.24)

Similarly, a function A of the positions r^N and momenta p^N (but without explicit time dependence) obeys

dA/dt = Σ_{i=1}^{N} [ (∂A/∂rᵢ)·(drᵢ/dt) + (∂A/∂pᵢ)·(dpᵢ/dt) ].  (3.25)

¹If A and B are two functions of r^N and p^N, the Poisson bracket between A and B is defined as

{A, B} = Σ_{i=1}^{N} [ (∂A/∂rᵢ)·(∂B/∂pᵢ) − (∂B/∂rᵢ)·(∂A/∂pᵢ) ].  (3.17)


Using the Liouville operator, this equation becomes

dA/dt = iLA,  (3.26)

which gives formally

A(r^N(t), p^N(t)) = exp(iLt) A(r^N(0), p^N(0)).  (3.27)

Unfortunately (or fortunately because, if not, the world would be too simple), an exact expression of the exponential operator is not possible in the general case. However, one can obtain it in two important cases. Let us first express the Liouville operator as the sum of two operators,

L = L_r + L_p,  (3.28)

where

iL_r = Σᵢ (drᵢ/dt) ∂/∂rᵢ  (3.29)

and

iL_p = Σᵢ (dpᵢ/dt) ∂/∂pᵢ.  (3.30)

In the first case, one assumes that the operator L_p is equal to zero. Physically, this corresponds to situations where the particle momenta are conserved during the evolution of the system, and

iL_r⁰ = Σᵢ (drᵢ/dt)(0) ∂/∂rᵢ.  (3.31)

The time evolution of A(t) is then given by

A(r(t), p(t)) = exp(iL_r⁰ t) A(r(0), p(0)).  (3.32)

Expanding the exponential, one obtains

A(r(t), p(t)) = exp( Σᵢ (drᵢ/dt)(0) t ∂/∂rᵢ ) A(r(0), p(0))  (3.33)
= A(r(0), p(0)) + iL_r⁰ A(r(0), p(0)) + ((iL_r⁰)²/2!) A(r(0), p(0)) + ...  (3.34)
= Σ_{n=0}^{∞} (1/n!) ( (drᵢ/dt)(0) t )ⁿ (∂ⁿ/∂rᵢⁿ) A(r(0), p(0))  (3.35)
= A( { rᵢ + (drᵢ/dt)(0) t }, p^N(0) ).  (3.36)

One thus obtains the solution of the Liouville equation: it is a simple translation of the spatial coordinates, which corresponds to a free streaming of the particles without interaction. The second case uses a Liouville operator in which L_r⁰ = 0, with L_p⁰ defined like L_r⁰ in the first case. Applying the same expansion to A, one obviously obtains the solution of the Liouville equation as a simple momentum translation,

exp(iL_p⁰ t) A(r^N(0), p^N(0)) = A( r^N(0), { pᵢ(0) + (dpᵢ/dt)(0) t } ).

3.4.2 Discretization of the Liouville equation

In the previous section, we expressed the Liouville operator as the sum of the two operators L_r and L_p. These two operators do not commute:

exp(Lt) ≠ exp(L_r t) exp(L_p t).  (3.37)

The key point of the reasoning uses the Trotter identity,

exp(B + C) = lim_{P→∞} [ exp(B/(2P)) exp(C/P) exp(B/(2P)) ]^P.  (3.38)

For a finite number of iterations P, one obtains

exp(B + C) = [ exp(B/(2P)) exp(C/P) exp(B/(2P)) ]^P exp(O(1/P²)),  (3.39)

so that, when truncating at a finite P, the approximation is of order 1/P². By setting ∆t = t/P and by introducing

B = iL_p t  (3.40)
C = iL_r t,  (3.41)

one obtains for an elementary step

e^{iL_p ∆t/2} e^{iL_r ∆t} e^{iL_p ∆t/2}.  (3.42)

Because the operators L_r and L_p are Hermitian, each exponential operator is unitary; therefore, by replacing the formal solution of the Liouville equation by this discretized version, one gets a method from which symplectic algorithms can be derived, namely algorithms preserving the volume of phase space. Applying the first operator of the elementary step to A, one gets

e^{iL_p ∆t/2} A( r^N(0), p^N(0) ) = A( r^N(0), p(0) + (∆t/2)(dp(0)/dt) ),  (3.43)

(3.46) (3. calculation of force derivatives would add a large penalty of the computing performance and are only used when short-time accuracy is a crucial need. Note that it is possible of restarting a new derivation by inverting the roles of operators Lr and Lp in the Trotter formula and to obtains new evolution equations.44) ∆t dp(0) . the discretized equations of trajectories are those of Verlet algorithm. one removes the velocity dependence and recover the Verlet algorithm at the lowest order expansion of the Trotter formula. Once again.47) for the initial and final times 0 and −∆t. 3. In other words.5 Hard sphere model N The hard sphere model is defined by the Hamiltonian HN = i 1 2 mv + u(ri − rj ) . the Jacobian of the transformation.42).48) . Since force calculation is always very demanding. but they involve force derivatives. Expansion can be performed to the next order in the Trotter formula: More accurate algorithms can be derived.46) with the impulse (defined at half-integer times) and Eq. we have  N dr( ∆t ) ∆t dp(0) ∆t dp(∆t) 2 A  r(0) + ∆t + . applying eiLp ∆t/2 . Eq. one obtains the global transformations ∆t (f (r(0)) + f (r(∆t))) 2 dr(∆t/2) r(∆t) = r(0) + ∆t dt p(∆t) = p(0) + N   (3.45) (3. p(0) + dt 2 dt 2 dt In summary. (3.47) By using Eq.5 Hard sphere model then eiLr ∆t A rN (0). 2 i 51 (3. which are always symplectic. is the product of three Jacobian and is equal to 1. p(0) +  A  r(0) + ∆t ∆t dp(0) 2 dt N N = N dr( ∆t ) 2 dt   (3.3. In summary. the product of the three unitary operators is also a unitary operator which leads to a symplectic algorithm. (3. p(0) + 2 dt and lastly.

where the interaction potential is

u(r_i − r_j) = +∞ if |r_i − r_j| ≤ σ,  0 if |r_i − r_j| > σ   (3.49)

From this definition, one sees that the Boltzmann factors involved in the configurational integral, exp(−βu(r_i − r_j)), are equal either to 0 (when two spheres overlap) or to 1 otherwise, which means that this integral does not depend on the temperature, but only on the density. As for systems with a purely repulsive interaction, the hard sphere model does not undergo a liquid-gas transition, but a liquid-solid transition exists. This model has been intensely studied both theoretically and numerically. A renewal of interest in the last two decades comes from its nonequilibrium version: the inelastic hard sphere is a good reference model, commonly used for granular gases. The interaction potential remains the same, but the dissipation occurring during collisions is accounted for in a modified collision rule (see Chapter 7).

For hard spheres, particle velocities are only modified during collisions; between two collisions, particles follow rectilinear trajectories, and the dynamics is a sequence of binary collisions. Since a collision is instantaneous, the probability of having three spheres in contact at the same time is infinitesimal. Note that the impulsive character of the forces between particles prevents the use of the Verlet algorithm and related schemes: all previous algorithms assume that forces are continuous functions of the distance, which is not the case here.

Consider two spheres of identical mass. For an elastic collision, the total momentum is conserved:

v1 + v2 = v1' + v2'   (3.50)

During the collision, the normal component of the velocity at contact is inverted, whereas the tangential component of the velocity is unchanged:

(v2' − v1')·n12 = −(v1 − v2)·n12   (3.51)

where n12 is the normal vector along the collision axis, belonging to the plane defined by the two incoming velocities. Let us denote ∆v_i = v_i' − v_i. By using Eqs. (3.50) and (3.51), one infers

(v2' − v1')·(r2 − r1) = −(v1 − v2)·(r2 − r1)   (3.52)

and one obtains the post-collisional velocities

v1' = v1 + [ (v2 − v1)·(r1 − r2) / (r1 − r2)² ] (r1 − r2)   (3.53)
v2' = v2 + [ (v1 − v2)·(r1 − r2) / (r1 − r2)² ] (r1 − r2)   (3.54)
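The collision rule of Eqs. (3.53)-(3.54) can be sketched as follows (a minimal implementation for two equal-mass spheres at contact; the function name and the plain-list vector representation are illustrative choices, not part of the notes):

```python
def elastic_collision(v1, v2, r1, r2):
    """Post-collisional velocities, Eqs. (3.53)-(3.54), for two elastic
    hard spheres of equal mass at contact: the normal component of the
    relative velocity is inverted, the tangential one is unchanged.
    Positions and velocities are 3-component lists."""
    r12 = [a - b for a, b in zip(r1, r2)]          # r1 - r2
    r2norm = sum(c * c for c in r12)               # (r1 - r2)^2
    # (v2 - v1).(r1 - r2) / (r1 - r2)^2
    proj = sum((b - a) * c for a, b, c in zip(v1, v2, r12)) / r2norm
    dv = [proj * c for c in r12]
    return ([a + d for a, d in zip(v1, dv)],       # Eq. (3.53)
            [b - d for b, d in zip(v2, dv)])       # Eq. (3.54)
```

For a head-on collision the two velocities are simply exchanged; in all cases total momentum and kinetic energy are conserved.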

Since forces are only contact forces, the trajectories between collisions are known exactly. Between collisions, the positions of particles i and j are given by

r_i = r_i⁰ + v_i t   (3.55)
r_j = r_j⁰ + v_j t   (3.56)

where r_i⁰ and r_j⁰ are the particle positions just after their previous collision. The contact condition between the two particles is given by the relation

σ² = (r_i − r_j)² = (r_i⁰ − r_j⁰)² + (v_i − v_j)² t² + 2 (r_i⁰ − r_j⁰)·(v_i − v_j) t   (3.57)

Because the collision time is the solution of a quadratic equation, several cases must be considered: if the roots are complex, or if both real roots are negative, the pair does not collide and the collision time is set to a very large value in the simulation; if the two roots are real and positive, the smallest one is selected as the collision time; if only one root is positive, the collision time corresponds to this value. A necessary condition for having a positive solution for t (a collision in the future) is

(r_i⁰ − r_j⁰)·(v_i − v_j) < 0   (3.58)

Practically, the hard sphere algorithm is as follows: starting from an initial (non-overlapping) configuration, all possible binary collisions are considered, namely the collision time is calculated for each pair of particles. Once all pairs have been examined, the shortest time, namely the first collision, is selected. The trajectories of the particles evolve rectilinearly until this collision occurs (the calculation of the trajectories is exact, because the forces between particles are only contact forces). At the collision time, the velocities of the two colliding particles are updated, and one calculates the next collision again.

Contrary to the Verlet algorithm, the integration step is not constant. It is easy to check that the total kinetic energy is conserved during the simulation, and since the algorithm is exact (discarding for the moment the round-off problem), it is obviously symplectic. This quasi-exact method has some limitations, however: when the density becomes significant, the mean time between two collisions decreases rapidly and the system then evolves slowly. In addition, rattling effects occur in isolated regions of the system, which decreases the efficiency of the method.
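The case analysis of the quadratic contact condition (3.57) can be sketched as a small routine (the function name is illustrative; `math.inf` plays the role of the "very large value" assigned to non-colliding pairs):

```python
import math

def pair_collision_time(r12, v12, sigma):
    """Earliest future contact time for a pair of hard spheres, from the
    quadratic contact condition (3.57). r12 and v12 are the relative
    position and velocity (3-component lists); returns math.inf when the
    spheres do not collide (receding pair, or complex roots)."""
    b = sum(r * v for r, v in zip(r12, v12))
    if b >= 0.0:                      # necessary condition (3.58) violated
        return math.inf
    v2 = sum(v * v for v in v12)
    disc = b * b - v2 * (sum(r * r for r in r12) - sigma * sigma)
    if disc < 0.0:                    # complex roots: the trajectories miss
        return math.inf
    return (-b - math.sqrt(disc)) / v2   # smallest positive root
```

Scanning this function over all pairs and keeping the minimum gives the first collision of the event-driven loop described above.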
Apart from these limitations, the algorithm provides trajectories with a high accuracy, bounded only by the round-off errors.

3.6 Molecular Dynamics in other ensembles

As discussed above, Molecular Dynamics corresponds to a simulation in the microcanonical ensemble. Generalizations exist in other ensembles; in the canonical ensemble, for instance, the coupling to a thermal bath introduces random forces which change the dynamics through an ad hoc procedure. After introducing this heuristic method, we will see how to build a more systematic method for performing Molecular Dynamics in generalized ensembles. With such thermostats, dynamics can be generated that differ from the "real" dynamics obtained in the microcanonical ensemble.

3.6.1 Andersen algorithm

The Andersen algorithm is a method where the coupling of the system with a bath is done as follows: a stochastic process modifies the velocities of the particles through instantaneous forces. Between these stochastic "collisions", the system evolves with the usual Newtonian dynamics. The coupling strength is controlled by the collision frequency, denoted ν. One assumes that the stochastic collisions are totally uncorrelated, which leads to a Poissonian distribution of collisions,

P(ν, t) = ν e^{−νt}   (3.59)

where P(ν, t) dt denotes the probability that the next collision between a particle and the bath occurs in the time interval [t, t + dt]. Practically, the algorithm is composed of three steps:

• The system follows Newtonian dynamics over one or several time steps, denoted ∆t.
• One chooses randomly a number of particles for stochastic collisions; the probability of choosing a given particle in a time interval ∆t is ν∆t.
• When a particle is selected, its velocity is chosen randomly from a Maxwellian distribution with temperature T. The other velocities are not updated.

This procedure ensures that the velocity distribution goes to equilibrium. This condition is, however, not sufficient for characterizing a canonical ensemble, and the full distribution obtained with the Andersen thermostat is not a canonical distribution; this can be shown by considering the fluctuation calculation. The mechanism can be interpreted as Monte Carlo-like moves between isoenergy surfaces. Because the dynamics then depends on an adjustable parameter, the collision frequency ν, the correlation functions relax at a rate which strongly depends on the coupling with the thermostat: the Andersen algorithm biases the dynamics of the simulation, and the real dynamics of the system is deeply disturbed. This is undesirable, because a main purpose of Molecular Dynamics is to provide a real dynamics.

3.6.2 Nosé-Hoover algorithm

As discussed above, an ad hoc thermostat deeply disturbs the dynamics. The method introduced below instead consists of modifying the equations of motion so that the velocity distribution reaches a Boltzmann distribution.
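The three steps above can be sketched as a thermostat sweep (a minimal sketch; the function name, the list-of-lists velocity layout and the per-sweep collision probability ν∆t are illustrative assumptions):

```python
import math, random

def andersen_thermostat(v, nu, dt, kT, m=1.0, rng=random):
    """One sweep of the Andersen thermostat: each particle suffers a bath
    'collision' with probability nu*dt, in which case its velocity is
    redrawn from the Maxwellian distribution at temperature T (Gaussian
    components of variance kT/m). Other velocities are left untouched."""
    sigma = math.sqrt(kT / m)
    for i in range(len(v)):
        if rng.random() < nu * dt:
            v[i] = [rng.gauss(0.0, sigma) for _ in v[i]]
    return v
```

Between two such sweeps, the system would be propagated with the usual Newtonian integrator; the redrawn velocities are what drives the velocity distribution towards the Maxwellian.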

To overcome this problem, a more systematic approach, introduced by Nosé, enlarges the system by introducing an additional degree of freedom, s. This procedure modifies the Hamiltonian dynamics by adding friction forces, which increase or decrease the particle velocities. Let us consider the Lagrangian

L = Σ_{i=1}^N (m_i s²/2) (dr_i/dt)² − U(r^N) + (Q/2) (ds/dt)² − (L/β) ln(s)   (3.60)

where L is a free parameter, U(r^N) is the interaction potential between the N particles, and Q represents the effective mass of the one-dimensional variable s. The Lagrange equations of this system read

p_i = ∂L/∂ṙ_i = m_i s² ṙ_i   (3.61)
p_s = ∂L/∂ṡ = Q ṡ   (3.62)

By using a Legendre transformation, one obtains the Hamiltonian

H = Σ_{i=1}^N p_i²/(2 m_i s²) + U(r^N) + p_s²/(2Q) + (L/β) ln(s)   (3.63)

Let us reexpress the microcanonical partition function of this system, made of N interacting particles and of the additional degree of freedom interacting with the N particles:

Q = (1/N!) ∫ dp_s ds dp^N dr^N δ(H − E)   (3.64)

This partition function corresponds to N indistinguishable particles and to the additional variable in the microcanonical ensemble, where microstates of equal energy are considered equiprobable. Let us introduce p' = p/s. One obtains

Q = (1/N!) ∫ dp_s ds s^{3N} dp'^N dr^N δ( Σ_{i=1}^N (p'_i)²/(2m_i) + U(r^N) + p_s²/(2Q) + (L/β) ln(s) − E )   (3.65)

One defines H' as

H' = Σ_{i=1}^N (p'_i)²/(2m_i) + U(r^N)   (3.66)

and the partition function becomes

Z = (1/N!) ∫ dp_s ds s^{3N} dp'^N dr^N δ( H' + p_s²/(2Q) + (L/β) ln(s) − E )   (3.67)

By using the property

δ(h(s)) = δ(s − s₀)/|h'(s₀)|   (3.68)

where h(s) is a function with a single zero on the x-axis, at the value s = s₀, one integrates over the variable s and one obtains

δ( H' + p_s²/(2Q) + (L/β) ln(s) − E ) = (βs/L) δ( s − exp( −(β/L)(H' + p_s²/(2Q) − E) ) )   (3.69)

This gives for the partition function

Z = [β exp(E(3N + 1)/L) / (L N!)] ∫ dp_s dp'^N dr^N exp( −β(3N + 1)/L (H' + p_s²/(2Q)) )   (3.70)

Setting L = 3N + 1 and integrating over the additional degree of freedom, the partially integrated partition function of the microcanonical ensemble of the full system (N particles and the additional degree of freedom) becomes a canonical partition function for the system of N particles. It is worth noting that the variables used are r, p' and the rescaled time t'. Discretizing the corresponding equations of motion would give a variable time step, which is not easy to control in Molecular Dynamics; it is possible to return to a constant time step by using additional changes of variable. This work was done by Hoover. Finally, the equations of motion are

ṙ_i = p_i/m_i   (3.71)
ṗ_i = −∂U(r^N)/∂r_i − ξ p_i   (3.72)
ξ̇ = (1/Q) ( Σ_{i=1}^N p_i²/m_i − L/β )   (3.73)
ṡ/s = ξ   (3.74)

The first two equations are a closed set of equations for the N particles; the last equation controls the evolution of the additional degree of freedom during the simulation.
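The structure of Eqs. (3.71)-(3.74) can be illustrated on a single 1D harmonic oscillator (a sketch, not the production integrator of the notes: one degree of freedom, so L/β is replaced by kT, and a generic Runge-Kutta step is used instead of a symplectic scheme). The exact flow conserves the extended energy H' + Qξ²/2 + kT ln s, which the sketch monitors:

```python
def nose_hoover_oscillator(steps=2000, dt=1e-3, m=1.0, k=1.0, kT=1.0, Q=1.0):
    """Integrate the Nosé-Hoover equations (3.71)-(3.74) for one 1D
    harmonic oscillator with classical RK4, and return the trajectory of
    the extended energy p^2/2m + k x^2/2 + Q xi^2/2 + kT ln(s)."""
    def deriv(state):
        x, p, xi, lns = state
        return (p / m,                     # Eq. (3.71)
                -k * x - xi * p,           # Eq. (3.72)
                (p * p / m - kT) / Q,      # Eq. (3.73), one degree of freedom
                xi)                        # Eq. (3.74): d(ln s)/dt = xi

    state = (1.0, 0.0, 0.0, 0.0)
    energies = []
    for _ in range(steps):
        k1 = deriv(state)
        k2 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k1)))
        k3 = deriv(tuple(s + 0.5 * dt * d for s, d in zip(state, k2)))
        k4 = deriv(tuple(s + dt * d for s, d in zip(state, k3)))
        state = tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                      for s, a, b, c, d in zip(state, k1, k2, k3, k4))
        x, p, xi, lns = state
        energies.append(p * p / (2 * m) + k * x * x / 2
                        + Q * xi * xi / 2 + kT * lns)
    return energies
```

The near-constancy of the returned quantity is the practical check that the friction variable ξ is being integrated consistently with the particle equations.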

3.7 Brownian dynamics

3.7.1 Different timescales

Until now, we have considered simple systems with a unique microscopic timescale. For a mixture of particles whose sizes are very different (an order of magnitude is sufficient), two different microscopic timescales are present, the smallest one being associated with the smallest particles, and the simulation time exceeds the computer ability. This situation is not exceptional: diluted nanoparticles, like ferrofluids or other colloids, and polymers are examples where the presence of several microscopic timescales prohibits simulation with standard Molecular Dynamics; it also corresponds, for instance, to the physical situation of biological molecules in water.

The basic idea is to perform averages on the degrees of freedom associated with the smallest relaxation times. Brownian dynamics (which refers to Brownian motion) corresponds to a dynamics where the smallest particles are replaced with a continuous medium.

3.7.2 Smoluchowski equation

The Liouville equation describes the evolution of the system at a microscopic scale. To obtain the statistical properties of the large particles, one averages over the smallest particles (the solvent) and over the velocity distribution of the largest particles; the timescales associated with the moves of the large particles are assumed larger than the relaxation times of their velocities. After this averaging, the joint probability distribution P({r^N}) of finding the N particles at the positions {r^N} is given by a Smoluchowski equation,

∂P({r^N})/∂t = (∂/∂r^N) · D · (∂P({r^N})/∂r^N) − β (∂/∂r^N) · D F({r^N}) P({r^N})   (3.75)

where β = 1/(k_B T) and D is a 3N × 3N diffusion matrix which depends on the particle configuration and contains the hydrodynamic interactions.

3.7.3 Langevin equation. Discretization

A Smoluchowski equation corresponds to a description in terms of the probability distribution, but it is also possible to describe the same process in terms of trajectories, which corresponds to the Langevin equation

dr^N/dt = β D F + ∂/∂r^N · D + ξ_{r^N}   (3.76)

where ξ_{r^N} is a Gaussian white noise. By discretizing this stochastic differential equation, Eq. (3.76), with a constant time step ∆t and a Euler algorithm, one obtains the following discretized

equation

∆r^N = ( β D F + ∂/∂r^N · D ) ∆t + R   (3.77)

where ∆t is a time step, sufficiently small in order that the force changes remain weak during the time step, but sufficiently large for having an equilibrated velocity distribution, and R is a random move chosen from a Gaussian distribution with zero mean, ⟨R⟩ = 0, and with a variance given by

⟨R Rᵀ⟩ = 2 D ∆t   (3.78)

where Rᵀ denotes the transpose of the vector R.

A main difficulty with this dynamics is to obtain an expression of the diffusion matrix as a function of the particle configuration. For an infinitely dilute system, the diffusion matrix becomes diagonal and the nonzero matrix elements are all equal to the diffusion constant in the infinite dilution limit,

D = D₀ I + O(1/r_ij²)   (3.79)

with

D₀ = 1/(3πησβ)   (3.80)

where η is the solvent viscosity and σ is the diameter of the large particles; this corresponds to the Stokes law. In the general case, the solvent plays a more active role by adding hydrodynamic forces. These forces are generally long ranged, and their effects cannot be neglected even for modest packing fractions; they must be considered for efficient and accurate simulation of complex systems. Optimization techniques and hydrodynamic forces are topics which go beyond these lecture notes.

3.7.4 Consequences

This stochastic differential equation does not satisfy the time-reversal symmetry observed in Hamiltonian systems. Therefore, the time correlation functions calculated from Brownian dynamics will differ, on short timescales, from those obtained with a Hamiltonian dynamics. Conversely, one finds glassy behavior with many systems studied with Brownian dynamics; this universal behavior is also observed in Molecular Dynamics with molecular systems (orthoterphenyl, glycerol, salol, ...) and nanomolecules (ferrofluids, ...). Let us note that Brownian dynamics, which combines aspects of Molecular Dynamics (deterministic forces) and of stochastic processes, is an intermediate method between Molecular Dynamics and Monte Carlo simulation.
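In the dilute limit D = D₀ I, the Euler discretization (3.77) reduces, per coordinate, to ∆r = βD₀F∆t + R with R Gaussian of variance 2D₀∆t. A one-coordinate sketch (function name and signature are illustrative):

```python
import math, random

def brownian_trajectory(steps, dt, D0, beta, force, r0=0.0, rng=None):
    """Euler discretization of the Langevin equation, Eq. (3.77), for one
    coordinate in the dilute limit D = D0*I: deterministic drift
    beta*D0*F*dt plus a Gaussian random move of variance 2*D0*dt."""
    rng = rng or random.Random()
    sigma = math.sqrt(2.0 * D0 * dt)
    r, traj = r0, [r0]
    for _ in range(steps):
        r += beta * D0 * force(r) * dt + rng.gauss(0.0, sigma)
        traj.append(r)
    return traj
```

For a free particle (F = 0), the mean square displacement after a time t should approach 2D₀t per coordinate, which is a convenient numerical check of the noise amplitude.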

3.8 Conclusion

The Molecular Dynamics method has developed considerably during the last decades. Initially devoted to the simulation of atomic and molecular systems, it now allows, through generalized ensembles and/or Brownian dynamics, the study of systems containing large molecules, like biological molecules.

3.9 Exercises

3.9.1 Multi-timescale algorithm

When implementing a Molecular Dynamics simulation, the standard algorithm is the Verlet algorithm (or Leap-Frog). The time step of the simulation is chosen such that the variations of the different quantities (forces and velocities) are small over a time step. However, interaction forces contain short-range and long-range contributions,

F = F_short + F_long   (3.81)

The Liouville operator iL for a particle reads

iL = iL_r + iL_F   (3.82)
   = v ∂/∂r + (F/m) ∂/∂v   (3.83)

One denotes the time step by ∆t. The Liouville operator can be separated in two parts as follows:

iL = (iL_r + iL_{F_short}) + iL_{F_long}   (3.84)

with

iL_{F_short} = (F_short/m) ∂/∂v   (3.85)
iL_{F_long} = (F_long/m) ∂/∂v   (3.86)

♣ Q. 3.9.1-1 Using the Trotter formula at first order in the time step, recall the equations of motion.

♣ Q. 3.9.1-2 By taking a shorter time step δt = ∆t/n, a sub-multiple of the original time step ∆t, for the short-range forces, show that the propagator e^{iL∆t} can be expressed at first order as

e^{iL_{F_long} ∆t/2} [ e^{iL_{F_short} δt/2} e^{iL_r δt} e^{iL_{F_short} δt/2} ]^n e^{iL_{F_long} ∆t/2}   (3.87)

♣ Q. 3.9.1-3 Describe the principle of this algorithm. What can one say about the reversibility of the algorithm and the conservation of volume in phase space?
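The nested propagator of Q. 3.9.1-2 translates directly into an outer long-force half-kick, n inner velocity-Verlet steps with the short-range force, and a closing half-kick. A one-degree-of-freedom sketch (the function name and the scalar force arguments are illustrative, not part of the exercise):

```python
def respa_step(x, v, dt, n, f_short, f_long, m=1.0):
    """One step of the multiple-time-step propagator of Q. 3.9.1-2:
    e^{iL_long dt/2} [e^{iL_short dt'/2} e^{iL_r dt'} e^{iL_short dt'/2}]^n
    e^{iL_long dt/2}, with the inner time step dt' = dt/n."""
    dt_in = dt / n
    v += 0.5 * dt * f_long(x) / m            # outer half-kick, long force
    for _ in range(n):                       # inner velocity-Verlet loop
        v += 0.5 * dt_in * f_short(x) / m
        x += dt_in * v
        v += 0.5 * dt_in * f_short(x) / m
    v += 0.5 * dt * f_long(x) / m            # closing outer half-kick
    return x, v
```

Being a product of symmetric, unit-Jacobian maps, each step is time reversible and volume preserving, which is the point of Q. 3.9.1-3; a quick check is the absence of energy drift for a harmonic force split into a stiff short-range and a weak long-range part.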


Chapter 4

Correlation functions

Contents
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Radial distribution function . . . . . . . . . . . . . . . 62
4.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . 66
4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Time correlation functions . . . . . . . . . . . . . . . 67
4.3.3 Computation of the time correlation function . . . . . 68
4.3.4 Linear response theory: results and transport coefficients 69
4.4 Space-time correlation functions . . . . . . . . . . . . 70
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . 70
4.4.3 Intermediate scattering function . . . . . . . . . . . . 72
4.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . 72
4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . 72
4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.2 4-point correlation function . . . . . . . . . . . . . . . 73
4.5.3 4-point susceptibility and dynamic correlation length . 73
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7.1 "Le facteur de structure dans tous ses états!" . . . . . 74
4.7.2 Van Hove function and intermediate scattering function 75

4.1 Introduction

With Monte Carlo or Molecular Dynamics simulation, one can calculate, as seen in previous chapters, thermodynamic quantities, but also spatial correlation functions. These functions characterize the structure of the system and provide more detailed information than thermodynamic quantities. Correlation functions can be directly compared to light or neutron scattering experiments, and/or to several theoretical approaches that have been developed during the last decades (for instance, the integral equations of the liquid state). Time correlation functions can be compared to experiments if the simulation dynamics corresponds to a "real" dynamics, namely a Molecular Dynamics. Monte Carlo dynamics strongly depends on the chosen rule; however, although the Metropolis algorithm does not correspond to a real dynamics, the global trends of a real dynamics are generally contained (slowing down in the vicinity of a continuous transition, ...). Using the linear response theory, the transport coefficients can be obtained by integrating the time correlation functions.

In the second part of this chapter, we consider the glassy behavior, where simulation showed recently that the dynamics of the system is characterized by dynamical heterogeneities: while many particles seem to be frozen, isolated regions contain mobile particles. This phenomenon evolves continuously with time, where mobile particles become frozen and conversely; this specific behavior can be characterized by a more or less sophisticated high-order correlation function (a 4-point correlation function). In addition, we pay special attention to finite-size effects: indeed, since the simulation box contains a finite number of particles, the quantities of interest display some differences with the same quantities in the thermodynamic limit, and one must account for it.

4.2 Structure

4.2.1 Radial distribution function

We now derive the equilibrium correlation functions as a function of the local density (already defined in Chap. 1). Let us consider a system of N particles defined by the Hamiltonian

H = Σ_{i=1}^N p_i²/(2m) + V(r^N)   (4.1)

In the canonical ensemble, the density distribution associated with the probability of finding n particles in the elementary volume ∏_{i=1}^n dr_i, denoted in the following ρ_N^(n)(r^n), is given by

ρ_N^(n)(r^n) = [ N!/(N − n)! ] (1/Z_N(V, T)) ∫ exp(−βV(r^N)) dr^{N−n}   (4.2)

where

Z_N(V, T) = ∫ exp(−βV(r^N)) dr^N   (4.3)

The normalization of ρ_N^(n)(r^n) is such that

∫ ρ_N^(n)(r^n) dr^n = N!/(N − n)!   (4.4)

which corresponds to the number of occurrences of finding n particles among N. In particular, one has

∫ ρ_N^(1)(r1) dr1 = N   (4.5)
∫ ρ_N^(2)(r1, r2) dr1 dr2 = N(N − 1)   (4.6)

which means that one can find N particles and N(N − 1) pairs of particles, respectively, in the total volume. The pair distribution function is defined as

g_N^(2)(r1, r2) = ρ_N^(2)(r1, r2) / ( ρ_N^(1)(r1) ρ_N^(1)(r2) )   (4.7)

For a homogeneous system (which is not the case of liquids in the vicinity of interfaces), one has ρ_N^(1)(r) = ρ, and the pair distribution function only depends on the relative distance between particles,

g_N^(2)(r1, r2) = ρ_N^(2)(|r1 − r2|)/ρ² = g(|r1 − r2|)   (4.8)

where g(r) is called the radial distribution function. When the distance |r1 − r2| is large, the radial distribution function goes to 1 in the thermodynamic limit; in a finite-size system, a correction appears which changes the large-distance limit, and the radial distribution function goes to 1 − 1/N. Noting that

⟨δ(r − r1)⟩ = (1/Z_N(V, T)) ∫ δ(r − r1) exp(−βV_N(r^N)) dr^N   (4.9)
            = (1/Z_N(V, T)) ∫ exp(−βV_N(r, r2, ..., rN)) dr2 ... drN   (4.10)

the probability density is reexpressed as

ρ_N^(1)(r) = ⟨ Σ_{i=1}^N δ(r − r_i) ⟩   (4.11)

Similarly, one infers

ρ_N^(2)(r, r') = ⟨ Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r' − r_j) ⟩   (4.12)

By using these microscopic densities, the radial distribution function is expressed as

g^(2)(r, r') ρ(r) ρ(r') = ⟨ Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r' − r_j) ⟩   (4.13)

For a homogeneous and isotropic system, the radial distribution function only depends on the modulus of the relative distance, |r − r'|. Integrating over the volume, one obtains

ρ g(|r|) = (1/N) ⟨ Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i + r_j) ⟩   (4.14)

In a simulation using periodic boundary conditions (with cubic symmetry), one cannot obtain the structure of the system beyond a distance equal to L/2, because of the spatial periodicity along each main direction of the box; one can check a posteriori that this condition is satisfied. The computation of the radial distribution function relies on a space discretization. Practically, one starts with an empty distance histogram of spatial step ∆r, and one calculates the number of pairs whose distance is between r and r + ∆r. One performs the integration of Eq. (4.14) on the shell labeled by the index j, of thickness ∆r, by using the approximate formula

ρ g(∆r(j + 0.5)) = 2 N_p(r) / ( N (V_{j+1} − V_j) )   (4.15)

where N_p(r) is the number of distinct pairs whose center-to-center distances are between j∆r and (j + 1)∆r, and where V_j = (4π/3)(j∆r)³. This formula is accurate if the radial distribution function varies smoothly between the distances r and r + ∆r: by decreasing the spatial step of the histogram, one resolves finer structure, but the statistical accuracy in each bin diminishes, so a compromise must be found for the step, based in part on a rule of thumb. Last, note that the computation of this correlation function (for a given configuration) involves all pairs, namely scales as the square of the particle number, which is an intrinsic

property of the correlation function. Note also that the distribution function is then defined in a statistical manner.

Figure 4.1 – Basics of the computation of the radial distribution function: starting from a particle, one determines the pair number whose distance lies between the successive shells built from the discretization given by Eq. (4.15).

Because a correct average needs to consider a sufficient number of different configurations, only configurations where all particles have been moved will be considered, instead of all configurations; the part of the computation time spent on the correlation function is then divided by N. In summary, the calculation of the correlation function then scales as N and is comparable to that of the simulation dynamics. The computation time can be decreased further if only the short-distance structure is significant or of interest.

The ensemble average is not restricted to equilibrium. We will see later (Chapter 7) that it is possible to calculate this function for out-of-equilibrium systems (random sequential addition) where the process is not described by a Gibbs distribution.
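The histogram procedure of Eq. (4.15) can be sketched as follows (a minimal O(N²) implementation with the minimum image convention; the function name and argument layout are illustrative):

```python
import math

def radial_distribution(positions, box, dr, nbins):
    """Histogram estimate of g(r), Eq. (4.15): count distinct pairs per
    spherical shell under cubic periodic boundary conditions (minimum
    image), then normalize by the ideal-gas shell population."""
    n = len(positions)
    rho = n / box**3
    hist = [0] * nbins
    for i in range(n - 1):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)   # minimum image convention
                d2 += dx * dx
            k = int(math.sqrt(d2) / dr)
            if k < nbins:
                hist[k] += 1
    g = []
    for k in range(nbins):
        shell = 4.0 * math.pi / 3.0 * ((k + 1)**3 - k**3) * dr**3
        g.append(2.0 * hist[k] / (n * rho * shell))   # Eq. (4.15)
    return g
```

For uniformly random (ideal-gas) positions, the estimate fluctuates around 1 (more precisely, around 1 − 1/N, in line with the finite-size remark above), provided nbins*dr stays below box/2.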

4.2.2 Structure factor

The static structure factor, S(k), is a quantity available experimentally from light or neutron scattering; for a simple liquid, it is related to the Fourier transform of the radial distribution function g(r). The static structure factor is defined as

S(k) = (1/N) ⟨ρ_k ρ_{−k}⟩   (4.16)

where ρ_k is the Fourier transform of the microscopic density ρ(r). This reads

S(k) = (1/N) ⟨ Σ_{i=1}^N Σ_{j=1}^N exp(−ik·r_i) exp(ik·r_j) ⟩   (4.17)

Using delta distributions, S(k) is expressed as

S(k) = 1 + (1/N) ⟨ ∫ exp(−ik·(r − r')) Σ_{i=1}^N Σ_{j=1, j≠i}^N δ(r − r_i) δ(r' − r_j) dr dr' ⟩   (4.18)

which gives

S(k) = 1 + (1/N) ∫ exp(−ik·(r − r')) ρ^(2)(r, r') dr dr'   (4.19)

For a homogeneous fluid (isotropic and uniform),

S(k) = 1 + (ρ²/N) ∫ exp(−ik·(r − r')) g(r, r') dr dr'   (4.20)

For an isotropic fluid, the radial distribution function only depends on |r − r'|. One then obtains

S(k) = 1 + ρ ∫ exp(−ik·r) g(r) dr   (4.21)

Because the system is isotropic, the Fourier transform only depends on the modulus k of the wave vector, which gives in three dimensions

S(k) = 1 + 2πρ ∫ r² g(r) ∫_0^π exp(−ikr cos θ) sin θ dθ dr   (4.22)

and, after some calculation,

S(k) = 1 + 4πρ ∫_0^∞ r² g(r) [sin(kr)/(kr)] dr   (4.23)

Finally, the calculation of the static structure factor requires a one-dimensional sine Fourier transform on the positive axis; efficient fast Fourier transform codes calculate S(k) very rapidly.
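A direct quadrature of Eq. (4.23) can be sketched as follows (a sketch, not an FFT: the trapezoidal rule on g(r) − 1, which leaves S(k) unchanged for k ≠ 0 while removing the forward-scattering divergence of the bare g(r) integral; the function name is illustrative):

```python
import math

def structure_factor(g, rmax, rho, k, nr=2000):
    """Evaluate Eq. (4.23) numerically: S(k) = 1 + 4*pi*rho *
    integral of r^2 (g(r) - 1) sin(kr)/(kr) dr over [0, rmax],
    by the trapezoidal rule. `g` is a callable g(r)."""
    dr = rmax / nr
    s = 0.0
    for i in range(1, nr + 1):
        r = i * dr
        w = 0.5 if i == nr else 1.0   # trapezoid end weight (r=0 term vanishes)
        s += w * r * r * (g(r) - 1.0) * math.sin(k * r) / (k * r)
    return 1.0 + 4.0 * math.pi * rho * s * dr
```

A convenient check is a step-like g(r) (zero inside a core of diameter σ, one outside), for which the integral is analytic.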

In the direct calculation of the static structure factor S(k), the number of terms of the summation is proportional to N² (times the number of configurations used for the average in the simulation); in the computation via g(r), the number of terms to consider is proportional to N multiplied by the number of wave numbers which are the components of the structure factor (and finally multiplied by the number of configurations used in the simulation). In a simulation, knowledge of the correlations is limited to a distance equal to the half length of the simulation box, because of the finite duration of the simulation as well as the finite size of the simulation box; this yields an infrared cut-off (π/L), and the simulation cannot provide information for smaller wave vectors.

4.3 Dynamics

4.3.1 Introduction

Spatial correlation functions are relevant probes of the presence or absence of order at many lengthscales, observed in particular in the vicinity of a phase transition. Similarly, knowledge of equilibrium fluctuations is provided by time correlation functions. Let us recall that equilibrium does not mean absence of particle dynamics: the system continuously undergoes fluctuations, and how a fluctuation is able to disappear is an essential characteristic associated with macroscopic changes.

4.3.2 Time correlation functions

Let us consider two examples of time correlation functions using the simple models introduced in the first chapter of these lecture notes. For the Ising model, one defines the spin-spin time correlation function

C(t) = (1/N) Σ_{i=1}^N ⟨S_i(0) S_i(t)⟩   (4.24)
Basically, one can calculate either g(r) and obtain S(k) by a Fourier transform, or S(k) from Eq. (4.17) and obtain g(r) by an inverse Fourier transform. Both routes should give the same result; however, statistical errors and round-off errors can produce differences between the two manners of computation. The repeated evaluation of trigonometric functions in Eq. (4.17) can be avoided by tabulating the complex exponentials in an array at the beginning of the simulation.

In Eq. (4.24), the brackets denote an equilibrium average and S_i(t) is the spin variable of site i at time t. At time t = 0, this function is equal to 1 (a value which cannot be exceeded, by definition); it then decreases and goes to 0 when the elapsed time is much larger than the equilibrium relaxation time. This function measures how the system loses memory of a configuration at time t. For a simple liquid made of point particles, one can similarly monitor the density autocorrelation function

C(t) = (1/V) ∫ dr ⟨δρ(r, t) δρ(r, 0)⟩   (4.25)

where δρ(r, t) denotes the local density fluctuation; it goes to zero when the elapsed time is much larger than the relaxation times of the system.

namely the spin correlation in the Ising model. time translational invariance: in other words. if one calculates < Si (t )Si (t + t) >. 1 68 . All vectors are initially set to 0. For practical computation of C(t). In order to have a statistical average well defined. t)δρ(r.3 Computation of the time correlation function We now go in details for implementing the time autocorrelation function. This function is defined in Eq. This function measures the system looses memory of a configuration at time t. by definition). 4. 0) (4. one first defined Nc vectors of N components (N is the total number of spins and once equilibrated. t) denotes the local density fluctuation.25) where δρ(r. one must iterate this computation for many different t .Correlation functions where the brackets denote equilibrium average and Si (t) is the spin variable of site i at time t. for a basic example. but the method described here can be applied for all sorts of dynamics. (4. For a simple liquid made of point particles. Ising model does now own a Hamiltonian dynamics. and must be simulated by a Monte Carlo simulation. At time t = 0. Monte Carlo or Molecular Dynamics. the results is independent of t . one real vector of Nc components and one integer vector of Nc components. slightly smaller than the mean value of variable time step) and the above method can be then applied.26) where the initial instant is a time where the system is already at equilibrium. one can monitor the density autocorrelation function C(t) = 1 V dr δρ(r. j=1 i=1 (4. namely. 1 C(t) = NM M N Si (tj )Si (tj + t). then decreases and goes to 0 when the elapsed time is much larger than the equilibrium relaxation time. This autocorrelation function measures the evolution of the local density along the time. We have assumed that the time step is constant. 
The time average of a correlation function (or of other quantities) in an equilibrium simulation (Molecular Dynamics or Monte Carlo) is calculated by using a fundamental property of equilibrium systems, namely time translational invariance¹, so that

C(t) = (1/(N M)) Σ_{j=1}^M Σ_{i=1}^N S_i(t_j) S_i(t_j + t)   (4.26)

where the initial instant is a time where the system is already at equilibrium. We have assumed that the time step is constant; when the latter is variable, one can adopt a constant time step for the correlation function (in general, slightly smaller than the mean value of the variable time step), and the method described below can then be applied. The Ising model does not own a Hamiltonian dynamics and must be simulated by a Monte Carlo simulation, but the method described here can be applied to all sorts of dynamics, Monte Carlo or Molecular Dynamics.

We now go into the details of implementing the time autocorrelation function, defined in Eq. (4.24), as a basic example. For the practical computation of C(t), once the system is equilibrated, one first defines N_c vectors of N components (N is the total number of spins), one real vector of N_c components and one integer vector of N_c components. All vectors are initially set to 0.
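The time-averaging of Eq. (4.26) can be condensed into a short sketch for a scalar time series (the buffer bookkeeping of the full algorithm is replaced here by a plain double loop over time origins; function name illustrative):

```python
def autocorrelation(series, nc):
    """Time-averaged autocorrelation C(t) for t = 0..nc-1, using time
    translational invariance as in Eq. (4.26): every time origin t_j
    contributes to every accessible lag t, and a counter per lag plays
    the role of the integer histogram vector."""
    m = len(series)
    c = [0.0] * nc
    counts = [0] * nc
    for j in range(m):                     # loop over time origins t_j
        for t in range(min(nc, m - j)):
            c[t] += series[j] * series[j + t]
            counts[t] += 1
    return [c[t] / counts[t] for t in range(nc)]
```

The per-lag counters are needed because late time origins cannot contribute to the largest lags, exactly as in the histogram of configuration averages described below.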

• At t = 0, the spin configuration is stored in the first N-component vector, and the scalar product Σ_i S_i(t_0) S_i(t_0) is computed and added to the first component of the real N_c-component vector.
• At t = 1, the spin configuration is stored in the second N-component vector, and the corresponding scalar products are computed and added to the first and second components of the real N_c-component vector.
• At t = k, with k < N_c, the same procedure is iterated, and k + 1 scalar products are computed and added to the first k + 1 components of the real N_c-component vector.
• At t = N_c, the first N-component vector is erased and replaced with the new configuration; N_c scalar products can then be computed and added to the real N_c-component vector. At the next step, the second vector is replaced by the new configuration, and so on.
• Finally, this procedure is stopped at time t = T_f, and Eq. (4.26) is used for calculating the correlation function. The integer vector of N_c components is used as a histogram of the configuration averages, because the numbers of configurations involved in averaging the different components of the correlation function are not equal.

A last remark concerns the delicate choice of the time step of the correlation function, ∆t_c, as well as of its number of components N_c. The correlation function is calculated for a duration equal to (N_c − 1)∆t_c, which is bounded by

τ_eq << N_c ∆t_c << T_sim   (4.27)

Indeed, ∆t_c, which is a free parameter, is chosen as a multiple of the time step of the simulation; the covered duration must be much larger than the equilibrium relaxation time, while the configuration averaging needs N_c ∆t_c to be much less than the total simulation time.
scalar products i Si (t1 )Si (t1 ) and i Si (t0 )Si (t1 ) are performed and added to the first and second component of the real Nc component vector • For t = k with k < Nc . For t = Nc . the correlation function is calculated for a duration equal to (Nc − 1)∆t. the integer vector of Nc is used as a histogram of configuration average. (4. scalar product i Si (t0 )Si (t0 ) is performed and added to the first component of the real Nc component vector • At t = 1.

The calculation of correlation functions is done at equilibrium; for Monte Carlo as well as for Molecular Dynamics, the correlation function is obtained simultaneously with the other thermodynamic quantities along the simulation. By integrating the correlation function C(t) over time, one obtains the system susceptibility. Conversely, for obtaining the linear part of a response function (the response of a quantity B(t) to an external field ΔF), one needs to ensure that the ratio <ΔB(t)>/ΔF is independent of ΔF: this constraint leads to choosing a small value of ΔF, but if the perturbative field is too weak, the function <ΔB(t)> will be small and could be of the same order of magnitude as the statistical fluctuations of the simulation. Moreover: i) it is necessary to perform a different simulation for each response function; ii) for Molecular Dynamics, fluctuations grow very rapidly with time. There exist some cases where the response function is easier to calculate than the correlation function, namely the viscosity for liquids. In the last chapter of these lecture notes, we will introduce a recent method allowing one to calculate response functions without perturbing the system; this method is, for now, restricted to Monte Carlo simulations.

4.4 Space-time correlation functions

4.4.1 Introduction

To go further in the study of correlations, one can follow correlations both in space and time. This information is richer, but the price to pay is the calculation of a two-variable correlation function. We introduce these functions because they are also available in experiments, essentially from neutron scattering.

4.4.2 Van Hove function

Let us consider the density correlation function which depends both on space and time

ρ G(r, r', t) = < ρ(r + r', t) ρ(r', 0) >    (4.28)

This function can be expressed from the microscopic densities as

ρ G(r, r', t) = < Σ_{i=1}^N Σ_{j=1}^N δ(r + r' − r_i(t)) δ(r' − r_j(0)) >    (4.29)

For a homogeneous system, G(r, r', t) only depends on the relative distance. Integrating over the volume, one obtains

G(r, t) = (1/N) < Σ_{i=1}^N Σ_{j=1}^N δ(r − r_i(t) + r_j(0)) >    (4.30)

At t = 0, the Van Hove function reads

G(r, 0) = (1/N) < Σ_{i=1}^N Σ_{j=1}^N δ(r + r_i(0) − r_j(0)) >    (4.31)
        = δ(r) + ρ g(r)    (4.32)

Except for a singularity at the origin, the Van Hove function at t = 0 is proportional to the pair distribution function g(r). One can separate this function in two parts, self and distinct:

G(r, t) = G_s(r, t) + G_d(r, t)    (4.33)

with

G_s(r, t) = (1/N) < Σ_{i=1}^N δ(r + r_i(0) − r_i(t)) >    (4.34)

G_d(r, t) = (1/N) < Σ_{i≠j} δ(r + r_i(0) − r_j(t)) >    (4.35)

These correlation functions have the following physical interpretation: G_s(r, t) is the probability density of finding a particle i in the vicinity of r at time t knowing that this particle was at the origin at time t = 0; G_d(r, t) corresponds to the probability density of finding a particle j different from i at time t knowing that the particle i was at the origin at time t = 0. The normalization of these two functions reads

∫ dr G_s(r, t) = 1    (4.36)

which corresponds to being certain that the particle is in the volume at any time (namely, particle conservation). Similarly,

∫ dr G_d(r, t) = N − 1    (4.37)

with the following physical meaning: by integrating over space, the distinct correlation function counts the remaining particles. In the long time limit, the system loses memory of the initial configuration and the correlation functions become independent of the distance r:

lim_{t→∞} G_s(r, t) = lim_{r→∞} G_s(r, t) ≃ 1/V → 0    (4.38)

lim_{t→∞} G_d(r, t) = lim_{r→∞} G_d(r, t) ≃ (N − 1)/V ≃ ρ    (4.39)
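The self part (4.34) is directly accessible in a simulation as a histogram of single-particle displacements. A minimal sketch (function name and binning are ours; for times long enough that a particle travels more than half the box, the trajectory should be unwrapped instead of folded back by the minimum-image convention):

```python
import numpy as np

def self_van_hove(pos0, pos_t, box, bins=50, r_max=None):
    """Histogram of single-particle displacements |r_i(t) - r_i(0)|,
    i.e. the self part G_s of the Van Hove function, for a cubic periodic box.

    pos0, pos_t: arrays of shape (N, 3) with positions at times 0 and t.
    Returns the normalized 1d distribution of |dr| (the 4*pi*r^2 Jacobian
    is left out for simplicity) and the bin centers.
    """
    if r_max is None:
        r_max = box / 2
    dr = pos_t - pos0
    dr -= box * np.round(dr / box)          # minimum-image convention
    dist = np.sqrt((dr ** 2).sum(axis=1))
    hist, edges = np.histogram(dist, bins=bins, range=(0.0, r_max), density=True)
    return hist, 0.5 * (edges[1:] + edges[:-1])
```

At t = 0 all displacements vanish and the whole weight sits in the first bin, which mirrors the δ(r) term of Eq. (4.31).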

4.4.3 Intermediate scattering function

Instead of considering correlations in real space, one can follow them in reciprocal space, namely in Fourier components. Let us define a correlation function, the so-called intermediate scattering function, which is the spatial Fourier transform of the Van Hove function

F(k, t) = ∫ dr G(r, t) e^{−i k·r}    (4.40)

In a similar manner, one defines the self and distinct parts of this function

F_s(k, t) = ∫ dr G_s(r, t) e^{−i k·r}    (4.41)

F_d(k, t) = ∫ dr G_d(r, t) e^{−i k·r}    (4.42)

The physical interest of splitting the function is related to the fact that coherent and incoherent neutron scattering provide the total F(k, t) and self F_s(k, t) intermediate scattering functions, respectively.

4.4.4 Dynamic structure factor

The dynamic structure factor is defined as the time Fourier transform of the intermediate scattering function

S(k, ω) = ∫ dt F(k, t) e^{i ω t}    (4.43)

Obviously, one has the following sum rule: integrating the dynamic structure factor over ω gives the static structure factor

∫ dω S(k, ω) = S(k)    (4.44)

4.5 Dynamic heterogeneities

4.5.1 Introduction

In the previous sections, we have seen that accurate information on how the system evolves can be obtained by monitoring space and time correlation functions. However, there exist important physical situations where the relevant information is not provided by the previously defined functions. Indeed, many systems have relaxation times growing in a spectacular manner with a rather small variation of a control parameter, namely the temperature, whereas no significant changes in the structure appear (spatial correlation functions do not change when the control parameter is changed) and no phase transition occurs. This corresponds to a glassy behavior.
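In practice, the self part F_s(k, t) is most easily computed directly from the displacements, F_s(k, t) = < e^{−i k·(r_i(t) − r_i(0))} >, rather than by Fourier-transforming a histogram. A minimal sketch (the averaging over the three axis directions, which assumes isotropy, is our choice):

```python
import numpy as np

def self_isf(pos0, pos_t, k):
    """Self intermediate scattering function F_s(k, t) for one wave-vector
    modulus k, averaged over particles and over the three axis directions."""
    dr = pos_t - pos0                       # displacements, shape (N, 3)
    kvecs = k * np.eye(3)                   # (k,0,0), (0,k,0), (0,0,k)
    phases = np.exp(-1j * dr @ kvecs.T)     # shape (N, 3)
    return phases.mean().real
```

At t = 0, F_s(k, 0) = 1 for any k, consistent with G_s(r, 0) = δ(r).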

Numerical simulations as well as experimental results have shown that the particles of the system, although assumed identical, behave very differently at the same time: while most of the particles are characterized by an extremely slow evolution, a small part evolves more rapidly. The previously defined functions are irrelevant for characterizing this heterogeneous dynamics. In order to know whether these regions have a collective behavior, how long this heterogeneity will survive, and what is the spatial extension of this heterogeneity, one needs to build more suitable correlation functions.

4.5.2 4-point correlation function

The heterogeneities cannot be characterized by a two-point correlation function; one needs, at least, a higher-order correlation function. Let us define the 4-point correlation function

C_4(r, t) = (1/V) < ∫ dr' δρ(r', 0) δρ(r', t) δρ(r' + r, 0) δρ(r' + r, t) >
          − (1/V) < ∫ dr' δρ(r', 0) δρ(r', t) > (1/V) < ∫ dr' δρ(r' + r, 0) δρ(r' + r, t) >    (4.45)

where δρ(r, t) is a density fluctuation at the position r at time t. The physical meaning of this function is as follows: when a fluctuation occurs at r' at time t = 0, how long does this heterogeneity survive, and what is its spatial extension?

4.5.3 4-point susceptibility and dynamic correlation length

In order to measure the strength of the correlation defined above, one must perform the spatial integration of the 4-point correlation function. This defines the 4-point susceptibility

χ_4(t) = ∫ dr C_4(r, t)    (4.46)

This function can be reexpressed as

χ_4(t) = N ( < C²(t) > − < C(t) >² )    (4.47)

where one defines

C(t) = (1/V) ∫ dr δρ(r, 0) δρ(r, t)    (4.48)

as the correlation function (not averaged). For glassy systems, this 4-point susceptibility displays a maximum at the typical relaxation time. Considering this quantity as algebraic in a quantity ξ(t), this latter is interpreted as a dynamic correlation length which characterizes the spatial heterogeneities.
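Equation (4.47) makes χ_4 very convenient in simulation: it is just N times the variance, over independent runs (or time origins), of the non-averaged correlation C(t). A minimal sketch:

```python
import numpy as np

def chi4(c_samples, n_part):
    """4-point susceptibility chi_4(t) = N (<C(t)^2> - <C(t)>^2), estimated
    from independent samples of the non-averaged correlation C(t)."""
    c = np.asarray(c_samples, dtype=float)
    return n_part * (np.mean(c ** 2) - np.mean(c) ** 2)
```

If all runs give the same C(t), the susceptibility vanishes; any run-to-run scatter of C(t), i.e. dynamic heterogeneity, makes χ_4 positive.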

4.6 Conclusion

Correlation functions are tools allowing for a detailed investigation of the spatial and time behavior of equilibrium systems. As seen above with the 4-point functions, collective properties of systems can be revealed by building ad hoc correlation functions, even when thermodynamic quantities as well as two-point correlation functions do not change.

4.7 Exercises

4.7.1 “Le facteur de structure dans tous ses états!”

Characterization of spatial correlations can be done by knowing the static structure factor. We propose to study how particle correlations modify the shape of the static structure factor in different physical situations. Let N particles be placed within a simulation box of volume V. One defines the microscopic density as

ρ(r) = Σ_{i=1}^N δ(r − r_i)    (4.49)

where r_i denotes the position of particle i. One defines the structure factor as

S(k) = < ρ̃(k) ρ̃(−k) > / N    (4.50)

where ρ̃(k) is the Fourier transform of the microscopic density and the brackets < ... > denote the average over the configuration ensemble of the system.

♣ Q. 4.7.1-1 Express the structure factor S(k) as a function of terms of the form < e^{−i k·r_ij} >, where r_ij = r_i − r_j with i ≠ j.

♣ Q. 4.7.1-2 Show that S(k) → 1 when k → +∞.

♣ Q. 4.7.1-3 The simulation box is a cube with a linear dimension L and periodic boundary conditions. What is the minimum modulus of the wave vectors k available in simulation?

♣ Q. 4.7.1-4 Consider a gas model defined on a cubic lattice with a spatial step a (with periodic boundary conditions), where particles occupy lattice nodes. Show that there is a maximum modulus of wave vectors. Count all wave vectors available in simulation.

One now considers a one-dimensional lattice with a step a and N nodes, where each site is occupied by a particle.

♣ Q. 4.7.1-5 Calculate ρ̃(k) and infer the structure factor associated with this configuration. By taking the thermodynamic limit (N → ∞), show that the structure factor is equal to zero in the Brillouin zone (−π/a < k < π/a).

Each particle is now split in two identical particles shifted symmetrically on each side of the lattice node by a random distance u with the probability distribution p(u). The microscopic density of the system then becomes

ρ_d(r) = Σ_{i=1}^N ( δ(r − r_i − u_i) + δ(r − r_i + u_i) )    (4.51)

and the structure factor is given by

S_d(k) = < ρ̃_d(k) ρ̃_d(−k) > / (2N)    (4.52)

♣ Q. 4.7.1-6 Show that S_d(k) = 2 < (cos(ku))² > + 2 < cos(ku) >² (S(k) − 1).

♣ Q. 4.7.1-7 Assuming that the moments < u² > and < u⁴ > are finite, calculate the expansion of S_d(k) to order k⁴. Configurations corresponding to a structure factor which goes to zero when k → 0 are called “super-uniform”. Justify this definition.

♣ Q. 4.7.1-8 Recent simulations have shown that the structure factor of dense granular particles behaves as k at small wave vectors. Why does the simulation need a large number of particles (10⁶)?

We now consider the structure factor of a simple liquid in the vicinity of the critical point. The critical correlation function is given by the scaling law g(r) ∼ r^{−(1+η)} for r > r_c.

♣ Q. 4.7.1-9 By using the relation between the correlation function and the structure factor, show that this latter diverges at small wave vectors as k^{−A}, where A is an exponent to be determined (the integral ∫_0^∞ du u^{−η} sin(u) is finite).

4.7.2 Van Hove function and intermediate scattering function

The goal of this problem is to understand the change of behavior of the self Van Hove function G_s(r, t) (or of its spatial Fourier transform, namely the self intermediate scattering function F_s(k, t)) when a system goes from a usual dense phase to a glassy phase. Without loss of generality, one writes the self Van Hove correlation function as G_s(r, t) = < δ(r + r_i(0) − r_i(t)) >, where i is a tagged particle. One considers the long time and long distance behavior of the self Van Hove function.
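Several of the structure-factor questions above can be explored numerically before being solved analytically. For instance, for the perfect one-dimensional lattice of Q. 4.7.1-5, a single-configuration estimate of Eq. (4.50) vanishes at every allowed wave vector inside the Brillouin zone (a sketch; the function name is ours):

```python
import numpy as np

def structure_factor_1d(positions, k):
    """S(k) = rho(k) rho(-k) / N for one configuration of N point particles
    on a line (single-configuration estimate of Eq. (4.50))."""
    positions = np.asarray(positions, dtype=float)
    rho_k = np.exp(-1j * k * positions).sum()
    return (rho_k * rho_k.conjugate()).real / positions.size
```

With positions j·a (j = 0, ..., N − 1) and k = 2πm/(Na), 0 < m < N, the geometric sum cancels exactly, while S(0) = N.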

♣ Q. 4.7.2-1 Write the conservation of mass of tagged particles, which relates the local density ρ^(s)(r, t) and the particle flux j^(s)(r, t).

♣ Q. 4.7.2-2 One assumes that the particles undergo a diffusive motion, described by the relation

j^(s)(r, t) = −D ∇ρ^(s)(r, t)    (4.53)

What is the time dependent equation satisfied by ρ^(s)(r, t)?

♣ Q. 4.7.2-3 Defining the Fourier transform of ρ^(s)(r, t) as ρ̂^(s)(k, t) = ∫ d³r ρ^(s)(r, t) e^{−i k·r}, obtain the differential equation satisfied by ρ̂^(s)(k, t). Solve this equation; the initial condition is called ρ̂^(s)(k, 0).

♣ Q. 4.7.2-4 One admits that F_s(k, t) = < ρ̂^(s)(k, t) ρ̂^(s)(−k, 0) >, where the bracket denotes the average over the initial conditions. Infer from the above question that F_s(k, t) ∼ exp(−Dk²t).

♣ Q. 4.7.2-5 By using the inverse Fourier transform, obtain the expression of G_s(r, t).

One now considers the situation where the dynamics is glassy; one of its characteristics is the presence of heterogeneities. Experiments and numerical simulations reveal that the long distance behavior of the Van Hove correlation function is slower than a usual Gaussian and close to an exponential decay. One suggests a simple model for the particle dynamics: one assumes that a particle undergoes stochastic jumps whose time distribution is given by φ(t) = τ^{−1} e^{−t/τ} (one also assumes that the probability distribution associated with the first jump after the initial time is φ(t)). Neglecting all vibration modes, the self Van Hove function then reads

G_s(r, t) = Σ_{n=0}^∞ p(n, t) f(n, r)    (4.54)

where p(n, t) is the probability of having jumped n times at time t and f(n, r) the probability of moving a distance r after n jumps.

♣ Q. 4.7.2-6 Show that p(0, t) = 1 − ∫_0^t dt' φ(t'). Calculate explicitly p(0, t) and its Laplace transform p̂(0, s).

♣ Q. 4.7.2-7 Find the relation between p(n, t) and p(n − 1, t).

♣ Q. 4.7.2-8 Same question between f(n, r) and f(n − 1, r).

♣ Q. 4.7.2-9 Show that f̂(n, k) = f̂(k)^n and that p̂(n, s) = p̂(0, s) φ̂(s)^n. One defines the Fourier-Laplace transform of G_s(r, t) as

G̃(k, s) = ∫ d³r e^{−i k·r} ∫_0^∞ dt e^{−st} G_s(r, t)

♣ Q. 4.7.2-10 Show that

G̃(k, s) = p̂(0, s) / ( 1 − φ̂(s) f̂(k) )    (4.55)

♣ Q. 4.7.2-11 Let us denote G̃_0(k, s) = p̂(0, s). What is the physical meaning of this quantity? Calculate G_0(r, t).

♠ Q. 4.7.2-12 Show that

G(r, t) − G_0(r, t) = ( e^{−t/τ} / (2π)³ ) ∫ d³k ( e^{t f̂(k)/τ} − 1 ) e^{i k·r}    (4.56)

♠ Q. 4.7.2-13 If f(r) = (2πd²)^{−3/2} e^{−r²/(2d²)}, calculate f̂(k). One considers the instant t = τ. Expanding the exponential of Eq. (4.56), and performing the inverse Fourier transform, show that

G(r, τ) − G_0(r, τ) ∼ e^{−r/d}    (4.57)

Give a physical meaning of this result.
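A quick way to see the jump model of Eq. (4.54) at work is to sample it directly. Below is a one-dimensional Monte Carlo sketch (parameter names are ours): walkers wait exponential times of mean τ between Gaussian jumps of width d.

```python
import numpy as np

def ctrw_displacements(n_walkers, t, tau, d, rng=None):
    """Sample displacements of the jump model at observation time t:
    exponential waiting times (mean tau) between Gaussian jumps (width d)."""
    rng = np.random.default_rng(rng)
    disp = np.zeros(n_walkers)
    for i in range(n_walkers):
        t_now = rng.exponential(tau)      # waiting time before the first jump
        while t_now < t:
            disp[i] += rng.normal(0.0, d)
            t_now += rng.exponential(tau)
    return disp
```

At t = τ most walkers have made zero or one jump, so the displacement distribution keeps a large weight at the origin (the G_0 part of Q. 4.7.2-11) plus non-Gaussian tails, in the spirit of Eq. (4.57).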


Chapter 5

Phase transitions

Contents
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . 81
5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . 87
5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . 87
5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . 88
5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . 90
5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . 90
5.6 Reweighting Method . . . . . . . . . . . . . . . . . . . 93
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.1 Finite size scaling for continuous transitions: logarithmic corrections . . . 96
5.8.2 Some aspects of the finite size scaling: first-order transition . . . 98

5.1 Introduction

One very interesting application of simulations is the study of phase transitions, even though they can occur only in infinite systems (namely in the thermodynamic limit). This apparent paradox can be resolved as follows: in a continuous phase transition (unlike a first-order transition), fluctuations occur at increasing length scales as the temperature approaches the critical point. In a simulation,

when fluctuations are smaller than the linear dimension of the simulation cell, the system does not realize that the simulation cell is finite, and its behavior (and the thermodynamic quantities) is close to that of an infinite system. When the fluctuations are comparable to the system size, the behavior differs from that of the infinite system close to the critical temperature, but the finite size behavior is similar to that of the infinite system when the temperature is far from the critical point. This is due to the fact that a continuous phase transition is associated with fluctuations of all length scales at the critical point. Consequently, a rather good estimate of the critical temperature can be obtained with a simulation of modest size. Let us finally note that the critical temperature obtained in simulations varies weakly with the system size in continuous systems, whereas a strong dependence can be observed in lattice models.

Phase transitions are characterized by critical exponents which define universality classes: for example, the liquid-gas transition for liquids interacting with a pairwise potential belongs to the same universality class as the para-ferromagnetic transition of the Ising model. Indeed, the critical phenomena theory shows that phase transitions are characterized by scaling laws (algebraic laws) whose exponents are universal quantities, independent of an underlying lattice. Microscopic details are irrelevant in the universal behavior of the system, but the space dimension of the system as well as the symmetry group of the order parameter are essential. Studies can therefore be realized accurately on simple models.

Many simulations investigating critical phenomena (continuous phase transitions) were performed with lattice models; studies of continuous systems are less common. If we compare lattice and continuous systems, the former requires less simulation time. Moreover, varying the size of a continuous system over several orders of magnitude is a challenging problem, whereas it is possible with lattice systems. As we will see below, simulations must be performed with different sizes in order to determine the critical exponents: finite size scaling analysis, which involves simulations with different system sizes¹, provides an efficient tool for determining the critical exponents of the system. Indeed, the scaling laws used for the critical exponent calculation correspond to asymptotic behaviors, whose validity holds only for large simulation cells. Therefore, to have accurate critical exponents, one must choose large simulation cells (and the accuracy of the critical exponents is restricted by the available computer power). For a good accuracy of the critical exponents, scaling law corrections can be included, which introduces additional parameters to be estimated.

as well as spatial correlation functions behave according to scaling laws. the scaling law is m(t. Magnetization (Eq. In the vicinity of the critical point (namely. . i=1 (5. .3) In the absence of an external field. h) = lim mN (t. N →∞ (5.5.2 5. thermodynamic quantities. h) = N N Si .2) where t = (T − Tc )/Tc is the dimensionless temperature and h = H/kB T the dimensionless external field. The magnetization per spin is defined by 1 mN (t. 81 .1 Scaling laws Critical exponents Scaling law assumptions were made in the sixties and were subsequently precisely derived with the renormalization group theory.j> Si Sj − H i=1 Si (5. h). (5. one has m(t = 0.2. for the Ising model. and H a uniform external field.5) where δ is the exponent of the magnetization in the presence of an external field. h = 0) = 0 A|t|β t>0 t<0 (5. and we only introduce the basic concepts useful for studying phase transitions in simulations For the sake of simplicity. J is the ferromagnetic interaction strength.1) where < i. along the critical isotherm.2 Scaling laws 5. Tc /J = 2. which provides a general framework for studying critical phenomena. we consider the Ising model whose Hamiltonian is given by N N H = −J <i.4) where the exponent β characterizes the spontaneous magnetization in the ferromagnetic phase.2691 .2)) is defined as m(t. Similarly. Detailed presentation of this theory goes beyond the scope of these lecture notes. H = 0. h) = −B|h|1/δ B|h|1/δ h<0 h>0 (5. j > denotes a summation over nearest sites. in two dimensions).

9) where ν is the exponent associated with the correlation length.7) where γ is the susceptibility exponent. denoted fs (t. (5. (5. Isothermal susceptibility in zero external field diverges at the critical point as χT (h = 0) ∼ |t|−γ . The spatial correlation function. δ. γ. which behaves as ξ ∼ |t|−ν . h = 0) C|t|−α C |t|−α t<0 t>0 (5. h) = |t|2−α Ff (5. e. Similarly. η) are not independent! Assuming that the free energy per volume unit and the pair correlation function obey a scaling function. The amplitude ratio C/C is also a universal quantity. h): h ± fs (t. behaves in the vicinity of the critical point as exp(−r/ξ) . (5. ν. but exponents are always identical and we only consider an exponent for each quantity. The phase transition theory assumes that the neglected fluctuations of the mean-field approach make a non analytical contribution to the thermodynamic quantities.2.2 Scaling laws Landau theory neglects the fluctuations of the order parameter (relevant in the vicinity of the phase transition) and expresses the free energy as an analytical function of the order parameter.g.11) |t|∆ 82 . one can define a pair of exponents for each quantity for positive or negative dimensionless temperature. the free energy density.6) where α and α are the exponents associated with the specific heat. 5.8) g(r) ∼ rd−2+η where ξ is the correlation length. it can be shown that only two exponents are independent. one always observes that α = α . These six exponents (α. Experimentally. β. denoted by g(r). This correlation function decreases algebraically at the critical point as follows g(r) ∼ 1 rd−2+η (5.Phase transitions The specific heat cv is given by cv (t.10) where η is the exponent associated with the correlation function.

(5. t) = − 1 ∂fs ± ∼ |t|2−α−∆ Ff kB T ∂h h |t|∆ . (5. magnetization must be finite when t → 0. h) ∼|t|β h |t|∆ ∼|t|β−∆λ hλ . Moreover. i.17) This relation does not depend on the space dimension d.5. one identifies exponents of the algebraic dependence in temperature: β = 2 − α − ∆. Therefore. by taking the second derivative of the free energy density with respect to the field h. β and γ. (5.20) In order to recover Eq. h) ∼ t2−α−2∆ Ff h |t|∆ .21) . This yields β = ∆λ.13) For h → 0.2 Scaling laws ± where Ff are functions defined below and above the critical temperature and which approach a non zero value when h → 0 and have an algebraic behavior when the scaling variable goes to infinity. one obtains a first relation between exponents α.18) Let us consider the limit when t goes to zero when h = 0. called Rushbrooke scaling law α + 2β + γ = 2. magnetization is obtained by taking the derivative of the free energy density with respect to the external field h. One then obtains for the magnetization.5). 83 (5. m(h. x → ∞. (5.14) Similarly. (5. (5. ± Ff (x) ∼ xλ+1 . along the critical isotherm: m(t. λ (5.e.19) (5. does not diverge or vanish. (5. one obtains the isothermal susceptibility ± χT (t.15) For h → 0.12) Properties of this non analytical term give relations between critical exponents. one has ∆ = β + γ.16) By eliminating ∆. one identifies exponents of the algebraic dependence in temperature: −γ = 2 − α − 2∆. (5.

δ (5. (5. which gives ξ ξ dd rg(r) = 0 0 dd r exp(−r/ξ) rd−2+η (5.25) kB T Taking the second derivative of this equation with respect to the temperature.18) .24) where l1 is a microscopic length scale. (5.28) With the change of variable u = r/ξ in Eq. one infers the following relation βδ = β + γ. (5. (5. it comes ξ 1 dd rg(r) = ξ 2−η 0 0 dd u exp(−u) ud−2+η (5. (5.22) and (5.29).26) By using Eq.28). (5. Knowing that the space integral of g(r) is proportional to the susceptibility. .) (5.22) Eliminating ∆ and λ in Eqs. 2 − α = dν. one obtains for the specific heat cv = −T ∂ 2 fs ∼ |t|νd−2 .Phase transitions By identifying exponents of h in Eqs. By using the relation between the correlation length ξ and the dimensionless temperature t (Eq.6). . (5. one performs the integration of g(r) over a volume whose linear dimension is ξ. (5. the integral is a finite value.30) . When t → 0. one infers the Josephson relation (so called the hyper scaling relation).20) one infers 1 λ= .23) The next two relations are inferred from the scaling form of the free energy density and of the spatial correlation relation g(r).21). 84 (5.27) which involves the space dimension. ∂T 2 (5.9)).5) and (5. subdominant corrections can be neglected and one has fs ∼ ξ −d ∼ |t|νd . the singular free energy density has the following expansion fs ∼ ξ −d (A + B1 kB T l1 ξ + . By considering that the relevant macroscopic length scale of the system is the correlation length.29) On the right-hand side of Eq. (5. (5. one obtains that χT (h = 0) ∼ |t|−(2−η)ν .

For the Ising model. An significant feature of the critical phenomena theory is the existence of a upper and lower critical dimensions: At the upper critical dimension dsup and above. By considering Eq. This implies that in three-dimensional Euclidean space. 2 This property does not apply to quenched disordered systems.7). we now apply scaling laws for finite size systems to develop a method which gives the critical exponents of the phase transition as well as non universal quantities in the thermodynamic limit. Because the liquid-gas transition belongs to the universality class of the Ising model. no phase transition can occur. dinf = 1 and dsup = 4. After this brief survey of critical phenomena. a similar conclusion applies to the phase transition. a mean-field theory cannot describe accurately the para-ferromagnetic phase transition. since there are four relations between critical exponents that implies only two are independent 2 . (5. 85 . (5. Below lower critical dimension dinf .1 – Spin configuration of Ising model in 2 dimensions at high temperature. one infers γ = (2 − η)ν.31) Finally.2 Scaling laws Figure 5.5. the mean-field theory (up to some logarithmic subdominant corrections) describes the critical phase transition. in particular the critical exponents.

3 – Spin configuration of the Ising model in 2 dimensions below the the critical temperature. 86 .Phase transitions Figure 5.2 – Spin configuration of the Ising model in 2 dimensions close to the critical temperature. Figure 5.

Reexpressing this function as Fc± (|t|−ν /L)) = (|t|−ν /L)−κ D± (Ltν ) (5.35) with D± (0) finite. in other words. are the same of a system of size L/l for a dimensionless temperature tlyt and a dimensionless field hlyh .36) which gives for the specific heat cv (t. L−1 ) = l−d fs (tlyt . D is a continuous function. Below and above the critical point of the infinite and for a vicinity which shrinks with increasing sizes of simulation cells. 5. . L−1 ) = Lα/ν D(L|t|ν ). Fc± (x) must go to zero when x goes to zero. the limit t → 0 can be taken before than L → ∞.. . this means. .34) Because |t|−α goes to infinity when t → 0.4 merely illustrates that a system close to the phase transition shows domains of spins of the same sign. If we denote by Fc the scaling function associated with the specific heat. . A phase transition occurs when all variables of fs go to 0. . one observes a size-dependence (See Fig.3.5. the system behaves like an infinite system. . and a dimensionless field h. . . . . (5. Since the specific heat does not diverge when |t| goes to zero.37) The function D goes to zero when the scaling variable is large and is always finite and positive. . L−1 ) = |t|2−α Ff (|t|−ν /L). one requires that κ = α/ν (5. the linear dimension of the simulation cell. In zero field. in simulation. (5. . close to the critical point. the thermodynamic quantities of a finite system of linear size L for a dimensionless temperature t.1 Finite size scaling analysis Specific heat A close examination of spin configuration in Fig. (5. (l/L)−1 ).. 5. which depends on the scaling variable |t|−ν /L. The size of domains increases as the transition is approached. one has ± fs (t. The group renormalization theory showed that. But. . hlyh .3 5. one moves away the critical point. h.33)) then goes to infinity. one has cv (t. this corresponds to the fact that the variable of the function Ff (Eq. L−1 ) = |t|−α Fc± (|t|−ν /L). 
and displays a maximum for a finite 87 .32) where yt and yh are the exponents associated with the fields. the system behaves like an infinite system whereas in the region where the correlation length is equal to size of the simulation cell.4). . (5.3 Finite size scaling analysis 5. (5.33) If the correlation length ξ is smaller than L. which gives fs (t. and ξ can be larger than L. . .

value of the scaling variable, denoted x_0. Therefore, for a finite size system, one obtains the following results:

• The maximum of the specific heat occurs at a temperature T_c(L) which is shifted with respect to that of the infinite system:

T_c(L) − T_c ∼ L^{−1/ν}    (5.38)

• The maximum of the specific heat of a finite size system of linear dimension L is given by the scaling law

C_v(T_c(L), L^{−1}) ∼ L^{α/ν}    (5.39)

Figure 5.4 – Specific heat versus temperature of the Ising model in two dimensions. The simulation results correspond to different system sizes (L = 16, 24, 32, 48), where L is the linear dimension of the lattice.

5.3.2 Other quantities

Similar results can be obtained for other thermodynamic quantities available in simulations. For instance, let us consider the absolute value of the magnetization

|m| = < | (1/N) Σ_{i=1}^N S_i | >    (5.40)

and the isothermal susceptibility

k_B T χ = N ( < m² > − < m >² )    (5.41)
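Equation (5.38) suggests a simple way of extrapolating the critical temperature from the finite-size maxima: fit T_c(L) linearly in the variable L^{−1/ν}. A sketch (function name and synthetic data are ours, for illustration only):

```python
import numpy as np

def infinite_size_tc(sizes, tc_of_l, nu):
    """Extrapolate Tc(L) = Tc + a * L**(-1/nu) to L -> infinity by a
    linear least-squares fit in the variable x = L**(-1/nu)."""
    x = np.asarray(sizes, dtype=float) ** (-1.0 / nu)
    a, tc = np.polyfit(x, np.asarray(tc_of_l, dtype=float), 1)
    return tc

# synthetic check: data generated with Tc = 2.269, a = 1.3, nu = 1
sizes = np.array([16, 24, 32, 48])
tc_l = 2.269 + 1.3 * sizes ** (-1.0)
```

With the exact exponent ν, the fit recovers the input T_c; in a real analysis ν itself must be estimated jointly.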

by comparing results. that no analytical treatment is able to provide with sufficient accuracy until now. Note that in simulation more information is available than the two independent exponents and the critical temperature. By considering the maximum of Cv or other quantities. one needs to incorporate subdominant corrections. one can compute β/ν from |m| . At the critical point. Fχ . Moreover. L−1 ) = Lγ/ν Fχ (tL1/ν ) ± kB T χ (t. one also obtains non-universal quantities. Indeed. useful in simulation.5. L−1 ) = Lγ/ν Fχ (tL1/ν ) (5. and FU are 8 scaling functions (with different maxima). is a powerful method for calculating critical exponents. χ goes to 0 when T → 0 and has a maximum at the critical point.3 Finite size scaling analysis It is possible to calculate a second susceptibility kB T χ = N ( m2 − |m| 2 ) (5. 0. combined with finite size scaling. Another quantity. 0. At high temperature. Indeed. like the critical temperature.3 .43) The reasoning leading to the scaling function of the specific heat can be used for other thermodynamic quantities and scaling laws for finite size systems are given by the relations ± |m(t. 0. L ) = −1 ± FU (tL1/ν ) ± ± ± ± where Fm . by plotting Binder’s parameter as a function of temperature. but χ has a smaller amplitude. 0. the renormalization group can only give critical exponents and is unable to provide a reasonable value of the critical temperature. 3 89 . L−1 ) intersect at the same abscissa (within statistical errors).46) (5. 0. because of uncertainty and sub-dominant corrections to scaling laws. which determines the critical temperature of the system in the thermodynamic limit. one can determine more accurately critical exponents. Conversely. Only the functional or non perturbative approach can give quantitative results. due to the existence of two peaks in the magnetization distribution and does not display a maximum at the transition temperature. then γ/ν from χ (or χ ). L−1 )| = L−β/ν Fm (tL1/ν ) ± kB T χ(t. 
Practically, by plotting Binder's parameter as a function of temperature, all curves U(t, L^{−1}) intersect at the same abscissa (within the statistical errors), which determines the critical temperature of the system in the thermodynamic limit. Once the temperature of the transition is obtained, one can compute β/ν from |m|, then γ/ν from χ (or χ'). By considering the maximum of C_v or of the other quantities, one derives the value of the exponent 1/ν. Monte Carlo simulation, combined with finite size scaling, is thus a powerful method for calculating critical exponents. Scaling laws correspond to asymptotic behaviors for very large system sizes: in order to increase the accuracy of the simulation, one needs to incorporate the subdominant corrections.

5.4 Critical slowing down

The nice method developed in the previous section relies on the idea that the Metropolis algorithm remains efficient even in the vicinity of the critical region. This is not the case: close to the critical point, large scale fluctuations are present, and the relaxation times increase accordingly. For an infinite system, the relaxation time diverges as

τ ∼ |t|^{−νz}    (5.48)

where z is a new exponent, the so-called dynamical exponent. It typically varies between 2 and 5 (for a Metropolis algorithm). In order to obtain equilibrium, fluctuations must be sampled at all scales; knowing that ξ ∼ |t|^{−ν}, these times are related to the growing correlation length by the scaling relation

τ ∼ ( ξ(t) )^z    (5.49)

For a finite-size system, the correlation length is bounded by the linear system size L, and the relaxation time close to the critical point increases as

τ ∼ L^z    (5.50)

Simulation times then become prohibitive!

5.5 Cluster algorithm

Let us recall that the Metropolis algorithm is a Markovian dynamics for particle motion (or spin flips in lattices). A reasonable acceptance ratio is obtained by choosing a single particle (or spin), in order to have a product |βΔE| which is relatively small. Ideally, a global spin flip should have an acceptance ratio equal to 1. A major advance of twenty years ago consists of generating methods where a large number of particles (or spins) are moved (or flipped) simultaneously, while keeping a quite high acceptance ratio (or even one equal to 1). The proposed method adds a new step to the Metropolis algorithm: from a given spin configuration, one characterizes the configuration by means of bonds between spins. We consider below the Ising model, but the method can be generalized to any locally symmetric Hamiltonian (for the Ising model, this corresponds to the up/down symmetry). The detailed balance equation (Eq. (2.18) of Chap. 2) is modified as follows:

( P_gc(o) W(o → n) ) / ( P_gc(n) W(n → o) ) = exp(−β(U(n) − U(o)))    (5.51)

In order to build such an algorithm, one characterizes a configuration by bonds between spins. Once bonds are set, a cluster is a set of spins connected by at least one bond between them. For building clusters, one defines bonds between nearest-neighbor spins with the following rules:

1. If two nearest-neighbor spins have opposite signs, they are not connected.

2. If two nearest-neighbor spins have the same sign, they are connected with a probability p and disconnected with a probability (1 − p).

This rule assumes that J is positive (ferromagnetic system). When J is negative, spins of the same sign are always disconnected, and spins of opposite signs are connected with a probability p and disconnected with a probability (1 − p). The method relies on a rewriting of the partition function given by Fortuin and Kasteleyn (1969); this formula, beyond its simplicity, has the advantage of being exact whatever the dimensionality of the system, but exploiting this idea for simulation occurred almost twenty years later with the work of Swendsen and Wang [1987].

If one wishes to accept all new configurations (W(i → j) = 1, whatever the configurations i and j), one must have the following ratio:

Pgc(o) / Pgc(n) = exp(−β(U(n) − U(o)))    (5.52)

where Pgc(o) is the probability of generating a given bond configuration from o (o and n denote the old and new configurations, respectively). Let us note that the energy is simply expressed as

U = (Na − Np)J    (5.53)

where Np is the number of pairs of nearest-neighbor spins of the same sign and Na the number of pairs of nearest-neighbor spins of opposite sign. Let us consider a configuration with Np pairs of spins of the same sign: the probability of having nc pairs connected (and consequently nb = Np − nc pairs disconnected) is given by the relation

Pgc(o) = p^nc (1 − p)^nb.    (5.54)

If one flips a cluster (o → n), the number of bonds between pairs of spins of same and opposite signs is changed, and one has Np(n) = Np(o) + ∆ and similarly Na(n) = Na(o) − ∆. The energy of the new configuration U(n) is then simply related to the energy of the old configuration U(o) by the equation

U(n) = U(o) − 2J∆.    (5.55)

Let us now consider the probability of the inverse process. One wishes to generate the same cluster structure, but starting from the configuration with Np + ∆ parallel pairs and Na − ∆ antiparallel pairs (antiparallel pairs are assumed to carry broken bonds). The difference with the previous configuration is the number of bonds to break: one needs the same number of connected bonds for generating the same cluster, so the number n′c of connected pairs must equal nc. Therefore one obtains

Np(o) = nc + nb    (5.56)
Np(n) = n′c + n′b    (5.57)
Np(n) = Np(o) + ∆ = nc + nb + ∆    (5.58)

which immediately gives n′b = nb + ∆. The probability of generating the new bond configuration from the old configuration is then given by

Pgc(n) = p^n′c (1 − p)^n′b = p^nc (1 − p)^(nb+∆).    (5.63)

Inserting Eqs. (5.54) and (5.63) in Eq. (5.52), one obtains

(1 − p)^−∆ = exp(2βJ∆)    (5.64)

which gives (1 − p) = exp(−2βJ). One can solve this equation for p:

p = 1 − exp(−2βJ).    (5.66)

Therefore, if the probability is chosen according to Eq. (5.66), the new configuration is always accepted! Indeed, in the vicinity of the critical point, the critical slowing down drastically decreases: for example, the dynamical exponent of the two-dimensional Ising model is equal to 2.1 with the Metropolis algorithm, while it is 0.2 with a cluster algorithm. One can also show that, far from the critical point, a cluster algorithm becomes quite similar to the Metropolis algorithm in terms of relaxation time. More generally, Monte Carlo dynamics is accelerated whenever one finds a method able to flip large clusters corresponding to the underlying physics. The virtue of this algorithm is thus not only its ideal acceptance rate: in the Swendsen-Wang method, one selects spin sets which correspond to equilibrium fluctuations, whereas in a Metropolis method, by choosing a spin randomly, one generally creates a defect in the bulk of a large fluctuation, and this defect must diffuse from the bulk of the cluster to its surface before the system reaches equilibrium.
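The derivation above translates directly into an algorithm. The following sketch (an illustrative implementation of ours — the array layout, function names and union-find bookkeeping are not from the text) performs one Swendsen-Wang sweep on an L × L ferromagnetic Ising lattice with periodic boundary conditions: bonds between parallel nearest neighbors are activated with probability p = 1 − exp(−2βJ), the resulting clusters are identified, and each cluster is flipped with probability 1/2.

```python
import math
import random

def sw_sweep(spins, L, beta, J=1.0, rng=random):
    """One Swendsen-Wang sweep: activate bonds with p = 1 - exp(-2*beta*J)
    between parallel nearest neighbors, then flip each cluster with prob 1/2."""
    p = 1.0 - math.exp(-2.0 * beta * J)
    parent = list(range(L * L))          # union-find structure over the sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    def union(i, j):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj

    for x in range(L):
        for y in range(L):
            i = x * L + y
            # count each bond once: right and down neighbors (periodic)
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if spins[i] == spins[j] and rng.random() < p:
                    union(i, j)          # same sign: bond with probability p

    flip = {}                            # each cluster flipped with probability 1/2
    for i in range(L * L):
        r = find(i)
        if r not in flip:
            flip[r] = rng.random() < 0.5
        if flip[r]:
            spins[i] = -spins[i]
    return spins
```

Deep in the ordered phase almost all sites join a single cluster, so a sweep amounts to a (nearly) global flip; far above Tc the clusters shrink to single spins and the dynamics resembles local Metropolis moves.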

Knowing that diffusion is like a random walk and that the typical cluster size is comparable to the correlation length, the relaxation time is then given by τ ∼ ξ², which gives τ ∼ L² for a finite-size system, and explains why the dynamical exponent of cluster algorithms is close to 2.

Note that the necessary ingredient for defining a cluster algorithm is the existence of a symmetry. Dress and Krauth [1995] generalized these methods to cases where the symmetry of the Hamiltonian has a geometric origin. For lattice Hamiltonians where the up-down symmetry does not exist, Heringa and Blote generalized the Dress and Krauth method by using the discrete symmetry of the lattice Heringa and Blote [1998a,b].

5.6 Reweighting Method

To obtain the complete phase diagram of a given system, a large number of simulations are required for scanning the entire range of temperatures, or the range of external fields. When one attempts to locate a phase transition precisely, the number of simulation runs can become prohibitive. The reweighting method Ferrenberg and Swendsen [1989, 1988], Ferrenberg et al. [1995] is a powerful technique that estimates the thermodynamic properties of the system for a set of temperatures [T − ∆T, T + ∆T] (or, for an external field H, within a set of values [H − ∆H, H + ∆H]) by using simulation data performed at a single temperature T.

The method relies on the following result. For a given inverse temperature β = 1/(kB T), consider the partition function

Z(β, N) = Σ_{α} exp(−βH({α}))    (5.67)
        = Σ_i g(i) exp(−βE(i))    (5.68)

where {α} denotes all available states, the index i runs over all available energies of the system, and g(i) denotes the density of states of energy E(i). For another inverse temperature β′, the partition function can be expressed as

Z(β′, N) = Σ_i g(i) exp(−β′E(i))    (5.69)
         = Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i))    (5.70)

where ∆β = β′ − β.

Similarly, a thermal average, like the mean energy, is expressed as

E(β′, N) = Σ_i g(i)E(i) exp(−β′E(i)) / Σ_i g(i) exp(−β′E(i))    (5.72)
         = Σ_i g(i)E(i) exp(−βE(i)) exp(−(∆β)E(i)) / Σ_i g(i) exp(−βE(i)) exp(−(∆β)E(i)).    (5.73)

In a Monte Carlo simulation at inverse temperature β, one can monitor the energy histogram associated with the visited configurations. This histogram, denoted Dβ(i), is proportional to the density of states g(i) weighted by the Boltzmann factor:

Dβ(i) = C(β) g(i) exp(−βE(i))    (5.74)

where C(β) is a temperature-dependent constant. One can easily see that Eq. (5.73) can then be reexpressed as

E(β′, N) = Σ_i Dβ(i)E(i) exp(−(∆β)E(i)) / Σ_i Dβ(i) exp(−(∆β)E(i)).    (5.75)

Therefore, from a histogram recorded at the inverse temperature β, one can obtain an estimate of thermal averages for a range of temperatures. A naive (or optimistic) view would consist in believing that a single simulation at a given temperature could give the thermodynamic quantities over the entire range of temperatures⁴. This is not the case: in a Monte Carlo simulation the number of steps is finite, which means that, for a given temperature, the available energy states at equilibrium are around the mean energy value; states of energy significantly lower or larger than the mean energy are weakly sampled, or not sampled at all. At equilibrium, the energy histogram Dβ(i) is bell-shaped and can be approximated by a Gaussian far from a phase transition; practically, the histogram width corresponds to the equilibrium fluctuations and is then proportional to 1/√N, where N is the site number or the particle number. As shown in the schematic of Fig. 5.5, when the histogram is reweighted to a lower temperature than that of the simulation (β1 > β), the energy histogram is shifted towards lower energies; conversely, when the histogram is reweighted to a higher temperature (β2 < β), it is shifted towards higher energies. Reweighting thus leads to an exponential increase of the errors with the temperature difference: when the reweighted histogram has its maximum at a distance larger than the standard deviation of the simulation data, the accuracy of the reweighted histogram, and of the thermodynamic quantities, is poor. This shows that the efficiency of the reweighting method decreases when the system size increases.

⁴ By using a system of small dimensions.
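Equation (5.75) is straightforward to implement. The sketch below (our own illustration; the function name and arguments are assumptions, not notation from the notes) reweights an energy histogram recorded at inverse temperature β to a nearby β′ and returns the corresponding mean energy; the arguments of the exponentials are shifted by their maximum to avoid overflow.

```python
import math

def reweight_mean_energy(energies, counts, beta, beta_prime):
    """Estimate <E> at beta_prime from a histogram recorded at beta,
    following Eq. (5.75): weights D_beta(E) * exp(-(beta' - beta) * E)."""
    dbeta = beta_prime - beta
    pairs = [(e, c) for e, c in zip(energies, counts) if c > 0]
    # work with log-weights and shift by the maximum to stay in range
    args = [math.log(c) - dbeta * e for e, c in pairs]
    m = max(args)
    w = [math.exp(a - m) for a in args]
    return sum(e * wi for (e, _), wi in zip(pairs, w)) / sum(w)
```

For β′ = β the function returns the plain histogram average; moving β′ away from β quickly amplifies the statistical noise in the tails, as discussed above.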

Figure 5.5 – Energy histogram Dβ(E) of a Monte Carlo simulation (middle curve) at inverse temperature β; Dβ1(E) and Dβ2(E) are obtained from it by the reweighting method.

We briefly summarize the results of a more refined analysis of the reweighting method. For a system with a continuous phase transition, the width of the energy histogram increases in the critical region, and an increasing range of temperatures can then be estimated by the reweighting method. Moreover, by combining several energy histograms (multiple histogram reweighting), and provided the simulation sampling is done correctly, one can improve the estimates of the thermodynamic quantities over a large range of temperatures. In summary, this method is accurate in the vicinity of the temperature of the simulation and allows one to build a complete phase diagram rapidly, because the number of simulations to perform is quite limited. The principle of the method, developed here for the energy and its conjugate variable β, can be generalized to any pair of conjugate variables, e.g. the magnetization M and an external uniform field H.

5.7 Conclusion

Phase transitions can be studied by simulation. Indeed, by using several results of statistical physics, the renormalization group theory has provided the finite size analysis, which permits the extrapolation of the critical exponents of the system

in the thermodynamic limit by using simulation results for different system sizes. By combining simulation results with a reweighting method, one drastically reduces the number of simulations. Cluster methods are required for decreasing the dynamical exponent associated with the phase transition. If the accuracy of simulation results is comparable to that of theoretical methods for simple systems, like the Ising model, it is superior for almost all more complicated models.

5.8 Exercises

5.8.1 Finite size scaling for continuous transitions: logarithmic corrections

The goal of this problem is to recover the results of the finite size analysis with a simple method.

♣ Q. 5.8.1-1 Give the scaling laws for an infinite system of the specific heat C(t), of the correlation length ξ(t) and of the susceptibility χ(t), as functions of the usual critical exponents α, ν, γ and of t = (T − Tc)/Tc, the dimensionless temperature, where Tc is the critical temperature.

♣ Q. 5.8.1-2 In a simulation, the system is finite; explain why there are no divergences of the quantities previously defined.

♣ Q. 5.8.1-3 By assuming that ξL(0) = L and that the ratio ξL(0)/ξ(t) is finite and independent of L, infer that t ∼ L^x, where x is an exponent to be determined.

♣ Q. 5.8.1-4 Under an assumption of finite size analysis, one has the relation

CL(0) = C(t) FC(ξL(0)/ξ(t))    (5.76)

where FC is a scaling function and CL(0) is the maximum value of the specific heat obtained in a simulation of a finite system with linear dimension L. By using the above result and Eq. (5.76), show that CL(0) ∼ L^y, where y is an exponent to calculate.

♣ Q. 5.8.1-5 By assuming that

χL(0) = χ(t) Fχ(ξL(0)/ξ(t))    (5.77)

show that χL(0) ∼ L^z, where z is an exponent to be determined.

Various physical situations occur where the scaling laws must be modified to account for logarithmic corrections. The rest of the problem consists of obtaining relations between the exponents associated with these corrections. We now assume that, for an infinite system,

C(t) ∼ |t|^−α |ln|t||^α̂    (5.80)
ξ(t) ∼ |t|^−ν |ln|t||^ν̂    (5.81)
χ(t) ∼ |t|^−γ |ln|t||^γ̂    (5.82)

♣ Q. 5.8.1-6 By assuming that ξL(0) ∼ L (ln(L))^q̂ and that the finite size analysis, Eq. (5.76), is valid for the specific heat, show that

CL(0) ∼ L^x (ln(L))^ŷ    (5.83)

where x and ŷ are expressed as functions of α, ν, α̂, ν̂ and q̂.
Hint: for the equation y ln(y)^c = x^−a |ln(x)|^b, with y > 0 and going to infinity, the asymptotic solution is given by

x ∼ y^−1/a ln(y)^((b−c)/a).    (5.84)

♣ Q. 5.8.1-7 By using the results of question 5.8.1-6, show that one recovers the hyperscaling relation, together with an additional relation between α, α̂, ν, ν̂ and q̂.

♣ Q. 5.8.1-8 By using the hyperscaling relation, show that the new relation can be expressed as a function of α̂, ν̂, q̂, ν and d. A finite size analysis of the partition function (the details are beyond the scope of this problem) shows that, if α ≠ 0, the specific heat of a finite size system behaves as

CL(0) ∼ L^(−d+2/ν) (ln(L))^(−2(ν̂−q̂)/ν)    (5.85)

whereas, when α = 0, the specific heat of a finite size system behaves as

CL(0) ∼ (ln(L))^(1−2(ν̂−q̂)/ν).    (5.86)

♣ Q. 5.8.1-9 What is the relation between α̂, ν̂, q̂, ν and d?

Let us now consider the logarithmic corrections of the correlation function:

g(r, t) = (ln(r))^η̂ / r^(d−2+η) D(r/ξ(t)).    (5.87)

♣ Q. 5.8.1-10 By calculating the susceptibility χ(t) from the correlation function, recover Fisher's law as well as a new relation between the exponents η̂, γ̂, ν̂ and η.

5.8.2 Some aspects of finite size scaling: first-order transitions

Let us consider discontinuous (or first-order) transitions. One can show that the partition function of a finite-size system of linear size L, with periodic boundary conditions, is expressed as

Z = Σ_{i=1}^{k} exp(−βc fi(βc) L^d)    (5.88)

where k is the number of coexisting phases (often k = 2, but this is not necessary) and fi(βc) is the free energy per site of the ith phase in the thermodynamic limit. Moreover, one assumes that the partition function expressed as the sum over the free energies of the different phases remains valid in the vicinity of the transition temperature; at the transition temperature, the free energies of all phases are equal. Expanding the free energies to first order in the deviation from the transition temperature gives

βfi(β) = βc fi(βc) − βc ei t + O(t²)    (5.89)

where ei is the energy per site of the phase i and t = 1 − Tc/T. One can also show that the probability of finding an energy E per site is given, for large L, by

P(E) = [K δ(E − e1) + δ(E − e2)] / (1 + K)    (5.90)

where K is a function of the temperature of the form K = K(tL^d), with K(x) → ∞ when x → −∞ and K(x) → 0 when x → +∞.

♣ Q. 5.8.2-1 Assume only two coexisting phases. Express the partition function by keeping the first-order terms in t, and infer that the two phases coexist only if tL^d ≪ 1.

♣ Q. 5.8.2-2 Knowing that the specific heat per site for a lattice of size L is given by

C(L, T) L^−d = β² (⟨E²⟩ − ⟨E⟩²)    (5.91)

express C(L, T) as a function of e1, e2, β and K.

♣ Q. 5.8.2-3 Show that C(L, T) goes to zero for temperatures much larger and much smaller than the temperature of the transition.

♣ Q. 5.8.2-4 Determine the value of K for which C(L, T) is maximal.

♣ Q. 5.8.2-5 Show that there exists a maximum of C(L, T).

♣ Q. 5.8.2-6 Determine the corresponding value of C(L, T).

♣ Q. 5.8.2-7 For characterizing a first-order transition, one can consider the Binder parameter

V4(L, T) = 1 − ⟨E⁴⟩ / (3⟨E²⟩²).    (5.92)

Express V4 as a function of e1, e2, β and K.

♣ Q. 5.8.2-8 Determine the value of K for which V4 is maximal, and calculate the corresponding value of V4.

♣ Q. 5.8.2-9 What can one say about the specific heat and V4 when e1 is close to e2?


Chapter 6

Monte Carlo Algorithms based on the density of states

Contents
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . 102
6.2.1 Definition and physical meaning . . . . . . . . . . . . 102
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . 105
6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . 106
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6.1 Some properties of the Wang-Landau algorithm . . . . 108
6.6.2 Wang-Landau and statistical temperature algorithms . 109

6.1 Introduction

When a system undergoes a continuous phase transition, Monte Carlo methods which consist of flipping (or moving) a single spin (or particle) converge more slowly in the critical region than cluster algorithms (when they exist). The origin of this slowing down is associated with real physical phenomena, even if Monte Carlo dynamics can never be considered as a true dynamics. The rapid increase of the relaxation time is related to the existence of fluctuations of large sizes (or the presence of domains) which are difficult to sample by a local dynamics: indeed, cluster algorithms accelerate the convergence in the critical region, but not far away from it.


For a first-order transition, the convergence of Monte Carlo methods based on a local dynamics is weak because, close to the transition, there exist two or more metastable states: the barriers between metastable states increase with the system size, and the relaxation time has an exponential dependence on system size which rapidly exceeds any reasonable computing time. Finally, the system is unable to explore the phase space and is trapped in a metastable region. The replica exchange method provides an efficient way of crossing metastable regions, but this method requires a fine tuning of many parameters (temperatures, exchange frequency between replicas). New approaches have been proposed in the last decade based on the computation of the density of states. These methods are general in the sense that the details of the dynamics are not imposed; only a condition similar to detailed balance is given. The goal of these new methods is to obtain the key quantity, namely the density of states. We first see why knowledge of this quantity is essential for obtaining the thermodynamic quantities.

6.2 Density of states

6.2.1 Definition and physical meaning

For a classical system characterized by a Hamiltonian H, the partition function reads (see Chapter 1)

Z(E) = Σ_α δ(H − E)    (6.1)

where E denotes the energy of the system and the index α runs over all available configurations. The symbol δ denotes the Kronecker symbol when the summation is discrete; an analogous formula can be written for the microcanonical partition function when the energy spectrum is continuous: the sum is replaced with an integral and the Kronecker symbol with a δ distribution. Equation (6.1) means that the microcanonical partition function is a sum over all microstates of total energy E, each state having an identical weight. Introducing the degeneracy of the system for a given energy E, the function g(E) is defined as follows: in the case of a continuous energy spectrum, g(E)dE is the number of states available to the system for a total energy between E and E + dE. The microcanonical partition function is then given as

Z(E) = ∫ du g(u) δ(u − E)    (6.2)
     = g(E).    (6.3)

Therefore, the density of states is nothing else than the microcanonical partition function of the system.

Figure 6.1 – Logarithm of the density of states ln(g(E)) as a function of energy E for a two-dimensional Ising model on a square lattice of linear size L = 32.

6.2.2 Properties

The integral (or the sum) over all energies of the density of states gives the following sum rule:

∫ dE g(E) = 𝒩    (6.4)

where 𝒩 is the total number of configurations (for a lattice model). For an Ising model, this number is 𝒩 = 2^N, where N is the total number of spins of the system. This value is very large and grows exponentially with the system size: from a numerical point of view, one must work with the logarithm of this function, ln(g(E)), to avoid overflow problems. For simple models, one can obtain analytical expressions for small or moderate system sizes, but the combinatorics rapidly becomes a tedious task. Some results can nevertheless be obtained easily: for the Ising model, there are two ground states, corresponding to the fully ordered states, and the first nonzero value of the density of states above the ground state is equal to 2N, which corresponds to the number of possibilities of flipping one spin on a lattice of N sites, in either of the two ground states. Figure 6.1 shows the logarithm of the density of states of the two-dimensional Ising model on a square lattice with a total number of sites equal to 1024.
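For a very small lattice, these counting statements can be verified by brute-force enumeration (an illustrative check of ours, feasible only for a handful of spins, since the number of configurations grows as 2^N):

```python
from itertools import product

def ising_density_of_states(L=3, J=1):
    """Brute-force g(E) for an L x L Ising model with periodic boundaries.
    Feasible only for tiny lattices (2**(L*L) configurations)."""
    N = L * L
    g = {}
    for spins in product((-1, 1), repeat=N):
        E = 0
        for x in range(L):
            for y in range(L):
                s = spins[x * L + y]
                # count each bond once: right and down neighbors (periodic)
                E -= J * s * spins[((x + 1) % L) * L + y]
                E -= J * s * spins[x * L + (y + 1) % L]
        g[E] = g.get(E, 0) + 1
    return g
```

For L = 3 one finds that g(E) sums to 2⁹ = 512, that g = 2 at the ground state energy, and that the first excited level (one flipped spin, ∆E = 8J) has degeneracy 2N = 18.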


The shape of this curve is the consequence of both specific and universal features that we now detail. For all systems where the energy spectrum is discrete, the range of energy has a lower bound, given by the ground state of the system. In the case of simple liquids, the density of states does not have an upper bound, because overlaps between particles interacting through a Lennard-Jones potential are always possible, which gives very large values of the energy; however, the equilibrium probability of having such configurations is weak. The shape of the curve, which displays a maximum, is a characteristic of all simple systems, but its symmetry about the vertical axis is a consequence of the up-down symmetry of the corresponding Hamiltonian. This symmetry is also present in thermodynamic quantities, like the magnetization versus temperature (see Chapter 1).

For Monte Carlo algorithms based on the density of states, one can start from a simple remark. With the Metropolis algorithm, the acceptance probability of a new configuration satisfies detailed balance, namely

Π(i → j) Peq(i) = Π(j → i) Peq(j)

where Peq(j) is the probability of having the configuration j at equilibrium and Π(i → j) is the transition probability of going from the state i towards the state j. One can reexpress detailed balance in the energy variable:

Π(E → E′) exp(−βE) = Π(E′ → E) exp(−βE′).    (6.5)

Let us denote by Hβ(E) the energy histogram associated with the successive visits of the simulation. Consequently, one has

Hβ(E) ∝ g(E) exp(−βE).    (6.6)

Assume now that one imposes the following balance instead:

Π(E → E′) exp(−βE + w(E)) = Π(E′ → E) exp(−βE′ + w(E′)).    (6.7)

The energy histogram then becomes proportional to

Hβ(E) ∝ g(E) exp(−βE + w(E)).    (6.8)

It immediately appears that, when w(E) ∼ βE − ln(g(E)), the energy histogram becomes flat and independent of the temperature. The stochastic process then corresponds to a random walk in energy space. The convergence of this method is related to the fact that the available interval of the walker is bounded: for the Ising model, these bounds exist, and for other models it is possible to restrict the range of the energy interval in a simulation. This very interesting result is at the origin of the multicanonical methods (introduced for the study of first-order transitions). However, it requires knowing the function w(E), namely knowing the function g(E). In the multicanonical method, a guess for w(E) is provided by the density of states obtained (by simulation) on small system sizes. By extrapolating an expression for larger system sizes, the simulation is performed, and by checking the deviations from flatness of the


energy histogram, one can correct the weight function in order to flatten the energy histogram in a second simulation run. This kind of procedure consists in suppressing the free energy barriers which exist in a Metropolis algorithm, and which vanish here through the introduction of an appropriate weight function. The difficulty of the method is to obtain a good extrapolation of the weight function for large system sizes: a bias of the weight function can indeed insert new free energy barriers, and simulations then converge slowly again. For obtaining a simulation method that computes the density of states without knowledge of a good initial guess, one needs to add an ingredient to the multicanonical algorithm. A new method was proposed in 2001 by Wang and Landau [2001a,b], which we detail below.

6.3 Wang-Landau algorithm

The basics of the algorithm are the following:

1. A change of the current configuration is proposed (generally a local modification).

2. One calculates the energy E′ associated with the modified configuration. This configuration is accepted with the Metropolis-like rule Π(E → E′) = Min(1, g(E)/g(E′)); if not, the configuration is rejected and the old configuration is kept (and counted again, as in a usual Metropolis algorithm).

3. The density of states at the retained energy is modified as follows: ln(g(E′)) ← ln(g(E′)) + ln(f).

This iterative scheme is repeated until the energy histogram is sufficiently flat: practically, the original algorithm considers the histogram flat when each of its values is larger than 80% of its mean value. f is called the modification factor. Because the density of states changes continuously in time, it is not a stationary quantity, and its accuracy depends on the statistics accumulated for each value of g(E). Since the density of states increases with time, the asymptotic accuracy of this first stage is given by ln(f), which is not sufficient for calculating the thermodynamic quantities. Wang and Landau therefore completed the algorithm as follows: one resets the energy histogram (but, obviously, not the density of states), and one restarts a new simulation as described above, with a new modification factor equal to √f. Once this second stage is achieved, the density of states is refined with respect to the previous iteration, because the accuracy is now given by ln(f)/2.
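The three steps can be gathered into a short routine. The sketch below is an illustrative implementation of ours (the names are assumptions, and it uses a one-dimensional Ising ring rather than the models discussed in the text, so that the energy levels are few): it runs a single Wang-Landau stage until the 80% flatness criterion is met.

```python
import math
import random

def wang_landau_stage(N=8, lnf=1.0, flatness=0.8, rng=random, max_steps=2_000_000):
    """One Wang-Landau stage for a 1D Ising ring of N spins (J = 1):
    propose single-spin flips, accept E -> E' with probability
    min(1, g(E)/g(E')), then add ln f to ln g at the current energy,
    until every histogram entry exceeds `flatness` times the mean."""
    spins = [rng.choice([-1, 1]) for _ in range(N)]
    energy = -sum(spins[i] * spins[(i + 1) % N] for i in range(N))
    lng, hist = {}, {}
    nlevels = N // 2 + 1                     # the ring has N/2 + 1 energy levels
    for step in range(1, max_steps + 1):
        i = rng.randrange(N)
        # energy change of flipping spin i (only its two ring bonds change)
        dE = 2 * spins[i] * (spins[i - 1] + spins[(i + 1) % N])
        e_new = energy + dE
        # acceptance min(1, g(E)/g(E')) evaluated in logarithmic form
        if rng.random() < math.exp(min(0.0, lng.get(energy, 0.0) - lng.get(e_new, 0.0))):
            spins[i] = -spins[i]
            energy = e_new
        lng[energy] = lng.get(energy, 0.0) + lnf    # update whether accepted or not
        hist[energy] = hist.get(energy, 0) + 1
        if step % 1000 == 0 and len(hist) == nlevels:
            if min(hist.values()) > flatness * sum(hist.values()) / nlevels:
                break
    return lng, hist
```

A full simulation would now reset the histogram, replace f by √f and repeat, keeping ln g across the stages.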

The simulation is then continued, diminishing the modification factor at each stage by the relation

f ← √f.    (6.9)

When ln(f) ∼ 10⁻⁸, the density of states no longer evolves, the dynamics becomes a pure random walk in energy space, and the simulation is then stopped. In order to obtain an accurate density of states, the computer time which is necessary is proportional to (∆E)², where ∆E represents the energy interval: considering that the motion in the energy space is purely diffusive (which is only true for the last iterations of the algorithm, namely for values of the modification factor close to one), the computer time increases as the square of the system size. The computer time of this kind of simulation is then larger than that of a Metropolis algorithm, but the free energy barriers disappear for a first-order transition, which allows for an efficient exploration of phase space. An additional virtue of this kind of simulation is that, if the required accuracy is not reached, the simulation can easily be prolongated by using the previous density of states until a sufficient accuracy is reached. For decreasing the computing time, it is judicious to reduce the energy interval in the simulation, and eventually to perform several simulations for different (overlapping) energy ranges; the total duration of all simulations is then divided by two compared to a unique simulation over the complete energy space. One can then match the densities of states of the neighboring energy ranges. In the second case, however, errors occur in the density of states at the ends of each subinterval, which can forbid a complete rebuilding of the global density of states; if an energy interval belongs to the region of a transition, this matching task is subtle,
even impossible. In any case, the method is very efficient for lattice systems with a discrete energy spectrum; additional problems occur with continuous energy spectra.

6.4 Thermodynamics recovered!

Once the density of states is obtained (up to an additive constant for its logarithm), one can easily derive all thermodynamic quantities from this single function. Contrary to methods based on one or several values of the temperature, where one obtains the thermodynamic quantities at one temperature (or, by using a reweighting method, in a small interval of temperatures), one can here calculate these quantities for the whole range of temperatures corresponding to the energy scale used in the simulation, and the accuracy is much better. For instance, the mean energy is calculated a posteriori by computing the ratio of the following integrals:

⟨E⟩ = ∫ E exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE    (6.10)

The practical calculation of these integrals requires some caution. Indeed, the expression

−βE + ln(g(E))    (6.11)

is a function which admits one or several (close) maxima, and which decreases rapidly toward large negative values when the energy is much smaller or much larger than these maxima. The arguments of the exponentials can thus be very large and lead to overflows. For avoiding these problems, one can, without changing the result, multiply each integrand by exp(−Max(−βE + ln(g(E))))¹: the exponentials then have an argument which is always negative or zero, and the computation proceeds without any overflows (or underflows). The computation of the specific heat is done with the formula

Cv = kB β² (⟨E²⟩ − ⟨E⟩²)    (6.12)

and it is easy to see that the procedure sketched for the computation of the energy can be transposed identically to the computation of the specific heat. One can therefore calculate the thermodynamic quantities for any value of the temperature, and obtain an accurate estimate of the maximum of the specific heat as well as of the associated temperature — a situation quite different from a Monte Carlo simulation in the canonical ensemble.

¹ When there are several maxima, one chooses the one which gives the largest value.
One can then truncate the bounds of each integral where the argument of the exponential becomes smaller than, say, −100: the computation of each integral is then safe. One can also obtain the magnetic thermodynamic quantities with a modest computational effort. Let us note that, once the density of states is obtained, it is possible to run a simulation in which the modification factor is equal to 1 (the density of states does not evolve), so that the simulation corresponds to a random walk in energy space: one then stores a magnetization histogram M(E), as well as the squared magnetization M²(E), and even higher moments Mⁿ(E). From these histograms and the density of states g(E), one can calculate the mean magnetization of the system as a function of temperature by using the formula

⟨M(β)⟩ = ∫ (M(E)/H(E)) exp(−βE + ln(g(E))) dE / ∫ exp(−βE + ln(g(E))) dE    (6.13)

where H(E) denotes the number of visits to the energy E, and one can then obtain the critical exponent associated with the magnetization from a finite size scaling analysis. Similarly, one can calculate the magnetic susceptibility as a function of temperature. A last main advantage of this method is that it gives the free energy of the system for all temperatures, as well as its entropy. In summary, one can write a minimal program for studying a system by calculating the quantities involving the moments of the energy, in order to determine the region of the phase diagram where the transition takes place and the order of that transition.
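The shift by Max(−βE + ln g(E)) takes only a few lines (our own illustration; the function name is an assumption). For each temperature, the same shifted weights give the mean energy of Eq. (6.10) and the specific heat of Eq. (6.12) without overflow:

```python
import math

def canonical_averages(energies, lng, beta):
    """Mean energy and specific heat from ln g(E), shifting all arguments
    by max(-beta*E + ln g(E)) so every exponential stays in range."""
    args = [lg - beta * e for e, lg in zip(energies, lng)]
    m = max(args)
    w = [math.exp(a - m) for a in args]          # safe: every argument <= 0
    Z = sum(w)
    e_mean = sum(e * wi for e, wi in zip(energies, w)) / Z
    e2_mean = sum(e * e * wi for e, wi in zip(energies, w)) / Z
    cv = beta ** 2 * (e2_mean - e_mean ** 2)     # Eq. (6.12) with kB = 1
    return e_mean, cv
```

Scanning β over a grid then reproduces E(T) and Cv(T) over the whole temperature range from a single density of states.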

6.5 Conclusion

To summarize the Monte Carlo simulations based on the density of states simply, one can say that an efficient method for obtaining the thermodynamics of a system consists in performing a simulation in which thermodynamics is not involved: the simulation is focused on the density of states, which is not a thermodynamic quantity but a statistical quantity intrinsic to the system. Secondly, contrary to a simulation in the canonical ensemble, a complete program must be written for calculating all quantities from the beginning; conversely, one can calculate further thermodynamic quantities by performing an additional simulation in which the modification factor is equal to 1. It is also possible to perform Wang-Landau simulations with a varying number of particles, which allows one to obtain the thermodynamic potential associated with the grand canonical ensemble; one can likewise simulate simple liquids and build the coexistence curves with accuracy. Let us note that additional difficulties occur when the energy spectrum is continuous, and some refinements with respect to the original algorithm, correcting in part these drawbacks, have been proposed by several authors. Because the method is very recent, improvements are expected in the near future, and there is an active field bringing modifications to the original method (see e.g. the references Poulain et al. [2006], Zhou and Bhatt [2005]); but it already represents a breakthrough, and it allows studies of systems where standard methods fail to obtain reliable thermodynamic quantities.

6.6 Exercises

6.6.1 Some properties of the Wang-Landau algorithm

♣ Q. 6.6.1-1 The Wang-Landau algorithm allows calculating the density of states g(E) of a system described by a Hamiltonian H. Knowing that the logarithm of the partition function in the microcanonical ensemble is equal to the entropy S(E), where E is the total energy, show simply the relation between g(E) and S(E). Far away from the ground state, the entropy is an extensive quantity: how does the density of states g(E) depend on the particle number?

♣ Q. 6.6.1-2 A Monte Carlo simulation of Wang-Landau type gives the density of states up to a multiplicative constant (Cg(E), where C is a constant). Show that canonical-ensemble quantities, such as the mean energy and the specific heat, do not depend on C.

ˆ ♣ Q. show that the density of states is given by the relation k ln(gk (E)) = i=1 Hi (E) ln(fk ). Show that the first order term is equal to zero.2-1 Why is it necessary to decrease the modification factor f to each iteration? Justify your answer.1-4 Noting that the energy histogram Hk (E) for the k simulation run. If one takes another initial condition. Knowing that fk = fk−1 . infer that the density of states goes to a finite value. 6.6.6. namely when the modification factor is fk . 6. 6.2 Wang-Landau and statistical temperature algorithms The Wang-Landau algorithm is a Monte-Carlo method which allows obtaining the density of states g(E) of a systems in a finite interval of energy. ♣ Q. at a inverse temperature β behaves (far away of a phase transition) as a Gaussian aroung the mean energy E0 .6 Exercises ♣ Q.6.1-5 Noting that Hk (E) = Hk (E) − MinE Hk (E). 6. (6. ♣ Q. 6. 6. the density of states g(E) is generally chosen equal to 1. expand the logarithm at E0 . what is the physical meaning of the slope for a given energy E ? One wants to study the convergence of the Wang-Landau algorithm.6.1-3 Knowing that the canonical equilibrium distribution Pβ (E) ∼ g(E) exp(−βE).2-3 At the beginning of the simulation.6. 109 .6. In a simulation. 6. ♣ Q. explain why k ln(ˆk (E)) = g i=1 ˆ Hi (E) ln(fk ) (6.6. 6. in a plot of ln(g(E)) versus E.2-2 Why is it numerically interesting of working with the logarithm of the density of states? ♣ Q.1-6 Several simulations have shown that δHk = E Hk (E) behave as 1/ ln(fk ).15) is equal to the above equation up to a additive constant.14) ˆ ♣ Q. does it obtain the same density of states at the end of the simulation.6.6. Infer that one obtains a gaussian approximation for the equilibrium distribution.

♣ Q. 6.6.2-4 Wang-Landau dynamics is generally performed by using elementary moves of a single particle, in a manner similar to a Metropolis algorithm. If the dynamics involves moves of several particles, why does the efficiency decrease rapidly with the system size?

For models with a continuous energy spectrum, the Wang-Landau algorithm is generally less efficient and sometimes does not converge towards the equilibrium density of states. We are going to determine the origin of this problem. In order to calculate the density of states, it is necessary to discretize the energy spectrum; let us denote by DE the bin size of the histogram of the density of states.

♣ Q. 6.6.2-5 If the mean energy change of an elementary move is dE_c << DE, and if the initial configuration has an energy located within the bin of mean energy E, what can one say about the acceptance probability of the new configuration? Why does this imply a certain bias in the convergence of the Wang-Landau method?

To correct this drawback of the Wang-Landau method, Kim and his collaborators proposed to update the effective temperature T(E) = dE/dS, where S(E) is the microcanonical entropy, rather than the density of states itself.

♣ Q. 6.6.2-6 By discretizing the derivative of the entropy with respect to the energy (with k_B = 1),

dS/dE |_{E=E_i} = 1/T_i = beta_i = (S_{i+1} - S_{i-1})/(2 DE), (6.16)

show that, for an elementary move whose new configuration has an energy E_i, the two statistical temperatures must be changed according to the formula

beta_{i+-1} -> beta_{i+-1} -+ df, (6.17)

where df = ln(f)/(2 DE) and f is the modification factor. Show that T_{i+-1} -> alpha_{i+-1} T_{i+-1}, where alpha_{i+-1} is a parameter to be determined. How do T_{i+1} and T_{i-1} evolve along the simulation? What can one conclude about the variation of the temperature with the energy?

♣ Q. 6.6.2-7 One then calculates the entropy of each configuration by using a linear interpolation of the temperature between two successive bins i and i+1. Show that the entropy is then given by the equation

S(E) = S(E_0) + Sum_{j=1}^{i} Int_{E_{j-1}}^{E_j} dE' / (T_{j-1} + lambda_{j-1}(E' - E_{j-1})) + Int_{E_i}^{E} dE' / (T_i + lambda_i(E' - E_i)), (6.18)

where lambda_i is a parameter to be determined as a function of DE and of the temperatures T_i and T_{i+1}.

♣ Q. 6.6.2-8 What can one say about the entropy difference between two configurations belonging to the same statistical-temperature bin and separated by an energy dE_c? What can one conclude by comparing this algorithm to the original Wang-Landau algorithm?
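The flat-histogram iteration examined in these exercises can be tried out on a toy system whose density of states is known exactly: N independent spins, for which g(k) is a binomial coefficient (k being the number of up spins). This is a minimal sketch, not the algorithm of the original papers; the spin count, flatness criterion and final modification factor are arbitrary choices made for illustration.

```python
import math
import random

def wang_landau(n_spins=10, flatness=0.8, ln_f_final=1e-4, seed=1):
    """Wang-Landau estimate of ln g(k) for N independent spins,
    where k is the number of up spins and g(k) = C(N, k) exactly.
    The algorithm determines ln g only up to an additive constant."""
    rng = random.Random(seed)
    levels = n_spins + 1
    ln_g = [0.0] * levels                     # initial guess g = 1
    spins = [rng.choice([0, 1]) for _ in range(n_spins)]
    k = sum(spins)
    ln_f = 1.0                                # initial ln(modification factor)
    while ln_f > ln_f_final:
        hist = [0] * levels
        steps = 0
        while True:
            i = rng.randrange(n_spins)        # flip one random spin
            k_new = k + (1 if spins[i] == 0 else -1)
            # accept with probability min(1, g(k)/g(k_new))
            if math.log(rng.random()) < ln_g[k] - ln_g[k_new]:
                spins[i] = 1 - spins[i]
                k = k_new
            ln_g[k] += ln_f                   # update the visited level
            hist[k] += 1
            steps += 1
            if steps % 1000 == 0:             # flatness check
                if min(hist) > flatness * sum(hist) / levels:
                    break
        ln_f /= 2.0                           # f_k = sqrt(f_{k-1})
    return ln_g

if __name__ == "__main__":
    ln_g = wang_landau()
    ln_g = [x - ln_g[0] for x in ln_g]        # fix the additive constant
    exact = [math.log(math.comb(10, k)) for k in range(11)]
    print(max(abs(a - b) for a, b in zip(ln_g, exact)))
```

The halving of ln f at each stage implements the rule f_k = sqrt(f_{k-1}) of Q. 6.6.1-6, and the additive normalization illustrates the multiplicative constant C of Q. 6.6.1-2.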


Chapter 7

Monte Carlo simulation in different ensembles

Contents
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . 113
7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 113
7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . 114
7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . 116
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . 116
7.3 Liquid-gas transition and coexistence curve . . . . . . . 119
7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . 121
7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . 121
7.5 Monte Carlo method with multiple Markov chains . . . . . 123
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . 127

7.1 Isothermal-isobaric ensemble

7.1.1 Introduction

The isothermal-isobaric ensemble (P, V, T) is widely used because it corresponds to the large majority of experimental situations, in which the pressure and the temperature are imposed on the system under study. Moreover, when the interaction potential between particles is not pairwise additive, the pressure cannot be obtained from the virial equation, and a simulation in the N, P, T ensemble then allows one to obtain the equation of state of the system directly.

A second use consists in performing simulations at zero pressure: starting from a high density and gradually increasing the temperature, one obtains a quick estimate of the liquid-gas coexistence curve.

7.1.2 Principle

The system considered consists of a reservoir of ideal gas containing M - N particles occupying a volume V0 - V, and of N interacting particles occupying a volume V, at temperature T. The partition function of the total system is given by

Q(N, M, V, V0, T) = 1/(Lambda^{3M} N!(M-N)!) Int_0^{L'} dr^{M-N} Int_0^{L} dr^{N} exp(-beta U(r^N)), (7.1)

where L^3 = V and L'^3 = V0 - V. We introduce the following notation for the scaled coordinates: r_i = s_i L. After this change of variables, the partition function reads

Q(N, M, V, V0, T) = V^N (V0 - V)^{M-N} / (Lambda^{3M} N!(M-N)!) Int ds^N exp(-beta U(s^N, L)), (7.2)

where the notation U(s^N, L) recalls that the potential generally has no simple expression under the change of variables r^N -> s^N.

The system that one wishes to study by simulation is the subsystem made of the N particles, independently of the configurations of the ideal-gas particles placed in the second box. The partition function of the system of N particles is then given by

Q'(N, V, T) = lim_{V0 -> infinity, (M-N)/V0 -> rho} Q(N, M, V, V0, T) / Qid(M - N, V0, T), (7.5)

where Qid(M - N, V0, T) is the partition function of an ideal gas of M - N particles occupying a volume V0. Let us take the thermodynamic limit of the reservoir (V0 -> infinity, M -> infinity with (M - N)/V0 -> rho). One has

(V0 - V)^{M-N} = V0^{M-N} (1 - V/V0)^{M-N} -> V0^{M-N} exp(-rho V). (7.4)

Noting that the density rho is that of the ideal gas, one has rho = beta P. Using Eq. (7.4), one obtains

Q'(N, V, T) = (V^N exp(-beta P V) / (Lambda^{3N} N!)) Int ds^N exp(-beta U(s^N, L)). (7.6)

If the system is allowed to change its volume V, the partition function associated with the subsystem of particles interacting through the potential U(r^N) is obtained by rewriting Eq. (7.6) as a sum of partition functions over the different volumes V. This function is then equal to

Q(N, P, T) = beta P Int_0^{+infinity} dV Q'(N, V, T) (7.7)
           = beta P Int_0^{+infinity} dV (V^N exp(-beta P V)/(Lambda^{3N} N!)) Int ds^N exp(-beta U(s^N, L)). (7.8)

The probability P(V, s^N) that the N particles occupy a volume V with the particles located at the points s^N is given by

P(V, s^N) ~ V^N exp(-beta P V - beta U(s^N, L)) ~ exp(-beta P V - beta U(s^N, L) + N ln(V)). (7.9)

Using the detailed balance condition, one can then write the transition probabilities for a volume change according to the Metropolis rule. For a volume change of the simulation box, the acceptance rate of the Metropolis algorithm is

Pi(o -> n) = min(1, exp(-beta [U(s^N, V_n) - U(s^N, V_o) + P(V_n - V_o)] + N ln(V_n/V_o))). (7.11)

Unless the potential has a simple scaling form (U(r^N) = L^N U(s^N)), the volume change is a costly step of the numerical simulation; it is therefore attempted, on average, once each time every particle has been moved once. For reasons of efficiency of the algorithm, one prefers to perform the volume change on a logarithmic scale rather than on a linear one. In this case, the acceptance rate of the modified Metropolis algorithm becomes

Pi(o -> n) = min(1, exp(-beta [U(s^N, V_n) - U(s^N, V_o) + P(V_n - V_o)] + (N + 1) ln(V_n/V_o))). (7.13)

The simulation algorithm is then constructed as follows:

1. Draw a random integer between 0 and the number of particles N.

2. If this number is equal to 0, attempt a volume change: draw a random number between 0 and 1 and compute the volume of the new box according to the formula

v_n = v_o exp(ln(v_max)(rand - 0.5)). (7.14)

Then rescale the center of mass of each molecule (for point particles, the coordinates themselves) according to the relation

r^N_n = r^N_o (v_n/v_o)^{1/3}. (7.15)

Compute the energy of the new configuration (if the potential does not have a simple scaling form, this step requires a very large amount of computing time) and perform the Metropolis test of Eq. (7.13). If the test succeeds, accept the new box; otherwise keep the old box (and do not forget to count the old configuration again).

3. If the number is different from zero, select the particle corresponding to the random number drawn and attempt a displacement of this particle inside the box of volume V.

7.2 Grand canonical ensemble

7.2.1 Introduction

The grand canonical ensemble (mu, V, T) is an ensemble adapted to a system in contact with a particle reservoir at constant temperature. This ensemble is appropriate for describing the physical situations corresponding to studies of adsorption isotherms (fluids in zeolites or porous media) and, to some extent, the study of coexistence curves of pure systems.

7.2.2 Principle

The system considered here consists of a reservoir of ideal gas containing M - N particles occupying a volume V0 - V, and of N particles occupying a volume V. Instead of exchanging volume as in the previous section, the two subsystems now exchange particles. The partition function is

Q(M, N, V, V0, T) = 1/(Lambda^{3M} N!(M-N)!) Int_0^{L'} dr^{M-N} Int_0^{L} dr^{N} exp(-beta U(r^N)). (7.16)
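Before continuing, the N, P, T volume move of the algorithm above can be sketched and checked in a limit where the answer is exact. For non-interacting particles (U = 0) the stationary distribution of the volume is P(V) ~ V^N exp(-beta P V), whose mean is exactly (N+1)/(beta P). The function name and parameter values below are illustrative, not from the notes.

```python
import math
import random

def sample_volume_npt(n_part=10, beta_p=1.0, ln_vmax=0.5,
                      n_steps=200_000, v0=5.0, seed=2):
    """Metropolis sampling of the box volume in the N,P,T ensemble for an
    ideal gas (U = 0), using the logarithmic volume move and the acceptance
    rule with the (N+1) ln(Vn/Vo) term.  Returns the mean sampled volume."""
    rng = random.Random(seed)
    v = v0
    acc_sum = 0.0
    for _ in range(n_steps):
        vn = v * math.exp(ln_vmax * (rng.random() - 0.5))
        # ln acceptance: -beta*P*(Vn - Vo) + (N + 1) ln(Vn/Vo); U = 0 here
        ln_acc = -beta_p * (vn - v) + (n_part + 1) * math.log(vn / v)
        if math.log(rng.random()) < ln_acc:
            v = vn
        acc_sum += v
    return acc_sum / n_steps

if __name__ == "__main__":
    print(sample_volume_npt(), (10 + 1) / 1.0)  # sampled vs exact mean volume
```

Replacing the (N+1) factor by N in this logarithmic move would bias the sampled mean volume, which is precisely the point of the modified acceptance rule.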

One performs on the coordinates r^N and r^{M-N} the same change of variables as in the previous section, which gives

Q(M, N, V, V0, T) = V^N (V0 - V)^{M-N}/(Lambda^{3M} N!(M-N)!) Int ds^N exp(-beta U(s^N)). (7.17)

The partition function associated with the simulation-box subsystem can also be expressed as the ratio of the partition function of the total system (Eq. (7.16)) to the ideal-gas contribution Qid(M, V0 - V, T) of M particles in the volume V0 - V:

Q'(N, V, T) = lim_{M, (V0-V) -> infinity} Q(M, N, V, V0, T)/Qid(M, V0 - V, T). (7.18)

Taking the thermodynamic limit of the ideal-gas reservoir (M -> infinity, (V0 - V) -> infinity with M/(V0 - V) -> rho), one obtains

M!/((V0 - V)^N (M - N)!) -> rho^N. (7.19)

For an ideal gas, the chemical potential is given by the relation

mu = k_B T ln(Lambda^3 rho). (7.20)

One can then rewrite the partition function of the subsystem defined by the volume V, in the thermodynamic limit, as

Q'(N, V, T) = exp(beta mu N) V^N/(Lambda^{3N} N!) Int ds^N exp(-beta U(s^N)). (7.21)

It only remains to sum over all configurations with a number N of particles running from 0 to infinity:

Q(mu, V, T) = Sum_{N=0}^{infinity} exp(beta mu N) V^N/(Lambda^{3N} N!) Int ds^N exp(-beta U(s^N)). (7.22)

The probability P(mu, V, s^N) that N particles occupy the volume V with the particles located at the points s^N is given by

P(mu, V, s^N) ~ exp(beta mu N) V^N/(Lambda^{3N} N!) exp(-beta U(s^N)). (7.23)

Using the detailed balance condition, one can then write the transition probabilities for a change of the number of particles in the simulation box according to the Metropolis rule.

The acceptance rates of the Metropolis algorithm are the following:

1. To insert a particle into the system,

Pi(N -> N + 1) = min(1, V/(Lambda^3 (N + 1)) exp(beta(mu - U(N + 1) + U(N)))). (7.25)

2. To remove a particle from the system,

Pi(N -> N - 1) = min(1, Lambda^3 N/V exp(-beta(mu + U(N - 1) - U(N)))). (7.26)

The simulation algorithm is constructed as follows:

1. Draw a random integer between 1 and N1 = Nt + n_exc, where Nt is the number of particles in the box at time t (the ratio n_exc/<N> sets the mean frequency of the exchanges with the reservoir).

2. If this number is between 1 and Nt, select the particle corresponding to the random number drawn and attempt a displacement of that particle according to the standard Metropolis algorithm.

3. If this number is larger than Nt, draw a random number between 0 and 1.

   - If this number is smaller than 0.5, attempt the removal of a particle: compute the energy of the remaining Nt - 1 particles and perform the Metropolis test of Eq. (7.26). If the new configuration is accepted, remove the particle by writing r[removed] = r[Nt] and Nt = Nt - 1; otherwise keep the old configuration.

   - If this number is larger than 0.5, attempt the insertion of a particle: compute the energy with Nt + 1 particles and perform the Metropolis test of Eq. (7.25). If the new configuration is accepted, add the particle by writing Nt = Nt + 1 and r[Nt] = r[inserted]; otherwise keep the old configuration.

The limitation of this method concerns very dense media, for which the insertion probability becomes very small.
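The insertion and removal rules (7.25)-(7.26) can be sketched in the same checkable limit U = 0, where the number of particles in the box follows a Poisson distribution of mean exp(beta*mu) V / Lambda^3. All parameter values below are arbitrary illustrations.

```python
import math
import random

def sample_number_gcmc(beta_mu=0.0, volume=20.0, lam3=1.0,
                       n_steps=200_000, seed=3):
    """Grand canonical Monte Carlo for an ideal gas (U = 0): only
    insertions and removals, accepted with the rules (7.25)-(7.26).
    Returns the mean particle number."""
    rng = random.Random(seed)
    z = math.exp(beta_mu)            # activity exp(beta*mu)
    n = 0
    total = 0
    for _ in range(n_steps):
        if rng.random() < 0.5:       # attempt an insertion
            if rng.random() < z * volume / (lam3 * (n + 1)):
                n += 1
        else:                        # attempt a removal
            if n > 0 and rng.random() < lam3 * n / (z * volume):
                n -= 1
        total += n
    return total / n_steps

if __name__ == "__main__":
    print(sample_number_gcmc(), 20.0)  # sampled vs exact mean N
```

The exact mean here is z V / Lambda^3 = 20; a sign error in either acceptance rule shifts this average visibly, which makes the ideal-gas limit a convenient unit test for a grand canonical code.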

7.3 Liquid-gas transition and coexistence curve

While for lattice models the study of the phase diagram is often limited to the vicinity of the critical points, for continuous systems the determination of the phase diagram concerns a much wider region than the critical region(s). The reason is that, even for a simple liquid, the liquid-gas coexistence curve displays an asymmetry between the liquid region and the gas region, reflecting a non-universal character tied to the system under study, whereas for a lattice model the particle-hole symmetry leads to a temperature-density coexistence curve that is perfectly symmetric on either side of the critical density.

For a liquid, below the liquid-gas transition temperature there appears a region of the phase diagram where liquid and gas can coexist; the boundary of this region defines what is called the coexistence curve. The condition for the coexistence of two phases is that the pressures, the temperatures and the chemical potentials must be equal. A natural idea would be to use a mu, P, T ensemble. Unfortunately such an ensemble does not exist, because an ensemble cannot be defined by intensive parameters only: to obtain a well-defined ensemble, at least one extensive variable is needed, either the volume or the number of particles.

The method introduced by Panagiotopoulos in 1987 allows one to study the equilibrium of two phases, because it suppresses the interfacial energy that exists when two phases coexist inside the same box. Such an interface implies an energy barrier that must be crossed to go from one phase to the other, and requires computing times that grow exponentially with the interfacial free energy.

Figure 7.1 – Liquid-gas coexistence curve (temperature-density diagram) of the Lennard-Jones model. The critical density is close to 0.3, to be compared with the lattice-gas model, which gives 0.5, and the curve shows an asymmetry below the critical point.

7.4 Gibbs ensemble

7.4.1 Principle

Consider a system made of N particles occupying a volume V = V1 + V2, in contact with a thermal reservoir at temperature T. The two subsystems can exchange both particles and volume, with the constraint that the total volume is conserved. The partition function of this system reads

Q(M, V, T) = Sum_{N1=0}^{N} 1/(Lambda^{3N} N1!(N - N1)!) Int_0^V dV1 V1^{N1} (V - V1)^{N-N1} Int ds1^{N1} exp(-beta U(s1^{N1})) Int ds2^{N-N1} exp(-beta U(s2^{N-N1})). (7.31)

Comparing with the system used for the derivation of the grand canonical ensemble method, one sees that the particles occupying the volume V2 are not ideal-gas particles here, but particles interacting through the same potential as the particles located in the other box. One easily deduces the probability P(N1, V1, s1^{N1}, s2^{N-N1}) of finding a configuration with N1 particles in box 1 at the positions s1^{N1} and the remaining N2 = N - N1 particles in box 2 at the positions s2^{N2}:

P(N1, V1, s1^{N1}, s2^{N-N1}) ~ (V1^{N1} (V - V1)^{N-N1} / (N1!(N - N1)!)) exp(-beta (U(s1^{N1}) + U(s2^{N-N1}))). (7.32)

7.4.2 Acceptance rules

There are three types of moves in this algorithm: individual displacements of particles within each box, volume changes with the constraint that the total volume is conserved, and changes of box for a particle.

For the displacements inside a simulation box, the Metropolis rule is the one defined for a set of particles in the canonical ensemble, in a manner analogous to what was done in the N, P, T ensemble. For the volume change, the choice of a linear scaling of the volume (V_n = V_o + DV * rand) leads to an acceptance rate equal to

pi(o -> n) = min(1, (V_{n,1}^{N1} (V - V_{n,1})^{N-N1} exp(-beta U(s_n^N))) / (V_{o,1}^{N1} (V - V_{o,1})^{N-N1} exp(-beta U(s_o^N)))). (7.33)

It is generally more advantageous to use a logarithmic scaling of the ratio of the volumes, ln(V1/V2). To determine the new acceptance rate, one rewrites the partition function with this change of variable:

Figure 7.2 – Representation of a Gibbs-ensemble simulation: the global system is in contact with an isothermal reservoir, and the two subsystems (of volumes V1 and V - V1, containing N1 and N - N1 particles) can exchange both particles and volume.

Q(M, V, T) = Sum_{N1=0}^{N} Int_0^V d ln(V1/(V - V1)) ((V - V1)V1/V) (V1^{N1} (V - V1)^{N-N1} / (Lambda^{3N} N1!(N - N1)!)) Int ds1^{N1} exp(-beta U(s1^{N1})) Int ds2^{N-N1} exp(-beta U(s2^{N-N1})). (7.34)

One then deduces the new probability associated with the logarithmic move in the ratio of the box volumes,

P(N1, V1, s1^{N1}, s2^{N-N1}) ~ (V1^{N1+1} (V - V1)^{N-N1+1} / (V N1!(N - N1)!)) exp(-beta (U(s1^{N1}) + U(s2^{N-N1}))), (7.35)

which leads to an acceptance rate of the form

pi(o -> n) = min(1, (V_{n,1}^{N1+1} (V - V_{n,1})^{N-N1+1} exp(-beta U(s_n^N))) / (V_{o,1}^{N1+1} (V - V_{o,1})^{N-N1+1} exp(-beta U(s_o^N)))). (7.36)
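The equilibrium weight (7.32) can be checked directly in the non-interacting limit U = 0: at fixed box volumes, N1 is then binomially distributed with mean N V1/V. The sketch below implements only particle-transfer moves, with acceptance ratios read off from (7.32); all parameter values are illustrative.

```python
import random

def sample_n1_gibbs(n_total=40, v1=3.0, v2=7.0, n_steps=200_000, seed=4):
    """Gibbs-ensemble particle transfers for an ideal gas (U = 0), at fixed
    box volumes V1, V2.  The acceptance ratios follow from the equilibrium
    weight V1^N1 V2^(N-N1) / (N1! (N-N1)!) of Eq. (7.32).
    Returns the mean number of particles in box 1."""
    rng = random.Random(seed)
    n1 = n_total // 2
    total = 0
    for _ in range(n_steps):
        if rng.random() < 0.5:       # try to move a particle from box 1 to 2
            if n1 > 0:
                acc = n1 * v2 / ((n_total - n1 + 1) * v1)
                if rng.random() < acc:
                    n1 -= 1
        else:                        # try to move a particle from box 2 to 1
            n2 = n_total - n1
            if n2 > 0:
                acc = n2 * v1 / ((n1 + 1) * v2)
                if rng.random() < acc:
                    n1 += 1
        total += n1
    return total / n_steps

if __name__ == "__main__":
    print(sample_n1_gibbs(), 40 * 3.0 / 10.0)  # sampled vs binomial mean
```

With interactions switched on, the same ratio simply acquires the Boltzmann factor of the energy change, which is the particle-transfer rule derived in the text.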

The last possible move is the change of box for a particle. To move a particle from box 1 to box 2, in a manner similar to the Metropolis algorithm, the ratio of the Boltzmann weights is given by

N(n)/N(o) = [N1!(N - N1)! V1^{N1-1} (V - V1)^{N-N1+1}] / [(N1 - 1)!(N - N1 + 1)! V1^{N1} (V - V1)^{N-N1}] exp(-beta (U(s_n^N) - U(s_o^N)))
          = [N1 (V - V1)] / [(N - N1 + 1) V1] exp(-beta (U(s_n^N) - U(s_o^N))), (7.37)

which leads to the acceptance rate

Pi(o -> n) = min(1, [N1 (V - V1)] / [(N - N1 + 1) V1] exp(-beta (U(s_n^N) - U(s_o^N)))). (7.38)

The algorithm corresponding to the Gibbs-ensemble method is thus a generalization of the (mu, V, T)-ensemble algorithm, in which the insertion and removal of particles are replaced by exchanges between the boxes, and in which the volume change that we saw in the (N, P, T) algorithm is replaced here by a simultaneous change of the volumes of the two boxes.

7.5 Monte Carlo method with multiple Markov chains

Free-energy barriers prevent the system from exploring the whole phase space: a Metropolis-type algorithm is strongly slowed down by the presence of high barriers. To compute a coexistence curve, or to obtain an equilibrated system below the critical temperature, one must carry out a succession of simulations for successive values of the temperature (and possibly, simultaneously, of the chemical potential in the case of a liquid). The "parallel tempering" method seeks to exploit the following fact: while it becomes harder and harder to equilibrate a system as the temperature is lowered, one can exploit the ease with which the system changes region of phase space at high temperature by "propagating" configurations towards lower temperatures.

For simplicity, let us consider the case of a system in which only the temperature is changed (the method is easily generalized to other intensive quantities: chemical potential, change of the Hamiltonian, ...). The partition function of the system at temperature Ti is given by the relation

Qi = Sum_alpha exp(-beta_i U(alpha)), (7.39)

where alpha runs over the set of configurations accessible to the system and beta_i is the inverse temperature. If one considers the direct product of all these systems, each evolving at a different temperature, the partition function of this ensemble can be written as

Q_total = Prod_{i=1}^{N} Qi, (7.40)

where N denotes the total number of temperatures used in the simulations. A great advantage can be gained by performing the simulations at the different temperatures simultaneously. In addition to the Markov chains present in each simulation box, the "parallel tempering" method introduces a further Markov chain which, respecting the conditions of detailed balance, proposes to exchange the complete configurations of two consecutive boxes, chosen at random among all of them. The detailed balance condition of this second chain can be expressed as

Pi((i, beta_i), (j, beta_j) -> (j, beta_i), (i, beta_j)) / Pi((j, beta_i), (i, beta_j) -> (i, beta_i), (j, beta_j)) = [exp(-beta_j U(i)) exp(-beta_i U(j))] / [exp(-beta_i U(i)) exp(-beta_j U(j))], (7.41)

where U(i) and U(j) denote the energies of the particles of boxes i and j.

The simulation algorithm consists of two types of move:

- Displacement of a particle in each box (which can be performed in parallel) according to the Metropolis algorithm; the acceptance probability for each simulation step inside a box is given by

min(1, exp(-beta_i (U_i(n) - U_i(o)))), (7.42)

where the indices n and o refer to the new and the old configuration, respectively.

- Exchange of the configurations of two consecutive boxes; the acceptance probability is then given by

min(1, exp((beta_i - beta_j)(U_i - U_j))). (7.43)

The proportion of these exchanges remains to be fixed in order to optimize the simulation of the whole system.

To illustrate this concept very simply, let us study the following toy model: a particle in a one-dimensional space, subject to the external potential

V(x) = +infinity        if x < -2,
V(x) = 1 + sin(2 pi x)  if -2 <= x <= -1.25,
V(x) = 2 + 2 sin(2 pi x) if -1.25 <= x <= -0.25,
V(x) = 3 + 3 sin(2 pi x) if -0.25 <= x <= 0.75,
V(x) = 4 + 4 sin(2 pi x) if 0.75 <= x <= 1.75,
V(x) = 5 + 5 sin(2 pi x) if 1.75 <= x <= 2.0. (7.44)

Figure 7.3 – One-dimensional potential to which the particle is subjected.

The equilibrium probability distribution Peq(x, beta) of this system is simply expressed as

Peq(x, beta) = exp(-beta V(x)) / Int_{-2}^{2} dx' exp(-beta V(x')). (7.45)

Figure 7.4 shows these equilibrium probabilities. Note that, since the minima of the potential energy are identical, the maxima of the equilibrium probabilities are all equal. When the temperature is lowered, the probability between two potential-energy minima becomes extremely small. Figure 7.4 also shows the result of independent Metropolis-type Monte Carlo simulations, while Figure 7.5 shows the result of a parallel tempering simulation.
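The behaviour illustrated by Figures 7.4 and 7.5 can be reproduced with a few lines of code. The sketch below is a miniature with arbitrary choices (step size, sweep count, swap frequency, and the helper used to label the wells are all illustrative): three replicas of the particle in the potential (7.44) are simulated at T = 0.1, 0.3 and 2, with configuration swaps accepted according to Eq. (7.43). Without the swap moves, the T = 0.1 replica stays trapped in its initial well; with them, low-temperature configurations visit several wells.

```python
import math
import random

def potential(x):
    """Piecewise potential of Eq. (7.44); every local minimum has V = 0."""
    pieces = [(-2.0, -1.25, 1.0), (-1.25, -0.25, 2.0),
              (-0.25, 0.75, 3.0), (0.75, 1.75, 4.0), (1.75, 2.0, 5.0)]
    for a, b, k in pieces:
        if a <= x <= b:
            return k * (1.0 + math.sin(2.0 * math.pi * x))
    return float("inf")              # hard walls outside [-2, 2]

def parallel_tempering(n_sweeps=20_000, seed=5):
    """Three replicas at T = 0.1, 0.3, 2 with Metropolis moves and swaps
    accepted with min(1, exp((beta_i - beta_j)(U_i - U_j))).
    Returns the set of wells visited by the coldest replica."""
    rng = random.Random(seed)
    betas = [10.0, 1.0 / 0.3, 0.5]
    xs = [-1.25, -1.25, -1.25]       # all replicas start in the first well
    wells_cold = set()
    for _ in range(n_sweeps):
        for i, beta in enumerate(betas):
            xn = xs[i] + rng.uniform(-0.2, 0.2)
            dv = potential(xn) - potential(xs[i])
            if rng.random() < math.exp(-beta * dv):   # exp(-inf) = 0: reject
                xs[i] = xn
        i = rng.randrange(2)         # swap attempt between adjacent replicas
        du = potential(xs[i]) - potential(xs[i + 1])
        if rng.random() < math.exp((betas[i] - betas[i + 1]) * du):
            xs[i], xs[i + 1] = xs[i + 1], xs[i]
        wells_cold.add(round(xs[0] + 0.25))   # minima sit at x = n - 0.25
    return wells_cold

if __name__ == "__main__":
    print(sorted(parallel_tempering()))
```

Dropping the swap block reproduces the plain-Metropolis behaviour of Figure 7.4: the coldest replica then reports a single well for any reasonable run length.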

Figure 7.4 – Equilibrium probabilities (full curves) and probabilities obtained by a Metropolis simulation for the particle subjected to the one-dimensional potential, for the three temperatures T = 0.1, 0.3 and 2.

Figure 7.5 – Same as Figure 7.4, except that the simulation is a parallel tempering one.

With a Metropolis method, at T = 0.1 the particle is unable to cross a single barrier, and at T = 0.3 the system can only explore the first two potential wells (the particle being located in the first well at the initial time); the system is equilibrated only at the highest temperature. With the parallel tempering method, the system becomes equilibrated at every temperature, in particular at very low temperature, while the total number of configurations generated by the simulation remains unchanged.

For systems of N particles, the parallel tempering method requires a number of boxes larger than the one used in this very simple example. One can even show that, for a given temperature interval, the number of boxes needed to obtain an equilibrated system increases with the system size.

7.6 Conclusion

We have seen that the Monte Carlo method can be used in a very wide variety of situations. This quick overview of a few variants cannot represent the very vast literature on the subject; I hope, however, that it provides elements with which to build, depending on the system under study, a more specifically adapted method.

Chapter 8

Out of equilibrium systems

Contents
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 130
8.2 Random sequential addition . . . . . . . . . . . . . . . 131
8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Inelastic hard sphere model . . . . . . . . . . . . . . . 137
8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 137
8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.3.4 Some properties . . . . . . . . . . . . . . . . . . . . . 141
8.4 Exclusion models . . . . . . . . . . . . . . . . . . . . . 141
8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 141
8.4.2 Random walk on a ring . . . . . . . . . . . . . . . . . . 142
8.4.3 Model with open boundaries . . . . . . . . . . . . . . . 143
8.4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5 Kinetic constraint models . . . . . . . . . . . . . . . . 146
8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 146
8.5.2 Facilitated spin models . . . . . . . . . . . . . . . . . 147
8.6 Avalanche model . . . . . . . . . . . . . . . . . . . . . 149
8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . 149
8.6.2 Definition . . . . . . . . . . . . . . . . . . . . . . . 149
8.6.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . 152
8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . 154

8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.1 Haff law . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.2 Domain growth and kinetically constrained model . . . . . 156
8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . 158
8.8.4 Random sequential addition . . . . . . . . . . . . . . . 159

8.1 Introduction

In these last two chapters we address the vast domain of out-of-equilibrium statistical mechanics. Situations in which a system evolves without relaxing towards an equilibrium state are by far the most frequent in nature. The reasons are diverse: (i) a system is not necessarily isolated, in contact with a thermostat, or surrounded by one or several particle reservoirs, which can prevent an evolution towards an equilibrium state; (ii) there may exist relaxation times much larger than the microscopic times (adsorption of proteins at interfaces, polymers, structural glasses, spin glasses); (iii) the phenomenon may possess an intrinsic irreversibility (tied to dissipative phenomena) that requires a fundamentally out-of-equilibrium modeling (granular media, growth phenomena, fragmentation, avalanches, earthquakes) Hinrichsen [2000], Talbot et al. [2000], Turcotte [1999].

Three main classes of systems can be distinguished as regards the evolution in an out-of-equilibrium regime: in a first case, the systems are the seat of very long relaxation times (much larger than the characteristic observation time); in a second case, the systems are subject to one or several external constraints and evolve towards a stationary state; in a last case, the systems evolve towards a stationary state far from equilibrium, the microscopic dynamics being Hamiltonian or not.

This numerical approach is all the more justified in that, in the absence of a formulation analogous to thermodynamics, the theoretical methods of statistical mechanics are less powerful. The use of numerical simulation is thus both an essential means for understanding the physics of the models and a support for the development of new theoretical tools. We illustrate, on four model systems, the relevance of a statistical approach to out-of-equilibrium phenomena and, in particular, the interest of a numerical simulation study of these models.

8.2 Random sequential addition

8.2.1 Motivation

When a solution containing macromolecules (proteins, colloids) is placed in contact with a solid surface, one observes adsorption onto the surface in the form of a monolayer of macromolecules. This adsorption process is slow: it requires durations ranging from a few minutes up to several weeks. Once the adsorption is completed, if the solution containing the macromolecules is replaced by a buffer solution, little or no change is observed in the adsorbed monolayer. Once adsorbed, the particles do not desorb during the time of the experiment and do not diffuse on the surface. When the experiment is repeated with solutions of different concentrations, the density of particles adsorbed on the surface is almost always the same (Talbot et al. [2000]).

Given the time scales and the microscopic complexity of the molecules, it is not possible to perform a Molecular Dynamics simulation to describe the process. A first step consists in building a model on a "mesoscopic" scale. Given the size of the macromolecules (from 10 nm to a few micrometers), the interactions between particles can be considered to have a range much smaller than the particle size. These particles are also hardly deformable and cannot easily interpenetrate, so that, in a first approximation, the interparticle potential can be taken as that of hard objects (for instance, hard spheres for colloids). The interaction potential between a particle and the surface can be taken as a strongly negative contact potential: when the particle is in the immediate vicinity of the surface, the interaction with the surface is strongly attractive over a short distance, and strongly repulsive when the particle penetrates the surface. If the particle comes into contact with the solid surface, it "sticks" to the surface definitively; otherwise, it is repelled and "returns" to the solution. Finally, given the Brownian motion of the particles in solution, one may consider that the particles arrive at random on the surface.

8.2.2 Definition

The modeling just described leads naturally to the following model, called Random Sequential Adsorption (or Addition) (RSA). In a space of dimension d, hard particles are placed sequentially at random positions, with the following condition: if the newly introduced particle does not overlap any previously deposited particle, it is placed definitively at that position; if it comes into contact with an already adsorbed particle, it is removed and a new trial is performed.

Such a model contains the two ingredients that have been considered essential for characterizing the adsorption kinetics: the steric hindrance between particles and the irreversibility of the phenomenon. As we said in chapter 2, the dynamics of the Monte Carlo method is that of a Markov process converging toward an equilibrium state. The definition of the RSA model is also that of a Markov process (random drawing of particles), but without any condition of convergence toward an equilibrium state.

If one considers spherical particles, in one dimension segments are deposited on a line, in two dimensions disks on a plane, and in three dimensions balls in space. Since the model is physically motivated by problems of adsorption on a surface, we consider here hard disks (which represent the projection of a sphere onto the plane).

8.2.3 Algorithm

The principle of the basic algorithm does not depend on the dimension of the problem, so we can easily describe the algorithm of the numerical simulation adapted to this model. Given the diameter σ of the particles (the simulation box is chosen as a square of unit side), one reserves one array per space dimension in order to store each position coordinate of the particles adsorbed during the process. These arrays contain no particle position at the initial time (the surface is assumed to be initially empty). An elementary step of the kinetics is the following:

1. The position of a particle in the simulation box is drawn at random from a uniform distribution:
   x0 = rand
   y0 = rand                                        (8.1)
   The time is incremented by 1: t = t + 1.

2. One tests whether the particle so chosen overlaps any particle already present: one must check that (r0 − ri)² > σ², with i running from 1 to the index of the last particle adsorbed at time t, denoted na. As soon as this test fails, the loop over the indices is stopped and one returns to step 1 for a new trial. If the test is satisfied for all particles, the index is incremented (na = na + 1) and the position of the last particle is added to the arrays of position coordinates:
   x[na] = x0
   y[na] = y0.                                      (8.2)

Since the number of particles that can be placed on the surface is not known in advance (it is necessarily bounded by 4/(πσ²)), one fixes at the start of the simulation the maximum number of attempts, or in other words the maximum time of the simulation.

The algorithm presented above is correct, but very slow. When the filling approaches its maximum, nearly all trial particles are rejected; moreover, at each time step na tests are needed to decide whether the new particle can be inserted, so much time is spent performing useless tests. Knowing that a hard disk can be surrounded by at most six other disks, it is easy to see that one should avoid overlap tests with particles that are, for the most part, very far from the particle that has just been drawn.

Figure 8.1 – Disks adsorbed in RSA kinetics: the particle at the center of the figure is the one that has just been drawn. The cell corresponding to its center is shown in red; the 24 cells to be considered for the non-overlap test are shown in orange.
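The elementary deposition step of the basic algorithm can be sketched as follows. This is a minimal illustrative sketch, not the text's own code: the function name and random seed are arbitrary, and boundary effects at the edges of the unit square are ignored.

```python
import math, random

def rsa_naive(sigma, max_trials, seed=0):
    """Naive RSA of hard disks in a unit square: O(na) overlap tests per trial.
    Returns the list of accepted disk centers."""
    rng = random.Random(seed)
    pos = []                      # the x[na], y[na] arrays of the text, merged into one list
    for t in range(max_trials):   # fixed maximum number of attempts
        x0, y0 = rng.random(), rng.random()
        # overlap test against every disk already adsorbed
        if all((x0 - x) ** 2 + (y0 - y) ** 2 > sigma ** 2 for (x, y) in pos):
            pos.append((x0, y0))  # deposited definitively: no desorption, no diffusion
    return pos
```

The accepted density is necessarily below the bound 4/(πσ²) mentioned above, and the cost per trial grows with the number of adsorbed disks, which motivates the cell method described next.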

One solution consists in building a list of cells. If one chooses a grid whose elementary square is strictly smaller than the diameter of a disk (more precisely, the cell diagonal must be smaller than σ, so that two non-overlapping centers cannot share a cell), then each square can contain at most one and only one disk center. One therefore creates an additional array, whose two indices in X and Y label the elementary squares, and which is initialized to 0; as soon as a cell (i, j) is occupied, cell[i][j] holds the index of the disk whose center lies in it. The algorithm is then modified in the following manner:

1. The first step (random drawing of a position) is unchanged.

2. One determines the indices of the elementary cell in which the trial particle falls,
   i0 = Int(x0/σ)                                   (8.3)
   j0 = Int(y0/σ).                                  (8.4)
   If this cell is full (cell[i0][j0] ≠ 0), one returns to step 1.

3. If the cell is empty, one must test the 24 cells surrounding the central cell and see whether they are empty or occupied (see Fig. 8.1). If a cell is occupied, a test must be performed to check for an overlap between the new particle and the one already adsorbed there. If there is an overlap, one returns to step 1; otherwise, one continues with the next cell. If all the tested cells are either empty or occupied by a particle that does not overlap the particle one is trying to insert, the new particle is added as specified in the previous algorithm, and the cell of the newly added particle is filled: cell[i0][j0] = na.

With this new algorithm, the number of tests is at most equal to 24, whatever the number of adsorbed particles: the algorithm now scales with the number of particles, and no longer with its square. Comparable methods (neighbor lists) are used in Molecular Dynamics in order to limit the number of particles to be tested in the force calculation.
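The cell-based deposition step can be sketched as follows. In this illustrative sketch the cell side is taken close to σ/2 (so the cell diagonal is smaller than σ and a cell holds at most one center), which makes the 5×5 block of 24 surrounding cells sufficient; the function name and details are choices of the sketch.

```python
import random

def rsa_cells(sigma, max_trials, seed=0):
    """RSA of hard disks in the unit square using a cell grid.
    The cell side 1/m lies between sigma/2 and sigma/sqrt(2): a cell holds at
    most one disk center, and all possible overlaps lie within the 24 cells
    surrounding the central one."""
    rng = random.Random(seed)
    m = int(2.0 / sigma)                  # cells per direction, side = 1/m
    cell = [[0] * m for _ in range(m)]    # 0 = empty, k = index of disk k
    pos = [None]                          # pos[k]: center of disk k (1-based)
    for _ in range(max_trials):
        x0, y0 = rng.random(), rng.random()
        i0, j0 = int(x0 * m), int(y0 * m)
        if cell[i0][j0]:
            continue                      # occupied cell: certain overlap
        ok = True
        for i in range(max(i0 - 2, 0), min(i0 + 3, m)):
            for j in range(max(j0 - 2, 0), min(j0 + 3, m)):
                k = cell[i][j]
                if k and (x0 - pos[k][0]) ** 2 + (y0 - pos[k][1]) ** 2 <= sigma ** 2:
                    ok = False
        if ok:
            pos.append((x0, y0))
            cell[i0][j0] = len(pos) - 1
    return pos[1:]
```

The number of distance tests per trial is now bounded by a constant, in agreement with the discussion above.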

8.2.4 Results

In one dimension, a system of particles at equilibrium has no phase transition at finite temperature, so that solving such models does not bring very useful information for the study of the same system in higher dimension (where, generally, there are phase transitions, but no analytical solutions). On the contrary, in the case of RSA (as well as of other out-of-equilibrium models), the solution of the one-dimensional model brings much useful information for understanding the same model studied in higher dimension.

Qualitatively, the adsorption phenomenon proceeds in several stages: at first, the inserted particles are adsorbed without rejection and the density increases rapidly. As the surface fills up, some of the particles one attempts to insert are rejected. When the number of attempts becomes of the order of a few times the number of adsorbed particles, a slow regime appears, in which the spaces still available for insertion are isolated on the surface and represent a very small fraction of the initial surface. The approach to the saturated state is then slow and is given by an algebraic law.

In one dimension, the model, which is also called the car parking model, can be solved analytically (see Appendix C). We shall content ourselves here with exploiting these results in order to deduce generic behaviors in higher dimension. Taking the length of the segments equal to unity, one obtains the following expression for the filling fraction of the line, when the latter is empty at the initial time:

ρ(t) = ∫_0^t du exp( −2 ∫_0^u dv (1 − e^{−v})/v ).        (8.5)

When t → ∞, the filling fraction tends to ρ(∞) ≃ 0.7476… Near saturation, the asymptotic expansion (t → ∞) of equation (8.5) gives

ρ(∞) − ρ(t) ≃ e^{−2γ}/t,                                   (8.6)

where γ is Euler's constant. In higher dimension, one can show that for spherical particles the saturation is approached as ρ(∞) − ρ(t) ≃ 1/t^{1/d}, where d is the dimension of space; this gives an exponent equal to 1/2 for d = 2.

In this dynamics, contrary to an equilibrium situation, rearrangements of particles are forbidden: since the adsorbed particles are fixed, there remain empty spaces on the line, whose individual length is necessarily smaller than the diameter of a particle. One thus reaches the following apparently paradoxical situation: the stochastic dynamics of the filling is Markovian and stationary, that is, the memory in the selection of successive particles is zero, and yet the system keeps the memory of the initial configuration until the end of the process. It must also be noted that the maximum density at saturation depends strongly on the initial configuration. This is again a notable difference with an equilibrium process, which never depends on the thermodynamic path followed, but only on the imposed thermodynamic conditions.

Among the differences that can also be noted with the same system of particles, but considered at equilibrium, are the particle fluctuations of a finite-size system.
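The jamming coverage of equation (8.5) can be recovered numerically. The sketch below is one possible quadrature (trapezoidal rule with an analytic tail correction based on the asymptotic behavior e^{−2γ}/u² of the integrand, i.e. on equation (8.6)); the step counts and truncation point are arbitrary choices of this sketch.

```python
import math

def renyi_density(u_max=500.0, n=200000):
    """Trapezoidal evaluation of Eq. (8.5) for t -> infinity:
    rho(t) = int_0^t du exp(-2 int_0^u dv (1 - exp(-v))/v)."""
    gamma = 0.5772156649015329            # Euler's constant
    h = u_max / n
    inner = 0.0                           # running inner integral
    rho = 0.0                             # running outer integral
    prev_f = 1.0                          # (1 - exp(-v))/v -> 1 as v -> 0
    prev_g = 1.0                          # exp(-2*inner) at u = 0
    for i in range(1, n + 1):
        v = i * h
        f = -math.expm1(-v) / v           # (1 - exp(-v))/v, stable for small v
        inner += 0.5 * (prev_f + f) * h
        g = math.exp(-2.0 * inner)
        rho += 0.5 * (prev_g + g) * h
        prev_f, prev_g = f, g
    # analytic tail: the integrand behaves as exp(-2*gamma)/u**2 for large u
    return rho + math.exp(-2.0 * gamma) / u_max
```

The value obtained is close to 0.7476, the saturation coverage quoted above.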

For a thermodynamic system, these fluctuations are accessible in the following way: one computes the mean values of N and of N² in the grand-canonical ensemble, and the particle fluctuations are easily obtained from the moments of the particle distribution. In an RSA-type simulation, one can compute these averages by proceeding in the same manner: performing a series of simulations with identical system size and initial configuration, one records a histogram of the density at a given time of the evolution of the process; by changing realizations, one then computes the mean values of N and N². The brackets here correspond to an average over different realizations of the process. At equilibrium, such averages could be computed in the same way, but given the uniqueness of the equilibrium state, a statistical average over different realizations is equivalent to an equilibrium average over a single realization; this is obviously the reason why one does not generally average over realizations at equilibrium (the required computing time is considerably higher, since each realization must be re-equilibrated before the computation of the average is started). The magnitude of these fluctuations at a given density is different for an RSA process and for a system at equilibrium: in one dimension, these fluctuations can be computed analytically in both cases, and their amplitude is smaller in RSA; in higher dimension, a numerical calculation is then necessary.

As we saw in chapter 4, the pair correlations between particles, i.e. the correlation function g(r), can be defined from considerations of probabilistic geometry (for hard particles), and not necessarily from an underlying Gibbs distribution. These quantities can therefore be defined in the absence of a Gibbs average for the system. This implies that it is possible, for an RSA process, to compute the correlation function g(r) at a given time of the evolution of the dynamics by considering a large number of realizations.

As a significant result concerning this function, one can show that at the saturation density the correlation function diverges logarithmically at contact:

g(r) ∼ −ln( (r − σ)/σ ),  r → σ.        (8.7)

Such a behavior differs strongly from an equilibrium situation, where the correlation function remains finite at contact at a density corresponding to the RSA saturation density. This phenomenon persists in higher dimension; it is, in other words, an experimental means of distinguishing configurations generated by an irreversible process from configurations coming from equilibrium.

8.3 Avalanche model

8.3.1 Introduction

At the end of the 1980s, Bak, Tang and Wiesenfeld proposed a model to explain the behavior observed in sand piles. If one drops sand particles above a given point of a surface, with a flux that must be very small, one sees a pile form progressively, whose shape is roughly that of a cone and whose angle cannot exceed a limiting value. Once this state is reached, one observes, if sand keeps being added, avalanches of particles occurring at different times and with very different sizes: the particles rush down the slope abruptly when the slope difference becomes too large. This phenomenon does not seem, a priori, to depend on the microscopic nature of the interactions between particles.

8.3.2 Definition

Consider a two-dimensional square lattice (N × N). By analogy with the sand pile, one denotes by z(i, j) the integer associated with site (i, j); this number will be considered as the local slope of the sand pile. The elementary steps of the model are the following:

1. At time t, a site of the lattice, denoted (i0, j0), is chosen at random, and the value of this site is increased by one unit:
   z(i0, j0) → z(i0, j0) + 1
   t → t + 1.                                  (8.8)

2. If the value of z(i0, j0) reaches the threshold value 4, the slope difference has become too large and the particles are redistributed onto the nearest-neighbor sites with the following rules:
   z(i0, j0) → z(i0, j0) − 4
   z(i0 ± 1, j0) → z(i0 ± 1, j0) + 1
   z(i0, j0 ± 1) → z(i0, j0 ± 1) + 1
   t → t + 1.                                  (8.9)

3. The redistribution onto the neighboring sites may in turn bring some of these sites above the critical threshold. If a site finds itself in a critical state, it triggers step 2 in its turn, always with the same rules.

For the sites located at the boundary of the box, one considers that particles are lost, and the update rules are the following. For i between 1 and N − 2,
z(0, i) → z(0, i) − 4
z(1, i) → z(1, i) + 1
z(0, i ± 1) → z(0, i ± 1) + 1
t → t + 1;                                  (8.10)
in this case, a single particle leaves the box. For the site located at a corner of the box,
z(0, 0) → z(0, 0) − 4
z(1, 0) → z(1, 0) + 1
z(0, 1) → z(0, 1) + 1
t → t + 1;                                  (8.11)
in this case, two particles leave the box. One can also consider variants of these rules concerning the boundaries of the box, in which the triggering thresholds are modified for the sites located at the border. For i between 1 and N − 2, one then has a threshold of 3:
z(0, i) → z(0, i) − 3
z(1, i) → z(1, i) + 1
z(0, i ± 1) → z(0, i ± 1) + 1
t → t + 1;                                  (8.12)
for the site located at a corner of the box, the threshold is then 2, and one has
z(0, 0) → z(0, 0) − 2
z(1, 0) → z(1, 0) + 1
z(0, 1) → z(0, 1) + 1
t → t + 1.                                  (8.13)
One can, in a similar manner, write the boundary conditions on the four sides of the lattice.

This cycle is continued until no site of the lattice exceeds the critical threshold¹; one then restarts at step 1.

¹ Since the sites that become critical at a given time t are not nearest neighbors of one another, the order in which the updates are performed does not affect the final state of the cycle. In other words, this model is Abelian.
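The cycle of equations (8.8)-(8.9) can be sketched as follows. This minimal sketch uses the simplest open-boundary convention, i.e. rules (8.10)-(8.11) (threshold 4 everywhere; grains pushed beyond the edge are lost) rather than the modified boundary thresholds (8.12)-(8.13); the function name is illustrative.

```python
import random

def btw_step(z, rng):
    """One step of the sandpile dynamics on an N x N lattice z:
    add a grain at a random site, then relax by toppling every site with
    z >= 4 (open boundaries: grains pushed outside the box are lost).
    Returns the avalanche size (number of topplings)."""
    n = len(z)
    i0, j0 = rng.randrange(n), rng.randrange(n)
    z[i0][j0] += 1
    size = 0
    unstable = [(i0, j0)]
    while unstable:
        i, j = unstable.pop()
        if z[i][j] < 4:
            continue                      # already relaxed by an earlier toppling
        z[i][j] -= 4
        size += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                z[ii][jj] += 1
                if z[ii][jj] >= 4:
                    unstable.append((ii, jj))
    return size
```

Recording the returned sizes over many steps gives the avalanche-size distribution discussed in the results below.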

This algorithm can easily be adapted to different lattices and to different space dimensions. The quantities that can be recorded during the simulation are the following: the distribution of the number of particles involved in an avalanche, that is, the number of lattice sites that found themselves in a critical situation (and had to be updated) before the addition of particles (step 1) was restarted; the distribution of the number of particles lost by the lattice (through the edges) during the process; and, finally, the periodicity of the avalanches in the course of time.

8.3.3 Results

The essential results of this model with very simple rules are the following:

• The number of avalanches involving s particles follows a power law,
N(s) ∼ s^{−τ},  with τ ≃ 1.03 in two dimensions.           (8.14)

• The avalanches appear at random, and their frequency of occurrence depends on their size s according to a power law,
D(s) ∼ s^{−2}.                                             (8.15)

• If the time associated with the number of elementary steps in an avalanche is T, the number of avalanches with T steps is given by
N ∼ T^{−1}.                                                (8.16)

It is now interesting to compare these results with those encountered in the study of systems at equilibrium. One of the first unusual characteristics of this model is that it spontaneously evolves toward a state that can be qualified as critical: there is a strong analogy with a usual critical point of a thermodynamic system, both on the spatial level (distribution of the avalanches) and on the temporal level. However, whereas in a thermodynamic system reaching a critical point requires the adjustment of one or several control parameters (the temperature, the pressure, …), there is no control parameter here: indeed, the flux of deposited particles can be made as small as one wants, and the distribution of the avalanches always remains unchanged. The authors of this model, who identified these characteristics, called this state Self-Organized Criticality (SOC) (Bak et al. [1987], Turcotte [1999]). The specificity of this model has fascinated a community of scientists extending well beyond statistical physics. Many variants of this model have been studied, and it appeared that the behaviors observed in the original model are recovered in a large number of physical situations.

Let us cite as an example the field of seismology. Earthquakes result from the accumulation of stresses in the Earth's mantle; these stresses result from the very slow displacement of the tectonic plates, on time scales exceeding hundreds of years. Earthquakes appear in regions where stresses accumulate. A universal behavior of earthquakes is the following: whatever the region of the Earth's surface where they occur, the distribution of their intensity obeys a power law with an exponent α ≃ 2. For comparison, the SOC model predicts an exponent α = 1, which shows that this description of earthquakes is qualitatively correct, but that more refined models are necessary for a more quantitative agreement. Statistically, it remains very difficult to predict the date and intensity of a future earthquake.

Forest fires are a second example for which the analogy with the SOC model is interesting. In the models used to describe this phenomenon, there are two time scales: the first is associated with the random deposition of trees on a given lattice; the second is the inverse of the frequency with which one attempts to start a forest fire at a random site². During a long period there is growth of the forest, which can then be partially or almost totally destroyed by a forest fire; the fire propagates by contaminating neighboring sites, if these are occupied by a tree. The distribution of burned forest areas is experimentally described by a power law with α ≃ 1.3–1.4, close to the result predicted by SOC.

We end this section by returning to the initial objectives of the model. Are the avalanches observed in real sand piles well described by this type of model? The answer is rather negative. In most of the experimental setups used, the frequency of occurrence of the avalanches is not random, but rather periodic, and the size distribution is not a power law either. These disagreements are linked to inertial and cohesive effects, which cannot in general be neglected. A system that most probably respects the conditions of SOC is an assembly of rice grains, for which a power law is indeed observed³.

² I recall, of course, that the experimental realization, without invitation by the competent authorities, is reprehensible both morally and penally.
³ I leave you the responsibility of setting up an experiment in your kitchen to check the truth of these results, but I cannot take charge of the resulting damages.

8.4 Inelastic hard sphere model

8.4.1 Introduction

Granular media (powders, sands, …) are characterized by the following properties: the size of the particles is much larger than the atomic scales (at least several hundred micrometers); the interactions between the particles act over distances much smaller than the dimensions of the particles; and, given the mass of the particles, the thermal agitation energy is negligible compared with the gravitational energy and with the energy injected into the system (generally in the form of vibrations). In the absence of a continuous supply of energy, a granular medium very rapidly transforms the kinetic energy it receives into heat, through the collisions between particles, during which part of the kinetic energy is converted into local heating at the contact. At the macroscopic level (which is that of the observation), the system therefore appears dissipative.

Given the large separation between the time scales, and between the microscopic and macroscopic lengths, one may consider the collision time negligible compared with the flight time of a particle: the collisions between particles can be assumed to be instantaneous. Since the interactions are very short-ranged and the particles are hardly deformable (for collisions that are "not too violent"), the interaction potential can be assumed to be a hard-core potential.

8.4.2 Definition

The system of inelastic hard spheres is thus defined as an ensemble of spherical particles interacting through an infinitely hard repulsive potential. Momentum is of course conserved in a collision:

v1 + v2 = v1' + v2'.                                (8.17)

During the collision, the normal component of the relative velocity of the contact point undergoes the following change:

(v1' − v2')·n̂ = −e (v1 − v2)·n̂,                    (8.18)

where e is called the coefficient of normal restitution and n̂ is the unit vector along the line joining the centers of the two colliding particles,

n̂ = (r1 − r2)/|r1 − r2|.                            (8.20)

The value of this coefficient lies between 0 and 1, the latter case corresponding to a system of perfectly elastic hard spheres. Placing ourselves in the case where the tangential component of the contact velocity is conserved (which corresponds to a coefficient of tangential restitution equal to 1), and using the above relations, one obtains the following collision rules:

v1' = v1 + (1 + e)/2 [(v2 − v1)·n̂] n̂
v2' = v2 − (1 + e)/2 [(v2 − v1)·n̂] n̂.              (8.19)
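The collision rule (8.19) can be sketched as follows (equal masses; the function name and the use of plain lists are choices of this sketch).

```python
import math

def collide(v1, v2, r1, r2, e):
    """Inelastic hard-sphere collision, Eq. (8.19): the normal component of
    the relative velocity is reversed and multiplied by e; the tangential
    component and the total momentum are conserved (equal masses)."""
    n = [a - b for a, b in zip(r1, r2)]                      # r1 - r2
    norm = math.sqrt(sum(c * c for c in n))
    n = [c / norm for c in n]                                # unit vector of Eq. (8.20)
    dvn = sum((a - b) * c for a, b, c in zip(v1, v2, n))     # (v1 - v2) . n
    f = 0.5 * (1.0 + e) * dvn
    v1p = [a - f * c for a, c in zip(v1, n)]
    v2p = [b + f * c for b, c in zip(v2, n)]
    return v1p, v2p
```

One checks directly that (v1' − v2')·n̂ = −e (v1 − v2)·n̂ and that v1' + v2' = v1 + v2, as required by equations (8.17) and (8.18).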

One thus obtains a model particularly well suited to a study by Molecular Dynamics simulation. We saw in chapter 3 the principle of Molecular Dynamics for (elastic) hard spheres; the simulation algorithm for inelastic hard spheres is basically the same. A first difference appears in the writing of the collision rules, which must of course obey equation (8.19). A second difference appears during the simulation, because the system steadily loses energy: since the particles lose energy in the collisions, an external energy input must be supplied to the system if one wishes to keep the energy constant.

8.4.3 Results

This property of energy loss has several consequences. On the one hand, even if one starts with a uniform distribution of particles in space, the system spontaneously evolves at long times toward a situation where the particles gather into clusters: the existence of this energy loss is at the origin of the aggregation of the particles. The local density of these clusters is much larger than the initial uniform density; correspondingly, regions of space appear in which the particle density becomes very small.

On the other hand, when the particle density increases, a phenomenon occurs which rapidly blocks the simulation: inelastic collapse. One may indeed find oneself in the situation where three (practically aligned) particles become the seat of successive collisions. In this cluster of three particles, the time between collisions decreases in the course of time until it tends to zero: asymptotically, one witnesses a "sticking" of two particles⁴.

The effect that can appear in a simulation for a particle "sandwiched" between two neighbors can be understood by considering the elementary phenomenon of the bouncing of a single inelastic ball, subjected to gravity, on a plane. Let h be the height from which the ball is dropped. At the first contact with the plane, the elapsed time is t1 = √(2h/g) and the velocity just before the collision is √(2gh); just after the first contact with the plane, the velocity becomes e√(2gh). At the second contact, the elapsed time is

t2 = t1 + 2 e t1,                                   (8.21)

and at the n-th contact one has

tn = t1 ( 1 + 2 Σ_{i=1}^{n−1} e^i ).                (8.22)

⁴ It is not a true sticking, since the simulation time has stopped before.

e e Pour comprendre simplement ce r´gime. d`s qu’une sph`re se e e e e e trouve dans la situation de collisions multiples entre deux partenaires.4 Inelastic hard sphere model Ainsi.8. le coefficient de restitution d´pend en fait de la vitesse de collision e et n’est constant que pour des vitesses relatives au moment du choc qui restent macroscopiques. pour un nombre infini de collisions. e En l’absence d’injection d’´nergie. la simulation des sph`res in´lastiques ne permet d’atteindre e e e cet ´tat d’´nergie nulle. celle-ci devient inf´rieure a e e ` la vitesse d’agitation thermique. si ceux-ci conservent une agitation suffisante. on e e 143 . mais finie dans la r´alit´. ce mod`le ne peut d´crire que les ph´nom`nes e e e e e prenant place dans un intervalle de temps limit´. le syst`me ne peut que se refroidir. il est n´cessaire e e e de tenir compte des conditions aux limites proches de l’exp´rience: pour fournir e de l’´nergie de mani`re permanente au syst`me. e Pour la simulation. Il existe un e e r´gime d’´chelle avant la formation d’agr´gats dans lequel le syst`me reste globalee e e e ment homog`ne et dont la distribution des vitesses s’exprime par une loi d’´chelle. e En conclusion. A cause des collisions entre les particules et le mur qui vibre. En utilisant l’´quations (8. et cela n´cessite un nombre infini de collisions. Quand les vitesses tendent vers z´ro. Au del` de l’aspect gˆnant pour la simulation. e Quand la vitesse de la particule devient tr`s petite. le mod`le des sph`res in´lastiques est un bon outil pour la mode e e ´lisation des milieux granulaires.24) Un temps fini s’est ´coul´ jusqu’` la situation finale o` la bille est au repos sur e e a u le plan. le coefficient de restitution e tend vers 1.et les collisions entre particules redeviennent ´lastiques. 
Thus, even though one expects inelastic spheres to end up seeing their kinetic energy decrease to a value equal to zero (in the absence of an external source that would bring in energy to compensate for the dissipation associated with the collisions), the simulation of inelastic spheres does not make it possible to reach this zero-energy state, because it is stopped well before by the inelastic collapse.

8.4.4 Some properties

In the absence of energy input, the system can only cool. There exists a scaling regime, before the formation of aggregates, in which the system remains globally homogeneous and in which the velocity distribution is expressed by a scaling law.

To understand this regime simply, one considers the kinetic-energy loss resulting from a collision between two particles. Using the collision rules, one obtains for the loss of kinetic energy

ΔE = −(1 − α²)/4 m ((vj − vi)·n̂)²    (8.25)

One sees immediately that if α = 1, the case of elastic spheres, the kinetic energy is conserved, as we saw in Chapter 3. Assuming that the medium remains homogeneous, the mean collision frequency is of order ⟨Δv⟩/l, where l is the mean free path, and the mean energy loss per collision is ΔE = −ε⟨(Δv)²⟩/2, with ε = 1 − α². This gives the evolution equation for the granular temperature

dT/dt = −ε T^{3/2}    (8.26)

and hence the following cooling law:

T(t) ∝ 1/(1 + A t)²    (8.27)

This property is known as Haff's law (1983).

In the case where the granular medium receives energy in a continuous manner, the system reaches a stationary regime in which the statistical properties depart from those of equilibrium. Among the properties differing from those of equilibrium: in the case of a mixture of two species of granular particles (for example, spheres of different sizes), the granular temperature of each species is different; there is therefore no equipartition of energy as there is at equilibrium. This implies, among other things, that the granular temperature, defined as the mean of the square of the velocity, differs from the temperature associated with the width of the velocity distribution. In particular, the velocity distribution is never Gaussian: it has been shown that the tail of the velocity distribution does not in general decay in a Gaussian manner, and that its asymptotic form is intimately related to the way energy is injected into the system.

8.5 Exclusion models

8.5.1 Introduction

While equilibrium statistical mechanics has the Gibbs-Boltzmann distribution at its disposal to characterize the statistical distribution of the configurations of a system at equilibrium, no general method exists for constructing the distribution of the states of a system in a stationary regime.
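A quick numerical sketch (illustrative only; the values of ε and T0 are arbitrary) shows that integrating the cooling equation dT/dt = −εT^{3/2} indeed reproduces an algebraic decay of the form T(t) = T0/(1 + At)², with A = ε√T0/2:

```python
# Explicit Euler integration of the granular cooling equation dT/dt = -eps * T**1.5.
eps, T0 = 0.1, 1.0            # hypothetical dissipation parameter and initial temperature
A = eps * T0**0.5 / 2
dt, t_end = 1e-4, 10.0
T, t = T0, 0.0
while t < t_end - 1e-12:
    T -= dt * eps * T**1.5
    t += dt
haff = T0 / (1 + A * t_end)**2   # algebraic (Haff-type) cooling law
print(T, haff)
```

At large times T(t) ∼ t^{−2}, in agreement with the exercise of section 8.8.1.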
Exclusion models are examples of simple models that have been introduced over the last fifteen years.

Many theoretical works have been carried out with these models, and they lend themselves rather easily to numerical simulations, which makes it possible to test the various theoretical approaches. They can serve as simple models to represent car traffic along a road and, in particular, to describe traffic jams as well as the flux fluctuations that appear spontaneously when the density of cars increases. In out-of-equilibrium phenomena, the specific features of the stochastic dynamics enter in a more or less important way in the observed behavior.

A large number of variants of these models exist, and we limit ourselves to presenting the simplest examples in order to bring out their most significant properties. To allow analytical treatments, the one-dimensional versions of these models have been the most widely studied.

One considers a regular one-dimensional lattice consisting of N sites, on which n particles are placed. Each particle has a probability p of hopping to the adjacent site on its right if that site is empty, and a probability 1 − p of hopping to the site on its left if that site is empty. For these models, at least three different methods are available for realizing this dynamics.
• Parallel update: at the same instant, all the rules are applied to the particles; one also uses the term synchronous update. This type of dynamics generally induces the strongest correlations between the particles. For the ASEP model, to avoid two particles, one coming from the left and the other from the right, choosing the same site at the same time, the parallel dynamics is carried out first on the sublattice of even sites and then on the sublattice of odd sites. The corresponding master equation is an equation that is discrete in time.

• Asynchronous update: the most commonly used mechanism consists in choosing, at time t, a particle at random and uniformly, and applying the evolution rules of the model to it. The corresponding master equation is continuous in time.

• There are also situations where the update is sequential and ordered: one can consider in turn either the sites of the lattice or the particles. The order can be chosen from left to right, or the reverse. The corresponding master equation is again an equation with discrete time.

Variants of this model correspond to different situations at the boundary sites 1 and N.

8.5.2 Random walk on a ring

Stationarity and detailed balance

In the case where periodic boundary conditions are imposed, the walker that arrives at site N has a probability p of going to site 1. It is easy to see that such a system has a stationary state in which the probability of being on a site is the same whatever the site, and equal to Pst(i) = 1/N. One thus has

Π(i → i + 1) Pst(i) = p/N    (8.28)
Π(i + 1 → i) Pst(i + 1) = (1 − p)/N    (8.29)

Except in the case where the probability of jumping to the right or to the left is identical, that is to say p = 1/2, the detailed balance condition is not satisfied, whereas the stationarity condition is. This is not a surprise, but it is an illustration of the fact that the stationarity conditions of the master equation are much more general than that of detailed balance.

Totally asymmetric exclusion process

To be able to go quite far in the analytical treatment, and already to illustrate part of the phenomenology of this model, one can choose to forbid a particle from going to the left; in this case, p = 1. Consider P particles on a lattice of N sites, with the condition P < N. With a continuous-time algorithm, a particle at site i has a probability dt of moving to the right during the time interval dt if that site is empty; otherwise it stays on the same site.

It is possible to write an evolution equation of the system for occupation variables τi(t): such a variable is equal to 1 if a particle is present on site i at time t, and 0 if the site is empty. If site i is empty, it can fill up if a particle is present on site i − 1 at time t; if site i is occupied, it can empty if the particle present at time t can go to site i + 1, which must then be empty. Thus the state of the variable τi(t + dt) is the same as τi(t) if none of the bonds to the right or to the left of site i has changed:

τi(t + dt) = τi(t)                                       with probability 1 − 2dt (no jump)
τi(t + dt) = τi(t) + τi−1(t)(1 − τi(t))                  with probability dt (jump coming from the left)
τi(t + dt) = τi(t) − τi(t)(1 − τi+1(t)) = τi(t)τi+1(t)   with probability dt (jump to the right)    (8.30)
The latter are generally used in simulations of equilibrium statistical mechanics.
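As an illustration, a random sequential update of the TASEP on a ring can be sketched in a few lines (a minimal, assumed implementation; the lattice size, filling and number of steps are arbitrary). The hard-core exclusion makes the dynamics conserve the number of particles, and the stationary density is uniform, ⟨τi⟩ = P/N.

```python
# Minimal sketch of the TASEP on a ring with a random sequential (asynchronous) update.
import random

random.seed(1)
N, P, steps = 20, 8, 20000      # arbitrary sizes for the example
tau = [1] * P + [0] * (N - P)
random.shuffle(tau)             # random initial configuration
for _ in range(steps):
    i = random.randrange(N)     # choose a site uniformly at random
    if tau[i] == 1 and tau[(i + 1) % N] == 0:
        tau[i], tau[(i + 1) % N] = 0, 1   # hop to the right only if the target is empty

print(sum(tau))   # the particle number is conserved: prints 8
```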

8.5.3 Model with open boundaries

By placing open boundaries, the system will be able to exchange particles with particle "reservoirs".
Taking the average over the history of the dynamics, that is to say from time 0 to time t, and by a reasoning similar to the one just made for the evolution of the variable τi(t), one obtains the following evolution equation:

d⟨τi(t)⟩/dt = −⟨τi(t)(1 − τi+1(t))⟩ + ⟨τi−1(t)(1 − τi(t))⟩    (8.31)

One sees that the evolution of the state of a site depends on the evolution of the pair of the two neighboring sites. Similarly, the equation for a pair of adjacent sites depends on the state of the sites located upstream and downstream of the pair considered; one has

d⟨τi(t)τi+1(t)⟩/dt = −⟨τi(t)τi+1(t)(1 − τi+2(t))⟩ + ⟨τi−1(t)τi+1(t)(1 − τi(t))⟩    (8.32)

One thus obtains a hierarchy of equations involving clusters of sites of larger and larger size, whose resolution is not possible in general. A simple solution exists, however, in the case of the stationary state, since one can show that all configurations have an equal weight, given by

Pst = P!(N − P)!/N!    (8.33)

which corresponds to the inverse of the number of ways of placing P particles on N sites. The averages in the stationary state give

⟨τi⟩ = P/N    (8.34)
⟨τi τj⟩ = P(P − 1)/(N(N − 1))    (8.35)
⟨τi τj τk⟩ = P(P − 1)(P − 2)/(N(N − 1)(N − 2))    (8.36)

Evolution equations can be written in the general case, but for the sake of simplification we shall limit ourselves to the totally asymmetric case. The evolution equations of the occupation variables obtained in the case of a periodic system are modified for sites 1 and N in the following way: site 1 can, if it is empty at time t, receive a particle coming from the reservoir with a probability (or a probability density, for a continuous-time dynamics) α and, if it is occupied, lose its particle, either because it goes back into the reservoir with probability γ or because it has jumped onto the site on its right. Similarly, if site N is occupied, there is a probability β that the particle be ejected from the lattice, and if site N is empty, there is a probability δ that a particle be injected onto site N.
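The equal-weight stationary averages can be verified by brute-force enumeration for a small system (an illustrative check; N = 6 and P = 3 are arbitrary): giving the C(N, P) configurations equal weight reproduces ⟨τi τj⟩ = P(P − 1)/(N(N − 1)).

```python
# Enumerate all configurations of P particles on N sites with equal weights
# and compare the pair average with P(P-1)/(N(N-1)).
from itertools import combinations
from fractions import Fraction

N, P = 6, 3
configs = list(combinations(range(N), P))          # each configuration = set of occupied sites
pair = Fraction(sum(1 for c in configs if 0 in c and 1 in c), len(configs))
print(pair, Fraction(P * (P - 1), N * (N - 1)))    # 1/5 1/5
```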

One then obtains

d⟨τ1(t)⟩/dt = α(1 − ⟨τ1(t)⟩) − γ⟨τ1(t)⟩ − ⟨τ1(t)(1 − τ2(t))⟩    (8.37)

and, similarly, for site N,

d⟨τN(t)⟩/dt = ⟨τN−1(t)(1 − τN(t))⟩ + δ(1 − ⟨τN(t)⟩) − β⟨τN(t)⟩    (8.38)

It is possible to obtain a solution of these equations in the stationary case where δ = γ = 0. We content ourselves with summarizing the essential characteristics of this solution by drawing the phase diagram (see figure 8.2).

Figure 8.2 – Phase diagram of the ASEP model in the (α, β) plane, with the regions LDP, HDP and CP. The dotted line is a first-order transition line and the two full lines are continuous transition lines.

The LDP region (low density phase) corresponds to a stationary density (far from the edges) equal to α if α < 1/2 and α < β. The HDP region (high density phase) corresponds to a stationary density 1 − β, and appears for values β < 0.5 and α > β. The last region of the diagram is called the maximal current phase and corresponds to values of α and β both strictly greater than 1/2.
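The low-density phase can be observed directly in a small simulation (a rough sketch with an assumed update scheme; the sizes, rates and run lengths are arbitrary): with α = 0.25 and β = 0.75, the measured density in the middle of the lattice stays close to α.

```python
# Rough sketch of the open-boundary TASEP: at each move, pick one of the N+1 bonds
# (entry bond, N-1 bulk bonds, exit bond) and attempt the corresponding transition.
import random

random.seed(4)
alpha, beta, N = 0.25, 0.75, 50
tau = [0] * N
density = samples = 0
for sweep in range(20000):
    for _ in range(N + 1):
        b = random.randrange(N + 1)
        if b == 0:                                  # injection from the left reservoir
            if tau[0] == 0 and random.random() < alpha:
                tau[0] = 1
        elif b == N:                                # ejection into the right reservoir
            if tau[N - 1] == 1 and random.random() < beta:
                tau[N - 1] = 0
        elif tau[b - 1] == 1 and tau[b] == 0:       # bulk hop to the right
            tau[b - 1], tau[b] = 0, 1
    if sweep >= 10000:                              # measure after a relaxation stage
        density += tau[N // 2]
        samples += 1
print(density / samples)   # close to alpha far from the boundaries
```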

The dashed line separating the LDP and HDP phases is a first-order transition line: in the "thermodynamic" limit, on crossing the transition line, the density undergoes a discontinuity

Δρ = ρHDP − ρLDP    (8.39)
   = 1 − β − α = 1 − 2α    (8.40)

which is a non-zero quantity for every α < 0.5. In a simulation of a finite-size system, one indeed observes a sharp variation of the density at the site located in the middle of the lattice, a variation that is all the more pronounced as the size of the lattice is large.

8.6 Kinetic constraint models

8.6.1 Introduction

The problem posed by the approach to the glass transition of structural glasses consists in explaining the appearance of phenomena such as the extremely large growth of the relaxation times toward equilibrium, while no characteristic feature associated with a phase transition (divergence of a correlation length) is present. This transition is not observed because it would be located in a region that is not accessible experimentally. Several fitting formulas have been proposed and are associated, ever since their introduction, with the existence or the absence of an underlying transition temperature.

To gather the data of many structural glasses, it is customary to show the rapid growth of the relaxation times of these systems (or of the viscosity) as a function of the inverse temperature normalized, for each substance, by its glass transition temperature. This temperature depends on the quench rate imposed on the system. Figure 8.3 shows the Angell plot for several systems.
(The glass transition temperature used for this normalization is defined by an experimental condition: it is the temperature at which the relaxation time reaches the value of 1000 s.) In such a diagram the temperature dependence of the relaxation time is manifest. When a straight line is obtained, the liquid (glass) is said to be "strong", while if a curvature is observed one speaks of a "fragile" liquid (glass); the fragility is all the more marked as the curvature is large. Given figure 8.3, the most fragile liquid in this diagram is therefore orthoterphenyl.

In the case of a strong glass, the relaxation time follows an Arrhenius law,

τ = τ0 exp(E/T)    (8.41)

where E is an activation energy that does not depend on the temperature over the domain considered; such behavior corresponds to a dynamics driven by activated processes. In the case of fragile glasses, several fitting laws have been proposed. To cite only the best known, there is the Vogel-Fulcher-Tammann (VFT) law, which is expressed as

(8.3 – Logarithme (d´cimal) de la viscosit´ (ou du temps de relaxation) e e en fonction de l’inverse de la temp´rature normalis´e avec la temp´rature de e e e transition vitreuse associ´e a chaque corps. il y a la loi de Vogel-Fulscher-Thalmann (VFT) et qui s’exprime comme τ = τ0 exp(A/(T − T0 )). Cette e 150 . Une autre caract´ristique g´n´rique des verres est l’existence d’une d´croise e e e sance des fonctions de corr´lation selon une forme que ne peut pas g´n´ralement e e e ajust´e par une simple exponentielle. e les plus connues. L’oxyde de Germanium est consid´r´ e ` ee comme un verre fort tandis que l’orthoterph´nyl est le verre le plus fragile. ce comportement de d´croise e sance lente est souvent ajust´e par une loi de Kohraush-Williams-Watts.42) La temp´rature T0 est souvent interpr´t´e comme une temp´rature de transition e ee e qui ne peut pas etre atteinte.Out of equilibrium systems Figure 8.43) permet de reproduire assez bien la courbure de plusieurs verres fragiles et ne pr´suppose rien sur l’existence d’une temp´rature de transition non nulle souse e jacente. On peut aussi consid´rer que le rapport A/(T − T0 ) e est une ´nergie d’activation qui augmente donc fortement quand on abaisse la e temp´rature. On peut noter qu’une loi de la forme e τ = τ0 exp(B/T 2 ) (8. De nouveau.

This law is expressed as

φ(t) = exp(−a t^b)    (8.44)

where φ(t) represents the correlation function of the measured quantity. One often speaks of a stretched exponential, the stretching being all the more pronounced as the exponent b is small. To determine the characteristic relaxation time of the correlation function, there are three ways of measuring this time:

• One can consider that the characteristic time is given by the equation φ(τ) = 0.1 φ(0). This criterion is used both in simulations and on the experimental side: the value of the correlation function can then be determined precisely, because the (statistical) fluctuations do not yet dominate the value of the correlation function, which would be the case if one chose a criterion such as φ(τ) = 0.01 φ(0), whether in simulation data or in experimental data.

• More rigorously, one can say that the characteristic time is given by the integral of the correlation function, τ = ∫₀^∞ dt φ(t). This approach is interesting when one works with analytical expressions, but remains difficult to implement in the case of numerical or experimental results.

• One can also determine this time from the fitting law, Eq. (8.45). This method suffers from the fact that it requires determining at the same time the exponent of the stretched exponential and the value of the time τ.

In the case where the correlation function is described by a simple exponential, the determinations of the characteristic time by each of these methods coincide up to a multiplicative factor.
One can also write the correlation function as

φ(t) = exp(−(t/τ)^b)    (8.45)

where τ = a^{−1/b}. Very generally, b is a decreasing function of the temperature, generally starting from 1 at high temperature (the region where the decay of the functions is exponential) and going down to values of about 0.5 to 0.3. In the general case, the three times defined above are not simply related to one another; however, insofar as the slow decay is described by a dominant KWW behavior and the exponent b is not too close to 0 (which corresponds to the experimental situation of structural glasses), the relation between the three times remains quasi-linear, and the behaviors essentially the same. The first method is thus the one retained in the case of a simulation study.

A last characteristic observed in glasses is the existence of a so-called heterogeneous relaxation; in other words, the supercooled liquid would be divided into regions in each of which there exists a specific characteristic time.
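For the stretched-exponential form, the integral definition of the characteristic time can be evaluated in closed form, ∫₀^∞ exp(−(t/τ)^b) dt = τ Γ(1 + 1/b), which a crude numerical integration confirms (an illustrative check; τ = 2 and b = 1/2 are arbitrary values):

```python
# Trapezoidal integration of a KWW decay compared with tau * Gamma(1 + 1/b).
import math

tau, b = 2.0, 0.5                    # hypothetical fit parameters
dt, t_max = 1e-2, 400.0              # step and cutoff for the numerical integral
n = int(t_max / dt)
phi = [math.exp(-((k * dt / tau) ** b)) for k in range(n + 1)]
integral = dt * (sum(phi) - 0.5 * (phi[0] + phi[-1]))
exact = tau * math.gamma(1 + 1 / b)  # equals 4.0 for tau = 2, b = 1/2
print(integral, exact)
```

For b = 1 the integral reduces to τ itself, which is why the three definitions coincide for a simple exponential.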

The observed global relaxation would then be the superposition of "individual" relaxations: each domain relaxes exponentially with a typical relaxation frequency ν, and the global relaxation is given by

φ(t) = ∫ dν exp(−tν) D(ν)    (8.46)

which is mathematically the Laplace transform of the density in frequencies of the different domains. An interpretation of this phenomenon leading to a KWW law is thus possible.

8.6.2 Facilitated spin models

Introduction

Given the universal nature of the glassy behavior observed in physical situations that concern structural glasses, polymers, and even granular media, it is tempting to turn to approaches that take a local average over the microscopic detail of the interactions.
Considering, moreover, that heterogeneous dynamics is the source of the slow relaxation observed in glassy systems, many lattice models with discrete variables have been proposed; the key ingredient of kinetically constrained models rests on this hypothesis. These models have the virtue of highlighting that, even with a dynamics satisfying detailed balance and with a Hamiltonian of particles that do not interact with one another, one can create relaxation dynamics whose behavior is highly non-trivial and reproduce part of the observed phenomena (Ritort and Sollich [2003]). As this is still a field in full evolution, we do not commit ourselves as to the relevance of these models, but we consider them as significant examples of a dynamics that is drastically altered with respect to an evolution given by a classical Metropolis rule. We shall see in the next chapter another example (the adsorption-desorption model) where the absence of the diffusive moves present in a usual dynamics leads to a very slow, and also non-trivial, dynamics.

Fredrickson-Andersen model

The Fredrickson-Andersen model, introduced some twenty years ago by these two authors, is the following: one considers discrete objects on a lattice (in this course we restrict ourselves to the case of a one-dimensional lattice, but generalizations are possible in higher dimensions). The discrete variables are denoted ni and take the value 0 or 1. The interaction Hamiltonian is given by

H = Σ_{i=1}^{N} ni    (8.47)

At a temperature T (with β = 1/kB T), the density of variables in state 1, denoted n, is given by

n = 1/(1 + exp(β))    (8.48)

The variables in state 1 are considered as the mobile regions, and the regions in state 0 as weakly mobile regions. At low temperature, the density of mobile regions is small, n ∼ exp(−β). With a usual Metropolis dynamics, four situations of change of the state of a site are to be considered, since the transition rule for changing the state of a site depends on the state of its two neighbors. One thus has the four following situations:

...000... ↔ ...010...    (8.49)
...100... ↔ ...110...    (8.50)
...001... ↔ ...011...    (8.51)
...101... ↔ ...111...    (8.52)

The Fredrickson-Andersen model consists in forbidding the first of the four situations, Eq. (8.49), which amounts to forbidding both the creation and the destruction of a mobile region inside an immobile region. One can easily check that all the configurations of the unconstrained model remain accessible with the introduction of this additional constraint, except for a single configuration that can never be reached: the configuration containing only sites with the value 0. In this case, one can show that the characteristic time of the relaxation evolves, in one dimension, as

τ = exp(3β)    (8.53)

which corresponds to a strong-glass situation and to a dynamics induced by activated processes.
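A minimal sketch of the one-dimensional FA dynamics (an assumed implementation; β, N and the run length are arbitrary): a site may be updated with the Metropolis rule only when at least one of its two neighbors is mobile, which in particular makes the all-0 configuration unreachable.

```python
# One-spin-facilitated Fredrickson-Andersen dynamics on a ring with Metropolis rates.
import math
import random

random.seed(3)
beta, N, sweeps = 1.0, 50, 2000       # arbitrary parameters for the example
n = [1] * N                           # start from the fully mobile configuration
for _ in range(sweeps * N):
    i = random.randrange(N)
    if n[(i - 1) % N] == 1 or n[(i + 1) % N] == 1:   # kinetic constraint (facilitation)
        dE = 1 - 2 * n[i]             # energy change for flipping n_i under H = sum_i n_i
        if dE < 0 or random.random() < math.exp(-beta * dE):
            n[i] = 1 - n[i]
print(sum(n) >= 1)   # at least one mobile region always survives: True
```

At equilibrium, the density of 1's fluctuates around n = 1/(1 + e^β) ≈ 0.27 for β = 1.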
East Model

This model is inspired by the preceding one in that the Hamiltonian is identical, but the dynamics is more constrained, with the introduction of a breaking of the left-right symmetry of the model. The dynamical rules used are then the rules of the preceding model with the rule of Eq. (8.51) also removed: the creation of mobile regions located to the left of a mobile region is forbidden. The restriction of this dynamics still allows the exploration of almost the whole phase space, as with the usual Metropolis rules, but the characteristic time of the relaxation then becomes

τ = exp(1/(T² ln(2)))    (8.54)

With a relaxation time that grows faster than an Arrhenius law, this model is an example of a so-called fragile glass, while the FA model is an example of a strong glass.

8.7 Conclusion

These few examples show the diversity, and hence the richness, of the behavior of systems outside an equilibrium region. With such simple rules, it is possible to carry out very precise simulations and to obtain analytical results in a large number of situations. In the absence of a general theory for systems out of equilibrium, numerical simulation remains a powerful tool for studying the behavior of these systems, and it makes it possible to test the theoretical developments that can be obtained for the simplest models.

8.8 Exercises

8.8.1 Haff Law

We consider a system of inelastic hard spheres of identical mass m, constrained to move along a straight line. The spheres are initially placed at random on the line (without overlapping), with a velocity distribution p(v, t = 0).

♣ Q. 8.8.1-1 What conservation law is satisfied during the collision? Give the corresponding equation.

♣ Q. 8.8.1-2 Compute v1′ and v2′ as functions of v1, v2 and α.

♣ Q. 8.8.1-3 Compute the loss of kinetic energy during the collision, denoted Δec, as a function of ε = 1 − α², of Δv = (v2 − v1), and of m.
We first consider a collision between two spheres with velocities v1 and v2. At the moment of the inelastic impact, the collision rule is

v1′ − v2′ = −α(v1 − v2)    (8.55)

where v1′ and v2′ are the velocities after the collision and α is called the restitution coefficient (0 ≤ α < 1).

In what follows we assume an absence of correlations in the collisions between particles, which implies that the joint probability p(v1, v2, t) (the probability of finding particle 1 and particle 2 with velocities v1 and v2) factorizes:

p(v1, v2, t) = p(v1, t) p(v2, t)    (8.56)

♣ Q. 8.8.1-4 Show that the mean value ⟨(Δv)²⟩ lost per collision,

⟨(Δv)²⟩ = ∫ dv1 dv2 p(v1, v2, t) (Δv)²    (8.57)

can be expressed in terms of the granular temperature T, defined by the relation T = m⟨v²⟩, and of the mass m.

♣ Q. 8.8.1-5 The system undergoes successive collisions in the course of time, and its total kinetic energy (and therefore its temperature) decreases. Show that

dT/dτ = −ε T/N    (8.58)

where τ is the mean number of collisions and N the total number of particles of the system.

♣ Q. 8.8.1-6 Defining n = τ/N, the mean number of collisions per particle, show that

T(n) = T(0) e^{−ε n}    (8.59)

where T(0) is the initial temperature of the system.

♣ Q. 8.8.1-7 The mean collision frequency ω(T) per sphere is given by

ω(T) = ρ √(T/m)    (8.60)

where ρ is the density of the system. Establish the differential equation relating dT/dt, T(t), ε, m and ρ.

♣ Q. 8.8.1-8 Integrate this differential equation and show that the solution behaves at large times as T(t) ∼ t^β, where β is an exponent to be determined. This decay of the energy is called Haff's law.

♣ Q. 8.8.1-9 We assume that the velocity distribution p(v, t) can be expressed as a scaling function

p(v, t) = A(t) P(v/v̄(t))    (8.61)

where v̄(t) is called the characteristic velocity. Show that A(t) is a simple function of v̄(t).

♣ Q. 8.8.1-10 Using the expression of the probability defined in equation (8.61), compute ep(t) = (m/2) ∫ dv v² p(v, t) as a function of the characteristic velocity and of constants.
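Questions 8.8.1-2 and 8.8.1-3 can be checked numerically (an illustrative verification with arbitrary values of m, α, v1 and v2): combining momentum conservation with the rule (8.55) gives the post-collision velocities, and the kinetic-energy loss equals −(ε/4) m (Δv)² exactly.

```python
# Numerical check of the 1-D inelastic collision: momentum conserved and
# relative velocity reversed and reduced by alpha (rule 8.55).
m, alpha = 1.0, 0.7
v1, v2 = 1.3, -0.4
v1p = 0.5 * ((1 - alpha) * v1 + (1 + alpha) * v2)   # post-collision velocities
v2p = 0.5 * ((1 + alpha) * v1 + (1 - alpha) * v2)
eps = 1 - alpha**2
loss = 0.5 * m * (v1p**2 + v2p**2) - 0.5 * m * (v1**2 + v2**2)
print(abs(v1p + v2p - (v1 + v2)) < 1e-12)              # momentum conservation: True
print(abs(loss + eps / 4 * m * (v2 - v1)**2) < 1e-12)  # Delta_ec = -(eps/4) m (Dv)^2: True
```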

♣ Q. 8.8.1-11 Is the scaling law for the velocity distribution compatible with Haff's law? Justify your answer.

♣ Q. 8.8.1-12 For nc ∼ 1/ε, the kinetic energy no longer decreases algebraically, and aggregates begin to form, associated with the appearance of an inelastic collapse. Describe simply the phenomenon of inelastic collapse.

8.8.2 Domain growth and kinetic constraint model

Many phenomena at and out of equilibrium are characterized by the formation of spatial regions, called domains, where a local order is present. The characteristic length of these domains is denoted l. Their time evolution manifests itself through the displacement of the boundaries, called domain walls. The displacement mechanism depends on the physical system considered. The evolution equation for the length l is given by

dl/dt = l/T(l)    (8.62)

where T(l) is the characteristic time of the wall displacement. When the domain-growth phenomenon occurs in a dynamics close to equilibrium, one has T(l) = l². In the case of an out-of-equilibrium phenomenon, the displacement of a wall requires the crossing of a (free-)energy barrier, and the characteristic time is then given by the relation

T(l) = l² exp(βΔE(l))    (8.63)

where ΔE(l) is an activation energy.

♣ Q. 8.8.2-1 In the case where the wall moves freely under the effect of diffusion, one has T(l) = l². Solve the differential equation and determine the evolution of the growth of l.

♣ Q. 8.8.2-2 In the case where ΔE(l) = l^m, what does the evolution of l with time become? Is it faster or slower than in the preceding case? Interpret this result.
In a more general case, the evolution equation is modified as follows:

dl/dt = l/T(l) − leq/T(leq)    (8.64)

where leq denotes the typical length of a domain at equilibrium.
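For the free-diffusion case T(l) = l², the growth equation can be integrated directly: dl/dt = 1/l gives l(t) = √(l0² + 2t). A short sketch (illustrative; l0, the step and the final time are arbitrary) confirms the diffusive growth:

```python
# Explicit Euler integration of dl/dt = l / T(l) with T(l) = l**2, i.e. dl/dt = 1/l.
l, t, dt, t_end = 1.0, 0.0, 1e-3, 50.0
while t < t_end - 1e-12:
    l += dt / l
    t += dt
print(l, (1.0 + 2 * t_end) ** 0.5)   # both close to sqrt(101)
```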

♥ Q. 8.8.2-3 Solve this differential equation for l ∼ leq. Deduce the characteristic time of return to equilibrium as a function of T(leq) and of the logarithmic derivative d ln(T(l))/d ln(l) evaluated at leq.

Kinetically constrained models are examples of systems where the relaxation toward equilibrium is very slow, even though the thermodynamics remains elementary. We consider the example called the East Model. The Hamiltonian of this model is given by

H = 1/2 Σ_i Si    (8.65)

The dynamics of this model is that of Metropolis, with the exception that the flip of a spin to +1 is possible only if it possesses on its left a neighbor that is also a +1 spin. A domain is defined as a sequence of −1 spins bordered on the right by a +1 spin; the domain of smallest size has the value 1 and consists of a single +1 spin. Flipping a −1 spin into a +1 spin is unfavorable at low temperature and costs an activation energy +1. More generally, the creation of a sequence of n spins +1 located to the right of any wall (that is, a sequence of n domains of size 1) costs an activation energy n.

♣ Q. 8.8.2-4 Consider a system of N spins. Compute the partition function Z of this system.

♣ Q. 8.8.2-5 Compute the concentration c of spins having the value +1 at equilibrium. Placing ourselves at very low temperature, show that this concentration can then be approximated by e^{−β}.

♣ Q. 8.8.2-6 Explain why the state where all spins are negative is an unreachable state if one starts from an arbitrary configuration.

♣ Q. 8.8.2-7 Given the imposed dynamics, give the path (that is, the sequence of configurations) allowing a wall of size 2 located to the right of a +1 spin to merge. In other words, one looks for the sequence of configurations going from +1 −1 +1 to +1 −1 −1.

♣ Q. 8.8.2-8 Give this path (the sequence of configurations). This path is not the most favorable one, because it requires creating a very large sequence of +1 spins. We are going to show that a more economical path exists.

♥ Q. 8.8.2-9 Consider a wall of size l = 2^n. If h(l) denotes the smallest sequence to be created in order to make a wall of size l disappear, show by recurrence that

h(2^n) = h(2^{n−1}) + 1    (8.66)
A possible path for destroying a wall of size n consists in finding a sequence of consecutive configurations of individual spin flips going from 1 −1 … −1 to 1 −1 … −1 1.

♠ Q. 8.8.2-10 Deduce that the activation energy for the destruction of a wall of size l is ln(l)/ln(2).

♣ Q. 8.8.2-11 Admitting that l_eq = e^β (the inverse of the equilibrium concentration of up spins), and inserting the previous result into the one obtained earlier, show that the relaxation time of this model behaves asymptotically at low temperature as exp(β²/ln(2)).

8.8.3 Diffusion-coagulation model

Consider the following stochastic model: on a one-dimensional lattice of size L, one places N particles, denoted A, at random at time t = 0, with N < L. The two elementary mechanisms are represented by the following reactions

A + . <-> . + A    (8.67)
A + A -> A    (8.68)

where the dot denotes an empty site.

♣ Q. 8.8.3-1 For a lattice of size L with periodic boundary conditions, what is the final state of the dynamics? Deduce the density ρ_L(∞). What is the value of this limit when L → ∞?

♣ Q. 8.8.3-2 We want to perform the numerical simulation of this stochastic model with sequential dynamics. Propose an algorithm allowing this model to be simulated on a lattice of size L.

♣ Q. 8.8.3-3 Does the proposed algorithm satisfy detailed balance? Justify your answer.

Let P(l, t) denote the probability of finding an interval of length l containing no particle.

♣ Q. 8.8.3-4 Express the probability Q(l, t) of finding an empty interval of length l with at least one particle located on one of the two sites bordering the interval, as a function of P(l, t) and of P(l + 1, t).

♣ Q. 8.8.3-5 Show that the evolution of the probability P(l, t) is given by the following differential equation

∂P(l, t)/∂t = Q(l − 1, t) − Q(l, t)    (8.69)
The dynamics of the model consists of two mechanisms: a particle can hop randomly to a neighboring site on its right or on its left. If that site is empty, the particle occupies it and frees the site it came from; if the site is occupied, the particle "coagulates" with the particle already present, giving a single particle.
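Question 8.8.3-2 asks for a sequential-dynamics algorithm. A minimal sketch in Python (our own representation: a boolean occupation list with periodic boundary conditions; the function name is ours):

```python
import random

def diffusion_coagulation_step(lattice):
    """One elementary move of the diffusion-coagulation model.

    lattice[i] is True if site i carries an A particle.  A particle is
    chosen at random and hops to a random neighboring site (periodic
    boundary conditions).  If the target site is empty the particle
    simply moves; if it is occupied, the two particles coagulate
    (A + A -> A), so the total particle number can only decrease.
    """
    occupied = [i for i, filled in enumerate(lattice) if filled]
    if len(occupied) <= 1:
        return                      # absorbing state: nothing left to merge
    i = random.choice(occupied)
    j = (i + random.choice((-1, 1))) % len(lattice)
    lattice[i] = False              # leave site i ...
    lattice[j] = True               # ... hop to j, or merge with its occupant
```

With periodic boundary conditions, repeated moves eventually leave a single diffusing particle, in agreement with question 8.8.3-1 (ρ_L(∞) = 1/L, which vanishes as L → ∞).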

♣ Q. 8.8.3-6 Express the density of particles ρ(t) on the lattice as a function of P(0, t) and of P(1, t).

♣ Q. 8.8.3-7 Justify the fact that P(0, t) = 1.

♣ Q. 8.8.3-8 Going to the continuum limit, that is, setting

P(l + 1, t) = P(l, t) + ∂P(l, t)/∂l + (1/2) ∂²P(l, t)/∂l² + ...,

show that one obtains the following differential equation

∂P(l, t)/∂t = ∂²P(l, t)/∂l²    (8.70)

and that

ρ(t) = − ∂P(l, t)/∂l |_(l=0)    (8.71)

♣ Q. 8.8.3-9 Verify that 1 − erf(l/(2√t)) is a solution for P(l, t). (Hint: erf(x) = (2/√π) ∫_0^x dt exp(−t²).) Deduce the density ρ(t).

We now consider a mean-field approach to the evolution of the dynamics. The evolution of the mean local density ρ̄(x, t) is then given by

∂ρ̄(x, t)/∂t = ∂²ρ̄(x, t)/∂x² − ρ̄(x, t)²    (8.72)

We look for a homogeneous solution of this equation, ρ̄(x, t) = ρ(t).

♣ Q. 8.8.3-10 Solve the resulting differential equation. Compare this result with the exact result obtained previously. What is the origin of the difference?

8.8.4 Random sequential addition

We consider the one-dimensional parking model, in which hard particles of unit length are inserted. The positions of the centers of the particles adsorbed on the line are labelled by the abscissas x_i, ordered increasingly, where i is an index running from 1 to n(t). We also define an array of the intervals h_i: each interval is defined as the uncovered part of the line lying either between two consecutive particles, or, for the two intervals located at the boundaries of the system, between a wall and a particle. This array contains n(t) + 1 elements.

♣ Q. 8.8.4-1 Express the density of the line covered by particles as a function of n(t) and of L.
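Returning to question 8.8.3-9: before deriving it by hand, the trial solution can be checked numerically. A small sketch using the standard-library math.erf (the helper names are ours) compares the two sides of equation (8.70) by central finite differences:

```python
from math import erf, pi, sqrt

def P(l, t):
    """Trial solution P(l, t) = 1 - erf(l / (2 sqrt(t))) of dP/dt = d2P/dl2."""
    return 1.0 - erf(l / (2.0 * sqrt(t)))

def residual(l, t, h=1e-4):
    """Difference between dP/dt and d2P/dl2, both by central differences."""
    dP_dt = (P(l, t + h) - P(l, t - h)) / (2.0 * h)
    d2P_dl2 = (P(l + h, t) - 2.0 * P(l, t) + P(l - h, t)) / h ** 2
    return dP_dt - d2P_dl2
```

The residual vanishes to numerical accuracy; the density (8.71) then follows as ρ(t) = −∂P/∂l |_(l=0) = 1/√(πt).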
Here n(t) is the number of particles adsorbed at time t (this array obviously does not correspond to the order of insertion of the particles). The system is bounded on the left by a wall at x = 0 and on the right by a wall at x = L.

To carry out the numerical simulation of this system, a first algorithm, denoted A, consists in choosing the position of a particle at random and testing whether this new particle overlaps one or more previously adsorbed particles. For each trial, the time is incremented by Δt. At long times, the probability of inserting a new particle becomes very small and successful events become rare.

♣ Q. 8.8.4-2 Consider an interval of length h. Justify the fact that Max(h − 1, 0) is the part of the interval available for the insertion of a new particle. Deduce the fraction Φ of the line available for the insertion of a new particle, first as a function of the h_i, then as a function of the x_i.

♣ Q. 8.8.4-3 Each new particle inserted inside an interval of length h creates two intervals, one of length h′ and another of length h″. Give the relation between h, h′ and h″. (Hint: a sketch may help.)

♣ Q. 8.8.4-4 By considering the evolution of the system at short times, justify the choice Δt = 1/L, so that the result given by a simulation is independent of the box size and simulation boxes of different sizes give comparable results.

We now propose another algorithm, denoted B, defined as follows. At the beginning of the simulation, a number η is chosen randomly and uniformly between 0.5 and L − 0.5, and the particle is placed at η; this creates two new intervals for the next generation. Then, for each interval of length h larger than 1, a number is drawn randomly between the lower bound of the interval increased by 0.5 and its upper bound decreased by 0.5, and a particle is placed there; the initial interval is replaced by two new intervals of the next generation, and the time is incremented by Δτ = 1/L for each inserted particle. If an interval is smaller than 1, it is considered blocked and is no longer considered in the remainder of the simulation. Once this operation has been performed on all the intervals of the same generation, the same procedure is repeated on the intervals of the next generation. The simulation is stopped when no interval of size larger than 1 remains.

♣ Q. 8.8.4-5 Justify the fact that, for a system of finite size, the simulation time is then finite.

♣ Q. 8.8.4-6 Give an upper bound on the simulation time.
If the new particle does not overlap any previously adsorbed particle, it is placed; otherwise it is rejected and a new trial is made.

♣ Q. 8.8.4-7 Is the maximum density reached with algorithm B larger than, smaller than, or identical to the one obtained with algorithm A? Justify your answer.
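Algorithm A can be sketched directly in Python (our own function and variable names; a naive O(n) overlap test is used instead of the interval array h_i of the exercise):

```python
import random

def rsa_algorithm_A(L, n_trials):
    """Random sequential addition of unit-length hard rods on [0, L].

    Each trial draws a random center in [0.5, L - 0.5] and accepts it
    only if it does not overlap an already adsorbed rod (centers must
    differ by at least 1).  Each trial advances the time by dt = 1/L.
    Returns the final time and the list of adsorbed centers.
    """
    centers = []
    t = 0.0
    for _ in range(n_trials):
        t += 1.0 / L
        x = 0.5 + (L - 1.0) * random.random()
        if all(abs(x - y) >= 1.0 for y in centers):
            centers.append(x)
    return t, centers
```

At long times the density n(t)/L approaches the jamming density ρ_∞ ≈ 0.7476 of the parking model, which is the regime in which algorithm B becomes advantageous.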

♣ Q. 8.8.4-8 How does the density evolve with the simulation time?

♣ Q. 8.8.4-9 Establish the relation between Δt (algorithm A), Φ and Δτ (algorithm B).

♣ Q. 8.8.4-10 Propose a perfect algorithm by defining, for algorithm B, a new Δτ₁ in terms of a quantity accessible in the simulation, to be determined.


Chapter 9

Slow kinetics, aging

Contents
9.1 Introduction . . . 163
9.2 Formalism . . . 164
9.2.1 Two-time correlation and response functions . . . 164
9.2.2 Aging and scaling laws . . . 165
9.2.3 Interrupted aging . . . 166
9.2.4 "Violation" of the fluctuation-dissipation theorem . . . 166
9.3 Adsorption-desorption model . . . 167
9.3.1 Introduction . . . 167
9.3.2 Definition . . . 168
9.3.3 Kinetics . . . 169
9.3.4 Equilibrium linear response . . . 170
9.3.5 Hard rods age! . . . 171
9.3.6 Algorithm . . . 171
9.4 Kovacs effect . . . 173
9.5 Conclusion . . . 174

9.1 Introduction

We saw in the previous chapter that statistical physics can describe phenomena lying outside thermodynamic equilibrium. The physical situations considered there drove the system to evolve towards an out-of-equilibrium state. Another class of situations, frequently encountered experimentally, concerns systems whose return to an equilibrium state is very slow.

When the relaxation time is much larger than the observation time, a certain form of universality of the out-of-equilibrium properties nevertheless appears. For a long time, this situation was an obstacle to the understanding of the behavior of such systems. Over roughly the last decade, thanks to the accumulation of experimental results (spin glasses, aging of plastic polymers, granular media, ...), of theoretical studies of domain-growth phenomena (simulations and exact results), and of the dynamics of spin models (with quenched disorder), scientific activity on this out-of-equilibrium statistical physics has become very intense. A large part of the laws proposed on the basis of theoretical developments remain, to this day, conjectures. Numerical simulation, which makes it possible to determine precisely how these systems evolve, therefore constitutes an important contribution to a better knowledge of their physics.

9.2 Formalism

9.2.1 Two-time correlation and response functions

When the relaxation time of the system is larger than the observation time, the system cannot be considered to be at equilibrium at the beginning of the experiment, nor, often, during its course.
The quantities that can be measured in an experiment (or in a numerical simulation) depend more or less strongly on the state of the system, from the initial instant of the preparation of the system up to the beginning of the experiment. This means that if two experiments are performed on the same system a few days, or even a few years, apart, the measured quantities differ. It is therefore necessary to consider not only the time τ elapsed during the experiment (as in an equilibrium situation), but also to keep the memory of the elapsed time t_w.

Suppose, for example, that the system is at equilibrium at time t = 0 at temperature T; it then undergoes a rapid quench bringing it to a lower temperature T₁. In other words, one considers a system equilibrated "rapidly" at high temperature; t_w then denotes the time elapsed between this quench and the beginning of the experiment. For a quantity A, a function of the microscopic variables of the system (magnetizations for a spin model, velocities and positions of the particles in a liquid, ...), one defines the two-time correlation function C_A(t, t_w) as

C_A(t, t_w) = ⟨A(t)A(t_w)⟩ − ⟨A(t)⟩⟨A(t_w)⟩    (9.1)

The brackets mean that the average is taken over a large number of identically prepared systems.

Similarly, one can define a response function for the variable A. Let h be the field thermodynamically conjugate to the variable A (δE = −hA). The two-time impulse response function is defined as the functional derivative of the variable A with respect to the field h:

R_A(t, t_w) = δ⟨A(t)⟩/δh(t_w) |_(h=0)    (9.2)

To respect causality, the response function is such that

R_A(t, t_w) = 0  for  t < t_w    (9.3)

In other words, to linear order the response can be expressed through the impulse response function as

⟨A(t)⟩ = ∫_(t_w)^t ds R_A(t, s) h(s) + O(h²)    (9.4)

In simulations as well as experimentally, it is easier to compute the integrated response function, defined as

χ_A(t, t_w) = ∫_(t_w)^t dt′ R_A(t, t′)    (9.5)

9.2.2 Aging and scaling laws

For any system, at least two time scales characterize the dynamics: a local reorganization time (the mean flipping time of a spin, for spin models), which one may call the microscopic time t₀, and an equilibration time t_eq. One applies a quench at time t = 0 and lets the system relax during a time t_w, after which the measurement of the correlation function is started. When the waiting time t_w and the observation time τ lie in the interval

t₀ << t_w << t_eq    (9.6)
t₀ << t_w + τ << t_eq    (9.7)

the system is not yet equilibrated and cannot reach equilibrium on the time scale of the experiment. In this regime, both experimentally and for solvable models (generally within the mean-field approximation), it appears that the correlation function can be written as the sum of two functions:

C_A(t, t_w) = C_ST(t − t_w) + C_AG(ξ(t_w)/ξ(t))    (9.8)

ce r´gime est d´crit par la fonction CST (τ = t − tw ).   dCA (τ ) τ ≥0 −β (9. le syst`me retourne a l’´quilibre. tremp´ ` une temp´rature inf´rieure ` la temp´rature critique. le syst`me r´agit essentiellement comme s’il ´tait e e e e d´j` a l’´quilibre. plus le syst`me a besoin de temps pour perdre la m´moire e e de sa configuration initiale. On parle de mani`re abusive de la violation du e e e Le point cl´ dans la d´monstration du th´or`me de fluctuation-dissipation est la connaise e e e sance de la distribution de probabilit´ ` l’´quilibre. c’est a dire si le temps e e e e e ` t devient tr`s sup´rieur au temps d’´quilibration teq .8) correspond ` la relaxation des spins ` e a a l’int´rieur d’un domaine. ea e 2 166 . e ea e e a e Le premier terme de l’´quation (9. e e eae e Prenons l’exemple de la croissance d’un domaine ferromagn´tique dans un e mod`le de spins. comme dans un syst`me d´j` ´quilibr´. La fonction ξ(tw ) n’est pas pas connue en g´n´ral.2. et ne ea ` e e e d´pend pas du temps d’attente tw . tw ) = RA (τ = t − tw ) a et CA (t.4 “Violation” of the fluctuation-dissipation theorem Dans le cas d’un syst`me ` l’´quilibre.9) RA (τ ) = dτ 0 τ <0 Une telle relation ne peut ˆtre ´tablie dans le cas d’un syst`me ´voluant en e e e e 2 dehors d’un ´tat d’´quilibre . o` ξ(tw ) est une fonction non-universelle qui d´pend du syst`me. sur une p´riode courte.3 Interrupted aging Si la seconde in´galit´ de l’´quation (9. on va se trouver dans la situation dite. sa fonction de corr´lation ree e ` e e devient invariante par translation dans le temps et v´rifie a nouveau le th´or`me e ` e e de fluctuation-dissipation. a Le terme de “vieillissement” se justifie par le fait que. ce qui signifie qu’il est n´cese e e saire de proposer des fonctions d’essai simples et de v´rifier si les diff´rentes fonce e ξ(tw ) ıtresse. 
(The second term in equation (9.8), by contrast, depends on the waiting time and describes the relaxation of the domain walls, ξ(t) being the characteristic domain size at time t.)

tw ) (9. tw ) = 1 τ = t − tw << tw . Ce mod`le pr´sente plusieurs avantages: il repr´sente une extension e e e e du mod`le du parking que nous avons vu au chapitre pr´c´dent.3 Adsorption-desorption model th´or`me de fluctuation-dissipation pour souligner le fait que la relation ´tablie e e e a l’´quilibre n’est plus v´rifi´e en dehors de l’´quilibre. tw ) = T X(t.1 Adsorption-desorption model Introduction Nous allons illustrer les concepts introduits ci-dessus sur un mod`le d’adsorptione d´sorption. e e e RA (t. (9.13) 9.9. En cons´quence.10) o` la fonction X(t.3. on peut suppose que le syst`me viellissant est caract´ris´ e e e par une temp´rature effective e Tef f (t. e e Compte tenu des r´sultats analytiques obtenus sur des mod`les solubles et des e e r´sultats de simulation num´rique. les d´croissances de la fonction de corr´lation et de la fonction e e de r´ponse sont donn´es par des fonctions stationnaires qui ne d´pendent pas du e e e temps d’attente. tw ) ∂CA (t. pour des temps e e e tr`s grands (qui peuvent ˆtre sup´rieurs au temps de la simulation num´rique). ce qui n’est jamais observ´. tw )) aux grands e e temps. e e e e le syst`me retourne vers un ´tat d’´quilibre. on a alors X(t.12) Au del` de ce r´gime. le syst`me vieillit et la fonction Xo est diff´rente de 1. tw ) ∂tw (9. tw ) = X(C(t. Dans le cas o` le syst`me est ` l’´quilibre. en changeant un e e e e seul paramˆtre de contrˆle (essentiellement le potentiel chimique du r´servoir qui e o e fixe la constante d’´quilibre entre taux d’adsorption et taux de d´sorption).3 9. Dans ce cas. on e e 167 . le rapport fluctuation-dissipation e ne d´pend que de la fonction de corr´lation X(t. on a u e a e X(t. (9. a e e e Dans les mod`les de type champ moyen. Une e e e v´ritable violation consisterait a observe le non respect du th´or`me dans les e ` e e conditions de l’´quilibre. mais il ne s’agit pas d’une ` e e e e v´ritable violation car les conditions de ce th´or`me ne sont remplies alors. tw ) = −βX(t. 
Given the analytical results obtained for solvable models and the results of numerical simulations, one thus formally defines a relation between the response function and the associated correlation function. The relation established at equilibrium is simply no longer satisfied away from equilibrium; this is not a genuine violation of the fluctuation-dissipation theorem, since the conditions of the theorem are then not fulfilled. A genuine violation would consist in observing a failure of the theorem under equilibrium conditions, which is never observed.

9.3 Adsorption-desorption model

9.3.1 Introduction

We shall illustrate the concepts introduced above on an adsorption-desorption model. This model has several advantages: it is an extension of the parking model seen in the previous chapter, in which a single control parameter is changed (essentially the chemical potential of the reservoir, which fixes the equilibrium constant between the adsorption rate and the desorption rate).

` la limite des temps tr`s longs.7476 . et apr`s un changement d’´chelle de l’unit´ de temps. (9. le syst`me converge tr`s e e e lentement vers un ´tat d’´quilibre. e` e Lw (K) .15) (9. Quand k− = 0.Slow kinetics. De plus. toutes les particules adsorb´es sont d´sorb´es al´atoirement avec e e e e une constante de vitesse k− . Si la nouvelle particule ne recouvre aucune particule d´j` adsorb´e.16) dans l’´quation (9. et on e e e e va pouvoir observer le vieillissement de ce syst`me. ea e cette particule est accept´e. dt K o` Φ(t) est la probabilit´ d’ins´rer une particule a l’instant t. sinon elle est rejet´e et l’on proc`de a un nouveau e e e ` tirage. est la solution de l’´quation x = yey . ee e La dynamique de ce syst`me correspond a une forme de simulation Monte e ` Carlo grand canonique . Quand k− = 0. tandis que pour des valeurs de K tr`s grandes. Mˆme si. on peut ´crire e e e e l’´volution du syst`me comme e e ρ dρ = Φ(t) − .16) En ins´rant l’´quation (9. mais tr`s petit. Dans la u e limite des petites valeurs de K. ce mod`le permet de comprendre un grand nombre de r´sultats e e exp´rimentaux concernant les milieux granulaires denses. (pour une e e ` ligne initialement vide). e 9. . ρeq ∼ K/(1 + K). ce mod`le correspond au mod`le du e e parking pour lequel la densit´ maximale est ´gale a ρ∞ 0. peut passer progressivement de l’observation de propri´t´s d’´quilibre a celle des ee e ` propri´t´s hors d’´quilibre. avec une constante de a e vitesse k+ .15). mais o` aucun d´placement de particules dans le syst`me e u e e n’est effectu´. le syst`me converge vers e e a e e l’´tat d’´quilibre. dans laquelle seuls les ´changes de particules avec un e r´servoir sont permis. e e En notant que les propri´t´s de ce mod`le d´pendent seulement du rapport ee e e K = k+ /k− . aging. . la relaxation du syst`me peut devenir extr`mement lente. la fonction Lambert-W. u e e ` A l’´quilibre. e En dernier lieu. 
on a e ρeq Φ(ρeq ) = K La probabilit´ d’insertion est alors donn´e par e e Φ(ρ) = (1 − ρ) exp −ρ 1−ρ .14) (9.17) ρeq = 1 + Lw (K) o` Lw (x).3. (9. on obtient l’expression de la e e e densit´ a l’´quilibre.2 Definition Des bˆtonnets durs sont plac´s au hasard sur une ligne. (9. . e ρeq ∼ 1 − 1/ ln(K).18) 168 .

3. La densit´ d’´quilibre ρeq tend vers 1 quand K → ∞.3 Adsorption-desorption model 0. suite e e 169 . la cin´tique du processus est divis´e en trois e e e e r´gimes ` haute densit´: en 1/t. 9. . Jusqu’` ce que la densit´ soit voisine de la densit´ ` saturation du mod`le a e ea e du parking. la d´sorption peut ˆtre n´glig´e et avant un temps t ∼ 5.3 Kinetics Contrairement au mod`le du parking.90 e 0.1 – Evolution temporelle de la densit´ du mod`le d’adsorptione e d´sorption. la cin´tique du mod`le d’adsorption-d´sorption e e e e ne peut pas ˆtre obtenue analytiquement. . e ıt e a 2.70 ρ(t) 1/t 0. c’est ` dire en 1/t.7476 .40 0 5 10 15 ln(t) ´ Figure 9. 1.50 0. car les densit´s e e maximales sont respectivement de 1 et de 0. la e e e e densit´ croˆ comme celle du mod`le du parking.. puis en 1/ ln(t) et enfin en exponentielle e a e exp(−Γt). Il est toujours possible de faire une e analyse qualitative de la dynamique en d´coupant la cin´tique en diff´rents r´gimes.60 0. la densit´ ne croit que lorsque. On peut noter qu’il e e y a une discontinuit´ entre la limite K → ∞ et le cas K = ∞. Sch´matiquement.80 (-Γt) 1/ln(t) 0. Quand l’intervalle de temps entre deux adsorptions devient comparable au temps caract´ristique d’une adsorption. e e e e Nous nous pla¸ons par la suite dans les cas o` la d´sorption est tr`s faible c u e e 1/K << 1.9.

Figure 9.2 – Integrated response function χ(τ) at equilibrium, normalized by its equilibrium value, versus the correlation function C(τ), also normalized, for K = 300 (the waiting time is t_w = 3000 > t_eq).

9.3.4 Equilibrium linear response

In the limit where the waiting time is much larger than the time of return to equilibrium, one must recover the fluctuation-dissipation theorem. At equilibrium, the adsorption-desorption model corresponds to a system of hard rods in a grand-canonical ensemble, with a chemical potential given by βμ = ln(K). The integrated response function can easily be computed:

χ_eq = ρ_eq (1 − ρ_eq)²    (9.19)

une ` champ nul. ce qui exe e plique que le th´or`me de fluctuation-dissipation est l´g`rement modifi´.3 Adsorption-desorption model De mani`re similaire on peut calculer la valeur de la fonction de corr´lation a e e ` l’´quilibre. mais si ce dernier est trop petit. Il apparaˆ ici e e e ıt empiriquement que cette fonction d´crit un vieillissement simple. le vieillissement est d´crit par une fonction d’´chelle.21) Pour des objets durs. mais plus petits que le temps du retour ` a l’´quilibre τeq . la temp´rature n’est pas un param`tre essentiel. e Ceq = ρeq (1 − ρeq )2 .23) 9.20) Le th´or`me de fluctuation-dissipation donne e e χ(τ ) = C(0) − C(τ ). il est possible d’annuler les e e e 171 . δ ln(K(tw )) La fonction de r´ponse int´gr´e ` deux temps est d´finie comme e e e a e χ(t. Quand τ et tw sont assez grands.9. tw ) = f (tw /t) (9. La diagonale e correspond a la valeur donn´e par le th´or`me de fluctuation-dissipation.22) o` les crochets signifient une moyenne sur un ensemble de simulations ind´penu e dantes. la fonction de r´ponse int´gr´e n´cessite a priori de faire une e e e e simulation en ajoutant un champ ` partir de l’instant tw .24) En simulation. La figure e e e e e 9.6 Algorithm δρ(t) . Par une combinaison lin´aire des r´sultats.3. la fonction de r´ponse (qui e e e s’obtient en soustrayant les r´sultats de simulation a champ nul) devient extrˆmee ` e ment bruit´e. d´finie comme e e e e C(t. La grandeur int´ressante de ce mod`le e e est la fonction d’auto-corr´lation de la densit´ (normalis´e). (9.2 montre le trac´ de la fonction χ(τ )/χeq en fonction de C(τ )/Ceq . c’est-`-dire dans un domaine de temps o` l’´quilibre there a u e modynamique n’est pas encore atteint.5 Hard rods age! Nous nous int´ressons aux propri´t´s de non-´quilibre de ce mod`le dans le r´gime e ee e e e cin´tique en 1/ ln(t). une seconde a champ positif et une troisi`me a champ a ` e ` n´gatif. (9. tw ) = (9. tw ) = ρ(t)ρ(tw ) − ρ(t) ρ(tw ) (9. 
La premi`re m´thode qui a ´t´ utilis´e consiste ` faire trois simue e e ee e a lations. ` e e e 9. On a en effet e C(t.3. Une telle proc´dure a e souffre du fait que l’on doit utiliser un champ faible pour rester dans le cadre de la r´ponse lin´aire.

Pour une dynamique Monte Carlo standard. on peut exprimer la susceptibilit´ e e comme N 1 ∂Pk (tw → t) χ(t.Slow kinetics. u e En utilisant les probabilit´s de trajectoires. la probabilit´ Pk (tw → e e t) s’exprime comme le produit de probabilit´s de transition des configurations e successives menant le syst`me de tw a t e ` t−1 Pk (tw → t) = t =tw WC k →C k t (9.28) o` dans le cas du mod`le du parking h = ln(K).26) t +1 o` WC k →C k u t t +1 e est la probabilit´ de transition de la configuration Ctk de la ki`me e trajectoire ` l’instant t vers la configuration de la ki`me trajectoire Ctk +1 a a e ` l’instant t + 1. termes non lin´aires en champ quadratique.27) o` finalement Π(Ctk → Ct ) est la probabilit´ d’acceptation du passage de la conu e k figuration Ct a l’instant t a la configuration Ct+1 a l’instant t+1. mais il subsiste des termes cubiques e et au del`.25) o` Pk (tw → t) est la probabilit´ que la ki`me trajectoire (ou simulation) parte de u e e tw et aille ` t et k un indice parcourant les N trajectoires ind´pendantes. a e Compte tenu du caract`re markovien de la dynamique. cette probabilit´ e de transition s’exprime comme WCtk →Ct+1 = δCt+1 Ct Π(Ctk → Ct ) + δCt+1 Ctk (1 − Π(Ctk → Ct )) k k (9. tw ) = ∂ρ(t) ∂h(tw ) (9. BerthierBerthier [2007] et permet de faire une seule simulation e en champ nul. aging.27) tient compte du fait que si la configuration e e e e Ct est refus´e la trajectoire reste avec la configuration de l’instant pr´c´dent avec la probabilit´ compl´mentaire.29) N k=1 ∂h 172 . tw ) = ρ(t) (9. e e La susceptibilit´ int´gr´e s’´crit comme e e e e χ(t. Le second terme ` ` ` de la partie droite de l’´quation (9. La m´thode que nous allons pr´senter est tr`s r´cente (2007) et a ´t´ a e e e e ee propos´e par L. L’id´e de base de la m´thode consiste ` exprimer la moyenne en termes de e e a probabilit´s d’observer le syst`me ` l’instant t e e a 1 ρ(t) = N N ρ(t)Pk (tw → t) k=1 (9.

30) Ainsi. e e 9.31) o` la valeur moyenne s’effectue sur une trajectoire non perturb´e et avec la foncu e tion Hk (tw → t) qui est donn´e par e Hk (tw → t) = t ∂ ln(WC k →C k ) t t +1 ∂h (9. tw ) ∂C(t. tw ) = ρ(t)H(tw → t) (9. Historiquement.25) e e t−1 ∂ ln(W k k Ct →Ct +1 ) ∂Pk (tw → t) = Pk (tw → t) ∂h ∂h t =t w (9. la e r´ponse a un succession de perturbations appliqu´e au syst`me conduit a un e ` e e ` comportement souvent tr`s diff´rent de ce que l’observe pour un syst`me relaxe e e ant rapidement vers l’´quilibre.4 Kovacs effect Pour des syst`mes dont les temps de relaxation sont extremement longs. tw ) =− ∂tw T ∂tw (9. En effet. on interpr`te e e e e la fonction X(t. L’effet Kovacs est un exemple que e e nous allons illustrer sur le mod`le du Parking. Pour estimer le rapport fluctuatione e dissipation. tw ) comme une temp´rature effective. tw ) X(t. Je souhaite insister sur e le fait que le trac´ param´trique doit se faire ` t constant et pour des valeurs e e a diff´rentes de tw . on peut calculer la fonction r´ponse int´gr´e comme la moyenne suivante e e e χ(t. les probabilit´s de transition e e sont modifi´es et dans le calcul en prenant le logarithme de l’´quation (9. on utilise la relation ∂χ(t.32) On peut aussi calculer facilement la fonction de corr´lation ` deux temps e a sur les mˆmes trajectoires non perturb´es.33) En tra¸ant dans un diagramme param´trique la susceptibilit´ int´gr´e en fonction c e e e e de la fonction de corr´lation a deux temps.4 Kovacs effect Si maintenant. ce fait est une cons´quence de l’´quation (9.tw ) en calculant la pente locale du e T trac´ pr´c´dent. Si cette fonction est constante par morceaux. la e e e diff´rentiation se fait par rapport ` tw et non pas par rapport a t. pour un temps fix´ t et pour des valeurs e ` e diff´rentes tw on peut calculer la fonction X(t.33). ce ph´nom`ne a e e e ´t´ d´couvert dans les ann´es soixante et consiste a appliquer ` des polym`res ee e e ` a e 173 . 
Il semble qu’il y ait une aussi grande universalit´ e e des comportements des r´ponses des syst`mes. on ajoute un champ infinit´simal. Les premi`res e a ` e ´tudes ont ´t´ faites tr`s souvent de mani`re incorrecte et il est apparu que des e ee e e corrections substantielles apparaissent quand les calculs ont ´t´ refaits selon la ee m´thode expos´e ci-dessus.9.

002 0.5 Conclusion L’´tude des ph´nom`nes hors d’´quilibre est encore en grande partie dans son e e e e enfance. 0. Ce ph´nom`ne a ´t´ observ´ aussi dans le cas des verres e ee e e ee e fragiles comme l’orthoterph´nyl en simulation. et dans la e ` courbe du bas. puis un chauffage rapide ` une a temp´rature interm´diaire. K passe de 1000 ` 500 pour tw = 139. Dans le cas o` le volume atteint par le syst`me au e e u e moment o` l’on proc`de au rechauffement est ´gale ` celui qu’aurait le syst`me u e e a e s’il ´tait a l’´quilibre. a ˇ une trempe rapide suivie d’un temps dZattente. Sur la figure 9. Dans la courbe du e haut K est chang´ de K = 5000 a K = 500 pour tw = 240. dans la courbe e ` intermediaire K est chang´ de K = 2000 a K = 500 pour tw = 169.004 0. Pour des syst`mes plus e a e a e e 174 .3.Slow kinetics.003 1/ρ(t)-1/ρeq(K=500) 0. aging.3 – Evolution temporelle du volume d’exc`s pour le mod`le d’adsorptione e d´sorption 1/ρ(t) − 1/ρeq (K = 500) en fonction de t − tw . Ce ph´nom`ne hors d’´quilibre est en fait tr`s universel et on peut e e e e e l’observer dans le mod`le d’adsorption-d´sorption. e 9. on observe e e que le maximum du volume est d’autant plus important que la “trempe” du syst`me a ´t´ importante.001 0 1 t-tw 10 100 Figure 9. compar´e ` celle des syst`mes ` l’´quilibre. on observe alors une augmentation du volume dans un pree ` e mier r´gime suivi d’une diminution pour atteindre asymptotiquement le volume e d’´quilibre.

For systems more complex than those described previously, for which there exists a whole hierarchy of time scales, it is necessary to envisage an aging sequence of the type

C_A(t_w + τ, t_w) = C_ST(τ) + Σ_i C_AG,i(ξ_i(t_w)/ξ_i(t_w + τ))    (9.34)

The identification of the sequence then becomes more and more delicate, but this reflects the intrinsic complexity of the phenomenon. This is probably the case for experimental spin glasses. In recent years, several operating protocols have been proposed, applicable both experimentally and numerically, in which the system is perturbed several times without being allowed to return to equilibrium in between. The diversity of the observed behaviors makes it possible to understand progressively the structure of these systems through the study of their dynamical properties.

aging. 176 .Slow kinetics.

Appendix A

Reference models

After the considerable development of statistical physics during the last century, several Hamiltonians have become model systems, because they allow many phenomena to be described on the basis of simple hypotheses. The purpose of this appendix is to give a fairly broad (though not exhaustive) list of these models, together with a brief description of their essential properties.

A.1 Lattice models

A.1.1 XY model

The XY model is a lattice spin model in which the spins are two-dimensional vectors; the lattice supporting these spins can be of any dimension. The Hamiltonian of this system reads

H = −J Σ_⟨i,j⟩ S_i · S_j    (A.1)

The notation ⟨i,j⟩ denotes a double summation in which i runs over the sites and j over the nearest neighbors of i, and S_i · S_j represents the scalar product of the two vectors S_i and S_j.

The upper critical dimension of this model is 4, and the Mermin-Wagner theorem (Mermin and Wagner [1966]) forbids a transition at finite temperature in two dimensions associated with the order parameter, namely the magnetization. In two dimensions, however, a phase transition at finite temperature does appear, associated with topological defects that undergo a transition of the bound-state/dissociated-state type. This transition was first described by Kosterlitz and Thouless (Kosterlitz and Thouless [1973]). It is a transition that is not marked by any discontinuity of the derivatives of the thermodynamic potential; in simulation, its signature is obtained from the quantity called the helicity. (Concerning topological defects in condensed matter, one may still consult the somewhat old but remarkably complete review by Mermin (Mermin [1979]).)
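To make the lattice-model discussion concrete, here is a minimal sketch (not part of the original notes) of a Metropolis simulation of the XY model on an L × L square lattice with periodic boundary conditions; the lattice size, temperature and proposal amplitude are illustrative choices.

```python
import math
import random

def xy_energy(theta, J=1.0):
    """Total XY energy H = -J * sum over nearest-neighbor bonds of cos(theta_i - theta_j)
    on an L x L square lattice with periodic boundary conditions."""
    L = len(theta)
    E = 0.0
    for i in range(L):
        for j in range(L):
            # count each bond once via the right and down neighbors
            E -= J * (math.cos(theta[i][j] - theta[i][(j + 1) % L])
                      + math.cos(theta[i][j] - theta[(i + 1) % L][j]))
    return E

def metropolis_sweep(theta, beta, J=1.0, amplitude=0.5):
    """One Monte Carlo sweep: L*L attempted single-spin rotations."""
    L = len(theta)
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        old = theta[i][j]
        new = old + amplitude * (2.0 * random.random() - 1.0)
        dE = 0.0
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = theta[(i + di) % L][(j + dj) % L]
            dE -= J * (math.cos(new - nb) - math.cos(old - nb))
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            theta[i][j] = new
```

Starting from the aligned configuration, the energy is −2JL² and can only increase under thermal agitation, while always remaining bounded by ±2JL².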

A.1.2 Heisenberg model

The Heisenberg model is a lattice model in which the spins are vectors of a three-dimensional space; the lattice itself can be of any dimension. The Hamiltonian is given by the equation

H = −J Σ_⟨i,j⟩ S_i · S_j    (A.2)

with S_i a three-dimensional vector, the other notations remaining identical to those defined in the previous paragraph. Contrary to the XY model, the system does not possess topological defects, since the vectors can access a third dimension (Mermin [1979]). In two dimensions, as soon as the temperature differs from zero the system loses its magnetization, which means that the critical temperature is zero; the lower critical dimension of this model is two. In three dimensions, the system displays a continuous transition between a paramagnetic phase and a ferromagnetic phase at finite temperature.

A.1.3 O(n) model

The generalization of the previous models is simple, since one can always consider spins characterized by a vector of dimension n; formally, the Hamiltonian keeps the form H = −J Σ_⟨i,j⟩ S_i · S_j, with the same notations as above. Besides the fact that there exist physical realizations with n = 4, the generalization to arbitrary n has a theoretical interest: when n tends to infinity, one converges to the spherical model, which can in general be solved analytically and which possesses a non-trivial phase diagram. Moreover, there exist analytical methods which allow one to work around the spherical model (1/n expansion) and which, by extrapolation, give interesting predictions at fixed n.

A.2 Off-lattice models

A.2.1 Introduction

In this course we have used two models of simple liquids, namely the hard-sphere model and the Lennard-Jones model. The first one takes into account exclusively geometric packing effects, and therefore possesses a phase diagram in which a liquid-solid transition appears (for dimension three or higher). The Lennard-Jones model, with its long-distance attractive potential (representing the van der Waals forces), enriches the phase diagram: one then has a liquid-gas transition with an associated critical point, and a liquid-gas coexistence curve at low temperature.

In the dense region, one recovers a liquid-solid transition line (first order for a three-dimensional system) ending at a triple point at low temperature.

A.2.2 Stockmayer model

The Lennard-Jones model is a good model as long as the molecules or atoms under consideration carry no charge or permanent dipole. This situation is obviously too restrictive to describe the solutions that make up our everyday environment. In particular, water, which essentially consists of H2O molecules (neglecting at first the ionic dissociation), possesses a strong dipole moment. Staying at a very elementary level of description, the Stockmayer Hamiltonian is the Lennard-Jones one supplemented with a dipolar interaction term:

H = Σ_{i=1}^{N} p_i²/(2m) + (1/2) Σ_{i≠j} { 4ε[ (σ/r_ij)^12 − (σ/r_ij)^6 ] + [ μ_i·μ_j − 3(μ_i·r̂_ij)(μ_j·r̂_ij) ] / r_ij³ }    (A.3)

A.2.3 Kob-Andersen model

Supercooled glass-forming liquids have the essential characteristic of displaying a spectacular increase of the relaxation time when the temperature decreases. This increase reaches 15 orders of magnitude, up to a point where, failing to crystallize, the system freezes and finds itself, for temperatures below the so-called glass transition temperature, in an out-of-equilibrium (amorphous) state. This transition temperature is not unique for a given system, since it depends in part on the quench rate of the system, and part of the usual characteristics associated with phase transitions is absent. Since this situation concerns a very large number of systems, it is tempting to think that simple liquids could display this type of phenomenology. However, in a three-dimensional space, pure simple liquids (hard spheres, Lennard-Jones) always undergo a liquid-solid transition, and it is very difficult to observe supercooling of these liquids, even when the quench rate is decreased as much as possible (in simulation, the quench rate nevertheless remains large compared to that of experimental protocols). One of the best-known simple-liquid models displaying a significant supercooled phase (with crystallization avoided) is the Kob-Andersen model (Kob and Andersen [1994, 1995a,b]). This model is a mixture of two species of Lennard-Jones-type particles with the interaction potential

v_αβ(r) = 4ε_αβ [ (σ_αβ/r)^12 − (σ_αβ/r)^6 ]    (A.4)

The two species are denoted A and B, and the proportions of the mixture are 80% of A and 20% of B. The interaction parameters are ε_AA = 1, ε_AB = 1.5 and ε_BB = 0.5, and the diameters are given by σ_AA = 1, σ_AB = 0.8 and σ_BB = 0.88.
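A minimal sketch (not from the notes) of the Kob-Andersen pair potential (A.4) with the parameters above; the function name and dictionary layout are illustrative.

```python
# Kob-Andersen binary Lennard-Jones parameters (species A and B)
EPS = {("A", "A"): 1.0, ("A", "B"): 1.5, ("B", "B"): 0.5}
SIG = {("A", "A"): 1.0, ("A", "B"): 0.8, ("B", "B"): 0.88}

def v_ka(r, a, b):
    """Kob-Andersen pair potential, Eq. (A.4):
    v_ab(r) = 4 * eps_ab * ((sig_ab/r)**12 - (sig_ab/r)**6)."""
    key = (a, b) if (a, b) in EPS else (b, a)
    eps, sig = EPS[key], SIG[key]
    x6 = (sig / r) ** 6
    return 4.0 * eps * (x6 * x6 - x6)
```

As for any Lennard-Jones potential, v_ab vanishes at r = σ_ab and reaches its minimum, −ε_ab, at r = 2^(1/6) σ_ab.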

Appendix B

Linear response theory

Consider a system at equilibrium described by the Hamiltonian H0. At time t = 0, this system is perturbed by an external action described by the Hamiltonian H′:

H′ = −A(r^N) F(t)    (B.1)

where F(t) is a perturbing force depending only on time and A(r^N) is the variable conjugate to the force F. It is possible to consider a space-dependent force, but for simplicity we restrict ourselves here to a uniform force. The force is assumed to decay as t → ∞ in such a way that the system returns to equilibrium.

The evolution of the system can be described by the Liouville equation

∂f^(N)(r^N, p^N, t)/∂t = {H0 + H′, f^(N)(r^N, p^N, t)}    (B.2)
                        = −iL f^(N)(r^N, p^N, t)    (B.3)
                        = −iL0 f^(N)(r^N, p^N, t) − {A(r^N), f^(N)(r^N, p^N, t)} F(t)    (B.4)

where L0 denotes the Liouville operator associated with H0,

L0 = i{H0, · }    (B.5)

Since the system was initially at equilibrium, one has

f^(N)(r^N, p^N, 0) = C exp(−βH0(r^N, p^N))    (B.6)

where C is a normalization constant. Since the external field is assumed weak, one performs a perturbative expansion of equation (B.4), keeping only the first non-vanishing term. One writes

f^(N)(r^N, p^N, t) = f0^(N)(r^N, p^N) + f1^(N)(r^N, p^N, t)    (B.7)

At lowest order one then obtains

∂f1^(N)(r^N, p^N, t)/∂t = −iL0 f1^(N)(r^N, p^N, t) − {A, f0^(N)} F(t)    (B.8)

Equation (B.8) is solved formally with the initial condition given by equation (B.6). This leads to

f1^(N)(r^N, p^N, t) = −∫_{−∞}^{t} exp(−i(t−s)L0) {A, f0^(N)} F(s) ds    (B.9)

Thus, at lowest order, the difference between the Liouville distributions is given by equation (B.9), and the variable ΔB(t) = ⟨B(t)⟩ − ⟨B(−∞)⟩ evolves as

ΔB(t) = ∫ dr^N dp^N [ f^(N)(r^N, p^N, t) − f0^(N)(r^N, p^N) ] B(r^N)    (B.10)
      = −∫ dr^N dp^N ∫_{−∞}^{t} exp(−i(t−s)L0) {A, f0^(N)} B(r^N) F(s) ds    (B.11)
      = −∫ dr^N dp^N ∫_{−∞}^{t} {A, f0^(N)} exp(i(t−s)L0) B(r^N) F(s) ds    (B.12)

using the hermiticity of the Liouville operator. Computing the Poisson bracket, one finds

{A, f0^(N)} = Σ_{i=1}^{N} ( ∂A/∂r_i · ∂f0^(N)/∂p_i − ∂A/∂p_i · ∂f0^(N)/∂r_i )    (B.13)
            = −β Σ_{i=1}^{N} ( ∂A/∂r_i · ∂H0/∂p_i − ∂A/∂p_i · ∂H0/∂r_i ) f0^(N)    (B.14)
            = −β iL0 A f0^(N)    (B.15)
            = −β (dA(0)/dt) f0^(N)    (B.16)

Inserting equation (B.16) into equation (B.12), one has

ΔB(t) = β ∫ dr^N dp^N f0^(N) ∫_{−∞}^{t} (dA(0)/dt) exp(i(t−s)L0) B(r^N) F(s) ds    (B.17)

Using the fact that

B(r^N(t)) = exp(itL0) B(r^N(0))    (B.18)

equation (B.17) becomes

ΔB(t) = β ∫_{−∞}^{t} ds ⟨ (dA(0)/dt) B(t−s) ⟩ F(s)    (B.19)

One defines the linear response function of B with respect to F through

ΔB(t) = ∫_{−∞}^{+∞} ds χ(t, s) F(s) + O(F²)    (B.20)

Identifying equations (B.19) and (B.20), one obtains the fluctuation-dissipation theorem (FDT):

χ(t) = −β d⟨A(0)B(t)⟩/dt  for t > 0,  χ(t) = 0  for t < 0    (B.21)

Given equation (B.21), one finds the following properties:

1. A system cannot respond to a perturbation before it has occurred:
χ(t, s) = 0  for  t − s ≤ 0    (B.22)
This property is called causality.

2. The response function (close to) equilibrium is invariant under translation in time:
χ(t, s) = χ(t − s)    (B.23)

3. When A = B, denoting the autocorrelation function by

C_A(t) = ⟨A(0)A(t)⟩    (B.24)

one has

χ(t) = −β dC_A(t)/dt  for t > 0,  and  χ(t) = 0  for t < 0    (B.25)

Defining the integrated response

R(t) = ∫_0^t χ(s) ds    (B.26)

the fluctuation-dissipation theorem then reads

R(t) = β (C_A(0) − C_A(t))  for t > 0,  and  R(t) = 0  for t < 0    (B.27)
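As a hedged numerical illustration (not part of the notes), relation (B.27) can be checked for an assumed exponential autocorrelation C_A(t) = c0 exp(−t/τ): integrating χ(s) = −β dC_A/ds from 0 to t must reproduce β(C_A(0) − C_A(t)). The parameter values below are arbitrary.

```python
import math

def integrated_response(beta, c0, tau, t, n=100000):
    """Trapezoid integration of chi(s) = -beta * dC_A/ds = (beta*c0/tau) * e^{-s/tau}
    over [0, t]; by Eq. (B.27) this must equal beta*(C_A(0) - C_A(t))."""
    chi = lambda s: beta * c0 / tau * math.exp(-s / tau)
    ds = t / n
    total = 0.5 * (chi(0.0) + chi(t))
    for k in range(1, n):
        total += chi(k * ds)
    return total * ds

def fdt_prediction(beta, c0, tau, t):
    """Right-hand side of (B.27) for C_A(t) = c0 * exp(-t/tau)."""
    return beta * (c0 - c0 * math.exp(-t / tau))
```

The two functions agree to the accuracy of the quadrature, which is the content of the theorem for this particular correlation function.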


Appendix C

Ewald sum for the Coulomb potential

Consider a set of N particles carrying charges, such that the total charge of the system is zero, Σ_i q_i = 0, in a cubic box of length L with periodic boundary conditions. To a particle i located at r_i in the reference box corresponds an infinity of images located in the copies of this initial box and labeled by the coordinates r_i + nL, where n is a vector with integer components (n_x, n_y, n_z). The total energy of the system is expressed as

U_coul = (1/2) Σ_{i=1}^{N} q_i φ(r_i)    (C.1)

where φ(r_i) is the electrostatic potential at site i,

φ(r_i) = Σ*_{j,n} q_j / |r_ij + nL|    (C.2)

The star indicates that the sum runs over all boxes and over all particles, except j = i when n = 0.

For short-range potentials (decaying faster than 1/r³), the interaction between two particles i and j is, to a very good approximation, computed as the interaction between particle i and the image of j closest to i (minimum image convention). This means that this image is not necessarily in the initial box. For long-range potentials, this approximation is not sufficient, because the interaction energy between two particles decays too slowly for one to be able to restrict oneself to the first image. In the Coulomb case, it is even necessary to take into account the full set of boxes, as well as the nature of the boundary conditions at infinity.

To remedy this difficulty, the Ewald summation method consists in splitting (C.1) into several parts: a short-range part, obtained by screening each particle with a charge distribution (often taken to be Gaussian) of the same magnitude but of sign opposite to that of the particle (its contribution can then be computed with the minimum image convention), and a long-range part, due to the introduction of a charge distribution symmetric to the previous one, whose contribution is computed in the reciprocal space of the cubic lattice. The more or less diffuse shape of the charge distribution controls the convergence of the reciprocal-space sum.

If one introduces as charge distribution a Gaussian distribution,

ρ(r) = Σ_{j=1}^{N} Σ_n q_j (α/π)^{3/2} exp[ −α|r − (r_j + nL)|² ]    (C.3)

the short-range part is given by

U1 = (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_n q_i q_j erfc(α|r_ij + nL|) / |r_ij + nL|    (C.4)

where the sum over n is truncated at the first image and erfc is the complementary error function, and the long-range part by

U2 = (1/(2πL³)) Σ_{i=1}^{N} Σ_{j=1}^{N} Σ_{k≠0} q_i q_j (4π²/k²) exp(−k²/(4α)) cos(k·r_ij)    (C.5)

From this last expression one must subtract a so-called self-interaction term, due to the interaction of each charge distribution q_j with the point charge located at the center of the Gaussian. The term to subtract is

U3 = −(α/π^{1/2}) Σ_{i=1}^{N} q_i²    (C.6)

The Coulomb interaction energy thus becomes

U_coul = U1 + U2 + U3    (C.7)

For charges (or spins, or particles) placed on the sites of a lattice, the reciprocal-space sums can be performed once and for all at the beginning of each simulation. It is then possible to take into account a very large number of wave vectors, which ensures a very good accuracy for the computation of the Coulomb energy. In the case of continuous systems, the reciprocal-space computation must be redone each time the particles are moved, which makes the algorithm costly for large systems. More efficient algorithms then exist, such as those based on a multipole expansion.
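A minimal sketch (not part of the notes) of the real-space term U1, truncated at the first image as in (C.4), together with the self-energy correction U3 of (C.6); the value of α and the data layout are illustrative choices.

```python
import math

def ewald_real_space(positions, charges, box, alpha):
    """Real-space term U1 of Eq. (C.4), truncated at n = 0 with the
    minimum image convention (positions are 3-tuples, cubic box of side `box`)."""
    u1 = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            d = [positions[i][k] - positions[j][k] for k in range(3)]
            d = [dk - box * round(dk / box) for dk in d]   # minimum image
            r = math.sqrt(sum(dk * dk for dk in d))
            u1 += charges[i] * charges[j] * math.erfc(alpha * r) / r
    return u1

def ewald_self(charges, alpha):
    """Self-interaction correction U3 of Eq. (C.6): -(alpha/sqrt(pi)) * sum q_i^2."""
    return -alpha / math.sqrt(math.pi) * sum(q * q for q in charges)
```

For two opposite unit charges at minimum-image distance 1, U1 reduces to −erfc(α), illustrating how the Gaussian screening makes the real-space sum short ranged.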

Remarks

The sum in expression (C.5) is performed only for k ≠ 0. This results from the conditional convergence of the Ewald sums and has important physical consequences. In a periodic Coulomb system, the form of the energy indeed depends on the nature of the boundary conditions at infinity, and neglecting the contribution to the energy corresponding to k = 0 amounts to considering that the system is immersed in a medium of infinite dielectric constant (i.e. a good conductor). This is the convention used in simulations of ionic systems. In the opposite case, i.e. if the system is placed in a dielectric medium, the fluctuations of the dipole moment of the system create surface charges that are responsible for the existence of a depolarizing field. This adds to the energy a term which is nothing but the contribution corresponding to k = 0.


Appendix D

Hard rod model

D.1 Equilibrium properties

Consider the system made of N impenetrable segments of identical length σ, constrained to move on a straight line. The Hamiltonian of this system reads

H = Σ_{i=1}^{N} (m/2)(dx_i/dt)² + Σ_{i<j} v(x_i − x_j)    (D.1)

where

v(x_i − x_j) = +∞  if |x_i − x_j| < σ,  and  0  if |x_i − x_j| ≥ σ    (D.2)

As for all classical systems, the computation of the partition function factorizes into two parts: the first corresponds to the kinetic part and can easily be computed (Gaussian integral); the second is the configuration integral, which for this one-dimensional system can also be computed exactly:

Q_N(L, N) = ∫ ... ∫ dx_1 ... dx_N exp( −(β/2) Σ_{i≠j} v(x_i − x_j) )    (D.3)

Since the potential is either zero or infinite, the configuration integral does not depend on the temperature. Since two particles cannot overlap, one can rewrite the integral Q_N by ordering the particles (the factor N! counting the possible orderings):

Q_N(L, N) = N! ∫_0^{L−Nσ} dx_1 ∫_{x_1+σ}^{L−(N−1)σ} dx_2 ... ∫_{x_{N−1}+σ}^{L−σ} dx_N    (D.4)

Successive integration of equation (D.4) gives

Q_N(L, N) = (L − Nσ)^N    (D.5)

The canonical partition function of the system is thus determined, and the equation of state is given by the relation

βP = ∂ ln Q_N(L, N)/∂L    (D.6)

which gives

βP/ρ = 1/(1 − σρ)    (D.7)

where ρ = N/L. The correlation functions can also be computed analytically. The excess chemical potential is given by the relation

exp(−βμ_ex) = (1 − ρ) exp(−ρ/(1 − ρ))    (D.8)

D.2 Parking model

Random sequential addition is a stochastic process in which hard particles are added sequentially in a space of dimension D, at random positions, subject to the conditions that no new particle may overlap an already inserted one and that, once inserted, the particles are immobile. The one-dimensional version of the model is known as the parking problem; it was introduced by a Hungarian mathematician, A. Rényi, in 1963 (Rényi [1963]).

Hard rods of length σ are thrown randomly and sequentially onto a line, respecting the conditions above; the particle diameter is chosen as the unit of length. If ρ(t) denotes the density of particles on the line at time t, the kinetics of the process is governed by the following master equation:

∂ρ(t)/∂t = k_a Φ(t)    (D.9)

where k_a is a rate constant per unit length (which can be set equal to unity by changing the unit of time) and Φ(t), the insertion probability at time t, is also the fraction of the line available for the insertion of a new particle at time t.

It is useful to introduce the interval distribution function G(h, t), defined such that G(h, t)dh represents the density of voids with length between h and h + dh at time t. For an interval of length h, the void available for inserting a new particle is h − 1.
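The two thermodynamic relations above translate directly into code; this small sketch (not from the notes) evaluates (D.7) and (D.8), with σ = 1 and β = 1 as unit conventions.

```python
import math

def hard_rod_pressure(rho, sigma=1.0):
    """Reduced pressure from the equation of state (D.7):
    beta*P = rho / (1 - sigma*rho)."""
    return rho / (1.0 - sigma * rho)

def hard_rod_mu_excess(rho):
    """beta * mu_ex from Eq. (D.8):
    exp(-beta*mu_ex) = (1 - rho) * exp(-rho/(1 - rho)), with sigma = 1."""
    return -math.log(1.0 - rho) + rho / (1.0 - rho)
```

Both quantities diverge as ρ → 1/σ, the close-packing density of the rods, and reduce to the ideal-gas values at low density.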

Consequently, the fraction of available line Φ(t) is simply the sum of (h − 1) over the set of available intervals (with σ = 1):

Φ(t) = ∫_1^∞ dh (h − 1) G(h, t)    (D.10)

Since each interval corresponds to one particle, the particle density ρ(t) is expressed as

ρ(t) = ∫_0^∞ dh G(h, t)    (D.11)

while the uncovered fraction of the line is related to G(h, t) by

1 − ρ(t) = ∫_0^∞ dh h G(h, t)    (D.12)

The two previous equations represent sum rules for the interval distribution function. During the process, G(h, t) evolves according to

∂G(h, t)/∂t = −H(h − 1)(h − 1) G(h, t) + 2 ∫_{h+1}^∞ dh′ G(h′, t)    (D.13)

where H(x) is the Heaviside function. The first term on the right-hand side of equation (D.13) (destruction term) corresponds to the insertion of a particle inside an interval of length h (for h ≥ 1), while the second term (creation term) corresponds to the insertion of a particle in an interval of length h′ > h + 1; the factor 2 accounts for the two ways of creating an interval of length h from a larger interval of length h′.

Note that the time evolution of the interval distribution G(h, t) is entirely determined by the intervals larger than h. This results from the property that the insertion of a particle in a given interval has no effect on the other intervals (screening effect). We now have a complete set of equations, which can be solved using the following ansatz for h > 1:

G(h, t) = F(t) exp(−(h − 1)t)    (D.14)

which leads to

F(t) = t² exp( −2 ∫_0^t du (1 − e^{−u})/u )    (D.15)

Integrating equation (D.13) with this solution for G(h, t) for h > 1, one obtains, for 0 < h < 1,

G(h, t) = 2 ∫_0^t du exp(−uh) F(u)/u    (D.16)
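The interval description above translates directly into an event-driven simulation: tracking only the gap lengths, one inserts unit rods until no gap larger than 1 remains. This sketch (not part of the notes) is one possible implementation; the line length is an illustrative choice.

```python
import random

def rsa_jam(L, rng):
    """Random sequential adsorption of unit rods on a line of length L, run to jamming.
    Only the gap lengths are stored; a gap is chosen with probability proportional
    to its available length (h - 1), and the rod position inside it is uniform, which
    reproduces uniform insertion over all allowed positions."""
    gaps = [L]
    n = 0
    while True:
        weights = [max(h - 1.0, 0.0) for h in gaps]
        total = sum(weights)
        if total == 0.0:          # no gap can host another rod: jamming limit
            return n / L
        x = rng.random() * total
        k = 0
        for idx, w in enumerate(weights):
            if w > 0.0:
                k = idx
                x -= w
                if x <= 0.0:
                    break
        h = gaps.pop(k)
        left = rng.random() * (h - 1.0)   # gap remaining to the left of the new rod
        gaps.extend([left, h - 1.0 - left])
        n += 1
```

For a long line, the jammed density obtained this way fluctuates around the Rényi value 0.7476 discussed below.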

The three equations (D.10), (D.11) and (D.12) lead, of course, to the same result for the density ρ(t):

ρ(t) = ∫_0^t du exp( −2 ∫_0^u dv (1 − e^{−v})/v )    (D.17)

This result was first obtained by Rényi. A non-obvious property of the model is that the process reaches a jamming limit (when t → ∞) at which the density saturates at the value ρ_∞ σ = 0.7476... This value is significantly smaller than the maximum close-packing density (ρσ = 1), which is obtained when the particles are allowed to diffuse on the line. Moreover, it is easy to see that the jamming limit depends on the initial conditions: here, we started from an empty line. By contrast, at equilibrium, the final state of the system is determined only by the chemical potential and keeps no memory of the initial state.

The long-time kinetics can be obtained from equation (D.17):

ρ_∞ − ρ(t) ≃ e^{−2γ}/t    (D.18)

where γ is the Euler constant, which shows that the approach to saturation is algebraic. The structure of the configurations generated by this irreversible process has a number of unusual properties. At saturation, the interval distribution has a logarithmic divergence (integrable, of course) at contact, h → 0:

G(h, ∞) ≃ −2 e^{−2γ} ln(h)    (D.19)

Moreover, the pair correlations between particles are extremely weak at large distance:

g(r) − 1 ∝ 2 ln r / Γ(r)    (D.20)

where Γ(x) is the Gamma function: the decay of g(r) is said to be super-exponential, and it therefore differs from the equilibrium situation, where an exponential decay is obtained.
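Equation (D.17) can be evaluated numerically to check the jamming density; this sketch (not from the notes) uses a simple trapezoid rule, truncating the outer integral at t_max (by (D.18), the neglected tail is of order e^(−2γ)/t_max). The step and cutoff are illustrative.

```python
import math

def renyi_density(t_max=1000.0, dt=0.01):
    """Trapezoid-rule evaluation of Eq. (D.17):
    rho(t) = int_0^t du exp(-2 int_0^u dv (1 - e^{-v})/v),
    accumulating the inner integral on the fly."""
    rho = 0.0
    inner = 0.0          # running value of the inner integral
    g_prev = 1.0         # (1 - e^{-v})/v -> 1 as v -> 0
    f_prev = 1.0         # outer integrand at u = 0
    u = 0.0
    for _ in range(int(t_max / dt)):
        u += dt
        g = (1.0 - math.exp(-u)) / u
        inner += 0.5 * (g_prev + g) * dt
        g_prev = g
        f = math.exp(-2.0 * inner)
        rho += 0.5 * (f_prev + f) * dt
        f_prev = f
    return rho
```

With these parameters the result agrees with ρ_∞ = 0.7476... to a few parts in 10⁴, the residual error being dominated by the truncated algebraic tail.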

Bibliography

B. J. Alder and T. E. Wainwright. Phase transition for a hard sphere system. The Journal of Chemical Physics, 27(5):1208–1209, 1957.

P. Bak, C. Tang, and K. Wiesenfeld. Self-organized criticality: An explanation of the 1/f noise. Phys. Rev. Lett., 59(4):381–384, 1987.

L. Berthier. Efficient measurement of linear susceptibilities in molecular simulations: Application to aging supercooled liquids. Phys. Rev. Lett., 98(22):220601, 2007.

K. Binder. Applications of Monte Carlo methods to statistical physics. Reports on Progress in Physics, 60(5):487–559, 1997.

D. Chandler. Introduction to Modern Statistical Mechanics. Oxford University Press, New York, USA, 1987.

C. Dress and W. Krauth. Cluster algorithm for hard spheres and related systems. Journal of Physics A: Mathematical and General, 28(23):L597–L601, 1995.

A. M. Ferrenberg and R. H. Swendsen. New Monte Carlo technique for studying phase transitions. Phys. Rev. Lett., 61:2635, 1988.

A. M. Ferrenberg and R. H. Swendsen. Optimized Monte Carlo data analysis. Phys. Rev. Lett., 63(12):1195–1198, 1989.

A. M. Ferrenberg, D. P. Landau, and R. H. Swendsen. Statistical errors in histogram reweighting. Phys. Rev. E, 51(5):5092–5100, 1995.

D. Frenkel and B. Smit. Understanding Molecular Simulation: From Algorithms to Applications. Academic Press, London, UK, 1996.

N. Goldenfeld. Lectures on Phase Transitions and the Renormalization Group. Addison-Wesley, New York, USA, 1992.

J. P. Hansen and I. R. McDonald. Theory of Simple Liquids. Academic Press, London, UK, 1986.

J. R. Heringa and H. W. J. Blöte. Geometric cluster Monte Carlo simulation. Phys. Rev. E, 57(5):4976–4978, 1998a.

J. R. Heringa and H. W. J. Blöte. Geometric symmetries and cluster simulations. Physica A, 254(1-2):156–163, 1998b.

H. Hinrichsen. Non-equilibrium critical phenomena and phase transitions into absorbing states. Advances in Physics, 49(7):815–958, 2000.

W. Kob and H. C. Andersen. Scaling behavior in the beta-relaxation regime of a supercooled Lennard-Jones mixture. Phys. Rev. Lett., 73(10):1376–1379, 1994.

W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture: The van Hove correlation function. Phys. Rev. E, 51(5):4626–4641, 1995a.

W. Kob and H. C. Andersen. Testing mode-coupling theory for a supercooled binary Lennard-Jones mixture: Intermediate scattering function and dynamic susceptibility. Phys. Rev. E, 52(4):4134–4153, 1995b.

J. M. Kosterlitz and D. J. Thouless. Ordering, metastability and phase transitions in two-dimensional systems. Journal of Physics C, 6:1181, 1973.

W. Krauth. Statistical Mechanics: Algorithms and Computations. Oxford University Press, London, UK, 2006.

D. P. Landau and K. Binder. A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press, Cambridge, UK, 2000.

N. D. Mermin. The topological theory of defects in ordered media. Rev. Mod. Phys., 51:591, 1979.

N. D. Mermin and H. Wagner. Absence of ferromagnetism and antiferromagnetism in one- or two-dimensional isotropic Heisenberg models. Phys. Rev. Lett., 17:1133, 1966.

P. Poulain, F. Calvo, R. Antoine, M. Broyer, and Ph. Dugourd. Performances of Wang-Landau algorithms for continuous systems. Phys. Rev. E, 73:056704, 2006.

A. Rényi. Sel. Trans. Math. Stat. Prob., 4:205, 1963.

F. Ritort and P. Sollich. Glassy dynamics of kinetically constrained models. Advances in Physics, 52(4):219–342, 2003.

R. H. Swendsen and J.-S. Wang. Nonuniversal critical dynamics in Monte Carlo simulations. Phys. Rev. Lett., 58(2):86–88, 1987.

J. Talbot, G. Tarjus, P. R. Van Tassel, and P. Viot. From car parking to protein adsorption: an overview of sequential adsorption processes. Colloids and Surfaces A: Physicochemical and Engineering Aspects, 165(1-3):287–324, 2000.

S.-H. Tsai, H. K. Lee, and D. P. Landau. Molecular and spin dynamics simulations using modern integration methods. American Journal of Physics, 73(7):615–624, 2005.

D. L. Turcotte. Self-organized criticality. Reports on Progress in Physics, 62(10):1377–1429, 1999.

F. Wang and D. P. Landau. Efficient, multiple-range random walk algorithm to calculate the density of states. Phys. Rev. Lett., 86(10):2050–2053, 2001a.

F. Wang and D. P. Landau. Determining the density of states for classical statistical models: A random walk algorithm to produce a flat histogram. Phys. Rev. E, 64(5):056101, 2001b.

A. P. Young. Spin Glasses and Random Fields. World Scientific, Singapore, 1998.

C. Zhou and R. N. Bhatt. Understanding and improving the Wang-Landau algorithm. Phys. Rev. E, 72(2):025701, 2005.


. . . . . . . .3 Acceptance probability . . . . . . 1. . .4 Metropolis algorithm . . . . . . .7. . . . .1 Introduction . . . . . . . . . . . 1. . . . . . . . . . . . . . . . . . . .1 Introduction . . . . . . . . . . . 2. . . . . . 1. . . . . . . . . . . . . . . . . . . . . . 2. . .2 Detailed balance . . . . . . .5. . . . . . . . . . . 1. . . . . .Contents 1 Statistical mechanics and numerical simulation 1. . . . . .1 Inverse transformation . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . . . .1 Generating non uniform random numbers 2. . . . . . . . . 2. . . . . . . . . . . .1 ANNNI Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .3 Ising model and lattice gas. . . . . 2. . . 197 . . . .5 Exercises . . . . . . . . . . . . . . . . . . . . . .5. . . . . . . . . .3 Markov chain for sampling an equilibrium system 2. Equivalence 1. . . . . . . . . . . . . . . . . . . .7. . . . . . . . . . . . . . . . . . . .3 Model systems . . . .7. . . . . . .7 Exercises . . 2. . .3. . . . . . . . . . . . . . . . . . . . . . . 1. . . . . . . . . . . . . . . . .2 Simple liquids . . . . . . . . .5 Applications . . . . . . . . . . . . . 1. . . . 2. . . 1. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2. . . . . . . 1. .4 Random number generator . . . . . . . . . . . . . . . . .2 Uniform and weighted sampling . . . . . . . . . . . . . .2 Ensemble averages . . .2. . . . .2. . . . . . . . . . 1.3 Grand canonical ensemble . . . 1. . .6. . . . . . . . .1 Microcanonical ensemble .4 Conclusion . 3 3 5 5 6 7 8 8 8 9 10 14 15 15 16 18 21 21 22 23 25 26 26 29 31 33 38 38 39 39 41 . . . . . .2. . . . 2 Monte Carlo method 2. . . . . . . .2 Simple liquids . 2. . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 Potts model .5. . . . . . . . . . . . . . . . . .2. . . 2. .1 Ising model . . . . .6 Blume-Capel model . . 1. . . . . . . . . . . . . . . . . 1. . . . . . . . . . . . .6. . . . . . . . . . . .4 Isothermal-isobaric ensemble . . 
.6 Random number generators . 1. . . . . . . . . .3. . . . . . . . . 2. .2 Canonical ensemble . . . . . . . . . . . . .7. .1 Brief History of simulation . . . . . . . . . . . .3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

3 Molecular Dynamics 43
3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
3.2 Equations of motion . . . . . . . . . . . . . . . . . . . . . . . 45
3.3 Discretization. Verlet algorithm . . . . . . . . . . . . . . . . . 45
3.4 Symplectic algorithms . . . . . . . . . . . . . . . . . . . . . . 47
3.4.1 Liouville formalism . . . . . . . . . . . . . . . . . . . 47
3.4.2 Discretization of the Liouville equation . . . . . . . . 50
3.5 Hard sphere model . . . . . . . . . . . . . . . . . . . . . . . . 51
3.6 Molecular Dynamics in other ensembles . . . . . . . . . . . . 53
3.6.1 Andersen algorithm . . . . . . . . . . . . . . . . . . . 54
3.6.2 Nosé-Hoover algorithm . . . . . . . . . . . . . . . . . 54
3.7 Brownian dynamics . . . . . . . . . . . . . . . . . . . . . . . . 57
3.7.1 Different timescales . . . . . . . . . . . . . . . . . . . 57
3.7.2 Smoluchowski equation . . . . . . . . . . . . . . . . . 57
3.7.3 Langevin equation. Discretization . . . . . . . . . . . 57
3.7.4 Consequences . . . . . . . . . . . . . . . . . . . . . . 58
3.8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.9.1 Multi timescale algorithm . . . . . . . . . . . . . . . . 59

4 Correlation functions 61
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2 Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
4.2.1 Radial distribution function . . . . . . . . . . . . . . 62
4.2.2 Structure factor . . . . . . . . . . . . . . . . . . . . . 66
4.3 Dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 67
4.3.2 Time correlation functions . . . . . . . . . . . . . . . 67
4.3.3 Computation of the time correlation function . . . . . 68
4.3.4 Linear response theory: results and transport coefficients 69
4.4 Space-time correlation functions . . . . . . . . . . . . . . . . 70
4.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 70
4.4.2 Van Hove function . . . . . . . . . . . . . . . . . . . . 70
4.4.3 Intermediate scattering function . . . . . . . . . . . . 72
4.4.4 Dynamic structure factor . . . . . . . . . . . . . . . . 72
4.5 Dynamic heterogeneities . . . . . . . . . . . . . . . . . . . . . 72
4.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 72
4.5.2 4-point correlation function . . . . . . . . . . . . . . . 73
4.5.3 4-point susceptibility and dynamic correlation length . 73
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.7.1 "The structure factor in all its states!" . . . . . . . . 74
4.7.2 Van Hove function and intermediate scattering function 75

5 Phase transitions 79
5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
5.2.1 Critical exponents . . . . . . . . . . . . . . . . . . . . 81
5.2.2 Scaling laws . . . . . . . . . . . . . . . . . . . . . . . 82
5.3 Finite size scaling analysis . . . . . . . . . . . . . . . . . . . . 87
5.3.1 Specific heat . . . . . . . . . . . . . . . . . . . . . . . 87
5.3.2 Other quantities . . . . . . . . . . . . . . . . . . . . . 88
5.4 Critical slowing down . . . . . . . . . . . . . . . . . . . . . . 90
5.5 Cluster algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.6 Reweighting Method . . . . . . . . . . . . . . . . . . . . . . . 93
5.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
5.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
5.8.1 Finite size scaling for continuous transitions: logarithm corrections 96
5.8.2 Some aspects of the finite size scaling: first-order transition 98

6 Monte Carlo Algorithms based on the density of states 101
6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.2 Density of states . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.2.1 Definition and physical meaning . . . . . . . . . . . . 102
6.2.2 Properties . . . . . . . . . . . . . . . . . . . . . . . . 103
6.3 Wang-Landau algorithm . . . . . . . . . . . . . . . . . . . . . 105
6.4 Thermodynamics recovered! . . . . . . . . . . . . . . . . . . . 106
6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
6.6.1 Some properties of the Wang-Landau algorithm . . . . 108
6.6.2 Wang-Landau and statistical temperature algorithms . 109

7 Monte Carlo simulation in different ensembles 113
7.1 Isothermal-isobaric ensemble . . . . . . . . . . . . . . . . . . 113
7.1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 113
7.1.2 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 114
7.2 Grand canonical ensemble . . . . . . . . . . . . . . . . . . . . 116
7.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 116
7.2.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . 116
7.3 Liquid-gas transition and coexistence curve . . . . . . . . . . 119
7.4 Gibbs ensemble . . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.1 Principle . . . . . . . . . . . . . . . . . . . . . . . . . 121
7.4.2 Acceptance rules . . . . . . . . . . . . . . . . . . . . . 121
7.5 Monte Carlo method with multiple Markov chains . . . . . . 123
7.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

8 Out of equilibrium systems 129
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
8.2 Random sequential addition . . . . . . . . . . . . . . . . . . . 131
8.2.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 131
8.2.3 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 132
8.2.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 134
8.3 Avalanche model . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 137
8.3.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 139
8.4 Inelastic hard sphere model . . . . . . . . . . . . . . . . . . . 141
8.4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.4.3 Results . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.4.4 Some properties . . . . . . . . . . . . . . . . . . . . . 143
8.5 Exclusion models . . . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 144
8.5.2 Random walk on a ring . . . . . . . . . . . . . . . . . 146
8.5.3 Model with open boundaries . . . . . . . . . . . . . . 147
8.6 Kinetic constraint models . . . . . . . . . . . . . . . . . . . . 149
8.6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 149
8.6.2 Facilitated spin models . . . . . . . . . . . . . . . . . 152
8.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.1 Haff Law . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.8.2 Domain growth and kinetically constrained model . . 156
8.8.3 Diffusion-coagulation model . . . . . . . . . . . . . . . 158
8.8.4 Random sequential addition . . . . . . . . . . . . . . 159

9 Slow kinetics, aging 163
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
9.2 Formalism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
9.2.1 Two-time correlation and response functions . . . . . 164
9.2.2 Aging and scaling laws . . . . . . . . . . . . . . . . . . 165
9.2.3 Interrupted aging . . . . . . . . . . . . . . . . . . . . . 166
9.2.4 "Violation" of the fluctuation-dissipation theorem . . 166
9.3 Adsorption-desorption model . . . . . . . . . . . . . . . . . . 167
9.3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 167
9.3.2 Definition . . . . . . . . . . . . . . . . . . . . . . . . . 168
9.3.3 Kinetics . . . . . . . . . . . . . . . . . . . . . . . . . . 169
9.3.4 Equilibrium linear response . . . . . . . . . . . . . . . 170
9.3.5 Hard rods age! . . . . . . . . . . . . . . . . . . . . . . 171

9.3.6 Algorithm . . . . . . . . . . . . . . . . . . . . . . . . 171
9.4 Kovacs effect . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
9.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174

A Reference models 177
A.1 Lattice models . . . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1.1 XY model . . . . . . . . . . . . . . . . . . . . . . . . 177
A.1.2 Heisenberg model . . . . . . . . . . . . . . . . . . . . 178
A.1.3 O(n) model . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2 Off-lattice models . . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 178
A.2.2 Stockmayer model . . . . . . . . . . . . . . . . . . . . 179
A.2.3 Kob-Andersen model . . . . . . . . . . . . . . . . . . 179

B Linear response theory 181

C Ewald sum for the Coulomb potential 185

D Hard rod model 189
D.1 Equilibrium properties . . . . . . . . . . . . . . . . . . . . . . 189
D.2 Parking model . . . . . . . . . . . . . . . . . . . . . . . . . . 190
