8. Iterate over all particles and stochastically attempt to select a particle with probability η∆t;
if selected, reset the particle velocity vector by sampling each component from the Maxwell-Boltzmann velocity distribution.
9. Repeat steps 3-8 until a sufficient number of timesteps have elapsed. Periodically compute
and save the value of some observables (e.g., temperature, pressure).
10. Approximate the ensemble-average value of the saved observables by averaging over states
sampled during a period of time during which the system is at equilibrium.
University of Wisconsin-Madison Lecture 13
CBE 710, Fall 2019 - Prof. R. C. Van Lehn October 17, 2019
Note again that this algorithm now introduces a stochastic element to molecular dynamics,
unlike the basic MD algorithm which in principle is deterministic. Finally, it is important to
recognize that the Andersen thermostat is just one simple approach for maintaining the system
temperature; in practice it has largely been supplanted by more advanced techniques (e.g.,
the Nosé-Hoover thermostat) that are considered more reliable.
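As a rough sketch of the stochastic velocity-reselection step (in reduced units; the collision frequency η and timestep ∆t values below are illustrative choices, not from the notes), the thermostat might be implemented as:

```python
import numpy as np

def andersen_step(velocities, masses, kT, eta, dt, rng):
    """Andersen-thermostat collision step (sketch, reduced units).

    Each particle is selected with probability eta*dt; selected particles
    have every velocity component redrawn from the Maxwell-Boltzmann
    distribution, i.e. a Gaussian with variance kT/m per component.
    """
    collide = rng.random(len(masses)) < eta * dt     # stochastic selection
    k = int(collide.sum())
    if k > 0:
        sigma = np.sqrt(kT / masses[collide])        # per-particle std dev
        velocities[collide] = rng.normal(0.0, sigma[:, None], size=(k, 3))
    return velocities

# usage sketch: 1000 unit-mass particles at kT = 1
rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, size=(1000, 3))
v = andersen_step(v, np.ones(1000), kT=1.0, eta=0.5, dt=0.1, rng=rng)
```

Note that only the selected particles are touched each step, which is what makes the sampled trajectory stochastic rather than deterministic.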
This expression integrates over all possible particle configurations in a continuous space with
phase space volume V^N, representing all possible combinations of positions. Here, we let E(r^N)
refer only to the potential energy of the system associated with particle-particle interactions and
ignore the kinetic energy contribution due to particle velocities, as we have noted previously that
velocities are only a function of the temperature and not of particle positions. Alternatively, we can
also write this integral by breaking up r^N into each of the N different position vectors for the
different particles.
\[ Z = \int_V d\mathbf{r}_1 \int_V d\mathbf{r}_2 \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, \exp\left[-\beta E(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \dots, \mathbf{r}_N)\right] \tag{13.2} \]
Here, subscripts indicate the index of one particular particle. Using either notation, the probability of finding the system in some configuration r^N is:

\[ p(\mathbf{r}^N) = \frac{\exp\left[-\beta E(\mathbf{r}^N)\right]}{\int_{V^N} d\mathbf{r}^N \, \exp\left[-\beta E(\mathbf{r}^N)\right]} \tag{13.3} \]
\[ = \frac{\exp\left[-\beta E(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \dots, \mathbf{r}_N)\right]}{\int_V d\mathbf{r}_1 \int_V d\mathbf{r}_2 \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, \exp\left[-\beta E(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \dots, \mathbf{r}_N)\right]} \tag{13.4} \]
This probability distribution function is called the configurational distribution, in analogy to the
velocity distribution discussed last class. Again, this is technically a probability density function,
like the Maxwell-Boltzmann distribution, giving the probability of observing the system in a small
volume of phase space near r^N; it therefore has units of volume^-N, since r^N spans a phase-space volume of V^N.
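Because the partition function Z cancels in ratios, relative configurational probabilities follow directly from the Boltzmann factor: p(A)/p(B) = exp(−β[E(A) − E(B)]). A minimal numerical sketch, assuming a Lennard-Jones pair potential with ε = σ = 1 (an illustrative choice, not specified in the notes):

```python
import numpy as np

def lj_energy(positions, eps=1.0, sig=1.0):
    """Total pair potential energy E(r^N) for a small configuration
    (Lennard-Jones with illustrative eps = sig = 1, reduced units)."""
    E = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(positions[i] - positions[j])
            E += 4.0 * eps * ((sig / r) ** 12 - (sig / r) ** 6)
    return E

beta = 1.0   # 1/(kB*T), reduced units
conf_a = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0]])  # near the LJ minimum
conf_b = np.array([[0.0, 0.0, 0.0], [2.00, 0.0, 0.0]])  # well separated
# Z cancels in the ratio: p(A)/p(B) = exp(-beta * [E(A) - E(B)])
ratio = np.exp(-beta * (lj_energy(conf_a) - lj_energy(conf_b)))
print(ratio)   # > 1: the lower-energy configuration is more probable
```

This is why simulation methods only ever need energy differences, never Z itself.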
Now, let’s consider a different question. Rather than asking the probability of a particular
configuration of particles, let’s ask the probability that particle 1 is at position r1 . Recall that back
in our derivation of the canonical ensemble, we divided a combined system into a small system
of interest and a bath, and we said that the probability that the bath obtains a microstate is
proportional to the number of equivalent bath states. We can use the same idea here, and say that
the probability of a configuration in which a specific particle has some particular position is given
by integrating the probability of a system configuration over all other particle positions. In other
words, we can write the following:
\[ p(\mathbf{r}_1) = \frac{\int_V d\mathbf{r}_2 \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, \exp\left[-\beta E(\mathbf{r}_1, \mathbf{r}_2, \mathbf{r}_3, \dots, \mathbf{r}_N)\right]}{Z} \tag{13.5} \]
\[ = \int_V d\mathbf{r}_2 \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, p(\mathbf{r}^N) \tag{13.6} \]
What this expression says is that we fix the position of particle 1 at r1 , then we integrate the
probability density function over the rest of the phase space where the other particle positions
can obtain any value. This expression is referred to as a reduced configurational distribution
because we are now asking the probability of being in one specific region of phase space (i.e., the
region where the position of particle 1 is fixed). This expression assumes that there is a particular
particle with index 1 at position r1. Alternatively, we could instead calculate the probability that
any of the N identical particles is at position r1, as opposed to a specific tagged particle. Since the
particles are identical, this probability is just N times larger, leading to:
\[ \tilde{p}(\mathbf{r}_1) = N \int_V d\mathbf{r}_2 \cdots \int_V d\mathbf{r}_N \, p(\mathbf{r}^N) \tag{13.9} \]
We could continue to generalize this notation, but there is no need for our present purposes.
\[ \tilde{p}(\mathbf{r}_1) = N p(\mathbf{r}_1) = \frac{N}{V} \equiv \rho \tag{13.12} \]
Here, ρ is simply the number density of the system. Thus, the density of the system tells us
the probability of finding a particle in any given small volume of the system independent of the
positions of its neighbors.
Now let’s consider the reduced configurational distribution for two particles:
\[ \tilde{p}(\mathbf{r}_1, \mathbf{r}_2) = N(N-1) \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, p(\mathbf{r}^N) \tag{13.13} \]
Unlike the former case of studying only a single particle, now there are possible correlations
between particles. That is, certain pairs of positions may be more likely than others due to the
interaction of particle 1 and 2, even when integrating over the positions of all other particles.
Let’s first assume that the particles in our system are completely uncorrelated because they do not
interact; that is, we have an ideal gas. In that case, the probability p(r1 , r2 ) can be factorized into
two single-particle probability distributions since the two particles are independent of each other.
We can then write:
\[ \tilde{p}(\mathbf{r}_1, \mathbf{r}_2) = N(N-1) \int_V d\mathbf{r}_3 \cdots \int_V d\mathbf{r}_N \, p(\mathbf{r}^N) \tag{13.14} \]
\[ = N(N-1)\, p(\mathbf{r}_1, \mathbf{r}_2) \tag{13.15} \]
\[ = N(N-1)\, p(\mathbf{r}_1)\, p(\mathbf{r}_2) \quad \text{if positions are independent/uncorrelated} \tag{13.16} \]
\[ = \frac{N(N-1)}{V^2} \tag{13.17} \]
\[ \approx \rho^2 \tag{13.18} \]
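A quick numerical check of how good the N(N − 1)/V² ≈ ρ² approximation is for a typical particle count (the numbers below are illustrative):

```python
# How good is N(N-1)/V^2 ~ rho^2?  (illustrative numbers, arbitrary units)
N = 10_000          # particle count, N >> 1
V = 1_000.0         # volume
rho = N / V
exact = N * (N - 1) / V**2
approx = rho**2
rel_err = (approx - exact) / exact   # algebraically equals 1/(N-1)
print(rel_err)      # ~1e-4: negligible for any realistic N
```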
Here, we assume that N ≫ 1 for a typical system and approximate this joint probability
distribution function as the density squared. This result holds for an ideal gas, but in general a
real system will deviate from ideality due to interactions between pairs of
particles. We will define a function, g(r1, r2), as:
\[ g(\mathbf{r}_1, \mathbf{r}_2) = \frac{\tilde{p}(\mathbf{r}_1, \mathbf{r}_2)}{\tilde{p}(\mathbf{r}_1, \mathbf{r}_2)_{\text{ideal}}} \tag{13.19} \]
\[ = \frac{\tilde{p}(\mathbf{r}_1, \mathbf{r}_2)}{\rho^2} \tag{13.20} \]
This expression is called the pair-correlation function because it reflects the tendency for
pairs of particles to be distributed at certain positions relative to each other with probabilities that
deviate from what would be expected for an ideal gas. The expression would be 1 for all pairs of
positions for an ideal gas. For completeness, we can rewrite this as p̃(r1, r2) = ρ² g(r1, r2);
this expression will be useful in the next lecture. For an isotropic system, g(r1, r2) only depends
on the distance between two particles, r = |r1 − r2 |. Again, this is because all directions in an
isotropic fluid are equivalent, so if we rotate the entire system there is no change in the probability.
Equivalently, I can imagine placing particle 1 at some position, then placing particle 2 a distance r
away; rotating the entire system, including particle 2, while preserving the same distance r does
not change the system properties. For an isotropic system we can then write the pair-correlation
function as

\[ g(\mathbf{r}_1, \mathbf{r}_2) = g(|\mathbf{r}_1 - \mathbf{r}_2|) \equiv g(r) \]
where r is the scalar distance between the particles at positions r1 and r2. g(r) is called the radial
distribution function.
If we assume that one of these particles is always at the origin of the coordinate system, and we
know that the probability of that particle being at the origin is ρ since that is the single-particle
reduced distribution function, then we arrive at the result:
\[ g(\mathbf{r}_1, \mathbf{r}_2) = g(r) = \frac{\tilde{p}(\mathbf{r}_1, \mathbf{r}_2)}{\rho^2} \tag{13.24} \]
\[ = \frac{\tilde{p}(\mathbf{r}_1 = 0)\, \tilde{p}(\mathbf{r}_1 = 0, r)}{\rho^2} \tag{13.25} \]
\[ \therefore\; g(r) = \frac{\tilde{p}(\mathbf{r}_1 = 0, r)}{\rho} \tag{13.26} \]
Here, we write the conditional probability density that any particle is at a distance r away from
any particle placed at the origin. Hence we finally achieve the following definition: for a given
particle i in a system, which we will say is “tagged” and placed at the origin of the coordinate
system, the radial distribution function tells us the probability of a particle being a distance r away
from the tagged particle relative to what would be observed in an ideal gas. Alternatively, we could
say that the quantity ρg(r) tells us the average number density of particles at a distance r away
from a single tagged particle. This last definition is probably the most intuitive.
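To make that last interpretation concrete: the average number of particles in a thin spherical shell of radius r and thickness dr around a tagged particle is ρ g(r) · 4πr² dr. A tiny sketch for an ideal gas, where g(r) = 1 everywhere (the density and shell values below are illustrative):

```python
import math

# Average number of neighbors in a thin shell around a tagged particle:
# n(r) dr = rho * g(r) * 4*pi*r^2 * dr.  Ideal gas => g(r) = 1.
rho = 0.8            # bulk number density (illustrative, reduced units)
r, dr = 1.5, 0.05    # shell radius and thickness (illustrative)
g_r = 1.0            # no correlations for an ideal gas
n_shell = rho * g_r * 4.0 * math.pi * r**2 * dr
print(n_shell)       # ~1.13 particles in the shell on average
```

In a real liquid, g(r) > 1 near the first coordination shell would raise this count above the ideal-gas value.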
1. During a converged simulation, periodically iterate over all particles. For each particle, calculate the distance r to every other particle.
2. For each distance, calculate the corresponding index of the bin in the histogram; that is,
calculate r/dr and round to an integer (or round down, depending on how you define your
bins). Increase the value of this bin by 1.
3. Repeat steps 1 and 2 a number of times during a simulation (i.e., every time you save the
energy after convergence).
4. Normalize the histogram at the end of the simulation. This requires dividing each bin by the
number of samples (to time average), the number of particles in the system (since we iterate
over all particles each timestep), and the volume of the bin (to get a density). Finally, divide
all bins by the bulk density (N/V ) to get g(r).
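The steps above can be sketched in Python; the cubic periodic box and minimum-image convention used here are assumptions of this sketch, not specified in the notes:

```python
import numpy as np

def radial_distribution(frames, box_length, dr, r_max):
    """g(r) via the histogram procedure above (sketch).

    frames: list of (N, 3) position arrays saved after convergence.
    Assumes a cubic periodic box with the minimum-image convention;
    choose r_max < box_length / 2.
    """
    n_bins = int(round(r_max / dr))
    hist = np.zeros(n_bins)
    N = frames[0].shape[0]
    for pos in frames:                                   # step 3: each saved sample
        for i in range(N):                               # step 1: each particle
            d = pos - pos[i]
            d -= box_length * np.round(d / box_length)   # minimum image
            r = np.linalg.norm(d, axis=1)
            r = r[(r > 0.0) & (r < r_max)]               # drop self-distance
            idx = np.minimum((r / dr).astype(int), n_bins - 1)
            np.add.at(hist, idx, 1)                      # step 2: bin the distances
    # step 4: normalize by samples, particles, shell volume, and bulk density
    edges = np.arange(n_bins + 1) * dr
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    rho = N / box_length**3
    g = hist / (len(frames) * N * shell_vol * rho)
    return 0.5 * (edges[1:] + edges[:-1]), g

# usage sketch: ideal-gas (uniformly random) configurations give g(r) ~ 1
rng = np.random.default_rng(1)
frames = [rng.random((500, 3)) * 10.0 for _ in range(20)]
r_mid, g = radial_distribution(frames, box_length=10.0, dr=0.1, r_max=4.0)
```

Running this on uncorrelated random positions is a useful sanity check of the normalization, since the result should fluctuate around 1 at all r.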
In formal terms, the procedure above evaluates

\[ g(r) = \frac{1}{\rho} \frac{1}{N} \frac{1}{V(r)} \left\langle \sum_{i=1}^{N} \sum_{j \neq i} \delta(r - r_{ij}) \right\rangle \]

The function δ(r − r_ij) is a delta function that, in this binned approximation, returns 1 if the distance is within the small bin width dr around the desired distance r; this is captured by incrementing bins in our histogram. The ensemble average of this sum corresponds to averaging over the number of samples (i.e., time averaging). The term 1/V(r) normalizes each bin by its volume to give an average density per bin. The term 1/N normalizes the sum by the number of particles, since we iterate over all particles each time we sample. Finally, 1/ρ normalizes g(r) by the bulk density, as desired. Calculating this for each value of r yields a complete g(r) curve.