
**Course material with Java applets**

Franz J. Vesely

Computational Physics Group

Institute of Experimental Physics, University of Vienna,

Boltzmanngasse 5, A-1090 Vienna, Austria, Europe

Copyright F. Vesely 1996-2005

franz.vesely@univie.ac.at

This tutorial is regularly updated while I give the 2004/05 course. The final version will be available towards January 31, 2005.

January 25, 2005

Chapter 1

Why is water wet?

Statistical mechanics is the attempt to explain the properties of matter from first principles. In the framework of the phenomenological theory of energy and matter, Thermodynamics, these properties were measured and interrelated in a logically consistent manner. The resulting theoretical body allowed scientists to write up exact relations between measured properties. However, as long as the molecular or atomic nature of matter was not included in the theory, a complete prediction of the numerical values of these observables was impossible.

To take an example: the entropy difference S_2 − S_1 between two states of a system was operationally defined as the integral S_2 − S_1 ≡ ∫_1^2 dQ/T (assuming a reversible path from 1 to 2), but the numerical value of S could not be predicted even for an ideal gas. Similarly, thermodynamic reasoning shows that the difference between the specific heats C_P and C_V must exactly equal C_P − C_V = TVα²/κ_T, where α and κ_T are the expansion coefficient and the isothermal compressibility, respectively; but a specific value for C_P or C_V cannot be predicted.

Let us dream for a moment: what if we could calculate, say, the entropy S of a piece of matter on the basis of microscopic considerations? Assuming we could find an explicit expression for S(E, V) – entropy as a function of energy and volume – then this would give us access to all the riches of thermodynamic phenomena. All we would have to do is take the inverse function E(S, V) and apply the thermodynamic relations T ≡ (∂E/∂S)_V, C_V ≡ (∂E/∂T)_V, P ≡ −(∂E/∂V)_S, etc.

Our project then is to describe the mechanics of an assembly of many (≈ 10^24) particles with mutual interactions. The only way to do this is by application of statistical methods: a rigorous analysis of the coupled motions of many particles is simply impossible.

It should be noted, however, that with the emergence of fast computing machines it became possible to perform numerical simulations on suitable model systems. It is sometimes sufficient to simulate systems of only several hundred particles: the properties of such small systems differ by no more than a few percent from those of macroscopic samples. In this manner we may check theoretical predictions referring to simplified – and therefore not entirely realistic – model systems. An important example is a "gas" made up of hard spheres. The properties of such gases may be predicted by theory and "measured" in a simulation. The importance of computer simulation for research does not end here. In addition to simple, generic models one may also simulate more realistic and complex systems. In fact, some microscopic properties of such systems are accessible neither to direct experiment nor to theory – but they are accessible to simulation.

In the context of this course simulations will be used mainly for the visualisation of the statistical-mechanical truths which we derive by mathematical means. Or vice versa: having watched the chaotic buffeting of a few dozen simulated particles we will courageously set out to analyse the essential, regular features hidden in that disorderly motion.

Thus our modest goal will be to identify a microscopic quantity that has all the properties of the thing we call entropy. For a particularly simple model system, the ideal gas, we will even write down an explicit formula for the function S(E, V). Keeping our eyes open as we take a walk through the neat front yard of ideal gas theory, we may well learn something about other, more complex systems such as crystals, liquids, or photon gases.

The grown-up child's question why water is wet will have to remain unanswered for some time yet. Even if we reformulate it in appropriate terms, asking for a microscopic-statistical explanation of the phenomenon of wetting, it exceeds the scope of this introductory treatment. – And so much the better: curiosity, after all, is the wellspring of all science.

1.1 A quick résumé of thermodynamics

• Thermodynamic state — State variables

We can describe the state of a thermodynamic system uniquely by specifying a small number of macroscopic observables (state variables) X_i: amount of substance (in mol) n; pressure P; volume V; temperature T; internal energy E; etc.

Equations of state of the form f(X_1, X_2, ...) = 0 serve to reduce the number of independent state variables:

PV = nRT; E = nC_V T (ideal gas); (P + an²/V²)(V − bn) = nRT (real gas; Van der Waals); etc.
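These equations of state are easy to put to work numerically. The following Python sketch (the course applets are in Java, but a few lines of Python are more compact here) compares the ideal-gas and Van der Waals pressures at low density; the constants a and b are illustrative values of the order of those for CO2, not taken from the text.

```python
# Sketch: comparing the ideal-gas and Van der Waals equations of state.
# The constants a, b are illustrative (roughly CO2, SI units).
R = 8.314  # gas constant, J/(mol K)

def p_ideal(n, V, T):
    """Pressure from PV = nRT."""
    return n * R * T / V

def p_vdw(n, V, T, a=0.364, b=4.27e-5):
    """Pressure from (P + a n^2/V^2)(V - b n) = n R T."""
    return n * R * T / (V - b * n) - a * n**2 / V**2

# At low density the two pressures nearly coincide:
n, V, T = 1.0, 0.1, 300.0          # 1 mol in 100 litres at 300 K
print(p_ideal(n, V, T), p_vdw(n, V, T))
```

At one mole per 100 litres the two pressures differ by only about a tenth of a percent, illustrating why the ideal-gas law works so well for dilute gases.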

• First Law

Joule's experiment (heat of friction: stirring water with a spindle driven by a weight) proved that heat is a form of energy. Mechanical energy (W) and heat (Q) may be converted both ways. Following general custom we assign a positive sign to mechanical work performed by the system, while heat is counted positive when it is fed into the system. The law of energy conservation then reads, taking into account the internal energy E,

dE = dQ − dW   (1.1)

Frequently the mechanical energy occurs in the form of "volume work", i.e. work performed against the ambient pressure when the system expands: dW = P dV (piston lifting weight).

It should be noted that E is a state variable, while Q and W may depend not only on the state of the system but also on the way it has reached that state; we may stress this by using the notation dE = đQ − đW. (Quantities such as Q and W are often called process variables.)

Internal energy of an ideal gas: The expansion experiment (also by Joule: expand air from one bottle into an additional, evacuated bottle) demonstrates that E_idG = C_V T, meaning that the internal energy of an ideal gas depends only on the temperature T. (Do not confuse this with Joule and Thomson's throttling experiment, which relates to real gases.)

Since ∆W = 0 we have ∆E = ∆Q; but as T_2 = T_1 (empirical fact) there follows ∆Q ∝ ∆T = 0. Thus ∆E = 0. However, since the volumes of the final and initial states differ, the internal energy E cannot depend on V. In other words, for an ideal gas the formula E = E(V, T) reads simply E = E(T).

We know even more about E(T). Feeding energy to the gas in an isochoric way, i.e. with V = const, is possible only by heating: ∆E = ∆Q = C_V ∆T. Therefore we find, using the empirical fact that C_V = const, that E(T) = C_V T.

• Thermal interaction — Thermal equilibrium — Temperature

Bringing two bodies in contact such that they may exchange energy, we will in general observe a flow of a certain amount of energy in one direction. As soon as this flow has ebbed off, we have reached thermal equilibrium – by definition. A measurable quantity to characterize this equilibrium state is the temperature: [T_1 = T_2]_eq.

• Second Law

Not all transformations of energy that are in accordance with the First Law are really possible. The Second Law tells us which transformations are excluded. There are several logically equivalent formulations:

Impossible are those processes in which nothing happens but a complete transformation of ∆Q into ∆W. [Kelvin]

Example: Expansion of a gas against a weighted piston; here we have a complete change of ∆Q into ∫P dV, but the end state is different from the initial one (larger V).

Impossible are those processes in which nothing happens but a transport of heat ∆Q from a colder to a warmer reservoir. [Clausius]

Example: Heat pump; here we do transport ∆Q, but in order to do so we have to feed in mechanical energy.

• Reversible processes

If a process is such that the inversion of the arrow of time, t −→ −t, will again refer to a possible process, this process is called reversible. It is a consequence of the Second Law that there are processes which are not reversible.

Example 1: Reversible expansion

Let an ideal gas expand isothermally – i.e. with appropriate input of heat – against a (slowly diminishing!) weight. The internal energy of the gas remains constant (E_idG = nC_V T), and we have

∆Q = ∆W = ∫_1^2 P(V) dV = nRT ∫_1^2 dV/V = nRT ln(V_2/V_1)   (1.2)

This process may be reversed: by slowly increasing the weight load on the piston and removing any excess heat to a thermostatted heat bath we may recompress the gas isothermally, to reach the initial state again.
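Equ. 1.2 is easy to verify numerically. The sketch below (Python, with illustrative values: 1 mol at 300 K, volume doubled) compares the closed form nRT ln(V_2/V_1) with a direct midpoint-rule integration of P(V) dV = (nRT/V) dV.

```python
import math

R = 8.314  # J/(mol K)

def isothermal_work(n, T, V1, V2):
    """Reversible isothermal work, equ. (1.2): n R T ln(V2/V1)."""
    return n * R * T * math.log(V2 / V1)

# Doubling the volume of 1 mol at 300 K:
W = isothermal_work(1.0, 300.0, 1.0, 2.0)

# Cross-check: midpoint-rule integral of P(V) = nRT/V from V1 to V2.
steps = 100000
dV = (2.0 - 1.0) / steps
W_num = sum(1.0 * R * 300.0 / (1.0 + (i + 0.5) * dV) * dV for i in range(steps))
print(W, W_num)   # both ~ 1729 J
```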

Example 2: Irreversible expansion

Let an ideal gas expand into vacuum (Joule's expansion experiment). Result: T_2 = T_1, therefore no change of internal energy. In this case no energy has been exerted and stored, but the volume has increased.

Changing the direction of time does not produce a possible process: the shrinking of a gas to a smaller volume without input of mechanical energy does not occur.

• Entropy

A mathematically powerful formulation of the Second Law relies on a quantity which is defined as follows: Let 1 −→ 2 be a reversible path from state 1 to 2. The difference of entropies is then given by

∆S ≡ S(2) − S(1) = ∫_1^2 dQ/T   (1.3)

S has the following important properties:

◦ S is a state variable, i.e. in any particular physical state of a system the entropy S has a well-defined value, independent of what happened to the system before. This value, however, is defined only up to an additive constant; the Third Law removes this ambiguity by stating that S(T → 0) = 0.

◦ ∆S > ∫_1^2 dQ/T in all irreversible processes.

◦ When a thermally isolated system is in equilibrium the entropy is at its maximum.

◦ Thus in a thermally isolated system (the universe being a notable special case) we have dQ = 0 and therefore ∆S ≥ 0, meaning that the entropy will never decrease spontaneously.

◦ As a consequence of the First Law dE = T dS − P dV we have

(∂E/∂S)_V = T   (1.4)

But this means that we can describe thermal equilibrium in yet another way: the flow of energy between two bodies in contact will come to an end when

(∂E/∂S)_1 = (∂E/∂S)_2   (1.5)

◦ If two systems are in thermal equilibrium then S_tot = S_1 + S_2; entropy is additive (extensive).

• Entropy applied:

By scrutinizing the entropy balance in a thermodynamic process we may find out whether that process is, according to the Second Law, a possible one. In particular, we can easily decide whether the change of state is reversible or not.

Example 1: Reversible expansion

In the former experiment the entropy of the gas increases by the amount

∆S_gas = ∆Q/T = ∆W/T = nR ln(V_2/V_1)   (1.6)

At the same time the heat reservoir delivers the energy ∆Q to the gas; this decreases the reservoir's entropy by

∆S_res = −∆Q/T = −nR ln(V_2/V_1)   (1.7)

Thus the entropy of the combined system remains the same, and the reverse process is possible as well. =⇒ reversible.

Example 2: Irreversible expansion

Let the initial and final states of the gas be the same as above. Since S is a state variable we have as before

∆S_gas = nR ln(V_2/V_1)   (1.8)

This time, however, no energy is fed in, which makes ∆S_res = 0 and thus ∆S_tot > 0. The reverse process would therefore give rise to a decrease in entropy – which is forbidden: =⇒ irreversible.
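The entropy bookkeeping of the two examples can be condensed into a few lines of Python: the gas term follows equs. 1.6/1.8, while the reservoir term is −∆Q/T for the reversible path and zero for the free expansion (a sketch of the balance only, not of the dynamics).

```python
import math

R = 8.314  # J/(mol K)

def dS_gas(n, V1, V2):
    """Entropy change of an ideal gas expanding from V1 to V2 (equs. 1.6/1.8)."""
    return n * R * math.log(V2 / V1)

def dS_reservoir(n, V1, V2, reversible):
    """-Delta Q / T for the reversible path; 0 for expansion into vacuum."""
    return -n * R * math.log(V2 / V1) if reversible else 0.0

for rev in (True, False):
    total = dS_gas(1.0, 1.0, 2.0) + dS_reservoir(1.0, 1.0, 2.0, rev)
    print("reversible" if rev else "irreversible", "dS_tot =", total)
```

The reversible path gives ∆S_tot = 0; the free expansion gives ∆S_tot = nR ln 2 > 0, flagging it as irreversible.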

• Macroscopic conditions and thermodynamic potentials

The thermodynamic state – the "macrostate" – of a gas is uniquely determined by two independent state variables. Either of the two mechanical variables – P or V – may be combined with one of the two thermal variables – T or S – to describe the state. Of course, the natural choice will be that pair of variables which is controlled, or monitored, by the experimenter. For example, if a sample of gas is contained in a thermostatted (T) cylinder with a piston whose position (V) is directly observed, the description in terms of (V, T) is indicated. Then again, it may be the force on the piston, and not its position, which we measure or control; the natural mechanical variable is then the pressure P. And instead of holding the temperature constant we may put a thermal insulation around the system, thus conserving its thermal energy or, by 0 = dQ = T dS, its entropy. To describe this experimental situation we will of course choose the pair (P, S). Let us consider the possible choices:

* (S, V): The system's volume and heat content are controlled. This may be done by thermally insulating the sample (while allowing for controlled addition of heat, dQ = T dS) and manipulating the position of the piston, i.e. the sample volume V. The internal energy is an (often unknown) function of these variables: E = E(S, V).

Now let the macrostate (S, V) be changed reversibly, meaning that we introduce small changes dS, dV of the state variables. According to the First Law, the change dE of internal energy must then be the net sum of the mechanical and thermal contributions: dE(S, V) = (∂E/∂S)_V dS + (∂E/∂V)_S dV. Comparing this to dE = T dS − P dV we derive the thermodynamic ("Maxwell's") relations P = −(∂E/∂V)_S and T = (∂E/∂S)_V.

Special case dS = 0: For adiabatic expansion we see that the internal energy of the gas changes by ∆E = −∫P dV – just the amount of mechanical work done by the gas.

The internal energy E(S, V) is our first example of a Thermodynamic Potential. Its usefulness stems from the fact that the thermodynamic observables P and T may be represented as derivatives of E(S, V). We will now consider three more of these potentials, each pertaining to another choice of state variables.

* (V, T): The system's volume and its temperature are controlled.

The appropriate thermodynamic potential is then the Helmholtz Free Energy defined by A(V, T) ≡ E − TS. Using dA = dE − T dS − S dT = −P dV − S dT we may derive the relations P = −(∂A/∂V)_T and S = −(∂A/∂T)_V.

Special case dT = 0: Considering a process of isothermal expansion we find that now the free energy of the gas changes by ∆A = −∫P dV – again, the amount of mechanical work done by the gas.

This is worth remembering: the mechanical work done by a substance (not just an ideal gas!) that is allowed to expand isothermally and reversibly from volume V_1 to V_2 is equal to the difference between the free energies of the final and initial states, respectively.

* (T, P): Heat bath and force-controlled piston.

Gibbs Potential G(P, T) ≡ A + PV: dG = dA + P dV + V dP = −S dT + V dP =⇒ S = −(∂G/∂T)_P and V = (∂G/∂P)_T.

Special case dT = 0: The basic experiment described earlier, isothermal expansion against a controlled force (= load on the piston), is best discussed in terms of G: the change ∆G equals simply ∫_1^2 V(P; T = const) dP, which may be either measured or calculated using an appropriate equation of state V(P, T).

* (P, S): As in the first case, the heat content is controlled; but now the force acting on the piston (and not the latter's position) is the controllable mechanical parameter.

Define the Enthalpy H ≡ E + PV: dH = dE + P dV + V dP = T dS + V dP =⇒ T = (∂H/∂S)_P and V = (∂H/∂P)_S.

Special case dS = 0: Again, the system may be allowed to expand, but without replacement of the exported mechanical energy by the import of heat (adiabatic expansion). The enthalpy difference is then just ∆H = ∫_1^2 V(P; S = const) dP, which may be measured. (In rare cases, such as the ideal gas, there may be an equation of state for V(P; S = const), which allows us to calculate the integral.)

The multitude of thermodynamic potentials may seem bewildering; it should not. As mentioned before, they were actually introduced for convenience of description by the "natural" independent variables suggested by the experiment – or by the machine design: we should not forget that the early thermodynamicists were out to construct thermomechanical machines.

There is an extremely useful and simple mnemonic scheme, due to Maxwell, that allows us to reproduce the basic thermodynamic relations without recourse to longish derivations: Maxwell's square. Here it is:

Figure 1.1: Thermodynamics in a square: Maxwell’s mnemonic device

• First of all, it tells us that A = A(V, T), E = E(S, V), etc.

• Second, the relations between the various thermodynamic potentials are barely hidden – discover the rules yourself: A = E − TS; H = E + PV; G = A + PV.

• Next, the eight particularly important Maxwell relations for the state variables V, T, P, S may be extracted immediately:

V = (∂H/∂P)_S = (∂G/∂P)_T   (1.9)

T = (∂H/∂S)_P = (∂E/∂S)_V   (1.10)

P = −(∂A/∂V)_T = −(∂E/∂V)_S   (1.11)

S = −(∂A/∂T)_V = −(∂G/∂T)_P   (1.12)

• Using these relations we may easily write down the description of infinitesimal state changes in the various representations given above. To repeat,

dE(V, S) = −P dV + T dS   (1.13)

dA(V, T) = −P dV − S dT   (1.14)

dG(P, T) = V dP − S dT   (1.15)

dH(P, S) = V dP + T dS   (1.16)
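A relation such as P = −(∂A/∂V)_T can be checked numerically. In the sketch below, only the V-dependent part of the ideal-gas free energy is used, A(V, T) = −nRT ln V (any V-independent function of T drops out of the derivative); the finite-difference result must reproduce P = nRT/V.

```python
import math

# Numerical check of P = -(dA/dV)_T for an ideal gas.  Only the
# V-dependent part of A is needed: A(V, T) = -n R T ln V (sketch).
R = 8.314  # J/(mol K)

def A(V, T, n=1.0):
    return -n * R * T * math.log(V)

def P_from_A(V, T, h=1e-7):
    """Central finite difference for -(dA/dV)_T."""
    return -(A(V + h, T) - A(V - h, T)) / (2.0 * h)

V, T = 0.05, 300.0
print(P_from_A(V, T), R * T / V)   # both close to nRT/V
```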

Chemical Potential: So far we have assumed that the mass (or particle) content of the considered system was constant. If we allow the number n of moles to vary, another important parameter enters the scene. Generalizing the internal energy change to allow for a substance loss or increase, we write

dE(S, V, n) = −P dV + T dS + µ dn   (1.17)

In other words, the chemical potential µ quantifies the energy increase of a system upon addition of substance, keeping entropy and volume constant: µ = (∂E/∂n)_{S,V}. Since S is itself dependent on n another, equally valid relation is usually preferred:

µ = (∂G/∂n)_{T,P}   (1.18)

The name given to this state variable stems from the fact that substances tend to undergo a chemical reaction whenever the sum of their individual chemical potentials exceeds that of the reaction product. However, the importance of µ is not restricted to chemistry. We will encounter it several times on our way through statistical/thermal physics.

That thing called entropy:

We now set a definite goal for this tutorial. From an analysis of the statistical properties of many-body systems we hope to find a microscopic correlate for the quantity which thermodynamicists introduced under the name of Entropy.

Thus we will look out for a function S(E, V) that must have these properties: Given two systems (e.g. ideal gases) with n_1, V_1, E_1 and n_2, V_2, E_2 = E − E_1 which are allowed to exchange heat with the constraint that the total energy E remains constant:

• The regular flow of energy in one direction is to stop as soon as

dS_1/dE_1 = dS_2/dE_2   (equilibrium)

• The final state shall be characterized by

S = S_1 + S_2   (additivity)

If we succeed in finding such an S(E, V), we can in principle use the inverse function E(S, V) and the relations (∂E/∂V)_S = −P etc. to build up thermodynamics from first principles!

1.2 Model systems

Statistical physics must make use of models. To simplify matters we sometimes invoke crudely

simpliﬁed images of reality, which may seem to render the results of theory useless. However,

many fundamental statements are all but independent of the particular model system used in

their derivation. Instead, they are consequences of a few basic properties common to all many-

body systems, and of the laws of statistics. Thus the following models should be understood

primarily as vehicles for demonstrating statistical-mechanical predictions.

• STADIUM BILLIARD

This model, introduced by Bunimovich and Sinai, will serve us to demonstrate the existence of chaos even in quite small systems. Chaos, it must be understood, is a fundamental prerequisite for the application of statistical rules. It is a basic property of chaotic systems that they will acquire each one of a certain set of "states" with equal probability, i.e. with equal relative frequency. It is this "equal a priori probability" of states which we need to proceed into the heart of Statistical Mechanics.

The stadium billiard is defined as follows. Let a "light ray" or "mass point" move about in a two-dimensional container with reflecting or ideally elastic walls. The boundaries have both flat and semicircular parts, which gives rise to an efficient mixing of the flight directions upon reflection. Such a system is chaotic, meaning that the motional degrees of freedom {v_x, v_y} exhibit a uniform probability distribution. Since we have v_x² + v_y² = const the "phase points" describing the momentary motion are distributed evenly over the periphery of a circle. The model is easily extended into three dimensions; the velocity then has three components, energy conservation keeps the phase points on the surface of a sphere, and equal a priori probability (or chaos) means that the distribution of points on this "energy surface" is homogeneous – nowhere denser or thinner than elsewhere.
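The direction-mixing effect of curved walls can be sketched in a few lines of Python (the course applets themselves are in Java). The geometry below – a unit box with a central reflecting disc, closer to the Sinai variant than to the stadium – and all parameters are illustrative; small fixed time steps replace a proper event-driven scheme.

```python
import math

# Point particle in the unit square with a central reflecting disc
# ("randomizer").  Illustrative geometry, small-time-step sketch.
cx, cy, rad = 0.5, 0.5, 0.2
x, y = 0.1, 0.12
phi0 = 0.3
vx, vy = math.cos(phi0), math.sin(phi0)
dt = 0.002

bins = [0] * 8                       # histogram of flight directions
for step in range(200000):
    x += vx * dt
    y += vy * dt
    # specular reflection at the walls: invert the normal component
    if (x < 0 and vx < 0) or (x > 1 and vx > 0): vx = -vx
    if (y < 0 and vy < 0) or (y > 1 and vy > 0): vy = -vy
    # reflection at the disc: v -> v - 2 (v.n) n, n = outward unit normal
    dx, dy = x - cx, y - cy
    d = math.hypot(dx, dy)
    if d < rad:
        nx, ny = dx / d, dy / d
        vn = vx * nx + vy * ny
        if vn < 0:                   # only if moving inward
            vx -= 2 * vn * nx
            vy -= 2 * vn * ny
    if step > 5000:                  # histogram after a short transient
        k = int((math.atan2(vy, vx) + math.pi) / (2 * math.pi) * 8) % 8
        bins[k] += 1

print(bins)                          # flight directions spread over all bins
print(vx * vx + vy * vy)             # v_x^2 + v_y^2 = const (here 1)
```

All reflections preserve the speed, so the phase point stays on the circle v_x² + v_y² = const while the direction histogram flattens out.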

Simulation: Stadium Billiard in 2 Dimensions

Figure 1.2: Applet Stadium

– See the trajectory of the particle (ray); note the frequency histograms for flight direction φ ≡ arctan(v_y/v_x) and x-velocity v_x.

– "Chaos" is demonstrated by simultaneously starting a large number of trajectories with nearly identical initial directions: → fast emergence of equidistribution on the circle v_x² + v_y² = const.

[Code: Stadium]

Figure 1.3: Applet VarSinai


Figure 1.4: Quantum particle in a 2-dimensional box: n_x = 3, n_y = 2

Simulation: Sinai Billiard

– Ideal one-particle gas in a box having randomizers on its walls.

– See the trajectory of the particle (ray); note the frequency histograms for flight direction φ ≡ arctan(v_y/v_x) and x-velocity v_x.

[Code: VarSinai]

• CLASSICAL IDEAL GAS

N particles are confined to a volume V. There are no mutual interactions between molecules – except that they may exchange energy and momentum in some unspecified but conservative manner. The theoretical treatment of such a system is particularly simple; nevertheless the results are applicable, with some caution, to gases at low densities. Note that air at normal conditions may be regarded as an almost ideal gas.

The momentary state of a classical system of N point particles is completely determined by the specification of all positions {r_i} and velocities {v_i}. The energy contained in the system is entirely kinetic, and in an isolated system with ideally elastic walls remains constant.

• IDEAL QUANTUM GAS

Again, N particles are enclosed in a volume V. However, the various states of the system are now to be specified not by the positions and velocities but according to the rules of quantum mechanics. Consider first a single particle in a one-dimensional box of length L. The solutions of Schroedinger's equation

HΨ(x) = EΨ(x) with H ≡ −(ℏ²/2m) ∇²   (1.19)

are in this case Ψ_n(x) = √(2/L) sin(nπx/L), with the energy eigenvalues E_n = h²n²/8mL² (n = 1, 2, . . .).

In two dimensions – the box being quadratic with side length L – we have for the energies

E_n = (h²/8mL²)(n_x² + n_y²)   (1.20)

where n ≡ {n_x, n_y}. A similar expression may be found for three dimensions.

If there are N particles in the box we have, in three dimensions,

E_{N,n} = (h²/8mL²) Σ_{i=1}^{N} [n_ix² + n_iy² + n_iz²]   (1.21)

where n_ix is the x quantum number of particle i.

Note: In writing the sum 1.21 we only assumed that each of the N particles is in a certain state n_i ≡ (n_ix, n_iy, n_iz). We have not yet considered whether any combination of single particle states n_i (i = 1, . . . , N) is indeed permitted, or whether certain n_i, n_j might exclude each other (Pauli principle for fermions).
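Because the states of equ. 1.20 are discrete, they can simply be enumerated. The sketch below counts the single-particle states of a 2-d box below an energy threshold, in reduced units where h²/8mL² = 1 (so E_n = n_x² + n_y²); the cutoff n_cut is an illustrative implementation detail.

```python
# Counting single-particle states of equ. (1.20) with E_n <= E_max,
# in units where h^2 / 8 m L^2 = 1, i.e. E_n = nx^2 + ny^2.
def states_below(E_max, n_cut=100):
    return [(nx, ny) for nx in range(1, n_cut) for ny in range(1, n_cut)
            if nx * nx + ny * ny <= E_max]

print(len(states_below(2)))    # only (1, 1)
print(len(states_below(5)))    # (1,1), (1,2), (2,1)
print(len(states_below(100)))  # grows roughly like (pi/4) E_max
```

For large E_max the count approaches the area (π/4)E_max of the quarter disc in (n_x, n_y) space; such state counting is exactly what we will need later for entropies of quantum gases.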

• HARD SPHERES, HARD DISCS

Again we assume that N such particles are confined to a volume V. However, the finite-size objects may now collide with each other, and at each encounter will exchange energy and momentum according to the laws of elastic collisions. A particle bouncing back from a wall will only invert the respective velocity component.

At very low densities such a model system will of course resemble a classical ideal gas. However, since there is now an – albeit simplified – mechanism for the transfer of momentum and energy, the model is a suitable reference system for kinetic theory, which is concerned with the transport of mechanical quantities. The relevant results will be applicable as first approximations also to moderately dense gases and fluids.

In addition, the HS model has a special significance for the simulation of fluids.
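The elastic collision rule for two equal-mass discs is short enough to spell out: the velocity components along the line of centres are exchanged, the tangential components are untouched. This is a sketch of the collision rule alone (unit masses assumed), not of a full event-driven simulation.

```python
# Elastic collision of two equal-mass hard discs.  r1, r2 are the centre
# positions at contact, v1, v2 the incoming velocities (2-d tuples).
def collide(r1, v1, r2, v2):
    nx = r2[0] - r1[0]; ny = r2[1] - r1[1]
    d = (nx * nx + ny * ny) ** 0.5
    nx /= d; ny /= d                 # unit vector along the line of centres
    dv = (v1[0] - v2[0]) * nx + (v1[1] - v2[1]) * ny   # relative normal speed
    w1 = (v1[0] - dv * nx, v1[1] - dv * ny)            # normal components are
    w2 = (v2[0] + dv * nx, v2[1] + dv * ny)            # exchanged
    return w1, w2

# Head-on collision along x: the discs simply exchange velocities.
w1, w2 = collide((0, 0), (1, 0), (1, 0), (-1, 0))
print(w1, w2)   # (-1.0, 0.0) (1.0, 0.0)
```

Momentum and kinetic energy are conserved by construction, which is all the statistics of the model will ever rely on.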

Simulation: Hard discs

– "Gas" of hard discs in a square box with reflecting walls. To achieve chaos even with a single moving particle the walls are decorated with "randomizers", i.e. reflecting semicircles. The motion of 1, 2, . . . N particles is followed.

– Displayed: trajectory; frequency histograms for flight direction φ ≡ arctan(v_y/v_x) and x-speed v_x.

[Code: Harddisks]

Figure 1.5: Applet Harddisks


Simulation: Hard spheres

– "Gas" of N hard spheres in a cubic box with reflecting walls. To achieve chaos even with a single moving particle the walls are decorated with "randomizers", i.e. reflecting semispheres. The motion of 1, 2, . . . N particles is followed.

– Displayed: trajectory; frequency histograms for flight direction φ ≡ arctan(v_y/v_x) and x-speed v_x.

[Code: Hspheres]

Figure 1.6: Applet Hspheres

⇒ Simulation 1.4: N hard discs in a 2D box with periodic boundaries. [Code: Hdiskspbc, UNDER CONSTRUCTION]

• LENNARD-JONES MOLECULES

This model fluid is defined by the interaction potential (see figure)

U(r) = 4ε[(r/σ)^−12 − (r/σ)^−6]   (1.22)

where ε (potential well depth) and σ (contact distance) are substance-specific parameters. In place of the hard collisions we have now a continuous repulsive interaction at small distances; in addition there is a weak attractive force at intermediate pair distances. The model is fairly realistic; the interaction between two rare gas atoms is well approximated by equ. 1.22.
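Equ. 1.22 takes one line of Python to evaluate. In reduced units (ε = σ = 1) the characteristic features can be read off directly: U(σ) = 0 at the contact distance, and the minimum sits at r = 2^(1/6) σ with depth −ε.

```python
# Lennard-Jones potential, equ. (1.22), in reduced units (eps = sigma = 1).
def u_lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

r_min = 2.0 ** (1.0 / 6.0)
print(u_lj(1.0))     # 0.0 : contact distance, U(sigma) = 0
print(u_lj(r_min))   # -1.0 : well depth -eps at r = 2^(1/6) sigma
```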

The LJ model achieved great importance in the Sixties and Seventies, when the microscopic structure and dynamics of simple fluids was an all-important topic. The interplay of experiment, theory and simulation proved immensely fruitful for laying the foundations of modern liquid state physics.

In simulation one often uses so-called "periodic boundary conditions" instead of reflecting vessel walls: a particle leaving the (quadratic, cubic, ...) cell on the right is then fed in with identical velocity from the left boundary, etc. This guarantees that particle number, energy and total momentum are conserved, and that each particle is at all times surrounded by other particles instead of having a wall nearby. The situation of a molecule well within a macroscopic sample is better approximated in this way.
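The wrap-around rule is trivial to implement; a minimal sketch (cell side L = 1, illustrative numbers):

```python
# Periodic boundary conditions in a cell of side L: positions are wrapped
# back into [0, L), velocities are unchanged.
def wrap(coord, L=1.0):
    return coord % L

# A particle leaving on the right re-enters from the left:
x, vx, dt = 0.98, 1.0, 0.05
x = wrap(x + vx * dt)
print(x)   # ~0.03, re-entered at the left boundary
```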

Simulation: Lennard-Jones fluid

N Lennard-Jones particles in a 2d box with periodic boundary conditions. [Code: LJones]

Figure 1.7: Lennard-Jones potential with ε = 1 (well depth) and σ = 1 (contact distance where U(r) = 0). Many real fluids consisting of atoms or almost isotropic molecules are well described by the LJ potential.

Figure 1.8: Applet LJones

• HARMONIC CRYSTAL

The basic model for a solid is a regular configuration of atoms or ions that are bound to their nearest neighbors by a suitably modelled pair potential. Whatever the functional form of this potential, it may be approximated, for small excursions of any one particle from its equilibrium position, by a harmonic potential. Let l denote the equilibrium (minimal potential) distance between two neighbouring particles; the equilibrium position of atom j in a one-dimensional lattice is then given by X_j = j l. Defining the displacements of the atoms from their lattice points by x_j we have for the energy of the lattice

E(x, ẋ) = (f/2) Σ_{j=2}^{N} (x_j − x_{j−1})² + (m/2) Σ_{j=1}^{N} ẋ_j²   (1.23)

where f is a force constant and m is the atomic mass. The generalization of 1.23 to 2 and 3 dimensions is straightforward.

The further treatment of this model is simplified by the approximate assumption that each particle is moving independently of all others in its own oscillator potential: E(x, ẋ) ≈ f Σ_{j=1}^{N} x_j² + (m/2) Σ_{j=1}^{N} ẋ_j². In going to 2 and 3 dimensions one introduces the further simplifying assumption that this private oscillator potential is isotropically "smeared out": E_pot({x, y, z}) ≈ f Σ_{j=1}^{N} (x_j² + y_j² + z_j²). The model thus defined is known as the "Einstein model" of solids.

• MODELS FOR COMPLEX MOLECULES

Most real substances consist of more complex units than isotropic atoms or Lennard-Jones type particles. There may be several interaction centers per particle, containing electrical charges, dipoles or multipoles, and the units may be joined by rigid or flexible bonds. Some of these models are still amenable to a theoretical treatment, but more often than not the methods of numerical simulation – Monte Carlo or molecular dynamics – must be invoked.

• SPIN LATTICES

Magnetically or electrically polarizable solids are often described by models in which "spins" with the discrete permitted values σ_i = ±1 are located at the vertices of a lattice.

If the spins have no mutual interaction the discrete states of such a system are easy to enumerate. This is why such models are often used to demonstrate statistical-mechanical – or actually, combinatorial – relations (Reif: Berkeley Lectures). The energy of the system is then defined by E ≡ −H Σ_i σ_i, where H is an external field.

Of more physical significance are those models in which the spins interact with each other. For example, the parallel alignment of two neighboring spins may be energetically favored over the antiparallel configuration (Ising model). Monte Carlo simulation experiments on systems of this type have contributed much to our understanding of the properties of ferromagnetic and ferroelectric substances.
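For non-interacting spins the enumeration can be done by brute force for small N, which is exactly what makes the model so convenient for demonstrations. A sketch with E = −H Σ σ_i and an illustrative field H = 1:

```python
from itertools import product

# Enumerating the 2^N states of N non-interacting spins in a field H,
# with E = -H * sum(sigma_i).  Brute force, small N only.
def all_states(N):
    return list(product((-1, +1), repeat=N))

def energy(state, H=1.0):
    return -H * sum(state)

N = 4
states = all_states(N)
print(len(states))                        # 2^N = 16 states in all
ground = [s for s in states if energy(s) == -N * 1.0]
print(len(ground))                        # 1: the all-up configuration
```

Note the sharply different multiplicities: only one state has the extreme energy −NH, while the zero-magnetization energy level is realized by C(N, N/2) states; this degeneracy counting is the germ of the statistical entropy.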

1.3 Fundamentals of Statistics

By statistics we denote the investigation of regularities in apparently non-deterministic processes. An important basic quantity in this context is the "relative frequency" of an "event". Let us consider a repeatable experiment – say, the throwing of a die – which in each instance leads to one of several possible results e – say, e ≡ {number of points = 4}. Now repeat this experiment n times under equal conditions and register the number of cases in which the specific result e occurs; call this number f_n(e). The relative frequency of e is then defined as r(e) ≡ f_n(e)/n.

Following R. von Mises we denote as the "probability" of an event e the expected value of the relative frequency in the limit of infinitely many experiments:

P(e) ≡ lim_{n→∞} f_n(e)/n   (1.24)

Example: Game die; 100-1000 trials; e ≡ {no. of points = 6}, or e ≡ {no. of points ≤ 3}.
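The die example is quickly simulated; the sketch below estimates the relative frequency of "six points" for growing n (seeded pseudo-random numbers, so the run is reproducible).

```python
import random

# Relative frequency of an event, in the spirit of equ. (1.24).
random.seed(1)

def rel_freq(n_trials, event=lambda pts: pts == 6):
    hits = sum(1 for _ in range(n_trials) if event(random.randint(1, 6)))
    return hits / n_trials

for n in (100, 1000, 100000):
    print(n, rel_freq(n))   # approaches 1/6 = 0.1667 as n grows
```

The scatter of the estimates shrinks like 1/√n, which is precisely why the limit in equ. 1.24 is a sensible definition.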

Now, this definition does not seem very helpful. It implies that we have already done some experiments to determine the relative frequency, and it tells us no more than that we should expect more or less the same relative frequencies when we go on repeating the trials. What we want, however, is a recipe for the prediction of P(e).

To obtain such a recipe we have to reduce the event e to so-called "elementary events" e_i that obey the postulate of equal a priori probability. Since the probability of any particular one among K possible elementary events is just 𝒫(e_i) = 1/K, we may then derive the probability of a compound event by applying the rules

𝒫(e = e_i or e_j) = 𝒫(e_i) + 𝒫(e_j)   (for mutually exclusive e_i, e_j)    (1.25)

𝒫(e = e_i and e_j) = 𝒫(e_i) 𝒫(e_j)   (for independent e_i, e_j)    (1.26)

Thus the predictive calculation of probabilities reduces to the counting of the possible elementary events that make up the event in question.

Example: The result e_6 ≡ {no. of points = 6} is one among 6 mutually exclusive elementary events with equal a priori probabilities (= 1/6). The compound event e ≡ {no. of points ≤ 3} consists of the elementary events e_i = {1, 2 or 3}; its probability is thus 𝒫(1 ∨ 2 ∨ 3) = 1/6 + 1/6 + 1/6 = 1/2.
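The limit in equ. 1.24 is easy to illustrate numerically. The following sketch (our own Python illustration, not part of the course's Java applets; the trial count is chosen arbitrarily) estimates the two die probabilities of the example by their relative frequencies:

```python
import random

# Approximate the probability 𝒫(e) of equ. (1.24) by the relative
# frequency f_n(e)/n for a finite number n of die throws.
random.seed(42)

n = 100_000
f_six = 0   # counts of the event e = {no. of points = 6}
f_le3 = 0   # counts of the event e = {no. of points <= 3}
for _ in range(n):
    points = random.randint(1, 6)
    if points == 6:
        f_six += 1
    if points <= 3:
        f_le3 += 1

r_six = f_six / n   # should approach 1/6
r_le3 = f_le3 / n   # should approach 1/6 + 1/6 + 1/6 = 1/2
print(r_six, r_le3)
```

For n of this size the relative frequencies agree with the predicted probabilities to about three decimal places, as expected from the 1/√n statistical error discussed later in this section.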

How might this apply to statistical mechanics? – Let us assume that we have N equivalent mechanical systems with possible states s_1, s_2, . . . , s_K. A relevant question is then: what is the probability of a situation in which

e ≡ {k_1 systems are in state s_1, k_2 systems in state s_2, etc.}    (1.27)

Example: N = 60 dice are thrown (or one die 60 times!). What is the probability that each of the numbers of points 1, 2, . . . , 6 occurs on just 10 dice? What, in contrast, is the probability that all dice show a "one"?

The same example, but with more obviously physical content: Let N = 60 gas atoms be contained in a volume V, which we imagine to be divided into 6 equal partial volumes. What is the probability that at any given time we find k_i = 10 particles in each subvolume? And how probable is the particle distribution (60, 0, 0, 0, 0, 0)? (Answer: see below under the heading "multinomial distribution".)

We can generally assume that both the number N of systems and the number K of accessible states are very large – in the so-called "thermodynamic limit" they are actually taken to approach infinity. This gives rise to certain mathematical simplifications.

Before advancing into the field of physical applications we will review the fundamental concepts and truths of statistics and probability theory, focusing on events that take place in number space, either ℝ (real numbers) or ℕ (natural numbers).

DISTRIBUTION FUNCTION

Let x be a real random variate in the region (a, b). The distribution function

P(x_0) ≡ 𝒫{x < x_0}    (1.28)

is defined as the probability that some x is smaller than the given value x_0. The function P(x) is monotonically increasing and has P(a) = 0 and P(b) = 1. The distribution function is dimensionless: [P(x)] = 1.

The simplest example is the equidistribution, for which

P(x_0) = (x_0 − a)/(b − a)    (1.29)

Another important example, with a = −∞, b = ∞, is the normal distribution

P(x_0) = [1/√(2π)] ∫_{−∞}^{x_0} dx e^{−x²/2}    (1.30)

and its generalization, the Gaussian distribution

P(x_0) = [1/√(2πσ²)] ∫_{−∞}^{x_0} dx e^{−(x−⟨x⟩)²/2σ²}    (1.31)

where the parameters ⟨x⟩ and σ² define an ensemble of functions.

DISTRIBUTION DENSITY

The distribution or probability density p(x) is defined by

p(x_0) dx ≡ 𝒫{x ∈ [x_0, x_0 + dx]} ≡ dP(x_0)    (1.32)

In other words, p(x) is just the differential quotient of the distribution function:

p(x) = dP(x)/dx,   i.e.   P(x_0) = ∫_a^{x_0} p(x) dx    (1.33)

p(x) has a dimension; it is the reciprocal of the dimension of its argument x:

[p(x)] = 1/[x]    (1.34)

For the equidistribution we have

p(x) = 1/(b − a),    (1.35)

and for the normal distribution

p(x) = [1/√(2π)] e^{−x²/2}.    (1.36)

If x is limited to discrete values x_α with a step Δx_α ≡ x_{α+1} − x_α one often writes

p_α ≡ p(x_α) Δx_α    (1.37)

for the probability of the event x = x_α. This p_α is by definition dimensionless, although it is related to the distribution density p(x) for continuous arguments. The definition 1.37 includes the special case that x is restricted to integer values k; in that case Δx_α = 1.

MOMENTS OF A DENSITY

By this we denote the quantities

⟨x^n⟩ ≡ ∫_a^b x^n p(x) dx   or, in the discrete case,   ⟨x^n⟩ ≡ Σ_α p_α x_α^n    (1.38)

The first moment ⟨x⟩ is also called the expectation value or mean value of the distribution density p(x), and the second moment ⟨x²⟩ is related to the variance and the standard deviation: variance σ² = ⟨x²⟩ − ⟨x⟩² (standard deviation σ = square root of the variance).

Examples:
.) For an equidistribution on [0, 1] we have ⟨x⟩ = 1/2, ⟨x²⟩ = 1/3 and σ² = 1/12.
.) For the normal distribution we find ⟨x⟩ = 0 and ⟨x²⟩ = σ² = 1.
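These moments are easy to check by a brute-force Monte Carlo average; the following sketch (our illustration, with an arbitrarily chosen sample size) estimates ⟨x⟩, ⟨x²⟩ and σ² for the equidistribution on [0, 1]:

```python
import random

# Monte Carlo estimate of the first two moments of the equidistribution
# on [0, 1]; analytic values: <x> = 1/2, <x^2> = 1/3, sigma^2 = 1/12.
random.seed(1)
n = 200_000
xs = [random.random() for _ in range(n)]

mean = sum(xs) / n
second = sum(x * x for x in xs) / n
var = second - mean ** 2
print(mean, second, var)
```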

Figure 1.9: Equidistribution and normal distribution functions P and densities p

SOME IMPORTANT DISTRIBUTIONS

• Equidistribution: Its great significance stems from the fact that this distribution is central both to statistical mechanics and to practical numerics. In the theory of statistical-mechanical systems, one of the fundamental assumptions is that all states of a system that have the same energy are equally probable (axiom of equal a priori probability). And in numerical computing the generation of homogeneously distributed pseudo-random numbers is relatively easy; to obtain differently distributed random variates one usually "processes" such primary equidistributed numbers.
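One standard way of "processing" equidistributed numbers is the inversion method: if u is equidistributed on [0, 1] and P is the target distribution function, then x = P⁻¹(u) is distributed with density p(x) = dP/dx. A minimal sketch, using the exponential density e^{−x} as an illustrative target (our choice, not an example from the text):

```python
import math
import random

# Inversion of equidistributed numbers: for the target distribution
# function P(x) = 1 - exp(-x) the inverse is x = -ln(1 - u), so
# "processing" equidistributed u yields x with density p(x) = exp(-x).
random.seed(7)
n = 100_000
samples = [-math.log(1.0 - random.random()) for _ in range(n)]

mean = sum(samples) / n                         # analytic value: <x> = 1
below_one = sum(s < 1.0 for s in samples) / n   # analytic value: P(1) = 1 - 1/e
print(mean, below_one)
```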

• Gauss distribution: This distribution pops up everywhere in the quantifying sciences. The reason for its ubiquity is the "central limit theorem": every random variate that can be expressed as a sum of arbitrarily distributed random variates (of finite variance) will in the limit of many summation terms be Gauss distributed. For example, when we have a complex measuring procedure in which a number of individual errors (or uncertainties) add up to a total error, then this error will be nearly Gauss distributed, regardless of how the individual contributions may be distributed. In addition, several other physically relevant distributions, such as the binomial and multinomial densities (see below), approach the Gauss distribution under certain – quite common – circumstances.
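A quick numerical illustration of the theorem (our own sketch, using the classic choice of summing 12 equidistributed variates, so that the sum has variance exactly 12 · 1/12 = 1):

```python
import random
import statistics

# Central limit theorem demo: sums of 12 equidistributed variates on
# [0, 1], shifted by their mean 6, have mean 0 and variance 1 and are
# already very close to a unit Gaussian.
random.seed(3)
n = 50_000
sums = [sum(random.random() for _ in range(12)) - 6.0 for _ in range(n)]

mean = statistics.fmean(sums)       # ~0
var = statistics.pvariance(sums)    # ~1
inside = sum(abs(s) < 1.0 for s in sums) / n   # ~0.683 for a unit Gaussian
print(mean, var, inside)
```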

• Binomial distribution: This discrete distribution describes the probability that in n independent trials an event that has a single trial probability p will occur exactly k times:

p^n_k ≡ 𝒫{k times e in n trials} = C(n, k) p^k (1 − p)^{n−k}    (1.39)

where C(n, k) = n!/[k!(n − k)!] denotes the binomial coefficient. For the first two moments of the binomial distribution we have ⟨k⟩ = np (not necessarily integer) and σ² = np(1 − p) (i.e. ⟨k²⟩ − ⟨k⟩²).

Application: Fluctuation processes in statistical systems are often described in terms of the binomial distribution. For example, consider a particle freely roaming a volume V. The probability to find it at some given time in a certain partial volume V_1 is p(V_1) = V_1/V. Considering now N independent particles in V, the probability of finding just N_1 of them in V_1 is given by

p^N_{N_1} = C(N, N_1) (V_1/V)^{N_1} (1 − V_1/V)^{N−N_1}    (1.40)

The average number of particles in V_1 and its standard deviation are

⟨N_1⟩ = N V_1/V   and   σ ≡ √⟨(ΔN_1)²⟩ = √[N (V_1/V)(1 − V_1/V)].    (1.41)

Figure 1.10: Binomial distribution density p^n_k (for p = 0.4 and n = 1, 2, 5, 10, 20)

Note that for Np ≈ 1 we have for the variance σ² ≈ σ ≈ ⟨N_1⟩ ≈ 1, meaning that the population fluctuations in V_1 are then of the same order of magnitude (namely, 1) as the mean number of particles itself.
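The moments quoted above can be verified directly from the density 1.39/1.40. The following sketch (our illustration, reusing N = 60 and V_1/V = 1/6 in the spirit of the dice example) sums the binomial probabilities explicitly:

```python
import math

# Check the binomial moments <N1> = Np and sigma^2 = Np(1-p) of
# equ. (1.41) by explicit summation over the density (1.39)/(1.40).
N, p = 60, 1.0 / 6.0

def binom_pmf(n, k, p):
    """Binomial probability, equ. (1.39)."""
    return math.comb(n, k) * p**k * (1.0 - p) ** (n - k)

probs = [binom_pmf(N, k, p) for k in range(N + 1)]
mean = sum(k * q for k, q in zip(range(N + 1), probs))
var = sum(k * k * q for k, q in zip(range(N + 1), probs)) - mean**2
print(mean, var)   # expect <N1> = 10 and sigma^2 = 60*(1/6)*(5/6)
```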

For large n such that np >> 1 the binomial distribution approaches a Gauss distribution with mean np and variance σ² = npq (theorem of de Moivre-Laplace):

p^n_k ⇒ p_G(k) = [1/√(2πnpq)] exp[−(k − np)²/2npq]    (1.42)

with q ≡ 1 − p.

If n → ∞ and p → 0 such that their product np ≡ λ remains finite, the density 1.39 approaches

p_n(k) = (λ^k/k!) e^{−λ}    (1.43)

which goes by the name of Poisson distribution.
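Both limiting forms are easy to check against the exact binomial density; in the following sketch the parameter values are our own choice (np >> 1 for the Gauss limit 1.42, small p for the Poisson limit 1.43):

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability, equ. (1.39)."""
    return math.comb(n, k) * p**k * (1.0 - p) ** (n - k)

# Gauss regime: np = 300 >> 1, compare with equ. (1.42) at k = 310.
n, p = 1000, 0.3
q = 1.0 - p
k = 310
gauss = math.exp(-((k - n * p) ** 2) / (2 * n * p * q)) / math.sqrt(2 * math.pi * n * p * q)
err_gauss = abs(binom_pmf(n, k, p) - gauss)

# Poisson regime: n large, p small, lambda = np = 2, compare with (1.43).
n2, p2 = 1000, 0.002
lam = n2 * p2
k2 = 3
poisson = lam**k2 / math.factorial(k2) * math.exp(-lam)
err_poisson = abs(binom_pmf(n2, k2, p2) - poisson)
print(err_gauss, err_poisson)
```

In both regimes the absolute error of the limiting density is tiny compared with the probability values themselves.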

An important element in the success story of statistical mechanics is the fact that with increasing n the sharpness of the distribution 1.39 or 1.42 becomes very large. The relative width of the maximum, i.e. σ/⟨k⟩, decreases as 1/√n. For n = 10⁴ the width of the peak is no more than 1% of ⟨k⟩, and for "molar" orders of particle numbers n ≈ 10²⁴ the relative width σ/⟨k⟩ is already ≈ 10⁻¹². Thus the density approaches a "delta distribution". This, in turn, renders the calculation of averages particularly simple:

⟨f(k)⟩ = Σ_k p^n_k f(k) → f(⟨k⟩)    (1.44)

or

⟨f(k)⟩ ≈ ∫ dk δ(k − ⟨k⟩) f(k) = f(⟨k⟩)    (1.45)

• Multinomial distribution: This is a generalization of the binomial distribution to more than 2 possible results of a single trial. Let e_1, e_2, . . . , e_K be the (mutually exclusive) possible results of an experiment; their probabilities in a single trial are p_1, p_2, . . . , p_K, with Σ_{i=1}^{K} p_i = 1. Now do the experiment n times; then

p_n(k_1, k_2, . . . , k_K) = [n!/(k_1! k_2! . . . k_K!)] p_1^{k_1} p_2^{k_2} . . . p_K^{k_K}   (with k_1 + k_2 + . . . + k_K = n)    (1.46)

is the probability to have the event e_1 just k_1 times, e_2 accordingly k_2 times, etc.

We get an idea of the significance of this distribution in statistical physics if we interpret the K possible events as "states" that may be taken on by the n particles of a system (or, in another context, by the n systems in an ensemble of many-particle systems). The above formula then tells us the probability to find k_1 among the n particles in state e_1, etc.

Example: A die is cast 60 times. The probability to find each number of points just 10 times is

p_60(10, 10, 10, 10, 10, 10) = [60!/(10!)⁶] (1/6)⁶⁰ = 7.457 × 10⁻⁵.    (1.47)

To compare, the probabilities of two other cases: p_60(10, 10, 10, 10, 9, 11) = 6.778 × 10⁻⁵, p_60(15, 15, 15, 5, 5, 5) = 4.406 × 10⁻⁸. Finally, for the quite improbable case (60, 0, 0, 0, 0, 0) we have p_60 = 2.046 × 10⁻⁴⁷.
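The numbers of this example can be reproduced with exact integer arithmetic for the multinomial coefficient; the helper function below is our own illustration of equ. 1.46:

```python
import math

def multinomial_prob(counts, probs):
    """Equ. (1.46): n!/(k1! ... kK!) * p1^k1 * ... * pK^kK."""
    n = sum(counts)
    coeff = math.factorial(n)
    for k in counts:
        coeff //= math.factorial(k)   # exact integer multinomial coefficient
    weight = 1.0
    for k, p in zip(counts, probs):
        weight *= p**k
    return coeff * weight

p_die = [1.0 / 6.0] * 6
p_uniform = multinomial_prob([10] * 6, p_die)          # equ. (1.47)
p_skewed = multinomial_prob([15, 15, 15, 5, 5, 5], p_die)
p_extreme = multinomial_prob([60, 0, 0, 0, 0, 0], p_die)
print(p_uniform, p_skewed, p_extreme)
```

The uniform partitioning comes out as the most probable one, in line with the "increasing sharpness" property discussed below.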

Due to its large number of variables (K) we cannot give a graph of the multinomial distribution. However, it is easy to derive the following two important properties:

Approach to a multivariate Gauss distribution: just as the binomial distribution approaches, for large n, a Gauss distribution, the multinomial density approaches an appropriately generalized – "multivariate" – Gauss distribution.

Increasing sharpness: if n and k_1 . . . k_K become very large (multiparticle systems; or ensembles of n → ∞ elements), the function p_n(k_1, k_2, . . . , k_K) ≡ p_n(k) has an extremely sharp maximum for a certain partitioning k* ≡ {k*_1, k*_2, . . . , k*_K}, namely {k*_i = n p_i; i = 1, . . . , K}. This particular partitioning of the particles to the various possible states is then "almost always" realized, and all other allotments (or distributions) occur very rarely and may safely be neglected. This is the basis of the method of the most probable distribution, which is used with great success in several areas of statistical physics.[2.2]

STIRLING'S FORMULA

For large values of m the evaluation of the factorial m! is difficult. A handy approximation is Stirling's formula

m! ≈ √(2πm) (m/e)^m    (1.48)

Example: m = 69 (near most pocket calculators' limit): 69! = 1.7112 × 10⁹⁸; √(2π · 69) (69/e)⁶⁹ = 1.7092 × 10⁹⁸.

The same name Stirling's formula is often used for the logarithm of the factorial:

ln m! ≈ m(ln m − 1) + ln √(2πm) ≈ m(ln m − 1)    (1.49)

(The term ln √(2πm) may usually be neglected in comparison to m(ln m − 1).)
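Both forms can be checked against the m = 69 example quoted here; a short sketch of ours, using the standard library's lgamma for ln m!:

```python
import math

# Check Stirling's formula (1.48) and its logarithmic form (1.49)
# for m = 69, the example quoted in the text.
m = 69
exact = math.factorial(m)
stirling = math.sqrt(2 * math.pi * m) * (m / math.e) ** m
rel_err = abs(stirling - exact) / exact       # about 1/(12 m) ~ 0.12 %

log_exact = math.lgamma(m + 1)                # ln m! = ln 69! = 226.1905...
log_approx = m * (math.log(m) - 1) + math.log(math.sqrt(2 * math.pi * m))
print(rel_err, log_exact, log_approx)
```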

Example 1: ln 69! = 226.1905; 69(ln 69 − 1) + ln √(2π · 69) = 223.1533 + 3.0360 = 226.1893.

Example 2: The die is cast again, but now there are 120 trials. When asked to produce 120! most pocket calculators will cancel their cooperation. So we apply Stirling's approximation: the probability of throwing each number of points just 20 times is

p_120(20, . . . , 20) = [120!/(20!)⁶] (1/6)¹²⁰ ≈ √(240π) [(120/6e)²⁰/20!]⁶ = 1.350 × 10⁻⁵,    (1.50)

and the probability of the partitioning (20, 20, 20, 20, 19, 21) is 1.285 × 10⁻⁵.

STATISTICAL (IN)DEPENDENCE

Two random variates x_1, x_2 are statistically mutually independent (uncorrelated) if the distribution density of the compound probability (i.e. the probability for the joint occurrence of x_1 and x_2) equals the product of the individual densities:

p(x_1, x_2) = p(x_1) p(x_2).    (1.51)

Example: In a fluid or gas the distribution density for a single component of the particle velocity is given by (Maxwell-Boltzmann)

p(v_α) = [m/2πkT]^{1/2} exp{−m v_α²/2kT},   α = x, y, z    (1.52)

The degrees of freedom α = x, y, z are statistically independent; therefore the compound probability is given by

p(v) ≡ p(v_x, v_y, v_z) = p(v_x) p(v_y) p(v_z) = [m/2πkT]^{3/2} exp{−(m/2kT)(v_x² + v_y² + v_z²)}    (1.53)

By conditional distribution density we denote the quantity

p(x_2|x_1) ≡ p(x_1, x_2)/p(x_1)    (1.54)

(For uncorrelated x_1, x_2 we have p(x_2|x_1) = p(x_2).)

The density of a marginal distribution describes the density of one of the variables regardless of the specific value of the other one, meaning that we integrate the joint density over all possible values of the second variable:

p(x_2) ≡ ∫_{a_1}^{b_1} p(x_1, x_2) dx_1    (1.55)

TRANSFORMATION OF DISTRIBUTION DENSITIES

From 1.32 we can immediately conclude how the density p(x) will transform if we substitute the variable x. Given some bijective mapping y = f(x); x = f⁻¹(y) the conservation of probability requires

|dP(y)| = |dP(x)|    (1.56)

(The absolute value appears because we have not required f(x) to be an increasing function.) This leads to

|p(y) dy| = |p(x) dx|    (1.57)

or

p(y) = p(x) |dx/dy| = p[f⁻¹(y)] |df⁻¹(y)/dy|    (1.58)

Incidentally, this relation is true for any kind of density, such as mass or spectral densities, and not only for distribution densities.

Figure 1.11: Transformation of the distribution density (see Example 1)

Example 1: A particle of mass m moving in one dimension is assumed to have any velocity in the range ±v_0 with equal probability; so we have p(v) = 1/2v_0. The distribution density for the kinetic energy is then given by (see Figure 1.11)

p(E) = 2 p(v) |dv/dE| = [1/(2√E_0)] (1/√E)    (1.59)

in the limits 0 . . E_0, where E_0 = m v_0²/2. (The factor 2 in front of p(v) comes from the ambiguity of the mapping v ↔ E.)
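The transformed density can be confirmed by direct sampling. The following sketch (our illustration, with m and v_0 chosen arbitrarily) checks the distribution function implied by equ. 1.59 at one point: integrating p(E) from 0 to E_0/4 gives √((E_0/4)/E_0) = 1/2.

```python
import random

# Sample velocities equidistributed on [-v0, v0], form E = m v^2 / 2,
# and check the distribution function implied by equ. (1.59):
# P{E < E0/4} = sqrt((E0/4)/E0) = 1/2.
random.seed(11)
m_mass, v0 = 1.0, 2.0
E0 = 0.5 * m_mass * v0**2
n = 100_000
energies = [0.5 * m_mass * random.uniform(-v0, v0) ** 2 for _ in range(n)]

frac = sum(E < E0 / 4.0 for E in energies) / n
print(frac)   # analytic value: 1/2
```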

Example 2: An object is found with equal probability at any point along a circular periphery; so we have p(φ) = 1/2π for φ ∈ [0, 2π]. Introducing cartesian coordinates x = r cos φ, y = r sin φ we find for the distribution density of the coordinate x, with x ∈ [−r, r], that

p(x) = 2 p(φ) |dφ/dx| = (1/π) [1/√(r² − x²)]    (1.60)

(see Figure 1.12; as in Example 1, the factor 2 accounts for the two values of φ that map onto each x)

Problems equivalent to these examples:

a) A homogeneously blackened glass cylinder – or a semitransparent drinking straw – held sideways against a light source: absorption as a function of the distance from the axis?

b) Distribution of the x-velocity of a particle that can move randomly in two dimensions, keeping its kinetic energy constant.

Figure 1.12: Transformation of the distribution density (see Example 2)

⇒ Simulation 1.6: Stadium Billiard. Distribution of the velocity component v_x. [Code: Stadium]

c) Distribution of the velocity of any of two particles arbitrarily moving in one dimension, keeping only the sum of their kinetic energies constant.

For the joint probability density of several variables the transformation formula is a direct generalization of 1.58, viz.

p(y) = p(x) |dx/dy|    (1.61)

Here we write |dx/dy| for the functional determinant (or Jacobian) of the mapping x = x(y),

|d(x_1, x_2, . . .)/d(y_1, y_2, . . .)| =
    | dx_1/dy_1   dx_1/dy_2   . . . |
    | dx_2/dy_1   dx_2/dy_2   . . . |
    |     .            .           |    (1.62)

Example 3: Again, let v ≡ {v_x, v_y, v_z}, and p(v) as in equ. 1.53. Now we write w ≡ {v, φ, θ}, with

v_x = v sin θ cos φ,   v_y = v sin θ sin φ,   v_z = v cos θ.    (1.63)

The Jacobian of the mapping v = v(w) is

d(v_x, v_y, v_z)/d(v, φ, θ) = det
    | sin θ cos φ   −v sin θ sin φ   v cos θ cos φ |
    | sin θ sin φ    v sin θ cos φ   v cos θ sin φ |
    | cos θ              0             −v sin θ    |
  = −v² sin θ    (1.64)

whose absolute value v² sin θ enters the transformed density. Therefore we have for the density of the modulus of the particle velocity

p(v) ≡ ∫_0^{2π} dφ ∫_0^{π} dθ v² sin θ [m/2πkT]^{3/2} exp{−(m/2kT) v²}    (1.65)

     = 4π [m/2πkT]^{3/2} v² exp{−(m/2kT) v²}    (1.66)
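Equ. 1.66 can also be checked by sampling: drawing the three cartesian components from the Gauss density 1.53 and forming the modulus must reproduce the moments of the speed density. In the sketch below (our own, with units chosen so that m/kT = 1) the expected values are ⟨v²⟩ = 3 and ⟨v⟩ = √(8/π), standard results for the Maxwell-Boltzmann speed density:

```python
import math
import random

# Draw v_x, v_y, v_z from unit Gaussians (density (1.53) with m/kT = 1)
# and form the speed v = |v|; its moments must match those of the
# Maxwell-Boltzmann speed density (1.66).
random.seed(5)
n = 100_000
speeds = []
for _ in range(n):
    vx, vy, vz = (random.gauss(0.0, 1.0) for _ in range(3))
    speeds.append(math.sqrt(vx * vx + vy * vy + vz * vz))

mean_v = sum(speeds) / n                 # expect sqrt(8/pi) ~ 1.596
mean_v2 = sum(v * v for v in speeds) / n # expect 3
print(mean_v, mean_v2)
```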


1.4 Problems for Chapter 1

EXERCISES:

1.1 Entropy and irreversibility: (a) Two heat reservoirs are at temperatures T_1 and T_2 > T_1. They are connected by a metal rod that conducts the heat Q per unit time from 2 to 1. Do an entropy balance to show that this process is irreversible.
(b) Using the specific values T_1 = 300 K, T_2 = 320 K and Q = 10 J, calculate the entropy increase per second.

1.2 Reversible, isothermal expansion: Compute the entropy balance for the experiment described (or shown) in the lecture, using estimated values of the necessary experimental parameters. Discuss the role of the ideal gas assumption: is it necessary / unnecessary / convenient?

1.3 Maxwell square: Draw up Maxwell's square and use it to complete the following equations:
(a) G = A ± . . .; (b) A = E ± . . .;
(c) P = . . . = . . . (in terms of derivatives of A and E);
(d) dG = V dP − . . ..

1.4 Random directions in 2D: Using your favourite random number generator, sample a number K ≈ 50-100 of angles φ_i (i = 1, . . . , K) equidistributed in [0, 2π]. Interpret cos φ_i and sin φ_i as the v_x,y-components of a randomly oriented velocity vector v_i with |v_i| = 1.
(a) Draw a histogram of the empirical probability (i.e. event frequency) of p(v_x). Compare the shape of your p(v_x) to the bottom right histogram in Applet Stadium (see 1.2).
(b) Normalize the histogram such that a sum over all bins equals one. What is the value of p(0.5)?

1.5 Binomial distribution: Suggest an experiment with given p and n. Perform the experiment N times and draw an empirical frequency diagram; compare with 1.39.

1.6 Density fluctuations in air:
(a) Calculate the mean number of molecules (not discerning between N_2 and O_2) that are to be found at a pressure of 10 Pa in a cube with a side length of the wave length of light (≈ 6 × 10⁻⁷ m). What is the standard deviation of the particle number, both absolutely and relative to the mean particle number? (Air is to be treated as an ideal gas at normal temperature T_0 = 273.15 K.)
(b) Compute the value of the probability density of the event N(ΔV) = k ≈ ⟨N⟩, i.e. the probability of finding an integer number k next to the mean number of particles in the sample volume. (Hint: Don't attempt to evaluate factorials of large numbers, such as appear in the binomial distribution p^n_k; rather, use that distribution which resembles p^n_k when n becomes large.) What is the probability of finding only 95 percent of the mean particle number in the sample volume?

1.7 Multinomial distribution: A volume V is divided into m = 5 equal-sized cells. The N = 1000 particles of an ideal gas may be allotted randomly to the cells.
a) What is the probability of finding in a snapshot of the system the partitioning {N_1, N_2, . . . , N_5}? Explain the formula.
b) Demonstrate numerically that the partitioning with the greatest probability is given by N_i = N/m = 200. For example, compare the situations (201, 199, . . .), (202, 199, 199, . . .), (202, 198, . . .), (205, 195, . . .), and (204, 199, 199, 199, 199) to the most probable one.
c) (1 bonus point) Prove analytically that N_i = 200 is the most probable distribution. Hint: maximize the function f(k) ≡ log p_n(k) under the condition Σ_i k_i = n.

1.8 Transformation of a distribution density: Repeat the calculation of Example 3 for the two-dimensional case, i.e. v ≡ {v_x, v_y} and w ≡ {v, φ}. Draw the distribution density p_2D(v).

TEST YOUR UNDERSTANDING OF CHAPTER 1:

1.1 Thermodynamic concepts:
– What does the entropy balance tell us about the reversibility/irreversibility of a process? Demonstrate, using a specific example.
– Describe the process of thermal interaction between two bodies. When will the energy flow stop?
– Which thermodynamic potential is suited for the description of isothermal-isochoric systems?

1.2 Model systems:
– Describe 2-3 model systems of statistical mechanics.
– What quantities are needed to completely describe the momentary state of a classical ideal gas of N particles?
– What quantities are needed for a complete specification of the state of a quantum ideal gas?

1.3 Statistical concepts:
– Explain the concepts "distribution function" and "distribution density"; give two examples.
– What are the moments of a distribution? Give a physically relevant example.

1.4 Equal a priori probability: Explain the concept and its significance for statistical mechanics.

Chapter 2

Elements of Kinetic Theory

Figure 2.1: Ludwig Boltzmann

In a dilute gas the molecules are in free, linear flight most of the time; just occasionally their flight will be interrupted by a collision with a single other particle. The result of such an event, which in general may be treated as a classical elastic collision, is a change in the speeds and directions of both partners. So it should be possible to derive, from a statistical treatment of such binary collisions, predictions on the properties of gases. This idea was the starting point from which Ludwig Boltzmann proceeded to develop his "Kinetic Theory of Gases".

First he defined a distribution density f such that f(r, v; t) dr dv denotes the number of particles which at time t are situated at r and have a velocity v. The 6-dimensional space of the variables {r, v} has later been called "µ space".¹

The evolution in time of the function f(r, v; t) is then described by Boltzmann's Transport Equation. We will shortly sketch the derivation and shape of this important formula.

A particularly important result of Boltzmann's equation is its stationary solution, i.e. the function that solves the equation in the long-time limit t → ∞. It turns out that this equilibrium

¹ The term is probably due to Paul Ehrenfest who in his classical book on Statistical Mechanics refers to µ-space, or "molecular" space, as opposed to Γ, or "gas" space (see Chapter 3 below).

distribution in µ space may be derived without explicitly solving the transport equation itself. The strategy used to do so, called the method of the most probable distribution, will be encountered again in other parts of Statistical Mechanics.

2.1 Boltzmann’s Transport Equation

With his “Kinetic Theory of Gases” Boltzmann undertook to explain the properties of dilute

gases by analysing the elementary collision processes between pairs of molecules.

The evolution of the distribution density in µ space, f (r, v; t), is described by Boltzmann’s

transport equation. A thorough treatment of this beautiful achievement is beyond the scope

of our discussion. But we may sketch the basic ideas used in its derivation.

• If there were no collisions at all, the swarm of particles in µ space would flow according to

f(r + v dt, v + (K/m) dt; t + dt) = f(r, v; t)    (2.1)

where K denotes a possible external force acting on particles at point (r, v). The time derivative of f is therefore, in the collisionless case,

[∂/∂t + v · ∇_r + (K/m) · ∇_v] f(r, v; t) = 0    (2.2)

where

v · ∇_r f ≡ v_x ∂f/∂x + v_y ∂f/∂y + v_z ∂f/∂z    (2.3)

and

(K/m) · ∇_v f ≡ (1/m) [K_x ∂f/∂v_x + K_y ∂f/∂v_y + K_z ∂f/∂v_z]    (2.4)

To grasp the meaning of equation 2.2, consider the collisionless, free flow of gas particles through a thin pipe: there is no force (i.e. no change of velocities), and µ-space has only two dimensions, x and v_x (see Figure 2.2).

Figure 2.2: A very simple µ-space

At time t a differential "volume element" at (x, v_x) contains, on the average, f(x, v_x) dx dv_x particles. The temporal change of f(x, v_x) is then given by

∂f(x, v_x)/∂t = −v_x ∂f(x, v_x)/∂x    (2.5)

To see this, count the particles entering during the time span dt from the left (assuming v_x > 0), n_in = f(x − v_x dt, v_x) dx dv_x, and those leaving towards the right, n_out = f(x, v_x) dx dv_x. The local change per unit time is then

∂f(x, v_x)/∂t = (n_in − n_out)/(dt dx dv_x)    (2.6)
             = [f(x − v_x dt, v_x) − f(x, v_x)]/dt    (2.7)
             = [f(x, v_x) − (v_x dt)(∂f/∂x) − f(x, v_x)]/dt    (2.9)
             = −v_x ∂f(x, v_x)/∂x    (2.11)
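The balance 2.5 lends itself to a one-line finite-difference check: shift a smooth profile f(x) by v_x dt and compare the resulting change per unit time with −v_x ∂f/∂x. In the sketch below the profile, grid spacing and velocity are our arbitrary choices:

```python
import math

# Finite-difference illustration of the free-flow balance (2.5).
vx, dt, dx = 1.5, 1e-5, 1e-3

def f(x):
    """A smooth test profile in the (x, v_x) slice of mu-space."""
    return math.exp(-(x - 0.5) ** 2 / 0.02)

x = 0.4
# collisionless flow, equ. (2.1) without force: f(x; t+dt) = f(x - vx*dt; t)
dfdt = (f(x - vx * dt) - f(x)) / dt
rhs = -vx * (f(x + dx) - f(x - dx)) / (2 * dx)
print(dfdt, rhs)   # the two sides of equ. (2.5) should nearly agree
```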

The relation ∂f/∂t + v_x ∂f/∂x = 0 is then easily generalized to the case of a non-vanishing force,

∂f/∂t + v_x ∂f/∂x + (K_x/m) ∂f/∂v_x = 0    (2.12)

(this would engender an additional vertical flow in the figure), and to six instead of two dimensions (see equ. 2.2).

All this is for collisionless flow only.

• In order to account for collisions a term (∂f/∂t)_coll dt is added on the right hand side:

[∂/∂t + v · ∇_r + (K/m) · ∇_v] f(r, v; t) = (∂f/∂t)_coll    (2.13)

The essential step then is to find an explicit expression for (∂f/∂t)_coll. Boltzmann solved this problem under the simplifying assumptions that

– only binary collisions need be considered (dilute gas);
– the influence of container walls may be neglected;
– the influence of the external force K (if any) on the rate of collisions is negligible;
– velocity and position of a molecule are uncorrelated (assumption of molecular chaos).

The effect of the binary collisions is expressed in terms of a "differential scattering cross section" σ(Ω) which describes the probability density for a certain change of velocities,

{v_1, v_2} → {v_1′, v_2′}.    (2.14)

(Ω thus denotes the relative orientation of the vectors (v_2 − v_1) and (v_2′ − v_1′), where the primes mark the post-collision velocities.) The function σ(Ω) depends on the intermolecular potential and may be either calculated or measured.

Under all these assumptions, and by a linear expansion of the left hand side of equ. 2.1 with respect to time, the Boltzmann equation takes on the following form:

[∂/∂t + v_1 · ∇_r + (K/m) · ∇_{v_1}] f_1 = ∫ dΩ ∫ dv_2 σ(Ω) |v_1 − v_2| (f_1′ f_2′ − f_1 f_2)    (2.15)

where f_1 ≡ f(r, v_1; t), f_1′ ≡ f(r, v_1′; t), etc. This integrodifferential equation describes, under the given assumptions, the spatio-temporal behaviour of a dilute gas. Given some initial density f(r, v; t = 0) in µ-space the solution function f(r, v; t) tells us how this density changes over time. Since f has up to six arguments it is difficult to visualize; but there are certain moments of f which represent measurable averages, such as the local particle density in 3D space, whose temporal change can thus be computed.

Chapman and Enskog developed a general procedure for the approximate solution of Boltz-

mann’s equation. For certain simple model systems such as hard spheres their method produces

predictions for f (r, v; t) (or its moments) which may be tested in computer simulations. An-

other more modern approach to the numerical solution of the transport equation is the “Lattice

Boltzmann” method in which the continuous variables r and v are restricted to a set of discrete

values; the time change of these values is then described by a modiﬁed transport equation which

lends itself to fast computation.

The initial distribution density f(r, v; 0) may be of arbitrary shape. To consider a simple example, we may have all molecules assembled in the left half of a container – think of a removable shutter – and at time t = 0 make the rest of the volume accessible to the gas particles:

f(r, v; 0) = A Θ(x_0 − x) f_0(v)    (2.16)

where f_0(v) is the (Maxwell-Boltzmann) distribution density of particle velocities, and Θ(x_0 − x) denotes the Heaviside function. The subsequent expansion of the gas into the entire accessible volume, and thus the approach to the stationary final state (= equilibrium state) in which the particles are evenly distributed over the volume, may be seen in the solution f(r, v; t) of Boltzmann's equation. Thus the greatest importance of this equation is its ability to describe also non-equilibrium processes.

Simulation: The power of Boltzmann’s equation

– Irreversible Expansion of a hard disc gas

– Presentation of the distribution densities in r-space and in v-space

[Code: BM]

The equilibrium distribution f_0(r, v) is that solution of Boltzmann's equation which is stationary, meaning that

∂f(r, v; t)/∂t = 0    (2.17)

It is also the limiting distribution for long times, t → ∞.

It may be shown that this equilibrium distribution is given by

f_0(r, v) = ρ(r) [m/2πkT(r)]^{3/2} exp{−m[v − v_0(r)]²/2kT(r)}    (2.18)

where ρ(r) and T(r) are the local density and temperature, respectively. If there are no external forces such as gravity or electrostatic interactions we have ρ(r) = ρ_0 = N/V. In case the temperature is also independent of position, and if the gas as a whole is not moving (v_0 = 0), then f(r, v) = ρ_0 f_0(v), with

f_0(v) = [m/2πkT]^{3/2} e^{−mv²/2kT}.    (2.19)

This is the famous Boltzmann distribution; it may be derived also in different ways, without requiring the explicit solution of the transport equation. ⇒ See next section, equ. 2.30.

2.2 The Maxwell-Boltzmann distribution

We want to apply statistical procedures to the swarm of points in Boltzmann's µ space. To do this we first divide that space into 6-dimensional cells of size (Δx)³(Δv)³, labelling them by j (= 1, . . . , m). There is a characteristic energy E_j ≡ E(r_j, v_j) pertaining to each such cell. For instance, in the ideal gas case this energy is simply E_j = m v_j²/2, where v_j is the velocity of the particles in cell j.

Now we distribute the N particles over the m cells, such that n_j particles are allotted to cell no. j. In a closed system with total energy E the population numbers n_j must fulfill the condition Σ_j n_j E_j = E. The other condition is, of course, the conservation of the number of particles, Σ_j n_j = N. Apart from these two requirements the allotment of particles to cells is completely random.

We may understand this prescription as the rule of a game of fortune, and with the aid of a computer we may actually play that game!

Playing Boltzmann’s game

[Code: LBRoulette]

Instead of playing the game we may calculate its outcome by probability theory. For good statistics we require that N >> m >> 1. A specific m-tuple of population numbers n ≡ {n_j; j = 1, . . . , m} will here be called a partitioning. (If you prefer to follow the literature, you may refer to it as a distribution.) Each partitioning may be performed in many different ways, since the labels of the particles may be permuted without changing the population numbers in the cells. This means that many specific allotments pertain to a single partitioning. Assuming that the allotments are elementary events of equal probability, we simply count the number of possible allotments to calculate the probability of the respective partitioning.

The number of possible permutations of particle labels for a given partitioning n ≡ {n_j; j = 1, . . . , m} is

P(n) = N!/(n_1! n_2! . . . n_m!)    (2.20)

(In combinatorics this is called permutations of m elements – i.e. cell numbers – with repetition.)

Since each allotment is equally probable, the most probable partitioning is the one allowing for the largest number of allotments. In many physically relevant cases the probability of that optimal partitioning is very much larger than that of any other, meaning that we can restrict the further discussion to this one partitioning. (See the previous discussion of the multinomial distribution.)

Thus we want to determine that specific partitioning n* which renders the expression 2.20 a maximum, given the additional constraints

    Σ_{j=1}^{m} n_j − N = 0   and   Σ_{j=1}^{m} n_j E_j − E = 0    (2.21)

Since the logarithm is a monotonically increasing function we may scout for the maximum of ln P instead of P – this is mathematically much easier. The proven method for maximizing a function of many variables, allowing for additional constraints, is the variational method with Lagrange multipliers (see Problem 2.1). The variational equation

    δ ln P − δ [ α Σ_j n_j + β Σ_j E_j n_j ] = 0    (2.22)

with the undetermined multipliers α and β leads us, using the Stirling approximation for ln P, to

    −ln n*_j − α − βE_j = 0    (2.23)

Thus the optimal partitioning n*_j is given by

    n*_j = e^{−α−βE_j} ;   j = 1, . . . , m    (2.24)

Consequently

    f(r_j, v_j) ∝ n*_j = A exp{−βE(r_j, v_j)}    (2.25)

In particular, we find for a dilute gas, which in the absence of external forces will be homogeneous with respect to r,

    f(v_j) = B exp{−β(m v_j²/2)}    (2.26)

Using the normalizing condition

    ∫ f(v) dv = 1   or   B ∫ 4πv² e^{−βmv²/2} dv = 1    (2.27)

we find B = (βm/2π)^{3/2} and therefore

    f(v) = (βm/2π)^{3/2} e^{−βmv²/2}    (2.28)

Now we take a closer look at the quantity β which we introduced at first just for mathematical convenience. The mean kinetic energy of a particle is given by

    ⟨E⟩ ≡ ∫ dv (mv²/2) f(v) = 3/(2β)    (2.29)

But we will learn in Section 2.3 that the average kinetic energy of a molecule is related to the macroscopic observable quantity T (temperature) according to ⟨E⟩ = 3kT/2; therefore we have β ≡ 1/kT. Thus we may write the distribution density of the velocity in the customary format

    f(v) = (m/2πkT)^{3/2} e^{−mv²/2kT}    (2.30)


Figure 2.3: Maxwell-Boltzmann distribution

This density in velocity space is commonly called Maxwell-Boltzmann distribution density. The same name is also used for a slightly different object, namely the distribution density of the modulus of the particle velocity (the “speed”) which may easily be derived as (see equ. 1.66)

    f(|v|) = 4πv² f(v) = 4π (m/2πkT)^{3/2} v² e^{−mv²/2kT}    (2.31)

So we have determined the population numbers n*_j of the cells in µ-space by maximizing the number of possible allotments. It is possible to demonstrate that the partitioning we have found is not just the most probable but by far the most probable one. In other words, any noticeable deviation from this distribution of particle velocities is extremely improbable (see above: multinomial distribution). This makes for the great practical importance of the MB distribution: it is simply the distribution of velocities in a many-particle system which we may assume to hold, neglecting all other possible but improbable distributions.

As we can see from the figure, f(|v|) is a skewed distribution; its maximum is located at

    ṽ = √(2kT/m)    (2.32)

This most probable speed is not the same as the mean speed,

    ⟨v⟩ ≡ ∫_0^∞ dv v f(|v|) = √(8kT/πm)    (2.33)

or the root mean squared velocity (r.m.s. velocity),

    v_rms ≡ √⟨v²⟩ = √(3kT/m)    (2.34)

Example: The mass of an H₂ molecule is m = 3.346 · 10⁻²⁷ kg; at room temperature (appr. 300 K) we have kT = 1.380 · 10⁻²³ · 300 = 4.141 · 10⁻²¹ J; therefore the most probable speed of such a molecule under equilibrium conditions is ṽ = 1.573 · 10³ m/s.


2.3 Thermodynamics of dilute gases

The pressure of a gas in a container is produced by the incessant drumming of the gas molecules upon the walls. At each such wall collision – say, against the right wall of a cubic box – the respective momentum of the molecule p_x ≡ mv_x is reversed. The momentum transferred to the wall is thus Δp_x = 2mv_x. The force acting on the unit area of the wall is then just the time average of the momentum transfer:

    P ≡ ⟨K⟩_t / F ≡ ⟨ (1/Ft) Σ_{k=1}^{M(t)} Δp_x^{(k)} ⟩ ≡ ⟨ (1/Ft) Σ_{k=1}^{M(t)} 2m v_x^{(k)} ⟩    (2.35)

where M(t) is the number of wall impacts within the time t.

To obtain a theoretical prediction for the value of the pressure we argue as follows: the number of particles having x-velocity v_x and impinging on the right wall per unit time is obviously proportional to v_x. The momentum transfer from such a particle to the wall is 2mv_x. Thus we find

    P = (N/V) ∫_{v_x>0} dv 2m v_x² f(v)    (2.36)

Inserting the Boltzmann density for f(v) and performing the integrations we have

    P = (2/3) N⟨E⟩/V    (2.37)

Simulation: Thermodynamics of dilute gases

3-dimensional system of hard spheres in a cubic box. The pressure is determined
– from the collisions of the particles with the container walls (P_ex)
– from equation 2.37 (P_th).

[Code: Hspheres]

Where do we stand now? Just by statistical reasoning we have actually arrived at a prediction for the pressure – a thermodynamic quantity!

Having thus traversed the gap to macroscopic physics we take a few steps further. From thermodynamics we know that the pressure of a dilute gas at density ρ ≡ N/V and temperature T is P = ρkT. Comparing this to the above formula we find for the mean energy of a molecule

    ⟨E⟩ = 3kT/2 ,    (2.38)

and the parameter β (which was introduced in connection with the equilibrium density in velocity space) turns out to be just the inverse of kT.

The further explication of the dilute gas thermodynamics is easy. Taking the formula for the internal energy, E ≡ N⟨E⟩ = 3NkT/2, and using the First Law

    dQ = dE + P dV    (2.39)

we immediately find the amount of energy that is needed to raise the temperature of a gas by 1 degree – a.k.a. the heat capacity C_V:

    C_V ≡ (dQ/dT)_V = dE/dT = (3/2) Nk    (2.40)
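For one mole of a monatomic gas (N = N_A) equ. 2.40 gives C_V = (3/2)R ≈ 12.47 J/(mol K); a one-line sketch of the evaluation:

```java
// Heat capacity of a monatomic ideal gas, equ. 2.40: C_V = (3/2) N k.
// For one mole (N = Avogadro's number) this is (3/2) R.
public class HeatCapacity {
    public static double cv(double n) { return 1.5 * n * 1.380e-23; } // J/K

    public static void main(String[] args) {
        System.out.printf("C_V (1 mol) = %.2f J/K%n", cv(6.022e23)); // ~12.47
    }
}
```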

2.4 Transport processes

A system is in “equilibrium” if its properties do not change spontaneously over time. (In case there

are external ﬁelds – such as gravity – the material properties may vary in space, otherwise they

will be independent also of position: the system is not only in equilibrium but also homogeneous.)

In particular, in equilibrium the local energy density, i.e. the energy contained in a vol-

ume element dV divided by that dV , as well as the momentum density and the particle or

mass density remain constant. These densities, which refer to the three conserved quantities of

mechanics, play a special role in what follows.

Again considering the dilute gas, and assuming there are no external fields, the densities are

    ρ_E ≡ E/V = (1/V) Σ_{i=1}^{N} m v_i²/2 ,   ρ_p ≡ p/V = (1/V) Σ_{i=1}^{N} m v_i ,   and   ρ_m ≡ Nm/V    (2.41)

By looking more closely we would see that these local densities are in fact not quite constant; rather, they will fluctuate about their means. In other words, there will be a spontaneous waxing and waning of local gradients of those densities. Also, the experimenter may intervene to create a gradient. For instance, one might induce in a horizontal layer of a gas or liquid a certain x-velocity u_x ≠ 0, and thereby a momentum p_x. The difference between the momenta in adjacent layers then defines a gradient of the momentum density. If we left the system to itself this gradient would decrease and eventually vanish. The property governing the speed of this decrease is called viscosity.

If a physical quantity – such as mass, or momentum – is conserved, any local change can only

be achieved by a ﬂow into or out of the space region under consideration; we are then speaking

of transport processes whose speeds are governed by the respective transport coeﬃcients

– such as the viscosity η, the heat conductivity λ (for the energy transport) and the diﬀusion

constant D (for mass transport).

In real experiments aimed at determining these transport coeﬃcients the respective gradient

is artiﬁcially maintained. Thus there will be a continuing ﬂow of momentum, energy or matter in

the (reverse) direction of the gradient – which is obviously a non-equilibrium situation. However,

by a careful setup we can keep at least these ﬂows and the local densities and gradients constant

in time. This is called a stationary non-equilibrium situation.

DEFINITION OF TRANSPORT COEFFICIENTS

In order to deﬁne the transport coeﬃcients η, λ and D we consider the basic experimental setup.

• Viscosity: To measure the coefficient of momentum transport η we generate a laminar flow by placing a gas or fluid layer between two horizontal plates and moving the upper plate with constant velocity u_0 to the right. In this manner we superimpose a systematic x-velocity onto the random thermal motion of the molecules.

The magnitude of the thermal speed is of the order 10³ m/s; by adding a “shear velocity” u_x(z) of some centimeters per second the local equilibrium is not considerably disturbed. Thus we may assume that we still have a Maxwell-Boltzmann distribution of velocities at any point in the fluid, with the same value of ⟨E⟩ (i.e. kT) everywhere.

Yet by imposing a velocity gradient we have slightly perturbed the equilibrium; a certain amount of x-momentum will flow against the gradient – in our case downwards – so as to reestablish equilibrium. The amount of momentum flowing down through a unit of area per unit of time is called flow density j_p. In a first (linear) approximation this flow will be proportional to the imposed velocity gradient, and the coefficient of proportionality is, by definition, the transport coefficient η:

    j_p = −η du_x(z)/dz    (2.42)

This defining equation for η is often called Newton’s law of viscous flow. The parameter η is known as viscosity.

• Thermal conductivity: To generate a gradient of energy density we may place the substance between two heat reservoirs with different temperatures. If we use the same simple geometry as for the viscosity then the temperature (or energy) gradient will have only the one non-vanishing component dT/dz, which accordingly produces a flow of energy in the counter direction. The coefficient λ of thermal conduction is defined by (Fourier’s law)

    j_E = −λ dT(z)/dz    (2.43)

• Diffusion constant: Even in a homogeneous gas (or a fluid, or a solid, for that matter) the individual molecules will alter their positions in the course of time. This process is known as self diffusion. It is possible to “tag” some of the particles – for instance, by giving them a radioactive nucleus. The density of the marked species – call it 1 – may then once more have a gradient that will be balanced by a particle flow (Fick’s law):

    j_1 = −D dρ_1(z)/dz    (2.44)

MEAN FREE PATH

We will presently attempt to calculate the above defined transport coefficients from the microscopic dynamics of the molecules. In a simple kinetic treatment the elementary process determining the various flows is the free flight of a gas particle between two collisions with other molecules. Thus the first step should be an estimation of the average path length covered by a particle between two such collisions.

Let P(x) be the – as yet unknown – probability that a particle meets another one before it has passed a distance x. Then 1 − P(x) is the probability for the molecule to fly a distance x without undergoing a collision. But the probability of an encounter within an infinitesimal path element dx is given by p = ρσ²π dx, where σ is the diameter of the particles. (p equals the fraction of the “target area” covered by other particles.) Thus the differential probability for a collision between x and x + dx is

    dP(x) = [1 − P(x)] ρσ²π dx    (2.45)

which upon integration yields

    P(x) = 1 − e^{−ρσ²πx}    (2.46)

The probability density p(x) ≡ dP(x)/dx for a collision within that interval is

    p(x) = ρσ²π e^{−ρσ²πx}    (2.47)

The mean free path is then the first moment of this density:

    l ≡ ⟨x⟩ ≡ ∫_0^∞ dx x p(x) = 1/(ρσ²π)    (2.48)


Example: The diameter of an H₂ molecule is σ ≈ 10⁻¹⁰ m (= 1 Ångström). According to Loschmidt (and the later but more often quoted Avogadro) a volume of 22.4 · 10⁻³ m³ contains, at standard conditions, 6.022 · 10²³ particles; the particle density is thus ρ = 2.69 · 10²⁵ m⁻³, and the mean free path is l = 1.18 · 10⁻⁶ m.
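Equ. 2.47 says the free paths are exponentially distributed with mean l; a short sketch reproduces the H₂ number and confirms the mean by inverse-transform sampling (the sample size and seed are arbitrary choices):

```java
import java.util.Random;

// Free paths drawn from the density p(x) of equ. 2.47 are exponential
// with mean l = 1/(rho sigma^2 pi), equ. 2.48. Numbers reproduce the
// H2 example (sigma ~ 1e-10 m, Loschmidt density).
public class FreePath {
    public static double meanFreePath(double rho, double sigma) {
        return 1.0 / (rho * sigma * sigma * Math.PI);
    }

    // inverse-transform sampling of p(x) = e^{-x/l}/l
    public static double samplePath(double l, Random rnd) {
        return -l * Math.log(1.0 - rnd.nextDouble());
    }

    public static void main(String[] args) {
        double l = meanFreePath(2.69e25, 1e-10);
        System.out.printf("l = %.3e m%n", l);        // ~1.18e-6 m
        Random rnd = new Random(1);
        double sum = 0; int n = 200000;
        for (int i = 0; i < n; i++) sum += samplePath(l, rnd);
        System.out.printf("<x> (sampled) = %.3e m%n", sum / n);
    }
}
```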

Simulation: Collision rate in a dilute gas

3-dimensional system of hard spheres in a cubic box:
– Compare the theoretical collision rate Z_th with the empirical one Z_ex

[Code: Hspheres]

ESTIMATING THE TRANSPORT COEFFICIENTS

In the case of a dilute gas we can actually ﬁnd expressions for the transport coeﬃcients. Assuming

once more the basic geometric setting used in most experiments, we suppose that the gradient of

the respective conserved quantity has only a z component, leading to a ﬂow that is also directed

along the z axis.

• Viscosity: In the course of the random (thermal) motion of the particles the small systematic term u_x(z) is carried along. On the average, an equal number of particles will cross the interface between two horizontal layers from below and above. However, since the upper layer has a larger u_x (assuming du_x/dz > 0), it will gradually slow down, while the lower layer will take up speed. The flow of momentum due to this mechanism can be estimated as follows:

The mean number of particles crossing the unit of area from below and from above, respectively, is ρ⟨v⟩/6. Each of the molecules carries that systematic u_x(z) which pertains to the layer of its previous collision, i.e. u_x(z ± l). The momentum flow density is therefore

    j_p = (1/6) ρ⟨v⟩ [m u(z − l) − m u(z + l)] ≈ (1/6) ρ⟨v⟩ [−2m (du/dz) l]    (2.49)

Comparing this to the defining equation for the viscosity η we find

    η = (1/3) ρ l ⟨v⟩ m    (2.50)

It is a somewhat surprising consequence of this formula that, since l ∝ 1/ρ, the viscosity η is apparently independent of density! Maxwell, who was the first to derive this result, had to convince himself of its validity by accurate experiments on dilute gases.


• Thermal conductivity: A similar consideration as in the case of momentum transport yields, with j_E = −λ dT/dz,

    λ = (1/3) ρ l ⟨v⟩ m c_V    (2.51)

where c_V ≡ C_V/M is the specific heat (C_V . . . molar heat capacity; M . . . molar mass).

• Diffusion: Again we assume a gradient of the relevant density – in this case, the particle density of some species 1 – along the z direction. Writing j_1 = −D dρ_1/dz and using the same reasoning as before we find

    D = (1/3) l ⟨v⟩    (2.52)
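A sketch of these estimates for nitrogen, in the spirit of Problem 2.5; the molecular mass, diameter and specific heat used below are rough textbook numbers (assumptions, not results from the text), and the formulas are the dilute-gas estimates 2.48, 2.50–2.52 only, so the output is an order-of-magnitude statement:

```java
// Order-of-magnitude transport coefficients of N2 from equs. 2.48,
// 2.50-2.52. Assumed inputs: sigma ~ 4e-10 m, m(N2) ~ 4.65e-26 kg,
// c_V ~ 742 J/(kg K), Loschmidt density at T = 273 K.
public class Transport {
    static final double K_B = 1.380e-23;

    public static double l(double rho, double sigma)  { return 1 / (rho * sigma * sigma * Math.PI); }
    public static double vMean(double m, double T)    { return Math.sqrt(8 * K_B * T / (Math.PI * m)); }
    public static double eta(double rho, double sigma, double m, double T) {
        return rho * l(rho, sigma) * vMean(m, T) * m / 3.0;   // equ. 2.50
    }
    public static double lambda(double eta, double cv) { return eta * cv; } // equ. 2.51
    public static double diff(double rho, double sigma, double m, double T) {
        return l(rho, sigma) * vMean(m, T) / 3.0;             // equ. 2.52
    }

    public static void main(String[] args) {
        double rho = 2.69e25, sigma = 4e-10, m = 4.65e-26, T = 273;
        double eta = eta(rho, sigma, m, T);
        System.out.printf("l      = %.2e m%n", l(rho, sigma));       // ~7.4e-8 m
        System.out.printf("eta    = %.2e Pa s%n", eta);              // ~1.4e-5
        System.out.printf("lambda = %.2e W/(m K)%n", lambda(eta, 742.0));
        System.out.printf("D      = %.2e m2/s%n", diff(rho, sigma, m, T));
    }
}
```

Note that doubling ρ leaves η unchanged, since ρ·l = 1/(σ²π): Maxwell's density-independence falls out of the code as well as of the formula.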

2.5 Just in case ...

... someone has forgotten how to find the extremum of a many-variable function under additional constraints. To remind you: the method of undetermined Lagrange multipliers waits to be applied here.

Let f(x, y) = x² + y² be the given function. Of course, this is the equation of a paraboloid with its tip – or minimum – at the origin. However, let g(x, y) ≡ x − y − 1 = 0 be the constraint equation, meaning that we don’t search for the global minimum but for the minimum along the line y = x − 1. There are two ways to go about it. The simple but inconvenient way is to substitute y = x − 1 in f(x, y), thus rendering f a function of x only. Equating the derivative df/dx to zero we find the locus of the conditional minimum, x = 1/2 and y = −1/2. The process of substitution is, in general, tedious.

A more elegant method is this: defining an (undetermined) Lagrange multiplier α, find the minimum of the function f(x, y) − αg(x, y) according to

    ∂f/∂x − α ∂g/∂x = 0    (2.53)
    ∂f/∂y − α ∂g/∂y = 0    (2.54)

Eliminating α we find the solution without substituting anything. In our case

    2x − α = 0    (2.55)
    2y + α = 0    (2.56)

⟹ x + y = 0, and from g(x, y) = 0: (x, y) = (1/2, −1/2).
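The worked example is easy to confirm numerically by a brute-force search along the constraint line (a sketch; the grid range and step are arbitrary choices):

```java
// Numerical check of the worked example: minimize f(x,y) = x^2 + y^2
// along the constraint g(x,y) = x - y - 1 = 0, i.e. along y = x - 1.
public class LagrangeCheck {
    public static double f(double x, double y) { return x * x + y * y; }

    // crude grid search for the minimizing x on the constraint line
    public static double argminX() {
        double best = 0, bestF = Double.MAX_VALUE;
        for (double x = -2; x <= 2; x += 1e-4) {
            double val = f(x, x - 1);
            if (val < bestF) { bestF = val; best = x; }
        }
        return best;
    }

    public static void main(String[] args) {
        double x = argminX();
        System.out.printf("minimum at (%.3f, %.3f)%n", x, x - 1); // (0.500, -0.500)
    }
}
```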

2.6 Problems for Chapter 2

EXERCISES:

2.1 Variational method of Lagrange: Determine the minimum of the function f(x, y) = x² + 2y² under the condition (i.e. along the curve) g(x, y) = x + y − 6 = 0, by two different methods:
a) by inserting y = 6 − x in f(x, y) and differentiating with respect to x;
b) using a Lagrange multiplier α and evaluating Lagrange’s equations ∂f/∂x − α(∂g/∂x) = 0 and ∂f/∂y − α(∂g/∂y) = 0. (Shorthand notation: δf − αδg = 0; δf . . . “variation of f”.)
For your better understanding sketch the functions f(x, y) and g(x, y) = 0.

2.2 Method of the most probable distribution: Having understood the principle of Lagrange variation, reproduce the derivation of the Boltzmann distribution of energies (see text). Compare your result with the bottom right histogram in Applet LBRoulette.

2.3 Moments of the Maxwell-Boltzmann distribution: Verify the expressions for the most probable velocity, the average and the r.m.s. velocity.

2.4 Pressure in a dilute gas: Verify the formula P = (2/3)(N⟨E⟩/V).

2.5 Transport properties: For nitrogen under standard conditions, estimate the mean free path and the transport coefficients η, λ and D (σ ≈ 4 · 10⁻¹⁰ m).

TEST YOUR UNDERSTANDING OF CHAPTER 2:

2.1 Maxwell-Boltzmann distribution: What is the formula for the distribution density f_0(v) of the molecular velocities in equilibrium; what is the respective formula for the speeds (absolute values of the velocity), f_0(|v|)?

2.2 Pressure in an ideal gas: Derive the pressure equation of state for the ideal gas from simple kinetic theory.

2.3 Transport coefficients:
– Write down the defining equation for one of the three transport coefficients.
– What is the mean free path in a gas of spheres with diameter σ?
– How does the viscosity of a dilute gas depend on the density?


Chapter 3

Phase space

Figure 3.1: Josiah Willard Gibbs

The Kinetic Theory explained in Chapter 2 is applicable only to gases and – with certain add-ons

– to ﬂuids. A more general and extremely powerful method for analyzing statistical-mechanical

systems is based on a geometrical exploration of the high-dimensional phase space spanned by

the complete set of microvariables.

3.1 Microscopic variables

The microscopic state of a model system is uniquely determined by the speciﬁcation of the

complete set of microscopic variables. The number of such variables is of the same order as

the number of particles. In contrast, the macroscopic state is speciﬁed by a small number

of measurable quantities such as net mass, energy, or volume. Generally a huge number of

diﬀerent microstates are compatible with one single macrostate; and it is a primary task of

statistical mechanics to ﬁnd out just how many microstates there are for a given set of macroscopic

conditions.

Interpreting the microscopic variables as coordinates in a high-dimensional space we may represent a particular microstate as a point or vector in that space. The space itself is called Gibbs phase space; the state vector is often symbolized as Γ.


Figure 3.2: Phase space of the 2D quantum gas

Consider a simple, classical many-particle system – say, an ideal gas, or a fluid made up of hard spheres or Lennard-Jones molecules. The state vector is then defined by all position and velocity coordinates:

    Γ ≡ {r_1, . . . , r_N ; v_1, . . . , v_N}   with   r_i ∈ V ;   v_{i,α} ∈ (−∞, +∞)    (3.1)

The number of degrees of freedom (d.o.f.) is n = 6N (or, in 2 dimensions, 4N); the number of velocity d.o.f. is 3N (or 2N). It is often allowed – and always advantageous – to treat the subspaces Γ_r ≡ {r_i} (position space) and Γ_v ≡ {v_i} (velocity space) separately.

The representation of a microstate of the entire system by a single point in 6N-dimensional Gibbs space must not be mixed up with the treatment of Chapter 2, introduced by Boltzmann. The µ-space defined there had only 6 dimensions {r, v}, and each particle in the system had its own representative point. Thus the microstate of a system of N particles corresponded to a swarm of N points {r_i, v_i}.

In the case of an ideal quantum gas it suffices to specify all quantum numbers to define a state:

    Γ ≡ {(n_ix, n_iy, n_iz); i = 1, . . . , N}    (3.2)

Model systems made up of N spins are specified by

    Γ ≡ {σ_i ; i = 1, . . . , N}    (3.3)

with σ_i = ±1.

ENERGY SURFACE

We consider a collection of M systems, all having the same energy E, the same number of particles N, and – in the case of a gas or fluid – the same volume V. All microstates Γ compatible with these macroscopic conditions are then equally probable, that is, they have the same relative frequency within the ensemble of systems. The size M of the ensemble is assumed to be very large – indeed, to approach infinity.

The assumption that all microstates that are compatible with the condition E = E_0 have the same probability is one of the solid foundations Statistical Mechanics is built upon. It is called the “postulate of equal a priori probability”. For a mathematically exact formulation of this axiom we use the phase space density, which is supposed to have the property

    ρ(r, v) = ρ_0   for E(r, v) = E_0 ,   and 0 elsewhere    (3.4)

The condition E = const defines an (n−1)-dimensional “surface” within the n-dimensional phase space of the system. The set of all points upon that “energy surface” is named microcanonical ensemble. To take an example, in the 3N-dimensional velocity space of a classical ideal gas the equation

    E_kin ≡ (m/2) Σ_i v_i² = const = E_0    (3.5)

defines the (3N−1)-dimensional surface of a 3N-sphere of radius r = √(2E_0/m).

A typical question to be answered by applying statistical methods to such an ensemble is this: what is the average of the squared particle velocity ⟨v_i²⟩ over all – a priori equally probable – states on this surface?

ERGODICITY

Instead of considering an ensemble of systems let us now watch just one single system as it evolves in time according to the laws of mechanics. In a closed system the total energy will remain constant; the microstates visited by the system must therefore lie on the (n−1)-dimensional energy surface defining the microcanonical ensemble.

The ergodic hypothesis states that in the course of such a “natural evolution” of the system any permitted microstate will be reached (or closely approximated) with the same relative frequency.

This hypothesis cannot be proven in general; in fact, it does not always hold. However, for many relevant systems such as gases or fluids under normal conditions it is quite true. In such cases the time t_0 needed for a sufficiently thorough perambulation of the energy surface is in the range of 10⁻⁹ − 10⁻⁶ seconds, i.e. safely below the typical observation time in an experiment. Among those systems which we may characterize as “barely ergodic” or non-ergodic we have supercooled liquids and glasses. In such systems the state vector Γ remains trapped for long times in a limited region of the energy surface; it may then take seconds, days, or even centuries before other parts of the microcanonical surface are reached.

The ergodic hypothesis, if true, has an important practical consequence: for the calculation of mean values over the microstates on the energy surface it does not matter if we take the average over states randomly picked from a microcanonical ensemble, or over the successive states of one single, isolated system. This corollary of the ergodic hypothesis is often succinctly stated as

    ensemble average = time average

The assumption of ergodicity enables us to support our theoretical arguments by “molecular dynamics” computer experiments. These are deterministic simulations reproducing the temporal evolution of a single isolated N-particle system. We will later touch upon another kind of computer simulation, in which the state space is perambulated in a stochastic manner; it bears the suggestive name “Monte Carlo simulation”.

ENERGY SHELL

In place of the strict condition E = E_0 we will generally require the weaker condition E_0 − ∆E ≤ E ≤ E_0 to hold. In other words, the permitted states of the system are to be restricted to a thin “shell” at E ≈ E_0. This more pragmatic requirement – which is made to keep the mathematics simpler – agrees well with the experimental fact that the exchange of energy between a system and its environment may be kept small, but can never be completely suppressed.

Model                     Microvariables         Dimension of       Energy
                          (i = 1, . . . , N)     phase space
--------------------------------------------------------------------------------------------------
Classical ideal gas       r_i, v_i               6N                 E_kin ≡ (m/2) Σ v_i²
Hard spheres in a box     r_i, v_i               6N                 E_kin ≡ (m/2) Σ v_i²
Hard discs in a box       r_i, v_i               4N                 E_kin ≡ (m/2) Σ v_i²
LJ molecules, periodic    r_i, v_i               6N (or 6N − 3)     E_kin + E_pot ≡ (m/2) Σ v_i² + Σ_{i,j>i} U_LJ(r_ij)
boundary conditions
Linear or nonlinear       r_i, v_i, e_i, ω_i     10N or 12N         E_kin + E_pot ≡ (m/2) Σ v_i² + (I/2) Σ ω_i² + E_pot
molecules
Harmonic crystal          q_i, q̇_i              6N                 E_kin + E_pot ≡ (m/2) Σ q̇_i² + (f/2) Σ |q_i − q_{i−1}|²
Ideal quantum gas         n_i                    3N                 (h²/8mL²) Σ_i |n_i|²
Spins, non-interacting    σ_i                    N                  −H Σ_i σ_i

Table 3.1: State spaces of some model systems

Thus we assume for the density in phase space that

    ρ(r, v) = ρ_0   for E(r, v) ∈ [E_0 − ∆E, E_0] ,   and 0 elsewhere    (3.6)

For the classical ideal gas this requirement reads¹

    E_0 − ∆E ≤ E_kin ≡ (m/2) Σ_i v_i² ≤ E_0    (3.7)

For the ideal quantum gas we require

    E_0 − ∆E ≤ E_{N,n} ≡ (h²/8mL²) Σ_i n_i² ≤ E_0    (3.8)

In a system of N non-interacting spins the energy shell is defined by −H Σ_i σ_i ∈ [E_0 − ∆E, E_0].

Table 3.1 presents an overview of the state spaces and energy functions for various model

systems.

PHASE SPACE VOLUME AND THERMODYNAMIC PROBABILITY

Let us now recall the fundamental assumption that all microstates having the same energy are

equally probable. Thus they represent elementary events as deﬁned in Section 1.3.

It follows then that a macrostate that allows for more microstates than others will occur more

often – and thus will be more probable.

¹ In an MD simulation not only the total energy but also the total momentum P is conserved; this should be kept in mind when comparing simulation results to theoretical statements. Actually, the four conditions E ∈ (E_0 − ∆E, E_0) and P = const would define a (3N − 3)-dimensional spherical shell in velocity space. However, for sufficiently numerous particles (N ≥ 100) this detail may be neglected, and we can base the argumentation just on the condition E ≈ E_0 (with a 3N-dimensional shell).


As an example, we may ask for the probability to find all molecules of an ideal gas in the left half of the vessel, with no other restrictions on position or speed. All such microstates are located in a small part of the total permitted phase space shell [E, ∆E]; they make up a sub-ensemble of the complete microcanonical ensemble. Now, the probability of the macrostate “all particles in the left half of the container” is obviously equal to the size ratio of the subensemble and the total ensemble. We will have to compare the volume (in phase space) of the partial shell pertaining to the subensemble to the volume of the total shell. (We will see later that the ratio of the two volumes is 1/2^N and may thus be neglected – in accordance with experience and intuition.)

3.2 From Hyperspheres to Entropy

Some model systems have the convenient property that their energy may be written as a sum over the squares of their microvariables. The most popular example is the classical ideal gas with E ≡ E_kin = (m/2) Σ v_i². The conditions E ≤ E_0 or E ≈ E_0 then describe a high-dimensional sphere or spherical shell, respectively. Thus it will be worth our while to make ourselves acquainted with the geometrical properties of such bodies. It must be stressed, however, that the restriction to n-spheres is only a matter of mathematical convenience. It will turn out eventually that those properties of n-dimensional phase space which are of importance in Statistical Mechanics are actually quite robust, and not reserved to n-spheres. For instance, the n-rhomboid which pertains to the condition Σ_i σ_i ≈ const for simple spin systems could be (and often is) used instead.

VOLUME AND SURFACE OF HIGH-DIMENSIONAL SPHERES

In the following discussion it will be convenient to use, in addition to the sphere radius r_0, a variable that represents the square of the radius: z_0 ≡ r_0². In the phase space of interactionless many-particle systems this quantity is related to an energy, as may be seen from equs. 3.7, 3.8 and Table 3.1; and in an isolated system it is just the energy which is given as a basic parameter.

Attention: It should be kept in mind that the energy – and thus the square of the sphere radius – is an extensive quantity, meaning that it will increase as the number of degrees of freedom (or particles): z_0 ∝ n. It would be unphysical and misleading to keep z_0 at some constant value and at the same time raise the number of dimensions.

VOLUME

The volume of an n-dimensional sphere with radius r_0 = √z_0 is

    V_n(z_0) = C_n r_0^n = C_n z_0^{n/2}   with   C_n = π^{n/2}/(n/2)!    (3.9)

where (1/2)! = √π/2 and (x + 1)! = x! (x + 1). The following recursion is useful:

    C_{n+2} = [2π/(n + 2)] C_n    (3.10)

Examples: C_1 = 2, C_2 = π, C_3 = π^{3/2}/(3/2)! = 4π/3, C_4 = π²/2, C_5 = 8π²/15, C_6 = π³/6, . . . , C_12 = π⁶/720, . . .

For large n we have, using Stirling’s approximation for (n/2)!,

    C_n ≈ (2πe/n)^{n/2} · 1/√(πn)   or   ln C_n ≈ (n/2) ln(2πe/n) ≈ (n/2)(ln π + 1) − (n/2) ln(n/2)    (3.11)


As soon as the Stirling approximation holds, i.e. for n ≥ 100, the hypersphere volume may be written

    V_n(z_0) ≈ (1/√(πn)) (2πe z_0/n)^{n/2}   or   ln V_n(z_0) = (n/2) ln(2πe z_0/n) − ln √(πn)    (3.12)

Going to very large n (≥ 500) we find for the logarithm of the volume (the formula for V proper is then difficult to handle due to the large exponents)

    ln V_n(z_0) ≈ (n/2) ln(2πe z_0/n)    (3.13)

Example: Let n = 1000 and z_0 = 1000. For the log volume we find ln V_1000(1000) ≈ 500 ln(2πe) − ln √(1000π) = 1414.9.
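The example is worth checking both ways: exactly, via ln V_n = ln C_n + (n/2) ln z_0, and with the large-n formula 3.13. A sketch (restricted to even n, so that (n/2)! is an ordinary factorial):

```java
// Log-volume of the n-sphere: exact via ln C_n = (n/2) ln pi - ln (n/2)!,
// versus the large-n approximation of equ. 3.13. Even n only.
public class LogVolume {
    public static double lnVnExact(int n, double z0) {
        double lnCn = 0.5 * n * Math.log(Math.PI);
        for (int k = 1; k <= n / 2; k++) lnCn -= Math.log(k); // subtract ln (n/2)!
        return lnCn + 0.5 * n * Math.log(z0);
    }

    public static double lnVnLargeN(int n, double z0) {
        return 0.5 * n * Math.log(2 * Math.PI * Math.E * z0 / n);
    }

    public static void main(String[] args) {
        System.out.printf("exact:   %.2f%n", lnVnExact(1000, 1000));  // ~1414.9
        System.out.printf("large-n: %.2f%n", lnVnLargeN(1000, 1000)); // ~1418.9
    }
}
```

The small discrepancy between the two numbers is just the ln √(πn) term that equ. 3.13 drops; relative to n/2 it is negligible.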

SURFACE AREA

For the surface area of an n-dimensional sphere we have

    O_n(z_0) = dV_n(z_0)/dr_0 = n C_n z_0^{(n−1)/2}    (3.14)

Examples:

    V_1(z_0) = 2r_0 ,        O_1(z_0) = 2
    V_2(z_0) = πz_0 ,        O_2(z_0) = 2πr_0
    V_3(z_0) = (4π/3) r_0³ , O_3(z_0) = 4πz_0
    V_4(z_0) = (π²/2) z_0² , O_4(z_0) = 2π² r_0³

A very useful representation of the surface is this:

    O_n(r_0) = ∫_{−r_0}^{r_0} dr_1 (r_0/r_2) O_{n−1}(r_2)    (3.15)

with r_2 ≡ √(r_0² − r_1²). This formula provides an important insight; it shows that the “mass” of a spherical shell is distributed along one sphere axis (r_1) as follows:

    p_n(r_1) = (r_0/r_2) · O_{n−1}(r_2)/O_n(r_0) = [(n − 1)C_{n−1}/(n C_n)] · r_2^{n−3}/r_0^{n−2}    (3.16)

Examples: Let r_0 = 1; then we find (see Fig. 3.3)

    p_2(r_1) = (1/π) (1 − r_1²)^{−1/2}
    p_3(r_1) = 1/2   (constant!)
    p_4(r_1) = (2/π) (1 − r_1²)^{1/2}
    p_5(r_1) = (3/4) (1 − r_1²)
    . . .
    p_12(r_1) = (256/63π) (1 − r_1²)^{9/2}

(The astute reader notes that for once we have kept r_0 = 1, regardless of the value of n; this is permitted here because we are dealing with a normalized density p_n.)
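The density 3.16 can also be probed by Monte Carlo: a Gaussian random vector, normalized to unit length, is uniformly distributed on the n-sphere, so one coordinate of it is a sample from p_n. A sketch (sample count and seed are arbitrary choices) checks the second moment, which by symmetry must equal r_0²/n = 1/n:

```java
import java.util.Random;

// Monte Carlo illustration of equ. 3.16: points uniform on the unit
// n-sphere (normalized Gaussian vectors); one coordinate then has
// second moment 1/n, and for n = 3 its density is flat on (-1, 1).
public class SphereComponent {
    public static double secondMoment(int n, int samples, long seed) {
        Random rnd = new Random(seed);
        double sumSq = 0;
        for (int s = 0; s < samples; s++) {
            double[] x = new double[n];
            double norm = 0;
            for (int i = 0; i < n; i++) { x[i] = rnd.nextGaussian(); norm += x[i] * x[i]; }
            double r1 = x[0] / Math.sqrt(norm);   // one coordinate of the unit vector
            sumSq += r1 * r1;
        }
        return sumSq / samples;
    }

    public static void main(String[] args) {
        for (int n : new int[]{3, 12, 100})
            System.out.printf("n=%d: <r1^2> = %.4f  (1/n = %.4f)%n",
                              n, secondMoment(n, 100000, 7L), 1.0 / n);
    }
}
```

The shrinking second moment is the numerical face of Fig. 3.3: with growing n the mass of the surface concentrates near r_1 = 0.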


Figure 3.3: Mass distribution p(r_1) of an (n − 1)-dimensional spherical surface along one axis, for n = 2, 3, 4, 5, 12. With increasing dimension the mass concentrates more and more at the center of the axis. If we interpret the surface as the locus of all phase space points with given total energy E_kin = (m/2) Σ v_i², then p(r_1) ≡ p(v_i) is just the distribution density for any single velocity component v_i.

Application: Assume that a system has n degrees of freedom of translatory motion. (Example: n particles moving on a line, or n/3 particles in 3 dimensions.) Let the sum of squares of all velocities (energy!) be given, but apart from that let any particular combination of the values v_1, v_2, . . . be equally probable. All “phase space points” v ≡ {v_1 . . . v_n} are then homogeneously distributed on the spherical surface O_n(|v|^2), and a single velocity v_1 occurs with the probability density of equ. 3.16.

As we increase the number of dimensions, the character of this density function changes dramatically at first (see Fig. 3.3). If just two particles on a line (or the two d.o.f. of a pinball) share the total energy mv^2/2, then the velocity of one of them is most probably near the possible maximal value while the other has only a small speed. In contrast, for many dimensions (or particles) the maximum of the probability density p_n(v_1) is near zero. The case n = 3, meaning 3 particles on a line or one particle in 3 dimensions, is special: all possible values of v_1 occur with equal probability.

Approach to the Maxwell-Boltzmann distribution: For very large n we have

p_n(v) ≈ (1/√(2π⟨v^2⟩)) exp{−v^2/2⟨v^2⟩}   (3.17)

with ⟨v^2⟩ = 2E_0/nm.

The Maxwell-Boltzmann distribution may thus be derived solely from the postulate of equal a priori probability and the geometric properties of high-dimensional spheres.
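This limit is easy to observe in a small numerical experiment (a Python sketch; the sampling trick of normalizing a vector of independent Gaussians is a standard way of drawing uniformly from a hypersphere surface, and is an assumption of this illustration, not part of the text): for large n, a single component of a point drawn uniformly from the surface |v|^2 = n is very nearly Gaussian with ⟨v^2⟩ = 1.

```python
import math, random

random.seed(1)

def on_sphere(n, r0):
    """Uniform random point on the surface of an n-sphere of radius r0,
    obtained by normalizing a vector of independent Gaussians."""
    g = [random.gauss(0.0, 1.0) for _ in range(n)]
    norm = math.sqrt(sum(gi * gi for gi in g))
    return [r0 * gi / norm for gi in g]

# n velocity components sharing a fixed "energy": |v|^2 = n, so <v_i^2> = 1
n, nsamp = 1000, 1000
v1 = [on_sphere(n, math.sqrt(n))[0] for _ in range(nsamp)]
mean = sum(x for x in v1) / nsamp
var = sum(x * x for x in v1) / nsamp   # should approach <v^2> = 1 (equ. 3.17)
```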


Figure 3.4: Bunimovich’s stadium billiard. Distribution of flight directions, p(φ), and of a single velocity component, p(v_x).

Figure 3.5: Simulation: N = 1 to 3 hard disks in a 2D box with reflecting walls. Distribution of flight directions, p(φ), and of a single velocity component, p(v_1x). Comparison with the theoretical distribution. [Code: Harddisks]

TWO AMAZING PROPERTIES OF HYPERSPHERES

We have seen that spheres in high-dimensional spaces exhibit quite unusual geometrical properties. In the context of Statistical Physics the following facts are of particular relevance:

• Practically the entire volume of a hypersphere is assembled in a thin shell immediately below the surface.

• The volume of a hypersphere is – at least on a logarithmic scale – almost identical to the volume of the largest inscribed hypercylinder.

We will consider these two statements in turn.

VOLUME OF A SPHERICAL SHELL

The ratio of the volume ∆V_n(r_0 − ∆r, r_0) of a thin “skin” near the surface to the total volume of the sphere is

∆V_n/V_n = [r_0^n − (r_0 − ∆r)^n] / r_0^n = 1 − (1 − ∆r/r_0)^n → 1 − exp{−n ∆r/r_0}   (3.18)


Figure 3.6: Simulation: One hard sphere in a box with reflecting walls. Distribution of one velocity component; demonstration of the special case n = 3 (where p(v_1x) = const). [Code: Hspheres]

Figure 3.7: Simulation: N Lennard-Jones particles in a 2D box with periodic boundary conditions. Distribution of a velocity component, p(v_1x). [Code: LJones]

or, using the quantities z_0 and ∆z:

∆V_n/V_n → 1 − exp[−(n/2)(∆z/z_0)]   (3.19)

For n → ∞ this ratio approaches 1, regardless of how thin the shell may be!

At very high dimensions the entire volume of a sphere is concentrated immediately below the

surface:

∆V_n(z_0, ∆z) → V_n(z_0),   n ≫ 1   (3.20)

Example: n = 1000, ∆r = r_0/100  →  ∆V/V = 1 − 4.3·10^{−5}
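The shell fraction of equ. 3.18 and its exponential approximation can each be verified in a single line (Python sketch):

```python
import math

def shell_fraction(n, eps):
    """Fraction of an n-sphere's volume inside a skin of thickness eps*r0, equ. 3.18."""
    return 1.0 - (1.0 - eps) ** n

# n = 1000, skin thickness 1% of the radius: all but ~4.3e-5 of the volume
frac = shell_fraction(1000, 0.01)
approx = 1.0 - math.exp(-1000 * 0.01)   # the limiting form in equ. 3.18
```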

HYPERSPHERES AND HYPERCYLINDERS


The sphere volume V_n(z_0) may alternatively be written as

V_n(z_0) = ∫_0^{r_0} dr_1 O_{n_1}(z_1) V_{n−n_1}(z_2)
         ≡ ∫_0^{z_0} [dV_{n_1}(z_1)/dz_1] dz_1 V_{n−n_1}(z_0 − z_1)
         ≡ ∫_0^{z_0} dV_{n_1}(z_1) V_{n−n_1}(z_0 − z_1)   (3.21)

with r_1 ≡ √z_1.

Example:

V_6(z_0) = ∫_0^{r_0} dr_1 O_3(z_1) V_3(z_0 − z_1) = ∫_0^{r_0} dr_1 4π r_1^2 (4π/3)(z_0 − z_1)^{3/2} = (π^3/6) z_0^3   (3.22)

Partitioning the integration interval [0, z_0] in equ. 3.21 into small intervals of size ∆z we may write

V_n(z_0) ≈ Σ_{k=1}^{K} ∆V_{n_1}(z_k) V_{n−n_1}(z_0 − z_k)   (3.23)

where ∆V_{n_1}(z_k) ≡ V_{n_1}(k∆z) − V_{n_1}((k − 1)∆z).

Similarly, for the volume of the spherical shell we have

∆V_n(z_0) = ∫_0^{z_0} dV_{n_1}(z_1) ∆V_{n−n_1}(z_0 − z_1) ≈ Σ_{k=1}^{K} ∆V_{n_1}(z_k) ∆V_{n−n_1}(z_0 − z_k)   (3.24)

Remembering equ. 3.20 we may, for high enough dimensions n, n_1 and (n − n_1), always write V in place of ∆V. Therefore,

V_n(z_0) ≈ Σ_{k=1}^{K} V_{n_1}(z_k) V_{n−n_1}(z_0 − z_k)   (3.25)

Now, the terms in these sums are strongly varying. For high dimensions there is always one single term that dominates the sum; all other terms may safely be neglected. To find this term we evaluate

(d/dz) [V_{n_1}(z) V_{n−n_1}(z_0 − z)] = 0   (3.26)

From

d[z^{n_1/2} (z_0 − z)^{(n−n_1)/2}] / dz = 0   (3.27)

we find for the argument z* that maximizes the integrand in 3.21 or the summation term in 3.24 or 3.25,

z* = (n_1/n) z_0   (3.28)

We have thus found a very surprising property of high-dimensional spheres which we may summarize by the following calculation rule:

• Divide the given dimension n into n_1 and n − n_1 such that also n_1 ≫ 1 and n − n_1 ≫ 1.


• Find the maximum of the product f(z) ≡ V_{n_1}(z) V_{n−n_1}(z_0 − z) (or the maximum of the logarithm ln f(z)) with respect to z; this maximum is located at

  z* = (n_1/n) z_0   (3.29)

• With this, the following holds:

  ln V_n(z_0) ≈ ln V_{n_1}(z*) + ln V_{n−n_1}(z_0 − z*)   (3.30)

and similarly,

  ln ∆V_n(z_0) ≈ ln ∆V_{n_1}(z*) + ln ∆V_{n−n_1}(z_0 − z*)   (3.31)

(Remark: in a numerical verification of these relations it must be remembered that the maximum of the function on the r.h.s. of 3.30 or 3.31 is very sharp indeed; it may be overlooked when the interval [0, z_0] is scanned in regular steps. It is better to localize z* first, then compare with slightly smaller and larger z.)

The geometric interpretation of the relation 3.30 is amazing:

On a logarithmic scale the volume of an n-sphere equals the product of the volumes of two spheres in the subspaces n_1 and n − n_1. But this product may be understood as the volume of a hypercylinder that is inscribed in the n-sphere and which has its “base area” in n_1 space and its “height” in (n − n_1) space.

Example: n = 1000, n_1 = 400, z_0 = 1000: the maximum of the quantity ln f(z) ≡ ln V_400(z) + ln V_600(1000 − z) is located at z* = 400, and we have

500 ln(2πe) − ln √(1000π) ≈ 200 ln(2πe) − ln √(400π) + 300 ln(2πe) − ln √(600π)
1418.94 − 4.03 ≈ 567.58 − 3.57 + 851.36 − 3.77   (3.32)
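The same comparison can be run exactly, without Stirling's approximation, via the log-Gamma function (a Python sketch using ln V_n(z) = (n/2) ln(πz) − ln((n/2)!)):

```python
import math

def lnV(n, z):
    """Exact log volume of the n-sphere: (n/2) ln(pi z) - ln((n/2)!)."""
    return (n / 2) * math.log(math.pi * z) - math.lgamma(n / 2 + 1)

n, n1, z0 = 1000, 400, 1000.0
zstar = (n1 / n) * z0                            # equ. 3.29: z* = 400
lhs = lnV(n, z0)                                 # log volume of the full sphere
rhs = lnV(n1, zstar) + lnV(n - n1, z0 - zstar)   # log volume of the best cylinder
```

The two log volumes agree to a fraction of a percent, while the cylinder's remains (as it must) the smaller one.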

Figure 3.8: Simulation: Hyperspheres and -cylinders. In a given hypersphere of dimension n we inscribe hypercylinders whose “base areas” and “heights” have n_1 and n − n_1 dimensions, respectively. The hypercylinder with the maximum volume is identified: its log volume is almost equal to that of the circumscribed sphere. [Code: Entropy1]


DISCRETISATION OF PHASE SPACE; ENTROPY

Returning from geometry to physics, we will from now on denote phase space volumes by Γ, reserving the symbol V for the volume of a system in real space. Thus, Γ_v is the 3N-dimensional velocity phase space volume of an N-particle system, Γ_r refers to the positional subspace volume which usually will also have 3N dimensions, and Γ denotes the full phase space (or volume in phase space) with 6N dimensions.

Depending on which model system we are considering, the microvariables are either continuous (classical manybody system) or discrete (quantum models, spin lattices). In order to be able to “enumerate” the states in phase space it is opportune to introduce a raster even in the case of continuous microvariables. Thus we imagine the phase space of an N-particle system to be divided into cells of size

g ≡ ∆x ∆v   (3.33)

Now let us consider a classical gas, or fluid, with particle number N and volume V. In the 3N-dimensional velocity subspace of the 6N-dimensional phase space the condition E_kin ∈ [E, ∆E] again defines a spherical shell whose volume is essentially equal to the volume of the enclosed sphere. The number of ∆v cells in (or below) this shell is

Σ_v(E, ∆E) ≡ ∆Γ_v/(∆v)^{3N} ≈ Γ_v/(∆v)^{3N} = C_{3N} (2E/m)^{3N/2} / (∆v)^{3N}   (3.34)

In the case of the ideal gas the contribution of the position subspace may be included in a simple manner; we write

Σ_r = V^N / (∆x)^{3N}   (3.35)

Thus the total number of cells in 6N space is

Σ′(N, V, E) = Σ_r Σ_v = C_{3N} [ (V/g^3) (2E/m)^{3/2} ]^N = (1/√(3Nπ)) (2πe/3N)^{3N/2} [ (V/g^3) (2E/m)^{3/2} ]^N   (3.36)

Now we have to prematurely introduce a result of quantum statistics. One characteristic property of quantum objects is their indistinguishability. It is evident that the number of distinct microstates will depend on whether or not we take into account this quantum property. A detailed analysis, which must be postponed for now, leads to the simple result that we have to divide the above quantity Σ′(N, V, E) by N! to find

Σ(N, V, E) ≡ Σ′(N, V, E)/N!   (3.37)

which is now indeed proportional to the total number of physically distinct microstates. This rule is known as the rule of correct Boltzmann enumeration.

It is characteristic of the physical “instinct” of J. W. Gibbs that he found just this rule for the correct calculation of Σ although quantum mechanics was not yet known to him. He proposed the 1/N! rule in an ad hoc manner to solve a certain theoretical problem, the so-called Gibbs Paradox.

The quantity Σ(N, V, E) is a measure of the available phase space volume, given in units of g^{3N}. The logarithm of Σ(N, V, E) has great physical significance: it is – up to a prefactor k – identical to the entropy S(N, V, E) that was introduced in thermodynamics. At present, this identification is no more than a hypothesis; the following chapter will show how reasonable the equality

S(N, V, E) = k ln Σ(N, V, E)   (3.38)

really is.

(Footnote: A particularly powerful formulation of classical mechanics, known as Hamilton’s formalism, makes use of the variables q, p (position and momentum) in place of r, v (position and velocity). In this notation the phase space cells have the size h ≡ ∆q ∆p.)

In the case of a classical ideal gas we find, using ln N! ≈ N ln N and neglecting ln √(3Nπ) in equ. 3.36,

S(N, V, E) ≡ k ln Σ(N, V, E) = Nk ln[ (V/N) (4πEe/3Nmg^2)^{3/2} ]   (3.39)

This is the famous Sackur-Tetrode equation, named after its authors Otto Sackur and Hugo Tetrode.

The numerical value of Σ, and therefore of S, is obviously dependent on the chosen grid size g. This disquieting fact may be mitigated by the following considerations:

• As long as we are only comparing phase space volumes, or entropies, the unit is of no concern.

• There is in fact a smallest physically meaningful grid size which may well serve as the natural unit of Σ; it is given by the quantum mechanical uncertainty: g_min = h/m.

Example: N = 36, m = 2, E = N = 36. The average energy per particle is then 1, and the mean squared velocity is ⟨v^2⟩ = 1. Assuming a cubic box with V = L^3 = 1 and a rather coarse grid with ∆x = ∆v = 0.1 we find

S(N, V, E)/k = 36 ln[ (1/36) (4π · 36 · e / (3 · 36 · 2 · 10^{−4}))^{3/2} ] = 462.27   (3.40)

Just for curiosity, let us determine ln(∆Σ), where ∆Σ is the phase space volume between E = 35.5 and 36.0:

∆Σ = Σ(36.0) − Σ(35.5) = Σ(36.0) [1 − (35.5/36)^{54}] = Σ(36.0) · 0.53   (3.41)

and thus

S(N, V, ∆E)/k = 462.27 − 0.63 = 461.64   (3.42)

We see that even for such a small system the entropy value is quite insensitive to using the spherical shell volume in place of the sphere volume!
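The numbers of this example (equs. 3.40-3.42) can be reproduced directly (a Python sketch in the model units of the text):

```python
import math

N, m, E, V = 36, 2.0, 36.0, 1.0
g = 0.1 * 0.1                      # Delta x = Delta v = 0.1, so g = 1e-2
S = N * math.log((V / N) * (4.0 * math.pi * E * math.e
                            / (3.0 * N * m * g * g)) ** 1.5)   # equ. 3.40

shell = 1.0 - (35.5 / 36.0) ** 54  # equ. 3.41: shell-to-sphere volume ratio
S_shell = S + math.log(shell)      # equ. 3.42: entropy of the shell alone
```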

3.3 Problems for Chapter 3

EXERCISES:

3.1 Geometry of n-spheres I: Calculate the volumes and surfaces of spheres with r = 1.0 in 3, 6, 12 and 100 dimensions. In evaluating the factorial (n/2)! use the Stirling approximation n! ≈ √(2πn) (n/e)^n.

3.2 Geometry of n-spheres II: For the various n-spheres of the foregoing example, compute the fraction of the total volume contained in a shell between r = 0.95 and r = 1.0. Comment on the result.

3.3 Approximate formula for ln V_n(z_0): Choose your own example to verify the validity of the approximation 3.30. Put the meaning of this equation in words; demonstrate the geometrical meaning for a three-dimensional sphere – even if the approximation is not yet valid in this case. Make a few runs with Applet Entropy1 and comment on your findings.

3.4 Sackur-Tetrode equation: Calculate the entropy of an ideal gas of noble gas atoms; choose your own density, energy per particle, and particle mass (keeping in mind that the gas is to be near ideal); use the minimal grid size in phase space, g = g_min ≡ h/m.

TEST YOUR UNDERSTANDING OF CHAPTER 3:

3.1 Geometry of phase space: Explain the concepts phase space, energy surface, energy shell.

3.2 Entropy and geometry: What is the relation between the entropy of a system and the geometry of the phase space?

3.3 Geometry of n-spheres: Name a property of high-dimensional spheres that simplifies the calculation of the entropy of an ideal gas. (Hint: shell volume?)


Chapter 4

Statistical Thermodynamics

4.1 Microcanonical ensemble

We recall the deﬁnition of this ensemble – it is that set of microstates which for given N, V

have an energy in the interval [E, ∆E]. The number of such microstates is proportional to the

phase space volume they inhabit. And we found some reason to suspect that this volume – its

logarithm, rather – may be identiﬁed as that property which the thermodynamicists have dubbed

entropy and denoted by S(N, V, E). This can hold only if S has the two essential properties of

entropy:

1. If the system is divided into two subsystems that may freely exchange energy, then the equilibrium state is the one in which the available energy E = E_1 + E_2 is distributed such that

   dS(N_1, V_1, E_1)/dE_1 |_{E_1=E_1*}  =?  dS(N_2, V_2, E_2)/dE_2 |_{E_2=E−E_1*}   (4.1)

   The quantity T ≡ [dS(N, V, E)/dE]^{−1} is commonly called temperature.

2. In equilibrium the entropy S is additive:

   S(N_1, V_1, E_1) + S(N_2, V_2, E_2)  =?  S(N_1 + N_2, V_1 + V_2, E_1 + E_2)   (4.2)

THERMAL INTERACTION

Now consider two systems (N_1, V_1, E_1) and (N_2, V_2, E_2) in thermal contact, meaning that they can exchange energy while keeping the total energy E ≡ E_1 + E_2 constant; their “private” volumes and particles remain separated. We may determine the phase space volume of the combined system.

Ad 1. The optimal partial energy E_1* fulfills Γ_2(E_2) ∂Γ_1(E_1)/∂E_1 = Γ_1(E_1) ∂Γ_2(E_2)/∂E_2, or

∂ ln Γ_1(E_1)/∂E_1 |_{E_1=E_1*} = ∂ ln Γ_2(E_2)/∂E_2 |_{E_2=E−E_1*}   (4.3)

This, however, is nothing else but the well-known thermodynamic equation

∂S_1(E_1)/∂E_1 |_{E_1=E_1*} = ∂S_2(E_2)/∂E_2 |_{E_2=E−E_1*}   (4.4)


Figure 4.1: Isolated system → microcanonical ensemble

or, with ∂S/∂E ≡ 1/T,

T_1 = T_2   (4.5)

Ad 2. Let the total energy be divided up according to E ≡ E_1 + E_2. Then we have, by equ. 3.30,

Γ_{1+2}(E) = Γ_1(E_1*) Γ_2(E − E_1*)   (4.6)

and thus

S_{1+2} = k ln[Γ_{1+2}(E)/g^{3N}] = k ln[ (Γ_1(E_1*)/g^{3N_1}) (Γ_2(E − E_1*)/g^{3N_2}) ]
        = k ln[Γ_1(E_1*)/g^{3N_1}] + k ln[Γ_2(E − E_1*)/g^{3N_2}] = S_1(E_1*) + S_2(E − E_1*)   (4.7)

where E_1* is that partial energy of system 1 which maximizes the product Γ_1(E_1) Γ_2(E − E_1).

In other words, at thermal contact between two systems isolated from the outside world there will be a regular flow of energy until the quantity ∂S/∂E (≡ 1/T) is equal in both systems. Since the combined system then has the largest extension in phase space, this will be the most probable distribution of energies over the two systems. There may be fluctuations around the optimal energy distribution, but due to the extreme sharpness of the maximum of Γ_1(E_1) Γ_2(E − E_1) these deviations remain very small.

It should be noted that these conclusions, although of eminent physical signiﬁcance, may be

derived quite simply from the geometrical properties of high-dimensional spheres.

Example: Consider two systems with N_1 = N_2 = 36 and initial energies E_1 = 36, E_2 = 72. Now bring the systems in thermal contact. The maximum value of the product Γ_1(E_1) Γ_2(E − E_1) occurs at E_1* = E_2* = 54, and the respective phase space volume is

Γ_1(E_1*) Γ_2(108 − E_1*) = [ (1/36) (πe · 10^4)^{3/2} ]^{72}   (4.8)


How does another partitioning of the total energy – say, (46, 62) instead of (54, 54) – compare to the optimal one, in terms of phase space volume and thus probability?

Γ_1(46) Γ_2(62) / [Γ_1(54) Γ_2(54)] = (46 · 62 / 54 · 54)^{54} = 0.30   (4.9)

We can see that the energy fluctuations in these small systems are relatively large: δE_1/E_1* ≈ 8/54 = 0.15. However, for larger particle numbers δE_1/E_1* decreases as 1/√N: (515 · 565 / 540 · 540)^{540} = 0.31; thus δE_1/E_1* ≈ 25/540 = 0.05.
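Both quoted ratios follow directly from Γ(E) ∝ E^{3N/2} per system (a Python sketch):

```python
# Gamma(E) ~ E^(3N/2) = E^54 for each 36-particle system (velocity space)
ratio_small = (46.0 * 62.0 / (54.0 * 54.0)) ** 54       # equ. 4.9

# ten times more particles per system, same relative split of the total energy
ratio_large = (515.0 * 565.0 / (540.0 * 540.0)) ** 540
```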

THERMODYNAMICS IN THE MICROCANONICAL ENSEMBLE

Let the system under consideration be in mechanical or thermal contact with other systems. The macroscopic conditions (V, E) may then undergo changes, but we assume that this happens in a quasistatic way, meaning that the changes are slow enough to permit the system always to effectively perambulate the microensemble pertaining to the momentary macroconditions. To take the N-particle gas as an example, we require that its energy and volume change so slowly that the system may visit all regions of the phase space shell (E(t), V(t)) before it moves to a new shell. Under these conditions the imported or exported differential energy dE is related to the differential volume change according to

dS = (∂S/∂E)_V dE + (∂S/∂V)_E dV   (4.10)

If we define (in addition to T ≡ (∂S/∂E)_V^{−1}) the pressure by

P ≡ T (∂S/∂V)_E   (4.11)

then equ. 4.10 is identical to the thermodynamic relation

dS = (1/T)(dE + P dV)   or   dE = T dS − P dV   (First Law)   (4.12)

Example: Classical ideal gas with

S(N, V, E) = Nk ln[ (V/N) (4πEe/3Nmg^2)^{3/2} ]   (4.13)

Solving this equation for E we find for the internal energy U(S, V) ≡ E the explicit formula

U(S, V) = (N/V)^{2/3} (3Nmg^2/4π) e^{2S/3Nk − 1}   (4.14)

From thermodynamics we know that T = (∂U/∂S)_V; therefore

T = (2/3) U/Nk   (4.15)

From this we conclude, in agreement with experiment, that the specific heat of the ideal gas is

C_V = (3/2) Nk   (4.16)

The pressure may be found from P = −(∂U/∂V)_S:

P = (2/3) U/V = NkT/V   (4.17)
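The chain U(S, V) → T → P can be checked by numerical differentiation (a Python sketch in the model units of the earlier example, N = 36, m = 2, g = 0.01, k = 1; these parameter values are taken from the text):

```python
import math

N, m, g, k = 36, 2.0, 0.01, 1.0

def U(S, V):
    """Internal energy U(S, V) of the ideal gas, equ. 4.14 (S in units of k)."""
    return (N / V) ** (2.0 / 3.0) * (3.0 * N * m * g * g / (4.0 * math.pi)) \
           * math.exp(2.0 * S / (3.0 * N * k) - 1.0)

S0, V0, h = 462.27, 1.0, 1e-6
T = (U(S0 + h, V0) - U(S0 - h, V0)) / (2.0 * h)    # T = (dU/dS)_V, equ. 4.15
P = -(U(S0, V0 + h) - U(S0, V0 - h)) / (2.0 * h)   # P = -(dU/dV)_S, equ. 4.17
u0 = U(S0, V0)
# For the entropy of equ. 3.40 this recovers E = 36, T = (2/3)U/Nk, P = NkT/V.
```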


SIMULATION IN THE MICROCANONICAL ENSEMBLE: MOLECULAR DYNAMICS

We found it a simple matter to derive the entropy S of an ideal gas as a function of N, V and E; and once we had S(N, V, E) the subsequent derivation of thermodynamics was easy. To keep things so simple we had to do some cheating, in that we assumed no interaction between the particles. Statistical mechanics is powerful enough to yield solutions even if there are such interactions. A discussion of the pertinent methods – virial expansion, integral equation theories etc. – is beyond the scope of this tutorial. However, there is a more pragmatic method of investigating the thermodynamic properties of an arbitrary model system: computer simulation. The classical equations of motion of mass points interacting via a physically plausible pair potential u(r), such as the one introduced by Lennard-Jones, read

dv_i/dt = −(1/m) Σ_{j≠i} ∂u(r_ij)/∂r_i   (4.18)

If at some time t the microstate {r_i, v_i; i = 1, . . . N} is given we can solve these equations of motion for a short time step ∆t by numerical approximation; the new positions and velocities at time t + ∆t are used as starting values for the next integration step, and so forth. This procedure is known as molecular dynamics simulation.

Since we assume no external forces but only forces between the particles, the total energy of the system remains constant: the trajectory in phase space is confined to the energy surface E = const. If the system is chaotic it will visit all states on this hypersurface with the same frequency. An average over the trajectory is therefore equivalent to an average over the microcanonical ensemble. For example, the internal energy may be calculated according to U ≡ ⟨E_kin⟩ + ⟨E_pot⟩, where E_kin may be determined at any time from the particle velocities, and E_pot ≡ (1/2) Σ_{i,j} u(r_ij) from the positions. By the same token the temperature may be computed via 3NkT/2 ≡ ⟨E_kin⟩, while the pressure is the average of the so-called “virial”, that is, the quantity W ≡ (1/2) Σ_i Σ_{j≠i} K_ij · r_ij. In particular we have P = NkT/V + ⟨W⟩/3V.
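A minimal molecular dynamics sketch (written in Python here, although the course applets are Java; three Lennard-Jones particles on a line, velocity-Verlet integration, all initial values illustrative) shows the essential point: the total energy stays constant along the trajectory, so the system samples the surface E = const.

```python
def lj_forces(x):
    """Pair forces and potential energy for particles on a line interacting
    via the Lennard-Jones potential u(r) = 4 (r^-12 - r^-6), reduced units."""
    n = len(x)
    f = [0.0] * n
    epot = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = x[j] - x[i]
            s6 = 1.0 / r ** 6
            epot += 4.0 * (s6 * s6 - s6)
            fij = 24.0 * (2.0 * s6 * s6 - s6) / r   # f = -du/dr along r
            f[i] -= fij                             # repulsive for r < 2^(1/6)
            f[j] += fij
    return f, epot

# Velocity-Verlet integration with m = 1
x = [0.0, 1.12, 2.30]          # illustrative initial positions
v = [0.05, -0.02, 0.0]         # illustrative initial velocities
dt = 0.005
f, epot = lj_forces(x)
e0 = epot + 0.5 * sum(vi * vi for vi in v)   # total energy at t = 0
for _ in range(2000):
    x = [xi + vi * dt + 0.5 * fi * dt * dt for xi, vi, fi in zip(x, v, f)]
    fnew, epot = lj_forces(x)
    v = [vi + 0.5 * (fi + fni) * dt for vi, fi, fni in zip(v, f, fnew)]
    f = fnew
e_final = epot + 0.5 * sum(vi * vi for vi in v)
# e_final stays within a tiny tolerance of e0: microcanonical sampling
```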

In the case of hard spheres the particle trajectories are computed in a different manner. For given r_i, v_i the time span t_0 to the next collision between any two particles in the system is determined. Calling these prospective collision partners i_0 and j_0 we first move all spheres along their specific flight directions by ∆r_i = v_i t_0 and then simulate the collision (i_0, j_0), computing the new directions and speeds of the two partners according to the laws of elastic collisions. Now we have gone full circle and can determine the next t_0 and i_0, j_0.

Further details of the MD method may be found in [Vesely 94] or [Allen 90].

4.2 Canonical ensemble

We once more put two systems in thermal contact with each other. One of the systems is supposed to have many more degrees of freedom than the other:

n > n − n_1 ≫ n_1 ≫ 1   (4.19)

The larger system, with n_2 ≡ n − n_1 d.o.f., is called the “heat bath”. The energy E_2 = E − E_1 contained in the heat bath is “almost always” much greater than the energy of the smaller system; the heat reservoir’s entropy may therefore be expanded around S_2(n_2, E):

S_2(E − E_1) ≈ S_2(E) − E_1 [∂S_2(E′)/∂E′]_{E′=E} = S_2(E) − E_1/T   (4.20)


Figure 4.2: System in contact with an energy reservoir → canonical ensemble

where T is the temperature of the heat bath. The number of phase space cells occupied by the larger system is thus

Σ_2(E − E_1) = e^{S_2/k} e^{−E_1/kT}   (4.21)

But the larger Σ_2(E − E_1), the larger the probability to find system 1 in a microstate with energy E_1. We may express this in terms of the “canonical phase space density” ρ_1(Γ_1):

ρ_1[Γ_1(E_1)] = ρ_0 e^{−E_1/kT}   (4.22)

where ρ_0 is a normalizing factor, and

ρ_1(Γ_1) ≡ ρ[r_1, v_1; E(r_1, v_1) = E_1]   (4.23)

is the density of microstates in that region of phase space of system 1 that belongs to energy E_1.

For a better understanding of equ. 4.22 we recall that in the microcanonical ensemble only those states of system 1 were considered for which the energy E_1 was in the interval [E_1, ∆E]. In the canonical ensemble all energy values are permitted, but the density of state points varies strongly, as exp[−E_1/kT].

Equation 4.22 must not be understood to say that the most probable energy of the smaller system is equal to zero. While the density of states in the phase space of system 1 indeed drops sharply with increasing E_1, the volume Γ_1(E_1) pertaining to E_1 is strongly increasing, as E_1^{3N/2}. The product of these two factors, i.e. the statistical weight of the respective phase space region, then exhibits a maximum at an energy E_1 > 0.

As a – by now familiar – illustration of this let us recall the Maxwell-Boltzmann distribution: p(|v|) is just the probability density for the (kinetic) energy of a subsystem consisting of only one particle, while the heat bath is made up of the N − 1 other molecules.

In the more general case, i.e. for a large number of particles, the peak of the energy distribution is so sharp that the most probable energy is all but identical to the mean energy:

∆E/E ∝ 1/√N   (4.24)

Thus we have found that even non-isolated systems which may exchange energy have actually

most of the time a certain energy from which they will deviate only slightly. But this means that


we may calculate averages of physical quantities either in the microcanonical or in the canonical

ensemble, according to mathematical convenience. This principle is known as “equivalence of

ensembles”.

We have derived the properties of the canonical ensemble using a Taylor expansion of the en-

tropy. The derivation originally given by Gibbs is diﬀerent. J. W. Gibbs generalized Boltzmann’s

“method of the most probable distribution” to an ensemble of microscopically identical systems

which are in thermal contact with each other.

Gibbs considered M equal systems (i = 1, . . . , M), each containing N particles. The sum of the M energies was constrained to a given value, c = Σ_i E_i, with an unhindered interchange of energies between the systems. Under these simple assumptions he determined the probability of finding a system in the neighbourhood of a microstate Γ having an energy in the interval [E, ∆E]. With increasing energy E this probability density drops as ρ(Γ(E)) ∝ exp[−E/kT]. Since the volume of the energy shell rises sharply with energy we again find that most systems will have an energy around ⟨E⟩ = c/M. Thus the important equivalence of canonical and microcanonical ensembles may alternatively be proven in this manner.

THERMODYNAMICS IN THE CANONICAL ENSEMBLE

The quantity

Q(N, V, T) ≡ (1/(N! g^{3N})) ∫ dr dv e^{−E(r,v)/kT}   (4.25)

is called the canonical partition function. First of all, it is a normalizing factor in the calculation of averages over the canonical ensemble. For example, the internal energy may be written as

U ≡ ⟨E⟩ = [1/Q(N, V, T)] (1/(N! g^{3N})) ∫ dr dv E(r, v) e^{−E(r,v)/kT}   (4.26)

But the great practical importance of the partition function stems from its close relation to Helmholtz’ free energy A(N, V, T), which itself is a central object of thermodynamics. The relation between the two is

Q(N, V, T) = e^{−βA(N,V,T)}   (4.27)

where β ≡ 1/kT. To prove this important fact we differentiate the identity

(1/(N! g^{3N})) ∫ dr dv e^{β[A(N,V,T) − E(r,v)]} = 1   (4.28)

with respect to β, obtaining

A(N, V, T) − ⟨E⟩ + β (∂A/∂β)_V = 0   (4.29)

or

A(N, V, T) − U(N, V, T) − T (∂A/∂T)_V = 0   (4.30)

But this is, with S ≡ −(∂A/∂T)_V, identical to the basic thermodynamic relation A = U − TS.

All other thermodynamic quantities may now be distilled from A(N, V, T). For instance, the pressure is given by

P = −(∂A/∂V)_T   (4.31)

Similarly, entropy and Gibbs’ free energy are calculated from

S = −(∂A/∂T)_V   and   G = A + PV   (4.32)


Example 1: For the classical ideal gas we have

Q(N, V, T) = (m^{3N}/(N! h^{3N})) ∫ dr dv exp[−(m/2kT) Σ_i |v_i|^2] = (m^{3N}/(N! h^{3N})) q^N   (4.33)

with

q ≡ ∫ dr dv exp[−(m/2kT) |v|^2] = (2πkT/m)^{3/2} V   (4.34)

This leads to

Q(N, V, T) = (1/N!) (2πmkT/h^2)^{3N/2} V^N   (4.35)

and using A ≡ −kT ln Q and 4.31 we find

P = NkT/V   (4.36)

which is the well-known equation of state of the ideal gas. Similarly we find from S = −(∂A/∂T)_V the entropy, in keeping with equ. 3.39.

Example 2: The free energy of one mole of an ideal gas at standard conditions, assuming a molecular mass of m = 39.95 m_H (i.e. Argon), is

A = −kT ln Q = −4.25 · 10^7 J   (4.37)

We have now succeeded in deriving the thermodynamics of an ideal gas solely from a geometrical analysis of the phase space of N classical point masses. It must be stressed that similar relations for thermodynamical observables may also be derived for other model systems with their respective phase spaces.

In hindsight it is possible to apply the concept of a “partition function” also to the microcanonical ensemble. After all, the quantity Σ(N, V, E) was also a measure of the total accessible phase space volume. In other words, we might as well call it the “microcanonical partition function”. And we recall that its logarithm – the entropy – served as a starting point to unfold statistical thermodynamics.

EQUIPARTITION THEOREM

Without proof we note the following important theorem:

If the Hamiltonian of a system contains some position or velocity coordinate in quadratic form,

the respective degree of freedom will have the mean energy kT/2.

Example 1: The Hamiltonian of the classical ideal gas is

H(v) = Σ_{i=1}^{N} Σ_{α=1}^{3} m v_{iα}^2 / 2   (4.38)

Each of the 3N translational d.o.f. appears in the guise v_{iα}^2. The equipartition theorem then tells us that for each velocity component

(m/2) ⟨v_{iα}^2⟩ = kT/2   or   (m/2) ⟨v_i^2⟩ = 3kT/2   (4.39)


(As there is no interactional or external potential, the positional d.o.f. contain no

energy.)

Example 2: Every classical fluid, such as the Lennard-Jones liquid, has the Hamiltonian

H(r, v) = H_pot(r) + Σ_{i=1}^{N} Σ_{α=1}^{3} m v_{iα}^2 / 2   (4.40)

Therefore each of the 3N translatory d.o.f. contains, on the average, an energy kT/2. (The interaction potential is not quadratic in the positions; therefore the equipartition law does not apply to r_{iα}.)

Example 3: A system of N one-dimensional harmonic oscillators is characterized by the Hamiltonian

H(x, ẋ) = H_pot + H_kin = (f/2) Σ_{i=1}^{N} x_i^2 + (m/2) Σ_{i=1}^{N} ẋ_i^2   (4.41)

Therefore,

(f/2) ⟨x_i^2⟩ = kT/2   and   (m/2) ⟨ẋ_i^2⟩ = kT/2   (4.42)

Thus the total energy of the N oscillators is E = NkT. The generalization of this to three dimensions is trivial; we find E_id.cryst. = 3NkT. For the specific heat we have consequently C_V ≡ (∂E/∂T)_V = 3Nk. This prediction is in good agreement with experimental results for crystalline solids at moderately high temperatures. For low temperatures – and depending on the specific substance this may well mean room temperature – the classical description breaks down, and we have to apply quantum rules to predict C_V and other thermodynamic observables (see Chapter 5).

Example 4: A fluid of N “dumbbell molecules”, each having 3 translatory and 2 rotatory d.o.f., has the Hamiltonian

H(r, e, ṙ, ė) = H_pot(r, e) + Σ_{i,α} m v_{iα}^2 / 2 + Σ_{i,β} I ω_{iβ}^2 / 2   (4.43)

where e are the orientation vectors of the linear particles, ω are their angular velocities, and I denotes the molecular moment of inertia.

Thus we predict

(m/2) ⟨v_i^2⟩ = 3kT/2   and   (I/2) ⟨ω_i^2⟩ = 2kT/2   (4.44)

which is again a good estimate as long as a classical description may be expected to apply.

CHEMICAL POTENTIAL

Let us recall the experimental setup of Fig. 4.2. Allowing two systems to exchange energy leads to an equilibration of their temperatures T (see Section 4.1). The energies E_1, E_2 of the two subsystems will then only fluctuate – usually just slightly – around their average values.

Let us now assume that the systems can also exchange particles. In such a situation we will again find some initial equilibration, after which the particle numbers N_1, N_2 will only slightly fluctuate around their mean values. The flow of particles from one subsystem to the other will come to an end as soon as the free energy A = A_1 + A_2 of the combined system tends to a constant value. This happens when (∂A_1/∂N_1)_V = (∂A_2/∂N_2)_V. The quantity that determines the equilibrium is therefore μ ≡ (∂A/∂N)_V; the subsystems will trade particles until μ_1 = μ_2. (Compare this definition of the chemical potential with that of temperature, (∂S/∂E)_V^{−1}.)

Example: The chemical potential of Argon (m = 39.95 m_H) at normal conditions is

μ ≡ dA/dN = −kT d ln Q/dN = −kT ln[ (V/N) (2πmkT/h^2)^{3/2} ] = −7.26 · 10^{−20} J   (4.45)

SIMULATION IN THE CANONICAL ENSEMBLE: MONTE CARLO CALCULATION

We may once more find an appropriate numerical rule for the correct “browsing” of all states with given (N, V, T) (in place of (N, V, E)). Since it is now the Boltzmann factor that gives the statistical weight of a microstate Γ ≡ {r_i, v_i}, the average value of some quantity a(Γ) is given by

⟨a⟩_{N,V,T} = ∫ a(Γ) exp[−E(Γ)/kT] dΓ / ∫ exp[−E(Γ)/kT] dΓ   (4.46)

If a depends only on the particle positions and not on velocities – and this is true for important quantities such as potential energy, virial, etc. – then we may even confine the weighted integral to the 3N-dimensional configurational subspace Γ_c ≡ {r_i} of full phase space:

⟨a⟩_{N,V,T} = ∫ a(Γ_c) exp[−E_pot(Γ_c)/kT] dΓ_c / ∫ exp[−E_pot(Γ_c)/kT] dΓ_c    (4.47)

In order to find an estimate for ⟨a⟩ we formally replace the integrals by sums and construct a long sequence of randomly sampled states Γ_c(m); m = 1, . . . M with the requirement that the relative frequency of microstates in the sequence be proportional to their Boltzmann factors. In other words, configurations with high potential energy should occur less frequently than states with small E_pot. The customary method to produce such a biased random sequence (a so-called “Markov chain”) is called the “Metropolis technique”, after its inventor Nicholas Metropolis.

Now, since the Boltzmann factor is already contained in the frequency of states in the sequence, we can compute the Boltzmann-weighted average simply according to

⟨a⟩ = (1/M) Σ_{m=1}^{M} a(Γ_c(m))    (4.48)

An extensive description of the Monte Carlo method may be found in [Vesely 1978] or [Vesely

2001].
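A minimal Metropolis sketch may make the recipe concrete. The following Python fragment (illustrative, not from the course; the single harmonic coordinate with E_pot = x²/2 and the step width are assumptions) samples configurations with weight exp[−E_pot/kT] and averages as in equ. 4.48:

```python
# Metropolis sampling of a single coordinate x in a harmonic well
# E_pot(x) = x^2 / 2, followed by the chain average of equ. 4.48.
import math
import random

random.seed(2)
kT, step, M = 1.0, 1.0, 200_000

def e_pot(x):
    return 0.5 * x * x

x, acc_sum = 0.0, 0.0
for m in range(M):
    x_new = x + random.uniform(-step, step)        # symmetric trial move
    dE = e_pot(x_new) - e_pot(x)
    if dE <= 0.0 or random.random() < math.exp(-dE / kT):
        x = x_new                                  # accept with min(1, e^{-dE/kT})
    acc_sum += e_pot(x)                            # accumulate a(Gamma_c(m))

avg = acc_sum / M
print(avg)   # Boltzmann average <E_pot>; for this quadratic potential it is kT/2
```

Rejected moves must be counted by repeating the old configuration in the average; the equipartition value kT/2 then emerges from the chain.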

There is also a special version of the molecular dynamics simulation method that may be used to perambulate the canonical distribution. We recall that in MD simulations normally the total system energy E_tot = E_kin + E_pot is held constant, which is roughly equivalent to sampling the microcanonical ensemble. However, by introducing a kind of numerical thermostat we may at each time step adjust all particle velocities so as to keep E_kin either constant (isokinetic MD simulation) or near a mean value such that ⟨E_kin⟩ ∝ T (isothermal MD).
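The isokinetic idea can be sketched in a few lines of Python (an illustrative toy, not the course's method in detail: N uncoupled unit-mass harmonic oscillators integrated by velocity Verlet, with all velocities rescaled after each step):

```python
# Isokinetic thermostat sketch: after each velocity-Verlet step all
# velocities are rescaled so that E_kin keeps its initial value exactly.
import math
import random

random.seed(3)
N, dt, steps = 10, 0.01, 1000
x = [random.uniform(-1.0, 1.0) for _ in range(N)]
v = [random.uniform(-1.0, 1.0) for _ in range(N)]

def e_kin(v):
    return 0.5 * sum(vi * vi for vi in v)

target = e_kin(v)
for _ in range(steps):
    a = [-xi for xi in x]                          # F = -x for a unit oscillator
    v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    x = [xi + dt * vi for xi, vi in zip(x, v)]
    a = [-xi for xi in x]
    v = [vi + 0.5 * dt * ai for vi, ai in zip(v, a)]
    s = math.sqrt(target / e_kin(v))               # isokinetic rescaling factor
    v = [s * vi for vi in v]

print(e_kin(v), target)   # E_kin is pinned to its initial value
```

Replacing the hard rescaling by a gentler feedback on the factor s would turn this into an isothermal (mean-kinetic-energy) thermostat.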

4.3 Grand canonical ensemble

Once again we put a small system (1) in contact with a large one (2). However, this time we do not only permit the exchange of energy but also the crossing over of particles from one subsystem to the other.

Figure 4.3: System in contact with an energy and particle reservoir: → grand canonical ensemble

And as before we can write down the probability density in the phase space of the smaller system; it depends now both on the number of particles N_1 and on {r_i, v_i; i = 1, . . . N_1}, as follows:

p(r, v; N_1) ∝ e^{µN_1/kT} e^{−E(r,v)/kT}    (4.49)

Summing this density over all possible values of N_1 and integrating – at each N_1 – over all {r_i, v_i; i = 1, . . . N_1} we obtain the grand partition function

Z(µ, V_1, T) ≡ Σ_{N_1=0}^{∞} e^{N_1 µ/kT} Q(N_1, V_1, T)    (4.50)

Its value is just the “total statistical weight” of all possible states of system 1. Above all, it serves

as the source function of thermodynamics.

THERMODYNAMICS IN THE GRAND CANONICAL ENSEMBLE

From the grand partition function we can easily derive expressions for the various thermodynamic observables. For instance, putting z ≡ e^{µ/kT} and β ≡ 1/kT we find

P = (kT/V) ln Z(z, V, T)    (4.51)

N (≡ ⟨N⟩) = z (∂/∂z) ln Z(z, V, T) = kT ∂ ln Z/∂µ    (4.52)

U (≡ ⟨E⟩) = −(∂/∂β) ln Z(z, V, T) = kT² ∂ ln Z/∂T    (4.53)

As a rule the – permitted – fluctuations of the number of particles remain small; in particular we have ∆N/N ≈ 1/√N. Thus the grand ensemble is again equivalent to the other ensembles of statistical mechanics.

[To do: applet with MD simulation, averages taken only over particles in a partial volume –> same results!]

Example: Let us visit the ideal gas again. For the grand partition function we have

Z(z, V, T) = Σ_N (z^N V^N/N!)(2πmkT/h²)^{3N/2} = Σ_N y^N/N!   with y ≡ V z (2πmkT/h²)^{3/2}    (4.54)

Therefore

Z = exp[zV (2πmkT/h²)^{3/2}]   or   ln Z = zV (2πmkT/h²)^{3/2}    (4.55)

Using the formulae for internal energy and pressure we find

P = kTz (2πmkT/h²)^{3/2}   and   U = kTz (3V/2)(2πmkT/h²)^{3/2}    (4.56)

Consequently, P = 2U/3V or

P = (2/3V)(3NkT/2) = (N/V) kT    (4.57)

in keeping with the phenomenological ideal gas equation.
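Equations 4.54–4.57 can be verified by brute force. The short Python check below (illustrative; the value of y is arbitrary) truncates the grand sum Σ y^N/N! and confirms that ln Z = y and ⟨N⟩ = y, from which U = (3/2)⟨N⟩kT and PV = ⟨N⟩kT follow:

```python
# Numerical check of the ideal-gas grand partition function:
# Z = sum_N y^N/N! = e^y, so ln Z = y and <N> = z d(ln Z)/dz = y.
import math

y = 3.7                       # y = zV(2 pi m kT/h^2)^{3/2}, arbitrary value
N_max = 80                    # the sum converges long before this cutoff
terms = [y**N / math.factorial(N) for N in range(N_max)]
Z = sum(terms)
N_avg = sum(N * t for N, t in zip(range(N_max), terms)) / Z

print(math.log(Z), y)         # ln Z = y
print(N_avg, y)               # <N> = y
```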

SIMULATION IN THE GRAND CANONICAL ENSEMBLE: GCMC

The states within the grand ensemble may again be sampled in a random manner. Just as in the canonical Monte Carlo procedure we produce a sequence of microstates Γ_c(m), m = 1, . . . M with the appropriate relative frequencies. The only change is that we now also vary the number of particles by occasionally adding or removing a particle. The stochastic rule for these insertions and removals is such that it agrees with the thermodynamic probability of such processes. By averaging some quantity over the “Markov chain” of configurations we again obtain an estimate of the respective thermodynamic observable.
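For the ideal gas (E_pot = 0) only the particle number matters, and the insertion/removal rule takes a particularly simple form. The Python sketch below is an assumed bare-bones version: with y = zV(2πmkT/h²)^{3/2} the acceptance probabilities min(1, y/(N+1)) for insertion and min(1, N/y) for removal satisfy detailed balance with respect to the weight y^N/N!, so the chain samples N from a Poisson distribution with mean y:

```python
# Bare-bones GCMC for the ideal gas: only N fluctuates; the chain
# samples the Poisson distribution with mean y, so <N> -> y.
import random

random.seed(4)
y, M = 5.0, 200_000
N, total = 0, 0

for m in range(M):
    if random.random() < 0.5:                      # attempt an insertion
        if random.random() < min(1.0, y / (N + 1)):
            N += 1
    elif N > 0:                                    # attempt a removal
        if random.random() < min(1.0, N / y):
            N -= 1
    total += N

print(total / M)   # estimate of <N>, close to y
```

With interacting particles the same moves acquire an additional Boltzmann factor exp[−∆E_pot/kT], which is the full GCMC rule.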

4.4 Problems for Chapter 4

EXERCISES:

4.1 Canonical phase space density: Considering the Maxwell-Boltzmann distribution in a classical 3D fluid, show that
a) with increasing kinetic energy E_k the density of states decreases as exp[−E_k/kT];
b) in spite of this the most probable kinetic energy is not zero.
Apply this to the general canonical phase space.

4.2 Grand canonical ensemble: Using the grand partition function, derive the mean number of particles in one m³ of an ideal gas at standard conditions.

TEST YOUR UNDERSTANDING OF CHAPTER 4:

4.1 Temperature and entropy: What happens if two systems are put in contact with each other such that they may exchange energy?

4.2 Changes of state: What is a quasistatic change of state?

4.3 Equipartition theorem: Formulate the equipartition theorem; to which microscopic variables does it apply? Give two examples.

4.4 Canonical ensemble: What are the macroscopic conditions that define the canonical ensemble? When is it equivalent to the microcanonical ensemble?

4.5 Grand canonical ensemble: What are the macroscopic conditions defining the grand canonical ensemble? What is needed to make it equivalent to the canonical and microcanonical ensembles?

Chapter 5

Statistical Quantum Mechanics

Counting the states in phase space takes some consideration if we want to apply quantum mechanics. We have already mentioned the fact that quantum particles are not distinguishable, and we took this into account in a heuristic manner, by introducing the permutation factor 1/N! in the partition function.

Another feature of the quantum picture is that state space is not continuous but consists of finite “raster elements”: just think of the discrete states of an ideal quantum gas. Finally, it depends on the “symmetry class” of the particles how many of them may inhabit the same discrete microstate. Fermions, with a wave function of odd symmetry, can take on a particular state only exclusively; the population number of a raster element in phase space can be just 0 or 1. In contrast, even-symmetry bosons may in any number share the same microstate.

What are the consequences of these additional counting rules for Statistical Thermodynamics? To seek an answer we may either proceed in the manner of Boltzmann (see Section 2.2) or à la Gibbs (see Chapter 4). For a better understanding we will here sketch both approaches.

5.1 Ideal quantum gas: Method of the most probable distribution

We consider a system of N independent particles in a cubic box with ideally elastic walls. In the spirit of the kinetic theory of dilute gases we explore the state space for the individual particles, which Boltzmann dubbed µ-space. In the quantum case this space is spanned by the quantum numbers n_{x,y,z} = 1, 2 . . . pertaining to the momentum eigenstates having eigenvalues p ≡ (h/2L)n and energies E_n = p²/2m = (h²/8mL²)|n|².

Now we batch together all states having energies E_n in an interval [E_j, E_j + ∆E]. The number of states in such a “cell” is named g_j, j = 1, 2, . . . m. The values of the g_j are not important; they should only be large enough to allow the application of Stirling’s formula.

As before we try to answer the question how the N particles should best be distributed over the m cells. To do so we change the notation from the one used in Section 2.2, in that we denote the number of particles in cell j by f_j. The reason for using f_j is that “n” is reserved for the quantum numbers.

A specific distribution f ≡ {f_j; j = 1, . . . m} of the N particles to the m cells is more probable if its multiplicity W is larger, meaning that we can allot the {f_j} particles in more different ways to the {g_j} states in each cell – always keeping in mind the Fermi or Bose rules:

Fermi–Dirac: W = Π_{j=1}^{m} C(g_j, f_j)    Bose–Einstein: W = Π_{j=1}^{m} C(g_j + f_j − 1, f_j)    (5.1)

where C(n, k) denotes the binomial coefficient.

To compare: the multiplicity given in Section 2.2, pertaining to a classical distribution (see equ. 2.20), would in our present notation read W = Π_{j=1}^{m} g_j^{f_j}/f_j!.

The distribution f*_j having the largest multiplicity may again be determined by Lagrange variation with the conditions Σ_j f_j E_j = E and Σ_j f_j = N:

δ ln W − β δ[Σ_j f_j E_j − E] + γ δ[Σ_j f_j − N] = 0    (5.2)

Writing z ≡ e^γ we find for

Fermi–Dirac: f*_j = g_j/(z⁻¹ e^{βE_j} + 1)    Bose–Einstein: f*_j = g_j/(z⁻¹ e^{βE_j} − 1)    (5.3)

This is the most probable distribution of the particles upon the cells. Since g_j denotes the number of states in cell j, we have for the average population number of each state

⟨f_n⟩ ≡ f*_j/g_j = 1/(z⁻¹ e^{βE_j} ± 1)    (+ . . . Fermi; − . . . Bose)    (5.4)

It is easy to interpret the Lagrange parameters β and z. As in Section 2.2 one compares the consequences of the population densities given above to empirical/thermodynamical facts, finding that β is related to temperature as β = 1/kT, and that z = e^γ = e^{µ/kT} is identical to the fugacity.

For a better understanding of this derivation, let us interpret its premises as a set of rules

in a game of fortune, as we have done in Chapter 2. By running the applet EFRoulette we may

indeed play that game – for Fermi particles at least – and compare its outcome with the result

just given.

These are the rules: For non-interacting particles in a square box the µ-plane is spanned by integers n_x, n_y; each quantum state is represented by a point. A specific state of a system of N fermions is represented by a set of N inhabited points on that plane.

To find the average (and also most probable!) distribution of particles on states,
– assign N particles randomly to the states on the µ-plane,
– make sure that the sum of the particle energies equals the given system energy, AND
– discard all trials in which a state is inhabited by more than one particle,
– determine the mean number of particles in each state; sort the result according to the state energies.
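For a tiny system the game need not be played by roulette at all – it can be enumerated exactly. The Python sketch below (illustrative; the toy level scheme, particle number, and energy window are assumptions) lists all Pauli-allowed placements of N fermions, keeps those in a fixed total-energy window, and records the mean occupation of each state:

```python
# Exact enumeration of the Fermi game for a toy system: all ways to
# place N fermions on distinct single-particle states, restricted to a
# total-energy window; then the mean occupation per state.
from itertools import combinations

energies = [1, 1, 2, 2, 3, 3, 4, 4]        # toy single-particle levels
N, E_window = 4, (7, 9)                    # total energy restricted to [7, 9]

counts = [0] * len(energies)
n_conf = 0
for occ in combinations(range(len(energies)), N):   # Pauli: distinct states
    E = sum(energies[i] for i in occ)
    if E_window[0] <= E <= E_window[1]:
        n_conf += 1
        for i in occ:
            counts[i] += 1

mean_occ = [c / n_conf for c in counts]
print(mean_occ)   # occupation falls off with the state energy, Fermi-like
```

Already for this eight-state toy model the mean occupation decreases monotonically with the level energy, the finite-size ancestor of the Fermi-Dirac step.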

5.2 Ideal quantum gas: Grand canonical ensemble

We may derive the properties of a quantum gas in another way, making use of the (z, V, T) ensemble in Gibbsean phase space. Recalling the general definition of the grand partition function, Z(z, V, T) ≡ Σ_{N=0}^{∞} z^N Q(N, V, T), we now write Q as a sum (in place of an integral) over states:

Q(N, V, T) = (m^{3N}/N! h^{3N}) ∫ dΓ exp[−E(Γ)/kT] =⇒ Σ_{{f_n}} exp[−Σ_n f_n E_n/kT]    (5.5)

Figure 5.1: Playing the Fermi-Dirac game. [Code: EFRoulette]

The sum over {f_n} is to be taken over all permitted population numbers of all states n, again requiring that Σ_n f_n = N. The permitted values of f_n are: 0 and 1 for fermions, and 0, 1, 2, . . . N for bosons. In this manner we arrive at

Z = Σ_{N=0}^{∞} z^N Σ_{{f_n}} exp[−Σ_n f_n E_n/kT] = Σ_{N=0}^{∞} Σ_{{f_n}} Π_n (z e^{−E_n/kT})^{f_n}    (5.6)

It is easy to show that this is equal to

Z = Π_n [Σ_f (z e^{−E_n/kT})^f]    (5.7)

Now we can insert the possible values of f. We find that for

Fermions (f = 0, 1):    Z = Π_n (1 + z e^{−E_n/kT})    (5.8)

and for Bosons (f = 0, 1, 2, . . .):    Z = Π_n 1/(1 − z e^{−E_n/kT})    (5.9)
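The factorization leading from equ. 5.6 to equs. 5.8/5.9 is easy to verify numerically. The Python check below (illustrative; the three level energies, z, and the bosonic occupation cutoff are assumptions, the cutoff being harmless as long as z e^{−E_n/kT} < 1) compares the brute-force sum over occupation sets with the product formulas:

```python
# Brute-force check of equs. 5.7-5.9 for three states: sum the weight
# prod_n (z e^{-E_n/kT})^{f_n} over all occupation sets {f_n} and
# compare with the fermion/boson product formulas.
import math
from itertools import product

E, z, kT = [0.3, 0.7, 1.1], 0.4, 1.0
w = [z * math.exp(-e / kT) for e in E]           # z e^{-E_n/kT} per state

def brute(max_occ):
    return sum(math.prod(wn**f for wn, f in zip(w, fs))
               for fs in product(range(max_occ + 1), repeat=len(E)))

Z_fermi = math.prod(1 + wn for wn in w)          # equ. 5.8
Z_bose = math.prod(1 / (1 - wn) for wn in w)     # equ. 5.9

print(brute(1), Z_fermi)      # fermions: f = 0, 1
print(brute(60), Z_bose)      # bosons: f = 0, 1, 2, ... (truncated)
```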

Having secured the grand partition function, we can now apply the well-known formulae for

pressure, mean particle number, and internal energy (see Section 4.3) to determine the thermo-

dynamic properties of the system. The mean population number of a given state n is

⟨f_n⟩ ≡ (1/Z) Σ_N z^N Σ_{{f_n}} f_n exp[−β Σ_n E_n f_n] = −(1/β) ∂ ln Z/∂E_n    (5.10)

Inserting for Z the respective expression for Fermi or Bose particles we once more arrive at the

population densities of equ. 5.4.
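This closing of the circle can also be checked directly. The Python fragment below (illustrative; same assumed toy levels as before) computes ⟨f_0⟩ for fermions by brute-force averaging over all occupation sets and compares with the Fermi-Dirac form of equ. 5.4:

```python
# Brute-force mean occupation of state 0 for fermions, compared with
# the Fermi-Dirac population density 1/(z^{-1} e^{E_0/kT} + 1).
import math
from itertools import product

E, z, kT = [0.3, 0.7, 1.1], 0.4, 1.0
w = [z * math.exp(-e / kT) for e in E]           # z e^{-E_n/kT} per state

num, Z = 0.0, 0.0
for fs in product((0, 1), repeat=len(E)):        # all fermion occupation sets
    weight = math.prod(wn**f for wn, f in zip(w, fs))
    Z += weight
    num += fs[0] * weight

f0_brute = num / Z
f0_formula = 1.0 / (math.exp(E[0] / kT) / z + 1.0)
print(f0_brute, f0_formula)
```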

In the following sections we discuss the properties of a few particularly prominent fermion

and boson gases.

5.3 Ideal Fermi gas

Figure 5.2: Mean population numbers of the states in a Fermi-Dirac system with µ = 0, for kT = 0.1, 1.0, and 5.0. For comparison we have also drawn the classical Boltzmann factor exp[−E/kT] for a temperature of kT = 5.

Figure 5.2 shows the fermion population density (see equ. 5.4 with the positive sign in the denominator). It is evidently quite different from its classical counterpart, the Boltzmann factor. In particular, when the temperature is low it approaches a step function: in the limit kT → 0

all states having an energy below a certain value are inhabited, while the states with higher energies remain empty. The threshold energy E_F = µ is called the “Fermi energy”.

It is only for high temperatures and small values of µ that the Fermi-Dirac density approaches the classical Boltzmann density; and as it does so it strictly obeys the inequality f_FD < f_Bm. For the purpose of comparison we have included a graph of f_Bm = exp[−E_p/kT] in Figure 5.2.

ELECTRONS IN METALS

Conduction electrons in metallic solids may be considered as an ideal fermion gas – with an additional feature: since electrons have two possible spin states the maximum number of particles in a state p is 2 instead of 1.

At low temperatures all states p with E_p ≤ E_F = µ are populated. The number of such states is, as we can see from eq. 1.21,

N = (8π/3) V (2mµ/h²)^{3/2}    (5.11)

Figure 5.3: Mean population number of states in a Bose-Einstein system with µ = 0, for kT = 0.1, 1.0, and 5.0. For comparison we include a graph of the classical density exp[−E/kT] at kT = 5.

Assuming a value of µ ≈ 5·10⁻¹⁹ J typical for conduction electrons we find N/V ≈ 3·10²⁷ m⁻³.

Applying the general formulae 4.51 and 4.53 we may easily derive the pressure and internal

energy of the electron gas. We may also recapture the relation PV = 2U/3, consistent with an

earlier result for the classical ideal gas.

5.4 Ideal Bose gas

For bosons the mean population number of a state is

⟨f_p⟩ ≡ f*_i/g_i = 1/(z⁻¹ e^{βE_i} − 1)    (5.12)

This function looks a bit like the Boltzmann factor ∝ e^{−E_p/kT} but is everywhere larger than the latter. For small µ and large T we again find that f_B ≈ f_Bm.

PHOTONS IN A BOX

A popular example for this type of system is a “gas” of photons. A box with an internal coating of reflecting material in which a photon gas is held in thermodynamic equilibrium is often called a “black body”. Originally this name refers to a grain of black material that might be placed in the box to allow for absorption and re-emission of photons, thus enabling energy exchange and equilibration. In actual fact the internal walls of the box are never perfectly reflecting, rendering the insertion of a black body grain unnecessary.

Since the total energy in the box is conserved and the photons may change their energy upon absorption and reemission, the total number of photons is not conserved. This is tantamount to assuming µ = 0.

To determine the energy spectrum of the photons we first calculate the number dΣ of states within a small energy interval. These states will then be populated according to equ. 5.12. A simple geometric consideration – how many lattice points are lying within a spherical shell – leads to

dΣ = 8πV p² dp/h³ = 8πV ν² dν/c³    (5.13)

where p = hν/c is the momentum pertaining to the energy E = hν. The total number of photons in the system is thus

N = ∫ ⟨f_p⟩ dΣ = (8πV/c³) ∫ ν²/(e^{hν/kT} − 1) dν    (5.14)

and accordingly the number of photons in a frequency interval [ν, ν + dν] is

n(ν) dν = (8πV/c³) ν²/(e^{hν/kT} − 1) dν    (5.15)

The amount of energy carried by these photons is the spectral density of black body radiation; it is given by

I(ν) dν ≡ dE(ν) = E(ν) n(ν) dν = (8πV/c³) hν³/(e^{hν/kT} − 1) dν    (5.16)

For pressure and internal energy of the photon gas we ﬁnd – in contrast to the classical and Fermi

cases – the relation PV = U/3.
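Integrating the spectral density 5.16 over all frequencies yields the Stefan-Boltzmann law: the energy density is E/V = a T⁴ with a = 8π⁵k⁴/(15h³c³), because ∫₀^∞ x³/(e^x − 1) dx = π⁴/15 with x = hν/kT. A short Python check (illustrative; simple rectangle-rule integration with an assumed upper cutoff) confirms both the dimensionless integral and the T⁴ law:

```python
# Integrate the Planck spectrum numerically: the dimensionless integral
# int x^3/(e^x - 1) dx equals pi^4/15, and the energy density is a T^4.
import math

h, c, k = 6.62607e-34, 2.99792458e8, 1.380649e-23
T = 300.0

n, x_max = 200_000, 50.0       # rectangle rule; the tail beyond 50 is negligible
dx = x_max / n
integral = sum((i * dx)**3 / math.expm1(i * dx) * dx for i in range(1, n + 1))

u = 8 * math.pi * (k * T)**4 / (h**3 * c**3) * integral   # energy density E/V
a = 8 * math.pi**5 * k**4 / (15 * h**3 * c**3)

print(integral, math.pi**4 / 15)   # the dimensionless integral
print(u, a * T**4)                 # E/V from equ. 5.16 vs a T^4
```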

Providing an explanation for the experimentally measured spectrum of black body radiation was one of the decisive achievements in theoretical physics around 1900. Earlier attempts based on classical assumptions had failed, and it was Max Planck who, by the ad hoc assumption of a quantization of energy, could reproduce the correct shape of the spectrum. The subsequent efforts to understand the physical implications of that assumption eventually led up to quantum physics.

5.5 Problems for Chapter 5

TEST YOUR UNDERSTANDING OF CHAPTER 5:

5.1 Population densities: Explain the signiﬁcance of the population densities for Fermi-Dirac

and Bose-Einstein statistics. Sketch these functions of energy at two temperatures.

5.2 Classical limit: Starting from the Fermi-Dirac density discuss the classical limit of quantum

statistics.

5.3 Ideal quantum gases: Discuss two examples of ideal quantum gases.


Bibliography

[Abramowitz 65] Abramowitz, M., and Stegun, I. A., eds.: Handbook of Mathematical Functions. Dover, New York, 1965.

[Alder 57] Alder, B. J., and Wainwright, T. E., J. Chem. Phys. 27 (1957) 1208.

[Alder 67] Alder, B. J., and Wainwright, T. E., Phys. Rev. Lett. 18 (1967) 988.

[Allen 90] Allen, M. P., and Tildesley, D. J.: Computer Simulation of Liquids. Oxford University Press, 1990.

[Binder 87] Binder, K.: Applications of the Monte Carlo Method in Statistical Physics. Springer, Berlin 1987.

[Binder 92] Binder, K.: The Monte Carlo Method in Condensed Matter Physics. Springer, Berlin 1992.

[Hansen 69] Hansen, J.-P., and Verlet, L., Phys. Rev. 184 (1969) 151.

[Hoover 68] Hoover, W. G., and Ree, F. H., J. Chem. Phys. 49 (1968) 3609.

[Kalos 86] Kalos, M. H., and Whitlock, P. A.: Monte Carlo Methods. Wiley, New York 1986.

[Levesque 69] Levesque, D., and Verlet, L., Phys. Rev. 182 (1969) 307; Phys. Rev. A2/6 (1970) 2514; – and Kürkijärvi, J., Phys. Rev. A7/5 (1973) 1690.

[Metropolis 49] Metropolis, N., and Ulam, S., J. Amer. Statist. Assoc. 44 (1949) 335.

[Metropolis 53] Metropolis, N. A., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., and Teller, E., J. Chem. Phys. 21 (1953) 1087.

[Papoulis 81] Papoulis, Athanasios: Probability, Random Variables and Stochastic Processes. McGraw-Hill International Book Company, 1981.

[Posch 87] Holian, B. L., Hoover, W. G., and Posch, H. A., Phys. Rev. Lett. 59/1 (1987) 10.

[Reif 90] Reif, Frederick: Statistische Physik (Berkeley Physics Course 5). Vieweg, Braunschweig 1990.

[Vesely 78] Vesely, F. J.: Computerexperimente an Flüssigkeitsmodellen. Physik Verlag, Weinheim 1978.

[Vesely 94] Vesely, F. J.: Computational Physics – An Introduction. Plenum, New York 1994; 2nd edn. Kluwer, New York 2001.


and apply the thermodynamic relations T ≡ (∂E/∂S)_V, C_V ≡ (∂E/∂T)_V, P ≡ −(∂E/∂V)_S, etc.

Our project then is to describe the mechanics of an assembly of many (≈ 10²⁴) particles with mutual interactions. The only way to do this is by application of statistical methods: a rigorous

analysis of the coupled motions of many particles is simply impossible. It should be noted, however, that with the emergence of fast computing machines it became possible to perform numerical simulations on suitable model systems. It is sometimes sufficient to simulate systems of several hundred particles only: the properties of such small systems differ by no more than a few percent from those of macroscopic samples. In this manner we may check theoretical predictions referring to simplified – and therefore not entirely realistic – model systems. An important example is a “gas” made up of hard spheres. The properties of such gases may be predicted by theory and “measured” in a simulation.

The importance of computer simulation for research does not end here. In addition to simple, generic models one may also simulate more realistic and complex systems. In fact, some microscopic properties of such systems are accessible neither to direct experiment nor to theory – but to simulation.

In the context of this course simulations will be used mainly for the visualisation of the statistical-mechanical truths which we derive by mathematical means. Or vice versa: having watched the chaotic buffeting of a few dozen simulated particles we will courageously set out to analyse the essential, regular features hidden in that disorderly motion.

Thus our modest goal will be to identify a microscopic quantity that has all the properties of the thing we call entropy. For a particularly simple model system, the ideal gas, we will even write down an explicit formula for the function S(E, V). Keeping our eyes open as we take a walk through the neat front yard of ideal gas theory, we may well learn something about other, more complex systems such as crystals, liquids, or photon gases.

The grown-up child’s inquiry why the water is wet will have to remain unanswered for some more time. Even if we reformulate it in appropriate terms, asking for a microscopic-statistical explanation for the phenomenon of wetting, it exceeds the frame of this introductory treatment. – And so much the better: curiosity, after all, is the wellspring of all science.

1.1 A quick résumé of thermodynamics

• Thermodynamic state — State variables We can describe the state of a thermodymic system uniquely by specifying a small number of macroscopic observables (state variables) Xi : amount of substance (in mol) n; pressure P ; volume V ; temperature T ; internal energy E; etc. Equations of state of the form Form f (X1 , X2 , ..) = 0 serve to reduce the number of independent state variables: P V = nRT ; E = n CV T (ideal gas); (P + an2 /V 2 )(V − bn) = nRT (real gas; Van der Waals); etc. • First Law Joule’s experiment (heat of friction: stirring water with a spindle driven by a weight) proved that heat is a form of energy. Mechanical energy (W ) and heat (Q) may be converted both ways. Following general custom we assign a positive sign to mechanical work performed by the system, while heat is counted positive when it is fed into the system. The law of energy conservation then reads, taking into account the internal energy E, dE = dQ − dW (1.1)

Frequently the mechanical energy occurs in the form of ”volume work”, i.e. work performed against the ambient pressure when the system expands: dW = P dV (piston lifting weight). It should be noted that E is a state variable, while Q and W may depend not only on the state of the system but also on the way it has reached that state; we may stress this by ¯ ¯ using the notation dE = dQ − dW .1 Internal energy of an ideal gas: The expansion experiment (also by Joule: expand air from

such as Q and W are often called process variables.

1 Quantities

2

Impossible are those processes in which nothing happens but a transport of heat ∆Q from a colder to a warmer reservoir. There are several logically equivalent formulations: Impossible are those processes in which nothing happens but a complete transformation of ∆Q into ∆W . However. here we do transport ∆Q. but the end state is diﬀerent from the initial one (larger V ). which relates to real gases. that E(T ) = CV T . 3 .e. Therefore we ﬁnd.2) V V1 This process may be reversed: by slowly increasing the weight load on the piston and removing any excess heat to a thermostatted heat bath we may recompress the gas isothermally. meaning that the internal energy of an ideal gas depends only on the temperature T . The Second Law tells us which transformations are excluded. with V = const.e. using the empirical fact that CV = const. T ) reads simply E = E(T ). As soon as this ﬂow has ebbed oﬀ. [Clausius] Example: Heat pump. for an ideal gas the formula E = E(V. • Reversible processes If a process is such that the inversion of the arrow of time t −→ −t will again refer to a possible process.) Since ∆W = 0 we have ∆E = ∆Q. Example 1: Reversible expansion Let an ideal gas expand isothermally – i. we have reached thermal equilibrium – by deﬁnition. In other words. Thus ∆E = 0. [Kelvin] Example: Expansion of a gas against a weighted piston. this process is called reversible. the internal energy E cannot depend on V . and we have 1 V2 ∆Q = ∆W = P (V ) dV = nRT dV = nRT ln (1. here we have a complete change of ∆Q into P dV . (EidG = n CV T ). A measurable quantity to characterize this equilibrium state is the temperature: [T 1 = T2 ]eq • Second Law Not all transformations of energy that are in accordance with the First Law are really possible. • Thermal interaction — Thermal equilibrium — Temperature Bringing two bodies in contact such that they may exchange energy. is possible only by heating: ∆E = ∆Q = CV ∆T . since the volumes of ﬁnal and initial state diﬀer. 
we will in general observe a ﬂow of a certain amount of energy in one direction. Feeding energy to the gas in an isochoric way. i. It is a consequence of the Second Law that there are processes which are not reversible. We know even more about E(T ). (Do not confuse this with Joule and Thompson’s throttling experiment. The internal energy of the gas remains constant. but as T2 = T1 (empirical fact) there follows ∆Q ∝ ∆T = 0.one bottle into an additional. with appropriate input of heat – against a (slowly diminishing!) weight. evacuated bottle) demonstrates that EidG = CV T . but in order to do so we have to feed in mechanical energy.

is deﬁned only up to an additive constant. however. This value. The diﬀerence of entropies is then given by 2 dQ ∆S ≡ S(2) − S(1) = (1. independent of what happened to the system before. Result: T2 = T1 . Changing the direction of time does not produce a possible process: the shrinking of a gas to a smaller volume without input of mechanical energy does not occur. ◦ When a thermally isolated system is in equilibrium the entropy is at its maximum. ◦ Thus in a thermally isolated system (the universe being a notable special case) we have dQ = 0 and therefore ∆S ≥ 0. i. Example 2: Irreversible expansion Let an ideal gas expand into vacuum (Joule’s expansion experiment).to reach the initial state again. therefore no change of internal energy. in any particular physical state of a system the entropy S has a well-deﬁned value.e. but the volume has increased.3) T 1 S has the following important properties: ◦ S is a state variable. In this case no energy has been exerted and stored. meaning that the entropy will never decrease spontaneously. the Third Law removes this ambiguity by stating that S(T → 0) = 0. 4 . • Entropy A mathematically powerful formulation of the Second Law relies on a quantity which is deﬁned as follows: Let 1 −→ 2 be a reversible path from state 1 to 2. ◦ ∆S > 2 1 dQ/T in all irreversible processes.

Since S is a state variable we have as before V2 (1. thus conserving its thermal energy or.7) T V1 Thus the entropy of the combined system remains the same. T ) is indicated. this decreases the reservoir’s entropy by ∆Q V2 ∆Sres = − = −nR ln (1. by the experimenter. and the reverse process is possible as well.6) At the same time the heat reservoir delivers the energy ∆Q to the gas. To describe this experimental situation we will of course choose the pair (P. Any of the two mechanical variables – P or V – may be combined with one of the two thermal variables – T or S – to describe the state.◦ As a consequence of the First Law dE = T dS − P dV we have dE dS =T V (1. which we measure or control. we can easily decide whether the change of state is reversible or not. its entropy. the description in terms of (V. • Entropy applied: By scrutinizing the entropy balance in a thermodynamic process we may ﬁnd out whether that process is. according to the Second Law. and not its position.4) But this means that we can describe thermal equilibrium in yet another way: The ﬂow of energy between two bodies in contact will come to an end when dE dS = 1 dE dS (1. by 0 = dQ = T dS. entropy is additive (extensive). In particular. no energy is fed in. Example 2: Irreversible expansion: Let the initial and ﬁnal states of the gas be the same as above. Then again. =⇒reversible. And instead of holding the temperature constant we may put a thermal insulation around the system. Of course. a possible one. the natural mechanical variable is then the pressure P . the natural choice will be that pair of variables that is controlled. For example. The reverse process would therefore give rise to a decrease in entropy – which is forbidden: =⇒irreversible. Let us consider the possible choices: 5 . if a sample of gas is contained in a thermostatized (T ) cylinder with a piston whose position (V ) is directly observed.5) 2 ◦ If two systems are in thermal equilibrium then Stot = S1 + S2 . 
it may be the force on the piston. S). however. which makes ∆Sres = 0 and thus ∆Stot > 0. or monitored.8) ∆Sgas = nR ln V1 This time. Example 1: Reversible expansion In the former experiment the entropy of the gas increases by the amount ∆Sgas = ∆Q ∆W V2 = = nR ln T T V1 (1. • Macroscopic conditions and thermodynamic potentials The thermodynamic state – the “macrostate” – of a gas is uniquely determined by two independent state variables.

* (S, V): The system's volume and heat content are controlled. This may be done by thermally insulating the sample (while allowing for controlled addition of heat, dQ = T dS) and manipulating the position of the piston. The internal energy is an (often unknown) function of these variables: E = E(S, V). Now let the macrostate (S, V) be changed reversibly, meaning that we introduce small changes dS, dV of the state variables. According to the First Law, the change dE of internal energy must then be the net sum of the mechanical and thermal contributions: dE(S, V) = (∂E/∂S)V dS + (∂E/∂V)S dV. Comparing this to dE = T dS − P dV we derive the thermodynamic ("Maxwell's") relations P = −(∂E/∂V)S and T = (∂E/∂S)V. The internal energy E(S, V) is our first example of a Thermodynamic Potential. Its usefulness stems from the fact that the thermodynamic observables P and T may be represented as derivatives of E(S, V). We will now consider three more of these potentials, each pertaining to another choice of state variables.
Special case dS = 0: For adiabatic expansion we see that the internal energy of the gas changes by ∆E = −∫ P dV – just the amount of mechanical work done by the gas.

* (V, T): The system's volume and its temperature are controlled. The appropriate thermodynamic potential is then the Helmholtz Free Energy defined by A(V, T) ≡ E − T S. Using dA = dE − T dS − S dT = −P dV − S dT we may derive the relations P = −(∂A/∂V)T and S = −(∂A/∂T)V.
Special case dT = 0: Considering a process of isothermal expansion we find that now the free energy of the gas changes by ∆A = −∫ P dV – again, the amount of mechanical work done by the gas. This is worth remembering: the mechanical work done by a substance (not just an ideal gas!) that is allowed to expand isothermally and reversibly from volume V1 to V2 is equal to the difference between the free energies of the final and initial states.

* (S, P): As in the first case, the system may be allowed to expand, but without replacement of the exported mechanical energy by the import of heat (adiabatic expansion, S = const); but now the force acting on the piston (and not the latter's position) is the controllable mechanical parameter. Define the Enthalpy H ≡ E + P V: dH = dE + P dV + V dP = T dS + V dP =⇒ T = (∂H/∂S)P and V = (∂H/∂P)S.
Special case dS = 0: The enthalpy difference is then just ∆H = ∫₁² V(P, S = const) dP, which may be either measured or calculated using an appropriate equation of state V(P, S).

* (T, P): Heat bath and force-controlled piston. Gibbs Potential G(P, T) ≡ A + P V: dG = dA + P dV + V dP = −S dT + V dP =⇒ S = −(∂G/∂T)P and V = (∂G/∂P)T.
Special case dT = 0: The basic experiment described earlier, isothermal expansion against a controlled force (= load on the piston), is best discussed in terms of G: the change ∆G equals simply ∫₁² V(P, T = const) dP, which may be measured. (In rare cases, such as the ideal gas, there may be an equation of state for V(P, T), which allows us to calculate the integral.)
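As a small numerical aside (not part of the original notes; the parameter values are freely chosen), we can check the isothermal statement ∆A = −∫ P dV for an ideal gas: the reversible work integral of P = nRT/V from V1 to V2 must equal nRT ln(V2/V1).

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def isothermal_work(n, T, V1, V2, steps=100_000):
    """W = integral of P dV along a reversible isotherm of an ideal gas,
    with P = nRT/V, evaluated by a midpoint quadrature."""
    dV = (V2 - V1) / steps
    return sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

W = isothermal_work(1.0, 300.0, 1.0e-3, 2.0e-3)      # one mole, 300 K, V doubles
W_exact = 1.0 * R * 300.0 * math.log(2.0)            # nRT ln(V2/V1) = A1 - A2
```

The numerical integral and the closed form agree to high accuracy, illustrating that the work extracted isothermally is exactly the drop in free energy.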

The multitude of thermodynamic potentials may seem bewildering; however, it should not. To repeat, they were actually introduced for convenience of description by the "natural" independent variables suggested by the experiment – or by the machine design: we should not forget that the early thermodynamicists were out to construct thermomechanical machines.

There is an extremely useful and simple mnemonic scheme, due to Maxwell, that allows us to reproduce the basic thermodynamic relations without recourse to longish derivations: Maxwell's square. Here it is:

Figure 1.1: Thermodynamics in a square: Maxwell's mnemonic device

• First of all, it tells us the natural variables of the potentials: A = A(V, T), E = E(S, V), H = H(S, P), and G = G(P, T).

• Next, the eight particularly important Maxwell relations for the state variables V, T, P, S may be extracted immediately:

V = (∂H/∂P)S = (∂G/∂P)T    (1.9)
T = (∂E/∂S)V = (∂H/∂S)P    (1.10)
P = −(∂E/∂V)S = −(∂A/∂V)T    (1.11)
S = −(∂A/∂T)V = −(∂G/∂T)P    (1.12)

• Using these relations we may easily write down the description of infinitesimal state changes in the various representations given above:

dE(V, S) = −P dV + T dS    (1.13)
dA(V, T) = −P dV − S dT    (1.14)
dH(P, S) = V dP + T dS    (1.15)
dG(P, T) = V dP − S dT    (1.16)

• Finally, the relations between the various thermodynamic potentials are barely hidden – discover the rules yourself: A = E − T S, H = E + P V, G = A + P V.

Chemical Potential: So far we have assumed that the mass (or particle) content of the considered system was constant.
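One of the Maxwell relations hidden in the square, (∂S/∂V)T = (∂P/∂T)V, can be verified numerically. The sketch below (our own illustration, not from the course material) uses the Helmholtz free energy of a monatomic ideal gas up to additive constants – the constants drop out of all derivatives – and compares the two mixed derivatives by finite differences.

```python
import math

def A(V, T, n=1.0, R=8.314):
    """Helmholtz free energy of a monatomic ideal gas, up to additive
    constants (assumed form, sufficient for derivative checks)."""
    return -n * R * T * (math.log(V) + 1.5 * math.log(T))

def deriv(f, x, h):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

V0, T0 = 1.0, 300.0
hV, hT = 1e-4, 1e-2

P = lambda T: -deriv(lambda V: A(V, T), V0, hV)   # P = -(dA/dV)_T
S = lambda V: -deriv(lambda T: A(V, T), T0, hT)   # S = -(dA/dT)_V

dSdV = deriv(S, V0, hV)   # (dS/dV)_T
dPdT = deriv(P, T0, hT)   # (dP/dT)_V ; Maxwell: the two must agree (= nR/V)
```

Both derivatives come out equal to nR/V, as the square predicts.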

If we allow the number n of moles to vary, another important parameter enters the scene. Generalizing the internal energy change to allow for a substance loss or increase, we write

dE(S, V, n) = −P dV + T dS + µ dn    (1.17)

In other words, the chemical potential µ quantifies the energy increase of a system upon addition of substance, keeping entropy and volume constant: µ = (∂E/∂n)S,V. Since S is itself dependent on n, another, equally valid relation is usually preferred:

µ = (∂G/∂n)T,P    (1.18)

The name given to this state variable stems from the fact that substances tend to undergo a chemical reaction whenever the sum of their individual chemical potentials exceeds that of the reaction product. However, the importance of µ is not restricted to chemistry. We will encounter it several times on our way through statistical/thermal physics.

That thing called entropy: We now set a definite goal for this tutorial. From an analysis of the statistical properties of many-body systems we hope to find a microscopic correlate for the quantity which thermodynamicists introduced under the name of Entropy. Thus we will look out for a function S(E, V) that must have these properties: Given two systems (e.g. ideal gases) with n1, V1, E1 and n2, V2, E2 = E − E1 which are allowed to exchange heat with the constraint that the total energy E remains constant:

• The regular flow of energy in one direction is to stop as soon as (dS/dE)|E1 = (dS/dE)|E2 (equilibrium)
• The final state shall be characterized by S = S1 + S2 (additivity)

If we succeed in finding such a S(E, V), we can in principle use the inverse function E(S, V) and the relations (∂E/∂V)S = −P etc. to build up thermodynamics from first principles!

1.2 Model systems

Statistical physics must make use of models. To simplify matters we sometimes invoke crudely simplified images of reality, which may seem to render the results of theory useless. However, many fundamental statements are all but independent of the particular model system used in their derivation. Instead, they are consequences of a few basic properties common to all many-body systems, and of the laws of statistics. Thus the following models should be understood primarily as vehicles for demonstrating statistical-mechanical predictions.

• STADIUM BILLIARD: This model, introduced by Bunimovich and Sinai, will serve us to demonstrate the existence of chaos even in quite small systems. Chaos, it must be understood, is a fundamental prerequisite for the application of statistical rules. It is a basic property of chaotic systems that they will acquire each one of a certain set of "states" with equal relative frequency.

The stadium billiard is defined as follows. Let a "light ray" or "mass point" move about in a two-dimensional container with reflecting or ideally elastic walls. The boundaries have both flat and semicircular parts, which gives rise to an efficient mixing of the flight directions upon reflection. Such a system is chaotic, meaning that the motional degrees of freedom {vx, vy} exhibit a uniform probability distribution. Since we have vx² + vy² = const, the "phase points" describing the momentary motion are distributed evenly over the periphery of a circle.

The model is easily extended into three dimensions: the velocity then has three components, energy conservation keeps the phase points on the surface of a sphere, and equal a priori probability (or chaos) means that the distribution of points on this "energy surface" is homogeneous – nowhere denser or thinner than elsewhere. It is this "equal a priori probability" of states which we need to proceed into the heart of Statistical Mechanics.

Simulation: Stadium Billiard in 2 Dimensions
– See the trajectory of the particle (ray); note the frequency histograms for flight direction φ ≡ arctan(vy/vx) and x-velocity vx.
– "Chaos" is demonstrated by simultaneously starting a large number of trajectories with nearly identical initial directions: → fast emergence of equidistribution on the circle vx² + vy² = const.
[Code: Stadium]

Figure 1.2: Applet Stadium

Figure 1.3: Applet VarSinai
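The applet's behaviour can be mimicked with a few lines of our own toy code (a time-stepping sketch, not the applet's actual algorithm; geometry parameters a, r and the step dt are freely chosen): a point moves at unit speed inside a stadium made of flat walls y = ±r for |x| ≤ a, closed by semicircular caps of radius r centred at (±a, 0), and is reflected specularly at the boundary.

```python
import math, random

def run_stadium(steps=150_000, a=1.0, r=1.0, dt=2e-3, seed=1):
    """Toy stadium billiard: ballistic motion plus specular reflection.
    Returns the set of visited velocity quadrants and the final speed."""
    rng = random.Random(seed)
    phi0 = rng.uniform(0.0, 2.0 * math.pi)
    x, y = 0.0, 0.0
    vx, vy = math.cos(phi0), math.sin(phi0)        # unit speed
    quadrants = set()
    for _ in range(steps):
        x += vx * dt
        y += vy * dt
        if abs(x) <= a:
            if abs(y) > r:                         # flat wall, normal (0, +-1)
                vy = -vy
                y = 2.0 * math.copysign(r, y) - y
        else:
            cx = math.copysign(a, x)               # nearest cap centre
            dx, dy = x - cx, y
            d = math.hypot(dx, dy)
            if d > r:                              # circular cap, radial normal
                nx, ny = dx / d, dy / d
                vn = vx * nx + vy * ny
                vx, vy = vx - 2.0 * vn * nx, vy - 2.0 * vn * ny
                x, y = cx + nx * (2.0 * r - d), ny * (2.0 * r - d)
        quadrants.add((vx > 0.0, vy > 0.0))
    return quadrants, math.hypot(vx, vy)

quads, speed = run_stadium()
```

The speed (and hence vx² + vy²) is conserved by every reflection, while the flight direction quickly visits all four quadrants – the mixing that the histograms in the applet display.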

Simulation: Sinai Billiard – Ideal one-particle gas in a box having randomizers on its walls. – See the trajectory of the particle (ray); note the frequency histograms for flight direction φ ≡ arctan(vy/vx) and x-velocity vx. [Code: VarSinai]

• CLASSICAL IDEAL GAS: N particles are confined to a volume V. There are no mutual interactions between molecules – except that they may exchange energy and momentum in some unspecified but conservative manner. The energy contained in the system is entirely kinetic, and in an isolated system with ideally elastic walls it remains constant. The momentary state of a classical system of N point particles is completely determined by the specification of all positions {ri} and velocities {vi}. The theoretical treatment of such a system is particularly simple; nevertheless the results are applicable, with some caution, to gases at low densities. Note that air at normal conditions may be regarded as an almost ideal gas.

• IDEAL QUANTUM GAS: Again, N particles are enclosed in a volume V. However, the various states of the system are now to be specified not by the positions and velocities but according to the rules of quantum mechanics. Considering first a single particle in a one-dimensional box of length L, we have Schroedinger's equation

HΨ(x) = EΨ(x) with H ≡ −(ℏ²/2m) d²/dx²    (1.19)

The solutions are in this case Ψn(x) = √(2/L) sin(πnx/L), with the energy eigenvalues En = h²n²/8mL² (n = 1, 2, ...). In two dimensions – the box being quadratic with side length L – we have for the energies

En = (h²/8mL²)(nx² + ny²)    (1.20)

where n ≡ {nx, ny}. A similar expression may be found for three dimensions.

Figure 1.4: Quantum particle in a 2-dimensional box: surface plot of Ψ(x, y) for nx = 3, ny = 2
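Counting the discrete states of equ. 1.20 is a one-liner; the small sketch below (our own illustration, in units of h²/8mL²) enumerates the 2-d box levels nx² + ny² and their degeneracies.

```python
from collections import Counter

def levels_2d(nmax=30):
    """Energies E/(h^2/8mL^2) = nx^2 + ny^2 of a particle in a square
    2-d box, mapped to their degeneracies (number of (nx, ny) pairs)."""
    deg = Counter(nx * nx + ny * ny
                  for nx in range(1, nmax + 1)
                  for ny in range(1, nmax + 1))
    return dict(sorted(deg.items()))

lv = levels_2d()
```

For example, the ground state E = 2 (i.e. nx = ny = 1) is unique, E = 5 is doubly degenerate ((1,2) and (2,1)), and E = 50 is threefold degenerate ((1,7), (7,1), (5,5)).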

If there are N particles in the box we have, in three dimensions,

E(N; n) = (h²/8mL²) Σ(i=1..N) [nix² + niy² + niz²]    (1.21)

where nix is the x quantum number of particle i. Note: In writing the sum 1.21 we only assumed that each of the N particles is in a certain state ni ≡ (nix, niy, niz). We have not considered yet if any combination of single particle states ni (i = 1, 2, ..., N) is indeed permitted, or if certain ni, nj might exclude each other (Pauli principle for fermions).

• HARD SPHERES, HARD DISCS: Again we assume that N such particles are confined to a volume V. However, the finite-size objects may now collide with each other, and at each encounter they will exchange energy and momentum according to the laws of elastic collisions. At very low densities such a model system will of course resemble a classical ideal gas. However, since there is now a – albeit simplified – mechanism for the transfer of momentum and energy, the model is a suitable reference system for kinetic theory, which is concerned with the transport of mechanical quantities. The relevant results will be applicable as first approximations also to moderately dense gases and fluids. In addition, the HS model has a special significance for the simulation of fluids.

Simulation: Hard discs – "Gas" of hard discs in a square box with reflecting walls. The motion of 1, 2, ..., N particles is followed. A particle bouncing back from a wall will only invert the respective velocity component. To achieve chaos even with a single moving particle the walls are decorated with "randomizers", i.e. reflecting semicircles.
– Displayed: trajectory, frequency histograms for flight direction φ ≡ arctan(vy/vx) and x-speed vx.
[Code: Harddisks]

Figure 1.5: Applet Harddisks
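The elastic collision rule for two equal-mass discs is simple enough to sketch here (our own minimal version, not the applet's code): the velocity components along the line of centres are exchanged, the transverse components are kept – which conserves both momentum and kinetic energy.

```python
def collide(r1, v1, r2, v2):
    """Elastic collision of two equal-mass discs touching at positions
    r1, r2: exchange the velocity components along the centre line."""
    nx, ny = r2[0] - r1[0], r2[1] - r1[1]
    d2 = nx * nx + ny * ny
    # projection of the relative velocity onto the centre-to-centre axis
    s = ((v1[0] - v2[0]) * nx + (v1[1] - v2[1]) * ny) / d2
    w1 = (v1[0] - s * nx, v1[1] - s * ny)
    w2 = (v2[0] + s * nx, v2[1] + s * ny)
    return w1, w2

w1, w2 = collide((0.0, 0.0), (1.0, 0.5), (1.0, 0.0), (-1.0, 0.0))
```

In this head-on example the x-components are simply swapped while the transverse component of the first disc survives.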

Simulation: Hard spheres – "Gas" of N hard spheres in a cubic box with reflecting walls. The motion of 1, 2, ..., N particles is followed. To achieve chaos even with a single moving particle the walls are decorated with "randomizers", i.e. reflecting semispheres.
– Displayed: trajectory, frequency histograms for flight direction φ ≡ arctan(vy/vx) and x-speed vx.
[Code: Hspheres]

Figure 1.6: Applet Hspheres

In simulation one often uses the so-called "periodic boundary conditions" instead of reflecting vessel walls: a particle leaving the (quadratic, cubic, etc.) cell on the right is then fed in with identical velocity from the left boundary. This guarantees that particle number, energy and total momentum are conserved. The situation of a molecule well within a macroscopic sample is better approximated in this way, since each particle is at all times surrounded by other particles instead of having a wall nearby.

⇒ Simulation 1.4: N hard discs in a 2D box with periodic boundaries. [Code: Hdiskspbc; UNDER CONSTRUCTION]

• LENNARD-JONES MOLECULES: This model fluid is defined by the interaction potential (see figure)

U(r) = 4ε [(r/σ)⁻¹² − (r/σ)⁻⁶]    (1.22)

where ε (potential well depth) and σ (contact distance) are substance specific parameters. In place of the hard collisions we have now a continuous repulsive interaction at small distances; in addition there is a weak attractive force at intermediate pair distances. The model is fairly realistic: the interaction between two rare gas atoms is well approximated by equ. 1.22. The LJ model achieved great importance in the Sixties and Seventies, when the microscopic structure and dynamics of simple fluids was an all-important topic. The interplay of experiment, theory and simulation proved immensely fruitful for laying the foundations of modern liquid state physics.

Simulation: Lennard-Jones fluid – N Lennard-Jones particles in a 2d box with periodic boundary conditions. [Code: LJones]
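The bookkeeping behind periodic boundaries fits into two small helper functions (a standard sketch, not taken from the applets): one puts a coordinate back into the primary cell, the other – the "minimum image" convention – returns the smallest signed separation between two particles along one axis.

```python
def wrap(x, L):
    """Put a coordinate back into the primary cell [0, L)."""
    return x % L

def min_image(dx, L):
    """Smallest signed separation along one axis under periodic
    boundary conditions (minimum-image convention)."""
    return dx - L * round(dx / L)
```

A particle at x = 10.5 in a cell of side L = 10 reappears at 0.5, and two particles 9 cells-units apart are really only 1 unit apart through the boundary.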

5 2 2.23) where f is a force constant and m is the atomic mass.1 4*(r**(-12)-r**(-6)) 0.5 -1 0 0. by a harmonic potential.5 4 Figure 1. Figure 1.7: Lennard-Jones potential with = 1 (well depth) and σ = 1 (contact distance where U (r) = 0).5 0 -0. for small excursions of any one particle from its equilibrium position.23 to 2 and 3 dimensions is straightforward. 13 . Deﬁning the displacements of the atoms from their lattice points by xj we have for the energy of the lattice f ˙ E(x.8: Applet LJones • HARMONIC CRYSTAL The basic model for a solid is a regular conﬁguration of atoms or ions that are bound to their nearest neighbors by a suitably modelled pair potential.5 3 3. The generalization of 1. Whatever the functional form of this potential. Let l denote the equilibrium (minimal potential) distance between two neighbouring particles. x) = 2 N j=2 (xj − xj−1 ) + 2 m 2 N x2 ˙j j=1 (1. it may be approximated. Many real ﬂuids consisting of atoms or almost isotropic molecules are well described by the LJ potential.5 1 1. the equilibrium position of atom j in a one-dimensional lattice is then given by Xj = j l.

The further treatment of this model is simplified by the approximate assumption that each particle is moving independently from all others in its own oscillator potential: E(x, ẋ) ≈ f Σ(j=1..N) xj² + (m/2) Σ(j=1..N) ẋj². In going to 2 and 3 dimensions one introduces the further simplifying assumption that this private oscillator potential is isotropically "smeared out": Epot({x, y, z}) ≈ f Σ(j=1..N) (xj² + yj² + zj²). The model thus defined is known as the "Einstein model" of solids.

• SPIN LATTICES: Magnetically or electrically polarizable solids are often described by models in which "spins" with the discrete permitted values σi = ±1 are located at the vertices of a lattice. The energy of the system is then defined by E ≡ −H Σi σi, where H is an external field. If the spins have no mutual interaction the discrete states of such a system are easy to enumerate. This is why such models are often used to demonstrate statistical-mechanical – or actually, combinatorical – relations (Reif: Berkeley Lectures). Of more physical significance are those models in which the spins interact with each other. For example, the parallel alignment of two neighboring spins may be energetically favored over the antiparallel configuration (Ising model). Monte Carlo simulation experiments on systems of this type have contributed much to our understanding of the properties of ferromagnetic and -electric substances.

• MODELS FOR COMPLEX MOLECULES: Most real substances consist of more complex units than isotropic atoms or Lennard-Jones type particles. There may be several interaction centers per particle, containing electrical charges, dipoles or multipoles, and the units may be joined by rigid or flexible bonds. Some of these models are still amenable to a theoretical treatment, but more often than not the methods of numerical simulation – Monte Carlo or molecular dynamics – must be invoked.

1.3 Fundamentals of Statistics

By statistics we denote the investigation of regularities in apparently non-deterministic processes. Let us consider a repeatable experiment – say, the throwing of a die – which in each instance leads to one of several possible results e – say, e ≡ {number of points = 4}, e ≡ {no. of points ≤ 3}, or e ≡ {no. of points = 6}. An important basic quantity in this context is the "relative frequency" of an "event".
Now repeat this experiment n times (say, 100-1000 trials) under equal conditions and register the number of cases in which the specific result e occurs; call this number fn(e). The relative frequency of e is then defined as r(e) ≡ fn(e)/n. Following R. von Mises we denote as the "probability" of an event e the expected value of the relative frequency in the limit of infinitely many experiments:

P(e) ≡ lim(n→∞) fn(e)/n    (1.24)

Now, this definition does not seem very helpful. It implies that we have already done some experiments to determine the relative frequency, and it tells us no more than that we should expect more or less the same relative frequencies when we go on repeating the trials. What we want, however, is a recipe for the prediction of P(e).
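The von Mises definition is easy to act out on a computer (a simple sketch with a seeded pseudo-random die; the event and trial count are freely chosen): the relative frequency of {no. of points ≤ 3} settles near 1/2 as n grows.

```python
import random

def relative_frequency(event, n, seed=42):
    """Empirical r(e) = f_n(e)/n for n throws of a fair six-sided die."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n) if event(rng.randint(1, 6)))
    return hits / n

r_le3 = relative_frequency(lambda k: k <= 3, 100_000)   # expect about 1/2
r_six = relative_frequency(lambda k: k == 6, 100_000)   # expect about 1/6
```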

To obtain such a recipe we have to reduce the event e to so-called "elementary events" that obey the postulate of equal a priori probability. Since the probability of any particular one among K possible elementary events εi is just P(εi) = 1/K, we may then derive the probability of a compound event by applying the rules

P(e = εi or εj) = P(εi) + P(εj)    (1.25)
P(e = εi and εj) = P(εi) · P(εj)    (1.26)

Thus the predictive calculation of probabilities reduces to the counting of the possible elementary events that make up the event in question.

Example: The result ε6 ≡ {no. of points = 6} is one among 6 mutually exclusive elementary events with equal a priori probabilities (= 1/6). The compound event e ≡ {no. of points ≤ 3} consists of the elementary events εi = {1, 2 or 3}; its probability is thus P(1 ∨ 2 ∨ 3) = 1/6 + 1/6 + 1/6 = 1/2.

How might this apply to statistical mechanics? – Let us assume that we have N equivalent mechanical systems with possible states s1, s2, ..., sK. A relevant question is then: what is the probability of a situation in which

e ≡ {k1 systems are in state s1, k2 systems in state s2, ...}    (1.27)

We can generally assume that both the number N of systems and the number K of accessible states are very large – in the so-called "thermodynamic limit" they are actually taken to approach infinity.

Example: N = 60 dice are thrown (or one die 60 times!). What is the probability that 10 dice each show numbers of points 1, 2, ..., 6? What, in contrast, is the probability that all dice show a "one"? The same example, but with more obviously physical content: Let N = 60 gas atoms be contained in a volume V, which we imagine to be divided into 6 equal partial volumes. What is the probability that at any given time we find ki = 10 particles in each subvolume? And how probable is the particle distribution (60, 0, 0, 0, 0, 0)? (Answer: see below under the heading "multinomial distribution".)

Before advancing into the field of physical applications we will review the fundamental concepts and truths of statistics and probability theory, focussing on events that take place in number space, either R (real numbers) or N (natural numbers). This gives rise to certain mathematical simplifications.

DISTRIBUTION FUNCTION: Let x be a real random variate in the region (a, b). The distribution function

P(x0) ≡ P{x < x0}    (1.28)

is defined as the probability that some x is smaller than the given value x0. The function P(x) is monotonically increasing and has P(a) = 0 and P(b) = 1. The distribution function is dimensionless: [P(x)] = 1. The most simple example is the equidistribution, for which

P(x0) = (x0 − a)/(b − a)    (1.29)

Another important example, with a = −∞ and b = ∞, is the normal distribution

P(x0) = (1/√2π) ∫(−∞..x0) dx e^(−x²/2)    (1.30)

and its generalization, the Gaussian distribution

P(x0) = (1/√(2πσ²)) ∫(−∞..x0) dx e^(−(x − ⟨x⟩)²/2σ²)    (1.31)

where the parameters ⟨x⟩ and σ² define an ensemble of functions.

DISTRIBUTION DENSITY: The distribution or probability density p(x) is defined by

p(x0) dx ≡ P{x ∈ [x0, x0 + dx]} ≡ dP(x0)    (1.32)

In other words, p(x) is just the differential quotient of the distribution function:

p(x) = dP(x)/dx    (1.33)
P(x0) = ∫(a..x0) p(x) dx    (1.34)

p(x) has a dimension; it is the reciprocal of the dimension of its argument x: [p(x)] = 1/[x]    (1.35)

For the equidistribution we have p(x) = 1/(b − a), and for the normal distribution p(x) = (1/√2π) e^(−x²/2)    (1.36)

If x is limited to discrete values xα with a step ∆xα ≡ xα+1 − xα one often writes

pα ≡ p(xα) ∆xα    (1.37)

for the probability of the event x = xα. This pα is by definition dimensionless, although it is related to the distribution density p(x) for continuous arguments. The definition 1.37 includes the special case that x is restricted to integer values k; in that case ∆xα = 1.

MOMENTS OF A DENSITY: By this we denote the quantities

⟨xⁿ⟩ ≡ ∫(a..b) xⁿ p(x) dx or, in the discrete case, ⟨xⁿ⟩ ≡ Σα pα xαⁿ    (1.38)

The first moment ⟨x⟩ is also called the expectation value or mean value of the distribution density p(x), and the second moment ⟨x²⟩ is related to the variance and the standard deviation: variance σ² = ⟨x²⟩ − ⟨x⟩² (standard deviation σ = square root of variance).

Examples: For an equidistribution on [0, 1] we have ⟨x⟩ = 1/2, ⟨x²⟩ = 1/3 and σ² = 1/12. For the normal distribution we find ⟨x⟩ = 0 and ⟨x²⟩ = σ² = 1.
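The moments just quoted for the equidistribution on [0, 1] can be reproduced with a few lines of quadrature (our own check, using a plain midpoint rule):

```python
def moments_uniform(n_pts=100_000):
    """First and second moment of the equidistribution on [0, 1]
    (density p(x) = 1) by midpoint quadrature."""
    h = 1.0 / n_pts
    m1 = m2 = 0.0
    for i in range(n_pts):
        x = (i + 0.5) * h
        m1 += x * h          # integral of x * p(x)
        m2 += x * x * h      # integral of x^2 * p(x)
    return m1, m2, m2 - m1 * m1

m1, m2, var = moments_uniform()
```

The results reproduce ⟨x⟩ = 1/2, ⟨x²⟩ = 1/3 and σ² = 1/12.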

Figure 1.9: Equidistribution and normal distribution: functions P and densities p

SOME IMPORTANT DISTRIBUTIONS

• Equidistribution: Its great significance stems from the fact that this distribution is central both to statistical mechanics and to practical numerics. In the theory of statistical-mechanical systems, one of the fundamental assumptions is that all states of a system that have the same energy are equally probable (axiom of equal a priori probability). And in numerical computing the generation of homogeneously distributed pseudo-random numbers is relatively easy; to obtain differently distributed random variates one usually "processes" such primary equidistributed numbers.

• Gauss distribution: This distribution pops up everywhere in the quantifying sciences. The reason for its ubiquity is the "central value theorem": every random variate that can be expressed as a sum of arbitrarily distributed random variates will in the limit of many summation terms be Gauss distributed, regardless of how the individual contributions may be distributed. For example, when we have a complex measuring procedure in which a number of individual errors (or uncertainties) add up to a total error, then this error will be nearly Gauss distributed. In addition, several other physically relevant distributions, such as the binomial and multinomial densities (see below), approach the Gauss distribution under certain – quite common – circumstances.

• Binomial distribution: This discrete distribution describes the probability that in n independent trials an event that has a single trial probability p will occur exactly k times:

pn(k) ≡ P{k times e in n trials} = (n choose k) p^k (1 − p)^(n−k)    (1.39)

For the first two moments of the binomial distribution we have ⟨k⟩ = np (not necessarily integer) and σ² = ⟨k²⟩ − ⟨k⟩² = np(1 − p).

Application: Fluctuation processes in statistical systems are often described in terms of the binomial distribution. For example, consider a particle freely roaming a volume V. The probability to find it at some given time in a certain partial volume V1 is p(V1) = V1/V.
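Equ. 1.39 and its first two moments are quickly verified in code (a direct sketch; the values n = 20, p = 0.4 match the legend of Figure 1.10):

```python
from math import comb

def binom_pmf(k, n, p):
    """Equ. 1.39: probability of exactly k successes in n trials."""
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

n, p = 20, 0.4
total = sum(binom_pmf(k, n, p) for k in range(n + 1))
mean = sum(k * binom_pmf(k, n, p) for k in range(n + 1))
var = sum(k * k * binom_pmf(k, n, p) for k in range(n + 1)) - mean ** 2
```

The sums reproduce normalization, ⟨k⟩ = np = 8 and σ² = np(1 − p) = 4.8.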

Figure 1.10: Binomial distribution density pn(k) for n = 1, 2, 5, 10, 20 and p = 0.4

Considering now N independent particles in V, the probability of finding just N1 of them in V1 is given by

p(N1) = (N choose N1) (V1/V)^N1 (1 − V1/V)^(N−N1)    (1.40)

The average number of particles in V1 and its standard deviation are

⟨N1⟩ = N V1/V and σ ≡ √⟨(∆N1)²⟩ = √(N (V1/V)(1 − V1/V))    (1.41)

Note that for Np ≈ 1 we have for the variance σ² ≈ σ ≈ ⟨N1⟩ ≈ 1, meaning that the population fluctuations in V1 are then of the same order of magnitude (namely, 1) as the mean number of particles itself.

For large n such that np >> 1 the binomial distribution approaches a Gauss distribution with mean np and variance σ² = npq, where q ≡ 1 − p (theorem of Moivre-Laplace):

pn(k) ⇒ pG(k) = (1/√(2πnpq)) exp[−(k − np)²/2npq]    (1.42)

If n → ∞ and p → 0 such that their product np ≡ λ remains finite, 1.39 approaches

pn(k) → (λ^k/k!) e^(−λ)    (1.43)

which goes by the name of Poisson distribution.

An important element in the success story of statistical mechanics is the fact that with increasing n the sharpness of the distributions 1.39 and 1.42 becomes very large. The relative width of the maximum, σ/⟨k⟩, decreases as 1/√n: for n = 10⁴ the width of the peak is no more than 1% of ⟨k⟩, and for "molar" orders of particle numbers, n ≈ 10²⁴, the relative width σ/⟨k⟩ is already ≈ 10⁻¹². Thus the density approaches a "delta distribution". This, however, renders the calculation of averages particularly simple:

⟨f(k)⟩ = Σk pn(k) f(k) → f(⟨k⟩)    (1.44)
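Both limiting laws can be checked numerically (our own sketch; the binomial term is evaluated in log space via lgamma, since p^k underflows for n = 10⁴):

```python
import math

def log_binom(k, n, p):
    """log of equ. 1.39, safe for large n."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1.0 - p))

def gauss(k, n, p):
    """Moivre-Laplace approximation, equ. 1.42."""
    q = 1.0 - p
    return math.exp(-(k - n * p) ** 2 / (2.0 * n * p * q)) / math.sqrt(2.0 * math.pi * n * p * q)

def poisson(k, lam):
    """Poisson limit, equ. 1.43."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# Gauss limit: n = 10^4, p = 0.4, at the peak k = np
b_peak = math.exp(log_binom(4000, 10_000, 0.4))
g_peak = gauss(4000, 10_000, 0.4)

# Poisson limit: n = 1000, p = 0.002, lambda = np = 2
b_rare = math.exp(log_binom(2, 1000, 0.002))
p_rare = poisson(2, 2.0)
```

At the peak the two densities agree to a fraction of a percent, and so do the binomial and Poisson values in the rare-event limit.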

or, equivalently,

⟨f(k)⟩ ≈ ∫ dk δ(k − ⟨k⟩) f(k) = f(⟨k⟩)    (1.45)

• Multinomial distribution: This is a generalization of the binomial distribution to more than 2 possible results of a single trial. Let e1, e2, ..., eK be the (mutually exclusive) possible results of an experiment; their probabilities in a single trial are p1, p2, ..., pK, with Σ(i=1..K) pi = 1. Now do the experiment n times; then

pn(k1, k2, ..., kK) = [n!/(k1! k2! ... kK!)] p1^k1 p2^k2 ... pK^kK (with k1 + k2 + ... + kK = n)    (1.46)

is the probability to have the event e1 just k1 times, e2 accordingly k2 times, etc.

We get an idea of the significance of this distribution in statistical physics if we interpret the K possible events as "states" that may be taken on by the n particles of a system (or, in another context, by the n systems in an ensemble of many-particle systems). The above formula then tells us the probability to find k1 among the n particles in state e1, etc.

Example: A die is cast 60 times. The probability to find each number of points just 10 times is

p60(10, 10, 10, 10, 10, 10) = [60!/(10!)⁶] (1/6)⁶⁰ = 7.457·10⁻⁵

To compare, the probabilities of two other cases: p60(10, 10, 10, 10, 9, 11) = 6.778·10⁻⁵ and p60(15, 15, 15, 5, 5, 5) = 4.406·10⁻⁸. Finally, for the quite improbable case (60, 0, ..., 0) we have p60 = 2.046·10⁻⁴⁷.

Due to its large number of variables (K) we cannot give a graph of the multinomial distribution. However, it is easy to derive the following two important properties:

– Approach to a multivariate Gauss distribution: just as the binomial distribution approaches, for large n, a Gauss distribution, the multinomial density approaches an appropriately generalized – "multivariate" – Gauss distribution.

– Increasing sharpness: if n and k1, k2, ..., kK become very large (multiparticle systems, or ensembles of n → ∞ elements), then pn(k1, k2, ..., kK) ≡ pn(k) has an extremely sharp maximum for a certain partitioning,

k* ≡ {ki* = npi, i = 1, ..., K}    (1.47)

This particular partitioning of the particles to the various possible states is then "almost always" realized, and all other allotments (or distributions) occur very rarely and may safely be neglected. This is the basis of the method of the most probable distribution, which is used with great success in several areas of statistical physics. [2.2]

STIRLING'S FORMULA: For large values of m the evaluation of the factorial m! is difficult. A handy approximation is Stirling's formula

m! ≈ √(2πm) (m/e)^m    (1.48)

Example: m = 69 (near most pocket calculators' limit): 69! = 1.7112·10⁹⁸, while √(2π·69) (69/e)⁶⁹ = 1.7092·10⁹⁸.

The same name Stirling's formula is often used for the logarithm of the factorial:

ln m! ≈ m(ln m − 1) + ln √(2πm) ≈ m(ln m − 1)    (1.49)

(The term ln √(2πm) may usually be neglected in comparison to m(ln m − 1).)
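Both the die example and Stirling's approximation can be recomputed directly (our own check; the multinomial is evaluated in log space to avoid overflow):

```python
import math

def stirling(m):
    """Equ. 1.48."""
    return math.sqrt(2.0 * math.pi * m) * (m / math.e) ** m

def multinom(ks, ps):
    """Equ. 1.46, evaluated via lgamma in log space."""
    n = sum(ks)
    logp = math.lgamma(n + 1)
    for k, p in zip(ks, ps):
        logp += k * math.log(p) - math.lgamma(k + 1)
    return math.exp(logp)

ratio = stirling(69) / math.factorial(69)          # close to 1
p_uniform = multinom([10] * 6, [1.0 / 6.0] * 6)    # about 7.457e-5
p_all_one = multinom([60, 0, 0, 0, 0, 0], [1.0 / 6.0] * 6)  # (1/6)^60
```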

Example 1: ln 69! = 226.1905, while Stirling gives 69(ln 69 − 1) + ln √(2π·69) = 223.1533 + 3.0360 = 226.1893.

Example 2: The die is cast again, but now there are 120 trials. When asked to produce 120! most pocket calculators will cancel their cooperation. So we apply Stirling's approximation: the probability of throwing each number of points just 20 times is

p120(20, 20, 20, 20, 20, 20) = [120!/(20!)⁶] (1/6)¹²⁰ ≈ √(240π) (120/(6e))¹²⁰ (1/20!)⁶ = 1.350·10⁻⁵

and the probability of the partitioning (20, 20, 20, 20, 19, 21) is 1.285·10⁻⁵.

STATISTICAL (IN)DEPENDENCE: Two random variates x1, x2 are statistically mutually independent (uncorrelated) if the distribution density of the compound probability (i.e. the probability for the joint occurence of x1 and x2) equals the product of the individual densities:

p(x1, x2) = p(x1) p(x2)    (1.50)

The density of a marginal distribution describes the density of one of the variables regardless of the specific value of the other one, meaning that we integrate the joint density over all possible values of the second variable:

p(x2) ≡ ∫(a1..b1) p(x1, x2) dx1    (1.51)

Example: In a fluid or gas the distribution density for a single component of the particle velocity is given by (Maxwell-Boltzmann)

p(vα) = √(m/2πkT) exp{−m vα²/2kT}, α = x, y, z    (1.52)

The degrees of freedom α = x, y, z are statistically independent; therefore the compound probability is given by

p(v) ≡ p(vx, vy, vz) = p(vx) p(vy) p(vz) = (m/2πkT)^(3/2) exp{−m(vx² + vy² + vz²)/2kT}    (1.53)

By conditional distribution density we denote the quantity

p(x2|x1) ≡ p(x1, x2)/p(x1)    (1.54)

(For uncorrelated x1, x2 we have p(x2|x1) = p(x2).)    (1.55)

TRANSFORMATION OF DISTRIBUTION DENSITIES: From 1.32 we can immediately conclude how the density p(x) will transform if we substitute the variable x. Given some bijective mapping y = f(x), x = f⁻¹(y), the conservation of probability requires

|dP(y)| = |dP(x)|    (1.56)
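Equ. 1.52 can be checked by quadrature (a sketch in reduced units m = kT = 1, our own choice): the single-component density integrates to 1 and has ⟨vα²⟩ = kT/m.

```python
import math

def p_v(v, m=1.0, kT=1.0):
    """Maxwell-Boltzmann density of one velocity component, equ. 1.52."""
    return math.sqrt(m / (2.0 * math.pi * kT)) * math.exp(-m * v * v / (2.0 * kT))

# midpoint quadrature over [-12, 12]; the tails beyond are negligible
h, vmax = 1e-3, 12.0
norm = mom2 = 0.0
for i in range(int(2 * vmax / h)):
    v = -vmax + (i + 0.5) * h
    w = p_v(v) * h
    norm += w
    mom2 += v * v * w
```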

Figure 1.11: Transformation of the distribution density (see Example 1): p(v) and p(E)

(The absolute value appears because we have not required f(x) to be an increasing function.) This leads to

|p(y) dy| = |p(x) dx|    (1.57)

or

p(y) = p(x) |dx/dy| = p[f⁻¹(y)] |df⁻¹(y)/dy|    (1.58)

Incidentally, this relation is true for any kind of density, such as mass or spectral densities, and not only for distribution densities.

Example 1: A particle of mass m moving in one dimension is assumed to have any velocity in the range ±v0 with equal probability, so we have p(v) = 1/2v0. The distribution density for the kinetic energy E = mv²/2 is then given by (see Figure 1.11)

p(E) = 2 p(v) |dv/dE| = (1/2√E0) (1/√E)    (1.59)

in the limits 0..E0, where E0 = mv0²/2. (The factor 2 in front of p(v) comes from the ambiguity of the mapping v ↔ E.)

Example 2: An object is found with equal probability at any point along a circular periphery, so we have p(φ) = 1/2π for φ ∈ [0, 2π]. Introducing cartesian coordinates x = r cos φ, y = r sin φ, we find for the distribution density of the coordinate x, with x ∈ [±r], that

p(x) = p(φ) |dφ/dx| = (1/π) (1/√(r² − x²))    (1.60)

(see Figure 1.12).

Problems equivalent to these examples:
a) A homogeneously blackened glass cylinder – or a semitransparent drinking straw – held sideways against a light source: absorption as a function of the distance from the axis?
b) Distribution of the x-velocity of a particle that can move randomly in two dimensions, keeping its kinetic energy constant.
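The density of equ. 1.59 can be put to the test numerically (our own sketch, in reduced units E0 = 1): despite the integrable 1/√E singularity it is normalized on (0, E0), and its mean is ⟨E⟩ = E0/3 – the same value one gets by averaging mv²/2 over the uniform velocity directly.

```python
import math

E0 = 1.0   # E0 = m v0^2 / 2 in reduced units (our choice)

def p_E(E):
    """Equ. 1.59: energy density for a uniformly distributed velocity."""
    return 1.0 / (2.0 * math.sqrt(E0 * E))

# midpoint quadrature on (0, E0); the endpoint singularity is integrable
n = 500_000
h = E0 / n
norm = mean = 0.0
for i in range(n):
    E = (i + 0.5) * h
    w = p_E(E) * h
    norm += w
    mean += E * w
```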

Figure 1.12: Transformation of the distribution density (see Example 2): p(φ) and p(x)

⇒ Simulation 1.6: Stadium Billiard. [Code: Stadium]

c) Distribution of the velocity of any of two particles arbitrarily moving in one dimension, keeping only the sum of their kinetic energies constant.

For the joint probability density of several variables the transformation formula is a direct generalization of 1.58:

p(y) = p(x) |dx/dy|    (1.61)

Here we write |dx/dy| for the functional determinant (or Jacobian) of the mapping x = x(y):

|d(x1, x2, ...)/d(y1, y2, ...)| = det | dx1/dy1  dx2/dy1  ... ; dx1/dy2  dx2/dy2  ... ; ... |    (1.62)

Example 3: Again, let v ≡ {vx, vy, vz}, and p(v) as in equ. 1.53. Now we write w ≡ {v, φ, θ}, with vx = v sin θ cos φ, vy = v sin θ sin φ, vz = v cos θ. The Jacobian of the mapping v = v(w) is

|d(vx, vy, vz)/d(v, φ, θ)| = det
| sin θ cos φ   −v sin θ sin φ   v cos θ cos φ |
| sin θ sin φ    v sin θ cos φ   v cos θ sin φ |
| cos θ               0             −v sin θ   |
= −v² sin θ    (1.63)

Therefore we have for the density of the modulus of the particle velocity

p(v) ≡ ∫(0..2π) dφ ∫(0..π) dθ v² sin θ (m/2πkT)^(3/2) exp{−m v²/2kT}
     = 4π v² (m/2πkT)^(3/2) exp{−m v²/2kT}    (1.66)
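The speed density of equ. 1.66 can be verified by quadrature as well (again in reduced units m = kT = 1, our choice): it is normalized and yields ⟨v²⟩ = 3kT/m, i.e. the familiar equipartition value kT/2 per degree of freedom.

```python
import math

def p_speed(v, m=1.0, kT=1.0):
    """Equ. 1.66: Maxwell-Boltzmann density of the speed modulus."""
    a = m / (2.0 * kT)
    return 4.0 * math.pi * (a / math.pi) ** 1.5 * v * v * math.exp(-a * v * v)

# midpoint quadrature on [0, 12]; the tail beyond is negligible
h, vmax = 1e-3, 12.0
norm = mom2 = 0.0
for i in range(int(vmax / h)):
    v = (i + 0.5) * h
    w = p_speed(v) * h
    norm += w
    mom2 += v * v * w
```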

1.4 Problems for Chapter 1

EXERCISES:

1.1 Entropy and irreversibility: (a) Two heat reservoirs are at temperatures T1 and T2 > T1. They are connected by a metal rod that conducts the heat Q per unit time from 2 to 1. Do an entropy balance to show that this process is irreversible. (b) Using the specific values T1 = 300 K, T2 = 320 K and Q = 10 J, calculate the entropy increase per second.

1.2 Reversible, isothermal expansion: Compute the entropy balance for the experiment described (or shown) in the lecture. Discuss the role of the ideal gas assumption: is it necessary / unnecessary / convenient?

1.3 Maxwell square: Draw up Maxwell's square and use it to complete the following equations (in terms of derivatives of A and E): (a) G = A ± ..., (b) A = E ± ..., (c) P = ..., (d) dG = V dP − ...

1.4 Random directions in 2D: Using your favourite random number generator, sample a number K ≈ 50−100 of angles φi (i = 1, ..., K) equidistributed in [0, 2π]. Interpret cos φi and sin φi as the vx,y-components of a randomly oriented velocity vector vi with |vi| = 1. (a) Draw a histogram of the empirical probability (i.e. event frequency) of p(vx). (b) Normalize the histogram such that a sum over all bins equals one. Compare the shape of your p(vx) to the bottom right histogram in Applet Stadium.

1.5 Binomial distribution: Suggest an experiment with given p and n, using estimated values of the necessary experimental parameters. Perform the experiment N times and draw an empirical frequency diagram.

1.6 Density fluctuations in air: (a) Calculate the mean number of molecules (not discerning between N2 and O2) that are to be found at a pressure of 10 Pa in a cube with a side length of the wave length of light (≈ 6·10⁻⁷ m). What is the standard deviation of the particle number, both absolutely and relative to the mean particle number? (Air is to be treated as an ideal gas at normal temperature T0 = 273.15 K.) (b) Compute the value of the probability density of the event N(ΔV) = k ≈ ⟨N⟩, i.e. the probability of finding an integer number k next to the mean number of particles in the sample volume. (Hint: Don't attempt to evaluate factorials of large numbers, such as appear in the binomial distribution pn; rather, use that distribution which resembles pn when n becomes large.) What is the probability of finding only 95 percent of the mean particle number in the sample volume?

1.7 Multinomial distribution: A volume V is divided into m = 5 equal-sized cells. The N = 1000 particles of an ideal gas may be allotted randomly to the cells. a) What is the probability of finding in a snapshot of the system the partitioning {N1, ..., N5}? Explain the formula (compare equ. 1.39). b) Demonstrate numerically that the partitioning with the greatest probability is given by Ni = N/m = 200. For example, compare the situations (201, 199, ...), (202, 198, ...), (202, 199, ...), (204, ...), and (205, 195, ...) to the most probable one. c) (1 bonus point) Prove analytically that Ni = 200 is the most probable distribution. Hint: maximize the function f(k) ≡ log pn(k) under the condition Σi ki = n.

1.8 Transformation of a distribution density: Repeat the calculation of Example 3 for the two-dimensional case, i.e. v ≡ {vx, vy} and w ≡ {v, φ}. Draw the distribution density p2D(v).

TEST YOUR UNDERSTANDING OF CHAPTER 1:

1.1 Thermodynamic concepts:
– What does the entropy balance tell us about the reversibility/irreversibility of a process? Demonstrate, using a specific example.
– Describe the process of thermal interaction between two bodies. When will the energy flow stop?
– Which thermodynamic potential is suited for the description of isothermal-isochoric systems?

1.2 Model systems:
– Describe 2-3 model systems of statistical mechanics.
– What quantities are needed to completely describe the momentary state of a classical ideal gas of N particles?
– What quantities are needed for a complete specification of the state of a quantum ideal gas?

1.3 Statistical concepts:
– Explain the concepts "distribution function" and "distribution density".
– What are the moments of a distribution? Give a physically relevant example.

1.4 Equal a priori probability: Explain the concept and its significance for statistical mechanics.
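Exercise 1.7b can be attacked directly with log-factorials, which avoids the huge numbers the hint in 1.6 warns about. The sketch below (plain Python; the helper names are our own, not part of the course applets) compares the multinomial probability of a slightly skewed partitioning of N = 1000 particles over m = 5 cells with that of the uniform partitioning (200, 200, 200, 200, 200).

```python
from math import lgamma, exp, log

def log_prob(partition, m=5):
    """log of the multinomial probability P(n) = N!/(n1!...nm!) * (1/m)^N."""
    N = sum(partition)
    lp = lgamma(N + 1) - sum(lgamma(n + 1) for n in partition)
    return lp + N * log(1.0 / m)

uniform = (200, 200, 200, 200, 200)
trial   = (201, 199, 200, 200, 200)
# The uniform partitioning is the most probable one; the ratio
# P(trial)/P(uniform) works out analytically to 200/201.
print(exp(log_prob(trial) - log_prob(uniform)))  # ≈ 0.995
```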

Chapter 2

Elements of Kinetic Theory

Figure 2.1: Ludwig Boltzmann

In a dilute gas the molecules are in free, linear flight most of the time; just occasionally their flight will be interrupted by a collision with a single other particle, which in general may be treated as a classical elastic collision. The result of such an event is a change in the speeds and directions of both partners. So it should be possible to derive, from a statistical treatment of such binary collisions, predictions on the properties of gases.

This idea was the starting point from which Ludwig Boltzmann proceeded to develop his "Kinetic Theory of Gases". First he defined a distribution density f such that f(r, v, t) dr dv denotes the number of particles which at time t are situated at r and have a velocity v. The 6-dimensional space of the variables {r, v} has later been called "µ space" (the term is probably due to Paul Ehrenfest, who in his classical book on Statistical Mechanics refers to µ-space, or "molecular" space, as opposed to Γ, or "gas" space; see Chapter 3 below). The evolution in time of the function f(r, v, t) is then described by Boltzmann's Transport Equation. We will shortly sketch the derivation and shape of this important formula.

A particularly important result of Boltzmann's equation is its stationary solution, i.e. the function that solves the equation in the long-time limit t → ∞. It turns out that this equilibrium distribution in µ space may be derived without explicitly solving the transport equation itself. The strategy used to do so, called the method of the most probable distribution, will be encountered again in other parts of Statistical Mechanics.

2.1 Boltzmann's Transport Equation

With his "Kinetic Theory of Gases" Boltzmann undertook to explain the properties of dilute gases by analysing the elementary collision processes between pairs of molecules. A thorough treatment of this beautiful achievement is beyond the scope of our discussion. But we may sketch the basic ideas used in its derivation. The evolution of the distribution density in µ space is described by Boltzmann's transport equation.

• If there were no collisions at all, the swarm of particles in µ space would flow according to

f(r + v dt, v + (K/m) dt, t + dt) = f(r, v, t)   (2.1)

where K denotes an eventual external force acting on particles at point (r, v). The time derivative of f is therefore, in the collisionless case,

[∂/∂t + v·∇r + (K/m)·∇v] f(r, v, t) = 0   (2.2)

where

v·∇r f ≡ vx ∂f/∂x + vy ∂f/∂y + vz ∂f/∂z   (2.3)

and

(K/m)·∇v f ≡ (1/m) [Kx ∂f/∂vx + Ky ∂f/∂vy + Kz ∂f/∂vz]   (2.4)

To gather the meaning of equation 2.2, consider the collisionless, free flow of gas particles through a thin pipe: there is no force (i.e. no change of velocities), and µ-space has only two dimensions, x and vx (see Figure 2.2).

Figure 2.2: A very simple µ-space

In this simple µ-space the distribution density obeys

∂f/∂t + vx ∂f/∂x = 0   (2.5)

To see this, note that at time t a differential "volume element" at (x, vx) contains, on the average, f(x, vx) dx dvx particles. Count the particles entering during the time span dt from the left (assuming vx > 0),

nin = f(x − vx dt, vx) dx dvx   (2.6)

and those leaving towards the right,

nout = f(x, vx) dx dvx   (2.7)

The temporal change of f(x, vx) is then given by

∂f(x, vx)/∂t = (nin − nout)/(dt dx dvx)   (2.8)

             = [f(x − vx dt, vx) − f(x, vx)]/dt = [f(x, vx) − vx dt ∂f(x, vx)/∂x − f(x, vx)]/dt   (2.9)

             = −vx ∂f(x, vx)/∂x   (2.10)

The relation

∂f/∂t + vx ∂f/∂x = 0   (2.11)

is then easily generalized to the case of a non-vanishing force,

∂f/∂t + vx ∂f/∂x + (Kx/m) ∂f/∂vx = 0   (2.12)

(this would engender an additional vertical flow in the figure), and to six instead of two dimensions (see equ. 2.2). All this is for collisionless flow only.

• In order to account for collisions a term (∂f/∂t)coll dt is added on the right hand side:

[∂/∂t + v·∇r + (K/m)·∇v] f(r, v, t) = (∂f/∂t)coll   (2.13)

The essential step then is to find an explicit expression for (∂f/∂t)coll. Boltzmann solved this problem under the simplifying assumptions that
– only binary collisions need be considered (dilute gas);
– the influence of container walls may be neglected;
– the influence of the external force K (if any) on the rate of collisions is negligible;
– velocity and position of a molecule are uncorrelated (assumption of molecular chaos).

The effect of the binary collisions is expressed in terms of a "differential scattering cross section" σ(Ω) which describes the probability density for a certain change of velocities,

{v1, v2} → {v1', v2'}   (2.14)

(Ω thus denotes the relative orientation of the vectors (v2 − v1) and (v2' − v1')). The function σ(Ω) depends on the intermolecular potential and may be either calculated or measured. Under all these assumptions, the Boltzmann equation takes on the following form:

[∂/∂t + v1·∇r + (K/m)·∇v1] f1 = ∫dΩ ∫dv2 σ(Ω) |v1 − v2| (f1' f2' − f1 f2)   (2.15)
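A quick numerical sanity check of the collisionless equation: along the characteristics x(t) = x0 + vx t the density is constant, so f(x, vx, t) = f0(x − vx t, vx) solves it for any initial density f0. The Python sketch below (the Gaussian initial density is our own arbitrary choice) verifies this with central finite differences.

```python
import math

def f0(x, vx):
    # Arbitrary smooth initial density in the two-dimensional µ-space.
    return math.exp(-x*x) * math.exp(-vx*vx)

def f(x, vx, t):
    # Free-streaming solution: constant along the characteristics x - vx*t.
    return f0(x - vx*t, vx)

# Check  df/dt + vx df/dx = 0  at a sample point via central differences.
x, vx, t, h = 0.3, 0.7, 1.2, 1e-5
df_dt = (f(x, vx, t + h) - f(x, vx, t - h)) / (2*h)
df_dx = (f(x + h, vx, t) - f(x - h, vx, t)) / (2*h)
residual = df_dt + vx*df_dx
print(residual)  # ≈ 0
```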

where f1 ≡ f(r, v1, t), f1' ≡ f(r, v1', t) etc. This integrodifferential equation describes, under the given assumptions, the spatio-temporal behaviour of a dilute gas. Since f has up to six arguments it is difficult to visualize; but there are certain moments of f which represent measurable averages, such as the local particle density in 3D space, whose temporal change can thus be computed.

Given some initial density f(r, v, t = 0) in µ-space, the solution function f(r, v, t) tells us how this density changes over time. The initial distribution density f(r, v, 0) may be of arbitrary shape; thus the greatest importance of this equation is its ability to describe also non-equilibrium processes. To consider a simple example, we may have all molecules assembled in the left half of a container – think of a removable shutter – and at time t = 0 make the rest of the volume accessible to the gas particles:

f(r, v, 0) = A Θ(x0 − x) f0(v)   (2.16)

where f0(v) is the (Maxwell-Boltzmann) distribution density of particle velocities, and Θ(x0 − x) denotes the Heaviside function. The subsequent expansion of the gas into the entire accessible volume, and thus the approach to the stationary final state (= equilibrium state) in which the particles are evenly distributed over the volume, may be seen in the solution f(r, v, t) of Boltzmann's equation.

Chapman and Enskog developed a general procedure for the approximate solution of Boltzmann's equation. For certain simple model systems such as hard spheres their method produces predictions for f(r, v, t) (or its moments) which may be tested in computer simulations. Another, more modern approach to the numerical solution of the transport equation is the "Lattice Boltzmann" method, in which the continuous variables r and v are restricted to a set of discrete values; the time change of these values is then described by a modified transport equation which lends itself to fast computation.

Simulation: The power of Boltzmann's equation
– Irreversible expansion of a hard disc gas
– Presentation of the distribution densities in r-space and in v-space
[Code: BM]

The equilibrium distribution f0(r, v) is that solution of Boltzmann's equation which is stationary, meaning that

∂f(r, v, t)/∂t = 0   (2.17)

It is also the limiting distribution for long times, t → ∞. It may be shown that this equilibrium distribution is given by

f0(r, v) = ρ(r) [m/2πkT(r)]^(3/2) exp{−m[v − v0(r)]²/2kT(r)}   (2.18)

where ρ(r) and T(r) are the local density and temperature, respectively. In case the temperature is also independent of position, and if the gas as a whole is not moving (v0 = 0), then

f(r, v) = ρ(r) f0(v),  with  f0(v) = (m/2πkT)^(3/2) e^(−mv²/2kT)   (2.19)

This is the famous Boltzmann distribution. If there are no external forces such as gravity or electrostatic interactions we have ρ(r) = ρ0 = N/V, and thus f(r, v) = ρ0 f0(v). The Boltzmann distribution may be derived also in different ways, without requiring the explicit solution of the transport equation. ⇒ See next section.

2.2 The Maxwell-Boltzmann distribution

We want to apply statistical procedures to the swarm of points in Boltzmann's µ space. To do this we first divide that space in 6-dimensional cells of size (Δx)³(Δv)³, labelling them by j (= 1, ..., m). There is a characteristic energy Ej ≡ E(rj, vj) pertaining to each such cell; in the ideal gas case this energy is simply Ej = mvj²/2, where vj is the velocity of the particles in cell j.

Now we distribute the N particles over the m cells, such that nj particles are allotted to cell no. j. For good statistics we require that N >> m >> 1. A specific m-tuple of population numbers n ≡ {nj, j = 1, ..., m} will here be called a partitioning. (If you prefer to follow the literature, you may refer to it as a distribution.) In a closed system with total energy E the population numbers nj must fulfill the condition Σj nj Ej = E. The other condition is, of course, the conservation of the number of particles, Σj nj = N. Apart from these two requirements the allotment of particles to cells is completely random. We may understand this prescription as the rule of a game of fortune, and with the aid of a computer we may actually play that game!

Playing Boltzmann's game [Code: LBRoulette]

Instead of playing the game we may calculate its outcome by probability theory. Each partitioning may be performed in many different ways, since the labels of the particles may be permuted without changing the population numbers in the cells. This means that many specific allotments pertain to a single partitioning. Assuming that the allotments are elementary events of equal probability, we simply count the number of possible allotments to calculate the probability of the respective partitioning. The number of possible permutations of particle labels for a given partitioning n ≡ {nj, j = 1, ..., m} is

P(n) = N! / (n1! n2! ... nm!)   (2.20)

(In combinatorics this is called permutations of m elements – i.e. cell numbers – with repetition.)
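Boltzmann's game can indeed be played on a computer. The sketch below (plain Python, a stand-in for the course's LBRoulette applet, and ignoring the energy constraint for simplicity) allots N = 1000 particles at random to m = 5 cells and shows that the observed population numbers cluster around N/m.

```python
import random

random.seed(1)
N, m = 1000, 5
counts = [0] * m
for _ in range(N):
    # Each particle is thrown into one of the m cells with equal probability.
    counts[random.randrange(m)] += 1
print(counts)  # every entry close to N/m = 200
```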

Since each allotment is equally probable, the most probable partitioning is the one allowing for the largest number of allotments. (See the previous discussion of the multinomial distribution.) Thus we want to determine that specific partitioning n* which renders the expression 2.20 a maximum, given the additional constraints

Σ_{j=1..m} nj − N = 0   and   Σ_{j=1..m} nj Ej − E = 0   (2.21)

Since the logarithm is a monotonically increasing function we may scout for the maximum of ln P instead of P – this is mathematically much easier. In many physically relevant cases the probability of that optimal partitioning is very much larger than that of any other, meaning that we can restrict the further discussion to this one partitioning.

The proven method for maximizing a function of many variables, allowing for additional constraints, is the variational method with Lagrange multipliers (see Problem 2.1). The variational equation

δ [ln P − (α Σj nj + β Σj Ej nj)] = 0   (2.22)

with the undetermined multipliers α and β leads us, using the Stirling approximation for ln P, to

− ln nj* − α − βEj = 0   (2.23)

Thus the optimal partitioning n* is given by

nj* = e^(−α−βEj),  j = 1, ..., m   (2.24)

Consequently

f(rj, vj) ∝ nj* = A exp{−βE(rj, vj)}   (2.25)

which in the absence of external forces will be homogeneous with respect to r. In particular, we find for a dilute gas

f(vj) = B exp{−β mvj²/2}   (2.26)

Using the normalizing condition ∫f(v) dv = 1, or

B ∫ 4πv² e^(−βmv²/2) dv = 1   (2.27)

we find B = (βm/2π)^(3/2) and therefore

f(v) = (βm/2π)^(3/2) e^(−βmv²/2)   (2.28)

Now we take a closer look at the quantity β, which we introduced at first just for mathematical convenience. The mean kinetic energy of a particle is given by

⟨E⟩ ≡ ∫ dv (mv²/2) f(v) = 3/(2β)   (2.29)

But we will learn in Section 2.3 that the average kinetic energy of a molecule is related to the macroscopic observable quantity T (temperature) according to ⟨E⟩ = 3kT/2; therefore we have β ≡ 1/kT. Thus we may write the distribution density of the velocity in the customary format

f(v) = (m/2πkT)^(3/2) e^(−mv²/2kT)   (2.30)
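The identity ⟨E⟩ = 3/(2β) of equ. 2.29 is easy to confirm by direct numerical integration. The sketch below (plain Python with the midpoint rule; β and m are set to arbitrary test values) evaluates both the normalization and the mean kinetic energy of the density 2.28.

```python
import math

beta, m = 2.0, 1.5   # arbitrary test values

def f(v):
    """Maxwell density in velocity space, equ. 2.28 (as a function of |v|)."""
    return (beta*m/(2.0*math.pi))**1.5 * math.exp(-beta*m*v*v/2.0)

# Midpoint-rule integration over the modulus v, with weight 4*pi*v^2.
n, vmax = 20000, 20.0
h = vmax/n
norm   = sum(4*math.pi*v*v*f(v)           for v in (h*(i+0.5) for i in range(n))) * h
energy = sum(4*math.pi*v*v*(m*v*v/2)*f(v) for v in (h*(i+0.5) for i in range(n))) * h
print(norm, energy, 3/(2*beta))  # norm ≈ 1, energy ≈ 0.75
```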

Figure 2.3: Maxwell-Boltzmann distribution ("Maxwell-Boltzmann-Dichte")

This density in velocity space is commonly called Maxwell-Boltzmann distribution density. So we have determined the population numbers nj* of the cells in µ space by maximizing the number of possible allotments, neglecting all other possible but improbable distributions. It is possible to demonstrate that the partitioning we have found is not just the most probable but by far the most probable one. In other words, any noticeable deviation from this distribution of particle velocities is extremely improbable (see above: multinomial distribution). This makes for the great practical importance of the MB distribution: it is simply the distribution of velocities in a many particle system which we may assume to hold.

The same name is also used for a slightly different object, namely the distribution density of the modulus of the particle velocity (the "speed"), which may easily be derived as (see equ. 1.66)

f(|v|) = 4πv² f(v) = 4π (m/2πkT)^(3/2) v² e^(−mv²/2kT)   (2.31)

As we can see from the figure, f(|v|) is a skewed distribution. Its maximum is located at

ṽ = √(2kT/m)   (2.32)

This most probable speed is not the same as the mean speed,

⟨v⟩ ≡ ∫₀^∞ dv v f(|v|) = √(8kT/πm)   (2.33)

or the root mean squared (r.m.s.) velocity,

v_rms ≡ √⟨v²⟩ = √(3kT/m)   (2.34)

Example: The mass of an H2 molecule is m = 3.346·10⁻²⁷ kg, and at room temperature (appr. 300 K) we have kT = 1.380·10⁻²³ · 300 = 4.141·10⁻²¹ J; therefore the most probable speed of such a molecule under equilibrium conditions is ṽ = 1.573·10³ m/s.
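The three characteristic speeds of equs. 2.32–2.34 are easily evaluated. The sketch below redoes the H2 example and also checks the fixed ordering ṽ < ⟨v⟩ < v_rms that follows from the formulas.

```python
import math

m  = 3.346e-27          # mass of an H2 molecule [kg]
kT = 1.380e-23 * 300.0  # thermal energy at ~300 K [J]

v_mp  = math.sqrt(2.0*kT/m)            # most probable speed, equ. 2.32
v_avg = math.sqrt(8.0*kT/(math.pi*m))  # mean speed, equ. 2.33
v_rms = math.sqrt(3.0*kT/m)            # r.m.s. speed, equ. 2.34
print(v_mp, v_avg, v_rms)  # ≈ 1573, 1775, 1927 m/s
```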

2.3 Thermodynamics of dilute gases

The pressure of a gas in a container is produced by the incessant drumming of the gas molecules upon the walls. At each such wall collision – say, against the right wall of a cubic box – the respective momentum component of the molecule, px ≡ mvx, is reversed. The momentum transferred to the wall is thus Δpx = 2mvx. The force acting on the unit area F of the wall is then just the time average of the momentum transfer:

P ≡ ⟨K⟩/F ≡ (1/Ft) Σ_{k=1..M(t)} Δpx^(k) ≡ (1/Ft) Σ_{k=1..M(t)} 2m vx^(k)   (2.35)

where M(t) is the number of wall impacts within the time t. To obtain a theoretical prediction for the value of the pressure we argue as follows: the number of particles having x-velocity vx and impinging on the right wall per unit time is obviously proportional to vx. Thus we find

P = (N/V) ∫_{vx>0} dv 2m vx² f(v)   (2.36)

Inserting the Boltzmann density for f(v) and performing the integrations we have

P = (2/3) (N/V) ⟨E⟩   (2.37)

From thermodynamics we know that the pressure of a dilute gas at density ρ ≡ N/V and temperature T is P = ρkT. Comparing this to the above formula we find for the mean energy of a molecule ⟨E⟩ = 3kT/2, and the parameter β (which was introduced in connection with the equilibrium density in velocity space) turns out to be just the inverse of kT.

Simulation: Thermodynamics of dilute gases
3-dimensional system of hard spheres in a cubic box. The pressure is determined
– from the collisions of the particles with the container walls (Pex)
– from equation 2.37 (Pth)
[Code: Hspheres]

Where do we stand now? Just by statistical reasoning we have actually arrived at a prediction for the pressure – a thermodynamic quantity! Having thus traversed the gap to macroscopic physics we take a few steps further. The further explication of the dilute gas thermodynamics is easy. Taking the formula for the internal energy,

E ≡ N⟨E⟩ = 3NkT/2   (2.38)

and using the First Law,

dQ = dE + P dV   (2.39)
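The integration leading from equ. 2.36 to P = ρkT can be checked numerically. Per particle, P/ρ = ∫_{vx>0} 2m vx² f1(vx) dvx = m⟨vx²⟩ = kT, where f1 is the one-dimensional Maxwell density of a single cartesian component. The sketch below (arbitrary values for m and kT) verifies this.

```python
import math

m, kT = 1.7, 0.9   # arbitrary test values

def f1(vx):
    """One-dimensional Maxwell density (a single cartesian component)."""
    return math.sqrt(m/(2.0*math.pi*kT)) * math.exp(-m*vx*vx/(2.0*kT))

# Midpoint rule over vx > 0; each wall impact transfers 2*m*vx.
n, vmax = 20000, 30.0
h = vmax/n
pressure_per_density = sum(2.0*m*vx*vx*f1(vx)
                           for vx in (h*(i+0.5) for i in range(n))) * h
print(pressure_per_density, kT)  # the two agree
```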

we immediately find that amount of energy that is needed to raise the temperature of a gas by 1 degree – a.k.a. the heat capacity CV:

CV ≡ (dQ/dT)_V = dE/dT = (3/2) Nk   (2.40)

2.4 Transport processes

A system is in "equilibrium" if its properties do not change spontaneously over time. Again considering the dilute gas, and assuming there are no external fields, the densities are

ρE ≡ E/V = (1/V) Σ_{i=1..N} mvi²/2,   ρp ≡ p/V = (1/V) Σ_{i=1..N} mvi,   and   ρm ≡ Nm/V   (2.41)

i.e. the energy contained in a volume element dV divided by that dV, and correspondingly for momentum and mass. These densities, which refer to the three conserved quantities of mechanics, play a special role in what follows. (In case there are external fields – such as gravity – the material properties may vary in space; otherwise they will be independent also of position: the system is not only in equilibrium but also homogeneous.) If a physical quantity – such as mass, energy, or momentum – is conserved, any local change can only be achieved by a flow into or out of the space region under consideration.

By looking more closely we would see that these local densities are in fact not quite constant; rather, they will fluctuate about their mean. In other words, there will be a spontaneous waxing and waning of local gradients of those densities. Also, the experimenter may intervene to create a gradient. For instance, one might induce in a horizontal layer of a gas or liquid a certain x-velocity ux ≠ 0, and thereby a momentum px. The difference between the momenta in adjacent layers then defines a gradient of the momentum density. If we left the system to itself this gradient would decrease and eventually vanish: a certain amount of x-momentum will flow against the gradient – in our case downwards – so as to reestablish equilibrium. The property governing the speed of this decrease is called viscosity. We are then speaking of transport processes whose speeds are governed by the respective transport coefficients – such as the viscosity η, the heat conductivity λ (for the energy transport) and the diffusion constant D (for mass transport).

In real experiments aimed at determining these transport coefficients the respective gradient is artificially maintained; by a careful setup we can keep at least these flows and the local densities and gradients constant in time. This is called a stationary non-equilibrium situation.

DEFINITION OF TRANSPORT COEFFICIENTS

In order to define the transport coefficients η, λ and D we consider the basic experimental setup.

• Viscosity: To measure the coefficient of momentum transport η we generate a laminar flow by placing a gas or fluid layer between two horizontal plates and moving the upper plate with constant velocity u0 to the right. In this manner we superimpose a systematic x-velocity onto the random thermal motion of the molecules. The magnitude of the thermal speed is of the order 10³ m/s; by adding a "shear velocity" ux(z) of some centimeters per second the local equilibrium is not considerably disturbed. Thus we may assume that we have still a Maxwell-Boltzmann distribution of velocities at any point in the fluid, with the same value of ⟨E⟩ (i.e. kT) everywhere. Yet by imposing a velocity gradient we have slightly perturbed the equilibrium; thus there will be a continuing flow of momentum. The amount of momentum flowing down through a unit of area per unit of time is called the flow density jp. In a first (linear) approximation this flow will be proportional to the imposed velocity gradient, and the coefficient of proportionality is, by definition, the transport coefficient η:

jp = −η dux(z)/dz   (2.42)

This defining equation for η is often called Newton's law of viscous flow; the parameter η is known as viscosity.

• Thermal conductivity: To generate a gradient of energy density we may place the substance between two heat reservoirs with different temperatures. If we use the same simple geometry as for the viscosity then the temperature (or energy) gradient will have only the one non-vanishing component dT/dz, which accordingly produces a flow of energy in the counter direction. The coefficient λ of thermal conduction is defined by (Fourier's law)

jE = −λ dT(z)/dz   (2.43)

• Diffusion constant: Even in a homogeneous gas (or a fluid, or a solid, for that matter) the individual molecules will alter their positions in the course of time. It is possible to "tag" some of the particles – for instance, by giving them a radioactive nucleus. The density of the marked species – call it 1 – may then once more have a gradient that will be balanced by a particle flow (Fick's law):

j1 = −D dρ1(z)/dz   (2.44)

This process is known as self diffusion.

MEAN FREE PATH

We will presently attempt to calculate the above defined transport coefficients from the microscopic dynamics of the molecules. In a simple kinetic treatment the elementary process determining the various flows is the free flight of a gas particle between two collisions with other molecules. Thus the first step should be an estimation of the average path length covered by a particle between two such collisions.

Let P(x) be the – as yet unknown – probability that a particle meets another one before it has passed a distance x. Then 1 − P(x) is the probability for the molecule to fly a distance x without undergoing a collision. But the probability of an encounter within an infinitesimal path element dx is given by p = ρσ²π dx, where σ is the diameter of the particles. (p equals the fraction of the "target area" covered by other particles.) Thus the differential probability for a collision between x and x + dx is

dP(x) = [1 − P(x)] ρσ²π dx   (2.45)

which upon integration yields

P(x) = 1 − e^(−ρσ²πx)   (2.46)

The probability density p(x) ≡ dP(x)/dx for a collision within that interval is

p(x) = ρσ²π e^(−ρσ²πx)   (2.47)

The mean free path is then the first moment of this density:

l ≡ ⟨x⟩ ≡ ∫₀^∞ dx x p(x) = 1/(ρσ²π)   (2.48)
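The chain 2.45–2.48 can also be checked by simulation: drawing free paths from the density p(x) = ρσ²π e^(−ρσ²πx) and averaging should reproduce l = 1/(ρσ²π). A short sketch (plain Python; ρ and σ are taken from the H2 example discussed next):

```python
import math, random

random.seed(7)
rho, sigma = 2.69e25, 1.0e-10     # particle density [1/m^3] and diameter [m]
rate = rho * sigma**2 * math.pi   # collision probability per unit length

# Inverse-transform sampling of the exponential density p(x).
n = 100_000
paths = [-math.log(1.0 - random.random()) / rate for _ in range(n)]
l_empirical = sum(paths) / n
l_theory = 1.0 / rate
print(l_theory, l_empirical)  # both ≈ 1.18e-6 m
```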

Example: The diameter of an H2 molecule is σ ≈ 10⁻¹⁰ m (= 1 Ångström). According to Loschmidt (and the later but more often quoted Avogadro) a volume of 22.4·10⁻³ m³ contains, at standard conditions, 6.022·10²³ particles; the particle density is thus ρ = 2.69·10²⁵ m⁻³, and the mean free path is l = 1.18·10⁻⁶ m.

Simulation: Collision rate in a dilute gas
3-dimensional system of hard spheres in a cubic box:
– Compare the theoretical collision rate Zth with the empirical one Zex
[Code: Hspheres]

ESTIMATING THE TRANSPORT COEFFICIENTS

In the case of a dilute gas we can actually find expressions for the transport coefficients. Assuming once more the basic geometric setting used in most experiments, we suppose that the gradient of the respective conserved quantity has only a z component, leading to a flow that is also directed along the z axis.

• Viscosity: In the course of the random (thermal) motion of the particles the small systematic term ux(z) is carried along. On the average, an equal number of particles will cross the interface between two horizontal layers from below and above. Each of the molecules carries that systematic ux(z ± l) which pertains to the layer of its previous collision. Since the upper layer has a larger ux (assuming dux/dz > 0), it will gradually slow down, while the lower layer will take up speed. The flow of momentum due to this mechanism can be estimated as follows: the mean number of particles crossing the unit of area from below and from above, respectively, is ρ⟨v⟩/6. The momentum flow density is therefore

jp = (1/6) ρ⟨v⟩ [m u(z − l) − m u(z + l)] ≈ (1/6) ρ⟨v⟩ (−2m (du/dz) l)   (2.49)

Comparing this to the defining equation for the viscosity η we find

η = (1/3) ρ l ⟨v⟩ m   (2.50)

It is a somewhat surprising consequence of this formula that, since l ∝ 1/ρ, the viscosity η is apparently independent of density! Maxwell, who was the first to derive this result, had to convince himself of its validity by accurate experiments on dilute gases.
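Putting the pieces together, the sketch below combines the mean free path of equ. 2.48, the mean speed of equ. 2.33 and equ. 2.50 to estimate the viscosity of H2 at 300 K, using the input numbers from the examples above. As expected of the simple kinetic treatment, the result should be read as an order-of-magnitude estimate only.

```python
import math

m     = 3.346e-27   # H2 molecular mass [kg]
sigma = 1.0e-10     # molecular diameter [m]
rho   = 2.69e25     # particle density at standard conditions [1/m^3]
kT    = 1.380e-23 * 300.0

l     = 1.0 / (rho * sigma**2 * math.pi)   # mean free path, equ. 2.48
v_avg = math.sqrt(8.0*kT/(math.pi*m))      # mean speed, equ. 2.33
eta   = rho * l * v_avg * m / 3.0          # viscosity estimate, equ. 2.50
print(l, eta)  # l ≈ 1.18e-6 m
```

Note that ρ cancels against the 1/ρ in l, so the printed η would be unchanged at any other density – Maxwell's surprising result.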

• Thermal conductivity: A similar consideration as in the case of momentum transport yields, with jE = −λ dT/dz,

λ = (1/3) ρ l ⟨v⟩ m cV   (2.51)

where cV ≡ CV/M is the specific heat (CV: molar heat capacity; M: molar mass).

• Diffusion: Again we assume a gradient of the relevant density – in this case, the particle density of some species 1 – along the z direction. Writing j1 = −D dρ1/dz and using the same reasoning as before we find

D = (1/3) l ⟨v⟩   (2.52)

2.5 Just in case ...

... someone has forgotten how to find the extremum of a many-variable function under additional constraints. To remind you: the method of undetermined Lagrange multipliers waits to be applied here. Let f(x, y) = x² + y² be the given function; of course, this is the equation of a paraboloid with its tip – or minimum – at the origin. Further, let g(x, y) ≡ x − y − 1 = 0 be the constraint equation, meaning that we don't search for the global minimum but for the minimum along the line y = x − 1. There are two ways to go about it.

The simple but inconvenient way is to substitute y = x − 1 in f(x, y), thus rendering f a function of x only. Equating the derivative df/dx to zero we find the locus of the conditional minimum, x = 1/2 and y = −1/2. The process of substitution is, in general, tedious.

A more elegant method is this: defining an (undetermined) Lagrange multiplier α, find the minimum of the function f(x, y) − α g(x, y) according to

∂f/∂x − α ∂g/∂x = 0
∂f/∂y − α ∂g/∂y = 0

(Shorthand notation: δf − α δg = 0, "Variation of f".) In our case

2x − α = 0   (2.53)
2y + α = 0   (2.54)

Eliminating α we find the solution without substituting anything,

x + y = 0   (2.55)

and from g(x, y) = 0:

(x, y) = (1/2, −1/2)   (2.56)

2.6 Problems for Chapter 2

EXERCISES:

2.1 Variational method of Lagrange: Determine the minimum of the function f(x, y) = x² + 2y² under the condition (i.e. along the curve) g(x, y) = x + y − 6 = 0, by two different methods: a) by inserting y = 6 − x in f(x, y) and differentiating by x; b) using a Lagrange multiplier α and evaluating Lagrange's equations ∂f/∂x − α(∂g/∂x) = 0 and ∂f/∂y − α(∂g/∂y) = 0.
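Because the stationarity conditions of the worked example are linear, they can be solved mechanically. The sketch below (plain Python, no libraries assumed) follows equs. 2.53–2.56 step by step and then cross-checks the answer against the substitution method on a fine grid.

```python
# Lagrange conditions for f = x^2 + y^2 with g = x - y - 1 = 0:
#   2x - alpha = 0,  2y + alpha = 0,  x - y - 1 = 0.
# From the first two: x = alpha/2, y = -alpha/2; the constraint then
# gives alpha/2 + alpha/2 = 1, i.e. alpha = 1.
alpha = 1.0
x, y = alpha/2.0, -alpha/2.0
print(x, y)  # (0.5, -0.5), in agreement with equ. 2.56

# Cross-check against the substitution method: minimize f(x, x-1) on a grid.
best_x = min((x0/1000.0 for x0 in range(-2000, 2001)),
             key=lambda t: t*t + (t - 1.0)**2)
print(best_x)  # 0.5
```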

2.m. 2. – What is the mean free path in a gas of spheres with diameter σ? – How does the viscosity of a dilute gas depend on the density? 37 .3 Moments of the Maxwell-Boltzmann distribution: Verify the expressions for the most probable velocity.5 Transport properties: For nitrogen under standard conditions. y) and g(x. what is the respective formula for the speeds (absolute values of the velocity).s.4 Pressure in a dilute gas: Verify the formula P = (2/3)(N E /V ). the average and the r. estimate the mean free path and the transport coeﬃcients η. y) = 0.2 Pressure in an ideal gas: Derive the pressure equation of state for the ideal gas from simple kinetic theory.2 Method of the most probable distribution: Having understood the principle of Lagrange variation. f0 (|v|)? 2.For your better understanding sketch the functions f (x. 2. TEST YOUR UNDERSTANDING OF CHAPTER 2: 2. velocity. λ and D. (σ ≈ 4 · 10−10 m). Compare your result with the bottom right histogram in Applet LBRoulette.3 Transport coeﬃcients: – Write down the deﬁning equation for one of the three transport coeﬃcients.1 Maxwell-Boltzmann distribution: What is the formula for the distribution density f0 (v) of the molecular velocities in equilibrium. 2. reproduce the derivation of the Boltzmann distribution of energies (see text). 2.

Chapter 3

Phase space

Figure 3.1: Josiah Willard Gibbs

The Kinetic Theory explained in Chapter 2 is applicable only to gases and – with certain add-ons – to fluids. A more general and extremely powerful method for analyzing statistical-mechanical systems is based on a geometrical exploration of the high-dimensional phase space spanned by the complete set of microvariables.

3.1 Microscopic variables

The microscopic state of a model system is uniquely determined by the specification of the complete set of microscopic variables. The number of such variables is of the same order as the number of particles. In contrast, the macroscopic state is specified by a small number of measurable quantities such as net mass, energy, or volume. Generally a huge number of different microstates are compatible with one single macrostate, and it is a primary task of statistical mechanics to find out just how many microstates there are for a given set of macroscopic conditions.

Interpreting the microscopic variables as coordinates in a high-dimensional space we may represent a particular microstate as a point or vector in that space. The space itself is called Gibbs phase space; the state vector is often symbolized as Γ.

Consider a simple, classical many-particle system – say, an ideal gas, or a fluid made up of hard spheres or Lennard-Jones molecules. The state vector is then defined by all position and velocity coordinates:

Γ ≡ {r1 , . . . rN , v1 , . . . vN },  ri ∈ V,  vi,α ∈ (−∞, +∞)    (3.1)

The number of degrees of freedom (d.o.f.) is n = 6N (or, in 2 dimensions, 4N); the number of velocity d.o.f. is 3N (or 2N). It is often allowed – and always advantageous – to treat the subspaces Γr ≡ {ri} (position space) and Γv ≡ {vi} (velocity space) separately.

The representation of a microstate of the entire system by a single point in 6N-dimensional Gibbs space must not be mixed up with the treatment of Chapter 2. The µ-space defined there, introduced by Boltzmann, had only 6 dimensions {r, v}, and each particle in the system had its own representative point. Thus the microstate of a system of N particles corresponded to a swarm of N points {ri , vi}.

In the case of an ideal quantum gas it suffices to specify all quantum numbers to define a state:

Γ ≡ {(nix , niy , niz ), i = 1, . . . N }    (3.2)

Figure 3.2: Phase space of the 2D quantum gas

Model systems made up of N spins are specified by

Γ ≡ {σi , i = 1, . . . N }  with  σi = ±1    (3.3)

ENERGY SURFACE

We consider a collection of M systems, all having the same energy E, the same number of particles N, and – in the case of a gas or fluid – the same volume V. All microstates Γ compatible with these macroscopic conditions are then equally probable, that is, they have the same relative frequency within the ensemble of systems.
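The distinction between microstates and macrostates can be made concrete for the spin model of equ. 3.3. The following Python sketch (illustrative only, not from the course material) counts by brute force how many microstates Γ = {σi} are compatible with a given macroscopic energy:

```python
from itertools import product

def count_microstates(N, H, E):
    """Count spin microstates {sigma_i = +/-1} with energy E = -H * sum(sigma_i).

    Brute-force enumeration of all 2^N state vectors Gamma; feasible only for
    small N, but it makes the microstate/macrostate distinction concrete.
    """
    return sum(1 for sigmas in product((-1, 1), repeat=N)
               if -H * sum(sigmas) == E)

# N = 4 spins in a field H = 1: energy E = 0 means two spins up, two down.
print(count_microstates(4, 1, 0))    # -> 6, i.e. C(4,2) microstates
print(count_microstates(4, 1, -4))   # -> 1, the unique all-up state
```

The macrostate E = 0 is realized by six microstates, the fully magnetized one by a single microstate – exactly the kind of multiplicity counting that statistical mechanics is built on.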

The size M of the ensemble is assumed to be very large – indeed, to approach infinity. The assumption that all microstates that are compatible with the condition E = E0 have the same probability is one of the solid foundations Statistical Mechanics is built upon. It is called the "postulate of equal a priori probability". For a mathematically exact formulation of this axiom we use the phase space density, which is supposed to have the property

ρ(r, v) = ρ0 for E(r, v) = E0 ,  ρ(r, v) = 0 elsewhere    (3.4)

The condition E = const defines an (n − 1)-dimensional "surface" within the n-dimensional phase space of the system. The set of all points upon that "energy surface" is named microcanonical ensemble. To take an example, in the 3N-dimensional velocity space of a classical ideal gas the equation

Ekin ≡ (m/2) Σi vi² = const = E0    (3.5)

defines the (3N − 1)-dimensional surface of a 3N-sphere of radius r = √(2E0 /m).

A typical question to be answered by applying statistical methods to such an ensemble is this: what is the average of the squared particle velocity vi² over all – a priori equally probable – states on this surface?

ERGODICITY

Instead of considering an ensemble of systems let us now watch just one single system as it evolves in time according to the laws of mechanics. In a closed system the total energy will remain constant; the microstates visited by the system must therefore lie on the (n − 1)-dimensional energy surface defining the microcanonical ensemble. The ergodic hypothesis states that in the course of such a "natural evolution" of the system any permitted microstate will be reached (or closely approximated) with the same relative frequency. This hypothesis cannot be proven in general; in other words, it does not always hold. However, for many relevant systems such as gases or fluids under normal conditions it is quite true. In such cases the time t0 needed for a sufficiently thorough perambulation of the energy surface is in the range of 10⁻⁹ − 10⁻⁶ seconds, safely below the typical observation time in an experiment. Among those systems which we may characterize as "barely ergodic" or non-ergodic we have supercooled liquids and glasses. In such systems the state vector Γ remains trapped for long times in a limited region of the energy surface; it may then take seconds, days, or even centuries before other parts of the microcanonical surface are reached.

The ergodic hypothesis, if true, has an important practical consequence: for the calculation of mean values over the microstates on the energy surface it does not matter if we take the average over states randomly picked from a microcanonical ensemble, or over the successive states of one single, isolated system. This corollary of the ergodic hypothesis is often succinctly stated as

ensemble average = time average

The assumption of ergodicity enables us to support our theoretical arguments by "molecular dynamics" computer experiments. These are deterministic simulations reproducing the temporal evolution of a single isolated N-particle system. We will later touch upon another kind of computer simulation in which the state space is perambulated in a stochastic manner; it bears the suggestive name "Monte Carlo simulation".

ENERGY SHELL

In place of the strict condition E = E0 we will generally require the weaker condition E0 − ∆E ≤ E ≤ E0 to hold. In other words, the permitted states of the system are to be restricted to a thin "shell" at E ≈ E0.

Thus we assume for the density in phase space that

ρ(r, v) = ρ0 for E(r, v) ∈ [E0 − ∆E, E0 ],  ρ(r, v) = 0 elsewhere    (3.6)

For the classical ideal gas this requirement reads¹

E0 − ∆E ≤ Ekin ≡ (m/2) Σi vi² ≤ E0    (3.7)

For the ideal quantum gas we require

E0 − ∆E ≤ EN,n ≡ (h²/8mL²) Σi |ni |² ≤ E0    (3.8)

In a system of N non-interacting spins the energy shell is defined by −H Σi σi ∈ [E0 − ∆E, E0 ].

This more pragmatic requirement – which is made to keep the mathematics simpler – agrees well with the experimental fact that the exchange of energy between a system and its environment may be kept small, but can never be completely suppressed. Table 3.1 presents an overview of the state spaces and energy functions for various model systems.

Model                         Microvariables (i = 1 . . . N)   Dim. of phase space   Energy
Classical ideal gas           ri , vi                          6N                    Ekin ≡ (m/2) Σ vi²
Hard spheres in a box         ri , vi                          6N                    Ekin ≡ (m/2) Σ vi²
Hard discs in a box           ri , vi                          4N                    Ekin ≡ (m/2) Σ vi²
LJ molecules, periodic b.c.   ri , vi                          6N                    Ekin + Epot ≡ (m/2) Σ vi² + Σ_{i,j>i} ULJ (rij )
Linear/nonlinear molecules    ri , vi , ei , ωi                10N or 12N            Ekin + Epot ≡ (m/2) Σ vi² + (I/2) Σ ωi² + Epot
Harmonic crystal              qi , q̇i                          6N                    Ekin + Epot ≡ (m/2) Σ q̇i² + (f /2) Σ |qi − qi−1 |²
Ideal quantum gas             ni                               3N                    (h²/8mL²) Σ |ni |²
Spins                         σi                               N                     −H Σ σi

Table 3.1: State spaces of some model systems

PHASE SPACE VOLUME AND THERMODYNAMIC PROBABILITY

Let us now recall the fundamental assumption that all microstates having the same energy are equally probable; thus they represent elementary events as defined in Section 1. It follows then that a macrostate that allows for more microstates than others will occur more often – and thus will be more probable.

¹ In an MD simulation not only the total energy but also the total momentum P is conserved. Actually, the four conditions E ∈ (E0 − ∆E, E0 ) and P = const would define a (3N − 3)-dimensional spherical shell in velocity space. However, for sufficiently numerous particles (N ≥ 100) this detail may be neglected, and we can base the argumentation just on the condition E ≈ E0 (with a 3N-dimensional shell); this should be kept in mind when comparing simulation results to theoretical statements.

As an example, we may ask for the probability to find all molecules of an ideal gas in the left half of the vessel. All such microstates are located in a small part of the total permitted phase space shell [E − ∆E, E]; they make up a sub-ensemble of the complete microcanonical ensemble. Now, the probability of the macrostate "all particles in the left half of the container" is obviously equal to the size ratio of the subensemble and the total ensemble. We will have to compare the volume (in phase space) of the partial shell pertaining to the subensemble to the volume of the total shell. (We will see later that this ratio of the two volumes is (1/2^N ) and may thus be neglected – in accordance with experience and intuition.)

3.2 From Hyperspheres to Entropy

Some model systems have the convenient property that their energy may be written as a sum over the squares of their microvariables. The most popular example is the classical ideal gas with E ≡ Ekin = (m/2) Σ vi². The conditions E ≤ E0 or E ≈ E0 then describe a high-dimensional sphere or spherical shell, respectively, as may be seen from equs. 3.7, 3.8 and Table 3.1. Thus it will be worth the while to make ourselves acquainted with the geometrical properties of such bodies. It must be stressed, however, that the restriction to n-spheres is only a matter of mathematical convenience. It will turn out eventually that those properties of n-dimensional phase space which are of importance in Statistical Mechanics are actually quite robust, and not reserved to n-spheres. For instance, the n-rhomboid which pertains to the condition Σi σi ≈ const for simple spin systems could be (and often is) used instead.

VOLUME AND SURFACE OF HIGH-DIMENSIONAL SPHERES

In the following discussion it will be convenient to use, in addition to the sphere radius r0, a variable that represents the square of the radius: z0 ≡ r0². In the phase space of interactionless many-particle systems this quantity is related to an energy, and in an isolated system it is just the energy which is given as a basic parameter.

Attention: It should be kept in mind that the energy – and thus the square of the sphere radius – is an extensive quantity, meaning that it will increase as the number of degrees of freedom (or particles): z0 ∝ n. It would be unphysical and misleading to keep z0 at some constant value and at the same time raise the number of dimensions.

VOLUME

The volume of an n-dimensional sphere with radius r0 = √z0 is

Vn (z0 ) = Cn r0^n = Cn z0^{n/2}  with  Cn = π^{n/2} /(n/2)!    (3.9)

where (1/2)! = √π/2 and (x + 1)! = x! (x + 1). The following recursion is useful:

Cn+2 = [2π/(n + 2)] Cn    (3.10)

Examples: C1 = 2, C2 = π, C3 = π^{3/2} /(3/2)! = 4π/3, C4 = π²/2, C5 = 8π²/15, C6 = π³/6, . . . C12 = π⁶/720.

For large n we have, using Stirling's approximation for (n/2)!,

Cn ≈ (2πe/n)^{n/2} (1/√(πn))   or   ln Cn ≈ (n/2)(ln π + 1) − (n/2) ln(n/2) − ln √(πn)    (3.11)
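The recursion 3.10 and the closed form 3.9 for Cn can be cross-checked numerically. A short Python sketch (illustrative only, not part of the course applets); math.gamma supplies (n/2)! = Γ(n/2 + 1):

```python
import math

def C_closed(n):
    """C_n = pi^(n/2) / (n/2)!  via the Gamma function."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

def C_recursive(n):
    """Build C_n from C_1 = 2 or C_2 = pi using C_{n+2} = 2*pi*C_n/(n+2)."""
    c, k = (2.0, 1) if n % 2 else (math.pi, 2)
    while k < n:
        c, k = 2.0 * math.pi * c / (k + 2), k + 2
    return c

for n in (3, 4, 6, 12):
    print(n, C_closed(n), C_recursive(n))
# e.g. C_3 = 4*pi/3 ~ 4.18879, C_12 = pi^6/720 ~ 1.33526
```

Note how quickly Cn shrinks with n: the unit n-sphere occupies a vanishing fraction of its circumscribed cube.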

As soon as the Stirling approximation holds, i.e. for n ≥ 100, the hypersphere volume may be written

Vn (z0 ) ≈ (1/√(πn)) (2πe z0 /n)^{n/2}   or   ln Vn (z0 ) ≈ (n/2) ln(2πe z0 /n) − ln √(πn)    (3.12)

Going to very large n (≥ 500) we find for the logarithm of the volume (the formula for V proper is then difficult to handle due to the large exponents)

ln Vn (z0 ) ≈ (n/2) ln(2πe z0 /n)    (3.13)

Example: Let n = 1000 and z0 = 1000. For the log volume we find

ln V1000 (1000) ≈ 500 ln(2πe) − ln √(1000π) = 1414.91

SURFACE AREA

For the surface area of an n-dimensional sphere we have

On (z0 ) = dVn (z0 )/dr0 = n Cn z0^{(n−1)/2}    (3.14)

Examples: O1 = 2, O2 (z0 ) = 2πr0 , O3 (z0 ) = 4πz0 , O4 (z0 ) = 2π² r0³.

A very useful representation of the surface is this:

On (r0 ) = ∫_{−r0}^{r0} dr1 (r0 /r2 ) On−1 (r2 )    (3.15)

with r2² ≡ r0² − r1². This formula provides an important insight: it shows that the "mass" of a spherical shell is distributed along one sphere axis (r1 ) as follows:

pn (r1 ) = (r0 /r2 ) On−1 (r2 ) / On (r0 ) = [(n − 1)Cn−1 /(n Cn )] r2^{n−3} /r0^{n−2}    (3.16)

Examples: Let r0 = 1; then we find (see Fig. 3.3)

p2 (r1 ) = (1/π) (1 − r1²)^{−1/2}
p3 (r1 ) = 1/2  (constant!)
p4 (r1 ) = (2/π) (1 − r1²)^{1/2}
p5 (r1 ) = (3/4) (1 − r1²)
. . .
p12 (r1 ) = (256/63π) (1 − r1²)^{9/2}

(The astute reader notes that for once we have kept r0 = 1, regardless of the value of n; this is permitted here because we are dealing with a normalized density pn.)
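The quality of approximation 3.12 is easy to check numerically. A Python sketch (illustrative only); math.lgamma gives the exact log-factorial without overflow:

```python
import math

def ln_V_exact(n, z0):
    """ln V_n = (n/2) ln(pi * z0) - ln((n/2)!)  via lgamma, valid for large n."""
    return (n / 2) * math.log(math.pi * z0) - math.lgamma(n / 2 + 1)

def ln_V_approx(n, z0):
    """Stirling form of equ. 3.12: (n/2) ln(2 pi e z0 / n) - ln sqrt(pi n)."""
    return (n / 2) * math.log(2 * math.pi * math.e * z0 / n) \
           - 0.5 * math.log(math.pi * n)

n, z0 = 1000, 1000.0
print(ln_V_exact(n, z0), ln_V_approx(n, z0))   # both close to 1414.9
```

The two values agree to a fraction of a unit – negligible on the scale of ln V itself.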

Application: Assume that a system has n degrees of freedom of translatory motion. (Example: n particles moving on a line, or n/3 particles in 3 dimensions.) Let the sum of squares of all velocities (energy!) be given, but apart from that let any particular combination of the values v1 , v2 , . . . vn be equally probable. All "phase space points" v ≡ {v1 , . . . vn } are then homogeneously distributed on the spherical surface On (|v|²). If we interpret the surface as the locus of all phase space points with given total energy Ekin = (m/2) Σ vi², then p(r1 ) ≡ p(vi ) of equ. 3.16 is just the distribution density for any single velocity component vi.

If just two particles on a line (or the two d.o.f. of a pin ball) share the total energy mv²/2, then the velocity of one of them is most probably near the possible maximal value while the other has only a small speed. The case n = 3, meaning 3 particles on a line or one particle in 3 dimensions, is special: all possible values of v1 occur with equal probability. As we increase the number of dimensions, the character of this density function changes dramatically at first (see Fig. 3.3). With increasing dimension the mass concentrates more and more at the center of the axis. In other words, for many dimensions (or particles) the maximum of the probability density pn (v1 ) is near zero.

Approach to the Maxwell-Boltzmann distribution: For very large n we have

pn (v) ≈ (1/√(2π⟨v²⟩)) exp{−v²/2⟨v²⟩}    (3.17)

with ⟨v²⟩ = 2E0 /nm. The Maxwell-Boltzmann distribution may thus be derived solely from the postulate of equal a priori probability and the geometric properties of high-dimensional spheres.

Figure 3.3: Mass distribution p(r1 ) of an (n − 1)-dimensional spherical surface along one axis, for n = 2, 3, 4, 5 and 12.
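The marginal density 3.16 and its Gaussian limit 3.17 can be compared directly. A Python sketch (illustrative only); the closed-form normalization Γ(n/2)/(Γ((n − 1)/2) √π), which is equivalent to (n − 1)Cn−1 /(n Cn ) for r0 = 1, is written with lgamma for numerical stability:

```python
import math

def p_exact(n, r1):
    """Marginal density of one coordinate on the unit (n-1)-sphere surface:
    p_n(r1) = [Gamma(n/2) / (Gamma((n-1)/2) * sqrt(pi))] * (1 - r1^2)^((n-3)/2)."""
    logC = math.lgamma(n / 2) - math.lgamma((n - 1) / 2) - 0.5 * math.log(math.pi)
    return math.exp(logC) * (1.0 - r1 * r1) ** ((n - 3) / 2)

def p_gauss(n, r1):
    """Maxwell-Boltzmann limit: Gaussian with variance <r1^2> = 1/n (r0 = 1)."""
    return math.sqrt(n / (2 * math.pi)) * math.exp(-n * r1 * r1 / 2)

for n in (12, 100, 1000):
    r1 = 1.0 / math.sqrt(n)    # one 'thermal' standard deviation
    print(n, p_exact(n, r1), p_gauss(n, r1))
```

For n = 3 the exact density is the constant 1/2 quoted above, and already for n of a few hundred the two curves are indistinguishable.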

Figure 3.4: Bunimovich's stadium billard. Distribution of flight directions, p(φ), and of a single velocity component, p(v1x ).

Figure 3.5: Simulation: N = 1 to 3 hard disks in a 2D box with reflecting walls. Distribution of flight directions, p(φ), and of a single velocity component, p(vx ); comparison with the theoretical distribution. [Code: Harddisks]

TWO AMAZING PROPERTIES OF HYPERSPHERES

We have seen that spheres in high-dimensional spaces exhibit quite unusual geometrical properties. In the context of Statistical Physics the following facts are of particular relevance:

• Practically the entire volume of a hypersphere is assembled in a thin shell immediately below the surface
• The volume of a hypersphere is – at least on a logarithmic scale – almost identical to the volume of the largest inscribed hypercylinder

We will consider these two statements in turn.

VOLUME OF A SPHERICAL SHELL

The ratio of the volume ∆Vn (r0 − ∆r, r0 ) of a thin "skin" near the surface and the total volume of the sphere is

∆Vn /Vn = [r0^n − (r0 − ∆r)^n ]/r0^n = 1 − (1 − ∆r/r0 )^n −→ 1 − exp{−n(∆r/r0 )}    (3.18)

For n → ∞ this ratio approaches 1, regardless of how thin the shell may be! Or, using the quantities z0 and ∆z:

∆Vn /Vn −→ 1 − exp[−(n/2)(∆z/z0 )]    (3.19)

At very high dimensions the entire volume of a sphere is concentrated immediately below the surface:

∆Vn (z0 , ∆z) −→ Vn (z0 ),  n >> 1    (3.20)

Example: n = 1000, ∆r = r0 /100 → ∆V /V = 1 − 4.3 · 10⁻⁵

Figure 3.6: Simulation: One hard sphere in a box with reflecting walls; demonstration of the special case n = 3 (where p(v1x ) = const). Distribution of one velocity component, p(v1x ). [Code: Hspheres]

Figure 3.7: Simulation: N Lennard-Jones particles in a 2D box with periodic boundary conditions. Distribution of a velocity component. [Code: LJones]

HYPERSPHERES AND HYPERCYLINDERS
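The shell concentration expressed by equ. 3.18 can be verified in a few lines. A Python sketch (illustrative only):

```python
import math

def shell_fraction_exact(n, dr_over_r):
    """Fraction of an n-sphere's volume in a skin of relative thickness dr/r0."""
    return 1.0 - (1.0 - dr_over_r) ** n

def shell_fraction_limit(n, dr_over_r):
    """Large-n limit 1 - exp(-n * dr / r0) from equ. 3.18."""
    return 1.0 - math.exp(-n * dr_over_r)

# n = 1000 dimensions, skin thickness only 1% of the radius:
print(shell_fraction_exact(1000, 0.01))   # ~0.99996: essentially all the volume
print(shell_fraction_limit(1000, 0.01))
```

Even a 1% skin holds all but a few parts in 10⁵ of the total volume – the numerical counterpart of the example above.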

The sphere volume Vn (z0 ) may alternatively be written as

Vn (z0 ) = ∫_0^{r0} dr1 On1 (z1 ) Vn−n1 (z2 ) ≡ ∫_0^{z0} dz1 [dVn1 (z1 )/dz1 ] Vn−n1 (z0 − z1 ) ≡ ∫_0^{z0} dVn1 (z1 ) Vn−n1 (z0 − z1 )    (3.21)

with r1 ≡ √z1 and z2 ≡ z0 − z1. Example:

V6 (z0 ) = ∫_0^{r0} dr1 O3 (z1 ) V3 (z0 − z1 ) = ∫_0^{r0} dr1 4πr1² (4π/3)(z0 − z1 )^{3/2} = (π³/6) z0³    (3.22)

Partitioning the integration interval [0, z0 ] in equ. 3.21 into small intervals of size ∆z we may write

Vn (z0 ) ≈ Σ_{k=1}^{K} ∆Vn1 (zk ) Vn−n1 (z0 − zk )    (3.23)

where ∆Vn1 (zk ) ≡ Vn1 (k∆z) − Vn1 ((k − 1)∆z). Similarly, for the volume of the spherical shell we have

∆Vn (z0 ) = ∫_0^{z0} dVn1 (z1 ) ∆Vn−n1 (z0 − z1 ) ≈ Σ_{k=1}^{K} ∆Vn1 (zk ) ∆Vn−n1 (z0 − zk )    (3.24)

Remembering equ. 3.20 we may, for high enough dimensions n, n1 and (n − n1 ), always write V in place of ∆V. Therefore,

Vn (z0 ) ≈ Σ_{k=1}^{K} Vn1 (zk ) Vn−n1 (z0 − zk )    (3.25)

Now, the terms in these sums are strongly varying. For high dimensions there is always one single term that dominates the sum; all other terms may safely be neglected. To find this term we evaluate

(d/dz) [Vn1 (z) Vn−n1 (z0 − z)] = 0    (3.26)

From

(d/dz) [z^{n1 /2} (z0 − z)^{(n−n1 )/2} ] = 0    (3.27)

we find for the argument z∗ that maximizes the integrand 3.21 or the summation term in 3.23–3.25:

z∗ = (n1 /n) z0    (3.28)

We have thus found a very surprising property of high-dimensional spheres which we may summarize by the following calculation rule:

• Divide the given dimension n in n1 and n − n1 such that also n1 >> 1 and n − n1 >> 1

• Find the maximum of the product f (z) ≡ Vn1 (z) · Vn−n1 (z0 − z) (or the maximum of the logarithm ln f (z)) with respect to z; this maximum is located at

z∗ = (n1 /n) z0    (3.29)

• With this, the following holds:

ln Vn (z0 ) ≈ ln Vn1 (z∗ ) + ln Vn−n1 (z0 − z∗ )    (3.30)

and similarly,

ln ∆Vn (z0 ) ≈ ln ∆Vn1 (z∗ ) + ln ∆Vn−n1 (z0 − z∗ )    (3.31)

(Remark: in a numerical verification of these relations it must be remembered that the maximum of the function on the r.h.s. of 3.30 or 3.31 is very sharp indeed; it may be overlooked when the interval [0, z0 ] is scanned in regular steps. It is better to localize z∗ first, then compare with slightly smaller and larger z.)

The geometric interpretation of the relation 3.30 is amazing: On a logarithmic scale the volume of an n-sphere equals the product of the volumes of two spheres in the subspaces n1 and n − n1. But this product may be understood as the volume of a hypercylinder that is inscribed in the n-sphere and which has its "base area" in n1 -space and its "height" in (n − n1 )-space.

Example: n = 1000, n1 = 400, z0 = 1000: the maximum of the quantity ln f (z) ≡ ln V400 (z) + ln V600 (1000 − z) is located at z∗ = 400, and we have

500 ln(2πe) − ln √(1000π) ≈ [200 ln(2πe) − ln √(400π)] + [300 ln(2πe) − ln √(600π)]
1418.94 − 4.03 ≈ 567.58 − 3.57 + 851.36 − 3.77    (3.32)
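The calculation rule can be verified numerically along the lines of the remark above. A Python sketch (illustrative only; the regular grid scan used here is the naive method the remark warns about, but with a step of 0.1 it still resolves the maximum for this example):

```python
import math

def ln_V(n, z):
    """ln of the n-sphere volume: (n/2) ln(pi * z) - ln((n/2)!)."""
    return (n / 2) * (math.log(math.pi) + math.log(z)) - math.lgamma(n / 2 + 1)

n, n1, z0 = 1000, 400, 1000.0
# Scan for the z that maximizes ln V_{n1}(z) + ln V_{n-n1}(z0 - z):
zs = [z0 * k / 10000 for k in range(1, 10000)]
z_star = max(zs, key=lambda z: ln_V(n1, z) + ln_V(n - n1, z0 - z))

print(z_star)                                        # -> 400.0 = (n1/n) * z0
print(ln_V(n, z0))                                   # log volume of the sphere ...
print(ln_V(n1, z_star) + ln_V(n - n1, z0 - z_star))  # ... vs the best cylinder
```

The two log volumes differ by a few units out of about 1400 – "almost identical", exactly as relation 3.30 claims.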

Figure 3.8: Simulation: Hyperspheres and -cylinders. In a given hypersphere of dimension n we inscribe hypercylinders whose “base areas” and “heights” have n1 and n − n1 dimensions, respectively. The hypercylinder with the maximum volume is identiﬁed: its log volume is almost equal to that of the circumscribed sphere. [Code: Entropy1]


DISCRETISATION OF PHASE SPACE; ENTROPY

Returning from geometry to physics, we will from now on denote phase space volumes by Γ, reserving the symbol V for the volume of a system in real space. Thus, Γv is the 3N-dimensional velocity phase space volume of an N-particle system, Γr refers to the positional subspace volume which usually will also have 3N dimensions, and Γ denotes the full phase space (or volume in phase space) with 6N dimensions.

Depending on which model system we are considering, the microvariables are either continuous (classical manybody system) or discrete (quantum models, spin lattices.) In order to be able to "enumerate" the states in phase space it is opportune to introduce a raster even in the case of continuous microvariables. Thus we imagine the phase space of an N-particle system to be divided into cells of size²

g ≡ ∆x ∆v    (3.33)

Now let us consider a classical gas, or fluid, with particle number N and volume V. In the 3N-dimensional velocity subspace of the 6N-dimensional phase space the condition Ekin ∈ [E − ∆E, E] again defines a spherical shell whose volume is essentially equal to the volume of the enclosed sphere. The number of ∆v cells in (or below) this shell is

Σv (E, ∆E) ≡ ∆Γv /(∆v)^{3N} ≈ Γv /(∆v)^{3N} = C3N (2E/m)^{3N/2} /(∆v)^{3N}    (3.34)

In the case of the ideal gas the contribution of the position subspace may be included in a simple manner; we write

Σr = V^N /∆x^{3N}    (3.35)

Thus the total number of cells in 6N-space is

Σ′ (N, V, E) = Σr Σv = C3N (V /g³ )^N (2E/m)^{3N/2} = (1/√(3N π)) [ (V /g³ ) (4πeE/3N m)^{3/2} ]^N    (3.36)

Now we have to prematurely introduce a result of quantum statistics. One characteristic property of quantum objects is their indistinguishability. It is evident that the number of distinct microstates will depend on whether or not we take into account this quantum property. A detailed analysis, which must be postponed for now, leads up to the simple result that we have to divide the above quantity Σ′ (N, V, E) by N ! to find

Σ(N, V, E) ≡ Σ′ (N, V, E)/N !    (3.37)

which is now indeed proportional to the total number of physically distinct microstates. This rule is known as the rule of correct Boltzmann enumeration. It is characteristic of the physical "instinct" of J. W. Gibbs that he found just this rule for the correct calculation of Σ although quantum mechanics was not yet known to him. He proposed the 1/N ! rule in an ad hoc manner to solve a certain theoretical problem, the so-called Gibbs Paradox.

The quantity Σ(N, V, E) is a measure of the available phase space volume, given in units g^{3N}. The logarithm of Σ(N, V, E) has great physical significance: it is – up to a prefactor k – identical to the entropy S(N, V, E) that was introduced in

2 A particularly powerful formulation of classical mechanics, known as Hamilton’s formalism, makes use of the variables q, p (position and momentum) in place of r, v (position and velocity.) In this notation the phase space cells have the size h ≡ ∆q ∆p.


thermodynamics. At present, this identification is no more than a hypothesis; the following chapter will show how reasonable the equality S(N, V, E) = k ln Σ(N, V, E) really is.

In the case of a classical ideal gas we find, using

ln N ! ≈ N ln N    (3.38)

and neglecting the term ln(1/√(3N π)) in equ. 3.36,

S(N, V, E) ≡ k ln Σ(N, V, E) = N k ln[ (V /N ) (4πEe/3N mg² )^{3/2} ]    (3.39)

This is the famous Sackur-Tetrode equation, named for the authors Otto Sackur and Hugo Tetrode. The numerical value of Σ, and therefore of S, is obviously dependent on the chosen grid size g. This disquieting fact may be mitigated by the following considerations:

• As long as we are only comparing phase space volumes, or entropies, the unit is of no concern
• There is in fact a smallest physically meaningful gridsize which may well serve as the natural unit of Σ; it is given by the quantum mechanical uncertainty: gmin = h/m

Example: N = 36, m = 2, E = N = 36. The average energy per particle is then ⟨ε⟩ = 1, and the mean squared velocity is ⟨v²⟩ = 1. Assuming a cubic box with V = L³ = 1 and a rather coarse grid with ∆x = ∆v = 0.1 (i.e. g = 10⁻² ) we find

S(N, V, E)/k = 36 ln[ (1/36) (4π · 36 · e/(3 · 36 · 2 · 10⁻⁴ ))^{3/2} ] = 462.27    (3.40)
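The Sackur-Tetrode value of equ. 3.40 can be reproduced directly. A Python sketch (illustrative only):

```python
import math

def S_over_k(N, V, E, m, g):
    """Sackur-Tetrode entropy, equ. 3.39:
    S/k = N * ln[ (V/N) * (4*pi*E*e / (3*N*m*g^2))^(3/2) ]."""
    return N * math.log((V / N) * (4 * math.pi * E * math.e
                                   / (3 * N * m * g * g)) ** 1.5)

# Example from the text: N = 36, V = 1, E = 36, m = 2, grid g = dx*dv = 0.01
print(round(S_over_k(36, 1.0, 36.0, 2.0, 0.01), 2))   # -> 462.27
```

Shrinking the grid g increases S/k by an additive constant per particle, which is why only entropy differences are affected by the choice of unit.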

Just for curiosity, let us determine ln(∆Σ), where ∆Σ is the phase space volume between E = 35.5 and 36.0:

∆Σ = Σ(36.0) − Σ(35.5) = Σ(36.0) [1 − (35.5/36)^{54} ] = Σ(36.0) · 0.53    (3.41)

and thus

S(N, V, ∆E)/k = 462.27 − 0.63 = 461.64    (3.42)

We see that even for such a small system the entropy value is quite insensitive to using the spherical shell volume in place of the sphere volume!
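The shell correction 3.41–3.42 takes only a few lines to check. A Python sketch (illustrative only; Σ(E) ∝ E^{3N/2} with 3N/2 = 54 for N = 36):

```python
import math

three_N_half = 54   # 3N/2 for N = 36
shell_over_sphere = 1.0 - (35.5 / 36.0) ** three_N_half

print(round(shell_over_sphere, 2))            # -> 0.53
print(round(math.log(shell_over_sphere), 2))  # -> -0.63, i.e. 462.27 -> 461.64
```

Even though the shell holds only half the sphere volume, the entropy changes by less than 0.2 percent.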

3.3 Problems for Chapter 3

EXERCISES:

3.1 Geometry of n-spheres I: Calculate the volumes and surfaces of spheres with r = 1.0 in 3, 6, 12 and 100 dimensions. In evaluating 100! use the Stirling approximation n! ≈ √(2πn) (n/e)^n.

3.2 Geometry of n-spheres II: For the various n-spheres of the foregoing example, compute the fraction of the total volume contained in a shell between r = 0.95 and r = 1.0. Comment the result.

3.3 Approximate formula for ln Vn (z0 ): Choose your own example to verify the validity of the approximation 3.30. Make a few runs with Applet Entropy1 and comment on your findings. Put the meaning of this equation in words, and demonstrate the geometrical meaning for a three-dimensional sphere – even if the approximation is not yet valid in this case.

3.4 Sackur-Tetrode equation: Calculate the entropy of an ideal gas of noble gas atoms; choose your own density, energy per particle, and particle mass (keeping in mind that the gas is to be near ideal), and use the minimal grid size in phase space, g = gmin ≡ h/m.

TEST YOUR UNDERSTANDING OF CHAPTER 3:

3.1 Geometry of phase space: Explain the concepts phase space, energy surface, energy shell.

3.2 Entropy and geometry: What is the relation between the entropy of a system and the geometry of the phase space?

3.3 Geometry of n-spheres: Name a property of high-dimensional spheres that simplifies the entropy of an ideal gas. (Hint: shell volume?)

Chapter 4

Statistical Thermodynamics

4.1 Microcanonical ensemble

Figure 4.1: Isolated system: → microcanonical ensemble

We recall the definition of this ensemble – it is that set of microstates which for given N, V have an energy in the interval [E − ∆E, E]. The number of such microstates is proportional to the phase space volume they inhabit. And we found some reason to suspect that this volume – its logarithm, rather – may be identified as that property which the thermodynamicists have dubbed entropy and denoted by S(N, V, E). This can hold only if S has the two essential properties of entropy:

1. In equilibrium the entropy S is additive:

S(N1 , V1 , E1 ) + S(N2 , V2 , E2 ) = S(N1 + N2 , V1 + V2 , E1 + E2 )    (4.1)

2. If the system is divided into two subsystems that may freely exchange energy, then the equilibrium state is the one in which the available energy E = E1 + E2 is distributed such that

dS(N1 , V1 , E1 )/dE1 |E1 =E1∗ = dS(N2 , V2 , E2 )/dE2 |E2 =E−E1∗    (4.2)

The quantity T ≡ [dS(N, V, E)/dE]⁻¹ is commonly called temperature.

Ad 1: The additivity of the entropy may be derived from the geometrical properties of high-dimensional spheres. We may determine the phase space volume of the combined system: let the total energy be divided up according to E ≡ E1 + E2. Then we have, by equ. 3.30,

Γ1+2 (E) = Γ1 (E1∗ ) Γ2 (E − E1∗ )    (4.6)

where E1∗ is that partial energy of system 1 which maximizes the product Γ1 (E1 ) Γ2 (E − E1 ), and thus

S1+2 = k ln[ Γ1+2 (E)/g^{3N} ] = k ln[ Γ1 (E1∗ )/g^{3N1} ] + k ln[ Γ2 (E − E1∗ )/g^{3N2} ] = S1 (E1∗ ) + S2 (E − E1∗ )    (4.7)

Ad 2: THERMAL INTERACTION. Now consider two systems (N1 , V1 , E1 ) and (N2 , V2 , E2 ) in thermal contact, meaning that they can exchange energy but keeping the total energy E ≡ E1 + E2 constant; their "private" volumes and particles remain separated. The optimal partial energy E1∗ fulfills Γ2 (E2 ) ∂Γ1 (E1 )/∂E1 = Γ1 (E1 ) ∂Γ2 (E2 )/∂E2 , or

(∂/∂E1 ) ln Γ1 (E1 )|E1 =E1∗ = (∂/∂E2 ) ln Γ2 (E2 )|E2 =E−E1∗    (4.3)

This, however, is nothing else but the well-known thermodynamic equation

∂S1 (E1 )/∂E1 |E1 =E1∗ = ∂S2 (E2 )/∂E2 |E2 =E−E1∗    (4.4)

or, with ∂S/∂E ≡ 1/T,

T1 = T2    (4.5)

In other words, at thermal contact between two systems isolated from the outside world there will be a regular flow of energy until the quantity ∂S/∂E (≡ 1/T ) is equal in both systems. Since the combined system has then the largest extension in phase space, this will be the most probable distribution of energies upon the two systems. There may be fluctuations around the optimal energy distribution, but due to the extreme sharpness of the maximum of Γ1 (E1 ) Γ2 (E − E1 ) these deviations remain very small. It should be noted that these conclusions, although of eminent physical significance, may be derived quite simply from the geometrical properties of high-dimensional spheres.

Example: Consider two systems with N1 = N2 = 36 and initial energies E1 = 36, E2 = 72. Now bring the systems in thermal contact. The maximum value of the product

Γ1 (E1 ) Γ2 (108 − E1 ) ∝ [E1 (108 − E1 )]^{54}    (4.8)

occurs at E1∗ = E2∗ = 54.

How does another partitioning of the total energy – say, (46, 62) instead of (54, 54) – compare to the optimal one, in terms of phase space volume and thus probability?

Γ1 (46) Γ2 (62) / [Γ1 (54) Γ2 (54)] = (46 · 62/(54 · 54))^{54} = 0.30    (4.9)

We can see that the energy fluctuations in these small systems are relatively large: δE1 /E1∗ ≈ 8/54 = 0.15. However, for larger particle numbers δE1 /E1 decreases as 1/√N: for systems ten times as large, (515 · 565/(540 · 540))^{540} = 0.31, thus δE1 /E1∗ ≈ 25/540 = 0.05 – in agreement with experiment.

THERMODYNAMICS IN THE MICROCANONICAL ENSEMBLE

Let the system under consideration be in mechanical or thermal contact with other systems. The macroscopic conditions (V, E) may then undergo changes, but we assume that this happens in a quasistatic way, meaning that the changes are slow enough to permit the system always to effectively perambulate the microensemble pertaining to the momentary macroconditions. To take the N-particle gas as an example, we require that its energy and volume change so slowly that the system may visit all regions of the phase space shell (E(t), V (t)) before it moves to a new shell. Under these conditions the imported or exported differential energy dE is related to the differential volume change according to

dS = (∂S/∂E)V dE + (∂S/∂V )E dV    (4.10)

Defining (in addition to T ≡ (∂S/∂E)V⁻¹ ) the pressure by

P ≡ T (∂S/∂V )E    (4.11)

equ. 4.10 is identical to the thermodynamic relation

dS = (1/T )(dE + P dV )   or   dE = T dS − P dV  (First Law)    (4.12)

Example: Classical ideal gas with

S(N, V, E) = N k ln[ (V /N ) (4πEe/3N mg² )^{3/2} ]    (4.13)

Solving this equation for E we find for the internal energy U (S, V ) ≡ E the explicit formula

U (S, V ) = (3N mg² /4π) (N/V )^{2/3} e^{2S/3N k−1}    (4.14)

From thermodynamics we know that T = (∂U/∂S)V , therefore

T = (2/3) U/N k    (4.15)

From this we conclude, in agreement with experiment, that the specific heat of the ideal gas is

CV = (3/2) N k    (4.16)

The pressure may be found from P = −(∂U/∂V )S :

P = (2/3) U/V = N kT /V    (4.17)
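The probability ratios quoted above can be recomputed directly. A Python sketch (illustrative only; it uses Γ(E) ∝ E^{3N/2} for each of the two equal subsystems, as in the text):

```python
def relative_weight(E1, E, three_N_half):
    """Gamma_1(E1)*Gamma_2(E-E1) relative to the optimal split E1* = E/2,
    for two equal subsystems with Gamma(E) ~ E^(3N/2)."""
    E_star = E / 2
    return (E1 * (E - E1) / (E_star * E_star)) ** three_N_half

print(round(relative_weight(46.0, 108.0, 54), 2))     # -> 0.30  (N1 = N2 = 36)
print(round(relative_weight(515.0, 1080.0, 540), 2))  # -> 0.31  (ten times larger)
```

A deviation of comparable relative probability (≈ 0.3) corresponds to δE1 ≈ 8 in the small system but only δE1 ≈ 25 out of 540 in the larger one – the 1/√N narrowing of the fluctuations.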

SIMULATION IN THE MICROCANONICAL ENSEMBLE: MOLECULAR DYNAMICS

We found it a simple matter to derive the entropy S of an ideal gas as a function of N, V and E, and once we had S(N, V, E) the subsequent derivation of thermodynamics was easy. To keep things so simple we had to do some cheating, in that we assumed no interaction between the particles. Statistical mechanics is powerful enough to yield solutions even if there are such interactions; a discussion of the pertinent methods – virial expansion, integral equation theories etc. – is beyond the scope of this tutorial. However, there is a more pragmatic method of investigating the thermodynamic properties of an arbitrary model system: computer simulation.

The classical equations of motion of mass points interacting via a physically plausible pair potential u(r), such as the one introduced by Lennard-Jones, read

dvi /dt = −(1/m) Σ_{j≠i} ∂u(rij )/∂ri    (4.18)

If at some time t the microstate {ri , vi , i = 1, . . . N } is given we can solve these equations of motion for a short time step ∆t by numerical approximation; the new positions and velocities at time t + ∆t are used as starting values for the next integration step, and so forth. This procedure is known as molecular dynamics simulation.

Since we assume no external forces but only forces between the particles, the total energy of the system remains constant: the trajectory in phase space is confined to the energy surface E = const. If the system is chaotic it will visit all states on this hypersurface with the same frequency. An average over the trajectory is therefore equivalent to an average over the microcanonical ensemble. The internal energy may be calculated according to U ≡ ⟨Ekin + Epot ⟩, where Ekin may be determined at any time from the particle velocities, and Epot ≡ (1/2) Σi Σ_{j≠i} u(rij ) from the positions. By the same token the temperature may be computed via 3N kT /2 ≡ ⟨Ekin ⟩, while the pressure involves the average of the so-called "virial", that is the quantity W ≡ (1/2) Σi Σ_{j≠i} Kij · rij . In particular we have P = N kT /V + ⟨W ⟩/3V.

In the case of hard spheres the particle trajectories are computed in a different manner. For given ri , vi the time span t0 to the next collision between any two particles in the system is determined. Calling these prospective collision partners i0 and j0 we first move all spheres along their specific flight directions by ∆ri = vi t0 and then simulate the collision (i0 , j0 ), computing the new directions and speeds of the two partners according to the laws of elastic collisions. Now we have gone full circle and can determine the next t0 , i0 and j0.

Further details of the MD method may be found in [Vesely 94] or [Allen 90].

4.2 Canonical ensemble

We once more put two systems in thermal contact with each other. One of the systems is supposed to have many more degrees of freedom than the other:

n > n − n1 >> n1 >> 1    (4.19)

The larger system, with n2 ≡ n − n1 d.o.f., is called "heat bath". The energy E2 = E − E1 contained in the heat bath is "almost always" much greater than the energy of the smaller system; the heat reservoir's entropy may therefore be expanded around S2 (E):

S2 (E − E1 ) ≈ S2 (E) − E1 (∂S2 (E′ )/∂E′ )|E′ =E = S2 (E) − E1 /T    (4.20)
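The MD procedure described above can be sketched in a few lines. The following Python toy example is illustrative only – the course codes are Java applets, and the choices made here (Lennard-Jones parameters ε = σ = m = 1, two particles on a line, and the velocity-Verlet integrator) are assumptions, not prescriptions from the text. It integrates the equations of motion 4.18 and checks that the trajectory indeed stays on the surface E = const:

```python
def lj_pot(r):
    """Lennard-Jones potential with eps = sigma = 1."""
    return 4.0 * (r ** -12 - r ** -6)

def lj_force(r):
    """Force on the right-hand particle: F = -dU/dr = 24*(2/r^13 - 1/r^7)."""
    return 24.0 * (2.0 * r ** -13 - r ** -7)

# Two unit-mass particles on a line, started at rest in the attractive region:
x1, x2, v1, v2 = 0.0, 1.5, 0.0, 0.0
dt = 0.001
E_initial = 0.5 * (v1 ** 2 + v2 ** 2) + lj_pot(x2 - x1)

f = lj_force(x2 - x1)
a1, a2 = -f, f                   # current accelerations (m = 1)
for _ in range(5000):            # velocity-Verlet integration
    x1 += v1 * dt + 0.5 * a1 * dt * dt
    x2 += v2 * dt + 0.5 * a2 * dt * dt
    f = lj_force(x2 - x1)
    na1, na2 = -f, f
    v1 += 0.5 * (a1 + na1) * dt
    v2 += 0.5 * (a2 + na2) * dt
    a1, a2 = na1, na2

E_final = 0.5 * (v1 ** 2 + v2 ** 2) + lj_pot(x2 - x1)
print(abs(E_final - E_initial))  # remains small: the trajectory keeps E = const
```

The bound pair oscillates in the potential well while the total energy drifts by far less than a percent – the conservation property that confines an MD trajectory to the microcanonical energy surface.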
the internal energy may be calculated according to Ui ≡ Ekin + Epot . that is the quantity W ≡ (1/2) i j=i Kij · rij . Calling these prospective collision partners i0 and j0 we ﬁrst move all spheres along their speciﬁc ﬂight directions by ∆ri = vi t0 and then simulate the collision (i0 . the total energy of the system remains constant: the trajectory in phase space is conﬁned to the energy surface E = const. N } is given we can solve these equations of motion for a short time step ∆t by numerical approximation. To keep things so simple we had to do some cheating. . the new positions and velocities at time t + ∆t are used as starting values for the next integration step and so forth. One of the systems is supposed to have many more degrees of freedom than the other: n > n − n1 >> n1 >> 1 (4. . The energy E2 = E − E1 contained in the heat bath is “almost always” much greater than the energy of the smaller system. The classical equations of motion of mass points interacting via a physically plausible pair potential u(r) such as the one introduced by Lennard-Jones read dvi 1 =− dt m ∂u(rij ) ∂ri (4. In particular we have P = N kT /V + W /3V . For example. with n2 ≡ n − n1 d.o. A discussion of the pertinent methods – virial expansion. For given ri . vi .f.

4.2 Canonical ensemble

We once more put two systems in thermal contact with each other. One of the systems is supposed to have many more degrees of freedom than the other:

n > n − n_1 >> n_1 >> 1    (4.19)

The larger system, with n_2 ≡ n − n_1 d.o.f., is called "heat bath". The energy E_2 = E − E_1 contained in the heat bath is "almost always" much greater than the energy of the smaller system; the heat reservoir's entropy may therefore be expanded around S_2(E):

S_2(E − E_1) ≈ S_2(E) − E_1 [∂S_2(E')/∂E']_{E'=E} = S_2(E) − E_1/T    (4.20)

Figure 4.2: System in contact with an energy reservoir: → canonical ensemble

The number of phase space cells occupied by the larger system is thus

Σ_2(E − E_1) = e^{S_2/k} e^{−E_1/kT}    (4.21)

where T is the temperature of the heat bath. But the larger Σ_2(E − E_1), the larger the probability to find system 1 in a microstate with energy E_1. We may express this in terms of the "canonical phase space density" ρ_1(Γ_1):

ρ_1[Γ_1(E_1)] = ρ_0 e^{−E_1/kT}    (4.22)

where ρ_0 is a normalizing factor, and

ρ_1(Γ_1) ≡ ρ[r_1, v_1; E(r_1, v_1) = E_1]    (4.23)

is the density of microstates in that region of phase space of system 1 that belongs to energy E_1.

For a better understanding of equ. 4.22 we recall that in the microcanonical ensemble only those states of system 1 were considered for which the energy E_1 was within a small interval ΔE. In the canonical ensemble all energy values are permitted, but the density of state points varies strongly, as exp[−E_1/kT].

Equation 4.22 may not be understood to say that the most probable energy of the smaller system be equal to zero. While the density of states in the phase space of system 1 indeed drops sharply with increasing E_1, the volume Γ_1(E_1) pertaining to E_1 is strongly increasing, as E_1^{3N/2}. The product of these two factors – i.e. the statistical weight of the respective phase space region – then exhibits a maximum at an energy E_1 ≠ 0. Moreover, for a large number of particles the peak of the energy distribution is so sharp that the most probable energy is all but identical to the mean energy:

ΔE/⟨E⟩ ∝ 1/√N    (4.24)

Thus we have found that even non-isolated systems which may exchange energy have, most of the time, a certain energy from which they will deviate only slightly. As a – by now familiar – illustration of this let us recall the Maxwell-Boltzmann distribution: p(|v|) is just the probability density for the (kinetic) energy of a subsystem consisting of only one particle.
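The 1/√N sharpening of the energy distribution is easy to check numerically. The sketch below (all parameter values are illustrative assumptions) samples the total kinetic energy of N particles with Maxwell-Boltzmann velocities – equivalently, (kT/2) times a chi-square variate with 3N degrees of freedom – and compares the relative spread for two system sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
kT, samples = 1.0, 20000          # assumed temperature and sample count

def relative_spread(n):
    """Relative fluctuation of the total kinetic energy of n particles:
    E_kin = (kT/2) * chi^2 with 3n d.o.f. (sum of 3n squared Gaussian
    velocity components)."""
    e = 0.5 * kT * rng.chisquare(3 * n, size=samples)
    return e.std() / e.mean()

r100 = relative_spread(100)       # expect about sqrt(2/300) ~ 0.08
r10000 = relative_spread(10000)   # expect about ten times smaller
```

Increasing N by a factor 100 shrinks the relative energy fluctuation by about a factor 10, as equ. 4.24 predicts.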

THERMODYNAMICS IN THE CANONICAL ENSEMBLE

The quantity

Q(N,V,T) ≡ (1/(N! g^{3N})) ∫∫ dr dv e^{−E(r,v)/kT}    (4.25)

is called canonical partition function. First of all, it is a normalizing factor in the calculation of averages over the canonical ensemble. For instance, the internal energy may be written as

U ≡ ⟨E⟩ = (1/Q) (1/(N! g^{3N})) ∫∫ dr dv E(r,v) e^{−E(r,v)/kT}    (4.26)

But the great practical importance of the partition function stems from its close relation to Helmholtz' free energy A(N,V,T), which itself is a central object of thermodynamics. The relation between the two is

Q(N,V,T) = e^{−βA(N,V,T)}    (4.27)

where β ≡ 1/kT. To prove this important fact we differentiate the identity

(1/(N! g^{3N})) ∫∫ dr dv e^{β[A(N,V,T)−E(r,v)]} = 1    (4.28)

by β, obtaining

A(N,V,T) − ⟨E⟩ + β (∂A/∂β)_V = 0    (4.29)

or

A(N,V,T) − U(N,V,T) − T (∂A/∂T)_V = 0    (4.30)

But this is, with S ≡ −(∂A/∂T)_V, identical to the basic thermodynamic relation A = U − TS.

All other thermodynamic quantities may now be distilled from A(N,V,T). For example, the pressure is given by

P = −(∂A/∂V)_T    (4.31)

Similarly, entropy and Gibbs' free energy are calculated from

S = −(∂A/∂T)_V  and  G = A + PV    (4.32)

We have derived the properties of the canonical ensemble using a Taylor expansion of the entropy. The derivation originally given by Gibbs is different: he generalized Boltzmann's "method of the most probable distribution" to an ensemble of microscopically identical systems in thermal contact with each other. Gibbs considered M equal systems (i = 1,...,M), each containing N particles, with an unhindered interchange of energies between the systems; the sum of the M energies was constrained to a given value E = Σ_i E_i. Under these simple assumptions he determined the probability of finding a system in the neighbourhood of a microstate Γ having an energy in the interval [E, ΔE]. With increasing energy E this probability density drops as ρ(Γ(E)) ∝ exp[−E/kT]. Since the volume of the energy shell rises sharply with energy, we again find that most systems will have an energy around ⟨E⟩ = E/M. Thus the important equivalence of canonical and microcanonical ensembles may alternatively be proven in this manner.

But this means that we may calculate averages of physical quantities either in the microcanonical or in the canonical ensemble, according to mathematical convenience. This principle is known as "equivalence of ensembles".
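As a small numerical cross-check of the pressure relation P = −(∂A/∂V)_T, the sketch below evaluates A = −kT ln Q for the classical ideal gas (Stirling's approximation for ln N!; the argon mass and the values of N, V, T are illustrative assumptions) and differentiates it numerically; the result must agree with NkT/V:

```python
import numpy as np

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J s
m = 6.6335e-26      # argon atom mass, kg

def ln_Q(N, V, T):
    """ln of the ideal gas partition function, Q = V^N/N! (2 pi m k T/h^2)^(3N/2),
    using Stirling: ln N! ~ N ln N - N."""
    lam3 = (h**2 / (2.0 * np.pi * m * k * T))**1.5   # thermal wavelength cubed
    return N * np.log(V / lam3) - (N * np.log(N) - N)

def pressure(N, V, T, dV=1e-9):
    """Central-difference estimate of P = -(dA/dV)_T with A = -kT ln Q."""
    A = lambda vol: -k * T * ln_Q(N, vol, T)
    return -(A(V + dV) - A(V - dV)) / (2.0 * dV)

N, V, T = 1.0e22, 1.0e-3, 300.0   # assumed particle number, volume (m^3), temperature
p_num = pressure(N, V, T)
p_ideal = N * k * T / V           # equation of state NkT/V, for comparison
```

The agreement is limited only by the finite-difference step, since the V-dependence of A is exactly −NkT ln V plus V-independent terms.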

Example 1: For the classical ideal gas we have

Q(N,V,T) = (m^{3N}/(N! h^{3N})) ∫∫ dr dv exp[−(m/2kT) Σ_i |v_i|²]    (4.33)

Performing the 3N Gaussian velocity integrals and the trivial space integrals we find, with the single-particle partition function

q ≡ (2πmkT/h²)^{3/2} V    (4.34)

the result

Q(N,V,T) = q^N/N!    (4.35)

Using A ≡ −kT ln Q and 4.31 we find

P = NkT/V    (4.36)

which is the well-known equation of state of the ideal gas. Similarly we find from S = −(∂A/∂T)_V the entropy, in keeping with our earlier result for the ideal gas.

Example 2: The free energy of one mole of an ideal gas at standard conditions, assuming a molecular mass of m = 39.95 m_H (i.e. Argon), is

A = −kT ln Q = −4.25 · 10^7 J    (4.37)

We have now succeeded in deriving the thermodynamics of an ideal gas solely from a geometrical analysis of the phase space of N classical point masses. It must be stressed that similar relations for thermodynamical observables may also be derived for other model systems with their respective phase spaces.

In hindsight it is possible to apply the concept of a "partition function" also to the microcanonical ensemble. After all, the quantity Σ(N,V,E) was also a measure of the total accessible phase space volume, and we recall that its logarithm – the entropy – served as a starting point to unfold statistical thermodynamics; we might as well call it the "microcanonical partition function".

EQUIPARTITION THEOREM

Without proof we note the following important theorem: If the Hamiltonian of a system contains some position or velocity coordinate in quadratic form, the respective degree of freedom will have the mean energy kT/2.

Example 1: The Hamiltonian of the classical ideal gas is

H(v) = Σ_{i=1}^{N} Σ_{α=1}^{3} m v_iα²/2    (4.38)

Each velocity component appears in the guise v_iα², i.e. in quadratic form. The equipartition theorem then tells us that for each velocity component

⟨m v_iα²/2⟩ = kT/2  or  ⟨m v_i²/2⟩ = 3kT/2    (4.39)

Therefore each of the 3N translational d.o.f. contains, on the average, an energy kT/2. (As there is no interactional or external potential, the positional d.o.f. contain no energy.)

Example 2: Every classical fluid, such as the Lennard-Jones liquid, has the Hamiltonian

H(r,v) = H_pot(r) + Σ_{i=1}^{N} Σ_{α=1}^{3} m v_iα²/2    (4.40)

Again each of the 3N translatory d.o.f. carries, on the average, the energy kT/2, so that ⟨m v_i²/2⟩ = 3kT/2. (The interaction potential is not quadratic in the positions; therefore the equipartition law does not apply to the r_iα.)

Example 3: A system of N one-dimensional harmonic oscillators is characterized by the Hamiltonian

H(x, ẋ) = H_pot + H_kin = Σ_{i=1}^{N} f x_i²/2 + Σ_{i=1}^{N} m ẋ_i²/2    (4.41)

Therefore

⟨f x_i²/2⟩ = kT/2  and  ⟨m ẋ_i²/2⟩ = kT/2    (4.42)

Thus the total energy of the N oscillators is ⟨E⟩ = NkT. The generalization of this to three dimensions is trivial: for an idealized crystal of N three-dimensional oscillators we find E_id.cryst. = 3NkT. For the specific heat we have consequently C_V ≡ (∂E/∂T)_V = 3Nk. This prediction is in good agreement with experimental results for crystalline solids at moderately high temperatures. For low temperatures – and depending on the specific substance this may well mean room temperature – the classical description breaks down, and we have to apply quantum rules to predict C_V and other thermodynamic observables (see Chapter 5.1).

Example 4: A fluid of N "dumbbell molecules", each having 3 translatory and 2 rotatory d.o.f., has the Hamiltonian

H(r, e) = H_pot(r, e) + Σ_{i,α} m v_iα²/2 + Σ_{i,β} I ω_iβ²/2    (4.43)

where the e are the orientation vectors of the linear particles, the ω are their angular velocities, and I denotes the molecular moment of inertia. Thus we predict

⟨m v_i²/2⟩ = 3kT/2  and  ⟨I ω_i²/2⟩ = 2 kT/2 = kT    (4.44)

which is again a good estimate as long as a classical description may be expected to apply.
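The oscillator averages of equ. 4.42 are easily verified by direct sampling: in the canonical ensemble a coordinate that enters the Hamiltonian quadratically is Gaussian-distributed, so its mean energy must come out as kT/2. (The parameter values below are illustrative assumptions, not from the text.)

```python
import numpy as np

rng = np.random.default_rng(42)
kT, f, m, n = 2.0, 3.0, 1.5, 200_000   # assumed temperature, constants, sample size

# Canonical (Boltzmann) distributions of a 1D oscillator: x and xdot are
# independent Gaussians with variances kT/f and kT/m respectively.
x = rng.normal(0.0, np.sqrt(kT / f), n)
xdot = rng.normal(0.0, np.sqrt(kT / m), n)

e_pot = np.mean(0.5 * f * x * x)          # should approach kT/2 = 1.0
e_kin = np.mean(0.5 * m * xdot * xdot)    # should approach kT/2 = 1.0
```

Note that the result kT/2 is independent of the spring constant f and the mass m, exactly as the equipartition theorem asserts.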

CHEMICAL POTENTIAL

Let us recall the experimental setup of Fig. 4.2. Allowing two systems to exchange energy leads to an equilibration of their temperatures T (see Section 4.2). Let us now assume that the systems can also exchange particles. In such a situation we will again find some initial equilibration, after which the particle numbers N_1, N_2 will only slightly fluctuate around their mean values. The energies E_1, E_2 of the two subsystems will then also fluctuate – usually just slightly – around their average values. The flow of particles from one subsystem to the other will come to an end as soon as the free energy A = A_1 + A_2 of the combined system tends to a constant value. This happens when (∂A_1/∂N_1)_V = (∂A_2/∂N_2)_V. The quantity that determines the equilibrium is therefore the chemical potential μ ≡ (∂A/∂N)_V: the subsystems will trade particles until μ_1 = μ_2. (Compare this definition of the chemical potential with that of temperature, T = (∂S/∂E)^{-1}.)

Example: The chemical potential of Argon (m = 39.95 m_H) at normal conditions is

μ ≡ dA/dN = −kT d(ln Q)/dN = −kT ln[(V/N)(2πmkT/h²)^{3/2}] = −7.26 · 10^{-20} J    (4.45)

SIMULATION IN THE CANONICAL ENSEMBLE: MONTE CARLO CALCULATION

We may once more find an appropriate numerical rule for the correct "browsing" of all states with given (N,V,T) (in place of (N,V,E)). Since it is now the Boltzmann factor that gives the statistical weight of a microstate Γ ≡ {r_i, v_i}, the average value of some quantity a(Γ) is given by

⟨a⟩_{N,V,T} = ∫ a(Γ) exp[−E(Γ)/kT] dΓ / ∫ exp[−E(Γ)/kT] dΓ    (4.46)

If a depends only on the particle positions and not on the velocities – and this is true for important quantities such as potential energy, virial, etc. – then we may even confine the weighted integral to the 3N-dimensional configurational subspace Γ_c ≡ {r_i} of full phase space:

⟨a⟩_{N,V,T} = ∫ a(Γ_c) exp[−E_pot(Γ_c)/kT] dΓ_c / ∫ exp[−E_pot(Γ_c)/kT] dΓ_c    (4.47)

In order to find an estimate for ⟨a⟩ we formally replace the integrals by sums and construct a long sequence of randomly sampled states Γ_c(m), m = 1,...,M, with the requirement that the relative frequency of microstates in the sequence be proportional to their Boltzmann factors. In other words, configurations with high potential energy should occur less frequently than states with small E_pot. Since the Boltzmann factor is then already contained in the frequency of states in the sequence, we can compute the Boltzmann-weighted average simply according to

⟨a⟩ = (1/M) Σ_{m=1}^{M} a(Γ_c(m))    (4.48)

The customary method of producing such a biased random sequence (a so-called "Markov chain") is called "Metropolis technique", after its inventor Nicholas Metropolis. An extensive description of the Monte Carlo method may be found in [Vesely 1978] or [Vesely 2001].

There is also a special version of the molecular dynamics simulation method that may be used to perambulate the canonical distribution. We recall that in MD simulations normally the total system energy E_tot = E_kin + E_pot is held constant, which is roughly equivalent to sampling the microcanonical ensemble. However, by introducing a kind of numerical thermostat we may at each time step adjust all particle velocities so as to keep E_kin either constant (isokinetic MD simulation) or near a mean value such that ⟨E_kin⟩ ∝ T (isothermal MD).
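The Metropolis technique described above can be sketched with a toy one-dimensional stand-in for the 3N-dimensional configurational sampling (the potential and all parameters are assumptions for illustration): states are visited with frequencies proportional to exp[−E_pot/kT], so the plain chain average of x² must approach its canonical value kT.

```python
import numpy as np

rng = np.random.default_rng(7)
kT, step, n_steps = 0.8, 1.0, 200_000   # assumed temperature, trial step, chain length

def epot(x):
    """Toy configurational energy: a single harmonic coordinate."""
    return 0.5 * x * x

x, e = 0.0, 0.0
chain = np.empty(n_steps)
for i in range(n_steps):
    x_new = x + step * rng.uniform(-1.0, 1.0)        # symmetric trial move
    e_new = epot(x_new)
    # Accept with probability min(1, exp(-(E_new - E_old)/kT)):
    if e_new <= e or rng.random() < np.exp(-(e_new - e) / kT):
        x, e = x_new, e_new
    chain[i] = x            # on rejection the old state is counted again

x2 = np.mean(chain * chain)   # canonical value: <x^2> = kT = 0.8
```

Recounting the old state after a rejected move is essential; skipping rejected steps would bias the chain away from the Boltzmann distribution.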

V1 . . vi . as follows: p(r.Figure 4.49) Summing this density over all possible values of N1 and integrating – at each N1 – over all {ri . . Above all. . . V1 . V. averages taken only over particles in a partial volume –¿ 61 . . i = 1.53) As a rule the – permitted – ﬂuctuations of the number of particles remain small.52) (4. T ) = kT 2 U (≡ E ) = − ∂β ∂T P = (4. T ) V ∂ ∂ ln Z N (≡ N ) = z ln Z(z. vi . putting z ≡ eµ/kT and β ≡ 1/kT we ﬁnd kT ln Z(z.3: System in contact with an energy and particle reservoir: → grand canonical ensemble to the other. in particular √ we have ∆N/N ≈ 1/ N . T ) N1 =0 (4. For instance. it serves as the source function of thermodynamics. V. N1 ) ∝ eµN1 /kT e−E(r. Thus the grand ensemble is again equivalent to others ensembles of statistical mechanics. [To do: applet with MD simulation.50) Its value is just the “total statistical weight” of all possible states of system 1. THERMODYNAMICS IN THE GRAND CANONICAL ENSEMBLE From the grand partition function we can easily derive expressions for the various thermodynamic observables. T ) ≡ eN1 µ/kT Q(N1 . .51) (4. V. N1 }. it depends now both on the number of particles N1 and on {ri . i = 1.v)/kT (4. N1 } we obtain the grand partition function ∞ Z(µ. And as before we can write down the probability density in the phase space of the smaller system. T ) = kT ∂z ∂µ ∂ ln Z ∂ ln Z(z. v.

m = 1. . The stochastic rule for these insertions and removals is such that it agrees with the thermodynamic probability of such processes.55) Using the formulae for internal energy and pressure we ﬁnd P = −kT z 2πmkT h2 3/2 and U = −kT z 3V 2 2πmkT h2 3/2 (4.2 Grand canonical ensemble: Using the grand partition function. For the grand partition function we have Z(z. Just as in the canonical Monte Carlo procedure we produce a sequence of microstates Γc (m).57) in keeping with the phenomenological ideal gas equation.54) Therefore Z = exp −zV 2πmkT h2 3/2 or ln Z = −zV 2πmkT h2 3/2 (4. . P = 2U/3V or P = N 2 3N kT = kT 3V 2 V (4. Apply this to the general canonical phase space. SIMULATION IN THE GRAND CANONICAL ENSEMBLE: GCMC The states within the grand ensemble may again be sampled in a random manner. b) in spite of this the most probable kinetic energy is not zero. 4.56) Consequently. T ) = N zN VN N! 2πmkT h2 3N/2 = N yN with y ≡ V z N! 2πmkT h2 3/2 (4. . show that a) with increasing kinetic energy Ek the density of states decreases as exp [−Ek /kT ]. V.4 Problems for Chapter 4 EXERCISES: 4. By averaging some quantity over the “Markov chain” of conﬁgurations we again obtain an estimate of the respective thermodynamic observable.same results!] Example: Let us visit the ideal gas again. M with the appropriate relative frequencies. The only change is that we now vary also the number of particles by occasionally adding or removing a particle. derive the mean number 62 .1 Canonical phase space density: Considering the Maxwell-Boltzmann distribution in a classical 3D ﬂuid. 4.
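A quick numerical check of the ideal-gas grand partition function (with an assumed toy value of y, since the physical prefactors only fix y's numerical value): summing y^N/N! over N reproduces e^y, so ln Z = y; the fugacity derivative then returns ⟨N⟩ = y as well, consistent with PV = kT ln Z = ⟨N⟩kT.

```python
import math

y = 12.345   # assumed toy value of y = zV(2 pi m kT/h^2)^(3/2)

def grand_Z(yy, nmax=150):
    """Grand partition function of the ideal gas: Z = sum_N y^N / N!."""
    return sum(yy**n / math.factorial(n) for n in range(nmax))

ln_Z = math.log(grand_Z(y))     # equ. 4.55 predicts ln Z = y exactly

# <N> = z d(ln Z)/dz; since y is proportional to z, this equals y d(ln Z)/dy:
dy = 1e-6
n_mean = y * (math.log(grand_Z(y + dy)) - math.log(grand_Z(y - dy))) / (2.0 * dy)
```

Truncating the sum at nmax is harmless here because the terms y^N/N! decay factorially once N exceeds y.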

TEST YOUR UNDERSTANDING OF CHAPTER 4:

4.1 Temperature and entropy: What happens if two systems are put in contact with each other such that they may exchange energy?

4.2 Changes of state: What is a quasistatic change of state?

4.3 Equipartition theorem: Formulate the equipartition theorem. To which microscopic variables does it apply? Give two examples.

4.4 Canonical ensemble: What are the macroscopic conditions that define the canonical ensemble? When is it equivalent to the microcanonical ensemble?

4.5 Grand canonical ensemble: What are the macroscopic conditions defining the grand canonical ensemble? What is needed to make it equivalent to the canonical and microcanonical ensembles?

Chapter 5

Statistical Quantum Mechanics

Counting the states in phase space takes some consideration if we want to apply quantum mechanics. We have already mentioned the fact that quantum particles are not distinguishable, and we took this into account in a heuristic manner, by introducing the permutation factor 1/N! in the partition function. Another feature of the quantum picture is that state space is not continuous but consists of finite "raster elements": just think of the discrete states of an ideal quantum gas. Finally, it depends on the "symmetry class" of the particles how many of them may inhabit the same discrete microstate. Fermions, with a wave function of odd symmetry, can take on a particular state only exclusively: the population number of a raster element in phase space can be just 0 or 1. In contrast, even-symmetry bosons may in any number share the same microstate.

What are the consequences of these additional counting rules for Statistical Thermodynamics? To seek an answer we may either proceed in the manner of Boltzmann (see Section 2.2) or à la Gibbs (see Chapter 4). For a better understanding we will here sketch both approaches.

5.1 Ideal quantum gas: Method of the most probable distribution

We consider a system of N independent particles in a cubic box with ideally elastic walls. In the spirit of the kinetic theory of dilute gases we explore the state space of the individual particles, which Boltzmann dubbed μ-space. In the quantum case this space is spanned by the quantum numbers n_{x,y,z} = 1, 2,... pertaining to the momentum eigenstates having eigenvalues p ≡ (h/2L) n and energies E_n = p²/2m = (h²/8mL²)|n|².

Now we batch together all states having energies E_n in an interval [E_j, ΔE]. The number of states in such a "cell" is named g_j. The values of the g_j are not important; they should only be large enough to allow the application of Stirling's formula. As before we try to answer the question how the N particles should best be distributed over the m cells. To do so we change the notation from the one used in Section 2.2, in that we denote the number of particles in cell j by f_j. (The reason for using f_j is that "n" is reserved for the quantum numbers.) A specific distribution f ≡ {f_j, j = 1,...,m} of the N particles over the m cells is more probable if its multiplicity W is larger, meaning that we can allot the {f_j} particles in more different ways to the {g_j} states in each cell – always keeping in mind the Fermi or Bose rules:

Fermi-Dirac:  W = Π_{j=1}^{m} C(g_j, f_j)
Bose-Einstein:  W = Π_{j=1}^{m} C(g_j + f_j − 1, f_j)    (5.1)

where C(g, f) denotes the binomial coefficient g!/(f!(g−f)!).
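The two counting rules of equ. 5.1 can be evaluated directly with binomial coefficients. The cell sizes and occupation numbers below are made-up toy values, used only to show the mechanics of the count:

```python
from math import comb

g = [4, 6, 8]   # assumed numbers of states per cell
f = [2, 3, 1]   # assumed numbers of particles per cell

W_fd = 1
W_be = 1
for gj, fj in zip(g, f):
    W_fd *= comb(gj, fj)            # Fermi-Dirac: at most one particle per state
    W_be *= comb(gj + fj - 1, fj)   # Bose-Einstein: unlimited occupation

# W_fd = C(4,2)*C(6,3)*C(8,1) = 6*20*8 = 960
# W_be = C(5,2)*C(8,3)*C(8,1) = 10*56*8 = 4480
```

For identical g_j and f_j the Bose multiplicity is always at least as large as the Fermi one, since the Bose rule admits every Fermi configuration plus all multiply-occupied ones.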

To compare: the multiplicity given in Section 2.2 (equ. 2.20), pertaining to a classical distribution, would in our present notation read W = Π_{j=1}^{m} g_j^{f_j}/f_j!.

The distribution f*_j having the largest multiplicity may again be determined by Lagrange variation with the conditions Σ_j f_j E_j = E and Σ_j f_j = N:

δ ln W − β δ[Σ_j f_j E_j − E] + γ δ[Σ_j f_j − N] = 0    (5.2)

Writing z ≡ e^γ we find

Fermi-Dirac:  f*_j = g_j/(z^{-1} e^{βE_j} + 1)
Bose-Einstein:  f*_j = g_j/(z^{-1} e^{βE_j} − 1)    (5.3)

This is the most probable distribution of the particles over the cells. Since g_j denotes the number of states in cell j, we have for the average (and also most probable!) population number of each state

f_n ≡ f*_j/g_j = 1/(z^{-1} e^{βE_j} ± 1)    (+: Fermi, −: Bose)    (5.4)

It is evidently quite different from its classical counterpart, the Boltzmann factor. It is easy to interpret the Lagrange parameters β and z: as in Section 2.2 one compares the consequences of the population densities given above with empirical/thermodynamical facts, finding that β is related to temperature as β = 1/kT, and that z = e^γ = e^{μ/kT} is identical to the fugacity.

For a better understanding of this derivation, let us interpret its premises as a set of rules in a game of fortune. These are the rules:

– For non-interacting particles in a square box the μ-plane is spanned by the integers n_x, n_y; each quantum state is represented by a point. A specific state of a system of N fermions is represented by a set of N inhabited points on that plane.
– Assign N particles randomly to the states on the μ-plane.
– Make sure that the sum of the particle energies equals the given system energy, AND
– discard all trials in which a state is inhabited by more than one particle.
– Determine the mean number of particles in each state; sort the result according to the state energies.

By running the applet EFRoulette we may indeed play that game – for Fermi particles at least – and compare its outcome with the result just given.

5.2 Ideal quantum gas: Grand canonical ensemble

We may derive the properties of a quantum gas in another way, making use of the (z, V, T) ensemble in Gibbsean phase space, as we have done in Chapter 4. Recalling the general definition of the grand partition function,

Z(z, V, T) ≡ Σ_{N=0}^{∞} z^N Q(N, V, T)

we now write Q as a sum (in place of an integral) over states:

Q(N, V, T) = (m^{3N}/(N! h^{3N})) ∫ dΓ exp[−E(Γ)/kT]  =⇒  Σ_{f_n} exp[−Σ_n f_n E_n/kT]    (5.5)

again requiring that Σ_n f_n = N.

The sum over {f_n} is to be taken over all permitted population numbers of all states n. The permitted values of f_n are 0 and 1 for fermions, and 0, 1, 2,...,N for bosons. In this manner we arrive at

Z = Σ_{N=0}^{∞} z^N Σ_{f_n} exp[−Σ_n f_n E_n/kT] = Σ_{N=0}^{∞} Σ_{f_n} Π_n (z e^{−E_n/kT})^{f_n}    (5.6)

It is easy to show that this is equal to

Z = Π_n Σ_f (z e^{−E_n/kT})^f    (5.7)

Now we can insert the possible values of f. We find for fermions (f = 0, 1)

Z = Π_n (1 + z e^{−E_n/kT})    (5.8)

and for bosons (f = 0, 1, 2,...)

Z = Π_n 1/(1 − z e^{−E_n/kT})    (5.9)

Having secured the grand partition function, we can now apply the well-known formulae for pressure, mean particle number, and internal energy (see Section 4.3) to determine the thermodynamic properties of the system. In particular, the mean population number of a given state n is

⟨f_n⟩ ≡ (1/Z) Σ f_n exp[−Σ E_n f_n/kT] = −(1/β) ∂ ln Z/∂E_n    (5.10)

Inserting for Z the respective expression for Fermi or Bose particles we once more arrive at the population densities of equ. 5.4.

In the following sections we discuss the properties of a few particularly prominent fermion and boson gases.

5.3 Ideal Fermi gas

Figure 5.1: Playing the Fermi-Dirac game. [Code: EFRoulette]

Figure 5.2 shows the fermion population density (see equ. 5.4 with the positive sign in the denominator). In particular, when the temperature is low it approaches a step function: in the limit kT → 0

all states having an energy below a certain value are inhabited, while the states with higher energies remain empty. The threshold energy E_F = μ is called "Fermi energy". It is only for high temperatures and small values of μ that the Fermi-Dirac density approaches the classical Boltzmann density, and as it does so it strictly obeys the inequality f_FD < f_Bm.

Figure 5.2: Mean population numbers of the states in a Fermi-Dirac system with μ = 0, for several temperatures. For comparison we have also drawn the classical Boltzmann factor exp[−E/kT] for a temperature of kT = 5.

ELECTRONS IN METALS

Conduction electrons in metallic solids may be considered as an ideal fermion gas – with an additional feature: since electrons have two possible spin states, the maximum number of particles in a state p is 2 instead of 1. At low temperatures all states p with E_p ≤ E_F = μ are populated, while the states with higher energies remain empty. The number of such states is

N = (8π/3) V (2mμ/h²)^{3/2}    (5.11)
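The three population densities are easy to compare numerically. The sketch below (assumed temperatures, and μ = 0, i.e. z = 1) confirms the ordering f_FD < f_Bm < f_BE at every energy, as well as the step-function limit of the Fermi-Dirac density for kT → 0:

```python
import numpy as np

def f_fd(E, kT):
    """Fermi-Dirac population density for mu = 0 (z = 1)."""
    return 1.0 / (np.exp(E / kT) + 1.0)

def f_be(E, kT):
    """Bose-Einstein population density for mu = 0 (needs E > 0)."""
    return 1.0 / (np.exp(E / kT) - 1.0)

def f_bm(E, kT):
    """Classical Boltzmann factor, for comparison."""
    return np.exp(-E / kT)

E = np.linspace(0.5, 20.0, 100)
# Ordering f_FD < f_Bm < f_BE holds at every energy:
ordering = bool(np.all(f_fd(E, 5.0) < f_bm(E, 5.0))
                and np.all(f_bm(E, 5.0) < f_be(E, 5.0)))
# For kT -> 0 the Fermi-Dirac density approaches a step at E = mu = 0:
step_like = bool(f_fd(np.array([-1.0]), 0.01)[0] > 0.999
                 and f_fd(np.array([1.0]), 0.01)[0] < 1e-4)
```

Plotting the three functions for a few temperatures reproduces the behaviour of Figures 5.2 and 5.3.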

Assuming a value of μ ≈ 5 · 10^{-19} J typical for conduction electrons, we find N/V ≈ 3 · 10^{27} m^{-3}. Applying the general formulae 4.51 and 4.53 we may easily derive the pressure and internal energy of the electron gas. We may also recapture the relation PV = 2U/3, consistent with an earlier result for the classical ideal gas.

5.4 Ideal Bose gas

For bosons the mean population number of a state is

f_p ≡ f*_i/g_i = 1/(z^{-1} e^{βE_p} − 1)    (5.12)

This function looks a bit like the Boltzmann factor ∝ e^{−E_p/kT} but is everywhere larger than the latter. For small μ and large T we again find that f_B ≈ f_Bm. For the purpose of comparison we have included a graph of f_Bm = exp[−E_p/kT] in Figure 5.3.

Figure 5.3: Mean population number of states in a Bose-Einstein system with μ = 0. For comparison we include a graph of the classical density exp[−E/kT] at kT = 5.

PHOTONS IN A BOX

A popular example for this type of system is a "gas" of photons. A box with an internal coating

5. Providing an explanation for the experimentally measured spectrum of black body radiation was one of the decisive achievements in theoretical physics around 1900. Since the total energy in the box is conserved and the photons may change their energy upon absorption and reemission. thus enabling energy exchange and equilibration.5 Problems for Chapter 5 TEST YOUR UNDERSTANDING OF CHAPTER 5: 5. To determine the energy spectrum of the photons we ﬁrst calculate the number dΣ of states within a small energy interval. This is tantamount to assuming µ = 0.14) hν/kT − 1 c e and accordingly the number of photons in a frequency interval [ν. could reproduce the correct shape of the spectrum. rendering the insertion of a black body grain unnecessary. Sketch these functions of energy at two temperatures. 5. by the ad hoc assumption of a quantization of energy. 5. it is given by dE(ν) hν 3 8πV I(ν)dν ≡ = E(ν)n(ν)dν = 3 hν/kT dν (5.1 Population densities: Explain the signiﬁcance of the population densities for Fermi-Dirac and Bose-Einstein statistics. A simple geometric consideration – how many lattice points are lying within a spherical shell – leads to 8πV p2 dp 8πV ν 2 dν dΣ = = (5. and it was Max Planck who. dν] is n(ν)dν = ν2 8πV dν 3 ehν/kT − 1 c (5.16) dν c e −1 For pressure and internal energy of the photon gas we ﬁnd – in contrast to the classical and Fermi cases – the relation P V = U/3. In actual fact the internal walls of the box are never perfectly reﬂecting. These states will then be populated according to equ. 69 . The subsequent eﬀorts to understand the physical implications of that assumption eventually lead up to quantum physics.13) h3 c3 where p = hν/c is the momentum pertaining to the energy E = hν.3 Ideal quantum gases: Discuss two examples of ideal quantum gases. the total number of photons is not conserved.of reﬂecting material in which a photon gas is held in thermodynamic equilibrium is often called a “black body”. 
Earlier attempts based on classical assumptions had failed.15) The amount of energy carried by these photons is the spectral density of black body radiation. The total number of photons in the system is thus ν2 8πV N= fp dΣ = 3 dν (5. Originally this name refers to a grain of black material that might be placed in the box to allow for absorption and re-emission of photons.12.2 Classical limit: Starting from the Fermi-Dirac density discuss the classical limit of quantum statistics. 5.
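Returning to the spectral density of equ. 5.14: its shape is easy to explore numerically. The sketch below evaluates ν³/(e^{hν/kT} − 1) on a frequency grid and locates its maximum, which for the Planck spectrum lies at hν ≈ 2.821 kT (Wien's displacement law in frequency form; the constant 2.821 is quoted from standard references, not from the text above). The temperature is an assumed example value.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant, J s
k = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0            # assumed temperature, K

nu = np.linspace(1e11, 2e14, 200_000)                 # frequency grid, Hz
spectral = nu**3 / (np.exp(h * nu / (k * T)) - 1.0)   # shape of I(nu)
nu_peak = nu[np.argmax(spectral)]

wien = 2.821 * k * T / h   # expected peak frequency from Wien's law
```

At T = 300 K the peak lies in the far infrared, which is why room-temperature bodies do not glow visibly.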

Bibliography

[Abramowitz 65] Abramowitz, M. and Stegun, I.: Handbook of Mathematical Functions. Dover, New York 1965.
[Alder 57] Alder, B. J. and Wainwright, T. E., J. Chem. Phys. 27 (1957) 1208.
[Alder 67] Alder, B. J. and Wainwright, T. E., Phys. Rev. Lett. 18 (1967) 988.
[Allen 90] Allen, M. P. and Tildesley, D. J.: Computer Simulation of Liquids. Oxford University Press, 1990.
[Binder 87] Binder, K. (ed.): Applications of the Monte Carlo Method in Statistical Physics. Springer, Berlin 1987.
[Binder 92] Binder, K. (ed.): The Monte Carlo Method in Condensed Matter Physics. Springer, Berlin 1992.
[Hansen 69] Hansen, J.-P. and Verlet, L., Phys. Rev. 184 (1969) 151.
[Hoover 68] Hoover, W. G. and Ree, F. H., J. Chem. Phys. 49 (1968) 3609.
[Kalos 86] Kalos, M. H. and Whitlock, P. A.: Monte Carlo Methods. Wiley, New York 1986.
[Levesque 69] Levesque, D. and Verlet, L., Phys. Rev. 182 (1969) 307.
[Metropolis 49] Metropolis, N. and Ulam, S., J. Amer. Statist. Assoc. 44 (1949) 335.
[Metropolis 53] Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E., J. Chem. Phys. 21 (1953) 1087.
[Papoulis 81] Papoulis, A.: Probability, Random Variables and Stochastic Processes. McGraw-Hill International Book Company, 1981.
[Posch 87] Holian, B. L., Hoover, W. G. and Posch, H. A., Phys. Rev. Lett. 59/1 (1987) 10.
[Reif 90] Reif, F.: Statistische Physik (Berkeley Physik Kurs 5). Vieweg, Braunschweig 1990.
[Vesely 78] Vesely, F.: Computerexperimente an Flüssigkeitsmodellen. Physik Verlag, Weinheim 1978.
[Vesely 94] Vesely, F. J.: Computational Physics – An Introduction. Plenum, New York 1994; 2nd edn. Kluwer, New York 2001.
