
LECTURE NOTES ON

Physics 74423309

STATISTICAL PHYSICS

Tjipto Prastowo, Ph.D


Endah Rahmawati, M.Si
Utama Alan Deta, M.Pd., M.Si

Department of Physics
Faculty of Mathematics and Natural Sciences
The State University of Surabaya
June 2016
TO THE STUDENT WE LOVE

These lecture notes on Statistical Physics provide the basic materials given in a series of
weekly lectures for third-year students in the Department of Physics, Faculty of Mathematics
and Natural Sciences, the State University of Surabaya (Unesa). The notes introduce a
systematic discussion of the major themes usually taught in Statistical Physics: the classical
and quantum distributions of microscopic systems, and are therefore written to provide the
basic physics concepts of the two views. Each chapter is accompanied by exercises suitable
for student assignments. To master the learning materials covered, you need not just
knowledge but the skill to solve problems, which can only be obtained through continual
practice throughout the notes. You may or may not obtain a superficial knowledge by
attending and listening to the lectures, but you cannot acquire the specific skills expected
in that way. It is common to come across the student remark: "I understand it but I can't
do the problem!" Such a student feels uncomfortable with a problem even though it looks
easy when a lecturer explains it in class. This shows a lack of practice and hence a lack of
the skills required in this course. Our dearest students, please always study with pencil and
paper at hand. You will find that the more able you are to use an effective method of solving
problems, the easier it will be for you to master the materials provided in the course. This
costs you nothing but practice, practice and again practice. Please do remember that the
best way to learn to solve problems is to solve them straight away.
We welcome comments on the content of this course from all readers for further
improvement, as these are important for improving the quality of the classroom teaching
process, particularly in the course of Statistical Physics. We hope these notes are useful for
all users in the department.

Best wishes from us,


Tjipto Prastowo, Endah Rahmawati, Utama Alan Deta


General Guidance

PHYSICS 74423309: STATISTICAL PHYSICS


Pre-requisites: Modern Physics, Quantum Physics

Lecturers: Tjipto Prastowo, Endah Rahmawati, Utama Alan Deta

References: Pointon (1978); Liboff (1980); Reif (1985); Huang (1987);
Beiser (1988); Tipler (1999); Serway et al. (2005); Abdullah (2009)

Time and Place: Thursday, 7-9 am, C30101

Marking Scheme: NA = 20%P + 20%UTS + 30%T + 30%UAS


NA=Final Mark, P=Presence, UTS=Mid-Exam, T=Homework, UAS=Final Exam

Notes:

1. Students arriving late are not allowed to join the class (a maximum delay of 15 minutes
from the starting time is tolerated), except for acceptable reasons.

2. Each lecturer contributes an equal proportion of mark to the final mark.

3. P is possibly reduced to a minimum.

4. UTS = 100% taken from Quiz

5. T = 100% taken from Homework

6. UAS normally contains 2-3 problems.

7. Homework will be distributed to class members and all students are required to hand
in the completed assignments within a given time. Penalties will be applied to any
delay, i.e., a 25% mark reduction for a one-day delay and 50% for a two-day delay.
No mark will be given to those who submit the assignments more than two days late.

8. No additional assignments or examinations will be given after the formal exams (both
Mid and Final), except for specified reasons with very limited permission granted, or
where a medical examination is required.

9. Students are allowed to work with their notes and books in both Mid and Final Exams.

10. Other important issues, if any, will be discussed in class. Students are strongly
encouraged to be active and well-prepared. If possible, a tutorial will be available for a
further, detailed description of each topic.
WEEKLY TIME TABLE

1. Chapter One: Introduction (Week 1)

overview, system of many particles, distribution function, description of assembly
and phase space, the scope of statistical physics

2. Chapter Two: Maxwell-Boltzmann Statistics (Weeks 2, 3 and 4)

basic concepts of MB statistics, classical distribution for Maxwellian particles,
applications of MB statistics, Boltzmann partition function

3. Chapter Three: Bose-Einstein Statistics (Weeks 5, 6 and 7)

basic concepts of BE statistics, quantum distribution for bosons, applications of
BE statistics

4. Chapter Four: Fermi-Dirac Statistics (Weeks 8, 9 and 10)

basic concepts of FD statistics, quantum distribution for fermions, applications
of FD statistics

5. Quiz: Chapters Two, Three and Four (Week 11)

6. Chapter 5: Thermodynamics of Gases (Weeks 12, 13 and 14)

concept of entropy, Gibbs paradox, the failure of the classical distribution
for a semi-classical gas, semi-classical distribution for semi-classical gas molecules,
the specific heat of a diatomic gas

7. Chapter 6: Canonical and Grand Canonical Ensembles (Weeks 15 and 16)

concept of ensemble, canonical ensemble, semi-classical total partition function,
the presence of molecular interactions, imperfect gas, closed and open assemblies,
grand canonical ensemble, micro canonical ensemble

Contents

1 Introduction
1.1 Overview
1.2 Statistical distribution function
1.3 Description of assembly and phase space
1.4 The scope of statistical physics

2 Maxwell-Boltzmann Statistics
2.1 The velocity distribution function
2.2 The momentum and energy distribution functions
2.3 Applications of Maxwell-Boltzmann statistics
2.3.1 The mean, rms and most probable velocities
2.3.2 Equipartition principle of energy
2.3.3 The specific heat of an ideal gas
2.4 Boltzmann partition function
2.5 Exercises

3 Bose-Einstein Statistics
3.1 Weight of configuration
3.2 Population of bosons
3.3 Bose-Einstein gas
3.4 Applications of Bose-Einstein statistics
3.4.1 Black-body radiation
3.4.2 The specific heat of a solid
3.5 Exercises

4 Fermi-Dirac Statistics
4.1 Weight of configuration
4.2 Population of fermions
4.3 Fermi-Dirac gas
4.4 Applications of Fermi-Dirac statistics
4.4.1 Conduction electrons in a metal
4.4.2 The specific heat of a metal
4.5 Exercises

5 Thermodynamics of Gases
5.1 The concept of entropy
5.2 Classical gas
5.3 Gibbs paradox
5.4 Semi-classical gas
5.5 Gas of diatomic molecules
5.5.1 A quantum model for rotational motion
5.5.2 A quantum model for vibrational motion
5.5.3 Total partition function of a diatomic gas
5.5.4 The specific heat of a diatomic gas
5.6 Exercises

6 Canonical, Grand Canonical, and Micro Canonical Ensembles
6.1 Canonical ensemble
6.1.1 Classical total partition function
6.1.2 Semi-classical total partition function
6.1.3 Total partition function in the presence of interactions
6.2 Imperfect gas
6.3 Grand canonical ensemble
6.4 Micro canonical ensemble
6.5 Exercises

Bibliography
Chapter 1

Introduction

1.1 Overview
There are many cases in physics where the exact properties of a given system are difficult
to determine because it consists of many individual components, either interacting or
non-interacting particles. Such systems include molecules of gases, liquids and solids, and
electromagnetic radiation (photons). In principle, it remains possible to write down all the
equations of motion required for describing the dynamics of the system, provided that the
positions and velocities of each individual component are well-defined. This course is thus
devoted to discussions of the basic physics principles and methods appropriate for describing
systems involving very many particles (of order 10^{23}). The inherent problem of these
systems appears to be complex, owing to the complexity of molecular interactions of a large
number of particles. The behaviour of these particles as a whole often reveals qualitative
features different from those of each of the individual particles. Since it is practically
impossible to study the detailed behaviour of each particle, a statistical approach becomes
plausible. The question is: what kind of statistics is appropriate for describing the dynamics
of the system? Can common statistics, in the sense of a general terminology, handle this
problem? Or do we need to seek an alternative method? If so, what should we do then?
Before further discussing the central theme in this course, it is necessary to distinguish
microscopic systems (i.e., systems of order 10^{-9} m or less), whose description is of great
interest, from macroscopic systems (i.e., systems of order 10^{-6} m or greater). In the latter,
macroscopic or thermodynamic parameters, such as temperature, volume, pressure, energy
and other measurable quantities, are usually used to characterise the systems. For an isolated
macroscopic system, these parameters may not vary with time, and the associated condition
in which they are time-independent is said to be equilibrium. On the other hand, for an open
macroscopic system where dynamic interactions between the particles constituting the system
and the surroundings, and hence exchanges of energy, are allowed to occur, macroscopic
parameters will be time-dependent. The time variations of these parameters lead
to mean values over a very long period of time, corresponding to a final equilibrium state.
This is the state of practical interest and the one with which we are concerned.
The discipline discussing the inter-dependence of measurable physical quantities associated
with a given system is called thermodynamics, where most of the formulations used are
derived from experimental measurements. A thermodynamic state is thus defined as a state
specified by a set of values of the macroscopic parameters necessary for describing the system.
The equation of state is then a functional relationship among such parameters in equilibrium.
Here, a fundamental approach, namely statistical physics, provides insight into a given
system from a microscopic point of view. It is therefore the purpose of statistical physics to
examine the properties of macroscopic systems in terms of microscopic properties, without
involving the detailed dynamics of the individual constituents. In this approach, a wide range
of physics principles, from basic laws to advanced methods, is used to examine the systems
under consideration.

1.2 Statistical distribution function


In this section, we discuss discrete and continuous distribution functions in a situation where
a physical system is well-defined. For such a system, three important quantities are involved:
the average or mean, the root-mean-square (rms) and the most probable values.
For discrete systems, statistical data describing the system will be given in a discrete form
x_i. Thus, the three statistical values are written as x_{ave}, x_{rms} and x_{max}, respectively,
which are generally different from one another. In practice, such values are obtained by
introducing a discrete distribution function f_i defined as f_i = n_i / N, where n_i is the
number of occurrences of the individual data value and N is the total number of data. The
following are useful definitions associated with discrete systems.
\sum_i f_i = \sum_i \frac{n_i}{N} = \frac{1}{N} \sum_i n_i = 1        (1.1)

x_{ave} = \sum_i x_i f_i = \frac{1}{N} \sum_i x_i n_i        (1.2)

(x^2)_{ave} = \sum_i x_i^2 f_i = \frac{1}{N} \sum_i x_i^2 n_i \quad \text{and} \quad x_{rms} = \sqrt{(x^2)_{ave}}        (1.3)

x_{max} is obtained when f_i is maximum        (1.4)

Let us examine the above definitions using an explicit example: 10 students taking the Final
Exam on Statistical Physics. The final marks are 20, 20, 35, 60, 60, 60, 60, 80, 80 and 90.
From these marks we have discrete data with distinct values x_i = 20, 35, 60, 80, 90,
occurrences n_i = 2, 1, 4, 2, 1, and N = 10, for which

\sum_i f_i = \frac{1}{N} \sum_i n_i = \frac{1}{10} (2 + 1 + 4 + 2 + 1) = 1

x_{ave} = \frac{1}{N} \sum_i x_i n_i = \frac{1}{10} (20 \cdot 2 + 35 \cdot 1 + 60 \cdot 4 + 80 \cdot 2 + 90 \cdot 1) = 56.5

x_{rms} = \sqrt{\frac{1}{N} \sum_i x_i^2 n_i} = \sqrt{\frac{1}{10} (20^2 \cdot 2 + 35^2 \cdot 1 + 60^2 \cdot 4 + 80^2 \cdot 2 + 90^2 \cdot 1)} = 61.1

x_{max} = 60, since f_1 = n_1/N = 0.2, f_2 = n_2/N = 0.1, f_3 = n_3/N = 0.4,
f_4 = n_4/N = 0.2 and f_5 = n_5/N = 0.1, so that f_3 is the largest.
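The arithmetic of this example can be reproduced with a short script; a minimal sketch in Python (the variable names are ours):

```python
from collections import Counter
from math import sqrt

# Final marks of the 10 students in the worked example above
marks = [20, 20, 35, 60, 60, 60, 60, 80, 80, 90]
N = len(marks)

counts = Counter(marks)                      # n_i for each distinct x_i
f = {x: n / N for x, n in counts.items()}    # f_i = n_i / N
assert abs(sum(f.values()) - 1.0) < 1e-12    # normalisation, Eq. (1.1)

x_ave = sum(x * fi for x, fi in f.items())            # Eq. (1.2)
x_rms = sqrt(sum(x**2 * fi for x, fi in f.items()))   # Eq. (1.3)
x_max = max(f, key=f.get)                             # Eq. (1.4): largest f_i

print(x_ave, x_rms, x_max)
```

Running it gives the mean 56.5, an rms of about 61.1 and the most probable mark 60, matching the values above.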

As noted earlier, statistical physics is concerned with systems of very many individual
particles, hence the systems can effectively be treated as continuous systems. In this context,
a continuous distribution function f(x) is then used to describe the dynamics of the systems.
Therefore, (1.1) through (1.4) are replaced with

\int f(x)\, dx = 1        (1.5)

x_{ave} = \int x f(x)\, dx        (1.6)

(x^2)_{ave} = \int x^2 f(x)\, dx \quad \text{and} \quad x_{rms} = \sqrt{(x^2)_{ave}}        (1.7)

x_{max} occurs when the first derivative of f(x) with respect to x equals zero.        (1.8)

Note that, as in the case of (1.1), (1.5) represents the total probability of finding a given
system in a particular state.

An interesting quantity frequently discussed in statistics is the standard deviation σ,
introduced here as a measure of the spread of the data about the average value (i.e., the
scatter of the measurements). The standard deviation of the data distribution is defined as

\sigma = \sqrt{(x^2)_{ave} - x_{ave}^2}        (1.9)

where σ is, by definition, positive since x_{rms} ≥ x_{ave}. For a normal or Gaussian
distribution, two-thirds of the values constituting the data are expected to fall within the
range x_{ave} ± σ. There will also be random and systematic errors in the measurements.
Mean values of these measurements thus include some error, indicated by the standard error
of the mean σ_m, defined as σ_m = σ/\sqrt{N}. Therefore, it is also possible to write the
results of the measurements as x_{ave} ± σ_m.
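Equation (1.9) and the standard error can be checked on the same marks used in the discrete example; a small sketch (data and names are ours):

```python
from math import sqrt

# Marks from the discrete example in this section
data = [20, 20, 35, 60, 60, 60, 60, 80, 80, 90]
N = len(data)

x_ave = sum(data) / N                       # mean
x2_ave = sum(x**2 for x in data) / N        # mean square

sigma = sqrt(x2_ave - x_ave**2)             # standard deviation, Eq. (1.9)
sigma_m = sigma / sqrt(N)                   # standard error of the mean

print(sigma, sigma_m)
```

For these marks, σ is about 23.2 and σ_m about 7.35, so the result would be quoted as 56.5 ± 7.4.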

1.3 Description of assembly and phase space


An assembly is defined as a group of individual components forming a physical body of
particular interest. For the case of a gas placed in a closed container, such components are
gas atoms or molecules, while the assembly is the gas itself. The state of an assembly is then
determined by the (time-averaged) position and momentum of each of the individual
components in a six-dimensional space, commonly known as phase space.
In a Cartesian system, phase space consists of both Euclidean space (x, y, z) and
momentum space (p_x, p_y, p_z). Any physical quantity can then be written in terms of
position and momentum coordinates. For example, an element of volume dτ of a
single-particle system in phase space is written as

d\tau = dx\, dy\, dz\, dp_x\, dp_y\, dp_z        (1.10)

where dx dy dz is an element of volume in the Euclidean space and dp_x dp_y dp_z is an
element of volume in the momentum space. The (non-relativistic) kinetic energy of this
system is given by

\epsilon = \frac{1}{2m} \left( p_x^2 + p_y^2 + p_z^2 \right)        (1.11)

where m is the mass of the system, and p_x, p_y and p_z are the components of the linear
momentum of the system in the x, y and z directions, respectively.
The formulation above can also be extended to a system consisting of many particles.
The corresponding element of volume dτ_N and the total kinetic energy E for N particles
are respectively written as

d\tau_N = dx_1\, dy_1\, dz_1\, dp_{x1}\, dp_{y1}\, dp_{z1} \cdots dx_N\, dy_N\, dz_N\, dp_{xN}\, dp_{yN}\, dp_{zN} = \prod_{i=1}^{N} dx_i\, dy_i\, dz_i\, dp_{xi}\, dp_{yi}\, dp_{zi}        (1.12)

E = \epsilon_1 + \epsilon_2 + \epsilon_3 + \cdots + \epsilon_N = \sum_{i=1}^{N} \frac{1}{2m} \left( p_{xi}^2 + p_{yi}^2 + p_{zi}^2 \right)        (1.13)

where i in both (1.12) and (1.13) refers to individual particles.
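In code, (1.13) is simply a sum of the single-particle energies (1.11) over the assembly; a small sketch, where the mass and the randomly drawn momentum components are arbitrary illustrative values:

```python
import random

# Illustrative assembly: 1000 particles with made-up momenta (arbitrary
# units) and an arbitrary common mass m.
m = 2.0
random.seed(0)
momenta = [(random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1))
           for _ in range(1000)]

def epsilon(p):
    """Single-particle kinetic energy, Eq. (1.11)."""
    px, py, pz = p
    return (px**2 + py**2 + pz**2) / (2 * m)

# Total kinetic energy of the assembly, Eq. (1.13)
E = sum(epsilon(p) for p in momenta)
print(E)
```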

1.4 The scope of statistical physics


While modern statistical physics may cover a broad spectrum of materials, ranging from
classical physics to plasma physics, the main theme considered in this course merely involves
basic formulations of classical statistics and its counterpart, quantum statistics, with
their simple applications being discussed in separate chapters. The two kinds of statistics
deal with classical and quantum assemblies, respectively, where fundamental differences in the
basic assumptions are due to the distinct behaviour of the two assemblies. For a classical
assembly, each of the individual components constituting the assembly is completely
distinguishable, whereas in quantum assemblies individual components are considered as
indistinguishable particles.
Another fundamental difference between the two statistics is that the energy and momentum
of classical particles spread over a continuous spectrum, while in quantum systems
particles are distributed over discrete energy levels. The way in which this discretised energy
is occupied by quantum particles leads to two different groups of particles, namely bosons
and fermions. The detailed description of both classical and quantum particles, along with
the applications of the associated systems to real cases, will be discussed in separate chapters
(Chapter 2 for classical systems, Chapter 3 for bosons and Chapter 4 for fermions).
A comprehensive discussion will be given in Chapter 5 for the statistical thermodynamics of a
perfectly ideal gas, in which molecular interactions between gas molecules have no effects on
the thermodynamic properties of such a gas. The concept of entropy as a fundamental
physical quantity, together with its application to the difference in the total entropy between
classical and semi-classical treatments of a given system, will be introduced. The application
of the quantum distribution to the case of a diatomic gas, used to determine the specific heat
of such a gas, will be discussed at the end of that chapter.
A further discussion of special topics associated with the possible types of ensemble in physics,
where appreciable interactions between the constituent particles are no longer negligible,
and hence the total energy and the number of particles or systems may be variable, will be
summarised in a general description in Chapter 6. In that chapter, fundamental differences
in the conditions on thermodynamic parameters between canonical, grand canonical, and
micro canonical ensembles will be briefly introduced.
The main reference of this course is Pointon (1978), as this book is appropriate and much
simpler for third-year undergraduate students than other books, without losing the basic
ideas of the physics behind the scene. All students are also encouraged to do further reading,
focusing on the topics considered in this course, from a basic level (Tipler, 1999; Abdullah,
2009) and an intermediate level (Beiser, 1988; Serway et al., 2005) to an advanced level
(Reif, 1985; Huang, 1987).
Chapter 2

Maxwell-Boltzmann Statistics

It is desired to find a distribution function that characterises a system of classical particles.
This class of particles is allowed to have arbitrary values of both energy and momentum.
The underlying assumptions for these particles, by which they obey Maxwell-Boltzmann
(MB) statistics, are as follows:

classical particles are distinguishable

the particle size is relatively small compared with the average distance between particles

molecular interaction between classical particles is negligible

the density of classical particles is sufficiently low

there is no theoretical limit on the number of particles occupying a given energy state

Here we provide a simple illustration in Figure 2.1 below, demonstrating the ways in which
six distinguishable particles can occupy energy levels from the lowest state up to the 8E level
at the top. Such arrangements do not hold for indistinguishable particles in Quantum
Statistics.

[Figure 2.1 omitted. Caption: A simple diagram showing the likely different occupying
arrangements for six distinguishable particles with separated energy levels (taken from
p. 337, Ch. 10, Modern Physics, Serway et al., 2005).]

In this chapter, we will derive this classical distribution in a simple way, independent of
the details of molecular interaction. We first construct the classical velocity distribution
function for each direction of motion, from which we then derive the MB velocity distribution
function in §2.1. Further, we examine the momentum and energy distribution functions in
§2.2 to obtain a full description of a classical assembly. This is followed by a discussion
of the applications of the MB statistical distribution in §2.3. The fundamental concept of
the Boltzmann partition function is introduced in §2.4.

2.1 The velocity distribution function


Here, we are particularly interested in finding the velocity distribution function of a classical
system containing very many particles. Such a system can be an ideal gas in a fixed volume.
Once this function is obtained, other distribution functions (i.e., the energy and momentum
distribution functions) can be directly derived.
We begin by describing the dynamics of the molecules of such a gas using a Cartesian
system for simplicity. The density ρ of the gas molecules per unit volume in velocity space
is determined by the product of the total number N of the molecules and the velocity
distribution functions for all directions of motion,

\rho = N f(v_x)\, f(v_y)\, f(v_z)        (2.1)

where f(v_x), f(v_y) and f(v_z) are the velocity distribution functions, and v_x, v_y and
v_z are the components of the velocity along the x, y and z axes, respectively.
The total derivative dρ of (2.1) with respect to v_x, v_y and v_z is given by

d\rho = \frac{\partial \rho}{\partial v_x} dv_x + \frac{\partial \rho}{\partial v_y} dv_y + \frac{\partial \rho}{\partial v_z} dv_z        (2.2)

which can be, due to (2.1), rewritten as

d\rho = N \left[ f'(v_x) f(v_y) f(v_z)\, dv_x + f(v_x) f'(v_y) f(v_z)\, dv_y + f(v_x) f(v_y) f'(v_z)\, dv_z \right]        (2.3)

It is desirable to have a separated term for each direction of motion on the RHS of (2.3).
Thus we divide (2.3) by ρ, resulting in

\frac{d\rho}{\rho} = \frac{f'(v_x)}{f(v_x)} dv_x + \frac{f'(v_y)}{f(v_y)} dv_y + \frac{f'(v_z)}{f(v_z)} dv_z

As the density is, for simplicity, assumed to be constant, hence dρ = 0, the above expression
becomes

0 = \frac{f'(v_x)}{f(v_x)} dv_x + \frac{f'(v_y)}{f(v_y)} dv_y + \frac{f'(v_z)}{f(v_z)} dv_z        (2.4)

We assume that the gas molecules are separated by distances that are sufficiently large
compared with the molecular diameters, and that no external forces act upon the molecules
except when they collide. In the absence of such forces, there is no preferred direction for
the velocity of each molecule and the speed is constant. We can then write

\mathbf{v} \cdot \mathbf{v} = v^2 = v_x^2 + v_y^2 + v_z^2 = \text{constant}

which can be differentiated with respect to v_x, v_y and v_z to obtain

v_x\, dv_x + v_y\, dv_y + v_z\, dv_z = 0        (2.5)

as a restrictive condition for (2.4). Thus, we have so far a set of two differential equations,
(2.4) and (2.5). The solution of these equations can be obtained by introducing a constant
parameter α, to be determined later, by which we multiply (2.5) to obtain

\alpha \left( v_x\, dv_x + v_y\, dv_y + v_z\, dv_z \right) = 0        (2.6)

We then add (2.6) to (2.4) to obtain

\left[ \frac{f'(v_x)}{f(v_x)} + \alpha v_x \right] dv_x + \left[ \frac{f'(v_y)}{f(v_y)} + \alpha v_y \right] dv_y + \left[ \frac{f'(v_z)}{f(v_z)} + \alpha v_z \right] dv_z = 0

where dv_x, dv_y and dv_z are all non-zero for non-trivial solutions. Instead, the system of
homogeneous differential equations below is required,

\frac{f'(v_x)}{f(v_x)} + \alpha v_x = 0 \qquad \frac{f'(v_y)}{f(v_y)} + \alpha v_y = 0 \qquad \frac{f'(v_z)}{f(v_z)} + \alpha v_z = 0

from which f(v_x), f(v_y) and f(v_z) will be determined. For the case of the x-component,

\frac{df(v_x)}{dv_x} = -\alpha v_x f(v_x) \quad \text{or} \quad \frac{df(v_x)}{f(v_x)} = -\alpha v_x\, dv_x

which can be easily solved. Integrating both sides and applying simple calculus yield

\ln f(v_x) = -\frac{1}{2} \alpha v_x^2 + \ln C = \ln \left( C e^{-\alpha v_x^2 / 2} \right)

We thus have

f(v_x) = C e^{-\alpha v_x^2 / 2}        (2.7)
where C is an integration constant, determined by normalising (2.7) such that

\int_{-\infty}^{\infty} f(v_x)\, dv_x = 1 \quad \text{or} \quad C \int_{-\infty}^{\infty} e^{-\alpha v_x^2 / 2}\, dv_x = 1        (2.8)

As e^{-\alpha v_x^2 / 2} is an even function, (2.8) can then be written as

2C \int_{0}^{\infty} e^{-\alpha v_x^2 / 2}\, dv_x = 1

(see Appendix 6 of Pointon (1978) for the properties of this type of integral). We refer to

\int_{0}^{\infty} x^n e^{-a x^2}\, dx = \frac{1}{2 a^{(n+1)/2}}\, \Gamma\!\left( \frac{n+1}{2} \right)        (2.9)

and use \Gamma(1/2) = \sqrt{\pi} to calculate C as

C \left( \frac{2\pi}{\alpha} \right)^{1/2} = 1 \quad \text{or} \quad C = \sqrt{\frac{\alpha}{2\pi}}

Hence, (2.7) becomes

f(v_x) = \sqrt{\frac{\alpha}{2\pi}}\, e^{-\alpha v_x^2 / 2}        (2.10)
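The integral formula (2.9) and the resulting normalisation can be verified numerically; a sketch, where the value of α and the quadrature settings are our illustrative choices:

```python
from math import exp, gamma, pi, sqrt

def integral(n, a, upper=10.0, steps=200_000):
    """Midpoint-rule estimate of the integral of x^n exp(-a x^2) over [0, inf)."""
    h = upper / steps
    return h * sum(((i + 0.5) * h) ** n * exp(-a * ((i + 0.5) * h) ** 2)
                   for i in range(steps))

def eq_2_9(n, a):
    """Closed form of Eq. (2.9): Gamma((n+1)/2) / (2 a^((n+1)/2))."""
    return gamma((n + 1) / 2) / (2 * a ** ((n + 1) / 2))

alpha = 1.7                      # arbitrary positive constant
C = sqrt(alpha / (2 * pi))       # normalisation constant of Eq. (2.10)

assert abs(integral(0, alpha / 2) - eq_2_9(0, alpha / 2)) < 1e-6
assert abs(2 * C * integral(0, alpha / 2) - 1.0) < 1e-6   # Eq. (2.8)
print("Eq. (2.9) and the normalisation of f(v_x) agree numerically")
```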
Using (2.10), we can calculate the mean square velocity as

(v_x^2)_{ave} = \int_{-\infty}^{\infty} v_x^2 f(v_x)\, dv_x = \sqrt{\frac{\alpha}{2\pi}} \int_{-\infty}^{\infty} v_x^2 e^{-\alpha v_x^2 / 2}\, dv_x = \frac{1}{\alpha}        (2.11)

where we have again used (2.9) and the simple relation \Gamma(n+1) = n\, \Gamma(n).
According to the kinetic theory of a perfectly ideal gas, the translational kinetic energy
for every degree of freedom is equal to the thermal energy kT/2, such that

\frac{1}{2} m (v_x^2)_{ave} = \frac{1}{2} kT \qquad \frac{1}{2} m (v_y^2)_{ave} = \frac{1}{2} kT \qquad \frac{1}{2} m (v_z^2)_{ave} = \frac{1}{2} kT        (2.12)

where m is the mass of a molecule, k is Boltzmann's constant (1.38 × 10^{-23} J K^{-1})
and T is the equilibrium temperature of the molecules. We thus have

(v_x^2)_{ave} = \frac{kT}{m}        (2.13)

for the x-component of the velocity. Combining (2.11) and (2.13) yields α = m/kT, and
similar results are obtained for the y and z directions. We can thus rewrite the x-component
of the velocity distribution function in (2.10) as

f(v_x) = \sqrt{\frac{m}{2\pi kT}}\, e^{-m v_x^2 / 2kT}        (2.14)

Notice that the molecules are free to move in three-dimensional motion. In the same manner,
the velocity distribution functions for the other two degrees of freedom (the y and z
components) can be obtained as follows,

f(v_y) = \sqrt{\frac{m}{2\pi kT}}\, e^{-m v_y^2 / 2kT} \quad \text{and} \quad f(v_z) = \sqrt{\frac{m}{2\pi kT}}\, e^{-m v_z^2 / 2kT}        (2.15)

Having derived the velocity distribution function for each direction of the motion, we are
now ready to derive the corresponding three-dimensional velocity distribution function in a
Cartesian coordinate system by defining the density of gas molecules ρ, previously given
in (2.1), as

\rho = \frac{dN_{v_x v_y v_z}}{dv_x\, dv_y\, dv_z}        (2.16)

where dN_{v_x v_y v_z} is the number of gas molecules having velocity components in the
range v_x to v_x + dv_x along the x-axis, v_y to v_y + dv_y along the y-axis, and v_z to
v_z + dv_z along the z-axis, and dv_x dv_y dv_z is an element of volume in velocity space
in a Cartesian coordinate system. Substituting (2.16) into (2.1) gives

\frac{dN_{v_x v_y v_z}}{dv_x\, dv_y\, dv_z} = N f(v_x)\, f(v_y)\, f(v_z)        (2.17)

or

dN_{v_x v_y v_z} = N \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m(v_x^2 + v_y^2 + v_z^2)/2kT}\, dv_x\, dv_y\, dv_z        (2.18)

Equation (2.18) can be written in a spherical coordinate system as follows,

dN_{v \theta \phi} = N \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m v^2 / 2kT}\, v^2 \sin\theta\, d\theta\, d\phi\, dv        (2.19)

Integrating (2.19) over the whole angular space and using (2.17) written in terms of v yield

dN_v = N f(v)\, dv = N \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m v^2 / 2kT}\, v^2\, dv \int_{0}^{\pi} \sin\theta\, d\theta \int_{0}^{2\pi} d\phi

or

dN_v = N f(v)\, dv = 4\pi N \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m v^2 / 2kT}\, v^2\, dv        (2.20)
where dNv is the number of gas molecules having velocity in the range v to v + dv. It is
clear from (2.20) that

f(v) = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m v^2 / 2kT}\, v^2        (2.21)

which is known as the MB velocity distribution function. This is more convenient to
use than the components of the velocity distribution function defined in (2.14) and (2.15),
in that it is independent of the direction of motion. The behaviour of the distribution
function (2.21) is governed by the v^2 term (when v → 0) and the exponential term
(when v → ∞); which term dominates determines the shape of the distribution and hence
the probability of finding particles with a given speed. In short, f(v) is appreciable only
for finite v.
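That (2.21) integrates to one can be checked numerically; a sketch, where the molecular mass (roughly that of N2) and the temperature are illustrative values chosen by us:

```python
from math import exp, pi

k = 1.38e-23      # Boltzmann's constant, J/K
m = 4.65e-26      # molecular mass, kg (assumed, roughly N2)
T = 300.0         # temperature, K (assumed)

def f(v):
    """MB speed distribution f(v), Eq. (2.21)."""
    return (4 * pi * (m / (2 * pi * k * T)) ** 1.5
            * exp(-m * v**2 / (2 * k * T)) * v**2)

# Midpoint-rule integral over 0..4000 m/s; f(v) is negligible beyond
h, steps = 4000.0 / 100_000, 100_000
total = h * sum(f((i + 0.5) * h) for i in range(steps))
print(total)      # very close to 1
```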

2.2 The momentum and energy distribution functions


As previously mentioned, the velocity distribution function defined in (2.21) can be used to
derive other forms of distribution function. For convenience, we will first derive the momentum
distribution function, then the energy distribution function. But first, we rewrite (2.21) as

f(v)\, dv = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} e^{-m v^2 / 2kT}\, v^2\, dv        (2.22)

where f(v) dv is the probability associated with the MB velocity distribution function. The
integral of this over the whole velocity space is, by definition, unity, since the integral
describes the total probability of finding a molecule with speed in the range v to v + dv
somewhere in velocity space. The Euclidean volume V of each molecule is absent from the
present discussion; it follows that the properties of the molecules of ideal gases are
independent of the geometry of the molecules.
Using the simple relation from classical physics p = mv and its associated differential form
dp = m dv, we can then write an expression for the probability of the MB momentum
distribution f(p) dp in momentum space as

f(p)\, dp = 4\pi \left( \frac{1}{2\pi m kT} \right)^{3/2} e^{-p^2 / 2mkT}\, p^2\, dp        (2.23)

which characterises classical assemblies having momentum in the range p to p + dp. As in
the case of the probability of the MB velocity distribution in (2.22), the integral of (2.23)
over the whole momentum space equals one. It is understood that

f(p) = 4\pi \left( \frac{1}{2\pi m kT} \right)^{3/2} e^{-p^2 / 2mkT}\, p^2        (2.24)

where f(p) is known as the MB momentum distribution function.

Again, using a simple formula from classical physics, this time relating momentum to
kinetic energy, p = \sqrt{2m\epsilon}, with the associated differential form
dp = \sqrt{m/2\epsilon}\, d\epsilon, we can write an expression for the probability of the
MB energy distribution f(ε) dε as

f(\epsilon)\, d\epsilon = 2\pi \left( \frac{1}{\pi kT} \right)^{3/2} e^{-\epsilon/kT}\, \epsilon^{1/2}\, d\epsilon        (2.25)

where f(ε) dε describes classical assemblies with energy in the range ε to ε + dε. Again, as
in the case of the probabilities of the MB velocity and momentum distributions given in
(2.22) and (2.23), respectively, the integral of (2.25) over the whole energy space equals
one. It then follows that

f(\epsilon) = 2\pi \left( \frac{1}{\pi kT} \right)^{3/2} e^{-\epsilon/kT}\, \epsilon^{1/2}        (2.26)

where f(ε) is known as the MB energy distribution function.
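The energy distribution (2.26) can be checked for normalisation, and for the equipartition result that the mean kinetic energy is (3/2)kT as in (2.12); a sketch, where working in units of kT is our choice to remove the constants:

```python
from math import exp, pi, sqrt

def f_eps(x):
    """f(eps) of Eq. (2.26) with eps measured in units of kT."""
    return 2 / sqrt(pi) * exp(-x) * sqrt(x)

# Midpoint-rule integrals over 0..60 kT (the integrand is negligible beyond)
h, steps = 60.0 / 600_000, 600_000
xs = [(i + 0.5) * h for i in range(steps)]
norm = h * sum(f_eps(x) for x in xs)           # should be close to 1
mean = h * sum(x * f_eps(x) for x in xs)       # should be close to 3/2

print(norm, mean)
```

The printed values are close to 1.0 and 1.5, i.e. the mean energy is (3/2)kT, consistent with equipartition over three translational degrees of freedom.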

2.3 Applications of Maxwell-Boltzmann statistics


Having derived the MB velocity, momentum and energy distribution functions, it is time to
further examine the applications of the MB statistics to classical systems, such as a classical
perfect gas.

2.3.1 The mean, rms and most probable velocities


As mentioned in 1.2, there are three dynamic quantities that can be drawn from a given
distribution function. These quantities are the mean, rms and most probable values.
For example, we will use the MB velocity distribution function given in (2.21) to derive the
mean, rms and most probable velocities of a perfectly ideal gas in a closed container having
total volume V and N molecules. It is expected that maximum information can be obtained
by calculating the three statistical values.
The following discussion is aimed at understanding the nature of the calculation of such values. Here, we first calculate the mean velocity v_{ave}, which is, by definition and together with (2.22), written as

v_{ave} = \int_0^\infty v f(v)\, dv = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} \int_0^\infty e^{-mv^2/2kT}\, v^3\, dv \qquad (2.27)

Using (2.9) and a simple relation, \Gamma(n+1) = n!, the above equation yields

v_{ave} = \sqrt{\frac{8kT}{\pi m}} \qquad (2.28)

where m and T are the mass and temperature of such a gas, respectively. The rms velocity can be derived in a similar manner to that used for the mean velocity. Thus, we have

(v^2)_{ave} = \int_0^\infty v^2 f(v)\, dv = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} \int_0^\infty e^{-mv^2/2kT}\, v^4\, dv \qquad (2.29)

Again, applying (2.9) and \Gamma(n+1) = n\,\Gamma(n) to (2.29) yields

(v^2)_{ave} = \frac{3kT}{m} \quad \text{and} \quad v_{rms} = \sqrt{(v^2)_{ave}} = \sqrt{\frac{3kT}{m}} \qquad (2.30)

The result for v_{rms} is in line with the sum of the equations in (2.12). To make this clear, we here rewrite it as

\frac{1}{2} m (v^2)_{ave} = \frac{1}{2} m (v_x^2)_{ave} + \frac{1}{2} m (v_y^2)_{ave} + \frac{1}{2} m (v_z^2)_{ave} = \frac{3}{2} kT \qquad (2.31)

The most probable velocity occurs when the MB velocity distribution function defined in (2.21) is maximum, i.e., the first derivative of this function is zero. Thus, we have

\frac{df(v)}{dv} = 4\pi \left( \frac{m}{2\pi kT} \right)^{3/2} \left( 2v - \frac{mv^3}{kT} \right) e^{-mv^2/2kT} = 0 \qquad (2.32)

for which we obtain

v_{max} = \sqrt{\frac{2kT}{m}} \qquad (2.33)

The results for the mean, rms and most probable velocities confirm that they are different. The rms velocity in (2.30) is the one directly related to the mean energy of a system whose energy takes a special form that will be discussed below.
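As a quick numerical check, the three characteristic speeds can be recovered directly from (2.22) by brute-force integration. The sketch below is our own illustration, not part of the original notes; the helium-atom mass, temperature and integration grid are illustrative choices. It compares the numerical results against the closed forms (2.28), (2.30) and (2.33).

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K
m = 6.646e-27      # illustrative molecular mass (a helium atom), kg

def f(v):
    """MB speed distribution (2.22) without the dv."""
    a = (m / (2.0 * math.pi * k * T)) ** 1.5
    return 4.0 * math.pi * a * math.exp(-m * v * v / (2.0 * k * T)) * v * v

# midpoint-rule integration over a range wide enough to capture the tail
n, v_hi = 100000, 10000.0
dv = v_hi / n
norm = mean = msq = 0.0
f_best, v_mp = -1.0, 0.0
for i in range(n):
    v = (i + 0.5) * dv
    fv = f(v)
    norm += fv * dv
    mean += v * fv * dv
    msq += v * v * fv * dv
    if fv > f_best:          # track the peak, i.e. the most probable speed
        f_best, v_mp = fv, v

v_ave, v_rms = mean, math.sqrt(msq)

print(round(norm, 4))                                          # normalisation, close to 1
print(round(v_ave / math.sqrt(8 * k * T / (math.pi * m)), 3))  # ratio to (2.28), close to 1
print(round(v_rms / math.sqrt(3 * k * T / m), 3))              # ratio to (2.30), close to 1
print(round(v_mp / math.sqrt(2 * k * T / m), 3))               # ratio to (2.33), close to 1
```

Note that the ordering v_{max} < v_{ave} < v_{rms} holds for any mass and temperature, since the three speeds differ only by the numerical factors \sqrt{2}, \sqrt{8/\pi} and \sqrt{3}.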

2.3.2 Equipartition principle of energy


In this section, we discuss the equipartition principle of energy for a classical system having dynamic variables that occur squared in the formula for the energy of the system. According to this principle, such a system in thermal equilibrium at temperature T has an average energy of kT/2 for each independent mode of motion, the so-called degree of freedom. For example, a perfectly ideal gas with the same probability of translational movement along all directions in phase space, in which molecular interactions are negligible, has three degrees of freedom, while a model of a one-dimensional harmonic oscillator, where potential interactions between individual components give rise to a potential energy, has two degrees of freedom. The one-, or more relevantly three-, dimensional harmonic oscillator model will be of use when modelling the atoms of a solid as a system of vibrating harmonic oscillators.
[Figure 2.2: A curve showing the three characteristic speeds of gas molecules, with v_{mp} indicating v_{max} in (2.33) and \bar{v} denoting v_{ave} in (2.28), and v_{rms} also indicated; the number of molecules in the range \Delta v is equal to the area of the shaded rectangle n(v)\,\Delta v (taken from p.341, Ch.10, Modern Physics, Serway et al., 2005).]
For convenience, we first examine the application of the equipartition principle of energy to a perfectly ideal gas, and later extend the result to a harmonic oscillator.
The mean energy of a perfectly ideal gas having energy between \epsilon and \epsilon + d\epsilon can be, with the help of (2.26), calculated as

\epsilon_{ave} = \int_0^\infty \epsilon f(\epsilon)\, d\epsilon = 2\pi \left( \frac{1}{\pi kT} \right)^{3/2} \int_0^\infty e^{-\epsilon/kT}\, \epsilon^{3/2}\, d\epsilon \qquad (2.34)

Using the following integral involving the Gamma function,

\int_0^\infty x^n e^{-ax}\, dx = \frac{1}{a^{n+1}}\, \Gamma(n+1) \qquad (2.35)

we obtain

\epsilon_{ave} = 2\pi \left( \frac{1}{\pi kT} \right)^{3/2} (kT)^{5/2}\, \frac{3}{4}\sqrt{\pi} = \frac{3}{2}\, kT \qquad (2.36)

When the gas is in equilibrium, the total mean energy will be equally distributed among the translational kinetic energies associated with momentum in the x, y and z directions. This is best described by (2.12) and (2.31). Thus, we have

\epsilon_{ave} = \frac{1}{2} m (v^2)_{ave} = 3 \cdot \frac{1}{2} m (v_x^2)_{ave} = \frac{3}{2}\, kT \qquad (2.37)

and hence

\frac{1}{2} m (v_x^2)_{ave} = \frac{1}{2}\, kT \qquad (2.38)

The result in (2.38) shows that each component of velocity and momentum that appears as
a squared term in the expression for the corresponding kinetic energy will contribute kT /2
per molecule. Therefore, this value holds for the average kinetic energy in each independent
direction along a particular axis.
Note that the degrees of freedom are not only associated with translational motion but
also associated with rotational and vibrational motions. When this occurs, the total energy
of the system is the sum of all possible forms of energy associated with types of motion.
Here we provide an example of a combined motion, in the absence of rotational motion,
where both translational and vibrational motions are present. Potential interaction between
individual particles may lead to vibrational motion, where the corresponding potential energy
is in the form of a quadratic term of position. For example, the total energy of a model of
one-dimensional harmonic oscillator can be written as

\epsilon = \frac{p_x^2}{2m} + \frac{1}{2}\, \kappa x^2 \qquad (2.39)

where \kappa denotes the restoring force per unit displacement. Now we introduce the symbol \langle\ \rangle, which is the same as the (\ )_{ave} used before. Thus, we have

\langle \epsilon \rangle_x = \left\langle \frac{p_x^2}{2m} \right\rangle + \left\langle \frac{1}{2}\, \kappa x^2 \right\rangle = \frac{1}{2}\, kT + \frac{1}{2}\, kT = kT \qquad (2.40)

Notice that the LHS of (2.40) describes the mean energy of the 1-D harmonic oscillator, with the x-components of momentum and position each contributing kT/2. For a 3-D harmonic oscillator, we have
\langle \epsilon \rangle_{xyz} = \left\langle \frac{p_x^2}{2m} \right\rangle + \left\langle \frac{p_y^2}{2m} \right\rangle + \left\langle \frac{p_z^2}{2m} \right\rangle + \left\langle \frac{1}{2}\, \kappa x^2 \right\rangle + \left\langle \frac{1}{2}\, \kappa y^2 \right\rangle + \left\langle \frac{1}{2}\, \kappa z^2 \right\rangle = 6 \cdot \frac{1}{2}\, kT = 3kT \qquad (2.41)

The above result in (2.41) is useful in examining the behaviour of a solid with a high density of atoms. This behaviour will be examined in more detail in Chapter 3.
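The equipartition result (2.40) can be checked by direct sampling. The sketch below is our own illustration, not part of the notes: in equilibrium the Boltzmann factor e^{-\epsilon/kT} makes p_x and x independent Gaussian variables with variances mkT and kT/\kappa, so averaging the two quadratic terms over many samples should give kT/2 each. The mass and spring constant are illustrative values.

```python
import math
import random

random.seed(42)
k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # temperature, K
m = 1.0e-26        # illustrative particle mass, kg
kappa = 1.0        # illustrative restoring-force constant, N/m

# sample p_x and x from their equilibrium Gaussian distributions,
# one Gaussian per squared term in the energy (2.39)
n = 200000
e_kin = e_pot = 0.0
for _ in range(n):
    p_x = random.gauss(0.0, math.sqrt(m * k * T))
    x = random.gauss(0.0, math.sqrt(k * T / kappa))
    e_kin += p_x * p_x / (2.0 * m)
    e_pot += 0.5 * kappa * x * x
e_kin /= n
e_pot /= n

print(round(e_kin / (0.5 * k * T), 2))       # close to 1: kT/2 per squared term
print(round(e_pot / (0.5 * k * T), 2))       # close to 1: kT/2 per squared term
print(round((e_kin + e_pot) / (k * T), 2))   # close to 1: <eps> = kT, as in (2.40)
```

The statistical error of each average scales as 1/\sqrt{n}, so with 2 \times 10^5 samples the ratios agree with unity to well under a percent.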

2.3.3 The specific heat of an ideal gas

In this section, we derive a useful quantity, namely the specific heat, by defining the total energy of a classical system of N particles as E = N \langle \epsilon \rangle. The specific heat discussed here is that of a perfectly ideal gas. For such a gas with N molecules in total and the mean energy \langle \epsilon \rangle

as given in (2.36), we have

E = N \langle \epsilon \rangle = \frac{3}{2}\, NkT = \frac{3}{2}\, n N_0 kT = \frac{3}{2}\, nRT \qquad (2.42)

where n denotes the number of moles, N_0 = 6.02 \times 10^{23}\ \text{mole}^{-1} is Avogadro's number and R = 8.314\ \text{J mole}^{-1}\,\text{K}^{-1} is the universal gas constant. We generalise this result for a given system with a total of f degrees of freedom at temperature T to calculate the total energy of the system as follows,

E = \frac{1}{2}\, f NkT = \frac{1}{2}\, f nRT \qquad (2.43)
It is useful to introduce the molar specific heat at a constant volume V , defined as
the heat capacity per unit mole. Unlike the heat capacity, the molar specific heat depends
only on the nature of the system under consideration. The molar specific heat at a constant
volume cV is thus written as

1 @E 1 1
cV = = f No k = f R (2.44)
n @T V 2 2

For monoatomic molecules, c_V = \frac{3}{2} R as each molecule has three degrees of freedom (f = 3) associated with translational motion (this result is also confirmed by classical measurements in the laboratory). For diatomic molecules, however, there are two additional degrees of freedom corresponding to rotational motions. Therefore, gases with diatomic molecules have a total of five degrees of freedom (f = 5), for which the molar specific heat at a constant volume is c_V = \frac{5}{2} R.
Another useful quantity, the molar specific heat at a constant pressure P, is, in relation to c_V, defined as

c_P = \frac{1}{n} \left( \frac{\partial E}{\partial T} \right)_P = c_V + R \qquad (2.45)
by which c_P is greater than c_V. Originally, scientists were puzzled to find that the value of the molar specific heat obtained from laboratory measurements is always greater than that derived from theoretical considerations. With (2.45), we understand that calculations from theory to obtain c_V are performed at a fixed volume, while experimental measurements that quantify c_P are carried out under the condition of a constant pressure. Both c_V and c_P provide information about internal energy, and hence about the molecular structure of a gas. When heat is added to a gas at a constant volume, no work is done and so all the heat supplied is converted into an increase in the internal energy of the gas. But when the same amount of heat is added at a constant pressure, work is done. Hence, less of the heat is used to increase the internal energy, resulting in a smaller increase in temperature compared with that obtained from the addition of heat at a constant volume.
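The relations (2.44) and (2.45) are simple enough to tabulate directly. The short sketch below is our own illustration (not part of the notes); it prints c_V, c_P and their ratio for the monoatomic and diatomic cases discussed above.

```python
R = 8.314  # universal gas constant, J mol^-1 K^-1

def molar_heats(f):
    """Molar specific heats (2.44) and (2.45) for f quadratic degrees of freedom."""
    c_v = 0.5 * f * R
    c_p = c_v + R
    return c_v, c_p

for name, f in [("monoatomic (f=3)", 3), ("diatomic (f=5)", 5)]:
    c_v, c_p = molar_heats(f)
    # the ratio c_p/c_v is 5/3 for monoatomic and 7/5 for diatomic gases
    print(name, round(c_v, 3), round(c_p, 3), round(c_p / c_v, 3))
```

The ratio c_P/c_V (often written \gamma) is what appears in the adiabatic relation PV^\gamma = \text{constant}, so the degree-of-freedom count f is directly measurable in the laboratory.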

2.4 Boltzmann partition function

In this section, we introduce a function that relates the statistical description of a number of gas molecules as a microscopic system to thermodynamic quantities under the condition that the molecules are in thermal equilibrium at temperature T. The formulation of this function
is derived from the fact that individual components of the assembly are distributed over
different energy states. Such a function is termed the Boltzmann partition function, which describes how particles are distributed over the available energy states in a continuous classical assembly. Using all variables written in terms of microscopic quantities except for temperature, the Boltzmann partition function is defined as

Z = \int_0^\infty g(\epsilon)\, e^{-\epsilon/kT}\, d\epsilon \qquad (2.46)

where g(\epsilon)\,d\epsilon is the number of states having energy between \epsilon and \epsilon + d\epsilon, and e^{-\epsilon/kT} is known as the Boltzmann factor. In this context, both g(\epsilon)\,d\epsilon and \epsilon are classified as microscopic quantities, as the energy \epsilon is related to the velocity v and momentum p of the particles.
Based on a physical argument, g(\epsilon)\,d\epsilon can be expressed as the product of the density of states B and an element of volume d\tau in phase space, or simply g(\epsilon)\,d\epsilon = B\,d\tau. Thus, (2.46) can be written as
Z = \int_0^\infty B\, e^{-\epsilon/kT}\, d\tau = \int_{-\infty}^{\infty} B\, e^{-(p_x^2 + p_y^2 + p_z^2)/2mkT}\, dx\, dy\, dz\, dp_x\, dp_y\, dp_z
  = BV \int e^{-p^2/2mkT}\, p^2 \sin\theta\, d\theta\, d\phi\, dp
  = 4\pi BV \int_0^\infty e^{-p^2/2mkT}\, p^2\, dp
  = BV \left( 2\pi mkT \right)^{3/2} \qquad (2.47)

The partition function in (2.47) can be used to calculate pressure, a measured thermodynamic quantity. The total pressure of a classical system containing N particles is written as

P = NkT \left( \frac{\partial \ln Z}{\partial V} \right)_T = NkT\, \frac{d \ln V}{dV} = \frac{NkT}{V} \qquad (2.48)

The last expression for P is well known as the equation of state of a perfectly ideal gas, PV = NkT = nRT, commonly discussed in elementary physics books. This shows that statistical physics, which rests on theoretical points of view, provides the same result as that given by thermodynamics based on laboratory measurements.


Another use of the Boltzmann partition function is to derive the total energy of an ideal gas having N particles at equilibrium temperature T. Using the following definition for the total energy,

E = NkT^2 \left( \frac{\partial \ln Z}{\partial T} \right)_V = NkT^2\, \frac{3}{2}\, \frac{d \ln T}{dT} = \frac{3}{2}\, NkT \qquad (2.49)

The above result in (2.49) is exactly the same as that derived from the equipartition principle of energy previously stated in (2.42), again providing theoretical background for empirical measurements.
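Both thermodynamic derivatives can be checked numerically from (2.47). The sketch below is our own illustration with illustrative values of N, V, T and the helium-atom mass; the constant B is dropped since an additive constant in ln Z does not survive differentiation. Central finite differences stand in for the partial derivatives in (2.48) and (2.49).

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K
m = 6.646e-27      # illustrative particle mass (a helium atom), kg
N = 1.0e22         # illustrative number of particles
V = 1.0e-3         # illustrative volume, m^3
T = 300.0          # temperature, K

def ln_Z(V, T):
    """ln of the partition function (2.47); the constant B drops out of the derivatives."""
    return math.log(V) + 1.5 * math.log(2.0 * math.pi * m * k * T)

# central finite differences for the derivatives in (2.48) and (2.49)
h = 1.0e-5
dlnZ_dV = (ln_Z(V * (1 + h), T) - ln_Z(V * (1 - h), T)) / (2.0 * V * h)
dlnZ_dT = (ln_Z(V, T * (1 + h)) - ln_Z(V, T * (1 - h))) / (2.0 * T * h)

P = N * k * T * dlnZ_dV        # (2.48)
E = N * k * T**2 * dlnZ_dT     # (2.49)

print(round(P * V / (N * k * T), 6))    # close to 1: ideal-gas equation of state
print(round(E / (1.5 * N * k * T), 6))  # close to 1: matches (2.42)
```

The same two-line recipe works for any partition function: pressure comes from the volume dependence of ln Z and energy from its temperature dependence.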

2.5 Exercises
1. Is the Maxwell-Boltzmann statistics valid for hydrogen gas at standard temperature and pressure (STP)? Hint: under STP of 273 K and 1 atm, 1 mole of H_2 gas consists of 6.02 \times 10^{23} molecules and occupies a total volume of 22.4 \times 10^{-3} m^3 (taken from Serway et al., 2005, Ch.10, p.345).

2. Prove in detail that the most probable speed of a gas molecule is given by

v_{max} = \sqrt{\frac{2kT}{m}}

Note that the most probable speed corresponds to the point where the classical or Maxwellian distribution function achieves a maximum value (taken from Serway et al., 2005, Ch.10, p.362).

3. A flux of 10^{12} neutrons/m^2 emerges each second from a port of a nuclear reactor. If these neutrons have a Maxwell-Boltzmann energy distribution corresponding to temperature T = 300 K, calculate the density of neutrons in the beam (taken from Beiser, 1988, Ch.15, Problem 7).

4. Derive the variation of pressure with height in a column of a gas at temperature T
(a) using the fact that the change in pressure over a height dh is -\rho g\, dh, where \rho is the density of the gas
(b) using the Boltzmann factor to give the concentration gradient of the molecules
(taken from Pointon, 1978, Ch.3, Problem 1).

5. The molecules of a monoatomic ideal gas are escaping by effusion through a small hole in a wall of an enclosure maintained at absolute temperature T.
(a) By physical reasoning (without actual calculation), do you expect the mean kinetic energy \langle \epsilon_o \rangle of a molecule in the effusing beam to be equal to, greater than, or less than the mean kinetic energy \langle \epsilon_i \rangle of a molecule within the enclosure?
(b) Calculate \langle \epsilon_o \rangle for a molecule in the effusing beam
(taken from Reif, 1985, Ch.7, p.287, Problem 7.30).
Chapter 3

Bose-Einstein Statistics

In this chapter, we discuss Bose-Einstein (BE) statistics, which is used to describe the statistical behaviour of bosons - quantum particles having zero or integral spin. Because the spin classification of particles determines the nature of the distribution of energy over the available energy states, bosons do not obey the Pauli exclusion principle. Thus, there is no restriction on how many bosons are allowed to occupy the same quantum state, retaining the classical assumption of no theoretical limit on the number of particles per energy state.
This chapter is outlined as follows. The concept of the weight of a configuration, or the number of ways of distributing particles over the available energy states, is first introduced in 3.1. Then, the number of particles occupying a certain state is derived in 3.2, followed by a discussion on the conditions under which a particular gas could be considered a classical gas or a Bose-Einstein gas in 3.3, where the applications of the BE statistics to real cases in boson systems, such as electromagnetic radiation in an enclosure and elastic waves in solids, are discussed. Planck's quantum radiation formula for black-body radiation is statistically derived in 3.4.1. At the end of this chapter, the Einstein and Debye theories for the specific heat of solids are examined in 3.4.2.

3.1 Weight of configuration


There are two principal differences between classical and boson systems. In the MB statistics, the basic assumption is that classical particles are distinguishable and thus the interchange of two classical particles leads to a new arrangement. In the BE statistics, the assembly consists of indistinguishable particles, hence the interchange of two identical particles gives no new configuration. Another difference between the MB and BE distributions arises from the nature of the available energy states occupied. We will see that the BE distribution tends to have more particles in the lower energy levels, while at higher levels of energy the two distributions show a rapid decrease in the probability of occupation with increasing energy.


Before further discussing bosons in detail, it is now necessary to calculate the number of
ways of arranging particles within the available energy states. Let us consider two classical
particles, labelled as a and b, occupying two energy states. Based on the classical distribution,
there will be four possible arrangements for these particles,

| a | b |        | b | a |        | ab |    |        |    | ab |
arrangement 1    arrangement 2    arrangement 3    arrangement 4

However, the first two arrangements are, according to the BE statistics, completely identical.
Thus, they are considered to be united, resulting in only three arrangements for the system.

| a | a |        | aa |    |        |    | aa |
arrangement 1    arrangement 2    arrangement 3

If the above particles are distributed over three available energy states, then you will easily find that nine configurations are possible for classical arrangements, while only six are found for boson arrangements. A detailed description of these arrangements is left as an exercise for students. The difference in the number of arrangements between classical and boson systems is associated with the different statistical formulations for the calculation of the possible configurations within each assembly. For the classical statistics, the number of possible configurations W (also known as the weight of configuration) is given by

W = N! \prod_i \frac{g_i^{N_i}}{N_i!} \qquad (3.1)

where gi and Ni are the number of states and the number of particles in the ith sheet,
respectively, and N is the total number of particles. For the boson statistics, this quantity
is written as
W = \prod_i \frac{(N_i + g_i - 1)!}{N_i!\, (g_i - 1)!} \qquad (3.2)

We can calculate the number of ways of arranging both classical particles and bosons using
the above formulations in (3.1) and (3.2), respectively.
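The counts stated above (nine classical configurations and six boson configurations for two particles over three states) can be verified by brute force. The sketch below is our own illustration, not part of the notes: it enumerates labelled assignments for the MB case, collapses them to occupation multisets for the BE case, and cross-checks both totals against the weights (3.1) and (3.2) with g_i = 1.

```python
from itertools import product
from math import factorial

n_particles, n_states = 2, 3
g = [1] * n_states   # here each energy level contains a single state, g_i = 1

# direct enumeration: labelled assignments for MB, occupation multisets for BE
assignments = list(product(range(n_states), repeat=n_particles))
mb_count = len(assignments)
be_count = len({tuple(sorted(a)) for a in assignments})
print(mb_count, be_count)   # 9 6, as stated in the text

def occupations(n, s):
    """All occupation vectors (N_1, ..., N_s) with sum n."""
    if s == 1:
        yield (n,)
        return
    for head in range(n + 1):
        for tail in occupations(n - head, s - 1):
            yield (head,) + tail

def w_mb(occ):
    """Weight of one occupation vector, equation (3.1)."""
    w = factorial(n_particles)
    for n_i, g_i in zip(occ, g):
        w = w * g_i**n_i // factorial(n_i)
    return w

def w_be(occ):
    """Weight of one occupation vector, equation (3.2)."""
    w = 1
    for n_i, g_i in zip(occ, g):
        w = w * factorial(n_i + g_i - 1) // (factorial(n_i) * factorial(g_i - 1))
    return w

print(sum(w_mb(o) for o in occupations(n_particles, n_states)))   # 9
print(sum(w_be(o) for o in occupations(n_particles, n_states)))   # 6
```

Summing the weight over all occupation vectors recovers the total number of arrangements, which is exactly what (3.1) and (3.2) are designed to count.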

3.2 Population of bosons


Having discussed the weights of configurations for both the MB and BE statistics, it is now
necessary to find an expression for the number of particles distributed over a particular state.
This can be obtained by maximising the number of ways of distributing the indistinguishable
bosons among the allowed energy states, subject to the single constraint of a fixed energy.

This is of particular importance since photons in an enclosed cavity and phonons in solids are both bosons whose number of particles per unit volume increases with increasing temperature. To work this out, and for the sake of learning by analogy, we begin with the classical case of Maxwellian particles by writing the classical configuration in (3.1) and taking its natural logarithm to have

\ln W = \ln N! + \ln \prod_i \frac{g_i^{N_i}}{N_i!}
      = \ln N! + \sum_i \ln \frac{g_i^{N_i}}{N_i!}
      = \ln N! + \sum_i \left[ N_i \ln g_i - \ln N_i! \right] \qquad (3.3)

It is useful to apply Stirling's approximation, \ln N! \approx N \ln N - N, to the first term and to the second term in the bracket on the RHS of (3.3) (see this approximation in standard textbooks). Equation (3.3) becomes

\ln W = N \ln N - N + \sum_i \left[ N_i \ln g_i - N_i \ln N_i + N_i \right]
      = N \ln N + \sum_i \left[ N_i \ln g_i - N_i \ln N_i \right] \qquad (3.4)

where we have used \sum_i N_i = N.

Statistical physics deals with a system of microscopic particles under conditions where many particles with different levels of energy are involved, and hence very many configurations are possible for such particles occupying the available energy states. In this statistical sense, the most probable configuration will correspond to the condition under which \ln W reaches a maximum. This is simply written as d \ln W = 0, or

0 = d(N \ln N) + d \sum_i \left[ N_i \ln g_i - N_i \ln N_i \right]
  = \ln N\, dN + N\, \frac{dN}{N} + \sum_i \left[ \ln g_i\, dN_i - \ln N_i\, dN_i - N_i\, \frac{dN_i}{N_i} \right]
  = \sum_i \left[ \ln g_i - \ln N_i \right] dN_i \qquad (3.5)

where \sum_i dN_i = d \sum_i N_i = dN = 0 because N is constant, but dN_i \neq 0 as N_i is the number of particles occupying the i-th state (a change in N_i is followed by a change in configuration).

It is important to mention here that the above condition goes together with the imposed constraints of a fixed total particle number N and a fixed total energy E, for

which the following hold,

\sum_i N_i = N = \text{constant} \quad \Rightarrow \quad \sum_i dN_i = d \sum_i N_i = dN = 0
\sum_i N_i \epsilon_i = E = \text{constant} \quad \Rightarrow \quad \sum_i \epsilon_i\, dN_i = d \sum_i N_i \epsilon_i = dE = 0 \qquad (3.6)

Thus, we can write

d \ln W + \alpha\, dN + \beta\, dE = 0 \qquad (3.7)

where \alpha and \beta are Lagrange multipliers that are to be determined later. Now we have all the ingredients needed to develop the Lagrangian method using (3.5), (3.6) and (3.7). Equation (3.7) becomes

\sum_i \left[ \ln g_i - \ln N_i + \alpha + \beta \epsilon_i \right] dN_i = 0 \qquad (3.8)

which requires

\ln g_i - \ln N_i + \alpha + \beta \epsilon_i = 0

After several simple steps, we obtain the particle population N_i in the i-th state,

N_i = \frac{g_i}{e^{-(\alpha + \beta \epsilon_i)}} \qquad (3.9)

From the discussion on the Boltzmann partition function in 2.4, we have Z in a continuous form as follows,

Z = \int_0^\infty g(\epsilon)\, e^{-\epsilon/kT}\, d\epsilon

Rewriting this function in a discrete form gives

Z = \sum_i g_i\, e^{-\epsilon_i/kT} \qquad (3.10)

Dividing both sides of (3.10) by N yields

\frac{Z}{N} = \frac{\sum_i g_i\, e^{-\epsilon_i/kT}}{\sum_i N_i} \qquad (3.11)

To derive the number of particles occupying a particular state, we here define

e^{\alpha} \equiv \frac{N}{Z} \qquad (3.12)

and substitute it into (3.11) to obtain

N_i = \frac{g_i}{e^{-\alpha + \epsilon_i/kT}} \qquad (3.13)

It is clear from (3.9) and (3.13) that

\beta = -\frac{1}{kT} \qquad (3.14)

The same result as in (3.14) can also be obtained from thermodynamics. It is important to keep in mind that either (3.9) or (3.13) is frequently referred to as the classical distribution (also known as the classical population number), which describes the population of particles within each state in the classical assembly.
Having derived the classical population number, we can derive in a similar manner the number of bosons, or the boson population number, occupying the available quantum states by first taking the natural logarithm of (3.2),

\ln W = \ln \prod_i \frac{(N_i + g_i - 1)!}{N_i!\, (g_i - 1)!}
      = \sum_i \left[ (N_i + g_i - 1) \ln (N_i + g_i - 1) - N_i \ln N_i - (g_i - 1) \ln (g_i - 1) \right] \qquad (3.15)

where we have again applied Stirling's approximation. The most probable configuration for the boson distribution is obtained when (3.15) reaches a maximum, or d \ln W = 0. Again, using (3.6) and (3.7) we can write

0 = d \ln W + \alpha\, dN + \beta\, dE
  = \sum_i \left[ \ln (N_i + g_i - 1) - \ln N_i + \alpha + \beta \epsilon_i \right] dN_i \qquad (3.16)

The solution of (3.16) requires

0 = \ln (N_i + g_i - 1) - \ln N_i + \alpha + \beta \epsilon_i
  \approx \ln (N_i + g_i) - \ln N_i + \alpha + \beta \epsilon_i \qquad (3.17)

where we have used N_i \gg 1 to simplify the calculation. After simple algebra, we obtain the population number of bosons,

N_i = \frac{g_i}{e^{-\alpha + \epsilon_i/kT} - 1} \qquad (3.18)

where we have used \beta = -1/kT. Equation (3.18) is referred to as the quantum distribution for bosons (also known as the boson population number), which describes the population of bosons within each state in boson systems. It can then be shown easily that (3.18) approaches (3.13) when g_i \gg N_i. Note that \alpha has not yet been determined; it is the purpose of the next section to derive the actual form of \alpha.
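The limit in which (3.18) reduces to (3.13) can be seen numerically. The sketch below is our own illustration (not part of the notes), writing A = e^{\alpha} and x = \epsilon/kT: when A is very small the -1 in the boson denominator is negligible and the two populations agree, while for A near 1 the low-energy states are strongly over-occupied relative to the classical result.

```python
import math

def occ_be(x, A):
    """Boson population per state (3.18), with x = eps/kT and A = e^alpha."""
    return 1.0 / (math.exp(x) / A - 1.0)

def occ_mb(x, A):
    """Classical population per state (3.13)."""
    return A * math.exp(-x)

# dilute (classical) regime: A << 1, so g_i >> N_i and the -1 is negligible
A = 1.0e-3
for x in (0.1, 1.0, 5.0):
    print(x, round(occ_be(x, A) / occ_mb(x, A), 4))   # ratios close to 1

# near-degenerate regime: A close to 1, bosons pile into low-energy states
print(round(occ_be(0.1, 0.99), 2), round(occ_mb(0.1, 0.99), 2))
```

The exact ratio is 1/(1 - A e^{-x}), which makes explicit that the classical limit is controlled by the size of A rather than by the temperature alone.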

3.3 Bose-Einstein gas


Here, we examine the conditions under which a gas may obey either the MB or the BE statistics. In doing so, we need to assume that each available energy state in the quantum assembly occupies a fixed volume in phase space. From the discussion on the Boltzmann partition function in 2.4, we have g(\epsilon)\,d\epsilon = B\,d\tau, where g(\epsilon)\,d\epsilon denotes the number of available energy states having energy in the range \epsilon to \epsilon + d\epsilon and B is a constant associated with the fixed volume, which will be determined below.
For a single allowed energy state (i.e., g = 1) occupying an element of volume d\tau in phase space, we can write

1 = B\, d\tau = B\, dx\, dy\, dz\, dp_x\, dp_y\, dp_z = B\, (dx\, dp_x)(dy\, dp_y)(dz\, dp_z)

Assuming that the changes in position and momentum in each direction are finite, the above expression can then be written as

1 = B\, \underbrace{\Delta x\, \Delta p_x}_{\hbar}\, \underbrace{\Delta y\, \Delta p_y}_{\hbar}\, \underbrace{\Delta z\, \Delta p_z}_{\hbar} = B\, \hbar^3

for which B = 1/\hbar^3. We have taken \Delta x\, \Delta p_x \approx \hbar from Heisenberg's uncertainty principle, and similar expressions hold for the other two directions. Thus, equation (2.47) can be rewritten as

Z = \frac{V}{\hbar^3} \left( 2\pi mkT \right)^{3/2} \qquad (3.19)

Hence, equation (3.12) becomes

e^{\alpha} = \frac{N}{Z} = \frac{N \hbar^3}{V \left( 2\pi mkT \right)^{3/2}} = A \qquad (3.20)

where A is a constant.
The condition under which a particular gas can be considered a classical gas is related to the value of A. In most cases, this value is sufficiently small (i.e., A \ll 1) such that A^{-1} e^{\epsilon/kT} - 1 \approx A^{-1} e^{\epsilon/kT}. If this condition is satisfied, then the boson population (3.18) approaches the classical distribution (3.13). For example, helium gas with a molar mass of 4 gram mole^{-1} at STP has a value of A of approximately 3 \times 10^{-6}, which increases to 0.15 at T = 4 K. Thus, the He gas at room temperature obeys the classical distribution, and even at a temperature as low as 4 K the He gas still behaves approximately as a classical gas.
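The degeneracy parameter A in (3.20) is easy to evaluate. One caveat in the sketch below (our own illustration, not part of the notes): the quoted value of roughly 3 \times 10^{-6} for He at STP is reproduced when the phase-space cell is taken as h^3, the standard choice; the \hbar^3 cell suggested by the order-of-magnitude uncertainty argument above gives a smaller number, so the script prints both. The STP figures (22.4 litres per mole, molar mass 4 gram) are those quoted in the exercises.

```python
import math

k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J s
hbar = h / (2.0 * math.pi)
N_A = 6.02e23         # Avogadro's number, mole^-1

# helium gas at STP: 1 mole occupies 22.4 litres, molar mass 4 gram
n = N_A / 22.4e-3     # number density, m^-3
m = 4.0e-3 / N_A      # mass of one He atom, kg
T = 273.0             # temperature, K

def A(cell):
    """Degeneracy parameter (3.20) with a phase-space cell of volume cell**3."""
    return n * cell**3 / (2.0 * math.pi * m * k * T) ** 1.5

print("%.2e" % A(h))     # a few times 10^-6, consistent with the ~3e-6 quoted above
print("%.2e" % A(hbar))  # the rougher hbar^3 cell gives a smaller estimate
```

Either way A \ll 1 at STP, confirming that helium at room temperature is safely in the classical regime.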

3.4 Applications of Bose-Einstein statistics

3.4.1 Black-body radiation

In this section, we consider electromagnetic radiation within an enclosure of volume V whose rigid walls are maintained at temperature T. The radiation can then be considered as a collection of photons, which obey the BE statistics. As the photons move within the container, some are continuously absorbed and re-emitted by the walls. In this context, the number of photons inside the enclosure is not conserved. Because (3.7) must hold, it follows that \alpha must be zero, leading to A = 1. Thus, (3.18) can be rewritten as

N(\epsilon)\, d\epsilon = \frac{g(\epsilon)\, d\epsilon}{e^{\epsilon/kT} - 1} \qquad (3.21)

where we have invoked e^{-\alpha} = A^{-1} = 1. N(\epsilon)\,d\epsilon and g(\epsilon)\,d\epsilon denote the number of photons and the number of available states with energy in the range \epsilon to \epsilon + d\epsilon, respectively.
Note that the energy of a single photon depends on the frequency \nu, or equivalently on the wavelength \lambda of the radiation, \epsilon = h\nu = hc/\lambda, and that the photon is radiated in space over the whole range of either frequency or wavelength. Thus, equation (3.21) can be written in two forms,

N(\nu)\, d\nu = \frac{g(\nu)\, d\nu}{e^{h\nu/kT} - 1}
N(\lambda)\, d\lambda = \frac{g(\lambda)\, d\lambda}{e^{hc/\lambda kT} - 1} \qquad (3.22)

where N(\nu)\,d\nu and g(\nu)\,d\nu denote the number of photons and the number of available states with frequency in the range \nu to \nu + d\nu, respectively. Similar definitions hold for N(\lambda)\,d\lambda and g(\lambda)\,d\lambda in the second equation of (3.22).
The energy distribution of radiated photons can be obtained by rewriting the number of available energy states g as

g = 2\, \frac{d\tau}{h^3} = 2\, \frac{dx\, dy\, dz\, dp_x\, dp_y\, dp_z}{h^3} = 2\, \frac{V\, 4\pi p^2\, dp}{h^3} \qquad (3.23)

where the multiplier 2 in front of the element of volume d\tau arises from the fact that there are two independent directions of spin orientation, that is, parallel and anti-parallel to the photon linear momentum. Another physical argument is that, according to particle-wave duality, a photon with a definite spin state corresponds to a propagating electromagnetic wave having two independent directions of polarisation, with each direction being perpendicular to the wave propagation. For simplicity, it is assumed that the photons are linearly polarised.

The number of available states having wavelength in the range \lambda to \lambda + d\lambda can be derived with the help of de Broglie's hypothesis, p = h/\lambda. Substituting this into (3.23) yields

g(\lambda)\, d\lambda = V\, \frac{8\pi}{\lambda^4}\, d\lambda \qquad (3.24)

We define the density of states G having wavelength between \lambda and \lambda + d\lambda as the number of available states g(\lambda)\,d\lambda per unit volume V, such that (3.24) becomes

G(\lambda)\, d\lambda = \frac{8\pi}{\lambda^4}\, d\lambda \qquad (3.25)

The photons are distributed over the available states, and the associated energy density E in the range of wavelength between \lambda and \lambda + d\lambda is defined as the product of the photon energy \epsilon and the photon density N (i.e., the number of photons per unit volume, N/V),

E(\lambda)\, d\lambda = \epsilon\, N = \frac{hc}{\lambda}\, \frac{G(\lambda)\, d\lambda}{e^{hc/\lambda kT} - 1} = \frac{8\pi hc}{\lambda^5}\, \frac{d\lambda}{e^{hc/\lambda kT} - 1} \qquad (3.26)

known as Planck's quantum thermal radiation formula for the spectral distribution of radiant energy within an enclosure in thermal equilibrium at temperature T. The above equation with \lambda as the dynamic variable can also be written using \nu as the dynamic variable,

E(\nu)\, d\nu = \frac{8\pi h \nu^3}{c^3}\, \frac{d\nu}{e^{h\nu/kT} - 1} \qquad (3.27)

Two possible approximations can be directly derived from the last expression of (3.26), leading to the classical thermal radiation formulas. At sufficiently long wavelengths, i.e., when the energy of a single photon is much smaller than the thermal energy, or when hc/\lambda \ll kT, we have

e^{hc/\lambda kT} = 1 + \frac{hc}{\lambda kT} + \cdots \quad \Rightarrow \quad e^{hc/\lambda kT} - 1 \approx \frac{hc}{\lambda kT} \qquad (3.28)

for which (3.26) becomes

E(\lambda)\, d\lambda = \frac{8\pi kT}{\lambda^4}\, d\lambda \qquad (3.29)

known as the Rayleigh-Jeans thermal radiation formula for the spectral distribution of radiant energy within an enclosure in thermal equilibrium at temperature T.
3.4. Applications of Bose-Einstein statistics 29

At the other extreme, at sufficiently short wavelengths, i.e., when the energy of a single photon is much larger than the thermal energy, or when hc/\lambda \gg kT, we have

e^{hc/\lambda kT} - 1 \approx e^{hc/\lambda kT} \qquad (3.30)

for which (3.26) becomes

E(\lambda)\, d\lambda = \frac{8\pi hc}{\lambda^5}\, e^{-hc/\lambda kT}\, d\lambda \qquad (3.31)

known as Wien's thermal radiation formula for the spectral distribution of radiant energy within an enclosure in thermal equilibrium at temperature T.
In short, it has been shown that Planck's quantum thermal radiation formula for black-body radiation is valid over the whole spectrum of wavelengths as written in (3.26), or correspondingly frequencies as given in (3.27). The quantum formulation covers the empirically observed features of the spectral distribution of electromagnetic radiation that were previously unexplained by the classical theories of electrodynamics and thermodynamics.
Let us now assume that there is a small hole in a wall of the enclosure, giving a way for the photons inside to radiate out of the enclosure. The number of photons emitted per unit time per unit area of the hole (i.e., the flux of photons crossing the hole) is

N_{rad}(\lambda)\, d\lambda = \frac{1}{4}\, c\, N(\lambda)\, d\lambda \qquad (3.32)

because photons are always in motion with the velocity of light c in free space.
By analogy, the energy radiated out per unit time per unit area of the hole is then given by

E_{rad}(\lambda)\, d\lambda = \frac{1}{4}\, c\, E(\lambda)\, d\lambda \qquad (3.33)

Substituting Planck's formula (3.26) into (3.33) results in

E_{rad}(\lambda)\, d\lambda = \frac{2\pi hc^2}{\lambda^5}\, \frac{d\lambda}{e^{hc/\lambda kT} - 1} \qquad (3.34)

Integrating both sides of (3.34) gives

\int_0^\infty E_{rad}(\lambda)\, d\lambda = \int_0^\infty \frac{2\pi hc^2}{\lambda^5}\, \frac{d\lambda}{e^{hc/\lambda kT} - 1}

and rearranging the integral leads to

E_{rad} = \frac{2\pi k^4 T^4}{h^3 c^2} \int_0^\infty \frac{(hc/\lambda kT)^3}{e^{hc/\lambda kT} - 1}\; d\!\left( \frac{hc}{\lambda kT} \right) \qquad (3.35)

where E_{rad} denotes the total energy radiated out of the hole per unit time per unit area.

Now let us introduce x \equiv hc/\lambda kT as the variable of integration such that

E_{rad} = \frac{2\pi k^4 T^4}{h^3 c^2} \int_0^\infty \frac{x^3}{e^x - 1}\, dx \qquad (3.36)

The integral can be evaluated with the help of a few steps (see ?, 1996, p.5),

\int_0^\infty \frac{x^3}{e^x - 1}\, dx = \int_0^\infty x^3 e^{-x} \sum_{n=0}^\infty e^{-nx}\, dx = \sum_{n=0}^\infty \frac{1}{(n+1)^4} \int_0^\infty y^3 e^{-y}\, dy = 6 \sum_{n=1}^\infty \frac{1}{n^4} = \frac{\pi^4}{15}

Equation (3.36) then becomes

    E_rad = (2π⁵k⁴/15h³c²) T⁴ = σ T⁴                                        (3.37)

where σ is called the Stefan constant (5.67 × 10⁻⁸ W m⁻² K⁻⁴). Note that the above
equation in (3.37) is theoretically derived from some basic assumptions and is known as the
Stefan-Boltzmann law for the total energy radiated per unit time per unit area by a black
body at temperature T. But it was found from laboratory measurements that an empirical
factor that depends upon the surface of a substance, the so-called emissivity e, should
multiply the RHS of (3.37). The magnitude of this quantity lies between 0 and 1. Objects
that are grouped into perfect heat absorbers (or heat radiators) have unit emissivity.
Conversely, imperfect heat absorbers (or heat radiators) correspond to values of emissivity
between 0 and 1. Black holes in the universe are considered an example of a perfect black body.
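As a quick numerical check, the Stefan constant in (3.37) can be computed directly from the fundamental constants; the sketch below does so in Python (the 5800 K figure for the solar surface is an assumed round value, not taken from the notes):

```python
import math

# CODATA values of the fundamental constants (SI)
k = 1.380649e-23     # Boltzmann constant, J/K
h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s

# Stefan constant from (3.37): sigma = 2 pi^5 k^4 / (15 h^3 c^2)
sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)
print(f"sigma = {sigma:.4e} W m^-2 K^-4")  # ~5.670e-8

# Total power radiated per unit area at T = 5800 K (assumed solar surface value)
T = 5800.0
print(f"E_rad = {sigma * T**4:.3e} W/m^2")
```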

3.4.2 The specific heat of a solid


As previously discussed in 3.4.1, electromagnetic radiation at temperature T is quantised
as photon emission. In this section, the phenomenon of elastic waves propagated in solids is
quantised as phonon propagation. Both photons and phonons are bosons and therefore
obey the BE statistics. Phonon theory was first introduced to cope with the wrong prediction
of classical physics for the specific heat of solids. As widely known, classical physics predicts
the specific heat of a solid to be independent of temperature, that is, the same value
cV = 3R at all temperatures, where R = 8.314 J mol⁻¹ K⁻¹ is the universal gas constant. But
this constant value does not match the values obtained from laboratory measurements of the
specific heat of some solid elements, as shown in Figure 3.1.
The fact that classical physics failed to give the correct value for the specific heat of a
solid at all temperatures drove Einstein to propose the idea of vibrating atoms in a solid
as independent, three-dimensional quantum harmonic oscillators, whose quantised energies
must be explicitly considered at low temperatures to secure agreement with experimental
measurements of specific heat.


Figure 3.1: Curves showing the dependence of the specific heat on temperature for several
solid elements: lead, aluminum, silicon and diamond, with C in cal/(mol K) plotted against
absolute temperature in K (taken from p.353, Ch.10, Modern Physics, Serway et al., 2005).

Einstein assumed that each vibrating atom of the solid can be treated as a set of
one-dimensional quantum harmonic oscillators of angular frequency ω, with average energy
at temperature T given by the form of the BE distribution

    ⟨ε⟩ = ℏω/(e^{ℏω/kT} − 1)                                                (3.38)

where k is the Boltzmann constant. Using this and the definition of the molar specific heat
at constant volume cV for a 3D model of solid atoms, one can derive an expression that
correctly predicts the values of cV at all temperatures, i.e., cV approaches zero at
low temperatures and then asymptotes to a value of 3R (see Figure 3.1).
For the case of a 3D model with a total of N atoms, hence 3N oscillators, the total
internal energy of the atoms in a solid, assuming that the atoms vibrate independently,
is given by

    E = 3N ⟨ε⟩ = 3N ℏω/(e^{ℏω/kT} − 1)                                      (3.39)
The specific heat of a solid is then calculated as follows,

    cV = (1/n) (∂E/∂T)_V
       = 3 (N_A ℏ²ω²/kT²) e^{ℏω/kT}/(e^{ℏω/kT} − 1)²
       = 3 N_A k (ℏω/kT)² e^{ℏω/kT}/(e^{ℏω/kT} − 1)²
       = 3R (ℏω/kT)² e^{ℏω/kT}/(e^{ℏω/kT} − 1)²                             (3.40)

The above expression for the specific heat will now be examined in two temperature regimes.

For cases where the temperature of a solid is sufficiently low such that ℏω ≫ kT, we have
e^{ℏω/kT} − 1 ≈ e^{ℏω/kT}. Regarding this, the specific heat of a solid at low temperatures is

    cV ≈ 3R (ℏω/kT)² e^{−ℏω/kT} → 0                                         (3.41)

The exponential term dominates the other terms in (3.41) and hence plays the key role
in determining the value of cV at low temperatures. The qualitative description of this result
is as follows. By low temperature, it is understood that the average thermal energy kT of an
oscillator is much less than the spacing ℏω between adjacent energy levels. Hence, there is
insufficient thermal energy for an atom of a solid to jump from its ground state to a higher
energy level. Atoms in a solid are unable to absorb the additional thermal energy provided
by only a small increase in temperature.
For sufficiently high temperature cases, however, ℏω ≪ kT and hence e^{ℏω/kT} ≈ 1 + ℏω/kT,
such that the specific heat of a solid at high temperatures is

    cV = 3R (ℏω/kT)² (1 + ℏω/kT)/(ℏω/kT)² = 3R (1 + ℏω/kT) ≈ 3R             (3.42)

The energy level spacing ℏω is here much smaller than the average thermal energy
kT, and hence there will be more atoms in excited energy levels. As a consequence,
the atomic energies appear to form a near-continuum rather than discrete levels,
and thus the classical result of cV = 3R is recovered.
By both (3.41) and (3.42), it has been shown that the Einstein model of vibrating atoms
in a solid is able to predict the behaviour of cV at all temperatures. As clearly
seen in Figure 3.1, the specific heat of solids is indeed constant with temperature in the
high temperature regime, but decreases sharply at lower temperatures and approaches zero
at 0 K.
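The Einstein result (3.40) and its two limits can be checked numerically. A minimal sketch, writing x = ℏω/kT = T_E/T with the Einstein temperature T_E = ℏω/k (the value 1300 K for diamond is the one quoted in Exercise 3 of 3.5):

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def einstein_cv(T, T_E):
    """Einstein specific heat (3.40), with x = hbar*omega/kT = T_E/T."""
    x = T_E / T
    return 3 * R * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2

T_E = 1300.0  # Einstein temperature of diamond, from Exercise 3 of 3.5
print(einstein_cv(10 * T_E, T_E))    # high T: approaches 3R ~ 24.9
print(einstein_cv(0.05 * T_E, T_E))  # low T: exponentially suppressed, ~0
```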
Although the Einstein theory of specific heat can predict values of cV at all temperatures,
one problem remains: the theory rests on the assumption that each atom vibrates independently
of its neighbouring atoms at a single, fixed frequency. It should be noted here, however,
that interactions between the atoms of a solid come into play, resulting in a spread of
frequencies that corresponds to a group of interacting, neighbouring atoms. Thus, it may be
possible to find the precise dependence of the specific heat of solids on temperature at low
temperatures. A few years after this puzzle, Debye showed, on the basis of empirical
measurements, that cV ∝ T³ at low temperatures.
The Debye theory of specific heat is based on a quantised elastic vibration of natural
frequency ν, the so-called phonon, which travels at the speed of sound in a solid
and carries a quantum of elastic energy hν. Thus, the energy of a phonon also depends on
its frequency and hence we use the phonon frequency as a dynamic variable. The phonon
population having frequency between ν and ν + dν is then written as

    N(ν) dν = g(ν) dν/(e^{hν/kT} − 1)                                       (3.43)

Here we use the Debye approximation, g(ν) dν = Cν² dν for ν ≤ ν_m, where C is a constant
and ν_m is the maximum frequency, corresponding to the three modes of atomic vibration, that is,
one polarisation longitudinal and two transverse relative to the direction of propagation of
the waves. Note that the former polarisation is associated with a compressional wave and
the latter corresponds to shear waves. Hence, for a solid having N atoms there exists a total
of 3N allowed states. It follows that

    3N = ∫₀^{ν_m} g(ν) dν = ∫₀^{ν_m} Cν² dν                                 (3.44)

for which C = 9N/ν_m³. Thus, equation (3.43) becomes

    N(ν) dν = (9N/ν_m³) ν² dν/(e^{hν/kT} − 1)   for ν ≤ ν_m                 (3.45)

The total energy of phonons in a solid is determined by the energy of a phonon ε = hν
and the number of phonons N(ν) dν with frequency in the range ν to ν + dν,

    E = ∫₀^{ν_m} hν N(ν) dν = (9Nh/ν_m³) ∫₀^{ν_m} ν³ dν/(e^{hν/kT} − 1)     (3.46)

In the same manner as the definition in (2.44), the specific heat of solids can be derived with
the help of (3.46),

    cV = (1/n) (∂E/∂T)_V = (9N₀h²/ν_m³kT²) ∫₀^{ν_m} ν⁴ e^{hν/kT} dν/(e^{hν/kT} − 1)²     (3.47)

We introduce a new variable, x = hν/kT, and a new parameter, namely the Debye temperature
Θ = hν_m/k, to replace (3.47) by

    cV = 9N₀k (T/Θ)³ ∫₀^{Θ/T} x⁴ eˣ dx/(eˣ − 1)²                            (3.48)

There are two regimes of temperature for which (3.48) is evaluated. For the regime of
high temperatures, Θ/T ≪ 1 and hence eˣ ≈ 1 + x, so that the integrand in (3.48) reduces
to x⁴(1 + x)/x² ≈ x². In this condition, equation (3.48) becomes

    cV = 9N₀k (T/Θ)³ ∫₀^{Θ/T} x² dx = 3R                                    (3.49)

where R replaces N₀k. The above result in (3.49) is exactly the same as that derived from
the classical equipartition principle of energy.
At low temperatures, however, Θ/T ≫ 1 holds and the upper limit approaches infinity.
Thus, equation (3.48) becomes

    cV = 9R (T/Θ)³ ∫₀^∞ x⁴ eˣ dx/(eˣ − 1)²                                  (3.50)

The integral can be evaluated with the help of some steps below,

    ∫₀^∞ x⁴ eˣ dx/(eˣ − 1)² = 24 Σ_{n=1}^∞ 1/n⁴ = 4π⁴/15

Substituting the above expression into (3.50) results in

    cV = (12π⁴/5) R (T/Θ)³                                                  (3.51)

Note that the results derived from (3.48) for the regimes of high (3.49) and low (3.51)
temperatures are in good agreement with experimental measurements of the specific heats
of some solid elements. Debye thus successfully demonstrated, both theoretically and
empirically, that a phonon gas with a distribution of allowed frequencies is a better model
of a solid at low temperatures than the Einstein model, which rests on a system of
independent harmonic oscillators all having the same frequency.

3.5 Exercises
1. Helium atoms have spin 0 and are thus bosons.

(a) Must we use the BE statistics at STP to describe the Helium gas molecules, or will
the MB statistics suffice ?

(b) Helium gas becomes a liquid with a density of 0.145 g/cm3 at 4.2 K and atmospheric
pressure. Must the BE statistics be used in this case ? Provide your argument with
sufficient explanation.

(taken from Serway et al., 2005, Ch.10, p.364).



2. The Planck quantum thermal radiation for photons in an enclosure with rigid walls
in thermal equilibrium at temperature T is given by

    E(λ) dλ = (8πhc/λ⁵) dλ/(e^{hc/λkT} − 1)

(a) Write an expression for the thermal radiation formula in terms of frequency ν.
(b) If λ_m represents the wavelength of the thermal radiation at maximum intensity,
show that λ_m T = 2.9 × 10⁻³ m K holds.
(c) The maximum intensity of the sun's energy occurs at a value of λ_m = 4.84 × 10⁻⁷ m.
Assuming that the sun is a black body, estimate the temperature of the surface of the sun.

3. (a) Calculate the vibration frequency of lead atoms and their energy level spacing if
the Einstein temperature of lead is 70 K.
(b) Explain the low Einstein temperature of lead relative to that for diamond in terms
of the physical properties of lead (the Einstein temperature for the carbon atoms in
diamond is 1300 K).
(c) Calculate the average one dimensional harmonic oscillator energy in lead at room
temperature. Is there sufficient energy to raise lead atoms out of the ground state at
300 K ?
(taken from Serway et al., 2005, Ch.10, p.356).

4. Using equation (4.20) of Pointon (1978), or alternatively the equation below

    N(λ) dλ = (8π/λ⁴) dλ/(e^{hc/λkT} − 1)

show that the total number of photons per unit volume of an enclosure at temperature
T is given by

    16π (kT/hc)³ Σ_{n=1}^∞ 1/n³

where the n's are positive integers (taken from Pointon, 1978, Ch.4, Problem 1).
Chapter 4

Fermi-Dirac Statistics

In this chapter, we examine a group of quantum particles having half-integer spin which obey
the Pauli exclusion principle, meaning that only one particle can occupy a given quantum
state. Such particles are called fermions; some examples are electrons, protons
and neutrons. The corresponding quantum distribution used to describe the behaviour of
fermions is known as Fermi-Dirac (FD) statistics. This chapter is outlined as follows.
The number of ways of arranging fermions in the available energy states, or the weight of
configuration, is shortly introduced in 4.1, followed by discussion on the fermion population
in 4.2. The discussions of these two sections are mostly similar to those of the BE statistics,
and hence many details are left for students. The basic physics concepts of the Fermi energy
and the Fermi function, and their properties under ideal conditions are examined in 4.3.
The application of the FD statistics to the case of high concentration of conduction electrons
in a metal is given in 4.4.1, followed by a discussion on the specific heat of a metal in 4.4.2.

4.1 Weight of configuration


Let us consider an isolated, quantum assembly containing N non-interacting fermions with
total energy E. These fermions are distributed over the available energy levels, each level
containing gᵢ available states. As with bosons, fermions are indistinguishable particles, and
hence any rearrangement of these particles within the available states will produce no new
configuration. Unlike bosons, the number of fermions occupying a single energy state is
limited by the Pauli exclusion principle. It follows that there exists only one particle or
no particle per state. Thus, the number of possible configurations for fermions can be
written as

    W = ∏ᵢ gᵢ!/( Nᵢ! (gᵢ − Nᵢ)! )                                           (4.1)

where Nᵢ is the number of particles in the ith level, which will be derived in the next section.


4.2 Population of fermions


Recall again that in the Lagrange method given in (3.7), we have

    d ln W + α dN + β dE = 0

where α and β are the Lagrange multipliers. In a manner similar to the steps discussed in
Chapter 3, we derive the fermion population by first taking the natural logarithm of (4.1),
and then applying (3.6) to the above expression. After some simple algebra, we obtain

    Nᵢ = gᵢ/(e^{α + βεᵢ} + 1)                                               (4.2)

Here again β = 1/kT, but this time α is defined as −ε_F/kT (different from that for bosons),
where ε_F is the Fermi energy. Thus, equation (4.2) becomes

    Nᵢ = gᵢ/(e^{(εᵢ − ε_F)/kT} + 1)                                         (4.3)

widely known as the Fermi-Dirac distribution or fermion population in a discrete form.


When dealing with the high concentration of fermions in a metal, hence a large number of
quantum particles involved, the equation in (4.3) can be written as a continuous distribution
function as follows,

    N(ε) = g(ε)/(e^{(ε − ε_F)/kT} + 1) = f(ε) g(ε)                          (4.4)

where f(ε) is defined as

    f(ε) = 1/(e^{(ε − ε_F)/kT} + 1)                                         (4.5)

usually called the Fermi function, which provides the probability of finding fermions with
energy ε in a given energy state.

4.3 Fermi-Dirac gas


Here we use the term Fermi-Dirac gas to describe a large number of fermions considered
as a continuous form of a substance at temperature T. It is known that the absolute zero
of temperature has never been achieved, and the condition under which the Fermi-Dirac gas
behaves at T = 0 K is considered an ideal case. We thus first examine the behaviour
of the Fermi function in (4.5) and derive the Fermi energy ε_F at this temperature. We here
begin with the extreme values of f(ε) at T = 0 K and T > 0 K (see Figure 4.1) as follows,

    f(ε) = 1/(e^{−∞} + 1) = 1   for ε < ε_F(0)                              (4.6)

Figure 4.1: A comparison of the Fermi-Dirac distribution function f(E) at absolute zero
temperature (left) and at finite temperature T > 0 K (right) (taken from p.357, Ch.10,
Modern Physics, Serway et al., 2005).

and

    f(ε) = 1/(e^{+∞} + 1) = 0   for ε > ε_F(0)                              (4.7)

The values in both (4.6) and (4.7) for f(ε) at T = 0 K (the left figure) imply that all
states with energy smaller than ε_F(0) are occupied while those having energy greater than
ε_F(0) are empty. These values are used to evaluate the Fermi energy at T = 0 K by first
writing explicitly the number of available states for fermions and relating it to the volume
occupied by these states in phase space. Recall again that the relation between the number
of states g(ε) dε and an element of volume dΓ in phase space is given by

    g(ε) dε = B dΓ = 2 (dx dy dz dp_x dp_y dp_z)/h³ = 2 (V 4πp² dp)/h³      (4.8)

where the multiplier 2 arises from the fact that for fermions there are two independent
directions of spin orientation. Based on the classical relation p² = 2mε between linear
momentum and kinetic energy, the above expression can be rewritten as

    g(ε) dε = (4πV/h³) (2m)^{3/2} ε^{1/2} dε                                (4.9)

Now integrating both sides of (4.4) and inserting (4.9) yield

    ∫₀^∞ N(ε) dε = ∫₀^∞ f(ε) g(ε) dε                                        (4.10)

where the LHS of (4.10) denotes the total number of fermions. The RHS of (4.10) can then
be separated into two parts as follows,

    N = ∫₀^{ε_F(0)} f(ε) g(ε) dε + ∫_{ε_F(0)}^∞ f(ε) g(ε) dε                (4.11)

Substituting the values of the Fermi function f(ε) at T = 0 K in (4.6) and (4.7), and g(ε) dε
in (4.9), into (4.11) results in

    N = (4πV/h³) (2m)^{3/2} ∫₀^{ε_F(0)} ε^{1/2} dε
      = (8πV/3) (2m/h²)^{3/2} ε_F(0)^{3/2}                                  (4.12)

for which we obtain the Fermi energy at T = 0 K,

    ε_F(0) = (3N/8πV)^{2/3} (h²/2m)                                         (4.13)

The above equation in (4.13) shows that ε_F(0) gradually increases with the concentration
N/V of electrons in a metal.
A fundamental difference in the properties of the electrons between metals and gases is
that, in contrast to the predictions of both the MB and BE statistics, in which all particles
condense to a state of zero energy at the absolute zero of temperature, conduction electrons
in a metal with the cut-off energy ε_F remain in motion at absolute zero temperature, with
the associated Fermi speed v_F derived from the classical relation as follows,

    (1/2) m v_F² = ε_F(0)                                                   (4.14)

For a typical value of ε_F(0) ≈ 5 eV, a remarkable calculation gives the speed of free
electrons in a metal at the Fermi level on the order of 10⁶ m/s at 0 K. This is surprising,
as classical theory predicts that at zero temperature all gas molecules have zero velocity.
As also widely known, metals in general are good conductors in the sense that metals have
free electrons that move easily, freely transporting electrical charges and hence energy.
This phenomenon is another fundamental difference between metals and gases. We then
introduce the Fermi temperature T_F, derived from the Fermi energy ε_F(0) at T = 0 K in
(4.13) using the definition of thermal energy kT_F. Thus, we have

    T_F = (3N/8πV)^{2/3} (h²/2mk)                                           (4.15)

The values of ε_F(0), v_F and T_F calculated from (4.13), (4.14) and (4.15) for various
metals are listed in Figure 4.2.
    Metal   Electron concentration (m⁻³)   Fermi energy (eV)   Fermi speed (m/s)   Fermi temperature (K)
    Li      4.70 × 10²⁸                    4.72                1.29 × 10⁶          5.48 × 10⁴
    Na      2.65 × 10²⁸                    3.23                1.07 × 10⁶          3.75 × 10⁴
    K       1.40 × 10²⁸                    2.12                0.86 × 10⁶          2.46 × 10⁴
    Cu      8.49 × 10²⁸                    7.05                1.57 × 10⁶          8.12 × 10⁴
    Ag      5.85 × 10²⁸                    5.48                1.39 × 10⁶          6.36 × 10⁴
    Au      5.90 × 10²⁸                    5.53                1.39 × 10⁶          6.41 × 10⁴

Figure 4.2: A list of different metals with the different values of the Fermi energies and
the corresponding speeds and temperatures, calculated on the basis of the free electron
theory (taken from p.359, Ch.10, Modern Physics, Serway et al., 2005).
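Equations (4.13)-(4.15) can be checked against Figure 4.2. The sketch below takes the electron concentration of copper from the table and recovers the quoted Fermi energy and speed (the Fermi temperature comes out near 8.2 × 10⁴ K, close to the tabulated 8.12 × 10⁴ K; the small difference reflects rounding in the table):

```python
import math

h = 6.62607015e-34    # Planck constant, J s
m = 9.1093837e-31     # electron mass, kg
k = 1.380649e-23      # Boltzmann constant, J/K
eV = 1.602176634e-19  # J per eV

n = 8.49e28  # electron concentration of copper from Figure 4.2, m^-3

e_f = (h**2 / (2 * m)) * (3 * n / (8 * math.pi))**(2 / 3)  # (4.13), with n = N/V
v_f = math.sqrt(2 * e_f / m)                               # (4.14)
t_f = e_f / k                                              # (4.15)

print(f"e_F = {e_f / eV:.2f} eV")  # ~7.05 eV, as tabulated
print(f"v_F = {v_f:.3e} m/s")      # ~1.57e6 m/s, as tabulated
print(f"T_F = {t_f:.3e} K")        # ~8.2e4 K
```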
4.4 Applications of Fermi-Dirac statistics


4.4.1 Conduction electrons in a metal

The fact that the Fermi temperature for the electron gas in a metal is high (see Figure 4.2)
leads to a fundamental idea: an increase in temperature from absolute zero to a value around
room temperature T may only affect a small fraction of the electrons, those with energy
near the Fermi energy. This implies that the free electrons contribute only a small fraction
to the specific heat of a metal. Two arguments, one qualitative and the other quantitative,
are given in the following calculations. The qualitative argument is based on the right
figure in Figure 4.1, with values of the Fermi function f(ε) defined in (4.5) calculated
as follows,

    f(ε) = 1/(e⁻¹ + 1) = 0.73   for ε = ε_F − kT

    f(ε) = 1/(e⁰ + 1) = 0.50   for ε = ε_F

    f(ε) = 1/(e¹ + 1) = 0.27   for ε = ε_F + kT

The right figure of Figure 4.1 clearly shows that the shaded area (formed on heating
from T = 0 K to T > 0 K) is relatively small compared to the total area when T = 0 K,
indicating that only a few electrons become excited and gain a thermal energy of kT. Thus,
only the small fraction of the electrons within kT of ε_F can be thermally excited. This
brings a rough estimate of the electronic contribution, shown in (4.16), to the theoretical
specific heat of a metal,

    c_electronic ≈ 3R (T/T_F)                                               (4.16)




Heating to T = 300 K and using the data for the Fermi temperature T_F listed in Figure 4.2
yield an estimate of c_electronic ≈ 0.09R. The relative contribution of this to the classical
specific heat of a metal (or, say in general, a solid) is then given by

    c_electronic/cV ≈ 0.09R/3R ≈ 3%

and thus, the free electrons contribute only a small fraction to the specific heat of a metal.
Again, when dealing with large numbers of particles having random velocities, it is
interesting to calculate the average values of both kinetic energy and velocity. In this sense,
the average kinetic energy of conduction electrons in a metal at T = 0 K is given by

    ⟨ε(0)⟩ = ∫₀^∞ ε N(ε) dε / ∫₀^∞ N(ε) dε = (3/5) ε_F(0)                   (4.17)

where N(ε) dε is given in (4.4). In a similar manner to the above steps, the mean velocity
of the electrons at T = 0 K can be calculated,

    ⟨v(0)⟩ = ∫₀^∞ v N(ε) dε / ∫₀^∞ N(ε) dε = (3/4) v_F                      (4.18)

where vF is the Fermi speed previously defined in (4.14).
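The value 3/5 in (4.17) follows because at T = 0 K the population N(ε) is proportional to ε^{1/2} below ε_F(0) and vanishes above it. A quick numerical sketch, in units where ε_F(0) = 1, confirms the ratio of the two integrals:

```python
# Numerical check of (4.17): at T = 0 K, N(e) ~ e^(1/2) for e < e_F(0) and zero
# above, so <e> = (int e^(3/2) de) / (int e^(1/2) de) over [0, e_F(0)].
# Work in units where e_F(0) = 1; the exact ratio is (2/5)/(2/3) = 3/5.
n = 100_000
dx = 1.0 / n
num = sum((i * dx) ** 1.5 for i in range(1, n + 1)) * dx  # int_0^1 e^(3/2) de ~ 2/5
den = sum((i * dx) ** 0.5 for i in range(1, n + 1)) * dx  # int_0^1 e^(1/2) de ~ 2/3
print(num / den)  # ~0.6
```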

4.4.2 The specific heat of a metal


In the previous section, it was shown that the contribution of conduction electrons in a
metal, only about 3%, is insignificant relative to the value of the classical specific heat.
Here we provide calculations of the specific heat of a metal as a function of temperature with
a better estimate of the proportion of the electronic contribution. It is important to mention
here that in general the Fermi energy and the average kinetic energy of conduction electrons
depend on temperature. This temperature dependence is actually weak, and is written below
in the general forms of the Fermi energy ε_F and the mean energy ⟨ε⟩ of conduction electrons
in a metal at temperature T. The Fermi energy of the electrons is given by
    ε_F = ε_F(0) [ 1 − (π²/12)(T/T_F)² ]                                    (4.19)

and the mean energy of the electrons is given by

    ⟨ε⟩ = ε_F(0) [ 3/5 + (π²/4)(T/T_F)² ]                                   (4.20)

Figure 4.3: The linear dependence upon temperature of the specific heat contributed by
conduction electrons in a metal, as in (4.21), theoretically derived from the
temperature-dependent average kinetic energy of the electrons (taken from
Figure 11.4, p.182, Fisika Statistik untuk Mahasiswa MIPA, Abdullah, 2009).

The second terms in the brackets of equations (4.19) and (4.20) are small relative to the
first terms in each equation.
Using the definition previously discussed in 2.3.3 and the result for the dependence of
the average kinetic energy on temperature given in (4.20), the specific heat of a metal derived
from the properties of conduction electrons is then calculated as follows,

    c_electronic = (1/n) (∂E/∂T)_V
                 = N₀ ∂/∂T { ε_F(0) [ 3/5 + (π²/4)(kT/ε_F(0))² ] }
                 = N₀k (π²/2) (kT/ε_F(0))
                 = (π²/2) R (kT/ε_F(0))
                 ∝ T                                                        (4.21)

It is clear from the last expression in (4.21) that the electronic specific heat is linearly
dependent upon temperature. Taking the thermal energy at room temperature, kT ≈ 0.025 eV,
and a typical value of ε_F(0) ≈ 5 eV, with π² approximated by 10, we then have an estimate
of the electronic specific heat, c_electronic ≈ 0.0025R, so that

    c_electronic/cV ≈ 0.0025R/3R ≈ 0.1%

which again shows the relatively small contribution of the free electrons to the specific heat
of a metal, or a solid in general. But for the sake of completeness, we provide below
a better formulation for the specific heat of a metal.
As noted earlier in 3.4.2, the specific heat of a solid varies with temperature at low
temperatures as cV ∝ T³. This, together with (4.21), leads to the conclusion that for a metal
there are two contributions to the specific heat: one, from the atomic vibration derived from
the concept of phonons (as bosons) and the other, from the electronic contribution derived
from the concept of conduction electrons (as fermions). Hence, we can write for a metal at
temperatures lower than the Debye temperature defined in (3.48) and the Fermi temperature
given in (4.15) as
    cV = a T + b T³                                                         (4.22)

where a and b are constants that depend on the nature of the microscopic assembly under
consideration. The equation in (4.22) clearly shows that the first term on the RHS represents
the relative contribution of free electrons to the specific heat of a metal whereas the second
term denotes the relative contribution of vibrating phonons.
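A sketch of (4.22) for copper: the coefficients a and b follow from (4.21) and (3.51), with ε_F(0) = 7.05 eV taken from Figure 4.2 and a Debye temperature of 343 K assumed from standard literature values (not given in the notes). The crossover temperature below which the electronic term dominates comes out at a few kelvin:

```python
import math

R = 8.314                     # J mol^-1 K^-1
k = 1.380649e-23              # Boltzmann constant, J/K
e_f = 7.05 * 1.602176634e-19  # Fermi energy of copper from Figure 4.2, J
theta = 343.0                 # assumed Debye temperature of copper (literature value)

a = (math.pi**2 / 2) * R * k / e_f        # electronic coefficient, from (4.21)
b = (12 * math.pi**4 / 5) * R / theta**3  # phonon coefficient, from (3.51)

# Crossover temperature where a T = b T^3: below T* the electronic (linear)
# term dominates the specific heat; above it the phonon T^3 term takes over.
t_star = math.sqrt(a / b)
print(f"T* = {t_star:.2f} K")  # a few kelvin
```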

4.5 Exercises
1. Discuss the underlying physics assumptions of the Maxwell-Boltzmann, Bose-Einstein,
and Fermi-Dirac statistics. What are their differences and similarities ? (taken from
Serway et al., 2005, Ch.10, p.362).

2. (a) Is the Maxwell-Boltzmann statistics valid for hydrogen gas at standard temperature
and pressure (STP) ? Hint: under STP of 273 K and 1 atm, 1 mole of H₂ gas consists
of 6.02 × 10²³ molecules and occupies a total volume of 22.4 × 10⁻³ m³.
(b) What about conduction electrons in silver at 300 K, where silver has a density
of 10.5 g/cm³ and a molar weight of 107.9 g ?
(taken from Serway et al., 2005, Ch.10, p.345).

3. Show that the average kinetic energy of a conduction electron in a metal at 0 K is
(3/5) ε_F(0), where ε_F(0) is the Fermi energy at absolute zero temperature. Notice that all
of the classical molecules of an ideal gas at 0 K have zero kinetic energy (taken from
Serway et al., 2005, Ch.10, p.364).

4. Show that the average velocity of a conduction electron in the electron gas at absolute
zero temperature is (3/4) v_F, where v_F is the velocity of an electron at the Fermi energy
ε_F(0) (taken from Pointon, 1978, Ch.5, Problem 1).

5. Show that, for a gas in which the molecules behave as fermions, the value of the Fermi
energy is approximately

    ε_F ≈ kT ln [ N h³ / V (2πmkT)^{3/2} ]

for reasonably high temperatures (taken from Pointon, 1978, Ch.5, Problem 3).
Chapter 5

Thermodynamics of Gases

Statistical distribution of a microscopic assembly, whether based on the MB classical
statistics for Maxwellian particles (Chapter 2), the BE quantum statistics for bosons
(Chapter 3), or the FD quantum statistics for fermions (Chapter 4), involves two fixed
thermodynamic quantities, namely the total energy E of an assembly and its corresponding
temperature T. The underlying assumption made is that there is no communication between the
assembly and its surrounding environment, such that both dN = 0 and dE = 0. In this regard,
the aim of this chapter is to discuss a class of microscopic assemblies frequently described
by thermodynamic quantities, i.e., gases. The concept of entropy is first introduced in 5.1,
then followed by the application of entropy to a classical gas with no molecular interactions
in 5.2. However, the classical distribution fails to explain the Gibbs paradox, a confusing
thermodynamic problem evaluated in 5.3. To overcome the paradox, an alternative concept
of a semi-classical distribution for a semi-classical gas is therefore given in 5.4.

5.1 The concept of entropy


Of important statistical parameters, one of them, namely the Lagrange multiplier has been
derived in (3.14), where = 1/kT . This quantity is used to reformulate the Lagrange
dierential equation in (3.7) for a closed system, where the total volume V of the enclosure
and the corresponding number N of particles contained inside are kept fixed such that dV = 0
and dN = 0. Recall again that the first law of thermodynamics is written as

dQ = dU + P dV (5.1)

where dQ is a change in heat whether supplied to (if dQ > 0), or extracted from (if dQ < 0),
a given thermodynamic system, dU is an associated change in internal energy of the system
(dU > 0 for cases where the heat supplied and dU < 0 for cases where the heat extracted),


and P dV is the work corresponding to the heat supplied to, or extracted from, the system
and is directly related to a change in volume dV (dV > 0 for thermal expansion processes
and dV < 0 for thermal compression processes) at constant pressure P .
Note that while U is a function of state that depends only on the values of the initial Uᵢ and
final U_f internal energies in given states, the heat Q and the external work P dV depend
upon the particular thermodynamic process considered. For an increase, or decrease, in temperature
associated with a change in state from initial to final, there will be a corresponding increase,
or decrease, in the internal energy. For a given process, the work P dV can be positive or
negative. For P dV > 0, it follows that some amount of external work is done by the system.
For P dV < 0, it follows that some amount of external work is given to the system. In short,
the first law of thermodynamics in (5.1) describes conservation of energy in physics.
However, rather than discussing thermodynamic processes, we are here interested in what
is happening in a given system in a microscopic sense. For a system of N particles occupying
a given volume V, with no exchange of particles or volume between the system and the
environment (dN = 0 and dV = 0), it is understood from (5.1) that dQ = dU. Thus,
the Lagrange equation in (3.7) becomes

    d ln W − dQ/kT = 0                                                      (5.2)

where we have used a conversion in symbols, i.e., E = U in the sense that internal energy U
represents the total kinetic energy E of the system.
Here we formally introduce the concept of entropy as a measure of the disorder of a system,
where a system with a larger value of entropy is more disordered. As with the internal
energy U, hence the total energy E of a system, the entropy S is also a function of state,
and an infinitesimal change in the entropy dS of a given system is written as

    dS = dQ/T   or   ΔS = S_f − S_i = ∫ᵢᶠ dQ/T                              (5.3)

For a closed system under natural conditions with spontaneous and irreversible processes, the
entropy of such a system increases to a maximum value, associated with a critical condition
where a chaotic situation may occur. For this system, the entropy change is written as ΔS ≥ 0.
With the help of (5.3) above, the equation in (5.2) becomes

    d ln W − (1/k) dS = 0

from which we have

    S = k ln W                                                              (5.4)

which is known as the Boltzmann relation, directly bridging the weight of configuration W
as a microscopic quantity and the entropy S as a macroscopic quantity. The use of (5.4) will
be explored to derive some basic thermodynamic relations in the following sections.

5.2 Classical gas


In this section, we will demonstrate that the equation of state and the total internal energy
that hold for a perfectly ideal gas can be also derived from the classical formulation for the
number of ways of arranging particles in the available states, or the weight of configuration,
previously defined in (3.1),
    W = N! ∏ᵢ gᵢ^{Nᵢ}/Nᵢ!

and the Boltzmann relation stated in (5.4). We thus have


    S = k ln ( N! ∏ᵢ gᵢ^{Nᵢ}/Nᵢ! ) = k [ ln N! + Σᵢ ln (gᵢ^{Nᵢ}/Nᵢ!) ]
      = k [ N ln N − N + Σᵢ ln gᵢ^{Nᵢ} − Σᵢ ln Nᵢ! ]
      = k [ N ln N − N + Σᵢ Nᵢ ln gᵢ − Σᵢ Nᵢ ln Nᵢ + Σᵢ Nᵢ ]
      = k [ N ln N + Σᵢ Nᵢ ln gᵢ − Σᵢ Nᵢ ln Nᵢ ]

where Stirling's approximation ln x! ≈ x ln x − x and Σᵢ Nᵢ = N have been used.

We then substitute the number of particles in the i th state below,

    N_i = (N/Z) g_i e^(−ε_i/kT)

to the last expression for S to include the partition function Z, the internal energy U and
the absolute temperature T.
    S = k [ N ln N + Σ_i N_i ln g_i − Σ_i N_i ln ( (N/Z) g_i e^(−ε_i/kT) ) ]
      = k [ N ln N + Σ_i N_i ln g_i − Σ_i N_i ln N + Σ_i N_i ln Z − Σ_i N_i ln g_i − Σ_i N_i ln e^(−ε_i/kT) ]
      = k [ Σ_i N_i ln g_i + N ln Z − Σ_i N_i ln g_i + Σ_i N_i ε_i/kT ]
      = k [ N ln Z + U/kT ]

for which we have a simple form as follows,

    S = Nk ln Z + U/T                                             (5.5)

The first term on the RHS of (5.5) represents a microscopic description whereas the second
term is obtained from thermodynamic measurements. Thus it is clear that the entropy S
depends on the exact formulation of a partition function Z of a microscopic assembly.
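As a quick numerical illustration (our own check, not part of the original derivation), the closed form in (5.5) can be compared with a direct evaluation of the Boltzmann relation S = k ln W from (5.4). The sketch below, in units where k = T = 1 and with two illustrative non-degenerate levels, shows the two expressions agreeing up to Stirling corrections of order ln N:

```python
import math

# Check S = k ln W (Eq. (5.4)) against S = Nk ln Z + U/T (Eq. (5.5))
# for N particles on two non-degenerate levels; k = T = 1 units.
N = 10**6
eps = [0.0, 1.0]                                  # illustrative level energies
Z = sum(math.exp(-e) for e in eps)                # single-particle partition function
Ni = [N * math.exp(-e) / Z for e in eps]          # Boltzmann occupation numbers
U = sum(n * e for n, e in zip(Ni, eps))           # total internal energy

# ln W = ln N! - sum ln Ni!  (g_i = 1), via lgamma(x+1) = ln x!
lnW = math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in Ni)
S_boltzmann = lnW                                 # k = 1
S_formula = N * math.log(Z) + U                   # Eq. (5.5) with k = T = 1

print(S_boltzmann, S_formula)   # agree to a relative accuracy set by Stirling
```

Increasing N shrinks the relative discrepancy, since Stirling's approximation improves as (ln N)/N → 0.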

With the entropy in (5.5) at hand, we now get ready to derive the equation of state in
thermodynamics. As the first step, multiplying the left and right sides of (5.5) by T
and rearranging all the terms yield

    U − TS = −NkT ln Z                                            (5.6)

where the LHS of (5.6) is known as the Helmholtz free function and is symbolised as F.
Thus, we have

    F = −NkT ln Z                                                 (5.7)
Substituting the Boltzmann partition function Z = BV(2πmkT)^(3/2) into (5.7) results in

    F = −NkT ln [ BV(2πmkT)^(3/2) ]                               (5.8)

Below we provide two examples of how useful the Helmholtz free function is.

The equation of state can be derived from a theoretical definition of pressure,

    P = −(∂F/∂V)_T = NkT (∂ ln Z/∂V)_T = NkT (d ln V/dV)
      = NkT/V                                                     (5.9)

well-known as the equation of state for an ideal gas, PV = NkT. We can also derive
an expression for the total internal energy U of an ideal gas,

    U = −T² (∂(F/T)/∂T)_V = NkT² (∂ ln Z/∂T)_V
      = (3/2) NkT                                                 (5.10)

The results for pressure and internal energy of an ideal gas in (5.9) and (5.10) respectively
indicate that formulation for the entropy in (5.5) and the Helmholtz free function in (5.7)
play a role in determining some basic thermodynamic relations that hold for an ideal gas.
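The derivative in (5.9) also lends itself to a quick numerical check: differentiate the Helmholtz function of (5.8) by a central finite difference and compare with NkT/V. In the sketch below (k = 1 units) the constant C stands in for the factor B(2πmkT)^(3/2), and all values are purely illustrative:

```python
import math

# Numerical illustration of Eq. (5.9): P = -(dF/dV) at constant T
# recovers the ideal-gas law from F = -NkT ln(CV); k = 1 units.
N, kT, C = 1000.0, 1.5, 7.0     # C stands in for B(2*pi*m*k*T)^(3/2)

def F(V):
    return -N * kT * math.log(C * V)    # Helmholtz function, Eq. (5.8)

V, dV = 2.0, 1e-6
P_numeric = -(F(V + dV) - F(V - dV)) / (2 * dV)   # central difference
P_ideal = N * kT / V                               # Eq. (5.9)
print(P_numeric, P_ideal)   # the two agree closely
```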
The same result for the internal energy U of an ideal gas, obtained from the equipartition
principle of energy as in (2.36), also holds. In this case, U = N⟨ε⟩ = 3N(kT/2) = (3/2)NkT,
where the three modes of translational motion are associated with three degrees of freedom,
corresponding to the x, y and z components of linear momentum. Thus, statistical physics
developed from basic assumptions made for microscopic assemblies gives the same results as
those obtained from empirical measurements.

5.3 Gibbs paradox


It has been shown in 5.2 that the entropy defined in (5.5) is useful for the derivations of
some basic relations in cases where the gas molecules are considered classical and are placed
inside a closed container with no separation barrier. Here we discuss the case of two classical
gases placed in two initially separated chambers within an enclosure of a total fixed volume.
We are interested in examining what happens, in terms of the change in entropy of each
assembly, when the barrier separating the two gases is withdrawn.
Let us now consider two classical gases containing N1 and N2 particles of molecular
masses m1 and m2, initially placed in two separated chambers of volumes V1 and V2,
respectively. The two chambers are part of an enclosure with a fixed total volume 2V and
are in equilibrium at temperature T. When the barrier separating one gas from the other is
withdrawn, molecular mixing may possibly cause a change in the total entropy before and
after withdrawal.
At the initial condition before withdrawing the barrier, the total entropy of the system,
with the Boltzmann partition function Z = BV(2πmkT)^(3/2) put into (5.5), is given by

    S = Nk ln B + Nk ln V + Nk ln (2πmkT)^(3/2) + U/T             (5.11)

Based on the equation in (5.11) above, we can write the entropy for each gas as follows,

    S1 = N1 k ln B + N1 k ln V1 + N1 k ln (2πm1 kT)^(3/2) + U1/T
    S2 = N2 k ln B + N2 k ln V2 + N2 k ln (2πm2 kT)^(3/2) + U2/T      (5.12)

The total entropy of the system (two initially separated gases) before withdrawal is
thus given by

    S = S1 + S2 = (N1 + N2) k ln B + (N1 + N2) k ln V + N1 k ln (2πm1 kT)^(3/2)
                + N2 k ln (2πm2 kT)^(3/2) + (U1 + U2)/T           (5.13)

where we have assumed V1 = V2 = V.



Now we calculate the total entropy of the system after the withdrawal of the barrier.
The following conditions are fulfilled after the withdrawal: the total volume becomes
V1 + V2 = 2V, while the number of particles of each gas, N1′ = N1 and N2′ = N2, the
molecular mass of each, m1′ = m1 and m2′ = m2, and the internal energy of each,
U1′ = U1 and U2′ = U2, remain constant, respectively. Therefore,
the total entropy of the system at the final condition is given by

    S′ = (N1 + N2) k ln B + (N1 + N2) k ln 2V + N1 k ln (2πm1 kT)^(3/2)
       + N2 k ln (2πm2 kT)^(3/2) + (U1 + U2)/T                    (5.14)

It is clear from both the equations in (5.13) and (5.14) that the change in total entropy
of the system is given by

    ΔS = S′ − S = (N1 + N2) k (ln 2V − ln V)                      (5.15)

Let us assume that the two classical gases are of the same type with the density of each
ρ1 = ρ2 = ρ such that m1 = m2 = m and hence N1 = N2 = N. Based on (5.15), the change
in the total entropy is thus ΔS = 2Nk ln 2. This result is confusing in the sense that for
a mixed system containing initially separated gas molecules of the same type there should
be no change in the total entropy when the separating barrier is withdrawn, or theoretically
ΔS = 0. If the total entropy of such a system were not constant, entropy would not only be
a function of state; instead, it would also depend on whether or not the gas molecules under
consideration are of the same type.
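The paradox can be made concrete with a few lines of arithmetic. The sketch below (k = 1, illustrative N) keeps only the particle-number- and volume-dependent terms of the entropy, which are the only ones that change on withdrawal of the barrier; the classical counting yields the spurious 2Nk ln 2 for identical gases, while the semi-classical counting introduced in the next section, which carries an extra −Nk ln N term, yields zero:

```python
import math

# Gibbs paradox, Eq. (5.15): classical entropy S ~ Nk ln V per gas gives a
# nonzero "mixing" entropy even for two identical gases; k = 1 units, N illustrative.
N, V = 1.0e4, 1.0

def S_volume_term(n, vol):
    return n * math.log(vol)            # volume-dependent part of Eq. (5.11)

S_before = 2 * S_volume_term(N, V)                  # two chambers, V each
S_after = S_volume_term(2 * N, 2 * V)               # barrier removed
dS_classical = S_after - S_before                   # = 2N ln 2, spuriously nonzero

# Semi-classical counting replaces Nk ln V by Nk ln(V/N), Eq. (5.17):
S_before_sc = 2 * N * math.log(V / N)
S_after_sc = 2 * N * math.log(2 * V / (2 * N))
print(dS_classical, S_after_sc - S_before_sc)       # 2N ln 2 versus zero
```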

5.4 Semi-classical gas


Gibbs was the one at that time who observed the above paradox and then proposed a new,
alternative formulation for the classical weight of configuration to overcome such a difficulty.
The qualitative explanation is given below. Recall again that the entropy defined in (5.5)
for a classical gas,

    S = Nk ln Z + U/T

is obtained from the classical weight of configuration written in (3.1),

    W = N! ∏_i (g_i^Ni / N_i!)

It seems that the classical weight of configuration has a serious problem: the formulation
is derived from the basic physics assumption that classical gas molecules are distinguishable.
Gibbs then proposed that the number of ways of arranging particles W in available states
should be much less than the existing value, i.e., N! times smaller than that in (3.1). We
thus have

    W = ∏_i (g_i^Ni / N_i!)                                       (5.16)

for the weight of configuration, where gas molecules have to be considered indistinguishable.
A class of gas molecules having the weight of configuration written in (5.16) above is then
known as the semi-classical gas.
The semi-classical weight of configuration in (5.16) is used to derive the total entropy
of a mixed system of gas molecules, previously discussed as the Gibbs paradox in 5.3, using
the Boltzmann relation in (5.4). The use of the Boltzmann relation is valid for both classical
and semi-classical gas molecules. Substituting (5.16) into (5.4) yields

    S = k ln ∏_i (g_i^Ni / N_i!)
      = Nk + Nk ln B + Nk ln (V/N) + Nk ln (2πmkT)^(3/2) + U/T        (5.17)
      = Nk − Nk ln N + Nk ln B + Nk ln V + Nk ln (2πmkT)^(3/2) + U/T

The first two terms on the RHS of (5.17) mark the difference in the formulation of the total
entropy between a semi-classical gas and a classical gas (see (5.11)). Based on the above
equation, we can then write the entropy of each chamber illustrated in 5.3 as

    S1 = N1 k − N1 k ln N1 + N1 k ln B + N1 k ln V1 + N1 k ln (2πm1 kT)^(3/2) + U1/T
    S2 = N2 k − N2 k ln N2 + N2 k ln B + N2 k ln V2 + N2 k ln (2πm2 kT)^(3/2) + U2/T      (5.18)

The total entropy of the system before the withdrawal is thus given by

    S = 2Nk − 2Nk ln N + 2Nk ln B + 2Nk ln V + 2Nk ln (2πmkT)^(3/2) + (U1 + U2)/T      (5.19)

where we have again assumed V1 = V2 = V, N1 = N2 = N, and m1 = m2 = m for the same
type of gas molecules.
By analogy in comparison with a classical gas, the total entropy of a semi-classical gas
after the withdrawal, with 2N particles now occupying the volume 2V, is given by

    S′ = 2Nk − 2Nk ln 2N + 2Nk ln B + 2Nk ln 2V + 2Nk ln (2πmkT)^(3/2) + (U1 + U2)/T      (5.20)

It is very clear that (5.19) is exactly the same as (5.20), since the factors of 2 in the
particle-number and volume terms cancel, meaning that there is no change in total entropy
before and after the withdrawal of the barrier separating the two chambers. Thus, the Gibbs
paradox is resolved, for which ΔS = 0, using the semi-classical weight of configuration
defined in (5.16).
It is interesting to mention here that the entropy of semi-classical gas molecules in (5.17)
can be written as

    S = Nk [ ln ( (V/N)(2πmkT)^(3/2)/h³ ) + 5/2 ]                 (5.21)

known as the Sackur-Tetrode equation, obtained from substitution of both B = 1/h³ and
U = (3/2)NkT into (5.17). This equation is useful in deriving basic thermodynamic relations
for a semi-classical gas (see Problem 2, Exercises in 5.6). Students are encouraged to find
further discussion on this topic in other physics lectures.
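As a numerical illustration of (5.21), the sketch below evaluates the Sackur-Tetrode entropy for argon at room conditions; the computed molar entropy comes out close to the calorimetrically measured value of about 154.8 J/(mol K), which is the standard check of the equation. The constants are CODATA values, and the argon molar mass is a tabulated figure:

```python
import math

# Physical constants (SI, CODATA values).
k  = 1.380649e-23      # Boltzmann constant, J/K
h  = 6.62607015e-34    # Planck constant, J s
NA = 6.02214076e23     # Avogadro number, 1/mol

def sackur_tetrode_molar(m, T, P):
    """Molar entropy of a semi-classical monoatomic ideal gas, Eq. (5.21),
    with V/N = kT/P taken from the equation of state PV = NkT."""
    v = k * T / P                                    # volume per particle
    lam3 = (2 * math.pi * m * k * T) ** 1.5 / h**3   # (2*pi*m*k*T)^(3/2)/h^3
    return NA * k * (math.log(v * lam3) + 2.5)

m_Ar = 39.948e-3 / NA            # mass of one argon atom, kg
S = sackur_tetrode_molar(m_Ar, 298.15, 101325.0)
print(S)   # close to the measured ~154.8 J/(mol K) for argon
```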

5.5 Gas of diatomic molecules


5.5.1 A quantum model for rotational motion
The dynamics of a gas can be best described using a model relevant to its molecular structure.
The discussion on the specific heat in 2.3.3 is devoted to an ideal gas of monoatomic
molecules, where the only type of motion considered is translational. For an ideal gas of
diatomic molecules, however, there is a possibility to find both rotational and vibrational
motions, and even other possible types of motion, making the degree of freedom higher.
Here, we examine a diatomic gas where it is possible to have at least one additional degree
of freedom, owing to rotational motion. Let us assume that each molecule of a diatomic gas
contains two indistinguishable atoms having the same mass m, separated by a distance
of 2a from one another. The moment of inertia about an axis through the centre of the line
connecting the two atoms and perpendicular to that line is given by J = 2ma², where a is
the distance from one atom to the centre of the system.
Using operators in quantum mechanics and by analogy with translational motion we have
for rotational motion,

    Êr = L̂²/(2J)                                                  (5.22)

where Êr is a quantum operator for the rotational kinetic energy and L̂ is a quantum operator
for the orbital angular momentum that obeys the following eigen equation,

    L̂² ψ = ℏ² ℓ(ℓ + 1) ψ                                          (5.23)

where ψ is a wave function associated with the system and ℓ is a series of orbital angular
momentum numbers (ℓ = 0, 1, 2, ......). Substituting (5.23) into (5.22) results in

    Er = ℏ² ℓ(ℓ + 1)/(2J)                                         (5.24)

A small but important correction is then given for the description of angular momentum.
Regarding the spinning motion of each atom, the orbital angular momentum number ℓ is
replaced with the total angular momentum number j. Thus, (5.24) becomes

    Ej = ℏ² j(j + 1)/(2J)                                         (5.25)

A simple algebra can then be used to show that (5.25) implicitly states the allowed energies
corresponding to rotational motion of the atoms in a diatomic gas as follows,

    εj = ℏ² j(j + 1)/(2J)                                         (5.26)

where j = ℓ ± 1/2. For each value of j there will be (2j + 1) possible values of mj, accounting
for the degree of a quantum degeneracy gj = (2j + 1). Here, mj represents the total magnetic
quantum number, where mj = −j, ......, 0, ......, j. We can thus write the partition function
generated from the contribution of rotational motion as
    Zr = Σ_{j=0}^∞ gj e^(−εj/kT) = Σ_{j=0}^∞ (2j + 1) e^(−εj/kT)
       = Σ_{j=0}^∞ (2j + 1) e^(−j(j+1)K/kT)                       (5.27)

where K = h²/(8π²J) = ℏ²/(2J). Later, we will see that this partition function contributes
to the value of the specific heat of a diatomic gas at a particular range of temperature.
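The sum in (5.27) converges quickly and is easy to evaluate numerically. The sketch below truncates it at a finite jmax; the value θr = K/k ≈ 85 K used here is the tabulated rotational characteristic temperature of hydrogen (θr is defined in (5.44)):

```python
import math

def Z_rot(theta_r, T, jmax=200):
    """Rotational partition function, Eq. (5.27), with K/k written as the
    characteristic temperature theta_r = K/k of Eq. (5.44)."""
    return sum((2 * j + 1) * math.exp(-j * (j + 1) * theta_r / T)
               for j in range(jmax + 1))

theta_r = 85.4                   # approximate tabulated value for H2, in K
print(Z_rot(theta_r, 300.0))     # at T >> theta_r the sum approaches T/theta_r
```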

5.5.2 A quantum model for vibrational motion


Having derived a quantum model for rotational motion of a gas of diatomic molecules,
we here develop a further model to account for possible vibration of the molecules, giving an
additional degree of freedom. To include this motion in the dynamics of such a gas,
we model the two atoms as a simple harmonic oscillator, where the two atoms are jointly
coupled by an imaginary spring. We will use a quantum model for the harmonic oscillator.
For comparison, we recall again that in classical physics the mean energy of a 1-D harmonic
oscillator is kT, as written in (2.40). It follows that for a classical oscillator the
energy is continuously distributed over arbitrary values of energy depending on temperature.

In contradiction to this, a quantum approach places a limitation on the oscillator energy as
follows,

    εn = (n + 1/2) hν                                             (5.28)

where n = 0, 1, 2, ...... and ν represents the frequency of the oscillator. It is clear from (5.28)
that the energy of a quantum model for the harmonic oscillator is quantised over allowed
levels of energy, corresponding to the available states of energy. The lowest level of energy,
corresponding to the ground state, is achieved for n = 0, that is, ε0 = (1/2)hν. Other states
with higher levels of energy associated with n > 0 are known as excited states.

We will derive the mean energy of a 1-D quantum oscillator with allowed values of energy
written in (5.28), where the result will be compared with that derived from classical views.
The first step to do is to rewrite (3.10),

    Z = Σ_i g_i e^(−ε_i/kT)                                       (5.29)

The quantum model for the oscillator is such that for a single value of a quantum number n
there exists only a single state corresponding to a singly allowed level of energy; in short,
g_i = 1. Substituting (5.28) into (5.29) gives

    Zv = Σ_{n=0}^∞ e^(−(n+1/2)hν/kT) = e^(−hν/2kT) Σ_{n=0}^∞ e^(−nhν/kT)      (5.30)

where a summation index i in (5.29) is replaced with n for convenience. Substituting a series
Σ x^n = (1 − x)^(−1), where x = e^(−hν/kT), into (5.30) results in

    Zv = e^(−hν/2kT) / (1 − e^(−hν/kT))                           (5.31)

known as the partition function generated from vibrational motion of the diatomic molecules.
The mean energy of a 1-D quantum harmonic oscillator can then be easily calculated using
the definition previously discussed in 5.2, considering that U = N⟨ε⟩, giving

    ⟨ε⟩ = kT² (∂ ln Z/∂T)_V = kT² (∂/∂T) ln [ e^(−hν/2kT) / (1 − e^(−hν/kT)) ]
        = kT² (∂/∂T) [ ln e^(−hν/2kT) − ln (1 − e^(−hν/kT)) ]
        = kT² [ (1/T²)(hν/2k) + hν/(kT² (e^(hν/kT) − 1)) ]            (5.32)
        = hν [ 1/2 + 1/(e^(hν/kT) − 1) ]

which will be examined for high temperatures. At a range of high temperatures, we have
hν ≪ kT and thus

    e^(hν/kT) ≈ 1 + hν/kT + (1/2)(hν/kT)²

to the second order for better approximation. Substituting the expression above into the last
expression in (5.32) yields

    ⟨ε⟩ = hν [ 1/2 + (kT/hν)(1 − hν/2kT) ] = kT                   (5.33)

which is exactly the same as that obtained from the equipartition principle discussed in
2.3.2. However, there is a fundamental difference in the detail of gaining the result between
classical theory and the quantum approach used here.
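The crossover between the quantum result (5.32) and the classical limit (5.33) is easy to see numerically. The sketch below evaluates the mean energy in units of hν:

```python
import math

def mean_energy(hv, kT):
    """Mean energy of a 1-D quantum oscillator, Eq. (5.32)."""
    return hv * (0.5 + 1.0 / (math.exp(hv / kT) - 1.0))

hv = 1.0
for kT in (0.1, 1.0, 10.0, 100.0):
    print(kT, mean_energy(hv, kT))
# At kT >> hv the mean energy approaches the classical value kT, Eq. (5.33);
# at kT << hv it freezes at the ground-state value hv/2.
```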

5.5.3 Total partition function of a diatomic gas


In this section, we develop a more comprehensive model for a gas of diatomic molecules,
with the total energy contributed from all translational, rotational, and vibrational motions.
Considering this, we begin with writing the total partition function Z as

Z = Zt Zr Zv (5.34)

where t, r, and v denote translational, rotational, and vibrational. Detailed discussions on


each partition function on the RHS of (5.34) will be given in the following paragraphs.
The partition function associated with translational motion has been derived in (2.47)
and its further corresponding form in (3.19). Thus we have

    Zt = BV(2πmkT)^(3/2) = (V/h³)(2πmkT)^(3/2)                    (5.35)

The next two steps are thus to write the partition function associated with rotational motion
previously given in (5.27) as follows,
    Zr = Σ_{j=0}^∞ (2j + 1) e^(−j(j+1)K/kT)                       (5.36)

and that associated with vibrational motion previously written in (5.31) as follows,
    Zv = e^(−hν/2kT) / (1 − e^(−hν/kT))                           (5.37)

where ν is the characteristic frequency of a gas with vibrating diatomic molecules. Putting
all (5.35), (5.36), and (5.37) together into (5.34) results in

    Z = (V/h³)(2πmkT)^(3/2) × Σ_{j=0}^∞ (2j + 1) e^(−j(j+1)K/kT) × e^(−hν/2kT)/(1 − e^(−hν/kT))      (5.38)

known as the total partition function of a diatomic gas with both vibration and rotation of
diatomic molecules being present, owing to temperature variations, as additional degrees of
freedom to translation (see Figure 5.1 for illustration).

[Figure 5.1 here: molar specific heat CV of hydrogen versus temperature, with plateaus
near (3/2)R (translation only), (5/2)R (with rotation) and (7/2)R (with vibration)]

Figure 5.1: A curve showing the molar specific heat of hydrogen as a diatomic gas for
a various range of temperatures with a logarithmic scale of T for the horizontal axis
(taken from p.652, Ch.21, College Physics, Serway et al., 2005).

5.5.4 The specific heat of a diatomic gas

One important quantity that describes the characteristics of a gas, whether it is a monoatomic
or a diatomic gas, is the molar specific heat as already defined in (2.44). As also shown in 5.4,
the semi-classical distribution is more reliable than the classical one in formulating the total
entropy of a mixed gas. Hence, we here consider a semi-classical gas of N diatomic molecules
where the molecules are indistinguishable. The entropy of such a gas is given by (5.17) and
is rewritten as

    S = Nk + Nk ln (Z/N) + U/T                                    (5.39)

Substituting (5.39) into the definition of the Helmholtz free function, F = U − TS, yields

    F = −kT ln (Z^N/N!)                                           (5.40)

The equation in (5.40) differs by a factor of N! in the denominator of the partition function
from that derived from classical views for a classical gas. This difference is also found in
the formulation of the weight of configuration W between the classical distribution in (3.1)
and the semi-classical distribution in (5.16). We can therefore rewrite (5.40) as

    F = −kT ln 𝒵                                                  (5.41)

where the total partition function 𝒵 is here defined as

    𝒵 = Z^N/N!                                                    (5.42)

known as the partition function for a semi-classical gas.

With the help of both equations (5.41) and (5.10), we can calculate the internal energy
of a diatomic gas as follows,

    U = −T² (∂(F/T)/∂T)_V = kT² (∂ ln 𝒵/∂T)_V = NkT² (∂ ln Z/∂T)_V
      = NkT² [ ∂ ln Zt/∂T + ∂ ln Zr/∂T + ∂ ln Zv/∂T ]
      = (3/2) NkT + NkT² (∂/∂T) ln [ Σ_{j=0}^∞ (2j + 1) e^(−j(j+1)K/kT) ]     (5.43)
        + Nhν [ 1/2 + 1/(e^(hν/kT) − 1) ]

At this stage, it is then useful to define specific parameters, based on terms on the RHS
of (5.43), as a measure of the relative contributions of both rotational (the second term)
and vibrational (the third term) motions to the total internal energy of a diatomic gas
in (5.43). Note that the first term is a contribution of translational motion. The parameters
are characteristic temperatures for rotation and vibration of diatomic molecules,

    θr = K/k    and    θv = hν/k                                  (5.44)

In general, the two characteristic temperatures in (5.44) satisfy θr ≪ θv, for which
K ≪ hν. At low temperatures, T ≪ θr ≪ θv applies, making the first term (3/2)NkT
(a contribution of the translating molecules) and a part of the third term, (1/2)Nhν (a partial
contribution of the vibrating molecules), dominant over the other parts (the second term
on the RHS of (5.43) approaches zero at extremely small T, and thus the rotation of
diatomic molecules has no contribution at low temperatures). Indeed, of (3/2)NkT and
(1/2)Nhν only the former depends upon temperature. This makes it easy to understand that
at low temperatures a diatomic gas shows the same behaviour as a monoatomic gas.

To prove this claim, we recall from 2.3.3 that the heat capacity CV of a gas (either
monoatomic or diatomic) at a constant volume, with the temperature-dependent internal
energy U given in (5.43), is defined as

    CV = (∂U/∂T)_V = (3/2) Nk = (3/2) n N₀ k = (3/2) nR           (5.45)

The specific heat cV of a diatomic gas at a constant volume for a range of low temperatures
is then given by

    cV = CV/n = (3/2) R                                           (5.46)
The result is consistent with experimental data of the molar specific heat of all monoatomic
gases over a wide range of temperatures and of hydrogen as a diatomic gas seen in Figure 5.1
for reasonably low temperatures. At low temperatures, additional energy gained by each of
diatomic molecules from collision with its neighbouring molecules is inadequate to increase
levels of energy from the ground state levels to excited state levels of rotation and vibration.
The only contribution to the total internal energy is from translation at low temperatures.

However, we can claim that the model developed here for a gas of diatomic molecules
remains reliable as it shows a dependence on temperature. Relatively small deviations from
the predicted values of the specific heat in a real gas are due to the non-negligible presence of
weak molecular interactions, as opposed to a perfectly ideal gas where molecular interactions
are absent. Let us therefore examine in detail the equation in (5.43) in relation to Figure 5.1.
As temperature increases to room temperature, the specific heat of a diatomic gas rises to
a value in which rotation, but not vibration, comes into play in the dynamics of the specific
heat. Thus, for a range of middle temperatures applied to (5.43) we have

    cV = (1/n)(∂U/∂T)_V = (3/2) R + R = (5/2) R                   (5.47)

When the temperature is then continuously increased to sufficiently high temperatures,
the vibrating diatomic molecules have a significant contribution to the specific heat. Based on
(5.43) and (5.47), all the terms on the RHS are now fully contributing to the specific heat.
Thus, for a diatomic gas at high temperatures we have

    cV = (1/n)(∂U/∂T)_V = (3/2) R + R + R = (7/2) R               (5.48)

It is clear that the theoretical predictions for the specific heat of a diatomic gas at
various ranges of temperature given by (5.46), (5.47) and (5.48) are experimentally
confirmed using hydrogen as an example in Figure 5.1.
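The three plateaus can be reproduced numerically from the internal energy (5.43) by finite differencing. The sketch below uses hydrogen-like characteristic temperatures θr ≈ 85 K and θv ≈ 6140 K (tabulated approximate values) and truncates the rotational sum at a finite jmax; it is a sketch of the model above, not of real hydrogen data (which, for instance, liquefies at 20 K and dissociates before the (7/2)R plateau is fully reached):

```python
import math

def U_per_Nk(T, theta_r=85.4, theta_v=6140.0, jmax=300):
    """Internal energy per molecule of Eq. (5.43), in units of k: U/(Nk)."""
    trans = 1.5 * T
    Zr = sum((2*j + 1) * math.exp(-j*(j + 1) * theta_r / T) for j in range(jmax + 1))
    # rotational energy kT^2 d(ln Zr)/dT, evaluated as a Boltzmann-weighted mean
    Er = sum(j*(j + 1) * theta_r * (2*j + 1) * math.exp(-j*(j + 1) * theta_r / T)
             for j in range(jmax + 1)) / Zr
    vib = theta_v * (0.5 + 1.0 / (math.exp(theta_v / T) - 1.0))   # Eq. (5.32)
    return trans + Er + vib

def cV_over_R(T, dT=0.01):
    """Specific heat c_V/R from a central finite difference of U(T)."""
    return (U_per_Nk(T + dT) - U_per_Nk(T - dT)) / (2 * dT)

for T in (20.0, 300.0, 5000.0):
    print(T, cV_over_R(T))   # approaching the plateaus near 3/2, 5/2 and 7/2
```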

5.6 Exercises
1. Show that, when two different types of gas molecules A and B of volumes VA and
VB and total molecules of NA and NB, respectively, are mixed together at constant
temperature T to form a total volume of VA + VB, there is an increase in the total
entropy given by the mixing term as follows,

    k [ (NA + NB) ln (VA + VB) − NA ln VA − NB ln VB ]

Hence deduce that, when VA = VB and NA = NB = N, the mixing term is 2Nk ln 2
(taken from Pointon, 1978, Ch.7, Problem 1).

2. Derive the equation of state and the total internal energy of a semi-classical perfect gas
from an expression for the entropy for such a gas. Explain why the results obtained
are the same as those for a classical perfect gas (taken from Pointon, 1978, Ch.7,
Problem 3).

3. Show that the equation of state for a diatomic gas as derived from (8.27) of Pointon
(1978) is the same as that for a monoatomic gas (taken from Pointon, 1978, Ch.8,
Problem 5).

4. Evaluate the partition function at temperature T for a classical 1-D harmonic oscillator
having an energy of

    ε = p_x²/(2m) + (1/2)λx²

with λ the force constant, and hence find the mean energy of such an oscillator at this
temperature (taken from Pointon, 1978, Ch.8, Problem 4).

5. Write out the partition function for a 3-D harmonic oscillator assuming that it obeys
both classical and quantum distributions. Show that both statistical distributions give
a mean kinetic energy of 3kT at a range of high temperatures (taken from Pointon,
1978, Ch.8, Problem 7).
Chapter 6

Canonical, Grand Canonical, and


Micro Canonical Ensembles

In all previous discussions on physical assemblies under consideration, we have assumed that
both the total energy and the number of particles (systems) are constant such that dE = 0
and dN = 0, and that there is no interaction between the particles (the systems) and the
surroundings. Although it has been demonstrated that the study of these assemblies leads to
useful results over a range of physical problems, limitations imposed on the assemblies prevent
the results from being applied to wider cases. It is the purpose of this chapter to extend
the method to cases with possible interactions between the constituents of a given assembly.
It is important here to introduce the concept of an ensemble. For this purpose, consider
a group of assemblies having the same volume and type of particles. In these assemblies,
there will be a large number of dierent arrangements of the particles within the available
states of energy. If there is at least one assembly in the group for each possible arrangement,
then such a group is said to form an ensemble. In this chapter, we will discuss three types of
ensemble, namely a canonical ensemble discussed in detail in 6.1 with an imperfect gas
as a case study in 6.2, a grand canonical ensemble partially introduced in 6.3, and
a micro canonical ensemble briefly introduced in 6.4. A brief introduction to both
closed and open assemblies within a given ensemble will also be given.

6.1 Canonical ensemble


In a canonical ensemble, the number of particles (systems) is fixed and the same
for each assembly such that dN = 0. Instead of having a fixed energy such that dE = 0,
the constituent assemblies may have different energies at constant temperature such that
dT = 0. Under such conditions, it is possible for the assemblies to exchange their energies and
hence to allow interactions between the particles (the systems) within a given ensemble.


6.1.1 Classical total partition function

As the assemblies in a canonical ensemble are of the same temperature, it is natural to
consider the neighbouring assemblies within the ensemble as being in thermal equilibrium.
Thus, the ensemble discussed is considered as a physical enclosure of constant temperature.
In this type of ensemble, the total partition function, here denoted 𝒵, is generally written as

    𝒵 = Σ_{N_i} W e^(−Σ_i N_i ε_i/kT)                             (6.1)

where W is again the number of possible configurations, or the weight of configuration, and T
is the ensemble temperature, fixed at a constant value. Hence, such a partition function is
sometimes referred to as the constant temperature partition function.
Now recall again that the number of possible configurations or the weight of configuration
for classical systems is given by (3.1) below,

    W = N! ∏_i (g_i^Ni / N_i!)

for which (6.1) becomes

    𝒵 = Σ_{N_i} N! ∏_i (g_i^{N_i}/N_i!) e^(−N_i ε_i/kT)           (6.2)
where the summation is calculated over all the possible occupation numbers N_i (with
Σ_i N_i = N and Σ_i N_i ε_i = E). The equation in (6.2) can be directly evaluated by
examining the following expansions,

    ( Σ_i g_i e^(−ε_i/kT) )^N = Σ_{N_i} ( N!/∏_i N_i! ) ∏_i ( g_i e^(−ε_i/kT) )^{N_i}
                              = Σ_{N_i} N! ∏_i ( g_i e^(−ε_i/kT) )^{N_i} / N_i!       (6.3)
                              = Σ_{N_i} N! ∏_i ( g_i^{N_i}/N_i! ) e^(−N_i ε_i/kT)

The RHS of the last expression in (6.3) is exactly the same as that of (6.2), from which it is
easy to conclude that

    𝒵 = ( Σ_i g_i e^(−ε_i/kT) )^N = Z^N                           (6.4)

where Z, given by the quantity inside the bracket in (6.4) as also defined in (5.29), is here
the partition function for a system in a particular assembly within a canonical ensemble.

6.1.2 Semi-classical total partition function


In this section, we will derive the total partition function for a semi-classical gas. Let us then
begin with rewriting the number of possible configurations or the weight of configuration for
a semi-classical gas as

    W = ∏_i (g_i^Ni / N_i!)

Substituting the above relation into (6.1), we obtain

    𝒵 = Σ_{N_i} ∏_i ( g_i^{N_i}/N_i! ) e^(−N_i ε_i/kT)
      = (1/N!) Σ_{N_i} N! ∏_i ( g_i^{N_i}/N_i! ) e^(−N_i ε_i/kT)      (6.5)
i

It is clear from both the equations in (6.3) and (6.4) that (6.5) simply becomes

    𝒵 = (1/N!) ( Σ_i g_i e^(−ε_i/kT) )^N = Z^N/N!                 (6.6)

which is exactly the same as that given in (5.42).
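The closure of the sum in (6.4) to the N-th power of the system partition function, and hence the N! counting in (6.6), is just the multinomial theorem, which can be verified by brute force for a tiny assembly. The sketch below sums W e^(−E/kT) over every occupation set {N_i} of N = 3 particles in three levels, with arbitrary illustrative energies and degeneracies (kT = 1):

```python
import math
from itertools import product as iproduct

# Brute-force check of Eq. (6.4): summing W * exp(-E/kT) over all occupation
# numbers {N_i} reproduces Z^N for the classical weight of Eq. (3.1).
eps = [0.0, 1.0, 2.5]     # level energies in units of kT (illustrative)
g   = [1, 2, 2]           # degeneracies (illustrative)
N   = 3

Z1 = sum(gi * math.exp(-ei) for gi, ei in zip(g, eps))   # single-system Z

total = 0.0
for occ in iproduct(range(N + 1), repeat=3):
    if sum(occ) != N:
        continue                                         # enforce sum N_i = N
    W = math.factorial(N)
    for Ni, gi in zip(occ, g):
        W *= gi**Ni / math.factorial(Ni)                 # weight, Eq. (3.1)
    total += W * math.exp(-sum(Ni * ei for Ni, ei in zip(occ, eps)))

print(total, Z1**N)   # the two agree
```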

6.1.3 Total partition function in the presence of interactions


In the presence of interactions between the systems, the associated total partition function
is calculated by integrating over all the available volume of phase space. Let dΓ_N thus
be an element of volume in phase space such that the total number of allowed energy states
represented by this element of volume is given by dΓ_N/h^{3N}. The total partition function
for classical systems is then given by

    𝒵 = (1/h^{3N}) ∫ e^(−E/kT) dΓ_N                               (6.7)

For semi-classical systems, however, there are N! possible ways of arranging any given
N sets of six position and momentum coordinates among the N systems, without additional
distinguishable states produced. For this reason, any state in dΓ_N is counted only as
1/N! to allow for this indistinguishability. Thus, the total partition function for semi-classical
systems is written as

    𝒵 = (1/(h^{3N} N!)) ∫ e^(−E/kT) dΓ_N                          (6.8)

Note that in both (6.7) and (6.8) E denotes the total energy that may be a contribution of
both kinetic and potential energies of all the systems (see further discussion in 6.2).

6.2 Imperfect gas


One of the possible applications of the canonical ensemble to real cases is that of an imperfect
gas, where molecular interactions between gas molecules are not negligible and therefore basic
assumptions made for a perfectly ideal gas, formally stated at the beginning of Chapter 2, no
longer apply. The total energy of such a gas depends upon both the position and momentum
of each gas molecule. If potential interactions φ_ij between the molecules are assumed to be
independent of the molecular momenta, the total energy E of the assembly can then be
written as

    E = Σ_{i=1}^N (1/2m)(p_xi² + p_yi² + p_zi²) + Σ_{i=1}^N Σ_{j>i} φ_ij      (6.9)

where φ_ij denotes the potential interaction between the ith and jth molecules. The condition
j > i imposed on the second summation in (6.9) prevents the interaction term φ_ij being
counted twice.
As argued above, a better description of an imperfect gas is achieved using the semi-classical
distribution, where the systems are considered indistinguishable. The total partition function
for an imperfect gas (as a semi-classical gas) is obtained by substituting (6.9) into (6.8),

Z = \frac{1}{\hbar^{3N} N!} \int \exp \left[ - \left( \sum_{i=1}^{N} \frac{1}{2m} \left( p_{x_i}^2 + p_{y_i}^2 + p_{z_i}^2 \right) + \sum_{i=1}^{N} \sum_{j>i} \phi_{ij} \right) / kT \right] \prod_{i=1}^{N} dx_i \, dy_i \, dz_i \, dp_{x_i} \, dp_{y_i} \, dp_{z_i}    (6.10)

where d\Gamma_N is taken to be \prod_{i=1}^{N} dx_i \, dy_i \, dz_i \, dp_{x_i} \, dp_{y_i} \, dp_{z_i}. The integrals in (6.10) that involve
the components of momentum can be evaluated separately, with the limits of integration
running from -\infty to \infty for each component. Thus, we can write
Z = \frac{1}{\hbar^{3N} N!} \left\{ \int_{-\infty}^{\infty} \exp \left( -p_x^2 / 2mkT \right) dp_x \right\}^{3N} \int_V \cdots \int_V \exp \left( - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT \right) \prod_{i=1}^{N} dx_i \, dy_i \, dz_i    (6.11)

where the integrals involving the spatial coordinates are taken over the volume V of the gas.
The first group of integrals in (6.11), involving the components of momentum, can be
evaluated using the standard Gaussian result,

\int_{-\infty}^{\infty} \exp \left( -p_x^2 / 2mkT \right) dp_x = \left( 2\pi mkT \right)^{1/2}
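This standard Gaussian integral is easy to verify numerically. The sketch below (with illustrative values of m, k, and T, none taken from the text) compares a simple midpoint quadrature over a wide finite range with the closed form (2πmkT)^{1/2}:

```python
import math

# Illustrative (dimensionless) values, chosen so the numbers stay O(1).
m, k, T = 1.0, 1.0, 1.0
mkT = m * k * T

# Midpoint quadrature of exp(-p^2 / 2mkT) over a wide range; the
# integrand is utterly negligible beyond a few thermal momenta.
a, b, n = -50.0, 50.0, 200000
h = (b - a) / n
numeric = math.fsum(
    math.exp(-((a + (i + 0.5) * h) ** 2) / (2 * mkT)) for i in range(n)
) * h

analytic = math.sqrt(2 * math.pi * mkT)
print(abs(numeric - analytic) < 1e-9)  # True
```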

such that (6.11) becomes

Z = \frac{(2\pi mkT)^{3N/2}}{\hbar^{3N} N!} \int_V \cdots \int_V \exp \left( - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT \right) \prod_{i=1}^{N} dx_i \, dy_i \, dz_i    (6.12)

Now the story begins. With \prod_{i=1}^{N} \int_V dx_i \, dy_i \, dz_i = V^N in hand, let us first consider the case
of no molecular interactions, i.e., φ_ij = 0; then (6.12) simply becomes

Z = \frac{(2\pi mkT)^{3N/2}}{\hbar^{3N} N!} V^N = \frac{1}{N!} \left[ \frac{V (2\pi mkT)^{3/2}}{\hbar^3} \right]^N = \frac{Z^N}{N!}    (6.13)
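Equation (6.13) immediately reproduces the ideal gas law: only the factor V^N depends on volume, so p = kT ∂(ln Z)/∂V = NkT/V. The short sketch below (with illustrative values, not taken from the text, and Stirling's approximation for ln N!) checks this numerically:

```python
import math

# Illustrative values; the hbar- and T-dependent factors cancel in the
# volume derivative, so their numerical values do not matter here.
N, k, T, m, hbar = 100, 1.0, 1.0, 1.0, 1.0

def ln_Z(V):
    # ln Z from (6.13), using Stirling: ln N! ~ N ln N - N
    return (N * math.log(V)
            + 1.5 * N * math.log(2 * math.pi * m * k * T)
            - 3 * N * math.log(hbar)
            - (N * math.log(N) - N))

V, dV = 10.0, 1e-6
p_numeric = k * T * (ln_Z(V + dV) - ln_Z(V - dV)) / (2 * dV)  # p = kT d(lnZ)/dV
p_ideal = N * k * T / V
print(abs(p_numeric - p_ideal) < 1e-4)  # True
```

Because only the V^N factor varies with volume, the pressure is independent of the momentum integrals and of the N! counting factor.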

which is exactly the same as the semi-classical total partition function written in both
(5.42) and (6.6). But this is not the end of the story: molecular interactions between the
imperfect gas molecules are always present, and they make the calculation of the total
partition function for an imperfect gas complicated. The difficulty lies in the second part
on the RHS of (6.12), the configurational integrals, which we now examine closely.
As mentioned and argued above, the total energy of an imperfect gas consists of both
kinetic and potential energies. The potential energy due to molecular interactions therefore
no longer vanishes, i.e., φ_ij ≠ 0, and the integrals in (6.12) must be evaluated by writing
them as

\int_V \cdots \int_V \exp \left( - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT \right) \prod_{i=1}^{N} dx_i \, dy_i \, dz_i = I_N    (6.14)

Thus, the total partition function for the interacting, semi-classical imperfect gas molecules
can be expressed as
Z = \frac{(2\pi mkT)^{3N/2}}{\hbar^{3N} N!} \, I_N    (6.15)
Here, a further discussion on the value of I_N is associated with the case of weak
interaction, where the magnitude of φ_ij is much less than the thermal energy kT. In this
approximation, we can write

e^{-\phi_{ij}/kT} \approx 1 - \frac{\phi_{ij}}{kT} = 1 + f(r_{ij})    (6.16)

where f(r_{ij}) is, for an imperfect gas of monoatomic molecules, a function only of
temperature and of the separation r_{ij} of the molecules. Keeping this in mind, we can
calculate the exponential part of the integrals in (6.14) as follows,
\exp \left( - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT \right) = \prod_{i=1}^{N} \prod_{j>i} \left[ 1 + f(r_{ij}) \right] \approx 1 + \sum_{i=1}^{N} \sum_{j>i} f(r_{ij}) = 1 - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT    (6.17)

Substituting (6.17) into (6.14), we then have

I_N = \int_V \cdots \int_V \left( 1 - \sum_{i=1}^{N} \sum_{j>i} \phi_{ij}/kT \right) \prod_{i=1}^{N} dx_i \, dy_i \, dz_i

    = V^N - \int_V \cdots \int_V \sum_{i=1}^{N} \sum_{j>i} \left( \phi_{ij}/kT \right) \prod_{i=1}^{N} dx_i \, dy_i \, dz_i    (6.18)

    = V^N + \frac{N(N-1)}{2} \, V^{N-1} a
where a = \int_0^{\infty} f(r_{ij}) \, 4\pi r^2 \, dr. It is clear from the last expression in (6.18) that the effect of
molecular interactions upon the total partition function of an imperfect gas is to provide
a small but important correction to the result for non-interacting, semi-classical gas
molecules, as shown by the second term on the RHS of (6.18). When this term is much less
than V^N, or in cases where the interactions between the systems are negligible, the total
partition function in (6.15) reduces to (6.13), which applies to the semi-classical ideal gas
with zero φ_ij.
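To get a concrete feel for the size of a, one can evaluate a = \int_0^{\infty} f(r) \, 4\pi r^2 \, dr numerically for a model potential. The sketch below uses a square-well potential (a hard core of radius σ and a weak attractive well of depth ε out to 2σ); the potential and all parameter values are illustrative assumptions, not choices made in these notes:

```python
import math

# Illustrative square-well parameters (assumptions, not from the text):
sigma = 1.0   # hard-core radius
eps = 0.1     # well depth, with eps << kT (the weak-interaction regime)
kT = 1.0

def f(r):
    """Mayer-type function f(r) = exp(-phi(r)/kT) - 1 for a square well."""
    if r < sigma:
        return -1.0                       # hard core: phi -> infinity
    if r < 2 * sigma:
        return math.exp(eps / kT) - 1.0   # attractive well: phi = -eps
    return 0.0                            # no interaction beyond 2*sigma

# Midpoint quadrature of a = integral_0^inf f(r) 4 pi r^2 dr;
# f vanishes beyond 2*sigma, so a finite range suffices.
n, R = 200000, 2.0 * sigma
h = R / n
a = math.fsum(
    f((i + 0.5) * h) * 4 * math.pi * ((i + 0.5) * h) ** 2 for i in range(n)
) * h

# Closed form for this particular model:
a_exact = (4 * math.pi / 3) * sigma**3 * (-1 + 7 * (math.exp(eps / kT) - 1))
print(abs(a - a_exact) < 1e-3)  # True
```

For this choice a comes out negative: the excluded hard-core volume outweighs the weak attraction, reducing I_N below V^N.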

6.3 Grand canonical ensemble


The canonical ensemble previously discussed in 6.1, with its application to cases where
molecular interactions exist, such as the imperfect gases discussed in detail in 6.2, is the one
in which the constituting assemblies are allowed to exchange their energies among themselves
but with no change in the total number of systems in each assembly. It follows that the
ensemble is composed of closed assemblies, each containing a fixed number of systems at
constant temperature.
Of particular interest here is the case in which the total energy and the number of systems
within each assembly in a given ensemble may vary with time, or both may differ between
assemblies at a particular time. In short, the total energy and the number of systems are
then not fixed quantities. This can only occur if the assemblies are open, allowing the
exchange of both energy and systems among the assemblies such that dE ≠ 0 and dN ≠ 0.
The ensemble in which this condition holds is called the grand canonical ensemble.
In this ensemble, the temperature and volume of the assemblies are constant. As the number
of systems in a grand canonical ensemble is not a fixed quantity, it is necessary to examine
the effects of possible changes dN in the number of systems on the thermodynamic functions
for the open assemblies. It can be shown that the main results obtained from the previous
cases with no change in N for the closed assemblies can also be obtained here in a more
general manner. However, we do not intend to discuss this issue further, as it requires a
good deal of mathematics that is not relevant to third year students.

6.4 Micro canonical ensemble


This type of ensemble is such that each constituting assembly has the same total energy
and the same number of systems, and that there are no changes in either the total energy or
the number of systems within each assembly, written mathematically as dE = 0 and dN = 0.
In other words, the ensemble discussed is composed of identical assemblies, so that a
general description of the ensemble can be obtained by examining and sampling only one
particular assembly within it. This ensemble is then commonly known as the micro
canonical ensemble. For this type of ensemble, the equation of state, or any other
thermodynamic function, is introduced and derived only from the properties, or at the level,
of the assemblies. For simplicity, whenever we examine the micro canonical ensemble we
therefore do not need to discuss the full ensemble, as the characteristics of the assemblies
are adequate to describe it.

6.5 Exercises
1. (a) Describe in your own words the differences, in terms of microscopic parameters,
between the micro canonical ensemble, the canonical ensemble, and the grand canonical
ensemble.
(b) Describe in your own words the physical meaning of a = \int_0^{\infty} f(r_{ij}) \, 4\pi r^2 \, dr.
(c) Using the result in (6.18), that is, I_N = V^N + \frac{N(N-1)}{2} V^{N-1} a, where a is defined
above, estimate whether a must be an extremely small or large value for molecular
interactions to come into play in the dynamics of the total partition function for an
imperfect gas.
(d) Under what condition is a likely to be zero? If so, what is happening to the system?

2. Show that the total partition function for a semi-classical, perfectly ideal gas with no
molecular interactions, given by

Z = \frac{V^N}{N! \, \hbar^{3N}} \left( 2\pi mkT \right)^{3N/2}

can be derived from

Z = \frac{V^N}{N! \, \hbar^{3N}} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-E/kT} \, dp_{x_1} \, dp_{y_1} \, dp_{z_1} \cdots dp_{x_N} \, dp_{y_N} \, dp_{z_N}

based on the assumption that the total kinetic energy of the N sub-systems is as follows,

E = \frac{p_{x_1}^2}{2m} + \frac{p_{y_1}^2}{2m} + \frac{p_{z_1}^2}{2m} + \cdots + \frac{p_{x_N}^2}{2m} + \frac{p_{y_N}^2}{2m} + \frac{p_{z_N}^2}{2m}

(taken from Pointon, 1978, Ch. 7, Problem 5).



3. The classical total energy of a 3-D harmonic oscillator at temperature T is given by

\epsilon_x = \frac{p_x^2}{2m} + \frac{1}{2} \lambda x^2 \qquad \epsilon_y = \frac{p_y^2}{2m} + \frac{1}{2} \lambda y^2 \qquad \epsilon_z = \frac{p_z^2}{2m} + \frac{1}{2} \lambda z^2

where m denotes the mass of the harmonic oscillator and p_{x,y,z} represent the linear
momentum of the harmonic oscillator for each direction of vibrational motion. Here the
force constant \lambda has the same value for all directions. In addition to the above
formulation, we also provide the quantum formulation for the total energy of a 3-D
harmonic oscillator as follows,

\epsilon_{n_x} = \left( n_x + \frac{1}{2} \right) \hbar \omega_x \qquad \epsilon_{n_y} = \left( n_y + \frac{1}{2} \right) \hbar \omega_y \qquad \epsilon_{n_z} = \left( n_z + \frac{1}{2} \right) \hbar \omega_z

where n_{x,y,z} = 0, 1, 2, \ldots and \omega_{x,y,z} represent the oscillator frequencies associated with
the direction of the particular axis along which the oscillator vibrates.
(a) Determine the mean energy of such an oscillator using a classical approach.
(b) Determine the mean energy of such an oscillator using a quantum approach for
reasonably high temperatures.
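As a check on part (b), and not a full solution: the standard mean energy of a single quantum oscillator mode is \bar{\epsilon} = \hbar\omega/2 + \hbar\omega / (e^{\hbar\omega/kT} - 1), which approaches the classical equipartition value kT when kT \gg \hbar\omega, so the 3-D oscillator's mean energy tends to 3kT. A short numerical illustration (all values are illustrative assumptions):

```python
import math

hbar_omega = 1.0  # illustrative energy quantum, same for all three modes

def mean_energy_quantum(kT):
    """Mean energy of one quantum oscillator mode (standard result)."""
    return hbar_omega / 2 + hbar_omega / math.expm1(hbar_omega / kT)

# At high temperature (kT >> hbar*omega) each mode tends to kT, so the
# 3-D oscillator's total mean energy tends to 3kT.
for kT in (10.0, 100.0, 1000.0):
    total = 3 * mean_energy_quantum(kT)
    print(kT, total / (3 * kT))  # ratio approaches 1 as kT grows
```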
Bibliography

Beiser, A. 1988 Perspective of Modern Physics. London, UK: McGraw-Hill.

Huang, K. 1987 Statistical Mechanics. New York, US: John Wiley and Sons.

Liboff, R. L. 1980 Introductory Quantum Mechanics. Reading, US: Addison-Wesley.

Pointon, A. J. 1978 An Introduction to Statistical Physics. London, UK: Longman.

Reif, F. 1985 Fundamentals of Statistical and Thermal Physics. Singapore: McGraw-Hill.

Serway, R. A., Moses, C. J. & Moyer, C. A. 2005 Modern Physics. California, US:
Thomson Learning Inc.

Tipler, P. A. 1999 Physics for Scientists and Engineers. New York, US: W. H. Freeman.
