
Fundamentals of

STATISTICAL
MECHANICS
(SECOND EDITION)

B.B. Laud
Former Professor & Head
Deptt. of Physics, Marathwada University
Aurangabad, Maharashtra

Publishing for one world

NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS


New Delhi • Bangalore • Chennai • Cochin • Guwahati
Hyderabad • Kolkata • Lucknow • Mumbai
Visit us at www.newagepublishers.com
Copyright © 2012, 1998, New Age International (P) Ltd., Publishers
Published by New Age International (P) Ltd., Publishers
First Edition: 1998
Second Edition: 2012

All rights reserved.


No part of this book may be reproduced in any form, by photostat, microfilm, xerography, or any other means,
or incorporated into any information retrieval system, electronic or mechanical, without the written permission
of the copyright owner.
Branches:
• 37/10, 8th Cross (Near Hanuman Temple), Azad Nagar, Chamarajpet, Bangalore-560 018. Tel.: (080) 2675 6823, Telefax: 2675 6820, E-mail: bangalore@newagepublishers.com
• 26, Damodaran Street, T. Nagar, Chennai-600 017. Tel.: (044) 24353401, Telefax: 24351463, E-mail: chennai@newagepublishers.com
• CC-39/1016, Carrier Station Road, Ernakulam South, Cochin-682 016. Tel.: (0484) 2377004, Telefax: 4051303, E-mail: cochin@newagepublishers.com
• Hemsen Complex, Mohd. Shah Road, Paltan Bazar, Near Starline Hotel, Guwahati-781 008. Tel.: (0361) 2513881, Telefax: 2543669, E-mail: guwahati@newagepublishers.com
• 105, 1st Floor, Madhiray Kaveri Tower, 3-2-19, Azam Jahi Road, Nimboliadda, Hyderabad-500 027. Tel.: (040) 24652456, Telefax: 24652457, E-mail: hyderabad@newagepublishers.com
• RDB Chambers (Formerly Lotus Cinema), 106A, 1st Floor, S.N. Banerjee Road, Kolkata-700 014. Tel.: (033) 22273773, Telefax: 22275247, E-mail: kolkata@newagepublishers.com
• 16-A, Jopling Road, Lucknow-226 001. Tel.: (0522) 2209578, 4045297, Telefax: 2204098, E-mail: lucknow@newagepublishers.com
• 142C, Victor House, Ground Floor, N.M. Joshi Marg, Lower Parel, Mumbai-400 013. Tel.: (022) 24927869, Telefax: 24915415, E-mail: mumbai@newagepublishers.com
• 22, Golden House, Daryaganj, New Delhi-110 002. Tel.: (011) 23262370, 23262368, Telefax: 43551305, E-mail: sales@newagepublishers.com

ISBN: 978-81-224-3278-7

₹ 295.00

C-11-09-5849

Printed in India at Paramanand Offset, Delhi.


Typeset by In-house.

PUBLISHING FOR ONE WORLD


NEW AGE INTERNATIONAL (P) LIMITED, PUBLISHERS
4835/24, Ansari Road, Daryaganj, New Delhi-110002
Visit us at www.newagepublishers.com
PREFACE TO THE SECOND EDITION

Fundamentals of Statistical Mechanics has gained popularity over the years and, in keeping with our commitment to bring excellence into our books, this edition has been thoroughly checked and the suggestions of esteemed readers incorporated. With the necessary amplification and the removal of printer's errors, it should present easy-to-read material for students of physics and engineering. We will greatly appreciate helpful specific suggestions from teachers of the subject so as to enhance the usefulness of the book for the students.
— Publisher

PREFACE TO THE FIRST EDITION

Statistical mechanics is an indispensable formalism in the study of physical


properties of matter in bulk. It has led to a better understanding of
foundations of thermodynamics and is a very elegant mathematical
framework capable of presenting a clear picture of many a physical
phenomenon. Basic knowledge of this subject is, therefore, essential for
every student of physics, whatever may be the field of his specialization.
Though several excellent accounts have been written on the subject,
many of them are either too brief or too highly mathematical. Hence,
there appears to be a need for an up-to-date, compact and inexpensive
book on statistical mechanics, treating the subject in an introductory
manner, specially adapted for the beginner in the field and also for those
who require a modest working knowledge of this subject because of its
application to other fields. The desire to meet this situation gave me an
incentive in 1981 to write the book Introduction to Statistical Mechanics.
During the decade that has passed since its appearance, the book
was widely used and well-received by students and teachers who made
very useful comments and offered suggestions for its improvement. Keeping
these in mind, I decided to rewrite the book and amplify it considerably.
Although the framework of the earlier book has been retained, in this
new book most of the sections have been rewritten and expanded. In
Chap. 11, theories of phase transitions have been discussed along with
their range of validity. Other topics such as chemical equilibrium and
Saha ionization formula have also been included in this chapter. The
method used by Maxwell before the advent of Gibbs statistics, to derive
his velocity distribution has been described in Chap. 3 to satisfy the student's
curiosity. A chapter on Basic concepts of Probability has been included
which is of auxiliary nature and may be omitted by those who are
acquainted with the theory of probability.
I have been very selective in the choice of topics to be included. The
emphasis is on the fundamentals together with some instructive and
useful applications. In a book of this size it has not been possible to deal
exhaustively with all applications. Some typical cases have been selected and described
so that all the important methods of dealing with any particular problem
can be studied. An attempt has been made to emphasize the physical
basis of the subject, but without undue neglect of its mathematical aspects.
The book, thus, bridges the gap between highly mathematical works and
the usual less rigorous formulations of the subject. Problems are given at
the end of each chapter. These are meant to be read as integral part of
the text. They present a number of applications and also serve to illuminate
techniques.
In preparing this book, I have consulted a large number of books by
various authors to whom I am grateful. A list of some of these is given
at the end.
I should like to thank Dr. P. V. Panat, Professor of Physics, University
of Poona, who offered some useful suggestions, and my students on whom
the material of the book has been tried out. I wish to express my sincere
appreciation to my wife for her forbearance whilst this book was being
written.
—Author
CONTENTS

Preface to the Second Edition (v)


Preface to the First Edition (vii)

Chapter 1: Introduction 1-4

Chapter 2: Basic Concepts of the Theory of Probability 5-34


2.1 Random Events 5
2.2 Probability 6
2.3 Probability and Frequency 7
2.4 Probability from an Ensemble 7
2.5 Some Basic Rules of Probability Theory 8
2.5.1 Summation Rule 8
2.5.2 Corollary 11
2.5.3 Multiplication Rule: Joint Probability 11
2.5.4 Conditional Probability 12
2.6 Continuous Random Variables 14
2.7 Probability for a Molecule to Lie in a Finite Volume 16
2.8 Mean Value of a Discrete Random Variable 16
2.9 Mean Value of a Continuous Random Variable 17
2.10 Variance : Dispersion 18
2.11 Probability Distribution 19
2.12 Binomial Distribution 20
2.13 Mean Value When the Distribution is Binomial 23
2.14 Fluctuations 23
2.15 Stirling Approximation 24
2.16 Poisson Distribution 26
2.17 Mean Values and Standard Deviation in the Case of
Poisson Distribution 27

2.18 Gaussian Distribution 27


2.19 Standard Deviation in the Case of Gaussian Distribution 31
2.20 Random Walk Problem 31
Problems 34

Chapter 3: Maxwell Distribution 35-48


3.1 Maxwell Velocity Distribution 35
3.2 Momentum Distribution 38
3.3 Energy Distribution 38
3.4 Some Significant Mean Values 39
3.5 Experimental Verification of Maxwell Distribution 41
3.5.1 Stern’s Experiment 41
3.5.2 Free Fall under Gravity 42
3.5.3 The Doppler Broadening of Spectral Lines 43
3.6 Fraction of Molecules in a Gas with Energies Higher than
a Certain Value ε₀ 44
Problems 48

4.1 Macroscopic States 49


4.2 Microscopic States 50
4.3 Phase Space 51
4.4 The μ-space 55
4.5 The Γ-space 56
4.6 Postulate of Equal a Priori Probabilities 57
4.7 Ergodic Hypothesis 57
4.7.1 Mean Value Over an Ensemble 58
4.7.2 Mean Value Over Time 58
4.8 Density Distribution in Phase Space 59
4.9 Liouville’s Theorem 61
4.10 Principle of Conservation of Density in Phase and
Principle of Conservation of Extension in Phase 63
4.11 Condition for Statistical Equilibrium 64
Problems 65

Chapter 5: Statistical Ensembles
5.1 Microcanonical Ensemble
5.2 Canonical Ensemble

5.3 Alternative Method for the Derivation of Canonical


Distribution 70
5.4 Mean Value and Fluctuations 74
5.5 Grand Canonical Ensemble 75
5.6 Alternative Derivation of the Grand Canonical Distribution 76
5.7 Fluctuations in the Number of Particles of a System in a
Grand Canonical Ensemble 77
5.8 Reduction of Gibbs Distribution to Maxwell and Boltzmann
Distributions 78
5.8.1 Maxwell Distribution 78
5.8.2 Boltzmann Distribution 81
5.9 Barometric Formula 82
5.10 Experimental Verification of the Boltzmann Distribution 83
5.11 Mixture of Gases 83
Problems 85

Chapter 6: Some Applications of Statistical Mechanics
6.1 Rotating Bodies
6.2 The Probability Distribution for Angular Momenta and
Angular Velocities of Rotation of Molecules 89
6.3 Thermodynamics 90
6.3.1 Reversible and Irreversible Process 90
6.3.2 The Laws of Thermodynamics 91
6.4 Statistical Interpretation of the Basic Thermodynamic
Variables 93
6.4.1 Energy and Work 93
6.4.2 Pressure 94
6.4.3 Entropy 94
6.4.4 Change in Entropy with the Change in Volume 96
6.4.5 Change in Entropy with the Change in Temperature 96
6.4.6 Entropy for Adiabatic Processes 97
6.4.7 Entropy in Terms of Probability 98
6.4.8 Additive Property of Entropy 99
6.4.9 Helmholtz Free Energy 99
6.5 Physical Interpretation of α 100
6.6 Chemical Potential in the Equilibrium State 101
6.7 Thermodynamic Functions in Terms of the Grand Partition
Function 102
6.8 Ideal Gas 103

6.9 Gibbs Paradox 106


6.10 The Equipartition Theorem 109
6.11 The Statistics of Paramagnetism 111
6.11.1 Potential Energy of a Magnetic Dipole in a
Magnetic Field 111
6.11.2 Curie’s Law 112
6.12 Thermal Disorder in a Crystal Lattice 115
6.13 Non-ideal Gases 117
Problems 123

Chapter 7: Formulation of Quantum Statistics 125-133


7.1 Density Matrix 125
7.2 Liouville’s Theorem in Quantum Statistical Mechanics 128
7.3 Condition for Statistical Equilibrium 129
7.4 Ensembles in Quantum Mechanics 130
Problems 133

8.1 Symmetry of Wave Functions 134
8.2 The Quantum Distribution Functions 139
8.3 The Boltzmann Limit of Boson and Fermion Gases 143
8.4 Evaluation of the Partition Function 144
8.5 Partition Function for Diatomic Molecules 146
(a) Translational Partition Function 146
(b) Rotational Partition Function 146
(c) Vibrational Partition Function 149
(d) Electronic Partition Function 150
8.6 Equation of State for an Ideal Gas 151
8.7 The Quantum Mechanical Paramagnetic Susceptibility 152
Problems 154

Chapter 9: Ideal Bose Systems 156-176


9.1 Photon Gas 156
9.2 Einstein's Derivation of Planck's Law 161
9.3 Bose-Einstein Condensation 163
9.4 Specific Heat from Lattice Vibration 168
9.5 Debye's Model of Solids: Phonon Gas 170
Problems 175

Chapter 10: Ideal Fermi Systems 177-201
10.1 Fermi Energy 177
10.2 An Alternate Derivation of Fermi Energy 179
10.3 Mean Energy of Fermions at T = 0 K 180
10.4 Fermi Gas in Metals 181
10.5 Atomic Nucleus as an Ideal Fermion Gas 182
10.6 Fermi Energy as a Function of Temperature 183
10.7 Electronic Specific Heat 188
10.8 White Dwarfs 190
10.9 Compressibility of Fermi Gas 197
10.10 Pauli Paramagnetism 198
10.11 A Relativistic Degenerate Electron Gas 199
Problems 200

Chapter 11: Phase Transitions 202-225
11.1 Equilibrium Conditions 202
11.2 Classification of Phase Transitions 204
11.3 Phase Diagram 205
11.4 Clausius-Clapeyron Equation 207
11.5 Van der Waals Equation 210
11.6 Second-order Phase Transitions 213
11.7 Ginzburg-Landau (GL) Theory 214
11.8 Phase Transition in Ferromagnetic Materials 217
11.9 Liquid Helium 219
11.10 Chemical Equilibrium 221
11.11 Condition for Chemical Equilibrium 221
11.12 The Law of Mass Action 222
11.13 Saha Ionization Formula 223
11.14 Dissociation of Molecules 224
Problems 225

226-251
12.1 Mean Collision Time 226
12.2 Scattering Cross-section 227
12.3 Viscosity 231
12.4 Electrical Conductivity 234
12.5 Thermal Conductivity 235
12.6 Thermionic Emission 237

12.7 Photoelectric Effect 239


12.8 Molecular Collisions 240
12.9 Effusion 243
12.10 Diffusion 244
12.11 Brownian Motion 246
12.12 Einstein's Relation for Mobility 248
Problems 250

I Chapter 13: General Theory-;of Transport Phenomena

13.1 Distribution Function 252
13.2 Boltzmann Transport Equation 253
13.3 Relaxation Approximation 254
13.4 Electrical Conductivity from the Relaxation Approximation 255
13.5 A Rigorous Formulation of the Transport Theory 257
13.6 Boltzmann's Theorem 259
13.7 Maxwell-Boltzmann Distribution from Boltzmann's
Equation 261
13.8 H Theorem in Quantum Mechanics 262
13.9 Variation of Mean Values 264
13.10 The Laws of Conservation 266
Problems 267

Appendix 268-273

Appendix A
The method of Lagrangian undetermined multipliers 268
Appendix B
Some useful integrals 269
Appendix C
1. Physical constants 271
2. Conversion factors 271
Answers to Problems 271



Bibliography 274-275
INTRODUCTION

We encounter in nature physical systems comprising a large number of microentities such as atoms, molecules, electrons, photons, etc. These are macroscopic systems — systems large compared to atomic dimensions. A macroscopic system may be a solid, a liquid, a gas, an electromagnetic radiation consisting of a 'gas of photons', a 'gas of electrons' inside a metallic conductor or any other. We are interested in the bulk properties of these systems. Statistical mechanics consists in the study of the properties and behaviour of these systems.
But is there, really, any need for a ‘new mechanics’ for this purpose?
Can we not obtain the information we are seeking, with the help of
disciplines we are already familiar with? Will not the laws of quantum
mechanics, which describe all known natural phenomena, be able to provide
the desired information? The properties of a macroscopic system depend
upon the properties of the particles that make up the system and on the
way in which they interact with each other. The dynamical behaviour of
the particles can be described using the laws of quantum mechanics.
Interactions between atoms, molecules, etc., are also now fairly well
understood. It would be reasonable, therefore, to expect to derive all
knowledge of a macroscopic system through the study of the behaviour of
its constituent particles.
This reasoning seems to be quite sound and it is, indeed, true in
principle, that we can obtain complete information concerning the motion
of a mechanical system by constructing and integrating the equations of
motion of its constituent particles. But such a dynamical description of
the many-particle system is technically unrealizable.
Consider, for example, a mole of a gas. Gas is the simplest of the
three forms of matter and mole is a relatively small volume of matter.
The number of molecules in a mole of any gas is the Avogadro number N_A = 6.02 × 10²³. Suppose we set about constructing and integrating the equations of motion of the molecules in a mole. For describing the motion of each molecule we must know its initial position and velocity. The position is determined by the coordinates x, y, z and the velocity by the three components v_x, v_y, v_z. We, therefore, have to fix 6 × 6.02 × 10²³ numbers, i.e., more than 3,000,000,000,000,000,000,000,000 numbers in

order to describe the position and velocities of all the molecules at a


particular instant. Suppose we have with us an instrument which can register numbers at the rate of 10⁶ per second. Even at this rate, it would take 6 × 6.02 × 10¹⁷ s, i.e., about 10¹¹ years. This is a hopeless undertaking!
But it is not just this circumstance that makes such a dynamic
consideration impractical. The main trouble is that the solution of the
equation of motion requires a statement of the initial values of all the
variables and we cannot record sufficient data on the initial position and
momenta of all atoms at a particular instant. Even the problem of an
atom with two electrons presents such great mathematical difficulties
that no one so far has been able to solve it completely. Hence, should one
even be able to construct and integrate the equation of motion, it will not
be possible for him to obtain an exact solution for want of precise knowledge
about initial velocities and position coordinates of the particles constituting
the system.
Besides, complexities arising from a large number of constituent
particles can often give rise to some unexpected behaviour of the system,
which cannot be described either by the methods of classical or those of
quantum mechanics. To illustrate this observation, the following examples
can be given.
(i) A gas of identical single atoms such as helium, abruptly
condenses under certain conditions to form a liquid with very
different properties. This is difficult to understand on the basis
of the properties of individual molecules.
(ii) We know that heat flows from warmer to cooler bodies. The
reverse flow is not attained except by input of energy. Can an
exact knowledge of the positions and velocities of molecules
explain the irreversibility of heat flow? One does not have to
know the coordinates and velocities of the particles of a metal
rod in order to explain why heat spreads from the hot end to the
cold. The kinetic theory explains heat to be a result of random
motion and collisions of molecules. The laws of mechanics, no
doubt, govern collision between molecules and may give us a
complete microscopic picture; but they do not agree with the
concept of irreversibility.
In order to tide over this difficulty, can we not investigate a many-
particle system without going into its internal structure? We have adopted
this technique in thermodynamics, which is distinguished by its simplicity
and leads to very accurate results. But thermodynamic method is
essentially a phenomenological method. Its object is to establish a
relationship between directly observed quantities, such as pressure, volume
and temperature. It does not deal with the internal mechanism of the
processes determining the behaviour of the system. Thermodynamic laws
which govern the behaviour of large molecular populations, are not
concerned with the fate of individual molecules. The internal mechanisms

remain unknown. For example, it cannot explain why a copper wire cools
upon rapid expansion while a rubber, under similar conditions, becomes
hot.
It is, therefore, expedient to formulate some special laws, starting
from some basic notions of atomic theory, which would lead to a coherent
conceptual framework capable of describing and predicting the properties
of macroscopic systems. How could this be possible? One would expect the
complexity of the properties of a mechanical system to increase with the
increase in the number of its constituent particles. Fortunately, however,
these complexities lead to some definite regularities, in the properties of
the system, which can be treated adequately and appropriately by
formulating some special laws — statistical laws. With the help of these
laws it has become possible to describe the macroscopic properties of a
system in terms of microscopic properties of its constituent particles
without involving the detailed calculation of motion of individual particles.
Although one starts from an incomplete knowledge of the initial state of a system, statistical mechanics, on the basis of the laws formulated, enables him to make predictions regarding the future condition of the system, predictions which hold on an average and are true to a very high degree of accuracy. The purpose of statistical mechanics is thus to answer the question: "Starting from some particular information available about the initial conditions of a system, how can we make the best prediction about its future condition?"
The statistical approach allows us to solve problems that cannot be
solved at all within the framework of thermodynamics. Moreover, statistical
physics provides a rigorous substantiation of the accepted laws of
thermodynamics.
In statistical mechanics we do not follow the change in state that
would take place in a particular system. Instead, we study the average
behaviour of the representative ensemble (assembly) of systems of similar
structures, and predict what may be expected on an average for the
particular system. We, therefore, have to deal, in the realm of statistical
mechanics, with groups, variously described as collections, aggregates or
populations, rather than with individual or discrete entities; or with the
events that happen on an average rather than those that happen on a
particular occasion. The theory, thus formulated leads to certain results
which are purely macroscopic in content. They are conditioned precisely
by the great number of particles, comprising a body, and cease to have
meaning when applied to mechanical systems with a smaller number of
degrees of freedom.
Originally concerned with elucidating the difficulties of kinetic theory,
statistical mechanics is used with great success in diverse branches of
physics.

It can explain thermal phenomena in molecular physics, dielectric and magnetic properties in electromagnetism, molecular scattering and thermal radiation in optics, and so on.
The theory of statistical mechanics provides a mathematical language for the enunciation and development of physical laws relating to the properties of large assemblies considered as a whole. Unfortunately, the mathematical problem is far too difficult to be solved exactly and the future of the system cannot be determined rigorously. The molecules are present in such large numbers that the only way of attacking the problem is to apply the laws of probability to their configuration and motion. The application of the theory of probability to the analysis of the motion of large assemblages of microentities is the basis of statistical mechanics and it is in this respect that it differs from classical mechanics.
Since probability arguments are of fundamental significance for the understanding of the behaviour of systems consisting of a large number of particles, it will be useful to review the basic elements of probability theory. Chapter 2 is devoted to this purpose. While writing this chapter no prior knowledge of probability is assumed. The chapter is purely of an auxiliary nature. Those who are familiar with the basic theory of probability may skip this chapter, or consider it a useful revision.

□ □ □
BASIC CONCEPTS OF THE THEORY OF PROBABILITY

The theory of probability deals with special laws governing random events.

2.1 RANDOM EVENTS


What is a random event?
Suppose we toss a coin. It is impossible to say in advance whether
it will show ‘heads’ or ‘tails’. Or suppose, we take a card at random from
a deck. Can we say in advance what its suit will be? In each of these
examples we carried out an ‘experiment’ or ‘trial’, the outcome of which
cannot be predicted. A particular event may or may not occur. These are
the examples of random events.
Not every event, however, whose result is unknown and does not
lend itself to unambiguous prediction may be called random in the sense
we have used here. In the two examples we have considered above, the
trials were carried out under identical conditions. When we deal with
experiments in which the essential circumstances are kept constant and
yet repetition of the experiment yields different results, we say the results
are random. Consider now a football game played for the World Cup final.
Its outcome is uncertain. But it cannot be said to be a random event,
since the game is played on different occasions under different conditions.
The teams may have new participants; or even the old participants may
have acquired new experience and may be in a different state of health,
or the games may be played in different countries. Such events are, no
doubt, uncertain, but are taken care of by a different theory. Probability
theory is concerned with random events.
Although the results of experiments such as those described above
cannot be predicted unambiguously, it is possible to determine the degree
o f likelihood — probability — of occurrence of these events.
An elementary understanding of the term ‘probability of an event’ is
inherent in every person with common sense. Probabilities and statistics
occupy a major place in our life. We encounter random phenomena in our
everyday life and plan our action after estimating the probability of their


occurrence. Some random events are more probable and some are less
probable. Laplace, who systematized the theory of probability, said:
"The theory of probability at bottom is nothing more than common sense reduced to calculations."
Thus, although the origin of the theory o f probability is more recent,
its roots go back to primitive man.

2.2 PROBABILITY
To evaluate the probability of a random event, we must, first of all, have a unit of measurement. An event is called a 'sure' event if it is certain to occur in an experiment. In the theory of probability, the probability of a sure event is assumed to be equal to 1, and that of an impossible event to be equal to zero. Thus, the probability of a random event, P(A), lies in the interval zero to one, i.e.,
0 ≤ P(A) ≤ 1   ...(2.1)
Example 2.1 How do we calculate the probability of an event? We shall illustrate the method of calculating probability with the help of a couple of examples.
(i) Suppose a die is cast. What is the probability o f appearance o f six
spots?
(ii) I f a coin is tossed, what is the probability that it will show
‘heads’?
Solution. (i) The die has six faces with spots 1, 2, 3, 4, 5, 6 engraved
on them. Casting of the die can yield one of the six results and no other.
The total number of possible outcomes of the experiment, thus, is six. If
the die is symmetrical, all outcomes are equally likely. It is, therefore,
natural to ascribe equal probabilities to each of them. Out of the six
possible results only one is favourable for the appearance of six spots.
The probability of appearance of six spots, therefore, is 1/6.
(ii) There are only two possible results, 'heads' or 'tails'. Only one of them is favourable for the appearance of 'heads'. The required probability, therefore, is 1/2 = 0.5.
Thus, the probability P(A) of an event A may be intuitively defined as
P(A) = (Number of events favourable for A)/(Total number of possible results)   ...(2.2)
Example 2.2 What is the probability of pulling from a thoroughly shuffled pack of 52 cards, (i) a queen; (ii) the queen of spades?
Solution. (i) There are 4 queens in the pack, so the probability is 4/52 = 1/13; (ii) 1/52.

2.3 PROBABILITY AND FREQUENCY


In the examples described above all the possible outcomes are assumed
to be known. This is not always possible in practice, and, hence, the
formula (2.2) is not a general formula. It can be used only in experiments
that possess symmetry. Suppose we make the die asymmetric by adding a little load to one of its faces. Now all the outcomes are not equally likely and we cannot say that the probability of appearance of six spots is 1/6. We must, therefore, adopt another technique for calculating probabilities.
Suppose we cast a die many times, say N times, and we find that six spots appear M times. Can we take M/N as the measure of the probability of appearance of six spots? Not exactly.
We introduce here a different term to represent the quantity M/N. We call it the frequency of occurrence of the event. We define the frequency of an event A, F(A), as
F(A) = (Number of trials in which A occurs)/(Total number of trials)   ...(2.3)
Thus, if a coin is tossed 50 times and in 10 of them the coin shows heads, the frequency of this event is 10/50 = 0.2. We have seen above that
from the classical definition of probability, the probability of occurrence
of heads is 0.5. We see, therefore, that frequency is not the same as
probability. But there must be some relation between probability and
frequency. The most probable events occur more frequently. This
relationship can be found by increasing the number of trials. As the
number of trials is increased, the frequency of the event progressively
loses its randomness, tends to stabilize and gradually approaches a
constant value—the probability of the event. We define probability in
terms of frequency as

P(A) = lim_{N→∞} M/N   ...(2.4)
One may ask, "How large must N be in order to obtain a sufficiently accurate result?" Obviously, trials must be conducted until the ratios M/N differ from one another by a very small value.

2.4 PROBABILITY FROM AN ENSEMBLE


It is not necessary to conduct the trials on the same system. An alternative
procedure is to make simultaneous measurements on a large number of
identical systems — an ensemble of systems. If N is the total number of systems and a certain event A occurs in M of them, the probability of occurrence of A is M/N, provided N is very large. Both methods give
identical results.
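The convergence of the frequency M/N to the probability P(A), described above, is easy to see numerically. The following minimal sketch (an illustration added here, not part of the original text; Python is assumed) simulates tosses of a fair coin and prints the frequency of heads for increasing N.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def heads_frequency(n_tosses):
    """Return M/N, the frequency of heads in n_tosses of a fair coin."""
    heads = sum(random.random() < 0.5 for _ in range(n_tosses))
    return heads / n_tosses

for n in (10, 100, 10_000, 1_000_000):
    print(f"N = {n:>9}: M/N = {heads_frequency(n):.4f}")
# The printed frequencies settle towards the probability 0.5 as N grows,
# illustrating Eq. (2.4).
```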

2.5 SOME BASIC RULES OF PROBABILITY THEORY


2.5.1 Summation Rule
This is applicable to a set of mutually exclusive events. Two events are said to be mutually exclusive if the happening of one excludes the possibility of the happening of the other. For example, in a box of volume V, we select two small non-overlapping regions ΔV_1 and ΔV_2 (Fig. 2.1). At a certain instant the presence of a particle in ΔV_1 rules out the possibility of its being present, at the same instant, in ΔV_2, and vice versa. The two events, therefore, are mutually exclusive. We can prove that the probability that one of the two mutually exclusive events occurs is equal to the sum of the probabilities of occurrence of each of them.
Suppose in n trials the particle is found m_1 times in ΔV_1 and m_2 times in ΔV_2. The probabilities of finding the particle in the two regions are, respectively,
P(ΔV_1) = m_1/n,  P(ΔV_2) = m_2/n   ...(2.5)
What is the probability that the particle will be found in at least one of the two regions? The number of times this happens in n trials is m_1 + m_2. Hence the required probability is
P(ΔV_1 or ΔV_2) = (m_1 + m_2)/n = m_1/n + m_2/n
               = P(ΔV_1) + P(ΔV_2)   ...(2.6)
We can generalize this summation rule and say: Given any number of mutually exclusive events A_1, A_2, ... with probabilities P(A_1), P(A_2), ... respectively, the probability that any one of them occurs is the sum of the probabilities of these events, i.e.,
P(A_1 or A_2 ...) = P(A_1) + P(A_2) + ...   ...(2.7)
Let A_1, A_2, ..., A_N be a complete set of mutually exclusive events. In N trials let N_1 be the number of times the event A_1 is realized, N_2 be the number of times A_2 is realized, and so on. Now
N_1 + N_2 + ... + N_N = Σ_i N_i = N
Therefore
N_1/N + N_2/N + ... + N_N/N = Σ_i N_i/N = 1
i.e., P(A_1) + P(A_2) + ... + P(A_N) = Σ_i P(A_i) = 1   ...(2.8)
This is known as the normalization condition for the probability.

We shall now obtain some useful formulae resulting from the application of the summation rule.
(i) Consider a molecule of a gas moving randomly in a closed metal box. We mentally divide the box into n identical parallelepipeds, each of volume Δτ. Figure 2.2 shows the cross-section of the box. What is the probability that the molecule is in a particular parallelepiped?
The probability that the molecule is in a definite parallelepiped is the same for all parallelepipeds, say P(A). The probability that the molecule is in any one of the parallelepipeds is, by the summation rule, nP(A). But this is also the probability that the molecule is in the box which, in fact, is a certainty, i.e., equal to 1.
Therefore, nP(A) = 1 or P(A) = 1/n   ...(2.9)
(ii) Consider now a small volume element ΔV in the box which includes m identical parallelepipeds (Fig. 2.3).
Fig. 2.3

The probability that the molecule is in ΔV is, by the summation rule,
P(ΔV) = m × P(A) = m/n = mΔτ/(nΔτ) = ΔV/V   ...(2.10)
We have, thus, expressed the probabilities in terms of volumes.
(iii) Suppose the two regions of volumes V_1 and V_2 in a box of volume V overlap as shown in Fig. 2.4. We represent the overlapping region by V_12. What is the probability that the molecule is in the region covered by V_1 and V_2? To obtain the volume V_T of this region, we add V_1 and V_2, but in doing so we take V_12 twice into account.
Therefore, V_T = V_1 + V_2 − V_12   ...(2.11)
Hence, the probability that the molecule is in V_T is
P(V_T) = V_T/V = (V_1 + V_2 − V_12)/V = V_1/V + V_2/V − V_12/V
       = P(V_1) + P(V_2) − P(V_12)   ...(2.12)
where the last term gives the probability that the molecule is in the overlapping region.
Generalizing, we can say that if in N trials, N_A is the number of times the event A occurs, N_B is the number of times the event B occurs, and these numbers also include the occasions on which both events occur simultaneously, then the total number of events, N_{A+B}, is given by
N_{A+B} = N_A + N_B − N_{AB}   ...(2.13)
Hence
N_{A+B}/N = N_A/N + N_B/N − N_{AB}/N
i.e., P(A + B) = P(A) + P(B) − P(AB)   ...(2.14)
The last term is the probability of the simultaneous occurrence of events A and B.

2.5.2 Corollary
If the probability of occurrence of an event A is P(A) and that of its non-occurrence is P(Ā), then
P(A) + P(Ā) = 1   ...(2.15)
It is sometimes easier to calculate the probability P(Ā) than to calculate P(A). In such cases P(A) can be found from
P(A) = 1 − P(Ā)   ...(2.16)

2.5.3 Multiplication Rule: Joint Probability


In the calculation of probabilities we sometimes come across random events such that the probability of occurrence of one does not affect the probability of occurrence of the other. For example, in Fig. 2.1, the probability that a molecule A gets into ΔV_1 at a particular instant is P_1 = ΔV_1/V. The probability that another molecule B gets into ΔV_2 at the same instant is P_2 = ΔV_2/V, regardless of whether or not the molecule A gets into ΔV_1. Such events are said to be statistically independent or uncorrelated.
What is the probability of joint occurrence of these events, i.e., the probability of A getting into ΔV_1 and B getting into ΔV_2 at the same instant?
Suppose in N trials the molecule A is found m times in ΔV_1. If P_2 is the probability that B gets into ΔV_2, irrespective of the presence of A in ΔV_1, the number of times the two events will occur simultaneously is mP_2. The joint probability of occurrence of these two events, therefore, is
mP_2/N = P_1 P_2   (because P_1 = m/N)   ...(2.17)
Thus, the probability of joint occurrence of two independent events is equal to the product of the probabilities of each of the independent events.
Exam ple 2.3 We throw a die twice, and obtain two numbers. What
is the probability that these numbers are 6 and 4 precisely in that order?
Solution . The probability that the first throw gives a 6 is 1/6. The
probability that the second throw gives a 4 is also 1/6. These two events
are independent. Therefore, the required probability is

1/6 × 1/6 = 1/36

E xam ple 2.4 What is the probability that a deuce and ace appear
when two dice are thrown?
Solu tion . The probability that the first die shows a deuce is 1/6 and
the probability that the second one shows an ace is also 1/6. The two
events being independent, the probability of the joint event is 1/36. There
is yet another alternative for this joint event to occur, when the first die
gives an ace and the second one gives a deuce. The probability of such a
joint event is again 1/36. These two alternatives are mutually exclusive.
Therefore, the required probability is

1/36 + 1/36 = 1/18
Example 2.5 Five marksmen shoot simultaneously at a target and
the probability for each o f them to hit the target is 1/3. What is the
probability for at least one marksman hitting the target?
Solution. The probability for each marksman missing the target is 1 − 1/3 = 2/3. The probability for all five marksmen missing the target is (2/3)^5. Therefore, the probability for at least one marksman hitting the target is
1 − (2/3)^5 ≈ 0.87
We have used here the relation (2.16).
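As a numerical cross-check of Example 2.5, one can estimate the same probability by direct simulation; the sketch below is only illustrative (the number of trials is an arbitrary choice) and should agree with 1 − (2/3)^5 ≈ 0.87.

```python
import random

random.seed(1)
P_HIT = 1 / 3          # probability that a single marksman hits the target
TRIALS = 100_000       # arbitrary number of simulated rounds

hits = sum(
    any(random.random() < P_HIT for _ in range(5))   # five independent shots
    for _ in range(TRIALS)
)
print("simulated:", hits / TRIALS)
print("exact    :", 1 - (2 / 3) ** 5)   # ≈ 0.868
```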

2.5.4 Conditional Probability


The probability for an event A to occur under the condition that event B has occurred is called the conditional probability of the occurrence of event A and is denoted by P(A/B).
In the case of the overlapping regions in Fig. 2.4, what is the probability of finding a molecule in the region V_1 when it is contained in V_2? This amounts to calculating the probability of finding the molecule in the region V_12 when it is contained in V_2. One can see that this is given by
P(V_1/V_2) = V_12/V_2   ...(2.18)
In the same way, in the case of the generalization discussed in case (iii), the probability for the event A to occur under the condition that the event B has occurred is
P(A/B) = N_AB/N_B   ...(2.19)
Dividing the numerator and the denominator by N, the total number of trials, we have
P(A/B) = (N_AB/N)/(N_B/N) = P(AB)/P(B)   ...(2.20)
Similarly,
P(B/A) = P(AB)/P(A)   ...(2.21)
It may be noted that in general these two results are not equal.
From Eqs. (2.20) and (2.21), we get
P(AB) = P(A/B) P(B) = P(B/A) P(A)   ...(2.22)
Example 2.6 An urn contains 4 black and 3 white balls. What is the probability that on two successive draws the balls drawn are both black?
Solution. Let the event that the first ball drawn is black be denoted by A. Its probability is P(A) = 4/7. This event is followed by the second event B, in which the second ball drawn is also black. Now if the event A is realized, the number of balls is reduced by one. The second ball, therefore, is selected from 6 balls of which 3 are black. Hence, the probability for the event B under the condition that A is realized is P(B/A) = 3/6 = 1/2. The first event, thus, affects the probability of the second event. The probability of the composite event AB is
P(AB) = P(A) P(B/A) = 4/7 × 1/2 = 2/7
Example 2.7 In a box there are 10 cut-up alphabet cards with the letters: 3 A's, 4 M's and 3 N's. We draw three cards one after another and place these on the table in the order they have been drawn. What is the probability that the word 'MAN' will appear?
Solution. The probability that the first card is M is 4/10. The probability that the second letter is A, subject to the realization of the first condition, is 3/9. Similarly, the probability for the third letter to be N is 3/8. Hence
P(MAN) = 4/10 × 3/9 × 3/8 = 1/20
Example 2.8 In the preceding example suppose the letters are drawn out simultaneously. What is the probability that the word 'MAN' can be obtained from the drawn letters?
Solution. In this case we have altered the conditions of drawing the events. In the preceding example, the cards were to be drawn in the order M → A → N. But now the order does not matter. It can be MAN, NAM, AMN, ..., etc., there being six (3!) such variants. We have to calculate the probabilities of these six alternatives and, these being mutually exclusive, add up these probabilities to find the required probability. Thus
P(AMN) = 3/10 × 4/9 × 3/8 = 1/20
P(NAM) = 3/10 × 3/9 × 4/8 = 1/20
P(MNA) = 4/10 × 3/9 × 3/8 = 1/20, etc.
The required probability, therefore, is 6 × 1/20 = 3/10.
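The counting arguments of Examples 2.7 and 2.8 can also be checked by simulation. The following sketch (illustrative only; the trial count is arbitrary) draws three cards without replacement from the ten letters and estimates both the ordered probability 1/20 and the unordered probability 3/10.

```python
import random
from collections import Counter

random.seed(2)
CARDS = list("AAAMMMMNNN")   # 3 A's, 4 M's and 3 N's
TRIALS = 200_000

ordered = unordered = 0
for _ in range(TRIALS):
    draw = random.sample(CARDS, 3)        # three cards drawn without replacement
    if draw == ["M", "A", "N"]:
        ordered += 1                      # exactly the order M, A, N (Example 2.7)
    if Counter(draw) == Counter("MAN"):
        unordered += 1                    # any arrangement of M, A, N (Example 2.8)

print("P(M, A, N in order) ≈", ordered / TRIALS, "; exact 1/20 =", 1 / 20)
print("P(letters of MAN)   ≈", unordered / TRIALS, "; exact 3/10 =", 3 / 10)
```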

2.6 CONTINUOUS RANDOM VARIABLES
In our discussion so far we have considered
only discrete random variables, that is, the
variables that take on values differing from
one another by a finite value. There are
other kinds of variables which take on a
continuous series of values and, hence are
continuous random variables. For example,
the height of rising water during a spate
varies continuously. In Fig. 2.5 we have shown a cubical vessel with side a containing a molecule which moves randomly. The z coordinate of the molecule may take values from z = 0 to z = a. It is senseless to talk about the probability that a continuous variable has a given exact value. We can only talk about the probability that such a variable takes a value within a certain interval. In the case of the cubical vessel, we can find the probability that the variable takes on values between z and z + dz. Naturally, the probability must be proportional to dz: the smaller the interval dz, the smaller is the chance of the molecule being found in dz. We can, therefore, write
P(dz) = f(z) dz   ...(2.23)
where f(z) is a function of z and is called the probability density. In general, the probability that a molecule is found in the volume element ΔV is
P(ΔV) = f(x, y, z) ΔV   ...(2.24)
Example 2.9 What is the probability that a molecule in the cubical vessel (Fig. 2.5) lies in the volume ΔV formed by two planes at z and z + dz?
Solution. The volume of the vessel is V = a³. The volume of the element is ΔV = a² dz.
Therefore P(ΔV) = ΔV/V = a² dz/a³ = dz/a
The probability density in this case is 1/a, whose dimension is the reciprocal of length. This is in keeping with our expectation, since probability is a dimensionless quantity. What was not expected is that it should be a constant, since we have defined the probability density as a function of the coordinates. It is a constant in this particular example because the vessel has a cubical shape and all positions of the molecule in it are equally possible.
Example 2.10 Find the probability that a certain molecule lies in a volume element ΔV formed by the planes at z and z + dz, in a conical vessel whose height is H and the radius of the base is R (Fig. 2.6). The volume of the vessel is
V = (1/3)πR²H
Solution. The volume ΔV is given by
ΔV = πr² dz
where r is the radius of the base of the element. Now
r/R = (H − z)/H
Therefore r = R(H − z)/H
and ΔV = πR²(H − z)² dz/H²
Hence P(ΔV) = ΔV/V = 3(H − z)² dz/H³

2.7 PROBABILITY FOR A MOLECULE TO LIE IN A FINITE VOLUME

Suppose that in a total number of observations N_0, the molecule was found in a small volume element dV in dN observations. The probability that the molecule lies in dV is
P(dV) = dN/N_0
Therefore dN = N_0 P(dV) = N_0 f(x, y, z) dV   ...(2.25)
where we have used Eq. (2.24).
In a finite volume V, the corresponding number of observations will be
N(V) = N_0 ∫_V f(x, y, z) dV
The probability that a molecule lies in the volume V is
P(V) = N(V)/N_0 = ∫_V f(x, y, z) dV   ...(2.26)
Thus, knowing the probability density in a certain region, the probability of appearance of a molecule in this region can be calculated using Eq. (2.26).
The following relation needs no explanation:
∫_V f(x, y, z) dV = 1   ...(2.27)

2.8 MEAN VALUE OF A DISCRETE RANDOM VARIABLE


In statistical physics, to get an overall picture, we are sometimes interested
in the mean value of a random variable, which is also known as the mathematical expectation. This is the central value of the variable about which its various values are distributed.
Suppose we are interested in the number of molecules in a volume element ΔV in a closed vessel having a volume V. We carry out a large number of trials, say M. In these, suppose we register m_1 times the value n_1, m_2 times the value n_2, and so on. The mean value is then defined as
⟨n⟩ = (m_1 n_1 + m_2 n_2 + ...)/(m_1 + m_2 + ...) = (m_1 n_1 + m_2 n_2 + ...)/M
    = (m_1/M) n_1 + (m_2/M) n_2 + ... = P(n_1) n_1 + P(n_2) n_2 + ...
    = Σ_i P(n_i) n_i   ...(2.28)

The specification of the probabilities of all the values n_i gives a complete statistical description of the system.
In general, if f(n_i) is any function of n_i, the mean value of f(n_i) is given by
⟨f(n_i)⟩ = Σ_i P(n_i) f(n_i)   ...(2.29)

2.9 MEAN VALUE OF A CONTINUOUS RANDOM VARIABLE
For a continuous variable z, the probability that it takes values in the range from z to z + dz is, by definition,
P(z) = f(z) dz
where f(z) is the probability density. Hence the mean value of z is given by
⟨z⟩ = ∫ z f(z) dz   ...(2.30)
where the integral is evaluated over all possible values of z. The mean value of a function φ(z) is
⟨φ(z)⟩ = ∫ φ(z) f(z) dz   ...(2.31)
It must be mentioned here that the mean value depends on the
variable, over which the averaging is carried out.
Example 2.11 Find the mean distance of a material point moving along a semicircle from its diameter by averaging it (a) along the semicircle; (b) along the projection of the point on the diameter of the circle.
Solution. Let R be the radius of the circle (Fig. 2.7). Suppose in a time t the point has moved through a distance s along the semicircle. The distance of the point from the diameter is then d = R sin(s/R), so that
(a) ⟨d⟩_s = (1/πR) ∫_0^{πR} R sin(s/R) ds = 2R/π
and (b) since d = √(R² − x²),
⟨d⟩_x = (1/2R) ∫_{−R}^{R} √(R² − x²) dx = πR/4
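The two averages in Example 2.11 differ because the averaging variable differs; a quick numerical check (a sketch with arbitrary discretization, assuming R = 1) reproduces 2R/π and πR/4.

```python
import math

R = 1.0
N = 100_000   # number of sample points (arbitrary)

# (a) average over arc length: d = R sin(s/R), with s uniform on (0, piR)
mean_arc = sum(R * math.sin(math.pi * (k + 0.5) / N) for k in range(N)) / N

# (b) average over the projection x, uniform on (-R, R): d = sqrt(R^2 - x^2)
mean_proj = sum(
    math.sqrt(R * R - x * x)
    for x in (-R + 2 * R * (k + 0.5) / N for k in range(N))
) / N

print(mean_arc, 2 * R / math.pi)      # both ≈ 0.6366
print(mean_proj, math.pi * R / 4)     # both ≈ 0.7854
```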

2.10 VARIANCE: DISPERSION
Sometimes it is convenient and even necessary to measure the possible values of a variable with respect to its mean value. We may as well be interested in knowing how the values are dispersed about the mean value. The deviation of a variable n from its mean value is
Δn = n − ⟨n⟩
If we decide to find the mean of the deviation to obtain an overall picture of the spread of the values, we get
⟨Δn⟩ = ⟨n − ⟨n⟩⟩ = ⟨n⟩ − ⟨n⟩ = 0
The mean deviation, therefore, does not serve our purpose as it does not give us the measure we are seeking. Hence, we have to use a different parameter. This parameter is the mean square deviation of the variable from its mean value. It is called the variance or dispersion or "the second moment of n about its mean", and is denoted by σ².
Thus σ² = ⟨(n − ⟨n⟩)²⟩ = ⟨n²⟩ − ⟨n⟩²   ...(2.32)

This determines the degree of scattering of a random variable and is, in a way, a measure of randomness.
We can easily show that for a discrete variable n
σ² = ⟨(Δn)²⟩ = Σ_n P(n)(Δn)² = Σ_n P(n)(n − ⟨n⟩)²   ...(2.33)
and for a continuous variable
σ² = ∫ P(n)(n − ⟨n⟩)² dn   ...(2.34)
The standard deviation or the root mean square deviation σ is given by the square root of the variance, i.e.,
σ = {⟨n²⟩ − ⟨n⟩²}^(1/2)   ...(2.35)
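As a small illustration of Eqs. (2.28), (2.33) and (2.35), the mean, variance and standard deviation of a discrete random variable can be computed directly from its probabilities. The distribution below is an arbitrary example, not one taken from the text.

```python
import math

# arbitrary discrete distribution: values n with probabilities P(n)
dist = {0: 0.1, 1: 0.2, 2: 0.4, 3: 0.2, 4: 0.1}
assert abs(sum(dist.values()) - 1.0) < 1e-12      # normalization, Eq. (2.8)

mean = sum(p * n for n, p in dist.items())                    # Eq. (2.28)
variance = sum(p * (n - mean) ** 2 for n, p in dist.items())  # Eq. (2.33)
sigma = math.sqrt(variance)                                   # Eq. (2.35)
print(mean, variance, sigma)
```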

2.11 PROBABILITY DISTRIBUTION
In the examples discussed so far we have assumed uniform distribution.
Thus, in a uniform distribution of space we assume that each region of
the space is equally probable. In some situations, however, the physical laws governing the system cause some regions of the space to be more probable than others. This is indicated by a weight function W(x) which varies with x. Suppose a particle can move freely along a line of length L; what is the probability that it lies within a small element dx at x? In the case of a uniform distribution, this would have been dx/L. If, however, there is a non-uniformity in the distribution, we write
P(x) dx = W(x) dx/L   ...(2.36)
where W(x) gives the weightage of the region. The normalization condition gives
∫₀^L P(x) dx = 1   ...(2.37)
Frequently, a distribution function is introduced rather than a weight function. Suppose N particles are distributed over a line of length L according to a certain distribution law f(x). Then the number of particles in the interval between x and x + dx is f(x) dx. Obviously
∫₀^L f(x) dx = N   ...(2.38)
From Eqs. (2.37) and (2.38), we get
f(x) = N P(x)   ...(2.39)
Exam ple 2.12 One thousand particles are distributed along a line

o f length 1 m; with a distribution function f (x) = Bx\ 1 —— 1 where L is


l Lf
the length o f the line. How many particles lie between x and x + dx, where
dx = 1 mm?
S olution . The required number is
n = fix) dx =

From the normalization condition

J Bx dx = N = 1000

BL2
i.e., = 1000
6

Therefore, B = 6000 (because L = 1)


Hence, n = 6000(x − x²) × 0.001 = 6(x − x²)
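The normalization and the resulting count in Example 2.12 are easy to verify numerically; the sketch below integrates f(x) by a simple midpoint rule and evaluates n at a sample point (the choice x = 0.5 m is only for illustration).

```python
N_PARTICLES = 1000
L = 1.0                          # length of the line in metres
B = 6 * N_PARTICLES / L**2       # from the normalization BL^2/6 = N

def f(x):
    """Distribution function f(x) = Bx(1 - x/L)."""
    return B * x * (1 - x / L)

# check the normalization by midpoint-rule integration
m = 100_000
integral = sum(f((k + 0.5) * L / m) for k in range(m)) * (L / m)
print(integral)          # ≈ 1000 particles in total

dx = 0.001               # 1 mm interval
x = 0.5                  # illustrative position
print(f(x) * dx)         # = 6(x - x^2) = 1.5 particles
```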
In a given experiment a stochastic (random) variable x may have any one of a number of values x_1, x_2, .... Each one of these will appear with a certain probability f(x_i). The complete set of values f(x_i) is called the probability distribution. The values f(x_i) must satisfy the conditions
f(x_i) ≥ 0
Σ_i f(x_i) = 1   ...(2.40)
where the sum is taken over all values of the variable x. All possible information about the variable x can be found if the distribution f(x_i) is known.

2.12 BINOMIAL DISTRIBUTION


Suppose we carry out a series of experiments in which each experiment has only two possible outcomes. For example, experiments to evaluate the magnetic moment of a system of electrons which have spins +1/2 or −1/2, or the traditionally known experiment of tossing coins in which either 'heads' or 'tails' appears. Let the probability of appearance of 'heads' be represented by p and that of 'tails' by q, where
p + q = 1   ...(2.41)
We list below in tabular form (Table 2.1) the outcomes of coin-tossing. In column (1) we give the number of coins tossed; in column (2) the possible outcomes (heads or tails); in column (3) the probability of their occurrence; and in column (4) we have entered the probability terms disregarding the order of H (head) or T (tail) and considering only their number.
Table 2.1
Number of coins | Possible outcomes | Probability | Probability terms
1 | H, T | p, q | p, q
2 | HH, HT, TH, TT | pp, pq, qp, qq | p², 2pq, q²
3 | HHH, HHT, HTH, THH, TTH, THT, HTT, TTT | ppp, ppq, pqp, qpp, qqp, qpq, pqq, qqq | p³, 3p²q, 3pq², q³
Basic Concepts of the T h e o ry of Probability 21

One can easily recognize that the terms in the last column are the terms of the algebraic expansion of (p + q)¹ for n = 1, (p + q)² for n = 2 and (p + q)³ for n = 3, respectively. It is, therefore, possible to conjecture that the situation for n coins will be symbolized by the terms of
(p + q)^n = q^n + npq^(n−1) + [n(n − 1)/2!] p²q^(n−2) + ... + p^n   ...(2.42)


Because of its analogy with the binomial theorem, the distribution
given in the last column of Table 2.1 is called binomial distribution.
Let us now consider a more general case of a series of experiments, in which each experiment has only two possible outcomes, say A and Ā, with probabilities p and q: p + q = 1. Suppose in N trials the outcome A results n times and Ā, n′ times. Then
N = n + n′   ...(2.43)
Since the N trials are statistically independent, the probability of such a result is p^n q^(n′). But a situation in which A results n times and Ā, n′ times can be achieved in many alternative ways. Let the number of distinct ways in which the above result is obtained be C_N(n). The probability, therefore, is
P(n) = C_N(n) p^n q^(n′)   ...(2.44)
Suppose, for example, that in 4 trials A results 3 times and Ā once. The number of ways in which this situation can be achieved is 4!/(3! 1!) = 4, as shown in Table 2.2.

Table 2.2
Ā A A A
A Ā A A
A A Ā A
A A A Ā
In general, the number of distinct ways in which A results n times and Ā, n′ times in N trials is given by
C_N(n) = N!/(n! n′!) = N!/(n!(N − n)!)   ...(2.45)
The required probability, therefore, is
P(n) = [N!/(n!(N − n)!)] p^n q^(N−n)   ...(2.46)
Consider now the binomial expansion of (p + q)^N:
(p + q)^N = q^N + Npq^(N−1) + [N(N − 1)/2!] p²q^(N−2) + ...
          = Σ_{n=0}^{N} [N!/(n!(N − n)!)] p^n q^(N−n)   ...(2.47)
Comparison of Eqs. (2.46) and (2.47) shows that each term of the binomial expansion gives the probability P(n). One can easily see that the probabilities thus obtained satisfy the normalization condition:
Σ_{n=0}^{N} P(n) = Σ_{n=0}^{N} [N!/(n!(N − n)!)] p^n q^(N−n) = (p + q)^N = 1   ...(2.48)
(because p + q = 1)
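Equation (2.46) is straightforward to evaluate directly; the short sketch below (with illustrative values of N and p) computes the binomial probabilities and confirms the normalization (2.48).

```python
from math import comb

def binomial_P(n, N, p):
    """P(n) = N!/(n!(N - n)!) p^n q^(N-n), Eq. (2.46)."""
    return comb(N, n) * p**n * (1 - p) ** (N - n)

N, p = 10, 0.3                                   # assumed example values
probs = [binomial_P(n, N, p) for n in range(N + 1)]
print(sum(probs))    # = 1.0, the normalization of Eq. (2.48)
print(probs[3])      # probability of exactly 3 favourable outcomes in 10 trials
```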
Example 2.13 A vessel of volume V is mentally divided into two equal parts. The vessel contains six molecules. Obtain the probability distribution for the molecules to lie in the left half of the box.
Solution. The probability that a molecule is found in the left half is obviously 1/2, i.e., p = q = 1/2.
From Eq. (2.46) the probability for the absence of molecules (n = 0) or for all the molecules (n = 6) to be found in the left half is 1/64. The probability for one molecule to lie in the left half and five in the right, or vice versa, is 6/64. The probability for 2 molecules in the left half and 4 in the right, or vice versa, is 15/64, and the probability for 3 molecules in each half is 20/64. The distribution is shown graphically in Fig. 2.8. The event of greatest probability is when the number of molecules is the same in both halves. Hence, the molecules in the vessel will move about until on the average there will be an approximately equal number of molecules in each half of the vessel.
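The numbers quoted in Example 2.13 (and plotted in Fig. 2.8) follow at once from Eq. (2.46) with N = 6 and p = q = 1/2; a minimal check:

```python
from math import comb

N = 6
for n in range(N + 1):
    P = comb(N, n) / 2**N          # Eq. (2.46) with p = q = 1/2
    print(n, f"{P * 64:.0f}/64")   # 1/64, 6/64, 15/64, 20/64, 15/64, 6/64, 1/64
```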

2.13 MEAN VALUE WHEN THE DISTRIBUTION IS BINOMIAL


The mean value of a variable n is given by
⟨n⟩ = Σ_{n=0}^{N} n P(n)
In the case of a binomial distribution, this becomes
⟨n⟩ = Σ_{n=0}^{N} n [N!/(n!(N − n)!)] p^n q^(N−n)
    = Σ_{n=0}^{N} [N!/(n!(N − n)!)] (p ∂/∂p) p^n q^(N−n)
    = p ∂/∂p [Σ_{n=0}^{N} [N!/(n!(N − n)!)] p^n q^(N−n)]
    = p ∂/∂p (p + q)^N
where we have used Eq. (2.47).
Therefore ⟨n⟩ = pN(p + q)^(N−1) = pN
(because p + q = 1)   ...(2.49)

2.14 FLUCTUATIONS
Although the mean obtained in Eq. (2.49) is generally close to the experimental value, chaotic deviations from the mean value, known as fluctuations, sometimes become significant. The measure of these fluctuations is the standard deviation σ, where
σ² = ⟨n²⟩ − ⟨n⟩²
In the case of the binomial distribution,
⟨n²⟩ = Σ_{n=0}^{N} n² [N!/(n!(N − n)!)] p^n q^(N−n)
     = Σ_{n=0}^{N} [N!/(n!(N − n)!)] (p ∂/∂p)² p^n q^(N−n)
     = (p ∂/∂p)(p ∂/∂p) Σ_{n=0}^{N} [N!/(n!(N − n)!)] p^n q^(N−n)
     = (p ∂/∂p)(p ∂/∂p)(p + q)^N
     = p[N(p + q)^(N−1) + pN(N − 1)(p + q)^(N−2)]
     = pN + p²N(N − 1) = pN − p²N + p²N²
     = pN(1 − p) + p²N² = Npq + p²N²
Hence,
σ² = ⟨n²⟩ − ⟨n⟩² = Npq + p²N² − p²N² = Npq   ...(2.50)
The root mean square deviation is
σ = √(Npq)   ...(2.51)
The fractional deviation is given by
σ/⟨n⟩ = √(Npq)/(Np) = √(q/(Np))   ...(2.52)
This relation shows that the larger the value of N, the smaller is the fractional deviation, i.e., the sharper the peak or the smaller the width of the distribution.
Example 2.14 Consider a binomial distribution for a system for which p = 1/3, q = 2/3 and N = 6. Determine the standard deviation and find the probability that n is in the range ⟨n⟩ − σ to ⟨n⟩ + σ.
Solution. Here ⟨n⟩ = pN = 2; σ² = Npq = 4/3, so σ ≈ 1.15.
Therefore, ⟨n⟩ − σ ≈ 1 and ⟨n⟩ + σ ≈ 3.
The values of n that lie in this range are 1, 2 and 3. Probabilities for these values calculated using Eq. (2.46) are 0.263, 0.329 and 0.219, respectively.
Therefore, the probability that n lies in the range ⟨n⟩ − σ to ⟨n⟩ + σ is 0.263 + 0.329 + 0.219 = 0.811.
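The figures in Example 2.14 can be reproduced with a few lines (a sketch reusing Eq. (2.46)); note that summing the exact probabilities gives 0.812, and the 0.811 above comes from adding the individually rounded values.

```python
from math import comb, sqrt

N, p = 6, 1 / 3
q = 1 - p
mean, sigma = N * p, sqrt(N * p * q)        # Eqs. (2.49) and (2.51)

def P(n):
    """Binomial probability, Eq. (2.46)."""
    return comb(N, n) * p**n * q ** (N - n)

in_range = [n for n in range(N + 1) if mean - sigma <= n <= mean + sigma]
print(in_range)                               # [1, 2, 3]
print([round(P(n), 3) for n in in_range])     # [0.263, 0.329, 0.219]
print(round(sum(P(n) for n in in_range), 3))  # 0.812
```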

2.15 STIRLING APPROXIMATION

The calculation of the probability P(n) in the case of a binomial distribution becomes laborious because it involves the computation of factorials of large numbers, sometimes as large as 10¹⁹. It is, therefore,

desirable to have a formula for rapidly assessing the values of factorials


of such numbers. Such a formula was provided by Stirling.
Since 2V = 1, 2, 3, ... N
N
In N\ = In 1 + In 2 + In 3 + .... In N = In n ...(2.53)
n =1

In Fig. 2.9 the values of In n against n have been plotted. The


required sum X In (n) is given by the area S confined by the stepped line
and the axis of abscissa. In the same figure we have(plotted In x against
x (dashed curve), where x is assumed to be a continuous variable. The
area under the dashed curve is given by
N N
S' = J In x d x = [x In — dx J
l i
= N In N —N + 1
For large N we can assume S = S'
N
Therefore In N\ — n ^ N l n N —N + l ...(.2.54)
n = 1

If N is very large, In IV! = N In N —.N ...(2.55)


A more rigorous derivation yields
26 Fundamentals of Statistical Mechanics

| 2.16 PO ISSON D IS TR IB U TIO N


There are two limiting forms of the binomial distribution which are very
useful in statistical physics. One of them is known as Poisson distribution.
This distribution is valid when
U) N is very large, N > > 1
(ii) p is very small, p < < 1
(Hi) n is very small, n < < N
but np is finite; that is, we assume that values of n have such a low
probability that it may be ignored.
Using these conditions we can approximate the factors appearing in
the binomial distribution. Thus

r(AT-n)!
r r — td = N (N - 1) ... (N - n + 1)

= Nn ...(2.57)
Here, because of the condition (i), we have assumed each factor
equal to N.
Now, q = 1 —p
therefore, In q = In (1 - p)
Expanding the right hand side by Taylor’s theorem and retaining
only the first term (because p is very small), we get
In q = - p i.e., q = e~p
Hence, qN~n = e-pW-»> = e~pN ...(2.58)
With these approximations the binomial distribution takes the form

Pin) = ^ l p ne- pN = e- pN ...(2.59)


«! nl
This is known as the Poisson distribution. It still contains the term
nl; but since n is very small, it can easily be evaluated. We can write the
Poisson distribution in the form

P(n) = M _ e» ...(2.60)
nl
where we have used Eq. (2.49). From this relation we see that

^P(n) = = e(«).e- » = i ...(2.61)

r ___/ \n 'N
because 'S'----— =
nl
v
The Poisson distribution, therefore, is a properly normalized
probability distribution.
Basic Concepts of the Theory of Probability 27

2.17 M E A N V A L U E S A N D S T A N D A R D D E V IA T IO N IN T H E
C ASE O F P O ISSO N D IS TR IB U TIO N

(n) = X nP W = I > ^ 7n\T e


n=0
71—
1
= (n)e-W £ (n)

= (n )
,-{n) _= (n) = piV ...(2.62)
Thus, Poisson distribution has the same mean value as the binomial
distribution.

Now, („•) , L .»

(nY e-<«> + y _ M L e-<»)


- ^Z 2(n -2 )! „ % - ! ) '•

= (n )2 V e » + (n ) Y
(nY e
7 n=2( « - 2)! =
= (n)2 e<"> + (n) e<n>e-(n)

= (»>2 + (*> ...(2.63)

Therefore, cr2 = (”-2) - (n)2 = (n)2 + (ra) - (n)2 = (n) ...(2.64)

and <r = (n)^2 ...(2.65)

= M!L = (n)-v 2 ...(2.66)


<») <»> w
Some of the physical systems which can be described by the Poisson
distribution are: number of electrons from a thermionic cathode, radioactive
emission, etc. —

| 2 ,1 8 ^G A U S S IA N D IS T R IB U T IO N '3 ® ® \
Tlie other limiting form of the binomial distribution is the Gaussian
distribution which, perhaps, is the most important distribution in statistical
28 Fundam entals of Statistical M echanics

mechanics. This distribution becomes valid when N is large and n also is


large but not too close to N.
It can be shown that the binomial distribution
N\ N —n
PM = 'P H ...(2.67)
n !( W - n ) !
tends to exhibit a maximum which is very much pronounced when N is
large.
For large values o f n, the change in P M , when n changes by unity,
is very small. We can, therefore, treat P (n) as a slowly varying function
o f continuous variable n. We shall deal here with In p (u), since it is a
much more slowly varying function o f n:

f N\
In P M = In P Q
n\{N-n)\
= In Nl — In n\ — In (N — ri)\ + n In p + (N - n)ln q
Using Stirling’s formula
In P in) = JV In N - N - nln n + n - (N - n)ln (N - n)
+ N — n + n In p + (2V — n) In q
= N In N — nln n — (N — n)ln (N — n) + nln p
+ (N — n)ln q
Differentiating with respect to n
d
In P (n) = -In n — 1 + In (N - n) + 1 + In p —In q
dn
( N - n) p
= In ...(2 .6 8 )
nq
For P (n) to be maximum
ln ( « - „ ) „ _ 0.
- " ) '> • = i ...(2.69)
nq nq
(.N - n ) p
or — -------- — = 1 , i.e., n = Np ...(2.70)

Therefore, P (n ) reaches maximum at the most probable value of


n = N p = (n). T h e p ro b a b ility P (n ), th us, is quite large in the
neighbourhood o f the value (n) o f n and becomes negligible when n
differs much from (n). The domain o f interest, therefore, is that in which
P (n ) is not negligible. We will obtain a probability distribution that is
valid in this region.
Let us investigate the behaviour o f P (n) near its maximum value.
Basic Concepts of the The o ry of Probability________ 29

Assuming n — (n) to be very small, we expand In P (n) in a Taylor


series about the value (n), as

In P (n) = In P (n) + In P (n )j (n - {n))

r d 2 ln P (n ) ( d - ( n ) ) +...
2 ! dn2

We may neglect higher powers, since n — (n) is very small.

Now from Eq. (2.68) at n = (n.), —— In P (n) = 0.


dn
d2 , , = N = N = ___ 1 _
and
dn2 n W n(N -n) Np (N - Np) Npq

Hence, In P (n) = In P ((n)) —(” ~ (n))


2Npq
n~(n)f
or PM = P { ( n) ) e 2Npq ...(2.71)

The value o f P {{n)) can be found from the normalization condition

in~(n)f
£ P (n ) = £ p ((n ))e 2Npq = 1

Putting x = n —(n.) and replacing summation by integration, we


have

^ P (n ) = ] P ( { n ) ) e - x*/2Npq dx

Let y =
j2 N P q

Therefore, ^ P ( « ) = P (in)) j e~y~ J2Npqdy

= P ((n)) ^2Npq -4n = 1 (see Appendix B )

Therefore, P (in)) = - 7= —
" ” V 2 nNpq
(n - NPy
1 2Npq
and Pin) = ...(2.72)
^2nNpq
30 Fundam entals of S tatistical M echanics

This expression is easier to evaluate, since it does not involve


factorials o f large numbers.

E x a m p le 2.15. With the data g iven in Ex. 2.13, calculate the


probability distribution using Gaussian disti'ibution formula. Depict the
results obtained in the two causes graphically.
From Ex. 2.13 we have

N = 6, p = q = ^

S o lu tio n . A glance at the table and Fig. 2.10 shows that even at
such a small value of N and n, the results obtained are very close. For
larger N the difference becomes still insignificant.

©
X

© ©
X X

0 .2 -

0 .1 - ©
X

o ©
“ i i r r~ i i ►
0 1 2 3 4 5 6
Binomal----------------- ► © Gaussian-------- ►X
Fig. 2.10

W f W l i

0
swmmmmm
. 1
2
,J '
:
0.0156^
0.0937
0.2343 ^ .
0.0162
0.0858
0.2333
1
'
3 . 0.3125 ' 0.3.157 t „
4 ' 0.2343 0.2333
5 0.0937 0 .0 8 5 8 '
6 \ ; 0.0156 - i: .
: 0.0162
■ ■ v*
Basic Concepts of the Theory of Probability 31

2.19 S TA N D A R D D E V IA TIO N IN T H E CASE O F G A U SSIA N

! D ISTR IBUTIO N

Standard deviation is given by

o 2 = {n2) - ( n)2 = ((n ~ ( n) Y )

= '£J(n ~ Np)2 p ( n)
Putting x = n —Np and changing summation to integration, we have

a2 = ( 2nNpq)~2 J e-*2/2Npg x % dx

= Npq (see Appendix B ) ...(2.73)


and ct = yjNpq ...(2.74)

| v|i2(f RANDOM WALK PROBLEM '


The random walk problem is of considerable interest in statistical
physics. We shall discuss this problem in this section as an application
of the binomial distribution.
The original problem solved by Markoff can be formulated in the
following way: A drunkard starts from a certain lamp post on a street.
Some of the steps he takes are forward and some backward. Each step is
of the same length and is completely independent o f the preceding step.
What is the probability that the drunkard, after taking N steps, will
reach his destination, which is at a distance x from the lamp post?
A particle undergoing random displacements, as in Brownian motion,
form a physical situation similar to a drunkard’s tottering.
Consider a particle undergoing successive displacements each of
length Z, in one dimension. What is the probability that the particle, after
N such displacements, will be at a distance x = mZ, where m is an integer ?
Let nLbe the number o f displacements to the right and n2 to the left.
Therefore, N - n1 + n2 ...(2.75)
The net displacement, say, to the right is
ml = (iMj - n2) l - {nx -*{N - Z = ( 2 ^ - N) l ...(2.76)
If the probability that the displacement is to the right is p, and that
it is to the left is q = 1 - p
then the probability of a sequence of n1 steps to the right
and n2 steps to the left = p ni q n2 ...(2.77)
However, there are various ways in which a particle may undergo
n 1 displacements to the right and n2 to the left. For example, suppose a
32 Fundamentals of Statistical Mechanics

particle undergoes 3 displacements, two of which are to the right and one
to the left. This may happen in 3 different ways as shown (Fig. 2.11).
-------- ». -------- +------------------

-►
Fig. 2.11
The number o f distinct possibilities in the problem we are considering
is, therefore, equal to
N\
- ( 2-78>
Therefore, the probability of nx displacement to the right and n2 to
the left in any order is
iV!
■p n' g n2 ...(2.79)
ll -n2
Since the probability of a displacement in either direction is 1/2; p
= q = —. Also from Eqs. (2.75) and (2.76)

N +m N -,
x 2 and n2
-z = 2 ...(2.80)
Therefore, the probability that the particle will be at x = ml after N
displacements is given by
SN
N\
PAm) = C ______ ^ ______ _ | T ...(2.81)
N + m ) . ( N —m V
■>
where C is the normalizing constant
Taking logarithms
In PN(m) = In C + In N\ - N In 2 - In

^ + m V ln (T _ Z J Z L | ,
...(2.82)
Using Stirling’s formula

In n! = n In n — n + — In (27m) ...(2.83)

In PN(m) - In C + N In N - N + In 271 + — In
Basic Concepts of the Theory of Probability 33

"U -H L
2 l N

- I l n 2K - i l n R f l - ^ ...(2.84)
2 2 \2 \ N
If m « N, which is generally true
m,2 _ ± .
l n f ^ ^ l = ±™ .
I N ) N 2N 2
Substituting this in Eq. (2.84) and simplifying

l n P » = In fill] * - ” L
N 1^2nN J 2N

Therefore PN(m) = C e

Now J P (m) dm = 1

Therefore
W Je 2N dm

= c f - l r V -242^N-^~
\kN ) 2
= 2C = 1

C = -
2
1 m2
and Pw(m) dm = f - J — 2 „ 2N ...(2.85)
V2TtJV
This is the probability function for m excess displacements to the
right. We can find the expression for the probability in terms of the
distance x and time t by substituting in Eq. (2.85), x = ml and N = nt.
1/2
21“ nt
Therefore, P (x ) dx = dx ...(2 .8 6 )
y2id2 nt
This is the normal or Gaussian distribution.
34 Fundamentals of Statistical Mechanics

| PROBLEMS
2.1 A simple pendulum performs harmonic oscillations according to the law

0 = 00 COS — t
where <p0 is the maximum angular deviation and T is the period. W hat
is the probability that in a random measurem ent, the value o f the angle
o f deviation lies between 0 and 0 + f/0?
2.2 Assume that the front and rear wheel axles o f bicycles are manufactured
in different workshops and that for every ten thousand front wheel
axles one is defective; while for every ten thousand rear wheel axles two
are defective. W hat is the probability for a bicycle picked at random to
have two defective axles?
2.3 W orkshop A manufactures high quality parts at the probability 0.8.
W orkshop B manufactures them at a probability 0.9. Three parts made
by workshop A and four made by workshop B are taken at random.
Find the probability for all seven parts to be good.
2.4 Five coins are tossed up simultaneously. Find the probability that at
least one head turns up.
2.5 The probability for an archer hitting a target is 10%. I f he shoots ten
times independently w hat is the probability that the target w ill be hit
at least once?
2.6 In a game, five dice are tossed sim ultaneously. Find the probability
that an ace appears upperm ost
(а) in exactly one die
(б) in at least one die
(c) in exactly two dice
2.7 A n urn contain s 20 w h ite and 10 black id en tica l balls. Find th e
•probability o f successively drawing
(i) two w hite balls; (ii) one w hite and one black ball, under the condition
that after the first trail the ball is (a) returned (b) not returned to the
um .
2.8 The com ponent p o f the m agnetic m om ent o f a spin 1 particle can have
3 possible values: p0, 0, -p „. I f the probability p that g = g 0 is equal to
the probability that p = —p0 calculate

2.9 A book o f 600 pages contains 600 typographical errors. A ssum ing that
these errors occur at random , use Poisson distribution to calculate the
probability
(a) that a page contains no errors
(b ) that a page contains at least three errors.
2.10 A man who was carousing for the evening, decides to return home.
H ow ever, his pow er o f decision becom es com pletely random . H e is
100 steps from his door. W h at is the probability that he arrives hom e
in 100 steps?
□ □ □
<*

<

M A X W E L L D IS TR IB U TIO N <

t would be interesting to see how Maxwell introduced the probability


I concept in the kinetic theory of gases and derived the correct formula
for the velocity distribution in the molecules of an ideal gas. This was
quite an amazing achievement and marks the beginning of a new epoch
in physics.

1
3.1 MAXWELL VELOCITY DISTRIBUTED NT
If f ( u x), f (uy), f i u j are the distribution functions of the molecules in the
three mutually perpendicular directions, the probabilities that the mol­
ecules have velocity components between ux and wx + d ux. uy and uy + d uy.
uz and uz + duz are f ( u x)dux, f { u y)duy, f ( u z)duz, respectively. Maxwell’s
derivation was based on the following two assumptions :
(£) The distribution of molecules moving from a point in any direc­
tion is isotopic; otherwise, the molecules would drift in a pre­
ferred direction. All directions in space are equivalent and, hence,
any direction o f velocity o f a molecule is equally probable.
Therefore f ( u x)dux = f ( u y)duy = f ( u z)duz ...(3.1)
where f is always the same function.
(ii) The movements of the molecules along the three mutually per­
pendicular directions are independent. That is ux, uy, u, are in­
dependent o f each other.
Note, however, that this second assumption holds only for velocities
smaller than the velocity o f light. If the molecules move with the velocity
o f light c, the velocity components are no longer independent, since they
are related to c by the equation.
u2x
+ u2
y
+ u2 z
—c2 ...(3.2)
The formula we derive here is, therefore, valid only in non-relativistic
mechanics.
From Maxwell’s second assumption the number of molecules wuth
velocities in the range ux and ux + dux, u and u + du , uz and u, + du. is
N = N 0 f ( u x) f ( u y) f ( u z) dux duy duz ...(3.3)
where N Q is the total number of molecules o f the gas.
35
36 Fundamentals of Statistical Mechanics

In this equation dux duy duz can be considered as volume element


in the velocity space and f (ux) f(.uy) f iuz) as the probability density of
velocity.
Because of the property of isotopy, the probability for the components
ux and —ux should be the same. Hence, the probability density function
f{u x) must be an even function of ux. We therefore, write
/■(“ ,) = <pUi2) ; /'( “ ,) = <p(“ v); f (u z) = cp(uf) ...(3.4)
where $ is the same function in all three cases.
Now if u is the resultant velocity, the probability density o f velocity
is
y|/(m2) = f (u x) f(u y) f(u z) = cp(u2) cp(u2) <p(it2) ...(3.5)
We have now to determine the form o f the functions \p and <|>.
Taking logarithms
ln\p(it2) = lncp(it2) + In cp(u2) + lncp(it2) ...(3.6)
Differentiating with respect to ux, we have

1 3\p(it2) 1 3cp(it2)
2 it_= 2 it
\|/(it2) 3(«2) cp(it2) 3(it2)
1 3<p(u2 ) 1 3\p(u2 ) ...(3.7)
cp(lt2) d(u2
x) v|/(it2) 3(it2)
Similarly,
1 3cp(it2) 1 3\p(it2)
...(3.8)
v(w2) 3(it2)
■6

d(u2)
A

and
1 3<p(itf ) 1 3\|/(u2)
...(3.9)
qKnf) d(u2) \p(M2) 3(u2)
Hence,
1 3cp(n2) 1 3cp(it2 ) 3cp(itf )
...(3.10)
cp(it2) 3(it2) cp(“ 2) 3(u2) cp(u2) 3(itf)
where a is a constant. The negative sign has been introduced for conve­
nience.
Integrating the equation
1 3'|/(it2)
cp(it2 ) 3(u2)
We have
(j) (it2) = Q g -a u * ...(3 .1 1 )

where C is another constant.


Maxwell Distribution 37

The probability that the molecule has components between ux and


ux + du.x is
fiu x)dux = Ce-a“* dux ...(3.12)
The constant C can be found from the normalization condition

~\fiux)dux = \Ce~au'dux = 1 ...(3.13)

Since the integral must converge, a must necessarily be positive. Now

^Ce~ttu'dux = C. j—
a = 1 (See Appendix B)

Therefore, Jf arid f<u*> du* = ...(3.14)


Hence, the number of molecules whose velocity lies between u and u + du
is
3/2
N (u)du = 1 V ^ -J e~au‘ dux duy duz ...(3.15)
or using spherical polar coordinates
f a ? '2
N(u)du = 4nJV0l — I e- ““2uZ du ...(3.i6)

We have now to identify the parameter a.


We first calculate the pressure of a gas contained in a vessel o f vol­
ume V. In the equilibrium state, the molecule will be distributed over the
volume of the vessel in a uniform manner in such a way that their
concentrations is n = NIV, independently of the coordinates at the point
where the measurement are made. The number density of the molecules
with velocities between uX and uX + X du is
( \112
n(ux) = e~auld u x ...(3.17)
The number of molecules hitting a unit area o f the wall perpendicular
to ux per unit time, is equal to the number in the parallelepiped o f volume
Ux
( ’) 1/2
= ra( — J e~au* u x dux ...(3.18)
Every time a molecule hits the wall, there is a change in momentum
equal to mu —(—mux) = 2mux. In other words the momentum transferred
to the wall is 2mux. Hence, the momentum transferred per unit area of
the wall per unit time.
38 Fundamentals of Statistical Mechanics

= e~au■ux 2mux dux ...(3.19)


The total momentum transferred is equal to the force which in the present
case is the pressure P
, OO s \1/ 2
P= / " i f J e-™* 2 m iP dux

■dlly.
- ( f )1 2m j lix ‘
1/2 i—
a nm
= n — 2m- J * ...(3.20)
u 4a ~2a
The equation o f state of a gas is
P = nkT ...(3.21)
where K is Boltzmann constant and K = 1.38 x 10“23 Joule/K.
Comparing Eqs. (3.20) and (3.21), we have
rm nm . m
nkT - -— , i.e. a = ...(3.22)
2a ' 2kT
Using this value in Eq. (3.16) we have
\3/2
(m V
Niu)du = 4teAT0( ^ j e-mu2/ 2*r uzdu ...(3.23)

3.2 MOMENTUM DISTRIBUTION


Using the relation p = mu, we can transform Eq. (3.23) in terms of
momentum,

3/2
Nip) dp = 4^ o i 2 W c f J e-p2/imkt p idp ...(3 .2 4 )

3.3 ENERGY DISTRIBUTION


The energy distribution can be found in the same way using

e = — mu2 ■ de = mudu = ^2 emdu


...(3.25)
Substituting these values in Eq. (3.23), we have
m ']3/2 _ 2e de
M e ) de = 4jdV
27tAT J e m V 2em
Maxwell Distribution 39

2N ( 1 A3/2
= ~ T = ----- „-e/AT eW2 Wp ...(3.26)
U rJ
•:<mu'■W
MBPIWBW'rm ;«•»
3.4
■"■v iSOME
:. SIGNIFICANT MEAN VALUES
'
(i) Mean value of un
m
(un^ = Jt^/Xitldti = J4tc -m u * / 2 k t un * 2 (fit
2nkT
where we have used Eq. (3.23)
s3 / 2
tn „ -m u r / 2/sT „
= 4n
2%kT) T le
o
un + 2 du
3/2
m
= 4n r[ - l + ^ |(See Appendix B)
2nkT n+3 I 2 J
■ m )~2~~
. 2kT
2 f 2 ^ Y /2 „ f ”-+ 3
...(3.27)
^ r^ rj l 2

The mean speed (it) is

n I2
2 ( 2 kT'S 8kT
<“) “ K l m 7
r( 2 ) ...(3.28)
- I
(it) Root mean square speed
\l/2
3 kT
(“ 2> - V tc l m J 1.2 ■
3 kT
Therefore • fp ) ...(3.29)-

(iit) The most probable speed u-p .

The most probable speed up o f a molecule is the speed for which the
probability P(u) is maximum. The condition for P (u) to be maximum is
3 /2
= 4j[f_J2L _) 2i t — - -m u z /2kt —0
du 2nkT J kT

2 kT
Therefore,
m
...(3 .3 0 )
40 Fundamentals of Statistical Mechanics

Figure 3.1 shows the relative values of the three speeds.

Fig. 3.1
(iv) Mean energy

W = o1 VS ( k T1)3'2 e-« /* r ei /2de


where we have used Eq. (3.26)

/ E\ _ 2N 1 °°fr3/2
( } ~ VS (k T f 2f

Putting * = VS, dx = dz

2N
Therefore (e) = — -------- vry [2a;4
N7 VS { k T f 2 o
4iV i ___ r 5
In ( k T )3/2 x5/2 I 2

kT

■NkT ...(3.31)

This shows that, on an average, energy — kT is associated with each


molecule for each degree o f freedom.
Maxwell Distribution 41

|*£-J5 EX P ER JIV ieN TA LV ^lFltR flO N OF M AXW ELL DISTRIBUTION


We describe in this section a few experiments out of the many that were
carried out to verify experimentally Maxwell distribution.

3.5.1 Stern's Experiment


A cross-section of the experimental arrangement used by Stem for the
verification of Maxwell distribution is shown in Fig. 3.2.

A platinum filament coated with a layer of silver is placed along the


axis o f two rotating co-axial cylinders. The filament is heated to such a
temperature that silver begins to evaporate. A slot is made on the inner
cylinder so that some o f the evaporated atoms fly through the slot to­
wards the outer cylinder where they are deposited. If the two cylinders
were stationary, the atoms deposited on the outer cylinder would form
the image of the slot directly opposite to the slot. The cylinders, however,
are rotating with a certain angular velocity oo. Hence, during the time the
atoms fly from the slot to the outer cylinder, the latter has moved through
a distance x and the atoms will be deposited at a point which is not
directly opposite to the slot.
I f u is the velocity of the atoms and d the separation between cylin­
ders, the duration of the flight is given by
d
At = — ...(3.32)
u
During this time, the outer cylinder has turned through an angle Ay
given by
A\|r = (0 At ...(3.33)
The coordinate x measured from the point opposite the slot is
d
x =R A R coAt = Rco ...(3.34)
u
42 F u n d a m e n ta ls o f S ta tis tic a l M e c h a n ic s

Fo r the gas to obey Maxwell distribution there must not be any in­
teraction of the molecules with one another, hence, a high vacuum w ill
have to be created in the vessel. In order that the molecules emerging
from the slot have the same distribution as in the inner cylinder, we have
to ensure that the gas flows through the slot without any hydrodynamic
pressure. Such a collision-free emergence of atoms is possible if the size
of the slot is very small, smaller than the mean free path of the atoms.
I f we assume that the evaporated atoms obey Maxwell distribution,
the flux of atoms passing through the slot an having their velocities
between u and u + du is
3 /2
m
\J\ = C u2e -mu'/2kTdu
2nkT
in ( Rcod V
m y /B (Read)3 ~2kr[ x ) dx
= cf- ...(3.35)
U nkT ) x4
where C is a constant depending on the parameters of the arrangement.
Th is formula determines the distribution of the atoms deposited on the
outer cylinder. The thickness of the layer deposited can be taken as a
measure of the number of molecules. Th e measurement of the thickness
against x agreed w ith Eq. (3.35), thus confirming Maxwell distribution.

3 .5 .2 F re e Fall u n d e r G r a v it y
Tw enty-five years later, a more elegant experiment was suggested again
by Stern and his colleagues, in which the force of gravity was the selector.
Th e atoms or molecules which effuse from the horizontal slit in the oven,
are registered by a detector D (Fig. 3.3). In gravitational field, slower

+y

OVEN

-y

Fig. 3.3
molecules deviate towards the Earth to a larger extent than faster mol­
ecules. The deviation can be easily calculated as a function of velocity.
Th e velocity of a molecule is analyzed in the same way as the trajectory
of a bullet. A molecule leaving the oven towards +y and passing through
the middle slit, strikes the detector at - y. The number of molecules ar­
riving at - y are directly proportional to the flow’ distribution. Although
the deviations observed is such experiments are of the order of several
tenths of a millimeter, the experiments can be carried out reliably and
confirm the validity of Maxwell distribution.
Maxwell Distribution 43

3.5.3 The Doppler Broadening of Spectral Lines


An experimental verification of the Maxwell distribution function is ob­
tained by measuring the width of a spectral line emitted by the hot gas
molecules at low densities. This broadening is due to what is known as
Doppler effect, which arises from the distribution of velocities of the
molecules in the gas.
Suppose a molecule moving with a velocity vz towards an observer is
emitting a radiation of wavelength X0 in the direction of motion. The
wavelength X. as recorded by the observer, will be different from >.0 and
will be related to it by the Doppler expression

__ ___________
| nG
f— ^

...(3.36)
II

1
> >

> >
O

c(X0 - X)
v = ---------------- ---------------------- ...(3.37)
N

o> >

7 dX
and au = — c -------------- ...(3.38)
Z ^ 0

The negative sign relates to the direction for vg.


The probability that a molecule has a component velocity in the range
from ug to vz + dvz, is
1/2
m
I e -mv\l2ht d v g
2nkT
1/2
i.e.
m \ 2kT c dX ed
2kkT
Therefore, the intensity o f radiation Ix dX received by the observer
will be proportional to
xl/2
m _ 2k T M c dX
2jckT )
me2rX0-xY
i.e. Ix d X = c 1 e 2kT^ J dX ...(3.39)

( m l C
where C1 = UideT j X0
>>

When
II
II

o
o

m e7 ( X.D—xY

‘ x dX = I,Xq e 2kT^ K ' dX


L ...(3.401
The distribution o f intensity, thus, has a Gaussian line shape
(Pig. 3.4).
44 Fundamentals of Statistical Mechanics

The width of a spectral line is usually measured at the half-intensity


points (Fig. 3.4). If Eq. (3.40) is expressed in terms of frequency one can
easily find that the half intensity width Aud is given on the frequency
scale by

Fig. 3.4

1-v
h—
2 2kT ln2>|3/2
= 2 .(3.41)
v0 ”0 me2 J
v
The agreement between theory and experiment has been found to be
excellent.

3.6 FRACTION OF MOLECULES IN A GAS WITH ENERGIES


HIGHER ^HAN A CERTAIN VALUE e. \
It is known that when the velocity of molecules exceeds a certain value
u0, the collisions between molecules becomes violent and result in
bimolecular reactions. For example, a reaction o f the thermonuclear fusion,
in which two deuterium nuclei combine to form a helium nucleus can
proceed- only i f deuterium nuclei have sufficiently large energy. In the
same way, for a gas discharge plasma to exist the electrons in it must
have an energy sufficient for ionization of neutral atoms. In the study of
such cases, it becomes imperative to calculate the fraction of particle
having velocities in excess o f a certain value u0.
(£) O n e-d im en sion a l p r o b le m
We first consider the problem in one dimension. From Eq. (3.41), the
fraction o f molecules having velocities exceeding ux is

g-mu] /2 k T
/i = 2 dux ...(3.42)

We have introduced in this equation a factor 2 in order to take ac­


count o f the molecules moving in both directions.
muz 2kT
Putting dux dx
2kT m
Maxwell Distribution 45

...(3.43)

where
x ° N2kT U°
This integral cannot be evaluated explicitly but we can obtain its
value to a good approximation as shown in Appendix B.
Thus

.(J « L ...(3.44)

or in terms of kinetic energy

A = f— ) e-*«'hT ...(3.45)
J
Exam ple 3.1 Find the fraction of H2 and N2 molecules at the surface
of (i) the earth; (ii) the moon, having velocities (away fro the centre)
higher than their escape velocities. Assume the temperature to be 300K.
What information do these calculations give about the atmosphere of the
earth and the moon?
E a rth M oon
Mass 5.976 x 1024kg 7.400 x 1022 kg
Radius 6.378 x 106m 1.740 x 10®m
2GM
Escape velocity =
R
n/2 x 6.67x 10-11 x 5.976x 1024 , ,
Earth u*0 ----------------------------s-------------- = 1.12 x *104 m/s
6 .3 7 8 x l0 6
V2X6.67X10"11 X7.400X1022 0 00 ,
Moon ---------------------------- 3-------------- = 2.38 x 10d m/s
1.440x10s

NOW f = ± (* h lL \ e -mul/2kT
2 J
(The factor 1/2 has been introduced, because the other half moves in
the opposite direction).
Substituting actual values in the above expression we have
h2 n2

[Earth ~ 10~23 2 x l0 -2
^ ~ 1 Moon ~ 10-307 10~15
We see that the rate of escape of the molecules from the surface of
the moon is much faster compared to that from the surface of the earth.
This explains the absence of lunar atmosphere.
46 Fundamentals of Statistical Mechanics

tii) T w o -d im en sio n a l p ro b lem . Velocity distribution function in this


case is
rn -n*4+uiV2kTd ,
27ikT

m „ m u.
= 2 K U ---j^ e -n u 'l2 k T .d u = e -m u 'l2 k T d ll

...(3.46)
where we have put u2 = u2 + u2 and 27t udu = dux duy. Hence the fraction
o f molecules having velocities higher than u0 is
lHH.ie~^/2kTdu
>2- J kT

mu
= j e Xdx j putting x =
2 kT

= e -x „ = e -m u % i2 k T = g-e„/*r ...(3.47)
(i*i) T h r e e -d im e n s io n a l p r o b le m . The fraction o f molecules having
velocities exceeding a certain value u0 is
3/2
-m u 2 / 2kT .,2
f 3 = |4 tc Z -1 u du ...(3.48)
2n k T j

Let
2kT

Therefore du
V m
Hence

- fa - dx ~ 75 ?C xd(e~*)
Integrating by parts
2 dx
f* = ----1=xe
Vre 7^

2 2
Je * rfx— Je * dx
Tre ^°e 7£
2 _*» , 2 c , ...(3.49)
- = x 0e 0 + 1 ---- y= \e dx.
vTC \JTZ q

Th e function

— f e~xldx = er/" x 0 ...(3.50)


TT J
M axw ell D istribution 47

is known as the error function. This occurs frequently in calculations. It


is evaluated numerically and its values for various x 0 are tabulated in
mathematical literature. Th e values of the error function when xQ « 1
and x 0 » 1, can be found without the aid of tables as has been shown in
Appendix B.
For

, 2 f x ‘q
...(3.51)
xo « 1
‘ T fx ° = l Z * » -T

e r f x0 = 1 - 1— ...(3.52)
*o » 1 n/TIXq 2x,o ;
Hence, for x0 < < 1

fz = x ne = 1-
•Jnx 2*g

If x Q is very much greater than unity,

/. 2 _2_ f j n f 2
fs = ~ j= x oe 0 u0e~mu*/2kT ...(3.53)
V 71 J n \2 k T )
O r in terms of energy

" £o ...(3.54)
lkTj
Exam ple 3.2 Th e electron temperature of a gas discharge plasma in
hydrogen is T = 20000K. Assuming that the distribution function of elec­
trons is Maxwellian, find the fraction of electron that is capable of ioniz­
ing neutral hydrogen atoms. (Ionization energy of hydrogen is 13.5 eV.)
For ionization of neutral atoms, electrons must have energies higher
than the ionization energy of the atoms, i.e. 13.5 eV.

Now f3 = e ^ /kT

£0 _ 13.5 x 1,6 x l ( T 19
k T ~ 1.38xl0~23 x 2 x l0 4

(because 1 eV = 1.6 x 10-19 J )

Therefore, f„ = - ~ = x 2.8 x e'7-8 = 0.0013


3 V3.14
Th at is, 0.13% of electrons are capable of ionizing neutral hydrogen
atoms.
48 Fu n d a m e n ta ls of Statistical M e c h a n ic s

| PROBLEMS
3.1 A vessel contains 0.1 kg of oxygen (0 2). Find the number of
oxygen molecules having their velocities between 200 and 210
m/s, the gas being maintained at 0°C.
3.2 Find the root mean square velocity of hydrogen (H2) gas at room
temperature.
3.3 Show that for a Maxwellian gas, the fluctuation of the velocity
is given by

3.4 Find the temperature at which the root mean square speed of
hydrogen gas molecules is j ust equal to the velocity of escape of
the molecules from the earth.

□ □□
MACROSCOPIC AN D
MICROSCOPIC STATES

he treatment given in the preceding chapter was limited to the


T problems connected with the properties of ideal gases. We now proceed
to describe a more general method for dealing with the problems in
statistical physics. The method can be applied to a system much more
complicated than a gas system, and of macroscopic dimensions, whose
constituent particles may interact with one another.
A system is generally characterized by the physical and chemical
properties of the substance in the space occupied by it, as well as by the
properties of boundaries. In general, we are concerned with systems
consisting of a large number of particles, such as for example, a gas
comprising a large number of molecules contained in a vessel; or a crystal
consisting of a large number of atoms in a lattice etc. The properties of
a system depend upon the state in which it exists at the instant at which
measurements are made. Our first concern, therefore will be how to specify
the state of a system at any time.

MACROSCOPIC STATES >.*


Logically since all system obey quantum mechanics we should start with.
i quantum statistical physics and then arrive at classical statistical physics
as a particular case. But it is not always necessary to use quantum
mechanics in describing the behaviour of a thermodynamic system.
| Newtonian mechanics is in excellent approximation and, hence it,
j convenient to commence the study of statistical mechanics consideration,
i To explain the concept of the state of a system, \ve will once more
t refer to a monatomic ideal gas, which is relatively a simple system. By
[ an ideal gas we mean that its constituent particles have a finite mass,
? undergo elastic collisions with one another and with the boundary walls
I and have no other interaction of any kind. We further assume that the
l vessel containing the gas is isolated. That is, it does not exchange energy
| with the material bodies outside and, hence, all variations in the gas are
I due to internal reasons. The time spent in collision is very much shorter
E than the time lapse between collisions. It must be mentioned here that
real gases are not ideal in the sense we have described above. They
I approximate to the ideal gases when the pressure is very low. Under this
49
50 Fundamentals of Statistical Mechanics

condition the particle density is so low that the particles are relatively
distant from each other. If a gas is left to itself for a sufficiently long time,
the pressure and the temperature o f the gas, whatever their initial
distribution, stabilize over the entire volume and we say that the gas has
attained a steady state. In this state the pressure (P), temperature ID
and volume (V) remain constant in time. This state of the gas, characterized
by the parameters, P, T and V, is called its macroscopic state. The
macroscopic steady state in this particular case is also called its equilibrium
state. Note that a steady state is not always an equilibrium state. For a
system that is not isolated, it is possible to establish a steady state by
maintaining the different parts of the walls of the vessel at different
temperatures with the help of external sources. We will however, restrict
ourselves in the first few chapters to the study of equilibrium state, as
these provide a great simplification. Besides most systems we encounter
are in thermodynamics equilibrium or else they tend to reach very rapidly
to a state of equilibrium.

| 4.2 MICROSCOPIC STATES


Since a gas consists of a large number of particles, the state of a gas can
also be characterized in terms of the states of the constituent particles.
The states o f the particles can be specified by ascertaining their positions
(coordinates qt) and velocities or momenta p r The state of the gas thus
described in terms of the properties of its constituent particles is called
its .macroscopic state.
We have seen above that the quantities pressure, temperature and
volume which describe the macroscopic state remain constant in the steady
state of the gas. The constituent particles o f the gas, however, are
continuously in motion and undergo collisions even in a steady state and
hence microscopic state keep on changing continuously. Let us be more
clear on this point. The microscopic state of a gas at any instant is
defined by the instantaneous positions and momenta (in magnitude and
direction) of the various molecules. If any change is made in the molecular
distribution, a new microscopic state results. Many of the different
microscopic states, however, will be indistinguishable from the standpoint
of macroscopic observation. For instance, if the positions and momenta of
two or more molecules are interchanged, a new microscopic state will
result; yet from the macroscopic point of view, no change will be detected
in the condition of the gas. Each one of the microscopic states is said to
pertain to the same macroscopic state. We are thus led to associate with
each microscopic state a large number of microscopic states, although
they widely differ in respect of their dynamical states.
Consider, for example, an ideal gas system in which particles can
exist in any one of the states with energy equal to 0, 1, 2, 3 energy units
and no other. Suppose the system of three distinguishable particles a, b
and c and the total energy of the system is three energy units. In how
many ways three energy units can be distributed among the three
Macroscopic and Microscopic States 51

distinguishable particles, consistent with the condition that the total energy
of the system is three units? In other words, how many microscopic states
correspond to the macroscopic state specified by three energy units?
Figure 4.1 shows the possible configurations of the system and the
number of microscopic states.
Number of states

Configuration Distinguishable Indistinguishable


particles particles
3 o b c
2 3 1
1
0 be ac 2b

3 a a b
2 b c c
6 1
1
c b a
0
3
2 1 1
1
abc
0 Total number of states 10 3

Fig. 4.1
The total number of accessible microscopic states corresponding to
the single macroscopic states in ten of the particles are distinguishable
and three if they are indistinguishable.
One of the aims of statistical mechanics is to correlate the properties
of matter in bulk with the properties of its constituent particles i.e. to
establish a relation between a macroscopic state and the corresponding
microscopic states.

| 4,3 PHASE SPACE


We shall now show how the state of a system is specified. We assume
that the physical systems obey the most general laws of dynamics as set
down by Hamilton. The state of a system is determined when we know
the details of its motion and, hence, is fully specified at a particular time
by a certain number of generalized coordinates qt and an equal number
of generalized momenta p r These coordinates and momenta are governed
by the canonical equation of Hamilton:
dH
a = l, 2 ,...) ...(4.1)
dqt
where qL are the generalized coordinates
p t are the generalized momenta
and H is Hamiltonian of the system
52 Fundamentals of Statistical Mechanics

A most elementary physical realization of such a system is furnished


by a single particle constrained to move in a straight line. The position
of the particle at any time can be described by giving its distance from
a fixed point on the line—origin of coordinates. This single numerical fact
merely gives its ‘mechanical configuration’ at the time in question. For
the complete specification of its state, its momentum at this instant must
also be given. Since in elementary physics mass is assumed to be constant
momentum is proportional to the velocity and, hence, it may seem that
it matters little which of the two is used. However, it must be emphasized
that the generalized momenta used in Hamiltonian equations are very
different from the mass—velocity products of elementary mechanics.
The state of a particle at any time is given
by the pair o f values (q , p ) = (position,
momentum) determined by the differential
equations of dynamics [Eq. (4.1)]. As the point
moves on a straight line, the values of number
pair Cq, p) will change. Quasi-geometrical
terminology makes for convenient and lucid
expression and considerably facilitates the study
of the changes in the state as time goes on. Any
number pair can be plotted as the rectangular
coordinates of a point in a Cartesian graph (Fig.
Fig. 4.2
4.2) .
Every such point which corresponds to a certain value of coordinate
qt, and momentum p L of the system, describes both the position and
momentum of the system at a particular instant, and hence represents a
certain state of the system.
There will be different points for
different states. As the particles moves on
the straight line, its momentum and coordi­
nate vary in accordance with Hamilton’s
equation [Eq. (4.1)]. Its evolution through
the successive states will be represented
by a trajectory in the Cartesian plane (Fig.
4.3) .
The state o f the system when
represented in this way is referred to as
its phase, the representative point as the
phase point, the trajectory as the phase
path and the Cartesian plane as the phase
space.
It is not necessary to start with Cartesin coordinates. Polar cylindrical
or any other suitably defined set of coordinates would do as well.
Nevertheless, we shall find that the phase space with rectangular axes
corresponding to each coordinate and momentum is very convenient, as
it has simple properties.
Macroscopic and Microscopic States 53

Example 4.1 Determine the phase trajectory of a bullet of unit mass


fired straight upwards with an initial speed of 392m/s. Acceleration due
to gravity is approximately 9.8m/s2 or,
^ = -9 .8
clt
Therefore v = —9.8 t + Cx (a constant)
From boundary conditions viz.
v = 392 when t = 0, C1 = 392
Therefore v =9.8 t + 392 ...(4.2)
p = mv = —9.8 t + 392 (because m = 1)
...(4.3)
dq
Now v = -9.8 t + 392
dt
On integrating q = —4.9 i2 + 392 t + C2 (a constant)
Again, using intial conditions q = 0 when t = 0, C2 = 0
Therefore, q = —4.9t2 + 392 t ...(4.4)
Equations (4.3) and (4.4) give the pair values of the phase point at
different times. Thus

t = 0,
11
©
o

(a) when p = 392


O

(6) when q = 3430, p = 294


II

(c) when t = 30, q = 7350, p = 98


id) when t = 40, q = 7840,
II
o
(e) when t = 60, q = 5880, p = —196
•Q

t = 80, p = —392
©

if) when
II

If we plot these points as shown in Fig. 4.4, we get the phase space
trajectory for the phase point of the bullet. The trajectory is obviously a
parabola, the equation of which can be obtained by eliminating t between
Eqs. (4.3) and (4.4) i.e.

Fig. 4.4
54 Fundamentals of Statistical Mechanics

_ p2 (392)2
q 196 + 19.6
or p 2 + 19.6 q - 153664 = 0 ...(4.5)
Exam ple 4.2 A particle of unit mass is executing simple harmonic
vibrations. Determine its trajectory in phase space.
The total energy of the particle E which is constant of motion is given
by

E ...(4.6)
where the first term represents its kinetic energy and the second its
potential energy.
The above equation can be written as

+ g2 =1 .(4.7)
2E 2E lk
The changes in q, p will always be subject to this condition. Equation
(4.7) is the equation of the curve representing the successive states (phase)
of the vibrating system. The equation represents an ellipse (Fig. 4.5) with
semi-axes equal to yj2E and -J2E / k •Any point on the ellipse represents
the phase (q, p) at some time, and the ellipse is described over and over
again, since the particle is vibrating periodically in a straight line,
representing the same positions and momenta.
For a given pair (q, p) the Eqs. (4.1) determine uniquely the rate of
change of every coordinate in phase space. It follows at once that through
every point in phase space, there passes but one trajectory. Hence, the
representative point can never cross its previous path.

Thus, the locus of a representative point in phase space is either a


simple closed curve, or a curve that never intersect itself.
In the two examples that we have considered, it may be noted, that
the actual motion of the system takes place in a straight line. The parabola
M acroscopic and M icroscopic States 55

in the first example and ellipse in the second, is only the progress chart,
not the actual path of the moving particle.

| 4.4 THE |l-SPACE


A mechanical system slightly more advanced than a particle moving in a
straight line, is a monatomic molecule moving in space. In a three-
dimensional configuration space, the position of molecule is determined
by the coordinates x ,y ,z . These coordinates, however, do not say anything
about its motion. If instead, we imagine a three-dimensional space in
which the coordinates are the momentum projections px, py, pz a momentum
space, a point in it determines the state of motion of the molecule, but
nothing about its position. A very meaningful way to determine the
complete state of a molecule would be to set up a six-dimensional space
with coordinates x, y, z, p x, py, pz in which every point represents a state
of the molecule. Such a six dimensional space for a single particle is
called its |x-space. Note that phase space is purely a mathematical concept.
Suppose that the position or momentum of one of the molecules is
changed slightly. The point of this molecule will undergo a displacement
in the phase space, and the microscopic state of the gas will be modified.
However, if the changes in the position and momentum of the molecule
so slight that even the most accurate experiment would be unable to
reveal them, we should assume that no change in the microscopic state
has occurred.

Fig. 4.6
Graphical representation of the foregoing is easily obtained. Consider
a two-dimensional case. We subdivide the range of variables q and p into
arbitrarily small discrete intervals. Let 5q be the size of the intervals on
the y-axis and 8p, the corresponding size on the p-axis. Phase space is
thus divided into small cells of equal area = 8y8p = hQ(say) (Fig. 4.6). The
state of the system can then be specified by stating that the representative
point lies in a particular cell of phase space. In classical description h0
can be chosen arbitrarily small and the specification of state becomes
56 Fundamentals of Statistical Mechanics

more and more precise, as we choose smaller and smaller magnitudes for
hc. It may be noted that h0 has the dimension o f energy x time. The
lengths <5q and Sp are assumed to be so small that if a phase-point within
a cell is slightly displaced to any point within the cell, the change in the
position and the momentum will to be too minute to be disclosed even by
accurate measurements. If the particles is in the cell, its state is
represented by the pair of coordinates (\qi, p f). If there is a change in the
state o f the particle it will move from this cell to another appropriate cell.

| 4.5 THE G-SPACE


For a system with n atoms a set of 6 n members (qv q2, -~q3n, p v p 2>...p3n)
is required to represent a phase point. Such a 6ra-dimensional space is
called V-space. This is evidently built up o f n spaces o f (i-type. It is also
sometimes more convenient to regard T-space as constructed out of the
configuration space that corresponds to a set o f 3n momenta conjugate to
the coordinates.
In 6rc-dimensional case, we imagine, as in the two-dimensional case,
the phase space to be partitioned into tiny juxtaposed 6-dimensional cells.
These cells are all equal, the edges o f the cells being o f equal magnitude.
We define an element o f volume in Euclidean space dv = dx dy dz so that
a system with coordinates within the range between x and x + dx, y and
y + dy, z and z + dz lies within it. Analogously, a volume element in 6ra-
dimensional phase space is expressed as
d r = dq1 dq2 ... dq3n dpx dp2 ... dp3n
= hgn ...(4.8)
and the state o f the system can be specified by starting in which particular
cell the coordinates q v q2, ... q3n, p v p 2 ...p3n o f the system are found. The
succession of states will be represented by trajectory in the 6n.-dimensional
space. Such a space is difficult to visualise. It is purely a mathematical
concept. In general, therefore, one deals with abstract paths in an abstract
space.
It will be observed that the /"-dimensional hyper volume in the phase
space has the dimensions o f (energy x tim e/. These dimensions hold
regardless o f the coordinates used. It can also be shown that on change
o f variable any given hyper volume goes over to a hyper volume o f equal
size in the new phase space (Prob. 4.3).
In Boltzmann’s days there was no theoretical reason to ascribe any
specific value h to the volume element in the phase space. But with the
discovery o f the Planck constant and the advent o f quantum-theory, more
light was thrown on the situation. We see that the assumption 5q5p = h
is now consistent with the Heisenberg uncertainty relation. According to
this relation, i f the position o f a particle is uncertain within a volume dx
dy, dz, the momentum components of the particle are uncertain within a
volume dpx, dpy, dpz o f momentum space. Consequently, all particles
Macroscopic and Microscopic States 57

situated in the special volume dx dy dz are indistinguishable. And between


these two volumes we have the relation
dx dy dz dpx dpy dpz = h3 ...14.91

| 4.6 POSTULATE OF EQUAL A PRIORI PROBABILITIES


As stated in Chapter 2, it is convenient to analyze various problems in
statistical physics, if we follow the method of ensembles. A vessel containing
an ideal gas is a statistical system. An aggregate of such identical systems
such as above is called a statistical ensemble, following the original
terminology of Gibbs, one of the founders of statistical mechanics. Since
the systems of the ensemble are identical their macroscopic states are the
same, but microscopic states may be different.
Consider a system composed of N identical particles in a volume V,
nx of which are with energy e1; n2 with energy e2 etc. If the particles are
non-interacting the total energy is given by
E = ]>>;£, and N = ...(4.10)
i i
The specification of the actual values of the parameters E, V, N then
defines a particular ‘macrostate’ of the given system. Since the total energy
E consists of a simple sum of energies there will obviously by a large
number of different ways in which the individual can be chosen, so as
to make the total energy E. In other words, there will be a large number
of different ways in which the total energy equal to E of the system can
be distributed among the N particles constituting it. Each of these different
ways specifies a particular microstate o f the given system. In any case,
to a given macrostate of the system there does, in general, correspond a
large number o f microstates and it seems natural that at any time t, the
system is equally likely to be in any one o f these microstates. That is, all
microstates are equally probable. This is generally referred to as the
‘principal o f equal a priori probability’. This principal can be considered
as the basis of the entire theory of systems in equilibrium. If all the
microscopic states have the same probability, the probability o f a given
macroscopic state will be proportional to the number o f different states
which it contains.

I 4.7 ERGODIC HYPOTHESIS


In statistical mechanics we have often to deal with the average or the
mean o f a quantity. The average of a physical quantity can be determined
in two ways: (£) one could consider an ensemble o f a large number of
identical systems and average, the physical quantity over all these systems
at one instant o f time to determine its ensemble average; or (ii) a system
could be followed over a very long period o f time, during which the system
takes different values. The average o f the physical quantity over the long
periodic gives the time averaged value o f the quantity.
58 F u n d a m e n ta ls of S ta tis tic a l M e ch a n ics

Ergodic hypothesis is that these two averages are identical:

4.7.1 Mean Value Over an Ensemble


Suppose we are interested in the mean value of the square of the
coordinate of a particle. Let N be the number of identical s3 ^stems in the
ensemble and q t the coordinate of the particle in the ith system. The
mean value of the square of the coordinate is given by

<«2). = TJ ? ?? - <4-11)
The subscript e indicates that the mean is taken over the entire
ensemble. A particle may appear in a particular cell in a number of
systems. Let 2V be the number of systems in which the particle is found

in the jt h cell with coordinate qy The contribution by Nj cells to *2* is


AT- Qj. Grouping together the terms corresponding to the same cell in
different systems, we can write

2 = x n j<
ij -< 4 1 2 >
i=l j
where the summation is taken over all the cells of each system. Using
this we can express Eq. (4.11) in the form
1 *V
1 X"* 2

N
= -1 I N t f £ ...(4.13)
N j i N
B y definition, the probability that the particle is in the y'th cell is

...(4.14)

Hence ...(4.15)

4.7.2 Mean Value Over Time


Let us how concentrate on a single of the ensemble. The coordinate of the
particle changes with time as the particle jumps from cell to cell. We keep
the particle under observation for a very long time. The mean of the
square of the coordinate is then given by

(<l2) t - \ ' ...(4.16)


Here, the subscript t indicates that the mean is taken over time, and T
is the total time dining which observations have been made. The particle
jumps from cell to cell during the time of observation. Let m be the
M a c ro s c o p ic a n d M ic ro s c o p ic States 59

number of jumps during time T. Suppose in the ith jum p the particle is
in a cell w ith coordinate q i and its duration in this cell is Atr
Therefore ^ A£t - T
m T
and = J? (t)d t ...(4.17)
t=l o
We can express this relation in a different form. During a very long
time, the particle arrives in each cell several times. Let m. be the number
of times it arrives in the yth cell, its coordinates being and its duration
in this cell A ty Equation (4.17) now can be expressed as
tn W
\q2it)d t = £qr?A*,- = Y
o 1= 1 7=1
M
= ...(4.18)
7=1
where M is the total number of cells in a system and T = m- At( is the
total duration of the stay of the particle in the yth cell.
Equation (4.16) now can be written as
2 1_ M M T.
lim lim ...(4.19)
<<?>t T— y .T r f T— *e
T j -1 7=1 1
Since out of the total time of observation T, the particle stays in the
jt h cell for the time T-, the probability that the particle is in jt h cell can
be taken to be
T
p _ lim J ...( 4 .2 0 )
~ r -»- y
Hence

(? > ; = t p j j ...( 4 .2 1 )
\ 7=1
This formula is similar to Eq. (4.15). The question is whether the
probability P is equal to P.. We have not proved it to be so, but intuitively
they seem to be equal. The assumption that P j is the basis of what is
known as Ergodic hypothesis, which can be expressed as
‘The mean over the ensemble is equal to the mean over time,’ i.e.
...(4.22)
This is one of the basic assumptions of statistical physics.

4.8 DENSITY DISTRIBUTION IN■ ■'■■PHASE SPACE


B ■'’to
Suppose the system in whose physical properties we are interested is a
gas in a container. We then consider simultaneously a large number
60 Fundamentals of Statistical Mechanics

(.ensemble) of containers having the same shape and containing the same
number of molecules of the gas of interest. The phases—the positions and
momenta of the molecules at a particular instant—vary from one container
to another. The values of ic/j, q2, ... p v p 2.-., pf1 at that instant are
different for the different systems of the ensemble and they are represented
by different points in the phase space. As time goes on, these different
representative points will move describing the trajectories, each one of
which will give the.dynamic history of one system of the ensemble. The
instantaneous state of any system in the ensemble can then be regarded
as specified by the position of the representative point in the phase space,
and the condition of the ensemble as a whole can be regarded as a ‘cloud’
of such representative points. One can imagine the phase points as a
cloud of dust (in hyperspace) whose individual dust particles are in motion
in accordance with dynamical laws. The density of this cloud at a particular
place may vary as time goes on, or may appear to vary from place to place
at a particular time. It must be emphasized here that the systems of an
ensemble are independent system, that is, there is no interaction between
them and hence the trajectories do not intersect.
Since in statistical physics we are interested in the number of systems
at any time which would be found in the different states that correspond
to different regions of the phase space, there is no need to maintain
distinction between individual system. The condition at any time is
described through a function p which represents the density of the
probability distribution in the phase space. The function p gives the number
of states per unit volume of the phase space. It is therefore necessary to
formulate the family o f probability density functions for the states of the
systems o f the ensemble at different times. We will, thus, .set up the
machinery for a microscopic probabilistic description o f matter.
Given a real iV-particle system, we never know exactly what its state
is. We can only say with a certain probability that it is one o f the points
in the phase space. We can, thus, assign a probability distribution to the
points in phase space, which can be regarded as a probability fluid which
flows according to Hamiltonian dynamics.
The density distribution function is obviously a function o f the
coordinates q v q.„ ...q^p^, p 2, ...pf , and time. Thus, the density distribution
function may be represented by p(qv ,.p f , t) or briefly by p(g, p , t).
Therefore, the number o f phase points (or systems) in a small volume
element d r = dqv ..., dpf is given by
SN — p(q, p , t) dqv ..dqf.d p y ..dpf ...(4.23)
and N = £ p (q, p, t) dqv ..dqf dpv ..dpf ...(4.24)
If the population N is assumed to be very large, p and §N can be
regarded as varying continuously and, hence, the total number o f systems
N is given by
N = $■■■jp (q ,p ,t)d q 1...dqf- dp1...dpf ...(4.25)
M a cro sco p ic and M ic ro s c o p ic S ta te s 61

The knowledge of the distribution would be sufficient to determine the average values of different mechanical properties of the systems comprising the ensemble. From the definition of probability, we see that the probability per unit volume of finding the phase point for a system in different regions of phase space is ρ(q, p, t)/N. If the properties of an ensemble are known as functions of the position q, p in Γ-space, then the average properties of the ensemble will be found by averaging the known functions over all allowed positions q, p. Thus, if X(q, p) is a measurable property characterizing the system, its mean value for all the systems in the ensemble would be given at time t by

⟨X⟩ = ∫...∫ X(q, p) ρ(q, p) dq_1...dp_f / ∫...∫ ρ(q, p) dq_1...dp_f        ...(4.26)

If ρ is normalized to unity, i.e. if

∫...∫ ρ(q, p) dq_1...dp_f = 1

⟨X⟩ = ∫...∫ X(q, p) ρ(q, p) dq_1...dp_f        ...(4.27)

That is, ρ itself gives directly the probability per unit volume.
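As a small illustration (not part of the original text), the following Python sketch evaluates the ensemble average of Eq. (4.26) numerically for a single one-dimensional oscillator, assuming an equilibrium density ρ ∝ e^{-H/kT}; all parameter values are chosen arbitrarily. Classical equipartition then gives ⟨H⟩ = kT for this system, which the weighted average reproduces.

import math

# Minimal numerical sketch of Eq. (4.26): <X> = ∫ X ρ dq dp / ∫ ρ dq dp,
# for one oscillator with H = p^2/2m + m w^2 q^2/2 and ρ ∝ exp(-H/kT).
# The values of m, w and kT below are purely illustrative.
m, w, kT = 1.0, 2.0, 1.5

def H(q, p):
    return p*p/(2*m) + 0.5*m*w*w*q*q

def rho(q, p):                      # unnormalized density in Γ-space
    return math.exp(-H(q, p)/kT)

# simple Riemann sums over a grid that comfortably contains the density
qs = [-6 + 12*i/400 for i in range(401)]
ps = [-6 + 12*i/400 for i in range(401)]
num = sum(H(q, p)*rho(q, p) for q in qs for p in ps)
den = sum(rho(q, p) for q in qs for p in ps)
print("<H> =", num/den, " (equipartition predicts kT =", kT, ")")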

4.9 LIOUVILLE'S THEOREM
Let us now inquire into the rate of change of phase point density in a
classical system.
Consider a small volume element in the hyperspace, bounded by q_1 and q_1 + δq_1, p_1 and p_1 + δp_1, etc. The number of systems in the element, using Eq. (4.23), is

δN = ρ dΓ = ρ(q, p, t) δq_1...δp_f
The motion of every system is governed by Hamilton’s equations.
Representative points will enter and leave through the boundary surfaces
of the element. Since the number of phase points entering the element of
hyper volume through any ‘face’ will in general be different from the
number of points which are leaving the opposite ‘face’, the number SN

will be changing. The change is equal to (∂ρ/∂t) dt δq_1...δp_f. We can construct a conceptual Euclidean space of 2f dimensions. Consider two faces perpendicular to the q_1-axis which are located at q_1 and q_1 + δq_1. The number of phase points entering the element in time dt through the face q_1 = constant is (Fig. 4.7)

= ρ(q, p, t) q̇_1 dt δq_2...δp_f        ...(4.28)

where q̇_1 is the velocity in the direction q_1.
The number leaving the element through the face q_1 + δq_1 = constant is

[ρq̇_1 + ∂(ρq̇_1)/∂q_1 δq_1] dt δq_2...δp_f        ...(4.29)

Hence, the effective number of systems entering the element in the direction q_1 is

= ρq̇_1 dt δq_2...δp_f - [ρq̇_1 + ∂(ρq̇_1)/∂q_1 δq_1] dt δq_2...δp_f

= - ∂(ρq̇_1)/∂q_1 dt δq_1 δq_2...δp_f        ...(4.30)

Fig. 4.7
Therefore, the net number entering the volume element through all its faces is given by

(∂ρ/∂t) dt δq_1...δp_f = -Σ_{i=1}^{f} [∂(ρq̇_i)/∂q_i + ∂(ρṗ_i)/∂p_i] dt δq_1...δp_f

We thus obtain the relation

∂ρ/∂t = -Σ_{i=1}^{f} [∂(ρq̇_i)/∂q_i + ∂(ρṗ_i)/∂p_i]        ...(4.31)

This result can be simplified to yield

∂ρ/∂t = -Σ_i ρ [∂q̇_i/∂q_i + ∂ṗ_i/∂p_i] - Σ_i [q̇_i ∂ρ/∂q_i + ṗ_i ∂ρ/∂p_i]        ...(4.32)
From Hamilton's equations of motion

∂q̇_i/∂q_i = ∂/∂q_i (∂H/∂p_i) = ∂²H/∂q_i∂p_i

∂ṗ_i/∂p_i = ∂/∂p_i (-∂H/∂q_i) = -∂²H/∂q_i∂p_i

Therefore, the first term in the bracket vanishes and

(∂ρ/∂t)_{q,p} = -Σ_{i=1}^{f} [q̇_i ∂ρ/∂q_i + ṗ_i ∂ρ/∂p_i]        ...(4.33)

The symbol of partial differentiation indicates that (∂ρ/∂t)_{q,p} gives the rate of change of the density at the point of interest. This is known as Liouville's equation and is of fundamental importance in statistical mechanics.
It is easy to see that the equation can be written in any of the following forms:

(i) ∂ρ/∂t + Σ_i [∂ρ/∂q_i q̇_i + ∂ρ/∂p_i ṗ_i] = 0        ...(4.34)

(ii) ∂ρ/∂t + Σ_i [∂ρ/∂q_i ∂H/∂p_i - ∂ρ/∂p_i ∂H/∂q_i] = 0        ...(4.35)

(iii) ∂ρ/∂t + {ρ, H} = 0        ...(4.36)

where

{ρ, H} = Σ_i [∂ρ/∂q_i ∂H/∂p_i - ∂ρ/∂p_i ∂H/∂q_i]

is known as the Poisson bracket of the functions ρ and H.
The total time derivative of ρ(q, p, t) is given by

dρ/dt = ∂ρ/∂t + Σ_i [∂ρ/∂q_i q̇_i + ∂ρ/∂p_i ṗ_i] = 0        ...(4.37)

In other words, the density of phase points is an integral of motion. This is Liouville's theorem in classical presentation.
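A small numerical sketch (added here for illustration, with arbitrarily chosen parameters) makes the content of Eq. (4.37) and (4.39) concrete: for a harmonic oscillator the exact Hamiltonian flow is a linear, area-preserving map, so the area of a small triangle of phase points is unchanged in time.

import math

# Illustration of Liouville's theorem for H = p^2/2m + m w^2 q^2/2:
# evolve three nearby phase points exactly and compare the triangle areas.
m, w, t = 1.0, 3.0, 0.73            # illustrative values

def evolve(q0, p0):
    c, s = math.cos(w*t), math.sin(w*t)
    return q0*c + (p0/(m*w))*s, p0*c - m*w*q0*s

def area(a, b, c):                  # area of a triangle in the (q, p) plane
    return 0.5*abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]))

tri0 = [(1.00, 0.50), (1.01, 0.50), (1.00, 0.52)]
tri1 = [evolve(q, p) for q, p in tri0]
print("area before:", area(*tri0))
print("area after :", area(*tri1))   # equal, as Liouville's theorem requires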

4.10 CONSERVATION OF DENSITY AND EXTENSION IN PHASE SPACE
Equation (4.36) shows that the density does not change if one moves in the phase space along with the point representing a system; that is, the density does not change with time in the neighbourhood of any selected moving point in the Γ-phase space. Thus, in a given region

dV = δq_1 δq_2 ... δq_ν δp_1 δp_2...δp_ν

of the Γ-space there will be

δN = δq_1...δp_ν / h^ν        ...(4.38)

states, where v is the number of degrees of freedom. According to Eq.


(4.37) this number will always remain the same, irrespective of what
coordinates we use to describe the system. Hence, the distribution of
representative points moves in the Γ-space like an incompressible fluid.
Gibbs referred to this fact as the principle o f conservation o f density in
phase.
We can arrive at yet another important principle from Liouville’s
theorem. Consider a region δV in phase space, small enough so that the density ρ is uniform over its extent. The boundaries of the region are permanently determined by the representative points lying in it. The number of elements in this region is

δN = ρ δV

In the course of time, each point in this region traces out a trajectory in phase space. Hence, at a later time t_1 the region δV will have gone over to a new region δV′ of the phase space. Both δV and δV′ contain the same number of states and, hence, they must have the same volume in phase,
although their shapes may be different. This can be proved as follows:
As no representative point can cross the boundary, nor can any point
be created or destroyed,
Therefore

d(δN)/dt = d(ρδV)/dt = δV dρ/dt + ρ d(δV)/dt = 0

or ρ d(δV)/dt = 0   (because dρ/dt = 0)

i.e. d(δV)/dt = 0        ...(4.39)


Thus, i f a given number o f representative points occupy an element
o f volume 5V at a certain time, they will occupy another volume o f equal
size at a later time, although the volume element may change its shape.
This is known as the principle o f conservation o f extension.

4.11 CONDITION FOR STATISTICAL EQUILIBRIUM
Liouville’s fundamental theorem which controls the temporal behaviour
o f ensem bles, perm its us to discuss the conditions for statistical
equilibrium.
The ensemble is considered to be in statistical equilibrium if ρ has no explicit dependence on time at all points in phase space, i.e.

∂ρ/∂t = 0        ...(4.40)

Under the condition of equilibrium, therefore, by Eq. (4.32)

Σ_i [∂(ρq̇_i)/∂q_i + ∂(ρṗ_i)/∂p_i] = 0        ...(4.41)

This equation will be satisfied if ρ is independent of the coordinates (q, p), i.e.

ρ(q, p) = constant        ...(4.42)

In other words, representative points are uniformly distributed in the phase space.
More generally, under statistical equilibrium, the density must be a function of the constants of motion only. Thus, if α is a constant of motion, ρ = ρ(α), so that

dρ/dt = (∂ρ/∂α)(dα/dt) = (∂ρ/∂α) Σ_i [∂α/∂q_i q̇_i + ∂α/∂p_i ṗ_i] = 0        ...(4.43)
It may be noted that when the system is in statistical equilibrium the
observed values of the macroscopic physical quantities are to a high degree
of accuracy, equal to their mean values.

PROBLEMS

4.1 A system of 4 particles has energy levels with energies 0, 1, 2, 3 units. The total energy of the system is 3 units. List the accessible microstates, if the particles are (i) indistinguishable (ii) distinguishable.
4.2 Discuss the nature of the trajectory in the phase space o f a
simple pendulum. Show that the area enclosed by the trajectory
is equal to the product of the total energy E and the time period
T.
4.3 Show that the element dΓ of the phase space of a single particle is invariant under the transformation from Cartesian coordinates to spherical polar coordinates.
4.4 Find the phase trajectory for a particle of mass m and charge -e, moving under the influence of a Coulomb force of attraction towards a fixed charge +e, at a distance r_0.
4.5 A linear harmonic oscillator is described by the equation
ẍ + γẋ + ω²x = 0
Determine and draw its phase trajectory.
4.6 The movement o f phase points may be regarded as a steady flow
o f a ‘gas’ in phase space o f 2f dimensions. Obtain Liouville’s
equation, starting from the equation o f continuity applicable for
fluids.
4.7 Verify Liouville's theorem in the case of the motion of three particles in a constant gravitational field, the initial states of the particles being given by the phase points
A(p_0, z_0), B(p_0, z_0 + l), C(p_0 + b, z_0)

□ □ □
STATISTICAL ENSEMBLES

In the formulation of statistical mechanics, Gibbs introduced three
standard ensembles to which real experiments could be approximated.
These are: (i) the microcanonical ensemble, (ii) the canonical ensemble
and (iii) the grand canonical ensemble. Their classification depends on the manner in which their systems interact. Each of them has its own characteristic distribution. Physical systems can interact in a variety of
ways; in particular, they may exchange energy, or matter or both with
each other. The ensemble in which the systems exchange energy but not
matter, is called a canonical ensemble, that in which both energy and
matter are exchanged between the systems is called a grand canonical
ensemble; and that in which neither energy nor matter is exchanged, is called a microcanonical ensemble. Statistical mechanics provides a number of methods for calculating equilibrium thermodynamic properties of
macroscopic systems. Explicit calculations o f thermodynamics functions
can be carried out using microcanonical, canonical or grand canonical
ensembles. A specific choice o f ensembles may be thought of as corre­
sponding to a particular physical situation.

5.1 MICROCANONICAL ENSEMBLE


This ensemble consists o f systems which are isolated from the rest of the
world. Such a system is also known as a ‘closed isolated system’ and has
a fixed volume, fixed total energy and a fixed total number of particles.
The probability density P(p, q) of such a system differs from zero only on the constant-energy hypersurface. It must be noted, however, that in reality one cannot achieve complete isolation of a system. We must allow for some interaction energy ΔE, though very small. The elements of the microcanonical ensemble, therefore, lie within the range between E and E + ΔE.
The picture of a microcanonical distribution in phase space would be
something like a very thin uniform cloud. One can easily see this by
referring to Fig. 5.1, which gives the trajectory of a simple harmonic
oscillator. The ellipse E represents the evolution o f our conservative
system with total energy E . This system is just one member of the
ensemble. The other members may have energy between E and E + ΔE


and hence their corresponding phase points will be between the two ellipses. The thin elliptic cloud is the energy shell. If we conceive of δE as getting smaller and smaller, the energy shell between the two surfaces would become just a surface in the limit as δE → 0.
According to the fundamental postulate of equal a priori probability, under the condition of equilibrium the system is equally likely to be found in any one of its accessible states. In the case of the microcanonical ensemble all states between E and E + δE are equally accessible. Therefore, if the

Fig. 5.1

system is in a state X, corresponding to the energy E_X, the probability P_X of finding the system in state X is given by

P_X = C    if E < E_X < E + δE
P_X = 0    otherwise        ...(5.1)

where C is a constant, the value of which can be determined from the normalization condition Σ P_X = 1, when summed over all accessible states in the range between E and E + δE. It would be more appropriate to write this relation as P_X = C δ(E - E_X), where δ is the Dirac delta function.

5.2 CANONICAL ENSEMBLE


It is evident that the microcanonical ensemble is fundamental and refers
to the simple situation in which the constituent systems are isolated and
not influenced in any way by external disturbances. One would, there­
fore, think it proper to use such ensembles in the development of the
basic theory. However, as pointed out above, microcanonical ensembles
are best suited for isolated systems, and we seldom deal with a com­
pletely isolated system; there is always some energy exchange with sur­
roundings. It would be more realistic, therefore, to use an ensemble in
which statistical equilibrium is attained not by isolation but by free in­
teraction. Canonical ensembles have been found to be more appropriate

for the description of such systems. Besides, the canonical approach gives


results which apply even when components interact strongly.
Suppose we desire to study the statistical properties of a thermodynamical system A. To this end we envisage an assembly of a very large number of systems (N) identical in volume and molecular composition
with A. Each system of the assembly is considered to be in thermal
contact with a huge heat reservoir. A heat reservoir is a system whose
heat capacity is so much higher than that of the subsystem in contact
with it, that heat flow from or to the heat reservoir does not change its
temperature significantly. The assembly can be thought of as contained
within a microcanonical ensemble. Each system of the assembly is in
contact with the other elements of the microcanonical ensemble, but is
otherwise isolated from the outside world. The walls of the containers of
the system of the assembly which separate it from microcanonical en­
semble are diathermic walls, which allow heat to flow through them but
do not allow the exchange of mass. An assembly of this kind is called a
canonical ensemble of systems of type A. The number of systems in a
canonical ensemble is constant; but their energy is not fixed.
Let us now find the probability that a canonical system, under the
condition of equilibrium, is in a particular state with specific energy E a.
Consider a canonical system A in a microcanonical ensemble A′ (A << A′) (Fig. 5.2). The walls of A are such that A and A′ are free to exchange energy. The microcanonical ensemble is isolated and its total energy E_0 is constant. As discussed before, a large number of microscopic states correspond to the macroscopic state of the microcanonical ensemble with energy E_0. Let Ω_0 be the total number of microstates of the microcanonical ensemble with energy E_0. By the postulate of equal a priori probability, all these microstates are equally probable. Therefore, the probability of each microstate is 1/Ω_0.

Fig. 5.2

It
is obvious that different microstates of the microcanonical ensemble A'
will give rise to different microstates of the small subsystem A. Suppose a certain microstate of A′ gives rise to a microstate of A with energy E_a (E_a << E_0). Hence, the energy of the remaining part of the complete system is E_0 - E_a. This is one microstate of A with energy E_a, attained through the other microstates of A′. Let the number of states of the microcanonical ensemble A′, through which the states of the small system A corresponding to the energy E_a are attained, be Ω_a(E_0 - E_a). Therefore, the probability that the small system A is in a state with energy E_a is

P_a = Ω_a(E_0 - E_a)/Ω_0        ...(5.2)

Using the relation Ω = e^{ln Ω}, we can express Eq. (5.2) as

P_a = e^{ln Ω_a(E_0 - E_a)}/Ω_0        ...(5.3)

Expanding ln Ω_a(E_0 - E_a) into a Taylor's series

ln Ω_a(E_0 - E_a) = ln Ω_a(E_0) - E_a ∂[ln Ω_a(E_0)]/∂E_0 + higher order terms        ...(5.4)

Putting β = ∂[ln Ω_a(E_0)]/∂E_0 and neglecting higher order terms,

ln Ω_a(E_0 - E_a) = ln Ω_a(E_0) - βE_a

Hence

P_a = e^{ln Ω_a(E_0) - βE_a}/Ω_0 = [Ω_a(E_0)/Ω_0] e^{-βE_a} = const. × e^{-βE_a}        ...(5.5)

From the normalization condition Σ_a P_a = const. × Σ_a e^{-βE_a} = 1

Therefore, const. = 1/Σ_a e^{-βE_a}        ...(5.6)

and P_a = e^{-βE_a}/Σ_a e^{-βE_a}        ...(5.7)

This is the Gibbs canonical distribution. It gives the probability that the small system A be in a state with energy E_a. There may be other microstates of A with the same energy E_a. If this number is g_a,

P_a = g_a e^{-βE_a}/Σ_a g_a e^{-βE_a}        ...(5.8)

The denominator is usually represented by Z, i.e.

Z = Σ_a g_a e^{-βE_a}        ...(5.9)

Therefore P_a = g_a e^{-βE_a}/Z        ...(5.10)

If E is a continuous variable,

Z = ∫ e^{-βE} dΓ_a        ...(5.11)

We have yet to identify the parameter β, which we will shortly do. Meanwhile, we shall discuss here an example which gives a clue towards the identification of β.
Consider a composite system A^0 made up of two systems A and A′ in thermal contact and weakly interacting with each other. Let A be in a state r with energy E_r and A′ in a state s with energy E_s. The energy of the composite system A^0, therefore, is E^0 = E_r + E_s.
If the two systems are separately in internal equilibrium, the probabilities of finding them in the respective states are given by

P_r = e^{-βE_r}/Σ_r e^{-βE_r} ;  P_s = e^{-β′E_s}/Σ_s e^{-β′E_s}        ...(5.12)

These probabilities are statistically independent. Therefore, the probability that the system A is in state r and A′ in state s simultaneously, is given by the product of P_r and P_s:

P_rs = P_r P_s = e^{-βE_r} e^{-β′E_s} / [Σ_r e^{-βE_r} Σ_s e^{-β′E_s}]        ...(5.13)

Evidently, this does not correspond to a canonical distribution of the composite system and, hence, does not describe an equilibrium situation. Exchange of energy between the two systems, therefore, will continue until equilibrium is reached. The time taken by the systems to reach the state of equilibrium from the initial state of non-equilibrium is called the relaxation time.
However, if β = β′, Eq. (5.13) reduces to

P_rs = e^{-β(E_r + E_s)} / Σ_{r,s} e^{-β(E_r + E_s)}        ...(5.14)

which corresponds to the canonical distribution.
Since the only criterion for thermodynamic equilibrium under these conditions is that the temperatures of the two systems be equal, it is valid to assume that β is a function of temperature.

5.3 ALTERNATIVE METHOD FOR THE DERIVATION OF CANONICAL DISTRIBUTION
Consider an ensemble consisting of a large number of systems N. As in the preceding section, the systems are separated by diathermic walls which permit the transfer of energy from one to the other. Suppose now that
n_1 of these systems are in state 1 with energy E_1
n_2 of these systems are in state 2 with energy E_2
n_3 of these systems are in state 3 with energy E_3
................................................................................        ...(5.15)
n_i of these systems are in state i with energy E_i

This is one of the modes of distributing the total energy E among the N constituents of the ensemble. It is obvious that

Σ_i n_i = N ;  Σ_i n_i E_i = N⟨E⟩        ...(5.16)

where ⟨E⟩ stands for the average energy per system.
Any set of numbers (n_1, n_2, ..., n_i) such as (5.15) which satisfies the restrictive conditions given by Eq. (5.16) is a possible mode of distribution, and any such mode can be realized in a number of ways by effecting a reshuffle among the members of the ensemble.
We will now find the probability for such a distribution to occur.
Let us first find the number of ways in which a distribution such as (5.15) can be obtained. Since, of the total number of N systems, n_1 are to be accommodated in state 1, the first can be chosen in N ways, the second in N - 1 ways, ... and the n_1-th in N - n_1 + 1 ways.
Therefore, the total number of possible ways = N(N - 1)...(N - n_1 + 1)

= N!/(N - n_1)!        ...(5.17)

However, permutations of the n_1 systems in state 1 among themselves do not give a new mode of distribution. The number of possible permutations being n_1!, the total number of distinct ways in which n_1 systems can be selected out of N is

N!/[n_1!(N - n_1)!]

We are now left with N - n_1 systems, of which n_2 are to be placed in state 2. This can be done in (N - n_1)!/[n_2!(N - n_1 - n_2)!] ways.
Proceeding in this way we find that the number of ways in which a distribution such as (5.15) can be obtained is

[N!/(n_1!(N - n_1)!)] × [(N - n_1)!/(n_2!(N - n_1 - n_2)!)] × [(N - n_1 - n_2)!/(n_3!(N - n_1 - n_2 - n_3)!)] × ...

= N!/(n_1! n_2! n_3! ...)        ...(5.18)

The probability W of obtaining such a distribution can be found from the ratio of this number to the total number of all possible distributions, say C:

Therefore W = (1/C) N!/(n_1! n_2! ...)        ...(5.19)

Accordingly, the most probable mode of distribution will be the one for which W is maximum.
Taking logarithms,

ln W = ln N! - Σ_i ln n_i! - ln C        ...(5.20)

By Stirling's formula

ln N! = N ln N - N

Therefore, ln W = N ln N - N - Σ_i n_i ln n_i + Σ_i n_i - ln C

i.e. ln W = N ln N - Σ_i n_i ln n_i - ln C        ...(5.21)

For W to be maximum, δ ln W = 0, i.e.

δ Σ_i n_i ln n_i = 0

Therefore Σ_i (δn_i + ln n_i δn_i) = 0        ...(5.22)

The other restrictive conditions are

Σ_i n_i = N and Σ_i n_i E_i = N⟨E⟩

On taking differentials,

Σ_i δn_i = 0 ;  Σ_i E_i δn_i = 0        ...(5.23)

Therefore, for W to be maximum

Σ_i ln n_i δn_i = 0        ...(5.24)

subject to the conditions (5.23).
Following the technique of Lagrange's multipliers (see Appendix A), we multiply the first of Eqs. (5.23) by α and the second by β and add these to Eq. (5.24), to get

Σ_i (ln n_i + α + βE_i) δn_i = 0        ...(5.25)

If α, β are chosen suitably, the δn_i can be regarded as independent and, hence, the coefficient of each δn_i must vanish separately.

Therefore ln n_i + α + βE_i = 0        ...(5.26)

or n_i = e^{-α - βE_i} = e^{-α} e^{-βE_i}        ...(5.27)

Since Σ_i n_i = N,

e^{-α} Σ_i e^{-βE_i} = N        ...(5.28)

and n_i = N e^{-βE_i} / Σ_i e^{-βE_i}        ...(5.29)

The probability of finding a system in state i is P_i = n_i/N.

Therefore P_i = e^{-βE_i} / Σ_i e^{-βE_i}        ...(5.30)
which corresponds to the canonical distribution.


Example 5.1 A system with just two energy levels is in thermal equilibrium with a heat reservoir at temperature 600 K. The energy gap between the levels is 0.1 eV. Find (i) the probability that the system is in the higher energy level and (ii) the temperature at which this probability will equal 0.25.

(i) P = e^{-βE}/Z = e^{-βE}/(1 + e^{-βE}) = 1/(e^{βE} + 1)

= 1/(e^{1.6×10^-20/(1.38×10^-23 × 600)} + 1)

= 0.126

(ii) 1/(e^{1.6×10^-20/(1.38×10^-23 × T)} + 1) = 0.25

Therefore T = 1056 K
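The numbers in Example 5.1 are easy to reproduce with a few lines of Python; the short check below (added for illustration only) uses the canonical probability of the upper level with degeneracy one.

import math

# Quick numerical check of Example 5.1 (two-level system, gap E = 0.1 eV).
k = 1.38e-23            # J/K
E = 0.1 * 1.6e-19       # 0.1 eV in joules

def p_upper(T):         # probability of the upper level, Eq. (5.7)
    return 1.0/(math.exp(E/(k*T)) + 1.0)

print("P(600 K) =", p_upper(600))          # ≈ 0.126

# temperature at which P = 0.25: 1/(e^{E/kT}+1) = 0.25  ->  E/kT = ln 3
T = E/(k*math.log(3.0))
print("T for P = 0.25:", T, "K")           # ≈ 1056 K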
Example 5.2 A solid containing non-interacting paramagnetic atoms, each having a magnetic moment equal to one Bohr magneton, is placed in a magnetic induction field of strength 3 tesla. Assuming that the atoms are in thermal equilibrium with the lattice, find the temperature to which the solid must be cooled, so that more than 60% of the atoms are polarized with their magnetic moments parallel to the magnetic field.
(One Bohr magneton μ = 9.27 × 10^-24 J T^-1)
Here the probability that the atoms are polarized parallel to the field is

P = e^{μB/kT}/(e^{-μB/kT} + e^{μB/kT}) = 1/(e^{-2μB/kT} + 1) = 0.6

Therefore e^{-2μB/kT} = 1/0.6 - 1 = 2/3

i.e. 2μB/kT = ln 1.5

Therefore T = 2μB/(k ln 1.5) = 9.91 K

i.e. the solid must be cooled below about 9.91 K.

5.4 MEAN VALUE AND FLUCTUATIONS


We have seen in Ch. 2 that the mean of a function f is given by

⟨f⟩ = Σ_a f_a P_a

In the case of the canonical distribution

⟨f⟩ = Σ_a f_a e^{-βE_a} / Z        ...(5.31)

Thus, the mean energy is given by

⟨E⟩ = Σ_a E_a e^{-βE_a} / Z = -(1/Z) ∂Z/∂β

i.e. ⟨E⟩ = -∂(ln Z)/∂β        ...(5.32)

The mean square energy is

⟨E²⟩ = Σ_a E_a² e^{-βE_a} / Z = (1/Z) ∂²Z/∂β²        ...(5.33)

Therefore, the fluctuation in energy is

⟨(ΔE)²⟩ = ⟨(E - ⟨E⟩)²⟩ = ⟨E²⟩ - ⟨E⟩²

= (1/Z) ∂²Z/∂β² - [(1/Z) ∂Z/∂β]²

= ∂²(ln Z)/∂β² = -∂⟨E⟩/∂β        ...(5.34)

Using the relation

β = 1/kT        ...(5.35)

(to be derived later), we get

⟨(ΔE)²⟩ = -∂⟨E⟩/∂β = -(∂⟨E⟩/∂T)(dT/dβ) = kT² ∂⟨E⟩/∂T = kT² C_V        ...(5.36)

Therefore, the energy fluctuation in the canonical distribution is proportional to the specific heat.

For example, for a perfect gas E = (3/2)NkT
(This will be proved later)

Therefore ⟨(ΔE)²⟩ = kT² (3/2)Nk = (3/2) N k²T²        ...(5.37)

and the mean fractional fluctuation is

[⟨(ΔE)²⟩/⟨E⟩²]^{1/2} = [(3/2)Nk²T² / ((9/4)N²k²T²)]^{1/2} = [2/(3N)]^{1/2}        ...(5.38)

This is negligible if N >> 1.
If N = 1, the fractional fluctuation is √(2/3) = 0.82, showing that fluctuations are more important for microscopic systems. If N = 10^18, say, which corresponds to a macroscopic system, the fractional fluctuation is 0.82 × 10^-9, which is negligible. It is, therefore, reasonable to assume that in thermal equilibrium a macroscopic system has a precise energy value, which is equal to the ensemble average value of energy.
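The relation ⟨(ΔE)²⟩ = ∂²(ln Z)/∂β² of Eq. (5.34) can be checked numerically for any small level scheme. The sketch below (an illustration added here, with an invented three-level spectrum) compares the fluctuation computed directly from the canonical averages with a finite-difference second derivative of ln Z.

import math

# Check of Eq. (5.34) for a toy three-level system (energies are illustrative).
levels = [0.0, 1.0, 2.5]
beta = 0.8

def lnZ(b):
    return math.log(sum(math.exp(-b*e) for e in levels))

def averages(b):
    w = [math.exp(-b*e) for e in levels]
    Z = sum(w)
    E1 = sum(e*wi for e, wi in zip(levels, w))/Z
    E2 = sum(e*e*wi for e, wi in zip(levels, w))/Z
    return E1, E2

E1, E2 = averages(beta)
h = 1e-4
d2lnZ = (lnZ(beta + h) - 2*lnZ(beta) + lnZ(beta - h))/h**2
print("direct <(ΔE)^2>  =", E2 - E1*E1)
print("d2(ln Z)/dβ^2    =", d2lnZ)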

5.5 GRAND CANONICAL ENSEMBLE
We now consider a situation slightly different from the one considered in the case of the canonical distribution. Imagine as before a sub-system A in a system A′, with the difference that the walls of the sub-system are such that A and A′ can exchange both energy and particles. Such systems are known as open systems. A′ is, of course, a member of a microcanonical ensemble. If E_0, N_0 are the energy and number of particles of the entire system,

E_0 = E_a + E′        ...(5.39)
N_0 = N_a + N′

where E_a, E′ and N_a, N′ are the energies and numbers of particles in A and A′ respectively, and none of them is constant. In deriving the probability distribution, we adopt the same technique as in the case of the canonical distribution.
The probability that the sub-system A is in a state with energy E_a and number of particles N_a is given by

P_a = C Ω_a(E_0 - E_a, N_0 - N_a)        ...(5.40)

where C is a constant and Ω_a has the same meaning as in the case of the canonical distribution.
We expand ln Ω_a(E_0 - E_a, N_0 - N_a) in a power series:

ln Ω_a(E_0 - E_a, N_0 - N_a) = ln Ω_a(E_0, N_0) - [∂(ln Ω_a)/∂E_0] E_a - [∂(ln Ω_a)/∂N_0] N_a + ...

= ln Ω_a(E_0, N_0) - βE_a + μN_a        ...(5.41)

where we have put

β = ∂(ln Ω_a)/∂E_0 ;  -μ = ∂(ln Ω_a)/∂N_0        ...(5.42)

Hence

Ω_a(E_0 - E_a, N_0 - N_a) = Ω_a(E_0, N_0) e^{-βE_a + μN_a}

Therefore P_a = C Ω_a(E_0, N_0) e^{-βE_a + μN_a} = C′ e^{-βE_a + μN_a}        ...(5.43)

The normalization condition gives

1/C′ = Σ_a e^{-βE_a + μN_a}        ...(5.44)

and hence

P_a = e^{-βE_a + μN_a} / Σ_a e^{-βE_a + μN_a}        ...(5.45)

This distribution is called the grand canonical distribution.
We now introduce here a quantity

z = e^{μ}        ...(5.46)

which is generally known as the fugacity of the system. In the chemical literature it is known as the absolute activity.
In terms of the fugacity, the relation (5.45) becomes

P_a = z^{N_a} e^{-βE_a} / Σ_a z^{N_a} e^{-βE_a}        ...(5.47)

The denominator in Eq. (5.47) is called the grand partition function.

We will now show how we can arrive at the distribution given by Eq. (5.45) by following the technique adopted in Sec. 5.3 for the canonical distribution.
As before, the probability that n_1 systems are in state 1 with energy E_1, n_2 in state 2 with energy E_2, etc., is given by

W = (1/C) N!/(n_1! n_2! ...)

where N is the total number of systems in the ensemble. Let N_1 be the number of particles in each of the n_1 systems, N_2 in each of the n_2 systems, etc.

Now ln W = ln N! - Σ_s ln n_s! - ln C

= N ln N - N - Σ_s n_s ln n_s + Σ_s n_s - ln C
The condition for ln W to be maximum is

δ ln W = 0

i.e. Σ_s ln n_s δn_s = 0        ...(5.48)

subject to the conditions

Σ_s n_s = N ;  Σ_s n_s E_s = N⟨E⟩ ;  Σ_s n_s N_s = N⟨N⟩

i.e. (i) Σ_s δn_s = 0 ;  (ii) Σ_s E_s δn_s = 0 ;  (iii) Σ_s N_s δn_s = 0

Multiplying (i), (ii) and (iii) by Lagrange's multipliers γ, β and α and adding to Eq. (5.48), we get

Σ_s (ln n_s + γ + βE_s + αN_s) δn_s = 0        ...(5.49)

As the δn_s are independent, their coefficients must vanish.

Therefore ln n_s + γ + βE_s + αN_s = 0        ...(5.50)

n_s = e^{-γ - βE_s - αN_s}        ...(5.51)

Now Σ_s n_s = e^{-γ} Σ_s e^{-βE_s - αN_s} = N

Therefore e^{-γ} = N / Σ_s e^{-βE_s - αN_s}        ...(5.52)

and n_s = N e^{-βE_s - αN_s} / Σ_s e^{-βE_s - αN_s}

The probability of finding the system in state s is given by

P_s = n_s/N = e^{-βE_s - αN_s} / Σ_s e^{-βE_s - αN_s}        ...(5.53)

which is the same as obtained in the preceding section.

5.7 FLUCTUATIONS IN THE NUMBER OF PARTICLES OF A SYSTEM IN A GRAND CANONICAL ENSEMBLE

The mean square deviation of the number of particles in an open system is given by

⟨(ΔN)²⟩ = ⟨N²⟩ - ⟨N⟩²        ...(5.54)

Now, ⟨N⟩ = Σ_i P_i N_i = Σ_i N_i e^{-βE_i - αN_i} / Z = -(1/Z) ∂Z/∂α        ...(5.55)

⟨N²⟩ = Σ_i N_i² e^{-βE_i - αN_i} / Z = (1/Z) ∂²Z/∂α²        ...(5.56)

where Z = Σ_i e^{-βE_i - αN_i} is the grand partition function.

But ∂/∂α [(1/Z)(∂Z/∂α)] = (1/Z) ∂²Z/∂α² - (1/Z²)(∂Z/∂α)²

Therefore ∂⟨N⟩/∂α = -⟨N²⟩ + ⟨N⟩²

Hence

⟨(ΔN)²⟩ = -∂⟨N⟩/∂α        ...(5.57)

It will be shown in the next chapter that α = -βμ, where μ is the chemical potential. Using this relation we can express Eq. (5.57) as

⟨(ΔN)²⟩ = kT ∂⟨N⟩/∂μ        ...(5.58)

Alternatively, in terms of the fugacity z,

⟨N⟩ = Σ_i N_i z^{N_i} e^{-βE_i} / Z ,  Z = Σ_i z^{N_i} e^{-βE_i}

Therefore z ∂⟨N⟩/∂z = -(z²/Z²)(∂Z/∂z) Σ_i N_i z^{N_i - 1} e^{-βE_i} + (z/Z) Σ_i N_i² z^{N_i - 1} e^{-βE_i}

= -⟨N⟩² + ⟨N²⟩ = ⟨(ΔN)²⟩        ...(5.59)

1
5.8 REDUCTION OF GIBBS DISTRIBUTION TO MAXWELL AND BOLTZMANN DISTRIBUTIONS
Since we have claimed that Gibbs distribution is a general distribution,
we should be able to derive Maxwell and Boltzmann distributions from it.
We now proceed to show how this can be done.
5.8.1 Maxwell Distribution
Maxwell distribution function gives basically the probability of finding a
particular molecule in a certain range of velocities, say, between u and

u + du. The system in this case is a single molecule in a heat reservoir.


The Γ-space is a six-dimensional space in this case, i.e. the μ-space.
Since a molecule occupies in the phase space a volume h³, the number of states of a molecule contained in a phase volume element dΓ is

dΓ/h³ = dx dy dz dp_x dp_y dp_z / h³        ...(5.60)

where h is Planck's constant.
According to the Gibbs distribution the probability of a system being in a state with energy E_a is e^{-βE_a}/Z.
Therefore, the probability of finding the system in a phase volume element dΓ corresponding to the energy E_a is

dP_a = e^{-βE_a} dΓ / (Z h³)

= e^{-βE_a} dx dy dz dp_x dp_y dp_z / (Z h³)        ...(5.61)
To find the probability that the molecule has energy E_a, we have to integrate Eq. (5.61) over all the elements of the phase volume corresponding to E_a. Since the energy does not depend upon the coordinates, we can straight away integrate Eq. (5.61) over the coordinates and write

dP_a = (V / Zh³) e^{-βE_a} dp_x dp_y dp_z        ...(5.62)

By converting to spherical coordinates and integrating over all the angular coordinates we have

dP_a = (4πV / Zh³) e^{-βE_a} p² dp        ...(5.63)

Now E_a = (1/2)mu² ;  p = mu

Substituting these in Eq. (5.63), we have

dP_a = (4πVm³ / Zh³) e^{-βmu²/2} u² du        ...(5.64)

The normalization condition gives ∫ dP_a = 1.

That is, (4πVm³ / Zh³) ∫_0^∞ e^{-βmu²/2} u² du = 1

Since ∫_0^∞ e^{-βmu²/2} u² du = (√π/4)(2/βm)^{3/2},

4πVm³ / Zh³ = (4/√π)(βm/2)^{3/2}        ...(5.65)

Hence dP = (4/√π)(βm/2)^{3/2} e^{-βmu²/2} u² du        ...(5.66)

This is the Maxwell distribution. Comparing this with Eq. (3.23), we see that the two relations differ only in the constants appearing in them and will be identical if

β = 2am = 2m/(2mkT) = 1/kT        ...(5.67)

This relation can also be verified by using Eq. (5.66) to calculate the mean kinetic energy, as

⟨(1/2)mu²⟩ = ∫_0^∞ (1/2)mu² (4/√π)(βm/2)^{3/2} e^{-βmu²/2} u² du

= (2m/√π)(βm/2)^{3/2} ∫_0^∞ u⁴ e^{-βmu²/2} du

= (2m/√π)(βm/2)^{3/2} (3√π/8)(2/βm)^{5/2} = 3/(2β)        ...(5.68)

Using the result proved later

⟨(1/2)mu²⟩ = (3/2)kT        ...(5.69)

we have

β = 1/kT        ...(5.70)

Hence, the Maxwell distribution is

dP(u) = (4/√π)(m/2kT)^{3/2} e^{-mu²/2kT} u² du        ...(5.71)

which is the same as Eq. (3.23).
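As a numerical illustration (not part of the original text), the sketch below integrates Eq. (5.71) to check its normalization and the mean kinetic energy (3/2)kT; the molecular mass used, roughly that of N2, is an assumed value.

import math

# Numerical check of Eq. (5.71): normalization and <mu^2/2> = (3/2)kT.
k, T, m = 1.38e-23, 300.0, 4.65e-26     # m is an illustrative molecular mass (kg)

def dP(u):                               # probability density of the speed u
    a = m/(2*k*T)
    return 4/math.sqrt(math.pi) * a**1.5 * math.exp(-a*u*u) * u*u

du, umax = 1.0, 5000.0
us = [i*du for i in range(1, int(umax/du))]
norm = sum(dP(u)*du for u in us)
mean_ke = sum(0.5*m*u*u*dP(u)*du for u in us)
print("normalization:", norm)                       # ≈ 1
print("<mu^2/2>     :", mean_ke, " vs (3/2)kT =", 1.5*k*T)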
5.8.2 Boltzmann Distribution
If the gas is in an external potential field, the particle energy E_a is given by

E_a = (1/2)mu² + U(x, y, z)        ...(5.72)

where U(x, y, z) is the potential energy in the external field. Therefore

dP(x, y, z, p_x, p_y, p_z) = e^{-β[(1/2)mu² + U]} dx dy dz dp_x dp_y dp_z / ∫ e^{-β[(1/2)mu² + U]} dx dy dz dp_x dp_y dp_z        ...(5.73)

Since the probability densities of the coordinates and momenta of a particle are independent,

dP(x, y, z, p_x, p_y, p_z) = dP_1(x, y, z) dP_2(p_x, p_y, p_z)

where

dP_1(x, y, z) = C_1 e^{-βU(x, y, z)} dx dy dz        ...(5.74)

and dP_2(p_x, p_y, p_z) = C_2 e^{-βmu²/2} dp_x dp_y dp_z        ...(5.75)


Here C_1, C_2 are constants to be found from the normalization condition. The second of these equations was discussed in the preceding section and leads to the Maxwell distribution. We will now show that the first leads to the Boltzmann distribution.
Equation (5.74) gives the probability that a molecule is located in the volume element dx dy dz at the point (x, y, z). If N is the number of molecules in the system, the number of molecules in the volume element dx dy dz is

dn(x, y, z) = C_1 N e^{-βU(x, y, z)} dx dy dz        ...(5.76)

Therefore, the concentration at the point (x, y, z) is

n_0(x, y, z) = dn/(dx dy dz) = C_1 N e^{-βU(x, y, z)}        ...(5.77)

Suppose, at a given point (x_0, y_0, z_0), the concentration is n_0(x_0, y_0, z_0) and the potential energy is U(x_0, y_0, z_0). Therefore

n_0(x_0, y_0, z_0) = C_1 N e^{-βU(x_0, y_0, z_0)}        ...(5.78)

From Eqs. (5.77) and (5.78) we get

n_0(x, y, z)/n_0(x_0, y_0, z_0) = e^{-β[U(x, y, z) - U(x_0, y_0, z_0)]}        ...(5.79)

i.e. n_0(x, y, z) = n_0(x_0, y_0, z_0) e^{-ΔU/kT}        ...(5.80)


This is Boltzmann distribution.
Example 5.3 Calculate the average potential energy of a molecule of an ideal gas in thermal equilibrium at absolute temperature T, contained in a cubical box of side L, the only external field acting on the gas being the earth's uniform gravitational field.
Let m be the mass of a molecule. Hence, its potential energy at height z from the bottom of the box is U = mgz. Therefore

⟨U⟩ = ∫_0^L mgz e^{-βmgz} dz / ∫_0^L e^{-βmgz} dz

Numerator = mg [ (1/β²m²g²)(1 - e^{-βmgL}) - (L/βmg) e^{-βmgL} ]

Denominator = (1/βmg)(1 - e^{-βmgL})

⟨U⟩ = 1/β - mgL e^{-βmgL}/(1 - e^{-βmgL}) = kT - mgL/(e^{mgL/kT} - 1)

5.9 BAROMETRIC FORMULA
As an application o f the Boltzmann distribution, we shall now show how
it can be used to obtain a formula describing the variation o f the atmo­
spheric pressure with altitude. We assume that the atmosphere is in
equilibrium and its temperature is constant, which, incidentally, is only
an approximate assumption.
If m is the mass of a molecule and h is the height at which the concentration is to be found, the potential energy of the molecule at this height is

ΔU = mgh        ...(5.81)

The concentration at this altitude is

n(h) = n(0) e^{-βΔU} = n(0) e^{-mgh/kT}        ...(5.82)

At constant temperature, the pressure is proportional to the concentration. Hence, we can write Eq. (5.82) in the form

p(h) = p(0) e^{-mgh/kT}        ...(5.83)

where p(h) is the pressure at altitude h and p(0) the pressure at h = 0.
This is known as Barometric formula.
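A short sketch (added for illustration, with an assumed molecular mass roughly that of nitrogen and an assumed isothermal atmosphere) shows how Eq. (5.83) is used in practice.

import math

# Pressure ratio p(h)/p(0) from the barometric formula, Eq. (5.83).
k, T, g = 1.38e-23, 300.0, 9.8
m = 4.65e-26                      # kg, roughly the mass of an N2 molecule (assumed)

def pressure_ratio(h):
    return math.exp(-m*g*h/(k*T))

for h in (0, 1000, 5000, 8750):
    print(f"h = {h:5d} m   p(h)/p(0) = {pressure_ratio(h):.3f}")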

5.10 EXPERIMENTAL VERIFICATION OF THE BOLTZMANN DISTRIBUTION
Take a vessel filled with a liquid in which a powder of very fine particles, whose density is slightly higher than that of the liquid, is suspended. Under the force of gravity the particles will be distributed non-uniformly in height, in accordance with Eq. (5.82). The ratio of the number of particles at height h to that at the bottom is

n(h)/n(0) = e^{-mgh/kT}        ...(5.84)

where mg is the weight of the particle in the liquid. If ρ is the density of the particle, r its radius and ρ_0 the density of the liquid,

mg = (4/3)πr³(ρ - ρ_0)g        ...(5.85)

where account is taken of the buoyant force acting on the particle.

Therefore n(h)/n(0) = e^{-(4/3)πr³(ρ - ρ_0)gh/kT}        ...(5.86)

The number o f particles at various heights can be measured with the


help o f a microscope. One can verify whether the concentrations thus
measured satisfy Boltzmann distribution.
Experiments o f this type were carried out by J Perrin, the results of
which confirmed Boltzmann distribution.
The relation (5.86) can be expressed as

k = (4/3)πr³(ρ - ρ_0)gh / [T ln(n(0)/n(h))]        ...(5.87)
Perrin used this relation to obtain the value of k.
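The following sketch shows how Eq. (5.87) turns a sedimentation measurement into a value of k. All the input numbers (particle radius, densities, temperature, height step and the measured ratio n(0)/n(h)) are invented purely for illustration and are not Perrin's actual data.

import math

# Hypothetical Perrin-type estimate of k from Eq. (5.87).
r = 0.21e-6                   # particle radius (m), hypothetical
rho, rho0 = 1.21e3, 1.00e3    # particle and liquid densities (kg/m^3), hypothetical
T, g, h = 277.0, 9.8, 30e-6   # temperature (K), gravity, height step (m), hypothetical
ratio = 2.1                   # hypothetical measured n(0)/n(h)

m_eff_g = (4.0/3.0)*math.pi*r**3*(rho - rho0)*g   # buoyancy-corrected weight
k_est = m_eff_g*h/(T*math.log(ratio))
print("estimated k =", k_est, "J/K")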

■s". X'. 1
MIXTURE OF GASES '•iw. .... 1
We have derived Eqs. (5.82) and (5.83) for the variation o f concentration
and pressure for a single type of particles. W e often come across a situ­
ation in which two or more types o f particles are present. For example,
Earth’s atmosphere is mainly a mixture o f two gases: nitrogen and oxy­
gen. We shall discuss in this section the variation o f concentration in the
mixture o f two gases.
Suppose a cylindrical vessel with base area A and height h_0 contains two types of molecules. Let their total numbers be N_1 and N_2 and the mass of a molecule of each type be m_1 and m_2 respectively. Since the probability that a molecule is located at a particular point is independent of the positions of all the other molecules, we can take the distribution of each type

of molecules as

n_1(h) = A_1 n_1(0) e^{-m_1gh/kT}

n_2(h) = A_2 n_2(0) e^{-m_2gh/kT}        ...(5.88)

where n_1(h), n_2(h) are the concentrations at height h and A_1, A_2 are constants.
From the normalization condition

N_1 = A_1 ∫_0^{h_0} n_1(0) e^{-m_1gh/kT} dh

i.e. N_1 = A_1 n_1(0) (kT/m_1g) {1 - e^{-m_1gh_0/kT}}

or A_1 = N_1 m_1 g / [n_1(0) kT {1 - e^{-m_1gh_0/kT}}]        ...(5.89)

Therefore n_1(h) = N_1 m_1 g e^{-m_1gh/kT} / [kT {1 - e^{-m_1gh_0/kT}}]        ...(5.90)

and similarly n_2(h) = N_2 m_2 g e^{-m_2gh/kT} / [kT {1 - e^{-m_2gh_0/kT}}]        ...(5.91)

These formulae show that the concentration of the heavier molecules decreases more rapidly with height.
The ratio of the concentrations is

n_2(h)/n_1(h) = [N_2 m_2 {1 - e^{-m_1gh_0/kT}} / N_1 m_1 {1 - e^{-m_2gh_0/kT}}] e^{-(m_2 - m_1)gh/kT}

Example 5.4 The Earth's atmosphere at sea level consists of 78% nitrogen and 22% oxygen by volume (the presence of other components is neglected). Find the percentage content of these gases at an altitude of 8.75 km, assuming the temperature of the atmosphere to be 300 K.
The concentrations of the two gases are

n_{O2}(h) = n_{O2}(0) e^{-m_{O2}gh/kT}

n_{N2}(h) = n_{N2}(0) e^{-m_{N2}gh/kT}

If N is the concentration of the mixture at sea level,

n_{O2}(0) = 0.22 N ;  n_{N2}(0) = 0.78 N

The percentage of oxygen content at height h is

= 100 × 0.22 N e^{-m_{O2}gh/kT} / [0.22 N e^{-m_{O2}gh/kT} + 0.78 N e^{-m_{N2}gh/kT}]

= 100 × 0.22 / [0.22 + 0.78 e^{-(m_{N2} - m_{O2})gh/kT}]

≈ 22 [1 - 0.78 (m_{O2} - m_{N2})gh/kT]

m_{O2} = 53.44 × 10^-27 kg ;  m_{N2} = 47.76 × 10^-27 kg

Substituting these values, we get the oxygen content as about 19.8 per cent. Hence the nitrogen content is about 80.2 per cent.
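The composition formula of Example 5.4 is easy to evaluate directly; the sketch below (added for illustration) uses the molecular masses quoted in the example and gives an oxygen fraction of roughly 20%, close to the value quoted in the text, the small difference coming from the rounding of the masses and the linearized approximation used there.

import math

# Numerical evaluation of Example 5.4: oxygen fraction at 8.75 km, T = 300 K.
k, T, g, h = 1.38e-23, 300.0, 9.8, 8750.0
m_o2, m_n2 = 53.44e-27, 47.76e-27          # kg, as quoted in the example

num = 0.22*math.exp(-m_o2*g*h/(k*T))
den = num + 0.78*math.exp(-m_n2*g*h/(k*T))
oxygen_percent = 100*num/den
print("oxygen  :", round(oxygen_percent, 1), "%")
print("nitrogen:", round(100 - oxygen_percent, 1), "%")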
While closing this chapter, we have to draw the attention of the reader to the fact that the parameter α has not yet been identified. This will be done in the next chapter.

PROBLEMS
5.1 Evaluate the partition function at temperature T for a classical one-dimensional oscillator. Hence find the mean energy of the oscillator.
5.2 A small system has just two states of energy E_1 = 0 and E_2 = 10^-22 J. Find the probabilities p_1 and p_2 for the system to be in states 1 and 2 respectively, when the mean energy ⟨E⟩ of the system is (i) 0.2 E_2, (ii) 0.5 E_2. Assuming the Boltzmann distribution, calculate the temperature in the two cases.
5.3 A rectangular box o f maximum volume is to be made out o f a
cardboard o f fixed area A. Using the method o f Lagrange mul­
tipliers, show that the box with maximum volume is a cube.
5.4 Show that the specific heat of an ideal gas in the extreme relativistic case is twice its value for a non-relativistic monatomic gas.
5.5 The number density of magnetic ions in a paramagnetic salt is 10^24/m³. Each ion has a magnetic moment of one Bohr magneton, which can be either parallel or anti-parallel to the external field. Calculate (i) the difference between the numbers of ions in these two states, (ii) the mean magnetic energy, if the magnetic field applied is 1 T and the temperature 3 K.
5.6 An infinite column o f a classical gas consisting of N identical
atoms each of mass m is at thermal equilibrium in a uniform
gravitational field. Find the mean energy per atom.
5.7 A cubical box of side L contains an ideal gas of N molecules, each
o f mass m, in thermal equilibrium at absolute temperature T.
The only external influence acting on the system is the Earth’s

uniform gravitational field. Calculate the average kinetic energy


o f a molecule.
5.8 Assuming that the atmosphere is all at one temperature T, find
the mean height of a gas molecule in the atmosphere.

5.9 Find the height at which the atmospheric pressure is th


that of the sea level. Assume that the atmosphere is at a con­
stant temperature 300 K.
5.10 Gamboge gum particles were added to a vessel containing water. The density of the gum particles is 1.21 × 10³ kg/m³ and the volume of each grain is 1.03 × 10^-19 m³. Find the height at which the concentration of the grain distribution decreases to half its value at the bottom of the vessel, the measurements being carried out at 4°C.
5.11 The following data on the number of Gamboge particles, each of mass 9.8 × 10^-18 kg and density 1350 kg/m³, in water was obtained in an experiment on gravitational sedimentation equilibrium:
Height (μm):                 0     25    50    75    100
Mean number of particles:    200   175   150   130   110
Calculate the temperature of the water.
□ □ □
SOME APPLICATIONS OF STATISTICAL MECHANICS

In this chapter we will show how the methods of statistical mechanics are applied to some physical situations.

6.1 ROTATING BODIES

Consider a body rotating with an angular velocity ω about a fixed axis. Let us find the canonical distribution in the rotating system of coordinates.
Let v = velocity of the constituent particles relative to a fixed coordinate system,
v′ = velocity of the particles relative to the coordinate system rotating with the body,
E = energy of the body relative to the fixed coordinate system,
E′ = energy of the body relative to the coordinate system rotating with the body.
If r is the position vector,

v = v′ + ω × r        ...(6.1)

where ω is the angular velocity vector.
The Lagrangian for the body is

L = Σ (1/2)mv² - U, where U is the potential energy

= Σ (1/2)m(v′ + ω × r)² - U

= Σ (1/2)mv′² + Σ m v′·(ω × r) + Σ (1/2)m(ω × r)² - U        ...(6.2)

The linear momentum p′, relative to the coordinate system rotating with the body, is given by

p′ = ∂L/∂v′ = m(v′ + ω × r) = mv = p        ...(6.3)

87

where p is the linear momentum relative to the fixed coordinate system.
We know from mechanics that

E′ = Σ p′·v′ - L        ...(6.4)

Substituting from Eqs. (6.1) and (6.2),

E′ = Σ (1/2)mv′² - Σ (1/2)m(ω × r)² + U        ...(6.5)

Hence, the density distribution for a canonical system is

ρ_E = C e^{-βE′}

= C e^{-β[Σ p′²/2m + U - Σ (1/2)m(ω × r)²]}        ...(6.6)

Therefore ρ_E = C e^{-βE} e^{β Σ m(ω × r)²/2}        ...(6.7)

because p′ = p and E = Σ p²/2m + U.
Thus, the difference between the distribution function of rotating bodies and that of stationary bodies is the appearance of the additional term m(ω × r)²/2 in the Boltzmann factor of the former. This indicates that the rotation is equivalent to the appearance of an external field corresponding to the centrifugal forces.
Example 6.1 A cylinder of height l and radius R is filled with a gas consisting of N molecules each of mass m. The cylinder rotates with an angular velocity ω about an axis perpendicular to the base and passing through its centre. Determine the density of the gas.
The term to be added to the energy in this case is obviously -(1/2)mω²r², if r is the distance of the molecule from the axis.
The density, i.e. the number of molecules per unit volume n(r), is therefore

n(r) = A e^{βmω²r²/2}        ...(6.8)

where A is a constant.
The number of molecules in a small volume element dV = 2πrl dr is equal to

A e^{βmω²r²/2} 2πrl dr

Since N is the total number of molecules,

N = A ∫_0^R e^{βmω²r²/2} 2πrl dr

Putting x = βmω²r²/2,

N = (2πAl/βmω²) ∫_0^{βmω²R²/2} e^x dx

= (2πAl/βmω²)(e^{βmω²R²/2} - 1)

Therefore A = Nβmω² / [2πl(e^{βmω²R²/2} - 1)]        ...(6.9)

and n(r) = Nβmω² e^{βmω²r²/2} / [2πl(e^{βmω²R²/2} - 1)]        ...(6.10)
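The result of Example 6.1 can be sanity-checked numerically: integrating n(r) over the cylinder must return the total number of molecules N. The sketch below does this with invented parameter values.

import math

# Check of Eq. (6.10): ∫ n(r) 2πrl dr over the cylinder should equal N.
k, T = 1.38e-23, 300.0
m, omega = 4.65e-26, 2000.0      # molecular mass (kg) and angular velocity (rad/s), illustrative
N, l, R = 1.0e22, 0.2, 0.1       # number of molecules, height (m), radius (m), illustrative
beta = 1.0/(k*T)
a = beta*m*omega*omega/2.0

def n(r):                        # Eq. (6.10)
    return N*beta*m*omega**2*math.exp(a*r*r)/(2*math.pi*l*(math.exp(a*R*R) - 1.0))

steps = 100000
dr = R/steps
total = sum(n((i - 0.5)*dr)*2*math.pi*((i - 0.5)*dr)*l*dr for i in range(1, steps + 1))
print("integral of n(r) dV =", total, " (should be close to N =", N, ")")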

6.2 PROBABILITY DISTRIBUTION OF ANGULAR MOMENTA AND ANGULAR VELOCITIES OF MOLECULES
Let ω_1, ω_2, ω_3 be the components of the angular velocity along the principal axes of a molecule and I_1, I_2, I_3 be its principal moments of inertia. The components of the angular momentum, therefore, are

M_1 = I_1ω_1,  M_2 = I_2ω_2,  M_3 = I_3ω_3

and the kinetic energy of rotation (regarding the molecule as a single body) is

E_rot = (1/2)(I_1ω_1² + I_2ω_2² + I_3ω_3²)

= (1/2)(M_1²/I_1 + M_2²/I_2 + M_3²/I_3)        ...(6.11)

The probability distribution for the angular momentum is

dW_M = A e^{-(β/2)(M_1²/I_1 + M_2²/I_2 + M_3²/I_3)} dM_1 dM_2 dM_3        ...(6.12)

The condition of normalization gives

A ∫ e^{-(β/2)(M_1²/I_1 + M_2²/I_2 + M_3²/I_3)} dM_1 dM_2 dM_3 = 1

i.e. A ∫ e^{-βM_1²/2I_1} dM_1 ∫ e^{-βM_2²/2I_2} dM_2 ∫ e^{-βM_3²/2I_3} dM_3 = 1

i.e. A √(2πI_1/β) √(2πI_2/β) √(2πI_3/β) = 1

or A = (2πkT)^{-3/2} I_1^{-1/2} I_2^{-1/2} I_3^{-1/2}        ...(6.13)

and hence

dW_M = (2πkT)^{-3/2} (I_1I_2I_3)^{-1/2} e^{-(M_1²/I_1 + M_2²/I_2 + M_3²/I_3)/2kT} dM_1 dM_2 dM_3        ...(6.14)

and the probability distribution for the angular velocity is

dW_ω = (2πkT)^{-3/2} (I_1I_2I_3)^{1/2} e^{-(I_1ω_1² + I_2ω_2² + I_3ω_3²)/2kT} dω_1 dω_2 dω_3        ...(6.15)
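Equation (6.14) says that each M_i is Gaussian with variance I_i kT, so the mean rotational energy should be (3/2)kT, half kT per axis. The brief sampling sketch below (added here for illustration; the moments of inertia are invented values) confirms this.

import numpy as np

# Sample (M1, M2, M3) from the Gaussian distribution of Eq. (6.14) and check <E_rot>.
rng = np.random.default_rng(0)
k, T = 1.38e-23, 300.0
I = np.array([1.0e-46, 2.0e-46, 3.0e-46])      # principal moments (kg m^2), illustrative

M = rng.normal(0.0, np.sqrt(I*k*T), size=(200000, 3))
E_rot = 0.5*np.sum(M**2/I, axis=1)
print("<E_rot> =", E_rot.mean(), " vs (3/2)kT =", 1.5*k*T)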

6.3 THERMODYNAMICS
Thermodynamics aims at studying phenomenologically the properties of
material bodies characterized by macroscopic parameters. We know that
matter in the aggregate can exist in states which are stable and do not
change in time. The mechanical properties characterizing these equilib­
rium states, however, change with temperature.

6.3.1 Reversible and Irreversible Process


It is possible for a system to change from one equilibrium state to an­
other. These transitions can take place in two different ways:
(i) Infinitely slowly: After an infinitely small change in the parameters, the next change is not made until all macroscopic parameters attain constant values, i.e. the system attains an equilibrium state. A system is said to be in equilibrium when the statistical distributions of the observables, averaged over their fluctuations, are constant. After this, the next change is made, adopting the same procedure. The entire process thus consists of a sequence of equilibrium states and is called a quasistatic equilibrium process.
(ii) Abruptly: The changes are brought about so abruptly that the
parameters may not assume constant values over the entire
system. For example, if we suddenly change the volume o f a gas,
the pressure and temperature will not be constant over the entire
volume and it will be meaningless to characterize the system in
terms o f pressure and temperature. These are known as non­
equilibrium processes.
The transition of a system from one state to another can be brought
about either (a) reversibly or (b) irreversibly.
In a reversible change, the system remains throughout infinitesimally
close to the thermodynamic equilibrium; i.e. the change is carried out

quasi-statically. Such changes are always reversible and the system can
be brought back to the original thermodynamic state by reversing the
process. In an irreversible change the system cannot be brought to the
original state without causing a change in the surroundings.

6.3.2 The Laws of Thermodynamics


Phenomenological thermodynamics is based on certain laws. These laws are merely reasonable hypotheses formulated as a result of practical experience. They have been applied to a large number of phenomena and have been proved to be correct. They have, for example, been applied to matter in extreme physical conditions, such as matter in bulk at nuclear densities inside neutron stars and in the early stages of the hot Big Bang model of the universe.
(i) Zeroth law of thermodynamics. The law states that if two systems A and B are in thermal equilibrium with a third system C, then A and B must also be in thermal equilibrium with each other. This law forms the basis of thermometry.
(ii) First law of thermodynamics. This is the law of conservation of energy. A system containing a large number of particles will always have in its equilibrium state an internal energy U due to the movements of the particles constituting the system and their interactions. If an amount of heat δQ is supplied to the system, the change in the energy is given by

δQ = dU + δW        ...(6.16)

where δW is the work done by the system. By convention, the work done by the system is taken to be positive, while the work done on the system is taken to be negative.
Suppose a gas is contained in a cylindrical vessel with a piston (Fig. 6.1) of cross-section A. The force acting on the piston due to the mean pressure ⟨p⟩ is ⟨p⟩A. The work done in displacing the piston through a distance dx is

Fig. 6.1

⟨p⟩A dx = ⟨p⟩dV. Hence, we can write Eq. (6.16) as

δQ = dU + ⟨p⟩dV        ...(6.17)

The relation (6.16) is the mathematical expression of the first law of thermodynamics.
Note that we have used the symbols δQ, δW for the heat absorbed and the work done, but dU for the change in internal energy. This distinction between the differentials is to show which of them refer to functions of the state and which do not. dU is an exact differential of a function of the state. In the same way dp, dV, dT are also exact differentials. But δQ, δW are not, because one can bring about the same change dU in the system with different combinations of δQ and δW.
(iii) Second law of thermodynamics. This law is expressed in several ways, one of them being:
'No process is possible whose sole aim is the transfer of heat from a colder to a hotter body.'
There is yet another way in which this law can be expressed. From the first law we have

δQ = dU + pdV

Dividing both sides by T,

δQ/T = dU/T + (p/T) dV        ...(6.18)

The specific heat C_V is a function of the state and is given by

C_V = (∂U/∂T)_V  or  dU = C_V dT

From the equation of state of the gas

pV = RT  i.e.  p = RT/V

where R is the gas constant.
Using these relations, Eq. (6.18) can be written as

δQ/T = C_V dT/T + R dV/V = d(C_V ln T + R ln V)        ...(6.19)

Since the right-hand side is an exact differential, the left-hand side must also be an exact differential. The function of the state whose differential is δQ/T is called the entropy of the system and is denoted by S. Thus

dS = δQ/T        ...(6.20)

This is a quantity of which we have no intuitive knowledge. Note that this relation enables us to measure the change in entropy caused by a transition from one equilibrium state to another. It does not give the value of the entropy in each state.
The second law of thermodynamics states that when a system goes from one state to another by absorbing heat slowly, quasi-statically, at constant temperature, its entropy increases.

(iv) Third law of thermodynamics. The entropy S has a limiting value S_0 as the temperature tends to zero.
S_0 is independent of the nature of the system and its numerical value is not fixed at 0 K, but it is convenient to take it to be equal to zero. The third law of thermodynamics is also known as the Nernst heat theorem and is best developed and interpreted using statistical mechanics.

6.4 STATISTICAL INTERPRETATION OF THE BASIC THERMODYNAMIC VARIABLES
We will now take up the discussion of the basic thermodynamic variables;
pressure P, entropy S, etc., and show how statistical analysis results in
relatively simple and explicit relationships for all these quantities which
ultimately lead to thermodynamic laws.

6.4.1 Energy and Work


The energy o f a system usually depends not only upon the generalized
coordinates and momenta, but also upon the parameters such as the
volume o f the system, or the applied magnetic or electric field, etc., which
are known to affect the equation o f motion o f the system. These param­
eters are termed as external parameters of the system.
Suppose the system is acted upon by an external parameter x. A change dx in this parameter will cause a change dE_s in the energy of the system in the state s, given by

dE_s = (∂E_s/∂x) dx        ...(6.21)

The mean work done by the system as a result of such changes is

⟨dW⟩ = Σ_s P(E_s)(-dE_s) = Σ_s (e^{-βE_s}/Z)(-∂E_s/∂x) dx

= (1/βZ)(∂Z/∂x) dx

= (1/β) ∂(ln Z)/∂x dx        ...(6.22)
where we have assumed canonical distribution. Thus, knowing the par­
tition function and the temperature, we can find the work done by the
system.

6.4.2 Pressure
In the case of an ideal gas, in order to reduce the volume occupied by the gas, a certain amount of work must be done on the gas to overcome the forces of pressure. The work done in displacing the piston (Fig. 6.1) through dx is

⟨dW⟩ = ⟨p⟩A dx = ⟨p⟩dV        ...(6.23)

In this case the external parameter is V. Replacing x by V in Eq. (6.22), the mean work done is

⟨dW⟩ = (1/β) ∂(ln Z)/∂V dV        ...(6.24)

Comparing Eqs. (6.23) and (6.24), we get

⟨p⟩ = (1/β) ∂(ln Z)/∂V = kT ∂(ln Z)/∂V        ...(6.25)
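For an ideal gas the volume-dependent part of the partition function is Z ∝ V^N, so Eq. (6.25) gives ⟨p⟩ = NkT/V directly. The sketch below (an added illustration with arbitrary values) makes this concrete by taking the V-derivative of ln Z numerically.

import math

# Eq. (6.25) for an ideal gas: only the V-dependent part ln Z = N ln V matters.
k, T, N, V = 1.38e-23, 300.0, 1.0e22, 1.0e-3    # SI units, illustrative

def lnZ(vol):
    return N*math.log(vol)

h = 1e-9
p = k*T*(lnZ(V + h) - lnZ(V - h))/(2*h)
print("pressure from ln Z :", p, "Pa")
print("NkT/V              :", N*k*T/V, "Pa")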

6.4.3 Entropy
Entropy is one of the most important physical quantities in thermodynamics. The thermodynamic definition of entropy, however, does not enable us to understand why this state function plays such a major role in thermodynamics. Its statistical interpretation is much more specific.
The partition function Z is a function of β and E_s. The energy E_s, in its turn, is a function of the external parameter x.

Therefore d(ln Z) = ∂(ln Z)/∂β dβ + ∂(ln Z)/∂x dx

= -⟨E⟩dβ + β⟨dW⟩

where we have used Eqs. (5.32) and (6.22),

or d(ln Z) = -d(β⟨E⟩) + βd⟨E⟩ + β⟨dW⟩

Therefore d(ln Z + β⟨E⟩) = β(d⟨E⟩ + ⟨dW⟩)        ...(6.26)

Now

δQ = d⟨E⟩ + ⟨dW⟩

Therefore d(ln Z + β⟨E⟩) = βδQ

i.e. d[k(ln Z + β⟨E⟩)] = δQ/T = dS        ...(6.27)

Therefore S = k(ln Z + β⟨E⟩)        ...(6.28)

This is the expression for the entropy S in terms of the partition function Z and the mean energy ⟨E⟩.
Entropy S can also be expressed in other forms.

We know that the most probable state is more probable than any other state. The only contribution which is important in Z, therefore, is that due to the most probable energy, which in turn is the mean energy. Hence, we can write for Z the approximation

Z ≈ e^{-β⟨E⟩} Ω(⟨E⟩)        ...(6.29)

where Ω(⟨E⟩) is the number of states with mean energy ⟨E⟩. Using this in Eq. (6.28),

S = k{-β⟨E⟩ + ln Ω(⟨E⟩) + β⟨E⟩}

= k ln Ω(⟨E⟩)        ...(6.30)
Thus the entropy S of a system is determined by the logarithm of the number of microstates through which a given macrostate is realized.
Although Eq. (6.30) has been derived for an ideal gas, it is valid for all systems. It also provides a clear interpretation of entropy. The more ordered a system, the lower is the number of microstates through which a macrostate is realized and, hence, the lower is its entropy. That is, if all the particles in a system are fixed at definite positions, there is only one accessible microstate and the entropy, as given by Eq. (6.30), is zero since Ω(⟨E⟩) = 1. With an increasing number of microstates the entropy increases. It must also be noted that with an increasing number of accessible microstates, the system becomes more and more disordered. The entropy of a system, therefore, is a measure of its disorder or of the chaotic state of the system.
The equilibrium state of a system is the most probable state corre­
sponding to a given condition and is attained through the largest number
of microscopic states. The entropy of the system, therefore, attains its
maximum value in this state. A system always tends to move towards the
equilibrium state; that is, in the direction in which entropy increases. This
can as well be taken as the statement of the second law of thermodynam­
ics.
It must be mentioned here that the definition of entropy in terms of the partition function Z and the mean energy ⟨E⟩ is more convenient than the one given in Eq. (6.30). Since Z involves an unrestricted sum over all states, its computation is relatively simpler than counting the states Ω(E_s) within the range between E and E + dE.
Example 6.2 Show that if one erg of heat is added to a mole of copper which is initially at absolute zero, the temperature of the copper rises to 0.0169 K and the increase in entropy is 1.18 × 10^-5 J K^-1. Find also the number of microstates accessible to the copper. The molar heat capacity of copper is 7 × 10^-4 T J K^-1, where T is the absolute temperature, and the volume of the copper is kept constant.

(1) δQ = (dU/dT) dT = 7 × 10^-4 T dT

Therefore ∫δQ = 7 × 10^-4 T²/2 = 10^-7 J

T = (2 × 10^-7 / 7 × 10^-4)^{1/2} = 0.0169 K

(2) ΔS = ∫ C_V d(ln T) = ∫ 7 × 10^-4 dT

= 7 × 10^-4 × 0.0169 = 1.18 × 10^-5 J K^-1

(3) S = k ln Ω

Therefore ln Ω = S/k = 1.18 × 10^-5 / 1.38 × 10^-23 = 8.55 × 10^17

Ω = e^{8.55 × 10^17}
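The three numbers of Example 6.2 follow from the heat capacity C_V = aT with a = 7 × 10^-4 J K^-2; the short Python restatement below (added for illustration) reproduces them.

import math

# Numerical restatement of Example 6.2 (copper, C_V = a*T per mole).
k = 1.38e-23
Q = 1.0e-7                         # one erg in joules
a = 7.0e-4                         # coefficient in C_V = a*T

T = math.sqrt(2*Q/a)               # from ∫ C_V dT = a*T^2/2 = Q
dS = a*T                           # from ∫ (C_V/T) dT = a*T
lnOmega = dS/k
print("final temperature :", T, "K")
print("entropy increase  :", dS, "J/K")
print("ln Ω              :", lnOmega)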

6.4.4 Change in Entropy with the Change in Volume
In the light of the arguments given above, one can easily understand why the entropy increases with increasing volume. With increasing volume, the number of positions available for a fixed number of particles increases. Thus, more distributions or more accessible states are possible and, hence, the entropy increases, as can be seen from Eq. (6.30).
Mathematically, this can be shown as follows:
From Eq. (6.19) we see that, in an isothermal process (i.e. T = constant), any change in entropy is due to the change in volume V:

dS = δQ/T = d(C_V ln T + R ln V) = R d(ln V)        ...(6.31)

when T is constant.
If the volume changes from V_1 to V_2,

∫dS = R ∫ d(ln V)

i.e. S_2 - S_1 = R(ln V_2 - ln V_1) = R ln(V_2/V_1)        ...(6.32)
Example 6.3 Show how the work done by an ideal gas during isothermal expansion can be expressed in terms of the change in entropy.

W = ∫_{V_1}^{V_2} p dV = RT ∫_{V_1}^{V_2} dV/V = RT ln(V_2/V_1)

= T(S_2 - S_1)        ...(6.33)

6.4.5 Change in Entropy with the Change in Temperature
If the volume is constant, i.e. dV = 0 (isochoric process), we have from Eq. (6.19)

S_2 - S_1 = ΔS = ∫ C_V d(ln T) = C_V ln(T_2/T_1)        ...(6.34)

This relation shows that the entropy increases with temperature, as one should expect. With the increase in temperature the mean energy increases, resulting in an increase in the number of energy states.
Example 6.4 Two vessels of volumes V₁ and V₂ contain molecules of
the same gas at temperatures T₁ and T₂, respectively, their pressures
being the same. The two vessels are then connected with each other. The
gases mix with each other and attain a state of equilibrium. Find the
change in entropy during the process.

The final temperature and total volume of the mixture are (T₁ + T₂)/2 and
V₁ + V₂, respectively. Since the pressure is the same throughout, each mole
of gas finally occupies the volume (V₁ + V₂)/2. The changes in the entropy
of the two gases are

ΔS₁ = {C_V ln[(T₁ + T₂)/2] + R ln[(V₁ + V₂)/2]} − (C_V ln T₁ + R ln V₁)
    = C_V ln[(T₁ + T₂)/(2T₁)] + R ln[(V₁ + V₂)/(2V₁)]

ΔS₂ = C_V ln[(T₁ + T₂)/(2T₂)] + R ln[(V₁ + V₂)/(2V₂)]

Therefore

ΔS = ΔS₁ + ΔS₂ = C_V ln[(T₁ + T₂)²/(4T₁T₂)] + R ln[(V₁ + V₂)²/(4V₁V₂)]

Since the pressures are equal, V₁ + V₂ = R(T₁ + T₂)/p and V₁V₂ = R²T₁T₂/p², so that

ΔS = C_V ln[(T₁ + T₂)²/(4T₁T₂)] + (C_P − C_V) ln[(T₁ + T₂)²/(4T₁T₂)]
   = C_P ln[(T₁ + T₂)²/(4T₁T₂)]
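The result of Example 6.4 can be verified numerically. The sketch below (illustrative values, monatomic C_V = 3R/2 assumed) computes ΔS₁ and ΔS₂ separately and compares their sum with C_P ln[(T₁ + T₂)²/4T₁T₂].

```python
import math

R = 8.314
Cv = 1.5 * R            # monatomic ideal gas (illustrative choice)
Cp = Cv + R
p = 1.0e5               # common pressure, Pa
T1, T2 = 300.0, 500.0
Tf = 0.5 * (T1 + T2)

# molar volumes before and after (pressure unchanged throughout)
v1, v2 = R * T1 / p, R * T2 / p
vf = R * Tf / p         # = (v1 + v2) / 2

dS1 = Cv * math.log(Tf / T1) + R * math.log(vf / v1)
dS2 = Cv * math.log(Tf / T2) + R * math.log(vf / v2)
dS_formula = Cp * math.log((T1 + T2) ** 2 / (4 * T1 * T2))

print(f"dS1 + dS2  = {dS1 + dS2:.5f} J/(mol K)")
print(f"Cp ln(...) = {dS_formula:.5f} J/(mol K)")   # agrees with the sum
```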

6.4.6 Entropy for Adiabatic Processes


From Eq. (6.19) we obtain

S₂ − S₁ = C_V ln(T₂/T₁) + R ln(V₂/V₁)   ...(6.35)

For adiabatic processes we have

T V^{γ−1} = constant   ...(6.36)

Therefore  T₂/T₁ = (V₁/V₂)^{γ−1}  and  ln(T₂/T₁) = (γ − 1) ln(V₁/V₂)

Hence

S₂ − S₁ = C_V (γ − 1) ln(V₁/V₂) + (C_P − C_V) ln(V₂/V₁)
        = (C_P − C_V) [ln(V₁/V₂) + ln(V₂/V₁)] = 0   ...(6.37)
Thus, in an adiabatic process the entropy does not change. This can
be easily understood. In an adiabatic expansion the increase in volume
tends to increase the entropy, but the accompanying fall in temperature
tends to decrease it. The two tendencies exactly compensate each other.

6.4.7 Entropy in Terms of Probability


We know that the probability that a system is in a state s, with energy
E_s, is given by

P_s = e^{−βE_s}/Z

Therefore  E_s = −(1/β) ln(Z P_s)

Now  ⟨E⟩ = Σ_s P_s E_s

i.e.  β⟨E⟩ = β Σ_s P_s E_s = −Σ_s P_s ln(Z P_s)
           = −Σ_s P_s ln Z − Σ_s P_s ln P_s
           = −ln Z − Σ_s P_s ln P_s   (because Σ_s P_s = 1)

Therefore

S = k (ln Z + β⟨E⟩) = −k Σ_s P_s ln P_s   ...(6.38)
S

This shows that the entropy is connected with the fact that a system
is not in a definite state, but is spread over a large number o f states
according to some probability distribution.
This relation leads to an interesting conclusion which, in turn, shows
that the relation is consistent with the Nernst heat theorem or the Third
law of thermodynamics. At T = 0 K, the system will be in its ground state.
If this state is unique, the system will be found in this state with
certainty, i.e. P_s = 1 and, therefore, S will be equal to zero, which is
essentially the content of the third law.
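Relation (6.38) is easy to check for a small model system. The sketch below (a hypothetical four-level system, not an example from the text) builds the canonical probabilities P_s = e^{−βE_s}/Z and verifies that −k Σ P_s ln P_s equals k(ln Z + β⟨E⟩), and that S → 0 as T → 0 when the ground state is unique.

```python
import math

k = 1.380649e-23                              # J/K
levels = [0.0, 1.0e-21, 2.0e-21, 3.0e-21]     # hypothetical energies E_s (J)

def entropy(T):
    beta = 1.0 / (k * T)
    weights = [math.exp(-beta * E) for E in levels]
    Z = sum(weights)
    P = [w / Z for w in weights]
    E_mean = sum(p * E for p, E in zip(P, levels))
    S_prob = -k * sum(p * math.log(p) for p in P if p > 0)   # Eq. (6.38)
    S_thermo = k * (math.log(Z) + beta * E_mean)             # S = k(ln Z + beta<E>)
    return S_prob, S_thermo

for T in (0.01, 1.0, 100.0):
    S1, S2 = entropy(T)
    print(f"T = {T:7.2f} K   -k sum P lnP = {S1:.3e}   k(lnZ + beta<E>) = {S2:.3e}")
# At the lowest temperature only the ground state is occupied and S approaches zero.
```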

6.4.8 Additive Property of Entropy


The additive property of entropy can easily be illustrated when it is
expressed in the form of Eq. (6.38).
Imagine a composite system A⁰ consisting of the two systems A and
A′. Suppose that the probability of finding A in a state r is P_r^(1) and that
of finding A′ in a state s is P_s^(2). Their respective entropies, therefore, will
be

S₁ = −k Σ_r P_r^(1) ln P_r^(1)   and   S₂ = −k Σ_s P_s^(2) ln P_s^(2)

Let us represent the probability of finding the composite system A⁰ in
the state in which A is in state r and A′ in state s by the symbol P_rs. If
A and A′ are weakly interacting, P_rs = P_r^(1) P_s^(2).
Now the entropy of the composite system, using relation (6.38), will be

S = −k Σ_r Σ_s P_rs ln P_rs
  = −k Σ_r Σ_s P_r^(1) P_s^(2) ln (P_r^(1) P_s^(2))
  = −k Σ_r Σ_s P_r^(1) P_s^(2) (ln P_r^(1) + ln P_s^(2))
  = −k Σ_s P_s^(2) Σ_r P_r^(1) ln P_r^(1) − k Σ_r P_r^(1) Σ_s P_s^(2) ln P_s^(2)
  = −k Σ_r P_r^(1) ln P_r^(1) − k Σ_s P_s^(2) ln P_s^(2)
    [because Σ_r P_r^(1) = Σ_s P_s^(2) = 1]
  = S₁ + S₂   ...(6.39)
The relation (6.38) is sometimes taken as the starting point in the
treatment of statistical problems, particularly in the study of information
theory.

6.4.9 Helmholtz Free Energy


It is possible to store energy in a thermodynamic system by doing work
on it through a reversible process, and eventually it can be retrieved in
the form of work. The energy which can be stored and is retrievable is
called free energy. There are different forms of free energy in a thermo-
dynamic system, e.g. (i) internal energy U; (ii) Helmholtz free energy F;
(iii) Gibbs free energy G; and (iv) enthalpy H.
From the first law of thermodynamics we have

δQ = d⟨U⟩ + δW

or  δW = p dV = −d⟨U⟩ + δQ = −d⟨U⟩ + T dS   ...(6.40)

In an isothermal process T is constant


Therefore  δW = −d{⟨U⟩ − TS} = −dF

The quantity

F = ⟨U⟩ − TS   ...(6.41)

is known as the Helmholtz free energy. In isothermal processes it plays the
role of potential energy. Its variation, with the opposite sign, is equal to
the work performed.
We can express the Helmholtz free energy in terms of the partition function:

F = ⟨U⟩ − TS = ⟨U⟩ − Tk(ln Z + β⟨U⟩) = −kT ln Z   ...(6.42)
In the same manner, using the results obtained so far, we can express
the other thermodynamic functions, also known as thermodynamic
potentials, in terms of Z. These are

Enthalpy  H = U + pV   ...(6.43)
and Gibbs free energy  G = U − TS + pV   ...(6.44)
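Equation (6.42), F = ⟨U⟩ − TS = −kT ln Z, is easily verified for a small discrete system. The sketch below (a hypothetical two-level system with illustrative energies, not from the text) computes both sides independently.

```python
import math

k = 1.380649e-23
E0, E1 = 0.0, 2.0e-21        # hypothetical level energies (J)
T = 50.0
beta = 1.0 / (k * T)

Z = math.exp(-beta * E0) + math.exp(-beta * E1)
P = [math.exp(-beta * E) / Z for E in (E0, E1)]

U = sum(p * E for p, E in zip(P, (E0, E1)))   # mean energy <U>
S = k * (math.log(Z) + beta * U)              # entropy, S = k(ln Z + beta<U>)
F_def = U - T * S                             # Helmholtz free energy, Eq. (6.41)
F_Z = -k * T * math.log(Z)                    # Eq. (6.42)

print(f"U - TS   = {F_def:.6e} J")
print(f"-kT ln Z = {F_Z:.6e} J")   # the two expressions coincide
```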

6.5 PHYSICAL INTERPRETATION OF α


We have already identified β with temperature. How do we interpret α?
If δQ is the heat absorbed by a system and δW is the work done by
it, the resulting increase in the energy of the system is given by

dU = δQ − δW   ...(6.45)

If the only work done is that due to a change in volume,

dU = δQ − p dV = T dS − p dV   ...(6.46)

However, when the number of particles in the system is not constant,
we have also to take into account the change in energy contributed when
particles are added to the system.
Let μ be the change in energy of the system when a single particle is
added to it. μ is called the chemical potential of the system. Then

dU = T dS − p dV + μ dN   ...(6.47)

where dN is the change in the number of particles. If the system
contains more than one type of particle, Eq. (6.47) takes the form

dU = T dS − p dV + Σ_i μ_i dN_i   ...(6.48)
From Eq. (6.47) we get

dS = (1/T) dU + (p/T) dV − (μ/T) dN   ...(6.49)

Therefore

(∂S/∂U)_{V,N} = 1/T,  (∂S/∂V)_{U,N} = p/T,  (∂S/∂N)_{U,V} = −μ/T   ...(6.50)

On the other hand, since

S = k ln Ω(U, N)   ...(6.51)

dS = k (∂ ln Ω/∂U) dU + k (∂ ln Ω/∂N) dN   ...(6.52)

Therefore

(∂S/∂U)_{V,N} = k (∂ ln Ω/∂U) = kβ = 1/T   ...(6.53)

and

(∂S/∂N)_{U,V} = k (∂ ln Ω/∂N) = kα = −μ/T   ...(6.54)

Therefore

α = −μ/kT = −βμ   ...(6.55)

Thus, α is related to the chemical potential by the relation (6.55). The
fugacity, therefore, is given by

z = e^{−α} = e^{μ/kT}   ...(6.56)

6.6 CHEMICAL POTENTIAL IN THE EQUILIBRIUM STATE


Consider two systems A and A′ with N₁ and N₂ particles and chemical
potentials μ₁ and μ₂, respectively. Suppose that the temperature and pres-
sure of the systems are the same and that they are in contact, being
separated by a permeable partition (Fig. 6.2).

Fig. 6.2  Two systems A (T, p, N₁, μ₁) and A′ (T, p, N₂, μ₂) separated by a permeable partition
The free energy is given by
F = U — TS
Therefore
dF = dU - TdS - S dT ...(6.57)
Equation (6.48) gives
dU = T dS − p dV + Σ_i μ_i dN_i

Therefore,  dF = −S dT − p dV + Σ_i μ_i dN_i   ...(6.58)

If T and V are maintained constant,

dF = Σ_i μ_i dN_i   ...(6.59)

Therefore, in the example under consideration,

dF = μ₁ dN₁ + μ₂ dN₂   ...(6.60)

where dN₁ and dN₂ are the changes in the number of particles on the two
sides of the partition. If the composite system is closed, N₁ + N₂ is
constant.

Therefore  dN₁ + dN₂ = 0,  or  dN₁ = −dN₂

Hence  dF = (μ₁ − μ₂) dN₁   ...(6.61)

If the composite system is in the equilibrium state,

∂F/∂N₁ = ∂F/∂N₂ = 0   ...(6.62)

Therefore  μ₁ − μ₂ = 0;  μ₁ = μ₂   ...(6.63)
We, therefore, conclude that if a system is to be in an equilibrium
state, temperature, pressure and chemical potential must be constant
throughout the system.

6.7 THERMODYNAMIC FUNCTIONS IN TERMS OF THE GRAND PARTITION FUNCTION
(i) Entropy. We write the entropy in the form

S = −k Σ P_N ln P_N   ...(6.64)

Now,  P_N = e^{−β(U_N − Nμ)} / Z   ...(6.65)

where Z is the grand partition function. Therefore

S = −k Σ P_N ln [e^{−β(U_N − Nμ)} / Z]
  = −k Σ P_N [−βU_N + βNμ − ln Z]
  = kβ Σ P_N U_N − kβμ Σ P_N N + k Σ P_N ln Z
  = kβ⟨U⟩ − kβμ⟨N⟩ + k ln Z   ...(6.66)
The other relations defining thermodynamic potentials take the
following forms in respect of the grand canonical ensemble.
(ii) Internal energy U at constant temperature and pressure:

U = TS − pV + μ⟨N⟩

Therefore  dU = T dS − p dV + μ d⟨N⟩   ...(6.67)

(iii) Helmholtz free energy F: This is useful for systems which are
closed and thermally coupled to the outside world, but are
mechanically isolated. F is obtained from U by adding a term
due to the thermal coupling:

F = U − TS = μ⟨N⟩ − pV   ...(6.68)
(iv) Enthalpy H: This is a useful potential for systems which are
thermally isolated and closed but mechanically coupled to the
outside world. H is obtained by adding an additional energy, due
to the mechanical coupling, to the internal energy U:

H = U + pV = μ⟨N⟩ + TS   ...(6.69)

(v) Gibbs free energy G: This is obtained by adding terms due to both
thermal and mechanical couplings to the internal energy U:

G = U − TS + pV = μ⟨N⟩   ...(6.70)

The above relations have been obtained for a p, V, T system. They
could be generalized by introducing a generalized force Y (in place of p)
and a generalized displacement X (in place of V). The relation (6.67) would
then be replaced by

dU = T dS + Y dX + μ d⟨N⟩   ...(6.71)

We shall derive in this section various thermodynamic properties of a
classical ideal gas. As stated in the preceding section, an ideal gas is
one in which mutual interactions between molecules are negligible, i.e.
potential energy of interaction U = 0. This is a good approximation,
particularly in the case of rarefied gases.
The total energy of an ideal gas consisting of N molecules, each of
mass m, is

E = Σ_{i=1}^{N} p_i²/2m   ...(6.72)

We have seen that almost all the macroscopic properties of the sys­
tems can be calculated, once we have evaluated the partition function Z
= Σ_s e^{−βE_s}. The summation here is over all the discrete states s. This
implies that we are tacitly assuming the states to be discrete. In other
words, we are dealing with quantized systems. For a classical system,
where the states form a continuous spectrum, the sums must be replaced
by integrals. How can one make a transition from summation to integra­
tion?
We have seen in Ch. 4, how the notion of phase space has been useful
in specifying a state of a classical system. A cell in phase space in clas­
sical discussion is analogous to a state in quantum mechanical discussion.
Therefore, for a classical system, a sum over states must be replaced by
an integral over phase space.
Consider the phase space divided into cells, each of volume h^f, where f
is the number of degrees of freedom. Let dΓ = dq₁...dq_f dp₁...dp_f be a
volume element in the phase space at the point (q₁ ... q_f, p₁ ... p_f), the
energy throughout it being the same. The number of cells in this volume
element is obviously dq₁...dq_f dp₁...dp_f / h^f.
To find the partition function, we first sum over these cells and then
integrate over all such elements of volume. Thus

Z = ∫...∫ e^{−βE} (dq₁...dq_f dp₁...dp_f) / h^f   ...(6.73)
In the case of the ideal gas under consideration, if N is the number
of molecules, the q_i are their position vectors and f = 3N, so that

Z = (1/h^{3N}) ∫...∫ e^{−β(p₁²/2m + p₂²/2m + ... + p_N²/2m)} dq₁...dq_{3N} dp₁...dp_{3N}   ...(6.74)

Now

∫...∫ dq₁...dq_{3N} = ∫d³q₁ ∫d³q₂ ... ∫d³q_N = V^N   ...(6.75)

where V is the volume of the container.
Therefore

Z = (V^N/h^{3N}) ∫ e^{−βp₁²/2m} d³p₁ ∫ e^{−βp₂²/2m} d³p₂ ... ∫ e^{−βp_N²/2m} d³p_N
  = (V^N/h^{3N}) [∫ e^{−βp²/2m} d³p]^N   ...(6.76)

The integral inside the brackets is

∫ e^{−βp²/2m} d³p = ∫ e^{−βp_x²/2m} dp_x ∫ e^{−βp_y²/2m} dp_y ∫ e^{−βp_z²/2m} dp_z
                  = [∫_{−∞}^{∞} e^{−βp²/2m} dp]³ = (2mπ/β)^{3/2}   ...(6.77)

Therefore

Z = (V^N/h^{3N}) (2mπ/β)^{3N/2} = [(2πmkT/h²)^{3/2} V]^N   ...(6.78)
The various thermodynamic quantities can be derived from this. Thus

ln Z = N [ln V + (3/2) ln(2πm/h²) − (3/2) ln β]   ...(6.79)

Therefore

(a)  ⟨p⟩ = (1/β) ∂ln Z/∂V = N/(βV) = NkT/V   ...(6.80)

or  ⟨p⟩ V = NkT   ...(6.81)

which is the equation of state of the gas.

(b)  ⟨E⟩ = −∂ln Z/∂β = (3/2) NkT   ...(6.82)

(c)  C_V = (∂⟨E⟩/∂T)_V = (3/2) Nk   ...(6.83)

(d)  F = −kT ln Z = −kTN [ln V + (3/2) ln(2πm/h²) − (3/2) ln β]
       = −NkT ln [(2πmkT/h²)^{3/2} V]   ...(6.84)

(e)  S = k [ln Z + β⟨E⟩]
       = Nk [ln V + (3/2) ln(2πm/h²) − (3/2) ln β + 3/2]
       = Nk [ln V + (3/2) ln T + (3/2) ln(2πmk/h²) + 3/2]
       = Nk [ln V + (3/2) ln T + σ]   ...(6.85)

where

σ = (3/2) ln(2πmk/h²) + 3/2   ...(6.86)

Note that because h is arbitrary, the value of the constant σ in the
expression (6.85) is indeterminable and, hence, the absolute value of
entropy eludes us.
The thermodynamic functions given above are derived on the as­
sumption that the molecules of the gas are distinguishable. However, it
is easy to show that this assumption leads to results in contradiction with
experience. We will show in the next section how the computation of
entropy of a classical ideal gas from Eq. (6.85), leads to a paradoxical
result for the case when two equal volumes of the same gas are mixed
together.

6.9 GIBBS PARADOX


Suppose there are two different ideal gases, one composed of N₁
molecules and the other of N₂ molecules. Let V₁ and V₂ be the volumes
the two gases occupy. Suppose, further, that these gases are separated by
a membrane as shown in Fig. 6.3 and that they are at the same temperature
and pressure.
If the membrane is now removed, the two gases will mix; but there
will be no change in the energies of the two systems so long as the
temperature remains the same. Let us now calculate the resulting change
in entropy using Eq. (6.85).
The entropies of the two systems before mixing are

S₁ = N₁k ln V₁ + (3/2) N₁k [ln(2πm₁kT/h²) + 1]   ...(6.87)

and  S₂ = N₂k ln V₂ + (3/2) N₂k [ln(2πm₂kT/h²) + 1]   ...(6.88)

Fig. 6.3  Two different gases (N₁, V₁) and (N₂, V₂) at the same T and p; after the membrane
is removed they occupy V = V₁ + V₂ with N = N₁ + N₂
The entropy of the composite system before mixing is given by

S_i = S₁ + S₂

After the partition is removed and diffusion is completed, the gases
are mixed and occupy a volume V₁ + V₂, the temperature and pressure
of the mixture remaining the same. The change in the entropy of each
component in the process of mutual diffusion, as obtained from Eqs. (6.87)
and (6.88), is

ΔS₁ = N₁k ln(V₁ + V₂) − N₁k ln V₁ = N₁k ln[(V₁ + V₂)/V₁]   ...(6.89)

and  ΔS₂ = N₂k ln[(V₁ + V₂)/V₂]   ...(6.90)

Hence, the total change in the entropy of the system is

ΔS = ΔS₁ + ΔS₂ = k {N₁ ln[(V₁ + V₂)/V₁] + N₂ ln[(V₁ + V₂)/V₂]}   ...(6.91)

which is positive and is known as the entropy of mixing.


This result is experimentally correct: when the membrane is removed,
the molecules of one gas diffuse into the region which was previously
occupied by the other gas alone. The diffusion process is irreversible, for
by reinserting the membrane into the system one would obtain two samples
of the mixture and not the two gases that were originally present. On
mixing, therefore, the entropy of the combined system must increase.
However, a paradoxical situation arises when we consider the mixing
of two samples of the same gas.
Suppose, two equal volumes of the same gas are at the same tempera­
ture and pressure and contain the same number of molecules (Fig. 6.4).
Fig. 6.4  Two equal volumes of the same gas (V, N each) at the same T and p; after mixing,
the gas occupies 2V with 2N molecules

Since in this case V₁ = V₂ = V and N₁ = N₂ = N,

ΔS = 2Nk ln 2   ...(6.92)
This indicates that the removal of a partition between the equal vol­
umes of the same gas causes an increase in the total entropy; which
means that the entropy is not an extensive quantity. Further, one can
imagine that the present state of the gas is arrived at by removing a
number of membranes that initially divided the container into any
number of compartments. Since there will be increase in entropy with
every membrane removed, the entropy could be longer than any number.
This result in Eq. (6.92) is unacceptable because the mixing of the two
samples considered above is a reversible process. We can insert the
membrane back to its position in the system and obtain a situation which
is in no way different from the one we had before mixing. The total
entropy of the combined system, therefore, should not change. This para­
dox is known as Gibbs paradox since it was discussed first by Gibbs.
The paradox can be resolved only within the framework of quantum
mechanics. The molecules of an ideal gas are indistinguishable. There­
fore, a permutation of the N particles in a given state cannot alter the
state of the gas. Thus, if we consider two configurations which differ from
one another merely in the interchange of two molecules in different
energy states, in view of the indistinguishability of the molecules, the
configurations are not distinct; while according to our classical mode of
counting we would regard them as two distinct configurations.
In the treatment of the classical ideal gas given in the preceding
section we have, therefore, 'over counted' the states by assuming the
molecules to be distinguishable, so that a permutation of the N molecules
would lead to N! different states. In 'correct' counting there would be
fewer states. The 'correct' translational partition function for the ideal
gas, therefore, is

Z = (V^N / N! h^{3N}) (2mπ/β)^{3N/2}   ...(6.93)

and not the one given by Eq. (6.78).
Therefore

ln Z = N ln [(2mπ/h²β)^{3/2} V] − ln N!
     = N [ln V + (3/2) ln(2mπ/h²β)] − N ln N + N   (by Stirling's formula)

i.e.  ln Z = N [ln(V/N) + (3/2) ln(2mπ/h²β) + 1]   ...(6.94)

Therefore

S = k [ln Z + β⟨E⟩]
  = Nk [ln(V/N) + (3/2) ln(2mπ/h²β) + 1 + 3/2]   (because ⟨E⟩ = (3/2) NkT)
  = Nk [ln(V/N) + (3/2) ln(2mπ/h²β) + 5/2]   ...(6.95)
This equation is generally referred to as the Sackur-Tetrode equation.
One can easily check that the entropy thus defined is a well-behaved
extensive quantity.
From Eq. (6.95) the entropy of the two equal volumes of the same gas
(Fig. 6.4) before mixing is

S₁ + S₂ = 2Nk [ln(V/N) + (3/2) ln(2mπ/h²β) + 5/2]   ...(6.96)

The entropy of the mixture as obtained from the same formula is the
same as that given by Eq. (6.96), since V and N are here both doubled.

Therefore  ΔS = 0

The paradox, thus, is resolved.
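The resolution can also be illustrated numerically. The sketch below (helium near NTP, illustrative numbers only) evaluates the Sackur-Tetrode entropy, Eq. (6.95), and shows that doubling both V and N exactly doubles S, whereas the 'distinguishable' expression (6.85) picks up the spurious extra 2Nk ln 2.

```python
import math

k = 1.380649e-23      # J/K
h = 6.62607e-34       # J s
m = 6.65e-27          # mass of a helium atom, kg (illustrative)
T = 273.0
N = 6.022e23
V = 22.4e-3           # m^3, roughly one mole at NTP

def S_sackur_tetrode(N, V, T):
    x = 2.0 * math.pi * m * k * T / h**2
    return N * k * (math.log(V / N) + 1.5 * math.log(x) + 2.5)   # Eq. (6.95)

def S_distinguishable(N, V, T):
    x = 2.0 * math.pi * m * k * T / h**2
    return N * k * (math.log(V) + 1.5 * math.log(x) + 1.5)       # Eq. (6.85)

S1, S2 = S_sackur_tetrode(N, V, T), S_sackur_tetrode(2 * N, 2 * V, T)
print(f"corrected:       S(2V,2N) - 2 S(V,N) = {S2 - 2 * S1:.3e} J/K")   # ~0

D1, D2 = S_distinguishable(N, V, T), S_distinguishable(2 * N, 2 * V, T)
print(f"distinguishable: S(2V,2N) - 2 S(V,N) = {D2 - 2 * D1:.3e} J/K")
print(f"2 N k ln 2                           = {2 * N * k * math.log(2):.3e} J/K")
```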
It may be noted that the formula (6.95) for entropy does not affect the
equation of state or the thermodynamic functions ⟨p⟩, ⟨E⟩ of a system,
as the term subtracted, viz. ln N!, is independent of T and V. Other
functions which come out correctly from the modified expression for en­
tropy are:
(1) The free energy F of a system is

F = −kT ln Z = −kTN [ln(V/N) + (3/2) ln(2mπ/h²β) + 1]
  = −kTN ln [(2mπ/h²β)^{3/2} (Ve/N)]

where e is the base of the Napierian logarithm; while the Gibbs free energy
is

(2)  G = E − TS + pV = F + pV = −kTN ln [(2mπ/h²β)^{3/2} (V/N)]   ...(6.97)

(3) The chemical potential μ is given by μN = G.

Therefore  μ = −kT ln [(2mπ/h²β)^{3/2} (V/N)]   ...(6.98)

6.10 THE EQUIPARTITION THEOREM


In the statistical treatment of a system of molecules, the mean energy of a
molecule is of great importance. The treatment of mean energies becomes
particularly simple when we can express the energy of a molecule as a
sum of terms, one or more of which depend quadratically on a single
variable out of the total set of variables q₁ ... q_f, p₁ ... p_f, where f is the
number of degrees of freedom.
Let the energy of a molecule of the system, due to its motion in the q_i
direction, be quadratic in the momentum p_i, i.e. ε_i = α p_i². Let its total
energy be

E = α p_i² + E′   ...(6.99)

where E′ does not depend upon p_i and the first term involves only the
single variable p_i. If the system is in thermal equilibrium, the mean value
of ε_i is given by

⟨ε_i⟩ = ∫ ε_i e^{−βE(q₁...p_f)} dq₁...dp_f / ∫ e^{−βE(q₁...p_f)} dq₁...dp_f   ...(6.100)

Splitting E into two parts, i.e. E = α p_i² + E′, we can write

⟨ε_i⟩ = ∫ α p_i² e^{−βα p_i²} dp_i ∫ e^{−βE′} dq₁...dp_f / [∫ e^{−βα p_i²} dp_i ∫ e^{−βE′} dq₁...dp_f]   ...(6.101)
The last integrals in the numerator and denominator are (f − 1)-fold
integrals extending over all q's and p's except p_i, and they are equal.
Therefore

⟨ε_i⟩ = ∫_{−∞}^{∞} α p_i² e^{−βα p_i²} dp_i / ∫_{−∞}^{∞} e^{−βα p_i²} dp_i   ...(6.102)

Substituting

βα p_i² = y²,  i.e.  √(βα) p_i = y,

we get

⟨ε_i⟩ = (1/β) ∫_{−∞}^{∞} y² e^{−y²} dy / ∫_{−∞}^{∞} e^{−y²} dy

Using the standard formula of integration (see Appendix B), we get

⟨ε_i⟩ = (1/β) (√π/2)/√π = 1/(2β) = (1/2) kT   ...(6.103)

If we had taken the energy in the q_i direction to be b q_i², we would
have obtained a similar result. Thus, if a system is in thermal equilibrium,
each term in the Hamiltonian that depends quadratically on a
momentum or a coordinate contributes a mean energy (1/2) kT. This is
known as the principle of equipartition of energy.
It must be emphasized here that this is not a general principle of
statistical mechanics. It is a consequence of the assumption that the
energy has a quadratic dependence on the variables involved. The general
equipartition principle has been derived by Tolman in the form

⟨q_i ∂E/∂q_i⟩ = ⟨p_i ∂E/∂p_i⟩ = kT   ...(6.104)

One can easily check that when ε depends quadratically on a variable,
Eq. (6.104) reduces to Eq. (6.103).
Examples
(i) Mean kinetic energy of a molecule. The kinetic energy of translation
is

E = (1/2m)(p_x² + p_y² + p_z²)

Therefore  ⟨E⟩ = 3 × (1/2) kT = (3/2) kT   ...(6.105)

(ii) Three-dimensional harmonic oscillator. The Hamiltonian of
the oscillator is

H = Σ_{i=1}^{3} p_i²/2m + (1/2) Σ_{i=1}^{3} κ q_i²

Therefore  ⟨E⟩ = 6 × (1/2) kT = 3kT   ...(6.106)
(iii) Specific heat of solids. Consider a gramme atom of a solid
containing N_A (Avogadro's number) non-interacting atoms harmonically
bound to centres of force. The Hamiltonian of each atom is

H = Σ_{i=1}^{3} [p_i²/2m + (1/2) κ q_i²]

The total energy, therefore, is the same as that of 3N_A independent
one-dimensional oscillators.

Therefore  E = N_A × 3kT = 3N_A kT = 3RT   ...(6.107)

The specific heat per gramme atom is

C_V = (∂E/∂T)_V = 3R   ...(6.108)

which is the Dulong-Petit law.
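The equipartition result ⟨αp²⟩ = ½kT can be checked by direct numerical evaluation of the Gaussian averages in Eq. (6.102). The sketch below (standard library only, reduced units with k = 1, arbitrary values of α) does this with a simple rectangle-rule integration; the answer is independent of α, as the theorem requires.

```python
import math

def mean_quadratic_energy(alpha, beta, n=20001):
    """Numerically evaluate <alpha p^2> with Boltzmann weight exp(-beta*alpha*p^2)."""
    pmax = 10.0 / math.sqrt(beta * alpha)   # integrand is negligible beyond this
    dp = 2.0 * pmax / (n - 1)
    num = den = 0.0
    for i in range(n):
        p = -pmax + i * dp
        w = math.exp(-beta * alpha * p * p)
        num += alpha * p * p * w * dp
        den += w * dp
    return num / den

# Reduced units with k = 1, so that kT/2 = T/2.
for T in (0.5, 1.0, 4.0):
    beta = 1.0 / T
    for alpha in (0.3, 1.0, 7.5):
        e = mean_quadratic_energy(alpha, beta)
        print(f"T = {T:4.1f}  alpha = {alpha:4.1f}  <alpha p^2> = {e:.4f}  kT/2 = {T/2:.4f}")
```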

6.11 THE STATISTICS OF PARAMAGNETISM


In this section we treat the problem of paramagnetism classically. Its
quantum mechanical treatment is given in Ch. 8.
In some crystals some of the atoms have a resultant magnetic dipole
moment, associated with either electron orbital angular momentum or
electron spin, or both. If such an atom is placed in a magnetic field, the
magnetic moments tend to align themselves in the direction of the field.
These atoms are paramagnetic atoms.
For simplicity let us consider a simplified model in which each
paramagnetic atom has only one unpaired electron which is in a state of
zero orbital angular momentum. Further, we neglect the interaction be­
tween the unpaired electrons. When a magnetic field is applied to the
material, the electron spin angular momentum of each unpaired electron
can have a component ħ/2 either parallel or antiparallel to the field. That
is, each unpaired electron has a magnetic moment μ equal to one Bohr
magneton pointing either parallel or antiparallel to the field. The potential
energies of the magnetic dipoles in these two states are different.

6.11.1 Potential Energy of a Magnetic Dipole in a Magnetic Field


When a magnetic field of induction B is applied, an atom which has its
magnetic moment pointing at an angle θ to the direction of the field will
experience a torque τ = μ × B. In Fig. 6.5 the magnetic dipole is pointing
at an angle (90° + θ) to the direction of the field. The numerical value of
the torque is μB sin(90° + θ) = μB cos θ. The work done in rotating the
magnetic dipole through an angle dθ is

dW = μB cos θ dθ   ...(6.109)

Fig. 6.5  A magnetic dipole at an angle (90° + θ) to the field B

The work done to rotate the magnetic dipole from θ = 0 to θ = π/2, i.e.
into the direction opposite to that of the field, is

W = ∫₀^{π/2} μB cos θ dθ = μB   ...(6.110)

The potential energy of the dipole in this position, therefore, is μB. In
the same way we can show that the potential energy of the magnetic
dipole when it points in the direction of the magnetic field is −μB.

6.11.2 Curie's Law


In an applied magnetic field each unpaired electron may be either in a
state with energy μB or in that with energy −μB (Fig. 6.6). We can apply
the Boltzmann distribution to the system by treating an unpaired electron as
a sub-system and the rest of the crystal as a heat reservoir.
The partition function of the system is

Z = e^{μB/kT} + e^{−μB/kT}   ...(6.111)

The probability that in thermal equilibrium the magnetic dipole points
parallel to the field is

P₁ = e^{μB/kT} / Z   ...(6.112)

Similarly, the probability that it points antiparallel to the field is

P₂ = e^{−μB/kT} / Z   ...(6.113)

Fig. 6.6  The two energy levels −μB and +μB of an unpaired electron in a magnetic field B

If N is the total number of unpaired electrons in states (1) and (2), the
populations of the two states are

N₁ = NP₁;  N₂ = NP₂

and  N₂/N₁ = P₂/P₁ = e^{−2μB/kT}   ...(6.114)

The mean magnetic moment parallel to the magnetic field is

⟨μ⟩ = P₁μ + P₂(−μ)
    = μ (e^{μB/kT} − e^{−μB/kT}) / (e^{μB/kT} + e^{−μB/kT})
    = μ tanh(μB/kT)   ...(6.115)

because  tanh y = (e^y − e^{−y}) / (e^y + e^{−y}).

If there are N independent non-interacting unpaired electrons in a
sample, its mean magnetic moment is

⟨m⟩ = N⟨μ⟩ = Nμ tanh(μB/kT)   ...(6.116)

If y ≫ 1, tanh y ≈ 1. Hence from Eq. (6.116) we see that if μB/kT ≫ 1
(which is the case when the temperature is very low and the magnetic field
is very high), then

⟨m⟩ = Nμ   ...(6.117)

This is easy to understand, because at low temperatures the magnetic
dipoles are in their lower energy states. Because of the high magnetic
field they point in the direction of the field, and the system thus reaches
a state of magnetic saturation.
If, on the other hand, y ≪ 1,

tanh y ≈ [(1 + y) − (1 − y)] / [(1 + y) + (1 − y)] = y

Hence, if μB/kT ≪ 1, i.e. at high temperature and low magnetic field,

tanh(μB/kT) ≈ μB/kT

and  ⟨m⟩ = Nμ²B/kT   ...(6.118)

Magnetization is defined as the magnetic moment per unit volume
and is proportional to the field applied. If m (Eq. 6.118) corresponds to a
unit volume of the material,

m ∝ B,  i.e.  m = χB   ...(6.119)

where χ is a constant known as the paramagnetic susceptibility. If χ is
negative the substance is said to be diamagnetic; if it is positive and small
the material is said to be paramagnetic.
From Eqs. (6.118) and (6.119) we see that

χ = Nμ²/kT = C/T   ...(6.120)

i.e. the susceptibility is inversely proportional to the temperature. This is
known as the Curie law of paramagnetism, and C is called the Curie
constant, given by

C = Nμ²/k   ...(6.121)
In the treatment given above, we have considered only a simplified
model in which each atom has spin 1/2. We shall now discuss the general
case of arbitrary spin.
Consider a substance in which each atom has an intrinsic magnetic
moment μ. When a magnetic field B is applied, any atom which has its
magnetic moment pointing at an angle θ to the direction of the field will
experience a torque tending to align it in the direction of the field and, as
a consequence, will have a magnetic energy

ε(θ) = −μB cos θ   ...(6.122)

Now, the number of states available for atoms with their magnetic
moment pointing at an angle θ, within an element of solid angle dω, is
proportional to dω, i.e. it is

C dω = C sin θ dθ dφ

where C is a constant.
The mean value ⟨μ⟩ of the component of the moment along the field,
therefore, is

⟨μ⟩ = ∫₀^{2π}∫₀^{π} e^{βμB cos θ} μ cos θ C sin θ dθ dφ / ∫₀^{2π}∫₀^{π} e^{βμB cos θ} C sin θ dθ dφ
    = μ ∫₀^{π} e^{βμB cos θ} cos θ sin θ dθ / ∫₀^{π} e^{βμB cos θ} sin θ dθ   ...(6.123)

Substituting βμB cos θ = y, we obtain

⟨μ⟩ = (1/βB) ∫_{−βμB}^{βμB} y e^{y} dy / ∫_{−βμB}^{βμB} e^{y} dy
    = μ [ (e^{βμB} + e^{−βμB}) / (e^{βμB} − e^{−βμB}) − 1/(βμB) ]
    = μ [coth(βμB) − 1/(βμB)]   ...(6.124)

If, now, βμB ≫ 1, which is the case when the magnetic field is strong or
the temperature low, ⟨μ⟩ ≈ μ, which implies that the induced magnetic
moment m will reach its maximum, or saturation, value Nμ, independent
of B, where N is the number of atoms per unit volume.
If, on the other hand, βμB ≪ 1 (temperature high or field low), the
expansion of the exponentials up to terms involving (βμB)³ gives the
approximation

⟨μ⟩ = (1/3) βμ²B   ...(6.125)

Therefore,  m = N⟨μ⟩ = Nμ²B/3kT = χB   ...(6.126)
and, thus
χ = Nμ²/3kT   ...(6.127)

Alternatively (following Langevin), the partition function for the
magnetic energy of the atoms is given by

Z_mag = Σ_θ e^{−βε(θ)} = Σ_θ e^{βμB cos θ}   ...(6.128)

Replacing the summation by an integration,

Z_mag = ∫₀^{π}∫₀^{2π} e^{βμB cos θ} sin θ dθ dφ
      = 2π ∫₀^{π} e^{βμB cos θ} sin θ dθ
      = (2π/βμB)(e^{βμB} − e^{−βμB}) = (4π/βμB) sinh βμB   ...(6.129)

Now

ε_mag = −N ∂ ln Z_mag/∂β = −(N/Z_mag) ∂Z_mag/∂β
      = −N [μB coth(βμB) − 1/β]
      = −NμB [coth(μB/kT) − kT/μB]   ...(6.130)

Since

ε_mag = −mB   ...(6.131)

m = −ε_mag/B = Nμ [coth(μB/kT) − kT/μB] = Nμ L(μB/kT)   ...(6.132)

where L(x) is the so-called Langevin function, given by

L(x) = coth x − 1/x   ...(6.133)

with x = βμB.
As before, if x ≫ 1, L(x) is almost equal to unity and the system
acquires a state of magnetic saturation:

m = Nμ

If x ≪ 1, L(x) may be written as

L(x) = x/3 − x³/45 + ...   ...(6.134)
\

116 Fundamentals of Statistical Mechanics

In the lowest approximation, therefore


Nyi2 n
3 kT
and the susceptibility
N].i2 C
X = ...(6.135)
3 kT T
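The two limiting forms of the Langevin function quoted above are easy to verify numerically. The sketch below (standard library only, not from the text) compares L(x) = coth x − 1/x with its small-x expansion x/3 − x³/45 and with the saturation value 1 at large x.

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, Eq. (6.133)."""
    if abs(x) < 1e-6:                  # avoid loss of precision near x = 0
        return x / 3.0 - x**3 / 45.0
    return 1.0 / math.tanh(x) - 1.0 / x

for x in (0.01, 0.1, 0.5, 1.0, 5.0, 20.0):
    approx = x / 3.0 - x**3 / 45.0     # small-x expansion, Eq. (6.134)
    print(f"x = {x:6.2f}   L(x) = {langevin(x):.5f}   x/3 - x^3/45 = {approx:.5f}")
# For x >> 1, L(x) -> 1 (saturation m = N*mu); for x << 1, L(x) ~ x/3,
# which reproduces Curie's law chi = N mu^2 / 3kT.
```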

6.12 THERMAL DISORDER IN A CRYSTAL LATTICE


In a monatomic crystal lattice a disorder may arise due to the displace­
ment o f some o f the atoms from their normal positions to interstitial
positions by the thermal vibrations of the lattice.
Suppose that there are N normal lattice points in a given volume,
there will also be N interstitial positions. If the number of interstitial
atoms is n, and the energy required to remove an atom from its normal
position to an interstitial position is ε, the energy of the lattice will be nε,
assuming its energy in the ordered ground state to be zero.
Now the total number of distinct ways in which these n atoms can be
arranged among the N interstitial positions is N!/[n!(N − n)!]. The number
of ways in which the remaining N − n atoms can be arranged among the
N normal positions is also N!/[n!(N − n)!].
Therefore, the weight of the configuration in which there are n interstitial
atoms is

W = [N! / (n!(N − n)!)]²   ...(6.136)

The resulting entropy S is

S = k ln W = 2k [ln N! − ln n! − ln(N − n)!]   ...(6.137)

Using Stirling's formula,

S = 2k [N ln N − N − n ln n + n − (N − n) ln(N − n) + N − n]
  = 2k [N ln N − n ln n − (N − n) ln(N − n)]   ...(6.138)

The free energy of the lattice is

F = E − TS
  = nε − 2kT [N ln N − n ln n − (N − n) ln(N − n)]   ...(6.139)
For the system to be in the equilibrium configuration, n must be such that
F is a minimum, i.e. ∂F/∂n = 0.
Therefore

∂F/∂n = ε − 2kT [−ln n − 1 + 1 + ln(N − n)]
      = ε − 2kT [ln(N − n) − ln n] = 0   ...(6.140)

Therefore

ln[(N − n)/n] = ε/2kT = βε/2   ...(6.141)

or  n = N / (1 + e^{βε/2})   ...(6.142)
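Equation (6.142) can be explored with a few lines of code. The sketch below (an illustrative defect energy ε = 1 eV, not a value from the text) shows how the equilibrium fraction of interstitial atoms n/N grows with temperature.

```python
import math

k_eV = 8.617e-5      # Boltzmann constant in eV/K
eps = 1.0            # energy to create one interstitial defect, eV (illustrative)

def defect_fraction(T):
    """n/N = 1 / (1 + exp(eps / 2kT)), Eq. (6.142)."""
    return 1.0 / (1.0 + math.exp(eps / (2.0 * k_eV * T)))

for T in (300, 600, 900, 1200):
    print(f"T = {T:5d} K   n/N = {defect_fraction(T):.3e}")
# At low T almost no atoms are displaced; the fraction rises steeply with temperature.
```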

6.13 NON-IDEAL GASES


We have shown in Section 6.8 how Gibbs distribution can be applied to
ideal gases. We will now show how it can be applied to real gases, in
which account is taken of the interaction between the constituent atoms
or molecules. The problem cannot be solved exactly, but even an approxi­
mate approach leads to results closer to experimental observations. Con­
sider a relatively simple physical system, a monatomic gas of N identical
atoms, each of mass m. The energy of the gas consisting of these inter­
acting atoms is given by
N p2
E= ^ 2m + U ...(6.143)
where the first term gives the kinetic energy o f the atoms and U is the
sum of the potential energies of interaction between the pairs of atoms,

TT f ^ U ::
U =»12 + u n - + “ 23 + - = 9 t j u
...(6.144)
where is potential energy of interaction of ith and jth atoms. The

factor — takes into account the fact that in the sum over i and j , the
energy o f interaction between each pair of atoms is counted twice.
The partition function Z is given by

Z = (1/h^{3N} N!) ∫...∫ e^{−(β/2m)(p₁² + p₂² + ... + p_N²)} d³p₁ d³p₂ ... d³p_N
    × ∫...∫ e^{−βU} d³q₁ d³q₂ ... d³q_N   ...(6.145)

The first integral is the same as that derived for the ideal gas and is
equal to

(2mπ/β)^{3N/2}   ...(6.146)
We will now evaluate the second integral

Z_U = ∫...∫ e^{−βU} d³q₁ ... d³q_N   ...(6.147)

If U = 0, as in the case of an ideal gas,

Z_U = V^N   ...(6.148)

where V is the volume of the container.
In the case of non-ideal gases, the mean potential energy is

⟨U⟩ = (1/2) N(N − 1) ⟨u⟩ ≈ (1/2) N² ⟨u⟩   ...(6.149)

where ⟨u⟩ is the mean interaction energy between a pair of atoms. The
interaction energy is a function of the distance r between the two atoms.
The probability that a pair of atoms is at a relative separation r is
proportional to

e^{−βu(r)} / ∫ e^{−βu(r)} d³r

Therefore

⟨u⟩ = ∫ e^{−βu(r)} u(r) d³r / ∫ e^{−βu(r)} d³r
    = −(∂/∂β) ln ∫ e^{−βu(r)} d³r   ...(6.150)
Now the integral

∫ e^{−βu(r)} d³r = ∫ d³r + ∫ (e^{−βu(r)} − 1) d³r = V + I(β)   ...(6.151)

where V is the volume of the container and

I(β) = ∫ (e^{−βu(r)} − 1) d³r   ...(6.152)
Therefore

⟨u⟩ = −(∂/∂β) ln [V(1 + I(β)/V)]
    = −(∂/∂β) ln V − (∂/∂β) ln(1 + I(β)/V)
    = −[1/(1 + I(β)/V)] (1/V) ∂I(β)/∂β ≈ −(1/V) ∂I(β)/∂β   ...(6.153)

(because I(β) ≪ V)
Hence

⟨U⟩ = −(N²/2V) ∂I(β)/∂β   ...(6.154)

Integrating, and noting that I(0) = 0, so that the constant of integration
vanishes,

∫₀^β ⟨U⟩ dβ = −(N²/2V) I(β)   ...(6.155)

The mean potential energy ⟨U⟩ can also be found from the relation

⟨U⟩ = ∫ e^{−βU} U d³q₁ ... d³q_N / Z_U = −(∂/∂β) ln Z_U

so that  ∫₀^β ⟨U⟩ dβ = −ln Z_U + C′   ...(6.156)

For β = 0, Z_U = V^N, and therefore C′ = ln V^N = N ln V.
From Eq. (6.156),

ln Z_U = N ln V − ∫₀^β ⟨U⟩ dβ = N ln V + (N²/2V) I(β)   ...(6.157)
where we have used Eq. (6.155).
The mean pressure ⟨p⟩ is given by

⟨p⟩ = (1/β) ∂(ln Z)/∂V = N/(βV) − N²I(β)/(2βV²)   ...(6.158)

Therefore

⟨p⟩/kT = n − (1/2) n² I(β)

where n = N/V is the concentration.
The equation of state for real gases is usually written in the form

⟨p⟩/kT = n + B₂(T) n² + B₃(T) n³ + ...   ...(6.159)

where the right-hand side is a series expansion in powers of n, known
as the virial expansion. B₂(T), B₃(T), etc., are the virial coefficients,
which vanish for an ideal gas. Comparing Eqs. (6.158) and (6.159) we find
that

B₂ = −(1/2) I(β) = −(1/2) ∫ (e^{−βu(r)} − 1) d³r
   = −(1/2) ∫₀^∞ (e^{−βu(r)} − 1) 4πr² dr
   = −2π ∫₀^∞ (e^{−βu(r)} − 1) r² dr   ...(6.160)


For the evaluation of this integral we must know the exact form of the
interaction energy u(r), which is a complicated function of r. Qualitatively,
the function should have the shape shown in Fig. 6.6. At larger distances
the atoms virtually do not interact and their interaction energy may be
taken to be zero. At smaller distances, forces of mutual attraction tend to
bring the atoms closer and their interaction energy diminishes. At a
distance r₀ the energy is a minimum. At distances less than r₀, the electron
shells of the atoms begin to overlap, repulsive forces come into play and
the potential energy increases.
There are no universal expressions for u(r), suitable for all molecules.
Usually u(r) is approximated by a formula of the form

u(r) = a₁/r^n − a₂/r^m   ...(6.161)

where the constants a₁, a₂ are so adjusted as to make the function a
better approximation to a real potential. An analysis of potentials
revealed that the following expression, known as the Lennard-Jones 6-12
potential, has the shape shown in Fig. 6.6 and is the best approximation:

u(r) = u₀ [(r₀/r)^{12} − 2 (r₀/r)^6]   ...(6.162)

For the present approximate treatment, however, we shall use a still
simpler potential, viz.

u(r) = ∞                for r < r₀
u(r) = −u₀ (r₀/r)^6      for r > r₀   ...(6.163)

which has the shape shown in Fig. 6.7. Now, from Eq. (6.160),

B₂ = −2π ∫₀^∞ (e^{−βu(r)} − 1) r² dr
   = −2π [∫₀^{r₀} (e^{−βu(r)} − 1) r² dr + ∫_{r₀}^∞ (e^{−βu(r)} − 1) r² dr]   ...(6.164)

When r is small, u(r) is large and e^{−βu(r)} → 0. Therefore

B₂(T) = 2π ∫₀^{r₀} r² dr − 2π ∫_{r₀}^∞ (e^{−βu(r)} − 1) r² dr   ...(6.165)

Further, if the temperature is high, βu ≪ 1 and e^{−βu(r)} ≈ 1 − βu(r).
Hence

B₂(T) = 2π r₀³/3 + 2πβ ∫_{r₀}^∞ u(r) r² dr
      = 2π r₀³/3 − 2πβ u₀ r₀³/3 = (2π r₀³/3)(1 − u₀/kT)   ...(6.166)
where we have used Eq. (6.163).
Let

b₁ = 2π r₀³/3  and  a₁ = (2π r₀³/3) u₀ = b₁ u₀   ...(6.167)

Therefore  B₂(T) = b₁ − a₁/kT   ...(6.168)

From Eq. (6.159), neglecting higher powers,

⟨p⟩/kT = n + B₂(T) n²

i.e.  ⟨p⟩ = nkT + (b₁kT − a₁) n²

or  ⟨p⟩ + a₁n² = nkT (1 + b₁n) ≈ nkT / (1 − b₁n)

where we have assumed that b₁n ≪ 1. Hence

(⟨p⟩ + a₁n²)(1 − b₁n) = nkT

Since n = N/V = N_A/v, where N_A is Avogadro's number and v the molar
volume, we have

(⟨p⟩ + a₁N_A²/v²)(1 − N_A b₁/v) = (N_A/v) kT

i.e.  (⟨p⟩ + a/v²)(v − b) = RT   ...(6.169)

where we have put a = a₁N_A², b = N_A b₁ and R = N_A k.
Equation (6.169) is van der Waals' equation of state for real gases.
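The approximation B₂(T) ≈ b₁ − a₁/kT can be tested against a direct numerical evaluation of Eq. (6.160) for the model potential (6.163). The sketch below (reduced units with k = 1, r₀ = 1, u₀ = 1; not taken from the text) shows that the two agree increasingly well at high temperature.

```python
import math

r0, u0 = 1.0, 1.0                      # parameters of the model potential (6.163)
b1 = 2.0 * math.pi * r0**3 / 3.0
a1 = b1 * u0

def u(r):
    """Hard core for r < r0, attractive tail -u0 (r0/r)^6 for r > r0."""
    return math.inf if r < r0 else -u0 * (r0 / r) ** 6

def B2_exact(T, rmax=50.0, n=100000):
    """B2(T) = -2*pi * integral_0^inf (exp(-u(r)/kT) - 1) r^2 dr, with k = 1."""
    total, dr = 0.0, rmax / n
    for i in range(1, n + 1):
        r = i * dr
        f = (math.exp(-u(r) / T) - 1.0) if r >= r0 else -1.0
        total += f * r * r * dr
    return -2.0 * math.pi * total

for T in (2.0, 5.0, 20.0):
    print(f"T = {T:5.1f}   B2 exact = {B2_exact(T):+.4f}   b1 - a1/kT = {b1 - a1 / T:+.4f}")
```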

PROBLEMS


6.1 A cylinder of height l and base radius R contains N molecules
of an ideal gas, each of mass m. The cylinder rotates with an
angular velocity ω about its axis. Calculate the pressure of the
gas on the surface of the cylinder.
6.2 Calculate the internal energy of one litre of helium under a
pressure 9.8 x 104 Pa and temperature 100°C.
6.3 Establish the relations:
(i) E = [∂(βF)/∂β]_V
where F is the free energy.


6.4 Prove that N = kT (∂ ln Z/∂μ), where μ is the chemical potential.

6.5 Show that the entropy of a system in a grand canonical ensemble
can be written as S = −k Σ_r P_r ln P_r,
where P_r = e^{−β(E_r − μN_r)} / Σ_r e^{−β(E_r − μN_r)}.
6.6 A ferromagnetic system consists of N magnetic atoms, each with
spin 1/2. What will be its entropy at
(i) the absolute zero of temperature?
(ii) a very high temperature?
6.7 A system consisting of N spatially separated sub-systems is in
thermal equilibrium with a heat reservoir at absolute temperature
T = ε/k, where ε is the energy. If each sub-system has non-
degenerate energy levels of energy 0, ε, 2ε, 3ε, find
(i) the partition function,
(ii) the mean energy, and
(iii) the entropy of the system.
6.8 Consider a system of N one-dimensional harmonic oscillators.
Treating the oscillators classically, obtain the expressions for (i)
Helmholtz free energy, (ii) entropy and (iii) chemical potential of
the systems.
6.9 Find the entropy of a mole of helium gas at T = 273 K and
pressure equal to one atmosphere. The mass of the helium atom
is 6.65 × 10⁻²⁷ kg.

6.10 The mass of an argon atom is 6.64 x 10-26 kg. Find the chemical
potential of a mole of argon at T = 273 K and P = 1 atmosphere.
6.11 You are given two vessels of equal volume. One contains a mole
of gas A and the other a mole of gas B, the molecular weight of
A being 10 times that of B. If the gas A is at temperature 300
K. calculate the temperature o f B so that the two gases have the
same entropy.
6.12 Show that the Sackur-Tetrode equation may also be written in
the form

S = Nk [−ln p + (5/2) ln T + (3/2) ln(2πm/h²) + (5/2)(ln k + 1)]

where p is the pressure of the ideal gas.
6.13 The temperature of an ideal monatomic gas is changed from T₁
to T₂, keeping (i) its pressure constant, and (ii) its volume
constant. Show that the ratio of the change in entropy ΔS_p in case
(i) to that of ΔS_V in case (ii) is 5/3.
6.14 A thermally insulated container is divided by a partition into
two compartments, which contain identical gases at the same
temperature and with equal numbers of particles, N, but at
different pressures P₁ and P₂. The partition is now removed.
Calculate the total change in entropy.
6.15 Air is formed artificially by mixing four moles of nitrogen and
one mole of oxygen at NTP. Calculate the entropy of mixing.

□ □ □
FORMULATION OF QUANTUM STATISTICS

Although the ensemble theory developed so far is extremely general, we


have considered its application only to some classical systems in which
the components are considered distinguishable. But most of the quantum
mechanical systems comprise indistinguishable components. The
microparticles of the same species are indistinguishable from one an­
other. All electrons are identical. If two electrons interchange places, the
physical situation will not change. Hence, the rules for counting the
microstates must be different from those used in the Maxwell-Boltzmann
model. It is, therefore, necessary to treat the theory in terms of quantum
mechanical wavefunctions and operators. The behaviour of the systems,
treated in this way, generally differs from that expected from classical
treatment. However, it will be shown later, that the dissimilarity in the
behaviour gradually disappears in the limit of high temperatures and low
densities.

7.1 DENSITY MATRIX
In classical statistical mechanics we specify the state of a system by a point
in phase space and an ensemble by a cloud of such points. The mean value
of any function F(q, p) is given by

⟨F⟩ = ∫ F(q, p) ρ(q, p) dq₁...dq_f dp₁...dp_f / ∫ ρ(q, p) dq₁...dq_f dp₁...dp_f   ...(7.1)

where ρ is the density of distribution of phase points and f the number
of degrees of freedom.
Von Neumann has shown that a quantity called density matrix in
quantum statistics takes the place of the distribution function in classical
statistics. A knowledge of this matrix enables us to calculate the mean
values of any quantity, describing a property of the system as well as the
probabilities of various values of such quantities.
In quantum statistics the concept of uncertainty enters the discussion
in two entirely different ways:


(1) Consider a large number of identical systems, all in the same
quantum state. If one tries to assess the value of a dynamical variable
corresponding to an operator O, by performing the same measurement
on all such systems, then each system will respond by jumping into one
of the eigenstates of O, although it is quite impossible to predict in
advance which eigenstate it is going to be. Consequently, such a set of
measurements can only provide us with a probability distribution of the
eigenvalues of the observable O. This type of uncertainty
forms an integral part of quantum mechanics and cannot be overcome by
any conceivable refinement in the techniques of measurement. It is
independent of our subjective lack of information or ignorance.
The Schrödinger equation

H Ψ(q, t) = iħ ∂Ψ(q, t)/∂t   ...(7.2)

determines the variation of Ψ with time.
Let {φ_n(q)} be a complete set of orthonormal functions of the system.
The suffix n denotes the set of all quantum numbers which distinguish the
various stationary states. Ψ(q, t) can be expressed as a linear combination
of the φ_n(q). Thus

Ψ(q, t) = Σ_n C_n(t) φ_n(q)   ...(7.3)

where the C_n's are the probability amplitudes for the system, i.e.

C*_n(t) C_n(t) = |C_n(t)|²

is the probability that the system, at time t, is in the state represented
by φ_n. It is obvious that

Σ_n C*_n(t) C_n(t) = Σ_n |C_n(t)|² = 1   ...(7.4)
The average value o f an observable F for a system is given by

(F ) = j'¥*(q,t)F '¥(q,t)dq ...(7.5)

where F is the operator corresponding to the observable.


Using Eq. (7.3)

<F> = l'ZC*n(t)<fn(.q)F T C mm nl(q)dq


n m

~ 2) Cn(t)C,n(t)(§*n(q)Ftym(q)dq
m,n J

= I C*n{t)Cm(t)Fnm ' ...(7.6)


m,n
where  F_{nm} = ∫ φ*_n(q) F φ_m(q) dq   ...(7.7)


(2) There is another type o f uncertainty, which is widely used in
classical and quantum statistical mechanics and which expresses our lack
of accurate knowledge about a system. Imagine a system which has a
number of degrees of freedom. It is virtually impossible to carry out
experiments to measure the values of each variable and thus determine
the state of the system completely. The system then could be described
approximately by specifying the mean values and corresponding probabil­
ity distributions o f each variable.
If ⟨F⟩^(a) is the mean value of the observable for the a-th system, the
mean of this observable over the N systems a = 1, 2, ..., N is

⟨F⟩ = (1/N) Σ_{a=1}^{N} Σ_{n,m} C_n^{a*}(t) C_m^{a}(t) F_{nm}   ...(7.8)
This averaging has two-fold nature. It comprises averaging due to the
probabilistic nature o f the quantum description as well as statistical
averaging. It may be noted that the entire averaging process is to be
carried out as a single operation; its constituents cannot be separated.
Consider the a-th system, whose state can be described in terms of the
probability amplitudes C_n^{a}(t). Then

ρ_{mn} = (1/N) Σ_{a=1}^{N} C_n^{a*}(t) C_m^{a}(t)   ...(7.9)

is the mean value of the quantity C*_n(t) C_m(t) over all the systems in the
ensemble. We consider ρ_{mn} as a matrix element of a statistical operator,
viz. the density operator ρ(t). The term density refers to the fact that the
diagonal element ρ_{nn}, the ensemble average of |C_n(t)|², represents the
probability that a system is found in the state defined by the function φ_n(q).
The elements of the density matrix depend on the representation. The final
result, however, must be independent of the representation.
Let us now see how the density matrix can be used to calculate the
mean value of any observable F for the systems of an ensemble.
For a single system, the mean value of F is, by Eq. (7.6),

⟨F⟩ = Σ_{n,m} F_{nm} C*_n(t) C_m(t)

Therefore, the mean value of F for all the systems in the ensemble is

⟨F⟩ = Σ_{m,n} F_{nm} ρ_{mn}   ...(7.10)
This is the quantum mechanical analogue of the classical expression
for the mean given by Eq. (7.1). Following the rules of matrix multiplication,
we can write the foregoing equation as

⟨F⟩ = Σ_n (ρF)_{nn} = trace(ρF)   ...(7.11)

Thus the integral over all of phase space of a classical quantity is
replaced in quantum mechanics by the trace of the corresponding quantum
mechanical matrix. This formula has the advantage of enabling us
to calculate with any complete set of orthonormal wave functions.
Now if F = 1, the unit operator,

trace ρ = 1,  i.e.  Σ_n C*_n(t) C_n(t) = Σ_n |C_n(t)|² = 1   ...(7.12)

This is consistent with Eq. (7.4).
The probability that the system lies in the nth state, therefore,
corresponds to the diagonal element ρ_{nn}, and the problem of determining
the statistical distribution amounts to a calculation of the probabilities ρ_{nn}.

7.2 LIOUVILLE'S THEOREM IN QUANTUM STATISTICAL MECHANICS
Let us investigate the rate of change of the element ρ_{mn} with time. Since

ρ_{mn} = (1/N) Σ_{a=1}^{N} C_n^{a*}(t) C_m^{a}(t),

dρ_{mn}/dt = (1/N) Σ_{a=1}^{N} [Ċ_n^{a*}(t) C_m^{a}(t) + C_n^{a*}(t) Ċ_m^{a}(t)]   ...(7.13)

Now  Ψ^a(q, t) = Σ_l C_l^{a}(t) φ_l(q)

Multiplying both sides by φ*_n(q) and integrating over the entire range
of q,

∫ φ*_n(q) Ψ^a(q, t) dq = Σ_l C_l^{a}(t) ∫ φ*_n(q) φ_l(q) dq   ...(7.14)

Therefore  C_n^{a}(t) = ∫ φ*_n(q) Ψ^a(q, t) dq

and, using the Schrödinger equation (7.2),

Ċ_n^{a}(t) = (1/iħ) ∫ φ*_n(q) H Ψ^a(q, t) dq = (1/iħ) Σ_l H_{nl} C_l^{a}(t)   ...(7.15)

Ċ_n^{a*}(t) = −(1/iħ) Σ_l H*_{nl} C_l^{a*}(t)   ...(7.16)

Hence

dρ_{mn}/dt = (1/iħ)(1/N) Σ_a [Σ_l H_{ml} C_l^{a}(t) C_n^{a*}(t) − Σ_l H*_{nl} C_l^{a*}(t) C_m^{a}(t)]

Since H*_{nl} = H_{ln},

dρ_{mn}/dt = (1/iħ) [Σ_l H_{ml} ρ_{ln} − Σ_l ρ_{ml} H_{ln}]   ...(7.17)

Therefore  iħ dρ_{mn}/dt = [Hρ − ρH]_{mn}   ...(7.18)

or  iħ dρ/dt = [H, ρ]   ...(7.19)


7.3 CONDITION FOR STATISTICAL EQUILIBRIUM
If the given system is in equilibrium, the corresponding ensemble must
be stationary, i.e. dρ_{mn}/dt = 0. In analogy with classical mechanics, this
would be possible under the following conditions:
(i) when the density matrix is constant, or
(ii) when the density matrix is a function of a constant of the motion.
(i) If the density matrix is constant, its elements will be given by

ρ_{mn} = ρ₀ δ_{mn}   ...(7.20)

i.e., all the non-diagonal elements of the matrix will be equal to zero and
all the diagonal elements will be equal to a constant ρ₀.
Now in the energy representation the basis functions φ_n are the
eigenfunctions of the Hamiltonian H and, hence, the matrices H and ρ
are diagonal, i.e.

ρ_{mn} = ρ_n δ_{mn}   ...(7.21)
In this representation, the density operator ρ may be formally written
as

ρ = Σ_n |φ_n⟩ ρ_n ⟨φ_n|   ...(7.22)

To verify this, consider an element ρ_{kl}:

ρ_{kl} = ⟨φ_k|ρ|φ_l⟩ = Σ_n ⟨φ_k|φ_n⟩ ρ_n ⟨φ_n|φ_l⟩
       = Σ_n δ_{kn} ρ_n δ_{nl} = ρ_k δ_{kl}

which agrees with Eq. (7.21).
Therefore

iħ dρ_{mn}/dt = Σ_l [H_{ml} ρ_{ln} − ρ_{ml} H_{ln}]
             = Σ_l [ρ₀ H_{ml} δ_{ln} − ρ₀ δ_{ml} H_{ln}]
             = ρ₀ [H_{mn} − H_{mn}] = 0

The distribution, therefore, does not change with time and the system,
under this condition, is in equilibrium.
(ii) We see from Eq. (7.19) that if the time derivative of the statistical
matrix is to vanish, the operator ρ must commute with H. Hence ρ
must be a function of a constant of the motion.
7.4 ENSEMBLES IN QUANTUM MECHANICS


(i) The microcanonical ensemble. The systems constituting this ensemble
are characterised by a fixed number of particles N, a fixed volume V
and an energy lying within the interval E and E + δE. The total number of
states accessible to a system is, say, Ω(E). In the energy representation the
density matrix ρ will be diagonal, i.e.

ρ_{mn} = ρ₀ δ_{mn}  for the states in the energy range E to E + δE
       = 0          for all other states   ...(7.23)

The trace of ρ is equal to the number of states whose energy lies
between E and E + δE, i.e.

Trace ρ = Σ_n ρ₀ = Ω(E)   ...(7.24)

The entropy, which determines the thermodynamics of the system, is
given by

S = k ln Ω(E)   ...(7.25)

The Gibbs paradox does not arise in this case, as Ω(E) is supposed to be
computed correctly by considering the indistinguishability of the particles.
Now if Ω(E) = 1, S = 0, which is consistent with the Nernst theorem
(the Third Law of Thermodynamics), which states that 'the entropy of a
system at absolute zero is a universal constant, which may be taken to
be zero'. At absolute zero (T = 0 K) the system will be in its ground state.
If this is unique, Ω = 1 and the system is sure to be found in that state.
Complete order, therefore, will prevail and S will be zero.
If Ω = 1, every system in the ensemble has got to be in one and the
same state. We shall then say that the ensemble is in a pure state. If
Ω > 1, the complete specification of the systems is not known and the system
is said to be in a mixed state.
If the ensemble is in a pure state then, in the energy representation,
there is only one non-vanishing diagonal element ρ_{nn}, and that is equal to 1,
all other elements being zero. The matrix, therefore, satisfies the condition

ρ² = ρ   ...(7.26)

In any other representation

ρ_{mn} = (1/N) Σ_{a=1}^{N} C_n^{a*}(t) C_m^{a}(t) = C*_n(t) C_m(t)

since every system of the ensemble is in one and the same state. Therefore

(ρ²)_{mn} = Σ_l ρ_{ml} ρ_{ln} = Σ_l C*_l(t) C_m(t) C*_n(t) C_l(t)
          = C*_n(t) C_m(t) = ρ_{mn}   ...(7.27)

Therefore ρ² = ρ is a necessary requirement for a pure state.
(ii) The canonical ensemble. In this ensemble N and V are fixed and
E is a variable quantity. The most general considerations outlined for
classical statistics in Ch. 5 are also valid in quantum statistics. Thus, the
probability that a system possesses an energy E_n is given by

P_n = C e^{−βE_n}

Therefore, the density matrix in the energy representation is taken
as

ρ_{mn} = P_n δ_{mn}   ...(7.28)

Since trace(ρ) = 1,  Σ_n C e^{−βE_n} = 1.

Therefore  C = 1/Z(β)  and  P_n = e^{−βE_n}/Z(β)   ...(7.29)

where Z(β) = Σ_n e^{−βE_n} = trace(e^{−βH}) is, as before, the partition function
of the system. The summation here is over the states and not over the
energy eigenvalues. H is the Hamiltonian operator.
The density operator in this ensemble is

ρ = (1/Z(β)) Σ_n |φ_n⟩ e^{−βE_n} ⟨φ_n| = (1/Z(β)) e^{−βH}
The ensemble average of a physical quantity G is given by

⟨G⟩ = Tr(ρG) = Tr(e^{−βH} G) / Tr(e^{−βH})   ...(7.30)

The other thermodynamic variables are given by

⟨E⟩ = Trace(ρH)

F = −kT ln(Tr e^{−βH})
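The trace formula (7.30) and the relation F = −kT ln Tr(e^{−βH}) can be illustrated with a small matrix example. The sketch below (a hypothetical 2×2 Hamiltonian in arbitrary units with k = 1, numpy assumed available) builds ρ = e^{−βH}/Tr e^{−βH} and evaluates ⟨E⟩ = Tr(ρH); the result is the same in any complete basis, since the trace is basis-independent.

```python
import numpy as np

# Hypothetical two-level Hamiltonian in some orthonormal basis (arbitrary units, k = 1)
H = np.array([[0.0, 0.3],
              [0.3, 1.0]])
T = 0.7
beta = 1.0 / T

# Build e^{-beta H} by diagonalizing H (H = U diag(E) U^T)
E, U = np.linalg.eigh(H)
exp_mbH = U @ np.diag(np.exp(-beta * E)) @ U.T

Z = np.trace(exp_mbH)          # partition function Z = Tr e^{-beta H}
rho = exp_mbH / Z              # canonical density matrix, Eqs. (7.28)-(7.29)

E_mean = np.trace(rho @ H)     # <E> = Tr(rho H), Eq. (7.30)
F = -T * np.log(Z)             # F = -kT ln(Tr e^{-beta H}), with k = 1

print("Tr rho =", np.trace(rho))   # = 1, as required
print("<E>    =", E_mean)
print("F      =", F)
```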
Example 7.1 A beam of photons with various polarizations travels along
the z-axis. Determine the probability that a photon is found with polarization
along (i) the x-axis, (ii) the y-axis.
Let the state of polarization in the x-direction be represented by the
column vector |α⟩ = (1, 0)ᵀ and the state with polarization in the y-direction
by |β⟩ = (0, 1)ᵀ. The state polarized at an angle φ to the x-axis is then
given by

|Ψ⟩ = e^{iα} [cos φ |α⟩ + sin φ |β⟩] = e^{iα} (cos φ, sin φ)ᵀ

where α is an overall phase factor.
It can easily be shown that the density matrix in this pure state is

ρ = (cos φ, sin φ)ᵀ (cos φ, sin φ) = [ cos²φ         sin φ cos φ ]
                                     [ sin φ cos φ   sin²φ       ]

Therefore the probability of finding the photon polarized in the
x-direction is

⟨α|ρ|α⟩ = cos²φ

The probability for polarization in the y-direction is sin²φ.
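Example 7.1 can be reproduced directly with numpy. The sketch below builds the pure-state density matrix for a photon polarized at an angle φ to the x-axis (an illustrative angle of 30° is used, and the irrelevant overall phase is dropped), checks the pure-state condition ρ² = ρ, and reads off the probabilities cos²φ and sin²φ.

```python
import numpy as np

phi = np.deg2rad(30.0)                 # polarization angle (illustrative)
alpha = np.array([1.0, 0.0])           # |alpha>: polarization along x
beta_ = np.array([0.0, 1.0])           # |beta> : polarization along y

psi = np.cos(phi) * alpha + np.sin(phi) * beta_   # pure state (overall phase dropped)
rho = np.outer(psi, psi)                          # density matrix of the pure state

print("rho^2 == rho:", np.allclose(rho @ rho, rho))        # condition (7.26)
print("P(x) =", alpha @ rho @ alpha, "  cos^2 phi =", np.cos(phi) ** 2)
print("P(y) =", beta_ @ rho @ beta_, "  sin^2 phi =", np.sin(phi) ** 2)
```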
(iii) The grand canonical ensemble. In this ensemble both N and E
are variables. By a generalization of the preceding case, we can write the
density operator as

ρ = (1/Z(β, μ)) e^{−β(H − μn̂)}   ...(7.31)

where μ is the chemical potential, n̂ is a number operator with eigenvalues
0, 1, 2, ..., and Z is the grand partition function, given by

Z(β, μ) = Σ_r e^{−β(E_r − μN_r)} = Tr{e^{−β(H − μn̂)}}   ...(7.32)

The ensemble average of G in the grand canonical ensemble is given
by

⟨G⟩ = Tr(ρG) = Tr{e^{−β(H − μn̂)} G} / Tr{e^{−β(H − μn̂)}}   ...(7.33)
PROBLEMS


7.1 Prove, in quantum statistical mechanics, that
⟨E²⟩ − ⟨E⟩² = kT² C_V

7.2 An electron possesses an intrinsic spin (1/2)ħσ and a magnetic
moment μ_B, σ being the Pauli spin operator. Evaluate the density
matrix of an electron spin placed in a magnetic field B, in
the representation that makes σ_z diagonal.
Hence show that
⟨σ_z⟩ = tanh(βμ_B B)
7.3 Evaluate the density matrix of the electron spin in Prob. 7.2
above in the representation which makes σ_x diagonal. Further,
show that the value of ⟨σ_x⟩ resulting from this representation
is the same as that obtained in Prob. 7.2.
Hint: Carry out the transformation with the help of the unitary
operator

U = (1/√2) [  1  1 ]
            [ −1  1 ]

7.4 Derive the canonical density matrix ρ for a free particle in a
box, in the coordinate representation. Hence, obtain the
expectation value of the Hamiltonian H.

□ □ □

BOSE-EINSTEIN AND
FERMI-DIRAC
DISTRIBUTION

8.1 SYMMETRY OF WAVE FUNCTIONS
Suppose that an enclosure contains a single particle. Let us refer to this
particle as particle a. If the particle is in the nth state, it will be
characterized by a wave function ψ_n^a determined by the Schrödinger
equation

H ψ_n^a = E_n^a ψ_n^a   ...(8.1)

adapted to the particular boundary conditions of the problem. The energy
associated with it is the corresponding eigenvalue E_n^a. If, instead of the
particle a, we were dealing with a particle b in the kth state, its wave
function would be represented by ψ_k^b and its energy by E_k^b.
If both the particles are present simultaneously and they do not
interact, this situation is represented by the eigenfunction

ψ_n^a ψ_k^b   ...(8.2)

and the energy of the system is E_n^a + E_k^b.
Let us now exchange the two particles, so that a is in state k and b
in state n. The corresponding wave function is

ψ_n^b ψ_k^a   ...(8.3)

and the corresponding energy is E_k^a + E_n^b. Since the particles are
indistinguishable,

E_n^a + E_k^b = E_k^a + E_n^b   ...(8.4)

However, ψ_n^a ψ_k^b is not the same as ψ_n^b ψ_k^a. Since both situations have
the same energy but different wave functions, the system is degenerate.
The most general wave function is obtained by taking a linear combination
of these wave functions.
Two combinations are particularly simple:

(i)  ψ_n^a ψ_k^b + ψ_n^b ψ_k^a   (symmetric)   ...(8.5)

and (ii)  ψ_n^a ψ_k^b − ψ_n^b ψ_k^a,

which can be written as a determinant, as

| ψ_n^a  ψ_n^b |
| ψ_k^a  ψ_k^b |   (anti-symmetric)   ...(8.6)
If a and b are interchanged, (i) remains unchanged and for this reason
it is called symmetric wave function. The same exchange in (ii) leaves the
absolute value of the function unaffected but reverses its sign. This function
is said to be antisymmetric. It may be noted that Eq. (8.2) and (8.3) are
neither symmetric nor anti-symmetric, since, interchanging a and b, one
is transformed into the other.
Next, suppose that there are N particles a₁, a₂, ..., a_N, and that a₁ is
in state n₁, a₂ in state n₂, and so on. The wave function which corresponds
to this distribution will be

ψ_{n₁}^{a₁} ψ_{n₂}^{a₂} ... ψ_{n_N}^{a_N}   ...(8.7)
Any interchange in the distribution of the N particles among the N states
entails a change in the wave function. Since the total number of different
distributions, with one particle in each state, is N!, there will be N!
different wave functions of the type (8.7). These are neither symmetric
nor anti-symmetric.
Among the linear combinations that may be constructed, the following
two, obtained by adding the N! fundamental ones, are very important:

Σ ψ_{n₁}^{a₁} ψ_{n₂}^{a₂} ... ψ_{n_N}^{a_N}   (symmetric)   ...(8.8)

| ψ_{n₁}^{a₁}  ψ_{n₁}^{a₂}  ...  ψ_{n₁}^{a_N} |
| ψ_{n₂}^{a₁}  ψ_{n₂}^{a₂}  ...  ψ_{n₂}^{a_N} |
| .........................................  |
| ψ_{n_N}^{a₁} ψ_{n_N}^{a₂} ...  ψ_{n_N}^{a_N} |   (anti-symmetric)   ...(8.9)

The wave functions will be normalized if we multiply them by 1/√(N!).
Now, a wave function determines a microscopic state of the system.
Different types of wave functions, therefore, will lead us to the different
conceptions of the microscopic states, and thereby, as will be shown

presently, to different statistics. Let us examine Eqs. (8.7), (8.8) and (8.9)
from this point of view.
Consider a particular case in which the particles $a_1, a_2, a_3, a_4, a_5$ are in the state $n_1$ and the remaining $(N-5)$ particles in the state $n_N$. The wave function of the type (8.7) corresponding to this distribution is
$$\psi_{n_1}^{a_1}\psi_{n_1}^{a_2}\psi_{n_1}^{a_3}\psi_{n_1}^{a_4}\psi_{n_1}^{a_5}\,\psi_{n_N}^{a_6}\cdots\psi_{n_N}^{a_N} \qquad ...(8.10)$$
All other wave functions corresponding to five particles in the state $n_1$ and $N-5$ in the state $n_N$ can be obtained by carrying out all the possible permutations of $a_1, a_2, \ldots, a_N$ in Eq. (8.10). These permutations, however, will not always yield different wave functions. For example, if in Eq. (8.10) $a_1, a_2, a_3, a_4, a_5$ are permuted among themselves, or if $a_6, \ldots, a_N$ are permuted among themselves, the wave function remains unchanged. A permutation will yield a different wave function only if particles belonging to the different states $n_1$ and $n_N$ are interchanged. The total number of distinct wave functions corresponding to five particles in the state $n_1$ and $(N-5)$ in the state $n_N$ is
$$\frac{N!}{5!\,(N-5)!} \qquad ...(8.11)$$
The probability that five particles are in the state $n_1$ and $N-5$ in the state $n_N$ is, evidently, proportional to the number (8.11). In other words, (8.11) determines the probability of the corresponding macroscopic state. Recalling the arguments presented in Ch. 5, we see that the statistics resulting from the selection of wave functions of the type (8.7) is Maxwell-Boltzmann (MB) statistics.
Next, we consider the symmetric function (8.8). If we exchange any two particles, the wave function remains unaffected. Since the interchange of particles does not generate a new wave function and, hence, a new microscopic state, we conclude that when the distribution is described by wave functions of the type (8.8), the particles cannot be differentiated and do not possess any individuality.
This also holds for the anti-symmetric wave function. When we exchange particles $a_1$ and $a_2$, we are exchanging two columns of the determinant, which remains unaffected by this change except that its sign is reversed. The change, therefore, does not modify the nature of the wave function and, hence, does not generate a new microscopic state. The interchange of particles in this case too is meaningless. Nevertheless, (8.8) and (8.9) have their own peculiarities, which we will now consider.
Suppose that all the particles are situated in the state $n_1$. The symmetric function describing this distribution is, evidently,
$$\psi_S = N!\,\psi_{n_1}^{a_1}\psi_{n_1}^{a_2}\cdots\psi_{n_1}^{a_N} \qquad ...(8.12)$$
Since this wave function does not vanish, we conclude that such a situation is possible, i.e. we can accommodate any number of

particles in any one state. If the same procedure is followed with the anti-symmetric function (8.9), all the rows of the determinant become identical, and the determinant (and the wave function) vanishes, indicating that such a situation is not possible. Hence, in this case, all the particles cannot be in the same state. In fact, no state can have more than one particle, because, if we assume two particles in the same state, two rows of the determinant will be identical and the determinant will vanish.
Particles satisfying the symmetry requirement
$$\psi(\ldots Q_j \ldots Q_k \ldots) = \psi(\ldots Q_k \ldots Q_j \ldots) \qquad ...(8.13)$$
are said to obey Bose-Einstein (BE) statistics and are called bosons, while those satisfying the anti-symmetry requirement
$$\psi(\ldots Q_j \ldots Q_k \ldots) = -\psi(\ldots Q_k \ldots Q_j \ldots) \qquad ...(8.14)$$
are said to obey Fermi-Dirac (FD) statistics and are called fermions. It is known from quantum mechanics that particles having total spin angular momentum which is integral, i.e. $0, 1, 2, \ldots$ (e.g. $^4$He, photons), obey Bose statistics, while those with total spin angular momentum which is half-integral, i.e. $\tfrac{1}{2}, \tfrac{3}{2}, \ldots$ (e.g. electrons), obey Fermi statistics, the most significant result of which is Pauli's exclusion principle. These findings are summarized in the table given below:

Particle spin    Symmetry of wave function    Statistics       Occupation number
Half-integer     Anti-symmetric               Fermi-Dirac      n_s = 0, 1
Integer          Symmetric                    Bose-Einstein    n_s = 0, 1, 2, ...
It must be emphasized that when we talk about MB, BE, or FD statistics we are not really referring to different statistics, but only to different distributions.
The following illustration was given by Schrödinger* to explain the 'new statistics' and its relation to the classical statistics, to those 'who have never heard about such things'. The illustration may seem childishly simple, but is completely adequate and covers the actual situation.
Three schoolboys, Tom, Dick and Harry, deserve a reward. The teacher has two rewards to distribute among them. Before doing so, he wishes to realize for himself how many different distributions are possible. To count the number of different distributions is a statistical question. The answer depends on the nature of the reward. Three different kinds of reward will illustrate the three kinds of statistics.
(a) The two rewards are two memorial coins with portraits of Newton and Shakespeare, respectively.
The teacher may give Newton to either of Tom, Dick or Harry, and Shakespeare to either of Tom, Dick or Harry. The possible distributions are given in Table 8.1.

* E. Schrödinger, Endeavour, IX, 109, 1950.



Table 8.1

Tom      Dick     Harry
N, S     -        -
N        S        -
N        -        S
S        N        -
-        N, S     -
-        N        S
-        S        N
S        -        N
-        -        N, S

Thus, there are 3 x 3 = 9 different distributions.


(b) The two rewards are two shillings:
Either Tom, Dick or Harry receives both shillings, or they can be given to two different boys, the third going without.

Tom      Dick     Harry
2        -        -
-        2        -
-        -        2
1        1        -
1        -        1
-        1        1

Thus, the total number of distributions is 6.
(c) The two rewards are two vacancies in the football team:
In this case, two boys can join the team, and one of the three is left out.

Tom      Dick     Harry
*        *        -
*        -        *
-        *        *

Thus there are three different distributions.


Now let the rewards represent the particles, and the boys represent the states the particles can assume. In the first case the particles are distinguishable and so obey MB statistics. Shillings are not distinguishable, but they are capable of being owned in the plural: it makes a difference whether one has one shilling or two, though there is no point in the two boys exchanging their shillings. This situation corresponds to Bose-Einstein statistics. With team membership, neither consideration has a meaning: you can either belong to a team or not, but you cannot belong to it twice over. The third

case, therefore, corresponds to Fermi-Dirac statistics, in which there can be only one particle in a state.
It would be interesting to find how the ratio r of the probability that the two particles are found in the same state to the probability that they belong to different states varies in the three cases. From the tables above we find that
$$r_{MB} = \tfrac{1}{2}, \qquad r_{BE} = 1, \qquad r_{FD} = 0$$
Thus, in BE statistics there is a greater tendency for particles to crowd into the same state than in classical statistics, while in FD statistics the tendency is to remain apart.
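These counts are easy to verify by brute-force enumeration. The short Python sketch below is not part of the original text; it simply lists the distributions of two rewards over three boys under the three counting rules and evaluates the ratio r in each case (the variable and function names are ours).

```python
from itertools import product, combinations_with_replacement, combinations

# Schrodinger's counting argument: 2 rewards (particles) distributed
# over 3 boys (single-particle states) under the three statistics.
boys = ["Tom", "Dick", "Harry"]

mb = list(product(boys, repeat=2))                 # distinguishable rewards: 3 x 3 = 9
be = list(combinations_with_replacement(boys, 2))  # indistinguishable, multiple occupancy: 6
fd = list(combinations(boys, 2))                   # indistinguishable, at most one each: 3

def ratio_same_to_different(configs):
    """Ratio r of 'both rewards with the same boy' to 'with different boys'."""
    same = sum(1 for c in configs if c[0] == c[1])
    diff = len(configs) - same
    return same / diff if diff else float("inf")

print(len(mb), len(be), len(fd))            # 9 6 3
print(ratio_same_to_different(mb))          # 0.5  (classical)
print(ratio_same_to_different(be))          # 1.0  (bosons crowd together)
print(ratio_same_to_different(fd))          # 0.0  (fermions stay apart)
```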

8.2 THE QUANTUM DISTRIBUTION FUNCTIONS


Consider an ideal gas of identical particles, N in number. Let s represent the state of a single particle and S the state of the whole gas. Suppose that when the gas is in state S, $n_1$ particles are in state $s = 1$ with energy $\varepsilon_1$, $n_2$ particles in state $s = 2$ with energy $\varepsilon_2$, etc. The total energy of the gas, therefore, is
$$E_S = \sum_s n_s\varepsilon_s \qquad ...(8.15)$$
and the total number of particles
$$N = \sum_s n_s \qquad ...(8.16)$$
The physical quantities we are interested in can be calculated with the help of the partition function
$$Z = \sum_S e^{-\beta E_S}$$
the summation here being over all possible states S. While summing over the states we have to take note of what type of statistics the particles obey.
(i) MB Statistics: In this case the particles are distinguishable. The partition function is
$$Z = \sum_S e^{-\beta\sum_s n_s\varepsilon_s} \qquad ...(8.17)$$
where the summation is over all states S, i.e. over all possible values of $n_1, n_2, \ldots$

The mean number of particles in state s is
$$\langle n_s \rangle = \frac{\sum_S n_s\, e^{-\beta\sum_s n_s\varepsilon_s}}{\sum_S e^{-\beta\sum_s n_s\varepsilon_s}} \qquad ...(8.18)$$
$$= -\frac{1}{\beta}\frac{\partial \ln Z}{\partial \varepsilon_s} \qquad ...(8.19)$$
The partition function can as well be written as
$$Z = \sum_{s_1, s_2, \ldots} e^{-\beta(\varepsilon_{s_1} + \varepsilon_{s_2} + \cdots + \varepsilon_{s_N})} \qquad ...(8.20)$$
where the summation is over all the possible states of each individual particle $s_1$, $s_2$, etc. One can see that the summation carried out in this manner takes into account the distinguishability of the particles:
$$\sum_{s_1, s_2, \ldots} e^{-\beta\varepsilon_{s_1}}\, e^{-\beta\varepsilon_{s_2}}\cdots = \left(\sum_{s_1} e^{-\beta\varepsilon_{s_1}}\right)^N \qquad ...(8.21)$$
and
$$\ln Z = N\ln\left(\sum_{s_1} e^{-\beta\varepsilon_{s_1}}\right) \qquad ...(8.22)$$

Therefore, substituting this in Eq. (8.19), we have
$$\langle n_s \rangle = \frac{N e^{-\beta\varepsilon_s}}{\sum_s e^{-\beta\varepsilon_s}} \qquad ...(8.23)$$
which is the MB distribution obtained previously in classical statistical mechanics.
(ii) BE Statistics: In this case the particles are indistinguishable. BE and FD statistics can be treated conveniently using the grand canonical ensemble, since N also is involved.
The grand partition function is given by
$$Z = \sum_S e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots) + \beta\mu(n_1 + n_2 + \cdots)} \qquad ...(8.24)$$
where the summation is over all possible states, i.e. over all possible numbers of particles in each single-particle state. The number of particles $n_i$ in each state $i$ takes the values $0, 1, 2, \ldots$; in the grand canonical ensemble these occupation numbers are summed independently, the chemical potential $\mu$ being fixed by the condition $\sum_i \langle n_i\rangle = N$.

Therefore
$$Z = \left(\sum_{n_1} e^{-\beta(\varepsilon_1 - \mu)n_1}\right)\left(\sum_{n_2} e^{-\beta(\varepsilon_2 - \mu)n_2}\right)\cdots$$
Now
$$\sum_{n_1} e^{-\beta(\varepsilon_1 - \mu)n_1} = 1 + e^{-\beta(\varepsilon_1 - \mu)} + e^{-2\beta(\varepsilon_1 - \mu)} + \cdots \qquad ...(8.25)$$
This is a geometric series. Hence
$$\sum_{n_1} e^{-\beta(\varepsilon_1 - \mu)n_1} = \frac{1}{1 - e^{-\beta(\varepsilon_1 - \mu)}}$$
Therefore
$$Z = \prod_s \frac{1}{1 - e^{-\beta(\varepsilon_s - \mu)}} \qquad ...(8.26)$$
and
$$\ln Z = -\sum_s \ln\left(1 - e^{-\beta(\varepsilon_s - \mu)}\right) \qquad ...(8.27)$$
We know that $N = \dfrac{1}{\beta}\dfrac{\partial \ln Z}{\partial \mu}$ (see Prob. 6.3);
therefore
$$N = \sum_s \frac{1}{\beta}\frac{\partial}{\partial\mu}\left[-\ln\left(1 - e^{-\beta(\varepsilon_s - \mu)}\right)\right] = \sum_s \langle n_s\rangle$$
From this expression one can infer that the sth term in the sum represents the average number of molecules in the sth level:
$$\langle n_s\rangle = \frac{1}{\beta}\frac{\partial}{\partial\mu}\left[-\ln\left(1 - e^{-\beta(\varepsilon_s - \mu)}\right)\right] \qquad ...(8.28)$$
$$= \frac{e^{-\beta(\varepsilon_s - \mu)}}{1 - e^{-\beta(\varepsilon_s - \mu)}} = \frac{1}{e^{\beta(\varepsilon_s - \mu)} - 1} \qquad ...(8.29)$$

Since the number of bosons in any state cannot be negative, $e^{\beta(\varepsilon_s - \mu)}$ must be greater than unity, i.e. $(\varepsilon_s - \mu) > 0$ for all $\varepsilon_s$. The lowest energy of a single-particle state of a boson gas is zero. Hence $\mu$ for an ideal boson gas must always be negative.
The formula (8.29) takes a simpler form in the case of photons, which have spin one and, hence, are bosons. Photons can be considered as an ideal gas, since it follows from electromagnetic theory that their interactions are completely negligible. Because photons may be absorbed and re-emitted by the walls of the enclosure, the number of photons inside the enclosure is not conserved. The condition $\sum_s n_s = N$, therefore, is not imposed on the distribution. Hence $\dfrac{\partial \ln Z}{\partial N} = 0$.
Therefore
$$\mu = \left(\frac{\partial F}{\partial N}\right)_{T,V} = -kT\frac{\partial \ln Z}{\partial N} = 0$$

and the distribution function takes the form
$$\langle n_s\rangle = \frac{1}{e^{\beta\varepsilon_s} - 1} \qquad ...(8.30)$$
In fact, this formula could have been derived directly. Thus
$$Z = \sum_S e^{-\beta(n_1\varepsilon_1 + n_2\varepsilon_2 + \cdots)} = \left(\sum_{n_1} e^{-\beta n_1\varepsilon_1}\right)\left(\sum_{n_2} e^{-\beta n_2\varepsilon_2}\right)\cdots = \left(\frac{1}{1 - e^{-\beta\varepsilon_1}}\right)\left(\frac{1}{1 - e^{-\beta\varepsilon_2}}\right)\cdots$$
$$\ln Z = -\sum_s \ln\left(1 - e^{-\beta\varepsilon_s}\right)$$
and
$$\langle n_s\rangle = -\frac{1}{\beta}\frac{\partial \ln Z}{\partial \varepsilon_s} = \frac{1}{e^{\beta\varepsilon_s} - 1}$$
(iii) FD Statistics: For FD systems, $n_s = 0$ or $1$.
Therefore
$$Z = \left(1 + e^{\beta(\mu - \varepsilon_1)}\right)\left(1 + e^{\beta(\mu - \varepsilon_2)}\right)\cdots$$
$$\ln Z = \sum_s \ln\left\{1 + e^{\beta(\mu - \varepsilon_s)}\right\} \qquad ...(8.31)$$
Hence
$$N = \frac{1}{\beta}\frac{\partial \ln Z}{\partial \mu} = \sum_s \frac{e^{\beta(\mu - \varepsilon_s)}}{1 + e^{\beta(\mu - \varepsilon_s)}} = \sum_s \langle n_s\rangle$$
therefore
$$\langle n_s\rangle = \frac{1}{e^{\beta(\varepsilon_s - \mu)} + 1} \qquad ...(8.32)$$
This shows that if $\varepsilon_s$ is very large, $\langle n_s\rangle \to 0$; if, on the other hand, $\varepsilon_s$ is small, since the denominator is always greater than one, we have
$$\langle n_s\rangle \le 1 \qquad ...(8.33)$$
The behaviour of a gas obeying FD statistics is different from that of one obeying BE statistics. This difference becomes particularly striking as T tends to zero, when the gas is in the state of lowest energy. In the case of BE statistics, since there is no restriction on the number of particles that can be placed in any one single-particle state, all the particles can occupy the lowest state. But in the case of FD statistics, since we can place only one particle in any one single-particle state, even though the gas has the lowest possible energy, one is forced to populate the successive states of higher energy with one particle in each. The lowest energy of a gas obeying FD statistics, therefore, is much higher than it would have been if the particles had obeyed BE statistics.
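As a quick numerical illustration (not from the text), the three occupation-number formulas, Eq. (8.32) for fermions, Eq. (8.29) for bosons and the MB form, can be compared as functions of $x = \beta(\varepsilon - \mu)$; the small Python sketch below does this for a few values of x.

```python
import math

def occupation(x, statistics):
    """Mean occupation number as a function of x = beta*(eps - mu)."""
    if statistics == "FD":
        return 1.0 / (math.exp(x) + 1.0)   # Eq. (8.32)
    if statistics == "BE":
        return 1.0 / (math.exp(x) - 1.0)   # Eq. (8.29), requires x > 0 for bosons
    return math.exp(-x)                    # Maxwell-Boltzmann limit

for x in (0.1, 1.0, 3.0, 6.0):
    print(x, occupation(x, "FD"), occupation(x, "BE"), occupation(x, "MB"))
# For large x all three agree; near x = 0 the BE occupation grows without bound
# while the FD occupation never exceeds 1, in accordance with Eq. (8.33).
```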

8.3 THE BOLTZMANN LIMIT OF BOSON AND FERMION GASES


We have seen that the mean occupation number of particles in any energy state $\varepsilon_s$ is given by
$$\langle n_s\rangle = \frac{1}{e^{\beta(\varepsilon_s - \mu)} \pm 1} \qquad ...(8.34)$$
where the upper sign refers to FD statistics and the lower to BE statistics. Suppose now
$$e^{\beta(\varepsilon_s - \mu)} \gg 1, \quad \text{i.e.} \quad \beta(\varepsilon_s - \mu) \gg 1 \qquad ...(8.35)$$
In this case
$$\langle n_s\rangle \approx e^{-\beta(\varepsilon_s - \mu)} = e^{\beta\mu}\,e^{-\beta\varepsilon_s}$$
Further, because $\sum_s\langle n_s\rangle = N$,
$$e^{\beta\mu}\sum_s e^{-\beta\varepsilon_s} = N, \quad \text{i.e.} \quad e^{\beta\mu} = \frac{N}{\sum_s e^{-\beta\varepsilon_s}}$$
and, hence,
$$\langle n_s\rangle = \frac{N e^{-\beta\varepsilon_s}}{\sum_s e^{-\beta\varepsilon_s}}$$
which is the MB distribution.
Thus, quantum statistics gives the Boltzmann form in the limiting case when
$$e^{-\beta\mu} \gg 1 \qquad ...(8.36)$$
i.e.
$$e^{-\beta\mu} = \frac{1}{N}\left(\frac{2\pi m kT}{h^2}\right)^{3/2}V \gg 1 \quad \text{[see Eq. (6.95)]} \qquad ...(8.37)$$
This condition is satisfied if T is high and $N/V = \rho$ is very low.


The variation of the occupation number $\langle n_s\rangle$ with $\beta(\varepsilon - \mu)$ in the three cases is shown in Fig. 8.1.

Fig. 8.1

Curve 1 is for fermions, 2 for bosons and 3 for MB particles. We find that for large values of $\beta(\varepsilon - \mu)$ the quantum curves 1 and 2 merge into the classical curve 3. We know that the classical treatment is valid at high temperatures. Therefore, at high T, if $\dfrac{\varepsilon - \mu}{kT}$ is to be large, $\mu$ must be negative and large in magnitude, the result arrived at in Eq. (8.36).
Thus, for the classical treatment to be valid,
$$\frac{1}{\lambda_T^3}\left(\frac{V}{N}\right) \gg 1 \quad \text{or} \quad \lambda_T^3\left(\frac{N}{V}\right) \ll 1 \quad \text{[see Eq. (8.37)]} \qquad ...(8.38)$$
where we have introduced the mean thermal wavelength
$$\lambda_T = \frac{h}{(2\pi m kT)^{1/2}} \qquad ...(8.39)$$
The thermal wavelength is a measure of the spatial extent of the wave function of the particles. As long as $\lambda_T$ is much smaller than the average particle spacing, the statistics will be classical.
A gas which can be treated classically is said to be non-degenerate. On the other hand, if the quantum statistical distributions have to be used, the gas is said to be degenerate. We can see from Eq. (8.37) that when
$$\left(\frac{h^2}{2\pi m kT}\right)^{3/2}\left(\frac{N}{V}\right) \gg 1 \qquad ...(8.40)$$
the gas is degenerate. The condition (8.40) is known as the degeneracy criterion.
Example 8.1 Determine whether the electron gas in copper at room temperature is degenerate or non-degenerate. (Concentration of electrons in copper is $8.5\times 10^{28}\ \mathrm{m^{-3}}$.)
From Eq. (8.40), we get
$$\left(\frac{h^2}{2\pi m kT}\right)^{3/2}\frac{N}{V} = \left[\frac{(6.62\times 10^{-34})^2}{2\pi\times 9\times 10^{-31}\times 1.38\times 10^{-23}\times 300}\right]^{3/2}\times 8.5\times 10^{28} = 6775 \ (\gg 1)$$
Therefore the gas is degenerate.
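Example 8.1 is easy to reproduce numerically. The sketch below is our own check, using the constants quoted in the text; it evaluates the thermal wavelength of Eq. (8.39) and the degeneracy parameter of Eq. (8.40) for the electron gas in copper.

```python
import math

h = 6.626e-34      # Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J/K
m = 9.11e-31       # electron mass, kg
T = 300.0          # K
n = 8.5e28         # electrons per m^3, as quoted in Example 8.1

lambda_T = h / math.sqrt(2.0 * math.pi * m * k * T)   # thermal wavelength, Eq. (8.39)
degeneracy = n * lambda_T**3                          # degeneracy parameter, Eq. (8.40)
print(lambda_T, degeneracy)   # ~4.3e-9 m and ~6.8e3 (>> 1, so the gas is degenerate)
```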

8.4 EVALUATION OF THE PARTITION FUNCTION


Consider an ideal gas, consisting o f a large number of molecules N enclosed
in a volume V. In addition to the translational motion, the molecules may

have rotational and vibrational motion, or they may get electronically


excited.
If, as a first approximation, we assume that the molecules have no mutual intermolecular forces, the total energy of a molecule is given by
$$E_{mol} = E_{trans} + E_{vib} + E_{el} + E_{rot} \qquad ...(8.41)$$
and the canonical partition function of the molecule will be
$$Z = Z_{trans}\cdot Z_{rot}\cdot Z_{vib}\cdot Z_{el}$$
Consider first a monatomic gas. Since it does not have any rotational or vibrational motion,
$$E_{mol} = E_{trans} + E_{el}$$
Let us first calculate the translational partition function.
Without loss of generality, we can assume the enclosure to be in the shape of a rectangular parallelepiped with edge lengths $L_x$, $L_y$, $L_z$ in the directions x, y, z, respectively.
Therefore $V = L_x L_y L_z$.
We know from quantum mechanics that the possible quantized energies are given by
$$\varepsilon = \frac{2\pi^2\hbar^2}{m}\left(\frac{n_x^2}{L_x^2} + \frac{n_y^2}{L_y^2} + \frac{n_z^2}{L_z^2}\right)$$
The partition function is
$$Z_1 = \sum_{n_x, n_y, n_z} e^{-\beta\varepsilon(n_x, n_y, n_z)}$$
but in this, the energy levels being closely spaced, the quantization of the
translational levels can be neglected, and summation may safely be
replaced by integration. The notion of phase space is not used in quantum
mechanics in the same context as it is used in classical mechanics, for,
the principle of indeterminacy makes it impossible to simultaneously
specify the coordinate and the conjugate momentum of the particle. To be
able to make a transition from a sum over discrete states to an integral
over continuous states, one needs to know what volume in phase space
corresponds to a discrete quantum state. Since it is impossible to define
the position of a representative point in the phase space more accurately
than allowed by the condition Ap Aq ~ h, we assume that a discrete
quantum state occupies a volume equal to $h^{3N}$, where h is Planck's constant
and 3N the number of degrees of freedom. In fact, this can be easily
proved in the case of a particle moving freely within a cube of edge L
(Prob. 8.1). In a volume element $d^3q\,d^3p$ of $\mu$-space, therefore, there are $(1/h^3)\,d^3q\,d^3p$ possible states. So the summation over all energy states can be replaced by an integration of the differential

$$\frac{1}{h^3}\,e^{-\beta\varepsilon}\,d^3q\,d^3p$$
Therefore
$$Z_1 = \frac{1}{h^3}\int\cdots\int e^{-\beta p^2/2m}\,d^3q\,d^3p \qquad ...(8.43)$$
Integration over $d^3q$ gives the volume of the container V.
Therefore
$$Z_1 = \frac{V}{h^3}\iiint e^{-\frac{\beta}{2m}(p_x^2 + p_y^2 + p_z^2)}\,dp_x\,dp_y\,dp_z = \frac{V}{h^3}(2\pi m kT)^{3/2} \qquad ...(8.44)$$
Thus the correct partition function for the N-particle gas is
$$Z = Z_1^N = \frac{V^N}{h^{3N}}(2\pi m kT)^{3N/2} \qquad ...(8.45)$$
This result is the same as that obtained classically, except that the arbitrary parameter $h_0$, measuring the size of the cell in classical phase space, is replaced by Planck's constant h. The expressions for the thermodynamic quantities S, $\langle E\rangle$, $\langle P\rangle$, etc. are the same as those calculated in Ch. 6, except that $h_0$ now has a definite value in terms of h.

8.5 PARTITION FUNCTION FOR DIATOMIC MOLECULES

(a) Translational Partition Function


The translational motion of a diatomic molecule is analogous to that already considered for a structureless molecule, and the partition function will be the same as that given by Eq. (8.45) with m replaced by $m_1 + m_2$, where $m_1$, $m_2$ are the masses of the two atoms.
Therefore
$$Z_1 = \frac{V}{h^3}\left[2\pi(m_1 + m_2)kT\right]^{3/2} \qquad ...(8.46)$$

(b) Rotational Partition Function


To a first approximation, the molecule may be visualized as a pair of masses $m_1$, $m_2$ separated by a fixed distance $r_0$.
The energy levels of such a rotator are given by
$$\varepsilon_J = \frac{\hbar^2}{2I_m}J(J+1) \qquad ...(8.47)$$
where J is the rotational quantum number and takes the values 0, 1, 2, ..., and $I_m = \dfrac{m_1 m_2}{m_1 + m_2}\,r_0^2$ is the moment of inertia of the molecule.

Since $(2J+1)$ is the degeneracy of the Jth state (i.e., for a given J, there are $2J+1$ possible quantum states of the same energy),
$$Z_{rot} = \sum_{J=0}^{\infty}(2J+1)\,e^{-\beta\hbar^2 J(J+1)/2I_m} \qquad ...(8.48)$$
$$= \sum_{J=0}^{\infty}(2J+1)\,e^{-J(J+1)\theta_r/T} \qquad ...(8.49)$$
where $\theta_r = \hbar^2/2kI_m$ is called the rotational characteristic temperature.
Consider the following cases:
(i) T or $I_m$ very small:
$$Z_{rot} = 1 + 3e^{-2\theta_r/T} + 5e^{-6\theta_r/T} + \cdots \qquad ...(8.50)$$
In the low-temperature region, because the thermal energy of the order of kT is not enough to populate the higher rotational levels, almost all molecules will be in the lowest rotational state and, hence, all terms beyond the first few are negligible.
(ii) T reasonably high and $I_m$ not small: In this case the spacing between consecutive levels is very small compared with kT and, hence, $\varepsilon$ may be considered a continuous variable in J, and the summation can be replaced by integration. Put $u = J(J+1)$:
$$Z_{rot} = \int_0^{\infty}e^{-u\theta_r/T}\,du = \frac{T}{\theta_r} = \frac{2I_m kT}{\hbar^2} \qquad ...(8.51)$$
(iii) Homonuclear molecules: Consider first the case when the temperature is so high that the classical approximation discussed in (ii) above is admissible. If the two nuclei are identical, the two opposing positions of the molecular axis, viz. those that correspond to the azimuthal angles $\phi$ and $\phi + \pi$, correspond to only one discrete state of the molecule, because they differ simply by an interchange of the two identical nuclei. The partition function given by Eq. (8.51), therefore, will be reduced by a factor of 2. Thus, in the classical approximation,
$$Z_{rot} = \frac{2I_m kT}{2\hbar^2} = \frac{I_m kT}{\hbar^2} \qquad ...(8.52)$$
The general formula for the rotational partition function is
$$Z_{rot} = \frac{2I_m kT}{\sigma\hbar^2} = \frac{T}{\sigma\theta_r} \qquad ...(8.53)$$
where the symmetry number $\sigma$ is
1 for heteronuclear molecules,
2 for homonuclear diatomic molecules,
3 for molecules with 3-fold symmetry.
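As a rough numerical check (not part of the text), the direct sum of Eq. (8.49) can be compared with the classical limit $T/(\sigma\theta_r)$ of Eq. (8.53). The value of $\theta_r$ used below is only a representative few-kelvin figure typical of light diatomic molecules, not a tabulated constant.

```python
import math

def z_rot_sum(T, theta_r, sigma=1, j_max=200):
    """Rotational partition function by direct summation of Eq. (8.49),
    divided by the symmetry number sigma as in Eq. (8.53)."""
    s = sum((2 * J + 1) * math.exp(-J * (J + 1) * theta_r / T)
            for J in range(j_max + 1))
    return s / sigma

def z_rot_classical(T, theta_r, sigma=1):
    """High-temperature (classical) limit, Z_rot = T / (sigma * theta_r)."""
    return T / (sigma * theta_r)

theta_r = 2.1   # K, an assumed rotational characteristic temperature
for T in (5.0, 50.0, 300.0):
    print(T, z_rot_sum(T, theta_r), z_rot_classical(T, theta_r))
# The direct sum approaches T/(sigma*theta_r) once T >> theta_r.
```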

Since
$$\langle E\rangle = -N\frac{\partial \ln Z_{rot}}{\partial\beta} = NkT \qquad ...(8.54)$$
the contribution of the rotational energy to the specific heat is
$$C_{v(rot)} = Nk \qquad ...(8.55)$$
(iv) At lower temperatures, the rotational states have to be treated as discrete, in which case the rotational partition function is given by Eq. (8.49), viz.
$$Z_{rot} = \sum_J(2J+1)\,e^{-J(J+1)\theta_r/T}$$
The heat capacities calculated from this formula, however, do not generally agree with those obtained experimentally. This discrepancy is resolved if the symmetry character of the nuclear wave function is taken into consideration.
Consider, for example, an H$_2$ molecule. Both nuclei are identical, each having spin 1/2. The total spin of the pair can either be 0 or 1. In the former case the molecules are in the singlet state (para hydrogen) and in the latter they are in the triplet state (ortho hydrogen), their statistical weights $(2I+1)$ being 1 and 3, respectively. The Pauli principle demands that the total wave function must be anti-symmetric with respect to the exchange of the two nuclei. The electronic and vibrational wave functions are symmetric in the ground state. The rotational wave function is symmetric for even values of J and anti-symmetric for odd values. Since the nuclear spin function for ortho hydrogen is symmetric under the exchange of the nuclei, the rotational part of the wave function must be anti-symmetric.
Therefore
$$Z_o = \sum_{\text{odd }J}(2J+1)\,e^{-J(J+1)\theta_r/T} \qquad ...(8.56)$$
For para hydrogen, since the nuclear spin function is anti-symmetric, the rotational partition function is
$$Z_p = \sum_{\text{even }J}(2J+1)\,e^{-J(J+1)\theta_r/T} \qquad ...(8.57)$$
The total partition function per molecule, if the system is in thermal equilibrium, is
$$Z = 3Z_o + Z_p \qquad ...(8.58)$$
and, hence, the heat capacity is given by
$$C_v = \frac{3}{4}C_{v(o)} + \frac{1}{4}C_{v(p)} \qquad ...(8.59)$$
where o and p denote the ortho and para contributions to the specific heat. This shows fairly good agreement with experiment.

(c) Vibrational Partition Function


To a good approximation, the vibrational motion of a molecule may be assumed to be simple harmonic. The energy of a harmonic oscillator is given by
$$\varepsilon_n = \left(n + \tfrac{1}{2}\right)\hbar\omega$$
Therefore
$$Z_{vib} = \sum_{n=0}^{\infty}e^{-\beta\hbar\omega(n + 1/2)} = e^{-\beta\hbar\omega/2}\sum_{n=0}^{\infty}e^{-\beta\hbar\omega n} \qquad ...(8.60)$$
$$\sum_{n=0}^{\infty}e^{-\beta\hbar\omega n} = 1 + e^{-\beta\hbar\omega} + e^{-2\beta\hbar\omega} + \cdots = \frac{1}{1 - e^{-\beta\hbar\omega}}, \quad \text{a geometric series} \qquad ...(8.61)$$
Therefore
$$Z_{vib} = e^{-\beta\hbar\omega/2}\left(1 - e^{-\beta\hbar\omega}\right)^{-1} \qquad ...(8.62)$$
$$= e^{-\theta_v/2T}\left(1 - e^{-\theta_v/T}\right)^{-1} \qquad ...(8.63)$$
where $\theta_v = \hbar\omega/k$ is the vibrational characteristic temperature.
Therefore
$$\langle E\rangle = -N\frac{\partial \ln Z_{vib}}{\partial\beta} = NkT^2\frac{\partial \ln Z_{vib}}{\partial T} = NkT^2\left[\frac{\theta_v}{2T^2} + \frac{\theta_v}{T^2}\,\frac{e^{-\theta_v/T}}{1 - e^{-\theta_v/T}}\right]$$
$$= Nk\theta_v\left[\frac{1}{2} + \frac{1}{e^{\theta_v/T} - 1}\right] \qquad ...(8.64)$$
and
$$C_v = \left(\frac{\partial\langle E\rangle}{\partial T}\right)_V = Nk\left(\frac{\theta_v}{T}\right)^2\frac{e^{\theta_v/T}}{\left(e^{\theta_v/T} - 1\right)^2} \qquad ...(8.65)$$
At high temperatures $\theta_v/T \ll 1$. Therefore
$$e^{\theta_v/T} \approx 1 + \frac{\theta_v}{T} + \frac{1}{2}\left(\frac{\theta_v}{T}\right)^2$$
neglecting the higher terms.

Therefore
$$\langle E\rangle \approx Nk\theta_v\left[\frac{1}{2} + \frac{T}{\theta_v}\right] = NkT + \frac{N\hbar\omega}{2} \qquad ...(8.66)$$
and, hence,
$$C_v = Nk \qquad ...(8.67)$$
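The crossover between the frozen-out and the classical regimes described by Eqs. (8.65) to (8.67) is easy to tabulate. The sketch below is ours; it uses the characteristic temperature 3400 K quoted in Prob. 8.8 purely as an example and evaluates the vibrational heat capacity per mole at a few temperatures.

```python
import math

def c_vib_per_mole(T, theta_v, R=8.314):
    """Vibrational heat capacity per mole from Eq. (8.65):
    C_v = R * (theta_v/T)^2 * e^(theta_v/T) / (e^(theta_v/T) - 1)^2."""
    x = theta_v / T
    return R * x * x * math.exp(x) / (math.exp(x) - 1.0) ** 2

theta_v = 3400.0   # K, the vibrational temperature of Prob. 8.8 (example only)
for T in (300.0, 1000.0, 3400.0, 10000.0):
    print(T, c_vib_per_mole(T, theta_v))
# At T >> theta_v the value approaches R, the classical result of Eq. (8.67);
# at room temperature the vibrational contribution is almost frozen out.
```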

(d) Electronic Partition Function


The electronic partition function is given by
$$Z_{el} = g_0 + g_1 e^{-\beta\varepsilon_1} + g_2 e^{-\beta\varepsilon_2} + \cdots \qquad ...(8.68)$$
where $\varepsilon_1, \varepsilon_2, \ldots$ are the energies required for the excitation of the electron to the various levels and the g's are the respective degeneracies. The energies $\varepsilon_1, \varepsilon_2, \ldots$ being generally much larger than the thermal energy kT, at room temperature the molecules of most gases are hardly excited to higher electronic states. The electronic partition function, therefore, can be represented with sufficient accuracy by the first two terms of Eq. (8.68). Thus
$$Z_{el} = g_0 + g_1 e^{-\beta\varepsilon_1} \qquad ...(8.69)$$
Therefore
$$\langle E\rangle = NkT^2\frac{\partial \ln Z_{el}}{\partial T} = \frac{N g_1\varepsilon_1 e^{-\beta\varepsilon_1}}{g_0 + g_1 e^{-\beta\varepsilon_1}} = N\varepsilon_1\left[\frac{g_0}{g_1}e^{\beta\varepsilon_1} + 1\right]^{-1} \qquad ...(8.70)$$
and
$$C_v = \frac{N\varepsilon_1^2}{kT^2}\,\frac{g_0}{g_1}e^{\beta\varepsilon_1}\left[1 + \frac{g_0}{g_1}e^{\beta\varepsilon_1}\right]^{-2} \qquad ...(8.71)$$
Since there will be no significant contribution from the electronic energy to the specific heat except at high temperatures, the total temperature-dependent energy of the gas is given by
$$E = \frac{3}{2}NkT_{(trans)} + NkT_{(rot)} + NkT_{(vib)} = \frac{7}{2}NkT \qquad ...(8.72)$$
and the total specific heat is
$$C_v = \frac{7}{2}Nk \qquad ...(8.73)$$

8.6 EQUATION OF STATE FOR AN IDEAL GAS
In MB statistics we have seen that
$$PV = NkT \quad \text{and} \quad E = \frac{3}{2}NkT$$
Therefore
$$PV = \frac{2}{3}E \qquad ...(8.74)$$
We will now show that the equation of state [Eq. (8.74)] retains its form in quantum statistics too.
We know from Eqs. (8.27) and (8.31) that
$$\ln Z = \pm\sum_s \ln\left(1 \pm e^{-\beta(\varepsilon_s - \mu)}\right) \qquad ...(8.75)$$
where the upper sign is for FD statistics and the lower for BE statistics.
If V is very large, the summation can be replaced by integration. The number of states with momenta between p and p + dp is
$$\frac{\int d^3q\,d^3p}{h^3} = \frac{V}{h^3}\,4\pi p^2\,dp \qquad ...(8.76)$$
Replacing $p^2$ by $2m\varepsilon$, the number of states in the energy range $\varepsilon$ to $\varepsilon + d\varepsilon$ is
$$\frac{4\pi V}{h^3}\sqrt{2m\varepsilon}\,m\,d\varepsilon = \frac{V(2m)^{3/2}}{4\pi^2\hbar^3}\,\varepsilon^{1/2}\,d\varepsilon \qquad ...(8.77)$$
Now
$$E = \frac{V(2m)^{3/2}}{4\pi^2\hbar^3}\int_0^{\infty}\frac{\varepsilon^{3/2}\,d\varepsilon}{e^{\beta(\varepsilon - \mu)} \pm 1} \qquad ...(8.78)$$
and
$$P = \frac{1}{\beta}\frac{\partial \ln Z}{\partial V} = \pm\frac{1}{\beta}\,\frac{(2m)^{3/2}}{4\pi^2\hbar^3}\int_0^{\infty}\varepsilon^{1/2}\ln\left(1 \pm e^{-\beta(\varepsilon - \mu)}\right)d\varepsilon$$
Integrating by parts,
$$P = \pm\frac{(2m)^{3/2}}{4\pi^2\hbar^3\beta}\left[\frac{2}{3}\varepsilon^{3/2}\ln\left(1 \pm e^{-\beta(\varepsilon - \mu)}\right)\right]_0^{\infty} + \frac{2}{3}\,\frac{(2m)^{3/2}}{4\pi^2\hbar^3}\int_0^{\infty}\frac{\varepsilon^{3/2}\,e^{-\beta(\varepsilon - \mu)}}{1 \pm e^{-\beta(\varepsilon - \mu)}}\,d\varepsilon$$

the first term vanishes at both the limits. Therefore
$$P = \frac{2}{3}\,\frac{(2m)^{3/2}}{4\pi^2\hbar^3}\int_0^{\infty}\frac{\varepsilon^{3/2}\,d\varepsilon}{e^{\beta(\varepsilon - \mu)} \pm 1} = \frac{2}{3}\,\frac{E}{V} \qquad ...(8.79)$$
Therefore
$$PV = \frac{2}{3}E \qquad ...(8.80)$$
Thus the relation $PV = \frac{2}{3}E$ is satisfied by a gas of free monatomic particles, irrespective of the statistics it obeys. The variation of E with T at constant volume in the case of the BE and FD distributions is shown in Fig. 8.2. The dotted line shows the variation of E with T $\left(E = \frac{3}{2}NkT\right)$ for an ideal monatomic gas in the classical limit.

Fig. 8.2

We can see that the total energy of an ideal boson gas is always less than that of an ideal monatomic classical gas, while the energy of an FD gas is more than that of the ideal monatomic gas. At higher temperatures the energies of the BE and FD gases approach that of the classical gas. From Eq. (8.80) we conclude that the variation of pressure with temperature has the same trend as the variation of energy.

8.7 THE QUANTUM MECHANICAL PARAMAGNETIC SUSCEPTIBILITY

We have given the classical treatment of paramagnetism in Ch. 6. In this


section we will treat the problem quantum mechanically.
We know that the angular momentum quantum number J of an atom or a molecule takes discrete values and its component $M_J$ in the direction of

an applied magnetic field B can take all values between J and $-J$ in integral steps. Thus
$$M_J = -J, -J+1, \ldots, J-1, J \qquad ...(8.81)$$
The magnetic moment associated with a given $M_J$ is
$$\mu = g\mu_B M_J \qquad ...(8.82)$$
where g is Lande's g-factor and $\mu_B$ is the Bohr magneton.
The magnetic potential energy of the molecule is $-\mu B$.
Therefore, the partition function $Z_{mag}$ will be
$$Z_{mag} = \sum_{M_J = -J}^{J} e^{\beta g\mu_B M_J B} \qquad ...(8.83)$$

Suppose B is very large. In this case all the magnetic moments will be aligned in the direction of the applied field, leading to a state of magnetic saturation, the maximum possible magnetic moment being $n g\mu_B J$, where n is the number of atoms per unit volume.
If, on the other hand, the applied field is weak, the exponential function may be expanded in the form of a series. Thus, putting $x = \beta g\mu_B B$,
$$\sum_{M_J = -J}^{J} e^{xM_J} = \sum_{M_J = -J}^{J}\left[1 + xM_J + \frac{x^2}{2}M_J^2\right] \qquad ...(8.84)$$
higher terms being neglected.
Therefore
$$\sum_{M_J = -J}^{J} e^{xM_J} = 2J + 1 + \frac{x^2}{2}\sum_{M_J = -J}^{J}M_J^2 \qquad ...(8.85)$$
because $\sum_{M_J = -J}^{J}M_J$ vanishes,
$$= 2J + 1 + \frac{x^2 J(J+1)(2J+1)}{6} \qquad ...(8.86)$$
Now
$$\langle E\rangle = -n\frac{\partial \ln Z_{mag}}{\partial\beta} = -n\frac{\partial}{\partial\beta}\ln\left[2J + 1 + \frac{x^2 J(J+1)(2J+1)}{6}\right] = -\frac{n\,\beta(g\mu_B B)^2\,\dfrac{J(J+1)(2J+1)}{3}}{2J + 1 + \dfrac{x^2 J(J+1)(2J+1)}{6}} \qquad ...(8.87)$$
Since B is very low, $g\mu_B B \ll kT$. Therefore $x \ll 1$ and, hence, the term containing $x^2$ in the denominator may be neglected.
Therefore
$$\langle E\rangle = -\frac{n\,\beta(g\mu_B B)^2\,J(J+1)}{3} \qquad ...(8.88)$$

The magnetic moment per unit volume is
$$m = -\frac{\langle E\rangle}{B} = \frac{n g^2\mu_B^2 J(J+1)}{3kT}\,B \qquad ...(8.89)$$
Hence, the susceptibility per unit volume, $\chi = m/B$, is
$$\chi = \frac{n g^2\mu_B^2 J(J+1)}{3kT} \qquad ...(8.90)$$
This differs from the expression for $\chi$ obtained classically in Chapter 6, in that $g^2\mu_B^2 J(J+1)$ takes the place of $\mu^2$ in the classical expression.
There is hardly any element for which we can verify the formula (8.89) as applied to the gaseous state. However, it has been possible to verify it in the case of chemical compounds of rare-earth elements, where the other elements do not possess a characteristic magnetic moment. In rare earths the moment of the electronic cloud is due to the 4f-shell, which is deep inside the atom and, hence, is not much affected by the electric field of the neighbouring atoms. Their state, therefore, may be regarded as that of free atoms of a rare-earth element. The agreement of the formula (8.89) with experiment is found to be fairly good.
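A numerical sketch of the Curie-type law (8.90) is given below. It is not from the text, and the ion parameters (g, J and the concentration n) are purely illustrative values of the kind met in rare-earth salts.

```python
# Curie susceptibility per unit volume in the text's convention chi = m/B,
# chi = n g^2 mu_B^2 J(J+1) / (3 k T), Eq. (8.90).
mu_B = 9.274e-24   # Bohr magneton, J/T
k = 1.381e-23      # Boltzmann constant, J/K

def curie_chi(n, g, J, T):
    return n * (g * mu_B) ** 2 * J * (J + 1) / (3.0 * k * T)

n = 2.0e28         # magnetic ions per m^3 (assumed value)
g, J = 2.0, 3.5    # assumed Lande factor and angular momentum quantum number
chi = curie_chi(n, g, J, 300.0)
print(chi)                      # chi = m/B; multiply by mu_0 for the dimensionless SI value
print(4e-7 * 3.14159 * chi)     # ~0.01, a reasonable paramagnetic susceptibility
```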

PROBLEMS
8.1 Considering the free motion of a particle in a cubical box, show
that the volume in phase space corresponding to a discrete
quantum state is h3.
8.2 Assuming that only the ground state and the first excited state
of a simple harmonic one-dimensional oscillator are appreciably
populated, find the mean energy of the oscillator.
8.3 Discuss whether the classical treatment would be valid for $^4$He gas
(i) at NTP,
(ii) at 3 K and atmospheric pressure.
8.4 Find the value of Z for helium gas at pressure 1 Torr and temperature 300 K.
8.5 Show that, when $T \ll \theta_r$, where $\theta_r$ is the rotational characteristic temperature, in the lowest approximation

8.6 The nuclear spin of deuterium (D$_2$) is 1. Find its rotational partition function.

8.7 Calculate the contribution of the vibration of molecules to the heat capacity of a diatomic gas consisting of N molecules at an absolute temperature equal to the characteristic temperature $\theta_v$.
8.8 The following data were obtained from the intensity measurements of the vibrational bands of a diatomic molecule excited in an arc flame.
v:          0       1       2       3       4
N_v/N_0:    1.000   0.250   0.062   0.016   0.005
Show that the gas is in thermodynamic equilibrium. Calculate the temperature of the gas, if the vibrational characteristic temperature is 3400 K.
8.9 Show that the entropy of an ideal gas in thermal equilibrium is given by the formula
$$S = -k\sum_s\left[\langle n_s\rangle\ln\langle n_s\rangle \pm (1 \mp \langle n_s\rangle)\ln(1 \mp \langle n_s\rangle)\right]$$
where the upper sign is for an FD gas and the lower for a BE gas.
8.10 Show how the expression (8.90) for the magnetic susceptibility $\chi$ can be obtained by finding the average magnetic moment of the molecules in the direction of the field applied.
8.11 The concentration of electrons in a gas discharge at temperature 2000 K is $n = 10^{18}\ \mathrm{m^{-3}}$. Find the pressure of the electrons.
9 IDEAL BOSE SYSTEMS

S.N. Bose, considering thermal radiation as a gas of photons, introduced


in 1924 the concept of their indistinguishability and obtained Planck’s
formula. Subsequently Einstein developed this idea further into what is
now known as Bose-Einstein statistics.

9.1 PHOTON GAS


Consider an enclosure containing electromagnetic radiation. If the
enclosure is maintained at a temperature T, it will emit and re-absorb
photons. After a certain lapse of time, a situation will be established in
which the photons and the matter of which the cavity is composed, will
be in thermodynamic equilibrium. The number of photons in such an
enclosure is not definite; but its total energy remains constant. The
electromagnetic radiation within the enclosure is called black-body
radiation. In this section we will discuss the properties of this radiation.
(i) Radiation pressure: Such radiation may be regarded as a gas of photons. The state of a photon can be specified by the magnitude and direction of its momentum and by the direction of polarization of the electromagnetic wave associated with it. If the wave is regarded as quantized, then the photon is considered as a relativistic particle of energy $\varepsilon = \hbar\omega$ and momentum $|p| = \hbar\omega/c$. The photons have integral spin and, hence, the photon gas obeys Bose statistics.
Since the gas is in equilibrium, its free energy must be a minimum. Therefore,
$$\left(\frac{\partial F}{\partial N}\right)_{T,V} = 0$$
Thus, the chemical potential of the photon gas $\mu = 0$. The grand partition function Z, therefore, is given by
$$Z = \prod_s \frac{1}{1 - e^{-\beta\hbar\omega_s}} \qquad ...(9.1)$$
or

$$\ln Z = -\sum_s \ln\left[1 - e^{-\beta\hbar\omega_s}\right] \qquad ...(9.2)$$

The number of allowed states with momentum between p and p + dp is $\dfrac{V}{2\pi^2\hbar^3}p^2\,dp$. In terms of the angular frequency $\omega$ (with $p = \hbar\omega/c$) the number is
$$\frac{V}{2\pi^2\hbar^3}\,\frac{\hbar^2\omega^2}{c^2}\,\frac{\hbar}{c}\,d\omega = \frac{V\omega^2}{2\pi^2 c^3}\,d\omega \qquad ...(9.3)$$
However, in the photon gas there are two independent directions of polarization of the electromagnetic wave perpendicular to its direction of propagation. Each photon may have one of these directions of polarization and, hence, the number of allowed states is doubled, i.e. it is equal to
$$\frac{V\omega^2}{\pi^2 c^3}\,d\omega$$
Therefore, the number of allowed states per unit volume is
$$\frac{\omega^2\,d\omega}{\pi^2 c^3} \qquad ...(9.4)$$
Replacing the summation by integration, Eq. (9.2) can be written as
$$\ln Z = -\frac{V}{\pi^2 c^3}\int_0^{\infty}\omega^2\ln\left\{1 - e^{-\beta\hbar\omega}\right\}d\omega \qquad ...(9.5)$$
Integrating by parts,
$$\ln Z = -\frac{V}{\pi^2 c^3}\left[\frac{\omega^3}{3}\ln\left\{1 - e^{-\beta\hbar\omega}\right\}\right]_0^{\infty} + \frac{V}{3\pi^2 c^3}\int_0^{\infty}\frac{\omega^3\,\beta\hbar\,e^{-\beta\hbar\omega}}{1 - e^{-\beta\hbar\omega}}\,d\omega$$
The first term vanishes at both the limits.
Therefore
$$\ln Z = \frac{V\beta\hbar}{3\pi^2 c^3}\int_0^{\infty}\frac{\omega^3}{e^{\beta\hbar\omega} - 1}\,d\omega \qquad ...(9.6)$$
Substituting $\beta\hbar\omega = x$,
$$\ln Z = \frac{V}{3\pi^2 c^3\beta^3\hbar^3}\int_0^{\infty}\frac{x^3}{e^x - 1}\,dx = \frac{V}{3\pi^2 c^3\beta^3\hbar^3}\cdot\frac{\pi^4}{15} \quad \text{(see Appendix B)}$$

$$= \frac{V\pi^2}{45}\left(\frac{kT}{\hbar c}\right)^3 \qquad ...(9.7)$$
Now
$$\langle E\rangle = -\frac{\partial \ln Z}{\partial\beta} = kT^2\frac{\partial \ln Z}{\partial T} = \frac{V\pi^2}{15}\,\frac{(kT)^4}{\hbar^3 c^3} \qquad ...(9.8)$$
and
$$\langle P\rangle = \frac{1}{\beta}\frac{\partial \ln Z}{\partial V} = \frac{\pi^2}{45}\,\frac{(kT)^4}{\hbar^3 c^3} \qquad ...(9.9)$$
Therefore $\langle P\rangle V = \frac{1}{3}\langle E\rangle$, i.e. $\langle P\rangle = \frac{1}{3}\dfrac{\langle E\rangle}{V}$.


Thus, the pressure of the radiation is equal to one-third of the energy density, a well-known result of radiation theory.
Note that the corresponding relation for a gas of material particles is
$$\langle P\rangle V = \frac{2}{3}\langle E\rangle \qquad ...(9.10)$$
Can you account for the difference in the behaviour of these two gases (Prob. 9.1)?
The entropy of the thermal photons is
$$S = k\left[\ln Z + \beta\langle E\rangle\right] = k\left[\frac{V\pi^2}{45}\left(\frac{kT}{\hbar c}\right)^3 + \frac{V\pi^2}{15}\left(\frac{kT}{\hbar c}\right)^3\right] = \frac{4V\pi^2 k^4 T^3}{45\hbar^3 c^3} \qquad ...(9.11)$$
The Sackur-Tetrode equation [Eq. (6.91)] shows that the entropy is of the order of the number of particles. It is believed that in the universe the number of photons is about $10^8$ times larger than the total number of molecules. Photons, therefore, provide the dominant contribution to the entropy of the universe, although the particles dominate the total energy.
(ii) Radiation density: Since the number of states per unit volume in the frequency range $\omega$ to $\omega + d\omega$ is $\dfrac{\omega^2\,d\omega}{\pi^2 c^3}$, the energy density per frequency interval $d\omega$ is
$$\rho(\omega, T)\,d\omega = \frac{\omega^2\,d\omega}{\pi^2 c^3}\,\frac{\hbar\omega}{e^{\hbar\omega/kT} - 1} \qquad ...(9.12)$$
Therefore
$$\rho(\omega, T) = \frac{\hbar\omega^3}{\pi^2 c^3}\,\frac{1}{e^{\hbar\omega/kT} - 1} \qquad ...(9.13)$$
which is Planck's radiation law. The variation of $\rho(\omega, T)$ as a function of $\omega$ and also of T is shown in Fig. 9.1.

Fig. 9.1

When $\hbar\omega/kT \ll 1$, expanding the exponential and neglecting higher powers, Eq. (9.12) reduces to
$$\rho(\omega, T)\,d\omega = \frac{\omega^2 kT}{\pi^2 c^3}\,d\omega \qquad ...(9.14)$$
This is the Rayleigh-Jeans law.
When $\hbar\omega/kT \gg 1$, we have
$$\rho(\omega, T)\,d\omega = \frac{\hbar\omega^3}{\pi^2 c^3}\,e^{-\hbar\omega/kT}\,d\omega \qquad ...(9.15)$$
which is Wien's law.
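The limiting behaviour of Eqs. (9.13) to (9.15) can be checked numerically. The Python sketch below is ours; it evaluates the Planck density and its two limits at T = 5800 K, a temperature chosen only because it reappears in Prob. 9.5.

```python
import math

hbar = 1.055e-34   # J s
k = 1.381e-23      # J/K
c = 2.998e8        # m/s

def planck(omega, T):
    """Spectral energy density per unit angular frequency, Eq. (9.13)."""
    return hbar * omega**3 / (math.pi**2 * c**3 * (math.exp(hbar * omega / (k * T)) - 1.0))

def rayleigh_jeans(omega, T):
    """Low-frequency limit, Eq. (9.14)."""
    return omega**2 * k * T / (math.pi**2 * c**3)

def wien(omega, T):
    """High-frequency limit, Eq. (9.15)."""
    return hbar * omega**3 * math.exp(-hbar * omega / (k * T)) / (math.pi**2 * c**3)

T = 5800.0
for omega in (1e13, 1e15, 5e15):   # rad/s
    print(omega, planck(omega, T), rayleigh_jeans(omega, T), wien(omega, T))
# Rayleigh-Jeans agrees at low omega, Wien at high omega, as stated above.
```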
(iii) Emissivity: If a small hole is made in the side of the cavity, some of the electromagnetic energy will be radiated. The amount of energy coming out per unit area per unit time is called the black-body emissivity.
Let us calculate the radiation emitted through the hole. A section of the cavity near the hole is shown in Fig. 9.2. The photons that are emitted through the hole in a short time $\Delta t$ must be located inside a hemisphere of radius $R = c\Delta t$ and must be directed towards the hole. Imagine an elementary toroid of section $r\,d\theta\,dr$ at a distance r from the hole and inside the hemisphere. The volume of the elementary toroid is $2\pi r\sin\theta\,r\,d\theta\,dr$ and its energy content is $2\pi r\sin\theta\,r\,d\theta\,dr\,\rho(\omega)d\omega$, where $\rho(\omega)$ is the density of radiation of frequency between $\omega$ and $\omega + d\omega$. Now, if $\Delta S$ is the area of the hole, the solid angle subtended by $\Delta S$ at the element of the toroid is

Fig. 9.2
$$d\Omega = \frac{\Delta S\cos\theta}{r^2} \qquad ...(9.16)$$
Since all photon directions are distributed over the full solid angle $\Omega = 4\pi$, the probability that the photons are directed towards $\Delta S$ is
$$\frac{d\Omega}{4\pi} = \frac{\Delta S\cos\theta}{4\pi r^2} \qquad ...(9.17)$$
Therefore, the energy that leaks from the toroid through the hole is
$$2\pi r\sin\theta\,r\,d\theta\,dr\,\rho(\omega)d\omega\,\frac{\Delta S\cos\theta}{4\pi r^2} = \frac{\rho(\omega)\Delta S\,d\omega}{2}\sin\theta\cos\theta\,d\theta\,dr \qquad ...(9.18)$$
Hence, the energy from the cavity that leaks through the hole is
$$\frac{\rho(\omega)\Delta S\,d\omega}{2}\int_0^{\pi/2}\!\!\int_0^{c\Delta t}\sin\theta\cos\theta\,d\theta\,dr = \frac{\rho(\omega)\Delta S\,d\omega\,c\Delta t}{4} \qquad ...(9.19)$$
The energy that leaks through unit area per unit time, therefore, is $\frac{1}{4}\rho(\omega)c\,d\omega$. Hence
$$e(\omega, T) = \frac{1}{4}\rho(\omega)c\,d\omega = \frac{\hbar\omega^3}{4\pi^2 c^2}\,\frac{d\omega}{e^{\hbar\omega/kT} - 1} \qquad ...(9.20)$$
The total energy radiated per unit area per unit time is
$$E_{rad} = \int_0^{\infty}\frac{\hbar\omega^3}{4\pi^2 c^2}\,\frac{d\omega}{e^{\hbar\omega/kT} - 1}$$

Putting $x = \hbar\omega/kT$,
$$E_{rad} = \frac{\hbar}{4\pi^2 c^2}\left(\frac{kT}{\hbar}\right)^4\int_0^{\infty}\frac{x^3}{e^x - 1}\,dx = \frac{\hbar}{4\pi^2 c^2}\left(\frac{kT}{\hbar}\right)^4\frac{\pi^4}{15} = \frac{\pi^2 k^4}{60 c^2\hbar^3}\,T^4 = \sigma T^4 \qquad ...(9.21)$$
where
$$\sigma = \frac{\pi^2 k^4}{60 c^2\hbar^3} = 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \qquad ...(9.22)$$
This is known as Stefan's law and $\sigma$ as the Stefan-Boltzmann constant.
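The numerical value quoted in Eq. (9.22) follows directly from the constants; a short check (ours) is given below.

```python
import math

# Stefan-Boltzmann constant from Eq. (9.22): sigma = pi^2 k^4 / (60 c^2 hbar^3).
k = 1.380649e-23     # J/K
c = 2.99792458e8     # m/s
hbar = 1.054572e-34  # J s

sigma = math.pi**2 * k**4 / (60.0 * c**2 * hbar**3)
print(sigma)                  # ~5.67e-8 W m^-2 K^-4
print(sigma * 5800.0**4)      # emissive power of a 5800 K black body, W/m^2
```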
(iv) Equilibrium number of photons in the radiation cavity: Since the radiation density in the frequency interval $\omega$ to $\omega + d\omega$ is $\rho(\omega, T)d\omega$, the number of photons, each with energy $\hbar\omega$, per unit volume is $\dfrac{\rho(\omega, T)d\omega}{\hbar\omega}$.
Therefore
$$\frac{N}{V} = \int_0^{\infty}\frac{\omega^2}{\pi^2 c^3}\,\frac{d\omega}{e^{\hbar\omega/kT} - 1} = \frac{1}{\pi^2 c^3}\left(\frac{kT}{\hbar}\right)^3\int_0^{\infty}\frac{x^2}{e^x - 1}\,dx = \frac{2.404\,k^3 T^3}{\pi^2 c^3\hbar^3} \qquad ...(9.23)$$
Therefore, the equilibrium number of photons in the cavity is given by
$$N \propto VT^3 \qquad ...(9.24)$$
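For orientation, Eq. (9.23) can be evaluated at ordinary temperatures. The short sketch below (ours) gives the equilibrium photon number density at 300 K and at 5800 K.

```python
import math

# Equilibrium photon number density, Eq. (9.23): N/V = 2.404 (kT/(hbar c))^3 / pi^2.
k = 1.381e-23        # J/K
hbar = 1.055e-34     # J s
c = 2.998e8          # m/s

def photon_density(T):
    return 2.404 * (k * T / (hbar * c)) ** 3 / math.pi**2

print(photon_density(300.0))    # ~5e14 photons per m^3 at room temperature
print(photon_density(5800.0))   # ~4e18 photons per m^3 at the solar surface temperature
```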

9.2 EINSTEIN'S DERIVATION OF PLANCK'S FORMULA
Einstein gave a simplified statistical method for deriving Planck's formula (9.13) by considering a system of oscillators and radiators in equilibrium
in an isothermal enclosure.
We begin by considering two levels. Figure 9.3 shows two energy levels $E_1$ and $E_2$, populated, respectively, by $N_1$ and $N_2$ atoms per unit volume.

Fig. 9.3

There are three possible radiative processes connecting the two levels:
(i) In the presence of radiation of density $\rho(\omega_{21})$ of the appropriate frequency, an atom in level 1 may absorb energy $\hbar\omega_{21}$ and jump to level 2. The number of atoms undergoing this transition will be proportional to the population $N_1$, to the external radiation density $\rho(\omega_{21})$, and to the probability of transition in unit time $B_{12}$. The number of such transitions, therefore, is $B_{12}N_1\rho(\omega_{21})$ per second per $\mathrm{m^3}$.
(ii) An atom in level 2 may undergo a spontaneous transition to level 1, with emission of energy $\hbar\omega_{21}$, its probability in unit time being $A_{21}$. The number of such transitions is $N_2 A_{21}$. Radiations of this type are in random directions and with random phases and, hence, are incoherent.
(iii) An atom in state 2, in the presence of radiation, may undergo an induced transition to level 1 with emission of energy $\hbar\omega_{21}$. The probability for this transition is denoted by $B_{21}$ and the number of atoms undergoing this transition is $B_{21}N_2\rho(\omega_{21})$. The induced radiation has the same direction and phase as the stimulating radiation and, hence, is coherent.
When thermodynamic equilibrium is reached, the rate at which the
atoms enter level 2 must equal the rate at which they leave it.
Therefore
$$N_1 B_{12}\rho(\omega_{21}) = N_2 A_{21} + N_2 B_{21}\rho(\omega_{21}) \qquad ...(9.25)$$
or
$$\rho(\omega_{21}) = \frac{N_2 A_{21}}{N_1 B_{12} - N_2 B_{21}} = \frac{A_{21}}{\dfrac{N_1}{N_2}B_{12} - B_{21}} \qquad ...(9.26)$$
Assuming that the distribution of atoms over the various levels is the Boltzmann distribution,
$$\frac{N_1}{N_2} = \frac{g_1 e^{-E_1/kT}}{g_2 e^{-E_2/kT}} = \frac{g_1}{g_2}\,e^{(E_2 - E_1)/kT} = \frac{g_1}{g_2}\,e^{\hbar\omega_{21}/kT} \qquad ...(9.27)$$
where $\hbar\omega_{21} = E_2 - E_1$ and $g_1$, $g_2$ are the degeneracies of the two levels.
Therefore
$$\rho(\omega_{21}) = \frac{A_{21}}{\dfrac{g_1}{g_2}B_{12}\,e^{\hbar\omega_{21}/kT} - B_{21}} \qquad ...(9.28)$$
This should be identical with Planck's expression
$$\rho(\omega_{21}) = \frac{\hbar\omega_{21}^3}{\pi^2 c^3}\,\frac{1}{e^{\hbar\omega_{21}/kT} - 1} \qquad ...(9.29)$$
Equation (9.28) will be identical with Eq. (9.29) only if
$$g_1 B_{12} = g_2 B_{21} \qquad ...(9.30)$$
and
$$A_{21} = \frac{\hbar\omega_{21}^3}{\pi^2 c^3}\,B_{21} \qquad ...(9.31)$$
These relations connect the three coefficients $A_{21}$, $B_{21}$, $B_{12}$, which are called Einstein's coefficients of spontaneous emission, induced emission and induced absorption, respectively, and are intrinsic atomic properties.
If an assembly of atoms which have energy levels (1) and (2) as in Fig. 9.3 is irradiated with light of frequency $\omega_{21}$, every atom in level (1) has a probability of capturing a photon and thus attenuating the light, whereas every atom in level (2) has a probability of adding a photon and thus amplifying the light. The emerging radiation will have intensity proportional to
$$(N_2 - N_1) \qquad ...(9.32)$$
if spontaneous emissions are eliminated. This intensity will be reduced or amplified with respect to the incident radiation, according as
$$N_2 < N_1 \quad \text{or} \quad N_2 > N_1 \qquad ...(9.33)$$
Normally there are more atoms in level (1) than in (2), so that in thermal equilibrium absorption overrides stimulated emission, by a factor $e^{\hbar\omega/kT}$. To obtain a net amplification, $N_2$ must be greater than $N_1$. This is a non-equilibrium situation and is known as 'population inversion'. Two devices have been built around this process: (1) the Maser (Microwave Amplification by Stimulated Emission of Radiation) and (2) the Laser (Light Amplification by Stimulated Emission of Radiation). The first works in the microwave region and the second in the optical region. The amplification in some cases is so powerful as to produce a highly monochromatic flux of several megawatts per square metre.

9.3 BOSE-EINSTEIN CONDENSATION
The ideal Bose-Einstein gas is particularly interesting because it can
undergo a phase transition.
We have seen that the average number of bosons in the energy state $\varepsilon_s$ is given by
$$\langle n_s\rangle = \frac{1}{e^{\beta(\varepsilon_s - \mu)} - 1}$$
Since $\sum_s\langle n_s\rangle = N$,
$$N = \sum_s\frac{1}{e^{\beta(\varepsilon_s - \mu)} - 1} \qquad ...(9.34)$$
or, replacing the summation by integration,

$$N = \frac{V}{2\pi^2\hbar^3}\int_0^{\infty}\frac{p^2\,dp}{e^{\beta(\varepsilon - \mu)} - 1} \qquad ...(9.35)$$

We have also shown in the preceding chapter that the BE distribution goes over to the MB distribution when $e^{-\beta\mu} \gg 1$ [Eq. (8.36)], i.e. when
$$\left(\frac{2\pi m kT}{h^2}\right)^{3/2}\left(\frac{V}{N}\right) \gg 1 \qquad ...(9.36)$$
This condition is satisfied when T is high. Therefore, at high temperatures $\mu$ is large and negative [see Eq. (6.95)]. With the decrease in temperature, the magnitude of $\mu$ also decreases. At what temperature will $\mu$ be equal to zero? To answer this question we put $\mu = 0$ in Eq. (9.35) and denote the corresponding temperature by $T_b$.
Therefore
$$N = \frac{V}{2\pi^2\hbar^3}\int_0^{\infty}\frac{p^2\,dp}{e^{p^2/2mkT_b} - 1} \qquad ...(9.37)$$
Let $x^2 = p^2/2mkT_b$. Substituting in Eq. (9.37),
$$N = \frac{V}{2\pi^2\hbar^3}(2mkT_b)^{3/2}\int_0^{\infty}\frac{x^2\,dx}{e^{x^2} - 1} = \frac{V}{2\pi^2\hbar^3}(2mkT_b)^{3/2}\times 2.612\,\frac{\sqrt{\pi}}{4} \quad \text{(see Appendix B)} \qquad ...(9.38)$$
Therefore
$$T_b = \frac{2\pi\hbar^2}{mk}\left(\frac{N}{2.612\,V}\right)^{2/3} \qquad ...(9.39)$$

$T_b$, therefore, is a characteristic temperature that depends on the particle mass m and the particle density N/V, and is known as the Bose temperature.
What will happen if the temperature is further reduced? $\mu$ cannot be positive, nor can it become negative again. The only possibility is for $\mu$ to remain equal to zero after it has attained the value zero.
The variation of $\mu$ for the ideal boson gas is shown in Fig. 9.4.

Fig. 9.4

A difficulty creeps in at this stage. The very first term, i.e. the ground-state term with $\varepsilon_s = 0$, in the summation of Eq. (9.34) tends to infinity if $\mu = 0$. The reason for this contradiction is the following: for a boson gas there is no limitation on the number of particles that can belong to a single state, and yet the ground state was given zero weight; this is evident from the fact that, on replacing the summation by integration, the density of states is proportional to $V\,d^3p$, i.e. to $V\sqrt{\varepsilon}\,d\varepsilon$, which vanishes at $\varepsilon = 0$.
To remove this contradiction, we isolate the population of the ground state, $N_0$, in a separate term and write
$$N = N_0 + \frac{V}{2\pi^2\hbar^3}\int_0^{\infty}\frac{p^2\,dp}{e^{\beta(p^2/2m - \mu)} - 1} \qquad ...(9.40)$$
The integral is over the excited states $\varepsilon \neq 0$. The lower limit can still be taken as zero, since $\varepsilon = 0$ does not contribute to the integral.
Integrating as before,
$$N = N_0 + \frac{V}{2\pi^2\hbar^3}(2mkT)^{3/2}\times 2.612\,\frac{\sqrt{\pi}}{4} \qquad ...(9.41)$$
Substituting for V from Eq. (9.39),
$$N = N_0 + \frac{(2mkT)^{3/2}}{2\pi^2\hbar^3}\times 2.612\times\frac{\sqrt{\pi}}{4}\times\frac{1}{2.612}\left(\frac{2\pi\hbar^2}{mk}\right)^{3/2}\frac{N}{T_b^{3/2}} = N_0 + N\left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.42)$$
Therefore
$$\frac{N_0}{N} = 1 - \left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.43)$$
As $T \to 0$, $N_0 \to N$: all bosons at T = 0 are in the ground state. This macroscopic occupation of the zero-momentum ground state is called Bose-Einstein condensation. It may be noted that we are referring here to 'condensation' in momentum space, and not to an actual condensation of the gas. As T increases, the number of particles in the ground state decreases, the remaining particles being distributed over the other states.
For $T < T_b$ the system may be regarded as a mixture of two phases: (i) a gaseous phase with $N' = N(T/T_b)^{3/2}$ particles distributed over the excited states, i.e. states for which $\varepsilon \neq 0$, and (ii) a condensed phase with $N_0 = N - N'$ particles in the ground state, with $\varepsilon = 0$. For $T > T_b$ the system is in the gaseous phase alone.

For $T < T_b$, since the particles in the ground state ($\varepsilon = 0$) do not contribute to the energy, the total energy is determined by the particles in the excited states $\varepsilon > 0$.
Therefore
$$E = \frac{V}{2\pi^2\hbar^3}\int_0^{\infty}\frac{p^2}{2m}\,\frac{p^2\,dp}{e^{p^2/2mkT} - 1} = \frac{V(2m)^{3/2}(kT)^{5/2}}{2\pi^2\hbar^3}\int_0^{\infty}\frac{x^4\,dx}{e^{x^2} - 1} = \frac{V(2m)^{3/2}(kT)^{5/2}}{2\pi^2\hbar^3}\times\frac{3}{8}\times 1.341\sqrt{\pi} \quad \text{(see Appendix B)}$$
$$= \frac{3\times 1.341}{16}\,\frac{V(2m)^{3/2}k^{5/2}T^{5/2}}{\pi^{3/2}\hbar^3} \qquad ...(9.44)$$
Substituting for V from Eq. (9.39),
$$E = 0.77\,NkT\left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.45)$$
The heat capacity of the ideal boson gas can be determined from the equation
$$C_v = \left(\frac{\partial E}{\partial T}\right)_V = 1.93\,Nk\left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.46)$$
We see from Eq. (9.46) that at the condensation temperature $T = T_b$ the heat capacity exceeds the classical value $\frac{3}{2}Nk$. The variation of $C_v$ with T is shown in Fig. 9.5.
Now $pV = \frac{2}{3}E$. Substituting for E from Eq. (9.44),
$$p(T) = \frac{1.341\,(2m)^{3/2}k^{5/2}T^{5/2}}{8\pi^{3/2}\hbar^3} \qquad ...(9.47)$$
Substituting for m from Eq. (9.39),
$$p(T) = \frac{1.341}{2.612}\,\frac{kT^{5/2}}{T_b^{3/2}}\left(\frac{N}{V}\right) \qquad ...(9.48)$$
Therefore the pressure at the transition point, i.e. at $T = T_b$, is
$$p(T_b) = 0.5134\,\frac{NkT_b}{V} \qquad ...(9.49)$$

Thus the pressure of an ideal Bose gas at the transition temperature $T_b$ is about one-half of that of an equivalent Boltzmann gas. From Eqs. (9.48) and (9.42),

$$p(T) = 0.5134\,\frac{NkT}{V}\left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.50)$$
$$= 0.5134\,\frac{N'kT}{V} \qquad ...(9.51)$$
We conclude, therefore, that the particles in the condensed phase do
not exert any pressure, but the pressure exerted by the particles in the
excited state is about half of the pressure that would have been exerted
if the gas were Boltzmannian.
The entropy of the boson gas is most expeditiously obtained from the relation
$$E = TS - pV + \mu N \qquad ...(9.52)$$
Since $\mu = 0$,
$$S = \frac{E + pV}{T} = \frac{5pV}{2T} \quad \left(\text{because } pV = \tfrac{2}{3}E\right)$$
$$= \frac{5}{2}\times 0.5134\,N'k = \frac{5}{2}\times 0.5134\,Nk\left(\frac{T}{T_b}\right)^{3/2} \qquad ...(9.53)$$

where we have used Eq. (9.51).
It is known that a peculiar phase transition occurs in liquid helium at 2.19 K. $^4$He obeys Bose statistics. However, the Bose distribution, which relates to ideal gases, cannot be applied directly to liquid helium. Nevertheless, it is interesting to compare the actual temperature of transition in liquid helium with the temperature at which Bose condensation would occur in gaseous helium of the same density.
For liquid $^4$He, $m = 6.65\times 10^{-27}$ kg and the molar volume is $V = 27.6\times 10^{-6}\ \mathrm{m^3/mole}$.
Substituting these in Eq. (9.39),
$$T_b = 3.14\ \mathrm{K}$$
which is close to the transition temperature.
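The estimate above is easily verified. The sketch below is ours; it evaluates Eq. (9.39) with the mass of a $^4$He atom and the molar volume of liquid helium taken as 27.6 cm$^3$/mole, as assumed above.

```python
import math

# Numerical check of the Bose temperature, Eq. (9.39), for liquid 4He.
hbar = 1.055e-34     # J s
k = 1.381e-23        # J/K
N_A = 6.022e23       # Avogadro's number, mol^-1

m = 6.65e-27         # kg, mass of a 4He atom
V_molar = 27.6e-6    # m^3 per mole of liquid 4He (value assumed above)
n = N_A / V_molar    # number density, m^-3

T_b = (2.0 * math.pi * hbar**2 / (m * k)) * (n / 2.612) ** (2.0 / 3.0)
print(T_b)           # ~3.1 K, to be compared with the lambda temperature 2.19 K
```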

9.4 SPECIFIC HEAT FROM LATTICE VIBRATION
W e have seen that the dilute gases can be regarded as ideal gases and
their thermodynamic properties can be successfully discussed by the
methods o f statistical mechanics. Liquids and dense gases are difficult to
handle because o f the intermolecular forces which play an important role.
In a crystalline solid, the atoms are very close; but such a system is well-
ordered. We will show in this section how the properties o f the simplest
o f these system s, the m onatom ic solids, can be discussed using the
techniques o f statistical mechanics.
A monatomic solid consists o f a large number o f atoms in regular
array, a lattice. Because o f thermal agitation each atom vibrates about its
equilibrium position.
The specific heat o f a substance is one o f the most im portant
thermodynamic quantities. Various factors contribute to the specific heat.
However, the dominant contribution in most solids is the energy given to
the lattice vibrations.
Consider a solid containing N non-interacting particles harmonically bound to centres of force. The Hamiltonian of each atom is
$$H = \sum_{i=1}^{3}\frac{p_i^2}{2m} + \sum_{i=1}^{3}\frac{1}{2}\kappa q_i^2 \qquad ...(9.54)$$
By the theorem of equipartition of energy,
$$\text{Energy/atom} = 6\times\frac{1}{2}kT = 3kT$$
and the total energy for N atoms is $E = 3NkT$, where N is Avogadro's number.
Therefore
$$C_v = 3Nk = 3R = 5.96\ \mathrm{cal/mole/deg} = 24.93\ \mathrm{J/mole/deg} \qquad ...(9.55)$$

This result asserts that all simple solids have the same molar specific
heat equal to 5.96 cal mole -1 deg-1 = 24.93 J mole-1 deg-1. This is known
as the law of Dulong and Petit.
It was noticed, however, that the specific heat of most materials
decreases as the temperature is lowered. Diamond, for example, has as
low a value of Cp as 1.1 even at 300 K. Classical statistical mechanics
could not explain this variation of the specific heat with temperature.
Einstein was the first to apply the quantum concept to the theory of the specific heat of solids. He made the simplifying assumption that all 3N vibrational modes of a three-dimensional solid of N atoms have the same frequency. The energy levels of the oscillator are quantized.
Therefore
$$\varepsilon_n = \left(n + \tfrac{1}{2}\right)\hbar\omega \qquad ...(9.56)$$
The partition function Z is
$$Z = \sum_{n=0}^{\infty}e^{-\beta(n + 1/2)\hbar\omega} = \frac{e^{-\beta\hbar\omega/2}}{1 - e^{-\beta\hbar\omega}} \qquad ...(9.57)$$
or
$$\ln Z = -\frac{\beta\hbar\omega}{2} - \ln\left(1 - e^{-\beta\hbar\omega}\right)$$
$$\langle\varepsilon\rangle = -\frac{\partial \ln Z}{\partial\beta} = \frac{1}{2}\hbar\omega + \frac{\hbar\omega\,e^{-\hbar\omega/kT}}{1 - e^{-\hbar\omega/kT}} \qquad ...(9.58)$$
Thus, the total mean energy is
$$\langle E\rangle = 3N\langle\varepsilon\rangle \qquad ...(9.59)$$
and
$$C_v = \left(\frac{\partial\langle E\rangle}{\partial T}\right)_V = \frac{3N\hbar^2\omega^2\,e^{\hbar\omega/kT}}{kT^2\left(e^{\hbar\omega/kT} - 1\right)^2} = 3Nk\left(\frac{\theta_E}{T}\right)^2\frac{e^{\theta_E/T}}{\left(e^{\theta_E/T} - 1\right)^2} \qquad ...(9.60)$$
where $\theta_E = \hbar\omega/k$ is called the Einstein characteristic temperature.
Now if $T \gg \theta_E$, i.e. $\theta_E/T \ll 1$,

we can write $e^{\theta_E/T} \approx 1 + \dfrac{\theta_E}{T}$.
Therefore, from Eq. (9.60),
$$C_v = 3Nk\left(\frac{\theta_E}{T}\right)^2\frac{1 + \theta_E/T}{(\theta_E/T)^2} = 3Nk + 3Nk\,\frac{\theta_E}{T} \approx 3Nk = 3R \qquad ...(9.61)$$
Thus, at high temperature, the result tends towards the classical one. On the other hand, if $T \ll \theta_E$, i.e. $\theta_E/T \gg 1$,
$$C_v = 3Nk\left(\frac{\theta_E}{T}\right)^2 e^{-\theta_E/T} \qquad ...(9.62)$$
The specific heat, therefore, falls at an exponentially fast rate and tends to zero as $T \to 0$. Einstein's theory thus explained for the first time why the molar specific heat decreases with temperature. This variation is shown in Fig. 9.6. The specific heat tends to zero at absolute zero and rises to the classical value at high temperatures. The shaded area is equal to the zero-point energy. Can you prove this (Prob. 9.6)?

Fig. 9.6
The rate of fall, however, turns out to be too fast in comparison with the observed rate. In fact, it has been observed experimentally that $C_v \propto T^3$ as $T \to 0$. We have to account for this discrepancy.

9.5 DEBYE'S MODEL OF SOLIDS: PHONON GAS


The discrepancy referred to in the preceding section is due to the
assumption that all atoms vibrate with the same frequency. In reality, if
a solid has N atoms, it has 3N normal modes o f vibration and, hence 3
N different frequencies. To remove the deficiencies of the Einstein model,
we need a more realistic appraisal of the actual distribution of frequencies.
In the improved model suggested by Debye, the discreteness of the atoms in the solid is neglected and the solid is treated as if it were a continuous elastic medium, each normal mode of vibration of the elastic continuum being characterized by a frequency $\omega$. If the temperature is not too high, the amplitudes of vibration of individual atoms in a solid are relatively small, and the collective modes of vibration involving groups of atoms are the sound waves which can propagate through the solid. The energy of these sound waves inside a solid medium can be considered to be quantized in the form of phonons, just as the energy of electromagnetic radiation is quantized in the form of photons. The energy of a phonon of frequency $\omega$ is $\hbar\omega$. Since phonons have integral angular momentum and there is no restriction on the number of phonons in a given mode, the assembly of phonons in a solid may be treated as a boson gas with zero chemical potential. However, since we cannot have free phonons in empty space without a crystal lattice, the phonon is not considered a true particle. It is sometimes called a quasiparticle.
A plane-wave expansion is always possible for the description of the
acoustic waves in solids. Three types of plane waves can propagate in a
solid; two of these waves have polarization transverse to the direction of
propagation k, and the third has longitudinal polarization along k. The
velocities of propagation in the longitudinal and transverse directions are
different. Let us denote them by uL and vT respectively.
The number of normal modes with frequency between $\omega$ and $\omega + d\omega$ is given, analogously to the photon case, by
$$\sigma_s(\omega)\,d\omega = \frac{V\omega^2\,d\omega}{2\pi^2 v_s^3} \qquad ...(9.63)$$
where $v_s$ is the velocity of the concerned wave. Since there are two transverse and one longitudinal types of vibration, the transverse mode being doubly degenerate, the total number of modes of vibration in the frequency interval $\omega$ to $\omega + d\omega$ will be
$$\sigma(\omega)\,d\omega = \frac{V\omega^2}{2\pi^2}\left(\frac{1}{v_L^3} + \frac{2}{v_T^3}\right)d\omega \qquad ...(9.64)$$
Debye assumed that the spectrum of frequencies is cut off at an upper limit $\omega_D$, such that the total number of normal modes is equal to 3N, i.e.
$$\int_0^{\omega_D}\sigma(\omega)\,d\omega = 3N \qquad ...(9.65)$$
where $\omega_D$ is the maximum frequency of phonons in the crystal.
Therefore
$$\int_0^{\omega_D}\frac{V\omega^2}{2\pi^2}\left(\frac{1}{v_L^3} + \frac{2}{v_T^3}\right)d\omega = 3N \qquad ...(9.66)$$
i.e.
$$\frac{V\omega_D^3}{6\pi^2}\left(\frac{1}{v_L^3} + \frac{2}{v_T^3}\right) = 3N$$
and hence
$$\omega_D^3 = \frac{18\pi^2 N}{V}\left(\frac{1}{v_L^3} + \frac{2}{v_T^3}\right)^{-1} \qquad ...(9.67)$$

Using Eqs. (9.64) and (9.67), we find that the Debye spectrum is given by
$$\sigma(\omega) = \begin{cases}\dfrac{9N\omega^2}{\omega_D^3} & \text{for } \omega \le \omega_D\\[2mm] 0 & \text{for } \omega > \omega_D\end{cases} \qquad ...(9.68)$$
In Fig. 9.7 we compare the density of states for copper deduced from
neutron scattering experim ents with those derived from Debye’s
assumption (dotted curve).

Fig. 9.7
We see that the Debye assumption is reasonably valid for low-frequency modes. There are serious discrepancies in the region of higher frequencies (optical modes). The actual maximum frequency seems to be quite close to the Debye value, but the centre of gravity of the frequency distribution
is lowered. These finer details o f the spectrum, however, are not very
important when we consider averaged quantities such as the specific
heat.
Since the number of phonons is not conserved, the chemical potential for phonons is $\mu = 0$. Therefore, the distribution law for phonons will be
$$\langle n_s\rangle = \frac{1}{e^{\hbar\omega/kT} - 1} \qquad ...(9.69)$$
The internal energy of the phonon gas is
$$E = \int_0^{\omega_D}\hbar\omega\,\langle n_s\rangle\,\sigma(\omega)\,d\omega = \int_0^{\omega_D}\frac{\hbar\omega}{e^{\hbar\omega/kT} - 1}\cdot\frac{9N\omega^2}{\omega_D^3}\,d\omega = \frac{9N\hbar}{\omega_D^3}\int_0^{\omega_D}\frac{\omega^3}{e^{\hbar\omega/kT} - 1}\,d\omega \qquad ...(9.70)$$

Put $x = \hbar\omega/kT$. Then
$$E = \frac{9N\hbar}{\omega_D^3}\left(\frac{kT}{\hbar}\right)^4\int_0^{\hbar\omega_D/kT}\frac{x^3}{e^x - 1}\,dx \qquad ...(9.71)$$
Let $\theta_D = \hbar\omega_D/k$; $\theta_D$ is called the Debye characteristic temperature.
Therefore
$$E = \frac{9NkT^4}{\theta_D^3}\int_0^{\theta_D/T}\frac{x^3}{e^x - 1}\,dx \qquad ...(9.72)$$
Now if $T \gg \theta_D$, $x_{max} = \dfrac{\hbar\omega_D}{kT} = \dfrac{\theta_D}{T}$ is small and, therefore, expanding the exponential and neglecting the higher powers, we can write
$$\frac{x^3}{e^x - 1} \approx x^2 \qquad ...(9.73)$$
Therefore
$$E = \frac{9NkT^4}{\theta_D^3}\int_0^{\theta_D/T}x^2\,dx = \frac{9NkT^4}{\theta_D^3}\cdot\frac{\theta_D^3}{3T^3} = 3NkT \qquad ...(9.74)$$
This leads to the Dulong-Petit law
$$C_v = \left(\frac{\partial E}{\partial T}\right)_V = 3Nk = 3R \qquad ...(9.75)$$
Thus, for temperatures greater than the Debye temperature, the lattice behaves classically.
If, on the other hand, $T \ll \theta_D$, then $\theta_D/T \to \infty$ and
$$\int_0^{\theta_D/T}\frac{x^3}{e^x - 1}\,dx \approx \int_0^{\infty}\frac{x^3}{e^x - 1}\,dx = \frac{\pi^4}{15} \qquad ...(9.76)$$
Hence
$$E = \frac{9NkT^4}{\theta_D^3}\cdot\frac{\pi^4}{15} = \frac{3\pi^4 NkT^4}{5\theta_D^3} \qquad ...(9.77)$$
and the specific heat is
$$C_v = \frac{12}{5}\pi^4 Nk\left(\frac{T}{\theta_D}\right)^3 \qquad ...(9.78)$$
Therefore
$$C_v \propto T^3 \qquad ...(9.79)$$
The general behaviour of the specific heat calculated from the general expression (9.72) for E is shown in Fig. 9.8.

Fig. 9.8
Dotted curve corresponds to the Einstein model.
The specific heat as defined by Eq. (9.78) shows a very good agreement
with experimental results.
Some values of $\theta_D$ are given in Table 9.1.

Table 9.1
Substance    θ_D (K)
Al           426
Cu           335
Na           158
NaCl         321
KCl          235
Diamond      2200

We find from Eq. (9.78) that, although $\theta_D$ may vary widely from solid to solid, the plot of $C_v$ against $T/\theta_D$ should be the same for all monatomic solids.
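The full temperature dependence implied by Eq. (9.72) is conveniently evaluated numerically. The sketch below is ours; it computes the molar heat capacity of the Debye model by a simple midpoint-rule integration, using $\theta_D = 335$ K for copper from Table 9.1, and reproduces the $T^3$ law at low T and the Dulong-Petit value at high T.

```python
import math

def debye_cv(T, theta_D, R=8.314, n_steps=2000):
    """Molar heat capacity of the Debye model,
    C_v = 9 R (T/theta_D)^3 * integral_0^(theta_D/T) x^4 e^x / (e^x - 1)^2 dx,
    i.e. the temperature derivative of Eq. (9.72); midpoint rule for the integral."""
    x_max = theta_D / T
    h = x_max / n_steps
    integral = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * h
        ex = math.exp(x)
        integral += x**4 * ex / (ex - 1.0) ** 2
    integral *= h
    return 9.0 * R * (T / theta_D) ** 3 * integral

theta_D = 335.0   # K, Debye temperature of copper (Table 9.1)
for T in (10.0, 50.0, 300.0, 1000.0):
    print(T, debye_cv(T, theta_D))
# At T << theta_D the values follow the T^3 law of Eq. (9.78);
# at T >> theta_D they approach the Dulong-Petit value 3R ~ 24.9 J/mol/K.
```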
It must be mentioned here that the Debye model fails at extremely
high temperatures, because the forces between atoms are no longer
harmonic and the phonons interact strongly with each other.
A question that arises here is: since sound waves can also exist in liquids, will the quantum effects discussed above for solids be manifest in liquids at low temperatures? Two points of importance to be noted in this respect are: (i) liquids can have only longitudinal modes of vibration; they cannot sustain transverse modes, as they cannot withstand shear stress; and (ii) normal modes in liquids may not be purely harmonic. There may also be other excitations in addition to the phonons, such as vortex flow, turbulence or roton excitations.
Experimentation with liquids is difficult because most liquids freeze long before the temperature is low enough to exhibit the $T^3$ behaviour. The only exception is liquid helium, $^4$He, which remains liquid at very low temperatures. Since liquids can have only longitudinal modes of vibration, the formula (9.78) for $C_v$ and Eq. (9.67) for $\omega_D$ reduce to
Ideal Bose Systems 175

kT
c = ^ 5— Nk ftu>n ...(9.80)

1/3
|6 k2N
and coD = ...(9.81)

where v is the velocity of sound in the liquid.

Therefore C = T3 ...(9.82)
" 15ft3o3
Hence, specific heat per gram of the liquid is

C = 27l 2 ^ q r 3 ...(9.83)
u 15pft3u3
where p is the density of the liquid.
Substituting the value of the density of liquid helium p = 145.5 kg/m 3
and the velocity of sound v = 238 m/sec, we get
Cv = 20.7 T 3J/kg/K ...(9.84)
which agrees fairly well with the experimental value
= (20.4 ± 0.4)T 3 J/kg/K ...(9.85)
observed for temperatures 0 < T < 0.6 K.

9.1 The mean pressure of an ideal gas of mateiral particles is related

to its total energy E by the relation (p )V = —(E ) irrespective of


3
whether the particles obey (BE or FD) statistics; while for photons

the relation is {p )V =-^'(23). How do you account for this


O
difference?
9.2 Find the mean energy per photon in a black-body radiation cavity.
9.3 Find the mean square fluctuation in the number of photons in
an enclosure of a given volume and temperature.
9.4 A spherical black-body at the initial temperature 1000 K is cooled
by radiation. Find the time it will take for the temperature to
drop to 100 K.
(Radius of the ,blaqk-body = 0.1 m
p = 8 x 10s kg/m 3 and Cu = 5 x 102 J/kg/degree
9.5 Assuming the sun to be a black-body at a temperature 5800 K,
calculate
(t) the power received per unit area o f the surface of the earth
176 Fundam entals of Statistical Mechanics

(n) the equilibrium temperature of the earth, assuming it to


be an ideal absorber and radiator (Radius o f the sun =
7 x 10s m
Distance o f the sun from the earth = 15 x 10 10 m)
9.G An atom has two electronic states. Set up a equation for the
ratio N 2/N1, if the atoms obey Bose-Einstein statistics. N lt Nz
are the numbers o f atoms in the ground and excited states.
Discuss what happens when Bose condensation sets in.
9.7 D eterm ine the Bose tem perature o f bosons each o f mass
6.65 xlO -27 kg and spin zero, their concentration being 1026 m3.
9.8 In the preceding example, determine the fraction N 0 IN of the
bosons in the single particle ground state at a temperature
0.2 Tb.

□ □ □
ID E A L FERMI SYSTEM S

| " 1 0 .F F E R IV H E N E R G Y *
The Fermi-Dirac distribution function which gives the average occupa­
tion of the energy level e is given by

f{z) = (nt ) = ----- ...(1 0 .1 )


e kT +1
Let us suppose that at T = 0, the value of p is gF. When T = 0, (ne)
has two possible values:

(n .) = — - — - = 1 if e < ur and
\ e— + i ^

= - ^ j - = 0 ifg > p F ...(10.2)

At absolute zero of temperature, the fermions will necessarily occupy


the lowest available energy states. An immediate consequence of the
Pauli exclusion principle is that, each quantum state can contain only one
fermion. Therefore, all the lowest states will be occupied with one fermion
in each, until all the fermions are accommodated. Under this condition
the gas is said to be degenerate.
1 1^ (e -g )
inz) ...(10.3)
= 2 4 kT
e kT + 1
Therefore
(i) If e < g - 2kT (ne) = 1 ...(10.4)
(ii) If e > g + 2kT {ne) = 0 ....(10.5)

{Hi) If e = g(n£) = ^ ...(10.6)


We can show this distribution with varying energy at OK graphically
(Fig. 10.1).
177
178 Fundamentals of Statistical Mechanics

The spread region of Fermi distribution, i.e., the region of e when


(nt \ changes from unity to zero (from p — 2kT to p + 2k'D, narrows as
the temperature decreases and at absolute zero becomes a sharp
discontinuity. The distribution, then, takes the form of a right angle. We
see from Fig. 10.1 that, at T = 0, all states with e < pF are occupied;
f <«■) = < n , >

Fig. 10.1
while those with energy e > pF are empty. The highest occupied level is
called the Fermi level and is represented by eF. Below this level there are
exactly N states, i f N is the total number o f fermions. The states above
this energy level are unoccupied. The value o f zF depends upon the num­
ber o f fermions in the gas. One can easily see that, at T = 0, zF coincides
with pF.
The value o f pF can be found as follows:

jn(e)d£ = N ...(10.7)
o
where JV is the total number o f particles in the gas.
Since at T = 0 all single-particle states up to eF are completely filled
with one particle per state,
for z < zF the number o f states a(e) = n(e),
and for e < zF the number o f states o(e) = 0 ...(1 0 .8 )
*/■
Therefore, J g (e )dz = N ...(10.9)
o
I f the particles have spin s, there are 2s + 1 single-particle states, all
having the same energy e. Therefore, the density of states c(e) dz in the
energy range from e to £ + dz is given by

V(2m )3/2(2S + 1 )
o(z)dz dz ...(1 0 .1 0 )
4 it2*8
Ideal Ferm i S ystem s 179

Hence, Eq. (10.9) can be written as

...(1 0 . 1 1 )
o 4nh3

i.e. ^ ^ ( 2 m )3/2 i E3/2 = N ...( 1 0 .1 2 )


4nh3 3

\2 /3
h2 f 6 N k2
or = ...(10.13)
2m (2 S + 1)V

The value o f the single-particle m omentum corresponding to the


Fermi energy is referred to as Fermi momentum and is denoted by p F
2 /3
6W
p F = Aj2 jm F = h ...(10.14)
(2S + 1)V

We give in this section a simpler method o f deriving Fermi energy of


electron gas. The method is based o f Heisenberg’s uncertainty principle
given by
Ax A p x = h ...(10.15)
Equation (10.15) can be generalised to all three components as:
(Ax Apx) (Ay Apy) (Az ApJ = h3 ...(10.16)

or (AP l Apy Apz) - ...(10.17)

Considering the momentum volume element (Apx Apy Apz) as a sphere


o f radius Ap

4 o h3
(Apx Apy Ap2) = —7i(Ap)3 = — ...(10.18)

By virtue of the uncertainty principle, each electron occupies at least


a volume AV and this electron can exist in either of the two possible spin
orientations.
2
Therefore, if N is the number of electrons in a unit volume, AV = —

h3N
Therefore |- 7 t ( A p ) 3 ...(10.19)
2
A. Sankaranarayan and S. George, Am. J. Phys. 40, 620, 1972.
18 0 Fundamentals of Statistical Mechanics

Hence

Ap ...(10 .20 )
-
Since AVr is the minimum volume needed to house an electron, Ap is
maximum as a consequence of the uncertainty principle.
(Ap f
Therefore ...(1 0 .2 1 )
~ 2m.

,h*_
...(1 0 .22 )
2m
which agrees with Eq. (10.13), if V = 1.
E xa m ple 10.1 The molar mass of lithium is 0.00694 and its density
0.53 x 103 kg/m3. Calculate the Fermi energy and Fermi temperature of
the electrons.
Unit volume contains
0.53 xlO 6
x 6.02xlO 23 atoms and as many conduction electrons.
6.94 x l 0 “3
N
Therefore, = 0.4597 x 1032
V
,2/3
ft2 f 3 n2N
Ewr, ---
2m l V
1.052 x 10” \2/3
2 x 9 .1 x 1 0 “
= 7.4 x 10“ 17

Hence T„ = —— — 53262 K
F k
We see, therefore, that Fermi temperatures are much higher than the
ordinary temperatures. Metals will have melted long before the energy
kT has approached the value o f the Fermi energy. This indicates that the
electron gas is to be treated as a separate case.

3 MEAN ENERGY OF FERMIONS AT T

Jen(e)de
(e(0)> = 5.--------- ...(10.23)
Jn(e)de
Ideal Fermi Systems 181

J Eo(e)de

'-F

J a(z)dz
o
v [is t i ] [ a » r w
4*V Q
J

47l2ft3 0

EJf e3/2de Q
- 5 Ef ..(10.24)
J e1/2de

The ground state, or zero-point energy, therefore is

£o _ g Ne.p ...(10.25)

Substituting for AT from Eq. (10.13)


3/2
( 2 s + 1 ) ^2 m\ c5/2
E0 = „ £jr ...(10.26)
0 IOji2 h2 J
The ground state pressure of the system is
2 (E, 2 N 2
= 5 - 5 mF ...(10.27)
' 3 l y
where n =N/V

10.4 FERMI GAS IN METALS;


I
Modern theory of electrical conduction in metals began with Paul Drude’s
paper published in 1900. Drude’s theory rests on the assumption that
conduction electrons in metals can be treated as a ‘gas’ of weekly inter­
acting particles. The assumption seems to be absolutely oversimplified.
However, it is precisely this feature of Drude’s model that forms the
basis of the modern theory as well. By applying the methods used in the
kinetic theory of gases, Drude was able to give a quantitative derivation
of the Wiedemann-Franz law, that the ratio of thermal to electrical con­
ductivities is a universal number (Lorenz number) times the absolute
temperature. The calculated Lorenz number was in reasonable agree­
ment with experiment. However, the results for the individual conductivi­
ties were not very encouraging. The theory was refined by H.A. Lorenz,
who applied MB statistics to the electron gas. He found the theoretical
182 Fundamentals of Statistical Mechanics

Lorenz number to be reduced by a factor 2/3 from Drude’s value, thus


weakening the agreement with experiment. The main drawbacks of Drude’s
theory are:
(i) The observed specific heat of metals— —at room temperature,
appears to be completely accountable by the lattice vibrations
alone; while the theory demands that on the basis of the
equipartition theorem, every electron must make a contribution
3
of magnitude —k to the specific heat, so that the total capacity
9
should be —/?.
(ii) Failure to explain magnetic properties.
(i«) Failure to explain the tem perature dependence o f the
conductivity.
The fi^st difficulty was resolved by Sommerfeld by the application of
FD statistics to the electron gas in metals. Pauli and Heisenberg showed
how quantum theory can be used to explain the negative properties; and
the concepts necessary to solve the third problem were provided by
Sommerfeld.
1
Since for electron s = — , Fermi energy and Fermi momentum are
given by
2/3
3%2N
...(10.28)
ef ~ 2m V

3n2n ...(10.29)
pF = h

We define Fermi temperature TF by the relation


kTF = eF ...(10.30)

10.5 ATOMIC NUCLEUS AS AN IDEAL FERMION GAS ?*£&■**


§
Both protons and neutrons are fermions of spin 1/2. Inside the nucleus
there are very strong attractive forces between the nucleons. TF'-. binding
energy of a nucleon in a nucleus is o f the order of 8 M e'/. Strictly
speaking, due to these strong interactions, the protons and neutrons in
a nucleus cannot be considered as two non-interacting ideal fermion gases.
However, we can get a rough estimate of magnitude for neutrons and
protons inside the nucleus, if we assume that they are ideal fermion
gases.
An atomic nucleus of atomic weight A and atomic number Z, contains
N = A - Z neutrons. The concentration of neutrons in the nucleus is given
by
Ideal Fermi Systems 183

N_ A -Z A -Z
V ...(10.31)
V

where R is the radius of the nucleus. Experiments in scattering of high


energy particles have shown that
R = 1.3 x 10- 16 x A vz metre ...(10.32)

Therefore ~ = ----------- %------ rrr A ~ Z


V 4 t i x ( 1 . 3 )3 xHT 45 A

= 1.0S7 x 1044 ...(10.33)

For a light nucleus, we can take

= 0.5
A
o \2/ 3
h2 6 n2 N )
Therefore
2m 2 ' v \

2 n2/3
(1.05)2 x !0 ~68
x 1.087 xlO 44 x0.5 j
2x1.67 xlO -27
(Here, mass of neutron = 1.67 x 10-27 kg)
eF = 4.5 x 10- 12 J - 28 MeV • ...(10.34)
Corresponding Fermi temperature is

Tp = = 3.26 x 10u K ...(10.35)

This shows that the ideal neutron gas is degenerate.

>1,p;6 FERMI ENERGY AS A FUNCTION OF TEMPERATURE


For a discussion of properties such as the specific heat or entropy, it is
essential to consider the behaviour of the system at temperatures above
absolute zero. We will, therefore, obtain the expression for the Fermi
energy as a function of temperature.
In the study of this kind it is frequently necessary to evaluate inte­
grals involving the Fermi function /(e), i.e., integrals of the type

Jf(e)<t)(e)de ...(10.36)
o
184 Fundamentals of Statistical Mechanics

where the Fermi function l|(eP(E ,l) + 1 )| and (J>(e) are smoothly varying
functions of e
Integrating by parts
£ t
I = ± /(e ) J<p(e)rfe - jV (e)/-'(e)<ie ...(10.37)
Jo 0
E
where y(e) = J<p(e)de ...(10.38)

Now /C~) = ~
e +r 1 = 0
u
and y (0 ) = J<p(e)de = 0
o
Therefore, the first term in Eq. (10.37) vanishes at both the limits.

Hence I = - J\|/(e)/"(E)d£ ...(10.39)


o
Since, in all practical cases, the temperature is small compared with
Fermi temperature o f the gas, the differential f'(z) will be negligible
except in the neighbourhood o f the Fermi energy e f . The form of / " ' ( e ) as
a function o f e is shown in Fig. 10.2.

Fig. 10.2
Only a relatively narrow range, about £ = sF, contributes appreciably
to the integral. Let us, therefore, expand vp(e) about e f in a power series:
( e - e F)
y (e ) = y(e F) + \|/(eF) ( e - e F) + y 11! ^ ) + ...
2!
...(10.40)

Therefore I = - Jy(eF )/-'(e)cte - jy '( e F)(e - EF)f'(,E)de


o o

- fy "(E F )(£ ~ p F ~ /~'(e)cfe ...(10.41)


2
Ideal Fermi Systems 185

= - / j —I,z —/ 3, neglecting the higher terms


...(10.42)

Now / j = Jtt/tejOf'Ceide = y(eF) JfXz)dz

= v(eF)[/"(e)]“ = - W ■-
Ie"c"*T + 1
and, because eF > > kT,
*r
I x = -v(eF) = - J<p(e)de ...(10.43)
o

/ 2 = ”]V ’(eF)(e-£*.)/"'(£ )<Ze

(e —eF)e kT
= -\|/(eF) J- -ds ...(10.44)
f
kT kT + 1

£ —E^*
Putting X = -■
" ft X .P
x
h = t/(eF) kT J - ------- - 2-dx ...(10.45)
tr \ex + 1
hT L. -I

Because eF » kT and er* is a rapidly decreasing function of x, the


£j?
lower limit ~r= may be replaced bv
hi
°°f xcx
Therefore Z2 = t/tep kT J ---------t2 <^x ...(10.46)
H e* + 1J
The integrand being an odd function, the definite integral changes
sign if x is replaced by —x. Therefore, the integral is zero.
Hence, I2 = 0 • (10.47)

I3 = j V U y ) ^ ^ /"(O de
o z

= - V ^ r l ( k T ) 2 f *-** 2 dx ....(10.48)
2 -~ jV + l]
In this case, the integrand is an even function of x.
186 Fundamentals of Statistical Mechanics

Therefore, h J; -dx

=-Y (eFXkT)2 j- -dx ...(10.49)


° [ e" * + 1 ]
Now [ l + e- * ] ”2 = 1 - 2e~* + 3c ' 21 - 4 e ^ + ...

= ^ (—l )" -1 ne~^n~1)x ...(10.50)


n=1

Therefore, J——~ —-g-ci* = J^ (—l)n~1n x2e ^ d x


o [ l + e~x J o «=i

= ^ ( - D ^ n \x2e~nxdx
n- 1 0

1
= X ( - l ) * '1 2 ]T (—l )'1-1
n=l /l=l n2

...(10.51)

Therefore i3 (eF)(kT)2^ <P' ( e r X k T f^ r


...(10.52)
{because by Eq. (10.38) vpTe) = <P(e) and \|/11(e) = tp'(e)}
Hence 7 = - I x - 13

= Ef tpte) + (p’(EF )(k T )2 ...(10.53)


o 6
Let us use this equation to evaluate the Ferm i energy at tem perature
T. We Toiow that

N = J/"(e)a(e)<7e ...(10.54)
o
Using Eq. (10.53) and replacing <p(e) by a(e), w e get
c 2
N = Ja(e)<7e + cs'(EF )(k T )2 , ...(10.55)
Ideal Fermi System s 187

We may recall that


MO)
J a(e)de N = ...(10.56)
o
where eF(0) is the Fermi energy at T = 0 K.
n 2 (O )
Therefore, oXeF )(kT )2 = J a(e)de — j a(e)de
6 o o
ef (0)
= j a(e)<ie ...(10.57)
Ef
Since the Fermi energy eF differs only slightly from its value E^XO) at
0 K, we may write
MO)
| a(E)c(e = a(ey (0)) (e^ (0) - eF ) ...(10.58)
Ef

7t2
Therefore, a'(£y X&T)2 — = a (eF (0)) (£y ( 0 ) - £ F ) ...(10.59)

Since by Eq. (10.10)


, N V ( 2 m.)3 / 2 e1/2
° (£ f) " 4TC2fc3
V (2m )3/2 o(e)
...(10.60)
^ ~ 4 7 t W /2 ~ 2e
Therefore, Eq. (10.59) changes to

^ l (k T )2 = o ( ef(0))( ef(0) - sF) ...(10.61)


2>Ej? 6
Further, because ef - eF(0)
g (sy ) _ g(ey (0 ))
Ep £y (0)

and g(eF(0)) 2 t?_ = ct (eF(0)) (ef(0) - ef ) ...(10.62)


2 ef (0) 6

(k T )2 n2
i.e. = eF(0 ) ...(10.63)
ef (0) 12

1 (k T )2 Tt2 ^
Hence ef — eF(0) ...(10.64)
(ey (0 ))2 1 2 y
188 Fundamentals of Statistical Mechanics

Thus, we see that the Fermi level is lowered by increasing tempera­


ture.

| 10.7 ELECTRONIC SPECIFIC HEAT


Total energy E is given by

E _ Je/Xe)o(e)d£ ...(10 .66 )


The integral can be evaluated using Eq. (10.53). In this equation, we
replace <P(e) by ea(e). We get
Ef 2 -2
E = fsa(e)de + cr(eJ,)(£7’)2^r + £Fa'(£F)(kT)2 - z
o 6 6
...(10.67)
As in the preceding section
Ef EF (0 ) E f(0 )

J Eo(e)de = J Eo(E)de - J ea(e)d£


0 0 Ejp
= E 0 - ef(0) o(ef(0)) (eF (0) - Ep ...(10.68)
E„m
where E 0 = J ea(e)d£ is the total energy of the electrons at T = 0 K.

By Eq. (10.63)
(AT)2 K2
£ „(0 ) ef - e^ (0 ) 12

Therefore J £o(£)d£ = E 0 — a(eF(0)) (kT)2— ...(10.69)


o
The last two terms o f Eq.* (10.67) can be expressed as

(a(ep + £Fa' (epj (kT)2—

(ef
= joo(e F)) + ct
o ^ J ) (kT)2 — [see Eq. (10.60)]
6

= | a (ep (kT)2 y ^ a (eF(0 )) (AT)2 6


...(10.70)

Therefore E = E 0 —o (ef(0)) (AT)2 — + — o (£^0)) (kT)2 g


Ideal Fermi Systems 189

= E0 + a (Ej/O)) (kT )2 — .(10.71)

3
Now E0 = - N e^O) [See Eq. (10.26)]
E „ (0 )

and N = f c(e)de [See Eq. (10.09)]


0

T, EPm
- w < 2-> s'2 1 ^
2 V" /o \3/2 r /r>M3/2
= —— 5- 0-(2 m) ]eF(0 )]'
3 2it2h2

= ( 0 )o(ej?( 0 >) ...(10.72)

3N
or a ( e / 0 )) = 2e/r(0) ...(10.73)

3 3JV jt2
Hence E = —N ef(0) + 2eF(0) ^ r )2

f3 ^ . ( f e T ) 2*2]
...(10.74)

Therefore, the mean energy of an electron

., E
~ 3 , (k T fn 2
...(10.75)
(E) ~ N ~ 5 F 4eF(0)
rg ^ 2 /^»2 'j

= ^ (0 ){5 +T ^ f } - (10-76)
Tlie electronic specific heat Cv is

Cv = f — •'l = ^ V T - = — E —
v U r Jv 2 ef (0 ) 2 rF
...(10.77)
Specific heat Cv, therefore, varies linearly with temperature and
vanishes at T = 0 in conformity with the third law of thermodynamics.
In order to have an idea of the magnitude of the electronic contribution
to the specific heat at room temperature, let us suppose that the Fermi
temperature is T = 45000 K (note that Fermi temperature is invariably
of the order of 104 to 105 K).
190 Fundam entals of Statistical M echanics

300
Therefore 0.03 R ...(10.78)
45000
This is very small compared to the lattice contribution : 3R. The
electronic specific heat, therefore, is not norm ally detected at room
temperature.
In general, the low-temperature specific heat o f a metal is given by
Cv = yT + ST3 ...(10.79)

i r N k 2 T + i2 n *N k ^3
[See Eq. (9.77)]
2£^t(0) 50
...(10.80)
A plot of Cv IT against T 2, therefore, should give a straight line.
Experimental measurements o f the specific heat o f copper, silver, etc., in
the temperature range 1 — 5 K have amply justified this expectation. The
slope of the line gives the coefficient 5 which enables us to evaluate the
Debye temperature 0D; while the intercept on the C VIT axis gives y which
yields the value of eF
. Now, pressure
2 E
P
3V

2 N_ m v l
—EJ- (0 ) +
= 3 V 4:Zp (0 ) J

’ ' 5 {k T fn 2
= —m JO) ...(10.81)
5 F + 12 (0 ))2
This shows that even at absolute zero, the pressure does not vanish.
This is a consequence of Pauli exclusion principle, which allows only one
particle to have zero momentum. Other particles having finite momen­
tum give rise to the zero point pressure.

I 10.8 W HITE DWARFS


*,

m m .-
t£2- ■.
W-miW-
i
In this section we shall illustrate an application of the degenerate elec­
tron gas theory to a problem in stellar structure.
White dwarf stars occupy a key position in astrophysical theory. Their
properties provide clues to the physical processes, that take place during
the evolutionary states near the ends of stellar lifetimes. Important con­
tributions to our understanding of these stars have come from statistical
mechanics.
Ideal Fermi Systems 191

I f a plot is m ade o f the brightness o f stars against the predom inant


w avelen gth em itted, as show n in Fig. 10.3, it w ill be found that m ost of
th e stars fall w ith in a linear strip, indicating th at the brightness is
proportional to A. Such a plot is know n as the H ertzsprung-R ussel diagram .
T he sequen ce o f stars is called the ‘m ain sequence’ . T h ere are h ow ever,
tw o glarin g exceptions to this general rule:

X RED G IAN TS
X
X
X ^ X

X X
X <$5 X
X X
X X
X ‘t x X
X
X
W H ITE
DW ARFS

F ig . 10.3

(£) the red giant stars which are huge and abnormally bright and (ii) the
white dwarf stars which are “highly underluminous’. The companion of
Sirius which is a white dwarf, has a mass almost equal to that of the sun,
but its luminosity is only 0.003 times that of the sun.
It is now known that hydrogen is the most abundant element in the
universe and that it makes up about 70 per cent of the mass of ordinary
stars. Bethe (1938), from his calculations of the rates of thermonuclear
reactions involving hydrogen conditions appropriate to stellar interiors,
indicated that these reactions provide the energy source necessary for the
stars to derive their luminosities. Application of Bethe’s results to white
dwarf showed that they have exhausted their thermonuclear energy
sources and are among the terminal states of stellar evolution. One has
to account, however, for the source of energy they will still radiate. This
was done by Mestel in 1952, who pointed out that the thermal energy
content of the hot white-dwarf interior, leaking slowly away through the
insulating surface layers, is entirely sufficient to explain the observed
luminosities.
White dwarfs have stellar masses but much smaller diameters and,
hence, are very dense. The gravitational red-shift of light predicted by
Einstein’s general theory of relativity and measured in the white dwarf
Sirius B (one of the first white dwarfs to be discovered) confirmed the
small size and high density of this star. Some of the properties of Sirius
B are given in Table 10.1. For comparison the corresponding values for
the sun are also given.
192 Fundamentals of Statistical Mechanics

T a b le 10.1
Physical constants o f Sirius B and the Sun*

Q u a n tity S ir iu s B Sun

1. M ass 1.05 M O M O = 1.989 x 1020 Kg


2. R adius 0.008 R O R O = 6.96 x 108 m
3. E lectron tem perature 27000 K 5800 K
4. G ravitational red shift 89 ± 16 km /sec 0.6 km/sec
5. M ean density 2.8 x 109 kg/m 3 1.41 x 103 kg/m3
6. Central density 3.3 x 109 kg/m 3 1.6 x 10s kg/m 3
7. C entral tem perature 2.2 x 107 K 1.6 x 107 K

Note that the radius of the earth, 6.37 x 10s m = 0.009 R, is larger
than that o f Sirius B. Existence of such compact stars could not be sat­
isfactorily explained until the quantum-statistical theory of the electron
gas was worked out by Fermi and Dirac. It has been shown by R.H.
Fowler and S. Chandrasekhar that such stellar configurations can satis­
factorily be described by treating the electrons within the stars as a
completely degenerate relativistic Fermi gas. Chandrasekhar found that
there is a critical stellar mass above which stable degenerate dwarfs
cannot exist.
From some typical data for such a star an idealized model for a white
dw arf may be constructed.
Imagine a star with
mass M = 1030 kg
density p = 1 0 10 kg/m 3
and central temperature T = 107 K (see 1Table 10.1)
Material content o f white dw arf is mostly helium and the tempera­
ture 107 K corresponds to a mean thermal energy o f 1000 eV, which is
much larger in comparison with the energy required for ionizing a helium
atom. Hence, the helium atoms in the star are expected to’ be in a state
o f complete ionization and the star may be regarded as a gas composed
o f N electrons and Nf2 helium nuclei. Therefore, the mass o f the star is
given by
M — N{m + 2/tzp) « 2N mp ...(10.82)
where m is the mass o f an electron and mp the mass o f a proton
Mass o f helium nuclei « 4mp
The electron density, therefore, is

n = — = M-/ 2 ^ = - 1 0 36 electrons/m 3
V M /p 2 mp

* Hugh M. van Horn. Physics Today, 32, 25, 1979.


Ideal Fermi Systems 193

Substituting this value for N/V in the expression (10.13) for eF we


can calculate TF from the relation (10.30), viz. TF = -jr- which comes out
to be of the order of 1010 K. Since this is much higher than the tempera­
ture of the star, the electron gas must be a highly degenerate Fermi gas.
Fermi energy is of the order of 1 MeV, which is comparable with the rest
energy me2 of the electron and, therefore, the dynamics of the electrons
is relativistic. Though helium nuclei do not contribute significantly to the
dynamics of the problem and, hence, may be neglected, they do provide
the gravitational attraction to hold the entire system together.
Let us now consider the equilibrium configuration of our idealized
model.
At the centre of a star, the temperature is very high —of the order
of 107 K —and the atoms there are completely ionized. In young stars, the
pressure due to the thermal motion of the electrons and ions near the
centre is sufficient to withstand the weight of the material above it and
prevent the gravitational collapse of the star. In white dwarfs, the gravi­
tational collapse of the core is prevented by the pressure of a degenerate
electron gas.
Assuming the configuration to be spherical, the gas will exert a pres­
sure P u(r) on the boundary of the sphere and any expansion will involve
the expenditure of work. An adiabatic change in volume will cause a
change dE0 in energy given by
dEQ = - P 0(r) dV ...(10.83)
= —P 0(r) 4tcr2 dr
If these were the only forces present, the system would expand indefi­
nitely. However, there is gravitational interaction and, therefore, there is
a change in the gravitational self-energy as well.
Gravitational self-energy can be calculated as follows:
Consider a sphere of uniform mass density p and radius r (Fig. 10.4).
4 o
The mass of the sphere is ~ rcr3p. The work done in bringing an extra
shell of thickness fir is *

Fig. 10.4 ...(10.84)


194 Fundam entals of Statistical M echanics

If the configuration is in equilibrium, the change in its energy for an


infinitesimal change in its size must be identically zero, i.e.

—GAf2
~p0(r)4n r2 + ^— = 0

A 3G1W2 /lri QK>


or p,,(r) — ------ ...(10.85;
0 20r2
It is evident that to find the relation between M and r we have to find
the value o f p 0(r) which we will now proceed to find.
The single-particle energy o f a relativistic particle is given by
E = yjp zc2 + m 2c4 ...(10.86)
The ground state energy o f the Fermi gas, therefore, is
E0 = 2 yjp2c 2 + m 2c 4

=
- f Pf p 2,Jp2c 2 + m 2c 4 dp ...(10.87)
n2h2 Jo
where the factor 2 accounts for the spin degeneracy and p F is the Fermi
momentum.
o -\l/3
3Nn~
P f = h(
V
Put x = p i me
Vm 4c 5 XF
Therefore En
0 =
_ _2*3 J x 2^jx2 + 1 dx ...(1 0 .8 8 )
7t n

Y— m 4c 5f ( x F ) ...(10.89)

*F ---------
where f ( x F) = J x 2s x 2 + 1 dx
0

and _ Pf _ h f 3it2^ ) 173


AF ...(10.90)
me me v~ )
N M / 2m p 3M
Since ...(10.91)
V 8nmpR 3
^nR3

/ „ \l/3 r n \
h 3rt 3 M 9nMh3
Xf —
me 8 nmnR 3 ^8m 3c 3m pR s ;
V p J
...(10.92)
Ideal Fermi System s 195

The pressure exerted by the Fermi gas

_ dEo ...(10.93)
p 0 dy

ra4c5 „ m4c5V df(xF) dxF


KZh3 F n2h3 dxF 6V

.™L£L f ( x ) + ^ 4s1 .j,3 /?■1- X-2F


n2h3 f ? 3 A S W
[by Eq. (10.91)]

Pn = — -\/l + X F — f(.Xp ) ...(10.94)


o

(i) I f x F < < 1 , i.e. < < 1 we would be dealing with a non-
mc
relativistic case.

Therefore J x 2 \ll + x 2clx = j + i x 4 + .. .^jdx

V 3 -V 5
— ...(10.95)
~3~ To
_____ r3 5
and 2 xF ...(10.96)
\ X F 'fi + XF - — + —

where higher terms are neglected.


Substituting Eqs. (10.95) and (10.96), in Eq. (10.94)
\5/3
m 4c 5 m 4 c_5 9nMh3
15 k2ft3 15 k2h3 8m 3c3m pR 3
...(10.97)
Therefore from Eq. (10.85)
\5/3
m 4c 5 9nMh3 3G M 2
..(10.98)
15 k2?(3 8m3c3m nR :i 2 0 Tdt4

5/3
Ah2 ' 9 k '
Hence M ya i? = ..(10.99)
9/wG k v 8mp ;
(ii) If, on the other hand, x F » 1 the case is extrem e relativistic. In
this case
196 Fundamentals of Statistical Mechanics

4 2
XF , *F
4 4 ...(10.100)
4 2
and J * F +1 3 6 ...(10.101)

Therefore •*0 ~ m
p
4c5 'E F i-
2
, XF
4

7t2ftS 3 6 4

m4c5 ( a
...(10.102)
12n2h3
Therefore, from Eq. (10.85)

,'rl c U* 3GZ
127z2h 3 ' F F > ~ 20 tcR^
...(10.103)

3GM 2
Xp { XF —l ) = ...(10.104)
127t2/t3 2 0 tzR4

9nlr M2 9 tc/ij M 2/
Therefore
12tz2h3 8m3c3m p y R2 ] I 8m3c3mn R2

3GM2
...(10.105)
20 ttR4
For convenience, let us put
s 2 /3
-v m4c5 9ti/i
X = ----- s— and y =
127t2ft3 py
Eq. (10.105) can now be written as
M2 3GM2
XT 1 = ...(10.106)
i?2 20 tzR 4

Therefore J L f YM 2' 3 - _ 3G__ M 4/3 = 1 ...(10.107)


i?2 207txy
xl/2
3G
and, hence, ie = M VZ |Y ----- m 2 /3 | ...(10.108)
1 207txy J
Now R must be positive or zero.
3G
Therefore Y > -M 2
20izXY
20-nXY2
M2'3 < ...(10.109)
3G
Ideal Fermi Systems 197

o \ 3 /2
20kX Y 2 j
or M < ...(10.110)
3G
3 /2 3 /2 r o \
20rtXY2 5m4c5 9ntf
Therefore K h 3G 9Gnh3 ^8m3c3mp
...(10.111)
is the limiting mass. Mcft is known as the ‘Chandrasekhar limit’ .
This limit as calculated from Eq. (10.111) comes out to be equal to
3.40 x 1030 kg. Since the mass of the sun is M0 = 1.99 x 103° kg, the
critical mass of white dwarfs is 1.7 M0. Detailed investigations of
Chandrasekhar lead to the result Mch = 1.44 M, where Mo is the mass of
the sun. This was a major discovery and has profound consequences for
the final stages of stellar evolution. Stars that end their evolution with
masses in excess of this limit have gravitational self-attraction too great
to be counter-balanced by the pressure of the degenerate electrons and
are doomed to ultimate collapse.

Let us derive an expression for the compressibility of a Fermi gas at


absolute zero.
The energy at absolute zero is
e£ Wo™'*3/2
E0 - Jecr(e)de = ^ 2 \ j e3^2de ...(10.112)
0 1 0
V{2m)m 2 5/2
...(10.113)
2-kH 3 5 F
_ 2 E0 _ 2(2m f 2 5/2
Hence ...(10.114)
0 3 V 15n2h3 F
2(2 m)3/2 ft5 ( 3N tzz 'f *
1 5 7 t2 f:3 ' (2 m )® 2 l V J
3 2/3h 2 4/3 f N \ W
= -— — — — ...(10.115)
5m
f32/3/)2it4/3AT5''3 1 5
Therefore In P0 = In - \- ~ In V ...(10.116)
I 5m | 3
d In V __ 3
Hence
d P 0 ~ 5P0
198 Fundamentals of Statistical Mechanics

Therefore, the compressibility


-5 /3
1 dV 3 5m
V dP0 5 32,strn 4>3

...(10.117)

It has been found that the compressibility o f alkali metals is close to


the compressibility o f an electron gas.

| 10.10 PAULI PARAMAGNETISM


According to Pauli, the paramagnetism o f alkali metals can also be
explained on the basis o f the concept o f a free electron gas. If we place
a Fermi gas consisting o f electrons in a magnetic field H, the energy of
a particle,

(i) with spin parallel to H is e = —— — (.lH- and (ii) with spin anti-
parallel to H is

...(10.118)

where u is the intrinsic magnetic moment o f the particle.


A t T = 0, all levels up to Fermi level e.F will be filled up. Therefore,
the kinetic energy o f the first group will range from 0 to eF + \xH while
that o f the second group from 0 to eF — [iH. The particles in each group,
therefore, will be

| (Ep + \iH)312

,0.119)

and ...(10.120)
N ~ = 6 ^ {2W)3/2 (£" " ^ /2
Therefore, the net magnetic m oment o f the gas is '
M = p(N* - N~)

{ ( eF + i i H) 3/2- ( E F ~ v H f 2}
Idea l F e r m i S y s t e m s 199

If the field is low

pV(2m.)3/2 f 3/2 3 V2 H
eJP + f e U V / }
6 k 2* 3 \£r + 2 EF]iId

-
“ V(2r r .3 e fH ...(10.121)
6n2h3 E
Therefore, the susceptibility
M g 2(2m)3/24 2
...(10.122)
X° VH 2p2h3
Substituting for (2m)3/2 from Eq. (10.28)
g 2e^2 3hNn2
X° 2tc2h3 Ve%2

3p2iV ...(10.123)
2ef V
3 (J,2n
...(10.124)
2 eF
According to the Boltzmann treatment Xis inversely proportional to
T at high temperature and reaches a state of magnetic saturation at low
temperatures. Pauli’s treatment gives significantly different results. Equa­
tion (10.124) shows that the limiting susceptibility Xo independent of
temperature, but is strongly dependent upon the density of the gas. This
is known as Pauli paramagnetism—in contrast the classical Langevin
paramagnetism.

S
Btio.11 C S * *' - • --
A RELATIVISTIC DEGENERATE ELECTRON GAS'
In a completely degenerate extreme relativistic electron gas, the energy
o f a particle is given by
e = cp ...(10.125)
f 2 i 1/3
Therefore eF = Cp F = _..
ch |.[(2s
;„ 6
+ l)V \
k ...(10.126)

The total energy1 o f the gas at absolute zero o f temperature is

+(2S + 1)V p?
FF

E 0 = .2*3
| cp3dp
2n2h o
(2S + !) cp%
9 it2fc3 d.
200 Fundamentals of Statistical Mechanics

(2S + l)Vcti [ 6Nnz ] 4/3


8n2 j( 2 s + l) V j

2 cM k 2' 3 (3 Y/3 f i V f 3 ...(10.127)


(2s + 1)1/3U J Iv J

p = = 2cfiN4/3n2/a 31/3
dV (2s + 1)1/3 V'4',s 44/3
1/3
4/3
= 2ch ...(10.128)
2s+ 1 (S J
The pressure of an extreme relativistic gas is, therefore, proportional
to Ar4/3. Equations (10.127) and (10.128) lead to the relation

PV = ^ E ...(10.129)

It may be mentioned here that this relation is valid not only ht absolute
zero but at all temperatures.

| problems ;'"- :
10.1 The density of sodium is 0.97 x 103 kg/m3 and its molar mass is
0.023. Find the Fermi energy o f the electron gas in metallic
sodium and the specific heat o f sodium at a temperature T.
10.2 Find the ratio o f the mean velocity o f an electron gas in an
electron at the absolute zero o f temperature to its velocity at the
Fermi energy.
10.3 Density o f copper is *8.9 x 103 kg/m3 and its molar mass is 0.063.
Find the m axim um velocity o f free electrons in copper at
T = OK.
N kT
10.4 Prove that (i) P — < 0 for BE gas
V

and ( « ) P — ~y— > 0 for FD gas


10.5 Show that the Helmholtz free energy o f a system o f fermions is
given by
2
1__ T

f kT
in
£

F = + ...
12~ [ ef(0 ) J

10.6 By what factor does the entropy o f the conduction electrons


change when the temperature is changed from 200 to 600 K?
Ideal Fermi Systems 201

10.7 Calculate the pressure exerted by the conduction electrons


in silver metal on the solid lattice. D ensity o f silver is
10.5 x 103 kg/m3 and its molar mass is 0.108.
10.8 Estimate the numerical value of the low-temperature isothermal
compressibility for the gas of conduction electrons in a potas­
sium crystal. [Density = 0.86 x 103 kg/m3, Molar mass = 0.039].
10.9 Find the specific heat of a superdense electron gas, assuming
the dependence of energy upon momentum to be extremely
relativistic.
10.10 The energy and momentum of an electron in a relativistic
completely degenerate gas are related by
e2 = c2 p 2 + m2 c4
Determine the equation o f state of the gas.

□ □ □
PHASE EQUILIBRIA

In this chapter we w ill be concerned w ith the systems comprising several


phases in equilibrium w ith one another.
B y a p h a se we mean a homogenous portion of a system which exhibits
a macroscopic behaviour different from that of the other parts of the
system and is separated from them by a well-defined boundary. Several
phases of a system m ay co-exist in e qu ilib rium w ith one another.
Examples of two-phase systems are: a liquid and its saturated vapour, a
liquid and a crystal, etc. In general, each phase may contain several
components, e.g. several different species or molecules or ions. We shall
restrict our discussion to the systems containing one component only. A
one-component system can exist in different phases: gaseous, liquid or in
a crystalline form. In a system whose phases are in equilibrium, a slight
variation of external conditions 1 ’esults in a certain amount of the sub­
stance passing from one phase to another as can be observed, for
example, in the fusion (i.e. condensation) or boiling of water. Th is process
is known as ph ase transition. Th e study of the transition region between
phases is one of the most interesting fields of statistical physics and,
hence, while studying the conditions of phase equilibria, we will also
consider the development of phase transitions. We shall describe the
phase transitions in terms of macroscopic variables.

| 11.1 EQUILIBRIUM CONDITIONS


Let us first derive the conditions under which different phases can exist
in equilibrium, so that no transfer of matter takes place from one phase
to another.
Figure 11.1 shows a one-component system existing in two phases.
The system is isolated from its surroundings and its energy is E, volume
V and number of particles N. The corresponding quantities associated
with the two phases are: E v V 1, jV^ and E v V2, N z respectively.
E = E l + E 2; V = V, + V2; N = N 1 + AT,
“ .(11.1)

202
Phase Equilibria 203

The entropy of the system is


S(S, V, AD = S jUSj, Vv A/,) + S2(S2, Va, AT2)
= Vj, Nj) + S2(S - E v V - Vt> A7 - IV,)
-..(11-2)
For equilibrium the entropy of the two systems must be maximum,
i.e. dS = 0.

E , , N., , V2

Fig. 11.1

Therefore dS =
as.
dE. +
aSi
dV, +
as, dNl
as, J ». aATj
1

as2 as2 dV2-


as, dN2-
as,2>u,,N, Vov2 aw.

dS1
| dS.
as, "i f as. dv,
LI 3Ei aEaJJ 1
as, as2
dN1 = 0 ...(11.3)
asr, 3AS
Since S ,, V,, JV, are independent variables
dS1 as2 =0 ...(11.4)
ae 1 ze 2

as, as2 = 0 ...(11.5)


3Vi av2
as. as2 =0 ...(11.6)
3AT, 3A/2
(1) We can see from Eq. (6.28) that
as= fep = ^ ...(11.7)
as T
Hence, we conclude from Eq. (11.4) that T , = T2, i.e., in equilibrium
the temperatures of the two' phases must be the same.

(2) — =K ^ ...(11.8)
av av t
Therefore, relation (11.5) shows that the pressures of the phases must
be the same.
204 Fundamentals of Statistical Mechanics

(31 Since — = ...(11.9)


dN T
The relation (11.6) leads to the condition
H,(r, P) = ft2(T, P ) ..(11.10)

I 11.2 CLASSIFICATION OF PHASE TRANSITIONS


Gibbs free energy is given by
G = [see Eq. (6.80)] ...(11.11)
Therefore, for processes which occur at constant pressure and tem­
perature
dG = ...(11.12)

Therefore dG ...(11.13)
M, =
dNl
Since at the phase transition px = pOJ the derivatives of G with respect
to TV must be 'equal
dG _ dG n
dNx ~ dNz
There is no restriction, however, on the derivatives of G with respect
to P and T. The phase transitions are classified according to the behaviour
o f these derivatives.
dG'j
I f the derivatives and are discontinuous at the
dP J t .n d T jP,N
transition point, that is V and S [refer to Eq. (6.80) or Eq. (11.28)] have
different values in the two phases, the transition is called first order
transition. I f these derivatives are continuous, but higher derivatives are
discontinuous at the transition point, the phase transition is said to be
continuous. .Typical behaviour o f the Gibbs free energy and the entropy
as a function o f temperature at a transition point is shown in (Fig. 11.2).
Phase Equilibria 205

1 1 3 W SE^IAGRAM
1 p a

The phases that are realized in nature for a given set of independent
variables, are those with the lowest free energy. The Gibbs free energy
is a function of T, P and N. At constant temperature and pressure, Gibbs
free energy is proportional to the number of particles, i.e.
G{T, P, AO = Ng (T, P) ...(11.15)
where g(T, P ) is Gibbs free energy per particle and N is the number of
particles. Since G is given by
G =\iN [see Eq. (6.80)]
p=g(T,P) ...(11.16)
Thus, chemical potential is the Gibbs free energy per particle. The
equilibrium condition [Eq. (11.10)], therefore can be written as
gl(T, P) = g 2 (T, P) ...(11.17)
Hence, for equilibrium the Gibbs energy per particle in the two phases
must be the same.
The .relation (11.10) permits us to express one of the arguments of the
chemical potential in terms of the other, provided we have the analytical
expressions for the chemical potentials of both phases. Thus, we can
JJfc. express P as a function of T, i.e.
P = P(T) ...(11.18)
which defines a curve in the T-P plane (Fig. 11.3)
P

Fig. 11.3
At a point on the curve, the chemical potentials of the two phases are
the same and it is only on this curve that the two phases can exist in
equilibrium. It is known as a phase equilibrium curve. I f the phases are
liquid and vapour, it is called the vapour pressure curve; for solid and
vapour it is the sub-limitation curve and for solid and liquid it is the
fusion curve.
For a system consisting of a single component in two phases, the
Gibbs free energy is given by
G = N xg l + N 2g 2 ...(11.19)
206 F u n d a m e n ta ls o f S ta tis tic a l M e c h a n ic s

where .V., are the number of particles in the two phases. We have
shown above that for equilibrium g l = g„. When this condition is satisfied,
the relation (11.19) shows that the transfer of a particle from one phase
to another does not change G.
A one-component system may also exist in three different phases, e.g.
solid, liquid and vapour. Th e arguments similar to those presented above
lead to the following equilibrium conditions:
g l(T, P ) = £ 2(T , P) = g 3 (T, P )
...(11.20)
We have seen that Eq. (11.17) defines a phase equilibrium curve.
Equations (11.20) define three equilibrium curves.
£1 £2 £ 3’ ^1 £ 3

which meet at a point known as the triple point (Fig. 11.4).


The diagram in this figure is known as the phase diagram . The three
phase equilibrium curves divided the T-P plane into three regions in
which solid, liquid and gaseous phases are in stable states, i.e. have the
m inimum Gibbs free energy. A t the triple point ( T 0, P 0) all three phases
are in equilibrium. For instance, for water the triple point occurs at
T 0 = 0.0078°C, P Q = 0.006 atmosphere
...(11.21)

Fig. 11.4
It is clear from Fig. (11.4) that at a pressure lower than P 0, ice upon
heating converts directly into vapour without going through the liquid
phase, i.e. it sublimates.
The point at which the vapour pressure curve abruptly terminates
(point C) is called the critical point. Above this point a substance can go
continuously from a gaseous state to the liquid state without going through
a phase transition.
P hase E q u ilib ria 207

I 11.4 CLAUSIUS-CLAPEYRON EQUATION


We have said above that the relation (11.10) permits us to express P
in terms of T, which when plotted gives us the phase transition curve. We
will now obtain the differential equation of this phase transition curve.
Taking derivatives of Eq. (11.10) we have the relation
Bp.! dpj dP d[i2 dp-2 dP
dT + BP 3 T = B T + dP dT
Substituting these in Eq. (11.22), we have
dP BP
V — — s2 + ^2 dT
dT
where we used Eq. (6.78)
bp S2 ~ S1
i.e. ...(11.22)
dT = V-2-Vi
Finally, since = T(S2 - s j ,
dP 5q
...(11.23)
dT T(V2 - V i)
This is known as the Clausius-Clapeyron equation. The derivative in
this equation gives the rate of change of pressure along the equilibrium
curve. It is clear from Eq. (11.23) that if 5q > 0, the. transition takes place
with the absorption of heat, and further, if the transition is accompanied
by an increase in volume, i.e. V2 > ^ as in the case of boiling of water,
dp
then > 0, that is, the temperature rise with pressure. If, on the other
hand, V2 < Vx as in case of melting of ice, the phase transition tempera-
dp
ture decreases with pressure. The slope ^ 7 , therefore, is positive or
negative depending on the nature of the substance. W ater is an example
of a system whose fusion curve has a negative slope.
In order to obtain the dependence of P on T, i.e. P = P{T ), we have
to integrate Eq. (11.23). For this, we have to express V 1 and V2 as func­
tions of P and T, which is possible if we know the equation of state for
the two phases. We show below how this dependence is obtained in some
particular cases and illustrate its use in finding expressions for some
thermodynamic quantities.
1. Liquid-Vapour equilibria:
(a) V a p o u r pressure curve: In this case, at temperatures lowrer
than the critical point V2 » Vj, the vapour is so rarefied that we may
neglect the change in volume in the liquid and assume that the vapour
obeys the ideal gas laws, i.e. P V = R T
2C8 Fundamentals of Statistical Mechanics

Therefore ^ ...(11.24)
dT R T2
Integrating, we have
P = P0 e~&i,RT ...(11.25)
This shows that as the temperature is increased, the vapour pressure
increases, and &j is a latent heat of vaporization.
(6) H eat c a p a c ity o f v a p o u r in e q u ilib riu m : The Clausius-
Clapeyron equation can be used to solve a number of problems, while we
are at the liquid-vapour equilibrium. Let us see how the heat capacity of
a vapour in equilibrium with its liquid phase can be obtained using Eq.
(11.23).
Since entropy is a function of pressure and temperature, heat capacity

...(11.26)

We know that

Now dE = TdS - pdV = d(TS) - SdT - d{pV) + Vdp


or d(E - TS + pV) = -S d T + Vdp ...(11.27)
i.e. dG = —SdT + Vdp [see Eq. (6.80)]

Therefore V ...(11.28)
Equality of cross derivatives implies
02G _ d2G
dpdT ~ dTdP

...(11.29)

Substituting in Eq. (11.26)

= C_ ...(11.30)
{ d T ) p { d T ) equi
Assuming that the vapour behaves like an ideal gas, the equation of
state is
PV =R T
dV _ R
i.e.
dT ~ P
P h a s e E qui libr ia 209

Hence
R
Ce q u i = Cp - T —
p
...(11.31)
R T2 p T
(c) C o m p r e s s ib ilit y o f v a p o u r : C om pressibility of vapour
1 f dVa'j
I~£p~ \ along the equilibrium phase curve can be found as follows:
V2
(d v2 \ f§v2^1 far)
[d P ) = [ d P JT + { d T ) p U p J
_ fdYs.) T(V2 - V i)
- dP ) T + l d T ) p 5q
...(11.32)
RT
At T « T , we can assume V2
P
dV2 RT R T.RT RT ( R T '|
Therefore
dP p2
~P’ P5q P2 { Sq J
...(11.33)
The compressibility is
1 fd V 2 P RT ( RT
V 2 l. dP ~ ~ liT P 2 { S<?

...(11.34)
P\ )
2. Su blim ation cu rve: We can obtain a fairly simple equation fo
the sublimation curve.
The enthalpy H is given by
H = E + PV
Therefore dH —dE + PdV + VdP = TdS" + VdP
= Cp dT + VdP ...(11.35)
Since the vapour pressure along the sublimation curve is generally
small, we can neglect the variation in pressure and write.
dH = C p dT ...(11.36)
Therefore, the molar enthalpy of the solid at a point on the curve is

Hs =H° s + [, CspdT ...(11.37)


and that o f the vapour is

H v = HI + jr c ;d T ...(11.38)
where Hg, H v are the molar enthalpies of the solid and vapour at zero
pressure and temperature.
210 F undam entals of S tatistical M e ch a n ics

U sing these values for the enthalpy w e can write the Clausius-
Clapeyron equation as'''
dP PAHVs
dT ...(11.39)
~ RT2
RT
where AH US ~ H u - H s and V2 — Vl =
P
dP P
or AH^ + f c ^ d T - f c j r f T
dT RT2

Noting that for a monatomic vapour C = — R and integrating the


* ** jL
above equation,
we have

Ag° 3 _1_ T C p d T
In P + T - dT + constant
RT 2 ln R o t2
...(11.40)
Here AH^s is the latent heat o f sublimation.

I
n . 5 VAN DER WAAL'S EQUATION
We have derived van der W aal’s equation is Sec. 6.13. This equation,
though not the most accurate, is the simplest equation o f state, which
e x h ib it many essential features o f the liquid-vapour phase transition.
The equation is

( P +^ J ( V - 6 ) =RT ...(11.41)
On transformation it takes the form
( RT\ a. ab
v3 = + + pV - — =0 ...(11.42)
A third degree equation with real coefficients can have either one real
and two complex roots or three real roots. Therefore, an isotherm plotted
on a P-V plane using Eq. (11.42) will be crossed by a straight line parallel
to the V-axis, either at three points or at one.
The general form o f the curves representing Eq. (11.42) is shown in
Fig. 11.5. For each curve T remains constant. Higher curves correspond
to higher values of T. With rising temperature, the portion of van der
Waal’s isotherms on which humps and valleys are observed diminishes
and at a certain temperature — critical temperature, rt c, it turns into a
point o f inflexion.
Phase Equilibria 211

Let us now commence searching for maxima' and minima. We can


write van der Waal’s equation in the form.

P = — -------- ^ ...(11.43)
V -b V2
dP
We differentiate this equation to obtain and equate it to zero for
finding the turning point of the curve.
rp, r. 3P RT 2a
Therefore ---- --------------r- + —- = 0 ...(11.44)
8V (V -b )2 V3
Eliminating T between this equation and Eq. (11.43)

P = a W ~ 2b) ■ -.(11.45)
V2
This equation gives the locus of the maxima and minima of the family
of curves obtained for different values of T and is represented by the
dotted line ARBQC. This curve divides the P-V plane into three regions.
The region bounded by the curve AB and the upper point of the critical
isotherm represents the liquid state region. The region located inside the
dome-shaped curve ABC describes two phase states, liquid and vapour.
To the right of the curve BC, it is the vapour state region. The curve ABC
is the co-existence curve. To find the maximum B o f this curve, we differ­
entiate Eq. (11.45) with respect to V and equate it to zero. We get
V. —3b ...(11.46)
a
Therefore P
c = 27b2
212 Fundamentals of Statistical Mechanics

Substituting these in Eq. (11.43), we have


Sa
T — --------- ...(11.47)
• 27 Rb
These are the values o f P, V, T, at the critical point. Actually, the curve
corresponding to the critical temperature has a point of inflexion. At
pressures higher than Pc and temperature above Tc, no difference of
phase exists. At the critical point the two phases become identical.
Away from the critical point van der Waal’s equation provides a
remarkably good fit to experiment. Close to the critical point, however,
the theory breaks down and does not provide an adequate fit to modern
experimental data.
An interesting feature of the van der Waal’s equation is that, it can
be expressed, as shown below, in a universal form.
Let us replace the pressure, volume and temperature in the van der
Waal’s equation by dimensionless parameters.
v_
(reduced pressure) p (reduced volume) y
Pc vc
T_
and (reduced temperature) j* =

The transformed equation becomes


3
P+ (3 V -l) =8 T ...(11.48)
V
We see that in this equation all individual parameters, constants a
and b are eliminated. Most substances when plotted in terms of reduced
parameters lie approximately in the same curve and, thus satisfy the law
o f corresponding states, which states that all fluids when described in
terms of reduced parameters, obey the same equation of state. The general
agreement between the van der Waal equation and experiment is
remarkable. However, the law of corresponding states is in a far more
exact agreement with experimental results than the van der Waal equation.
We have seen that van der Waal’s equation predicts an S-shaped
curve like PQRS. Experiments, however, show that under ordinary
conditions, a real phase transition does not follow this ci’ -ve, but is
represented by PQRS curve.
This is the actual physical isotherm and is in accordance with our
expectations, since pressure does not undergo a discontinuous jump in a
phase transition. The portion QR represents essentially an unrealizable
situation, in which increase in pressure causes increase in volume, thus
dP
violating the stability condition < 0.
P h a s e Equ ilib ria 213

dP
However, since is negative on the segments aQ and '{R, their
3V
existence cannot be ruled out. Under certain conditions it is possible to
realize experimentally the portion yR which correspond to a supercooled
vapour and superheated liquid, respectively. These states are not abso­
lutely stable. Under a small external perturbation, the system rapidly
goes to the nearest stable state. Such states are called metastable states.
Van der Waal’s theory gives no indication of the point at which conden­
sation of the liquid state begins. The position of the line a (3 A can be
determined by a consideration of the change in the appropriate thermo­
dynamical function of state.
We know that the work done in an isothermal process is equal to the
decrease in the free energy. For the work done along the isotherm PQRS,
we have
p
dW = J PdV = F r - F q ...(11.49)
PQRS
The free energy is given by
F = pJV —pV [see Eq. (6.79)]
At equilibrium the chemical potentials of the phases are equal.
Therefore FR + PVR = FQ + PVQ
i.e. F r - F q = P<yQ ~ VK) ...(11.50)

or | PdV = PW Q - VR) ...(11.51)


PQRS
This shows that the integral along the van der Waal’s curve is equal
to the area of the rectangle below the isobar RS and consequently
area I = area II ...(1U52)
When this condition is satisfied P a y S gives the equilibrium state of
the system. The method of realizing the condition (11.52) is called
Maxwell’s construction. Thus, we observe on the P-V diagram a discon­
tinuity jump in volume, V2 — the magnitude of which is determined by
Maxwell’s equal area rule.

11.6 SECOND-ORDER PHASE TR A N SITIO N S ‘


We have discussed so far the first-order phase transitions in which
the chemical potential changes continuously at the transition point (at

which Pj = p2) and their first derivatives = —S and = V

undergo discontinuity jumps. As stated in Sec. 11.2 there also exist


continuous transitions in which these first derivatives do not undergo any
discontinuous jumps — i.e. at the transition point S j = S 2 and V x = V2 —
214 Fundamentals of Statistical Mechanics

but its higher derivatives may have finite or infinite discontinuities. In


this section we confine our discussion to the so-called second-order phase
transitions, during which the second derivatives of the chemical potential
undergo discontinuous jumps at the transition point. Thus, second-order
phase transitions do not involve discontinuity jumps in volume and in

entropy, but involve abrupt changes in heat capacity

and compressibility

Second-order phase transitions are usually associated with the abrupt


changes in various properties characterizing the symmetry of a body. For
example, when a magnetic material changes its state from a ferromagnetic
material to a paramagnetic material, the symmetrical arrangement of the
elementary magnetic moments undergoes a discontinuity jump and the
symmetry changes. Such transitions which involve a broken symmetry
and a continuous change in the slope of the free energy curve can be
described by a theory due to Ginzburg and Landau.

| 11.7 GINZBURG-LANDAU (GL) THEORY


The basic idea o f this theory is that, a phase transition can be considered
as going from an ordered to a disordered phase. Consider for example, a
three-dimensional magnetic system consisting of an array of dipoles. If
the interaction between the dipoles is ferromagnetic, then in the ground
state at T = 0, all the dipoles will be aligned. In this state the internal
energy U is minimum and the entropy S is zero. For T > 0, some dipoles
go out o f alignment because of thermal agitation, but in such a way as to
increase both S and U and minimize the free energy F = U — TS.
The GL theory assumes that a physical system can be characterized,
in addition to the ordinary physical parameters P, T, etc., also by a
macroscopic parameter called the order parameter, which appears in a
less symmetric phase. The order parameter may be a scalar, a vector, a
tensor or some other quantity. In the first-order transitions, there is a
discontinuous jum p in the state of the system, because its v .ime and
entropy change abruptly — and hence, there need not be any relationship
between the symmetry properties o f the two phases. But for a continuous
transition, the state changes continuously and the symmetry properties
o f the two phases are closely related.
The thermodynamic potential o f the system, therefore, is o f the type
& P , T, q). The variable q which represents the order parameter, is slightly
on a different footing from P and T. While the latter can be given arbi­
trary values, q will be determined by the condition that G is minimum.
Phase Equilibria 215

Suppose for simplicity, that the order parameter is a vector q associated


with the less symmetric phase. The free energy must be such that it will
be minimized for r| = 0 above the transition and for r) 0 below the
transition.
Further, the free energy must be a scalar function of the order
parameter. Hence, if the order parameter is a vector, as we have assumed,
the free energy can only depend on the scalar products of the order
parameter.
In general, we can expand the free energy G(T, t\) near the transition
point, in the form
G(T, ri) = G0(D + a2(T) q 2 + a 4(T)q4 + ...
...(11.53)
Note that, the first-order and third-order terms are not included, because
we cannot construct a scalar from them.
We have now to choose the forms of a 2 and a4.
We see from Eq. (11.53) that the free energy is minimum at the
critical temperature Tc and above it if |ti| = 0. For T < T. we have to
chose a2 in such a way that the free energy is minimum for |q |^ 0.
Further, at the transition temperature T = Tc oc2(Tc) must be zero, since
the energy varies continuously through the transition point. All these
conditions are satisfied if we put a 2(T) in the form
a 2(T) = a 0 (T - Tc) ...(11.54)
where a0 is a function of T.
Since the energy must increase if [r\ | is increased to a very high
value, we have to assume a 4(T) to be positive
a 4(T) > 0 ...(11.55)
For the free energy to be minimum, we must have

(;i) —— = 0; (u) — =- > 0 ...(11.56)


dil drf
The first of these conditions gives

= 2ot2T| + 4a4 Tj |ti]2 = 0 ...(11.57)

Therefore either p = 0 or 11 = ±
2a,
—a 0(T —Tc)
...(11.58)
V 2«4
Thus, when a 2 > 0, the minimum occurs for T) = 0 and when a.2 < 0,

the minimum occurs for i) = ± f—— n . The variation o f free energy1’for the
^ 2a4
two cases is shown in Fig. 11.6.
21 6 Fundamentals o f Statistical Mechanics

Fig. 11.6
The free energy above the transition point is given by
G(T, ri) = G0(D for T > Tc ...(11.58A)
and below it by
G(T, tj) = G0(T) + a2(D n 2 + a4(r)t|4
o^r-Tl) * t { T - T cf
= G0m - ■+ a 4(T) —
2a a 4a4
oc2 (T —T i2
= G0( T ) ---------------- -c2- for T < T ...(11.59)
4a. c
We may recall that Cp is given by
( 32G
C> = - T \ ^ ...(11.60)

Hence Cp(T < Tc)t . ^ = Tc ^ ...(11.61)


and Cp{T > Te)TmT' = 0
Therefore the jum p in the heat capacity is

ao
Cp{T < Tc)T^ Te~ Cp(T > Tc)t = T' = Tc 2a4 ...(11.62)
Figure 11.7 shows the specific heat as a function of temperature. The
curve shows the discontinuity in the specific heat at T_c.
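The behaviour summarized by Eqs. (11.58) and (11.62) is easy to check numerically. The short Python sketch below is illustrative only; the constants α_0, α_4, T_c and the variable names are arbitrary choices, not values from the text. It minimizes the Landau free energy on a grid of η and confirms that the minimizing η vanishes above T_c and grows as the square root of (T_c - T) below it.

```python
import numpy as np

alpha0, alpha4, Tc = 1.0, 1.0, 100.0   # arbitrary illustrative constants

def G_excess(eta, T):
    """Ordering part of the Landau free energy, Eq. (11.53)."""
    a2 = alpha0 * (T - Tc)             # Eq. (11.54)
    return a2 * eta**2 + alpha4 * eta**4

eta = np.linspace(-3.0, 3.0, 20001)
for T in (110.0, 100.0, 95.0, 80.0):
    eta_min = eta[np.argmin(G_excess(eta, T))]
    eta_theory = np.sqrt(alpha0 * (Tc - T) / (2 * alpha4)) if T < Tc else 0.0
    print(f"T = {T:6.1f}  numerical |eta| = {abs(eta_min):.3f}  Eq.(11.58) gives {eta_theory:.3f}")

# Jump in the heat capacity, Eq. (11.62):
print("Delta C_p =", Tc * alpha0**2 / (2 * alpha4))
```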


11.8 PHASE TRANSITION IN FERROMAGNETIC MATERIALS

For magnetic transitions we have a situation similar to that of the gas-
liquid transition. Figure 11.8 shows the isotherms drawn for different
temperatures. If an external magnetic field H is applied to the system,
a magnetization M is induced in it in a direction depending upon the
direction of the field. For temperatures below the critical temperature,
T < T_c, a residual or spontaneous magnetization M_0 remains in the system
when the field H is switched off; whereas for T > T_c there is no residual
magnetization.

There is a close analogy between the gas and the magnetic system, which can
be seen if one compares Figs. 11.5 and 11.8 and identifies M with V and
H with P. The striking difference between the two systems is that, in the
magnetic case, the flat parts of the isotherms lie on top of one another.
A simple example of such a transition is the transition from a paramagnetic
to a ferromagnetic system. The magnetic moments in a paramagnetic
substance above the Curie temperature are randomly oriented and there
is no net magnetization. The paramagnetic system at this temperature is
rotationally invariant. As the temperature is lowered, the magnetic interaction
energy becomes more important than the thermal energy, the magnetic
moments become ordered and this gives rise to magnetization. The
symmetry that is broken in this transition is rotational symmetry.
If M is the average magnetization of the system, the Helmholtz free energy
can be expressed as
    F(M, T) = F(0, T) + α_2(T) M^2 + α_4(T) M^4          ...(11.62A)
If an external magnetic field H is applied to the system, the Gibbs free
energy of the system can be expressed as
    G(H, T) = F(M, T) - HM
            = F(0, T) - HM + α_2(T) M^2 + α_4(T) M^4          ...(11.63)
This is minimized when
    (∂G/∂M)_T = -H + 2α_2 M = 0          ...(11.64)
where we have considered only the lowest order in M.
Therefore
    M = H/(2α_2)          ...(11.65)
This shows that above the Curie point a non-zero magnetization is possible
only in the presence of an external field.
Above the Curie point, the isothermal susceptibility is therefore
    χ_T = (∂M/∂H)_T = 1/(2α_2) = 1/(2α_0(T - T_c))          ...(11.66)
It is obvious from this relation that at T = T_c the susceptibility
becomes infinite. Therefore, at the critical point even a small external
field may have a large effect on the magnetization.
Below the critical point, as shown in the preceding section,
    M = [α_0(T_c - T)/2α_4]^{1/2}          ...(11.67)
Therefore
    dM/dT = -(1/2) [α_0/(2α_4(T_c - T))]^{1/2}          ...(11.68)
Clearly, the magnetization increases abruptly below the critical point.


Let us now calculate the chemical potential, the entropy and the
volume of the magnetic material.
(i) From Eq. (11.63), when H = 0,
    μ = G(M, T) = F(0, T) + α_2(T) M^2 + α_4(T) M^4          ...(11.69)
In the paramagnetic domain (M = 0)
    μ_p = F(0, T)          ...(11.70)
and in the ferromagnetic domain
    μ_f = F(0, T) - α_2^2/(4α_4)          ...(11.71)
(ii) Entropy in the paramagnetic domain:
    S_p = -∂F(0, T)/∂T          ...(11.72)
while in the ferromagnetic domain it is
    S_f = -∂F(0, T)/∂T + α_2(2α_4 α_2' - α_2 α_4')/(4α_4^2)          ...(11.73)
where the primes denote differentiation with respect to T. At the critical
temperature α_2 = 0; therefore S_f = S_p.
(iii) Volume in the paramagnetic domain:
    V_p = ∂F(0, T)/∂P          ...(11.74)
and in the ferromagnetic domain:
    V_f = ∂F(0, T)/∂P - α_2(2α_4 ∂α_2/∂P - α_2 ∂α_4/∂P)/(4α_4^2)          ...(11.75)
Because α_2 = 0 at T = T_c, V_f = V_p. Hence
    V_p = V_f          ...(11.76)
Thus, there is no discontinuity in entropy or volume at the Curie point.
Let us now investigate whether there is any discontinuity in the
heat capacity at the Curie point:
    C_pf - C_pp = T_c d(S_f - S_p)/dT = T_c d/dT [α_2(2α_4 α_2' - α_2 α_4')/(4α_4^2)]
                = T_c α_0^2/(2α_4)          ...(11.77)
(because α_2 = 0 at T = T_c)
Since α_4 > 0, the heat capacity in the ferromagnetic state is larger than
that in the paramagnetic state.
In the same way we can show that the compressibility in the
ferromagnetic state exceeds that in the paramagnetic state.

T ----- ►
Fig. 11.9

........ [_ l. . .... ..................... -rtf*. o £3G&aaaBSBBt& £ . K>/isnjr.r^

I
Helium has proven to be one of the most remarkable elements in nature,
particularly from the standpoint o f statistical physics. It is the only
substance which remains liquid under ordinary pressure at absolute zero;
all others freeze to the solid state if sufficiently cooled. Solidification of
helium takes place under external pressure exceeding 25 atmospheres.
According to classical concepts, at absolute zero all the atoms of a
substance cease to move and occupy fixed positions within the body, thus
causing it to become a solid. The fact that helium remains a liquid at this
temperature indicates that the entirely different concepts of quantum
mechanics are required for its understanding.

Fig. 11.10
Helium atoms occur in nature in two stable isotopic forms, 3He and
4He. The former is the rarer of the two isotopes. Chemically both are
virtually identical. The only differences between them are the difference of
mass and the fact that 3He has nuclear spin 1/2 while 4He has spin zero. 3He,
therefore, obeys Fermi-Dirac statistics, while 4He obeys Bose-Einstein
statistics.
Because of the difference in their statistics, the two systems exhibit
very different behaviour upon cooling. 4He, which is a boson liquid, exhibits
a second-order transition to a superfluid state. This was first observed as
an abrupt jump in the liquid's specific heat (Fig. 11.9). 3He also undergoes
a transition to a superfluid state, but at a much lower temperature, about
2.7 x 10^-3 K. The mechanisms of the two transitions, however, are different.
The co-existence curves for liquid 4He are shown in Fig. 11.10. At low
temperature, 4He has four phases: a solid phase, two liquid phases and
a vapour phase. As the normal liquid is cooled, a line of λ-points occurs
near 2 K, the exact value of the temperature depending upon the pressure. The
λ-point line indicates that a continuous phase transition has occurred. Liquid
helium above the λ-line is called helium I (He I) and that below it He II.
It is the latter that shows unique properties. Initial experiments with
liquid 4He showed that it was able to leak out of its container
through cracks so tiny that even 4He gas could not leak through.
Apparently it has no viscosity.
Further light on the properties of helium is thrown by a remarkable
experiment first performed by Andronikashvili.
A very closely spaced pile of disks is suspended on a fibre (Fig. 11.11).
The set of disks can perform rotational oscillations about the fibre as the axis.
The period of the oscillations depends on the mass of the disks and, hence,
the effective mass of the disks can be determined from the period of
oscillations. The disks are immersed in He I and the period of oscillations
measured. It is found that the helium trapped between the disks is carried
around with them and effectively adds to their mass. If the temperature is
lowered below the λ-point, it is found that the mass of helium carried with
the disks rapidly diminishes and around 1 K virtually none moves with the
disks.
These properties of He II are described well by a two-fluid model. It
is assumed that He II is a mixture of two fluids, the normal fluid and
the superfluid, and that there is no viscous interaction between them.
While the normal fluid has all the usual properties of a fluid, the superfluid
has some curious properties: it has zero entropy and no viscosity.

11.10 CHEMICAL EQUILIBRIUM


Chemical reactions are usually written in the form
    ν_A A + ν_B B ⇌ ν_X X + ν_Y Y          ...(11.78)
or, more generally,
    Σ_i ν_i A_i = 0          ...(11.79)
where A_i are the chemical symbols for the reacting substances and ν_i are
integers (positive or negative) giving the number of molecules of each substance.
Consider, for example, the reaction
    2NH_3 ⇌ 3H_2 + N_2
This can be written as
    3H_2 + N_2 - 2NH_3 = 0
Here ν_H2 = 3; ν_N2 = 1; and ν_NH3 = -2.
Along with the direct chemical reaction, the reverse reaction can also
take place. However, such reactions ultimately lead to an equilibrium
state in which the two opposite reactions occur at rates such that the
number of particles of each reacting substance remains constant. This
equilibrium is called chemical equilibrium.

11.11 CONDITION FOR CHEMICAL EQUILIBRIUM


In equilibrium the Gibbs free energy must have its lowest possible value.
That is,
    dG = -S dT + V dP + Σ_i μ_i dN_i = 0          ...(11.80)
Let us assume that the reaction occurs at constant pressure and
temperature, and note that the changes dN_i are proportional to the
coefficients ν_i. Under these conditions Eq. (11.80) reduces to
    Σ_i μ_i ν_i = 0          ...(11.81)
This is the condition for chemical equilibrium.

11.12 THE LAW OF MASS ACTION


Let us now apply the condition of equilibrium, Eq. (11.81), to a reaction
taking place in a mixture of ideal gases.
The chemical potential for the translational modes of each gas, as
given by Eq. (6.94), is
    μ_i = -kT ln[ (2πm_i kT/h^2)^{3/2} kT/p_i ]          ...(11.82)
where p_i is the partial pressure of the ith gas. In addition to the
translational mode, there are rotational, vibrational and electronic modes
of excitation possible in a molecule. We, therefore, introduce a factor Z_i
in the above expression for μ_i, which represents the contribution of
these internal modes to the partition function.
Thus
    μ_i = -kT ln[ Z_i (2πm_i kT/h^2)^{3/2} kT/p_i ]          ...(11.83)
Substituting this in Eq. (11.81), we have
    -Σ_i ν_i kT ln[ (2π/h^2)^{3/2} m_i^{3/2} Z_i (kT)^{5/2} ] + Σ_i ν_i kT ln p_i = 0
so that
    Σ_i ln p_i^{ν_i} = Σ_i ln[ (2π/h^2)^{3/2} m_i^{3/2} Z_i (kT)^{5/2} ]^{ν_i}
or
    Π_i p_i^{ν_i} = Π_i [ m_i^{3/2} Z_i ]^{ν_i} [ (2π/h^2)^{3/2} (kT)^{5/2} ]^{Σ_i ν_i} = K(P, T)          ...(11.84)
This is called the law of mass action. The quantity K(P, T) is called
the chemical equilibrium constant and is a function of pressure and
temperature.

Since ν_i may have positive as well as negative values, the right-hand side of
Eq. (11.84) [viz. K(P, T)] is actually a fraction, with a numerator representing
the partial pressures of the appearing substances and a denominator
representing the partial pressures of the disappearing substances. One can
easily see from Eq. (11.84) that the greater the reaction constant K(P, T),
the greater is the shift of the equilibrium in the direction of the end products.
It must be mentioned here that the partition function appearing in
the formula (11.84) needs a correction. It is conventional to measure the
energies of the molecules from the ground state and use the partition
function Z_r = Σ_ν e^{-βε_ν}. However, a certain amount of energy is required to
form a molecule of a certain species r in its ground state from the
constituent atoms in their ground states. If this energy is represented by
ε*_r, the correct partition function will be
    Z = Σ_ν e^{-β(ε_ν + ε*_r)} = Z_r e^{-βε*_r}          ...(11.85)
We shall now consider a couple of examples to illustrate the application
of the law of mass action.

11.13 SAHA IONIZATION FORMULA


At a very high temperature, collisions between gas particles may lead to
the ionization of atoms. We shall consider in this section the thermal
ionization of a monatomic gas.
Ionization equilibrium can be considered as a particular case of
chemical equilibrium. The equation of the reaction is
    A_a ⇌ A^+ + e^-
i.e.
    A^+ + e^- - A_a = 0          ...(11.86)
where A_a denotes the neutral atom, A^+ the singly ionized atom and e^- the
electron. The stoichiometric coefficients of the reaction are
    ν_+ = 1,  ν_- = 1,  ν_a = -1
The equilibrium constant K(P, T) in this case is
    K(P, T) = p_+ p_e/p_a = (2π/h^2)^{3/2} (kT)^{5/2} (m_+ m_e/m_a)^{3/2} (Z_+ Z_e/Z_a) e^{-ε*/kT}          ...(11.87)
where ε* is the ionization energy of the atom. For atoms and ions the only
internal states are the electronic states and, hence, we may replace the
partition functions Z_+, Z_e, Z_a by the ground-state degeneracies g_+, g_e, g_a.
Further, for electrons g_e = 2. Therefore, Eq. (11.87) can be written as
    K(P, T) = p_+ p_e/p_a = 2 (g_+/g_a) (2πm_e/h^2)^{3/2} (kT)^{5/2} e^{-ε*/kT}          ...(11.88)
(because m_+ ≈ m_a)
This problem was first considered by Saha and Eq. (11.88) is due to
him. It is used in problems in astrophysics, electrical conductivity in
flames, the formation of the ionosphere, the determination of electron
affinity in halogens, etc.
Equation (11.88) can be expressed in terms of the fraction α of ionized
atoms in the total number of particles:
    α = p_+/P = p_e/P,  where P = p_a + p_+ + p_e
In a neutral plasma the number of ions equals the number of electrons,
i.e. p_+ = p_e.
Therefore
    p_+ p_e/p_a = P(p_e/P)^2/(p_a/P) = P α^2/(1 - 2α)
Hence
    α^2/(1 - 2α) = (2πm_e/h^2)^{3/2} [(kT)^{5/2}/P] (2g_+/g_a) e^{-ε*/kT}          ...(11.89)
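Equation (11.89) is easily evaluated numerically. The sketch below is illustrative only: it assumes atomic hydrogen with ε* = 13.6 eV, g_+ = 1, g_a = 2 and a pressure of 10^2 Pa, values chosen simply to show how the ionization fraction rises with temperature (the function and variable names are not from the text).

```python
import numpy as np

k = 1.381e-23      # J/K
h = 6.626e-34      # J s
me = 9.109e-31     # kg
eV = 1.602e-19     # J

def ionization_fraction(T, P=1.0e2, eps=13.6*eV, g_plus=1.0, g_a=2.0):
    """Solve Eq. (11.89) for alpha = p_+/P (so alpha <= 1/2 by this definition)."""
    R = (2*np.pi*me/h**2)**1.5 * (k*T)**2.5 / P * 2*g_plus/g_a * np.exp(-eps/(k*T))
    # alpha^2/(1 - 2*alpha) = R  gives  alpha = sqrt(R^2 + R) - R
    return np.sqrt(R*R + R) - R

for T in (5000, 8000, 12000, 20000):
    print(f"T = {T:6d} K   alpha = {ionization_fraction(T):.4f}")
```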
11.14 DISSOCIATION OF MOLECULES
Molecules which are stable at a low temperature often dissociate at a
high temperature. Let us consider a reaction of the type
    A_2 ⇌ 2A
Suppose the temperature is such that the molecules are not ionized
but their rotational modes are fully excited. We can, therefore, replace
the electronic partition functions by g_m and g_a, the ground-state degeneracies
of the molecules and atoms, respectively. Equation (11.84) then becomes
    K(P, T) = p_a^2/p_M = (2π/h^2)^{3/2} (kT)^{5/2} (m_a^2/m_M)^{3/2} [g_a^2/(g_m Z_rot Z_vib)] e^{-ε_d/kT}          ...(11.90)
where m_a is the atomic mass, m_M the molecular mass and ε_d the
dissociation energy.
Now
    Z_rot = T/(2θ_r),   Z_vib = 1/(1 - e^{-θ_v/T})
where θ_r, θ_v are the rotational and vibrational characteristic temperatures.
Further, m_M = 2m_a. Hence
    K(P, T) = p_a^2/p_M = (πm_a/h^2)^{3/2} (kT)^{5/2} (g_a^2/g_m) (2θ_r/T) (1 - e^{-θ_v/T}) e^{-ε_d/kT}          ...(11.91)
If we put α_a = p_a/P, where P = p_a + p_M,
    K(P, T) = P(p_a/P)^2/(1 - p_a/P) = P α_a^2/(1 - α_a)          ...(11.92)

PROBLEMS
11.1 Find the critical temperature, pressure and volume for a gas
described by the equation of state P(V - b) = RT e^{-a/RTV} (Dieterici
equation), where a, b are constants.
11.2 Find the heat of vaporization of water at a pressure of 1.013 x
10^5 Pa, given that
(i) its boiling point at a pressure of 1.013 x 10^5 Pa is 100°C and at
1.05 x 10^5 Pa it is 101°C, and
(ii) its specific volume upon vaporization at the lower pressure
increases from 1.04 x 10^-3 to 1.673 m^3/kg.
11.3 Given that the critical temperature of hydrogen is 33.2 K, its
critical pressure is 1.295 x 10^6 Pa and the molar volume in the
critical state is 6.5 x 10^-5 m^3/mole, find the van der Waals
constants.
11.4 Using the data obtained in the preceding problem, find the
pressure from the van der Waals equation for (i) a molar volume
10^-3 m^3/mol at a temperature 300 K; and (ii) a molar volume
10^-4 m^3/mol at a temperature 35 K. Compare the results with
the pressure of the ideal gas under the conditions of each case.
11.5 The density of ice is ρ = 0.9 g/cm^3 and its heat of fusion is
334 kJ/kg. Find the change in the melting point of ice if the
pressure is changed from 0.098 to 0.196 MPa.

□ □ □

TRANSPORT PHENOMENA

So far we have considered the application o f statistical mechanics to the


systems in equilibrium when looked at from a microscopic point of view.
Details of the interaction processes which bring about the equilibrium
were not considered. We shall discuss in this chapter some simple
approximate methods for dealing with non-equilibrium processes. To obtain
a deeper understanding of the transport phenomena, a more general
theory will be given in the next chapter. We shall be mainly concerned
with the interactions that take place between atoms and molecules at
kinetic energies which are appropriate for conventional thermal energies.

12.1 MEAN COLLISION TIME


Let us consider a gas which is dilute. If the gas is not initially in
equilibrium, collisions between the molecules bring about the ultimate
equilibrium situation. In dilute gases the time between collisions is much
greater than the time involved in a collision. Let P(t) be the probability
that a molecule survives a time t without suffering a collision and γdt the
probability that a molecule suffers a collision between time t and t + dt.
One can see that P(0) = 1, and that the probability that a molecule does
not suffer a collision in the interval t to t + dt is 1 - γdt. Therefore, the
probability that a molecule survives a time t + dt is
    P(t + dt) = P(t)(1 - γdt)          ...(12.1)
Therefore
    P(t) + (dP(t)/dt) dt = P(t) - P(t)(γdt)
or
    (1/P(t)) dP(t)/dt = -γ          ...(12.2)
Integrating,
    ln P(t) = -γt + ln C
or
    P(t) = C e^{-γt}          ...(12.3)
Because P(0) = C = 1,
    P(t) = e^{-γt}          ...(12.4)

Consider now another situation: a molecule survives a time t without
collision, but suffers a collision in the interval t to t + dt. The probability
of occurrence of such an event is
    p_1(t) dt = e^{-γt} γ dt          ...(12.5)
Therefore, the mean time between collisions is
    τ = ∫_0^∞ t p_1(t) dt = ∫_0^∞ e^{-γt} γ t dt
Put γt = y. Therefore
    τ = (1/γ) ∫_0^∞ e^{-y} y dy = 1/γ          ...(12.6)
or
    p_1(t) = e^{-t/τ}/τ          ...(12.7)
Since
    ∫_0^∞ p_1(t) dt = ∫_0^∞ e^{-t/τ} dt/τ = 1
the probability (12.7) is properly normalized.
The mean free path l, that is, the mean distance travelled by a molecule
between collisions, is given by
    l = ⟨v⟩τ = ⟨v⟩/f_A          ...(12.8)
where ⟨v⟩ is the mean velocity of a molecule and f_A is the collision
frequency.
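Equations (12.4) to (12.6) say that free-flight times are exponentially distributed with mean τ = 1/γ. A quick numerical check is sketched below; the value of γ and the sample size are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 2.0e9                     # assumed collision probability per unit time, 1/s

# Sample free-flight times from p1(t) = gamma * exp(-gamma*t), Eqs. (12.5)/(12.7)
t = rng.exponential(scale=1.0/gamma, size=1_000_000)

print("sample mean of t :", t.mean())          # should approach tau = 1/gamma
print("1/gamma          :", 1.0/gamma)
print("survival P(t > 2*tau) :", (t > 2.0/gamma).mean(), " vs exp(-2) =", np.exp(-2.0))
```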

12.2 SCATTERING CROSS-SECTION


The effect of the encounter or interaction between two particles is usually
described in terms of a collision cross-section.
Consider a beam of particles. The number of particles crossing a unit
area perpendicular to the beam per unit time is called the flux F. If n is
the number of particles per unit volume (Fig. 12.1),
    F = n⟨v⟩          ...(12.9)
The dimensions of flux are L^-2 T^-1.          ...(12.10)
When the beam approaches the target particles, some of the particles
are deviated by the force field, so that they eventually move away at an
angle θ with respect to the original direction. The angle θ is called the
scattering angle. Let dF be the number of particles scattered per second
into a solid angle dΩ along a direction given by θ and φ. The number dF
will obviously be proportional to F as well as to the magnitude of the
solid angle dΩ.
228 Fundamentals of Statistical Mechanics

Therefore
    dF ∝ F dΩ
i.e.
    dF = σ(Ω) F dΩ = F dσ          ...(12.11)
The constant of proportionality σ(Ω) will depend on θ and φ and is
known as the differential cross-section.
We find from Eq. (12.11) that
    σ(Ω) dΩ = dF/F = dσ          ...(12.12)
The differential cross-section, therefore, is a measure of the probability
that a particle will be scattered into the solid angle dΩ.
The total scattering cross-section σ, i.e. the total number of particles
deflected in all directions per unit flux per unit time, is given by
    σ = ∫ σ(Ω) dΩ          ...(12.13)
The total number of particles scattered in all directions per unit time is
    N = Fσ          ...(12.14)
The dimensions of σ are T^-1 (L^-2 T^-1)^-1 = L^2.          ...(12.15)
The geometrical meaning of σ(Ω) is: the number of particles scattered
into the solid angle dΩ per unit time equals the number of particles in
the incident beam crossing an area equal to σ(Ω) dΩ per unit time.          ...(12.16)
Thus, we speak of nuclear cross-sections being of the order of 10^-30
m^2, molecular cross-sections of the order of 10^-20 m^2, and so on.
Provided that the interaction between the particles is a function of r
only and that processes such as the creation of new particles are not
involved, the definition of the differential cross-section as given by Eq.
(12.12) is valid, whether the interaction is described by quantum or classical
mechanics.
Consider now a particle in a beam of particles incident on a scattering
centre (Fig. 12.2).
Let b be the perpendicular distance between the scattering centre
and the velocity of the incident particle; b is known as the impact
parameter. Suppose this particle is scattered through an angle θ. Then a particle
with an impact parameter b + db will be scattered through a smaller
angle θ - dθ. The particles have azimuthal symmetry. Therefore, all the
particles in the annular ring 2πb db will be scattered into the solid angle
    dΩ = 2π sinθ dθ
Therefore
    2πb F db = 2πF sinθ dθ σ(Ω)          ...(12.17)
or
    σ(Ω) = -(b/sinθ)(db/dθ)          ...(12.18)

The negative sign is attached because the derivative db/dθ is negative,
that is, as b increases θ decreases.
Example 12.1 A hard sphere of radius s is advancing towards a
sphere of radius r. Find the differential and total cross-sections.
The sphere of radius s will get scattered after the collision, as shown in
Fig. 12.3.
Here the impact parameter is
    b = (s + r) sin α = (s + r) sin((π - θ)/2) = (s + r) cos(θ/2)          ...(12.19)
Therefore
    db/dθ = -(1/2)(s + r) sin(θ/2)          ...(12.20)
Therefore
    σ(Ω) = -(b/sinθ)(db/dθ) = (s + r)^2 sin(θ/2) cos(θ/2)/(2 sinθ)
         = (s + r)^2 sin(θ/2) cos(θ/2)/(4 sin(θ/2) cos(θ/2)) = (s + r)^2/4          ...(12.21)
and
    σ = ∫ σ(Ω) dΩ = [(s + r)^2/4] 4π = π(s + r)^2          ...(12.22)


If σ is known, one can readily find the probability γ that a molecule
suffers a collision in unit time.
Consider a gas consisting of a certain kind of molecules. Let n be the
mean number of molecules per unit volume. Also let ⟨v⟩ be the mean
speed of these molecules and ⟨V⟩ be their mean relative speed.
Suppose we wish to calculate γ for a particular type of molecule in
this gas. Let their number per unit volume be n_1. The flux of this type
of molecules, say type 1 molecules, incident on any one molecule in a
volume element d^3r is given by
    F_1 = n_1⟨V⟩          ...(12.23)
The number of these incident molecules scattered per unit time by the
target molecule is n_1⟨V⟩σ. Hence, the total number of type 1 molecules
scattered by all the molecules in the element d^3r is
    n_1⟨V⟩σ n d^3r
Therefore the collision probability per unit time is
    γ = Total number of scattered molecules / Total number of molecules present
      = n_1⟨V⟩σ n d^3r / (n_1 d^3r) = n⟨V⟩σ          ...(12.24)
The collision probability γ, therefore, is proportional to the number of
molecules per unit volume, to the relative velocity ⟨V⟩ and to the cross-
section σ.
We know that the mean free path l is
    l = τ⟨v⟩ = ⟨v⟩/(n⟨V⟩σ)          ...(12.25)
An approximate value of the ratio ⟨v⟩/⟨V⟩ can be found as follows.
Consider two different molecules moving with velocities v_1 and v_2.
Their relative velocity is given by
    V = v_1 - v_2
Therefore
    V^2 = v_1^2 + v_2^2 - 2v_1·v_2 = v_1^2 + v_2^2 - 2v_1 v_2 cos θ
where θ is the angle between the velocity vectors.
Therefore
    ⟨V^2⟩ = ⟨v_1^2⟩ + ⟨v_2^2⟩ - 2⟨v_1 v_2 cos θ⟩ = ⟨v_1^2⟩ + ⟨v_2^2⟩
because cos θ averages to zero for molecules moving in random directions.
If the molecules are identical,
    ⟨v_1^2⟩ = ⟨v_2^2⟩ = ⟨v^2⟩
Therefore
    ⟨V^2⟩ = 2⟨v^2⟩
and
    ⟨v^2⟩/⟨V^2⟩ = 1/2
We approximately take ⟨v⟩/⟨V⟩ ≈ (⟨v^2⟩/⟨V^2⟩)^{1/2} = 1/√2, and thus
⟨V⟩ ≈ √2 ⟨v⟩, so that
    l = 1/(√2 n σ)          ...(12.26)
Example 12.2 Find the mean free path and collision rate for
nitrogen gas at STP. Assume the diameter of the molecule to be
2 x 10^-10 m.
Take T = 300 K, P = 101.325 kN/m^2.
    n = P/kT = 101.325 x 10^3/(1.38 x 10^-23 x 300) = 24.5 x 10^24 /m^3
    σ = πd^2 = 3.142 x (2 x 10^-10)^2 = 12.56 x 10^-20 m^2
Therefore
    l = 1/(√2 n σ) = 2.3 x 10^-7 m
    ⟨v^2⟩^{1/2} = (3kT/m)^{1/2} = [3 x 1.38 x 10^-23 x 300/(28 x 1.66 x 10^-27)]^{1/2} ≈ 500 m/s
    τ = l/⟨v⟩ ≈ 2.3 x 10^-7/500 = 4.6 x 10^-10 s
and the collision rate
    f_A = 1/τ = 2.2 x 10^9 /s
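The arithmetic of Example 12.2 is easy to reproduce; the short sketch below uses the same input values as the example and simply re-evaluates n, σ, l, the rms speed and the collision rate.

```python
import math

k = 1.38e-23          # J/K
T = 300.0             # K
P = 101.325e3         # Pa
d = 2.0e-10           # assumed molecular diameter, m
m = 28 * 1.66e-27     # mass of an N2 molecule, kg

n = P / (k * T)                       # number density
sigma = math.pi * d**2                # collision cross-section
l = 1.0 / (math.sqrt(2) * n * sigma)  # mean free path, Eq. (12.26)
v = math.sqrt(3 * k * T / m)          # rms speed, used here for <v>
tau = l / v
print(f"n = {n:.3e} m^-3, sigma = {sigma:.3e} m^2")
print(f"l = {l:.2e} m, v = {v:.0f} m/s, tau = {tau:.2e} s, f_A = {1/tau:.2e} /s")
```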



12.3 VISCOSITY
Consider a gas enclosed between two plates (Fig. 12.4). Let the distance
between the plates be L. Suppose the lower plate at z = 0 is stationary
and the upper at z = L is moving in the x-direction with a constant
velocity u_0. To a good approximation we can assume that the layers of the
gas close to the plates will have the velocities of the plates. Thus, the
lowest layer will have velocity u_x = 0, and the topmost u_x = u_0. The layers
between the plates will have velocities in increasing order of magnitude,
as shown in the figure. This is obviously a non-equilibrium situation.

Fig. 12.4
Internal friction in gases, i.e. viscosity, is caused by the transport of
molecules moving across the direction of motion of gas layers having
different velocities. As a result of thermal motion, the molecules go over
from one layer to another, transferring their momentum from one layer
to another. As a result of this exchange of molecules between layers
moving with different velocities, the momentum of the faster moving
layer decreases, while that of the slower moving layer increases. This
is the essence of the mechanism by which the internal friction force
between the layers appears. Every layer in the gas exerts a tangential
force on the layer above it, tending to reduce its drift velocity. There is,
thus, a gradient of drift velocity, du_x/dz, superimposed on the random
motion of the molecules.
Let A, Z, B be three planes, each of unit area, situated in the gas
(Fig. 12.5). Suppose that the separation between the planes is such that
AZ = ZB = l (the mean free path). Let the molecules have velocities
u_x(z + l) and u_x(z - l) at A and B, respectively. Since the molecules moving
downwards have on an average their last collision at A, their drift velocity
at the moment of crossing the plane Z will be u_x(z + l); similarly, those
moving upwards carry u_x(z - l). Since the molecular velocities are in random
directions, roughly one-third of them will be moving in the z-direction.
Half of these, that is 1/6 of the molecules, will be moving downwards and
1/6 upwards. As the molecules cross the plane Z, molecules below the
plane gain momentum while those above lose it. Since the force acting on
a system is equal to the rate of change of momentum, the gas above the
plane is acted upon by a force due to the gas below the plane.
The momentum in the x-direction transported per unit time across Z
in the upward direction is (1/6)n⟨v⟩m u_x(z - l), and that in the downward
direction is (1/6)n⟨v⟩m u_x(z + l), where n is the number of molecules per
unit volume, ⟨v⟩ is the mean molecular speed and m is the mass of a
molecule.
The net transport of momentum per unit time, per unit area, from
below to above the plane Z is
    F_zx = (1/6)n⟨v⟩m u_x(z - l) - (1/6)n⟨v⟩m u_x(z + l)
         = (1/6)n⟨v⟩m [u_x(z - l) - u_x(z + l)]          ...(12.27)
Expanding by Taylor's theorem, we have
    F_zx = (1/6)n⟨v⟩m [u_x(z) - (du_x/dz)l - u_x(z) - (du_x/dz)l]
         = (1/6)n⟨v⟩m (-2l du_x/dz)
         = -(1/3)n⟨v⟩m l (du_x/dz) = -η (du_x/dz)
where
    η = (1/3)n⟨v⟩m l = (1/3)ρ⟨v⟩l          ...(12.28)
is called the coefficient of viscosity of the gas.
Since the calculation is over-simplified, the factor 1/3 should not be
trusted too much. A detailed calculation, treating the gas as composed of
hard-sphere molecules, leads to a relation of the same form with a larger
numerical coefficient,
    η = (5π/32) ρ⟨v⟩l          ...(12.29)
The relation (12.28) has some interesting features.
Taking into account the Maxwell distribution, we can replace ⟨v⟩ by
√(8kT/πm) and l by Eq. (12.26). We thus have
    η = (1/3)mn √(8kT/πm) [1/(√2 n σ)] = [1/(3√2 σ)] √(8kTm/π)
      = (8kTm)^{1/2}/(3√2 π^{3/2} d^2)          ...(12.30)
This relation shows that:
(i) η is independent of n, that is, of the number of molecules;
(ii) the smaller the size of the molecule, the larger is the coefficient of
viscosity; and
(iii) η increases with temperature.


These results seem somewhat extraordinary, as they give a big
jolt to our intuitive expectations. One would expect the viscous resistance
to increase if the number of molecules or their size increases. Further,
in the case of liquids, the viscosity decreases rapidly with temperature.
However, the three conclusions stated above were confirmed
experimentally by Maxwell. This apparently extraordinary behaviour can
be explained as follows.
It is indeed true that by increasing the number of molecules, more
molecules will be available to transport momentum from one plane to
another; but then the mean free path will be proportionately reduced and
the net rate of momentum transfer will remain unchanged.
Regarding the temperature dependence of η, we see from Eq. (12.30)
that it varies as T^{1/2}. This is because we have treated the molecules as hard
spheres and have assumed σ = πd^2. In reality η increases more rapidly
than T^{1/2}. This can easily be understood if we note that σ depends on the
mean relative speed of the molecules which, in turn, depends upon the
temperature. With increasing temperature σ decreases and, hence, η
increases more rapidly than T^{1/2}. In liquids, however, momentum transfer
across a plane occurs by direct forces between the molecules on adjacent
sides of the plane and, hence, depends upon n, which decreases with
increasing temperature. This results in a decrease of the viscous resistance
with temperature.
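A small numerical sketch of Eq. (12.30) is given below; a hard-sphere diameter of 3.7 x 10^-10 m is assumed for nitrogen (an illustrative value, not one quoted in the text). It shows the predicted magnitude of η and its T^{1/2} scaling.

```python
import math

k = 1.38e-23
m = 28 * 1.66e-27      # N2 molecular mass, kg
d = 3.7e-10            # assumed hard-sphere diameter, m

def eta(T):
    """Hard-sphere viscosity, Eq. (12.30)."""
    return math.sqrt(8 * k * T * m) / (3 * math.sqrt(2) * math.pi**1.5 * d**2)

for T in (200, 300, 600):
    print(f"T = {T} K  eta = {eta(T):.2e} Pa s")
print("eta(600)/eta(300) =", eta(600)/eta(300), " (sqrt(2) =", math.sqrt(2), ")")
```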

12.4 ELECTRICAL CONDUCTIVITY


A simple treatment of electrical conductivity is given in this section.
If a small uniform electric field is applied to a system containing
charged particles which are free to move, the particles will be accelerated
in the direction of the field.
Now j ∝ E. Therefore
    j = σ_el E          ...(12.31)
The constant of proportionality σ_el is called the electrical conductivity
of the system.
If n is the mean number of charged particles per unit volume and ⟨v⟩
is the mean velocity in the field direction, the mean number of particles
crossing a unit area per unit time perpendicular to the direction of the
field is n⟨v⟩ and, hence,
    j = n e⟨v⟩          ...(12.32)
where e is the charge carried by each particle.
Now
    ⟨v⟩ = aτ          ...(12.33)
where a is the acceleration and τ the collision time.
Therefore
    ⟨v⟩ = (eE/m)τ   (because ma = eE)          ...(12.34)
and
    j = (ne^2τ/m) E          ...(12.35)
Therefore
    σ_el = ne^2τ/m          ...(12.36)
In the case of the electron gas in metals, in which the electrons are
scattered by the atoms of the solid, we will assume that only electrons at
the Fermi level are thermally excited. Such electrons will have velocities
    v_F = (2ε_F/m)^{1/2}
If the mean free path is l_F, then τ = l_F/v_F and
    σ_el = ne^2 l_F/(m v_F)          ...(12.37)

12.5 THERMAL CONDUCTIVITY


Consider a system in which the temperature is a function of, say, the
z-coordinate. Since the temperature is not uniform, energy in the
form of heat will flow from the region of higher to that of lower
temperature. If the temperature gradient dT/dz is not too large, the heat
flowing per unit area perpendicular to z, Q_z, will be proportional to the
temperature gradient:
    Q_z ∝ dT/dz
    Q_z = -K dT/dz          ...(12.38)
Note the negative sign, which is self-explanatory. K is called the
coefficient of thermal conductivity of the substance.
As explained in the preceding section, if there are n molecules per
unit volume, roughly one-third of them will be moving along the z-direction.
Half of these, (1/6)n molecules, will have, on an average, mean velocity ⟨v⟩
in the +z direction and the other half will have the same velocity in the
-z direction. Consider a unit area in a plane z = constant. In the elementary
theory of transport, a molecule is assumed to take up the properties of
the gas in the vicinity of its last collision. Assume two parallel planes
A and B as shown in Fig. 12.6, each at a distance l from the plane
z = constant.
If dT/dz < 0, the molecules coming from A and crossing the area in the
plane z = constant will have less mean energy (ε_1) than that (ε_2) of the
molecules coming from B. The result is a net transport of energy from the
region below the plane to that above it.
Therefore
    Q_z = (1/6)n⟨v⟩(ε_2 - ε_1) = -(1/3)n⟨v⟩ l c (dT/dz)          ...(12.39)
where c = ∂ε/∂T is the specific heat per molecule.
Comparison of Eq. (12.38) with Eq. (12.39) gives
    K = (1/3)n⟨v⟩ l c          ...(12.40)

The considerations could be extended to the calculation of thermal


conductivity o f a metal, when heat is predominantly transferred by
conduction electrons. As in the case of electrical conductivity, we assume
that only electrons at the Fermi level participate in the transportation of
the energy, their velocity and the mean free path being vF and lF
respectively.
Therefore
    K = (1/3)n v_F l_F (∂ε/∂T)          ...(12.41)
where ∂ε/∂T is the average specific heat per electron and is equal to
π^2 k^2 T/(2ε_F) [see Eq. (10.77)].
Therefore
    K = (1/3)n v_F l_F π^2 k^2 T/(2ε_F)
Substituting for ε_F (= (1/2)m v_F^2),
    K = (1/3)n l_F π^2 k^2 T/(m v_F)          ...(12.42)
Consider now the ratio of the thermal to the electrical conductivity:
    K/σ_el = [n l_F π^2 k^2 T/(3m v_F)] [m v_F/(n e^2 l_F)] = π^2 k^2 T/(3e^2)          ...(12.43)
Thus the ratio depends only on the temperature T and is independent
of the mass of the electrons, their number density or their collision time.
The above result, Eq. (12.43), is known as the Wiedemann-Franz law. The
ratio K/(σ_el T) = (π^2/3)(k/e)^2 is known as the Lorenz number.

12.6 THERMIONIC EMISSION


We know that metals emit electrons when they are heated to a high
temperature. The phenomenon is called thermionic emission. Since the
electron emission does not take place spontaneously, electrons in the
metal may be considered to be inside a potential well created by the metallic
ions. Suppose the energy of the electrons in the well is -W. Then only
those electrons whose kinetic energy ε_x, associated with the motion normal
to the surface of the metal, is greater than W can escape through the
surface barrier. If the surface lies in the yz-plane, the electrons which
have energy ε_x = (1/2)mv_x^2 > W will escape through the surface.
The number of electrons per unit volume with momenta between p
and p + dp is
    N(p) dp = (2/h^3) dp_x dp_y dp_z f(ε)
where f(ε) is the distribution function.


Therefore, the number of electrons leaving per unit area per unit
time is
    N = ∫ N(p) dp v_x          ...(12.44)
    N = (2/mh^3) ∫_{-∞}^{∞} dp_y ∫_{-∞}^{∞} dp_z ∫_{√(2mW)}^{∞} p_x dp_x / [e^{β(ε - ε_F)} + 1]          ...(12.45)
Since W is of the order of 1 to 5 eV, and only those electrons whose
energies are in excess of W contribute to the emission, we may ignore the
term unity in the denominator.
Therefore
    N = (2/mh^3) ∫ dp_y ∫ dp_z ∫_{√(2mW)}^{∞} e^{-β[(p_x^2 + p_y^2 + p_z^2)/2m - ε_F]} p_x dp_x
      = (2e^{βε_F}/mh^3) ∫ e^{-βp_y^2/2m} dp_y ∫ e^{-βp_z^2/2m} dp_z ∫_{√(2mW)}^{∞} e^{-βp_x^2/2m} p_x dp_x
      = (2e^{βε_F}/mh^3) √(2πm/β) √(2πm/β) (m/β) e^{-βW}
      = (4πm/β^2 h^3) e^{-β(W - ε_F)}          ...(12.46)
The current density j carried by these electrons is
    j = Ne = (4πmek^2T^2/h^3) e^{β(ε_F - W)}
           = (4πmek^2T^2/h^3) e^{-βφ}          ...(12.47)
where e is the charge on the electron and φ = W - ε_F. The quantity φ is
generally referred to as the work function of the metal. It represents the
energy necessary to remove from the metal an electron that has an energy
equal to the Fermi energy.
Equation (12.47) is known as the Richardson-Dushman equation for
thermionic emission.
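A numerical feel for Eq. (12.47) can be obtained from the sketch below, which evaluates the emission current density for an assumed work function of 4.5 eV (a value typical of tungsten; it is an illustrative input, not one quoted in the text).

```python
import math

k = 1.381e-23; e = 1.602e-19; m = 9.109e-31; h = 6.626e-34

def j_thermionic(T, phi_eV=4.5):
    """Richardson-Dushman current density, Eq. (12.47), in A/m^2."""
    phi = phi_eV * e
    return 4 * math.pi * m * e * k**2 * T**2 / h**3 * math.exp(-phi / (k * T))

for T in (1500, 2000, 2500):
    print(f"T = {T} K  j = {j_thermionic(T):.3e} A/m^2")
```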
Transport Phenomena 239

12.7 PHOTOELECTRIC EFFECT
When light radiation of sufficient energy falls on a metal, electrons are
ejected from it. This is the phenomenon of photoelectric emission. In this
case a conduction electron absorbs a photon of energy hν from the incoming
beam. If the energy is high enough, the electron escapes from the surface.
We may assume that the components of momentum p_y, p_z parallel to the
surface remain unchanged. For the electron to escape from the metal, p_x
has to satisfy the condition
    p_x^2/2m > W - hν          ...(12.48)
The flux of electrons will depend upon the probability of absorption
of a photon. Let this probability be α; the flux is then given by
    N = (2α/mh^3) ∫_{-∞}^{∞} dp_y ∫_{-∞}^{∞} dp_z ∫_{√(2m(W-hν))}^{∞} p_x dp_x / [e^{β(p^2/2m - ε_F)} + 1]          ...(12.49)
Integration over p_y and p_z may be carried out by changing over to the
corresponding polar coordinates (p', φ).
Therefore
    N = (2α/mh^3) ∫_{√(2m(W-hν))}^{∞} p_x dp_x ∫_{p'=0}^{∞} 2πp' dp' / [e^{β(p_x^2/2m + p'^2/2m - ε_F)} + 1]
      = (4πα/βh^3) ∫_{√(2m(W-hν))}^{∞} p_x dp_x ln[1 + e^{β(ε_F - p_x^2/2m)}]          ...(12.50)
or, in terms of the energy ε_x = p_x^2/2m,
    N = (4παm/βh^3) ∫_{W-hν}^{∞} ln[1 + e^{β(ε_F - ε_x)}] dε_x          ...(12.51)
Changing to a new variable
    y = β(ε_x - W + hν)          ...(12.52)
Eq. (12.51) changes to
    N = (4παm/h^3β^2) ∫_0^∞ ln[1 + e^{β(ε_F - W + hν) - y}] dy
      = (4παm/h^3β^2) ∫_0^∞ ln[1 + e^{δ - y}] dy          ...(12.53)
where δ = β(hν - φ) and φ = W - ε_F is the work function.
The current density, therefore, is
    J = Ne = (4παme/h^3β^2) ∫_0^∞ ln[1 + e^{δ - y}] dy          ...(12.54)
Integrating by parts,
    J = (4παme/h^3β^2) { [y ln(1 + e^{δ-y})]_0^∞ + ∫_0^∞ y dy/(e^{y-δ} + 1) }
      = (4παme/h^3β^2) ∫_0^∞ y dy/(e^{y-δ} + 1) = (4παme/h^3β^2) f(e^δ)          ...(12.55)
where the integral has been represented as a function of e^δ, i.e. f(e^δ).
If hν - φ >> kT, e^δ >> 1 and the function f(e^δ) ≈ δ^2/2;
then
    J = (4παme/h^3β^2)(δ^2/2) = (2παme/h^3)(hν - φ)^2          ...(12.56)
Let φ = hν_0; then
    J = (2παme/h^3)(hν - hν_0)^2
ν_0 is called the threshold frequency.
We thus see that when the energy of the light quantum is much
greater than the work function of the metal, the current is independent
of temperature.
If, on the other hand, e^δ << 1, f(e^δ) ≈ e^δ; then
    J = (4παme/h^3β^2) e^δ = (4παmek^2T^2/h^3) e^{β(hν - φ)}          ...(12.57)
Compare this with the thermionic current density (12.47).
When hν = φ, δ = 0 and f(e^δ) = π^2/12, so that
    J = π^3αmek^2T^2/(3h^3)          ...(12.58)
The agreement between the experimental values and the theoretical
values calculated from this formula is found to be excellent.
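The threshold behaviour expressed by Eq. (12.56) is simple to illustrate numerically. The sketch below assumes a work function of 2.3 eV (roughly that of an alkali metal; an illustrative value only) and prints the threshold frequency and the growth of the current above it; the absorption probability α is left as an unknown overall factor.

```python
import math

h = 6.626e-34; e = 1.602e-19

phi = 2.3 * e                      # assumed work function, J
nu0 = phi / h                      # threshold frequency (phi = h*nu_0)
print(f"threshold frequency nu_0 = {nu0:.3e} Hz  (wavelength {3e8/nu0*1e9:.0f} nm)")

# J is proportional to (h*nu - phi)^2 when h*nu - phi >> k*T, Eq. (12.56)
for lam_nm in (500, 400, 300):
    nu = 3.0e8 / (lam_nm * 1e-9)
    excess = h * nu - phi
    print(f"lambda = {lam_nm} nm  (h nu - phi) = {excess/e:+.2f} eV  J proportional to {max(excess, 0.0)**2:.2e}")
```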

12.8 MOLECULAR COLLISIONS
Let us first calculate the mean number o f impacts between the molecules
o f a gas and a unit area o f the wall o f the container per unit time.
Consider an element of unit area on one of the walls, normal to, say, the
x-axis. Let f(v) d^3v be the probability that a molecule has velocity
between v and v + dv; then
    ∫ f(v) d^3v = 1          ...(12.59)
The number of such molecules per unit volume is n f(v) d^3v, where
n = N/V. Of these, those which are in the cylindrical volume of base of unit
area and height v_x will strike the element of area per unit time. The
number of such molecules is
    dν = v_x n f(v) d^3v          ...(12.60)
The total number of impacts of gas molecules per unit area of the wall
per unit time can be found by integrating Eq. (12.60) over all velocities;
over v_y and v_z from -∞ to ∞ and over v_x from 0 to ∞, since, when v_x < 0,
the molecules are moving away from the wall and do not collide with it.
Therefore, the number of impacts per unit area per unit time is
    ν = n ∫_0^∞ ∫_{-∞}^{∞} ∫_{-∞}^{∞} v_x f(v) dv_x dv_y dv_z          ...(12.61)
In order to find the kinetic pressure exerted by the gas on the walls,
we have to consider the total normal momentum imparted by the gas
molecules per unit time per unit area of the wall. On collision, the normal
component of the momentum of a particle changes from p_x to -p_x. The
normal momentum imparted to the wall by the molecules whose velocities
lie between v and v + dv is, therefore, 2p_x v_x n f(v) d^3v. Hence, the pressure
P is
    P = 2n ∫_{v_y=-∞}^{∞} ∫_{v_z=-∞}^{∞} ∫_{v_x=0}^{∞} p_x v_x f(v) dv_x dv_y dv_z          ...(12.62)
Or, since p_x v_x is an even function of v_x,
    P = n ∫_{-∞}^{∞} ∫_{-∞}^{∞} ∫_{-∞}^{∞} p_x v_x f(v) dv_x dv_y dv_z          ...(12.63)
    P = n ∫ (p_x v_x) f(v) d^3v          ...(12.64)
      = n ⟨p_x v_x⟩          ...(12.65)
      = n ⟨p v cos^2 θ⟩          ...(12.66)
      = (1/3) n ⟨p v⟩          ...(12.67)
Example 12.3 Find the average number of impacts of molecules of a
Maxwell-Boltzmann gas on unit area of the wall per unit time.
Now
    dν = n (m/2πkT)^{3/2} e^{-βmv^2/2} v_x dv_x dv_y dv_z          ...(12.68)
Therefore, the total number of impacts is
    ν = n (m/2πkT)^{3/2} ∫_{-∞}^{∞} e^{-βmv_y^2/2} dv_y ∫_{-∞}^{∞} e^{-βmv_z^2/2} dv_z ∫_0^∞ e^{-βmv_x^2/2} v_x dv_x
      = n (m/2πkT)^{3/2} (2π/βm)(1/βm) = n √(kT/2πm)          ...(12.69)
      = P/√(2πmkT)   (because P = nkT)          ...(12.70)
Example 12.4 Calculate the number of impacts per unit time per
unit area of the wall made by the molecules of an MB gas for which the
angles between the direction of the velocity of the molecules and the
normal to the surface lie between θ and θ + dθ.
It is convenient to use spherical polar coordinates in this case.
Let x be the polar axis. Therefore v_x = v cos θ and
    dν = n (m/2πkT)^{3/2} e^{-βmv^2/2} v cos θ v^2 sin θ dv dθ dφ
Integrating over φ (from 0 to 2π) and over v,
    ν(θ) dθ = 2πn (m/2πkT)^{3/2} ∫_0^∞ e^{-βmv^2/2} v^3 dv sin θ cos θ dθ
            = 2πn (m/2πkT)^{3/2} [2/(βm)^2] sin θ cos θ dθ
            = n (2kT/πm)^{1/2} sin θ cos θ dθ          ...(12.71)
Example 12.5 Find the number of impacts per unit time per unit
area of a wall in an electron gas at absolute zero of temperature.
The number of electrons per unit volume with momenta between p
and p + dp and at an angle with the normal to the wall between θ and
θ + dθ is
    (2/h^3) 2πp dp p sin θ dθ          ...(12.72)
Therefore, the number of impacts per unit time is
    ν = ∫_0^{π/2} dθ ∫_0^{p_F} dp (2 · 2πp^2 sin θ/h^3) v cos θ
      = (4π/mh^3) ∫_0^{π/2} sin θ cos θ dθ ∫_0^{p_F} p^3 dp
      = (4π/mh^3)(1/2)(p_F^4/4) = πp_F^4/(2mh^3) = (πh/2m)(3N/8πV)^{4/3}          ...(12.73)

12.9 EFFUSION
Suppose a small hole is made in the wall of the container. The molecules
of the gas within the container will emerge through the hole. This process
is known as effusion.
The number of molecules emerging through the hole is the same as
the number of molecules which would strike the same element of area if
the hole were not made in the wall. This number is given by
    N = n ∫_0^∞ ∫_{-∞}^{∞} ∫_{-∞}^{∞} v_x f(v) dv_x dv_y dv_z          ...(12.74)
or, in spherical polar coordinates,
    N = n ∫_{φ=0}^{2π} ∫_{θ=0}^{π/2} ∫_{v=0}^{∞} v cos θ f(v) v^2 sin θ dθ dφ dv
      = n 2π (1/2) ∫_0^∞ v^3 f(v) dv          ...(12.75)
      = (n/4) ∫_0^∞ v f(v) 4πv^2 dv          ...(12.76)
Since f(v) 4πv^2 dv is the probability that a molecule has speed
between v and v + dv,
    N = (1/4) n ⟨v⟩          ...(12.77)
This result holds independently of the statistics obeyed by the particles
and is thus valid for BE, FD and MB gases.
If the gas obeys MB statistics,
    N = (1/4) n (8kT/πm)^{1/2} = P/√(2πmkT)          ...(12.78)
Isotopes of different masses have different rates of flow out of the
hole. Thus, if the enclosure contains the isotopes 3He and 4He,
    N_3/N_4 = (n_3/n_4)(m_4/m_3)^{1/2}          ...(12.79)
An interesting example of the application of the theory developed
above concerns the rate of evaporation from a surface. At low vapour
pressure
the number of molecules evaporating from unit area per second equals the
number of molecules striking unit area per second,
    = (1/4) n ⟨v⟩ = P⟨v⟩/(4kT)          ...(12.80)
This provides a simple method for estimating the rate of evaporation
at a given pressure.
Example 12.6 Find the rate of evaporation of tungsten at 2000 K, its
vapour pressure at this temperature being 6.5 x 10^-12 torr and its atomic
weight 183.8.
The number of atoms evaporating per unit area per second is given by
    N = (P/4kT)(8kT/πm)^{1/2} = P/√(2πmkT)
    P = 6.5 x 10^-12 x 133.22 N/m^2
    N = 6.5 x 10^-12 x 133.22 / √(2 x 3.14 x 183.8 x 1.66 x 10^-27 x 1.38 x 10^-23 x 2000)
      = 3.76 x 10^12 per m^2 per second
Therefore, the mass evaporating per square metre per second is
    = 3.76 x 10^12 x 183.8 x 1.66 x 10^-27
    = 1.15 x 10^-12 kg
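The evaporation estimate of Example 12.6 can be checked with a few lines of Python (same inputs as the example):

```python
import math

k = 1.38e-23
T = 2000.0
P = 6.5e-12 * 133.22          # vapour pressure, Pa
m = 183.8 * 1.66e-27          # mass of a tungsten atom, kg

N = P / math.sqrt(2 * math.pi * m * k * T)    # atoms per m^2 per second, Eq. (12.80)
print(f"N = {N:.2e} atoms m^-2 s^-1")
print(f"mass flux = {N*m:.2e} kg m^-2 s^-1")
```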

12.10 DIFFUSION
Diffusion plays an important role in the non-equilibrium theory. We,
therefore, present here a very simple form of the theory.
Consider a tube filled with a gas of, say, type B. Suppose molecules
of type A are now inserted in the tube at P. Let n_A be the number of
molecules of type A per unit volume. In the equilibrium situation, the
molecules A would be uniformly distributed throughout the available
volume. The molecules of type A, therefore, will move, tending to make
the concentration more nearly uniform (Fig. 12.7).

Fig. 12.7
Let Γ_z represent the flux of molecules A, i.e. the number of molecules
crossing unit area perpendicular to the z-direction per unit time. It would be
reasonable to assume that Γ_z is proportional to the concentration gradient
of molecules A. Therefore
    Γ_z ∝ dn_A/dz
    Γ_z = -D dn_A/dz          ...(12.81)
The negative sign is self-explanatory. The constant of proportionality
D is called the coefficient of diffusion.
Equation (12.81) is known as Fick's law of diffusion.
We also have the equation of continuity,
    ∇·(ρv) + ∂ρ/∂t = 0          ...(12.82)
which can be written as
    ∇·Γ + ∂n_A/∂t = 0          ...(12.83)
Substituting Eq. (12.81) in Eq. (12.83),
    ∂n_A/∂t = D ∂^2 n_A/∂z^2          ...(12.84)
This is the diffusion equation satisfied by n_A.
Consider now a unit area in the plane z = z_0, and two other planes
at P and P' on either side of it, each at a distance l equal to the mean free
path (Fig. 12.8). Let n_A be the concentration at P. Then the concentration
at P' will be n_A + (dn_A/dz) 2l. The number of molecules coming from P and
crossing the unit area will be (following the arguments of Sec. 12.3) (1/6)⟨v⟩n_A,
where ⟨v⟩ is the average velocity; whereas that coming from P' and
crossing the same element in the opposite direction is (1/6)⟨v⟩[n_A + (dn_A/dz) 2l].

Fig. 12.8
The net flux in the +z direction is
    Γ_z = (1/6)⟨v⟩[ n_A - (n_A + (dn_A/dz) 2l) ]
        = -(1/3)⟨v⟩ l (dn_A/dz)          ...(12.85)
Comparing this with Eq. (12.81), we find
    D = (1/3)⟨v⟩ l          ...(12.86)
For an MB gas ⟨v⟩ = √(8kT/πm), and from Eq. (12.26)
    l = 1/(√2 n σ) = (1/√2 σ)(kT/P)          ...(12.87)
where σ is the scattering cross-section and P the pressure. Therefore
    D = (1/3)√(8kT/πm) (1/√2 σ)(kT/P)
      = (2/3σP) √(k^3T^3/πm)          ...(12.88)
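For a rough number, the sketch below evaluates Eq. (12.88) for nitrogen at 300 K and atmospheric pressure, using the same assumed molecular diameter as in Example 12.2 (so the result is only indicative).

```python
import math

k = 1.38e-23
T = 300.0
P = 1.013e5
m = 28 * 1.66e-27
sigma = math.pi * (2.0e-10)**2     # assumed cross-section, m^2

D = (2.0 / (3.0 * sigma * P)) * math.sqrt(k**3 * T**3 / (math.pi * m))   # Eq. (12.88)
print(f"D = {D:.2e} m^2/s")
```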

12.11 BROWNIAN MOTION
Small macroscopic particles, when immersed in liquids, are found to exhibit
a random type of motion. This was first observed by Robert Brown in
1828 and the phenomenon is known after him as Brownian motion.
It is now known that the source of this motion lies in the incessant
and random bombardment of the particles by the molecules of the
surrounding liquid. The problem is easily treated by a method due to
Langevin.
Consider a particle surrounded by a fluid environment. Suppose, except
for the force arising from the molecular bombardment, no other force acts
on the particle. The equation of motion of the particle is
    M dv/dt = F(t)          ...(12.89)
where M is the mass of the particle, v(t) its velocity and F(t) the force acting
on it as a result of the impacts of the fluid molecules. Langevin suggested
that F(t) may be written as a sum of two parts: (i) a steady part, -αMv, which
represents the viscous drag on the particle, and (ii) a fluctuating part
β(t)M which over long intervals of time averages out to zero. Thus
    M dv/dt = -αMv + β(t)M          ...(12.90)
i.e.
    dv/dt = -αv + β(t)          ...(12.91)
Taking the ensemble average,
    d⟨v⟩/dt = -α⟨v⟩ + ⟨β(t)⟩ = -α⟨v⟩
because ⟨β(t)⟩ = 0          ...(12.92)
Integrating,
    ⟨v⟩ = v(0) e^{-αt}          ...(12.93)
This gives the rate at which the drift velocity decays.
If t = τ = 1/α, ⟨v(t)⟩ = v(0)/e.
The time τ = 1/α is known as the relaxation time. Therefore
    ⟨v(t)⟩ = v(0) e^{-t/τ}          ...(12.94)
Let us construct the scalar product of (12.91) with the instantaneous
position r of the particle:
    r·(dv/dt) = -α r·v + r·β(t)          ...(12.95)
Taking the ensemble average,
    ⟨r·(dv/dt)⟩ = -α⟨r·v⟩ + ⟨r·β(t)⟩          ...(12.96)
Now ⟨r·β(t)⟩ = 0, because there is no correlation between the position
r(t) of the particle and the fluctuating force β(t). Also
    r·v = (1/2) d(r^2)/dt
and
    r·(dv/dt) = (1/2) d^2(r^2)/dt^2 - v^2
Therefore, from Eq. (12.96),
    d^2⟨r^2⟩/dt^2 + α d⟨r^2⟩/dt = 2⟨v^2⟩          ...(12.97)
If the particle has attained thermal equilibrium with the molecules of
the fluid,
    (1/2) M⟨v^2⟩ = (3/2) kT
Therefore
    ⟨v^2⟩ = 3kT/M          ...(12.98)
Thus Eq. (12.97) can be written as
    d^2⟨r^2⟩/dt^2 + α d⟨r^2⟩/dt = 6kT/M          ...(12.99)
Integrating, and using the boundary conditions that r = 0 and dr/dt = 0 at
t = 0,
    ⟨r^2⟩ = (6kT/Mα^2)[αt - (1 - e^{-αt})]          ...(12.100)
If t << 1/α,
    ⟨r^2⟩ = (6kT/Mα^2)[αt - (1 - 1 + αt - α^2t^2/2 + ...)]
          = (3kT/M) t^2 = ⟨v^2⟩ t^2          ...(12.101)
Therefore
    ⟨r^2⟩^{1/2} = ⟨v^2⟩^{1/2} t          ...(12.102)
which is consistent with Newton's equation of motion.
On the other hand, if t >> 1/α,
    ⟨r^2⟩ = (6kT/Mα^2) αt = (6kT/Mα) t          ...(12.103)
which corresponds to the Brownian movement.
If the particles are spherical, we can use Stokes' law, viz. the viscous
drag force on a particle moving with velocity v is -6πRηv, where R is
the radius of the particle and η the coefficient of viscosity.
Therefore
    Mα = 6πRη          ...(12.104)
Hence
    ⟨r^2⟩ = (6kT/Mα) t = (kT/πRη) t          ...(12.105)
This equation was originally derived by Einstein. It shows that the mean
squared displacement is linear in time. Experimental studies have
confirmed Eq. (12.105) with an accuracy of ± 0.5 per cent.
Einstein treated Brownian motion on the basis of the so-called random
walk problem and thereby established a relationship between diffusion
and the mechanism of molecular fluctuation.
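As a numerical illustration of Eq. (12.105), the sketch below assumes a sphere of radius 0.5 μm suspended in water at room temperature with η ≈ 1.0 x 10^-3 Pa s (illustrative inputs, not values from the text) and estimates the root-mean-square displacement after various times.

```python
import math

k = 1.38e-23
T = 293.0            # K
eta = 1.0e-3         # viscosity of water, Pa s (assumed)
R = 0.5e-6           # particle radius, m (assumed)

def msd(t):
    """Mean squared displacement, Eq. (12.105)."""
    return k * T / (math.pi * R * eta) * t

for t in (1.0, 10.0, 60.0):
    print(f"t = {t:5.0f} s   sqrt(<r^2>) = {math.sqrt(msd(t))*1e6:.2f} micrometres")
```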

12.12 EINSTEIN'S RELATION FOR MOBILITY
We have seen (Sec. 12.10) how Fick's law, Γ_x = -D ∂n/∂x, leads to the
diffusion equation
    ∂n/∂t = D ∂^2 n/∂x^2          ...(12.106)
Suppose now that N particles are concentrated at x = 0 at t = 0, i.e.
    n(x, 0) = N δ(x)          ...(12.107)
where δ(x) is the Dirac delta function.
Of the various possible solutions of Eq. (12.106), the one relevant to
the present problem is
    n(x, t) = [N/(4πDt)^{1/2}] e^{-x^2/4Dt}          ...(12.108)
Comparison of this equation with Eq. (12.86) brings out clearly the
relationship between the random walk problem and the phenomenon of
diffusion.
The mean square value of the displacement is given by
    ⟨x^2⟩ = (1/N) ∫_{-∞}^{∞} x^2 n(x, t) dx = [1/(4πDt)^{1/2}] ∫_{-∞}^{∞} x^2 e^{-x^2/4Dt} dx
          = [(4Dt)^{3/2}/(4πDt)^{1/2}] ∫_{-∞}^{∞} y^2 e^{-y^2} dy   (because y^2 = x^2/4Dt)
          = (4Dt/√π)(√π/2) = 2Dt          ...(12.109)
Einstein's expression for the mean square displacement is given by
Eq. (12.103) as
    ⟨r^2⟩ = (6kT/Mα) t          ...(12.110)
For the one-dimensional case this reduces to
    ⟨x^2⟩ = (2kT/Mα) t          ...(12.111)
Comparison between this and Eq. (12.109) gives
    D = kT/(Mα)          ...(12.112)
Suppose that in the case of Brownian motion, in addition to the viscous
drag and the fluctuating force due to molecular bombardment, a
steady force K_0 acts on the particle. The equation of motion (12.90) now
becomes
    M dv/dt = -αMv + β(t)M + K_0
i.e.
    dv/dt = -αv + β(t) + K_0/M          ...(12.113)
Taking averages,
    d⟨v⟩/dt = -α⟨v⟩ + K_0/M
In the steady-state situation d⟨v⟩/dt = 0; therefore
    ⟨v⟩ = K_0/(Mα)          ...(12.114)
Mobility μ is defined as the drift velocity per unit force. Therefore
    μ = ⟨v⟩/K_0 = 1/(Mα)          ...(12.115)
Hence
    D = kT μ
This is Einstein's relation connecting the coefficient of diffusion with
the mobility.
If the particle is under the influence of an external electric field E,
    d⟨v⟩/dt = -α⟨v⟩ + eE/M
and the mobility is then defined as the drift velocity per unit field,
μ = ⟨v⟩/E = e/(Mα). Therefore
    D = (kT/e) μ          ...(12.116)
This is the original form of Einstein's relation.

PROBLEMS
12.1 The gas-kinetic radius of helium is 1.9 x 10^-10 m. Find the probability
that an atom of helium covers a distance of 0.5 mm without a collision
at 0°C and a pressure of 100 Pa.
12.2 A uniform electric field E is applied to a dilute gas of molecules
containing an ion of mass m and charge e. The mean time between
collisions suffered by the ion is τ. Find the mean distance in the
direction of the field which the ion travels between collisions.
Assume that the component of the velocity of the ion in the direction
of the field after each collision is zero.
12.3 Show that the average energy of thermionic electrons associated
with the component normal to the surface of the emitter is kT.
What is the energy associated with the other two components of
the motion?
12.4 Find the mean free path and gas-kinetic radius of hydrogen at
a temperature of 273 K and a pressure of 1.01 x 10^5 Pa. The coefficient
of viscosity of hydrogen is η = 8.6 x 10^-6 Pa s.
12.5 Find the number of collisions with a wall in an extreme relativistic
completely degenerate electron gas.
12.6 A fine hole is made in the wall of a container filled with a gas
at a temperature T. Find the mean energy of the molecules
effusing through the hole. How and why does it differ from the
mean translational energy of the molecules in the container?
12.7 In a molecular beam experiment, the source contains 50%
monatomic hydrogen and 50% diatomic hydrogen at a pressure
0.1 Torr. The gas effuses through a 0.1 x 0.2 mm slit. Find the
number of H and H 2 particles leaving the slit.
12.8 The amount of sodium chloride effusing out of an enclosure
through a small hole of area 1.6 x 10^-6 m^2 at a temperature of 900
K is 1.84 x 10^-6 kg/hour. Calculate the vapour pressure of sodium
chloride at this temperature.
12.9 In one-dimensional Brownian motion, the probability that a
particle will move forward is equal to the probability that it will
move backward. Show that ⟨x⟩ = 0 and ⟨(x - ⟨x⟩)^2⟩ = Nl^2, where l
is the length of a step and N the number of steps.
12.10 Considering a set of particles, each of mass m, in sedimentation
equilibrium, obtain Einstein's equation D = μkT.
12.11 Determine the steady-state sedimentation velocity for an
aluminium particle of diameter one micron in water at room
temperature in the Earth's gravitational field.

□ □□
GENERAL THEORY OF
TRANSPORT PHENOMENA

We have seen that in classical statistics a complete description of a
system in equilibrium is given by the Gibbs distribution, which is
uniform in space in the absence of applied fields. In practice, however, we
have often to deal with systems that are not in equilibrium. This chapter
is devoted to the investigation of non-equilibrium states and processes.
The method adopted here is a generalization of the methods of statistical
physics.
The study of gas dynamics requires a consideration of the general
theory of the transport of molecules, momenta and energy. In the
elementary theory, the distribution function of the molecules was assumed
to depend only on the properties of the walls. However, if the effect of the
inter-molecular interactions is to be included in the theory, one has to
take note of the fact that, due to the interchange of energy and momentum
during collisions, an anisotropy in the velocity distribution is likely to occur.
In a non-equilibrium state, the distribution function is generally different
from the equilibrium distribution function and depends on time and coordinates
even in the absence of applied fields. It is the purpose of the general
theory to find how the distribution function is modified when non-
equilibrium processes prevail.

13.1 DISTRIBUTION FUNCTION
Let us first obtain the equations specifying the change of the distribution
function in space and time.
Let f(r, v, t) be the distribution function, which in the general case of
a non-equilibrium system depends on coordinates, velocities and time, defined
in such a way that
    f(r, v, t) d^3r d^3v = the mean number of molecules whose positions at
time t are located between r and r + dr and which have velocities lying
between v and v + dv in the μ-space.          ...(13.1)
If the total number of molecules in the volume V is N,
    N = ∫ f(r, v, t) d^3r d^3v          ...(13.2)
If f is independent of r, i.e. the molecules are uniformly distributed
in space,
    N = V ∫ f(r, v, t) d^3v
or
    N/V = η(r, t) = ∫ f(r, v, t) d^3v          ...(13.3)
where η(r, t) is the space density of the particles.
The mean value of a function χ(r, v, t), denoting a property of the
molecules at time t and at position r, will be
    ⟨χ(r, t)⟩ = ∫ f(r, v, t) χ(r, v, t) d^3v / ∫ f(r, v, t) d^3v
              = [1/η(r, t)] ∫ f(r, v, t) χ(r, v, t) d^3v          ...(13.4)

13.2 BOLTZMANN TRANSPORT EQUATION
At a point r, the time rate of change of the distribution function, assuming
the total number of particles to be conserved, may be caused by the drift
of molecules into and out of a volume element and also by collisions among
the molecules. We thus have
    ∂f/∂t = (∂f/∂t)_drift + (∂f/∂t)_coll          ...(13.5)
In time dt, f(r, v, t) would change to
    f(r + dr, v + dv, t + dt)
If we neglect collisions and follow along a flow line, by Liouville's
theorem
    f(r + dr, v + dv, t + dt) = f(r, v, t)
or
    f(r + dr, v + dv, t + dt) - f(r, v, t) = 0          ...(13.6)
i.e.
    f(r, v, t) + dr·grad_r f + dv·grad_v f + (∂f/∂t) dt - f(r, v, t) = 0          ...(13.7)
where grad_r f is a vector whose projections are
    grad_r f = (∂f/∂x, ∂f/∂y, ∂f/∂z)
and
    grad_v f = (∂f/∂v_x, ∂f/∂v_y, ∂f/∂v_z)
Hence
    ∂f/∂t = -(dr/dt)·grad_r f - (dv/dt)·grad_v f          ...(13.8)
This is the kinetic equation proposed by Boltzmann for finding the
distribution function f. To take collisions into account, we have to find an
explicit expression for (∂f/∂t)_coll.
I 13.3 RELAXATION APPROXIMATION

In general, the term — ] varies depending on which collisions are


. dt *coll
taken into consideration. In one class o f cases, where the departures from
the equilibrium conditions are small, it can be represented as
an = f-fo ...(13.9)
dtJcoll x
where f is the non-equilibrium distribution, f 0 is the equilibrium distribution
and T is known as the relaxation time.
How can we justify such as assum ption?
Consider a gas in the state o f equilibrium . I f som e o f the m olecules
are brought out o f their equilibrium state in one w ay or the other, they
return to it as a result o f collisions w ith other molecules in a time x-
relaxation time, which is the tim e needed for the system to return to its
equilibrium state. The relaxation tim e characterizes the intensity o f
interaction with the “other m olecules” . Stronger the interaction, sm aller
is the relaxation time, and m ore rapidly does the system gain equilibrium.
These considerations can be represented m athem atically.
I f the deviations from equilibrium are sm all, w e expect the rate o f
restoring equilibrium to the prop ortion al to th e deviation from the
equilibrium state, i.e.
dn
.dt Jcou
or = const x (f - f Q) = —— ...(13.10)
V d t J coll 0 x

Thus, if f = f₀, \left(\frac{\partial f}{\partial t}\right)_{\text{coll}} = 0.
We can write Eq. (13.10) as

\frac{d(f - f_0)}{dt} = -\frac{f - f_0}{\tau}

Integrating,

\ln (f - f_0) = -\frac{t}{\tau} + \text{const}

Therefore

f(t) - f_0 = \Delta f = \left( f(0) - f_0 \right) e^{-t/\tau}    ...(13.11)



where f(0) is the value of the non-equilibrium distribution function at the initial instant.
Equation (13.11) shows that the deviation Δf of the distribution function from the equilibrium diminishes exponentially with the time constant τ. It is for this reason that τ is called the relaxation time.
Thus, in the relaxation time approximation, the Boltzmann transport equation takes the form

\frac{\partial f}{\partial t} + v \cdot \text{grad}_r f + a \cdot \text{grad}_v f = -\frac{f - f_0}{\tau}    ...(13.12)
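For a spatially uniform, force-free gas, Eq. (13.12) reduces to df/dt = −(f − f₀)/τ, whose solution is the exponential decay of Eq. (13.11). The short sketch below (not from the text; all parameter values are arbitrary illustrative choices) integrates this equation numerically and compares it with Eq. (13.11).

import numpy as np

# Minimal sketch (illustrative values, not from the text): relaxation of a
# spatially uniform distribution, df/dt = -(f - f0)/tau, compared with the
# exponential solution of Eq. (13.11).

tau = 2.0e-3            # relaxation time (assumed value)
f0 = 1.0                # equilibrium value of f at some fixed (r, v)
f_init = 1.6            # initial non-equilibrium value f(0)

dt = 1.0e-5
t = np.arange(0.0, 10 * tau, dt)

# forward-Euler integration of df/dt = -(f - f0)/tau
f = np.empty_like(t)
f[0] = f_init
for i in range(1, t.size):
    f[i] = f[i - 1] - dt * (f[i - 1] - f0) / tau

f_analytic = f0 + (f_init - f0) * np.exp(-t / tau)   # Eq. (13.11)
print("max deviation from Eq. (13.11):", np.abs(f - f_analytic).max())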
Example 13.1 Show that in the relaxation approximation the mean energy of a particle is the same as in the equilibrium state.
The mean energy is given by

\langle \varepsilon \rangle = \frac{1}{n} \int \varepsilon f\, d^3r\, d^3v = \frac{m}{2n} \int v^2 f\, d^3r\, d^3v    ...(13.13)

In the steady state \frac{\partial f}{\partial t} = 0.
Therefore, from Eq. (13.12),

f = f_0 - \tau v \cdot \text{grad}_r f - \tau a \cdot \text{grad}_v f

Therefore

\langle \varepsilon \rangle = \frac{1}{n} \left[ \int \varepsilon f_0\, d^3r\, d^3v - \int \varepsilon\, \tau v \cdot \text{grad}_r f\, d^3r\, d^3v - \int \varepsilon\, \tau a \cdot \text{grad}_v f\, d^3r\, d^3v \right]

Since the energy is an even function and v and a are odd functions, the contribution of the last two integrals in the bracket is zero. Hence

\langle \varepsilon \rangle = \frac{1}{n} \int \varepsilon f_0\, d^3r\, d^3v = \frac{m}{2n} \int v^2 f_0\, d^3r\, d^3v    ...(13.14)
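The vanishing of the odd terms can be seen numerically. The short sketch below (not from the text; one-dimensional, with m = k = T = 1 as illustrative units) integrates an even and an odd integrand against a Maxwellian weight.

import numpy as np

# Minimal sketch (not from the text): the terms dropped in Eq. (13.14) vanish
# because their integrands are odd in the velocity.  A 1-D check with a
# Maxwellian weight: the even moment survives, the odd one integrates to zero.

m, k, T = 1.0, 1.0, 1.0                      # illustrative units
a = m / (2.0 * k * T)
v = np.linspace(-12.0, 12.0, 200001)
f0 = np.exp(-a * v**2)                       # 1-D Maxwellian (unnormalized)
eps = 0.5 * m * v**2                         # kinetic energy

even_moment = np.trapz(eps * f0, v)          # analogous to the surviving term
odd_moment = np.trapz(eps * v * f0, v)       # analogous to the dropped terms

print("even integrand:", even_moment)        # finite
print("odd  integrand:", odd_moment)         # ~0 to quadrature accuracy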

13.4 ELECTRICAL CONDUCTIVITY FROM THE RELAXATION APPROXIMATION

To illustrate the application of Eq. (13.12), we consider here the case of the electrical conductivity of a specimen.
Suppose an electric field E is applied to a specimen in the z-direction. The electrons, therefore, will be moving in the z-direction with a velocity v_z, and their acceleration will be eE/m, where e is the charge and m the mass of the electron.
In the steady state \frac{\partial f}{\partial t} = 0. Therefore, Eq. (13.12) reduces to

v_z \frac{\partial f}{\partial z} + \frac{eE}{m} \frac{\partial f}{\partial v_z} = -\frac{f - f_0}{\tau}    ...(13.15)

Hence

f = f_0 - \tau \left( v_z \frac{\partial f}{\partial z} + \frac{eE}{m} \frac{\partial f}{\partial v_z} \right)    ...(13.16)

Assuming that the deviation f − f₀ is small, we can replace f in the brackets by f₀ and write

f = f_0 - \tau \left( v_z \frac{\partial f_0}{\partial z} + \frac{eE}{m} \frac{\partial f_0}{\partial v_z} \right)    ...(13.17)
Now, the equilibrium distribution function f₀ is a function of the velocity v, the temperature T and the concentration n, i.e. of the chemical potential μ.
Therefore

\frac{\partial f_0}{\partial z} = \frac{\partial f_0}{\partial \mu} \frac{\partial \mu}{\partial z} + \frac{\partial f_0}{\partial T} \frac{\partial T}{\partial z}    ...(13.18)

and

\frac{\partial f_0}{\partial v_z} = \frac{\partial f_0}{\partial \varepsilon} \frac{\partial \varepsilon}{\partial v_z} = m v_z \frac{\partial f_0}{\partial \varepsilon}, \quad \text{because } \varepsilon = \tfrac{1}{2} m v^2    ...(13.19)
Since the temperature and the carrier concentration do not change with z, we have

\frac{\partial T}{\partial z} = 0 \quad \text{and} \quad \frac{\partial \mu}{\partial z} = 0

Therefore

f = f_0 - \frac{\tau e E}{m}\, m v_z \frac{\partial f_0}{\partial \varepsilon} = f_0 - \tau e E\, v_z \frac{\partial f_0}{\partial \varepsilon}    ...(13.20)
The electric current density is given by

j = e \int v_z f\, d^3v = e \int v_z f_0\, d^3v - \tau e^2 E \int v_z^2 \frac{\partial f_0}{\partial \varepsilon}\, d^3v    ...(13.21)

Since f₀ is an even function of the velocity, the first integrand, being odd, vanishes.
Therefore

j = -\tau e^2 E \int v_z^2 \frac{\partial f_0}{\partial \varepsilon}\, d^3v    ...(13.22)
Assuming the equilibrium distribution to be Maxwellian,

f_0 = n \left( \frac{m}{2\pi k T} \right)^{3/2} e^{-\varepsilon / kT}

Therefore

\frac{\partial f_0}{\partial \varepsilon} = -\frac{n}{kT} \left( \frac{m}{2\pi k T} \right)^{3/2} e^{-\varepsilon / kT} = -\frac{f_0}{kT}    ...(13.23)

Hence

j = \frac{\tau e^2 E}{kT} \int v_z^2 f_0\, d^3v    ...(13.24)

Since

\int v_z^2 f_0\, d^3v = \frac{n k T}{m}    ...(13.25)

we have

j = \frac{n \tau e^2 E}{m}    ...(13.26)

and the electrical conductivity is

\sigma = \frac{n \tau e^2}{m}    ...(13.27)

[compare with Eq. (12.36)]
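As a numerical illustration of Eq. (13.27), the following sketch evaluates σ = nτe²/m; the carrier concentration and relaxation time are assumed, order-of-magnitude values roughly appropriate to a good metal and are not taken from the text.

# Minimal sketch: sigma = n tau e^2 / m of Eq. (13.27) with illustrative,
# assumed values (copper-like metal).

e = 1.602e-19        # electronic charge, C
m = 9.109e-31        # electron mass, kg
n = 8.5e28           # carrier concentration, m^-3 (assumed)
tau = 2.5e-14        # relaxation time, s (assumed)

sigma = n * tau * e**2 / m
print(f"sigma = {sigma:.2e} ohm^-1 m^-1")    # of order 10^7, as expected for a metal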

13.5 A RIGOROUS FORMULATION OF THE TRANSPORT THEORY

In what follows, we will take r as a position vector and v as a velocity vector. We discussed above the Boltzmann kinetic equation in the relaxation time approximation without taking into account the effects of interaction of the particles in equilibrium. We will now present a more rigorous formulation of the theory. We assume that the gas is so dilute that only binary collisions need be taken into account.
Consider a small volume element d³r, located between r and r + dr. If collisions take place, molecules originally with positions and velocities not in the range d³r d³v can be scattered into it, and those originally in this range can be scattered out.

The net increase in the number of molecules per unit time is \left(\frac{\partial f}{\partial t}\right)_{\text{coll}} d^3r\, d^3v. Let D_c^{(-)} f\, d^3r\, d^3v denote the decrease and D_c^{(+)} f\, d^3r\, d^3v the increase in the number of molecules in unit time, where f is the distribution function.
Therefore,

\left(\frac{\partial f}{\partial t}\right)_{\text{coll}} d^3r\, d^3v = D_c^{(+)} f\, d^3r\, d^3v - D_c^{(-)} f\, d^3r\, d^3v    ...(13.28)
Let us now obtain the expressions for D_c^{(-)} f\, d^3r\, d^3v and D_c^{(+)} f\, d^3r\, d^3v.
We have shown in Sec. 12.2 that the number of molecules scattered out of the volume element d³r, due to collisions, per unit time is

(n_1 \langle V \rangle \sigma)(n\, d^3r)    ...(13.29)

where n is the number of molecules per unit volume, n₁ the number of molecules of the type whose scattering we are considering per unit volume, V the relative velocity and σ the scattering cross-section. In the present case suppose that the molecules A₂, with velocity v₂, collide with the molecules A₁, with velocity v₁. From Eq. (13.29), the number of molecules A₁ having velocities between v₁ and v₁ + dv₁ that are scattered out of the volume element d³r is

\left\{ f(r, v_2, t)\, d^3v_2\, \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \right\} f(r, v_1, t)\, d^3v_1\, d^3r    ...(13.30)



The total number of molecules of the type A₂ scattered by the molecules A₁, per unit volume, can be found by integration over v₂ and Ω. Thus

D_c^{(-)} f\, d^3v_1 = f(r, v_1, t)\, d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2|\, f(r, v_2, t)    ...(13.31)

After the collision, the velocities of the molecules change to, say, v₁′ and v₂′.
For the calculation of D_c^{(+)} f we have to consider how many molecules end up after a collision with a velocity in the range v₁ to v₁ + dv₁, i.e. we have to consider inverse collisions. We, therefore, consider a beam of molecules with velocities v₂′ incident upon molecules with velocities v₁′.
Therefore

D_c^{(+)} f\, d^3v_1 = \int d^3v_2 \int \sigma'(\Omega)\, d\Omega\, |v_1' - v_2'|\, f(r, v_1', t)\, f(r, v_2', t)\, d^3v_1    ...(13.32)
In classical physics, the inverse collision between two microscopic objects may be different from the original collision. This, however, is not true in the case of molecular collisions. A proper quantum mechanical calculation for small molecules shows that the cross-section for the inverse encounter is the same as that for the original encounter, i.e.

\sigma(v_1, v_2 \mid v_1', v_2') = \sigma(v_1', v_2' \mid v_1, v_2)

or

\sigma(\Omega) = \sigma'(\Omega)    ...(13.33)

Since the collisions are assumed to be elastic, from the principle of conservation of energy

V = |v_1 - v_2| = |v_1' - v_2'|    ...(13.34)

and, in accordance with Liouville's theorem,

d^3v_1\, d^3v_2 = d^3v_1'\, d^3v_2'    ...(13.35)

Hence

\left(\frac{\partial f}{\partial t}\right)_{\text{coll}} = D_c^{(+)} f - D_c^{(-)} f
= \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left\{ f(r, v_1', t) f(r, v_2', t) - f(r, v_1, t) f(r, v_2, t) \right\}

or, using the abbreviations

f_1 = f(r, v_1, t); \quad f_2 = f(r, v_2, t); \quad f_1' = f(r, v_1', t); \quad f_2' = f(r, v_2', t)

\left(\frac{\partial f}{\partial t}\right)_{\text{coll}} = \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right)    ...(13.36)

Hence, the Boltzmann non-linear integro-differential equation is

\frac{\partial f}{\partial t} + v_1 \cdot \text{grad}_r f + a \cdot \text{grad}_v f = \int \left( f_1' f_2' - f_1 f_2 \right) |v_1 - v_2|\, \sigma(\Omega)\, d\Omega\, d^3v_2    ...(13.37)



13.6 BOLTZMANN'S H THEOREM
Boltzmann's integro-differential equation (13.37) is a general equation which describes the transport processes. How can we derive the equilibrium distribution function from it? In the absence of external forces, the equilibrium distribution function is a solution of Eq. (13.37) for which f is independent of r and t. Therefore

\frac{\partial f}{\partial t} = 0, \quad \frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = \frac{\partial f}{\partial z} = 0 \quad \text{and} \quad a = \frac{F}{m} = 0

Hence, the equilibrium solution, denoted by f₀(v), satisfies the equation

\left( \frac{\partial}{\partial t} + v \cdot \text{grad}_r + a \cdot \text{grad}_v \right) f_0(v) = 0    ...(13.38)

or

\int \left\{ f_0(v_1') f_0(v_2') - f_0(v_1) f_0(v_2) \right\} |v_1 - v_2|\, \sigma(\Omega)\, d\Omega\, d^3v_2 = 0    ...(13.39)


We see that if

f_0(v_1') f_0(v_2') - f_0(v_1) f_0(v_2) = 0    ...(13.40)

then f₀(v) is a solution of Eq. (13.39). Eq. (13.40), therefore, is a sufficient condition for f₀(v) to be a solution of Eq. (13.39). We now show that this condition is also a necessary condition.
Let us examine, as was done by Boltzmann, the average properties of a quantity H(t) defined by

H(t) = \int d^3v\, f(v, t) \ln f(v, t)    ...(13.41)

where f(v, t) is the distribution function.

Therefore

\frac{dH(t)}{dt} = \int d^3v \left( \frac{\partial f}{\partial t} \ln f + \frac{\partial f}{\partial t} \right) = \int d^3v\, (1 + \ln f)\, \frac{\partial f}{\partial t}    ...(13.42)

Therefore, for \frac{\partial f}{\partial t} to be zero, dH/dt has to be equal to zero. Hence

\frac{dH}{dt} = 0    ...(13.43)

is a necessary condition for \frac{\partial f}{\partial t} to be zero, that is, for f to be a solution of Eq. (13.39).
If f(v, t) satisfies the Boltzmann transport equation, then (with f independent of r and in the absence of external forces)

\frac{\partial f_1}{\partial t} = \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right)    ...(13.44)

Substituting this in Eq. (13.42),

\frac{dH}{dt} = \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) (1 + \ln f_1)    ...(13.45)

Since Ω is the angle between v₁ − v₂ and v₁′ − v₂′, σ(Ω) is invariant under the following interchanges:
(i) v_1 \rightleftharpoons v_2;  v_1' \rightleftharpoons v_2'
(ii) v_1 \rightleftharpoons v_1';  v_2 \rightleftharpoons v_2'
(iii) v_1 \rightleftharpoons v_2';  v_2 \rightleftharpoons v_1'
Therefore, interchanging v₁ and v₂, and v₁′ and v₂′, Eq. (13.45) changes to

\frac{dH}{dt} = \int d^3v_2 \int d^3v_1 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_2' f_1' - f_2 f_1 \right) (1 + \ln f_2)    ...(13.46)

Interchanging v₁ and v₁′, and v₂ and v₂′,

\frac{dH}{dt} = \int d^3v_1' \int d^3v_2' \int \sigma(\Omega)\, d\Omega\, |v_1' - v_2'| \left( f_1 f_2 - f_1' f_2' \right) (1 + \ln f_1')    ...(13.47)

Interchanging v₁ and v₂′, and v₂ and v₁′,

\frac{dH}{dt} = \int d^3v_2' \int d^3v_1' \int \sigma(\Omega)\, d\Omega\, |v_2' - v_1'| \left( f_2 f_1 - f_2' f_1' \right) (1 + \ln f_2')    ...(13.48)

We recall that d^3v_1\, d^3v_2 = d^3v_1'\, d^3v_2' and |v_1' - v_2'| = |v_1 - v_2|.
Therefore, adding Eqs. (13.45), (13.46), (13.47) and (13.48) and dividing by 4, we get

\frac{dH}{dt} = \frac{1}{4} \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) \left\{ \ln (f_1 f_2) - \ln (f_1' f_2') \right\}    ...(13.49)

Since the integrand can never be positive for any set of f's, the integral is either negative or zero. That is,

\frac{dH}{dt} \le 0    ...(13.50)

H, therefore, can never increase. This is known as Boltzmann's H theorem; H thus measures the deviation from the equilibrium state and has a tendency to decrease as the system approaches equilibrium.
One can easily see from (13.49) that dH/dt = 0 if and only if f_1' f_2' - f_1 f_2 = 0. Therefore

f_1' f_2' - f_1 f_2 = 0

is a necessary condition for f to be a solution of Eq. (13.39).
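The monotonic decrease of H can be illustrated numerically. The sketch below (not from the text) lets a one-dimensional distribution relax toward a Maxwellian in the relaxation approximation of Sec. 13.3, choosing the initial state with the same density and mean energy as f₀ (units m = k = 1; all parameter values are illustrative assumptions), and evaluates H(t) = ∫ f ln f dv.

import numpy as np

# Minimal sketch (not from the text): H(t) for a distribution relaxing toward
# a Maxwellian, f(v,t) = f0(v) + [f(v,0) - f0(v)] exp(-t/tau).

v = np.linspace(-12.0, 12.0, 4001)

def maxwellian(v, T, u=0.0):
    return np.exp(-(v - u) ** 2 / (2.0 * T)) / np.sqrt(2.0 * np.pi * T)

T_beam, u = 0.5, 1.0     # assumed initial two-beam state
f_init = 0.5 * (maxwellian(v, T_beam, +u) + maxwellian(v, T_beam, -u))
T0 = T_beam + u**2       # Maxwellian with the same density and mean energy
f0 = maxwellian(v, T0)

tau = 1.0
H = []
for t in np.linspace(0.0, 6.0 * tau, 25):
    f = f0 + (f_init - f0) * np.exp(-t / tau)
    H.append(np.trapz(f * np.log(f), v))

print("H(t) decreasing:", all(h2 <= h1 + 1e-12 for h1, h2 in zip(H, H[1:])))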

We have seen that for f to be a solution of Boltzmann's equation (13.39), it is necessary that

f_1' f_2' - f_1 f_2 = 0

Therefore

\ln f_1' + \ln f_2' = \ln f_1 + \ln f_2    ...(13.51)

This has the form of a law of conservation.
Let

\ln f = \chi(v)    ...(13.52)

Normally χ(v) is a function of conserved quantities such as energy and momentum.
Therefore, let

\chi(v) = -A v^2 + B \cdot v + C'    ...(13.53)

where A, B, C′ are constants. With a proper choice of B and C′, Eq. (13.53) can be written as

\chi(v) = -A (v - v_0)^2 + \ln C    ...(13.54)

Therefore \ln f = -A (v - v_0)^2 + \ln C, or

f = C\, e^{-A (v - v_0)^2}    ...(13.55)
We now show that C and A have the same values as in the MB distribution.
The density n is given by

n = \int f(r, v, t)\, d^3v    ...(13.56)

Therefore

n = C \int e^{-A (v - v_0)^2}\, d^3v

Putting v = v₀ + V,

n = C \int e^{-A V^2}\, d^3V = C \left( \frac{\pi}{A} \right)^{3/2}    ...(13.57)

Therefore

C = n \left( \frac{A}{\pi} \right)^{3/2}    ...(13.58)

Now

\langle v \rangle = \frac{\int v f(v)\, d^3v}{\int f(v)\, d^3v} = \frac{1}{n} \int v f(v)\, d^3v, \quad \text{since } \int f(v)\, d^3v = n

Therefore

\langle v \rangle = \frac{C}{n} \int (V + v_0)\, e^{-A V^2}\, d^3V = \frac{C}{n} \int V e^{-A V^2}\, d^3V + \frac{C}{n} v_0 \int e^{-A V^2}\, d^3V = 0 + \frac{C}{n} v_0 \int e^{-A V^2}\, d^3V

= \frac{C}{n} \left( \frac{\pi}{A} \right)^{3/2} v_0 = v_0    ...(13.59)

If the gas as a whole does not have translational motion,

\langle v \rangle = v_0 = 0    ...(13.60)
Let us now find the value of the average energy:

\langle \varepsilon \rangle = \frac{\int \tfrac{1}{2} m v^2 f(v)\, d^3v}{\int f(v)\, d^3v} \quad (\text{since } v_0 = 0)

Using spherical polar coordinates,

\langle \varepsilon \rangle = \frac{m C}{2n} \int v^4 e^{-A v^2}\, dv\, \sin\theta\, d\theta\, d\phi
= \frac{2\pi m C}{n} \int_0^\infty v^4 e^{-A v^2}\, dv
= \frac{2\pi m C}{n} \cdot \frac{3}{8} A^{-5/2} \sqrt{\pi} = \frac{3m}{4A}    ...(13.61)

Therefore

A = \frac{3m}{4 \langle \varepsilon \rangle} = \frac{3m}{4 \cdot \tfrac{3}{2} kT} = \frac{m}{2kT}    ...(13.62)

and

C = n \left( \frac{A}{\pi} \right)^{3/2} = n \left( \frac{m}{2\pi k T} \right)^{3/2}    ...(13.63)

Hence

f_0(v) = n \left( \frac{m}{2\pi k T} \right)^{3/2} e^{-m v^2 / 2kT}    ...(13.64)

which is the MB distribution function.
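A quick numerical check of Eq. (13.64) (not from the text; it uses the standard scipy routine quad and units m = k = T = n = 1) verifies that f₀ integrates to the density n and that the mean kinetic energy is (3/2)kT.

import numpy as np
from scipy.integrate import quad

# Minimal sketch: check that f0(v) = n (m/2 pi k T)^{3/2} exp(-m v^2 / 2kT)
# of Eq. (13.64) gives the right density and mean energy.  Illustrative units.

m = k = T = n = 1.0
A = m / (2.0 * k * T)
C = n * (m / (2.0 * np.pi * k * T)) ** 1.5

# integrate over speed with the spherical volume element 4 pi v^2 dv
norm, _ = quad(lambda v: C * np.exp(-A * v**2) * 4.0 * np.pi * v**2, 0.0, np.inf)
energy, _ = quad(lambda v: 0.5 * m * v**2 * C * np.exp(-A * v**2) * 4.0 * np.pi * v**2,
                 0.0, np.inf)

print("density    :", norm)             # = 1 = n
print("mean energy:", energy / norm)    # = 1.5 = (3/2) kT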


13.8 H THEOREM IN QUANTUM MECHANICS
Let the Hamiltonian of a system be given by

H = H_0 + H'    ...(13.65)

where H′ is a contribution due to a small interaction.
If the system is in a state r, the probability of its transition to a state s due to the interaction H′ is

\omega_{rs} = |\langle s | H' | r \rangle|^2 = \langle s | H' | r \rangle^* \langle s | H' | r \rangle    ...(13.66)

The probability for the inverse transition is

\omega_{sr} = |\langle r | H' | s \rangle|^2    ...(13.67)

By the Hermitian property of the matrix elements,

\langle s | H' | r \rangle^* = \langle r | H' | s \rangle    ...(13.68)

Hence

\omega_{rs} = \omega_{sr}    ...(13.69)

This is the principle of detailed balance: the transition probability for a process and its inverse are equal.
Let P_r be the probability that a system is in the state r. Therefore

P_r = \frac{N_r}{N}    ...(13.70)

where N_r is the number of systems in the state r and N the total number of systems.
Similarly

P_s = \frac{N_s}{N}    ...(13.71)

Now, there will be an increase in N_r with time as some systems make a transition to r, and also a decrease as some systems may leave r. The net increase is given by

\frac{dN_r}{dt} = \sum_s \omega_{rs} (N_s - N_r)    ...(13.72)

Equation (13.72) is known as the Master equation.
Substituting for N_r and N_s from Eqs. (13.70) and (13.71), we have

\frac{dP_r}{dt} = \sum_s \omega_{rs} (P_s - P_r)    ...(13.73)
Let h be the mean value of ln P_r over all accessible states, i.e.

h = \langle \ln P_r \rangle = \sum_r P_r \ln P_r    ...(13.74)

(compare this definition of h with Eq. (13.41)).
Therefore

\frac{dh}{dt} = \sum_r \frac{dP_r}{dt} (\ln P_r + 1) = \sum_s \sum_r \omega_{rs} (P_s - P_r)(\ln P_r + 1)    ...(13.75)
Interchanging r and s does not affect the sum. Therefore

\frac{dh}{dt} = \sum_s \sum_r \omega_{rs} (P_r - P_s)(\ln P_s + 1)    ...(13.76)

Adding and dividing by 2,

\frac{dh}{dt} = -\frac{1}{2} \sum_s \sum_r \omega_{rs} (P_r - P_s)(\ln P_r - \ln P_s)    ...(13.77)

Since (P_r - P_s)(\ln P_r - \ln P_s) \ge 0, each term in the sum is positive (or zero) and hence

\frac{dh}{dt} \le 0    ...(13.78)

which proves Boltzmann's H theorem.

In Ch. 6 we have seen that

S = -k \sum_r P_r \ln P_r    ...(13.79)

Therefore

\frac{S}{k} = -h    ...(13.80)

This gives the relation between h and the entropy S. Thus, while h decreases with time, the entropy S increases with time, which is consistent with our earlier deduction.
It must be emphasized that
(a) \frac{dh}{dt} \le 0, i.e. the H theorem is valid, only if the gas satisfies the assumption of molecular chaos, and
(b) \frac{dh}{dt} = 0 if and only if f(v, t) is the MB distribution function.
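The content of Eqs. (13.73)–(13.78) can be illustrated numerically. The sketch below (not from the text) evolves the master equation for a randomly chosen symmetric set of transition probabilities ω_rs and confirms that h = Σ_r P_r ln P_r never increases; the number of states, the rates and the time step are illustrative assumptions.

import numpy as np

# Minimal sketch: dP_r/dt = sum_s w_rs (P_s - P_r), Eq. (13.73), with
# symmetric w_rs (detailed balance), and h = sum_r P_r ln P_r, Eq. (13.74).

rng = np.random.default_rng(0)
n_states = 6
w = rng.uniform(0.1, 1.0, size=(n_states, n_states))
w = 0.5 * (w + w.T)                      # detailed balance: w_rs = w_sr
np.fill_diagonal(w, 0.0)

P = rng.uniform(0.1, 1.0, size=n_states)
P /= P.sum()                             # initial (non-uniform) probabilities

dt, steps = 1e-3, 4000
h_values = []
for _ in range(steps):
    h_values.append(np.sum(P * np.log(P)))
    dP = (w * (P[None, :] - P[:, None])).sum(axis=1)   # sum_s w_rs (P_s - P_r)
    P = P + dt * dP

print("h monotonically decreasing:",
      all(b <= a + 1e-9 for a, b in zip(h_values, h_values[1:])))
print("final P (tends to uniform):", np.round(P, 4))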

13.9 VARIATION OF MEAN VALUES


We show in this section how the mean value of a quantity associated with a molecule varies as a function of t and r.
Let χ(r, v, t) be a function which describes a property of a molecule of velocity v located at r at time t. If n(r, t) is the mean number of molecules per unit volume and f(r, v, t) is the distribution function,

\langle \chi(r, t) \rangle = \frac{1}{n(r, t)} \int f(r, v, t)\, \chi(r, v, t)\, d^3v    ...(13.81)

Multiplying both sides of the Boltzmann transport equation (13.37) by χ₁ = χ(r, v₁, t) and integrating over all velocities v₁, we get

\int \left( \frac{\partial}{\partial t} + v_1 \cdot \text{grad}_r + a \cdot \text{grad}_v \right) f\, \chi_1\, d^3v_1 = \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) \chi_1    ...(13.82)
The LHS of this equation is

\int \left( \frac{\partial}{\partial t} + v_1 \cdot \text{grad}_r + \frac{F}{m} \cdot \text{grad}_v \right) f\, \chi_1\, d^3v_1
= \int \frac{\partial f}{\partial t} \chi_1\, d^3v_1 + \int (v_1 \cdot \text{grad}_r f)\, \chi_1\, d^3v_1 + \int \left( \frac{F}{m} \cdot \text{grad}_v f \right) \chi_1\, d^3v_1    ...(13.83)


Now

(a) \int \frac{\partial f}{\partial t} \chi_1\, d^3v_1 = \frac{\partial}{\partial t} \int f \chi_1\, d^3v_1 - \int f \frac{\partial \chi_1}{\partial t}\, d^3v_1    ...(13.84)

Using Eq. (13.81),

\int \frac{\partial f}{\partial t} \chi_1\, d^3v_1 = \frac{\partial}{\partial t} \left( n \langle \chi_1 \rangle \right) - n \left\langle \frac{\partial \chi_1}{\partial t} \right\rangle    ...(13.85)

(b) \int (v_1 \cdot \text{grad}_r f)\, \chi_1\, d^3v_1 = \int \left[ \frac{\partial}{\partial x_\alpha} (v_{1\alpha} f \chi_1) - f v_{1\alpha} \frac{\partial \chi_1}{\partial x_\alpha} \right] d^3v_1    ...(13.86)

In writing this, we have expressed vector quantities in terms of their Cartesian components and used the summation convention.
Therefore

\int (v_1 \cdot \text{grad}_r f)\, \chi_1\, d^3v_1 = \frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \chi_1 \rangle \right) - n \left\langle v_{1\alpha} \frac{\partial \chi_1}{\partial x_\alpha} \right\rangle    ...(13.87)
(c) \int \left( \frac{F}{m} \cdot \text{grad}_v f \right) \chi_1\, d^3v_1 = \int \frac{F_\alpha}{m} \frac{\partial f}{\partial v_{1\alpha}} \chi_1\, d^3v_1
= \int \frac{\partial}{\partial v_{1\alpha}} \left( \frac{F_\alpha}{m} f \chi_1 \right) d^3v_1 - \int f \frac{F_\alpha}{m} \frac{\partial \chi_1}{\partial v_{1\alpha}}\, d^3v_1    ...(13.88)

Since f → 0 as v → ∞, the first term vanishes. Hence

\int \left( \frac{F}{m} \cdot \text{grad}_v f \right) \chi_1\, d^3v_1 = -n \left\langle \frac{F_\alpha}{m} \frac{\partial \chi_1}{\partial v_{1\alpha}} \right\rangle    ...(13.89)

Therefore, the L.H.S.

= \frac{\partial}{\partial t} \left( n \langle \chi_1 \rangle \right) - n \left\langle \frac{\partial \chi_1}{\partial t} \right\rangle + \frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \chi_1 \rangle \right) - n \left\langle v_{1\alpha} \frac{\partial \chi_1}{\partial x_\alpha} \right\rangle - n \left\langle \frac{F_\alpha}{m} \frac{\partial \chi_1}{\partial v_{1\alpha}} \right\rangle

= \frac{\partial}{\partial t} \left( n \langle \chi_1 \rangle \right) + \frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \chi_1 \rangle \right) - n \left\langle \left( \frac{\partial}{\partial t} + v_{1\alpha} \frac{\partial}{\partial x_\alpha} + \frac{F_\alpha}{m} \frac{\partial}{\partial v_{1\alpha}} \right) \chi_1 \right\rangle    ...(13.90)
Now, the RHS is

\int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) \chi_1    ...(13.91)

Following the procedure adopted in the case of the H-theorem, we make the following interchanges, which leave σ(Ω) unaffected:
(i) v_1 \rightleftharpoons v_2;  v_1' \rightleftharpoons v_2'

\text{RHS} = \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_2' f_1' - f_2 f_1 \right) \chi_2    ...(13.92)

where χ₂ = χ(r, v₂, t).
(ii) v_1 \rightleftharpoons v_1';  v_2 \rightleftharpoons v_2'

\text{RHS} = \int d^3v_1' \int d^3v_2' \int \sigma(\Omega)\, d\Omega\, |v_1' - v_2'| \left( f_1 f_2 - f_1' f_2' \right) \chi_1'    ...(13.93)

(iii) v_1 \rightleftharpoons v_2';  v_2 \rightleftharpoons v_1'

\text{RHS} = \int d^3v_2' \int d^3v_1' \int \sigma(\Omega)\, d\Omega\, |v_2' - v_1'| \left( f_2 f_1 - f_2' f_1' \right) \chi_2'    ...(13.94)

Adding and dividing by 4,

\text{RHS} = \frac{1}{4} \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) \left( \chi_1 + \chi_2 - \chi_1' - \chi_2' \right)    ...(13.95)
Hence, Eq. (13.82) becomes

\frac{\partial}{\partial t} \left( n \langle \chi_1 \rangle \right) + \frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \chi_1 \rangle \right) - n \left\langle \left( \frac{\partial}{\partial t} + v_{1\alpha} \frac{\partial}{\partial x_\alpha} + \frac{F_\alpha}{m} \frac{\partial}{\partial v_{1\alpha}} \right) \chi_1 \right\rangle
= \frac{1}{4} \int d^3v_1 \int d^3v_2 \int \sigma(\Omega)\, d\Omega\, |v_1 - v_2| \left( f_1' f_2' - f_1 f_2 \right) \left( \chi_1 + \chi_2 - \chi_1' - \chi_2' \right)    ...(13.96)

13.10 THE LAWS OF CONSERVATION

If χ in the above treatment represents a conserved property,

\chi_1 + \chi_2 = \chi_1' + \chi_2'    ...(13.97)

and, hence, Eq. (13.96) reduces to

\frac{\partial}{\partial t} \left( n \langle \chi_1 \rangle \right) = n \left\langle \left( \frac{\partial}{\partial t} + v_{1\alpha} \frac{\partial}{\partial x_\alpha} + \frac{F_\alpha}{m} \frac{\partial}{\partial v_{1\alpha}} \right) \chi_1 \right\rangle - \frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \chi_1 \rangle \right)    ...(13.98)

The quantities that are conserved in collisions of neutral molecules are mass, momentum and energy.

(A) Conservation of mass
Putting χ₁ = m in Eq. (13.98), we have

\frac{\partial}{\partial t} (n m) = -\frac{\partial}{\partial x_\alpha} \left( n \langle v_{1\alpha} \rangle m \right)    ...(13.99)

Here we have used Einstein's summation convention, which means that the repeated index α is summed from 1 to 3.
Since ρ = nm, Eq. (13.99) becomes

\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_\alpha} \left( \rho \langle v_{1\alpha} \rangle \right) = 0

i.e.

\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x_\alpha} (\rho u_\alpha) = 0    ...(13.100)

where u_α = ⟨v_{1α}⟩ is the mean flow velocity. This is the equation of continuity.
Similarly, we can derive the equations for momentum and energy conservation; a sketch of the momentum equation is given below.
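As a hedged sketch of the next member of this family (the derivation is only outlined here, not reproduced from the text): putting χ₁ = m v_{1β} in Eq. (13.98), the space and time derivatives of χ₁ vanish and ∂χ₁/∂v_{1α} = m δ_{αβ}, so that

% Sketch of the momentum-conservation equation obtained from Eq. (13.98)
% with \chi_1 = m v_{1\beta}; the notation (\rho = nm, F_\beta the external
% force component) follows the chapter.
\frac{\partial}{\partial t}\bigl(\rho\,\langle v_{1\beta}\rangle\bigr)
  + \frac{\partial}{\partial x_\alpha}\bigl(\rho\,\langle v_{1\alpha} v_{1\beta}\rangle\bigr)
  = \frac{\rho}{m}\,F_\beta ,\qquad \beta = 1,2,3 ,

which, with ρ = nm, is the momentum-transport (Euler-type) equation; the analogous choice χ₁ = ½mv₁² leads to the energy equation.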

PROBLEMS
13.1 Show that the average kinetic energy of two colliding hard spheres is (7/2)kT, of which (3/2)kT is associated with the centre of mass and 2kT with the relative motion.
13.2 Estimate the probability that at any instant of time a 1 cc volume of a room of dimensions 4 m × 3 m × 3 m at NTP becomes totally devoid of air because of statistical fluctuations.
13.3 Find the value of the Lorenz number κ/σT, assuming that the electrons obey Maxwell-Boltzmann statistics.
13.4 Assuming that a gas which flows with a velocity gradient β along the x-axis obeys classical statistics, find its coefficient of viscosity.

□ □ □
APPENDIX

APPENDIX A
THE METHOD OF LAGRANGIAN UNDETERMINED MULTIPLIERS
In statistical mechanics one often needs to determine the extremum (maximum or minimum) value of a function of several variables under the condition that there exist some constraints between the variables. This can be done by adopting the method suggested by Lagrange.
Suppose the function whose extreme value is to be obtained is

f(x_1, x_2, \ldots, x_n)    ...(A.1)

subject to the condition that

\phi(x_1, x_2, \ldots, x_n) = 0    ...(A.2)

To have an extremum value, the function has to satisfy the condition

\frac{\partial f}{\partial x_1} dx_1 + \frac{\partial f}{\partial x_2} dx_2 + \ldots + \frac{\partial f}{\partial x_n} dx_n = 0    ...(A.3)

where the derivatives are to be evaluated at the extremum points.
Since Eq. (A.2) is always satisfied,

\frac{\partial \phi}{\partial x_1} dx_1 + \frac{\partial \phi}{\partial x_2} dx_2 + \ldots + \frac{\partial \phi}{\partial x_n} dx_n = 0    ...(A.4)

We have to solve Eq. (A.3) subject to the condition (A.4). We now introduce a parameter α whose value will be determined later. We multiply the restrictive condition (A.4) by α and subtract the result from Eq. (A.3). We get

\left( \frac{\partial f}{\partial x_1} - \alpha \frac{\partial \phi}{\partial x_1} \right) dx_1 + \left( \frac{\partial f}{\partial x_2} - \alpha \frac{\partial \phi}{\partial x_2} \right) dx_2 + \ldots + \left( \frac{\partial f}{\partial x_n} - \alpha \frac{\partial \phi}{\partial x_n} \right) dx_n = 0    ...(A.5)
The variables x₁, x₂, ..., x_n are not all independent. They are connected by Eq. (A.2). Any one of these variables, say x₁, may therefore be expressed in terms of all the other variables, which can then be chosen independently without any restriction.

Since the parameter α is arbitrary, we can choose it so as to eliminate the coefficient of dx₁, i.e. we put

\frac{\partial f}{\partial x_1} - \alpha \frac{\partial \phi}{\partial x_1} = 0    ...(A.6)

From Eq. (A.5) we see that the sum of all the other terms is equal to zero. Since the differentials dx₂, dx₃, ..., dx_n are all independent, each of the brackets in Eq. (A.5) must be equal to zero.
Therefore

\frac{\partial f}{\partial x_i} - \alpha \frac{\partial \phi}{\partial x_i} = 0 \quad \text{for } i = 1, 2, \ldots, n    ...(A.7)

These equations give us the value of α and enable us to find the required extremum.
For example, suppose we have to find the maximum value of f(x, y) = xy subject to the condition that x² + y² = 1.
For a maximum,

df = x\, dy + y\, dx = 0

Also

2x\, dx + 2y\, dy = 0

Multiplying the second equation by α and subtracting it from the first, we get

(y - 2\alpha x)\, dx + (x - 2\alpha y)\, dy = 0

We choose α so as to eliminate the first bracket, i.e.

y - 2\alpha x = 0 \quad \text{or} \quad \alpha = \frac{y}{2x}

Therefore

x - 2\alpha y = x - \frac{y^2}{x} = 0; \quad \text{i.e. } x = \pm y

From the restrictive condition x² + y² = 1 we have

2x^2 = 1 \quad \text{or} \quad x = \frac{1}{\sqrt{2}} = \pm y

Therefore the maximum value of f(x, y) is 1/2.
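As a cross-check of this result (not part of the appendix), the short sketch below parametrizes the constraint by x = cos θ, y = sin θ and scans f = xy numerically; the grid size is an arbitrary choice.

import numpy as np

# Minimal sketch: brute-force check of the worked example max(xy) subject to
# x^2 + y^2 = 1.  The maximum 1/2 occurs at x = y = +-1/sqrt(2).

theta = np.linspace(0.0, 2.0 * np.pi, 200001)
x, y = np.cos(theta), np.sin(theta)
f = x * y

i = np.argmax(f)
print("max f   =", f[i])          # ~ 0.5
print("at x, y =", x[i], y[i])    # ~ 0.7071 each (or both negative)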

APPENDIX B
SOME USEFUL INTEGRALS
1. Gamma function

\Gamma(n + 1) = \int_0^\infty e^{-x} x^n\, dx = n \Gamma(n) = n! \quad \text{if } n \text{ is integral}

and

\Gamma(n + 1) = n(n - 1) \cdots \tfrac{1}{2}\, \Gamma\!\left(\tfrac{1}{2}\right) = n(n - 1) \cdots \tfrac{1}{2} \sqrt{\pi} \quad \text{if } n \text{ is half-integral.}

2. I_n = \int_0^\infty x^n e^{-a x^2}\, dx, where n is a positive integer.
Substituting y = a x²,

I_n = \frac{1}{2} a^{-(n+1)/2} \int_0^\infty e^{-y} y^{(n-1)/2}\, dy = \frac{1}{2} a^{-(n+1)/2}\, \Gamma\!\left( \frac{n+1}{2} \right)

With even n = 2m,

I_{2m} = \frac{(2m - 1)!!}{2^{m+1}} \sqrt{\frac{\pi}{a^{2m+1}}}

and with odd n = 2m + 1,

I_{2m+1} = \frac{m!}{2 a^{m+1}}

The integral

\int_{-\infty}^{\infty} x^n e^{-a x^2}\, dx = \begin{cases} 0 & \text{when } n \text{ is odd} \\ 2 I_n & \text{when } n \text{ is even} \end{cases}
3. The error function

\text{erf}(y) = \frac{2}{\sqrt{\pi}} \int_0^y e^{-x^2}\, dx
= \frac{2}{\sqrt{\pi}} \left( y - \frac{y^3}{3} + \frac{y^5}{10} - \cdots \right) \quad \text{if } y \ll 1

and

\text{erf}(y) = 1 - \frac{e^{-y^2}}{y \sqrt{\pi}} \left( 1 - \frac{1}{2y^2} + \cdots \right) \quad \text{if } y \gg 1
4. \int_{-\infty}^{\infty} \frac{x^2}{(e^x + 1)(e^{-x} + 1)}\, dx = \frac{\pi^2}{3}

5. \int_0^\infty \frac{x^{n-1}}{e^x - 1}\, dx = \Gamma(n)\, \zeta(n) for all n > 1, where ζ(n) is the Riemann zeta function.
The numerical values of some of the ζ(n) are
(i) \zeta(2) = \frac{\pi^2}{6} \approx 1.645   (ii) \zeta(3) \approx 1.202
(iii) \zeta(4) = \frac{\pi^4}{90} \approx 1.082   (iv) \zeta(5) \approx 1.037
(v) \zeta\!\left(\tfrac{3}{2}\right) \approx 2.612   (vi) \zeta\!\left(\tfrac{5}{2}\right) \approx 1.341
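The entries above are easy to spot-check numerically. The sketch below (not part of the appendix) uses the standard scipy routines quad, gamma and zeta with illustrative values a = 2 and n = 2m = 4.

import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, zeta

# Minimal sketch: numerical spot-checks of two of the listed integrals.

a, m = 2.0, 2  # illustrative values

# entry 2 with even n = 2m:  integral_0^inf x^{2m} e^{-a x^2} dx
lhs, _ = quad(lambda x: x ** (2 * m) * np.exp(-a * x**2), 0.0, np.inf)
rhs = 0.5 * a ** (-(2 * m + 1) / 2) * gamma((2 * m + 1) / 2)
print(lhs, "=", rhs)

# entry 5 with n = 4:  integral_0^inf x^3/(e^x - 1) dx = Gamma(4) zeta(4) = pi^4/15
lhs, _ = quad(lambda x: x**3 / np.expm1(x), 0.0, np.inf)
print(lhs, "=", gamma(4) * zeta(4, 1), "=", np.pi**4 / 15)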



APPENDIX C
1. PHYSICAL CONSTANTS
Constant                            Value
Speed of light (in vacuum)   c      2.998 × 10⁸ m s⁻¹
Electronic charge            e      1.602 × 10⁻¹⁹ coulombs
Electron rest mass           m_e    9.109 × 10⁻³¹ kg
Proton rest mass             m_p    1.673 × 10⁻²⁷ kg
Planck's constant            h      6.626 × 10⁻³⁴ J s
Boltzmann constant           k      1.381 × 10⁻²³ J K⁻¹
Gravitational constant       G      6.67 × 10⁻¹¹ N m² kg⁻²
Bohr magneton                μ_B    9.274 × 10⁻²⁴ J T⁻¹
Avogadro number              N_A    6.023 × 10²³ mole⁻¹
Molar gas constant           R      8.314 J mole⁻¹ K⁻¹
Molar volume at STP                 2.242 × 10⁻² m³/mole
2. CONVERSION FACTORS
1 Newton (N)            10⁵ dynes
1 Joule (J)             10⁷ ergs
1 Cal                   4.19 J
1 Electron-volt (eV)    1.602 × 10⁻¹⁹ J = 23.05 kcal/mole
1 Atmosphere            1.013 × 10⁵ N/m² (P₀)
ANSWERS TO PROBLEMS
Chapter 2

2.1 dp = -
t>0 -<t>2
2.2 2 × 10⁻⁸
2.3 (0.8)³ (0.9)⁴ = 0.336
2.4 31/32
2.5 0.65
2.6 (a) 0.40, (b) 0.60, (c) 0.16
2.7 (a) (i) 0.444; (ii) 0.444
    (b) (i) 0.437, (ii) 0.46
2.8 (i) < (j. > = 0 (ii) < |i.2 > = 2p (Hi) < A|t2 > = 2p|i20
2.9 (a) 0.37  (b) 0.08        2.10 0.8 × 10⁻²³

Chapter 3
3.1 2.4 × 10¹²        3.4 10⁷ K
3.2 1.93 × 10³ m/s

Chapter 4

4 .1 3. 2 0 4.4 p = ± I2tneei i - 1

2 >
4.5 Elliplic helix x 2 h— ^ Po

Chapter 5
5.1 kT
5.2 (i) p₁ = 0.8, p₂ = 0.2, T = 5.25 K   (ii) p₁ = p₂ = 0.5, T = ∞
5.4 (i) 2.0 × 10²³, (ii) 2.04 J

5.5 AT 5.6 - A T
5.7 kT/mg            5.8 40 km
5.9 12 × 10⁻⁶ m      5.10 304 K
Chapter 6
pm co2 „0mw2£2
6.1 P = iVAT
2 tzI pm(o2fl2/ 2 —1
6.2 147 J 6.6 (£) S = 0; (££) S = p2V In 2
6.7 (i) (1.554)N; (££) 0.50 Ne); (iii) 0.94 Nk
, Aco , ' AT / Aco
6.8 (t) N kT In ]: (« ) Nk In------ Hi (iti) g = AT In I
Aco
6.9 124 J K⁻¹
6.10 −5.98 × 10⁻²⁰ J        6.11 3000 K
6.14 Nk \ln \frac{(P_1 + P_2)^2}{4 P_1 P_2}        6.15 20.8 J K⁻¹
Chapter 7

7.4 - A T
Chapter 8
Aco r —
P^co
8.2 — { l + 3e J { i + e -pfiwj 8.3 (£) valid; (« ) not valid

S A 4 x“ 10--9- 8.6 ^ 3 ( 2 j + i)e-'/('/+1)fl/7’

8.7 0.92 Nk        8.8 2452 K
8.10 2.7 × 10⁻² Pa

Chapter 9

9.2 2.7 kT; 9:3

9.4 218 hours


2
9.5 (i) aTs4 0 0 (ii) 283 K
9.7 0.087 K;
9.8 0.91
Chapter 10
10.1 (i) 5 x 10-la J (ii) 0.47 xlO2 T
10.2 3/4        10.3 1567 km/s
10.6 3          10.7 2 × 10¹⁰ Pa
10.8 3 x 56 x 10"10

Chapter 11

11.2 2.3 × 10⁶ J/kg
11.3 b = 2.2 × 10⁻⁵ m³/mole; a = 1.69 × 10⁻² Pa m⁶/mole;
     R = 6.86 J/mole K
11.4 (i) P = 2.53 x 10s Pa; = 2.49 x 10s Pa
(ii) P = 1.39 x 10R Pa; P ideal = 2.91 x 106 Pa
11.5 -0.009 K
Chapter 12
.2
12.1 2 x 10-1 12.2 1 =
m
12.4 (i) 1.7 × 10⁻⁷ m; (ii) 1.1 × 10⁻¹⁰ m
12.5 (1/4) c̄ N/V        12.6 2kT
12.8 2.1 × 10⁻³ torr     12.12 ~3 hrs.
Chapter 13

13.3 3        13.4 η = nkTτ

□ □ □
