
Chapter-1

Statistical Basis of Thermodynamics


1.1 The macroscopic and microscopic states
We consider a physical system composed of N identical particles confined to a space
of volume V. In a typical case N is extremely large, of the order of 10^23. Now we define the
thermodynamic limit as N → ∞ and V → ∞ such that the ratio N/V, the particle number density,
remains fixed at a pre-assigned value. In this limit, the extensive properties of the system
become directly proportional to the size of the system (i.e. proportional to N or V), while the
intensive properties become independent of that. (The generalized work is defined as the
product of generalized force and generalized displacement. Then the displacement-like
quantities such as extension l in the case of stress, area A in the case of surface energy,
volume V for pressure P, charge q for e.m.f  etc. are called extensive quantities and the
force-like quantities such as stress, surface energy, pressure etc. are called intensive
quantities). The particle density N/V remains an important parameter for all physical
properties of the system.
Next we consider the total energy E of the system. If the particles of the system are
regarded as non-interacting, the total energy is given by,

E = Σi ni εi 1.1
where ni is the number of particles with energy εi. Also,

N = Σi ni 1.2
According to quantum mechanics, the single-particle energies εi are discrete and
their values depend on the volume V to which the particles are confined. (Consider a particle
in a 3-dimensional box. The energies are quantized and the spacing between them depends on
the dimensions of the box. For an ensemble there are ni particles having energy εi.)
Accordingly, the possible values of the total energy E are also discrete. However, for large V,
the spacing of the different energy values is so small in comparison with the total energy E of
the system that the parameter E might be regarded as a continuous variable. This would be
true even if the particles were mutually interacting. In such cases, however, we cannot express E by
eqn.1.1.
A macrostate of the given system can be defined by specifying the actual values of the
parameters N, V and E.
At the molecular level, because of the different possible values of the ni and of the εi,
there will, in general, be a large number of different ways in which the macrostate of the
given system can be realized. (Example: Rs. 1000 distributed among 40 students. The total always
equals Rs. 1000, but the individual shares may change if the seating arrangement is changed
or if there is an exchange of money between the students.) In the case of a non-interacting system,
since the total energy E consists of a simple sum of N single-particle energies εi, there will
obviously be a large number of different ways in which the individual εi can be chosen so as
to make the total energy equal to E. In other words, there will be a large number of
different ways in which the total energy E of the system can be distributed among the N
particles constituting the system. Each of these different ways specifies a microstate or

complexion of the given system. In general, the various microstates or complexions of a given
system can be identified with the independent solutions ψ(r1, r2, . . ., rN) of the Schrödinger
equation of the system, corresponding to the eigenvalue E of the relevant Hamiltonian. In any
case, to a given macrostate of the system there corresponds a large number of microstates,
such that at any time t, when there are no other constraints, the system is equally likely to be
in any one of these microstates. This is generally referred to as the postulate of equal a priori
probabilities for all microstates consistent with a given macrostate.
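To make this counting concrete, the following Python sketch (an illustration added here, not part of the derivation) counts the microstates of a toy macrostate: N distinguishable particles sharing E quanta of energy among equally spaced single-particle levels. The specific numbers are arbitrary, assumed only for illustration.

from itertools import product
from math import comb

# Toy model (illustration only): N distinguishable particles, single-particle
# levels 0, 1, 2, ... in units of a quantum epsilon.
N, E = 3, 4          # 3 particles sharing 4 quanta of energy

# Brute-force count of microstates (n1, n2, n3) with n1 + n2 + n3 = E
brute = sum(1 for occ in product(range(E + 1), repeat=N) if sum(occ) == E)

# Closed form: number of weak compositions of E into N parts
closed = comb(E + N - 1, N - 1)

print(brute, closed)   # both give 15 microstates for this one macrostate (N, E)
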
The actual number of all possible microstates will be a function of N, V and E and may
be denoted by the symbol Ω(N, V, E). The dependence on V comes in because the possible
values εi of the single-particle energy are themselves a function of this parameter.
(Particle in a box: the energy levels depend on the dimensions, and hence the volume, of the box.)
The complete thermodynamics of the given system can be derived from the magnitude of the
number Ω and from its dependence on the parameters N, V and E. To bring out the true nature
of the number Ω we consider the problem of "thermal contact" between two given physical
systems.

1.2 Contact between statistics and thermodynamics: Physical significance of
the number Ω(N, V, E)

[Figure: two systems A1 (N1, V1, E1) and A2 (N2, V2, E2) in thermal contact]

Consider two physical systems A1 and A2, which are separately in equilibrium. Let the
macrostate of A1 be represented by the parameters N1, V1 and E1, so that it has Ω1(N1, V1, E1)
possible microstates, and the macrostate of A2 be represented by the parameters N2, V2 and E2,
so that it has Ω2(N2, V2, E2) possible microstates. The mathematical form of the function Ω1
may not be the same as that of the function Ω2, because that ultimately depends upon the
nature of the system. The thermodynamic properties of the systems A1 and A2 can be derived
from the functions Ω1(N1, V1, E1) and Ω2(N2, V2, E2), respectively.
Now the two systems are brought in thermal contact with each other but still separated
by a rigid, impenetrable wall, so that the respective volumes V1 and V2 and the respective
particle numbers N1 and N2 remain fixed. Due to the thermal contact between the two systems,
E1 and E2 vary, but the total energy of the composite system A(0) is,
E(0) = E1 + E2 = constant. 1.3
Now at any time t, the sub-system A1 is likely to be in any one of the Ω1(E1) microstates, while
the sub-system A2 is likely to be in any one of the Ω2(E2) microstates. (N1, N2, V1 and V2 are
suppressed as they do not vary.) Therefore the composite system A(0) is equally likely to be in
any one of the Ω(0)(E1, E2) microstates, where,
Ω(0)(E1, E2) = Ω1(E1) Ω2(E2) 1.4
= Ω1(E1) Ω2(E(0) − E1)
= Ω(0)(E(0), E1) 1.5
That is, Ω(0) can be expressed either in terms of E1 and E2 or in terms of E(0) and E1. Clearly the
number Ω(0) varies with E1.

Now let us find out the value of the variable E1 for which the composite system is in
equilibrium or, in other words, to what extent the energy exchange between the systems proceeds before mutual
equilibrium. A physical system, left to itself, proceeds naturally in a direction of increasing
number of microstates and finally settles down in a macrostate with a largest possible number
of microstates. Statistically speaking, we regard a macrostate with a larger number of
microstates as a more probable state, and with the largest number of microstates as the most
probable one. The state with largest number of microstates is identified as the equilibrium
state of the system.
Let Ē1 and Ē2 be the equilibrium values of E1 and E2, respectively. To find them we
maximize Ω(0). Differentiating eqn.1.4 with respect to E1, we get,

∂Ω(0)/∂E1 = (∂Ω1(E1)/∂E1) Ω2(E2) + Ω1(E1) (∂Ω2(E2)/∂E2) (∂E2/∂E1)

But from eqn.1.3, ∂E2/∂E1 = −1

Then, ∂Ω(0)/∂E1 = (∂Ω1(E1)/∂E1) Ω2(E2) − Ω1(E1) (∂Ω2(E2)/∂E2)
When {Ω(0)(E1, E2)} is maximum,

(∂Ω1/∂E1)E1=Ē1 Ω2(Ē2) − Ω1(Ē1) (∂Ω2/∂E2)E2=Ē2 = 0

i.e. (∂lnΩ1/∂E1)E1=Ē1 = (∂lnΩ2/∂E2)E2=Ē2 1.6
Thus the condition for equilibrium reduces to the equality of the parameters β1 and β2 of the sub-
systems A1 and A2, respectively, where β is defined by,

β ≡ (∂lnΩ/∂E)N,V; E=Ē 1.7
We thus find that when two physical systems are brought into thermal contact, there is
a net exchange of energy between them until the equilibrium values Ē1 and Ē2 of the variables
E1 and E2 are reached. Once these values are reached there is no more net exchange of energy
between the two systems. Then the systems are said to have attained a state of mutual
equilibrium. According to our analysis this happens only when β1 = β2.
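The following Python sketch illustrates this numerically for two hypothetical subsystems whose microstate counts are taken, purely for illustration, from the same quantum-sharing toy model used above: the most probable division of the energy is the one at which finite-difference estimates of ∂lnΩ/∂E for the two parts coincide.

from math import comb, log

# Hypothetical toy subsystems (not from the text): q_total quanta of energy
# shared between A1 (N1 particles) and A2 (N2 particles), with
# Omega(N, q) = C(q + N - 1, N - 1) as in the sketch above.
N1, N2, q_total = 30, 60, 90

def omega(N, q):
    return comb(q + N - 1, N - 1)

# Omega_0(q1) = Omega_1(q1) * Omega_2(q_total - q1); find its maximum over q1
q1_best = max(range(q_total + 1),
              key=lambda q1: omega(N1, q1) * omega(N2, q_total - q1))

# Finite-difference estimates of beta = d(ln Omega)/dq at the most probable split
beta1 = log(omega(N1, q1_best + 1)) - log(omega(N1, q1_best))
beta2 = log(omega(N2, q_total - q1_best + 1)) - log(omega(N2, q_total - q1_best))

print(q1_best, beta1, beta2)   # beta1 ≈ beta2 at the most probable partition
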

It is then natural to expect that the parameter β is somehow related to the
thermodynamic temperature T of the system. To determine this relationship, we recall the first
law of thermodynamics,
dE = đQ + đW
= TdS − PdV
Or, dE = TdS at constant N and V

Then, (∂S/∂E)N,V = 1/T 1.8
where, S is the entropy of the system. Eqn.1.7 can be written as,

β = (∂lnΩ/∂S)N,V (∂S/∂E)N,V

Using eqn.1.8, β = (1/T)(∂lnΩ/∂S)N,V 1.7a


That is, there exists an intimate relationship between the thermodynamic quantity S and the
statistical quantity Ω. Now we write, for any physical system,

ΔS/Δ(lnΩ) = 1/(βT) = constant 1.9
This correspondence was established by Boltzmann who also believed that, since the
relationship between thermodynamic approach and statistical approach seems to be of a
fundamental character, the constant appearing in eqn.1.9 must be a universal constant. It was
Planck who first wrote the explicit formula,

S = k lnΩ 1.10

Equation 1.10 determines the absolute value of the entropy of a given physical system
in terms of the total number of microstates accessible to it in conformity with the given
macrostate. The zero entropy then corresponds to the special state for which only one
microstate is accessible (i.e. Ω = 1). This state is called the unique configuration. Thus the
statistical approach provides a theoretical basis for the third law of thermodynamics. Eqn.1.10
has a fundamental importance in physics, because it provides a bridge between the
microscopic and macroscopic approaches.
We have already studied the principle of increase of entropy, which states that in all
the processes the entropy of the universe either increases or remains unchanged. Also, all natural
processes are such that a certain amount of energy is transferred from the form available for
conversion into work to the unavailable form. The entropy of a given system may be regarded
as a measure of the so-called disorder or chaos prevailing in the system. Eqn.1.10 tells us how
the disorder arises microscopically. Clearly, disorder is a manifestation of the largeness of the
number of microstates the system can have. The larger the choice of the microstates, the lesser
will be the degree of predictability or the level of order in the system. The system is in a
completely ordered state when it is in the unique state corresponding to Ω = 1, that is, the
system has no other choice. Comparing 1.9 and 1.10, we get,

k = 1/(βT) or, β = 1/(kT) 1.11
The universal constant k is generally referred to as the Boltzmann constant.
1.3 Further contact between statistics and thermodynamics
We now examine a more elaborate exchange between the sub-systems A1 and A2. We
assume that the wall separating the two sub-systems is movable as well as conducting, so that
the volumes V1 and V2 become variable; of course, the total volume V(0) = V1 + V2 remains
constant, so that effectively we have only one more independent variable (V2 = V(0) − V1).
The wall is still assumed to be impenetrable to particles, so N1 and N2 remain fixed. Then the
number of microstates Ω(0) of the composite system is a function of the four parameters V1, V2, E1
and E2 (or, of V(0), E(0), V1 and E1),
i.e. Ω(0)(V1, V2, E1, E2) = Ω(0)(V(0), E(0), V1, E1)
= Ω1(V1, E1) Ω2(V2, E2) 1.12
Then, as before, we can write,

(∂lnΩ1/∂E1)E1=Ē1 = (∂lnΩ2/∂E2)E2=Ē2 1.6a

(∂lnΩ1/∂V1)V1=V̄1 = (∂lnΩ2/∂V2)V2=V̄2 1.6b

Now we define, η ≡ (∂lnΩ/∂V)N,E; V=V̄ 1.13


Then from eqns.1.6a and 1.6b it is clear that the condition for equilibrium now takes the form of
an equality between the pair of parameters (β1, η1) of the sub-system A1 and (β2, η2) of the sub-
system A2. Next we assume that the partition is also penetrable, so that an exchange of
particles between A1 and A2 is allowed. Then we get, in addition,

(∂lnΩ1/∂N1)N1=N̄1 = (∂lnΩ2/∂N2)N2=N̄2 1.6c

Now we define, ζ ≡ (∂lnΩ/∂N)V,E; N=N̄ 1.14


Then the condition for equilibrium becomes the equality between the set of parameters (β1, η1, ζ1) of
the sub-system A1 and (β2, η2, ζ2) of the sub-system A2. To determine the physical meaning of
the parameters η and ζ we make use of the equation S = k lnΩ and the first law of
thermodynamics,
dE = TdS − PdV + μdN 1.15
where, P is the pressure and μ is the chemical potential.

Then, (∂S/∂E)N,V = 1/T 1.16a

(∂S/∂V)N,E = P/T 1.16b

(∂S/∂N)V,E = −μ/T 1.16c
From eqns.1.13 and 1.10,

η = (∂lnΩ/∂V)N,E = (1/k)(∂S/∂V)N,E

Using eqn.1.16b, η = P/kT 1.17a

Similarly, ζ = −μ/kT 1.17b
If the partition wall of A1 and A2 is conducting and movable but still impenetrable, then the
conditions for equilibrium thermodynamically are,
T1 = T2 and P1 = P2 1.18a
On the other hand if it is conducting and penetrable but not movable, the conditions are,
T1 = T2 and μ1 = μ2 1.18b

Finally, if the partition is conducting, movable and penetrable, the conditions for equilibrium
are,
T1 = T2 ; P1 = P2 and μ1 = μ2 1.19
Now we derive the thermodynamic relationships from the statistical beginning. We
have,
S(N,V,E) = k lnΩ(N,V,E) 1.20
Using eqns.1.7, 1.11, 1.13 and 1.14 we can write,

(∂S/∂E)N,V = k (∂lnΩ/∂E)N,V = kβ = 1/T 1.21a

(∂S/∂V)N,E = k (∂lnΩ/∂V)N,E = kη = P/T 1.21b

(∂S/∂N)V,E = k (∂lnΩ/∂N)V,E = kζ = −μ/T 1.21c
Then dividing eqn.1.21b by eqn.1.21a we get,

P = (∂S/∂V)N,E . (∂E/∂S)N,V

We have, if f(x, y, z, t) = 0, then at constant t, (∂x/∂y)z (∂y/∂z)x (∂z/∂x)y = −1.


Using this result in the above equation we get,

P = (∂S/∂V)N,E . (∂E/∂S)N,V = −(∂E/∂V)N,S 1.22a
Similarly, from eqns.1.21c and 1.21a,

μ = −(∂S/∂N)V,E . (∂E/∂S)N,V = (∂E/∂N)S,V 1.22b

From eqn.1.21a, T = (∂E/∂S)N,V 1.22c


Equations 1.22a, 1.22b and 1.22c can also be derived from the equation,
dE = TdS − PdV + μdN 1.15
The other thermodynamic relations are,
Enthalpy, H = E + PV 1.23a
Helmholtz free energy, A = E − TS 1.23b
Gibbs free energy, G = H − TS = E + PV − TS 1.23c
= A + PV = μN
Or, G + TS = E + PV = H 1.23d
From eqn.1.15, the heat capacity at constant volume,

CV = (∂E/∂T)N,V = T(∂S/∂T)N,V 1.24a
And the heat capacity at constant pressure,

CP = (∂(E + PV)/∂T)N,P = T(∂S/∂T)N,P

= (∂H/∂T)N,P 1.24b
1.4 The classical ideal gas
As an illustration of the above theory we now derive the various thermodynamic
properties of a classical ideal gas composed of monatomic molecules. We consider the
monatomic molecular system since it affords an explicit, though asymptotic, evaluation of the
number Ω(N,V,E). Also, this example enables us to identify the Boltzmann constant k in
terms of other physical constants. Moreover, the behavior of this system serves as a useful
reference with which the behavior of other physical systems, especially real gases, can be compared,
since in the limit of high temperatures and low densities the ideal-gas behavior becomes typical of
most real systems.
Now we make a remark applicable to all classical systems composed of non-
interacting particles: the explicit dependence of the number Ω(N,V,E) on V, and hence the
equation of state of these systems, follows quite simply. If no spatial correlations exist among the
particles, i.e. if the probability of any one of them being found in a particular region of
the available space is completely independent of the locations of the other particles, then, since for
one particle the available space for occupation is V, the number of ways in which a single
particle can be accommodated in the volume V is directly proportional to V. Hence for N
particles the number is the product of the corresponding numbers for the individual particles.
Thus, (N,V,E)  V.V.V. …….. V (N times)

i.e (N,V,E) = C VN 1.25

Taking logarithm, ln = lnC + NlnV

From eqn.1.21 =
Thus, PV = NkT 1.26
If the system contains n moles of the gas, then N = nNA, where NA is the Avagadro number.
Then PV = nNAkT = nRT 1.27
where, R = kNA is the universal gas constant per mole. Equation 1.27 is the famous ideal-gas
law. Thus for any classical system composed of non-interacting particles the ideal gas law
holds.
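As a quick numerical check of this step (a sketch with illustrative values, not part of the text), one can differentiate lnΩ = lnC + N lnV with respect to V and verify that k(∂lnΩ/∂V) reproduces Nk/V, i.e. P/T:

from math import log

# Sketch (assumption: Omega = C * V**N with C independent of V, as in eqn 1.25).
# Check numerically that k * d(ln Omega)/dV = N k / V, i.e. P/T = Nk/V.
k = 1.380649e-23   # Boltzmann constant, J/K
N = 1.0e22         # number of particles (illustrative)
V = 1.0e-3         # volume in m^3 (illustrative)
dV = 1.0e-9

dlnOmega_dV = (N * log(V + dV) - N * log(V)) / dV   # ln C cancels in the difference
P_over_T = k * dlnOmega_dV

print(P_over_T, N * k / V)   # the two agree: PV = NkT
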
To derive other thermodynamic equations of the system we require a detailed
knowledge of the way Ω depends on the parameters N, V and E. Now we have to determine
the total number of independent ways of satisfying eqn.1.1. The number of degrees of freedom
associated with a non-interacting N particle system is 3N. Then eqn.1.1 becomes,

E = Σ(i = 1 to 3N) εi 1.28
where, εi are the energies associated with the various degrees of freedom of the N particles.
The dependence of Ω on N and E is quite obvious. Its dependence on V is studied as follows. For
a particle in a cubical box of side L, the energy eigenvalues are given by,

ε(nx, ny, nz) = (h²/8mL²)(nx² + ny² + nz²) 1.29
where nx, ny, nz = 1, 2, 3, …
The dependence of ε on the shape of the box has no separate physical significance; what matters is
its volume. Thus we can write,
L² = V^(2/3)

Then, nx² + ny² + nz² = (8mL²/h²)ε = (8mV^(2/3)/h²)ε = ε*, say. 1.30


And for the N-particle system the energy eigenvalues are given by,

Σ(r = 1 to 3N) nr² = (8mV^(2/3)/h²)E = E* 1.31
Eqn.1.31 reveals that the volume and energy of the system enter into the expression for Ω in
the form of the combination V^(2/3)E. Consequently,

S(N,V,E) = k lnΩ(N,V,E) = S(N, V^(2/3)E) 1.32


For a reversible adiabatic process S and N are constants. Hence,

V^(2/3) E = C, a constant 1.33

Or, E = C V^(−2/3)

Then, (∂E/∂V)N,S = −(2/3) C V^(−5/3) = −(2/3)(E/V)

But by eqn.1.22a, P = −(∂E/∂V)N,S = (2/3)(E/V) 1.34


That is, the pressure of a system of non-relativistic non-interacting particles is precisely equal
to two-thirds of its energy density.
From eqns.1.26 and 1.34, we have,

P = NkT/V = (2/3)(E/V)

Therefore, E = (3/2)NkT = (3/2)nRT 1.35

By eqn.1.33, V^(2/3) E = C, a constant.

Using eqn.1.35, V^(2/3) (3/2)NkT = C, a constant.

i.e. TV^(2/3) = a constant. 1.36a


This is the same as TV^(γ − 1) = a constant, since for a monatomic gas γ = 5/3. Using eqn.1.26 in 1.36a
we get,

(PV/Nk) V^(2/3) = a constant.

i.e. PV^(5/3) = PV^γ = a constant 1.36b
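The following Python sketch (with arbitrary illustrative numbers) follows a reversible adiabat using V^(2/3)E = constant together with E = (3/2)NkT and PV = NkT, and confirms that TV^(γ−1) and PV^γ stay constant:

# Sketch (monatomic ideal gas, gamma = 5/3): follow a reversible adiabat using
# V**(2/3) * E = const (eqn 1.33) together with E = (3/2) N k T and P V = N k T.
k = 1.380649e-23
N = 1.0e22
V1, T1 = 1.0e-3, 300.0                 # initial state (illustrative numbers)
E1, P1 = 1.5 * N * k * T1, N * k * T1 / V1

V2 = V1 / 2.0                          # adiabatic compression to half the volume
E2 = E1 * (V1 / V2) ** (2.0 / 3.0)     # from V^(2/3) E = const
T2, P2 = 2.0 * E2 / (3.0 * N * k), (2.0 / 3.0) * E2 / V2

print(T1 * V1 ** (2 / 3), T2 * V2 ** (2 / 3))   # equal: T V^(gamma - 1) = const
print(P1 * V1 ** (5 / 3), P2 * V2 ** (5 / 3))   # equal: P V^gamma = const
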


We shall now attempt to evaluate the number Ω. In this
evaluation we shall explicitly assume the particles to be
distinguishable, so that if a particle in the ith state gets interchanged
with the particle in the jth state, the resulting microstate is counted as
distinct. (That is, the ijth and jith microstates are distinct.)

[Figure: lattice points (nx, ny, nz) in the positive octant, with nr² = nx² + ny² + nz²]

Consequently, the number Ω(N,V,E), or better ΩN(E*), is equal
to the number of positive-integral lattice points lying on the surface of a 3N-
dimensional sphere of radius √E* (see eqns.1.30 and 1.31). Clearly this
number will be an extremely irregular function of E*, since for two given values of E*, which
may be very close to one another, the corresponding numbers will be very different. So we
consider another number ΣN(E*), which denotes the number of positive-integral lattice points
within and on the surface of a 3N-dimensional sphere of radius √E*, and which will be much less
irregular. In our physical problem ΣN(E*) corresponds to the number Σ(N,V,E) of microstates of
the given system consistent with all macrostates characterized by the specified values of the
parameters N and V but having energy less than or equal to E,

i.e. Σ(N, V, E) = Σ_{E′ ≤ E} Ω(N, V, E′) 1.37

Or, ΣN(E*) = Σ_{E*′ ≤ E*} ΩN(E*′) 1.38


Of course, the number Σ will also be somewhat irregular. However, we expect that its
asymptotic behavior, as E* → ∞, will be a lot smoother than that of Ω. We will see in the
sequel that the thermodynamics of the system follows equally well from the number Σ as from
Ω.
To appreciate the point made above, let us digress a little to examine the behavior of
the numbers Ω1(ε*) and Σ1(ε*), which correspond to the case of a single particle confined to a
given volume V. We can find Σ1(ε*) for ε* ≤ 10,000 from a table compiled by Gupta or by
a geometric method. Geometrically this number is approximately the same as the volume of an
octant (one of the eight parts obtained by cutting a sphere by three mutually perpendicular
planes) of a three-dimensional sphere of radius √ε*. That is, asymptotically, as
ε* → ∞, Σ1(ε*) is equal to the volume of the octant.
[As an example, let the radius √ε* = 4 units. Then the positive-integral lattice points are (1,1,1),
(1,1,2), (1,1,3), (1,1,4), (1,2,1), (1,2,2), etc. There are 32 such points. The volume of the octant

of a sphere of radius 4 units is (1/8)(4π/3)(4)³ = 33.49 ≈ 32.]

Thus, asymptotically, Σ1(ε*) = (1/8)(4π/3)(ε*)^(3/2) = (π/6)(ε*)^(3/2)



i.e. lim (ε* → ∞) [Σ1(ε*) / {(π/6)(ε*)^(3/2)}] = 1 1.39
A more detailed analysis shows that if the Dirichlet boundary condition, which excludes zero
quantum numbers (since sine functions, vanishing at the walls, are used), is applied, a correction has
to be subtracted from the volume of the octant, which overestimates the number of desired lattice points:

Σ1(ε*) ≈ (π/6)(ε*)^(3/2) − (3π/8)ε* 1.40
If, instead, the Neumann boundary condition, which includes zero quantum numbers (since cosine
functions, with ∂ψ/∂x = 0 at the walls, are used), is applied, then a correction has to be added for
the underestimation of the desired lattice points:

Σ1(ε*) ≈ (π/6)(ε*)^(3/2) + (3π/8)ε* 1.41

[Figure: histograms of the actual number of microstates Σ1(ε*) available to a particle in a cubical
enclosure, plotted for ε* roughly between 200 and 300 (vertical scale from about 1200 to 2800); the
lower histogram corresponds to the so-called Dirichlet boundary conditions, while the upper one
corresponds to the Neumann boundary conditions. The theoretical estimates 1.40, (π/6)ε*^(3/2) − (3π/8)ε*,
and 1.41, (π/6)ε*^(3/2) + (3π/8)ε*, are shown by dashed lines and the customary estimate 1.39,
(π/6)ε*^(3/2), by the solid line.]
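The comparison shown in the figure can be reproduced by direct counting; the Python sketch below (added for illustration, not part of the text) counts the lattice points for the Dirichlet and Neumann cases and compares them with the estimates 1.39-1.41:

from math import isqrt, pi

def sigma1(eps_star, include_zero=False):
    """Number of lattice points with nx^2 + ny^2 + nz^2 <= eps_star and
    n_i >= 1 (Dirichlet) or n_i >= 0, excluding the origin (Neumann)."""
    lo = 0 if include_zero else 1
    count = 0
    nmax = isqrt(eps_star)
    for nx in range(lo, nmax + 1):
        for ny in range(lo, nmax + 1):
            rem = eps_star - nx * nx - ny * ny
            if rem >= lo * lo:
                count += isqrt(rem) - lo + 1   # allowed values of nz
    if include_zero:
        count -= 1                             # drop the point (0, 0, 0)
    return count

eps_star = 250
dirichlet = sigma1(eps_star)                    # compare with (pi/6)e^(3/2) - (3pi/8)e
neumann = sigma1(eps_star, include_zero=True)   # compare with (pi/6)e^(3/2) + (3pi/8)e
smooth = (pi / 6) * eps_star ** 1.5
corr = (3 * pi / 8) * eps_star
print(dirichlet, smooth - corr)
print(neumann, smooth + corr)
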

In the case of the N-particle system, the number ΣN(E*) should be asymptotically equal to
the "volume" of the "positive compartment" of a 3N-dimensional sphere of radius √E*. For an n-
dimensional space the volume element is,

d^n r = dx1 dx2 dx3 · · · dxn


Accordingly the volume,

Vn(R) = ∫ d^n r = ∫ · · · ∫ dx1 dx2 · · · dxn, the integration extending over 0 ≤ Σ xi² ≤ R²,

where, r² = x1² + x2² + · · · + xn² = Σ(i = 1 to n) xi²

And, 0 ≤ r² ≤ R², that is, 0 ≤ Σ(i = 1 to n) xi² ≤ R².


Also, Vn(R) is proportional to R^n,
i.e. Vn(R) = Cn R^n,
where Cn is a constant that depends only on the dimensionality of the space. Then,
dVn(R) = n Cn R^(n − 1) dR = Sn(R) dR,
where Sn(R) is the area of the surface. [For example, in 3 dimensions the spherical volume V
= (4/3)πR³, and dV = 4πR² dR.] To evaluate Cn we make use of the formula,

∫(−∞ to ∞) e^(−x²) dx = √π.

i.e. Π(i = 1 to n) ∫(−∞ to ∞) e^(−xi²) dxi = π^(n/2)

i.e. ∫ · · · ∫ e^(−(x1² + x2² + · · · + xn²)) dx1 dx2 · · · dxn = π^(n/2)
Treating the volume as a combination of an infinitely large number of spherical shells we can
write the L H S as,

∫(0 to ∞) e^(−r²) Sn(r) dr = ∫(0 to ∞) e^(−r²) n Cn r^(n − 1) dr = π^(n/2)

Using the result (put r² = t), ∫(0 to ∞) e^(−r²) r^(n − 1) dr = (1/2) ∫(0 to ∞) e^(−t) t^(n/2 − 1) dt

= (1/2) Γ(n/2)

i.e. (n/2) Cn Γ(n/2) = π^(n/2)
Using (n/2) Γ(n/2) = Γ(n/2 + 1) and Γ(n + 1) = n!,

Cn Γ(n/2 + 1) = Cn (n/2)! = π^(n/2)

Therefore, Cn = π^(n/2)/(n/2)!

Then, Vn(R) = Cn R^n = π^(n/2) R^n/(n/2)!


For 3N dimensions it is,

V3N(R) = π^(3N/2) R^(3N)/(3N/2)!
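As a check on this formula (an illustration, not part of the text), the Python sketch below compares Vn(R) = π^(n/2)R^n/(n/2)! with a Monte Carlo estimate for a modest n:

import random
from math import pi, gamma

# Sketch: check V_n(R) = pi**(n/2) * R**n / (n/2)! by Monte Carlo, sampling
# points uniformly in the cube [-R, R]^n and counting those inside the sphere.
def vn_exact(n, R):
    return pi ** (n / 2) * R ** n / gamma(n / 2 + 1)   # (n/2)! = Gamma(n/2 + 1)

def vn_monte_carlo(n, R, samples=200_000, seed=1):
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        if sum(rng.uniform(-R, R) ** 2 for _ in range(n)) <= R * R:
            inside += 1
    return (2 * R) ** n * inside / samples              # fraction of the cube's volume

n, R = 6, 1.0
print(vn_exact(n, R), vn_monte_carlo(n, R))             # close for modest n
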
We have seen that for one particle (i.e. for 3 dimensions),

Σ1(ε*) ≈ (1/2³) V3(√ε*) = (π/6)(ε*)^(3/2)
So for the N-particle (i.e. 3N-dimensional) system, since each coordinate plane cuts the volume into two halves,

ΣN(E*) ≈ (1/2^(3N)) V3N(R)

Since the radius of the 3N-dimensional sphere is √E*,

ΣN(E*) ≈ (1/2^(3N)) π^(3N/2) (E*)^(3N/2)/(3N/2)! = (π/4)^(3N/2) (E*)^(3N/2)/(3N/2)!

Using eqn.1.31, we get, Σ(N,V,E) ≈ (π/4)^(3N/2) (8mV^(2/3)E/h²)^(3N/2) / (3N/2)!

≈ (V/h³)^N (2πmE)^(3N/2) / (3N/2)! 1.42

Taking the logarithm, ln Σ(N,V,E) ≈ N ln[(V/h³)(2πmE)^(3/2)] − ln[(3N/2)!]



Applying the Stirling formula, ln n! ≈ n ln n − n,

ln Σ(N,V,E) ≈ N ln[(V/h³)(2πmE)^(3/2)] − (3N/2) ln(3N/2) + (3N/2)

≈ N ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)N 1.43
For deriving the thermodynamic equations of the system we must fix the precise value of the
energy of the system. Practically this is not possible since an absolutely isolated system is
only an idealization. For real systems there is always some thermal contact between the system and the
surroundings. As a result of this, the energy E of the system cannot be defined sharply. Let Δ
be the effective width of the range over which the energy may vary; Δ is small in comparison

with the mean value of the energy. Now the limits of the energy are defined as (E − Δ/2) and

(E + Δ/2). Then the number of microstates Γ(N,V,E;Δ) in the energy range Δ is given by,

Γ(N,V,E;Δ) ≈ (∂Σ/∂E) Δ

Using eqn.1.42, ≈ (3N/2)(Δ/E) Σ(N,V,E)

≈ (3N/2)(Δ/E) (V/h³)^N (2πmE)^(3N/2) / (3N/2)! 1.44

Taking the logarithm, ln Γ(N,V,E;Δ) ≈ ln Σ(N,V,E) + ln(3N/2) + ln(Δ/E)



Using eqn.1.43, ≈ N ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)N + ln(3N/2) + ln(Δ/E) 1.45


For large values of N (i.e. N >> 1), we can neglect ln N and ln(Δ/E) in comparison with the
terms containing N. Hence for all practical purposes,
ln Γ(N,V,E;Δ) ≈ ln Σ(N,V,E)

≈ N ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)N 1.46
Then, S = k ln Γ

= Nk ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)Nk 1.47
Dividing by Nk and rearranging,

(S/Nk) − (3/2) = ln[(V/h³)(4πmE/3N)^(3/2)]

Taking the exponential, (V/h³)(4πmE/3N)^(3/2) = e^((S/Nk) − (3/2))

∴ E(N,V,S) = (3h²N/4πm) V^(−2/3) e^((2S/3Nk) − 1) 1.48

By eqn.1.22c, T = (∂E/∂S)N,V = (2/3Nk) E

∴ E = (3/2)NkT = (3/2)nNAkT = (3/2)nRT 1.49

where, n = N/NA is the number of moles of the gas.


The specific heat (heat capacity) at constant volume is given by,

CV = (∂E/∂T)N,V = (3/2)Nk = (3/2)nR 1.50

By eqn.1.22a, P = −(∂E/∂V)N,S = (2/3)(E/V) 1.51

Using eqn.1.49, P = (2/3V)(3/2)NkT = NkT/V

Or, PV = NkT = nNAkT = nRT 1.52


This is the equation of state, which agrees with the result given by eqn.1.27.
The specific heat (heat capacity) at constant pressure is given by,

CP = (∂H/∂T)N,P = (5/2)Nk = (5/2)nR 1.53

So the ratio of the heat capacities, γ = CP/CV = 5/3 1.54
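The chain of results 1.47-1.54 can be verified numerically; the Python sketch below (illustrative constants, with the helium atomic mass assumed only as an example) obtains T and P from the entropy 1.47 by finite differences and checks E = (3/2)NkT and PV = NkT:

from math import log, pi

# Sketch: verify eqs. 1.49 and 1.52 numerically from the entropy 1.47,
# S(N,V,E) = N k ln[(V/h^3)(4 pi m E / 3N)^(3/2)] + (3/2) N k,
# using finite differences for 1/T = (dS/dE)_{N,V} and P/T = (dS/dV)_{N,E}.
k, h = 1.380649e-23, 6.62607015e-34
m = 6.6335e-27                      # mass of a helium atom, kg (illustrative choice)
N, V, E = 1.0e22, 1.0e-3, 1.0       # E in joules (illustrative)

def S(N, V, E):
    return N * k * log((V / h ** 3) * (4 * pi * m * E / (3 * N)) ** 1.5) + 1.5 * N * k

dE, dV = 1e-6, 1e-9
T = dE / (S(N, V, E + dE) - S(N, V, E))          # T = (dE/dS)_{N,V}
P = T * (S(N, V + dV, E) - S(N, V, E)) / dV      # P = T (dS/dV)_{N,E}

print(E, 1.5 * N * k * T)     # E = (3/2) N k T
print(P * V, N * k * T)       # P V = N k T
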


Now suppose that the gas undergoes an isothermal change of state (i.e. T and N are
constants). Then according to eqn.1.49 the total energy of the gas would remain constant and
according to eqn.1.52 the pressure varies inversely with volume (Boyle’s law). The change in
entropy can be calculated using the eqn.1.47.

Si = Nk ln[(Vi/h³)(2πmkT)^(3/2)] + (3/2)Nk

Sf = Nk ln[(Vf/h³)(2πmkT)^(3/2)] + (3/2)Nk

Therefore, Sf − Si = Nk ln(Vf/Vi) 1.55
For a reversible adiabatic process S and N are constants. Then, according to equations
1.48 and 1.49, both E and T would vary as V^(−2/3), and according to eqn.1.51 the pressure would
vary as V^(−5/3). These results agree with the conventional thermodynamic relations,
PV^γ = a constant, and, TV^(γ − 1) = a constant, with γ = 5/3. 1.56
It may be noted that the change in energy E during an adiabatic process arises solely from the
external work done by the gas on the surroundings or vice-versa.
i.e. (dE)adiabatic = − PdV

Using eqn.1.51, (dE)adiabatic = −(2E/3V) dV 1.57



The considerations of this section have clearly demonstrated the manner in which the
thermodynamics of a macroscopic system can be derived from the multiplicity of its
microstates, as represented by the number Ω, Σ or Γ.
1.5 The entropy of mixing and Gibbs' paradox
Even in the idealized cases there remains an inadequacy that is related to the explicit
dependence of S on N. In this section we not only bring out this inadequacy but also provide
the necessary remedy for it. If the entropy of an ideal gas is to be an extensive property of the system
(as is logically desired), then on increasing the size of the system by a factor α, keeping the
intensive variables unchanged, the entropy of the system should also increase by the same factor α.
But the entropy given by the expression 1.47 does not do so, because of the presence of the ln V
term in that expression. If entropy is not an extensive quantity, then the entropy of a system is
different from the sum of the entropies of its parts. This will be clear from the discussion of
Gibbs' paradox given below.

Consider the mixing of two ideal gases 1 and 2,
both being initially at the same temperature T.
[Figure: gas 1 (N1, V1, T) and gas 2 (N2, V2, T) in adjacent chambers]
Clearly, the temperature of the mixture also would be the same. The
initial entropies are given by eqn.1.47,

S1 = N1k ln[(V1/h³)(4πm1E1/3N1)^(3/2)] + (3/2)N1k

Using eqn.1.49, = N1k ln[(V1/h³)(2πm1kT)^(3/2)] + (3/2)N1k

And, S2 = N2k ln[(V2/h³)(2πm2kT)^(3/2)] + (3/2)N2k
After mixing, the total entropy of the mixture is,

ST = N1k ln[(V/h³)(2πm1kT)^(3/2)] + N2k ln[(V/h³)(2πm2kT)^(3/2)] + (3/2)(N1 + N2)k
where, V = V1 + V2. The increase in entropy, called "the entropy of mixing", is given by,
ΔS = ST − (S1 + S2) = (N1 + N2) k ln(V1 + V2) − N1 k ln V1 − N2 k ln V2

= N1k ln((V1 + V2)/V1) + N2k ln((V1 + V2)/V2) 1.58
ΔS is indeed positive, as expected for an irreversible process of mixing. Now, in a special
case, if the initial and final particle densities of the two gases are the same, we can write,

n = N1/V1 = N2/V2 = (N1 + N2)/(V1 + V2)

Then by eqn.1.58, (ΔS)* = N1k ln((V1 + V2)/V1) + N2k ln((V1 + V2)/V2)

= k [N1 ln((N1 + N2)/N1) + N2 ln((N1 + N2)/N2)] 1.59
This is again positive.
So far it seems all right. However, a paradoxical situation arises if we consider the
mixing of two samples of the same gas. The entropies of the individual gases before mixing
are given by,

S1 = N1k ln[(V1/h³)(2πmkT)^(3/2)] + (3/2)N1k

S2 = N2k ln[(V2/h³)(2πmkT)^(3/2)] + (3/2)N2k
The entropy of the mixture is given by,

ST = (N1 + N2)k ln[((V1 + V2)/h³)(2πmkT)^(3/2)] + (3/2)(N1 + N2)k

Then again, ΔS = N1k ln((V1 + V2)/V1) + N2k ln((V1 + V2)/V2) 1.58a

(ΔS)* = k [N1 ln((N1 + N2)/N1) + N2 ln((N1 + N2)/N2)] 1.59a
According to equations 1.58a and 1.59a the change in entropy is positive and the
process of mixing is irreversible. This is not acceptable because the mixing of two samples of
the same gas, with a common initial temperature T and a common initial particle density, is
clearly a reversible process. This is because, by reinserting the partition wall we obtain a
situation which is in no way different from the one we had before mixing. Since this process is
reversible, we must have (ΔS)*1≡2 = 0. But for dissimilar gases under the same conditions, the process of
mixing is irreversible, since the reinsertion of the partition wall will not restore the initial state,
but gives only two samples of the mixture. This paradoxical situation is known as Gibbs'
paradox.

Thus we are led to believe that there is something basically wrong with the original
expression for entropy. To avoid the paradoxical situation we consider the equation,
ln(N1+N2)! − lnN1! − lnN2! = (N1+N2)ln(N1 + N2) − N1 − N2 − N1lnN1 + N1 − N2lnN2 + N2
= N1 ln(N1 + N2) + N2 ln(N1 + N2) − N1lnN1 − N2lnN2

= N1 ln((N1 + N2)/N1) + N2 ln((N1 + N2)/N2)
Thus eqn.1.59a becomes,
(ΔS)*1≡2 = k{ln(N1+N2)! − lnN1! − lnN2!} 1.60
This form of the equation shows that we would obtain the correct expression for the entropy by
subtracting an ad hoc term k ln(N!) from eqn.1.47. So we subtract k ln(N1!) from S1, k ln(N2!)
from S2 and k ln((N1 + N2)!) from ST in the case of similar gases. Thus, modifying eqn.1.47 for
the entropy, we get,

S = Nk ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)Nk − k ln(N!) 1.47a

= Nk ln[(V/N)(4πmE/3Nh²)^(3/2)] + (5/2)Nk 1.61

= Nk ln[(V/N)(2πmkT/h²)^(3/2)] + (5/2)Nk 1.62
This equation is generally referred to as the "Sackur-Tetrode equation". It
shows that the entropy now becomes a truly extensive quantity.
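As an illustration (added here, not part of the text), the Sackur-Tetrode equation can be evaluated for one mole of a monatomic gas; helium at room temperature and atmospheric pressure is assumed purely as an example:

from math import log, pi

# Sketch: evaluate the Sackur-Tetrode entropy 1.62,
# S = N k { ln[(V/N)(2 pi m k T / h^2)^(3/2)] + 5/2 }, per mole of a monatomic gas.
k, h, NA = 1.380649e-23, 6.62607015e-34, 6.02214076e23
m = 4.0026e-3 / NA            # mass of a helium atom, kg (illustrative choice)
T, P = 298.15, 101325.0       # room temperature and atmospheric pressure
V_per_N = k * T / P           # volume per particle from P V = N k T

S_per_mole = NA * k * (log(V_per_N * (2 * pi * m * k * T / h ** 2) ** 1.5) + 2.5)
print(S_per_mole)             # roughly 126 J/(mol K) for helium under these conditions
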
If we now mix two samples of the same gas, we can write,

S1 = N1k ln[(V1/N1)(2πmkT/h²)^(3/2)] + (5/2)N1k

S2 = N2k ln[(V2/N2)(2πmkT/h²)^(3/2)] + (5/2)N2k

ST = (N1 + N2)k ln[((V1 + V2)/(N1 + N2))(2πmkT/h²)^(3/2)] + (5/2)(N1 + N2)k
Then, (ΔS)1≡2 = ST − (S1 + S2)

= (N1 + N2)k ln((V1 + V2)/(N1 + N2)) − N1k ln(V1/N1) − N2k ln(V2/N2) 1.58b
If the initial particle densities of the samples are equal,

n = N1/V1 = N2/V2 = (N1 + N2)/(V1 + V2)

Then, (ΔS)*1≡2 = (N1 + N2)k ln(1/n) − N1k ln(1/n) − N2k ln(1/n)
= − (N1 + N2)k ln n + N1k ln n + N2k ln n = 0 1.59b
It is clear that the subtraction of the ad hoc term k ln(N!) is equivalent to an ad hoc reduction of
the statistical numbers Ω and Σ by a factor of N!. This is precisely the remedy proposed by
Gibbs to avoid the paradox in question.

In the case of dissimilar gases we get the earlier equations if we subtract k ln(N1!) from
S1, k ln(N2!) from S2 and k ln(N1! N2!) from ST. This can be shown as follows. The modified
equations for S1 and S2 have the same form as those for the samples of the same gas,

S1 = N1k ln[(V1/N1)(2πm1kT/h²)^(3/2)] + (5/2)N1k

S2 = N2k ln[(V2/N2)(2πm2kT/h²)^(3/2)] + (5/2)N2k

ST = N1k ln[((V1 + V2)/h³)(2πm1kT)^(3/2)] + (3/2)N1k − k ln(N1!)
+ N2k ln[((V1 + V2)/h³)(2πm2kT)^(3/2)] + (3/2)N2k − k ln(N2!)

ΔS = ST − (S1 + S2)


= N1k ln(V1 + V2) + N2k ln(V1 + V2) − N1k ln V1 + N1k ln N1 − N2k ln V2
+ N2k ln N2 − N1k − N2k − k ln(N1!) − k ln(N2!)
= N1k ln(V1 + V2) + N2k ln(V1 + V2) − N1k ln V1 + N1k ln N1 − N2k ln V2
+ N2k ln N2 − N1k − N2k − N1k ln N1 + N1k − N2k ln N2 + N2k

= N1k ln((V1 + V2)/V1) + N2k ln((V1 + V2)/V2) 1.58c

And if the initial particle densities are equal,

(ΔS)* = k [N1 ln((N1 + N2)/N1) + N2 ln((N1 + N2)/N2)]
These equations are exactly the same as the earlier equations 1.58 and 1.59. Thus Gibbs' paradox is
resolved by his recipe.
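The effect of Gibbs' recipe can also be seen numerically; in the Python sketch below (with arbitrary illustrative numbers) the uncorrected expression 1.59a gives a spurious positive entropy of mixing for two samples of the same gas, while the corrected expression 1.59b gives zero:

from math import log

# Sketch: contrast the uncorrected mixing entropy 1.59a with the corrected 1.59b
# for two samples of the *same* gas at a common temperature and particle density.
k = 1.380649e-23
N1, N2 = 2.0e22, 3.0e22
n = 2.5e25                    # common particle density N/V, m^-3 (illustrative)
V1, V2 = N1 / n, N2 / n

# Uncorrected entropies contain N k ln V, so mixing appears to create entropy:
dS_uncorrected = ((N1 + N2) * k * log(V1 + V2)
                  - N1 * k * log(V1) - N2 * k * log(V2))

# Gibbs' recipe replaces ln V by ln(V/N) (subtraction of k ln N!), and the effect vanishes:
dS_corrected = ((N1 + N2) * k * log((V1 + V2) / (N1 + N2))
                - N1 * k * log(V1 / N1) - N2 * k * log(V2 / N2))

print(dS_uncorrected)   # > 0: the spurious entropy of mixing of eqn 1.59a
print(dS_corrected)     # 0 (up to rounding): eqn 1.59b
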
Now let us see some immediate consequences of the recipe of Gibbs. First of all we
note that the expression for the energy E of the gas is also modified. We start with the equation for
S. By eqn.1.61,

S = Nk ln[(V/N)(4πmE/3Nh²)^(3/2)] + (5/2)Nk

Taking the exponential, (V/N)(4πmE/3Nh²)^(3/2) = e^((S/Nk) − (5/2))

∴ E = (3h²N^(5/3)/4πmV^(2/3)) e^((2S/3Nk) − (5/3)) 1.63
This equation makes the energy an extensive quantity as well.

By eqn.1.22c, T = (∂E/∂S)N,V = (2/3Nk) E

∴ E = (3/2)NkT = (3/2)nNAkT = (3/2)nRT 1.64

where, n = N/NA is the number of moles of the gas. Eqn.1.64 is the same as the earlier eqn.1.49.

By eqn.1.22a, P = −(∂E/∂V)N,S = (2/3)(E/V) 1.65
This equation is the same as eqn.1.51.
Also, (dE)adiabatic = − PdV

Using eqn.1.65, (dE)adiabatic = −(2E/3V) dV 1.66


The equations for CP, CV and γ are the same as those in the previous section.
Next we consider the chemical potential μ.

 = =

= 1.67

= =

Using eqns.1.34 and 1.35 = =


where, G = E + PV – TS is the Gibb’s free energy of the system.
Substituting for E and S (eqns.1.64 and 1.61) in eqn.1.67 we get,

 =

= =

= 1.68
Another quantity of importance is the Helmholtz free energy given by eqn.1.23b and1.23c
A = E – TS = G – PV = N – NkT

= 1.69
It is to be noted that while A is an extensive property of the system, μ is intensive.
The chemical potential is the quantity that determines the transport of matter from one
phase to another: a component flows from the phase in which its chemical potential is greater
to the phase in which it is lower. Or, simply, it is the energy involved in transferring a particle
from one system (phase) to another, and it is denoted by μ.
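As an illustration (with assumed, not prescribed, numbers), eqns.1.68 and 1.69 can be evaluated for a dilute monatomic gas; for a classical gas the quantity nλ³ is much less than 1, so μ comes out negative:

from math import log, pi

# Sketch: evaluate mu = k T ln[(N/V)(h^2 / 2 pi m k T)^(3/2)] (eqn 1.68)
# for a dilute monatomic gas; helium mass and density are illustrative choices.
k, h, NA = 1.380649e-23, 6.62607015e-34, 6.02214076e23
m = 4.0026e-3 / NA                  # helium atom mass, kg
T = 300.0
n_density = 2.5e25                  # N/V in m^-3, roughly atmospheric density

lam = h / (2 * pi * m * k * T) ** 0.5          # thermal de Broglie wavelength
mu = k * T * log(n_density * lam ** 3)         # negative for a dilute (classical) gas
A_per_particle = mu - k * T                    # A/N = mu - k T, from eqn 1.69

print(mu / (k * T), A_per_particle)            # mu/kT is large and negative here
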
1.6 Solved problems
1. Using the expression for entropy show that PV = NkT
By the first law of thermodynamics we have,
dE = TdS − PdV + μdN

At constant E and N, dS = (P/T) dV, i.e. (∂S/∂V)N,E = P/T


∴ P = T (∂S/∂V)N,E

We have the entropy, S = Nk ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)Nk

Thus, P = T (∂S/∂V)N,E = T (Nk/V)
∴ PV = NkT
2. To derive Stirling's approximation.

ln n! = ln 1 + ln 2 + ln 3 + · · · + ln n = Σ(x = 1 to n) ln x


This is approximately equal to the area enclosed
by the graph of ln x versus x.

[Figure: step plot of ln x at integer values of x, compared with the smooth curve ln x.]

The area under the smooth curve in the figure is equal to ∫(1 to n) ln x dx. Thus,

Σ(x = 1 to n) ln x ≈ ∫(1 to n) ln x dx
Integrating by parts,

∫(1 to n) ln x dx = [x ln x − x] evaluated from 1 to n
= n ln n − n + 1 ≈ n ln n − n
This is Stirling’s approximation.

3. Using the expression for entropy show that E = (3/2)NkT and CV = (3/2)Nk.


By the first law of thermodynamics we have,
dE = TdS − PdV + μdN

We have the entropy, S = Nk ln[(V/h³)(4πmE/3N)^(3/2)] + (3/2)Nk

At constant N and V, (∂S/∂E)N,V = (3Nk)/(2E) = 1/T

∴ E = (3/2)NkT

CV = (∂E/∂T)N,V = (3/2)Nk
4. Using the principle of increase of entropy show that the particles move from a region of
higher chemical potential to a region of lower potential.
Consider two regions 1 and 2 with chemical potentials μ1 and μ2. Let S1 and S2 be the initial
entropies and N1 and N2 be the initial numbers of particles in the regions 1 and 2. Since
entropy is extensive,

S = S1 + S2

dS/dt = dS1/dt + dS2/dt
According to the principle of increase of entropy,

dS/dt > 0

i.e. dS1/dt + dS2/dt > 0

Since the volumes are fixed, dSi = (1/Ti) dEi − (μi/Ti) dNi. Hence,

(1/T1)(dE1/dt) − (μ1/T1)(dN1/dt) + (1/T2)(dE2/dt) − (μ2/T2)(dN2/dt) > 0 1
But, N1 + N2 = N = constant

Thus, dN2/dt = − dN1/dt 2
By the first law of thermodynamics we have,
dE = TdS − PdV + μdN
Since the composite system is isolated, the total energy E = E1 + E2 remains constant.

Thus, dE2/dt = − dE1/dt 3
Using eqns.2 and 3 in eqn.1,

(1/T1 − 1/T2)(dE1/dt) − (μ1/T1 − μ2/T2)(dN1/dt) > 0
At equilibrium T1 = T2 = T.


Then, ((μ2 − μ1)/T)(dN1/dt) > 0
This is true only if (μ2 − μ1) is positive when dN1/dt is positive, or (μ2 − μ1) is negative when
dN1/dt is negative. That is, the particles move from the region of higher chemical potential to the
region of lower potential.
1.7 Model Questions
Short answer type questions
1. Explain thermodynamic limit.
2. Distinguish between microstate and macrostate. [Knr. Uty. April 2008 old scheme]
3. State and explain the postulate of equal a priori probabilities.
4. Write down the various thermodynamic potentials and the first law of thermodynamics.
Also find the relationships for T, P and μ.
5. Define ,  and . Assuming the explicit dependence of S on , find out thermodynamic

relationships , and .
6. What is chemical potential? Explain.
7. What is a classical ideal gas? How does the thermodynamic probability Ω depend on the
volume of the system? Show that for any classical system composed of non-interacting
particles the ideal gas law holds.
8. Show that the pressure of a system of non-relativistic non-interacting particles is precisely
equal to two-thirds of its energy density.
9. State and prove the Stirling formula.
10. What is Gibbs' paradox? How can it be resolved? [Knr. Uty March-2010, May 2006, April
2008]
11. What is Gibbs' paradox? Show that the entropy of the given gas depends on the history of
the gas. [Knr. Uty. April 2007]
12. State and explain Gibbs' paradox. [Knr. Uty. April 2008 old scheme, May 2009]
13. Discuss the concept of thermodynamic probability and thermal equilibrium. [Knr. Uty.
April 2008]
14. Deduce the Sackur-Tetrode equation for the translational entropy of an ideal gas in
equilibrium at a temperature T. [Knr. Uty. April 2008]
15. Establish the relation between the entropy and thermodynamic probability. [Knr. Uty. May
2009]
16. Discuss the changes in entropy of the systems when two ideal gases of the same kind and
different kinds are mixed. [Knr. Uty. May 2009]
17. Obtain the Boltzmann relation between entropy and thermodynamic probability. [Knr.
Uty. May 2009]
18. Derive the condition for thermal equilibrium of two systems in thermal contact. [Knr. Uty.
May 2009; Show that β1 = β2 and hence T1 = T2]

Essay type questions


1. For a classical ideal gas, derive the expression for entropy. Obtain the entropy of mixing of
two different ideal gases. What happens when the gases are of the same kind? [Knr. Uty.
May 2001]
2. Discuss the contact between statistics and thermodynamics.

3. Explain how Gibbs' paradox is resolved. Hence derive the correct expression for the
entropy of a classical ideal gas. Also derive the thermodynamic relations for E, T, P, μ and A.

Problems

1. For classical systems show that Ω(N,V,E) = C V^N, where C is independent of V.


2. Show that the pressure of a system of non-relativistic non-interacting particles is precisely
equal to two-thirds of its energy density.
3. Derive the explicit relation for entropy of a classical ideal gas.
4. Assuming the expression for Σ(N,V,E), derive the thermodynamic relations for the entropy,
pressure and heat capacities of a classical system.
5. For a classical ideal gas show that

μ = kT ln[(N/V)(h²/2πmkT)^(3/2)] and A = NkT [ln{(N/V)(h²/2πmkT)^(3/2)} − 1]
6. Find out the chemical potential of a very dilute gas containing N structureless particles
occupying a volume V at temperature T. [Knr. Uty. May 2009]
7. Show that the translational entropy of an ideal gas in equilibrium at a temperature T is

given by, S = Nk ln[(V/N)(2πmkT/h²)^(3/2)] + (5/2)Nk [Knr. Uty. March 2010]
