1.1 Temperature
Let us start by slowly introducing the temperature, which is the quantity that is equal
between two bodies in thermal equilibrium. What is thermal equilibrium? It is the
situation reached after two or more bodies are brought into thermal contact,
where they can exchange heat, and enough time has passed to let the combined system
relax. There are several important caveats about this definition. First, I will rely on you
knowing that heat is the energy of the internal motion of atoms and molecules inside a
body. Second, one has to assume that thermal equilibrium is eventually reached. Third,
we will later see that more than two bodies can be in thermal equilibrium, a fact familiar
from our everyday experience, but not obvious a priori.
When two objects are approaching thermal equilibrium, the one which gives away
energy is said to be at a higher temperature, while the one that accepts energy is at a
lower temperature. At this point, we cannot define the temperature any better than that, but
eventually we will be able to calculate the temperature for a given microscopic model
of the object. To make some progress until then, we will rely on this loose definition of
temperature.
course the volume. For N particles, we have N similar contributions. Averaging also over
the particles, the time-and-particle averaged pressure is ⟨P⟩ = Nm⟨v_x²⟩/V. We can of
course drop the averaging in ⟨P⟩ – the measured pressure is the average result of multiple
particles colliding with the piston over time.
Figure 1.4. A greatly simplified model of an ideal gas, with just one molecule bouncing around elastically between a piston of area A and the opposite wall; the cylinder has length L and volume V = LA. Copyright ©2000, Addison-Wesley.
Assuming the system is isotropic (all directions are equivalent), ⟨v_x²⟩ = ⟨v_y²⟩ = ⟨v_z²⟩.
Also, ⟨v_x²⟩ + ⟨v_y²⟩ + ⟨v_z²⟩ = ⟨v_x² + v_y² + v_z²⟩ = ⟨v²⟩, so that ⟨v_x²⟩ = ⟨v²⟩/3. Therefore,
PV = Nm⟨v²⟩/3 = (2/3)N⟨K⟩. Going back to our ideal gas law, PV = NkT, we see that
⟨K⟩ = (3/2)kT. This relation explains the special importance of the ideal gas in measuring
the temperature – in this case, the temperature is directly proportional to the mean kinetic
energy of the particles.
At this point, we may want to estimate the scale of this energy. Let us measure the
energy in eV, 1 eV = 1.6 × 10⁻¹⁹ J. kT at room temperature (300 K) is then 1.38 × 10⁻²³ ×
300/1.6 × 10⁻¹⁹ eV ≈ 25 meV. We see that this energy is too small to knock electrons out
of atoms (good!), but could be high enough to cause excitations (vibrations and rotations) in
multi-atomic molecules.
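This estimate is easy to reproduce numerically; here is a minimal Python sketch (the constants are the standard SI values, rounded as in the text):

```python
# Thermal energy kT at room temperature, expressed in meV.
k = 1.38e-23      # Boltzmann constant, J/K
eV = 1.6e-19      # 1 eV in J
T = 300.0         # room temperature, K

kT_meV = 1000 * k * T / eV
print(f"kT at {T:.0f} K = {kT_meV:.1f} meV")   # ≈ 25.9 meV
```

Atomic ionization energies are several eV, i.e. two orders of magnitude larger, which is why room-temperature collisions leave atoms intact.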
(work) or 2) external parameters do not change, so that the energy levels of the body do
not change (thermal interactions). Heat is the “mean energy transfer between 2 systems as
a result of a purely thermal interaction.” Total U is conserved, but the forms can change.
Heat: spontaneous flow of energy between bodies (Figure 1.7). Work: any other transfer of
energy, e.g. pushing a piston, stirring, applying fields. The first law of thermodynamics:
∆U = Q + W – energy conservation. Writing ∆Q and ∆W is discouraged, because these
are not potential energies, and the result depends on the details of the trajectory, not only
on the initial and the final points.
Mechanical work is the most intuitive, so we will mostly deal with it. However,
many other types of work can be considered, e.g. magnetic work, and the formalism carries
over. Compression work: W = F∆x (∆x is positive when directed inward: the
world does work on the system). Or, W = PA∆x = −P∆V (∆V is positive when the volume
grows). Sign: the work done by the external entity is positive when the gas is forced to
contract, ∆V < 0.
Important: for the pressure to be well-defined, the volume should be changed slowly
enough. Such a process is referred to as quasistatic. The word (‘static’) should not
confuse you – other processes impose even stricter demands on slowness. Here,
the process should be slower than the time over which the pressure equilibrates in the container, which is
set by the speed of sound – quite fast.
Now suppose δV/V is not small, so that P will be changing even if the process is
quasistatic. Then W = −∫P dV. Example: square cycle, engine.
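As a sketch (with NkT set to 1 for convenience), we can integrate the isotherm P(V) = NkT/V numerically and compare with the closed form −NkT ln(Vf/Vi):

```python
import math

# Numerical check of W = -∫P dV for a quasistatic isothermal expansion
# of an ideal gas, P(V) = NkT/V, in units where NkT = 1.
NkT = 1.0
Vi, Vf = 1.0, 2.0
n = 100000
dV = (Vf - Vi) / n

W = 0.0
for i in range(n):
    V0 = Vi + i * dV
    W -= 0.5 * (NkT / V0 + NkT / (V0 + dV)) * dV   # trapezoid on [V0, V0+dV]

W_exact = -NkT * math.log(Vf / Vi)
print(W, W_exact)   # both ≈ -0.693: the gas does work on the world
```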
Figure 1.12. The P–V curve for adiabatic compression (called an adiabat) begins on a lower-temperature isotherm (Ti) and ends on a higher-temperature isotherm (Tf). Copyright ©2000, Addison-Wesley.
addition to the change of the internal energy (the first term), the provided heat has to also
cover the work that the expanding gas performs on the outside world (the second
term).
Note that this is a general formula, in which the first term (∂U/∂T)_P is not the same
as C_V = (∂U/∂T)_V. However, for the ideal gas the two are equal, because U depends
only on T, and not on V or P. The second term, P(∂V/∂T)_P = P ∂(NkT/P)/∂T|_P = Nk.
Therefore, C_P = C_V + Nk (for the ideal gas only!). Incidentally, C_P/C_V = γ.
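For the monatomic ideal gas, U = (3/2)NkT and H = U + PV = (5/2)NkT, so the relation can be checked by finite differences (a sketch with Nk = 1):

```python
# Finite-difference check of C_P = C_V + Nk for a monatomic ideal gas,
# in units where Nk = 1: U = (3/2)T, H = U + PV = (5/2)T.
Nk = 1.0

def U(T):                  # internal energy at fixed N
    return 1.5 * Nk * T

def H(T):                  # enthalpy at fixed N and P (PV = NkT)
    return U(T) + Nk * T

T, dT = 300.0, 1e-3
C_V = (U(T + dT) - U(T - dT)) / (2 * dT)
C_P = (H(T + dT) - H(T - dT)) / (2 * dT)
print(C_P - C_V)   # ≈ 1, i.e. Nk
print(C_P / C_V)   # γ ≈ 5/3
```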
To ease the process of keeping track of the work done by the gas at constant pressure as
its internal energy changes, one introduces the enthalpy, H = U + PV. Meaning: the
total energy one needs to create the system from scratch and put it into the environment.
At constant pressure, ∆H = ∆U + P∆V. Now, recall that ∆U = Q − P∆V, so
that ∆H = Q (at constant P, and if there are no other types of work). Hence if the
temperature is changing, C_P = (∂H/∂T)_P. Of course, enthalpy can change not only with T.
The most relevant examples are phase transitions and chemical reactions, which happen at
constant pressure.
Figure 1.15. To create a rabbit out of nothing and place it on the table, the
magician must summon up not only the energy U of the rabbit, but also some
additional energy, equal to PV, to push the atmosphere out of the way to make
room. The total energy required is the enthalpy, H = U + PV. Drawing by
Karen Thurber. Copyright ©2000, Addison-Wesley.
2.1 Paramagnet
Consider a naive model of a paramagnet: an assembly of N non-interacting spins 1/2.
A spin 1/2 can have two possible projections, up and down. The system has 2^N different
microstates. But I may not necessarily care which specific spins are up and which are
down, and may want to characterize the system only by the total magnetization of the
system, ∝ M ≡ (N↑ − N↓). There are N + 1 values M can take: N, N − 2, ..., −N – one
says that there are N + 1 macrostates. 2^N and N + 1 are very different numbers indeed.
This means that there should be very many different microstates that correspond to some
of the macrostates. The number of microstates that correspond to a macrostate is called
the multiplicity, Ω. What is the multiplicity of a state with a given N↑? We can choose
the first spin up out of N spins in N ways, the next one in N − 1 ways, etc., resulting in
N(N − 1)(N − 2)···(N − N↑ + 1) = N!/(N − N↑)!. This procedure overcounts the number
of ways to pick our microstate, because we counted separately the different orders in
which the same set of spins can be selected; we need to divide by the number of
permutations of N↑ spins, resulting in Ω(N↑) = N!/[N↑!(N − N↑)!] ≡ C(N, N↑). (Incidentally, the
expression is equal to N!/(N↑!N↓!), so it is symmetric with respect to spins up and spins down.)
C(N, N↑) is of course the binomial coefficient, and you can repeat the argument above to see
that it applies to the coefficients in the expansion of (a + b)^N in powers of a and b.
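These multiplicities are easy to tabulate with Python's `math.comb` (a small sketch for an illustrative N = 20):

```python
import math

# Multiplicities of the paramagnet macrostates, Ω(N↑) = N!/(N↑!(N−N↑)!),
# for an illustrative system of N = 20 spins.
N = 20
omegas = [math.comb(N, n_up) for n_up in range(N + 1)]

print(omegas[:4])            # [1, 20, 190, 1140]: rapid growth
print(max(omegas))           # most probable macrostate (N↑ = N/2): 184756
print(sum(omegas) == 2**N)   # True: all microstates accounted for
```

Note the symmetry Ω(N↑) = Ω(N − N↑), reflecting the up/down symmetry mentioned above.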
Figure 2.3. Two Einstein solids (NA, qA and NB, qB) that can exchange energy with each other, isolated from the rest of the universe. Copyright ©2000, Addison-Wesley.
It may be natural to assume that all microstates of the system are equally probable. The
probability to find the system in one of the macrostates is then equal to its multiplicity
over the total number of microstates: P(N↑) = Ω(N↑)/Ω_total. One can easily check that
indeed, Σ_{N↑} C(N, N↑)/2^N = 1 by using the binomial expansion for (1 + 1)^N.
Figure 2.6. Typical multiplicity graphs for two interacting Einstein solids, containing a few hundred oscillators and energy units (left) and a few thousand (right). As the size of the system increases, the peak becomes very narrow relative to the full horizontal scale. For N ~ q ~ 10²⁰, the peak is much too sharp to draw. Copyright ©2000, Addison-Wesley.
tween the degrees of freedom in a macroscopic body is so huge that the “most probable”
state is the only one we can realistically observe.
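A minimal sketch of this sharpening, using the exact Einstein-solid multiplicity Ω(N, q) = C(q + N − 1, q) (the sizes below are illustrative):

```python
import math

# Sharpness of the combined-multiplicity peak for two equal Einstein solids.
# Ω(N, q) = C(q + N - 1, q); the relative width of the peak shrinks as the
# system grows, roughly as 1/sqrt(N).
def omega(N, q):
    return math.comb(q + N - 1, q)

def relative_width(N, q):
    """Fraction of macrostates qA within half-maximum of the peak."""
    total = [omega(N, qA) * omega(N, q - qA) for qA in range(q + 1)]
    peak = max(total)
    wide = sum(1 for t in total if t > peak / 2)
    return wide / (q + 1)

print(relative_width(30, 30))    # modest system: a broad peak
print(relative_width(300, 300))  # 10x larger: noticeably narrower
```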
equilibrium, when TA = TB , q/N should be equal to kT /h̄ω on each side.
But we can go further than that and evaluate the effect of small deviations from
thermal equilibrium. Let δq be the deviation of qA,B from the most probable value:
qA = qNA/N + δq and qB = qNB/N − δq. Then S = NA ln[e(qNA/N + δq)/NA] +
NB ln[e(qNB/N − δq)/NB]. Expanding in δq/q, we have
S ≈ N ln(eq/N) + NA δqN/(qNA) − NB δqN/(qNB) − NA[δqN/(qNA)]²/2 − NB[δqN/(qNB)]²/2
≈ N ln(eq/N) − N²(1/NA + 1/NB)(δq/q)²/2.
For two solids of the same size, S ≈ N ln(eq/N) − 2N(δq/q)². (My N = 2N_book.)
Therefore, the probability to find the system in a state different from the equilibrium is
∝ Ω(qA, NA)Ω(qB, NB) ∝ exp[−2N(δq/q)²] = exp[−2N(δU/U)²]. Notice the squared
δU/U and the linear N – these features are generic. This bell-shaped function falls very fast,
with the typical width δU ∼ U/√N. These variations of energy (or other quantities)
around the most probable value are called fluctuations.
Second law: a closed macroscopic system spontaneously tends to the macrostates with
maximal multiplicity.
2.4 Entropy
Notice that we found it more convenient to work with ln Ω rather than the multiplicity
itself. The reason is that ln Ω(q, N) ≈ N ln(eq/N) = N ln(const·U/N) is an extensive
quantity: if we double the system size (N → 2N, U → 2U), it doubles. (Other examples
of extensive quantities are V and M. On the other hand, intensive quantities, such
as P and T, do not scale with the system size.) Multiplicity, on the other hand, is not
an extensive quantity – the multiplicity factorizes when two sub-systems are combined
together. (This is the reason why ln Ω is additive.)
We now have an extensive quantity which describes some important property of the
system. We give it a special name: S ≡ k ln Ω is called the entropy.
Based on the example of the Einstein solids, we have seen that the total entropy of
the system grows when two sub-systems are put in thermal contact. This is simply the
result of the system assuming the macrostate with the maximal multiplicity, purely based
on probability. This probabilistic statement, grounded in experimental evidence, is
known as the second law of thermodynamics.
Entropy is a household name now, but it had no microscopic explanation before the
works of Boltzmann and Gibbs. It was introduced in a purely thermodynamic context
by Clausius in the 1850s–1860s, based on the ideas of Carnot. One can construct the theory of
thermodynamics without reference to statistics, by postulating the existence of an extensive
thermodynamic function (a function of N, V and U) which is maximal in equilibrium (in
analogy with the potential energy of a mechanical system, which is minimal in equilibrium);
see Callen’s book.
to the combined system. Applying the same approximation to the two subsystems,
Ω(qA, NA)Ω(qB, NB) ≈ [(N/2)/(2π(q/2)²)] (eq/N)^N exp[−2N(δq/q)²].
Here, we previously had Ω(qA, NA)Ω(qB, NB) ∝ (eq/N)^N exp[−2N(δq/q)²]; the prefactor is the new result. Since ∫₋∞^∞ exp[−2N(δq/q)²] dδq = √(πq²/2N),
∫₋∞^∞ Ω(qA, NA)Ω(qB, NB) dδq ≈ [2N/(2πq²)] √(πq²/2N) (eq/N)^N = √(N/(2πq²)) (eq/N)^N ≈ Ω(q, N).
We conclude that the number of microstates under the Gaussian peak closely approximates the total
number of microstates; most of the states are under the Gaussian, and we are not losing any
significant number of microstates in the wings of the distribution.
2.6 Recap
Ω(q, N) ≈ √(N/(2πq²)) (eq/N)^N for q > N – extremely fast growth with q. Ω(qA, NA)Ω(qB, NB)
is a product of a very steeply growing and a very steeply decaying function, which has a
maximum at qA = qB = q/2 for NA = NB = N/2. Around the maximum,
Ω(qA, NA)Ω(qB, NB) ≈ [N/(πq²)] [e²(q/2 + δq)(q/2 − δq)/(N/2)²]^{N/2}
= [N/(πq²)] [e²(q² − 4δq²)/N²]^{N/2}
= [N/(πq²)] e^N exp[(N/2) ln((q² − 4δq²)/N²)]
= [N/(πq²)] e^N exp[N ln(q/N) + (N/2) ln(1 − 4δq²/q²)]
≈ [N/(πq²)] (eq/N)^N exp(−2Nδq²/q²).
2.7 Temperature
Let us restate our claim about the system A+B reaching the state of maximal multiplicity
in terms of the growth of entropy. As a function of UA, SA grows, while SB(UB) =
SB(U − UA) falls. At some point, S = SA + SB reaches the maximum, where
∂SA/∂UA + ∂SB/∂UA = 0. Since ∂SB/∂UA = −∂SB/∂UB, in equilibrium
∂SA/∂UA = ∂SB/∂UB. Notice that when UA is below the equilibrium value, system A
tends to receive heat from system B; we would conventionally say that TA < TB. At the
same time, ∂SA/∂UA > ∂SB/∂UB. Therefore, it is natural to associate TA with ∂UA/∂SA.
(Check the dimensions.) A very important remark: in ∂SA/∂UA = ∂SB/∂UB, the LHS has
only quantities describing A, and the RHS has only quantities describing B, so that a
temperature T can be introduced that is determined by the properties of the corresponding
subsystem alone. This structure allows for TA = TC if TA = TB and TB = TC.
Example: for the Einstein solid, S = Nk ln(eU/ħωN), so T = [∂(Nk ln(eU/ħωN))/∂U]⁻¹ = U/Nk, or
U = NkT – in accord with the equipartition theorem. We will soon show that for the ideal
gas, S = Nk ln[(V/N)(U/N)^{3/2}], so that U = (3/2)NkT. The coefficient is simply determined by
the power of U in the log.
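A quick numerical sketch of the Einstein-solid case (units with k = ħω = 1, illustrative N and U): differentiating S(U) indeed returns T = U/Nk.

```python
import math

# Check that T = (∂S/∂U)^(-1) = U/Nk for the Einstein solid in the
# high-temperature limit, S = Nk ln(eU/ħωN). Units with k = ħω = 1.
N = 100.0

def S(U):
    return N * math.log(math.e * U / N)

U, dU = 500.0, 1e-4
dS_dU = (S(U + dU) - S(U - dU)) / (2 * dU)   # central difference
T = 1.0 / dS_dU
print(T, U / N)   # both ≈ 5.0, i.e. U = NkT
```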
be non-integer, we will allow some tolerance δR; at the end we will see that it is unimportant.
Since the points in the grid have a density of 1, this number can be represented
as the volume of a hypershell between radii R and R + δR.
The surface area of a hypersphere in an m-dimensional space, Sm(R), can be calculated with
a trick. Consider any spherically symmetric f(R). We can integrate it over the whole
space in two equivalent ways: ∫···∫ f ∏ᵢ dxᵢ = ∫₀^∞ f Sm(R) dR. Let us choose to integrate
f(R) = ∏ᵢ exp(−xᵢ²) = exp(−R²). Since ∫ exp(−x²)dx = π^{1/2}, on the LHS we
have ∫···∫ ∏ᵢ exp(−xᵢ²) dxᵢ = π^{m/2}. On the RHS we have ∫₀^∞ Sm(R) exp(−R²) dR. Since
Sm(R) = Sm(1)R^{m−1}, the RHS becomes
Sm(1) ∫₀^∞ R^{m−1} exp(−R²) dR = (1/2)Sm(1) ∫₀^∞ (R²)^{m/2−1} exp(−R²) dR² = (1/2)Sm(1)Γ(m/2).
Thus, Sm(R) = 2π^{m/2}R^{m−1}/Γ(m/2).
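The result is easy to verify against the familiar low-dimensional cases (a sketch using `math.gamma`):

```python
import math

# Check S_m(R) = 2 π^(m/2) R^(m-1) / Γ(m/2) against the familiar cases:
# the circumference 2πR for m = 2 and the sphere area 4πR² for m = 3.
def S_m(m, R):
    return 2 * math.pi**(m / 2) * R**(m - 1) / math.gamma(m / 2)

R = 2.0
print(S_m(2, R), 2 * math.pi * R)      # both ≈ 12.566
print(S_m(3, R), 4 * math.pi * R**2)   # both ≈ 50.265
```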
Using this expression, Ω = [2π^{3N/2}/Γ(3N/2)] R^{3N−1} δR / 2^{3N}. The expression is divided by 2^{3N} because
the nᵢ can only be positive. (Negative nᵢ describe the same standing wave.) Finally, using
Stirling’s approximation, ln Ω ≈ (3N/2) ln[πeR²/(4 · 3N/2)] + ln(2δR/R). The last term is clearly
negligible. Finally, ln Ω ≈ N ln[(πemU/3N)^{3/2} V/(πħ)³].
3 Thermodynamics
3.1 Reversible and irreversible processes
Second law: a closed macroscopic system spontaneously tends to the macrostates with
maximal multiplicity. Or equivalently, since Ω ∝ e^{S/k}, its total entropy always grows.
Notice that this is not a fundamental law of nature like Newton’s laws, or Maxwell’s
equations, but a strong statement based on probability arguments.
If a process increases the total entropy, according to the second law it cannot be
reversed, and hence it is called irreversible. Similarly, a reversible process is one that
does not change the global entropy. Every reversible process must be quasistatic. One
can understand this statement by imagining the particles occupying some QM levels: under a
slow enough compression or expansion the particles will stay on their levels, so that the
multiplicity and entropy will not change. The opposite is not true – a quasistatic process
could be irreversible.
From the definition of temperature, T = ∂U/∂S, it follows that ∆S = ∆U/T . We
have seen earlier that ∆U = Q + W . If the volume does not change, W = 0, ∆U = Q
and ∆S = Q/T , which is the original definition of entropy through heat, or Q = T ∆S,
which is the way one may define heat now, once entropy is introduced microscopically.
[These expressions also work for processes in which the volume changes quasistatically,
as we will see later.]
An example of an irreversible process is that of heat transfer between two objects
with different T: ∆S = (∂SA/∂UA)∆U − (∂SB/∂UB)∆U = ∆U(1/TA − 1/TB) > 0 when ∆U is
transferred from hot B to cold A. The only way to avoid this is to make the heat transfer
isothermal, with TA = TB, so that ∆S = 0. (One would have to set an infinitesimally
small temperature difference just to ensure the direction of the heat and entropy flow.
Upon the reversal of this infinitesimally small difference, the heat and entropy will flow
in reverse.) Another reversible process is the quasistatic adiabatic compression/expansion
– the isentropic process with ∆S = 0 – that we will see soon.
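A numerical illustration with made-up reservoir temperatures: a small amount of heat flowing from the hot body to the cold one always produces entropy.

```python
# Entropy change ΔS = ΔU (1/T_A − 1/T_B) when heat ΔU passes from the
# hot body B to the cold body A (illustrative numbers).
T_A, T_B = 280.0, 320.0   # K: cold and hot
dU = 1.0                  # J transferred from B to A

dS = dU * (1.0 / T_A - 1.0 / T_B)
print(dS)   # > 0: the process is irreversible
```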
3.2 Pressure
Imagine two systems at constant total U and separated by a movable partition (constant
total V). In equilibrium, the total multiplicity and total entropy must be maximal viewed
as a function of two variables, U and V. Maximizing S with respect to U gives us TA = TB.
Variation of S with V:
0 = ∂S_total/∂VA = (∂SA/∂VA)_{UA} + (∂SB/∂VA)_{UB} = (∂SA/∂VA)_{UA} − (∂SB/∂VB)_{UB},
since ∂VA = −∂VB if VA + VB = V_total. So, (∂SA/∂VA)_{UA} = (∂SB/∂VB)_{UB}. In equilibrium, the
pressures of the two systems should be equal, so it is tempting to relate (∂S/∂V)_U with P. In
fact, it is equal to P/T. One can see this from dimensionality: [S/V] = J/(K·m³), while
[P/T] = (N/m²)/K – the same thing. Alternatively, in the ideal gas, where U ∝ T, constant U
means constant T, and P/T = (∂S/∂V)_U = ∂(Nk ln V)/∂V|_U = Nk/V.
Finally, there is the following elegant argument (taken from Kubo’s book). Consider
a cylinder of gas of height x and area A compressed by a piston with a force F. The
multiplicity Ω(U, V) = Ω(U₀ − Fx, V₀ + Ax) should be maximal in equilibrium. Let
us look for the maximum of S = k ln Ω with respect to x: ∂S/∂x = (∂S/∂U)_V ∂U/∂x + (∂S/∂V)_U ∂V/∂x =
−F(∂S/∂U)_V + A(∂S/∂V)_U = 0. The first term is −F/T. Hence (∂S/∂V)_U = F/(TA) = P/T.
3.3 Macroscopic view of entropy and thermodynamic identity
Let us now consider a general system and change its U and V infinitesimally in two
consecutive steps. (It is important to change the volume quasistatically, so that P is well
defined.) Then, dS = dS₁ + dS₂ = (∂S/∂U)_V dU + (∂S/∂V)_U dV = dU/T + P dV/T, or
dU = T dS − P dV. This expression, known as the thermodynamic identity, replaces
∆U = Q + W for quasistatic processes. In this case, P is well-defined, W = −P dV and
therefore Q = T dS. We have already seen this result in the case when dV = 0 and the
gas did no work. The generalization is that now the volume is allowed to change,
but it has to do so quasistatically.
Is Q always equal to T∆S? No – remember that P is defined only for quasistatic
processes! Indeed, in the example of an ideal gas, if you pull the piston out suddenly,
Q = 0 (no heat added through the walls), W = 0 (the gas does not push on the piston),
and ∆U = 0 (the energy of the gas does not depend on volume for non-interacting
particles). However, ∆S ≠ 0 – new entropy is produced, since this is a free expansion.
Incidentally, we can check what happens if we try to compress a freely expanded gas
back into one half of the container. We can do that with a piston, while taking the excess
heat to an external reservoir. This needs to be done isothermally, in order not to produce
any more entropy. At the end, the gas will return to the initial state at temperature T
in one half of the container, so the work done by the piston, W = NkT ln(Vi/Vf) = NkT ln 2,
will have to be transferred to the reservoir as heat Q, with the associated entropy S = Q/T =
Nk ln 2 – the same entropy produced in the free expansion. The isothermal compression did
not add to the entropy and hence was reversible.
Finally, a special case of an adiabatic and quasistatic process is called isentropic.
Indeed: quasistatic gives Q = T∆S and adiabatic gives Q = 0, so together ∆S = 0 and
the entropy is constant. For an ideal gas, S = const is equivalent to the argument of the log
being constant: (U/N)^{3/2}V/N = const. Since U ∝ T, this gives T^{3/2}V = const. Since
T ∝ PV, this gives PV^{5/3} = const, as we have seen before.
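A sketch checking that T^{3/2}V = const and P = NkT/V together imply PV^{5/3} = const (units with Nk = 1; the constant C below is arbitrary):

```python
# Follow an adiabat of a monatomic ideal gas, parametrized by V,
# with T^(3/2) V = C and P = NkT/V (units where Nk = 1).
Nk = 1.0
C = 100.0   # the (arbitrary) constant in T^(3/2) V = C

def adiabat_invariant(V):
    T = (C / V)**(2.0 / 3.0)     # from T^(3/2) V = C
    P = Nk * T / V
    return P * V**(5.0 / 3.0)    # should be the same for every V

print(adiabat_invariant(1.0), adiabat_invariant(2.0), adiabat_invariant(5.0))
# all three values are equal: P V^(5/3) is conserved along the adiabat
```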
3.5 The Chemical Potential
We can complete our set of mental experiments involving two systems in equilibrium by
allowing them to exchange particles. Repeating verbatim the arguments for the exchange
of volume, we get (∂SA/∂NA)|_{UA,VA} = (∂SB/∂NB)|_{UB,VB}. Notice that earlier we tacitly assumed
N = const in all the partial derivatives. Now it becomes a variable.
The quantity which is equal in equilibrium with respect to the particle exchange is the
chemical potential, defined as µ = −T(∂S/∂N)_{U,V}. With this definition, the thermodynamic
identity can be augmented as: dU = T dS − P dV + µ dN.
The chemical potential is crucially important for the establishment of equilibrium between
different phases, for chemical processes, and for quantum gases. Unfortunately,
we cannot get a good feel for this quantity at this point. Indeed, for the ideal gas, µ =
−T ∂[Nk ln[const₁ (V/N)(U/N)^{3/2}]]/∂N|_{U,V} = −kT ln[const₂ (V/N)(U/N)^{3/2}] = −kT ln[(V/N)(2πmkT/h²)^{3/2}].
At room temperature, the log is large, resulting in µ < 0 and |µ| ≫ kT.
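A rough numerical sketch for a nitrogen-like gas at room conditions (the molecular mass and pressure below are illustrative round values):

```python
import math

# Estimate µ = -kT ln[(V/N)(2πmkT/h²)^(3/2)] for an ideal gas of
# N2-like molecules at 300 K and atmospheric pressure.
k = 1.38e-23         # J/K
h = 6.626e-34        # J·s
m = 28 * 1.66e-27    # kg, mass of an N2 molecule (illustrative)
T = 300.0            # K
P = 1.013e5          # Pa

V_over_N = k * T / P                           # from PV = NkT
nQ = (2 * math.pi * m * k * T / h**2) ** 1.5   # "quantum concentration"
mu = -k * T * math.log(V_over_N * nQ)

print(mu / 1.6e-19)   # µ in eV: negative, with |µ| ≫ kT ≈ 0.026 eV
```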
4 Engines
In this section we are going to discuss engines and refrigerators. This is the traditional
origin of thermal physics. In order to make some progress, we are often going to assume
that all processes are quasistatic.
Figure 4.3. P–V diagram for an engine cycle; the Th isotherm and the Tc isotherm are shown.
when dumping heat in the cold reservoir, it will have to contract. The best one can do in
between is to produce no additional entropy, which means that the gas should expand and
compress isentropically. At the end, we obtain the so-called Carnot cycle, made of two
isotherms and two adiabats. It looks rather odd if one looks at it in the P–V plane, but
in the T–S plane it makes more sense. Indeed, W = ∮P dV = ∮T dS (as ∮dU = 0),
so the work is the area of the loop, while the waste heat is the area below the loop.
e = (QH − QC)/QH = W/(W + QC) = 1/(1 + QC/W), so we have to minimize QC
and maximize W, which is achieved by a square loop.
Figure 4.4. Energy-flow diagram for a refrigerator or air conditioner. For a kitchen refrigerator, the space inside it is the cold reservoir and the space outside it is the hot reservoir. An electrically powered compressor supplies the work. Copyright ©2000, Addison-Wesley.
reservoir, QH/TH ≥ QC/TC, so that COP ≤ 1/(TH/TC − 1). The maximal performance
could be achieved by running the Carnot cycle in reverse. The maximal performance
can also be argued from the second law – if one could build a refrigerator with
COP exceeding the theoretical limit, one could connect it to a Carnot-cycle
engine to make a compound engine which produces no waste heat.
The COP can also be written as COP = TC/(TH − TC). We see that the performance falls
as TC becomes lower.
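The two Carnot bounds, for a pair of illustrative reservoir temperatures:

```python
# Carnot limits quoted above: engine efficiency e = 1 − Tc/Th and
# refrigerator COP = Tc/(Th − Tc) = 1/(Th/Tc − 1). Sample reservoirs:
Th, Tc = 400.0, 300.0   # K

e_carnot = 1.0 - Tc / Th
cop_carnot = Tc / (Th - Tc)

print(e_carnot)                          # 0.25
print(cop_carnot, 1.0 / (Th / Tc - 1))   # both ≈ 3.0: the two forms agree
```

Lowering Tc at fixed Th raises the engine efficiency but lowers the refrigerator COP, as the text notes.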
5 Phase transitions and thermodynamic potentials
5.1 Thermodynamic potentials
Enthalpy – the energy required to create a system with internal energy U and push against
the outside pressure P to make it occupy volume V: H = U + PV. Introduce two more
thermodynamic potentials: the Helmholtz free energy, F = U − TS, and the Gibbs free energy,
G = U − TS + PV. Physical meaning: the energy required to create a system with internal energy
U (or enthalpy H) while being able to borrow heat from an environment with fixed
temperature T. Per each ∆S in entropy of the system, we can transfer the amount of heat
T∆S from the environment, so less energy is required, hence −TS.
5.3 Partial derivatives and Maxwell relations
We have seen T = (∂U/∂S)_{V,N}, P = −(∂U/∂V)_{S,N}, and µ = (∂U/∂N)_{S,V}. We can now also add
S = −(∂F/∂T)_{V,N}, P = −(∂F/∂V)_{T,N}, µ = (∂F/∂N)_{T,V}, and similar relations for the other potentials.
These relations underscore the usefulness of F and G – their partial derivatives are with
respect to T, or taken at constant T (and not S).
We can also use the fact that the mixed derivatives do not depend on the order. Hence
(∂S/∂V)_{T,N} = −∂²F/∂V∂T = −∂²F/∂T∂V = (∂P/∂T)_{V,N}.
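For the ideal gas this Maxwell relation can be verified by finite differences (a sketch with Nk = 1; the additive constant in S drops out of the derivative):

```python
import math

# Check the Maxwell relation (∂S/∂V)_T = (∂P/∂T)_V for a monatomic ideal
# gas: S = Nk ln(V T^(3/2)) + const, P = NkT/V (units with Nk = 1).
def S(T, V):
    return math.log(V * T**1.5)

def P(T, V):
    return T / V

T, V, d = 300.0, 2.0, 1e-6
dS_dV = (S(T, V + d) - S(T, V - d)) / (2 * d)
dP_dT = (P(T + d, V) - P(T - d, V)) / (2 * d)
print(dS_dV, dP_dT)   # both ≈ 1/V = 0.5
```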
5.5 Equilibrium
Consider a system in thermal contact with a very large reservoir, whose temperature is
constant at T. The total entropy of the system + environment, Stot = S + SR, should be
maximal. Since dS = dU/T + P dV/T − µ dN/T, at constant V and N we have dSR = dUR/TR,
so dStot = dS + dUR/TR. In equilibrium, TR = T, and dU + dUR = 0, so dStot =
dS − dU/T = −d(U − TS)/T = −dF/T (at constant T). Since Stot has to be maximized, F (of the system,
not of system + environment!) has to be minimized. Similarly, G is minimal for a system in thermal
and pressure equilibrium with the environment. It is particularly important in experiments
(e.g. chemistry) where P and T are controlled. Finally, H is minimal at constant S and
P.
Furthermore, one can show that the max-S requirement at constant U and V is equivalent
to min U at constant S and V. (Imagine S = aU − b(X − X₀)². The maximum
of S at U = const is at X = X₀. Equivalently, we have to look for the minimum of
U = S/a + (b/a)(X − X₀)² at constant S. Importantly, a = 1/kT > 0.)
∂U/∂S, resolve for S(T) and substitute to express U(T). The problem is that knowing
this function U(T) does not allow one to uniquely resolve it for U(S). Indeed, U(T)
should be viewed as a first-order differential equation U = U(∂U/∂S), which has as its
solution S = S(U) + const. Example: in the ideal gas, S = Nk ln[const (V/N)(U/N)^{3/2}], or
U = const·N(N/V)^{2/3} exp(2S/3Nk). T(S) = ∂U/∂S = 2U/3Nk, or U = 3NkT/2. Clearly,
lots of information about the system is lost – V does not enter any more! (The equation
is U = 3Nk(∂U/∂S)/2, so S = Nk ln[const·U^{3/2}]; again, the V dependence is lost.)
The resolution of this problem is to rewrite the fundamental relation in terms of the
thermodynamic potential that is suitable for working with T, namely F(T). More generally, if
we have a fundamental relation y(x), and we want to replace x with p = dy/dx (the slope
of a tangent to the original curve), we have to indicate the position q where this tangent
line intercepts the vertical axis at x = 0: y = q + x dy/dx = q + px, or q = y − x dy/dx.
Going back, knowing q(p) allows one to reconstruct y(x) without ambiguity. Indeed,
dq = dy − p dx − x dp = −x dp (the first terms cancel since p = dy/dx), or x = −dq/dp,
and y = q + px = q − p dq/dp. Notice that the direct and inverse transformations are the
same (up to the sign).
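The round trip can be sketched numerically for a toy function y(x) = x² (everything here is illustrative; for this y, the closed form of the transform is q(p) = −p²/4):

```python
# Legendre transform round trip for y(x) = x²:
# forward: p = dy/dx, q = y − xp; backward: x = −dq/dp, y = q − p dq/dp.
def y(x):
    return x * x

d = 1e-6
x0 = 1.7
p0 = (y(x0 + d) - y(x0 - d)) / (2 * d)   # p = dy/dx ≈ 2 x0
q0 = y(x0) - x0 * p0                     # q = y − xp

def q(p):            # closed form for y = x²: x = p/2, so q = −p²/4
    return -p * p / 4.0

dq_dp = (q(p0 + d) - q(p0 - d)) / (2 * d)
x_back = -dq_dp                          # x = −dq/dp
y_back = q(p0) - p0 * dq_dp              # y = q − p dq/dp

print(x_back, x0)      # both ≈ 1.7
print(y_back, y(x0))   # both ≈ 2.89: y(x) is recovered without ambiguity
```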
Example: ideal gas. Starting with U = const·N(N/V)^{2/3} exp(2S/3Nk),
T(S) = ∂U/∂S = [2/(3Nk)]·const·N(N/V)^{2/3} exp(2S/3Nk), which can be resolved for
S = Nk ln[const (V/N)(3kT/2)^{3/2}],
F = U − ST = 3NkT/2 − NkT ln[const (V/N)(3kT/2)^{3/2}] (no need to simplify).
This is a fundamental relation which contains all the information about the system
and allows one to reconstruct the original U(S). Indeed, S = −∂F/∂T = 3Nk/2 −
3Nk/2 + Nk ln[const (V/N)(3kT/2)^{3/2}]. U = F + TS = 3NkT/2 (the log terms cancel),
and resolving S(T) for T = [2/(3k)]·const·(N/V)^{2/3} exp(2S/3Nk) we get the original fundamental
relation U = const·N(N/V)^{2/3} exp(2S/3Nk).
changes at that point. Also, S = −(∂G/∂T)_{P,N}. Sgr > Sdia, so graphite becomes more
stable at higher T. One then has to apply more P to convert graphite to diamond
at high T. So there is a non-trivial phase boundary between graphite and diamond in the
P–T plane. The shape of this boundary is determined by Ggr = Gdia.
The slope of the phase separation line in the P–T plane can be determined from the
condition dG₁ = dG₂, or from the thermodynamic identity: −S₁ dT + V₁ dP = −S₂ dT + V₂ dP
(at constant N), or dP/dT = (S₁ − S₂)/(V₁ − V₂). The entropy difference is related to the latent heat of
transformation: L = T∆S, so that dP/dT = L/(T∆V). This is the Clausius–Clapeyron relation.
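A back-of-the-envelope sketch for the ice–water boundary near 0 °C, using textbook round numbers for L and ∆V (so the slope below is only an estimate):

```python
# Clausius–Clapeyron slope dP/dT = L/(T ΔV) for melting ice, per mole.
# Round numbers: L ≈ 6.0 kJ/mol; ΔV = V_water − V_ice ≈ −1.6 cm³/mol
# (negative, because water is denser than ice).
L = 6.0e3      # J/mol
T = 273.0      # K
dV = -1.6e-6   # m³/mol

dP_dT = L / (T * dV)   # Pa/K
print(dP_dT / 1e5)     # in bar/K: roughly −140, steep and negative
```

The large negative slope is why a modest extra pressure melts ice at fixed T, the effect invoked in the ice-skating remark below.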
Figure 5.11. Phase diagram for H₂O (not to scale). The table gives the vapor pressure and molar latent heat for the solid-gas transformation (first three entries) and the liquid-gas transformation (remaining entries). Data from Keenan et al. (1978) and Lide (1994). Copyright ©2000, Addison-Wesley.

Figure 5.12. Phase diagram for carbon dioxide (not to scale). The table gives the vapor pressure along the solid-gas and liquid-gas equilibrium curves. Data from Lide (1994) and Reynolds (1979). Copyright ©2000, Addison-Wesley.
Figure 5.13. Phase diagrams of ⁴He (left) and ³He (right). Neither diagram is to scale, but qualitative relations between the diagrams are shown correctly. Not shown are the three different solid phases (crystal structures) of each isotope, or the superfluid phases of ³He below 3 mK. Copyright ©2000, Addison-Wesley.

Figure 5.14. Left: Phase diagram for a typical type-I superconductor. For lead, Tc = 7.2 K and Bc = 0.08 T. Right: Phase diagram for a ferromagnet, assuming that the applied field and magnetization are always along a given axis. Copyright ©2000, Addison-Wesley.
The high-temperature phase must be the more disordered one, so that the −TS term in
G stabilizes it. Usually, it is also less dense, and the slope of the phase boundary
is positive. However, water is denser than ice (ice floats), so dP/dT is negative. This fact
enables ice skating, as pressure induces a transition from ice to water at constant T (Callen’s
book).
Figure 5.15. Molar Gibbs free energies of diamond and graphite as functions of pressure, at room temperature. These straight-line graphs are extrapolated from low pressures, neglecting the changes in volume as pressure increases. Copyright ©2000, Addison-Wesley.

Figure 5.17. The experimental phase diagram of carbon. The stability region of the gas phase is not visible on this scale; the graphite-liquid-gas triple point is at the bottom of the graphite-liquid phase boundary, at 110 bars pressure. From David A. Young, Phase Diagrams of the Elements (University of California Press, Berkeley, 1991). Copyright ©2000, Addison-Wesley.
Figure 5.20. Isotherms (lines of constant temperature) for a van der Waals fluid. From bottom to top, the lines are for 0.8, 0.9, 1.0, 1.1, and 1.2 times Tc, the temperature at the critical point. The axes are labeled in units of the pressure and volume at the critical point; in these units the minimum volume (Nb) is 1/3. Copyright ©2000, Addison-Wesley.
imagine two identical containers with a substance described by F(T, V, N) that are separated
by a movable partition. Imagine a fluctuation of the partition which leads to ±∆V. Then,
the change of the free energy is F(T, V + ∆V, N) + F(T, V − ∆V, N) − 2F(T, V, N) ≈
(∂²F/∂V²)_{T,N} ∆V² = −(∂P/∂V)_{T,N} ∆V². For stability, F should be minimal, which means
(∂P/∂V)_{T,N} < 0. If some parts of the isotherm do not satisfy this condition, they
are unstable. A spontaneous fluctuation will lead to the density increasing on one side and
decreasing on the other, leading to phase separation.
We see that the vdW gas isotherms have regions that are absolutely unstable, leading
to phase separation. Let us analyze the stability and the phase diagram of the vdW gas by
plotting G(P) at constant T and N. From the thermodynamic identity, dG = V dP, and
we can graphically integrate the isotherm by flipping the graph to read V vs. P. There
are several features here. The minimum of G follows the 1-2-6-7 line. Points 2 and 6 correspond to
the coexistence of gas and liquid. Region 3-4-5 is absolutely unstable, while 2-3 and 5-6
are supersaturated vapor and superheated liquid.
[Figure: Gibbs free energy of the van der Waals fluid as a function of pressure along the isotherm (points 1-7), together with the same isotherm plotted sideways as V vs. P; the two regions enclosed between the isotherm and the line through points 2 and 6 have equal areas (Maxwell construction). Copyright © 2000, Addison-Wesley.]
[Figure 5.23. Complete phase diagrams predicted by the van der Waals model. The isotherms shown at left are for T/Tc ranging from 0.75 to 1.1 in increments of 0.05. In the shaded region the stable state is a combination of gas and liquid. The full vapor pressure curve is shown at right. All axes are labeled in units of the critical values. Copyright © 2000, Addison-Wesley.]
6 Canonical distribution
6.1 Boltzmann Statistics
We have initially assumed that all states of a system with fixed macroscopic observables (extensive parameters), including energy, are equally probable. This is called the microcanonical ensemble. Consider now a system in thermal equilibrium with a reservoir. We will assume that the total of the system and the reservoir is microcanonically distributed. What is the distribution of the system? Let us consider state (i) with energy Ei. (This is possibly one of many QM states of that energy.) The reservoir then has energy E0 − Ei, with the number of microstates ΩR(E0 − Ei). The probability of encountering the state (i) is proportional to the number of the microstates of the total (system + reservoir) in which the system is in state (i). There are ΩR(E0 − Ei) such states, so Pi = ΩR(E0 − Ei)/Ωtot ∝ exp{SR(E0 − Ei)/k}. For a large reservoir, Ei ≪ E0, so that we can expand Pi ∝ exp{SR(E0)/k − (∂SR/∂E)Ei/k} ∝ exp{−Ei/kT}. This is the Boltzmann, or canonical, distribution.
Rather than tracing the prefactors, we will normalize Pi in a different way. Since Σi Pi = 1, we can write Pi = exp(−Ei/kT)/Z, where Z = Σi exp(−Ei/kT). This quantity is known as the “partition function”.
6.2 Connection to Thermal Physics
P P
U = hEi i = Ei Pi = Ei exp(−Ei /kT )/Z. Let us introduce β = 1/kT . Then
P i i
Ei e(−βEi )
∂
U= iP
e(−βEi ) = − ∂β ln(Z).
i
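A quick numerical sketch (hypothetical three-level spectrum, k = 1) confirming that the Pi are normalized and that −∂(ln Z)/∂β reproduces Σi Ei Pi:

```python
import numpy as np

E = np.array([0.0, 1.0, 2.5])   # hypothetical level energies; k = 1

def lnZ(beta):
    return np.log(np.sum(np.exp(-beta * E)))

beta = 0.7
P = np.exp(-beta * E - lnZ(beta))      # Boltzmann probabilities P_i = e^(-beta E_i)/Z
U_direct = np.sum(E * P)               # U = sum of E_i P_i
h = 1e-6
U_from_lnZ = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)   # U = -d(ln Z)/d(beta)
```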
F = −kT ln Z = ħω/2 + kT ln(1 − e^(−βħω)). Let us use this result to calculate the entropy: S = −∂F/∂T = −k ln(1 − e^(−ħω/kT)) + kT (ħω/kT²) e^(−ħω/kT)/(1 − e^(−ħω/kT)) = k[(ħω/kT) e^(−ħω/kT)/(1 − e^(−ħω/kT)) − ln(1 − e^(−ħω/kT))]. From here, one can again get U = F + TS = ħω(1/2 + 1/(e^(ħω/kT) − 1)).

Finally, the heat capacity: CV = ∂(F + TS)/∂T = T ∂S/∂T = −β ∂S/∂β = k(βħω)² e^(βħω)/(e^(βħω) − 1)² = k(ħω/2kT)²/sinh²(ħω/2kT).
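The two forms of CV can be checked against each other, and the classical limit CV → k recovered at high temperature (a sketch with k = 1 and an arbitrary ħω):

```python
import numpy as np

hw = 1.0   # ħω; temperatures measured in the same energy units, k = 1

def cv_planck(T):
    x = hw / T
    return x**2 * np.exp(x) / np.expm1(x)**2   # k(βħω)² e^x/(e^x − 1)²

def cv_sinh(T):
    x = hw / (2 * T)
    return x**2 / np.sinh(x)**2                # k(ħω/2kT)²/sinh²(ħω/2kT)

T = np.array([0.1, 0.5, 1.0, 5.0, 50.0])
forms_agree = bool(np.allclose(cv_planck(T), cv_sinh(T)))
classical_limit = float(cv_planck(np.array([200.0]))[0])   # approaches k = 1
```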
6.4 Fluctuations
Why do the two ensembles give the same results? Let us consider the fluctuations of energy. (Refer back to the figure with the narrow distribution of Ω's vs. energy for two macroscopic systems in thermal equilibrium.)

Technically: U = ⟨Ei⟩ = Σi Ei exp(−βEi) / Σi exp(−βEi); consider

∂U/∂β = −Σi Ei² exp(−βEi) / Σi exp(−βEi) + [Σi Ei exp(−βEi)]² / [Σi exp(−βEi)]² = −⟨Ei²⟩ + ⟨Ei⟩² = −⟨δEi²⟩.

Indeed, ⟨δEi²⟩ ≡ ⟨(Ei − ⟨Ei⟩)²⟩ = ⟨Ei²⟩ − 2⟨Ei⟩² + ⟨Ei⟩² = ⟨Ei²⟩ − ⟨Ei⟩².

Next, ⟨δEi²⟩ = −∂U/∂β = −(∂U/∂T)/(∂β/∂T) = kT²CV. Therefore, the relative magnitude of the fluctuations is √⟨δEi²⟩/U = √(kT²CV)/U ≈ √(kT²·Nk)/(NkT) = 1/√N.
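A numerical sketch of the identity ⟨δEi²⟩ = −∂U/∂β for a hypothetical discrete spectrum (k = 1):

```python
import numpy as np

E = np.linspace(0.0, 3.0, 7)   # hypothetical level energies; k = 1

def ensemble(beta):
    p = np.exp(-beta * E)
    p /= p.sum()
    U = np.sum(E * p)
    varE = np.sum(E**2 * p) - U**2   # ⟨δE²⟩ = ⟨E²⟩ − ⟨E⟩²
    return U, varE

beta = 1.3
U, varE = ensemble(beta)
h = 1e-6
dU_dbeta = (ensemble(beta + h)[0] - ensemble(beta - h)[0]) / (2 * h)
```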
Notice that, since exp(−Σ_{i=1}^{3N} βpi²/2m) = Π_{i=1}^{3N} exp(−βpi²/2m), the integral factorizes into a product of 3N individual integrals ∫_{−∞}^{+∞} exp(−βpi²/2m) dpi = √(2πmkT). We can also group the terms corresponding to px, py, pz of one particle to write
6.8 Thermodynamics
F = −kT ln(ZN) ≈ −NkT ln[(eV/Nh³)(2πmkT)^(3/2)].
P = −(∂F/∂V)|T,N = NkT/V, as it should be.
S = −(∂F/∂T)|V,N = Nk ln[(eV/Nh³)(2πmkT)^(3/2)] + 3Nk/2.
U = F + TS = 3NkT/2 (the logarithms cancel).
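These three results can be verified by finite differences (a sketch in natural units k = h = m = 1):

```python
import numpy as np

k = h = m = 1.0   # natural units, just for the check

def F(T, V, N):
    """Ideal-gas free energy F = -NkT ln[eV/(N λ³)], λ = h/sqrt(2πmkT)."""
    lam3 = (h**2 / (2 * np.pi * m * k * T)) ** 1.5
    return -N * k * T * np.log(np.e * V / (N * lam3))

T, V, N = 2.0, 10.0, 5.0
d = 1e-6
P = -(F(T, V + d, N) - F(T, V - d, N)) / (2 * d)   # P = -(dF/dV) at fixed T, N
S = -(F(T + d, V, N) - F(T - d, V, N)) / (2 * d)   # S = -(dF/dT) at fixed V, N
U = F(T, V, N) + T * S                             # should equal 3NkT/2
```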
We do not have to normalize the integrals (that would be easy: just divide them by h^(3N)) as long as the prefactor is the same in the numerator and the denominator, which is the partition function. Let us assume now that the Hamiltonian of the system contains a sum of terms quadratic in coordinates and momenta, H = Σ_{i=1}^{3N} (Ai ri² + Bi pi²). The energy is then

U = ∫∫∫ Σ_{i=1}^{3N} (Ai ri² + Bi pi²) exp(−βH) Π_{i=1}^{3N} dri dpi / ∫∫∫ exp(−βH) Π_{i=1}^{3N} dri dpi.
For each term in the sum, the integral factorizes into multiple integrals identical in the numerator and the denominator, which all cancel except for one involving a specific ri or pi; it looks like ∫ Ar² exp(−βAr²) dr / ∫ exp(−βAr²) dr = kT/2. (Indeed, the dimension of Ar², according to the exponents, should be 1/β; the factor 1/2 comes from integrating by parts: ∫ x² e^(−x²) dx = ∫ e^(−x²) dx / 2.) Overall, U = NfkT/2, where f is the number of the non-zero A's and B's per particle.
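The kT/2 average is easy to confirm numerically for one quadratic degree of freedom (arbitrary A and β, k = 1):

```python
import numpy as np

A, beta = 3.0, 2.0                       # arbitrary stiffness and inverse temperature
r = np.linspace(-10.0, 10.0, 400001)     # range is many Gaussian widths wide
w = np.exp(-beta * A * r**2)             # Boltzmann weight for one quadratic term
avg = np.sum(A * r**2 * w) / np.sum(w)   # ⟨A r²⟩; on a uniform grid dx cancels
```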
7 Quantum Statistics
7.1 Level occupation
Similar to the number of microstates Ω, the partition function ZN counts the number of ways to position the particles in their levels. The Gibbs factor 1/N! was introduced when all particles occupied different levels. If the particles can occupy the same level, it no longer works. For example, 2 particles that can occupy the same level could be distributed between 3 levels in 6 different ways; 3²/2! = 4.5 gives a wrong answer, since all double-occupancy cases were artificially divided by a factor of 2. Multiple occupancy is not a problem if the number of levels ∼ Z1 greatly exceeds N, but becomes an issue when it is comparable with N: V/λT³ ∼ N, or density n ∼ λT^(−3). This happens at low T or high n.
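The counting in this example can be reproduced by brute force:

```python
from itertools import product

n_levels, n_particles = 3, 2
assignments = list(product(range(n_levels), repeat=n_particles))
distinguishable = len(assignments)                                 # 3² = 9
# For identical particles only the multiset of occupied levels matters:
indistinguishable = len({tuple(sorted(a)) for a in assignments})   # 6 states
naive = distinguishable / 2                                        # 3²/2! = 4.5, wrong
```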
We now switch to characterizing the state of the system by specifying the occupation ni of each level i, rather than specifying which level a particle occupies. Let us consider the level as a subsystem in equilibrium with the other levels. Then Zμ^(i) = Σ_{all allowed ni} exp(−βni(εi − μ)) and Ω^(i) = −kT ln Zμ^(i). For the complete system, Zμ = Π_i Zμ^(i), and Ω = Σ_i Ω^(i).

The probability of ni particles occupying the level is P(ni) = exp(−βni(εi − μ))/Zμ^(i). The average number of particles per level is ⟨ni⟩ = Σ_{ni} ni exp(−βni(εi − μ))/Zμ^(i). In a classical gas, P(0) ≈ 1, P(1) ≈ exp(−β(εi − μ)) ≪ 1, P(2) ≪ P(1), and Zμ^(i) ≈ 1, so that ⟨ni⟩ ≈ exp(−β(εi − μ)) – the Boltzmann distribution. The requirement ⟨ni⟩ ≪ 1 is the most stringent for εi = 0, and requires exp(βμ) ≪ 1. Recalling that in the classical gas μ = −kT ln[(V/N)(2πmkT/h²)^(3/2)] = kT ln(nλT³), we get exp(βμ) = nλT³. We again see that the classical statistics works if nλT³ ≪ 1.
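A sketch comparing the three occupation functions deep in the classical regime (arbitrary βμ = −5, so exp(βμ) ≈ 7·10⁻³ ≪ 1); the Fermi and Bose occupations bracket the Boltzmann one and agree with it to order exp(βμ):

```python
import numpy as np

beta, mu = 1.0, -5.0
eps = np.linspace(0.0, 5.0, 11)
x = beta * (eps - mu)
n_boltzmann = np.exp(-x)
n_fermi = 1 / (np.exp(x) + 1)   # Fermi-Dirac
n_bose = 1 / (np.exp(x) - 1)    # Bose-Einstein
close = bool(np.allclose(n_fermi, n_boltzmann, rtol=1e-2)
             and np.allclose(n_bose, n_boltzmann, rtol=1e-2))
ordering = bool(np.all(n_fermi < n_boltzmann) and np.all(n_boltzmann < n_bose))
```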
7.3 Quantum corrections to μ in an ideal gas

N = Σi ni. Here, the states i can be characterized by the possible quantized values of the momentum for 1 particle: Σi = Σ_{px,py,pz} = ∫∫∫_{−∞}^{+∞} d³p/(2πħ/L)³ = (V/h³) ∫∫∫_{−∞}^{+∞} d³p. The integrand ni depends on the energy ε(p) = p²/2m. Let us rewrite d³p = 4πp² dp as d³p = 4π(2m³ε)^(1/2) dε. Then Σ_{px,py,pz} = (V/h³) ∫₀^∞ 4π(2m³ε)^(1/2) dε ≡ ∫₀^∞ ν(ε) dε, where ν(ε) = 4πV(2m³ε)^(1/2)/h³ ≡ V(2m³ε)^(1/2)/(2π²ħ³) is the “density of states”. In case of degeneracy g (e.g. spin), ν has to be multiplied by g. It is important that in the 3-dimensional case with parabolic dispersion, ν(ε) = C1V√ε. Therefore,

N = ∫₀^∞ ν(ε) n(ε) dε = C1V ∫₀^∞ ε^(1/2) dε / [exp(β(ε − μ)) ± 1]
If the gas is classical, we can neglect the 1 in the denominator: N ≈ C1V ∫₀^∞ ε^(1/2) e^(−β(ε−μ)) dε = C1V e^(βμ) ∫₀^∞ ε^(1/2) e^(−βε) dε = C1V e^(βμ) (√π/2)(kT)^(3/2) = (√π/2)(4π(2m³)^(1/2)/h³) V e^(βμ) (kT)^(3/2) = (2πmkT)^(3/2) V e^(βμ)/h³, and e^(βμ) = nλT³.
If the gas is weakly quantum, exp(βμ) ≪ 1, we can approximate 1/[exp(β(ε−μ)) ± 1] ≈ exp(−β(ε−μ)) ∓ exp(−2β(ε−μ)). Then, N ≈ C1V ∫₀^∞ ε^(1/2) exp(−β(ε−μ)) dε ∓ C1V ∫₀^∞ ε^(1/2) exp(−2β(ε−μ)) dε. The first integral gives the classical result N = VλT^(−3) exp(βμ). The correction involves an identical integral with twice β, which gives N = VλT^(−3) exp(βμ)[1 ∓ 2^(−3/2) exp(βμ)], from which exp(βμ) = nλT³/[1 ∓ 2^(−3/2) exp(βμ)] ≈ nλT³/(1 ∓ 2^(−3/2) nλT³) ≈ nλT³(1 ± 2^(−3/2) nλT³). (We plug exp(βμ) ≈ nλT³ into the correction term.) For fermions / bosons, μ is higher / lower than dictated by the classical (Boltzmann) expression.
7.4 Thermodynamics
PV = −Ω = kT ln Zμ = kT ln Π_i Zμ^(i) = kT Σ_i ln Zμ^(i) = ±kT Σ_i ln[1 ± e^(−β(εi−μ))] = ±kT ∫ ν(ε) ln[1 ± e^(−β(ε−μ))] dε. Let us take ν(ε) = C1V√ε. Integrating by parts:

PV = ±kT C1V ∫ ε^(1/2) ln[1 ± e^(−β(ε−μ))] dε = ±kT C1V ∫ (2/3)ε^(3/2) (±β e^(−β(ε−μ)))/(1 ± e^(−β(ε−μ))) dε = (2/3) ∫ ε ν(ε) dε/(e^(β(ε−μ)) ± 1) = (2/3)U.

However, PV = NkT and U = 3NkT/2 are no longer true.
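The PV = 2U/3 relation can be checked by direct numerical integration for a Fermi gas (the constant C1V cancels in the ratio; β and μ are arbitrary):

```python
import numpy as np

def trap(f, x):
    """Simple trapezoidal rule on a given grid."""
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2

beta, mu = 1.0, 0.5
eps = np.linspace(0.0, 60.0, 200001)
nF = 1 / (np.exp(beta * (eps - mu)) + 1)
U = trap(eps**1.5 * nF, eps)                 # ∫ ε ν(ε) n(ε) dε with ν ∝ √ε
PV = trap(np.sqrt(eps) * np.log1p(np.exp(-beta * (eps - mu))), eps) / beta
ratio = PV / U                               # should equal 2/3
```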
Let us find small corrections to PV = NkT for a weakly non-classical gas. Expanding n(ε) ≈ e^(−β(ε−μ))[1 ∓ e^(−β(ε−μ))], we get

PV ≈ (2/3) C1V ∫ ε^(3/2) e^(−β(ε−μ))(1 ∓ e^(−β(ε−μ))) dε = (2/3) C1V e^(βμ) ∫ ε^(3/2) e^(−βε) dε ∓ (2/3) C1V e^(2βμ) ∫ ε^(3/2) e^(−2βε) dε = (2/3) C1V Γ(5/2)[e^(βμ) β^(−5/2) ∓ e^(2βμ) (2β)^(−5/2)] = (2/3) C1V Γ(5/2) e^(βμ) β^(−5/2) (1 ∓ 2^(−5/2) e^(βμ)),

where Γ(5/2) = 3√π/4. Plugging in exp(βμ) ≈ nλT³(1 ± 2^(−3/2) nλT³), we get P ≈ nkT(1 ± 2^(−3/2) nλT³)(1 ∓ 2^(−5/2) nλT³) ≈ nkT(1 ± 2^(−5/2) nλT³). As a result, P is higher for fermions (Pauli exclusion pushes particles to the levels with higher momenta) and lower for bosons.
7.5 Fermions
We have seen that ν(ε) ∝ ε^(1/2). Let us write it as ν = gε^(1/2). Since N = ∫₀^(εF) ν(ε) dε = (2/3)g εF^(3/2), g = (3/2) N/εF^(3/2). Let us calculate corrections to the chemical potential when kT ≪ εF. The number of particles in the system is independent of temperature, and given by N = ∫₀^∞ ν(ε) nFD(ε) dε. Let us develop a general method of calculating integrals of this form (the Sommerfeld expansion):
Consider I(μ, T) = ∫₀^∞ f(ε) nFD(ε) dε. Integrating by parts, I(μ, T) = nFD(ε)F(ε)|₀^∞ − ∫₀^∞ F(ε) (dnFD(ε)/dε) dε, where F(ε) = ∫₀^ε f(ε′) dε′. For power-law f(ε), the first term is equal to zero at both limits (nFD is exponentially small at large ε). Here, −dnFD(ε)/dε = (1/kT) e^((ε−μ)/kT)/(e^((ε−μ)/kT) + 1)² = 1/{4kT cosh²[(ε−μ)/2kT]}. This function is symmetric in ε−μ, has a width ε−μ ∼ kT and a height ∼ 1/kT. In fact, it tends to δ(ε−μ) in the zero-T limit – indeed, then −dnFD(ε)/dε is a derivative of a step function. Keeping this in mind, we can expand F(ε) in a Taylor series: F(ε) = F(μ) + f(μ)(ε−μ) + f′(μ)(ε−μ)²/2. We can also extend the lower limit of integration over ε to −∞. Then I(μ, T) ≈ ∫_{−∞}^∞ [F(μ) + f(μ)(ε−μ) + f′(μ)(ε−μ)²/2] (−dnFD(ε)/dε) dε.

Odd terms in the Taylor series give zero, because −dnFD(ε)/dε is even with respect to ε−μ. Using ∫_{−∞}^∞ x² dx/[4 cosh²(x/2)] = π²/3, we finally get I(μ, T) = F(μ) + (π²/6)(kT)² f′(μ). Notice that F(μ) = ∫₀^μ f(ε) dε is formally equal to the unphysical I(μ, 0).
Applying this to f = ν, we get N = ∫₀^μ ν(ε) dε + (π²/6)(kT)² ν′(μ). On the other hand, at T = 0, N = ∫₀^(εF) ν(ε) dε. Therefore, ∫₀^(εF) ν(ε) dε − ∫₀^μ ν(ε) dε = (π²/6)(kT)² ν′(μ). The difference between the two integrals is ≈ ν(εF)(εF − μ). (It is OK to evaluate ν at εF, and not at (εF + μ)/2, with this precision.) Finally, μ ≈ εF − (π²/6)(kT)² ν′(εF)/ν(εF). (Again, we can take ν′ at εF, and not at μ, with this precision.) For ν(ε) ∝ ε^(1/2), ν′(εF)/ν(εF) = 1/(2εF), and μ ≈ εF − (π²/12)(kT)²/εF.
Taking U = ∫₀^∞ ε ν(ε) nFD(ε) dε, so that f = εν, we get U = ∫₀^μ ε ν(ε) dε + (π²/6)(kT)² (εν)′ ≈ ∫₀^(εF) ε ν(ε) dε + (μ − εF) ν(εF) εF + (π²/6)(kT)² (εF ν′ + ν). The second term is equal to −(π²/6)(kT)² εF ν′ and cancels with the corresponding part of the third term, leaving U(T) ≈ U(0) + (π²/6)(kT)² ν(εF) = U(0) + (π²/4)(kT)² N/εF, where ν(εF) = 3N/(2εF) is substituted.
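The Sommerfeld result for μ(T) can be tested numerically: fix N at its T = 0 value for ν = √ε (so εF = 1) and solve for μ by bisection (a sketch with kT = 0.05):

```python
import numpy as np

def trap(f, x):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2

kT, eF = 0.05, 1.0
eps = np.linspace(0.0, 3.0, 300001)

def N_of_mu(mu):
    return trap(np.sqrt(eps) / (np.exp((eps - mu) / kT) + 1), eps)

N0 = (2 / 3) * eF**1.5            # T = 0 particle number for ν = √ε
lo, hi = 0.9, 1.1                 # bracket; N(μ) is monotonic in μ
for _ in range(60):               # bisection for μ(T) at fixed N
    mid = (lo + hi) / 2
    if N_of_mu(mid) < N0:
        lo = mid
    else:
        hi = mid
mu_num = (lo + hi) / 2
mu_sommerfeld = eF - np.pi**2 / 12 * kT**2 / eF
```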
n^(1/3), so that P ∝ n^(4/3). The difference between the exponents 5/3 and 4/3 in the non-relativistic and relativistic cases will be important in the next section.
¹ Here, ζ(s) = (1/Γ(s)) ∫₀^∞ x^(s−1) dx/(e^x − 1) is the Riemann zeta function.
We find a confusing result: n* ∝ T^(3/2), so it decreases as the temperature is lowered. This is the maximal density at this temperature; a finite (negative) μ would only make n smaller. At some critical temperature TC, such that n*(TC) = n, this “maximal” density will be reached. At lower temperatures, we apparently cannot accommodate the existing particles within n*. Where do they go?
Let us look at Σ_{px,py,pz} → ∫₀^∞ ν(ε) dε. The original sum contains the lowest-energy state with ε = 0, while ν ∝ √ε gives it zero weight. This is not a good approximation when μ → 0. Let us add this state manually to the RHS: N = C1V ∫₀^∞ √ε dε/(e^(βε) − 1) + 1/(e^(−βμ) − 1) ≈ N*(T) + kT/(−μ). Below TC, a macroscopic number of particles N − N*(T) will reside on the lowest energy level – the “condensate”. The rest of them, N*(T), will be in the excited states “above the condensate”.

Below TC, the chemical potential will be μ ≈ −kT/(N − N*(T)). This expression is ∼N times smaller than the typical μ ∼ −kT ln(. . . ) in the classical case. In fact, it is also smaller than the energy spacing between the energy levels, which explains why we singled out only the lowest level and not several of them.
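The integral behind n*(T) can be checked numerically; substituting ε = u² (kT = 1) tames the 1/√ε behaviour of the Bose integrand near zero:

```python
import numpy as np

def trap(f, x):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2

# ∫₀^∞ √ε dε/(e^ε − 1) = 2 ∫₀^∞ u² du/(e^(u²) − 1) = Γ(3/2) ζ(3/2) ≈ 2.3152
u = np.linspace(1e-9, 8.0, 200001)
integral = trap(2 * u**2 / np.expm1(u**2), u)
zeta_3_2 = 2.612375348685488
expected = np.sqrt(np.pi) / 2 * zeta_3_2
```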
It is instructive to rewrite this as a function of ω: U = (V/π²c³) ∫₀^∞ ħω³ dω/(e^(βħω) − 1), and look into the energy density per range of frequencies: uω = (ω²/π²c³) ħω/(e^(βħω) − 1) (Planck's law). In the limit of high temperatures, ħω/(e^(βħω) − 1) ≈ kT (according to the equipartition theorem, the energy of an oscillator is kT), and we get a classical result, uω ≈ ω²kT/π²c³ (the Rayleigh-Jeans law; notice that ħ does not enter). Integrated over all frequencies, this result diverges as U ∝ ∫₀^∞ ω² dω. Planck's law fixes the problem by introducing a high-frequency cut-off, ħω ∼ kT.
Going back to the full expression, the energy density per volume is u = (1/π²c³ħ³) ∫₀^∞ ε³ dε/(e^(βε) − 1) = ((kT)⁴/π²c³ħ³) ∫₀^∞ x³ dx/(e^x − 1) = π²(kT)⁴/(15 c³ħ³).
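The remaining dimensionless integral, ∫₀^∞ x³ dx/(e^x − 1) = Γ(4)ζ(4) = π⁴/15, is easy to verify numerically:

```python
import numpy as np

def trap(f, x):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2

x = np.linspace(1e-9, 80.0, 400001)
integral = trap(x**3 / np.expm1(x), x)   # integrand behaves as x² near 0, decays as x³e^(−x)
expected = np.pi**4 / 15
```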
Similarly, we could calculate the pressure. Here, instead of energy ε, each photon arriving at the wall transfers perpendicular momentum 2(ε/c) cos θ (2 for reflection). Correspondingly, P = u ∫₀^(π/2) cos²θ sin θ dθ = u/3. This result is in fact more general, and U = 3PV for any relativistic gas (both bosons and fermions) without internal structure. Indeed, similar to section 7.4 above, PV = −Ω = kT ln Zμ = kT ln Π_i Zμ^(i) = kT Σ_i ln Zμ^(i) = ±kT Σ_i ln[1 ± e^(−β(εi−μ))] = ±kT ∫ ν(ε) ln[1 ± e^(−β(ε−μ))] dε. Taking ν(ε) = C2Vε² for a relativistic gas, and integrating by parts, we get:

PV = ±kT C2V ∫ ε² ln[1 ± e^(−β(ε−μ))] dε = ±kT C2V ∫ (ε³/3) (±β e^(−β(ε−μ)))/(1 ± e^(−β(ε−μ))) dε = (1/3) ∫ ε ν(ε) dε/(e^(β(ε−μ)) ± 1) = (1/3)U.
7.10 Debye model

Dispersion: ω = sq. DOS (per polarization): ν(ε) = ε²V/(2π²s³ħ³). The sum of 1/si³ over the three polarizations is replaced by 3/s³. Then CV = kV [3/(2π²s³ħ³)] ∫₀^(ħωmax) ε² (ε/2kT)² sinh^(−2)(ε/2kT) dε, where ωmax ∼ s/a. The exact coefficient comes from counting the total number of modes to be the same as the number of the degrees of freedom, and is important only at high temperature, to ensure that CV = 3Nk. In the low temperature limit, kT ≪ ħωmax, we can replace the upper limit of the integral by ∞. Using ∫₀^∞ x⁴ sinh^(−2)(x/2) dx = (2π)⁴/15, we get

CV = (2π²/5) V k (kT/ħs)³ = (12π⁴/5) Nk (T/TD)³
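The dimensionless Debye integral can be verified numerically:

```python
import numpy as np

def trap(f, x):
    return np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2

x = np.linspace(1e-9, 80.0, 400001)
integral = trap(x**4 / np.sinh(x / 2)**2, x)   # behaves as 4x² near 0, decays as x⁴e^(−x)
expected = (2 * np.pi)**4 / 15
```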
8.2 Susceptibility to the external field
In an external field, G(T, M) = G0(T) + A(T)M² + B(T)M⁴ − MH. To find the equilibrium M below TC, we again have to minimize G: ∂G/∂M = 2a(T − TC)M + 4BM³ − H = 0. We will care about the “susceptibility”, χ = ∂M/∂H, which is the response of M to an infinitesimal H. Taking ∂/∂H of the previous equation:

(2a(T − TC) + 12BM²) ∂M/∂H = 1

Plugging in M(H = 0), we get χ = 1/[2a(T − TC)] above TC and χ = 1/[4a(TC − T)] below TC. Again, we see a critical exponent (−1), which is universal – it does not depend on the details of the system.
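The result below TC can be confirmed by solving the cubic equilibrium condition numerically and differentiating (hypothetical coefficients a = B = 1, TC = 1):

```python
import numpy as np

a, B, Tc = 1.0, 1.0, 1.0      # hypothetical Landau coefficients
T = 0.8                       # below Tc

def M_of_H(H):
    # Real roots of 2a(T - Tc)M + 4BM³ = H; follow the positive branch M(H=0) = +M0.
    roots = np.roots([4 * B, 0.0, 2 * a * (T - Tc), -H])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()

h = 1e-6
chi_numeric = (M_of_H(h) - M_of_H(-h)) / (2 * h)
chi_theory = 1 / (4 * a * (Tc - T))    # = 1.25 for these parameters
```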