
The Four Laws of Thermodynamics

An Assignment submitted to the Department of Petroleum


Engineering, American University of Kurdistan

As a requirement for Mid-Term Exam

Student name: Nafi M.Taher Abdullah


Course title: Physics III: Principles of Thermodynamics
Instructor: Dr. Khedir R Khedir, PhD
Course Code: PHY206
Email: nafi.abdullah@students.auk.edu.krd
Due date: 20th of July 2020
Table of Contents
Chapter One
Basic Concepts of Thermodynamics
Introduction
DEFINITION OF THERMODYNAMICS
THERMODYNAMIC SYSTEMS
MACROSCOPIC AND MICROSCOPIC POINTS OF VIEW
PURE SUBSTANCE
THERMODYNAMIC EQUILIBRIUM
PROPERTIES OF SYSTEMS
STATE
PROCESS
Conclusion
Chapter Two
The Zeroth Law of Thermodynamics
Introduction
Introducing equilibrium
The molecular world
Conclusion
Chapter Three
The First Law of Thermodynamics
Introduction
Energy Balance
Energy Change of a System, ΔE_system
Mechanisms of Energy Transfer, E_in and E_out
ENERGY BALANCE FOR CLOSED SYSTEMS
Conclusion
Chapter Four
The Second Law of Thermodynamics
Introduction
Thermal Energy Reservoirs
Heat Engines
The Second Law: Kelvin‐Planck Statement
Refrigerators and Heat Pumps
Coefficient of Performance (COP)
The Second Law of Thermodynamics: Clausius Statement
Reversible and Irreversible Process
Conclusion
Chapter Five
The Third Law of Thermodynamics
Introduction
Capitulation of Entropy
Liquefying Gases
Conclusion
References
Chapter One
Basic Concepts of Thermodynamics

Introduction

The kinetic theory of gases deals with the behavior of molecules constituting the gas.

According to this theory, the molecules of all gases are in continuous motion. As a
result of this they possess kinetic energy which is transferred from molecule to
molecule during their collision. The energy so transferred produces a change in the
velocity of individual molecules.

DEFINITION OF THERMODYNAMICS

Thermodynamics is an axiomatic science which deals with the relations among heat,
work and properties of systems which are in equilibrium. It describes state and changes
in state of physical systems. Thermodynamics may also be defined as the science of the
regularities governing processes of energy conversion, or as the science that deals with
the interaction between energy and material systems. Thermodynamics basically entails
four laws or axioms, known as the Zeroth, First, Second and Third laws of
thermodynamics.

— The First law throws light on the concept of internal energy.

— The Zeroth law deals with thermal equilibrium and establishes a concept of
temperature.

— The Second law indicates the limit of converting heat into work and introduces the
principle of increase of entropy.

— The Third law defines the absolute zero of entropy.

These laws are based on experimental observations and have no mathematical proof.
Like all physical laws, these laws are based on logical reasoning.

THERMODYNAMIC SYSTEMS

1. System, Boundary and Surroundings

System. A system is a finite quantity of matter or a prescribed region of space (Refer
Fig. 1.1).

Boundary. The actual or hypothetical envelope enclosing the system is the
boundary of the system. The boundary may be fixed or it may move, as and when a
system containing a gas is compressed or expanded. The boundary may be real or
imaginary. It is not difficult to envisage a real boundary, but an example of an imaginary
boundary would be one drawn around a system consisting of the fresh mixture about
to enter the cylinder of an I.C. engine together with the remnants of the last cylinder
charge after the exhaust process (Refer Fig. 1.2).

Fig. 1.1: The system
Fig. 1.2: The real and imaginary boundaries
2. Closed System

Refer to Fig.1.3. If the boundary of the system is impervious to


the flow of matter, it is called a closed system. An example of
this system is mass of gas or vapor contained in an engine
cylinder, the boundary of which is drawn by the cylinder walls,
the cylinder head and piston crown. Here the boundary is
continuous and no matter may enter or leave.

Fig. 1.3: Closed system

3. Open System

Refer to Fig.1.4 An open system is one in which matter flows into or out of the

system. Most of the engineering systems are open.

Fig. 1.4: Open system


4. Isolated System

An isolated system is one which exchanges neither energy nor matter with any

other system or with its environment.

5. Adiabatic System

An adiabatic system is one which is thermally insulated from its surroundings. It can,

however, exchange work with its surroundings. If it does not, it becomes an isolated
system.

Phase. A phase is a quantity of matter which is homogeneous throughout in
chemical composition and physical structure.

6. Homogeneous System

A system which consists of a single phase is termed a homogeneous system.


Examples: Mixture of air and water vapor, water plus nitric acid, and octane plus
heptane.

7. Heterogeneous System

A system which consists of two or more phases is called a heterogeneous system.


Examples: Water plus steam, ice plus water, and water plus oil.
MACROSCOPIC AND MICROSCOPIC POINTS OF VIEW

Thermodynamic studies are undertaken by the following two different approaches.


1. Macroscopic approach—(Macro means big or total)
2. Microscopic approach—(Micro means small)
These approaches are discussed (in a comparative way) below :

Note. Although the macroscopic approach seems to be different from microscopic


one, there exists a relation between them. Hence when both the methods are applied to
a particular system, they give the same result.
PURE SUBSTANCE

A pure substance is one that has a homogeneous and invariable chemical composition
even though there is a change of phase. In other words, it is a system which is (a)
homogeneous in composition, (b) homogeneous in chemical aggregation. Examples:
Liquid water, a mixture of liquid water and steam, a mixture of ice and water. The
mixture of liquid air and gaseous air is not a pure substance.

THERMODYNAMIC EQUILIBRIUM

A system is in thermodynamic equilibrium if the temperature and pressure at all


points are the same; there should be no velocity gradient; chemical equilibrium is
also necessary. Systems under temperature and pressure equilibrium but not under
chemical equilibrium are sometimes said to be in metastable equilibrium conditions. It
is only under thermodynamic equilibrium conditions that the properties of a system
can be fixed. Thus, for attaining a state of thermodynamic equilibrium the following
three types of equilibrium states must be achieved :
1. Thermal equilibrium. The temperature of the system does not change with time
and has the same value at all points of the system.
2. Mechanical equilibrium. There are no unbalanced forces within the system or
between the system and its surroundings. The pressure in the system is the same at all
points and does not change with respect to time.
3. Chemical equilibrium. No chemical reaction takes place in the system and the
chemical composition which is same throughout the system does not vary with time.
PROPERTIES OF SYSTEMS

A property of a system is a characteristic of the system which depends upon its state,
but not upon how the state is reached. There are two sorts of property :
1. Intensive properties. These properties do not depend on the mass of the system.
Examples : Temperature and pressure.
2. Extensive properties. These properties depend on the mass of the system.
Example : Volume. Extensive properties are often divided by mass associated with
them to obtain the intensive properties. For example, if the volume of a system of
mass m is V, then the specific volume of matter within the system is v = V/m, which is
an intensive property.
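For instance (with numerical values assumed purely for illustration), a system
containing m = 2 kg of a gas in a volume V = 1.6 m³ has a specific volume

v = V/m = 1.6 m³ / 2 kg = 0.8 m³/kg,

and the same value of v is obtained for any portion of the system, which is what makes
v intensive while V and m are extensive.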

STATE

State is the condition of the system at an instant of time as described or measured by


its properties. Or each unique condition of a system is called a state. It follows from
the definition of state that each property has a single value at each state. Stated
differently, all properties are state or point functions. Therefore, all properties are
identical for identical states. On the basis of the above discussion, we can determine if
a given variable is property or not by applying the following tests :
— A variable is a property, if and only if, it has a single value at each equilibrium
state.
— A variable is a property, if and only if, the change in its value between any two
prescribed equilibrium states is single-valued. Therefore, any variable whose change
is fixed by the end states is a property.

PROCESS

A process occurs when the system undergoes a change in a state or an energy transfer
at a steady state. A process may be non-flow in which a fixed mass within the defined
boundary is undergoing a change of state. Example : A substance which is being
heated in a closed cylinder undergoes a non-flow process (Fig.1.3). Closed systems
undergo non-flow processes. A process may be a flow process in which mass is
entering and leaving through the boundary of an open system. In a steady flow
process (Fig. 1.4) mass is crossing the boundary from surroundings at entry, and an
equal mass is crossing the boundary at the exit so that the total mass of the system
remains constant. In an open system it is necessary to take account of the work
delivered from the surroundings to the system at entry to cause the mass to enter, and
also of the work delivered from the system to the surroundings to cause the mass to leave,
as well as any heat or work crossing the boundary of the system. Quasi-static
process. Quasi means ‘almost’. A quasi-static process is also called a reversible
process. This process is a succession of equilibrium states and infinite slowness is its
characteristic feature.

Conclusion

It is not difficult to envisage a real boundary but an example of imaginary boundary


would be one drawn around a system consisting of the fresh mixture about to enter the
cylinder of an I.C. engine together with the remnants of the last cylinder charge after
the exhaust process. Closed System Refer to Fig.1.3. If the boundary of the system is
impervious to the flow of matter, it is called a closed system. For example, if the
volume of a system of mass m is V, then the specific volume of matter within the
system is V m = v which is an intensive property. In a steady flow process mass is
crossing the boundary from surroundings at entry, and an equal mass is crossing the
boundary at the exit so that the total mass of the system remains constant. In an open
system it is necessary to take account of the work delivered from the surroundings to
the system at entry to cause the mass to enter, and also of the work delivered from the
system at surroundings to cause the mass to leave, as well as any heat or work
crossing the boundary of the system. An isolated system is one which exchanges
neither energy nor matter with any other system or with the environment; an adiabatic
system that also exchanges no work with its surroundings becomes, in effect, an
isolated system. A system which consists of a single phase is termed a homogeneous
system, while a system which consists of two or more phases is called a heterogeneous
system. A property of a system is a characteristic of the system which
depends upon its state, but not upon how the state is reached.
Chapter Two
The Zeroth Law of Thermodynamics

Introduction

The zeroth law is an afterthought. Although it had long been known that such a law
was essential to the logical structure of thermodynamics, it was not dignified with a
name and number until early in the twentieth century. By then, the first and second
laws had become so firmly established that there was no hope of going back and
renumbering them. As will become apparent, each law provides an experimental
foundation for the introduction of a thermodynamic property. The zeroth law
establishes the meaning of what is perhaps the most familiar but is in fact the most
enigmatic of these properties: temperature. Thermodynamics, like much of the rest of
science, takes terms with an everyday meaning and sharpens them—some would say,
hijacks them—so that they take on an exact and unambiguous meaning. We shall see
that happening throughout this introduction to thermodynamics. It starts as soon as we
enter its doors. The part of the universe that is at the center of attention in
thermodynamics is called the system. A system may be a block of iron, a beaker of
water, an engine, a human body. It may even be a circumscribed part of each of those
entities. The rest of the universe is called the surroundings. The surroundings are
where we stand to make observations on the system and infer its properties. Quite
often, the actual surroundings consist of a water bath maintained at constant
temperature, but that is a more controllable approximation to the true surroundings,
the rest of the world. The system and its surroundings jointly make up the universe.
Whereas for us the universe is everything, for less profligate thermodynamicists it
might consist of a beaker of water (the system) immersed in a water bath (the
surroundings). A system is defined by its boundary. If matter can be added to or
removed from the system, then it is said to be open. A bucket, or more refined, an
open flask, is an example, because we can just shovel in material. A system with a
boundary that is impervious to matter is called closed. A sealed bottle is a closed
system. A system with a boundary that is impervious to everything in the sense that
the system remains unchanged regardless of anything that happens in the
surroundings is called isolated. A stoppered vacuum flask of hot coffee is a good
approximation to an isolated system. The properties of a system depend on the
prevailing conditions. For instance, the pressure of a gas depends on the volume it
occupies, and we can observe the effect of changing that volume if the system has
flexible walls. ‘Flexible walls’ is best thought of as meaning that the boundary of the
system is rigid everywhere except for a patch—a piston—that can move in and out.
Think of a bicycle pump with your finger sealing the orifice. Properties are divided
into two classes. An extensive property depends on the quantity of matter in the
system—its extent. The mass of a system is an extensive property; so is its volume.
Thus, 2 kg of iron occupies twice the volume of 1 kg of iron. An intensive property is
independent of the amount of matter present. The temperature (whatever that is) and
the density are examples. The temperature of water drawn from a thoroughly stirred
hot tank is the same regardless of the size of the sample. The density of iron is 8.9 g
cm−3 regardless of whether we have a 1 kg block or a 2 kg block. We shall meet many
examples of both kinds of property as we unfold thermodynamics, and it is helpful to
keep the distinction in mind.

Introducing equilibrium

So much for these slightly dusty definitions. Now we shall use a piston—a movable
patch in the boundary of a system—to introduce one important concept that will then
be the basis for introducing the enigma of temperature and the zeroth law itself.

Suppose we have two closed systems, each with a piston on one side and pinned into
place to make a rigid container (Figure 1). The two pistons are connected with a rigid
rod so that as one moves out the other moves in. We release the pins on the piston. If

the piston on the left drives the piston on the right into that system, we can infer that
the pressure on the left was higher than that on the right, even though we have not
made a direct measure of the two pressures. If the piston on the right won the battle,
then we would infer that the pressure on the right was higher than that on the left. If
nothing had happened when we released the pins, we would infer that the pressures
of the two systems were the same, whatever they might be. The technical expression
for the condition arising from the equality of pressures is mechanical equilibrium.

1. If the gases in these two containers are at different pressures, when the pins
holding the pistons are released, the pistons move one way or the other until the
two pressures are the same. The two systems are then in mechanical equilibrium.
If the pressures are the same to begin with, there is no movement of the pistons
when the pins are withdrawn, for the two systems are already in mechanical
equilibrium.


Thermodynamicists get very excited, or at least very interested, when nothing
happens, and this condition of equilibrium will grow in importance as we go through
the laws. We need one more aspect of mechanical equilibrium: it will seem

trivial at this point, but establishes the analogy that will enable us to introduce the
concept of temperature. Suppose the two systems, which we shall call A and B, are in
mechanical equilibrium when they are brought together and the pins are released.
That is, they have the same pressure. Now suppose we break the link between them
and establish a link between system A and a third system, C, equipped with a piston.
Suppose we observe no change: we infer that the systems A and C are in mechanical
equilibrium and we can go on to say that they have the same pressure. Now suppose

we break that link and put system C in mechanical contact with system B. Even
without doing the experiment, we know what will happen: nothing. Because systems
A and B have the same pressure, and A and C have the same pressure, we can be
confident that systems C and B have the same pressure, and that pressure is a
universal indicator of mechanical equilibrium. Now we move from mechanics to
thermodynamics and the world of the zeroth law. Suppose that system A has rigid
walls made of metal and system B likewise. When we put the two systems in contact,
they might undergo some kind of physical change. For instance, their pressures might
change or we could see a change in color through a peephole. In everyday language
we would say that ‘heat has flowed from one system to the other’ and their properties
have changed accordingly. Don’t imagine, though, that we know what heat is yet: that
mystery is an aspect of the first law, and we aren’t even at the zeroth law yet.

It may be the case that no change occurs when the two systems are in contact even
though they are made of metal. In that case we say that the two systems are in thermal
equilibrium.

2. A representation of the zeroth law involving (top left) three systems that can
be brought into thermal contact. If A is found to be in thermal equilibrium with
B (top right), and B is in thermal equilibrium with C (bottom left), then we can
be confident that C will be in thermal equilibrium with A if they are brought into
contact (bottom right)

Now consider three systems (Figure
2), just as we did when talking about mechanical equilibrium. It is found that if A is
put in contact with B and found to be in thermal equilibrium, and B is put in contact
with C and found to be in thermal equilibrium, then when C is put in contact with A,
it is always found that the two are in thermal equilibrium. This rather trite observation
is the essential content of the zeroth law of thermodynamics: if A is in thermal
equilibrium with B, and B is in thermal equilibrium with C, then C will be in thermal
equilibrium with A. The zeroth law implies that just as the pressure is a physical
property that enables us to anticipate when systems will be in mechanical equilibrium
when brought together regardless of their composition and size, then there exists a
property that enables us to anticipate when two systems will be in thermal equilibrium
regardless of their composition and size: we call this universal property the
temperature. We can now summarize the statement about the mutual thermal
equilibrium of the three systems simply by saying that they all have the same
temperature. We are not yet claiming that we know what temperature is, all we are
doing is recognizing that the zeroth law implies the existence of a criterion of thermal
equilibrium: if the temperatures of two systems are the same, then they will be in
thermal equilibrium when put in contact through conducting walls and an observer of
the two systems will have the excitement of noting that nothing changes. We can now
introduce two more contributions to the vocabulary of thermodynamics. Rigid walls
that permit changes of state when closed systems are brought into contact—that is,
permit the conduction of heat—are called diathermic (from the Greek words for
‘through’ and ‘warm’). Typically, diathermic walls are made of metal, but any
conducting material would do. Saucepans are diathermic vessels. If no change occurs,
then either the temperatures are the same or—if we know that they are different—then
the walls are classified as adiabatic (‘impassable’). We can anticipate that walls are
adiabatic if they are thermally insulated, such as in a vacuum flask or if the system is
embedded in foamed polystyrene. The zeroth law is the basis of the existence of a
thermometer, a device for measuring temperature. A thermometer is just a special
case of the system B that we talked about earlier. It is a system with a property that
might change when put in contact with a system with diathermic walls. A typical
thermometer makes use of the thermal expansion of mercury or the change in the
electrical properties of material. Thus, if we have a system B (‘the thermometer’) and
put it in thermal contact with A, and find that the thermometer does not change, and
then we put the thermometer in contact with C and find that it still doesn’t change,
then we can report that A and C are at the same temperature.
3. Three common temperature scales showing the relations between them. The
vertical dotted line on the left shows the lowest achievable temperature; the two
dotted lines on the right show the normal freezing and boiling points of water
There are several scales of temperature, and how they are established is
fundamentally the domain of the second law. However, it would be too cumbersome
to avoid referring to these scales until then, though formally that could be done, and
everyone is aware of the Celsius (centigrade) and Fahrenheit scales. The Swedish
astronomer Anders Celsius (1701–1744) after whom the former is named devised a
scale on which water froze at 100◦ and boiled at 0◦, the opposite of the current version
of his scale (0◦C and 100◦C, respectively). The German instrument maker Daniel
Fahrenheit (1686–1736) was the first to use mercury in a thermometer: he set 0◦ at the
lowest temperature he could reach with a mixture of salt, ice, and water, and for 100◦
he chose his body temperature, a readily transportable but unreliable standard. On this
scale water freezes at 32◦F and boils at 212◦F (Figure 3). The temporary advantage of
Fahrenheit’s scale was that with the primitive technology of the time, negative values
were rarely needed. As we shall see, however, there is an absolute zero of
temperature, a zero that cannot be passed and where negative temperatures have no
meaning except in a certain formal sense, not one that depends on the technology of
the time. It is therefore natural to measure temperatures by setting 0 at this lowest
attainable zero and to refer to such absolute temperatures as the thermodynamic
temperature. Thermodynamic temperatures are denoted T, and whenever that symbol
is used in this book, it means the absolute temperature with T = 0 corresponding to the
lowest possible temperature. The most common scale of thermodynamic temperatures
is the Kelvin scale, which uses degrees (‘kelvins’, K) of the same size as the Celsius
scale. On this scale, water freezes at 273 K (that is, at 273 Celsius-sized degrees
above absolute zero; the degree sign is not used on the Kelvin scale) and boils at 373
K. Put another way, the absolute zero of temperature lies at −273◦C. Very
occasionally you will come across the Rankine scale, in which absolute temperatures
are expressed using degrees of the same size as Fahrenheit’s.

The molecular world

In each of the first three chapters I shall introduce a property from the point of view of
an external observer. Then I shall enrich our understanding by showing how that
property is illuminated by thinking about what is going on inside the system.
Speaking about the ‘inside’ of a system, its structure in terms of atoms and molecules,
is alien to classical thermodynamics, but it adds deep insight, and science is all about
insight. Classical thermodynamics is the part of thermodynamics that emerged during
the nineteenth century before everyone was fully convinced about the reality of
atoms, and concerns relationships between bulk properties. You can do classical
thermodynamics even if you don’t believe in atoms. Towards the end of the
nineteenth century, when most scientists accepted that atoms were real and not just an
accounting device, there emerged the version of thermodynamics called statistical
thermodynamics, which sought to account for the bulk properties of matter in terms of
its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the
discussion of bulk properties we don’t need to think about the behaviour of individual
atoms but we do need to think about the average behaviour of myriad atoms. For
instance, the pressure exerted by a gas arises from the impact of its molecules on the
walls of the container; but to understand and calculate that pressure, we don’t need to
calculate the contribution of every single molecule: we can just look at the average of
the storm of molecules on the walls. In short, whereas dynamics deals with the
behaviour of individual bodies, thermodynamics deals with the average behaviour of
vast numbers of them. The central concept of statistical thermodynamics as far as we
are concerned in this chapter is an expression derived by Ludwig Boltzmann (1844–
1906) towards the end of the nineteenth century. That was not long before he
committed suicide, partly because he found intolerable the opposition to his ideas
from colleagues who were not convinced about the reality of atoms. Just as the zeroth
law introduces the concept of temperature from the viewpoint of bulk properties, so
the expression that Boltzmann derived introduces it from the viewpoint of atoms, and
illuminates its meaning. To understand the nature of Boltzmann’s expression, we need
to know that an atom can exist with only certain energies. This is the domain of
quantum mechanics, but we do not need any of that subject’s details, only that single
conclusion. At a given temperature—in the bulk sense—a collection of atoms consists
of some in their lowest energy state (their ‘ground state’), some in the next higher
energy state, and so on, with populations that diminish in progressively higher energy
states. When the populations of the states have settled down into their ‘equilibrium’
populations, and although atoms continue to
jump between energy levels there is no net change in the populations, it turns out that
these populations can be calculated from a knowledge of the energies of the states and
a single parameter β (beta). Another way of thinking about the problem is to think of a
series of shelves fixed at different heights on a wall, the shelves representing the
allowed energy states and their heights the allowed energies. The nature of these
energies is immaterial: they may correspond, for instance, to the translational,
rotational, or vibrational motion of molecules. Then we think of tossing balls
(representing the molecules) at the shelves and noting where they land. It turns out
that the most probable distribution of populations (the numbers of balls that land on
each shelf ) for a large number of throws, subject to the requirement that the total
energy has a particular value, can be expressed in terms of that single parameter β.
The precise form of the distribution of the molecules over their allowed states, or the
balls over the shelves, is called the Boltzmann distribution. This distribution is so
important that it is important to see its form. To simplify matters, we shall express it
in terms of the ratio of the population of a state of energy E to the population of the
lowest state, of energy 0:
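Written out, the ratio takes the standard exponential form

N(E)/N(0) = e^(−βE),

where N(E) denotes the population of the state of energy E, and β is a parameter
related to the conventional temperature by β = 1/(kT), with k Boltzmann’s constant; as
T falls towards absolute zero, β grows without limit.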
4. The Boltzmann distribution is an exponentially decaying function of the
energy. As the temperature is increased, the populations migrate from lower
energy levels to higher energy levels. At absolute zero, only the lowest state is
occupied; at infinite temperature, all states are equally populated
It should therefore not be
surprising that an infinite value of β (the value of β when T = 0) is unattainable in a
finite number of steps. However, although β is the more natural way of expressing
temperatures, it is ill-suited to everyday use. Thus water freezes at 0◦C (273 K),
corresponding to β = 2.65 × 10^20 J^−1, and boils at 100◦C (373 K), corresponding to
β = 1.94 × 10^20 J^−1. These are not values that spring readily off the tongue. Nor are
the values of β that typify a cool day (10◦C, corresponding to 2.56 × 10^20 J^−1) and a
warmer one (20◦C, corresponding to 2.47 × 10^20 J^−1). The point is that the existence
and value of the fundamental constant k is simply a consequence of our insisting on
using a conventional scale of temperature rather than the truly fundamental scale
based on β. The Fahrenheit, Celsius, and Kelvin scales are misguided: the reciprocal
of temperature, essentially β, is more meaningful, more natural, as a measure of
temperature. There is no hope, though, that it will ever be accepted, for history and
the potency of simple numbers, like 0 and 100, and even 32 and 212, are too deeply
embedded in our culture, and just too convenient for everyday use. Although
Boltzmann’s constant k is commonly listed as a fundamental constant, it is actually
only a recovery from a historical mistake. If Ludwig Boltzmann had done his work
before Fahrenheit and Celsius had done theirs, then it would have been seen that β was
the natural measure of temperature, and we might have become used to expressing
temperatures in the units of inverse joules with warmer systems at low values of β and
cooler systems at high values. However, conventions had become established, with
warmer systems at higher temperatures than cooler systems, and k was introduced,
through kβ = 1/T, to align the natural scale of temperature based on β to the
conventional and deeply ingrained one based on T. Thus, Boltzmann’s constant is
nothing but a conversion factor between a well-established conventional scale and the
one that, with hindsight, society might have adopted. Had it adopted β as its measure
of temperature, Boltzmann’s constant would not have been necessary. We shall end
this section on a more positive note. We have established that the temperature, and
specifically β, is a parameter that expresses the equilibrium distribution of the
molecules of a system over their available energy states. One of the easiest systems to
imagine in this connection is a perfect (or ‘ideal’) gas, in which we imagine the
molecules as forming a chaotic swarm, some moving fast, others slow, travelling in
straight lines until one molecule collides with another, rebounding in a different
direction and with a different speed, and striking the walls in a storm of impacts and
thereby giving rise to what we interpret as pressure. A gas is a chaotic assembly of
molecules (indeed, the words ‘gas’ and ‘chaos’ stem from the same root), chaotic in
spatial distribution and chaotic in the distribution of molecular speeds. Each speed
corresponds to a certain kinetic energy, and so the Boltzmann distribution can be used
to express, through the distribution of molecules over their possible translational
energy states, their distribution of speeds, and to relate that distribution of speeds to
the temperature. The resulting expression is called the Maxwell–Boltzmann
distribution of speeds, for James Clerk

5. The Maxwell–Boltzmann distribution of molecular speeds for molecules of


various mass and at different temperatures. Note that light molecules have
higher average speeds than heavy molecules. The distribution has consequences
for the composition of planetary atmospheres, as light molecules (such as
hydrogen and helium) may be able to escape into space
Maxwell (1831–1879) first derived it in a slightly different way. When the calculation
is carried through, it turns out that the average speed of the molecules increases as the
square root of the absolute temperature. The average speed of molecules in the air on
a warm day (25◦C, 298 K) is greater by 4 per cent than their average speed on a cold
day (0◦C, 273 K). Thus, we can think of temperature as an indication of the average
speeds of molecules in a gas, with high temperatures corresponding to high average
speeds and low temperatures to lower average speeds (Figure 5).
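As a quick check on that figure: since the average speed is proportional to the square
root of the absolute temperature, the ratio of average speeds on the two days is
√(298/273) ≈ 1.045, an increase of roughly 4 per cent.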

Conclusion

From the viewpoint of an observer stationed, as always, in the surroundings,


temperature is a property that reveals whether, when closed systems are in contact
through diathermic boundaries, they will be in thermal equilibrium—their
temperatures are the same—or whether there will be a consequent change of state—
their temperatures are different—that will continue until the temperatures have
equalized. From the inside, from the viewpoint of a microscopically eagle-eyed
observer within the system, one able to discern the distribution of molecules over the
available energy levels, the temperature is the single parameter that expresses those
populations. As the temperature is increased, that observer will see the population
extending up to higher energy states, and as it is lowered, the populations relax back
to the states of lower energy. At any temperature, the relative population of a state
varies exponentially with the energy of the state. That states of higher energy are
progressively populated as the temperature is raised means that more and more
molecules are moving (including rotating and vibrating) more vigorously, or the
atoms trapped at their locations in a solid are vibrating more vigorously about their
average positions. Turmoil and temperature go hand in hand.

Chapter Three
The First Law of Thermodynamics
Introduction

Up to this point, we have considered various forms of energy such as heat Q, work W, and total energy
E individually, and no attempt has been made to relate them to each other during a
process. The first law of thermodynamics, also known as the conservation of energy
principle, provides a sound basis for studying the relationships among the various
forms of energy and energy interactions. Based on experimental observations, the first
law of thermodynamics states that energy can be neither created nor destroyed; it can
only change forms. Therefore, every bit of energy should be accounted for during a
process. We all know that a rock at some elevation possesses some potential energy,
and part of this potential energy is converted to kinetic energy as the rock falls
(Fig. 3–1). Experimental data show that the decrease in potential energy (mg Δz)
exactly equals the increase in kinetic energy when the air resistance
is negligible, thus confirming the conservation of energy principle.

Figure 3–1: Energy cannot be created or destroyed; it can only change forms.

Consider a system undergoing a series of adiabatic processes from a specified state 1
to another specified state 2. Being adiabatic, these processes obviously cannot involve
any heat transfer, but they may involve several kinds of work interactions. Careful
measurements during these experiments indicate the following: For all adiabatic
processes between two specified states of a closed system, the net work done is the
same regardless of the nature of the closed system and the details of the process.
Considering that there are an infinite number of ways to perform work interactions
under adiabatic conditions, this statement appears to be very powerful, with a
potential for far-reaching implications. This statement, which is largely based on the
experiments of Joule in the first half of the nineteenth century, cannot be drawn from
any other known physical principle and is recognized as a fundamental principle. This
principle is called the first law of thermodynamics or just the first law. A major
consequence of the first law is the existence and the definition of the property total
energy E. Considering that the network is the same for all adiabatic processes of a
closed system between two specified states, the value of the network must depend on
the end states of the system only, and thus it must correspond to a change in a
property of the system. This property is the total energy. Note that the first law makes
no reference to the value of the total energy of a closed system at a state. It simply
states that the change in the total energy during an adiabatic process must be equal to
the net work done. Therefore, any convenient arbitrary value can be assigned to total
energy at a specified state to serve as a reference point. Implicit in the first law
statement is the conservation of energy. Although the essence of the first law is the
existence of the property total energy, the first law is often viewed as a statement of
the conservation of energy principle. Next we develop the first law or the
conservation of energy relation for closed systems with the help of some familiar
examples using intuitive arguments. First, we consider some processes that involve
heat transfer but no work interactions. The potato baked in the oven is a good example
for this case (Fig. 3–2). As a result of heat transfer to the potato, the energy of the
potato will increase. If we disregard any mass transfer (moisture loss from the
potato), the increase in the total energy of the potato becomes equal to the amount of
heat transfer. That is, if 5 kJ of heat is transferred to the potato the energy increase of
the potato will also be 5 kJ.

Figure 3–2: The increase in the energy of a potato in an oven is equal to the amount of
heat transferred to it.

As another example, consider the heating of water in a pan on top of a range (Fig. 3–3).


Figure 3–3: In the absence of any work interactions, energy change of a system is
equal to the net heat transfer.

If 15 kJ of heat is transferred to the water from the heating element and 3 kJ of it is


lost from the water to the surrounding air, the increase in energy of the water will be
equal to the net heat transfer to water, which is 12 kJ. Now consider a well-insulated
(i.e., adiabatic) room heated by an electric heater as our system (Fig. 3–4).
Figure 3–4: The work (electrical) done on an adiabatic system is equal to the increase
in the energy of the system.
As a result of electrical work done, the energy of the system will increase. Since the
system is adiabatic and cannot have any heat transfer to or from the surroundings (Q =
0), the conservation of energy principle dictates that the electrical work done on the
system must equal the increase in energy of the system. Next, let us replace the
electric heater with a paddle wheel (Fig. 3–5).

Figure 3–5: The work (shaft) done on an adiabatic system is equal to the increase in
the energy of the system.
As a result of the stirring process, the energy of the system will increase. Again, since
there is no heat interaction between the system and its surroundings (Q = 0), the
paddle-wheel work done on the system must show up as an increase in the energy of
the system. Many of you have probably noticed that the temperature of air rises when
it is compressed (Fig. 3–6).

Figure 3–6: The work (boundary) done on an adiabatic system is equal to the increase
in the energy of the system.

This is because energy is transferred to the air in the form of boundary work. In the
absence of any heat transfer (Q = 0), the entire boundary work will be stored in the air
as part of its total energy. The conservation of energy principle again requires that the
increase in the energy of the system be equal to the boundary work done on the
system. We can extend these discussions to systems that involve various heat and
work interactions simultaneously. For example, if a system gains 12 kJ of heat during
a process while 6 kJ of work is done on it, the increase in the energy of the system
during that process is 18 kJ (Fig. 3–7). That is, the change in the energy of a system
during a process is simply equal to the net energy transfer to (or from) the system.
Figure 3–7: The energy change of a system during a process is equal to the net work
and heat transfer between the system and its surroundings.

Energy Balance

In the light of the preceding discussions, the conservation of energy principle

can be expressed as follows: The net change (increase or decrease) in the total

energy of the system during a process is equal to the difference between the

total energy entering and the total energy leaving the system during that

process. That is, during a process,
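In symbols, with E_in denoting the total energy entering the system and E_out the
total energy leaving it,

E_in − E_out = ΔE_system.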

This relation is often referred to as the energy balance and is applicable to

any kind of system undergoing any kind of process. The successful use of this

relation to solve engineering problems depends on understanding the various


.forms of energy and recognizing the forms of energy transfer

Energy Change of a System, ΔE_system


The determination of the energy change of a system during a process involves

the evaluation of the energy of the system at the beginning and at the end of

the process, and taking their difference. That is,

Energy change = Energy at final state - Energy at initial state


Or
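in symbols,

ΔE_system = E_final − E_initial = E_2 − E_1.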

Note that energy is a property, and the value of a property does not change unless

the state of the system changes. Therefore, the energy change of a system

is zero if the state of the system does not change during the process. Also,

energy can exist in numerous forms such as internal (sensible, latent, chemical,

and nuclear), kinetic, potential, electric, and magnetic, and their sum constitutes

the total energy E of a system. In the absence of electric, magnetic,

and surface tension effects (i.e., for simple compressible systems), the change

in the total energy of a system during a process is the sum of the changes in its
internal, kinetic, and potential energies and can be expressed as
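ΔE = ΔU + ΔKE + ΔPE,

where, for a closed system of mass m (with V here denoting velocity and z elevation),

ΔU = m(u_2 − u_1),   ΔKE = ½ m(V_2² − V_1²),   ΔPE = m g(z_2 − z_1).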

When the initial and final states are specified, the values of the specific internal

energies u1 and u2 can be determined directly from the property tables or

thermodynamic property relations. Most systems encountered in practice are


stationary, that is, they do not involve any changes in their velocity or elevation
during a process (Fig. 3–8). Thus, for stationary systems, the changes in kinetic and
potential energies are zero (that is, ΔKE = ΔPE = 0), and the total energy
change relation reduces to ΔE = ΔU for such systems. Also, the energy of a
system during a process will change even if only one form of its energy changes
while the other forms of energy remain unchanged.
Figure 3–8: For stationary systems, ΔKE = ΔPE = 0; thus ΔE = ΔU.
Mechanisms of Energy Transfer, E_in and E_out

Energy can be transferred to or from a system in three forms: heat, work, and

mass flow. Energy interactions are recognized at the system boundary as they

cross it, and they represent the energy gained or lost by a system during a

process. The only two forms of energy interactions associated with a fixed

mass or closed system are heat transfer and work.

1. Heat Transfer, Q Heat transfer to a system (heat gain) increases the energy

of the molecules and thus the internal energy of the system, and heat

transfer from a system (heat loss) decreases it since the energy transferred out

as heat comes from the energy of the molecules of the system.

2. Work, W An energy interaction that is not caused by a temperature

difference between a system and its surroundings is work. A rising piston, a

rotating shaft, and an electrical wire crossing the system boundaries are all associated
with work interactions. Work transfer to a system (i.e., work done on a system)
increases the energy of the system, and work transfer from a

system (i.e., work done by the system) decreases it since the energy transferred

out as work comes from the energy contained in the system. Car engines

and hydraulic, steam, or gas turbines produce work while compressors,

pumps, and mixers consume work.

3. Mass Flow, m Mass flow in and out of the system serves as an additional

mechanism of energy transfer. When mass enters a system, the energy

of the system increases because mass carries energy with it (in fact, mass is

energy). Likewise, when some mass leaves the system, the energy contained

within the system decreases because the leaving mass takes out some energy

with it. For example, when some hot water is taken out of a water heater and

is replaced by the same amount of cold water, the energy content of the hot water

tank (the control volume) decreases as a result of this mass interaction


(Fig. 3–9).

Figure.3-9: The energy content of a control volume can be changed by mass flow

as well as heat and work interactions.

Noting that energy can be transferred in the forms of heat, work, and mass,

and that the net transfer of a quantity is equal to the difference between the

amounts transferred in and out, the energy balance can be written more explicitly

as
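E_in − E_out = (Q_in − Q_out) + (W_in − W_out) + (E_mass,in − E_mass,out) = ΔE_system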

where the subscripts “in’’ and “out’’ denote quantities that enter and leave the

system, respectively. All six quantities on the right side of the equation represent

“amounts,’’ and thus they are positive quantities. The direction of any

energy transfer is described by the subscripts “in’’ and “out.’’ Therefore, we do

not need to adopt a formal sign convention for heat and work interactions.

When heat or work is to be determined and their direction is unknown, we can

assume any direction (in or out) for heat or work and solve the problem.

A negative result in that case will indicate that the assumed direction is wrong,

and it is corrected by reversing the assumed direction. This is just like assuming

a direction for an unknown force when solving a problem in statics and


reversing the assumed direction when a negative quantity is obtained.

The heat transfer Q is zero for adiabatic systems, the work transfer W is zero

for systems that involve no work interactions, and the energy transport with

mass Emass is zero for systems that involve no mass flow across their boundaries

(i.e., closed systems). Energy balance for any system undergoing any kind of process
can be expressed more compactly as
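E_in − E_out = ΔE_system        (kJ)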

For constant rates, the total quantities during a time interval Δt are related to the
quantities per unit time as
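Q = Q̇ Δt,   W = Ẇ Δt,   ΔE = (dE/dt) Δt        (kJ)

where an overdot denotes a rate (a quantity per unit time).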

The energy balance can be expressed on a per unit mass basis as
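e_in − e_out = Δe_system        (kJ/kg)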

which is obtained by dividing all the quantities by the mass m of the system. Energy

balance can also be expressed in the differential form as
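δE_in − δE_out = dE_system     or     δe_in − δe_out = de_system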

For a closed system undergoing a cycle, the initial and final states are identical, and
thus, ΔE_system = E_2 − E_1 = 0. Then the energy balance for a cycle simplifies to
E_in − E_out = 0 or E_in = E_out. Noting that a closed system does not involve any mass
flow across its boundaries, the energy balance for a cycle can be expressed in terms of
heat and work interactions as
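W_net,out = Q_net,in     or     Ẇ_net,out = Q̇_net,in        (for a cycle)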
That is, the net work output during a cycle is equal to the net heat input (Fig. 3–10).

Figure 3–10: For a cycle ΔE = 0, thus Q = W.

ENERGY BALANCE FOR CLOSED SYSTEMS

The energy balance (or the first-law) relations already given are intuitive in nature and
are easy to use when the magnitudes and directions of heat and work transfers are
known. However, when performing a general analytical study or solving a problem
that involves an unknown heat or work interaction, we need to assume a direction for
the heat or work interactions. In such cases, it is common practice to use the classical
thermodynamics sign convention and to assume heat to be transferred into the system
(heat input) in the amount of Q and work to be done by the system (work output) in
the amount of W, and then to solve the problem. The energy balance relation in that
case for a closed system becomes
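Q_net,in − W_net,out = ΔE_system     or     Q − W = ΔE_system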

where Q = Q_net,in = Q_in − Q_out is the net heat input and W = W_net,out = W_out − W_in
is the net work output. Obtaining a negative quantity for Q or W simply means that the
assumed direction for that quantity is wrong and should be reversed. Various forms of
this “traditional” first-law relation for closed systems are given in Fig. 3–11.

The first law cannot be proven mathematically, but no process in nature is known to
have violated the first law, and this should be taken as sufficient proof. Note that if it
were possible to prove the first law on the basis of other physical principles, the first
law then would be a consequence of those principles instead of being a fundamental
physical law itself. As energy quantities, heat and work are not that different, and you
probably wonder why we keep distinguishing them. After all, the change in the
energy content of a system is equal to the amount of energy that crosses the system

boundaries, and it makes no difference whether the energy crosses the boundary

as heat or work. It seems as if the first-law relations would be much simpler


if we had just one quantity that we could call energy interaction to represent both heat
and work. Well, from the first-law point of view, heat and work are not different at
all. From the second-law point of view, however, heat and work are very different.
Figure 3–11: Various forms of the first-law relation for closed systems.

Conclusion

The first law of thermodynamics is essentially an expression of the conservation of


energy principle, also called the energy balance. The general mass and energy
balances for any system undergoing any process, and the energy balance for a closed
system (taking heat transfer to the system and work done by the system to be positive
quantities), can be expressed as
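m_in − m_out = Δm_system,     E_in − E_out = ΔE_system        (any system, any process)

Q − W = ΔE_system = ΔU + ΔKE + ΔPE        (closed system)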
Chapter Four
The Second Law of Thermodynamics

Introduction

The second law of thermodynamics asserts that processes occur in a certain direction
and that the energy has quality as well as quantity. The first law places no restriction
on the direction of a process, and satisfying the first law does not guarantee that the
process will occur. Thus, we need another general principle (second law) to identify
whether a process can occur or not.

Fig 4-1: Heat transfer from a hot container to the cold surroundings is possible;
however, the reverse process (although satisfying the first law) is impossible.
A process can occur when and only when it satisfies both the first and the second
laws of thermodynamics. The second law also asserts that energy has a quality.
Preserving the quality of energy is a major concern of engineers. In the above
example, the energy stored in a hot container (higher temperature) has higher quality
(ability to do work) in comparison with the energy contained (at lower temperature) in
the surroundings. The second law is also used in determining the theoretical limits for
the performance of commonly used engineering systems, such as heat engines and
refrigerators, etc.

Thermal Energy Reservoirs

Thermal energy reservoirs are hypothetical bodies with a relatively large thermal
energy capacity (mass x specific heat) that can supply or absorb finite amounts of heat
without undergoing any change in temperature. Lakes, rivers, the atmosphere, and oceans
are examples of thermal reservoirs. A two-phase system can be modeled as a reservoir
since it can absorb and release large quantities of heat while remaining at constant
temperature. A reservoir that supplies energy in the form of heat is called a source and
one that absorbs energy in the form of heat is called a sink.
Heat Engines

Heat engines convert heat to work. There are several types of heat engines, but they
are characterized by the following:
1‐ They all receive heat from a high‐temperature source (oil furnace, nuclear reactor,
etc.)
   2‐ They convert part of this heat to work.
3‐ They reject the remaining waste heat to a low‐temperature sink.
4‐ They operate in a cycle.

Fig 4. 2: Steam power plant is a heat engine. 


Thermal efficiency is the fraction of the heat input that is converted to the net work
output (efficiency = benefit / cost).
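In symbols, with Q_in the heat supplied from the high-temperature source and Q_out
the waste heat rejected to the low-temperature sink,

η_th = W_net,out / Q_in = 1 − Q_out / Q_in.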

 
The thermal efficiencies of work‐producing devices are low. Ordinary spark‐ignition
automobile engines have a thermal efficiency of about 20%, diesel engines about
30%, and power plants in the order of 40%. Is it possible to save the rejected heat
Qout in a power cycle? The answer is NO, because without the cooling in the condenser
the cycle cannot be completed. Every heat engine must waste some energy by
transferring it to a low-temperature reservoir in order to complete the cycle, even in
an idealized cycle.

The Second Law: Kelvin‐Planck Statement

It is impossible for any device that operates on a cycle to receive heat from a single
reservoir and produce a net amount of work. In other words, no heat engine can have
a thermal efficiency of 100%. 
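In symbols, one common formalization (added here for reference) is: for any device operating in a thermodynamic cycle while exchanging heat with only a single reservoir,

\[
W_{net,out} \le 0,
\]

i.e. such a device may consume work but can never deliver net work; equivalently, every heat engine must have \(\eta_{th} < 100\%\).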

Fig. 4-3: A heat engine that violates the Kelvin‐Planck statement of the second law
cannot be built.
Refrigerators and Heat Pumps

In nature, heat flows from high‐temperature regions to low‐temperature ones. The
reverse process, however, cannot occur by itself. The transfer of heat from a low‐
temperature region to a high‐temperature one requires special devices called
refrigerators. Refrigerators are cyclic devices, and the working fluids used in the
cycles are called refrigerants. Heat pumps also transfer heat from a low‐temperature
medium to a high‐temperature one. Refrigerators and heat pumps are essentially the
same devices; they differ only in their objectives. The objective of a refrigerator is to
maintain the refrigerated space at a low temperature, whereas a heat pump absorbs heat
from a low‐temperature source in order to supply heat to a warmer medium.

Fig. 4-4: Objectives of a refrigerator and a heat pump.

Coefficient of Performance (COP)

The performance of refrigerators and heat pumps is expressed in terms of the coefficient of
performance (COP), defined as the desired heat transfer divided by the required work
input (see the relations below). Air conditioners are basically refrigerators whose
refrigerated space is a room or a building. The Energy Efficiency Rating (EER) is the
amount of heat removed from the cooled space, in Btu, per 1 Wh (watt‐hour) of electricity
consumed, so EER = 3.412 COPR. Most air conditioners have an EER between 8 and 12
(a COP of 2.3 to 3.5).
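The standard definitions behind these numbers (with a small illustrative check) are

\[
COP_{R} = \frac{Q_L}{W_{net,in}}, \qquad
COP_{HP} = \frac{Q_H}{W_{net,in}} = COP_{R} + 1 \quad (\text{since } Q_H = Q_L + W_{net,in}),
\]

and, because 1 Wh = 3.412 Btu, EER = 3.412 COPR. A unit with COPR = 3, for instance, has EER ≈ 3.412 × 3 ≈ 10.2, in the middle of the typical range quoted above.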
The Second Law of Thermodynamics: Clausius Statement

It is impossible to construct a device that operates in a cycle and produces no effect
other than the transfer of heat from a lower‐temperature body to a higher‐temperature
body. In other words, a refrigerator will not operate unless its compressor is driven by
an external power source. The Kelvin‐Planck and Clausius statements of the second law
are negative statements, and a negative statement cannot be proved. So the second
law, like the first law, is based on experimental observations. The two statements of
the second law are equivalent: any device that violates the Kelvin‐Planck statement
also violates the Clausius statement, and vice versa.

Fig. 4-5: A violation of the Kelvin‐Planck statement leads to a violation of the Clausius statement.


Any device that violates the first law of thermodynamics (by creating energy) is
called a perpetual‐motion machine of the first kind (PMM1), and a device that
violates the second law is called a perpetual‐motion machine of the second kind
(PMM2).

Reversible and Irreversible Processes

A reversible process is defined as a process that can be reversed without leaving any
trace on the surroundings. This means that both the system and the surroundings are
returned to their initial states at the end of the reverse process. Processes that are not
reversible are called irreversible. Reversible processes do not actually occur; they are
only idealizations of actual processes. We use the reversible-process concept because
(a) such processes are easy to analyze (the system passes through a series of equilibrium
states), and (b) they serve as limits (idealized models) to which actual processes can be
compared. Some factors that cause a process to become irreversible:
• Friction
• Unrestrained expansion and compression
• Mixing
• Heat transfer across a finite temperature difference (finite ∆T)
• Inelastic deformation
• Chemical reactions
In a reversible process things happen very slowly, without any resisting force and without
any space limitation → everything happens in a highly organized way (this is not physically
possible; it is an idealization). Internally reversible process: no irreversibilities occur
within the boundaries of the system. In such a process the system passes through a series of
equilibrium states, and when the process is reversed, the system passes through exactly the
same equilibrium states while returning to its initial state. Externally reversible process:
no irreversibilities occur outside the system boundaries during the process. Heat transfer
between a reservoir and a system is an externally reversible process if the surface of
contact between the system and the reservoir is at the temperature of the reservoir. Totally
reversible (reversible) process: both internally and externally reversible.
Conclusion

Thermal energy reservoirs are hypothetical bodies with a relatively large thermal
energy capacity that can supply or absorb finite amounts of heat without undergoing
any change in temperature. A reservoir that supplies energy in the form of heat is
called a source, and one that absorbs energy in the form of heat is called a sink. Every
heat engine must waste some energy by transferring it to a low‐temperature reservoir
in order to complete the cycle, even in an idealized cycle. Heat pumps transfer heat from
a low‐temperature medium to a high‐temperature one: a heat pump absorbs heat from a
low‐temperature source and supplies it to a warmer medium. A reversible process is
defined as a process that can be reversed without leaving any trace on the surroundings.
Reversible processes do not actually occur; they are only idealizations of actual
processes. In an internally reversible process the system passes through a series of
equilibrium states, and when the process is reversed, the system passes through exactly
the same equilibrium states while returning to its initial state. In an externally
reversible process no irreversibilities occur outside the system boundaries. Heat transfer
between a reservoir and a system is an externally reversible process if the surface of
contact between the system and the reservoir is at the temperature of the reservoir.
Chapter Five
The Third Law of Thermodynamics

Introduction

In cold bodies the atoms find potential energy barriers difficult to surmount, because
the thermal motion is weak. That is the reason for liquefaction and solidification, when
the intermolecular van der Waals forces overwhelm the free-flying gas atoms. If the
temperature tends to zero, no barriers – however small – can be overcome, so that a
body must assume the state of lowest energy. No other state can be realized, and
therefore the entropy must be zero. That is what the third law of thermodynamics
says. On the other hand, cold bodies have slow atoms, and slow atoms have large de
Broglie wavelengths, so that the quantum mechanical wave character may create
macroscopic effects. This is the reason for gas degeneracy, which is, however, often
disguised by the van der Waals forces. In particular, in cold mixtures even the
smallest malus (energy penalty) for the formation of unequal nearest neighbors prevents
the existence of such unequal pairs and should lead to un-mixing. This is in fact observed
in a cold mixture of liquid He3 and He4; in the process of un-mixing, the mixture sheds its
entropy of mixing. Obviously it must do so, if the entropy is to vanish. In this chapter we
consider low-temperature phenomena and record the history of low-temperature
thermodynamics and, in particular, of the science of cryogenics, whose objective is to
reach low temperatures. The field is still an active field of research, and lower and lower
temperatures are being reached.
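The statement that slow (cold) atoms have large de Broglie wavelengths can be made quantitative; one common measure, added here for reference and not part of the original text, is the thermal de Broglie wavelength

\[
\lambda_{th} = \frac{h}{\sqrt{2\pi m k_B T}},
\]

which diverges as \(T \to 0\), so that at sufficiently low temperatures \(\lambda_{th}\) becomes comparable to the interatomic spacing and quantum (degeneracy) effects can appear on a macroscopic scale.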

Capitulation of Entropy

It may happen – actually it happens more often than not – that a chemical reaction is
constrained. This means that, at a given pressure p, the reactants persist at
temperatures where, according to the law of mass action, they should long have been
converted into resultants; the Gibbs free energy g is lower for the resultants than for
the reactants, and yet the resultants do not form. We may say that the mixture of
reactants is under-cooled or overheated, depending on the case. As we understood
on the occasion of the ammonia synthesis, the phenomenon is due to energetic barriers
which must be overcome – or bypassed – before the reaction can occur. The bypass may
be achieved by an appropriate catalyst. An analogous behavior occurs in phase
transitions, mostly in solids: it may happen that different crystalline lattice structures
exist in the same substance, one stable and one meta-stable, i.e. as good as stable or,
anyway, persisting nearly indefinitely. Walther Hermann Nernst (1864–1941) studied
such cases, particularly at low and lowest temperatures. Take tin, for example. Tin, or
pewter (white tin), is a perfectly good metal at room temperature – with a tetragonal
lattice structure – popular for tin plates, pewter cups, organ pipes, and toy soldiers.
Kept at 13.2 °C and 1 atm, white tin crumbles into the unattractive cubic grey tin in a
few hours. However, if it is not given the time, white tin is meta-stable below 13.2 °C
and may persist virtually forever. It is for a pressure of 1 atm that the phase equilibrium
occurs at 13.2 °C. At other pressures that temperature is different, and we denote it by
Tw→g(p); its value is known for all p. At that temperature Δg = gw − gg vanishes, and
below it we have gw > gg, so that grey tin is the stable phase. Δg may be considered the
frustrated driving force of the transition; it is sometimes called the affinity of the
transition. It depends on T and p and has two parts.
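The equation referred to here did not survive reproduction; a plausible reconstruction, consistent with the surrounding discussion, splits the affinity into an energetic part and an entropic part:

\[
\Delta g(T,p) = \Delta h(T,p) - T\,\Delta s(T,p).
\]

The "expression" discussed in the next paragraph is then presumably a quantity that reduces to the entropic contribution \(\Delta s(T,p)\), and Nernst's conclusion is that this contribution vanishes in the limit \(T \to 0\) at every pressure p.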

From some measurements Nernst convinced himself that this expression – which after
all is equal to Δs(T,p) for T → 0 – is zero, irrespective of the pressure p, and for all
transitions. So he came to pronounce his law, or theorem, which we may express by
saying that the entropies of different phases of a crystalline body become equal for
T → 0, irrespective of the lattice structure. Moreover, they are independent of the
pressure p. This became known as the third law of thermodynamics. We recall
Berthelot, who had assumed the affinity to be given by the heat of transition. And we
recall Helmholtz, who had insisted that the contribution of the entropy of the
transition must not be neglected. Helmholtz was right, of course, but the third law
provides a low-temperature niche for Berthelot: not only does T·Δs(T,p) go to zero;
Δs(T,p) itself goes to zero. The entropy capitulates to low temperature and gives up
its efficacy to influence reactions and transitions.

Liquefying Gases

It is not easy to lower temperatures, and the creation of lower and lower temperatures
is in itself a fascinating chapter in the history of thermodynamics, which we shall now
proceed to consider. The chapter is not closed, because low-temperature physics is at
present an active field of research. Currently the world record for the lowest
temperature in the universe stands at 1.5 μK, which was reached at the University
of Bayreuth in the early 1990s. Naturally the cold spot was maintained only for some
hours. Such a value was, of course, far below the scope of the pioneers of the 19th
century who set themselves the task of liquefying the gases available to them and
then, perhaps, reaching the solid phase. The easiest way to cool a gas is to bring it
into contact with a cold body and let a heat exchange take place. But that requires the
cold body to begin with, and such a body may not be available. No gas – apart from
water vapour – could be liquefied in this manner in the temperate zones of Europe,
where most of the research was done. Since liquids occupy only a small portion of the
volume of gases at the same pressure, it stands to reason that a high pressure may be
conducive to liquefaction, just as a low temperature is. Both together should be even
better. That idea occurred to Michael Faraday – a pioneer of both electromagnetism
and cryogenics, the physics of low-temperature generation – in 1823. He combined
high pressure and low temperature in an ingenious manner by using a glass tube
shaped like a boomerang, cf. Fig. 5-1. Some manganese dioxide with hydrochloric
acid was placed at one end. The tube was then sealed, and gentle heating liberated
chlorine gas, which mixed with the air in the tube and, of course, raised the pressure.
The other end was put into ice water, and it turned out that chlorine condensed at that
end and formed a puddle at 0 °C and high pressure.

Fig. 5-1: Michael Faraday (1791–1867); the liquefaction of chlorine.


Conclusion

A chemical reaction is often constrained: at a given pressure p, the reactants persist at
temperatures where, according to the law of mass action, they should long have been
converted into resultants; the Gibbs free energy g is lower for the resultants than for
the reactants, and yet the resultants do not form. An analogous behavior occurs in phase
transitions. Tin, or pewter (white tin), is a perfectly good metal at room temperature –
with a tetragonal lattice structure – popular for tin plates, pewter cups, organ pipes, and
toy soldiers. Kept at 13.2 °C and 1 atm, white tin crumbles into the unattractive cubic
grey tin in a few hours. At other pressures that temperature is different, and we denote
it by Tw→g(p); its value is known for all p. At that temperature Δg = gw − gg vanishes,
and below it we have gw > gg, so that grey tin is the stable phase. It is not easy to lower
temperatures, and the creation of lower and lower temperatures is in itself a fascinating
chapter in the history of thermodynamics. Since liquids occupy only a small portion of
the volume of gases at the same pressure, it stands to reason that a high pressure may
be conducive to liquefaction, just as a low temperature is.
References

1. R. K. Rajput. Engineering Thermodynamics, 3rd ed., SI Units Version, 2007. ISBN 978-0-7637-8272-6.
2. Peter Atkins. The Laws of Thermodynamics: A Very Short Introduction. ISBN 978-0-19-957219-9.
3. ASHRAE Handbook of Fundamentals, SI version. Atlanta, GA: American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc., 1993.
4. ASHRAE Handbook of Refrigeration, SI version. Atlanta, GA: American Society of Heating, Refrigerating, and Air-Conditioning Engineers, Inc., 1994.
5. A. Bejan. Advanced Engineering Thermodynamics. New York: John Wiley & Sons, 1988.
6. Y. A. Çengel. "An Intuitive and Unified Approach to Teaching Thermodynamics." ASME International Mechanical Engineering Congress and Exposition, Atlanta, Georgia, AES-Vol. 36, pp. 251–260, November 17–22, 1996.
7. K. Wark and D. E. Richards. Thermodynamics, 6th ed. New York: McGraw-Hill, 1999.
8. M. Bahrami. ENSC 388 (F09), The Second Law of Thermodynamics, lecture notes.
9. Ingo Müller. A History of Thermodynamics: The Doctrine of Energy and Entropy. Springer, 2007.
