Introduction
The kinetic theory of gases deals with the behavior of molecules constituting the gas.
According to this theory, the molecules of all gases are in continuous motion. As a
result of this motion they possess kinetic energy, which is transferred from molecule to molecule during collisions. The energy so transferred produces a change in the velocity of individual molecules.
DEFINITION OF THERMODYNAMICS
Thermodynamics is an axiomatic science which deals with the relations among heat, work, and the properties of systems in equilibrium. It describes state and changes in state of physical systems. Alternatively, thermodynamics may be defined as the science of the regularities governing processes of energy conversion, or as the science that deals with the interaction between energy and material systems.
_ The Zeroth law deals with thermal equilibrium and establishes a concept of temperature.
_ The First law deals with the conservation of energy and introduces the concept of internal energy.
_ The Second law indicates the limit of converting heat into work and introduces the
principle of increase of entropy.
These laws are based on experimental observations and have no mathematical proof.
Like all physical laws, these laws are based on logical reasoning.
THERMODYNAMIC SYSTEMS
1. System
A system is a definite quantity of matter or a region in space upon which attention is focused for analysis (Fig. 1.1). Its boundary may be real or imaginary (Fig. 1.2).
Fig. 1.1: The system. Fig. 1.2: The real and imaginary boundaries
2. Closed System
A closed system is one in which no mass crosses the system boundary, though energy in the form of heat or work may.
3. Open System
Refer to Fig. 1.4. An open system is one in which matter flows into or out of the system across the boundary; energy may cross the boundary as well.
4. Isolated System
An isolated system is one which exchanges neither energy nor matter with any other system.
5. Adiabatic System
An adiabatic system is one which is thermally insulated from its surroundings. It can, however, exchange work with its surroundings. If it does not, it becomes an isolated system.
Phase. A phase is a quantity of matter which is homogeneous throughout in chemical composition and physical structure.
6. Homogeneous System
A system which consists of a single phase is termed a homogeneous system.
7. Heterogeneous System
A system which consists of two or more phases is called a heterogeneous system.
PURE SUBSTANCE
A pure substance is one that has a homogeneous and invariable chemical composition even though there is a change of phase. In other words, it is a system which is (a) homogeneous in composition, and (b) homogeneous in chemical aggregation. Examples: liquid water, a mixture of liquid water and steam, a mixture of ice and water. A mixture of liquid air and gaseous air is not a pure substance.
THERMODYNAMIC EQUILIBRIUM
A system is in thermodynamic equilibrium if no change in any macroscopic property occurs when it is isolated from its surroundings; this requires mechanical, chemical, and thermal equilibrium simultaneously.
PROPERTIES OF SYSTEMS
A property of a system is a characteristic of the system which depends upon its state, but not upon how that state is reached. There are two sorts of property:
1. Intensive properties. These properties do not depend on the mass of the system.
Examples : Temperature and pressure.
2. Extensive properties. These properties depend on the mass of the system.
Example : Volume. Extensive properties are often divided by the mass associated with them to obtain intensive properties. For example, if the volume of a system of mass m is V, then the specific volume of matter within the system is v = V/m, which is an intensive property.
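As a quick numerical sketch of this relation (the function name and values here are illustrative, not from the text):

```python
# Dividing an extensive property (volume V) by the mass m gives an
# intensive property (specific volume v), independent of system size.

def specific_volume(V, m):
    """Return the specific volume v = V / m, in m^3/kg."""
    return V / m

# Two samples of the same substance in the same state:
v1 = specific_volume(2.0, 4.0)  # 4.0 kg occupying 2.0 m^3
v2 = specific_volume(1.0, 2.0)  # half the mass, half the volume
print(v1, v2)  # 0.5 0.5 -> the same specific volume for both samples
```

Doubling the mass doubles the volume but leaves v unchanged, which is exactly what makes the specific volume intensive.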
STATE
The state of a system is its condition at a particular instant, as identified by the values of its properties.
PROCESS
A process occurs when the system undergoes a change of state or an energy transfer at a steady state. A process may be non-flow, in which a fixed mass within the defined
boundary is undergoing a change of state. Example : A substance which is being
heated in a closed cylinder undergoes a non-flow process (Fig.1.3). Closed systems
undergo non-flow processes. A process may be a flow process in which mass is
entering and leaving through the boundary of an open system. In a steady flow
process (Fig. 1.4) mass is crossing the boundary from surroundings at entry, and an
equal mass is crossing the boundary at the exit so that the total mass of the system
remains constant. In an open system it is necessary to take account of the work delivered from the surroundings to the system at entry to cause the mass to enter, and also of the work delivered from the system to the surroundings to cause the mass to leave, as well as any heat or work crossing the boundary of the system. Quasi-static
process. Quasi means ‘almost’. A quasi-static process is also called a reversible
process. This process is a succession of equilibrium states and infinite slowness is its
characteristic feature.
Conclusion
Introduction
The zeroth law is an afterthought. Although it had long been known that such a law
was essential to the logical structure of thermodynamics, it was not dignified with a
name and number until early in the twentieth century. By then, the first and second
laws had become so firmly established that there was no hope of going back and
renumbering them. As will become apparent, each law provides an experimental
foundation for the introduction of a thermodynamic property. The zeroth law
establishes the meaning of what is perhaps the most familiar but is in fact the most
enigmatic of these properties: temperature. Thermodynamics, like much of the rest of
science, takes terms with an everyday meaning and sharpens them—some would say,
hijacks them—so that they take on an exact and unambiguous meaning. We shall see
that happening throughout this introduction to thermodynamics. It starts as soon as we
enter its doors. The part of the universe that is at the center of attention in
thermodynamics is called the system. A system may be a block of iron, a beaker of
water, an engine, a human body. It may even be a circumscribed part of each of those
entities. The rest of the universe is called the surroundings. The surroundings are
where we stand to make observations on the system and infer its properties. Quite
often, the actual surroundings consist of a water bath maintained at constant
temperature, but that is a more controllable approximation to the true surroundings,
the rest of the world. The system and its surroundings jointly make up the universe.
Whereas for us the universe is everything, for less profligate thermodynamicists it
might consist of a beaker of water (the system) immersed in a water bath (the
surroundings). A system is defined by its boundary. If matter can be added to or
removed from the system, then it is said to be open. A bucket, or more refined, an open flask, is an example, because we can just shovel in material. A system with a
boundary that is impervious to matter is called closed. A sealed bottle is a closed
system. A system with a boundary that is impervious to everything in the sense that
the system remains unchanged regardless of anything that happens in the
surroundings is called isolated. A stoppered vacuum flask of hot coffee is a good
approximation to an isolated system. The properties of a system depend on the
prevailing conditions. For instance, the pressure of a gas depends on the volume it
occupies, and we can observe the effect of changing that volume if the system has
flexible walls. ‘Flexible walls’ is best thought of as meaning that the boundary of the
system is rigid everywhere except for a patch—a piston—that can move in and out.
Think of a bicycle pump with your finger sealing the orifice. Properties are divided
into two classes. An extensive property depends on the quantity of matter in the
system—its extent. The mass of a system is an extensive property; so is its volume.
Thus, 2 kg of iron occupies twice the volume of 1 kg of iron. An intensive property is
independent of the amount of matter present. The temperature (whatever that is) and
the density are examples. The temperature of water drawn from a thoroughly stirred
hot tank is the same regardless of the size of the sample. The density of iron is 8.9 g cm−3 regardless of whether we have a 1 kg block or a 2 kg block. We shall meet many examples of both kinds of property as we unfold thermodynamics, and it is helpful to keep the distinction in mind.
Introducing equilibrium
So much for these slightly dusty definitions. Now we shall use a piston—a movable
patch in the boundary of a system—to introduce one important concept that will then
be the basis for introducing the enigma of temperature and the zeroth law itself.
Suppose we have two closed systems, each with a piston on one side and pinned into
place to make a rigid container (Figure 1). The two pistons are connected with a rigid
rod so that as one moves out the other moves in. We release the pins on the pistons. If
the piston on the left drives the piston on the right into that system, we can infer that
the pressure on the left was higher than that on the right, even though we have not
made a direct measure of the two pressures. If the piston on the right won the battle,
then we would infer that the pressure on the right was higher than that on the left. If
nothing had happened when we released the pins, we would infer that the pressures of the two systems were the same, whatever they might be.
1. If the gases in these two containers are at different pressures, when the pins holding the pistons are released, the pistons move one way or the other until the two pressures are the same. The two systems are then in mechanical equilibrium. If the pressures are the same to begin with, there is no movement of the pistons when the pins are withdrawn, for the two systems are already in mechanical equilibrium.
The technical expression for this condition of equality of pressures is mechanical equilibrium. This observation may seem trivial at this point, but it establishes the analogy that will enable us to introduce the
concept of temperature. Suppose the two systems, which we shall call A and B, are in
mechanical equilibrium when they are brought together and the pins are released.
That is, they have the same pressure. Now suppose we break the link between them
and establish a link between system A and a third system, C, equipped with a piston.
Suppose we observe no change: we infer that the systems A and C are in mechanical
equilibrium and we can go on to say that they have the same pressure. Now suppose
we break that link and put system C in mechanical contact with system B. Even
without doing the experiment, we know what will happen: nothing. Because systems
A and B have the same pressure, and A and C have the same pressure, we can be
confident that systems C and B have the same pressure, and that pressure is a
universal indicator of mechanical equilibrium. Now we move from mechanics to
thermodynamics and the world of the zeroth law. Suppose that system A has rigid
walls made of metal and system B likewise. When we put the two systems in contact,
they might undergo some kind of physical change. For instance, their pressures might
change or we could see a change in color through a peephole. In everyday language
we would say that ‘heat has flowed from one system to the other’ and their properties
have changed accordingly. Don’t imagine, though, that we know what heat is yet: that
mystery is an aspect of the first law, and we aren’t even at the zeroth law yet.
It may be the case that no change occurs when the two systems are in contact even though they are made of metal. In that case we say that the two systems are in thermal equilibrium.
2. A representation of the zeroth law involving (top left) three systems that can be brought into thermal contact. If A is found to be in thermal equilibrium with B (top right), and B is in thermal equilibrium with C (bottom left), then we can be confident that C will be in thermal equilibrium with A if they are brought into contact (bottom right).
Now consider three systems (Figure
2), just as we did when talking about mechanical equilibrium. It is found that if A is
put in contact with B and found to be in thermal equilibrium, and B is put in contact
with C and found to be in thermal equilibrium, then when C is put in contact with A,
it is always found that the two are in thermal equilibrium. This rather trite observation
is the essential content of the zeroth law of thermodynamics: if A is in thermal
equilibrium with B, and B is in thermal equilibrium with C, then C will be in thermal
equilibrium with A. The zeroth law implies that just as the pressure is a physical
property that enables us to anticipate when systems will be in mechanical equilibrium
when brought together regardless of their composition and size, so there exists a
property that enables us to anticipate when two systems will be in thermal equilibrium
regardless of their composition and size: we call this universal property the
temperature. We can now summarize the statement about the mutual thermal
equilibrium of the three systems simply by saying that they all have the same
temperature. We are not yet claiming that we know what temperature is, all we are
doing is recognizing that the zeroth law implies the existence of a criterion of thermal
equilibrium: if the temperatures of two systems are the same, then they will be in
thermal equilibrium when put in contact through conducting walls and an observer of
the two systems will have the excitement of noting that nothing changes. We can now
introduce two more contributions to the vocabulary of thermodynamics. Rigid walls
that permit changes of state when closed systems are brought into contact—that is,
permit the conduction of heat—are called diathermic (from the Greek words for
‘through’ and ‘warm’). Typically, diathermic walls are made of metal, but any
conducting material would do. Saucepans are diathermic vessels. If no change occurs,
then either the temperatures are the same or—if we know that they are different—then
the walls are classified as adiabatic (‘impassable’). We can anticipate that walls are
adiabatic if they are thermally insulated, such as in a vacuum flask or if the system is
embedded in foamed polystyrene. The zeroth law is the basis of the existence of a
thermometer, a device for measuring temperature. A thermometer is just a special
case of the system B that we talked about earlier. It is a system with a property that
might change when put in contact with a system with diathermic walls. A typical
thermometer makes use of the thermal expansion of mercury or the change in the
electrical properties of material. Thus, if we have a system B (‘the thermometer’) and
put it in thermal contact with A, and find that the thermometer does not change, and
then we put the thermometer in contact with C and find that it still doesn’t change,
then we can report that A and C are at the same temperature.
3. Three common temperature scales showing the relations between them. The
vertical dotted line on the left shows the lowest achievable temperature; the two
dotted lines on the right show the normal freezing and boiling points of water
There are several scales of temperature, and how they are established is
fundamentally the domain of the second law. However, it would be too cumbersome
to avoid referring to these scales until then, though formally that could be done, and
everyone is aware of the Celsius (centigrade) and Fahrenheit scales. The Swedish
astronomer Anders Celsius (1701–1744) after whom the former is named devised a
scale on which water froze at 100◦ and boiled at 0◦, the opposite of the current version
of his scale (0◦C and 100◦C, respectively). The German instrument maker Daniel
Fahrenheit (1686–1736) was the first to use mercury in a thermometer: he set 0◦ at the
lowest temperature he could reach with a mixture of salt, ice, and water, and for 100◦
he chose his body temperature, a readily transportable but unreliable standard. On this
scale water freezes at 32◦F and boils at 212◦F (Figure 3). The temporary advantage of
Fahrenheit’s scale was that with the primitive technology of the time, negative values
were rarely needed. As we shall see, however, there is an absolute zero of
temperature, a zero that cannot be passed and where negative temperatures have no
meaning except in a certain formal sense, not one that depends on the technology of
the time. It is therefore natural to measure temperatures by setting 0 at this lowest
attainable zero and to refer to such absolute temperatures as the thermodynamic
temperature. Thermodynamic temperatures are denoted T, and whenever that symbol
is used in this book, it means the absolute temperature with T = 0 corresponding to the
lowest possible temperature. The most common scale of thermodynamic temperatures
is the Kelvin scale, which uses degrees (‘kelvins’, K) of the same size as the Celsius
scale. On this scale, water freezes at 273 K (that is, at 273 Celsius-sized degrees
above absolute zero; the degree sign is not used on the Kelvin scale) and boils at 373
K. Put another way, the absolute zero of temperature lies at −273◦C. Very
occasionally you will come across the Rankine scale, in which absolute temperatures
are expressed using degrees of the same size as Fahrenheit’s.
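The relations among these scales can be captured in a few lines. The following sketch uses the exact modern conversion constants (273.15 rather than the rounded 273 used in the discussion above); the function names are my own:

```python
# Conversions among the temperature scales discussed above.
# Exact relations: T/K = t/degC + 273.15, t/degF = (9/5)*t/degC + 32,
# T/degR = (9/5)*T/K (Rankine degrees are Fahrenheit-sized).

def celsius_to_kelvin(t_c):
    return t_c + 273.15

def celsius_to_fahrenheit(t_c):
    return 9.0 / 5.0 * t_c + 32.0

def kelvin_to_rankine(t_k):
    return 9.0 / 5.0 * t_k

print(celsius_to_kelvin(0.0))        # 273.15 -> water freezes
print(celsius_to_fahrenheit(100.0))  # 212.0  -> water boils
print(kelvin_to_rankine(0.0))        # 0.0    -> absolute zero in both scales
```

Note that the Kelvin and Rankine scales share their zero (absolute zero), which is why "absolute temperature" means the same thing on both; only the degree size differs.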
In each of the first three chapters I shall introduce a property from the point of view of
an external observer. Then I shall enrich our understanding by showing how that
property is illuminated by thinking about what is going on inside the system.
Speaking about the ‘inside’ of a system, its structure in terms of atoms and molecules,
is alien to classical thermodynamics, but it adds deep insight, and science is all about
insight. Classical thermodynamics is the part of thermodynamics that emerged during
the nineteenth century before everyone was fully convinced about the reality of
atoms, and concerns relationships between bulk properties. You can do classical
thermodynamics even if you don’t believe in atoms. Towards the end of the
nineteenth century, when most scientists accepted that atoms were real and not just an
accounting device, there emerged the version of thermodynamics called statistical
thermodynamics, which sought to account for the bulk properties of matter in terms of
its constituent atoms. The ‘statistical’ part of the name comes from the fact that in the
discussion of bulk properties we don’t need to think about the behaviour of individual
atoms but we do need to think about the average behaviour of myriad atoms. For
instance, the pressure exerted by a gas arises from the impact of its molecules on the
walls of the container; but to understand and calculate that pressure, we don’t need to
calculate the contribution of every single molecule: we can just look at the average of
the storm of molecules on the walls. In short, whereas dynamics deals with the
behaviour of individual bodies, thermodynamics deals with the average behaviour of
vast numbers of them. The central concept of statistical thermodynamics as far as we
are concerned in this chapter is an expression derived by Ludwig Boltzmann (1844–
1906) towards the end of the nineteenth century. That was not long before he
committed suicide, partly because he found intolerable the opposition to his ideas
from colleagues who were not convinced about the reality of atoms. Just as the zeroth
law introduces the concept of temperature from the viewpoint of bulk properties, so
the expression that Boltzmann derived introduces it from the viewpoint of atoms, and
illuminates its meaning. To understand the nature of Boltzmann’s expression, we need
to know that an atom can exist with only certain energies. This is the domain of
quantum mechanics, but we do not need any of that subject’s details, only that single
conclusion. At a given temperature—in the bulk sense—a collection of atoms consists
of some in their lowest energy state (their ‘ground state’), some in the next higher
energy state, and so on, with populations that diminish in progressively higher energy
states. When the populations of the states have settled down into their ‘equilibrium’
populations, and although atoms continue to
jump between energy levels there is no net change in the populations, it turns out that
these populations can be calculated from a knowledge of the energies of the states and
a single parameter, β (beta). Another way of thinking about the problem is to think of a
series of shelves fixed at different heights on a wall, the shelves representing the
allowed energy states and their heights the allowed energies. The nature of these
energies is immaterial: they may correspond, for instance, to the translational,
rotational, or vibrational motion of molecules. Then we think of tossing balls
(representing the molecules) at the shelves and noting where they land. It turns out
that the most probable distribution of populations (the numbers of balls that land on
each shelf ) for a large number of throws, subject to the requirement that the total
energy has a particular value, can be expressed in terms of that single parameter β.
The precise form of the distribution of the molecules over their allowed states, or the
balls over the shelves, is called the Boltzmann distribution. This distribution is so
important that it is worth seeing its form. To simplify matters, we shall express it in terms of the ratio of the population of a state of energy E to the population of the lowest state, of energy 0:

N(E)/N(0) = e^(−βE)

where β = 1/(kT) and k is Boltzmann's constant.
4. The Boltzmann distribution is an exponentially decaying function of the
energy. As the temperature is increased, the populations migrate from lower
energy levels to higher energy levels. At absolute zero, only the lowest state is
occupied; at infinite temperature, all states are equally populated
It should not be surprising that an infinite value of β (the value of β when T = 0) is unattainable in a finite number of steps. However, although β is the more natural way of expressing temperatures, it is ill-suited to everyday use. Thus water freezes at 0◦C (273 K), corresponding to β = 2.65 × 10^20 J−1, and boils at 100◦C (373 K), corresponding to β = 1.94 × 10^20 J−1. These are not values that spring readily off the tongue. Nor are the values of β that typify a cool day (10◦C, corresponding to 2.56 × 10^20 J−1) and a warmer one (20◦C, corresponding to 2.47 × 10^20 J−1). The point is that the existence
and value of the fundamental constant k is simply a consequence of our insisting on
using a conventional scale of temperature rather than the truly fundamental scale based on β. The Fahrenheit, Celsius, and Kelvin scales are misguided: the reciprocal of temperature, essentially β, is more meaningful, more natural, as a measure of
temperature. There is no hope, though, that it will ever be accepted, for history and
the potency of simple numbers, like 0 and 100, and even 32 and 212, are too deeply
embedded in our culture, and just too convenient for everyday use. Although
Boltzmann’s constant k is commonly listed as a fundamental constant, it is actually
only a recovery from a historical mistake. If Ludwig Boltzmann had done his work
before Fahrenheit and Celsius had done theirs, then it would have been seen that β was the natural measure of temperature, and we might have become used to expressing temperatures in units of inverse joules, with warmer systems at low values of β and cooler systems at high values. However, conventions had become established, with warmer systems at higher temperatures than cooler systems, and k was introduced, through kβ = 1/T (that is, β = 1/(kT)), to align the natural scale of temperature based on β with the conventional and deeply ingrained one based on T. Thus, Boltzmann's constant is nothing but a conversion factor between a well-established conventional scale and the one that, with hindsight, society might have adopted. Had it adopted β as its measure
of temperature, Boltzmann’s constant would not have been necessary. We shall end
this section on a more positive note. We have established that the temperature, and
specifically β, is a parameter that expresses the equilibrium distribution of the
molecules of a system over their available energy states. One of the easiest systems to
imagine in this connection is a perfect (or ‘ideal’) gas, in which we imagine the
molecules as forming a chaotic swarm, some moving fast, others slow, travelling in
straight lines until one molecule collides with another, rebounding in a different
direction and with a different speed, and striking the walls in a storm of impacts and
thereby giving rise to what we interpret as pressure. A gas is a chaotic assembly of
molecules (indeed, the words ‘gas’ and ‘chaos’ stem from the same root), chaotic in
spatial distribution and chaotic in the distribution of molecular speeds. Each speed
corresponds to a certain kinetic energy, and so the Boltzmann distribution can be used
to express, through the distribution of molecules over their possible translational
energy states, their distribution of speeds, and to relate that distribution of speeds to
the temperature. The resulting expression is called the Maxwell–Boltzmann distribution of speeds, for James Clerk Maxwell, who first derived it.
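As a check on the numbers quoted above, β = 1/(kT) and the population ratio e^(−βE) can be computed directly. This is an illustrative sketch (the function names are mine), using the exact SI value of Boltzmann's constant:

```python
import math

# Reproducing the beta values quoted in the text: beta = 1/(kT),
# with Boltzmann's constant k = 1.380649e-23 J/K (exact in the 2019 SI).

k = 1.380649e-23  # J/K

def beta(T):
    """Natural temperature parameter beta = 1/(kT), in 1/J."""
    return 1.0 / (k * T)

def population_ratio(E, T):
    """Boltzmann ratio N(E)/N(0) = exp(-beta*E) for a state of energy E (J)."""
    return math.exp(-beta(T) * E)

print(f"{beta(273):.3g}")  # ~2.65e+20 1/J: freezing water
print(f"{beta(373):.3g}")  # ~1.94e+20 1/J: boiling water
# Higher T means smaller beta, so upper states are more populated:
print(population_ratio(1e-21, 273) < population_ratio(1e-21, 373))  # True
```

The two printed values match the 2.65 × 10^20 J−1 and 1.94 × 10^20 J−1 quoted in the text, and the comparison shows the migration of population toward higher energy states as the temperature rises.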
Conclusion
Chapter Three
The First Law of Thermodynamics
Introduction
So far we have considered various forms of energy such as heat Q, work W, and total energy E individually, and no attempt has been made to relate them to each other during a
process. The first law of thermodynamics, also known as the conservation of energy
principle, provides a sound basis for studying the relationships among the various
forms of energy and energy interactions. Based on experimental observations, the first
law of thermodynamics states that energy can be neither created nor destroyed; it can
only change forms. Therefore, every bit of energy should be accounted for during a
process. We all know that a rock at some elevation possesses some potential energy,
and part of this potential energy is converted to kinetic energy as the rock falls
(Fig. 3–1). Experimental data show that the decrease in potential energy (mg Δz)
exactly equals the increase in kinetic energy when the air resistance
is negligible, thus confirming the conservation of energy principle. Consider a system
undergoing a series of adiabatic processes from a specified state 1 to another
specified state 2. Being adiabatic, these processes obviously cannot involve any heat transfer, but they may involve several kinds of work interactions.
Figure 3–1: Energy cannot be created or destroyed; it can only change forms.
Careful
measurements during these experiments indicate the following: For all adiabatic
processes between two specified states of a closed system, the net work done is the
same regardless of the nature of the closed system and the details of the process.
Considering that there are an infinite number of ways to perform work interactions
under adiabatic conditions, this statement appears to be very powerful, with a
potential for far-reaching implications. This statement, which is largely based on the
experiments of Joule in the first half of the nineteenth century, cannot be drawn from
any other known physical principle and is recognized as a fundamental principle. This
principle is called the first law of thermodynamics or just the first law. A major
consequence of the first law is the existence and the definition of the property total
energy E. Considering that the net work is the same for all adiabatic processes of a closed system between two specified states, the value of the net work must depend on the end states of the system only, and thus it must correspond to a change in a
property of the system. This property is the total energy. Note that the first law makes
no reference to the value of the total energy of a closed system at a state. It simply
states that the change in the total energy during an adiabatic process must be equal to
the net work done. Therefore, any convenient arbitrary value can be assigned to total
energy at a specified state to serve as a reference point. Implicit in the first law
statement is the conservation of energy. Although the essence of the first law is the
existence of the property total energy, the first law is often viewed as a statement of
the conservation of energy principle. Next we develop the first law or the
conservation of energy relation for closed systems with the help of some familiar
examples using intuitive arguments. First, we consider some processes that involve
heat transfer but no work interactions. The potato baked in the oven is a good example
for this case (Fig. 3–2). As a result of heat transfer to the potato, the energy of the
potato will increase. If we disregard any mass transfer (moisture loss from the
Figure 3–2: The increase in the energy of a potato in an oven is equal to the amount of heat transferred to it.
potato), the increase in the total energy of the potato becomes equal to the amount of
heat transfer. That is, if 5 kJ of heat is transferred to the potato the energy increase of
the potato will also be 5 kJ. As another example, consider the heating of water in a pan on top of a range; the energy of the water increases by the amount of heat transferred to it. Next, consider a well-insulated (i.e., adiabatic) system stirred by a paddle wheel driven by a shaft.
Figure 3–5: The work (shaft) done on an adiabatic system is equal to the increase in the energy of the system.
As a result of the stirring process, the energy of the system will increase. Again, since
there is no heat interaction between the system and its surroundings (Q = 0), the
paddle-wheel work done on the system must show up as an increase in the energy of
the system. Many of you have probably noticed that the temperature of air rises when it is compressed (Fig. 3–6).
This is because energy is transferred to the air in the form of boundary work. In the
absence of any heat transfer (Q = 0), the entire boundary work will be stored in the air
as part of its total energy. The conservation of energy principle again requires that the
increase in the energy of the system be equal to the boundary work done on the
system. We can extend these discussions to systems that involve various heat and
work interactions simultaneously. For example, if a system gains 12 kJ of heat during
a process while 6 kJ of work is done on it, the increase in the energy of the system
during that process is 18 kJ (Fig. 3–7). That is, the change in the energy of a system
during a process is simply equal to the net energy transfer to (or from) the system.
Figure 3–7: The energy change of a system during a process is equal to the net work and heat transfer between the system and its surroundings.
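The bookkeeping in the 12 kJ plus 6 kJ example above can be sketched in a few lines (a minimal illustration; the function name and default arguments are my own):

```python
# Net energy change of a closed system from its heat and work
# interactions (quantities in kJ; "in" quantities are gains).

def energy_change(q_in=0.0, q_out=0.0, w_in=0.0, w_out=0.0):
    """Delta E = (Qin - Qout) + (Win - Wout) for a closed system."""
    return (q_in - q_out) + (w_in - w_out)

# The example from the text: 12 kJ of heat gained while 6 kJ of
# work is done on the system.
print(energy_change(q_in=12.0, w_in=6.0))  # 18.0 kJ
```

Because every argument is a positive "amount" and the direction is carried by the parameter name, no sign convention is needed, which mirrors the in/out bookkeeping used later in the energy-balance discussion.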
Energy Balance
The conservation of energy principle can be expressed as follows: The net change (increase or decrease) in the total energy of the system during a process is equal to the difference between the total energy entering and the total energy leaving the system during that process, that is,

Ein − Eout = ΔEsystem

This relation is applicable to any kind of system undergoing any kind of process. The successful use of this relation requires the evaluation of the energy of the system at the beginning and at the end of the process, and taking their difference.
Note that energy is a property, and the value of a property does not change unless
the state of the system changes. Therefore, the energy change of a system
is zero if the state of the system does not change during the process. Also,
energy can exist in numerous forms such as internal (sensible, latent, chemical,
and nuclear), kinetic, potential, electric, and magnetic, and their sum constitutes the total energy E of a system. In the absence of electric, magnetic, and surface tension effects (i.e., for simple compressible systems), the change in the total energy of a system during a process is the sum of the changes in its internal, kinetic, and potential energies and can be expressed as

ΔE = ΔU + ΔKE + ΔPE
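The decomposition ΔE = ΔU + ΔKE + ΔPE for a simple compressible system can be evaluated numerically. The following is a sketch with illustrative values and a function name of my own:

```python
# Total energy change of a simple compressible system:
# Delta E = Delta U + Delta KE + Delta PE. Values are illustrative.

def delta_E(m, u1, u2, V1, V2, z1, z2, g=9.81):
    """Return (dU, dKE, dPE, total) in J for mass m (kg), specific internal
    energies u1, u2 (J/kg), speeds V1, V2 (m/s), elevations z1, z2 (m)."""
    dU = m * (u2 - u1)
    dKE = 0.5 * m * (V2**2 - V1**2)
    dPE = m * g * (z2 - z1)
    return dU, dKE, dPE, dU + dKE + dPE

# A stationary system at fixed elevation: the kinetic and potential
# terms vanish, so the total change reduces to the internal-energy change.
dU, dKE, dPE, total = delta_E(2.0, 100.0, 150.0, 0.0, 0.0, 0.0, 0.0)
print(total)  # 100.0 J, entirely from dU
```

For most closed systems at rest (like the potato in the oven), ΔKE = ΔPE = 0 and the energy balance involves only ΔU, which is why stationary-system examples track internal energy alone.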
When the initial and final states are specified, the values of the specific internal energies u1 and u2 can be determined directly from the property tables or thermodynamic property relations.
Energy can be transferred to or from a system in three forms: heat, work, and
mass flow. Energy interactions are recognized at the system boundary as they
cross it, and they represent the energy gained or lost by a system during a
process. The only two forms of energy interactions associated with a fixed mass or closed system are heat transfer and work.
1. Heat Transfer, Q Heat transfer to a system (heat gain) increases the energy
of the molecules and thus the internal energy of the system, and heat
transfer from a system (heat loss) decreases it, since the energy transferred out as heat comes from the energy of the molecules of the system.
2. Work Transfer, W An energy interaction that is not caused by a temperature difference between a system and its surroundings is work. A rising piston, a rotating shaft, and an electrical wire crossing the system boundaries are all associated with work interactions. Work transfer to a system (i.e., work done on a system)
increases the energy of the system, and work transfer from a
system (i.e., work done by the system) decreases it since the energy transferred
out as work comes from the energy contained in the system. Car engines and hydraulic, steam, or gas turbines produce work, while compressors, pumps, and mixers consume work.
3. Mass Flow, m Mass flow in and out of the system serves as an additional mechanism of energy transfer. When mass enters a system, the energy of the system increases because mass carries energy with it (in fact, mass is
energy). Likewise, when some mass leaves the system, the energy contained
within the system decreases because the leaving mass takes out some energy
with it. For example, when some hot water is taken out of a water heater and
is replaced by the same amount of cold water, the energy content of the water
heater (the control volume) decreases as a result of this mass interaction.
Figure 3-9: The energy content of a control volume can be changed by mass flow
as well as by heat and work interactions.
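The water-heater example can be sketched numerically with the six-term balance; the helper name and the kJ values below are illustrative, not from the text:

```python
def energy_change(Q_in, Q_out, W_in, W_out, E_mass_in, E_mass_out):
    """Net energy change of a system from heat, work, and mass flow,
    with each quantity taken as a positive 'amount' (kJ)."""
    return (Q_in - Q_out) + (W_in - W_out) + (E_mass_in - E_mass_out)

# Water-heater example: the hot water leaving carries more energy than
# the equal mass of cold water entering, so the tank's energy content drops.
dE = energy_change(Q_in=0.0, Q_out=0.0, W_in=0.0, W_out=0.0,
                   E_mass_in=50.0, E_mass_out=180.0)  # negative: energy decreases
```

Writing every term as a positive "amount" and keeping the direction in the argument name is exactly the bookkeeping convention the text describes next.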
Noting that energy can be transferred in the forms of heat, work, and mass,
and that the net transfer of a quantity is equal to the difference between the
amounts transferred in and out, the energy balance can be written more explicitly
as
Ein − Eout = (Qin − Qout) + (Win − Wout) + (Emass,in − Emass,out) = ΔEsystem
where the subscripts “in’’ and “out’’ denote quantities that enter and leave the
system, respectively. All six quantities on the right side of the equation represent
“amounts,’’ and thus they are positive quantities. The direction of any energy
transfer is described by the subscripts “in’’ and “out’’, so we do
not need to adopt a formal sign convention for heat and work interactions.
When the direction of a heat or work interaction is not known, we can simply
assume a direction (in or out) for it and solve the problem. A
negative result in that case indicates that the assumed direction is wrong,
and it is corrected by reversing the assumed direction. This is just like assuming
a direction for an unknown force when solving a statics problem and reversing
it when a negative result is obtained.
The heat transfer Q is zero for adiabatic systems, the work transfer W is zero
for systems that involve no work interactions, and the energy transport with
mass Emass is zero for systems that involve no mass flow across their boundaries
(i.e., closed systems). Energy balance for any system undergoing any kind of process
can be expressed more compactly as
Ein − Eout = ΔEsystem
For constant rates, the total quantities during a time interval Δt are related to the
quantities per unit time as
Q = Q̇Δt, W = ẆΔt, and ΔE = (dE/dt)Δt
The energy balance can also be expressed on a unit-mass basis as
ein − eout = Δesystem
which is obtained by dividing all the quantities by the mass m of the system.
For a closed system undergoing a cycle, the initial and final states are identical, and
thus ΔEsystem = E2 − E1 = 0. Then the energy balance for a cycle simplifies to
Ein - Eout = 0 or Ein = Eout. Noting that a closed system does not involve any mass
flow across its boundaries, the energy balance for a cycle can be expressed in terms of
heat and work interactions as
Wnet,out = Qnet,in
That is, the net work output during a cycle is equal to the net heat input (Fig. 3–10).
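The cycle result above is a one-line computation; the sketch below (names and kJ values are my own) makes the bookkeeping explicit:

```python
def cycle_net_work(Q_in, Q_out):
    """For a closed system undergoing a cycle, E2 - E1 = 0, so the
    net work output equals the net heat input: W_net,out = Q_in - Q_out."""
    return Q_in - Q_out

# Illustrative cycle: 100 kJ of heat in, 60 kJ rejected -> 40 kJ of net work.
W_net = cycle_net_work(Q_in=100.0, Q_out=60.0)
```

Note that the rejected heat Q_out cannot be zero, which is exactly the point the second-law discussion later makes about heat engines.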
The energy balance (or the first-law) relations already given are intuitive in nature and
are easy to use when the magnitudes and directions of heat and work transfers are
known. However, when performing a general analytical study or solving a problem
that involves an unknown heat or work interaction, we need to assume a direction for
the heat or work interactions. In such cases, it is common practice to use the classical
thermodynamics sign convention and to assume heat to be transferred into the system
(heat input) in the amount of Q and work to be done by the system (work output) in
the amount of W, and then to solve the problem. The energy balance relation in that
case for a closed system becomes
Q − W = ΔE
where Q = Qnet,in = Qin − Qout is the net heat input and W = Wnet,out = Wout − Win
is the net work output. Obtaining a negative quantity for Q or W simply means that the
assumed direction for that quantity is wrong and should be reversed. Various forms of
this “traditional” first-law relation for closed systems are given in Fig. 3–11.
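Under this classical sign convention, an unknown interaction is simply solved for and its sign interpreted afterwards; a minimal sketch (function name and values are illustrative):

```python
def work_output(Q_net_in, dE):
    """Classical sign convention: Q - W = delta_E, so W = Q - delta_E.
    Q is assumed to be heat INTO the system, W work done BY the system."""
    return Q_net_in - dE

# System gains 50 kJ of energy while receiving only 30 kJ of heat:
# W comes out negative, meaning work is actually done ON the system.
W = work_output(Q_net_in=30.0, dE=50.0)
```

A negative result here is not an error; as the text says, it just means the assumed direction of the interaction was wrong.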
The first law cannot be proven mathematically, but no process in nature is known to
have violated the first law, and this should be taken as sufficient proof. Note that if it
were possible to prove the first law on the basis of other physical principles, the first
law then would be a consequence of those principles instead of being a fundamental
physical law itself. As energy quantities, heat and work are not that different, and you
probably wonder why we keep distinguishing them. After all, the change in the
energy content of a system is equal to the amount of energy that crosses the system
boundaries, and it makes no difference whether the energy crosses the boundary
as heat or as work.
Conclusion
Chapter Four
The Second Law of Thermodynamics
Introduction
The second law of thermodynamics asserts that processes occur in a certain direction
and that energy has quality as well as quantity. The first law places no restriction
on the direction of a process, and satisfying the first law does not guarantee that the
process will occur. Thus, we need another general principle (the second law) to identify
whether a process can occur or not.
Fig 4-1: Heat transfer from a hot container to the cold surroundings is possible;
however, the reverse process (although satisfying the first law) is impossible.
A process can occur when and only when it satisfies both the first and the second
laws of thermodynamics. The second law also asserts that energy has a quality.
Preserving the quality of energy is a major concern of engineers. In the above
example, the energy stored in a hot container (higher temperature) has higher quality
(ability to do work) in comparison with the energy contained (at lower temperature) in
the surroundings. The second law is also used in determining the theoretical limits for
the performance of commonly used engineering systems, such as heat engines and
refrigerators.
Thermal energy reservoirs are hypothetical bodies with a relatively large thermal
energy capacity (mass x specific heat) that can supply or absorb finite amounts of heat
without undergoing any change in temperature. Lakes, rivers, the atmosphere, and oceans are
examples of thermal reservoirs. A two‐phase system can be modeled as a reservoir
since it can absorb and release large quantities of heat while remaining at constant
temperature. A reservoir that supplies energy in the form of heat is called a source, and
one that absorbs energy in the form of heat is called a sink.
Heat Engines
Heat engines convert heat to work. There are several types of heat engines, but they
are characterized by the following:
1‐ They all receive heat from a high‐temperature source (oil furnace, nuclear reactor,
etc.)
2‐ They convert part of this heat to work.
3‐ They reject the remaining waste heat to a low‐temperature sink.
4‐ They operate in a cycle.
The thermal efficiencies of work‐producing devices are low. Ordinary spark‐ignition
automobile engines have a thermal efficiency of about 20%, diesel engines about
30%, and power plants on the order of 40%. Is it possible to save the rejected heat
Qout in a power cycle? The answer is NO, because without the cooling in condenser
the cycle cannot be completed. Every heat engine must waste some energy by
transferring it to a low‐temperature reservoir in order to complete the cycle, even in
an idealized cycle.
It is impossible for any device that operates on a cycle to receive heat from a single
reservoir and produce a net amount of work. In other words, no heat engine can have
a thermal efficiency of 100%.
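The Kelvin-Planck limit shows up directly in the thermal efficiency formula; the sketch below (names and values are illustrative, the 20% figure echoes the spark-ignition estimate above) makes this concrete:

```python
def thermal_efficiency(Q_H, Q_L):
    """Thermal efficiency of a heat engine:
    eta = W_net / Q_H = 1 - Q_L / Q_H.
    Since Q_L > 0 (some heat must always be rejected to a sink),
    eta is always strictly less than 1, i.e. less than 100%."""
    return 1.0 - Q_L / Q_H

# Illustrative spark-ignition engine: 100 kJ in, 80 kJ rejected -> 20%.
eta = thermal_efficiency(Q_H=100.0, Q_L=80.0)
```

Setting Q_L = 0 would give an efficiency of 100%, which is exactly the device the Kelvin-Planck statement says cannot be built.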
Fig 4.3: A heat engine that violates the Kelvin‐Planck statement of the second law
cannot be built.
Refrigerators and Heat Pumps
The performance of refrigerators and heat pumps is expressed in terms of the coefficient of
performance (COP), which is defined as
COP_R = QL/Wnet,in for a refrigerator and COP_HP = QH/Wnet,in for a heat pump.
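In a short sketch (helper names and values are my own), the COP is the desired effect divided by the net work input, and the two definitions are linked through the cycle energy balance QH = QL + Wnet,in:

```python
def cop_refrigerator(Q_L, W_net_in):
    """COP of a refrigerator: desired cooling effect / work input."""
    return Q_L / W_net_in

def cop_heat_pump(Q_H, W_net_in):
    """COP of a heat pump: desired heating effect / work input.
    Since Q_H = Q_L + W_net_in for the cycle, COP_HP = COP_R + 1."""
    return Q_H / W_net_in

# Illustrative: 300 kJ removed from the cold space per 100 kJ of work.
cop_r = cop_refrigerator(Q_L=300.0, W_net_in=100.0)
cop_hp = cop_heat_pump(Q_H=400.0, W_net_in=100.0)
```

Unlike thermal efficiency, a COP can be (and usually is) greater than 1, which is why refrigerators and heat pumps are rated by COP rather than by efficiency.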
A reversible process is defined as a process that can be reversed without leaving any
trace on the surroundings. It means both system and surroundings are returned to their
initial states at the end of the reverse process. Processes that are not reversible are
called irreversible. Reversible processes do not actually occur in nature; they are only
idealizations of actual processes. We use the reversible-process concept because a) they are easy to
analyze (since system passes through a series of equilibrium states); b) they serve as
limits (idealized models) to which the actual processes can be compared. Some
factors that cause a process to become irreversible:
• Friction
• Unrestrained expansion and compression
• Mixing of two substances
• Heat transfer (finite ∆T)
• Inelastic deformation
• Chemical reactions
In a reversible process things happen very slowly, without any
resisting force and without any space limitation → everything happens in a highly
organized way (it is not physically possible; it is an idealization). Internally
reversible process: if no irreversibilities occur within the boundaries of the system.
In these processes a system passes through a series of equilibrium states, and
when the process is reversed, the system passes through exactly the same equilibrium
states while returning to its initial state. Externally reversible process: if no
irreversibility occurs outside the system boundaries during the process. Heat transfer
between a reservoir and a system is an externally reversible process if the surface of
contact between the system and reservoir is at the same temperature. Totally
reversible (reversible): both externally and internally reversible processes.
Conclusion
Thermal energy reservoirs are hypothetical bodies with a relatively large thermal
energy capacity that can supply or absorb finite amounts of heat without undergoing
any change in temperature. A reservoir that supplies energy in the form of heat is
called a source and one that absorbs energy in the form of heat is called a sink. Every
heat engine must waste some energy by transferring it to a low‐temperature reservoir
in order to complete the cycle, even in an idealized cycle. Refrigerators transfer heat from
a low‐temperature medium to a high‐temperature one. Similarly, a heat pump
absorbs heat from a low‐temperature source and supplies the heat to a warmer
medium. A reversible process is defined as a process that can be reversed without
leaving any trace on the surroundings. Reversible processes do not actually occur in nature;
they are only idealizations of actual processes. In these processes a system passes through
a series of equilibrium states, and when the process is reversed, the system passes
through exactly the same equilibrium states while returning to its initial state.
Externally reversible process: if no irreversibility occurs outside the system
boundaries during the process. Heat transfer between a reservoir and a system is an
externally reversible process if the surface of contact between the system and
reservoir is at the same temperature.
Chapter Five
The Third Law of Thermodynamics
Introduction
In cold bodies the atoms find potential energy barriers difficult to surmount, because
the thermal motion is weak. That is the reason for liquefaction and solidification when
the intermolecular van der Waals forces overwhelm the free-flying gas atoms. If the
temperature tends to zero, no barriers – however small – can be overcome so that a
body must assume the state of lowest energy. No other state can be realized and
therefore the entropy must be zero. That is what the third law of thermodynamics
says. On the other hand, cold bodies have slow atoms, and slow atoms have large de
Broglie wavelengths, so that the quantum mechanical wave character may create
macroscopic effects. This is the reason for gas degeneracy which is, however, often
disguised by the van der Waals forces. In particular, in cold mixtures even the
smallest malus for the formation of unequal next neighbors prevents the existence of
such unequal pairs and should lead to un-mixing. This is in fact observed in a cold
mixture of liquid He3 and He4. In the process of un-mixing the mixture sheds its
entropy of mixing. Obviously, it must do so, if the entropy is to vanish. Let us
consider low-temperature phenomena in this chapter and let us record the history of
low-temperature thermodynamics and, in particular, of the science of cryogenics,
whose objective it is to reach low temperatures. The field is currently an active field
of research and lower and lower temperatures are being reached.
Capitulation of Entropy
It may happen – actually it happens more often than not – that a chemical reaction is
constrained. This means that, at a given pressure p, the reactants persist at
temperatures where, according to the law of mass action, they should long have been
converted into resultants; the Gibbs free energy g is lower for the resultants than for
the reactants, and yet the resultants do not form. We may say that the mixture of
reactants is under-cooled, or overheated depending on the case. As we have
understood on the occasion of the ammonia synthesis, the phenomenon is due to
energetic barriers which must be overcome – or bypassed – before the reaction can
occur. The bypass may be achieved by an appropriate catalyst. An analogous behavior
occurs in phase transitions, mostly in solids: It may happen that there exist different
crystalline lattice structures in the same substance, one stable and one meta-stable, i.e.
as good as stable or, anyway, persisting nearly indefinitely. Walther Hermann Nernst
(1864–1941) studied such cases, particularly for low and lowest temperatures. Take
tin for example. Tin, or pewter, as white tin is a perfectly good metal at room
temperature – with a tetragonal lattice structure – popular for tin plates, pewter cups,
organ pipes, or toy soldiers. Kept below 13.2°C and 1 atm, white tin crumbles into the
unattractive cubic grey tin in a few hours. However, if it is not given the time, white
tin is meta-stable below 13.2°C and may persist virtually forever. It is for a pressure
of 1atm that the phase equilibrium occurs at 13.2°C. At other pressures that
temperature is different and we denote it by Twg(p); its value is known for all p. At
that temperature Δg = gw − gg vanishes, and below it we have gw > gg, so that grey tin
is the stable phase. Δg may be considered as the frustrated driving force for the
transition, and it is sometimes called the affinity of the transition. It depends on T and
p and has two parts.
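The two parts are, in standard thermodynamic terms, an energetic part and an entropic part, Δg = Δh − TΔs. A minimal sketch (function name and all numerical values are illustrative, chosen only so the sign change falls near the tin transition temperature):

```python
def affinity(dh, ds, T):
    """Affinity of a transition, Δg = Δh - T*Δs, split into its two parts:
    an energetic part dh and an entropic part -T*ds (illustrative units)."""
    return dh - T * ds

# Illustrative numbers: the affinity vanishes at T = dh/ds (~286 K, roughly
# the 13.2 deg C white/grey tin equilibrium), is positive below it (grey tin
# stable), and negative above it (white tin stable).
T_eq = 2.0 / 0.007
dg_cold = affinity(dh=2.0, ds=0.007, T=250.0)   # positive: transition favored
dg_warm = affinity(dh=2.0, ds=0.007, T=320.0)   # negative: white tin stable
```

The sign of the affinity decides which phase is stable, while the barrier discussed in the text decides whether the transition actually happens in finite time.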
From some measurements Nernst convinced himself that this expression – which after
all is equal to Δs(T,p) for T → 0 – is zero, irrespective of the pressure p, and for all
transitions. So he came to pronounce his law, or theorem, which we may express by
saying that the entropies of different phases of a crystalline body become equal for T
→ 0, irrespective of the lattice structure. Moreover, they are independent of the
pressure p. This became known as the third law of thermodynamics. We recall
Berthelot, who had assumed the affinity to be given by the heat of transition. And we
recall Helmholtz, who had insisted that the contribution of the entropy of the
transition must not be neglected. Helmholtz was right, of course, but the third law
provides a low temperature niche for Berthelot: Not only does T·Δs(T,p) go to zero,
Δs(T,p) itself goes to zero. The entropy capitulates to low temperature and gives up
its efficacy to influence reactions and transitions.
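A practical consequence of the third law is that absolute entropies can be computed by integrating c/T from absolute zero with s(0) = 0. The sketch below (function name, Debye-like heat capacity c = a·T³, and units are all my own illustrative assumptions) does this numerically:

```python
def absolute_entropy(T, a=1e-3, n=400):
    """Absolute entropy via the third law, s(0) = 0:
    s(T) = integral from 0 to T of c(T')/T' dT', with a Debye-like
    low-temperature heat capacity c = a*T'**3 (illustrative units).
    The integrand c/T' = a*T'**2 stays finite as T' -> 0, so the
    integral converges; it is evaluated by a composite trapezoidal rule."""
    h = T / n
    total = 0.0
    for i in range(n):
        t0, t1 = i * h, (i + 1) * h
        total += 0.5 * h * (a * t0**2 + a * t1**2)
    return total

# Exact value for this heat capacity is a*T**3/3.
s10 = absolute_entropy(10.0)
```

That the integrand vanishes rather than diverges at T' = 0 is precisely what the third law guarantees; without s → 0, the lower limit of the integral would be ill-defined.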
Liquefying Gases
It is not easy to lower temperatures and the creation of lower and lower temperatures
is in itself a fascinating chapter in the history of thermodynamics which we shall now
proceed to consider. The chapter is not closed, because low-temperature physics is at
present an active field of research. Currently the world record for the lowest
temperature in the universe stands at 1.5 μK, which was reached at the University
of Bayreuth in the early 1990’s. Naturally the cold spot was maintained only for some
hours. Such a value was, of course, far below the scope of the pioneers in the 19th
century who set themselves the task of liquefying the gases available to them and
then, perhaps, reach the solid phase. The easiest manner to cool a gas is by bringing it
in contact with a cold body and let a heat exchange take place. But that requires the
cold body to begin with, and such a body may not be available. No gas – apart from
water vapour – could be liquefied in this manner in the temperate zones of Europe
where most of the research was done. Since liquids occupy only a small portion of the
volume of gases at the same pressure, it stands to reason that a high pressure may be
conducive to liquefaction, just as a low temperature is. Both together should be even
better. That idea occurred to Michael Faraday – a pioneer of both electromagnetism
and cryogenics, the physics of low-temperature-generation – in 1823. He combined
high pressure and low temperature in an ingenious manner by using a glass tube
formed like a boomerang, cf. Fig. 5-1. Some manganese dioxide with hydrochloric
acid was placed at one end. The tube was then sealed and gentle heating liberated the
gas chlorine which mixed with the air of the tube and, of course, raised the pressure.
The other end was put into ice water and it turned out that chlorine condensed at that
end and formed a puddle at 0°C and high pressure.
Conclusion
A constrained reaction means that, at a given pressure p, the reactants persist at
temperatures where, according to the law of mass action, they should long have been
converted into resultants; the Gibbs free energy g is lower for the resultants than for
the reactants, and yet the resultants do not form. Tin, or pewter, as white tin is a
perfectly good metal at room temperature – with a tetragonal lattice structure –
popular for tin plates, pewter cups, organ pipes, or toy soldiers. Kept below 13.2°C
and 1 atm, white tin crumbles into the unattractive cubic grey tin in a few hours. At
other pressures that temperature is different and we denote it by Twg(p); its value is
known for all p. At that temperature Δg = gw − gg vanishes, and below it we have
gw > gg, so that grey tin is the stable phase. It is not easy to lower temperatures, and
the creation of lower and lower temperatures is in itself a fascinating chapter in the
history of thermodynamics. Since liquids occupy only a small portion of the volume
of gases at the same pressure, it stands to reason that a high pressure may be
conducive to liquefaction, just as a low temperature is.
References