

The Philosophy of Energy.
!
Manuel DeLanda.
!
!
The subject of energy has great practical significance in connection with questions
about scarcity and finite reserves, the economics of transforming one form of
energy into another, and the merits of different forms relative to their side effects
(pollution, degradation.) But the subject of energy has also great philosophical
importance due to its conceptual history and to the large number of cognitive tools
(statements, problems, explanatory and classificatory strategies) that have been
produced in the course of investigating its different forms and transformations.
(See Appendix at the end of this essay). In the early seventeenth century, when the
field of chemistry was beginning to differentiate from the material culture of
pharmacists, metallurgists, and alchemists, everything that was manipulated in the
laboratory was considered to be a material substance. There were, on one hand,
substances that could be reacted with one another, such as acids and alkalis, so that
their capacity to affect each other was easy to verify, but which could also be
weighed and thus be attributed a certain corporality. There were, on the other hand,
incorporeal or imponderable substances, such as air, electricity, magnetism, heat,
light, and fire, which had capacities to affect – gases accumulating in a closed
container could make it explode; fire and heat made metals expand – but which
could not be weighed or even captured. 1 These were the most problematic
substances belonging to the chemical domain and their mastery would take more
than a century, involving practitioners and resources from a variety of fields.

The first incorporeal to be demystified was air, thanks to the invention of an apparatus to capture the gases produced by chemical reactions. This led to the
discovery of a variety of “airs” when chemists established that each of the captured
vapors reacted differently with other liquid or solid substances. Because of the
belief that Air was an elementary substance (one of the basic principles inherited
from Aristotle) the explanation of the different chemical capacities was at first
couched in terms of mixtures: there was only one Air but contamination with
impurities gave it different properties. Eventually, the chemical community agreed
that the different airs were in fact different substances, but reaching consensus
demanded a great conceptual advancement: if the different airs were to be treated
like any other substance then their vaporous state had to be conceived as being
generic to all substances. In other words, the concept of the gas state had to be
introduced, and its links with the liquid and solid states disentangled. By the last
quarter of the eighteenth century, all three states were conceived as depending on
the amount of heat that a given substance possessed. 2 We should not, however,
think that this implied something like the late nineteenth-century view that what
distinguished a gaseous substance from a liquid or solid one was the amount of
energy in it, because heat was thought of as a material substance (caloric.) This
belief led to some strange assertions, like the statement that oxygen gas was a
compound of oxygen and heat, not an elementary substance. Nevertheless, the
concept that matter can exist in different phases, and the statement that phase
transitions tend to occur at specific critical thresholds of some property like
temperature, began to take hold in the chemical community. From then on, the
characteristic boiling point of different substances was made into a signature of
their identity.
!
As far as the incorporeals are concerned this was the last great contribution
of chemists. Fire and heat, light, electricity and magnetism, were mastered in
subfields belonging to physics. Thermodynamics, in particular, developed around
the discovery that one of the incorporeals, heat, could be transformed into
mechanical work, and that precise quantitative statements about this transformation
(the mechanical equivalent of heat) could be made. Thus, while heat and fire could
still not be weighed, they could be converted into a form in which their capacities
to affect could be measured. 3 When other incorporeals were found to be
transformable into heat, and when heat was found to have some of the properties of
light – it could be focused, reflected, and refracted – the concept that all these
mysterious substances were in fact manifestations of one and the same entity
became irresistible. Thus was the concept of energy as a non-substantial entity
born, alongside many quantitative statements about its transformations among
different forms.
!
From a philosophical point of view, thermodynamics (and its chemical
counterpart, physical chemistry) made several important conceptual contributions.
The first is the distinction between extensive and intensive properties. The
textbook characterization of this distinction emphasizes their divisibility, or lack
thereof. Extensive properties like length, area, volume or amount of energy, can be
easily subdivided: a ruler one meter long divided into two halves yields two half
meter rulers. Intensive properties, on the other hand, properties like temperature,
pressure, speed, density, and voltage, cannot. Subdividing a given volume of water
at ninety degrees of temperature, for example, does not yield two half volumes at forty-five degrees, but two halves at the original temperature. The textbook definition, however, leaves out what is most important philosophically.
One characteristic was already mentioned: all intensive properties are marked by
critical thresholds at which matter spontaneously changes from one to another type
of organization. But an even more important aspect is that differences in intensity
store potential energy, energy that can be used to drive a variety of processes. We
can illustrate this by comparing extensive and intensive maps of our planet.
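
Before turning to that illustration, the textbook criterion can be put in a very compact form. The following is only a toy sketch (the numerical values for a liter of water at ninety degrees are illustrative, not measured): splitting the system in two halves every extensive property and leaves every intensive one untouched.

```python
# Minimal sketch: splitting a homogeneous body of water in two halves its
# extensive properties but leaves its intensive ones unchanged.
# The numbers below are illustrative placeholders, not measured data.

def split_in_two(extensive, intensive):
    """Return the properties of one half of a homogeneous system."""
    half_extensive = {name: value / 2 for name, value in extensive.items()}
    return half_extensive, dict(intensive)  # intensive values copied unchanged

extensive = {"volume_liters": 1.0, "energy_joules": 313000.0}
intensive = {"temperature_celsius": 90.0, "density_kg_per_liter": 0.965}

half_ext, half_int = split_in_two(extensive, intensive)
print(half_ext)   # {'volume_liters': 0.5, 'energy_joules': 156500.0}
print(half_int)   # {'temperature_celsius': 90.0, 'density_kg_per_liter': 0.965}
```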
!
An extensive map shows the distribution of land masses and oceans,
displaying the differently sized areas that they occupy and the lengths of coastlines,
as well as their conventional subdividing boundaries, such as the frontiers that
define a country. Unlike these maps, meteorological maps are intensive: a zone of
high pressure here, a zone of low pressure there; a cold front here, a warm front
there; a mass of air moving at slow speed here, a fast moving air mass there. These
intensive differences, in turn, are treated not as static and sterile properties but as
mobile and productive ones, since thunder storms, hurricanes, cloud formations,
wind circuits, and the other entities that inhabit the coupled system hydrosphere-
atmosphere, are produced by these differences. In a sense, while an extensive map
shows the product of a process, an intensive one shows the process itself. Take for
example the land masses that appear on an extensive map. To observe the tectonic
processes behind the origin of the continents – as well as of their most prominent
features like the Himalayas or the Rocky Mountains – we would need to dig deep
into the surface of these land masses to reach the active lava flows underneath.
Driven by intense temperature differences, these lava flows act like gigantic
conveyor belts, transporting the tectonic plates on which the continents are
embedded, making them clash and folding layers of rock into mountain ranges.
The technical term for these conveyor belts is “convection cells”. Convection is a
type of periodic or rhythmic flow of matter very different from uniform or laminar
flow, as well as from turbulent flow. The sequence of flow regimes (laminar, convective, turbulent) is characterized by sharp transitions from one to the next,
transitions that occur at specific critical values of intensity. This sequence is not
very different from the sequence gas-liquid-solid: much as a liquid has more
energy than a solid but less than a gas, so convective flow has more energy than
laminar flow but less than turbulent flow.
!
Thus far we have two important concepts: that of a state of matter or a
regime of flow that depends on the amount of energy in a system, and that of an
intensive difference – also referred to as a gradient – storing useful energy that can
be tapped into to drive processes. The concept of a gradient is highly significant to
a materialist philosopher because, unlike idealist philosophers who believe the
world is a product of the human mind, a materialist must believe in a mind-
independent world. But a materialist cannot explain the enduring identity of the
autonomous inhabitants of the world by postulating that they all possess an
essence, a transcendent core containing all that is necessary to make an entity what
it is. Rather, materialist philosophers must account for the historical synthesis of
those entities, with the term “history” referring not just to human history, but to
cosmological, geological, and biological history. In nineteenth-century versions of
materialism, this synthesis was explained using Hegelian dialectics, in which a
single synthetic operation (the negation of the negation) was assumed to be
responsible for the entire furniture of the world. Unlike this a priori approach to
synthesis, more recent versions of materialism take an a posteriori stance towards
the problem: it is an empirical question just how many different forms of synthesis
there are. But most of these forms do have something in common: they are all
driven by gradients. So far we have discussed physical gradients, but there are
chemical and biological ones as well. In chemistry the relevant intensive property
is called chemical potential. Two substances, one acid and one alkali, when they
are brought into contact, form such a gradient, as do substances that oxidize easily
and substances that reduce easily. The latter is called a “redox gradient” and it is
thought to be the main intensive difference tapped into by the earliest living
creatures in order to fuel their evolution. 4
!
To illustrate biological gradients we need a bit of evolutionary history. The
earliest bacteria appeared on this planet roughly three and a half billion years ago
scavenging the products of non-biological chemical processes using fermentation;
a billion years later they evolved the capacity to tap into the solar gradient via
photosynthesis, producing oxygen as a toxic byproduct; and one billion years after
that they developed respiration, the capacity to use oxygen to greatly increase the
efficiency of the other two strategies. To give an idea of the gains in efficiency of
these metabolic landmarks we can use some numbers obtained from contemporary
microorganisms. Using the least efficient process, fermentation, 180 grams of
sugar can be broken down to yield 20,000 calories of energy. The sugar used as
raw material for this chemical reaction was originally taken ready-made from the
environment, but with photosynthesis ancient bacteria could now produce it:
using 264 grams of carbon dioxide, 108 grams of water, and 700,000 calories taken
from sunlight, they could produce the same 180 grams of sugar, plus 192 grams of
oxygen as waste product. With the advent of respiration, in turn, that waste product
could be used to burn the 180 grams of sugar to produce 700,000 calories of
energy. 5 Thus, adding photosynthesis to fermentation made the growth of the
earliest populations of living creatures self-sustaining, while adding respiration
produced a net surplus of bacterial flesh (or “biomass”). That surplus, in turn,
became a gradient of biomass, a gradient that could be tapped into by the ancestors
of contemporary unicellular predators like paramecia or amoebae.
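
Since the argument is quantitative, it is worth running the numbers. The sketch below merely restates the figures quoted above (taken from Wald's estimates; the variable names and the bookkeeping are added here): the mass balance of photosynthesis closes exactly, and respiration extracts thirty-five times more energy from the same sugar than fermentation does.

```python
# A small bookkeeping check using only the figures quoted in the paragraph above.

sugar_g = 180.0               # glucose broken down or synthesized
fermentation_cal = 20_000.0   # energy yield of fermenting 180 g of sugar
sunlight_cal = 700_000.0      # solar energy needed to photosynthesize 180 g of sugar
respiration_cal = 700_000.0   # energy yield of burning 180 g of sugar with oxygen

co2_g, water_g, oxygen_g = 264.0, 108.0, 192.0

# Mass balance of photosynthesis: inputs equal outputs (372 g in, 372 g out).
assert co2_g + water_g == sugar_g + oxygen_g

# Respiration extracts 35 times more energy from the same sugar than fermentation,
# and (by these figures) recovers essentially all of the solar energy invested.
print(respiration_cal / fermentation_cal)   # 35.0
print(respiration_cal / sunlight_cal)       # 1.0
```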
!
The next conceptual distinction that a philosophy of energy must draw is
that between equilibrium and stability. When classical thermodynamics began to
consolidate the available knowledge about gradients, the mathematical resources
available to construct models were limited and physicists had no choice but to
study an energy transformation as if it had already reached equilibrium, the point at
which intensive differences have disappeared and the quality of the energy has
reached its maximum of deterioration, losing its capacity to drive processes. The
concept of entropy was introduced to quantify the degree of deterioration of
energy, so the equilibrium state could also be characterized as the state at which
the entropy of the process has been maximized so it stays constant. Despite its
limitations, the study of the equilibrium state was important because all gradients
have an inherent tendency to dissipate, that is, all gradients tend towards this state.
But concentrating on the final state meant that the process through which it was
reached was not modeled. Making the path to equilibrium part of a model involved
the explicit representation of the production of entropy in a process, since quality
of energy could not be assumed to remain constant at a maximum value. 6 Also,
while in the equilibrium case gradients could be ignored (since they have already
disappeared) in non-equilibrium models gradients and the flows they drive had to
be explicitly included. If the gradients were of low intensity they defined near
equilibrium states in which the relationship between gradients and flows could be
modeled as linear or proportional: small causes produced small effects, and large
causes produced large effects. High intensity gradients, on the other hand,
characterized far from equilibrium states in which the causal relationship was
nonlinear: small changes produced large effects and vice versa. 7
!
It is in the non-equilibrium case that it is possible to make a clear distinction
between equilibrium proper and stability. A non-equilibrium system must be
assumed to be open to a flow of high-quality energy from the outside, a flow that
constantly replenishes the gradients. This flow effectively prevents the state of
equilibrium from being reached but this does not mean that the system becomes
unstable. On the contrary, away from equilibrium, systems discover that there are other sources of stability. Let’s illustrate the simplest case: the difference between
equilibrium and steady-state stability in the case of chemical reactions. Chemical
gradients drive a flow of matter, not of energy, but most reactions are also affected
by properties like temperature, and energy transformations always occur in them. A
reaction is at equilibrium when all chemical change has ceased, when nothing
significant happens because the substances involved (say, an acid and an alkali)
have been fully converted into a new compound (a neutral salt.) A reaction is at a
steady-state, on the other hand, when the forward reaction (acid + alkali = salt) is
counteracted by the backward reaction, a salt decomposed into an acid and an
alkali. As in the equilibrium case nothing seems to happen (since the two opposite
reactions balance each other) but transformations are still taking place. This is the
typical form of stability that takes place near equilibrium, when the gradients are of
low intensity and the causality is linear. Nevertheless, steady-state stability and
equilibrium are very different despite their superficial similarity: one is static the
other is dynamic. To express this more technically, thermodynamic systems can be
classified by the extremum principle that defines their long term tendencies: first
there are isolated systems, in which there are no flows of energy or matter, tending
irreversibly towards a state of maximum entropy; second, there are closed systems,
with flows of energy but not matter, tending towards a state of minimum free
energy; and third, there are open systems with low-intensity flows of both matter
and energy, that have a tendency to move towards the state with a minimum
entropy production. 8 In all three cases the final state is stationary, but not
necessarily static.
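
The contrast between a static and a dynamic form of stability can be made concrete with a toy simulation (the first-order kinetics and the rate constants below are illustrative assumptions, not taken from the sources cited here): the concentrations eventually stop changing, yet the forward and backward reactions keep running at equal, nonzero rates.

```python
# Toy kinetics for a reversible reaction A <-> B: the system settles into a
# state where concentrations no longer change even though both reactions
# keep running. Rate constants and time step are arbitrary illustrative values.

kf, kb = 0.3, 0.1     # forward and backward rate constants (hypothetical)
a, b = 1.0, 0.0       # initial concentrations
dt = 0.01

for step in range(10_000):
    forward = kf * a          # A -> B
    backward = kb * b         # B -> A
    a += (backward - forward) * dt
    b += (forward - backward) * dt

print(round(a, 3), round(b, 3))            # ~0.25 and ~0.75: no further net change
print(round(kf * a, 3), round(kb * b, 3))  # ~0.075 and ~0.075: both still running
```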
!
Before discussing the kinds of stability that become available to systems
driven by high intensity gradients, let’s delve a little deeper into the conceptual
questions raised by stationary states. In the three cases just mentioned, the long
term tendencies are defined by a singularity, a maximum or a minimum. The term
“singular” signifies special, remarkable, noteworthy, as opposed to ordinary. The
perfect illustration of a singularity in physics is a critical threshold of the kind mentioned before, such as the boiling point of water. This point is special or singular because, as
we said, it marks a transition from quantitative to qualitative change. So does the
freezing point of water, occurring at zero degrees centigrade. But between these
two singularities all other points are ordinary because nothing worthy of our
attention occurs when water happens to be at that temperature. Singularities do not
have to be points. The boiling point of water is located at 100 degrees centigrade of
temperature, but only if the pressure of air above us is that of sea level. For other
values of pressure, the boiling and freezing points have other values. We can map
these values using a phase diagram, a diagram with as many dimensions as the
number of intensive parameters affecting the body of water. When temperature is
the sole parameter, the diagram is one-dimensional, temperature values forming a
linear series in which the thresholds appear as points. Adding pressure makes the
diagram two dimensional and the singularities become lines. Moreover, the lines
form patterns that reveal previously hidden complexity. The two-dimensional
phase diagram of water, for example, is not structured by two parallel lines running
through the zero and one hundred degrees points of temperature. If it were, adding
pressure as an extra parameter would not add any new information. But in reality,
the lines are not parallel but form a shape like the letter “Y”. The part
of the diagram representing sea level pressure is structured by the upper part of the
Y, so a perpendicular line of temperature values intersects its two arms at the two
points just mentioned. But at lower pressures the map is shaped by the lower part
of the Y, so a line of temperature values intersects it only once. This means that at
low pressures there are only two distinct phases, solid and gas, one transforming
directly into the other in a phase transition called “sublimation”. 9 In general, if the
number of dimensions of the diagram is represented by the variable N, singularities
are always entities with N-1 dimensions.
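
The geometry of this diagram can be caricatured in a few lines of code. The sketch below is purely schematic (the triple-point values are rounded and the boundary lines are straight-line stand-ins for the real curves), but it captures the point: below the triple-point pressure there is a single boundary and thus only sublimation, above it there are two.

```python
# A deliberately crude sketch of the "Y"-shaped diagram described above, using
# approximate numbers for water (triple point near 0.01 C and 0.006 atm); the
# straight-line boundaries below are schematic, not real thermodynamic data.

T_TRIPLE, P_TRIPLE = 0.01, 0.006   # degrees Celsius, atmospheres (approximate)

def phase_of_water(t_celsius, p_atm):
    if p_atm < P_TRIPLE:
        # Lower branch of the Y: a single boundary, the sublimation line,
        # so solid turns directly into gas.
        return "solid" if t_celsius < T_TRIPLE else "gas"
    # Upper branches of the Y: a (here exactly vertical) melting line and a
    # boiling line, crudely interpolated between the triple point and 100 C at 1 atm.
    melting_t = T_TRIPLE
    boiling_t = T_TRIPLE + (100.0 - T_TRIPLE) * (p_atm - P_TRIPLE) / (1.0 - P_TRIPLE)
    if t_celsius < melting_t:
        return "solid"
    return "liquid" if t_celsius < boiling_t else "gas"

print(phase_of_water(25.0, 1.0))     # liquid: sea-level pressure, room temperature
print(phase_of_water(25.0, 0.001))   # gas: below the triple-point pressure
```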
!
A phase diagram is just a graphic representation of data from laboratory
measurements of physical phenomena. But singularities can also be found in
mathematical models of those phenomena, and historically the term “singularity"
first appeared in that context. The concept was introduced in the eighteenth-
century, when an algorithm was invented to automatically find the minima,
maxima, and inflection points in the solutions to differential equations. This
algorithm was called the calculus of variations, and its conceptual importance goes
beyond the practical utility it offered physicists at the time: variational principles
are used today in almost every branch of physics. 10 To understand the importance
of mathematical singularities we need to make the connection between laboratory
phenomena and differential equations. The latter are used to capture dependencies
in the way the intensive and extensive properties that characterize a phenomenon
change. The intensity of the gravitational field between two celestial bodies, for
example, changes in direct proportion to the product of the bodies’ masses (one
dependency) and in inverse proportion to the square of the distance between them
(another dependency). That a given property varies in the same or the opposite
direction than other properties is something that can be checked by conducting
suitable experiments. Thus, in a very real sense, a mathematical model can behave
in a way that mimics or tracks the behavior of a material or energetic system, so
that discoveries about the former become significant when applied to the latter.
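
The dependencies just described are those of the familiar Newtonian formula, quoted here only to make the two proportionalities explicit:

$$ F \;=\; G\,\frac{m_1\, m_2}{r^{2}} $$

The intensity F grows in direct proportion to the product of the masses and shrinks in inverse proportion to the square of the separation r, and it is exactly this pattern of covariation that suitable experiments can check.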
!
As far back as the mid-seventeenth century, mathematicians were dreaming of
ways to strengthen this connection. One powerful idea was that they needed to
match the space of possible states of the system being modeled to the space of
possible solutions of the differential equations doing the modeling. Thus, if the
hypothesis was that light propagates between two points so as to minimize travel
time, we could form the set of all possible paths joining these two points (straight
paths, crooked paths, wavy paths) and try to find a procedure that picked among
these possibilities the path that captured the tendencies of real light rays. The main
problem was to make sure that the possibility space was maximally inclusive, that
is, that it did not leave out relevant alternatives. One solution was to generate all
the paths through the variation of a single parameter. 11 However, there are many
physical phenomena in which the space of possible states cannot be parametrized
by a discrete set of variables. The calculus of variations was created to tackle these
more complex cases. By the middle of the nineteenth century, all the different
models that had been created to study classical physical phenomena had become
unified under a single master equation and a single extremum principle: the
equation is called a Hamiltonian, and the principle is that all classical systems have
a tendency to be in the state that minimizes the difference between potential and
kinetic energy. 12
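
Fermat's recipe can be made concrete with a small numerical sketch. The geometry and the two speeds below are invented for the example, and a brute-force search over candidate crossing points stands in for the calculus of variations proper, but the path of least travel time does recover Snell's law of refraction.

```python
# Numerical illustration of Fermat's light-ray problem: among many candidate
# paths, pick the one that minimizes travel time. All numbers are made up.

import math

# Light travels from A = (0, 1) in a fast medium to B = (1, -1) in a slow
# medium; the interface is the line y = 0 and the ray crosses it at (x, 0).
v1, v2 = 1.0, 0.75          # speeds in the two media (hypothetical)
ax, ay, bx, by = 0.0, 1.0, 1.0, -1.0

def travel_time(x):
    leg1 = math.hypot(x - ax, ay) / v1
    leg2 = math.hypot(bx - x, by) / v2
    return leg1 + leg2

# Pick, among many candidate crossing points, the one minimizing travel time.
best_x = min((i / 10_000 for i in range(10_001)), key=travel_time)

# Fermat's principle predicts Snell's law: sin(theta1)/sin(theta2) = v1/v2.
sin1 = best_x / math.hypot(best_x, ay)
sin2 = (bx - best_x) / math.hypot(bx - best_x, by)
print(round(sin1 / sin2, 3), round(v1 / v2, 3))   # both close to 1.333
```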
!
From this brief review of the mathematical models used to study energy the
most important concept we can extract is that of the structure of a possibility
space. This structure can be captured empirically, as in the case of phase diagrams,
or theoretically, using variational ideas. Either way, the structure in question is
made out of the singular or special portion of a space of possible entities (states,
events, paths, solutions) as well as other topological invariants of those spaces. A
topological invariant is a property of a space that stays unchanged after the space
has undergone displacements, rotations, scalings, projections, foldings, and
stretchings. Dimensionality, connectivity, and a given distribution of singularities,
are invariant properties in this sense. In the cases just examined, these spaces are
used to think about the tendencies of physical processes, such as the universal
tendency of a gradient to cancel itself and reach equilibrium, as well as the
tendency of systems with live gradients to exist in certain preferred states, those
corresponding to a maximum or minimum of some property. Thus, soap film has a
tendency to adopt whatever form minimizes surface energy – a hyperbolic
paraboloid if constrained or a sphere if free – and a crystal has a tendency to adopt
any form that minimizes bonding energy, the actual form depending on its
composition, such as a cube in the case of sodium chloride. Philosophically,
tendencies are important because they have a special ontological status. Unlike
properties which are both real and actual, tendencies are real but not necessarily
actual: a piece of soap film in a wire has a real tendency to wrap itself up into a
spherical bubble, but while it is in the wire, the bubble is not actual. Capacities,
like the capacity of a live gradient to drive a process, are also real but not actual if
they are not currently being exercised: a battery, containing a chemical gradient,
has the capacity to drive electricity flows inside an electronic gadget, but this
capacity is not actual when the gadget is turned off. The philosophical term for the
status of an entity that is real but not actual is virtual. 13
!
At this point we have almost all the conceptual machinery needed to tackle
systems far from equilibrium, we just need to introduce a couple more concepts.
When we use Hamiltonians to model physical systems we assume that the latter are
conservative, that is, that the total energy of the system, represented by the master
equation, remains constant. On the other hand, if we open the system to external
energy flows to replenish its gradients, and if we allow the heat produced (as the
gradients try to cancel themselves) to dissipate, we are dealing with systems called
dissipative. To the contrast between conservative and dissipative systems, we must
add that between linear and nonlinear systems. We introduced this other distinction
in relation to causality: a cause is linear if its effects are proportional, nonlinear if
they are not. But the distinction also applies to mathematical models: they are
nonlinear if there is feedback among their variables, linear if there is not. In linear
and conservative systems, singularities are of the minimum or maximum type.
Keeping the system linear but opening it to external flows, adds one more kind of
singularity, a steady-state attractor, a point singularity that is neither a maximum
nor a minimum. Finally, systems that are both dissipative and nonlinear exhibit a
variety of singularities that are not points, but their dimensionality is constrained by the N-1 rule mentioned above. The gradients in these dissipative systems must be intense enough – the distance from equilibrium must be large enough – for these other singularities to appear, because at low intensities a nonlinear system is effectively linearized. 14
!
The mathematical study of these possibility spaces was initiated at the end of the nineteenth century, with the invention of state space (or phase space.) A good
mathematical model, as just argued, is one in which the dependencies in the way in
which variables in an equation change match the dependencies in the way in which
the properties of a laboratory phenomenon change. The construction of state space
begins by assigning to each of the variables a dimension of a space (a differential
or topological manifold) after which a combination of values for each variable at a
given instant defines a point in that space. Since any valid combination of values
represents a state of the phenomenon, this point stands for an instantaneous state.
Then, using the dependencies captured by the equation, we can calculate what the
values should be in the next instant and plot the next point. After repeating this
operation several times we get a series of points forming a curve or trajectory,
representing a possible history for the phenomenon, that is, a possible series of
states. We now move to the lab and using an apparatus to screen out all other
causal factors but the properties that are represented by variables, we place the
phenomenon at a given initial state and then let it run spontaneously through a
sequence of states, while we measure the properties at small intervals. The result
will be a sequence of numbers that we can plot on a piece of paper turning them
into a curve or trajectory. We then run our mathematical model, giving it the same
values for initial conditions as our laboratory run, and generate a state space
trajectory. Finally, we compare the two curves. If the two trajectories display
geometrical similarity (or if one tracks the other long enough) we will have
evidence that the model works. 15
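
As an illustration of the model side of this comparison, the sketch below integrates a damped oscillator (an arbitrary choice of example, with arbitrary parameter values): each step turns the dependencies among position and velocity into the next instantaneous state, and the resulting trajectory spirals toward a preferred point.

```python
# A minimal sketch of the state-space construction just described, for a damped
# oscillator. Each instant is a point (position, velocity); integrating the
# dependencies step by step traces a trajectory, i.e. one possible history.

k, c = 1.0, 0.2     # stiffness and damping (hypothetical values)
x, v = 1.0, 0.0     # initial state: displaced and at rest
dt = 0.01

trajectory = [(x, v)]
for _ in range(5_000):
    a = -k * x - c * v        # the dependency: acceleration depends on x and v
    x, v = x + v * dt, v + a * dt
    trajectory.append((x, v))

print(trajectory[0])     # (1.0, 0.0)
print(trajectory[-1])    # spirals in toward the point (0, 0), the preferred state
```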
!
If this was all that state space offered us it would simply be a visualization
tool, useful but hardly worthy of philosophical analysis. But after the construction
of state space was worked out, mathematicians began exploring it for its own sake,
trying to figure out whether it had any structure. A state space without structure is
one in which a trajectory, once started, wanders around aimlessly without settling,
representing a random phenomenon. But if we start several trajectories and they
converge at a particular point, then this point will represent a preferred state for the
phenomenon, just like the old maxima and minima. In this case, we are not just
matching curves to curves, but matching the tendencies of a trajectory to converge
to the tendencies of the phenomenon to seek a particular state. Exploring state
space this way led to the discovery of several point singularities, differing from
one another by the way in which the trajectories converge: nodes, saddle points,
foci, centers. 16 These are the different forms that steady-state attractors can take,
and this already points to a richer conception of singularities. Further exploration
of state space yielded another discovery: singularities did not have to be zero
dimensional (points) but could be one dimensional if the space had at least two
dimensions. These line singularities, wrapped into loops, attract trajectories and
lock them into a periodic or oscillating form of stability. In other words, if an
external shock forces the trajectory to be dislodged from the attractor, it will
spontaneously tend to return to the preferred oscillating state. A few decades later,
the study of radio transmitters – which must have stable oscillating behavior to
periodically emit electromagnetic waves – showed that these new singularities
captured tendencies in real phenomena. They are referred to as limit cycles or
periodic attractors. 17 Finally, starting in the 1960’s, further exploration of state
space revealed that, in spaces with three dimensions or more, another type of
singularity existed, one that can be pictured as the result of repeatedly stretching
and folding a limit cycle. The new singularities do not have two dimensions but a
fractional dimension between one and two, so they only approximately follow the
N-1 rule. They are referred to as fractal, strange, or chaotic attractors. The term
“chaotic” however, is a misnomer because it suggests randomness, that is, it
implies lack of structure in the possibility space. Yet, a low dimensional chaotic
attractor embedded in a space with many dimensions, effectively locks the
modeled phenomenon into a small set of (ever changing) states, and this implies
order not randomness.
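
The last case can be illustrated with the Lorenz equations, the textbook source of a chaotic attractor in a three-dimensional state space (the choice of system, step size, and run length below are arbitrary): the trajectory never settles or repeats, yet it remains locked inside a bounded region.

```python
# A sketch of the kind of exploration described above, using the standard
# Lorenz equations; the step size and run length are arbitrary choices.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

state = (1.0, 1.0, 1.0)
points = []
for step in range(60_000):
    state = lorenz_step(*state)
    if step > 10_000:              # discard the transient approach to the attractor
        points.append(state)

# The trajectory never settles or repeats, yet it stays locked inside a small,
# bounded region of the three-dimensional state space: order, not randomness.
print(min(p[2] for p in points), max(p[2] for p in points))   # z stays roughly between 0 and 50
```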
!
Let’s conclude this brief survey of the concepts involved in the philosophy
of energy by explaining some of the choices made in its presentation. Specifically,
instead of writing about the tendency of gradients to cancel themselves we could
have used the much more familiar expression, energy obeys the second law of
thermodynamics. The two expressions are, in fact, equivalent, so why use the less familiar one? Because the former gives us an image of an energetic materiality
possessing its own tendencies and capacities, while the latter invites us to think of
matter as a substance that obediently follows commands from above. Aristotle was
the first philosopher to articulate this conception of matter. He divided reality into
two parts: the part that is accidental, subject to corruption and decay, the world of
individuals, and the part that is necessary, immutable, and eternal, the world of
genera and species. 18 The birth of form and structure was conceived as stemming
from the action of essences, acting as formal causes, on individual substances that
were inert, incapable of giving rise to form on their own. The world-view based on
eternal and immutable laws of nature is not the same as Aristotle’s (laws are not
formal causes) but they both share the same transcendent status, in which a form-
giving agency exists on a plane over and above that of matter. If matter exists in a
world with N number of dimensions, both laws and essences exist on an additional
plane with N+1 dimensions. But the concept of an energetic materiality is entirely
different: the capacity of matter to spontaneously give rise to form depends only on
the zone of intensity that it occupies, closer or further away from equilibrium. And
rather than depending on a plane overflying that of the material world, the form-
giving agencies (singularities) exist at N-1 dimensions, embedded in the energetic
materiality and therefore, immanent to it. Thus, although for scientific purposes the
two expressions above are equivalent, philosophically it makes all the difference in
the world which one we choose.
!
!
Appendix: Five Steps in the Development of the Science of Energy.
!
Step 1:
!
Thermodynamics began as the study of the relation between heat and motion (in systems like
the steam engine) but its practitioners soon realized that this relation was a special case of the
more general concept of energy transformation: the transformation of thermal into mechanical
energy. This insight was originally due to James Prescott Joule (1818-1889). To create a model
for energy transformations we must first divide the world into the interior and exterior of a
system, that is, we must postulate a boundary. Depending on the nature of the boundary
different model systems will result:

Isolated Systems: the boundary prevents any flow of energy or matter in or out of the system.

Closed Systems: the boundary prevents any flow of matter in or out of the system, but energy is allowed to enter and exit the system.

Open Systems: the boundary allows flows of energy and matter to enter and exit the system.
!
!
Step 2:
!
Classical thermodynamics deals with isolated systems defined both by their intensive properties (temperature, pressure, chemical potential) and by their extensive ones (amount of energy and amount of entropy, or quality of energy). The distinction between intensive and extensive properties (though not the terminology) is due to Joseph Black (1728-1799), who first demonstrated that the temperature and the amount of heat in a system were not the same quantity. The state of equilibrium is defined as the state in which all gradients within a system have been exhausted (temperature and pressure are uniform throughout) and in which entropy has reached a maximum and remains constant. In classical thermodynamics systems are always studied in their equilibrium state. Because real systems often operate away from equilibrium (e.g. a steam engine driven by live gradients) they are modeled as if their dynamics took place at infinitely small departures from equilibrium. The steam engine was modeled as if the transfer of energy occurred in extremely small increments, and as if the pistons moved at extremely slow speeds. The model of the idealized steam engine is due to Nicolas Léonard Sadi Carnot (1796-1832). The main consequence of studying systems in their final state is that any event leading to this final equilibrium state can be ignored. In other words, the history of the system becomes irrelevant.
!
The concepts needed to set up models like these have separate histories. The idea that the
overall amount of energy is conserved, so that thinking about the equilibrium state needs only
information about the initial and final states (not the history) of a system, is due to Julius Robert
von Mayer (1814-1878) and Hermann von Helmholtz (1821-1894). Combining the idea that
history does not matter with insights obtained from the idealized steam engine, Rudolf Clausius
(1822-1888) invented the concept of entropy as a measure of the degradation of the capacity of energy to perform useful work. Because entropy always increases, there is a limit to the convertibility of heat into motion.
!
!
Step 3:
!
Extending these models to non-equilibrium states demanded a series of changes. Gradients
(whether thermal, electrical, or chemical) could no longer be assumed to be exhausted, so the
flows they drive had to be included in the model. At first, these gradients were assumed to be of
very low intensity, that is, the systems studied existed near equilibrium, while the flow across
boundaries was assumed to consist of energy only, that is, the systems were closed. These
simplifications allowed scientists to consider systems made out of many compartments, each of which was assumed to obey the old equilibrium models, while new mathematics was introduced to model the energy transfer at the boundaries between compartments. Modeling this transfer could be done using the simplest kind of equations, linear
equations, because at low intensity the causal relationship between gradients and flows is
proportional: small causes produce small effects, and large causes produce large effects. This
implied that only the current values of the variables had to be taken into account, allowing
modelers to continue ignoring the history of the system. The extension of classical ideas to the
near equilibrium case was performed by Lars Onsager (1903-1976) and Théophile DeDonder
(1872-1957).
!
!
Step 4:
!
The next step demanded modeling systems in which gradients of higher intensity (systems far
from equilibrium) drive flows of both matter and energy across boundaries (open systems). This
involved more drastic changes. First, making the path to equilibrium part of a model involved the
explicit representation of the production of entropy in a process, since this property could not be
assumed to remain constant at a maximum value. Second, at higher intensities the causal
relation between gradients and flows is nonlinear: small changes in a gradient can produce
large effects in the flows, while in other conditions large changes can have negligible effects.
When the relation between gradients and flows is linear the system has only one long term
tendency: a single steady state. A steady state is not the same as an equilibrium state. The
state of equilibrium is static while a steady state is dynamic, such as the state in which a
process taking place in one direction is balanced by a process taking place in the opposite
direction. Nevertheless, a unique steady state implies that we already know what the end state
of a system will be, so there is no need to consider its history. When the relation between
gradients and flows is nonlinear, on the other hand, the long term tendencies of the system are
multiple: many potential steady states are possible, or even other types of dynamic stability
such as periodic stability. In this case, which end state the system happens to be in is a function
of its history (path dependence). The extension of thermodynamics to the far from equilibrium
case was performed by Ilya Prigogine (1917-2003).
!
!
Step 5:
!
Besides developing by extension to zones away from equilibrium, thermodynamics evolved by
hybridization. The field of physical chemistry is one such hybrid, developed by scientists like
Friedrich Wilhelm Ostwald (1853-1932). From its inception in the eighteenth century, chemists
had used the concept of affinity to refer to the selective tendency of substances to react with
other substances. Thus, acids and alkalis were said to have a great affinity for each other, as
displayed in the violent reaction produced when they interacted. Yet, the explanation of affinity
as a force remained elusive for as long as chemists used gravity or magnetism as a paradigm of
force. Gradients, on the other hand, provided an alternative way of conceptualizing that which
drives flows, so if differences of temperature, pressure, density or speed could play this role so
could differences in affinity. By the time this change was effected, the term chemical potential
had been introduced to replace affinity. In addition to allowing chemists to conceive of chemical
reactions thermodynamically, physical chemistry introduced the idea that the properties of
substances, long believed to be explained solely by their material composition, were also
affected by their energetic content.
!
!
!
!
!
References:
1. Stephen Toulmin and June Goodfield. The Architecture of Matter. (Chicago: University of Chicago Press, 1982), p. 202.

2. Maurice Crosland. Slippery Substances: Some Practical and Conceptual Problems in the Understanding of Gases in the Pre-Lavoisier Era. In Instruments and Experimentation in the History of Chemistry. Edited by Frederic L. Holmes and Trevor H. Levere. (Cambridge: MIT Press, 2000), p. 84.
The apparatus allowing the capture of gases, the pneumatic trough, was created by Stephen Hales in 1727.

3. Mary Jo Nye. Before Big Science: The Pursuit of Modern Chemistry and Physics, 1800-1940. (New York: Twayne Publishers, 1996), p. 91-92.
Chemists may have lost the lead in the study of incorporeals, but they still managed to influence physicists. In particular, the concept of the mechanical equivalent of heat was created in 1837 by James Joule, who was a student of the chemist John Dalton and was influenced by the electro-chemists Humphry Davy and Michael Faraday.

4. Ronald E. Fox. Energy and the Evolution of Life. (New York: W.H. Freeman, 1988), p. 58-59.

5. George Wald. The Origin of Life. In The Chemical Basis of Life. (San Francisco: W.H. Freeman, 1973), p. 16-17.

6. Dilip Kondepudi. Introduction to Modern Thermodynamics. (Chichester: John Wiley, 2007), p. 141.
The extension of chemical thermodynamics to near equilibrium states was performed in the 1920's by the Belgian physicist and mathematician Théophile DeDonder. The Belgian chemist Ilya Prigogine performed the extension to far from equilibrium states in the 1960's.

7. Ibid., p. 327.

8. Ilya Prigogine and Isabelle Stengers. Order Out of Chaos. (Toronto: Bantam, 1984), p. 138-143.

9. Philip Ball. Life's Matrix: A Biography of Water. (Berkeley: University of California Press, 2001), p. 161.

10. Don S. Lemons. Perfect Form: Variational Principles, Methods and Applications in Elementary Physics. (Princeton: Princeton University Press, 1997), p. 17-27.
The calculus of variations was created by the mathematician Leonhard Euler in 1733.

11. Ibid., p. 7.
The original problem about light rays was posed by Pierre de Fermat in 1662.

12. Ian Stewart. Does God Play Dice? The Mathematics of Chaos. (Oxford: Basil Blackwell, 1989), p. 40-41.
The master equation was created by the Irish mathematician William Hamilton in 1833.

13. Manuel DeLanda. Intensive Science and Virtual Philosophy. (London: Bloomsbury, 2001), Ch. 1.

14. Ian Stewart. Does God Play Dice? Op. Cit., p. 82.

15. Peter Smith. Explaining Chaos. (Cambridge: Cambridge University Press, 1998), p. 72. (My italics.)

16. June Barrow-Green. Poincaré and the Three Body Problem. (Providence: American Mathematical Society, 1997), p. 30-35.
State space and the geometric approach to the study of differential equations were created in the 1880's by the French mathematician Henri Poincaré, who also classified many of the singularities he discovered.

17. Ralph Abraham and Christopher Shaw. Dynamics: The Geometry of Behavior. (Santa Cruz: Aerial Press, 1984), p. 105.

18. Aristotle. The Metaphysics. (New York: Prometheus Books, 1991), p. 100.

19. Gilles Deleuze and Felix Guattari. A Thousand Plateaus. (Minneapolis: University of Minnesota Press, 1987), p. 266.
