

Infinity

Hector C. Parr

This paper asks why so few philosophers and scientists have turned their attention to the
idea of infinity, and it expresses surprise at the readiness with which infinity has been
accepted as a meaningful concept even in fields where it causes serious logical
difficulties. I try to show that belief in a universe containing an infinite number of
material objects is logically flawed, and I examine the cosmological consequences.



The idea of infinity arises in several different contexts. Most of the applications to which
we shall refer in this paper belong to one or other of the following six categories:

1. The sequence of natural numbers, 1, 2, 3, ..., is said to be infinite.

2. In geometry, the number of points on a line is said to be infinite.
3. In mathematics, many examples occur of sequences of numbers which tend to infinity.
4. It is often assumed that time is infinite, in the sense that it had no beginning, or
will have no end, or both.
5. Likewise, space is often assumed to be infinite in extent.
6. Some theories of cosmology suppose that the amount of matter in the universe,
i.e. the number of stars or of atomic particles, is infinite.

Modern astronomers do not agree on whether or not the universe is infinite in extent.
While books on cosmology display much detailed knowledge of the history and structure
of the universe, they appear to find the issue of infinity difficult to decide.
Mathematicians tell us that the question is closely related to the average curvature of
space, and everything depends upon whether this curvature is positive, zero or negative.
If it is positive, we are told, the volume of space is finite, but if it is zero or negative the
volume must be infinite, and this is usually taken to imply that the number of stars and
galaxies must also be infinite. In fact this curvature is very close to zero, making it
difficult to determine by observation or measurement which sort of universe we live in.
Many of the formulae in cosmology must therefore be given in three different forms, so
that the correct version can be chosen when we do eventually discover whether the value
of k is +1, 0 or -1.

The question of the finiteness of time seems equally uncertain; most cosmologists now
believe that the universe began with a big bang, all its material content coming into being
at a single point in a colossal explosion, and with time itself beginning at this first
moment. But opinion is divided on whether it will end with some sort of "big crunch",
with everything finally ceasing to exist in a mammoth implosion.

It is surprising that cosmologists do not concern themselves greatly with these questions
of finiteness. They give us the three formulae, corresponding to the three possible values
of k, and leave it at that. Indeed some books on the subject fail to state clearly whether
particular arguments apply to an "open" or a "closed" universe, as if it did not really
matter. Some have no reference to "infinity" in their index. Surely few questions are more
significant than whether the universe is finite or infinite.

At one time there seemed to be a strong argument against the number of stars being
infinite. A simple calculation shows that if it is, then the whole night-sky should be ablaze
with light. The surfaces of the stars are, on average, as bright as the surface of the sun,
and if they are infinitely numerous it can easily be shown that any line of sight will
eventually terminate on a star, so that the whole sky will shine as brightly as the sun. It
can be argued that the most distant stars might have their light dimmed by passing
through gas or dust on its way to us, but if this were the case, the gas itself would be
raised to such a temperature that it too would shine with this same brilliance. This
problem was known as "Olbers' Paradox", after Heinrich Olbers (1758-1840). But it has
now been resolved; even if the universe were infinite, we know that its expansion would
provide an explanation for the darkness of the night-sky. Distant stars are dimmed not
because of intervening matter, but because they are moving away from us, and the
wavelength of their light is increased, and its energy reduced, by this motion. So Olbers'
effect does not now present an obstacle to those who believe in an infinite universe. But
here again it is surprising that taking the number of stars to be infinite is an assumption
that can be adopted or discarded at pleasure, without considering whether it should be
ruled out on logical grounds.
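The calculation behind Olbers' Paradox can be sketched numerically. The figures below (star radius, star density, shell thickness) are assumed purely for illustration, but the structure of the argument does not depend on them: each spherical shell of stars covers roughly the same fraction of the sky, so with infinitely many shells the coverage grows without limit.

```python
from math import pi

STAR_RADIUS = 7e8   # metres, roughly sun-like (assumed figure)
DENSITY = 1e-60     # stars per cubic metre (illustrative only)

def sky_fraction(shells, dr=1e20):
    """Fraction of sky covered by stars out to shells*dr, ignoring overlap."""
    total = 0.0
    for i in range(1, shells + 1):
        r = i * dr
        n_stars = DENSITY * 4 * pi * r * r * dr        # stars in this shell
        per_star = STAR_RADIUS**2 / (4 * r * r)        # sky fraction per star
        total += n_stars * per_star                     # constant per shell
    return total

# doubling the number of shells doubles the covered sky: no finite limit
assert abs(sky_fraction(200) / sky_fraction(100) - 2.0) < 1e-9
```

The key point, visible in the code, is that the two factors of r² cancel: distance dims each star exactly as fast as it multiplies their number.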


The idea of infinity has been used increasingly over the centuries by mathematicians, and
in general they have been more circumspect than have astronomers in their use of the
word. The natural numbers clearly have no upper limit; the process of counting the
numbers can never be completed. And if I name a number, however large, you can always
name a larger one. So we can agree that the class of numbers is unlimited. We might even
allow the class to be called "infinite". But it is a mistake to say that the number of its
members is "equal to infinity" for infinity cannot itself be a number. It can be defined
only as something "greater than any number we can name".

Modern mathematicians make much use of "infinite sequences" of numbers, but during
the nineteenth century they carefully defined their concepts in this field to avoid using the
word "infinity" as if it were indeed a number. To take a simple example, if a sequence is
defined so that the nth term is 1/n, we get the values 1, 1/2, 1/3, 1/4, ... as n takes the
values 1, 2, 3, ... The fractions get closer and closer to 0, but never quite reach it. We
disallow the statement "the nth term of the sequence equals 0 when n equals infinity", but
instead we say "the limit of the sequence is 0 as n tends to infinity". This is then carefully
defined as follows: "For any quantity q, however small, there exists a value of n such that
every term of the sequence after the nth differs from 0 by less than q". This is perfectly
explicit, and makes no reference to the number infinity. Likewise, when considering
continuous variables, we disallow the shorthand form "1/0 equals infinity", and we say
rather, if y = 1/x, then "y tends to infinity as x tends to 0", and this is rigorously defined to
mean "For any quantity Q, however large, there exists a value of x such that y is greater
than Q whenever x is numerically less than this fixed value". In this way modern
mathematics allows us to discuss infinite sequences in a way which is logically sound.
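The "for any quantity q, however small" definition can be made concrete with a small sketch (the function name is my own, not from the original). For the sequence 1/n, given any tolerance q we can exhibit the required N explicitly, with no appeal to infinity as a number:

```python
from fractions import Fraction
from math import ceil

def n_for_tolerance(q):
    """Smallest N such that every term 1/n with n > N differs from 0 by less than q."""
    return ceil(1 / q)

# for each tolerance, every term beyond the Nth lies within q of the limit 0
for q in (Fraction(1, 10), Fraction(1, 100), Fraction(1, 1000)):
    N = n_for_tolerance(q)
    assert all(Fraction(1, n) < q for n in range(N + 1, N + 200))
```

Exact rational arithmetic (`Fraction`) is used so that the comparisons carry no rounding error.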

But other branches of mathematics have found it necessary to approach the concept of
infinity more directly, and to adopt the notion of a "completed infinity", and as a result
have become dogged by paradox. They begin by defining carefully what is meant by two
numbers being equal; if the elements of two classes of objects can be put into one-to-one
correspondence with each other, then they must contain equal numbers of elements. If
every seat on an aircraft is occupied by a passenger, and every passenger has a seat, then
the number of passengers must equal the number of seats. But difficulties arise when this
definition is applied to unlimited classes such as the natural numbers; here is a simple
example of such a paradox. Mathematicians use the symbol "Aleph-0" to represent the
infinity we would obtain if we could count all the natural numbers. But because every
number can be multiplied by two, the natural numbers can be put into a one-to-one
correspondence with the even numbers, as the following lists show:

Natural Numbers 1 2 3 4 5 6 ....

Even Numbers 2 4 6 8 10 12 ....

It follows from this that the number of even numbers must also be equal to Aleph-0,
according to this one-to-one definition of "equality". What sort of a collection is it that
still contains the same number of elements after we have removed half of them? And
what sort of a number (apart from 0) is equal to twice itself?
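The pairing described above can be sketched over a finite window (the window size is arbitrary): the map n ↔ 2n is one-to-one, which is all the definition of "equality" demands.

```python
# Pair each natural number n with the even number 2n over a finite window.
naturals = range(1, 11)
pairing = {n: 2 * n for n in naturals}

# the map is one-to-one: distinct naturals give distinct evens...
assert len(set(pairing.values())) == len(pairing)
# ...and together they exhaust every even number in the window
assert sorted(pairing.values()) == list(range(2, 21, 2))
```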

The paradoxes become even more perplexing when we consider fractional numbers, such
as 1/2, 5/8 or 999/1000. It is clear that the number of fractions between 0 and 1 must be
unlimited, for it is always possible to find a new fraction between two given fractions,
however close together they are.

To satisfy the curious, this can be done simply by adding the tops and adding the bottoms; thus 5/17 lies
between 2/7 and 3/10.
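The "add the tops and add the bottoms" rule (the mediant) can be checked directly; the function below is an illustrative sketch:

```python
from fractions import Fraction

def mediant(a, b):
    """'Add the tops and add the bottoms': p/q and r/s give (p+r)/(q+s)."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

a, b = Fraction(2, 7), Fraction(3, 10)
m = mediant(a, b)
assert m == Fraction(5, 17)
assert a < m < b   # the mediant always lies strictly between its parents
```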
And it is not difficult to show that the fractions between 0 and 1 can, in fact, be put into
one-to-one correspondence with the natural numbers, so that the number of such fractions
must be Aleph-0. But it can also be shown that the whole collection of fractions,
including "improper" fractions such as six and a half, or two thousand and nine tenths,
can also be made to stand in one-to-one correspondence with the natural numbers, and so
their number must be Aleph-0. So there are Aleph-0 fractions between 0 and 1, another
Aleph-0 between 1 and 2, and so on, and yet the total of all these infinities is no more
than any one of them. We have another puzzling formula: Aleph-0 remains unchanged
when we multiply it by itself.
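One standard way of counting the fractions, consistent with the claim above, walks the diagonals of the p/q grid; the sketch below (names and window size are my own) assigns a counting number to every positive fraction, repeats excluded:

```python
from fractions import Fraction

def enumerate_fractions(limit):
    """Count the positive fractions by walking the diagonals
    p + q = 2, 3, 4, ... of the p/q grid, skipping repeats like 2/2."""
    seen, counted = set(), []
    s = 2
    while len(counted) < limit:
        for p in range(1, s):
            f = Fraction(p, s - p)
            if f not in seen:
                seen.add(f)
                counted.append(f)
                if len(counted) == limit:
                    break
        s += 1
    return counted

# the first six fractions so counted: 1, 1/2, 2, 1/3, 3, 1/4
first = enumerate_fractions(6)
```

Every fraction, however large its numerator and denominator, eventually appears on some diagonal, which is what "can be counted" means here.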

The ancient Greeks realised that fractional numbers are needed in order to describe the
lengths of lines, or to indicate the position of a point on a line. Because we can always
find a fraction between any given pair of fractions, it follows that the complete set of
fractions has no gaps in it. So the class of all fractions would seem to be sufficient to
express accurately the length of any line. This was accepted by the Greeks more than 500
years B.C., and they were surprised to discover that it is just not true. If a square has sides
of length one inch, then Pythagoras' well known theorem shows that the length of the
diagonal must be the square root of 2, and Pythagoras himself proved that this number is
not, in fact, a fraction.

His proof was a remarkable achievement in its day, but is not difficult to comprehend today. Suppose the
fraction m/n does equal the square root of two, and suppose it is in its lowest terms, i.e. it does not need
cancelling. Then its square must equal 2, i.e. m squared over n squared equals 2, and so m squared is twice
n squared. This shows that m squared is an even number, and so m is even, and m squared must divide by 4.
It follows that n squared must divide by 2, and so n also must be even. But this contradicts our stipulation
that m/n does not need cancelling, and the contradiction shows our original supposition to be false; the
square root of 2 cannot be a fraction.
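A finite check consistent with this proof (illustrative only; no finite search could itself prove the result) confirms that m² = 2n² has no solution for any denominator up to 1000:

```python
from math import isqrt

def exact_sqrt2_fractions(max_denominator):
    """Return all pairs (m, n) with n <= max_denominator and m*m == 2*n*n."""
    hits = []
    for n in range(1, max_denominator + 1):
        m = isqrt(2 * n * n)          # the only candidate numerator
        if m * m == 2 * n * n:
            hits.append((m, n))
    return hits

assert exact_sqrt2_fractions(1000) == []   # consistent with Pythagoras' proof
```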

So the class of fractions, although it contains no gaps, is not adequate for describing the
lengths of lines. For some lines we need also numbers such as the square root of two,
which cannot be expressed as fractions. Until quite recently this presented an intriguing
puzzle. But what seems to be overlooked, even today, is that when we deal with the real
world of geometrical figures, fractions are sufficient for specifying the length of any line
we have measured. Measurements necessarily must be made to some limited degree of
accuracy, depending on our method and the care we take. However small the inaccuracy
we will allow, the length of a line can be expressed as a fraction (or what amounts to the
same thing, a decimal) to whatever precision our measuring technique will allow. The
length of the diagonal of an ideal square may indeed be "irrational", but this is of no
concern to us when we measure the diagonals of real squares. No measurement of a
square's diagonal could prove that its actual length is irrational.
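The point can be illustrated with a sketch: however fine the tolerance of measurement, a fraction can be found matching the ideal diagonal of a unit square to within that tolerance. (The denominator bounds below are arbitrary choices that happen to suffice.)

```python
from fractions import Fraction
from math import sqrt

diagonal = sqrt(2)   # the ideal, irrational length

# to 2, 4 and 6 decimal places of tolerance, a fraction always suffices
for places in (2, 4, 6):
    tolerance = Fraction(1, 10 ** places)
    approx = Fraction(diagonal).limit_denominator(10 ** (places + 1))
    assert abs(approx - Fraction(diagonal)) < tolerance
```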

Undeterred by the paradoxes they were unearthing, mathematicians in the nineteenth
century studied the infinite numbers in detail. They accepted it as true, despite the
apparent contradiction, that the number of even numbers was indeed the same as the
number of all the natural numbers. As we have seen, the number of fractional numbers
then turns out to be the same again, and they began to suspect that all infinite numbers
must equal each other. So it was a surprise when Cantor (1845-1918) showed that the
class of real numbers, which includes also the irrationals, contains a larger infinity than
Aleph-0. The criterion he used was again to ask whether this class could be put into one-
to-one relationship with the natural numbers, in other words whether it could be counted.
The set of even numbers, or the set of fractions, is said to contain Aleph-0 members
because they can be counted (in theory, although we could never end the process), while
the class of real numbers cannot. So Cantor introduced us to a whole hierarchy of infinite
numbers, Aleph-0, Aleph-1, ..., which are not equal to each other.
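Cantor's argument that the real numbers cannot be counted admits a finite illustrative sketch (the sample digit strings below are arbitrary): given any list of decimal expansions, one can build a number differing from the nth entry in its nth digit, so it appears nowhere in the list.

```python
def diagonal_escape(listed):
    """Build a digit string that differs from the i-th entry at position i."""
    digits = []
    for i, row in enumerate(listed):
        d = int(row[i])
        digits.append(str((d + 1) % 10))   # change the i-th digit
    return "".join(digits)

listed = ["14159", "71828", "57721", "41421", "16180"]
missing = diagonal_escape(listed)
# the constructed string cannot equal any entry in the list
assert all(missing != row for row in listed)
```

However long the list, the same construction defeats it, which is why no counting of the reals can be complete.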


The world of the Pure Mathematician is far removed from the real world. In the real
world there is no difficulty finding the length of the diagonal of a square and expressing
this as a decimal, but in the perfect world of Pure Mathematics this cannot be done. In the
real world we know the process of counting the natural numbers can never be completed,
so that the number of numbers is without meaning, while the mathematician finds it
necessary to say that if the process were completed, the number would be found to be
Aleph-0. These are harmless follies; what the mathematician gets up to inside his ivory
tower need not concern those outside.

But the mathematician's ideal world did impinge on reality at the beginning of the
twentieth century when Russell (1872-1970) attempted to reduce all mathematical
reasoning to simple logic. Even the natural numbers themselves could be defined in terms
of a simpler concept, but to make this possible Russell found it necessary to assume that
the number of real objects in the universe is itself infinite. Here again I find it astonishing
that he made this assumption so glibly. He called the principle his "Axiom of Infinity".
Now an "axiom" is something which is self-evident, unlike a "postulate", which is
assumed for convenience even though it is not self-evident. Why did Russell not refer to
the principle as his "Postulate of Infinity"? To me it is far from self-evident that there are
an infinite number of things in the universe; in fact I cannot see that the statement has any
meaning. Infinity is an indispensable concept in Pure Mathematics, but is it not
meaningless when applied to the number of real things? Nature, however, retaliated for
Russell's cavalier assumption that this number is in fact infinite. The theory of classes
which he developed from this assumption eventually showed some unacceptable
inconsistencies which he could avoid only by making arbitrary restrictions on the types of
class which the theory would allow. To my mind this represented an elaborate reductio ad
absurdum proving that the number of objects in the universe can not be infinite.

There is another strong argument that to talk of a universe containing an infinite number
of particles is without meaning. It has often been pointed out that, in such a universe,
anything which can exist without transgressing the rules of nature, must of necessity exist
somewhere, and anything which can possibly happen, will happen. So there must
somewhere be another planet which its inhabitants call "Earth", with a country called
"England" whose capital is "London", containing a cathedral called "St. Paul's". Indeed
there must be an infinite number of such planets, identical in every respect except that the
heights of these copies of St. Paul's differ among themselves. Is this not sufficiently
ridiculous to convince believers in an infinite universe that they are wrong? Is not this a
further indication that to talk of an infinity of material objects must be meaningless?

As we have shown above, mathematicians have tried to avoid the use of infinite
magnitudes wherever possible. They have succeeded in defusing such ideas as series
which "tend to infinity", or the number of points on a line being "equal to infinity", by
restating propositions of this sort in terms which do not require use of the word "infinite".
Cantor and others were not able to treat in a similar fashion their discussion of the
equality or inequality of the different infinities they had discovered, but it must be
conceded that their arguments had no relevance in the real world of material particles and
real magnitudes. Indeed, their discussion of the various "orders of infinity" which they
introduced into Pure Mathematics, Aleph-0, Aleph-1 and so on, is no more than a
fascinating game whose rules, such as that defining "equality" in terms of one-to-one
relationships, are man-made and arbitrary; a different definition of equality would result
in a different pattern of relationships between the various infinities. Only when attempts
are made to relate the concept of infinity to objects in the real world, as Russell did with
his Axiom of Infinity, do we meet insurmountable logical contradictions.


Our reasoning seems to lead to the conclusion that the number of particles in the universe
is finite, but it has nothing immediately to say about the total volume of space or the total
duration of time. Physical quantities such as space and time can be quantified only by
constructing some system of measurement, whereas no such system is required to
enumerate entities such as particles. It is the process of counting which we maintain must
terminate when applied to real objects in the universe; we are not asserting that the whole
of space must be limited in extent, or that time must have a definite start and finish.

Indeed a process of measurement can sometimes lead to one quantity being described as
infinite in extent, while another equally valid process can result in this same quantity
having a finite magnitude. As an example, suppose a very long straight railway line is
spanned by a bridge, and a surveyor standing on the bridge wishes to assess the distance
along the track to a stationary wagon. He could do this by coming down from the bridge
and laying a measuring rod along the track until he reaches the wagon. He would describe
the distance as so many metres. But alternatively he could determine the distance by
measuring the angle between a vertical plumb line and his line of sight as he observes the
wagon from the bridge. The further away the wagon, the greater would be this angle, and
he could use its value to calculate the wagon's distance in metres. But if he wished he
could describe the distance by quoting this angle itself, to give the wagon's distance as so
many degrees. Although in most situations this would prove less convenient, both
measures are acceptable, for each identifies an exact point on the track, and they both
increase steadily the further away is the wagon. But if, by some miracle, the track were
infinite in extent, as the wagon receded the number of metres would tend to infinity,
while the number of degrees would merely approach 90°. The conclusion to be drawn
from this paragraph is not that there are better ways of measuring distances on a railway
than those we currently adopt, but that there is no intrinsic difference between a finite and
an infinite distance, for some method of measurement must be stated or implied, and the
mere convenience of that method cannot be taken as determining the finiteness or
otherwise of a length.
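The surveyor's two measures can be sketched numerically (the bridge height below is an assumed figure): both grow steadily with distance, yet one is unbounded while the other never reaches 90 degrees.

```python
from math import atan, degrees

BRIDGE_HEIGHT = 10.0   # metres; a hypothetical figure for illustration

def angle_measure(metres):
    """Angle between the vertical plumb line and the line of sight to the wagon."""
    return degrees(atan(metres / BRIDGE_HEIGHT))

# both measures increase steadily the further away the wagon is...
assert angle_measure(100) < angle_measure(10_000) < angle_measure(1_000_000)
# ...but while the metre-count grows without limit, the angle stays below 90
assert angle_measure(1e15) < 90.0
```

The same point on the track is identified either way; only the arithmetic of "infinite distance" differs between the two systems.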
It would seem a meaningless question, therefore, to ask whether the amount of space in
the universe is finite or infinite, for equally valid methods of measurement could give
conflicting answers. But if we allow ourselves to be influenced by the convenience
argument, some methods are certainly preferable to others. Here on earth we make much
use of rigid bodies, such as tables, bricks, and measuring rods, and it is fortunate that in
one system of measurement the dimensions of all such bodies appear not to change unless
they are subject to extreme forces or processes, so it is not surprising that this is the
system we adopt. The stars are not rigid in this sense, but with this same measurement
system a star is very nearly spherical. And the diameters of stars of similar types are all
approximately equal, so we adopt this system, as far as possible, for defining
astronomical sizes and distances. The complexities of General Relativity, and the
expansion of the universe, present new difficulties when describing the dimensions of the
universe as a whole, but we shall continue to discuss whether or not space is finite
without too much risk of ambiguity, while constantly bearing in mind that we are not
talking about an intrinsic property of the universe, in contrast with our discussion of the
number of objects it contains, which is an intrinsic property. In such discussion we shall
assume a convenient system of measurement, a system in which rigid bodies and stars
preserve their dimensions even if transported to remote regions of space or time.

It is easy to understand the pressures that existed in earlier times to believe that the
universe must be infinite. In past centuries it was thought the only alternative to an
infinite volume was one which had a boundary, some sort of screen which would loom
up in front of anyone who had travelled far enough. Both these beliefs seemed
implausible, but the infinite universe was the lesser of the two evils. This dilemma forms
the subject for the first of Kant's well-known "antinomies". He argues that pure reason
leads inescapably to the conclusion that space and time are finite, and equally inescapably
to the conclusion that they are infinite. He offers no solution to this contradiction. But in
recent years we have not needed to make a choice between these two unpalatable beliefs.
Gauss and Riemann demonstrated in the middle of the nineteenth century that Euclid's is
not the only possible geometry, and that there is no reason why the geometry of our
universe should necessarily obey Euclid's rules. And some years later, Einstein showed
that our space indeed does not abide by them. It is the deviations from Euclidean
geometry which explain gravitation, and these deviations, although very small in the part
of space we can explore directly, have been detected experimentally. So now that we are
freed from the shackles of Euclidean geometry, there is no difficulty in reconciling a
finite space with the absence of any boundary. We cannot actually visualise a three-
dimensional space or a four-dimensional space-time which is finite but nevertheless
unbounded, but we have a perfect two-dimensional analogy when we picture the surface
of a sphere. The surface of a globe, for example, has no boundary in the sense that a
country has a boundary, and yet its area is strictly finite and calculable. Einstein's General
Relativity does not decide for us whether in fact our universe has this form, but if it has,
then the Kantian contradiction vanishes. Neither theory nor observation has yet proved
adequate to decide conclusively whether our universe is of this type, but Einstein
believed strongly that it is, and that the world we live in is indeed finite, but has no
boundary.

Throughout most of the twentieth century it was known that the universe is expanding.
Edwin Hubble discovered in the 1920s that the galaxies are moving away from each
other, and the greater the distance of a galaxy the faster is it receding from us. And almost
until the end of the century it was assumed that this expansion must be slowing down,
because of the gravitational attraction between the galaxies. What was not so certain was
whether this retardation would be strong enough eventually to reverse the expansion, or
whether, because gravity grows weaker as distances increase, the universe would
continue to expand, ever more slowly, but for ever. It will be realised from the picture
painted in the previous paragraphs that I myself believe the former of these two
possibilities must come about; the expansion must reverse at some distant point in time,
leading to a big crunch, and ensuring that the length of life of the universe remains finite.

Now just as the surface of a sphere provides a good analogy for a finite but unbounded
three dimensional universe, but with one dimension missing, it provides an equally good
analogy in two dimensions for the four dimensions of space-time representing the
complete history of such a universe, provided this history is of finite duration as I am
suggesting. We can consider each circle of latitude to represent the whole of space at one
particular time (but with two dimensions missing), and each of the meridians to represent
time, and we can take the North Pole as its first moment and the South Pole as its last.

Translating this picture back to our usual viewpoint, we see our three-dimensional
universe coming into existence in the big bang, (at the North pole of our spherical
analogy), expanding to a maximum volume over some tens of billions of years, and then
collapsing into the big crunch at the South pole. The picture has a satisfying symmetry
which, in the absence of any real evidence to the contrary, makes it pleasing and plausible.

(The argument of the previous paragraphs has been severely condensed, because a much
fuller treatment will be found in Chapter 2 of the author's book "Quantum Physics: The
Nodal Theory" on another part of this website. The reader is earnestly requested to read
Chapter 2 now, and then to read again the present paragraphs.)

What was not expected was the announcement in 1998 by a group of astronomers, led by
Saul Perlmutter of the Lawrence Berkeley National Laboratory, that they had reason to believe the expansion was
not slowing down at all, but was actually accelerating. Some stars end their lives as a
"supernova", a gigantic explosion when for a few days they shine with a brilliance many
billions of times that of the sun. So great is the light output from these events that with
modern telescopes we can often see supernovae which occurred billions of light years
away, and hence billions of years ago. And a particular class of supernovae, known as
Type Ia, are known all to have almost identical intrinsic brightnesses, so that measuring
their apparent brightness as seen from the earth can give an accurate determination of
their distance. So Perlmutter and his colleagues carefully measured the distance of some
distant supernovae using two different methods. By observing their spectra, and noting
the "Doppler" shift in wavelength of their light due to the recession of the galaxies in
which they were located, they could measure the distance of these galaxies using a well
known and well tried formula. And by measuring their apparent brightness they could
obtain these distances in another way, measuring, in effect, the length of time their light
had been travelling to reach us. From these two measurements it is possible to estimate
the acceleration of the region of space where they are located, and Perlmutter and his
team found to their surprise that this acceleration had a positive value, rather than the
negative value they expected.
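The "standard candle" idea underlying the brightness measurement can be sketched as follows. The peak luminosity below is an assumed figure, not a measured one; the point is only the inverse-square relationship between apparent brightness and distance.

```python
from math import pi, sqrt

L_PEAK = 1.0e36   # assumed common peak luminosity in watts (illustrative only)

def distance_from_brightness(b):
    """Distance implied by apparent brightness b (W/m^2), via the inverse-square law."""
    return sqrt(L_PEAK / (4 * pi * b))

# a supernova appearing 100 times fainter must be 10 times further away
d_near = distance_from_brightness(1.0e-12)
d_far = distance_from_brightness(1.0e-14)
assert abs(d_far / d_near - 10.0) < 1e-9
```

Comparing this brightness-based distance with the redshift-based one is what allowed Perlmutter's team to infer an acceleration.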

Since 1998 several more Type Ia Supernovae have been observed, and evidence does
seem to be accumulating that the expansion is increasing rather than decreasing in speed,
and many scientists do appear to be convinced by this evidence. Such an acceleration
must require a vast expenditure of energy, and no explanation has been agreed upon for
the source of this energy. It has, however, been given a name, and space is supposed to be
filled with this "dark energy", which manifests itself only in one way, by producing this
acceleration of the distant galaxies.

My own view is that it is much too soon to reach conclusions. The whole investigation is
fraught with great difficulty. Each supernova must be observed during the brief period
that it shines with maximum brilliance so that accurate photometry can be performed, and
it gives only a few days' notice that this is about to happen. The world's largest telescopes
are in great demand; observing time is very valuable, and usually must be booked months
in advance, making the observation of such ephemeral phenomena very difficult. But this
is nothing compared with the theoretical difficulty of the calculations. We mentioned two
ways of measuring distances in space, with the difference between the results forming the
basis for the whole theory. There are at least four other ways of describing the distances
of stars, and only a thorough understanding of cosmology and General Relativity can
show which two of these will give the required results. And there is a multitude of
corrections that must be applied to the figures before the tiny differences emerge which
can distinguish between an acceleration and a deceleration; remember we are looking at
events which occurred billions of years ago. Such tentative evidence is not yet
sufficiently weighty to change the author's convictions that the universe must be finite in
all respects.

Furthermore, cosmologists have been known to make mistakes. In the 1930s they used
their knowledge of the distances of some neighbouring galaxies and their rates of
recession to calculate the age of the universe, and the results seemed to prove that the sun
and the earth were several times as old as the universe which contains them! Only in
1952 was it discovered that every galaxy was nearly three times as far away as had been
believed, for a mistake had been made in calculating the distance of the nearest one. Then
for several decades, when it became possible to estimate the total mass of a galaxy, they
found that every one was rotating at the wrong speed. Only recently have they begun to
understand the "dark matter" (not to be confused with the "dark energy" mentioned
above) which forms an invisible part of each galaxy. When account is taken of the
additional mass of this material the figures are no longer inconsistent. Then just a year
after the apparent discovery of the "acceleration" of the galaxies, some members of
Perlmutter's team issued a statement casting doubt on their conclusions, on the grounds
that supernovae in the early universe appeared to have different characteristics from those
observed nearer to home, and so may not have reached exactly the same maximum
brilliance. Still more recently it has been decided that, even if we are now in a period of
acceleration, this must have been preceded by several billion years of deceleration.
Indeed, it is not now considered unreasonable that the acceleration might again be
replaced by a deceleration some further billions of years down the line.

On these shifting sands it does not seem expedient to rebuild one's complete theory of
cosmology, and I shall let the whole of this paper remain as it is until some more
substantial foundations present themselves. In particular I remain convinced that the
universe will end within a finite time, with some sort of big crunch marking its demise. I
discuss the nature of this event in considerable detail in another essay on this website, and
I do hope the reader will proceed at once to study The Collapse of the Universe and the
Reversal of Time.


The Collapse of the Universe

and the Reversal of Time

Hector C. Parr
This essay is a continuation of the paper entitled Infinity on this website. If you have not
read this you are asked to do so before studying this essay. It is also hoped that you have
read the chapter on Time in the book on Quantum Physics. You will find this at Quantum
Physics: Chapter 2.


Cosmologists today do not agree on whether the universe will expand forever, or will
eventually recollapse. And among those who subscribe to the latter view, there is
disagreement over the nature of the collapse. Some believe it will be a mirror image of
the big bang, so that any intelligent creatures living during the second half of the
universe's life will sense time moving in the reverse direction, and view today's events as
lying in their future. This paper presents no answers to these questions, but attempts to
analyse the possibilities rationally.



Writers on cosmology appear to speak with authority on the history of the universe. They
describe its earliest moments in detail, and discuss its present structure and future large
scale development. But many of their pronouncements have to take the form, "If such-
and-such is true, then so-and-so. And if not, then ..." In fact our quantitative knowledge of
the universe is very incomplete. True, we know the approximate distances of many
celestial objects, both near and far, and we know the age of the universe fairly accurately.
Although from time to time the different ways of calculating this age have produced
results which appear inconsistent with the known ages of the earth and the stars, there is
now general agreement that the big bang occurred between 10 and 15 billion years ago.
But estimates of the present size of the universe, or the amount of matter and radiation it
contains, differ widely, and any predictions we wish to make depend critically on these
quantities. Indeed there is disagreement over whether these magnitudes are finite or
infinite; surely this uncertainty is of considerable importance.

As explained in the Infinity essay referred to above, I believe that the quantity of matter
in the universe must be finite, as must its dimensions in space and time provided these are
measured in any reasonable manner. In the rest of this essay, therefore, we assume that
the total amount of matter and radiation has some definite finite value, and that the
universe has a finite volume. (This volume, of course, may not correspond with that of
the visible universe; the expansion results in a "horizon" beyond which we cannot see
because of the finite speed of light.) And because I believe the lifetime of the universe
must also be finite, the following discussion assumes that the expansion we observe today
will eventually be followed by a similar period of contraction, leading to a "big crunch",
and the end of everything including space and time themselves.

In considering questions such as this it is essential to rid ourselves of the false
impressions of time which our human limitations seem to impose upon us. This important
matter is discussed in detail in the chapter referred to above. Briefly, I maintain that the
idea of a "now" is a purely subjective phenomenon, existing only within the human mind,
with nothing corresponding to it in the outside world. It follows that the impression of a
moving time is false; there is nothing objective to move, and nothing with respect to
which it could move. Above all we must rid ourselves of the belief that the future is in
some way less determined than the past; if the borderline between past and future is
illusory, then so must be the distinction between the two regions of time which it is
supposed to separate. The only reason we believe the future to be still undecided while
the past is immutable is that we can remember the one and not the other. To avoid these
prejudices we must picture the history of the universe not as a three-dimensional stage on
which things change, but as a static four-dimensional space-time structure of which we
are a part. For reasons we cannot explore here (but which are presented in the chapter
mentioned above), we all have the false impression of moving through this structure,
taking with us a time value which we call "now", or "the present moment". We believe
that events are not real until they "happen", whereas in reality past, present and future are
all frozen in the four dimensions of space-time. Unfortunately, even if all this is accepted,
we have to continue using the language of a "moving" time, for we have no other, but we
must try to interpret this language always as a description of this unchanging space-time
structure. We display again here the illustration from the preceding essay representing a
spherical two-dimensional analogy of the four-dimensional life history we are envisaging.

Contemplating the history of the universe in this way, it is attractive to believe that the
periods of expansion and contraction could be related to each other by symmetry, with
the large scale features during the contraction phase presenting a mirror image of the
expansion; Stephen Hawking admits to having accepted this view at one time ("A Brief
History of Time", p. 150), and to having subsequently changed his mind. I hope to show below that both
points of view merit serious consideration, and that we cannot say with any certainty
whether or not the contracting universe will differ fundamentally from the expanding
phase that we observe today.

There are two essential respects in which the universe today is unstable. The first of
these is the wide range of temperatures it contains, and in particular the difference between
the stars (with surface temperatures of several thousand degrees K) and the background
radiation (about 3 degrees K). Bodies like the earth can maintain a temperature above the
background because of the thermal inertia of their masses and their low conductivity.
Radioactivity within a planet's core may also make a contribution. But in the case of the
sun and stars the effect is greatly enhanced by nuclear reactions which provide a steady
source of energy at their core. All these sources of heat will eventually be exhausted, and
while thermal inertia might keep a body at a temperature substantially above the
background for some millions of years, nuclear power can last for several billions. If for
the moment we ignore complicating factors such as the expansion and contraction of the
universe, all matter and radiation must eventually attain the same temperature, resulting
in Sir James Jeans' "heat death".

The other instability present in the universe is gravitational. Given sufficient time, one
might expect the galaxies to collapse through gravitational attraction, and eventually to
fall together, to form one gigantic mass, or one great black hole, but at present they are
distributed throughout space more or less uniformly. When matter does come together in
this way, there are three effects which can resist the tendency to collapse. Bodies of
planetary dimensions have sufficient structural rigidity to oppose successfully the
gravitational pull on the solid material which comprises them. In the case of gaseous
bodies like the stars the energy generated by their nuclear power house can prevent
collapse until the fuel is exhausted; and galaxies themselves are able to maintain their
integrity by their rotation. The first of these effects, the rigidity of solid bodies, can
continue indefinitely, but both of the latter two, nuclear power and galactic rotation, will
eventually be overcome. When a star's nuclear fuel is exhausted it collapses into a
neutron star or a black hole, and eventually frictional and tidal forces within a galaxy will
lead to its destruction also.


Both of the above types of instability, the progress towards a world in which everything is
at the same temperature, and towards gravitational collapse, would apply even if the
universe were not undergoing expansion or contraction, for each represents an
irreversible process. The Second Law of Thermodynamics is wholly attributable to this
asymmetry, and its source must be looked for in the initial state of the universe
immediately after the big bang. In the absence of any information to the contrary, we
would expect all the matter in the universe to be collected into one mass and to be at a
uniform temperature, even at the moment of its creation. Then nothing could ever
happen, and time would not have a direction. It is because the universe started off with its
material distributed almost uniformly, a fact we are still far from explaining adequately,
that we enjoy today the low entropy which makes the world interesting, and life possible.
But the expansion does have a significant influence on each of the above two effects,
ensuring that each can endure for much longer than if the universe were static. The
universal background radiation originated from matter at about 3000 degrees K, but
because of the expansion which has occurred since it was emitted, it now appears to have
come from matter at about 3 degrees. During the expansion phase this lowering of the
ambient temperature plays an important part in maintaining temperature differences and
delaying the "heat death". If the background were still at 3000 degrees, all matter would
be at least as hot as this, and the range of temperatures at the present time would be
narrower. Life would be impossible, and the heat death more imminent.
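The cooling of the background radiation described above follows directly from the expansion: the observed temperature scales inversely with the factor by which the universe has stretched since the radiation was emitted. A minimal sketch using the essay's round figures of 3000 degrees at emission and 3 degrees today (the stretch factor of about 1000 is my own inference from those two numbers):

```python
# Background radiation temperature falls in inverse proportion to the
# linear expansion of the universe since the radiation was emitted.
T_emitted = 3000.0   # K, temperature of the matter that emitted the radiation
stretch = 1000.0     # factor by which distances have grown since emission
                     # (inferred from the essay's 3000 K -> 3 K figures)

T_observed = T_emitted / stretch
print(T_observed)    # the ~3 K background temperature we measure today
```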

The expansion is also playing an important part in delaying gravitational collapse; a static
universe comparable in size to ours would probably have collapsed within ten billion
years, but because the remote regions are moving away at high speed many more billions
must elapse before gravity can reverse the motion and the collapse can commence.


What will be the effect on these two processes as the universe approaches and enters its
contracting phase? Clearly much will depend upon the different time scales. We have
three processes approaching completion, (i) the move towards thermal equilibrium, with
the stars burnt out, and all material at about the same temperature, (ii) gravitational
collapse of all matter into black holes, and (iii) the collapse of space itself as it
approaches the big crunch. To discuss the possibility of the contraction phase being a
mirror image of the present expansion phase, we must consider carefully the different
possible sequences in which these three influences could begin to dominate the universe
as it progresses towards its eventual collapse.

Considering first the gravitational problems, it is clear that processes (ii) and (iii), as
described above, are related. If the big crunch approaches before most of the matter in
space has fallen into black holes, then the rapidly decreasing volume of space will
accelerate the process; the last few moments of the universe's existence will certainly see
everything in one black hole. On the other hand, if the formation of black holes proceeds
more quickly, it may be possible for the collapse of matter to precede the collapse of
space itself; then the whole material universe will occupy only a small part of the total
volume, with empty space occupying the remainder. Whatever happens, however, it
seems that the processes are irreversible. There appears to be no way in which these
gravitational influences could result in a symmetrical history, with the final stages of the
universe's life mirroring its beginning, and the material contents of the universe becoming
more uniformly distributed as the big crunch is approached.

Turning now to the thermal effects, there are many uncertainties. At the present time the
thermal behaviour of systems is dominated by the Second Law of Thermodynamics. Heat
always passes from a hotter to a cooler body, and never the reverse, resulting in a
continual reduction of temperature differences, and the approach of the "heat death". This
uni-directional process presents an enigma, for all the fundamental processes of physics,
on which these thermal phenomena depend, are themselves time-symmetric, but as
shown above, and discussed in Chapter 2 of "Quantum Physics", the irreversibility is
explained by the special conditions existing in the very first moments of the universe's existence.

In trying to picture this asymmetrical process, we must be careful not to allow our false
impressions of time to influence our reasoning. It may be true that the whole history of
the universe is determined by the nature of the big bang, but we should not think of this
as an example of cause and effect, a conception which depends essentially on the false
idea of time moving. As explained in the chapter referred to, we should think of the state
of affairs immediately after the big bang providing a set of boundary conditions, on
which the rest of history depends. This shows how any later state of a system can be
influenced by an earlier state, in virtue of their different temporal distances from the big
bang. This picture has no need of the false notion of a flowing time.

Now what predominant thermal influences will be at work during the contracting phase?
As the size of the universe begins to decrease, the gradual lowering of the background
radiation temperature will be reversed, and it will start to increase again, just as the
temperature of a gas increases when it is compressed. If we assume that, by then, the stars
are all burnt out and the material universe is approaching a uniform temperature, the
background radiation will eventually overtake this temperature, and matter will once
again begin to heat up. We cannot, of course, apply the Second Law to the universe as a
whole, for its expansion and contraction effectively provide an influence from outside.
But it seems certain that during the contracting phase the Second Law will continue to
apply to sub-systems in the same direction as at present. Heat transfer between
neighbouring bodies will still be from the hotter to the cooler. This asymmetry is not
dependent upon the universal expansion or contraction, but will still be determined by the
boundary conditions established at the beginning of time, even at a distance of so many
billions of years.

So it seems likely that the thermal behaviour of the universe during the second half of its
life, like its gravitational behaviour, will differ greatly from what we find in its first half.
There appears to be no possibility of it providing a "mirror image" era.


But perhaps there is a glimmer of hope for those who want to believe in a symmetrical
universe. Hawking maintains that black holes eventually evaporate, all the mass that has
fallen into them becoming radiation. The time scale required for this process in the case
of massive black holes is enormous, but if we can imagine that all matter has collapsed
long before the universe reaches maximum volume, is it possible that by then all the
black holes could have radiated away? The universe would then contain only radiation.
This would be a condition of the highest possible entropy, and might be a half-way stage,
separating two symmetrical histories. We could then suppose that the second half of
history would be constrained by another set of boundary conditions at the big crunch, just
as those in the present phase are determined by the conditions at the big bang. One half
could be a time-reversal of the other, and we would have symmetry, with the second half
dominated by a decreasing entropy. In such a world, records and memories would
necessarily be of the future, and any intelligent living creatures suffering from the illusion
of a moving time, as we do, would believe it to move in the opposite direction to that which we experience.

Our present knowledge is far from resolving these questions. From time to time
cosmologists decide that the universe is, or is not, finite in size, or finite in lifespan, and
they appear satisfied that their current theory settles the argument. But outsiders can see
that they are still groping, with insufficient evidence to decide the matter. Meanwhile
those of us who find it aesthetically pleasing to think of a finite universe, with its two
temporal halves related by symmetry, may do so with a clear conscience.


Entropy and the second law of thermodynamics

Student: Why the fast start? You took about 11 pages to get to entropy on that introductory page. How come you're putting it right up front here?

Prof: Some readers e-mailed me that "What is entropy?" was the only thing they
were interested in. For that introductory Web page, I thought some practical
examples like forest fires and rusting iron (or breaking surfboards – and bones)
would be a great introduction to the second law before talking about how it's all
measured by "entropy". Wasn’t that gradual approach OK?

S: Yeh. I think I understand everything pretty well, but I didn't want to take time to
read any more of that introductory page after what you called page six. What's new
about entropy that you're going to talk about here that wasn't back there?

P: [Just a sidenote to you: Sometime, click on the "Last Page". It's based on chemistry,
but it's not textbook stuff. It really could change your attitude about life -- and the
troubles that will hit you sooner or later.]
Now, back to entropy. Remember some people are logging on to this site without
having read the earlier pages. To bring them up to speed -- if you say you
understood it pretty well, tell me about it as though I were a student.

S: Ha! You mean I can be up in front of the room with the chalk and you'll sit and listen?

P: Absolutely. Start from "What is the second law?" Then, go on to "What is
entropy, really?" Those are the questions I'm always getting. Summarize your
answers fast, maybe with a few simple examples, so new students can catch up
and we all can go further.

S: OK, here goes……Rocks and balls fall if you drop them, tires blow out if
there's a hole in them, hot pans cool down in cool rooms, iron rusts
spontaneously in air, paper and wood in air will burn and make carbon dioxide
and water (if you give them a little boost over their activation energy barrier).
You think these are all completely different? Sure, they are, but they're all due
to the same cause -- some kind of energy in them spreads out!

Whether it's rocks or balls falling that you raised in the air (potential energy
changing to kinetic energy to "heat"), or hot pans cooling (thermal energy,
"heat"), or iron with oxygen changing to iron oxide (bond energy, enthalpy), or
wood burning in oxygen (bond energy, enthalpy), each change involves some or
all of its energy dispersing to the surroundings. (In cases like a gas expanding
into a vacuum, it's the gas's initial energy being spread out more in the new
larger volume without any change in the amount of energy, but that energy is
becoming more dispersed in space. Likewise, for ideal gases or liquids that mix
spontaneously -- the original energy of each component isn't changed; it's just
that each kind of molecule has become separated from similar molecules – that's
having their energies dispersed, more spread out than in the original pure gas or
liquid.)

The second law just summarizes our experience with those spontaneous
happenings and millions of others: All kinds of energy spontaneously spread out
from where they are localized to where they are more dispersed, if they're not
hindered from doing so. The opposite does not occur spontaneously -- you don't
see rocks concentrating energy from all the other rocks around them and jumping
up in the air, while the ground where they were cools down a little. Same way,
you'll never see pans in a cupboard getting red hot by taking energy from the
other pans or from the air or the cupboard.

P: Good start on the second law. Almost as good as my own. Now go to that big one, "What is entropy, really?"

S: Entropy is no mystery or complicated idea. Entropy is merely the way to
measure the energy that disperses or spreads out in a process (at a specific
temperature). What's complicated about that? Entropy change, ΔS, measures
how much energy is dispersed in a system, or how widely spread out the energy
of a system becomes (both always involving T). The first example in our text was
melting ice to water at 273 K where ΔS = q(rev)/T. So, in that equation, it's easy
to see that q (the enthalpy of fusion) is how much "heat" energy was spread out
in the ice to change it to water. What's the big deal?

P: But if entropy measures energy spreading out, and that is q, why bother to
divide it by T?

S: Aw, come on. You're trying to see if I really understand! The definition I just
told you has to include energy spreading out "at a specific temperature, T" and
that means q/T for any reversible process like melting ice (a phase change at a
particular temperature, the melting point.)
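The student's ice example can be checked numerically. A minimal sketch, assuming the standard molar enthalpy of fusion of ice (about 6010 J/mol, a value taken from general chemistry references, not from this dialogue):

```python
# Entropy change for melting one mole of ice at its melting point,
# using the reversible phase-change relation  delta_S = q_rev / T.
q_rev = 6010.0        # J/mol, molar enthalpy of fusion of ice (assumed standard value)
T = 273.15            # K, melting point of ice

delta_S = q_rev / T   # J/(mol K) of energy dispersed into the ice as it melts
print(round(delta_S, 1))   # ~22.0 J/(mol K)
```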

P: Good work! And as we talked about earlier, that simple dividing by T is
amazingly important. It's what makes entropy so POWERFUL in helping us
understand why things happen in the direction that they do. Let's take
the example of a big hot pan as a system that is cooling and let's say q is just a
little bit of thermal energy ("heat") that is spreading out from the pan. Let's write
the pan's temperature as T(pan) to show it is a slightly higher temperature than
the room's T(room). Then….

S: Not that old hot pan again! I'm going to go to sleep on that one…

P: Better not -- look out for the trick coming up!….As the pan cools just a little bit
(in a room that is just a little cooler than the pan -- so that temperatures of both
pan and room are practically unchanged, and thus the process of heat transfer is
a 'reversible' process in the system), the entropy change in the pan is -q/T(pan).
But if the change is "minus q over T(pan)" that means a decrease of entropy in
the system, and yet the pan is spontaneously cooling down! How can that be?
Spontaneous events occur only when energy spreads out and entropy increases …yes?

S: Ha -- you can't catch me on that! You're making a mistake by only talking
about the system, the pan. That whole process of a pan cooling down doesn't
just involve the pan -- it wouldn't cool at all if the surroundings of the pan were at
exactly the same T as the pan! So, in this case you have to include the slightly
cooler surroundings to which the thermal energy ("heat") is moving, in order to
see really what's going on in terms of entropy change. Sure, the pan decreases
in entropy but the cooler air of the room increases more in entropy.

P: Very good. You're not sleepy at all. In many processes and chemical reactions,
we can just focus on the system (especially as you'll see later), and its "free
energy" change will tell us whether a process happens spontaneously. But if you
see some process in a system that is
spontaneous and the system decreases in entropy (for example, what your
textbook calls an endothermic chemical reaction that goes spontaneously) look
out! Include the surroundings when thinking about what's happening and you'll
always find that the surroundings are increasing more in entropy. System plus
surroundings. System plus surroundings. Always include both in your thinking,
even though you may focus just on one.

Now, let's get back to the hot pan -- and I'll ask something that seems to be too
obvious, because you've already mentioned the surroundings. There's still a
hard question here: Can you scientifically predict why the pan will cool down in a
cool room, assuming nothing but knowledge of the second law?

S: Scientifically? Why bother? Everybody knows something hot will cool down in
a cool room. Foolish question.

P: I said tell me why, prove to me why! Don't play dumb and say what "everybody
knows". The second law tells us that energy spreads out, if it's not hindered from
doing so. What's the hindrance to thermal energy ("heat") flowing from the room
to the pan or the pan to the room? How can you prove -- on paper, not in an
experiment, not by asking "everybody" -- in what direction that "heat" energy will
spread out? Only entropy can tell you that, and do it only because of its
combination of q/T!

Here are the facts: The thermal energy we're talking about is q. The
temperature is a somewhat larger T(pan) in the hot pan system than the smaller
T(room) in the cooler room surroundings. Finally, energy spreading out is shown
-- and measured by -- an increase in entropy of the system plus the surroundings.
(That combination is called 'the universe' in many chemistry texts.)

So the question is, "In which direction is there an increase in entropy in this
'universe' of hot pan (q/T(pan)) and cooler room (q/T(room))?" (As you can see
from the larger size of T(pan) compared to T(room), q/T(pan) is a smaller number
than q/T(room).) Would the energy spread out from the cool room (surroundings)
to the hot pan (system)? If so, the entropy change would be q/T(pan) (pan,
system) - q/T(room) (room, surroundings) -- subtraction of a bigger number,
q/T(room), from a smaller number, q/T(pan), yielding a negative number, and a
decrease in entropy! That's your directional indicator. An overall decrease in
entropy means that the reaction or process will not go in that direction
spontaneously.

How about q spreading out from the hot pan (system) to the cooler room
(surroundings)? That would be q/T(room) - q/T(pan) -- which equals a positive
number, an increase in entropy and therefore characteristic of a spontaneous
process. That's how you can prove what will happen even if you've never seen it
happen.
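The professor's pencil-and-paper proof can be mirrored in a few lines of code. A minimal sketch with illustrative temperatures I've chosen myself (a 350 K pan and a 298 K room; neither value appears in the dialogue):

```python
# Direction test for heat flow between a hot pan and a cool room:
# the spontaneous direction is the one that increases the entropy
# of system plus surroundings ("the universe").
q = 100.0        # J, a small parcel of thermal energy (illustrative)
T_pan = 350.0    # K, pan temperature (assumed for illustration)
T_room = 298.0   # K, room temperature (assumed for illustration)

# Pan -> room: pan loses q at T_pan, room gains q at T_room.
dS_pan_to_room = -q / T_pan + q / T_room

# Room -> pan: room loses q at T_room, pan gains q at T_pan.
dS_room_to_pan = -q / T_room + q / T_pan

print(dS_pan_to_room > 0)   # True: heat flowing to the cooler room is spontaneous
print(dS_room_to_pan > 0)   # False: the reverse would decrease total entropy
```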

Entropy increase predicts what physical and chemical events will happen
spontaneously -- in the lab and everywhere in the world since its beginning.
That's why entropy increases (or equivalently, the second law) can be called
"time's arrow". Energy continually disperses and spreads out in all natural
spontaneous events. (It's our experience all our lives with spontaneous natural
events that gives us our psychological feeling of "time" passing.)

S: OK, OK, I got that. Sometimes we can look only at the system, but we should
always keep an eye on the surroundings, i.e., never forget the combo of system
plus surroundings! Now, what's that big new stuff about entropy you promised?

P: I want to talk about MOLECULAR thermodynamics -- how the energetic
behavior of molecules helps us easily understand what causes entropy change.
We'll start by looking at how molecules move, their three kinds of motion. Then
we'll see how the total motional energy of a system is spread out among those
kinds. Finally, we'll be able to talk about the simple method of measuring entropy
change, by the change in the numbers of ways in which the system's energy can
be spread out. e.g., in more ways when a gas expands or gases and liquid mix,
and also in more ways when anything is heated or a solid changes to a liquid and
liquid to a gas or in spontaneous chemical reactions.

S: THREE kinds of motion? I thought molecules just moved, period.

P: Molecules like water, with three or more atoms, not only can (1) whiz around in
space and hit each other ("translation", t) but also (2) rotate around axes
("rotation", r) and (3) vibrate along the bonds between the atoms ("vibration", v).
Here's a Figure that shows how water can rotate and vibrate.
When you heat any matter, you are putting energy in its molecules and so
they move. Of course, in solids, the only possible molecular motion is an
extremely small amount of translation. They really just vibrate in place, trillions of
times a second. (That's the whole molecule moving but, before it really gets
anywhere, almost instantly colliding with molecules next to it that are doing the
same thing, not the kind of vibration inside the molecules that is shown in Figure
1.) Neither rotation of molecules nor vibration along their bonds can occur freely in
solids -- only in liquids and gases. But before we see what comes next, I have to
ask you a question, "What do you remember about quantization of energy, and
about quantized energy levels?"

S: Ho Ho!! Now I get the chalk again to answer a question! First, I know that all
energy, whether the energy that molecules have when moving or light radiation
that zings through space, is not continuous. It's actually always in bunches,
separate packages, "quanta" of energy rather than like a continuous flow. That's
quantization of energy. In units, not something continuous like a river.

And those bunches or quanta are also on quantized energy levels? Do you
mean like that electron in a hydrogen atom where it ordinarily can only be at one
energy level (i.e., cannot possibly be in-between levels)? But it can be kicked up
to a higher energy level by some exact amount of energy input, the right sized
"quantum". Only certain energy levels are possible for electrons in a hydrogen
atom, the difference between any two levels is therefore quantized. Steps rather
than like a continuous slope or a ramp.
P: Good. Now I'll draw a Figure to show the differences in energy levels for the
motions in molecules -- like water and those more complex. At the left in the
Figure below is the energy "ladder" for vibrations inside the molecules, along
their bonds. There's a very large difference between energy levels in vibration.
Therefore, large quanta, available only at high temperatures of many hundreds
of degrees, are needed to change molecules from the lowest vibrational state to
the next higher and so on up.
phase water would be in the lowest vibrational state, the lowest vibrational level
in this diagram.) Then, just to the right of vibrational energy levels is the ladder of
rotational levels (with a slightly darker line on the bottom -- I'll get back to that in
a second.). The rotational energy levels are much closer together than
vibrational. That means it doesn't take as much energy (not such large quanta of
energy input) to make a molecule rotate as to make its atoms stretch their
bonds a little in vibration. So, a molecule like water with temperatures rising from
273 K to 373 K can get more and more energy from the surroundings to make it
rotate and then faster and faster (quantized in those steps or energy levels) in its
three different modes (Figure 1).

Now to that slightly darker or thicker line at the bottom of the rotational
energy ladder. It represents the huge number of energy levels of translational
motion. It doesn't take much energy just to make a molecule move fast, and then
faster. Thus, the quantized difference in translational energy levels is so small
that there are many levels extremely close to one another in a diagram like this
(which includes rotational and vibrational energy levels). Actually there should be
a whole bunch of thick lines, many lines on each one of those rotational levels to
show the huge numbers of different translational energies for each different
energy of rotation.

At the usual lab temperature, most water molecules are moving around 1000
miles an hour, with some at 0 and a few at 4000 mph at any given instant. Their
speeds constantly change as they endlessly and violently collide more than
trillions and trillions of times a second. When heated above room temperature,
they move even faster between collisions, though a molecule's speed may drop
to zero for an instant if two moving at the same speed hit head-on. The next
instant each of those "zero
speed" molecules will be hit by another molecule and start whizzing again. In a
liquid, they hardly get anywhere before being hit. In a gas they can move about a
thousand times the diameter of an oxygen molecule before a collision.
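Those "around 1000 miles an hour" speeds follow from the kinetic theory of gases. A minimal sketch using the Maxwell-Boltzmann mean-speed formula (the molar mass of water and the lab temperature are standard values I've supplied, not figures from the dialogue):

```python
import math

# Mean molecular speed from kinetic theory: v_mean = sqrt(8 R T / (pi M))
R = 8.314          # J/(mol K), gas constant
T = 298.0          # K, ordinary lab temperature
M = 0.018          # kg/mol, molar mass of water

v_mean = math.sqrt(8 * R * T / (math.pi * M))   # metres per second
v_mph = v_mean * 2.237                          # convert to miles per hour

print(round(v_mean))   # roughly 590 m/s
print(round(v_mph))    # roughly 1300 mph, the text's "around 1000 miles an hour"
```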

S: So?

P: So now we have the clues for seeing what molecules do that entropy
measures! First, the motions of molecules involve energy, quantized on specific
energy levels. Second, just as any type of energy spreads out within its particular
kind of energy levels, the energy of molecular motion spreads out as much as it
can on its t, r, and v levels.

("As much as it can" means that, with any given energy content, as indicated by
a temperature of 298 K (25.0° C), let's say, a mole of water molecules just
doesn't have enough energy for any significant number of them to occupy the
higher and highest energy levels of vibration or rotation, or even of translation at
many thousands of miles an hour.)

We say that those levels are "not accessible" under those conditions of T, V and
pressure. At any moment (because each molecule's energy is constantly
changing due to collisions) all the molecules have enough energy to be in, i.e., to
access, the very lowest energy levels. Most can access the mid-energy levels,
and some the slightly higher energy levels but there are many higher energy
levels that are not accessible until the molecules are given larger quanta of
energy when the system is heated.

A simple summary for ordinary systems would be: The most probable distribution
of the enormous numbers of molecular energies in a mole, let's say, on various
levels is a broad spread among all accessible levels but with more in the average
to lower levels than in the higher. [Diagrams of these kinds of "Boltzmann
distributions" and how they are calculated are in physical chemistry textbooks.]

S: That was “First”: molecular motion as quantized energy. Then, spreading out of
that energy to accessible energy levels was “second”. OK, what's “third”?

P: Third is the beginning of the big payoff. Imagine that you could take an
instantaneous snapshot of the energy of all the individual molecules in a flask
containing a mole of gas or liquid at 298 K. Remember that each molecule's
energy is quantized on a particular energy level. Then, each of the far-more-than
Avogadro's number of accessible energy levels (at that temperature and in that
volume) could have zero, one, or many many molecules in it or “on it”. The whole
snapshot showing each molecule's energy of that mole is called a microstate –
the exact distribution on energy levels of the energies of all the molecules of the
mole at one instant in time.

S: Aw, that's impossible!

P: You're right. It's so impossible that it's ridiculous -- to take that kind of a
snapshot. But it's not only possible to think about that concept, it is essential to
do so! The idea of a microstate is the start of a good understanding of how
molecules are involved in entropy. (And you know well that entropy change is the
basis for understanding spontaneous change in the world.)

Since a collision between even two molecules will almost certainly change the
speed and thus the energy of each one, they will then be on different energy
levels than before colliding. Thus, even though the total energy of the whole mole
doesn't change – and even if no other movement occurred – that single collision
will change the energy distribution of its system into a new microstate! Because
there are trillions times trillions of collisions per second in liquids or gases (and
vibrations in solids), a system is constantly changing from one microstate to
another, one of the huge number of accessible microstates for any particular system.

S: No change in the total energy of the mole (or whatever amount you start with),
but a constantly fast-changing exact distribution of each molecule's energy on
those gazillion energy levels, each ‘exact distribution' being a different
microstate?

P: Ya got it.

S: But what do all those microstates have to do with entropy, if anything?

P: IF ANYTHING?! You're just trying to get me to yell at you :-). Certainly, you
don't believe that I'd take all this time talking about microstates if they weren't
extremely important in understanding entropy, do you? OK, here it is: The
Boltzmann equation is the relation between microstates and entropy. It states
that the entropy of a substance at a given temperature and volume depends on
the logarithm of the number of microstates for it: S = kB ln (number of
microstates), where kB is the Boltzmann constant, R/N = 1.4 x 10^-23 J/K. (You
will often see W in the Boltzmann equation in textbooks. It stands for “Ways of
energy distribution”, the equivalent of the modern term “microstate”.) Then, any
entropy change from an Initial state to a Final state would be ΔS = kB ln [(number
of microstates)Final / (number of microstates)Initial]

S: Aha! I can predict what you're going to say now: "If the number of microstates
for a system (or surroundings) increases, there is going to be an increase in
entropy." That's true because the more Final microstates there are, the larger the
log of the ratio turns out to be; multiplied by kB, that gives a larger ΔS.

P: You're right. Hang in there and you'll be an expert!

S: Thanks, but I still have plenty of questions. What has this to do with what you
said was the fundamental idea about entropy – that energy spontaneously
changes from where it is localized to where it becomes more dispersed and
spread out? What does “more microstates” for a system have to do with its
energy being more spread out? A system can only be in ONE microstate at one instant!

P: Yes, in only one microstate at one instant. However, the fact that the system
has more ‘choices’ or chances of being in more different microstates in the NEXT
instant – if there are "more microstates for the system" – is the equivalent of
being "more spread out" or "dispersed" instead of staying in a few and thus being
localized. (Of course, the greatest localization would be for a system to have only
one microstate. That’s the situation at absolute zero T, because then S = kB ln W
= kB ln 1 = 0.) To see how the idea of energy dispersal works in thinking about exactly
what molecules do just as well as it works on the macro or "big scale beaker"
level, let's first summarize the molecular level. Then, let's check four important
examples of entropy change to see how energy dispersal occurs on both macro
and molecular scales.

You already stated the most important idea, a single microstate of a system has
all the energies of all the molecules on specific energy levels at one instant. In
the next instant, whether just one collision or many occur, the system is in a
different microstate. Because there are a gigantic number of different accessible
microstates for any system above 0 K, there are a very large number of choices
for the system to be in that next instant. So the greater the number of possible
microstates, the smaller is the chance that the system is in this one or that one of
all of those ‘gazillions'. It is in this sense that the energy of the system is more
dispersed when the number of possible microstates is greater: there are more
choices, in any one of which the energy of the system might be at one instant,
and thus less chance that the energy is localized or found in one or just a dozen
or only a million microstates. It is NOT that the energy is ever dispersed “over” or
“smeared over” many more microstates! That's impossible.

So, what does "energy becomes more dispersed or spread out" mean so far as
molecular energies are concerned? Simple! What's the absolute opposite of
being dispersed or spread out? Right -- completely localized. In the case of
molecular energy, it would be staying always in the same microstate. Thus,
having a huge number of additional microstates, in any one of which all the
system's energy might be, is really "more dispersed" at any instant. That's what
"an increase in entropy on a molecular scale" is.

S: That's the summary? It would help if you tell me how it applies to your four
basic examples of entropy increase on big-scale macro and on molecular levels.

P: First, macro (that you know very well already): Heating a system causes
energy from the hot surroundings to become more dispersed in that cooler
system. Simplest possible macro example: a warm metal bar touching a slightly
cooler metal bar. The thermal energy flows from the warmer to the cooler; it
becomes more spread out, dispersed. Or an ice cube in your warm hand: The
thermal energy from your hand becomes more dispersed when it flows into the
cold ice cube. In both these cases, the overall entropy change is the entropy
gained by the system, q/T (with T the slightly lower temperature), minus the
entropy lost by the surroundings, q/T (with T the higher temperature). Because
dividing the same q by the lower temperature gives the larger number, (larger
ΔS_system – smaller ΔS_surroundings) is positive, and therefore ΔS overall is an
increase in entropy. Energy has become more spread out or dispersed in the
‘universe’ (of system plus surroundings) because of the process of warming a
cooler system.
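
The two-bar example can be put into numbers. A minimal sketch (the q and the temperatures are made-up values for illustration):

```python
# Hypothetical numbers: q joules flow from a warm bar (T_hot) to a
# slightly cooler bar (T_cool) that it touches.
q = 100.0        # J transferred (assumed value)
T_hot = 300.0    # K, the warmer bar (surroundings)
T_cool = 298.0   # K, the cooler bar (system)

dS_system = q / T_cool        # entropy gained by the cooler bar
dS_surroundings = -q / T_hot  # entropy lost by the warmer bar
dS_total = dS_system + dS_surroundings

print(dS_system, dS_surroundings, dS_total)  # dS_total > 0: energy dispersed
```

Because the same q is divided by the lower temperature for the gain and by the higher temperature for the loss, the total is always positive for heat flowing from hot to cold.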

(Students in classes where the quantitative aspects of entropy using q/T are not
taught can still grasp the concept of energy becoming dispersed when a system
is heated and thus, entropy increasing. The plot of the numbers of molecules
having different molecular speeds at low and at high temperatures is shown in
most general chemistry textbooks. (Molecular speeds are directly related to
molecular energies by E = mv^2/2.) The curve that we see in such plots is actually
drawn along the tops of ‘zillions’ of vertical lines, each line representing the
speed of a number of molecules. At low temperatures, the plot for those speeds
is a fairly high smooth "mountain" down toward the left of the plot. That means
that most of the molecules have speeds /energies that are in a relatively small
range as shown by the majority of them being under that ‘mountain curve’. At
higher temperatures, the "mountain" has become flattened, i.e., the molecules
have a much broader range of different speeds, far more spread out in their
energies than when they were ‘crowded together under the mountain’ of the
lower-temperature curve. Thus, the definition of entropy as a measure or
indicator of the greater dispersal of energy is visibly demonstrated by the plots.
When a system is heated, its total energy becomes much more spread out in that
its molecules have a far greater range in their energies due to that additional
thermal energy input. The system's entropy has increased. It, the system, was
the cooler ‘object’ when it was heated by a flame or a hot plate; thus, it increased
in entropy more than the flame or hot plate decreased.)
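
The broadening of that speed "mountain" with temperature is easy to check numerically. A sketch using the Maxwell-Boltzmann most-probable speed, v_p = sqrt(2*kB*T/m), for O2 (the two temperatures are just example values):

```python
import math

K_B = 1.380649e-23            # Boltzmann constant, J/K
m_O2 = 32.0 * 1.66054e-27     # mass of one O2 molecule, kg

def most_probable_speed(T):
    """Peak of the Maxwell-Boltzmann speed distribution, in m/s."""
    return math.sqrt(2.0 * K_B * T / m_O2)

v_cold = most_probable_speed(298.0)   # roughly 390-400 m/s
v_hot = most_probable_speed(1000.0)   # roughly 720 m/s
print(v_cold, v_hot)  # the peak moves right, and the curve spreads, as T rises
```

Since v_p grows as the square root of T, heating widens the whole distribution: the same number of molecules is spread over a far larger range of speeds and energies.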

The conclusion is the same from another molecular viewpoint. There are many
many more microstates for a warmer object or a flame than for a cooler object or
substance. However, the transfer of energy to a cooler object causes a greater
number of additional microstates to become accessible for that cooler system
than the number of microstates that are lost for the hotter system. So, just
considering the increase in the number of microstates for the cooler system gives
you a proper measure of the entropy increase in it via the Boltzmann equation.
Because there are additional accessible microstates for the final state, there are
more choices for the system at one instant to be in any one of that larger number
of microstates – a greater dispersal of energy on the molecular scale.

S: Heating a system. That's one big example. Now for the second?

P: The second big category of entropy increase isn't very big in scope, but it is
often poorly described in general chemistry texts as "positional" entropy (as
though energy dispersal had nothing to do with the change and the molecules
were just in different ‘positions’!). It involves spontaneous increase in the volume
of a system at constant temperature.
at constant temperature. A gas expanding into a vacuum is the example that so
many textbooks illustrate with two bulbs, one of which contains a gas and the
other is evacuated. Then, the stopcock between them is opened, the gas
expands. In such a process with ideal gases there is no energy change; no heat
is introduced or removed. From a macro viewpoint, without any equations or
complexities, it is easy to see why the entropy of the system increases: the
energy of the system has been allowed to spread out to twice the original
volume. It is almost the simplest possible example of energy spontaneously
dispersing or spreading out when it is not hindered.

From a molecular viewpoint, quantum mechanics shows that whenever a system
is permitted to increase in volume, its molecular energy levels become closer
together. Therefore, any molecules whose energy was within a given energy
range in the initial smaller-volume system can access more energy levels in that
same energy range in the final larger-volume system: Another way of stating this
is "The density of energy levels (their closeness) increases when a system's
volume increases." Those additional accessible energy levels for the molecules'
energies result in many more microstates for a system when its volume becomes
larger. More microstates mean many many more possibilities for the system's
energy to be in any one microstate at an instant, i.e., an increase in entropy
occurs due to that volume change. That's why gases spontaneously mix and
why they expand into a vacuum or into lower pressure environments.
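
From the macro side, the entropy increase for an ideal gas expanding isothermally is ΔS = nR ln(V_final/V_initial). A quick sketch for one mole doubling its volume into the evacuated bulb:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def expansion_entropy(n_moles, V_initial, V_final):
    """Entropy change for isothermal ideal-gas expansion, in J/K."""
    return n_moles * R * math.log(V_final / V_initial)

# One mole doubling its volume (two equal bulbs, stopcock opened):
dS = expansion_entropy(1.0, 1.0, 2.0)
print(dS)  # about 5.8 J/K, positive: the energy is spread over twice the volume
```

No heat flows and no work is done, yet ΔS is positive, just as the energy-dispersal picture says it must be.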

S: OK. Heating a system for one, and a gas expanding for two. What’s the third
example of basic processes involving entropy change from a macro viewpoint
and then a molecular one?

P: The third category isn’t talked about much in some general chemistry texts,
but it’s enormously important -- mixing or simply "putting two or more
substances together". It is not the mixing process itself that causes the
spontaneous entropy increase responsible for the spontaneous mixing of ‘like’
liquids or the mixing (dissolving) of many solids in liquids. Rather, it is just the
separation of one kind of molecule from others of its kind, which occurs when
liquids are mixed or a solute is added to a pure solvent, that is the source of
greater entropy for substances in a mixture. The motional energy of the
molecules of each component is more dispersed in a solution than is the
motional energy of those molecules in the component’s pure state.

[One rather different case of entropy in mixtures is that of two or more ideal
gases. Mixing such gases could be considered as the gas expansion I mentioned
a minute ago, because one volume of gas A plus one of B gives each gas twice
its original volume. Thus, this is a clear example of the energy of each gas being
spread out into a greater volume, and that is the cause of an increase in the
entropy of each. However, calculations using the techniques from statistical
mechanics that are mentioned two paragraphs below give the same results as
those based on simple volume change.]

When liquids are mixed to form a solution, the entropy change in each is not
obviously related to a volume change, nor is this true in the most important case
of ideal solutes dissolving in solvents. (Dissolving real salts in water involves
three energy factors, the separation of ions, the reaction of ions with water, and
the dispersal of the energy of the ions in the solution. The latter factor is related
to entropy change.) From the macro viewpoint, the entropy increase in forming a
solution from a non-ionic solid and a solvent can only be described as due to
having two substances present in a solution (hardly an explanation!).
Quantitatively, the maximal entropy change in mixing occurs when the mole
fraction of each component is 0.5.
From a molecular viewpoint, the calculation and explanation of entropy change in
forming a solution from A and B liquids (or A as an ideal solute and B as the
solvent) is not as simple as those we have considered. (It is derived in statistical
mechanics.) The calculation of entropy change comes from the number of
different ‘cells’ (that are actually microstates) that can be formed from
combinations of A and B. Far more such cells or microstates are possible from A
and B than of the pure substances A or B. Thus, the increase in the
‘combinatoric’ number of cells or microstates as compared to the pure A or B is
the basis for calculation of the entropy of the mixture. A solvent in a solution has
many more microstates for its energy, and has increased in entropy compared to
the pure solvent. Thus, there is less tendency for a solvent to escape from the
solution to its vapor phase when heated or to its solid phase when cooled than
does the pure solvent. This is the basis of all colligative effects: higher boiling
points, lower freezing points, and greater osmotic pressure than the pure solvent.
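
The 'combinatoric' counting described above leads, for an ideal binary mixture, to ΔS_mix = -nR(x_A ln x_A + x_B ln x_B). A sketch confirming that the maximum falls at a mole fraction of 0.5:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def mixing_entropy(x_a):
    """Ideal entropy of mixing per mole of an A/B mixture, J/(mol*K)."""
    x_b = 1.0 - x_a
    return -R * (x_a * math.log(x_a) + x_b * math.log(x_b))

print(mixing_entropy(0.5))   # about 5.8 J/(mol*K), the maximum (= R ln 2)
print(mixing_entropy(0.1))   # smaller: less separation of like molecules
```

The entropy of mixing is always positive for any composition, which is why 'like' liquids mix spontaneously, and it peaks at the 50:50 composition mentioned above.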

The elevation of the boiling point of a solvent in a solution? A solvent in a solution
has its energy more dispersed in the solution than the pure solvent. Therefore, at
the pure solvent’s normal boiling point, not enough solvent molecules tend to
escape from the solution for the vapor pressure to equal the atmospheric
pressure, and the solution does not ‘boil’. More thermal energy (more ‘heat’)
must be forced into the solution
to cause more molecules to be able to escape (i.e., it has to be heated to a
higher temperature than the pure solvent’s boiling point) for the solvent
molecules to overcome the fact that their energy has become more spread out in
the solution.

The freezing point of a solvent in a solution? As above, the solvent’s energy is
more spread out in the solution than when it was a pure solvent, and so at the
pure solvent’s normal freezing point, there is less tendency for the solvent
molecules to leave the solution and form the intermolecular bonds of a solid.
Cooling to a lower temperature than the pure solvent’s freezing point causes
energy to flow to the colder surroundings and the solvent molecules’ energy to be
less dispersed, more localized, more prone to form intermolecular bonds, i.e., to freeze.

Osmotic pressure? The phenomenon of solvent molecules moving through
semi-permeable membranes (those thin films that allow solvent molecules to pass
through them but obstruct the flow of ions or large molecules) is fundamentally
important in biology. Pure solvent that is on one side of a semi-permeable
membrane will flow through the membrane if there is a solution on the other side
because the result would be an increase in the pure solvent’s entropy -- a
spontaneous process.

S: Now for the fourth example of basic processes – what's the entropy change in
them from a macro viewpoint and then a molecular one?

P: OK. The fourth example is phase change, such as solid ice melting to liquid
water and liquid water vaporizing to gaseous water (steam). Seen as a macro
process, there is clearly a great dispersal of energy from the surroundings to a
system, the enthalpy of fusion, the perfect illustration of spontaneous change due
to energy dispersal. (Of course, the converse, freezing or solidification,
represents spontaneous change if the surroundings are colder than the system.)
The fact that there is no temperature change in the system despite such a large
input of energy would be a surprising situation if you knew nothing about
molecules and intermolecular bonds.

This illustration of entropy change and its equation may be the first equation you
see in your text for the concept. It's the original (1865 and still valid) equation of
ΔS = q(rev)/T. The calculation is easy but the explanation is impossible without
some knowledge of what is occurring on a molecular level.
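
As a sketch of that 1865 equation in action: melting one mole of ice at its melting point, using the standard handbook value of about 6010 J/mol for the enthalpy of fusion (a value not given in the text above):

```python
# Entropy of fusion for one mole of ice via dS = q_rev / T.
q_rev = 6010.0    # J/mol, enthalpy of fusion of ice (handbook value)
T_melt = 273.15   # K, melting point of ice

dS_fusion = q_rev / T_melt
print(dS_fusion)  # about 22 J/(mol*K): energy dispersed into the melting ice
```

The calculation really is that easy; explaining where that dispersed energy goes is what requires the molecular picture below.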

(Basic lack of knowledge about molecules was the reason that those unfortunate
words "order" and "disorder" started to be used in the 1890s to describe entropy
change. Leading chemists of that day actually did not believe that molecules
existed as real particles. Virtually nothing was known about chemical bonds.
Boltzmann believed in the reality of molecules but thought that they might be
nearly infinitesimal in size. Thus, in 1898 it was totally excusable for people to
describe the entropy change from a crystalline material like ice to fluid water as
"order" to "disorder". But with the discovery of the nature of molecules, of
chemical bonding, of quantum mechanics, and of the motional energy of
molecules as quantized, all by 1950, "order" and "disorder" are inexcusable in
describing entropy today.)

From a molecular viewpoint of phase change, e.g., from solid ice to liquid water,
we should first see what is occurring in the molecules. The large amount of
thermal energy input, the enthalpy of fusion, causes breaking of intermolecular
hydrogen bonds in the solid but the temperature stays at 273.15 K. Therefore,
the motional energy of molecules in the new liquid water is the same quantity as
that of the molecules that were each vibrating violently in one place in the crystal lattice.
The difference is that now molecules in the water are not held so rigidly in the
structure of ice. Of course, at 273.16 K (!), they are not zooming around as they
would if they were in the gas form, but though they are all jumbled together (and
hydrogen-bonded – remember: ice-cold liquid water is more dense than ice),
they are extremely rapidly breaking their H bonds and forming new ones (on a
timescale of trillionths of a second). So maybe they could be compared to a fantastically
huge crowded dance in which the participants hold hands momentarily, but
rapidly break loose to grab other hands (OK, H-bonds!) in loose circles many
more than billions of times a second. Thus, that initial motional energy of
vibration (at 273.14 K!) that was in the crystal is now distributed among an
enormous additional number of translational energy levels.
S: All right. You dazzle me with those hydrogen bonds of the ice broken and a
zillion other ‘breaks and makes’ going on in the liquid, but what happened to all
your talk about microstates being important?

P: Hang in there. I'm just a step ahead of you. Because there are so many
additional newly accessible energy levels due to the water molecules being able
to break, make, AND move a bit, that means that there are far more additional
accessible microstates. Now, maybe you can take over…

S: Sure. Additional accessible microstates mean that at any instant — a trillionth
of a trillionth of a second — the total energy of the system is in just one
microstate but it has very many more choices for a different microstate the next
instant than without “additional accessible microstates”. More choices are
equivalent to energy being more dispersed or spread out and greater energy
dispersal means that the system of liquid water has a larger entropy value than
solid water, ice.

P: Good response. Just for fun, I'll show you numbers for what “additional
accessible microstates” means.

The standard state entropy of any substance, S0, is really ΔS0 because it is the
entropy change from 0 K to 298 K (or to 273.15 K, in the case of ice). When we
look in the Tables, we find that the S0 for a mole of ice is 41 joules/K. So, using
the Boltzmann equation, S0 = 41 J/K = kB ln [microstates at 273 K / 1]. Now, since
kB = 1.4 x 10^-23 J/K, 41/(1.4 x 10^-23) = 2.9 x 10^24 = ln (microstates at 273 K).
You're probably more comfortable using base-10 logs rather than natural
logarithms, so let's convert that result by multiplying by 0.43, giving the log10 of
the number of microstates for ice as 0.43 x 2.9 x 10^24, or 1.3 x 10^24.
Wait a minute: you're used to seeing Avogadro's number, N, as 6 x 10^23, but we're
now talking about a number that is 10 raised to a power greater than that, 10^(1.3 x 10^24)!
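
That arithmetic (and the analogous calculation for liquid water at the same temperature, which comes next) can be sketched directly:

```python
import math

K_B = 1.4e-23   # Boltzmann constant, J/K, rounded as in the text

def log10_microstates(S0):
    """log10 of the number of microstates implied by S0 = kB ln W."""
    ln_W = S0 / K_B
    return ln_W / math.log(10)

ice = log10_microstates(41.0)    # S0 of a mole of ice at 273.15 K, in J/K
water = log10_microstates(63.0)  # S0 of liquid water at 273.15 K, in J/K
print(ice, water)  # ~1.3e24 and ~2.0e24: W is 10 raised to these powers
```

Note that these are the exponents: the microstate counts themselves, 10 to these powers, are numbers no calculator can hold directly, which is exactly why the logarithmic Boltzmann equation is used.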

The energy in a cube of ice is constantly being redistributed in any one of a
humanly incomprehensible number of ways, microstates. From the above
calculation via the Boltzmann equation, there are 10^1,300,000,000,000,000,000,000,000
microstates for “orderly” crystalline ice, with the energy of the ice in only one
microstate at one instant. Do you see now why it is not wise to talk about "order"
and entropy in ice compared to "disorderly" water? What could be more
disorderly than that incredible mess for ice of not just trillions times trillions times
trillions times trillions of microstates (which would be only 10^48!) but
10^1,300,000,000,000,000,000,000,000? (There are only about 10^70 particles in the
entire universe!)

Now let's calculate how many microstates there are for water -- in any one of
which that "disorderly" water can possibly distribute its energy at 273.15 K. At
that temperature, water's S0 is 63 J/K. Going through the same calculations as
above, we find that there are 10^2,000,000,000,000,000,000,000,000 microstates for water. How
about that? Yes, water has more possibilities than ice – the liquid system could
distribute its energy to any one of more microstates -- (and thus we can say that
its energy is more "spread out" than that of ice). But certainly, this is not
convincing evidence of a contrast between "disorder" and "order"! We can't have
any concept of what these huge numbers mean; we can only write them on
paper and manipulate them.

Even though there are more microstates for water at 273.15 K than for ice, the
difference between exponents of 2.0 x 10^24 and 1.3 x 10^24 is surely not
convincing evidence of a contrast between "disorder" and "order". "Disorder" in
relation to entropy is an obsolete notion and has been discarded by most new
editions of US general chemistry texts.


Important stuff to remember

BUT if your textbook and prof disagree with the following, DO
IT THEIR WAY! Grades, and a good relationship with her/him,
are more important than anything while you're in the class.
Just keep this page for the future, when you get to a better
course or graduate and can emphasize fundamentals.

(As an excellent method of increasing your 'mental muscles',
when your prof makes an error, think to yourself, "Knowing
what I know about entropy, what should he/she have said? If I
were teaching, what would I say correctly?" and scribble it in
your notes. But keep it to yourself!)

A qualitative statement of the second law of thermodynamics

Energy of all types spontaneously flows from being localized or concentrated to
becoming more dispersed or spread out, if it is not hindered.

The generalization for classical thermodynamics (macro thermo, Clausius):

Entropy change measures either (1) how much molecular motional energy has
been spread out in a reversible process, divided by the absolute temperature at
which it occurs (q_rev/T), or (2) how spread out the original motional energy
becomes at a specific temperature.
(1) could involve heating a system very very gently (i.e., so the
temperature stays just barely above the original system temperature, nearly
reversibly) by energy being transferred from hotter surroundings, such as a
flame or a hot plate, to the cooler system. In irreversible heating (i.e., to any
temperature, large or small, above the system's original temperature), the
entropy change can be calculated by simulating tiny reversible steps via
calculus: ΔS = ∫(Cp/T) dT.
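
That calculus shortcut reduces to ΔS = Cp ln(T_final/T_initial) when Cp is roughly constant. A sketch using water's molar heat capacity of about 75.3 J/(mol·K) (a standard handbook value; the temperatures are example choices):

```python
import math

Cp_water = 75.3  # J/(mol*K), molar heat capacity of liquid water (handbook value)

def heating_entropy(Cp, T_initial, T_final):
    """Entropy change for heating with constant Cp: the integral of Cp/T dT."""
    return Cp * math.log(T_final / T_initial)

dS = heating_entropy(Cp_water, 298.0, 348.0)
print(dS)  # positive: the added thermal energy is dispersed in the warmer water
```

Even though the actual heating may be wildly irreversible, summing imagined tiny reversible steps in this way gives the true entropy change, because entropy is a state function.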

(2) involves expansion of a gas into a vacuum, mixing of gases or of
liquids, and dissolving solids in liquids because the energy of the gas or of each
constituent can be thought of as being literally spread out in a larger three-
dimensional volume. This is not strictly correct scientifically; the molecules'
motional energy of each constituent is actually spread out in the sense of having
the chance of being, at one instant, in any one of many many more microstates
in the larger gas volume or in a mixture than each had before the process of
expansion or of mixing.

The generalization for molecular thermodynamics (molecular thermo, Boltzmann):

Entropy measures the energy dispersal for a system by the number of accessible
microstates, the number of arrangements (each containing the total system
energy) over which the molecules' quantized energy can be distributed, and in one
of which – at a given instant – the system exists prior to changing to another.
ΔS = kB ln [Microstates_final/Microstates_initial]

Entropy is not a driving force.

Energy of all types changes from being localized to becoming dispersed or
spread out, if it is not hindered from doing so. The overall process is an increase
in thermodynamic entropy, enabled in chemistry by the motional energy of
molecules (or the energy from bond energy change in a reaction) and actualized
because the process makes available a larger number of microstates, a maximal
probability of energy dispersal.
The two factors, energy and probability, are both necessary for thermodynamic
entropy change but neither is sufficient alone. In sharp contrast, information
‘entropy’ depends only on the Shannon H, and ‘sigma entropy’ in physics (σ =
S/kB) depends only on probability as ln W.

Entropy is not "disorder".

Entropy is not a "measure of disorder".

Disorder in macro objects is caused by energetic agents (wind, heat,
earthquakes, driving rain, people, or, in a quite different category, gravity) acting
on them to push them around to what we see as "disorderly" arrangements, their
most probable locations after any active agent has moved them. The agents
(other than gravity!) undergo an increase in their entropy in the process. The
objects are unchanged in entropy if they are simply rearranged.

If an object is broken, there is no measurable change in
entropy until the number of bonds broken is about a
thousandth of those unchanged in the object. This means
that one fracture or even hundreds make no significant
difference in an object's entropy. (It is only when something
is ground to a fine powder that a measurable increase or
decrease in entropy occurs -- the sign of change depending
on the kinds of new bonds formed after the break compared
to those in the original object.)

Even though breaking a mole-sized crystal of NaCl in half involves slight changes
in hundreds to thousands of the NaCl units adjacent to the fracture line, in
addition to those actually on such a line, there are still at least 10^6 bonds totally
unaffected. Thus we can see why a single fracture of a ski (unhappy as it is to the
skier), or a house torn apart into ten thousand pieces by a hurricane (disastrous
as it is to the homeowner), represents a truly insignificant entropy change.
The only notable scientific entropy change occurs in the agent causing the
breaks. Human concepts of order are misplaced in evaluating entropy.

All physical and chemical processes involve an increase in entropy in the
combination of (system + surroundings). System plus surroundings!

Professor John P. Lowe’s explanation of the importance of the occupancy of
energy levels as a genuine basis for entropy (rather than "randomness" or
"disorder") via informal Q&A is in the Journal of Chemical Education, 1988, 65
(5), 403–406. Pages 405 and 406 are especially pertinent and very readable.

An excellent introduction to Professor Norman C. Craig’s procedure of
attacking entropy problems is in "Entropy Analyses of Four Familiar Processes",
Journal of Chemical Education, 1988, 65 (9), 760–764. Professor Craig’s 200-page
paperback Entropy Analysis (John Wiley, New York, 1992) is the best short
technical introduction to the laws of thermodynamics and the correct utilization of
entropy in print. It is accessible to a diligent first-year college student, and
especially valuable to a student beginning physical chemistry thermo (as well as
to mature chemists who really never understood entropy in their thermodynamics
courses).
Most 2005-2006 editions of US college general chemistry texts have
discarded the use of “disorder” in connection with entropy. Three or four still
present entropy as coming in "two flavors", "positional" entropy and "thermal"
entropy. (There is only one kind of entropy change: plain vanilla, measured by
dq_rev/T on a macro level, or on the molecular level by the change in the
number of microstates after vs. before a process, via the Boltzmann entropy
equation.) "Positional entropy" fails to recognize that its calculation of particulate
volume increase from statistical mechanics is fundamentally counting
microstates, not just ‘cells’ of location or volume in space. Microstates are the
accessible arrangements for a system’s energy, not arrangements of its
molecules in space.


•, mentioned in this section, is a practical introduction to the second law from
which illustrations in this present site were taken, but the treatment of entropy
there emphasizes macro thermodynamics except for an introduction of
molecular thermodynamics in the Appendix.
• shows why shuffled cards,
messy desks and disorderly dorm rooms are NOT examples of entropy
increase: Journal of Chemical Education, 1999, 76 (10), 1385 – 1387.
• "Disorder -- A Cracked Crutch For Supporting Entropy Discussions" shows
that "disorder" is an archaic, misleading description of entropy for
chemistry students. It should be replaced by the simple description of
entropy as a measure of the dispersal or spreading out of molecular
motional energy in a process (as a function of temperature).
• "Entropy Is Simple, Qualitatively" is a more formal presentation of the
concepts in this site, for instructors.
• includes the foregoing articles and
seven other letters and articles that use the approach of this site.

Chapter 1

The View from Nowhen

• Outline of the book
• Remarks on style
• The stock philosophical debates about time
• The arrows of time
• The puzzle of origins

SAINT AUGUSTINE (354--430) remarks that time is at once familiar and deeply mysterious.
"What is time?" he asks. "If nobody asks me, I know; but if I were desirous to explain it
to one that should ask me, plainly I know not."[1] Despite some notable advances in
science and philosophy since the late fourth century, time has retained this unusual dual
character. Many of the questions that contemporary physicists and philosophers ask about
time are still couched in such everyday terms as to be readily comprehensible not only to
specialists on both sides of the widening gulf between the two subjects---that in itself is
remarkable enough---but also to educated people who know almost nothing about either
field. Time is something rather special, then. Few deep issues lie so close to the surface,
and fewer still are yet to be claimed by a single academic discipline.

This book is concerned with a particular kind of question about time. What is the
difference between the past and the future? Could---and does---the future affect the past?
What gives time its direction, or "arrow"? Could time be symmetric, or a universe be
symmetric in time? What would such a world be like? Is our world like that? The book is
concerned with what modern physics has to say about issues of this kind, but I am not
writing as a physicist, explaining the insights of my discipline to a general audience. I am
a philosopher, and the vantage point of the book is philosophical. One of my main aims is
to sort out some philosophical confusions in the answers that contemporary physicists
typically give to these questions. I want to provide physicists themselves, as well as
philosophers and general readers, with a clearer picture of these issues than has yet been
available.

What are these philosophical confusions? The most basic mistake, I shall be arguing, is
that people who think about these problems---philosophers as well as physicists---often
fail to pay adequate attention to the temporal character of the viewpoint which we
humans have on the world. We are creatures in time, and this has a very great effect on
how we think about time and the temporal aspects of reality. But here, as elsewhere, it is
very difficult to distinguish what is genuinely an aspect of reality from what is a kind of
appearance, or artifact, of the particular perspective from which we regard reality. I want
to show that a distinction of this kind is crucial to the project of understanding the
asymmetry of time. In philosophy and in physics, theorists make mistakes which can be
traced to a failure to draw the distinction sufficiently clearly.

The need to guard against anthropocentrism of this kind is a familiar theme in the history
of both science and philosophy. One of the great projects in the history of modern thought
has been the attempt to achieve the untainted perspective, the Archimedean view of
reality---"the view from nowhere," as the philosopher Thomas Nagel calls it.[2] The main
theme of this book is that neither physics nor philosophy has yet paid enough attention to
the temporal aspect of this ancient quest. In particular, I want to show that if we want to
understand the asymmetry of time then we need to be able to understand, and quarantine,
the various ways in which our patterns of thought reflect the peculiarities of our own
temporal perspective. We need to acquaint ourselves with what might aptly be called the
view from nowhen.

Our interest in questions of temporal asymmetry thus lies at more than one level. There is
the intrinsic interest of the physical issues themselves, of course, and the book aims to
present a clearer, more insightful, and more accessible view of the main problems and
their possible resolutions than has yet been available. In criticizing previous writers,
however, my main argument will be that when discussing temporal asymmetry, they have
often failed to disentangle the human temporal perspective from the intended subject
matter. And it is the asymmetry of our ordinary temporal perspective which is the source
of the difficulty, so that the task of unraveling the anthropocentric products of this
perspective goes hand in hand with that of deciding how much of temporal asymmetry is
really objective, and therefore in need of explanation by physics.

The book thus straddles the territory between physics and philosophy. On the physical
side, my main goal will be to obtain a clear view of the problem, or problems, of the
asymmetry of time, to correct certain common errors in existing approaches to the
problem, and to assess current prospects for a solution. But the main contribution I bring
to these problems will be a philosophical one, particularly that of showing how errors
arise from a failure to distinguish between the viewpoint we have from within time and
the Archimedean standpoint from which physics needs to address these issues. On the
purely philosophical side, I shall be interested in the project of characterizing this view
from nowhen---of deciding which features of the ordinary world remain visible from this
perspective, for example, and which turn out to depend on the temporal viewpoint we
normally occupy.

Perspective shifts of this kind are nothing new in science, of course. Some of the most
dramatic revolutions in the history of science have been those that have overturned
previous conceptions of our own place in nature. The effect is something like that of
coming suddenly to a high vantage point---at once exciting and terrifying, as a familiar
view of our surroundings is revealed to be a limited and self-centered perspective on a
larger but more impersonal reality. In physics the most dramatic example is the
Copernican revolution, with its overthrow of the geocentric view of the universe. In
biology it is Darwinism, with its implications for the place of humanity in nature. These
two examples are linked in the more gradual but almost equally revolutionary discovery
of cosmological time (and hence of the insignificance of human history on the
cosmological scale).

While the perspective shift I shall be recommending in this book is not in this league---it
would be difficult to significantly dehumanize a world in which the place of humanity is
already so insignificant---it does have some of their horizon-extending impact. For it
turns on the realization that our present view of time and the temporal structure of the
world is still constrained and distorted by the contingencies of our viewpoint. Where time
itself is concerned, I claim, we haven't yet managed to tease apart what Wilfrid Sellars
calls the scientific and manifest images---to distinguish how the world actually is, from
how it seems to be from our particular standpoint.

As in earlier cases, the intellectual constraint is largely self-imposed. To notice the new
standpoint is to be free to take it up, at least for the purposes of physics. (We can't
actually stand outside time, but we can imagine the physics of a creature who could.)
Again the discovery is both exciting and unsettling, however, in showing us a less
anthropocentric, more objective, but even more impersonal world.

Outline of the book

The remainder of this introductory chapter deals with some important preliminaries. One
of these is to set aside certain philosophical issues about time which won't be dealt with
later in the book. Philosophical discussions of time have often focused on two main
issues, that of the objectivity or otherwise of the past-present-future distinction, and that
of the status of the flow of time. Philosophers have tended to divide into two camps on
these issues. On the one side are those who treat flow and the present as objective
features of the world; on the other, those who argue that these things are mere artifacts of
our subjective perspective on the world. For most of the book I shall be taking the latter
view for granted. (Indeed, I take the central philosophical project of the book to be
continuous with that of philosophers such as D. C. Williams, J. J. C. Smart, A.
Grünbaum, and D. H. Mellor.)[3] I shall presuppose that we have learnt from this
tradition that many of our ordinary temporal notions are anthropocentric in this way. My
aim is to extend these insights, and apply them to physics. I shall not defend this
presupposition in the sort of detail it receives elsewhere in the philosophical literature---
that would take a book to itself---but I set out below what I see as the main points in its
favor.

The second important preliminary task is to clarify what is meant by the asymmetry or
arrow of time. A significant source of confusion in contemporary work on these topics is
that a number of distinct notions and questions are not properly distinguished. It will be
important to say in advance what our project is, and to set other issues to one side. Again,
however, I shall draw these distinctions rather quickly, with no claim to be
philosophically comprehensive, in order to be able to get on with the main project.

With the preliminaries out of the way, the remainder of the book is in two main parts. The
first part (chapters 2--4) focuses on the three main areas in which temporal asymmetry
turns up in modern physics: in thermodynamics, in phenomena involving radiation, and
in cosmology. In all these cases, what is puzzling is why the physical world should be
asymmetric in time at all, given that the underlying physical laws seem to be very largely
symmetric. These chapters look at some of the attempts that physicists have made to
solve this puzzle, and draw attention to some characteristic confusions and fallacies that
these attempts tend to involve.

Chapter 2 deals with thermodynamics. Few ideas in modern physics have had as much
impact on popular imagination and culture as the second law of thermodynamics. As
everyone knows, this is a time-asymmetric principle. It says that entropy increases over
time. In the late nineteenth century, as thermodynamics came to be addressed in terms of
the symmetric framework of statistical mechanics, the puzzle just described came slowly
into view: where does the asymmetry of the second law come from? I shall explain how,
as this problem came into view, it produced the first examples of a kind of fallacy which
has often characterized attempts to explain temporal asymmetry in physics. This fallacy
involves a kind of special pleading, or double standard. It takes an argument which could
be used equally well in either temporal direction and applies it selectively, in one
direction but not the other. Not surprisingly, this biased procedure leads to asymmetric
conclusions. Without a justification for the bias, however, these conclusions tell us
nothing about the origins of the real asymmetry we find in the world.

Fallacies of this kind crop up time and time again. One of the main themes of this book is
that we need the right starting point in order to avoid them. In chapter 2 I'll use examples
from the history of thermodynamics to illustrate this idea. I shall also describe an
exceptional early example of the required atemporal viewpoint, in the work of Ludwig
Boltzmann, the Austrian physicist who was responsible for some of the fundamental
results of the period. As we'll see, Boltzmann was perhaps the first person to appreciate
the true importance of the question: Why was entropy low in the past? The chapter
concludes with a discussion as to what it is that really needs to be explained about the
asymmetry of thermodynamics---I shall argue that very few writers have drawn the right
lesson from the nineteenth century debate---and offers some guidelines for avoiding the
kinds of mistakes that have plagued this field for 150 years.

Chapter 3 looks at the time asymmetry of a wide range of physical phenomena involving
radiation. Why do ripples on a water surface spread outwards rather than inwards, for
example? Similar things happen with other kinds of radiation, such as light, and
physicists have been puzzled by the temporal asymmetry of these phenomena since the
early years of the twentieth century. In discussing this issue, it turns out to be important to
correct some confusions about what this asymmetry actually involves. However, the
chapter's main focus will be the issue of the relation between this asymmetry and that of
thermodynamics. I want to show that several prominent attempts to reduce the former
asymmetry to the latter turn out to be fallacious, once the nature of the thermodynamic
asymmetry is properly appreciated. In particular, I want to look at a famous proposal by
the American physicists John Wheeler and Richard Feynman, called the Absorber Theory
of Radiation. At first sight, this theory seems to involve the very model of respect for an
atemporal perspective. I shall show that Wheeler and Feynman's reasoning is confused,
however, and that as it stands, their theory doesn't succeed in explaining the asymmetry
of radiation in terms of that of thermodynamics. However, the mathematical core of the
theory can be reinterpreted so that it does show---as Wheeler and Feynman believed, but
in a different way---that radiation is not intrinsically asymmetric; and that its apparent
asymmetry may be traced, if not to the thermodynamic asymmetry itself, then to
essentially the same source. (In effect, then, I want to show that Wheeler and Feynman
produced the right theory, but tried to use it in the wrong way.)

Chapter 4 turns to cosmology. As chapter 2 makes clear, the search for an explanation of
temporal asymmetry leads to the question why the universe was in a very special
condition early in its history---why entropy is low near the big bang. But in trying to
explain why the universe is like this, contemporary cosmologists often fall for the same
kind of fallacies of special pleading, the same application of a double standard with
respect to the past and the future, as their colleagues elsewhere in physics. In failing to
adopt a sufficiently atemporal viewpoint, then, cosmologists have failed to appreciate
how difficult it is to show that the universe must be in the required condition at the big
bang, without also showing that it must be in the same condition at the big crunch (so that
the ordinary temporal asymmetries would be reversed as the universe recollapsed).
Cosmologists who do consider the latter possibility often reject it on grounds which, if
applied consistently, would also rule out a low-entropy big bang. As we shall see, the
mistakes made here are very much like those made a century earlier, in the attempt to put
the asymmetry of thermodynamics on firm statistical foundations. My concern in this
chapter is to draw attention to these mistakes, to lay down some guidelines for avoiding
them, and to assess the current prospects for a cosmological explanation of temporal
asymmetry.

In the first part of the book, then, the basic project is to try to clarify what modern physics
tells us about the ways in which the world turns out to be asymmetric in time, what it tells
us about how and why the future is different from the past. And the basic strategy is to
look at the problem from a sufficiently detached standpoint, so that we don't get misled
by the temporal asymmetries of our own natures and ways of thinking. In this way, I
argue, it is possible to avoid some of the mistakes which have been common in this
branch of physics for more than a century.

In the second part of the book, I turn from the physics of time asymmetry to physics more
generally. The big project of this part of the book is to show that the atemporal
Archimedean perspective has important ramifications for the most puzzling puzzle of all
in contemporary physics: the meaning of quantum theory. My view is that the most
promising understanding of quantum theory has been almost entirely overlooked, because
physicists and philosophers have not noticed the way in which our ordinary view of the
world is a product of our asymmetric standpoint. Once we do notice it---and once we
think about what kind of world we might expect, given what we have discovered about
the physical origins of time asymmetry---we find that we have good reason to expect the
very kind of phenomena which make quantum theory so puzzling. Quantum theory turns
out to be the kind of microphysics we might have expected, in other words, given our
present understanding of the physical origins of time asymmetry. Most important of all,
this path to quantum theory removes the main obstacles to a much more classical view of
quantum mechanics than is usually thought to be possible. It seems to solve the problem
of nonlocality, for example, and to open the door to the kind of interpretation of quantum
theory that Einstein always favored: a view in which there is still an objective world out
there, and no mysterious role for observers.

This is a very dramatic claim, and readers are right to be skeptical. If there were a
solution of this kind in quantum theory, after all, how could it have gone unnoticed for so
long? The answer, I think, is this: the presuppositions this suggestion challenges are so
deeply embedded in our ordinary ways of thinking that normally we simply don't notice
them. If we do notice them, they seem so secure that the thought of giving them up seems
crazy, even in comparison to the bizarre alternatives offered by quantum theory. Only by
approaching these presuppositions from an angle which has nothing to do with quantum
theory---in particular, by thinking about how they square with what we have discovered
about the physical origins of time asymmetry---do we find that there are independent
reasons to give them up. Suddenly, this way of thinking about quantum theory looks not
just sane, but a natural consequence of other considerations.

What are these presuppositions? They involve notions such as causation and physical
dependence. As we ordinarily use them, these notions are strongly time-asymmetric. For
example, we take it for granted that events depend on earlier events in a way in which
they do not depend on later events. Physicists often dismiss this asymmetry as subjective,
terminological, or merely "metaphysical." As we shall see, however, it continues to exert
a very powerful influence on their intuition---on what kind of models of the world they
regard as intuitively acceptable. It is the main reason why the approach to quantum
theory I want to recommend has received almost no serious attention.

In chapters 5--7 I mount a two-pronged attack on this intuition. Chapter 5 shows that it
sits very uneasily with the kind of picture of the nature and origins of time asymmetry in
physics which emerges from the earlier chapters. In this chapter I also explain in an
introductory way why abandoning this intuition would have important and very attractive
ramifications in the debate about quantum theory. However, the notions of causation,
dependence, and the like are not straightforward. They are notions which have often
puzzled philosophers, and their temporal asymmetry is especially mysterious. Is it some
extra ingredient of the world, over and above the various asymmetries in physics, for
example? Or can it be reduced to those asymmetries? These are philosophical issues, and
the second arm of my attack on the intuition mentioned above involves an investigation
of its origins, along philosophical lines.
In chapter 6 I argue that the asymmetry of causation cannot be reduced to any of the
available physical asymmetries, such as the second law of thermodynamics. The basic
problem for such a reduction is that the available physical asymmetries are essentially
macroscopic, and therefore cannot account for causal asymmetry in microphysics---
though our causal intuitions are no less robust when applied to this domain than they are
elsewhere. I argue instead that the asymmetry of causation is anthropocentric in origin.
Roughly, it reflects the time-asymmetric perspective we occupy as agents in the world---
the fact that we deliberate for the future on the basis of information about the past, for
example.

As I explain in chapter 7, this account has the satisfying consequence that despite its
powerful grip on our intuitions---a grip which ought to seem rather puzzling, in view of
the apparent symmetry of physics itself---causal asymmetry does not reflect a further
ingredient of the world, over and above what is already described by physics. It doesn't
multiply the objective temporal "arrows," in other words. More surprisingly, we shall see
that the account does leave room for a limited violation of the usual causal order. In other
words, it leaves open the possibility that the world might be such that from our standard
asymmetric perspective, it would be appropriate to say that certain of our present actions
could be the causes of earlier effects. In failing to recognize this possibility, physics has
failed to practice what it has often preached concerning the status of causal asymmetry.
Having often concluded, rightly, that the asymmetry of causation is not a physical matter,
physicists have then failed to notice that the anthropocentric framework continues to
constrain their construction of models of reality. One of the great attractions of the
Archimedean standpoint is that it serves to break these conventional bonds, and hence to
free physics from such self-imposed constraints.

The last two chapters apply these lessons to the puzzles of quantum mechanics. Chapter 8
provides an informal overview of the long debate about how quantum mechanics should
be interpreted, identifying the main positions and their advantages and disadvantages. As
I'll explain, the best focus for such an overview is the question that Einstein took to be the
crucial one about quantum mechanics: Does it give us a complete description of the
systems to which it applies?

Famously, Einstein thought that quantum theory is incomplete, and that there must be
some further, more classical reality "in the background." His great disagreement with
Niels Bohr centered on this issue. Einstein is often said to have lost the argument, at least
in hindsight. (The work of John Bell in the 1960s is often thought to have put the final
nail in Bohr's case, so to speak.) I think this verdict is mistaken. Despite Bell's work,
Einstein's view is very much less implausible than it is widely taken to be, at least in
comparison to the opposing orthodoxy.

This conclusion is overshadowed by that of chapter 9, however, where I show how
dramatically the picture is altered if we admit the kind of backward causation identified
in chapter 7. In the quantum mechanical literature this possibility is usually dismissed, or
simply overlooked, because it flies in the face of such powerful intuitions about causality.
But the lesson of chapter 7 is that when we ask where these intuitions come from, we
discover that their foundations give us no reason at all to exclude the kind of limited
backward influence in question---on the contrary, if anything, powerful
symmetry principles can be made to work in favor of the proposal.

In effect, then, my conclusion in chapter 9 is that the most promising and well-motivated
approach to the peculiar puzzles of quantum mechanics has been almost entirely
neglected, in part because the nature and significance of our causal intuitions have not
been properly understood. Had these things been understood in advance---and had the
real lessons of the nineteenth-century debate about temporal asymmetry been appreciated
a century ago---then quantum mechanics is the kind of theory of microphysics that the
twentieth century might well have expected.

Remarks on style

A few remarks on the style and level of the book. Much of the argument is philosophical
in character. It deals with live issues in contemporary physics, however, and takes for
granted that it is physicists who need to be convinced of the advantages of the
Archimedean standpoint. The book thus faces the usual hurdles of an interdisciplinary
work, with the additional handicap of a far-reaching and counterintuitive conclusion.
There is a danger that specialist readers on both sides will feel that my treatment of their
own material is simplistic or simply wrong, and that my account of the other side's
contribution is difficult, obscure and of doubtful relevance. Physicists are more likely to
have the first reaction, of course, and philosophers the second, because I am writing from
a philosophical standpoint.

There are conflicting constraints here, but the best approach seems to be to try to
maximize clarity and readability, even if sometimes at the expense of rigor and precision.
I have tried in particular to keep philosophical complexity to a minimum, in order to
make the general viewpoint as accessible as possible to readers from other fields. On the
physical side I had less choice in the matter---my own technical abilities soon reach their
limits---but here too, where possible, I have tried to opt for accessibility rather than
precision. Occasionally, where technicality of one sort or the other seemed especially
important, I have tried to quarantine it, so that the details may be skipped by readers who
are disinclined to tangle. (In these cases I indicate in the text which sections can be
skipped.) Most chapters finish with a summary, and there is an overview of the book as a
whole at the end.

Finally, a hint for impatient readers, keen to get into the quantum mechanics: start at
chapter 5, and follow the arrows from there.
The stock philosophical debates about time

The philosophy of time has a long history, and is unusual even by philosophical standards
for the durability of some of its main concerns. In a modern translation much of Saint
Augustine's work on time would pass for twentieth-century philosophy. Augustine's
concerns are often exactly those of modern philosophers. He is puzzled about the nature
of the distinctions between the past, the present, and the future, and about the fact that the
past and the future seem unreal: the past has ceased to exist, and the future doesn't yet
exist. And he is concerned about the nature and status of the apparent flow of time.

These two problems---the first the status of the past-present-future distinction, and the
related concern about the existence of the past and the future, and the second the issue of
the flow of time---remain the focus of much work in the philosophy of time. As I noted
earlier, philosophers tend to divide into two camps. On one side there are those who
regard the passage of time as an objective feature of reality, and interpret the present
moment as the marker or leading edge of this advance. Some members of this camp give
the present ontological priority, as well, sharing Augustine's view that the past and the
future are unreal. Others take the view that the past is real in a way that the future is not,
so that the present consists in something like the coming into being of determinate reality.

Philosophers in the opposing camp regard the present as a subjective notion, often
claiming that now is dependent on one's viewpoint in much the same way that here is.
Just as "here" means roughly "this place," so "now" means roughly "this time," and in
either case what is picked out depends on where the speaker stands. On this view there is no
more an objective division of the world into the past, the present, and the future than
there is an objective division of a region of space into here and there. Not surprisingly,
then, supporters of this view deny that there is any ontological difference---any difference
concerning simply existence---between the past, the present, and the future.

Often this is called the block universe view, the point being that it regards reality as a
single entity of which time is an ingredient, rather than as a changeable entity set in time.
The block metaphor sometimes leads to confusion, however. In an attempt to highlight
the contrast with the dynamic character of the "moving present" view of time, people
sometimes say that the block universe is static. This is rather misleading, however, as it
suggests that there is a time frame in which the four-dimensional block universe stays the
same. There isn't, of course. Time is supposed to be included in the block, so it is just as
wrong to call it static as it is to call it dynamic or changeable. It isn't any of these things,
because it isn't the right sort of entity---it isn't an entity in time, in other words.

Defenders of the block universe view deny that there is an objective present, and usually
also deny that there is any objective flow of time. Indeed, perhaps the strongest reason for
denying the objectivity of the present is that it is so difficult to make sense of the notion
of an objective flow or passage of time. Why? Well, the stock objection is that if it made
sense to say that time flows then it would make sense to ask how fast it flows, which
doesn't seem to be a sensible question. Some people reply that time flows at one second
per second, but even if we could live with the lack of other possibilities, this answer
misses the more basic aspect of the objection. A rate of seconds per second is not a rate at
all in physical terms. It is a dimensionless quantity, rather than a rate of any sort. (We
might just as well say that the ratio of the circumference of a circle to its diameter flows
at pi seconds per second!)
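
The dimensional point can be put in a single line (a standard dimensional-analysis observation, added here for illustration rather than drawn from the text):

```latex
\text{``rate of flow of time''} \;=\; \frac{dt}{dt} \;=\; 1
\qquad \left[\,\frac{\mathrm{s}}{\mathrm{s}} = 1\,\right],
```

a pure dimensionless number, not a rate of change of anything with respect to anything else.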

A rarer but even more forceful objection is the following. If time flowed, then---as with
any flow---it would only make sense to assign that flow a direction with respect to a
choice as to what is to count as the positive direction of time. In saying that the sun
moves from east to west or that the hands of a clock move clockwise, we take for granted
the usual convention that the positive time axis lies toward what we call the future. But in
the absence of some objective grounding for this convention, there isn't an objective fact
as to which way the sun or the hands of the clock are "really" moving. Of course,
proponents of the view that there is an objective flow of time might see it as an advantage
of their view that it does provide an objective basis for the usual choice of temporal
coordinate. The problem is that until we have such an objective basis we don't have an
objective sense in which time is flowing one way rather than the other. In other words,
not only does it not seem to make sense to speak of an objective rate of flow of time; it
also doesn't make sense to speak of an objective direction of flow of time.

These problems in making sense of an objective flow of time spill over onto the attempt to
make sense of an objective present. For example, if the present is said to be the "edge" at
which reality becomes concrete, at which the indeterminacy of the future gives way to the
determinacy of the past, then the argument just given suggests that there isn't an objective
sense in which reality is growing rather than shrinking.

These objections are all of a philosophical character, not especially dependent on physics.
A new objection to the view that there is an objective present arises from Einstein's
theory of special relativity. The objection is most forceful if we follow Augustine in
accepting that only the present moment is real. For then if we want to inquire what reality
includes, apart from our immediate surroundings, we need to think about what is now
happening elsewhere. However, Einstein's theory tells us that there is no such thing as
objective simultaneity between spatially separated events. Apparent simultaneity differs
from observer to observer, depending on their state of motion, and there is no such thing
as an objectively right answer. So the combination of Augustine and Einstein seems to
give us the view that reality too is a perspective-dependent matter. The distinctive feature
of the Augustinian view---the claim that the content of the present moment is an objective
feature of the world---seems to have been lost.

Augustine's own reasons for believing in the objectivity of the present---indeed, the
nonreality of everything else---seem to have been at least partly linguistic. That is, he was
moved by the fact that we say such things as "There are no dinosaurs---they no longer
exist" and "There is no cure for the common cold---it doesn't yet exist." By extrapolation,
it seems equally appropriate to say that there is no past, for it no longer exists; and that
there is no future, for it does not yet exist. However, a defender of the block universe
view will say that in according these intuitions the significance he gives them, Augustine
is misled by the tense structure of ordinary language. In effect, he fails to notice that
"Dinosaurs do not exist" means "Dinosaurs do not exist now." As a result, he fails to see
that the basic notion of existence or reality is not the one that dinosaurs are here being
said to lack---viz., existence now---but what we might term existence somewhen. Again
the spatial analogy seems helpful: we can talk about existence in a spatially localized
way, saying, for example, that icebergs don't exist here in Sydney; but in this case it is
clear that the basic notion of existence is the unqualified one---the one that we would
describe as existence somewhere, if language required us to put in a spatial qualification.
We are misled in the temporal case because the simplest grammatical form actually
includes a temporal qualification.

So it is doubtful whether Augustine's view can be defended on linguistic grounds. In
practice, the most influential argument in favor of the objective present and objective
flow of time rests on an appeal to psychology---to our own experience of time. It seems
to us as if time flows, the argument runs, and surely the most reasonable explanation of
this is that there is some genuine movement of time which we experience, or in which we
partake.

Arguments of this kind need to be treated with caution, however. After all, how would
things seem if time didn't flow? If we suppose for the moment that there is an objective
flow of time, we seem to be able to imagine a world which would be just like ours, except
that it would be a four-dimensional block universe rather than a three-dimensional
dynamic one. It is easy to see how to map events-at-times in the dynamic universe onto
events-at-temporal-locations in the block universe. Among other things, our individual
mental states get mapped over, moment by moment. But then surely our copies in the
block universe would have the same experiences that we do---in which case they are not
distinctive of a dynamic universe after all. Things would seem this way, even if we
ourselves were elements of a block universe.

Proponents of the block universe view thus argue that in the case of the apparent flow of
time, like that of the apparent objectivity of the present, it is important to draw a
distinction between how things seem and how they actually are. Roughly speaking, what
we need to do is to explain why things seem this way, without assuming that the
"seeming" corresponds directly to anything in reality. Explanations of this kind are quite
common in philosophy. Their general strategy is to try to identify some characteristic of
the standpoint from which we "see" the appearance in question, such that the nature of
the appearance can be explained in terms of this characteristic of the viewpoint. (There
are lots of commonplace examples of this kind of thing. Rose-tinted spectacles explain
why the world seems warm and friendly to those who wear them.)[4]

One of my projects in this book is to try to extend these insights about the consequences
of the temporal perspective from which we view the world. We are interested in this
partly for its bearing on the attempt to explain the arrow of time---existing attempts often
go wrong because they fail to notice the influence of this perspective on ordinary ways of
thinking---but also for its general philosophical interest. In this respect, as I said earlier,
the book is an attempt to further the project of philosophical writers such as Williams,
Smart, and Mellor.

From now on I shall simply take for granted the main tenets of the block universe view.
In particular, I'll assume that the present has no special objective status, instead being
perspectival in the way that the notion of here is. And I'll take it for granted that there is
no objective flow of time. These assumptions will operate mainly in a negative way. I
shall not explore the suggestion that flow gives direction to time, for example, because I
shall be taking for granted that there is no such thing as flow.

In making these assumptions I don't mean to imply that I take the arguments for the block
universe view sketched above to be conclusive. I do think that it is a very powerful case,
by philosophical standards. However, the aim of the book is to explore the consequences
of the block universe view in physics and philosophy, not to conduct its definitive
defense. My impression is that these consequences give us new reasons to favor the view
over its Augustinian rival, but others might take the point in reverse, finding here new
grounds for the claim that the block universe leaves out something essential about time.
Either way, all that matters to begin with is that the block universe view is not already so
implausible that it would be a waste of time to seek to extend it in this way, and this at least
is not in doubt.

The arrows of time

Our main concern is with the asymmetry of time, but what does this mean? The
terminology suggests that the issue concerns the asymmetry of time itself, but this turns
out not to be so. To start with, then, we need to distinguish the issue of the asymmetry of
time from that of the asymmetry of things in time. The easiest way to do this is to use a
simple spatial analogy.

Imagine a long narrow table, set for a meal. The contents of the table might vary from
end to end. There might be nonvegetarian food at one end and vegetarian at the other, for
example; there might be steak knives at one end but not at the other; all the forks might
be arranged so as to point to the same end of the table; and so on. This would constitute
asymmetry on the table. Alternatively, or as well, the table itself might vary from end to
end. It might be wider or thicker at one end than the other, for example, or even bounded
in one direction but infinite in the other. (This might be a meal on Judgment Day, for
example, with limited seating at the nonvegetarian end.) These things would be
asymmetries of the table---asymmetries of the table itself, rather than its contents.

There seems to be an analogous distinction in the case of time. Time itself might be
asymmetric in various ways. Most obviously, it might be bounded in one direction but not
in the other. There might be an earliest time but no latest time. There are other
possibilities: as long as we think of time as a kind of extended "stuff," there will be
various ways in which the characteristics of this stuff might vary from end to end. More
contentiously, if sense could be made of the notion of the flow of time, then that too
might provide a sense in which time itself had an intrinsic direction or asymmetry.
(However, supporters of the objective present/objective flow view are likely to be
unhappy with this use of a spatial metaphor to characterize the distinction between the
asymmetry of time and that of things in time.)

Independently of the issue as to whether time itself is symmetric from end to end, there is
an issue about whether the physical contents of time are symmetric along its axis. This is
analogous to the question as to whether the contents of the table are symmetric from end
to end. It turns out that the interesting questions about temporal asymmetry are very
largely of this kind. There are various respects in which the contents of the block universe
appear to be arranged asymmetrically with respect to the temporal axis. For example,
many common physical processes seem to exhibit a very marked temporal preference,
occurring in one temporal orientation but not the other. This is why the events depicted in
reversed films often seem bizarre. In the real world, buildings may collapse into rubble,
for example, but rubble does not "uncollapse" to form a building---even though, as it
happens, the latter process is no less consistent than the former with the laws of
mechanics. (It is this last fact that makes the asymmetry so puzzling---more on this in a

As we shall see in the following chapters, there are a number of apparently distinct ways
in which the world we inhabit seems asymmetric in time. One of the tasks of an account
of temporal asymmetry is thus a kind of taxonomic one: that of cataloging the different
asymmetries (or "arrows," as they have come to be called), and sorting out their family
relationships. Physicists in particular have been interested in the question as to whether
there is a single "master arrow," from which all the others are in some sense derived. As
we shall see, the leading candidate for this position has been the so-called arrow of
thermodynamics. This is the asymmetry embodied in the second law of thermodynamics,
which says roughly that the entropy of an isolated physical system never decreases.

As a gentle introduction to the kind of reasoning on which much of the book depends,
note that this formulation of the second law assumes a choice of temporal orientation. It
assumes that we are taking the "positive" temporal direction to be that of what we
ordinarily call the future. There is nothing to stop us taking the positive axis to lie in the
opposite direction, however, in which case the second law would need to be stated as the
principle that the entropy of an isolated system never increases. The lesson is that the
objective asymmetry consists in the presence of a unidirectional gradient in the entropy
curve of, apparently, all isolated physical systems. Each such system exhibits such a
gradient, and all the gradients slope in the same temporal direction. But it is not an
objective matter whether the gradients really go up or go down, for this simply depends
on an arbitrary choice of temporal orientation. They don't really go either way, from an
atemporal viewpoint.
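The point can be put in symbols (a sketch in my own notation, not the author's): write $S(t)$ for the entropy of an isolated system. Under the ordinary orientation the second law reads $dS/dt \geq 0$; relabeling the axis by $t' = -t$, the chain rule gives $dS/dt' = -\,dS/dt$, so the very same physical facts are expressed as

```latex
\frac{dS}{dt} \;\geq\; 0
\quad\Longleftrightarrow\quad
\frac{dS}{dt'} \;\leq\; 0 ,
\qquad t' = -t .
```

The orientation-free content is just the unidirectional gradient: every isolated system's entropy curve slopes the same way along the temporal axis, whichever direction we choose to call "up."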

The puzzle of origins

One of the problems of temporal asymmetry is thus to characterize the various temporal
arrows---asymmetries of things in time---and to explain how they relate to one another.
Let's call this the taxonomy problem. The second problem---call it the genealogy
problem---is to explain why there is any significant asymmetry of things in time, given
that the fundamental laws of physics appear to be (almost) symmetric with respect to
time. Roughly, this symmetry amounts to the principle that if a given physical process is
permitted by physical laws, so too is the reverse process---what we would see if a film of
the original process were shown in reverse. With one tiny exception---more on this in a
moment---modern physical theories appear to respect this principle. This means that
insofar as our taxonomy of temporal arrows reveals significant asymmetries---significant
cases in which the world shows a preference for one temporal orientation of a physical
process over the other, for example---it is puzzling how these asymmetries could be
explained in terms of the available physical theories. How are we going to explain why
buildings collapse into rubble but rubble does not "uncollapse" into buildings, for
example, if both processes are equally consistent with the laws of mechanics? We seem to
be trying to pull a square rabbit from a round hat!
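The symmetry principle invoked here can be illustrated with Newtonian mechanics (a standard textbook observation, offered only as an illustration): if $x(t)$ is a trajectory satisfying Newton's second law for a velocity-independent force $F$, then the reversed trajectory $x(-t)$ satisfies it too, since the second derivative is unchanged under $t \mapsto -t$:

```latex
m\,\frac{d^{2}}{dt^{2}}\,x(-t) \;=\; m\,\ddot{x}(-t) \;=\; F\big(x(-t)\big).
```

Hence a film of any mechanically permitted process, run in reverse, also depicts a mechanically permitted process.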

As I noted, however, there seems to be one little exception to the principle that the basic
laws of physics are time-symmetric. This exception, first discovered in 1964, concerns
the behavior of a particle called the neutral kaon. To a very tiny extent, the behavior of
the neutral kaon appears to distinguish past and future---an effect which remains deeply
mysterious.[5] Tiny though it is, could this effect perhaps have something to do with the
familiar large-scale asymmetries (such as the tendency of buildings to collapse but not
"uncollapse")? At present, it is difficult to offer a convincing answer to this question, one
way or the other. The best strategy is to set the case of the kaon to one side, and to study
the more familiar arrows of time in physics as if there were no exceptions to the principle
that the underlying laws are time-symmetric. This way we can find out where the puzzles
really lie---and where, if at all, the kaon might have a role to play.[6]

Physicists and philosophers have long been puzzled by the genealogy problem. The most
famous attempt to provide at least a partial solution dates from the second half of the
nineteenth century, when Boltzmann claimed to have derived the second law of
thermodynamics for the case of gases from a statistical treatment within the symmetrical
framework of Newtonian mechanics. As we shall see in the next chapter, however,
Boltzmann's critics soon pointed out that he had relied on a temporally asymmetric
assumption (the so-called Stoßzahlansatz, or "assumption of molecular chaos").
Boltzmann's argument thus provides an early example of what has proved a common and
beguiling fallacy. In search of an explanation for the observed temporal asymmetries---
for the observed difference between the past and the future, in effect---people unwittingly
apply different standards with respect to the two temporal directions. The result is that the
asymmetry they get out is just the asymmetry they put in. Far from being solved, the
problems of temporal asymmetry are obscured and deferred---the lump in the carpet is
simply shifted from one place to another. In the course of the book we shall encounter
several examples of this kind of mistake.

The reason the mistake is so prevalent is not (of course) that the physicists and
philosophers who have thought about these problems are victims of some peculiar
intellectual deficit. It is simply that temporal asymmetry is so deeply ingrained in our
ways of thinking about the world that it is very difficult indeed to spot these asymmetric
presuppositions. Yet this is what we need to do, if we are to disentangle the various
threads in the problem of temporal asymmetry, and in particular to distinguish those
threads that genuinely lie in the world from those that merely reflect our own viewpoint.
In order to explain temporal asymmetry it is necessary to shake off its constraints on our
ordinary ways of thinking---to stand in thought at a point outside of time, and thence to
regard the world in atemporal terms. This book is a kind of self-help manual for those
who would make this Archimedean journey.

To put the project in perspective, let us reflect again on the history of science, or natural
philosophy more generally. In hindsight it is easy to see that our view of the world has
often unwittingly embodied the peculiarities of our own standpoint. As I noted earlier,
some of the most dramatic episodes in the history of science are associated with the
unmasking of distortions of this kind. I mentioned Copernicus and Darwin. Another
striking example is the conceptual advance that led to Newton's first law of motion. This
advance was Galileo's appreciation that the friction-dominated world of ordinary
mechanical experience was not the natural and universal condition it had been taken to
be. Left to its own devices, a moving body would move forever.

In the same historical period we find a parallel concern with the philosophical aspects of
the project of uncovering the anthropocentricities of our ordinary view of the world. We
find an interest in what soon came to be called the distinction between primary and
secondary qualities, and an appreciation that the proper concern of physics is with the
former: that is, with those aspects of the world that are not the product of our own
perceptual peculiarities.

Consider these remarks from Galileo himself, for example, in 1623:

I feel myself impelled by the necessity, as soon as I conceive a piece of matter or corporeal
substance, of conceiving that in its own nature it is bounded and figured in such and such a figure,
that in relation to others it is large or small, that it is in this or that place, in this or that time, that it
is in motion or remains at rest, that it touches or does not touch another body, that it is single, few,
or many; in short by no imagination can a body be separated from such conditions; but that it must
be white or red, bitter or sweet, sounding or mute, of a pleasant or unpleasant odour, I do not
perceive my mind forced to acknowledge it necessarily accompanied by such conditions; so if the
senses were not the escorts, perhaps the reason or the imagination by itself would never have
arrived at them. Hence I think that these tastes, odours, colours, etc., on the side of the object in
which they seem to exist, are nothing else than mere names, but hold their residence solely in the
sensitive body; so that if the animal were removed, every such quality would be abolished and annihilated.

Galileo is telling us that tastes, odors, colors, and the like are not part of the objective
furniture of the world; normally, in thinking otherwise, we mistake a by-product of our
viewpoint for an intrinsic feature of reality. In Galileo and later seventeenth-century
writers, the move to identify and quarantine these secondary qualities is driven in part by
the demands of physics; by the picture supplied by physics of what is objective in the
world. This is not a fixed constraint, however. It changes as physics changes, and some of
these changes themselves involve the recognition that some ingredient of the previously
accepted physical world view is anthropocentric.

These examples suggest that anthropocentrism infects science by at least two different
routes. In some cases the significant factor is that we happen to live in an exceptional part
of the universe. We thus take as normal what is really a regional specialty: geocentric
gravitational force, or friction, for example. In other cases the source is not so much in
our location as in our constitution. We unwittingly project onto the world some of the
idiosyncrasies of our own makeup, seeing the world in the colors of the in-built glass
through which we view it. But the distinction between these sources is not always a sharp
one, because our constitution is adapted to the peculiarities of our region.

It is natural to wonder whether modern physics is free of such distortions. Physicists
would be happy to acknowledge that physics might uncover new locational cases. Large
as it is, the known universe might turn out to be an unusual bit of something bigger.[8]
The possibility of continuing constitutional distortions is rather harder to swallow,
however. After all, it challenges the image physics holds of itself as an objective
enterprise, an enterprise concerned not with how things seem but with how they
actually are. It is always painful for an academic enterprise to have to acknowledge that
it might not have been living up to its own professed standards!

In the course of the book, however, I want to argue that in its treatment of time
asymmetry, contemporary physics has failed to take account of distortions of just this
constitutional sort---distortions which originate in the kind of entities we humans are, in
one of our most fundamental aspects. If we see the historical process of detection and
elimination of anthropocentrism as one of the adoption of progressively more detached
standpoints for science, my claim is that physics has yet to achieve the standpoint
required for an understanding of temporal asymmetry. In this case the required standpoint
is an atemporal one, a point outside time, a point free of the distortions which stem from
the fact that we are creatures in time---truly, then, a view from nowhen.