
2
PHYSICS AVOIDANCE

[Dialecticians] prescribe certain formulae of argument, which lead to a conclusion


with such necessity that, if the reason commits itself to their trust, even though it
slackens the interest and no longer pays a heedful and close attention to the
proposition inferred, it can nonetheless come to a sure conclusion by virtue of the
form of the argument alone. Exactly so; the fact is that frequently we notice that often
the truth escapes away from these imprisoning bonds, while the people who have used
them in order to capture it remain entangled in them.
Descartes1

(i)
In whimsical moments, Brigham Young would order members of his flock to "travel two hundred
miles directly south and plant cotton." When these exiled settlers encountered some easily
skirted ravine or coulee, they would not deviate from their instructions but would painstakingly
carve ruts and fastenings into the rock so that their wagon trains could be pulled across the
obstacle before them, obedient to their "travel directly south" directives. I fancy that on a
blistering Utah afternoon some of these beleaguered souls might have entertained the impious
thought, "Mightn't we have gone around this $%#& canyon?"
[figure: a ravine to be avoided]

1
Rene Descartes, “Rules for the Direction of the Mind” in J. Cottingham et al., eds, The Philosophical Writings of
Descartes Vol. 1, trans. Dugald Murdoch (Cambridge: Cambridge University Press, 1985), Rule X.


And so it is within practical science. The surest and happiest routes to predictive,
explanatory, and design success do not always lie directly ahead, but employ clever
stratagems for evading the computational hazards that render the direct path unpassable.
Such evasive approaches succeed through adopting various covert strategies for what
I shall call physics avoidance. Most physical treatments one encounters in real life are
characterized by some form of deductive evasion. Plainly, we should skirt difficult
calculations if they aren’t required for the answers we seek and are prone to introduce
errors. Such altered treatments can usually be related to one another in rational patterns,
but descriptive philosophy of science has paid little attention to these issues. As the rest
of this book documents, conceptual confusion often follows in the aftermath.
These adjustments in deductive strategy commonly affect the background explanatory
landscape in which a problem is framed. Here’s an example: Suppose that we run a
stream of water molecules through a pipe containing a large rectangular block. Obvi-
ously, the locations of the molecules will constantly shift as they run into both the walls
and one another. Will their movements ever settle into a less chaotic pattern? Certainly
not after we first turn on the spigot (when the flow will manifest rapidly shifting
transients), but perhaps it might after the water has been allowed to flow for a long
period of time. Is it possible for the water to evolve to a condition where, as soon as one
molecule leaves its present location, a successor stands ready to immediately occupy its place,
so that the whole group presents the appearance of a so-called steady-state flow? (Fresh waters
are ever flowing upon us, but the overall condition of the water looks identical from moment to
moment.)
[figure: steady-state flow]
Of course, from a strict, molecular point of view, a perfect steady-state
condition isn’t attainable, simply because the successor molecules can’t get to their
new positions fast enough. But if we smooth the molecular stream into a continuous
fluid by upper-scale averaging, a steady state might emerge as a coarse-grained approxi-
mation to the molecular flow. So our first adjustment in explanatory landscape is to shift
from molecular dynamics to that of continuous flow. Under that adjustment, steady-state
flows become possible as long as the fluid velocity is not too high (otherwise we witness
turbulence). From a mathematical point of view, such laminar flows are considerably
more tractable than our original complex swarm of colliding molecules. If such a stream
is also steady (i.e. looks the same over time from a fixed viewing location), a second
adjustment in landscape becomes available that permits an extraordinarily effective
degree of physics avoidance simplification. What is this new advantage? We can now
answer most important questions about our flow (what its varying pressures will be, etc.)
without needing to include the passage of time in our modeling at all.2 Accordingly,

2
In section (iii) we’ll find that the time terms will drop out of the salient modeling equations when this pattern of
physics avoidance is implemented.


a focus upon steady-state conditions opens up convenient computational pathways that are
otherwise unavailable.
[figure: projecting temporal behavior onto a manifold]
A standard form of mathematical representation may help readers appreciate the salient
adjustment in landscape. On the left side of the diagram, I have drawn various time slices
through which the molecules of our fluid trace their individual
upward-heading trajectories. Such a portrait is called evolutionary because it shows how
the molecules move forward in time. But let us now shine a flashlight downward—what
will we see on the bottom sheet? Well, if the flow is steady, we will witness the tidy patterns
on the right side of our diagram, which constitutes a time-exposure photograph of our fluid,
whose individual streamlines represent the superimposed images of a large number of
passing fluid particles. Because successor particles in the continuum representation imme-
diately replace their predecessors, the streamlines become registered as dark regions to
which no beam from our flashlight penetrates. Mathematicians call this construction a “base
manifold picture” of the temporal flow within the evolutionary manifold on the left of the
diagram. If such a time-effaced picture can be successfully constructed, we can evade the
disagreeable complexities that we would confront if we continued to reason about our
water in a straightforwardly evolutionary mode.
Incidentally, this search-for-a-steady-state strategy is a first cousin to the tactics of
looking-for-the-constrained-equilibrium-resting-places we shall consider in section (iii)
from a more formal point of view.
Of course, running water is rarely encountered in a true steady state—one has to turn
on the tap at some time, after all. But it often proves possible to factor a system's behavior
into two ingredients: a transitory response (which records how a moving wall of water first
invades a pipe after a spigot is opened) and the steady-state response (which becomes dominant
as the initial effects of the transient response die away). Approximative divisions of this sort
will become central in other essays.
[figure: steady-state factoring of a time series]
It is common in applied mathematics to say that time has been factored out of our base
manifold portraits of steady-state flow. This is


because it is relatively easy to calculate the time it takes for a fluid particle to progress
from A to B by simply considering the spatial distance from A to B in our base level
projection. Employing base manifold distance as a “fake time” substitute for genuinely
evolutionary modeling is a common tactic in physics. It sometimes confuses observers
who don’t notice the shift in underlying landscape (cf. Appendix 1).
Unfortunately, fluid flow is not always amenable to such convenient treatments, for
these represent computational opportunities that nature makes available to us only in
favorable aquatic circumstances. Everyday turbulent flow does not directly submit to
such a treatment, and one of physics’ current challenges is that of uncovering some
analog to base manifold factorization that can support accurate estimates of the averaged
pressures, etc., encountered in turbulence.
Under any shift in explanatory landscape, new questions become salient. How will
the altered boundaries induced by the block alter the streamlines that we would
otherwise witness in an unencumbered pipe? How will the flow pattern alter if the
averaged velocity of the water far upstream increases? (Considerations of this second
sort are called structural alterations in a control parameter, and we will examine their
character more closely later on). At the same time, standard evolutionary queries get
suppressed under these same adjustments—we can no longer ask, “Through what
mechanism does an upstream fluid particle sense the block’s forthcoming presence?”
Philosophical troubles can arise when these distinct flavors of inquiry become confused.3
Key words like “force” and “cause” are mutable semantic creatures, able to acclimate
themselves to virtually any explanatory landscape into which they happen to be cast.
Within an evolutionary framework, the word “cause” usually attaches itself to the
processes that affect temporal change within a developing flow. But resolutely purge
“time” from our explanatory landscape and “cause” will still come creeping back
around.4 For example, even though we have effectively banished time from within a
base manifold projection, we are still likely to ask how alterations in the pipe’s
boundaries will “cause changes” in the streamline patterns. In formal terms, the
background explanatory landscape has shifted from that of an initial conditions problem
to a control variable setting, but “cause” nevertheless tags along in the adjustment.

(ii)
Accordingly, we should learn to recognize the physics avoidance rationales that underlie
these significant changes in explanatory landscape. Before we embark on this task, let us
avoid a methodological misstep that has significantly impeded descriptive philosophy of
science for nearly a century—viz. the inclination to substitute logic-derived notions for
mathematical relationships of a more sharply diagnosed character. I sometimes dub this

3
Essay 6’s appendix supplies a typical example.
4
Like poison ivy in the old Leiber–Stoller song.


propensity Theory T thinking in tribute to the philosophy of science primers of my youth,


which trafficked entirely in toy models of a purely logical character, invariably dubbed
“Theory T” and “Theory T’”. The problem with these approaches (whose varieties are
many) is that the descriptive tools they wield are too blunt to adequately mark out the
varieties of explanatory architecture we must distinguish within these essays.5
A well-known passage from the philosopher Peter Railton illustrates the logicist
propensities I have in mind.6 As I see it, Railton is struggling to isolate the natural and
important idea that I call an evolutionary modeling (applied mathematicians generally
prefer even more rebarbative terminology such as “well-posed initial/boundary value
problem of a hyperbolic character”—the underlying idea itself isn’t scary, but the label
may be). Railton dubs his own target notion “an ideal D-N-P text,” but I believe that he
actually seeks a natural stochastic generalization of the evolutionary mapping idea. To
see what he has in mind, let’s first ignore his “P”-rider and concentrate upon what a
simple “ideal D-N text” is supposed to be. Divide the passage of time into a number of
stages Δt_1, Δt_2, Δt_3, . . . (mathematicians call these time steps) and then supply logical
deductions that link the physical conditions reported on each time step to earlier stages
utilizing whatever “laws” apply in the discipline at hand. Railton characterizes these
arrays of logically interconnected sentences as D-N texts, where “D” stands for “deduct-
ive” (the transitions are governed by logic) and
“N” for “nomological” (driven by scientific
laws). D-N assemblies of this character comprise
the basic ingredients within Carl Hempel's
familiar, logic-centered account of explanation.
By supplementing such "texts" with "P"s (for "probabilistic"), Railton considers collections of
sentences that are instead woven together with probabilistic deductions, in the sense that the
"basic laws" now tolerate a wider number of successive stages governed by a probability measure.
A possible history for a target system is any single branch running through such a D-N-P tree
as pictured.
[figure: a D-N-P text]
Against this backdrop, Railton alerts our attention to a conception of “partial infor-
mation” that bears considerable affinity to the “physics avoidance” we seek. He writes:

5
In everyday application, the term “theory” gets applied to a wide range of distinguishable formal structures, among
them: (1) a set of overarching principles (“Newton’s three laws” representing a good paradigm) that lay out a general
framework for descriptive endeavor without delineating the concrete special force laws or constitutive principles to be
employed to that end; (2) a delineation of exactly these missing ingredients together with their anticipated side conditions;
(3) a detailed delineation of a target system modeling (how many particles does it contain? Which special force laws are
active?) in a manner that permits the construction of a well-set mathematical problem; (4) a mathematical guess as to how
the behaviors of an equational set satisfying (3) might be effectively approximated. In a natural setting, these vagaries pose
no problems, but they can become crippling in a philosophy of science context where one often cannot figure out what a
phrase like “Theory T” is meant to cover. Very similar remarks apply to the cognate notion of a “law of nature.”
6
“A Deductive-Nomological Model of Probabilistic Explanation,” Philosophy of Science 45 (1978).


Consider an ideal D-N-P text for the explanation of a fact p. Now consider any statement
S that, were we ignorant of this text but conversant with the language and concepts
employed in it and in S, would enable us to answer questions about the text in such a way
as to eliminate some degree of uncertainty about what is contained in it.7
That is, statement S provides partial information about some fully developed D-N-P text.
He writes:
It is hardly novel to speak of sentences providing information about complete texts in this
way: presumably we employ such a notion whenever we speak of a piece of writing . . . as a
summary, paraphrase, gloss, condensation, or partial description of an actual text such as
a novel. Unfortunately, I know of no satisfactory account of this familiar and highly general
notion . . . nor can I begin to provide an account of my own making.
The chief reason that Railton cannot supply a sharper analysis traces to the bluntness of the
logical tools with which he works. Here's a simple way to see what's gone wrong. In L. Frank
Baum's Oz novels, Glinda owns a big book in which all the events in the kingdom get
instantaneously described as they happen. But she will surely require a non-denumerable number
of pages in her book, because between any two pages separated by a time interval Δt, there must
lie infinitely more pages, some of which might prove very important from an explanatory point
of view.
[figure: Glinda's big book]
But if we correct this problem by adding the requisite number of pages, what do we wind up
with? Well, in the D-N case, not a "text," but a
temporally connected set of states that mathematicians call an evolutionary flow. And the
stages within this process cannot be coherently linked together by “logical deduction”
because no page possesses an immediate predecessor. We have, in fact, left behind any
reasonable notion of linguistic relationship and are simply considering a solution set for a
collection of evolutionary differential equations. We will later find that not all differen-
tial equations generate flows of the temporal character illustrated here—terms like
“evolutionary” or “hyperbolic” isolate the developing behaviors we seek. Correcting
the Glinda problem with respect to Railton’s probabilistic D-N-P texts brings us into the
mathematical arena of processes that are governed by stochastic differential equations.8
As methodologists of science, we should talk directly of flows generated by evolutionary
equations, rather than of inadequate surrogates framed in terms of “logical deduction.”
Or so I recommend.
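For concreteness, here is the standard way the replacement notion gets stated (a textbook formulation under the assumption of an autonomous system, not a quotation from the essay):

```latex
% The flow generated by an evolutionary equation  du/dt = F(u),  u(0) = u_{0}:
\[
  \varphi_{0} = \mathrm{id}, \qquad
  \varphi_{t+s} = \varphi_{t} \circ \varphi_{s}, \qquad
  \frac{d}{dt}\,\varphi_{t}(u_{0}) = F\bigl(\varphi_{t}(u_{0})\bigr).
\]
% A "possible history" is a continuous trajectory  t \mapsto \varphi_{t}(u_{0});  there are
% no discrete pages, and no state possesses an immediate predecessor.
```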

7
Ibid., p. 210.
8
We shall not be much concerned with such models in this essay, although they represent very interesting
constructions.


In fact, a variety of related lapses affect Railton’s account beyond our Glinda problem.
No plausible notion of “law” should be identified with the full array of demands that
come encapsulated within a set of differential equations,9 but it is only the latter that can
generate the flows that replace Railton’s chains of logical derivation. In this same
manner, inadequate attention is paid to boundary conditions (in its proper mathematical
sense) and to the other stipulations that mathematicians classify as “side conditions.”
Theory T thinkers invariably substitute feeble logistical surrogates for the sharper
taxonomies provided within standard texts in applied mathematics. In doing so, the
philosophers frequently erase the very detail they require to resolve the conceptual
questions before them.
To these criticisms, a likely retort will be: “Philosophers of science seek accounts of
wider applicational scope than your parochial emphases offer. Your differential
equation-based distinctions may represent essential ingredients within mathematical
physics and a few fields like that, but we are required, as general methodologists, to
analyze ‘explanation’ as it appears across a far wider array of contexts.” But let us turn a
cold shoulder to these Theory T apologetics and see if we can’t better understand some
important explanatory patterns encountered within normal physics practice by borrow-
ing from the applied mathematician’s diagnostic toolkit. If we can gain genuine insight
through such means, we should consider ourselves satisfied. In any descriptive
endeavor, it’s best to start small, for the surest path to a miserable defeat is a vainglorious
assault. The siren call of “wider generality” has caused more sailors to wreck upon the
shoals of empty tautology than any other single source.
In truth, no moral or intellectual imperative requires us “to consider ‘explanation’ in
whatever context it may appear.” Pace the analytic metaphysicians criticized in Essay 6,
we enjoy no a priori assurance that anything very interesting can be obtained at that
rarified level of attack. I shall adopt as my own benchmark of philosophical progress the
issue of whether we can usefully unravel the behaviors of puzzling words such as
“cause” with the assistance of the diagnostic taxonomies developed by the applied
mathematicians. I am not striving to supply “a general theory” of anything.
The phrase “causal process” frequently (but not invariably) designates a continuously
acting evolutionary development, such as a sound wave moving through an iron bar or
the unfolding fluid flows we have already discussed. However, capturing this root idea
in precise terms is not easy and wasn’t properly accomplished until partial differential

9
In the ODE circumstance most favorable to this point of view (Newtonian point mass mechanics), we follow a
procedure that I've sometimes called Euler's recipe: employing ΣF_i = ma as a skeletal frame, we frame genuine
differential equations by specifying the relevant masses m and the additional constants required to turn on whatever
special force laws Fi we regard as applicable (these supplementary specifications don’t look much like “laws”). But rarely
does actual physics follow such a recipe because, inter alia, classical mechanics never settled upon a reliable set of special
force laws adequate to material cohesion and a host of other factors. Considerations of a completely different character
now enter the picture, none of which strike us as particularly “lawlike.” Example: we can evade the cohesion problem by
imposing a constraint—“this collection of molecules will remain equidistant from one another at all times.” Does that
stipulation strike the reader as a plausible “law of nature”? When we move into the arena of PDE modelings and their
more sophisticated variants, the identification of “laws” becomes even more problematic. Vague appeals to “laws” in
typical Theory T fashion emerge as diagnostic sore points at various junctures throughout these essays.


equations (PDEs) were invented circa 1750. Partial steps towards this objective were
taken earlier by Newton and Leibniz in their development of ordinary differential
equations (ODEs) in the 1670s, which represent a mathematical halfway house en
route to the more sophisticated PDE equations that more adequately capture the richer
set of features we expect to see within a “causal process.”10
I’ll explicate the differences between an ODE and a PDE in a moment, but let us first
consider a simple evolutionary circumstance that can be aptly modeled by a simple
ODE: the flight of a frictionless cannon ball under the earth’s gravitation. Here the
salient modeling equation is a direct application of "F = ma": m d²y/dt² = −g, which
states that the downward pull of gravity (g) constantly accelerates the fall of the ball (y)
by a factor proportional to its mass (m). Starting with initial conditions (= starting
position and velocity) posed at a cannon’s mouth, we can readily plot the solutions to
our modeling equation on graph paper in a manner I’ll now outline.
But first ask yourself: how did folks think about cannon ball flight in the days before the
infinitesimally focused ODEs were invented? Answer: they employed what mathematicians now call
finite difference terms, viz. they divided time into small step sizes Δt and approximated the
continuously acting gravitational process by rough estimates of how the conditions on time
slice Δt_i will affect the conditions on time slice Δt_{i+1}. Rather than describing a causal
process in infinitesimally focused terms (e.g. ∂y/∂x = ∂y/∂t − 3y, employing a first-order PDE
as illustration), early writers captured the target developments in approximation terms
((Δy_{i+1} − Δy_i)/(Δx_{i+1} − Δx_i) = (Δy_{i+1} − Δy_i)/(Δt_{i+1} − Δt_i) − 3y).11 Through
these finite-difference-for-differential-operator replacements, a clean temporal divide between
"causes" (that act at Δt_i) and "effects" (that act at Δt_{i+1}) appears in a manner that divides
a continuously acting flow into the distinct pages found in one of Railton's ideal D-N texts.
[figure: Euler's approximation method]
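Since the essay leans on this time-step picture repeatedly, here is a minimal sketch of the routine in code (mine, not the author's; Python, with purely illustrative numbers). It marches the cannon ball equation m d²y/dt² = −g forward exactly as just described: the conditions on one time slice fix the conditions on the next.

```python
# A minimal sketch of Euler's finite-difference time-stepping for m*d2y/dt2 = -g.
# All numerical values are illustrative, not taken from the essay.
m, g = 2.0, 19.6       # mass and constant downward pull (hypothetical values)
y, v = 0.0, 30.0       # initial conditions at the cannon's mouth: height and upward velocity
dt, t = 0.01, 0.0      # time-step size and elapsed time

while y >= 0.0:        # march forward one time slice at a time
    a = -g / m         # acceleration read directly off the modeling equation
    y = y + v * dt     # conditions on slice i fix the "effect" registered on slice i+1
    v = v + a * dt
    t = t + dt

print(f"ball returns to launch height after roughly {t:.2f} seconds")
```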

10
Commentators on the development of mathematical ideas often remark that the distinctive behaviors of PDEs were
looked at “from an ODE point of view” for a surprisingly long period of time. That is, many historical figures (and
philosophers to this day) operate with a picture of differential equation behavior that is derived from ODE experience
within simple evolutionary contexts such as standard celestial mechanics provides. But PDEs are much trickier critters
than ODEs, and these sanguine anticipations often mislead. I also believe (although I cannot prove) that many of the
formal inadequacies of Theory T thinking trace to this source as well.
11
Expressing infinitesimal relationships in what we now regard as approximative terms represents a common
developmental stage within the history of mechanics, and we shall investigate an intriguing exemplar in Essay 8. Early
authors experienced conceptual difficulties with respect to the infinitesimals appropriate to PDE circumstances because
the notions of dependent and independent variables hadn’t been adequately clarified.


Applied mathematicians call the specific routine just outlined Euler’s method of finite
differences, as it represents the simplest form of numerical scheme for approximating a
set of differential equations. Despite its title (which obeys the V. Arnold axiom that if a
theorem or procedure is named for some historical figure, they probably had little to do
with it), the tactic of subdividing a continuously evolving process into discrete time steps
predates both Euler and the development of the differential calculus; the technique has
long provided reasoners with an intuitive way of talking about causal processes in the
absence of appeal to the proper notion of an infinitesimal generator.12 Even today, when
we want to explain to ourselves what a differential equation model “means,” we will
sketch out a portrait of its developing behaviors in time-step terms. At various points in
these essays I will avail myself of this substitution technique to supply an intuitive
picture of how various strategies of physics avoidance operate.13
Accordingly, I claim that Railton’s ideal D-N-P text represents an attempt to capture
the familiar notion of a continuous causal process through appeal to the finite difference
tools of naïve common sense. And this prompts me to a more general methodological
remark. Mathematics should not be viewed as a magical discipline that operates with
strange procedures that are entirely alien to everyday life. No, more often it captures
completely familiar manners of reasoning in sharpened terms. Throughout this book
I will attempt to exploit these applied mathematics/everyday vernacular parallels as
fully as I can, for most of the physics avoidance techniques we shall study represent the
mathematization of some commonsensical strategy that we employ all along within the
other avenues of our thinking.
Still we should acknowledge that the mathematicians have greatly improved our
understanding of important conceptual issues through their precisifications. The case at
hand supplies a sterling exemplar: by expressing the Δt_{i+1}/Δt_i time-step transitions of
everyday causal thinking in terms of the continuously altering patterns laid down by a
differential operator, many of the blemishes characteristic of the somewhat ham-handed
techniques of intuitive reasoning are removed. By adopting the mathematicians’ evolu-
tionary flow perspective, we will find ourselves better prepared to diagnose the varieties
of strategic shift that Railton labels as “condensations.” If we merely analogize our
physics avoidance tactics to the motives for making long books shorter (in the mode of
the old Reader’s Digest), we may share Railton’s doubts whether such policies will ever
submit to trenchant philosophical analysis (the rationales for shortening a book can
prove as varied as the books themselves). But if we coordinate our physics avoidance
concerns with the precise characterizations of formal landscape that the applied math-
ematicians offer, we will obtain a significantly sharper appreciation of the critical factors
in play. To be sure, we shouldn’t ever expect to reach a full and complete taxonomy of

12
The allied notion of a continuously acting stochastic process is also quite intuitive—we naturally conceive of radioactive
decay in such terms—but figuring out how to capture the conception of a stochastic process in precise terms represented one
of the great mathematical accomplishments of the twentieth century, due to Norbert Wiener and many others.
13
For example, later in the essay I will explicate standard mathematical requirements on boundary/interior accord by
appealing to the concrete numerical adjustments we must make to enforce this “accord” on a piece of graph paper. In that
setting, the required techniques will be familiar to any devotee of crossword puzzles.


“useful techniques for physics avoidance,” for clever scientists continually discover novel
ways to lighten their inferential burdens. But with a little more math, we can greatly
improve upon the intellectual situation in which Railton has left us.
But in aligning our endeavors with Railton’s, we should
meticulously avoid a rash presumption that tacitly under-
lies his own thinking (and much of Theory T thought in
general). It is the premise that we somehow know, as a
matter of a priori philosophical assurance, that all forms of
successful explanation must qualify as “condensations”
over richer evolutionary data sets.14 But we certainly do
not know this. As we’ll see in section (iv), although such
expectations have often proved correct, in other cases, such
hopes remain entirely aspirational and, in some important
historical instances, have proved flat-out mistaken. So we
should take nothing for granted here. We want our investigations of explanatory pattern in
science to be as descriptively accurate as possible, and we should not force all explanations to
fit some procrustean conception of how condensations must work (in Essay 6 we'll criticize the
many philosophers who rashly make assumptions of this character on an entirely armchair basis).
[figure: condensing a story]
Before we go any further, let me briefly explain the difference between partial (PDEs)
and ordinary (ODEs) differential equations and why I called the latter a “halfway house”
to a suitable mathematical representation of “causal process.” The difference lies in the
fact that ODEs contain only one independent variable, which, in an evolutionary setting,
will be time (t). But in a typical application where the notion of causal process remains
robust, spatial variations (x, y, and z) need to be treated as independent variables as well.
Dick and Jane ride on a seesaw, and every time Jane hits the ground, she pumps upward with her
feet. Obviously, this action will affect Dick at the other end; how does this effect manifest
itself there? Answer: Jane locally bends the wood slightly in a manner that sends waves through
the plank over to Dick who will feel her pumping accordingly.
[figure: causal connection between Dick and Jane]
To describe
these waves (we will study many examples throughout this book), we require a PDE

14
Such expectations are intimately entangled with the “autonomous trajectory” concerns of Essay 4.


modeling that, at a minimum, includes x (the space between Dick and Jane) as well as t
(time) as independent variables.
Of course, in practical life we would like to avoid consideration of those waves
because they are extremely hard to model and compute accurately. So we commonly
assume that the board between Dick and Jane remains completely rigid. If we do so, we
can model our seesaw circumstances fairly capably with an ODE that employs time as its
only independent variable. In doing so, we suppress the physical factors that allow Jane
to communicate with Dick across the breach (a rigid board can’t carry waves). In this
fashion, a small application of practical physics avoidance manages to drive a natural
conception of “causal process” entirely from the reconstituted landscape. At various
points throughout these essays, we’ll find that serious conceptual confusions sometimes
stem from a failure to diagnose these adjustments in background setting correctly. This
is why I stressed the fact that PDEs (or their fancier functional analysis cousins) represent
better vehicles for capturing intuitive “causal processes” than ODEs.
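A schematic contrast may help here (my own stock equations, using the standard Euler-Bernoulli beam as a stand-in for the plank; the essay does not write these formulas out):

```latex
% Flexible plank (PDE): transverse displacement u(y, t), with y the position along the board,
\[
  \rho A \,\frac{\partial^{2} u}{\partial t^{2}}
  \;+\; EI \,\frac{\partial^{4} u}{\partial y^{4}} \;=\; q(y, t),
\]
% so Jane's pumping q at her end reaches Dick only by way of waves travelling in u(y, t).
% Rigid board (ODE): only the tilt angle \theta(t) survives as a descriptive variable,
\[
  I_{\mathrm{board}} \,\frac{d^{2}\theta}{dt^{2}} \;=\; N(t),
\]
% and the spatial transmission mechanism has been suppressed from the modeling altogether.
```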
Returning to our main themes, once Railton’s “ideal D-N text” is precisified to
become “flow generated by evolutionary differential equations,” we can readily deter-
mine that most of what passes for “explanation” within physical practice does not fit the
pattern he suggests. In the following sections, we shall supply a number of characteristic
examples of non-evolutionary modelings. Before we do so, let us observe that there are
some simple formal tests, familiar to mathematicians, that can be applied to a set of
modeling equations to determine whether the underlying explanatory landscape has
shifted in some significant way or not. In particular, modern texts on the equations of
mathematical physics (which are most commonly formulated as PDEs) instruct their
readers to classify these formulas into a variety of categories (hyperbolic, elliptic,
parabolic, mixed) based upon the algebraic structure of their differential operators.
These so-called signatures of equational type provide vital clues to the varieties of
reduced variable strategy operative in the treatment at hand. In our fluid flow example,
its evolutionary-manifold to base-manifold adjustments correspond directly to a shift in
the equational signatures of the formulas with which a physicist directly works (the
original evolutionary modeling equations will be hyperbolic in character but are
replaced by elliptic formulas when we shift to a steady-state approach).
Let me assure readers that this fancy technical talk doesn’t signify much beyond the
fact that the “time” variable gets purged from the original modeling equations when a
shift to a steady-state methodology is made. Concrete exemplars of these syntactic
repressions will be supplied in section (iii).
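As a minimal preview of such a syntactic repression (my own stock example, not one of the cases treated in section (iii)): deleting the time term from the heat equation converts a parabolic, evolutionary formula into the elliptic, time-free Laplace equation.

```latex
% Evolutionary (parabolic) modeling          steady-state (elliptic) replacement
\[
  \frac{\partial u}{\partial t} = \kappa \,\nabla^{2} u
  \qquad \longrightarrow \qquad
  \nabla^{2} u = 0 ,
\]
% obtained by setting  \partial u/\partial t = 0.  The surviving equation constrains the
% equilibrium pattern directly; no time variable remains for the modeling to evolve in.
```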
Every work in mechanics shifts swiftly from one type of equational format to another
without actively alerting its readers to this fact and offers little commentary on the
attendant changes in underlying landscape (which can be quite drastic). Physicists are
often cavalier about these transitions and learn to execute them through brute imitation
rather than following any explicit justificatory rationale. It is mainly the mathematicians
(due, in prime place, to Jacques Hadamard) who have clarified these
issues in the manner we shall adopt here. In so doing, they have managed to make
rational sense out of methodological adjustments that can appear, to the naked eye, as


“Rabelaisian” and undisciplined.15 In truth, the simplified mathematical treatments we


find in college-level physics primers are rarely ready for real-life application until they
have been significantly reworked by skilled applied mathematicians, who must convert
the undiagnosed “idealizations” characteristic of textbook exposition into precisified
patterns of modeling that will better suit the contours of a more complex reality.
In this essay, there are two central modes of physics avoidance we shall study:
(1) invoking time-symmetric considerations (e.g. equilibrium or steady-state con-
ditions) as a rationale for dropping terms from modeling equations in a manner
that alters their equational signature.
(2) reducing descriptive variables by grouping them together under the control of
a higher-scale constraint.
We shall look at (1) first.
Before we do, we should first address several misunder-
standings that have dogged philosophy’s understanding of
“cause” since Bertrand Russell’s celebrated “On the Notion
of ‘Cause’” of 1912.16 Russell argues that notions of cause
and effect play no significant role in modern science
because the latter employs differential equations with no
“before and after” about them. This critique overlooks the
simple fact, just observed, that finite differences (i.e. talk of Δt_i and Δt_{i+1}) provide a
natural vehicle for talking about a causal process when one doesn't have the vocabulary of the
calculus available. Russell further engages in the regrettable practice of lumping together
distinct types of equational sets indiscriminately.
[figure: Russell]
But we just noted that differences in
their equational signatures often indicate that familiar modeling formulas are handling
“time” according to different varieties of physics avoidance tactic.17 Thus a steady-state
strategy for modeling fluid flow gains its computational advantages by suppressing the
temporal transients that perturb steady flow in real life circumstances. By disregarding
these transients, altered differential equations come into prominence, generally with
altered signatures.18 As this happens, the usual syntactic indicators of an ongoing

15
Yu I. Manin, Mathematics and Physics (Berlin: Birkhauser, 1981), p. 35. A good overview of Hadamard’s work is
V. Maz’ya and T. Shaposhnikova, Jacques Hadamard: A Universal Mathematician (Providence: American Mathematical
Society, 1998).
16
Bertrand Russell, “On the Notion of ‘Cause’ ” in Mysticism and Logic (New York: Anchor, 1963). In candor, much of
his discussion appears to be lifted from Ernst Mach, e.g. The Science of Mechanics (LaSalle: Open Court, 1960), p. 325.
Sheldon Smith, “Resolving Russell’s Anti-Realism About Causation,” The Monist, 83(2) (2000) offers an important critique
that has greatly influenced the remarks offered here.
17
Russell wrote before Jacques Hadamard’s pioneering work on partial differential equations appeared: Lectures on
Cauchy’s Problem (New York: Dover, 1952). Smith and I are deeply indebted to this work.
18
As we'll see, applied mathematicians classify differential operators according to a formal criterion called "signature."
Equations for steady flow are usually of elliptic signature, whereas fully evolutionary sets are hyperbolic. The first group
directly delineates a constant field of velocity vectors, which do not mark passing time in the direct manner of an
evolutionary modeling ("time" does not appear as an independent variable in these equations). These distinctions were
apparently first articulated by Paul du Bois-Reymond, but their modern shaping is due largely to Jacques Hadamard.


evolutionary development generally disappear, indicating that the replacement model-


ing isn’t attempting to capture a genuine causal process at all. If, like Russell, we search
for indicators of “causal process” indiscriminately within every formula found in a
physics textbook, we will come up empty-handed, because many of those equational
models have intentionally suppressed any direct registration of temporal process (gen-
erally speaking, two equations extracted from a textbook of differential equations will
share no greater affinity than two people selected at random from a telephone direc-
tory). None of this indicates that talk of unfolding causal processes is not explicitly
registered within equational sets of the appropriate signature. An excellent survey of
these issues can be found in Sheldon Smith’s referenced paper. So the proper response to
Russell is simply: let the applied mathematicians tell you where you should look for the
processes you seek.
In the same vein, the best applied mathematics match to Railton’s “ideal deductive
text” is a well-set initial/boundary value problem of a purely evolutionary character. But
let’s wait on discussing initial and boundary values until we get a better handle on the
notion of an evolutionary process.

(iii)
Railton’s use of the honorific adjective “ideal” suggests that scientists will avidly seek
evolutionary character modelings wherever they can, but this is plainly not the case.
Extracting the information one wants about a physical system directly from an evolu-
tionary PDE modeling is often nearly impossible, and the answers obtained may prove
quite uninformative. Accordingly, practitioners have accumulated a large library of
devious tricks for avoiding difficult and untrustworthy computations in the same vein
as skirting the canyons would have improved the lot of our Mormon pioneers. For
example, a simple and common evasive strategy of this sort is to concentrate upon predictable
episodes of achieved equilibrium, rather than attempting to track every detail of an
evolutionary process. We shall examine a formal example shortly, but I'll repeat a homely
illustration from my preface. Jack and Jill have just fallen from the top of a steep hill; can
one predict where they will land? "Sure—somewhere in the basin at the bottom."
[figure: heading for equilibrium]
Here we implicitly presume that the kinetic energy they



acquire in tumbling down the hill will be lost in their encounters with rock and twig and
that they will eventually reach a bruised resting place somewhere in the potential well at
the hill’s base. In reasoning so, we jump immediately ahead to an anticipated equilib-
rium state and forego any attempt to compute their paths of descent from hilltop to
basin. The reliability of our predictions is greatly increased by this brisk invocation of
physics avoidance, for the computer-assisted schemes we might employ to track their
evolutionary paths are notoriously prone to poor results (a round-off error
may launch the children into outer space rather than conveying them correctly down the
hill). To be sure, through eschewing such error-prone reasoning, we lose the capacity to
predict exactly where around the hill’s circumference the children will land, but that
detail may prove unimportant. This exploitation of anticipated equilibrium offers a
simple example of the kinds of strategies for physics avoidance that we want to study.
When we shift computational strategy in this
manner, the background explanatory landscape
adjusts as well. If I ask “why did Jack land there?”,
the reply “because the potential energy is at a
local minimum” usually qualifies as an acceptable
answer. Other circumstances may demand an
evolutionary answer: “because he hit that rock a
moment before.”19
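In slightly more formal dress (my gloss on the example, not the essay's own formulas): the equilibrium answer never touches the equation of motion; it interrogates only the potential energy function V.

```latex
% Evolutionary route: integrate  m\,\ddot{\mathbf{x}} = -\nabla V(\mathbf{x}) - c\,\dot{\mathbf{x}}
% down the hillside step by step (slow and error-prone).
% Equilibrium route: locate the resting places directly,
\[
  \nabla V(\mathbf{x}^{*}) = 0,
  \qquad
  \operatorname{Hess} V(\mathbf{x}^{*}) \ \text{positive definite},
\]
% i.e. the local minima of V (the basins) where the dissipated motion must eventually end up,
% with no attempt to compute the path of descent at all.
```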
Let's now see how allied adjustments in explanatory frame manifest themselves in formal terms.
I'll start with a classic example of physics avoidance, due originally to Euler.20 Consider a
narrow band of an elastic metal loaded with a weight W at the top with its end points
constrained to two fixed positions, although we allow the weight W to fall freely within its
casing. We can construct a suitable equation for this situation by building upon the frame of
Newton's F = ma.21 Specifically, we obtain the formula

EI · ∂²x/∂y² + Wx = ρ · ∂²x/∂t²

[figure: a loaded strut]

19
Apropos of Leibniz’ “two kingdoms of explanation” discussed in Essay 3, invocation of attraction to a potential
energy basin incorporates a certain measure of "teleological thinking" into our modeling equations. Interesting issues lie
here, but I’ll not probe them further in this essay.
20
For simplicity’s sake, I’ll pretend that the Eulerian modeling with which we start is of a purely evolutionary
character, whereas, in truth, a fair measure of significant “avoidance” is already evident within our original modeling,
including the fact that we have reduced the dimensions of our strut from three dimensions to one. That reduction supplies
us with an ODE, rather than a PDE, to which the notion of “signature” isn’t immediately applicable, leaving one with the
simpler notion of “time as its only independent variable.” I’ll ignore these technical complications for the moment.
21
More exactly, we build upon the Cauchy recipe for a simple continuous body that is outlined more fully in my
“What is ‘Classical Mechanics’ Anyway?” in Robert Batterman, ed., Oxford Handbook of the Philosophy of Physics
(New York: Oxford University Press, 2013). Many of the issues raised in this essay are discussed more fully there.


which describes a battle between the efforts of the falling weight to bend each
local section y of the strut and y’s efforts to resist such bending due to its intrinsic
elastic response. More exactly, our equation postulates that the falling weight
exerts a torque Wx upon each section y according to the degree x to which y
fails to lie directly underneath the weight W. At the same time, point y resists this
turning moment according to its degree of local curvature (by the amount
EI ∂²x/∂y², where E and I capture the strength of the resistance and the geometry
of the strut's cross-section, respectively). If EI ∂²x/∂y² and Wx counterbalance
each other exactly, point y will remain inert; if not, it will display a transverse
acceleration ∂²x/∂t² at each point y according to the local density ρ. Note that this
equation only captures the physical processes active inside the strut; it does not
describe the rather different factors that keep the endpoints of the strut fixed. The
latter conditions are coarsely codified within the problem’s boundary conditions, of
which more later.
Despite the suggestions of D-N model thinking, it is easy to see that our governing
differential equation shouldn’t be identified with “law” in any obvious sense. The
relative strengths of our two critical terms (Wx and EI ∂²x/∂y²) can eventually be
traced back to law-like sources (viz., Newton’s law of gravitation and the so-called
constitutive equations for the material in the strut), but this path is not immediate and
meanders through a variety of special modeling assumptions and approximations. We
can allow that the nature of these two forces is "determined by basic laws," but the
differential equation itself also reflects a variety of further modeling assumptions as to
how these two forces operate in the present context.22
In any case, EI ∂²x/∂y² + Wx = ρ ∂²x/∂t² is a classic example of an evolutionary
wave equation.23 If, at some time t0, we happen to know both the displacement x and its
velocity ∂x/∂t at every location y up and down the strut, we are dealing with a classic
initial/boundary value problem that determines, in accordance with the initial condi-
tions of the problem,24 how waves will move up and down the strut in accordance with

22
In these essays, I have not attempted to diagnose the considerable vagaries attendant upon the phrase “law of
nature.” Sometimes a philosophical author will tacitly identify a “law” with the full set of considerations required in
assembling the differential operators that carry evolutionary modelings forward in time. But these ingredients typically
include many types of data that don’t look like “laws” in any traditional sense, e.g. “this piece of iron is rigid” (for a
fuller discussion, see the forthcoming “Counterfactuals in the Real World” by James Woodward and myself). A few
pages later, the same author may offhandedly label Poisson’s equation as a “law of gravitation,” despite the fact that it
conveys no temporal information whatsoever. In my opinion, greater attention to the formal discriminations practiced
within applied mathematics could significantly improve the quality of current discourse with respect to “laws of
nature.”
23
Once again, a fair amount of physics avoidance is incorporated within the forcing term Wx, resulting in a
suppression of the compression waves that explain how the weight W communicates with the interior of the strut
(such concerns will emerge later in the paper). More generally, invoking forcing terms often shifts explanatory landscape
in modes similar to those discussed here, although I shall not consider their effects in any detail.
24
Standard initial conditions supply the x positioning and ∂x/∂t velocities at all y along the strut at some starting time
t0. In fact, we would normally model our hammering as a delta function "displacement" that generalizes upon a
straightforward understanding of "initial conditions" in the rather complex manner discussed in Essay 7. Worse yet,
we will soon consider circumstances in which compressive waves become excited inside the strut through the manner in
which the upper end of our strut is initially loaded by the weight W. Properly characterized, we are contemplating
circumstances in which an impulsive side condition has been assigned to the upper boundary, not to the interior as
normally required. Namely, the endpoint remains fixed at every instant except t0, at which time an impulse enters the top
of the strut through the instantaneous loading. Side conditions of this ilk are very common in electricity and are often
loosely characterized as "initial conditions."
I apologize for these technical complications. An expository problem I confront is that it is rather hard to provide easy-to-
grasp illustrations of hyperbolic PDEs that don't tacitly incorporate some significant degree of physics avoidance in their
background derivations—this automatically occurs anytime we reduce the dimensions of our problem to lower than three.
Well, we must learn to crawl before we walk, so I must acknowledge a certain looseness in my formal characterization of the
strut problem. Nothing material in the present discussion turns upon the required corrective subtleties.


our governing equation.


At the bottom of the accompanying diagram, I have hammered a little bulge into our band at t0.
As time passes, this disturbance will split into two smaller lumps that travel outwards in
opposite directions until they hit their respective boundary clamps, at which point they will
turn upside down and reverse their direction of travel, as indicated by the little black arrows.
These pulses will also alter their shapes somewhat as they travel, due to the fact that the Wx
torque is stronger near the clamped vise boundary than near the weight.
[figure: how a stress wave travels through our strut]
Note that in this diagram, its boundary conditions (i.e. the fact that the strut is held fixed at
its two end points at all moments) are registered as vertical lines extending forward in time.
Such conditions will not affect the way our pulses move until such time as they bump into the
clamps and reflect backwards.
With equations of this ilk, there is an associated collection of sound cones (or charac-
teristic surfaces) that register how fast disturbances25 can travel across each local section y
of the strut—these top permissible speeds may differ at different sites along the band,
depending upon their local constitution. Differing permissible speeds affect the angles
with which the local sound cones open. In our one-dimensional example, such charac-
teristic cones appear as little X’s (possibly curved), but in higher dimensions they assume
the form of generalized cones (possibly wavy). As such, characteristic cones are familiar

25
In a non-linear medium, shock waves can sometimes exceed these restrictions, but we’ll not worry about these
exceptions here.


from relativity theory; they appear as the celebrated light cones in that subject, indicating
the top permissible speed for any physical process.26 In the case at hand, compact initial
pulses will travel through the strut at the speed of sound supplied by our formula, leaving
so-called vacuities in their wake (in more general circumstances lingering disturbances will
spread out behind the advancing wave front).
[figure: sound cones and their characteristic curves]
Thanks to Jacques Hadamard, mathematicians now recognize that purely evolution-
ary equations carve out little cones like this and that this capacity is closely connected to
the formal structure of its differential operator (ρ ∂²x/∂t² − EI ∂²x/∂y² − Wx).
Indeed, there is a little polynomial (ρt² − EIy² − Wx) hidden there that captures
this cone-making capacity and draws the requisite characteristic curves in our space-time
diagram. When this happens, mathematicians say that the equation is “of hyperbolic
signature” (the name “hyperbolic” traces to the somewhat accidental fact that, for
equations of degree two, the associated polynomial happens to be an equation for a
hyperbola). But not all differential operators do this—if we render the last two terms
positive (ρ ∂²x/∂t² + EI ∂²x/∂y² + Wx), the operator will no longer pass the math-
ematician’s test for hyperbolic character and no (real) sound cones will be drawn.
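For readers who want the "test" spelled out, here is the standard textbook criterion (stated in my notation, not the essay's), applied to the strut operator:

```latex
% Classification test for a second-order operator  a\,x_{tt} + 2b\,x_{ty} + c\,x_{yy} + \dots
\[
  b^{2} - ac > 0 \ \text{(hyperbolic)}, \qquad
  b^{2} - ac = 0 \ \text{(parabolic)}, \qquad
  b^{2} - ac < 0 \ \text{(elliptic)}.
\]
% Strut operator  \rho\,x_{tt} - EI\,x_{yy} - Wx:  here  a = \rho,\; b = 0,\; c = -EI,
% so  b^{2} - ac = \rho\,EI > 0  and the modeling is hyperbolic; its characteristics
% (the "sound cones") satisfy  \rho\,(dy/dt)^{2} = EI,  i.e.  dy/dt = \pm\sqrt{EI/\rho}.
% Flipping the sign of the  x_{yy}  term gives  b^{2} - ac = -\rho\,EI < 0:  an elliptic
% operator, with no real characteristic cones, exactly as stated above.
```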
The salient lesson of these rather fussy observations is this: If the equation set
employed within a physical modeling is not of a thoroughly hyperbolic character, it is
a reliable symptom that some substantive form of physics avoidance strategy is active
within the modeling and that we are confronted with the mathematical analog of one of
Railton's "condensed texts." Hadamard's observations allow us to develop a quite sharp
understanding of how such deviations operate and the respective advantages and
disadvantages they offer (in contrast to Railton’s inability to provide a “satisfactory
account” of his “condensations”). Later on, such distinctions will help us unravel some of
the confusions created by the rather eccentric behaviors of the word “cause.”

26
Special relativity requires that the sound cones of any specific material process must fall inside the light cones carved
by the characteristics of Maxwell’s vacuum equations.


Put another way, not all differential equations are created equal! Differences in the
signatures of their operators indicate that the relevant equations play rather different
explanatory roles within science. Hadamard particularly emphasizes the fact that, if
modeling equations are not purely evolutionary, they are probably not suited to
initial/boundary value problems at all. In point of fact, most treatments found in physics
books eschew modeling equations of a hyperbolic type, although circumstances may
superficially appear otherwise (due largely to common time mimics of a sort we shall
consider later in the essay). Many substantive confusions in philosophy of science trace
at root to the error of parsing non-initial/boundary value problems as if they comprise
the real article—our appendix supplies a characteristic example.
In fact, lots of natural questions we might ask about struts can’t be readily addressed
within a strict initial/boundary format. For example, our narrative of impulses traveling
through struts is rarely the kind of problem that concerns civil engineers. Most often,
they want to check whether a large load might collapse a static column. With this second
concern in view, let us now examine a celebrated stratagem of physics avoidance with
respect to our strut. As already noted, this evasive trick is one of the many glories we
owe to Leonhard Euler, for the ploy neatly suppresses any need to consider a proper
initial/boundary problem at all. Once we understand its strategic contours from an
intuitive point of view, we will be able to appreciate how Hadamard’s distinctions help
us understand the exact nature of the explanatory condensation that arises.
Now the strut formula we just wrote down is hyperbolic in signature and generates
temporal flows of exactly the flavor that Railton anticipates in a “D-N text.” But we also
noted that modelings of this type are often lengthy and unreliable affairs, requiring huge
amounts of computational effort to obtain results of miserable accuracy. In the spirit of
our Utah pioneers, Euler asks, “Do we really need to know all of these details?” Well, if
we are interested in earthquakes, we might, but for most engineering purposes we only
require the maximal weight that our strut can bear without sagging. As with Jack and Jill,
this means that we are chiefly interested in a future state of anticipated equilibrium: will our strut sag or remain upright under its load when it finally stops wiggling? Three basic possibilities arise as illustrated: A, B, or C (where we hope that the answer will be B: the strut supports the weight W without sagging).

[figure: stable and unstable equilibriums]

Euler proposed that we circumvent the computational obstacles posed by a straightforward evolutionary modeling by simply dropping the time term ρ ∂²x/∂t² from the foregoing equation and consider the following truncated replacement instead:
EI ∂²x/∂y² + Wx = 0.
In the sequel, I will dub this formula “Euler’s reduced equation.” The rough justification
for this maneuver is that frictional influences will eventually drain all energy of
movement from the strut, leaving only the factors EI ∂²x/∂y² and Wx to balance
each other at equilibrium. Euler observed that, as long as the weight W remains
below a certain critical value Wc (determined by EI), the strut will be able to bear its
load in the admirably upright manner B. But as soon as the weight exceeds Wc, the A
and C modes of sustaining an EI ∂²x/∂y² to Wx balance become available, creating a
situation in which our strut sags dangerously. As occurred with Jack and Jill, this
predictive technique doesn’t allow us to anticipate which of these two final shapes the
strut will adopt, but that indeterminacy usually doesn’t matter; a sagging column is bad
whichever way it leans.27 In the usual jargon, the equilibrium states available to the strut
undergo a bifurcation after the critical load Wc is passed (additional loading can produce
further bifurcations A*, A**, etc. within the response pattern).
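Where does the critical value Wc come from? Under two assumptions of my own (pinned, Dirichlet ends x(0) = x(L) = 0 and a strut of length L, neither of which is fixed by the text), the reduced equation can be solved outright:

```latex
% Euler's reduced equation with pinned ends (assumed): EI x'' + W x = 0, x(0) = x(L) = 0.
\[
  x(y) = A \sin\!\Big(\sqrt{\tfrac{W}{EI}}\; y\Big), \qquad
  \sqrt{\tfrac{W}{EI}}\, L = n\pi
  \;\Longrightarrow\;
  W_n = \frac{n^{2}\pi^{2} EI}{L^{2}} .
\]
% Below the lowest such load, W_c = pi^2 EI / L^2, only the straight shape B balances;
% at W_c the bent pair A and C first becomes available, and the higher W_n mark the
% further branchings A*, A**, etc.
```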

[figure: bifurcation diagram]

Applying our polynomial test to our newly reduced differential operator (EI ∂²x/∂y² + Wx), we find that its evolutionary character has been lost and its
associated sound cones have become imaginary.28 Moreover, our new equation now
only accepts solutions of a restricted “analytic function” class, another telltale feature of
considerable physics avoidance salience that we shall consider later.
In formal terms, Euler’s equilibrium-focused ploy shifts our treatment from compris-
ing an initial/boundary value problem to a pure boundary value problem, resulting in
an altered explanatory landscape. In the case at hand, the relevant “boundary” merely

27
Technically speaking, the B state still qualifies as a possible equilibrium even when the strut is loaded beyond
Wc, but it is unlikely to be witnessed in practice since it is unstable (= easily prompted into movement by any slight
disturbance). Unstable equilibria often appear in standard bifurcation diagrams as dotted lines, but we shall ignore
them altogether.
28
Generally speaking, the reduced operator becomes elliptic, but, in the one-dimensional setting considered here, we
obtain a simple ODE.
consists in the two clamps at each end of the strut, but, within higher dimensional modelings, the term carries its intuitive meaning—e.g. the boundary for a drum consists in the clamping around its perimeter. Once again, the physical processes responsible for the boundary region behaviors are not captured within the standard differential equation for the interior membrane.29 A fair amount of physics avoidance is inevitably active in the justification of suitable boundary conditions for such a problem, but we won’t pursue that issue here.

[figure: two forms of explanatory landscape]
Explanatory patterns that invoke pure boundary problems differ considerably in
intrinsic character from the (almost) purely evolutionary situations we have been
examining. Hadamard underscored this fact in his Lectures on Cauchy’s Problem:
Indeed the two sets of conditions play quite different parts, from the mechanical point of
view, and are often and rightly called by different names. We hitherto, according to the
geometric formulation of the question, have used the term “boundary conditions” indis-
criminately; but if we now think of the mechanical meaning of the questions, we shall be
led to give to [the space-like] conditions the name “initial conditions,” the name
“boundary conditions” being kept only for [the time-like] conditions which correspond
to the extremities of our string.30
Following our own advice, we can get a good handle on the differences between these
two types of problem by examining some relevant numerical methods. Let’s first
observe how a computer might calculate how pulses will move through our strut
using our hyperbolic modeling equation and suitable assignments of initial and bound-
ary values. In effect, we are asking our computer to fill out a piece of graph paper
containing a time axis and, accordingly, must supply it with enough side condition data
to fill out the three sides of a space-time rectangle (viz. position data31 along the two

29
Simple membranes can sometimes be modeled by a two-dimensional wave equation similar to that for our strut:
EI ∂²x/∂y² + EI ∂²z/∂y² + W(x, z) = ρ(∂²x/∂t² + ∂²z/∂t²). Unless otherwise specified, I will assume that this is the
relevant modeling equation.
30
Hadamard, Lectures on Cauchy’s Problem, p. 41.
31
That is, if our boundary conditions are of Dirichlet type (the endpoints stay fixed). Suitable boundary-condition data
for a clamped drumhead will instead involve velocities because the conditions themselves appeal to the work accom-
plished by the forces of binding. Even in such cases, we are supplied with a single type of data, rather than the two bits
available in an initial value problem.
time-like boundary condition lines and positions and velocities along the space-like initial data line). Our computer can then plot its graph by filling in new squares according to the values it has already assigned to the squares behind it, in accord with the five-point stencil marked in the illustration. This technique (a form of Euler’s rule suited to a PDE) marches up the graph paper like an old Roman phalanx, explaining why routines like this are called “marching methods.”

[figure: marching method plot of a moving wave front]

Accordingly, computations for pure evolutionary problems are quite straightfor-
ward and closely resemble the ODE cannonball techniques discussed above. Indeed, our
five-point stencil directly reflects the everyday “cause and effect” modes of talking that we
would employ if we described the activities operating within the strut interior in vernacular
terms—e.g. “within any portion of the strut x any current imbalance between the local
influence of the loading above and the strut’s present curvature, as determined by its
relationship to its nearest neighbors, will cause section x to increase or decrease the
current rate of its displacement change according to the precise effect demanded by
the material constants ρ and EI.” From this plotting, we immediately observe that the
main portions of our evolving pulse follow the sound cones (= characteristic curves)
laid down by our governing equation until these waves reach the endpoint clamping,
at which point our computer must be supplied with supplementary boundary-
condition data so that it will be able to complete its five-point stencils past these
endpoint encounters. The differing data requirements of a five-point stencil at these
junctures explain why initial and boundary conditions differ intrinsically in the kinds
of information they must supply.
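As a concrete, if toy, illustration of such a marching calculation, here is a minimal sketch of the five-point-stencil update for the strut equation ρ ∂²x/∂t² = EI ∂²x/∂y² + Wx with clamped (Dirichlet) ends; the grid sizes, material constants, and initial pulse shape are illustrative choices of mine, not anything fixed by the text. Each new row of the space-time “graph paper” is computed solely from the two rows beneath it, which is exactly the marching character just described.

```python
import numpy as np

rho, EI, W, L = 1.0, 1.0, 0.0, 1.0       # illustrative constants
ny, nt = 101, 400
y = np.linspace(0.0, L, ny)
dy = y[1] - y[0]
dt = 0.9 * dy * np.sqrt(rho / EI)        # stay inside the sound cone (CFL limit)

x_old = np.exp(-200.0 * (y - 0.3) ** 2)  # compact initial pulse (illustrative)
x_old[0] = x_old[-1] = 0.0
lap = np.zeros_like(x_old)
lap[1:-1] = (x_old[2:] - 2 * x_old[1:-1] + x_old[:-2]) / dy ** 2
# first step, assuming zero initial velocity along the space-like data line
x_now = x_old + 0.5 * dt ** 2 / rho * (EI * lap + W * x_old)
x_now[0] = x_now[-1] = 0.0

for _ in range(nt):
    lap[1:-1] = (x_now[2:] - 2 * x_now[1:-1] + x_now[:-2]) / dy ** 2
    x_new = 2 * x_now - x_old + dt ** 2 / rho * (EI * lap + W * x_now)
    x_new[0] = x_new[-1] = 0.0           # re-impose the time-like boundary data each step
    x_old, x_now = x_now, x_new
```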
Let us now turn to a pure boundary value problem, first in a two-dimensional format
that most closely resembles our wave calculations. So let’s consider how we might
calculate the equilibrium configurations of a drumhead loaded with rocks.32 At first
blush, it looks like we have more data to work with (on four sides rather than three), but
this impression isn’t correct, because we haven’t been told the angles at which the

32
To be concrete, I’m presuming that we confront a situation governed by the interior equation
EI(∂²x/∂y² + ∂²x/∂z²) + W(x, y) = 0, where the rim is held at a fixed position (this is a so-called Dirichlet boundary
condition). In contrast, the lateral loading supplied by the rocks is not a proper boundary condition but is instead captured
within the forcing term W(x,y). But this assignment is largely an artifact of the lowered dimensionality of our membrane
modeling. In a 3-D treatment the rocks will correspond to a true boundary condition.
drumhead skin will lift away from the boundary. In other words, we are missing the formal equivalent of the initial velocities that were supplied in the evolutionary strut case. Instead, a computational scheme needs to guess how the skin might lift off the drum’s edges and hope that its subsequent calculations agree when they meet in the center (our computational situation resembles the problem that the Western Pacific and the Union Pacific faced in trying to construct a transcontinental railroad working from two sides of the country). Working in this manner, our computer will have probably estimated those initial angles wrongly and so must work through a sequence of improving guesses until it eventually finds a suitable across-the-drumhead match-up. Computational techniques like this are generally called iterative methods or, with respect to the particular guessing technique suggested, shooting methods, because they operate as one does in archery, adjusting the angle of our bow after seeing how far our initial arrows fall from the target.33

[figure: shooting method calculation of constrained equilibrium]
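A minimal numerical rendering of this "shooting" idea, under assumptions of my own: pinned ends x(0) = x(L) = 0, illustrative values for EI and L, and with the load W rather than the launch angle serving as the adjusted quantity (for this linear problem the initial angle merely rescales the shape). It relies on scipy's standard integrator and root-finder.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

EI, L = 1.0, 1.0                         # illustrative values

def end_deflection(W):
    """Integrate Euler's reduced equation EI*x'' + W*x = 0 from the top
    with a trial slope and report where the path lands at the bottom."""
    def rhs(y, state):                   # state = (x, x')
        x, xp = state
        return [xp, -W * x / EI]
    sol = solve_ivp(rhs, (0.0, L), [0.0, 1.0], rtol=1e-9)
    return sol.y[0, -1]                  # x(L) for this trial load

# Bracket the first sign change of x(L) and solve for the critical load.
Wc = brentq(end_deflection,
            0.5 * np.pi ** 2 * EI / L ** 2,
            1.5 * np.pi ** 2 * EI / L ** 2)
print(Wc, np.pi ** 2 * EI / L ** 2)      # both ~9.8696
```

Successive trial loads "miss" until brentq pins down the value at which the integrated path lands exactly on the bottom clamp; under these assumptions the result reproduces the classic π²EI/L².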
[figure: Dang! Let’s try that again]

If we seek an equilibrium configuration for our strut based upon Euler’s reduced equation in a similar manner, we can employ a shooting method routine to the same effect. The only difference lies in the fact that our Eulerian model now only contains a single spatial variable and so we only need to begin at the top boundary with a single trial angle guess, rather than all around the boundary rim of our drumhead. But we’re still obliged to work through a sequence of guesses before we manage to “shoot” a computational path to a position that locates the bottom boundary of the strut in the right position.34 Note that in both

33
These shooting method archers will make a significant reappearance in Essay 6. I should also note that I have tacitly
concentrated on computing the lowest eigenfunction equilibrium here; we find others by asking that our strut or drumhead
remain level at one or more places along its breadth. These multiple eigenfunction issues are important in understanding
the complexities of general Sturm–Liouville searches (discussed in Essay 9), but I will set these aside here.
34
There are many better elliptic problem solvers than this, but the method illustrates the basic successive approxi-
mation technique involved in all of them. I first utilize a two-dimensional drumhead example to render more vivid the
cases we calculate in an essentially different manner than we did when we employed simple marching method reasoning to follow the waves up and down the strut. How so? Because we must repeatedly iterate our calculations until we obtain a satisfactory across-the-full-system matchup (Dang! Let’s try that again). This major adjustment in reasoning technique is closely correlated with the removal of time from our equilibrium modelings, although the connection may not seem obvious at first glance. When we set our strut wriggling by inserting an original pulse, it takes a certain amount of time before the pulse and its reflected descendants can locate an equilibrium through exploring every inch of the strut’s terrain. Accordingly, the assumption that the system has reached an equilibrium tacitly presumes that these exploratory processes have had enough time to bring every portion of the strut into mutual accommodation across its entire extent (the time required to reach this accord is called the relaxation time of the process). We gain our chief physics avoidance advantages by not worrying about what happens during these relaxation time events.35

[figure: shooting method for the strut]
Hadamard noted a piquant mathematical feature of this “time required for mutual accommodation.” Equations of elliptic signature, such as the time-terms-dropped-for-equilibrium versions of our strut36 and drumhead equation, only accept so-called analytic functions as solutions.

[figure: hyperbolic equations tolerate independent hammerings]

Now analytic functions per se appear to be rather harmless critters—most ready-to-hand mathematical functions are of this

manner in which our rules for “filling out a graph paper crossword puzzle” alter when we shift from an evolutionary to an
equilibrium setting.
35
And we also ignore the physical processes that smooth out and diminish the traveling wave pulses needed to
produce this eventual accord (they are not described within either of our two strut equations). We’ll return to these
suppressed factors later on.
36
Here I fib in an innocent way. Our original strut equation degenerates to an ODE, which doesn’t quite fit the PDE
requirements for elliptic signature.
type—but they secretly embody a reproducibility property allied to that of a flatworm:


from any little piece their behavior everywhere can be constructed by so-called analytic
continuation (mathematicians say that such equations smooth out the data away from
the boundaries). In normal evolutionary circumstances, this reproducibility represents an
unwelcome property, for one expects that two folks could hammer on two ends of a beam
in completely independent ways, with no blending of results until the pulses from each
meet in the center of the beam. If our wave equation behaved like an elliptic equation, the
pattern of the left-hand hammerer’s pounding could be deduced from that of the right-
hand hammerer without any signal passing between them. As Hadamard put it:
But such functions [= generic solutions of hyperbolic equations] differ from analytic
ones . . . in lacking one of their classic properties: the extension of a function . . . from one
part of its domain to a neighboring one is determined. [But a non-analytic] function,
being given in (0,1), may be extended to (1,2), for instance, in an infinite number of ways.
This capability of hyperbolic equations is closely tied to the sound cones they inscribe,
for the latter limit the speed at which messages can be sent from one region to another.
These limitations in signal transmission allow us to inject initial pulses into our strut at
different locations in completely independent ways (whereas analytic function initial
data insist upon a secret coordination between our injections). Hadamard observes that
our intuitive understanding of “causal process” anticipates that we can freely start our
strut off in uncorrelated ways.37 In contrast, the physics avoidance assumptions lurking
in the background of most elliptic equation models presume that sundry smoothing
processes have had enough time to bring the system to the coordinated balances
characteristic of an equilibrium or steady-state condition. This observation explains why
such models can accept analytic functions as their only solutions.
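The independence Hadamard has in mind can be made vivid in the simplest hyperbolic case. Setting the lower-order Wx term aside (a simplification of mine), the strut equation admits d'Alembert's decomposition:

```latex
% With W = 0, \rho x_{tt} = EI\, x_{yy} has the general solution
\[
  x(y,t) = f(y - c\,t) + g(y + c\,t), \qquad c = \sqrt{EI/\rho},
\]
% where f and g may be any (suitably differentiable) profiles whatsoever: in particular,
% compactly supported, non-analytic "hammer blows" laid down at separated locations,
% each riding its own sound cone and ignorant of the other until the cones overlap.
```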

(iv)
These adjustments in explanatory landscape issues would occasion few conceptual
difficulties except for the dastardly propensity of words like “time” and “cause” to
creep back into our descriptive vocabularies,38 even when we have resolutely decided,
à la Euler, to eschew attention to both time and causal process in our thinking. Like the
unwanted cat in the old song, these words keep coming back even though we have

37
Hans Reichenbach’s attempts to define “causal process” in terms of the transmission of “marks” (= pulses) strike me
as a somewhat botched transmutation of Hadamard’s ideas, but Wesley Salmon told me that he (Salmon) had never heard
of Hadamard. Possibly Reichenbach (Salmon’s direct inspiration) acquired some amorphous grasp of Hadamard’s work
through the vagaries of “what’s in the air” discussion amongst physicists.
We might also observe that most concrete functions encountered within a mathematics class are analytic in character,
even if one doesn’t want that characteristic to appear as an ingredient within a physical model. But getting rid of undesired
analyticity is not always easy and raises the issues of “descriptive opportunism” central within Essay 9.
38
Excellent motives of linguistic efficiency lie behind these shifting behaviors, but I have reserved discussion of these
issues to Essay 1.
presumed that they were goners. There are many ways in which this can happen, including the manner in which temporal imposters frequently reappear in optical topics (vide Appendix 1). With struts, allied time mimics reenter the equilibrium-dominated landscape as follows: It is natural to worry, in planning a building, whether its struts can support the maximal expected weight we plan to place there (e.g. a large record collection in a shoddy apartment building). Such considerations lead us to associate our reduced strut equation with the bifurcation diagram provided a few pages back. Such charts indicate, as we gradually increase the load W on the upper end of the strut, that a qualitative change in permissible equilibrium shape appears once we exceed the critical load Wc, whose A and C branches correspond to struts that bend to the left or right. In fact, if we increase the load W enough, new qualitative changes in strut shape will appear, corresponding to a further critical load Wc′, as indicated by the second branching in our right-hand diagram.39

[figure: words keep coming back as well]
The mathematician’s usual name for figures like that chart is control space diagram
(the increasing weight on the upper end is the control variable). But if we fancy that we
have slowly loaded the upper boundary with increasing weights up to the moment
when the whole affair buckles, we have tacitly reintroduced a temporal dimension into
our considerations through an identification with an extraneous “clock” determined by
the exogenous rate at which the control variable W is altered.40 But this whole mode of
temporal thinking sits on top of an Eulerian strategy in which we have avoided tracking
temporal events within our strut by skipping ahead to its inevitable equilibriums.
Conceptually, this new “time” quantity is quite different from the former “time” we
have eliminated, for it represents a manipulation time that captures the driving rate at
which we increase the weight on the boundary. It does not represent the natural
endogenous time that the strut autonomously follows as it subsides into equilibrium.
Indeed, for a control variable perspective to operate as intended, the manipulation time
scale Δt* must unfold at a much slower rate than the internal relaxation time Δt
required for the strut to damp out its internal wigglings and settle into equilibrium.
Euler’s stratagem supplies reliable predictions for our strut buckling problem only if
these two timescales remain strongly separated—if the load is increased too speedily, we

39
Strange behaviors sometimes appear within these secondary bifurcations: B. Audoly and Y. Pomeau, Elasticity and
Geometry (Oxford: Oxford University Press, 2010).
40
Note that in our earlier considerations W was treated as fixed and not variable at all.
find ourselves within the altered landscape of so-called dynamic loading such as earthquake engineers investigate. We might conceptualize how these two time scales operate in tandem with one another as illustrated, in which we sandwich little pictures of the “fast time” relaxation processes in between the constrained equilibrium events registered upon our “slow time” bifurcation chart. Putting the matter a different way, the dynamic processes active in our strut (which involve lots of waves racing swiftly up and down the column) are not registered within our control space diagram because the relaxation times required to reach equilibrium have been compressed into infinitesimal invisibility there.41

[figure: sandwiching evolutionary histories between equilibriums]
If we don’t pay due attention to these distinct time scales, we can readily fall into
simple confusions, such as “if a strut’s behavior is deterministic, where does the
indeterminacy come from that prevents us from predicting whether the strut will sag
to the left or right?” Answer: we must open up the suppressed “fast time” behaviors to
obtain a deterministic prediction.42 When our temporal focus shifts to a control or
manipulation variable such as W in this fashion, it is not surprising that the word “cause”
opportunistically attaches itself to the changes wrought when the strut’s upper boundary
is manipulated, rather than focusing upon the interior causal processes that unfold on a
much faster clock (viz. the internal waves that travel up and down the beam once they
are set into motion by some initial disturbance).
Essay 1 argues that these vacillations in word reference aren’t whimsical, but trace to
the important ways in which “cause” serves as a crucial instrument of language
management. We employ “cause” to alert our listeners to the modeling circumstances
we intend to explore next: “Let’s figure out the effects that an increased weight will

41
Variants on these infinitesimal compressions will often appear throughout these essays (e.g. the suppression of
elastic effects in standard billiard ball modeling in Essay 8). Allied fast time/slow time concerns are central to
thermodynamics as well—see Essay 4.
42
The physics active in its boundary regions must be specified in greater detail as well. More substantive versions of
such confusions arise when the strong physics avoidance capacities of constraints are overlooked. The appendix to Essay 6
supplies a typical example. For a second exemplar, see John D. Norton, “Causation as Folk Science,” Philosopher’s
Imprint 3(4) 2003. For my diagnosis of the role that mixed level constraints play in this puzzle, see my “Determinism and
the Mystery of the Missing Physics,” The British Journal for the Philosophy of Science 60(1) (2007). These problems were
well understood in the nineteenth century.
cause within the strut.” Or, when we wish to alter our explanatory focus: “No, I’m more
interested in how effects from the upper weight reach the middle of the beam.” Once a
specific explanatory landscape has been selected, “cause” will enter the new territory and
claim some of its salient landmarks as its own (rather as the presumptuous chair of a
search committee might award the job to himself).43
Although I have glibly appealed to a “relaxation time” for our strut equation,
perceptive readers will have noted that our modeling equation, as it stands, doesn’t
support such claims. This is because we have not incorporated any form of frictional
process within its internal workings. Without some energy dissipation, our strut will
wobble forever, passing waves up and down its column, and never settle into quies-
cence. Correcting this lapse in our modeling in a physically plausible manner is not an
easy task. Here’s the simplest form of frictional repair we might invoke: We add a simple
velocity-dependent term κ ∂x/∂t to our evolutionary equation that gradually damps
movement within the strut, obtaining ρ ∂²x/∂t² = EI ∂²x/∂y² + Wx − κ ∂x/∂t.
When we turn off the time variable in Euler’s manner, we still obtain the same reduced
equation, because κ ∂x/∂t drops out along with ρ ∂²x/∂t². But we more plausibly
hope that the solutions of this friction-altered formula will asymptotically approach our
A, B, and C solutions as time → ∞. If so, our basic rationale for focusing directly upon
our strut’s anticipated equilibrium states can be justified as a limiting state of the full
evolutionary modeling.44
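To see what the hoped-for subsidence would amount to in the simplest setting, here is a modal check under assumptions of my own (pinned ends, constant coefficients, and small departures from an equilibrium shape X(y) obeying the reduced equation); it is offered as a sketch, not as the missing general justification:

```latex
% Write x(y,t) = X(y) + u(y,t) with EI X'' + W X = 0 and insert trial modes
% u = e^{s t} \sin(n\pi y/L) into  \rho u_{tt} = EI u_{yy} + W u - \kappa u_t :
\[
  \rho s^{2} + \kappa s + \Big[\, EI\big(\tfrac{n\pi}{L}\big)^{2} - W \,\Big] = 0,
  \qquad
  s = \frac{-\kappa \pm \sqrt{\kappa^{2} - 4\rho\big[EI(\tfrac{n\pi}{L})^{2} - W\big]}}{2\rho}.
\]
% For W < W_c = \pi^{2}EI/L^{2} the bracketed term is positive for every mode, both roots
% have negative real part, and the wiggles decay toward the upright shape B; once W
% exceeds W_c the n = 1 mode acquires a positive root and the strut drifts toward A or C.
```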
Unfortunately, crudely adding a simple term like κ dx/dt may not supply the equilibrium state subsidence we anticipate, because (inter alia) the posited damping may prove so strong that the strut is forced to stop moving before it obtains the expected equilibrium condition. In other words, the waves moving up and down the strut may get eradicated before they’ve had enough time to explore the full length of the band. Indeed, I don’t know of any plausible evolutionary modeling for our strut that can ratify Euler’s drop-the-time-terms strategy in all foreseeable cases (such repairs may exist; I merely don’t know of them). Accordingly, our presumption that the successes of Euler’s equilibrium-based strategy can be founded upon an underlying evolutionary modeling of the sort suggested should only be regarded as an aspirational hope, not an established verity.

[figure: mimicry]

In studying physical practice as we actually find it, we often encounter situations in which some physics avoidance pattern X has been presumed to piggyback upon a fuller evolutionary modeling of type Y, but where it later emerges that the proper justification for X traces to underlying factors Z of an entirely

43
Jim Woodward’s important work on the intimate connections between “cause” and “manipulation” is very helpful
here. Cf. Essay 6.
44
In other words, the equilibrium appears asymptotically as t ! 1. In this modeling, we have tacitly assumed that
the frictional drag is entirely occasioned by the ambient air. In reality, a good deal of frictional leakage will occur at the
endpoints, but such boundary conditions are hard to handle.
different character. In such cases, I sometimes say that Y has served as an explanatory
mimic with respect to Z. The Y and Z accounts may both engage in long strings of
computations that superficially look very much alike, but their underlying rationales are
quite different. Unwary observers are often deceived by these resemblances. Essay 8 is
chiefly concerned with a striking exemplar of this phenomenon.
Within the context of classical physics, we must be especially wary of positing the
existence of a supportive evolutionary story of an exclusively classical stripe, for appeals
to inherently quantum mechanical processes are often required to rationalize the
equilibrium-centered reasonings that classical mechanics frequently employs. Consider
the innocuous-looking assumption that an iron bar will remain approximately rigid
throughout its normal everyday interactions (this datum is called a constraint and will be
discussed in more detail below). The real-world underpinnings that supply our rod with
its stable shape appear to rely upon intrinsically quantum mechanical mechanisms (such
as tunneling) and may possess no analogs within a strictly classical framework.45
Accordingly, when I write of “physics avoidance” in this book, I sometimes invoke
the notion in applicational circumstances where we presently lack any clear conception
of what we’re actually avoiding. Wary readers should also be alerted to the fact
that elementary physics primers often switch from modeling strategy A to modeling
strategy B with an offhanded remark to the effect that it is “obvious” that B is fully
supported as an “approximation” to A. Like the emperor’s new clothes, such appeals
often point to a justificatory fabric that simply isn’t there.
Mistaking aspirational hope for accomplished task is a foible strongly encouraged by
loose thinking of a Theory T stripe. I believe that Peter Railton offhandedly presumes,
upon some nebulous a priori basis, that evolutionary modelings comparable to his D-N and D-N-P texts must underlie every successful explanatory pattern within physics (every “condensed text” must have its “ideal expansion”). Contemporary analytic metaphysicians rely upon questionable assumptions with respect to the future of “fundamental physics” in the same vein. They fail to recognize that these are rash presumptions, that wise physicists have sometimes wondered whether the overall architecture of basic theory should be stitched together from ingredients of a different formal character. No doubt it would prove intellectually pleasing if every methodological gambit utilized within science could eventually be rested upon purely evolutionary underpinnings. But this prospect must be regarded as an empirical matter, dependent upon what nature eventually decides

[figure: “take your vittles and like ‘em”]

45
Elliott H. Lieb and Robert Seiringer, The Stability of Matter in Quantum Mechanics (Cambridge: Cambridge
University Press, 2009). The fact that point mass mechanics could not model a normal solid aptly was familiar to figures
such as Kelvin and Maxwell.
to place upon our plates with respect to workable explanatory schemes (it doesn’t seem to
much care whether we find its offerings “intellectually satisfying” or not). See Essay 6 for
more on these Theory T-induced issues. My object in this book is not to supply
contravening dogmas of my own, but to widen our methodological appreciation for the
variety of explanatory landscapes arising within science. I am a scientific realist at heart and
have no doubt that all of these varied patterns “fit together somehow.” But, at the present
moment in scientific time, we don’t really know how this “fitting together” formally
operates, and nature offers many surprises on this score.
Before moving on, let me observe that several further instances of compressed time
scales are implicit in our strut treatment. As already remarked, most natural problems
concerning struts are not true initial/boundary value problems. Indeed, some rather
subtle forms of “true time” avoidance were previously operative even within our
original choice of an allegedly “purely evolutionary” modeling. Normally, a strut does
not start flexing because it has been plucked in its interior, but because of the wiggling
that occurs when the load W is initially placed upon its upper boundary. This boundary
region loading then releases a wave-like pulse into the interior, pulling it out of equilibrium. But our formal initial/boundary condition formulation presumes that the upper boundary remains perfectly fixed at all times of interest. But weight W can’t both move and not move at the very same time! We can resolve this contradiction by observing that the time scale Δt* in which the moving weight creates a disturbance within the strut interior is much brisker than the time Δt required for the strut to settle into equilibrium.46

[figure: causal mystery due to suppressed underpinnings]

We fall into related quandaries when we notice that it is only through the upper weight’s falling that the torque on the position y gets generated, appearing as the Wx forcing term. But the position y may lie relatively far from the weight W—how does the latter manage to send an appropriate signal to position y that it should bend by a factor x(y)?
The proper answer must invoke the consideration that, although we have allowed our
beam to flex laterally in the x direction, we suppressed its compressions in an up and

46
This form of descriptive “double think” with respect to movement and time is characteristic of side condition
specifications that are often categorized loosely as “boundary conditions,” but actually represent stipulations of a more
complex formal nature. Canny observers should watch for these tacit little adjustments, for they sometimes provide the
root seed of great conceptual confusions.
down, y-direction mode. But we can’t send compressive wave messages from the weight
to position y unless this constraint is mildly lifted. This neglect of finely detailed vertical
movements is further obscured by the popular stratagem of modeling our strut as a
lower dimensional object, a physics avoidance ploy that often induces conceptual
puzzles of this sort.
Once again, tacit restrictions like this—the strut can flex only in the x direction—
qualify as constraints. These appear as annoying visitors within many of the essays in this
collection (including section (v) next to follow). Presumptions of this character represent
physics’ closest analogs to unruly dogs: they are man’s best friend but they sometimes
bite the postman. We’ll soon find that rigidity assumptions offer excellent platforms
upon which highly effective modeling simplifications can be built, but their underlying
methodological complexities may spawn conceptual confusions afterwards.
We might further observe that it is most likely that a moving strut sheds most of
its kinetic energy through minute movements of its upper and lower clamps, rather
than through interactions with the surrounding air (as the “κ ∂x/∂t” term postulated
above suggests). But handling these details in a properly evolutionary mode requires
that we model the strut/clamp interaction in a more substantive way than merely as a
standard (Dirichlet) boundary condition. Once again, it is this suppression of fast time
detail that injects the indeterminacy into reduced Eulerian modeling. Unless we reopen
these suppressed factors, we won’t be able to predict whether the strut will sag into the
A or C configurations illustrated earlier.
As remarked earlier, it is rather hard to find pure exemplars of evolutionary initial/
boundary value problems within working science, because unnoticed policies of sub-
stantive physics avoidance frequently loiter within the justificatory woodwork—factors
that we might call “latent physics.” Friction’s under-detailed operations often play
significant roles in generating these latent physics masquerades.47

(v)
Let us now investigate a second pattern of profitable physics avoidance that also leads to
modeling equations that formally deviate from purist evolutionary expectations. These
involve the exploitation of the constraints recently introduced. From a formal point of
view, these appeals often arise when additional equations of non-hyperbolic type are
adjoined to a core evolutionary modeling. That is, we alter purely evolutionary charac-
ter by simply adding further equations of an alien disposition to our modeling set, rather
than tinkering with their signatures as we did with our strut. Here’s a characteristic
illustration: The governing equations for a simple fluid are supplied by the compressible

47
See Essay 4 for more on this theme. The close connection between “friction” and “loss of information” suggests that
very deep factors may cluster here. The proper treatment of so-called inverse problems often forces us to pull some of
these latent factors out of the explanatory woodwork.
Navier–Stokes equations. But everyday fluids such as water strongly resist volumetric
change, so it is common to add an additional formula to our modeling equations
requiring the fluid to conserve volume as it flows. This amplified collection comprises
the so-called incompressible Navier–Stokes equations (or N–S equations, for brevity).
The extra assumption of incompressibility allows us to simplify our calculations in quite
significant ways.
This innocuous-looking supplementation alters the underlying explanatory landscape
considerably and causes “cause” to migrate in a manner that generates common
conceptual confusions with respect to the word “pressure” (further details on this
specific situation are provided in the appendix to Essay 6). For present purposes, let’s
instead examine what happens to an explanatory landscape when a very basic form of rigidity constraint is enforced. Maintaining that a bead stays on a wire, that a ball slides or rolls along a table top, or that a machine part remains rigid all qualify as typical exemplars of such impositions. In the first and third of these appeals, the constraint is implemented mathematically through a supplementary algebraic equation that employs no differential operators whatsoever.48 For example, the constraint equation for our sliding bead is just the formula that specifies the location of the wire. The constraint equations for our bar require that all points along the bar maintain a constant distance from one another.

[figure: realistic beads wobble about a wire]
Almost always, multiple length scales are tacitly present when rigidity constraints
are invoked in this manner, just as “fast” and “slow” timescales are required in a fuller
discussion of our strut. So let us also distinguish “large” and “small” size scales,
symbolized as “ΔL*” and “ΔL”. On a smallish scale of resolution ΔL, our bead will
not be able to follow the curve of the wire exactly, but will wobble about its locus in a
very complicated way. But it will look as if our bead obeys the stay-on-the-wire
constraint perfectly when viewed on the coarser length scale ΔL*. Physicists then say
that “the bead stays on the wire perfectly when resolved on the macroscopic length
scale ΔL*.”49
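As a toy picture of what "obeys the constraint when resolved upon ΔL*" might amount to, here is a sketch of my own (the wire shape, the wobble, and the smoothing window are all invented): a bead coordinate that strays from the wire at the fine scale ΔL coincides with it, to good accuracy, once averaged over a ΔL*-sized window.

```python
import numpy as np

s = np.linspace(0.0, 10.0, 2001)            # arc-length-like parameter along the wire
wire = np.sin(s)                            # the wire's locus
bead = wire + 0.05 * np.sin(300.0 * s)      # fine-scale (dL) wobble about that locus

window = 41                                 # smoothing window standing in for dL*
kernel = np.ones(window) / window
bead_coarse = np.convolve(bead, kernel, mode="same")

# Departure from the wire: sizable at the fine scale, far smaller once coarse-grained.
print(np.max(np.abs(bead - wire)))
print(np.max(np.abs(bead_coarse[window:-window] - wire[window:-window])))
```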
Even if our bead cannot follow its wire exactly at the level ΔL, the fact that it appears
to do so on the length scale ΔL* supplies us with information that we can exploit very
effectively for physics avoidance purposes. A classic recipe of this type was codified by

48
The official name for a constraint of an algebraic character is holonomic. In contrast, the limitations in movement
imposed when a ball rolls across a table need to be captured in differential equation terms, and constraints of this second
class are classified as non-holonomic.
49
This notion of a claim holding true “when resolved upon a length scale” plays a central role in the developments of
Essay 5.
Lagrange within his celebrated “analytical” methods.50 These techniques provide an early prototype for how partial data extracted from different scales of observation can be successfully blended together. To see how Lagrange’s stratagem works, let’s examine the simplest constraint of all: the fact that an iron bar remains rigid when evaluated on a length scale ΔL*. Suppose that our bar is imbedded within some more complex mechanism, such as the collection of linked balls and springs illustrated.

[figure: constraint embedded in a mechanism]
What do we know in advance about our device? That, whatever else happens within
the whole gizmo, the parts of the rigid bar will forever remain at constant distances from
one another, if resolved upon the observation scale ΔL*. But we know less about the
lengths of the two embedded springs, for they will visibly compress or extend as the
device moves about. Secretly, of course, our allegedly “rigid” bar must also compress
and extend upon a smaller evaluation scale ΔL, but over such a minute range that we
can suppress these fluctuations at the observational level ΔL*.
How can we exploit this ΔL*-level information to reliability-enhancing purposes? We
want to cut off our system’s microscopic complexities from above, as it were. The
attractive advantage that top-down requirements offer (if we can get them to work
properly) is that we won’t need to re-derive a bunch of facts that we already know. We’ll
see how additional constraint equations facilitate this skirting of superfluous reasoning
pathways in a minute. In contrast, any bottom-up attempt to directly model the
underlying molecular sources of an iron bar’s rigidity while discussing machinery is an
extremely fraught enterprise, liable to greatly increase the unreliability of our results. In
the usual jargon of analytical mechanics, we are advised to “neglect the internal forces
that perform no work on the system”—that is, we should exploit our rigid bar infor-
mation as data that is already established.
Let’s first see what formally occurs if we ignore this advice and attempt to model our
springs and bar on a common descriptive basis. We’ll probably wind up with a modeling
scheme as illustrated, in which various point centers are interconnected by forces that
are classified as either “active” or “inactive.” The forces corresponding to our springs are
standardly considered “active,” as are the forces we apply to set the gizmo into wobbling.

50
J. L. Lagrange, Analytical Mechanics (Berlin: Springer, 2001). Because Lagrange did so many things, the term
“analytical mechanics” often attaches to other forms of mechanics, but I reserve the term for the approach to mechanics
that builds upon virtual work, d’Alembert’s principle, and the use of Lagrange multipliers. These foundational issues are
discussed at greater length in Essay 3. I regard his multiplier technique as an early version of the homogenization
techniques central to the multiscalar methods of Essay 5.
But the rigid connecting bar gets replaced by an “inactive” force, as have all the strings and the frame to which they are attached. In their usual argot, physicists say that the advantages of Lagrangian technique lie in the fact that it lets us ignore all of these inactive factors when we reason about our device. But at the present moment we are asking what will transpire if we don’t take advantage of the top-down Lagrangian shortcuts and model our gizmo directly on a common ΔL basis.

[figure: schema for our device]
At first glance, everything looks as expected. We seem to confront a situation governed by simple F = ma mechanics, and so we’ll merely need to write down the equations for all of the component forces involved and sum them. But there’s a problem. We possess such formulas to govern all of the active spring forces (viz. Hooke’s law F = −kr, where r is the distance between the centers connected by the springs). However, we lack comparable equations for the constraints; all we know is that their point centers always stay the same distance apart. But this stipulation isn’t a force law; we must somehow convert this datum into force-like information so that F = ma can properly sum its ingredients together. Until we resolve this problem, we confront what mathematicians call an under-constrained problem—we don’t have enough equations to govern all of the forces included in our ΔL-level diagram. How does Lagrange resolve this dilemma?
It will be helpful if we first consider a situation that is mathematically analogous, the so-called combination of observations problem from astronomy (and statistics in general). We know from baseline theoretical considerations that artificial satellites revolve around the earth in near ellipses, and that, over short periods, such motions can be approximated as simple straight lines across the firmament.

[figure: relationship between an orbit and its observational appearance]

Such lines can be captured as linear equations of the
simple form y = mx + b. Our task is to extract suitable values for the constants m and b
from direct observations of the satellite’s path across the sky. With a good telescope we
can easily assemble an abundance of empirical observations E, all of the form ⟨xᵢ, yᵢ, tᵢ⟩.
Our formal task is one of amalgamating (i) the requirement that the satellite satisfies
y = mx + b (m and b operating as undetermined variables) with (ii) the large packet of
observational data E that we obtain from direct terrestrial sightings. Prima facie, one
would suppose that if E is sufficiently ample, we should be able to compute values for m
and b that allow us to predict future satellite movements with excellent accuracy.
But here’s the rub. Our combined data set is over-constrained: that is, E contains far too
many determinations for all of it to prove mathematically compatible with y = mx + b.
We wish to estimate m and b from our data set E, but only two data points are
algebraically required for this purpose. E, however, contains many more values than
this, any pair of which will assign m and b slightly different values when plugged into
y = mx + b. But it would be very unfortunate if we discarded most of the data in
E simply because of this consistency problem. How can we bring y = mx + b into a
more reasonable alignment with all of E? Intuitively, the underlying source of the over-
constraint is apparent; each member of E will be corrupted by minor observational
errors εᵢ that we have improperly ignored. So the canonical resolution to our over-
constraint problem (pioneered by Gauss and Lacroix) recommends that we should first
incorporate unknown error terms εᵢ into each of our data points (i.e. we should consider
⟨xᵢ, yᵢ + εᵢ, tᵢ⟩ where yᵢ is the true value of the satellite’s xᵢ ordinate). We should then
ask, of all of the possible ways in which the unknown εᵢ might vary, which policy for
“combining observations” might produce the smallest overall error as a rule? A proper
response can vary according to the assumptions we make about the origins of the errors,
but in simplest circumstances, the “rule of least squares” supplies excellent answers (pick
the m and b values that supply the smallest squared error with respect to the data we
have gathered).
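A minimal numerical rendering of this least-squares recipe follows; the synthetic sightings, noise level, and seed below are all invented for illustration, and nothing here comes from real observations.

```python
import numpy as np

rng = np.random.default_rng(0)
m_true, b_true = 0.7, 2.0
x_obs = np.linspace(0.0, 10.0, 25)
y_obs = m_true * x_obs + b_true + rng.normal(0.0, 0.1, x_obs.size)  # errors eps_i

# Over-constrained system: 25 equations y_i = m*x_i + b in only 2 unknowns.
# lstsq selects the (m, b) pair minimizing the summed squared errors.
A = np.column_stack([x_obs, np.ones_like(x_obs)])
(m_est, b_est), *_ = np.linalg.lstsq(A, y_obs, rcond=None)
print(m_est, b_est)
```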
From a formal point of view, we began with a modeling format (y = mx + b) that
doesn’t contain enough independent parameters to accommodate our original data set
E. As a result, the collection of equations y₁ = mx₁ + b, y₂ = mx₂ + b, . . . becomes
over-constrained in the sense that E places far more demands on m and b than can be
usually satisfied. However, by explicitly including the additional εᵢ in these formulae
(i.e. yᵢ = m(xᵢ + εᵢ) + b), we remove the over-constraint by providing some extra
variables to play with. In doing so, we actually overshoot the desired mark and wind
up with an under-constrained equational system that is too weak to force unique values
upon m and b. To rectify this new problem, we must invoke some additional equation
(generally of a variational character51) to get the ratio of modeling equations to

51
Note that the use of least squares reasoning typically involves searching through a set of trial possibilities until a
proper fixed point is found. Essay 6 discusses the localized “possibility spaces” in which such searches are commonly
conducted. One of my chief motivations in discussing these somewhat fussy details is to alert readers to the fact that the
appearance of variational techniques within physics is usually connected to deeper issues of backgrounded explanatory
landscape.
independent parameters in proper balance. A least-squares demand on the errors provides us with the crucial supplementary formula that arranges our mathematical concerns in determinative harmony.

[figure: inserting new force centers into our diagram]

To resolve our earlier problem of how constraint information might be combined with active force information, Lagrange adopts a similar strategy and inserts new force centers into the interiors of the rigid parts, indicated in the diagram by gray dots. Each of these new centers serves as a locus of additional constraint forces, whose strengths are determined by so-called Lagrange multipliers, usually written as λᵢ. These force-like supplements reconfigure our constraint demands so that they can accord with our underlying F = ma principles in a proper harmony, akin to the manner in which the error terms εᵢ resolve our combination-of-observations difficulties.52 Unfortunately, all we presently know about the λᵢ is that they must, in
combination with the other operative forces, be of the exact strength required to hold the
rigid body points at their requisite separations when viewed at the ΔL* level. These
demands supply us with a large number of salient equations, but not enough to save
the enlarged formula set from under-constraint. Once again some supplementary prin-
ciple is needed to complete our equation collection so that it can fully determine the
magnitudes of the constraint forces. The clever resolution championed by Lagrange
employs variational reasoning with respect to “least work” in a manner that formally
resembles least-squares minimization within astronomy.53
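To see the multiplier machinery in miniature, here is a deliberately tiny sketch of my own devising rather than the text's ball-and-spring gizmo: two point centers on a line, tied to walls by Hookean springs (the active forces) and joined to one another by a rigid bar (the constraint). Stationarity of the potential energy together with the constraint yields a small linear system in which the multiplier λ appears alongside the positions.

```python
import numpy as np

# Illustrative constants: spring stiffnesses, wall separation, bar length.
k1, k2, D, Lb = 3.0, 1.0, 5.0, 1.0

# Stationarity of V - lam*g with V = 0.5*k1*x1^2 + 0.5*k2*(D - x2)^2 and
# g = x2 - x1 - Lb gives a linear system in (x1, x2, lam):
#   k1*x1          + lam = 0
#          k2*x2   - lam = k2*D
#   -x1    + x2          = Lb
A = np.array([[k1, 0.0,  1.0],
              [0.0, k2, -1.0],
              [-1.0, 1.0, 0.0]])
b = np.array([0.0, k2 * D, Lb])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # lam is the "force of constraint" carried by the rigid bar
```

Nothing about the bar's interior enters the calculation; λ simply assumes whatever value is needed to hold the two centers at their fixed separation, which is the physics avoidance advantage being described.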
This additional dictum supplies our supplemental forces of constraint with the
requisite oomph needed to pull an otherwise stretched rod back to its required rigid
length. But observe that these microscopic force strengths have been calculated on the
basis of the macroscopic traits they induce on the ΔL* level (viz. rigidity). Such cross-
scalar arrangements place us in the admirable position of only needing to study how the
“active forces” applied to our system function—how do the externally applied forces
interact with the Hookean forces operating within the springs. Other forms of “inactive”

52
This across-scales functionality is why I regard Lagrange multipliers as an early prototype of the homogenization
methods discussed in Essay 5.
53
The central key is Lagrange’s virtual work scheme, otherwise known as the “principle of least work” for these
reasons (in truth, it calls for work stabilization, rather than minimization). In dynamic circumstances, d’Alembert’s
principle is utilized as well—see Essay 4 for more on this. Note that this policy evaluates “least work” on a global, across-
the-entire-device basis. Such top-down computations are characteristic of schemes that mix information across scales and
are generally resolved through successive approximations.
internal activity can be ignored because our Lagrange-calculated λᵢ forces automatically cancel out any aspect of an exterior manipulation that directly opposes a constraint.54 Engineers reap a goodly amount of physics avoidance advantage by reducing descriptive complexity in this manner. In practical applications, they can often identify the active forces operating upon a target structure S through simple manipulation experiments—if we apply an external force or torque to S, in which directions will it wiggle?55 The manner in which Lagrangian methods exploit macroscopic data in order to simplify a complicated microscopic situation provides us with a sterling paradigm of physics avoidance technique operating in a most effective form.

[figure: Lagrange]
I find it useful to conceptualize these active force/constraint force blends as tacitly
operating on two asymptotically separated size scales: at the 0-dimensional level of the
point centers and at the 1-dimensional length scale of the rigid rods and springs.
Lagrange’s multiplier trick projects 0-dimensional surrogates for the higher level con-
straints into the level of the point centers so that they can mathematically commingle in
standard F = ma fashion. These arrangements allow our two levels of information
about the system to “talk to one another” and Lagrange’s least work principle tightens
up the various λᵢ until they pull their rods and strings to the requisite lengths.56 As is
often the case when information pertinent to different scales is blended together, a
certain conceptual price must be paid for the computational advantages gained. In
particular, the term “force” is covertly dragged to altered physical underpinnings
when it is applied to these λᵢ-induced “forces,” in contrast with the “real forces” that
act between the unsupplemented point centers. As a result, certain standard demands
associated with Newton’s Third Law need to be turned off when these supplementary
“forces” are considered. The misunderstandings created by this simple shift in explana-
tory landscape have had profound, if unacknowledged, effects on philosophical thinking,
because the popular misconception that there is something “metaphysically fishy” about
force ultimately traces to these tacitly wandering referents. But I’ll reserve further
discussion of these matters to the appendix to Essay 7. Philosophers of science rarely
pay attention to constraints, but their appearance within a modeling is usually a signal
that some underlying shift in explanatory landscape has occurred.
For these reasons, I have elsewhere labeled reasoning architectures that utilize
ΔL/ΔL* data amalgamations of this character as mixed level explanations57—they are
neither purely bottom-up (working on scale ΔL) nor top-down (working on scale ΔL*)

54 Such efforts are usually said to “perform no work” on our system, a phraseology that I regard as misleading. “No visible work” might be preferable.
55 Such manipulation counterfactuals comprise a central topic of Essay 6.
56 In the fuller discussion of Lagrangian methods in Essay 7, I analogize these multipliers to macroscopic winches that tighten up the arrangements on a lower scale.
57 Mark Wilson, “Mixed-Level Explanation,” Philosophy of Science 77(5) (2010), pp. 933–46.
in the data to which they appeal. In section (vi) I’ll survey a few common misunder-
standings that arise when their specific structural features are overlooked because of
typical Theory T myopia.

(vi)
It is also important to recognize that the presumption that Lagrange’s λi forces can be
successfully underwritten by detailed molecular models of a classical character must be
characterized as an aspirational hope, rather than established fact. Indeed, the instability
of classical matter considerations raised earlier suggests that no viable classical modeling
of simple rigidity at the molecular level may be obtainable. In these respects, elementary
physics textbook rhetoric is often misleading, for these primers often claim to “prove”
Lagrange’s variational principles by simply multiplying basic-level Newtonian formulas
by arbitrary variations. But this is a fallacy. Such manipulations merely show that if a
system S can be successfully modeled at level ΔL in a wholly Newtonian mode, then
Lagrange’s principles will (approximately) hold of the system when it is resolved upon a
higher scale length ΔL*. In practice, however, Lagrange’s methods are commonly
applied in mixed-level circumstances where suitable classical underpinnings at a
lower-size scale are unknown and possibly unavailable. For these reasons, Lagrange’s
doctrines are substantially stronger in their physical import than the classical tenets that
our textbooks employ to “prove” them. From a logical point of view, one cannot
legitimately strengthen a formalism via deductive manipulations alone. Such maneuvers
can only weaken import, and this is all that multiplying by variations properly accom-
plishes. The better books on mechanics make all of this perfectly clear.58
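For orientation, the textbook manipulation being criticized amounts, in schematic form, to dotting Newton’s second law with arbitrary virtual displacements and summing (a generic collection of point masses is assumed; this is not a quotation from any particular primer):

\[
\sum_i \left( m_i \ddot{\mathbf{x}}_i - \mathbf{F}_i \right) \cdot \delta\mathbf{x}_i \;=\; 0 .
\]

Read in that direction, the identity presupposes that a fully Newtonian point-level modeling is already in hand, which is why it cannot, by itself, underwrite the stronger mixed-level applications of Lagrange’s principles.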
As students of scientific methodology, we should strive to diagnose correctly the true
empirical support of the explanatory architectures scientists employ. When they utilize
the advantages of Lagrange-style analytical mechanics, the direct source of their reliable
constraint data usually traces to direct macroscopic observation; they rarely establish
these claims on the basis of a concrete molecular model. Pretending otherwise mis-
characterizes the strategic rationales that underpin these techniques. In recent years, the
philosophy of science literature has become flooded with discussions of “reduction” and
“emergence” that fail to take heed of these methodological precautions. For example,
many writers appear to misunderstand the strategic logic involved in the renormaliza-
tion techniques of material science in this way. In real life, physicists start with a
straightforward observational datum with respect to scaling behavior: that many boiling
liquids form gas bubbles of virtually every scale length at their critical points. This
striking symmetry, combined with some basic energetic considerations, locates all such
materials within a kind of mathematical trap that forces them to all behave very similarly
near their critical points, a feature often called universality. This formal commonality

58 Giovanni Gallavotti, The Elements of Mechanics (Berlin: Springer, 1983).
then supplies an excellent descriptive opportunity for obtaining a reliable handle on the
macroscopic behaviors of a large number of otherwise different systems, a point that
Bob Batterman often emphasizes in his admirable writings on this topic.59 But his critics
often presume that the renormalization arguments establish the very datum with which
they begin: that many liquids form bubbles of all sizes under certain conditions. To be
sure, in specific circumstances modelers have been able to supply molecular explan-
ations of why these gas pockets form as they do, but such considerations are rather
irrelevant to the wider thrust of the renormalization arguments per se, which remain
applicable in the absence of established microscopic underpinnings. Such misunder-
standings of the underlying architecture have led to a situation where Batterman’s
correct identification of an important computational opportunity is often confused
with so-called Nagelian reduction.60
As I’ve already stressed, blending data extracted from different scales of observation
in Lagrange’s manner is vital to the progress of science as a reliable body of knowledge.
If you are secure in your knowledge that an iron bar will remain approximately rigid
within a larger machine, why not exploit that partial knowledge if you can, rather than
relying undependably upon speculative microscopic models? Indeed, canny amalgam-
ations of available information have provided developing science with an important set
of checks and balances that prevent it from wandering into the pastures of unmoored
speculation. But Theory T thinking pays little attention to reliability’s cultivation and
indulges exclusively in what I like to call a “man proposes; nature disposes” conception
of the scientific enterprise—set up your favorite Theory T and see how it fares
empirically. Little attention is paid to the importance of reliable architecture.61
Accordingly, philosophers of science are well advised to consider carefully whether a
textbook “derivation” actually represents a canny policy for effective data amalgamation.
To be sure, reaching a correct diagnosis often requires a lot of thought. Indeed, within
ongoing scientific practice, it must be conceded that a small bit of methodological
wuzziness often serves as an admirable Muse of Invention, for science commonly
enlarges its applicational scope through clever, albeit dodgy, forms of mix-and-match
combination. Indeed, the rewards of adventitious inferential exploration constitute one
of this book’s central themes—swift conceptual advance is often achieved through the
transfer and adaptation of a familiar computational routine from one setting to another.
At the same time, such advances frequently create a fair degree of conceptual confusion,
for the true strategic underpinnings of a newly adapted reasoning scheme are often
elusive. I believe that academic philosophers should recognize that their diagnostic
duties are more substantial—and less a priori determined—than many current writers
presume. Other essays in this collection focus centrally upon situations where misiden-
tified explanatory landscapes have created substantive philosophical confusion. The

59 Robert Batterman, The Devil in the Details (New York: Oxford University Press, 2002).
60 J. Butterfield, “Less is Different: Emergence and Reduction Reconciled,” Foundations of Physics 41 (2011).
61 Such neglect is especially congenial to the self-regard of pseudo-scientific cranks, who habitually declare, “There’s no real difference between my procedures and those of Einstein; he just happened to get lucky, that’s all.”
present essay has largely aimed at steering the reader’s attention to the applied math-
ematics resources that can assist us in diagnosing underlying architecture better,
although it has also supplied a few examples of the woes of misclassification as well
(e.g. mistaking an equilibrium pattern for an initial conditions landscape; misjudging a
mixed-level architecture in the same way; further examples appear in the appendices).
A chief obligation of the philosopher, with respect to both science and everyday life, is
that of delineating the factors that mold and inhibit human conceptual advance as we
wend an improving path through this world of ours. We should acknowledge the merits
and the harms of what might be called developmentally necessary confusion62—the fact
that words sometimes adjust their referential foci secretly when they migrate from
parent application A to daughter application B, even when the authors of the transition
fancy that they are “merely using old words in the same way.” The propensity for
cramming real-life explanatory complexity into the inadequate pigeonholes of Theory
T thinking impedes these recognitions, for its coarse and ill-defined categorizations blur
over the varied explanatory architectures we must recognize and distinguish.
Accordingly, I don’t view this book’s project as one of “providing a general theory of
scientific explanation.” I doubt that such a “theory” exists and don’t believe that
philosophers are obliged to seek one. However, the goals I instead recommend are
consequential enough. We should attempt to unravel the conceptual tangles in which
human reasoning frequently finds itself through no other fault than exploiting new
descriptive opportunities as they become available to us. I should also stress that, in this
essay’s brief survey, we have scarcely touched upon many of the diagnostic elements we
shall require later. In particular, I’ve not commented much on boundary conditions (and
their cousin forms of side condition), but they will become central in other essays.
In sum: eschewing purely logical classifications in favor of the sharper discriminations
of modern applied mathematics can help us better appreciate the wide variety of
explanatory landscapes that appear within science, according to the various strategies
of physics avoidance that underwrite their utilities. If we can classify the various
“condensed books” of effective physical methodology in a more detailed fashion than
Theory T thinking supplies, we will become better positioned for understanding the
complex wanderings to which promiscuous words like “cause” and “force” are liable.
the pathways of physics avoidance are tangled
Generalizing from classical physics to the human condition generally,

62 I believe that Joseph Camp has a similar theme in mind in his Confusion (Cambridge: Harvard University Press, 2002).
we should seek better diagnostic tools63 for unraveling the conceptual muddles into which
Homo sapiens inevitably falls, as the unavoidable side effects of its intellectual advance-
ments. To begin this project upon a firm grounding, we should first recognize that real-life
scientific explanation adopts the shortcuts and lifts of a game like “Chutes and Ladders”
more often than it dutifully follows the guileless pathways of a “Candy Land.”

Appendix 1: Initial/Boundary Condition Mimics


Let’s consider a seemingly simple optical situation where light enters a transparent
medium such as glass from the air. It is common for textbooks to loosely announce,
“Let’s consider what happens when oscilla-
tory data of a fixed frequency ω enters a
crystal.” This looks like a standard initial/
boundary value problem but it’s really not.
We have instead been invited to consider a
steady-state situation: What happens to con-
tinuously streaming light as it strikes our
glass? We are not asking, “What transitory
light patterns will form after the leading
edge of the incoming wave packet enters
the crystal?”64 Steady-state problems typic-
ally represent a projection of an evolution-
ary development onto a base manifold in the
manner of our fluid passing over a block.
monochromatic light entering a crystal
In terms of the relevant equations, the true time dependencies become factored away
through separation of variables, a manipulation technique which converts our governing
equations from hyperbolic to elliptic. This loss of hyperbolic character is not surprising
because the “steady” nature of steady-state conditions provides a convenient physics
avoidance perch from which the detailed complexities of normal temporal develop-
ments can be ignored, including all of their messy opening transients. However, as noted

63 Many of these missing instruments will undoubtedly be quite unlike the applied mathematics probes I have employed. I’m not trying to resolve every philosophical problem in this book!
64 Mathematicians are much clearer about such issues, whereas physics presentations are frequently confusing. Jean Leray comments upon the proper nature of steady state asymptotics as follows:
But let us assume that [our target equation] is a continuous mechanical system: therefore the physicists cannot impose its
initial condition and velocity: they have no more interest in Cauchy’s problem. What they are interested in are the waves.
(“The Meaning of Maslov’s Asymptotic Method,” Bulletin of the American Mathematical Society 5(1) (1981), p. 16)
A vexed question that often ripples through these pages concerns the puzzling friend and foe aspects of appeals to analytic
function representation within science. Like appeals to the notion of rigidity, their invocation often opens doors to
significant forms of physics avoidance, yet, for the reasons cited by Hadamard, they contain descriptive ingredients (viz.
their tapeworm reproducibility) that seem untrue to nature itself. Philosophically, we should seek deeper enlightenment
on these representational questions.
before, “time” persistently sneaks back into explanatory settings from which it has been
officially banned. In our fluid-flowing-steadily-past-a-block circumstances, we can readily
paint pretend-time arrows onto the base manifold streamlines, indicating that the water
flows in from the left-hand side of the diagram. But this form of pretend-time reintro-
duction should not be mistaken for a true initial/boundary value problem.
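In schematic terms, the factoring-away of time mentioned above is the familiar separation-of-variables step (a generic scalar wave field with wave speed c is assumed here, not any particular formula from the text). Substituting a steadily oscillating trial solution

\[
u(\mathbf{x},t) \;=\; \hat{u}(\mathbf{x})\, e^{-i\omega t}
\]

into the hyperbolic wave equation \( \partial^2 u / \partial t^2 = c^2 \nabla^2 u \) leaves behind the elliptic Helmholtz equation

\[
\nabla^2 \hat{u} + \frac{\omega^2}{c^2}\, \hat{u} \;=\; 0 ,
\]

in which no true time variable remains.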
Similar circumstances apply when we ask how monochromatic light enters our glass
block or scatters from a perfectly reflecting, infinitely long razor blade. To talk of
monochromatic light in such a context implicitly requires that we are considering an
evenly spaced wave pattern coming in from spatial infinity—i.e. another steady-state
circumstance, in which we have again factored away its true time components. With
respect to our modeling equations, we move from a hyperbolic Maxwellian wave
equation containing a true time variable to the elliptic Helmholtz equation appropriate
to steady-state conditions which lack any mention of time. The recipe for allocating
pretend-time arrows to our wavefront-entering-glass circumstances is straightforward
because no light gets reflected backwards. The razor blade situation is a bit more
complicated because the incoming light becomes thoroughly mixed with the outgoing
light everywhere in the diagram. Accordingly, the incoming light can’t be straightfor-
wardly identified with the “flow witnessed on the top of the razor.” However, Som-
merfeld found an exact solution for the razor
problem where the “incoming light” correl-
ates with a major term within his mathemat-
ical development whereas the “outgoing
light” becomes captured within a mixture of
other terms. This decompositional fact
allows us to introduce suitable pretend-time
arrows into our (strictly speaking) timeless
steady-state solution. Once again, we are not addressing our circumstances in a genuine initial/boundary value mode.
painting pretend-time arrows on a timeless base manifold
Things quickly become confusing
because Sommerfeld’s starting point allows
us to build up localized wave packets from
his basic patterns through interference
between the monochromatic solutions for
a spectrum of wavelengths.
a wave packet
These composite structures naturally move through our
diagram as coherent entities, permitting the reintroduction of robust true-time dynamic
arrows based upon the differing phases of the monochromatic representations contained
as factors within the wave packets.65 Through such roundabout means, we obtain a

65 We construct these dynamic arrows from the phase differences between the wave packet components following the
basic lift-off-a-base-manifold techniques outlined in Essay 3. I will discuss further subtleties of “phase” in a forthcoming
essay, “How ‘Wave Front’ Found its Truth-Values.”
description of a temporally adjusting field that may appear as if it represents a standard
initial/boundary value problem. Despite these superficial appearances, we have, in fact,
set up a pertinent mathematical modeling via a detour through formulas of a
differing equational signature. Failing to recognize the strategic logic behind these
circuitous procedures has generated many conceptual puzzles within the field of optics.
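A small numerical sketch of this packet-building step may be helpful (entirely illustrative: the Gaussian spectrum, the non-dispersive rule ω(k) = ck, and all of the numbers below are assumptions, not anything extracted from Sommerfeld’s analysis):

```python
# Rebuild a moving wave packet by superposing steady-state monochromatic
# solutions exp(i(k x - w(k) t)): the packet's "true-time" drift re-emerges
# from the relative phases of the fixed components. Illustrative values only.
import numpy as np

c = 1.0                                           # wave speed (assumed)
x = np.linspace(-50.0, 150.0, 4001)
k_values = np.linspace(0.8, 1.2, 81)              # narrow band of wavenumbers
weights = np.exp(-((k_values - 1.0) / 0.1) ** 2)  # Gaussian spectrum (assumed)

def packet(t):
    """Complex field at time t: a weighted sum of monochromatic solutions."""
    field = np.zeros_like(x, dtype=complex)
    for k, wgt in zip(k_values, weights):
        field += wgt * np.exp(1j * (k * x - c * k * t))  # w(k) = c*k, non-dispersive
    return field

for t in (0.0, 40.0, 80.0):
    peak = x[np.argmax(np.abs(packet(t)))]        # envelope maximum
    print(t, round(float(peak), 1))               # drifts at the group velocity c
```

Nothing beyond the relative phases of the unchanging monochromatic factors is needed to recover the packet’s motion across the diagram.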
This appendix surveys one of the simplest examples. Let’s return to our opening
circumstances: what happens when light passes from air into a block of glass? Doing so
requires that we employ different wave equations to describe light’s passage within the
two mediums. Can we figure out, in general terms, how these matched equational sets
should relate to one another? A clever stratagem enters the scene, one that relies
significantly upon the empirical observation that most optical materials can support
steady-state dispersive solutions—that, after a short initial interval, the glass will allow
the incoming light to pass through in a possibly complex, yet steady, manner. That
unruffled transmission through glass is possible represents a purely empirical datum, for
many materials react to incoming light in an inherently unsteady way, e.g. they
disintegrate or blacken. Surprisingly, this rather innocuous tolerance of steady-state
activity can be deftly exploited to glean important information about light’s behavior
within very varied circumstances. In particular, we can derive strong conclusions about
two important issues affecting an incoming wave packet: (a) To what degree will the new
medium cause the monochromatic components of our packet to travel at different speeds
(or “disperse”) relative to one another? (b) To what degree will the medium cause
monochromatic waves of different wavelengths to lose energy (or “attenuate”) relative
to one another? At first blush, it would seem as if these two sorts of behavior should
prove unrelated. In materials that don’t support steady-state behaviors, they are not.
However, our (a) and (b) responses must cooperate in a definite manner if steady light
flow is possible, for it turns out, under remarkably weak assumptions, that such
behaviors can be achieved only if the extinction rate χ2 of a signal at frequency ω is
related to its susceptibility χ1 through the Kramers–Kronig relationship:
\[
\chi_2(\omega) \;=\; \frac{2\omega}{\pi}\, P\!\int_0^{\infty} \frac{\chi_1(\omega')}{\omega'^{\,2} - \omega^2}\, d\omega'
\]

where we have integrated over the susceptibilities of all possible frequencies.66
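As a purely illustrative numerical check on a relation of this shape (a single Lorentz-oscillator susceptibility is assumed, together with the usual convention that χ is analytic in the upper half-plane, so the sign of the prefactor may differ from other bookkeeping choices; the parameters, grid, and cutoff are invented for the sketch):

```python
# Recover the extinction part chi2 of a model susceptibility from its real
# part chi1 by principal-value integration and compare with the exact value.
# Lorentz-oscillator parameters, grid, and cutoff are illustrative assumptions.
import numpy as np

w0, gamma, wp = 1.0, 0.1, 1.0                 # resonance, damping, strength

def chi(w):
    return wp**2 / (w0**2 - w**2 - 1j * gamma * w)

wprime = np.linspace(0.0, 500.0, 500001)      # integration grid for omega'
chi1 = chi(wprime).real

def chi2_from_kk(w):
    # Subtracting chi1(w) removes the pole at w' = w; the leftover principal-
    # value integral of 1/(w'^2 - w^2) over [0, infinity) vanishes identically.
    denom = wprime**2 - w**2
    num = chi1 - chi(w).real
    integrand = np.zeros_like(wprime)
    ok = np.abs(denom) > 1e-9
    integrand[ok] = num[ok] / denom[ok]
    pv = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wprime)))
    return -(2.0 * w / np.pi) * pv            # overall sign set by the chosen convention

for w in (0.5, 0.9, 1.1, 2.0):
    print(w, round(chi2_from_kk(w), 4), round(chi(w).imag, 4))
```

The two columns agree closely (up to the finite grid and cutoff), which is just the across-all-frequencies cooperation described in the following paragraph.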


Textbook presentations of this reasoning often render the machinery of this argument
rather mysterious, largely because they articulate its steps in fake “initial/boundary
value” terms. In reality, these arguments trade centrally on the fact that analytic
functions, with their strong “analytic continuation” properties, are descriptively appro-
priate to equilibrium or steady-state circumstances, which display the holistic cooper-
ation of parts required for this treatment. When we examine the ramifications of this
“cooperation” on the complex plane, we find, employing standard arguments exploiting
contour integration, that the extinction rate χ2(ω) must depend upon the various χ1(ω′)

66 “P” indicates “Cauchy’s principal part.”
in the Kramers–Kronig relationship, due to the fact that the only salient pole of our
formula lies upon the real axis at ω itself. This is a classic calculus of analytic functions
computation that reappears in many forms of applied mathematics reasoning. From a
physical point of view, the underlying analyticity traces to the across-all-frequencies
cooperation that a material must display to support a steady-state response. But this
crucial background assumption is easily overlooked within the “pull a rabbit out of a hat”
formal manipulations in which the Kramers–Kronig argumentation is often articulated.
The next step is to generalize these conclusions by presuming that the same disper-
sion relations maintain themselves even when light is not acting in a steady-state
manner. How could it be otherwise? Glass isn’t smart enough to plot, “Oh, I’ll disperse
incoming light by different rules according to how long the experimenter plans to leave
her light on.” In this fashion, we can exploit the special analytic function interconnec-
tions characteristic of steady-state behavior as a means of bootstrapping our way to more
general conclusions as to how light will pass from one medium into another. In short,
standard approaches to the dispersion relationships allow us to ascertain important
system parameters through a modicum of higher-scale information (“glass can support
a steady-state response to an incoming light stream”) and a clever shift to an appropriate
base manifold.
As noted above, textbooks often present these procedures in the misleading guise of
an ersatz “initial/boundary value problem,” relying upon the pretend-times we can
associate with the spatial wavelengths corresponding to the frequencies ω. Philosophy of
science has fallen into considerable confusion through its failure to unmask these initial/
boundary value problem mimics. Here’s an example. In the course of otherwise worthy
attempts to rebut Russell’s claims that modern science eschews a robust notion of
causation, writers within Nancy Cartwright’s school have argued that causal notions
should be invoked as a doctrinal supplement to the canonical laws of physics upon which
Russell bases his rejection. As already noted, this talk of “supplementation” is a mistake;
the basic relationships of causal evolutionary development are already marked within
the mathematics of hyperbolic equations. Following Cartwright’s lead, Mathias Frisch
cites the invocation of causality in standard derivations of the dispersion relations as
exemplars of this “supplementation,”67 which he regards as a new constraint that weeds
out spurious solutions that the equations
of standard electrodynamics would other-
wise accept. Consider the two forms of
fundamental solution that we might con-
template with respect to concentric wave
motion: (i) retarded motion, where the
waves travel outwards from a source and
(ii) advanced motion, where they travel inward towards a focus.
advanced and retarded waves
In nature—ripples

67 Mathias Frisch, Inconsistency, Asymmetry, and Non-Locality (Oxford: Oxford University Press, 2005).
on a pond are commonly cited—we almost invariably witness events of type (i), whereas
those of type (ii) appear unphysical (and only appear in films that are run backwards).
Following other writers, Frisch claims that we should therefore invoke “causality” as a
supplementary condition to standard electrodynamics as a means of discarding unphys-
ical solutions such as (ii).
In truth, patterns of type (ii) are not quite so rare in nature as all of that, for lenses or
curved barriers readily produce ingoing waves of the requisite formal character,
although, admittedly, somewhat imperfectly. So an outright ban on such solutions is
not a viable point of view.
In any event, Frisch’s chief mistake is that of presuming
that we are dealing with a standard initial/boundary
value problem that requires further pruning. These mis-
apprehensions trace to improperly parsing the special-
case-to-general-fixing-of-parameters reasoning traced
above. Optical circumstances are rarely suited to true
evolutionary initial/boundary value modelings and
almost invariably exploit some covert form of physics
avoidance consideration to skirt these obstacles. Two-
thirds of the way through any good textbook on the
subject, one is apt to find a passage to the effect, “Up to
this point, we’ve tried not to lie to you too much, but we
properly need to think of the phases and frequencies of
light in a different manner.”68 Explanations within elem-
entary optics amply illustrate the familiar admonition: “things are seldom what they seem.”

Appendix 2: Constraints and the Physics of Mechanism


In the foregoing we observed how the imposition of a higher-scale constraint such as a
rod’s rigidity alters the salient explanatory landscape. In this note we’ll study what
happens when we impose a large number of these constraints, in the guise of rigid bars
freely pinned to one another. We obtain a hierarchy of structures as illustrated, ranging
from the under-constrained gizmo at the top that can freely wiggle in many ways, to the
bottom item, which has been welded together with excessive links that may trap a large

68 For example, Geoffrey Brooker writes (Modern Classical Optics (Oxford: Oxford University Press, 2003), p. 219):
Any light that isn’t perfectly monochromatic (and that’s every kind of light) necessarily has some degree of randomness. In
Chapter 9 we described this randomness . . . [but] there were obvious inadequacies; having no quantity to represent a “degree
of coherence”; we could not give a precise condition for coherence to disappear or for fringes to be so indistinct that they should
be considered absent. Moreover, we found ourselves using awkward wordings: “obtaining knowledge”; “permits us to
predict”; “so clumsy that interference fails to manifest itself”; “a wavetrain is the whole wave defined for all t.”
amount of strain energy inside the device. The first situ-
ation remains firmly within the ambit of normal Lagran-
gian mechanics, where the applicable constraints must be
supplemented by additional considerations of applied
force. Our bottom two cases belong to the subject of
structural mechanics.69 But suppose that a collection of
rigid parts is assembled in the second manner illustrated,
so that the whole comprises a so-called closed kinematic
chain in which the pinned links only permit a single free-
dom of internal movement, i.e. turning the crank 2 on the
lower left side of the crank-rocker mechanism illustrated
completely determines the position of every other part of
the device. This mechanical integration renders the device
suitable for transmuting input motions and forces in vari-
ous ways. A circular motion applied at the crank will be
converted into a wide range of rocking patterns in the
follower link 4, depending upon the sizing of its parts.
These transfigurative powers explain why “mechanism” is
the official name for this special class of constrained
relationships.
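A back-of-envelope version of this hierarchy is the standard planar mobility count (the Kutzbach–Gruebler tally; the particular link-and-joint numbers below are illustrative choices, not readings of the text’s figures):

```python
# Count the freedoms of a planar assemblage of rigid links joined by pin
# (revolute) joints: three freedoms per moving link, minus two per pin joint.
def planar_mobility(links, pin_joints):
    """Kutzbach-Gruebler count for planar linkages with pin joints only."""
    return 3 * (links - 1) - 2 * pin_joints

print(planar_mobility(5, 4))   # open chain of four bars: 4 loose wigglings
print(planar_mobility(4, 4))   # closed four-bar (crank-rocker) chain: exactly 1
print(planar_mobility(3, 3))   # pinned triangle: 0, a determinate structure
print(planar_mobility(6, 8))   # quadrilateral braced by both diagonals: -1, indeterminate
```

A count of one marks the closed kinematic chains discussed in the surrounding paragraphs; zero and negative counts belong to the determinate and indeterminate structures of structural mechanics.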
In these special arrangements, exploitable opportunities
open up for reasoning along considerably different contours than are otherwise typical within general mechanics.
ways in which rigid bars can be linked together
Franz Reuleaux, the founder of the modern theory of
machines, recognized this significant effacement from regular mechanical principle in
pungent terms:
[T]he sense of the reality of this separation
[of the theory of mechanism from gen-
eral Newton-style mechanics] has been
felt not only by engineers or others actually
engaged in machine design, but also by those
theoretical writers who have had any prac-
tical knowledge of machinery, in spite of the
increasing tendency in the treatment of
mechanical science to thin away machine-
problems into those of pure mechanics.70
Mathematically, this “separation” traces to
the following formal considerations.
a crank-rocker mechanism
In dealing with a lesser number of constraints

69 The determinate and indeterminate railway bridges of Essay 8 supply pertinent examples of structures.
70 Franz Reuleaux, The Kinematics of Machinery (New York: Dover, 1976), p. 30. Reuleaux’s insights will concern us again in Essay 7.
using Lagrange’s virtual work techniques, we utilize special devices (Lagrange multipliers)
to persuade an underlying set of differential equations to accept the helpful partial data
supplied by our knowledge of upper-scale constraint. In simple situations like our crank-
rocker device, the geometrical requirements imposed by the pinnings can be registered by
simple algebraic equations. When we deal with a mechanism, so many algebraic equations
of this type have become superadded that their combined effects overwhelm and override
any erstwhile differential equation processes that might operate on a more refined size
scale. The upshot is a revised explanatory landscape in which the principles of geometrical
reasoning utterly dominate. As a result, great opportunities open up for reasoning about
machines in very effective ways.
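To make the algebraic character concrete, here is a small sketch (the link lengths are invented; nothing is measured off the text’s figure) in which the crank angle fixes the rocker’s position through nothing more than the intersection of two circles:

```python
# For a planar four-bar linkage, the crank angle alone determines the
# coupler-rocker joint by circle-circle intersection: pure algebra, with no
# differential equations in sight. Link lengths below are invented.
import numpy as np

d, a, b, c = 4.0, 1.0, 3.5, 2.5        # ground, crank, coupler, rocker lengths

def rocker_joint(theta2, branch=1.0):
    """Position of the coupler-rocker pin for a given crank angle (radians)."""
    A = np.array([a * np.cos(theta2), a * np.sin(theta2)])   # crank pin
    O4 = np.array([d, 0.0])                                  # rocker's fixed pivot
    r = O4 - A
    dist = float(np.linalg.norm(r))
    h = (b**2 - c**2 + dist**2) / (2.0 * dist)               # along-line offset
    perp = np.sqrt(max(b**2 - h**2, 0.0))                    # off-line offset
    base = A + (h / dist) * r
    return base + branch * (perp / dist) * np.array([-r[1], r[0]])

for deg in (0, 60, 120, 180, 240, 300):
    B = rocker_joint(np.radians(deg))
    theta4 = np.degrees(np.arctan2(B[1], B[0] - d))          # rocker angle
    print(deg, np.round(B, 3), round(float(theta4), 1))
```

The choice of assembly branch aside, everything is settled by the pinning geometry, which is just the dominance of geometrical reasoning noted above.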
The availability of this special terrain for reasoning has created great confusions
within the development of physics, as strong propensities for identifying “classical
physics” with “the physics of machinery” have riddled its history. Thus we encounter
the popular stereotype of “Newton’s clockwork universe,” despite the fact that most
precisifications of what “Newtonian physics” comprises cannot deal with clocks ably.
In contrast to Newton, strong mechanism-leaning predilections lodge deeply within
Descartes’ breast. He often advances physical claims for which he has been roundly
criticized by commentators, but which make good sense in the context of machinery.
I believe that many of these complaints trace to an inadequate recognition of the
special explanatory landscape in which talk of mechanisms naturally resides. Consider,
in particular, Descartes’ notorious relationalism, the notion that all talk of “position”
within physics should refer only to the internal relationships of parts and not to any
background container space (or space-time) such as Newton’s own absolutist concep-
tion. Familiar phenomena such as Newton’s celebrated bucket experiment are often
cited to demonstrate the patent errors of Cartesian thinking. But consider the internal
mathematical closure of mechanisms as defined here—as long as parts never bend or
distort, only a single freedom of movement is left open to the device. Normally, this
solitary factor is settled by an outside agency turning the crank, in which case time
enters our picture as a control space intrusion in the manner discussed above.71 “But
what should happen,” Descartes asks (to phrase his concerns in anachronistic terms),
“if we inject energy into our device and don’t interfere with its subsequent motions
through friction or other coupling to external affairs?” His intuitive expectation: the
linkage will continue turning through the same mechanical cycles forever, indicating
that it has not lost any of its kinetic capacity to perform work upon outside agencies.
In particular, if we gingerly detach a whirling mechanism from its base, taking care to
not interfere with its internal movements, Descartes anticipates that these internal

71 In Wandering Significance, I describe these arrangements as “here-versus-there determinism” in contrast to the
normal “Cauchy surface determinism” pertinent to autonomous initial/boundary value problems. Note that our device
always displays the same cyclic behaviors no matter where the input movement is applied. The official name for this
symmetry is inversion of the mechanism.
motions will not be affected at all, no matter
where in Absolute Space we choose to drag
our gyrating whatsit. This is a conservation
principle—if we never rob a moving mechan-
ism of its capacities for performing work, it
will retain that ability forever. This energetic
intuition strikes me as supplying the deepest
root of most philosophical inclinations
towards relationalism.
Such assumptions will undoubtedly strike
many of us moderns as prima facie plausible
as well, although, in truth, nature itself has
decided that some of our mechanism’s parts
must bend or sustain high interior stresses
if they become vigorously accelerated by
Coriolis effects.
effacement from a container space
In truth, strictly rigid rods are no better suited to a Newtonian universe than to an Einsteinian one! But we readily become
confused about such matters because many pleasant pathways of useful physics avoid-
ance open up as soon as we learn to exploit the partial data offered by an upper-scale
constraint such as rigidity, whether we do this in Lagrange’s or Reuleaux’s manner. And
that is why our Physics 101 instructors freely mix constraints with regular Newton’s
universe ingredients. As per usual, we don’t
notice that the background explanatory land-
scape can alter considerably within these seem-
ingly innocent forays and only later do we
dimly sense that “something funny” has
affected key components of our descriptive
vocabularies, such as “force.”72
Let us briskly consider how the word
“cause” behaves when machinal constraints
alter the explanatory landscape. Consider the
complicated device illustrated that converts a
circular motion on the right-hand crank into a
back-and-forth motion at its highest extremity
suitable for sewing machine stitching. Consult-
ing the illustration, I find it hard to anticipate
with certainty how the gizmo will move as one
turns the crank leftward (experts with a better
“feeling” for mechanisms can probably do focus upon a procedural concern

72 See the appendix to Essay 7.
better). In particular, I’d like to know whether the triangular piece on the left will rock
clockwise or not. And I might naturally express my worries thus: “I see two causal
pathways leading from the crank to the triangle, but which one will dominate?”
This seems an entirely natural question, but what exactly does it ask for? I certainly
can’t be asking about the compression waves that carry the energy inputted at the crank
through the molecular matrix over to the triangle, for those undoubtedly travel in
patterns bearing no discernible relationship to the answer I expect.73 Nor am I concerned
with any of the manipulation dependencies that Jim Woodward highlights in his work.
Insofar as I can see, the question I’ve asked is entirely procedural in character. If we are
given a set of entangled equations to solve, we would often like to know whether one of
these variables (z, say) will turn out positive or negative, for that information can help us
home in on a correct solution to the whole set through educated guesswork more swiftly
(in contrast, if we guess this sign wrongly, we may waste a lot of time pursuing
unworkable answers). This is entirely a procedural matter. Our mechanism question,
“in which direction is the triangle caused to move?” attaches itself to similar algorithmic
concerns—how can I best begin the process of solving how this device will behave?74
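The procedural flavor can be mimicked in miniature (with a made-up pair of equations; nothing below models the sewing linkage itself): handing an iterative solver a guess about the sign of z steers it toward one solution branch rather than the other.

```python
# A toy illustration of the procedural point: for entangled nonlinear
# equations, an initial guess about the sign of one unknown decides which
# solution branch the root-finder settles into. The equations are invented.
from scipy.optimize import fsolve

def equations(v):
    x, z = v
    return [z**2 - 4.0,          # z can come out +2 or -2
            x - z - 1.0]         # and x is entangled with that choice

for z_guess in (1.0, -1.0):
    x_sol, z_sol = fsolve(equations, [0.0, z_guess])
    print(z_guess, round(float(x_sol), 3), round(float(z_sol), 3))
```

Guessing the sign correctly at the outset is worth something for exactly the bookkeeping reasons sketched in the surrounding paragraph.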
Essay 6 argues that the word “cause” serves, in part, as a central tool of linguistic
management, in the sense that we employ the term to arrange the component strategies
within a lengthy reasoning process. Typically, these applications supply “cause” with a
good deal of robust physical content as well, as happens when we talk of the “causal
processes” involved in wave motion. But our sewing machine appears to represent a
situation where most of the customary physical impedimenta have dropped away,
leaving behind a rather pure exemplar of procedural significance only.75

73 This is not to say that these same wave movements don’t represent exactly the “causal processes” we are interested in when we worry whether excessive compression waves might damage some of our device’s component parts.
74 I discuss these themes more fully in Chapter 9 of Wandering Significance.
75 Philosophers sometimes recommend that the notion of “mechanism” best captures the central features we look for in weighing “cause” and “effect.” Reuleaux’s analysis of the distinguishing features of a “mechanism” in everyday application recommends against such a policy.
