
Process Studies Supplement

Issue 24 (2017)

Process Physics, Time, and Consciousness:
Nature as an Internally Meaningful, Habit-Establishing Process

Jeroen B. J. Van Dijk

Jeroen van Dijk is an independent scholar educated as a Mechanical Engineer at
the Hogeschool Eindhoven in The Netherlands. His main research interests are
foundational problems in contemporary mainstream physics and the connection
between Reg Cahill's process physics and process-oriented theories of how
consciousness works. Email: jvandijk@all-is-flux.nl

Abstract: Ever since Einstein's arrival at the forefront of science, mainstream
physicists have tended to think of nature as a giant 4-dimensional spacetime
continuum in which all of eternity exists all at once in one timeless block
universe. Accordingly, much to the dismay of more process-minded researchers,
the experience of an ongoing present moment is typically branded as illusory.
Mainstream physics is having a hard time, however, providing a well-founded
defense of this alleged illusoriness of time. This is because physics, as an
empirical science, is itself utterly dependent on experience to begin with.
Moreover, if nature were indeed purely physical-as contemporary mainstream
physics wants us to believe-it is quite difficult to see how it could ever be
able to give rise to something so explicitly non-physical like conscious
experience. On top of this, the argument of time's illusoriness becomes even
more doubtful in view of the extraordinary level of sophistication that would
be required for our conscious experience to achieve such an utterly convincing,
but-physically speaking-pointless illusion. It is because of problems like
these that process thought has persistently objected to the "eternalism" of
mainstream physics. Recently physicist Lee Smolin has brought up some other
major arguments against this timeless picture in his controversial 2013 book
Time Reborn. Although he passionately argues that physics should take an
entirely different direction, he admits that he has no readily available roadmap
to success. Fortunately, however, over the last 15 years or so, a neo-
Whiteheadian, biocentric way of doing foundational physics, namely Reg
Cahill's process physics, has made its appearance. According to process
physics, nature is a routine-driven or habit-based process, rather than a
changeless world whose observable phenomena are governed by eternally
fixed and highly deterministic physical laws. Although, in the currently


prevailing view, the universe is seen as a law-abiding natural world whose
entire history-past, present, and future-must have been "called forth by
law" in one go at the big bang, process physics suggests that the universe
has come into actuality from an initially undifferentiated, orderless background
of dispositional activity patterns which was driven by a habit-establishing,
iterative update routine. In the process physics model, all such habit-establishing
activity patterns are "mutually in-formative" as they are actively making a
meaningful difference to (i.e., in-forming) each other. This mutual in-
formativeness among activity patterns will thus actively give shape to ongoing
structure formation within nature as a whole, thereby renewing it through
stochastic (hence, novelty-infusing) update iterations. In this way, the process
of nature starts to evolve from its initial featurelessness to then branch out
to higher and higher levels of complexity, thus eventually even leading to
neural network-like structure formation on the universe's supragalactic level
of organization.1 Because of this noise-driven branching behavior, the natural
universe can be thought of as habit-bound with a potential for creative novelty
and open-ended evolution. Furthermore, three-dimensionality, gravitational
and relativistic effects, nonlocality, and near-classical behavior are spontaneously
emergent within the process physics model. Also, the model's constantly
renewing activity patterns bring along an inherent present moment effect,
thereby reintroducing time in terms of the system's ongoing change as it goes
through its cyclic iterations. As a final point, subjectivity-in the form of
mutual in-formativeness-is a naturally evolving, innate feature, not a
coincidental, later-arriving side-effect.

Table of Contents
1. Introduction
1.1 Getting to know process physics in terms of time, life, and consciousness

2. Time
2.1 From the process of nature to the geometrical timeline
2.1.1 Aristotle's teleological physics
2.1.2 Galileo's non-teleological physics
2.1.3 The deficiencies of the geometrical timeline
2.2 From the geometrical timeline to time-based equations
2.3 From time-based equations to physical laws
2.3.1 The flawed notion of physical laws
2.4 From geometrization to the timeless block universe
2.5 Arguments against the block universe interpretation
2.5.1 The real world out there is objectively real and mind-independent
(or not?)

2.5.2 Events in nature reside in a geometrical continuum (or not?)


2.5.3 Relativity of simultaneity means that our experience of time is
illusory (or not?)

3. Doing physics in a box


3.1 The Newtonian paradigm
3.1.1 The exophysical aspect of the Newtonian paradigm
3.1.2 The decompositional aspect of the Newtonian paradigm
3.1.3 From quantum wholeness to the subject-object split and non-decompositional decomposition
3.2 Measurement and information theory
3.2.1 Looking at measurement in a purely quantitative, information-
theoretical way
3.2.2 The modeling relation: relating empirical data to data-reproducing
algorithms
3.2.3 From information acquisition to info-computationalism
3.2.4 Information, quantum, and psycho-physical parallelism
3.2.5 From psycho-physical parallelism to measurement as a semiosic
process
3.3 From doing physics in a box to doing physics without a box

4. Life and consciousness


4.1 The evolution of the eye
4.2 From info-computationally inspired neo-Darwinism to "lived-through
subjectivity" as a relevant factor in evolution
4.2.1 From the info-computational view to information as mutualistic
processuality
4.2.2 From the non-equilibrium universe to the beginning of life as an
autocatalytic cycle
4.2.3 From environmental stimuli to early subjective experience
4.2.4 From early photosensitivity to value-laden perception-action cycles
4.3 Perceptual categorization, consciousness, and mutual informativeness
4.3.1 Integration, differentiation, and the mind-brain's mutual informativeness
4.3.2 Self-organization and the noisy brain
4.3.3 Self-organized criticality and action-potentiation networks

5. Process physics: A biocentric way of doing physics without a box


5.1 Requirements for doing physics without a box
5.2 Process physics as a possible candidate for doing physics without a box
5.3 Process physics: going into the details
5.3.1 Foundationless foundations, noisiness, mutual informativeness, and
lawlessness

5.3.2 Process physics and its roots in quantum field theory


5.3.3 Process physics and its stochastic, iterative update routine
5.3.4 From pre-geometry to the emergence of three-dimensionality
5.3.5 Process physics, intrinsic subjectivity, and an inherent present
moment effect

6. Overview and Conclusions

Appendix A: Addendum to §2.5.2: Events in nature can be pinpointed geometrically (or not?)
Works Cited

List of Figures
2-1: Bronze ball rolling down an inclined plane (with s-t diagram)
2-2: The earth-moon system in a temporal universe and in a block universe
2-3: Minkowski space-time diagram
3-1: Simplified universe of discourse in the exophysical-decompositional paradigm
3-2: Rothstein's analogy (between communication and measurement systems)
3-3: Steps towards Robert Rosen's modeling relation
3-4: Universe of discourse with von Neumann's object-subject boundary
3-5: From background semiosic cycle of preparation, observation, and formalization
to foreground data and algorithm
4-1: Conscious observer as an embedded endo-process within the greater embedding
omni-process which is the participatory universe
4-2: Subsequent stages in the evolution of the eye
4-3: Stationary and swarming cortical activity patterns in non-REM sleep and
wakefulness
4-4: Varying degrees of neuroanatomical complexity in a young, mature, and
deteriorating brain
5-1: Schematic representation of interconnecting nodes
5-2: Artistic visualization of the stochastic iteration routine
5-3: Tree-graphs of large-valued nodes Bij and their connection distances Dk
5-4: Emergent 3D-embeddability with "islands" of strong connectivity
5-5: [Dk-k]-diagram
5-6: Fractal (self-similar) dynamical 3-space
5-7: Gray-Scott reaction-diffusion model
5-8: Fractal pattern formation leading to branching networks
5-9: Seamlessly integrated observer-world system with multiple levels of
self-similar, neuromorphic organization
5-10: Structuration of the universe at the level of supragalactic clusters

1. Introduction
This article is intended as a follow-up to Lee Smolin's Time Reborn
(2013). In his controversial, yet well-received, book Smolin tried to argue
against contemporary mainstream physics' firm belief in the unreality of
time. Following in many of his footsteps at first, we will eventually try
to continue where he left off: with the suggestion that nature evolves
according to a principle of precedence and that our physics should therefore
be routine-driven, instead of based on eternally valid static laws.
So, starting with the basics, we will first follow the historical path
from: (1) the debate between Parmenides and Heraclitus regarding whether
reality should be thought of as existing statically or as dynamically
becoming; (2) Plato's idea of nature as an imperfect reflection of the
eternal, perfect realm of ideal forms; (3) the geocentric teleological physics
ofAristotle, who thought of time as an abstracted measure of motion; (4)
the heliocentric non-teleological physics of Galileo, who turned time into
a quantifiable one-dimensional coordinate line; and (5) the mechanistic
physics of the Newtonian-Laplacean clockwork universe, with its concepts
of absolute space as a 3-dimensional geometrical volume that exists
independently of its contents (i.e., the physical constituents of the "clockwork
universe") and absolute time as an externally running chain of intervals
that pass by at the same rate for everyone and everything in the entire
universe.
Then, at the end of this list, we will find what is today almost
unanimously considered to be one of the great highpoints in the history
of physics and, thus, the absolute climax in the history of thinking about
time, namely, (6) Einstein's relativistic physics, which, due to Minkowski's
block universe interpretation, led to the now well-established belief that
nature is actually a giant 4-dimensional spacetime continuum in which
all of eternity exists together at once as a huge static and timeless expanse.
That is, following the line of reasoning in the block universe
interpretation, which argued that the relativity of simultaneity2 necessarily
involved the unreality of time passing by, many physicists became convinced
that our experience of time had to be illusory. Supported by the wave of
public enthusiasm that followed Arthur Eddington's confirmation of
Einstein's prediction of the bending of starlight around the sun (1919),
this belief in the unreality of time grew in popularity until it reached the
status of a logical necessity-a rock-solid truth.
It is, however, very hard for mainstream physics to provide a truly
watertight defense of this illusoriness of time. This is first because physics,
as an empirical science, is itself utterly dependent on experience-since
it is instrument- as well as sensory-based. Second, if our experience of
time were indeed illusory, it is still exceptionally difficult to see how and
why it should ever have evolved at all. After all, in the context of the
prebiotic universe-which is, according to current mainstream belief, an
entirely inanimate and purely physical world-the emergence of such an
extraordinarily sophisticated and convincing illusion like our conscious
experience of time would be utterly pointless and inexplicable. It would
thus be entirely impossible to explain in a logically acceptable way how
our conscious illusion of an ever-changing present moment could ever
relate to such a becomingless whole as the timeless block universe (Capek
521). Or, to put it more graphically, it would become impossible to explain
why we are not living in the reign of George III (Ibid.; McTaggart 160)-or
any other past or future ruler, for that matter. Last, but not least, then,
although the block universe interpretation may indeed seem well-structured
and crystal-clear at first sight, on closer scrutiny its arguments in favor
of the unreality of time are not at all as firm and sound as one might hope for.3
It is because of reasons like these that process thought has always
been opposed to the portrayal of nature as a purely physical and timeless
realm. In hindsight, we can now say that its minority opinion has forced
process thought into a long-lasting uphill battle. The empirically gathered
evidence of mainstream physics had proven so useful and convincing, and
its mathematics seemed so aesthetically pleasing, that any criticism did
not stand a chance if it were based on philosophical grounds alone.
Just recently, however, leading physicist Lee Smolin managed to
breathe some new life into the debate that, so it was commonly thought,
had already been won by mainstream physics long ago. In his critically
acclaimed book Time Reborn, he persuasively argues against the existence
of eternally valid laws of nature. That is, he claims that it is a mistake to
think that such local "laws" could "govern" the behavior of the universe
at its largest scale:
My argument starts with a simple observation: The success of
scientific theories from Newton to the present day is based on their
use of a particular framework of explanation invented by Newton.
This framework views nature as consisting of nothing but particles
with timeless properties whose motions and interactions are
determined by timeless laws. The properties of the particles, such as
their masses and electric charges, never change, and neither do the
laws that act on them. This framework is ideally suited to describe
small parts of the universe, but it falls apart when we attempt to
apply it to the universe as a whole. All the major theories of physics
are about parts of the universe .... When we describe a part of the
universe we leave ourselves and our measuring tools outside the
system. We leave out our role in selecting or preparing the system we
study. We leave out the references that serve to establish where the
system is. Most crucially for our concern with the nature of time, we
leave out the clocks by which we measure change in the system. [But,
w]hen we do cosmology, we confront a novel circumstance: It is
impossible to get outside the system we're studying when that system
is the entire universe. (Smolin, Time, xxiii)
What Smolin objects to particularly is any attempt to extrapolate our
conventional way of doing "physics in a box" to the universe at large-an
objection, by the way, that he shares with fellow physicist Joe Rosen
(72-75). Indeed, it has been a historically hugely successful method (1) to
isolate a small subsystem from the rest of the universe; then (2) to try to
extract empirical data from it; and then, finally, (3) to put together a
"lawful" physical equation on the basis of these data so that the behavior
of this isolated subsystem can be represented and computed with great
precision. Accordingly, the Newtonian framework thus achieved is very
much "info-computational" in that it depends heavily on the information-
theoretical scheme of data being encoded into data-reproducing algorithms
and then decoded again into data that is postdictive as well as predictive of
the target system's past and future behavior. Along these lines, modern
cosmology and astrophysics have come to think of our natural universe
as a giant information-processing system as well. The universe is thought
to have evolved from the big bang to its present state as the laws of nature
governed its entire historical lineage from its earliest initial conditions.
The initial conditions serve as informational input to the laws that perform
the computation from one state to the next, and so on.
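The state-to-state scheme just described can be sketched in a few lines of code. In the following toy example, a fixed "law" is an update rule that maps each state to the next, and the initial conditions serve as the informational input; the particular law (free fall via Euler integration) and all numbers are merely illustrative, not taken from this article.

```python
# Toy sketch of the info-computational scheme: a fixed "law" maps one state
# to the next; the initial conditions are the informational input.

def law(state, dt=0.1, g=9.81):
    """A 'timeless' update rule: advance position and velocity by one step."""
    position, velocity = state
    return (position + velocity * dt, velocity + g * dt)

def evolve(initial_conditions, steps):
    """Iterate the law: each computed state becomes the input to the next."""
    state = initial_conditions
    history = [state]
    for _ in range(steps):
        state = law(state)
        history.append(state)
    return history

history = evolve((0.0, 0.0), 5)   # start at rest at position zero
```

On this picture, everything the system will ever do is already contained in the pair (law, initial conditions), which is exactly the assumption Smolin's objection targets.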
This info-computationalism did not remain limited to the physical
sciences, though. With the advent of genetics, it came to be a major factor
in biology as well, thus giving rise to the idea that organisms could be
thought of in terms of the machine metaphor as information-processing
biomechanisms whose behavior as well as their fit with their environment
were basically already pre-programmed in the genetically inherited
instructions of their DNA. On top of this, much of contemporary neuroscience
and cognitive science is computational in the sense that it supports the
computational theory of mind in which the brain is thought of as an
enormously complex biological computer in which what we like to call
"the mind" is ultimately no more than neural computation-the signal
exchange between information-processing communication modules as
known from classical information theory.
However, this information-inclined approach totally overlooks the
fact that nature, in its deepest essence, is not a pre-coded place. That is,
"nature as left unframed by our nature-dissecting gaze" is unlabeled in
terms of the categories, concepts, code and symbolic alphabets that we
usually like to attach to it (see Edelman and Tononi 104; Kauffman,
"Foreword: Evolution," 11). This, then, is one reason that nature as a
whole can never be modeled exhaustively by using such pre-defined
symbol systems. Another reason, which has already been noted above by
Smolin, is that this info-computational way of doing physics in a box
necessarily requires that we leave ourselves, our preparatory actions, and
also our entire measurement instrumentarium outside the system to be
observed, something that cannot be done when trying to attend to the
universe in full .
In fact, this info-computationally primed way of doing physics in a
box-or what we will later on also refer to as "exophysical-decompositional
physics"4-inevitably leads to much more of such impracticalities, all
kinds of paradoxes, and the dubious belief that the natural universe is
entirely timeless. Not to be forgotten, it will also bring along unanswerable
questions, such as "why these laws?" and "why did the universe start out
with the initial conditions from which it has evolved into its current state?"
(see Smolin, Time, 97-98). To find a way out of these problems, Smolin
suggests that we should drop the idea of eternally valid "laws of nature"
and exchange it for something else. That is, following in the footsteps of
process philosopher Charles Sanders Peirce (1839-1914), he argues that
nature is not being governed by predetermined laws, but develops habitually.
Inspired by this idea, Smolin becomes convinced that, in order for physics
to get rid of its problems with time and its unanswerable questions, it
should take an entirely different direction, although he admits that he has
no readily available road map to success.
Fortunately, however, some 15 years ago or so, a neo-Whiteheadian,
neuro-biologically inspired, biocentric way of doing physics without a
box, namely Reg Cahill's5 process physics, arrived on the scene and ever
since it has managed to grow into a serious habit-centered alternative for
contemporary mainstream physics. As such, process physics aims to model
the universe from an initially orderless and uniform pre-geometric pre-
space by setting up a stochastic, self-referential modeling of nature. In
process physics, all self-referential and initially noisy activity patterns
are "mutually in-formative" in the sense that they are actively making a
meaningful difference to each other (i.e., "in-forming" or "actively giving
shape to each other").
Through this internal, habit-establishing "co-informativeness," process
physics is able to avoid the info-computational approach of externally
imposed "pre-coded" symbol systems that seem to cause so much trouble
in mainstream physics. Also, due to this system-wide mutual in-formativeness,
the initially undifferentiated activity patterns can act as "start-up seeds"
that become engaged in self-renewing update iterations (see section 5.3.3
for further details). In this way, the system starts to evolve from its initial
featurelessness to then "branch out" to higher and higher levels of
complexity-all this according to roughly the same basic principles as a
naturally evolving neural network.
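The actual routine is covered in section 5.3.3; as a foretaste, here is a minimal numerical sketch of a stochastic, self-referential update of the general form B <- B - alpha*(B + B^-1) + w that Cahill's papers report, where the B^-1 term makes the update self-referential and the noise term w injects novelty. All parameter values (alpha, noise level, matrix size, iteration count) are illustrative assumptions, not figures from the model itself.

```python
import numpy as np

# Sketch of a stochastic, self-referential iteration routine of the general
# form B <- B - alpha * (B + B^-1) + w, where B is an antisymmetric matrix
# of relational connection strengths and w is fresh noise at each iteration.
# All parameter values below are illustrative assumptions.

def iterate(B, rng, alpha=0.01, noise=0.1):
    """One stochastic update: self-reference via B^-1 plus fresh noise."""
    w = rng.normal(0.0, noise, B.shape)
    w = (w - w.T) / 2.0          # keep the noise antisymmetric, like B
    return B - alpha * (B + np.linalg.inv(B)) + w

rng = np.random.default_rng(42)
n = 8                            # even n: a generic antisymmetric B is invertible
seed = rng.normal(0.0, 1e-3, (n, n))
B = (seed - seed.T) / 2.0        # start from a near-featureless noise background
for _ in range(100):             # repeated updates let relational structure self-organize
    B = iterate(B, rng)
```

Notice that the inverse of an almost-featureless (hence near-singular) start-up matrix has very large entries, so the first iterations spontaneously amplify faint random differences into pronounced relational structure, which is the "start-up seed" behavior described above.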
Because of this self-organizing branching behavior, the process system
can be thought of as habit-bound with a potential for creative novelty and
open-ended evolution. Furthermore, nonlocality, three-dimensionality,
gravitational and relativistic effects, and (semi-)classical behavior are
spontaneously emergent within the system. Also, the system's constantly
renewing activity patterns bring along an inherent present moment effect,
thereby reintroducing time as the system's "becomingness." As a final
point, subjectivity-in the form of "mutual informativeness" (which is
also used in Gerald Edelman's and Giulio Tononi's extended theory of
neuronal group selection to explain how higher-order consciousness can
emerge)-is a naturally evolving, innate feature, not a coincidental, later-
arriving side-effect.

1.1 Getting to know process physics in terms of time, life, and consciousness
In order to properly introduce process physics, first, a proper outline
of our contemporary mainstream physics and its problems must be given.
Therefore, in Chapter 2, titled "Time," we will discuss the most important
technicalities having to do with how mainstream physics deals with time.
To be more specific, we will first take a look at the role of time in Aristotle's
teleological physics. After that, the more recent history of time as a
geometrical dimension will be sketched, starting with Galileo's one-
dimensional timeline, and then going from Newton's absolute space and
time to Einstein's 4-dimensional spacetime, which motivated Minkowski
to develop his interpretation of nature as an entirely static and timeless
block universe.
Then, Chapter 3, on "doing physics in a box," aims to give a
comprehensive analysis of the basic workings of our contemporary
mainstream physics, together with an outline of some of its intrinsically
problematic features. Although a good number of mainstream physics'
problems will be addressed, in the context of the present study the most
important of these are the denial of the reality of time and the claim that
consciousness must ultimately be illusory-two problems that process
philosophy has been dealing with for a long time.
If, indeed, consciousness is not illusory, as process philosophy,
phenomenology, and the system sciences like to argue, then it would be
crucial to sort out how it arose in living organisms, and how it enables
these organisms to get to know the natural world in which they live. This
and more will be discussed in Chapter 4, titled "Life and Consciousness,"
where the main topics of interest will be the emergence of life through
autocatalysis and the coming into actuality of higher-order consciousness.
Since subjectivity is here seen as the process of sense-making as an
organism goes through its cyclic perception-action loops,6 one of the main
conclusions is that consciousness is not confined to some elusive center
of subjectivity buried deep within the brain, but extends well into the
organism's environment. That is, a sense of self and world gets to be
sculpted by the process of sense-making as it runs its course within the
seamlessly interconnected "organism-world system."
Both the emergence of life and that of consciousness are particularly
relevant to how process physics hangs together, because both of them can
be explained as self-organizing processes that come into actuality from a
primordial background of initially undifferentiated processuality. This
has a striking resemblance with how process physics works. That is,
process physics is a biocentric way of doing physics without a box, which
introduces a non-formal, self-organizing modeling of nature. As such, it
gives an account of nature in which the "earliest beginnings" are considered
to be inherently protobiotic, ecological and organismic, rather than entirely
abiotic, physical and mechanistic. In line with all this, the process physics
model is not based on law-like physical equations, as in mainstream
physics, but on a stochastic iteration routine that reflects the Peircean
principle of precedence (Peirce 277).
By modeling nature with the help of "recursive routine" rather than
"timeless laws," process physics manages to set up a dynamic network of
dispositional relations through which higher-order relational patterns can
emerge from an initially uniform and undifferentiated background (see
Cahill, et al., "Process Physics: Modelling," 192-193). In so doing, the
process physics model will gradually start to exhibit many features also
found in our own universe: non-locality; emergent three-dimensionality;
emergent relativistic and gravitational effects; emergent semi-classical
behavior; creative novelty; habit formation; mutual informativeness;7 an
intrinsic present moment effect with open-ended evolution, and more.

2. Time
Although time has played a major role in physics ever since the early
1600s when Galileo started to specify it in terms of chronologically
arranged intervals along a geometrical, unidirectional line, there is still
no common agreement on what it actually is (see Davies, About Time,
279-283; Davies, "That Mysterious," 6-8; Smolin, Time, 240-242). Despite
the impressive theoretical and technological progress over the last four
hundred years, physicists and philosophers alike continue to be troubled
by the elusiveness of time and our incomplete understanding of it. So, for
sake of clarity, let us first try to reconstruct how the concept of time was
historically introduced into physics, and see how it developed over the years.

2.1 From the process of nature to the geometrical timeline


Presocratic process philosopher Heraclitus of Ephesus (circa 535-475
BCE) is famous for having coined the slogan "all is flux, and nothing
stays the same." Using this as a first principle, he put forward an account
of nature in which change and processuality were the central themes. As
a means to come to grips with the many chaotic and unpredictable aspects
of nature, he claimed that the world was actually a giant coherent process
in which there were hidden connections between its contrasting opposites.
According to Heraclitus, these opposites were thus tacitly related to one
another in a balancing way so as to compensate for each other's extremities.
He thought all change in one direction was ultimately evened out by a
matching change in the countervailing direction, thus leading to a worldview
of nature as a dynamic equilibrium-united as one, yet capable of
heterogeneous change and diversity (see Kirk 178).
However, one of his contemporaries, Parmenides of Elea (circa 515-450
BCE), argued that only being and non-being were crucial, since, to him,
it was a logical necessity that what exists, must always have existed. In
order to show the inevitability of his logic, he asked the following question:
If what exists should ever have come into existence, or go out of it, how
should this ever occur at all? After all, if it were to come into being, it
should not even have existed in the first place. Subsequently, Parmenides
reasoned, because emergence out of nothing would be absurd, all change
should be considered an utter illusion:
There is only Being and Non-Being and no intermediate or
transitional form can be conceived of without inner contradiction. For
if that into which something changes did not originally exist, where
could it possibly have come from? [The only other option is that it
was already] there from the beginning, in which case there is no
change at all since everything has remained as it always was .... In
short, it is unthinkable that between Being and Non-Being there
exists a category of Becoming. If we believe that we observe change
all the time in our daily lives, then our observation is at fault and we
must conclude that our senses fail to provide us with reliable
information about the real world. (Cohen 9-10)
Plato (circa 425-347 BCE), then, when he developed his most famous
philosophical ideas, actually stayed quite close to Parmenides. In his
thinking about the essence of reality, he distinguished between the imperfect
world that is observable with the senses and the perfect world of "Ideal
Forms." In the Timaeus, Plato gave an account of the natural world as
being imperfectly modeled after those impeccable Ideal Forms, as poor
reflections of them (see Cohen 10). Far beyond the observable phenomena
of the natural world, he thought there had to be immutable and eternal
"Ideal Forms" whose properties could be spelled out by number theory
and geometry-the mathematical discipline dedicated to the specification
of abstract shapes and lengths. In short, Plato held that nature's phenomena
got their shapes from abstract geometric figures (such as circles, points,
angles, and lines) whose ideal forms they could never actually achieve,
but only approximate (10-11).
Later on, Aristotle (circa 384-322 BCE) exploited geometry in a way
that made it fit nicely within his teleological physics (see section 2.1.1).

In order for this to work, he argued that the regularity in the motion of
stars and planets, which could be witnessed every night when looking up
to the nocturnal sky, was a sign of the explicitly geometrical nature of
steadily rotating celestial spheres. So, although Aristotle dismissed Plato's
realm of ideal forms, he still recognized geometry as one of the most
essential sciences. Particularly, he believed the sphere to be the most
perfect of all geometrical shapes. After all, to the naked eye, the heavenly
bodies clearly seemed to move around the Earth in an explicitly circular
manner. The combination of these two was then more than enough "proof"
for Aristotle to crown the celestial sphere as the pristine, perfect, geometry-
abiding part of nature.
Through their eternal sameness and perfectly circular regularity, the
celestial spheres far outstripped the everyday imperfection of our earthly
domain where all things die and decay without having the permanence of
the heavens. It was in this sense that geometry had a central place in
Aristotle's cosmology and that its influence was passed on for centuries
on end in support of the Aristotelian cosmology and teleological physics
(see section 2.1.1).
Aristotle's body of thought eventually turned out to dominate much
of the almost two millennia that followed. With his detailed and careful
study of the behavior of falling objects, however, Galileo Galilei (1564-1642)
basically established the blueprint for what-through the later work of
Newton and Einstein, among others-has now become our modem
mainstream physics. In Galileo's days, as it had been in the ancient Greek
era, geometry was still the preeminent piece of equipment in the scientific
tool box. Therefore, the accounts of motion that were formulated by
Galileo's contemporaries would typically be based on geometry, if not
directly then at least indirectly. Accordingly, all things having to do with
motion first had to be looked at through the filter of geometry, for instance,
by comparing traveled distances of thrown projectiles or the depth of
impact pits left behind by falling objects.

2.1.1 Aristotle's teleological physics


For a long time, however, the most influential account of motion had
indeed been that of Aristotle, who, in his time, had introduced a way of
doing physics that was very much end-directed and purpose-laden, and
thus in fine agreement with his teleological philosophy:
PROCESS STUDIES SUPPLEMENT 24 (2017)

Aristotle's vision for physics ... depended on [the] division of sub-
and superlunar cosmic domains. Five basic elements existed in
Aristotle's cosmos: earth, water, fire, air and aether. Each element
had a 'natural' motion associated with it. Earth and water 'naturally'
sought movement toward the Earth's center. Air and fire naturally
rose toward the celestial domain. The aether was a divine substance
constituting the heavenly spheres. These 'natural' inclinations
seemed self-evident to Aristotle and did not require separate tests.
Only many centuries later would a new breed of scientists such as
Galileo (in the late sixteenth and early seventeenth centuries) demand
that a hypothesis such as natural motion be validated through
experiments. (Frank 47)
Along these lines, Aristotle's explanation for what seemed to be the
two most obvious forms of motion-"free fall" and what may be called
"continued travel"-were very much end-directed. In the Aristotelian
framework, free fall would be explained by appeal to the striving of earthly
matters to move towards their natural endpoint, namely the heart of the
universe, which was, according to the then prevailing wisdom, the center
of the earth. Continued travel, on the other hand, was in Aristotle's view
the result of so-called antiperistasis. This is the phenomenon through
which the motion of a projectile like a spear or arrow is continued as
compressed air coming from the front of the projectile fills in the empty
gap that it leaves behind, thus pushing the projectile forwards with a
constant thrust.
Both forms of motion were later given a new, improved interpretation
by Galileo, but for now, we will keep our focus on Aristotle's view of
nature a little bit longer: In line with the teleological principle that earthly
matters and water are naturally driven toward Earth, he thought that the
speed of something falling down would actually depend on the amount
of earth-seeking elements it contained, or, in other words, on its weight.
He had been led to think so by the observation that heavier objects, when
being dropped in the water, sink to the bottom in an observably faster way
than lighter ones do. From this he concluded (wrongly, as it turned out)
that the rate of falling had to be proportional to the weight of the object
and inversely proportional to the viscosity of the medium. Put simply,
heavy objects had to fall faster than lighter objects. And although we now
think of this belief as being quite naive and impulsive, it still managed to
persist for a staggeringly long period of time-almost two millennia.
Nonetheless, over all those years, Aristotle's accounts of the two
forms of motion-free fall and continued travel-still had to endure a
fair amount of skepticism. That is, some critical minds found that there
was something wrong with Aristotle's teleology. After all, the problem
with those purpose-based accounts of motion is that they merely re-describe
what is found to be the case. For instance, the explanation that "things
fall towards the ground because they strive towards Earth" basically
amounts to a tautology-saying the same thing twice in different words.
Unfortunately, this tautological reasoning did not only occur in Aristotle's
explanation of free fall. That is, since it was based on the teleological
principle of horror vacui, 8 Aristotle's antiperistatic explanation of continued
travel was found to suffer from the same weakness. As soon as one tries
to explain that air will fill up any gap left behind by an arrow because
"nature abhors a vacuum," it will immediately become apparent that this
line of reasoning is just as trivial and pointless as the above tautological
explanation behind free fall.
At the time when Galileo first started to think about motion, one of
the two components in Aristotle's account of motion-the phenomenon
of antiperistasis-had already been replaced by the idea of "impetus" or
"impressed force":
After leaving the arm of the thrower, the projectile would be moved
by an impetus given to it by the thrower and would continue to be
moved as long as the impetus remained stronger than the resistance,
and would be of infinite duration were it not diminished and
corrupted by a contrary force resisting it or by something inclining it
to a contrary motion. (Buridan as translated in Zupko 107)
Notwithstanding this revision by Buridan, and some others that
preceded him, the general framework in the pre-Galilean era was still very
much Aristotelian. Despite the earlier-mentioned criticism with regard to
the tautology of purpose-based teleological arguments, Aristotle's account
of free fall, based on the belief that heavy weights naturally outpace all
lighter ones when falling to earth, was still very much the established
view. The falseness of this belief, however, managed to remain unnoticed
for almost two thousand years, because no rigorous testing was being
performed and also because Aristotle and his followers unintentionally
threw up a smoke screen that basically prevented them from taking a better
look. Specifically, Aristotle's "rule," saying that heavy weights would
always hit the ground sooner than lighter ones, caused his physics to
become specifically geared towards comparative proportions. That is, in
line with the naive belief in faster-falling heavy objects, Aristotle's rule
was converted into a neat quantitative expression, relating weight and
speed to each other in a proportional way:

W1/W2 = V1/V2, with W = weight and V = speed (2.1-1)

The technicalities that came with this expression arguably kept Aristotle
and his followers busy enough to overlook the fact that it was actually
quite wrong. Initially, these ratios were only used in an after-the-fact
manner. But, with time, it became apparent that falling objects started
from a resting position and had to pick up their pace upon release, instead
of immediately dropping down at full speed.
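Aristotle's proportional rule (2.1-1) can be contrasted with the modern, weight-independent account in a few lines of code. This is a purely illustrative sketch; the function names and all numerical values are invented here, not drawn from the text:

```python
# Hypothetical illustration of Aristotle's (incorrect) rule that fall
# speed is proportional to weight (W1/W2 = V1/V2), set against the
# modern result that fall speed is independent of weight.

def aristotle_speed(weight, reference_weight=1.0, reference_speed=1.0):
    """Speed proportional to weight: V1/V2 = W1/W2."""
    return reference_speed * (weight / reference_weight)

def galileo_speed(weight, g=9.81, elapsed_time=1.0):
    """Speed after free fall, v = g*t; the weight argument is
    deliberately ignored, since weight plays no role here."""
    return g * elapsed_time

for w in (1.0, 2.0, 10.0):
    # Aristotle: 1.0, 2.0, 10.0 -- Galileo: 9.81 in every case
    print(w, aristotle_speed(w), galileo_speed(w))
```

The point of the contrast: under the Aristotelian rule a tenfold weight increase predicts a tenfold speed increase, whereas in the modern account every body reaches the same speed after the same elapsed time.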
This is when it was decided that the speed of the objects would have
to depend on the distance covered. Accordingly, it was concluded that
falling objects would increase their speed the deeper they fell. 9 What is
particularly noteworthy in this case is that the buildup of speed was not
being linked with the lapse of time, but with the distance covered.
To be able to understand the motives behind the linkage of speed with
covered distance rather than with time elapsed, we will first have to look
into Aristotle's thoughts of how movement and time were related with
each other:
[B]ecause movement is continuous so is time; for (excluding
differences of velocity) the time occupied is conceived as
proportionate to the distance moved over. Now, the primary
significance of before-and-afterness is the local one of "in front of"
and "behind." There it is applied to order of position. But since there
is a before-and-after in magnitude, there must also be a before-and-
after in movement in analogy with them. But there is also a before-
and-after in time, in virtue of the dependence of time upon motion.
Movement, then, is the objective seat of before-and-afterness both in
movement and in time; but conceptually the before-and-afterness is
distinguishable from movement. Now, when we determine a movement
by defining its first and last limit, we also recognize a lapse of time;
for it is when we are aware of the measuring of motion by a prior and
posterior limit that we may say time has passed. And our
determination consists in distinguishing between the initial limit and
the final one, and seeing that what lies between them is distinct from
both; for when we distinguish between the extremes and what is
between them, and the mind pronounces the "nows" to be two-an
initial and a final one-it is then that we say that a certain time has
passed; for that which is determined either way by a "now" seems to
be what we mean by time .... Accordingly, when we perceive a "now"
in isolation ... then no time seems to have elapsed, for neither has
there been any corresponding motion. But when we perceive a
distinct before and after, then we speak of time; for this is just what
time is, the calculable measure or dimension of motion with respect
to before-and-afterness. Time, then, is not movement, but that by
which movement can be numerically estimated. (Aristotle, Physics,
219a-219b-emphasis added)
In Aristotle's view one may speak of "time" when attending to the duration
aspect of motion, whereas one is dealing with "movement" when the
displacement aspect is at stake. Accordingly, change in place-i.e.,
movement, or locomotion-can be expressed by displacement as well as
duration. However, as can be understood from the italicized segment in
the quote above, Aristotle thought that time could become apparent only
by virtue of the occurrence of movement.
Nonetheless, time has a special relevance in Aristotle's framework
in that anything that changes, can only change in time. But because it is
impossible to point to time in the way that one would point to an actual
thing, time was considered a derived, abstract notion. As such, time was
thought to be dependent on movement, rather than being fundamental to
it. So, all in all, the belief in (1) faster-falling heavier objects, and (2) the
abstractness and motion-dependence of time, together with (3) the
unavailability of precise measuring instruments, and (4) the associated
lack of rigorous testing procedures, caused the Peripatetic school of
Aristotle to commit the error of linking the increase of speed with covered
distance and not with elapsed time.

2.1.2 Galileo's non-teleological physics


Thanks to a lot of hard work, dedication, and an especially inquisitive
mind, Galileo was able to find an entirely non-teleological alternative to
Aristotle's physics that both solved the problem of the tautological
arguments and rectified the incorrect linkage between increasing speed
and traveled distance. Instead of looking only at the change and differences
in weights, motions and speeds of objects, he started to look specifically
at the rate at which change occurred. That is, he found out that motion
could be catalogued more easily by recording not only the covered distance
and descended height of falling bodies, but also the rate at which these
quantities would change.
Just to be able to do so, Galileo devised many ingenious experiments
in which he tried to link change in spatial coordinates to a standard measure
of duration. In this way, the total amount of change in position could not
only be expressed in terms of standard spatial intervals, but it could also
be measured in a chronological manner by counting up the amount of
standard units of duration between the initial and the final position.
For instance, by monitoring the changing water level in the reservoir
of a water clock that was running at the same time as he would release
some heavy mass at the top of an inclined plane, Galileo could chart the
duration of the object's descent in terms of the water level markings (this
would include split times as well as the total amount of time). In turn, the
covered distances at pre-marked split times could be registered by looking
at the distance markings that were carved along the ramp's downward slope.
As it turns out, though, it is quite difficult to get a reliable and consistent
reading in subsequent runs of such an experiment. This is because the
water level is changing rather slowly in comparison to the falling body's
changing height. Therefore, later on, a more elaborate inclined plane
experiment was introduced. While it is not entirely certain if this
experiment-in the exact form as described below-was actually performed
by Galileo, it at least combines two of his earlier innovations that had a
great impact on the practice of physical experimentation: (1) the downhill
ramp, with its inclined plane for rolling down bronze balls; and (2) the
free-swinging pendulum, which could be used as a relatively precise
indicator for the rate of time.
In this enhanced inclined plane experiment, a bronze ball was made
to roll down the ramp, which had a pendulum hanging from the backside
of the platform from which the balls were released. Because of the added
pendulum, the ramp could effectively also double as a makeshift metronome.
That is, although the ramp's main function was to serve as a straight-lined
"speed track" for the bronze ball, its second job was to subdivide the total
time of descent into equally long intervals. This feature was achieved by
placing a series of moveable warning bells on strategic positions along
the slope of the ramp (see Fig. 2-1). The precise spots where these bells
should be placed could be found through synchronization with the swings
of the pendulum that was hanging beneath the platform located on top of
the slope. In this way, after having been released from the upper platform
of the ramp, the passing bronze ball would set off the bells one after the
other in an even, steady rhythm.
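Assuming uniform acceleration (which is what the time-squared law recorded in Table 1 amounts to), the bell positions follow from the requirement that the ball pass each bell after equal time intervals: the distances from the release point must grow as 1, 4, 9, 16, ..., so the gaps between successive bells follow the odd numbers 1, 3, 5, 7, ... The following sketch, with invented distance units, is not Galileo's own procedure but illustrates the resulting spacing:

```python
# Sketch: under uniform acceleration, distance grows with the square of
# elapsed time, so bells rung at equal time intervals must sit at
# positions proportional to 1, 4, 9, 16, ...; successive gaps then
# follow the odd numbers 1, 3, 5, 7, ...

def bell_positions(n_bells, unit_distance=1.0):
    """Position of the k-th bell (k = 1..n_bells) below the release
    point, for equal time intervals between bell strikes."""
    return [unit_distance * k**2 for k in range(1, n_bells + 1)]

positions = bell_positions(5)
gaps = [b - a for a, b in zip([0.0] + positions, positions)]
print(positions)  # [1.0, 4.0, 9.0, 16.0, 25.0]
print(gaps)       # [1.0, 3.0, 5.0, 7.0, 9.0]
```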
Table 1: Distance and time values in Galileo's inclined plane experiment

Measurements leading to Galileo's time-squared law of fall. See the end of
Section 2.2 for how the value of the proportionality constant k relates to
current metric measures (also cf. McDougal 20).

  time t_x                 distance x        t² = x/k            t_x squared
  (equal intervals         (measured         (calculated
  Δt = t_x - t_(x-1))      in 'points')      with k = 33)

  t1 = 1                     33                1.00               1² =  1
  t2 = 2                    130                3.94               2² =  4
  t3 = 3                    298                9.03               3² =  9
  t4 = 4                    526               15.94               4² = 16
  t5 = 5                    824               24.97               5² = 25
  t6 = 6                   1192               36.12               6² = 36
  t7 = 7                   1620               49.09               7² = 49
  t8 = 8                   2104               63.76               8² = 64

[Figure: s-t diagram plotting traveled distance (0 to 2000 points) against
time for the descending ball]
Fig. 2-1: Bronze ball rolling down an inclined plane (with s-t diagram)
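The proportionality recorded in Table 1 can be checked directly: dividing each measured distance by the constant k = 33 should reproduce the square of the elapsed time. A small sketch using the table's own values:

```python
# Verifying Table 1: x / k should approximate t squared for each row.
# The distances (in 'points') and k = 33 are taken from the table.
distances = [33, 130, 298, 526, 824, 1192, 1620, 2104]
k = 33.0
for t, x in enumerate(distances, start=1):
    ratio = x / k
    print(t, x, round(ratio, 2), t**2)
    # every row agrees with t^2 to within a fraction of a unit
    assert abs(ratio - t**2) < 0.3
```

That every row passes the assertion is precisely the empirical content of the time-squared law: the measured distances stand to one another as the squares of the elapsed times.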
By introducing a "normalized" measure of time (in which, for instance,
the equalized time stretches between the ramp's warning bells were taken
as countable standard units) Galileo could list this measure next to the
distance and/or height covered by the moving body. In so doing, he could
then put together a chronologically ordered record as in table 1 (see also
Smolin, Time, 31-36). As it turned out, the notion of a timeline could then
be derived by using the analogy between (a) a measuring tape stretching
straight from the moving body's starting position to its end point, and (b)
the time interval between initial and final readings on a water clock (or
even smaller intervals, such as those between the pendulum-calibrated
warning bells attached to the slope of the ramp). In other words, as had
happened before with the notion of space, 11 time was thus abstracted from
the process of nature as a linear phenomenon.
On the whole, there seems to be no other experiment that illustrates
better how distance and duration can be made to pair up-no better
demonstration of how time can be characterized as a geometrical phenomenon.
That is, it demonstrates most clearly how the initially unlabeled process
of nature is actually converted into (a) movable physical bodies, (b) a
spatial coordinate system, and (c) a linear time axis. By submitting the
world to his nature-dissecting gaze and then filtering away all that was
irrelevant to him, Galileo basically put nature through the wringer to thus
end up with a fully geometrized "stage setting" in which the act of doing
physics could be performed. It became finally possible to put the
mathematically predictable maneuvers of physical objects on display
within a timeline-equipped spatial coordinate system-thus implicitly
suggesting that nature truly worked in a geometrical way.
But on close enough scrutiny, the abstract geometrical timeline could
only be given a meaningful role through the artificial isolation of a "physical
object" from its local neighborhood, and all of Galileo's other acts of
abstraction that led to his first basic version of doing "physics in a box"-as
Lee Smolin calls it. It is only through all these acts of abstraction that
what we like to call "motion" can be "soaked loose" from the process of
nature as if it were the change of an object' s position "through" space and
time. Despite first appearances, object, neighborhood, measuring instrument,
observer, etc., are eventually not separate entities, but rather inseparably
connected process-structures deeply ingrained within the greater embedding
whole of the universe at large.
It is because of these acts of simplifying abstraction from the process
of nature that our current way of doing physics in a box could be made
possible at all. And it is only in this abstracted, mathematized world that
nature's processuality can be translated into a moveable dot running along
a chain of equally long time stretches.12 This very method of doing physics
in a box, however, has persuaded many of us to indeed think of time as a
geometrical exponent of reality that is needed to get from one event to
the next. However, this belief is a typical example of the fallacy of
misplaced concreteness. To put it even more strongly, it is already dubious
to even speak of "time" as having an autonomous existence. Even Einstein
himself was not shy to bring this to the fore: "Time and space are modes
by which we think, and not conditions in which we live" (Einstein as
quoted in Forsee 81). Remarkably, in opposition to the timeless and non-
processual Parmenidean block universe that Minkowski proposed on the
basis of Einstein's special theory of relativity, this same point is used in
process philosophy to endorse an explicitly processual interpretation of nature:
[T]ime is not in itself a fundamental reality. It is an abstraction from
the process .... What really exists is a succession of events. They are
by their very nature related to one another as past and future or as
contemporary. These relationships are temporal. But there is not
something to be called time apart from these actual relations. (Cobb
161)
In other words, time should not be reified. Galileo's geometrical timeline
model should not be interpreted as something that has an actual existence
alongside nature's events (Griffin, "Bohm and Whitehead," 127). Moreover:
[A]ll the features of time ... are rooted in the intrinsic reality of
events, in the process by which they become concrete, or
determinate, for it is here that the event includes the past events into
itself and it is this inclusion that makes time irreversible.
Accordingly, any approach that commits the fallacy of misplaced
concreteness by equating the extrinsic side of the events with their
complete reality will necessarily miss the roots of time in those
events. (Griffin, "Introduction," 13)
Hence, looking at all this in the most sober way we can, at the end of the
day we will have to admit that geometrical timelines result from the
subjective choice to abstract from the process of nature. Our long-standing
Western tradition of "doing physics in a box" 13 actually depends a great
deal on such idealizing abstraction.
It cannot be denied that what has grown to become our present-day
physics has known many tremendous successes over the years. On balance,
the vast majority of those successes have come with impressive concrete
consequences. After all, many previously unexplained aspects of nature
are now considered familiar and well-understood phenomena as their
behavior can be traced and predicted mathematically to a near-perfect
degree of precision.
However, even perfect empirical agreement between some target
system of interest and its mathematical model does not mean that the
mathematical model is an exact representational twin version of the system
in question. In fact, our nature-dissecting physics can only deal with
observables, not beables. 14 Mainstream contemporary physics primarily
has to do with the mathematization of phenomena and typically likes to
evaluate nature in terms of instrument-based empirical data-backed up
by sense data, if needed.
Appropriately, then, physical equations should not be considered to
refer directly to nature-in-itself; for all practical purposes, their designated
source of information is to be found in the responses of observation
systems. Unlike an airplane in low-altitude flight, we cannot dive below
the "radar" of our own phenomenal awareness to check if the so-called
"real-world-out-there" exists exactly as we experience it. From early life
onwards, we have gradually learned to cut up our otherwise undivided
natural world into various subsystems (particularly target, subject, and
symbol systems, as well as their respective constituent parts). Despite our
thus developed nature-dissecting mindset, this will never bring us conclusive
proof that these systems do indeed exist just as we infer them to be (Van
Dijk, "The Process"). Therefore, we should realize very well that
physics-from what it was in Galileo's hands to what it has become in
the present day-has until now been no more than a collection of instrument-
based, mathematically expressed phenomenologies made possible by some
well-considered acts of abstraction (see also Sections 3.1.2 to 3.2).
Following the same argument, we should also recognize that the
concepts of space and time, as used in contemporary mainstream physics,
are ultimately phenomenologies: instrument-enabled, geometrically
expressed metaphors or figures of speech for how nature appears to us,
conscious nature-dissecting observers. Although they help break down
the process of nature into geometrical dimensions and their contents, space
and time should not be thought to exist as such. In the words of Alfred
North Whitehead: "This passage of events in time and space is merely the
exhibition of the relations of extension which events bear to each other,
combined with the directional factor in time which expresses that ultimate
becomingness which is the creative advance of nature" (Whitehead, PNK,
63). Moreover: "We habitually muddle together this creative advance,
which we experience and know as the perpetual transition of nature into
novelty, with the single-time series which we naturally employ for
measurement" (Whitehead, CN, 178). Here, the word "naturally" should
perhaps better be replaced by "routinely." After all, by following Galileo's
first example of presenting such a single-time series in a table (as in Table
1, above), we are entirely taking for granted all the idealizations and
simplifications that in fact enabled him to present time as a unidirectional
geometrical line. In other words, we are thus accepting the authority of
tradition without really questioning Galileo's hidden presumptions.
Within certain contexts of use it may perhaps be quite convenient to
interpret space and time in terms of geometrical dimensions, but these
interpretations should not be taken so literally as to impose them onto the
process which is nature. We should not mistake our abstractions for reality,
so we should always remain critical towards claims that the physical real
world should sit within space and time, or exist as a 4-dimensional spacetime
continuum. At the end of the day, space and time, although all too often
interpreted as actually existing, geometrically specifiable dimensions, are
in fact artifacts of the human intellect that follow directly from the nature-
dissecting mindset on which our still well-established tradition of doing
"physics in a box" is based. This does not mean, however, that space and
time are mere illusions, but rather that what we have come to think of as
"space-time" is actually an intrinsic aspect of the process of nature and
can thus not be usefully reflected upon without taking nature's processuality
into account.

2.1.3 The deficiencies of the geometrical timeline


So, however useful Galileo's geometrical timeline has proven to be
over the years, it's quite another thing to suppose that this abstract construct
should have a concrete counterpart within nature-in-itself. Although the
geometrical timeline does quite well when it comes to chronologically
ordering the sampled values of observables 15 by associating them to a
serialized chain of time slots, it seems to perform quite poorly otherwise.
After all, while nature is all about change, process, action, evolution, etc.,
the timeline by itself is as static as can be (Cahill, "Process Physics: Self-
Referential," 2).
Furthermore, the geometrical timeline does not allow for a present
moment effect. The timeline doesn't have a unique and indisputable Now
which will automatically come to the fore during use. This lack of a
dedicated present moment is in fact a shortcoming that even Einstein had
become quite concerned about in his later years. As Rudolf Carnap reported:
Once Einstein said that the problem of the Now worried him
seriously. He explained that the experience of the Now means
something special for man, something essentially different from the
past and the future, but that this important difference does not and
cannot occur within physics .... Einstein thought that scientific
descriptions [whether they be formulated in physical or in
psychological terms] cannot possibly satisfy our human needs; and
there is something essential about the Now which is just outside the
realm of science. (Carnap 37-38)
To us, conscious human beings, things happen in a particular order.
That is, things seem to change as they pass from the present into the future,
thus leaving their past "behind" them. Accordingly, when left to its own
devices, nature typically likes to follow the path of irreversibility. Water
does not spontaneously flow upwards against the slope of a mountain;
milk will not unmix itself from the coffee in which it is poured; and, as
we will all learn in life, we do not grow younger as we age. In classical
thermodynamics-although gas particles in bulk tend towards disorderliness,
thus leading to a preferred direction of time-the microscopic laws
describing the collisions of individual particles are time-symmetric,
indifferent to any distinction between a back or forward direction of time.
Accordingly, at least until the advent of quantum mechanics, all the then
known "laws of nature" were time-reversible, and to this day most of them
still are:
The reversibility of basic physical processes comes from the time
symmetry of the laws that underlie them. This time-reversal
symmetry is usually denoted by the letter "T." You can think of T as
an (imaginary) operation that reverses the direction of time-i.e.,
interchanges past and future. Time-symmetric laws have the property
that when the direction of time is inverted the equations that describe
them remain unchanged: they are invariant under T. A good example
is provided by Maxwell's equations of electromagnetism, which are
certainly T-invariant [or in other words: symmetric under time-
reversal]. (Davies, About Time, 209)
When seen from the perspective of timeline-equipped laws of nature
themselves, it seems to make no difference at all if we start tracing the
timeline from left to right, or just the other way around. There is nothing
in the mathematics of our laws that forbids them to be "unrolled"
counterclockwise. So it turns out that physicists, when putting these laws
to the test, must first choose which direction to follow. Because the physical
equations themselves do not stipulate a specific direction, they actually
require external subjective choice in order for the timeline methodology
to work according to plan.
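This T-invariance can be made concrete with a toy integration of Newtonian free fall: running the very same equations with the velocity reversed retraces the trajectory back to its starting point. A minimal sketch, with arbitrary illustrative values for the step size and initial height:

```python
# Sketch of time-reversal symmetry: integrate free fall forward,
# flip the velocity (the analogue of reversing time's direction),
# and integrate again; the same law runs the "film" backwards.

def step(x, v, dt, g=-9.81):
    """One velocity-Verlet step; exact for constant acceleration g."""
    x_new = x + v * dt + 0.5 * g * dt**2
    v_new = v + g * dt
    return x_new, v_new

x, v, dt = 100.0, 0.0, 0.01
for _ in range(200):          # fall forward for 2 seconds
    x, v = step(x, v, dt)
v = -v                        # "reverse the direction of time"
for _ in range(200):          # same equations, reversed motion
    x, v = step(x, v, dt)
print(x, v)  # back to approximately (100.0, 0.0)
```

Nothing in the equations of motion privileges either direction; the choice to call one of the two runs "forward" is supplied from outside the mathematics, which is exactly the point made above.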
In fact, geometrical timelines are basically analogous to tear-off
calendars, whose pages can be removed from the front to the back, but
also vice versa, randomly, or in any other possible order. And-as is so
well-accepted that it is mostly forgotten-tear-off calendars, as well as
geometrical timelines, require social convention to know in which direction
they should be read, and external manipulation to get from one time slot
to the next. 16
In other words, physicists availing themselves of such an artificial
timeline necessarily have to apply an additional metarule which states
that an external time pointer should be run alongside the line to indicate
which point on it is effectively acting as the present moment. That is, just
like the tear-off calendar needs some outside help (another's "helping
hand") to remove the page of each day gone by, the timeline needs an
external present moment indicator to get from one time slot to the next.
It should be stressed that this present moment indicator (a) is entirely
separate from the timeline itself, and (b) acts in total independence from
any mathematical equation to which this timeline may be linked. As will
be explained in more detail below, however, this external present moment
indicator plays an enormously important role in the ongoing "mathematization
of nature."

2.2 From geometrical timeline to time-based equations


Galileo's revolutionary idea to relate the changing position of a moving
body to the rate of his own heartbeat as he felt it beating in his pulse, or
to other measures of time, 17 basically amounts to relating the change of
one thing to the change of another. Or, to be more precise, it amounts to
encoding the nonlinear, nonuniform change in one aspect of nature (i.e.,
relocation in space) in terms of the more simple and steady change of
another (i.e., "relocation" in time).
It should be noted, though, that comparison to another clock is basically
the only way to determine at which rate the first clock's time indicator is
changing position. Therefore, to avoid an infinite regress of calibrating
clocks, the most practical solution is simply to accept the first one's rate
of change as smooth and uniform over its entire range. 18 In each of his
different experiments, Galileo had to assume that-more so than the beats
of his own heart-the time indicator markings 19 would always pass by at
a perfectly even rate, so that they could be used as a standard measure of time.
By supposing that these markings would indeed pass by in uniform
fashion, it became possible to introduce an operational definition of time. 20
Such an operational definition simply takes a conveniently chosen number
of the same recurrent events ( e.g., the swinging of a pendulum, or the
passing by of indicator markings) as the standard unit of time. In this way,
since the number of counts is the only thing required to specify the
magnitude of a time interval, there is no need to know any more if time
is something that physically exists in the so-called "real world out there."
That is, as long as the assumption of this uniform rate leads to empirically
adequate results, there is no need to know how time actually works, but
only that it works-in the above mentioned operational sense, that is.
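This operational stance can be sketched in a few lines of code. The following toy model (my own illustration, not part of the original text) treats a "clock" as nothing but a counter of recurrent events; the duration of any other process is then simply the number of counts it spans, and the unit's "true" physical size never needs to be known.

```python
# Toy model of an operational definition of time: a clock is just a
# counter of recurrent events (e.g., pendulum swings), and a duration
# is defined as the number of counts between a process's start and end.

def duration_in_ticks(process_start, process_end, tick_times):
    """Count how many clock ticks fall within the observed process."""
    return sum(1 for t in tick_times if process_start <= t < process_end)

# Reference clock: ticks are simply assumed to be uniform -- the same
# calibrating assumption Galileo had to make for his time indicator.
ticks = [0.5 * n for n in range(100)]

# Two observed processes (start/end given on the same hidden timescale).
fall_short = duration_in_ticks(2.0, 4.0, ticks)   # spans 4 ticks
fall_long  = duration_in_ticks(2.0, 10.0, ticks)  # spans 16 ticks

# Only the ratio matters, exactly as in Galileo's ratio-based physics.
ratio = fall_long / fall_short
```

Note that nothing in the sketch says what a tick "really is"; empirically adequate counting is all the definition requires.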
The adoption of this operational definition of time allowed Galileo
to achieve a major breakthrough that ushered in a new era in the natural
sciences. By closely studying the time tables he had put together from his
experiments on falling and descending objects (see Table 1) he could
systematically compare the object's change in position (especially the
vertical component) to the amount of time that had elapsed during its descent:
Having placed this board [i.e., the downward track for the bronze
ball] in a sloping position, by lifting one end some one or two cubits
above the other, we rolled the ball, as I was just saying, along the
channel, noting, in a manner presently to be described, the time
required to make the descent. We ... now rolled the ball only one-
quarter the length of the channel; and having measured the time of its
descent, we found it precisely one-half of the former. Next we tried
other distances, comparing the time for the whole length with that for
the half, or with that for two-thirds, or three-fourths, or indeed for
any fraction; in such experiments, repeated a full hundred times, we
always found that the spaces traversed were to each other as the
VAN DIJK/Process Physics, Time, and Consciousness 27

squares of the times, and this was true for all inclinations of the
plane, i.e., of the channel, along which we rolled the ball. (Galileo,
Dialogues, 178-179)

In so doing, Galileo found that the traveled distance was directly proportional
to time squared:
s ∝ t²

This expression contains two variables, s for distance, and t for time, and
says that with each elapsed time interval, the traveled distance increases
quadratically. This relation may indeed seem quite obvious, since rolling
down the ramp's entire length will take the bronze ball twice the time that
is needed for the first quarter distance. But in order to really make sure
that his hypothesis would stand the test of time, it seems that, further on
into the experiment, Galileo decided to replace the not so precise water
clock with the more reliable methodology of metronome-like transit sounds
made by strategically placed frets (i.e., "speed bumps") or alarm bells.
Because this method, due to "double calibration,"21 ensures a high level
of accuracy in making equally long time intervals, it becomes possible to
introduce a standardized unit of time. And although this method left the
intervals' actual size unspecified-or simply one (i.e., 1.00) by default-it
led to all time intervals having the same duration with a very small margin of error:
The phrase "measure time" makes us think at once of some standard
unit such as the astronomical second. Galileo could not measure time
with that kind of accuracy. His mathematical physics was based
entirely on ratios, not on standard units as such. In order to compare
ratios of times it is necessary only to divide time equally; it is not
necessary to name the units, let alone measure them in seconds. The
conductor of an orchestra, moving his baton, divides time evenly with
great precision over long periods without thinking of seconds or any
other standard unit. He maintains a certain even beat according to an
internal rhythm, and he can divide that beat in half again with an
accuracy rivaling that of any mechanical instrument. (Drake 98)
It was because of this increased measurement accuracy that the proportionality
constant k, relating distance and time to each other, could be determined
with ample accuracy:

s = kt², with k = 33 (see Table 1; Section 2.1.2)


The precise value of k depends effectively on the relation between
the locally valid gravitational acceleration (or to be more precise, the
locally valid gravitational acceleration minus some possible deceleration
due to friction and other speed-diminishing factors) and the applied
measuring unit for distance (which, in the metric SI-system, is the meter).
However, it was initially based on the traveled length during the first
interval t₀-t₁ as expressed in terms of Galileo's standard unit-the
"point." And since Galileo normalized the time interval t₀-t₁ to unity,
his assumption that the velocity of a freely falling object would increase
uniformly led to a different value for the gravitational acceleration than the one we use today.
In the case of the measurement run for the inclined plane experiment
recorded in Table 1, the value of k equaled 33. This was, indeed, the
number of distance markings that could be counted between the top of
the ramp and the position of the first warning bell. With the standard
measure for length-named "points"-amounting to approximately 29/30
≈ 0.967 mm, the traveled distances can be calculated from s = kt².
Accordingly, at time t₁, the traveled distance s₁ would reach a value of
s₁ = (29/30) · 33 · 1² ≈ 31.9 mm (cf., Table 1; Section 2.1.2).
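The arithmetic above can be checked directly. The following minimal sketch uses the values quoted in the text (one "point" ≈ 29/30 mm, k = 33); the helper name distance_mm is my own, introduced only for illustration.

```python
# Reconstruction of the distance calculation quoted in the text:
# s = k * t^2, with k = 33 and one "point" = 29/30 mm (about 0.967 mm).

POINT_MM = 29 / 30        # Galileo's length unit, in millimetres
k = 33                    # proportionality constant from Table 1

def distance_mm(t):
    """Traveled distance (in mm) after t normalized time intervals."""
    return POINT_MM * k * t ** 2

s1 = distance_mm(1)       # about 31.9 mm at time t1
s2 = distance_mm(2)

# Galileo's finding: the spaces traversed are to each other as the
# squares of the times -- a ratio independent of the unit's actual size.
ratio = s2 / s1           # equals 2^2 = 4
```

As the ratio shows, the relation s = kt² needs no standard unit at all; only the proportionality survives a change of units.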
As a matter of fact, all other distances could be calculated in similar
fashion, for any single moment within the available time range. Although
Galileo had to fight an uphill battle in order to defend all these ideas,
today we realize that this ground-breaking spin-off of his initial proportionality
relation s ∝ t² actually embodied the first version of what has now become
the "gold standard": the time-based physical equation.

2.3 From time-based equations to physical laws


Elaborating on Galileo's work, Isaac Newton then added force and
mass into the mix, which enabled him, among other things, to formulate his
three laws of motion-of which the most famous, the second, is another
physical equation: F = ma. Together with yet another physical equation,
known as the law of universal gravitation,

Fg = G · m₁m₂/r², with the gravitational constant G = 6.67 · 10⁻¹¹ [N · m²/kg²],

Newton was able to lay down a basic framework for describing the motion
of all of nature's physical objects within the earthly as well as the heavenly
domain. The equations dealing with the gravitation of ordinary objects
on earth could be unified with Kepler's laws of planetary motion. Because
of the enormous empirical success and the giant leap of understanding
that this unification brought about, Newton's work could give rise to the
mechanistic "clockwork universe" worldview. This worldview even
motivated Pierre-Simon Laplace to claim that it should in principle be
possible to calculate, from a given set of interim conditions, the entire
history and future of nature as a whole:
We may regard the present state of the universe as the effect of its
past and the cause of its future. An intellect which at a certain
moment would know all forces that set nature in motion, and all
positions of all items of which nature is composed, if this intellect
were also vast enough to submit these data to analysis, it would
embrace in a single formula the movements of the greatest bodies of
the universe and those of the tiniest atom; for such an intellect
nothing would be uncertain and the future just like the past would be
present before its eyes. (Laplace 4)
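Laplace's claim can be illustrated (though of course not proven) by a toy integration: given the initial conditions, Newton's update rule fixes every later state, and running the same rule with the time step negated recovers the past to within numerical error. The scheme below (symplectic Euler, constant gravity, values of my own choosing) is a sketch of this clockwork picture, not anything from the original text.

```python
# Toy illustration of Laplacian determinism: a body falling under
# constant gravity. The initial conditions plus the update rule fix
# the entire future; stepping with a negated dt (approximately)
# recovers the past.

g = 9.81  # m/s^2, local gravitational acceleration (assumed constant)

def step(state, dt):
    """One symplectic-Euler update of the (height, velocity) pair."""
    h, v = state
    v = v - g * dt
    h = h + v * dt
    return (h, v)

state = (100.0, 0.0)            # initial conditions: 100 m up, at rest
history = [state]
for _ in range(1000):
    state = step(state, 0.001)  # march the "clockwork" forward by 1 s
    history.append(state)

final = state                   # roughly (95.09 m, -9.81 m/s)

# Reverse the time step: the same law runs the film backwards,
# landing close to the initial conditions again.
back = final
for _ in range(1000):
    back = step(back, -0.001)
```

The small residual error in the backward run comes from the integrator, not from the physics: in the idealized Laplacian picture the reversal would be exact.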
Laplace's strict and absolute determinism is now typically regarded as
outdated. 22 This is mostly due to ongoing developments in physics. With
the advent of thermodynamics (with its novel concepts of ergodicity,
entropy, and statistical ensembles), and quantum mechanics (which, in
most interpretations, is taken to be inherently random and indeterministic),
this rather resolute form of determinism was no longer tenable.
However, although this strict Laplacian determinism had to be
abandoned, most other aspects of the mechanistic worldview managed to
survive. In fact, the general framework behind the mechanistic worldview
not only came out alive and well, but it even went on to permeate the
whole of science. As a result, our contemporary mainstream physics
became very much framed in terms of what may be called the Cartesian-
Newtonian paradigm.
In the Cartesian-Newtonian paradigm, nature is typically interpreted
as follows: (1) as an entirely physical "real world out there" ready to be
exploited and manipulated by us, conscious human beings who like to
think of it (2) as a giant collection of atomistic, elementary constituents
whose behavior is governed by fixed, eternal laws of nature.
The first feature refers to the apparent necessity in physics to keep an
external perspective on the system to be observed. That is, although our
physical sciences are utterly experience-based, as they rely on empirical
observation, the observer's experience itself always belongs to the
instrumentarium of investigation, and never to its target. As such, our
nature-interpreting subjectivity resides on the other, non-physical side of
the epistemic cut (see Von Neumann 352; Pattee):
... any attempt to include the conscious observer into the theoretical
account, will cause the theory's extra-systemic "view from nowhere"
[see Nagel] to be handed over to a newly introduced meta-observer.
At least from the theory's perspective, the initial observer must then
be treated as any other physical system [see Von Neumann 352]. In
this way, physical theory seems to be condemned to a
"Cartesianesque" split [see Primas 610-612] that leads to the
undesirable bifurcation of nature [see Whitehead, CN, 27-30]. Hence
[our current physical sciences can be labelled as] "exophysical" [see
Atmanspacher and Dalenoort] which refers to an external "view from
nowhere" onto a world that is held to be interpretable entirely in
physical terms. (Van Dijk, "The Process")
Furthermore, the second feature of the Cartesian-Newtonian paradigm-its
corpuscular physicalism (see Barandiaran and Ruiz-Mirazo 297)-can be
credited first to Galileo, who paved the way for modern physics by being
the first to systematically single out material bodies as individual systems
to study their behavior during free fall. Second, it can indeed be credited
to Newton, who built upon Galileo's geometry-inspired groundwork to
arrive at his laws of motion and universal gravitation. The superior
explanatory power of Newton's laws-which made it possible to understand
such diverse phenomena as planetary motion, the working of the tides,
and the movement of material objects on Earth with the help of just one
theoretical framework-was so impressive that his work really set the
stage for all that followed:
... the success of scientific theories from Newton to the present day is
based on their use of a particular framework of explanation invented
by Newton. This framework views nature as consisting of nothing but
particles with timeless properties whose motions and interactions are
determined by timeless laws. The properties of the particles, such as
their masses and electric charges, never change, and neither do the
laws that act on them. (Smolin, Time, xxiii)
Although its initial mechanistic interpretation had to be abandoned
with the advent of Einstein's relativity theories and quantum mechanics,
this law-centred methodology was still quite painlessly passed on to our
contemporary mainstream physics. The general procedure of predicting
the future state of any system by identifying its initial conditions and then
letting them be processed by "the laws that be" is still the way to go (see
Smolin, Time, 50). And even though the strict Laplacian determinism
turned out to be untenable,23 the general Newtonian mode of operation
is still alive and well. Nowadays, rigorous determinism is not a realistic
expectation anymore, but empirical agreement has taken its place. That
is, when a physical equation achieves empirical agreement with the system
it is held to portray, it is typically thought to closely follow the target
system's behavior. Accordingly, the physical equation is customarily
considered to represent the system at hand-if not in a direct chrono-
logical sense, then at least statistically (Van Dijk, "The Process").

2.3.1 The flawed notion of physical laws


When we call our physical equations "laws," this means that, all else
being equal, they apply to many, many cases (see Smolin, Time, 99). 24 In
other words, in a practically equal situation, the physical equation at hand
will apply without exception, again and again. And although the Cartesian-
Newtonian paradigm thus facilitates the view that there could be a complete
collection of fundamental laws of nature, there is more than enough reason
to disagree with this (see Cartwright 45-55; Giere 77-90). First of all,
none of our past or even present physical theories has universal validity.
This can be easily exemplified by investigating situations in which Newton's
universally valid "laws" of motion are to be combined with his "law" of
universal gravitation (Giere 90). No two interacting bodies, anywhere in
the universe, will be found to exactly behave according to these "laws":
The only possibility of Newton's Laws being precisely exemplified
by our two bodies would be either if they were alone in the universe
with no other bodies whose gravitational force would affect their
motions, or if they existed in a perfectly uniform gravitational field.
The former possibility is ruled out by the obvious existence of
numerous other bodies in the universe; the latter by inhomogeneities
in the distribution of matter in the universe. (Giere 90)
Moreover, all conditions would have to be thoroughly idealized
(perfectly spherical, chargeless bodies within a frictionless environment,
and so on). Even Richard Feynman, who called Newton's law of universal
gravitation "the greatest generalization achieved by the human mind"
(Feynman 14), noted that it is simply not true that the force between any
two bodies is given by the law of gravitation alone. That is, no charged objects will
exactly behave according to Newton's "law" of universal gravitation,
since Coulomb's "law" applies as well (Feynman 13-14; Cartwright 57).
Moreover, the effect of both "laws" typically differs depending on the
scale of magnitude and other relevant system and environmental conditions.
In view of all this, Cartwright goes on to argue that our celebrated "laws
of nature" are, in fact, no more than generalized theories: 25
Many phenomena which have perfectly good scientific explanations
are not covered by any laws. No true laws, that is. They are at best
covered by ceteris paribus generalizations-generalizations that hold
only under special conditions, usually ideal conditions. (Cartwright
45)
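The point about overlapping "laws" is easy to quantify. The sketch below (my own back-of-the-envelope comparison, using standard CODATA-style constant values not taken from the text) compares the gravitational and Coulomb forces between two electrons: neither "law" alone states the facts, and for elementary particles the electrostatic term utterly dominates.

```python
# For two charged bodies, Newton's "law" alone never gives the force:
# Coulomb's "law" contributes as well. Compare the two contributions
# for a pair of electrons at atomic-scale separation.

G   = 6.674e-11     # gravitational constant [N m^2 / kg^2]
ke  = 8.988e9       # Coulomb constant      [N m^2 / C^2]
m_e = 9.109e-31     # electron mass [kg]
q_e = 1.602e-19     # elementary charge [C]

r = 1e-10           # separation [m] (roughly an atomic diameter)

F_gravity = G * m_e ** 2 / r ** 2
F_coulomb = ke * q_e ** 2 / r ** 2

# The ratio is independent of r and enormous (~4e42): deciding which
# "law" to keep and which to neglect is a ceteris paribus judgment.
ratio = F_coulomb / F_gravity
```

Because both forces fall off as 1/r², the ratio holds at every distance; the choice of which term to neglect is fixed by the modelling situation, not by any single universal law.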
These special, idealized conditions fail to present the whole picture. That
is, since our physical equations always refer to some carefully singled-
out system of interest, the entire rest of the universe is being ignored as
if it doesn't exist. 26 This neglect of the outer-system world is an absolute
necessity for a physical equation (e.g., F = ma) to be valid in many other
cases. Accordingly, the idea of physical equations having the status of a
"true law" can only be maintained thanks to theoretical neglect:
Newton's second law [for instance] describes how a particle's motion
is influenced by the forces it experiences.... Each particle in the
universe attracts every other gravitationally. There are also forces
between every pair of charged particles. That's a whole lotta forces to
contend with. To check whether Newton's second law holds exactly,
you would have to add up more than 10⁸⁰ forces to predict the motion
of only one of the particles in the universe. In practice, of course, we
do nothing of the kind. We take into account just one or a few forces
from nearby bodies and ignore all the rest. (Smolin, Time, 100-101)
Hence, at the end of the day, we should realize that our physical equations
can only be "universally valid" when they are generalizations. But when
we have to admit that our equations are merely approximating generalizations,
they can never be thought of as being fundamental, or as having a perfect
or near-perfect fit with nature. So, despite the popular view that some of
our physical equations are so widely valid that they could deservedly be
labeled as laws, such labeling is in fact misplaced. After all, on closer
scrutiny, these so-called laws are neither true, nor universal (see Cartwright
13 and 45; Giere 86).
Despite all this, the idea that nature can ultimately be grasped fully
by a single physical equation, or a small set of physical laws, is still very
much alive. It is almost as if the huge empirical success of these time-
based equations makes us forget that a great amount of abstraction,
idealization, simplification, approximation, and neglect is involved. When
ignoring all these manipulations we can easily become convinced that our
simple equations for our local isolated systems can be extrapolated to the
entire universe. And that's in fact exactly what happened after Galileo.

2.4 From geometrization to the timeless block universe


By expressing nature's processuality with the help of a geometrical
timeline, Galileo basically set the stage for modern science. Together with
the three already known spatial dimensions, the temporal dimension thus
made it possible to register all conceivable motion of physical bodies by
specifying, for each fixed interval on the timeline, their position in terms
of three spatial coordinates (x, y, z). The mechanistic Newtonian physics
that was then built upon Galileo's innovation managed to refine and expand
the still rudimentary geometrization of nature to a great extent.
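The "registration of all conceivable motion" described above amounts, in modern terms, to storing a trajectory as a static set of (t, x, y, z) tuples. The following sketch (my own illustration; the function name and sample values are assumptions) shows how the geometrization turns a motion into a frozen data structure, a miniature version of the block-universe picture discussed below in this chapter.

```python
# Geometrization of motion: a body's history becomes a static set of
# spacetime points (t, x, y, z). Nothing in this data structure
# "moves"; the motion is frozen into a 4-dimensional record.

def projectile_worldline(v0, steps, dt):
    """Sample the worldline of a body thrown straight up at speed v0."""
    g = 9.81
    worldline = []
    for n in range(steps + 1):
        t = n * dt
        x, y = 0.0, 0.0                  # no horizontal motion here
        z = v0 * t - 0.5 * g * t ** 2    # height above the ground
        worldline.append((t, x, y, z))
    return worldline

points = projectile_worldline(v0=10.0, steps=10, dt=0.2)

# The whole flight exists "all at once" in the list; querying it for
# the apex is a lookup in a timeless record, not an observation of
# anything happening.
apex = max(points, key=lambda p: p[3])
```

The design choice is the telling part: once a trajectory is encoded this way, "earlier" and "later" are just index positions, which is precisely how the timeline-compliant equations invite one to think of time.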
In fact, although Galileo had only a promising vision of mathematics
being the language in which the great book of nature was written, Newton
actually seemed to have pulled off the as-good-as-complete mathematization
of nature-from falling apples on earth to the faraway heavenly bodies
in outer space. According to Laplace's strictly deterministic interpretation
of the Newtonian framework, the mathematically spelled out mechanical
laws of nature governed the entire future unfolding of the universe. It was
thought that the universe as a whole could be described deterministically
by simply taking its initial conditions 27 and then making Newton's laws
work out the inevitable consequences.
This idea of such an algorithmically laid down universe was indeed
very much inspired by Galileo's pioneering work. As a matter of fact, the
renowned clockwork universe picture was conceivable only through
Galileo's conception of the geometrical time line and the subsequent
development of the timeline-compliant physical equations. Without this
"geometrization of nature's processuality," the mathematical expression
of quantum mechanics and Einstein's special and general theories of
relativity would not even have been possible.
But with the implementation of timeline-compliant physical equations,
all major physical theories that followed in the wake of Galileo's theory
of falling bodies were effectively left timeless in the sense that: (1) their
formulae have to make do without any dedicated and unique present
moment; (2) the entire past, present, and future are, according to Laplace,
already contained in the chosen initial conditions and the static, unchanging
physical laws acting thereon. On top of that, (3) in quantum mechanics,
all possible quantum states are held to exist simultaneously in what some
interpret to be a static kind of superposition-until observation leads to
the actualization of one particular possibility; and (4) Einstein's relativity
theories, with the newly introduced ideas of the 4-dimensional spacetime
continuum and the relativity of simultaneity, made it clear that observers
moving at different speeds would each experience a different order of
occurrence for any arbitrary sequence of events relative to which they are
moving.
The entire combination of (a) Galileo's geometrization of time, (b)
the lack of a unique present moment in the physical equations that ensued
therefrom, (c) the lack of a preferred direction of time in these equations
(see Davies, "Whitrow"), (d) Einstein' s discovery of the relativity of
simultaneity, (e) the postulation of the Einsteinian-Minkowskian 4-
dimensional spacetime manifold, and (f) the introduction of superposition
in quantum mechanics (see Barbour 229-231 and 247; Smolin, Time, 80)
motivated mainstream physics to drop time altogether. All this led physicists
to argue that in reality there is no passage of time, but that all moments
of all of eternity exist as a giant universal timescape in which all moments
and configurations of nature are spread out as one eternally existing
whole-comparable to the spacetime picture as presented in Fig. 2-2, but
then for the entire universe. 28

[Fig. 2-2a: past → present → future]
[Fig. 2-2b: all moments existing together as one (en bloc)]
Fig. 2-2: The Earth-Moon system in a temporal universe and in a block
universe. The temporal view (Fig. 2-2a) and the block universe view (Fig. 2-2b)
of the Earth-Moon system. In the temporal view, the earth and moon move through
space in time, while in the block universe view, all instances of the earth and
moon exist together at once in a giant timeless space-time continuum. In the block
universe view, all experience of the earth and moon moving from one moment
to the next is thus held to be illusory. For ease of illustration, the images show
only two spatial dimensions as well as modified, unrealistic sizes and distances.
Images inspired by illustrations from (Davies, "That Mysterious," 9) and extensively
edited from a Wikimedia Commons image of the lunar phases (original image:
© Orion 8 CC BY-SA 3.0).

In the words of theoretical physicist Julian Barbour:


The most direct and naive interpretation [of the Wheeler-DeWitt
equation] is that it is a stationary [time-independent] Schrödinger
equation for one fixed value (zero) of the energy of the
universe .... The Wheeler-DeWitt equation is telling us, in its most
direct interpretation, that the universe in its entirety is like some huge
molecule in a stationary state and that the different possible
configurations of this "monster molecule" are the instants of time.
Quantum cosmology becomes the ultimate extension of the theory of
atomic structure, and simultaneously subsumes time. We can go on to
ask what this tells us about time. The implications are as profound as
they can be. Time does not exist. There is just the furniture of the
world that we call instants of time. (Barbour 247)
Of course, this line of reasoning draws heavily on ideas from quantum
theory. However, in the confrontation between believers and disbelievers
in the passage of time, the relativity of simultaneity was what ultimately
settled the score, thus making relativity the prime incentive for physicists
like Barbour to try to make quantum physics comply with timelessness
as well. But even before this quantum theory-based proposal of timelessness,
Minkowski's block universe interpretation of Einstein's special theory of
relativity was, in conjunction with Eddington's famous solar eclipse ex-
periments,29 already persuasive enough to make some researchers join the
camp of the time-refuters. The remarkable agreement between prediction
and experiment, as well as the straightforwardness of Minkowski's
geometrical interpretation, eventually seem to be the main reasons for
physicists to have become so convinced of the timelessness of nature. The
fact that the basics of Einstein's special relativity could be so graphically
explained by his captivating thought experiments 30 probably contributed
to its appeal as well:
Besides the existence of a universal speed limit that all observers
agree on, special relativity depends on one other hypothesis. This is
the principle of relativity itself. It holds that speed, other than the
speed of light, is a purely relative quantity-there is no way to tell
which observer is moving and which is at rest. Suppose two
observers approach each other, each moving at a constant speed.
According to the principle of relativity, each can plausibly declare
herself at rest and attribute the approach entirely to the motion of the
other. So, there's no right answer to questions that observers disagree
about, such as whether two events distant from each other happen
simultaneously. Thus, there can be nothing objectively real about
simultaneity, nothing real about "now." The relativity of simultaneity
was a big blow to the notion that time is real. (Smolin, Time, 57-58)
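The relativity of simultaneity can be made quantitative with the Lorentz time transformation t' = γ(t − vx/c²). The sketch below (my own illustration, in units with c = 1; event coordinates and speeds are arbitrary choices) shows two events that are simultaneous in one frame occurring in opposite temporal order for two oppositely moving observers.

```python
import math

# Relativity of simultaneity: two events that are simultaneous for a
# resting observer (both at t = 0) occur in opposite temporal order
# for observers moving in opposite directions. Units with c = 1.

def lorentz_t(t, x, v):
    """Time coordinate of event (t, x) in a frame moving at speed v."""
    gamma = 1.0 / math.sqrt(1.0 - v ** 2)
    return gamma * (t - v * x)

# Two well-separated events, simultaneous in the rest frame.
event_a = (0.0, 0.0)    # (t, x)
event_b = (0.0, 10.0)

t_b_right = lorentz_t(*event_b, v=+0.5)   # observer moving toward B
t_b_left  = lorentz_t(*event_b, v=-0.5)   # observer moving away from B

# Event A stays at t' = 0 in both moving frames, so the sign of t'_B
# decides the order: B happens before A for one observer and after A
# for the other. Neither ordering is "the right one."
order_disagrees = (t_b_right < 0.0) and (t_b_left > 0.0)
```

It is exactly this frame-dependence of the ordering, and nothing more exotic, that the block-universe argument summarized below leans on.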
Accordingly, we may summarize the argumentation for the unreality of
time as follows:
1. Initial assumption: The universe is an entirely physical world with
an objectively real existence;

2. The relativity of simultaneity: Observers moving at different
speeds will-under certain circumstances (!)-not agree if two non-
identical, well-separated physical events happen simultaneously or
not;

3. Because of this lack of agreement, it must be concluded that there
is no objectively real now;

4. Hence, there is nothing to divide the past from the future (see
Čapek 507);

5. Consequently, any passage of time would be a sheer impossibility;

6. Therefore, the entire history of the universe-containing not only
all of its past and present moments, but also all moments yet to
come-must be considered to exist all together at once, in one
massive 4-dimensional block of frozen spacetime (see Fig. 2-2).
It was basically this line of reasoning-inspired significantly by the easy-
to-misinterpret geometry of Minkowski's spacetime construct-that made
the case for the timeless block universe, thereby basically marginalizing
any possible process-oriented interpretation of nature. 31 Fortunately,
though, the publication of Lee Smolin's Time Reborn has drawn renewed
attention to the fact that we are in dire need of a more process-friendly,
habit-driven physics. In Smolin's view, this new physics should then be
operating according to a principle of precedence, rather than proclaiming
the reign of timeless law (Smolin, Time, 147). Moreover, in such a physics,
the block universe interpretation would become obsolete and be replaced
by an interpretation in which the cosmos is seen as a giant dynamic network
of habit-establishing activity patterns.
On that account, space and time should not be conceived of in terms
of an abstract geometrical coordinate system. Instead, it would be better
to think of them as being a process-seamlessly interwoven with the
process of nature as a whole. Similar to the quantum vacuum-a well-
accepted concept from quantum field theory-space is not to be seen as
an absolutely empty void, but rather as a fiercely boiling ocean of activity
(see Davies, Space and Time, 136; Boi 69)-indeed, a process. Although,
by lack of any further explanation, this may perhaps still sound rather
speculative and premature, in Chapter 5 (on process physics) this will be
discussed in much greater detail.

2.5 Arguments against the block universe interpretation


Despite the enormous appeal of Einstein's relativity theories, their
huge impact on further developments within theoretical as well as
experimental physics, and the many practical benefits they have given us
over the years, there is still more than enough reason to handle them with
a good deal of caution. Especially the block universe interpretation-in
which nature is viewed as utterly timeless while our experience of time
is branded as a stubborn, obstinate illusion-certainly deserves
some critical evaluation, to say the least. Although numerous weaknesses
can be collected from historical as well as more recent sources, let us first
focus on some essential ones from a process-oriented point of view (see
Čapek).
When being pressed to provide a well-founded and indisputable defense
for the illusoriness of our experience of time, physicists are actually having
quite a hard time trying to do so. This is first and foremost because physics,
being so firmly rooted in experiment, is itself utterly dependent on empirical
experience. Secondly, if nature were indeed purely physical-as most
contemporary mainstream physicists would have it-it is then quite difficult
to see how it could ever give rise to something so explicitly non-physical
like conscious experience. On top of this, the argument of time's illusoriness
becomes even more doubtful in view of the extraordinary level of
sophistication that would be required for our conscious experience to
achieve such an extremely convincing, but-physically speaking-pointless
illusion. In other words, it would simply be next to miraculous for such
an illusion to ever have evolved at all. And if that were not enough,
in a completely timeless world, the utterly processual and time-related
concept of (neuro)biological evolution would just make no sense.
Even though these points together would seem to make a very strong
case against the eternalism of Einstein and Minkowski, most mainstream
physicists do not appear to be very alarmed by them. Apparently, the
prevalent attitude among physicists is that the findings in physics-arguably
the most prominent member of our sciences-must certainly be more
fundamental than those of the other, lower-grade sciences, like chemistry,
biology, and especially neuroscience.
Another way to look at this, however, is to place the process of
empirical experience before any possible results ensuing therefrom.
Researchers on this side of the scale are far less likely to think of
neuroscience, psychology, consciousness studies, and the like, as being
inferior to the physical sciences. No wonder, then, that the two camps will
find themselves talking past each other over and over again (!).
Therefore, process-inspired arguments alone are probably not enough
to end this unfruitful status quo. More than anything else, it needs to be
demonstrated technically, in the language of physics, that the current
timeless view of the block universe interpretation needs a thorough revision.
Previously, with David Bohm, Basil Hiley, Ilya Prigogine, Henry Stapp,
and Lee Smolin, to name but a few, there have been a number of attempts
from within the physical sciences to move towards a more process-oriented
alternative to the static, timeless view. 32 So far, however, their "process-
friendly" work has not been able to bring about any major reputation-
shattering crisis in mainstream physics. Nonetheless, these efforts should
definitely be taken seriously-if only to provide us with new angles and
ideas on how to tackle the many unresolved matters in physics and science
as a whole.
However helpful the work of the above-mentioned researchers may
be in getting a more process-oriented perspective on the physical sciences,
for now let us focus on some specific objections against the initial
assumptions, the idealizing abstractions, and also the eventual interpretation
of these abstractions as used in Minkowski's timeless block universe
framework. The line of reasoning from initial assumptions to the interpretation
of nature as a 4-dimensional block universe can be roughly summarized
as follows:

The basic initial assumptions: 33


1. Reality is objectively real and mind-independent. That is, reality
exists independently from our mental experience and observation of
it. As such, it is an entirely physical "real world out there" whose
contents range widely from Planck-scale fluctuations to subatomic
particles, billiard balls, and pyramids, and from small-, medium-, and
large-sized planets to solar systems, galaxies, galaxy clusters, and
beyond.34

2. Events in nature can be localized spatiotemporally by specifying
their coordinates along the dimensions of geometrical space and
geometrical time. 35
The crux of Einstein's thought experiment:
3. There is relativity of simultaneity. That is, observers moving
relative to each other at different enough speeds will disagree
whether two well-separated, distant events occurred simultaneously
or not. For the moving observer, after all, the expected meetups with
the two oncoming light pulses may have a different order in
comparison to that of the other, motionless observer.
The line of reasoning leading to Minkowski's block universe picture:
4. With the objective existence of the real world as a background
assumption, it is inferred from the relativity of simultaneity that there
cannot be an objectively real now;
40 PROCESS STUDIES SUPPLEMENT 24 (2017)

5. In the absence of an objectively real now, there can be nothing to divide the past from the future (see Capek 507);

6. If there is nothing to separate past from future, then no transition from one moment to the next can occur;

7. Without any transition between past and future, all of eternity (past, present, and future) must exist together as one in a timeless 4-dimensional block universe (see Fig. 2-2);

8. As a logical consequence of the absence of an objectively existing present moment, any experience of a now, or of time passing by, must be considered an illusion.

2.5.1 The real world out there is objectively real and mind-independent (or not?)
For the sake of argument, let's try to add some nuance to this seemingly
watertight step-by-step analysis. As noted, it is already quite a firm and
assertive statement to postulate an observer-independent "real world out
there." There seems to be more than enough reason to opt for another
initial assumption. For instance, quantum mechanics suggests that our
observational participation in quantum experiments plays an indispensable
role in the physical world. That is, quantum events are thought to exist in
a state of superposition when not being observed, while exposure to
observation will make this superposition "collapse" into one specific state.
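This measurement-and-collapse idea can be caricatured in a few lines of code. The following is a deliberately naive sketch under simple assumptions (a two-state system, invented amplitudes, the Born rule for outcome probabilities); it makes no claim about any particular interpretation of quantum mechanics:

```python
import random

def born_probabilities(amplitudes):
    """Born rule: the probability of each outcome is |amplitude|^2."""
    return [abs(a) ** 2 for a in amplitudes]

class Qubit:
    """Toy two-state system: stays in superposition until measured."""
    def __init__(self, amplitudes):
        norm = sum(abs(a) ** 2 for a in amplitudes) ** 0.5
        self.amplitudes = [a / norm for a in amplitudes]
        self.collapsed = None  # no definite state before observation

    def measure(self):
        # The first observation "collapses" the superposition to one outcome.
        if self.collapsed is None:
            probs = born_probabilities(self.amplitudes)
            self.collapsed = random.choices([0, 1], weights=probs)[0]
        return self.collapsed

q = Qubit([1, 1])    # equal superposition of |0> and |1>
first = q.measure()  # collapses to 0 or 1, each with probability 1/2
assert all(q.measure() == first for _ in range(10))  # outcome now fixed
```

Before the first call to `measure()`, only the amplitudes exist; afterwards, every repetition returns the same definite outcome, mimicking how exposure to observation is said to turn a superposition into one specific state.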
Yet the more or less unanimously held assumption within the physical
sciences is that our natural universe is an entirely physical world. Minkowski
thought that this physical world had an absolute existence as a static four-
dimensional continuum. That is, although space-time allows observers to
adopt different reference frames, it exists objectively and entirely independent
of the mind. In his "world postulate" (Weinert 169), Minkowski put
forward that the four-dimensional geometry of his abstract space-time
construct perfectly matched the "architecture" of the "real world out
there"-which he simply referred to as "world" (Minkowski 83). This
point of view meant that one could basically treat the world as a Euclidean, real coordinate space R4 with coordinates (x, y, z, t):
A four-dimensional continuum described by the "co-ordinates" x1, x2, x3, x4, was called "world" by Minkowski, who also termed a point-event a "world-point" .... We can regard Minkowski's "world" in a formal manner as a four-dimensional Euclidean space (with an imaginary time coordinate [x4 = ict]). (Einstein 122)
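For reference, the standard textbook relations behind this "formally Euclidean" reading (not drawn from the quoted passage itself) can be written as:

```latex
% Minkowski line element in real coordinates (x, y, z, t):
ds^2 = dx^2 + dy^2 + dz^2 - c^2\,dt^2 .
% Substituting the imaginary time coordinate x_4 = ict
% (with x_1 = x, x_2 = y, x_3 = z), the same interval takes
% the formally Euclidean form
ds^2 = dx_1^2 + dx_2^2 + dx_3^2 + dx_4^2 .
```

The imaginary coordinate absorbs the minus sign in front of the time term, which is what allows Minkowski's "world" to be treated, formally, as a four-dimensional Euclidean space.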
Over the years, this objective and materialistic view of the universe
has led to an interpretation of consciousness as an emergent (but still
entirely physical) property of the brain, produced exclusively by the
interaction of its neurons as they are engaged in all kinds of activity
patterns associated with wakefulness. However, such a materialistic
view-although it is the current standard-leaves wholly unexplained
what it is like to experience the information that is encapsulated within
these activity patterns. Moreover, as David Ray Griffin mentioned in his
banquet address at the 10th International Whitehead Conference, 2015:
"Assuming that the neurons in our brains are purely physical things,
without experience, most mainstream philosophers have concluded that
it would be impossible to understand how consciousness could emerge
out of the brain."
It is, in other words, incredibly hard to understand how the world of
subjective experience-the seeing of red and the feeling of warmth-can
arise from mere physical events (see Edelman and Tononi 2). Renowned quantum physicist Erwin Schrödinger identified this gap between the so-called objective world of physics and the subjective world of conscious sensation as follows:
If you ask a physicist what is his idea of yellow light, he will tell you
that it is transversal electromagnetic waves of wavelength in the
neighborhood of 590 millimicrons. If you ask him: "But where does
yellow come in?," he will say: "In my picture not at all, but these
kinds of vibrations, when they hit the retina of a healthy eye, give the
person whose eye it is the sensation of yellow." (Schrödinger 153)
In the wake of Schrödinger, Bertrand Russell was keen to emphasize the
inescapable role of subjectivity in physics:
Physics assures us that the occurrences which we call "perceiving
objects," [i.e., the conscious end products of focal attention; a.k.a.
percepts] are at the end of a long causal chain which starts from the
objects, and are not likely to resemble the objects except in very
abstract ways. We all start from "naive realism," i.e., the doctrine
that things are what they seem. We think that grass is green, that
stones are hard, and that snow is cold. But physics assures us that the
greenness of grass, the hardness of stones, and the coldness of snow,
are not the greenness, hardness, and coldness that we know in our
own experience, but something very different. The observer, when he seems to himself to be observing a stone, is really, if physics is to be
believed, observing the effects of the stone upon himself. Thus
science seems to be at war with itself: when it most means to be
objective, it finds itself plunged into subjectivity against its will.
(Russell 15)
It is exactly this argument that is being overlooked in the starting assumptions
of the block universe interpretation. By postulating in advance that our
observations of nature should always conform to the criterion of absolute
objectivity, these initial assumptions are implicitly already writing off
subjective experience as irrelevant. After all, our observations can only
be absolutely objective when our subjective experience manages to achieve
an exact synonymy with nature; observation can only be objective when
it involves only the registration of what is already "out there," or when it
manages to capture at least the bare essence of these outer-world physical
contents.
According to this at first sight rather dualistic interpretation, the
contents of conscious thought basically reflect those of the externally
existing physical world. Conscious thought, on this account, amounts to
nothing more than the computational processing of signals as the brain is
taking in information originating from the real world out there. In other
words, what we experience as our subjective world of thought is, at the
end of the day, just physical neurons firing in response to what goes on
in the equally physical "real world out there."
To non-physicalists, consciousness seems to be all too easily explained
away, as if it were a mere superfluous concept. Avid physicalists, however,
would argue that there is not anything there to be reasoned away in the
first place. That is, according to physicalism, there is no sign whatsoever
of anything nonphysical having scientifically traceable causal effects on
the physical world.36 As the argument goes, the operation of our
musculoskeletal apparatus depends solely on electrochemical signaling
in neuronal fibers. Even our emotions, feelings, drives, etc., should
ultimately be understood in this way so that our conscious inner lives are,
at the end of the day, nothing more than purely physical events.
Although this physicalist account is indeed quite attractive because
of its charming straightforwardness and non-ambiguity, it nonetheless
overlooks a crucial aspect of our experience of nature. Namely, we
conscious observers cannot step out of our own personal conscious
awareness to check if nature-in-itself exists just as we experience it to be
(see Van Dijk, "The Process"). For this reason, although it is nowadays
considered sound scientific practice to think of nature as the "external
physical world," the entire phenomenal scenery around us should in fact
not simply be thought of as being "out there," but rather as an integral
and indigenous part of all that is involved in getting the process of conscious
experience up and running (see Sections 4.2.3 and 4.2.4 for more details).
In empirical experience, for instance, the target system of interest is not the only thing relevant to the outcome. It is not so much the naked target system that is being put under scrutiny; it is rather the target system in interaction with (1) the entire subject side of the universe of discourse (i.e., all measurement equipment as well as the conscious observer) and, not to be forgotten, (2) the entire grand-environment in which both target and subject side are embedded. The former influence
is usually mentioned more often than the latter (see Planck 217; Heisenberg
58; D'Espagnat 17), but physicist Joe Rosen has given a well-argued
account of how events under observation are inescapably affected by the
greater universe in which they are embedded:
Since almost all laboratories are attached to Earth, the motion of
Earth-a complicated affair compounded of its daily rotation about
its axis, its yearly revolution around the Sun, and even the Sun's
motion, in which the Earth, along with the whole solar system,
participates-requires rotation and changes of location and velocity,
both for experiments repeated in the same laboratory and for those
duplicated in other laboratories. (J. Rosen 34)
Another, more visual way of illustrating how the goings-on of the entire
universe affect observation, comes from David Bohm:
Consider, for example, how on looking at the night sky, we are able
to discern structures covering immense stretches of space and time,
which are in some sense contained in the movements of light in the
tiny space encompassed by the eye (and also how instruments, such
as optical and radio telescopes, can discern more and more of this
totality, contained in each region of space). (Bohm, Wholeness, 188)
All the light-emitting structures in the cosmos that are actually involved
in stimulating retinal cells during one's nightly star gazing activities also
have a gravitational effect on target systems that cannot be so easily
ignored. As physicist Michael Berry (95-99) has been able to show, even
the gravitational influence of an electron at the limit of the observable
universe, some 10 billion light years away, has a noticeable effect on physical systems located here on earth.
This basically means that even our most systematic and well-controlled
way of observing how nature works-empirical experience-is affected
by events happening at vast distances from the actual experiment. This
tells us that the entire natural universe is actually involved in giving shape
to our experiences.
In fact, the universe at large is not only constantly "sending out" light
and gravitational stimuli that trigger our sensory cells in the retina and
our balance-controlling vestibular system located in the inner-ear, it also
gives birth to all the building blocks of biological life as its vast arrays
of galaxy clusters harbor innumerable amounts of stars and supernovae
capable of producing, through nucleosynthesis, all naturally occurring
chemical elements. 37
Obviously, this entire network may indeed be available for observation
by us as conscious, visually inclined organisms, but this does not show
how all this is actually seamlessly participating in that process of observation
as well. For this, we need to look at how our local environment within the
universe has come to be favorable to life. That is, in order to understand
how the entire universe conspires to eventually give rise to life, conscious awareness, as well as developed empirical experience, we need to zoom in a little bit further into the specific conditions of our own solar system.
Particularly interesting, of course, is the small rocky planet named earth,
which travels through space at just the right distance from the sun to be
able to enjoy life-favoring temperatures and an oxygen-containing
atmosphere. Other bio-friendly circumstances include its strong, protective,
electromagnetic field, and huge supplies of water that-under the right
conditions-could probably serve as a primordial soup for life to develop.
Given that all this (and more) has been necessary for our higher-order
consciousness to have evolved at all-as a seamlessly embedded endo-
process of the greater omni-process which is nature as a whole-we may
quite rightfully state that the process of experience, at least in a latent,
less condensed form, extends well beyond the limited confines of our
brain-equipped skulls.
In fact, it can be argued that the process of experience involves the
entire universe. Indeed, as mentioned by David Bohm (Wholeness, 188),
when spending some time staring into the sky at a clear, starry night, the
tiny space occupied by our eyes will in some sense "contain" the goings-on of vast portions of the cosmos and thus, indirectly, of nature-as-a-
whole. The photons entering the pupil and hitting the retina may have
been in transit for billions of years, emitted by stars varying widely in
their historical appearance within the universe as well as in their distance
from our home planet earth. Nonetheless, all those photons come together
within the eyeball, thus "informing" and making a difference to the
conscious observer in question. As such, these photons are not just informing
the organism in a way that fits the scheme of classical information theory
(as if they were prestated signals 38 that are fed into an input port to
eventually "inform" the end station, which is often thought of as the brain's
CPU-like center of subjectivity), but as we will see later on in Section
4.2.4, they are seamlessly integrated participants in the organism's
perception-action loops-the cyclic stream of experience-through which
the process of subjectivity is kept on the go.
The process of experience can be taken to involve the universe at
large-from its earliest beginnings to its most recent occurrences. This
does not just mean that any arbitrary conscious observer should be able
to obtain sensory information about the distribution of stars and galaxies
across the universe, rather it amounts to much more than that. After all,
the absolutely critical characteristic of all these light-emitting stars (and
supernovae) is that they gave birth to all the chemical elements necessary
for life to be possible at all. That is, without their role as natural fusion
reactors, forming chemical elements during nucleosynthesis, the odds of
our current biological world having been able to appear would have been
zero. Not only is our natural universe the generative source of all the
chemical elements that have eventually enabled conscious life, but all
evolved conscious organisms get to sculpt their conscious nows-the
conscious twosome of self and scenery (see Sections 4.2.4 and 4.3 for a
more detailed account)-by entering into cyclic interaction with the same
realm from which they themselves have emerged.
Conscious organisms can thus be thought of as seamlessly integrated
endo-processes within a greater embedding omni-process (the universe
as a whole). As such, they can reflexively form a conscious now (see
Velmans 328) through the interplay between their own biological body
states and their embedding living environment. In this way, we are in fact
seamlessly embedded organisms-"equipped" with a memory-based,
anticipatory conscious now which enables us to navigate, to live from,
and to make sense of, the larger embedding universe that forms our home
as well as the "stuff" that plays its part in our experiences. All things
considered, we are ultimately nothing less than active participants in the
reflexive process through which nature experiences itself (see Velmans
327-328; Van Dijk, "An Introduction," 81).
Hence, instead of labeling nature as an external "real world out there,"
it is far more helpful to treat it as a fundamentally indivisible whole in
which the "inner life" of conscious organism and the "outer world" of
natural phenomena are actually two intimately related aspects of the same
overarching psychophysical process. At the end of the day, all the above
should be more than enough to admit that subjectivity has to play a crucial
role in nature. Instead of adopting the detached and disembodied "point
observers" from Minkowski's block universe interpretation, with their
equally detached perspectives on an otherwise entirely "physical real
world out there," we should rather think of observers as seamlessly
embedded, living, and utterly involved participants. In other words, we
should consider them intimately nested endo-processes within the greater
omni-process of nature from which they ensue. Accordingly, we should
not think of ourselves as being the detached end-destination of passive
incoming information-whether empirical or sensory. Rather, we just as
much in-form, affect, give shape, and make a difference to the process of
nature as the process of nature informs and makes a difference to us.
Reminiscent of, but not entirely synonymous with, David Bohm's concept
of "active information" (Bohm and Hiley 35-37), all this can be said to
occur in an order of active mutual informativeness (see Van Dijk, "An
Introduction," 75; also "The Process").

2.5.2 Events in nature reside in a geometrical continuum (or not?)


Already in ancient Greece, Plato, Aristotle, and other philosophers
cherished geometry as one of the most esteemed branches of knowledge.
Roughly from the year 1100 to 1700, it was common practice to interpret
nature from the teleological perspective of Aristotelian-Scholastic
philosophy, which was part and parcel of the Christian doctrine that very
much dominated medieval society. But as much as Aristotle's framework
had been relying on basic principles of geometry, particularly to describe
the details of movement, it certainly had not succeeded in incorporating
time within this geometric picture (see Section 2.1.1).
The latter achievement was left for Galileo to flesh out. And by doing
so, he managed to put together a framework that was now thought to be
fully geometric. Thanks to Galileo's efforts, the scientific relevance of
geometry eventually grew to unprecedented heights, thereby basically
pushing the Aristotelian conception of a purposeful cosmos from the
throne. In fact, Galileo was so much enthused with the outcomes of his
experiments that he practically became a "crusader for geometry." As
such, he felt that mathematics-which to him was identical to
geometry-should be crowned as the perfect and infallible language of nature:
Philosophy is written in this grand book-I mean the
universe-which stands continually open to our gaze, but it cannot be
understood unless one first learns to comprehend the language in
which it is written. It is written in the language of mathematics, and
its characters are triangles, circles, and other geometric figures,
without which it is humanly impossible to understand a single word
of it; without these, one is wandering about in a dark labyrinth.
(Galileo, "The Assayer") 39
Later on, when Newton elaborated on Galileo's work and came up with
his laws of motion and gravitation, the massive empirical success basically
silenced all critics (with the exception of his main opponent, Gottfried
Leibniz, and his followers). As a result, the geometrical timeline and the
associated method of time-based physical equations ended up on a high
pedestal, and once in this privileged position they could quite easily
convince virtually anyone that spatial and temporal geometry were indeed
genuine aspects of nature itself, rather than idealizing abstractions. 40
Minkowski's four-dimensional geometrical construct is ultimately an
elaboration of Newtonian absolute space and time, 41 which, in turn, was
a keen extension of Galileo's seminal work. By fusing space and time
together into one, Minkowski obtained a 4-dimensional continuum that
could provide a quite straightforward framework for the relativity of
simultaneity to make sense. Despite spacetime's origination from Newton's
and Galileo's earlier geometrical understanding of space and time, the
final knockout of Newton's absolute space and time became an established
fact.
But Minkowski's method basically consisted of refurbishing Newton's
notions of absolute space and time, using them as the semi-finished source
materials for his end product, which is the four-dimensional spacetime
continuum. Minkowski, working from Einstein's special theory of relativity,
meant to overthrow Newton's absolute space and time not by replacing them with something else, but by fusing them together into one continuum.
But we should not forget that Newton's space and time were themselves
already simplifying abstractions from the process of nature, and so is the
space-time construct through which they were amalgamated into one
continuum. Nonetheless, by assuming that his abstractions agreed with
what existed in nature and that they would hold indefinitely (i.e., here,
there, and everywhere within nature, on a local as well as on a global
scale), Minkowski basically tried to extrapolate to the global universe
what had only been found to be so on a fairly local scale:
[R]elativity theory, both special and general, is constructed with an
in-built principle of locality. This principle is manifest explicitly both
in the epistemological foundations of the theory as well as its
mathematical foundations in terms of the manifold approach to space-
time. Thus the point event and its local neighborhood are considered
as primary with the global structure of space-time arising by
consistently piecing together local neighborhood patches.
(Papatheodorou and Hiley 81)
This not-so-well-thought-through extrapolation of local models into global
frameworks has been criticized quite strongly by Lee Smolin:
[T]he success of scientific theories from Newton to the present day is
based on their use of a particular framework of explanation invented
by Newton. This framework views nature as consisting of nothing but
particles with timeless properties whose motions and interactions are
determined by timeless laws. The properties of the particles, such as
their masses and electric charges, never change, and neither do the
laws that act on them. This framework is ideally suited to describe
small parts of the universe, but it falls apart when we attempt to
apply it to the universe as a whole. All the major theories of physics
are about parts of the universe-a radio, a ball in flight, a biological
cell, the Earth, a galaxy. When we describe a part of the universe we
leave ourselves and our measuring tools outside the system. We leave
out our role in selecting or preparing the system we study. We leave
out the references that serve to establish where the system is. Most
crucially for our concern with the nature of time, we leave out the
clocks by which we measure change in the system. (Smolin, Time,
xxiii)
Although the consequences of taking this external perspective may not
become immediately apparent, there are some ominous tell-tale signs.
Also, since Minkowski tried to build on the same Galilean-Newtonian notions whose legitimacy he was questioning in the first place, it should
not come as a big surprise that empirical agreement between theoretical
expectations and measurement outcomes does indeed break down at a
certain point. Accordingly, when extrapolating from a locally successful
geometric construct in an attempt to eventually cover the entire universe,
we may well expect deviations between theory and practice.
When Einstein in turn elaborated on Minkowski's four-dimensional
spacetime manifold to put together his general theory of relativity, we
should keep in mind that the theory of general relativity may actually have
a limited domain of application as well. It is of course common knowledge
that general relativity only applies to large-scale, high-mass systems, such
as solar systems, and does not work on the small-scale, low-mass level
where quantum mechanics holds. Moreover, the Minkowskian-Einsteinian
4-dimensional geometrical construct may indeed have proven to be very
successful in many cases, but the associated theory of gravitation has
nonetheless produced some rather worrying anomalies: the bore hole
anomaly, the earth fly-by anomaly, and the dark matter/dark energy
anomalies (see Cahill, "Black Holes," 44; "Resolving"; also see McCarthy
358).
Contrary to what mainstream post-geometric physics42 suggests, we
cannot simply reduce the process of nature to a geometrized 4-dimensional
continuum without having to pay any price for it. We do not have to look
very far to see what has to be given up in exchange. When setting up such
an artificial geometrical arena,43 populated by infinitesimal "point events" and "point observers,"44 it should not come as a surprise that its characteristics
do not fully match up with those of "Nature-in-full": "[The] relativistic
space-time structure of point events and signals is only an abstraction
arising from the externalization of the undivided activity of... process
considered as a whole" (Papatheodorou and Hiley 249).
In other words, it is the premature removal of all processuality, creative
potentiality, and subjectivity ("innerworldliness") from our worldview
that causes our contemporary mainstream physics to portray nature as if
it were entirely timeless and devoid of experience. However, physics
itself-being an empirical science-is utterly based on experience to begin
with.
Since no point observer can be reasonably expected to exhibit any
level of conscious experience, the inference that the experience of time
should be considered an illusory side-effect of physical reality is rather gratuitous. Such an inference simply reformulates what was already tacitly
smuggled in at the beginning. That is, such an inference-as it relies on
the physicalist orthodoxy of mainstream physics-necessarily presupposes
the validity of the main pillars on which mainstream physics rests. This
includes the Galilean cut between quantifiable aspects of nature (e.g.,
length, mass, and time duration) and qualitative ones (e.g., the redness of
red, the warmth of heat, and the hurtfulness of pain), which already from
the very beginning strips the process of empirical observation of all its
experiential aspects. Further on down the line, then, by reducing live
observers to abstract "point observers," post-geometric physics is basically
telling us that it holds consciousness to be irrelevant because it does not
fit into our quantitative account of nature from which consciousness has
already been removed. Just as we saw earlier with Aristotle's teleological
physics, this amounts to a tautology, i.e., trying to justify something by
falling back on its presupposition.
This can already be interpreted as a subtle hint that, despite what such
physical abstractions may sometimes make us believe, we are not equivalent
to passive, point-like end-recipients of numerical data extracted from an (according to mainstream physics) entirely physical and mathematically tractable real world out there. As will be discussed in Chapter 4 ("Life
and Consciousness"), living observers are actually seamlessly embedded
endo-processes of nature that cannot be readily reduced to purely abstract
point-observers.
There is of course a whole lot more to be said about why it is better
not to think of nature as a whole in terms of geometry, point events, and
point-observers. For now, however, the thing to be remembered is primarily
that a bio-centric worldview in which organisms are seen as seamlessly
embedded participants of the process of nature is not likely to be compatible
with a geometrical continuum populated by point-size spectators that are
located outside of the events they aim to become informed about, instead
of inside the natural world in which they are participating. For those
readers who would like to know more about possible reasons to reject the
idea of point-observers, in Sections 4.2.3 to 4.3.1 there is discussion of
how conscious organisms are well-embedded living beings that get to
sculpt their sense of self and world by being seamlessly integrated
participants of the same natural world they are trying to make sense of.
2.5.3 Relativity of simultaneity means that our experience of time is illusory (or not?)
With regard to the relativity of simultaneity, we may consider the
following: the fact that different observers moving at different enough
speeds may witness and undergo the effects of well-separated events in a
different order showed the failure of Newton's absolute and objective
present moment. The idea that there was a steadily advancing, absolute
now-valid for the entire universe and identical for every observer within
it-turned out to be flawed.
However, this should not necessarily mean that nature is entirely
without processuality. In other words, the scientific finding of the relativity
of simultaneity does not justify the conclusion that nature is devoid of a
present moment effect-nor does it license the claim that there is no
passage of time and that nature should therefore be entirely non-process.
Although different observers may indeed not be able to reach an agreement
on whether two well-separated events occur simultaneously or not, this
does not mean that nature is beyond doubt timeless and non-processual.
The relativity of simultaneity does indeed give a fatal blow to Newton's
absolute and universally valid time, but according to Milic Capek-a
process philosopher and philosopher of science who spent much of his
career trying to disentangle the many fascinating intricacies of space and
time-this does not automatically imply the end of temporality as understood
in a more wide-ranging sense:
Newtonian time may be only a special case of the far broader concept
of time or temporality in general in the same sense that the Euclidean
space is a specific instance of space or spatiality in general. If we
admit this possibility, then the negation of the Newtonian time entails
an elimination of temporality and change in general as little as the
giving up of the Euclidean geometry destroys the possibility of any
geometry. (Capek 507)
Although the steps leading to the block universe interpretation seem quite
sound and plausible, the whole argumentation for the non-processuality
of nature is in fact not as watertight as one might wish. Sure enough, the
conclusion of nature's timelessness follows from its premise, which is the
relativity of simultaneity. However, "it is simply not true that simultaneity
and, in particular, succession of events are purely and without qualification
relative" (Capek 508).
In fact, actual relativity of simultaneity, i.e., the case when two observers disagree about the (non)simultaneity of two events, E1 and E2,
because each observer has a different order in which these events are
observed-will only occur under precisely defined conditions. That is,
relativity in the observed order of events can only take place when the
spatial interval45 between the events E 1 and E 2 is greater than the maximum
reach that causal action can have within the temporal interval (t2 - t 1) that
separates them. 46 In other words, any two events E I and E 2 that appear in
one particular order for observer 0 1 can only have a different order for
observer 0 2 when the spatial separation between the events is greater
than the distance that light can travel within the available time frame (t2
- t 1) between their respective coming-into-actuality (Capek 508; see also
Weinert 161-184).
[Figure 2-3 appears here: a Minkowski space-time diagram with time axes t and t', the future and past causal light cones, the "here-now" origin, and the "elsewhere" regions outside the cones.]
Fig. 2-3: Minkowski space-time diagram. Although Minkowski assumes that
the spacetime continuum should really have a total of four dimensions, for ease
of depiction only one spatial dimension is shown in his space-time diagram (see
illustration on the right-hand side). On the left, one spatial dimension has been
added to allow for improved visualization. Each observer or event at "here-now"
can be affected only by events within the past causal light cones. The cones are
depicted with a 45° angle because, by convention, the speed of light is here
equivalent to 1 space unit per time unit. Therefore, a world line cannot exceed
the angle of 45°.

VAN DIJK/Process Physics, Time, and Consciousness 53

When their separation in space is smaller than or equal to the temporal
interval multiplied by the speed of light-or, in other words, when
s ≤ c·(t2 - t1)-then both observers will see these events occurring in the
same order. In other words, all events that satisfy this condition will
always have the same order of occurrence for all sufficiently close
observers-whether they be moving or not.
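Capek's condition can be checked directly against the Lorentz transformation. The sketch below is illustrative (the function names and numerical values are mine, not the article's): it boosts two events into a moving observer's frame, in natural units where c = 1, and reports whether their temporal order flips. Only the spacelike pair (spatial separation greater than c·(t2 - t1)) can change order.

```python
# Illustrative sketch: order of events under a Lorentz boost, in units
# where c = 1 (one space unit per time unit, as in Fig. 2-3).

C = 1.0  # speed of light in natural units

def boosted_time(t, x, v, c=C):
    """Time coordinate of event (t, x) for an observer moving at velocity v."""
    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    return gamma * (t - v * x / c ** 2)

def order_flips(e1, e2, v):
    """True if the two events occur in the opposite order for the moving observer."""
    (t1, x1), (t2, x2) = e1, e2
    dt_rest = t2 - t1
    dt_boosted = boosted_time(t2, x2, v) - boosted_time(t1, x1, v)
    return dt_rest * dt_boosted < 0

# Spacelike pair: spatial separation 5 > c*(t2 - t1) = 1, so order can flip.
print(order_flips((0.0, 0.0), (1.0, 5.0), v=0.5))   # True
# Timelike pair: spatial separation 0.5 < c*(t2 - t1) = 1, order is invariant.
print(order_flips((0.0, 0.0), (1.0, 0.5), v=0.5))   # False
```

For the timelike pair, no sub-light velocity whatsoever reverses the order, which is exactly the invariant "before-after" relation Capek appeals to below.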
To get a better understanding of how this actually works, let us take
a look at a Minkowski space-time diagram (Fig. 2-3). Within the causal
light cone, events are seen to neatly queue in line so that they form "world
lines." According to Minkowski, these world lines are the static spatiotemporal
threads of events that, in loose analogy to beads on a string, are stockpiled
one after the other all across the block universe-from its earliest of
beginnings into the infinite future. In every frame of reference, any
observer's "here-now" will precede all events belonging to the observer's
causal future, while it follows after all events found within the backward
cone of the observer's causal past. The thus obtained unidirectionality of
events is sometimes referred to as the arrow of time.47 Despite its slightly
misleading name, this metaphorical arrow does not move into the future,
but is typically thought to point towards it, thereby indicating a manifest
asymmetry in nature between past and future events:
By convention, the arrow of time points toward the future. This does
not imply, however, that the arrow is moving toward the future, any
more than a compass needle pointing north indicates that the compass
is traveling north. Both arrows symbolize an asymmetry, not a
movement. The arrow of time denotes an asymmetry of the world in
time, not an asymmetry or flux of time [emphasis added]. The labels
"past" and "future" may legitimately be applied to temporal
directions, just as "up" and "down" may be applied to spatial
directions. (Davies, "That Mysterious," 9)
According to the static block universe interpretation, we should, at least
theoretically, be able to find all past and future events queued up in the
form of world lines when tracing down the direction of this arrow of time.
All these world lines and their past and future events are seen to exist
together at once in one eternal, statically frozen "timescape," and will
forever remain so (see Davies, "That Mysterious," 8). Accordingly, the
early events are queued up at the beginning, while the later, posterior
events are stockpiled further down the line. This asymmetry between the
past and future chunks of the block universe motivated Capek to reason
that, on close enough scrutiny, relativity theory actually suggests that
everyone's "local now" ultimately has an absolute existence, with a strictly
fixed order of what comes after and what comes before:
Since this "before-after" relation is invariant in all systems, it follows
that in no frame of reference can my particular "here-now" appear
simultaneous with any event of my causal future or with any event in
my causal past. This follows from the fact that the succession of the
events constituting the world lines can never degenerate into
simultaneity in any system: this obviously applies to the world line of
my own body. In this sense my "now" still remains absolute. It is not
absolute in the classical Newtonian sense since it is confined to
"here" and does not spread instantaneously over the whole universe.
Yet, it remains absolute in the sense that it is anterior to its own
causal future in any frame of reference. (Capek 518-519)
Capek's argument boils down to this:
On every individual world line, the "here-now" moment separates
unambiguously the past events from the unrealized potentialities of
the future events, and this separation holds in all other possibly
existing frames of reference. It certainly cannot be called arbitrary. In
this precise sense each "here-now" is absolute. (520)
This basically means that it should be impossible for any such absolute
"here-now" to simultaneously be part of someone else's past, or, equivalently,
it means that one observer's future events cannot already be part of another
observer's present reality:
No event of my causal future can ever be contained in the causal past
of any conceivably real observer. By "conceivably real observer" we
mean any frame of reference in any part of my causal past or
anywhere in my present region of "elsewhere." In a more ordinary
language, no event which has not yet happened in my present "here-
now" system could possibly have happened in any other
system .... Since the inclusion into the causal past of the observer is the
necessary condition for the perceivability of events, it means that the
postulated existence of future events is unobservable in
principle .... [T]he virtualities of our future history which our earthly
"now" separates from our causal past remain potentialities for all
contemporary observers in the universe. Something which did not yet
happen for us [locally] could not have happened elsewhere in the
universe. (Capek 519-521)
In an attempt to explain the implications of the special theory of
relativity in lay terms, however, renowned physicist Brian Greene claims
that observers at, say, 10 billion light years from earth would be able to
make their "now slice" coincide with what we would call our remote
future-just by leisurely moving towards us (Greene 134-138). This,
according to him, is simply an inescapable consequence from the fact that
we live in a static block universe.
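Greene's claim rests on the standard low-velocity simultaneity shift Δt = v·d/c², which follows from the Lorentz transformation. The sketch below is a back-of-the-envelope illustration (the function name and the 1 m/s walking speed are my choices, not Greene's exact figures): at 10 billion light years' distance, even a leisurely walk tilts the distant observer's "now slice" across decades of Earth history.

```python
# Illustrative sketch: how far a distant observer's simultaneity slice
# shifts on Earth, using dt = v * d / c**2 (the low-velocity tilt of the
# "now slice" in special relativity).

C = 299_792_458.0          # speed of light, m/s
LY = 9.4607e15             # one light year in meters
SECONDS_PER_YEAR = 3.156e7

def now_slice_shift_years(v_mps, distance_ly):
    """Shift (in years) of the distant observer's 'now slice' on Earth."""
    dt_seconds = v_mps * distance_ly * LY / C ** 2
    return dt_seconds / SECONDS_PER_YEAR

# A leisurely 1 m/s walk toward Earth, from 10 billion light years away:
print(round(now_slice_shift_years(1.0, 1e10)))  # -> roughly 33 years
```

This is why Greene can speak of "leisurely moving": at cosmological distances the tilt of the simultaneity plane is enormously amplified, so tiny velocity changes sweep the slice across what we would call our remote past or future.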
There is nonetheless good reason to doubt this conclusion. Namely,
the argument for the static block universe seems to depend largely on
what Whitehead called the fallacy of misplaced concreteness-the confusion
of nature itself with our theoretical abstractions of it. In this case, the
confusion is between (a) geometric conceptions of space and time, on the
one hand, and the process of nature, on the other hand, and (b) between
point-events and point-observers and actual events and live observers. On
top of that there is even another level of complication which makes things
even more mixed-up. That is, other theoretical abstractions in special
relativity-such as ideal clocks, ideal measuring rods,48 four-dimensional
coordinate systems R4, space-time diagrams, causal light cones, photons
that basically serve as bits of information, etc.-are typically used in
further interplay with one another, thus yielding a level of meta-abstraction
which can then again be quite easily confused with the actual process of
nature as well. For instance, Alfred A. Robb argued in 1914 that:
The work of Minkowski is purely analytical and does not touch on
the difficulties which lie in the application of measurement to time
and space intervals and the introduction of a coordinate system. As
regards such measurement, one cannot regard either clocks or
measuring rods as satisfactory bases on which to build up a
theoretical structure such as is required in this subject. One knows
only too well the difficulty there is in getting clocks to agree with one
another; while measuring rods expand or contract in a greater or
lesser degree as compared with others. ... It is not sufficient to say
that Einstein's choices are ideal ones: for, before we are in a position
of speaking of them as being ideal, it is necessary to have some clear
conception as to how one could, at least theoretically, recognize ideal
clocks and measuring rods in case one were ever sufficiently
fortunate as to come across such things; and in case we have this
clear conception, it is quite unnecessary, in our theoretical
investigations, to introduce clocks or measuring rods at all. (Robb 13)
Notwithstanding criticism like this, Eddington's solar eclipse experiment
and other experiments following in its wake turned out to agree so well
with the predictions of Einstein's relativistic physics that most people in
the field sooner or later became dedicated believers.49 Subsequently, this
was what encouraged physicists to take a serious look at Minkowski's
block universe interpretation as well. What they found was that Minkowski
had linked the relativity of simultaneity with the tilted simultaneity planes
in his geometry-based space-time diagrams that could be projected from
the past causal light cones into the future-directed cones. But instead of
interpreting this as an abstraction of nature, Minkowski held that it was
in fact a faithful representation of nature. That is, he claimed that nature
was actually a geometry-based 4-dimensional spacetime continuum and
that therefore such tilted simultaneity planes not only appeared in his
space-time diagrams, but that they really existed in nature-extending not
only into the past, but also into the future. Accordingly, the belief developed
that all of nature's future events are already lying "out there"-ready to
be a part of any observer's "now slice."50
In retrospect, it seems that (1) the "discovery" of relative simultaneity
for space-like separated events, and (2) the above-mentioned further mix-
up of initial abstractions, have together been so compelling to Minkowski,
Einstein, and many of their contemporaries that they were willing to drop
every sense of temporality without much ado. Unfortunately, however,
while doing so they became so much committed to their geometrical
interpretation of nature that they entirely overlooked the lurking risk of
the fallacy of misplaced concreteness. Because they were most likely so
mesmerized by how well the block universe concept worked out, they
went on to marginalize the fact that both the relativity of simultaneity and
the relativity of succession of events do not apply under all conditions
(see Capek 508; see also Weinert 161-184).
This debatable idea of a static, timeless block universe has survived
a century of physical research and has since its inception established itself
as the standard interpretation of the special theory of relativity. But, with
all the criticism from Sections 2.5 to 2.5.3, we can now consider ourselves
sufficiently equipped to conclude that the concept of a static, timeless
block universe is at least premature, and most likely flawed.
Although this verdict seems to open up the possibility of a dynamic
block universe (as suggested by Capek, Weinert, Ellis, and others), this
alternative is still susceptible to some of the earlier-mentioned objections
(for instance, it still does not take into account that we, conscious observers,
are seamlessly embedded endo-processes of the greater overarching omni-
process which is nature as a whole). The dynamic block universe view
still hinges on the same acts of abstraction that helped facilitate the static
block universe.51 A dynamic block universe interpretation will not lower
the risk of committing the fallacy of misplaced concreteness52 and will
still lead to the undesirable bifurcation of nature (as it still reduces live
human beings to insentient point-observers).
In fact, the acts of abstraction that accompany the setting up of any
representational model of nature are part and parcel of what Lee Smolin
calls "doing physics in a box." To get a better understanding of how our
interpretations of nature actually get formed, let us take a look at how our
current way of "doing physics in a box" is actually made to work.

3. Doing physics in a box


3.1 The Newtonian paradigm
In his provocative and stimulating book Time Reborn (2013), Lee
Smolin argued that contemporary mainstream physics is still very much
based on the Newtonian paradigm:
My argument starts with a simple observation: the success of
scientific theories from Newton to the present day is based on their
use of a particular framework of explanation invented by Newton.
This framework views nature as consisting of nothing but particles
with timeless properties whose motions and interactions are
determined by timeless laws. The properties of the particles, such as
their masses and electric charges, never change, and neither do the
laws that act on them. This framework is ideally suited to describe
small parts of the universe, but it falls apart when we attempt to
apply it to the universe as a whole. All the major theories of physics
are about parts of the universe-a radio, a ball in flight, a biological
cell, the Earth, a galaxy. When we describe a part of the universe we
leave ourselves and our measuring tools outside the system. We leave
out our role in selecting or preparing the system we study. We leave
out the references that serve to establish where the system is. Most
crucially for our concern with the nature of time, we leave out the
clocks by which we measure change in the system. (Smolin, Time,
xxiii)
When ignoring the limitations of the Newtonian paradigm, we may all
too easily forget that the exclusion of clocks, measuring rods, and ourselves
is in fact an act of idealizing abstraction. By failing to question the actual
validity of this abstraction, we may soon become convinced that our simple
equations for our local isolated systems can be extrapolated to the global
universe without any trouble whatsoever. And that is in fact exactly what
happened after Galileo. That is, building from Galileo's proportional
relationships between distance and time, Newton was able to add force
and mass into the mix, and in so doing developed his more advanced
Newtonian equations of motion and his universal law of gravitation. In
this way he provided a multipurpose set of equations that could be used
not only to describe the movement of ordinary objects on earth, but also
those of heavenly bodies,53 thus leading to the famous "clockwork universe"
worldview in which everything is determined from beginning to end. This
even motivated Pierre-Simon Laplace to claim that it should in principle
be possible to calculate any future state for all of nature if only we were
given "at a certain moment...all forces that set nature in motion, and all
positions of all items of which nature is composed" (Laplace 4).
Nowadays we may no longer be committed to such a strict determinism,
but when push comes to shove we still seem to think of our physical
equations as being essentially deterministic. Over the years mainstream
physics has thus developed the belief that it should in principle be possible
to predict any system's temporal evolution with perfect precision. If not
yet today, then at least not too long from now, any system's future state
should thus be attainable from its initial conditions and the laws of nature
that are thought to govern the system's behavior (Smolin, Time, 94).54 In
this manner a system of interest is typically singled out from its environment,
then some convenient intermediate state is chosen to serve as its initial
condition, after which the system is put through the wringer of natural law.
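The recipe just described (single out a system, fix an initial condition, apply the law) can be shown in miniature. The sketch below is purely illustrative (the oscillator, the function name, and the step sizes are my choices, not Smolin's): a fixed, timeless "law" plus an initial condition deterministically generates every later state of the boxed-off system.

```python
# Illustrative sketch of the Newtonian paradigm: state + timeless law
# -> future state, via small explicit time steps. The "law" here is the
# harmonic-oscillator force F = -k*x; velocity is updated before position.

def evolve(x0, v0, k=1.0, m=1.0, dt=1e-4, steps=10_000):
    """Deterministically advance the state (x, v) from its initial condition."""
    x, v = x0, v0
    for _ in range(steps):
        a = -k * x / m        # the fixed "law" acting on the current state
        v += a * dt
        x += v * dt
    return x, v

# Same initial condition, same law -> the same future state, every time,
# in the Laplacean spirit of the "clockwork universe."
print(evolve(1.0, 0.0) == evolve(1.0, 0.0))  # True
```

Everything outside the function's arguments, including the clock that dt is measured against and the physicist running the computation, has been silently "left outside the system," which is precisely Smolin's point.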

3.1.1 The exophysical aspect of the Newtonian paradigm


The first time that these key elements-natural system, initial conditions,
and mathematically spelled-out lawful regularities-appeared in the method
of physics, was when Galileo wrote out his experimental findings on
falling objects. In fact, in order to make all this possible, Galileo had to
separate what he took to be the objective primary qualities of the natural
world (e.g., shape, size, position, etc.) from any subjective secondary
qualities thereof (e.g., color, sound, tactility, etc.). In his view, the latter
amounted basically to no more than superficial name tagging as performed
by an observer's conscious mind:
Now I say that whenever I conceive any material or corporeal
substance, I immediately feel the need to think of it as bounded, and
as having this or that shape; as being large or small in relation to
other things, and in some specific place at any given time; as being in
motion or at rest; as touching or not touching some other body; and
as being one in number, or few, or many. From these conditions I
cannot separate such a substance by any stretch of my imagination.
But that it must be white or red, bitter or sweet, noisy or silent, and of
sweet or foul odour, my mind does not feel compelled to bring in as
necessary accompaniments. Without the senses as our guides, reason
or imagination unaided would probably never arrive at qualities like
these. Hence I think that tastes, odours and colours, and so on are no
more than mere names as far as the object in which we place them is
concerned, and that they reside only in the consciousness. Hence, if
the living creature were removed, all these qualities would be wiped
away and annihilated. But since we have imposed upon them special
names, distinct from those of the other and real qualities mentioned
previously, we wish to believe that they really exist as actually
different from those. (Galileo, "The Assayer," 274)
In this way, Galileo introduced into physics the distinction between what
he considered to be the physical side of nature and what, in his opinion,
belonged to the mental side. By stripping physical objects from any
subjective sensory qualities, at last it became possible to make an objective
and completely quantitative comparison between physical properties. This
crucial distinction between the physical and the mental enabled Galileo
to formulate what has later become known as his "Law of Fall," s ∝ t²,
the lawful mathematical expression in which the distance s is directly
proportional to time squared.
It was not until after Galileo that this proportional relation was actually
converted into what is now sometimes called Galileo's equation s = ½at²
(with s standing for distance, a for the constant of acceleration, and t for
time). Evidently, this posthumous tribute was given to honor Galileo's
pioneering contributions to the development of the physical equation. It
is, however, far less obvious that it was Galileo's act of dissecting the
physical from the mental that truly ushered in the era of modern physical
research. This was without doubt an enormously crucial step in the history
of science (see Goff) as it set the stage for all scientists who followed in
Galileo's footsteps and played an essential role in their attempts to one
day achieve the full mathematization of nature (see Dijksterhuis; Smolin,
Time, 107 and 245).
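The equation itself can be exercised in a few lines. The sketch below is a standard kinematics illustration (the function name and the value a = 9.81 m/s² are my assumptions, not from the article): it computes s = ½at² and recovers Galileo's proportionality, with distances growing as the squares of the elapsed times.

```python
# Illustrative sketch: Galileo's Law of Fall, s ∝ t², in its later
# equation form s = (1/2)*a*t², for a body falling from rest.

def fall_distance(t, a=9.81):
    """Distance fallen from rest after time t under constant acceleration a."""
    return 0.5 * a * t ** 2

# Distances after 1, 2, and 3 seconds stand in the ratios 1 : 4 : 9,
# independently of the value of a.
print([fall_distance(t) / fall_distance(1) for t in (1, 2, 3)])
```

Note that nothing in the function refers to color, sound, or any other "secondary quality": the equation only works because everything unquantifiable was stripped away beforehand, which is the point the surrounding text goes on to press.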
Despite the great progress that had been made by embracing Galileo's
method of separating subjective qualities from measurable quantities,
there was also a serious downside to this way of looking at nature:
... Galileo ... said that the scientific method was to study this world as
if there were no consciousness and no living creatures in it. Galileo
made the statement that only quantifiable phenomena were admitted
to the domain of science. Galileo said: "Whatever cannot be
measured and quantified is not scientific"; and in post-Galilean
science this came to mean: "What cannot be quantified is not real."
This has been the most profound corruption from the Greek view of
nature as physis, which is alive, always in transformation, and not
divorced from us. Galileo's program offers us a dead world: Out go
sight, sound, taste, touch, and smell, and along with them have since
gone esthetic and ethical sensibility, values, quality, soul,
consciousness, spirit. Experience as such is cast out of the realm of
scientific discourse. Hardly anything has changed our world more
during the past four hundred years than Galileo's audacious program.
We had to destroy the world in theory before we could destroy it in
practice. (R.D. Laing55 as quoted in Capra 133)
In fact, physics has nowadays become so theoretical and so much centered
around mathematics and its equation-based method of representation that
many physicists seem to think of nature almost solely in terms of physical
equations-even to the extent that these equations are often mistaken for
the natural world they were actually meant to represent. In doing so,
however, these physicists are actually forgetting that the whole enterprise
of trying to grasp nature in terms of mathematics is only made possible
by first stripping away everything that could not be converted into
mathematics in the first place: the redness of red, the sweetness of sweets,
and the silkiness of silk cannot be converted into objective and lawful
mathematical equations since there is no way for anyone to have access
to someone else's sensory life. For this reason, the Newtonian
paradigm-which basically boils down to the idea that it should indeed
be possible in principle to one day grasp the entire whole of nature within
the language of rigid mathematics-is ultimately based on an incomplete
picture of nature and made possible only by the premature banishment of
subjectivity (see Goff).
In addition to Laing's and Goff's objections, there seems to be yet
another downside to Galileo's separation of object and subject. That is,
the universe as a whole can never be observed-let alone be measured
with clocks and measuring rods-from some ideal, all-encompassing
"view from nowhere" (see Nagel), imagined as a perspective from outside
our natural universe. This impossible "view from nowhere" not only
amounts to a kind of mythical outside perspective on what is generally
held to be an entirely physical world, but it also refers to the assumption
that observation can be performed while totally neglecting any possible
observer influence on the system-to-be-observed. That is, by acting as if
the observer is not really present at all, as if a neutral and observerless
view is actually all there is, the process of observation can be presented
as if it were an assumption-free, objective, and non-participating act of
information intake.
However, quantum theory suggests that, even when the subject system's
presence is minimized, the very act of harvesting information from a
quantum system will make it collapse into one definite state, although
prior to observation the system is typically held to exist in a superposition
of all quantum states that comply with its wave function. So, despite the
fact that the observer's presence is neglected in theory, it seems indispensable
in practice. Notwithstanding this fundamental limitation, our highly
esteemed present-day physics firmly sticks to its tradition of "doing physics
in a box" with all unfavorable consequences there may be.
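The observer-dependence just described can be caricatured in a few lines. The sketch below is a toy model only (the function name and the two-state example are mine; real quantum mechanics involves much more than this): a normalized list of amplitudes stands in for the superposition, and "harvesting information" collapses it to one definite basis state with Born-rule probability equal to the squared amplitude.

```python
# Toy sketch: Born-rule "collapse" of a superposition upon measurement.
# Before measurement the state is a list of amplitudes; measuring returns
# one definite basis-state index with probability |amplitude|**2.

import random

def measure(amplitudes, rng=random.random):
    """Collapse a normalized state vector to one basis-state index."""
    r, cumulative = rng(), 0.0
    for index, amp in enumerate(amplitudes):
        cumulative += abs(amp) ** 2
        if r < cumulative:
            return index
    return len(amplitudes) - 1

# Equal superposition of two states: each outcome occurs about half the time.
state = [2 ** -0.5, 2 ** -0.5]
outcomes = [measure(state) for _ in range(10_000)]
print(0.45 < outcomes.count(0) / len(outcomes) < 0.55)  # True (with high probability)
```

Even in this caricature, the act of observation cannot be written out of the model: the definite outcome only exists once `measure` is called, which is the practical indispensability of the observer noted above.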

3.1.2 The decompositional aspect of the Newtonian paradigm


As mentioned above, the essence of the Newtonian paradigm can be
found in the belief that our physical equations should in principle be able
to represent nature-as-a-whole as if it were simply the sum total of physical
systems. The singling out of those physical systems, however, ultimately
hinges on only a few elementary acts of decomposition. That is, these
elementary decompositions are required to set the stage in which "the fine
art of doing physics" is to take place. In fact, the resemblance between
the art of doing physics and the art of doing theater is probably what must
have motivated physicist and well-known science popularizer Paul Davies
to portray nature in terms of the basic elements of drama:
If nature could be compared to a great cosmic drama in which the
contents of the universe-the various atoms of matter-were the cast,
and space and time the stage, then scientists considered their job to
be restricted entirely to working out the plot. Today, physicists would
not regard the task as complete until they had given a good account
of the whole thing: cast, stage and play. They would expect nothing
short of a complete explanation for the existence and properties of all
the particles of matter that make up the world, the nature of space and
time, and the entire repertoire of activity in which these entities can
engage. (Davies, About Time, 16)
Just as the average audience usually does not pay too much attention to
the stage-building preparatory work of the theater crew, most working
physicists typically remain quite indifferent to the very elementary acts
of decomposition that actually enable them to do physics in the first place.
Although mostly taken for granted once they have become accomplished
facts, these elementary acts of decomposition basically serve to reduce
nature to a compact and well-defined universe of discourse within which
physics can be made to work. In order to do so, usually without even
realizing that they are basically relying on several a priori, and thus at
root hypothetical nature-dissecting cuts, most contemporary mainstream
physicists commit to the following elementary acts of decomposition (Van
Dijk, "The Process"):

• the decomposition of nature into target side and complementary subject side;
• the decomposition of the subject side into the conscious observer's
"center of subjectivity" and its observation-enabling support systems,
measurement instruments, etc.;
• the decomposition of the target side into relevant system-to-be-observed
and irrelevant system environment;
• the decomposition of system-to-be-observed into its "constituent elements"56

Last, but not least, a good case can be made that these decompositions
are to be accompanied by some further, more controversial ones. The
controversy relates especially to the fact that our current mainstream
physics holds that nature is entirely timeless and thus basically non-
processual, whereas these remaining decompositions suggest that it is not
nature that is non-process, but contemporary mainstream physics itself.
Following this suggestion, below we will start out with the process
of nature. However, once our nature-dissecting gaze has partitioned the
process of nature into its alleged constituent elements, it can be looked at
as if there is no processuality left. That is, Galileo and Newton have
basically set the stage for the expulsion of both time and processuality by
the later Einsteinian-Minkowskian block universe interpretation. After
all, by decomposing the process of nature into spatial and temporal
dimensions, the following decompositions later enabled Einstein to fuse
space and time together "again" into one spacetime continuum.
However, this fusing together of space and time, when looked at from
a process perspective, may just as well be explained as a not entirely
successful attempt to glue together what should not have been taken apart
in the first place-namely, the undivided process of nature. Nonetheless,
the Einsteinian-Minkowskian block universe interpretation presented
nature as one giant block universe with past, present, and future frozen
solid into one static whole devoid of any unique and exclusive present
moment. Starting from the fundamental assumption that nature is inherently
processual, this would then require the following acts of decomposition:

• the decomposition of the process of nature into "occupied space" and


the "passage of time";
• the decomposition of the "passage of time" into a "geometrical timeline"
and an external and unidirectional "present moment indicator" moving
at a uniform rate;57
• the decomposition of "occupied space" into "empty space" and its "content."58

Indeed, before Einstein's arrival on the scene, it was commonly believed
that space and time, as derived here, were absolute dimensions existing
independently from the material-energetic contents within them. This was,
of course, Newton's interpretation of how nature should hang together.
So the above decompositions may be called Newtonian decompositions.
Einstein, then, although famous for having gotten rid of Newton's
absolute space and time, actually built upon their skeletal framework to
arrive at the idea of a unified spacetime continuum. That is, although he
specifically aimed to reject Newton's absolute space and time, he still
elaborated on these notions by gluing them together into one spacetime
construct. And despite the many successes that could be celebrated because
of this idea of unifying space and time into one whole, there unfortunately
seem to be some downsides as well. This is partly because it is often not
such a good idea to try to glue together what should not have been taken
apart in the first place. Just as a yolk and egg white will not make a whole
egg-let alone a whole chick-when being put together again,59 the
merging together of space and time into one spacetime continuum will
not result in the universe becoming whole and undivided. Already in the
process of separation there are things that will necessarily get lost. Galileo,
for instance, did not hesitate to separate target and subject world from
one another for the greater cause of being able to specify nature in terms
of mathematics. He made it particularly clear that all unquantifiable
subjective aspects of observation should be thrown overboard. Further,
the overall organization of nature as a whole is something that necessarily
falls out of reach when dissecting nature into any possible variety of
constituent elements.
Notwithstanding all these fundamental limitations, the exophysical-
decompositional method did give rise to physical equations that have been
able to reach a high degree of empirical adequacy and to bring us lots of
practical applications. At the heart of this success were the ideas of
"timeline" and "system state" that allowed the fruitful collaboration
between lawlike "physical equation" and "configuration space"-two
essential ingredients of the Newtonian paradigm (see Smolin, Time, 49-
50 and 71). Together, these concepts of "timeline" and "system state"
turned out to lend themselves quite well to the expression of natural order.
This "natural order" is in fact an assumption that forms one of the basic
pillars on which science rests. That is, it is an axiomatic principle of
science to assume order in nature. Without this assumption, it would not
be possible to do science at all (Robert Rosen, Life Itself, 58).
When combining the concept of "timeline" with that of "system state,"
this natural order could be expressed with the help of time-driven equations
of state-physical equations that were held to specify different successive
system states with each step in time. This actually worked so well that
the many achieved successes caused this idea of natural order to be elevated
to the maxim of "lawful order" or even "natural law" (see Robert Rosen,
Life Itself, 58):
[T]he basic cornerstone on which our entire scientific enterprise rests
is the belief that events are not arbitrary, but obey definite laws
which can be discovered. The search for such laws is an expression
of our faith in causality. Above all, the development of theoretical
physics, from Newton and Maxwell through the present, represents
simultaneously the deepest expression and the most persuasive
vindication of this faith. Even in quantum mechanics, where the
discovery of the Uncertainty Principle of Heisenberg precipitated a
deep re-appraisal of causality, there is no abandonment of the notion
that microphysical events obey definite laws; the only real novelty is
that the quantum laws describe the statistics of classes of events
rather than individual elements of such classes. (Robert Rosen,
Anticipatory, 9)
VAN DIJK/Process Physics, Time, and Consciousness 65

Due to its many successes, this notion of a law-abiding natural world
managed to totally overshadow the alternative narrative of emergent natural
order through habit formation (see Peirce 277; Smolin, Time, 147). Because
of this dominance, our way of doing physics has become largely geared
towards thinking of nature in terms of deterministic, lawlike physical
equations. On top of it all, however, this dominance has prevented us from
seriously questioning the validity of the above decompositions. That is,
instead of treating these decompositions as pre-theoretical interpretations
of nature-needed firstly to enable a universe of discourse for doing
physics, and secondly to be able to set up physical equations for the
expression of "lawful order"-we came to think of them as irrefutable
and irreplaceable features that should always, in one way or the other, be
included in our physics.
This belief has caused us to focus exclusively on (1) post-theoretical
interpretation of well-matured physical equations, (2) alternative, but
equivalent reformulations of these equations, and (3) enhancement of our
measurement technologies as the prime areas where breakthroughs should
be expected. The confusing multitude of different interpretations of the
quantum mechanics formalism, for Einstein' s relativity theories, and for
classical mechanics, for instance, can be seen as a tell-tale sign of the first
point. 60 As for the second point, string theory would be a good example,
but it is considered by more and more people to be an outdated project
(see Smolin, The Trouble; Woit). Regarding the third point, state-of-the-art
measurement technology (such as the LHC detector at CERN, Switzerland,
and the LIGO detectors in the US)61 can indeed help us uncover thus far
unexplored domains, but these will still operate within the Newtonian
paradigm and thus leave several essential aspects of nature out of the picture.
Spending our thoughts and energy mostly on such a post-theoretical
interpretation, reformulation, and measurement enhancement has kept
many researchers from trying to tackle the more fundamental pre-theoretical
interpretations. Pre-theoretical interpretation relates to the very "elementary"
acts of decomposition on which our way of doing physics is based. As
such, it is often thought to belong to metaphysics and philosophy, which,
in the eyes of many physicists, makes it an area that need not be looked
into any more since it is widely considered to have reached its mature end
stage.
Together with some other reasons, this has even caused most working
physicists to turn away from post-theoretical interpretation as well. That
is, the prevailing position nowadays seems to be that of trying to refrain
from interpretation altogether by retreating into operationalism or
instrumentalism (informally referred to as the "shut up and calculate!"
approach). Unfortunately for those with an aversion to interpretation,
however, this is just as much an interpretation as all other interpretations,
so what may be called the "interpretation-avoiding argument" does not
really hold.
In fact, in comparison with post-theoretical interpretation, which deals
primarily with the interpretation of an instrument-based phenomenology
of nature, pre-theoretical interpretation seems to take us one step closer
to how we make sense of nature. Therefore, if we ever want to find out if
and how we can do physics without first having to break the initially
unlabeled and unbroken natural world into pieces, we likely stand a far
better chance if we start rethinking our first acts of elementary decomposition.

3.1.3 From quantum wholeness to the subject-object split and nondecompositional decomposition
Given Niels Bohr's argument regarding the inseparability of observed
and observing system in quantum experiments ("Quantum"), contemporary
mainstream physics is burdened with the task to reunite what should not
have been separated in the first place. Although the first decomposition-i.e.,
the division of nature into target side and subject side-is necessary for
physics to enable experimentation, the very application of this division
in the experimental practice of quantum physics leads Bohr to the conclusion
that target and subject system ultimately form one inseparable whole. That
is, while the split between target and subject system is an epistemological
necessity to enable the practice of doing physics, this same practice suggests
that this split is, at the end of the day, an ontological unreality. Ultimately,
therefore, the split can be no more than a mere figure of speech, convenient
for didactic purposes within the context of physics, but a figure of speech
nonetheless. In the words of John Stewart Bell:
Now nobody knows just where the boundary between the classical
and quantum domain is situated. Most feel that experimental switch
settings and pointer readings are on this side. But some would think
the boundary nearer, others would think it farther, and many would
prefer not to think about it. ... A possibility is that we find exactly
where the boundary lies. More plausible to me is that we will find
that there is no boundary. (Bell, Speakable, 29-30)
According to Bell, many other near-fundamental concepts in physics are
equally dubious, because they are so intimately related with the target-
subject split:
The concepts "system," "apparatus," "environment," immediately
imply an artificial division of the world, and an intention to neglect,
or take only schematic account of, the interaction across the split.
The notions of "microscopic" and "macroscopic" defy precise
definition. So also do the notions of "reversible" and "irreversible."
Einstein said that it is theory which decides what is "observable." I
think he was right-"observation" is a complicated and theory-laden
business. Then that notion should not appear in the formulation of
fundamental theory. Information? Whose information? Information
about what? On this list of bad words from good books, the worst of
all is "measurement." It must have a section to itself. (Bell,
"Against," 34)
The following question arises: could there be a way around all those feeble
foundations and their ambiguous behavior in experimental practice? A
first clue can be found by looking at how the original unbroken wholeness
of target world to be observed and observing subject world is already torn
apart in the pre-measurement stage. That is, in both classical and quantum
physics, there is a well-established tradition of "apartheid" that comes so
naturally to physicists that they usually do not even think about it once, let
alone twice. This "apartheid" starts with the Galilean cut, by separating
the so-called "objective" and "subjective" aspects of observation from
each other in order to enable the explicitly quantitative way of doing
exophysical-decompositional physics.
As mentioned in Section 3.1.1, Galileo ("The Assayer," 274) paved
the way for mathematical physics by throwing overboard all subjective
and qualitative "secondary" aspects of observation (such as the colorfulness
of colors, the touch of textures, and the smell of scents). Since these aspects
cannot be quantified, tabulated, plotted against time, and turned into
mathematical relations, Galileo decided they should belong to the realm
of consciousness. Others before him had never felt the need to get rid of
all these sensory qualities, and so Galileo was the first to draw this cut
between the "world of objectivity" and that of "subjectivity":
[T]he supposition that material objects instantiate sensory qualities,
such as colours, shapes and odours, is incompatible with their having
an entirely mathematical nature. And hence it was necessary to strip
physical objects of their sensory qualities in order to make it
intelligible to suppose that the physical world could be completely
captured in mathematics .... However, for all its virtues, physics has
never been in the business of giving a complete description of reality.
It aims to give a mathematical description of the fundamental causal
workings of the natural world. The formal nature of such a
description entails that it necessarily abstracts not only from the
reality of consciousness, but from any other real, categorical nature
that material entities might happen to have. (see Goff)
Although the embargo on sensory qualities made sure that only quantifiable
aspects (such as location, size, and weight) were taken into account, all
mathematical physics that followed after Galileo basically got burdened
with a built-in, hidden dualism. As long as we stick to Galileo's cut, all
kinds of problems related to this hidden dualism will continue to plague
physics. For instance, the necessity to set up a universe of discourse with
a dedicated subject side is fundamentally at odds with any attempt to apply
our physical equations to nature as a whole. For, in that case, although it
is typically taken for granted in routine, small-scale measurement situations,
the entire subject side (including measurement gear, such as clocks,
measuring rods, and also the conscious observer) has to be located outside
the natural universe, which is impossible (see Smolin, Time, 46 and 80).
Nonetheless, in quantum physics, not only is the Galilean cut required
to enable the mathematization of observed events, but "quantum particles"
also need to be "soaked loose" from their embedding environment (see
De Muynck 74-75, 83, 90-91, 94) before they can be submitted to
measurement by using, say, a bubble chamber, a photographic plate, or
some strategically positioned photodiodes.
Were we to retrace our steps, by trying to "re-submerge" those "quantum
particles" back into their embedding environment and undo the seminal
Galilean cut, what kind of decomposition would still enable us to get a
hold of nature? Would we be forced to fall back on the naked
eye-unprejudiced and unmediated by technological-mathematical tools?
Or would this still not enable us to stop putting nature through the filter
of our nature-dissecting intellect? To be sure, we are ourselves seamlessly
part of the same process we are trying to make sense of. We can therefore
expect that our scientific reasoning will always have an element of
subjectivity in it, no matter how hard we try to prevent this.
As Max Planck put it: "Science cannot solve the ultimate mystery of
nature. And this is because, in the last analysis, we ourselves are part of
nature and, therefore, part of the mystery that we are trying to solve"
(Planck 217).
For this reason, when setting out to solve this mystery, we first need
to find out how we, seamlessly embedded observers, get to sculpt the
world in which we live into "conscious information" and become "knowers
of knowledge," whatever that may turn out to mean exactly. Sections
4.3 and 4.3.1 will give a more detailed discussion of how this could work.
For now, however, we will focus on the special kind of decomposition
that seems to be necessary (but perhaps not sufficient) to pull off such a
project. It is special in the sense that it can be thought of as a form of
"nondecompositional decomposition" since it pertains to the coming-to-
the-fore of seamlessly embedded endo-processes from within a greater
embedding ecosystem of background processuality:

• the decomposition of the initially unlabeled natural world into identifiable foreground signals and indiscriminate background noise;

So, instead of pre-theoretically decomposing nature into deeply contrasting
target and subject sides, and then trying to fuse both sides together
again through post-theoretical interpretation, why not try to do it the other
way around? That is, it is probably more fruitful to try to model nature as
an undivided process from the get-go, and then to allow for an inner
selection process among fore- and background patterns to take place within
this process model. How such a nondecompositional process model can
be made to work will be the subject of Chapter 5, where the more subtle
details of process physics will be discussed. For now, however, we will
try to go a little bit deeper into the concept of information and how it
should be reinterpreted to make it fit a nondecompositional way of modeling
our initially unlabeled natural world.

3.2 Measurement and information theory


The unlabeled natural world can be referred to in many different ways.
It can go by a wide variety of names and interpretations, ranging from the
Kantian "noumenal world" or "nature-in-itself," Alfred North Whitehead's
"extensive continuum," John Archibald Wheeler's "pre-geometry" or
"pre-space," David Bohm's and Basil Hiley's "holomovement" and
"implicate order," Bernard d'Espagnat's "veiled reality," the ancient Greek
apeiron, John Stewart Bell's world of pre-observational "beables,"62 or
"the vacuum state" in quantum field theory. Each and every one of these
terms seems to revolve around, or, at least leave room for, the idea of an
underlying world of potentiality. As such, these terms can go quite well
with the concept of "pure data in the wild" that can be thought of as
embryonic "fractures in the fabric of Being" (see Floridi 85-86) or as "ur-
differences that make a difference" (see Bateson 459; Van Dijk, "An
Introduction," 75).63 This conception of information as a difference that
makes a difference fits precisely in the scheme of the above mentioned
embedded endo-processes emerging from within a greater "ocean of
potential" which is the embedding background processuality (see Section 5.2).
In mainstream physics, however, quite another concept of information
holds sway. Information is here typically used in the syntactical sense of
numerically expressed empirical data. These empirical data, however,
need a prestated alphabet of symbols (Kauffman, "Foreword: Evolution,"
11) to be expressible in the first place. In nature, there is no such pre-
available character set,64 but classical information theory and mainstream
physics readily take their symbolic alphabets for granted without even
questioning their validity or thinking about how they were put together
at all (see Shannon and Weaver).
In contemporary mainstream physics, information is what crosses the
boundary between the target and subject side of a universe of discourse.
It is, however, through pre-theoretical decomposition (see Section 3.1.2
and 3.1.3) that the universe of discourse for doing contemporary mainstream
physics (see Fig. 3-1) can be put together at all. The main function of this
universe of discourse is to divide nature into target and subject side,
establish an information-exchanging relation between them, and then to
convert any thus acquired raw measurement results into well-refined
empirical data that should be compressible into concise mathematical
algorithms (or, in other words, into "lawful" physical equations).

[Figure: Natural system → Measuring instrument → Conscious observer]

Fig. 3-1: Simplified universe of discourse in the exophysical-decompositional paradigm

Analogous to physics, classical information theory requires a division
of the world into an information-providing data source and an information-
acquiring endpoint. It draws on an explicitly quantitative method for
analyzing the symbolic codes of transmitted and incoming messages. Such
quantitative information is typically employed in the form of syntactical
units of expression, for instance, as Morse code in telegraph communication,
as binary bit strings consisting of digital ones and zeroes, or as the 26
letters of the English alphabet.
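As a rough illustration of how a prestated symbol set fixes the quantitative "cost" of communication, the maximum information carried per symbol of a k-symbol alphabet is log2(k) bits. The sketch below is hypothetical and simplifies by treating all symbols as equally likely (real letter frequencies are non-uniform, which lowers the average):

```python
import math

def bits_per_symbol(alphabet_size):
    """Maximum information per symbol for an alphabet of equally
    likely symbols: log2(k) bits."""
    return math.log2(alphabet_size)

# Treating Morse's dot/dash (or a binary digit) as a two-symbol code:
bits_morse = bits_per_symbol(2)    # 1.0 bit per symbol
# The 26 letters of the English alphabet, assumed equiprobable:
bits_letter = bits_per_symbol(26)  # about 4.7 bits per letter
```

The point of the sketch is only that no such quantity is even definable until an alphabet has been prestated, which is exactly the assumption the surrounding text calls into question.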
In everyday life, once fully accustomed to such a symbol-based system
of communication, it basically becomes second nature to think of incoming
data mostly in terms of the meaning that we usually like to attach to it.
As one becomes a more advanced member of the reading community, for
instance, it will typically be quite difficult to suppress the tendency to
search a completely randomized text for the presence of familiar words,
abbreviations, or other sequences to which meaning can be attached. In
fact, language is so natural to us that we almost automatically think of
linguistic meaning being in the text at hand, or even in the digital coding
behind the fonts on our computer screen. But binary bits are utterly
meaningless to the computer itself, and so are the pixels and the magnetic
writings on the hard disk drive. In fact, a computer does not need to
understand what it is doing in order to do its work in a way that is
meaningful to us.
Likewise, in classical information theory, information need not, and
typically does not, have any meaning attached to it. Instead, it is primarily
understood in a purely quantitative sense as entirely passive data, the
becoming available of which will reduce the recipient's initial uncertainty
about the until then unknown contents of the original message as it was
released from the information source (see Shannon and Weaver 108-109;
Berger and Calabrese). Accordingly, information theorists have no use
for any meaning to be awarded to communication signals; from their
perspective it only matters how much of the original message is still intact
when it arrives at its final destination. Any possible meaning that a symbol-
interpreting recipient could perhaps attach to this message later on would
thus remain completely irrelevant.
In exophysical-decompositional physics the situation is much the
same. That is, despite Thomas Kuhn's argumentation that empirical data
will always be theory-laden,65 it is still implicitly assumed that empirical
data can eventually be assessed in an exact, objective, and purely quantitative
way and that any possible qualitative aspects are merely a matter of post-
measurement interpretation. This implicit assumption entails that all earlier
rounds of preparation, measurement fine-tuning, and re-interpretation are
typically taken for granted once a certain measurement practice has reached
maturity (see Van Fraassen 138-139; Van Dijk, "The Process").
Accordingly, the involvement of all kinds of subjective qualitative
choices about competing measurement theories, background assumptions,
initial conditions, etc., is thought to become largely irrelevant when the
long-aspired ideal of picture-perfect agreement between empirical data
and algorithm comes in sight. And although such a complete and absolute
empirical fit may be impossible in practice, it is widely believed that
increasing measurement precision will allow the observer to approach it
to the limit, thus progressively approximating the perfectly isomorphic
correlations that are anticipated in representational theory.
According to this representational view-which is basically inherent
to the exophysical-decompositional paradigm-empirical data can be
mined for regularity-exhibiting patterns that are thus held to be objectively
informative about the lawful regularity of nature itself; all this regardless
of any possible later conceived interpretations. In line with Robert Rosen's
analysis (Life Itself, 58-59) exophysical-decompositional physics is possible
only by adherence to the following two premises:

• First, there must be lawfulness to nature. That is, orderly causal relations
are thought to hold between nature's observable and even its
unobservable events.
• Second, it is supposed that these causal relations can be communicated,
at least partially if not entirely, by way of informational relations
that can be established between "nature as observed" and the current
conscious observer in charge.

Along these lines, these two principles of natural law boil down to nature
having an inherent orderliness associated with it. Arguably, then, this
allegedly inherent orderliness "can be matched by, or put into correspondence
with some equivalent orderliness within the self [i.e., the mind, or the
observer's 'center of subjectivity']" (Robert Rosen, Life Itself, 59). However
successful it has been in the past, this notion of natural law comes with
some not-to-be-underestimated limitations. In particular, it leads to a
worldview in which nature is being assessed exclusively in terms of its
effect on something else-for instance, on us, conscious observers, or on
measurement equipment-rather than in terms of what it is in and to itself.
Accordingly, physics will never be able to go beyond a phenomenology
of nature since it can be no more than an account based on the registered
change in the subject system's state (see Section 2.1 and 3.1.2 to 3.2.2).
In other words, the subject system's grain-size of observation will
automatically determine the lower limit of the observation range, thus
leaving out of scope all that may fall below it.
Moreover, this limitation relates directly to the nature of information.
That is, in this phenomenology-based approach to physics, information
can only be defined on the basis of the smallest distinction that can be
made by the subject system at hand. This phenomenology-based information
will never be able to attain the status of Luciano Floridi's "pure data in
the wild" which, arguably, should amount to fundamental "fractures in
the fabric of Being" (85). Instead, it is typically assumed that any such
subphenomenal "ur-differences" in the alleged "real world out there" must
somehow be capable of affecting the later-manifesting phenomena in the
observable domain, thereby allowing conscious observers to interpret
them according to some suitable context of use (Van Dijk, "The Process,"
note 4).
However, although the linear hierarchy of the exophysical-
decompositional universe of discourse may perhaps imply quite strongly
that there will indeed be measurement interaction, the question of how
such measurement interaction should actually take place will necessarily
remain unanswered.66 Likewise, there will always be an indefinite amount
of uncertainty regarding the extent to which our measurement
phenomenologies can be thought of as representing the above-mentioned
unobservable "ur-differences."67
In order to find a satisfactory way around this fundamental indeterminacy,
physics eventually more or less settled for an instrumentalist solution.
That is, the apparently unfeasible direct representational relation between
"pure data in the wild" and "well-refined empirical data" was simply
substituted by an indirect probabilistic relation holding among the many
members of a statistical ensemble of possible measurement outcomes. As
explained below, this statistical approach is a clear example of classical
information theory put into practice.

3.2.1 Looking at measurement in a purely quantitative, information-theoretical way
In mainstream physics, information is initially presented in a purely
syntactical sense. Accordingly, the "becoming available" of empirical
data via the process of measurement is typically seen to reduce the
observer's earlier existing uncertainty (see Shannon and Weaver 108-109;
Berger and Calabrese; Van Dijk, "An Introduction," 75) so that the reduction
of the observer's uncertainty can be treated in an entirely quantitative
manner. That is, the increase in knowledge about some target system of
choice is expressed exclusively in terms of the relative amount of data
that the observer is capable of extracting from it. The actually obtained value
state in each instance of measurement can be compared with the total
amount of potentially available value states, thereby leading to a purely
quantitative expression:
[I]n classical information theory (Shannon and Weaver) every single
incoming datum will lessen the recipient's uncertainty about a given
amount of possible alternatives. Like this, information is a measure
of the decrease in uncertainty about the occurrence of a specific event
from a given set of possible events. This information-theoretic
measure of uncertainty reduction is quantified as follows: The
probability of occurrence for each specific member (e.g., characters,
or system states)68 of a given set of possibilities (e.g., an alphabet, or
a fixed region in phase space) depends on the total amount of
available options and the relative frequency of all these alternatives
individually (see Fast 325-326). In other words, Shannon's
information-theoretic measure of uncertainty reduction indicates the
relative decrease in ignorance about which options from a
prespecified collection of possibilities get to be selected as the
individual content values of the data signal under construction. (Van
Dijk, "An Introduction," 76-77)
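Shannon's uncertainty-reduction measure described in the quotation above can be sketched in a few lines of Python. The scenario is a hypothetical toy example (eight equiprobable system states narrowed down by a measurement to two), not one taken from the text:

```python
import math

def shannon_entropy(probs):
    """Shannon's measure of uncertainty, H = -sum(p * log2(p)), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A priori: eight equally likely alternatives (e.g., possible system states).
h_before = shannon_entropy([1/8] * 8)   # 3.0 bits of uncertainty
# A posteriori: the incoming datum leaves two equally likely alternatives.
h_after = shannon_entropy([1/2, 1/2])   # 1.0 bit of uncertainty
# Information as the decrease in uncertainty:
information = h_before - h_after        # 2.0 bits gained
```

Note that the whole calculation presupposes a prespecified collection of possibilities; the information is defined relative to that collection, not to nature as such.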
Jerome Rothstein, a physicist who wanted to analyze nature in an information-
theoretic sense, thought that it should be possible for a physical system
to predict, at least partially, its own future state and that of its external
surroundings merely by performing operations on its environment and
itself. By thinking of physical systems as information-processing automata,
or "well-informed heat engines" (Rothstein, "Thermodynamics"), he can
be seen as having tried to reinterpret the Newtonian paradigm in a computer-
like fashion. That is, by looking at natural systems as if they were in fact
processing, storing, and responding to incoming information, he could
treat them as entirely analogous to the communication systems of information theory.
[Figure: Shannon's communication system (information source → transmitter → channel → receiver → destination, carrying message and signal, with a noise source acting on the channel) set side by side with the measuring procedure and apparatus (system of interest → measuring apparatus → indicator → observer, carrying measure of property and measured value, with an error source acting on the apparatus)]

Fig. 3-2: Rothstein's analogy (between communication and measurement systems)

In this way, Rothstein claimed that the processes of measurement
(observation) and communication should be seen as analogs, the former
being completely equivalent to the latter. Accordingly, he decided to depict
the measuring procedure in physics as if it absolutely conformed to a
communication system from Shannon's classical information theory (see
Fig. 3-2). As a result, he addressed information in physics in the following
way:
Let us now try to be more precise about what is meant by information
in physics. Observation (measurement, experiment) is the only
admissible means for obtaining valid information about the world.
Measurement is a more quantitative variety of observation; e.g., we
observe that [an object] is near the right side of a table, but we
measure its position and orientation relative to two adjacent table
edges [italics added]. When we make a measurement, we use some
kind of procedure and apparatus providing an ensemble of possible
results. For measurement of length, for example, this ensemble of a
priori possible results might consist of: (a) too small to measure, (b)
an integer multiple of a smallest perceptible interval, (c) too large to
measure. It is usually assumed that cases (a) and (c) have been
excluded by selection of instruments having a suitable range (on the
basis of preliminary observation or prior knowledge). One can define
an entropy [i.e., an information-theoretical measure of uncertainty]
for this a priori ensemble, expressing how uncertain we are initially
about what the outcome of the measurement will be. The
measurement is made, but because of experimental errors there is a
whole ensemble of values, each of which could have given rise to the
one observed. An entropy can also be defined for this a posteriori
ensemble, expressing how much uncertainty is still left unresolved
after the measurement. We can define the quantity of physical
information obtained from the measurement as the difference
between initial (a priori) and final (a posteriori) entropies. We can
speak of position entropy, angular entropy, etc., and note that we now
have a quantitative measure of the information yield of an
experiment. A given measuring procedure provides a set of
alternatives. Interaction between the object of interest and the
measuring apparatus results in selection of a subset thereof. When the
results of this process of selection become known to the observer, the
measurement has been completed. (Rothstein, "Information," 172)
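Rothstein's recipe of a-priori and a-posteriori entropies can be sketched as follows. The length-measurement numbers here (16 perceptible intervals, a three-value error spread) are invented for illustration:

```python
import math

def entropy_bits(probs):
    # Information-theoretic measure of uncertainty for an ensemble, in bits.
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A priori ensemble: 16 perceptible length intervals within the
# instrument's range, each taken to be equally likely beforehand.
a_priori = [1/16] * 16                   # entropy: 4.0 bits

# A posteriori ensemble: experimental error means three adjacent
# readings could each have produced the one observed value.
a_posteriori = [0.25, 0.5, 0.25]         # entropy: 1.5 bits

# Rothstein's "quantity of physical information" from the measurement:
physical_information = entropy_bits(a_priori) - entropy_bits(a_posteriori)
```

On this accounting the measurement yields 2.5 bits of "position information": the difference between how uncertain the observer was before and after.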

3.2.2 The Modeling Relation: relating empirical data with data-reproducing algorithm
Although it is more or less common practice within physics to speak
of physical equations as representing nature, this is rather a shorthand
expression for a somewhat more complicated state of affairs. What is
actually meant when physicists speak of "mathematical representations
of nature" is that the physical equations are algorithmic compressions of
well-refined samples of empirical data that are then associated with
something that is commonly known as a natural system. To be more precise:
In the exophysical-decompositional paradigm, samples of empirical
data are typically superimposed onto the observed aspects of nature
that we like to label "the natural system N." Like this, empirical data
and data-reproducing algorithms are basically "forced" to be
synonymous with "their" natural system N (see Robert Rosen,
Anticipatory, 71-75). Without this forced synonymy, physics as we
know it would not be able to function at all (Van Dijk, "The
Process," note 9).
Let us take a look at what mainstream physicists actually intend to
say when they speak of "the mathematical representation of nature." For
sake of convenience this will be illustrated by taking Galileo's inclined
plane experiment as a case in point: First, a target system is singled out
from its natural environment and then some particularly interesting aspect
of this system is chosen to be specified in terms of its effect on the
measurement equipment at hand. For instance, a bronze ball with mass m
could be picked out as the target system of choice after which its position
is selected as the main phenomenon of interest. When this ball is released
on top of an inclined plane it would roll down the ramp and cause the
strategically positioned warning bells to sound their notification signals (see
Section 2.1.2).
It should be noted that the physical observables of current interest,
distance and time, owe their status as quantifiable variables exclusively
to a very specific fact. It was, after all, Galileo's idea of putting together
a linear, continuous scale of chained-together standard units for both
distance and time that made it possible to measure length and duration
simply by counting the number of elapsed markings. By introducing this
method of determining the ratio between, on the one hand, standard units
of time and distance, and, on the other hand, their final count for each
measurement run,69 he basically ushered in the golden age of mathematical
physics. In fact, to this day this proportional means of assigning numbers
to what he took to be the physical characteristics of nature is very much
at the heart of what mathematical physics is all about.
This general methodology enables us to record what is being observed,
and then to index it by when it is observed (Robert Rosen, Life Itself, 69).
That is, by matching the bronze ball's change in position with the
simultaneously recorded change in time, Galileo could establish motion
as a physically quantifiable phenomenon. What is more, spelling out these
physical characteristics of nature in terms of numbers made it possible to
search for any mathematical regularity in the pattern of the thus harvested
empirical data. Well-formed samples of empirical data can be put together.
That is, a more polished, mathematically related time series [s(tn),tn] (with
n = 0, 1, 2, 3, ... ) can be derived from the initially raw empirical data to
get a smoothly curved transition from initial to final conditions.
For instance, from all temporally arranged number pairs one pair,
[s(t0),t0], may be chosen to serve as the starting entry. Then, after an
arbitrarily short interval,70 the next pair can be taken to serve as its
immediate successor.
As shown in Fig. 3-3a, the evolution of system states, from initial conditions
[s(t0),t0] to the immediately succeeding follow-up conditions [s(t1),t1],
and so on, is taken to be the result of some physical operation that is going
on in nature.
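The idea of a physical equation as an algorithmic compression of a sample of empirical data can be sketched in a few lines of code. In the following Python fragment, all numbers are invented for illustration: a noisy time series of ball positions [s(tn),tn] is compressed into the single parameter of the textbook rule s(t) = (1/2)at², chosen so that the short formula reproduces the eight raw number pairs as closely as possible.

```python
# A minimal sketch of "algorithmic compression": a noisy time series of
# ball positions [s(t_n), t_n] is compressed into the short physical
# equation s(t) = 0.5 * a * t**2, whose single parameter a is chosen to
# reproduce the data as closely as possible. All numbers are invented
# for illustration.
import random

random.seed(1)
a_true = 2.0                                      # "unknown" acceleration
ts = [0.5 * n for n in range(1, 9)]               # measurement times t_n
raw = [0.5 * a_true * t**2 + random.gauss(0, 0.02) for t in ts]  # noisy s(t_n)

# Least-squares estimate of a for the model s = 0.5*a*t^2:
# minimize sum(s_n - 0.5*a*t_n^2)^2  =>  a = 2*sum(s_n*t_n^2)/sum(t_n^4)
a_hat = 2 * sum(s * t**2 for s, t in zip(raw, ts)) / sum(t**4 for t in ts)

# The eight raw number pairs are now "compressed" into one parameter:
predicted = [0.5 * a_hat * t**2 for t in ts]
max_err = max(abs(p - s) for p, s in zip(predicted, raw))
print(f"a_hat = {a_hat:.3f}, worst disagreement = {max_err:.3f}")
```

Note that the raw pairs, not nature itself, are what the fitted formula reproduces; the equation is an abstraction from an already polished phenomenology.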

"natura l system" formal sys tem


(phen omenology) (abstraction)
a) b)

c) d)

I
p N
R F
0 E
C R
E E
s N
s C
E

e) f) DECODING
DECODING

ENCODING

Fig. 3-3: Steps towards Robert Rosen's Modeling Relation.

Once the numerical values for the initial and all following conditions
are recorded, they are inferred to relate to one another by a mathematical
operation that closely matches this physical operation (see Fig. 3-3b). In
this way, it is supposed that the empirical data extracted from the so-called
"physical world" is represented by the mathematical world. In turn, the
mathematically calculated results will have to be verified against the next
pair of numbers in the sample of empirical data. And if sufficient agreement
between calculated and observed values cannot be found, the initially
applied mathematical operation for getting from one state to the next will
have to be revised in order to achieve empirical congruence between
natural and formal system. This basically amounts to a more detailed re-
interpretation of the sample of raw empirical data by the newly applied
mathematical operation.
This complies very much with what we have already seen in Sections
3.2 to 3.2.2; namely, that the putting together of such a mathematical
representation is an act of abstraction. This means that any resulting
abstract equation should be converted back again into the empirical data
in order to confirm its agreement with measurement. After all, the raw
empirical data, if not the ultimate primary source of nature's information,
should at least be seen as the first level of information that is algorithmically
compressible. Accordingly, in the exophysical-decompositional paradigm,
it is actually those raw empirical data that make up the actual measurement
phenomenology of the process under investigation. And since we cannot
go beyond this phenomenology, we must realize that the thus achieved
empirical agreement is between "phenomenal" data and abstract algorithm,
not between nature and mathematics. This mode of perception-which
hinges exclusively on quantitative sense-perception, thereby excluding
all other, qualitative aspects of experience-was by Whitehead considered
a methodology that dealt with only half the evidence (MT 211; see also
Desmet, "Introduction," 15). That is, metaphorically speaking, physical
science "examines the coat, which is superficial, and neglects the body
which is fundamental" (MT 211). Accordingly, the processuality of nature
(on the extreme left; corresponding with "the body") can only be approximated
or implied by mathematical inference (which corresponds with the design
plan of "the coat"). In other words, when using the method of physical
equations, nature's processuality can never be grasped in full.
Despite this implicational character of data-reproducing algorithms,
our well-established physical equations are typically still thought of as
having a representational relation with the target system whose data they
are trying to replicate. In order to keep this representational interpretation
afloat, the empirical agreement between data and algorithm should reach
a level of at least near-perfection, if not beyond that. To be able to pull
this off, there always needs to be a "post-dictive"71 measurement encoding
in which the recorded samples of raw empirical results are converted into
more polished data that can then be mathematically compressed into a
short physical equation. Indeed, this measurement encoding amounts to
implicational mathematization.
Subsequently, on the way back from abstract mathematical results to
concrete phenomenal results, there also needs to be a predictive decoding
that extrapolates from the thus achieved physical equation what the
numerical values of future measurement outcomes are expected to be (Fig.
3-3c). 72 This predictive decoding, then, amounts to imputational re-
phenomenalization, or in other words, the retranslation of abstract,
algorithmically generated numbers into concrete measurement phenomena
where the calculated numerics are imputed (i.e., attributed or assigned)
to their matching measurement results. In so doing, it has even become
common practice to treat these algorithmically generated numerical values
as if they were in principle entirely synonymous with the natural system
they are supposed to portray.
In any case, together these encoding and decoding encryptions serve
to filter out any unwanted irregularities, data-contaminating noise,
measurement errors, etc., from the original, raw measurement results (see
Robert Rosen, Life Itself, 59-62; Van Dijk, "The Process"). However,
there is no procedure by which these encodings and decodings
themselves can be derived from the data or algorithms.73 This is in fact
directly related to the impossibility of having a watertight procedure for
algorithm choice:
[B]ecause there are no neutral criteria for choosing one algorithm
over the other (Kuhn, "Objectivity"), no one algorithm can be
considered the ultimate candidate. For instance, since goodness of fit,
consistency, broadness of scope, simplicity [beauty], and fruitfulness
may be at odds with each other, any choice for ranking these criteria
according to their alleged importance, or for finding an optimal
balance between them, can never be objective but is rather based on
personal preference, intuition, educated guesses, and the like. So,
together with the abovementioned encodings and decodings, all
criteria for choosing between competing algorithms are external from
the natural system N and the formal system F. Next to these already
major externalities, there are several other external criteria,
specifications, and decisions that play their part in setting up the
relation between data and algorithm. For instance, decisions have to
be made concerning (a) the frequency of sampling; (b) how many
entries the data sample should have as a minimum [in order to qualify
as a bona fide sample of empirical data]; (c) which statistical format
should be appropriate in which case; (d) which background theories
to apply in order to provide a more meaningful context for the in-itself
meaningless foreground algorithm. (Van Dijk, "The Process")


All in all, the act of encoding and decoding seems to depend to a great
extent on external criteria, specifications, and decisions. Accordingly, as
depicted in Fig. 3-3c, any physical equation that is meant to represent a
given target system in the above-explained sense is actually the result of
implication (from raw empirical data to sharply specified mathematical
input value), mathematical inference (which proposes a mathematical
relation between initial input value, or initial condition, and its subsequent
output values), and imputation (i.e., attribution of the mathematically
calculated end result back onto the raw empirical data). When there is
sufficient agreement between the raw measurement outcomes and the
mathematical calculations, or, to be more precise, when the transitions
from initial to follow-up conditions in both the natural system N and the
formal system F can be seen to commute (i.e., are considered to be
isomorphic within acceptable margins), then it is considered acceptable
for physicists to speak of "representation."
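The commutation requirement just described, namely that encoding, formal inference, and decoding together must reproduce the next raw observation within acceptable margins, can be illustrated with a toy sketch. All functions and numbers below are invented for illustration; the "formal system" here is simply the rule s(t) = t².

```python
# A toy sketch of the "commutation" test behind the Modeling Relation:
# going (encode -> infer -> decode) must agree, within a margin, with
# simply observing the next state. The encoding rounds raw readings to
# the instrument's resolution; the formal rule is the invented example
# s(t) = t**2.
def encode(raw_reading, resolution=0.1):
    """Measurement encoding: polish a raw reading into a sharp number."""
    return round(raw_reading / resolution) * resolution

def infer(s, t, dt=0.5):
    """Formal inference F: advance the encoded state by the rule
    s(t+dt) = s(t) + (t+dt)**2 - t**2."""
    t_next = t + dt
    return s + t_next**2 - t**2, t_next

def decode(s_formal):
    """Predictive decoding: impute the calculated number to a phenomenon."""
    return s_formal   # in this toy case the units already match

# Raw "observations" (invented): s = t^2 plus small disturbances.
observations = {0.5: 0.27, 1.0: 1.02, 1.5: 2.22, 2.0: 4.05}

t = 0.5
s_enc = encode(observations[t])
s_pred, t_next = infer(s_enc, t)
agrees = abs(decode(s_pred) - observations[t_next]) < 0.1
print(agrees)
```

When `agrees` fails, the mainstream move is to revise `infer`, never to question the external choices buried in `encode` and `decode`; that asymmetry is precisely the point made above.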
In line with all this, Robert Rosen (Anticipatory, 20 and 75) emphasized
that causal linkages between preceding and succeeding system states can
indeed only be implied, and not be confirmed as an objective fact of nature.
Nonetheless, in mainstream physics the concept of causality has over the
years gained a strong aftertaste of objectivity. As will be explained below,
however, the concept of processuality is to be preferred over causality:
In the original version of the Modeling Relation [see Fig. 3-3e ], the
regular pattern in the calculated outcomes of the formal system F is
meant to comply with an equally regular causal pattern which is
thought to be active among the phenomena associated with the
empirical data of natural system N. However, causality-as it
dissects nature into a causative and a therefrom ensuing effectuated
side-is an exophysical-decompositional concept about nature, rather
than an inherent aspect of nature itself. For this reason, Rosen's
original left-hand term "causality" is here replaced by "process" [see
Fig. 3-3d to 3-3f], thus stressing the deeper-seated processuality ... of
nature. (Van Dijk, "The Process")
Because the concept of causality is typically identified with the notion of
system states and their transition from one to the next, its very formulation
must depend on the same external encodings and decodings that are
involved in putting together our familiar data-compressing physical
equations. Moreover, causality can only be pinpointed by supposing that
the theoretically assumed physical activity between two consecutive system
states can be synonymized with the inferred mathematical operation (see
Fig. 3-3b to 3-3d). As mentioned above, such a straightforward synonymy
is ultimately unwarranted.
Even a less rigorous approximate isomorphism between data and
algorithm will not be enough for causality to be judged a better alternative
than processuality. That is, our concept of causality cannot reasonably be
applied to any deeper, subphenomenal level of reality. After all, the
exophysical-decompositional approach can only provide us with
phenomenologies, i.e., results based on sense data and the readings of
measurement instruments, rather than on the process of nature itself.
Causality can thus deal only with patterns of relationship that fall within
the measurement range of the observational system in use. All other less
prominent and finer-grained background activity and noisy external
influences will necessarily fall outside the scope of investigation and will
thus typically be labelled not only causally negligible, but also scientifically
irrelevant. But because causality is obviously a less universal concept, it
is here (in Fig. 3-3d to 3-3f) replaced by process-this is meant not so
much to dismiss the concept of causality, but rather to emphasize that
processuality subsumes causality.
After having added this nuance to the concept of causality, let us see
how the transition between system states can be presented in a more
straightforward way. For sake of simplicity, the second time slices, system
states, or number pairs-i.e., the phenomenal and mathematical results in
the lower boxes of Figs. 3-3b to 3-3d-may be placed on top of the ones
with the initial conditions. We end up with something that comes very
close to Robert Rosen's original modeling relation (see Fig. 3-3e). What
is now particularly interesting is that both the mathematization-enabling
measurement encoding and the rephenomenalization-enabling predictive
decoding cannot be derived from the data or the algorithm itself.
So, analogously to the external present moment indicator in Section
2.1.3, encoding and decoding are in fact external manipulations; although
they give shape to the supposedly "hard and exact" physical equations,
they ultimately rely on subjective choice. Furthermore, we can see
the appearance of a geometrical timeline (Fig. 3-3f), which results from
this subjective choice to abstract from the process of nature in terms of
phenomenology-based consecutive states.

3.2.3 From information acquisition to info-computationalism


According to Rothstein's analogy (Fig. 3-2), the numerical data that
inform us about the natural world are harvested from target systems by
means of a linear chain of information-exchanging and data-processing
modules. In this way, data is shuttled from target system (i.e., the source
process) to subject system where it is being processed-analogously to a
computer that processes incoming information. The natural target system
is basically treated as if it were merely an information-emitting black box,
while, on the other hand, the end-observer's mind-brain is ultimately
thought to be nothing but an entirely matter-based, biological computer
(see Block)-indeed, a naturally evolved one, but a computer nonetheless
with the brain as the hardware, and the mental thought patterns being no
more than neural signaling.
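Rothstein's linear chain can be caricatured in a few lines of code. The modules and transformations below are invented for illustration; the point is that each module only shuffles symbols, and nothing inside the chain ever makes sense of them.

```python
# A minimal sketch of Rothstein's linear chain: data from a source
# process is shuttled through a series of data-processing modules to an
# end-observer. Each module only transforms symbols; at no stage do the
# symbols mean anything to the modules themselves. Module names and
# transformations are invented for illustration.
def sense(source_value):          # transducer: physical magnitude -> number
    return round(source_value, 1)

def transmit(x):                  # channel: number -> symbol string
    return f"reading:{x}"

def display(msg):                 # output device: symbol string -> "pixels"
    return msg.upper()

pipeline = [sense, transmit, display]

signal = 36.63                    # some magnitude of the source process
for module in pipeline:
    signal = module(signal)

print(signal)   # only a conscious end-observer can make sense of this
```

However far the chain is extended, the final string still has to be interpreted by someone outside it, which is exactly the regress discussed below.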
With the rise of the electronic digital computer-the main principles
of which were provided among others by Shannon's information theory
and Turing's computational logic (Shannon and Weaver; Turing and
Ince)-it has now become a quite popular idea to look at nature from an
info-computational point of view. That is, as the computer began to play
a more and more prominent role within society, an entire belief system
emerged in which the universe was thought to work as a giant computational
system (see Lloyd) in which natural systems behaved just like mere
algorithm-executing conversion modules-turning inputs into outputs.
Accordingly, in what is probably the most popular version of the
computational theory of mind, the brain is considered to be a biological
information-processing computer, while the mind is seen as the software
that this bio-computer is running (Block). In this way, the mind-brain may
be interpreted in a device-like fashion, as electrochemically functioning
circuitry by means of which sensory stimuli can be encoded into nerve
pulses, and then channeled through the network infrastructure, thus enabling
transmission, storage, and output of neural signals-just as in a computer:
[W]ith the development of computer technology and artificial
intelligence, such cognitive processes as memory and perception
were analyzed into specific functions performed by specialized
processors, each of which received a certain input, performed a
specific operation upon it, and transmitted a certain output. All this
led to a picture of the brain as an elaborate biological computer, the
"modular mind" of Fodor, a system of neural processors shuttling
raw sensory data around to make them into coherent pictures, much
as a computer takes millions of binary bits to make texts and pictures.
(Pąchalska et al.)
However, this info-computational view still leaves much unexplained.
For instance, although sensory information typically makes a meaningful
difference to the inner life of conscious organisms in the biological world,
incoming digital data have no meaning whatsoever to information-processing
computers. Also, this info-computational view suggests that the signals
of neurons can be turned into mental imagery in roughly the same way as
binary data can be used to make up the graphical code of a digital image
file (such as a JPG, BMP, or GIF file) that can be shown on a computer
screen. But, contrary to a computer screen which typically has a conscious
user sitting behind it, the brain cannot be thought of as having within it
such a dedicated user who is capable of observing, interpreting, and acting upon
the data and pictures that are thus presented. After all, comparable to the
earlier problem of the meta-observer (Section 2.3), we could ask how such
an inbuilt center of subjectivity should work and then find that another
homunculus-like center of subjectivity would have to be invoked in order
to answer this question (for more neuroscientific context, see Edelman
and Tononi 127, 220-222).
From this we can conclude that the brain is not some kind of cinematic
theater in which lifelike scenes are shown to a first-person center of
subjectivity capable of sensorimotor control. Nor is the brain a mere black
box for converting inputs into outputs; it is not a CPU-like center of
subjectivity whose job it is to turn incoming stimuli into outgoing responses;
nor should it be seen as a transmitter-receiver unit equipped specifically
to pick up and pass on in- and outbound signal traffic. Rather, as we will
see later on, from Sections 4.2.4 to 4.3.1, the mind-brain is a seamlessly
embedded member of the ultimately indivisible organism-world system
in which a highly complex culmination of mutually informative activity
patterns facilitates the emergence of higher-order conscious experience.
Hence, the mind-brain is certainly not a pre-wired data-processing
switchbox in the sense of classical information theory. Unlike a CPU or switchbox,
the mind-brain has a neuroplastic organization and cannot retain its function
without its signal traffic or in isolation from its embedding "mother
system," which is the organism-world system as a whole (see Sections
4.2 to 4.3.1 for further details).
Even though it explicitly meant to avoid it, the info-computational
approach, by setting up a computation-based monism, basically gives rise
to a tacit form of dualism. Although it emphatically denies that there
is a separate mental process in the brain, the info-computational approach
suffers from a similar problem as the Newtonian paradigm with which it
is associated; namely, the problem of a potentially infinite regress of meta-
observers. In fact, by imposing the linear hierarchy of Shannon's classical
information theory onto the operation of the mind-brain, info-
computationalism is adopting a modular-brain point of view that inevitably
leads to such problematic complications:
[T]he exclusive attention to specific subsystems of the mind-brain
often causes a sort of theoretical myopia that prevents theorists from
seeing that their models still presuppose that somewhere,
conveniently hidden in the obscure "center" of the mind-brain, there
is a Cartesian Theatre, a place where "it all comes together" and
consciousness happens. (Dennett 39)
When setting up a universe of discourse, according to Rothstein's
information-theoretic analogy, any signal going from source to
destination-from target to subject side-would always require an already
conscious end-observer to interpret it. In other words, in order to put into
effect its core hypothesis that consciousness is mere neuron-based
information computation, info-computationalism has to tacitly presuppose
consciousness in the first place. After all, in an info-computational universe
of discourse, the communicated signals are by themselves utterly vacuous
and completely meaningless to the signal-conveying components that
reside on the subject side. Therefore, they must be made sense of through
the conscious inner-life of a data-interpreting end-observer in order to
reach the status of a meaningful message. However, any confirmation of
an "actual" center of subjectivity within the brain of this observer will
inevitably call forth the same problem all over again, requiring yet another
such "center of subjectivity" where the incoming information can finally
become conscious (or so the promise goes), thus triggering a confusing
infinite regress of meta-observers74 (see Von Neumann 352).
The point is basically that the info-computational view can give us
no insight into what it is for information to become conscious. It can give
us no clue whatsoever as to what it feels like to have access to particular
sense data. It cannot ever tell us about what brings about the redness of
red, the painfulness of pain, the silky feel of silk, or the sweetness of
sweets. For all that info-computationalists could care, these subjective
aspects are not even there to begin with, and if they were they would be
quite irrelevant or even illusory side effects of an organism's processing
of incoming sense data. But despite the suggestion of today's ICT-, AI-
and computer science communities that the subjective aspects of sense
data are at the end of the day just fluky, redundant, and pointless
epiphenomena, they are actually an indispensable fact of our personal
daily lives.

3.2.4 Information, quantum, and psycho-physical parallelism


The info-computational view of how knowledge acquisition works,
or, in other words, of how a conscious observer gets to make sense of
nature, hinges on the oversimplified and therefore misleading concept of
an information-theoretical universe of discourse. By sheer definition, a
universe of discourse-irrespective of a classical, quantum, or relativistic
context-brings along a split between "objective" source and "subjective"
recipient of information-i.e., it separates the target from the subject side.
The scientifically most fundamental example of such a nature-dissecting
cut can be found in quantum physics, where it appears to produce the
subjectivity-dependent "wave function collapse"-a phenomenon that
still does not have a commonly agreed upon interpretation.75
In an attempt to wrap his head around what he thought to be an apparent
(and not an actual) causal relation between conscious observation and
"wave function collapse," John Von Neumann proposed that there be a
psycho-physical parallelism (418-421). This psycho-physical parallelism,
founded by German physicist, philosopher, and psychologist Gustav
Theodor Fechner, entails that events taking place in the physical world
will always occur in tandem with the psychological contents of the mind.
As such, it rejects the possibility of interaction between body and mind.
Instead, it permits only a functional correlation to be present between the
world of physical events and the world of subjectivity. According to
psycho-physical parallelism, then, a mental state has a precise one-to-one
correlation with a brain state (and, by the same token, with an associated
physiological state of the body and a world state of the physical world).

[Figure: panels (a)-(g) showing the object-subject boundary drawn at
successive points along the measurement chain, with the label
"REPRESENTATION".]

Fig. 3-4: Universe of discourse with Von Neumann's object-subject
boundary (commonplace conception of measurement)

Von Neumann reasoned that psycho-physical parallelism would be
applicable as follows:
First, it is inherently entirely correct that the measurement or the
related process of the subjective perception is a new entity relative to
the physical environment and is not reducible to the latter. Indeed,
subjective perception leads us into the intellectual inner life of the
individual, which is extra-observational by its very nature (since it
must be taken for granted by any conceivable observation or
experiment). Nevertheless, it is a fundamental requirement of the
scientific viewpoint-the so-called principle of the psycho-physical
parallelism-that it must be possible so to describe the extra-physical
process of the subjective perception as if it were in reality in the
physical world-i.e., to assign to its parts equivalent physical
processes in the objective environment, in ordinary space. (Of course,
in this correlating procedure there arises the frequent necessity of
localizing some of these processes at points which lie within the
portion of space occupied by our own bodies. But this does not alter
the fact of their belonging to the "world about us," the objective
environment referred to above.) In a simple example, these concepts
might be applied about as follows: We wish to measure a temperature
[see Fig. 3-4a]. If we want, we can pursue this process numerically
until we have the temperature of the environment of the mercury
container of the thermometer, and then say: this temperature is
measured by the thermometer [see Fig. 3-4b]. But we can carry the
calculation further, and from the properties of the mercury, which can
be explained in kinetic and molecular terms, we can calculate its
heating, expansion, and the resultant length of the mercury column,
and then say: this length is seen by the observer [see Fig. 3-4c].
Going still further, and taking the light source into consideration, we
could find out the reflection of the light quanta on the opaque
mercury column, and the path of the remaining light quanta into the
eye of the observer, their refraction in the eye lens, and the formation
of an image on the retina, and then we would say: this image is
registered by the retina of the observer [see Fig. 3-4d]. And were our
physiological knowledge more precise than it is today, we could go
still further, tracing the chemical reactions which produce the
impression of this image on the retina, in the optic nerve tract and in
the brain, and then in the end say: these chemical changes of his brain
cells are perceived by the observer. But in any case, no matter how
far we calculate-to the mercury vessel, to the scale of the
thermometer, to the retina, or into the brain, at some time we must
say: and this is perceived by the observer [see Fig. 3-4e]. That is, we
must always divide the world into two parts, the one being the
observed system, the other the observer. In the former, we can follow
up all physical processes (in principle at least) arbitrarily precisely.
In the latter, this is meaningless. The boundary between the two is
arbitrary to a very large extent. In particular we saw in the four
different possibilities in the example above, that the observer in this
sense needs not to become identified with the body of the actual
observer: In one instance in the above example, we included even the
thermometer in it, while in another instance, even the eyes and optic
nerve tract were not included. That this boundary can be pushed
arbitrarily deeply into the interior of the body of the actual observer
is the content of the principle of the psycho-physical parallelism-but
this does not change the fact that in each method of description the
boundary must be put somewhere, if the method is not to proceed
vacuously, i.e., if a comparison with experiment is to be possible.
Indeed experience only makes statements of this type: an observer
has made a certain (subjective) observation; and never any like this:
a physical quantity has a certain value. (Von Neumann 418-421)
In the proceedings of a 1938 conference in Warsaw (Poland), named New
Theories in Physics, Von Neumann's view on the object-subject boundary
and psycho-physical parallelism was summarized by the editors. They
gave the following rendition of his response to Bohr's presentation on
"The Causality Problem in Atomic Physics":
Professor von Neumann thought that there must always be an
observer somewhere in a system: it was therefore necessary to
establish a limit between the observed and the observer. But it was by
no means necessary that this limit should coincide with the
geometrical limits of the physical body of the individual who
observes. We could quite well "contract" the observer or "expand"
him: we could include all that passed within the eye of the observer
in the "observed" part of the system - which is described in a
quantum manner. Then the "observer" would begin behind the retina.
Or we could include part of the apparatus which we used in the
physical observation - a microscope for instance - in the
"observer." The principle of "psycho-physical parallelism" expresses
this exactly: that this limit may be displaced, in principle at least, as
much as we wish inside the physical body of the individual who
observes. There is thus no part of the system which is essentially the
observer, but in order to formulate quantum theory, an observer must
always be placed somewhere. (Bialobrzeski et al. 44)
During the era in which quantum mechanics was still in its infancy, the
majority of physicists believed-as most of them still do-that the physical
world is a causally closed realm. Although it was apparently still acceptable
for information from the physical world to somehow appear in the subjective
stream of the conscious observer (see Von Neumann 418-421; Stapp 15),
to materialistic physicists it is inconceivable that non-physical subjective
thought should be able to have a causal effect in the physical world. This
is why the idea arose that the world of subjective observation had to be
thought of as extraphysical and without a causal linkage to events in the
physical world. It was thus quite bothersome that the so-called "wave
function collapse" seemed to depend crucially on the becoming conscious
of a measurement act, whereas, prior to an observer becoming aware of
a measurement outcome, all possible quantum states were thought to exist
all-together-at-once-in superposition with each other.
This apparent causal dependency of physical quantum states on
conscious observation was quite a perplexing mystery to the physics
community of the time. Therefore, Von Neumann, after having picked up
the idea from an earlier paper by Niels Bohr from 1929, decided to embrace
psycho-physical parallelism as a suitable alternative for this, to physicists,
unacceptable mental causation. After having discussed it with Niels Bohr,
both of them came to believe that the parallelism between physical and
mental world could be due to the principle of complementarity. As Bohr
put it: "Complementarity: any given application of classical concepts
precludes the simultaneous use of other classical concepts which in a
different connection are equally necessary for the elucidation of the
phenomena" (Bohr, Atomic, 10).
The link between psycho-physical parallelism and Bohr's complementarity
principle may thus be imagined as follows: Matter and mind, when thinking
of them as aspects of nature whose details cannot be simultaneously
studied, can perhaps be mutually exclusive during observation-just as
the wave and particle aspects of light. As Werner Heisenberg mentioned,
"we have to remember that what we observe is not nature in itself but
nature exposed to our method of questioning." Accordingly, just as
we observe waves when we probe a quantum system in one way and
particles when we probe it in another (see Gribbin 186), Von Neumann
and Bohr supposed that matter and mind were two aspects of the natural
world that, due to their complementarity, required different methods of
assessment to come to the fore.
Although their opinions seemed to meet with regard to the issue of
complementarity, there was still a considerable difference in their approaches
to the quantum measurement process. In Bohr's view, quantum mechanics
and classical physics can only give complementary coordinations of our
experiences of quantum events and the observational system that measures
those events:
[For Bohr, the] wholly incompatible conceptions of subject and
object in measurement do not constitute incompatible
characterizations of the "real things" measuring and measured ... they
are instead to be considered only as complementary coordinations of
our experience of these things .... Quantum and classical mechanics are
thus relegated to the level of merely epistemically significant
complementary coordinations of experience, and as such their
incompatibility becomes unimportant. (Epperson 37)
Accordingly, the quantum events and the measuring equipment should,
in Bohr's eyes, be described in a quantum mechanical and a classical way,
respectively. In this view, nature is, at the end of the day, the ur-source
of natural facts, whereas our empirical experience can only provide us
with knowledge about those facts. As such, nature-in-itself is, according
to Bohr, ultimately unknowable (Epperson 37).
In Michael Epperson's reading of Von Neumann's view, however,
measurement responses are just as much part of nature's facts as the events
or actualities that bring about these responses. In fact, in Von Neumann's
scheme, what is being measured (i.e., the target system S) and what is
doing the measurement (i.e., the measuring system M) will together form
a chain of "necessarily interrelated facts"-the so-called "Von Neumann
chain" in which S is measured by M(apparatus), then (S+M) can be measured
by M'(eye), after which (S+M+M') can be measured by M''(visual cortex), and so on:
Early pioneers in the development of quantum mechanics like Niels
Bohr assumed ... that the measurement devices behave according to the
laws of classical mechanics, but von Neumann pointed out, quite
correctly, that such devices also must satisfy the principles of
quantum mechanics. Hence, the wavefunction describing this device
becomes entangled with the wavefunction of the object that is being
measured, and the superposition of these entangled wavefunctions
continues to evolve in accordance with the equations of quantum
mechanics. This analysis leads to the notorious von Neumann chain,
where the measuring devices are left forever in an indefinite
superposition of quantum states. It is postulated that this chain can be
broken, ultimately, only by the mind of a conscious observer.
(Nauenberg)
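The entanglement spreading along such a von Neumann chain can be made concrete with a small toy model (my own illustrative sketch, not drawn from Von Neumann's text): a two-level system S in superposition is premeasured by a chain of two-level "pointers," each coupled through a CNOT-like interaction. Unitary evolution alone never yields a definite outcome; it only extends the two-branch superposition across ever more devices.

```python
import numpy as np

# Toy von Neumann chain: target system S starts in a superposition, and
# each measuring device M is a two-level "pointer" initialized to |0>.
# A CNOT-like premeasurement couples pointer to system; unitary evolution
# alone never collapses the chain -- it only spreads the superposition.

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# S in an equal superposition of |0> and |1>
state = (ket0 + ket1) / np.sqrt(2)

def add_pointer(state, n_qubits):
    """Couple a fresh pointer qubit to the last qubit of `state`
    via a CNOT-like premeasurement interaction."""
    state = np.kron(state, ket0)          # attach pointer in |0>
    dim = 2 ** (n_qubits + 1)
    new = np.zeros(dim)
    for i in range(dim):
        if state[i] == 0:
            continue
        bits = [(i >> (n_qubits - k)) & 1 for k in range(n_qubits + 1)]
        if bits[-2] == 1:                 # flip pointer if previous qubit is 1
            bits[-1] ^= 1
        j = int("".join(map(str, bits)), 2)
        new[j] += state[i]
    return new

n = 1
for _ in range(3):                        # M, M', M'' join the chain
    state = add_pointer(state, n)
    n += 1

nonzero = np.flatnonzero(np.abs(state) > 1e-12)
print(nonzero)                            # indices of the two surviving branches
```

With each added pointer the state remains a superposition of an "all-zeros" and an "all-ones" branch, which is exactly the indefinite superposition of entangled device states that, on Von Neumann's postulate, only the mind of a conscious observer breaks.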
In order to make his scheme of psycho-physical parallelism work, Von
Neumann supposed that the higher brain centers were directly associated
with consciousness. The main reason for this belief is that-as can be
learned from first-person introspective experience-the "abstract ego" 76
does not seem to allow the simultaneous co-existence of mental states. In
other words, contrary to quantum states, mental states do not seem to exist
in simultaneous superposition with one another.

3.2.5 From psycho-physical parallelism to measurement as a semiosic process

A good case can be made, however, that psycho-physical parallelism
is ultimately just a pseudo-explanation for wave function collapse. That
is, the formulation of von Neumann's psycho-physical parallelism may
well be possible only by committing what William James called the
psychologist's fallacy (Principles 196-197). The psychologist's fallacy
has been expressed in many ways, but for present purposes Anderson
Weekes' version is quite convenient: "[T]he 'psychologist's fallacy' ... is
to find in introspection only the objects that appear to thought rather than
the whole of the thought to which objects appear" (Weekes 230).
When looking at nature from the perspective of psycho-physical
parallelism, Von Neumann seems to pay attention only to the foreground
details (i.e., the "facts" of measurement that appear in object-like fashion
to one's subjective "abstract ego"), while taking mostly for granted how
the underlying process of subjective perception works to bring these "facts"
into conscious actuality. In doing so, Von Neumann has to presume that
these facts come into conscious actuality, but does not provide any account
of exactly how-let alone why-this should occur.
To put this more in the context of quantum information and info-
computationalism: The "clicking" or "non-clicking" of a
photocounter-usually a photodiode-based detector capable of producing
an audible "click" on detection of a photon-can be postulated to represent
one bit of information. According to John Archibald Wheeler, this makes
up a raw fact:
With polarizer over the distant source and analyzer of polarization
over the photodetector, we ask the yes or no question, "Did the
counter register a click during the specified second?" If yes, we often
say, "A photon did it." We know perfectly well that the photon
existed neither before the emission nor after the detection. However,
we also have to recognize that any talk of the photon "existing"
during the intermediate period is only a blown-up version of the raw
fact, a count. The yes or no that is recorded constitutes an unsplittable
bit of information. (Wheeler, "Information," 311)
Since Claude Shannon had not yet put out his standard work on classical
information theory, Von Neumann probably did not adhere to such an
explicit information-theoretical conception of measurement as Wheeler's.
However, his application of the object-subject split automatically gives
rise to a universe of discourse that, in hindsight, has all the hallmarks of
information theory. After all, the assumption of an extraphysical "abstract
ego" (or an "immaterial intellectual inner life") seems to leave no other
alternative than to suppose that the appearance of a Wheelerian "fact" in
one's stream of consciousness is like the intake of information by the
observer's "center of subjectivity."
The psychologist's fallacy, then, is the result of presupposing-as
Von Neumann did-that consciousness involves an extraphysical center
of subjectivity, rather than a within-nature, participatory process through
which organisms gradually get to make sense of the natural world in which
they live. That is, we conscious living beings should be seen as seamlessly
embedded and radically participatory inhabitants of the same natural world
we are trying to make sense of. We become acquainted with nature by
living through it, not by passively staring at it from an extraphysical
viewpoint. We acquire knowledge about nature not by passively taking
in information while residing on the subject side of a universe of discourse,
but by living through an ongoing, radically participatory cyclic process
of semiosis (Van Dijk, "The Process") through which experienced foreground
signals, signal-designating symbol system, and symbol-interpreting self
emerge as a triadic unity (see Section 4.2.3 to 4.3.1 for further details).
How conscious self and conscious world-thinker and thought-come
into actuality as two aspects of the same process will be discussed in
Sections 4.2.3 to 4.5. For now, however, we will briefly touch on how
doing physics does not revolve around an information-theoretically inspired
universe of discourse with a boxed-in target system and a subject side
separated therefrom, but that doing "physics as we know it" should instead
be thought of as a semiosic process of formalization, preparation, and
observation-the respective equivalents of symbol, referent, and symbol-
interpreting user in semiotics. As such, as will become clear later on, our
way of "doing physics in a box" is basically a technological-mathematical
extension of the semiosic process through which the emergence of conscious
experience comes about.

Fig. 3-5: From background semiosic cycle of preparation, observation,
and formalization to foreground data and algorithm

After an empirically adequate physical equation has been found to
commute (by going through Rosen's modelling relation, see Section 3.2.2),
and measurement practice is thought to have reached maturity, all support
processes that were needed to give rise to the thus achieved level of
sophistication are more or less made to leave the center stage. As already
hinted at in Section 3.1.2, physicists who are using physical equations to
track the temporal evolution of a natural system can often be likened to
a theatre audience that forgets entirely about the preparatory activities of
the stage building crew, casting agency, producer, director, and scene
writer, each of whom is responsible for other background aspects of the play:
Once .. .an empirically faithful algorithm has indeed been found, it is
typically thought to reliably keep track of the target system's
behavior-if not in a direct chronological sense, then at least
statistically. In any domain of experimentation, as soon as
measurement practice reaches maturity the research interest starts
shifting towards the aspired agreement between empirical data and
algorithm (see Van Fraassen 138-139). Simultaneously, attention
typically drifts away from the very process of measurement
interaction through which this empirical agreement could be achieved
in the first place. Together with all other preceding processes of
system delineation, measurement refinement, data processing,
algorithm selection, etc., it is basically evacuated to the backstage.
(Van Dijk, "The Process")
In quantum physics, the preparation process primarily pertains to how a
"quantum particle" is "soaked loose" from its embedding environment.
If it were not for this preparatory process and subsequent observation, a
"quantum particle" would lead its existence while remaining "submerged"
within the otherwise undivided process of nature-as-a-whole which is, at
the quantum level, a giant cosmic sea of vacuum fluctuations:
What we usually call "particles" are relatively stable and conserved
excitations on top of this vacuum. Such particles will be registered at
the large-scale level, where all apparatus is sensitive only to those
features of the field that will last a long time, but not to those features
that fluctuate rapidly. Thus the "vacuum" will produce no visible
effects at the large-scale level, since its fields will cancel themselves
out on the average, and space will be effectively "empty" for every
large-scale process (e.g., as a perfect crystal lattice is effectively
"empty" for an electron in its lowest band, even though the space is
full of atoms). (Bohm 111)
Cyclotrons, laser guns, and all kinds of other pieces of preparatory
equipment can be used to individuate a single quantum particle or photon
out of what usually exists only as a collective whole of many such "particles":
In the laboratory we see measurement arrangements that, although
intended to perform measurements in the microscopic domain of
quantum mechanics, are yet composed of macroscopic (although
often very small) components. In these measurement arrangements
one can often discern two fundamentally different parts, viz., the part
having as an objective to prepare microscopic objects (like an
electron emission grid, cyclotron, laser, etc.), and the part intended to
register some phenomenon that can be interpreted as measurement
result (like a photo diode, bubble chamber, spark chamber, etc.). The
first part will be referred to as the preparing apparatus, the second
one as the measuring instrument. The measuring instrument has as an
essential part a macroscopic pointer ranging over a measurement
scale from which the individual measurement result m can be read
off. (De Muynck 74-75)
In classical physics, on the other hand, the process of preparation can be
interpreted as the singling out, by the observer's nature-dissecting gaze,
of some interesting "physical" aspect of nature, so that a target system
can be put together based on this act of individuation (see Van Dijk, "The
Process," note 7). Of course, the entire foregoing history of system
delineation, measurement refinement, data processing, algorithm selection,
pre-theoretical statistical analyses, and so on, can all be considered to be
part of the background processes that enable the presentation of the
foreground items, namely: the well-refined empirical data and their data-
reproducing algorithms. These data-reproducing foreground algorithms
can only come to the fore and reach the status of laws of nature through
the intense level of cooperation of all participating background processes:
Collectively, these intimately entangled and mutually overlapping
background processes can be grouped into three functionally distinct
subprocesses, namely the preparation process, the observation
process, and the formalization process-all embedded within the
same meaning-providing context of use [Fig. 3-5]. As such, they
work together to form a trilateral universe of discourse that closely
resembles the so-termed triadic relation in semiotic information
theory (see Zeman; Noth 85; Fiske 41-43).... In semiotic information
theory meaning gets to be established within a triangular relationship
between a sign, its referent, and a sign-interpreting user, 77 and as
shown in [Fig. 3-5], the three of them can be positively identified
with the formalization process, the preparation process, and the
observation process, respectively. In the thus formed semiotic
triunity, the process of empirical science is given shape and meaning
by passing through the different stages of preparation, observation
and formalization. Here, the preparation process delineates areas of
interest (target systems) from which informative signals can be
extracted.78 The observation process then deals with the intake of
informative signals and converts them into empirical data. Finally,
the cycle is closed by the formalization process which imports these
empirical data and compresses them into concise algorithms, whose
calculated results are then hypothesized to reflect the target system's
informative signals. After this, the cycle can then be repeated to dig
deeper into the targeted process of interest and to attain more
empirical data and/or better goodness of fit between data and
algorithm. (see Van Dijk, "The Process")
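The cyclic refinement described in the quoted passage can be caricatured in a few lines of code (a loose sketch under invented assumptions: the "target process" here is simply a noisy linear trend, and `numpy.polyfit` stands in for the formalization step):

```python
import numpy as np

# Toy illustration (not Van Dijk's own formalism) of the semiosic cycle:
# preparation delineates a target signal, observation converts it into
# empirical data, and formalization compresses the data into a concise
# algorithm; repeating the cycle improves the goodness of fit.

rng = np.random.default_rng(0)

def target_process(t):
    # the "process of interest": a hidden regularity plus fluctuation
    return 2.0 * t + 1.0 + rng.normal(0.0, 0.5, size=t.shape)

fit = None
for n_samples in (10, 100, 1000):          # each pass digs deeper
    t = np.linspace(0.0, 1.0, n_samples)   # preparation: delineate a window
    data = target_process(t)               # observation: empirical data
    fit = np.polyfit(t, data, 1)           # formalization: slope + offset
    residual = np.std(data - np.polyval(fit, t))
    print(n_samples, fit, residual)
```

Each pass through the preparation-observation-formalization loop yields a more trustworthy compressed "algorithm" (here, just a slope and an offset), while the raw signals and the measurement interaction that produced them recede into the background.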
As will be explained further on (from Section 4.2.3 to 4.3.1), this semiosic
process of pre-measurement target individuation, instrument-assisted
observation, and algorithm selection, can be seen as an extension of the
conscious individuation, observation, and symbolization as performed by
sentient organisms as they live through their "sensation-valuation-motor
activation-world manipulation" cycles, thus giving rise to the brain-
mediated processes of perceptual categorization and concept formation.
Although semiotics can be regarded as a part of information theory, it
does not work along the lines of our classical information theory in which
information acquisition takes place by one-way data traffic. Its mode of
operation is based on habit-forming cyclic loops in which meaning gets
to be established from within, rather than having to be bestowed from the
outside as is required in classical information theory.
Because of this it can be more easily brought in contact with non-
representationalist theories of consciousness-especially Gerald Edelman's
theory of neuronal group selection. In Edelman's view, the massive mutual
informativeness among activity patterns within the thalamocortical region
of the mind-brain is what facilitates the emergence of our conscious
experience. Because of this active, mutual informativeness-facilitated
by a high level of recursive, participatory signaling-information does
not need to be conveyed in the familiar way of classical information theory,
computer technology, and info-computationalism.
Rather, organismically meaningful information is established by going
through the mutually informative cycles that entail not only the activity
patterns within the thalamo-cortical region, but the entire whole of
perception-action cycles in which the organism is engaged. No real object-
subject boundary can be drawn. Instead, I suggest it is better to think of
subjectivity as a system property of nature that does not at all fit into the
dualistic transmitter-receiver framework of classical information theory.
As will be made more plausible in Sections 4.2.3 to 4.3.1, subjectivity is
what is already tacitly present in the cyclic processuality of nature, but
which can gradually intensify as organisms go through their multimodal,
value-modulated perception-action loops.
Instead of interpreting target and subject systems in terms of our
conventional information theories, we should realize that the target
process-whether it be a conventional natural system or even the "process
of subjectivity" associated with the observer's own conscious self-can
only become known in terms of the conscious information (i.e., percepts)
that comes into actuality during this process of mutual informativeness.
It can thus be concluded that the total arbitrariness in the choice of where
to situate the epistemic cut between target and subject side is a major hint
that something is seriously wrong with the conventional information-
theoretical picture. To understand how information works, we should
replace the linear module-based hierarchy of conventional information
theory with the well-nested processual holarchy of (bio)semiotic information
theory. In semiotics, the coming into actuality of experiencing self and
experienced world goes hand in hand with concept formation and
symbolization. As will become apparent in the next chapter, the "sculpting
process" of such biologically meaningful concepts and symbols occurs
by way of adaptive, value-steered perception-action cycles.

3.3 From doing physics in a box to doing physics without a box


Throughout this present chapter, we found that the problems of "doing
physics in a box" were numerous and fundamental. It turns out that there
are even more difficulties than Lee Smolin noted down in his 2013 book
Time Reborn (e.g., the cosmological fallacy and the reasoning away of
the passage of time) and it seems that the majority of these difficulties
have something to do with the reduction of the live, conscious observer
to an externalized, abstract, and insentient intake unit of pre-coded
data-one that readily meets the selection criteria of info-computationalism.
It is because of the uncritical and premature adoption of the point-
observer and its relatives-supposedly equipped with a purely info-
computational inner-center of information intake-that the actual living
experience on which empirical science itself is ultimately based could
have been swept overboard. We have apparently become so used to the
idealizing simplification of point-observers and the like, that we seem to
totally forget about their downsides. For instance, we all too easily take
it for granted that we conscious observers have to imagine ourselves as
somehow outside of what is so often thought of as our entirely physical
real-world-out-there. On top of that, just to be able to talk about such an
external world at all (see Quine 1) it has to be cut into bite-size bits and
pieces by our nature-dissecting gaze. From this, then, it is only a small
step to start thinking of the natural world as if it truly existed as a mere
collection of such "bits and pieces in external interaction."
However, when we take this step we entirely forget that the involved
bits and pieces (i.e., material objects, molecules, atoms, quarks, strings,
and the like) are ultimately just idealized figures of speech, capable of
fitting quite nicely with phenomenal reality, but by no means giving us
something like the "true look" of nature-in-itself. Confusing our linguistic
labels, such as atoms, electrons, quarks, and so on, with what goes on in
the process of nature itself would amount to what Whitehead called the
fallacy of misplaced concreteness-mistaking the abstract for the concrete,
with all due undesirable consequences.
So, although the exophysical-decompositional methodology has served
us well over the last few hundred years, it still seems to be plagued by all
kinds of fundamental difficulties. Because of our historical attachment to
this well-tried and tested fractionating mode of understanding, it seems
to be a worthwhile idea to see if we can get rid of its negative aspects
while keeping its positive sides intact. What should escape unscathed from
such a "cleansing attempt" would definitely be its ability to break down
the raw, tumultuous processuality of nature into understandable bits and
pieces so that nature can be talked about, rather than just being looked at
in mute awe.
Judging from the halftime score of our present investigation, there
seems to be more than enough reason to try to go beyond the exophysical-
decompositional paradigm and replace it with something else. Due to its
many past as well as recent successes, it does, however, seem a tall order
to put the entire exophysical-decompositional approach out with the trash.
We have benefitted too much from its achievements, after all, to take such
a radical step.
So, instead, what seems to be needed is that we try not to ditch the
exophysical-decompositional approach altogether, but rather to supplement
it with a nonexophysical-nondecompositional method-a method that
does not replace it, but makes up for its weaknesses. In other words, when
trying to make sense of the indivisible process which is "nature-in-the-
raw," we need to arm ourselves with "binocular vision." That is, just to
be able to talk about nature, we may indeed be forced to decompose it
into smaller parts (see Quine 1), but when trying to grasp nature as one
indivisible process, we should take into account its undivided and
interconnective wholeness as well.

Up to now the preferred way to go about this has mainly been to add
a holistic interpretation to the exophysical-decompositional formalisms
of our physical sciences-quite literally as an afterthought. For instance,
Niels Bohr's "quantum wholeness"-i.e., his interpretation that wave
function collapse hints at the fundamental inseparability of quantum system
and observer79-is a case in point. Another example is David Bohm's
holistic process metaphysics which, despite its admirable and impressive
effort to give an impression of the deeper processuality of nature, is still
very much organized around the quantum formalism that it is attempting
to move beyond. Both can be considered "aftermath interpretations" in
the sense that they came after the formulation of the quantum wave function
formalism. After Galileo, however, it has become common practice in
physics to think of mathematics as the primary language of nature. In this
way, any framework of interpretation is typically regarded as secondary;
i.e., as less fundamental than the mathematical formalism that it aims to
address. Because of this, then, we should not expect mere interpretation
to provide us with a definite answer to the problems of doing physics in
a box-especially since there are so many competing, but mathematically
equivalent interpretations.
To get rid of the aforementioned problems of the exophysical-
decompositional paradigm, therefore, it seems that we need more than
just such "post-mathematical re-interpretation" 80 to get the job done. In
fact, what our current physics seems to call for is a "significant other" to
stand by its side. What seems to be needed is a nonexophysical-
nondecompositional physics-a way of doing physics without a box-that
can serve as the "better half" for its exophysical-decompositional counterpart
of doing physics in a box. We need the best of both worlds to form a more
comprehensive, binocular physics that should then be capable of opening
up previously unseen vistas.
As a prime characteristic, such a "binocular physics" should not leave
it absurd that we exist (Deacon, cover text). In other words, our physics
should not leave unexplained that we-physics-abiding conscious
organisms-are actually inseparably part of the same natural world that
we are trying to make sense of (see Planck 217). We, within-nature
conscious organisms, who actually developed physics as a tool to help us
figure out what nature is all about, are in fact inseparably part of this tool's
target of investigation: the process of nature itself. Therefore, physics
should take into account not only how nature works, but also how our
experience of nature works (see Wolfram 547).
Furthermore, honoring Whitehead, we should aspire to a physics that
does not reduce "nature alive" to "lifeless nature" (Whitehead, MT, 173-232;
Desmet, "On the Difference," 87), or, what is basically the same
thing, a physics that does not "objectify" nature. That is, our physics
should not rid its observers of their subjectivity, because that basically
amounts to treating live conscious observers as being equivalent to lifeless
and insentient objects-a point-observer being the most extreme example.
Whenever the conscious observer becomes the "object" of interest, we
should realize that the thus presented observer can be no more than a
"phantom observer"-an "objectified observer" whose lived subjectivity
is systematically being left out of the picture. The very essence of being
an observer is not really there anymore because it has simply been stripped
away for the sake of doing physics in a box. So, in order to find out how
subjectivity can be included within a nonexophysical-nondecompositional
physics, let us first take a closer look at what it is about consciousness
that is so hard to get a grip on.

4. Life and consciousness


Although we have already touched on the inability of info-
computationalism to properly address "the becoming conscious" of
information (see Section 3.2.3), we have not yet delved into the issue of
how to best describe consciousness itself. When trying to specify what
consciousness is exactly, it will soon become apparent that the "system-
to-be-specified"-i.e., the process of consciousness-is in fact overlapping
with the very system that is supposed to specify it-namely, the process
of consciousness itself:
Science has always tried to eliminate the subjective from its
description of the world. But what if subjectivity itself is its
subject? ... Consciousness poses a special problem that is not
encountered in other domains of science. In physics and chemistry,
we are used to explaining certain entities in terms of other entities
and laws. We can describe water with ordinary language, but we can
also describe water, at least in principle, in terms of atoms and the
laws of quantum mechanics. What we are really doing is connecting
two levels of description of the same external entity-a commonplace
one and a scientific one that is enormously powerful and predictive.
Both levels of description-liquid water, or a particular arrangement
of atoms behaving according to the laws of quantum
mechanics-refer to an entity that is out there and that is assumed to
exist independently of the conscious observer. When we come to
consciousness, however, we encounter an asymmetry. What we are
trying to do is not just to understand how the behavior or cognitive
operations of another human being can be explained in terms of the
working of his or her brain, however daunting that task may be. We
are not just trying to connect a description of something out there
with a more sophisticated scientific description [of the same thing].
Instead we are trying to connect a description of something out
there-the brain-with something in here-an experience, our own
individual experience, that is occurring to us as conscious observers.
We are trying to get inside-to know, as the philosopher Thomas
Nagel felicitously phrased it-what it is like to be a bat. We know
what it is like to be us, but we would like to explain why we are
conscious at all, why there is "something" it is like to be us.
(Edelman and Tononi 10-11)
Because of the necessity for science to have a subject system that is
external from the target system to be observed, applying the abovementioned
method of investigation to consciousness will inevitably fail. Even when
zooming in on one's own conscious experiences, no description of it-no
matter how elaborate this description may be-can ever amount to
consciousness itself (Edelman and Tononi 11). Hence, since consciousness
can never be fully grasped by a description of it, science is forced to
substitute it with a virtual placeholder-an abstract representation of
consciousness, thus typically treating it as just another external thing,
rather than a seamlessly integrated process. In this way, consciousness,
as it can only be represented by a surrogate (such as a black box-like
"center of subjectivity," a CPU-like information-processing module, or,
as in Minkowski's space-time diagrams, a point-observer), must itself
remain absent from any physical description of measurement, observation,
perception, etc., and, by the same token, cannot be included in any other
representational description of reality whatsoever.
Even though all acts of measurement and information intake require
consciousness to perform its role of so-called "center of subjectivity,"
when nature is being looked at in a representational way, consciousness
will always remain a virtuality. This would not even change if one's own
consciousness were to be chosen as the target system of interest. As already
hinted at above, unlike the usual target systems in our physical sciences,
consciousness cannot be conveniently singled out by one's conscious gaze
since that would only lead to a strange loop involving the conscious
description of consciousness-which, to the best of our knowledge, is
still a description, not consciousness itself (see Edelman and Tononi
10-14). At the end of the day, any attempt to get to the bottom of consciousness
by capturing it within a representational description is therefore doomed
to failure:
No amount of description will ever be able to account fully for a
subjective experience, no matter how accurate that description may
be. No scientific description of the neural mechanisms of color
discrimination, even if it is perfectly satisfactory, will make you
understand what it feels like to perceive a particular color. No amount
of description or theorizing, scientific or otherwise, will allow a
color-blind person to experience color. (Edelman and Tononi 11)
On top of this, a color-sensitive photodiode, although it may indeed be
responsive to a large spectrum of visible light, typically does not become
aware of the colors it manages to detect (see Edelman and Tononi 17).
Unlike a live conscious organism with color vision, a color-sensitive
photodiode does not adaptively change its sensory circuitry and its physiology

Fig. 4-1: Conscious observer as an embedded endo-process within the greater embedding omni-process which is the participatory universe. The conscious observer gets to make sense of nature by living through his ongoing perception-action cycles (Fig. 4-1 d and e), O2-CO2 cycles (Fig. 4-1 f), nutrient-waste cycles (Fig. 4-1 g), as well as all other constantly renewing, criticality-seeking nonequilibrium thermodynamic cycles that have developed within as well as between environment and conscious organism. The O2-CO2 cycle involves the in- and exhalation of fresh and used air, respectively, to guarantee adequate oxygenation of the organism's body cells and healthy levels of CO2 in the lungs and the blood. Under sufficiently optimized conditions, the body can aerobically metabolize ingested nutrients, thereby enabling its cells (e.g., through their mitochondria) to produce and store energy for doing work. The nutrient-waste cycle goes from mouth to posterior, turning nourishing foods into soil-fertilizing manure, which, in turn, helps grow food again. Like this, the organism's perception-action cycles are powered by metabolizing food into energy-rich glucose (and derivative substances) and the aerobic combustion of glucose, which depends on the O2-CO2 respiration cycle. Moreover, all these cycles extend well into the embedding, co-evolving environment of which the conscious organism is a seamlessly embedded, co-creative participant (see Fig. 4-1 c to g).

in response to incoming stimuli. Nor does it attach any body-related meaning to incoming stimulus information or grow any adaptive perception-action repertoires that are sculpted by neuromodulatory value systems.81
It is along these lines-by way of an intimate, synergistic interplay
between (a) the incoming stream of stimulus information-see Gibson,
Ecological, 239-250, (b) previously laid down action repertoires, 82 and
(c) action-steering value systems-that experiencing self and experienced
world can gradually come into actuality from the blooming buzzing
confusion (see James, "Percept," 50) of the initially unlabeled world of
undifferentiated signals.
It must be emphasized that this coming into actuality of self and world does not have anything to do with a homunculus-like center of subjectivity
taking in a "mental duplicate" of the alleged "real world out there." Instead,
it involves the coalescence of "world-related" exteroceptive signals and
"organism-related" interoceptive signals into one multimodal stream of
massively back-and-forth chattering thalamocortical activity patterns
through which the conscious organism-as it continually lives through
its own changing body states and its value-laden perception-action cycles-gradually gets to sculpt an experiencing self and experiential scenery as two complementary aspects of the same bound-in-one "conscious
Now."
In this way, thinker and thought are not to be looked at as some
signal-decoding central processing unit trying to interpret incoming signal
traffic. Because we are in fact seamlessly embedded parts of the same
natural world we are trying to make conscious sense of (see Fig. 4-1),
thought and thinker should not be seen as being apart from each other,
but rather as two aspects of the same process (James, The Principles, 401).
Accordingly, without the need of any representational conception of our
mental contents, thinker and thought should be seen as one dual-aspect
process alternating between (a) the organism's inner-life, as it is sensed
and felt from within, and (b) its outward perspective on what is so often
(wrongfully) thought of as an entirely physical "real world out there."

4.1 The evolution of the eye


As an introduction to the coming into actuality of thinker and
thought, let us first turn our attention to the emergence and evolutionary
development of sight. In Fig. 4-2 we may recognize various developmental
stages in the evolution of the eye. Each individual stage is here illustrated
by the eyes of a particular species of mollusks (e.g., snails, squid, octopi,
etc.) that are thought to be good illustrations of some distinctive previous
evolutionary stages of the octopus eye. 83
[Fig. 4-2 depicts five stages: pigment spot, optic cup, pinhole eye, primitive lensed eye, and complex eye, with parts labeled: photosensitive cells, nerve fibers, pigment layer, fluid-filled cavity, transparent protective tissue (cornea), photoreceptor layer (retina), and refractive lens.]

Fig. 4-2: Subsequent stages in the evolution of the eye-The sequence shows images of light-sensitive pigment spots and eyes of different species of mollusks. From left to right: limpet, slit shell mollusk, nautilus, marine snail, and squid. These images are edited renditions from various revised versions of (Strickberger 34).
When sunlight or light from a bioluminescent source or a light bulb finds its way through light-diffracting media such as air or water, it is absorbed, reflected, scattered, bent and diffracted by the various obstacles it comes
across. As such, it typically turns into diffuse light with a non-monochromatic
spectrum. Hence, at any spatial location where an organism may position
its photosensitive receptor cells (such as its pigment spot, optic cup, or
camera-type eye) the thus observed lighting conditions will inform the
organism about what is going on in the immediate (or even more distant)
environment:
Imagine an environment illuminated by sunlight and therefore filled
with rays of light traveling between surfaces. At any point, light will
converge from all directions, and we can imagine the point
surrounded by a sphere divided into tiny solid angles. The intensity
and spectral composition of light will vary from one solid angle to
another and this spatial pattern of light is the optic array. Light
carries information because the structure of the optic array is
determined by the nature and position of the surfaces from which it
has been reflected. (Bruce et al. 6)
This ambient optic array (see Gibson, The Senses) is to be conceived of as follows:
The optic array is the three-dimensional bundle of light rays that
impinge from all directions upon each point in an illuminated world.
Objects in the world can be thought of as labelling specific rays, so
producing a global pattern of light intensities. A retinal image
provides access to only part of the optic array at any one time, but a
stationary observer can sample different parts by eye movements and
head rotations. By changing position, the observer can sample the
different optic arrays impinging on neighbouring points in space.
However, sampling in this case should not be thought of as a discrete
process. Rather, as the observer gradually moves, so each ray
gradually moves, thus producing the smooth transformation in the
optic array that Gibson called the optic flow. (Harris 308)
With the introduction of the ambient optic array, J.J. (James) Gibson
wanted to show that an organism's photoreceptor cells are not so much
in the business of making passive snapshots of the incoming light that
hits them, but, rather that organisms get to know their environments by
tuning in on the information that is potentially available within a dynamically
changing optic array:
For an animal at the centre of this optic array to detect any
information at all, it must first have some kind of structure sensitive to light energy.... Many biological molecules absorb electromagnetic
radiation in the visible part of the spectrum, changing in chemical
structure as they do so. Various biochemical mechanisms have evolved that couple such changes to other processes. One such
mechanism is photosynthesis, in which absorption of light by
chlorophyll molecules powers the biochemical synthesis of sugars by
plants. Animals, on the other hand, have concentrated on harnessing
the absorption of light by light-sensitive molecules to the
mechanisms that make them move. In single-celled animals,
absorption of light can modulate processes of locomotion directly
through biochemical pathways. Amoeba moves by a streaming
motion of the cytoplasm to form extensions of the cell called
pseudopods. If a pseudopod extends into bright light, streaming stops
and is diverted in a different direction, so that the animal remains in
dimly lit areas. Amoeba possesses no known pigment molecules
specialized for light sensitivity, and presumably light has some direct
effect on the enzymes involved in making the cytoplasm stream.
Thus, the animal can avoid bright light despite having no specialized
light-sensitive structures. Other protozoans do have pigment
molecules with the specific function of detecting light. One example
is the ciliate Stentor coeruleus, which responds to an increase in light
intensity by reversing the waves of beating of its cilia [i.e., the
lengthy protrusions that grow from the cell membrane] that propel it
through the water. Capture of light by a blue pigment causes a change
in the membrane potential of the cell, which in turn affects movement
of the cilia (see Wood). Some [other] protozoans, such as the
flagellate Euglena, have more elaborate light-sensitive structures in
which pigment is concentrated into an eyespot, but Stentor illustrates
the basic principles of transduction of light energy that operate in
more complex animals. First, when a pigment molecule absorbs light,
its chemical structure changes. This, in turn, is coupled to an
alteration in the structure of the cell membrane so that the membrane
permeability to ions is modified, which in turn leads to a change in
the electrical potential across the membrane. In a single cell, this
change in membrane potential needs to travel only a short distance to
influence processes that move the animal about. In a many-celled
animal, however, some cells are specialized for generating movement
and some for detection of light and other external energy. These are
separated by distances too great for passive spread of a change in
membrane potential and information is instead transmitted by
neurons with long processes, or axons, along which action potentials
are propagated. (Bruce et al. 7-8)
The take-home message here is that the physiology of organisms with photosensitivity changes when the organism is exposed to light. In this
way, a strict dividing line between informative signal and signal-informed
organismic system cannot really be drawn. Unlike a photodiode's flip-of-
a-switch kind of optical registration, which is reversible and does not
change the device's physical architecture in any relevant way, 84 biological
photocells, pigment layers, retinas (and, eventually, the entire organism)
irreversibly change their internal physiology and their external response
to their embedding environment. In fact, it is the very essence of organisms
to be intimately and symbiotically engaged with the world of signals in
which they lead their lives.

4.2 From info-computationally inspired neo-Darwinism to "lived-through subjectivity" as a relevant factor in evolution

The signal-interpreting organism is so deeply entangled with the signal-conveying world through which it lives that this world not only plays an active part in the organism's getting to know it, but also that, on balance, no clear and definite borderline can be drawn between organism and world. As it stands, we will not be able to find a sharp and unambiguous
split between the organism's sensorimotor circuitry and the signal traffic
that it literally "gives way to," nor should we think that this signal traffic
and its initiatory "outer-world" signal can ultimately be separated in any
exhaustive and well-justified sense. We have not been able to do so in
physics (see Section 3.1.2, 3.1.3, and 3.2.5), nor will we be able to pull
it off in anatomy, physiology, biology, ecology, or the like.
However unattainable such a strict dividing line between target world
and subject system may be, 85 this hasn't stopped our nature-dissecting
intellect from trying to put it into practice. As a result, contemporary
biology has maneuvered itself into the wake of physics and adopted the
same approach that physics has chosen to deal with the difficulty of not
being able to know what happens in measurement interactions: instrumentalism
(which does not care too much how measurement or observation works,
but is concerned mainly with the fact that it works).
Instrumentalism does indeed admit our ignorance about what goes on
in measurement interaction, and accepts that there may be fundamental
uncertainty about the exact location or even the actual existence of the
split between target (i.e., source of information) and subject system (i.e.,
VAN DuK/Process Physics, Time, and Consciousness 109

endpoint of information). With this in mind, however, instrumentalist-


minded biologists may argue that there is no objection whatsoever to still applying the split if it leads to agreement between the empirical data and the data-reproducing physical equations describing, say, the in- and outgoing flows across a cell membrane. But it is then easily forgotten that the split was only being applied as a convenient figure of speech, and all too often the resulting empirical data and their data-reproducing physical equations come to be treated as if they were reality itself-thereby reducing the raw processuality of nature to well-refined mechanical-mathematical procedures.
In the same vein, biology has moved more and more towards the idea
that organisms may work in an info-computational manner, as if they were
basically DNA-recombining biological machines geared exclusively
towards following the instructions laid down in their genes. In fact, the modern synthesis in biology, also known as neo-Darwinism, rests on two
pillars-Darwin's theory of evolution through natural selection and
Mendelian genetics. When combined, these two provide a picture of
biological evolution that can be explained entirely in terms of genetic
mechanisms:
The term "evolutionary synthesis" was introduced by Julian
Huxley ... to designate the general acceptance of two conclusions:
gradual evolution can be explained in terms of small genetic changes
("mutations") and recombination, and the ordering of this genetic
variation by natural selection; and the observed evolutionary
phenomena, particularly macroevolutionary processes and speciation,
can be explained in a manner that is consistent with the known
genetic mechanisms. (Mayr 1)
It can thus be argued that neo-Darwinism fits readily into the info-
computational narrative. By embracing this two-legged neo-Darwinian
interpretation of evolution, biologists basically start looking at life as
resulting entirely from DNA processing. DNA-sequences are thus treated
as if they were basically no more than coded instructions for synthesizing
proteins. 86 Put informally, DNA-code is thus likened to program instructions in computer software, thereby interpreting the organism's
"genetic program" in an info-computational sense. That is to say, in analogy
to running a program which will then produce a certain computable output,
the info-computational interpretation of genetics basically treats DNA as
the set of instructions for "computing" which proteins will be synthesized.
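The "DNA as program instructions" reading described here can be made concrete with a minimal sketch: a sequence is read codon by codon and deterministically mapped to amino acids via the standard genetic code. Only a handful of the 64 codons are included below, and the example sequence is an arbitrary illustration, not anything taken from the text.

```python
# Tiny excerpt of the standard genetic code (codon -> amino acid).
CODON_TABLE = {
    "ATG": "Met",  # also the usual start codon
    "TTT": "Phe", "GGC": "Gly", "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna: str) -> list:
    """Read the sequence three bases at a time, 'computing' the protein
    as a fixed function of the coded instructions."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino = CODON_TABLE.get(dna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("ATGTTTGGCAAATAA"))  # -> ['Met', 'Phe', 'Gly', 'Lys']
```

On this picture the organism's "output" is fully fixed by its genetic "input," which is exactly the info-computational framing that the following paragraphs go on to question.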
All in all, the info-computational neo-Darwinistic narrative implies
that organisms can be treated as if they were no more than code-converting
automatons whose arrival on the scene can be explained entirely in terms
of the accumulated changes in their genetic germ-line that gave them an
unpremeditated competitive edge over other rivalling organisms with
whom they share their precarious living-environment.
Put in a nutshell, the two pillars of neo-Darwinism can thus be
summarized as follows:
... order and harmony [in the Darwinistically evolving biological
world] does not arise from higher-order laws destined for such effect,
but can be justly attained only by letting individuals struggle for
personal benefits, thereby allowing order to arise as an unplanned
consequence of sorting among competitors. The Darwinism of the modern synthesis is, therefore, a one-level theory that identifies
struggle among organisms within populations as the causal source of
evolutionary change, and views all other styles and descriptions of
change as consequences of this primary activity. (Gould 224)
In this way, this info-computationally inspired neo-Darwinism describes
biological evolution as a process of gradual, cumulative change. This
process, moreover, is thus thought to have no predetermined direction.
Instead, merely by realizing the unpremeditated "side-effect" of their own
survival and that of their offspring, individual organisms automatically
give rise to the evolutionary direction of their own species. At the end of
the day, neo-Darwinism concludes that species outperform their competition
merely by the otherwise blind optimization of their own processing of
incoming information, nutritional matter, and energy.
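The neo-Darwinian picture just summarized, blind variation plus selection-as-sorting with no predetermined direction, is essentially the logic of a genetic algorithm. The sketch below is a generic illustration of that logic only; the population size, genome length, and fitness function are arbitrary choices, not anything from the source.

```python
import random

def evolve(generations: int = 300, pop_size: int = 30,
           genome_len: int = 20, seed: int = 1) -> int:
    """Blind mutation plus survival of the fitter half: order 'arises as
    an unplanned consequence of sorting among competitors.'"""
    rng = random.Random(seed)
    fitness = sum  # fitness is simply the number of 1-bits in a genome
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]   # struggle within the population
        children = []
        for parent in survivors:
            child = parent[:]             # reproduction...
            i = rng.randrange(genome_len)
            child[i] = 1 - child[i]       # ...with a random copying error
            children.append(child)
        pop = survivors + children
    return max(fitness(g) for g in pop)

print(evolve())  # high fitness accumulates without foresight or design
```

The point of the sketch is that cumulative adaptation emerges from nothing but differential survival of random variants, which is precisely the "one-level" account whose limits the next paragraphs discuss.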
Although this is a very powerful and illuminating account that has
brought us a long way in understanding how evolution works, it is
unfortunately not the complete story. First of all, it suffers from the same
defect that plagues our contemporary mainstream physics: there is no
place for actual, lived subjectivity. As a result, subjective consciousness
is typically labeled as epiphenomenal-a non-essential, illusory side-
effect-although, ironically, the subjective mating choice of the female
animal can hardly be put aside as irrelevant (see Hunt 29-31).
A second point, somewhat related to the first, is that neo-Darwinism
generally does not address the relevance of how the organism lives through
all this processing of incoming information, matter and energy. With the
growing acceptance of epigenetics, 87 neuro-plasticity, and neural re-use (see Anderson), there seems to be a growing sense of appreciation for
aspects like this. That is, it is now a well-accepted idea that the pheno-
and genotype of organisms can change not only due to the mutation and
sexual recombination of DNA, but also via developmental and experiential selection that leads to acquired characteristics (Edelman and Tononi 83-84), or, in other words, traits picked up by going through life, instead of
being strictly determined by genetic mutation and inheritance. Also, the
preferences that an individual animal may acquire during life, as well as
the cultural traditions that may be developed from such preferences within
different social groups of animals and eventually even a species as a whole,
may significantly contribute to the niche that this animal, social group,
or species may start to occupy and exploit (see, for instance, Riesch, et al.).
Instead of looking at molecular changes in DNA and RNA as the sole relevant factors for biological evolution, we should turn our attention to
other areas as well. In order to grasp natural evolution-and particularly
its most relevant aspect, life itself-we need to go beyond the current
info-computationally inspired approach of looking at organisms as if they
were merely the result of computational processing. We need to do better
than just looking at nature in terms of numerically specified inputs and
outputs of otherwise unspecified black boxes. 88
Instead of zooming in primarily on the quantitative specification of
inputs and outputs that should thus inform us about all kinds of informational,
material, and energetic flows, we need to focus more on the meaningful
difference that these inputs and outputs make to each other, the organism,
and its environment. To be more specific, we need to pay more attention
to how the organism's current lived experience of going through these
informational, material and energetic flows will affect its future going-
through life. Only in this way can we expect to give due credit to the
subjective aspects of our own everyday lives. Only by giving heed to how
an organism learns to make sense of its environment under precarious
conditions (see DiPaolo; Thompson 328-329) and to how it learns to
anticipate possible future events-and even construct scenarios of never
before experienced affairs-can we expect to overcome the defects that
plague the exophysical-decompositional, info-computational, representational
approach of our contemporary mainstream physical sciences, in particular
physics, chemistry, molecular biology, genetics, and (neuro)biology.
As will be discussed later on, these subjective acts of sense-making,
creative learning, anticipation, and the co-evolutionary synergism between
the organism and its ecological niche all involve the laying down of
habitually grooved activity patterns (see, for instance, Barrett). They all
involve the strengthening and/or weakening of latent action dispositions.
Among the things that are relevant in the formation of an organism's
dispositional perception-action repertoires we can find, for instance, muscle
memory, perception-memory patterns, an organism's inborn instincts, its
commitments and preferences, neuroplasticity, (epi)genetically laid down
propensities, behavioral tendencies, and so on.

4.2.1 From the info-computational view to information as mutualistic processuality

We need to go beyond the exophysical-decompositional, info-
computational approach and apply a concept of information that has the
two aspects of subjectivity and objectivity already baked into it from the
get-go, rather than sticking to the current convention of communication
theory in which data signals have to cross the poorly defined boundary
between target and subject side-from the source of the information to
its eventual end-receiver. In the info-computational view of cognition, of
which communication theory is one of the main inspirational influences,
information has to first reach the subject side before it can become
informative at all. But it is often forgotten that this information has to
remain unlabeled before it ever arrives there. On top of that, there is no
realistically attainable final center of subjectivity, arrival at which will
enable any incoming information signal to become fully known. Therefore,
a complete info-computational informativeness, although suggested by
our conventional information and communication theories, 89 will remain
forever out of reach.
In fact, when sticking to the info-computational mode of analysis we
will never be able to find out what it is about living, conscious organisms
that lifts them above the level of light-detecting photodiodes, information-
processing computers, and so on. To be sure, when we remain firmly
attached to the mechanical-mathematical, exophysical-decompositional
approach of info-computationalism, and do not make any allowance for
a complementary alternative account of information, we will never be
able to formulate a valid scientific answer to the question of what life and
consciousness are all about.
VAN DuK/Process Physics, Time, and Consciousness 113

In order to get a closer view of the first contours of a possible alternative conception of information, let us first take a look at how primitive organisms
get to make sense of their surroundings. In the biological world the
communication of signals is not to be understood in the externalistic, data-exchanging sense of info-computationalism in which code signals are sent
off on a one-way trip from source to destination. Instead, organism and
world are actively engaged in a joint process of mutual informativeness
in which everything within and without the organism can make a difference
(however slight it may be) to the informative process as a whole. 90

4.2.2 From the non-equilibrium universe to the beginning of life as an autocatalytic cycle

Evolution is not driven purely by genetic mutations that may result
in one kind of organism becoming well-adapted to its environment and
another less so (thus leading to the 'selection' of winners and losers in
the struggle for survival). Rather, evolution just as well depends on
organisms giving shape (both unintentionally and intentionally) and actively
manipulating their environment in a way that affects their survival.
Accordingly, biological evolution seems to involve more than just random
genetics-based adaptation to precarious and unpredictably changing
environments.
It is a core characteristic of evolution that organism and environment
are engaged in an intimate, symbiotic relationship. So much so that when
being pressed to precisely locate the actual dividing line between the two,
we will sooner or later come to realize that there is no such sharp and
absolute boundary to be found. 91 Instead of there being a truly objective
divide between living organism and environment, the symbiotic process
which is life is actually a natural extension of the process of nature-a
local outgrowth from what was already there, rather than something entirely
new, different, and otherworldly. 92 From this alternative perspective, the
early universe, since it can be said to have had the potential for life within
it from the very beginning, is better referred to as biocentric,
rather than abiotic. In the words of biologist and complex systems researcher
Stuart Kauffman:
... the evolving universe since the Big Bang has yielded the formation
of galactic and supragalactic structures on enormous scales. Those
stellar structures and the nuclear processes within stars, which have
generated the atoms and molecules from which life itself arose, are
open systems, driven by nonequilibrium processes. We have only
begun to understand the awesome creative powers of nonequilibrium
processes in the unfolding universe. We are all-complex atoms,
Jupiter, spiral galaxies, warthog, and frog-the logical progeny of
that creative power. (Kauffman, At Home, 50-51)
The universe as a whole can thus best be thought of as a giant nonequilibrium
process (Nicolis and Prigogine; Jantsch; Smolin, The Life, 158-160;
Chaisson 15, 125-131), rather than just an enormous lifeless collection of
externally interacting material "bits and pieces" in which life came into
being as a chance side-effect of entirely physical interactions. Although
this outline is admittedly a crude simplification, the latter view tries to
understand life and consciousness in terms of what is nonliving and
nonconscious, which is impossible (Griffin, "The Whiteheadian"). The
former, on the contrary, opens up the possibility of seeing the universe as
biocentric from its earliest of beginnings.
In fact, the beginning of life on earth could only occur due to nature's
nonequilibrium processuality. The interplanetary gas clouds and dust
particles that have come to form our planet earth, as well as the chemical
elements from which, later on, more complex molecules started to form,
all originate from nucleosynthesis in stars and supernovae (see Arnett; see also the second half of Section 2.5.1). All this eventually enabled the right
conditions for more complex chemical reactions to occur.
As Kauffman suggests, life is likely to have emerged spontaneously
from a "primordial soup" of such chemical substances. Under normal
conditions, such a primordial soup will accommodate numerous chemical
reactions among its different species of molecules. Most of these chemical
reactions are relatively slow-going, because reactions that go at a fast rate
quickly deplete their resources and will therefore typically fall into decline
about as fast as they got going (Kauffman, At Home, 47-64).
However, such fast reactions do not have to mean the end of the system's chemical reactivity. Each of the system's different species has
the potential to be a catalyst for multiple other reactions. In other words,
each chemical may be able to speed up a chemical reaction that was already
occurring within the system, although initially at a much slower rate. As
soon as the system reaches a critical diversity, those catalytically accelerated
chemical reactions will no longer deplete their resources. Instead, they
VAN DuK/Process Physics, Time, and Consciousness 115

become part of a self-perpetuating non-equilibrium autocatalytic cycle in


which the reaction product of one reaction becomes a resource chemical
or catalyst for the next reaction, and so on, thus establishing a closed chain with catalytic closure and a longer-lasting, sustainable balance between the system's production and consumption rates:
...life is a natural property of complex chemical systems.... [W]hen the number of different kinds of molecules in a chemical soup passes a certain threshold, a self-sustaining network of reactions-an autocatalytic metabolism-will suddenly appear. Life emerged, I suggest, not simple, but complex and whole, and has remained complex and whole ever since-not because of a mysterious élan vital, but thanks to the simple, profound transformation of...molecules into an organization by which each molecule's formation is catalyzed by some other molecule in the organization. The secret of life, the wellspring of reproduction, is not to be found in the beauty of Watson-Crick pairing, but in the achievement of collective catalytic closure. The roots are deeper than the double helix and are based on chemistry itself [and particularly the emergence of self-perpetuating chemical cycles]. So, in another sense, life-complex, whole, emergent-is simple after all, a natural outgrowth of the world in which we live. (Kauffman, At Home, 47-48)
Furthermore:
Here, in a nutshell...is what happens: as the diversity of molecules [in a primordial "soup" of prebiotic chemical substances] increases, the ratio of reactions to chemicals...becomes ever higher.... As the ratio of reactions to chemicals increases, the number of reactions that are catalyzed by the molecules in the system increases [even more]. When the number of catalyzed reactions is about equal to the number of chemical [molecule species], a giant catalyzed reaction web forms, and a collectively autocatalytic system snaps into existence. A living metabolism crystallizes. Life emerges as a phase transition. (Kauffman, At Home, 62)
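Kauffman's threshold argument lends itself to a back-of-the-envelope simulation. In the toy model below (a simplification for illustration, not Kauffman's actual chemistry), M molecule species allow roughly M x M pairwise reactions, and each reaction is catalyzed by any given species with a small fixed probability p. The number of catalyzed reactions per species then grows roughly as p times M squared, so it crosses 1, a rough proxy for catalytic closure, at a critical diversity.

```python
import random

def catalyzed_per_species(num_species: int, p: float = 1e-4,
                          seed: int = 0) -> float:
    """Estimate how many catalyzed reactions exist per molecule species.
    Each of the ~M*M possible reactions counts as catalyzed if at least
    one of the M species happens to catalyze it (probability 1-(1-p)^M)."""
    rng = random.Random(seed)
    num_reactions = num_species * num_species
    p_any = 1 - (1 - p) ** num_species
    catalyzed = sum(1 for _ in range(num_reactions) if rng.random() < p_any)
    return catalyzed / num_species

# As molecular diversity grows, the ratio rises sharply and passes 1:
for m in (20, 60, 100, 140):
    print(m, round(catalyzed_per_species(m), 2))
```

With p = 1e-4 the crossover sits near 100 species: below it, catalyzed reactions are sparse and isolated; above it, there are more catalyzed reactions than species, and a collectively autocatalytic web can "snap into existence" in Kauffman's sense.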
Such a collectively autocatalytic network has no clear boundary separating
it from its environment, other than the closed autocatalytic cycle in which
it is engaged. That is, although the cycle of coupled chemical reactions
remains largely the same with every iteration, the autocatalytic network
as a whole is an open system and keeps itself going by drawing in energy
and nutrients from its environment. In turn, this environment is then
"enriched" with the system's waste products and excess heat.
Whenever such an autocatalytic network manages to maintain its
organizational integrity over longer periods of time (for instance, by
developing a semi-permeable membrane) 93 or even when it succeeds in
attaining a higher level of complexity, it may start to develop more intricate
autocatalytic cycles and subcycles nested within or running through itself,
each with their own reaction products and their own specific impact on
the system's local-global organization. It is the going through these cycles
that makes a (bio)chemically meaningful difference not only to the
autocatalytic system as a whole, but to its environment as well.
In fact, since there is no clear boundary between system and environment,
all non-equilibrium processuality 94 that facilitates their symbiotic relationship
should actually be considered as the relevant phenomenon of interest-not
just the autocatalytic system by itself. Accordingly, a system's sensitivity
to its environment as well as its adaptivity should be seen as being part
of the "biunitary whole" of the system-environment system at large.
Sensitivity and adaptivity should thus not be thought of as properties
belonging strictly to the autocatalytic network itself, but rather as aspects
of the process as a whole. Accordingly, they are inevitably dependent
upon the same grand-environment from which the autocatalytic system
had arisen in the first place. As such, there is an unmistakable co-dependency
between the autocatalytic network and its environment, 95 and sensitivity
and adaptivity are just as well aspects of the environment as they are of
the network system in question. Even when such an autocatalytic network
manages to grow a protective semi-permeable membrane, this co-dependency
persists, as does the underlying "oneness" of the network and its environment.

4.2.3 From environmental stimuli to early subjective experience


In another layer of interpretation, though, this membrane-packaged
chemical reaction network may now be considered an "autopoietic unit"
(see Maturana and Varela)-an individual biological cell whose internal
processes are not only capable of maintaining the cellular whole in which
they are participating, but also of giving rise to new "infant cells" with
the same biochemical reaction repertoire as the original "parent cell."
Different stimuli may trigger different biochemical chains of events within
and between such primitive biological cells. Depending on the kind of
cell, stimuli may trigger changes in metabolism (by changing the cell's
internal reaction pathways), outer shape (e.g., when an amoeba expels
fluids by using its contractile vacuole as a protective mechanism against
absorbing too much water), collective behavior (e.g., free-roaming slime
mold cells that aggregate together when food becomes scarce), the ability
to perform cell division, and so on.
In multicellular organisms, environmental stimuli are farther removed
from inner-organism processes so that direct stimulation can no longer
be used as an effective means of signal transmission. To get from sensory
stimulation to motor or homeostatic response, multicellular organisms
typically rely on extracellular electro-chemical signaling cascades facilitated
by lengthy nerve fibers (see Bruce, et al. 7-8). Such a membrane-bounded
organism, its entire embedding environment, as well as the environmental
stimuli that may gradually come to guide the organism's behavior, form
a seamlessly merged ecosystemic whole. Somatosensory and sensorimotor
activity patterns should therefore not be interpreted as happening exclusively
within an organism (Gibbs 270-271; Thompson and Varela). Instead, these
activity patterns transcend the organism as they loop through the organism-
environment system as a whole, thus ending up as adaptive perception-action
cycles that form the rudimentary basis of subjectivity.
In stark contrast to the above scenario, we often still resort to info-
computationalism and the simplifying machine metaphor in which sensitivity
and adaptivity are associated with receptor, processor, and effector units
within an organism. Accordingly, we typically like to ascribe the
characteristics of "sensitivity" and "adaptivity" primarily to organisms,
and not so much to the environment in which they live. But although an
environment may usually be less sensitive and adaptive to organism-
induced changes than the other way around, the environment participates
just as much in the cyclic process of co-sensitivity and co-adaptivity as
does the organism. In fact, as will be further discussed below (in Section
4.2.4), it is the going through such cyclic processes that should be identified
as the essence of subjective experience.

4.2.4 From early photosensitivity to value-laden perception-action cycles


To understand how the first sensory modalities-particularly light-
sensitivity-could show up in organisms, we should look at how any of
the cycles that a primitive autocatalytic network (or biological cell) might
be engaged in, could come under the influence of light. That is, we have
to look at how autocatalytic cycles can tap into light energy to thus become
adaptively oriented towards light in a way that contributes to the organizational
integrity of the system as a whole. 96 If light stimuli manage to initiate a
chain of events that positively affects the well-being of the organism, then
this may offer the organism a whole new means to cope with the many
challenges of its precarious living-environment.
There are different scenarios that may lead to the development of such
light-sensitivity. For instance, an autocatalytic network may manage to
draw a photosensitive protein into one of its chemical cycles, or one of
the many proteins that are taking part in the chemical reaction network
of a primitive biological cell may start to switch to another state when
being impacted by light (for further details on the possible evolutionary
history of light-sensitive proteins and amino acids in animals see Fueda,
et al.). In both cases, their biochemical networks are apt to develop adaptive
reaction pathways.
Specifically, when this newly acquired photo-sensitivity turns out to
facilitate the prolonged continuation of the cycles in which it is involved,
thereby leading to an increased organizational integrity, it can be said to
have "survival value" for the system as a whole. For instance, non-UV
light stimuli may trigger the unpremeditated production of reaction products
with UV-protective characteristics, thus enabling the networked cycles
to develop a defense against damaging UV radiation (see Fischer, et al.).
As an autocatalytic network develops an orientation towards light,
this may lead to protection against environmental threats, increased access
to nutrients and energy resources, proto-homeostatic regulation of the
organism's biochemistry, 97 and other adaptive benefits. In this way, it
becomes possible for the organism to keep its metabolism going, to carry
on with regenerative maintenance, and to keep investing in renewing
growth of its various life-supporting cycles. Light stimuli can in fact
acquire "somatic" meaning and become valuative as, during life, they
gradually get to be associated with repeatedly co-occurring favorable
and/or unfavorable internal states of the autocatalytic system.
For instance, when activation of a light-sensitive cycle will consistently
go hand in hand with access to nutrients, this will obviously promote the
network's metabolic well-being. Hence, through their coupling to the
organism's well-being and mal-being, environmental stimuli are basically
given body-related value and can thus become inherently meaningful to
the organism. In the words of neurobiologist Gerald Edelman: "I use the
word value to refer to evolutionarily ... derived constraints favoring behavior
that fulfills homeostatic requirements or increases fitness in an individual
species" (Edelman, The Remembered, 287-88). In his book Consciousness:
How Matter Becomes Imagination, co-written with Giulio Tononi, it is
put like this: "We define values as phenotypic aspects of an organism that
were selected during evolution and constrain somatic selective events,
such as the synaptic changes that occur during brain development and ex-
perience" (Edelman and Tononi 88).
Accordingly, Edelman defines value systems as those constraining,
cycle-modulating parts of the organism on the basis of which it can: (1)
carve up the world of signals into re-cognizable, re-livable, somatically
relevant categories, 98 and (2) develop adaptive action repertoires (see
Edelman, "Building," 43). To an organism, value constraints enable it to
adaptively increase its fitness in the absence of pre-programmed,
quantitatively specified goals.
Unlike a boiler-heated or air-conditioned room whose change in
temperature can be initiated by turning the thermostat's temperature
selection dial up or down, the adaptive change in organisms is not directed
by such externally imposed set points. Whereas a thermostat (1) may just
as well be located outside the room whose temperature it is trying to
manipulate, and (2) is by itself just a largely passive signal-comparing
switch box, value pervasively participates in, and actively contributes to,
adaptive change in an organism's organization. Those inner-organism
cycles that, during evolution, have come to serve as "salience indicators"
for other inner-organism cycles, may thus be called value systems.
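The contrast drawn here can be sketched in a few lines of code (a hypothetical illustration; the numeric update rule and its parameters are my own, not Edelman's): the thermostat compares a signal against an externally fixed set point, whereas a value system has no pre-given goal and instead lets the salience it assigns to a stimulus drift with the organism's history of well-being.

```python
def thermostat(temp, set_point=20.0):
    """A passive signal comparison against an externally imposed set point."""
    return "heat" if temp < set_point else "idle"

class ValueSystem:
    """Toy value system: no fixed goal; the salience it assigns to a
    stimulus drifts with the organism's own history of well-being."""

    def __init__(self, salience=0.5):
        self.salience = salience  # initial, unbiased weighting

    def modulate(self, stimulus, well_being_delta):
        # Stimuli that co-occur with improved well-being gain salience
        # and so increasingly shape later responses; the 0.1 step is an
        # arbitrary illustrative learning rate.
        self.salience = min(max(self.salience + 0.1 * well_being_delta, 0.0), 1.0)
        return stimulus * self.salience

# The thermostat's response to 18 degrees never changes; the value
# system's response to the very same stimulus shifts with lived history.
vs = ValueSystem()
print(thermostat(18.0), vs.modulate(1.0, +1.0), vs.modulate(1.0, +1.0))
```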
For instance, since it communicates information about environmental
light conditions to other parts of the body, the synthesis and secretion of
melatonin by the pineal gland plays a major role in the wake-sleep cycle
of human beings and other mammals. Another example can be found in
the way that the evolved shape, muscularity, and jointedness of a human
hand 99 leads to a certain repertoire of possible and impossible movements
(see Edelman and Tononi 88). This, of course, affords us hand-equipped
human beings the ability to manipulate and take advantage of environmental
opportunities in a specific way (see Gibson, The Ecological, 113, 224-
225). In turn, this may then direct further evolutionary adaptation of hand
morphology, the sensorimotor-somatosensory system, and, in the long
run, the human species as a whole.
In the absence of any pre-available manual or coded instructions that
tell the organism how to make sense of the world in which it lives and
what to do in each situation in order to stay out of harm's way, value
systems are indispensable for growing-up organisms to learn survival
skills and to increase their adaptivity in a precarious and ever-changing
living-environment (Edelman and Tononi 46-48). Taking all this into
account, Edelman delineates value systems as follows:
I define value systems as those parts of the organism (including
special portions of the nervous system) that provide a constraining
basis for categorization and action within a species. I say "within a
species" because it is through different value systems that
evolutionary selection has provided a framework of constraints for
those somatic selective events within the brain of each individual of
a particular species that lead to adaptive behavior. Value systems can
include many different bodily structures and functions (the so-called
phenotype); perhaps the most remarkable examples in the brain are
the noradrenergic, cholinergic, serotonergic, histaminergic, and
dopaminergic ascending systems. During brain action, these systems
are concerned with determining the salience of signals, setting
thresholds, and regulating waking and sleeping states. Inasmuch as
synaptic selection itself can provide no specific goal or purpose for
an individual, the absence of inherited value systems would simply
result in dithering, incoherent, or nonadaptive responses. Value
constraints on neural dynamics are required for meaningful behavior
and learning. (Edelman, "Building," 43-44)
Although thus far we have been focusing primarily on the role of value
in giving shape to the selection-driven adaptivity of organisms, it must
be emphasized that value does not only influence adaptivity. Most notably,
it is part and parcel of the organism's subjectivity, feeling, emotion, etc.,
as it directly pertains to what it means for the organism to go through its
many coupled and nested life-supporting cycles.
From all these life-supporting cycles that value is involved in,
perception-action cycles are certainly among the earliest and most prom-
inent. 100 Given the intimate interplay between sensorimotor, somatosensory,
and value cycles, it is probably more instructive to refer to such perception-
action cycles as sensation-valuation-motor activation-world manipulation
cycles. It also needs to be remarked that the concept of "sensation," as it
is used here, can pertain to both interoception and exteroception; the
inward-looping cycles of body-related "self signals" and the outward-
looping cycles of world-related "non-self signals."
Furthermore, because of sensorimotor and somatosensory coupling
(see Sections 4.2.3 to 4.3.1 for a more elaborate discussion), a light-
sensitive cycle not only affects its own future course of development, 101
but also that of the greater network of organism-environment cycles as a
whole. Indeed, in the course of evolution, a light-sensitive cycle is likely
to have started out with an otherwise non-functional protein that, under
the influence of light, started to operate as a bi-stable switch for triggering
a whole cascade of other events (see Fueda, et al.). So, once the bi-stable
operation of such a light-sensitive protein started to affect other processes
elsewhere in the chemical reaction network, this could trigger them to
change their dynamics in a way that would be favorable for the organism
as a whole. In other words, the protein's light-sensitivity gets to have
survival value for the organism as a whole. Moreover, this is how, already
at a primordial level, sensation-, metabolism- and action-related cycles
could become intimately interwoven, thereby giving rise to a primitive
form of sensorimotor and somatosensory coupling and the binding-into-
one of the organism's multiple cycles of experience.
It is by continuously going through these cycles that this ecosystemic
network as a whole enacts a phenomenal world for the organism to negotiate
(see Maturana and Varela). Although initially on an extremely rudimentary
level, the organism, by living through its various organism-environment
cycles, gets to "make sense" of what is otherwise left unlabeled. And to
the extent that no clear dividing line can be drawn between what belongs
to the organism and what belongs to its environment, the value-ladenness
should actually not be attributed solely to the organism, but to the organism-
environment system as a whole.
In fact, since the organism has only access to its world of experientially
categorized signals and has no other way of making sense of its living-
environment, the entire phenomenal scenery around it should ultimately
not be considered as being apart from, but rather as a part of the process
of experience (see Velmans 327-328). Accordingly, what we usually like
to think of as being the physical "real world out there" is actually part and
parcel of our process of experience (ibid.). So much so that, according to
Max Velmans's reflexive monism, we, as seamlessly embedded conscious
organisms with our conscious view of the greater embedding universe,
are in fact participating in a reflexive process through which nature
experiences itself (ibid.).
Much in line with Alfred North Whitehead's panexperientialism, the
experiencing organism and its experienced environment are thus two
aspects of the same process of experience. Also, in the tradition of
biosemiotics, the outward- and inward-looping organism-environment
cycles that the organism is going through as it tries to face the challenges
of life, can together be said to make up a bundled biosemiosic cycle of
mutual significance, or, as early biosemiotician Jakob von Uexküll (see
Von Uexküll and Von Uexküll; Koutroufinis) would have it, the organism's
self-centered world consists of "carriers of significance," thus forming a
world of subjectively meaningful information beyond which there is
nothing for the organism to make sense of.
Yet another way of looking at these perception-action cycles is in the
context of Gestalt psychology. Gestalt cycles of experience (see Fig. 4-
1) are thought to be an indispensable part of the cyclic self-regulation of
all living organisms (Perls; Clarkson and Mackewn 48). Gestalt psychology
starts out with the idealizing picture of a general state of balance for living
organisms that basically allows them to stay at rest and be relaxed, almost
without a care in the world, just by letting the self-regulatory cycle do its
work. The main idea behind the Gestalt cycle, then, is that when its cycling
is left unperturbed, an organism may simply go through it without having
to actively take care of any business. But whenever an internal or external
disturbance of the cycle occurs, this will prompt the organism to redirect
the deviating course of the cycle in order to restore homeostatic balance,
to maintain a healthy metabolism, or to satisfy needs, in short, to return
to the desired situation of rest and balance:
The person [or organism] organizes his experience-his sensations,
images, energy, interest and activity-around the need until he has
met it. Once the need is met the person feels satisfied-so that
particular need loses its interest for him and recedes. The person is
then in a state of withdrawal, rest or equilibrium, before a new need
emerges and the cycle starts all over again. In a healthy individual
this sequence is self-regulating, dynamic and cyclical. Self-regulation
does not, of course, necessarily ensure the satisfaction of the needs of
the person. If the environment is deficient in one of the needed
items-water in the desert or affection in a family-the person will
not be able to quench his thirst or satisfy his need for love. Self-
regulation implies that the individual will do his best to regulate
himself in the environment given the actual resources of that
environment. (Clarkson and Mackewn 49)
The various stages that the Gestalt cycle goes through during each full
turn, can be roughly described as follows (see Perls 69): (1) the organism
is in a state of rest; (2) the organism senses and becomes aware of a
disturbance (which may be internal or external); (3) a Gestalt is being
formed (i.e., a meaningful foreground pattern apparent from, yet seamlessly
embedded within, its background patterning and intimately related with
the entire whole and previous history of the organism-environment system);
(4) the organism prepares for action and then follows up on it, thus taking
directed action with the aim of, (5) achieving a decrease in tension, which
should then result in, (6) the return to the desired organismic balance.
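The six stages above can be summarized as a minimal state sequence (a schematic sketch of Perls's cycle, with stage names paraphrased from the list above):

```python
# Stage names paraphrase the six steps listed above (after Perls 69).
GESTALT_STAGES = (
    "rest",
    "sensing/awareness of disturbance",
    "Gestalt formation",
    "directed action",
    "decrease in tension",
    "return to balance",
)

def run_cycle(disturbed):
    """One turn of the self-regulatory cycle.

    The cycle is entered only when an internal or external disturbance
    is sensed; otherwise the organism simply remains at rest.
    """
    if not disturbed:
        return ["rest"]
    return list(GESTALT_STAGES) + ["rest"]  # the cycle closes where it began

print(run_cycle(False))
print(run_cycle(True))
```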
During all this, there is an intimate interdependence between organism
and environment to the extent that they can be seen as an inseparable
whole whose process of subjective experience does not take place exclusively
within the organism. Instead, subjectivity involves the entire bound-in-
one multiplicity of experiential organism-environment cycles. Because
the formation of a Gestalt (which can be loosely interpreted as an
"experiential foreground pattern" or "formed situation") depends on the
entire whole and history of the organism-environment system, it does not
enable the organism to see the world as it is, but to experience it in terms
of what may be called "motivational valences": 102
Valences are opportunities to engage in actions that structure a
motivated person's perception of a situation and her subsequent
actions. For a person who is motivated by hunger, a sandwich has
valences that it does not have for a sated person, but only if it is
reachable and does not belong to someone else. The key is that these
valences appear in the environment as a function of the motivations
of people, and vice versa. Valences are perceived forms that are a
function of the person's state and the environment's characteristics.
(Kaufer and Chemero 88)
Next to panexperientialism, biosemiotics, and Gestalt psychology, other
related theories of perception and conscious experience that should be
mentioned here are: J.J. Gibson's ecological psychology, enactivism
(Maturana and Varela), radical embodied cognitive science (Chemero),
Velmans's reflexive monism, James's neutral monism, and, of course,
Edelman's and Tononi's extended theory of neuronal group selection,
which drew heavily on James's "specious present" (James, Principles,
609; also Clay 167), and his view on the conscious stream of experience.
For now, however, I will not go further into their respective versions of
the perception-action cycle. Instead we will focus on perceptual categorization,
and what this means for a conscious being's "sculpting into actuality" of
its sense of self and world.

4.3 Perceptual categorization, consciousness and mutual informativeness


As has already been briefly touched upon in Section 4.2.3, perceptual
categorization is the process of carving up nature into categories, although
nature itself does not contain any such categories at all (Edelman and
Tononi 104). From early life onwards, it is by means of perceptual
categorization that sentient organisms gradually get to differentiate salient,
life-affecting foreground patterns from less relevant background patterns.
This allows these organisms to chisel the world of initially uncoordinated
signals into a multimodal, action-affording and somatically meaningful
scene for adaptive purposes (see Edelman and Tononi 48-49; Pickering,
"Active").
This conscious scene should not be thought of as an inner-brain
projection of a so-called "real world out there," but rather as an unscripted
live-performance scene of first-person experience in which sense of self
and sense of world are two aspects of the conscious organism's living
through its non-equilibrium body-brain-environment cycles (see Fig. 4-1).
Although the term "scene" probably reminds most people of audiovisual
media, the formation of such a conscious scene involves the binding
together of many sensorimotor and somatosensory streams, related not
only to vision, but also to other sensory modalities, proprioception,
interoception, value, and more. For sake of simplicity, though, we will
first focus on the stereotypical example of visual perception.
An incoming stream of light, originating from the ambient optic array
(Gibson, The Senses; and The Ecological, 58; see also Section 4.1) in the
organism's environment, passes through the eye's light-refracting cornea
and lens that focus the stream onto the retina. As the light is absorbed by
photosensitive proteins within the retinal photoreceptor cells, this prompts
the generation of nerve impulses that run through the optic nerve to the
thalamus, which can be thought of as the sensorimotor and somatosensory
"integration and intercommunication center" of the brain, located above
the brain stem, near the center of the brain. From there, the thalamus
connects to the primary, secondary, and higher visual cortices:
... we can construct a simplified neurophysiological scenario of what
goes on in the brain when we perceive a given color. Various classes
of neurons in the retina, lateral geniculate nucleus [i.e., the vision-
dedicated part of the thalamus], primary visual cortex, and beyond,
progressively analyze the incoming signals and contribute to the
construction of new response properties in higher visual areas.
(Edelman and Tononi 161)
The stream of selectively activated response signals in the visual cortices
passes back to the thalamus, where it distributes further to various other
areas of the brain. According to present neuroscientific understanding,
the role of the thalamus is not only that of a "relay center" between
different brain areas, such as the hippocampus and the cerebral cortex. It
contributes to the establishment of sleeping patterns, circadian rhythmicity,
and pineal melatonin production and secretion (Jan et al.), and is involved
in the facilitation of focal attention and memory access.
Moreover, as neuroscientist Luiz Pessoa argues, the thalamus is not
just a passive relay station, but it plays an especially big role in the
integration of global signals. In this way, it contributes to affective valuation
and also enables integrative intercommunication among networks of brain
regions, not just one-way signaling from one brain module to the other
(which is the archetypical illustration of how the info-computational
approach works), but system-wide, back and forth, and cross-hierarchic
signaling within a highly interconnected, distributed network (Pessoa).
On top of that, the thalamus is considered a vital structure in the "firing
up" of consciousness because it seems to act as a gatekeeper that allows
or disallows exteroceptive, interoceptive and hippocampal signals to be
conveyed to, amongst others, their associated sensory cortices, the insular
cortex, and the pre-motor and motor cortices.
As all this is going on, the thalamus also signals to the prefrontal
cortex which has evolved to perform the task of linking sensorimotor
activity patterns with value- and emotion-mediated internal goals of the
organism (see Damasio 41, 267-268). This can occur because dopamine-
releasing reward systems in the subthalamic brain stem region project
specifically to the prefrontal cortex. Other diffusely projecting value
systems-neuromodulator-secreting nuclei, such as the noradrenergic,
serotoninergic, and histaminergic cell nuclei that have become evolutionarily
associated with events that are biologically relevant to the organism-also
condition the firing patterns and newly developing connection patterns in
the thalamocortical region (see Edelman and Tononi 88-90, 134).
Because of this, an acquired history of organismically meaningful
events can become memorially embedded within the thalamocortical
network in a distributed way. This is achieved by the laying down of
dispositional action repertoires that, given the occurrence of perceptually
similar circumstances, enable the organism to prompt what may be called
a "thoughtful act" by reactivating a certain somatosensory-sensorimotor
performance routine that has proven to be successful before. In this way,
the associated perception-action cycle is started up so that the organism
can adaptively respond to the situation with which it is confronted.
This can happen because both the pre-motor and the motor cortex are
affected by these value systems and can thus play their part in such
"thoughtful acts." But although they participate in the thalamocortically
directed perception-action cycles, they do not signal back to the thalamus.
Instead, the pre-motor and motor cortex serve as outgoing ports as they
are dedicated primarily to coordinating and smoothening the activity of
motoneurons that set the musculoskeletal apparatus in motion (see Edelman
and Tononi 180).
Unlike the other cortical areas that are typically engaged in intense
back-and-forth signaling with the thalamus, the signals of the motor
cortices basically take the detour of motor activation and musculoskeletal
manipulation of the organism's environment-thus extending into the
outward-looping part of the organism's perception-action cycles. The
physical impact of these motor acts on the environment may then make a
relevant enough difference to the optic array, so that any changes can be
picked up by the organism's visual apparatus, thus completing the organism's
sensation-valuation-motor action-world manipulation cycle.
All the other non-motor signals, as they criss-cross all over the
thalamocortical region in a complexly reciprocating way, participate in
what Edelman has called the "dynamic core." This dynamic core, then, is
the highly dynamic, constantly fluctuating "swarm" of activity patterns
that reverberates all over the thalamocortical region as its many contributing
neuronal groups make a difference to each other's firing dispositions (see
Fig. 4-3; also see Edelman and Tononi 143-154). The dynamic core is
thought to facilitate the emergence of primary and higher-order consciousness
through the activity of reentry-the widespread reciprocal signaling in
the thalamocortical region of the brain:
... reentry is a process of ongoing parallel and recursive signaling
between separate brain maps along massively parallel anatomical
connections, most of which are reciprocal. It alters and is altered by
the activity of the target areas it interconnects .... The correlation of
selective events across the various maps of the brain occurs as a
result of the dynamic process of reentry. Reentry ... leads to the
synchronization of the activity of neuronal groups in different brain
maps, binding them into circuits capable of temporally coherent
output. Reentry is thus the central mechanism [or rather process] by
which the spatiotemporal coordination of diverse sensory and motor
events takes place .... It is important to emphasize that reentry is not
feedback. Feedback occurs along a single fixed loop made of
reciprocal connections using previous instructionally derived
information for control and correction such as an error signal. In
contrast, reentry occurs in selectional systems across multiple parallel
paths where information is not prespecified. (Edelman and Tononi
106-106, also 85)
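The claim that reentry "leads to the synchronization of the activity of neuronal groups" can be illustrated with a deliberately simple stand-in (not Edelman's model): two reciprocally coupled phase oscillators with different natural frequencies, which phase-lock when two-way coupling is present and drift apart when it is absent. All constants are arbitrary illustrative choices.

```python
import math

def phase_mismatch(coupling, steps=2000, dt=0.01):
    """Two reciprocally connected 'maps', modeled as phase oscillators
    with different natural frequencies. Returns the final phase
    difference, wrapped to [-pi, pi]; values near 0 mean the maps have
    synchronized into temporally coherent activity.
    """
    phase_a, phase_b = 0.0, 2.0
    freq_a, freq_b = 1.0, 1.3
    for _ in range(steps):
        # Reciprocal (two-way) signaling: each map nudges the other.
        nudge_a = coupling * math.sin(phase_b - phase_a)
        nudge_b = coupling * math.sin(phase_a - phase_b)
        phase_a += dt * (freq_a + nudge_a)
        phase_b += dt * (freq_b + nudge_b)
    diff = (phase_b - phase_a + math.pi) % (2 * math.pi) - math.pi
    return abs(diff)

print("without reciprocal coupling:", phase_mismatch(0.0))
print("with reciprocal coupling:   ", phase_mismatch(1.0))
```

With coupling, the residual phase mismatch settles near zero; without it, the two maps' activities drift independently.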
Reentry can thus be involved in the formation of local, nested cycles as
well as in that of global cycles, thus giving shape to the local-global
reentrant organization of associative thalamo-cortical, cortico-cortical,
and thalamo-cortico-thalamic pathways. Accordingly, the process of
reentry allows a sentient organism to carve up its initially unlabeled living-
environment into conscious self and scenery without having to rely on a
homunculus or data-processing computer program (see Edelman and
Tononi 85). In a nutshell, reentry facilitates the culmination of exteroceptive,
interoceptive, and value signals into a multimodal, yet bound-in-one stream
of experience, thus giving rise to a thinker of thoughts, a doer of deeds,
and a feeler of feelings (see Thompson 325)-all wrapped in one:
This sculpting of a multimodal stream of experience ... is facilitated
by the extremely high degree of mutual informativeness within and
between the mind-brain's neuronal groups. Through reentry, neuronal
groups will fire back and forth in response to each other's in- and
outgoing neuronal spike trains, neurosecretory signals, etc., thus
giving rise to internally meaningful activity patterns [see Edelman
and Tononi 127-131]. That is, as "world-related" exteroceptive
signals are reentrantly associated with "organism-related"
interoceptive signals, this enables the realization of higher-order
perceptual categorization. Accordingly, exteroceptive signals can
acquire somatic meaning through their linkage with interoceptive
signals .... Hence, as ongoing perceptual categorization makes a
difference to the mind-brain's association patterns as well as to the
organism's overall physiology, it enables the conscious organism to
re-cognize different outer-organism "world states" 103 by living
through the therewith associated inner-organism "body states" 104 [see
Edelman, The Remembered, 93-94; Pred 262-264]. (Van Dijk, "The
Process")
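The "mutual informativeness" invoked in the quotation has a standard quantitative counterpart, Shannon's mutual information. The following sketch (an illustration of the general measure, not of any specific neural data) estimates it for a noisy "interoceptive" echo of an "exteroceptive" stream versus an unrelated stream:

```python
import math
import random

def mutual_information(xs, ys):
    """Mutual information in bits, from the joint empirical distribution."""
    n = len(xs)
    joint, px, py = {}, {}, {}
    for x, y in zip(xs, ys):
        joint[(x, y)] = joint.get((x, y), 0) + 1
        px[x] = px.get(x, 0) + 1
        py[y] = py.get(y, 0) + 1
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in joint.items()
    )

rng = random.Random(0)
world = [rng.randint(0, 1) for _ in range(10000)]           # "exteroceptive" stream
body = [w if rng.random() < 0.9 else 1 - w for w in world]  # noisy "interoceptive" echo
noise = [rng.randint(0, 1) for _ in range(10000)]           # unrelated stream
print(mutual_information(world, body))   # substantially above zero
print(mutual_information(world, noise))  # close to zero
```

The correlated pair of streams carries roughly half a bit of information about each other, while the unrelated pair carries essentially none: a minimal stand-in for reentrantly associated exteroceptive and interoceptive signals.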
As they participate in perceptual categorization and the binding together
of exteroceptive, interoceptive, and value-laden signals, all these activity
patterns collectively maneuver all through the brain as one ongoing flow
process. Depending on shifts in attention and on which sensorimotor and
somatosensory circuits are involved, this "swarming" process constantly
varies its local density and composition. In this way, it is continuously
changing which parts of the thalamocortical network are participating in
foreground signaling and which parts are instead engaged in background
activity. In early life, all of the mind-brain's thalamocortical activity
patterns are still relatively undirected and unpolished, but the release of
dopamine and other neuromodulators conditions neural pathways that are
active at that moment (e.g., the motor circuits for grabbing and the visual
circuits for optical focusing). In this way, the conscious human being can
develop habitually grooved action repertoires with a high level of not only
specialization, but also flexibility to changing circumstances.
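A toy illustration of this value-driven conditioning is a so-called three-factor Hebbian update, in which a connection is strengthened only when presynaptic activity, postsynaptic activity, and a neuromodulatory value signal coincide. Everything below (the function name, the learning rate, the rule itself) is a generic textbook sketch, not a model taken from Edelman's work:

```python
import numpy as np

def value_gated_update(w, pre, post, value, lr=0.1):
    """Three-factor Hebbian rule: weights grow in proportion to the
    coincidence of pre- and postsynaptic activity, but only insofar
    as a value (neuromodulatory) signal is present."""
    return w + lr * value * np.outer(post, pre)

w = np.zeros((2, 3))              # 3 input neurons -> 2 output neurons
pre = np.array([1.0, 0.0, 1.0])   # active input pattern
post = np.array([0.0, 1.0])       # active output neuron

# without a value signal, nothing is consolidated
w = value_gated_update(w, pre, post, value=0.0)
# with a dopamine-like value signal, the active pathway is strengthened
w = value_gated_update(w, pre, post, value=1.0)
print(w)   # only the connections active at the rewarded moment grew
```

Repeating the rewarded update grooves the pathway further, which is the minimal sense in which "habitually grooved action repertoires" can develop.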
[Fig. 4-3 image: (A) NREM sleep, sensorimotor cortex, activity snapshots at 15, 55, 100, 250, 380, and 750 ms; (B) Wakefulness, sensorimotor cortex, snapshots at 100, 150, 200, and 350 ms]

Fig. 4-3: Stationary and swarming cortical activity patterns in
non-REM sleep and wakefulness. © PNAS 2007. A healthy
subject's sensorimotor cortex is exposed to Transcranial Magnetic
VAN DIJK/Process Physics, Time, and Consciousness 129

Stimulation (TMS) while brain activity is being recorded using
electroencephalography (EEG). In order for the locations with
maximum activity to light up, thresholding at 80% is applied to the
density distribution of the recorded action potential voltages. With
appropriately tuned stimulation parameters the recorded activity
patterns in Fig. 4-3A will remain quite stationary during non-REM
sleep as they linger slightly beneath the TMS coil. During
wakefulness, however, the activity patterns "swarm" across large
portions of the cortex (Fig. 4-3B). In the case of non-REM, dreamless
sleep (during which subjects are commonly thought to have no or
negligible conscious experience) the stationary activity patterns
indicate the absence of mutual informativeness among neuronal
groups in the thalamocortical region. In the case of wakefulness, on
the other hand, there is rich mutual informativeness which typically
enables the occurrence of avid swarming behavior. Although
consciousness-facilitating mutual informativeness is thought to occur
mainly within the thalamocortical region of the brain (Edelman and
Tononi 139-154), the EEG images depicted here only show activity
patterns whose signal intensity is in the top range (80-100%).
(Images edited from: Massimini, et al. 8499 - Fig. 4)

4.3.1 Integration, differentiation, and the mind-brain's mutual informativeness
According to Edelman's theory of neuronal group selection, the brain
facilitates the coming together of multiple somatosensory, sensorimotor,
and value-related activity patterns, so that a bound-in-one integrated
experience can occur. This basically means that it is impossible to experience
individual aspects in one's stream of consciousness separately from all
the others. For instance, although we sighted conscious organisms are
indeed able to distinguish color, shape, and texture-each with its
own possible neural correlates-the conscious experience in which
these aspects come to the fore will always form a unified and integrated
whole. Hence, despite the mind-brain's capability to partition the world
of signals along many different perceptual dimensions, none of these
dimensions can be experienced in strict isolation from the others.
It is through the extraordinarily high level of reentry-driven mutual
informativeness among neuronal groups participating in the dynamic core
that the binding of these different perceptual modalities is achieved. If
this mutual informativeness is absent-as in dreamless, non-REM sleep,
or during deep coma-conscious experience will not occur (see Fig. 4-3A).
On the other hand, if mutual informativeness does take place (see
Fig. 4-3B), it facilitates the coming into actuality of a unified "conscious
now" 105 that enables the organism to distinguish between many different
conscious events:
The ability to differentiate among a large repertoire of possibilities
constitutes information, in the precise sense of "reduction of
uncertainty." Furthermore, conscious discrimination represents
information that makes a difference, in the sense that the occurrence
of a given conscious state can lead to consequences that are different,
in terms of both thought and action, from those that might ensue from
other conscious states. (Edelman and Tononi 29-30)
Hence, when adopting an information-theoretical perspective, we may
state that the emergence of each such conscious state (or "conscious now")
rules out a vast range of other possibilities. The combinatorial potential
of possible perceptual categorizations in each emergent moment of
consciousness is practically infinite, and the coming into actuality of each
culmination of such categorizations amounts to an enormous reduction of
uncertainty, or, in other words, information (Edelman and Tononi 127-
129; Tononi 217-218; Van Dijk, "The Process").
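In the Shannon sense that this passage invokes, the information gained by discriminating one state from n equally likely alternatives is log2(n) bits. The numbers below are purely illustrative of "reduction of uncertainty," not an estimate of any actual neural repertoire:

```python
import math

def bits_of_discrimination(n_alternatives: int) -> float:
    """Information, in bits, gained when one outcome is singled out
    from n equally likely alternatives ("reduction of uncertainty")."""
    return math.log2(n_alternatives)

print(bits_of_discrimination(2))      # a bare light/dark discrimination: 1 bit
print(bits_of_discrimination(2**30))  # one state out of ~10^9 possibilities: 30 bits
```

The point of the passage is that each "conscious now" performs a discrimination of the second kind: its informativeness comes from the vast repertoire of alternatives it rules out.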

4.3.2 Self-organization and the noisy brain


In the biological brain noisiness is an indispensable system property.
The presence of noise can help weak somatosensory and sensorimotor
input signals to overcome the activation thresholds of synapses. In the
presence of an already available input signal, random neural signaling can
add up to prompt further signal transmission along coupled response chains
branching across the brain's many levels of organization. Terrence Deacon
mentions that, over the course of evolution, neurons have developed out
of general-purpose cells that gradually came to function as long-distance
signaling fibers (499) while still having to carry out many other tasks in
service of the cell's life support. Because of this, neurons are intrinsically
noisy:
It would probably not be too far off the mark to estimate that of all
the output activity that a neuron generates, a small percentage is
precisely correlated with input, while at least as much is the result of
essentially unpredictable molecular-metabolic noise; and the
uncorrelated fraction might be a great deal more. Neurons are
effectively poised at the edge of chaos, so to speak. They continually
maintain an ionic potential across their surface by incessantly
pumping positively charged ions ... outside their membrane. On this
electrically unstable surface, hundreds of synapses from other
neurons are tweaking the local function of these pumps, causing or
preventing what amounts to ion leaks which destabilize the cell
surface. As a result they are constantly generating output signals
generated by their intrinsic instability and modified by these many
inputs. (Deacon 499-500)
Put a huge number of neurons together to make up a neural network and
it is not so hard to imagine how the noisiness may start to dominate the
network's signaling patterns:
... brains the size of average mammal brains are astronomically huge,
highly interconnected, highly re-entrant networks. In such networks,
noise can tend to get wildly amplified, and even very clean signal
processing can produce unpredictable results; "dynamical chaos," it
is often called. But additionally, many of the most relevant parts of
mammal brains for "higher" cognitive functions include an
overwhelmingly large number of excitatory connections-a perfect
context for amplifying chaotic noisy activity .... Both self-organizing
and evolutionary processes epitomize the way that lower-order,
unorganized dynamics-the dynamical equivalent of noise-can
under special circumstances produce orderliness and high levels of
dynamical correlations. Although unpredictable in their details,
globally these processes don't produce messy results. This is the
starting point for a very different way to link neuronal processes to
mental processes. (Deacon 501-502)
To see how this might work, let us take a look at how nonlinear systems
like the brain can streamline their performance under the influence of
noise. In many systems, the occurrence of inner-system noise can facilitate
significant improvement of signal quality through noise-driven signal-
amplification, a phenomenon that is also known as stochastic resonance
(SR) (Gammaitoni, et al.; McDonnell and Abbott). The occurrence of
stochastic resonance is well established in neural networks both experimentally
and theoretically with the help of externally added noise (Linkenkaer-
Hansen 17) and has indeed been found to occur through endogenous neural
noise as well (Emberson, et al.). In the latter case, system-induced noise
facilitates "intrinsic stochastic resonance," which appears to be essential
to an organism's optimal processing of somatosensory and sensorimotor
signals (Linkenkaer-Hansen 17, 27). It encourages system-wide distributed
signaling, as it effectively makes it easier to overcome signal-constraining
excitation thresholds between synapses, neurons, and neuronal groups.
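The signature of stochastic resonance can be reproduced in a few lines: a periodic input that on its own never reaches a firing threshold is detected best at an intermediate noise level, and detection quality falls off again when the noise grows too large. All parameter values here (threshold, amplitude, the three noise levels) are arbitrary choices for illustration:

```python
import numpy as np

def sr_correlation(noise_sd, threshold=1.0, amp=0.8, n=20000, seed=0):
    """Correlation between a subthreshold periodic signal and the
    output of a simple threshold detector fed signal + noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    signal = amp * np.sin(2 * np.pi * t / 100.0)   # peaks at 0.8 < threshold
    output = (signal + rng.normal(0.0, noise_sd, n) > threshold).astype(float)
    if output.std() == 0:                          # no crossings at all
        return 0.0
    return np.corrcoef(signal, output)[0, 1]

low, mid, high = (sr_correlation(s) for s in (0.05, 0.4, 5.0))
# detection quality peaks at the intermediate noise level:
# mid > low and mid > high
```

With almost no noise the signal never crosses the threshold; with too much noise the crossings become random; in between, the noise lifts the signal over the threshold mainly near its peaks, which is precisely the noise-driven signal amplification described above.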
Although constraints are, in our everyday language, often interpreted
as some kind of stumbling block or a barricade blocking the shortest route
towards some prospective destination, signal-constraining thresholds
should definitely not be considered undesirable. That is, a systematic lack
of both constraining thresholds and the value signals through which they
can be modulated will cause neural traffic to follow, over and over again,
many different neural pathways without preference. Unfortunately, however,
this would automatically lead to the provocation of arbitrary responses
(e.g., uncoordinated motor action) without any means of error adjustment
or gradual learning by practice. Such a lack of dynamical threshold
functionality will result in the brain's failure to enforce specific neuronal
routes that would otherwise become preferred trajectories because of more
frequent use. In this way, the automatically occurring selectional rivalry
between more and more neural circuits will be undermined, which will
lead to below-standard brain performance, dysfunctional learning capability,
and impaired memory formation.
On the other hand, when thresholds between neuronal groups are too
high to be regularly overcome by excitatory signals, neural traffic tends
to become "locked in." This yields a situation of problematic "overstaticness"
in which there may still be quite some activity within stimulus-receiving
neuronal groups, but very little communication between these groups. 106
In fact, a delicate, close-to-critical balance between antagonistic effects 107
seems essential for healthy development and functioning of the brain
(Jung, et al. 1098, 1101).
[Fig. 4-4 image: three panels (a, b, c) of network activity traces plotted against time]

Fig. 4-4: Varying degrees of neuroanatomical complexity in a
young, mature, and deteriorating brain. (source: Fig. 2 from
Tononi, et al. 5036; © PNAS 1994)

That is, in a close-to-critical brain, coupled dynamic subsystems can
establish reentrant synchronization under the regime of each other's
stochastic side-effects, or, in other words, through widely distributed
system-induced noise. In this way, self-organizing neuroselectionism
along the lines of Edelman's theory of neuronal group selection (with
value-steered neuroplasticity and reentrant activity in the thalamocortical
region) can eventually lead to neuronal networks in which brain signaling
is optimized for adaptive goal- and task-directed performance, as well as
pleasure-seeking, risk-avoiding, and crisis-managing behavior, and, not
to be forgotten, long-term anticipatory behavior.
An illustration of these three cases-that is, (1) the far-from-optimally
connected, juvenile network, (2) the close-to-critical network, and (3) the
network in decline-is given in Fig. 4-4. Here, Fig. 4-4A-c represents
neuronal groups in the young, still unconditioned brain; Fig. 4-4A-b stands
for the same set of neuronal groups in the healthy, matured brain; and Fig.
4-4A-a depicts these same neuronal groups in a deteriorating brain.
Although in a normally functioning young brain, neuromodulators like
dopamine will slowly but surely sculpt the cortical organization towards
that of Fig. 4-4A-b, in the absence of threshold-adjusting neuromodulatory
signals no further optimizing development of the neuroanatomy is to be
expected. In a close-to-critical brain, reentrant synchronization of
somatosensory and sensorimotor signals will enhance the development
of the organism's regulatory biochemistry (Edelman and Tononi 41, 89-
90), as well as the adaptivity and optimality of cognitive and behavioral
performance (Edelman and Tononi 48-49, 95-99; Aks 29; Newell, et al.;
Kitzbichler, et al.), including motor control, perceptual categorization,
language acquisition, etc. Sub- and supracritical brain dynamics, on the
other hand, will indeed frustrate these brain-facilitated competencies.

4.3.3 Self-organized criticality and action-potentiation networks


In addition to the brain there are countless other natural systems and
phenomena in which the birth and development of organized structure
require a close-to-critical balance between antagonistic forces. These
systems can be ranked under the label of self-organized criticality (SOC)
whenever the development of their close-to-critical behavior occurs
spontaneously, despite quite significant variations in any external control
parameters (Bak, et al.; Bak; Jensen 1-6). Self-organized criticality typically
appears in slowly-driven non-equilibrium dissipative systems with many
similar member elements and steadily ongoing material, energetic, or
informational influx. Moreover, SOC-systems typically exhibit widespread
reciprocity in that local dynamics affect other local, as well as more distant
and even global, system activity, and vice-versa.
In these slowly-driven systems, local changes can potentially "permeate"
the entire system through intricately linked "action-potentiation
chains"-dispositional patterns of connection that can effectively mobilize
even distant system localities to come up to, and then surmount, their
activation thresholds. This, in fact, is what makes the system "critical."
That is, it can be regarded as "critical" in the sense that it can maintain
its global organizational integrity only when there is a critical balance
between the system's formative driving force (e.g., a steady, unidirectional
inflow of matter or energy) and its dispersive antagonistic dynamics (the
internal interaction forces between the similar system elements, such as
diffusion, dissipation, friction, etc.). Together, these opposing forces will
lead to growth and decay, buildup and relaxation of constraints, fill-up
and spillover of local pockets of potential (a.k.a. "potential wells"), and,
in the case of nervous signaling in the brain, the hyper- and depolarization
of a nerve fiber's cell membrane.
Another characteristic of such SOC-systems is that all constituent
system elements and system events (e.g., neurons and their action potentials;
granular particles and their avalanches; forest trees and forest fires)
influence each other with correlations that decay algebraically (instead
of exponentially) with distance (Jensen 3). In this way, the relatively
adaptable action-potentiation chains that have gradually developed among
the system elements facilitate system-wide interconnectedness. This, then,
enables arbitrarily remote system regions to become engaged in mutualistic
interaction, thus making a difference to each other's activity patterns.
In general, SOC-systems are open non-equilibrium systems that can
develop a relatively stable structural-functional architecture via the
throughput of energy-, matter-, and information-conveying elements. In
their turn, these elements can serve as energy-providing nutrients or
energy-absorbing buffers, as building material, or as an activation signal
or catalyst of some kind. In this way, multiple thermodynamic cycles may
emerge through the interplay between gradient-driven forces with an
external origin (such as gravitation or electromagnetic stimulation) and
"inter-member" interaction forces manifesting within the system (such as
friction or electrical resistance).
The standard educational example of a self-organized, criticality-
seeking (SOC) system, introduced by founding fathers Per Bak, Chao
Tang, and Kurt Wiesenfeld (1987), is a sand or rice pile (situated on a
table of arbitrary size) whose constituent sand or rice grains are being
released one by one at random locations above the base of the pile (thus
making up the driving force). Each sand grain will thus topple downwards
until: (1) it settles in a gap or slight dip somewhere along the slope (thus
amounting to a local threshold, held together by friction between contributing
grains); 108 (2) it triggers an avalanche of unpredictable size on its way
down; (3) the tumbling grain prompts a combination of (1) and (2) as it
elicits a small avalanche whose member grains all get "absorbed" behind
local thresholds scattered along the slope of the pile.
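The toppling rules just described translate almost line for line into the original Bak-Tang-Wiesenfeld lattice model, sketched below on a square grid with the conventional threshold of four grains per site (the grid size and grain count are arbitrary illustration values):

```python
import numpy as np

def btw_sandpile(n_grains=5000, grid=20, seed=1):
    """Drop grains at random sites; any site holding >= 4 grains
    topples, passing one grain to each neighbor (grains toppling
    over the edge leave the table). Returns the avalanche size
    (number of topplings) triggered by each dropped grain."""
    rng = np.random.default_rng(seed)
    z = np.zeros((grid, grid), dtype=int)
    avalanche_sizes = []
    for _ in range(n_grains):
        i, j = rng.integers(0, grid, 2)
        z[i, j] += 1                      # the slow external driving force
        topplings = 0
        while (unstable := np.argwhere(z >= 4)).size:
            for a, b in unstable:
                z[a, b] -= 4              # local threshold overflow...
                topplings += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if 0 <= a + da < grid and 0 <= b + db < grid:
                        z[a + da, b + db] += 1   # ...propagates to neighbors
        avalanche_sizes.append(topplings)
    return avalanche_sizes

sizes = btw_sandpile()
# most drops cause no avalanche at all; a few cause very large ones
```

Once the pile has built up, the same single-grain perturbation can be absorbed behind a local threshold, trigger a small cascade, or set off a system-spanning avalanche, which is exactly the unpredictability discussed in the remainder of this section.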
For educational reasons, it can indeed be helpful to think of self-organized
criticality in terms of the sandpile example, but it must be
stressed that SOC has been found to occur in countless other complex
open systems as well, ranging from neural networks to stars (solar flares
and nucleosynthesis), tectonic plates (earthquakes), forests (incidence of
fires), infectious diseases (their spread among populations), and so on
(see Bak, How Nature, 85-104, 175-182; Jensen 25-68; Aschwanden 1-
35). For now, we will stick to general terminology so that either of the
above possibilities will do as a case in point.
In open systems that are perturbed by some arbitrary driving force,
explicit criticality is considered to occur only when the gradient-driven
influx process impacts much slower on the system than the internal
relaxation processes (see Watkins, et al., 22; Jensen 3). In this way, the
force of impact will have to build up local potential capacity before it can
overcome the system's internal thresholds. Typically, this will take longer
than the maximum time it takes for perturbed system elements to end up
in a resting state, so that the impact of threshold-overflow events can
potentially "permeate" the entire system through intricately branched
potentiation chains, without being prematurely "overwritten" by novel
overflow events (see Watkins, et al., 21-22).
While the driving force is in play, incoming energy gradually accumulates
"behind" inner-system thresholds, thus forming local pockets of potential,
or "potential wells." Sudden energy release (i.e., dissipation) then occurs
when an arbitrary internal threshold is overcome so that the system gets
perturbed (Jensen 4). For instance, due to the ongoing release of sand
grains above a pile, an unstable hump on the slope may, at any arbitrary
moment, get perturbed, thereby causing an avalanche of unpredictable
size. The precise value of built-up potential energy that is required to
trigger such a catastrophic event depends on the precise history and internal
configuration of the entire open system and the exact context-dependent
details of the external driving force.
As a result of the antagonistic forces, the constituent elements of the
system-insofar as these can be identified as such 109-will thus link up
into branched action-potentiation chains of all possible sizes which involve
system elements occupying coupled "islands" of relative instability
(Buchanan 59). In this way, each impact of the external driving force may
provoke responses that propagate across the system in the form of
catastrophic system events, many of which will occur on a small-scale,
several on a medium-sized scale, and only very few on the largest of
scales; the latter being capable of inducing system-wide changes in just
one go (Christensen and Moloney 252).
Hence, these system events are without any characteristic spatiotemporal
scale, and their statistical distribution-which is used to describe event
sizes and their frequency of occurrence-will follow power laws (Bak, et
al.; Jensen 5-11). Accordingly, a slight increase in energy buildup behind
thresholds may induce impact events that can lead to entirely unpredictable
changes in the system's configuration. There is no telling when, or in what
size-small, medium, or large-these reconfiguring changes will occur:
"To predict the event, one would have to measure everything everywhere
with absolute accuracy, which is impossible. Then one would have to
perform an accurate computation based on this information, which is
equally impossible" (Bak 61). What can be predicted, though, is that in
SOC-systems, over and over again, reconfiguration events will enable any
nearby subcritical structures to become members of the earlier-mentioned
"islands of instability" and approach local criticality. Accordingly, they
will then link up in newly accessible branches of the action-potentiation
network, thus in turn readying other linked structures to overcome local
thresholds and cascade through the system. In the long run, action-
potentiation chains will progressively permeate the entire system across
structural hierarchies, thus forming what may be called a holarchy (see
Koestler).
As a result, the action-potentiation chains will become highly correlated,
while rearranging the system-wide network of local inner-system thresholds
along the way. In other words, through the unpredictable "absorption-
saturation-discharge cycles" of the system's thresholds, the system will
develop its own internal dynamic constraints biasing the preferred path
of its dynamics, thereby controlling how impact events propagate through
the system (Linkenkaer-Hansen 8). Hence, the system may now be said
to have grown a structurally embedded memory functionality in that its
future evolution will increasingly depend on its entire foregoing history
and the highly correlated, holarchically distributed system dynamics that
have thus become established.
The system has now effectively developed a global mutual informat-
iveness, 110 in that every locality within the system has grown to become
intimately connected with the system as a whole so that each change within
the system makes a difference to all the rest, and vice versa. Some even
go so far as to state that in SOC-systems each locality has become capable
138 PROCESS STUDIES SUPPLEMENT 24 (2017)

to "sense" the global system state based on local information (Hesse and
Gross 10). In a similar vein, we may say that everywhere within the system
there is a locally available, but globally distributed, "knowing-by-doing"
regarding how to remain close to overall, system-wide criticality. As such,
the system can even be thought of as having developed a primordial form
of adaptive, self-preserving behavior under the pressure of precarious
conditions, external impact events and/or the influx of matter, energy, and
information.

5. Process physics: A biocentric way of doing physics without a box


As mentioned in Section 3.3, our contemporary mainstream physics
needs a fellow physics on its side. Although our conventional way of
"doing physics in a box" has been hugely successful in mathematically
spelling out the behavior of many natural systems within their respective
domains of application, it comes up short in other departments. For instance,
as Lee Smolin argued (Time xxiii), it will inevitably fail whenever it tries
to cover the whole of nature. Moreover, it is unable to deal properly with
those aspects of nature that cannot be quantified, including all the aspects
that we hold so dear because they make up the essence of our being:
feeling, purpose, meaning, value, and the like.
Other aspects-such as creativity, novelty, complexity, and that which
cannot be mathematically predicted-also cannot be properly dealt with
by doing physics in a box. Especially all that is related to the qualitative
aspects of nature-our conscious inner-lives, the "what-it-is-likeness" of
sensory experience, and, according to Stuart Kauffman ("Forward:
Evolution," 10-11), even the entire biosphere-cannot be drawn within
the grasp of exophysical-decompositional physics. In order to have some
compensation for these downsides, mainstream physics would do well to
have a nonexophysical-nondecompositional companion; one that can make
up for the weaknesses of doing physics in a box without undermining its
strengths.

5.1 Requirements for doing physics without a box


Thus far, numerous clues have come to the fore that suggested how
VAN DuK/Process Physics, Time, and Consciousness 139

this nonexophysical-nondecompositional physics should hang together


and which requirements should be met. On the wish list we can find the
following points (without any particular order of importance):
I. A nonexophysical-nondecompositional physics should be bio-centric.
In other words, as Terrence Deacon suggests, our account of nature
should not leave it absurd that we exist;
2. Considering (a) the widespread occurrence of self-organized criticality
and nonequilibrium thermodynamical systems 111 throughout nature,
and (b) their defining characteristics of cyclicity and feedback loops
(which play an especially crucial role in the emergence of life and
consciousness; see Sections 4.2.3 to 4.3.1) any new way of doing
physics should have recursive dynamics as an inherent feature;
3. In line with John Bell's suggestion (Speakable, 29-30), there should
be no true object-subject boundary. This is basically equivalent to
Whitehead's recommendation to avoid the bifurcation of nature;
4. The universe is not a giant computer. In other words, nature does not
work in an info-computational way, but rather in a process-informative
or mutually informative way (see Sections 4.3 .3, 4.3 and 4.2.1-4.2.2).
Hence, any new way of doing physics should take this into account;
5. Additionally, a nonexophysical-nondecompositional physics should
find a way around psycho-physical parallelism and externalistic
representationalism (which both imply info-computationalism);" 2

In addition to this list, there are also some of John Archibald Wheeler's
requirements (Wheeler, "Information," 313-315) that are well worth being
mentioned:
6. No "tower of turtles," i.e., there should be no infinite regress of would-
be elementary constituents;
7. No pre-existing space and no pre-existing time, but rather a pre-geometry.
8. No laws, but rather "law without law."" 3

Wheeler-often referred to as the physicist who coined the term "black


hole"-put forth the following arguments in support of these requirements:

# 6: No "tower of turtles"-"Existence is not a globe ... supported by a


turtle, supported by yet another turtle, and so on. In other words,
[there should be] no infinite regress. No structure, no plan of
organization, no framework of ideas underlaid by yet another level,
140 PROCESS STUDIES SUPPLEMENT 24 (2017)

by yet another, ad infinitum, down to a bottomless night. To endlessness


no alternative is evident but loop, such a loop as this: Physics gives
rise to observer-participancy; observer-participancy gives rise to
information; and information gives rise to physics." (Wheeler,
"Information")
# 7: No pre-existing space and no pre-existing time-"Heaven did not
hand down the word 'time.' Man invented it, perhaps positing
hopefully as he did that 'Time is Nature's way to keep everything
from happening all at once.' If there are problems with the concept
of time, they are of our own creation! As Leibniz tells us, ' ... time
and space are not things, but orders of things ... ;' or as Einstein put
it, 'Time and space are modes by which we think, and not conditions
in which we live.' .... We will not feed time into any deep-reaching
account of existence. We must derive time ... out of it. Likewise with
space." (Wheeler, "Information")
# 8: No laws-"So far as we can see today, the laws of physics cannot
have existed from everlasting to everlasting. They must have come
into being at the big bang. There were no gears and pinions, no Swiss
watch-makers to put things together, not even a pre-existing plan .... Only
a principle of organization which is no organization at all would
seem to offer itself. In all of mathematics, nothing of this kind more
obviously offers itself than the principle that 'the boundary of boundary
is zero.' [ 114] Moreover, all three great field theories of physics use
this principle twice over. ... This circumstance would seem to give us
some reassurance that we are talking sense when we think of... physics
being as foundation-free as a logic loop, the closed circuit of ideas
in a self-referential deductive axiomatic system."[ 115 ] (Wheeler,
"Information")

Although implicit in some of the other criteria, there is one final requirement
that should not be overlooked:

9. No lowest-level foundations, but rather "foundations without foundation ."

This last point follows the same logic as Wheeler's "law without law"
requirement. Just as there were no gears, no pinions, no engineers, and
no building plans in the earliest beginnings of the universe, there were no
true foundations in the hierarchical sense of the word. That is, a priori
VAN DuK/Process Physics, Time, and Consciousness 141

entities (such as so-called "elementary particles," strings, knots, and so


on) can never be fundamental to our modeling of nature. This is because
these a priori entities are always preceded by pre-theoretical interpretation
(see Section 3.1.2 to 3.2).
Also, we can never be sure if these a priori entities are actually referring
to nature's lowest level of organization. After all, bearing in mind the
considerable amount of so-called elementary particles in the (as yet still
incomplete) standard model of particle physics, none of them should be
taken seriously as the one and only fundamental one. 116 Next to that, this
requirement of "foundations without foundation" is also meant to save
physics from regressing into an infinite downward spiral of supporting
"turtles" (see requirement #6).

5.2 Process physics as a possible candidate for doing physics without a box
Whereas doing physics in a box typically requires us to get involved
in pre-theoretical interpretation, nature-dissecting acts of decomposition,
and the like, process physics basically enables us to avoid much of this
by doing physics without a box. It does so by setting up a model that
manages to give rise to its own foundation-free foundations, so to speak.
Accordingly, the model starts out with an as good as patternless homogeneity
that can perhaps best be likened to what in quantum field theory is called
"the vacuum state" or "quantum vacuum." Despite its name, this vacuum
state is usually not so much thought of as an entirely empty void, but
rather as a fiercely fluctuating ocean of virtual energy potential which
contains all of existence in latent form (see Dewitt 178). From this vacuum-
like stage, then, the initial uniformity in the process physics model should
get its internal pattern formation "up and going" through recursive loops
that "bootstrap" themselves into actuality from their otherwise undifferentiated
background (see Cahill and Klinger, "Bootstrap," 109).
To get this internal pattern formation going, process physics depends
on only a few general nondecompositional preconditions, namely: universal
interconnectedness, holarchic instead of hierarchic organization, self-
reference, and initial lawlessness. In our conventional way of doing physics
in a box, we typically rely on well-trusted assumptions that we have
become so used to that we tend to forget their actual status as metaphors,
approximations, and idealizations.
142 PROCESS STUDIES SUPPLEMENT 24 (2017)

That is, prior to writing down the physical equations for specifying
the behavior of electrons, photons, electromagnetic fields, and all other
phenomena in nature, physicists usually do not think too much about all
their pre-theoretical interpretation, acts of decomposition, and the like.
Instead, they basically start out by assuming that all these things actually
already exist as such (see Chown 25). To be fair, however, physicists most
often do not literally assume that things like electrons really exist before
they formulate physical equations about them. Instead, they typically like
to think of "elementary particles" like the electron as transient fluctuations,
manifesting from underlying quantum fields into the classical world. By
assuming these fields, however, the same problem arises all over again,
thus leading to something quite akin to Wheeler's "tower of turtles"
problem. The physicist's solution seems to be to just choose a particular
level of description, postulate it as fundamental, and then proceed from
there onwards. So, in this way, it is still a valid diagnosis to state that
physics as we know it typically presumes the existence of what it is trying
to describe.
This, however, confronts us with a foundational problem: if not by
mere postulation, how can we actually be sure that a physical equation
does indeed pertain uniquely to its intended referent? 117 On top of that,
taking into consideration that, just after the "Big Bang" many of these
referents had not even come into existence yet, we might ask ourselves
what the most elementary initial conditions of these physical equations
should be, and why this should be so. Lee Smolin was particularly worried
by these issues and he addressed them as follows:
We, in our time, are led by our faith in the Newtonian paradigm to
two simple questions that no theory based on that paradigm will ever
be able to answer: [First:] Why these laws? Why is the universe
governed by a particular set of laws? What selected the actual laws
from other laws that might have governed the world? [Second:] The
universe starts off at the Big Bang with a particular set of initial
conditions. Why these initial conditions? Once we fix the laws, there
are still an infinite number of initial conditions the universe might
have begun with. What mechanism selected the actual initial
conditions out of the infinite set of possibilities? The Newtonian
paradigm cannot even begin to answer these two enormous questions,
because the laws and initial conditions are inputs to it. If physics
ultimately is formulated within the Newtonian paradigm, these big
questions will remain mysteries forever. (Smolin, Time, 97-98)
VAN DIJK/Process Physics, Time, and Consciousness 143

Process physics, on the other hand, since it is not rooted in the Newtonian
paradigm of doing physics in a box, does not have these problems that
seem to be so inevitably associated with the use of physical equations.
That is, process physics simply does not avail itself of any formal system
of lawlike mathematical equations. In stark contrast with the math-based
models of mainstream physics, process physics introduces a non-formal,
self-organizing modeling of nature-based on a stochastic iteration routine
that reflects the Peircean principle of precedence (Peirce 277), rather than
being based on lawlike physical equations.
Thanks to its intrinsic stochastic recursiveness, the process physics
model eventually manages to evolve many features that we also find in
our own natural world: emergent three-dimensionality, emergent relativistic
and gravitational effects, non-locality, emergent quasi-deterministic
classical behavior, creative novelty, habit formation, an internal sense of
(proto)subjectivity made possible by its mutual informativeness, an intrinsic
present moment effect with open-ended evolution, 118 and more.
Without having discussed how process physics actually works, however,
it is of course still too early to label it the final cure-all for the main
problems in today's physical sciences. So, therefore, let us get down to
the finer details and explore if process physics can really be taken seriously
enough as a way of doing physics without a box to have it join forces with
our conventional way of doing physics in a box.

5.3 Process physics: going into the details


Process physics is a neurobiologically inspired way of doing physics
without a box that is derived from the global color model in quantum field
theory (see Section 5.3.2). Because it aims to model nature practically
from scratch, process physics does not rely on pre-theoretical interpretation
in the way that mainstream physics does. In mainstream physics, i.e.,
physics in a box, we first need to postulate how "the box" should be put
together and which basic entities are to inhabit it (see Sections 3.1.1 to 3.2).
In this way, however, we are already presupposing what we are trying
to make sense of. That is, through pre-theoretical interpretation we are
actually filling in beforehand what it is that our physical equations are
trying to come to grips with (see Chown 25). In other words, we are
prematurely identifying what the referents of our physical equations should
be, thereby synonymizing that which is found in observation with what
is thought to constitute the system under investigation. This, then, amounts
to what Whitehead called the undesirable "fallacy of misplaced concreteness,"
which is arguably the most prevalent fallacy in contemporary mainstream
physics.
However, the map is not the territory and we should not pretend that
it is. There is no inventory of landscape elements that can exhaustively
sum up all features of the landscape being mapped. Likewise, there is no
shortlist of ultimate physical constituents of nature, whether they be
elementary particles, strings, knots, or any other such entities, that can
exhaustively cover the whole of our natural world. None of these alleged
"primitives" can ever be truly fundamental, since their explanation and
interpretation necessarily has to lie outside the system being modelled
(Cahill, "Process Physics: From Information," 19) just as the inventory
of landscape elements is external to both the map and its landscape. 119
After all, even when engaged in our deepest-probing science-particle
physics-there is always:
... a subjective element in the description of atomic events, since the
measuring device has been constructed by the observer, and we have
to remember that what we observe is not nature in itself but nature
exposed to our method of questioning. (Heisenberg 58)
Following roughly the same line of reasoning, fellow physicist Bernard
D'Espagnat put it like this:
The wider our knowledge expands, the greater grows the part of it
which bears on ourselves-on our structures as human beings-at
least as much as on some hypothetical "external world" or "eternal
truth." (D'Espagnat 17)
In the Cartesian-Newtonian paradigm, however, it is all too easily forgotten
that our target of interest is not nature in the raw, but a combination of
(1) nature as framed by our nature-dissecting intellect, and (2) nature in
interaction with our measurement equipment. So, because the aspect of
subjectivity will thus always be implicit in our linguistic, analytical, and
quantitative labeling of what is being submitted to observation, it is,
logically speaking, impossible to find physical equations that can pertain
to any potential "deepest" level of nature in the raw. 120 In fact, most
working physicists have chosen to take an instrumentalist position of
abstaining from any interpretation of physical equations (see De Muynck
74; Bell, Speakable, 142). As such, they have given up on formulating
any hypothesis of how and why physical equations should work, but became
focused only on the fact that they work (see Van Dijk, "An Introduction,"
78; "The Process"). In so doing, however, they are often unconsciously
falling back onto the straightforward Cartesian-Newtonian idea that there
is indeed really an entirely physical "real world out there" for their
mathematical equations to pertain to. By taking this easy way out, though,
they are in fact already presuming what they are trying to explain, thus
basically making it impossible to reach any deeper level of understanding.

5.3.1 Foundationless foundations, noisiness, mutual informativeness, and lawlessness
So, in order to avoid these problematic inconveniences of doing physics
in a box, process physics resorts to the earlier-mentioned idea of "foundations
without foundation" (see Section 5.1). In order to achieve such "foundationless
foundations," process physics starts out, not with a number of would-be
fundamental constituents, but with a uniform, featureless network of
initially homogeneous, dispositional relations. That is, whereas our familiar
way of doing physics in a box necessarily requires some preparatory "stage
building activities" and "casting direction" (see Section 3.1.2), in order
to be able to furnish "the box" with what may be best termed "false
primitives," process physics does not start out with such a speculative
premise that such and such ultimate entities or events should already exist
beforehand. After all, that would only amount to the presupposition of
what one is trying to make sense of scientifically.
As a way around this predicament, process physics proposes that we
can simply choose an arbitrarily early, "pseudo-primitive level" to start
modeling nature. This is in fact thought to be possible because all early
structure formation in nature is held to exhibit self-similarity thanks to
nature-wide self-organized criticality (see Bak, How Nature; Cahill and
Klinger, "Self-Referential Noise as a Fundamental"). This self-similarity
amounts to lower-order "system components" (i.e., actualities, events,
nested process-structures, or other such "entities") having the same kind
of organization as the emerging "components." In other words, at each
different level of organization there is a practically identical kind of
structure formation-something that can be found on a smaller scale in
many natural systems, like, for instance, a Romanesco cauliflower that
basically looks the same at every level of magnification.
Hence, in process physics self-similarity can be exploited by simply
choosing any such arbitrarily early "level of magnification" as the starting
level of modeling. In stark contrast with the aforementioned use of "false
primitives" in mainstream physics (which basically amounts to having
drawn a map before even having seen the actual territory), the "start-up
components" of process physics do not need to be pre-specified. In fact,
whenever the self-similar structure formation at the model's start-up level
is so fine-grained that it can be thought of as initially undifferentiated and
devoid of any explicit internal structure, the start-up components can
basically be interpreted as being entirely "pre-actual." That is, although
some deeper internal structure may still be imagined to exist, it should be
considered so unsubstantial as to be negligible, i.e., virtually non-existent.
At this starting level of the process physics model, all structure is
thought to exist only in "relations without relata" (see Eastman 226). To
be more precise, what is being modeled-namely nature's initially "non-
substantial," but gradually habit-establishing interconnectedness-is
defined only by the strength of mutual connectivity. Starting with nodes
of connectivity that have zero connection strength, this basic level will
eventually be "overflown" by emergent, self-similar pattern formation
(see Watkins, et al., 21-22; also see Section 4.3.3). It is as if this initial
background of dispositional connectivity simply evaporates from the
network after having served as the "supporting scaffolding" for setting
up the early pattern formation in the model. As Cahill puts it, modelling
nature in this way is "like pulling yourself up by your bootstraps, throwing
away the bootstraps, and still managing to stay suspended in mid-air"
(quoted in Chown 28).
It is in this way that a rich, self-organized network of relational
connectivity can lift itself into actuality from "foundationless foundations,"
i.e., from a virtually non-actual expanse of dispositional and initially
undifferentiated background processuality. As will be discussed in more
detail below (see Section 5.3.3), this relational network, whose inner-
system connection strengths are being indexed by a connection matrix, is
driven by a stochastic iteration routine that gives rise to internal pattern
formation. As such, process physics basically uses a noisy connectivity
matrix to index from scratch how connection patterns relate to each other
in an initially unlabeled, featureless universe. 121
This noise, then, basically "blankets" the entire network with each
cycle of the stochastic iteration routine. It actually enables the initially
uniform, low-level processuality in the process physics model to self-
organize into mutually informative fore- and background patterns. So,
remarkably, the same kind of mutual informativeness that turned out to
be such a characteristic aspect in SOC-systems (see Section 4.3.3) and
that played such a crucial role in the emergence of higher-order consciousness
(see Sections 4.3.1 and 4.3.2), ends up being crucial to process physics
as well!
Although in classical information theory and in electrical engineering
noise is typically thought of as an irregular, residual distortion signal, in
process physics it is an expression of the inherent lawlessness of nature.
That is, while our long-standing tradition of doing physics in a box dictates
that we try to capture natural systems in terms of algorithmically expressed
laws of nature, there is a lot in nature that seems to be unfit to be specified
like that. For instance, although complex systems and biological systems
may indeed seem regular and predictable when looked at during short
enough time spans or in between phase transitions, their behavior eventually
cannot be compressed into concise algorithmic expressions capable of
faithfully reproducing the empirical data extracted from these systems
(see Kauffman, "Foreword: Evolution," 9-22). In the words of physicist
Joe Rosen: 122
In our effort to understand, we first search for order among the
reproducible phenomena of nature, and then attempt to formulate
laws that fit the collected data and predict new results. Such laws of
nature are expressions of order, of simplicity. They condense all
existing data, as well as any amount of potential data, into compact
expressions. Thus, they are abstractions from the sets of data from
which they are derived, and are unifying, descriptive devices for their
relevant classes of natural phenomena .... [Then again,] we do not
claim that nature is predictable in all its aspects. But any
unpredictable aspects it might possess lie outside the domain of
science by the definition of science that informs our present
investigation. (J. Rosen 40, also 36)
So, in Joe Rosen's view, science, by definition, is not meant to deal with
irreproducible and unpredictable phenomena. 123 The empirical data extracted
from any such phenomena cannot be compressed into a smaller data-
reproducing algorithm. Therefore, no empirically adequate physical
equation can be put together that may deserve the label of "law of nature."
Such phenomena, whether they are too random to find any regularity
within their data, or too unique to be reproduced, can therefore be called
"lawless." Under this banner we can not only gather complex systems like
the mind-brain (which only exhibits reproducibility in a limited sense),
but also nature as a whole. After all, the universe at large, because it cannot
be compared to any other specimen, and because it exceeds the reach of
any algorithmic compression, is utterly irreproducible:
When we push matters to their extreme and consider the whole
universe, we have clearly and irretrievably lost the last vestige of
reproducibility; the universe as a whole is a unique phenomenon and
as such is intrinsically irreproducible. (J. Rosen 72)
Process ecologist Robert Ulanowicz drove home a similar point in
his very engaging book The Third Window: Natural Life beyond Newton
and Darwin:
As most readers are probably aware, Kurt Gödel (1931), working
with number theory, demonstrated how any formal, self-consistent,
recursive axiomatic system cannot encompass some true
propositions. In other words, some truths will always remain outside
the ken of the formal system. The suggestion by analogy is that the
known system of physical laws is incapable of encompassing all real
events. Some events perforce remain outside the realm of law, and we
label them chance. Of course, analogy is not proof, but I regard
Gödel's treatise on logic to be so congruent with how we reason
about nature that I find it hard to envision how our construct of
physical laws can possibly escape the same judgment that Gödel
pronounced upon number theory. (Ulanowicz, The Third, 121-122)
Drawing on the work of physicist Walter Elsasser (1969; 1981), Ulanowicz
calls these complex chance events that fall outside the ken of physical
laws "aleatoric" (The Third 119-122). On this aleatoric account, such
complex chance events are irreproducible and unpredictable since they
involve a unique coincidence of nature's locally-globally actualizing
processes. By Joe Rosen's definition this makes them "lawless." Moreover,
due to the nature-wide abundance of these aleatoric events, particularly
in nonequilibrium systems, lawlessness should actually be considered the
rule, rather than the exception.
In line with Ulanowicz's above quotation on number theory and
physical law, algorithmic information theory (AIT; see Solomonoff;
Chaitin, Algorithmic; Kolmogorov), a mathematical discipline interested
in the lossless compression of data into data-reproducing algorithms, can
also be applied to empirical data:
... in AIT the amount of data compression that can be achieved in
reproducing empirical data can be taken as a measure of the level of
scientific understanding about the associated target system
(Solomonoff; Chaitin, Thinking, 35). The main argument can be
stated as follows: the more compression, the better the understanding
about the system's recorded behavior (Chaitin, Thinking, 227 and
286). In this way, the extent of knowledge about a natural system is
thought to peak as the algorithm for reproducing its empirical data
approaches its minimum size. (Van Dijk, "An Introduction," 77)
Moreover, any numerical data that cannot be compressed into an algorithm,
or any data whose algorithmic expression has the same number of digits
(or even more) as the data sequence itself, cannot be algorithmically
compressed and is therefore defined as being "algorithmically random."
In this context, a random truth is a data string that cannot be encoded into
any algorithm since its sequence of digits is entirely unpredictable.
Analogously, noise-the equivalent of an algorithmically random truth-can
be seen to stand for everything that cannot be encompassed by a physical
equation. In other words, each activity pattern that has no mathematically
compressible regularity to it can be seen as a random fact with entirely
unpredictable micro-fluctuations.
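This AIT notion that compressibility tracks regularity can be illustrated
with an off-the-shelf compressor. The sketch below is purely illustrative
(it is not part of the process physics model, and a general-purpose
compressor only approximates algorithmic compressibility): a strictly
periodic data string condenses into a tiny description, while an
effectively random string of the same length barely compresses at all.

```python
import random
import zlib

random.seed(42)  # deterministic run

# A highly regular ("lawful") data string: a simple repeating pattern.
regular = b"ab" * 2048                                # 4096 bytes

# An effectively algorithmically random string of the same length.
noisy = bytes(random.getrandbits(8) for _ in range(4096))

regular_ratio = len(zlib.compress(regular)) / len(regular)
noisy_ratio = len(zlib.compress(noisy)) / len(noisy)

# The periodic string shrinks to a small fraction of its size, while
# the random string stays essentially incompressible ("lawless").
print(f"regular: {regular_ratio:.3f}, noisy: {noisy_ratio:.3f}")
assert regular_ratio < 0.05
assert noisy_ratio > 0.95
```

In AIT terms, the first string has a short data-reproducing algorithm
("repeat 'ab' 2048 times"), whereas the second admits no description
appreciably shorter than the data itself.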
Process physics purports that the process of nature is stochastically
routine-driven, or, in other words, habit-based in the above-mentioned
aleatoric way, rather than governed by fixed and eternal laws of nature.
Counter to the currently prevailing view of a law-abiding natural world,
process physics suggests that the universe in its earliest stage came into
actuality from an initially undifferentiated and structureless kind of "pre-
space." Reflecting the fact that all of nature's activity patterns are ultimately
seamlessly interconnected and must thus be seen to make up a complex,
random, and thus fundamentally irreproducible and unpredictable whole,
the process physics model is driven by a noisy (hence lawless) iterative
update routine. In this way, it forms a self-organizing, habit-establishing,
and internally meaningful whole, which makes a difference to all else
within it, and vice versa. This is in stark contrast with mainstream physics,
which conceives of nature as if it were ultimately no more than a collection
of mechanistically interacting physical contents governed by externally
imposed laws of nature.


But as Smolin (Time 97-98) already argued in Section 5.2, finding the
actual "why" behind these "laws" will be a hopeless cause if we stubbornly
continue to hang on to the Newtonian paradigm. In fact, physical equations
that-implicitly, or even explicitly-presume that nature consists of some
kind of mechanistically interacting physical contents, can never really
explain how nature works. Rather they can only offer pseudo-explanations
which are themselves equally in need of an explanation. 124

5.3.2 Process physics and its roots in quantum field theory


Despite all this talk about process physics being a nonexophysical-
nondecompositional way of doing physics without a box, based on natural
routine rather than natural law, we still have not discussed its relation
with mainstream physics. Despite all the criticism of mainstream physics
that we have come across so far, we are still badly in need of its exophysical-
decompositional methodology, if only to compare any newly proposed
physics with our established physical theories and interpretations; for
instance, by subjecting it to quantitative analysis. Also, to make sure that
process physics-or any other new way of doing physics-is compatible
with everything that science has hitherto been able to teach us, it makes
good sense to see if such a new physics can be derived from our familiar
and well-respected way of doing physics in a box.
So, for this purpose, let us take a look at how the process physics
model can be extracted from quantum field theory. Quantum field theory
is the deepest-seated successful theory of present-day mainstream physics.
Entirely in line with the post-geometric Cartesian-Newtonian paradigm,
it gives an abstract mathematical account of the behavior of "elementary
particles" in the background of a fixed spacetime construct. The most
explicit and revealing formalism of quantum field theory is the functional
integral formalism. This formalism is used in the global color model of
quark physics (Fritzsch, et al. 1973) that grew from the seminal work of
Dirac and Feynman, and approximates low-energy hadronic behavior from
the underlying quark-gluon quantum field theory (see Cahill and Gunner).
However, it turned out that the functional integral formalism is not
necessarily the ultimate climax in quantum field theory. That is, by
introducing a stochastic formalism in which randomness was artificially
added, Parisi and Wu demonstrated an even lower level of description. In
their formalism, the added stochastic iterative procedure facilitates the
random sampling of all possible system configurations. Originally, this
formalism was meant only to provide a better way of computing properties
of particles within quantum field theory. Accordingly, its stochasticity
was interpreted to represent no actually existing property of nature, but
rather to be a mere computational aid which permitted the computations
to explore various configurations (Cahill, "Process Physics: Self-Referential,"
9).
However, since Parisi and Wu's method eventually leads to the same
results as the functional integral formalism, it is not at all too outrageous
to suppose that their stochastic quantization procedure involves more than
just a convenient computational aid. After all, functional integrals can be
thought of as arising as ensemble averages of Wiener processes (see Cahill
and Klinger, "Self-Referential Noise and the Synthesis," part 2; "Bootstrap,"
part 4). These are normally associated with Brownian-type motions in
which random processes are used in modelling many-body dynamical
systems. But instead of considering the randomness as an uninteresting
side-effect, it can be argued that random processes actually underlie the
emergent hadronic structures of nature, thus reflecting the random facts
of Section 5.3.1.
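The idea that smooth averages can arise from an ensemble of random
Wiener-type processes admits a minimal numerical illustration. The toy
sketch below is not the Parisi-Wu scheme itself (the path count, step
count, and time step are arbitrary choices); it only shows the textbook
signature of Brownian motion: across an ensemble of noisy paths, the mean
stays near zero while the variance grows linearly with time.

```python
import random

random.seed(1)  # deterministic run

n_paths, n_steps, dt = 2000, 100, 1.0

# Each path is a discretized Wiener process: a sum of independent
# Gaussian increments with variance dt per step.
finals = []
for _ in range(n_paths):
    x = 0.0
    for _ in range(n_steps):
        x += random.gauss(0.0, dt ** 0.5)
    finals.append(x)

mean = sum(finals) / n_paths
var = sum((x - mean) ** 2 for x in finals) / n_paths

# Ensemble averaging recovers smooth statistics from the raw noise:
# <W(t)> is near 0 and Var[W(t)] is near t (= 100 here).
print(f"mean={mean:.2f}, var={var:.1f}")
assert abs(mean) < 1.5
assert 80.0 < var < 120.0
```

The point is only that orderly, reproducible quantities emerge as averages
over irreducibly random processes, which is the sense in which randomness
can be taken as underlying, rather than obscuring, emergent structure.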
All these considerations inspired Reg Cahill and his main accomplice
Chris Klinger to put together process physics, which is based on a stripped-
down version of the stochastic quantization procedure. That is, by removing
all elements from Parisi and Wu's formalism that were associated with
externally postulated, non-emergent characteristics (particularly those
indicative of a presupposed spacetime metric), Cahill and Klinger could
isolate the terms that, by their expectations, would be responsible for
emergent pattern formation. In fact, as explained in Section 5.3.3, the
remaining terms involve only iterative stochastic dynamics (see Cahill,
"Process Physics: From Information," 22 for technical details).
Stripping away redundant elements enables process physics to model
the universe as an all-encompassing Prigoginean dissipative structure
capable of renewing itself through the activity of self-referential noise.
As an added bonus, this self-referential noise will spontaneously establish
a regime of self-organized criticality (Cahill and Klinger, "Self-Referential
Noise and the Synthesis"). This, in turn, automatically leads to
"universality"-i.e., the occurrence of self-similar events at all scales
within the system as a whole. This basically means that any small
perturbation can trigger events of all possible sizes; from many small ones
(that do not seem to lead to any explicit activity other than low-level noisy
fluctuations) to very rare giant ones (that can shake up the entire action-
potentiation network, thus drastically renewing it in one go).
In the famous sand pile systems, for instance, the characteristic events
are avalanches of all sizes; from many small cascades to rare, large sand
slides or even none at all (e.g., when a single grain falls directly in a local
"pocket of potential"-see Section 4.3.3). Likewise, in the process physics
model, each noisy perturbation of the network as a whole can trigger (a)
the emergence of many small, low-level phenomena (i.e., "events,"
"actualities," or "nodes") with weak or negligible connectivity, (b) less
frequent medium-size phenomena with more robust connectivity, and (c)
even rarer phenomena with proliferating higher-order connectivity.
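The sand pile analogy can be made concrete in a few lines of code. The
sketch below is a generic Bak-Tang-Wiesenfeld-style toy model, not
Cahill's network; the grid size, toppling threshold, and grain count are
arbitrary choices. Grains are dropped at random sites, and the size of
each resulting avalanche is recorded; once the pile has self-organized
toward criticality, events of widely different sizes appear.

```python
import random

random.seed(0)  # deterministic run

N = 20                                   # grid size
grid = [[0] * N for _ in range(N)]       # sand heights, initially empty

def drop_grain():
    """Drop one grain at a random site and topple until stable.
    Returns the avalanche size (number of topplings)."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    topplings = 0
    unstable = [(i, j)]
    while unstable:
        x, y = unstable.pop()
        if grid[x][y] < 4:
            continue
        grid[x][y] -= 4                  # topple: shed one grain per neighbor
        topplings += 1
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < N and 0 <= ny < N:   # edge grains simply fall off
                grid[nx][ny] += 1
                if grid[nx][ny] >= 4:
                    unstable.append((nx, ny))
    return topplings

sizes = [drop_grain() for _ in range(5000)]

# Most drops trigger nothing at all, while rare drops set off large
# cascades: the "events of all possible sizes" characteristic of SOC.
print("quiet drops:", sizes.count(0), "largest avalanche:", max(sizes))
assert sizes.count(0) > 500
assert max(sizes) > 50
```

The same single perturbation rule produces both the many negligible events
and the rare system-spanning ones, which is the behavior the process
physics network is claimed to share.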
Universality (or, in other words, the occurrence of system-characteristic
phenomena of all scales) can cause the system's low-grade starting level
to eventually become "hidden from plain view" as the higher-order
phenomena cascade all across the emergent action-potentiation network,
thus "overflowing" any lower levels of activity (see Watkins, et al.,
21-22; Cahill and Klinger, "Self-Referential Noise and the Synthesis," parts
4, 7). The self-organized criticality in the process physics model is what
actually facilitates the possibility of "foundations without foundation"-one
of the earlier-listed requirements for doing physics without a box (see
Section 5.1).

5.3.3 Process physics and its stochastic, iterative update routine


To take into account that we are trying to model the in itself unlabeled
natural world in a purely relational way, process physics sets up an initially
uniform and structureless network of what may be called dispositional
relations. In this dispositional network of relationships, which is being
indexed with the help of a "connectivity matrix" (see Table 5-1), the start-
up nodes i and j are held to have (1) no internal connectivity worthy of
mention, and (2) no explicit actuality relative to the network as a whole.
In order to meet these preconditions, (1) anti-symmetry Bij = -Bji has to
be applicable within the network matrix so that self-connections Bii will
always be zero, and (2) all nodes within the system have to start off with
close-to-zero connection strength to model the absence of initial order
(Cahill and Klinger, "Bootstrap Universe," 109).
As a result, these start-up nodes i and j can be seen as mere indexical
labeling for something that is not really present (yet). Moreover, this
indexing of the nodes within the connectivity matrix does not relate these
nodes to anything external like a reference frame, coordinate axes, timeline,
or whatever. In fact, the iterative indexing activity, as it is engaged in
"weaving" an intricate network of connection strengths, relates everything
within the network to everything else, thus giving rise to a sense of where
everything is with respect to each other without the need of any external
number-tagging (after all, the numbers on the i and j axes are basically
just address codes of connection strengths and do not denote the value of
the connection strengths themselves).

Table 5-1: The indexical relation matrix - When nodes i and j are connected,
they will be indexed as having a non-zero connection strength Bij. Anti-
symmetry guarantees that the strength of any self-connection (Bii) will
always be zero. Positive or negative signs of the actual Bij values depend
on the direction of the arrows between nodes i and j (see Fig. 5-1).

node |      1       |      2       |      3       |      4       |      5       |      6
-----+--------------+--------------+--------------+--------------+--------------+-------------
  1  |      0       | B12 (= -B21) | B13 (= -B31) | B14 (= -B41) | B15 (= -B51) | B16 (= -B61)
  2  | B21 (= -B12) |      0       | B23 (= -B32) | B24 (= -B42) | B25 (= -B52) | B26 (= -B62)
  3  | B31 (= -B13) | B32 (= -B23) |      0       | B34 (= -B43) | B35 (= -B53) | B36 (= -B63)
  4  | B41 (= -B14) | B42 (= -B24) | B43 (= -B34) |      0       | B45 (= -B54) | B46 (= -B64)
  5  | B51 (= -B15) | B52 (= -B25) | B53 (= -B35) | B54 (= -B45) |      0       | B56 (= -B65)
  6  | B61 (= -B16) | B62 (= -B26) | B63 (= -B36) | B64 (= -B46) | B65 (= -B56) |      0
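The bookkeeping that Table 5-1 describes is straightforward to reproduce
numerically. The sketch below is an illustrative reconstruction (the node
count matches Table 5-1, but the noise scale is an arbitrary choice, not
a value from Cahill's papers): it builds a small connectivity matrix with
near-zero random connection strengths and verifies the two start-up
preconditions, anti-symmetry Bij = -Bji and vanishing self-connections Bii.

```python
import random

random.seed(7)  # deterministic run

N = 6  # number of start-up nodes, as in Table 5-1

# Fill only the upper triangle with tiny random strengths, then copy
# the negated values into the lower triangle: B[j][i] = -B[i][j].
B = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        strength = random.gauss(0.0, 1e-3)   # close to zero: no initial order
        B[i][j] = strength
        B[j][i] = -strength

# Precondition (1): anti-symmetry, hence zero self-connections.
assert all(B[i][i] == 0.0 for i in range(N))
assert all(B[i][j] == -B[j][i] for i in range(N) for j in range(N))

# Precondition (2): all connection strengths start off close to zero.
assert all(abs(B[i][j]) < 0.01 for i in range(N) for j in range(N))
```

Note that the row and column indices carry no external meaning here; as
in the text, they are mere address codes for connection strengths.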

In line with Wheeler's "law without law," the earlier-mentioned
"foundations without foundation," and the fact that we are trying to model
our initially unlabeled natural universe, we may refer to this indexical
labeling as "labeling without labeling." That is, since "nature as left
unframed by our nature-dissecting intellect" is basically an unlabeled
place, we cannot use any pre-defined categories, concepts, codings, or
symbol systems to label it in an a priori manner. Therefore, the system
has to "label" itself in terms of relational connection strengths among
initially latent nodes of connectivity. Accordingly, in the process physics
model, these "co-labeling" start-up nodes (or, in other words, "events,"
"sub-actual start-up seeds," "sub-actualities" or "pseudo-objects") are
being used as temporary scaffolding to enable emergent connectivity
among them (see Cahill, et al.). 125 Although they facilitate patterns of
relationship among them, the nodes themselves remain "pseudo-actualities."
Once the network starts to evolve any higher-order activity patterns, the
level of start-up nodes gets hidden from plain view by way of self-organized
criticality (see Section 4.3.3).

Figure 5-1: Schematic representation of interconnecting nodes - Connections
between nodes i and j with arrows indicating non-zero connection strengths
Bij. The direction of the arrows determines the sign of the connection
strengths; when nodes are thought to be (as yet) unconnected, the arrows
are absent, indicating a connection strength Bij = 0. Connection strengths
are indicated by "darkness," with black arrows denoting high-strength
connections and lighter-colored arrows implying weaker connectivity.

In matrix notation, the connectivity matrix-with its initially featureless,
"unlabeled" start-up nodes-can be written as an i × j matrix (i.e., a matrix
with i rows and j columns):

$$
B_{ij} =
\begin{pmatrix}
b_{11} & b_{12} & b_{13} & \cdots & b_{1j} \\
b_{21} & b_{22} & b_{23} & \cdots & b_{2j} \\
b_{31} & b_{32} & b_{33} & \cdots & b_{3j} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b_{i1} & b_{i2} & b_{i3} & \cdots & b_{ij}
\end{pmatrix},
\quad \text{with } i, j = 1, 2, 3, \ldots, 2M \text{ and } M \to \infty.
$$



Anti-symmetry then gives:

$$
B_{ij} =
\begin{pmatrix}
0 & -b_{21} & -b_{31} & \cdots & -b_{i1} \\
-b_{12} & 0 & -b_{32} & \cdots & -b_{i2} \\
-b_{13} & -b_{23} & 0 & \cdots & -b_{i3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-b_{1j} & -b_{2j} & -b_{3j} & \cdots & 0
\end{pmatrix}
$$

Or:

$$
B_{ij} =
\begin{pmatrix}
0 & b_{12} & b_{13} & \cdots & b_{1j} \\
-b_{12} & 0 & b_{23} & \cdots & b_{2j} \\
-b_{13} & -b_{23} & 0 & \cdots & b_{3j} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
-b_{1j} & -b_{2j} & -b_{3j} & \cdots & 0
\end{pmatrix}
=
\begin{pmatrix}
0 & -b_{21} & -b_{31} & \cdots & -b_{i1} \\
b_{21} & 0 & -b_{32} & \cdots & -b_{i2} \\
b_{31} & b_{32} & 0 & \cdots & -b_{i3} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
b_{i1} & b_{i2} & b_{i3} & \cdots & 0
\end{pmatrix}
$$
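The anti-symmetric structure above is easy to realize concretely. As a minimal
illustrative sketch (not part of the process physics model itself; the node
count and random values are arbitrary choices), such a matrix can be built from
its upper triangle alone:

```python
import numpy as np

# Illustrative: build an anti-symmetric connectivity matrix from randomly
# drawn upper-triangle values, so that B_ij = -B_ji holds and every
# self-connection B_ii is zero by construction.
rng = np.random.default_rng(7)
n = 6                                     # six nodes, as in Table 5-1
upper = np.triu(rng.normal(size=(n, n)), k=1)
B = upper - upper.T

assert np.allclose(np.diag(B), 0.0)       # no self-connections (B_ii = 0)
assert np.allclose(B, -B.T)               # anti-symmetry (B_ij = -B_ji)
```

Because only the upper triangle is drawn freely, the anti-symmetry constraint
is satisfied exactly rather than merely approximately.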

Process physics uses its connectivity matrix to model the gradually evolving
connection strengths of emergent activity patterns within the initially
uniform and orderless universe. In order to meet Wheeler's requirement
of "law without law" (see also Section 5.1), an iterative update routine is
used to enrich the network of connection strengths with system-wide
connectivity combined with a layer of system-renewing noise. As the
system continuously keeps on going through its stochastic iteration cycles,
with each such loop being indexed by the relation matrix, slowly but
surely, higher-order patterns of connectivity will emerge. 126
The iteration routine in question is in fact derived from the bilocal
field representation that is used in quantum electrodynamics-hence the
use of the symbol B in Eq. 5.1 as it refers to "bilocal" (see Cahill and
Klinger, "Self-Referential Noise and the Synthesis," section 3). By stripping
away all terms that refer to any presupposed geometrical aspects, the
following update routine is achieved:

$B_{ij} \to B_{ij} - a(B + B^{-1})_{ij} + w_{ij}$, with $i, j = 1, 2, 3, \ldots, 2M$ and $M \to \infty$. (5.1)

Cahill has summarized his stochastic iteration routine in the following way:
The iteration system has the form $B_{ij} \to B_{ij} - a(B + B^{-1})_{ij} + w_{ij}$. Here $B_{ij}$ is
a square array of real numbers giving some relational link between
nodes i and j. Here $B^{-1}$ is the inverse of this array: to compute this all
the values $B_{ij}$ are needed: in this sense the system is totally self-
referential. As well at each iteration step, in which the current values
of $B_{ij}$ are replaced by the values computed on the right-hand side, the

random numbers wij are included: this enables the iteration process to
model all aspects of time. These random numbers are called self-
referential noise (SRN) as they limit the precision of the self-
referencing. Without the SRN the system is deterministic and
reversible, and loses all the experiential properties of time. (Cahill,
"Process Physics: Self-Referential," 12)
To recap, the first term $B_{ij}$ embodies the network's entire acquired past
(up to the immediately preceding iteration) as it holds the iteratively built-
up connection strengths among connection pairs i and j. As such, it may
be called the precedence term-this entirely in line with the Peircean
"principle of precedence" (see Peirce 277; Smolin, Time, 47). The second
term $-a(B + B^{-1})$, which may be referred to as the cross-linkage or binding
term, facilitates universal interconnectedness by hooking up the single
matrix $B$ with its inverse counterpart $B^{-1}$. Something close to a holarchic
feedback loop thus becomes active within the system. This setup requires anti-
symmetry $B_{ij} = -B_{ji}$, which ensures that self-connections $B_{ii}$ will
always be zero; this in conformity with the above requirement that there
is no internal subnetwork connectivity to the start-up nodes themselves.
Furthermore, the parameter $a$ within this second term is comparable to a
tuning parameter in self-organized criticality (SOC) systems; such a
parameter does not influence the fact that SOC occurs, but it does
affect how SOC occurs. For instance, in sand pile systems it determines
the narrow region of near-critical angles in which avalanches will tumble
down the slope-but the parameter itself is non-critical since it can vary
widely without frustrating the occurrence of SOC. 127 Last but not least,
in every new iteration and for each "connection pair" i and j, the noise
term $w_{ij} = -w_{ji}$ is an independent random variable with variance η, picked
arbitrarily from a probability distribution (see Cahill and Klinger, "Self-
Referential Noise as a Fundamental," for more detailed information, for
instance, on the size of η).
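The three terms just described can be put together in a minimal numerical
sketch of Eq. 5.1. The parameter values, matrix size, and Gaussian noise
distribution below are illustrative assumptions, not Cahill's actual settings;
note also that the even (2M) dimension matters, since odd-dimensional
anti-symmetric matrices are always singular and $B^{-1}$ would not exist:

```python
import numpy as np

def iterate(B, a=0.05, eta=0.1, rng=None):
    """One pass of the stochastic map of Eq. 5.1:  B -> B - a(B + B^-1) + w.

    w is the anti-symmetric self-referential noise (SRN); because B, B^-1,
    and w are all anti-symmetric, self-connections B_ii stay exactly zero.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = B.shape[0]
    w = rng.normal(0.0, eta, size=(n, n))
    w = np.triu(w, k=1) - np.triu(w, k=1).T       # enforce w_ij = -w_ji
    return B - a * (B + np.linalg.inv(B)) + w

rng = np.random.default_rng(42)
n = 8                                  # plays the role of 2M (must be even)
start = np.triu(rng.normal(0.0, 1e-3, size=(n, n)), k=1)
B = start - start.T                    # near-zero "unlabeled" start-up state

for _ in range(100):                   # noisy update iterations
    B = iterate(B, rng=rng)

assert np.allclose(B, -B.T)            # anti-symmetry survives every pass
```

The self-referential character shows up in `np.linalg.inv(B)`: every updated
value depends on all current values at once, and the noise draw `w` makes
each pass unrepeatable.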

Fig. 5-2: Artistic visualization of the stochastic iteration
routine-(a) The noise-driven iterative update routine of Eq. 5.1 can
be subdivided into a precedence term, a binding term, and a noise
term; (b) At a coarse-grained level, the $B_{ij}$ form a smooth and
homogeneous "indexing landscape"-this is in line with the absence
of connectivity. However, when zooming in on a finer-grained level,
the indexing landscape gives a much rougher and more spikey
impression characteristic of randomness; (c) The precedence,
binding, and noise terms are visualized as "indexing landscapes,"
thus forming a map of connection strengths. 128 Going through the
iterations again and again will eventually lead to the formation of
higher-order connectivity in a small region of the total indexing
landscape. (original image (3D surface of noise): © Paul Bourke
1997)

The connection strengths between all these "connection pairs" i and j are
not themselves visible features, so the above visualizations can only serve
as instructive metaphors and should not be thought of as images of nature
itself. In order to avoid the fallacy of misplaced concreteness, after all,
we should realize that the here depicted "connectivity landscape" pertains
to an indexical mapping of connection strengths. Just as Stuart Kauffman's
"fitness landscapes" (At Home, 163-180) do not depict any real landscapes,
these "connectivity landscapes" do not directly reflect nature. Rather, they
form an indexical grid of emergent connectivity on a pre-sorted layout,
analogous to using a grid of people's home addresses to index the level
of social connectivity within a community (see also Section 5.3.4).
Moreover, since the connection strengths are held to be initially practically

zero-reflecting the absence of connectivity-there is no initial "meaning"
of the indexical network.
In fact, meaning only gets to be established later on, as the network
starts to give shape to itself through its mutually informative processuality.
That is, the indexicality, or, in other words, the relational mapping of
connection strengths, offers a means to "inform" each local "island of
connectivity" 129 about how it relates to everything else within the network.
In contrast with classical information theory, this does not take place via
the transmission and reception of symbolically expressed numerical data,
but through mutual informativeness-a.k.a., process-informativeness or
process-information (see Corbeil; Van Dijk, "An Introduction" and "The
Process"). That is, all events 130 actively make a difference to each other
through their mutualistic, diaphoric processuality, 131 so that the network
as a whole gradually becomes internally meaningful and habit-establishing.
The process physics model not only gives rise to internal meaningfulness
and habit formation. As will be shown in the next section, it also facilitates
the emergence of three-dimensionality and enables the network to become
organized in a quantum-foam-like way.

5.3.4 From pre-geometry to the emergence of three-dimensionality


In the process physics model, or, to be more specific, in the connectivity
matrix which facilitates the indexical mapping of connection strengths,
there are no a priori elementary constituents whose behaviors are held to
be governed by any pre-available "laws of nature." Instead, there is only
a lawless, initially nondescript background of iterative, noise-driven
activity patterns. Although rare, more stable and relatively isolated
branching structures will start to emerge from the noisy background activity
when the system goes through enough update iterations. This is because
the noise term wij not only enriches the system with random novelty, but
also gives rise to rare large-valued connection strengths $B_{ij}$.
In comparison to the smaller-valued connection strengths $B_{ij}$ in their
background vicinity, these large-valued $B_{ij}$ can more easily persist under
the regime of the system-renewing iterations (Cahill and Klinger, "Self-
Referential Noise as a Fundamental"). In the long run, those specific
linkages that are strong and durable enough to survive the system's noisy
iterations will then hook up to form tree-graph-shaped connectivity patterns
(see Fig. 5-3). This is because short-distance connections between

neighboring monads are by far the most probable ones. As a logical
consequence, the majority of those rare large-valued connections will thus
be established between nearest neighbors. Meanwhile, an already significantly
smaller portion links to the second-nearest neighbors, and an even tinier
fraction is capable of attaching to more distant neighbors (see the right-hand
column of Table 5-2).

Fig. 5-3: Tree-graphs of large-valued nodes $B_{ij}$ and their
connection distances $D_k$ (the depicted example tree has $D_0 = 1$,
$D_1 = 2$, $D_2 = 4$, $D_3 = 1$).

Table 5-2: The amount of connections arranged by distance and connection strength

|                 | low connection strength | medium connection strength | high connection strength |
| short-distance  | overwhelming majority   | few                        | scarce                   |
| medium-distance | few                     | scarce                     | very, very scarce        |
| long-distance   | scarce                  | very, very scarce          | extremely scarce         |

As can be shown through numerical analysis of the indexed connection
strengths, these tree-graph-shaped branching structures of elevated
connectivity become organized in such a way that they have a natural
embedding within a 3-dimensional hypersphere. To be more specific, their
topology approximates the geometry of a 3-dimensional hypersphere $S^3$
(see Fig. 5-4). For this numerical analysis to be performed, we first need
to filter out which nodes are participating in branching structures of
elevated connectivity. This can be done by introducing a lower threshold
of connection strength and then sieving out only those nodes that have
larger connection strengths.

(a) start-up (b) k = 35 (c) k = 60 (d) k = 90 (e) k = 110 (f) k = 145

Fig. 5-4: Emergent 3D-embeddability with "islands" of strong
connectivity-With increasing iterations (note that the number of
iterations is here indicated by k ranging from 0 to 145) the
connectivity nodes take on a distribution in which they are
embeddable within a hyperspherical geometry $S^3$. To allow plotting,
the fourth coordinate has been suppressed. The "sphere-within-sphere
embedding" can be best observed when looking at Fig. 5-4f. Figure
and caption text edited from: Fig. 7.53 in (Klinger, Bootstrapping,
281; © VDM Verlag Dr. Muller 2010).

After having applied this lower threshold, we can then see isolated islands
of elevated connectivity in the indexical matrix. These are the ones
Although the tree-graphs are made up from monads i, j, k, l, ..., etc.,
whose respective "starting positions" are given by the row and column
numbers in the indexical connectivity matrix, the self-organizing regime
of noisy update iterations effectively "neutralizes" this pre-imposed
hierarchy. That is to say, the initial hierarchy of the $B_{ij}$ is irrelevant. After
all, just as people's home addresses do not tell anything about which
members within the community are closest to them, the cell-coordinates
of the connection strengths $B_{ij}$ within the matrix, as well as the row and
column numbers of individual $B_{ij}$ within the tree-graphs, are of no actual
significance, just as long as the connections with their neighbors, their

neighbor's neighbors, etc., are being catalogued by the iteratively built-up
connectivity index:
Consider the connectivity from the point of view of one monad [also
nameable as "event," "actuality," or "node of connectivity"], call it
monad i. Monad i is connected via these large $B_{ij}$ to a number of
other monads, and the whole set of connected monads forms a tree-
graph relationship [i.e., a branching structure]. This is because the
large links are very improbable, and a tree-graph relationship is much
more probable than a similar graph involving the same monads but
with additional links. The set of all large valued $B_{ij}$ then form tree-
graphs disconnected from one-another; [see Fig. 5-3]. In any one
tree-graph the simplest "distance" measure for any two nodes within
a graph is the smallest number of links connecting them. Indeed this
distance measure arises naturally using matrix multiplications when
the connectivity of a graph is encoded in a connectivity or adjacency
matrix. Let $D_1, D_2, \ldots, D_L$ be the number of nodes of distance 1, 2, ...,
L from node i (define $D_0 = 1$ for convenience), where L is the largest
distance from i in a particular tree-graph, and let N be the total
number of nodes in the tree. Then we have the constraint $\sum_{k=0}^{L} D_k = N$.
(Cahill and Klinger, "Self-Referential Noise and the Synthesis")
With all this in place, we can now start to count the number $\mathcal{N}(D, N)$ of
different N-node trees that, seen from the perspective of reference node
i, have the same distance distribution $\{D_k\}$. With all possible linkage
patterns included, this would give:

$$
\mathcal{N}(D, N) = \frac{(M-1)!}{(M-N-2)!} \,
\frac{D_2^{D_1} D_3^{D_2} \cdots D_L^{D_{L-1}}}{D_1! \, D_2! \cdots D_L!}
\qquad (5.2)
$$

After having specified this number $\mathcal{N}(D, N)$, Cahill and Klinger proceed:
Here $D_{k+1}^{D_k}$ is the number of different possible linkage patterns
between level k and level k+1, and $(M-1)!/(M-N-2)!$ is the number of
different possible choices for the monads, with i fixed. The
denominator accounts for those permutations which have already
been accounted for by the $D_{k+1}^{D_k}$ factors. We compute the most
likely tree-graph structure by maximising $\ln \mathcal{N}(D, N) + \mu(\sum_{k=0}^{L} D_k - N)$,
where $\mu$ is a Lagrange multiplier for the constraint. Using Stirling's
approximation for $D_k!$ we obtain $D_{k+1} = D_k \ln\frac{D_k}{D_{k-1}} - \mu D_k + \frac{1}{2}$, ...
which can be solved numerically. [Fig. 5-5] shows a typical result
obtained by starting [equation (5.2)] with $D_1 = 2$, $D_2 = 5$, and $\mu = 0.9$,
and giving $L = 16$, $N = 253$. Also shown is an approximate analytic
solution $D_k \sim \sin^2(\pi k / L)$ found by Nagels. These results imply that the
most likely tree-graph structure to which a monad can belong has a
distance distribution $\{D_k\}$ which indicates that the tree-graph is
embeddable in a 3-dimensional hypersphere, $S^3$. Most importantly
monad i has a 3-dimensional connectivity to its neighbours, since $D_k \sim k^2$
for small $\pi k / L$. We call these tree-graph B-sets gebits [because they
seem to act as bits of emergent geometry-geometry bits]." (Cahill
and Klinger, "Self-Referential Noise and the Synthesis")
In more informal language, we may say that the connectivity nodes of the
tree-graph-like branching structures have practically the same distance
distribution as uniformly arranged points in a three-dimensional space:
The trees branch randomly, but if you take one pseudo-object [a.k.a.
connectivity node] and count its nearest neighbours in the tree,
second nearest neighbours, and so on, the numbers go up in
proportion to the square of the number of steps away. This is exactly
what you would get for points arranged uniformly throughout three-
dimensional space. (Chown 27)
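This $k^2$ signature is easy to check for ordinary three-dimensional space. The
toy count below (a plain cubic lattice with nearest-neighbour steps, offered
purely as an illustration and not as part of the process physics network
itself) shows the size of the k-th "neighbour shell" growing quadratically:

```python
from itertools import product

def shell_count(k, radius):
    """Number of integer lattice points whose step distance from the origin
    (nearest-neighbour hops, i.e. Manhattan distance |x|+|y|+|z|) equals k."""
    pts = product(range(-radius, radius + 1), repeat=3)
    return sum(1 for p in pts if sum(abs(c) for c in p) == k)

counts = [shell_count(k, 6) for k in range(1, 6)]
print(counts)                 # [6, 18, 38, 66, 102], i.e. 4k^2 + 2
assert all(c == 4 * k * k + 2 for k, c in enumerate(counts, start=1))
```

The shell sizes 6, 18, 38, ... follow $4k^2 + 2$, so for growing k the counts
rise in proportion to $k^2$, just as Chown describes for uniformly arranged
points in three-dimensional space.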

Fig. 5-5: $[D_k, k]$-diagram: "Data points show numerical solution of
$D_{k+1} = D_k \ln\frac{D_k}{D_{k-1}} - \mu D_k + \frac{1}{2}$ for distance
distribution $D_k$ for a most probable tree-graph with $L = 16$. Curve
shows fit of approximate analytic form $D_k \sim \sin^2(\pi k / L)$ to
numerical solution, indicating weak but natural embeddability in a
hypersphere $S^3$." (Cahill, et al., "Process Physics: Modelling," 193-194)
Emergent branching structures that manage to persist under ongoing
iterations can hook up with each other to form yet another higher-order
level of branching structures. The emerging 3-dimensionality starts to
spread across wider and wider ranges of the network, thereby giving rise

to a fractal, quantum-foam-like web of connectivity. Consistent with the
universality (i.e., scale-free phenomena) that is so characteristic of self-
organized criticality systems, 3D-embeddable nested subnetworks of all
sizes can be found to occur in this quantum-foam-like network (see Fig.
5-6 for an artistic impression of such a quantum foam network).

Fig. 5-6: Fractal (self-similar) dynamical 3-space-Artistic
impression of fractal (i.e., holarchic) dynamical 3-space, a.k.a., three-
dimensional process-space. A comparison with Fig. 5-1 and Table 5-1
can be made to better understand the linkage with the iteration
routine (5.1). Original image on the lower right hand side (the other
images have been edited); retrieved from (Cahill, et al., "Process
Physics: Modelling").
From further analytical and numerical study, it can be concluded that
this network of fractal, cell-like bits of geometry behaves as a Prigoginean
dissipative structure. That is, the "cells" arise from a noisy, initially
uniform background, much like Bénard convection cells, or the emergent
cellular reaction patterns in certain Gray-Scott reaction-diffusion systems
(see Fig. 5-7). Through the combined effect of the binding and noise term
in Eq. 5.1, the network will act as an order-disorder system in which these
fractal (i.e., holarchic) cell-like process-structures come and go as they

are engaged in slower- and faster-going growth-decay cycles-depending
on the local-global context and their internal reactivity (see Cahill and
Klinger, "Self-Referential Noise and the Synthesis" and "Self-Referential
Noise as a Fundamental").


Fig. 5-7: Gray-Scott reaction-diffusion model-Subsequent steps
in a reaction-diffusion model as it evolves from low-level random
noise. The reaction-diffusion system consists of two arbitrary
chemical species U and V. The variables u and v represent their
concentration for each point in space. At the start of the simulation,
the concentration of these two chemical species varies randomly. As
the simulation progresses, the chemical species react with each other
and diffuse through the available medium, thus yielding dynamically
varying concentration levels and associated pattern formation at any
given location. Depending on the parameters used, the two chemical
reactions take on different rates at each point within the medium. The
reactions involved are: U + 2V → 3V and V → P, with P being
an inert product that does not participate in any further chemical
reactions. For the sake of simplicity it is assumed that there is an
abundant supply of reagents, so that the reactions only occur in this
one direction and not in the opposite one. Since V acts as a reactive
chemical as well as a reaction product, it can be seen as a catalyst for
its own reaction. Hence, V takes part in an autocatalytic cycle. The
simulation of the reaction and diffusion processes is based on the
partial differential equations

$$
\frac{\partial u}{\partial t} = D_u \nabla^2 u - uv^2 + F(1 - u)
\quad \text{and} \quad
\frac{\partial v}{\partial t} = D_v \nabla^2 v + uv^2 - (F + k)v,
$$

with u and v as the location- and time-dependent concentrations. The
first part of the first formula, $D_u \nabla^2 u$, is the diffusion term with
parameter $D_u = 2.00 \times 10^{-5}$; the second part, $-uv^2$, is the reaction rate;
and the third part, $F(1 - u)$, is the replenishment term (with feed rate
$F = 0.0600$) which is needed to replenish the chemical species U
because it gets used up in the reaction. For the second partial
differential equation, the parameters are: $D_v = 1.00 \times 10^{-5}$, feed rate
$F = 0.0600$, and diminishment term $k = 0.0620$. The simulation was
originally performed by the XMorphia simulation software (authored
by Roy Williams at Caltech) with partial differential equations (as
discussed in Pearson). However, samples (a) to (f) are taken from a
renewed simulation run by Robert Munafo
(see http://mrob.com/pub/comp/xmorphia/index.html for details).
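The quoted equations can be stepped numerically in a few lines. Below is a
hedged sketch (explicit Euler updates on a small periodic grid; the grid size,
spacing dx, and time step dt are illustrative choices of ours, not XMorphia's
actual settings), using the parameter values from the caption:

```python
import numpy as np

def gray_scott_step(u, v, Du=2.00e-5, Dv=1.00e-5, F=0.0600, k=0.0620,
                    dx=0.01, dt=1.0):
    """One explicit Euler step of
         du/dt = Du lap(u) - u v^2 + F (1 - u)
         dv/dt = Dv lap(v) + u v^2 - (F + k) v
    on a periodic grid (dx and dt are illustrative discretization choices)."""
    def lap(a):  # 5-point discrete Laplacian with periodic boundaries
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
                np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2
    uvv = u * v * v
    return (u + dt * (Du * lap(u) - uvv + F * (1.0 - u)),
            v + dt * (Dv * lap(v) + uvv - (F + k) * v))

# Start from a uniform state plus low-level random noise, as in Fig. 5-7a.
rng = np.random.default_rng(1)
u = np.ones((64, 64)) + 0.02 * rng.standard_normal((64, 64))
v = np.abs(0.02 * rng.standard_normal((64, 64)))
for _ in range(200):
    u, v = gray_scott_step(u, v)
```

The chosen dx and dt satisfy the explicit-scheme stability bound
$D \, dt / dx^2 \le 1/4$, so the concentrations stay finite as the patterns
develop.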
As the network continues to go through its update iterations, even higher-
order structures can arise from this initially low-level process. The model's
network of higher-order process-structures will thus start to exhibit all
kinds of characteristics that can also be found to occur in nature. Among
the signature features of the process physics model we can find, for
instance, nonlocality, emergent quantum behavior, emergence of a quasi-
classical world, gravitational and relativistic effects, inertia, universal
expansion, black holes and event horizons, and also a present moment
effect inherent to the system itself (see Cahill, "Process Physics: From
Information Theory," 11-12).

5.3.5 Process physics, intrinsic subjectivity, and an inherent present moment effect
Whereas mainstream physics, with its dependence on the geometrical
timeline, does not allow for a unique and exclusive now (see Section
2.1.3), the process physics model has an inherent present moment effect to it:
The introduction of process and the stochasticity of self-referential
noise not only provides the spontaneous and creative generation of
spatial structure, it also captures what may be termed the "present
moment effect," and thus the essence of empirical or experiential
time. Successive iterations .. . generate a history ...which might be
recorded and replayed precisely, and in this there is a clear "arrow of
time" because, unlike a recording (which can be played forward or
backward arbitrarily to find and examine specific instances), one
cannot simply run the system in reverse to recover an earlier state
since the mapping is unidirectional-the presence of the noise term
precludes an inverse mapping. However, while the history may be
broadly inferred from the presence of persistent relational forms so

that there is a sense of a natural partial memory, the "present
moment" is entirely contingent both on the specific detail of that
history and on the SRN [i.e., Self-Referential Noise] so that the
future awaits creation. (Klinger, "On the Foundations," 170)
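Klinger's point that the noise term precludes an inverse mapping can be
illustrated numerically. In the sketch below (arbitrary sizes and parameters,
assumed purely for illustration), a single update of Eq. 5.1 is applied and
then "undone" using the deterministic part alone; without the particular
noise draw w, the earlier state is not recovered:

```python
import numpy as np

rng = np.random.default_rng(3)
n, a = 8, 0.05

def antisym(m):                    # helper: anti-symmetrize a random draw
    return np.triu(m, k=1) - np.triu(m, k=1).T

B0 = antisym(rng.normal(size=(n, n)))            # some earlier state
w = antisym(rng.normal(0.0, 0.1, size=(n, n)))   # that iteration's SRN draw
B1 = B0 - a * (B0 + np.linalg.inv(B0)) + w       # one update of Eq. 5.1

# A naive "reverse" step using only the deterministic part fails:
guess = B1 + a * (B1 + np.linalg.inv(B1))
assert not np.allclose(guess, B0)
# (Even with w known, recovering B0 means solving an implicit equation
#  in B0; the forward mapping is unidirectional in practice.)
```

The failed recovery is the numerical face of the "arrow of time" in the
quotation: the history can only be replayed, never rewound.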
Although the iterations of the update routine are not literally synonymous
with the phenomenon of time, they facilitate the ongoing renewal of the
system's connectivity patterns and thus are constitutive of what Whitehead
(PR, 128, 222) calls "the creative advance into novelty." Each fulfilled
round of iterations can be thought to bring on a new present moment.
(Admittedly, this is of course a simplifying idealization. After all, in reality
we cannot actually identify any such completion of stochastic iteration
cycles. However, when looking at nature in any way we can, again and
again we find that nonequilibrium cyclic processuality is a consistently
recurring phenomenon.) Moreover, being engaged in those cyclic
update iterations makes a meaningful difference to the network's islands
of elevated connectivity. That is, by slightly modulating the connectivity
landscape with each turn, the update iterations affect connection strength,
spread, durability, and reactivity of the network's ongoing patterns of
relationship:
Numerical studies show that the outcome from the iterations is that
the gebits [i.e., the 3D-embeddable branching structures with
elevated connectivity strength] are seen to interconnect by forming
new links between reactive monads [i.e., reactive start-up nodes] and
to do so much more often than they self-link as a consequence of
links between reactive monads in the same gebit. We also see monads
not currently belonging to gebits being linked to reactive monads in
existing gebits. Furthermore the new links, in the main, join monads
located at the periphery of the gebits, i.e., these are the most reactive
monads of the gebits .... [T]he new links preserve the 3-dimensional
environment of the inner gebits, with the outer reactive monads
participating in new links. Clearly once gebits are sufficiently linked
by $B^{-1}$ they cease to be reactive and slowly die via the iterative map.
Hence there is an on-going changing population of reactive gebits
that arise from the noise, cross-link, and finally decay. Previous
generations of active but now decaying cross-linked gebits are thus
embedded in the structure formed by the newly emerging reactive
gebits. (Cahill and Klinger, "Self-Referential Noise and the
Synthesis")
In fact, the relation between strength, node distances, and reactivity
is such that it gives rise to an internal, dispositional preference of how to

connect. That is, these locally evolving, emergent characteristics affect
the islands of connectivity in a way that makes them hook up with "kindred"
ones (see Fig. 5-8b). Analogous to what happens in reentrant neural
networks (see Sections 4.3 and 4.3.1), the iterative noise in the process
physics model gives rise to a kind of plasticity in which simultaneously
active structures link up with each other. Accordingly, it can be derived
from the numerical analyses that "connectivity structures that are reactive
together, hook up together," this in surprising agreement with the well-
known motto from neurodevelopment: "[neural] cells that fire together,
wire together" (Lowel and Singer; Edelman and Tononi 83).
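The "fire together, wire together" motto itself can be sketched with a
minimal Hebbian update rule. This is a deliberately toy illustration,
unrelated to Cahill's actual numerics; the unit count, learning rate, and
activity statistics are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, eta = 1000, 4, 0.01
x = rng.integers(0, 2, size=(T, n)).astype(float)   # binary unit activity
x[:, 1] = x[:, 0]                # units 0 and 1 are always active together

W = np.zeros((n, n))
for t in range(T):               # Hebbian rule: dW_ij = eta * x_i * x_j
    W += eta * np.outer(x[t], x[t])
np.fill_diagonal(W, 0.0)         # ignore self-connections

# The consistently co-active pair ends up with the strongest connection.
assert W[0, 1] == W.max()
```

Connections between merely occasionally co-active units grow roughly half as
fast, so sustained joint activity, not activity as such, is what builds the
strong links.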
In many complex adaptive systems, such activity-driven mutualism
is known to give rise to self-similar fractal network structures in which
the same patterns of relationship occur at all levels of organization. In a
few words: fractal self-similarity means that the whole and its parts are
similarly shaped. As already hinted at in the last part of Section 4.3.3, a
fractal network structure is one that achieves the maximum correlation
among the constituent network elements. In fact, in a fractal network
system, successful branching structures persist as they do not stop
participating in structure-enriching network cycles, while poorly inter-
associated and less interactive subnetworks will sooner or later fade away
as connectivity with the rest of the system drops below a sustainable level:
If the inputs to a system cause the same pattern of activity to occur
repeatedly, the set of active elements constituting that pattern will
become increasingly strongly interassociated. That is, each element
will tend to turn on every other element and (with negative weights)
to turn off the elements that do not form part of the pattern. To put it
another way, the pattern as a whole will become "auto-associated."
(Allport 44)
And as fractal structure formation facilitates a high level of auto-
association, the network's constituent elements become so intimately
connected with the network as a whole that we may even be so bold as to
state that they can "sense" the global state of the system through their
access to deeply correlated local information (Hesse and Gross 10).
Moreover, in the long run these strongly interassociating systems turn out
to develop an optimal ratio between (1) efficiency (i.e., the capability to
follow "the path of least resistance") and (2) diversity or flexibility (i.e.,
the capability to adapt, or take another, alternative pathway in the face of
changing local-global conditions). The near-critical balance between these
two features (see Ulanowicz, The Third, 112) then leads to structure
formation as found, for instance, in the optimally complex neural network
of Fig. 4-4a-b. Indeed, similar fractal pattern formation can be found in
systems as diverse as neural networks (Fig. 5-8a), tree root networks (Fig.
5-8b), river deltas (Fig. 5-8c), blood vascular networks (Fig. 5-8d), ant
foraging trails, and many more natural systems, even at the level of galactic
superclusters (see Fig. 5-10 below).
In summary, all the pattern formation in these systems thrives on
dispositional activity. Whenever branching structures, under the influence
of internal, external, local, and global contingencies and constraints, get
to become each other's "adjacent possible" (Kauffman, "Foreword: The
Open," xiii; "Foreword: Evolution," 15), they will likely hook up and,
depending on the level of mutual sustainment, get involved into a more
durable relationship or not. Simply because the probability to hook up
will increase when branching structures are (1) simultaneously active, (2)
equally strong, (3) equally durable, and (4) equally reactive, it would
certainly not be too much off-target to say that these branching structures
develop in an anticipatory, or at least a proto-anticipatory way. As all
branching structures are "biased" in the sense that they tend to connect
with resembling parts, this can also be thought of as a primitive form of
subjectivity. The network as a whole, then, will exhibit what Robert
Ulanowicz has called "ascendency" (The Ascendent; The Third 112)-the
tendency to develop towards ever-higher, and increasingly intense complexity.

Fig. 5-8: Fractal pattern formation leading to branching
networks. (a) Four fluorescently stained neurons from a bird's brain
(finch). One small interneuron and three projection neurons in RA,
the robust nucleus of the arcopallium; a brain area involved in the
control of fine muscle movements required for the production of
learned song (photo authored by Mark Miller (2011), postdoctoral
fellow at UCSF School of Medicine); (b) Excavated root network of
Balsam Poplar with the arrow indicating a "root graft" (a shared
connection) between two individual trees. Root grafts are exquisite
examples of "kindred" dispositional branching structures that hook
up with each other, thus contributing to optimal mutualistic
connectivity within the network. Location: Quebec, Canada
(Adonsou, et al.); (c) Satellite picture of the fractal-shaped branching
structures of the Selenga River delta on the southeast shore of Lake
Baikal in Russia (source: U.S. Geological Survey); (d) Computer
model of fractal blood vessel network in human lungs (edited from
Haber, et al.).

In fact, in the process physics model, the occurrence of dispositionality
and the emergence of primitive anticipatoriness, proto-subjectivity, and
ascendency (see Ulanowicz, The Ascendent) is achieved by the system-
renewing effect of the noise-driven iterative update routine. Accordingly,
each round of iterations "in-forms" the entire connectivity network about
its own newly acquired internal connectivity. Through the complex interplay
of (1) the memory-like precedence term, (2) the Whiteheadian prehension
of local-global data by the cross-linking binding term, and (3) the creativity-
infusing noise term, the initially undifferentiated connectivity network
will give rise to a present moment effect unparalleled by any timeline-
based model. Aside from their geometrical timeline, those models have
to rely on an external time pointer to get from one moment to the next.
The present moment effect in the process physics model, however, is
inherent to the connectivity network in which it arises. So much so that
the network and the present moment cannot be told apart in any satisfactory
way. Accordingly, the present moment effect should be understood as
forming one inseparable whole with the connectivity network. Network
and present moment are best thought of as one integrated process-a
dispositional habit-driven present, or an anticipatory remembered present
(see Edelman, The Remembered; see also Fig. 5-9).
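The three-term interplay just described can be made concrete in a small numerical sketch. The update rule below follows the iterator form reported in Cahill's process physics papers, B → B − α(B + B⁻¹) + w; the network size, parameter values, and noise scale are illustrative assumptions rather than values prescribed by the model.

```python
import numpy as np

def process_physics_step(B, alpha, rng, noise_scale):
    """One iteration of a hedged sketch of Cahill's stochastic update:
    B -> B - alpha*(B + B^-1) + w, where
    - carrying B forward each round plays the role of the memory-like
      precedence term,
    - the (B + B^-1) combination is the cross-linking binder term that
      "prehends" local-global connectivity data, and
    - w is the creativity-infusing noise term."""
    w = rng.normal(scale=noise_scale, size=B.shape)
    return B - alpha * (B + np.linalg.inv(B)) + w

rng = np.random.default_rng(42)
n = 32                                   # illustrative network size
B = rng.normal(scale=0.01, size=(n, n))  # start near zero: no pre-given geometry
for _ in range(50):
    B = process_physics_step(B, alpha=0.1, rng=rng, noise_scale=0.01)
```

Starting from near-zero noise, each pass feeds the whole connectivity matrix and its inverse back into the next state, so every node is "in-formed" about the newly acquired connectivity of the entire network, in the sense described above.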
The most striking thing here is that we have already used the same
term when we were discussing the coming-into-actuality of an organism's
conscious now (see Section 4.3.1, note 93; see Sections 4.2.3-4.3.2 for
more specific details). When taking a step back to contemplate this
peculiarity, however, we may notice that the same basic repertoire-namely
(1) memory, (2) linkage-establishing reentrant signaling, and (3) neural
noise-is at work in the thalamocortical region of the mind-brain. This
repertoire, in turn, is kept on the go as the organism's perception-action
loop continues to go through its cycles.
As long as this repertoire remains intact, it can give rise to the conscious
organism's emergent sense of self and world through which not only the
experience of an immediately apparent reality becomes possible, but also
the thinking up of all kinds of scenarios of events that might or might not
happen in the future. Moreover, an organism with higher-order
consciousness-or, in other words, a well-developed anticipatory remembered
present-would also be able to imagine events that could perhaps have
happened in the past if circumstances would have been different. The
organism's thoughts and actions would not necessarily be targeted solely
on the "conscious now," but could also be aimed at imagined possible
past or future realities.

Fig. 5-9: Seamlessly integrated observer-world system with
multiple levels of self-similar, neuromorphic organization.
Simplified illustration depicting (a) a within-nature observer who is
(b) seamlessly embedded within the same natural world he, from
early life into adulthood, gets to make sense of through (c) the
workings of his mind-brain and the perception-action cycles, and
other non-equilibrium cycles that enable him to go through life (these
cycles are not depicted here; see Fig. 4-1 as a replacement).
Subsequently, (d) this brain-equipped observer, by going through his
perception-action cycles, gets to sculpt a conscious view of the
greater embedding world, which at the supragalactic level, is also
organized in a "neural network-like" way. Finally, then, (e) process
physics shows that, at the "deepest" level of organization, the process
of nature branches out into an all-encompassing, optimally
interconnected complex network of "neuromorphic" activity patterns.
This is characteristic of a self-organizing, criticality-seeking,
complex fractal network process. All this suggests that this self-
organizing network process gives rise to habit formation, internal
meaningfulness through universal mutual informativeness, and all
experiential aspects that mainstream physics systematically
overlooks.

(Credits: Edited neuron image and supragalactic network image
inserted from Mark Miller and from Volker Springel, et al.,
respectively. Edited image of "neuromorphic" fractal quantumfoam-like
connectivity network inserted from Cahill, et al.)
Although it would of course be going too far to state that nature at its
deepest level already possesses a highly evolved memory-based
anticipatoriness, all the above forces us to admit that a primordial form
of it could well be present from early beginnings onward. After all,
analogous to the dispositional behavior of branching structures in the
process physics model, a conscious organism's anticipatory remembered
present arises through dispositional memory repertoires within perception-
action cycles that thus enable the repetition of psychophysical acts, such
as thought, imagination, and value-modulated musculoskeletal control
(Edelman and Tononi 57-61). Given all this, we are now hopefully ready
to conclude that proto-subjectivity, time (i.e., nature's "becomingness"
as "the going through its iterative cycles"), universal interconnectedness,
something akin to Whitehead's "subjective aim" (see "dispositional
preference," teleology, or what Terrence Deacon [264-287] calls
"teleodynamics"), and mutual informativeness are intimately related
aspects of the in itself indivisible process of nature.
Fig. 5-10: Structuration of the universe at the level of
supragalactic clusters. (Source of image: Springel, et al.; © Nature
Publishing Group 2005)

These zoom images are generated by the Millennium Simulation.
Each individual window shows the simulation-generated structure in
a slice of thickness 15h⁻¹ Mpc (the order of magnitude for each
window can be derived from the distance indication in the lower right
and left hand corners). The sequence of windows gives consecutive
enlargements, with a factor four magnification for each step. The
simulation aims to model structure formation in the universe in a way
that agrees with the results obtained by the Sloan Digital Sky Survey
and the 2-degree Field Galaxy Redshift Survey (2dFGRS). To
achieve this, it is assumed that the early universe exhibited only weak
density fluctuations and was otherwise homogeneous. Starting from
such initial conditions, these fluctuations are then thought to be
amplified by gravity. Dark matter and energy are invoked to enable
the applicable gravitational equations to achieve neuromorphic
structuration compliant with the above-mentioned reference surveys.
In process physics, the fractal, neuromorphic structure formation
results directly from the iterative update routine and no dark matter
or dark energy hypothesis needs to be invoked.

6. Overview and conclusions


Throughout this paper, we have seen that our conventional way of
doing physics in a box got us into trouble over and over again. The common
source of all these troubles seems to be the general methodology behind
doing physics in a box, or, in other words, the Newtonian paradigm. For
sake of clarity, let us retrace the steps through which the very method that
was invented specifically to improve our physical understanding of nature
(which, arguably, it did), could ever so paradoxically end up being such
an inhibitor of any deeper understanding as well.
To begin with, the Newtonian paradigm holds that the natural world
consists of nothing more than entirely physical contents. These contents
are thought to behave, to a greater or lesser extent, in a regular manner
that can be expressed in the form of lawful physical equations. Moreover,
it is one of the core beliefs in the Newtonian paradigm that nature as a
whole can eventually be captured by just a handful of these lawful physical
equations so that, eventually, the entire universe can be said to be "governed"
by only a small set of them. In a nutshell, this is roughly what the Newtonian
paradigm amounts to.
However, the Newtonian paradigm comes with quite a number of tacit
assumptions that we typically like to forget about once we are in the midst
of putting it into practice. A prime example among these tacit assumptions
is the "Galilean cut," which is the idea that quantifiable aspects of nature
(such as location, size, shape, and weight) belong to the "objective real
world out there," whereas qualitative aspects (such as color, touch, and
smell) belong to the subjective inner-life of the observer. A related, but
not entirely synonymous, idea is that a simple dividing line can be drawn
between the system-to-be-observed and its observational system, including,
in particular, the conscious observer behind the switches and knobs of the
measurement equipment. Another major point on the list of tacit assumptions,
then, is the idea that the environmental influence on a well-isolated system
can be safely neglected.
Although all these ideas are crucial elements in the Newtonian
paradigm-elements without which our present way of doing physics in
a box would not be possible-they cannot be upheld whenever our locally
successful "laws of nature" are extrapolated to nature as a whole. Such
an extrapolation would lead to the following patchwork of arguments:

1. By trying to apply our local physical equations to the universe at large,
we are in fact committing the cosmological fallacy (Smolin, Time,
97; also see J. Rosen 72).
2. Once having done so, we basically act as if we can position ourselves
outside of nature, along with our measuring rods, calibrated clocks,
and other scientific instruments, just to take on an exophysical "view
from nowhere" (see Nagel; Van Dijk, "The Process"). But it is of
course impossible to observe the universe from the outside and any
attempt to stubbornly stick with this exophysical methodology will
lead to conclusions that are impossible to check, such as that the
universe should appear static and frozen solid when looked at from
the outside (see Smolin, Time, 80). 132
3. The application of the Galilean cut-which is (1) intimately related
to the above mentioned exophysical view and (2) an absolute necessity
to make the formulation of physical equations possible at
all-automatically leads to the undesirable "bifurcation of nature."
That is, it splits up our natural world into lifeless nature and nature
alive (Whitehead, MT, 173-232; Desmet, "On the Difference," 87),
or, in other words, into an inanimate part that is describable in terms
of physics and mathematics, and another animate part that is not.
Unfortunately, however, this leaves unexplained all kinds of things
that we typically like to associate with life, such as meaning,
subjectivity, value, creativity, novelty, and so on. As a result, we are
now left with an exophysical-decompositional physics that, in the
words of Terrence Deacon, leaves it absurd that we exist.
4. This same exophysical-decompositional physics, by making it so natural
and obvious to think of nature in terms of empirical data and their
data-reproducing algorithms, all too easily persuades us to confuse
those empirical data and their physical equations 133 with the natural
world to which they are referring. Given the utter staticness of those
data records, one may be tempted to conclude that the referents to
which these records are held to pertain 134 are themselves equally
static, and thus "frozen in time" (Smolin, Time, 33). However, this
would amount to mistaking empirical data for what are held to be
their referents. We would actually be committing the fallacy of
misplaced concreteness (Whitehead, PR, 7, 18), which is undesirable
since it ultimately leads to all kinds of confusing results, such as the
denial of time and experience in Minkowski's block universe
interpretation of Einstein's special theory of relativity.

For the purpose of arriving at a proper, to-the-point conclusion, we do not
need to go too deep into all technical details of relativistic physics and its
timeless block universe interpretation. Instead, it will suffice to zoom
in on where the main assumptions of the block universe interpretation go
wrong. To recap, these main assumptions are:

(1) nature is an objectively existing, mind-independent real world out there;
(2) natural events reside in a geometrical continuum;
(3) relativity of simultaneity means that there is no passage of time and
that any experience of time passing by is thus illusory.

Remarkably, though, all three assumptions are in fact symptoms of what
may be called the physicist's fallacy. This fallacy, which may count
Galileo, Newton, Einstein, and many others among its victims, leads one
to suppose that what is being identified as an object in one's experience
must naturally have its origin in an external, mind-independent world of
entirely physical objects. This basically amounts to the idea that our
experience occurs somewhere in an exophysical center of subjectivity and
that it only has to import and interpret the information gathered from a
pre-coded "real world out there." However, nature is by itself unlabeled
and unframed by our filter of observation. As such, it does not contain
any categories, concepts, or pre-coded information from which our mind-
brains can construct nature-representing mental content.
Instead, as is shown in Sections 4.2.3-4.3.1, organisms get to sculpt
their "conscious sense of self and world" through perceptual
categorization-the ability of conscious organisms to partition nature into
categories, although nature by itself does not contain any such categories
at all (Edelman and Tononi 104). That is, sense of self and world emerge
as two aspects of one and the same stream of experience as conscious
organisms live through their multimodal perception-action cycles 135 as
well as the associated nutrient-waste cycles, 0 2-C02 cycles, and the like.
An organism's experiential world is carved out as a somatically meaningful
"self-centered world of significance" through a process of sense-making
that takes place within the integrated whole of the seamlessly interconnected
organism-world system-not within some exophysical center of subjectivity
or Cartesian theater.

This not only debunks the first main assumption of the block universe
interpretation-namely, that nature is an objectively existing, mind-
independent "real world out there"-but also the second and the third.
After all, assumption (2) can only be made to work when natural events
and living observers are being reduced to point-events and point-
observers-something that is shown to be a misleading abstraction in
Section 2.5.2. Moreover, since it turns out that relativity of simultaneity
does not hold in each and every case (see Section 2.5.3), the third assumption,
that our experience of time passing by is merely an illusion, should no
longer be considered an established finding either. To drive home the
point that the block universe interpretation is mistaken, though, it will for
now be enough to focus primarily on the flaw in the first main assumption
and keep the flaws in the other assumptions on standby. As for the first
assumption, it should be quite obvious that the long-cherished ideal of a
mind-independent "real world out there" is flatly contradicted by the
above-mentioned finding that observing organism and observed world are
ultimately one. In fact, this finding is utterly incompatible with our entire
current enterprise of doing "physics in a box" (or "exophysical-
decompositional physics," as it may also be called). 136
Due to this incompatibility, and some other reasons as well, it seems
we need to (1) temporarily put aside our exophysical-decompositional
way of doing physics in a box and save it exclusively for practical pur-
poses, 137 and (2) look out for a nonexophysical-nondecompositional way
of doing physics without a box to thus be able to get a modelling method
in which mutual informativeness is an integral part of the system, so that
we no longer need to bump into the problem of information having to be
pre-coded before it can ever be imported "from the outside" 138 (see also
Kauffman, "Foreword: Evolution," 9-22) as would be required for an
observer whose exophysical center of subjectivity is processing the sense
data originating from the allegedly mind-independent "real world out there."
This problem of pre-coded information can be avoided when information
is in fact an initially unlabeled process of mutual informativeness through
which the model system is dynamically being given shape from within,
i.e., a process through which all activity patterns can make a difference
to all other activity patterns within the system, and vice versa. Without
such mutual informativeness, the above-mentioned process of perceptual
categorization would not even be possible. It is through the mutual
informativeness among and within neuronal groups in the thalamocortical
region of the mind-brain that conscious organisms get to carve out their
conscious "Umwelt" (i.e., their "self-centered world of significance"; see
Von Uexküll and Von Uexküll; Koutroufinis) from a less salient background
of noisy, lower-order activity patterns.
In a remarkably similar way, the mutually informative process of
"autocatalysis" is thought to have facilitated the advent of life by enabling
the emergence of initially primitive biotic networks from a nondescript
background of low-grade, slow-going chemical reaction cycles (see
Kauffman, At Home, 47-69). In both cases, a higher-order world of habit-
establishing foreground patterns is "bootstrapped into actuality" 139 through
the mutually informative cyclic activity within the system itself.
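The autocatalytic "bootstrapping" just mentioned can be sketched with the RAF (reflexively autocatalytic and food-generated) formalism that Hordijk and Steel later developed out of Kauffman's autocatalytic sets. The data layout, function name, and toy reaction network below are illustrative assumptions, not part of Kauffman's or Cahill's own presentation.

```python
def max_raf(food, reactions, catalysis):
    """Find the maximal RAF subset of a reaction network (hedged sketch).
    reactions: list of (reactants_tuple, product) pairs
    catalysis: dict mapping reaction index -> set of catalyst molecules
    A RAF is a reaction set that is food-generated (all reactants reachable
    from the food set) and reflexively autocatalytic (every reaction is
    catalyzed by a molecule the set itself can reach)."""
    active = set(range(len(reactions)))
    while True:
        # closure: everything producible from the food set, ignoring catalysis
        reachable = set(food)
        changed = True
        while changed:
            changed = False
            for i in active:
                reactants, product = reactions[i]
                if product not in reachable and all(r in reachable for r in reactants):
                    reachable.add(product)
                    changed = True
        # prune reactions lacking reachable reactants or a reachable catalyst
        keep = {i for i in active
                if all(r in reachable for r in reactions[i][0])
                and catalysis[i] & reachable}
        if keep == active:
            return keep
        active = keep

# Toy network: a self-catalyzing ligation bootstraps itself into actuality,
# while a reaction whose catalyst can never be produced drops out.
food = {"a", "b"}
reactions = [(("a", "b"), "ab"), (("ab", "a"), "aba")]
catalysis = {0: {"ab"}, 1: {"zz"}}   # "zz" is never producible
print(max_raf(food, reactions, catalysis))  # → {0}
```

The pruning loop mirrors the point made above: the surviving foreground set is not imported from outside but settles out of the network's own mutual informativeness.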
According to Reg Cahill's process physics, this mutual informativeness
is not only an essential characteristic of biological systems, but also of
nature as a whole. In the process physics model, it is the mutual
informativeness among inner-system activity patterns that gives rise to a
complex world of criticality-seeking, habit-establishing foreground patterns.
Similarly to what happens in the emergence of life and consciousness
through autocatalysis and perceptual categorization, respectively, these
foreground patterns get to "bootstrap" themselves into actuality from an
initially undifferentiated background process of noise-driven, mutually
informative activity patterns. Because of this mutual informativeness,
which enables it to avoid all the problems associated with pre-coded
information, process physics should be considered a prime candidate for
a nonexophysical-nondecompositional way of doing physics.
Process physics, by virtue of its "co-informativeness-based" 140 way
of doing physics without a box, introduces a non-mechanistic, non-
deterministic modeling of nature based on a self-organizing and noise-
driven iterative update routine. As such, process physics can be said to
work according to a Peircean principle of precedence (Peirce 277), so that
it has no need for lawful physical equations and can thus avoid the many
problems and fallacies that are associated with our conventional way of
doing physics in a box.
By means of its "habit-establishing, stochastic recursiveness," the
process physics model can give rise to constantly renewing activity patterns.
In contrast to mainstream physics, in which any sense of processuality
has been so worryingly absent, the thus achieved "becomingness" can be
associated with what we, in everyday life, experience as time. Instead of
ending up with an utterly timeless and non-processual world, such as the
block universe which mainstream physics claims that we live in, the
process physics model, by going through its habit-establishing iterations,
gradually gives rise to an entirely processual network of self-organizing
activity patterns that exhibit lots of familiar behaviors that can also be
found to occur in nature itself. In so doing, the process physics model will
slowly but surely start to show more and more features that are also so
characteristic of our own natural universe: non-locality; emergent three-
dimensionality; inertia; emergent relativistic and gravitational effects;
emergent quasi-deterministic classical behavior; creative novelty; inherent
time-like processuality with open-ended evolution; and more. Finally,
perhaps the most directly appealing aspect of process physics may well
be its full compliance with our best theories on life and consciousness.

Appendix A: Addendum to §2.5.2 "Events in nature can be pinpointed
geometrically (or not?)"
Mathematically formulated physical equations do not represent nature-
in-itself. In physics, the actual target systems are samples of raw empirical
data that will acquire their eventual processed form only through the
intimate interplay between what we in earlier times liked to label as
subjective and objective aspects of nature. 141 Accordingly, physical
equations are to be thought of as intersubjective phenomenologies pertaining
to how the results of measurement interaction are presented in terms of
theory-laden data; they do not pertain to nature itself.
In other words, physical equations do not directly represent nature
itself and there is no objective, one-on-one representational relation between
any within-nature events and physical equations. At the end of the day,
physical equations are instrument- and sensation-based phenomenologies
of nature, rather than fully corresponding representations; they are
approximations of regularities found in observational data whose coarse-
grainedness depends on which measuring instruments, which measuring
methods, and which background theories are being employed (see Section
3.2.2 for more details).
For this reason, we should definitely reexamine the presupposition of
relativity theory that events and observers in nature can be pinpointed
geometrically. Instead of treating mathematics as the language of nature-as
Galileo did when he introduced the geometrical timeline, thereby basically
giving rise to modern physics-it makes much more sense to consider
mathematics (and thus geometry) a later-arriving human artifact. Indeed,
as Lee Smolin suggested in Time Reborn (33, 245), we should think of
mathematics as a tool by means of which we can analyze, predict, and
postdict the data extracted from observationally-intellectually singled-out
natural systems.
To understand how this could be, we should again focus on how we
sculpt our conscious view of the natural world we live in (see Sections
4.2.3 to 4.3.1). For this, we should realize that we, as seamlessly embedded
conscious organisms, learn to make sense of nature by associating "world
states" with value-laden "body states." In other words, by living through
their own body states, conscious organisms will gradually learn to value
nature in terms of how it dynamically affects their internal milieu. 142 It is
along these lines that the organism develops value-laden sensorimotor
and somatosensory action repertoires through which both musculoskeletal
and cognitive acts can be repeated while matching, repertoire-specific
body states are called to the fore. The thus evolved psychophysical action
repertoires are "dispositional" in the sense that they constantly reroute
their firing patterns under the influence of novel stimuli. 143 Accordingly,
the organism can develop adaptive behavior even within rapidly changing
living environments (all this has been explained in greater detail in Sections
4.2.2 to 4.2.4).
Hence, what in early, pre-natal life is still a blooming, buzzing confusion
(see James, "Percept," 50) is thus given bodily meaning and gradually
becomes categorized into an inner- and outer-organism world. 144 The thus
developed experiential world does not represent the so-called "real world
out there," but arises within a joint effort of world and organism 145 as
ongoing perception-action loops are engaged in bringing somatically
meaningful, non-representational percepts into actuality.
It is only in this non-representational way that the richness of our
percepts, Gestalts, conscious categorizations, and higher-order concepts
has been able to emerge. What is more, a case can be made that all
mathematical concepts have actually originated in this manner. This may
indeed be quite hard to swallow for some-especially for those who, in
the spirit of Galileo, like to think of mathematics as the pre-given language
of nature. But despite the sobering effect of this non-representational
approach, it also has a lot going for it. First of all, it offers an evolutionary
account of mathematical thinking. Secondly, it opens up avenues for
philosophers and scientists alike to think of nature as being routine-based
(i.e., habit-forming) instead of law-governed (i.e., obeying math-based
physical equations). In this way, thirdly, the question "Why these laws?"
can be dropped and replaced by the question of how habit-forming activity
patterns can arise, persist, and evolve in nature.
Analogous to non-representational conscious experience, then,
mathematics should ultimately be seen not as representing nature, but as
a tool that works with great precision within certain well-defined contexts
of use. On this account, geometry, too, is finally no more than an idealizing
tool with great pragmatic use. But its numerical specifications of lengths,
surfaces, and volumes should be seen as figures of speech rather than as
realistic representations of concrete reality-let alone as concrete realities
by themselves.

ENDNOTES

1. On a large-scale, supragalactic level, 21st century mainstream simulations
of the universe typically take on the form of a neural network-like cosmic
web. Notably, the Virgo Consortium's "Millennium simulation" (see Fig.
5-10) and the NASA- and NSF-funded "Bolshoi simulation" are some prominent
examples of such simulations.

2. Different observers moving at different speeds may not experience the
same well-separated events in the same order, so that it cannot be confirmed
if these events are actually simultaneous or not.

3. While relativity of simultaneity seems to lead logically and inescapably
towards the negation of the passage of time, it is by no means an absolute
fact (Capek, 508). That is, the relativity of simultaneity will only occur under
certain specific circumstances, namely it requires: (1) well-separated events
that (2) must have come into actuality before they can ever (3) be detected
by observers that are moving relative to one another with a significant enough
difference in velocity.

4. The term "exophysical" refers to an external, non-participating observer
looking out onto an allegedly entirely physical world. Moreover,
"decompositional" refers to the nature-dissecting acts of decomposition that
have to be performed before physics as we know it can be done in the first
place (see van Dijk, "The Process").

5. Professor of physics at Flinders University in Adelaide (Australia), and
winner of the 2010 Gold Medal of the Telesio-Galilei Academy of Science.
6. "Perception-action loops" is actually short for "sensation-valuation-motor
activation-world manipulation" loops.

7. Please note that, in cognitive neuroscience, mutual informativeness is also
characteristic of the process of subjectivity (see Edelman and Tononi 126-130).

8. The term horror vacui, typically paraphrased in English as "nature abhors
a vacuum," is often attributed to Aristotle, and refers here to antiperistasis,
the alleged phenomenon through which a vacuum behind a projectile in flight
is filled up by air coming from the front tip of the projectile.

9. Please note that the average speed still had to obey Aristotle's so-called
"law of motion": V ∝ F/R (with V = speed, F = motive force, and R = resistance
of the medium), which expressed Aristotle's belief that the rate of falling was
proportional to weight and inversely proportional to the density of the medium.
So, it was commonly agreed upon that air resistance and viscosity of water
would indeed slow down falling objects, thus to a certain extent affecting the
rate at which the falling speed would build up.

10. The data shown here can be found in Galileo's original working papers
on folio 107v [with "folio" meaning sheet, and v standing for "verso," which
is Italian for "back side" as opposed to r, which stands for "recto" (i.e., front
side)]. The working papers are being kept in Florence, in the Biblioteca
Nazionale Centrale (the Central National Library). The 160 surviving sheets
of the working papers are now bound as Volume 72 of the Galileo
manuscripts-also known as "Codex 72" or "Manoscritto Galileiano 72."

11. Euclid's magnum opus on geometry had already been published in Ancient
Greece around 300 BCE (see Byrne).

12. The equally long time stretches could be the intervals between (1) the
ramp's warning bells, (2) the water level markings of the water clocks, (3)
the sand level markings on an hour glass, (4) the completed swing periods of
a pendulum, or (5) any other indication of time units that can be used in an
experiment.

13. Doing physics in a box: this term, coined by Lee Smolin in his 2013 book
Time Reborn, refers to the long-established practice of isolating some aspect
of nature (or system of interest) from its surroundings and then trying to
empirically identify and mathematically capture the regularities in its behavior.

14. The term "beables," coined by John Bell (Speakable, 174), refers to those
existents purported to make up the unobservable realm "beneath" our
observation-based phenomenal world.

15. E.g., physical parameters such as height, distance, water level, or the
angular position of a clock's hand (i.e., the "time pointer").

16. Furthermore, there has to be agreement on the rate of sampling (e.g., a
calendar or timeline with a page, or segment, for each day, week, month,
year, or other measure of time). Next to that, one also has to decide which
aspects of nature to associate with the geometrical timeline, etc. (see Cahill,
"Process Physics: Self-Referential," 3; Van Dijk, "The Process").

17. One of his first spontaneous experiments was to time the swing periods
of a chandelier by using his pulse. In his later experiments, Galileo would
also exploit several other means of measuring time, such as the rising level
of a water clock, or, indeed, the increasing amount of synchronously ringing
downhill bells.

18. See McTaggart's A and B series.

19. Depending on which experiment was being performed, the time indicator
markings in question were: (a) the water level markings, or (b) the warning
bell positions.

20. Conventional definitions typically refer to something more fundamental
in order to specify the definiendum. However, this way of putting together
definitions will typically lead to infinite regress or circular reasoning. For
instance, the short and simple definition of time as "that which is measured
by a clock" depends on the definition of a clock as "a measuring instrument
for time" - a dependence relation which clearly involves circularity. The only
way to avoid this infinite regress and circularity is simply to terminate the
search for any more fundamental underpinnings, and, instead, to adopt an
operational definition that works for all practical purposes. According to Hans
Albert's Münchhausen Trilemma (in which these three elements of infinite
regress, circular reasoning, and termination of the justification procedure
form an inescapable triadic unity), every such definition necessarily has to
remain non-exhaustive (see Albert 11-15).

21. The "double calibration" consists of: (1) synchronizing the frets or alarm
bells with the back-and-forth dangles of a free-swinging pendulum; (2)
synchronizing the frets or alarm bells to each other by hearing, that is, by
listening if their consecutive sounds, triggered by the descent of a downward
rolling ball, form an even sequence.

22. To begin with, it can be questioned if the postulation of such a superintellect
is scientifically acceptable at all. After all, its existence can neither be
confirmed nor falsified. Furthermore, it is also quite hard to see how it should
ever be possible to gather, in one go, all of nature's information - involving
all positions, velocities, and forces of all particles in the universe.

23. Over the years, determinism has taken on a somewhat less rigorous
guise: reductionism. And although reductionism, in turn, comes in many
different flavors, its general idea is that all of nature can be brought back to
its most elementary physical foundations, which should then be expressible
in terms of a concise set of physical equations. The physical equation has
managed to boost its status from convenient, approximating tool (man-made
artifact / abstraction / simplifying idealization) to an all-encompassing, literal
representation of nature.

24. Newton's Second Law of Motion (F=ma), for instance, was thought to
pertain not just to one specific physical body. Instead, Newton deemed it
universally valid for all masses in the universe.

25. This includes quantum mechanics (Cartwright 163-216) and, according
to Giere, also relativity theory (Giere 250n13).

26. Please note that a noise factor may be built into many physical equations
so that external influences can be taken into account. However, this makeshift
procedure is not used for "laws" since so-called laws of nature are thought
to give deterministic outcomes in many cases.

27. Since the term "initial conditions" is typically used for some specific,
carefully selected entry out of a larger set of temporally arranged alternatives,
the term "interim conditions" is probably more accurate.

28. Remarkably, contemporary mainstream physics seems to opportunistically
"smuggle" time back in by stating that the block universe entails a "causal
structure" of some kind. In this vast network of cause-effect chains all events
in the history of nature are thought to exist together at once - albeit with their
own particular spatiotemporal coordinates (see Smolin, Time, 58-59). Since
causal relations have a fixed order of events, with causes before effects, every
causal chain, or "worldline," can be said to imply the unidirectionality of
time. This impression of time is typically blamed on an asymmetry inherent
to the spatiotemporal states of the world (or slices of the block universe) as
they exist side by side within a causal order, rather than an asymmetry of
time as such (see Davies, "That Mysterious," 9). Arguably, however, this line
of reasoning is flawed, since it labels as nonexistent what has already been
abstracted away beforehand (i.e., the process of nature loses its processuality
as it is reduced to static slices that are frozen solid within an asymmetrical
causal order).

29. Please note that Eddington's experiment pertained to predictions based
on Einstein's general theory of relativity, not special relativity. However,
because general relativity can be considered an elaboration of special relativity,
it could still contribute to the adoption of the idea of timelessness within the
scientific community.

30. These experiments were (1) the "clock-hit-by-light experiment," pertaining
to what would happen if one were to chase after a light beam reflected off
the face of a running clock, and (2) the "train-and-platform experiment,"
involving two lightning bolts striking simultaneously for one observer and
at different times for the other.

31. Next to the philosophical branch of process thought, we may think, for
instance, of David Bohm's, Milič Čapek's, and Ilya Prigogine's processual
worldviews (see Griffin, Physics).

32. For a larger list of process-minded physicists, see Eastman and Keeton.

33. Please note that this summation does not include Einstein's assumptions,
because we are here dealing with the assumptions that gave rise to the block
universe interpretation as based on Einstein's special theory of relativity, not
STR itself. See also Bros.

34. In Einstein's time it was unknown if the universe actually extended beyond
our own galaxy.

35. It was not until Minkowski's later introduction of the 4-dimensional
spacetime construct (1908) that space and time were first interpreted as being
an inseparable whole. Therefore, Einstein's initial assumption was that the
whereabouts and "whenabouts" of events in nature could be specified in terms
of three space coordinates and one time coordinate (x, y, z, t).

36. This argument can be countered as follows: since consciousness enables
us to imagine and "somatically appreciate" - i.e., give body-related meaning
to something-to-be-perceived in terms of the conscious organism's body states
(see Damasio 133-167; Edelman and Tononi 82-110) - the different future
scenarios with which we may have to cope, consciousness "steers" our current
behavior in anticipation of what is expected to come. In a similar way, after
all, Pavlov's dog learned to associate the ringing of a bell with the appearance
of food, which triggered the secretion of saliva, so that the dog would be
better prepared to digest the food. Accordingly, from early life onwards, conscious
organisms gradually learn to value what they are undergoing by how their
body states are affected by it. Consciousness becomes a lived "anticipatory
remembered present" (see Van Dijk, "The Process-Informativeness") - i.e.,
a bound-in-one culmination of direct perception and value-laden memories
as experienced from within - which definitely has causal consequences for
physical reality. When held in the spotlight of the third-person perspective
of physical science, however, it remains notoriously elusive.

37. Next to big bang nucleosynthesis (which is the main source of hydrogen
[H] and helium [He] in the universe), there is also stellar nucleosynthesis and
supernova nucleosynthesis (synthesizing H and He into the more heavy
elements of the periodic table).

38. See, for instance, Kauffman, "Foreword: Evolution," 9-22.

39. As quoted in (Popkin 65).

40. Please note that "abstraction" is not the act of reducing concrete, real-
world objects, events, relations, and/or phenomena to their most pure and
ideal Platonic forms. Rather, abstraction is the dissection and reduction of
the process of nature to symbols, geometric elements, algorithms, etc., that
are meaningless by themselves. They can only achieve concrete significance
when situated within a socioculturally evolved, meaning-providing context
of use. Like this, in order to make any sense at all, they need to be considered
within a semiotic process where they can form a unified threesome with an
observer-individuated referent (i.e., target system or aspect of interest) and
with an impact on the sign-interpreting observer; see Section 3.2.5.

41. For Newton, space and time were absolutes that did not depend upon any
physical goings-on. Rather, they made up the backdrop within which the
contents of nature could be accommodated. Absolute space was seen to be
unchanging and immovable. Time, on the other hand, was thought to be
absolute and universal in the sense that (a) it was supposed to be valid for all
of nature simultaneously, and (b) it was held to run its course irrespective of
any events being present to unfold "within" this absolute time.

42. Post-geometric physics: Any field in physics where geometrical dimensions
are used to construct - via an act of preparatory stage-building - a "prefab
arena" in which the events of interest should run their course.

43. The 4-dimensional spacetime continuum of relativity theory does not
account for nonlocality. Therefore, at the least, it can be characterized as an
idealizing simplification.

44. Initially, in special relativity, Einstein did not take gravitation into
account. Only with the later development of his general theory of relativity
were gravitation (and thus mass) presented as a natural consequence of the
curvature of the geometrical spacetime continuum. As an addendum to Einstein's first
(special) theory of relativity, Minkowski's geometrical spacetime construct
only had to deal with geometry, point events, point observers, and any (less
than or equal to light-speed) causal connection between them (see Papatheodorou
and Hiley).

45. This can be derived from Minkowski's formula for the constancy of the
world interval I = s² − c²(t₂ − t₁)² = constant (with c = 3·10⁵ km/sec = 3·10¹⁰
cm/sec; and with spatial interval s = √[(x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²]).
Like this, s is expressed in terms of the spatial and temporal coordinates x₁,
y₁, z₁, t₁, x₂, y₂, z₂, and t₂, which are geometrically associated with the events
E₁ and E₂. Please note that all these geometrical coordinates are specified
from the perspective of the observer whose reference frame is being applied.

46. In Newtonian physics, time is by convention considered to pass by at an
even rate, while space is held to be spread out in equally long stretches as
well. In Newtonian absolute space, therefore, the spatial distance between
two stationary point positions s₁ and s₂ will thus be the same for all observers
involved - moving or not. As a result, an object moving at uniform speed
between these locations will cover the given distance Δs = (s₂ − s₁) within
the same time interval Δt = (t₂ − t₁) - no matter which coordinate system is
being used. In Minkowskian space-time, however, spatial distance and temporal
duration are treated as equivalent. As a result, the invariance to be agreed
upon is that of intervals of spacetime as an integrated whole. Such intervals
are called "world intervals" and are denoted by a capital I (Minkowski).
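The frame-independence of the world interval described in notes 45 and 46 can be checked numerically. The following sketch is an illustration of my own, not part of the original text; the function names and sample event coordinates are arbitrary assumptions. It applies a standard Lorentz boost along the x-axis to two events and confirms that I = s² − c²(t₂ − t₁)² comes out the same in both reference frames:

```python
import math

C = 3.0e5  # speed of light in km/sec, as in note 45

def boosted(event, v, c=C):
    """Lorentz-boost an event (x, y, z, t) along the x-axis with velocity v."""
    x, y, z, t = event
    gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
    return (gamma * (x - v * t), y, z, gamma * (t - v * x / c ** 2))

def world_interval(e1, e2, c=C):
    """Minkowski's I = s^2 - c^2 (t2 - t1)^2 for two events."""
    x1, y1, z1, t1 = e1
    x2, y2, z2, t2 = e2
    s_squared = (x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2
    return s_squared - c ** 2 * (t2 - t1) ** 2

# Two sample events in frame A (coordinates in km and seconds)
E1 = (0.0, 0.0, 0.0, 0.0)
E2 = (4.0e5, 0.0, 0.0, 2.0)

# The same events as seen from frame B, moving at half light-speed along x
v = 0.5 * C
I_A = world_interval(E1, E2)
I_B = world_interval(boosted(E1, v), boosted(E2, v))
# I_A and I_B agree up to floating-point rounding, even though the
# individual spatial and temporal coordinates differ between the frames.
```

Each frame disagrees about Δs and Δt taken separately; only the combined world interval is observer-independent, which is precisely the sense in which Minkowskian space-time treats spatial distance and temporal duration as equivalent.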

47. See also Weinert 184 for the link between causality (causal chain) and
the before-after asymmetry.

48. Since measurement coordination is needed for any standard clock or
yardstick - i.e., the assignment of (1) a fixed standard rate by which ideal
clocks should be expected to run, or (2) a fixed standard length for the span
of an ideal measuring rod - measurement practice first requires a reliable
theoretical account in which this standard measure is to be grounded. But, in
turn, this theoretical account can only turn to measurement practice in order
to get the data on which to base its theoretical inferences of how to arrive at
a reliable context-independent standard measure. In other words, measurement
coordination is needed to guarantee that all ideal clocks will operate at a
universally identical, standard rate when moving at any speed anywhere in
the cosmos.

49. As mentioned earlier, there have also been some experiments that did not
agree with the relativity theories. Examples are the experiments that led to
the "bore hole anomaly," the "earth fly-by anomaly," and, of course, the "dark
matter and dark energy anomalies." Instead of leading to doubt about the
theories, however, these experiments are typically thought to be indications
that the data are, in one way or the other, incomplete (see McCarthy 358).

50. However, only in hindsight - i.e., only after the synchronization of clocks
by two spatially separated, moving or non-moving observers - can two "inner-
cone events" be identified as lying on the same "simultaneity plane" within
that light cone (see Fig. 2-3). This strongly suggests that the synchronization
events (i.e., light emission, reflection, and reabsorption) must actually have
occurred before that, and that they do not pre-exist in the future light cone.

51. It is already a misleading idealization to treat position, time, events,
observers, clocks, measuring rods, and the like as if they were truly representative
of a "real world out there" and as if they can be successfully held in one's
thoughts separately from the process of nature itself.

52. As many generations of physicists before us have done, we could decide
to just stick to the well-beaten path of geometry-based approaches, which
have been so carefully laid down by Galileo, Newton, Einstein, and Minkowski.
If we would indeed choose to do so, we would eventually have to commit
ourselves to abstracting the process of nature into geometry-based spatial
and temporal dimensions, point events, point observers, causal light cones,
and so forth. When thinking of mathematics and geometry as tools (Smolin,
Time, 34), rather than regarding them as parts of an eternal, perfect, and exact
language of nature, however, these geometry-based abstractions are more
likely to turn out as idealizing figures of speech, not as representations of
concrete reality.

53. Kepler's laws of planetary motion only pertained to planets orbiting
around the sun and did not apply to the moon. Also, they provided no
explanation for the motion of the planets, but only succeeded in (approximately)
describing their orbits. Newton's universal law of gravitation, together with
his three laws of motion, not only provided an explanation for planetary
motion, but could also be applied to the moon and the lunar satellites of other
planets.

54. All this is typically expected to occur in an empirically adequate way,
that is, with chronological, one-on-one empirical agreement between
measurement data and data-reproducing algorithms, or, otherwise, by way of
statistical goodness of fit - as is the case in quantum mechanics.

55. This quote is Fritjof Capra's rendition of a personal conversation with
psychologist R. D. Laing at a 1980 conference on "Psycho-Therapy of the
Future," held in the Monasterio de Piedra Hotel near Zaragoza, Spain.

56. Please note that the system environment and observation facilities are
themselves thought to be made up from their own individual system constituents
as well. For instance, the observation-enabling support systems, among which
there are: (1) the sensory system, (2) accessories, and (3) research facilities,
may respectively be divided into: (1) the eyes, optic nerves, visual cortices,
etc.; (2) engineering tools and research equipment, such as wrenches, cloud
chambers, and photo-detectors; (3) lab buildings, cleanrooms, scientific
libraries, and so on. In turn, all this is embedded in a greater embedding
environment and set within a historically evolved context of sociocultural
and scientific use (see Van Dijk, "An Introduction," 77; also "The Process").
On the whole, however, all aforementioned systems are typically taken for
granted, neglected or left out of scope. Depending on the focus of the
investigation, as well as the personal preference and philosophical persuasion
of the chief investigator, any of the supporting subsystems on the subject side
may be handed over to the target side. A measuring instrument may itself
become part of the system-to-be-observed and the subject side will have to
trust "the naked eye" to gather its empirical data.

57. This present moment indicator, or time pointer, moves externally from
the timeline at a uniform rate, or else it cannot provide the otherwise completely
static timeline with any "dynamicity" or a distinction between past and future
(see Cahill, Klinger, and Kitto).

58. Please note that, in line with Einstein's famous equation E=mc², this
content is typically thought of in a material-energetic sense. In the timeless
interpretation of quantum physics, it is thought that the stationary wave
function can specify all possible configurations of all the universe's material-
energetic content that is compliant with the universe's actual initial conditions:
"In quantum mechanics, [the wave function] is all that does change. Forget
any idea about the particles themselves moving. The space Q of possible
configurations, or structures, is given once and for all: it is a timeless
configuration space ... .[T]he probability density [of this configuration space
Q] has a frozen value - it is independent of time (though its value generally
changes over Q). Such a state is called a stationary state ... .All true change
in quantum mechanics comes from interference between stationary states
with different energies. In a system described by a stationary state, no change


takes place .... The suggestion is that the universe as a whole is described by
a single, stationary, indeed static state" (Barbour 229-231).

59. Admittedly, this is of course a crude caricature, but a telling one nonetheless.
After all, just as the shell is a crucial part of the egg that will be lost in the
process of separation, there are also various aspects of nature that will be lost
in the above process of decomposition. First of all, everything that is related
to the subject side-measurement instruments, including clocks and measuring
rods, as well as the conscious observer and all unquantifiable subjective
aspects of observation-is separated from what is held to be the entirely
physical "real world out there." Also, space, time, and mass-energy are indeed
first artificially decomposed from the undivided whole which is nature in the
raw, before it is attempted to glue them together again. But because the
initially unbroken "whole is more than the sum of its parts," any act of a
priori decomposition will cause something essential in nature to be lost. By
the way, the well-known phrase "the whole is more than the sum of its parts"
can easily be misunderstood, because in its deepest essence nature does not
contain any real "parts." That is, every "part" of nature is only a "part" in the
sense that it is subjectively singled out and linguistically labeled as such.

60. The framework of physical equations for each of those theories is subject
to all sorts of different interpretations. It is of course widely known that there
is a large number of different interpretations of quantum mechanics, among
which we can find the Copenhagen interpretation, the Bohmian hidden-
variable interpretation, Everett's many-worlds interpretation, Einstein's
neorealist interpretation, Von Neumann's extension of the Copenhagen
interpretation, and Heisenberg's potentia-actuality interpretation (see Herbert
16-29). Furthermore, next to the block universe interpretation of the theory
of special relativity, there is also a dynamic block universe interpretation, as
well as a Lorentzian and neo-Lorentzian interpretation, to name a few. Even
for the quite straightforward classical Newtonian mechanics there are at least
four empirically equivalent interpretations: (1) the action-at-a-distance
interpretation; (2) the gravitational field interpretation; (3) the curved space
interpretation; and (4) the analytical-mechanistic interpretation (see Jones).

61. In full, these acronyms are read as: "Large Hadron Collider" at the
"European Organization [formerly: Council] for Nuclear Research" and the
"Laser Interferometer Gravitational-Wave Observatory." Both research projects
have undergone several technical updates to increase their sensitivity and
measurement range.

62. That is, the imperceptible counterparts of "physical observables."


63. These rudimentary activity patterns can thus be imagined to be emergent
from an initially undifferentiated vastness (e.g., via something not unlike a
phase transition). Hence, from very early on, nature can already be thought
of in a mutually informative (hence, epistemic) as well as in an ontic sense.

64. DNA, for instance, has not been pre-available in the biosphere, but had
to evolve within it. In the prebiotic universe, it becomes even harder to find
something analogous to a symbol-based alphabet. In fact, the introduction of
an alphabet of symbols can be seen as part of pre-theoretical interpretation.

65. Necessarily, measurement outcomes will always have meaning-providing
interpretation associated with them due to the measurement and background
theories that give rise to the conversion of raw data into well-refined empirical
data (Kuhn, The Structure, 123).

66. Although target side and subject side can arguably be decomposed into
an arbitrary number of constituents, this cannot be done for the measurement
interaction between those opposing sides. This is due to what may be called
"the problem of the missing meta-observer" which is inherent to the use of
an epistemic cut between target and subject system. However, when allowing
such a new meta-observer to examine the finer details of measurement
interaction, the same problem will occur all over again, albeit this time between
the newly introduced meta-observer and the initial measurement interaction
that became the new target of investigation (see Von Neumann 352; Pattee;
Van Dijk, "The Process").

67. Depending on which metaphysical system is being used, these theoretically
assumed ur-differences may indeed be referred to as noumena, be-ables (i.e.,
the counterparts of observables), actualities, existents, and so on. In each
case, however, it should be noted that the use of the plural noun form already
involves a tacit elementary act of decomposition which dissects the one
undivided whole of nature into a multiplicity of constituent entities.

68. Please note that classical information theory allows various kinds of
members-"elementary items of information," such as symbols, signs, tokens,
bits, byte-sized bit strings, syllables, or words-to be used interchangeably.

69. As apparent from the example of weighing scales, which had already been
around in ancient Greece, the basic method of proportional comparison was
already available long before Galileo. He was the first, however, to systematically
apply it to time and distance combined.

70. Nowadays, this usually depends on the maximum frequency of measurement.
In Galileo's case, however, it depended on the shortest period that could
practically be achieved for the standard interval of time.

71. Here, "post-dictive" is used as the counterpart of "predictive." In this
way, it refers to the encoding of past empirical data into a potentially congruent
data-reproducing algorithm (i.e., a candidate physical equation).

72. It must be noted that both measurement encoding and predictive decoding
will always require ample interpretation on the part of the experimentalist.
As first mentioned by Thomas Kuhn (The Structure 123), all measurement
interpretation occurs on the basis of the actually applied theory (including
all relevant background theories).

73. After all, this would only call forth the same problem all over again (i.e.,
which production rules to use for putting together the data-smoothening
encoding and decoding encryptions) and thus lead to confusing circularity
and/or infinite regress.

74. This infinite regress of meta-observers is analogous to the homunculus
problem in cognitive science (see Edelman and Tononi 94, 127).

75. Wave function collapse: the coming into actuality of one specific
measurement outcome although the system-to-be-measured is thought to exist
in a superposition of equally probable quantum states prior to the conscious
measuring act. In absence of conscious observation the quantum states are
believed to exist all-together-at-once, this in analogy to the different time
slices in Minkowski's block universe that are thought to exist all-together-
at-once as well, thus leading to (1) the timeless view of the universe, and (2)
the arguable claim that our experience of time is completely imaginary. This
is why adherents of the block universe interpretation and relativity-inspired
interpretations of quantum physics like to dismiss consciousness as irrelevant
and illusory (see Smolin, Time, 59-64 and 80). The idea of consciousness
playing a decisive role in the collapse of the wave function has a controversial
history and has long been considered rather troublesome and unwanted by
many physicists. Hence, nowadays, mainstream physics has put its trust in
the quantum decoherence explanation in which wave function collapse can
be interpreted as being brought about by the hard-to-pin-down environmental
part of the quantum system under investigation-more or less along the lines
of Paul Dirac's idea of 'Nature making a choice' instead of consciousness.
The mathematical methodology behind the decoherence interpretation, however,
is definitely not without foundational problems either: "Under normal
circumstances ... one must regard the density matrix [i.e., the mathematical
tool that forms the main pillar underneath the decoherence approach] as some
kind of approximation to the whole quantum truth .... It would seem to be a
strange view of physical reality to regard it to be 'really' described by a
density matrix. The density-matrix description may be thus regarded as a
pragmatic convenience: Something FAPP [an acronym by John Bell that
means for all practical purposes, or, in other words, a figure of speech], rather
than providing a 'true' picture of fundamental physical reality" (Penrose,
803). And although the last word on this topic has probably not yet been said,
for now, we will round off the discussion with the argument from process
physics-namely, that the process of quantum measurement involves aspects
of decoherence as well as consciousness (see Cahill, "Process Physics: Self-
Referential," 7-9 and 24; Cahill, "Process Physics: From Information," 33).
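The decoherence picture discussed in this note can be illustrated with a minimal toy model (my own sketch, not from the source; the exponential damping factor and time scale are arbitrary assumptions, standing in for a full environmental calculation). A pure superposition has a density matrix with non-zero off-diagonal "coherence" terms; coupling to an environment progressively damps those terms, leaving what looks like a classical mixture of outcomes:

```python
import math

# Toy two-state system: |psi> = (|0> + |1>)/sqrt(2), real amplitudes.
# Density matrix entries: rho[i][j] = psi[i] * psi[j]
psi = [1 / math.sqrt(2), 1 / math.sqrt(2)]
rho = [[psi[i] * psi[j] for j in range(2)] for i in range(2)]

def decohere(rho, t, t_dec=1.0):
    """Damp the off-diagonal coherences by exp(-t/t_dec); the diagonal
    outcome probabilities are left untouched (a standard toy decoherence
    model, not a full treatment of the environment)."""
    d = math.exp(-t / t_dec)
    return [[rho[0][0], rho[0][1] * d],
            [rho[1][0] * d, rho[1][1]]]

rho_late = decohere(rho, t=10.0)
# The diagonal stays at [0.5, 0.5] while the off-diagonal terms shrink
# toward zero: interference is suppressed "for all practical purposes,"
# yet nothing in the formalism singles out which outcome actually occurs.
```

The final comment is the point Penrose presses in the passage quoted above: the damped density matrix looks like a classical coin toss, but it remains silent on how one definite measurement outcome comes about.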

76. Von Neumann used the term "abstract ego" as a name for the "immaterial
intellectual inner life," the "conscious mind," or the "center of subjectivity."

77. According to semiotics, semantic as well as pragmatic information can
thus be added to in-themselves meaningless data-signifying syntactical
symbols. For reasons of simplicity, the possible difference between sign
(a.k.a. sign function or signhood) and sign vehicle (a.k.a. token or signifier)
is ignored here. Instead, the terms meaning and sign are used to denote the
use of a certain token within a triadic sign relation. See Nöth 79 for more
details on the possible differences between sign and sign vehicle.

78. The preparation process may pertain to different activities in different
theoretical contexts. In quantum physics, it primarily denotes the process
through which a "quantum particle" is "soaked loose" from its embedding
environment so that it can be submitted to observation further on down the
line (see De Muynck 74-75, 83, 90-91, 94), and in classical physics it refers
merely to the process through which some interesting "physical" aspect of
nature is "individuated" into a target system.

79. See Mara Beller's 2003 article "Inevitability, Inseparability and Gedanken
Measurement" for some more background information on how Bohr arrived
at this interpretation.

80. Post-mathematical reinterpretation may also lead to an attempt to put
together an alternative, but equivalent, formulation of the initial physical
equations.

81. By value systems I mean neuromodulatory, hormone-secreting systems
that signal diffusely across the brain during biologically meaningful events,
thereby fine-tuning nervous pathways that are simultaneously active (Edelman
and Tononi 46-47). Because of value systems, the organism can become
capable of strengthening successful activity patterns and weakening those
that are of little to no use for survival.

82. Acquired, experientially fine-tuned action repertoires involve the
establishment of action-triggering dispositional memory pathways that enable
the organism to repeat an act when being confronted with similar exteroceptive
and interoceptive stimuli (Edelman and Tononi 105). As such, they co-
determine how an organism gets to live through its perception-action cycles.

83. Due to the close resemblance between the eyes of humans and octopi, the
various evolutionary stages that are hypothesized to have preceded the current
stage of the octopus eye are often thought to make up a good model for the
evolutionary development the human eye may have undergone.

84. A photodiode will typically alternate between two pre-set states that
enable it to send out a binary signal, thus communicating the detection or
non-detection of light. These states can be considered part and parcel of the
physical architecture of the photodiode.

85. See "causative" stimulus and "effectuated" nervous signaling; body and
mind; the physical world and the mental world; Descartes' res extensa and
res cogitans, etc.

86. Proteins are biomolecules that are absolutely vital to living organisms as
they participate in a vast repertoire of biological activities, such as DNA
replication, cell metabolism, biochemical signaling, and molecular transportation
(as in the blood's O2-binding protein hemoglobin).

87. Epigenetics is the study of how each organism's life events can affect the
expression of their genes as some genes are left free to act while others are
deactivated by methylation (see Phillips).

88. Although it is typically suggested that the interior of these black boxes
can be fully accounted for in a later meta-analysis, this follow-up analysis
will then inevitably bring along the same problem all over again. In this way,
just as in physics (see Section 3.3), only a pseudo-explanation is given, or
another sub-plot, of how the processing of inputs into outputs should occur.

89. Whenever signal-distorting noise can be kept at a low enough level,
Shannon's information and communication theory holds that messages can
be received without any data corruption.

90. See also John Pickering's papers on the relation between David Bohm's
active information and J.J. Gibson's view and on mutualism as an alternative
for conventional cognitivism for some added context.

91. This is analogous to the absence of a sharp and absolute borderline
between target and subject side in physics (see Section 3.1.2).

92. From the perspective of the orthodox physicalist paradigm, life (as well
as the related phenomenon of conscious experience) seems to be utterly
otherworldly. That is, by prematurely characterizing the early universe as an
entirely physical, mechanistic, and abiotic realm, the emergence of life
automatically becomes a sudden and radical departure from the mechanistic
status quo. As a result, reductionistic explanations have remained at a loss
ever since-requiring all kinds of counter-productive measures, such as
writing off conscious experience and the passage of time as illusory, just in
order to preserve the mechanistic, reductionistic worldview. Unfortunately,
though, such measures create more problems than they solve and leave more
things unexplained than they clarify. With that in mind, perhaps it is about
time to start questioning the mechanistic, reductionistic worldview, rather
than conscious experience and the passage of time.

93. When an autocatalytic network evolves a semi-permeable membrane, it
is typically referred to as an autopoietic system - a system capable of
maintaining and reproducing itself (see Maturana and Varela).

94. Of course, all non-equilibrium processuality will involve not only the
entire autocatalytic cycle, but also all of the (direct and indirect) in- and
outgoing flows of energy, material, and information.

95. An autocatalytic network and its environment have a co-dependent
symbiotic relationship, albeit an asymmetrical one, in that the impact of an
individual autocatalytic system on its environment is usually smaller than
that of the environment on one of its local autocatalytic networks. This is
simply because any autocatalytic network that exhausts its environment will
rob itself of its future resources, thereby sealing its own fate. It is far more
likely for an environment to grind down one of its in-house autocatalytic
networks than it is for some autocatalytic network to deplete its own
environment. For instance, parasitic organisms typically co-evolve with their target
species, so that they do not completely run down their hosts or, at least, so
that they will not kill any individual hosts before having had the chance to
let their offspring spread across the community. Admittedly, before the arrival
of lipid bilayer membranes, it would have been more likely for autocatalytic
networks to deplete their environment. However, once an open autocatalytic
VAN DIJK/Process Physics, Time, and Consciousness 195

system manages to evolve such a semi-permeable membrane, it will become more protected against the risk of being dissipated into its immediate
environment. Also, it will become more likely for the autocatalytic network
to develop adaptive repertoires that will enable it to withstand harsh
environmental conditions for shorter or longer time spans.

96. The organizational integrity involves more than just self-preservation. That is, this integrity not only refers to the capacity of an organism or ecosystem
to maintain its organization, it also pertains to the capacity (1) to develop
towards a higher level of complexity when conditions are favorable to do so;
and (2) to withdraw into an earlier state when energy inflow is depleting or
when resources are scarce (but still with the potential to return to the lost
higher-order level of organization). Such growth towards increasing levels
of complexity, which is characteristic of rich, healthy ecosystems, is called
"ascendency" (Ulanowicz).

97. For instance, under the influence of the day-night cycle, the organism
may develop early circadian rhythms that affect its inner biochemistry.

98. This idea of categories pertains to the organism's capacity to "classify" its environment in terms of what it does (and thus means) to the organism
and what it triggers the organism to do in response. As in Pavlovian conditioning
(where, after repetitive trials, an initially seemingly neutral stimulus, such as
the sound of a ringing bell, gradually gets an entirely novel and explicit
meaning as it becomes associated with the arrival of food), categorization
enables an organism to "get to know" its environment in terms of its own
"somatic status" (physiology, biochemistry, homeostasis, etc.) and adaptive
motor responses. As the organism develops a habit to repeat adaptive categorical
responses, it basically "sculpts" its ability to discriminate salient foreground
percepts from a less relevant background for adaptive purposes (see Edelman
and Tononi 48).

99. These features are the result of hominids adaptively living through their perception-action cycles, nutrient-waste cycles, O2-CO2 cycles, etc., during the course of evolution.

100. Anyone wanting to distinguish between sensation (as mere "uninterpreted" sensory stimulation and signal transduction) and perception (as the process
of valuative-emotive interpretation of sensory signals, stereotypically to be
performed by the brain) would probably prefer to use the term "sensation-
action cycle" instead of "perception-action cycle"-especially when the
investigative focus is on primitive cellular life. On the other hand, whenever
one prefers to avoid such a possibly premature distinction between sensation
196 PROCESS STUDIES SUPPLEMENT 24 (2017)

and perception, perception could also be thought of as exhibiting various levels of sophistication-from the primitive, rudimentary level to the highly
complex. After all, we cannot rule out beforehand that there may be an internal process of sense-making even in primitive organisms; given the prominent role of value constraints even in early life, this possibility should not be excluded too soon. Indeed, early "interpretative" valuation cycles may
at first sight not yet be functional as such, or be instead so rudimentary as to
be negligible. However, even in its prebiotic stage, the universe is-metaphorically
speaking-filled to the brim with nonequilibrium cycles. Whenever any
"individual" non-equilibrium cycle gets to be absorbed into another one, or
when an existing cycle evolves an inner sub-cycle such that the smaller, nested
cycle becomes relevant in maintaining the whole of the greater, overarching
one, then, to the best of our knowledge, we can consider the nested sub-cycle
to be of organizational value to the whole. What is more, the two cycles can in fact be considered mutually meaningful, and so can all other non-equilibrium activity patterns in the universe. Therefore, even such low-level valuative activity can be considered relevant enough-at least potentially-for its early form of proto-sensitivity to be taken seriously (see Section 4.2.4 for further details).

101. Fitness, organizational integrity, metabolic rate, the morphology and functionality of habitually grooved biochemical pathways, etc., can all be
considered possible aspects of the future course of development of light-
sensitive cycles.

102. Originally coined by Gestalt psychologist Kurt Lewin as Aufforderungscharaktere, the concept of "valences" later inspired J. J. (James)
Gibson to develop his theory of affordances (The Ecological, 119-135). See
also John Pickering's paper ("Active") on the relation between Gibson's view
and David Bohm's active information. This will become particularly relevant
later on, in Sections 4.3 to 4.3.3.

103. That is, not only different patterns of exteroceptive stimuli that are held to pertain to the "state" of the outer-organism world, but also proprioceptive signals pertaining to the organism's musculoskeletal positions and movements within that world.

104. That is, the totality of interoceptive patterns relating to the entire
homeostatic and physiological condition of the organism's body.

105. This "conscious now," for which Edelman coined the term "remembered present," can be thought of as an ongoing conscious scene of self and world
(see also Edelman and Tononi 102-112). In higher-order organisms-capable
of symbolic thought, language, and hence the construction of imaginary
"storylines" about possible futures-this remembered present can even be called an "anticipatory remembered present."

106. In fact, this narrows down the range of possible neural patterns to a
relatively small set, thus leading to invariability.

107. For example, influx versus dissipation; system constraints versus system
dynamics; excitatory versus inhibitory forces; synaptic growth versus decay.

108. As mentioned earlier, such a threshold can thus form a local pocket of
potential (a.k.a. potential well).

109. It must be emphasized that, although SOC-systems are typically thought of in terms of some kind of constituent elements (e.g., sand grains, neurons,
carriers of disease, solar flares, etc.), these elements are basically singled out
by our subjective nature-dissecting gaze. They should therefore not be
considered truly atomistic elements of the system in question. Instead they
would better be seen as relatively autonomous process-structures (Jantsch
21-24) that may perhaps be treated as individual constituents, but are ultimately
seamlessly embedded endo-processes within the greater embedding process
that our nature-dissecting gaze has labeled "the SOC-system." These SOC-
systems are not composed of some finite set of static unchanging components,
but of endo-processes that should be understood as relatively stable manifestations
of nature's processuality (e.g., a sand grain may appear to be atomistic, but
has a deeper processuality within it). So, despite our learned habit of depicting processes in terms of interacting objects-which, historically, has proven to be of great didactic use-this mode of operation ultimately results in a practically useful object-oriented figure of speech that, despite appearances, has no absolute truth to it.

110. This is in full agreement with the meaning of "mutual information" in Sections 4.2.1, 4.3, and 4.3.1.

111. Nature is not made up of quasi-isolated, equilibrium-seeking systems such as those that are portrayed by classical thermodynamics. Dissipative
systems that behave according to non-equilibrium thermodynamics (NET)
are the rule, rather than the exception, and on many levels of organization
these systems show signs of self-organized criticality (Bak, How, 5; Jensen 2).

112. By implementing an alternative to exophysical representationalism (ER) and psycho-physical parallelism (PPP), we can avoid the fallacy of
misplaced concreteness as well as what I like to call the physicist's fallacy
(namely: "To suppose that the objects of thought, as found in introspection,
must have their origin in independently existing external objects residing in the entirely physical 'real world out there', instead of being sculpted into
actuality through a process of sense-making that takes place within the integral
and inseparable whole which is the undivided organism-world system"). Last
but not least, getting rid of PPP and ER may help us not to take so literally the would-be fundamental concepts of "state," "system," "apparatus," and "measurement"
that were criticized by Bell. Instead, it would become clear that these concepts
are ultimately just figures of speech-convenient within a certain context of
use, but meaningless without it.

113. The most serious candidate for this "law without law" criterion seems
to be what Charles Sanders Peirce called a "tendency to take habits" (277).

114. Here Wheeler did not include an explanation of what "the boundary of
a boundary is zero" should mean. He probably meant to say that it is a
fundamental assumption in physics that various conservation laws hold in
every physical system that is properly isolated from its environment (see Von
Kitzinger 177).

115. This last remark-about physics having to be, in a sense, foundation-free-can be linked with the second and fifth requirements on the list. That
is, if we are to avoid any logical paradoxes, impossibilities, infinite regresses,
etc., we should stay away from using any hypothetical set of elementary
building blocks as a foundation. Instead, what Wheeler calls "existence"
should keep itself "up and going" through recursive loops capable of
"bootstrapping" themselves into actuality from an otherwise undifferentiated
background (see Chew; Cahill and Klinger, "Pregeometric"; Cahill, Klinger
and Kitto; Cahill and Klinger, "Bootstrap").

116. As a possible solution for the foundation problem, it seems desirable to rethink Geoffrey Chew's bootstrapping procedure, which was later used by
early string theory pioneers, such as Veneziano, to formulate string theory-see
also Cushing for a historical overview. It should be noted, however, that this
updated version should not be one in which the same foundational problem
is being invoked all over again by introducing strings, elementary particles,
or other a priori entities that have to be bootstrapped into existence.

117. Among these referents may be found the earlier-mentioned electrons, photons, and electromagnetic fields, but also all so-called elementary particles
of the standard model of particle physics. To the best of our current knowledge,
there does not seem to be any explanation for the physical equations that we
use to specify the behavior of these entities.

118. An intrinsic present moment effect causes the external present moment
indicator to become redundant (see Section 2.1.3 for more details on the
external present moment indicator).

119. Trying to model nature with the help of such supposedly fundamental
physical constituents necessarily has to rely on pre-theoretical interpretation.
And in the Cartesian-Newtonian paradigm the first task to be performed
during pre-theoretical interpretation is to draw the Galilean cut which slices
away any subjective aspects of the phenomena under investigation. However,
as has been emphasized throughout this paper, it is a mistake to think that
this would successfully divide nature into, on the one hand, "entirely physical
constituents" and, on the other hand, our "entirely subjective experiences"
of those constituents. This would amount to the undesirable bifurcation of
nature, which, once having been put into effect, cannot be undone. That is,
"nature in the raw" cannot be cut into bits and pieces and still be kept intact,
i.e., in conformity with "naked fact."

120. Please note that the word "deepest" implies a layered hierarchy of lower-
and higher-order levels of organization. However, this use of language should
be considered metaphorical rather than true to nature; in reality, it makes
more sense to think of nature in a holarchic way-with each part being a
seamlessly integrated member of the whole in which it participates, and, in
turn, with each whole itself being interpretable as such a seamlessly integrated
part as well (see Koestler). All this is characteristic of self-similar fractal
organization.

121. As already mentioned in Section 3.1.2, there are many different ways
to refer to the initially unlabeled natural world. A wide variety of names can
be used, all of which have their own context of use and are the result of a
specific set of beliefs on how nature works. Although these terms-the Kantian
"noumenal world" or "nature-in-itself," John Archibald Wheeler's "pre-
geometric quantum foam" or "pre-space," David Bohm and Basil Hiley's
"holomovement" and "implicate order," Bernard D'Espagnat's "veiled reality,"
the ancient Greek "apeiron," John Stewart Bell's world of pre-observational
"beables," or other words, like "vacuum," "void," or the Buddhist "plenary
void"-can all be used to refer to this primordial stage of nature, no one of
them can be crowned as the ultimate candidate.

122. As far as I know, Joe Rosen is not related to theoretical biologist and
biophysicist Robert Rosen.

123. Earlier on, Joe Rosen defines science as our attempt to understand the reproducible and predictable aspects of nature as objectively as possible (30).
By the authority of this definition, he excludes from science all phenomena that are not reproducible and/or predictable. In his view, science is not meant
to deal with such phenomena. Although this seems to give us quite a clear
and well-defined description of what science is, it does not point out that the
reproducibility and predictability of empirical data often can be established
only by allowing margins of error. In other words, because of these margins
of error, neglect of noisy deviations, application of statistical meta-rules, etc.,
we may just as well conclude that absolute reproducibility and predictability are in fact never possible; there is always only reproducibility and predictability under
certain pre-theoretical restrictions.

124. Although physical equations are, when combined with their post-
theoretical interpretations, often thought to provide an explanation of how
nature works, they do not really do so. Just as there can be no neutral algorithm
for the choice of physical equations-i.e., for deciding which physical equation
best describes a given set of empirical data (Kuhn, The Structure, in the
Postscript written in 1969)-there can also be no finite and fairly balanced
procedure for finding the best interpretation of equation-based theories like
quantum theory or Einstein's relativity theories. Therefore, an interpretation
merely confirms the context of use within which a given physical equation
reached its mature form (see Van Dijk, "The Process"). This, then, is the
reason that no conclusive final answer can be found as to which interpretation
should be the best one.

125. Think, for instance, of a temporary wooden support on which to rest the
building bricks when constructing an archway. Although its semicircular
shape indicates where the bricks should be placed, once the arch is completed
the support is no longer needed and can be conveniently removed.

126. The noise-driven update routine has an effect that is quite similar to that
of neuromodulation (which enables brain plasticity in the initially unconditioned,
newly developing fetal brain). Analogous to self-referential noise in the
process physics model, neural noise and reentry play an indispensable role
in neuromodulation, neuroplasticity, the optimization of motor control, and
the like (see Sections 4.3 and 4.3.1). In the case of the process physics model,
however, there is no explicit, pre-developed substructure like a prewired brain.
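For concreteness: the stochastic iterator that Cahill and Klinger report for this noise-driven update routine (see their "Self-Referential Noise and the Synthesis of Three-Dimensional Space") has the form B → B − α(B + B⁻¹) + w, with B an antisymmetric connectivity matrix and w fresh noise at every step. The following Python sketch runs only this bare iteration; the matrix size, noise scale, and value of α are illustrative choices of mine, not values taken from the model:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 8        # even dimension, so a generic antisymmetric matrix is invertible
alpha = 0.5  # illustrative stand-in for the model's tuning parameter

# Start-up: an almost-zero antisymmetric "connectivity" matrix seeded
# entirely by noise; there is no prewired substructure.
w0 = rng.normal(scale=1e-3, size=(n, n))
B = w0 - w0.T

for _ in range(100):
    # Fresh noise at every update, antisymmetrized so that the
    # iterate B remains antisymmetric throughout.
    w = rng.normal(scale=1e-3, size=(n, n))
    w = w - w.T
    B = B - alpha * (B + np.linalg.inv(B)) + w
```

Because the inverse B⁻¹ couples every entry to every other entry, near-zero start-up values are strongly amplified while large ones relax back, which is the mechanism by which stable structure can condense out of what begins as pure noise.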

127. In fact, in rice and sand pile systems such a tuning parameter can "gather
under its umbrella" the effects of various phenomena: (1) stickiness between
grains; (2) the average mass of grains; (3) the precise magnitude of the
gravitational constant (which may vary with the latitude at which the experiment
is performed); (4) the average downward velocity of the grains being dropped;
(5) possible wind shear ... and so on. Accordingly, such a tuning parameter
may influence the self-organizing dynamics of the sand or rice pile system
in question. Particularly, it will set the angle (or, better put, the small range
of near-critical angles) at which avalanches will be able to tumble down the
slope. The occurrence of self-organized criticality itself, however, will remain
unaffected. By the same token, all such details can likewise be covered by one generic parameter α in the process physics model. In both cases, the
precise features of all contributing micro-factors and subnetwork activities
do not matter too much, just as long as self-organized criticality will be
achieved. And just as avalanches can occur at a wide range of different angles in rice and sand pile models, many different values of the tuning parameter α may be used in the process physics model without affecting the ongoing self-organized coming-into-actuality of "foreground cells" of activity patterns (i.e., connection nodes) from a background of activity patterns with lower-order connectivity.
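This robustness is easy to exhibit. Below is a minimal Python sketch of the classic Bak-Tang-Wiesenfeld sandpile, the idealized cousin of the rice and sand pile experiments mentioned here; the grid size, number of grain drops, and toppling threshold are arbitrary choices of mine:

```python
import numpy as np

def relax(grid, threshold=4):
    """Topple every over-threshold cell until the pile is stable.
    Returns the total number of topplings, i.e. the avalanche size."""
    size = 0
    rows, cols = grid.shape
    while True:
        unstable = np.argwhere(grid >= threshold)
        if unstable.size == 0:
            return size
        for x, y in unstable:
            grid[x, y] -= threshold
            size += 1
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < rows and 0 <= ny < cols:
                    grid[nx, ny] += 1  # grains falling off the edge are lost

rng = np.random.default_rng(seed=1)
grid = np.zeros((20, 20), dtype=int)
sizes = []
for _ in range(3000):
    x, y = rng.integers(0, 20, size=2)
    grid[x, y] += 1            # drop one grain at a random site
    sizes.append(relax(grid))  # record the avalanche it triggers
```

Most drops trigger no avalanche at all, while an occasional drop sets off a system-wide cascade; the resulting distribution of avalanche sizes is heavy-tailed, and it stays that way under wide variation of these details, which is exactly the point of the note above.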

128. The images depicted here are white noise frequency spectra (see Bourke), which are used for educational and aesthetic reasons only.

129. When talking about "islands of connectivity," the terms "branching structures," "connectivity nodes," "(sub)actualities," "events," etc., can all be used interchangeably. For the sake of clarity, the terms "monads" or "pseudo-objects" are used to refer to the start-up level of the connectivity network (or
a subnetwork). At this start-up level, internal connectivity is typically thought
of as non-explicit, because, due to universality (i.e., scale-free phenomena),
any higher-order structure can be used interchangeably as the low-level start-
up activity of yet another, higher level of organization. In any case, all these
terms are ultimately just educational figures of speech. That is, nature in itself
is ultimately unlabeled, which means that what happens in nature can never
be fully synonymized with our linguistic tags. The connectivity patterns
themselves, however, become meaningful to each other, despite their
unsuitedness to be named or, in other words, despite their unsuitedness to be
externally given meaning in any unambiguous way.

130. Events, a.k.a. "actualities," "nodes," or Whiteheadian actual occasions. Using less short and snappy language, they can also be described as "local-global (i.e., holarchic) centers of connectivity."

131. "Diaphoric" means "difference-making."

132. "The requirement that the clock that measures time in quantum mechanics
must be outside the system has stark consequences when we attempt to apply
quantum theory to the universe as a whole. By definition, nothing can be
outside the universe, not even a clock. So how does the quantum state of the
universe change with respect to a clock outside the universe? Since there is
no such clock, the only answer can be that it does not change with respect to
an outside clock. As a result, the quantum state of the universe, when viewed
from a mythical standpoint outside the universe, appears frozen in time"
(Smolin, Time, 80).

133. Besides the possible confusion of empirical data and data-reproducing algorithms with their referents, all their further math- and geometry-based
abstractions (such as point-observers) can be similarly mixed up with what
they are supposed to refer to. In this case, for instance, abstract point-observers
are easily confused with their intended referents-live conscious observers
that are seamlessly embedded within the greater embedding process of nature
as a whole.

134. These referents could be anything that, according to the physicalist paradigm of mainstream physics, can be thought to exist in the real world out
there, for instance, "states," "events," "objects," the "snapshot takes" of what
is thought to be an object in motion, and so on.

135. Such perception-action cycles can also be called "sensation-valuation-motor activation-world manipulation" cycles. These can be likened to "Gestalt cycles," although there are still a number of differences between the
two concepts.

136. Our current way of doing physics in a box can be characterized as exophysical-decompositional, or, to be more elaborate, as taking an external
perspective onto a world that is held to be decomposable into entirely physical
constituents (Van Dijk, "The Process"). A core characteristic of exophysical-
decompositional physics is that it implicitly suggests that its mathematical
labels are synonymous with nature itself. This, then, typically leads to the
fallacy of misplaced concreteness (Whitehead PR, 7, 18) and therewith
associated unrealistic conclusions, such as nature being geometrical and timeless.

137. I.e., practical purposes like the design and manufacture of computer chips; the launching of spacecraft on missions into space; the deployment of a properly working GPS system; and so on.

138. Please note that the intake of pre-coded information by an exophysical observer implies the presupposition of what one is trying to explain. That is,
it means that the alphabet of expression with the help of which this observer
is trying to describe nature, is already given beforehand (see Kauffman,
"Foreword: Evolution," 11). This is like trying to describe the spectrum of
sunlight in terms of primary colors only. Obviously, one will then be left
incapable of including ultraviolet, infrared, etc., in the picture. In other words,
pre-coded or pre-stated information necessarily leads to incomplete rep-
resentations since our tools of observation and alphabets of expression can
only denote so much-they always have upper and lower limits beyond which
they cannot go.

139. This "bootstrapping" refers to Baron von Münchhausen, who allegedly used his own bootstraps to pull himself out of the deadly swamp. A bootstrap,
then, is the handgrip at the backside of a boot that can be used to pull it up.
For "bootstrapping" in the context of the emergence of autocatalysis and
higher-order consciousness, see Kauffman, The Origins, 373; and Edelman
and Tononi 173, 205, respectively.

140. The term "co-informativeness" is here used as a synonym for mutual informativeness. Another synonym is "process-informativeness" or "process-information" (see Corbeil; Van Dijk, "An Introduction" and "The Process").

141. Because there is always an element of subjectivity when it comes to scientific observation, it seems to be more appropriate to use the concept of
intersubjectivity, instead of the absolute notions of objectivity and subjectivity.
In physical measurement, the format of the experimentally acquired empirical
data will always be affected by the subjective choice of (a) which aspect of
nature should be put under scrutiny, and (b) which data-refining encoding to
apply. At most we can attain some high degree of intersubjective agreement-i.e.,
getting the same results when probing nature in a certain way-but purely
objective outcomes are out of the question (J. Rosen 4-20).

142. The organism's internal milieu involves, among other things, the state of its sensory apparatus and vital organs, its homeostasis (as well as derived feelings
and emotions), the kinesthetics and position of limbs and joints, muscle
tension, and so forth.

143. During use, neural connections and muscle tissue are constantly engaged
in a process of strengthening and weakening through brain plasticity,
neuromuscular memory path formation, etc. (see Edelman and Tononi 46, 79-95).

144. See Von Uexküll's "Umwelt" or "self-centered world of significance" (see Von Uexküll and Von Uexküll; Koutroufinis).

145. Please note that world and organism should not be seen as truly separate.
The concept of "world" includes all within-nature organisms, while, in turn,
each organism is fully embedded within the natural world in which it lives.

WORKS CITED

Adonsou, Kokouvi Emmanuel, Igor Drobyshev, Annie DesRochers, and Francine Tremblay. "Root Connections Affect Radial Growth of Balsam Poplar Trees." In Trees: Structure and Function. Berlin: Springer, 2016.
Aks, Deborah. "Temporal and Spatial Patterns in Perceptual Behavior." In
Stephen Guastello, et al., eds. Chaos and Complexity in Psychology. New
York: Cambridge UP, 2008.
Albert, Hans. Treatise on Critical Reason. Princeton: Princeton UP, 1985.
Allport, D.A. "Distributed Memory, Modular Systems and Dysphasia." In
S.K. Newman and R. Epstein. Eds. Current Perspectives in Dysphasia.
Edinburgh: Churchill Livingstone, 1985.
Anderson, Michael L. After Phrenology: Neural Reuse and the Interactive
Brain. Cambridge: MIT P, 2014.
Aristotle. Physics. Tr. P.H. Wicksteed, F. M. Cornford. Cambridge: Harvard
UP, 1957.
Arnett, David. Supernovae and Nucleosynthesis. Princeton: Princeton UP, 1997.
Aschwanden, Markus. Self-Organized Criticality in Astrophysics. Berlin:
Springer Verlag, 2011.
Atmanspacher, Harald, and Gerard Dalenoort. Eds. Inside versus Outside,
Endo- and Exo-Concepts of Observation and Knowledge in Physics,
Philosophy, and Cognitive Science. Berlin: Springer Verlag, 1994.
Bak, Per. How Nature Works: The Science of Self-Organized Criticality. New
York: Copernicus P, 1996.
Bak, Per, Chao Tang, and Kurt Wiesenfeld. "Self-Organized Criticality: An
Explanation of the 1/f noise." Physical Review Letters 59 (27 July 1987): 381.
Barandiaran, Xabier, and Kepa Ruiz-Mirazo. "Modelling Autonomy: Simulating
the Essence of Life and Cognition." BioSystems 91.2 (2008): 295-304.
Barbour, Julian. The End of Time: The Next Revolution in Our Understanding
of the Universe. London: Weidenfeld & Nicolson, 1999.
Barrett, Nathaniel F. "A Dynamic Systems View of Habits." Frontiers in
Human Neuroscience 8 (2 September 2014): 682.
Bateson, Gregory. Steps to an Ecology of Mind. Chicago: U of Chicago P, 2000.
Bell, John Stewart. Speakable and Unspeakable in Quantum Mechanics.
Cambridge: Cambridge UP, 1988.
_ . "Against 'Measurement."' Physics World (August 1990).
Beller, Mara. "Inevitability, Inseparability and Gedanken Measurement."
Abbay Ashtekar, et al. Eds. Revisiting the Foundations of Relativistic
Physics: Festschrift in Honor of John Stachel. Dordrecht: Springer, 2003:
438-450.
Berger, Charles, and Richard Calabrese. "Some Explorations in Initial Interaction and Beyond." Human Communication Research 1 (1975): 99-112.
Berry, Michael V. "Regular and Irregular Motion." In Topics in Nonlinear
Dynamics: A Tribute to Sir Edward Bullard. American Institute of Physics
Conference Proceedings 46 (1978): 16-120.
Bialobrzeski, Czeslaw, Leon Brillouin, Jean-Louis Destouches, John von
Neumann, and Niels Bohr. New Theories in Physics. International Institute
of Intellectual Co-Operation, 1939. As quoted in Blake Stacey. "Von
Neumann Was Not a Quantum Bayesian." Philosophical Transactions
Series A: Mathematical, Physical, and Engineering Sciences 374 (April
18, 2016).
Block, Ned. "The Mind as the Software of the Brain." In Daniel N. Osherson,
Lila Gleitman, Stephen M. Kosslyn, S. Smith, and Saadya Sternberg.
Eds. An Invitation to Cognitive Science. Cambridge: MIT P, 1995: 170-185.
Bohm, David. Wholeness and the Implicate Order. London: Routledge, 2002.
Bohm, David, and Basil J. Hiley. The Undivided Universe: An Ontological
Interpretation of Quantum Theory. London: Routledge, 1993.
Bohr, Niels. "Wirkungsquantum und Naturbeschreibung." Die
Naturwissenschaften 17 (1929): 483-486.
_ . Atomic Theory and the Description of Nature. Cambridge: Cambridge
UP, 1934.
_ . "Quantum Physics and Philosophy: Causality and Complementarity."
In Essays 1958/1962 on Atomic Physics and Human Knowledge. New
York: Interscience Publishers, 1963.
Boi, Luciano. The Quantum Vacuum. Baltimore: Johns Hopkins U P, 2011.
Bourke, Paul. "Online Gallery of Noise Frequency Spectra."
http://paulbourke.net/fractals/noise/ (retrieved and edited on July 10, 2016).
Bros, Jacques. "The Geometry of Relativistic Spacetime: from Euclid's
Geometry to Minkowski's Spacetime." Séminaire Poincaré (2005).
Bruce, Vicki, Patrick R. Green, and Mark A. Georgeson. Visual Perception:
Physiology, Psychology, and Ecology. New York: Psychology P, 2003.
Buchanan, M. Ubiquity. London: Weidenfeld and Nicolson, 2000.
Byrne, Oliver. The First Six Books of "The Elements of Euclid." London:
William Pickering, 1847.
Cahill, Reginald T. "Process Physics: From Information Theory to Quantum
Space and Matter." Process Studies Supplements 5 (2003).
_ . "Process Physics: Self-Referential Information and Experiential Reality."
Conference on Quantum Physics, Process Philosophy, and Matters of
Religious Concern." Claremont, CA: Center for Process Studies (September
28 - October 2, 2005).
_ . "Black Holes and Quantum Theory: The Fine Structure Constant Connection." Progress in Physics 4 (2006): 44-50.
_ . "Resolving Spacecraft Earth-Flyby Anomalies with Measured Light
Speed Anisotropy." Progress in Physics 3 (2008): 9-15.
Cahill, Reginald T., and Susan M. Gunner. "The Global Colour Model of
QCD for Hadronic Processes: A Review." Fizika B, 7 (1998): 171-202.
Cahill, Reginald T., and Christopher M. Klinger. "Pregeometric Modelling
of the Spacetime Phenomenology." Physics Letters A., 223.5 (1996):
313-319.
_ . "Self-Referential Noise and the Synthesis of Three-Dimensional Space."
General Relativity and Gravitation 32.3 (2000): 529-540.
_ . "Self-Referential Noise as a Fundamental Aspect of Reality." Proceedings
of the 2nd International Conference on Unsolved Problems of Noise and
Fluctuations. New York: American Institute of Physics, 2000.
_ . "Bootstrap Universe from Self-Referential Noise." Progress in Physics
2 (2005): 108-112.
Cahill, Reginald T., Christopher M. Klinger, and Kirsty Kitto. "Process
Physics: Modelling Reality as Self-Organising Information." The Physicist
37.6 (2000): 191-195.
Capek, Milic. "The Inclusion of Becoming in the Physical World." In Milic
Capek. Ed. Concepts of Space and Time. Dordrecht: D. Reidel, 1976.
Capra, Fritjof. Uncommon Wisdom. New York: Simon and Schuster, 1988.
Carnap, Rudolf. "Intellectual Autobiography." In P.A. Schilpp. Ed. The
Philosophy of Rudolf Carnap. LaSalle, IL: Open Court, 1963: 1-84.
Cartwright, Nancy. How the Laws of Physics Lie. Oxford: Clarendon P, 1983.
Chaisson, Eric. Cosmic Evolution: The Rise of Complexity in Nature. Cambridge:
Harvard UP, 2001.
Chaitin, Gregory J. Algorithmic Information Theory. Cambridge: Cambridge
UP, 1987.
_ . Thinking about Godel and Turing: Essays on Complexity, 1970-2007.
Singapore: World Scientific, 2007.
Chemero, Anthony. Radical Embodied Cognitive Science. Cambridge: MIT
P, 2009.
Chew, Geoffrey. "'Bootstrap': A Scientific Idea?" Science 23 (Aug 1968):
762-765.
Chown, Marcus. "Random Reality." New Scientist (February 26, 2000): 25-28.
Christensen, Kim, and Nicholas Moloney. Complexity and Criticality. London:
Imperial College P, 2005.
Clarkson, Petruska, and Jennifer Mackewn. Fritz Perls. London: Sage


Publications, 1993.
Clay, Edmund. The Alternative: A Study in Psychology. London: Macmillan, 1882.
Cobb, John B. Jr. "Bohm and Time." In David R. Griffin. Ed. Physics and
the Ultimate Significance of Time. Albany: SUNY P, 1986.
Cohen, H. Floris. The Rise of Modern Science Explained: A Comparative
History. Cambridge: Cambridge UP, 2015.
Corbeil, Marc J.V. "Process Relational Metaphysics as a Necessary Foundation
for Environmental Philosophy." 6th International Whitehead Conference,
Salzburg, Austria (3-6 July 2006).
Cushing, James T. Theory Construction and Selection in Modern Physics:
The S-Matrix. Cambridge: Cambridge UP, 1990.
Damasio, Antonio. The Feeling of What Happens: Body, Emotion and the
Making of Consciousness. London: William Heinemann, 1999.
Davies, Paul. Space and Time in the Modern Universe. Cambridge: Cambridge
UP, 1977.
_ . About Time: Einstein's Unfinished Symphony. London: Penguin Books, 1995.
_ . "Whitrow Lecture 2004: The Arrow of Time - Why Does Time Apparently
Fly One Way, When the Laws of Physics are Actually Time-Symmetrical?"
Astronomy & Geophysics 46.1 (2005): 1.26-1.29.
_ . "That Mysterious Flow." Scientific American 16.1 (2006): 6-11.
Deacon, Terrence. Incomplete Nature: How Mind Emerged from Matter. New
York: Norton, 2012.
De Muynck, Willem. Foundations of Quantum Mechanics: An Empiricist
Approach. Boston: Kluwer Academic, 2002.
Dennett, Daniel C. Consciousness Explained. New York: Little, Brown &
Co, 1991.
Desmet, Ronny. "On the Difference Between Physics and Philosophical
Cosmology." In Michel Weber and Ronny Desmet. Eds. Chromatikon:
Yearbook of Philosophy in Process 9 (2013): 87-92.
_ . "Introduction." In Ronny Desmet. Ed. Intuition in Mathematics and
Physics: A Whiteheadian Approach. Anoka, MN: Process Century P,
2016: 1-33 .
D'Espagnat, Bernard. In Search of Reality: The Outlook of a Physicist. New
York: Springer Verlag, 1983.
Dewitt, Bryce. "Quantum Field Theory and Space-time: Formalism and
Reality." In Tian Cao. Ed. Conceptual Foundations of Quantum Field
Theory. New York: Cambridge UP, 1999: 176-186.
208 PROCESS STUDIES SUPPLEMENT 24 (2017)

Dijksterhuis, Eduard J. The Mechanization of the World Picture. Oxford:
Clarendon P, 1961.
Di Paolo, Ezequiel. "Extended Life." Topoi 28 (2009): 9-21.
Drake, Stillman. "The Role of Music in Galileo's Experiments." Scientific
American (June, 1975).
Eastman, Timothy E. "On Process Physics." In Timothy Eastman, et al. Eds.
Physics and Speculative Philosophy. Berlin: Ontos Verlag, 2016.
Eastman, Timothy E., and Hank Keeton. Eds. "Resource Guide for Physics
and Whitehead." Process Studies Supplements 6 (2004).
Edelman, Gerald. The Remembered Present: A Biological Theory of
Consciousness. New York: Basic Books, 1989.
_ . "Building a Picture of the Brain." In Gerald M. Edelman and Jean-Pierre
Changeux. Eds. The Brain. London: Transaction Books, 2001: 37-69.
Edelman, Gerald, and Giulio Tononi. Consciousness: How Matter Becomes
Imagination. London: Allen Lane, 2000.
Einstein, Albert. Relativity: The Special and the General Theory. London:
Methuen & Co. Ltd., 1954.
Elsasser, Walter M. "Acausal Phenomena in Physics and Biology: A Case
for Reconstruction." American Scientist 57 (1969): 502-516.
_ . "A Form of Logic Suited for Biology?" In Robert Rosen. Ed. Progress
in Theoretical Biology 6 (1981): 23-62.
Emberson, Lauren, et al. "Endogenous Neural Noise and Stochastic Resonance."
http://dx.doi.org/10.1117/12.724736.
Epperson, Michael. Quantum Mechanics and the Philosophy of Alfred North
Whitehead. New York: Fordham UP, 2004.
Fast, Johan. Entropy. London: Macmillan, 1968.
Fechner, Gustav T. Elemente der Psychophysik. Leipzig: Breitkopf & Hartel, 1860.
Feuda, Roberto, et al. "Metazoan Opsin Evolution Reveals a Simple Route
to Animal Vision." Proceedings of the National Academy of Sciences
109.46 (13 November 2012): 18868-18872.
Feynman, Richard. The Character of Physical Law. Cambridge: MIT P, 1967.
Fischer, Tobias, et al. "Melatonin as a Major Skin Protectant." Experimental
Dermatology 17 (2008): 713-730.
Fiske, John, ed. Introduction to Communication Studies. 2nd ed. New York:
Routledge, 2002.
Floridi, Luciano. The Philosophy of Information. Oxford: Oxford UP, 2011.
Fodor, Jerry. The Modularity of Mind: An Essay on Faculty Psychology.
Cambridge: MIT P, 1983.
Forsee, Aleysa. Albert Einstein: Theoretical Physicist. New York: Macmillan,
1963.
Frank, Adam. About Time: Cosmology and Culture at the Twilight of the Big
Bang. New York: Free P, 2011.
Fritzsch, Harald, et al. "Advantages of the Color Octet Gluon Picture."
doi.org/10.1016/0370-2693(73)90625-4
Galilei, Galileo. "The Assayer." In Discoveries and Opinions of Galileo. Tr.
Stillman Drake. New York: Doubleday and Co., 1957.
_ . Dialogues Concerning Two New Sciences. Tr. Henry Crew and Alfonso
de Salvio. New York: Cosimo Classics, 2010.
Gammaitoni, Luca, et al. "Stochastic Resonance." Reviews of Modern Physics
70.1 (1998): 223-287.
Gibbs, Raymond. Embodiment and Cognitive Science. Cambridge: Cambridge
UP, 2005.
Gibson, James J. The Senses Considered as Perceptual Systems. Boston:
Houghton Mifflin Company, 1966.
_ . The Ecological Approach to Visual Perception. Boston: Houghton Mifflin
1979.
Giere, Ronald N. Science Without Laws. Chicago: U of Chicago P, 1999.
Goff, Philip. "Why Science Can't Explain Consciousness." 2013. (Preview
paper of the forthcoming book Consciousness and Fundamental Reality.
Oxford: Oxford UP, 2017.)
Gould, Stephen J. The Richness of Life: The Essential Stephen Jay Gould.
New York: Norton & Co., 2007.
Greene, Brian. The Fabric of the Cosmos: Space, Time, and the Texture of
Reality. New York: Alfred A. Knopf, 2004.
Gribbin, John. Schrödinger's Kittens and the Search for Reality: Solving the
Quantum Mysteries. New York: Little, Brown & Co., 1995.
Griffin, David, R. "Bohm and Whitehead on Wholeness, Freedom, Causality
and Time." In David R. Griffin. Ed. Physics and the Ultimate Significance
of Time. Albany: SUNY P, 1986: 127-153.
_ . "Introduction: Time and the Fallacy of Misplaced Concreteness." In
David R. Griffin. Ed. Physics and the Ultimate Significance of Time.
Albany: SUNY P, 1986: 1-48.
_ . "The Whiteheadian Century!" Banquet address at the 10th International
Whitehead Conference, Claremont, CA, June 7, 2015.
http://www.pandopopulus.com/griffin-whitehead-century-revisited/
Haack, Susan. Philosophy of Logics. Cambridge: Cambridge UP, 1978.
Haber, Shimon, Alys Clark, and Merryn Tawhai. "Blood Flow in Capillaries
of the Human Lung." Journal of Biomechanical Engineering 135.10
(September 20, 2013).
Harris, Michael G. "Optic and Retinal Flow." In A.T. Smith and R.J. Snowden.
Eds. Visual Detection of Motion. London: Academic P, 1994: 307-332.
Heisenberg, Werner. Physics and Philosophy: The Revolution in Modern
Science. London: George Allen and Unwin, 1958.
Herbert, Nick. Quantum Reality: Beyond the New Physics. New York: Anchor
Books, 1985.
Hesse, Janina, and Thilo Gross. "Self-Organized Criticality as a Fundamental
Property of Neural Systems." Frontiers in System Neuroscience 8 (23
September 2014): 166.
Hunt, Tam. Eco, Ego, Eros: Essays in Philosophy, Spirituality and Science.
Santa Barbara: Aramis P, 2014.
James, William. "Percept and Concept: The Import of Concepts." In Some
Problems of Philosophy: A Beginning of an Introduction to Philosophy.
New York: Longmans Green, 1911.
_ . The Principles of Psychology, Vol. 1. New York: Cosimo Classics, 2007.
Jan, James E., Russel J. Reiter, Michael B. Wasdell, and Martin Bax. "The
Role of the Thalamus in Sleep, Pineal Melatonin Production, and Circadian
Rhythm Sleep Disorders." Journal of Pineal Research 46 (2009): 1-7.
Jantsch, Erich. The Self-Organizing Universe: Scientific and Human Implications
of the Emerging Paradigm of Evolution. Frankfurt: Pergamon P, 1980.
Jensen, Henrik J. Self-Organized Criticality: Emergent Complex Behavior in
Physical and Biological Systems. Cambridge: Cambridge UP, 1998.
Jones, Roger. "Realism about What?" Philosophy of Science 58 (1991): 185-202.
Jung, Peter, et al. "Noise-Induced Spiral Waves in Astrocyte Syncytia Show
Evidence of Self-Organized Criticality." Journal of Neurophysiology 79
(1998): 1098-1101.
Käufer, Stephan, and Anthony Chemero. Phenomenology: An Introduction.
Cambridge: Polity P, 2015.
Kauffman, Stuart A. The Origins of Order: Self-organization and Selection
in Evolution. New York: Oxford UP, 1993.
_ . At Home in the Universe: The Search for the Laws of Self-Organization
and Complexity. Oxford: Oxford UP, 1995.
_ . "Foreword: The Open Universe." In Robert Ulanowicz. Ed. The Third
Window: Natural Life beyond Newton and Darwin. West Conshohocken,
PA: Templeton Foundation P, 2009.
_ . "Foreword: Evolution beyond Newton, Darwin, and Entailing Law." In
Brian G. Henning and Adam C. Scarfe. Eds. Beyond Mechanism: Putting
Life Back into Biology. Lanham, MD: Lexington Books, 2013: 1-24.
Kirk, G.S. Heraclitus: The Cosmic Fragments. Cambridge: Cambridge UP, 2010.
Kitzbichler, Manfred, et al. "Broadband Criticality of Human Brain Network
Synchronization." 2009. doi: 10.1371/journal.pcbi.1000314
Klinger, Christopher M. Bootstrapping Reality from the Limitations of Logic.
Saarbrücken: VDM Verlag, 2010.
_ . "On the Foundations of Process Physics." In Timothy E. Eastman,
Michael Epperson, and David Ray Griffin. Eds. Physics and Speculative
Philosophy: Potentiality in Modern Physics. Boston: Walter de Gruyter,
2016: 143-176.
Koestler, Arthur. The Ghost in the Machine. London: Hutchinson, 1967.
Kolmogorov, Andrey N. Selected Works of A.N. Kolmogorov, Volume III:
Information Theory and the Theory of Algorithms. Tr. A.B. Sossinsky.
Dordrecht: Kluwer, 1993.
Koutroufinis, Spyridon. "Uexküll, Whitehead, Peirce: Rethinking the Concept
of 'Umwelt/environment' from a Process Philosophical Perspective." In
Maria Pąchalska and Michel Weber. Eds. Festschrift for Neurophysiologist
Jason Brown. Boston: De Gruyter, 2016.
Kuhn, Thomas S. "Objectivity, Value Judgment, and Theory Choice." In The
Essential Tension: Selected Studies in Scientific Tradition and Change.
Chicago: U of Chicago P, 1977: 320-339.
_ . The Structure of Scientific Revolutions. 4th ed. Chicago: U of Chicago
P, 2012.
Laplace, Pierre Simon. A Philosophical Essay on Probabilities. Tr. F.W.
Truscott and F.L. Emory. New York: Dover Publications, 1951.
Linkenkaer-Hansen, Klaus. "Self-Organized Criticality and Stochastic
Resonance in the Human Brain." Ph.D. dissertation. Helsinki University
of Technology, 2002.
Löwel, Siegrid, and Wolf Singer. "Selection of Intrinsic Horizontal Connections
in the Visual Cortex by Correlated Neuronal Activity." Science 255 (10
January 1992): 209-212.
Lloyd, Seth. "The Computational Universe." In Paul Davies and N. Gregersen.
Eds. Information and the Nature of Reality: From Physics to Metaphysics.
Cambridge: Cambridge UP, 2010: 92-103.
Massimini, Marcello, et al. "Triggering Sleep Slow Waves by Transcranial
Magnetic Stimulation." Proceedings of the National Academy of Sciences
of the United States of America. 104.20 (15 May 2007): 8496-8501.
Maturana, Humberto R., and Francisco J. Varela. "Autopoiesis and Cognition:
The Realization of the Living." In Robert S. Cohen and Marx W. Wartofsky.
Eds. Boston Studies in the Philosophy of Science 42. Boston: D. Reidel, 1980.
Mayr, Ernst. "Some Thoughts on the History of the Evolutionary Synthesis."
In Ernst Mayr and William B. Provine. Eds. The Evolutionary Synthesis:
Perspectives on the Unification of Biology. Cambridge: Harvard UP,
1980: 1-48.
McCarthy, Lance. "Note de lecture: Process Physics by R.T. Cahill." Annales
de la Fondation Louis de Broglie 31.2-3 (2006): 357-361.
McDonnell, Mark, and Derek Abbott. "What is Stochastic Resonance?"
doi: 10.1371/journal.pcbi.1000348
McDougal, Douglas W. Newton's Gravity: An Introductory Guide to the
Mechanics of the Universe. New York: Springer, 2012.
McTaggart, John E. Studies in the Hegelian Dialectic. 2nd ed. New York:
Russell and Russell, 1964.
Miller, Mark. "Four More Finch Neurons." (2011)
www.flickr.com/photos/neurollero/5596181359.
Minkowski, Hermann. "Space and Time." In H. A. Lorentz, A. Einstein, H.
Minkowski, and H. Weyl. The Principle of Relativity: A Collection of
Original Memoirs on the Special and General Theory of Relativity. New
York: Dover Publications, 1952: 75-91.
Nagel, Thomas. The View from Nowhere. New York: Oxford UP, 1986.
Nagels, G.A. "Space as a 'Bucket of Dust'." General Relativity and Gravitation
17.6 (1985): 545-557.
Nauenberg, Michael. "Does Quantum Mechanics Require a Conscious
Observer?" Journal of Cosmology 14 (2011).
Newell, Karl, et al. "Landscapes Beyond the HKB Model." In Armin Fuchs
and Viktor Jirsa. Eds. Coordination: Neural, Behavioural, and Social
Dynamics. New York: Springer, 2008.
Nicolis, Gregoire, and Ilya Prigogine. Self-Organization in Non-Equilibrium
Systems: From Dissipative Structures to Order through Fluctuations.
New York: J. Wiley and Sons, 1977.
Nöth, Winfried. Handbook of Semiotics. Bloomington: Indiana UP, 1995.
Pąchalska, Maria, Malgorzata Lipowska, and Beata Lukaszewska. "Towards
a Process Neuropsychology: Microgenetic Theory and Brain Science."
Acta Neuropsychologica 5.4 (2007): 228-245.
Papatheodorou, Christos, and Basil J. Hiley, "Process, Temporality and Space-
time." Process Studies 26 (1997): 247-278.
Parisi, Giorgio, and Yong-Shi Wu. "Perturbation Theory without Gauge
Fixing." Scientia Sinica 24 (1981): 483-496.
Pattee, Howard H. "The Physics of Symbols: Bridging the Epistemic Cut."
Biosystems 60 (2001): 5-21.
Pearson, John E. "Complex Patterns in a Simple System." Science 261 (9
July 1993): 189.
Peirce, Charles Sanders. "A Guess at the Riddle." In Nathan Houser and
Christian Kloesel. Eds. The Essential Peirce, Selected Philosophical
Writings. Bloomington: Indiana UP, 1992.
Penrose, Roger. The Road to Reality: A Complete Guide to the Laws of the
Universe. London: Jonathan Cape, 2004.
Perls, Fritz S., and Laura Perls. Ego, Hunger and Aggression. New York:
Random House, 1969.
Pessoa, Luiz. The Cognitive-Emotional Brain: From Interactions to Integration.
Cambridge: MIT P, 2013.
Phillips, Theresa. "The Role of Methylation in Gene Expression." Nature
Education 1.1 (2008): 116.
Pickering, John A. "Active Information in Physics and Psychology." In P.
Pylkkänen, and P. Pylkkö. Eds. New Directions in Cognitive Science:
Proceedings of the International Symposium. Helsinki: Finnish Artificial
Intelligence Society, 1995.
_ . "Beyond Cognitivism: Mutualism and Postmodern Psychology." In P.
Pylkkänen, P. Pylkkö, and A. Hautamäki. Eds. Brain, Mind, and Physics.
Amsterdam: IOS Publishing, 1997: 183-204.
Planck, Max. Where Science is Going. New York: W.W. Norton and Company,
1932.
Popkin, Richard H. Philosophy of the Sixteenth and Seventeenth Centuries.
New York: Free Press, 1966.
Pred, Ralph. Onflow: Dynamics of Consciousness and Experience. Cambridge:
MIT P, 2005.
Prigogine, Ilya. The End of Certainty: Time, Chaos, and the New Laws of
Nature. New York: Free P, 1996.
Primas, Hans. "Realism and Quantum Mechanics." In Dag Prawitz, et al. Eds.
Logic, Methodology, and Philosophy of Science IX. Dordrecht: Elsevier
Science, 1994: 609-631.
Quine, Willard V.O. Ontological Relativity and Other Essays. New York:
Columbia UP, 1969.
Riesch, Rüdiger, et al. "Cultural Traditions and the Evolution of Reproductive
Isolation: Ecological Speciation in Killer Whales?" Biological Journal
of the Linnean Society 106 (2012): 1-17.
Robb, Alfred A. Geometry of Time and Space. Cambridge: Cambridge UP, 2014.
Rosen, Joe. Lawless Universe: Science and the Hunt for Reality. Baltimore:
Johns Hopkins UP, 2010.
Rosen, Robert. Life Itself: A Comprehensive Inquiry into the Nature, Origin,
and Fabrication of Life. New York: Columbia UP, 1991.
_ . Anticipatory Systems: Philosophical, Mathematical, and Methodological
Foundations. 2nd ed. Ed. George J. Klir. New York: Springer Verlag, 2012.
Rothstein, Jerome. "Information, Measurement, and Quantum Mechanics."
Science 114 (1951): 171-175.
_ . "Thermodynamics and Some Undecidable Physical Questions." Philosophy
of Science 31.1 (1964): 40-48.
Russell, Bertrand. An Inquiry into Meaning and Truth. London: George Allen
& Unwin, 1950.
Schrödinger, Erwin. What is Life? The Physical Aspect of the Living Cell.
Cambridge: Cambridge UP, 1992.
Shannon, Claude E., and Warren Weaver. The Mathematical Theory of
Communication. Champaign: U of Illinois P, 1949.
Smolin, Lee. The Life of the Cosmos. New York: Oxford UP, 1997.
_ . The Trouble with Physics: The Rise of String Theory, the Fall of a Science,
and What Comes Next. Boston: Houghton Mifflin, 2006.
_ . Time Reborn: From the Crisis of Physics to the Future of the Universe.
London: Allen Lane, 2013 .
Solomonoff, Ray. "A Preliminary Report on a General Theory of Inductive
Inference." Report V-131. Cambridge: Zator Co., 1960.
Springel, Volker, et al. "Simulations of the Formation, Evolution and Clustering
of Galaxies and Quasars." Nature 435 (2 June 2005): 629-636.
Stapp, Henry P. Mindful Universe: Quantum Mechanics and the Participating
Observer. Berlin: Springer, 2007.
Strickberger, Monroe. Evolution. Boston: Jones and Bartlett, 2000.
Thompson, Evan. Waking, Dreaming, Being: Self and Consciousness in
Neuroscience, Meditation, and Philosophy. New York: Columbia UP, 2015.
Thompson, Evan, and Francisco Varela. "Radical Embodiment: Neural
Dynamics and Consciousness." Trends in Cognitive Science 5 (2001):
418-425.
Tononi, Giulio. "Consciousness as Integrated Information: A Provisional
Manifesto." Biological Bulletin 215 (December 2008): 216-242.
Tononi, Giulio, Olaf Sporns, and Gerald M. Edelman. "A Measure for Brain
Complexity: Relating Functional Segregation and Integration in the
Nervous System." Proceedings of the National Academy of Sciences of
the United States of America 91 (May 1994): 5033-5037.
Turing, Alan M., and Darrel Ince. Mechanical Intelligence: Collected Works
of A.M. Turing. Amsterdam: North-Holland - Elsevier Science, 1992.
Ulanowicz, Robert. The Ascendent Perspective. New York: Columbia UP, 1997.
_ . The Third Window: Natural Life beyond Newton and Darwin. West
Conshohocken, PA: Templeton Foundation Press, 2009.
Van Dijk, Jeroen B.J. "An Introduction to Process-Information." In Michel
Weber and Ronny Desmet. Eds. Chromatikon: Yearbook of Philosophy
in Process 7 (2011): 75-84.
_ . "The Process-Informativeness of Nature." Unpublished paper.
Van Fraassen, Bas. Scientific Representation: Paradoxes of Perspective.
Oxford: Clarendon P, 2008.
Velmans, Max. Understanding Consciousness. 2nd ed. London: Routledge, 2009.
Veneziano, Gabriele. "Construction of a Crossing-Symmetric Regge-Behaved
Amplitude for Linearly Rising Regge Trajectories." Nuovo Cimento A
57 (1968): 190-197.
Von Kitzinger, Eberhard. "Origin of Complex Order in Biology: Abdu'l-
Baha's Concept of the Originality of Species Compared to Concepts in
Modern Biology." In Keven Brown. Ed. Evolution and Baha'i Belief:
'Abdu'l-Baha's Response to Nineteenth-century Darwinism. Los Angeles:
Kalimat P, 2001.
Von Neumann, John. Mathematical Foundations of Quantum Mechanics.
Princeton: Princeton UP, 1955.
Von Uexküll, Jakob, and Marina Von Uexküll. A Foray Into the Worlds of
Animals and Humans: With a Theory of Meaning. Tr. Joseph D. O'Neil.
Minneapolis: U of Minnesota P, 2010.
Watkins, Nicholas, et al. "25 Years of Self-Organized Criticality." Space
Science Reviews 198 (2016): 3-44.
Weekes, Anderson. "The Many Streams in Ralph Pred's Onflow." In Michel
Weber and Pierfrancesco Basile. Eds. Chromatikon: Yearbook of Philosophy
in Process 2 (2006): 227-244.
Weinert, Friedel. The Scientist as Philosopher: Philosophical Consequences
of Great Scientific Discoveries. Berlin: Springer Verlag, 2005.
Wheeler, John Archibald. "Law without Law." In P. Medawar and J. Shelley.
Eds. Structure in Science and Art. Amsterdam: Elsevier, 1980: 132-154.
_ . "Information, Physics, Quantum: The Search for Links." In Anthony
Hey, and Richard P. Feynman. Eds. Feynman and Computation: Exploring
the Limits of Computers. Cambridge, MA: Perseus Books, 1999: 309-336.
Whitehead, Alfred North. An Enquiry Concerning the Principles of Natural
Knowledge. Cambridge: Cambridge UP, 1919.
_ . The Concept of Nature. Cambridge: Cambridge UP, 1920.
_ . Modes of Thought. 1938. New York: Macmillan, 1968.
_ . Process and Reality: An Essay in Cosmology. 1929. Corrected ed. Ed.
David Ray Griffin and Donald Sherburne. New York: Free Press, 1978.
Woit, Peter. Not Even Wrong: The Failure of String Theory and the Search
for Unity in Physical Law. New York: Basic Books, 2006.
Wolfram, Stephen. A New Kind of Science. Champaign, IL: Wolfram Media, 2002.
Wood, David C. "Action Spectrum and Physiological Responses Correlated
with the Photo-Phobic Response of Stentor Coeruleus." Photochemistry
and Photobiology 24 (1976): 261-266.
Zeman, Jay. "Peirce's Theory of Signs." In Thomas Sebeok. Ed. A Perfusion
of Signs. Bloomington: Indiana UP, 1977.
Zupko, Jack. "Jean Buridan." In T. F. Glick, S. J. Livesay, F. Wallis. Eds.
Medieval Science, Technology and Medicine: An Encyclopedia. New
York: Routledge, 2005: 105-108.
