
An Introduction to Process-Information:

From Information Theory to Experiential Reality

Author: Jeroen B.J. van Dijk


Eindhoven, The Netherlands
(jvandijk@all-is-flux.nl)

(pre-print)
(to appear in: M. Weber and R. Desmet (Eds.), Chromatikon Yearbook of Philosophy in Process, Volume 7,
pp. 75-84, Louvain-la-Neuve: Les Éditions Chromatika, 2011)

key words: Process Physics, information theory, dissipative structures,
Reflexive Monism, Whiteheadian panexperientialism

Contents:

1. Introduction
2. Conventional information theories
2.1 Classical information theory
2.2 Semiotic information theory
2.3 Algorithmic information theory
2.4 Knowledge as data-based content
3. Process-information
4. Experiential reality
5. Main consequences
6. References

1. Introduction

In our conventional information theories it has become standard practice to interpret
information as well-formed syntactical data plus user-added meaning (Floridi 2011, 83-84).
Unfortunately, however, this interpretation focuses mainly on the results of information
intake, while the process of information intake itself is mostly neglected. In this way, any
data-interpreting user needs to know only that information intake works, not how it works.
The results of information intake are thus first understood in a purely quantitative
sense as otherwise meaningless facts capable of reducing uncertainty about some user-defined
area of interest (cf. Shannon and Weaver 1949, 108-109; Berger and Calabrese 1975).
Accordingly, these quantitative data are typically seen to present themselves in the form of
syntactical units of expression, for instance, as countable pulses in data-conveying
communication signals, or the compressible content of syntax-obeying symbol sequences.
In its deepest essence, however, information is not about such syntactical data at all.
Instead, information involves the activity of nature’s self-organizing process-structures1 as
they mutually affect (i.e. ‘in-form’) each other by way of their non-equilibrium in- and
outflow cycles. Like this, these criticality-seeking open systems actively give form to one
another’s structural-functional organization (cf. Jantsch 1980, 10-11 and 51; Bohm and Hiley
1993, 27) thus constituting an all-pervading reciprocal process-informativeness.
In this view, information is ultimately a nature-wide reflexive process (cf. Cahill 2005,
1-2) in which conscious observers are themselves embedded endo-systems within the greater
embedding processuality of nature as a whole. As a result, observers do not have to impose
any external meaning upon the syntactically expressed results of their information intake.
Instead, our process-informative natural world is internally meaningful (Cahill 2005, 16) in
the sense that all process-structures make a difference to all other process-structures, and vice
versa, in an order of undivided wholeness (cf. Bohm 1980, 154). Moreover, this view agrees
very much with Gregory Bateson’s conception of information as ‘a difference which makes a
difference’ (Bateson 1972, 459).

2. Conventional information theories

Over the last couple of decades it has become standard custom in our conventional
information theories to apply the General Definition of Information (GDI), a tripartite
statement (Floridi 2011, 84) claiming that:

σ is an instance of information, understood as semantic content, if and only if:

(GDI.1) σ consists of n data, for n ≥ 1;
(GDI.2) the data are well-formed;
(GDI.3) the well-formed data are meaningful.

According to this General Definition of Information, information can never be dataless, but
must consist of at least one datum. In the so-called Diaphoric interpretation (diaphora is the
Greek word for “difference”) a datum is equivalent to a difference or lack of uniformity
occurring within some context of interest, thus leading to the Diaphoric Definition of Data
(DDD):

A datum is a difference x being distinct (or distinguishable) from y, where x and y are
two uninterpreted variables and the domain is left open to further interpretation.

1 A process-structure is the relatively ordered and stable pattern that can emerge via the non-equilibrium
behaviour of complex open systems (cf. Jantsch 1980, 21). In this sense, it refers to the eye-catching
steady-states invoked by the dynamic regimes of non-equilibrium dissipative structures (Nicolis and Prigogine 1977).

This Diaphoric Definition of Data (cf. Floridi 2011, 85) forms the basic foundation for all our
mainstream information theories. Following in the wake of the DDD, the GDI.1, GDI.2, and
GDI.3 enable the superposition of additional layers to arrive at semantic information. It is thus
considered a prime requirement for such information to start out as data-based content
grounded on the most elementary distinctions of a dedicated observation system (such as a
measurement device and/or a conscious end-observer).
The ‘becoming available’ of data via such means of information intake is typically
understood to contribute to knowledge acquisition in that it reduces earlier-existing
uncertainty (cf. Shannon and Weaver 1949, 108-109; Berger and Calabrese 1975). Like this,
knowledge is first and foremost conceived in a quantitative sense in that it involves the
relative amount of data – out of some pre-available data source of known (or estimated) size –
to which an information-eager observer has free access.

2.1 Classical information theory

Accordingly, in classical information theory (Shannon and Weaver 1949) every single
incoming datum will lessen the recipient’s uncertainty about a given amount of possible
alternatives. Like this, information is a measure of the decrease in uncertainty about the
occurrence of a specific event from a given set of possible events. This information-theoretic
measure of uncertainty reduction is quantified as follows: The probability of occurrence for
each specific member (e.g. characters, or system states)2 of a given set of possibilities (e.g. an
alphabet, or a fixed region in phase space) depends on the total amount of available options
and the relative frequency of all these alternatives individually (cf. Fast 1968, 325-326). In
other words, Shannon’s information-theoretic measure of uncertainty reduction indicates the
relative decrease in ignorance about which options from a prespecified collection of
possibilities get to be selected as the individual content values of the data signal under
construction.
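
This quantitative reading can be made concrete with a short Python sketch (an illustrative aside, not part of the original argument; the alphabets and frequencies below are invented). Shannon's entropy H = -Σ p·log₂(p) measures, in bits, the average uncertainty about which option from the prespecified collection gets selected:

```python
import math

def entropy(probabilities):
    """Shannon entropy H = -sum(p * log2(p)), in bits per selected option."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A four-symbol 'alphabet' with uniform frequencies: maximal uncertainty,
# so each incoming datum reduces ignorance by the most.
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 (bits)

# A skewed alphabet: less prior uncertainty, so each datum 'says' less on average.
print(entropy([0.7, 0.1, 0.1, 0.1]))
```

The uniform case gives exactly 2 bits because four equally likely alternatives require two binary distinctions to single one out.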

2 Please note that classical information theory allows various kinds of members – ‘elementary items of
information’ such as symbols, signs, tokens, bits, byte-sized bit strings, syllables, or words – to be used
interchangeably.

2.2 Semiotic information theory

Although semiotic information theory starts out with purely syntactical signs, it also includes
semantic information relating to meaning and truth of these signs, and pragmatic information
dealing with the effects and impact of these meaningful signs on those who use them.
Accordingly, in semiotics meaning gets to be established in a triadic relation between a sign,3
its referent (i.e. a ‘something-to-be-known’)4 and a sign-interpreting user (cf. Zeman 1977;
Nöth 1995, 85; Fiske 2002, 41-43). In this way, when the value state of a particular sign
becomes known, this reduces the user’s uncertainty about the state of the sign’s referent.
Moreover, when the user’s sociocultural background is taken into account as well it becomes
easier to determine which meaning and which practical consequences the sign will have for
the sign-interpreting user.
For instance, in a situation where red and blue (i.e. the signs) designate hot and cold
water (i.e. the referents), the resulting meanings and effects will most likely be very different
to sun tourists and polar adventurers (i.e. the respective sign-interpreting users), whose risks
of frostbite and dehydration are unevenly distributed. Hence, as in classical
information theory, knowledge is interpreted as a collection of data-conveying signs that will
decrease the user’s uncertainty about which particular alternative will occur and which
probabilities, meanings, and consequences are associated with this.
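
The triadic relation in this example can be caricatured in a few lines of Python (a purely illustrative toy; the mappings and 'meanings' below are invented for this sketch, not drawn from the semiotic literature):

```python
# Toy triadic sign relation: a sign resolves to a referent, and the
# (user, referent) pair resolves to a user-relative meaning.
sign_to_referent = {"red": "hot water", "blue": "cold water"}

meaning_for_user = {
    ("sun tourist", "hot water"):       "dehydration risk",
    ("sun tourist", "cold water"):      "welcome relief",
    ("polar adventurer", "hot water"):  "welcome relief",
    ("polar adventurer", "cold water"): "frostbite risk",
}

def interpret(sign: str, user: str) -> str:
    """Resolve sign -> referent -> meaning for a given sign-interpreting user."""
    referent = sign_to_referent[sign]
    return meaning_for_user[(user, referent)]

print(interpret("red", "sun tourist"))        # dehydration risk
print(interpret("blue", "polar adventurer"))  # frostbite risk
```

The point of the sketch is only that the same sign yields different pragmatic meanings once the user's background enters the relation.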

2.3 Algorithmic information theory

Whereas in classical and semiotic information theory the increase of available data leads to
more knowledge, in algorithmic information theory (AIT; Kolmogorov 1987; Chaitin 1987)
the opposite seems to be the case. That is, in AIT the amount of data compression that can be
achieved in (re)producing empirical data can be taken as a measure of the level of scientific
understanding about the associated target system (Solomonoff 1960; Chaitin 2007, 35). The
main argument can be stated as follows: the more compression, the better the understanding
about the system’s recorded behavior (Chaitin 2007, 227 and 286). In this way, the extent of
knowledge about a natural system is thought to peak as the algorithm for (re)producing its
empirical data approaches its minimum size.

3 For reasons of simplicity, the possible difference between sign (a.k.a. sign function or signhood) and sign
vehicle (a.k.a. token or signifier) is ignored here. Instead, the terms meaning and sign are used to denote the use
of a certain token within a triadic sign relation. See (Nöth 1995, 79) for more details on the possible differences
between sign and sign vehicle.

4 In semiotics, referents are usually interpreted as objects, or Kantian things-in-themselves. Unfortunately,
however, these labels are philosophically biased in that they have an explicitly physical connotation to them,
thereby suggesting the pre-existence of purely physical entities without any proper justification. The term
‘something-to-be-known’ (Damasio 1999, 159-167) is already less prejudiced in this respect and therefore it can
be seen as the more acceptable alternative.
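
The AIT intuition can be caricatured with off-the-shelf compression (an illustrative sketch only: a zlib-compressed size is merely a crude, computable upper bound on Kolmogorov complexity, which is itself uncomputable):

```python
import random
import zlib

def compressed_size(data: bytes) -> int:
    """Size after zlib compression -- a crude upper bound on the
    algorithmic (Kolmogorov) complexity of the data."""
    return len(zlib.compress(data, level=9))

# A highly patterned 'empirical record': a short rule (re)produces all of it.
regular = b"01" * 500

# A patternless record of the same length: nothing to compress,
# and in AIT terms nothing to 'understand'.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(compressed_size(regular), compressed_size(noisy))
```

The patterned record compresses to a small fraction of its size, while the random record hardly compresses at all, mirroring the claim that maximal understanding corresponds to minimal (re)producing algorithms.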

2.4 Knowledge as data-based content

In all three above-mentioned information theories the amount of acquired knowledge is
measured by comparing the already received symbolically expressed data to potentially
available, but as yet unknown data. That is, by dividing the known data by the maximum
amount of data, one can establish a relative measure of knowledge. This knowledge is
typically passed on via symbolic units of expression – data-conveying signs taken from some
earlier agreed-upon symbol system.5 In fact, the alphabets, mathematical symbols, coding
systems, etc., that are used to express information and knowledge are typically taken for
granted as pre-available givens. And although it may indeed be tempting to consider these
data-conveying signs as identical to the data themselves (and hence to data-based
information and knowledge), this is definitely not the case. After all, the Diaphoric Definition
of Data (DDD) and the therewith associated General Definition of Information (GDI) actually
refer to the distinctions to which these signs are related (Nöth 1995, 80) – not to the signs
themselves.
However, in their attempt to ground data on supposedly elementary distinctions,
information theorists end up neglecting the very process of information intake by means of
which these distinctions are made in the first place. In this way, it is merely required to know
that information intake works, not how it works. In order to make this work, some earlier
agreed-upon data-compliant symbolic alphabet should be made available prior to any
information intake,6 thus authorizing the sign before the data.
All this is done without paying any serious enough attention to the inner-workings or
underlying processuality of all the means of information intake. At the end of the day, it is
simply assumed without any proof that the process of information intake itself can be
sufficiently accounted for by means of the symbol-manipulating techniques of mathematical
modeling. However, any attempt in this direction would inevitably trigger an infinite regress,
as this requires yet another process of information intake to provide such a mathematical
model with proper data, thereby calling forth the same problem all over again (cf. Von
Neumann 1955, 352; Pattee 2001, par. 9).

5 E.g. character sets following the alphabetical system, binary, (hexa)decimal, alphanumerical, ASCII, pictogram
or hieroglyphic systems, et cetera.

6 Not only some symbolic alphabet or coding system should be made available to their potential users in
advance, but also all other means of communication and information intake (such as measurement instruments,
receivers and transmitters, communication channels, et cetera).

3. Process-information

In sharp contrast with this inherently problematic syntax-based information, process-information
is not about the transfer, storage or (re)production of data. Instead, it involves the
capacity of interconnected process-structures to mutually affect (i.e. ‘in-form’) each other.
Through their numerous complex inner- and inter-system non-equilibrium cycles these
criticality-seeking self-organizing open systems actively give form to one another’s
functional-structural organization (cf. Jantsch 1980, 10-11 and 51; Bohm and Hiley 1993, 27),
thus constituting an all-pervading reciprocal process-informativeness.
Being itself a self-organizing process-structure, our mind-brain does not gather
information by simply importing incoming syntax-based sense data. Instead, it is the mutual
informativeness within and between neuronal groups that enables the mind-brain to develop
internally meaningful activity patterns (cf. Edelman and Tononi 2000, 127-131). That is, by
signaling back and forth in response to each other’s in- and outgoing activity patterns,
neuronal groups make a difference to each other and thus to the mind-brain as a whole. Like
this, a distributed kind of self-sensitivity can emerge by virtue of which the mind-brain can
become its own “observer” (Edelman and Tononi 2000, 127-128) instead of having to rely on
an inner-brain “homunculus” whose job it is to provide neural activity patterns with an
externally added sense of subjectivity (Damasio 1999, 189-192; Edelman and Tononi 2000,
127).
Although such an externally observing homunculus would measure the amount of
incoming information by the number and probability of observably different system states, a
self-sentient, mutually informative process-structure such as the mind-brain requires a
different approach. In order to offer an alternative to our conventional measures of
information, the concept of ‘mutual information’ can be used. In the context of cognitive
neuroscience, mutual information can be defined as the extent to which a given neuronal
group changes its dynamics in response to differences in the in- and outgoing activity patterns
of other neuronal groups (Edelman and Tononi 2000, 127-131). Hence, to an arbitrary
neuronal group – viewed as an embedded subsystem of a relatively autonomous cluster of
functionally related neuronal groups – the particularly relevant information is given by the
difference that the other participating neuronal groups make to its own activity patterns, and
vice versa:

“Such information can be measured by the statistical quantity we defined previously as
mutual information. Specifically, consider an isolated neural system X, a subset j of k
elements (Xkj) of that system, as well as its complement in the system, indicated as (X - Xkj).
Interactions between the subset (Xkj) and the rest of the system will be reflected in correlations
of their activity. Mutual information between the subset and its complement is given by

MI(Xkj; X - Xkj) = H(Xkj) + H(X - Xkj) - H(X)

where H(Xkj) and H(X - Xkj) are the entropies of Xkj and X - Xkj considered independently and
H(X) is the entropy of the system considered as a whole. [Moreover,] H(Xkj), which indicates
the entropy of the subset (Xkj), is a general measure of its statistical variability [i.e. of the
subset ‘representing’ the neuronal group in question], being a function of the number of
possible patterns of activity that the subset of elements can take, weighted by the probability
of their occurrence.” (Edelman and Tononi 2000, 128)
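
The quoted measure can be checked with a small plug-in estimate in Python (an illustrative sketch with invented toy data, not Edelman and Tononi's actual procedure): the mutual information between a 'subset' and its 'complement' is the sum of their separate entropies minus the entropy of the whole.

```python
import math
from collections import Counter

def H(samples):
    """Plug-in estimate of Shannon entropy (in bits) from observed states."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Simultaneously observed binary activity states of a 'subset' of a system
# and of its complement (hypothetical toy data).
subset     = [0, 0, 1, 1, 0, 1, 0, 1]
complement = [0, 0, 1, 1, 1, 0, 0, 1]
joint      = list(zip(subset, complement))

# MI = H(subset) + H(complement) - H(whole system): positive when the parts
# 'make a difference' to each other, zero when they vary independently.
mi = H(subset) + H(complement) - H(joint)
print(round(mi, 3))
```

Here the two activity records are partially correlated, so the estimate comes out strictly positive; for two statistically independent records it would vanish.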

4. Experiential Reality

In their landmark book ‘Consciousness: How Matter Becomes Imagination,’ Edelman and
Tononi claim that the self-sensitive mutual informativeness through which the
abovementioned activity patterns can become their own “observer” occurs mainly among
neuronal groups located within the thalamocortical region of the mind-brain. In their opinion
– the essence of which is supported by many others in cognitive neuroscience – conscious
experience can emerge from purely physical neuroactivity through an extraordinarily high
level of reciprocal signaling (called ‘reentry’7) within and among thalamocortical activity
patterns (Edelman and Tononi 2000, 44).
However, an organism’s conscious activity patterns can ultimately not be told apart
from the various inner- and outer-organism non-equilibrium cycles in which they take part. In
other words, everything within and without the organism is crucial to, and participates in, the
abovementioned process-informative emergence of the mind-brain’s own observer. Hence, all
activity patterns that we usually like to think of as external to the conscious organism
are in fact undeniably part of – rather than apart from – the process of experience (Velmans
2009, 327). In this way, what previously may have been thought of as purely physical
phenomena belonging to the ‘real world out there’ should be understood as seamlessly
integrated with our mind-brain’s conscious activity patterns, so that, together, they form the
unified psychophysical content of nature’s all-embracing process of experience.
According to this view, our universe is an experiential psychophysical natural world in
which self-organizing reciprocal dynamics continuously develop within and between coupled
process-structures, thus ‘in-forming’ each other on all levels of organization (cf. Jantsch 1980,
11). Like this, nature forms one huge processual whole of synergetically evolving and
complexly interconnected dissipative structures, such as galaxies, solar systems, and also the
earth’s sun-dependent biosphere with its abundant variety of life forms. Above all, however, it
has given rise to intelligent organisms whose mind-brains facilitate higher-order conscious
experience.
In turn, this experiential consciousness is not a representational inner projection of the
external ‘real world out there’, or some elusive ‘mental ghost controlling the material
machinery of the organism’s body’, but a naturally developing within-nature activity. As
mentioned above, our higher-order conscious experience follows from nature’s process-based
reflexive informativeness – as a highly evolved confluent extension of it. To be more precise,
all of nature’s co-adaptive dissipative structures act together via the self-referential activity of
their in- and outflow cycles (Jantsch 1980, 49-50; Nicolis and Prigogine 1977; Schneider and
Sagan 2005, 81) to make up one giant endo-in-form-ative process.8

7 Reentry is what allows conscious organisms to partition their unlabeled living-environment with the help of
percepts, categories and concepts without invoking a homunculus or information-processing computer program
(cf. Edelman and Tononi 2000, 85). Moreover, “ ... reentry is a process of ongoing parallel and recursive
signaling between separate brain maps along massively parallel anatomical connections, most of which are
reciprocal. It alters and is altered by the activity of the target areas it interconnects.” (Edelman and Tononi 2000,
105-106)

8 Endo-in-form-ative: actively giving form to everything within itself – on all levels of organization.

5. Main consequences

Across all disciplines and echelons of science, the longstanding tradition of syntax-based
techniques and methodologies has dominated (and still dominates) our view of the natural
world and ourselves (Rosen 1991, 4-7; Cahill 2005, 1-2). As a result, we have convinced ourselves
that any scientific discipline should conform to the generic scheme of formal logic and
information theory in which any universe of discourse requires only the presence of
meaningless symbols being moved around according to some action-governing rules of
manipulation (Rosen 1991, 7).
However, this object-oriented syntax-based approach has led us to believe that nature
consists ultimately of physically interacting elementary particles obeying some external set of
syntactically spelled-out laws of nature, instead of considering it as one unified processual
whole of seamlessly interconnected process-structures. As suggested by Reg Cahill’s Process
Physics (2005), nature is to be regarded in an explicitly non-formal sense, as a giant self-
organizing process-information system in which embedded dissipative structures are engaged
in open-ended evolution.
According to Process Physics, we should interpret our universe as one psycho-physical
process containing all kinds of differentiated contents, including conscious organisms like
ourselves. In this way, it can also be seen as compatible with Max Velmans’s Reflexive
Monism (2009) and David Ray Griffin’s neo-Whiteheadian panexperientialism (1998 and
1999) in which our conscious experience is a concrescent extension of nature’s inherent
psychophysicality.
Conscious organisms are thus to be conceived as seamlessly integrated dynamic parts
of an embedding wholeness (the universe itself) that reflexively form a conscious view
of both their embedding surrounds and the therein processually embedded biological
organisms they like to think of as themselves (cf. Velmans 2009, 328). And in so far as we are
embedded organisms equipped with a dynamically evolved conscious view on the larger
embedding universe, we participate in a reflexive process through which nature experiences
itself (Velmans 2009, 327-328).
6. References

Bateson, G., Steps to an Ecology of Mind, Chicago: University of Chicago Press, 2000 (first
published in 1972)

Berger, C.R., and R.J. Calabrese, “Some Explorations in Initial Interaction and Beyond:
Toward a Developmental Theory of Interpersonal Communication,” Human Communication
Research, 1, 99–112, 1975

Bohm, D., Wholeness and the Implicate Order, London: Routledge and Kegan Paul, 1980

–––––– , and B.J. Hiley, The Undivided Universe: An Ontological Interpretation of Quantum
Theory, London: Routledge, Chapman and Hall, 1993

Cahill, R.T., Process Physics: from information theory to quantum space and matter, New
York: Nova Science Publishers, 2005

Chaitin, G.J., Algorithmic Information Theory, Cambridge University Press, Cambridge, 1987
––––––––– , Thinking about Gödel and Turing: Essays on Complexity, 1970-2007, Singapore:
World Scientific, 2007
Damasio, A.R., The Feeling of What Happens: Body and Emotion in the Making of
Consciousness, New York: Harcourt Brace, 1999

Edelman, G.M., and G. Tononi, Consciousness: How Matter Becomes Imagination, London:
Allen Lane - The Penguin Press, 2000
Fast, J.D., Entropy; the significance of the concept of entropy and its application in science
and technology (2nd edition), London and Basingstoke: Macmillan, 1968
Fiske, J., Introduction to Communication Studies (2nd edition), London - New York:
Routledge, 2002
Floridi, L., The Philosophy of Information, Oxford: Oxford University Press, 2011
Griffin, D.R., Unsnarling the World-Knot: Consciousness, Freedom and the Mind-Body
Problem, Berkeley: University of California Press, 1998
––––––––– , “Materialist and Panexperientialist Physicalism: A Critique of Jaegwon Kim’s
Supervenience and Mind,” Process Studies, Vol. 28/1-2 (Spring-Summer), pp. 4-27, 1999

Jantsch, E., The Self-Organizing Universe: Scientific and Human Implications of the
Emerging Paradigm of Evolution, Frankfurt: Pergamon Press, 1980
Kolmogorov, A.N., Selected Works of A.N. Kolmogorov, Volume III: Information Theory and
the Theory of Algorithms, Dordrecht: Kluwer, 1993 (Translation by A. B. Sossinsky of:
Kolmogorov, A.N., Избранные труды: Теория информации и теория алгоритмов,
Moscow: Nauka, 1987)
Nicolis, G., and I. Prigogine, Self-Organization in Non-Equilibrium Systems: From
Dissipative Structures to Order through Fluctuations, New York: J. Wiley and Sons, 1977
Nöth, W., Handbook of Semiotics: Advances in Semiotics, Bloomington: Indiana University
Press, 1995
Pattee, H.H., “The Physics of Symbols: Bridging the Epistemic Cut,” Biosystems, Vol. 60, pp.
5-21, 2001
Rosen, R., Life Itself: a Comprehensive Inquiry into the Nature, Origin, and Fabrication of
Life, New York: Columbia University Press, 1991
Schneider, E.D., and D. Sagan, Into the Cool: Energy Flow, Thermodynamics, and Life,
Chicago and London: University of Chicago Press, 2005
Shannon, C.E., and W. Weaver, The Mathematical Theory of Communication, Urbana (Ill.):
University of Illinois Press, 1949
Solomonoff, R., “A Preliminary Report on a General Theory of Inductive Inference,” Report
V-131, Cambridge: Zator Co., Feb 4, 1960 (revision, Nov. 1960)
Velmans, M., Understanding Consciousness (2nd renewed edition), London: Routledge, 2009
Von Neumann, J., Mathematical Foundations of Quantum Mechanics, Princeton, NJ:
Princeton University Press, 1955
Zeman, J.J., “Peirce’s Theory of Signs,” in: T.A. Sebeok (Ed.), A Perfusion of Signs, 22-39,
Bloomington: Indiana University Press, 1977