The evolution of knowledge

- A unified naturalistic approach to evolutionary epistemology taking into account the impact of information technology and the Internet -

by
Dipl.-Inf. (univ.) Stefan Pistorius
Head of Software Department (private)
s-pistorius@t-online.de

ABSTRACT
How can we describe and understand the impact of information technology on the evolution
of human knowledge? In order to answer this question, we develop a naturalistic formal
model to describe both the individual and collective evolution of knowledge. We conceive
this evolution as a dynamic global network consisting of the networked knowledge of both
humans and their cognitive tools. To define 'knowledge' and the 'knowledge network', we
introduce the algorithmic concept of interactive adaptive Turing machines (IATM). IATMs
are an extension of the classical ‘universal Turing machine’ (UTM) as applied in functionalist
approaches to describe the human brain. Using a simple mathematical argument, we prove
that the UTM model cannot describe the phenomenon of knowledge acquisition. IATMs are
more powerful since they allow for the description of interaction processes between
individuals (i.e. humans and/or computers) and between individuals and nature. We argue that
these interaction processes cause the propagation and evolution of new knowledge. The model
supports a hypothetical realism according to which all knowledge is provisional. Furthermore,
it turns out that the ontogeny of an individual’s knowledge follows the same rules as the
phylogeny of 'knowledge domains' and the overall global knowledge network. Thus, the
model may be looked at as a unified approach to different branches of evolutionary
epistemology. Above all, the network view of knowledge evolution enables us to derive new
epistemic insights from results in complex network research.

1 Introduction

This article was written in an attempt to answer the following question:
• How can we understand the dramatic impact of information technology and the Internet on the evolution of human knowledge, and what could this mean for the future of humankind?
To answer this question, we first need to answer the following questions:
• How can we model and explain 'knowledge', and the 'evolution' of knowledge?
• Which rules govern the evolution of individual and collective knowledge?
We hoped to find answers in the field of evolutionary epistemology. Michael Bradie and
William Harms differentiate two programmes of evolutionary epistemology1. One is
concerned with the evolution of epistemological mechanisms (EEM), the other with the
evolution of theories (EET). The EEM programme focuses on the function of knowledge in
making the survival of organisms more likely. Thus, organisms with better cognitive
mechanisms (i.e. sensory systems, brains) have a higher chance of surviving than those with
less adequate cognitive mechanisms. Konrad Lorenz's, Donald T. Campbell's and Gerhard
Vollmer's naturalistic approaches are typical examples of the EEM programme2. The EET
programme accounts for the development of knowledge within knowledge communities as a
result of variation and selection processes of 'ideas', 'scientific theories' and culture in general.
Karl Popper, Stephen Toulmin, and Donald T. Campbell3 are exponents of the EET
programme. Although both branches represent naturalistic approaches that emphasise the
importance of natural selection, there has been no satisfactory unified theory. Moreover, none
of the original approaches takes into account the impact of information technology and the
Internet on the propagation and evolution of knowledge.

Our approach
In order to integrate the EEM and EET programmes as well as the technological aspects of
knowledge evolution, we introduce a formal model that abstracts from the concrete selection
processes of the two approaches. In Section 2, we define 'interactive adaptive Turing
machines' (IATMs), a computational model for a dynamic adaptive network of interacting
intelligent agents. From the computational model, we can derive precise definitions of
important epistemic concepts. First, we introduce our notions of 'factual' and
'transformational' knowledge. In Section 3, we apply these definitions to model the network
of knowledge of a single agent (i.e. human or computer), which we call her/his/its 'world
view'. In Section 4, we look at networks of interacting agents. A group of interacting agents
may constitute a particular field of knowledge, which we call a 'knowledge domain'. The
knowledge network of all agents constitutes the global knowledge network. It turns out that
all our knowledge about the world is hypothetical. Only the members of a knowledge domain
decide on the adequacy of knowledge. On each level of granularity, from a single agent's
network of knowledge to supra-individual knowledge domains and the global network,
knowledge evolution follows the same rules. In Section 5, we look at the topology of the
global knowledge network and derive some epistemological results from complex network
research. In Section 6, we discuss the near- and long-term prospects of knowledge evolution.

1 see BRADIE, Michael and HARMS, William F. (2008)
2 see LORENZ, Konrad (1973), CAMPBELL, Donald T. (1974), VOLLMER, Gerhard (2005), and VOLLMER, Gerhard (2003)
3 see POPPER, Karl (1963) and (1984), TOULMIN, Stephen (1972), CAMPBELL, Donald T. (1974)

2 Standard Turing machines and interactive adaptive Turing machines

The English mathematician Alan Turing provided an influential formalisation of the concepts
of algorithm and computation with the so-called Turing machine (TM). On an abstract level,
every TM is a device that reads a finite input string (always one symbol at a time) from an
input tape and rewrites the tape based on a finite set of rules (i.e. the software). We say the
TM accepts an input string if it starts at the beginning of the input string and halts after a
finite number of steps in a halting state. The new, rewritten string on the tape is called the
output string. Although this model seems very simple, it can be proven to implement the most
complex functional computations. Turing machines are one of several ways to describe the
mathematical class of what are known as 'µ-recursive functions'4. A special Turing machine
is the universal Turing machine (UTM), which can simulate any other Turing machine.
(U)TMs are used in machine state functionalism within the philosophy of mind5 to describe
the functioning of the human brain. For various reasons, critics have raised objections against
computational functionalism6. We add a mathematical argument against the use of Standard
Turing machines to describe human thinking:
Whenever a (U)TM starts with a given input string s0, it either always accepts s0 or never. In
other words, the set of input strings it accepts is fixed. Accordingly, a Turing machine cannot
'learn'! There is no way a (U)TM could ever change its operational behaviour. Even a non-
deterministic (U)TM always accepts a fixed set of input strings7. Therefore, it is false to say
that a TM can simulate a human brain, since each human can acquire knowledge and change
her/his response (i.e. acceptance or non-acceptance) to the same input.
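To illustrate the argument, here is a minimal Python sketch of our own (not part of the formal model): a classical TM corresponds to a fixed rule set, so the set of strings it accepts never changes, whereas an agent with persistent, rewritable memory can change its response to the very same input.

```python
# Illustrative sketch (hypothetical names): a fixed TM vs. an adaptive agent.

def tm_accepts(s: str) -> bool:
    """A (U)TM is a fixed rule set: its accepted set of strings never changes."""
    return s.startswith("0")          # stands in for an arbitrary fixed rule set

class AdaptiveAgent:
    """An agent with persistent memory that can rewrite its own rules."""
    def __init__(self):
        self.rule = lambda s: s.startswith("0")   # initial 'algorithm'

    def accepts(self, s: str) -> bool:
        return self.rule(s)

    def adapt(self, new_rule):
        """An 'adaptation message' replaces part of the stored algorithm."""
        self.rule = new_rule

agent = AdaptiveAgent()
print(tm_accepts("1x"), agent.accepts("1x"))   # False False
agent.adapt(lambda s: s.startswith("1"))       # learning changes the response
print(tm_accepts("1x"), agent.accepts("1x"))   # False True: same input, new answer
```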
A Standard Turing machine is not even adequate to describe a modern computer. At least four
new ingredients need to be added to the model of computation:
• Persistent memory8: In contrast to a Standard Turing machine, humans and modern
computers retain a persistent memory even if they are turned off (or asleep) for a while.
If they start again, further computations may depend on the memory content.
• Interaction: A TM does not interact with its environment. As we will see, 'interaction' is
fundamental for the propagation of existing knowledge and for evolution, i.e. the learning
of new knowledge.
• Infinity of operation: Humans or computers may in principle interact with their
environment without a definite end.
• Non-uniformity of programs: Agents in a network may change their algorithms during
operation. Nowadays most computers are regularly upgraded, and their software, which
represents their algorithms, may be fundamentally changed. If the agent represents a
human, the human may have learned something from others.
To model this kind of computation we introduce the abstract notion of interactive adaptive
Turing machines (IATMs), similar to the notion of interactive Turing machines with advice,
4 LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Section 5 introduces 'µ-recursive functions' and several other alternatives to the TM and discusses the so-called Church-Turing thesis, according to which there is no other, more powerful formalisation of 'effectively calculable' functions.
5 see for instance PUTNAM, H. (1960)
6 see for instance SHAGRIR, O. (2005)
7 Moreover, it can be proven that any set of input strings accepted by a non-deterministic TM can also be accepted by a deterministic TM (see LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), p. 211)
8 see GOLDIN, Dina and WEGNER, Peter (2003) and GOLDIN, Dina and WEGNER, Peter (2005) for articles about the expressiveness of interactive computing with persistent memory compared to the classical Turing machine model.

which were first introduced by Jan van Leeuwen and Jiří Wiedermann9. We omit the
mathematical description and concentrate on the essential features.
Definition (informal), Interactive adaptive Turing machine (IATM)
An interactive adaptive Turing machine (IATM) is a device that
• receives an unbounded sequence of messages (i.e. finite strings) from other IATMs or
'sensorial data messages' from nature via its input port,
• does 'algorithmic computations' depending on its state, the memory content (an essential
difference from a Standard Turing machine!) and the input received,
• continuously sends an unbounded sequence of messages to other IATMs via its output
port, and
• has an unbounded persistent read/write memory to 'memorise' data (i.e. messages or
intermediate results) as well as its algorithmic rules10.
Below we explain the IATM model of operation and define the essential concepts in a semi-
formal way.
• Interaction: An IATM M1 interacts with another IATM M2 if M1 outputs messages to the
input of M2 (and possibly vice versa). The 'purpose' of interaction is to send or exchange
messages. Messages may alter the memory. IATMs receiving 'messages' from nature
represent human sense organs or technical sensors.
• Network of IATMs: A network of IATMs is made of a set S of interacting IATMs, where
the nodes are the IATMs and the message exchange relations within S define the edges11.
• Environment of IATMs: For each IATM, everything delivering messages to its input and
receiving messages from its output is called its environment. Accordingly, we define the
environment of a network S of IATMs as everything delivering input from outside S (i.e.
other IATMs not in S, or nature) and everything receiving output outside S (i.e. other
IATMs not in S12).
• Mutation: If the interaction between an IATM M and its environment (i.e. other IATMs or
nature) leads to some kind of disruption (see Section 4), then M or the environment might
'mutate' and sometimes successfully adapt its algorithm, or the interaction might remain
disturbed. To mutate, the IATM rewrites the parts of its persistent memory containing the
algorithmic rules.
• Adaptation: An IATM may receive an 'upgrade' or 'adaptation message' from another
IATM interacting with it, in order to rewrite/adapt its algorithm and other data. If, for
instance, M1 and M2 were computers that exchange erroneous business data, M3 could be
a human or a computer on the Internet that provides the upgrade of M1 and/or M2.
'Adaptation messages' are highly effective (far more effective than rare and undirected
'mutations') and can be regarded as a kind of learning from others.
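To make the informal definition concrete, here is a minimal Python sketch of an IATM as a message-processing object. All names (`IATM`, `step`, the `"ADAPT"` message tag) are our own illustrative choices, and the rules are represented by a Python callable stored in memory; a full model would encode the rules as rewritable data.

```python
from collections import deque

class IATM:
    """Sketch of an interactive adaptive Turing machine (illustrative only).
    Persistent memory holds both data and the (rewritable) algorithmic rules."""

    def __init__(self, name, rules):
        self.name = name
        self.memory = {"rules": rules, "data": []}   # rules and data share one memory
        self.inbox = deque()                          # input port
        self.outbox = deque()                         # output port

    def receive(self, message):
        self.inbox.append(message)

    def step(self):
        """One operation cycle: consume a message, compute, emit messages."""
        if not self.inbox:
            return
        msg = self.inbox.popleft()
        self.memory["data"].append(msg)               # memorise the input
        if isinstance(msg, tuple) and msg[0] == "ADAPT":
            self.memory["rules"] = msg[1]             # adaptation message rewrites the rules
        else:
            out = self.memory["rules"](msg, self.memory["data"])
            if out is not None:
                self.outbox.append(out)

# Two interacting IATMs: M1 echoes its input, M2 upper-cases it.
m1 = IATM("M1", lambda msg, mem: msg)
m2 = IATM("M2", lambda msg, mem: str(msg).upper())
m1.receive("hello")
m1.step()
m2.receive(m1.outbox.popleft())   # interaction: M1's output becomes M2's input
m2.step()
print(m2.outbox.popleft())        # HELLO
```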

For the further discussion, we need:

Theorem 1: For every finite set S of IATMs there exists a single IATM M that sequentially
implements the same computation as S does13.
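The intuition behind the theorem can be sketched as a round-robin scheduler: a single machine holds the message queues and rules of all members of S in its own memory and interleaves their steps, delivering internal messages itself. The sketch below shows only this interleaving idea under simplified assumptions of our own (the names `run_sequentially` and `routes` are ours); it is not the formal proof, which requires the protocol assumptions mentioned in footnote 13.

```python
from collections import deque

def run_sequentially(machines, routes, steps=10):
    """Round-robin sketch of Theorem 1: a single loop (one 'machine')
    interleaves the computations of a finite set S of agents.
    Each agent is (inbox, rule); 'routes' says whose output feeds whose input.
    Messages leaving the set S are collected in 'external'."""
    external = []
    for _ in range(steps):
        for name, (inbox, rule) in machines.items():
            if inbox:
                out = rule(inbox.popleft())
                target = routes.get(name)
                if target in machines:
                    machines[target][0].append(out)   # internal delivery
                else:
                    external.append((name, out))      # output of the whole set S
    return external

machines = {"M1": (deque(["ping"]), lambda m: m + "!"),
            "M2": (deque(), str.upper)}
print(run_sequentially(machines, {"M1": "M2"}))   # [('M2', 'PING!')]
```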
9 see VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001). A purely mathematical definition can be found in VERBAAN, Peter (2005).
10 Van Leeuwen and Wiedermann use a so-called advice function, which is non-computable and less intuitive than our read/write memory, which allows rewriting algorithmic rules. Our definition corresponds to the so-called von Neumann architecture of modern computers, where programs and data both reside in the same memory.
11 A precise and formal definition of a dynamic network (like the Internet) based on an interactive Turing machine concept can be found in VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001).
12 It does not seem 'reasonable' for an IATM to output messages to nature. Instead, we assume that a human or a computer that applies an algorithm represented by an IATM interacts physically with nature.
13 For a formal proof one needs to formulate some more detailed assumptions about the network protocols and the method of operation of an IATM that we do not define in this article. The idea of the proof may be found in VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001), proposition 10, for the so-called Internet machine, a model for a time-varying set of interacting machines in the Internet.

3 Knowledge and individual world views

Based on the definitions in the previous section we can now define our concept of knowledge
and illustrate how to apply it to humans and computers.

Definition, Factual knowledge of an IATM at time t
The factual knowledge of an IATM M at time t resides in its memory and may consist of the
following:
• data patterns (concepts or non-conceptual patterns) needed to process and memorise
input,
• propositions (either received from other IATMs or derived from M's own algorithm).

Definition, Transformational knowledge of an IATM at time t
The transformational knowledge of an IATM at time t is the algorithm residing in its memory
at t, used to analyse input and to derive new messages.

In short, factual knowledge means knowledge-that and transformational knowledge means
knowledge-how. In our context, knowledge-how is the kind of knowledge needed to derive
new factual knowledge from already existing knowledge or from input from nature. Factual
knowledge can be anything from sensorial patterns needed to recognise entities in the
environment, basic concepts, and simple propositions about observations up to propositions
in scientific theories. In general, the definitions abstract from any concrete human mental
state, motivation or cognitive mechanism. We now discuss in more detail how knowledge
can be attributed to agents, but the knowledge core is an abstract notion independent of the
knowledge holder.
Gerhard Vollmer's view of the 'hierarchical structure of human knowledge' describes
different categories of human knowledge14. According to his 'projective model', human
knowledge consists of four layers (see Fig. 1). Any knowledge acquisition starts with sensory
input from the environment. Some sensations transform into perceptions, some perceptions
into experiences, and finally some experiences transform into scientific knowledge. We
represent each step as transformational knowledge (i.e. transformational rules) and factual
knowledge (i.e. patterns, concepts, facts) of one or more IATMs. The first two steps proceed
subconsciously; they are something that our sense organs somehow convey to us. After the
two non-conceptual steps, conceptual output may be produced, something that we call
experience, which has meaning to a human. Some experiences (e.g. particular experiments)
may finally help to build scientific theories. Each of the four steps may involve a complex
multitude of IATMs, and it depends on the individual what kind of factual and
transformational knowledge it possesses. If we consider all the knowledge each human
possesses, we must say that a whole network of hundreds or thousands or even millions of
interacting IATMs (the number depends on the details of the knowledge model) might be
necessary to describe a single human's formalised conceptual and non-conceptual, factual
and transformational knowledge15.
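As a stylised illustration (a toy sketch of our own, not Vollmer's model itself), the four layers can be pictured as a chain of IATM stages, each applying its transformational rules to the output of the layer below; the concrete stage functions here are placeholders.

```python
# Stylised sketch of the four-layer 'projective model' as a chain of stages.
# The stage functions are placeholders; a real model would need far richer rules.

def sensation(raw):               # non-conceptual: sensorial patterns
    return [x for x in raw if x is not None]      # filter out 'noise'

def perception(signals):          # non-conceptual: perceptional patterns
    return sorted(set(signals))                   # group signals into a pattern

def experience(patterns):         # conceptual: concepts, ordinary facts
    return {"observed": patterns}                 # attach a concept to the pattern

def scientific_knowledge(facts):  # conceptual: scientific facts
    return {"hypothesis": f"regularity over {facts['observed']}"}

message = [3, None, 1, 3, 2]      # 'sensorial data message' from nature
for stage in (sensation, perception, experience, scientific_knowledge):
    message = stage(message)      # each layer transforms the layer below
print(message)                    # {'hypothesis': 'regularity over [1, 2, 3]'}
```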

14 see VOLLMER, Gerhard (2003), Vol. 1, p. 33 or p. 89
15 However, according to Theorem 1, all these IATMs can be simulated by a single IATM. Therefore, we can say that all formalised human knowledge at time t may be regarded as a unity.

[Figure 1: the world view of an agent. Input from the environment passes through four layers of IATMs up to the output: sensation IATMs (transformational rules, sensorial patterns) and perception IATMs (transformational rules, perceptional patterns) constitute the non-conceptual knowledge; experience IATMs (transformational rules, concepts, ordinary facts) and scientific knowledge IATMs (transformational rules, concepts, scientific facts) constitute the conceptual knowledge.]

Figure 1

Theoretically, each of the billions of the brain's neurons could be modelled as an IATM if we
had an expedient and operational theory of a single neuron's 'knowledge'. In our epistemic
context, it is enough to keep in mind that all our knowledge is interactively connected and
evolving all the time. Altogether, the different IATMs account for her/his/its view of the
world, the world view.

Definition, world view: The world view of an agent at time t is the network of IATMs
representing her/his/its conceptual and non-conceptual, transformational and factual
knowledge at time t (see Fig. 1).

This definition can also be applied to the software architecture of a (regularly upgraded)
computer. It does not matter whether we apply the IATM model of Fig. 1 to a human or to a
computer. As soon as we have all the details of an operational model of a human's factual and
transformational knowledge, we are able to describe it by (a network of) IATMs, and hence it
could be implemented on a computer. It remains an open question whether there is some kind
of human knowledge different from our definition of factual and transformational knowledge.
Whatever the answer, in our context we focus on knowledge defined by our formal model.
Moreover, we exclude other cognitive phenomena such as attention, emotion, consciousness
and will.

4 Knowledge evolution and supra-individual knowledge domains

So far we have only examined an individual's structure of knowledge, which we have called
her/his/its world view. Now we concentrate on the interaction processes that constitute and
influence the knowledge of both individuals and interacting groups. For further analysis, we
need the following definitions.

Knowledge Propagation: Knowledge propagates if one IATM outputs a message to another
IATM that accepts and memorises it as knowledge.

Knowledge Evolution: If the interaction process is disrupted and one or both parties adapt
their knowledge to be able to exchange messages, we talk about knowledge evolution.

In order to motivate the model of knowledge evolution, we analyse possible operational
faults among interacting IATMs and their strategies to 'settle their differences'. We restrict
ourselves to the interaction between two IATMs. It follows from Theorem 1 that this is
enough to describe the 'settling' process for a whole network.

a) Erroneous knowledge of an IATM
One IATM could have a bug in its rules or in its factual knowledge base such that it
occasionally releases an obviously erroneous message.

b) Knowledge of sender contradicts knowledge of receiver
In this case, neither partner has an obvious bug in its rules, but the interaction process still
does not work, because the two parties work with different patterns or concepts and hence
cannot accept each other's messages.

In concrete networks, there are many more possible sources of disruption resulting from
interaction. For instance, synchronisation problems or message routing problems with loss of
messages are difficult problems in real-world networks. We can disregard these, because
they are not essential in our context.
To resolve the disruptions, the IATMs have to evolve their knowledge. The nature of the
evolution depends, of course, on the problem. The interesting questions are how bugs can be
avoided and how we can be sure that an adaptation is a correct solution to the problem. The
answer is as follows: for theoretical and practical reasons, we can never be sure that in a
dynamically changing environment an IATM works as required. In other words:

Theorem 2: All transformational and factual knowledge is hypothetical.


Because this argument is essential for the further discussion, we have to prove it. First, we
have to be precise about what it means to prove that an 'IATM M works as required'. Since M
could be adapted at any time, we assume that M, beginning at time t, consumes only one
message (i.e. a foreseeable input string of finite length). By this assumption, we look at M as
if it were a Standard Turing machine (see Section 2) for a while. Then we need a formal
specification of the expected behaviour of M and a proof that M performs accordingly.
Unfortunately, theory tells us that we cannot even be sure that M will ever halt on the input
message16. All we could prove is the so-called partial correctness of M at time t17. Since we
are interested in M's performance in the context of its environment, we have to make
assumptions about the environment too. If we do not care about M's environment, it could
send an unacceptable message. But if we want to be sure about the behaviour of the
environment, we also need a proof of the partial correctness of the environment at t. Since all
knowledge propagation may have started somewhere with input from nature, we would need
a proof of the 'transformational behaviour' of nature in order to produce sensorial data.
However, there is no way to prove that nature's 'behaviour' meets a formal specification,
because all we know about nature is (scientific) theory, and the theory is precisely what we
would like to prove. Thus, nature could trigger a chain reaction that could lead to disruptions
in the knowledge propagation process. Only a posteriori would we be able to formulate an
adequate formal specification of the transformational behaviour of M. As an epistemic
consequence, we get: in an unforeseeably changing environment, the correctness of
transformational and factual knowledge of an IATM cannot be adequately formalised.

16 for a formal proof see for instance LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), pp. 283-284
17 Partial correctness defines correctness neglecting the halting problem. For the theory of program verification see for instance LOECKX, Jacques and SIEBER, Kurt (1984)

A reasonable correctness criterion can only be formulated for periods without unforeseen
changes. The conclusion is: 'All factual and transformational knowledge is hypothetical.' The
only way to improve an IATM's erroneous performance is by trial and adaptation on error!

The decisive question is: WHO specifies what the correctness (even in times without changes)
of the factual and transformational knowledge of an IATM at time t should be? In case b)
above, it is not possible to decide who is wrong and who should adapt. Moreover, if for some
reason concepts need to be changed, each of the IATMs using the concept may have to adapt.
We say they all belong to the same 'knowledge domain'.

Knowledge Domain (see Fig. 2 and Fig. 1)
A knowledge domain is the content of a particular field of knowledge. It consists of factual
knowledge (concepts and propositions) without obvious contradictions and of the
transformational knowledge necessary to deduce the factual knowledge.
A more technical definition: a network of agents exchanging more messages within their
network than with others constitutes a knowledge domain18.
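The technical definition suggests a simple operational test, sketched below under simplifying assumptions of our own (the actual community identification by Flake, Lawrence and Giles on the WWW graph is more sophisticated): count the messages exchanged inside a candidate set of agents against those crossing its boundary.

```python
def is_knowledge_domain(candidate, message_counts):
    """Sketch of the technical definition: 'candidate' is a set of agents,
    'message_counts' maps (sender, receiver) pairs to numbers of messages.
    The set qualifies if more messages flow inside it than across its border."""
    internal = sum(n for (s, r), n in message_counts.items()
                   if s in candidate and r in candidate)
    crossing = sum(n for (s, r), n in message_counts.items()
                   if (s in candidate) != (r in candidate))
    return internal > crossing

traffic = {("a1", "a2"): 40, ("a2", "a1"): 35,   # lively exchange inside {a1, a2}
           ("a1", "a3"): 5,  ("a3", "a4"): 50}   # sparse links to the outside
print(is_knowledge_domain({"a1", "a2"}, traffic))   # True
print(is_knowledge_domain({"a1", "a3"}, traffic))   # False
```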

According to this definition, there may be many different kinds of knowledge domains, and
there may be hierarchies of knowledge domains. Some knowledge domains may consist of
scientific knowledge, some of cultural or everyday knowledge, and others only of
non-conceptual knowledge. The reason for using the expression 'without obvious
contradictions' is that we cannot be sure of the existence or non-existence of disruptions, as
can be shown by an argument similar to that in a). Only if contradictions are detected (by
tests or accidentally), and only if the members of the knowledge domain 'feel the pressure' to
eliminate these contradictions, will knowledge evolve. The more intense the interaction
between intelligent agents, the more likely it is that knowledge contradictions will emerge
and the higher the pressure to resolve the contradictions will be. Therefore, we say the
knowledge domain specifies the knowledge requirements and decides on its correctness, or
rather its adequacy. By means of mutation or an adaptation message from others, the agents
of a knowledge domain adapt their knowledge to new requirements. In short: knowledge
evolution is a process of trial and adaptation on error.
We summarise the results of this section by the following rules that govern the evolution of
both individual and collective knowledge:
R1) Interaction triggers the propagation and evolution of knowledge.
R2) All knowledge is hypothetical.
R3) Knowledge evolves by trial and adaptation on error.
Figure 2 visualises a network of four interacting agents, i.e. a small network of networks. The
global knowledge network consists of the knowledge of some billion humans and computers,
and we could never visualise it. Moreover, each single agent's network of knowledge may
consist of knowledge belonging to a vast multitude of different knowledge domains (in Fig. 2
only KD1 - KD5) and of a network of non-conceptual knowledge. Conversely, in the
evolution of each domain, there may be many agents involved. We model this by the
interaction of the agents (symbolised by arrows). Agents belonging to the same knowledge
domain may still have different world views, and this may have a significant influence on the
knowledge domain. The influence is twofold. First, it arises from the non-conceptual layers
of knowledge: if the sensory systems of two agents provide different experiences, this might
influence their attitude towards some knowledge domains.

18 This definition is probably precise enough to identify different domains in a network algorithmically. See BARABASI, Albert-László (2004), page 171, referring to Flake, Lawrence and Giles from the company NEC, who used the WWW link structure to identify 'communities' in this way.

Secondly, there are of course influences from the other knowledge domains to which the
agent belongs. The interaction between agents or between an agent and nature leads to
knowledge propagation and, in the case of disruptions, to knowledge evolution (see rule R3).
When we think of knowledge evolution by interaction with others, we first think of the
evolution of conceptual knowledge. However, there is also non-conceptual knowledge
evolution. First, non-conceptual knowledge evolution coincides with biological evolution,
because a human's cognitive organs (i.e. sense organs, nervous system, brain) are responsible
for the 'quality' of the innate non-conceptual knowledge used for interaction with nature.
Secondly, there is the ontogenetic, subconscious evolution of non-conceptual knowledge,
resulting from an individual's constant interaction with nature.

[Figure 2: knowledge domains and world views. Four agents, each with conceptual and non-conceptual knowledge, share five knowledge domains: KD1 comprises agents 1 and 2; KD2 agents 1, 2 and 3; KD3 agents 1, 3 and 4; KD4 agents 2, 3 and 4; KD5 agent 4 alone. Conversely, the world view of agent 1 is KD1 + KD2 + KD3, of agent 2 KD1 + KD2 + KD4, of agent 3 KD2 + KD3 + KD4, and of agent 4 KD3 + KD4 + KD5, each plus non-conceptual knowledge.]

Figure 2

The disruptions within knowledge domains might contribute to the evolution of scientific
theories. Karl Popper's conjectures-and-refutations approach to the evolutionary
epistemology of theories19 addresses some of these aspects. According to Popper (as well as
to our framework), there is no absolute truth. Every scientific theory (i.e. a 'knowledge
domain') can only be valued as a 'conjecture'. A good theory must be falsifiable, and as such
it is possible that new facts, i.e. messages from the environment, refute the theory. Then the
existing theory, or part of it, needs to be adapted. Therefore, genuine science (in contrast to
metaphysics) is to be seen as a progressive evolutionary process, i.e. a converging knowledge
domain. Philip Kitcher reflects in more detail on the 'division of cognitive labour' within a
scientific community, i.e. the message exchange processes within the knowledge domain
network20. Moreover, he describes and explains the 'consensus practice' within scientific
communities, and he stresses the influences of individual beliefs, i.e. the 'agents' world
views' in our terminology.21

19 see POPPER, Karl (1963)
20 see KITCHER, Philip (1990)
21 see KITCHER, Philip (1993) or GOLDMAN, Alvin (2006) for a short summary of Kitcher's ideas.

5 The topology of the global knowledge network

So far, we have not assumed anything about the topology of the global network of knowledge.
However, new results in the theory of networks should have an important impact on the field
of evolutionary epistemology. In particular, the branch of so-called scale-free network
research, introduced by Albert-László Barabási (see for instance BARABASI, Albert-László
(2004)), sheds light on many scientific disciplines, such as biology, physics, computer science
and the social sciences, and consequently on epistemology. The most stunning result is that
complex networks tend to be scale-free. This means that the structure of the whole network
evolves towards so-called hubs, i.e. nodes in the network that are linked to an enormous
number of other nodes. The more links a node possesses, the more likely other nodes are to
attach to it. This phenomenon is called preferential attachment. In the World Wide Web, for
instance, some of the hubs are the sites of 'Google', 'Yahoo', 'Microsoft' and others.
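A minimal sketch of preferential attachment follows (the simplest form of the Barabási-Albert growth rule, with one link per new node; the function name and parameters are our own): each newcomer attaches to an existing node with probability proportional to that node's current degree, which is exactly what makes hubs emerge.

```python
import random

def preferential_attachment(n_nodes, seed=0):
    """Grow a network one node at a time; each new node attaches to an
    existing node chosen with probability proportional to its degree."""
    random.seed(seed)
    degree = {0: 1, 1: 1}          # start from a single linked pair
    endpoints = [0, 1]             # each edge contributes both endpoints, so
                                   # sampling this list is degree-proportional
    for new in range(2, n_nodes):
        target = random.choice(endpoints)
        degree[new] = 1
        degree[target] += 1
        endpoints += [new, target]
    return degree

degree = preferential_attachment(10_000)
hubs = sorted(degree, key=degree.get, reverse=True)[:3]
print([(h, degree[h]) for h in hubs])   # a few early nodes dominate: the hubs
```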
One consequence for our discussion about knowledge propagation and knowledge evolution
is the following. It is generally known that some of the WWW and Internet hubs use their
links to accumulate enormous amounts of data. Moreover, they distribute data, and they
decide which data to distribute, i.e. they decide which knowledge to propagate. The so-called
page-rank mechanisms of the big search engines, for instance, establish a knowledge
selection mechanism. Even if the selection is meant to serve the receiver's needs, it inevitably
leads to favouritism towards some web content and hence to the perception of the respective
factual knowledge by many agents. Whether we like it or not, this phenomenon contributes to
the unification of knowledge domains and hence to the convergence of the world views of
many agents.
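To see how link structure turns into a knowledge selection mechanism, here is a textbook power-iteration sketch of the PageRank idea (not any search engine's actual implementation; names and the damping value are conventional assumptions): pages linked from many well-ranked pages accumulate rank, and hence visibility.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Textbook power-iteration sketch of PageRank.
    'links' maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, targets in links.items():
            share = damping * rank[p] / len(targets) if targets else 0
            for t in targets:
                new[t] += share          # each page passes rank to its targets
        rank = new
    return rank

# A tiny 'web': three pages all link to one hub, which links back to two of them.
web = {"hub": ["a", "b"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
ranks = pagerank(web)
print(sorted(ranks.items(), key=lambda kv: -kv[1]))  # 'hub' accumulates the most rank
```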
A second consequence of the recognition of the scale-free nature of the Internet, the WWW
and other networks is that completely new knowledge domains have been evolving, and they
differ in what they can tell us about the world. The basic idea behind the new methods of
generating knowledge is to explore the petabytes of accessible data and to find patterns of
collective behaviour in nature or human societies. Vice versa, it is possible to derive
knowledge about individual humans just by comparing a profile of their individual data with
these patterns. Therefore, Barabási advocates the establishment of a 'Computational Social
Science'. He argues that empirical research in this field can lead to an adaptation of scientific
and social theories and to the development of new scientific disciplines22. Beyond the
epistemic consequences, ethical questions arise23.

6 The future of the global knowledge network?

If we assume that the global interaction processes continue to intensify, one day any agent
will have immediate access to all the knowledge required at any moment of her/his/its
lifetime. In such a scenario, we will not be able to differentiate between the knowledge of a
single agent and the knowledge of the overall knowledge network. Knowledge will simply
come out of the 'cloud'24, and each human and all technical cognitive tools will be part of it.
Knowledge domains will develop on a global scale; some will evolve and converge very
rapidly, some will vanish, and new domains will emerge. However, it is not at all clear
whether this will lead towards a harmonised (i.e. free of obvious contradictions) world view
of all agents.

22 see BARABASI et al. (2009), p. 721
23 Gottschalk-Mazouz (see GOTTSCHALK-MAZOUZ, Nils (2007)) highlights some ethical aspects of this evolution. He also names typical features of knowledge that are compatible with the definitions in this article.
24 The term 'cloud' is in use in Information Technology (IT) and is a metaphor for computer networks like the Internet. 'Cloud Computing' means that IT services can come from anywhere and that users do not have to care (and have no chance to find out) about the origin of the different services.

As some research results about the World Wide Web indicate, scale-free networks (with
directed edges) can be 'fragmented'; this means that large parts of the web are disconnected
from each other25. The overall structure of the network of knowledge is still unknown, but we
may assume that there also exists a multitude of disconnected knowledge domains, because
the propagation of knowledge relies heavily on the Internet and the WWW. Moreover, the
propagation and evolution of knowledge are dynamic properties of the network, and research
on the dynamics of complex networks has only just begun. Last, yet importantly, 'success'
may depend on the nature and quality of the different knowledge domains. As long as the
usage of the world's natural resources discriminates against large parts of the world, new
(knowledge and physical) conflicts will always arise, and there will not be worldwide
harmony. Although we do not know what the future will bring, it is an interesting thought
experiment to consider what a harmonised global knowledge network would be like from the
model's theoretical point of view:
Every single agent (humans and technical devices) would be connected to the global
knowledge network. It would be free of obvious contradictions. Each agent's perception of
the world would be perfectly compatible with all knowledge (especially scientific knowledge)
about the world. Every single observation and every single interaction of an agent with nature
(even with her/his/its own physical body) would immediately contribute to the perception
and, if necessary, to the adaptation of the global network. The main goal of the global
network would be to survive the challenges of nature and the universe.

7 Conclusions and perspectives

A new formal approach to evolutionary epistemology
In order to explain the evolution of knowledge, we introduced the concept of interactive
adaptive Turing machines (IATM). It represents an interactive version of the classical
universal Turing machine (UTM). By a simple mathematical argument, we proved that UTMs
cannot explain knowledge evolution. By means of our IATM-based adaptive network model
of knowledge, we explained both the ontogeny of individual knowledge and the phylogeny of
supra-individual knowledge corpora. Since our knowledge definition is independent of the
knowledge holder, we attributed knowledge to both humans and their cognitive technical
equipment. Thus, we captured the impact of the Internet on knowledge evolution.
Furthermore, we proved that, according to the model, all our knowledge about the world is
hypothetical.

A unified framework for different branches of evolutionary epistemology
We investigated the rules that govern knowledge evolution. It turned out that interaction
processes between agents and between agents and nature trigger the propagation and
evolution of both individual knowledge and supra-individual knowledge domains. The more
intense the interaction, the more likely knowledge conflicts occur and the stronger the
pressure to resolve them. Since there is no reasonable truth criterion, knowledge adaptations
(i.e. knowledge evolution) are a process of trial and adaptation on error. Gerhard Vollmer's
'hierarchical structure of individual human knowledge' and Karl Popper's 'conjecture and
refutation' approach to the evolution of supra-individual scientific theories are examples of
evolutionary epistemologies that can be reinterpreted within the network model.

25 see BARABASI, Albert-László (2004)

Thus, the model represents a unified framework for the EEM and EET programmes of
evolutionary epistemology.

The network view of knowledge reveals new epistemic insights
The network view of knowledge enables us to derive new epistemic insights from
observations of the evolving Internet and from complex network research. The scale-free
structure of the World Wide Web and other observations reveal an accelerated convergence
of knowledge domains and the emergence of new knowledge domains. The global network as
a whole interacts with nature. According to the model, the challenge for the global network is
to adapt to future changes in nature.

Perspectives
The ideas presented in this article still need to be compared with similar approaches to
epistemology. This applies, for example, to the definitions of epistemic concepts and to the
computational model as a whole. Moreover, we think that the model represents an excellent
framework for social epistemology. To integrate the 'social', one has to identify the
promoters of knowledge evolution and to operationalise the details by means of a network of
IATMs. We are convinced that this will lead to clarity and precision in the respective theory.
Moreover, it seems very promising to interpret the results of the emerging discipline of
complex network research. In particular, Barabási's idea of a 'Computational Social Science'
promises new epistemic insights.

References

BARABASI, Albert-László (2004), Linked: How Everything Is Connected to Everything Else, ISBN 0-452-28439-2
BARABASI, Albert-László et al. (2009), Computational Social Science, Science, Vol. 323, pp. 721-723
BRADIE, Michael and HARMS, William (2008), Evolutionary Epistemology, The Stanford Encyclopedia of Philosophy (Winter 2008 Edition), Edward N. Zalta (ed.), URL = <http://plato.stanford.edu/archives/win2008/entries/epistemology-evolutionary/>
CAMPBELL, Donald T. (1974), Evolutionary Epistemology, in The Philosophy of Karl R. Popper, edited by P. A. Schilpp, LaSalle, IL: Open Court, pp. 412-463
GOLDIN, Dina and WEGNER, Peter (2003), Computation Beyond Turing Machines, Comm. ACM, Apr. 2003
GOLDIN, Dina and WEGNER, Peter (2005), The Interactive Nature of Computing: Refuting the Strong Church-Turing Thesis
GOTTSCHALK-MAZOUZ, Nils (2007), Internet and the flow of knowledge: Which ethical and political challenges will we face?, in Philosophy of the Information Society, Proceedings of the 30th International Wittgenstein Symposium, Kirchberg am Wechsel, Austria 2007, Volume 2
KITCHER, Philip (1990), The Division of Cognitive Labor, The Journal of Philosophy, 87: 5-22
KITCHER, Philip (1993), The Advancement of Science, New York: Oxford University Press
LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Elements of the Theory of Computation, Prentice Hall, ISBN 0-13-273417-6
LOECKX, Jacques and SIEBER, Kurt (1984), The Foundations of Program Verification, 2nd ed., Teubner, ISBN 3-519-12101-8; Wiley, ISBN 0-471-91282-4
LORENZ, Konrad (1973), Die Rückseite des Spiegels. Versuch einer Naturgeschichte menschlichen Erkennens, München/Zürich: Piper
POPPER, Karl (1963), Conjectures and Refutations: The Growth of Scientific Knowledge, ISBN 0415043182
POPPER, Karl (1984), "Evolutionary Epistemology", in Evolutionary Theory: Paths into the Future, (ed.) J. W. Pollard, London: John Wiley & Sons Ltd.
PUTNAM, H. (1960), "Minds and Machines", reprinted in Putnam 1975b, 362-385
SHAGRIR, O. (2005), The Rise and Fall of Computational Functionalism, in Y. Ben-Menahem (ed.), Hilary Putnam, Cambridge: Cambridge University Press, 220-250
TOULMIN, Stephen (1972), Human Understanding: The Collective Use and Evolution of Concepts, ISBN 0-691-01996-7
VAN LEEUWEN, Jan and WIEDERMANN, Jiří (2001), The Turing Machine Paradigm in Contemporary Computing, in Mathematics Unlimited - 2001 and Beyond, eds. B. Engquist and W. Schmid, Springer-Verlag
VERBAAN, Peter (2005), The Computational Complexity of Evolving Systems, http://igitur-archive.library.uu.nl/dissertations/2006-0202-200042/full.pdf
VOLLMER, Gerhard (2003), Was können wir wissen? 2 vols., Leipzig: Hirzel
VOLLMER, Gerhard (2005), How Is It That We Can Know This World? New Arguments in Evolutionary Epistemology, in Hösle, Vittorio and Illies, Christian (eds), Darwinism and Philosophy, University of Notre Dame Press, ISBN 0-268-03072-3 (hbk); ISBN 0-268-03073-1 (pbk), Chapter 13
