The evolution of knowledge
- A unified naturalistic approach to evolutionary epistemology taking into account the impact of information technology and the Internet -

by Dipl.-Inf. (univ.) Stefan Pistorius (private), Head of Software Department
How can we describe and understand the impact of information technology on the evolution of human knowledge? In order to answer this question, we develop a naturalistic formal model to describe both the individual and collective evolution of knowledge. We conceive this evolution as a dynamic global network consisting of the networked knowledge of both humans and their cognitive tools. To define 'knowledge' and the 'knowledge network', we introduce the algorithmic concept of interactive adaptive Turing machines (IATM). IATMs are an extension of the classical 'universal Turing machine' (UTM) as applied in functionalist approaches to describe the human brain. Using a simple mathematical argument, we prove that the UTM model cannot describe the phenomenon of knowledge acquisition. IATMs are more powerful since they allow for the description of interaction processes between individuals (i.e. humans and/or computers) and between individuals and nature. We argue that these interaction processes cause the propagation and evolution of knowledge. The model supports a hypothetical realism according to which all knowledge is provisional. Furthermore, it turns out that the ontogeny of an individual's knowledge follows the same rules as the phylogeny of 'knowledge domains' and the overall global knowledge network. Thus, the model may be looked at as a unified approach to different branches of evolutionary epistemology. Above all, the network view of knowledge evolution enables us to derive new epistemic insights from results in complex network research.
This article was written in an attempt to answer the following question:
How can we understand the dramatic impact of information technology and the Internet on the evolution of human knowledge, and what could this mean for the future of humankind?

To answer this question, we first need to answer the following questions:
How can we model and explain 'knowledge', and the 'evolution' of knowledge?
Which rules govern the evolution of individual and collective knowledge?
We hoped to find answers in the field of evolutionary epistemology. Michael Bradie and William Harms differentiate two programmes of evolutionary epistemology. One is concerned with the evolution of epistemological mechanisms (EEM), the other with the evolution of theories (EET). The EEM programme focuses on the function of knowledge in making the survival of organisms more likely. Thus, organisms with better cognitive mechanisms (i.e. sensory systems, brains) have higher chances of survival than those with less adequate cognitive mechanisms. Konrad Lorenz's, Donald T. Campbell's and Gerhard Vollmer's naturalistic approaches are typical examples of the EEM programme. The EET programme accounts for the development of knowledge within knowledge communities as a result of variation and selection processes of 'ideas', 'scientific theories' and culture in general. Karl Popper, Stephen Toulmin, and Donald T. Campbell are exponents of the EET programme. Although both branches represent naturalistic approaches that emphasise the importance of natural selection, there has been no satisfactory unified theory. Moreover, neither of the original programmes takes into account the impact of information technology and the Internet on the propagation and evolution of knowledge.
Our approach
In order to integrate the EEM and EET programmes as well as the technological aspects of knowledge evolution, we introduce a formal model which abstracts from the concrete selection processes of the two approaches. In Section 2, we define 'interactive adaptive Turing machines' (IATM), the computational model for a dynamic adaptive network of interacting intelligent agents. From the computational model, we can derive precise definitions of important epistemic concepts. First, we introduce our notions of 'factual' and 'transformational' knowledge. In Section 3, we apply these definitions to model the network of knowledge of a single agent (i.e. human or computer), which we call her/his/its 'world view'. In Section 4, we look at networks of interacting agents. A group of interacting agents may constitute a particular field of knowledge, which we call a 'knowledge domain'. The knowledge network of all agents constitutes the global knowledge network. It turns out that all our knowledge about the world is hypothetical. Only the members of a knowledge domain decide on the adequacy of knowledge. On each level of granularity, from a single agent's network of knowledge to super-individual knowledge domains and the global network, knowledge evolution follows the same rules. In Section 5, we look at the topology of the global knowledge network and derive some epistemological results from complex network research. In Section 6, we discuss the near- and longer-term prospects of knowledge evolution.
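The layered structure described above — individual world views, knowledge domains as groups of interacting agents, and the global network as their union — can be sketched informally. The following is an illustrative sketch only, not the paper's formal IATM model; all names (Agent, world_view, interact) are our own assumptions for the example.

```python
# Illustrative sketch (not the paper's formal model): agents holding
# provisional 'world views', propagating knowledge through interaction.

class Agent:
    """An agent (human or computer) holding a provisional world view."""
    def __init__(self, name):
        self.name = name
        self.world_view = set()   # all knowledge is provisional

    def interact(self, other, item):
        """Interaction propagates a piece of knowledge to another agent."""
        if item in self.world_view:
            other.world_view.add(item)

# A 'knowledge domain' is a group of interacting agents.
domain = [Agent("alice"), Agent("bob"), Agent("server")]
domain[0].world_view.add("hypothesis-H")

# Interaction propagates knowledge along the network.
domain[0].interact(domain[1], "hypothesis-H")
domain[1].interact(domain[2], "hypothesis-H")

# The global knowledge network is the union of all agents' world views.
global_network = set().union(*(a.world_view for a in domain))
print(sorted(global_network))   # ['hypothesis-H']
```

The same propagation step applies unchanged at every level of granularity, which is the sense in which ontogeny and phylogeny follow the same rules in this picture.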
see BRADIE, Michael and HARMS, William F. (2008)
see LORENZ, Konrad (1973), CAMPBELL, Donald T. (1974), VOLLMER, Gerhard (2005), and VOLLMER, Gerhard (2003)
see POPPER, Karl (1963) and (1984), TOULMIN, Stephen (1972), CAMPBELL, Donald T. (1974)
Standard Turing machines and interactive adaptive Turing machines
The English mathematician Alan Turing provided an influential formalisation of the concepts of algorithm and computation by means of the so-called Turing machine (TM). On an abstract level, every TM is a device that reads a finite input string (one symbol at a time) from an input tape, and rewrites the tape based on a finite set of rules (i.e. the software). We say the TM accepts an input string if it starts at the beginning of the input string and halts after a finite number of steps in a halting state. The new, rewritten string on the tape is called the output string. Although this model seems very simple, it can be proven to implement the most complex functional computations. Turing machines are one of several ways to describe the mathematical class of what are known as 'μ-recursive functions'. A special Turing machine is the Universal Turing machine (UTM), which can simulate any other Turing machine.

(U)TMs are used in machine state functionalism within the philosophy of mind to describe the functioning of the human brain. For various reasons, critics have raised objections against computational functionalism. We add a mathematical argument against the use of Standard Turing machines to describe human thinking: whenever a (U)TM starts with a given input string s, it either always accepts s or never does. In other words, the set of input strings it accepts is fixed. Accordingly, a Turing machine cannot 'learn'! There is no way a (U)TM could ever change its operational behaviour. Even a non-deterministic (U)TM always accepts a fixed set of input strings. Therefore, it is false to say that a TM can simulate a human brain, since each human can acquire knowledge and change her/his response (i.e. acceptance or non-acceptance) to the same input.

A Standard Turing machine is not even adequate to describe a modern computer. At least four new ingredients need to be added to the model of computation:
Persistent memory: In contrast to a Standard Turing machine, humans and modern computers have a persistent memory even when they are turned off (or are asleep) for a while. When they start again, further computations may depend on the memory content.
Interaction: A TM does not interact with its environment. As we will see, 'interaction' is fundamental for the propagation of existing knowledge and for evolution, i.e. the learning of new knowledge.
Infinity of operation: Humans or computers may in principle interact with their environment without a definite end.
Non-uniformity of programs: Agents in a network may change their algorithms during operation. Nowadays most computers are regularly upgraded, and their software, which represents their algorithms, may be fundamentally changed. If the agent represents a human, the human may have learned something from others.

To model this kind of computation, we introduce the abstract notion of interactive adaptive Turing machines (IATM), similar to the notion of interactive Turing machines with advice.
LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), Section 5, introduces 'μ-recursive functions' and several other alternatives to the TM, and discusses the so-called Church-Turing theses, according to which there is no other, more powerful formalisation of 'effectively calculable' functions.
see for instance PUTNAM, H. (1960)
see for instance SHAGRIR, O. (2005)
Moreover, it can be proven that any set of input strings accepted by a non-deterministic TM can also be accepted by a deterministic TM (see LEWIS, Harry R. and PAPADIMITRIOU, Christos H. (1981), p. 211).
see GOLDIN, Dina and WEGNER, Peter (2003) and GOLDIN, Dina and WEGNER, Peter (2005) for articles about the expressiveness of interactive computing with persistent memory compared to the classical Turing machine model.
