consciousness studies

Published by 420 on Sep 27, 2007

In his Chinese Room Argument, Searle shows that symbols on their own do not have any
meaning. In other words, a computer that is a set of electrical charges or flowing steel balls is
just a set of steel balls or electrical charges. Leibniz spotted this problem in the seventeenth
century. Searle's argument is also, in part, the Symbol Grounding Problem, which Harnad
(2001) defines as:
"the symbol grounding problem concerns how the meanings of the symbols in a system can
be grounded (in something other than just more ungrounded symbols) so they can have
meaning independently of any external interpreter."
Harnad defines a Total Turing Test in which a robot, connected to the world by sensors and
actuators, might be judged indistinguishable from a human being. He considers that a robot
that passed such a test would overcome the symbol grounding problem. Unfortunately, Harnad
does not tackle Leibniz's misgivings about the internal state of the robot being just a set of
symbols (cogs and wheels, charges, etc.). The Total Turing Test is also doubtful if analysed in
terms of information systems alone; for instance, Powers (2001) argues that an information
system could be grounded in Harnad's sense if it were embedded in a virtual reality rather
than in the world around it.
So what is "meaning" in an information system? In information systems a relation is defined
in terms of which thing contains another. Once one thing is established to contain another,
the contained thing is called an attribute of the container. A car contains seats, so seats are an
attribute of cars; cars are sometimes red, so cars sometimes have the attribute "red". This
containment of one thing by another leads to classification hierarchies of the kind stored in a
relational database. What Harnad was seeking to achieve was a connection between items in
the database and items in the world outside the database. This did not succeed in giving
"meaning" to the signals within the machine: they were still a set of separate signals in a
materialist model universe.
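The containment relation described above can be sketched in a few lines of code. This is a minimal illustration, not anything from the text itself; the names (`contains`, `has_attribute`, the car/seats example) are chosen to mirror the example in the paragraph.

```python
# A minimal sketch of "containment as attribute": one thing is an
# attribute of another if the second contains the first, directly or
# via nested containment. Names and data are illustrative only.

contains = {
    "car": {"seats", "wheels", "red"},   # a car contains seats, wheels, redness
    "seats": {"cushion"},                # seats in turn contain a cushion
}

def attributes_of(thing):
    """Return the direct attributes of a thing (what it contains)."""
    return contains.get(thing, set())

def has_attribute(thing, attr):
    """True if thing contains attr, directly or through nested containment."""
    direct = attributes_of(thing)
    if attr in direct:
        return True
    return any(has_attribute(part, attr) for part in direct)

print(has_attribute("car", "cushion"))  # prints True: car -> seats -> cushion
```

Walking the `contains` table in this way yields exactly the classification hierarchy the paragraph describes: each lookup answers "what contains what".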
Aristotle and Plato had a clear idea of meaning when they proposed that ideas depend upon
internal images or forms. Plato, in particular, conceived that understanding is due to the forms
in phenomenal consciousness. Brought up to date, this view implies that the way one form
contains another gives us understanding: the form of a car contains the form we call seats,
and so on. Even things that we consider to be "content" rather than "form", such as redness,
require an extension in space, so that there is a red area rather than red by itself (cf. Hume
1739). So if the empiricists are correct, our minds contain a geometrical classification system
("what contains what"), or geometrical relational database.
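A geometrical classification system of this kind can be sketched by giving each item a spatial extent and reading "what contains what" directly off the geometry. This is an illustrative sketch under assumptions not in the text: items are modelled as axis-aligned 2-D regions, and all names and coordinates are invented.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """An item modelled as an axis-aligned 2-D extent (illustrative)."""
    name: str
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, other):
        """Containment is read directly from the geometry."""
        return (self.x0 <= other.x0 and self.y0 <= other.y0 and
                self.x1 >= other.x1 and self.y1 >= other.y1)

car = Region("car", 0, 0, 10, 5)
seat = Region("seat", 2, 1, 4, 3)
red_patch = Region("red area", 5, 1, 7, 3)  # even "red" needs an extent (cf. Hume)

# No separate table of relations is stored: the classification
# hierarchy is implicit in the geometry.
items = (car, seat, red_patch)
hierarchy = [(a.name, b.name) for a in items for b in items
             if a is not b and a.contains(b)]
```

Here `hierarchy` comes out as pairs such as `("car", "seat")` without any explicit relation table, which is the advantage the next paragraph claims for a geometrical database over a sequential one.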
A geometrical database has advantages over a sequential database because items within it are
highly classified (their relations to other items being implicit in the geometry) and can also be
easily related to the physical position of the organism in the world. It would appear that the
way forward for artificial consciousness would be to create a virtual reality within the
machine. Perhaps the brain works in this fashion, and dreams, imagination and hallucinations
are evidence for this. In Part III the storage of geometrically related information in the "Place"
area of the brain is described. But although this would be closer to our experience, it still
leaves us with the Hard Problem of how the state of a model could become conscious.

•Harnad, S. (2001) Grounding Symbols in the Analog World With Neural Nets -- a
•Powers, D.M.W. (2001) A Grounding of Definition. Psycoloquy 12(56)
