Biology and Philosophy 12: 327–340, 1997.
© 1997 Kluwer Academic Publishers. Printed in the Netherlands.

Mind Architecture and Brain Architecture


On the claims made for a general synthetic model of consciousness

CAMILO J. CELA-CONDE
Department of Philosophy
Universidad de las Islas Baleares
Palma de Mallorca, Spain

GISÈLE MARTY
Department of Psychology
Universidad de las Islas Baleares
Palma de Mallorca, Spain

Abstract. The use of the computer metaphor has led to the proposal of “mind architecture”
(Pylyshyn 1984; Newell 1990) as a model of the organization of the mind. The dualist compu-
tational model, however, has, since the earliest days of psychological functionalism, required
that the concepts “mind architecture” and “brain architecture” be remote from each other. The
development of both connectionism and neurocomputational science has sought
to dispense with this dualism and provide general models of consciousness – a
“uniform cognitive architecture” – which are in general reductionist but which
retain the computer metaphor. This
paper examines, in the first place, the concepts of mind architecture and brain architecture, in
order to evaluate the syntheses which have recently been offered. It then moves on to show
how modifications which have been made to classical functionalist mind architectures, with
the aim of making them compatible with brain architectures, are unable to resolve some of the
most serious problems of functionalism. Some suggestions are given as to why it is not possible
to relate mind structures and brain structures by using neurocomputational approaches, and
finally the question is raised of the validity of reductionism in a theory which sets out to unite
mind and brain architectures.
Key words: mind architecture, brain architecture, computer metaphor, chaos, non-linear phe-
nomena, reductionism, cognitive functionalism, neurocomputation, PDP, knowledge process

“In working on the mind-brain problem over the last few years, I have become disheartened
on occasion and come to the conclusion that western man is uniquely equipped and situated
so as to make that problem unsolvable, even though it may be a rather simple problem indeed.
As a scientist, I would like to hold philosophers responsible for this state of affairs”
Gordon G. Globus, “Mind, Structure, and Contradiction” (1976).

Since the early stages of the ascendancy of cognitive psychology, the com-
puter metaphor has proved a tool of great heuristic power in the inter-
pretation of the brain/mind phenomenon. From it derives the concept of
“mental architecture” (Pylyshyn 1984; Newell 1990) as a model of a rela-
tively fixed and generalized – the term “innate” would perhaps sum up these
two characteristics1 – organization of the mind. But the computer metaphor
included only the mind properties and design, while the experimental evi-
dence of the structure and functions of the brain known at the time was left to
one side. For that reason, since the earliest period of psychological function-
alism, there has been a considerable distance between “mind architecture”
and “brain architecture”; the terms have been used by different specialists –
cognitive psychologists on the one hand and neurobiologists on the other –
and are generally restricted to one of the corresponding bibliographies.
The decline of cognitive functionalism and the upsurge of connectionism
and, above all, neurocomputational science, seemed to put an end to this
mutual incomprehension. Thus, general models of consciousness have been
proposed – some making specific reference to the question of architecture
– which set out to establish the relationship between mind and brain by
maintaining positions which are, in general, reductionist and still based on the
use of the computer metaphor. The usual strategy of these synthetic attempts
is to establish the existence of a “uniform cognitive architecture” (Ebdon
1993), shared by all the entities related to consciousness, be they brains or
computers. These unified models have great value: as they transcend the initial
dualism of functionalist psychologists, they contribute to the re-establishment
of the evolutionary and organic aspects of a phenomenon, such as thought,
whose adaptive value is so crucial. But the continued use of a computational
model imposes requirements which are, in our judgment, inadmissible in the
light of emerging understanding of brain architecture.
In this paper we will examine, in the first place, the concepts of mind
architecture and brain architecture, in order to evaluate the syntheses which
have recently been proposed. We will then seek to show that the modifications
which have been made to classical functionalist mind architectures in order
to make them compatible with brain architectures – which is the main aim of
the neurocomputational approach –, are unable to resolve some of the most
serious problems of functionalism. Then, we will suggest why it is difficult,
from our point of view, to relate mind structures and brain structures by using
neurocomputational approaches. Finally, we will raise the question of the
validity of reductionism in a theory which sets out to link mind and brain
architectures.

Two concepts of “Architecture”

What constitutes a mind architecture? The concept has its origins, as we


have mentioned, in the early days of the computer metaphor. In essence, its
central idea is that mental activities – information processing, storage and
structures. Those structures are not accessible by introspection nor, directly,
by experiment, but they reflect and explain the behavior of the subjects. The
following paragraph provides a rather lengthy clarification, by enumeration,
of what constitutes mind architecture:
The architecture largely or completely specifies a great many things of
interest to cognitive psychologists, such as: the number of different kinds
of memory available (e.g. short-term vs. long-term, procedural vs. declar-
ative, episodic vs. semantic); basic characteristics of these memories (e.g.
their organization, capacity, method of accessing information, causes of
retrieval failure); the ways in which knowledge can be represented (e.g.
symbol structures, images, skills) and the efficiency with which these
different representations can be processed (as measured, e.g. by speed
or error rates); and the characteristics of arousal and selective attention
(Ebdon 1993, p. 370).
The computer metaphor does not inevitably lead to these precise components
of mind architecture: exactly the same elements appear if we ask what consti-
tutes the work of a psychologist. The metaphor recovers its value, however,
when it is proposed that the way in which the mind/brain system functions,
when performing these mind operations, is similar to the software/hardware
system of a computer. From this starting point it is possible to deduce the
characteristics of the mental architecture from the knowledge we have of
information processing in machines.
This deductive reasoning leaves, by the way, the world of the brain on
one side. The determination of the first functionalists to deny the relevance
of biological approaches – those relative to “hardware” – meant that mental
architecture characteristics were inevitably inferred only from the behav-
ior of experimental subjects and not from examination of their “machines”.
As a result, such architecture had nothing at all to do with the
architecture of the brain itself, even though the cerebral structures were then
being disentangled by contemporary neurobiologists. The main authors of
the functionalist tendency within cognitive psychology, it should be stressed,
maintain that it is useless to examine the hardware (the brain) if our objective
is to explain the software (the mind). If the differences between the mind
and the brain are the same as those between hardware and software in the
computer, then we can forget about neurology: computer engineering itself
can provide a better explanation of the cerebral organization. This version of
the computer metaphor imposed, for instance, the idea of different stores, a
single central processor and other similar traits of mental architecture.
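The flavour of this functionalist picture can be conveyed in a toy sketch. The following is a minimal illustration, with entirely hypothetical class and method names, of the “different stores plus a single central processor” architecture; it deliberately says nothing about neurons, which is precisely the functionalist point.

    # Illustrative sketch only: hypothetical names, no claim about real brains.
    from collections import deque

    class ShortTermStore:
        """Small-capacity buffer; oldest items are lost when capacity is exceeded."""
        def __init__(self, capacity=7):
            self.items = deque(maxlen=capacity)
        def hold(self, item):
            self.items.append(item)

    class LongTermStore:
        """Unbounded associative store, accessed by cue."""
        def __init__(self):
            self.traces = {}
        def encode(self, cue, content):
            self.traces[cue] = content
        def retrieve(self, cue):
            return self.traces.get(cue)   # retrieval failure is returned as None

    class CentralProcessor:
        """A single serial processor shuttling symbols between the stores."""
        def __init__(self):
            self.stm, self.ltm = ShortTermStore(), LongTermStore()
        def perceive(self, stimulus):
            self.stm.hold(stimulus)            # input enters the short-term store
        def rehearse(self, cue, stimulus):
            self.ltm.encode(cue, stimulus)     # rehearsal transfers it to long-term memory
        def recall(self, cue):
            return self.ltm.retrieve(cue)

Nothing in this sketch constrains, or is constrained by, the biological substrate: any hardware that runs it realizes the same “mind architecture”.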
But the search for an equivalent – in terms of the cerebral organization
– to the mind architecture proposed by functional cognitive psychologists
becomes ambiguous. On the one hand, we might consider that we are looking
for hardware, in the strict sense in which cognitive functionalism seems to
understand the concept: the anatomical aspects of the neurons and the synaptic
connections established between their terminals. An example of this “brain
architecture” is that proposed by Szentágothai (1969) when he sets out the
columnar model of the cortex with the different types of neurons present,
modular elements, etc. This type of architecture is irrelevant to functional
psychologists in their search for mind explanations. Fodor, for instance, holds:
From a scientific perspective, it is no more than an accident that psycho-
logical systems are incarnated in biological systems. In fact, biological
theory tells you little about what is happening; what can inform you is
the theory of functional relations (García-Albea 1991; translated from the
Spanish original version).
But apart from physiological details – or, more precisely, in accordance
with them – neurobiologists have frequently postulated a brain structure
based on the analysis of its function, through architectural models which are
obtained, in many cases, from experimental studies of brain lesions.2 Classic
examples are Gazzaniga and collaborators (1962) concerning the functional
effects of severing the corpus callosum and, in more familiar territory for
psychologists, Milner and Petrides’ (1984) revision of the consequences of
lesions to the frontal lobes of the human being.
The separation between first order – anatomical – and second order –
functional – architectures deserves closer attention. Examination of the cel-
lular framework reveals that there is no single cortical architecture at this
level: it depends on the analytical methods used. Different staining techniques
reveal distinct types of cellular relations – cytoarchitecture, myeloarchitec-
ture, cytochrome oxidase architecture – and some of them show with greater
detail than others the functional relations which exist between cortical areas.
The metabolic architecture based on cytochrome oxidase, for instance, is of
great importance when it comes to understanding the functional organization
of the striate cortex (Zeki 1993, p. 67).
Despite this, the distinction between first and second order brain archi-
tecture still has sense, for two reasons. Principally, because of the purely
functional nature of some of the studies. But a second reason is the difference
in quantitative range. While anatomical structure often refers to a relatively
small set of neurons and synaptic connections – even if it involves the defi-
nition of modules which are subsequently extended to include large areas of
the brain – the architecture related to, let us say, “higher” functions includes
extensive zones. Following Szentágothai himself, a cerebral phenomenon
may be considered as the result of activity in a small zone, while the mind
should be considered as the activity of the brain as a whole, or at least of a
significant proportion of the zones which participate in a particular function.3
The recursive activity of the brain, which is directed to analyzing its own state
instead of only analyzing information which comes from the environment, is
what causes us to speak of the “mind”.4
It seems clear that what we could call first order brain architecture,
anatomy, would be tolerated by cognitive functionalists as part of this rela-
tively uninteresting hardware. However, second order brain architecture,
which sets out to explain the cerebral functions, enters into direct compe-
tition with “mind architecture” as understood by the cognitivists. Thus, when
it comes to establishing more or less synthetic relationships between brain
and mind architectures, we should clearly specify what we are referring to.

The persistence of the computer metaphor in the synthesis of architectures: the neurocomputational perspective

It is evident that without reasonable theories (or at least some coherent hypoth-
esis corresponding to the experimental data) which permit us to move from
simple brain architecture – anatomy – to more complex brain architecture –
functional activity of large zones of the cortex – we cannot go beyond those
aspects of the mind/brain question which have already been too frequently
rehearsed.
Imagine that such theories already exist. If we succeed in convincing
those cognitivist psychologists who have based all their work on the comput-
er metaphor that the study of the hardware is, after all, relevant, will our
problems have come to an end? Since the appearance of computational
neuroscience, or, to be more precise, since its high point – which could
be dated to 1988, with the appearance of MacGregor’s book (1988) and
the papers by Sejnowski, Koch and Churchland (1988), Smolensky (1988),
and Koch (1988) – its adherents have sought to make progress by creating
some kind of synthesis between cognitive functionalism, cerebral structure
and connectionism (PDP).
As is stated by Churchland, Koch and Sejnowski:
The expression “computational neuroscience” reflects the possibility
of generating theories of brain function in terms of the information-
processing properties of structures that make up nervous systems (Church-
land, Koch and Sejnowski 1990, p. 46).
Computational neuroscience, in relation to the architecture problem, tries
to build a bridge between the cognitive sciences – above all those with a
connectionist approach – and the anatomical and functional studies of the
brain carried out by neurobiologists. If we accept the distinction between the
“classical” architecture of the mind – Newell, Pylyshyn – and the “connec-
tionist” one – Rumelhart, Hinton and McClelland –, then the latter is that used
by practically all computational neurobiologists, with a consequent disappear-
ance of the reluctance to consider the hardware aspects of functional analysis.
From a connectionist perspective, the functionalists’ strategy of establishing
a radical dualism is incorrect in the light of the computer metaphor itself.
This has not only caused a major change in the approach of traditional
cognitive psychology: the arrival of neurocomputation has also created an
opportunity to develop new perspectives in the analysis of brain functions.
Marr’s classic work of computational neurobiology (Vision, 1982) is usually
cited as an example, but there are others which also show the importance
of mutual contact between the computational approach and the field of neu-
robiological experiment. For instance, the identification by Konishi and his
collaborators of a large proportion of the sound location algorithms to be
found in the brain of the barn owl – Tyto alba – which have been used
to construct “owl integrated circuits” (Konishi et al. 1988; Konishi 1992;
Pettigrew 1993). Or, in the opposite direction, the case of Wang, Mathur and
Koch (1990), who analyzed the visual measurement of movement in some
primates by using a computational algorithm which was originally developed
to endow a machine with sight.5 Even so, neurologists tout court have not
given their unanimous approval to these well-intentioned new computational
approaches. An example from Zeki (1993) gives a perfect résumé of their
concerns:
They are far behind the times in believing computational neurobiology can
come up with theories that are of direct relevance to the neurobiologist,
ones which may be put to the experimental test, by ignoring the facts of
the nervous system (…) To be of use to the neurobiologist, a computational
approach has to rely on the facts of the nervous system (Zeki 1993,
p. 119).
As can be seen, the same doubts which those working in neurobiology
express in relation to classical functionalist mind architectures are now
repeated, but, of course, with far greater intensity. Consequently, the tactical
problem which appeared earlier can now be repeated. What does the possible
limitation of computational neurobiology depend on? On our ignorance of
certain essential facts about the nervous system, as Zeki suggests? On certain
inherent problems in the computational approach? If the limit is only the need
to expand the neurobiological knowledge implied in computational models, it
is easily overcome. We will try to show, however, that the real barrier to neuro-
computation seems to be linked to its own standpoint: the fact of maintaining
the idea of a general theory of consciousness which both includes machines
and living things and, at the same time, goes beyond the merely banal. If this
is so, then connectionist architecture and neurobiological architecture are not
merely compatible: they are inseparable. But this is something which, from
our point of view, brings us up against serious difficulties.

The incompatibility of architectural features: volatility, mobility and chaos

When we try to identify the organizational features of the mind – why, for
example, we perceive in three dimensions, think in four dimensions with the
addition of time, and construct sentences using subjects, verbs and predicates
– it may occur to us that the range of the question should go beyond the mind
and the brain to also include computers. Furthermore, we might consider, like
Pylyshyn, that computation is a literal description of mind activity (Pylyshyn,
1980) and, in accordance with this supposition, propose synthetic models
which include the whole spectrum: mind, brain and machine. This is precisely
the task which neurocomputational science has set itself.
However, the proposals for synthesis run into the same problems as those
we mentioned in relation to neurocomputation: the elasticity of functions,
be it of machine or brain, is subject to structural conditions. In other words,
using computational terminology, the architectural characteristics are dictated
by the hardware, the presence of which is relevant when explaining some
functional aspects of the software. To the extent that the structure/function
system is radically different in the cortex and in computing machines –
be they connectionist or Von Neumann’s automata –, we cannot propose
common cognitive architectures. We are clearly still very far from a generally
accepted model of second order cortical architecture which can deal with
mind phenomena such as the construction of knowledge. The first proposals
on consciousness (Crick, 1994) have received more criticism than praise, but
it is beginning to seem inevitable that the picture of what constitutes mind is
quite remote from our intuitive understanding.
The main problem for computational neuroscience is the need to conceive
brain hardware as a structure which, contrary to that found in digital com-
puters based on Von Neumann’s original idea, is not fixed and immutable but
modified during the learning process. Ebdon (1993) recognizes the existence
of this difficulty even though it does not, as far as he is concerned, invalidate
the concept of a brain computational architecture. This is the case because,
in his opinion:
Clearly, a great deal of the brain’s organization is fixed and genetically
specified. As for the effects of maturation, we can largely avoid the prob-
lem by concentrating on the adult brain, in which learning is presumably
the only normal cause of significant changes in physical structure (except,
perhaps, for a small amount of random cell death) (Ebdon 1993, p. 371).
In fact, the best strategy for defending computational neuroscience is not
to leave ontogeny to one side, but rather to insist that the comparison between
brain and computer is carried out in connectionist terms: since the days of
Rosenblatt’s perceptron there has been discussion of computers capable of
learning. The supposition that brain organization is fixed and genetically
specified, something which is invariable once the maturation process is com-
plete, presupposes a return to models of the mind which are too rigid to be
able to comprehend what brain activity consists of. Ebdon seems to suggest
that the brain acquires certain features, and that those features, as far as infor-
mation processing phenomena are concerned, do not change except through
the death of neurons. What we are beginning to understand of the neural
phenomena involved in perception, however, contradicts this idea.
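The point about learning machines can be made concrete. Below is a minimal sketch of a Rosenblatt-style perceptron learning rule (the training data, learning rate and function names are our own illustrative assumptions): the weights, which play the role of the “architecture”, are rewritten by experience rather than fixed in advance.

    # Illustrative sketch of perceptron learning; all names are hypothetical.
    def train_perceptron(samples, epochs=20, lr=0.1):
        """samples: list of (inputs, target) pairs with target in {0, 1}."""
        n = len(samples[0][0])
        w, b = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, target in samples:
                out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
                err = target - out
                # error-driven change: the structure itself is altered by learning
                w = [wi + lr * err * xi for wi, xi in zip(w, x)]
                b += lr * err
        return w, b

    # learning the AND function from examples
    weights, bias = train_perceptron([((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)])

Even in this trivial case there is no principled line between the machine’s “hardware” (the weights) and its acquired competence.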
“Brain architecture” includes, of structural necessity, the presence of
different neurons on the one hand, and the existence of connections which
are established between them on the other. Not even the most radical defend-
ers of a computational model of the brain would deny that the role
which synaptic connections play in the best understood aspects of informa-
tion processing in the brain – such as perception – is mobile, dynamic and
volatile. These characteristics constitute precisely the opposite of a strictly
fixed “uniform architecture”.7 The same basic idea lies behind the neurobio-
logical theory of consciousness established by Crick and Koch (1990) drawn
from experimental evidence such as that of Gray and his collaborators
(Gray, Raether and Singer 1989) concerning the stimulation of the retina in
mammals. This is also Damasio’s conception (1989) when he introduces the
idea of epigenesis to understand even the phenomena of mind representation
in terms of time and space – which would, of course, be subject to stricter
innate control.
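Why synaptic “hardware” cannot be treated as fixed can be illustrated with the simplest rule of activity-dependent plasticity. The sketch below – a generic Hebbian update with decay, our own illustration rather than any of the cited authors’ models – shows that every new episode of activity rewrites the whole weight matrix:

    # Illustrative Hebbian plasticity with decay; not a model from the cited literature.
    def hebbian_step(w, pre, post, lr=0.05, decay=0.01):
        """w[i][j] is the synapse from pre-neuron j to post-neuron i."""
        return [[wij + lr * post[i] * pre[j] - decay * wij
                 for j, wij in enumerate(row)]
                for i, row in enumerate(w)]

    w = [[0.0, 0.0], [0.0, 0.0]]                      # two neurons, four synapses
    episodes = [((1, 0), (1, 0)), ((0, 1), (0, 1)), ((1, 1), (1, 0))]
    for pre, post in episodes:
        w = hebbian_step(w, pre, post)                # each experience perturbs every weight

The decay term means that even “stored” connections drift unless they are re-used: the wiring diagram is a moving target, not a frozen architecture.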
The volatility and mobility of the processes of consciousness are not limited
to synaptic connections. The mind representations themselves share these
characteristics. The phenomenon was described by Skarda and Freeman
(1987) when they detailed the way in which neuron activation occurs in
the process of learning new smells. The introduction of a new smell, or stim-
ulus, or a modification in the type of reinforcement, changes all the spatial
patterns of identification of this and other smells which have previously been
learnt. The invariance of spatial patterns of neuronal activity in relation to
stimuli thus disappears.
According to Skarda and Freeman, the only way to explain this lack of
invariance is to use non-linear dynamics to model the perceptual function of
the olfactory bulb. In the olfactory brain there is a global chaotic attractor
with multiple wings or lateral lobes, one for each smell which the subject has
learnt to discriminate. Each lobe is formed by a bifurcation during learning
which changes the entire structure of the attractor, including the pre-existing
lobes and the way in which they are accessed through basins of attraction.8
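Freeman’s olfactory model involves high-dimensional neural dynamics, but the qualitative point – that changing one control parameter restructures the entire attractor rather than adding a single local trace – can be shown with the simplest non-linear system, the logistic map. This is purely illustrative and is not the Skarda–Freeman model:

    # Illustrative only: the logistic map x -> r*x*(1-x). A small change in the
    # control parameter r (loosely analogous to a new stimulus or reinforcement)
    # reorganizes the attractor wholesale.
    def attractor(r, x=0.3, transient=500, keep=64):
        for _ in range(transient):        # discard the transient
            x = r * x * (1 - x)
        points = set()
        for _ in range(keep):             # sample the attractor
            x = r * x * (1 - x)
            points.add(round(x, 4))
        return sorted(points)

    print(len(attractor(3.2)))   # period-2 attractor: 2 points
    print(len(attractor(3.5)))   # period-4 attractor: 4 points
    print(len(attractor(3.9)))   # chaotic regime: no invariant pattern of points

No record of the earlier attractor survives the parameter change intact, which is the analogue of the lost invariance of the spatial patterns of neuronal activity.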
The consequences of this phenomenon for the relationship between mind
and brain have been expounded by Freeman (1993; in press) and Fischer
(1993; in press). In the first place, the brain does not so much “process”
information from sensory inputs, but rather “constructs” it. The radically
active character of this construction, with an initial intentionality in the search
for sensory data and a continuous remodeling of the mind image of the world,
does not leave room in the model for memory “stores” in which traces can be
stored and subsequently recovered: the totality of traces is continually being
modified in response to conditions dictated both by the internal states of the
subject and by the environment.9
The above conclusions contradict both classical functionalist mind archi-
tecture, based on the computer metaphor, and synthetic neurocomputa-
tional architectures. Inasmuch as a chaotic dynamic is part of the process,
any model of cerebral function which is based on linear systems will prove
to be inadequate.
An interesting proposal in connectionist models is a change from the
initial conception of Turing’s machine – with a linear dynamic – to a machine
using Boltzmann’s approach – with a chaotic dynamic – (Ackley, Hinton and
Sejnowski, 1985).10 This approach would incorporate the random processes
which Freeman’s theory of olfactory perception requires, but at the expense
of definitively abandoning Pylyshyn’s hopes of creating a literal description
of mind architecture in terms of Von Neumann’s computation (Pylyshyn,
1980).11
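The stochastic element that separates a Boltzmann machine from a deterministic update can be stated compactly. The unit-update probability below is the standard rule from Ackley, Hinton and Sejnowski (1985); the two-unit network and the annealing schedule are our own illustrative assumptions:

    import math, random

    def boltzmann_update(states, weights, i, T):
        """Ackley-Hinton-Sejnowski rule: unit i turns on with probability
        1 / (1 + exp(-gap/T)), where gap is its energy difference."""
        gap = sum(weights[i][j] * states[j] for j in range(len(states)) if j != i)
        p_on = 1.0 / (1.0 + math.exp(-gap / T))
        states[i] = 1 if random.random() < p_on else 0
        return states

    # two mutually excitatory units: at high temperature T behaviour is noisy,
    # at low T the network settles into a low-energy configuration
    w = [[0.0, 1.0], [1.0, 0.0]]
    s = [1, 0]
    for T in (4.0, 2.0, 1.0, 0.5):                    # simple annealing schedule
        for _ in range(10):
            s = boltzmann_update(s, w, random.randrange(2), T)

Randomness enters at every update, which is what makes such a machine hospitable to the stochastic processes Freeman’s account requires, and what removes any hope of a literal Von Neumann description.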

Dualism and reductionism

The historical success of the computer metaphor, and, in general the idea of a
computational architecture in the mind, is not fortuitous or unjustified. Thanks
to the computer, that is to say, thanks to the ability to distinguish between
software and hardware, a means was provided to enter into the problems of
the relationship between mind and brain, which were traditionally difficult
to deal with without recourse to radically dualist positions or to absolute
reductionism. The computer offers some subtle intermediate possibilities,
and nothing better illustrates this point than the way in which both the func-
tionalist perspective – which is clearly dualistic – and the neurocomputational
perspective – which claims to be totally reductionist – describe them-
selves as “computational”. The explanation for such flexibility is in our view
related to the vagueness of the terminology, for example “algorithm” and
“representation”.
Algorithms are, clearly, part of the world of the computer, but what about
representations? It depends how one views them. If, by representation we
understand something with semantic content, the Fodorian tradition will
exclude it from objects of cognitive interest – remember, at the limits of
the paradox, how Fodor declared his “Cognitive Psychology No Existence
Law”. However, if representations do not have semantic content, then we will
have to explain how we escape from Searle’s Chinese Room problem (1980):
who reads the message if it is not the brain/computer?12
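Searle’s point is easy to make concrete. A minimal sketch (the rule book below is our own invented example, not Searle’s): the system returns syntactically appropriate Chinese without any semantic content existing anywhere inside it.

    # Illustrative Chinese Room as a lookup table; the rule book is invented.
    RULE_BOOK = {"你好吗": "我很好"}        # "how are you" -> "I am fine"

    def chinese_room(message):
        """Produces a fluent reply by pure symbol manipulation, understanding nothing."""
        return RULE_BOOK.get(message, "请再说一遍")   # default: "please say it again"

    print(chinese_room("你好吗"))            # competent output, zero comprehension

If representations have no semantic content, the question stands: the table maps symbols to symbols, and nothing in the system reads the message.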
The proposal that it is the brain which receives information from the
environment and processes it to convert it into something which can be
understood as an informative semantic message does not represent a great
step forward in relation to functionalist mind architecture. It is enough to
say: “For mind, read brain”. This route does not avoid the traps of dualism,
because now we have to invent a new instance (the self? the subject? the
spirit?) capable of understanding and making use of the algorithms. As Pasko
Rakic pointed out with regard to the experiments on Aplysia and Hermissenda, in
the early days of the comprehension of brain learning mechanisms, the way in
which a synapse learns does not tell us anything about what can be learned
(vid. Barnes 1986). Is it necessary, then, to return to the homunculus hidden
in the cortex which contemplates, as it were on a cinema screen, images,
messages and meanings? One supposes that any neurocomputational author
would protest, indignantly, reserving this task for the brain itself, but this
again is not an innovation. Szentágothai, as we have mentioned, maintained
a long time ago that “mind” is a way of referring to the global activity of
the brain. If we wish to go beyond the explanatory level of the functionalist
theory of mental architecture of – as low as it is mistaken, from our point of
view –, then we must answer the central question of the polemic surrounding
the mind/brain relationship: is all mental activity reducible to the functioning
of the brain architecture?
As long as a distinction is made between “mind architecture” and “brain
architecture”, at whatever level, there will be a need to explain if we are
referring to the same thing seen in different ways, or if, on the contrary, we
are referring to distinct things, in the broad sense of the word “thing”. This is so
because methodological approaches, as we have tried to show throughout this
paper, do not in themselves automatically resolve the ontological question.
The discussion of the type of conceptual mechanism which could serve to
construct a theory of mind activity frequently takes place while leaving on
one side the problem of whether this particular activity can be carried out by
means of brain functions.
When speaking of reductionism it is essential to specify what the idea of
reduction consists of, because a sentence such as, “mental phenomena can be
reduced to the structure of the brain”, is ambiguous. Imagine, for example,
that a subject is observing a distant wall and says something like “this wall
is green”. From the reductionist point of view the mind phenomenon (“I am
seeing a green wall”) has at least two meanings:
– that the subject has mind representations of the exterior world, represen-
tations which include such things as colors and shapes.
– that the precise content of these mind images is, in this particular case,
that of “a green wall”.
If I am a sufficiently able – or lucky – reductionist, the examination of the
structure and functions of the brain which are present at this particular moment
should be sufficient to answer the following question: “Is subject X seeing
a green wall, or, if not, is the subject lying?”. According to Wittgenstein’s
second thesis on the inaccessibility of the individual mind, such a question
cannot be answered: individual mind phenomena are inaccessible.13 But the
reductionist could perhaps be interested in a different aspect of the mind/brain
relationship. Subjects who are lying, and those who try to establish who is
lying and who is not, all share some important features concerning the way
in which the exterior world is represented in the mind, and these features are
generically accessible. To the question “What color is the wall?” the subject
may answer “green”, “red” or “yellow”, lying or not, but will not answer
“sleepy” or “the day after tomorrow” or “deoxyribonucleic acid”.
The general – architectonic – characteristics of the mind, that is to say those
which make it possible to establish communication with semantic content, are
accessible at least to the extent that between subjects it is possible to establish
evaluations, to put forward expectations of behavior, to influence others and
to assign intentionality. None of this would be possible in the absence of a
mind structure which must be of a high level, as required by Chomsky for
linguistic competence.
As Chomsky himself has pointed out, a being with very great cognitive
capacities but with a different deep grammar – a Martian – would have a
linguistic competence incompatible with our own, and something similar can
be maintained in relation to mind architecture. Nevertheless, if we take it as
given that this kind of mind architecture exists, then there is no difficulty
in understanding that its features could be related to brain functions. Setting
aside, of course, that we perhaps do not yet know what this relationship
precisely consists of, and that even when this has been established, the content
of mind phenomena will continue to be private. This idea constitutes the basis
for the study of mind features through the analysis of the behavior of subjects
with brain lesions, which prevent them from using their brains in a “normal”
way – that is to say, in a way compatible with the architectonic features which
we take to be common to the majority of human beings.

Notes
1
Pylyshyn (1980) proposes the term “functional architecture” to refer to these fixed mind
characteristics. We have preferred “mind architecture”, as will be seen subsequently, because
of the existence of a “brain architecture” which has also a functional nature.
2
Not, however, always. Paul Churchland (1986), for example, proposes a model of the
functional capacity of the cortex based on computational considerations. We will discuss the
neurocomputational approach below.
3
Churchland, Koch and Sejnowski warn that ten different structural levels exist: molecules,
membranes, synapses, neurons, nuclei, layers, columns, maps and systems, each of which can
be separated conceptually but not physically (Churchland, Koch and Sejnowski, 1990, p. 53).
Notwithstanding this, it seems to us that the division between structural orders reflects more
effectively the sense of the present discussion and, in any case, underlies the majority of the
polemical issues concerning the nature of the mind/brain relationship.
4
This idea of recursion has been used frequently. Barlow (1990), for example, suggests that,
even while there is no reason to doubt that the brain is entirely mechanical, it is nevertheless
a marvelous mechanism capable of generating and using the concept of mind. For a deeper
analysis of recursion, vid. Fischer (1993).
5
One of the most interesting aspects of the many studies of brain algorithms which focus on
the analysis of sensory data is the suspicion that general rules – strategies which are similar
in different species which are far removed in evolutionary terms – for the processing of infor-
mation do exist. McNaughton (1989) warns of the problems which appear when one tries to
uncover neural mechanisms of this type, and includes a synthesis of the main achievements.
6
Smolensky (1988) maintains that brain computation is different from that of computers: it
is not “symbolic” but rather “sub-symbolic”. We believe that this does not prevent the neuro-
computational perspective from retaining the central features of information processing proper
to computers.
7
Ebdon, following Newell (1990), accepts a changing architecture, but with very slow,
gradual changes on a relatively long time scale. The dynamic which we are referring to is of a
very different order.
8
On the presence of quantum phenomena in relation to consciousness, vid. Hameroff
(1994), Nunn, Clarke & Blott (1994), and Clark’s (1994) note about the Tucson Conference.
9
Globus (1992) holds a similar point of view when he maintains that neuronal informa-
tion processing theory can provide a good explanation for simple computer simulations but,
when it comes to transferring to the brain, fractal phenomena appear which support a non-
computational model of non-linear dynamic neural systems. The need to go beyond the models
of information “processing” and the emphasis on the errors committed in maintaining a com-
putational focus have been examined in greater detail by Freeman in a paper (“Three centuries
of category errors in brain science: a brief history of neurodynamics in behavioral studies”)
submitted to the Journal of the History of Psychology. A synthesis of the epistemological
consequences of the theory of perception by means of chaotic processes is to be found in
Cela-Conde and Marty (in press).
10
Vid. Rivière’s criticism (1991) p. 106. The alternatives of “deterministic” and “stochastic”
automata were posited by Daugman (1990), who, at the end of his paper, showed little enthu-
siasm for maintaining a computational metaphor for the brain.
11
This seems to be Penrose’s point of view (see Jane Clark’s interview with Penrose about his
new book). Penrose holds that, extending Gödel’s theorem, it is not possible to build a robot
which could both behave as a human being and be a computer in the current sense of the term
(Penrose and Clark 1994).
12
A deep analysis of Searle’s argument and its validity for non-symbolic models is to be
found in Harnad (1989).
13
For a historical perspective of the obstacles which may be encountered in the search for
objective knowledge of the mind, vid. (1991), p. 25 ff. Wittgenstein’s point of view has been
developed by Stroll (1993).

References

Ackley, D.H., Hinton, G.E. and Sejnowski, T.J.: 1985, ‘A learning algorithm for Boltzmann
machines’, Cognitive Science 9, 147–169.
Barlow, H.: 1990, ‘The mechanical mind’, Ann. Rev. Neurosci. 13, 15–24.
Barnes, D.M.: 1986, ‘From Genes to Cognition’, Science 231, 1066–1068.
Cela-Conde, C.J.: 1994, ‘Teoría neurobiológica de la consciencia’, Psicothema 6, 155–163.
Cela-Conde, C.J. and Marty, G.: 1994, ‘Vida, mente, máquina. Medio siglo de metáforas’,
Ludus Vitalis 2, 25–37.
Cela-Conde, C.J. and Marty, G.: 1995, ‘Caos y consciencia’, Psicothema 7, 679–684.
Clark, J.: 1994, ‘Toward a scientific basis for consciousness’, Journal of Consciousness Studies
1, 152–154.
Crick, F. and Koch, C.: 1990, ‘Towards a neurobiological theory of consciousness’, Seminars
in the Neurosciences 2, 263–275.
Crick, F.: 1994, The Astonishing Hypothesis, Scribner, New York, N.Y.
Churchland, P.M.: 1986, ‘Cognitive Neurobiology, A Computational Hypothesis for Laminar
Cortex’, Biology and Philosophy 1, 25–51.
Churchland, P.S., Koch, C. and Sejnowski, T.J.: 1990, ‘What Is Computational Neuroscience?’,
in E.L. Schwartz (ed.), Computational Neuroscience, M.I.T. Press, Cambridge, Mass., pp.
46–55.
Damasio, A.R.: 1989, ‘The brain binds entities and events by multiregional activation from
convergence zones’, Neural Computation 1, 123–132.
Daugman, J.G.: 1990, ‘Brain Metaphor and Brain Theory’, in E.L. Schwartz (ed.), Computa-
tional Neuroscience, M.I.T. Press, Cambridge, Mass., pp. 9–18.
Devor, M. and Wall, P.D.: 1981, ‘Effects of peripheral nerve injury on perceptive fields of cells
in cat spinal cord’, Journal of Comparative Neurology 199, 227–291.
Ebdon, M.: 1993, ‘Is the Brain Neocortex a Uniform Cognitive Architecture?’, Mind and
Language 8, 368–403.
Finkel, L.F., Reeke, G.N. and Edelman, G.M.: 1989, ‘A Population Approach to the Neural
Basis of Perceptual Categorization’, in L. Nadel, L.A. Cooper, P. Culicover and R.M.
Harnish (eds.), Neural Connections, Mind Computation, M.I.T. Press, Cambridge, Mass.,
pp. 146–179.
Fischer, R.: 1993, ‘From “transmission of signals” to “self-creation of meaning”. Transforma-
tions in the concept of information’, Cybernetica 36, 229–243.
Fischer, R. (in press), ‘On Some Not Yet Fashionable Aspects of Consciousness’, in M.E.
Carvallo (ed.), Nature, Cognition and System, vol. III, Kluwer Academic Publ.
Freeman, W.J.: 1993, ‘The Emergence of Chaotic Dynamics as a Basis for Comprehending
Intentionality in Experimental Subjects’, in Karl H. Pribram (ed.), Rethinking Neural Net-
works. Quantum Fields and Biological Data, Lawrence Erlbaum and Associates, Hillsdale,
N.J., pp. 507–514.
Freeman, W.J. (in press), ‘Three centuries of category errors in brain science: a brief history of
neurodynamics in behavioral studies’, Journal of the History of Psychology.
García-Albea, J.E.: 1991, ‘Entrevista con Jerry A. Fodor: Funcionalismo y ciencia cognitiva,
lenguaje y pensamiento, modularidad y conexionismo’, Estudios de psicología 45, 5–31.
Gazzaniga, M.S., Bogen, J.E. and Sperry, R.W.: 1962, ‘Some functional effects of sectioning
the brain commissures in man’, Proceedings of the National Academy of Science 48,
1765–1769.
Georgopoulos, A.P., Schwartz, A.B. and Kettner, R.E.: 1986, ‘Neuronal population coding of
movement direction’, Science 233, 1416–1419.
Globus, G.: 1992, ‘Toward a Noncomputational Cognitive Neuroscience’, Journal of Cognitive
Neuroscience 4, 299–310.
Goertzel, B.: 1993, ‘Self-reference and complexity. Component-systems and self-generating
systems in biology and cognitive science’, Evolution and Cognition 2, 257–283.
Gray, C.M., Engel, A.K., König, P. and Singer, W.: 1989, ‘Stimulus-dependent neuronal oscil-
lations in cat visual cortex. Receptive field properties and feature dependence’, European
Journal of Neurosciences 2, 607–619.
Hameroff, S.: 1994, ‘Quantum Coherence in Microtubules: A Neural Basis for Emergent
Consciousness?’, Journal of Consciousness Studies 1, 91–118.
Harnad, S.: 1989, ‘Minds, Machines and Searle’, Journal of Theoretical and Experimental
Artificial Intelligence 1, 5–25.
Milner, B. and Petrides, M.: 1984, ‘Behavioural effects of frontal-lobe lesions in man’, Trends
in Neurosciences 7, 403–407.
Newell, A.: 1990, Unified Theories of Cognition, Harvard University Press, Cambridge, MA.
Nunn, C.M.H., Clarke, C.J.S. and Blott, B.H.: 1994, ‘Collapse of a Quantum Field may Affect
Brain Function’, Journal of Consciousness Studies 1, 127–139.
Penrose, R. and Clark, J.: 1994, ‘Roger Penrose FRS, Rouse Ball Professor of Mathematics at
Oxford University, talks to Jane Clark about his forthcoming book Shadows of the Mind:
A Search for the Missing Science of Consciousness’, Journal of Consciousness Studies 1,
17–24.
Pylyshyn, Z.W.: 1980, ‘Cognition representation and the process-architecture distinction’,
Behavioral and Brain Sciences 3, 154–169.
Pylyshyn, Z.W.: 1984, Computation and Cognition, M.I.T. Press, Cambridge, MA.
Raichle, M.E.: 1993, ‘The scratchpad of the mind’, Nature 363, 583–584.
Rivière, A.: 1991, Objetos con mente, Alianza Universidad, Madrid.
Searle, J.: 1980, ‘Minds, Brains and Programs’, Behavioral and Brain Sciences 3, 417–424.
Skarda, C.A. and Freeman, W.J.: 1987, ‘How brains make chaos in order to make sense of the
world’, Behavioral and Brain Sciences 10, 161–173.
Smolensky, P.: 1988, ‘On the proper treatment of connectionism’, Behavioral and Brain
Sciences 11, 1–23.
Stroll, A.: 1993, ‘That Puzzle We Call the Mind’, Grazer Philosophische Studien 44, 189–210.
Szentágothai, J.: 1968, ‘Structure-Functional Considerations of the Cerebellar Neuron Net-
works’, Proceedings of the I.E.E.E. 56, 960–968.
Szentágothai, J.: 1969, ‘Architecture of the Brain Cortex’, in Jasper, H.H., Ward, A.A. and
Pope, A. (eds.), Basic Mechanisms of the Epilepsies, Little, Brown and Co., Boston, Mass.,
pp. 13–28.
Volkow, N.D. and Tancredi, L.R.: 1991, ‘Biological Correlates of Mind activity Studied With
PET’, American Journal of Psychiatry 148, 439–443.
Wells, A.: 1993, ‘Parallel Architectures and Mind Computation’, British Journal of Philosophy
of Science 44, 531–542.
Zeki, S.: 1993, A Vision of the Brain, Blackwell, Oxford.
