CAMILO J. CELA-CONDE
Department of Philosophy
Universidad de las Islas Baleares
Palma de Mallorca, Spain
GISÈLE MARTY
Department of Psychology
Universidad de las Islas Baleares
Palma de Mallorca, Spain
Abstract. The use of the computer metaphor has led to the proposal of “mind architecture”
(Pylyshyn 1984; Newell 1990) as a model of the organization of the mind. The dualist
computational model, however, has, since the earliest days of psychological functionalism,
required that the concepts “mind architecture” and “brain architecture” remain remote from
each other. The development of connectionism and of neurocomputational science has sought
to dispense with this dualism and to provide general models of consciousness (a “uniform
cognitive architecture”) which are in general reductionist but which retain the computer
metaphor. This
paper examines, in the first place, the concepts of mind architecture and brain architecture, in
order to evaluate the syntheses which have recently been offered. It then moves on to show
how modifications which have been made to classical functionalist mind architectures, with
the aim of making them compatible with brain architectures, are unable to resolve some of the
most serious problems of functionalism. Some suggestions are given as to why it is not possible
to relate mind structures and brain structures by using neurocomputational approaches, and
finally the question is raised of the validity of reductionism in a theory which sets out to unite
mind and brain architectures.
Key words: mind architecture, brain architecture, computer metaphor, chaos, non-linear phe-
nomena, reductionism, cognitive functionalism, neurocomputation, PDP, knowledge process
“In working on the mind-brain problem over the last few years, I have become disheartened
on occasion and come to the conclusion that western man is uniquely equipped and situated
so as to make that problem unsolvable, even though it may be a rather simple problem indeed.
As a scientist, I would like to hold philosophers responsible for this state of affairs”
Gordon G. Globus, “Mind, Structure, and Contradiction” (1976).
Since the early stages of the ascendancy of cognitive psychology, the com-
puter metaphor has proved a tool of great heuristic power in the inter-
pretation of the brain/mind phenomenon. From it derives the concept of
“mental architecture” (Pylyshyn 1984; Newell 1990) as a model of a rela-
tively fixed and generalized – the term “innate” would perhaps sum up these
two characteristics1 – organization of the mind. But the computer metaphor took in only the
properties and design of the mind, leaving to one side the experimental evidence then
available on the structure and functions of the brain. For that reason, since the earliest
period of psychological functionalism, there has been a considerable distance between “mind architecture”
and “brain architecture”; the terms have been used by different specialists –
cognitive psychologists on the one hand and neurobiologists on the other –
and each term has generally been confined to its own specialist literature.
The decline of cognitive functionalism and the upsurge of connectionism
and, above all, neurocomputational science, seemed to put an end to this
mutual incomprehension. Thus, general models of consciousness have been
proposed – some making specific reference to the question of architecture
– which set out to establish the relationship between mind and brain by
maintaining positions which are, in general, reductionist and still based on the
use of the computer metaphor. The usual strategy of these synthetic attempts
is to establish the existence of a “uniform cognitive architecture” (Ebdon 1993),
shared by all the entities related to consciousness, be they brains or
computers. These unified models have great value: as they transcend the initial
dualism of functionalist psychologists, they contribute to the re-establishment
of the evolutionary and organic aspects of a phenomenon, thought,
whose adaptive value is so crucial. But the continued use of a computational
model imposes requirements which are, in our judgment, inadmissible in the
light of emerging understanding of brain architecture.
In this paper we will examine, in the first place, the concepts of mind
architecture and brain architecture, in order to evaluate the syntheses which
have recently been proposed. We will then seek to show that the modifications
which have been made to classical functionalist mind architectures in order
to make them compatible with brain architectures – which is the main aim of
the neurocomputational approach –, are unable to resolve some of the most
serious problems of functionalism. Then, we will suggest why it is difficult,
from our point of view, to relate mind structures and brain structures by using
neurocomputational approaches. Finally, we will raise the question of the
validity of reductionism in a theory which sets out to link mind and brain
architectures.
becomes ambiguous. On the one hand, we might consider that we are looking
for hardware, in the strict sense in which cognitive functionalism seems to
understand the concept: the anatomical aspects of the neurons and the synaptic
connections established between their terminals. An example of this “brain
architecture” is that proposed by Szentágothai (1969) when he sets out the
columnar model of the cortex with the different types of neurons present,
modular elements, etc. This type of architecture is irrelevant to functionalist
psychologists in their search for explanations of the mind. Fodor, for instance, holds:
From a scientific perspective, it is no more than an accident that psycho-
logical systems are incarnated in biological systems. In fact, biological
theory tells you little about what is happening; what can inform you is
the theory of functional relations (García-Albea 1991; translated from the
Spanish original version).
But apart from physiological details – or, more precisely, in accordance
with them – neurobiologists have frequently postulated a brain structure
based on the analysis of its function, through architectural models which are
obtained, in many cases, from experimental studies of brain lesions.2 Classic
examples are Gazzaniga and collaborators (1962) concerning the functional
effects of severing the corpus callosum and, in more familiar territory for
psychologists, Milner and Petrides’ (1984) revision of the consequences of
lesions to the frontal lobes of the human being.
The separation between first order – anatomical – and second order –
functional – architectures deserves closer attention. Examination of the cel-
lular framework reveals that there is no single cortical architecture at this
level: it depends on the analytical methods used. Different staining techniques
reveal distinct types of cellular relations – cytoarchitecture, myeloarchitec-
ture, cytochrome oxidase architecture – and some of them show in greater
detail than others the functional relations which exist between cortical areas.
The metabolic architecture based on cytochrome oxidase, for instance, is of
great importance when it comes to understanding the functional organization
of the striate cortex (Zeki 1993, p. 67).
Despite this, the distinction between first and second order brain archi-
tecture still has sense, for two reasons. Principally, because of the purely
functional nature of some of the studies. But a second reason is the difference
in quantitative range. While anatomical structure often refers to a relatively
small set of neurons and synaptic connections – even if it involves the defi-
nition of modules which are subsequently extended to include large areas of
the brain – the architecture related to, let us say, “higher” functions includes
extensive zones. Following Szentágothai himself, a cerebral phenomenon
may be considered as the result of activity in a small zone, while the mind
should be considered as the activity of the brain as a whole, or at least of a
large part of it.
It is evident that without reasonable theories (or at least some coherent hypoth-
esis corresponding to the experimental data) which permit us to move from
simple brain architecture – anatomy – to more complex brain architecture –
functional activity of large zones of the cortex – we cannot go beyond those
aspects of the mind/brain question which have already been too frequently
rehearsed.
Imagine that such theories already exist. If we succeed in convincing
those cognitivist psychologists who have based all their work on the comput-
er metaphor that the study of the hardware is, after all, relevant, will our
problems have come to an end? Since the appearance of computational
neuroscience, or, to be more precise, since its high point – which might be
placed in 1988, with the appearance of MacGregor’s book (1988) and
the papers by Sejnowski, Koch and Churchland (1988), Smolensky (1988),
and Koch (1988) – its adherents have sought to make progress by creating
some kind of synthesis between cognitive functionalism, cerebral structure
and connectionism (PDP).
As is stated by Churchland, Koch and Sejnowski:
The expression “computational neuroscience” reflects the possibility
of generating theories of brain function in terms of the information-
processing properties of structures that make up nervous systems (Church-
land, Koch and Sejnowski 1990, p. 46).
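The information-processing view expressed in this quotation can be made concrete with a minimal sketch of our own (not drawn from the authors cited): a leaky integrate-and-fire neuron, one of the simplest models in which a nervous structure is described purely by how it transforms an input current into a train of spikes. All parameter values here are illustrative.

```python
def lif_spike_times(input_current, dt=0.001, t_end=0.1,
                    tau=0.02, r=1.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron with Euler steps.

    dv/dt = (-v + r * input_current) / tau ; spike and reset at v_thresh.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_end:
        v += dt * (-v + r * input_current) / tau
        if v >= v_thresh:
            spikes.append(round(t, 4))   # record the spike time
            v = v_reset                  # reset the membrane potential
        t += dt
    return spikes

# A constant supra-threshold current yields a regular spike train;
# a sub-threshold current yields no spikes at all.
print(lif_spike_times(input_current=1.5))
print(lif_spike_times(input_current=0.5))
```

The point of the sketch is only that the description never mentions biological substance: the same equations could be realized in tissue or in silicon, which is precisely the assumption on which computational neuroscience trades.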
Computational neuroscience, in relation to the architecture problem, tries
to build a bridge between the cognitive sciences – above all those with a
connectionist approach – and the anatomical and functional studies of the
brain carried out by neurobiologists. If we accept the distinction between the
When we try to identify the organizational features of the mind – why, for
example, we perceive in three dimensions, think in four dimensions with the
addition of time, and construct sentences using subjects, verbs and predicates
– it may occur to us that the range of the question should go beyond the mind
and the brain to also include computers. Furthermore, we might consider, like
Pylyshyn, that computation is a literal description of mind activity (Pylyshyn,
1980) and, in accordance with this supposition, propose synthetic models
which include the whole spectrum: mind, brain and machine. This is precisely
the task which neurocomputational science has set itself.
However, the proposals for synthesis run into the same problems as those
we mentioned in relation to neurocomputation: the elasticity of functions,
be they of machine or brain, is subject to structural conditions. In other words,
in computational terminology, the architectural characteristics are dictated
by the hardware, whose presence is relevant when explaining some
functional aspects of the software. To the extent that the structure/function
system is radically different in the cortex and in computing machines –
be they connectionist networks or von Neumann automata – we cannot propose
common cognitive architectures. We are clearly still very far from a generally
accepted model of second order cortical architecture which can deal with
mind phenomena such as the construction of knowledge. The first proposals
on consciousness (Crick, 1994) have received more criticism than praise, but
it is beginning to seem inevitable that the picture of what constitutes mind is
quite remote from our intuitive understanding.
The main problem for computational neuroscience is the need to conceive
brain hardware as a structure which, contrary to that found in digital com-
puters based on Von Neumann’s original idea, is not fixed and immutable but
modified during the learning process. Ebdon (1993) recognizes the existence
of this difficulty even though it does not, as far as he is concerned, invalidate
the concept of a brain computational architecture. This is the case because,
in his opinion:
Clearly, a great deal of the brain’s organization is fixed and genetically
specified. As for the effects of maturation, we can largely avoid the prob-
lem by concentrating on the adult brain, in which learning is presumably
with multiple wings or lateral lobes, one for each smell which the subject has
learnt to discriminate. Each lobe is formed by a bifurcation during learning
which changes the entire structure of the attractor, including the pre-existing
lobes and the way in which they are accessed through their basins of attraction.8
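A deliberately toy analogue of this global restructuring (our illustration, not Freeman’s olfactory model) is the logistic map, where changing a single parameter does not add a lobe to an otherwise stable attractor but reorganizes the attractor wholesale:

```python
def attractor(r, x0=0.2, transient=1000, keep=64):
    """Long-run orbit of the logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(transient):      # discard the transient
        x = r * x * (1 - x)
    pts = set()
    for _ in range(keep):           # collect the settled orbit
        x = r * x * (1 - x)
        pts.add(round(x, 6))
    return sorted(pts)

# At r = 3.2 the attractor is a 2-cycle; at r = 3.5 it is a 4-cycle,
# and the old 2-cycle points are not among the new attractor points.
print(attractor(3.2))
print(attractor(3.5))
```

When r moves from 3.2 to 3.5 the two old attractor points disappear and four new ones emerge; nothing of the previous structure is simply kept and extended, which is the feature of Freeman’s account that tells against fixed memory stores.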
The consequences of this phenomenon for the relationship between mind
and brain have been expounded by Freeman (1993; in press) and Fischer
(1993; in press). In the first place, the brain does not so much “process”
information from sensory inputs as “construct” it. The radically
active character of this construction, with an initial intentionality in the search
for sensory data and a continuous remodeling of the mind image of the world,
does not leave room in the model for memory “stores” in which traces can be
stored and subsequently recovered: the totality of traces is continually being
modified in response to conditions dictated both by the internal states of the
subject and by the environment.9
The above conclusions contradict both classical functionalist mind architecture,
based on the computer metaphor, and the synthetic neurocomputational
architectures. In as much as a chaotic dynamic is part of the process,
any model of cerebral function based on linear systems will prove
inadequate.
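The inadequacy of linear models can be seen in a few lines (again our illustration, with arbitrary parameter values): in a linear map, two nearby trajectories never move apart, whereas in the chaotic logistic map an initially negligible difference is amplified to macroscopic size even though both orbits remain bounded.

```python
def max_gap(f, x0, d0, n):
    """Largest separation reached by two trajectories started d0 apart."""
    a, b, gap = x0, x0 + d0, 0.0
    for _ in range(n):
        a, b = f(a), f(b)
        gap = max(gap, abs(a - b))
    return gap

linear   = lambda x: 0.9 * x              # linear dynamic: contracting
logistic = lambda x: 4.0 * x * (1.0 - x)  # nonlinear, chaotic at r = 4

# The linear gap only shrinks; the chaotic gap grows by many orders
# of magnitude while both orbits stay inside [0, 1].
print(max_gap(linear, 0.3, 1e-9, 60))
print(max_gap(logistic, 0.3, 1e-9, 60))
```

This sensitive dependence on initial conditions is exactly what no linear model of cerebral function can reproduce.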
An interesting proposal within connectionist models is the change from the
initial conception of the Turing machine, with its deterministic dynamic, to the
Boltzmann machine, with its stochastic dynamic (Ackley, Hinton and
Sejnowski, 1985).10 This approach would incorporate the random processes
which Freeman’s theory of olfactory perception requires, but at the expense
of definitively abandoning Pylyshyn’s hope of creating a literal description
of mind architecture in terms of von Neumann computation (Pylyshyn,
1980).11
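The contrast between the two dynamics can be sketched as follows (our own illustration, with arbitrary weights, not the network of Ackley, Hinton and Sejnowski): a Boltzmann machine unit does not compute a deterministic next state, as a step of a Turing machine does, but switches on with a probability given by the sigmoid of its net input.

```python
import math
import random

def boltzmann_step(state, weights, beta=1.0, rng=random):
    """One asynchronous Gibbs update of a binary Boltzmann machine.

    state: list of 0/1 unit values; weights: symmetric matrix, zero diagonal.
    Each chosen unit turns on with probability sigmoid(beta * net input),
    so the trajectory of states is stochastic, not deterministic.
    """
    i = rng.randrange(len(state))
    net = sum(weights[i][j] * state[j] for j in range(len(state)) if j != i)
    p_on = 1.0 / (1.0 + math.exp(-beta * net))
    state[i] = 1 if rng.random() < p_on else 0
    return state

random.seed(0)
# Two mutually excitatory units tend to settle into the (1, 1) configuration,
# but any single run can visit other states: the path itself is random.
w = [[0.0, 2.0], [2.0, 0.0]]
s = [0, 1]
for _ in range(100):
    boltzmann_step(s, w)
print(s)
```

Over many updates the machine samples configurations according to their energy rather than executing a fixed instruction sequence, which is why this dynamic cannot serve Pylyshyn’s “literal description” in von Neumann terms.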
The historical success of the computer metaphor and, in general, of the idea of a
computational architecture of the mind is not fortuitous or unjustified. Thanks
to the computer, that is to say, thanks to the ability to distinguish between
software and hardware, a means was provided for entering into the problems of
the relationship between mind and brain, which were traditionally difficult
to deal with without recourse to radically dualist positions or to absolute
reductionism. The computer offers some subtle intermediate possibilities,
and nothing better illustrates this point than the way in which both the
functionalist perspective – which is clearly dualistic – and the neurocomputational
perspective – which claims to be totally reductionist – describe themselves
as “computational”. The explanation for such flexibility is, in our view,
way – that is to say, in a way compatible with the architectonic features which
we take to be common to the majority of human beings.
Notes
1. Pylyshyn (1980) proposes the term “functional architecture” to refer to these fixed mind
characteristics. We have preferred “mind architecture”, as will be seen below, because there
also exists a “brain architecture” which is likewise functional in nature.
2. Not always, however. Paul Churchland (1986), for example, proposes a model of the
functional capacity of the cortex based on computational considerations. We discuss the
neurocomputational approach below.
3. Churchland, Koch and Sejnowski warn that a whole series of structural levels exists:
molecules, membranes, synapses, neurons, nuclei, layers, columns, maps and systems, each
of which can be separated conceptually but not physically (Churchland, Koch and Sejnowski,
1990, p. 53). Notwithstanding this, it seems to us that the division between structural orders
reflects more effectively the sense of the present discussion and, in any case, underlies the
majority of the polemical issues concerning the nature of the mind/brain relationship.
4. This idea of recursion has been used frequently. Barlow (1990), for example, suggests that,
even while there is no reason to doubt that the brain is entirely mechanical, it is nevertheless
a marvelous mechanism capable of generating and using the concept of mind. For a deeper
analysis of recursion, vid. Fischer (1993).
5. One of the most interesting aspects of the many studies of brain algorithms which focus on
the analysis of sensory data is the suspicion that general rules for the processing of
information – strategies which are similar in species far removed from one another in
evolutionary terms – do exist. McNaughton (1989) warns of the problems which appear when
one tries to uncover neural mechanisms of this type, and includes a synthesis of the main
achievements.
6. Smolensky (1988) maintains that brain computation is different from that of computers: it
is not “symbolic” but “sub-symbolic”. We believe that this does not prevent the neuro-
computational perspective from retaining the central features of information processing proper
to computers.
7. Ebdon, following Newell (1990), accepts a changing architecture, but with very slow,
gradual changes on a relatively long time scale. The dynamic to which we are referring is of
a very different order.
8. On the presence of quantum phenomena in relation to consciousness, vid. Hameroff
(1994), Nunn, Clarke and Blott (1994), and Clark’s (1994) note on the Tucson conference.
9. Globus (1992) holds a similar point of view when he maintains that neuronal information-
processing theory can provide a good explanation of simple computer simulations but that,
when it is transferred to the brain, fractal phenomena appear which support a non-
computational model of non-linear dynamic neural systems. The need to go beyond models
of information “processing”, and the errors committed in maintaining a computational focus,
have been examined in greater detail by Freeman in a paper (“Three centuries of category
errors in brain science: a brief history of neurodynamics in behavioral studies”) submitted to
the Journal of the History of Psychology. A synthesis of the epistemological consequences of
the theory of perception by means of chaotic processes is to be found in Cela-Conde and
Marty (in press).
10. Vid. Rivière’s criticism (1991, p. 106). The alternatives of “deterministic” and
“stochastic” automata were posited by Daugman (1990), who, at the end of his paper, showed
little enthusiasm for maintaining a computational metaphor for the brain.
11. This seems to be Penrose’s point of view (see Jane Clark’s interview with Penrose about
his forthcoming book). Penrose holds that, by extending Gödel’s theorem, one can show that
it is not possible to build a robot which could both behave as a human being and be a
computer in the current sense of the term (Penrose and Clark 1994).
12. A deep analysis of Searle’s argument and its validity for non-symbolic models is to be
found in Harnad (1989).
13. For a historical perspective on the obstacles which may be encountered in the search for
objective knowledge of the mind, vid. (1991), p. 25 ff. Wittgenstein’s point of view has been
developed by Stroll (1993).
References
Ackley, D.H., Hinton, G.E. and Sejnowski, T.J.: 1985, ‘A learning algorithm for Boltzmann
machines’, Cognitive Science 9, 147–169.
Barlow, H.: 1990, ‘The mechanical mind’, Ann. Rev. Neurosci. 13, 15–24.
Barnes, D.M.: 1986, ‘From Genes to Cognition’, Science 231, 1066–1068.
Cela-Conde, C.J.: 1994, ‘Teorı́a neurobiológica de la consciencia’, Psicothema 6, 155–163.
Cela-Conde, C.J. and Marty, G.: 1994, ‘Vida, mente, máquina. Medio siglo de metáforas’,
Ludus Vitalis 2, 25–37.
Cela-Conde, C.J. and Marty, G.: 1995, ‘Caos y consciencia’, Psicothema 7, 679–684.
Clark, J.: 1994, ‘Toward a scientific basis for consciousness’, Journal of Consciousness Studies
1, 152–154.
Crick, F. and Koch, C.: 1990, ‘Towards a neurobiological theory of consciousness’, Seminars
in the Neurosciences 2, 263–275.
Crick, F.: 1994, The Astonishing Hypothesis, Scribner, New York, N.Y.
Churchland, P.M.: 1986, ‘Cognitive Neurobiology, A Computational Hypothesis for Laminar
Cortex’, Biology and Philosophy 1, 25–51.
Churchland, P.S., Koch, C. and Sejnowski, T.J.: 1990, ‘What Is Computational Neuroscience?’,
in E.L. Schwartz (ed.), Computational Neuroscience, M.I.T. Press, Cambridge, Mass., pp.
46–55.
Damasio, A.R.: 1989, ‘The brain binds entities and events by multiregional activation from
convergence zones’, Neural Computation 1, 123–132.
Daugman, J.G.: 1990, ‘Brain Metaphor and Brain Theory’, in E.L. Schwartz (ed.), Computa-
tional Neuroscience, M.I.T. Press, Cambridge, Mass., pp. 9–18.
Devor, M. and Wall, P.D.: 1981, ‘Effects of peripheral nerve injury on receptive fields of cells
in cat spinal cord’, Journal of Comparative Neurology 199, 227–291.
Ebdon, M.: 1993, ‘Is the Brain Neocortex a Uniform Cognitive Architecture?’, Mind and
Language 8, 368–403.
Finkel, L.F., Reeke, G.N. and Edelman, G.M.: 1989, ‘A Population Approach to the Neural
Basis of Perceptual Categorization’, in L. Nadel, L.A. Cooper, P. Culicover and R.M.
Harnish (eds.), Neural Connections, Mind Computation, M.I.T. Press, Cambridge, Mass.,
pp. 146–179.
Fischer, R.: 1993, ‘From “transmission of signals” to “self-creation of meaning”: transforma-
tions in the concept of information’, Cybernetica 36, 229–243.
Fischer, R. (in press), ‘On Some Not Yet Fashionable Aspects of Consciousness’, in M.E.
Carvallo (ed.), Nature, Cognition and System, vol. III, Kluwer Academic Publ.
Freeman, W.J.: 1993, ‘The Emergence of Chaotic Dynamics as a Basis for Comprehending
Intentionality in Experimental Subjects’, in Karl H. Pribram (ed.), Rethinking Neural Net-
works. Quantum Fields and Biological Data, Lawrence Erlbaum and Associates, Hillsdale,
N.J., pp. 507–514.
Freeman, W.J. (in press) ‘Three centuries of category errors in brain science, a brief history of
neurodynamics in behavioral studies’, Journal of the History of Psychology.
García-Albea, J.E.: 1991, ‘Entrevista con Jerry A. Fodor: funcionalismo y ciencia cognitiva,
lenguaje y pensamiento, modularidad y conexionismo’, Estudios de Psicología 45, 5–31.
Gazzaniga, M.S., Bogen, J.E. and Sperry, R.W.: 1962, ‘Some functional effects of sectioning
the brain commissures in man’, Proceedings of the National Academy of Sciences 48,
1765–1769.
Georgopoulos, A.P., Schwartz, A.B. and Kettner, R.E.: 1986, ‘Neuronal population coding of
movement direction’, Science 233, 1416–1419.
Globus, G.: 1992, ‘Toward a Noncomputational Cognitive Neuroscience’, Journal of Cognitive
Neuroscience 4, 299–310.
Goertzel, B.: 1993, ‘Self-reference and complexity. Component-systems and self-generating
systems in biology and cognitive science’, Evolution and Cognition 2, 257–283.
Gray, C.M., Engel, A.K., König, P. and Singer, W.: 1989, ‘Stimulus-dependent neuronal oscil-
lations in cat visual cortex: receptive field properties and feature dependence’, European
Journal of Neuroscience 2, 607–619.
Hameroff, S.: 1994, ‘Quantum Coherence in Microtubules: A Neural Basis for Emergent
Consciousness?’, Journal of Consciousness Studies 1, 91–118.
Harnad, S.: 1989, ‘Minds, Machines and Searle’, Journal of Theoretical and Experimental
Artificial Intelligence 1, 5–25.
Milner, B. and Petrides, M.: 1984, ‘Behavioural effects of frontal-lobe lesions in man’, Trends
in Neurosciences 7, 403–407.
Newell, A.: 1990, Unified Theories of Cognition, Harvard University Press, Cambridge, MA.
Nunn, C.M.H., Clarke, C.J.S. and Blott, B.H.: 1994, ‘Collapse of a Quantum Field may Affect
Brain Function’, Journal of Consciousness Studies 1, 127–139.
Penrose, R. and Clark, J.: 1994, ‘Roger Penrose, FRS, Rouse Ball Professor of Mathematics at
Oxford University, talks to Jane Clark about his forthcoming book Shadows of the Mind:
A Search for the Missing Science of Consciousness’, Journal of Consciousness Studies 1,
17–24.
Pylyshyn, Z.W.: 1980, ‘Cognitive representation and the process-architecture distinction’,
Behavioral and Brain Sciences 3, 154–169.
Pylyshyn, Z.W.: 1984, Computation and Cognition, M.I.T. Press, Cambridge, MA.
Raichle, M.E.: 1993, ‘The scratchpad of the mind’, Nature 363, 583–584.
Rivière, A.: 1991, Objetos con mente, Alianza Universidad, Madrid.
Searle, J.: 1980, ‘Minds, Brains and Programs’, Behavioral and Brain Sciences 3, 417–424.
Skarda, C.A. and Freeman, W.J.: 1987, ‘How brains make chaos in order to make sense of the
world’, Behavioral and Brain Science 10, 161–173.
Smolensky, P.: 1988, ‘On the proper treatment of connectionism’, Behavioral and Brain
Sciences 11, 1–23.
Stroll, A.: 1993, ‘That Puzzle We Call the Mind’, Grazer Philosophische Studien 44, 189–210.
Szentágothai, J.: 1968, ‘Structure-Functional Considerations of the Cerebellar Neuron Net-
works’, Proceedings of the I.E.E.E. 56, 960–968.
Szentágothai, J.: 1969, ‘Architecture of the Brain Cortex’, in Jasper, H.H., Ward, A.A. and
Pope, A. (eds.), Basic Mechanisms of the Epilepsies, Little, Brown and Co., Boston, Mass.,
pp. 13–28.
Volkow, N.D. and Tancredi, L.R.: 1991, ‘Biological Correlates of Mind activity Studied With
PET’, American Journal of Psychiatry 148, 439–443.
Wells, A.: 1993, ‘Parallel Architectures and Mind Computation’, British Journal of Philosophy
of Science 44, 531–542.
Zeki, S.: 1993, A Vision of the Brain, Blackwell, Oxford.