Philosophy and Phenomenological Research
Vol. 53, No. 1, March 1993

Heidegger and Artificial Intelligence


BETH PRESTON
Department of Philosophy and
Artificial Intelligence Program
University of Georgia

Two decades ago Hubert Dreyfus launched a critique of AI which was largely
inspired by the work of Martin Heidegger. He has continued to press this cri-
tique, but it has not served to stem the flood of computational enthusiasm
even among philosophers, never mind among the practitioners of computa-
tion themselves. AI has become a flourishing academic industry, and the
computational theory of mind is thriving in psychology and philosophy de-
partments alike, under the rubric of cognitive science. Needless to say, few of
the denizens of AI laboratories or of the multifarious strongholds of cognitive
science are reading Heidegger.
Nevertheless, I believe that ideas of importance to AI and to the computa-
tional theory of mind in general are to be gleaned from Heidegger. There is
nothing in Heidegger to suggest that the project of constructing artificially
intelligent creatures is impossible, or that mental processes cannot be con-
strued as computational processes of some sort. What Heidegger does suggest
is an alternative approach to the study of intelligent behavior in general; an
approach which rejects some key assumptions of the computational theory of
mind in its traditional cognitivist formulation.
This paper has two main objectives. The first is to describe the alternative
approach suggested by a reading of Heidegger. In doing this I will be distin-
guishing my interpretation of Heidegger from Dreyfus' interpretation. I do
not think either Dreyfus' critique or his positive program is wrong-in fact
they are eminently worthwhile so far as they go. But they are limited by an
insufficiently radical interpretation of Heidegger which allows his potentially
most significant contributions to the analysis of intelligent activity to remain
undeveloped and unappreciated. So I will develop them, indicating along the
way points of contact with existing work in AI and cognitive science, and
sketch the methodological implications the adoption of the Heideggerian al-
ternative would have for further research.
The second objective is to give some reasons for thinking that this alter-
native approach merits attention. This objective is accomplished in two ways. First, I present an argument which turns on the notion of computational complexity and the practical difficulties it poses for the applied version of the computational theory of mind embodied by AI. I show how the alternative suggested by Heidegger can help with the resolution of these difficulties. Second, I briefly present some current work in AI which successfully puts into practice aspects of the Heideggerian alternative as I describe it. These examples are provided as evidence, however preliminary, that this alternative approach can indeed be productive of interesting research projects.

Heidegger
In the course of his description of the existential structure of Dasein, Heidegger offered an analysis of routine activity involving the familiar implements of everyday life.1 He pointed out that this sort of activity unfolds virtually automatically, without much thought or obvious cognitive effort at all. He also maintained that the intelligibility of such activity and of the implements involved in it is essentially context dependent; indeed, that it is holistically dependent on the entire context of practices constituting a way of life.
Take as an example locking the door when you leave the house. This is not something you plan to do the way you might plan to do three errands on the way to work. Nor does doing it require that you know how the lock works. For the most part, you only need to be able to turn the key and to tell whether or not the operation has been successful; the lock itself does the rest.
So it seems that to some extent you get things done because of the way the
world works, not because of what you explicitly know about the way it
works.
More importantly, the fact that locking up makes sense does not depend on your entertaining reasons for it. You may be able to produce some reasons if asked. But the really deep reasons-e.g., those having to do with the institution of private property-need not be entertained by individual lockers of doors at any time. Such institutions are simply embodied in the existence of artifacts like locks, and the habitual, unreflective practices involved in using them.
However, the complexity of those practices must not be underestimated.
Although you may lock up as a general rule, there are a host of exceptions
ranging from the mundane (you are just running out to the mailbox) to the
extraordinary (the house is on fire). These exceptions seem to be effortlessly
taken into account in everyday activity. In addition, there are all the other
doors (to your office, your car, your office building, etc.), where the practice
of locking up still applies, except that both the general rules and the exceptions

1 For those who speak Heideggerian, what follows is an interpretation of Heidegger's contention that the ready-to-hand and World are inconspicuous; that in circumspective activity Dasein is absorbed in its World without thematizing it (Heidegger, 1962).

are different in each case. And then there are all the myriad sorts of doors in your life to which the practice of locking up does not apply (or at least you are not the responsible person), and which you seem to distinguish effortlessly in your ongoing daily activity from those where it does apply.
The point is that the practice of locking up seems to depend on a vast number of facts about different kinds of doors and the uses to which they are put, as well as on a vast number of possible exceptions to the general rules supposedly constituting the core of the practice. Yet little if any of this is explicit in the course of everyday routine activity, and indeed it is difficult to elicit any such explicit understanding from people for theoretical purposes.
Dreyfus calls this body of putative facts and rules the Background (Dreyfus, 1981; Dreyfus and Dreyfus, 1986). Following Heidegger, he argues that the fundamental phenomenon in activity is skillful coping, and that it presupposes the Background. But, he continues, this Background cannot in fact be a body of explicit facts and rules. Or, to put it another way, know-how is not reducible to know-that. Dreyfus' criticism of AI in particular and cognitivism in general is precisely that their approach assumes from the beginning and without argument that all intelligent behavior depends essentially on the explicit internal representation of the Background.
...all these problems are versions of one basic problem. Current AI is based on the idea...that all understanding consists in forming and using appropriate representations. Given the nature of inference engines, AI's representations must be formal ones, and so commonsense understanding must be understood as some vast body of precise propositions, beliefs, rules, facts, and procedures. Thus formulated, the problem has so far resisted solution. We predict it will continue to do so. (Dreyfus and Dreyfus, 1986, p. 99)

But on what evidence does Dreyfus base this prediction? Heidegger's evidence is purely phenomenological-if we are not aware of explicitly representing the Background to ourselves in routine activity, then it is not being explicitly represented. But this view is always open to the objection that we may simply not have any conscious access to our representation of the Background. This is a strong objection, given the psychological evidence for the existence of unconscious mental processes of many kinds, and Heidegger does not appear to have taken it into account. But even if Heidegger were right about this with regard to naturally occurring intelligence, what is there to prevent a machine from duplicating the achievements of intelligent creatures by explicitly representing the Background to itself? Isn't this at least a possible theory of how a cognitive system might work, even if it is not the right theory from the point of view of cognitive modelling?
The problem, according to Dreyfus, is not just that spelling out the Background is an overwhelming and seemingly infinite task. This has been a practical stumbling block for AI researchers, to many of whom it has begun to seem as though something on the order of a Ph.D. in economics and another one in social psychology is required in order to purchase a package of gum in a drugstore. The move towards so-called micro-worlds was largely motivated by this difficulty, in fact. But Dreyfus detects an underlying problem in principle, not merely in practice, with the notion that the Background can be completely spelled out.
The problem is that in any given situation it must be determined which Background facts are relevant. But this requires that the Background include facts about which facts are relevant in which situations. But this seems to generate an infinite regress of facts about other facts. (A vaguely Wittgensteinian version of this argument can be run on rules rather than facts, mutatis mutandis. If knowing when to apply a rule depends on having a rule for applying it, then an infinite regress of rules for applying rules is generated.) So in order to ground out this regress, it must be the case that the Background is not completely and explicitly represented.
Rather those problems point to something taken for granted: namely, a shared, human background that alone makes possible all rule-like activity....Thus in the last analysis all intelligibility and all intelligent behavior must hark back to our sense of what we are, which is, necessarily, on pain of regress, something we can never explicitly know. (Dreyfus and Dreyfus, 1986, p. 81)

Dreyfus also points out that there are two different (but, please note, not incompatible) directions you can take from this conclusion.
One response...is to say that such "knowledge" of human interests and practices need not be represented at all....

Another possible account would allow a place for representations...but would stress that these are usually non-formal representations, more like images, by means of which I explore what I am, not what I know. (Dreyfus, 1981, p. 202-3)

The rise of connectionism with its notion of distributed representation has lent substance to the second option. And indeed, in his recent writing Dreyfus has displayed some enthusiasm for connectionism (Dreyfus and Dreyfus, 1986, 1988), but has not discussed the first option at all. It may fairly be concluded that so far as he is concerned only the second option is really viable and/or interesting. My claim, in contrast, is that it is the first option that is the more interesting of the two, and that it points in very different methodological directions than the second one does.2
2 I also think that Heidegger cannot plausibly be construed as a proto-connectionist, since there is no real evidence that the question of what internal representations are like ever occurred to him. It is not even clear, as I have already said, that he had any real appreciation of the fact that the possibility of unconscious mental processes makes the interpretation of what he says about the ready-to-hand being 'unthought' or 'unapprehended' in circumspective activity problematic. But since my main purpose in this paper is more to extrapolate from Heidegger than to expound him, I will not argue this case any further here.

Clearly the way Dreyfus has formulated the contrast between these two positions here is much too strong. Any theory of intelligent behavior or action must make room for internal representation, and must allow it some explanatory role. This is surely recognized by all but the rankest of behaviorists. But there are interesting and important questions to be asked about just what and how much must be represented in order to do what needs to be done. Consequently it is a question just how much of the explanatory burden internal representation can bear, and how much of it must be borne instead by a concomitant analysis of interactions with environmental structures and processes which underwrite the intelligibility and the success of intelligent behavior without being reflected in the internal economy of the organism. Dreyfus is, of course, right to distinguish questions of this sort from questions about the nature of whatever representations there are. And it should be clear that these very different questions are by no means incompatible; answers to them may confidently be expected to complement each other rather than conflict.
But in concrete terms, what sort of analysis does the first option suggest? A good illustration can be found in the work of David Marr, who argued that any information processing task must first be analyzed at what he called the computational level (Marr, 1982).3 This is the level at which you ask what problem the system is solving and why, and the answer to the 'why' part especially will depend on an analysis of the task environment. Marr distinguishes this level from the level at which you go on to specify what sort of representation you have, and what algorithms are appropriate for the transformation between input and output.
An example of the usefulness of computational level analysis is to be found in Marr's discussion of the visual system's recovery of structure from motion. This is based on work by Shimon Ullman, which showed that three views of four non-coplanar points yield a unique interpretation in terms of a rigid three-dimensional structure. This turns out to be a very useful theory of how the visual system recovers structure from more primitive visual features because, and only because, most things in the visual world are in fact rigid. But the visual system does not have to represent this feature of the world to itself. It simply has to go on recovering structure from motion as if things in the world were mostly rigid. So the explanation of what problem the visual system is solving here, and why it is solving that one rather than another one, depends essentially on a feature of the visual environment which is not represented.4

3 I owe this point to Peter Woodruff.


4 A variant of this issue has also been discussed in philosophical circles in connection with our intentional characterizations of cognitive systems, the point being that an intentional characterization does not necessarily imply an internal representation in the system with the corresponding content. To take Dennett's well-known example, we may say that a chess program thinks it should get its queen out early without there being any rule like 'Deploy the queen early' represented in it (Dennett, 1978). Robert Cummins has pointed out a number of different reasons why this characterization may nevertheless be entirely appropriate, and has spoken up for the significance of what is not represented alongside the acknowledged significance of what is (Cummins, 1986).
To apply this sort of analysis to our previous example, for instance, you might want to ask whether the apparent Background structure of rules and exceptions to rules need ever be represented as such. It might be the case that we engage in a collection of relatively simple routines (e.g., one for leaving home to go to work, another for running out to the mailbox, another for leaving buildings in case of fire, etc.), each of which either does or does not include locking whatever door happens to be between you and where you are going. In this case the rule-plus-exceptions framework for "knowing" when to lock the door would turn out to be an emergent structure, grounded in the concatenation of simple routine activities by the particular doors involved. So representation of the rule and of all the possible exceptions to it by the actor would not enter into an explanation of her door-locking activity.
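To make the suggestion concrete, here is a minimal sketch (mine, not the paper's; the routine names and action vocabulary are invented for illustration) of how a rule-plus-exceptions structure could be emergent rather than represented:

```python
# Each routine is just a fixed sequence of actions; none of them consults a
# rule like "lock up, unless...". The routines and actions are hypothetical.

ROUTINES = {
    "leave_for_work": ["open_door", "step_out", "close_door", "lock_door"],
    "run_to_mailbox": ["open_door", "step_out", "close_door"],
    "evacuate_fire":  ["open_door", "step_out"],
}

def act(routine_name):
    """Execute a routine: no rule about locking is represented anywhere."""
    for action in ROUTINES[routine_name]:
        print(action)

# An observer tabulating many episodes would describe the actor as "locking
# up as a general rule, with exceptions for mailbox runs and fires" -- but
# that generalization lives in the observer's description, not in anything
# the actor represents.
act("run_to_mailbox")
```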

Connections vs. Interactions


It would be useful at this point to make an explicit comparison between the Heideggerian program I am espousing and the program Dreyfus espouses, as well as between both of these and the traditional program they seek to amend. The traditional cognitivist approach exemplified in AI is mentalistic and individualistic. Intelligence is regarded as a matter of individual mental capacities, and the intelligence exhibited in outward behavior is regarded as derivative, as merely the expression of this inner capacity. More specifically, this approach encourages the assumption that getting around in the world is contingent upon the ability to represent the world internally and to reason with those representations. So the more accurate and detailed your internal world model and the more correct and sophisticated your inner logic, the more intelligent you are, and the better you will be able to get around in the world. I shall refer to this aspect of the traditional cognitivist approach henceforth as the internal world model assumption.
This view is attractive and comforting in large part because the sort of explanatory stories to be told about behavior seem obvious and familiar. You achieve goals by planning, learn and apply skills by problem solving, understand language by translating into Mentalese, and so on. It is acknowledged on all hands that most of our mental life is not accessible to conscious introspection, but it is nevertheless tacitly assumed that mental processes in general are similar to those processes to which we do have conscious access. More specifically, it is assumed that representation of the world is articulate and systematic, and that the manipulation of these representations between input and output is logically perspicuous. Mental processes are, in short, conceived as quasi-linguistic in nature; so I shall refer to this as the sentential
assumption.
This last assumption has been repeatedly challenged by Dreyfus (Dreyfus, 1979; Dreyfus and Dreyfus, 1986), as well as by Paul Churchland (Churchland, 1979). The complaints voiced by Dreyfus revolve around the recalcitrance of some salient phenomena to traditional cognitivist accounts. Pattern matching, for example, is acknowledged as an important cognitive phenomenon, but traditional AI systems are not very good at it. Dreyfus surmises that this is because it requires holistic similarity assessments which are not reducible to compiling and comparing lists of context independent features. And skills, from the mundane ones like driving a car to the highly specialized and sophisticated ones like medical diagnosis, do not introspectively seem to require articulate knowledge of the domain of the skill. Indeed, expert systems designers have discovered to their chagrin that such knowledge is virtually impossible to elicit from experts no matter how hard you try. So, Dreyfus argues, skills may just not be reducible to, or explainable in terms of, an articulate theory of the domain.
Churchland's objections are directed specifically at the sentences-in-the-head characterization of mental processes. He points out that language is a relatively late acquisition in the development of the individual, and that it must therefore be acquired on the basis of other more fundamental cognitive skills. It is thus implausible that these other skills are themselves linguistic in nature. Moreover, the assumption that they must be smacks of parochialism. Non-human animals, many of whom display intelligence of a very high order, do not use language and are widely thought to be incapable even of acquiring it under human tutelage. But in other respects their cognitive accomplishments are continuous with ours. So projecting the idiosyncratic activity we call language into a general theory of cognition is not justifiable.
Both Dreyfus and Churchland have recently exhibited a great deal of interest in connectionism, and particularly in neural nets. They feel that this technology suggests a non-cognitivist notion of cognition which is not subject to these criticisms, and which implements cognitive mechanisms of a more plausible sort. Dreyfus (Dreyfus and Dreyfus, 1986, 1988) points out that the distributed associative nature of representation in connectionist systems makes them better at pattern recognition than traditional systems. Patterns are not reduced to lists of decontextualized features, since the activation of a node in the net does not usually or necessarily have an interpretation in terms of some discriminable feature of the world with a name in our everyday vocabulary. So pattern matching proceeds holistically, and has some nice, realistic properties like resistance to noise and graceful degradation as a result. This difficulty in interpreting distributed representations in terms of a systematic combinatorial semantics, and the concomitant difficulty in interpreting the transformation of these representations between input and output in terms of steps in a reasoning process, also means that skills cannot be regarded as either acquired or exercised in virtue of an articulate theory of the domain. For these reasons, Dreyfus is cautiously optimistic about the possibility that neural net technology will eventually succeed in carrying out the AI project which traditional systems have failed to realize.5
Churchland is more sanguine. Several years ago he put forward an alternative to the sentential conception of mental representations.
The basic idea...is that the brain represents various aspects of reality by a position in a suitable state space; and the brain performs computations on such representations by means of general coordinate transformations from one state space to another. (Churchland, 1986, p. 280)

On this view mental representation is quasi-geometrical rather than quasi-linguistic. Churchland shows how this conception of mental representation and computation yields plausible accounts in a number of areas such as sensory representation, which have not been amenable to sentential accounts, and also gives it a general recommendation as providing "a great deal of representational power at a very low price" (Churchland, 1986, p. 300).
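As a rough illustration of this state-space picture (a sketch under my own assumptions, not Churchland's model; the spaces and weights are invented), representation as position and computation as coordinate transformation come down to a single matrix-vector product:

```python
import numpy as np

# A "representation" is a position in a state space; a "computation" is a
# coordinate transformation carrying one state space into another. The
# dimensions and the transformation matrix below are purely illustrative.

# A sensory state: a position in a 3-D sensory state space.
sensory_state = np.array([0.2, 0.9, 0.4])

# A fixed linear map from sensory space to a 2-D motor state space.
# (In a trained network these weights would be learned.)
sensory_to_motor = np.array([[0.5, -1.0,  0.3],
                             [1.2,  0.1, -0.7]])

# Computation as coordinate transformation: one matrix-vector product.
motor_state = sensory_to_motor @ sensory_state
print(motor_state)  # the resulting position in motor state space
```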
This refreshingly bold speculation about the nature of internal representation was based on developments in neurobiology. These were soon joined by equally interesting developments in AI, heralded by the publications of the PDP group. These "new connectionists" also lay claim to a non-sentential theory of cognitive functioning, underwritten by the notion of distributed representation. Indeed, it turns out that computation with distributed representations is best described as vector transformation, and that individual representations of this type can be fruitfully seen as positions in an abstract state space (Churchland, 1989, p. 206). So Churchland finds himself at the happy confluence of two burgeoning research programs.
It is important to see that neither Dreyfus nor Churchland is rejecting the internal world model assumption of the traditional cognitivist approach. Their quarrel is only with the sentential assumption, and their attraction to connectionism is to be explained by the promise it holds out of an alternative conception of representation and computation. But neither of them has repudiated the individualist and mentalist assumptions which give rise to the notion of an internal world model, and which are if anything even more fundamental to cognitivism than the sentential assumption. Dreyfus and Churchland have in effect substituted the brain, conceived in terms of neural nets, for the mind, conceived in terms of the language-of-thought. But the brain is still clearly
5 'Cautious' is the operative word here. Dreyfus seems to feel that AI is more likely to succeed by trying to model the brain than the mind, but he thinks even this project may be just too difficult. He expresses a foreboding that "[n]eural network modelling may simply be getting a deserved chance to fail, as did the symbolic approach." (Dreyfus and Dreyfus, 1988, p. 37).

the organ of mental functioning, and it is still safely ensconced within the individual skull.
So for them the basis of the explanation of behavior has not changed. It is still tied to the transformation of internal representations of the world, i.e., to an internal world model. What has changed is only the characterization of the sort of representation and computation involved. I do not by any means wish to deny that this is a significant departure from traditional cognitivist assumptions; far from it. It is clear that what goes on inside the head plays an essential role in explanations of behavior, and that insofar as we fail to characterize internal processes accurately we will fail to produce adequate explanations. Projects of the sort undertaken by Dreyfus and Churchland are of inestimable value in this regard, and in fact I find myself substantially in agreement with both their critical assessments and their positive proposals. But what I am concerned to point out in this paper is that there is at least one other way of diverging from the cognitivist paradigm; a divergence which questions different cognitivist assumptions, has different methodological implications, and results in different sorts of explanations of behavior.
In the first section I characterized this alternative divergence in a rough and ready way in terms of Heidegger's analysis of routine everyday activity, and Marr's notion of computational level analysis. Now I would like to make another pass at a characterization of this alternative against the background of the Dreyfus/Churchland alternative just described. At this point it would be nice to have some convenient labels for the things I am trying to distinguish here. So I shall refer to the Dreyfus/Churchland project as the connectionist alternative, and to the Heideggerian project as the interactionist alternative (for reasons which should become clear shortly). These are both alternatives to cognitivism in general, but the interactionist alternative tends to reject both the internal world model and the sentential assumptions, whereas the connectionist alternative rejects only the latter.
The bone of contention is the role of the environment in the explanation of behavior. That it has some role is, I should think, undisputed. But it is very important to see just what that role is. Under the internal world model assumption the environment enters into the explanation of behavior basically only insofar as it is reflected in the internal representational economy of the organism. The world is transduced, processed and reconstructed in foro interno. But representations do not wear their meanings on their sleeves, so the main task of philosophy of mind/psychology on this view is to provide a semantics for mental representations. And this is where the environment comes in, because major determinants of meaning like reference and truth conditions can only be established by examining the relationship of the organism to its environment.6 So the environment must be taken into account as the basis for an explanation of the ascription of meaning to mental representations and as the basis for an explanation of how they come to have that meaning. But once this is established, the environment drops out, for behavior is held to be generated by internal representational processes and to be fully explainable in terms of them.
This picture does not change much with the introduction of the connectionist alternative. The role of the environment is to supply training sets for the network. The training process establishes the connection strengths between nodes, which in turn determine the patterns of activation which will occur in response to future input. These patterns of activation are precisely the distributed representations of the connectionist model. The question of semantics is somewhat vexed in this case, since distributed representations do not lend themselves to systematic interpretation in any vocabulary currently to hand. But the training set does in effect specify the task on which the system is supposed to be engaged, and so does provide the basis for the semantic characterization of the system at some suitably abstract level. The point, however, is that under the connectionist alternative as under the traditional cognitivist model the environment plays a role only insofar as it is required to explain the generation of appropriate representations and to underwrite a theory of meaning for them. It is the representations themselves which bear the explanatory burden when it comes to the production and characterization of behavior. So the role of the environment in the explanation of behavior is again indirect and subsidiary.
The interactionist alternative proposes a different explanatory role for the environment. It proposes that we do not interact with the environment solely in virtue of representing it, but rather that non-represented environmental structures and processes must enter into the explanation of behavior directly and independently. In other words, the position here is that the full functional relationship of the organism with its world cannot be adequately explained or characterized just in terms of what aspects or features of that world are represented by the organism. Perhaps the most famous illustration of the interactionist alternative in the literature is Herbert Simon's parable of the ant. As Simon formulates it, the observed complexity of behavior may be attributed

6 The propriety of even this project was, notoriously, questioned by Jerry Fodor (Fodor, 1980), who argued for a methodologically solipsistic psychology in which the environment would have no explanatory role at all. On this view, the semantics of mental representations would depend exclusively on the mutually determining causal/conceptual roles of the representations themselves. This is not now a popular position, and Fodor himself has since recanted. But it is an extremely interesting, if transient, episode in the philosophy of psychology since it is an unequivocal expression of the very real tendency of adherents of the internal world model assumption to just ignore the environment for the purposes of doing psychology to the extent that it is possible to do so.

to the interactions of a relatively simple creature with a complex environment rather than to the sheer internal complexity of the creature itself. The complicated trajectory of an ant's path across a beach is to be explained by the particular features of that beach in conjunction with a few simple mechanisms for dealing with obstacles. So the ant's actual path is an emergent feature of the patterns of its interactions with its environment.
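A toy rendering of the parable (my illustration, with an invented grid and obstacle layout, not Simon's own example) makes the point: trivial control plus a complicated beach yields a complicated path:

```python
import random

# The ant's control is trivial: walk toward the goal, sidestep when blocked.
# Yet the path it traces is as complicated as the beach it crosses.

def ant_path(obstacles, goal_x, steps=50):
    x, y, path = 0, 0, [(0, 0)]
    while x < goal_x and len(path) < steps:
        if (x + 1, y) in obstacles:          # blocked ahead:
            y += random.choice([-1, 1])      # sidestep around the obstacle
        else:
            x += 1                           # otherwise head for the goal
        path.append((x, y))
    return path

beach = {(2, 0), (5, 0), (5, 1), (8, -1), (8, 0)}   # pebbles on the beach
print(ant_path(beach, goal_x=10))
# The complexity of the printed trajectory reflects the pebbles, not the ant.
```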
We will return to Simon later, but for now the point to be taken is that what the interactionist alternative suggests is a different unit of analysis. The unit of analysis under the internal world model assumption is the individual, and more specifically the individual head. The environment is considered only to the extent necessary for ascribing meaning to the things in the head. The unit of analysis for the interactionist alternative is the individual-plus-environment.7 The emphasis here is on the analysis of the interactions between organism and environment, where the interaction is not necessarily assumed to be mediated in any interesting way by internal representations. So interactionist analysis is resolutely non-individualistic and non-mentalistic.
Having now seen in outline what the interactionist alternative is, the obvious next question is why anyone should care about its future and development as a research program in cognitive science. In order to answer this question, I would now like to return to the problematic of the Background and recast it in terms of the notion of computational complexity. The difficulty with the way Dreyfus formulates it-in terms of the danger of infinite regress-is that no AI program has ever failed because it fell into such a regress.
The regress does show up in the notorious "brittleness" of AI programs, their tendency to just break when removed from the artificially restricted and simplified environments for which they are designed. This "brittleness" comes from assuming that intelligent activity depends essentially on having precise and detailed information about the world, and then not being able to provide that information except for ludicrously simple worlds in which you are then stuck. But this manifestation of the infinite regress is indirect at best, since it only shows that there is an inherent appeal to more facts and rules, not that that appeal must be infinitely reiterated. (The regress might just bottom out somewhere, although there do not seem to be any real theoretical considerations bearing on just where and how.) Consequently no unequivocal

7 Please note that this anti-individualism is not to be confused with the anti-individualism of Tyler Burge (Burge, 1979, 1986). Burge's project is still one of the ascription of meaning to mental representations. He thinks this is ordinarily done on non-individualistic grounds, and that a scientific psychology can and should proceed in this way as well. But the pattern here is the standard one in which the environment is taken into account only insofar as it is necessary for the characterization of mental representations. So this project is still resolutely mentalistic. It is a project with which I have a great deal of sympathy, but it is not the project I am trying to describe in this paper-a project which abjures both mentalism and individualism.

connection between the infinite regress and practical problems faced by AI practitioners is established. Computational complexity, on the other hand, is a recognized practical problem with clear engineering implications.

Computational Complexity
The theory of computational complexity concerns the relative feasibility of performing different computations. The most common measure of complexity is time; if a computation takes too much time, it will not be practically feasible even if it is known that it will eventually return an answer-say, in 30,000 years, give or take a century or two. A problem is said to be computationally tractable if the time it requires increases only as a polynomial function of the number of factors in the input instance, and computationally intractable if the time increases exponentially. Computational intractability is particularly a problem whenever the input factors are not considered independently of each other, but rather in all their possible combinations, as occurs frequently in the search algorithms on which AI programs traditionally rely.
The practical consequence of intractability is that there are computations where a seemingly small increase in the input results in an enormous increase in execution time; the estimates do often run to thousands, or even millions, of years. This obviously makes the computation infeasible in fact, if not in principle. Moreover, computational complexity is generally regarded as an inherent characteristic of a problem, and therefore relatively independent of the exact machine and type of computation used. The precise extent and practical significance of this independence is still under investigation. But from a mathematical point of view it is clear at least that exponential increases in computing time cannot ultimately be outrun by faster machines or parallel computation, for example, unless unrealistic assumptions (such as having an exponentially increasing number of parallel processors) are made.
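The arithmetic behind these claims is easy to exhibit. The following sketch (the machine speed and problem sizes are my own assumptions) compares polynomial with exponential growth at a billion steps per second:

```python
# Why exponential time defeats faster hardware: compare n**2 steps
# with 2**n steps on an assumed machine doing a billion steps per second.

RATE = 10**9  # steps per second (an assumed machine speed)

for n in (10, 30, 50, 70):
    poly = n**2 / RATE                     # tractable: polynomial growth
    expo = 2**n / RATE                     # intractable: exponential growth
    print(f"n={n:2d}  n^2: {poly:.2e} s   2^n: {expo:.2e} s")

# At n=70 the polynomial problem still takes microseconds, while the
# exponential one takes about 37,000 years; a machine 1000 times faster
# only buys you roughly 10 more units of n, since 2**10 is about 1000.
```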
The problem of computational intractability has been a motive force in AI since its infancy as a discipline. In the early days it was referred to as the problem of combinatorial explosion. This is what happens if you try to solve problems by exhaustive search through the problem space. The standard example is trying to play chess by looking ahead to all the possible next moves, and then examining all the possible responses to each of those possible moves, and so on to the end of the game. This quickly gets out of hand; you will be considering a million possibilities by the time you are looking a mere two moves ahead. So exhaustive search is clearly not a feasible method for humans in the chess situation. But it was realized immediately that it is not feasible for computers either, regardless of larger memories and faster computing times. Most problems, even the officially tractable ones, are not amenable to solution by exhaustive search, because there are simply too many possible combinations to be considered and it takes too long to consider them. In fact, one way of looking at official intractability is to view intractable problems as those problems whose intrinsic structure in effect forces you to use exhaustive search.
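The chess arithmetic is worth seeing (my gloss on the figure in the text, assuming a commonly cited average branching factor of about 30 legal moves per position):

```python
# Two full moves means four plies: move, reply, move, reply.

BRANCHING = 30          # assumed average number of legal moves in chess
for plies in range(1, 5):
    print(plies, "plies:", BRANCHING**plies, "lines")
# 4 plies: 810000 lines, i.e., roughly a million, growing ~30x per ply.
```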
So Newell, Shaw and Simon in their pioneering work in AI suggested that the essence of intelligence lies in heuristic rather than exhaustive search. They suggested further that the study of general intelligence would turn out to be the study of types and methods of heuristic search (Newell and Simon, 1976). The early history of AI was, then, shaped by the fact that exhaustive search is not a useful way of trying to solve the problems you need to be able to solve if you are an intelligent creature.
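For contrast, here is a minimal, illustrative example (the toy state space and the distance heuristic are my inventions, not Newell and Simon's programs) of how a heuristic prunes what exhaustive search would have to enumerate:

```python
import heapq

# Toy state space: states are numbers, moves are +1 and *2, the goal is a
# target number. Greedy best-first search expands whichever state a simple
# distance heuristic scores as closest to the goal.

MOVES = (lambda s: s + 1, lambda s: s * 2)

def heuristic_search(start, goal):
    frontier, seen, expanded = [(abs(goal - start), start)], {start}, 0
    while frontier:
        _, state = heapq.heappop(frontier)   # most promising state first
        expanded += 1
        if state == goal:
            return expanded
        for move in MOVES:
            nxt = move(state)
            if nxt <= goal and nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (abs(goal - nxt), nxt))

print("states expanded:", heuristic_search(1, 97))
# Exhaustive search would have to touch most of the numbers below 97;
# the distance heuristic steers almost straight at the goal.
```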
But the problem did not disappear with this early conceptual shift to heuristic search. Complexity theory as a branch of mathematics developed in parallel with the development of AI as a discipline. Complexity theorists are in the business of supplying mathematical tools for sorting out the computationally intractable problems from the tractable ones, as well as for settling various fine points, such as whether the intractability sets in early (i.e., for very small input cases) or late, and so on. As it turns out, the pile of intractable problems is large and growing all the time. One result of this dissemination and refinement of methods for evaluating computational complexity is that researchers in AI now routinely produce complexity analyses of problems on which they are working.
Some of the complexity results in AI are extremely discouraging. A good example of this is David Chapman's proof that planning is intractable (Chapman, 1987). What Chapman showed is that most planners work in fundamentally the same way. They restrict their representation of action so that conditional actions, derived side effects, and effects dependent on the input situation are not representable. So long as these restrictions are maintained, planning remains tractable. But as soon as you complicate your action representation so that it can accommodate conditional actions, planning immediately becomes intractable. The difficulty, of course, is that for any sort of plan that would be at all useful in the real world, you do need to take conditional actions, derived side effects, and so on, into account. Worse yet, planning has long been taken as the foundation for theories of action in AI, so its intractability has multifarious ramifications. Similar results have been obtained for many other AI problems, such as description comparison (Levesque and Brachman, 1987). But how do you design AI systems and maintain the plausibility of the computational theory of mind in the face of such widespread and deep-seated intractability? The blanket notion of heuristic search has given way to a number of distinct approaches.
One possible response is renewed optimism. Complexity analysis is always worst case analysis; the expected complexity, the complexity you must actually deal with in the instances of the problem which you are likely to confront in real life, need not present any practical difficulty at all. Then it might be feasible to solve the problem for all the cases having any real practical interest, and the fact that the problem is worst case exponential would be a matter of mere intellectual curiosity. There are a few cases which seem to justify this optimism-cases in which a reasonably fast algorithm exists for an officially intractable problem. But the notion of expected complexity does not represent a principled hope for dealing with the issue of computationally intractable problems, since there is no general way of identifying expected problem instances, or guaranteeing that they won't fall in the worst case range.
A more common and realistic response is represented in the work of Hector Levesque. He has argued (Levesque, 1986) that since certain sorts of information processing tasks are inherently intractable, AI's knowledge-based systems must depend on forms of reasoning which are logically unsound, such as reasoning with defaults when the requisite information is not available in the knowledge base, or the use of nonmonotonic logics. It is not quite clear whether these particular options would help materially with the intractability problem. (There are indications, for instance, that reasoning with defaults is actually more difficult rather than less so.) But the strategy Levesque is suggesting here is clear enough: slipshod reasoning techniques are supposed to work by trading off correctness or generality for just getting things done in a reasonable amount of time.8
This tactic for dealing with computational intractability can be generally characterized as the search for principled ways of lowering the standards for what is to count as a solution to the problem you are dealing with. It is, in fact, the most common way of coping with intractability. It effectively allows you to adopt a usable if less than ideal algorithm and get on with the job, to satisfice rather than optimize, as Simon puts it (Simon, 1981). And the satisficing approach to intractability has generated a great deal of useful and interesting work on approximation algorithms, for instance. However, it is essential to notice that all the original assumptions about which problems cognitive systems are solving are still intact. All that is being conceded in

8 The repercussions of complexity theory have been felt in philosophy as well, and have engendered a very similar response. Christopher Cherniak has argued (Cherniak, 1986) that our traditional notion of the ideally rational agent with a perfectly consistent belief set and perfect deductive ability must be scrapped, because many of the inferences such an agent would have to make are computationally intractable. He points out that this negative evidence for the imperfection of a rational agent's logical capabilities squares very nicely with positive empirical evidence that people rarely do use formally correct procedures in solving everyday problems. Instead they rely on quick and dirty heuristics which yield the correct answer-or an approximately correct one-in a large percentage of cases. Cherniak surmises that this is just a necessary tradeoff of correctness for speed, given that insistence on always getting the right answer would involve intractable computations in a significant proportion of cases. Since speed is of the essence in real life, this tradeoff is rational. Cherniak calls the detailed analysis of this tradeoff the theory of minimal rationality.

the face of intractability is that optimal solutions to those problems are unattainable (and, one might add, unnecessary in the usual course of events). But the underlying task structure and task specification remain the same.
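In Simon's terms the contrast can be put in a few lines of code (an illustrative sketch only; the options and scores are invented):

```python
# To optimize you must examine every option; to satisfice you stop at the
# first option that is good enough.

def optimize(options, score):
    return max(options, key=score)               # cost grows with all options

def satisfice(options, score, aspiration):
    for option in options:                       # stop as soon as one will do
        if score(option) >= aspiration:
            return option
    return None                                  # lowered standards may still fail

routes = ["A", "B", "C", "D", "E"]
quality = {"A": 4, "B": 7, "C": 9, "D": 6, "E": 8}.get

print(optimize(routes, quality))                 # "C", after scoring all five
print(satisfice(routes, quality, aspiration=7))  # "B", after scoring only two
```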
However, complexity theoretic results can also be used to motivate more fundamental revisions-revisions to the very notion of which problems cognitive systems are to be conceived as addressing in the first place. One way to do this is to use complexity theory as a diagnostic tool. It has been turned to account in this manner by Barton, Berwick and Ristad (1987), who have shown that some possible grammars are not natural grammars (i.e., could not be the grammar of any natural language), because they generate language processing problems which are computationally intractable. More interesting still, their complexity analyses are sometimes fine-grained enough to allow them to localize the sources of intractability in specific features of the grammar. They have suggested some revisions to the theory of generalized phrase structure grammar on this basis, for instance. Another good example of this very positive approach to computational intractability is the work of John Tsotsos at the University of Toronto, in which complexity analyses are used to suggest constraints on the possible architecture of the visual system, and thus to guide the development of a general theory of vision (Tsotsos, 1988).
This diagnostic strategy for dealing with intractability is quite different. The approach here is to re-examine the specification of what the system is doing from the ground up, and to figure out how it might achieve the same ends by solving some other, more tractable problems, rather than by merely approximating solutions to intractable ones. The underlying assumption here is that if it looks like your system has to solve an intractable problem, then you may have described what it is doing incorrectly to begin with. So complexity theory provides an important source of constraints on cognitive theory at a quite basic level; and theorizing at this level in turn serves as a primary source of ideas for dealing with computationally intractable problems by the simple expedient of avoiding them.
We have, then, two basic responses to the perceived problem of computational complexity: the satisficing approach and the diagnostic approach. These approaches are obviously complementary, and have both already resulted in interesting and significant work in AI. The satisficing approach operates on what Marr called the representation level. It deals with intractability primarily by changing our notions of what sorts of algorithms are most appropriate for the transformation of representations between input and output. The diagnostic approach, on the other hand, operates on Marr's computational level. It deals with intractability by changing our ideas about what is being computed and why.9

9 There is also a variation on the diagnostic approach which in effect calls into question the integrity of the levels Marr distinguishes. If you radically change your whole representational scheme-if you abandon the traditional symbolic scheme, for instance, and go connectionist-you ipso facto change the problems with which the system must cope. So difficulties with computational complexity may simply be sloughed off along with the rejected representational scheme. (This is unfortunately not guaranteed. You may instead spawn a whole new set of complexity classes, as has occurred with the extension of complexity theory to parallel processing.) Besides problematizing the Marrian framework, this variation also indicates rather clearly that you can carry out a sort of analysis of what problem the system is solving and why by adverting to the representational scheme it uses rather than to the environment in which it operates. But this does not touch the point that a very basic and significant form of computational level analysis does require taking the environment into account.

At the end of the previous section we noted that Marr's description of the computational level has some remarkable affinities with Heidegger's analysis of routine everyday activity. In particular, Marr emphasized the importance of taking the environment into account, and the possibility that specific environmental features and processes may underwrite the achievements of cognitive systems without being internally represented by the system. Similarly, we saw that one way of interpreting the Heideggerian Background is to see it as necessary to the know-how displayed in our everyday activity, but not necessarily represented by us. So a link exists between the diagnostic approach to computational intractability and the Heideggerian or interactionist approach to the study of intelligent activity via Marr's notion of computational level analysis. Heidegger's analytic of Dasein is a rich source of ideas about computational level issues, and the computational level in turn is a source of significant and interesting resolutions to problems of computational complexity and outright intractability.
Computational complexity thus becomes a strong motivation for taking up the interactionist approach. There are, as I have pointed out, ways other than computational level analysis of dealing with intractability, viz. the satisficing approach. There are obviously also ways of carrying out computational level analyses which do not appeal to Heidegger. But I do not see that there is any point in limiting your options on either score. Computational complexity is taken to be an interesting and serious problem by the practitioners of computation at large, and in particular it has influenced (and continues to influence) the development of AI in significant respects. So if the interactionist approach offers a way of dealing with computational complexity, that is reason enough for giving it serious consideration and developing it in a systematic fashion. Such a research program might even, with a little stretch of the imagination and profuse apologies in various quarters, be called Heideggerian AI.
But before launching into a discussion of Heideggerian AI, I would like to say precisely what the argument from computational complexity shows, in my view. Unlike Dreyfus' infinite regress argument, it is not designed to show that the cognitivist approach, either in its classical form or its current connectionist avatar, is doomed to failure. It does show that reliance on internal representation/computation as the basis for the explanation of behavior is problematic in certain respects. It also shows that one good way of addressing the problem is to change the approach, and in particular to change the explanatory framework. Clearly this argument does not demonstrate either that cognitivism must fail or that interactionism must succeed. But then I do not believe that demonstrative a priori arguments are to be had in these matters.

Heideggerian AI
So what would Heideggerian AI look like? Happily, there are already some examples to cite. Philip Agre and David Chapman are at work on a theory of action for which they claim direct Heideggerian influence. Some aspects of this theory have been implemented quite successfully (Agre and Chapman, 1987). Rodney Brooks has espoused a program of "intelligence without representation" (Brooks, 1987), and he and his robotics group at MIT are implementing it in a series of mobile robots (Horswill and Brooks, 1988). I will describe some aspects of these systems briefly, and make some general points about their consonance with the Heideggerian approach and the direction to be taken from where they leave off.10
Agre and Chapman have implemented a system called Pengi which plays Pengo, a commercial video game. It consists of a visual component and a central component. The central component is a connectionist network, and registration of an aspect of the current situation corresponds to activation of a node. Features of the world are thus represented, but these representations have some special characteristics which make them rather unlike internal representations as usually conceived. Each of them represents a unitary, functionally individuated aspect of the situation. For example, some such aspects of Pengi's antarctic world might be the-ice-block-dead-ahead or the-ice-block-to-kick-at-the-enemy. (The hyphenation indicates that there is actually no internal structure here.)
The interesting thing about representation in Pengi is that variables are not bound to constants. The system identifies things only by functional type, never as named individuals. Variable binding in traditional systems is conceived as the condition for the possibility of generalization. As Agre and Chapman put it: to generalize something you have learned on Monday, you substitute variables for Monday's constants (individuals). Then on Tuesday you substitute Tuesday's constants for the variables. But in many cases things only ever need to be characterized indexically in terms of their relative functionality; their individuality is never an issue. This makes the naming of
10 These are the clearest examples of a 'Heideggerian' trend in AI. But there is other work, such as Stanley Rosenschein's situated automata theory (Rosenschein, 1985) and Brian Smith's theory of embedded computation (Smith, 1986), which also exhibits distinct affinities with this view.

individualsand the substitutionprocess involved in variablebinding unneces-
sary.For example, the-ice-block-dead-aheadalways has the same implications
for action, so Pengi simply regardsit as always the "same"one every time it
comes up, ratherthan naming each occurrence and generalizing from one to
another.
There are actually lots of situations in real life you can handle this way, such as the coins in your pocket, the sparrows you feed in the park, the sheets of stationery in your desk drawer, etc. All that really matters is whether what you have is a quarter, a hungry bird, or a blank piece of paper, not whether you have ever seen this particular quarter, bird, or piece of paper before or whether you will ever see it again. Agre and Chapman describe this phenomenon as getting generalization for free. I think I would prefer to describe it as how to get along without generalization (so that it doesn't sound like a species of generalization). But in any case, the point is that representational overhead is minimized by relying on a characteristic of the world (that it contains many individuals of the same type) without representing that characteristic.
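The contrast can be made concrete with a small sketch. What follows is my own illustration in Python, not Agre and Chapman's implementation (Pengi's central component is a connectionist network, not symbolic code), and the sensor fields and named blocks are hypothetical. The first style names individuals and binds variables over them; the second registers only functionally individuated aspects, so there is nothing to name and nothing to bind.

    # Traditional style: individuals are named, and generalization
    # works by binding a variable to those named constants.
    facts = {("dead-ahead", "block-37"), ("dead-ahead", "block-12")}

    def blocks_dead_ahead(facts):
        # The variable x ranges over named individuals.
        return [x for (relation, x) in facts if relation == "dead-ahead"]

    # Indexical-functional style: register aspects by functional type
    # only. The aspect is the "same" one whenever it comes up.
    def register_aspects(retina):
        aspects = set()
        if retina["obstacle_ahead"]:
            aspects.add("the-ice-block-dead-ahead")
        if retina["enemy_in_line"]:
            aspects.add("the-ice-block-to-kick-at-the-enemy")
        return aspects

    print(blocks_dead_ahead(facts))
    print(register_aspects({"obstacle_ahead": True, "enemy_in_line": False}))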
The other striking feature of Pengi's central system is that it does not keep state; i.e., it does not remember what it is doing from one moment to the next. It simply acts opportunistically on the basis of the aspects of the situation registered from moment to moment. This results in coherent behavior in virtue of the fact that the network is wired up so that the registration of given aspects will eventuate in given actions being taken. So similar sequences of events in the world will be responded to in similar ways without Pengi's having to make any judgements of similarity. Agre and Chapman refer to these emergent patterns of interaction between the system and its world as routines. The incipient theory of routine evolution and maintenance is intended to replace the classical theory of planning as the cornerstone of a more general theory of action. And this move was directly motivated by the fact that planning turned out to be computationally intractable.
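A similarly hedged sketch of the statelessness: the wiring from registered aspects to actions is fixed, nothing carries over between cycles, and a routine is just the pattern that emerges when similar situations keep eliciting similar responses. The aspect and action names are again my own inventions for illustration.

    # Fixed wiring from aspects to actions; nothing is remembered
    # from one cycle to the next.
    WIRING = [
        ("enemy-closing-from-behind", "turn-and-flee"),
        ("the-ice-block-to-kick-at-the-enemy", "kick-block"),
        ("the-ice-block-dead-ahead", "go-around"),
    ]

    def act(aspects):
        # Opportunistic selection based only on the current moment.
        for aspect, action in WIRING:
            if aspect in aspects:
                return action
        return "wander"

    # Similar sequences of situations get similar responses, so a
    # coherent "routine" emerges without any judgement of similarity.
    for aspects in [{"the-ice-block-dead-ahead"},
                    {"the-ice-block-to-kick-at-the-enemy"},
                    set()]:
        print(act(aspects))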
The architecture of the mobile robots built by Rodney Brooks and his group is quite different from Pengi's. These systems are built up in layers, each of which independently accepts input from the sensory apparatus and generates activity. Each layer is a simple network of finite state machines, devoted to producing only one sort of activity, e.g., obstacle avoidance. More complicated systems are built by adding more layers. Conflicts between layers are handled by having the layers interact with each other in a minimal sort of way, by directly modifying each other's input or inhibiting each other's output.
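Here too a toy sketch may help, with the usual caveat: Brooks' layers are networks of finite state machines realized in hardware, not Python functions, and the particular layer behaviors and sensor values below are hypothetical. What the sketch preserves is the structure: each layer independently maps sensing to activity, and conflicts are resolved by one layer minimally suppressing another's output rather than by a central arbiter.

    # Each layer independently turns sensor readings into a command,
    # or stays silent this cycle.
    def wander(sensors):
        return "drive-forward"

    def avoid_obstacles(sensors):
        if sensors["sonar_min"] < 0.3:   # something is too close
            return "turn-away"
        return None

    # Minimal inter-layer interaction: a layer that speaks suppresses
    # the output of the layers below it. No central locus of control.
    LAYERS = [wander, avoid_obstacles]

    def cycle(sensors):
        command = None
        for layer in LAYERS:
            output = layer(sensors)
            if output is not None:
                command = output
        return command

    print(cycle({"sonar_min": 1.5}))   # drive-forward
    print(cycle({"sonar_min": 0.1}))   # turn-away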
The most interesting feature of these mobile robots is that they have no central system. This means that there is no central locus of control. The overall behavior of the system is an emergent property, based on the independent production of simpler behaviors by each layer. So these mobile robots do not engage in planning any more than Pengi does, but for a somewhat different reason. In this case there is no component of the system which could undertake the construction of a plan or direct its execution in a layer-neutral fashion. Consequently the organization and sequencing of behavior is grounded directly in the independent responses of the separate layers to environmental contingencies. So, for example, the obstacle avoidance layer simply avoids obstacles as they come up in the course of ongoing activity, rather than having some central system component responsible for constructing a map of the room and planning a path through it.
Just as they have no central control, these mobile robots have no central representation of the world. Each layer engages in some minimal and fragmentary representation of those specific features of the world which are of interest to it. But there is no general purpose model of the world in this system, and a fortiori no general purpose reasoning processes for maintaining and using such a model. This means that the behavior of the system is not merely emergent with regard to its sequencing, but also with regard to its apparent goals and purposes. The overall intentions of the system, as they appear to an observer, are not represented anywhere in the system itself. They are simply interpretations of the observer, grounded in the apparent coherence of the system's activity with regard to its actual environment.
Pengi and the mobile robots qualify as Heideggerian or interactionist because they minimize the role of internal representation while at the same time emphasizing the role of the environment, e.g., in the sequencing of behavior. This is in line with Heidegger's contention (at least under the interpretation I am favoring here) that the Background is to a large extent not represented at all, but that it nevertheless underwrites intelligent activity in various ways, and must therefore be taken into account in any explanation of that activity.

But this brings us to a point on which both these systems seem to be quite un-Heideggerian. For Heidegger, the Background is primarily socially constituted; it is a network of artifacts and institutions. We avail ourselves of it in virtue of the social practices and institutions in which we participate, and the general tendency or capacity to participate in social practices is a primordial characteristic of Dasein, which I would like to call sociality. But it is clear that both Pengi and the mobile robots are essentially solitary rather than social creatures, so how Heideggerian can they really be?
One possible response would be to say that these systems represent at best something like insect intelligence, a comparison Brooks explicitly draws, in fact (Brooks, 1986). They operate on the level of sheer embodiment, of reflex and instinct; and the environment to be taken into account at this level is the brute physical environment, unshaped by intelligent activity and unshared by other intelligent creatures. This purportedly explains the obvious shortcomings of these systems in that they do not learn, or communicate, or produce artifacts, or collaborate, and so on. So these systems are Heideggerian insects, and implement only one aspect of the Heideggerian notion that the explanation of behavior must be based on analysis of the patterns of interaction between system and environment, viz., the aspect under which the patterns of interaction concern wired-in responses of the system to physical features of the environment.
This would be all right except that the features these systems lack are in fact features the vast majority of insects have. Not so much the ability to learn, perhaps, but certainly the ability to communicate, produce artifacts, and collaborate. From this point of view, Pengi and Brooks' robots do not even approximate insect intelligence very closely, since they completely lack such sociality. If you consider Heidegger's insistence that the Background is prototypically a social and artifactual one together with the fact (which Heidegger did not notice) that even for insects the Background recognizably has this character, you are led to the conclusion that sociality is a condition for the evolution of intelligence, and not a by-product which appears once individual intelligence has been achieved. Consequently it should be taken into account right from the start in the study of intelligent behavior. This is perhaps the major insight a reading of Heidegger has to contribute to the further development of the interactionist approach in AI and cognitive science.
AI has always been short on sociality. This is understandable for an approach which sees each cognitive system basically as a windowless monad reflecting the entire universe from its point of view. But it will not do for Heideggerian AI. It is hard to know where to start with this, but perhaps one should try thinking in terms of herds (flocks? swarms? schools?) of mobile robots. Alternatively, computational devices could be integrated into human society in much the same way that domestic animals now are. In any case, the point is to provide them with a real social environment as well as a real physical one. I shall have more to say about the methodological implications of such a reorientation later on.
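By way of a purely speculative sketch of the first suggestion: several minimal robots sharing one environment, each running only local rules (drift toward the others, back off from crowding). No robot represents the herd or its goals; any group-level coherence is emergent from the interactions. Every rule and parameter here is invented for illustration.

    import random

    class Robot:
        def __init__(self):
            self.x = random.uniform(0.0, 10.0)   # position on a line

        def step(self, others):
            # Local social rules only: drift toward the others'
            # average position, but back off from close neighbors.
            center = sum(o.x for o in others) / len(others)
            move = 0.1 if center > self.x else -0.1
            for o in others:
                if abs(o.x - self.x) < 0.5:
                    move += 0.2 if self.x > o.x else -0.2
            self.x += move

    herd = [Robot() for _ in range(5)]
    for _ in range(100):
        for r in herd:
            r.step([o for o in herd if o is not r])
    # The robots end up loosely clustered, though nothing anywhere
    # in any robot represents "staying with the herd" as a goal.
    print(sorted(round(r.x, 2) for r in herd))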

Predecessors and Allies


Versions of the interactionist alternative have been floating around in cognitive science for a while, although not under this label, and perhaps not even recognized as versions of a coherent anti-cognitivist program. Herbert Simon has been mentioned, and his position does need to be clarified because it is rather ambiguous.
Simon formulates his insight in terms of the behavior of an ant. He then proposes to extend it to humans: the complexity of human behavior is inherited just as much from the complexity of the environment in which we live as it is from the complexity of our mental processes. But Simon immediately qualifies this proposal out of existence. (I say 'immediately' advisedly: the whole process takes exactly two short paragraphs.) The qualification Simon makes is to note that "a great furniture of information" can be stored in memory, and to express his preference for regarding this information as part of the environment rather than part of the organism (Simon, 1981, p. 65). This, of course, begs the whole question.
Simon describes this internalized environment as "a large encyclopedia or library...liberally cross-referenced...and with an elaborate index" (Simon, 1981, p. 104). Clearly this is a model of the world, the environment as represented rather than the environment itself. Declaring this encyclopedia part of the environment rather than part of the organism is just an ad hoc move on Simon's part to save his original hypothesis. But in effect he has simply reinstated the internal world model assumption in all its glory, along with the all-important corollary that the explanation of behavior is to be couched solely in terms of internal representational processes. So Simon loses his original insight in the course of transferring it from ants to hominids. Nevertheless, his initial formulation of that insight is so clear and compelling that it can still serve as an important point of reference in the development of an interactionist perspective.
Another proponent of an interactionist approach (although not under that title) is J. J. Gibson. His ecological psychology clearly has some of the features of interactionist analysis already noted. He insists on the mutuality of the organism and the environment, and that independent description of environmental structures and processes is a necessary component of explanation in psychology (Gibson, 1979). Gibson's theory of affordances is also an important contribution to the explication of the notion that the functional relationship of the organism to its environment is not necessarily, or even mostly, a representationally mediated relationship.
The difficulty with Gibson, a difficulty already noted by David Marr, is that he tends to go too far in the opposite direction. He opposes the notion that all the structure of the visual world is imposed by us in virtue of our processing of sensory inputs by insisting that the environment already has a structure of its own of which we can simply avail ourselves. This is his theory of information pickup. If what he means by this is that the explanation of order in behavior requires that we pay equal attention to the order already inherent in the environment and the inter-relationships between that order and the elements of order introduced by us, well and good. But he often speaks as if all the relevant order came from the environment, and picking it up were a trivial matter. This would constitute a rejection of the interactionist alternative from another, essentially behaviorist, perspective.11 The interactionist alternative does not seek to deny either the existence or the importance of internal computational processes, but rather to reassess their role vis-à-vis the role of the environment.

11 I don't want to say anything more here about the difference between the interactionist alternative and behaviorism; that is, quite literally, another paper. Note, however, that the metaphor favored by behaviorists for the relationship of environment to organism is one of control, not interaction.
Given the affinities between Gibson's ecological psychology and the interactionist alternative, however, it is gratifying that his work is still being carried on in a serious way by his students and followers. The body of literature generated by these ecological psychologists over the last decade or so must certainly be drawn upon by anyone interested in developing an interactionist alternative to cognitivism.
There are also alliances to be made outside the disciplines most closely associated with cognitive science. I am going to mention three here: ethology, ethnomethodology and cognitive anthropology. I am not going to go into detail, not even to the extent of naming names. But the fashion in which each of these disciplines might contribute to the further formulation and elaboration of the interactionist alternative should be apparent in broad outline at least.
Ethology as a discipline is founded upon the premise that the behavior of animals can only be accurately described and understood in the natural settings in which that behavior evolved and to which it is adapted. The ensuing flight from Skinner-box-equipped laboratories to the Serengeti and Gombe Reserve has yielded an increasingly interesting literature on the behavior of non-human animals. This behavior has turned out to be much more surprising and complex than antecedently believed. Primate ethology has in particular revealed social behavior of unanticipated sophistication. The important affinity of ethology with the interactionist alternative, of course, is its insistence on taking behavior-in-an-environment as the unit of analysis. It may thus provide explanations of the behavior of non-human animals along interactionist lines which can serve as the basis for the further elaboration of the interactionist alternative in AI. Systems like Pengi and Brooks' mobile robots are at best operating on the insect level as far as intelligence goes, so suggestions towards the construction of an AI lemming or even lizard would help.
Ethnomethodology is a branch of sociology which is descended from the Continental phenomenological tradition via Heidegger and Alfred Schutz. True to their Heideggerian heritage, ethnomethodologists study everyday activities in situ. The emphasis is on how social reality is constructed through local collaborative interaction among people. In other words, social structures are regarded as emergent phenomena, not as internalized sets of rules. Similarly, attention is often focussed on the way the artifactual environment structures social interactions in various ways. These are nice features, as is the ethnomethodological preoccupation with conversation analysis (as opposed to the parsing of more or less decontextualized sentences), which examines the emergence of social structure in verbally mediated interactions. So a great deal about the social environment and the interaction of the individual with it might be learned from ethnomethodology by cognitive science.

The third potential ally in this list is cognitive anthropology, but I have in mind here a particular trend within it which is usually known as the situated cognition approach. Like the ethnomethodologists, adherents of this approach concentrate their efforts on the observation and analysis of everyday activity. But they have a particular interest in cognition, which they regard not as processes going on in the individual head, but as public practices in which individuals participate. These practices may involve processes in the head, but they are not merely dependent results of those processes. So cognition is not regarded as a matter of individual mental capacities which can be transferred intact from situation to situation, but rather as spread across collections of individuals acting collaboratively in certain sorts of environments, and thus as essentially situation dependent. So we have here another clear instance of the shift in the unit of analysis which characterizes the interactionist alternative, as well as an emphasis on sociality.
These potential allies are important first of all because they are already producing analyses of behavior which are compatible with the interactionist alternative I have described. But they all also have a lot to say about the role of sociality and the social/artifactual environment in behavior. This is important because, as I complained in a previous section, AI has traditionally paid very little attention to the social foundations of intelligence. Moreover, the versions of interactionist analysis within the present purview of cognitive science are represented by the work of people like Marr and Gibson, whose primary concern was vision. Because of the nature of vision, they were led to concentrate largely on the physical/causal environment, and so do not offer much in the way of inspiration with regard to the social environment. And in addition, as I have tried to point out above, the status of both Gibson and Simon is somewhat equivocal with regard to interactionism. So if the interactionist alternative is to be pursued and elaborated within cognitive science and AI, these other disciplines will have a crucial role to play as sources of data and inspiration.

Conclusion
In conclusion, I would like to draw out some of the morals of this tale. Although for the sake of simplicity I have couched the discussion in this paper mostly in terms of AI, these morals apply to cognitive science as a whole.

The first point concerns sources of theoretical inspiration and empirical data. Traditionally, the disciplines constituting the interdisciplinary field of cognitive science have looked almost entirely to each other in this respect. This is, one must admit, only reasonable under the assumption that these disciplines have some common and exclusive subject matter, viz. cognition, about which they share certain underlying assumptions, viz. the cognitivist assumptions outlined in the previous section. The disciplines in question are primarily (Anglo-American) philosophy, AI, psychology, and neurobiology, and to a lesser extent linguistics and (cognitive) anthropology. But the interactionist alternative breaks with some fundamental cognitivist assumptions about what cognition is, and how cognitive phenomena are to be explained. This in turn suggests that breaking out of the charmed circle of the current cognitive sciences is both necessary and salutary. What I am suggesting, in short, is that the shift in the unit of analysis portended by the interactionist alternative portends a corresponding shift in ideas as to which disciplines belong within the cognitive science fold. In concrete terms, this would mean putting ethology and sociology on an equal footing with psychology and the neurosciences, for example. It would also mean (and this is why I used Heidegger as a springboard for this whole discussion) putting Continental philosophy on an equal footing with Anglo-American philosophy.
The second point concerns representation and computation. These have been regarded in cognitive science as the medium of cognition, and indeed as circumscribing the realm of cognitive phenomena in some very real sense. The research of cognitive scientists has thus been overwhelmingly concentrated on understanding and explicating these notions. Adoption of the interactionist alternative would change all this. Representation and computation would continue to be legitimate topics of investigation, certainly. But they would be regarded as resources for cognition, not as its exclusive medium. And they would only be two among many such resources, because environmental resources of various sorts would also become legitimate topics for investigation. Since cognitive science has been defined largely in terms of its reliance on the notions of representation and computation (Gardner, 1985), this would seem to constitute a major reorientation within the field.
The argument from intractability is designed to show that there are practical motivations for taking up the interactionist alternative. There are, of course, other motives for adopting it which I have not discussed here. It is, in my opinion, better philosophy because more profoundly anti-Cartesian than the other options available. And I think the argument can be made that interactionism simply fits the observed facts better than unreconstructed cognitivism has any hope of doing. So my recommendation of the interactionist alternative is in the spirit of Jerry Fodor's (although he had quite a different program in mind) when he said:

The form of a philosophical theory, often enough, is: Let's try looking over here. (Fodor, 1981, p. 31)

References
Agre, Philip E. and Chapman, David. 1987. "Pengi: An Implementation of a Theory of Activity." Proceedings of the 6th National Conference on Artificial Intelligence, Seattle, WA.
Barton, G. Edward, Berwick, Robert C. and Ristad, Eric Sven. 1987. Computational Complexity and Natural Language. Cambridge, MA: The MIT Press.
Brooks, Rodney A. 1986. "Achieving Artificial Intelligence Through Building Robots." MIT A. I. Memo 899.
Brooks, Rodney A. 1987. "Intelligence Without Representation." Proceedings of the Workshop on the Foundations of Artificial Intelligence, Cambridge, MA.
Burge, Tyler. 1979. "Individualism and the Mental." In Midwest Studies in Philosophy, Volume IV: Studies in Metaphysics. P. A. French, T. E. Uehling and H. K. Wettstein, eds. Minneapolis: University of Minnesota Press.
Burge, Tyler. 1986. "Individualism and Psychology." Philosophical Review 95 (Jan.), 3-45.
Chapman, David. 1987. "Planning for Conjunctive Goals." Artificial Intelligence 32, 333-77.
Cherniak, Christopher. 1986. Minimal Rationality. Cambridge, MA: The MIT Press.
Churchland, Paul M. 1979. Scientific Realism and the Plasticity of Mind. Cambridge: Cambridge University Press.
Churchland, Paul M. 1986. "Some Reductive Strategies in Cognitive Neurobiology." Mind 95, no. 379, 279-309.
Churchland, Paul M. 1989. A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Cambridge, MA: The MIT Press.
Cummins, Robert C. 1986. "Inexplicit Information." In The Representation of Knowledge. Myles Brand and R. M. Harnish, eds. Tucson: University of Arizona Press.
Dennett, Daniel C. 1978. "A Cure for the Common Code?" In Brainstorms. Cambridge, MA: The MIT Press.
Dreyfus, Hubert. 1979. What Computers Can't Do (rev. ed.). New York: Harper & Row.
Dreyfus, Hubert. 1981. "From Micro-worlds to Knowledge Representation: AI at an Impasse." In Mind Design. John Haugeland, ed. Cambridge, MA: The MIT Press.
Dreyfus, Hubert and Dreyfus, Stuart. 1986. Mind Over Machine. New York: The Free Press.
Dreyfus, Hubert and Dreyfus, Stuart. 1988. "Making a Mind vs. Modeling the Brain: AI Back at a Branchpoint." Daedalus 117, No. 1, 15-43.
Fodor, Jerry A. 1980. "Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology." Behavioral and Brain Sciences 3, 63-73.
Fodor, Jerry A. 1981. Representations: Philosophical Essays on the Foundations of Cognitive Science. Cambridge, MA: The MIT Press.
Gardner, Howard. 1985. The Mind's New Science: A History of the Cognitive Revolution. New York: Basic Books.
Gibson, James J. 1979. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin Company.
Heidegger, Martin. 1962. Being and Time. John Macquarrie and Edward Robinson, trans. New York: Harper & Row. (Original work published 1927.)
Horswill, Ian D. and Brooks, Rodney A. 1988. "Situated Vision in a Dynamic World: Chasing Objects." Proceedings of the 7th National Conference on Artificial Intelligence, St. Paul, MN.
Levesque, Hector. 1986. "Making Believers out of Computers." Artificial Intelligence 30, 81-108.
Levesque, H. and Brachman, R. 1987. "Expressiveness and Tractability in Knowledge Representation and Reasoning." Computational Intelligence 3, 78-93.
Marr, David. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco: W. H. Freeman and Company.
Newell, Allen and Simon, Herbert A. 1976. "Computer Science as Empirical Enquiry: Symbols and Search." Communications of the ACM 19 (March, 1976), 113-26.
Rosenschein, Stanley J. 1985. "Formal Theories of Knowledge in AI and Robotics." New Generation Computing 3, No. 4, 345-57.
Simon, Herbert A. 1981. The Sciences of the Artificial (revised edition). Cambridge, MA: The MIT Press.
Smith, Brian Cantwell. 1986. "Is Computation Formal?" Unpublished manuscript.
Tsotsos, John K. 1988. "A 'Complexity' Level Analysis of Immediate Vision." International Journal of Computer Vision 1-4, 303-20.

Acknowledgements
Earlier versions of this paper were promulgated in the form of talks delivered to the Pacific Division of the American Philosophical Association, the Society for Philosophy and Psychology, and the Revolving Seminar at the MIT Artificial Intelligence Laboratory. I would like to thank the audiences at these talks, and in particular my commentators, Peter Woodruff, Don Perlis and Alice Kyburg, for their valuable criticisms and thought provoking questions. In addition, I owe special thanks to Hubert Dreyfus for having made me aware of the difference between his project and mine; to David Chapman for having, on more than one occasion, gently disentangled me from the thickets of complexity theory and sent me off again in the right direction; and to John Haugeland and Tim van Gelder for patiently reading several drafts. The final version of this paper was prepared during my joint appointment as a Mellon Postdoctoral Fellow and a Visiting Fellow in the Center for the Philosophy of Science at the University of Pittsburgh. I gratefully acknowledge this support, and the encouragement of my colleagues in Pittsburgh.
