Hubert L. Dreyfus, "Standing Up to Analytic Philosophy and Artificial Intelligence at MIT in the Sixties," Proceedings and Addresses of the American Philosophical Association, Vol. 87 (November 2013): 78-92. Published by the American Philosophical Association. Stable URL: https://www.jstor.org/stable/43661448

Standing Up to Analytic Philosophy and Artificial Intelligence at MIT in the Sixties

Hubert L. Dreyfus
UNIVERSITY OF CALIFORNIA, BERKELEY

John Dewey Lecture delivered at the eighty-seventh annual Pacific Division meeting of the American Philosophical Association in San Francisco, California, on March 29, 2013.

I want to thank the Dewey Lecture Committee for encouraging me to recall the most distressing and yet productive period in my academic career - my decade at MIT from 1960 to 1970.

First, in order to provide a context for recounting events at MIT in the sixties, I need to say a few words about where I was coming from when, in 1960, I arrived at Building 14 to teach the humanities in Course 21. (At MIT everything was referred to by number.)

I wanted to use philosophy to understand, and hopefully influence, the world outside academic
philosophy. Seeing no way to do this, I was
lost. Happily, however, the Harvard philosophy department admitted me
to graduate school and later made a virtue of my lostness by granting me a Sheldon
Traveling Fellowship that required that I study in Europe but not spend more than a
month in any one place. So I found myself
drifting around Europe arranging to meet various influential continental
philosophers such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty,
thereby getting, if not their arguments, at least a sense of their engaged approach to
philosophy. During a visit to Heidegger's home, for example, I asked him what he
thought of Sartre's Das Sein und das Nichts, a copy of which was lying on his desk,
and he replied vehemently: "Was soll ich mit diesem Dreck anfangen?" or "How can I
begin to read such rubbish?"

I wanted to hear Sartre's side of the story so, along with John Compton, I dropped Sartre a note, and he invited us to come at noon to his apartment overlooking Place St. Germain des Prés. He ushered us in and seated us at opposite ends of his long desk, placing himself between us. This turned out to be rather unnerving since during our two-hour conversation he talked facing straight ahead while keeping one wandering eye on each of us. As a result, we could never tell which of us he was talking to or which of us should answer his questions.

In any case, Sartre had an interesting way of limiting his time with us.
It turned out that his proposing a noon rendez-vous with us was no accident. Sartre
normally didn't eat lunch, so after two hours of diffuse discussion contrasting his
account of consciousness with Heidegger's,
we were faint from hunger and relieved when he ushered us to the door
and presumably returned to his work without lunch.

More frustrating than my encounter with Sartre was my visit with Merleau-Ponty at the Collège de France. He filled the opening silence with small
talk, but when I tried to ask a question about his book, Phenomenology of
Perception, he said he didn't like to talk about an already published work. So,
undeterred, I asked what he was currently working on. But he replied that he never
talked about his current work, and turned the conversation
back to gossip about life at the Collège de France!1

My more recent existential interaction was far more rewarding. I had several conversations with Michel Foucault, and, later, when asked about
the intellectual influences on his thought, he said: "I was surprised when two of my
friends in Berkeley wrote something about me and said that Heidegger was
influential. Of course, it was quite true, but no one in
France has ever perceived it."2 And in his last interview Foucault confided
that: "For me Heidegger has always been the essential philosopher ....
My entire philosophical development was determined by my reading of
Heidegger."

After the talk with Sartre, I was hoping to write a dissertation on Sartre and Heidegger on consciousness, but at that time there was no phenomenologist in the Harvard philosophy department to be my dissertation advisor, so I was still lost until Huston Smith hired me to teach humanities at
MIT.3 There, I was asked to teach Homer, Aeschylus, Augustine, Dante, Nietzsche, and Dostoyevsky to MIT freshmen and sophomores - not the sort of readings usually assigned in philosophy courses, but they seemed to me surprisingly relevant to our times. Indeed, those authors stood me in good stead when Sean Kelly and I wrote All Things Shining, subtitled "Reading the Classics to Find Meaning in a Secular Age," suggesting how classical texts can be read as proposing cultural change.4


Many electrical engineering students at MIT loved the classic texts taught
in Course 21 and were eager to be involved in interpreting them. By
contrast, the students from the artificial intelligence (AI) laboratory were
far from open-minded. They would come to my office hours and say in
effect:

You philosophers have been reflecting in your armchairs for over 2,000
years and you still don't understand intelligence. We in the AI lab have
taken over and are succeeding where you philosophers have failed. We

are now programming computers to solve problems, to understand natural language, to perceive, to learn, and to be
generally intelligent.5

Marvin Minsky, head of the MIT AI lab, proclaimed: "Within a generation we will have intelligent computers like HAL in the film 2001."6

As luck would have it, in 1963 I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS). Newell and Simon claimed that both digital computers and the human mind could be understood as "physical symbol systems," using strings of bits or streams of neuron pulses as symbols representing features of the external world. Intelligence, they claimed, simply required drawing the appropriate conclusions from these "internal representations." As they put it: "A physical symbol system has the necessary and sufficient means for general intelligent action."7
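To make this concrete, here is a minimal sketch in Python - my illustration, not Newell and Simon's actual programs - of a world encoded as symbol structures and of "intelligence" as rule-governed manipulation of them:

    # A toy "physical symbol system": facts are symbol structures, and
    # "thinking" is the rule-governed derivation of new ones.
    facts = {("is-a", "socrates", "human")}

    # One rule - "all humans are mortal" - as a premise pattern and a
    # conclusion pattern; "?x" marks the variable bound by matching.
    rules = [(("is-a", "?x", "human"), ("is-a", "?x", "mortal"))]

    def infer(facts, rules):
        """Apply the rules until no new symbol structures appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for (p_rel, _, p_cls), (c_rel, _, c_cls) in rules:
                for rel, subj, cls in list(derived):
                    if rel == p_rel and cls == p_cls:
                        new_fact = (c_rel, subj, c_cls)
                        if new_fact not in derived:
                            derived.add(new_fact)
                            changed = True
        return derived

    print(infer(facts, rules))
    # Adds ('is-a', 'socrates', 'mortal') to the stored symbols.

On this picture, everything the machine can ever take account of must first be spelled out as such a symbol structure - the assumption whose limits the rest of this story traces.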

As I studied the RAND papers and memos, however, I found to my surprise that, far from replacing philosophy, the pioneers in AI had learned a lot,
directly and indirectly, from the philosophers. They had embraced Hobbes's
claim that reasoning was calculating, Descartes's mental representations,
Leibniz's idea of a "universal characteristic" (a set of primitives in which all
knowledge could be expressed), Kant's claim that concepts were rules, Frege's
formalization of such rules, and Wittgenstein's postulation of
logical atoms in his Tractatus. In fact, without realizing it, AI researchers were hard at work turning rationalist philosophy into a research program.

They saw their work as a first step towards AI, and claimed that "Intuition, insight, and learning are no longer exclusive possessions of human beings: any large high-speed computer can be programmed to exhibit them."8 The supposition that AI research was a first step toward artificial intelligence, however, presupposed that research in AI was on the right track - that there was a continuum leading from current work to successful AI.


This way of thinking is an example of what logician Yehoshua Bar-Hillel called the "first-step fallacy." Built into the notion of a first step is that it is a first step towards success, not a first step towards failure. To claim that one has made a genuine first step towards climbing a mountain, one must already have reason to believe that by going on this way one will make it to the top. First-step claims, thus, have the idea of a successful last step built in even though they provide no argument for the claim that one is on the way to achieving one's goal. My brother Stuart put it this way: "It's like claiming that the first primate to climb a tree was making a step towards flight to the moon." One may, however, have overlooked some serious problem along the way. And, indeed, it turned out that the first-step assumption was a bad basis for optimism in AI. There was a discontinuity in the claimed continuum of steady incremental progress. The unexpected obstacle was called the frame problem.

Using Heidegger as a guide, I began to see that AI researchers were running up against the problem of representing and updating relevance - a
problem that Heidegger saw was implicit in Descartes's understanding of the world as a
collection of meaningless facts to which the mind had to
assign what Descartes called values, and John Searle now calls "function
predicates." Heidegger warned that values are just more meaningless facts and so
could not solve the problem of representing relevance.9 To assign to hammers the
function of hammering leaves out the relation of hammers to nails, to other equipment,
to the point of building things,
the skills required when actually using a hammer, etc. That is, Heidegger saw that
assigning function predicates to brute facts couldn't capture the
meaningful organization of the everyday world of equipment which we are always
already in, and in which alone hammering makes sense.

One might think that what obviously is missing is a representation of our way of life, so we need to add that, but then one gets all the problems connected
with representing a world. The world is not an aggregate of facts but, as Heidegger
points out, it is the taken-for-granted holistic background of familiar practices on the basis of which the relevance and significance of particular facts are determined.

Minsky, unaware of Heidegger's critique, was convinced that all that was needed to
achieve AI was representing a few million facts. It seemed to
me, however, that the deep problem wasn't storing and retrieving millions of facts; it was
knowing how to zoom in on those facts that were relevant
in the current situation. The frame problem is one version of this relevance
problem. If the computer is running a representation of the current state of its world
and something in that world changes, how does the program
determine which of its other represented facts can be assumed to have stayed the same and which have to be updated? For example, if I put
up the shades in my office, which other facts about my office will have changed: the
intensity of the illumination for sure, the temperature perhaps, the shadows probably,
but not the solidity of the floor or the
number of books on the shelves.
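The shades example can be made concrete with a toy sketch (mine, not any actual program of the period): the office is a flat table of facts, and nothing in the table itself says which entries an action touches.

    # The frame problem in miniature (hypothetical sketch).
    office = {
        "shades": "down",
        "illumination": "dim",
        "temperature_c": 20,
        "shadows": "sharp",
        "floor_solid": True,
        "books_on_shelf": 132,
    }

    def raise_shades(world):
        world = dict(world)
        world["shades"] = "up"
        # The representation is silent about side effects. Either the
        # programmer hand-codes a "frame axiom" for every action/fact
        # pair...
        world["illumination"] = "bright"   # surely changed
        world["shadows"] = "soft"          # probably changed
        # ...or the program rechecks every fact after every action.
        # Nothing marks "floor_solid" or "books_on_shelf" as safely
        # unchanged.
        return world

    print(raise_shades(office))

Both options - exhaustive frame axioms or exhaustive rechecking - grow unmanageable as the facts multiply into the millions.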

Minsky suggested that, to avoid the relevance problem, AI programmers could use what he called frames - representations of the relevant facts in a typical situation like going to a birthday party. For example, when I arrive at a birthday party, my birthday party frame would lead me to take account of those and only those facts that were normally relevant at birthday parties - for example, the gifts and the goodies, but not the temperature or the force of gravity.

But a system of frames isn't in a situation, so in order to select the possibly relevant facts in the current situation one would need a frame for recognizing situations as, say, birthday parties, and for telling them apart from other situations such as ordering in a restaurant. But how, I wondered, could the computer select the relevant frame for selecting the birthday party frame as the relevant frame, so as to zoom in on the current relevance of, say, an exchange of gifts rather than of money? It seemed obvious to me that any AI program using frames to organize millions of meaningless facts so as to retrieve the currently relevant ones was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts, and that, therefore, the frame problem wasn't just a problem; it was a sign that something was seriously wrong with the whole approach.
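A hypothetical sketch of a Minsky-style frame makes the regress visible: the frame is just a data structure listing what is normally relevant in a stereotyped situation, and selecting among frames already requires knowing which observed features matter - that is, it already requires a frame.

    # Minsky-style frames as data structures (a sketch of the idea,
    # not Minsky's own frame language).
    birthday_party = {
        "situation": "birthday party",
        "relevant": ["gifts", "cake", "guests", "candles"],
        "ignore": ["temperature", "gravity"],
    }
    restaurant = {
        "situation": "restaurant",
        "relevant": ["menu", "bill", "tip", "waiter"],
        "ignore": ["gifts", "candles"],
    }
    frames = [birthday_party, restaurant]

    def select_frame(observed, frames):
        # Pick the frame whose relevant slots best match the observed
        # features. But deciding which features count as "observed"
        # already presupposes knowing the situation - a frame for
        # selecting frames, and the regress begins.
        return max(frames,
                   key=lambda f: len(set(f["relevant"]) & set(observed)))

    print(select_frame(["cake", "candles", "waiter"], frames)["situation"])
    # -> 'birthday party', but only because the observation list was
    #    pre-filtered; in an unrestricted world everything is "observed."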

But how, I wondered, do we manage to organize the vast array of facts that
supposedly make up commonsense knowledge so that we can update
just those facts that are relevant in the current situation? The answer,
according to Heidegger, is that we can't manage this feat any more than a computer
can, but fortunately we don't have to. We are always already in a world that is organized in
terms of our interests, experiences, and bodies
and, hence, a world permeated by significance and relevance. Only if
we stand back from our involvement in the world and represent things
from a detached perspective as meaningless objects do we confront the
problem of relevance. As Heidegger argues in his critique of Descartes, if you strip
away relevance and start with context-free bare facts, you can't
get relevance back.

According to Heidegger, only by being socialized from the start into a world of shared practices do human beings acquire skills for getting
around in that world, and this general background familiarity determines what, at any given moment, attracts our attention as relevant. As involved copers, then, we don't have to give brute facts meaning. Things around us already have meaning, and so keeping track of changes in relevance is not for us the intractable problem it is to Symbolic AI researchers.

By the end of the sixties, it began to look to me as if Symbolic Representational AI was an exemplary case of what Imre Lakatos calls a degenerating research program. Such a research program is the result of organizing research around a basically flawed assumption so that predictions constantly fail to pan out and believers finally abandon the current approach as soon as they can conceive of an alternative.

Unfortunately, what characterized AI in those days was its refusal to face up to and learn from its failures. In this case, to avoid facing the relevance problem, the AI programmers at MIT in the sixties and early seventies limited their programs to what they called micro-worlds - artificial domains in which the small number of features that were possibly relevant was determined beforehand. However, since this approach obviously avoids the real-world relevance problem, Ph.D. students at MIT felt obliged to claim in their theses that their micro-worlds could be made more and more realistic, and that the techniques they introduced could then be generalized to cover relevance. There were, however, no successful follow-ups.
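A minimal blocks-style micro-world, sketched here for illustration (not any particular MIT thesis program), shows why relevance never arises inside such a domain: the programmer has fixed in advance the only features that can ever matter.

    # A micro-world: the vocabulary of possibly relevant facts - what
    # is on what, what is clear - is closed and tiny by construction.
    blocks = {"A": {"on": "table"}, "B": {"on": "A"}}

    def clear(block):
        """A block is clear if nothing sits on it."""
        return all(info["on"] != block for info in blocks.values())

    def move(block, dest):
        """Legal only within the pre-enumerated feature set."""
        if clear(block) and (dest == "table" or clear(dest)):
            blocks[block]["on"] = dest

    move("B", "table")   # succeeds: B was clear
    move("A", "B")       # succeeds: both are now clear
    print(blocks)        # {'A': {'on': 'B'}, 'B': {'on': 'table'}}

Inside the micro-world nothing unexpected can ever become relevant, which is precisely why success there showed nothing about the everyday world.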

The work of Terry Winograd is typical. His "blocks-world" program, SHRDLU, which responded to commands in ordinary English instructing a
virtual robot arm to move blocks displayed on a computer screen, was a
micro-world program that really worked - but, of course, only in its micro-world. So to develop the expected generalization of his techniques,
Winograd started working on what he called a Knowledge Representation
Language (KRL). But his approach wasn't working. Winograd, however,
unlike his colleagues, was conscientious and open minded enough to
try to figure out what had gone wrong, so he suggested that we have
weekly lunches together to discuss his problem in a broader philosophical context.
Looking back, Winograd says: "My own work in computer science is greatly influenced by conversations with Dreyfus."10

After a year of such conversations, and after reading the relevant texts of the existential
phenomenologists, Winograd abandoned work on KRL. He continued, however, to direct
the dissertation work of Sergey Brin and Lawrence Page on retrieving relevant
information from a database
of possibly relevant information. Brin and Page never received Ph.D.s but their work became Google! Winograd meanwhile began including Heidegger in his
Stanford computer science courses.

In the meantime, the MIT humanities department had spun off a Philosophy Section
including such analytic philosophers as Jerry Fodor, Hilary Putnam, James Thomson,
and Sylvain Bromberger. But, sadly, in those days the relations between continental and
analytic philosophers
were utterly antagonistic.

Indeed, my MIT colleagues' way of dealing with my interest in existential phenomenology was to virtually exclude me from the philosophy program. In a recent conversation Jerry Fodor
couldn't remember informing me of
any faculty meetings that took place during the time he was chair, and, indeed, in my
seven years at MIT, I was never informed of any faculty meeting. It seems that decisions
were made at informal gatherings that took place in the homes of various faculty
members to which I was not
invited. I heard strange rumors. A friendly assistant professor told me that at one such meeting
it was decided that no library funds were to be spent
on continental philosophy books devoted to "Stone Age Philosophy."

The most up-to-date philosophers in those days believed that minds were software
running on the brain as hardware. So I was not surprised
to hear from Bromberger that Thomson and Minsky were "good friends."
The affinity between the philosophers and the Al researchers emerged
in a humorous way. When my 1965 RAND Paper, "Alchemy and Artificial Intelligence,"
was featured in The New Yorker's "Talk of the Town," Hilary
Putnam asked me earnestly over coffee when I would admit to being a
Turing Machine.11

Naturally, my teaching continental philosophy was an embarrassment to the MIT Philosophy Section. Ned Block writes me:

When you were deciding to go to Berkeley I witnessed a conversation between James Thomson and Barry Stroud in which
Barry said how pleased they were [at Berkeley]
to be getting you and Thomson said in his characteristic
style: "Your gain is our gain."

Given this attitude, it was not surprising that, when I came up for tenure,
the AI researchers (and presumably the philosophers too) recommended to Jerome Wiesner, then provost in charge of academic matters, that he oppose my tenure, which was thought to pose a threat to DARPA's (the Defense Advanced Research Projects Agency) support of AI research at
MIT. The argument of the AI researchers was that my being a professor at MIT would lend credibility to my claim that Symbolic AI, based as it was on false assumptions about the way the mind worked, was bound to fail. The AI community was, therefore, more interested in destroying my credibility than in answering my critique. They encouraged Seymour Papert to write and circulate a document entitled "The Artificial Intelligence of Hubert L. Dreyfus," in which Papert stressed that I didn't know how to program and ridiculed as "silly" everything I had written on the subject of AI.12 Such was the price of trying to make philosophy relevant.

When I realized that Minsky and his colleagues were afraid that my critique might fall into the hands of officials at DARPA, which supported their research to the tune of a million dollars a year, I considered scaring the AI supporters by hiring an actor to dress in uniform and lunch with me at the MIT Faculty Club. I regret I never got a chance to carry out that plan, however. Before I could put it into effect, I was summoned to the Pentagon as a consultant. There, just as my MIT opponents feared, my recommendation that the military cut all AI support was taken seriously, presumably contributing to the drying up of DARPA support that came to be known as the AI Winter.13 Symbolic AI was dead. Only a few AI believers continued to affirm the view that John Haugeland referred to as GOFAI (Good Old Fashioned AI).

Meanwhile, Wiesner had to decide whether or not to block the Humanities Department's recommendation of tenure for me. He consulted computer scientists at
Harvard, Bell Labs, and Novosibirsk, and read an early draft
of my book, What Computers Can't Do, and then invited me into his office
and personally offered me tenure. I said I was sorry but I didn't feel
comfortable at MIT since my courses were not accepted by my colleagues and,
therefore, I was going to accept a pending offer from UC Berkeley.

Then things got really weird!

To encourage me to accept their offer of tenure, the Berkeley philosophy department invited me to visit for a semester in 1967 and I accepted.
It turned out 1967 was an amazing time to be moving between MIT
and Berkeley. I left super-straight MIT, its dedicated students, and its
philosophical factions, to arrive at laid-back Berkeley (although they didn't yet have that word for that attitude). There, I heard Janis Joplin singing
word for that attitude). There, I heard Janis Joplin singing
on a street corner, and the philosophy department assured its sense
of community by organizing an annual retreat at Asilomar at which the majority of the
graduate students and some of the faculty dropped acid together - which did, indeed,
create a sense of togetherness.


So I moved to the Bay Area. But right after I left MIT for Berkeley, Huston Smith received an offer from Cal Tech to teach the humanities there. Huston pointed out to the MIT deans that if I left for Berkeley, there would be no one among the philosophers at MIT for him to talk with, and consequently he was seriously considering accepting the Cal Tech offer. So, in what seemed to be a desperate attempt to save the MIT image of openness to the humanities from being trumped by Cal Tech, the higher authorities at MIT offered to hire a Bert-Dreyfus-substitute as soon as possible and, in the meantime, to invite me back to MIT.

I was happy to come back to Cambridge since I enjoyed teaching continental philosophy to eager undergraduate electrical engineers, so I accepted an offer to return to teach Humanities at MIT for at least one semester. But then it seems the MIT philosophers must have made clear they didn't want an embarrassing existentialist around corrupting their program with Stone Age Philosophy, for in the end, in order to keep Huston Smith from leaving MIT for Cal Tech, MIT agreed to pay my full salary for a semester without my doing any teaching at all. My only job was to talk with Huston.

My experience teaching - and eventually not teaching - continental philosophy at MIT had a happy ending. I left for Berkeley for good, and Winograd sums up what happened at MIT as follows:

For those who have followed the history of artificial intelligence, it is ironic that [the MIT] laboratory should become a cradle of "Heideggerian AI." It was at MIT that
Dreyfus first formulated his critique, and, for twenty years,
the intellectual atmosphere in the Al Lab was overtly hostile to
recognizing the implications of what he said. Nevertheless, some
of the work now being done at that laboratory seems to have
been affected by Heidegger
and Dreyfus.14

Here's how it happened. In March 1986, the new director of the MIT AI lab, Patrick Winston, moderated Minsky's hostile attitude toward me and allowed, if not encouraged, several graduate students, led by Phil Agre and John Batali, to invite me to give a talk to the AI community.15 I called the talk "Why AI Researchers Should Study Being and Time."16

In my talk I repeated what I had written in 1972 in What Computers Can't Do: "[T]he meaningful objects . . . among which we live are not a model of the world stored in our mind or brain; they are the world itself."17


Meanwhile, Rodney Brooks had taken over as head of the MIT AI lab and renounced representations. He reported that, based on the idea that "the best model of the world is the world itself," he had "developed a new approach in which a mobile robot uses the external world itself as its representation - continually referring to its sensors rather than to an internal world model."18 Looking back at the frame problem, Brooks writes:

And why could my simulated robot handle it? Because it was using the world as its model. It never referred to an internal
description of the world that would quickly get
out of date if anything in the real world moved.19
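The idea can be sketched schematically (this is my illustration, not Brooks's actual code): the robot stores no description of the room that has to be kept up to date; each control cycle simply re-reads the sensors, so a changed world shows up in the very next reading.

    import random

    def read_sonar():
        # Stand-in for a hardware sensor: distance to the nearest
        # obstacle, in meters.
        return random.uniform(0.1, 3.0)

    def control_step():
        # No internal map, hence nothing to update when the world
        # changes: the next reading reflects the world as it now is.
        distance = read_sonar()
        if distance < 0.5:
            return "turn"        # avoid-obstacle behavior takes over
        return "go forward"      # default wandering behavior

    for _ in range(5):
        print(control_step())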

Brooks's work is an important advance, but his robots respond only to fixed features in the environment, not to context. By operating in a fixed
world and responding only to the small set of possibly relevant features
picked up by their receptors, Brooks's ant-like animats, as he calls them,
begged the question of changing relevance and so finessed rather than
solved the frame problem.

Yet, in spite of the history of first-step fallacies in AI, the next step was apparently irresistible. Brooks and Daniel Dennett succumbed to the sort of extravagant optimism characteristic of AI researchers in the sixties. On
the basis of Brooks's success with ant-like devices, instead of trying to make, say, an
artificial spider, Brooks and Dennett decided to leap ahead
on the supposed continuum from insects to humans and build a humanoid robot. As
Dennett explained in a 1994 report:

A team at MIT of which I am a part is now embarking on a long-term project to design and build a humanoid robot,
Cog, whose cognitive talents will include speech, eye-coordinated
manipulation of objects, and a host of self-protective, self-regulatory and self-exploring activities.20

Dennett seems to reduce the project to a joke when he adds, apparently in all seriousness, "While we are at it, we might as well try to make Cog crave human praise and company and even exhibit a sense of humor."21 Of course, the "long-term project" was short-lived. It failed to achieve any
of its goals and the original robot is now in a museum. But, as far as I know,
neither Brooks nor Dennett nor anyone else connected with the project has published
an account of the failure and what mistaken assumptions underlay their absurd
optimism. In a personal communication, Dennett,
true to first-step thinking, claimed that "Progress was being made on all
the goals, but slower than had been anticipated."


Clearly, something had gone wrong. Some specific assumptions must have been mistaken, but all we find in Dennett's assessment is the
implicit assumption that human intelligence is on a continuum with insect
intelligence, and that therefore adding a bit of complexity to what had
already been accomplished with Brooks's animats counts as progress toward
humanoid intelligence.

Brooks acknowledges the similarities of his approach to Heidegger's, but he continues to insist that he does not owe to Heidegger his idea of
replacing symbolic representations in a computer with embodied moving
robots in direct contact with the world. Indeed, he explicitly denies any influence,
saying:

[I]n some circles, much credence is given to Heidegger as one who understood the dynamics of existence. Our approach
has certain similarities to work inspired by this German
philosopher but our work was not so inspired. It
is based purely on engineering considerations.22

Although he doesn't credit the direct influence of Heidegger and Merleau-Ponty, Brooks does give me credit for "being right about many issues such
as the way in which people operate in the world is intimately coupled to
the existence of their body."23

Unlike previous generations of Al researchers with their first-step fallacy, Brooks is even
prepared to entertain a Merleau-Ponty-like suggestion that
his work might be on the wrong track. He concludes a discussion of his
animats with the insightful comment that

Perhaps there is a way of looking at biological systems which will illuminate an inherent necessity in some
aspect of the interactions of their parts that is completely
missing from our artificial systems.

Brooks continues:

I am not suggesting that we need go outside the current realms of mathematics, physics, chemistry, or
biochemistry. Rather I am suggesting that perhaps at
this point we simply do not get it, and that there is some
fundamental change necessary in our thinking in order
that we might build artificial systems that have the levels
of intelligence, emotional interactions, long term stability, autonomy, and general robustness that we might expect of biological systems.24

Even Jerry Fodor, at the end of his book, The Modularity of Mind, acknowledges
that believers in GOFAI have no response to my critique.
He writes:

If someone - a Dreyfus, for example - were to ask us why we should even suppose that the digital computer is a
plausible mechanism for the simulation of global cognitive
processes, the answering silence would be deafening.25

Happily, brain researchers such as Walter Freeman at Berkeley have begun to propose models of how brains can take as input energy from
the physical universe and respond at the neuron level in such a way as
to open embodied beings like us to a world organized in terms of their needs, interests,
and bodily capacities.26

CONCLUSION

In spite of the tensions, I was happy in the Humanities Department at MIT. I had good students like Ned Block to teach, good philosophers like Samuel Todes to talk to, great classics to enjoy in Course 21, and AI researchers to debate. True, it was clear I would never teach graduate students since the Philosophy Program was dominated by philosophers who disdained continental philosophy. Still, I did what I could to stay out of the continental/analytic standoff. Philosophy students like Terrence Malick, who later became a famous movie director, came over from Harvard to audit, and later to teach, my Heidegger course, and Richard Rorty, who was teaching at nearby Wellesley, came by to talk, and later recalled that I turned him on to Heidegger and Merleau-Ponty. He wrote that "no one in our day has done more than Dreyfus to make American Philosophy less parochial."

So, it turned out to everyone's surprise, including my own, that existential philosophy could influence the world outside of academic philosophy,
making critical contributions in cultural and scientific domains.

NOTES

1. Merleau-Ponty held the chair of philosophy at the Collège de France from 1952 until his death in 1961, making him the youngest person to have been elected to a chair at the Collège.

2. Michel Foucault, "Final Interview," Raritan (Summer 1985): 8. "Le Retour de la morale," interview conducted by Gilles Barbadette, Les Nouvelles (June 28, 1984). The fuller quotation reads as follows: "Heidegger has always been for me the essential philosopher. ... I still have the notes I took while reading Heidegger - I
have tons of them! - and they are far more important than the ones I took on Hegel
or Marx. My whole philosophical development was determined by my reading of Heidegger."

3. I did write a paper with Piotr Hoffman: "Sartre's Changed Conception of Consciousness: From Lucidity to Opacity," in The Library of Living Philosophers: The Philosophy of Jean-Paul Sartre, ed. P. A. Schilpp (Open Court Publishing Company, 1982).

4. Hubert Dreyfus and Sean D. Kelly, All Things Shining: Reading the Classics to Find Meaning in a Secular Age (Free Press, 2011). [A New York Times best seller.]

5. This isn't just my impression. Philip Agre, a Ph.D. student at the AI lab at the time, later wrote:

I have heard expressed many versions of the propositions . . . that philosophy is a matter of mere thinking whereas technology is a matter of
real doing, and that philosophy consequently can be understood only as deficient. (
Computation and Human Experience [Cambridge: Cambridge
University Press, 1997], 239)

6. Marvin Minsky as quoted in a 1968 MGM press release for Stanley Kubrick's film "2001: A Space Odyssey."

7. A. Newell and H. A. Simon, "Computer Science as Empirical Inquiry: Symbols and Search," in Mind Design, ed. John Haugeland (Cambridge, MA: MIT Press, 1988).
8. Herbert A. Simon and Allen Newell, "Heuristic Problem Solving: The Next Advance in Operations
Research," Operations Research 6 (January-February 1958): 6.
9. Martin Heidegger, Being and Time, trans. J. Macquarrie and E. Robinson (New York: Harper & Row
Publishers, 1962), 132.
10. Heidegger, Coping, and Cognitive Science: Essays in Honor of Hubert L. Dreyfus, Vol. 2, ed. Mark
Wrathall (Cambridge, MA: The MIT Press, 2000), iii.

11. Hubert Dreyfus, "Alchemy and Artificial Intelligence," The New Yorker, June 11,
1966.

12. In retrospect Papert's attack makes amusing reading, as can be seen on Amazon.com at http://www.amazon.com/The-artificial-intelligence-Hubert-Dreyfus/dp/B0007EKRRK/ref=cm_cr_pr_product_top. There a reviewer comments:

This is a marvelous piece of arcana. You will not stop laughing as "glorified computer technicians" ("computer scientists" ha-ha) try to refute Dreyfus's dead-on predictions. Strong AI is dead and it was stillborn from the start.

Just for the record, the story about my predicting that computers would never be
any good at chess is a false rumor started by Alvin Toffler. I wrote in my RAND paper
in 1965:

The initial NSS chess program was poor and, in the last five years, remains
unimproved. . . . According to Newell, Shaw, and Simon themselves,
evaluating the Los Alamos, the IBM, and the NSS programs: "All three
programs play roughly the same quality of chess (mediocre) with roughly
the same amount of computing time." Still no chess program can play
even amateur chess, and the world championship tournament is only two
years away.

[Hubert L. Dreyfus, "Alchemy and Artificial Intelligence," RAND P-3244, 1965, p. 10.]


Misreading this last sentence, Toffler wrote in 1970:

Among other things, [Dreyfus] declared, "No chess program can play even amateur chess." In context,
he appeared to be saying that none
ever would.

[A. Toffler, Future Shock (Bantam Books, July 1970), 113.]

13. From "AI Winter" (http://en.wikipedia.org/wiki/AI_winter):

The field has experienced several cycles of hype, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or decades later. There were two major winters in 1974-80 and 1987-93 and several smaller episodes, including:

• 1966: the failure of machine translation,
• 1970: the abandonment of connectionism,
• 1971-75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University,
• 1973: the large decrease in AI research in the United Kingdom in response to the Lighthill report,
• 1973-74: DARPA's cutbacks to academic AI research in general...

14. Terry Winograd, "Heidegger and the Design of Computer Systems," talk delivered at Applied
Heidegger Conference, Berkeley, CA, September 1989. Cited in What Computers Still Can't Do,
Introduction to the MIT Press Edition, xxxi.

15. The invitation was surprisingly respectful. It said:

Your work has aroused a great deal of interest among the members of the Artificial Intelligence Laboratory. Accordingly, I am honored to invite you here to speak on the 10th of March 1986.

Patrick H. Winston
Professor of Computer Science
Director, Artificial Intelligence Laboratory

16. My talk was well attended but not everyone was pleased. Agre reported to me that after it was announced
that I was going to give a talk, Minsky came into his office and shouted at him for ten minutes or so for inviting
me.

17. Hubert Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason (MIT Press, 1992), 265-66.

18. Rodney A. Brooks, "Intelligence without Representation," in Mind Design, ed. John Haugeland (The MIT Press, 1988), 416. (Brooks's paper was published in 1986.)
John Haugeland explains Brooks's breakthrough using as an example Brooks's
robot, Herbert:

Brooks uses what he calls the "subsumption architecture," according to which systems are decomposed not in the familiar way by local functions or faculties, but
rather by global activities or tasks. . . . Thus, Herbert has
one subsystem for detecting and avoiding obstacles in its path, another for wandering
around, a third for finding distant soda cans and homing
in on them, a fourth for noticing nearby soda cans and putting its hand
around them, a fifth for detecting something between its fingers and
closing them, and so on . . . fourteen in all. What's striking is that these
are all complete input/output systems, more or less independent of each other. [John
Haugeland, Having Thought: Essays in the Metaphysics of
Mind (Cambridge, MA: Harvard University Press, 1998), 218.]
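A minimal sketch (mine, not Herbert's real control code) of this decomposition by activity rather than by faculty: each layer is a complete input-to-output behavior, and a layer with something to do takes precedence over those below it.

    def avoid_obstacles(sensors):
        if sensors["sonar_m"] < 0.5:
            return "turn away"
        return None  # nothing to do; defer to the other layers

    def grab_nearby_can(sensors):
        if sensors["can_in_view"] and sensors["can_distance_m"] < 0.3:
            return "close hand"
        return None

    def wander(sensors):
        return "go forward"  # the always-applicable default

    # Independent, complete behaviors, in order of decreasing priority.
    layers = [avoid_obstacles, grab_nearby_can, wander]

    def act(sensors):
        for layer in layers:
            command = layer(sensors)
            if command is not None:
                return command

    print(act({"sonar_m": 2.0, "can_in_view": True, "can_distance_m": 0.2}))
    # -> 'close hand': the can-grabbing layer fires; obstacle
    #    avoidance stays quiet.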


19. Rodney A. Brooks, Flesh and Machines: How Robots Will Change Us (Vintage Books, 2002), 42.

20. Daniel Dennett, "The Practical Requirements for Making a Conscious Robot," Philosophical
Transactions of the Royal Society of London, A, 349 (1994): 133-46.
21. Ibid., 133.

22. Rodney A. Brooks, "Intelligence without Representation," in Flesh and Machines: How Robots Will Change Us (Vintage Books, 2002), 168. Another reference gives: R. A. Brooks, "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation RA-2 (1986): 14-23.

23. R. A. Brooks, Flesh and Machines, 168.

In stressing the role of the body, I was influenced by my colleague Samuel Todes, who went
beyond Merleau-Ponty in showing how our world-disclosing perceptual experience is structured by
the actual structure of our bodies. Heidegger and
Merleau-Ponty never tell us what our bodies are actually like and how their structure
affects our experience. Todes, however, notes that our body has a front/back
and up/down orientation. It moves forward more easily than backward, and can
successfully cope only with what is in front of it. He then describes how, in order to orient ourselves and to explore our surrounding world, we have to be balanced within
a vertical field that we do not produce, be effectively directed in a circumstantial field (facing one
aspect of that field rather than another), and appropriately set to respond to the specific thing we
are encountering within that field. For Todes, then, perceptual receptivity is an embodied,
normative, skilled accomplishment, in response to our need to orient ourselves in the world
(Samuel Todes, Body and
World [Cambridge, MA: The MIT Press, 2001].)

24. Rodney A. Brooks, "From Earwigs to Humans," Robotics and Autonomous Systems 20 (1997): 301.
My italics.

25. Jerry A. Fodor, The Modularity of Mind (Bradford/MIT Press, 1983), 128-29.

26. See Walter Freeman, How Brains Make Up Their Minds (Diane Publishing Company, 1999). Freeman cites Merleau-Ponty in this book (pp. 28, 119-121, 124, 129), which is no coincidence since we gave several seminars together.
