Hubert L. Dreyfus
Source: Proceedings and Addresses of the American Philosophical Association, Vol. 87
(NOVEMBER 2013), pp. 78-92
Published by: American Philosophical Association
Stable URL: https://www.jstor.org/stable/43661448
Accessed: 06-06-2019 02:59 UTC
This content downloaded from 200.75.19.153 on Thu, 06 Jun 2019 02:59:15 UTC
All use subject to https://about.jstor.org/terms
Standing Up to
Analytic Philosophy
and Artificial
Intelligence at MIT
in the Sixties
Hubert L. Dreyfus
UNIVERSITY OF CALIFORNIA, BERKELEY
John Dewey lecture delivered at the eighty-seventh annual Pacific Division meeting of the American Philosophical Association in San Francisco, California, on March 29, 2013.
I want to thank the Dewey Lecture Committee for encouraging me to recall the most distressing and yet productive period in my academic career - my decade at MIT from 1960 to 1970.
I wanted to use philosophy to understand, and hopefully influence, the world outside academic
philosophy. Seeing no way to do this, I was
lost. Happily, however, the Harvard philosophy department admitted me
to graduate school and later made a virtue of my lostness by granting me a Sheldon
Traveling Fellowship that required that I study in Europe but not spend more than a
month in any one place. So I found myself
drifting around Europe arranging to meet various influential continental
philosophers such as Martin Heidegger, Jean-Paul Sartre, and Maurice Merleau-Ponty,
thereby getting, if not their arguments, at least a sense of their engaged approach to
philosophy. During a visit to Heidegger's home, for example, I asked him what he
thought of Sartre's Das Sein und das Nichts, a copy of which was lying on his desk,
and he replied vehemently: "Was soll ich mit diesem Dreck anfangen?" or "How can I
begin to read such rubbish?"
I wanted to hear Sartre's side of the story so, along with John Compton, I dropped Sartre a
note, and he invited us to come at noon to his apartment
overlooking Place St. Germain des Prés. He ushered us in and seated us at opposite ends of his long desk, placing himself between us. This turned out to be rather unnerving since during our two-hour conversation he talked facing straight ahead while keeping one wandering eye on us. As a result, we could never tell which of us he was talking to or which of us should answer his questions.
In any case, Sartre had an interesting way of limiting his time with us. It turned out that his proposing a noon rendez-vous with us was no accident. Sartre normally didn't eat lunch, so after two hours of diffuse discussion contrasting his account of consciousness with Heidegger's, we were faint from hunger and relieved when he ushered us to the door and presumably returned to his work without lunch.
More frustrating than my encounter with Sartre was my visit with Merleau-
Ponty at the Collège de France. He filled the opening silence with small
talk, but when I tried to ask a question about his book, Phenomenology of
Perception, he said he didn't like to talk about an already published work. So,
undeterred, I asked what he was currently working on. But he replied that he never
talked about his current work, and turned the conversation
back to gossip about life at the Collège de France!1
After the talk with Sartre, I was hoping to write a dissertation on Sartre and Heidegger on consciousness, but at that time there was no phenomenologist in the Harvard philosophy department to be my dissertation advisor, so I was still lost until Huston Smith hired me to teach humanities at
MIT.3 There, I was asked to teach Homer, Aeschylus, Augustine, Dante, Nietzsche, and Dostoyevsky to MIT freshmen and sophomores - not the sort of readings usually assigned in philosophy courses, but they seemed to me surprisingly relevant to our times. Indeed, those authors stood me in good stead when Sean Kelly and I wrote All Things Shining, subtitled "Reading the Classics to Find Meaning in a Secular Age," suggesting how classical texts can be read as proposing cultural change.4
Many electrical engineering students at MIT loved the classic texts taught in Course 21 and were eager to be involved in interpreting them. By contrast, the students from the artificial intelligence (AI) laboratory were far from open-minded. They would come to my office hours and say in effect:
You philosophers have been reflecting in your armchairs for over 2,000 years and you still don't understand intelligence. We in the AI lab have taken over and are succeeding where you philosophers have failed.5
Marvin Minsky, head of the MIT AI lab, proclaimed: "Within a generation we will have intelligent computers like HAL in the film 2001."6
As luck would have it, in 1963 I was invited by the RAND Corporation to evaluate the pioneering work of Allen Newell and Herbert Simon in a new field called Cognitive Simulation (CS). Newell and Simon claimed that both digital computers and the human mind could be understood as "physical symbol systems," using strings of bits or streams of neuron pulses as symbols representing features of the external world. Intelligence, they claimed, simply required drawing the appropriate conclusions from these "internal representations." As they put it: "A physical symbol system has the necessary and sufficient means for general intelligent action."7
They saw their work as a first step towards AI, and claimed that "Intuition, insight, and learning are no longer exclusive possessions of human beings: any large high-speed computer can be programmed to exhibit them."8 The supposition that this research was a first step toward artificial intelligence, however, presupposed that the research was on the right track - that there was a continuum leading from current work to successful AI.
. . . step towards flight to the moon." One may, however, have overlooked some serious problem along the way. And, indeed, it turned out that the first step assumption was a bad basis for optimism in AI. There was a discontinuity in the claimed continuum of steady incremental progress. The unexpected obstacle was called the frame problem.
Minsky, unaware of Heidegger's critique, was convinced that all that was needed to
achieve AI was representing a few million facts. It seemed to
me, however, that the deep problem wasn't storing and retrieving millions of facts; it was
knowing how to zoom in on those facts that were relevant
in the current situation. The frame problem is one version of this relevance
problem. If the computer is running a representation of the current state of its world
and something in that world changes, how does the program
determine which of its other represented facts can be assumed to have
stayed the same and which have to be updated? For example, if I put
up the shades in my office, which other facts about my office will have changed: the
intensity of the illumination for sure, the temperature perhaps, the shadows probably,
but not the solidity of the floor or the
number of books on the shelves.
But a system of frames isn't in a situation, so in order to select the possibly relevant facts in the current situation one would need a frame for recognizing situations as, say, birthday parties, and for telling them apart from other situations such as ordering in a restaurant. But how, I wondered, could the computer select the birthday party frame as the relevant frame, so as to zoom in on the current relevance of, say, an exchange of gifts rather than of money? It seemed obvious to me that any AI program using frames to organize millions of meaningless facts so as to retrieve the currently relevant ones was going to be caught in a regress of frames for recognizing relevant frames for recognizing relevant facts, and that, therefore, the frame problem wasn't just a problem; it was a sign that something was seriously wrong with the whole approach.
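For readers who think in code, the regress is easy to see in schematic form. The following is a toy sketch of my own devising, not a reconstruction of any historical AI program; the fact names and frame rules are invented for illustration. Frames pick out which facts are relevant, but selecting the right frame requires a meta-frame, whose rules are themselves just more facts whose relevance must in turn be determined.

```python
# Toy illustration of the frame-regress (hypothetical names throughout,
# not any actual AI system of the period).

# The world as a bag of context-free facts.
facts = {
    "gifts_exchanged": True,
    "candles_on_cake": True,
    "menu_on_table": False,
    "money_exchanged": False,
}

# Each frame lists the facts it treats as relevant to its situation.
frames = {
    "birthday_party": ["gifts_exchanged", "candles_on_cake"],
    "restaurant": ["menu_on_table", "money_exchanged"],
}

def relevant_facts(frame_name):
    """Zoom in on just the facts this frame marks as relevant."""
    return {f: facts[f] for f in frames[frame_name]}

# But which frame applies? We need a meta-frame: rules for picking frames.
meta_frame = {
    "birthday_party": lambda: facts["candles_on_cake"],
    "restaurant": lambda: facts["menu_on_table"],
}

def select_frame():
    """Pick the first frame whose selection rule fires."""
    for name, applies in meta_frame.items():
        if applies():
            return name
    return None

# The regress: the meta-frame's selection rules are themselves meaningless
# facts/rules whose relevance would have to be settled by a meta-meta-frame,
# and so on without end.
chosen = select_frame()
print(chosen)                  # birthday_party
print(relevant_facts(chosen))  # {'gifts_exchanged': True, 'candles_on_cake': True}
```

The sketch shows only one level of the regress; the point is that nothing in the program itself determines which selection rules are the relevant ones, so each level of selection demands another.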
But how, I wondered, do we manage to organize the vast array of facts that
supposedly make up commonsense knowledge so that we can update
just those facts that are relevant in the current situation? The answer,
according to Heidegger, is that we can't manage this feat any more than a computer
can, but fortunately we don't have to. We are always already in a world that is organized in
terms of our interests, experiences, and bodies
and, hence, a world permeated by significance and relevance. Only if
we stand back from our involvement in the world and represent things
from a detached perspective as meaningless objects do we confront the
problem of relevance. As Heidegger argues in his critique of Descartes, if you strip
away relevance and start with context-free bare facts, you can't
get relevance back.
Unfortunately, what characterized AI in those days was its refusal to face up to and learn from its failures. In this case, to avoid facing the relevance problem the AI programmers at MIT in the sixties and early seventies limited their programs to what they called micro-worlds - artificial domains in which the small number of features that were possibly relevant was determined beforehand. However, since this approach obviously avoids the real-world relevance problem, Ph.D. students at MIT felt obliged to claim in their theses that their micro-worlds could be made more and more realistic, and that the techniques they introduced could then be generalized to cover relevance. There were, however, no successful follow-ups.
After a year of such conversations, and after reading the relevant texts of the existential
phenomenologists, Winograd abandoned work on KRL. He continued, however, to direct
the dissertation work of Sergey Brin and Lawrence Page on retrieving relevant
information from a database
of possibly relevant information. Brin and Page never received Ph.D.s
but their work became Google! Winograd meanwhile began including Heidegger in his
Stanford computer science courses.
In the meantime, the MIT humanities department had spun off a Philosophy Section
including such analytic philosophers as Jerry Fodor, Hilary Putnam, James Thomson,
and Sylvain Bromberger. But, sadly, in those days the relations between continental and
analytic philosophers
were utterly antagonistic.
The most up-to-date philosophers in those days believed that minds were software
running on the brain as hardware. So I was not surprised
to hear from Bromberger that Thomson and Minsky were "good friends."
The affinity between the philosophers and the Al researchers emerged
in a humorous way. When my 1965 RAND Paper, "Alchemy and Artificial Intelligence,"
was featured in The New Yorker's "Talk of the Town," Hilary
Putnam asked me earnestly over coffee when I would admit to being a
Turing Machine.11
Given this attitude, it was not surprising that, when I came up for tenure, the AI researchers (and presumably the philosophers too) recommended to Jerome Wiesner, then provost in charge of academic matters, that he oppose my tenure, which was thought to pose a threat to DARPA's (the Defense Advanced Research Projects Agency) support of AI research at MIT. The argument of the AI researchers was that my being a professor at
MIT would lend credibility to my claim that Symbolic AI, based as it was on false assumptions about the way the mind worked, was bound to fail. The AI community was, therefore, more interested in destroying my credibility than in answering my critique. They encouraged Seymour Papert to write and circulate a document entitled "The Artificial Intelligence of Hubert L. Dreyfus," in which Papert stressed that I didn't know how to program. When I realized that Minsky and his colleagues were afraid that my critique might fall into the hands of officials at DARPA, which supported their research to the tune of a million dollars a year, I considered scaring the AI supporters by hiring an actor to dress in uniform and lunch with me at the MIT Faculty Club. I regret I never got a chance to carry out this plan, however. Before I could put my plan into effect, I was summoned to the Pentagon as a consultant. There, just as my MIT opponents feared, my recommendation that the military cut all AI support was taken seriously, presumably contributing to the drying up of DARPA support that came to be known as the AI Winter.13 Symbolic AI was dead. Only a few AI believers continued to affirm the view that John Haugeland referred to derisively as GOFAI (Good Old Fashioned AI).
So I moved to the Bay Area. But right after I left MIT for Berkeley, Huston Smith received an offer from Cal Tech to teach the humanities there. Huston pointed out to the MIT deans that if I left for Berkeley, there would be no one among the philosophers at MIT for him to talk with, and consequently he was seriously considering accepting the Cal Tech offer. So, in what seemed to be a desperate attempt to save the MIT image of openness to the humanities from being trumped by Cal Tech . . .
Here's how it happened. In March 1986, the new director of the MIT AI lab, Patrick Winston, moderated Minsky's hostile attitude toward me and allowed, if not encouraged, several graduate students, led by Phil Agre and John Batali, to invite me to give a talk to the AI community.15 I called the talk "Why AI Researchers Should Study Being and Time."16
In my talk I repeated what I had written in 1972 in What Computers Can't Do: "[T]he meaningful objects . . . among which we live are not a model of the world stored in our mind or brain; they are the world itself."17
Meanwhile, Rodney Brooks had taken over as head of the MIT AI lab and renounced representations. He reported that, based on the idea that "the best model of the world is the world itself," he had "developed a new approach in which a mobile robot uses the external world itself as its representation - continually referring to its sensors rather than to an internal world model."18 Looking back at the frame problem, Brooks writes:
Yet, in spite of the history of first-step fallacies in AI, the next step was apparently irresistible. Brooks and Daniel Dennett succumbed to the sort of extravagant optimism characteristic of AI researchers in the sixties. On the basis of Brooks's success with ant-like devices, instead of trying to make, say, an artificial spider, Brooks and Dennett decided to leap ahead on the supposed continuum from insects to humans and build a humanoid robot. As Dennett explained in a 1994 report:
Dennett seems to reduce the project to a joke when he adds, apparently in all seriousness, "While we are at it, we might as well try to make Cog crave human praise and company and even exhibit a sense of humor."21 Of course, the project failed to achieve any of its goals, and the original robot is now in a museum. But, as far as I know, neither Brooks nor Dennett nor anyone else connected with the project has published an account of the failure and what mistaken assumptions underlay their absurd optimism. In a personal communication, Dennett, true to first-step thinking, claimed that "Progress was being made on all the goals, but slower than had been anticipated."
Unlike previous generations of AI researchers with their first-step fallacy, Brooks is even prepared to entertain a Merleau-Ponty-like suggestion that
his work might be on the wrong track. He concludes a discussion of his
animats with the insightful comment that
Brooks continues:
Even Jerry Fodor, at the end of his book, The Modularity of Mind, acknowledges
that believers in GOFAI have no response to my critique.
He writes:
CONCLUSION
In spite of the tensions, I was happy in the Humanities Department at MIT. I had good students like Ned Block to teach, good philosophers like Samuel Todes to talk to, great classics to enjoy in Course 21, and AI researchers to debate. True, it was clear I would never teach graduate students since the Philosophy Program was dominated by philosophers who disdained continental philosophy. Still, I did what I could.
NOTES
1. Merleau-Ponty held the chair of philosophy at the Collège de France from 1952 until his death in 1961, making him the youngest person to have been elected to a chair at the Collège.
2. Michel Foucault, "Final Interview," Raritan (Summer 1985): 8. "Le Retour de la morale," interview conducted by Gilles Barbadette, Les Nouvelles (June 28, 1984). The fuller quotation reads as follows: "Heidegger has always been for me the
essential philosopher. ... I still have the notes I took while reading Heidegger - I
have tons of them! - and they are far more important than the ones I took on Hegel
or Marx. My whole philosophical development was determined by my reading of Heidegger."
3. I did write a paper with Piotr Hoffman: "Sartre's Changed Conception of Consciousness: From Lucidity to Opacity," in The Philosophy of Jean-Paul Sartre, Library of Living Philosophers, ed. P. A. Schilpp (Open Court Publishing Company, 1982).
4. Hubert Dreyfus and Sean D. Kelly, All Things Shining: Reading the Classics to Find Meaning in a Secular Age (Free Press, 2011). [A New York Times best seller.]
5. This isn't just my impression. Philip Agre, a Ph.D. student at the A.I. lab at the time,
later wrote:
6. Marvin Minsky as quoted in a 1968 MGM press release for Stanley Kubrick's film 2001: A Space Odyssey.
7. A. Newell and H. A. Simon, "Computer Science as Empirical Inquiry: Symbols and Search," in Mind Design, ed. John Haugeland (Cambridge, MA: MIT Press, 1988).
8. Herbert A. Simon and Allen Newell, "Heuristic Problem Solving: The Next Advance in Operations
Research," Operations Research 6 (January-February 1958): 6.
9. Martin Heidegger, Being and Time, trans. J. Macquarrie and E. Robinson (New York: Harper & Row
Publishers, 1962), 132.
10. Heidegger, Coping, and Cognitive Science: Essays in Honor of Hubert L. Dreyfus, Vol. 2, ed. Mark
Wrathall (Cambridge, MA: The MIT Press, 2000), iii.
11. Hubert Dreyfus, "Alchemy and Artificial Intelligence," The New Yorker, June 11,
1966.
12. In retrospect Papert's attack makes amusing reading, as can be seen on Amazon.com at http://www.amazon.com/The-artificial-intelligence-Hubert-Dreyfus/dp/B0007EKRRK/ref=cm_cr_pr_product_top. There a reviewer comments:
This is a marvelous piece of arcana. You will not stop laughing as "glorified computer technicians" ("computer scientists" ha-ha) try to refute Dreyfus' dead-on predictions. Strong AI is dead and it was stillbirth from the start.
Just for the record, the story about my predicting that computers would never be
any good at chess is a false rumor started by Alvin Toffler. I wrote in my RAND paper
in 1965:
The initial NSS chess program was poor and, in the last five years, remains unimproved. . . . According to Newell, Shaw, and Simon themselves,
evaluating the Los Alamos, the IBM, and the NSS programs: "All three
programs play roughly the same quality of chess (mediocre) with roughly
the same amount of computing time." Still no chess program can play
even amateur chess, and the world championship tournament is only two
years away.
[Hubert L. Dreyfus, "Alchemy and Artificial Intelligence," RAND P-3244, 1965, p. 10.]
Among other things, [Dreyfus] declared, "No chess program can play even amateur chess." In context,
he appeared to be saying that none
ever would.
14. Terry Winograd, "Heidegger and the Design of Computer Systems," talk delivered at Applied Heidegger Conference, Berkeley, CA, September 1989. Cited in What Computers Still Can't Do, Introduction to the MIT Press Edition, xxxi.
15. Your work has aroused a great deal of interest among the members of the Artificial Intelligence Laboratory. Accordingly, I am honored to invite you here to speak on the 10th of March 1986.
Patrick H. Winston
Professor of Computer Science
Director, Artificial Intelligence Laboratory
16. My talk was well attended but not everyone was pleased. Agre reported to me that after it was announced that I was going to give a talk, Minsky came into his office and shouted at him for ten minutes or so for inviting me.
17. Hubert Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason (MIT Press, 1992), 265-66.
18. Rodney A. Brooks, "Intelligence without Representation," in Mind Design, ed. John Haugeland (The MIT Press, 1988), 416. (Brooks's paper was published in 1986.)
John Haugeland explains Brooks's breakthrough using as an example Brooks's
robot, Herbert:
19. Rodney A. Brooks, Flesh and Machines: How Robots Will Change Us (Vintage Books, 2002), 42.
20. Daniel Dennett, "The Practical Requirements for Making a Conscious Robot," Philosophical
Transactions of the Royal Society of London, A, 349 (1994): 133-46.
21. Ibid., 133.
22. Rodney A. Brooks, "Intelligence without Representation," Flesh and Machines: How Robots Will Change Us (Vintage Books, 2002), 168. Another reference gives: R. A. Brooks, "A Robust Layered Control System for a Mobile Robot," IEEE Journal of Robotics and Automation RA-2 (1986): 14-23.
23. In stressing the role of the body, I was influenced by my colleague Samuel Todes, who went beyond Merleau-Ponty in showing how our world-disclosing perceptual experience is structured by the actual structure of our bodies. Heidegger and Merleau-Ponty never tell us what our bodies are actually like and how their structure affects our experience. Todes, however, notes that our body has a front/back and up/down orientation. It moves forward more easily than backward, and can successfully cope only with what is in front of it. He then describes how, in order to orient ourselves and to explore our surrounding world, we have to be balanced within a vertical field that we do not produce, be effectively directed in a circumstantial field (facing one aspect of that field rather than another), and appropriately set to respond to the specific thing we are encountering within that field. For Todes, then, perceptual receptivity is an embodied, normative, skilled accomplishment, in response to our need to orient ourselves in the world (Samuel Todes, Body and World [Cambridge, MA: The MIT Press, 2001]).
24. Rodney A. Brooks, "From Earwigs to Humans," Robotics and Autonomous Systems 20 (1997): 301.
My italics.
25. Jerry A. Fodor, The Modularity of Mind (Bradford/MIT Press, 1983), 128-29.
26. See Walter Freeman, How Brains Make Up Their Minds (Diane Publishing Company, 1999). Freeman cites Merleau-Ponty in this book (pp. 28, 119-121, 124, 129), which is no coincidence since we gave several seminars together.