We may hope that machines will eventually compete with men in all purely intellectual
fields.
– Alan Turing (1950)
Computers are everywhere – on our desks, shelves, and laps; in our offices,
factories, and labs; in grocery stores, gas stations, and hotels; and in our
rockets, trains, and cabs. They are ubiquitous.
Computers are also talked about in all types of places and by all types of
people. These discourses about computers usually invoke common themes
such as their enormous speed of operation, their humongous capacity for
storing data, their effectiveness as communication and coordination media,
their promise for enhancing our mental capabilities, and their potential
for undermining our selfhood, identity, and privacy. Outstanding among
these, however, is the theme of computers as “thinking machines.” It is to
this theme that the field of Artificial Intelligence (AI) owes its emergence.
Computers are at the core of AI. In their current incarnation, almost
all AI systems are built on the versatile silicon chip, the substrate of digital
computers. An overall understanding of electronic computing is, therefore, a
prerequisite for appreciating what AI is all about. To understand AI, however,
one also must understand many other things that have contributed to its
formation and development. These include intellectual history as well as the
social, cultural, and institutional environment in which AI is practiced.
AI is, at some level, a philosophical project (Dennett 1990; Agre 1995;
Smith 1996). Most approaches to AI have their origins in the philosophies
of past and present, and can thus be construed as attempts to address the
questions posed by those philosophies. I argue that among these, the views
of two philosophers, Descartes and Hobbes, have had the most significant
22 Artificial Dreams
influence on AI thinking. For our purposes, these influences are best labeled
as “mentalism” and “logicism” – topics that have been extensively discussed
elsewhere but that I now briefly describe myself.
The World, the Mind, and the World Inside the Mind
[T]here is an important sense in which how the world is makes no difference
to one’s mental states.
– Jerry Fodor (1980), commenting on Descartes
Not only does logic take thinking to be mainly deductive in nature, it also
takes deductions to be formal. In the widest sense of the word, formal means
that deductions are:
And this follows, according to this rule, independently of what the specific
terms such as logic machine, computer, or formal system mean in these sen-
tences. The same rule, applied to (nonsensical) sentences (4) and (5), would
lead to (6):
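The meaning-independence of such formal rules can be illustrated with a small sketch (my own construction, not drawn from the text): a syllogistic rule applied purely by matching the form of its premises, which therefore works identically on sensible and nonsensical terms.

```python
def syllogism(major, minor):
    """Apply 'All A are B; x is an A; therefore x is a B' purely by
    matching the form of the premises; the terms' meanings play no role."""
    a1, b = major   # e.g., "All computers are formal systems"
    x, a2 = minor   # e.g., "This machine is a computer"
    if a1 != a2:
        raise ValueError("rule does not apply to these premises")
    return (x, b)   # e.g., "This machine is a formal system"

# The rule behaves the same on meaningful and on nonsensical terms:
print(syllogism(("computer", "formal system"), ("this machine", "computer")))
print(syllogism(("borogove", "mimsy thing"), ("the tove", "borogove")))
```

Nothing in the function inspects what "computer" or "borogove" denotes; the deduction is carried entirely by the shape of the symbols, which is the sense of "formal" at issue here.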
This threefold scheme reveals a pattern that seems to have set a precedent
for the later practice of AI – namely, the stages of observing and decompos-
ing some aspect of human behavior into small, functional steps, of throwing
away spurious aspects and maintaining only those that are deemed rele-
vant, and of constructing an abstract model that can be implemented on an
actual machine. Most important, it explains the prevalent belief among AI
researchers in the idea of explaining by building. We will come across this
standard pattern of activity, which, for ease of reference, I will call the three
As of AI (analysis, abstraction, attainment) again and again throughout the
following chapters.
physical states that cause behaviors that are meaningfully distinct. In this case,
for instance, there is a physical state of your brain that represents your belief
in the boringness of Dullsburg, and another that represents
the excitement of Joyville. These states are not only physically distinct; they
can also cause distinct behaviors, such as interest in one town and aversion to
another. In other words, physically distinct states can give rise to behaviors
that are semantically distinct – that is, behaviors that bear different relations
to the outside world. Since this physically driven and semantically distinct
behavior is roughly what computers also manifest, the argument goes,
cognition is a type of computation.
On the basis of intuitions such as this, computers have acquired a cen-
tral role in cognitive theorizing – not merely as tools and instruments for
studying human behavior, but also, and more importantly, as instantiations
of such behavior. Pylyshyn (1984: 74–75) highlights the role of the computer
implementation of psychological theories in three major respects: the rep-
resentation of cognitive states in terms of symbolic expressions that can be
interpreted as describing aspects of the outside world; the representation of
rules that apply to these symbols; and the mapping of this system of symbols
and rules onto causal physical laws so as to preserve a coherent semantic
relationship between the symbols and the outside world. These principles
found explicit realization in some of the earliest models of AI, such as the
General Problem Solver (Newell and Simon 1961).
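These three principles can be made concrete in a toy sketch (my own construction; the predicates and town names are invented for illustration, not drawn from Pylyshyn): symbolic expressions that can be read as describing the world, plus a rule defined over their form alone.

```python
# A toy symbol system: each fact is a symbolic expression that can be
# interpreted as describing an aspect of the outside world.
facts = {("boring", "Dullsburg"), ("exciting", "Joyville")}

def prefer(facts):
    """A rule that applies to the form of the symbols: derive a
    preference for whatever town is marked 'exciting'."""
    return {("prefers", town) for (pred, town) in facts if pred == "exciting"}

# The derived symbols remain interpretable as claims about the world,
# preserving the semantic relationship Pylyshyn describes.
print(prefer(facts))
```

The rule never consults Dullsburg or Joyville themselves, only the symbols standing for them; yet, if the mapping is set up coherently, its outputs remain true of the world the symbols describe.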
Heuristic-Search Hypothesis
The solutions to problems are represented as symbol structures. A physi-
cal symbol system exercises its intelligence in problem solving by search –
that is, by generating and progressively modifying symbol structures until it
produces a solution structure. (Newell and Simon 1976: 120)
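The hypothesis can be sketched as a generic state-space search (a minimal illustration of my own, not the actual GPS program): symbol structures are generated and progressively modified by operators until one satisfies a goal test.

```python
from collections import deque

def solve(start, is_goal, operators):
    """Breadth-first search: generate and progressively modify symbol
    structures (states) until a solution structure is produced."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for op in operators:
            for nxt in op(state):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, path + [nxt]))
    return None

# Toy problem: transform the object "a" into "abab" using two
# rewrite operators (append "b"; double the string).
ops = [lambda s: [s + "b"] if len(s) < 4 else [],
       lambda s: [s + s] if len(s) < 3 else []]
print(solve("a", lambda s: s == "abab", ops))  # → ['a', 'ab', 'abab']
```

The returned path is the "solution structure": the sequence of intermediate symbol structures by which the start object was transformed into the goal, exactly the objects-and-operators picture described next.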
In this way, Newell and Simon assigned search a central role that has been
accepted in much of AI practice ever since. They themselves actually devised
and built many models on this principle, one of the most influential of which
was the General Problem Solver (GPS). Taking, among other things, chess as
a typical example, GPS dealt exclusively with “objects” and “operators” that
transform them. Proving mathematical theorems, for instance, is described as
the transformation of axioms or theorems (objects) into other theorems using
rules of inference (operators) (Newell, Shaw, and Simon 1960: 258). Similarly,
the behavior of experimental subjects in solving abstract problems (usually
for instance, usually talk about computers as vehicles of progress and social
well-being (the extensive discussion by high officials of the U.S. administra-
tion in the early 1990s of the so-called information superhighway was a promi-
nent example). Businesspeople, Wall Street consultants, and the vendors of
computer technology, on the other hand, typically highlight the impact of
computer-related technologies (e.g., expert systems and “smart” products
of all sorts) on efficiency and productivity. To advance their agenda, all of
these groups usually depend on the claims of the AI community about the
outcomes of their research (of the kind we will study in the following chap-
ters). Although the relation between the research community and the out-
side commentators might often be indirect and mediated, its significance can
hardly be overestimated. In fact, as Edwards (1996: Preface) has suggested,
film and fiction have played a crucial role in dramatizing the goals of AI
and in creating a connection in the popular culture between computational
views of mind and what he calls the “closed world” of Cold War military
strategies in the United States. Journalists, too, have played a major role in
communicating AI claims to the public, partly because of their professional
enthusiasm for provocative topics, but also because AI projects and ideas,
as opposed to more esoteric disciplines such as theoretical physics, can be
explained “in words understandable to anyone” (Crevier 1993: 7).
The Culture of AI
Popular culture and its engagement with AI have contributed much to the
development of the discipline (Bloomfield 1987: 73). The high esteem enjoyed
by AI practitioners as experts in a technical field, the jumping on the AI band-
wagon by researchers from other disciplines, the strong popular following
for work in AI, and the interest of artists, businesspeople, journalists, mil-
itary planners, policy makers, and so on – all have shaped the embedding
circumstances of the AI culture during its first three decades. Here I want to
discuss the culture of AI as manifested in the views and social bonds of its
circle of practitioners. The AI community is embedded in a broad cultural
context that is partly continuous with other cultures and partly distinct from
them. Significant within this broader context are the cultures of engineers and
entrepreneurs, and the most recent example is the so-called hacker culture
of computer enthusiasts.
In the United States, for instance, the number of engineers grew from about
one hundred in 1870 to forty-three hundred in 1914 (Noble 1977). This drastic
increase was accompanied by a number of significant social and institutional
changes – most notably, the separation between the conception and the execu-
tion of work, the automation of production, and the standardization of engi-
neering practices. This quantitative change also was accompanied by what
David Noble calls a “Darwinian view of technological development,” which
assumes that “the flow of creative invention passes progressively through
three successive filters, each of which further guarantees that only the ‘best’
alternatives survive”:
• an objective technical filter that selects the most scientifically sound solu-
tion to a given problem.
• the pecuniary rationality of the hard-nosed businessperson, who selects
the most practical and most economical solutions.
• the self-correcting mechanism of the market that ensures that only the
best innovations survive. (1984: 144–45)
Engineers, who play a major role in at least the first two stages, are usu-
ally among the believers and advocates of this view. As such, the typical
engineer’s self-image is one of “an objective referee” who purports to use the
rigorous tools of mathematics to seek solutions for problems defined by some
authority (Agre and Schuler 1997). This worldview is, therefore, attended by a
strong professional sense of engineering practice as the creation of order from
social chaos, and of engineers as the benefactors of society as a whole. It is
also accompanied by a strong belief in a number of strict binary oppositions:
order versus chaos, reason versus emotion, mathematics versus language,
technology versus society (Noble 1984: 41; Agre and Schuler 1997).
Computer professionals, by and large, share the ideological outlook
of engineers, with an enhanced view of technological development as
“autonomous and uncontrollable when viewed from any given individual’s
perspective – something to be lived by rather than collectively chosen” (Agre
and Schuler 1997). This sense of technological determinism can be explained
by the magnitude, complexity, and relentlessly changing character of the
computer industry, which puts the typical professional under constant pres-
sure to catch up, inducing a strong perception of technology as the dominant
actor (ibid.).
One outcome of this deterministic outlook is what Kling (1980, 1997) calls
“utopian” and “dystopian” views of computer technology. Utopians, accord-
ing to Kling, “enchant us with images of new technologies that offer exciting
possibilities of manipulating large amounts of information rapidly with little
GPS: The Origins of AI 33
they incur. As David Noble (1984) has argued, the ability of scientists and
engineers to theorize, invent, and experiment derives in large part from
the power of their patrons. In the United States, where the government is
the major source of funding for academic research, the Defense Depart-
ment, especially DARPA (the Defense Advanced Research Projects Agency), has
played a decisive role in the development of computer technologies and
AI (Noble 1984: 55–56; Castells 1996: 60; Edwards 1996). Edwards portrays
this relationship as part of a wider historical context, where the strategic
plans of the Defense Department for a global command-and-control system
have converged with the tools, metaphors, and theories of computers and
of AI.
The generous support of the military at many stages during the history of
AI has sometimes assumed mythic proportions. McCorduck summarizes
the philosophy of the RAND Corporation: “Here’s a bag of money, go off
and spend it in the best interests of the Air Force” (1979: 117). This, as Agre
(1997b) has pointed out, has typically been intended to leave a large degree of
latitude in research objectives and methods. Even if this were true in certain
periods, the close tie between the military and AI, far from being pervasive
and steady, was severely reduced at times, mainly because of the frequent
failure of AI to deliver on its promises, about which we will have much to say
in the remainder of this work.
Altogether, the implications of military ties for the overall direction of
development in AI should not be underestimated. In fact, one can discern
a close parallel between the development of military strategies after World
War II and the shift of theoretical focus in AI from the centrally controlled,
modularized symbolic systems of the 1950s through the 1970s, through the
distributed connectionist networks of the 1980s and the 1990s, to the more
recent Internet-based warfare and humanoid robots that are meant to act
and look like human beings. To be sure, the parallels are far more complex
than a unidirectional influence, let alone a simple cause–effect relationship,
but they certainly attest to a confluence of interests, motivations, theories,
technologies, and military strategies that have resonated with each other
in intricate ways. In particular, they should serve as a reminder that the
state, no less than the innovative entrepreneur in the garage, was a major
contributor to the development of computer technology and AI (Castells
1996: 60; Edwards 1996). Based on considerations such as these, Kling and
Iacono (1995) described AI as a computerization movement similar in char-
acter to office automation, personal computing, networking, and so on.
We will examine the implications of this view of AI in more detail in the
Epilogue.
The Fabric of AI
To sum up what we have discussed so far, I would like here to list the above
set of influences and their most common manifestations in AI. I find the
word “thread” appropriate for the situation, based on the metaphor of AI as
a fabric made of various strands of ideas, cultures, and institutions (Latour
1999: 80, 106–07).
then to read those same concepts back off the artifact, as if they were
inherent in it.
• the idea that computer implementation is the best, if not the only, way of
understanding human cognition, and that those aspects of human behav-
ior that cannot be encoded in present-day computers are insignificant.
Professional Threads
• the engineer’s view of the world as disorderly and of the human role as
one of creating order through technological development.
• the entrepreneur’s disposition toward money as the ultimate value and
criterion of success.
• the hacker’s view of computer and technical work as the most valued
human activity.
Institutional Threads
• the pressure of seeking funding resources and its consequences in terms
Colors of Criticism
To put things on equal footing, a similar distanced scrutiny of some of the
common critiques of AI is also needed. While some critiques raise justified
points, others may derive from misunderstandings, prejudices, or failures to
grasp technology in general and AI in particular. Let me thus mention
the most common forms that such criticisms take.
computers as just another technology that does not differ significantly from
technologies of the past: “Computer romanticism is the latest version of
the nineteenth and twentieth-century faith . . . that has always expected to
based on the premise that computers are mere machines, and following a
stereotyped argument – namely, that human intelligence is, or requires,
X (intentions, emotions, social life, biology) and that computers lack X
because they are mere machines, mechanisms, and so on (Searle 1980).