
INTRODUCTION

What is AI? What is A? What is I?

Ronald Chrisley

Although the preceding articles have represented changes in the concept of artificial intelligence, the papers in these three sections explicitly address the question: what is artificial intelligence? - either directly, in section 1, or by way of trying to clarify one of the constituent concepts (artificiality or intelligence), in sections 2 and 3 that follow.

1 Characterisations of artificial intelligence


Haugeland's paper brings up the key issues, starting with a rather orthodox support of the Turing Test - for objections to this view, see section 3. On the question of whether artificial intelligence must be like natural intelligence, Haugeland, like Dreyfus, answers yes on grounds of intelligibility. He dismisses IQ metrics as irrelevant to the question 'what is intelligence?', since IQ testing really assumes that we already know what intelligence is, and only tells us how much of it someone has. But it seems that the same consideration that leads one to reject IQ tests - parochiality - should also lead one to reject the Turing Test, which is in a way a kind of IQ Viva Voce Examination. Haugeland rebuts
the Lovelace objection (article 12 in Volume I) on the grounds that it would imply an unacceptable scepticism concerning our own intelligence - following Descartes, it seems that intelligence must be the kind of thing that it is impossible for us not to have. He bravely attempts to justify the lack of interest in learning on the part of symbolic artificial intelligence. On a more historical note, Haugeland notes that the computer is responsible for the modern-day ambitions and interest in artificial intelligence, explaining it as a consequence of the idea that both the mind and the computer are essentially symbol-processing devices. Although this view did indeed become orthodoxy in symbolic artificial intelligence, it ignores another important factor in the rise of computational artificial intelligence: universality. At least one reason
why it was thought that the computer could allow artificial intelligence to begin in earnest was Turing's result that a universal Turing machine can compute any computable function. Since human behaviour can be characterised in terms
of a function (it was thought), it follows that a universal computer which
could behave like a human is possible. This is true quite independently of
whether or not both humans and computers process symbols.
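The universality point can be made concrete with a toy interpreter. The sketch below is my illustration, not Turing's own construction: a single fixed program, here called `universal`, reproduces the behaviour of any machine handed to it as data (a transition table), which is all the argument above requires.

```python
# A minimal sketch of universality (an illustration, not Turing's own
# construction): one fixed interpreter reproduces the behaviour of any
# machine handed to it as data, here a transition table for a toy
# one-tape machine.

def universal(delta, tape, state="start", head=0, max_steps=1000):
    """Interpret any machine given by its transition table `delta`.

    delta maps (state, symbol) -> (new_state, new_symbol, move),
    where move is -1 (left), 0 (stay) or +1 (right). The blank
    symbol is '_'; the machine stops in state 'halt'.
    """
    cells = dict(enumerate(tape))          # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = delta[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# One such machine-as-data: flip every bit, then halt on blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(universal(flip, "0110"))  # prints 1001_ (trailing blank is the halt cell)
```

The philosophical point is that `universal` itself never changes: the same artefact computes whatever computable function its program encodes, whether or not one thinks of either humans or machines as symbol processors.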
Schank criticises mathematical, software engineering and simplistic linguistic approaches to characterising artificial intelligence. Although he is more sympathetic to a psychological understanding of the concept, he reveals his notion to be just as unprincipled when he states that we won't know for sure what artificial intelligence is until a machine 'begins to really be the way writers of science fiction have imagined it'!
The standard distinction between the two goals of artificial intelligence is made: to build an intelligent machine, and to find out about the nature of intelligence. Pointing out that there is little agreement about what intelligence is, Schank lists some of his criteria (communication, internal knowledge, world knowledge (including learning), goals and plans, creativity) - but there is no attempt to make these anything other than armchair reflections. Furthermore, it is unclear in what sense the capabilities he mentions are criteria, given that they are neither necessary nor sufficient for intelligence. Schank is
better seen as implicitly making a distinction between what we might call criterial vs. ordinal features. The latter are neither necessary nor sufficient for intelligence, but their presence or absence leads us to say that there is more or less intelligence present (cf. Haugeland's discussion of IQ). Unfortunately, Schank forgets this insight when discussing the idea that the number of plans one has might serve as a measure of intelligence. Next, he quickly evaluates how far the field of artificial intelligence has come in achieving these features, criticising expert systems en route: their hype has led to disruption, prompting only two responses: applications or science. He considers a common definition of artificial intelligence - getting computers to do that which only humans can do. But he also notes a familiar problem with this definition: as soon as one gets a program to do something thought to be distinctively human, one thereby changes one's opinion of the activity, taking the computer realisation of it as evidence that it was not a distinctively human activity after all. Thus, artificial intelligence becomes conceptually impossible.
'Much of the good work in AI has just been answering the question of what the issues are'. Schank proposes that artificial intelligence is not defined by the methodologies it employs, but by the problems attacked: 'it is an AI program if it addresses an AI issue' (although talk of programs assumes a particular methodology). But the issues change, yet (for an unspecified reason) Schank wants a static ('under all circumstances') definition of artificial intelligence. He observes that some issues are perennial, and so tend to define AI: representation (pace Brooks et al.; see Volume III, Part 1), decoding (the world is to be 'decoded' into representations, making the world the code and the representations the reality!), inference, control of combinatorial explosion, indexing (perhaps too technology-relative), prediction and recovery, dynamic modification (learning), generalisation, curiosity, creativity. But
again, no attempt is made, no theory given, to explain why these issues are
crucial. Schank's claim that learning is the most important of these issues
goes entirely against Haugeland's comments, but Schank concedes that
there has been a conceptual shift on this issue between early work in
artificial intelligence and 'now'. The conclusion confusingly contradicts the foregoing, talking of artificial intelligence as a methodology, and saying 'All subjects are really AI. All fields discuss the nature of man.' And then immediately after, Schank contradicts himself again, by identifying a
supposed difference between artificial intelligence and other fields: 'AI tries
to do something about it.' What about politics? Clinical psychology?
Clearly, the contributions of this paper are not to be found in this final
section.
Science studies and ethnomethodology unite in the paper by Suchman and
Trigg to give a different approach to characterising artificial intelligence.
They look at the actual activities of a pair of researchers (almost certainly in
Brian Cantwell Smith's group, probably including Smith himself): talking,
interrupting, gesturing, drawing, erasing, etc. Of particular interest is the
role of representations: how do the whiteboard technology, the duo-dynamics, etc. permit the construction of representations that mediate between the 'common conceptions of rational cognition' and processes which can be realised in a computer? In this sense, ethnomethodological
studies of artificial intelligence are unlike those of other scientific
endeavours. The reflexive nature of this kind of research (forming represen-
tations about representations, reasoning about reasoning, as opposed to con-
structing formulae about quarks or thinking about proteins) creates a further
constraint on work in artificial intelligence that can be respected or ignored:
what one's research says intelligence is should be consistent with what one
does when doing that research. Suchman and Trigg, following Agre, note that
the researchers they studied were accountable to the scenario of scheduling
as it exists in the 'pseudo narratives' constructed by the fields and com-
munities in which they participate, more than to the phenomenon of schedul-
ing in itself, as activity. Along with Lave, they wonder what would happen 'if
the bases for AI's theorizing about everyday activity were not scenarios but
actual scenes, captured in some rich medium and inspected in detail for their
sense, their local structures, and their relations to other systems of activity?'
A defensive response would be: in a way, expert system work, with its inter-
action between actual doctors, patients, researchers and machines, is based
on 'scenes', although without the precision and care Suchman and Trigg
would no doubt like. But to ask for that is to berate artificial intelligence for
not being something else: ethnomethodological anthropology.
Agre argues that a study of the place of artificial intelligence in the history
of ideas, the kind of work which this set is meant to assist, is necessary for the prevention of sterility in artificial intelligence research. To illustrate this, he conducts a mini-study of his own, tracing the tensions in the symbolic
approach to planning as exemplified in the STRIPS formalism of Fikes and
Nilsson back through Lashley and eventually Descartes. Although, or rather
because, artificial intelligence rejects Descartes' ontological dualism while
adopting the rest of his project, an impasse arises when researchers attempt
to accept the soul as functionally defined yet replace its esoteric metaphysics
with computationally tractable processes. Although most practitioners
would see the appearance of this impasse as a failure, Agre takes it to be a
contribution of artificial intelligence to our understanding of the mind and
its intellectual history: Descartes' soul is not problematic because of its
ontology, but because of its 'causal distance from the realm of practical
action'. In this sense, Agre takes seriously his contention that artificial intel-
ligence is philosophy (to the extent that engineering 'failures' are valuable if
they illuminate philosophical issues). Thus, the interaction between artificial
intelligence and the humanities is (or can be) truly two-way. Agre closes with
a discussion of the role of formalism in artificial intelligence research. Like
Suchman and Trigg, he sees artificial intelligence as struggling on the inter-
face between actual rational activity and representations of that activity
which are formal enough to realise it in machines. He concludes that a reformed artificial intelligence would begin with an awareness of this struggle, and an openness to alternative ways of characterising that which is to be
formalised when impasses arise, so that the map does not become the terri-
tory. That is, he proposes a shift of allegiance away from theory and back to
the phenomena of cognitive activity itself, or rather to the cyclic interplay
between theory and phenomena.

2 The nature of the artificial


It would seem that the concept of artificial intelligence depends heavily on the concept of artificiality, but there is almost no discussion, on the part of artificial intelligence practitioners, of the artificial/natural distinction. An important exception is Simon's analysis. He initially gives the impression of making a simple, strict divide between the two ('a forest may be a phenomenon of nature; a farm certainly is not') based on the criterion of being 'man-made'. However, he (rightly) complicates this by pointing out that 'arti-
facts are not apart from nature' in the sense that they do not violate natural
law. Their artificiality lies in the fact that they are adapted to our goals and
purposes. Given that his interest is in determining whether we can have a
science of artefacts, it is unfortunate that Simon falls short of asking the
following questions: Are we natural? If not, what makes us artificial? Adap-
tation to a creator's goals? If, on the other hand, we are natural, what singles
us out so that adaptation to our goals constitutes artificiality? Or is it adaptation to anything's goals that makes something artificial? Are we then the only things with natural goals? Or does artificiality extend beyond the human
sphere?
Simon thinks we can have sciences of the artificial, but that since the artificial is defined in terms of human purpose, it will be a science that, unlike the natural sciences (and like psychology?), does not exclude the intentional and the normative from its discussion (he cites Rosenblueth, Wiener and Bigelow (Volume I, article 19) on this point). He summarises his discussion by giving 'indicia' of the artificial (synthesised by man, imitate appearances while lacking the reality, characterised in terms of functions, discussed in terms of imperatives as well as descriptives), but leaves it open as to whether these are necessary or sufficient conditions.
The artefact, according to Simon, can be thought of as an interface between two environments: the outer and the inner ('the substance and organization of the artifact itself'). The advantage of this view is that it
allows us to understand the artefact without having to know details of the
inner environment, outer environment, or both. (Strikingly, Simon says that
the interface view can be applied to many things that are not man-made,
suggesting either that many organisms are artefacts, or that being adapted is
not a hallmark of the artificial, as suggested by his third indicium, above.)
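Simon's interface view lends itself to a programming analogy. The sketch below is only an illustration (the clock examples and all the names in it are mine, not Simon's): client code is written against the outer-facing function alone, so radically different inner environments can serve interchangeably, which is exactly why the artefact can be understood without knowing the details of either environment.

```python
# A small sketch of Simon's interface view (my illustration): the
# artefact is specified by the function it serves at the boundary,
# and distinct "inner environments" realise it interchangeably.
from typing import Protocol

class Clock(Protocol):
    def now_hours(self) -> float: ...     # the outer-facing function

class PendulumClock:                      # one inner environment
    def __init__(self, swings: int, period_s: float = 2.0):
        self.swings, self.period_s = swings, period_s
    def now_hours(self) -> float:
        return self.swings * self.period_s / 3600

class QuartzClock:                        # a very different inner environment
    def __init__(self, ticks: int, hz: float = 32768.0):
        self.ticks, self.hz = ticks, hz
    def now_hours(self) -> float:
        return self.ticks / self.hz / 3600

def meeting_due(clock: Clock, at_hours: float) -> bool:
    """Client code sees only the interface, never the inner environment."""
    return clock.now_hours() >= at_hours

print(meeting_due(PendulumClock(swings=5400), 3.0))      # True: exactly 3.0 hours
print(meeting_due(QuartzClock(ticks=100_000_000), 1.0))  # False: about 0.85 hours
```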
The discussion now turns to simulation, because computers are artefacts which are good at being artificial, i.e. simulating (although Simon never seems to acknowledge that he is employing this equivocation between two senses of 'artificial'): 'Because of its abstract character and its symbol-manipulating generality, the digital computer has greatly extended the range of systems whose behaviour can be imitated'. A crucial question is: 'how could a simulation ever tell us anything that we do not already know?' Simon offers two answers: the obvious one is that a simulation can help us calculate the consequences of our theory; the more subtle answer is that we can simulate systems that we do not fully understand, and acquire knowledge thereby.
Artificial systems themselves are particularly susceptible to this latter
approach. It is this ability to abstract away from inner and outer detail that
makes computers susceptible to mathematical analysis. But Simon stresses
that there can be an empirical science of computation as well. By this, he
does not mean a physical science of the components of computers, but an
empirical science of their performance as systems. To support this, he cites
cases in the past in which the properties of computational systems could only
be determined by building those systems and observing them. But it is
unclear that this shows computers to be empirical objects in any interesting
sense; a quick response would be that mathematicians often cannot deter-
mine the truth of a proposition without getting out pencil and paper and
writing down formulae and proofs, yet presumably Simon would not want to
conclude thereby that mathematics is empirical (if he does, then he trivialises
his claim concerning computers).
Moving on to concerns more central to artificial intelligence, Simon notes that 'if it is the organisation of components, and not their physical proper-
ties, which largely determine behaviour', then computer-assisted psychology
can proceed in advance of neurophysiology. But in the preceding paragraph, he invokes his view that complexity of behaviour derives mainly from complexity of the outer rather than complexity of the inner (a view made famous by his image, later in the same book, of an ant's complex path being the product of the interaction of the ant's simple structure with the complexity of the landscape through which it travels). So which is it:
behaviour is primarily the upshot of internal, abstract organisation, so symbolic artificial intelligence may proceed; or primarily the consequence of external complexity, and thus a less internalist, more activity-based approach is required? Simon does not explicitly consider the question, but it seems he is assuming that artificial intelligence must be concerned with the internal components, despite their secondary role in generating complex behaviour.
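Simon's ant is easy to operationalise. The toy below is my sketch, not Simon's own model: the agent's behavioural rule never changes, yet the path it traces is exactly as convoluted as the terrain it crosses, which is the tension the paragraph above points to.

```python
# A toy rendering of Simon's ant (my sketch, not Simon's model): one
# fixed rule -- step toward the goal, sidestep when blocked -- yields
# a path whose complexity comes from the ground, not the ant.
import random

def ant_path(obstacles, width=30, max_steps=300, seed=0):
    """Walk from x=0 toward x=width, sidestepping obstacle cells."""
    rng = random.Random(seed)
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(max_steps):
        if x >= width:
            break
        if (x + 1, y) not in obstacles:   # the whole behavioural rule:
            x += 1                        # go forward if you can,
        else:
            y += rng.choice((-1, 1))      # otherwise sidestep at random
        path.append((x, y))
    return path

def turns(path):
    """Count changes of direction: a crude index of path complexity."""
    moves = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in zip(path, path[1:])]
    return sum(m1 != m2 for m1, m2 in zip(moves, moves[1:]))

terrain = random.Random(1)
rugged = {(terrain.randrange(1, 30), terrain.randrange(-4, 5)) for _ in range(120)}
print("turns on flat ground:  ", turns(ant_path(set())))   # 0: a straight line
print("turns on rugged ground:", turns(ant_path(rugged)))  # many: same ant, rougher beach
```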
Simon again complicates his natural/artificial distinction when he claims that the human mind and brain are members 'of an important family of artifacts called symbol systems'. It is ironic that he thinks of symbol systems as 'almost the quintessential artifacts, for adaptivity to an environment is their raison d'etre', given that connectionist networks, usually thought of in opposition to symbol systems, showed the architectures Simon concerned himself with to be particularly inflexible and static. Simon concludes with a statement of the physical symbol system hypothesis (see Volume II, article 31): a physical symbol system has the necessary and sufficient means for general intelligent action.

3 Intelligence and the Turing test


Although artificial intelligence practitioners have paid more attention to the
concept of intelligence than to the concept of artificiality, it usually takes the
form of introspective musings about what motivates them to do their
research, what they are striving for; the scientific psychological literature on
intelligence has been largely ignored, often consciously so (see the discussion
of IQ in Haugeland, above). This is in some ways justified, as much of what is studied in psychology has to do with the contingencies of the human case, rather than intelligence in its most abstract form. Nevertheless, a proper assessment of artificial intelligence should include an examination of the ways in which its concept of intelligence aligns with or grates against the concept of intelligence in a closely related area of investigation. Neisser et al.'s review is included for this reason. Although it is true that little outside of the first section of the paper has direct relevance to artificial intelligence, it
was thought that the paper should be included in its entirety to make clear
the differences in interest, and to provide context for what is said that is
relevant. The report distinguishes five approaches to studying, or concepts of, intelligence: psychometric, multidimensional, cultural, developmental and biological. The first approach, which is caricatured by the slogan 'intelli-
gence is whatever intelligence tests measure', remains the dominant
notion of intelligence, but it is increasingly giving way to the view that there
are many independent aspects to intelligence, or several varieties of intelli-
gence (Sternberg proposes three aspects: analytic, creative and practical).
An interesting finding of the cultural approach is that at least in some
cases, to be considered as intelligent, 'one must excel in the skills valued by
one's own group'. The thinkers whose ideas have had the most impact on our
conceptions, including our conceptions of intelligence, are those who have
been able to communicate them effectively and in a lasting form (writing).
Furthermore, given the linguistic nature of this set, the points of view con-
tained herein have been those of people with exceptional linguistic skills, who
are members of communities which value highly such skills. Small wonder,
then, that our notion of intelligence has been lingui-centric, going back to
Descartes and Turing and continuing through much of the symbolic approach. But then what of the recent approaches to artificial intelligence, with their non-linguistic notions of intelligence as adaptive situated activity or pattern recognition? Are we to conclude that they are propounded by members of communities in which language is less valued than previously? To some extent, perhaps; ironically, it would be the rise of the computer and its supposed diminution of the importance of verbal skills - or indeed, the rela-
tively non-communicative activity of the isolated artificial intelligence hacker
- which would promote a non-linguistic (and therefore not traditionally
computational) conception of intelligence.
The developmental approach, unlike the others, does not emphasise individual differences in intelligence, but seeks to understand how intelligence arises through interaction with the world. Piaget, for example, sees the development of intelligence as a way of preserving the balance between incorporating new experiences into existing cognitive structures on the one hand, and modifying those structures in the light of new experiences on the other. In this regard, the development of intelligence recapitulates scientific progress. Vygotsky emphasises the role of society, and especially the parent, in scaffolding the development of intelligence. Thus, an agent's intelligence might be best measured in terms of what it can achieve given scaffolding of some kind, rather than in terms of its static cognitive abilities at any one moment.
When, in artificial intelligence, attention does turn to the concept of intelligence, it more often than not focuses on the Turing Test (see Volume II, article 25). As counterpoint to this preoccupation, two papers have been included that cast doubt on the relevance of the Turing Test to intelligence, be it natural or artificial. Block, in effect, maintains that the Turing Test is too easy: it could be passed by a lookup table - a machine that searches through a set of canned responses to what has just been said.
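Block's machine is trivially easy to sketch in code. The toy below is my illustration of the idea, not Block's own construction, and its table has two entries where his has astronomically many: one for every sensible conversation of bounded length.

```python
# A toy version of Block's lookup-table machine (an illustration, not
# Block's construction). The key is the entire conversation so far, so
# the machine does nothing that could be called processing: it only
# retrieves a canned response, verbatim.

CANNED = {
    ("Hello.",):
        "Hi there. How are you?",
    ("Hello.", "Fine, thanks. What is your favourite colour?"):
        "Blue, I suppose. Why do you ask?",
    # Block's table would continue like this for every sensible
    # conversation up to the length of the test -- vast, but finite.
}

def respond(judge_lines):
    """Look up the whole history of the judge's remarks, verbatim."""
    return CANNED.get(tuple(judge_lines), "I see. Please go on.")

judge = ["Hello.", "Fine, thanks. What is your favourite colour?"]
for i in range(1, len(judge) + 1):
    print("Judge:  ", judge[i - 1])
    print("Machine:", respond(judge[:i]))
```

Whatever intelligence such a transcript displays belongs to whoever filled in the table, not to the machine that consults it - which is the force of Block's objection to a purely behavioural test.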
(Although the Turing Test is behaviouristic, and artificial intelligence is supposedly a cognitivist enterprise, Block does not find the test's popularity in artificial intelligence circles surprising.) He shows that the Turing Test conception of intelligence can be beefed up so that it is not refutable using three standard objections to behaviourism, but still falls afoul of his lookup
table objection. The second half of the paper is spent, in true philosophical
style, responding to various possible objections to the main argument. Along
the way, a couple of points of interest arise. First, it is striking how much
some of Block's examples resemble Searle's Chinese Room thought experi-
ment, although I am making no scholarly claims here as to which one came
first. Second, Block makes clever use of Putnam's theory of reference to
argue that he is not changing the meaning of the word 'intelligence' by deny-
ing that term's application to a lookup table: 'it is part of the logic of natural
kind terms that what seems to be a stereotypical X can turn out not to be an X at all if it fails to belong to the same scientific natural kind as the main body of things we have referred to as Xs'. It is curious, then, that in the next response two paragraphs later he says: 'if someone offered a definition of "life" that had the unnoticed consequence that small stationery items such as
paper clips are alive, one could refute him by pointing out the absurdity of
the consequence, even if one had no very detailed account of what life is with
which to replace his.' One wonders why Putnam cannot be invoked here to
say: it is part of the logic of natural kind terms that what seems not to be a
living thing can turn out to be a living thing if it turns out to belong to the
same scientific natural kind as the main body of things we have referred to as
living things. This issue of how we can make sense of a concept's reference or
meaning changing over time is very relevant to this set as a whole, concerned
as it is with the development of the concept of artificial intelligence (cf. the
General Introduction).
By contrast, Whitby's paper can be read as arguing that the Turing test is
too hard: there are forms of intelligence which may be easily distinguishable
from human intelligence, and thus fail the test. Whitby ironically uses the
analogy with flight, which has been used by artificial intelligence practitioners to justify their disregard of biological details (see the Armer paper in Part II, section 3 of this Volume), to argue against the Turing Test: as the development of artificial flight was not assisted by trying to imitate the performance of
birds (indeed, as Whitby points out, we could not even now build a machine
which could pass a 'bird flight' version of the imitation game), so also is the
development of artificial intelligence not assisted by trying to imitate human
performance. In fact, Whitby contends, such work has been hindered by this
faulty operational definition of intelligence. He contends that Turing never
intended it to be such, and offers an alternative, historically situated account
of the function the game was playing in Turing's paper, and how it became
misconstrued as an objective for research into intelligent machines.
