Ronald Chrisley
ARTIFICIAL INTELLIGENCE
begin in earnest was Turing's result that a universal Turing machine can compute any computable function. Since human behaviour can be characterised in terms of a function (it was thought), it follows that a universal computer which could behave like a human is possible. This is true quite independently of whether or not both humans and computers process symbols.
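The universality claim admits a concrete miniature. The sketch below is our own illustration (all names invented), not Turing's construction: one fixed program that, handed any machine's transition table and a tape, reproduces that machine's behaviour.

```python
# A single fixed "universal" routine: given any machine's transition table,
# it reproduces that machine's behaviour on a tape. (Illustrative sketch only.)

def run(delta, tape, state="q0", blank="_", halt="halt", max_steps=10_000):
    """Simulate a Turing machine whose rules are the table `delta`.

    delta maps (state, read_symbol) -> (next_state, written_symbol, move),
    where move is -1 (left) or +1 (right).
    """
    cells = dict(enumerate(tape))      # sparse tape; unwritten cells are blank
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = delta[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# One particular machine, expressed purely as data: flip every bit, halt on blank.
flipper = {
    ("q0", "0"): ("q0", "1", +1),
    ("q0", "1"): ("q0", "0", +1),
    ("q0", "_"): ("halt", "_", +1),
}
print(run(flipper, "1011"))  # -> 0100
```

The point the passage reports is visible here: `run` itself never changes; every further machine is just another table handed to it as data.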
Schank criticises mathematical, software engineering and simplistic linguistic approaches to characterising artificial intelligence. Although he is more sympathetic to a psychological understanding of the concept, he reveals his notion to be just as unprincipled when he states that we won't know for sure what artificial intelligence is until a machine 'begins to really be the way writers of science fiction have imagined it'!
The standard distinction between the two goals of artificial intelligence is made: to build an intelligent machine, and to find out about the nature of intelligence. Pointing out that there is little agreement about what intelligence is, Schank lists some of his criteria (communication, internal knowledge, world knowledge (including learning), goals and plans, creativity) - but there is no attempt to make these anything other than armchair reflections. Furthermore, it is unclear in what sense the capabilities he mentions are criteria, given that they are neither necessary nor sufficient for intelligence. Schank is
better seen as implicitly making a distinction between what we might call
criterial vs. ordinal features. The latter are neither necessary nor sufficient for intelligence, but their presence or absence leads us to say that there is more or less intelligence present (cf. Haugeland's discussion of IQ). Unfortunately, Schank forgets this insight when discussing the idea that the number of plans one has might serve as a measure of intelligence. Next, he quickly evaluates
how far the field of artificial intelligence has come in achieving these features,
criticising expert systems en route: their hype has led to disruption, prompting only two responses: applications or science. He considers a common definition of artificial intelligence - getting computers to do that which only
humans can do. But he also notes a familiar problem with this definition: as
soon as one gets a program to do something thought to be distinctively
human, one thereby changes one's opinion of the activity, taking the com-
puter realisation of it as evidence that it was not a distinctively human activ-
ity after all. Thus, artificial intelligence becomes conceptually impossible.
'Much of the good work in AI has just been answering the question of what the issues are'. Schank proposes that artificial intelligence is not defined by the methodologies it employs, but by the problems attacked: 'it is an AI program if it addresses an AI issue' (although talk of programs assumes a particular methodology). But the issues change, yet (for an unspecified reason) Schank wants a static ('under all circumstances') definition of artificial intelligence. He observes that some issues are perennial, and so tend to define AI: representation (pace Brooks et al.; see Volume III, Part 1), decoding (the world is to be "decoded" into representations, making the world the code and the representations the reality!), inference, control of combinatorial explosion
things with natural goals? Or does artificiality extend beyond the human
sphere?
Simon thinks we can have sciences of the artificial, but that since the artificial is defined in terms of human purpose, it will be a science that, unlike the natural sciences (and like psychology?), does not exclude the intentional and the normative from its discussion (he cites Rosenblueth, Wiener and Bigelow (Volume I, article 19) on this point). He summarises his discussion by giving 'indicia' of the artificial (synthesised by man, imitate appearances while lacking the reality, characterised in terms of functions, discussed in terms of imperatives as well as descriptives), but leaves it open as to whether
these are necessary or sufficient conditions.
The artefact, according to Simon, can be thought of as an interface between two environments: the outer and the inner ('the substance and organization of the artifact itself'). The advantage of this view is that it
allows us to understand the artefact without having to know details of the
inner environment, outer environment, or both. (Strikingly, Simon says that
the interface view can be applied to many things that are not man-made,
suggesting either that many organisms are artefacts, or that being adapted is
not a hallmark of the artificial, as suggested by his third indicium, above.)
The discussion now turns to simulation, because computers are artefacts which are good at being artificial, i.e. simulating (although Simon never seems to acknowledge that he is employing this equivocation between two senses of 'artificial'): 'Because of its abstract character and its symbol-manipulating generality, the digital computer has greatly extended the range of systems whose behaviour can be imitated'. A crucial question is: 'how could a simulation ever tell us anything that we do not already know?' Simon
offers two answers: the obvious one is that a simulation can help us calculate
the consequences of our theory; the more subtle answer is that we can simulate systems that we do not fully understand, and acquire knowledge thereby.
Artificial systems themselves are particularly susceptible to this latter
approach. It is this ability to abstract away from inner and outer detail that
makes computers susceptible to mathematical analysis. But Simon stresses
that there can be an empirical science of computation as well. By this, he does not mean a physical science of the components of computers, but an
empirical science of their performance as systems. To support this, he cites
cases in the past in which the properties of computational systems could only
be determined by building those systems and observing them. But it is
unclear that this shows computers to be empirical objects in any interesting
sense; a quick response would be that mathematicians often cannot deter-
mine the truth of a proposition without getting out pencil and paper and
writing down formulae and proofs, yet presumably Simon would not want to
conclude thereby that mathematics is empirical (if he does, then he trivialises
his claim concerning computers).
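Simon's claim about an empirical science of performance can be given a small, hedged illustration (the program and its growth figures are our own, not Simon's): we build a system - here an instrumented insertion sort - and discover its behaviour by watching it run.

```python
# Discovering a program's properties by observation: we instrument a sort
# with a comparison counter and watch how the count grows with input size.
# (Our own toy illustration of Simon's point, not his example.)

def insertion_sort(items):
    items = list(items)
    comparisons = 0
    for i in range(1, len(items)):
        j = i
        while j > 0:
            comparisons += 1                      # one observed comparison
            if items[j - 1] <= items[j]:
                break                             # element already in place
            items[j - 1], items[j] = items[j], items[j - 1]
            j -= 1
    return items, comparisons

for n in (10, 20, 40):
    _, best = insertion_sort(range(n))            # already-sorted input
    _, worst = insertion_sort(range(n, 0, -1))    # reversed input
    print(n, best, worst)   # worst grows roughly as n^2, best roughly as n
```

The quadratic worst case is, of course, provable without running anything; the observation merely mimics, in miniature, the cases Simon cites in which building and running the system came first.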
Moving on to concerns more central to artificial intelligence, Simon notes
that 'if it is the organisation of components, and not their physical proper-
ties, which largely determine behaviour', then computer-assisted psychology
can proceed in advance of neurophysiology. But in the preceding paragraph, he invokes his view that complexity of behaviour derives mainly
from complexity of the outer rather than complexity of the inner (a view
made famous by his image, later in the same book, of an ant's complex
path being the product of the interaction of the ant's simple structure with
the complexity of the landscape through which it travels). So which is it:
behaviour is primarily the upshot of internal, abstract organisation, so
symbolic artificial intelligence may proceed; or primarily the consequence of external complexity, and thus a less internalist, more activity-based
approach is required? Simon does not explicitly consider the question, but
it seems he is assuming that artificial intelligence must be concerned with
the internal components, despite their secondary role in generating complex
behaviour.
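Simon's ant rewards a toy rendering. In the sketch below - entirely our own, with an invented rule, not Simon's code - the agent's law is trivial: head east, sidestep north when blocked, drift back south when clear. Its path is flat on an empty beach and wiggly on a cluttered one; the wiggles belong to the terrain, not the agent.

```python
# A Simon-style ant: a fixed, simple rule whose path is complex only
# because the terrain is. (Our own toy; the rule is invented for illustration.)

def walk(obstacles, width, home_row=0):
    """Trace the row occupied at each column while crossing the grid eastward."""
    row, path = home_row, []
    for col in range(width):
        if row > home_row and (row - 1, col) not in obstacles:
            row -= 1                        # clear below: drift back toward home
        while (row, col) in obstacles:
            row += 1                        # blocked: sidestep north
        path.append(row)
    return path

print(walk(set(), 8))                       # flat beach -> [0, 0, 0, 0, 0, 0, 0, 0]
print(walk({(0, 2), (1, 2), (0, 5)}, 8))    # dunes      -> [0, 0, 2, 1, 0, 1, 0, 0]
```

Same agent both times; only the second world makes it look interesting.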
Simon again complicates his natural/artificial distinction when he claims that the human mind and brain are members 'of an important family of artifacts called symbol systems'. It is ironic that he thinks of symbol systems as 'almost the quintessential artifacts, for adaptivity to an environment is their raison d'etre', given that connectionist networks, usually thought of in opposition to symbol systems, showed the architectures Simon concerned himself with to be particularly inflexible and static. Simon concludes with a statement of the physical symbol system hypothesis (see Volume II, article 31): a physical symbol system has the necessary and sufficient means for general intelligent action.
and biological. The first approach, which is caricatured by the slogan 'intelli-
gence is whatever intelligence tests measure' , remains the most dominant
notion of intelligence, but it is increasingly giving way to the view that there
are many independent aspects to intelligence, or several varieties of intelli-
gence (Sternberg proposes three aspects: analytic, creative and practical).
An interesting finding of the cultural approach is that at least in some
cases, to be considered as intelligent, 'one must excel in the skills valued by
one's own group'. The thinkers whose ideas have had the most impact on our
conceptions, including our conceptions of intelligence, are those who have
been able to communicate them effectively and in a lasting form (writing).
Furthermore, given the linguistic nature of this set, the points of view con-
tained herein have been those of people with exceptional linguistic skills, who
are members of communities which value highly such skills. Small wonder,
then, that our notion of intelligence has been lingui-centric, going back to
Descartes and Turing, and continuing through much of the symbolic approach.
But then what of the recent approaches to artificial intelligence, with their non-linguistic notions of intelligence as adaptive situated activity or pattern recognition? Are we to conclude that they are propounded by members of communities in which language is less valued than previously? To some extent, perhaps; ironically, it would be the rise of the computer and its supposed diminution of the importance of verbal skills - or indeed, the relatively non-communicative activity of the isolated artificial intelligence hacker
- which would promote a non-linguistic (and therefore not traditionally
computational) conception of intelligence.
The developmental approach, unlike the others, does not emphasise individual differences in intelligence, but seeks to understand how intelligence arises through interaction with the world. Piaget, for example, sees the development of intelligence as a way of preserving the balance between incorporating new experiences into existing cognitive structures on the one hand, and modifying those structures in the light of new experiences on the other. In this regard, the development of intelligence recapitulates scientific progress. Vygotsky emphasises the role of society, and especially the parent, in scaffolding the development of intelligence. Thus, an agent's intelligence might be best measured in terms of what it can achieve given scaffolding of some kind, rather than in terms of its static cognitive abilities at any one moment.
When, in artificial intelligence, attention does turn to the concept of intelligence, it more often than not focuses on the Turing Test (see Volume II, article 25). As counterpoint to this preoccupation, two papers have been included that cast doubt on the relevance of the Turing Test to intelligence, be it natural or artificial. Block, in effect, maintains that the Turing Test is too easy: it could be passed by a lookup table - a machine that searches through a set of canned responses to what has just been said.
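Block's machine is easy to caricature in code. The toy below is our own miniature: where Block's machine would tabulate a sensible continuation for every possible conversation history (a finite though astronomical table), ours stores three.

```python
# A conversation machine with no understanding: every reply is looked up
# against the entire conversation so far. (Our own three-entry miniature of
# Block's construction, which would tabulate every possible history.)

CANNED = {
    (): "Hello! What shall we talk about?",
    ("Hello! What shall we talk about?",
     "Do you like chess?"): "Very much. White or black?",
    ("Hello! What shall we talk about?",
     "Do you like chess?",
     "Very much. White or black?",
     "White."): "Then I shall defend. Your move.",
}

def reply(history):
    """Return the canned continuation for the whole conversation so far."""
    return CANNED.get(tuple(history), "How interesting. Tell me more.")

history = []
for said in (None, "Do you like chess?", "White."):
    if said is not None:
        history.append(said)            # interlocutor speaks
    answer = reply(history)             # machine merely looks it up
    history.append(answer)
    print(answer)
```

Each exchange looks intelligent from outside, which is Block's point: the test inspects behaviour only, and behaviour is exactly what the table buys.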
(Although the Turing Test is behaviouristic, and artificial intelligence is supposedly a cognitivist enterprise, Block does not find the test's popularity in artificial intelligence circles surprising.) He shows that the Turing Test