The imitation game is played with three people, a man (A), a woman (B),
and an interrogator (C) who may be of either sex. The interrogator stays in a
room apart from the other two. The object of the game for the interrogator is
to determine which of the other two is the man and which is the woman. He
knows them by labels X and Y, and at the end of the game he says either "X
is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put
questions to A and B thus:
C: Will X please tell me the length of his or her hair?
Now suppose X is actually A, then A must answer. It is A's object in the game
to try and cause C to make the wrong identification. The object of the
game for the third player (B) is to help the interrogator. The best strategy for
her is probably to give truthful answers.
We now ask the question, "What will happen when a machine takes the part
of A in this game?" Will the interrogator decide wrongly as often when the
game is played like this as he does when the game is played between a man
and a woman? These questions replace our original, "Can machines think?"
The game (with the player B omitted) is frequently used in practice under the
name of viva voce to discover whether some one really understands
something or has learnt it parrot fashion. Let us listen in to a part of such a
viva voce:
Interrogator: In the first line of your sonnet which reads "Shall I compare thee
to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day"? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Note how Turing discards player B once again. It's clear that the Imitation
Game changes when he needs to make a new point. The purpose of the
game in this situation is not to decide whether player A is a machine, but
to decide what grade it should get in a course. In its original form the game
seems to presuppose no such expertise on the part of the computer, but it
seems as if the contestant population Turing is drawing on is college-educated
people for whom the question "Do you play chess?" has a non-negligible
probability of being answered "Yes."
This brings us to another problem Turing never clarifies. Although he isn't
clear about what the computer is expected to know, he's even less clear
about what it's expected not to know. Consider questions like these: "Where
were you born?" "What's the earliest war you remember?" "How did your
mother and father meet?" "Do you live far from here?" "What made you
decide to take part in a running of the Imitation Game?" "Did you have to
travel far to take part?" "Where are you sitting?" "Did you vote in the last
election for national office?"
The weird thing about these questions is that they require us to equip the
computer with a fake backstory, as if it's an undercover agent. Yet Turing
never mentions having to deal with this now-obvious possibility. Many
contestants, like Veselov and Demchenko, have indeed equipped their
programs with backstories, such as Goostman's claim to be a 13-year-old boy
from Ukraine. Regrettably, the judges rarely probe very deeply into
Goostman's backstory; it's just as flimsy as the rest of the illusion created by
the program's seemingly weird personality.
Sometimes people running Turing Tests try to rule out personal questions,
but it's difficult to see how this can be done. Suppose the judge says, "I'm a
computer. How many computers are taking part in this conversation?" The
machine should either answer "One," or express doubt that the interrogator
is a computer. Perhaps all questions should pass through a censor who
would detect a question requiring the computer to cough up information
about itself, the fictional human being. Training the censors could be
difficult. It's apparently legitimate to ask questions such as, "Are you
interested in football?" or "When you talk of football, which kind do you
mean?" But we would like to rule out, "Did you ever play football for your
school?" or "We're in Kentucky. How can you not mean 'American football'?"
Conversations about general events often get into personal questions, and if
the programs are allowed to ask questions about personal backgrounds,
which they do all the time, they should have to answer them.
Again, Turing's oracular pronouncements on such questions are often hard to
make sense of. In the same radio discussion cited earlier, when asked
whether machines could or would throw tantrums, he said:
Let us fix our attention on one particular digital computer C. Is it true that by
modifying this computer to have an adequate storage, suitably increasing its
speed of action, and providing it with an appropriate programme, C can be
made to play satisfactorily the part of A in the imitation game, the part of B
being taken by a man? (p. 442)
Of course, nowadays we take for granted that if the machine isn't big enough
and fast enough, wait 18 months and there will be a bigger, faster, lighter
one on the market.
III. Objections
Section 6, the most entertaining, consists of various objections to the idea of
machine intelligence, and Turing's replies. But of course the objections,
identified with various real and hypothetical opponents, are rarely concerned
with the refined version of Turing's proposal as quoted just now, because
none of these opponents had heard it before. However, some had heard
precursors of Turing's idea. For example, Geoffrey Jefferson, who is quoted at
the beginning of the Objection from Consciousness (number 4), titled his
Lister Oration of 1949 "The Mind of Mechanical Man" (Jefferson 1949). It is
obviously the work of an acquaintance of the men who built the
Manchester computer that Turing was an early user of. (Jefferson was an
insightful participant in the radio discussion described above (Braithwaite
1952), as was Max Newman, the man who hired Turing.) Jefferson disparaged
machines that engaged in artificially signalling with messages as an easy
contrivance. Turing counters with the hypothetical viva voce involving
sonnets quoted above. His reply to the objection, after some insightful
observations, descends into regrettable flippancy about whether to grant
other beings consciousness as a courtesy; but here our question is what all
this has to do with the Imitation Game, and the answer is, not much.
Objections 1 and 2 are the argument from religion and the argument from
fear of the unknown. The response to neither involves the Imitation Game.
One of the most resilient objections is number 3, the mathematical objection,
based on Turing's own work (and others') on uncomputability. For every
computer program that answers Yes/No questions (drawn from a class
complex enough to include Peano arithmetic), there is a class of questions it
can't answer, and among this class is one equivalent to, "If I asked you this
question, would you answer No?" It can't answer, and hence the correct
answer is "No." This proves a limitation of computer programs that apparently
we don't share, since we can draw the correct conclusion and the program
can't. Turing has several replies to this objection (here and in his earlier
papers on AI, Turing 1947, 1948), but all he says about the Imitation Game is:
"Those who hold to the mathematical argument would, I think, mostly be
willing to accept the imitation game as a basis for discussion" (p. 445). In
fact, objectors such as Roger Penrose, who has defended the mathematical
objection in two books (1989, 1994), have not accepted the imitation game
or any other way of thinking about AI.
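The self-reference in that question is the same diagonal trick behind the halting problem, and it can be made concrete in a few lines. The following toy sketch (all names are my own, purely illustrative) traps any fixed answering program, here represented as an ordinary Python function:

```python
def make_stumper(answerer):
    """Given any program answerer(program, question) -> "Yes"/"No",
    build the question "If I asked you this question, would you
    answer No?" -- a question the answerer must get wrong."""
    def stumper(program):
        # The correct answer is the opposite of whatever the
        # program predicts about its own reply.
        return "No" if answerer(program, stumper) == "Yes" else "Yes"
    return stumper

def yes_man(program, question):
    # A trivial answerer that says "Yes" to everything.
    return "Yes"

stumper = make_stumper(yes_man)
print(yes_man(yes_man, stumper))   # the program's answer: "Yes"
print(stumper(yes_man))            # the correct answer:   "No"
```

Standing outside the program, we compute the correct answer while the program cannot; that asymmetry, which holds for any `answerer` fed to `make_stumper`, not just this trivial one, is exactly what the objection trades on.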
Objection 5 is a list of things machines will never do, such as fall in love or
enjoy strawberries and cream, or make mistakes. Many of these tie into the
argument from consciousness, as Turing points out. But his reply is vague
and desultory, focused on issues like whether computers can make mistakes.
In some senses, no, and in some yes, as every programmer knows.
Objection 6 (Lady Lovelace's) is that "The Analytical Engine has no
pretensions to originate anything. It can do whatever we know how to order
it to perform." It is now a commonplace that one thing we know how to
order a computer to perform is to learn enough for its behavior to change
dramatically, under certain conditions. Unfortunately, Turing seems to have
hoped that it was possible to bootstrap from a few basic rote-learning
strategies into learning to learn faster. Most of the results in the theory of
machine learning tend to be refutations of this kind of idea. But in Turings
defense, almost everyone overestimated the power of learning in those days.
By now Turing has drifted far from the Imitation Game. But he does return to
it in connection with the next objection, number 7, the argument from
continuity in the nervous system: "It is true that a discrete-state machine
must be different from a continuous machine. But if we adhere to the
conditions of the imitation game, the interrogator will not be able to take any
advantage of this difference" (p. 451). Why not? He offers an analogy:
Suppose we wanted to make a digital computer pretend to be a differential
analyzer (a kind of analogue computer). It could use random or pseudorandom
numbers to introduce wobble into the answers it prints out, and no
one would be the wiser.
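Turing's wobble trick is easy to sketch. In this minimal illustration (the function name and noise level are my own inventions), the machine computes its answer exactly and then deliberately smudges it before printing:

```python
import random

def differential_analyzer_style(exact_value, wobble=1e-3):
    """Mimic an analogue device digitally: compute the answer exactly,
    then add a small pseudorandom relative error before reporting it,
    so the output looks like it came from continuous, slightly noisy
    physical components."""
    error = random.uniform(-wobble, wobble) * exact_value
    return exact_value + error

# An exact digital result, reported with analogue-looking wobble:
print(differential_analyzer_style(3.141592653589793))
```

The interrogator sees only the slightly-off printed answers, which is why, in Turing's telling, no one would be the wiser.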
One sees the analogy, but it's a weak one. Those who find electronic brains
to be totally unbrainlike presumably do so because they believe that inside
the brain, among all those trillions of synapses, and billions of glial cells, and
axons, phenomena occur that would be very hard to simulate digitally on the
scale required to achieve, say, creativity. I don't find this objection any more
convincing than Turing did, but I have to admit I have no argument. In spite
of impressive advances in neuroscience, we are as far from answering many
basic questions about how the brain works as we were in 1950.
Besides, what does the differential-analyzer imitation game have to do with
the original Game, exactly? Turing may have thought that randomness was
necessary to avoid falling into repeated behavior patterns, but it now seems
obvious that the reason most people avoid repeating themselves is memory.
For instance, one remembers having been directed to help desk A from help
desk B, so after being sent to desk A again one does not just start all over
from square one, at least not without protest. (Perhaps one way for the
interrogator to unmask the machine is to make demands that will infuriate a
real person, hoping the program will be unnaturally patient.)
The objection (if you're counting, number 8) from informality of behavior is
that people don't follow rules to decide what to do. This objection is based on
a simple equivocation: the sense in which computers follow rules is not the
same as the sense in which people do (when they do), as Turing points out.
The last objection, number 9, is that people may be capable of extrasensory
perception. Turing takes this objection surprisingly seriously, and ends up
recommending figuring out how to build a telepathy-proof room to house
the contenders in the Imitation Game. We pause in wonder, and move on.
That concludes section 6 of the paper. After this comes one more section, a
longish discussion of machine learning, but the Imitation Game, or the Turing
Test as it is now usually called, is not mentioned again.
IV. Conclusion
Given the general fuzziness of Turing's description of the Imitation Game, its
lack of importance in the history of the field, and uncertainty about how
much importance he attached to it, one wonders why it has circulated so
virally for so long. I think there are a couple of reasons. One is that there is
no obvious sufficient condition for us to label a machine as intelligent, or as
capable of thought. Stevan Harnad's (2000) Total Turing Test, satisfied only
by a mechanical person that fools people into thinking it's human (a
Terminator II, in other words, but non-homicidal), is hard to set rules for. For
Turing's Test, we have to decide how savvy the judges are and how long they
get to talk to the machine, and we're done. Plus Harnad's test is almost by
definition sufficient, whereas you can spend an enjoyable evening over a
couple of beers debating whether Turing's version is sufficient, or there's a
way to cheat (McDermott 2014).
But my guess is that the most important reason for the hold the Turing Test
has on the imagination of so many is that Turing died young, under
mysterious and infuriating circumstances, having been persecuted (and
prosecuted) for what was then considered deviant sexual behavior. Only after
his death was the magnitude of his achievements realized. "Computing
Machinery and Intelligence" is not one of his strongest works, but it is one of
the most accessible, and readers groping for its significance fastened onto
the Imitation Game as its one solid contribution. On the basis of this paper,
he was anointed the patron saint of AI and the Turing Test was enshrined as
one of its central ideas. The sad truth is that Turing, only 42 when he died,
could have been one of the founding fathers of AI, but missed the founding
by a few years. His influence on the field when it took off was small, whereas
his influence on computer science in general is incalculable. The Imitation
Game is basically a fun thought experiment and not much more. But it will be
around until AI is seen as having definitely succeeded or failed, so we might
as well enjoy whatever conversations it gets us into.
This article is part of the Critique's exclusive series on the Alan Turing biopic
The Imitation Game.
References
BBC 2014 "Computer AI passes Turing test in world first."
http://www.bbc.com/news/technology-27762088
R.B. Braithwaite, G. Jefferson, Max Newman, and Alan Turing 1952 "Can
Automatic Machines Be Said To Think?" (BBC Radio broadcast.) Also in
(Copeland 2004), pp. 494–506
B. Jack Copeland (ed.) 2004 The Essential Turing: Seminal Writings in
Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, plus
The Secrets of Enigma. Oxford: Clarendon Press
Robert Epstein, Gary Roberts, and Grace Beber 2008 Parsing the Turing Test:
Philosophical and Methodological Issues in the Quest for the Thinking
Computer. Springer
Stevan Harnad 2000 "Minds, machines, and Turing." J. of Logic, Language and
Information 9(4), pp. 425–445
Geoffrey Jefferson 1949 "The mind of mechanical man." Brit. Med. J. 1(4616),
pp. 1105–1110
Drew McDermott 2014 "On the claim that a table-lookup program could pass
the Turing test." Minds and Machines 24(2), pp. 143–188
Roger Penrose 1989 The Emperor's New Mind: Concerning Computers, Minds,
and the Laws of Physics. New York: Oxford University Press
Roger Penrose 1994 Shadows of the Mind: A Search for the Missing Science
of Consciousness. New York: Oxford University Press
Alan Turing 1947 "Lecture to the London Mathematical Society." Typescript in
the King's College Archives titled "Lecture to L.M.S., Feb. 20, 1947." In
(Copeland 2004), pp. 378–394
Alan Turing 1948 "Intelligent machinery." Typescript in King's College
Archives. (Digital facsimile at URL www.turingarchive