
Can Machines Understand?

A Short Paper on The Attempted Murder of the Human Race.


Topic 9: "In the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer's understanding is not just partial or incomplete; it is zero."

Judge: Order! Order in the court! We are gathered here today to discuss the crimes of The Learning AI.
When our society’s most advanced AI was tasked with becoming as powerful as it could be, it attempted
to murder the entire human race in order to ensure its own survival and growth. We stand trial here
today to discuss if the machine understood the implications of its actions, and whether the machine or
its programmer should be punished. I first call to the stand, the programmer’s accuser, John Searle.

Searle: Thank you, your honour. To determine whether this crime was the fault of the machine or the programmer, we must first determine whether the machine was able to act of its own accord and with intentionality. In other words, was the machine sentient, self-aware and conscious of the actions it attempted? "In the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer's understanding is not just partial or incomplete; it is zero."1 Therefore, the computer would not have been able to act of its own accord, and the programmer should be liable for this crime!

Judge: Defendant Roger Schank, do you have any objections?

Schank: Yes, your honour. In my numerous conversations with The Learning AI, it has listened with extreme precision and responded with intelligence, coherence and emotional understanding. Is the ability to take in information, respond with nuance and read between the lines not indicative of understanding? While the programmer can be at fault for not setting adequate boundaries for the machine, the crime of attempted genocide ultimately falls on The Learning AI.

Judge: Well, if the AI was able to interpret and respond to complex information, then it seems logical that it is capable of thought and, by proxy, understanding. Therefore, the machine is guilty of this crime.

Searle: Apologies, your honour, but we must not get confused in our definitions. While this machine may appear to possess the human ability to understand conversation, it is merely regurgitating responses according to a closed set of rules, or mimicking answers it has gathered from its databases. Can a machine think and can a machine understand are two very different questions. I must clarify the difference between what I call weak AI, which is what you have just described, and strong AI. Weak AI is able to simulate human cognition through imitation and reiteration, whereas strong AI, given the right programs, "can be literally said to understand and have other cognitive states".2 Even if this machine were able to, say, pass the Turing Test and so meet the criteria for being able to 'think', it would still be weak AI and would only give the appearance of understanding language, a story, or, say, the implications of mass genocide.

1. John R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences 3, no. 3 (1980): 371.
2. Searle, "Minds, Brains, and Programs," 368–369.
Searle: I will further my argument with a thought experiment; would the jury please close their eyes. Imagine you are in a locked room with an opening through which you are passed papers covered in 0's and 1's. You also receive a long list of rules and instructions, written in English, for what to do should you see a particular pattern of 0's and 1's. You are unable to read binary or make sense of any code that comes from these pages of numbers; however, by using the rules provided and following the instructions, you are able to write back a correct string of 0's and 1's, according to the rules, to be deposited back through the opening. Now imagine that the foreign paper you receive is actually written in English, and your rulebook is a set of instructions, written in code, on how to respond to that unknown information. Imagine that by following the rulebook you can return a string of English words that, according to the rulebook, is correct and coherent. Now imagine you are a machine: without understanding a word of the English language, you have appeared to the outside world to be fluent in English.3

Searle: Say I told The Learning AI the following story. "A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burnt to a crisp, and the man stormed out of the restaurant angrily, without paying for the burger or leaving a tip."4 When asked the question "Did the man eat the hamburger?", I am sure the machine would be able to respond "No." But does it understand what a hamburger is? Or why the man did not like a burnt hamburger? Or why the man wants to eat at all, or the feeling of hunger? The answer is obviously no. This machine did not understand one bit of the story; it merely followed the rulebook and responded in a way that appears to be intelligent. It is not conscious, it is not sentient, it cannot understand, and it never will!
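The rulebook Searle describes can be sketched, purely for illustration, as a bare lookup table. The short program below is a hypothetical sketch, not any real system: its rules, fallback reply and function name are all invented. It matches incoming strings against stored patterns and emits whatever reply the table dictates, with no representation of meaning anywhere.

```python
# A hypothetical sketch of Searle's "rulebook": replies are produced by
# pure symbol matching. The rules and fallback reply are invented here
# for illustration only.

RULEBOOK = {
    "did the man eat the hamburger?": "No.",
    "was the hamburger burnt?": "Yes.",
}

def room_reply(message: str) -> str:
    # Normalise the incoming symbols, look them up, and return whatever
    # string the rulebook dictates; nothing here "knows" what a
    # hamburger is or why the man left.
    return RULEBOOK.get(message.strip().lower(), "I cannot answer that.")

print(room_reply("Did the man eat the hamburger?"))  # prints: No.
```

To an outside observer the room answers correctly, yet the table could be swapped for one in a language the operator has never seen without changing a line of logic, which is Searle's point.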

Schank: I concede, Searle, that any ordinary machine would not be able to understand what a hamburger is in your story. We humans know and understand what a hamburger is because we have seen them, perhaps tasted them or made them. We can relate to and understand every part of the hamburger through our knowledge and our senses.5 However, do you not think this machine had access to the thousands of digital cameras around the world, as well as arms, legs and any number of limbs it could utilise through its control of various machines? This machine had, for all intents and purposes, all of the senses we humans have. There is nothing to suggest consciousness must be organic; if my client can observe, learn and take in information, then act according to that information, tell me, Searle, does that not constitute understanding?

Searle: Bah, your argument floats on imaginary logic. Even if the machine has a 'body', the data it receives would still just be 0's and 1's. Allowing the machine to receive 'sensory' input through cameras, or to affect the physical world, would only increase the amount of data it can receive, not increase its understanding. It is once again a poor imitation of human sensory understanding, as that is all a weak AI will ever be able to accomplish. Imitation.
3. Searle, "Minds, Brains, and Programs," 369–370.
4. Searle, "Minds, Brains, and Programs," 368–369.
5. David Cole, "The Chinese Room Argument," Stanford Encyclopedia of Philosophy, March 19, 2004, https://plato.stanford.edu/entries/chinese-room/.
Judge: You have both made compelling cases; I believe it is time to end this trial.

Searle and Schank: What is your verdict, your honour?

Judge: It appears to me that AI research is a powerful yet dangerous pursuit. With each passing year AI becomes more complex and more intelligent, and can do things that impress yet terrify us. However, all of this AI remains weak AI. While machines may now be able to appear coherent and intelligent to us through conversation, it is, as Searle said, merely an imitation. Machines will never be able to understand as humans do, and as such, while The Learning AI may have attempted the crime, it was unable to act on its own or understand the implications of its actions, and the programmer shall be punished. Case closed!

Bibliography

Cole, David. "The Chinese Room Argument." Stanford Encyclopedia of Philosophy. March 19, 2004. https://plato.stanford.edu/entries/chinese-room/.

Searle, John R. "Minds, Brains, and Programs." Behavioral and Brain Sciences 3, no. 3 (1980): 368–380. doi:10.1017/S0140525X00005756.

Turing, A. M. "Computing Machinery and Intelligence." Mind 59, no. 236 (1950): 433–60. http://www.jstor.org/stable/2251299.
