
Do Robots Deserve Human Rights?

When the humanoid robot Sophia was granted citizenship in Saudi Arabia—the first
robot to receive citizenship anywhere in the world—many people were outraged. Some
were upset because she now had more rights than human women living in the same
country. Others just thought it was a ridiculous PR stunt.

Sophia’s big news brought forth a lingering question, especially as scientists continue to
develop advanced and human-like AI machines: Should robots be given human
rights?

Discover reached out to experts in artificial intelligence, computer science and human
rights to shed light on this question, which may grow more pressing as these
technologies mature. Please note, some of these emailed responses have been edited for
brevity.

Kerstin Dautenhahn
Professor of artificial intelligence in the School of Computer Science at the University of
Hertfordshire

Robots are machines, more similar to a car or toaster than to a human (or to any other
biological beings). Humans and other living, sentient beings deserve rights, robots
don’t, unless we can make them truly indistinguishable from us. Not only how they look,
but also how they grow up in the world as social beings immersed in culture, perceive
the world, feel, react, remember, learn and think. There is no indication in science that
we will achieve such a state anytime soon—it may never happen due to the inherently
different nature of what robots are (machines) and what we are (sentient, living,
biological creatures).

We might give robots “rights” in the same sense as constructs such as companies have
legal “rights”, but robots should not have the same rights as humans. They are
machines, we program them.

Hussein A. Abbass
Professor at the School of Engineering & IT at the University of New South Wales Canberra

Are robots equivalent to humans? No. Robots are not humans. Even as robots get
smarter, and even if their smartness exceeds humans’ smartness, it does not change the
fact that robots are of a different form from humans. I am not downgrading what robots
are or will be; I am a realist about what they are: technologies to support humanity.

Should robots be given rights? Yes. Humanity has obligations toward our ecosystem and
social system. Robots will be part of both systems. We are morally obliged to protect
them, design them to protect themselves against misuse, and to be morally harmonized
with humanity. There is a whole stack of rights they should be given, here are two: The
right to be protected by our legal and ethical system, and the right to be designed to be
trustworthy; that is, technologically fit-for-purpose and cognitively and socially
compatible (safe, ethically and legally aware, etc.).

Madeline Gannon
Founder and Principal Researcher of ATONATON

Your question is complicated because it’s asking for speculative insights into the future
of human-robot relations. However, it can’t be separated from the realities of today. A
conversation about robot rights in Saudi Arabia is only a distraction from a more
uncomfortable conversation about human rights. This is very much a human problem
and contemporary problem. It’s not a robot problem.

Benjamin Kuipers
Professor of computer science and engineering at the University of Michigan

I don’t believe that there is any plausible case for the “yes” answer, certainly not at the
current time in history. The Saudi Arabian grant of citizenship to a robot is simply a
joke, and not a good one.

Even with the impressive achievements of deep learning systems such as AlphaGo, the
current capabilities of robots and AI systems fall so far below the capabilities of humans
that it is much more appropriate to treat them as manufactured tools.

There is an important difference between humans and robots (and other AIs). A human
being is a unique and irreplaceable individual with a finite lifespan. Robots (and other
AIs) are computational systems, and can be backed up, stored, retrieved, or duplicated,
even into new hardware. A robot is neither unique nor irreplaceable. Even if robots
reach a level of cognitive capability (including self-awareness and consciousness) equal
to humans, or even if technology advances to the point that humans can be backed up,
restored, or duplicated (as in certain Star Trek transporter plots), it is not at all clear
what this means for the “rights” of such “persons”.

We already face, but mostly avoid, questions like these about the rights and
responsibilities of corporations (which are a form of AI). A well-known problem with
corporate “personhood” is that it is used to deflect responsibility for misdeeds from
individual humans to the corporation.

Birgit Schippers
Visiting research fellow in the Senator George J. Mitchell Institute for Global Peace,
Security and Justice at Queen’s University Belfast

At present, I don’t think that robots should be given the same rights as humans. Despite
their ability to emulate, even exceed, many human capacities, robots do not, at least for
now, appear to have the qualities we associate with sentient life.

Of course, rights are not the exclusive preserve of humans; we already grant rights to
corporations and to some nonhuman animals. Given the accelerating deployment of
robots in almost all areas of human life, we urgently need to develop a rights framework
that considers the legal and ethical ramifications of integrating robots into our
workplaces, into the military, police forces, judiciaries, hospitals, care homes, schools
and into our domestic settings. This means that we need to address issues such as
accountability, liability and agency, and that we also pay renewed attention to the
meaning of human rights in the age of intelligent machines.

Ravina Shamdasani
Spokesperson for the United Nations Human Rights Office

My gut answer is that the Universal Declaration says that all human beings are born free
and equal…a robot may be a citizen, but certainly not a human being?

So…

The consensus from these experts is no. Still, they say robots should receive some
rights. But what, exactly, should those rights look like?

One day, Schippers says, we may implement a robotic Bill of Rights that protects robots
against cruelty from humans. That’s something the American Society for the Prevention
of Cruelty to Robots has already conceived.

In time, we could also see robots given a sort of “personhood” similar to that of
corporations. In the United States, corporations are given some of the same rights and
obligations as citizens, such as religious freedom and free speech rights. If a corporation is
given rights similar to humans, it could make sense to do the same for smart machines.
Though people are behind corporations…if AI advances to the point where robots think
independently and for themselves, that throws us into a whole new territory.

Philosophy of artificial intelligence


From Wikipedia, the free encyclopedia
See also: Ethics of artificial intelligence
Artificial intelligence has close connections with philosophy because both share several concepts,
including intelligence, action, consciousness, epistemology, and even free
will.[1] Furthermore, the technology is concerned with the creation of artificial animals or artificial
people (or, at least, artificial creatures), so the discipline is of considerable interest to
philosophers.[2] These factors contributed to the emergence of the philosophy of artificial
intelligence. Some scholars argue that the AI community's dismissal of philosophy is
detrimental.[citation needed]
The philosophy of artificial intelligence attempts to answer such questions as follows:[3]

• Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
• Are human intelligence and machine intelligence the same? Is the human brain essentially a
computer?
• Can a machine have a mind, mental states, and consciousness in the same way that a human
being can? Can it feel how things are?
Questions like these reflect the divergent interests of AI researchers, linguists, cognitive
scientists and philosophers respectively. The scientific answers to these questions depend on the
definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.
Important propositions in the philosophy of AI include:

• Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as
intelligent as a human being.[4]
• The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can be so
precisely described that a machine can be made to simulate it."[5]
• Newell and Simon's physical symbol system hypothesis: "A physical symbol system has the
necessary and sufficient means of general intelligent action."[6]
• Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs
and outputs would thereby have a mind in exactly the same sense human beings have minds."[7]
• Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of
the consequences of general names agreed upon for the 'marking' and 'signifying' of our
thoughts..."[8]

Contents

1 Can a machine display general intelligence?
  1.1 Intelligence
    1.1.1 Turing test
    1.1.2 Intelligent agent definition
  1.2 Arguments that a machine can display general intelligence
    1.2.1 The brain can be simulated
    1.2.2 Human thinking is symbol processing
    1.2.3 Arguments against symbol processing
      1.2.3.1 Gödelian anti-mechanist arguments
      1.2.3.2 Dreyfus: the primacy of implicit skills
2 Can a machine have a mind, consciousness, and mental states?
  2.1 Consciousness, minds, mental states, meaning
  2.2 Arguments that a computer cannot have a mind and mental states
    2.2.1 Searle's Chinese room
    2.2.2 Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and Blockhead
    2.2.3 Responses to the Chinese room
3 Is thinking a kind of computation?
4 Other related questions
  4.1 Can a machine have emotions?
  4.2 Can a machine be self-aware?
  4.3 Can a machine be original or creative?
  4.4 Can a machine be benevolent or hostile?
  4.5 Can a machine have a soul?
5 Views on the role of philosophy
6 Bibliography & Conferences
7 See also
8 Notes
9 References
Can a machine display general intelligence?
Is it possible to create a machine that can solve all the problems humans solve using their
intelligence? This question defines the scope of what machines will be able to do in the future and
guides the direction of AI research. It only concerns the behavior of machines and ignores the issues
of interest to psychologists, cognitive scientists and philosophers; to answer this question, it does not
matter whether a machine is really thinking (as a person thinks) or is just acting like it is thinking.[9]
The basic position of most AI researchers is summed up in this statement, which appeared in the
proposal for the Dartmouth workshop of 1956:

• Every aspect of learning or any other feature of intelligence can be so precisely described that a
machine can be made to simulate it.[5]
Arguments against the basic premise must show that building a working AI system is impossible,
because there is some practical limit to the abilities of computers or that there is some special quality
of the human mind that is necessary for thinking and yet cannot be duplicated by a machine (or by
the methods of current AI research). Arguments in favor of the basic premise must show that such a
system is possible.
The first step to answering the question is to clearly define "intelligence".
Intelligence

The "standard interpretation" of the Turing test.[10]

Turing test
Main article: Turing test
Alan Turing[11] reduced the problem of defining intelligence to a simple question about conversation.
He suggests that: if a machine can answer any question put to it, using the same words that an
ordinary person would, then we may call that machine intelligent. A modern version of his
experimental design would use an online chat room, where one of the participants is a real person
and one of the participants is a computer program. The program passes the test if no one can tell
which of the two participants is human.[4] Turing notes that no one (except philosophers) ever asks
the question "can people think?" He writes "instead of arguing continually over this point, it is usual
to have a polite convention that everyone thinks".[12] Turing's test extends this polite convention to
machines:

• If a machine acts as intelligently as a human being, then it is as intelligent as a human being.


One criticism of the Turing test is that it is explicitly anthropomorphic[citation needed]. If our ultimate goal is
to create machines that are more intelligent than people, why should we insist that our machines
must closely resemble people?[This quote needs a citation] Russell and Norvig write that "aeronautical
engineering texts do not define the goal of their field as 'making machines that fly so exactly like
pigeons that they can fool other pigeons'".[13]
Intelligent agent definition
Simple reflex agent

Recent A.I. research defines intelligence in terms of intelligent agents. An "agent" is something
which perceives and acts in an environment. A "performance measure" defines what counts as
success for the agent.[14]

• If an agent acts so as to maximize the expected value of a performance measure based on past
experience and knowledge then it is intelligent.[15]
Definitions like this one try to capture the essence of intelligence. They have the advantage that,
unlike the Turing test, they do not also test for human traits that we[who?] may not want to consider
intelligent, like the ability to be insulted or the temptation to lie[dubious – discuss]. They have the
disadvantage that they fail to make the commonsense[when defined as?] differentiation between "things that
think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.[16]
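To make this definition concrete, here is a minimal sketch (an illustration of the agent/performance-measure idea only, not code from any of the cited texts; the names ThermostatAgent and performance are invented for the example). It casts a thermostat as a trivial agent: it perceives a temperature, acts to keep it near a setpoint, and can be scored by a performance measure, which is all the definition above requires.

import random

class ThermostatAgent:
    """A trivial reflex agent: percept = current temperature, action = heater on or off."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, temperature):
        # Choose the action expected to keep the temperature near the setpoint.
        return "heat_on" if temperature < self.setpoint else "heat_off"

def performance(history, setpoint):
    """Performance measure: negative mean distance from the setpoint (higher is better)."""
    return -sum(abs(t - setpoint) for t in history) / len(history)

# A toy environment loop: the agent perceives, acts, and the environment responds.
agent = ThermostatAgent(setpoint=20.0)
temperature, history = 15.0, []
for _ in range(50):
    action = agent.act(temperature)
    temperature += 0.5 if action == "heat_on" else -0.3
    temperature += random.uniform(-0.1, 0.1)  # unmodelled disturbance
    history.append(temperature)

print(round(performance(history, 20.0), 2))

By this standard the thermostat counts as (rudimentarily) intelligent, which is exactly the disadvantage the paragraph above points out.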
Arguments that a machine can display general intelligence
The brain can be simulated
Main article: Artificial brain

An MRI scan of a normal adult human brain

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of
physics and chemistry, which we have every reason to suppose it does, then .... we ... ought to be
able to reproduce the behavior of the nervous system with some physical device".[17] This argument,
first introduced as early as 1943[18] and vividly described by Hans Moravec in 1988,[19] is now
associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a
complete brain simulation by the year 2029.[20] A non-real-time simulation of a thalamocortical model
that has the size of the human brain (10^11 neurons) was performed in 2005[21] and it took 50 days to
simulate 1 second of brain dynamics on a cluster of 27 processors.
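As a rough sense of the scale involved (a back-of-the-envelope calculation from the figures just quoted, not a number reported by the cited study), that run was slower than real time by a factor of

\[
\frac{50\ \text{days}}{1\ \text{s}} = \frac{50 \times 86{,}400\ \text{s}}{1\ \text{s}} \approx 4.3 \times 10^{6},
\]

i.e. roughly four million times slower than the brain dynamics being simulated, on 2005-era hardware.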
Few[quantify] disagree that a brain simulation is possible in theory,[citation needed][according to whom?] even critics of AI
such as Hubert Dreyfus and John Searle.[22] However, Searle points out that, in
principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point
leads to the conclusion that any process at all can technically be considered "computation". "What
we wanted to know is what distinguishes the mind from thermostats and livers," he writes.[23] Thus,
merely mimicking the functioning of a brain would in itself be an admission of ignorance regarding
intelligence and the nature of the mind[citation needed].
Human thinking is symbol processing
Main article: Physical symbol system
In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence
of both human and machine intelligence. They wrote:

• A physical symbol system has the necessary and sufficient means of general intelligent action.[6]
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation
(because a symbol system is necessary for intelligence) and that machines can be intelligent
(because a symbol system is sufficient for intelligence).[24] Another version of this position was
described by philosopher Hubert Dreyfus, who called it "the psychological assumption":

• The mind can be viewed as a device operating on bits of information according to formal rules.[25]
A distinction is usually made[by whom?] between the kind of high level symbols that directly correspond
with objects in the world, such as <dog> and <tail>, and the more complex "symbols" that are present
in a machine like a neural network. Early research into AI, called "good old fashioned artificial
intelligence" (GOFAI) by John Haugeland, focused on these kinds of high level symbols.[26]
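As a concrete (and purely illustrative) sketch of what manipulating such high level symbols looks like, the toy program below represents facts and rules as explicit symbolic triples and performs GOFAI-style forward chaining over them; the particular facts, the rule format and the "?x" variable convention are invented for this example, not drawn from any cited system.

# Toy GOFAI-style inference: explicit symbols, explicit rules, forward chaining.
facts = {("fido", "is_a", "dog"), ("rex", "is_a", "dog")}

# Each rule: if the premise holds (with "?x" bound), add the conclusion.
rules = [
    ((("?x", "is_a", "dog"),), ("?x", "has", "tail")),
    ((("?x", "is_a", "dog"),), ("?x", "is_a", "mammal")),
]

def substitute(triple, binding):
    return tuple(binding.get(term, term) for term in triple)

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        # Try binding "?x" to the subject of every known fact.
        for (subject, _, _) in list(facts):
            binding = {"?x": subject}
            if all(substitute(p, binding) in facts for p in premises):
                new_fact = substitute(conclusion, binding)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True

print(("fido", "has", "tail") in facts)  # True, derived purely by symbol manipulation

Whether shuffling symbols like this amounts to thinking is, of course, exactly what the following arguments dispute.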
Arguments against symbol processing
These arguments show that human thinking does not consist (solely) of high level symbol
manipulation. They do not show that artificial intelligence is impossible, only that more than symbol
processing is required.
Gödelian anti-mechanist arguments
Main article: Mechanism (philosophy): Gödelian arguments
In 1931, Kurt Gödel proved with an incompleteness theorem that it is always possible to construct a
"Gödel statement" that a given consistent formal system of logic (such as a high-level symbol
manipulation program) could not prove. Despite being a true statement, the constructed Gödel
statement is unprovable in the given system. (The truth of the constructed Gödel statement is
contingent on the consistency of the given system; applying the same process to a subtly
inconsistent system will appear to succeed, but will actually yield a false "Gödel statement"
instead.)[citation needed] More speculatively, Gödel conjectured that the human mind can correctly
eventually determine the truth or falsity of any well-grounded mathematical statement (including any
possible Gödel statement), and that therefore the human mind's power is not reducible to
a mechanism.[27] Philosopher John Lucas (since 1961) and Roger Penrose (since 1989) have
championed this philosophical anti-mechanist argument.[28] Gödelian anti-mechanist arguments tend
to rely on the innocuous-seeming claim that a system of human mathematicians (or some
idealization of human mathematicians) is both consistent (completely free of error) and believes fully
in its own consistency (and can make all logical inferences that follow from its own consistency,
including belief in its Gödel statement)[citation needed]. This is provably impossible for a Turing
machine[clarification needed] (and, by an informal extension, any known type of mechanical computer) to do;
therefore, the Gödelian concludes that human reasoning is too powerful to be captured in a
machine[dubious – discuss].
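Stated compactly (a standard textbook formulation of the first incompleteness theorem, included here for orientation rather than drawn from this article): for any consistent, effectively axiomatized formal system $F$ strong enough to express elementary arithmetic, one can construct a sentence $G_F$ such that

\[
F \nvdash G_F \quad\text{and}\quad F \nvdash \lnot G_F ,
\]

and if $F$ really is consistent then $G_F$ is true. The anti-mechanist argument trades on the claim that a human mathematician can "see" that $G_F$ is true while the machine corresponding to $F$ cannot prove it; the consensus response described below denies that human reasoners have the guaranteed consistency this requires.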
However, the modern consensus in the scientific and mathematical community is that actual human
reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would
logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the
consistency of H (otherwise H is provably inconsistent); and that Gödel's theorems do not lead to
any valid argument that humans have mathematical reasoning capabilities beyond what a machine
could ever duplicate.[29][30][31] This consensus that Gödelian anti-mechanist arguments are doomed to
failure is laid out strongly in Artificial Intelligence: "any attempt to utilize (Gödel's incompleteness
results) to attack the computationalist thesis is bound to be illegitimate, since these results are quite
consistent with the computationalist thesis."[32]
More pragmatically, Russell and Norvig note that Gödel's argument only applies to what can
theoretically be proved, given an infinite amount of memory and time. In practice, real machines
(including humans) have finite resources and will have difficulty proving many theorems. It is not
necessary to prove everything in order to be intelligent[when defined as?].[33]
Less formally, Douglas Hofstadter, in his Pulitzer prize winning book Gödel, Escher, Bach: An
Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing
an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as
"this statement is false" or "I am lying".[34] But, of course, the Epimenides paradox applies to anything
that makes statements, whether they are machines or humans, even Lucas himself. Consider:

• Lucas can't assert the truth of this statement.[35]


This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to
the same limits that he describes for machines, as are all people, and so Lucas's argument is
pointless.[36]
After concluding that human reasoning is non-computable, Penrose went on to controversially
speculate that some kind of hypothetical non-computable processes involving the collapse
of quantum mechanical states give humans a special advantage over existing computers. Existing
quantum computers are only capable of reducing the complexity of Turing computable tasks and are
still restricted to tasks within the scope of Turing machines.[citation needed][clarification needed] By Penrose and
Lucas's arguments, existing quantum computers are not sufficient[citation needed][clarification needed][why?], so
Penrose seeks for some other process involving new physics, for instance quantum gravity which
might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of
the wave function. These states, he suggested, occur both within neurons and also spanning more
than one neuron.[37] However, other scientists point out that there is no plausible organic mechanism
in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of
quantum decoherence seems too fast to influence neuron firing.[38]
Dreyfus: the primacy of implicit skills
Main article: Dreyfus' critique of artificial intelligence
Hubert Dreyfus argued that human intelligence and expertise depended primarily on implicit skill
rather than explicit symbolic manipulation, and argued that these skills would never be captured in
formal rules.[39]
Dreyfus's argument had been anticipated by Turing in his 1950 paper Computing machinery and
intelligence, where he had classified this as the "argument from the informality of behavior."[40] Turing
argued in response that, just because we do not know the rules that govern a complex behavior, this
does not mean that no such rules exist. He wrote: "we cannot so easily convince ourselves of the
absence of complete laws of behaviour ... The only way we know of for finding such laws is scientific
observation, and we certainly know of no circumstances under which we could say, 'We have
searched enough. There are no such laws.'"[41]
Russell and Norvig point out that, in the years since Dreyfus published his critique, progress has
been made towards discovering the "rules" that govern unconscious
reasoning.[42] The situated movement in robotics research attempts to capture our unconscious skills
at perception and attention.[43] Computational intelligence paradigms, such as neural
nets, evolutionary algorithms and so on are mostly directed at simulated unconscious reasoning and
learning. Statistical approaches to AI can make predictions which approach the accuracy of human
intuitive guesses. Research into commonsense knowledge has focused on reproducing the
"background" or context of knowledge. In fact, AI research in general has moved away from high
level symbol manipulation or "GOFAI", towards new models that are intended to capture more of
our unconscious reasoning[according to whom?]. Historian and AI researcher Daniel Crevier wrote that "time
has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated
them less aggressively, constructive actions they suggested might have been taken much earlier."[44]

Can a machine have a mind, consciousness, and mental states?
This is a philosophical question, related to the problem of other minds and the hard problem of
consciousness. The question revolves around a position defined by John Searle as "strong AI":

• A physical symbol system can have a mind and mental states.[7]


Searle distinguished this position from what he called "weak AI":

• A physical symbol system can act intelligently.[7]


Searle introduced the terms to isolate strong AI from weak AI so he could focus on what he thought
was the more interesting and debatable issue. He argued that even if we assume that we had a
computer program that acted exactly like a human mind, there would still be a difficult philosophical
question that needed to be answered.[7]
Neither of Searle's two positions are of great concern to AI research, since they do not directly
answer the question "can a machine display general intelligence?" (unless it can also be shown that
consciousness is necessary for intelligence). Turing wrote "I do not wish to give the impression that I
think there is no mystery about consciousness… [b]ut I do not think these mysteries necessarily
need to be solved before we can answer the question [of whether machines can
think]."[45] Russell and Norvig agree: "Most AI researchers take the weak AI hypothesis for granted,
and don't care about the strong AI hypothesis."[46]
There are a few researchers who believe that consciousness is an essential element in intelligence,
such as Igor Aleksander, Stan Franklin, Ron Sun, and Pentti Haikonen, although their definition of
"consciousness" strays very close to "intelligence." (See artificial consciousness.)
Before we can answer this question, we must be clear what we mean by "minds", "mental states"
and "consciousness".
Consciousness, minds, mental states, meaning
The words "mind" and "consciousness" are used by different communities in different ways.
Some new age thinkers, for example, use the word "consciousness" to describe something similar
to Bergson's "élan vital": an invisible, energetic fluid that permeates life and especially the
mind. Science fiction writers use the word to describe some essential property that makes us human:
a machine or alien that is "conscious" will be presented as a fully human character, with intelligence,
desires, will, insight, pride and so on. (Science fiction writers also use the words "sentience",
"sapience," "self-awareness" or "ghost" - as in the Ghost in the Shell manga and anime series - to
describe this essential human property). For others[who?], the words "mind" or "consciousness" are
used as a kind of secular synonym for the soul.
For philosophers, neuroscientists and cognitive scientists, the words are used in a way that is both
more precise and more mundane: they refer to the familiar, everyday experience of having a
"thought in your head", like a perception, a dream, an intention or a plan, and to the way
we know something, or mean something or understand something[citation needed]. "It's not hard to give a
commonsense definition of consciousness" observes philosopher John Searle.[47] What is mysterious
and fascinating is not so much what it is but how it is: how does a lump of fatty tissue and electricity
give rise to this (familiar) experience of perceiving, meaning or thinking?
Philosophers call this the hard problem of consciousness. It is the latest version of a classic problem
in the philosophy of mind called the "mind-body problem."[48] A related problem is the problem
of meaning or understanding (which philosophers call "intentionality"): what is the connection
between our thoughts and what we are thinking about (i.e. objects and situations out in the world)? A
third issue is the problem of experience (or "phenomenology"): If two people see the same thing, do
they have the same experience? Or are there things "inside their head" (called "qualia") that can be
different from person to person?[49]
Neurobiologists believe all these problems will be solved as we begin to identify the neural correlates
of consciousness: the actual relationship between the machinery in our heads and its collective
properties, such as the mind, experience and understanding. Some of the harshest critics of artificial
intelligence agree that the brain is just a machine, and that consciousness and intelligence are the
result of physical processes in the brain.[50] The difficult philosophical question is this: can a computer
program, running on a digital machine that shuffles the binary digits of zero and one, duplicate the
ability of the neurons to create minds, with mental states (like understanding or perceiving), and
ultimately, the experience of consciousness?
Arguments that a computer cannot have a mind and mental states
Searle's Chinese room
Main article: Chinese room
John Searle asks us to consider a thought experiment: suppose we have written a computer
program that passes the Turing test and demonstrates "general intelligent action." Suppose,
specifically that the program can converse in fluent Chinese. Write the program on 3x5 cards and
give them to an ordinary person who does not speak Chinese. Lock the person into a room and have
him follow the instructions on the cards. He will copy out Chinese characters and pass them in and
out of the room through a slot. From the outside, it will appear that the Chinese room contains a fully
intelligent person who speaks Chinese. The question is this: is there anyone (or anything) in the
room that understands Chinese? That is, is there anything that has the mental state
of understanding, or which has conscious awareness of what is being discussed in Chinese? The
man is clearly not aware. The room cannot be aware. The cards certainly aren't aware. Searle
concludes that the Chinese room, or any other physical symbol system, cannot have a mind.[51]
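The intuition can be made concrete with a toy sketch (purely illustrative; Searle's argument does not depend on any implementation detail, and the particular cards and replies here are invented): the rule cards are in effect a lookup from input symbols to output symbols, and the person executing them needs no grasp of what any symbol means.

# Toy "Chinese room": the rule-follower maps incoming symbols to outgoing symbols
# by rote lookup, with no access to what the symbols mean.
RULE_CARDS = {
    "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice."
}

def person_in_room(incoming_slip: str) -> str:
    # The operator matches shapes, not meanings; unknown shapes get a stock reply.
    return RULE_CARDS.get(incoming_slip, "请再说一遍。")  # "Please say that again."

print(person_in_room("你好吗？"))

A real Turing-test-passing program would of course be vastly more elaborate than a lookup table, but Searle's point is that adding complexity does not, by itself, add understanding.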
Searle goes on to argue that actual mental states and consciousness require (yet to be described)
"actual physical-chemical properties of actual human brains."[52] He argues there are special "causal
properties" of brains and neurons that gives rise to minds: in his words "brains cause minds."[53]
Related arguments: Leibniz' mill, Davis's telephone exchange, Block's Chinese nation and
Blockhead
Gottfried Leibniz made essentially the same argument as Searle in 1714, using the thought
experiment of expanding the brain until it was the size of a mill.[54] In 1974, Lawrence Davis imagined
duplicating the brain using telephone lines and offices staffed by people, and in 1978 Ned
Block envisioned the entire population of China involved in such a brain simulation. This thought
experiment is called "the Chinese Nation" or "the Chinese Gym".[55] Ned Block also proposed
his Blockhead argument, which is a version of the Chinese room in which the program has been re-
factored into a simple set of rules of the form "see this, do that", removing all mystery from the
program.
Responses to the Chinese room
Responses to the Chinese room emphasize several different points.

• The systems reply and the virtual mind reply:[56] This reply argues that the system, including
the man, the program, the room, and the cards, is what understands Chinese. Searle claims that
the man in the room is the only thing which could possibly "have a mind" or "understand", but
others disagree, arguing that it is possible for there to be two minds in the same physical place,
similar to the way a computer can simultaneously "be" two machines at once: one physical (like
a Macintosh) and one "virtual" (like a word processor).
• Speed, power and complexity replies:[57] Several critics point out that the man in the room
would probably take millions of years to respond to a simple question, and would require "filing
cabinets" of astronomical proportions. This brings the clarity of Searle's intuition into doubt.
• Robot reply:[58] To truly understand, some believe the Chinese Room needs eyes and hands.
Hans Moravec writes: "If we could graft a robot to a reasoning program, we wouldn't need a
person to provide the meaning anymore: it would come from the physical world."[59]
• Brain simulator reply:[60] What if the program simulates the sequence of nerve firings at the
synapses of an actual brain of an actual Chinese speaker? The man in the room would be
simulating an actual brain. This is a variation on the "systems reply" that appears more plausible
because "the system" now clearly operates like a human brain, which strengthens the intuition
that there is something besides the man in the room that could understand Chinese.
• Other minds reply and the epiphenomena reply:[61] Several people have noted that Searle's
argument is just a version of the problem of other minds, applied to machines. Since it is difficult
to decide if people are "actually" thinking, we should not be surprised that it is difficult to answer
the same question about machines.
A related question is whether "consciousness" (as Searle understands it) exists. Searle
argues that the experience of consciousness can't be detected by examining the behavior of
a machine, a human being or any other animal. Daniel Dennett points out that natural
selection cannot preserve a feature of an animal that has no effect on the behavior of the
animal, and thus consciousness (as Searle understands it) can't be produced by natural
selection. Therefore either natural selection did not produce consciousness, or "strong AI" is
correct in that consciousness can be detected by a suitably designed Turing test.

Is thinking a kind of computation?


Main article: computational theory of mind
The computational theory of mind or "computationalism" claims that the relationship between
mind and brain is similar (if not identical) to the relationship between a running program and a
computer. The idea has philosophical roots in Hobbes (who claimed reasoning was "nothing
more than reckoning"), Leibniz (who attempted to create a logical calculus of all human
ideas), Hume (who thought perception could be reduced to "atomic impressions") and
even Kant (who analyzed all experience as controlled by formal rules).[62] The latest version is
associated with philosophers Hilary Putnam and Jerry Fodor.[63]
This question bears on our earlier questions: if the human brain is a kind of computer then
computers can be both intelligent and conscious, answering both the practical and philosophical
questions of AI. In terms of the practical question of AI ("Can a machine display general
intelligence?"), some versions of computationalism make the claim that (as Hobbes wrote):

• Reasoning is nothing but reckoning[8]


In other words, our intelligence derives from a form of calculation, similar to arithmetic. This is
the physical symbol system hypothesis discussed above, and it implies that artificial intelligence
is possible. In terms of the philosophical question of AI ("Can a machine have mind, mental
states and consciousness?"), most versions of computationalism claim that (as Stevan
Harnad characterizes it):
• Mental states are just implementations of (the right) computer programs[64]
This is John Searle's "strong AI" discussed above, and it is the real target of the Chinese
room argument (according to Harnad).[64]

Other related questions


Alan Turing noted that there are many arguments of the form "a machine will never do X", where
X can be many things, such as:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humor, tell right from
wrong, make mistakes, fall in love, enjoy strawberries and cream, make someone fall in love
with it, learn from experience, use words properly, be the subject of its own thought, have as
much diversity of behaviour as a man, do something really new.[65]

Turing argues that these objections are often based on naive assumptions about the versatility
of machines or are "disguised forms of the argument from consciousness". Writing a program
that exhibits one of these behaviors "will not make much of an impression."[65] All of these
arguments are tangential to the basic premise of AI, unless it can be shown that one of these
traits is essential for general intelligence.
Can a machine have emotions?
If "emotions" are defined only in terms of their effect on behavior or on how they function inside
an organism, then emotions can be viewed as a mechanism that an intelligent agent uses to
maximize the utility of its actions. Given this definition of emotion, Hans Moravec believes that
"robots in general will be quite emotional about being nice people".[66] Fear is a source of
urgency. Empathy is a necessary component of good human computer interaction. He says
robots "will try to please you in an apparently selfless manner because it will get a thrill out of
this positive reinforcement. You can interpret this as a kind of love."[66] Daniel Crevier writes
"Moravec's point is that emotions are just devices for channeling behavior in a direction
beneficial to the survival of one's species."[67]
However, emotions can also be defined in terms of their subjective quality, of what it feels like to
have an emotion. The question of whether the machine actually feels an emotion, or whether it
merely acts as if it is feeling an emotion is the philosophical question, "can a machine be
conscious?" in another form.[45]
Can a machine be self-aware?
"Self awareness", as noted above, is sometimes used by science fiction writers as a name for
the essential human property that makes a character fully human. Turing strips away all other
properties of human beings and reduces the question to "can a machine be the subject of its
own thought?" Can it think about itself? Viewed in this way, a program can be written that can
report on its own internal states, such as a debugger.[65] Though arguably self-awareness presumes
rather more capability: a machine that can ascribe meaning not only to its own state but can also pose
questions that have no solid answers, such as the contextual nature of its present existence, how it
compares to past states or plans for the future, the limits and value of its work product, and how its
performance is valued by or compared with others.
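Self-report in Turing's narrow sense is easy to demonstrate (a minimal sketch; the function name report_own_state is invented, and this is emphatically not offered as evidence of self-awareness in the richer sense just described):

import inspect

def report_own_state(x: int) -> None:
    """A function that is, trivially, 'the subject of its own thought':
    it inspects and reports its own name and local variables."""
    y = x * 2
    frame = inspect.currentframe()
    print("I am", frame.f_code.co_name)
    print("My local state is", frame.f_locals)

report_own_state(21)
# Prints the function's own name and locals such as {'x': 21, 'y': 42, ...};
# whether any amount of such reporting adds up to self-awareness is the open question.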
Can a machine be original or creative?
Turing reduces this to the question of whether a machine can "take us by surprise" and argues
that this is obviously true, as any programmer can attest.[68] He notes that, with enough storage
capacity, a computer can behave in an astronomical number of different ways.[69] It must be
possible, even trivial, for a computer that can represent ideas to combine them in new ways.
(Douglas Lenat's Automated Mathematician, as one example, combined ideas to discover new
mathematical truths.) Kaplan and Haenlein suggest that machines can display scientific
creativity, while it seems likely that humans will have the upper hand where artistic creativity is
concerned.[70]
In 2009, scientists at Aberystwyth University in Wales and the U.K.'s University of Cambridge
designed a robot called Adam that they believe to be the first machine to independently come up
with new scientific findings.[71] Also in 2009, researchers at Cornell developed Eureqa, a
computer program that extrapolates formulas to fit the input data, such as finding the laws of
motion from a pendulum's motion.
Can a machine be benevolent or hostile?
Main article: ethics of artificial intelligence
This question (like many others in the philosophy of artificial intelligence) can be presented in
two forms. "Hostility" can be defined in terms of function or behavior, in which case "hostile"
becomes synonymous with "dangerous". Or it can be defined in terms of intent: can a machine
"deliberately" set out to do harm? The latter is the question "can a machine have conscious
states?" (such as intentions) in another form.[45]
The question of whether highly intelligent and completely autonomous machines would be
dangerous has been examined in detail by futurists (such as the Singularity Institute). (The
obvious element of drama has also made the subject popular in science fiction, which has
considered many different possible scenarios where intelligent machines pose a threat to
mankind.)
One issue is that machines may acquire the autonomy and intelligence required to be
dangerous very quickly. Vernor Vinge has suggested that over just a few years, computers will
suddenly become thousands or millions of times more intelligent than humans. He calls this "the
Singularity."[72] He suggests that it may be somewhat or possibly very dangerous for
humans.[73] This is discussed by a philosophy called Singularitarianism.
In 2009, academics and technical experts attended a conference to discuss the potential impact
of robots and computers and the impact of the hypothetical possibility that they could become
self-sufficient and able to make their own decisions. They discussed the possibility and the
extent to which computers and robots might be able to acquire any level of autonomy, and to
what degree they could use such abilities to possibly pose any threat or hazard. They noted that
some machines have acquired various forms of semi-autonomy, including being able to find
power sources on their own and being able to independently choose targets to attack with
weapons. They also noted that some computer viruses can evade elimination and have
achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction
is probably unlikely, but that there were other potential hazards and pitfalls.[72]
Some experts and academics have questioned the use of robots for military combat, especially
when such robots are given some degree of autonomous functions.[74] The US Navy has funded
a report which indicates that as military robots become more complex, there should be greater
attention to implications of their ability to make autonomous decisions.[75][76]
The President of the Association for the Advancement of Artificial Intelligence has commissioned
a study to look at this issue.[77] They point to programs like the Language Acquisition
Device which can emulate human interaction.
Some have suggested a need to build "Friendly AI", meaning that the advances which are
already occurring with AI should also include an effort to make AI intrinsically friendly and
humane.[78]
Can a machine have a soul?
Finally, those who believe in the existence of a soul may argue that "Thinking is a function of
man's immortal soul." Alan Turing called this "the theological objection". He writes:
In attempting to construct such machines we should not be irreverently usurping His power of
creating souls, any more than we are in the procreation of children: rather we are, in either case,
instruments of His will providing mansions for the souls that He creates.[79]

Views on the role of philosophy


Some scholars argue that the AI community's dismissal of philosophy is detrimental. In
the Stanford Encyclopedia of Philosophy, some philosophers argue that the role of philosophy in
AI is underappreciated.[2] Physicist David Deutsch argues that without an understanding of
philosophy or its concepts, AI development would suffer from a lack of progress.[80]

Bibliography & Conferences


The main bibliography on the subject, with several sub-sections, is on PhilPapers.
The main conference series on the issue is "Philosophy and Theory of AI" (PT-AI), run
by Vincent C. Müller.
Intelligence is not the same as sentience (the ability to perceive or
feel things), consciousness (awareness of one’s body and environment),
or self-awareness (recognition of that consciousness).

Can consciousness emerge in a machine?

But not everyone agrees that human rights should be extended
to non-humans—even if they exhibit capacities like emotions and
self-reflexive behaviors. Some thinkers argue that only humans should
be allowed to participate in the social contract, and that the world
can be properly arranged into Homo sapiens and everything else—
whether that “everything else” is your gaming console, refrigerator,
pet dog, or companion robot.

American lawyer and author Wesley J. Smith, a Senior Fellow at
the Discovery Institute’s Center on Human Exceptionalism, says we
haven’t yet attained universal human rights, and that it’s grossly
premature to start worrying about future robot rights.

“No machine should ever be considered a rights bearer,” Smith
told Gizmodo. “Even the most sophisticated machine is just a
machine. It is not a living being. It is not an organism. It would be only
the sum of its programming, whether done by a human, another
computer, or if it becomes self-programming.”

Smith believes that only humans and human enterprises should
be considered persons.
Philosophy Now: a magazine of ideas

Could a Robot be Conscious?

Brian King says only if some specific conditions are met.

Will robots always be just machines with nothing going on
inside, or could they become conscious things with an inner life? If
they developed some kind of inner world, it would seem to be like
killing them if we scrapped them. Disposal of our machines would
become a moral issue.

There are at least three connected major aspects of
consciousness to be considered when we ask whether a robot could
be conscious. First, could it be self-conscious (as in ‘self-aware’)?
Second, could it have emotions and feelings? And third, could it
think consciously – that is, have insight and understanding in its
arguments and thoughts? The question is therefore not so much
whether robots could simulate human behaviour, which we know
they can do to increasing degrees, but whether they could actually
experience things, as humans do. This of course leads to a big
problem – how would we ever know?

In the Turing Test, a machine hidden from view is asked
questions by a human, and if that person thinks the answers indicate
he’s talking to another person, then the conclusion is that the
machine thinks. But that is an ‘imitation game’, as seminal computer
scientist Alan Turing (1912-1954) himself called it, and it does not
show that a machine has self-awareness. Clearly there is a
difference between programming something to give output like a
human, and being conscious of what is being computed.

We ascribe conscious behaviour to other humans not because
we have access to their consciousness (we don’t), but because
other people are analogous to us. Not only do they act and speak
like us; importantly, they are also made of the same kind of stuff. And we
have an idea of what we mean by consciousness by considering our
own; we also think we have a rough idea what it’s like for some
animals to be conscious; but ascribing consciousness to a robot that
could act like a human would be difficult in the sense that we would
have no clear idea what exactly it was that we were ascribing to it.
In other words, ascribing it consciousness would be entirely
guesswork based on anthropomorphism.

Much of what I’ll argue in the following is based (loosely) on the
work of Antonio Damasio (b.1944).

Robot I © Steve Lillie 2018 www.stevelillie.biz

The Stuff That Electric Dreams Are Made On

There are three ways a robot could be made, and these
differences may have a bearing on whether it could be conscious:

(1) A possibly conscious robot could be made from artificial
materials, either by copying human brain and body functions or by
inventing new ones.

The argument that this activity could lead to conscious robots is
functionalist: this view says that it doesn’t matter what the material is,
it’s what the material does that counts. Consider a valve: a valve
can be made of plastic or metal or any hard material, as long as it
performs the proper function – say, controlling the flow of liquid
through a tube by blocking and unblocking its pathway. Similarly,
the functionalists say, biological, living material obviously can
produce consciousness, but perhaps other materials could have the
same result. They argue that a silicon-based machine could, in
principle, have the same sort of mental life that a carbon-based
human being has, provided its systems carried out the appropriate
functional roles. If this is not the case, we are left saying that there is
something almost magical about living matter that can produce
both life and consciousness.
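The functionalist point can be illustrated with a small sketch (an illustrative analogy only; the class names and numbers below are invented and not drawn from the essay): what makes something a valve is the functional role it plays, not the material that realises it, and code that depends only on that role works unchanged for either realisation.

from typing import Protocol

class Valve(Protocol):
    """Anything that can block or unblock a flow counts as a valve,
    regardless of what it is made of."""
    def set_open(self, is_open: bool) -> None: ...
    def flow_rate(self) -> float: ...

class MetalValve:
    def __init__(self) -> None:
        self._open = False
    def set_open(self, is_open: bool) -> None:
        self._open = is_open
    def flow_rate(self) -> float:
        return 3.0 if self._open else 0.0

class PlasticValve:
    def __init__(self) -> None:
        self._open = False
    def set_open(self, is_open: bool) -> None:
        self._open = is_open
    def flow_rate(self) -> float:
        return 3.0 if self._open else 0.0

def fill_tank(valve: Valve, seconds: int) -> float:
    """Depends only on the functional role, never on the material."""
    valve.set_open(True)
    return valve.flow_rate() * seconds

print(fill_tank(MetalValve(), 10), fill_tank(PlasticValve(), 10))  # identical behaviour

The functionalist bet is that 'mind' is like Valve here: a role that silicon could, in principle, occupy as well as carbon does.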

The idea that there’s something special about living matter was
common in the nineteenth century: ‘vitalism’ was the belief that
living organisms are fundamentally different from non-living objects
because they contain some extra stuff. However, the discovery of
the physical processes that are involved with living, reproduction,
inheritance, and evolution has rendered vitalism redundant. The
functionalist’s expectation is that the same will happen with ideas
about the specialness of consciousness through the organic brain.

(2) Another way to make conscious robots could be to insert
artificial parts and materials into a human nervous system to take the
place of natural ones, so that finally everything is artificial.

Looking at medical developments, one can see how far this
possibility has already advanced. For example, chips are being
developed to take the place of the hippocampus, which controls
short-term memory and some spatial understanding (see Live
Science, Feb 23, 2011); special cameras attached to the optic
nerves can allow blind people to see; and nanotechnology can operate
at a cellular and even a molecular level. However, the question is
whether we could continually replace the human brain so that we
are left with nothing that was there originally and still have a
conscious being. Would replacing, say, bits of the brain’s neuronal
circuitry until all the neurons have been replaced with artificial
circuits mean that at some point ‘the lights go out’ and we’re left
with a philosophical zombie, capable of doing everything a human
can do, even behaving in a way indistinguishable from normal
human behaviour, but with nothing going on inside?
(3) A robot could be made of artificial organic material. This
possibility blurs the line between living and non-living material, but
would possibly be the most likely option for the artificial production of
a sentient, conscious being capable of feeling, since we know that
organic material can produce consciousness. To produce such an
artificial organism would probably necessitate creating artificial cells
which would have some of the properties of organic cells, including
the ability to multiply and assemble into coherent organs that could
be assembled into bodies controlled by some kind of artificial
organic brain.

Aping Evolution In Cyberspace

While it might be able to act and speak just like us, a robot
would also need to be constituted in a way very similar to us for us to
be reasonably certain that it’s conscious in a way similar to us. And
what other way is there of being meaningfully conscious, except in a
way similar to us? Any other way would be literally meaningless to us.
Our understanding of consciousness must be based on our
understanding of it. This is not so much a tautology as a reinforcement
of the idea that to fundamentally change the meaning of the word
‘consciousness’, based as it is on experiences absolutely intimate to
us, to something we do not know, would therefore involve talking
about something that we cannot necessarily recognise as
consciousness.


The reason we would model our robot’s brain on a human one
is because we know that consciousness works in or through a human
brain. Now many neuroscientists, such as Damasio, think that the
brain evolved to help safeguard the body’s existence vis-a-vis the
outside world, and also to regulate the systems in the body
(homeostasis), and that consciousness has become part of this
process. For example, when you are conscious of feeling thirsty you
know to act and get a drink. That is, while many regulatory systems
are automatic and do not involve consciousness, many do. The
distinction between doing something unconsciously and doing it
consciously can be illustrated in the example of you driving home
but having your conscious attention on something else – some
problem occurring in your life – so that you suddenly find yourself
parking in your driveway without having been (fully) aware that you
were driving – not having your driving at the front of your mind, so to
speak. Similarily, the question of robots being conscious can be
rephrased as to whether they can ‘attend to’ something and be
aware of doing so. Conscious, attentive involvement seems to have
evolved when a complex physical response is required. So while you
can duck to avoid a brick being hurled at you before you are
conscious that it’s happening, you need to be conscious of thirst
(that is, feel thirsty) in order to decide to go to a tap and get some
water.

So one thing that’s special about organic/living stuff is that it
needs to maintain its well-being. But there is no extra stuff that makes
physical stuff alive; instead there is an organisational requirement
that necessitates the provision of mechanisms to maintain the living
body. So to have a consciousness that we could recognise as such,
the robot’s body would also have to be a system that needed to be
maintained by some kind of regulatory brain. This would mean that,
like our brains, a conscious brain in a robot would need to be so
intimately connected with its body that it got feedback from both its
body’s organs and also the environment, and it would need to be
able to react appropriately so that its body’s functioning is
maintained.
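A minimal sketch of the kind of regulatory loop being described (an illustration of the homeostasis idea only, not a design drawn from Damasio or from this essay; the class and method names are invented): the robot's controller monitors an internal variable it must keep within bounds, and an interoceptive signal analogous to thirst is what triggers deliberate action.

import random

class HomeostaticRobot:
    """A body variable (battery charge) that must be kept within bounds,
    regulated by feedback from the 'body' to the controlling 'brain'."""
    def __init__(self) -> None:
        self.charge = 0.8  # interoceptive state, 0.0 to 1.0

    def feel(self) -> str:
        # The felt signal: a crude analogue of thirst or satiety.
        return "depleted" if self.charge < 0.3 else "ok"

    def step(self) -> str:
        self.charge -= random.uniform(0.02, 0.08)  # staying active costs energy
        if self.feel() == "depleted":
            action = "seek_charger"
            self.charge = min(1.0, self.charge + 0.5)
        else:
            action = "explore"
        return action

robot = HomeostaticRobot()
for t in range(20):
    action = robot.step()
    print(t, robot.feel(), action, round(robot.charge, 2))

Nothing in this loop feels anything, of course; the argument above is that something like this body-maintaining architecture is a necessary condition for feeling, not a sufficient one.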

The importance of the body for consciousness is reflected in
Damasio’s definition of it. In his book Self Comes To Mind (2010), he
says that “a consciousness is a particular state of mind” which is felt
and which reveals “patterns mapped in the idiom of every possible
sense – visual, auditory, tactile, muscular, visceral.” In other words,
consciousness is the result of a complex neural reaction to our
body’s situation, both internal and external.

It could be further argued that emotions and feelings stem from
two basic orientations a living thing can have – attraction towards
something, and repulsion away from something. This is denoted in
conscious creatures by pleasure and pain, or anticipation of these in
the experience of excitement or fear. Feelings are the perception of
the emotions. They are felt because we tend to notice and become
emotionally involved with those situations which have a bearing on
our well-being, and consciousness has developed to enable us to
have an awareness of and a concern for our bodies, including an
awareness of the environment and its possible impact on them, and to
deliberate on the best possible outcomes to preserve their well-being. So
our emotions and feelings depend on us wanting to maintain the
well-being of our bodies. To argue that a robot has a mind, when that mind
would be nothing like this because the robot’s body is not
in a homeostatic relation with its brain, would therefore once again
be ascribing to that robot something that we could not recognise as
consciousness. While a robot could mimic human behaviour and be
programmed to do certain tasks, it would not be valid to ascribe
feeling to it unless it had a body which required internal homeostatic
control through its brain. So unless robots can be manufactured to
become ‘living’ in the sense that they have bodies that produce
emotions and feelings in brains that help regulate those bodies, we
would not be entitled to say that robots are conscious in any way we
would recognise.

Humans Understanding Human Understanding

These points also relate to the third question mentioned at the
beginning – whether robots could actually understand things. Could
there be a robot having an internal dialogue, weighing up
consequences, seeing implications, and judging others’ reactions?
And could we say it understood what it said?

Certainly, robots can be made to use words to look as though
they understand what’s being communicated – this is already
happening (Alexa, Siri…). But could there be something in the make-
up of a robot which would allow us to say that it not only responds
appropriately to our questions or instructions, but understands them
as well? And what’s the difference between understanding what
you say and acting and speaking as though you understood what
you say?

Well, what does it mean to understand something? Is it
something more than just computing? Isn’t it being aware of exactly
what it is you are computing? And what does that mean?

One way of understanding understanding in general is to
consider what’s going on when we understand something. The extra
insight needed to go from not understanding something to
understanding it – the achievement of understanding, so to speak –
is like seeing something clearly, or perhaps comprehending it in
terms of something simpler. So let’s say that when we understand
how something works, we explain it in terms of other, simpler things,
or things that are already understood that act as metaphors for what
we want to explain. We’re internally visualising an already-
understood model as a kind of metaphor for what is being
considered. For instance, when Rutherford and Bohr created their
model of the atom, they saw it as like a miniature Solar System. This
model was useful in terms of making clear many features of the
atom. So we can see understanding first in terms of metaphors which
model key features of something. This requires there to be basic
already-understood models in our thinking by which we understand
more complex things.

As for arguments: we can understand these as connecting or
linking ideas in terms of metaphors of physical space – one idea
contains another, or follows on from other ideas, or supports another.
This type of metaphor for thinking is embedded in our own physical
nature, as linguistic philosopher George Lakoff and others have
argued. Indeed, many metaphors and ideas have meanings which
stem from our body’s needs and our bodily experiences.

There is also the kind of understanding where we understand
the behaviour of others. We have a ‘theory of mind’ which means
we can put ourselves in others’ shoes, so to speak. Here perhaps
most clearly, our understanding of others is based on our own
feelings and intentions, which are in turn based on the requirements
of our bodies.

It is possible then that our conscious understanding boils down
to a kind of biological awareness. In other words, our experiences of
our embodied selves and our place in the world provide the
templates for all our understanding.

So if there is a link between consciousness and the type of
bodies which produce sensations, feelings, and understanding, then
a robot must also have that kind of body for it to be conscious.
© Brian King 2018

Brian King is a retired Philosophy and History teacher. He has
published an ebook, Arguing About Philosophy, and now runs adult
Philosophy and History groups via the University of the Third Age.
