
The Relationship Between Asimov's Laws of Robotics and the Frankenstein Complex

Yuezhi Chen
WR150 A3
6/15/2015
Essay 1 Final Draft

Abstract

Today, many works of fiction and film touch upon the topic of artificial intelligence. Most of these works express people's fear of robots or intelligent machines as a universal anxiety, which Isaac Asimov calls the "Frankenstein complex." In many of his works, Asimov demonstrates his view that as long as his Three Laws of Robotics are implemented, robots are perfectly safe for human beings. This raises a question: do Asimov's Laws of Robotics serve as the remedy for the Frankenstein complex? Several scholars, such as Lee McCauley and Gorman Beauchamp, have discussed the relationship between the Frankenstein complex and Asimov's Laws of Robotics. By engaging their arguments together with Asimov's original works and remarks, this paper argues that Asimov's Three Laws cure people's fear of machines on the literary level but are almost invalid in real life.

There is a fear deep inside human nature that one day a far more powerful being will replace us and take control of our fate. The 1993 movie Jurassic Park plays on our fear that dinosaurs, the primitive rulers of the planet, might reappear in our time and take over the world. Dinosaurs are physically stronger than us, but there are other threats to mankind that are not only physically stronger but also mentally smarter than we are. For instance, in the novel Frankenstein, which Mary Shelley wrote in 1818, the protagonist Dr. Frankenstein creates an intelligent monster using unorthodox scientific methods. The monster later kills his creator and the people around him. According to Isaac Asimov, this fear of machines is the "Frankenstein complex." The story evokes people's fear of intelligent machines, and the notion of the Frankenstein complex has been repeatedly used in many later novels and movies.

Unlike these works, which reinforce or even exaggerate the Frankenstein complex, Isaac Asimov's stories never depict robots that intentionally or unreasonably attack human beings. In his fiction, he creates laws for his robots to follow; under the restriction of these laws, the robots are unable to harm us. The Three Laws of Robotics state that:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov Caves 169)

Asimov formulates these laws to create robots that are benign and loyal to humans. The robots in his fiction are always perfectly, if not impossibly, safe. It is his way of resisting the Frankenstein complex. However, are these laws really the cure for people's fear of machines, in fiction as well as in real life? To answer this question, we have to examine his fiction further, together with other scholars' research on his works.

According to Asimov's own words, "my robots reacted along the rational lines that existed in their 'brains' from the moment of construction" (Asimov Rest 85). The robot character Daneel in his novel The Caves of Steel acts even more logically and reasonably than the human protagonist Baley does. For instance, at the shoe counter, when Baley has no idea how to calm the riot, Daneel is able to solve the problem by showing sufficient authority. He successfully suppresses the riot by using an unloaded blaster to threaten those attempting to destroy two robots. Because his blaster is unloaded and he is programmed to obey the Laws of Robotics, he will not really kill anybody, nor does he intend to. As he explains in the novel, "I would not have fired under any circumstances, Elijah, as you know very well. I am incapable of hurting a human. But, as you see, I did not have to fire. I did not expect to have to" (Asimov Caves 39). It is the Laws that regulate his behavior and forbid him from hurting any human.

Ironically, the riot happens because of people's hatred toward machines, or their fear that machines will take over their jobs, yet it is a machine that effectively prevents the situation from getting out of hand. Daneel is able to accurately analyze human behavior using the data stored in his brain and to arrive at the most effective action permitted by the Laws. In this case, Asimov's Three Laws of Robotics serve as the remedy for people's fear of machines. If the Laws had not been installed in Daneel's program, he might have killed people in this emergency, or simply let the riot unfold in front of his eyes.

While trying to solve the murder case, Baley accuses Daneel several times. Nevertheless, his conjectures are never corroborated because, under the Laws of Robotics, Daneel is incapable of murdering a human. The truth turns out to be that it was the commissioner, a human, who committed the murder. People tend to perceive robots as harmful, but ironically humans are actually more dangerous than robots. Asimov is trying to say that robots are not threats to human beings as long as the Three Laws of Robotics are employed. Furthermore, Beauchamp says, "the robots of his stories, Asimov concludes, were more likely to be victimized by men, suffering from the Frankenstein complex, than vice versa" (Beauchamp 85). Indeed, in The Caves of Steel, R. Daneel is wronged by Baley twice. He cannot complain because he is a robot, but ethically he should not be accused of murder without any substantial evidence. Yet people on Earth hate robots because robots take away their jobs, and they are afraid that robots will one day replace them entirely. However, because of the Three Laws, machines cannot harm people; they will only act for the well-being of mankind. As Asimov justifies the robots' harmless nature in his fiction, the Laws become the remedy not only for readers' fear of machines but also for the robot characters who are wronged and misinterpreted by humans.

From the discussion above, it seems that the Three Laws of Robotics could be a plausible remedy for the Frankenstein complex, since they foreclose every possibility that a machine will hurt a human. It is easy to explain the application of the laws in fiction: Asimov simply says that the robot's positronic brain is designed with the Three Laws of Robotics and that the robot obeys the Laws because it is programmed to do so. However, he never really explains how to encode these literary laws into a computer program, and he did not closely consider the necessity of the laws when applying them to the real world. According to McCauley, although Asimov's Three Laws are influential in literature and film, they are not acceptable from a professional perspective. McCauley asserts that "Asimov's Three Laws of Robotics are, after all, literary devices and not engineering principles any more than his fictional positronic brain is based on scientific principles" (10). We can infer from this claim that Asimov's Laws can serve only as a literary cure for the Frankenstein complex. In the real world, several obstacles stand in the way of implementing his Laws.

The Laws sound feasible and reasonable in fiction, but in real life they are too ambiguous to execute. The First Law states that "A robot may not injure a human being, or, through inaction, allow a human being to come to harm" (Asimov Caves 168). To implement this Law, the first problem people encounter is how to define "harm." Does it mean mental harm or physical harm? If the term includes both meanings, how should a machine identify mental harm? And if a robot is about to perform an action that will prevent a human from physical injury but will result in mental harm, what should it do? The First Law alone generates this many questions, to say nothing of the other two. The ambiguity of the language impedes the implementation of Asimov's laws. As McCauley argues, "the trouble is that robots don't have clear-cut symbols and rules like those that must be imagined necessary in the sci-fi world" (10); sometimes a situation is simply too complicated, and with no guidance for understanding the particular circumstance, the robot does not know how to act. The technology is simply not advanced enough to translate the literary rules into a feasible program that can be installed in machines.
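To see how quickly the ambiguity surfaces, consider a minimal sketch, written here in Python, of what a naive encoding of the First Law might look like. This is purely illustrative and not drawn from Asimov or the scholars cited above; the predicates causes_physical_harm and causes_mental_harm are hypothetical placeholders, and defining them is precisely the unsolved problem just described.

    # A naive, illustrative encoding of the First Law. The two helper
    # predicates are hypothetical placeholders: nobody knows how to
    # implement them, which is exactly the difficulty at issue.
    def causes_physical_harm(action, human):
        raise NotImplementedError("no general way to compute this")

    def causes_mental_harm(action, human):
        raise NotImplementedError("'mental harm' is not even well defined")

    def first_law_permits(action, humans):
        # Permit the action only if it harms no human. Note what is
        # still missing: harm allowed "through inaction," and the
        # trade-off where preventing physical injury causes mental harm.
        return not any(causes_physical_harm(action, h) or
                       causes_mental_harm(action, h)
                       for h in humans)

Even this toy version cannot be completed: every hard question the First Law raises simply reappears as an undefined predicate in the code.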

Doubts about their necessity also obstruct the application of Asimov's Three Laws. Since machines are not human, they cannot think independently; they do what humans make them do. Even if they have the ability to think on their own, it is because humans have deliberately programmed them to do so. Beauchamp states that "Laws, in the sense of moral injunctions, are designed to restrain conscious beings who can choose how to act" (86). Since robots are not "conscious beings" and cannot choose whether to follow or disobey a law, there is no need for any laws in the first place; we simply write the code that tells the machines what to do. And even if we assume that technology enables robots to be conscious, so that they can make choices and decide on their own, Asimov's Laws would still be titular, lacking any complete legal mechanism to punish whoever violates them. It is impossible for the Three Laws alone to manage the problem of a possible rebellion of intelligent machines.



In some of his later stories, Asimov indirectly or unintentionally expresses the idea that his Three Laws of Robotics are unlikely to be implemented in the real world. Take the short story "The Evitable Conflict" as an example. The story records a conversation between the protagonist Byerley and Dr. Calvin, a robotics expert. Byerley consults Dr. Calvin about the problem that the four major Machines controlling the world economy have been yielding imperfect data. The Machines cannot make mistakes, and they have the ability to detect whether people are feeding them wrong data and to correct the mistakes automatically. Based on this information, Byerley and Dr. Calvin discover that the Machines knew people had disobeyed their calculations but chose not to correct the answers back toward the optimal directions. They did so because the people who ignore their optimal answers are the people who oppose the dominance of the Machines. Within the imaginative framework Asimov has created for the story, the destruction of the Machines would ultimately cause the destruction of the world economy and in turn harm humanity. The Machines therefore chose to slightly impair the people who are harmful to humanity in order to protect humanity as a whole.

Although the Machines are trying to save humanity, they are making decisions on their own without consulting men. They even alter the interpretation of the First Law implicitly. Quoting from the story, "the First Law becomes: 'No machine may harm humanity; or through inaction, allow humanity to come to harm'" (Asimov Evitable 216). Then Dr. Calvin goes on:

And so should I say, and so should the Machines say. Their first care, therefore, is to preserve themselves, for us…So rather that the Machine is shaking the boat - very slightly - just enough to shake loose those few which cling to the side for purposes the Machines consider harmful to Humanity. (Asimov Evitable 217)

What Dr. Calvin is saying is that protecting individual humans is no longer the Machines' priority; their priority is to preserve themselves. Moreover, their behavior is based on what "they consider" the right thing to do. It is no longer humans' call to decide what should and should not be done.

In this story, Asimov presents a scenario of a future society controlled by super-machines in which the Three Laws are implemented. Seemingly, he wants to convey the idea that the Laws make the Machines safe for human beings, and he pictures the Machines as the salvation of humanity. At the same time, however, Asimov is actually posing questions to his readers: are we really going to accept this situation in real life? Will we willingly give up the right to make decisions for ourselves? Commenting on the Machines' adaptation of the First Law, Beauchamp responds that "in allowing them to modify the Laws of Robotics to suit their own sense of what is best for man, he provides, inadvertently or otherwise, a symbolic representation of technics out of control, of autonomous man replaced by autonomous machines" (91). Indeed, if it is the Machines who are saving humanity instead of ourselves, then what meaning does humanity have left for us? The online Merriam-Webster dictionary defines the word "humanity" as "the quality or state of being human." To be human, rather than relying on machines to dictate what to do, we should be able to control our own lives. That is the proper quality of being human.

Even though the Machines cannot physically harm us, they will undoubtedly hurt our pride in being human. The Machines actually know this point fairly well, as Asimov writes in the story: "and to know that may make us unhappy and may hurt our pride. The Machine cannot, must not, make us unhappy" (Evitable 218). That is why the Machines did not explain why they were yielding flawed information. Imagine, in real life, machines controlling and influencing the society we live in without our knowledge; it would only aggravate our Frankenstein complex once we detected their intentions. When Byerley asks if "mankind has already lost its own say in its future," Dr. Calvin responds, "It never had any, really" (Asimov Evitable 218). Her explanation for this assertion is that the future is always determined by economic and sociological forces, which the Machines understand better than we do, and that this is why it is a wonderful thing to have Machines that can avoid all the conflicts on Earth for us. The protagonist Byerley, who possibly represents Asimov's own opinion, reacts to Dr. Calvin's explanation with "How horrible!" (Asimov Evitable 218). Through this exclamation, Byerley shows his own fear of the Machines. It is awful to have Machines regulate our society, and the Three Laws of Robotics would not help either. As humans, we should be responsible for our own actions, solve the conflicts we ourselves create, and live with the consequences of our mistakes. We can never escape our duties. If we live numbly under the direction of Machines, what is the point of being human? Contrary to what Dr. Calvin says about the Machines saving humanity, they are actually sabotaging the essence of humanity and destroying human will.

The fear of machines has long been embedded in people's minds. Asimov calls this fear the "Frankenstein complex" and uses his Three Laws of Robotics to counter it. In his fiction, these laws are able to resolve every robotic threat. However, the Three Laws are not feasible in real life. They grow obscure and vague when we try to capture them in a computer programming language, and from a scientific standpoint they have been considered impossible to realize in the real world. The necessity of the Laws is questionable as well. Furthermore, the Three Laws of Robotics cannot prevent the threat the Machines pose to our humanity, for the Machines are more likely to take control of our wills. The Three Laws are certainly not the remedy for the Frankenstein complex in reality. If there is any remedy for the fear of machines, it is a better understanding of our own existence and a stronger will to master our own destiny.



Works Cited

Asimov, Isaac. The Caves of Steel. New York: Bantam Books, 1991. Print.

Asimov, Isaac. "Introduction." The Rest of the Robots. Great Britain: Panther Books Ltd., 1968. 5-8. PDF e-book.

Asimov, Isaac. "The Evitable Conflict." I, Robot. New York: New American Library, 1956. 195-218. Print.

Beauchamp, Gorman. "The Frankenstein Complex and Asimov's Robots." Mosaic: A Journal for the Interdisciplinary Study of Literature 13.3-4 (1980): 83-94. Web. 2 June 2015.

McCauley, Lee. "The Frankenstein Complex and Asimov's Three Laws." Association for the Advancement of Artificial Intelligence Workshop (2007): 9-14. Web. 2 June 2015.
