
Leggett 1

Eric Leggett
PHIL 315-130
Dr. Roger Magyar
15 August 2015

Since the dawn of the technological revolution, technological development has been
driving toward artificially intelligent robots. Like most things in this era, the advancement of this
technology was driven largely by industry's search for efficiency and lowered costs. In this case,
the manufacturing industry drove the advancement of robotic arms and machines designed to
quickly and efficiently do low skill and repetitive tasks that had previously been done by a
human worker, and this pursuit evolved as expanded capabilities were desired. Already at this
early stage of the automaton, there were significant moral concerns about its development and utilization; was it morally right to take away someone's job and replace them with a robot, knowing that that person had a family and possibly had no other skills with which to find another job? This is a complex moral problem that still has no good answer. Industrialists and economists will tell you that sacrificing efficiency and advancement for the sake of saving a few jobs would put a death grip on our economy; they will say that in order to keep pace with the world and continue to make advancements in medicine, computers, and many other fields that depend on automation, it is important to become as lean and mean as possible through the use of
intelligent machines. That drive toward maximizing efficiency in hopes of realizing new and
greater technology is the same spirit that drives the development of artificial intelligence today,
despite the moral questions that can be raised about both the motivation and cost of these
technologies and their implementations.

In order to break down and digest a complex moral issue such as the one discussed
above, it is important to use a systematic approach to parse out the real motivation and
information that exists and make an informed decision based on a thorough investigation of all
facets of the issue. A five-step moral analysis, including the application of moral frameworks and a precursory risk assessment, will be performed in order to help illuminate some of the murkier
areas of this debate. While the outcome of this analysis will be based both on analysis and
personal application of an individual's moral compass and ideas, the steps taken and the information contained herein will allow others to become informed on this issue and make their own judgments either in agreement with or in opposition to this paper's conclusion. The first step of any
moral analysis is to formulate a formal problem statement.
As robotic intelligences become more and more advanced, the proposition of a truly superintelligent artificial intelligence becomes more and more realistic. As this advancement rapidly becomes a reality, many of the brightest minds on the planet today have issued formal warnings against the development of this type of technology without proper caution. Superintelligent AI is such a contested issue due to the huge risks associated with its development. The
superintelligent artificial intelligence may be the last thing humankind ever has to invent [1]; a robot with this level of intelligence would by design be better than any human at researching and designing new technologies. The nature of these intelligences, such that they exist in portable and relocatable forms, would allow for an army of superintelligent and efficient researchers that could be deployed to simultaneously research and develop innovative solutions for each of the seemingly infinite technological challenges that exist in today's world. This type of superintelligence would also be able to design a better superintelligence than we could, meaning that once the first superintelligence is created, the intellectual level at which these superintelligences compute and analyze would increase exponentially with each new iteration
[1]. This could revolutionize the world we live in today, allowing essentially all problems
solvable by technology to be solved in rapid succession, and almost entirely eliminating the need
to work at all in most cases. While this dream seems incredibly enticing, this technology also has
a large number of potential risks associated with it. Stephen Hawking, one of the preeminent minds of our generation, warns that the advancement of artificial intelligence to a superintelligent level could cause the end of humankind [2]. His argument is essentially that, as in all other predator-prey systems, you never want a higher-order predator to enter the system you inhabit. Introducing an artificial superintelligence to our world would essentially be creating an apex predator and placing it in our same ecosystem, and it may cause the end of our human existence if an artificial intelligence that is both self-aware and incredibly powerful were to decide it no longer wanted to answer to its human counterparts. While this eventuality may seem outlandish upon initial inspection, it is a real possibility when dealing with a superintelligence and something that needs to be considered when dealing with this type of technology. In order to
reap the benefits and protect against the dangers of this technology, it is important that those
engineers designing it protect against such an eventuality, and the policy makers and humanity as
a whole try to resist any advancements that are not in the best interest of humanity. In order to do
this, it is important that the everyman try to become familiar with the issue of artificial
superintelligences, and that engineers who have a thorough understanding of the AI's inner
workings remain vigilant of people who dangerously implement this developing technology.
The second step of any moral analysis is analysis of the interests of all stakeholders and
the defining of all key terms having to do with the topic of discussion. This step is complicated
when applied to this issue, as there are some groups of stakeholders who do not immediately come to mind when performing a normal analysis, and this topic surrounds a subject matter that most people are not incredibly familiar with. This complexity only serves to increase the importance of this step, so that all sides of the argument exist on a level playing field, where
all can communicate and contribute to the final outcome by way of defining the terms and
conditions of the issue in a consistent way for all parties.
The term superintelligence is used to describe an artificial intelligence that has an
intelligence level equal to or greater than that of any human, making them inarguably better at
tasks involving computation and analysis, but also having the capability to make decisions in
an autonomous fashion separate from human micromanaging. A term that will come up during
the discussion as well is sentience, which is a term used to define something that can be
considered a being, or an independent and aware actor, possessing free will and autonomy as
well as self-awareness of these traits. A LAWS will be defined as a lethal autonomous weapons system used for the purposes of warfare, which is able to make decisions based on its own computation and analysis separate from human interaction, including in situations where it may
be taking human life.
A group of primary stakeholders in this situation are the workers of today, who have a
vested interest in being able to keep their employment and continue to provide for their families.
A developing technology like artificial intelligence does not become integrated into a society all
at once, so if the world is truly changed significantly by the invention of a superintelligence such
that working in many cases is unnecessary, there is concern as to what these workers would do to
continue to provide for their families. Another interested group that is not wholly disjoint from
the previous group is modern society as a whole. This group of stakeholders is a very large one, with a multitude of interests and ideas that are largely not in agreement with each other; however, there are some interests that can be called unanimous or obvious desires of the whole, such as a cure for cancer, an end to world hunger, or an end to war. Advancements like these as a result of AI benefit society as a whole, and these effects are represented by this large group of
stakeholders. The humans of the future are a group of stakeholders who cannot participate in this
debate due to their current non-existence, but their basic desires can be derived using common sense, and considerations must be made to abide by these interests. The analysis of this issue must take into account the general welfare of the future of mankind; while advancements using these new superintelligences may provide significant benefit, the possible costs of this technology, if not handled properly, also become exponentially greater. The policymakers of today and the
future are a party that has to be involved in these decisions, as they will need to try to impose sanctions to help define what this technology is. There is also a case to be made that if this group does not get ahead of the technology and define acceptable uses before it exists, the uses may be defined by the people making the technology in a fast and rough manner [3]. The last stakeholder is that of the superintelligences themselves; as the programs become more and more intelligent, they will become self-aware and autonomous actors with their own free will. As this
is the case, they are a valid group of stakeholders whose interests must be considered during this
process. Their interests will depend largely on how they are designed, but some interests can be
applied to them simply because they are independent and sentient actors, such as a right to be treated as an end and not only as a means, and possibly provisions against entrapment and forced
servitude, depending on how complex the intelligences become.
At this point it is also important that all relevant moral values are introduced, and any
unknown or missing facts are discussed. The principle of nonmaleficence is central to this debate: the idea that you should not design anything that will do more harm than good. The issue will be analyzed using Kantian rule ethics, Aristotle's virtue ethics, and Mill's utilitarianism, so the central ideas of these moral rule sets will be used. From Kantian rule ethics these include the rule that all people should be treated as ends, not simply as means; the idea that any moral decision you make should be universalizable; the idea that actions should be morally judged based on their intentions, not on their possible consequences; and the idea that any disparity in costs and benefits should favor those who are underprivileged, if the disparity cannot be removed completely. From virtue ethics, the golden mean will be discussed as a tool for
comparing certain traits, as well as the idea of friendship and love as centrally important to an
existence, and the idea that an action is morally correct if made by a person of high moral
character. Finally, from utilitarianism, the pursuit of the most good for the most people will be used as an analytical tool, as well as the idea that an action is judged based on its consequences, not on its intentions. The precautionary principle will also be used during the analysis: the idea that if a technology is beneficial to society and the environment, uncertainty about its consequences is not a valid reason not to implement it. The moral frameworks of value sensitive design, reasoning, and thresholds will be used in this analysis, where value sensitive design is defined as a design framework in which some important values are given precedence over others during the design process, in order to ensure those values are protected by the final product as well as possible. Reasoning and thresholds are relatively self-explanatory, and have intuitive definitions
following the definitions of the words they are named for.
When considering this issue, there is a large portion of the topics being discussed that
pertains to technologies that do not yet exist, and things that may or may not happen in the
future. Because of this, many of the decisions made about these things will need to remain fluid, so that they can adapt to new facts in the future. The point of this analysis is to structure the issue and analyze it as well as possible at this point, so that the technology can be designed in a way
that the outcomes of this analysis can be more easily integrated into the first superintelligence
and then into subsequent models.
The third part of a moral analysis is generating ideas for possible courses of action, and
doing a precursory analysis of each. The simplest solutions are those that are considered binary, or all-or-nothing solutions; these are not usually viable options, but they help to illustrate the pros and
cons of a technology. In this case the binary off option would be to outright stop the research of
artificial intelligence, and to remove any currently existing implementation of the technology.
This would be counterproductive because it would negate all of the positive impact AI has
already provided our society, separate from what any further enhancements may be able to do. The binary on option would be to let AI evolve forward unrestricted and unguided, which is also a poor
choice because of the dangers outlined previously. Recognizing the shortcomings of these two
options allows for the creation of other middle ground options that may have higher feasibility.
An option that follows relatively directly from the shortcomings of the binary off option would be to halt research on superintelligent AI and simply continue to use the technology as it exists
currently. This would allow humanity to continue to benefit from the advances that have been
made thus far in AI while attempting to mitigate the possible negative effects of superintelligent computers in the more distant future. Another option would be to use intelligent design to attempt to control the computers' motivations [1]. This is a method that would have to be
implemented and monitored by someone who has a deep understanding of the way that these
superintelligences would operate, but it may hold the key to harnessing these superintelligences so they can be used in a measured and safe way. Another option would be to develop artificial intelligences to a point where they are capable of free thought and autonomous decision making, and then simply allow them to integrate into society when there is a robot that can successfully do so [3]. This is based on an understanding of the fundamental differences in the ways humans
and robots perceive the world, and would allow for a partnership between the two groups to
prosper greater than either would alone.
The fourth step of an ethical analysis is the thorough moral and logical evaluation of all of
the generated solutions. This allows for informed decision making, and hopefully minimizes
unforeseen impacts of the selection made. When reviewing the binary off solution, the main
components that must be analyzed are the moral cost and logical feasibility of removing the
technology. Logistically, it would be almost impossible to halt the progression of AI as it has
already been integrated into so many things, from cars that help to prevent accidents to
keyboards that anticipate what you are going to type and provide suggestions as well as
corrections for you. This deep existing integration would cause backlash against this type of
policy, and it would be difficult to enforce such a policy. It would also be difficult to stop current
research on a worldwide scale, as many countries and research teams have already poured a large
amount of resources and time into both the implementation of and protection against multiple AI
systems, and allowing one country to continue to develop and use AI while others do not would
provide a huge boon for that country, and would heavily incentivize other countries to breach any
kind of agreement made. Although this option has been shown to be logistically infeasible, it can
still be helpful to perform full ethical analysis on it in order to gain insight to use in the analysis
of other options. From a utilitarian point of view this solution fails. Taking away technology that
has already made the world much safer and made many things easier and more intuitive to use does not maximize humanity's utility or happiness, failing the maximum utility principle. It would also do harm to people who have been helped by its creation; for instance, Stephen Hawking uses a version of AI in the program for his robotic voice; taking it away would do direct harm to him and would violate a stipulation of the freedom principle [2]. This option also fails under analysis by Kantian ethics. If this premise were universalized, it would state that any technology
that has the opportunity to do eventual harm should be thrown out immediately. Following this
same logic, the world would not have developed the car, electricity, or countless other things we
now consider to be integral to our everyday lives. Removing this technology would also hurt
those who need it the most more significantly than others, which implies an inequality of cost skewed toward the underprivileged, which is considered immoral by Kant. From a virtue ethics standpoint it seems to pass, but the case is more complex than the previous analysis. While removing this type of technology would seem to be in violation of the golden mean, it is a stance that is backed by many industry leaders and great thinkers. This action has been chosen by people of high moral standing, so by virtue ethics it is morally right; this is a contradiction that occurs within the ethical guidelines, and there is no clear way to resolve it. Using other ethical
analysis tools, it can be shown that this course of action contradicts the precautionary principle,
because it is a technology that could help humanity but it is being halted due to unknown or
undefined risks. A risk assessment seems to fail this course of action as well, because the accidents and damage being prevented by AI now and in the near future are weighed against a possible risk very far in the future, even if that potential risk is very large.
The next option to be analyzed is the binary yes option, which is to allow AI to simply
progress naturally without restriction. This is a feasible option, because it requires no change of
any kind from what is currently being done and so is easy to implement. A simple common sense
analysis of this option, however, brings up a few red flags. There are countless industry professionals who are calling either for a complete dismantling of AI or, at the minimum, for some guidelines to be imposed on its implementation and development; ignoring the opinions of the people who know the most seems intuitively like a bad idea and could very well cause the things
they warn against to become a reality. Under utilitarian analysis this option does not maximize
happiness because of the negative impact of many of the currently evolving fields of AI,
including misimplementation of LAWS as well as an oft-hypothesized end of mankind caused by aggressive AI, although there is a bit of a contradiction here, due to this possibly being the most
efficient implementation. If it was allowed to progress naturally without hindrance, the more
efficient uses of the technology would rise to the top. However, this maximum efficiency may
come to the detriment of humankind, which conflicts with the maximizing happiness principle. It
also fails the freedom principle, because if AI is developed without regulation it will hurt people,
and that violates the principle. Under Kantian analysis, the maxim that is made when the
decision is universalized can be critiqued. If in all situations where danger was possible, we
allowed the technology to continue unhindered, there would be a significant amount of damage
done. This is not a decision that could be universalized, so it is considered an immoral decision
by Kantian logic. Allowing AI to move forward unhindered would also cause an inequality of
benefit to those who can afford to make their own AI systems, possibly to use against others,
which is considered immoral by Kant. This path is also considered immoral through virtue ethical analysis, due to its being warned against by people of high moral character and its unequivocal disregard of the golden mean. Developing technology in this way, with no regard for any of
the stakeholders, also violates the principle of Nonmaleficence. A risk assessment on this choice
would show a very high risk relative to a small benefit; without precaution, this technology will almost certainly affect humanity in a substantial and negative way.

Halting the research and advancements of AI at its current levels and never allowing it to
progress to the superintelligent level is another possible option. Feasibly, this would be difficult
for similar reasons to the binary off option, because it would be difficult to enforce such a
mandate evenly without risk of some defector gaining a huge advantage over the rest of the
world. A precursory analysis done with common sense leads to a few questions. Why is the
current amount of artificial intelligence the optimal amount? If the amount we have now is
helpful, why would further advances not be worth exploring? While there is more in-depth
analysis to be done on this option, these questions have no good answers. From a Kantian
viewpoint, the reasoning that motivates the option can be questioned. In this option, a technology
whose possible future iterations could cause harm is terminated wherever it currently is in its development lifecycle, regardless of possible future benefits. This, if universalized,
would result in almost nothing being researched at all. When electricity was discovered, it was a
dangerous technology; by applying this rule development of that technology would have ceased
at that point, which would have prevented all advancements made as a result of it since then.
From a utilitarian point of view it can be argued that stopping people from continuing to make
advancements that help people would also violate the freedom principle, which states that people
are free to act as they will as long as they do not harm others or prevent others from acting as they please. This method also does not maximize happiness, because further development in this field
could have improved happiness, and it will no longer be pursued under this plan. However, it
does seem to do well under virtue ethical analysis because it fits the golden mean method by
falling in the middle of the two extremes. If continued research was done on this technology
there would likely be further advancements that would help humanity as a whole; so implementing this plan and stopping research on AI would be in violation of the precautionary principle.
By analyzing the previous three plans, it seems the best option would be to continue to do research in the field of artificial intelligences and superintelligences. One way that has been
hypothesized as a safe way to do this is to shape the artificial mind by defining for it what is
important to it, and using this knowledge to manipulate it [1]. For instance, having an artificial
being's primary goal set as maximizing the efficiency of a toothpick factory may cause it to make unsafe decisions, or decisions detrimental to the continuation of the human species, in order to
achieve this goal. However, if the primary goal is to befriend humans, it would stop short of any
damaging actions because its higher optimizing goal is to benefit humans. While this seems like
a sound theory in practice, it raises a few ethical concerns that hinge primarily on how you define consciousness. As these superintelligences continue to progress and become more humanoid and
capable of actual autonomous thought, at what point do they also become beings, deserving of treatment as ends and not merely as means, as outlined in Kantian ethics? When does it
begin to become important to consider the happiness of the intelligence under the utilitarian
Utility principle? Or to consider the importance of friendship for the computer under virtue
ethics? These are all questions that at this point seem ridiculous, but as superintelligent
computers become more and more complex, they become questions that must be considered. If these superintelligences get to a point where they can be considered independent actors possessing free will, and there is every indication they eventually will, this type of built-in manipulation
would be considered incredibly immoral. However, this immoral behavior has the benefit of
continuing our human species, due to the huge negative impact an uncontrolled superintelligence
could have on our world.

The final option explored is similar to the binary on, with some protections against dangerous implementations of the technology. This option allows for artificial superintelligences to develop naturally, with human guidance, into a full species independent of humans, and allows them to coexist with humanity in the future. The analysis of this
plan depends on how the risk is calculated for this progression of the issue. It is possible that allowing the technology to evolve unhindered within some areas and not others, such as ones in which the AI cannot hurt humans, would prevent the possible negative eventuality discussed in the binary on choice. However, it is possible that this restriction will not be sufficient to limit
the damage that could be done by a superintelligence on our society, and there will remain the
possibility that artificial intelligence will end our collective existence. Similar to the previous
option, the ethical analysis of this issue also depends on whether superintelligences can be
treated as beings. From a utilitarian point of view this option would maximize the happiness for
all beings, including the AI if it can be considered one. The freedom principle is followed,
because the AI is allowed to do as it pleases, and this option does not impede others from
pursuing their happiness. Under Kantian analysis this option is shown to have universalizable
reasoning: if some risk must be incurred to ensure that all beings have freedom, it is a risk worth incurring. This theory treats all those involved as ends, not merely as means, as is done in the other options. Under virtue ethics it can be considered a good moral choice because it was recommended by an expert who is considered to have high moral character, and it follows the golden mean because of its middle-ground approach to the application of AI technology.
The final step of an ethical analysis is to make an ethical decision. Based on the analysis
done above, the choice that is most morally correct is allowing AI to grow and coexist with
humans as they become more complex and individualized. While this option comes with some built-in risk, it is the only morally correct option and thus should be the only option truly
considered. Because this is such a complex issue, there are three ethical design frameworks that
help to understand and map out the features of this situation. Reasoning will be very important in
crafting this unique solution, because you will need to decide in which areas artificial
intelligence will be allowed to be applied while it is developing. LAWS and other areas where it could have a significant impact should be avoided. Thresholds will also be important for a similar reason, because the limits on AI will have to be ever-changing based on each iteration's ability to think and act for itself. Finally, the framework of value sensitive design is
going to be incredibly important, because designing a superintelligence with the tools to
communicate and integrate into our society is a huge part of the success of this implementation
plan, and it will require very specific implementation instructions and guidelines for use to be outlined industry-wide.

Works Cited
[1] Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003, pp. 12-17.

[2] Cellan-Jones, R. (2014, December 2). Stephen Hawking warns artificial intelligence could end mankind. BBC News. Retrieved August 15, 2015.
[3] Russell, S., Hauert, S., Altman, R., & Veloso, M. (2015). Robotics: Ethics of
artificial intelligence. Nature, 521, 415-418. doi:10.1038/521415a