Introduction to Ethics
Baumeister
May 9, 2019
As technology advances, we face a controversial question: how much power should
we give machines to control our everyday lives? In the near future, artificial
intelligence (AI) may advance beyond human intelligence, diminishing human ambition and
ultimately leaving the human race behind. People increasingly rely on technology to complete
difficult tasks for them, which can create a downward spiral toward complete automation
carried on without outside control. Technology designed to think and make decisions without
human input draws a fine line between what we could do and what we should do in the
mechanized world we are creating. Automation has become a modern-day moral quandary
because, although the population may reap some benefits from implementing artificial
intelligence, doing so also carries serious risks.
Artificial intelligence could lead down many different paths, toward either positive influence or
destruction, which is why it is such a hot topic within the growing scientific community. A world
of technology that expands beyond human knowledge and learns on its own opens the door for
AI machines to become exponentially more intelligent than humans. “They’re doomed to
fail, however, because decision-making isn’t logical, it’s emotional” (Camp). As Camp suggests, an
important aspect of developing these revolutionary machines is keeping their goals and morals aligned
with those of the human race. Most decisions or plans are not entirely black and white, and there is
not always one answer or route to take. If there is a goal to be reached, there may be a
hundred ways to achieve it; however, not every way will be clean or easy. Human
emotion is required in decision-making to ensure that nothing else will fall apart or be
harmed when the final decision is enacted. The ethical theorist we studied in class, John Stuart
Mill, believes that everything that brings happiness is therefore good. Mill’s greatest
happiness principle focuses primarily on whichever action produces the most happiness. Mill writes, “The
creed which accepts as the foundation of morals ‘utility’ or the ‘greatest happiness principle’
holds that actions are right in proportion as they tend to promote happiness; wrong as they tend
to produce the reverse of happiness” (Mill, 7). Although the intentions behind AI seem good, a
machine may perform a task that is beneficial and achieves the desired goal yet use destructive
methods, harming whole families, to accomplish it. When a machine has a goal, it pays no
mind to anything or anyone; it simply completes the task. If these machines end up surpassing
human intelligence, there will be no stopping them. How do we control something we do not
understand, or that is unable to account for human nature? By the point where machines are acting of
their own accord, without a moral compass that keeps humanity as their focus, it may be too late.
The opposing standpoint argues morally for the promotion of AI. The machines
themselves are harmless because, as of the latest improvements in technology, they cannot have
goals of their own and only seem to act “on their own” because of extensive programming. Another theorist
we discussed throughout the semester was Immanuel Kant. Kant focuses on the intention behind a
moral decision: ultimately, you must consider good will and reason when deciding whether
something is good. Kant writes, “A good will is good not because of what it effects or
accomplishes, nor because of its fitness to attain some proposed end; it is good only through its
willing, i.e., it is good in itself” (Kant, 7). Kant believes people have a duty to themselves and
others to make decisions that are good independent of the consequences. Much of the excitement
associated with artificial intelligence is the hope that the abilities of these machines can change the
world of medicine, making procedures easier and possibly curing illnesses humans have been
unable to cure. It is not out of reach to teach a robot to administer medicine, surgically remove a
gall bladder, or stitch an open wound. Supporters of artificial intelligence will say that any
risky technological repercussions are far-fetched and that machines could not deliberately
“destroy the world” or cause the “demise of the human population,” as some would say. Some
people believe the world of AI could exponentially improve society. If the intentions behind AI are
not beneficial and moral, then Kant believes it should not be carried out. Kant
writes, “But I maintain that in such a case an action of this kind, however dutiful and amiable it
may be, has nevertheless no true moral worth” (Kant, 11). Overall, if Kant believed that people
truly meant to bring good to the medical community, as well as to other industries that would
benefit from AI, then his intention-based ethical theory would support AI.
Autonomous artificial intelligence will not destroy the world tomorrow, but possibly
within the next decade a portion of your day may involve interacting with an autonomous robot.
As autonomous robots become more prevalent, we will face the issue of the power artificial
intelligence confers and how people choose to use it. Understanding the moral debates and
quandaries people face in society today is important for developing opinions and choosing an
ethical standpoint to follow and live by. In the end, although AI carries both risks of
intention and risks of consequence, an ethical philosopher may conclude
that the important benefits outweigh the risks. The world is changing, and
although people might be scared, both intent and consequences can be reasoned through
ethical thinking.
Works Cited
webster.com/dictionary/autonomous.
“Benefits and Risks of Artificial Intelligence.” Future of Life Institute, n.d.,
https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
Camp, Jim. “Decisions Are Emotional, Not Logical.” Big Think Edge, 11 June 2012,
http://bigthink.com/experts-corner/decisions-are-emotional-not-logical-the-neuroscience-
behind-decision-making.
Kant, Immanuel, and James W. Ellington. Ethical Philosophy: The Complete Texts of Grounding
for the Metaphysics of Morals, and Metaphysical Principles of Virtue, Part II of The
Metaphysics of Morals. Hackett Publishing.
Mill, John Stuart, and George Sher. Utilitarianism. Hackett Pub., 2001.