Machine ethics isn't merely science fiction; it's a topic that requires serious consideration, given the rapid emergence of increasingly complex autonomous software agents and robots.

A runaway trolley is approaching a fork in the tracks. If the trolley runs on its current track, it will kill a work crew of five. If the driver steers the train down the other branch, the trolley will kill a lone worker. If you were driving the trolley, what would you do?

Trolley cases, first introduced by philosopher Philippa Foot in 1967 [1] and a staple of introductory ethics courses, have multiplied in the past four decades. What if it's a bystander, rather than the driver, who has the power to switch the trolley's course? What if preventing the five deaths requires pushing another spectator off a bridge onto the tracks? These variants evoke different intuitive responses.

Given the advent of modern "driverless" train systems, which are now common at airports and beginning to appear in more complicated situations such as the London Underground and the Paris and Copenhagen Metro systems, could trolley cases be one of the first frontiers for machine ethics? Machine ethics (also known as machine morality, artificial morality, or computational ethics) is an emerging field that seeks to implement moral decision-making faculties in computers and robots. Is it too soon to be broaching this topic? We don't think so.

Driverless systems put machines in the position of making split-second decisions that could have life or death implications. As a rail network's complexity increases, the likelihood of dilemmas not unlike the basic trolley case also increases. How, for example, do we want our automated systems to compute where to steer an out-of-control train? Suppose our driverless train knew that there were five railroad workers on one track and a child on the other. Would we want the system to factor this information into its decision?

The driverless trains of today are, of course, ethically oblivious. Can and should software engineers attempt to enhance their software systems to explicitly represent ethical dimensions of situations in which decisions must be made? It's easy to argue from a position of ignorance that such a goal is impossible to achieve. But precisely what are the challenges and obstacles for implementing machine ethics? The computer revolution is continuing to promote reliance on automation, and autonomous systems are coming whether we like it or not. Will they be ethical?

Good and bad artificial agents?

This isn't about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on our lives, not all of them good. But no, we don't believe that increasing reliance on autonomous systems will undermine our basic humanity. Neither will advanced robots enslave or exterminate us, in the best traditions of science fiction. We humans have always adapted to our technological products, and the benefits of having autonomous machines will most likely outweigh the costs.

But optimism doesn't come for free. We can't just sit back and hope things will turn out for the best. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. A search engine, for example, might collect data that's legally considered to be private, unbeknownst to the user who initiated the query.

Furthermore, with the advent of each new technology, futuristic speculation raises public concerns regarding potential dangers (see the "Skeptics of Driverless Trains" sidebar). In the case of AI and robotics, fearful scenarios range from the future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots. While
some of these fears are farfetched, they underscore possible consequences of poorly designed technology. To ensure that the public feels comfortable accepting scientific progress and using new tools and products, we'll need to keep them informed about new technologies and reassure them that design engineers have anticipated potential issues and accommodated them.

New technologies in the fields of AI, genomics, and nanotechnology will combine in a myriad of unforeseeable ways to offer promise in everything from increasing productivity to curing diseases. However, we'll need to integrate artificial moral agents into these new technologies to manage their complexity. These AMAs should be able to make decisions that honor privacy, uphold shared ethical standards, protect civil rights and individual liberty, and further the welfare of others. Designing such value-sensitive AMAs won't be easy, but it's necessary and inevitable.

To avoid the bad consequences of autonomous artificial agents, we'll need to direct considerable effort toward designing agents whose decisions and actions might be considered good. What do we mean by "good" in this context? Good chess-playing computers win chess games. Good search engines find the results we want. Good robotic vacuum cleaners clean floors with minimal human supervision. These "goods" are measured against the specific purposes of designers and users. But specifying the kind of good behavior that autonomous systems require isn't as easy. Should a good multipurpose robot rush to a stranger's aid, even if this means a delay in fulfilling tasks for the robot's owner? (Should this be an owner-specified setting?) Should an autonomous agent simply abdicate responsibility to human controllers if all the options it discerns might cause harm to humans? (If so, is it sufficiently autonomous?)

When we talk about what's good in this sense, we enter the domain of ethics and morality. It's important to defer questions about whether a machine can be genuinely ethical or even genuinely autonomous: questions that typically presume that a genuine ethical agent acts intentionally, autonomously, and freely. The present engineering challenge concerns only artificial morality: ways of getting artificial agents to act as if they were moral agents. If we're to trust multipurpose machines, operating untethered from their designers or owners and programmed to respond flexibly in real or virtual environments, we must be confident that their behavior satisfies appropriate norms. This means something more than traditional product safety.

Of course, robots that short-circuit and cause fires are no more tolerable than toasters that do so. An autonomous system that ignorantly causes harm might not be morally blameworthy, any more than a toaster that catches fire can itself be blamed (although its designers might be at fault). But, in complex automata, this kind of blamelessness provides insufficient protection for those who might be harmed. If an autonomous system is to minimize harm, it must be cognizant of possible harmful consequences and select its actions accordingly.

Making ethics explicit

Until recently, designers didn't consider the ways in which they implicitly embedded values in the technologies they produced. An important achievement of ethicists has been to help engineers become aware of their work's ethical dimensions. There's now a movement to bring more attention to unintended consequences resulting from the adoption of information technology. For example, the ease with which information can be copied using computers has undermined legal standards for intellectual-property rights and forced a reevaluation of copyright law. Helen Nissenbaum, who has been at the forefront of this movement, pointed out the interplay between values and technology when she wrote, "In such cases, we cannot simply align the world with the values and principles we adhered to prior to the advent of technological challenges. Rather, we must grapple with the new demands that changes wrought by the presence and use of information technology have placed on values and moral principles." [2]

Attention to the values that are unconsciously built into technology is a welcome development. At the very least, system designers should consider whose values, or what values, they implement. But the morality implicit in artificial agents' actions isn't simply a question of engineering ethics, that is to say, of getting engineers to recognize their ethical assumptions. Given modern computers' complexity, engineers commonly discover that they can't predict how a system will act in a new situation. Hundreds of engineers contribute to each machine's design. Different companies, research centers, and design teams work on individual hardware and software components that make up the final system. The modular design of systems can mean that no single person or group can fully grasp the manner in which the system will interact or respond to a complex flow of new inputs.

As systems get more sophisticated and their ability to function autonomously in different contexts and environments expands, it will become more important for them to have "ethical subroutines" of their own, to borrow a phrase from Star Trek. We want the systems' choices to be sensitive to us and to the things that are important to us, but these machines must be self-governing, capable of assessing the ethical acceptability of the options they face.

Self-governing machines

Implementing AMAs involves a broad range of engineering, ethical, and legal considerations. A full understanding of these issues will require a dialog among philosophers, robotic and software engineers, legal theorists, developmental psychologists, and other social scientists regarding the practicality, possible design strategies, and limits of autonomous AMAs. If there are clear limits in our ability to develop or manage AMAs, then we'll need to turn our attention away from a false reliance on autonomous systems and toward more human intervention in computers and robots' decision-making processes.

Many questions arise when we consider the challenge of designing computer systems that function as the equivalent of moral agents [3, 4]. Can we implement in a computer system or robot the moral theories of philosophers, such as the utilitarianism of Jeremy Bentham and John Stuart Mill, Immanuel Kant's categorical imperative, or Aristotle's virtues? Is it feasible to develop an AMA that follows the Golden Rule, or even Isaac Asimov's laws? How effective are bottom-up strategies, such as genetic algorithms, learning algorithms, or associative learning, for developing moral acumen in software agents? Does moral judgment require consciousness, a sense of self, an understanding of the semantic content of symbols and language, or emotions? At what stage might we consider computational systems to be making judgments, or might we view them as independent actors or AMAs?

We currently can't answer many of these questions, but we can suggest pathways for further research, experimentation, and reflection.

Moral agency for AI

Moral agency is a well-developed philosophical category that outlines criteria for attributing responsibility to humans for their actions. Extending moral agency to artificial entities raises many new issues. For example, what are appropriate criteria for determining success in creating an AMA? Who or what should be held responsible if the AMA performs actions that are harmful, destructive, or illegal? And should the project of developing AMAs be put on hold until we can settle the issues of responsibility?

One practical problem is deciding what values to implement in an AMA. This problem isn't, of course, specific to software agents; the question of what values should direct human behavior has engaged theologians, philosophers, and social theorists for centuries. Among the specific values applicable to AMAs will be those usually listed as the core concerns of computer ethics: data privacy, security, digital rights, and the transnational character of computer networks. However, will we also want to ensure that such technologies don't undermine beliefs about the importance of human character and human moral responsibility that are essential to social cohesion?

Another problem is implementation. Are the cognitive capacities that an AMA would need to instantiate possible within existing technology, or within technology we'll possess in the not-too-distant future?

Philosophers have typically studied the concept of moral agency without worrying about whether they can apply their theories mechanically to make moral decisions tractable. Neither have they worried, typically, about the developmental psychology of moral behavior. So, a substantial question exists whether moral theories such as the categorical imperative or utilitarianism can guide the design of algorithms that could directly support ethical competence in machines or that might allow a developmental approach. As an engineering project, designing AMAs requires specific hypotheses and rigorous methods for evaluating results, but this will require dialog between philosophers and engineers to determine the suitability of traditional ethical theories as a source of engineering ideas.

Another question that naturally arises here is whether AMAs will ever really be moral agents. As a philosophical and legal concept, moral agency is often interpreted as requiring a sentient being with free will. While Ray Kurzweil and Hans Moravec contend that AI research will eventually create new forms of sentient intelligence [5, 6], there are also many detractors. Our own opinions are divided on whether computers given the right programs can properly be said to have minds, the view John Searle attacks as "strong AI" [7]. However, we agree that you can pursue the question of how to program autonomous agents to behave acceptably regardless of your stand on strong AI.

Science fiction or scientific challenge?

Are we now crossing the line into science fiction, or perhaps worse, into that brand of science fantasy often associated with AI? The charge might be justified if we were making bold predictions about the dawn of AMAs or claiming that it's just a matter of time before walking, talking machines will replace those humans to whom we now turn for moral guidance. But we're not futurists, and we don't know whether the apparent technological barriers to AI are real or illusory. Nor are we interested in speculating about what life will be like when your counselor is a robot, or even in predicting whether this will ever come to pass.

Rather, we're interested in the incremental steps arising from present technologies that suggest a need for ethical decision-making capabilities. Perhaps these incremental steps will eventually lead to full-blown AI (a less murderous counterpart to Arthur C. Clarke's HAL, hopefully), but even if they don't, we think that engineers are facing an issue that they can't address alone.

Industrial robots engaged in repetitive mechanical tasks have already caused injury and even death. With the advent of service …
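The question of whether a theory like utilitarianism can directly guide an algorithm's choices can be made concrete with a toy sketch. Everything below (the option names, the harm counts, the single additive score) is an illustrative assumption rather than a proposed design; it shows only what a naive top-down utilitarian calculus for a trolley-style steering decision would look like.

```python
# Toy sketch of a "top-down" utilitarian choice procedure for a
# trolley-style steering decision. All option names and harm counts
# are illustrative assumptions, not a proposed design.

def expected_utility(option):
    """Score an option as the negative sum of the harms it causes."""
    return -sum(option["harms"].values())

def choose_action(options):
    """Return the option the utilitarian calculus ranks highest."""
    return max(options, key=expected_utility)

options = [
    {"name": "stay on current track", "harms": {"deaths": 5}},
    {"name": "switch to side track", "harms": {"deaths": 1}},
]

best = choose_action(options)
print(best["name"])  # -> switch to side track (the lesser total harm)
```

Even this toy version begs the hard questions: the score says nothing about who the victims are (five workers versus a child), and it presumes that each option's consequences can be enumerated and compared on a single scale.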
… corresponding to a distinction ethicists make between absolute and prima facie duties. Making a moral robot would be a matter of finding the right set of constraints and the right formulas for resolving conflicts. The result would be a kind of "bounded morality," capable of behaving inoffensively so long as any situation that's encountered fits within the general constraints its designers predicted. Where might such constraints come from?

Philosophers confronted with this problem will likely suggest a top-down approach of encoding a particular ethical theory in software. This theoretical knowledge could then be used to rank options for moral acceptability. With respect to computability, however, the moral principles philosophers propose leave much to be desired, often suggesting incompatible courses of action or failing to recommend any course of action. In some respects too, key ethical principles appear to be computationally intractable, putting them beyond the limits of effective computation because of the essentially limitless consequences of any action [10].

But if we can't implement an ethical theory as a computer program, then how can such theories provide sufficient guidelines for human action? So, thinking about what machines are or aren't capable of might lead to deeper reflection about just what a moral theory is supposed to be. Some philosophers will regard the computational approach to ethics as misguided, preferring to see ethical human beings as exemplifying certain virtues that are rooted deeply in our own psychological nature. The problem of AMAs, from this perspective, isn't how to give them abstract theoretical knowledge but how to embody the right tendencies to react in the world. It's a problem of moral psychology, not moral calculation.

Psychologists confronted with the problem of constraining moral decision making will likely focus on how children develop a sense of morality as they mature into adults. A developmental approach might be the most practicable route to machine ethics. But given what we know about the unreliability of this process for developing moral human beings, there's a legitimate question about how reliable trying to train AMAs would be.

Psychologists also focus on the ways in which we construct our reality; become aware of self, others, and our environment; and navigate through the complex maze of moral issues in our daily life. Again, the complexity and tremendous variability of these processes in humans underscore the challenge of designing AMAs.

Beyond stoicism

Introducing psychological aspects will seem to some philosophers to be confusing the ethics that people have with the ethics they should have. But to insist that we should pursue machine ethics independently of the facts of human psychology is, in our view, to take a premature stand on important questions such as the extent to which the development of appropriate emotional reactions is a crucial part of normal moral development. The relationship between emotions and ethics is an ancient issue that also has resonance in more recent science fiction. Are the emotion-suppressing Vulcans of Star Trek inherently capable of better judgment than the more intuitive, less rational, more exuberant humans from Earth? Does Spock's utilitarian mantra of "The needs of the many outweigh the needs of the few" represent the rational pinnacle of ethics as he engages in an admirable act of self-sacrifice? Or do the subsequent efforts of Kirk and the rest of the Enterprise's human crew to risk their own lives out of a sense of personal obligation to their friend represent a higher pinnacle of moral sensibility?

The new field of machine ethics must consider these questions, exploring the strengths and weaknesses of the various approaches to programming AMAs, and laying the groundwork for engineering AMAs in a philosophically and cognitively sophisticated way. This task requires dialog among philosophers, robotic engineers, and social planners regarding the practicality, possible design strategies, and limits of autonomous moral agents.

Serious questions remain about the extent to which we can approximate or simulate moral decision making in a "mindless" machine [11]. A central issue is whether there are mental faculties (emotions, a sense of self, awareness of the affective state of others, and consciousness) that might be difficult (if not impossible) to simulate but that would be essential for true AI and machine ethics. For example, when it comes to making ethical decisions, the interplay between rationality and emotion is complex. While the Stoic view of ethics sees emotions as irrelevant and dangerous to making ethically correct decisions, the more recent literature on emotional intelligence suggests that emotional input is essential to rational behavior [12]. Although ethics isn't simply a matter of doing whatever "feels right," it might be essential to cultivate the right feelings, sentiments, and virtues. Only pursuit of the engineering project of developing AMAs will answer the question of how closely we can approximate ethical behavior without these.

The new field of machine ethics must also develop criteria and tests for evaluating an artificial entity's moral aptitude. Recognizing one limitation of the original Turing Test, Colin Allen, along with Gary Varner and Jason Zinser, considered the possibility of a specialized Moral Turing Test (MTT) that would be less dependent on conversational skills than the original Turing Test:

    To shift the focus from conversational ability to action, an alternative MTT could be structured in such a way that the "interrogator" is given pairs of descriptions of actual, morally-significant actions of a human and an AMA, purged of all references that would identify the agents. If the interrogator correctly identifies the machine at a level above chance, then the machine has failed the test [10].

They noted several problems with this test, including that indistinguishability from humans might set too low a standard for our AMAs.

Scientific knowledge about the complexity, subtlety, and richness of human cognitive and emotional faculties has grown exponentially during the past half century. Designing artificial systems that function convincingly and autonomously in real physical and social environments requires much more than abstract logical representation of the relevant facts. Skills that we take for granted, and that children learn at a very young age, such as navigating around a room or appreciating the semantic content of words and symbols, have provided the biggest challenge to our best roboticists.
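The Moral Turing Test's failure criterion, identification of the machine "at a level above chance," can be given a precise statistical reading. The sketch below treats the interrogator's correct identifications as draws from a binomial distribution; the trial counts and the 0.05 significance threshold are hypothetical choices for illustration, not part of the original proposal.

```python
# Sketch of the MTT's "above chance" failure criterion as a one-sided
# binomial test. Trial counts and the 0.05 threshold are hypothetical
# assumptions for illustration.
from math import comb

def p_value_above_chance(correct, trials, chance=0.5):
    """Probability of at least `correct` identifications by pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

def mtt_fails(correct, trials, alpha=0.05):
    """The machine fails if the hit rate is significantly above chance."""
    return p_value_above_chance(correct, trials) < alpha

print(mtt_fails(50, 100))  # -> False: chance-level identification, machine passes
print(mtt_fails(65, 100))  # -> True: reliably identified, machine fails
```

On this reading, a machine could also pass simply because the interrogator saw too few trials to reach significance, so test design matters as much as the criterion itself.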
7. J.R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–457.
8. P. Danielson, Artificial Morality: Virtuous Robots for Virtual Games, Routledge, 1992.
9. M. Anderson, S.L. Anderson, and C. Armen, eds., "Machine Ethics," AAAI Fall Symp., tech. report FS-05-06, AAAI Press, 2005.