
Machine Ethics

Why Machine Ethics?


Colin Allen, Indiana University

Wendell Wallach, Yale University

Iva Smit, E&E Consultants

A runaway trolley is approaching a fork in the tracks. If the trolley runs on its current track, it will kill a work crew of five. If the driver steers the train down the other branch, the trolley will kill a lone worker. If you were driving the trolley, what would you do? What would a computer or robot do?

Machine ethics isn't merely science fiction; it's a topic that requires serious consideration, given the rapid emergence of increasingly complex autonomous software agents and robots.

Trolley cases, first introduced by philosopher Philippa Foot in 1967,1 and a staple of introductory ethics courses, have multiplied in the past four decades. What if it's a bystander, rather than the driver, who has the power to switch the trolley's course? What if preventing the five deaths requires pushing another spectator off a bridge onto the tracks? These variants evoke different intuitive responses.

Given the advent of modern "driverless" train systems, which are now common at airports and beginning to appear in more complicated situations such as the London Underground and the Paris and Copenhagen Metro systems, could trolley cases be one of the first frontiers for machine ethics? Machine ethics (also known as machine morality, artificial morality, or computational ethics) is an emerging field that seeks to implement moral decision-making faculties in computers and robots. Is it too soon to be broaching this topic? We don't think so.

Driverless systems put machines in the position of making split-second decisions that could have life or death implications. As a rail network's complexity increases, the likelihood of dilemmas not unlike the basic trolley case also increases. How, for example, do we want our automated systems to compute where to steer an out-of-control train? Suppose our driverless train knew that there were five railroad workers on one track and a child on the other. Would we want the system to factor this information into its decision?

The driverless trains of today are, of course, ethically oblivious. Can and should software engineers attempt to enhance their software systems to explicitly represent ethical dimensions of situations in which decisions must be made? It's easy to argue from a position of ignorance that such a goal is impossible to achieve. But precisely what are the challenges and obstacles for implementing machine ethics? The computer revolution is continuing to promote reliance on automation, and autonomous systems are coming whether we like it or not. Will they be ethical?

Good and bad artificial agents?
This isn't about the horrors of technology. Yes, the machines are coming. Yes, their existence will have unintended effects on our lives, not all of them good. But no, we don't believe that increasing reliance on autonomous systems will undermine our basic humanity. Neither will advanced robots enslave or exterminate us, in the best traditions of science fiction. We humans have always adapted to our technological products, and the benefits of having autonomous machines will most likely outweigh the costs.

But optimism doesn't come for free. We can't just sit back and hope things will turn out for the best. We already have semiautonomous robots and software agents that violate ethical standards as a matter of course. A search engine, for example, might collect data that's legally considered to be private, unbeknownst to the user who initiated the query.

Furthermore, with the advent of each new technology, futuristic speculation raises public concerns regarding potential dangers (see the "Skeptics of Driverless Trains" sidebar). In the case of AI and robotics, fearful scenarios range from the future takeover of humanity by a superior form of AI to the havoc created by endlessly reproducing nanobots.



Skeptics of Driverless Trains
Engineers insist that driverless train systems are safe—safer than human drivers, in fact. But the public has always been skeptical. The London Underground first tested driverless trains more than four decades ago, in April 1964. But driverless trains faced political resistance from rail workers who believed their jobs were threatened and from passengers who weren't entirely convinced of the safety claims, so London Transport continued to give human drivers responsibility for driving the trains through the stations. But computers are now driving Central Line trains in London through stations, even though human drivers remain in the cab. Most passengers likely believe that human drivers are more flexible and able to deal with emergencies than the computerized controllers. But this might be human hubris. Morten Sondergaard, in charge of safety for the Copenhagen Metro, asserts that "Automatic trains are safe and more flexible in fall-back situations because of the speed with which timetables can be changed."1

Nevertheless, despite advances in technology, passengers remain skeptical. Parisian planners claimed that the only problems with driverless trains are "political, not technical."1 No doubt, some resistance can be overcome simply by installing driverless trains and establishing a safety record, as is already beginning to happen in Korea, Barcelona, Paris, Copenhagen, and London. But we feel sure that most passengers would still think that there are crisis situations beyond the scope of any programming, where human judgment would be preferred. In some of those situations, the relevant judgment would involve ethical considerations.

Reference

1. M. Knutton, "The Future Lies in Driverless Trains," Int'l Railway J., 1 June 2002; www.findarticles.com/p/articles/mi_m0BQQ/is_6_42/ai_88099079.

While some of these fears are farfetched, they underscore possible consequences of poorly designed technology. To ensure that the public feels comfortable accepting scientific progress and using new tools and products, we'll need to keep them informed about new technologies and reassure them that design engineers have anticipated potential issues and accommodated for them.

New technologies in the fields of AI, genomics, and nanotechnology will combine in a myriad of unforeseeable ways to offer promise in everything from increasing productivity to curing diseases. However, we'll need to integrate artificial moral agents into these new technologies to manage their complexity. These AMAs should be able to make decisions that honor privacy, uphold shared ethical standards, protect civil rights and individual liberty, and further the welfare of others. Designing such value-sensitive AMAs won't be easy, but it's necessary and inevitable.

To avoid the bad consequences of autonomous artificial agents, we'll need to direct considerable effort toward designing agents whose decisions and actions might be considered good. What do we mean by "good" in this context? Good chess-playing computers win chess games. Good search engines find the results we want. Good robotic vacuum cleaners clean floors with minimal human supervision. These "goods" are measured against the specific purposes of designers and users. But specifying the kind of good behavior that autonomous systems require isn't as easy. Should a good multipurpose robot rush to a stranger's aid, even if this means a delay in fulfilling tasks for the robot's owner? (Should this be an owner-specified setting?) Should an autonomous agent simply abdicate responsibility to human controllers if all the options it discerns might cause harm to humans? (If so, is it sufficiently autonomous?)

When we talk about what's good in this sense, we enter the domain of ethics and morality. It's important to defer questions about whether a machine can be genuinely ethical or even genuinely autonomous—questions that typically presume that a genuine ethical agent acts intentionally, autonomously, and freely. The present engineering challenge concerns only artificial morality: ways of getting artificial agents to act as if they were moral agents. If we're to trust multipurpose machines, operating untethered from their designers or owners and programmed to respond flexibly in real or virtual environments, we must be confident that their behavior satisfies appropriate norms. This means something more than traditional product safety.

Of course, robots that short-circuit and cause fires are no more tolerable than toasters that do so. An autonomous system that ignorantly causes harm might not be morally blameworthy, any more than a toaster that catches fire can itself be blamed (although its designers might be at fault). But, in complex automata, this kind of blamelessness provides insufficient protection for those who might be harmed. If an autonomous system is to minimize harm, it must be cognizant of possible harmful consequences and select its actions accordingly.

Making ethics explicit
Until recently, designers didn't consider the ways in which they implicitly embedded values in the technologies they produced. An important achievement of ethicists has been to help engineers become aware of their work's ethical dimensions. There's now a movement to bring more attention to unintended consequences resulting from the adoption of information technology. For example, the ease with which information can be copied using computers has undermined legal standards for intellectual-property rights and forced a reevaluation of copyright law. Helen Nissenbaum, who has been at the forefront of this movement, pointed out the interplay between values and technology when she wrote, "In such cases, we cannot simply align the world with the values and principles we adhered to prior to the advent of technological challenges. Rather, we must grapple with the new demands that changes wrought by the presence and use of information technology have placed on values and moral principles."2

Attention to the values that are unconsciously built into technology is a welcome development. At the very least, system designers should consider whose values, or what values, they implement. But the morality implicit in artificial agents' actions isn't simply a question of engineering ethics—that is to say, of getting engineers to recognize their ethical assumptions. Given modern computers' complexity, engineers commonly discover that they can't predict how a system will act in a new situation.




Hundreds of engineers contribute to each machine's design. Different companies, research centers, and design teams work on individual hardware and software components that make up the final system. The modular design of systems can mean that no single person or group can fully grasp the manner in which the system will interact or respond to a complex flow of new inputs.

As systems get more sophisticated and their ability to function autonomously in different contexts and environments expands, it will become more important for them to have "ethical subroutines" of their own, to borrow a phrase from Star Trek. We want the systems' choices to be sensitive to us and to the things that are important to us, but these machines must be self-governing, capable of assessing the ethical acceptability of the options they face.

Self-governing machines
Implementing AMAs involves a broad range of engineering, ethical, and legal considerations. A full understanding of these issues will require a dialog among philosophers, robotic and software engineers, legal theorists, developmental psychologists, and other social scientists regarding the practicality, possible design strategies, and limits of autonomous AMAs. If there are clear limits in our ability to develop or manage AMAs, then we'll need to turn our attention away from a false reliance on autonomous systems and toward more human intervention in computers and robots' decision-making processes.

Many questions arise when we consider the challenge of designing computer systems that function as the equivalent of moral agents.3,4 Can we implement in a computer system or robot the moral theories of philosophers, such as the utilitarianism of Jeremy Bentham and John Stuart Mill, Immanuel Kant's categorical imperative, or Aristotle's virtues? Is it feasible to develop an AMA that follows the Golden Rule, or even Isaac Asimov's laws? How effective are bottom-up strategies—such as genetic algorithms, learning algorithms, or associative learning—for developing moral acumen in software agents? Does moral judgment require consciousness, a sense of self, an understanding of the semantic content of symbols and language, or emotions? At what stage might we consider computational systems to be making judgments or might we view them as independent actors or AMAs?

We currently can't answer many of these questions, but we can suggest pathways for further research, experimentation, and reflection.

Moral agency for AI
Moral agency is a well-developed philosophical category that outlines criteria for attributing responsibility to humans for their actions. Extending moral agency to artificial entities raises many new issues. For example, what are appropriate criteria for determining success in creating an AMA? Who or what should be held responsible if the AMA performs actions that are harmful, destructive, or illegal? And should the project of developing AMAs be put on hold until we can settle the issues of responsibility?

One practical problem is deciding what values to implement in an AMA. This problem isn't, of course, specific to software agents—the question of what values should direct human behavior has engaged theologians, philosophers, and social theorists for centuries. Among the specific values applicable to AMAs will be those usually listed as the core concerns of computer ethics—data privacy, security, digital rights, and the transnational character of computer networks. However, will we also want to ensure that such technologies don't undermine beliefs about the importance of human character and human moral responsibility that are essential to social cohesion?

Another problem is implementation. Are the cognitive capacities that an AMA would need to instantiate possible within existing technology, or within technology we'll possess in the not-too-distant future?

Philosophers have typically studied the concept of moral agency without worrying about whether they can apply their theories mechanically to make moral decisions tractable. Neither have they worried, typically, about the developmental psychology of moral behavior. So, a substantial question exists whether moral theories such as the categorical imperative or utilitarianism can guide the design of algorithms that could directly support ethical competence in machines or that might allow a developmental approach. As an engineering project, designing AMAs requires specific hypotheses and rigorous methods for evaluating results, but this will require dialog between philosophers and engineers to determine the suitability of traditional ethical theories as a source of engineering ideas.

Another question that naturally arises here is whether AMAs will ever really be moral agents. As a philosophical and legal concept, moral agency is often interpreted as requiring a sentient being with free will. While Ray Kurzweil and Hans Moravec contend that AI research will eventually create new forms of sentient intelligence,5,6 there are also many detractors. Our own opinions are divided on whether computers given the right programs can properly be said to have minds—the view John Searle attacks as "strong AI."7 However, we agree that you can pursue the question of how to program autonomous agents to behave acceptably regardless of your stand on strong AI.

Science fiction or scientific challenge?
Are we now crossing the line into science fiction—or perhaps worse, into that brand of science fantasy often associated with AI? The charge might be justified if we were making bold predictions about the dawn of AMAs or claiming that it's just a matter of time before walking, talking machines will replace those humans to whom we now turn for moral guidance. But we're not futurists, and we don't know whether the apparent technological barriers to AI are real or illusory. Nor are we interested in speculating about what life will be like when your counselor is a robot, or even in predicting whether this will ever come to pass.

Rather, we're interested in the incremental steps arising from present technologies that suggest a need for ethical decision-making capabilities. Perhaps these incremental steps will eventually lead to full-blown AI—a less murderous counterpart to Arthur C. Clarke's HAL, hopefully—but even if they don't, we think that engineers are facing an issue that they can't address alone.

Industrial robots engaged in repetitive mechanical tasks have already caused injury and even death.



With the advent of service robots, robotic systems are no longer confined to controlled industrial environments, where they come into contact only with trained workers. Small robot pets, such as Sony's AIBO, are the harbinger of larger robot appliances. Rudimentary robot vacuum cleaners, robot couriers in hospitals, and robot guides in museums have already appeared. Companies are directing considerable attention at developing service robots that will perform basic household tasks and assist the elderly and the homebound.

Although 2001 has passed and HAL remains fiction, and it's a safe bet that the doomsday scenarios of the Terminator and Matrix movies will not be realized before their sell-by dates of 2029 and 2199, we're already at a point where engineered systems make decisions that can affect our lives. For example, Colin Allen recently drove from Texas to California but didn't attempt to use a particular credit card until nearing the Pacific coast. When he tried to use the card to refuel his car, it was rejected, so he drove to another station. Upon inserting the card in the pump, a message instructed him to hand the card to a cashier inside the store. Instead, Allen telephoned the toll-free number on the back of the card. The credit card company's centralized computer had evaluated Allen's use of the card almost 2,000 miles from home, with no trail of purchases leading across the country, as suspicious, so it automatically flagged his account. The human agent at the credit card company listened to Allen's story and removed the flag.

Of course, denying someone's request to buy a tank of fuel isn't typically a matter of huge moral importance. But how would we feel if an automated medical system denied our loved one a life-saving operation?

A new field of enquiry: Machine ethics
The challenge of ensuring that robotic systems will act morally has held a fascination ever since Asimov's three laws appeared in I, Robot. A half century of reflection and research into AI has moved us from science fiction toward the beginning of more careful philosophical analysis of the prospects for implementing machine ethics. Better hardware and improved design strategies are combining to make computational experiments in machine ethics feasible. Since Peter Danielson's efforts to develop virtuous robots for virtual games,8 many researchers have attempted to implement ethical capacities in AI. Most recently, the various contributions to the AAAI Fall Symposium on Machine Ethics included a learning model based on prima facie duties (those with soft constraints) for applying informed consent, an approach to mechanizing deontic logic, an artificial neural network for evaluating ethical decisions, and a tool for case-based rule analysis.9

Machine ethics extends the field of computer ethics beyond concern for what people do with their computers to questions about what the machines themselves do. Furthermore, it differs from much of what goes under the heading of the philosophy of technology—a subdiscipline that raises important questions about human values such as freedom and dignity in increasingly technological societies. Old-style philosophy of technology was mostly reactive and sometimes motivated by the specter of unleashing powerful processes over which we lack control. New-wave technology philosophers are more proactive, seeking to make engineers aware of the values they bring to any design process. Machine ethics goes one step further, seeking to build ethical decision-making capacities directly into the machines. The field is fundamentally concerned with advancing the relevant technologies.

We see the benefits of having machines that operate with increasing autonomy, but we want to know how to make them behave ethically. The development of AMAs won't hinder industry. Rather, the capacity for moral decision making will allow deployment of AMAs in contexts that might otherwise be considered too risky.

Machine ethics is just as much about human decision making as it is about the philosophical and practical issues of implementing AMAs. Reflection about and experimentation in building AMAs forces us to think deeply about how we humans function, which of our abilities we can implement in the machines we design, and what characteristics truly distinguish us from animals or new forms of intelligence that we create. Just as AI has stimulated new lines of enquiry in the philosophy of mind, machine ethics potentially can stimulate new lines of enquiry in ethics. Robotics and AI laboratories could become experimental centers for testing the applicability of decision making in artificial systems and the ethical viability of those decisions, as well as for testing the computational limits of common ethical theories.

Finding the right approach
Engineers are very good at building systems for well-specified tasks, but there's no clear task specification for moral behavior. Talk of moral standards might seem to imply an accepted code of behavior, but considerable disagreement exists about moral matters. How to build AMAs that accommodate these differences is a question that requires input from a variety of perspectives. Talk of ethical subroutines also seems to suggest a particular conception of how to implement ethical behavior. However, whether algorithms or lines of software code can effectively represent ethical knowledge requires a sophisticated appreciation of what that knowledge consists of, and of how ethical theory relates to the cognitive and emotional aspects of moral behavior. The effort to clarify these issues and develop alternative ways of thinking about them takes on special dimensions in the context of artificial agents. We must assess any theory of what it means to be ethical or to make an ethical decision in light of the feasibility of implementing the theory as a computer program.

Different specialists will likely take different approaches to implementing an AMA. Engineers and computer scientists might treat ethics as simply an additional set of constraints, to be satisfied like any other constraint on successful program operation. From this perspective, there's nothing distinctive about moral reasoning. But, questions remain about what those additional constraints should be and whether they should be very specific ("Obey posted speed limits") or more abstract ("Never cause harm to a human being"). There are also questions regarding whether to treat them as hard constraints, never to be violated, or soft constraints, which may be stretched in pursuit of other goals, corresponding to a distinction ethicists make between absolute and prima facie duties.
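To make the hard/soft distinction concrete, the following minimal sketch (in Python) treats hard constraints as filters that eliminate options outright and soft constraints as weighted penalties traded off against the system's goal. Every predicate, action name, and weight here is invented for illustration; nothing is drawn from a deployed system.

# Hypothetical sketch: hard constraints filter candidate actions outright;
# soft constraints merely penalize them. All predicates, names, and weights
# are invented for illustration, not taken from any real system.

def harms_human(action):
    """Stand-in predicate for the abstract rule 'Never cause harm to a human being.'"""
    return action.get("harms_human", False)

def choose_action(candidate_actions):
    # Hard constraint: discard any action that would harm a human.
    permitted = [a for a in candidate_actions if not harms_human(a)]
    if not permitted:
        return None  # abdicate to a human controller (see the discussion above)

    # Soft constraints: penalties that can be traded off against the goal.
    def cost(action):
        penalty = 0.0
        if action.get("exceeds_speed_limit", False):
            penalty += 5.0  # "Obey posted speed limits" treated as a prima facie duty
        if action.get("delays_owner_task", False):
            penalty += 1.0  # inconveniencing the owner is only mildly penalized
        return penalty - action.get("goal_value", 0.0)

    return min(permitted, key=cost)

if __name__ == "__main__":
    options = [
        {"name": "swerve", "exceeds_speed_limit": True, "goal_value": 3.0},
        {"name": "brake", "delays_owner_task": True, "goal_value": 2.0},
        {"name": "continue", "harms_human": True, "goal_value": 4.0},
    ]
    print(choose_action(options)["name"])  # prints "brake"

Even in this toy form, the open questions reappear: someone must choose the predicates and the weights, and the fallback behavior when every permitted option carries a penalty is itself a design decision.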




Making a moral robot would be a matter of finding the right set of constraints and the right formulas for resolving conflicts. The result would be a kind of "bounded morality," capable of behaving inoffensively so long as any situation that's encountered fits within the general constraints its designers predicted.

Where might such constraints come from? Philosophers confronted with this problem will likely suggest a top-down approach of encoding a particular ethical theory in software. This theoretical knowledge could then be used to rank options for moral acceptability. With respect to computability, however, the moral principles philosophers propose leave much to be desired, often suggesting incompatible courses of action or failing to recommend any course of action. In some respects too, key ethical principles appear to be computationally intractable, putting them beyond the limits of effective computation because of the essentially limitless consequences of any action.10
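As a deliberately toy illustration of what "encoding a theory and ranking options" might look like, the sketch below scores each option by a crude act-utilitarian sum over affected parties. The scenario, the parties, and the utility numbers are all stipulated by hand; the intractability just described appears the moment such numbers must be estimated rather than assumed.

# Toy top-down ranking: score each option by the sum of stipulated utilities
# for every affected party, then sort. A stand-in for "encoding an ethical
# theory in software," not an endorsement of any particular theory or weighting.

from typing import Dict, List

def act_utilitarian_score(option: Dict) -> float:
    # Bentham/Mill-style aggregate: total utility over all affected parties.
    return sum(option["utilities"].values())

def rank_options(options: List[Dict]) -> List[Dict]:
    return sorted(options, key=act_utilitarian_score, reverse=True)

if __name__ == "__main__":
    options = [
        {"name": "stay on current track",
         "utilities": {"work crew of five": -5.0, "lone worker": 0.0}},
        {"name": "switch to side track",
         "utilities": {"work crew of five": 0.0, "lone worker": -1.0}},
    ]
    for option in rank_options(options):
        print(f"{option['name']}: {act_utilitarian_score(option):+.1f}")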
But if we can’t implement an ethical the- artificial entity’s moral aptitude. Recogniz-
ory as a computer program, then how can to simulate but that would ing one limitation of the original Turing Test,
such theories provide sufficient guidelines Colin Allen, along with Gary Varner and
for human action? So, thinking about what
machines are or aren’t capable of might lead
be essential for true AI and Jason Zinser, considered the possibility of a
specialized Moral Turing Test (MTT) that
to deeper reflection about just what a moral would be less dependent on conversational
theory is supposed to be. Some philosophers machine ethics. skills than the original Turing Test:
will regard the computational approach to
ethics as misguided, preferring to see ethical To shift the focus from conversational ability to
action, an alternative MTT could be structured
human beings as exemplifying certain virtues emotion-suppressing Vulcans of Star Trek in such a way that the “interrogator” is given
that are rooted deeply in our own psycho- inherently capable of better judgment than pairs of descriptions of actual, morally-signif-
logical nature. The problem of AMAs, from the more intuitive, less rational, more exu- icant actions of a human and an AMA, purged
this perspective, isn’t how to give them berant humans from Earth? Does Spock’s of all references that would identify the agents.
If the interrogator correctly identifies the
abstract theoretical knowledge but how to utilitarian mantra of “The needs of the many machine at a level above chance, then the
embody the right tendencies to react in the outweigh the needs of the few” represent the machine has failed the test.10
world. It’s a problem of moral psychology, rational pinnacle of ethics as he engages in
not moral calculation. an admirable act of self-sacrifice? Or do the They noted several problems with this test,
Psychologists confronted with the prob- subsequent efforts of Kirk and the rest of the including that indistinguishability from
lem of constraining moral decision making Enterprise’s human crew to risk their own humans might set too low a standard for our
will likely focus on how children develop a lives out of a sense of personal obligation to AMAs.
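One way to operationalize the quoted test is statistical: the machine "passes" only if the interrogator's identifications can't be distinguished from guessing. The sketch below is our own illustration of that reading, not part of Allen, Varner, and Zinser's proposal; the number of pairs and the 5 percent significance threshold are assumed conventions.

# Illustrative scoring for a Moral Turing Test session. Given the interrogator's
# guesses over N pairs of anonymized action descriptions, the machine fails if
# the interrogator picked it out more often than chance plausibly allows
# (a one-sided binomial test; the alpha level is an assumed convention).

from math import comb

def p_value_above_chance(correct, trials):
    # Probability of at least `correct` hits in `trials` fair coin flips.
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

def machine_fails_mtt(correct_identifications, num_pairs, alpha=0.05):
    # True if the interrogator beat chance, i.e., the AMA was detectable.
    return p_value_above_chance(correct_identifications, num_pairs) < alpha

if __name__ == "__main__":
    print(machine_fails_mtt(21, 30))  # True: 21 of 30 correct is detectably above chance
    print(machine_fails_mtt(17, 30))  # False: 17 of 30 is consistent with guessing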
Scientific knowledge about the complexity, subtlety, and richness of human cognitive and emotional faculties has grown exponentially during the past half century. Designing artificial systems that function convincingly and autonomously in real physical and social environments requires much more than abstract logical representation of the relevant facts. Skills that we take for granted, and that children learn at a very young age, such as navigating around a room or appreciating the semantic content of words and symbols, have provided the biggest challenge to our best roboticists.



Some of the decisions we call moral decisions might be quite easy to implement in computers, while simulating skill at tackling other kinds of ethical dilemmas is well beyond our present knowledge. Regardless of how quickly or how far we progress in developing AMAs, in the process of engaging this challenge we will make significant strides in our understanding of what truly remarkable creatures we humans are. The exercise of thinking through the practical requirements of ethical decision making with a view to implementing similar faculties into robots is thus an exercise in self-understanding. We hope that readers will enthusiastically pick up where we've left off and take the next steps toward moving this project from theory to practice, from philosophy to engineering.

Acknowledgments
We're grateful for the comments of the anonymous IEEE Intelligent Systems referees and for Susan and Michael Anderson's help and encouragement.

References

1. P. Foot, "The Problem of Abortion and the Doctrine of Double Effect," Oxford Rev., vol. 5, 1967, pp. 5–15.

2. H. Nissenbaum, "How Computer Systems Embody Values," Computer, vol. 34, no. 3, 2001, pp. 120, 118–119.

3. J. Gips, "Towards the Ethical Robot," Android Epistemology, K. Ford, C. Glymour, and P. Hayes, eds., MIT Press, 1995, pp. 243–252.

4. C. Allen, I. Smit, and W. Wallach, "Artificial Morality: Top-Down, Bottom-Up, and Hybrid Approaches," to be published in Ethics and Information Technology, vol. 7, 2006, pp. 149–155.

5. R. Kurzweil, The Singularity Is Near: When Humans Transcend Biology, Viking Adult, 2005.

6. H. Moravec, Robot: Mere Machine to Transcendent Mind, Oxford Univ. Press, 2000.

7. J.R. Searle, "Minds, Brains, and Programs," Behavioral and Brain Sciences, vol. 3, no. 3, 1980, pp. 417–457.

8. P. Danielson, Artificial Morality: Virtuous Robots for Virtual Games, Routledge, 1992.

9. M. Anderson, S.L. Anderson, and C. Armen, eds., "Machine Ethics," AAAI Fall Symp., tech report FS-05-06, AAAI Press, 2005.

10. C. Allen, G. Varner, and J. Zinser, "Prolegomena to Any Future Artificial Moral Agent," J. Experimental and Theoretical Artificial Intelligence, vol. 12, no. 3, 2000, pp. 251–261.

11. L. Floridi and J.W. Sanders, "On the Morality of Artificial Agents," Minds and Machines, vol. 14, no. 3, 2004, pp. 349–379.

12. A. Damasio, Descartes' Error, Avon, 1994.

The Authors

Colin Allen is a professor in the Department of History and Philosophy of Science and in the Cognitive Science Program at Indiana University, Bloomington, where he's also a core faculty member of the Center for the Integrative Study of Animal Behavior and an adjunct faculty member in the Department of Philosophy. His main research interests are the theoretical and philosophical issues in the scientific study of animal cognition, especially related to the philosophy of science, philosophy of biology, and philosophy of mind. He received his PhD in philosophy from UCLA. He's a member of the American Philosophical Association, AAAI, Philosophy of Science Association, Society for Philosophy and Psychology, and American Association for the Advancement of Science. Contact him at the Dept. of History and Philosophy of Science, 1011 E. Third St., Goodbody Hall 130, Indiana Univ., Bloomington, IN 47405; colallen@indiana.edu.

Wendell Wallach is a lecturer and project coordinator at Yale University's Interdisciplinary Center for Bioethics. At Yale, he chairs the Technology and Ethics working research group, coordinates programs on the dialog between science and religion, and leads a seminar series for bioethics interns. He's also a member of several working research groups in the Interdisciplinary Center for Bioethics and Yale Law School studying neuroethics and ethical and legal issues posed by new technologies. He received his M.Ed. from Harvard University. He's a member of the AAAI and the Society for the Study of Artificial Intelligence and Simulation of Behavior. Contact him at the Yale Institution for Social and Policy Studies, Interdisciplinary Center for Bioethics, PO Box 208209, New Haven, CT 06520-8209; wwallach@comcast.net or wendell.wallach@yale.edu.

Iva Smit is an independent consultant. Her key assignments include helping organizations with change management and dealing with organizational cultures, designing and developing AI-based decision-making and simulation systems, and guiding multinational organizations in applying such systems in their everyday practices. She received her PhD in organizational and health psychology from Utrecht University. Contact her at E&E Consultants, Cranenburgsestraat 23-68, 6561 AM Groesbeek, Netherlands; iva.smit@chello.nl.
