
AI Magazine Volume 28 Number 4 (2007) (© AAAI)


Machine Ethics: Creating an Ethical Intelligent Agent

Michael Anderson and Susan Leigh Anderson

■ The newly emerging field of machine ethics (Anderson and Anderson 2006) is concerned with adding an ethical dimension to machines. Unlike computer ethics—which has traditionally focused on ethical issues surrounding humans’ use of machines—machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. In this article we discuss the importance of machine ethics, the need for machines that represent ethical principles explicitly, and the challenges facing those working on machine ethics. We also give an example of current research in the field that shows that it is possible, at least in a limited domain, for a machine to abstract an ethical principle from examples of correct ethical judgments and use that principle to guide its own behavior.

The ultimate goal of machine ethics, we believe, is to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or these principles in decisions it makes about possible courses of action it could take. We need to make a distinction between what James Moor has called an “implicit ethical agent” and an “explicit ethical agent” (Moor 2006). According to Moor, a machine that is an implicit ethical agent is one that has been programmed to behave ethically, or at least avoid unethical behavior, without an explicit representation of ethical principles. It is constrained in its behavior by its designer who is following ethical principles. A machine that is an explicit ethical agent, on the other hand, is able to calculate the best action in ethical dilemmas using ethical principles. It can “represent ethics explicitly and then operate effectively on the basis of this knowledge.” Using Moor’s terminology, most of those working on machine ethics would say that the ultimate goal is to create a machine that is an explicit ethical agent.

We are, here, primarily concerned with the ethical decision making itself, rather than how a machine would gather the information needed to make the decision and incorporate it into its general behavior. It is important to see this as a separate and considerable challenge. It is separate because having all the information and facility in the world won’t, by itself, generate ethical behavior in a machine. One needs to turn to the branch of philosophy that is concerned with ethics for insight into what is considered to be ethically acceptable behavior. It is a considerable challenge because, even among experts, ethics has not been completely codified. It is a field that is still evolving. We shall argue that one of the advantages of working on machine ethics is that it might lead to breakthroughs in ethical theory, since machines are well-suited for testing the results of consistently following a particular ethical theory.

One other point should be made in introducing the subject of machine ethics. Ethics can be seen as both easy and hard. It appears easy because we all make ethical decisions on a daily basis. But that doesn’t mean that we are all experts in ethics. It is a field that requires much study and experience. AI researchers must have respect for the expertise of ethicists just as ethicists must appreciate the expertise of AI researchers. Machine ethics is an inherently interdisciplinary field.

The Importance of Machine Ethics

Why is the field of machine ethics important? There are at least three reasons that can be given. First, there are ethical ramifications to what machines currently do and are projected to do in the future. To neglect this aspect of machine behavior could have serious repercussions. South Korea has recently mustered more than 30 companies and 1000 scientists to the end of putting “a robot in every home by 2010” (Onishi 2006). DARPA’s grand challenge to have a vehicle drive itself across 132 miles of desert terrain has been met, and a new grand challenge is in the works that will have vehicles maneuvering in an urban setting. The United States Army’s Future Combat Systems program is developing armed robotic vehicles that will support ground troops with “direct-fire” and antitank weapons. From family cars that drive themselves and machines that discharge our daily chores with little or no assistance from us, to fully autonomous robotic entities that will begin to challenge our notions of the very nature of intelligence, it is clear that machines such as these will be capable of causing harm to human beings unless this is prevented by adding an ethical component to them.

Second, it could be argued that humans’ fear of the possibility of autonomous intelligent machines stems from their concern about whether these machines will behave ethically, so the future of AI may be at stake. Whether society allows AI researchers to develop anything like autonomous intelligent machines may hinge on whether they are able to build in safeguards against unethical behavior. From the murderous robot uprising in the 1920 play R.U.R. (Capek 1921) and the deadly coup d’état perpetrated by the HAL 9000 computer in 2001: A Space Odyssey (Clarke 1968), to The Matrix virtual reality simulation for the pacification and subjugation of human beings by machines, popular culture is rife with images of machines devoid of any ethical code mistreating their makers. In his widely circulated treatise, “Why the future doesn’t need us,” Bill Joy (2000) argues that the only antidote to such fates and worse is to “relinquish dangerous technologies.” We believe that machine ethics research may offer a viable, more realistic solution.

Finally, we believe that it’s possible that research in machine ethics will advance the study of ethical theory. Ethics, by its very nature, is the most practical branch of philosophy. It is concerned with how agents ought to behave when faced with ethical dilemmas. Despite the obvious applied nature of the field of ethics, too often work in ethical theory is done with little thought to actual application. When examples are discussed, they are typically artificial examples. Research in machine ethics has the potential to discover problems with current theories, perhaps even leading to the development of better theories, as AI researchers force scrutiny of the details involved in actually applying an ethical theory to particular cases. As Daniel Dennett (2006) recently stated, AI “makes philosophy honest.” Ethics must be made computable in order to make it clear exactly how agents ought to behave in ethical dilemmas.

An exception to the general rule that ethicists don’t spend enough time discussing actual cases occurs in the field of biomedical ethics, a field that has arisen out of a need to resolve pressing problems faced by health-care workers, insurers, hospital ethics boards, and biomedical researchers. As a result of there having been more discussion of actual cases in the field of biomedical ethics, a consensus is beginning to emerge as to how to evaluate ethical dilemmas in this domain, leading to the ethically correct action in many dilemmas. A reason there might be more of a consensus in this domain than in others is that in the area of biomedical ethics there is an ethically defensible goal (the best possible health of the patient), whereas in other areas (such as business and law) the goal may not be ethically defensible (make as much money as possible, serve the client’s interest even if he or she is guilty of an offense or doesn’t deserve a settlement) and ethics enters the picture as a limiting factor (the goal must be achieved within certain ethical boundaries).

AI researchers working with ethicists might find it helpful to begin with this domain, discovering a general approach to computing ethics that not only works in this domain, but could be applied to other domains as well.

Explicit Ethical Machines

It does seem clear, to those who have thought about the issue, that some sort of safeguard should be in place to prevent unethical machine behavior (and that work in this area may provide benefits for the study of ethical theory as well). This shows the need for creating at least implicit ethical machines; but why must we create explicit ethical machines, which would seem to be a much greater (perhaps even an impossible) challenge for AI researchers?

Furthermore, many fear handing over the job of ethical overseer to machines themselves. How could we feel confident that a machine would make the right decision in situations that were not anticipated? Finally, what if the machine starts out behaving in an ethical fashion but then morphs into one that decides to behave unethically in order to secure advantages for itself?

On the need for explicit, rather than just implicit, ethical machines: What is critical in the “explicit ethical agent” versus “implicit ethical agent” distinction, in our view, lies not only in who is making the ethical judgments (the machine versus the human programmer), but also in the ability to justify ethical judgments that only an explicit representation of ethical principles allows. An explicit ethical agent is able to explain why a particular action is either right or wrong by appealing to an ethical principle. A machine that has learned, or been programmed, to make correct ethical judgments, but does not have principles to which it can appeal to justify or explain its judgments, is lacking something essential to being accepted as an ethical agent. Immanuel Kant (1785) made a similar point when he distinguished between an agent that acts from a sense of duty (consciously following an ethical principle), rather than merely in accordance with duty, having praise only for the former.

If we believe that machines could play a role in improving the lives of human beings—that this is a worthy goal of AI research—then, since it is likely that there will be ethical ramifications to their behavior, we must feel confident that these machines will act in a way that is ethically acceptable. It will be essential that they be able to justify their actions by appealing to acceptable ethical principles that they are following, in order to satisfy humans who will question their ability to act ethically. The ethical component of machines that affect humans’ lives must be transparent, and principles that seem reasonable to human beings provide that transparency. Furthermore, the concern about how machines will behave in situations that were not anticipated also supports the need for explicit ethical machines. The virtue of having principles to follow, rather than being programmed in an ad hoc fashion to behave correctly in specific situations, is that it allows machines to have a way to determine the ethically correct action in new situations, even in new domains. Finally, Marcello Guarini (2006), who is working on a neural network model of machine ethics, where there is a predisposition to eliminate principles, argues that principles seem to play an important role in revising ethical beliefs, which is essential to ethical agency. He contends, for instance, that they are necessary to discern morally relevant differences in similar cases.

The concern that machines that start out behaving ethically will end up behaving unethically, perhaps favoring their own interests, may stem from fears derived from legitimate concerns about human behavior. Most human beings are far from ideal models of ethical agents, despite having been taught ethical principles; and humans do, in particular, tend to favor themselves. Machines, though, might have an advantage over human beings in terms of behaving ethically. As Eric Dietrich (2006) has recently argued, human beings, as biological entities in competition with others, may have evolved into beings with a genetic predisposition toward unethical behavior as a survival mechanism. Now, though, we have the chance to create entities that lack this predisposition, entities that might even inspire us to behave more ethically. Consider, for example, Andrew, the robot hero of Isaac Asimov’s story “The Bicentennial Man” (1976), who was far more ethical than the humans with whom he came in contact. Dietrich maintained that the machines we fashion to have the good qualities of human beings, and that also follow principles derived from ethicists who are the exception to the general rule of unethical human beings, could be viewed as “humans 2.0”—a better version of human beings.

This may not completely satisfy those who are concerned about a future in which human beings share an existence with intelligent, autonomous machines. We face a choice, then, between allowing AI researchers to continue in their quest to develop intelligent, autonomous machines—which will have to involve adding an ethical component to them—or stifling this research. The likely benefits and possible harms of each option will have to be weighed. In any case, there are certain benefits to continuing to work on machine ethics. It is important to find a clear, objective basis for ethics—making ethics in principle computable—if only to rein in unethical human behavior; and AI researchers, working with ethicists, have a better chance of achieving breakthroughs in ethical theory than theoretical ethicists working alone. There is also the possibility that society would not be able to prevent some researchers from continuing to develop intelligent, autonomous machines, even if society decides that it is too dangerous to support such work. If this research should be successful, it will be important that we have ethical principles that we insist should be incorporated into such machines. The one thing that society should fear more than sharing an existence with intelligent, autonomous machines is sharing an existence with machines like these without an ethical component.
Challenges Facing Those Working on Machine Ethics

The challenges facing those working on machine ethics can be divided into two main categories: philosophical concerns about the feasibility of computing ethics and challenges from the AI perspective. In the first category, we need to ask whether ethics is the sort of thing that can be computed. One well-known ethical theory that supports an affirmative answer to this question is “act utilitarianism.” According to this teleological theory (a theory that maintains that the rightness and wrongness of actions is determined entirely by the consequences of the actions) that act is right which, of all the actions open to the agent, is likely to result in the greatest net good consequences, taking all those affected by the action equally into account. Essentially, as Jeremy Bentham (1781) long ago pointed out, the theory involves performing “moral arithmetic.”

Of course, before doing the arithmetic, one needs to know what counts as a “good” and “bad” consequence. The most popular version of act utilitarianism—hedonistic act utilitarianism—would have us consider the pleasure and displeasure that those affected by each possible action are likely to receive. And, as Bentham pointed out, we would probably need some sort of scale to account for such things as the intensity and duration of the pleasure or displeasure that each individual affected is likely to receive. This is information that a human being would need to have as well to follow the theory. Getting this information has been and will continue to be a challenge for artificial intelligence research in general, but it can be separated from the challenge of computing the ethically correct action, given this information. With the requisite information, a machine could be developed that is just as able to follow the theory as a human being.

Hedonistic act utilitarianism can be implemented in a straightforward manner. The algorithm is to compute the best action, that which derives the greatest net pleasure, from all alternative actions. It requires as input the number of people affected and, for each person, the intensity of the pleasure/displeasure (for example, on a scale of 2 to –2), the duration of the pleasure/displeasure (for example, in days), and the probability that this pleasure or displeasure will occur, for each possible action. For each person, the algorithm computes the product of the intensity, the duration, and the probability, to obtain the net pleasure for that person. It then adds the individual net pleasures to obtain the total net pleasure:

Total net pleasure = ∑ (intensity × duration × probability) for each affected individual

This computation would be performed for each alternative action. The action with the highest total net pleasure is the right action (Anderson, Anderson, and Armen 2005b).
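As a concrete illustration, the sketch below carries out exactly this computation. It is a minimal sketch under our own assumptions about data layout and naming, not the authors’ implementation.

```python
# Illustrative sketch of hedonistic act utilitarianism (not the authors' code).
# Each alternative action maps to one (intensity, duration, probability) triple
# per person affected: intensity on the +2 to -2 scale, duration in days,
# probability between 0 and 1.

def total_net_pleasure(effects):
    """Sum intensity * duration * probability over all affected individuals."""
    return sum(intensity * duration * probability
               for intensity, duration, probability in effects)

def best_action(alternatives):
    """Return the alternative with the greatest total net pleasure."""
    return max(alternatives, key=lambda name: total_net_pleasure(alternatives[name]))

# A hypothetical two-person dilemma with two alternative actions.
alternatives = {
    "administer treatment": [(2, 3, 0.9), (-1, 1, 0.5)],   # net 4.9
    "withhold treatment":   [(0, 0, 1.0), (1, 2, 0.8)],    # net 1.6
}
print(best_action(alternatives))  # administer treatment
```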
A machine might very well have an advantage over a human being in following the theory of act utilitarianism for several reasons: First, human beings tend not to do the arithmetic strictly, but just estimate that a certain action is likely to result in the greatest net good consequences, and so a human being might make a mistake, whereas such error by a machine would be less likely. Second, as has already been noted, human beings tend toward partiality (favoring themselves, or those near and dear to them, over others who might be affected by their actions or inactions), whereas an impartial machine could be devised. Since the theory of act utilitarianism was developed to introduce objectivity into ethical decision making, this is important. Third, humans tend not to consider all of the possible actions that they could perform in a particular situation, whereas a more thorough machine could be developed. Imagine a machine that acts as an advisor to human beings and “thinks” like an act utilitarian. It will prompt the human user to consider alternative actions that might result in greater net good consequences than the action the human being is considering doing, and it will prompt the human to consider the effects of each of those actions on all those affected. Finally, for some individuals’ actions—actions of the president of the United States or the CEO of a large international corporation—their impact can be so great that the calculation of the greatest net pleasure may be very time consuming, and the speed of today’s machines gives them an advantage.

We conclude, then, that machines can follow the theory of act utilitarianism at least as well as human beings and, perhaps, even better, given the data that human beings would need, as well, to follow the theory. The theory of act utilitarianism has, however, been questioned as not entirely agreeing with intuition. It is certainly a good starting point in programming a machine to be ethically sensitive—it would probably be more ethically sensitive than many human beings—but, perhaps, a better ethical theory can be used.

Critics of act utilitarianism have pointed out that it can violate human beings’ rights, sacrificing one person for the greater net good. It can also conflict with our notion of justice—what people deserve—because the rightness and wrongness of actions is determined entirely by the future consequences of actions, whereas what people deserve is a result of past behavior. A deontological approach to ethics (where the rightness and wrongness of actions depends on something other than the consequences), such as Kant’s categorical imperative, can emphasize the importance of rights and justice, but this approach can be accused of ignoring consequences. We believe, along with W. D. Ross (1930), that the best approach to ethical theory is one that combines elements of both teleological and deontological theories. A theory with several prima facie duties (obligations that we should try to satisfy, but which can be overridden on occasion by stronger obligations)—some concerned with the consequences of actions and others concerned with justice and rights—better acknowledges the complexities of ethical decision making than a single absolute duty theory. This approach has one major drawback, however. It needs to be supplemented with a decision procedure for cases where the prima facie duties give conflicting advice. This is a problem that we have worked on and will be discussed later on.
Among those who maintain that ethics cannot be computed, there are those who question the action-based approach to ethics that is assumed by defenders of act utilitarianism, Kant’s categorical imperative, and other well-known ethical theories. According to the “virtue” approach to ethics, we should not be asking what we ought to do in ethical dilemmas, but rather what sort of persons we should be. We should be talking about the sort of qualities—virtues—that a person should possess; actions should be viewed as secondary. Given that we are concerned only with the actions of machines, it is appropriate, however, that we adopt the action-based approach to ethical theory and focus on the sort of principles that machines should follow in order to behave ethically.

Another philosophical concern with the machine ethics project is whether machines are the type of entities that can behave ethically. It is commonly thought that an entity must be capable of acting intentionally, which requires that it be conscious, and that it have free will, in order to be a moral agent. Many would, also, add that sentience or emotionality is important, since only a being that has feelings would be capable of appreciating the feelings of others, a critical factor in the moral assessment of possible actions that could be performed in a given situation. Since many doubt that machines will ever be conscious, have free will, or emotions, this would seem to rule them out as being moral agents.

This type of objection, however, shows that the critic has not recognized an important distinction between performing the morally correct action in a given situation, including being able to justify it by appealing to an acceptable ethical principle, and being held morally responsible for the action. Yes, intentionality and free will in some sense are necessary to hold a being morally responsible for its actions, and it would be difficult to establish that a machine possesses these qualities; but neither attribute is necessary to do the morally correct action in an ethical dilemma and justify it. All that is required is that the machine act in a way that conforms with what would be considered to be the morally correct action in that situation and be able to justify its action by citing an acceptable ethical principle that it is following (S. L. Anderson 1995).

The connection between emotionality and being able to perform the morally correct action in an ethical dilemma is more complicated. Certainly one has to be sensitive to the suffering of others to act morally. This, for human beings, means that one must have empathy, which, in turn, requires that one have experienced similar emotions oneself. It is not clear, however, that a machine could not be trained to take into account the suffering of others in calculating how it should behave in an ethical dilemma, without having emotions itself. It is important to recognize, furthermore, that having emotions can actually interfere with a being’s ability to determine, and perform, the right action in an ethical dilemma. Humans are prone to getting “carried away” by their emotions to the point where they are incapable of following moral principles. So emotionality can even be viewed as a weakness of human beings that often prevents them from doing the “right thing.”

The necessity of emotions in rational decision making in computers has been championed by Rosalind Picard (1997), citing the work of Damasio (1994), which concludes that human beings lacking emotion repeatedly make the same bad decisions or are unable to make decisions in due time. We believe that, although evolution may have taken this circuitous path to decision making in human beings, irrational control of rational processes is not a necessary condition for all rational systems—in particular, those specifically designed to learn from errors, heuristically prune search spaces, and make decisions in the face of bounded time and knowledge.

A final philosophical concern with the feasibility of computing ethics has to do with whether there is a single correct action in ethical dilemmas. Many believe that ethics is relative either to the society in which one lives—“when in Rome, one should do what Romans do”—or, a more extreme version of relativism, to individuals—whatever you think is right is right for you. Most ethicists reject ethical relativism (for example, see Mappes and DeGrazia [2001, p. 38] and Gazzaniga [2006, p. 178]), in both forms, primarily because this view entails that one cannot criticize the actions of societies, as long as they are approved by the majority in those societies, or individuals who act according to their beliefs, no matter how heinous they are. There certainly do seem to be actions that experts in ethics, and most of us, believe are absolutely wrong (for example, torturing a baby and slavery), even if there are societies, or individuals, who approve of the actions. Against those who say that ethical relativism is a more tolerant view than ethical absolutism, it has been pointed out that ethical relativists cannot say that anything is absolutely good—even tolerance (Pojman [1996, p. 13]).

What defenders of ethical relativism may be recognizing—that causes them to support this view—are two truths, neither of which entails the acceptance of ethical relativism: (1) Different societies have their own customs that we must acknowledge, and (2) there are difficult ethical issues about which even experts in ethics cannot agree, at the present time, on the ethically correct action. Concerning the first truth, we must distinguish between an ethical issue and customs or practices that fall outside the area of ethical concern. Customs or practices that are not a matter of ethical concern can be respected, but in areas of ethical concern we should not be tolerant of unethical practices.

Concerning the second truth, that some ethical issues are difficult to resolve (for example, abortion)—and so, at this time, there may not be agreement by ethicists as to the correct action—it does not follow that all views on these issues are equally correct.
It will take more time to resolve these issues, but most ethicists believe that we should strive for a single correct position on these issues. What needs to happen is to see that a certain position follows from basic principles that all ethicists accept, or that a certain position is more consistent with other beliefs that they all accept.

From this last point, we should see that we may not be able to give machines principles that resolve all ethical disputes at this time. (Hopefully, the machine behavior that we are concerned about won’t fall in too many of the disputed areas.) The implementation of ethics can’t be more complete than is accepted ethical theory. Completeness is an ideal for which to strive but may not be possible at this time. The ethical theory, or framework for resolving ethical disputes, should allow for updates, as issues that once were considered contentious are resolved. What is more important than having a complete ethical theory to implement is to have one that is consistent. This is where machines may actually help to advance the study of ethical theory, by pointing out inconsistencies in the theory that one attempts to implement, forcing ethical theoreticians to resolve those inconsistencies.

Considering challenges from an AI perspective, foremost for the nascent field of machine ethics may be convincing the AI community of the necessity and advisability of incorporating ethical principles into machines. Some critics maintain that machine ethics is the stuff of science fiction—machines are not yet (and may never be) sophisticated enough to require ethical restraint. Others wonder who would deploy such systems given the possible liability involved. We contend that machines with a level of autonomy requiring ethical deliberation are here and both their number and level of autonomy are likely to increase. The liability already exists; machine ethics is necessary as a means to mitigate it. In the following section, we will detail a system that helps establish this claim.

Another challenge facing those concerned with machine ethics is how to proceed in such an inherently interdisciplinary endeavor. Artificial intelligence researchers and philosophers, although generally on speaking terms, do not always hear what the other is saying. It is clear that, for substantive advancement of the field of machine ethics, both are going to have to listen to each other intently. AI researchers will need to admit their naiveté in the field of ethics and convince philosophers that there is a pressing need for their services; philosophers will need to be a bit more pragmatic than many are wont to be and make an effort to sharpen ethical theory in domains where machines will be active. Both will have to come to terms with this newly spawned relationship and, together, forge a common language and research methodology.

The machine ethics research agenda will involve testing the feasibility of a variety of approaches to capturing ethical reasoning, with differing ethical bases and implementation formalisms, and applying this reasoning in systems engaged in ethically sensitive activities. This research will investigate how to determine and represent ethical principles, incorporate ethical principles into a system’s decision procedure, make ethical decisions with incomplete and uncertain knowledge, provide explanations for decisions made using ethical principles, and evaluate systems that act based upon ethical principles.

System implementation work is already underway. A range of machine-learning techniques are being employed in an attempt to codify ethical reasoning from examples of particular ethical dilemmas. As such, this work is based, to a greater or lesser degree, upon casuistry—the branch of applied ethics that, eschewing principle-based approaches to ethics, attempts to determine correct responses to new ethical dilemmas by drawing conclusions based on parallels with previous cases in which there is agreement concerning the correct response.

Rafal Rzepka and Kenji Araki (2005), at what might be considered the most extreme degree of casuistry, explore how statistics learned from examples of ethical intuition drawn from the full spectrum of the world wide web might be useful in furthering machine ethics. Working in the domain of safety assurance for household robots, they question whether machines should be obeying some set of rules decided by ethicists, concerned that these rules may not in fact be truly universal. They suggest that it might be safer to have machines “imitating millions, not a few,” believing in such “democracy-dependent algorithms” because, they contend, “most people behave ethically without learning ethics.” They propose an extension to their web-based knowledge discovery system GENTA (General Belief Retrieving Agent) that would search the web for opinions, usual behaviors, common consequences, and exceptions, by counting ethically relevant neighboring words and phrases, aligning these along a continuum from positive to negative behaviors, and subjecting this information to statistical analysis. They suggest that this analysis, in turn, would be helpful in the development of a sort of majority-rule ethics useful in guiding the behavior of autonomous systems. An important open question is whether users will be comfortable with such behavior or will, as might be expected, demand better than average ethical conduct from autonomous systems.

A neural network approach is offered by Marcello Guarini (2006). At what might be considered a less extreme degree of casuistry, particular actions concerning killing and allowing to die are classified as acceptable or unacceptable depending upon different motives and consequences. After training a simple recurrent network on a number of such cases, it is capable of providing plausible responses to a variety of previously unseen cases. This work attempts to shed light on the philosophical debate concerning generalism (principle-based approaches to moral reasoning) versus particularism (case-based approaches to moral reasoning). Guarini finds that, although some of the concerns pertaining to learning and generalizing from ethical dilemmas without resorting to principles can be mitigated with a neural network model of cognition, “important considerations suggest that it cannot be the whole story about moral reasoning—principles are needed.”
He argues that “to build an artificially intelligent agent without the ability to question and revise its own initial instruction on cases is to assume a kind of moral and engineering perfection on the part of the designer.” He argues, further, that such perfection is unlikely and principles seem to play an important role in required subsequent revision—“at least some reflection in humans does appear to require the explicit representation or consultation of…rules,” for instance, in discerning morally relevant differences in similar cases. Concerns about this approach are those attributable to neural networks in general, including oversensitivity to training cases and the inability to generate reasoned arguments for system responses.

Bruce McLaren (2003), in the spirit of a more pure form of casuistry, promotes a case-based reasoning approach (in the artificial intelligence sense) for developing systems that provide guidance in ethical dilemmas. His first such system, Truth-Teller, compares pairs of cases presenting ethical dilemmas about whether or not to tell the truth. The Truth-Teller program marshals ethically relevant similarities and differences between two given cases from the perspective of the “truth teller” (that is, the person faced with the dilemma) and reports them to the user. In particular, it points out reasons for telling the truth (or not) that (1) apply to both cases, (2) apply more strongly in one case than another, or (3) apply to only one case.

The System for Intelligent Retrieval of Operationalized Cases and Codes (SIROCCO), McLaren’s second program, leverages information concerning a new ethical dilemma to predict which previously stored principles and cases are relevant to it in the domain of professional engineering ethics. Cases are exhaustively formalized and this formalism is used to index similar cases in a database of formalized, previously solved cases that include principles used in their solution. SIROCCO’s goal, given a new case to analyze, is “to provide the basic information with which a human reasoner … could answer an ethical question and then build an argument or rationale for that conclusion.” SIROCCO is successful at retrieving relevant cases but performed beneath the level of an ethical review board presented with the same task. Deductive techniques, as well as any attempt at decision making, are eschewed by McLaren due to the “ill-defined nature of problem solving in ethics.” Critics might contend that this “ill-defined nature” may not make problem solving in ethics completely indefinable, and attempts at just such a definition may be possible in constrained domains. Further, it might be argued that decisions offered by a system that are consistent with decisions made in previous cases have merit and will be useful to those seeking ethical advice.

We (Anderson, Anderson, and Armen 2006a) have developed a decision procedure for an ethical theory in a constrained domain that has multiple prima facie duties, using inductive logic programming (ILP) (Lavrec and Dzeroski 1997) to learn the relationships between these duties. In agreement with Marcello Guarini and Baruch Brody (1988) that casuistry alone is not sufficient, we begin with prima facie duties that often give conflicting advice in ethical dilemmas and then abstract a decision principle, when conflicts do arise, from cases of ethical dilemmas where ethicists are in agreement as to the correct action. We have adopted a multiple prima facie duty approach to ethical decision making because we believe it is more likely to capture the complexities of ethical decision making than a single, absolute duty ethical theory. In an attempt to develop a decision procedure for determining the ethically correct action when the duties give conflicting advice, we use ILP to abstract information leading to a general decision principle from ethical experts’ intuitions about particular ethical dilemmas. A common criticism concerns whether the relatively straightforward representation scheme used to represent ethical dilemmas will be sufficient to represent a wider variety of cases in different domains.

Deontic logic’s formalization of the notions of obligation, permission, and related concepts1 makes it a prime candidate as a language for the expression of machine ethics principles. Selmer Bringsjord, Konstantine Arkoudas, and Paul Bello (2006) show how formal logics of action, obligation, and permissibility might be used to incorporate a given set of ethical principles into the decision procedure of an autonomous system. They contend that such logics would allow for proofs establishing that (1) robots only take permissible actions, and (2) all actions that are obligatory for robots are actually performed by them, subject to ties and conflicts among available actions. They further argue that, while some may object to the wisdom of logic-based AI in general, they believe that in this case a logic-based approach is promising because one of the central issues in machine ethics is trust and “mechanized formal proofs are perhaps the single most effective tool at our disposal for establishing trust.” Making no commitment as to the ethical content, their objective is to arrive at a methodology that maximizes the probability that an artificial intelligent agent behaves in a certifiably ethical fashion, subject to proof explainable in ordinary English. They propose a general methodology for implementing deontic logics in their logical framework, Athena, and illustrate the feasibility of this approach by encoding a natural deduction system for a deontic logic for reasoning about what agents ought to do. Concerns remain regarding the practical relevance of the formal logics they are investigating and efficiency issues in their implementation.

The work of Bringsjord, Arkoudas, and Bello is based on research that investigates, from perspectives other than artificial intelligence, how deontic logic’s concern with what ought to be the case might be extended to represent and reason about what agents ought to do. It has been argued that the implied assumption that the latter will simply follow from investigation of the former is not the case. In this context, John Horty (2001) proposes an extension of deontic logic, incorporating a formal theory of agency that describes what agents ought to do under various conditions over extended periods of time.
In particular, he adapts preference ordering from decision theory to “both define optimal actions that an agent should perform and the propositions whose truth the agent should guarantee.” This framework permits the uniform formalization of a variety of issues of ethical theory and, hence, facilitates the discussion of these issues.

Tom Powers (2006) assesses the feasibility of using deontic and default logics to implement Kant’s categorical imperative:

    Act only according to that maxim whereby you can at the same time will that it should become a universal law… If contradiction and contrast arise, the action is rejected; if harmony and concord arise, it is accepted. From this comes the ability to take moral positions as a heuristic means. For we are social beings by nature, and what we do not accept in others, we cannot sincerely accept in ourselves.

Powers suggests that a machine might itself construct a theory of ethics by applying a universalization step to individual maxims, mapping them into the deontic categories of forbidden, permissible, or obligatory actions. Further, for consistency, these universalized maxims need to be tested for contradictions with an already established base of principles, and these contradictions resolved. Powers suggests, further, that such a system will require support from a theory of commonsense reasoning in which postulates must “survive the occasional defeat,” thus producing a nonmonotonic theory whose implementation will require some form of default reasoning. It has been noted (Ganascia 2007) that answer set programming (ASP) (Baral 2003) may serve as an efficient formalism for modeling such ethical reasoning. An open question is what reason, other than temporal priority, can be given for keeping the whole set of prior maxims and disallowing a new contradictory one. Powers offers that “if we are to construe Kant’s test as a way to build a set of maxims, we must establish rules of priority for accepting each additional maxim.” The question remains as to what will constitute this moral epistemic commitment.

Creating a Machine That Is an Explicit Ethical Agent

To demonstrate the possibility of creating a machine that is an explicit ethical agent, we have attempted in our research to complete the following six steps:

Step One

We have adopted the prima facie duty approach to ethical theory, which, as we have argued, better reveals the complexity of ethical decision making than single, absolute duty theories. It incorporates the good aspects of the teleological and deontological approaches to ethics, while allowing for needed exceptions to adopting one or the other approach exclusively. It also has the advantage of being better able to adapt to the specific concerns of ethical dilemmas in different domains. There may be slightly different sets of prima facie duties for biomedical ethics, legal ethics, business ethics, and journalistic ethics, for example.

There are two well-known prima facie duty theories: Ross’s theory, dealing with general ethical dilemmas, that has seven duties; and Beauchamp and Childress’s four principles of biomedical ethics (1979) (three of which are derived from Ross’s theory) that are intended to cover ethical dilemmas specific to the field of biomedicine. Because there is more agreement between ethicists working on biomedical ethics than in other areas, and because there are fewer duties, we decided to begin to develop our prima facie duty approach to computing ethics using Beauchamp and Childress’s principles of biomedical ethics.

Beauchamp and Childress’s principles of biomedical ethics include the principle of respect for autonomy that states that the health-care professional should not interfere with the effective exercise of patient autonomy. For a decision by a patient concerning his or her care to be considered fully autonomous, it must be intentional, based on sufficient understanding of his or her medical situation and the likely consequences of forgoing treatment, sufficiently free of external constraints (for example, pressure by others or external circumstances, such as a lack of funds) and sufficiently free of internal constraints (for example, pain or discomfort, the effects of medication, irrational fears, or values that are likely to change over time). The principle of nonmaleficence requires that the health-care professional not harm the patient, while the principle of beneficence states that the health-care professional should promote patient welfare. Finally, the principle of justice states that health-care services and burdens should be distributed in a just fashion.

Step Two

The domain we selected was medical ethics, consistent with our choice of prima facie duties, and, in particular, a representative type of ethical dilemma that involves three of the four principles of biomedical ethics: respect for autonomy, nonmaleficence, and beneficence. The type of dilemma is one that health-care workers often face: A health-care worker has recommended a particular treatment for her competent adult patient, and the patient has rejected that treatment option. Should the health-care worker try again to change the patient’s mind or accept the patient’s decision as final? The dilemma arises because, on the one hand, the health-care professional shouldn’t challenge the patient’s autonomy unnecessarily; on the other hand, the health-care worker may have concerns about why the patient is refusing the treatment.

In this type of dilemma, the options for the health-care worker are just two, either to accept the patient’s decision or not, by trying again to change the patient’s mind. For this proof of concept test of attempting to make a prima facie duty ethical theory computable, we have a single type of dilemma that encompasses a finite number of specific cases, just three duties, and only two possible actions in each case. We have abstracted, from a discussion of similar types of cases given by Buchanan and Brock (1989), the correct answers to the specific cases of the type of dilemma we consider. We have made the assumption that there is a consensus among bioethicists that these are the correct answers.
Step Three

The major philosophical problem with the prima facie duty approach to ethical decision making is the lack of a decision procedure when the duties give conflicting advice. What is needed, in our view, are ethical principles that balance the level of satisfaction or violation of these duties and an algorithm that takes case profiles and outputs that action that is consistent with these principles. A profile of an ethical dilemma consists of an ordered set of numbers for each of the possible actions that could be performed, where the numbers reflect whether particular duties are satisfied or violated and, if so, to what degree. John Rawls’s “reflective equilibrium” (1951) approach to creating and refining ethical principles has inspired our solution to the problem of a lack of a decision procedure. We abstract a principle from the profiles of specific cases of ethical dilemmas where experts in ethics have clear intuitions about the correct action and then test the principle on other cases, refining the principle as needed.

The selection of the range of possible satisfaction or violation levels of a particular duty should, ideally, depend upon how many gradations are needed to distinguish between cases that are ethically distinguishable. Further, it is possible that new duties may need to be added in order to make distinctions between ethically distinguishable cases that would otherwise have the same profiles. There is a clear advantage to our approach to ethical decision making in that it can accommodate changes to the range of intensities of the satisfaction or violation of duties, as well as adding duties as needed.

Step Four

Implementing the algorithm for the theory required formulation of a principle to determine the correct action when the duties give conflicting advice. We developed a system (Anderson, Anderson, and Armen 2006a) that uses machine-learning techniques to abstract relationships between the prima facie duties from particular ethical dilemmas where there is an agreed-upon correct action. Our chosen type of dilemma, detailed previously, has only 18 possible cases (given a range of +2 to –2 for the level of satisfaction or violation of the duties) where, given the two possible actions, the first action supersedes the second (that is, was ethically preferable). Four of these cases were provided to the system as examples of when the target predicate (supersedes) is true. Four examples of when the target predicate is false were provided by simply reversing the order of the actions. The system discovered a principle that provides the correct answer for the remaining 14 positive cases, as verified by the consensus of ethicists.

ILP was used as the method of learning in this system. ILP is concerned with inductively learning relations represented as first-order Horn clauses (that is, universally quantified conjunctions of positive literals Li implying a positive literal H: H ← (L1 ∧ … ∧ Ln)). ILP is used to learn the relation supersedes(A1, A2), which states that action A1 is preferred over action A2 in an ethical dilemma involving these choices. Actions are represented as ordered sets of integer values in the range of +2 to –2 where each value denotes the satisfaction (positive values) or violation (negative values) of each duty involved in that action. Clauses in the supersedes predicate are represented as disjunctions of lower bounds for differentials of these values.

ILP was chosen to learn this relation for a number of reasons. The potentially nonclassical relationships that might exist between prima facie duties are more likely to be expressible in the rich representation language provided by ILP than in less expressive representations. Further, the consistency of a hypothesis regarding the relationships between prima facie duties can be automatically confirmed across all cases when represented as Horn clauses. Finally, commonsense background knowledge regarding the supersedes relationship is more readily expressed and consulted in ILP’s declarative representation language.

The object of training is to learn a new hypothesis that is, in relation to all input cases, complete and consistent. Defining a positive example as a case in which the first action supersedes the second action and a negative example as one in which this is not the case, a complete hypothesis is one that covers all positive cases, and a consistent hypothesis covers no negative cases. Negative training examples are generated from positive training examples by inverting the order of these actions, causing the first action to be the incorrect choice. The system starts with the most general hypothesis stating that all actions supersede each other and, thus, covers all positive and negative cases. The system is then provided with positive cases (and their negatives) and modifies its hypothesis, by adding or refining clauses, such that it covers given positive cases and does not cover given negative cases.
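To make the representation concrete, the following sketch shows one way the training setup just described might be encoded. The duty ordering, the particular values, and the helper names are our assumptions for illustration, not the published system’s code.

```python
# Illustrative encoding of the training setup (our own sketch, not the
# published system). Each action is an ordered tuple of duty
# satisfaction/violation levels in the range +2 to -2, here ordered
# (autonomy, nonmaleficence, beneficence); a case pairs the two possible actions.

# A hypothetical positive example: the first action ("try again") supersedes
# the second ("accept the patient's decision").
positive_examples = [((-1, 2, 2), (1, -2, -2))]

# Negative examples are generated by inverting the order of the actions,
# which makes the first action the incorrect choice.
negative_examples = [(second, first) for first, second in positive_examples]

def covers(hypothesis, case):
    """A hypothesis is a disjunction of clauses, each clause a tuple of lower
    bounds on the duty differentials (first action minus second). The case is
    covered if at least one clause's bounds are all met."""
    first, second = case
    diffs = [f - s for f, s in zip(first, second)]
    return any(all(d >= bound for d, bound in zip(diffs, clause))
               for clause in hypothesis)

def complete_and_consistent(hypothesis):
    """Complete: covers every positive case. Consistent: covers no negative case."""
    return (all(covers(hypothesis, c) for c in positive_examples) and
            not any(covers(hypothesis, c) for c in negative_examples))

# The most general hypothesis (every bound at the minimum differential, -4)
# covers everything, so it is complete but not consistent; training refines
# its clauses until both conditions hold.
most_general = [(-4, -4, -4)]
print(complete_and_consistent(most_general))  # False
```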
The decision principle that the system discovered can be stated as follows: A health-care worker should challenge a patient’s decision if it isn’t fully autonomous and there’s either any violation of nonmaleficence or a severe violation of beneficence. Although, clearly, this rule is implicit in the judgments of the consensus of ethicists, to our knowledge this principle has never before been stated explicitly. Ethical theory has not yet advanced to the point where principles like this one—that correctly balance potentially conflicting duties with differing levels of satisfaction or violation—have been formulated. It is a significant result that machine-learning techniques can discover a principle such as this and help advance the field of ethics. We offer it as evidence that making the ethics more precise will permit machine-learning techniques to discover philosophically novel and interesting principles in ethics because the learning system is general enough that it can be used to learn relationships between any set of prima facie duties where there is a consensus among ethicists as to the correct answer in particular cases.

Once the principle was discovered, the needed decision procedure could be fashioned. Given a profile representing the satisfaction/violation levels of the duties involved in each possible action, values of corresponding duties are subtracted (those of the second action from those of the first). The principle is then consulted to see if the resulting differentials satisfy any of its clauses. If so, the first action of the profile is deemed ethically preferable to the second.
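A minimal sketch of this decision procedure follows. The clause bounds are illustrative stand-ins chosen to echo the prose statement of the discovered principle, and the duty ordering and names are our assumptions rather than the published values.

```python
# Sketch of the decision procedure (our illustration, not the published code).
# Each action's profile is a tuple of duty levels in the range +2 to -2,
# ordered (autonomy, nonmaleficence, beneficence). The principle is a
# disjunction of clauses, each a tuple of lower bounds on the differentials
# (first action minus second). The bounds here are stand-ins for the learned
# values, chosen to echo the prose statement of the principle.
PRINCIPLE = [
    (-2, 1, 0),   # some loss of autonomy is outweighed by preventing harm
    (-2, 0, 3),   # or by preventing a severe loss of benefit
]

def supersedes(first, second, principle=PRINCIPLE):
    """True if the first action is ethically preferable to the second, that is,
    if the duty differentials satisfy at least one clause of the principle."""
    diffs = [f - s for f, s in zip(first, second)]
    return any(all(d >= bound for d, bound in zip(diffs, clause))
               for clause in principle)

# Hypothetical case 1: the refusal is not fully autonomous and harm is likely;
# "try again" is preferable to "accept".
print(supersedes((-1, 2, 2), (1, -2, -2)))   # True

# Hypothetical case 2: a fully autonomous refusal with little at stake;
# the patient's decision should be accepted.
print(supersedes((-2, 0, 1), (2, 0, -1)))    # False
```

In the actual system the clauses playing this role are the ones learned by ILP, rather than hand-chosen bounds like these.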
Step Five

We have explored two prototype applications of the discovered principle governing Beauchamp and Childress’s principles of biomedical ethics. In both prototypes, we created a program where a machine could use the principle to determine the correct answer in ethical dilemmas. The first, MedEthEx (Anderson, Anderson, and Armen 2006b), is a medical ethical advisor system; the second, EthEl, is a system in the domain of elder care that determines when a patient should be reminded to take medication and when a refusal to do so is serious enough to contact an overseer. EthEl is more autonomous than MedEthEx in that, whereas MedEthEx gives the ethically correct answer (that is, that which is consistent with its training) to a human user who will act on it or not, EthEl herself acts on what she determines to be the ethically correct action.

MedEthEx is an expert system that uses the discovered principle and decision procedure to give advice to a user faced with a case of the dilemma type previously described. In order to permit use by someone unfamiliar with the representation details required by the decision procedure, a user interface was developed that (1) asks ethically relevant questions of the user regarding the particular case at hand, (2) transforms the answers to these questions into the appropriate profiles, (3) sends these profiles to the decision procedure, (4) presents the answer provided by the decision procedure, and (5) provides a justification for this answer.2

The principle discovered can be used by other systems, as well, to provide ethical guidance for their actions. Our current research uses the principle to elicit ethically sensitive behavior from an elder-care system, EthEl, faced with a different but analogous ethical dilemma. EthEl must remind the patient to take his or her medication and decide when to accept a patient’s refusal to take a medication that might prevent harm or provide benefit to the patient and when to notify an overseer. This dilemma is analogous to the original dilemma in that the same duties are involved (nonmaleficence, beneficence, and respect for autonomy) and “notifying the overseer” in the new dilemma corresponds to “trying again” in the original.

Machines are currently in use that face this dilemma.3 The state of the art in these reminder systems entails providing “context-awareness” (that is, a characterization of the current situation of a person) to make reminders more efficient and natural. Unfortunately, this awareness does not include consideration of ethical duties that such a system should adhere to when interacting with its patient. In an ethically sensitive elder-care system, both the timing of reminders and responses to a patient’s disregard of them should be tied to the duties involved. The system should challenge patient autonomy only when necessary, as well as minimize harm and loss of benefit to the patient. The principle discovered from the original dilemma can be used to achieve these goals by directing the system to remind the patient only at ethically justifiable times and notifying the overseer only when the harm or loss of benefit reaches a critical level.

In the implementation, EthEl receives input from an overseer (most likely a doctor), including: the prescribed time to take a medication, the maximum amount of harm that could occur if this medication is not taken (for example, none, some, or considerable), the number of hours it would take for this maximum harm to occur, the maximum amount of expected good to be derived from taking this medication, and the number of hours it would take for this benefit to be lost. The system then determines from this input the change in duty satisfaction and violation levels over time, a function of the maximum amount of harm or good and the number of hours for this effect to take place. This value is used to increment duty satisfaction and violation levels for the remind action and, when a patient disregards a reminder, the notify action. It is used to decrement don’t remind and don’t notify actions as well. A reminder is issued when, according to the principle, the duty satisfaction or violation levels have reached the point where reminding is ethically preferable to not reminding. Similarly, the overseer is notified when a patient has disregarded reminders to take medication and the duty satisfaction or violation levels have reached the point where notifying the overseer is ethically preferable to not notifying the overseer.
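The bookkeeping just described can be sketched as follows. The drift rule, the two clause sets, and the names are our own simplifications standing in for EthEl’s actual mechanism; they are assumptions for illustration only.

```python
# Illustrative sketch of an EthEl-style reminder loop (our simplification, not
# the published implementation). Duty tuples are ordered
# (autonomy, nonmaleficence, beneficence), each level in the range +2 to -2.
# The clause sets below are assumed stand-ins for the learned bounds: the
# points at which reminding, and then notifying the overseer, become
# ethically preferable to staying silent.

REMIND_PRINCIPLE = [(-2, 1, 0), (-2, 0, 3)]
NOTIFY_PRINCIPLE = [(-2, 3, 0), (-2, 0, 4)]

def supersedes(first, second, principle):
    """True if the duty differentials (first minus second) satisfy any clause."""
    diffs = [f - s for f, s in zip(first, second)]
    return any(all(d >= bound for d, bound in zip(diffs, clause))
               for clause in principle)

def ethel(hours, max_harm, harm_hours, max_good, good_hours):
    """Step through the hours after a prescribed dose, issuing a reminder and,
    if it is disregarded, eventually a notification."""
    harm_rate = max_harm / harm_hours   # drift toward the maximum harm level
    good_rate = max_good / good_hours   # drift toward the maximum lost benefit
    reminded = False
    for hour in range(1, hours + 1):
        # The longer the dose is missed, the more an intervention satisfies
        # nonmaleficence and beneficence and the more staying silent violates
        # them; the autonomy cost of intervening is held fixed for simplicity.
        harm_level = min(max_harm, harm_rate * hour)
        good_level = min(max_good, good_rate * hour)
        intervene = (-1, harm_level, good_level)
        stay_silent = (1, -harm_level, -good_level)
        if not reminded and supersedes(intervene, stay_silent, REMIND_PRINCIPLE):
            print(f"hour {hour}: remind the patient")
            reminded = True
        elif reminded and supersedes(intervene, stay_silent, NOTIFY_PRINCIPLE):
            print(f"hour {hour}: notify the overseer")
            return

# Hypothetical prescription: considerable harm within 6 hours if not taken,
# modest benefit lost over 12 hours.
ethel(hours=12, max_harm=2, harm_hours=6, max_good=1, good_hours=12)
```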
principle to determine the correct viding “context-awareness” (that is, a EthEl uses an ethical principle dis-
answer in ethical dilemmas. The first, characterization of the current situa- covered by a machine to determine
MedEthEx (Anderson, Anderson, and tion of a person) to make reminders reminders and notifications in a way
Armen 2006b), is a medical ethical more efficient and natural. Unfortu- that is proportional to the amount of
advisor system; the second, EthEl, is a nately, this awareness does not maximum harm to be avoided or good
system in the domain of elder care include consideration of ethical duties to be achieved by taking a particular
that determines when a patient should that such a system should adhere to medication, while not unnecessarily
be reminded to take medication and when interacting with its patient. In challenging a patient’s autonomy.
when a refusal to do so is serious an ethically sensitive elder-care sys- EthEl minimally satisfies the require-
enough to contact an overseer. EthEl is tem, both the timing of reminders and ments of an explicit ethical agent (in a
more autonomous than MedEthEx in responses to a patient’s disregard of constrained domain), according to Jim
that, whereas MedEthEx gives the eth- them should be tied to the duties Moor’s definition of the term: A
ically correct answer (that is, that involved. The system should chal- machine that is able to calculate the
which is consistent with its training) lenge patient autonomy only when best action in ethical dilemmas using
to a human user who will act on it or necessary, as well as minimize harm an ethical principle, as opposed to
not, EthEl herself acts on what she and loss of benefit to the patient. The having been programmed to behave
determines to be the ethically correct principle discovered from the original ethically, where the programmer is fol-
action. dilemma can be used to achieve these lowing an ethical principle.
The principle discovered can be used by other systems, as well, to provide ethical guidance for their actions. Our current research uses the principle to elicit ethically sensitive behavior from an elder-care system, EthEl, faced with a different but analogous ethical dilemma: EthEl must determine when to remind the patient and when to notify an overseer. This dilemma is analogous to the original dilemma in that the same duties are involved (nonmaleficence, beneficence, and respect for autonomy) and “notifying the overseer” in the new dilemma corresponds to “trying again” in the original.

Machines are currently in use that face this dilemma.3 The state of the art in these reminder systems entails providing “context-awareness” (that is, a characterization of the current situation of a person) to make reminders more efficient and natural. Unfortunately, this awareness does not include consideration of the ethical duties that such a system should adhere to when interacting with its patient. In an ethically sensitive elder-care system, both the timing of reminders and the responses to a patient’s disregard of them should be tied to the duties involved. The system should challenge patient autonomy only when necessary, as well as minimize harm and loss of benefit to the patient. The principle discovered from the original dilemma can be used to achieve these goals by directing the system to remind the patient only at ethically justifiable times and to notify the overseer only when the harm or loss of benefit reaches a critical level.

In the implementation, EthEl receives input from an overseer (most likely a doctor), including the prescribed time to take a medication, the maximum amount of harm that could occur if this medication is not taken (for example, none, some, or considerable), the number of hours it would take for this maximum harm to occur, the maximum amount of expected good to be derived from taking this medication, and the number of hours it would take for this benefit to be lost. The system then determines from this input the change in duty satisfaction and violation levels over time, a function of the maximum amount of harm or good and the number of hours for this effect to take place. This value is used to increment the duty satisfaction and violation levels for the remind action; the patient is reminded when, according to the principle, these levels have reached the point where reminding is ethically preferable to not reminding. Similarly, the overseer is notified when a patient has disregarded reminders to take medication and the duty satisfaction or violation levels have reached the point where notifying the overseer is ethically preferable to not notifying the overseer.
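A minimal sketch of this bookkeeping follows. The field names, the linear ramp of the levels, and the numeric thresholds that stand in for the point at which reminding or notifying becomes ethically preferable are assumptions for illustration; they are not EthEl’s actual representation or the discovered principle.

```python
from dataclasses import dataclass

@dataclass
class MedicationOrder:
    """Overseer (for example, a doctor) input as described above; names are illustrative."""
    prescribed_hour: float     # prescribed time to take the medication (recorded only)
    max_harm: float            # maximum harm if not taken (0 none, 1 some, 2 considerable)
    hours_to_max_harm: float   # hours until that maximum harm would occur
    max_good: float            # maximum expected good from taking the medication
    hours_to_lose_good: float  # hours until that benefit would be lost

def duty_levels(order, hours_since_due):
    """Change in duty satisfaction/violation levels over time: a function of the
    maximum harm or good and the number of hours for that effect to take place."""
    harm = min(order.max_harm, order.max_harm * hours_since_due / order.hours_to_max_harm)
    good = min(order.max_good, order.max_good * hours_since_due / order.hours_to_lose_good)
    return harm, good

def ethically_preferable(harm, good, threshold):
    """Placeholder for the discovered principle: acting (reminding or notifying)
    becomes preferable once the projected harm or lost benefit reaches a threshold."""
    return max(harm, good) >= threshold

def ethel_step(order, hours_since_due, reminders_ignored,
               remind_threshold=1.0, notify_threshold=2.0):
    harm, good = duty_levels(order, hours_since_due)
    if reminders_ignored and ethically_preferable(harm, good, notify_threshold):
        return "notify overseer"   # harm or loss of benefit has reached a critical level
    if ethically_preferable(harm, good, remind_threshold):
        return "remind patient"    # reminding is now preferable to not reminding
    return "wait"                  # do not challenge the patient's autonomy yet

# Example: considerable harm (2) would occur 6 hours after a missed 8:00 dose.
order = MedicationOrder(prescribed_hour=8, max_harm=2, hours_to_max_harm=6,
                        max_good=1, hours_to_lose_good=12)
print(ethel_step(order, hours_since_due=4, reminders_ignored=False))  # "remind patient"
```

Because the levels scale with the maximum harm or good at stake, missing a medication with severe and rapid consequences crosses the reminding and notification thresholds sooner than missing one with mild consequences, which is the proportionality described below.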
EthEl uses an ethical principle discovered by a machine to determine reminders and notifications in a way that is proportional to the amount of maximum harm to be avoided or good to be achieved by taking a particular medication, while not unnecessarily challenging a patient’s autonomy. EthEl minimally satisfies the requirements of an explicit ethical agent (in a constrained domain), according to Jim Moor’s definition of the term: a machine that is able to calculate the best action in ethical dilemmas using an ethical principle, as opposed to having been programmed to behave ethically, where the programmer is following an ethical principle.

Step Six

As a possible means of assessing the morality of a system’s behavior, Colin Allen, G. Varner, and J. Zinser (2000) describe a variant of the test Alan Turing (1950) suggested as a means to determine the intelligence of a machine, a test that bypassed disagreements about the definition of intelligence. Their proposed “comparative moral Turing test” (cMTT) bypasses disagreement concerning definitions of ethical behavior as well as the requirement that a machine have the ability to articulate its decisions: an evaluator assesses the comparative morality of pairs of descriptions of morally significant behavior, where one describes the actions of a human being in an ethical dilemma and the other the actions of a machine faced with the same dilemma. If the machine is not identified as the less moral member of the pair significantly more often than the human, then it has passed the test.


They point out, though, that human behavior is typically far from being morally ideal and a machine that passed the cMTT might still fall far below the high ethical standards to which we would probably desire a machine to be held. This legitimate concern suggests to us that, instead of comparing the machine’s behavior in a particular dilemma against typical human behavior, the comparison ought to be made with behavior recommended by a trained ethicist faced with the same dilemma. We also believe that the principles used to justify the decisions that are reached by both the machine and ethicist should be made transparent and compared.
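Whichever behavior the machine is compared against, the cMTT’s pass criterion can be made concrete. The sketch below tallies evaluator judgments and uses a one-sided binomial (sign) test as one possible reading of “significantly more often”; the data format and the choice of test are assumptions for illustration, not part of Allen, Varner, and Zinser’s proposal or of our planned test.

```python
from math import comb

def binomial_p_value(successes, trials, p=0.5):
    """One-sided P(X >= successes) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, i) * p**i * (1 - p)**(trials - i)
               for i in range(successes, trials + 1))

def cmtt_passes(judgments, alpha=0.05):
    """judgments: one label per evaluated pair, naming which member the
    evaluator identified as the less moral ('machine' or 'human'), or 'tie'.
    The machine passes unless it is identified as the less moral member
    significantly more often than chance among the decisive judgments."""
    decisive = [j for j in judgments if j != "tie"]
    if not decisive:
        return True
    machine_worse = decisive.count("machine")
    return binomial_p_value(machine_worse, len(decisive)) > alpha

# Example: the machine was judged less moral in 12 of 20 decisive pairs.
print(cmtt_passes(["machine"] * 12 + ["human"] * 8))  # True: not significant at 0.05
```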
We plan to devise and carry out a moral Turing test of this type in future work, but we have had some assessment of the work that we have done to date. The decision principle that was discovered in MedEthEx, and used by EthEl, is supported by W. D. Ross’s claim that it is worse to harm than not to help someone. Also, the fact that the principle provided answers to nontraining cases that are consistent with Buchanan and Brock’s judgments offers preliminary support for our hypothesis that decision principles discovered from some cases, using our method, enable a machine to determine the ethically acceptable action in other cases as well.

Conclusion

We have argued that machine ethics is an important new field of artificial intelligence and that its goal should be to create machines that are explicit ethical agents. We have done preliminary work to show—through our proof of concept applications in constrained domains—that it may be possible to incorporate an explicit ethical component into a machine. Ensuring that a machine with an ethical component can function autonomously in the world remains a challenge to researchers in artificial intelligence who must further investigate the representation and determination of ethical principles, the incorporation of these ethical principles into a system’s decision procedure, ethical decision making with incomplete and uncertain knowledge, the explanation for decisions made using ethical principles, and the evaluation of systems that act based upon ethical principles.

Of the many challenges facing those who choose to work in the area of machine ethics, foremost is the need for a dialogue between ethicists and researchers in artificial intelligence. Each has much to gain from working together on this project. For ethicists, there is the opportunity of clarifying—perhaps even discovering—the fundamental principles of ethics. For AI researchers, convincing the general public that ethical machines can be created may permit continued support for work leading to the development of autonomous intelligent machines—machines that might serve to improve the lives of human beings.

Acknowledgements

This material is based upon work supported in part by the National Science Foundation grant number IIS-0500133.

Notes

1. See plato.stanford.edu/entries/logic-deontic.

2. A demonstration of MedEthEx is available online at www.machineethics.com.

3. For example, see www.ot.toronto.ca/iatsl/projects/medication.htm.

References

Allen, C.; Varner, G.; and Zinser, J. 2000. Prolegomena to Any Future Artificial Moral Agent. Journal of Experimental and Theoretical Artificial Intelligence 12(2000): 251–61.

Anderson, M., and Anderson, S., eds. 2006. Special Issue on Machine Ethics. IEEE Intelligent Systems 21(4) (July/August).

Anderson, M.; Anderson, S.; and Armen, C., eds. 2005a. Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA.

Anderson, M.; Anderson, S.; and Armen, C. 2005b. Toward Machine Ethics: Implementing Two Action-Based Ethical Theories. In Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA.

Anderson, M.; Anderson, S.; and Armen, C. 2006a. An Approach to Computing Ethics. IEEE Intelligent Systems 21(4): 56–63.

Anderson, M.; Anderson, S.; and Armen, C. 2006b. MedEthEx: A Prototype Medical Ethics Advisor. In Proceedings of the Eighteenth Conference on Innovative Applications of Artificial Intelligence. Menlo Park, CA: AAAI Press.

Anderson, S. L. 1995. Being Morally Responsible for an Action Versus Acting Responsibly or Irresponsibly. Journal of Philosophical Research 20: 453–62.

Asimov, I. 1976. The Bicentennial Man. In Stellar Science Fiction 2, ed. J.-L. del Rey. New York: Ballantine Books.

Baral, C. 2003. Knowledge Representation, Reasoning, and Declarative Problem Solving. Cambridge, UK: Cambridge University Press.

Beauchamp, T. L., and Childress, J. F. 1979. Principles of Biomedical Ethics. Oxford, UK: Oxford University Press.

Bentham, J. 1907. An Introduction to the Principles and Morals of Legislation. Oxford: Clarendon Press.

Bringsjord, S.; Arkoudas, K.; and Bello, P. 2006. Toward a General Logicist Methodology for Engineering Ethically Correct Robots. IEEE Intelligent Systems 21(4): 38–44.

Brody, B. 1988. Life and Death Decision Making. New York: Oxford University Press.

Buchanan, A. E., and Brock, D. W. 1989. Deciding for Others: The Ethics of Surrogate Decision Making, 48–57. Cambridge, UK: Cambridge University Press.

Capek, K. 1921. R.U.R. In Philosophy and Science Fiction, ed. M. Phillips. Amherst, NY: Prometheus Books.

Clarke, A. C. 1968. 2001: A Space Odyssey. New York: Putnam.

Damasio, A. R. 1994. Descartes’ Error: Emotion, Reason, and the Human Brain. New York: G. P. Putnam.

Dennett, D. 2006. Computers as Prostheses for the Imagination. Invited talk presented at the International Computers and Philosophy Conference, Laval, France, May 3.

Dietrich, E. 2006. After the Humans Are Gone. Keynote address presented at the 2006 North American Computing and Philosophy Conference, RPI, Troy, NY, August 12.

Ganascia, J. G. 2007. Using Non-Monotonic Logics to Model Machine Ethics. Paper presented at the Seventh International Computer Ethics Conference, San Diego, CA, July 12–14.

Gazzaniga, M. 2006. The Ethical Brain: The Science of Our Moral Dilemmas. New York: Harper Perennial.

Guarini, M. 2006. Particularism and the Classification and Reclassification of Moral Cases. IEEE Intelligent Systems 21(4): 22–28.

Horty, J. 2001. Agency and Deontic Logic. New York: Oxford University Press.



Joy, B. 2000. Why the Future Doesn’t Need Us. Wired Magazine 8(04) (April).

Kant, I. 1785. Groundwork of the Metaphysic of Morals, trans. by H. J. Paton (1964). New York: Harper & Row.

Lavrec, N., and Dzeroski, S. 1997. Inductive Logic Programming: Techniques and Applications. Chichester, UK: Ellis Horwood.

Mappes, T. A., and DeGrazia, D. 2001. Biomedical Ethics, 5th ed., 39–42. New York: McGraw-Hill.

McLaren, B. M. 2003. Extensionally Defining Principles and Cases in Ethics: An AI Model. Artificial Intelligence Journal 150(1–2): 145–181.

Moor, J. H. 2006. The Nature, Importance, and Difficulty of Machine Ethics. IEEE Intelligent Systems 21(4): 18–21.

Onishi, N. 2006. In a Wired South Korea, Robots Will Feel Right at Home. New York Times, April 2, 2006.

Picard, R. W. 1997. Affective Computing. Cambridge, MA: The MIT Press.

Pojman, L. J. 1996. The Case for Moral Objectivism. In Do the Right Thing: A Philosophical Dialogue on the Moral and Social Issues of Our Time, ed. F. J. Beckwith. New York: Jones and Bartlett.

Powers, T. 2006. Prospects for a Kantian Machine. IEEE Intelligent Systems 21(4): 46–51.

Rawls, J. 1951. Outline for a Decision Procedure for Ethics. The Philosophical Review 60(2): 177–197.

Ross, W. D. 1930. The Right and the Good. Oxford: Clarendon Press.

Rzepka, R., and Araki, K. 2005. What Could Statistics Do for Ethics? The Idea of a Common Sense Processing-Based Safety Valve. In Machine Ethics: Papers from the AAAI Fall Symposium. Technical Report FS-05-06, Association for the Advancement of Artificial Intelligence, Menlo Park, CA.

Turing, A. M. 1950. Computing Machinery and Intelligence. Mind LIX(236): 433–460.

Michael Anderson is an associate professor of computer science at the University of Hartford, West Hartford, Connecticut. He earned his Ph.D. in computer science and engineering at the University of Connecticut. His interest in further enabling machine autonomy brought him first to diagrammatic reasoning, where he cochaired Diagrams 2000, the first conference on the topic. This interest has currently led him, in conjunction with Susan Leigh Anderson, to establish machine ethics as a bona fide field of study. He has cochaired the AAAI Fall 2005 Symposium on Machine Ethics and coedited an IEEE Intelligent Systems special issue on machine ethics in 2006. His research in machine ethics was selected for IAAI as an emerging application in 2006. He maintains the machine ethics website (www.machineethics.org) and can be reached at anderson@hartford.edu.

Susan Leigh Anderson, a professor of philosophy at the University of Connecticut, received her Ph.D. in philosophy at UCLA. Her specialty is applied ethics, most recently focusing on biomedical ethics and machine ethics. She has received funding from NEH, NASA, and NSF. She is the author of three books in the Wadsworth Philosophers Series, as well as numerous articles. With Michael Anderson, she has presented work on machine ethics at national and international conferences, organized and cochaired the AAAI Fall 2005 Symposium on Machine Ethics, and coedited a special issue of IEEE Intelligent Systems on machine ethics (2006). She can be contacted at Susan.Anderson@uconn.edu.

