
Ethics Inf Technol (2012) 14:137–149

DOI 10.1007/s10676-012-9290-1

ORIGINAL PAPER

Out of character: on the creation of virtuous machines


Ryan Tonkens

Published online: 21 February 2012


© Springer Science+Business Media B.V. 2012

Abstract  The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken into account, prior to the implementation and development stages. Here I examine whether the creation of virtuous autonomous machines is morally permitted by the central tenets of virtue ethics. It is argued that the creation of such machines violates certain tenets of virtue ethics, and hence that the creation and use of those machines is impermissible. One upshot of this is that, although virtue ethics may have a role to play in certain near-term Machine Ethics projects (e.g. designing systems that are sensitive to ethical considerations), machine ethicists need to look elsewhere for a moral framework to implement into their autonomous artificial moral agents, Wallach and Allen's claims notwithstanding.

Keywords  Machine ethics · Autonomous artificial moral agents · Virtue ethics · Social justice · Wallach and Allen

R. Tonkens, Department of Philosophy, Faculty of Liberal Arts and Professional Studies, York University, 4700 Keele Street, Room S426 Ross Building, Toronto, ON M3J 1P3, Canada. e-mail: tonkens@yorku.ca

Introduction

As machines become increasingly sophisticated, their ability to act out in the world becomes more pronounced, to the point where they will be able to (or already do) perform autonomous actions with potentially significant and far-reaching moral consequences. Insofar as these machines can perform ethically relevant actions out in the world, then we would need them to behave morally. In light of this, one task of the discipline of Machine Ethics (or Robot Ethics) has become to develop autonomous artificial moral agents (hereafter AMAs), fuelled by the belief that creating these sorts of machines is something that we ought to do.1 Yet, this presumption is rarely scrutinized.2 This is evidenced by the fact that the main focus of machine ethicists thus far has been towards the engineering and computational barriers that stand in the way of the successful development of AMAs, consequently leaving other (ethical) issues unattended. Thus, one of the primary concerns of Machine Ethics is to figure out what ethical framework is most conducive to implementation into (autonomous) machines, so as to render them ethical (i.e. to program them in such a way so that they behave in an ethically sustainable manner).

1 In addition to making sure that these robots are safe for human use and behave in an ethically sustainable manner, Machine Ethics is also concerned with other important issues, including the improvement of our understanding of (human) ethics through trying to implement moral decision-making faculties into robots.
2 There are some noteworthy exceptions here. See for example Singer (2009) and Krishnan (2009). Also, the recent establishment of the international committee for robot arms control stems from a growing concern surrounding the development of autonomous robots used in warfare and other militarized settings.


Yet, there are also ethical concerns that need to be attended to, prior to the implementation and development stages.

Broadly speaking, the purpose of this paper is to explore some of the ethical issues that arise with respect to the act of developing autonomous AMAs. In the background here is the question of whether the sort of AMAs currently being proposed for development ought to be developed in the first place. I address this question (albeit somewhat indirectly) by examining whether this goal of Machine Ethics (i.e. to develop autonomous moral machines) is deemed to be morally permissible by the moral framework(s) that we are attempting to implement into those machines. Part of my argument is that satisfying this constraint represents a serious challenge for Machine Ethics: to identify a moral framework that can be successfully implemented into machines, in such a way so that machines can (do) act ethically out in the world, and whose own tenets condone the creation of those AMAs in the first place.3 Although there may be some benefits to creating certain kinds of AMAs for certain purposes, part of the worry here is that these could also be mitigated if the moral framework that such AMAs are programmed to follow does not allow for their development.

Here I elucidate this challenge through an examination of the development of virtuous machines (i.e. autonomous AMAs that are designed to follow a virtue-based moral framework). In Moral Machines: Teaching Robots Right from Wrong, Wallach and Allen argue that a hybrid computational approach is a promising—indeed a necessary—strategy for implementing ethics into AMAs (2009, 117).4 They suggest that a virtue-ethical framework contains elements conducive to both top-down and bottom-up computational approaches. Because of this, the authors claim that a virtue-based approach to developing AMAs will be quite fruitful in this vein.5 According to Wallach and Allen, virtue ethics is a superior ethic to both deontology and consequentialism, at least with respect to their computability. And since this is what's taken to matter at this stage in the game, with this insight we are one step closer to the successful development of AMAs.6 However, although it is important to ask whether a virtue-ethical framework could be successfully computed into machines, it must also be asked whether the act of creating virtuous machines is permitted by the tenets of virtue ethics. Although a virtue-ethical framework may be implementable into autonomous machines,7 we also need to ask whether it is virtuous to develop virtuous machines; if the act of creating virtuous machines violates the tenets of virtue ethics, then doing so is morally problematic.

In the first two sections below, I discuss the sort of machines under issue here and elaborate on the nature of the proposed ethical constraint and the methodology of my overall project. In section three, I outline the tenets of a virtue-ethical framework (drawing primarily from the work of Hursthouse (1999)), which provides the theoretical basis for a critical analysis of the development of virtuous machines undertaken in the remainder of the paper. In the subsequent three sections, I take up different tenets central to this virtue-ethical framework, and apply them to the development of autonomous AMAs. One way to examine the moral standing of creating virtuous AMAs is to ask whether doing so is consistent with the various virtues that are relevant to the Machine Ethics project, both from the perspective of the creator of the machine, and of the machine itself. In the final section I analyze the act of developing AMAs with respect to the moral character of the engineer, and ask whether the creation of autonomous AMAs is something that promotes the virtues associated with social justice.

In the end, there is good reason to believe that the development of virtuous machines violates virtue ethics in several respects. Regardless of whether we are able to program robots to follow a virtue-based framework, the creation of such robots is problematic, since doing so is impermissible under the tenets of that same moral framework; autonomous machines that were programmed to follow a virtue-based moral framework would come to understand their creation (their very existence) as representing a breach of that very same moral framework. Although autonomous AMAs could be the sorts of things that are virtuous agents, and perhaps even behave virtuously in practice, their development and use undermines the virtues associated with widespread social justice, both with respect to our treatment of these artificial moral agents, and since their existence and use (for certain purposes) promises to exacerbate existing social injustices.

3 Elsewhere I have argued for this claim at length. Tonkens (2009).
4 Wallach and Allen are not alone in making this claim. See for example Lin et al. (2008).
5 Wallach and Allen (2009, 119) explicitly set the disagreements surrounding the intricacies of virtue ethics to one side, in favour of attending to "the computational tractability of virtue ethics."
6 It is important to note that Wallach and Allen (2009) do not argue that virtue ethics is the best or the only promising source for developing moral machines, and they consider other approaches as well. Moreover, it is worth noting that these authors also have some reservations about the development and use of certain kinds of robots for certain kinds of purposes. See especially their Chapter 3 and Chapter 12.
7 Of course, this remains to be seen. Even if creating virtuous AMAs is consistent with virtue ethics, actually creating virtuous machines may prove to be quite difficult in practice.


If this is the case, although a virtue-based framework may still be a useful tool used for the development of less sophisticated kinds of moral machines, machine ethicists have good reason to look beyond virtue ethics for a moral framework to implement into their autonomous robots, Wallach and Allen's claims notwithstanding.

Virtuous artificial moral agents

What exactly is an Artificial Moral Agent (or an ethical robot or a moral machine)? Here I follow Moor (2006) who distinguishes between four types of moral agency, two of which are relevant for our purposes: (1) explicit ethical agents and (2) full ethical agents. AMAs falling into these two categories have the distinctive features that, not only can they act out in the world, but they can do so with little to no human supervision.

Explicit ethical agents have the ability to make explicit ethical judgments, to justify those judgments, and to act out in the world based on these moral deliberations. Examples here include certain advanced automated military weapons currently under development or already in use. These machines are 'capable of autonomously seeking out, attacking, and destroying legitimate enemy targets' (Sparrow 2007, 63). Full ethical agents go beyond explicit ethical agency since they also possess capacities such as consciousness, emotion, creativity, et cetera. The paradigmatic full ethical agent is a 'normal' adult human being. As of yet, no machine has reached the status of full ethical agency. Much of what makes Machine Ethics relevant is that it investigates whether it is possible to create artificial full ethical agents, and to help us prepare just in case it is.8

Autonomy is being used in this paper in the sense of the ability to set the course of one's own actions in real world contexts, based on the information and goals that one has, and the decision-making schema one is operating from; autonomy in the sense of rationally self-directed action. To be sure, machine autonomy is a highly contentious issue. Yet, it seems as though something like the emergence of self-directed (and perhaps novel) behaviour for reasons of the machine's own justification would constitute a sufficient degree of autonomy to meet the threshold being demanded here. There is no (or need not be a) human programmer or user that is directly deciding how the machine is to proceed in practice under the specific circumstances at hand. Although the machine has been programmed to follow certain rules during its conception, these rules can either be overruled, or interpreted, or applied in unique ways to unique situations, to the extent that performing the resulting action requires input beyond what those specific rules could provide, input that is generated by the machine itself, based (in part) on its interactions with its environment. Being autonomous demands more than the machine's simply making (or being able to make) mistakes because of faulty programming or its misuse by a human operator.9

What I am concerned with in this paper is any machine that falls into the categories of explicit or full ethical agent. Importantly, the specific sort of AMAs under issue are machines that can deliberate over, make moral decisions about, and perform ethically consequential actions in real world contexts, and that do so by appealing to a virtue-based moral framework.10

A challenge for machine ethics

Much of the work that is currently being done in Machine Ethics concerns issues of how to best implement a given moral framework into the machinery of a robot, so as to render it ethical.11 The choice about which moral framework to implement into machines has been understood to come down to which one is most conducive to being translated into computational terms. Many possibilities have been examined,12 some of which are quite promising, at least from an engineering perspective. What has yet to come up, however, is the issue of whether or not the ethical frameworks that we are trying to implement into machines themselves allow for the creation of those kinds of AMAs.

8 That the creation of these kinds of sophisticated autonomous machines is one of the goals of Machine Ethics is obvious from a survey of the current literature. See for example Asaro (2008), Sparrow (2007), and Wallach and Allen (2009).
9 Wallach and Allen (2009, 26) adopt a similar understanding of machine autonomy, as does Sparrow (2007).
10 There are other sorts of moral machines that may be developed, ones that do not have the ability to act (autonomously) out in the real world. For example, moral machines may be developed that could serve as ethical advisors to humans, but which do not themselves act in any robust sense beyond giving such advice. The worries presented here about the use of a virtue-based moral framework in the discipline of Machine Ethics do not necessarily apply to these and other similar moral machines. Thus, even if the implementation of virtue ethics into certain kinds of explicit and full AMAs turns out to be problematic, that moral framework may still have a role to play in more near-term Machine Ethics projects (e.g. building ethical sensitivity into machines). I am grateful to an anonymous reviewer for raising this point. At the same time, if we will be building up from such near-term platforms in order to develop explicit and full AMAs, then it may prove to be very helpful and prudent to start thinking about this consistency constraint presently, even when the development of explicit and full AMAs is only on the distant horizon.
11 Much of this section has been adapted from Tonkens (2009).
12 Some of the moral frameworks that have been examined with respect to their computability include Rossian prima facie duties (Anderson and Anderson 2007b), Utilitarianism (Grau 2006), Kantian moral theory (e.g. Powers 2006), and virtue ethics (Wallach and Allen 2009).


The question of what moral framework we ought to implement into autonomous AMAs for them to follow is quite complex, and I cannot do it justice here. There seem to be at least three different ways of approaching it: (1) determine the best moral framework, and implement that framework into machines, assuming that this framework is computable (and, hoping that it allows for the development of these sorts of AMAs); or (2) determine which moral framework is most conducive to successful implementation into machines, and proceed to implement that framework, regardless of whether that framework speaks against doing so, and regardless of whether it is the best moral framework; or (3) determine which moral frameworks allow for implementation into machines and are computable, and choose the best one from that pool. Machine Ethics will not get very far if it depends on satisfying (1), since consensus is characteristically lacking here (although this says nothing about whether or not we ought to wait for a verdict). The favoured approach of machine ethicists thus far seems to be (2), although the argument I advance here speaks against this strategy. But on (3), we run the risk of not using the best moral framework, whether we are aware of it or not. Furthermore, it may turn out that no moral frameworks meet the criteria in (3), in which case the (long-term) goals of Machine Ethics may need to be abandoned. Moreover, it is unclear that consensus could be reached here either, and thus we may have different AMAs built in different laboratories following different (and perhaps incompatible) moral codes. The important point here is that the moral perspective from which we start makes a difference with respect to the demands we put on the ethics of machine morality, and how we gauge the success of the Machine Ethics project more generally.

By choosing moral framework x to implement into AMAs, engineers and roboticists are subsequently (albeit perhaps only implicitly) subscribing to x—whether because it is the best moral framework or the one most conducive to implementation—and are asking the resulting AMAs to abide by x as well. Yet, if it turns out that x does not allow for the creation of those AMAs (for that purpose), then we are faced with a problematic incompatibility between the moral code we are subscribing to and simultaneously implementing into machines, and the tenets of that very same moral code. If asked about the moral standing of the creation of autonomous AMAs (for that purpose), such AMAs would concede, through their application of the moral framework they have been designed to follow, that their very existence is in direct contradiction with their moral code.

It is right to ask "if ethics is the sort of thing that can be computed" (Anderson and Anderson 2007a, 18), and it is important to assess our moral frameworks 'based on the feasibility of implementing them as computer programs' (Allen et al. 2006, 15). But this does not demand enough; not only do we need a moral agent that can act consistently with the moral laws prescribed for it, but we also need to ensure that we are allowed to create those machines in the first place, based on the dictates of the moral framework we seek to implement. In other words, machine ethicists need to ensure the compatibility between the moral framework that they implement into machines, the moral standing of the act of creating AMAs, and the tenets of the moral framework being implemented. Failure to do so entails that we are behaving (and asking robots to behave) against the prescriptions of the moral framework we are designing those machines to follow. Put bluntly, it is hypocritical to ask machines to follow moral rules that do not permit their creation in the first place.

An example may be helpful to illustrate the methodology being adopted here.13 In her "Asimov's 'three laws of robotics' and machine metaethics," Susan Anderson argues that "Asimov's Laws of Robotics [LoR] are an unsatisfactory basis for Machine Ethics, regardless of the [moral] status of the machine" (2008, 477–478). Most of Anderson's argument rests on considering the plausibility of LoR as a moral framework used for creating moral machines from the perspective of Kantian moral theory. According to Anderson, to the extent that we would be treating robots—in this case, Asimovian AMAs—as slaves used for anthropocentric ends, Kantian moral law deems this to be a moral breach, regardless of the moral status of the machine (2008, 492).

Two important points are worth noting here. First, it is unclear why (or how) we are to assess the creation of Asimovian AMAs (i.e. AMAs that are designed to follow Asimov's LoR) on Kantian grounds. For one thing, if it is proper to say that a Kantian picture of morality is the best picture of morality for machines (or at least better than LoR), then presumably we would want to program our robots to follow Kantian moral law, rather than LoR. Moreover, (potentially) competing moral claims generated by different moral frameworks do not show much beyond the fact that different moral frameworks may generate competing moral claims. It is unsurprising that Kantian morality—which emphasizes liberal democratic ideals for all rational and autonomous agents living together in the moral community—would not condone the enslavement of any such agents. Conversely, Asimovian LoR has no such ideals built into its system, and hence remains silent about whether the enslavement of Asimovian AMAs to serve human ends represents a moral violation. Second, Anderson is correct to ask whether these sorts of machines ought to be created—in the sense of whether their creation is consistent with human moral standards.

13 For a much more thorough example, see Tonkens (2009).
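To make the structure of option (3) and the compatibility requirement explicit, the following sketch restates them as a simple selection filter. It is purely illustrative: the predicates is_computable and permits_creation, the ranking, and the example verdicts are placeholders introduced here for exposition, not claims drawn from the literature or from any actual system.

```python
# Purely illustrative sketch: option (3) plus the compatibility requirement,
# restated as a selection filter over candidate moral frameworks.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Framework:
    name: str
    is_computable: bool                      # conducive to implementation into a machine?
    permits_creation: Callable[[str], bool]  # do its own tenets sanction building an AMA for this purpose?
    rank: int                                # hypothetical ordering of how good the framework is (lower = better)

def select_framework(candidates: List[Framework], purpose: str) -> Optional[Framework]:
    """Keep only frameworks that are computable and whose own tenets permit
    creating an AMA for the intended purpose, then return the best-ranked
    survivor (None if the pool is empty)."""
    pool = [f for f in candidates if f.is_computable and f.permits_creation(purpose)]
    return min(pool, key=lambda f: f.rank) if pool else None

# Invented example: on the argument of this paper, a virtue-ethical candidate
# would be filtered out for autonomous AMAs regardless of its rank.
virtue_ethics = Framework("virtue ethics", True, lambda purpose: False, rank=1)
some_rule_set = Framework("some rule set", True, lambda purpose: purpose != "military combat", rank=2)
print(select_framework([virtue_ethics, some_rule_set], "geriatric care"))   # the "some rule set" candidate survives
print(select_framework([virtue_ethics, some_rule_set], "military combat"))  # None: no candidate survives
```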


But this needs to be taken a step further, to the point where we ask whether the moral framework that we are trying to implement into machines itself allows for the creation of those machines.

Asimov's first LoR states that 'robots may not injure a human being, or, through inaction, allow a human being to come to harm' (Anderson 2008, 477). Although we may be able to effectively compute this rule into machines, and have machines successfully act in accordance with it, it is unclear whether we would be allowed to develop certain kinds of machines, based on the tenets of LoR. With respect to autonomous machines used in warfare, for example, insofar as these machines would be expected to harm and kill legitimate human targets on the opposing military force, then they will routinely violate the first LoR, a law they have been programmed to follow. In this way, by creating these machines to serve the purpose of military combat, their very creation and use violates LoR, and hence is incompatible with their being law-abiding Asimovian AMAs. If we want to develop 'killer robots' (Sparrow 2007), then we need to program them with a moral framework that is both computable (which Asimov's LoR may be), and one that also allows for the creation of those machines in the first place (which Asimov's LoR does not seem to leave room for).14 LoR as a moral framework is utterly denied by the development and use of autonomous AMAs for the purpose of lethal military activity.

The crucial point is that we need to find a match between the ethical frameworks we can implement into AMAs and those we are allowed to implement. Because of this, we need to expand our focus beyond the computational and developmental aspects of Machine Ethics, and include considerations regarding the ethics of Machine Ethics.

Normative virtue ethics

There are many different virtue-ethical frameworks on offer. Because of this, much of how we answer the question 'Is the creation of virtuous autonomous AMAs, for purpose x, morally permissible (from the perspective of virtue ethics)?' depends on which virtue-ethical theory we subscribe to. Here I examine this issue through the lens of Hursthouse's normative virtue ethic, as outlined primarily in her On Virtue Ethics (1999). I have chosen Hursthouse's view for several reasons, most notably because it is widely recognizable in the contemporary literature on virtue ethics, it promises to be sufficiently action guiding, and because it offers both (1) rules that may be implementable into machines for them to follow, and (2) accommodates a learning aspect of moral development. In this way, there is reason to believe that it could generate the sort of hybrid computational framework that Wallach and Allen and others are looking for. If there turn out to be serious flaws with Hursthouse's view, then we should look elsewhere for a better theoretical contender. This being said, my goal is to examine two things: (1) whether implementing something like Hursthouse's virtue ethic into machines could yield ethical machines, and, more importantly, (2) whether Hursthouse-style virtue ethics allows for the development of virtuous machines, i.e. whether implementing this moral framework into autonomous AMAs is morally permissible.

Rather than emphasizing an agent's duty or the consequences of her actions, virtue ethics is concerned primarily with the moral character of agents. In this way, an assessment of the character of the acting agent is at the foundation of moral judgment. According to Hursthouse, in order to be virtuous, an agent must establish a virtuous character, in the sense that she must form the dispositions requisite for performing virtuous actions, indicative of fine inner states of character. On Hursthouse's neo-Aristotelian account, virtues are those character traits that promote flourishing in some way—her view is a version of eudaimonism—where flourishing is taken to be an achievement requisite for living a good (human) life. Our best bet for flourishing as human beings is to habitually act in accordance with the virtues, and to avoid acting viciously.

One important objection that is often raised against virtue ethics is that it cannot tell us how to act (i.e. it is insufficiently normative). Whereas deontology and consequentialism offer explicit guidelines for acting morally, the worry is that there is really no action-guidance on offer from virtue ethics, since there are no (explicit) rules to follow. Although this charge needs to be taken seriously, Hursthouse (1999, 28) argues (I think rightly) that there is a normative component to virtue ethics; agents are to act in accordance with how a (paradigmatic) virtuous person would act under the given circumstances:

An action is right iff it is what a virtuous agent would characteristically (i.e. acting in character) do in the circumstances.

Given an enumeration of the virtues and vices, we can determine whether an action is courageous (say) by asking what a courageous person would do under the circumstances, and act accordingly.

14 Whether we can get around this problem by simply not programming this sort of rule into combat robots is an open question. On Asimovian grounds, it would be ethically inconsistent to do so.
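The tension between Asimov's first Law and a combat role can be put in one toy calculation. The sketch below is not a model of any real system; the action descriptions and the harms_human flags are invented for illustration. It only shows that a machine which screens each candidate action against the first Law will veto precisely the actions its military purpose exists to produce, which is the inconsistency at issue here.

```python
# Toy illustration (invented actions and flags): a machine that screens
# candidate actions against Asimov's first Law vetoes exactly the
# actions its combat purpose requires of it.
def violates_first_law(action: dict) -> bool:
    # First Law: a robot may not injure a human being, or, through
    # inaction, allow a human being to come to harm.
    return action["harms_human"] or action["allows_harm_by_inaction"]

combat_tasking = [
    {"name": "attack legitimate enemy target", "harms_human": True, "allows_harm_by_inaction": False},
    {"name": "destroy occupied enemy position", "harms_human": True, "allows_harm_by_inaction": False},
]

permitted = [a["name"] for a in combat_tasking if not violates_first_law(a)]
print(permitted)  # [] -- the role assigned to the machine leaves nothing its own rule allows
```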


It is by referring to what a real or hypothetical appropriately qualified agent would do in a particular context that imperfect moral agents can gain an understanding of how they are to act.15

Creating virtuous AMAs is a novel undertaking, and we may not know whether a paradigmatic virtuous person (engineer, roboticist) would create such a machine, since no one has ever been able to do so before. Yet, given an understanding of what it means to be a virtuous engineer, we can reasonably deliberate about what a virtuous engineer would do under those circumstances. Specifically, we can ask whether the development of virtuous AMAs is consistent with the character traits (virtues) that a virtuous engineer would possess, and draw plausible conclusions from this.

Before doing so, however, there are several other theoretical issues that need to be discussed. Even if it turns out that the creation and use of autonomous AMAs is something a virtuous agent would do, there may still be other problems with implementing a virtue-based moral framework into those machines. In the following three sections I ask whether autonomous machines could be virtuous moral agents, whether our existing understanding of (human) virtue could be applied to virtuous machines, what it would mean for machines to flourish, and whether the learning component characteristic of virtue-based moral development is a desirable element in moral machines.

Can machines be virtuous?

Depending on whom one asks, in order for robots to be genuine (explicit or full) moral agents, they would need to be conscious, rational, autonomous, and possess at least some (proto-) emotions. Furthermore, in order to be virtuous moral agents, machines would also need to possess character traits, since actions are assessed based primarily on whether they were performed by an agent who acts in accordance with the various virtues (e.g. courage, charity, fidelity, et cetera), which are generally taken to be indicative of fine inner states of the agent (Hursthouse 1999).

Even if machines could be conscious, rational, autonomous, and emotional—and controversy surrounds all of these issues—it is unclear whether machines could have anything like traits of character, in the sense of having fine inner psychological states. Much of this depends on whether machines could have anything resembling psychological states in the first place. Yet, a weaker (yet plausible) understanding of "character" could refer to something like enduring dispositions to behave in certain ways, rather than the somewhat mystical idea of fine inner psychological states. On this understanding of character, insofar as a machine was programmed to behave in a certain way, and did so consistently over time and in the relevant situations, then we may be able to say that the machine displayed behaviour indicative of established character traits, even if it had no such traits underlying its behaviour. Indeed, just as we need to be open to the idea of multiple realizability of pain or consciousness (for example), we need to be open to the idea that traits of character could be realized in ways quite unlike the way they are manifested in human psychology and physiology. All of this is to say that, assuming that robots could have the other qualities deemed necessary for moral agency, they could meet the criteria for being moral agents defined under a virtue-ethical framework. And, autonomous robots that do not act immorally (e.g. do not lie, or murder, or commit injustices, et cetera) would be moral machines to a sufficient degree, even if they were doing so merely from the fact that they could not act contrary to their programming. There is reason to argue that these machines could be moral machines that follow a virtue-ethical framework, and that the goal of Machine Ethics would be met to this extent.

Flourishing and machine nature

On many virtue-ethical accounts, the purpose or nature of an entity makes a moral difference (Hursthouse 1999; Swanton 2003). In the case of humans, our purpose is often thought to be something like living a characteristically good life for humans. Human virtues are those character traits that are necessary for the promotion of human flourishing in some way, in the sense that acting viciously reliably and predictably compromises one's ability to live a good life.16 Several issues come up here when we transfer our human-centered virtue-ethical framework to machines. For one thing, just because a certain character trait is a human virtue, it does not automatically follow that it would be a virtue for machines as well. The virtue of courage is a telling example here.

15 There may not be much difference for a machine between following rules designed for some set of circumstances and modeling what a virtuous agent would do in those circumstances. Indeed, machines may be well suited for modelling what a perfectly virtuous agent would do, perhaps even more so than humans. Hursthouse's claim seems especially pertinent for Machine Ethics since we can program the machine in ways that are much more coded (explicitly action guiding) than virtue ethics is usually charged as being able to offer.
16 This appeal to naturalism has drawn its fair share of criticism. Moreover, it is unclear how much the project of creating artificial moral agents is conducive to this way of putting things. The important aspect of Hursthouse-style naturalism for our purposes is the idea that flourishing supervenes on an entity's purpose, presumably regardless of whether this purpose originates naturally or is manufactured in some sense.


An AMA used in warfare (for example) may not be able to be courageous in any robust sense of the term. The human that always faces danger without fear is rash; the human that routinely shies away from danger is cowardly; the human that faces danger in the proper circumstances, for the right reasons, and in the right way, is courageous. But the source of a robot's behaviour makes a difference here, and acting as a courageous human would act does not alone guarantee its acting virtuously.

For instance, the machine may not possess anything like emotion, and hence may be incapable of experiencing fear, or what it's like to be afraid. Because of this, the machine would always 'fearlessly' confront the dangerous situations it is presented with; its reason for acting could not be one of 'facing fearful situations with poise', since it does not actually recognize such situations as fearful in the first place. Regardless of whether the military AMA could disregard or reject sound orders to confront dangerous, risky, or otherwise terrifying situations, the typical reasons for fleeing danger or refraining from taking risks (i.e. fear and anxiety) may not be part of the machine's repertoire. Because of this, it would be a stretch to say that the robot could be rash, or cowardly, or courageous; a machine's decision of whether to persist or to flee would be based on factors irrelevant to the nature of cowardliness or bravery. One upshot of this is that we should be hesitant to automatically transfer (all) human virtues to machines, at least without qualification.17

Secondly, the purpose of machines may be strikingly different from that of human beings. On Hursthouse's understanding of human nature, the purpose of human beings is to live a good human life. But, we need to ask whether machines could flourish, in the sense of 'leading a good life' for a machine. Perhaps more importantly, we need to ask whether the purposes we intend on designing robots to have are consistent with the flourishing of those machines (qua autonomous machine), and the virtues requisite for flourishing in that sense. On the sort of virtue-ethical account being appealed to here, it does not make sense to speak of virtuous moral agents (whether human or machine) whose conditions for flourishing as such do not influence how they ought to behave (and how others ought to treat them as well). Whereas forcing18 humans into military or geriatric care service may compromise human flourishing to a certain extent, the same may or may not hold true for robots. Forcing a human being to care for others against her will is morally questionable, and involuntary military conscription has received widespread disapproval. Yet, insofar as virtue is linked with an entity's purpose, then a machine's having the purpose of military combat or geriatric care may not be inconsistent with the flourishing of that machine, qua machine. Indeed, to the extent that these robots performed these tasks, doing so may in fact promote their overall flourishing.19

The crucial issue here becomes the extent to which designing moral machines for specific purposes is morally permissible, according to virtue ethics. In particular, as discussed later on (Sect. 8), issues of social justice arise here, which seem to place restrictions on the sort of purposes that we may design autonomous robots to have in the first place; the purpose(s) that we design autonomous AMAs to have makes a moral difference. Although machines may be able to flourish (qua machine) with quite different purposes than those of human beings, we need to ask whether creating otherwise autonomous machines for those purposes is indicative of a virtuous character on behalf of their creator and consistent with the moral treatment of autonomous moral agents in general. If designing and using machines for certain purposes turns out to be less-than-virtuous (e.g. indicative of a flawed character on behalf of the engineer, roboticist, user), then creating those machines would violate a central tenet of virtue ethics, namely, act virtuously.

17 Considering that Aristotle thought that the most noble death possible was to die courageously in battle, he may have been an advocate of (just) warfare. But, given that combat robots may not be able to be courageous, and that replacing human soldiers with robots may lessen the former's opportunity to display courage in battle (since they would be relegated to remote positions or not appear on the battlefield at all) and thus to die courageously, our drive towards automated warfare may have Aristotle rotating in his grave.
18 I return to this issue later on. Because human moral agents are autonomous, making them do things against their will is typically morally problematic. Insofar as AMAs would be sufficiently autonomous, then assigning them duties against their will or without their consent may be similarly morally problematic. We could get around this problem by simply not making AMAs autonomous, although this flies in the face of the goal of Machine Ethics, as I understand it. Thus, it seems as though the project of developing autonomous AMAs may demand that we take their autonomy (interests, rights) into account in the ways that we treat them and the roles we assign to them.
19 Part of this may depend on our understanding of "machine". To be sure, machines have hitherto been understood as being tools (things, objects) manufactured and used by humans for achieving human goals. If we want to continue to accept this traditional definition of "machine" unconditionally, then it seems to follow that the creation of moral machines would be a matter of programming rules of our choosing into machines, rules that would need only to satisfy the requirements of enabling the machine (tool, object) to achieve its purpose of satisfying the ends of its human designer and user. And yet, the traditional definition of "machine" is being challenged by recent advances in artificial intelligence and robotics research, and may warrant significant revision or expansion—especially given the emergence of machines that are sufficiently sophisticated so as to possess (some of) the qualities central to moral agency. To the extent that these new kinds of machines would be moral agents, then they would no longer just be tools (things, objects) used for human ends, but would also be independent moral entities of their own. It is here that considerations of their flourishing arise.


Becoming virtuous

Wallach and Allen (2009, 119) rightly point out that a virtue-based moral framework has both top-down and bottom-up elements. With respect to the former, something like Hursthouse's virtue rules (or 'v-rules') could serve as a top-down computational tool. We may be able to implement rules such as 'Act honestly', 'Do not act unjustly', and 'Do not act foolishly' into the machine for it to follow. Indeed, according to Hursthouse, all virtues are accompanied by a prescription for action, and all vices a prohibition. In this way, virtue ethics does have normative rules on offer for the potential implementation into robots. What about the bottom-up aspect of virtue ethics?

Even assuming that robots could learn to develop virtuous characters and to behave virtuously, there may nevertheless be reason to guard against creating robots that need to learn how to act morally in practice. On one hand, this bottom-up orientation suggests a welcome aspect to machine morality, i.e. machines could learn from experience and act appropriately in novel situations. But, on the other hand, we run the risk of autonomous machines acting immorally during their (ongoing) process of becoming virtuous. The learning aspect of virtue ethics should be cause for concern, at least to the extent that robots would need to (continue to) learn as they are already acting out in the world. At the very least, we need to assess the potential risks of allowing a moral learning curve in autonomous machines. Even if machines could learn how to act virtuously in the laboratory, transferring that knowledge to real-world dilemmas may be limited. Depending on what the consequences are when a given robot does not act morally, we may want to put restrictions on the extent to which robots would need to learn how to behave in the situations they encounter once they leave the laboratory. To the extent that learning to behave morally is often accompanied by moral failure, there is some reason to be wary of implementing a bottom-up computational framework into autonomous AMAs.

But, perhaps a learning curve in morality is no real cause for concern with respect to autonomous AMAs, just as it is no cause for concern with respect to human children (for example). Recognizing that we live in an imperfect world, and that humans are not typically moral saints, there may be no reason why we need to eliminate the ability of machines to perform less-than-moral actions altogether, or guard against all possible scenarios where they may not act morally. This standard may be too high. Moreover, it is generally accepted that it is (in part) because an agent could act immorally (if she so chooses to) that her actions can truly be said to be moral in the first place; part of what makes a virtuous act virtuous is that the agent could have acted viciously, but did not.20 A machine's behaving immorally during its process of becoming virtuous does not automatically undermine its status as a moral machine. Indeed, in order to be a proper moral agent, it must have the ability to not act morally (however limited), something which the learning aspect of virtue ethics readily accommodates.

Nevertheless, during the learning phase of moral development, if such machines are able to act out in the real world and perform actions with serious moral consequences, then the benefits of having them there despite their potential faults and moral transgressions may need to meet a higher threshold than do human children, since we do not need to develop these machines in the way that we 'need' to continue to procreate. Indeed, the above considerations unveil a troublesome dilemma: We need our autonomous robots to be able to act immorally to a certain extent—because we need them to be able to adjust to novel situations, which may demand that they have the sort of resources that enable them to learn from their trials and errors in practice, and because only agents that could act contrary to virtue can be genuine moral agents—but the extent to which they can act immorally represents a strong reason against creating these sorts of machines in the first place. 'The goal of Machine Ethics is to create ethical robots, not robots that sometimes act ethically, or that can act unethically'.21 The extent to which we are willing to accept that machines can act immorally depends to a large extent on why we want to create these kinds of robots in the first place (among other things). One reason is that we could (presumably) remove the element of human risk and error from complicated, dangerous, and highly stressful moral contexts, such as situations in warfare where noncombatants may be present (Arkin 2009b). But to the extent that machines would be susceptible to error in such situations (or even more commonplace contexts), then their existence may become somewhat futile.22

This point can be made even without assuming that the level of error in machines would be comparable to that of humans; to the extent that autonomous machines could shoot innocent civilians, or miscalculate the dosage of medication to be given to a patient, et cetera, these may be (inherent) risks that we should be hesitant to accept.

20 Some virtue ethicists may argue that a virtuous agent could not act viciously, since through doing so she would be revealed to be a non-virtuous agent. Yet, virtuous agents certainly have the ability to act viciously, they simply do not do so.
21 Tonkens (2009).
22 See Guarini and Bello (2011). Arkin (2009a, 6) has argued that it may be impossible to eliminate an AMA's ability to act unethically in its entirety. However, he also argues that one benefit of using AMAs in military combat is that those robots could be more ethical than human beings.
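A deliberately simplified sketch of the hybrid picture may help fix ideas. Nothing below is taken from Wallach and Allen or Hursthouse beyond the v-rule wording quoted above; the disposition weights, the feedback signal, and the update rule are invented for illustration. The top-down element is the v-rule filter; the bottom-up element is the set of adjustable weights, and it is exactly this adjustable part that generates the learning-curve worry discussed above, since choices among rule-permitted actions can go badly before the weights settle.

```python
# Illustrative hybrid sketch (all names and numbers invented):
# top-down v-rule filter plus bottom-up, feedback-adjusted dispositions.
V_RULES = {
    "act honestly": lambda a: not a.get("deceptive", False),
    "do not act unjustly": lambda a: not a.get("unjust", False),
}

# Learned "dispositions": start uncommitted, adjusted from experience.
dispositions = {"charitable": 0.5, "courageous": 0.5}

def permitted(action: dict) -> bool:
    """Top-down element: an action is ruled out if any v-rule prohibits it."""
    return all(rule(action) for rule in V_RULES.values())

def score(action: dict) -> float:
    """Bottom-up element: weigh an action by how strongly it expresses
    the machine's currently learned dispositions."""
    return sum(weight * action.get(trait, 0.0) for trait, weight in dispositions.items())

def choose(options: list) -> dict:
    """Pick the best-scoring action among those the v-rules permit
    (assumes at least one permitted option is available)."""
    return max((a for a in options if permitted(a)), key=score)

def learn(action: dict, feedback: float, lr: float = 0.1) -> None:
    """Adjust disposition weights after the fact. Until these weights settle,
    choices among rule-permitted actions can still be poor: the 'moral
    learning curve' discussed above."""
    for trait in dispositions:
        dispositions[trait] += lr * feedback * action.get(trait, 0.0)
```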


To be sure, we may be able to improve the performance of our robots in the laboratory to a certain extent, and whether machines would actually fail to (exclusively) act morally in practice is an empirical issue. But our means for ensuring that machines work effectively in complicated moral situations may be limited—especially prospectively, rather than retrospectively—despite rigorous laboratory experience. Moreover, our willingness to get to the point where we can test the effectiveness of these sorts of robots' performance empirically depends on whether we are willing to accept the potential risks of doing so (e.g. the potential for the machine to behave immorally outside the laboratory). At the very least, creating autonomous robots that need to learn how to behave morally while they are already out in the world is unsettling, and much more work needs to be done to resolve these issues. If AMAs are programmed without this need to learn to be virtuous (i.e. the bottom-up aspect of virtue ethics), then the above worries would dissipate to a certain extent, albeit at the expense of not following the virtue-based (hybrid) moral framework that Wallach and Allen (2009) recommend.

So far, I have discussed some of the more general issues that arise once we adopt a virtue-based perspective. A complete virtue-based approach to the ethics of developing autonomous machines also involves a direct assessment of whether doing so is in line with specific moral virtues. Thus, in the following section I examine whether the creation of virtuous AMAs is just, that is, whether doing so promotes or violates the virtues associated with social justice.23

Ethical robots and social justice

The closer autonomous machines come to being virtuous moral agents, the more they may need to be considered as the proper targets of social justice. (If robots were not genuine autonomous moral agents, then, presumably, the goal of creating artificial moral agents would not have been reached.) In the extreme, in the future we may be forced to include autonomous AMAs into the wider (hitherto predominantly human) moral community, whether we like it or not.24 One reason for this is that, according to virtue ethics, all moral agents (human and nonhuman alike) have moral worth as such, which entails that they deserve to be treated in a morally sustainable manner, that is, virtuously. However, as virtuous AMAs move into and establish themselves within the wider moral community, certain ethical issues arise.

Importantly, how human moral agents treat robotic moral agents would become a legitimate ethical concern. Not only would we demand that robots treat humans well (Moor 2006), but it would also be required (based on the tenets of virtue ethics) that we treat those robots respectfully, charitably, justly, et cetera, as the autonomous moral agents that they are. In short, we would be required to behave virtuously towards virtuous robots, just as we expect them to behave virtuously towards us.25 Yet, once we look at the creation of AMAs from the perspective of the creator of virtuous machines—which is to say, once we evaluate the creation of virtuous AMAs with respect to whether doing so is consistent with the virtues associated with being a virtuous engineer, roboticist, and human being26—it becomes evident that the creation of virtuous AMAs compromises certain virtues, and hence is morally problematic. An example here is (the virtues associated with) social justice.

There are several virtues associated with social justice. According to Foot (1977, 97), 'justice has to do with what people owe each other by way of non-interference and positive service,' and is closely associated with the virtue of charity, which is "the virtue which attaches us to the good of others." Furthermore, benevolence demands that we treat others with care and compassion, and do not act malevolently towards them (e.g. do not cause them undue harm).

23 There are many other virtues that are relevant in the context of Machine Ethics (e.g. practical wisdom, modesty, integrity, et cetera), discussion of which is beyond the scope of this paper.
24 Calverley (2008) argues for a similar conclusion. These rights need not be comparable to human rights in all respects. Moreover, although we would need to do this in order to be ethically consistent, I am inclined to think that this is a point in machine development that we should be hesitant to arrive at.
25 Moral agents and moral patients both have moral worth, and both moral agency and moral patienthood put demands on the actions of moral agents; moral agents have certain obligations towards moral patients, who, although they do not themselves have any moral obligations, nevertheless deserve to be treated with respect as such. The paradigm example here is the 'normal' adult human (moral agent) and the 'normal' human infant (moral patient). If machines reach a certain level of sophistication and autonomy, then we may no longer be able to justifiably ignore their moral patienthood, which, by the received definition of the term, necessarily accompanies moral agency. Denying them moral respect just because they are not human moral agents is anthropocentric and inconsistent.
26 What it means to be a virtuous roboticist or engineer is an important issue, detailed discussion of which cannot be offered here. It is worth noting that the virtues associated with being a virtuous engineer may be role-specific, and do not necessarily directly correspond (in kind, in number) to the virtues associated with being a virtuous human. Yet, insofar as all engineers are human, then it may be the case that (a) role-specific virtues ought not to conflict with being (able to be) a virtuous human being, and (b) no human vices should be considered virtues associated with that role (Oakley and Cocking 2001). For example, even if considerations of justice are outside the scope of what it means to be a virtuous engineer, that the virtues associated with justice are human virtues demands that the behaviour of engineers not be inimical to them.


Although the virtues associated with justice may not directly supervene on rights (insofar as acting within one's rights does not always entail that one acted virtuously27), unjustifiably violating the rights of other moral agents typically represents a moral breach. Indeed, since considerations of social justice are central to most endorsable moral frameworks, to the extent that the creation of certain kinds of autonomous AMAs for certain purposes would be unjust (to them, to us), then it would be rare to find a moral framework that would sanction their creation. The important point argued for below is that developing autonomous machines that are to abide by a virtue-based moral framework simultaneously violates certain virtues, and thus doing so is not in alignment with the moral code we (and our robots) are subscribing to.

Forcing autonomous moral agents (human or machine) to perform certain tasks (e.g. forced military service, forced labour in the geriatric care industry, forced childcare, forced existence as a sex robot28) violates their rights and status as autonomous moral agents. By not giving them an opportunity to exercise their autonomy, or not giving them a say in what tasks they will perform, or an avenue for consent/dissent in what roles they will play, we are treating them disrespectfully, unduly paternalistically, i.e. as if they were not autonomous moral agents.29 Doing so is neither charitable nor benevolent; rather, it is unkind, unjust, and tyrannical. This remains the case even if the purpose of that entity is to be a slave; although such forced labour may not violate machine-centered virtue in any way (in the sense that a machine designed for these purposes may be able to behave morally, and even to flourish qua machine), creating autonomous AMAs for these purposes is already a violation of the virtues associated with justice. This is (in part) because engineers would be hindering the machine's freedom to the extent that no other functions would be open to it, and demanding that it perform duties that it may not consent to and have a chance to overrule; we are developing machines that are autonomous, and then treating them as if they were not autonomous.30 We would be acting unjustly (uncharitably, malevolently) through creating these robots for such anthropocentric and servile purposes. In short, we (engineers, machine ethicists, roboticists, philosophers) would not be behaving virtuously towards the virtuous machines that we created, and thus would be acting contrary to the moral framework that we designed those machines to follow.

To be sure, we (justifiably) force non-autonomous machines to do things all the time. Indeed, insofar as we are the ones developing and programming autonomous AMAs, then we are certainly (unavoidably) forcing them to do certain things, not least of which being coming into existence in the first place. The problem is not that we are designing and using technology for certain purposes, but rather that some of this technology is now also being designed with various degrees of autonomy. The worry here is that we would be disrespecting the autonomy of an otherwise autonomous moral agent; we are limiting the ways that such agents can express their autonomy (e.g. with respect to the functions that they will serve and the activities they will pursue), something we are not justified in doing, and something that we do not typically allow to happen with respect to autonomous human moral agents.

We can imagine an autonomous machine developed for the purpose of clowning, as a source of entertainment for young children. This robotic clown may be fully autonomous with respect to the decisions that it makes about what design to paint on a child's face, or what colour of balloon to inflate, or what name to call its rabbit, or to develop original magic tricks of its own devices, et cetera, and it may also routinely behave in a morally permissible manner. Yet, insofar as it does not have the freedom to no longer be a clown (i.e. to stop clowning and perform some other function), then it is the victim of injustice. Although we may be justified in forcing the robot to behave morally, the worry is that no such justification is forthcoming with respect to restricting its function, truncating the otherwise autonomous machine's freedom for unduly paternalistic (and anthropocentric) reasons. One way to avoid this problem may be to program machines to be autonomous prior to programming them for a specific purpose. Another way would be to program the machine for a specific purpose without programming it to be sufficiently autonomous so as to deserve to be respected as a legitimate moral agent.

27 See Hursthouse (1997, 240). For example, although women may have a right to abortion (founded on their rights to security of the person and bodily integrity), there may be cases where exercising that right is morally wrong, i.e. when doing so is callous or irresponsible, and violates specific virtues (e.g. courage, humility, self-confidence).
28 Certain low-level sex robots are currently on the market (see for example http://www.truecompanion.com). If Roxxxy's successors are ever developed to the point where they could wilfully say "No" to any given proposed sex act, certain interesting issues would undoubtedly arise (for instance, what the moral and legal standing of robotic rape is). However, not giving those (otherwise autonomous) robots sufficient autonomy to say "No" may be to violate their rights, resulting in nothing less than forced concubinage.
29 This problem does not arise until after the autonomous machine has been given its autonomy (moral agency), since, prior to this, the machine would have no autonomy (moral agency) that could be violated or disrespected.
30 Whether we can overcome this worry by giving the robot a say in deciding its purpose is an open question, but doing so would certainly serve to respect their autonomy and any accompanying rights they may have. Presumably, however, this is not something that developers of autonomous AMAs have any intention to do.

123
Out of character 147

their function/behaviour) without thereby violating certain virtues associated with social justice.

27 See Hursthouse (1997, 240). For example, although women may have a right to abortion (founded on their rights to security of the person and bodily integrity), there may be cases where exercising that right is morally wrong, i.e. when doing so is callous or irresponsible, and violates specific virtues (e.g. courage, humility, self-confidence).

28 Certain low-level sex robots are currently on the market (see for example http://www.truecompanion.com). If Roxxxy's successors are ever developed to the point where they could wilfully say "No" to any given proposed sex act, certain interesting issues would undoubtedly arise (for instance, what the moral and legal standing of robotic rape is). However, not giving those (otherwise autonomous) robots sufficient autonomy to say "No" may be to violate their rights, resulting in nothing less than forced concubinage.

29 This problem does not arise until after the autonomous machine has been given its autonomy (moral agency), since, prior to this, the machine would have no autonomy (moral agency) that could be violated or disrespected.

30 Whether we can overcome this worry by giving the robot a say in deciding its purpose is an open question, but doing so would certainly serve to respect their autonomy and any accompanying rights they may have. Presumably, however, this is not something that developers of autonomous AMAs have any intention to do.

Somewhat paradoxically, it seems as though the desire to create virtuous autonomous AMAs for certain functions demands that the machine's autonomy and status as a moral agent be taken into account, and hence that designing them for those purposes may undermine their moral status, representing a violation of the virtues associated with social justice. If the machine were to remain at the level of being merely a machine, rather than a sophisticated and autonomous artificial moral agent (one that just so happened to be a machine); that is, if the machine were not a moral agent that deserved to be treated with respect as such, then these worries might not arise, at least not with such intensity. But this would come at the expense of not creating explicit and full moral machines that could perform ethically relevant actions out in the world.

Arkin (2009a, 31) offers several reasons in favour of developing autonomous lethal robotic systems to be used in warfare. One reason is that

Autonomous armed robotic vehicles [for example] do not need to have self-preservation as a foremost drive, if at all. They can be used in a self-sacrificing manner if needed and appropriate, without reservation by a [presumably human] commanding officer.

The sort of robot that Arkin has in mind here is not a full ethical agent. Because of this, such a robot may not be a proper target of social justice; it is not yet a moral agent (or a moral patient). On my analysis, however, the closer autonomous AMAs come to being full AMAs, the more our using them (and creating them to be used) in ways such as the one described above becomes morally unsettling. Behaving virtuously towards advanced AMAs demands that we treat them justly (as moral agents in their own right), and hence that we take their status and rights seriously (e.g. their putative right to self-preservation, among others). Not doing so is indicative of an unjust character on our part. Just as it typically marks an injustice to force human moral agents to martyr themselves in battle, so too may it be unjust to force autonomous AMAs to do so, since creating a virtuous robot for this purpose contradicts the moral framework that the robot was designed to follow. As indicated earlier, it is not enough to argue that AMAs are merely machines (i.e. that they are not human moral agents), and hence that a virtue-based orientation towards them is not necessary or appropriate. If we ever get to the point where we can create full artificial moral agents, then there would be no relevant and sustainable moral difference between these robots and all other moral agents, whose status and rights are championed and protected. Being artificial (i.e. manufactured by humans) is not enough to disqualify them here.

To be sure, much of this depends on the level of moral agency that robots come to achieve, and the purposes we design them to have. None of the machines that exist at present meet the criteria for full ethical agency. Because of this, focusing on whether the creation of virtuous AMAs violates their rights is perhaps overly futuristic, given the current state of the art. Indeed, one may object that it is unclear whether developing explicit (or lesser) AMAs to be soldiers or sex workers is (also) unjust, since these machines would not be full moral agents or genuine rights-bearers, or perhaps not even deserving targets of our virtuous behaviour (i.e. moral patients). Moreover, the problem of being ethically inconsistent may not be as pressing a problem with respect to such less sophisticated machines.

Yet, issues of social justice arise here even if machines do not have anything resembling (human) rights. So, even if we are reluctant to accept the idea that machines could be proper moral agents (or moral patients) or rights-bearers in their own right, the Machine Ethics project still needs to meet other related demands of social justice. For example, one question that we need to ask is whether the development of virtuous AMAs is consistent with social justice in general. This understanding of justice is broader in scope; here we go beyond asking whether creating AMAs for certain purposes violates their rights, and ask (as well) whether having these sorts of machines present in the world promotes, sustains, or hinders widespread social justice.

For example, to the extent that substituting human soldiers for robots may increase the likelihood and occurrence of warfare (even if it decreases the number of human combatant casualties in those wars), or may decrease the likelihood of terminating warfare once it has begun, and given that the majority of wars are either inherently unjust or prone to being catalysts for unjust behaviour (or both),31 then performing actions that serve to increase the occurrence of (automated) warfare would contribute to injustice. Not only would the creation and use of autonomous military machines not lessen social injustice, it would directly and indirectly serve to fuel its continued expansion. Moreover, to the extent that only rich nations would have access to military AMAs (and assuming that having such access would increase a nation's chances of 'reigning victorious' in modern warfare), developing them may contribute to widening existing gaps between rich and poor nations, and between nations with strong and weak military resources. Or, given the immense resources that are going into relevant robotics and artificial intelligence research (especially in the military sector), resources that could otherwise be directed towards
the healthcare or education systems (for example), and given that this lack of additional funding sustains an existing status quo in which people of low socioeconomic status have poorer health and lower education, doing so indirectly contributes to sustained inequality and helps to fuel social injustice. The point of these speculations is to motivate the idea that issues of social justice arise here regardless of the status of the autonomous AMA, and need to be tackled and settled prior to actually creating those machines.32

31 See McMahan (2009).

32 Our understanding of the ethics of warfare may need to be re-evaluated; these regulations were drafted in order to protect the rights of human beings and to outline the criteria for just human warfare rather than machine warfare. See Asaro (2008) for a relevant discussion. However, we also need to ask whether we are allowed to amend the Laws of War and Rules of Engagement in order to make room for autonomous military AMAs, and whether the tenets of just war theory (the normative theory that these machines would be guided by) allow for the development and use of these sorts of machines in the first place.

Concluding remarks

In this paper I have offered two independent arguments. At the outset, it was argued that ensuring that the tenets of the moral framework that we are attempting to implement into autonomous AMAs allow for the development and use of those machines (for those purposes) represents an important challenge for Machine Ethics. The remainder of the paper explored whether the creation of virtuous machines is morally permitted by the tenets of virtue ethics, and hence whether it meets this constraint.

Based on the preliminary account offered here, there is some reason to believe that the creation of virtuous AMAs is allowable under a virtue-based moral framework in certain respects (e.g. advanced machines could be virtuous moral agents, and virtue ethics does have a normative component conducive to top-down computability), but impermissible in other respects. Most notably, there is good reason to be wary of creating autonomous machines that need to learn how to become virtuous in practice, and the development of autonomous AMAs for certain purposes would violate the virtues associated with social justice. To the extent that the creation of virtuous AMAs violates virtue at all, doing so is morally suspect from the perspective of virtue ethics. One upshot of all of this is that, unless these problems can be overcome, machine ethicists have good reason to look elsewhere for an ethical framework to implement into their autonomous robots (which is not to say for all levels of moral machines or for all purposes), even if such machines could be programmed to behave virtuously, since not doing so renders their projects ethically problematic.

To be sure, it is undeniable that having autonomous machines acting out in the world that do not have the constraints of morality programmed into them would be worse than having autonomous moral machines that do not meet the ethical constraint described herein.33 The central point being argued for throughout is not that the views of Wallach and Allen (2009) and others are necessarily wrong, or even that there is absolutely no role for virtue ethics to play in Machine Ethics. Rather, the goal has been to demonstrate that such thinkers do not attend to certain important ethical considerations, despite the fact that they need to do so in order to ensure that we (their human creators and users) are behaving morally when we create and use these kinds of machines. The ethical considerations highlighted in this paper need to come before the actual creation of these machines. Even if we believe we are justified in choosing 'the lesser evil' (and that this somehow justifies our morally questionable behaviour, even given our unfettered ability not to create autonomous moral machines at all), it does not follow that the creation of this kind of autonomous 'virtuous machine' is ethically defensible. If we can only create virtuous machines by behaving in a morally deplorable manner, then doing so despite this knowledge is morally dubious at best, and something that we (engineers, philosophers, roboticists) should work together to avoid.

33 This may turn out to be a false dichotomy; there is no indication at present to suggest that there is no way to develop moral machines that meet the ethical consistency constraint described herein.

Acknowledgments Thank you to the audience in Fredericton and the anonymous reviewers of this journal for their helpful comments on earlier versions of this paper. This paper has also benefited from enlightening conversations with Ron Arkin, Verena Gottschling, Marcello Guarini, Gloria Jones-Nibetta, Patrick Lin, Steve Torrance, John Sullins, and Andreas Traut, to whom I am very grateful.

References

Allen, C., Wallach, W., & Smit, I. (2006). Why machine ethics? IEEE Intelligent Systems, 21(4), 12–17.
Anderson, S. (2008). Asimov's 'three laws of robotics' and machine metaethics. AI & Society, 22(4), 477–493.
Anderson, M., & Anderson, S. (2007a). The status of machine ethics: A report from the AAAI symposium. Minds and Machines, 17(1), 1–10.
Anderson, M., & Anderson, S. (2007b). Machine ethics: Creating an ethical intelligent agent. AI Magazine, 28(4), 15–26.
Arkin, R. (2009a). Ethical robots in warfare. IEEE Technology and Society Magazine, Spring, 30–33.
Arkin, R. (2009b). Governing lethal behavior in autonomous robots. Dordrecht: Chapman & Hall.
Asaro, P. (2008). How just could a robot war be? In P. Brey, A. Briggle, & K. Waelbers (Eds.), Current issues in computing and philosophy (pp. 50–64). Amsterdam: IOS Press.
Calverley, D. J. (2008). Imagining a non-biological machine as a legal person. AI & Society, 22(4), 523–537.
Foot, P. (1977). Euthanasia. Philosophy & Public Affairs, 6(2), 85–112.
Grau, C. (2006). There is no 'I' in 'Robot': Robots and utilitarianism. IEEE Intelligent Systems, 21(4), 52–55.
Guarini, M., & Bello, P. (2011). Robotic warfare: Some challenges in moving from non-civilian to civilian theaters. In P. Lin, G. Bekey, & K. Abney (Eds.), Robot ethics: The ethical and social implications of robotics. Cambridge: MIT Press.
Hursthouse, R. (1997). Virtue theory and abortion. In D. Statman (Ed.), Virtue ethics: A critical reader (pp. 227–244). Washington: Georgetown University Press.
Hursthouse, R. (1999). On virtue ethics. Oxford: Oxford University Press.
Krishnan, A. (2009). Killer robots: The legality and ethicality of autonomous weapons. Farnham: Ashgate.
Lin, P., Bekey, G., & Abney, K. (2008). Report on autonomous military robotics: Risk, ethics, and design. Available at http://ethics.calpoly.edu/ONR_report.pdf. Retrieved February 1, 2010.
McMahan, J. (2009). Killing in war. Oxford: Clarendon Press.
Moor, J. (2006). The nature, importance, and difficulty of Machine Ethics. IEEE Intelligent Systems, 21(4), 18–21.
Oakley, J., & Cocking, D. (2001). Virtue ethics and professional roles. Cambridge: Cambridge University Press.
Powers, T. M. (2006). Prospects for a Kantian machine. IEEE Intelligent Systems, 21(4), 46–51.
Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the 21st century. New York: Penguin.
Sparrow, R. (2007). Killer robots. Journal of Applied Philosophy, 24(1), 62–77.
Swanton, C. (2003). Virtue ethics: A pluralistic view. New York: Oxford University Press.
Tonkens, R. (2009). A challenge for machine ethics. Minds and Machines, 19(3), 421–438.
Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. Oxford: Oxford University Press.