
Artificial Life and Robotics (2019) 24:291–296

https://doi.org/10.1007/s10015-019-00525-1

ORIGINAL ARTICLE

Artificial intelligence, ethics and human values: the cases of military drones and companion robots
Thibault de Swarte1 · Omar Boufous1 · Paul Escalle1

Received: 26 March 2018 / Accepted: 11 December 2018 / Published online: 29 January 2019
© International Society of Artificial Life and Robotics (ISAROB) 2019

Abstract
Can artificial intelligence (AI) be more ethical than human intelligence? Can it respect human values better than a human?
This article examines some issues raised by AI with respect to ethics. The utilitarian approach can be a solution, especially
the one that uses agent-based theory. We have chosen two extreme cases: combat drones, vectors of death, and life-supporting
companion robots. The ethics of AI and unmanned aerial vehicles (UAV) must be studied on the basis of military ethics and
human values when fighting. Despite the fact that they are not programmed to hurt humans or harm their dignity, companion
robots can potentially endanger their social and moral as well as their physical integrity. An important ethical condition is that
companion robots help the nursing staff to take better care of patients while not replacing them.

Keywords Ethics · Artificial intelligence · Human values · Companion robots · Military drones · UAV

1 Introduction

Can artificial intelligence (AI) be more ethical than human intelligence? Can it respect human values better than a human? Can a utilitarian approach using artificial agents promote ethics? Even if this article is an exploratory research on the above issues, we try to answer two difficult questions raised by AI with respect to ethics. That is why we have chosen two very extreme cases: combat drones, vectors of killing, and life-supporting companion robots.

Contrary to a fighter pilot, a military drone is not subject to stress during combat, but would it be possible for it to respect human values? Can companion robots help improve the well-being of an increasing number of older people?

We will present the context of the Ethicaa team funded by the Agence Nationale de la Recherche (ANR/National Research Agency, France, ANR-13-CORD-0006) in Part 2. We will then discuss the need for an ethics of AI (Part 3). Then, we will study the possibility of ethics and morality embedded in AI (Part 4). Finally, we will look at the cases of unmanned aerial vehicles (UAV) in Part 5 and companion robots (Part 6).

The key point of this article is that relations between AI and humans must be discussed from an ethical and human values perspective based on cases examined in their specific contexts rather than from a too general perspective. This is the main originality of the article below, based on the cases of military drones and companion robots.

2 The context: Ethicaa's research programme

Machines and agents have an increasing number of autonomous functions and consequently are less and less supervised by human operators or users. Therefore, when machines interact with humans, we need to ensure that they do not harm us/them or threaten our/their autonomy, especially decision autonomy. Consequently, the question of an ethical regulation or control of such autonomous agents is raised. This has been discussed by several authors including Wallach and Allen. As stated by Picard, the greater the freedom of a machine, the more it will need moral standards [1]. In this article, we have chosen to focus on "context ethics" rather than broad moral standards. Each ethical rule depends on the context in which a drone or a companion is used.

This work was presented in part at the 23rd International Symposium on Artificial Life and Robotics, Beppu, Oita, January 18–20, 2018.

* Thibault de Swarte
Thibault.deSwarte@imt-atlantique.fr

1 IMT Atlantique, LASCO Laboratory, Rennes, France


The objectives of the Ethicaa project are twofold: (1) defining what should be a moral autonomous agent and a system of moral autonomous agents, and (2) defining and resolving the ethical conflicts that could occur. Ethical conflicts are characterized by the absence of an optimal solution: there may only be outcomes more desirable than others. Nevertheless, when a decision must be made it should be an informed decision based on an assessment of the arguments and values at stake.

3 The need for ethical AI

3.1 AI definition

Artificial intelligence can be described by a universal triad, namely the data brought by the environment, the operations defined as the logic which mimics human behavior and, finally, a control phase aiming at retroacting over its previous actions. Its definition is essentially based on two complementary views: one focused on the behavior and how it acts, especially as a human, and the other which emphasizes the reasoning processes and how it reproduces human skills [2, 3]. However, both points of view insist on the rational behavior that an AI must have. Moreover, it is important to pay attention to which kind of AI we are dealing with: strong or weak AI [4]. Weak AI, also known as narrow AI, is shaped by behaviors answering to observable and specific tasks that may be represented by a decision tree. On the other side, strong AI, or artificial general intelligence, can copy human-like mental states. For this type of AI, this means that decision abilities or ethical behavior are issues that need to be taken care of. Finally, a strong AI could find the closest solution to the given objective and learn with an external reversion.

The latter is the one that is posing unprecedented problems that researchers are just starting to study. In fact, a system embedding strong AI is able to learn without human assistance or injection of additional data since the AI algorithm generates its own knowledge. Therefore, the exterior observer or the user of such agents will no longer know what the AI knows, what it is capable of doing nor the decisions it is going to take. Hence the need to establish an ethical framework that defines an area of action and prevents the system from taking decisions contrary to ethics.

3.2 How to implement ethics rules?

In order to implement ethical rules, there are two approaches. The first one, named the top-down approach, is based on ethical rule-abiding machines [5, 6]. The strategy is to respect unconditionally the ethical principles related to morality, such as "Do not kill". However, without understanding the potential consequences of the empirical decisions taken, an AI system creates numerous approximations that are a significant drawback. This can make rules conflict, even for the three laws of robotics [7]; it may also lead to unintended consequences due to added rules [8]. It should be noted that even inaction can be taken into account for injuring humans. Moreover, the complexity of interaction between humans' priorities may lead to inappropriate interpersonal comparisons of various added laws [9]. The second method, called bottom-up, focuses on case studies in order to learn general concepts. The case studies make it possible for a strong AI to autonomously learn wrong and biased principles and even generalize them by applying them in new situations it encounters. These types of approaches are considered to be dangerous. In fact, in this process of learning, basic ethical concepts are acquired through a comprehensive assessment of the environment and its compliance with previous knowledge. This is done without any top-down procedure. The result of this learning will be taken into account for future decision-making [10, 11]. Eventually, in the case where an AI algorithm is facing a new situation it has not encountered before, the extrapolation without a control phase may result in perilous situations for humans [12].
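To make the top-down idea concrete, here is a minimal sketch, written only for this discussion: a hypothetical agent that filters candidate actions against hard ethical rules before acting. The rule set, action names and predicted consequences are illustrative assumptions of ours, not the Ethicaa formalism or any implemented system.

```python
# Minimal top-down sketch: candidate actions are checked against hard ethical
# rules before any of them may be executed. Action names and their predicted
# consequences are hypothetical placeholders for illustration only.

HARD_RULES = [
    lambda consequences: not consequences.get("kills_human", False),    # "Do not kill"
    lambda consequences: not consequences.get("injures_human", False),  # "Do not injure"
]

def permissible(consequences: dict) -> bool:
    """An action is permissible only if every hard rule accepts its predicted consequences."""
    return all(rule(consequences) for rule in HARD_RULES)

def choose(candidates: dict) -> str | None:
    """Return the first permissible action, or None if every candidate is forbidden."""
    for action, consequences in candidates.items():
        if permissible(consequences):
            return action
    return None

if __name__ == "__main__":
    candidates = {
        "fire":     {"kills_human": True},
        "warn":     {"injures_human": False},
        "stand_by": {},
    }
    print(choose(candidates))  # -> "warn"
```

The weaknesses discussed above show up immediately in such a sketch: the agent is only as good as its predicted consequences, doing nothing ("stand_by") passes every rule even when inaction injures someone, and two added rules can conflict with no principled way to arbitrate between them.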
4 AI embedding moral and ethical principles

A utilitarian approach to ethics consists in choosing, among a set of possibilities, the solution that leads to the action maximizing intrinsic good or net pleasure [13]. This involves quantifying Good or Evil in a given situation. However, certain situations supported by ethical reasons with an empirical study may prohibit the combined execution of certain actions. These complex cases are at the origin of dilemmas, and the ethical principles do not make it possible to establish a preference. Therefore, autonomous agents need to be endowed with the ability to distinguish the most desirable option in the light of the ethical principles involved.

To achieve this goal, this article proposes in the following subsections a method called a utility function as a means of avoiding ethical dilemmas.

4.1 Using a utility function to help agents make ethical decisions

In order to achieve this goal, a number of solutions have been proposed [14]. One of them is the utility function, also known as the objective function. This function is used to assign values to outcomes or decisions. The optimal solution is the one that maximizes the utility function.


This approach, based on quantitative ethics, determines which action maximizes benefit and minimizes harm. Its objective is to make it possible for an AI algorithm to take the right decisions, particularly when it encounters an ethical dilemma.

From a mathematical point of view, the utility function takes a state or a situation as an input parameter and gives as a result an output which is a number [15]. This number is an indication of how good the given state or situation is for the agent. The agent should then make the decision that leads to the state that maximizes the utility function.

For instance, let us take the case of an autonomous vehicle, and let us assume that the car is in a situation where harm is unavoidable, and that it would inevitably either hit two men on the road or crash into a wall, killing the passenger it is carrying [16]. Based on our previous definition of utilitarian ethics, the decision that will minimize harm is the one that will lead to killing as few people as possible. Therefore, the car should crash and kill the passenger to save the two pedestrians, because the utility function of this outcome is the highest. The same reasoning applies to military drones when they have to choose between multiple outcomes that involve moral and ethical principles.
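A minimal sketch of what such a utility function could look like for the example above is given below; the outcome names and the "minus one per fatality" utility scale are our own illustrative assumptions, not a model taken from [15] or [16].

```python
# Illustrative utility-function sketch for the unavoidable-harm example above.
# The outcomes and the utility scale (minus one per fatality) are hypothetical
# choices made only for this example.

OUTCOMES = {
    "swerve_into_wall": {"fatalities": 1},  # the passenger is killed
    "stay_on_course": {"fatalities": 2},    # the two pedestrians are killed
}

def utility(state: dict) -> float:
    """Map a resulting state to a single number: the fewer deaths, the higher the utility."""
    return -float(state["fatalities"])

def best_decision(outcomes: dict) -> str:
    """Return the decision leading to the state that maximizes the utility function."""
    return max(outcomes, key=lambda decision: utility(outcomes[decision]))

if __name__ == "__main__":
    print(best_decision(OUTCOMES))  # -> "swerve_into_wall"
```

The point of the sketch is only that the agent's choice reduces to an arg max over candidate outcomes; whether a single number can carry the moral weight of such a decision is precisely what Sect. 4.2 questions.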
Autonomous cars embedding AI algorithms using the utility function are not yet marketed. Some models available to the general public have an autopilot mode that still requires the presence of a human being behind the steering wheel who will make a decision in case of a problem. Fully autonomous cars still ride in test environments [17]. In the near future, the people who are likely to buy this type of car will primarily be public institutions such as municipalities. For instance, the city of Helsinki is testing an autonomous bus line, RoboBusLine, which carries passengers on a defined road at a limited speed, and an autonomous shuttle is also in service in Las Vegas [18]. However, these are still prototypes in a test phase with an operator on board. The other customers that may be interested in using autonomous vehicles are companies that make deliveries, given the advantage of automating the tasks, resulting in cost reduction and efficiency. In fact, Amazon, FedEx and UPS are investigating solutions for driverless trucks.

The utility function is currently under investigation as an active solution to avoid ethical dilemmas without modifying the policy in use [19]. Autonomous robots are expanding and the aim is not only to deal with ethical dilemmas but also to reduce uncertainty by quantifying problems such as exploration or mapping of unknown environments; both can be stochastically defined (Shannon or Rényi entropy) [20, 21]. Describing and taking actions in an incompletely defined world can be done with the help of estimators, but utility functions describe the perceptual state in line with the rules, and an active strategy can hence be implemented. This is already done for robot vision, for example [22].
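For reference, the two entropy measures mentioned in the paragraph above are the standard ones (the definitions are general, not specific to [20, 21]): for a discrete belief $p_1, \dots, p_n$ over map cells or hypotheses,

$$H(p) = -\sum_{i=1}^{n} p_i \log p_i, \qquad H_{\alpha}(p) = \frac{1}{1-\alpha}\,\log \sum_{i=1}^{n} p_i^{\alpha} \quad (\alpha > 0,\ \alpha \neq 1),$$

where $H_{\alpha}$ is the Rényi entropy and tends to the Shannon entropy $H$ as $\alpha \to 1$. An exploration utility can then reward the actions expected to reduce this entropy, which is the sense in which uncertainty is "quantified" above.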
4.2 Limits and dangers of the utilitarian approach

The approach previously described, consisting in quantifying situations and assessing them with a utility function through a model, has its own limits as far as strong AI is concerned.

For weak AI, engineers at the design stage can implement decision trees to establish rules. They can anticipate the behavior of the AI more easily. On the other hand, as mentioned in Sect. 3, advanced AI systems learn directly from the environment and adapt accordingly. By doing so, an external observer cannot always predict or anticipate the actions of such systems [23]. This is true for the AlphaGo algorithm, which takes decisions and implements strategies that even experts in the game cannot understand, although it leads to an optimum solution. The intelligent agent behaves like a black box whose internal functioning is unknown. This is particularly dangerous when it comes to autonomous vehicles or UAV drones that put human life at stake. Using only a utility function to decide whether or not a UAV could be used in an armed conflict could be considered as a war crime (see Sect. 5.2 below).

Indeed, it is essential to test AI algorithms in different environments [23] and cover as many situations as possible before they are registered for use. This involves confronting algorithms with different situations and ensuring they behave properly by taking the most ethical decisions possible. It will then be possible to identify anomalies and correct them immediately.

5 Artificial intelligence, ethics and human values: the case of UAV

The ethics of artificial intelligence are post-Kantian [24] in the sense that they do not accept any opposition between natural and cultural sciences. They actually seek to overcome such an opposition. In the "Ethicaa" context, it is hoped that a contextual evaluation will be made for each particular situation. The definition of a legal framework for UAVs and its enactment matter.

Ethical macro-dilemmas are linked to the fact that a democracy cannot, because of its human values, use drones without precise rules of engagement. Micro-dilemmas are the operational processes by which a drone performs an action. Do cameras, for example, provide a clear view of the target? If not, what should the operator do? The meso-dilemmas are the most interesting to study. They are at the articulation of the two types of dilemmas above. George Lucas, Professor of Ethics at the Naval Postgraduate School, has expressed caution primarily about marrying "strong Artificial Intelligence with full lethality (...). I do not want robots making lethal targeting decisions entirely on their own" [25].


In general, the military doctrine emphasizes that ethical choices are ultimately a matter of values, organized by the military rules and their hierarchy. Weber [26], with his vision of the "ethics of responsibility" and his model of "legal rational decision" making, seems best suited to deal with ethical conflicts. It would indeed be irrelevant to consider an artificial agent building an ethical decision in the place of a human agent, because ethics are precisely what is produced by natural human intelligence. If we seek to adapt Weber's [27] framework to that of Ethicaa, we should also ask ourselves the question of "scientific ethics and the spirit of artificial intelligence": scientific ethics has therefore to be as independent as possible from political or religious ethics.

Our colleague Ganascia [28] defines the "spirit" of artificial intelligence as follows: "it shows how this epistemological view opens on the many contemporary applications of artificial intelligence that have already transformed, and will continue to transform, all our cultural activities and our world."

5.1 Traditional military ethics and combat drones

Danet et al. [25] believe that the principles of military ethics are generally stable over time since the "War Art" (Machiavelli 1520) [29] and that a specifically human honor code is required. For Gérard de Boisboissel, interviewed by Thibault de Swarte, since the age of chivalry in Europe the command chain is still ultimately responsible and no artificial intelligence can replace it. The ethical dilemma is always that of the high-ranking officer who must take responsibility for ordering a drone to kill in accordance with "military grandeur and servitude" [30]. We consider Mishima and his ethic of the Samurai ("Hagakuré") [31], dating back to the eighteenth century, to be part of the same military values tradition.

5.2 UAV and military ethics in the twenty-first century

An armed drone does not create a break from other weapons. On the contrary, there is a reversibility that the previous weapon systems did not have (artillery, fighter jets, etc.). For the French military expert Pierre Servent, interviewed by Thibault de Swarte, the question of whether a combat drone can be ethical or not can be ironical and serious at the same time. It should in fact be asked for all types of weapons. The key question is, "is the shooter under a democratic control?" A democracy cannot delegate the right to kill to a machine.

The French position is characterized by precaution. This country wants to develop a drone use doctrine as a preliminary step. In contrast, the USA have recently trained more UAV pilots than jet fighter pilots and have afterwards developed a doctrine. Democratic values have sometimes been set aside in the name of short-term efficiency. Later, in the 2010s, ethical questions were asked by the press and public opinion. Is it acceptable to consider that an artificial agent could take an ethical decision in the place of a human agent, while ethics are precisely what is produced by human intelligence?

Finally, a drone is not in essence "more" or "less" ethical. The military process is often the same whether it is a drone or a combat jet. An important problem is the reliability of the equipment and the reversibility capability it offers or not. In any case, avoiding "Fire and Forget" is necessary; combat aircraft, too, may fail to hit their targets.

AI can be a condition for better ethics and respect of human values, because a UAV has less stress than a human pilot in a certain number of war operations.

6 The case of companion robots

6.1 The global context

The dignity of robots and the human person is sometimes the topic of intense academic debates. However, it had never been addressed by the case-law of the European Court of Human Rights before 2014 [32]. It is therefore an emerging issue for which research is fully legitimate in order to be able to inform the legislator in due course.

Europe in general and France in particular show a certain skepticism towards companion robots. This skepticism is taken into account by the European Parliament. At the beginning of 2017 its Committee on Legal Affairs adopted a text calling on the executive body—the European Commission—to take some measures to control robots and artificial intelligence and to settle questions regarding their compatibility with ethical standards and reliability. If UAVs are not concerned by this text proposal, companion robots are.

Companion robots have been introduced in response to problems emerging from the ageing of the population and to address therapeutic needs. Some people need care and others companionship. Today, companion robots assist autistic children or the elderly, for example those with dementia who need companionship in retirement homes. The lack of medical personnel in health services sometimes explains the use of such robots. In retirement homes in Denmark, a therapeutic robot called Paro plays the role of a pet. Its sensors allow it to respond to the cuddles of patients by moving its tail, body and eyes. This robot mimics the behavior of a seal. And as with all "carebots", it operates in the intimacy of the people who use it on a daily basis. Although it has been shown that this type of robot relieves older people from a psychological point of view [33], its proximity is problematic on two levels: on the one hand, it can jeopardize the privacy and intimacy of the person; on the other hand, it leads to a reduction in human contact, and potentially to social isolation in the longer term. Both aspects are discussed in the two following parts.


6.2 The issue of human dignity and privacy

The right to privacy is a fundamental right guaranteed by Article 8 of the European Convention on Human Rights and by the Universal Declaration of Human Rights. If robots are used to lift—as does the nursing robot Robear [34]—and move people in a way that implies that they are objects, it could have the effect of reducing their self-esteem and making them feel humiliated. By doing so, this harms their dignity [35].

The real problem of robots is the program they contain. This program is written by a computer scientist with his own bias, his own vision of the world and especially his imperfections and flaws. And the question is not whether the algorithms are ethical or not but whether the way they are used is good or bad; whether they fulfill human ethical intentions and values as humanly and socially as possible.

6.3 Ethics, human values and companion robots

There are currently no procedures for controlling these algorithms that are trusted by users when new forms of discrimination, censorship, impasses, errors, artifacts, social norms, false information and predictions appear. We are unable to manage these problems because the legal system is not yet adapted to the exponential development of AI and its daily applications. On the contrary, a jurist such as Alain Bensoussan, heard at the EOGN research center in Paris, supports the view that we should protect robots from human interventions on behalf of robots' dignity.

The vast majority of companion robots, such as NAO, are connected to the Internet. Today, data are recorded by integrated cameras and microphones and sent to remote servers. Often, the owners of these robots tend to forget why their personal information is recorded and stored in the cloud. The current programming mode is the one that tends to make robots indispensable in everyday activities. But the robot must not become a "prescriber" of behavior, according to Serge Tisseron. This would mean that the human would depend on the robot and that the robot would reify the human, which would be contrary to human dignity. In the case of "carebots" used in hospitals, it is necessary that they preserve and guarantee the autonomy of the patient with regard to the medical staff and treatments. This requires the consent of the patient or his family and the possibility for him to retract and stop his treatment without conflict. Detailed information should also be given specifying the circumstances of robot use for the patient. Moreover, every computing device that is created is hackable, including neural implants and military systems. This means that robots have security flaws too. For instance, in 2016, Chinese hackers managed to remotely control an autonomous car. By doing so, they had full control of all commands of the vehicle and they were able to deliberately cause an accident. One can imagine how the consequences of such actions could be lethal. Therefore, even if they are intelligent and sophisticated, robots can be hijacked and their source code can be modified to be used for criminal purposes. Despite the fact that they are not programmed to hurt humans or harm their dignity, robots can potentially endanger their social, moral and physical integrity.

7 Conclusion

This article has examined some issues raised by artificial intelligence with respect to ethics and human values. It is an exploratory research on the above issues.

The utilitarian approach is a solution, especially one that uses agent-based theory. Using a utility function can help agents make ethical decisions.

We have here chosen two extreme and very different cases: combat drones, vectors of death, and companion robots, in favor of life.

The ethics of AI must be studied in all circumstances on the basis of human values when fighting and in military ethics. Combat UAVs are part of the contemporary history of such ethics, which research must continue to develop and universalize, thanks to concepts like shot reversibility.

Despite the fact that they are not programmed to hurt humans or harm their dignity, companion robots can potentially endanger their social and psychological as well as their physical integrity. A general ethical condition, in line with human values, is that companion robots can help the nursing staff to take better care of patients but do not replace them.

We hope that our research will continue on these topics and other emerging ones in our societies, like ethics applied to autonomous cars, digital platforms or the internet of things (IoT).


References

1. Picard RW (2003) Affective computing: challenges. Int J Hum Comput Stud 59(1):55–64. https://www.sciencedirect.com/science/article/abs/pii/S1071581903000521. Accessed 27 Jan 2019
2. Nilsson NJ (1980) Principles of artificial intelligence, pp 17–18. https://www.springer.com/la/book/9783540113409. Accessed 27 Jan 2019
3. Russell S, Norvig P (1995) Artificial intelligence: a modern approach, pp 4–5. https://readyforai.com/download/artificial-intelligence-a-modern-approach-3rd-edition-pdf/. Accessed 27 Jan 2019
4. Bringsjord S, Schimanski B (2003) What is artificial intelligence? Psychometric AI as an answer, p 6. https://www.ijcai.org/Proceedings/03/Papers/128.pdf. Accessed 27 Jan 2019
5. Powers TM (2006) Prospects for a Kantian machine, pp 48–50. https://www.academia.edu/31467771/Prospects_for_a_Kantian_Machine. Accessed 27 Jan 2019
6. Hanson R (2009) Prefer law to values. http://www.overcomingbias.com/2009/10/prefer-law-to-values.html. Accessed 27 Jan 2019
7. Asimov I (1950) "Runaround". I, Robot, p 40. https://www.ttu.ee/public/m/mart.../Isaac_Asimov_-_I_Robot.pdf. Accessed 27 Jan 2019
8. Pettit P (2003) Akrasia, collective and individual. https://core.ac.uk/download/pdf/156616471.pdf. Accessed 27 Jan 2019
9. Wallach W, Allen C, Smit I (2008) Machine morality: bottom-up and top-down approaches for modelling human moral faculties, pp 570–579. https://link.springer.com/article/10.1007/s00146-007-0099-0. Accessed 27 Jan 2019
10. McLaren B (2006) Computational models of ethical reasoning, pp 30–32. https://www.cs.cmu.edu/~bmclaren/pubs/McLaren-CompModelsEthicalReasoning-MachEthics2011.pdf. Accessed 27 Jan 2019
11. Guarini M (2006) Particularism and the classification of moral cases, pp 23–26. https://ieeexplore.ieee.org/document/1667949/. Accessed 27 Jan 2019
12. Muehlhauser L, Helm L (2012) Intelligence explosion and machine ethics. https://intelligence.org/files/IE-ME.pdf. Accessed 27 Jan 2019
13. Sinnott-Armstrong W (2015) Consequentialism. Stanford Encyclopedia of Philosophy. https://stanford.library.sydney.edu.au/entries/consequentialism/. Accessed 27 Jan 2019
14. Anderson M, Leigh Anderson S (2011) Machine ethics. https://www.cambridge.org/core/books/machine-ethics/D7992C92BD465B54CA0D91871398AE5A. Accessed 27 Jan 2019
15. Hibbard B (2011) Model-based utility functions. https://arxiv.org/vc/arxiv/papers/1111/1111.3934v1.pdf. Accessed 27 Jan 2019
16. Bonnefon J-F, Rahwan I, Shariff A (2015) Autonomous vehicles need experimental ethics: are we ready for utilitarian cars? https://www.researchgate.net/publication/282843902_Autonomous_Vehicles_Need_Experimental_Ethics_Are_We_Ready_for_Utilitarian_Cars. Accessed 27 Jan 2019
17. Tian Y, Pei K, Jana S, Ray B (2018) DeepTest: automated testing of deep-neural-network-driven autonomous cars. https://arxiv.org/abs/1708.08559. Accessed 27 Jan 2019
18. Kirk B (2016) Business opportunities in automated vehicles. J Unmanned Veh Syst. http://www.nrcresearchpress.com/doi/full/10.1139/juvs-2015-0038#.XE3kH88zYXp
19. Everitt T, Filan D, Daswani M, Hutter M (2016) Self-modification of policy and utility function in rational agents. https://arxiv.org/abs/1605.03142. Accessed 27 Jan 2019
20. Carrillo H, Dames P, Kumar V, Castellanos JA (2017) Autonomous robotic exploration using a utility function based on Rényi's general theory of entropy. https://link.springer.com/article/10.1007/s10514-017-9662-9. Accessed 27 Jan 2019
21. Keren S (2017) Redesigning stochastic environments for maximized utility. https://www.aaai.org/ocs/index.php/AAAI/AAAI17/paper/download/14549/14200. Accessed 27 Jan 2019
22. Bhakta A, Hollitt C, Browne WN, Frean M (2018) Utility function generated saccade strategies for robot active vision: a probabilistic approach. https://link.springer.com/article/10.1007/s10514-018-9752-3. Accessed 27 Jan 2019
23. Hibbard B (2015) Ethical artificial intelligence. https://arxiv.org/abs/1411.1373. Accessed 27 Jan 2019
24. Kant E (1785) Fondements de la métaphysique des mœurs. Feedbooks. http://fr.feedbooks.com/book/114/fondements-de-la-m%25C3%25A9taphysique-des-moeurs. Accessed 25 Jan 2019
25. Danet D, Doaré R, Hanon JP, de Boisboissel G (2014) Robots on the battlefield: contemporary issues and implications for the future. Combat Studies Institute Press, France, p 301. https://apps.dtic.mil/docs/citations/ADA605889. Accessed 27 Jan 2019
26. Weber M (1917/1949) The meaning of 'Ethical Neutrality' in sociology and economics. In: The methodology of the social sciences. https://www.taylorfrancis.com/books/9781351505574/chapters/10.4324%2F9781315124445-1. Accessed 25 Jan 2019
27. Weber M (1920) The Protestant ethic and the "spirit" of capitalism and other writings. https://is.muni.cz/el/1423/podzim2013/SOC571E/um/_Routledge_Classics___Max_Weber-The_Protestant_Ethic_and_the_Spirit_of_Capitalism__Routledge_Classics_-Routledge__2001_.pdf. Accessed 27 Jan 2019
28. Ganascia J-G (2010) Epistemology of AI revisited in the light of the philosophy of information. Knowl Technol Policy 23(1–2):57–73. https://link.springer.com/article/10.1007/s12130-010-9101-0. Accessed 27 Jan 2019
29. Niquet V (1988) L'Art de la guerre de Sun Zi, traduction et édition critique. Éditions Economica. https://www.economica.fr/livre-l-art-de-la-guerre-sun-zi-niquet-valerie,fr,4,9782717858969.cfm
30. Vigny A de (1835) Grandeur et servitude militaires. http://www.bouquineux.com/?ebooks=82&Vigny. Accessed 25 Jan 2019
31. Mishima Y (1985) Le Japon moderne et l'éthique samouraï. Gallimard, Paris. http://www.gallimard.fr/Catalogue/GALLIMARD/Arcades/Le-Japon-moderne-et-l-ethique-samourai
32. European Court of Human Rights (1999–2014) Reports of judgments and decisions—cumulative index 1999–2014. https://www.echr.coe.int/Documents/Index_1999-2014_ENG.pdf. Accessed 25 Jan 2019
33. Robinson H, MacDonald B, Kerse N, Broadbent E (2013) The psychosocial effects of a companion robot: a randomized controlled trial. https://www.ifa-fiv.org/wp-content/uploads/2015/11/2013-Paro-loneliness-RCT.pdf. Accessed 27 Jan 2019
34. McMullan T (2016) The Guardian—how a robot could be grandma's new care assistant. https://www.theguardian.com/technology/2016/nov/06/robot-could-be-grandmas-new-care-assistant. Accessed Oct 2016
35. Sharkey A, Sharkey N (2010) Granny and the robots: ethical issues in robot care for the elderly. https://link.springer.com/article/10.1007/s10676-010-9234-6. Accessed 27 Jan 2019

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
