Autonomous Vehicle Ethics
The Trolley Problem and Beyond
Edited by
RYAN JENKINS, DAVID ČERNÝ, AND TOMÁŠ HŘÍBEK
Oxford University Press is a department of the University of Oxford. It furthers the
University’s objective of excellence in research, scholarship, and education by publishing
worldwide. Oxford is a registered trade mark of Oxford University Press in the UK and
certain other countries.
Published in the United States of America by Oxford University Press
198 Madison Avenue, New York, NY 10016, United States of America.
© Oxford University Press 2022
All rights reserved. No part of this publication may be reproduced, stored in a retrieval
system, or transmitted, in any form or by any means, without the prior permission in
writing of Oxford University Press, or as expressly permitted by law, by license, or under
terms agreed with the appropriate reproduction rights organization. Inquiries concerning
reproduction outside the scope of the above should be sent to the Rights Department,
Oxford University Press, at the address above.
You must not circulate this work in any other form and you must impose this same
condition on any acquirer.
Library of Congress Cataloging-in-Publication Data
Names: Jenkins, Ryan, editor. | Černý, David, editor. |
Hříbek, Tomáš, editor.
Title: Autonomous vehicle ethics : the trolley problem and beyond /
edited by Ryan Jenkins, David Černý, and Tomáš Hříbek.
Description: New York, NY, United States of America :
Oxford University Press, [2022] |
Includes bibliographical references and index.
Identifiers: LCCN 2022000306 (print) | LCCN 2022000307 (ebook) |
ISBN 9780197639191 (hbk) | ISBN 9780197639214 (epub) | ISBN 9780197639221
Subjects: LCSH: Automated vehicles—Moral and ethical aspects. |
Double effect (Ethics)
Classification: LCC TL152.8 .A8754 2022 (print) | LCC TL152.8 (ebook) |
DDC 629.2—dc23/eng/20220315
LC record available at https://lccn.loc.gov/2022000306
LC ebook record available at https://lccn.loc.gov/2022000307
DOI: 10.1093/oso/9780197639191.001.0001
Ryan Jenkins dedicates this book to those injured or killed
in automobile accidents the world over—and those working
to bend the arc of technological progress to minimize the
human suffering that results.
David Černý dedicates this book to his fiancée, Alena, who
gives meaning and joy to his work.
Tomáš Hříbek dedicates the book to all those who are tired
of being drivers and hope to be liberated by AV technology.
Contents
Acknowledgments
Contributors
Introduction
References
Foot, Philippa. 1967. “The Problem of Abortion and the Doctrine of Double Effect.”
Oxford Review 5: 5–15.
Thomson, Judith Jarvis. 1976. “Killing, Letting Die, and the Trolley Problem.” The
Monist 59, no. 2: 204–17.
PART I
AUTONOMOUS VEHICLES AND TROLLEY PROBLEMS
Introduction by David Černý
Swerve right and kill one passerby or swerve left and kill two (or
more) persons on impact? This seemingly simple question has
occupied the attention of many bright minds for decades. This should come as no surprise. Trolley-type scenarios, which have flourished in scholarly publications since Philippa Foot's seminal paper on the ethics of abortion, seem to bear a close structural similarity to collision situations that may be encountered by autonomous vehicles
(AVs) on the road. The leading assumption has been that analogical
reasoning might be employed, enabling one to transfer important
moral conclusions from simplified thought experiments to real-life
situations. If, for example, the rightness of one’s choice in trolley-
type scenarios depends on the maximizing strategy (i.e., save the
most lives possible), then the same decision procedure can also be
employed in richer, nonidealized conditions of everyday traffic.
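To make the analogy concrete, a maximizing strategy is trivially expressible as a decision procedure; the following sketch (ours, with assumed casualty counts) simply picks the option that saves the most lives.

```python
# A minimal rendering of the maximizing strategy mentioned above (our
# illustration; option names and casualty counts are assumptions): given
# options with known death tolls, pick the option that minimizes deaths.

options = {"swerve_right": 1, "swerve_left": 2}  # deaths per option (assumed)
best = min(options, key=options.get)
print(best)  # -> swerve_right
```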
Notwithstanding the considerable effort that has gone into the development and use of trolley-type scenarios, there has been a
steadily growing consensus that our ethical reflections should move
beyond these scenarios toward more realistic considerations. There
are still some scholars defending the importance of trolleyology in
the context of AV ethics, but many—maybe the majority—endorse
some sort of Trolley Pessimism, according to which either there are
not any relevant similarities between trolley and real-life road
scenarios, or there are insurmountable technological challenges
calling into question the very possibility of programming AVs to
follow a set of ethical rules provided by programmers.
Thus, the common thread running through all of the contributions
in Part I is the effort to go beyond the trolley problem in an attempt
to address the ethical issues raised by AVs.
We begin this section with a chapter by Nicholas G. Evans and
Heidi Furey. They draw on the existing literature on crash scenarios
involving AVs but go far beyond the traditional focus on the trolley
problem and its applications in autonomous driving. The authors do not intend to eliminate trolley-case scenarios but subsume them under a more general category of risk distribution. This category is more general in two respects: First, it removes the simplification
traditionally assumed in the discussion of trolley cases, according to
which our options and outcomes are certain. Second, it takes into
consideration far more types of morally relevant scenarios. The authors divide decisions regarding risk distribution and AVs into three categories: narrow-, medium-, and wide-scope decisions. Each category gives rise to a different perspective from which to look at AVs and the situations they may encounter, opens a distinct conceptual space, and invites one to ask different questions about how AVs should distribute risks and how we should regulate and deploy AVs so that these risks are best distributed among AV occupants, other road users, and other members of society.
Next, in Chapter 2, David Černý addresses the admittedly controversial issue of whether an AV's decision processes based on
age would always and in all contexts be discriminatory.
Discriminatory behavior is commonly considered unethical and
prohibited by many international human rights documents. Yet it
might come as a surprise that, at least in the context of artificial intelligence (AI) ethics, there have been few attempts at a precise definition of direct discrimination. Černý starts his chapter by thoroughly analyzing the definitional marks of discrimination and arrives at a semiformal definition. Next, he delineates
the main contours of the derivational account of the badness of
death according to which death is bad in virtue of the fact that it
deprives us of all the prudential goods comprised in continued
existence. These two conceptual devices allow him to defend the
main conclusion of his chapter: If an AV chose between two human
targets on the basis of age, its choice would not be an instance of
direct discrimination.
Geoff Keeling, in his sophisticated contribution in Chapter 3, asks
an important question: “How does the moral status of an AV’s act
depend on its prediction of the classification of proximate objects?”
He presents three possible answers—the objective version and two
variants of a subjective view. The line of demarcation between
objective and subjective views depends on whether the evaluation of
the AV’s choices and acts takes into account the AV’s internal
representations of facts provided by external sensors. Keeling opts
for moderate subjectivism, according to which the rightness or
wrongness of the AV’s acts ought to be judged by the AV’s
epistemically justified or reasonable predictions about the morally
relevant facts. The next section of Keeling’s chapter is devoted to
developing a moderate subjectivist view and its application in the
context of mundane road-traffic situations. Keeling's arguments are complex and, for readers not comfortable with higher mathematics, challenging to follow. His overall aim is to find a decision-making procedure that would allow AVs to determine how much weight to give to safety depending on the probability that a perceived object classified as a pedestrian is, in fact, a pedestrian.
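To give a rough sense of the kind of procedure at issue, here is a minimal sketch, ours rather than Keeling's own formalism, in which the expected moral cost of a maneuver is weighted by the classifier's confidence that the perceived object is a pedestrian; the maneuver names and harm values are illustrative assumptions.

```python
# A minimal sketch (not Keeling's formalism): the expected moral cost of a
# maneuver rises with the probability that the perceived object really is
# a pedestrian. Maneuvers and harm values are illustrative assumptions.

def expected_moral_cost(p_pedestrian: float,
                        harm_if_pedestrian: float,
                        harm_if_not: float) -> float:
    """Expected harm of a maneuver, given the classifier's confidence."""
    return p_pedestrian * harm_if_pedestrian + (1 - p_pedestrian) * harm_if_not

def choose_maneuver(options: dict) -> str:
    """Pick the maneuver minimizing expected moral cost.

    options maps a maneuver name to a tuple
    (p_pedestrian, harm_if_pedestrian, harm_if_not).
    """
    return min(options, key=lambda name: expected_moral_cost(*options[name]))

# Hard braking dominates once the object is probably a pedestrian, even
# though swerving would be cheaper if the object were merely debris.
print(choose_maneuver({
    "brake_hard": (0.8, 1.0, 0.1),
    "swerve":     (0.8, 5.0, 0.0),
}))  # -> brake_hard
```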
Many authors working in the field of AI ethics were long confident that trolley-type scenarios represent a conceptual tool for describing and analyzing possible choices leading to harm. Recently, however, a consensus has been growing that matters are far from that simple. In Chapter 4, Jeff Behrends
and John Basl take the view of Trolley Pessimists. The negation of
Trolley Pessimism is, of course, Trolley Optimism, which subscribes to the theses that some possible AV collisions are structurally similar to trolley-type cases (the authors give a precise definition of structural similarity) and that, accordingly, engineers should work to program AVs to behave in ways that conform to the moral conclusions drawn from trolley cases. Behrends and Basl suggest that both Optimists and Pessimists have fallen victim to an inability to recognize important features of the engineering techniques deployed in designing AVs' guiding software. Both authors
endorse Trolley Pessimism and present a novel technological case
against Trolley Optimism. Their complex arguments are based on the
difference between traditional and machine learning algorithms. We can see traditional algorithms as sets of rules, invented by programmers, for transforming inputs into outputs. Machine learning algorithms, however, are radically different in that they generate new algorithmic instructions not explicitly provided by programmers. Consequently, we cannot expect them to follow a set of previously established ethical rules incorporated into their code by programmers. Therefore, engineers developing
software for AVs are not and will not be in a position to program
their vehicles to respond to the particular crash scenarios
encountered on the road in a predetermined and always consistent
manner.
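The distinction can be illustrated schematically. In the sketch below, which is our gloss rather than code from the chapter, the first function is a traditional algorithm whose rule was written explicitly by a programmer, while the second induces its "rule" (a set of learned weights) from training data, so no ethical instruction ever appears in its code.

```python
# An illustrative contrast (our gloss, not code from the chapter). The first
# function is a traditional algorithm: its rule is stated explicitly by a
# programmer. The second is a tiny learned model: its "rule" is whatever
# weights the training data induce, and no ethical instruction appears in it.

def rule_based_brake(distance_m: float, speed_mps: float) -> bool:
    """Traditional algorithm: brake if under 2 seconds from impact."""
    return distance_m / max(speed_mps, 0.1) < 2.0

def train_perceptron(samples, labels, lr=0.1, epochs=100):
    """Machine learning: the returned weights, not the programmer, set the
    decision boundary, so the behavior was never written down as a rule."""
    w0, w1, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(samples, labels):
            pred = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
            err = y - pred
            w0, w1, b = w0 + lr * err * x0, w1 + lr * err * x1, b + lr * err
    return w0, w1, b
```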
A great deal of discussion in the context of the ethics of AVs is
predicated on the assumption of what can be called normative
monism. Normative monism may take two forms: either we assume that among all of the competing normative theories only one is correct, or we hold that each field of applied ethics admits of only one solution. Saul Smilansky, in his highly
original contribution in Chapter 7, questions this assumption. He
considers a scenario, a hostage situation that he calls The Situation,
and demonstrates that many competing and sometimes contrasting
solutions may be invoked. By adopting a pluralist normative
worldview, Smilansky also goes beyond the classical trolley-type
scenarios inviting “either-or” type responses. The combination of
moral and value pluralism applied within the field of AV ethics gives
rise to an open moral world with many permissible possibilities, from
the design ethics to the behavior of self-driving vehicles in possible
crash situations. Smilansky's normative pluralism may (and, as he believes, is likely to) translate into a plurality of AV guiding algorithms corresponding and responding to differences in cultural backgrounds and preferences. The solution offered by Smilansky
falls under the umbrella of “Crazy Ethics,” a term coined by the
author to designate ethics which, despite being true, may lead to
counterintuitive consequences. Living in such a pluralistic world
might be hard at first, yet if Smilansky is right, we do not have any
other options available.
Like David Černý, Derek Leben in Chapter 8 also focuses on the
problem of discrimination in the context of AVs but considers it from
a different and more general angle. Leben argues that whether
choices made by algorithms represent unjustifiably discriminatory
behavior crucially depends on the nature of the task these
algorithms are called upon to fulfill. This task-relevance standard of
discrimination involves two components, one conceptual and the
other empirical. The conceptual component brings into focus the
essential task of an algorithm in a specific context. If, as the author
asserts, this task involves making decisions about the distribution of
harm, interpreted as the predicted health outcomes (or the likelihood thereof) of collisions, then some features of the persons involved may turn out to be relevant to the task and others irrelevant. The empirical
component enters into play here; it depends on the answers to the
conceptual questions, and its role consists of determining which
features have an impact on accomplishing the essential function of
the algorithm. Consider, for example, age. If the essential task of AV-
guiding algorithms in the context of the distribution of harm in
collisions is to minimize harm measured as health outcomes or
otherwise (conceptual component) and age can serve as a statistical
predictor of these outcomes (empirical component), then age may
be relevant to the task. It follows from these considerations that
choices based on age may not represent instances of unjustified
discrimination.
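As a rough illustration of how the empirical component might be operationalized, consider the following sketch; it is ours, not Leben's, and the data, the correlation test, and the threshold are all assumptions. A feature counts as task-relevant only if it measurably predicts the harm outcome the algorithm is tasked with minimizing.

```python
# A minimal sketch of the empirical component (our illustration, not Leben's
# code; the data and the 0.3 threshold are assumptions): a feature counts as
# task-relevant only if it measurably predicts the harm outcome the algorithm
# is tasked with minimizing.

from statistics import correlation  # available in Python 3.10+

def task_relevant(feature_values, harm_outcomes, threshold=0.3):
    """Treat a feature as relevant if it correlates with predicted harm."""
    return abs(correlation(feature_values, harm_outcomes)) >= threshold

# Hypothetical collision data: passenger age vs. an injury-severity score.
ages = [18, 25, 40, 60, 75, 82]
severity = [2.1, 2.3, 3.0, 4.2, 5.5, 6.1]
print(task_relevant(ages, severity))  # True: here, age predicts outcomes
```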
For rethinking current approaches to the ethics of AVs, we can
turn to Soraj Hongladarom and Daniel D. Novotný in Chapter 6. The
authors call into question the predominant focus on the ethical
concerns and philosophical traditions of high-income, Western
countries in search of the one and only moral theory to be accepted
globally. Other cultures and traditions, however, often have different standards for evaluating what counts as acceptable and desirable AV behavior. The challenge is how to take these other cultures into account—and also the rich Asian, African, and other traditions of ethical reflection. Their proposal consists in treating AVs, with their machine-learning systems, as if they were human drivers in a given culture who need to pass a driver's license test. Since they are aware that there still need to be some built-in fundamental norms, values, or virtues to make AVs human-aligned,
they explore the possibility of drawing upon the Buddhist concept of
compassion (karuṇā) for this role.
Trolley-like scenarios, despite some recent criticism mounted against them, continue to occupy an important place in modern
experimental philosophy. Many philosophers are convinced that
moral intuitions—immediate moral reactions to presented scenarios
—should be treated as robust “data” expressing well-established
social norms. The overall aim of Chapter 5 by Akira Inoue, Kazumi
Shimizu, Daisuke Udagawa, and Yoshiki Wakamatsu is to
experimentally test whether, and to what extent, social norms
identified by a version of the trolley dilemma are robust. To achieve
this aim, the researchers conducted an online empirical survey in
Japan. The experimental results show, among other things, that our
choices and willingness to follow socially established norms may be
heavily influenced by the presence or absence of public scrutiny.
These results are undoubtedly of immense relevance to the ethics of AVs, and the authors explore their implications in considerable depth. It may be argued that this chapter offers a first
step toward a solution to the so-called social dilemma of AVs.
1
Ethics and Risk Distribution for Autonomous
Vehicles
Nicholas G. Evans
Introduction
Autonomous vehicles (AVs) will be on our roads soon.1 How should
they be programmed to behave? The introduction of AVs will mark
the first time that artificially intelligent systems interact with humans
in the real world on such a large scale—and while travelling at such
high speeds.
Current AV ethics literature concentrates on crash scenarios in
which an AV must decide how to distribute unavoidable harm; for
example, an unoccupied AV must either swerve to the left, killing the
five passengers of a minivan, or swerve to the right, killing a lone
motorcyclist. Scenarios like these have been called “trolley cases”
because they resemble a series of famous thought experiments that
have sparked an enormous body of ethics literature (known,
somewhat derisively, as “trolleyology”).2 In the original case, a
runaway tram (or “trolley”) is about to run over and kill five workers,
but the driver can choose to steer from one track to another, in the process killing one worker on the alternate track.3 What’s
important about these cases is not that AVs are real-world analogs
to trolleys, but that AV navigation poses difficult ethical decisions.
Moreover, AVs, in virtue of being programmable rather than relying on human instinct, must be instructed how they ought to act in
these cases (or a decision, arguably equally morally weighty, must
be made to remain silent on what the AV ought to do in this case).
Trolley-based scenarios have been used to test intuitions about the
behavior of AVs, such as when it is permissible to choose (or allow)
a smaller group to be harmed in order to save a larger one and, more controversially, what kinds of people we should prioritize over others in saving them.4
When people ask how AVs could have anything to do with ethics,
the trolley problem offers a quick and obvious explanation. But
trolley problem–inspired AV ethics has received considerable criticism. One central line of criticism is that, in the real world, we
are almost never certain about our options and their outcomes.
Nyholm and Smids, as well as Goodall, have argued that we should
focus on risk management when programming AVs, and they
describe a number of realistic cases involving risk, many of which
are similar to trolley cases but involve only probabilities of harm,
including how close AVs choose to drive to certain types of vehicles
and pedestrians, when they choose to change lanes, and which
vehicles they take pains to avoid crashing into.5 Himmelreich has
argued that trolley-like problems are too rare, and too extreme, relative to the kinds of ethical issues that AVs are more likely to face on a day-to-day basis. He argues that we should instead focus more
on mundane driving scenarios, many of which involve risk. In
addition to the kinds of cases Goodall mentions, he draws attention
to the risks associated with the environmental impact of AVs and
with programming AV behavior that will be repeated exactly by every
other AV.6
The discussion of risk and AVs is just beginning. We’re at the
stage where (a) a good case has been made for the importance of
the discussion, and where (b) a smattering of different scenarios and
questions about risk has been posed. One way to approach a difficult
problem like finding a suitable ethical algorithm for AVs, an approach common to both engineering and philosophy,7 is to start with the simplest or most idealized kinds of cases first. Greater complexity
can be added back into the picture as more progress is made. Hence
the trolley problem—a simple case outlining a clear issue in which
choices about doing harm, or allowing it to happen, are parsed in
the clearest detail.8 What comes next for AV ethics, however?
Our purpose here is not to reject the trolley problem, as others
have done. The trolley problem is an important thought experiment
in the history of philosophy; it serves a very specific purpose. In
point of fact, we believe its purpose is precisely the one it has
served: to force people to acknowledge, and then choose a position
on, an important moral feature that is subject to disagreement. The
point of the trolley problem, put another way, is to cause problems!
But the trolley problem cannot—indeed, no philosophical problem
can—solve a complex problem like the navigation of AVs on its own.
There are other challenges that are philosophically relevant to an
investigation of a complex problem like AVs. The field needs to
evolve, beyond the mere debate about trolleys (and whether that
debate is relevant), to encompass other philosophical issues.
In what follows, we outline three ways to think about this
evolution. We motivate this project first through conceptual,
empirical, and metaphilosophical concerns about the limits of the
trolley problem as it applies to the ethics of AVs. We then turn to
two case studies that demonstrate the challenge ahead. The first concerns how AVs should behave when they encounter each other, where differences in their algorithmic behavior are morally relevant and each is uncertain as to the other's algorithm. The second takes a wide view of AVs, asking how we account for the broader question of AVs in large, even global, transportation systems.
The results of this thought experiment are not binary: not just in
terms of the possible injuries that might arise to TG, AV, and FC, but
in terms of the options available to AV. As a parametric model, AV
could choose any combination of velocities and accelerations
available to the vehicle. Modeling on the above vignette gave options such as a “wake-up call,” where the AV initiated a low-speed collision with TG to encourage it to brake, or, in the case of an unresponsive TG, an “emergency brake,” in which the AV initiated a series of low-speed collisions until those collisions became inelastic and the AV could use its own braking power to stop both cars.
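The parametric point can be made concrete with a toy sketch; the injury model below is a placeholder of ours, not the authors' crash model. Instead of a binary choice, the AV searches a discretized grid of candidate accelerations for the one minimizing expected injury.

```python
# A schematic of the parametric point (our sketch; the injury model is a
# placeholder, not the authors' crash model): instead of a binary choice,
# the AV searches a discretized grid of candidate accelerations.

def expected_injury(accel_mps2: float, closing_speed_mps: float) -> float:
    """Placeholder injury model: harm grows with impact speed after 1.5 s."""
    impact_speed = max(closing_speed_mps + accel_mps2 * 1.5, 0.0)
    return impact_speed ** 2  # kinetic-energy-like proxy for harm

def best_acceleration(closing_speed_mps: float) -> float:
    """Grid search from hard braking (-8 m/s^2) to mild throttle (+1 m/s^2)."""
    candidates = [a / 2 for a in range(-16, 3)]  # steps of 0.5 m/s^2
    return min(candidates, key=lambda a: expected_injury(a, closing_speed_mps))

print(best_acceleration(10.0))  # -8.0: brake as hard as possible in this toy
```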
Even in trolley cases for “unavoidable crashes,” there may be
continuous variables, such as how much braking room there is
between a vehicle and pedestrians or other vehicles; the side of the
vehicle that strikes the other object; the object’s reaction times if it
has any; and so on. These are not relevant to the trolley problem as a thought experiment; however, because they may make meaningful differences in the outcomes of these collisions, they may be (though are not always) relevant to an AV’s decisions. These are
empirical concerns, but important ones in knowing what options are
available to an AV, which at least on some accounts is a precondition
for having good beliefs about what the AV ought to do.
Finally, there is a metaphilosophical problem around how we do the ethics of AVs under conditions of risk. This arises from, but is not totally derivative of, the first two problems above. Making decisions about the ethics of AVs requires knowing what, philosophically, is at stake in decisions around AVs. But it also requires empirical knowledge of the conditions under which those AVs will make those decisions. This requires a form of collaboration
that is not common, between philosophers and empirical
researchers. While our previous work provides a model to emulate, it
does not solve larger metaphilosophical questions about how
philosophers should engage with practical and design processes.
These kinds of risks are important because they allow us to
loosen our three assumptions around AVs. With sufficient work, we no longer need to make a binary distinction between autonomous
and human-driven cars, and we can accept a range of levels of
autonomy—levels that exist but are typically eschewed in debates
about the ethics of AVs. We can further deal with important temporal
components of the deployment of AVs, from the near-future scenario
in which full autonomy is available only to a handful of cars on the
road, to the potential future in which all or nearly all cars are AVs.
Finally, we can deal with questions about what kinds of information
are necessary, and how decision-making might permissibly proceed
for vehicles operating with different kinds of data.
Possible Hazard I: The same as Pandemic, except that this time the AV
determines that, if all the usual safety precautions have been taken, then
there’s still a very small chance that any dangerous materials will leak into the
lake even if the Hazmat AV falls in.
Possible Hazard II: The same as Pandemic, except that this time the AV
is sure that dangerous materials will end up in the lake if it hits the unmanned
AV, but it’s uncertain whether it is a Hazmat AV, or whether the materials it
might carry are dangerous enough to warrant seriously injuring several people.
It’s much less clear what the AV should do when facing this kind of
uncertainty. One possibility is to design it to do a cost-benefit
analysis and then to act so as to maximize expected well-being. But
we might also want it to give special priority to its own passengers,
at least, if it has any. And it may be difficult to determine exactly
how it should do the analysis. These are incredibly rare but
potentially high-impact events. When dealing with probabilities so
small and consequences so large, slight differences in its approach
could lead to noticeably different decision-making.
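A toy calculation shows the sensitivity; every probability and harm magnitude below is an illustrative assumption in arbitrary units. Because the catastrophe term is so large, a small change in the estimated leak probability flips the decision.

```python
# A toy expected-well-being comparison for the Possible Hazard cases (all
# probabilities and harm magnitudes are illustrative assumptions, in
# arbitrary units): a small change in the estimated leak probability flips
# the decision, because the catastrophe term is so large.

def expected_harm(outcomes):
    """Sum of probability-weighted harms over an option's possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

CATASTROPHE = 100_000.0  # harm if dangerous materials leak into the lake
INJURIES = 5.0           # harm of seriously injuring several people

for p_leak in (1e-6, 1e-4):  # two nearby estimates of the leak probability
    hit_people = expected_harm([(1.0, INJURIES)])
    hit_hazmat = expected_harm([(p_leak, CATASTROPHE)])
    choice = ("hit the possible Hazmat AV" if hit_hazmat < hit_people
              else "accept the injuries")
    print(f"p_leak={p_leak}: {choice}")
# p_leak=1e-06 -> hit the possible Hazmat AV (0.1 < 5.0)
# p_leak=1e-04 -> accept the injuries (10.0 > 5.0)
```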
Importantly, the cost of avoiding these incidents could be high
enough to deter self-interested firms from responding to them. In
the case of Possible Hazard II, it is foreseeable that a manufacturer
could take the time to develop a contingency to detect a Hazmat AV
with very high confidence and make sure that their vehicles always,
or nearly always, respond appropriately. However, the costs of doing
so could be prohibitively high for the manufacturer. Their cars might
be slower in navigating terrain (to allow more time to notice and
respond to Hazmat AVs or similar kinds of threats); or the additional
development cost for a firm might reduce their competitiveness. In
either case, individual manufacturers have few incentives to respond
to Possible Hazard I or II, especially if the probability that one of their vehicles will encounter a Pandemic case is very low.24
We’ve considered cases in which an AV has to decide whether to
crash into a (potential) Hazmat AV. We can also consider cases in
which the crash is unavoidable, but in which the AV must decide
how to crash. For example:
Town or Ocean: A Hazmat AV is transporting a large amount of toxic waste
along a road at the edge of the ocean. The AV is a long truck equipped
with symbols and flashing lights to warn other vehicles to keep their distance.
But rain has made the road slippery and the Hazmat AV has started to skid
out of control. A passenger AV rounds a corner to find the truck skidding
toward it perpendicular to the road. The passenger AV can either swerve to
the left or to the right. Both maneuvers are expected to put its own
passengers at the same amount of risk. But if it swerves to the left, the truck will likely end up falling off the road into the ocean. And if it swerves to the right, the truck will likely end up crashing into the main street of a small town.
The waste is expected to spill out either way. It will be easier to collect,
contain, and dispose of the waste if it ends up in town. But if it ends up there,
several people are likely to die from exposure or from drinking contaminated
water. If it ends up in the ocean, no one will die from it directly. But it will
devastate the ecosystem for hundreds of miles, and the town and a much
larger area will suffer economically and from higher rates of illness for years.
If the AV is able to make this kind of assessment of the situation, what should
it do? Or, supposing the Hazmat vehicle has made the assessment, what
should it signal the AV to do?
Conclusion
In this chapter, we describe an evolution in thinking around the
ethics of AVs. We identify current debates about AVs and describe
conceptual, empirical, and metaphilosophical problems that arise
with the current focus on trolley-like cases of risk in AVs. We then
show two cases in which deeper philosophical inquiries into AV behavior might shed new light on applied problems in the development and deployment of these technologies. Like trolley-style problems, however, these would benefit from close, interdisciplinary collaboration between philosophers and empirical researchers to model and examine them in a range of contexts.
Notes
1. We set aside what precisely counts as “autonomy.” See Society of
Automotive Engineers, “J3016B: Taxonomy and Definitions for Terms Related
to Driving Automation Systems for On-Road Motor Vehicles—SAE
International,” June 15, 2018.
https://www.sae.org/standards/content/j3016_201806/.
2. Barbara H. Fried, “What Does Matter? The Case for Killing the Trolley
Problem (or Letting It Die),” The Philosophical Quarterly 62, no. 248 (July 1,
2012): 505–29. https://doi.org/10.1111/j.1467-9213.2012.00061.x.
3. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” in Virtues and Vices and Other Essays in Moral Philosophy, 19–32 (New York:
Oxford University Press, 1993).
4. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph
Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, “The Moral
Machine Experiment,” Nature 563, no. 7729 (November 2018): 59–64.
https://doi.org/10.1038/s41586-018-0637-6.
5. Sven Nyholm and Jilles Smids, “The Ethics of Accident-Algorithms for Self-
Driving Cars: An Applied Trolley Problem?,” Ethical Theory and Moral Practice
19, no. 5 (July 2016): 1275–89. https://doi.org/10.1007/s10677-016-9745-2;
Noah J. Goodall, “Away from Trolley Problems and Toward Risk Management,”
Applied Artificial Intelligence 30, no. 8 (November 2016): 810–21.
https://doi.org/10.1080/08839514.2016.1229922.
6. Johannes Himmelreich, “Never Mind the Trolley: The Ethics of Autonomous
Vehicles in Mundane Situations,” Ethical Theory and Moral Practice 21, no. 3
(May 2018): 669–84. https://doi.org/10.1007/s10677-018-9896-4.
7. E.g., Michael Weisberg, Simulation and Similarity: Using Models to
Understand the World (New York: Oxford University Press, 2013).
8. Geoff Keeling, “Why Trolley Problems Matter for the Ethics of Automated
Vehicles,” Science and Engineering Ethics 26, no. 1 (February 1, 2020): 293–
307. https://doi.org/10.1007/s11948-019-00096-1.
9. Heather M. Roff, “The Folly of Trolleys: Ethical Challenges and Autonomous
Vehicles,” Brookings, December 17, 2018.
https://www.brookings.edu/research/the-folly-of-trolleys-ethical-challenges-
and-autonomous-vehicles/.
10. Cf. Judith Jarvis Thomson, “The Trolley Problem,” The Yale Law Journal 94,
no. 6 (1985): 1395. https://doi.org/10.2307/796133.
11. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” Oxford Review 5 (1967).
12. Fritz Allhoff, Nicholas Greig Evans, and Adam Henschke, “Not Just Wars:
Expansions and Alternatives to the Just War Tradition,” in The Routledge
Handbook of Ethics and War, edited by Fritz Allhoff, 1–8 (New York:
Routledge, 2013).
13. Lara Buchak, “Why High-Risk, Non-Expected-Utility-Maximising Gambles Can
Be Rational and Beneficial: The Case of HIV Cure Studies,” Journal of Medical
Ethics 43, no. 2 (February 1, 2017): 90–95.
https://doi.org/10.1136/medethics-2015-103118.
14. A. Bjorndahl, A. J. London, and Kevin J. S. Zollman, “Kantian Decision Making
under Uncertainty: Dignity, Price, and Consistency,” Philosophers’ Imprint 17,
no. 7 (April 2017): 1–22.
15. Seth Lazar and Chad Lee-Stronach, “Axiological Absolutism and Risk,” Noûs
53, no. 1 (March 2019): 97–113. https://doi.org/10.1111/nous.12210.
16. Pamela Robinson et al., “Modelling Ethical Algorithms in Autonomous Vehicles
Using Crash Data,” IEEE Transactions on Intelligent Transportation Systems
(May 2021), doi:10.1109/TITS.2021.3072792.
17. And they might also be less predictable, depending on the method of
developing an algorithm and its capacity to change over time. Many if not
most original equipment manufacturers—in the main, standard auto
companies—rely on formal methods to develop their algorithms. These
algorithms are predictable in the sense that their program is transparent, and
while it is possible to not test them adequately, they are in principle
understandable and predictable. Deep learning algorithms, however, and in
particular the development of algorithms through neural nets, provide
behavior that is interpolated from existing data. They can be very
sophisticated but are largely (though not exclusively, see Kiri L. Wagstaff and
Jake Lee, “Interpretable Discovery in Large Image Data Sets,”
arXiv:1806.08340 [2018]) opaque in the sense that it is not possible to
know the exact form of the algorithm—they are sometimes called “black box”
algorithms. In the case of neural nets, emergent conditions could result in an
asymptotic, unpredictable response that diverges strongly from human
expectations or the data set.
18. https://www.businessinsider.com/mercedes-benz-self-driving-cars-
programmed-save-driver-2016-10
19. E.g., Charlie Osborne, “Tesla’s Autopilot Takes the Wheel as Driver Suffers
Pulmonary Embolism,” ZDNet. https://www.zdnet.com/article/teslas-autopilot-
takes-the-wheel-as-driver-suffers-pulmonary-embolism/.
20. It might also find itself about to crash into a facility handling hazardous
materials, but we won’t discuss this or other possibilities here.
21. See, e.g., Evans, Lipsitch, and Levinson (2016).
22. Lisa Brown, “Truck Carrying Radioactive Material Found after It Was Stolen in
Mexico,” NACCHO, December 6, 2013.
https://www.naccho.org/blog/articles/truck-carrying-radioactive-material-
found-after-it-was-stolen-in-mexico.
23. Centers for Disease Control and Prevention, “Report on the Inadvertent
Cross-Contamination and Shipment of a Laboratory Specimen with Influenza
Virus H5N1,” Atlanta, GA, August 2014.
https://www.cdc.gov/labs/pdf/InvestigationCDCH5N1contaminationeventAug
ust15.pdf.
24. The formal demonstration for these kinds of problems, and their ethical significance, can be found in Lipsitch, Evans, and Cotton-Barratt (2016).
2
Autonomous Vehicles, the Badness of Death,
and Discrimination
David Černý
Introduction
While autonomous vehicles (AVs) promise a number of benefits,
introducing them into traffic may also lead to some negative
consequences. I will call the benefits “positive factors” and the
negative consequences “negative factors.” From the ethical point of
view, it is important that the positive factors far outweigh the negative ones, as this makes it possible to postulate the following
thesis regarding the external justification of introducing AVs into
road traffic: