Autonomous Vehicle Ethics
The Trolley Problem and Beyond
Edited by
RYAN JENKINS, DAVID ČERNÝ, AND TOMÁŠ HŘÍBEK
Oxford University Press is a department of the University of Oxford. It furthers
the University’s objective of excellence in research, scholarship, and education
by publishing worldwide. Oxford is a registered trade mark of Oxford University
Press in the UK and certain other countries.

Published in the United States of America by Oxford University Press


198 Madison Avenue, New York, NY 10016, United States of America.

© Oxford University Press 2022

All rights reserved. No part of this publication may be reproduced, stored in


a retrieval system, or transmitted, in any form or by any means, without the
prior permission in writing of Oxford University Press, or as expressly permitted
by law, by license, or under terms agreed with the appropriate reproduction
rights organization. Inquiries concerning reproduction outside the scope of the
above should be sent to the Rights Department, Oxford University Press, at the
address above.

You must not circulate this work in any other form


and you must impose this same condition on any acquirer.

Library of Congress Cataloging-in-Publication Data


Names: Jenkins, Ryan, editor. | Černý, David, editor. |
Hříbek, Tomáš, editor.
Title: Autonomous vehicle ethics : the trolley problem and beyond/
edited by Ryan Jenkins, David Černý, and Tomáš Hříbek.
Description: New York, NY, United States of America :
Oxford University Press, [2022] |
Includes bibliographical references and index.
Identifiers: LCCN 2022000306 (print) | LCCN 2022000307 (ebook) |
ISBN 9780197639191 (hbk) | ISBN 9780197639214 (epub) | ISBN 9780197639221
Subjects: LCSH: Automated vehicles—Moral and ethical aspects. |
Double effect (Ethics)
Classification: LCC TL152.8 .A8754 2022 (print) | LCC TL152.8 (ebook) |
DDC 629.2—dc23/eng/20220315
LC record available at https://lccn.loc.gov/2022000306
LC ebook record available at https://lccn.loc.gov/2022000307

DOI: 10.1093/oso/9780197639191.001.0001

1 3 5 7 9 8 6 4 2
Printed by Sheridan Books, Inc., United States of America
Ryan Jenkins dedicates this book to those injured or killed in automobile accidents
the world over—​and those working to bend the arc of technological progress to
minimize the human suffering that results.

David Černý dedicates this book to his fiancée, Alena, who gives meaning and joy
to his work.

Tomáš Hříbek dedicates the book to all those who are tired of being drivers and
hope to be liberated by AV technology.
Contents

Acknowledgments  xi
Contributors  xiii
Introduction  xix

PART I AUTONOMOUS VEHICLES
AND TROLLEY PROBLEMS
Introduction by David Černý
1. Ethics and Risk Distribution for Autonomous Vehicles  7
Nicholas G. Evans
2. Autonomous Vehicles, the Badness of Death, and Discrimination  20
David Černý
3. Automated Vehicles and the Ethics of Classification  41
Geoff Keeling
4. Trolleys and Autonomous Vehicles: New Foundations for the
Ethics of Machine Learning  58
Jeff Behrends and John Basl
5. The Trolley Problem and the Ethics of Autonomous Vehicles
in the Eyes of the Public: Experimental Evidence  80
Akira Inoue, Kazumi Shimizu, Daisuke Udagawa, and Yoshiki
Wakamatsu
6. Autonomous Vehicles in Drivers’ School: A Non-​Western
Perspective  99
Soraj Hongladarom and Daniel D. Novotný
7. Autonomous Vehicles and Normative Pluralism  114
Saul Smilansky
8. Discrimination in Algorithmic Trolley Problems  130
Derek Leben

PART II ETHICAL ISSUES BEYOND
THE TROLLEY PROBLEM
Introduction by Ryan Jenkins
9. Unintended Externalities of Highly Automated Vehicles  147
Jeffrey K. Gurney
10. The Politics of Self-​Driving Cars: Soft Ethics, Hard Law, Big
Business, Social Norms  159
Ugo Pagallo
11. Autonomous Vehicles and Ethical Settings: Who Should Decide?  176
Paul Formosa
12. Algorithms of Life and Death: A Utilitarian Approach to the
Ethics of Self-​Driving Cars  191
Stephen Bennett
13. Autonomous Vehicles, Business Ethics, and Risk Distribution in
Hybrid Traffic  210
Brian Berkey
14. An Epistemic Approach to Cultivating Appropriate Trust in
Autonomous Vehicles  229
Kendra Chilson
15. How Soon Is Now?: On the Timing and Conditions for Adopting
Widespread Use of Autonomous Vehicles  243
Leonard Kahn
16. The Ethics of Abuse and Unintended Consequences for
Autonomous Vehicles  257
Keith Abney

PART III PERSPECTIVES FROM
POLITICAL PHILOSOPHY
Introduction by Tomáš Hříbek
17. Distributive Justice, Institutionalism, and Autonomous Vehicles  279
Patrick Taylor Smith
18. Autonomous Vehicles and the Basic Structure of Society  295
Veljko Dubljević and William A. Bauer
19. Supply Chains, Work Alternatives, and Autonomous Vehicles  316
Luke Golemon, Fritz Allhoff, and T. J. Broy

20. Can Autonomous Cars Deliver Equity?  337


Anne Brown
21. Making Autonomous Vehicle Technologies Matter: Ensuring
Equitable Access and Opportunity  350
Madhu C. Dutta-​Koehler and Jennifer Hatch

PART IV AUTONOMOUS VEHICLE
TECHNOLOGY IN THE CITY
Introduction by Tomáš Hříbek
22. Fixing Congestion for Whom? The Distribution of Autonomous
Vehicles’ Effects on Congestion  375
Carole Turley Voulgaris
23. Fulfilling the Promise of Autonomous Vehicles with
a New Ethics of Transportation  390
Beaudry Kock and Yolanda Lannquist
24. Ethics, Autonomous Vehicles, and the Future City  415
Jason Borenstein, John Bucher, and Joseph Herkert
25. The Autonomous Vehicle in Asian Cities: Opportunities for
Gender Equity, Convivial Urban Relations, and Public Safety
in Seoul and Singapore  432
Jeffrey K. H. Chan and Jiwon Shim
26. Autonomous Vehicles, the Driverless City, and
the Pedestrian City  451
Tomáš Hříbek

Appendix: Varieties of Trolley Pessimism  475


Jeff Behrends and John Basl
Index  483
Acknowledgments

The editors of this collection would collectively like to thank our editors at
Oxford University Press, whose stewardship of the manuscript through the pro-
cess of proposal, review, editing, and publication, has been tremendously helpful,
in particular Peter Ohlin.
Ryan Jenkins would like to thank Patrick Lin, Keith Abney, and Zachary Rentz
for innumerable enlightening and invigorating conversations about the ethical
and social implications of emerging technologies. Without them, in fact, his in-
terest in technology might have never been stoked. His mentors and advocates
are too numerous to list, but among them are Bradley Strawser, Benjamin Hale,
Alastair Norcross, and Duncan Purves. If it is true that we are a reflection of the
people closest to us, then Ryan has been lucky to have found himself close to this
group of insightful, careful, and indefatigable thinkers. Finally, he’d like to thank
his wife, Gina, with whom he has been grateful to share both his successes and
setbacks.
David Černý would like to thank Saul Smilansky for his friendliness, kind-
ness, and continuous support. Saul sets a fine example of philosophical sophisti-
cation, exactness, courage to explore unprecedented routes, and love of wisdom.
Special thanks go to Patrick Lin for his passion for philosophy, love of discussion,
and generosity. Without Patrick, this book would not be possible.
Tomáš Hříbek joins David Černý in thanking Patrick Lin for his friendship
and collegiality, and for his invaluable support of this project. Tomáš also wishes
to extend a big thank-​you to Ryan Jenkins for bearing most of the responsibility
for preselecting and contacting the candidate contributors, and communicating
with them. Without his good rapport with the contributors, the project would
have hardly gotten off the ground. Finally, Tomáš thanks his colleagues Dan
Novotný and Pavel Kalina for their advice and feedback.
Both David and Tomáš are grateful for the support of grant project
TL01000467 “Ethics of Autonomous Vehicles” of the Technology Agency of the
Czech Republic.
Contributors

Keith Abney is Senior Lecturer in the Philosophy Department and a Senior Fellow at the
Ethics + Emerging Sciences Group at California Polytechnic State University in San Luis
Obispo. His research involves the ethics of emerging technologies, especially space ethics,
artificial intelligence/​robot ethics, and bioethics, as well as autonomous vehicles.

Dr. Fritz Allhoff, JD, PhD, is Professor in the Department of Philosophy at Western
Michigan University, and Community Professor in the Program in Medical Ethics,
Humanities, and Law at the Western Michigan University Homer Stryker M. D. School of
Medicine. He publishes in ethical theory, applied ethics, and philosophy of law.

John Basl is an Associate Professor of Philosophy at Northeastern University and the


Associate Director of AI Initiatives at Northeastern’s Ethics Institute. He works in moral
philosophy and applied ethics, especially on the ethics of emerging technologies and arti-
ficial intelligence.

William A. Bauer is an Associate Teaching Professor in the Department of Philosophy


and Religious Studies at North Carolina State University. His research addresses problems
in the metaphysics of science and the ethics of technology. His publications include arti-
cles in Erkenntnis, Neuroethics, and Science and Engineering Ethics.

Jeff Behrends is an Associate Senior Lecturer in Philosophy at Harvard University and


the Director of Ethics and Technology Initiatives at the Edmond J. Safra Center for Ethics.
His research focuses on the nature and grounds of practical normativity, and the ethics of
developing and deploying computing technology.

Stephen Bennett is a higher degree research student in philosophy at Macquarie


University. His master’s thesis deals with the ethics of self-​driving cars, focusing on the
question of how they should be programmed to act when faced with an unavoidable acci-
dent where harm cannot be avoided.

Brian Berkey is an Assistant Professor in the Legal Studies and Business Ethics
Department in the Wharton School at the University of Pennsylvania. He works in
moral and political philosophy, and he has published articles on moral demandingness,
obligations of justice, climate change, exploitation, effective altruism, ethical consum-
erism, and collective obligations.

Jason Borenstein, PhD, is the Director of Graduate Research Ethics Programs at the
Georgia Institute of Technology. His appointment is divided between the School of Public
Policy and the Office of Graduate Studies. His teaching and research interests include
robot and artificial intelligence ethics, engineering ethics, research ethics/​RCR, and
bioethics.

T. J. Broy is a PhD student in the Philosophy Department at the University of Connecticut.


He works mainly in ethics, philosophy of art, and philosophy of language with his current
research focused on the intersection of the study of expression in philosophy of language
with various issues in normative ethics.

Dr. Anne Brown is an Assistant Professor in the School of Planning, Public Policy, and
Management at the University of Oregon. She researches issues of transportation equity,
shared mobility, and travel behavior.

John Bucher, AICP, PMP, is a Senior Planner at Stantec and a Fellow at Tulane University’s
Disaster Resilience Leadership Academy. His work is focused on building sustainable re-
silience through community development, climate adaptation, and hazard mitigation.

Dr. David Černý is a Research Fellow at the Institute of State and Law of the Czech
Academy of Sciences and the Institute of Computer Science of the Czech Academy of
Sciences. He is a founding member of the Karel Čapek Center for Values in Science and
Technology.

Dr. Jeffrey K. H. Chan is an Assistant Professor in the Humanities, Arts and Social
Sciences cluster at the Singapore University of Technology and Design. His research
focuses on design and planning ethics, and he is the author of two books, Urban Ethics in
the Anthropocene and Sharing by Design.

Kendra Chilson is a philosophy PhD student at the University of California, Riverside.


She studies the interconnections among decision theory, game theory, and ethics, and
how each of these fields can and should contribute to the development of artificially intel-
ligent systems.

Dr. Veljko Dubljević is a University Faculty Scholar and Associate Professor of


Philosophy and Science, Technology & Society at NC State University and leads the
NeuroComputational Ethics Research Group (https://sites.google.com/view/neuroethics-group/team-members). He is a recipient of an NSF CAREER award and has published
extensively in neuroethics, neurophilosophy, and ethics of artificial intelligence.

Dr. Madhu C. Dutta-Koehler is an MIT-educated dreamer, designer, dancer, and entre-
preneur. Dutta-​Koehler has been a professor of architecture and planning for over two
decades and recently founded The Greener Health Benefit Corporation. An award-​
winning practitioner, she has lectured worldwide on issues where human development
and climate change collide.

Dr. Nicholas G. Evans is Assistant Professor of Philosophy at the University of


Massachusetts Lowell and a 2020–​2023 Greenwall Foundation Faculty Scholar. He
conducts research on the ethics of technology and national security. His new book, The
Ethics of Neuroscience and National Security, was published with Routledge in May 2021.

Dr. Paul Formosa is an Associate Professor in Philosophy and Director of the Centre for
Agency, Values and Ethics (CAVE) at Macquarie University. Paul has published widely on
topics in moral and political philosophy, with a focus on ethical issues raised by technolo-
gies such as videogames and artificial intelligence.

Luke Golemon, MA, is a PhD student in the Department of Philosophy at the University
of Arizona. His research is focused primarily on ethics of all flavors, political philosophy,
and philosophy of science. He pays special attention to their applications to technology,
medicine, feminist theory, and scientific theorizing.

Jeffrey K. Gurney is a partner with Nelson Mullins Riley & Scarborough, LLP. He is the
author of Automated Vehicle Law: Legal Liability, Regulation, and Data Security, which
was published by the American Bar Association, and numerous publications on auto-
mated vehicles. His practice includes representing companies involved in deploying auto-
mated driving systems.

Jennifer Hatch is the Strategic Advisor for Convergence at the Center for Sustainable
Energy, where she guides strategy for decarbonization infrastructure. Previously she
was a visiting scholar at the Boston University Urban Planning Department and led the
Transportation and Utility Practice at the BU Institute for Sustainable Energy. She holds a
bachelor’s degree from Wellesley College and a master’s degree in public policy from the
Harvard Kennedy School.

Dr. Joseph Herkert is an Associate Professor Emeritus of Science, Technology and


Society at North Carolina State University in Raleigh. He studies engineering ethics
and the ethics of emerging technologies. Recent work includes ethics of autonomous
vehicles, lessons learned from the Boeing 737 MAX crashes, and responsible innovation
in biotechnology.

Soraj Hongladarom is Professor of Philosophy and Director of the Center for Science,
Technology, and Society at Chulalongkorn University.

Dr. Tomáš Hříbek is a Research Fellow at the Institute of Philosophy of the Czech
Academy of Sciences. Together with David Černý, he is the founder of the Karel Čapek
Center for Values in Science and Technology. He also teaches at several colleges, including
Charles University and Anglo-​American University.

Dr. Akira Inoue is a Professor of Political Philosophy in the Department of Advanced


Social and International Studies, Graduate School of Arts and Sciences, University of
Tokyo, Japan. He works in the areas of distributive justice and democracy. He also works
on experimental research in normative political theory.

Dr. Ryan Jenkins is an Associate Professor of Philosophy, and a Senior Fellow at the Ethics
+ Emerging Sciences Group at California Polytechnic State University in San Luis Obispo.
He studies the ethics of emerging technologies, especially artificial intelligence and robot
ethics. He has published extensively on autonomous vehicles.

Leonard Kahn is the Associate Dean of the College of Arts & Sciences and an Associate
Professor of Philosophy at Loyola University New Orleans. He is also the 2021–​2022
Donald and Beverly Freeman Fellow at the Stockdale Center for Ethical Leadership, US
Naval Academy.

Dr. Geoff Keeling is an Affiliate Fellow at the Institute for Human-​Centered Artificial
Intelligence at Stanford University, an Associate Fellow at the Leverhulme Centre for the
Future of Intelligence at the University of Cambridge, and a Bioethicist at Google. His re-
search focuses on ethics, decision theory, and artificial intelligence.

Dr. Beaudry Kock works on mass transit products at Apple, Inc. He has previously
worked in transportation technology at the MBTA, Ford Motor Company, Daimler, and
in numerous startups. His focus is on making both cities and the mass transportation ex-
perience more pleasant, safe, equitable and sustainable.

Yolanda Lannquist is Head of Research & Advisory at The Future Society (TFS), a US
nonprofit specializing in governance of artificial intelligence and emerging technologies.
She leads artificial intelligence policy projects with international organizations and is ap-
pointed to the OECD AI Policy Observatory expert group on implementing trustworthy
artificial intelligence. She holds a master’s degree in public policy from Harvard Kennedy
School.

Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business
at Carnegie Mellon University. His research focuses on the ethics of artificial intelligence
and autonomous systems, and he is the author of the book Ethics for Robots: How to Design
a Moral Algorithm (Routledge, 2018).

Daniel D. Novotny received his PhD from State University of New York at Buffalo and
is an Assistant Professor of Philosophy at the University of South Bohemia. He has
published in the area of the history of philosophy, metaphysics, and applied philosophy.

Ugo Pagallo is a Full Professor of Jurisprudence at the University of Turin and Faculty
Fellow at the CTLS in London. He is a member of several high-​level expert groups of
international institutions, such as the European Commission and the World Health
Organization, on the legal impact of artificial intelligence and other emerging
technologies.

Dr. Jiwon Shim is an Assistant Professor in the Department of Philosophy at Dongguk


University in Seoul. She studies applied ethics in human enhancement and artificial intel-
ligence and is interested in how technology affects women, the disabled, and transgender
individuals.

Dr. Kazumi Shimizu is a Professor in the Department of Political Science and Economics,
Waseda University, Japan. His research and teaching focus on experimental economics,
behavioral economics, and decision theory. His research is not only empirical but also
examines the methodological bases of empirical research.

Saul Smilansky is a Professor of Philosophy, University of Haifa, Israel. He works on nor-


mative and applied ethics, the free will problem, and meaning in life. He is the author of
Free Will and Illusion (Oxford University Press, 2000), 10 Moral Paradoxes (Blackwell,
2007), and one hundred papers in philosophical journals and edited collections.

Dr. Patrick Taylor Smith is Resident Fellow at the Stockdale Center for Ethical Leadership
at the United States Naval Academy. He was also a Postdoctoral Fellow at the McCoy
Center for Ethics in Society at Stanford University. His published work concerns the jus-
tice of emerging climate and military technologies.

Daisuke Udagawa is an Associate Professor in the Department of Economics, Hannan


University, Japan. His fields of research are microeconomics, experimental economics,
and behavioral economics. His recent work has focused on human behavior and social
preferences.

Dr. Carole Turley Voulgaris is an Assistant Professor of Urban Planning at the Harvard
University Graduate School of Design. She is trained as a transportation engineer and as a
transportation planner. Her teaching and research focus on how transportation planning
institutions use data to inform plans and policies.

Dr. Yoshiki Wakamatsu is a Professor at Gakushuin University Law School, Japan. His
research and teaching focus on legal and political philosophy. Recently, he has published
two books about paternalism in Japanese, one about libertarian paternalism and the other
about J. S. Mill.
Introduction
Ryan Jenkins, David Černý, and Tomáš Hříbek

“A runaway trolley is speeding down a track . . .” So begins what is perhaps the


most fecund thought experiment of the past several decades since its invention
by Philippa Foot in 1967 and subsequent modification into the version we all
know by Judith Thomson in 1976.
The trolley problem enjoys a rare distinction among philosophical thought
experiments: it has entered the common vocabulary. From its birthplace in ac-
ademic ethics it has passed through the portal into the folk consciousness and
become a tool used by nonphilosophers—​at least as a touchpoint for thinking
about what philosophers do, and in many cases as a tool ripe for appli-
cation to their own questions about balancing conflicting values, contrasting
doing with allowing, negotiating mutually exclusive obligations, understanding
the distribution of goods, and so on. This is a status enjoyed by very few other
stories woven by philosophers—​Plato’s cave comes to mind, along with perhaps
Descartes’s dreaming scenario (popularized by The Matrix and the “brain in a
vat” variation), Nozick’s experience machine and Singer’s shallow pond, and only
a handful of others.
Many of us in the analytic tradition of moral philosophy who came of age in the
first two decades of the 2000s were raised on a steady diet of trolley problems—​
the cottage industry of trolleyology. Its utility is chiefly due to its protean nature: it
lends itself to countless permutations. And this was before the news broke that
the trolley problem could be applied to autonomous vehicles (AVs). As if by an
alchemical reaction, this confluence of academic philosophy and emerging tech-
nology resulted in an explosion of literature. A quick search on Google Scholar
shows over 8,000 papers that mention the trolley problem, and over a thousand
that contain both the terms “trolley problem” and “autonomous vehicles.”
But as the honeymoon phase has waned over the last decade or so, debate grad-
ually pivoted to whether the trolley problem is a useful frame for thinking about
the behavior of AVs at all. Critics have suggested, for one, that the designers of
AVs will not program AVs to think through situations like humans do: pulling
a lever to choose option A or B after appraising them and attaching a moral
value or disvalue to each. Others contend that overwhelmingly negative—and
conclusive—​legal reasons would prohibit designers from programming an AV
to aim for anyone. Others point out that the original trolley problem is morally
distinct from the situation of engineers, since engineers do not enjoy certainty
about the consequences of their actions. And so on.
A handful of authors have maintained the usefulness of trolley problems as a
general schema for understanding contrasting or conflicting values that com-
panies will have to negotiate, for example, how much space to afford a bicyclist
on one side and an oncoming car on the other side. These are examples of the
trolley problem on a micro scale: imagining a single situation that an AV is en-
gaged in and comparing it to a trolley problem. These are still questions about
balancing risks and trade-​offs, and the trolley problem is still somewhat useful
here. Ultimately, however, very few philosophers accept anymore that the trolley
problem is a perfect analogy for driverless cars, or that the situations AVs face
will resemble the forced choice of the unlucky bystander in the original thought
experiment. It is safe to say that the academic conversation around AVs is moving
beyond the trolley problem.
If the trolley problem is retained, it is in a diminished role as a metaphor rather
than a literal analogy for thinking about the design of driverless cars. That is, it is
retained as a macro-​level metaphor: it is useful to illustrate other problems with
AVs, or the forced choice that developers and companies will have to confront.
For example, making certain decisions about how to design AVs and where to
deploy them could have disparate impacts on the elderly, the mobility chal-
lenged, the poor, or historically disadvantaged minorities. Each of these trade-​
offs can be put into the frame of a trolley problem where the agent is forced to
distribute benefits and burdens one way or another—​and the trolley problem is
perhaps supremely useful among thought experiments for making those forced
choices stark and explicit.
At the same time, predictions about the benefits of autonomous cars have
become more muted. AVs were once hailed for their ability to all but eliminate
automobile accidents—​saving roughly a million lives per year worldwide—​to
reduce congestion and pollution; to drastically reduce the cost of insurance or
eliminate the need for car ownership altogether; and more. We now know the
truth is more complicated. Creating a car that drives as reliably as a human—​in
all conditions, in unpredictable circumstances, at night and in the snow, and
so on—is a wicked problem for engineering. AVs may save lives, but they may
kill people that humans wouldn’t have. And while they may reduce conges-
tion, those benefits will probably only be temporary, just as new lanes added
to a highway inevitably fill right back up in a few years’ time. While AVs might
provide mobility to the disabled, their benefits will accrue to the wealthiest
first, potentially exacerbating inequalities. In short, all of these questions about
the ethical and social impacts of AVs require vigorous discussion. All of these
questions are beginning to overtake trolleyological questions in their im-
portance, urgency, and concreteness. And, here, the methods of philosophy,
buttressed and enhanced by engagement across other empirical disciplines,


can still be of tremendous help in clarifying and systematizing the trade-​offs
at stake.

The Structure of This Book

This book represents a substantial and purposeful effort—​the first we know


of—​to move the discussion of moral philosophy and other academic discip-
lines beyond the trolley problem to an examination of the other issues that AVs
present. This next generation of scholarship is directed at the ethical and social
implications of AVs more broadly. There are still urgent questions waiting to be
addressed, for example: how AVs might interact with human drivers in mixed
or “hybrid” traffic environments; how AVs might reshape our urban landscapes
now that we could expect demand for parking lots or garages to fall dramati-
cally; what unique security or privacy concerns are raised by AVs as connected
devices in the “Internet of Things”; how the benefits and burdens of this new
technology, including mobility, traffic congestion, and pollution, will be distrib-
uted throughout society; and so on.
This book is an attempt to map the landscape of these next-​generation
questions raised by AVs and to suggest preliminary answers. With input from the
disciplines of philosophy, sociology, economics, urban planning and transporta-
tion engineering, policy analysis, human-​robot interaction, business ethics, and,
from the private sphere, computer science, and law, this book explores the ethical
and social implications, very broadly construed, of AVs. In addition, the book
provides a worldwide perspective, as the authors included represent the United
States, the United Kingdom, the Czech Republic, Israel, Hong Kong, Thailand,
Singapore, Italy, Korea, and Japan.
Most scholars working on AVs now endorse some sort of Trolley Pessimism,
according to which either there are not any relevant similarities between
trolley and real-​life road scenarios, or there are insurmountable technological
challenges calling into question the very possibility of programming AVs to
follow a set of ethical rules provided by programmers. Thus, the common thread
running through all of the contributions in Part I is an effort to go beyond the
trolley problem in an attempt to address the ethical issues raised by AVs. In ag-
gregate, we take these pieces to present a conclusive case for abandoning the use
of trolley problems on the micro scale.
In Part II, authors are given license to speculate about longer-​term ethical is-
sues, exploring how AVs may reshape the human communities and interaction—​
from distributing risk among drivers and pedestrians, to anticipating the
externalities of AVs. While forward-looking, these authors remain empirically
and philosophically grounded, applying and extending lessons from emerging


research in psychology, law, business ethics, and elsewhere.
The benefits of new technologies often initially accrue to the best-​off members
of a community. We should expect AVs to follow a similar pattern, since those
companies most aggressively pursuing autonomous driving features—​Tesla,
Mercedes, Volvo, and so on—​tend to manufacture more expensive cars. Authors
in Part III explore how we can anticipate and mitigate disparities in the effects of
AVs, and how their benefits and burdens might be distributed. These are the is-
sues that extend beyond ethics, in the narrow sense of the theory of interpersonal
conduct, to political theory, concerned with the design of a just community.
Several authors in Part III inquire whether AVs are likely to mitigate, or
worsen, the problems of congestion and pollution. To be sure, these are the
problems that are particularly acute in a dense urban environment. The issues
of the impact of the AV technology on the design of our cities are specifically
explored in the chapters of the final segment of the volume, Part IV. Will the new
technology help create the just city, or will it exacerbate existing inequalities?
Will it further accelerate the urban sprawl or instead promote a new densifica-
tion? Are AVs compatible with the classical ideal of the pedestrian city? These are
some of the important questions addressed in this part of the book.

References
Foot, Philippa. 1967. “The Problem of Abortion and the Doctrine of Double Effect.” Oxford Review 5: 5–15.
Thomson, Judith Jarvis. 1976. “Killing, Letting Die, and the Trolley Problem.” The Monist 59, no. 2: 204–17.
PART I
AUTONOMOUS VEHICLES
AND TROLLEY PROBLEMS
Introduction by David Černý

Swerve right and kill one passerby or swerve left and kill two (or more) persons
on impact? This seemingly simple question has occupied the attention of many
bright minds for decades. This should come as no surprise. Trolley-type sce-
narios, flourishing within scholarly publications since Philippa Foot’s seminal
paper on the ethics of abortion, seem to bear a close structural similarity with
collision situations that may be encountered by autonomous vehicles (AVs) on
the road. The leading assumption has been that analogical reasoning might be
employed, enabling one to transfer important moral conclusions from simplified
thought experiments to real-​life situations. If, for example, the rightness of one’s
choice in trolley-​type scenarios depends on the maximizing strategy (i.e., save
the most lives possible), then the same decision procedure can also be employed
in richer, nonidealized conditions of everyday traffic.
Notwithstanding the considerable effort that has gone into the development and
use of trolley-​type scenarios, there has been a steadily growing consensus that
our ethical reflections should move beyond these scenarios toward more real-
istic considerations. There are still some scholars defending the importance
of trolleyology in the context of AV ethics, but many—​maybe the majority—​
endorse some sort of Trolley Pessimism, according to which either there are
not any relevant similarities between trolley and real-​life road scenarios, or
there are insurmountable technological challenges calling into question the
very possibility of programming AVs to follow a set of ethical rules provided by
programmers.
Thus, the common thread running through all of the contributions in Part I is
the effort to go beyond the trolley problem in an attempt to address the ethical
issues raised by AVs.
We begin this section with a chapter by Nicholas G. Evans and Heidi Furey.
They draw on the existing literature on crash scenarios involving AVs but go
far beyond the traditional focus on the trolley problem and its applications in
autonomous driving. Evans does not intend to eliminate trolley-​case scenarios


but subsumes them under a more general risk distribution category. This cate-
gory is more general in two aspects: First, it removes the simplification tradition-
ally assumed in the discussion of trolley cases, according to which our options
and outcomes are certain. Second, it takes into consideration far more types of
morally relevant scenarios. The authors divide decisions regarding risk distri-
bution and AVs into three categories. Each of these categories gives rise to a dif-
ferent perspective from which to look at AVs and situations they may encounter.
There are three types of decisions corresponding to each category: narrow-​, me-
dium-​, and wide-​scope decisions. Each opens a distinct conceptual space and
invites one to ask different questions about how AVs should distribute risks and
how we should regulate and deploy AVs so that these risks are best distributed
among occupants of AVs, road users, and other members of society.
Next, in Chapter 2, David Černý addresses an admittedly controversial issue
of whether an AV’s decision processes based on age would always and in all
contexts be discriminatory. Discriminatory behavior is commonly considered
unethical and prohibited by many international human rights documents. Yet
it might come as a surprise that at least in the context of artificial intelligence
(AI) ethics, there are not many attempts at a precise definition of direct dis-
crimination. Černý starts his chapter by thoroughly analyzing the definitional
marks of discrimination and arrives at a semiformal definition of it. Subsequently,
he delineates the main contours of the derivational account of the badness of
death according to which death is bad in virtue of the fact that it deprives us of
all the prudential goods comprised in continued existence. These two conceptual
devices allow him to defend the main conclusion of his chapter: If an AV chose
between two human targets on the basis of age, its choice would not be an in-
stance of direct discrimination.
Geoff Keeling, in his sophisticated contribution in Chapter 3, asks an impor-
tant question: “How does the moral status of an AV’s act depend on its prediction
of the classification of proximate objects?” He presents three possible answers—​
the objective version and two variants of a subjective view. The line of demarca-
tion between objective and subjective views depends on whether the evaluation
of the AV’s choices and acts takes into account the AV’s internal representations
of facts provided by external sensors. Keeling opts for moderate subjectivism, ac-
cording to which the rightness or wrongness of the AV’s acts ought to be judged
by the AV’s epistemically justified or reasonable predictions about the morally
relevant facts. The next section of Keeling’s chapter is devoted to developing a
moderate subjectivist view and its application in the context of mundane road-​
traffic situations. Keeling’s arguments are very complex and, for those who
are not comfortable with essential higher mathematics, challenging to follow.
Keeling’s overall aim here is to find a decision-making procedure which would
allow AVs to determine how much weight should be given to safety depending
on how high the probability is that a perceived object classified as a pedestrian is,
in fact, a pedestrian.
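
To give a rough sense of the weighting at issue, the following is a purely illustrative sketch (not drawn from Keeling’s chapter; the harm values, the probabilities, and the function names are all invented) of how the expected harm of a maneuver might scale with the classifier’s confidence that a detected object really is a pedestrian:

def expected_harm(p_pedestrian, harm_if_pedestrian, harm_if_debris):
    # Expected harm of a maneuver, given the probability that the object is a pedestrian.
    return p_pedestrian * harm_if_pedestrian + (1 - p_pedestrian) * harm_if_debris

def choose_maneuver(p_pedestrian):
    # Option A: brake hard, accepting a small fixed risk of rear-end injury to occupants.
    harm_brake = 0.2
    # Option B: drive through the object; severe if it is a pedestrian, negligible if debris.
    harm_continue = expected_harm(p_pedestrian, harm_if_pedestrian=10.0, harm_if_debris=0.01)
    return "brake" if harm_brake < harm_continue else "continue"

print(choose_maneuver(0.05))  # "brake": 0.05 * 10 + 0.95 * 0.01 = 0.5095 > 0.2
print(choose_maneuver(0.01))  # "continue": 0.01 * 10 + 0.99 * 0.01 = 0.1099 < 0.2

On such a picture, even a modest probability that the object is a pedestrian can dominate the decision, which is one way of cashing out the thought that caution should be weighted by classification confidence.
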
Many authors working in the field of AI ethics had been confident for a long
time that trolley-​type scenarios represent a conceptual tool allowing one to de-
scribe and analyze possible choices leading to harm. Recently, however, there is a
growing consensus that matters are far from being that simple. In Chapter 4,
Jeff Behrends and John Basl take the view of Trolley Pessimists. The negation of
Trolley Pessimism is, of course, Trolley Optimism, which subscribes to the theses
that some possible collisions of AVs are structurally similar (the authors give a
precise definition of structural similarity) to trolley-​type cases and, accordingly,
the engineers should work to program AVs to behave in ways conforming to the
moral conclusions drawn from trolley cases. Behrends and Basl suggest that both
Optimists and Pessimists have been victims of their inability to recognize impor-
tant features of the engineering techniques deployed in the process of designing
AVs’ guiding software. Both authors endorse Trolley Pessimism and present a
novel technological case against Trolley Optimism. Their complex arguments are
based on the difference between traditional and machine learning algorithms.
We can see traditional algorithms as a set of rules enabling the transformation of
inputs into outputs according to the rules invented by programmers. However,
machine learning algorithms are radically different in that they generate new al-
gorithmic instructions not explicitly provided by programmers. Consequently,
we cannot expect them to follow a set of prior established ethical rules incorpo-
rated into their code by programmers. Therefore, engineers developing software
for AVs are not and will not be in a position to program their vehicles to respond
to the particular crash scenarios encountered on the road in a predetermined
and always consistent manner.
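
The contrast can be made vivid with a toy example (not taken from Behrends and Basl’s chapter; the training data, threshold, and library choice are invented for illustration). In a hand-written rule, the braking condition is exactly what the programmer typed and can be audited line by line; in a learned model, the decision boundary is induced from data and was never explicitly authored by anyone:

# (1) A traditional algorithm: the rule is exactly what the programmer wrote.
def should_brake_rule(distance_m, speed_mps):
    return distance_m / max(speed_mps, 0.1) < 2.0   # brake if time-to-contact is under 2 seconds

# (2) A machine learning model: the rule is whatever boundary the fitting procedure finds.
from sklearn.linear_model import LogisticRegression
X = [[40, 10], [10, 15], [60, 20], [5, 10], [30, 25], [80, 10]]   # [distance_m, speed_mps]
y = [0, 1, 0, 1, 1, 0]                                            # 1 = braking was needed
model = LogisticRegression().fit(X, y)

def should_brake_learned(distance_m, speed_mps):
    # This boundary was induced from the six examples above, not written down by hand.
    return bool(model.predict([[distance_m, speed_mps]])[0])
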
A great deal of discussion in the context of the ethics of AVs is predicated on
the assumption of what can be called normative monism. Normative monism
may take two forms: Either we assume that among all of the competing norma-
tive theories there is only one that is correct, or we can hold to the view that for
each field of applied ethics there is only one solution. Saul Smilansky, in his highly
original contribution in Chapter 7, questions this assumption. He considers a
scenario, a hostage situation that he calls The Situation, and demonstrates that
many competing and sometimes contrasting solutions may be invoked. By
adopting a pluralist normative worldview, Smilansky also goes beyond the clas-
sical trolley-​type scenarios inviting “either-​or” type responses. The combination
of moral and value pluralism applied within the field of AV ethics gives rise to an
open moral world with many permissible possibilities, from the design ethics to
the behavior of self-​driving vehicles in possible crash situations. Smilansky’s nor-
mative pluralism may (and as he believes is likely to) transform into a plurality of
AV guiding algorithms corresponding and responding to differences in cultural


backgrounds and preferences. The solution offered by Smilansky falls under
the umbrella of “Crazy Ethics,” a term coined by the author to designate ethics
which, despite being true, may lead to counterintuitive consequences. Living in
such a pluralistic world might be hard at first, yet if Smilansky is right, we do not
have any other options available.
Like David Černý, Derek Leben in Chapter 8 also focuses on the problem of
discrimination in the context of AVs but considers it from a different and more
general angle. Leben argues that whether choices made by algorithms represent
unjustifiably discriminatory behavior crucially depends on the nature of the
task these algorithms are called upon to fulfill. This task-​relevance standard of
discrimination involves two components, one conceptual and the other empir-
ical. The conceptual component brings into focus the essential task of an algo-
rithm in a specific context. If, as the author asserts, this task involves making
decisions about the distribution of harm interpreted as the predicted health
outcomes (or the likelihood thereof) of collisions, then some features of the
persons involved may turn out to be relevant to the task and others irrelevant. The empir-
ical component enters into play here; it depends on the answers to the conceptual
questions, and its role consists of determining which features have an impact on
accomplishing the essential function of the algorithm. Consider, for example,
age. If the essential task of AV-​guiding algorithms in the context of the distribu-
tion of harm in collisions is to minimize harm measured as health outcomes or
otherwise (conceptual component) and age can serve as a statistical predictor of
these outcomes (empirical component), then age may be relevant to the task. It
follows from these considerations that choices based on age may not represent
instances of unjustified discrimination.
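
One crude way to picture the empirical component of this standard is as a statistical check (a sketch for illustration only, not Leben’s own procedure; the feature test and tolerance are invented): a candidate feature such as age counts toward the task only if adding it measurably improves prediction of the task-relevant outcome, here injury severity.

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def is_task_relevant(X_base, x_candidate, y_severity, tol=0.01):
    # A feature is treated as task-relevant only if adding it improves cross-validated
    # prediction of the task-relevant outcome (injury severity) by more than tol.
    base = cross_val_score(LinearRegression(), X_base, y_severity, cv=3).mean()
    X_aug = np.column_stack([X_base, x_candidate])
    augmented = cross_val_score(LinearRegression(), X_aug, y_severity, cv=3).mean()
    return augmented - base > tol

# Hypothetical usage: is_task_relevant(crash_features, occupant_age, observed_severity)
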
For rethinking current approaches to the ethics of AVs, we can turn to Soraj
Hongladarom and Daniel D. Novotný in Chapter 6. The authors call into ques-
tion the predominant focus on the ethical concerns and philosophical traditions
of high-​income, Western countries in search of the one and only moral theory to
be accepted globally. Other cultures and traditions, however, often have different
standards of evaluation of what could be an acceptable and desirable behavior
of AVs. The challenge is how to take into account these other cultures—​and also
the rich Asian, African, and other traditions of ethical reflection. Their proposal
consists in considering AVs with their machine-​learning systems as if they were
human drivers in a given culture that need to pass a driver’s license test. Since
they are aware that there still needs to be some in-​built fundamental norms,
values, or virtues to make AVs human-​aligned, they explore the possibility of
drawing upon the Buddhist concept of compassion (karuṇā) for this role.
The trolley-​like scenarios, despite some recent criticism mounted against
them, continue to occupy an important place in modern experimental
philosophy. Many philosophers are convinced that moral intuitions—​immediate


moral reactions to presented scenarios—​should be treated as robust “data”
expressing well-​established social norms. The overall aim of Chapter 5 by Akira
Inoue, Kazumi Shimizu, Daisuke Udagawa, and Yoshiki Wakamatsu is to experi-
mentally test whether, and to what extent, social norms identified by a version of
the trolley dilemma are robust. To achieve this aim, the researchers conducted an
online empirical survey in Japan. The experimental results show, among other things,
that our choices and willingness to follow socially established norms may be
heavily influenced by the presence or absence of public scrutiny. These results
are undoubtedly of immense relevance to the ethics of AVs, and the authors go
to great lengths to explore their impact to a considerable depth. It may be argued
that this chapter offers a first step toward a solution to the so-​called social di-
lemma of AVs.
1
Ethics and Risk Distribution
for Autonomous Vehicles
Nicholas G. Evans

Introduction

Autonomous vehicles (AVs) will be on our roads soon.1 How should they be pro-
grammed to behave? The introduction of AVs will mark the first time that artifi-
cially intelligent systems interact with humans in the real world on such a large
scale—​and while travelling at such high speeds.
Current AV ethics literature concentrates on crash scenarios in which an AV
must decide how to distribute unavoidable harm; for example, an unoccupied AV
must either swerve to the left, killing the five passengers of a minivan, or swerve
to the right, killing a lone motorcyclist. Scenarios like these have been called
“trolley cases” because they resemble a series of famous thought experiments
that have sparked an enormous body of ethics literature (known, somewhat de-
risively, as “trolleyology”).2 In the original case, a runaway tram (or “trolley”) is
about to run over and kill five workers, but the driver can choose to steer from
one track to another, but in the process killing one worker on the alternate track.3
What’s important about these cases is not that AVs are real-​world analogs to
trolleys, but that AV navigation poses difficult ethical decisions. Moreover, AVs,
in virtue of being programmable rather than reacting on human instinct, must
be instructed how they ought to act in these cases (or a decision, arguably equally
morally weighty, must be made to remain silent on what the AV ought to do in
this case). Trolley-​based scenarios have been used to test intuitions about the be-
havior of AVs, such as when it is permissible to choose (or allow) a smaller group
to be harmed in order to save a larger one, and more controversially what kinds
of people we should prioritize over others in saving them.4
When people ask how AVs could have anything to do with ethics, the trolley
problem offers a quick and obvious explanation. But trolley problem–​inspired
AV ethics has received considerable criticism. One central line of reasoning
is that, in the real world, we are almost never certain about our options and
their outcomes. Nyholm and Smids, as well as Goodall, have argued that we
should focus on risk management when programming AVs, and they describe
a number of realistic cases involving risk, many of which are similar to trolley
cases but involve only probabilities of harm, including how close AVs choose to
drive to certain types of vehicles and pedestrians, when they choose to change
lanes, and which vehicles they take pains to avoid crashing into.5 Himmelreich
has argued that trolley-​like problems are too rare, and too extreme, relative to
the kinds of ethical issues that are more likely to face AVs on a day-to-day basis.
He argues that we should instead focus more on mundane driving scenarios,
many of which involve risk. In addition to the kinds of cases Goodall mentions,
he draws attention to the risks associated with the environmental impact of
AVs and with programming AV behavior that will be repeated exactly by every
other AV.6
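
A toy illustration of this kind of mundane risk distribution (a sketch for illustration only; the harm weights and the collision-probability function are invented, not taken from Goodall or from Nyholm and Smids) is lateral positioning: shifting toward the center line lowers the risk imposed on a cyclist while raising the risk of a head-on conflict, and the vehicle can search for the offset that minimizes total expected harm.

def p_collision(clearance_m):
    # Crude stand-in: collision probability falls off as clearance grows.
    return 0.02 / (clearance_m + 0.1)

def expected_harm(offset_m, lane_width_m=3.0, harm_cyclist=9.0, harm_oncoming=3.0):
    # offset_m is the shift toward the oncoming lane (0 = centered in lane).
    clearance_cyclist = 0.5 + offset_m                  # clearance to the cyclist grows with the offset
    clearance_oncoming = lane_width_m / 2 - offset_m    # clearance to oncoming traffic shrinks
    return (p_collision(clearance_cyclist) * harm_cyclist
            + p_collision(clearance_oncoming) * harm_oncoming)

offsets = [i * 0.05 for i in range(0, 21)]              # candidate offsets from 0.0 m to 1.0 m
best = min(offsets, key=expected_harm)
print(round(best, 2), round(expected_harm(best), 4))
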
The discussion of risk and AVs is just beginning. We’re at the stage where (a) a
good case has been made for the importance of the discussion, and where (b) a
smattering of different scenarios and questions about risk has been posed. One
way to approach a difficult problem like finding a suitable ethical algorithm for
AVs, which is common to both engineering and philosophy,7 is to start with the
simplest or most idealized kinds of cases first. Greater complexity can be added
back into the picture as more progress is made. Hence the trolley problem—​a
simple case outlining a clear issue in which choices about doing harm, or
allowing it to happen, are parsed in the clearest detail.8 What comes next for AV
ethics, however?
Our purpose here is not to reject the trolley problem, as others have done. The
trolley problem is an important thought experiment in the history of philosophy;
it serves a very specific purpose. In point of fact, we believe its purpose is precisely
the one it has served: to force people to acknowledge, and then choose a position
on, an important moral feature that is subject to disagreement. The point of the
trolley problem, put another way, is to cause problems! But the trolley problem
cannot—​indeed, no philosophical problem can—​solve a complex problem like
the navigation of AVs on its own. There are other challenges that are philosophi-
cally relevant to an investigation of a complex problem like AVs. The field needs
to evolve, beyond the mere debate about trolleys (and whether that debate is rel-
evant), to encompass other philosophical issues.
In what follows, we outline three ways to think about this evolution. We mo-
tivate this project first through conceptual, empirical, and metaphilosophical
concerns about the limits of the trolley problem as it applies to the ethics of
AVs. We then turn to two case studies that demonstrate the challenge ahead.
The first is how AVs should behave when they encounter each other, and where
differences in their algorithmic behavior are morally relevant, and where each is
uncertain as to the other’s algorithm. The second is considering a wide view of
AVs, and how we account for the broader question of AVs in large, even global
transportation systems.

Post Trolley Ethics

What makes the trolley problem so convenient as a tool is that it contains a series
of assumptions that carry directly into the ethics of AVs. These assumptions, we
take it, are as follows:

1. A single isolated actor;


2. Making a decision under conditions of perfect information;
3. That has a well-​described series of options available to it.

In the case of the trolley problem, it is a tram or trolley that has perfect know-
ledge about the outcomes of its choices, of which there are two and only two. In
the case of the AV, trolley-​like problems are homologous to the trolley problem
in terms of their assumptions. That is:

1. The AV is a single, isolated actor.


2. The AV’s decision is made under conditions of perfect information about the kinds of
people it is hitting, how many, and what their outcomes (e.g., injury, death)
will be, and the relevant capacities to understand these inputs (i.e., it is a
“level 5” fully autonomous AV).
3. The AV has a well-​described series of options available to it.

These assumptions drive the trolley-​like problem characterized by, inter alia, the
MIT Moral Machine Project.
Three limits arise from this kind of analysis, however. Others have argued that
the determinate and binary set of options in the trolley problem makes this kind
of analysis inappropriate under conditions of risk.9 We think that this, however,
misunderstands the trolley problem itself. Conceptually, the trolley problem
identifies a critical distinction in ethical decision-​making around our intentions
to bring about someone’s death, commonly claimed to be between “killing” and
“letting die.”10 This is an important distinction; indeed, it guides a number of
serious, applied ethical decisions. The original applied problem in which the
trolley problem was posed was not AVs, but abortion.11 The doctrine of double
effect, which turns in part on the distinction the trolley problem picks out, is also
an important doctrine in the ethics of killing in war.12 But no one would say that
epistemic concerns about fetal viability, or about personhood, undermine the
moral distinction between intending and foreseeing harm. Certainly in the just
war literature, uncertainty about combatant status is an important issue of dis-
cussion, but no one engaged in that discussion would say that our beliefs about
the trolley problem or doctrine of double effect are perforce irrelevant in light of
the presence of uncertainty.

Thus, the conceptual issue is not that the trolley problem is limited. It was al-
ways limited. But that is hardly a weakness, any more than a diagnostic for breast
cancer is weak for not detecting glaucoma. Different tools do different things, and
if there is any weakness for trolleyology in the AV debate, it is in trying to make the
trolley problem do things it cannot. Rather, the problem that ethicists need to face
can be formulated as a question: “What other philosophical problems, outside of
doing versus allowing harm [or similar concepts, depending on our reading of the
trolley problem], are apropos when thinking through the ethics of AVs?
If we care about risk, then, there are frameworks available to us that would
be well suited to the problem of AVs. For example, Lara Buchak’s work on ra-
tional risk aversion has been applied to how we should conceive of the ethics of
risk aversion in HIV/​AIDS therapy trials that require participants to cease their
routine antiretroviral medications.13 Likewise, Adam Bjorndahl and others have
discussed how we should think about the risk of violating human dignity and
make decisions where there is a possibility that some absolutely important value
(like dignity, for some) might be violated, but it is under a condition of uncer-
tainty.14 Finally, Seth Lazar and Chad Stronach have shown how we can account
for the ethics of risk of rights violation in armed conflict, including cases where
we are not sure of the kinds of rights those individuals possess.15
A second problem for AVs is empirical, including when we consider risk. AV
decision-​making is made under risk, yes, but it may also be tunable depending
on the case we care about. What we mean by this is not simply that
there are a number of discrete but uncertain outcomes, but that there may be an
indeterminate range of options from which the AV might choose. Consider re-
cent work in AV decision-​making involving the following case:

A Tailgater (TG) is closely following the AV, and the AV is following the Front Car
(FC) in a two-​lane two-​way road. At the start of the scenario, all cars are cur-
rently moving at the same velocity of 90 kph, consistent with highway speeds. The
scenario starts with FC suddenly braking to a stop. The AV is responsive enough
to stop in time to prevent a collision with FC because the AV is following at a safe
distance. However, though the AV is responsive enough to avoid a collision with
the lead car, TG may not be responsive enough to avoid crashing into the AV. This
scenario is further constrained in that the AV cannot swerve out of the way (due to
oncoming traffic on the left and a barrier on the right). Intuitively, the AV appears
to have two options: it could slam on the brakes and suffer a severe rear-​end col-
lision, or it could intentionally ram the forward car at a relatively low speed, re-
ducing the speed of collision of the TG with the AV. Given the AV’s superior ability
to measure speeds and distances, is there another way of managing potential inju-
ries that may not be available to a human driver?16

The results of this thought experiment are not binary: not just in terms of the
possible injuries that might arise to TG, AV, and FC, but in terms of the options
available to AV. As a parametric model, AV could choose any combination of vel-
ocities and accelerations available to the vehicle. Modeling on the above vignette
gave options such as a “wake-​up call” where the AV initiated a low-​speed colli-
sion with TG to encourage them to brake, or in the case of an unresponsive TG an
“emergency brake” in which the AV initiated a series of low-​speed collisions until
those collisions became inelastic and the AV could use its own braking power to
stop both cars.
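
A minimal sketch of the parametric reasoning involved (for illustration only; the decelerations, reaction time, and gap are invented rather than taken from the cited modeling work) compares the tailgater’s closing speed at impact when the AV brakes maximally with the closing speed when the AV moderates its braking:

def impact_speed(v0_kph, av_decel, tg_decel, tg_reaction_s, gap_m):
    # Relative speed (m/s) at which the tailgater reaches the AV, assuming both start
    # at v0 and the tailgater only begins braking after its reaction time has elapsed.
    v0 = v0_kph / 3.6
    dt, t = 0.01, 0.0
    av_v, tg_v, gap = v0, v0, gap_m
    while gap > 0 and (av_v > 0 or tg_v > 0):
        av_v = max(0.0, av_v - av_decel * dt)
        tg_v = max(0.0, tg_v - (tg_decel * dt if t >= tg_reaction_s else 0.0))
        gap += (av_v - tg_v) * dt
        t += dt
    return max(0.0, tg_v - av_v) if gap <= 0 else 0.0

# With these invented parameters, hard braking by the AV yields a larger closing speed
# than moderated braking:
print(impact_speed(90, av_decel=8.0, tg_decel=6.0, tg_reaction_s=1.0, gap_m=10.0))
print(impact_speed(90, av_decel=5.0, tg_decel=6.0, tg_reaction_s=1.0, gap_m=10.0))
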
Even in trolley cases for “unavoidable crashes,” there may be continuous
variables, such as how much braking room there is between a vehicle and
pedestrians or other vehicles; the side of the vehicle that strikes the other object;
the object’s reaction times if it has any; and so on. These are not relevant to the
trolley problem as a thought experiment; but because they may make meaningful
differences in the outcomes of these collisions, they may be (though
are not always) relevant to an AV’s decisions. These are empirical concerns, but
important ones in knowing what options are available to an AV, which at least
on some accounts is a precondition for having good beliefs about what the AV
ought to do.
Finally, there is a metaphilosophical problem around how we do the ethics of AVs under conditions of risk. This arises from, but is not totally derivative of, the first two problems above. Making decisions about the ethics of AVs requires knowing what, philosophically, is at stake in decisions around AVs, but it also requires empirical knowledge of the conditions under which those AVs
will make those decisions. This requires a form of collaboration that is not
common, between philosophers and empirical researchers. While our previous
work provides a model to emulate, it does not solve larger metaphilosophical
questions about how philosophers should engage with practical and design
processes.
These kinds of risks are important because they allow us to loosen our three
assumptions around AVs. With sufficient work, we no longer need to make
a binary distinction between autonomous and human-​driven cars, and we
can accept a range of levels of autonomy—​levels that exist but are typically
eschewed in debates about the ethics of AVs. We can further deal with im-
portant temporal components of the deployment of AVs, from the near-​future
scenario in which full autonomy is available only to a handful of cars on the
road, to the potential future in which all or nearly all cars are AVs. Finally, we
can deal with questions about what kinds of information are necessary, and
how decision-​making might permissibly proceed for vehicles operating with
different kinds of data.

Moral Uncertainty in AVs

The potential inroads into risk described above have to do with real-world
conditions when we do AV ethics. These kinds of risk include the nature, number,
and identity of objects in collisions (human, animal, inanimate). However, there
are other, immediately important issues that the ethics of AVs ought to deal with
that invoke philosophical problems.
A preliminary and likely imminent issue is how an AV should distribute risk
in contexts where other AVs have divergent moral commitments. Human drivers
can be uncertain about how other human drivers will behave. AVs governed by
ethical algorithms could be more predictable than human-​driven vehicles,17 es-
pecially if the same ethical algorithm is used across all cars of a particular make.
AVs may be able, in some circumstances, to use this information to coordinate
with other AVs and to decide which ones to help or try to stop, and so on. But this
need not always be the case.
To illustrate, let us first imagine a case where there is no uncertainty about the
drivers’ ethical codes. Let’s simplify further and focus on two kinds of moral al-
gorithm: “selfish” drivers that prioritize the drivers’ (and their passengers’) well-​
being, and “selfless” drivers that always aim to minimize total harm among all
road users. Suppose that driver A finds herself in an unavoidable crash with a
selfless driver B and a selfish driver C. She can only control which vehicle she hits,
and the driver of the vehicle she hits is likely to be seriously injured or killed. Let’s
further assume each car contains a single occupant. A selfless, harm-minimizing driver A will hit the selfish driver if she believes the spared driver will go on to face ethical situations of her own. In these future cases, a selfless driver will act to minimize harm, but a selfish driver may not.
What if driver A is selfish and faces the same choice? She, too, can be expected
to spare the selfless driver. After all, she might run into the driver she spares in the
future, and she can expect to fare better in an encounter with a harm minimizer
than in one with another person looking out for herself. It is better for the selfish driver to be surrounded by selfless drivers, in other words, and so selfish drivers will pick each other off in cases where a choice between selfless and selfish is forced.
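
A minimal sketch of the reasoning in the last two paragraphs follows; every payoff and probability is an invented assumption. The only point it illustrates is that, under these assumptions, both a selfless and a selfish chooser prefer to hit the selfish car, so forced choices systematically shift risk onto "selfish" vehicles.

```python
# Toy forced-choice model; all payoff numbers are invented assumptions.

IMMEDIATE_HARM = 10.0   # harm to whichever driver is struck now
P_FUTURE = 0.3          # chance of meeting the spared driver again

# expected harm in that future encounter, depending on the spared driver's
# algorithm: harm to A herself, and total harm to everyone involved
FUTURE_HARM_TO_A = {"selfless": 1.0, "selfish": 4.0}
FUTURE_TOTAL_HARM = {"selfless": 2.0, "selfish": 6.0}


def cost_of_hitting(target, a_type):
    """Expected cost, from A's perspective, of hitting `target` now."""
    spared = "selfish" if target == "selfless" else "selfless"
    if a_type == "selfless":
        # a harm minimizer counts everyone's harm, now and later
        return IMMEDIATE_HARM + P_FUTURE * FUTURE_TOTAL_HARM[spared]
    # a selfish A counts only expected harm to herself (she is not hit now)
    return P_FUTURE * FUTURE_HARM_TO_A[spared]


for a_type in ("selfless", "selfish"):
    choice = min(("selfless", "selfish"),
                 key=lambda target: cost_of_hitting(target, a_type))
    print(f"A ({a_type}) chooses to hit the {choice} car")
```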
It’s unlikely that a human driver has ever been in a situation exactly like this.
But because ethical algorithms for AVs may in principle be easier to discover,
AVs may be in the position to make these kinds of decisions more often. The ac-
tual ethical algorithms we use to govern AVs will likely be complex, but so long as
some AVs follow more “selfish” algorithms (ones that favor the interests of their
owners and passengers)18 and some AVs follow more “selfless” algorithms (e.g.,
ones that try to minimize harm):

(a) AVs will have reason to distribute risk from the “selfless” AVs to the
“selfish” ones. That is, regardless of one’s culpability or safety otherwise,
and if we believe that there is some room for reasonable disagreement
about one’s partial duties to oneself, selfish drivers will always be prior-
itized for crashes. Thus, even if one is an otherwise very risk-​averse but
selfish driver, one will be potentially targeted by other vehicles in cases
where a choice is forced.
(b) Some AVs will have reason to deceive other AVs about which ethical
algorithms they are following (or provide incentives to encourage other
AVs to spare them). That is, selfish drivers will have an incentive, where
it is possible, to behave or code their vehicles as selfless even if they ulti-
mately behave selfishly. In other words, it will behoove drivers, and potentially
car companies, to say one thing about the ethics of their vehicles but do
something quite the opposite.

The second problem here becomes more acute because even if we have some kind of protocol to take the ambiguity out of signals between vehicles, there may still be uncertainty about the veracity of those signals. Anyone who has spent time on
the Internet will understand the perils of a network that relies on trust among a
large number of anonymous actors. But these actors are now cars weighing many
hundreds of pounds, doing speeds fast enough to kill humans (individually or in
groups).
An AV’s ability to accurately judge how likely it is that other AVs follow spe-
cific ethical algorithms will also be important in cases where it must coordinate
with other AVs to achieve the best outcome. A harm-​minimizing algorithm will
favor collective AV actions like grouping together to act as an enormous brake
for a truck that's out of control, or getting out of the way when an AV needs to
get its driver to an emergency room.19 No single AV could do this on its own,
and it would increase the total risk of harm for a single AV to attempt it. So
harm minimizers will need to broadcast that they are harm minimizers and will
need to determine if other AVs are, too. If the AVs are connected through wire-
less communication, this might be achieved easily. But if they’re not, a harm-​
minimizing AV will still have reason to gather evidence about the other AVs’
algorithms. Either way, this kind of signaling raises questions about how we tell
whose moral commitments are whose in the AV world, and how we ensure trust
and compliance on the road between these different ethical AVs.
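
One way such evidence-gathering might proceed, sketched here with an invented prior, invented likelihoods, and an invented decision threshold, is a simple Bayesian update on whether a nearby vehicle has behaved like a harm minimizer in previous interactions.

```python
# Toy Bayesian inference about another AV's algorithm; the prior, the
# likelihoods, and the coordination threshold are invented assumptions.

P_SELFLESS_PRIOR = 0.5
P_YIELD_GIVEN_SELFLESS = 0.9   # harm minimizers usually yield when it helps
P_YIELD_GIVEN_SELFISH = 0.3    # selfish algorithms yield less often


def update(p_selfless, yielded):
    """One Bayes update on observing whether the other AV yielded."""
    like_selfless = P_YIELD_GIVEN_SELFLESS if yielded else 1 - P_YIELD_GIVEN_SELFLESS
    like_selfish = P_YIELD_GIVEN_SELFISH if yielded else 1 - P_YIELD_GIVEN_SELFISH
    joint = like_selfless * p_selfless
    return joint / (joint + like_selfish * (1 - p_selfless))


observations = [True, True, False, True]   # yielded, yielded, did not, yielded
p = P_SELFLESS_PRIOR
for obs in observations:
    p = update(p, obs)
print(f"posterior probability the other AV is a harm minimizer: {p:.2f}")

# Rely on coordination (e.g., forming a collective brake) only when confidence
# is high enough that the expected benefit outweighs the risk of depending on
# an unwilling partner.
THRESHOLD = 0.8
print("coordinate" if p > THRESHOLD else "do not rely on coordination")
```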

Broad Social Impacts

Our second case is how an AV should distribute risk among more members of
society when interacting with hazardous materials. Consider how an AV might
“interact” with hazardous materials. It might be used to transport them, and it
might find itself in a potential crash scenario with a vehicle transporting them.20

We can expect there to be extra safety precautions for AVs allowed to transport
hazardous materials. Let’s call an AV allowed to carry hazardous materials a
“Hazmat AV.” Some potential safety precautions for Hazmat AVs might include
the following:

• mandatory and specially trained "drivers" or "monitors," at least at the beginning (perhaps as an addendum to the US Department of Transportation's
Hazardous Materials Regulations 49 CFR 173.134, or related regulations in
other jurisdictions);
• tracking mechanisms giving authorities access to the Hazmat AV’s location
and status;
• symbols, markings, or wireless signals to alert other vehicles to be extra cau-
tious, including those that are machine-​interpretable such as a “Hazmat QR
code”; and
• designs allowing the Hazmat AV to make different decisions from ordinary
AVs in dangerous situations (e.g., collisions, weather events, attempted
hijacking).

These possibilities raise questions about how Hazmat AVs should be designed
and regulated. But let’s consider a case where an AV encounters a Hazmat AV
and must decide how to distribute risk among society. Consider the following
example:

Pandemic: An unoccupied AV must swerve to the left or right. If it swerves left, it will probably crash into what it knows to be an unmanned Hazmat AV,
knocking it into a lake. If it swerves right, it will probably crash into a vehicle
containing several passengers. There will be no immediate injuries and deaths
if it hits the unoccupied Hazmat AV, and almost certain deaths and serious
injuries if it hits the other vehicle. So it hits the Hazmat AV. But this causes
a recombinant flu virus21 to be released into the environment, infecting the
surrounding poultry and triggering a pandemic in humans which wipes out
15 percent of the world’s population.

In this case, the AV faces a situation in which the aggregate harm of permitting one kind of collision diverges strongly from the immediate implications
of the collision. This kind of case is again rare but plausible, given that haz-
ardous substances are routinely shipped via road and may be detained, crash,
or even stolen.22 Moreover, incredibly hazardous biological materials may be
transported by accident, such as in 2014, when it was revealed the USDA received
highly pathogenic avian influenza samples that had contaminated its requested
low-pathogenicity samples.23 On the face of it, it seems that, given the extreme stakes, the AV should collide with the passenger vehicle rather than the Hazmat AV.
Consider, then, either of the next two scenarios:

Possible Hazard I: The same as Pandemic, except that this time the AV
determines that, if all the usual safety precautions have been taken, then there’s
still a very small chance that any dangerous materials will leak into the lake
even if the Hazmat AV falls in.
Possible Hazard II: The same as Pandemic, except that this time the AV is
sure that dangerous materials will end up in the lake if it hits the unmanned AV,
but it’s uncertain whether it is a Hazmat AV, or whether the materials it might
carry are dangerous enough to warrant seriously endangering the lives of several people.

It’s much less clear what the AV should do when facing this kind of uncertainty.
One possibility is to design it to do a cost-​benefit analysis and then to act so as to
maximize expected well-​being. But we might also want it to give special priority
to its own passengers, at least, if it has any. And it may be difficult to determine
exactly how it should do the analysis. These are incredibly rare but potentially
high-​impact events. When dealing with probabilities so small and consequences
so large, slight differences in its approach could lead to noticeably different
decision-​making.
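
A crude numerical illustration of this sensitivity follows; every probability and harm magnitude below is invented for the example, not estimated.

```python
# Crude expected-harm comparison for Pandemic / Possible Hazard-style choices.
# Every probability and harm figure here is an invented illustration.

# Option 1: swerve into the occupied vehicle
p_serious_injury = 0.9        # chance each occupant is seriously hurt
harm_per_person = 1.0         # harm units per seriously hurt occupant
occupants = 4
expected_harm_vehicle = p_serious_injury * harm_per_person * occupants  # 3.6

# Option 2: swerve into the (possible) Hazmat AV
p_is_hazmat = 0.5             # Possible Hazard II: not sure it carries hazmat
p_release = 0.01              # Possible Hazard I: small chance containment fails

# The comparison is extremely sensitive to how large the catastrophic outcome
# is taken to be -- the "slight differences in approach" point in the text.
for harm_if_release in (100.0, 1_000.0, 10_000.0):
    expected_harm_hazmat = p_is_hazmat * p_release * harm_if_release
    choice = ("hit the Hazmat AV" if expected_harm_hazmat < expected_harm_vehicle
              else "hit the occupied vehicle")
    print(f"catastrophe = {harm_if_release:>8.0f} harm units -> "
          f"E[harm, hazmat option] = {expected_harm_hazmat:6.1f} -> {choice}")
```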
Importantly, the cost of avoiding these incidents could be high enough to deter
self-​interested firms from responding to them. In the case of Possible Hazard II, it
is foreseeable that a manufacturer could take the time to develop a contingency to
detect a Hazmat AV with very high confidence and make sure that their vehicles
always, or nearly always, respond appropriately. However, the costs in doing so
could be prohibitively high for the manufacturer. Their cars might be slower in
navigating terrain (to allow more time to notice and respond to Hazmat AVs or
similar kinds of threats); or the additional development cost for a firm might reduce its competitiveness. In either case, individual manufacturers have few incentives to respond to Possible Hazard I or II, especially if the probability that one of their vehicles will encounter a Pandemic case is very low.24
We’ve considered cases in which an AV has to decide whether to crash into a
(potential) Hazmat AV. We can also consider cases in which the crash is unavoid-
able, but in which the AV must decide how to crash. For example:

Town or Ocean: A Hazmat AV is transporting a large amount of toxic waste along a road at the edge of the ocean. The AV is a long truck equipped
with symbols and flashing lights to warn other vehicles to keep their distance.
But rain has made the road slippery and the Hazmat AV has started to skid
out of control. A passenger AV rounds a corner to find the truck skidding to-
ward it perpendicular to the road. The passenger AV can either swerve to the
left or to the right. Both maneuvers are expected to put its own passengers at
the same amount of risk. But if it swerves to the left, the truck will likely end up falling off the road into the ocean. And if it swerves to the right, the truck will likely end up crashing into the main street of a small town. The waste is
expected to spill out either way. It will be easier to collect, contain, and dis-
pose of the waste if it ends up in town. But if it ends up there, several people
are likely to die from exposure or from drinking contaminated water. If it
ends up in the ocean, no one will die from it directly. But it will devastate the
ecosystem for hundreds of miles, and the town and a much larger area will
suffer economically and from higher rates of illness for years. If the AV is
able to make this kind of assessment of the situation, what should it do? Or,
supposing the Hazmat vehicle has made the assessment, what should it signal
the AV to do?

There could also be cases in which a passenger AV or its passengers become con-
taminated. For example:

Contaminated AV: A passenger AV arrives at the scene of an accident. Its passengers get out to try to help, find an unconscious injured person, and bring
the person into the AV to take them to the nearest hospital. (Perhaps they’ve
been on the phone with emergency services and that’s what they’ve been told
to do.) But the AV, meanwhile, has received a message from one of the vehicles
in the crash: a Hazmat AV. The passengers learn that the Hazmat AV was carrying a dangerous virus and that no one should leave the scene of the crash for
any reason. Its passengers instruct it to take them to the hospital anyway. What
should it do?

We’ve focused on passenger AVs because we’ve been trying to show that we
may eventually want them to be designed to make wide-​scope risk distribution
decisions that take more than the well-​being of their passengers and other road
users into account. But we have even more reason to want AVs carrying haz-
ardous materials to make good wide-​scope risk distribution decisions, and they’d
face more important ones more often. For example, the routes passenger AVs
choose may determine which neighborhoods have more traffic and noise pollu-
tion and higher accident rates. So the route a passenger AV chooses will deter-
mine how some small risks are distributed among a wider group of people than
those who are on the road at the time. An AV transporting hazardous materials
also has to choose which routes to take. So it also determines how similar risks
of small harms are distributed. But in addition, there’s a very small risk of great
harm to all the neighborhoods it passes through.
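
The following sketch shows what such a wide-scope route decision might look like if it simply summed expected harms over everyone a route exposes; the routes and all figures are invented for illustration.

```python
# Toy route choice that sums expected harms over everyone a route exposes,
# not just road users.  Routes, exposure counts, and risk figures are invented.

ROUTES = {
    # residents exposed to traffic/noise, per-resident harm, crash probability
    "main_street": {"exposed": 5_000, "per_resident": 1e-7, "p_crash": 1e-5},
    "highway":     {"exposed": 200,   "per_resident": 1e-6, "p_crash": 5e-5},
}

CRASH_HARM = 10.0                 # harm units if an ordinary crash occurs
P_RELEASE = 1e-7                  # per-trip chance a hazmat load is released
RELEASE_HARM_PER_EXPOSED = 50.0   # harm units per exposed resident if it is


def expected_harm(name, hazmat):
    r = ROUTES[name]
    harm = r["exposed"] * r["per_resident"] + r["p_crash"] * CRASH_HARM
    if hazmat:
        # a very small risk of great harm to everyone along the route
        harm += P_RELEASE * RELEASE_HARM_PER_EXPOSED * r["exposed"]
    return harm


for hazmat in (False, True):
    best = min(ROUTES, key=lambda name: expected_harm(name, hazmat))
    label = "Hazmat AV" if hazmat else "passenger AV"
    print(f"{label}: route with lowest expected harm is {best}")
```

Under these made-up numbers the passenger AV minimizes expected harm on the main street, while the Hazmat AV minimizes it on the highway, which illustrates why the two kinds of vehicle may warrant different route policies.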

Conclusion

In this chapter, we describe an evolution in thinking around the ethics of AVs. We identify current debates about AVs and describe conceptual, empirical, and metaphilosophical problems that arise with the current focus on trolley-like cases of risk in AVs. We then show two cases in which deeper philosophical inquiries into AV behavior might shed new light on applied problems with the development and deployment of these technologies. Like trolley problems, however, these cases would benefit from close, interdisciplinary collaboration between philosophers and empirical researchers to model and examine them in a range of contexts.

Notes

1. We set aside what precisely counts as "autonomy." See Society of Automotive
Engineers, “J3016B: Taxonomy and Definitions for Terms Related to Driving
Automation Systems for On-​Road Motor Vehicles—​SAE International,” June 15, 2018.
https://​www.sae.org/​standa​rds/​cont​ent/​j3016​_​201​806/​.
2. Barbara H. Fried, “What Does Matter? The Case for Killing the Trolley Problem (or
Letting It Die),” The Philosophical Quarterly 62, no. 248 (July 1, 2012): 505–​29. https://​
doi.org/​10.1111/​j.1467-​9213.2012.00061.x.
3. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double Effect,” in
Virtues and Vices and Other Essays in Moral Philosophy, 19–32 (New York: Oxford University
Press, 1993).
4. Edmond Awad, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich,
Azim Shariff, Jean-​ François Bonnefon, and Iyad Rahwan, “The Moral Machine
Experiment,” Nature 563, no. 7729 (November 2018): 59–​64. https://​doi.org/​10.1038/​
s41​586-​018-​0637-​6.
5. Sven Nyholm and Jilles Smids, “The Ethics of Accident-​Algorithms for Self-​Driving
Cars: An Applied Trolley Problem?,” Ethical Theory and Moral Practice 19, no. 5 (July
2016): 1275–​89. https://​doi.org/​10.1007/​s10​677-​016-​9745-​2; Noah J. Goodall, “Away
from Trolley Problems and Toward Risk Management,” Applied Artificial Intelligence
30, no. 8 (November 2016): 810–​21. https://​doi.org/​10.1080/​08839​514.2016.1229​922.
6. Johannes Himmelreich, “Never Mind the Trolley: The Ethics of Autonomous Vehicles
in Mundane Situations,” Ethical Theory and Moral Practice 21, no. 3 (May 2018): 669–​
84. https://​doi.org/​10.1007/​s10​677-​018-​9896-​4.
7. E.g., Michael Weisberg, Simulation and Similarity: Using Models to Understand the
World (New York: Oxford University Press, 2013).
8. Geoff Keeling, “Why Trolley Problems Matter for the Ethics of Automated Vehicles,”
Science and Engineering Ethics 26, no. 1 (February 1, 2020): 293–​307. https://​doi.org/​
10.1007/​s11​948-​019-​00096-​1.
9. Heather M. Roff, “The Folly of Trolleys: Ethical Challenges and Autonomous
Vehicles,” Brookings, December 17, 2018. https://​www.brooki​ngs.edu/​resea​rch/​the-​
folly-​of-​troll​eys-​ethi​cal-​cha​llen​ges-​and-​aut​onom​ous-​vehic​les/​.
10. Cf. Judith Jarvis Thomson, “The Trolley Problem,” The Yale Law Journal 94, no. 6
(1985): 1395. https://​doi.org/​10.2307/​796​133.
11. Philippa Foot, “The Problem of Abortion and the Doctrine of the Double
Effect,” Oxford Review 5 (1967).
12. Fritz Allhoff, Nicholas Greig Evans, and Adam Henschke, “Not Just Wars: Expansions
and Alternatives to the Just War Tradition,” in The Routledge Handbook of Ethics and
War, edited by Fritz Allhoff, 1–​8 (New York: Routledge, 2013).
13. Lara Buchak, “Why High-​Risk, Non-​Expected-​Utility-​Maximising Gambles Can Be
Rational and Beneficial: The Case of HIV Cure Studies,” Journal of Medical Ethics 43,
no. 2 (February 1, 2017): 90–​95. https://​doi.org/​10.1136/​medeth​ics-​2015-​103​118.
14. A. Bjorndahl, A. J. London, and Kevin J. S. Zollman, “Kantian Decision Making
under Uncertainty: Dignity, Price, and Consistency,” Philosophers Imprint 17, no. 7
(April 2017): 1–​22.
15. Seth Lazar and Chad Lee-Stronach, "Axiological Absolutism and Risk," Noûs 53, no. 1
(March 2019): 97–​113. https://​doi.org/​10.1111/​nous.12210.
16. Pamela Robinson et al., “Modelling Ethical Algorithms in Autonomous Vehicles
Using Crash Data,” IEEE Transactions on Intelligent Transportation Systems (May
2021), doi:10.1109/​TITS.2021.3072792.
17. And they might also be less predictable, depending on the method of developing an
algorithm and its capacity to change over time. Many if not most original equipment
manufacturers—​in the main, standard auto companies—​rely on formal methods to
develop their algorithms. These algorithms are predictable in the sense that their pro-
gram is transparent, and while it is possible to not test them adequately, they are in
principle understandable and predictable. Deep learning algorithms, however, and
in particular the development of algorithms through neural nets, provide behavior
that is interpolated from existing data. They can be very sophisticated but are largely
(though not exclusively, see Kiri L. Wagstaff and Jake Lee, “Interpretable Discovery
in Large Image Data Sets," arXiv:1806.08340 [2018]) opaque in the sense that it
is not possible to know the exact form of the algorithm—​they are sometimes called
“black box” algorithms. In the case of neural nets, emergent conditions could result
in an asymptotic, unpredictable response that diverges strongly from human expecta-
tions or the data set.
18. https://​www.businessinsider.com/​mercedes-​benz-​self-​driving-​cars-​programmed-​
save-​driver-​2016-​10
19. E.g., Charlie Osborne, “Tesla’s Autopilot Takes the Wheel as Driver Suffers Pulmonary
Embolism,” ZDNet. https://​www.zdnet.com/​arti​cle/​tes​las-​autopi​lot-​takes-​the-​
wheel-​as-​dri​ver-​suff​ers-​pulmon​ary-​embol​ism/​.
20. It might also find itself about to crash into a facility handling hazardous materials, but
we won’t discuss this or other possibilities here.
21. See, e.g., Evans, Lipsitch, and Levinson (2016).
22. Lisa Brown, “Truck Carrying Radioactive Material Found after It Was Stolen in
Mexico,” NACCHO, December 6, 2013. https://​www.nac​cho.org/​blog/​artic​les/​truck-​
carry​ing-​radi​oact​ive-​mater​ial-​found-​after-​it-​was-​sto​len-​in-​mex​ico.
23. Centers for Disease Control and Prevention, “Report on the Inadvertent Cross-​
Contamination and Shipment of a Laboratory Specimen with Influenza Virus H5N1,”
Atlanta, GA, August 2014. https://​www.cdc.gov/​labs/​pdf/​InvestigationCDCH5N​
1con​tami​nati​onev​entA​ugus​t15.pdf.
24. The formal demonstration for these kinds of problem, and their ethical significance,
can be found in Lipsitch, Evans, and Cotton-Barratt (2016).
2
Autonomous Vehicles, the Badness
of Death, and Discrimination
David Černý

Introduction

While autonomous vehicles (AVs) promise a number of benefits, introducing them into traffic may also lead to some negative consequences. I will call the benefits "positive factors" and the negative consequences "negative factors." From the ethical point of view, it is important that the positive factors prevail by far over the negative ones, as this makes it possible to postulate the following
thesis regarding the external justification of introducing AVs into road traffic:

External justification of introducing AVs to road traffic (EXT). From the eth-
ical point of view, introducing AVs into everyday traffic is justified by the prev-
alence of positive factors over negative ones.

EXT implies that if positive factors outweigh negative ones, we have good (and
even convincing) reasons not only to introduce AVs into traffic but also to strive
to proceed with that introduction as fast as possible. However, the rapid introduction of AVs into traffic is predicated on the absence of countervailing deontological constraints. Therefore, it is essential to demonstrate that possible deontological constraints regarding, for example, fair rules for distributing harm or the issue of discrimination can be addressed in a principled way. Setting aside problems of a technical character, it is necessary to address the ethical rules by which the use of AVs ought to be governed. With respect to EXT, we can thus say that we have good reasons to solve the problems of the ethical regulation of AV operation, and even that we have good reasons to solve them as soon as possible.
Contemporary normative ethics offers a whole range of ethical systems that
could be used in connection with AVs. But whichever one of them we choose, it
will still hold that AVs will, albeit rarely, find themselves in dilemmatic situations.
I believe that any solution to such situations must meet at least two important requirements:


1. Intuition. It must not fundamentally contradict important moral intuitions of potential AV users.
2. Values. It must accord with basic values grounded in the normative equality of all human beings.

The first requirement plays a greater role in the context of AVs than it does in debates over the role of intuitions in ethical thought. This is because AVs are a modern technology that is not yet present among us. Introducing it will arouse strong emotions, and if AVs were governed by ethical rules that resolve dilemmatic situations in ways that contradict moral intuitions, people might simply refuse to buy them; this would conflict with EXT and its implied requirement of a fast introduction of AVs into traffic.
The other requirement is no less important. From the normative point of view,
all human beings are equal; therefore, if AVs solved the ethical problems they
will encounter in ways contradicting this equality, we would have good reasons
to reject the ethical system in which these decisions are grounded. The principle
of equality would be especially gravely violated by any form of discrimination.
But AVs can evidently find themselves in situations where they will have to distribute harm among the road traffic participants involved in some way. Let us imagine, for example, that an AV must decide between a young man and an older man. Let us further assume that the probability of death is the same in both cases (I am taking the prospective view here, i.e., that the AV will decide based on the prospective consequences given its best knowledge in the situation). How should the AV decide? Randomly? Or perhaps based on age? But if it decides based on age, will it not be a clear instance of discrimination?
In this paper, I will try to show that distributing harm based on age need not
be a kind of discrimination. First, I will present a general definition of direct dis-
crimination. Then I will present the basic contours of the deprivation conception
of the badness of death. Finally, I will apply all of this to the problem of harm dis-
tribution in the context of autonomous traffic.

What Is Discrimination?

The word discrimination comes from the Latin noun discriminatio, which is de-
rived from the verb discriminare (to divide up, separate). At the level of values,
rejecting discrimination is based on the idea that all human beings are equal in
their freedom and rights, as explicitly stated by the first article of the Universal Declaration of Human Rights of 1948: "All human beings are born free and equal in dignity and rights." This equality and dignity prohibit discrimination, as expressed in the following article of the same declaration:

Everyone is entitled to all the rights and freedoms set forth in this Declaration,
without distinction of any kind, such as race, colour, sex, language, religion,
political or other opinion, national or social origin, property, birth or other
status.

This definition provides a fairly good summary of the basic characteristics of a discriminating action; nonetheless, in what follows I will strive to present a more rigorous definition of discrimination, which will allow me to answer the question of whether distinguishing based on age is necessarily, and in all situations, including life-and-death situations, an instance of discrimination.1
Finding a definition for a certain form of human action is no easy task. After all, we are not here in the sphere of mathematics with its clearly defined objects and operations, but in the sphere of human practice, whose richness and complexity defy precise distinctions and definitions. Our own experience shows that the limits of the concept of discrimination are vaguely drawn, and we can find ourselves in situations where we are not certain whether a certain action is an instance of discrimination or not. We will easily
find examples of clear cases of discrimination, but we can just as easily proceed
from the clear cases to the less clear ones, and even to those that are not clear
at all. We would probably all agree that if an employer refuses to employ per-
sons of color, it is a form of discrimination. But is it a form of discrimination
when someone—​X—​refuses to romantically date persons of color? In this case
our intuitions will probably be uncertain; on the one hand, it does not seem to be a form of discrimination, as this belongs to the private sphere of one's own romantic preferences; but on the other hand, we may nevertheless be inclined to think that it is a form of discrimination. After all, X bases her different attitudes toward different persons on color, and she also seems to take a less favorable attitude toward persons of color.
In my search for a general definition of direct discrimination, I will focus
on paradigmatic examples of discrimination. I am aware that I thereby run
the risk that the resulting definition will cover merely a focal meaning of discriminating action and will not encompass its marginal forms. But I believe that this is not much of a problem. If I succeed in expressing the characteristics that make certain forms of action discriminating, I will provide a precise instrument for the debate over whether a certain action is an instance of discrimination or not, even though it does not unambiguously meet all of the definitional elements of such an action. Human practice is too complex to be bound by generally valid definitions, but it is nevertheless open to our
ability to understand and classify. Thus, even if I do not succeed in providing
a definition of discrimination that would unambiguously divide human ac-
tion into two disjunct classes—​discriminating and nondiscriminating—​I still